Joint inference and input optimization in equilibrium networks

Swaminathan Gurumurthy
Carnegie Mellon University

Shaojie Bai
Carnegie Mellon University

Zachary Manchester
Carnegie Mellon University

J. Zico Kolter
Carnegie Mellon University
Bosch Center for AI

Abstract
Many tasks in deep learning involve optimizing over the inputs to a network to minimize or maximize some objective; examples include optimization over latent spaces in a generative model to match a target image, or adversarially perturbing an input to worsen classifier performance. Performing such optimization, however, is traditionally quite costly, as it involves a complete forward and backward pass through the network for each gradient step. In a separate line of work, a recent thread of research has developed the deep equilibrium (DEQ) model, a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer. In this paper, we show that there is a natural synergy between these two settings. Although naively using DEQs for these optimization problems is expensive (owing to the time needed to compute a fixed point for each gradient step), we can leverage the fact that gradient-based optimization can itself be cast as a fixed-point iteration to substantially improve the overall speed. That is, we simultaneously solve for the DEQ fixed point and optimize over the network inputs, all within a single "augmented" DEQ model that jointly encodes both the original network and the optimization process. Indeed, the procedure is fast enough that it allows us to efficiently train DEQ models for tasks traditionally relying on an "inner" optimization loop. We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training, and gradient-based meta-learning.
1 Introduction
Many settings in deep learning involve optimization over the inputs to a network to minimize some desired loss. For example, for a "generator" network $G: \mathcal{Z} \to \mathcal{X}$ that maps from a latent space $\mathcal{Z}$ to an observed space $\mathcal{X}$, it may be desirable to find a latent vector $z \in \mathcal{Z}$ that most closely produces some target output $x \in \mathcal{X}$ by solving the optimization problem (e.g., [10, 13])

$$\underset{z \in \mathcal{Z}}{\text{minimize}}\;\; \|x - G(z)\|_2^2. \qquad (1)$$

As another example, constructing adversarial examples for classifiers [28, 53] typically involves optimizing over a perturbation to a given input; i.e., given a classifier network $g: \mathcal{X} \to \mathcal{Y}$, a task loss $\ell: \mathcal{Y} \to \mathbb{R}_+$, and a sample $x \in \mathcal{X}$, we want to solve

$$\underset{\|\delta\| \le \epsilon}{\text{maximize}}\;\; \ell(g(x + \delta)). \qquad (2)$$

More generally, a wide range of inverse problems [10] and other auxiliary tasks [22, 3] in deep learning can also be formulated in such a manner.

Correspondence to: Swaminathan Gurumurthy <[email protected]>
Code available at https://github.com/locuslab/JIIO-DEQ
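To make the kind of optimization in Eq. (1) concrete, the following sketch inverts a generator by plain gradient descent on the latent code. The generator G, the squared-error objective, and the optimizer settings (SGD, step count, learning rate) are illustrative assumptions, not the specific setup used later in the paper; note that every step requires a full forward and backward pass through G, which is precisely the cost we aim to reduce.

```python
import torch

def invert_generator(G, x_target, latent_dim, n_steps=500, lr=0.1):
    """Approximately solve Eq. (1): minimize ||x_target - G(z)||_2^2 over z.

    G is any differentiable generator mapping (batch, latent_dim) -> X;
    hyperparameters here are placeholders, not tuned values.
    """
    z = torch.zeros(x_target.shape[0], latent_dim, requires_grad=True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = ((x_target - G(z)) ** 2).sum()
        loss.backward()   # full forward/backward pass through G at every step
        opt.step()
    return z.detach()
```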
Orthogonal to this line of work, a recent trend has focused on the use of implicit layers within deep networks to avoid traditional depth. For instance, Bai et al. [5] introduced deep equilibrium models (DEQs), which instead treat the network as repeated applications of a single layer and compute the output of the network as the solution to an equilibrium-finding problem, rather than simply specifying a sequence of nonlinear layer operations. Bai et al. [5] and subsequent work [6] have shown that DEQs can achieve results competitive with traditional deep networks on many realistic tasks.

In this work, we highlight the benefit of using these implicit models in the context of input optimization routines. Specifically, because optimization over inputs is itself typically done via an iterative method (e.g., gradient descent), we can combine this optimization fixed-point iteration with the forward DEQ fixed-point iteration, all within a single "augmented" DEQ model that simultaneously performs forward model inference as well as optimization over the inputs. This enables the models to perform both the inference and optimization procedures more quickly, and the resulting speedups further allow us to train networks that use such "bi-level" fixed-point passes. In addition, we show a close connection between our proposed approach and primal-dual methods for constrained optimization.
We illustrate our methods on four tasks that span different domains and problems: 1) training DEQ-based generative models while optimizing over latent codes; 2) training models for inverse problems such as denoising and inpainting; 3) adversarial training of implicit models; and 4) gradient-based meta-learning. We show that in all cases, performing this simultaneous optimization and forward inference accelerates the process over a more naive inner/outer optimization approach. For instance, the combined approach leads to a 3.5-9x speedup for generative DEQ networks, a 3x speedup in adversarial training of DEQ networks, and a 2.5-3x speedup for gradient-based meta-learning. In total, we believe this work points to a variety of new potential applications for optimization with implicit models.
2 Related Work
Implicit layers. Layers with implicitly defined depth have gained tremendous popularity in recent years [46, 19, 29]. Rather than specifying a static computation graph, these layers define a condition that the output of the model must satisfy; such layers can represent "infinite" depth, can be differentiated through directly via the implicit function theorem [47], and are memory-efficient to train. Some recent examples of implicit layers include optimization layers [16, 1], deep equilibrium models [5, 6, 68, 40, 52], neural ordinary differential equations (ODEs) [14, 18, 61], logical structure learning [67], and continuous generative models [30].
In particular, deep equilibrium models (DEQs) [5] define the output of the model as the fixed point of repeated applications of a layer. They compute this fixed point using black-box root-finding methods [5] or accelerated fixed-point iterations [36] (e.g., Broyden's method [11]). In this work, we propose an efficient approach to performing input optimization with a DEQ by simultaneously optimizing over the inputs and solving the forward fixed point of the equilibrium model as a joint, augmented system. As related work, Jeon et al. [36] introduce fixed-point iteration networks that generalize DEQs to repeated applications of gradient descent over variables. However, they do not address the specific formulation presented in this paper, which has a number of practical use cases (e.g., adversarial training). Lu et al. [52] propose an implicit version of normalizing flows by formulating a joint root-finding problem that defines an invertible function between the input $x$ and output $z^*$. Perhaps the most relevant approach to our work is Gilton et al. [26], which specifically formulates inverse imaging problems as a DEQ model. In contrast, our approach focuses on solving input optimization problems where the network of interest is already a DEQ, and thus the combined optimization and forward inference task leads to a substantially different set of update equations and tradeoffs.
Input optimization in deep learning. Many problems in deep learning can be framed as optimizing over the inputs to minimize some objective. Canonical examples include finding adversarial examples [53, 45], solving inverse problems [10, 13, 56], learning generative models [9, 72], and meta-learning [58, 22, 74, 32]. In most of these examples, input optimization is done using gradient descent on the input: we feed the input through the network, compute some loss, and minimize it by updating the input with gradient steps. While some of these problems might not require differentiating through the entire optimization process, many do (as introduced below), which can further slow down training and impose substantial memory requirements.
Input optimization has recently been applied to train generative models. Zadeh et al. [72] and Bojanowski et al. [9] proposed to train generator networks by jointly optimizing the parameters and the latent variables corresponding to each example. Similarly, optimizing a latent variable so that the corresponding output matches a target image is common in decoder-only models like GANs to obtain correspondences [10, 39], and has been found useful for stabilizing GAN training [71]. However, in all of these cases, the input is optimized for only a few (often just one) iterations. In this work, we present a generative model in which we find the optimal latent code for each image at each training step. Additionally, Bora et al. [10] and Chang et al. [13] showed that one can take a pretrained generative model and use it as a prior for solving inverse problems by optimizing over the input space of the generative model (i.e., unsupervised inverse problem solving). Furthermore, Diamond et al. [15], Gilton et al. [25], and Gregor and LeCun [31] have shown that networks can also be trained to solve specific inverse problems by effectively unrolling the optimization procedure and iteratively updating the input. We demonstrate our approach in the unsupervised setting as in Bora et al. [10] and Chang et al. [13], but also show that our framework extends flexibly to training implicit models for supervised inverse problem solving.
Another crucial application of input optimization is finding adversarial examples [64, 28]. This manifests as optimizing an objective that incentivizes an incorrect prediction by the classifier, while constraining the input to lie within a bounded region around the original input. Many attempts have been made on the defense side [57, 37, 65, 69]. The most successful strategy thus far has been adversarial training with a projected gradient descent (PGD) adversary [53], which involves training the network on adversarial examples computed with PGD online during training. We show that our joint optimization approach can be easily applied to this setting, allowing us to train implicit models that perform competitively with PGD in guaranteeing adversarial robustness, but at much faster speeds.
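For reference, a minimal $\ell_\infty$ PGD attack of the kind used in adversarial training [53] can be sketched as follows; the radius, step size, iteration count, and cross-entropy objective are illustrative placeholders rather than the settings used in our experiments.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, n_steps=10):
    """Maximize the classification loss within an l-infinity ball of radius
    eps around x (cf. Eq. 2); hyperparameters are illustrative."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()                # ascent step on the loss
            delta.clamp_(-eps, eps)                     # project onto the eps-ball
            delta.copy_((x + delta).clamp(0, 1) - x)    # keep x + delta a valid image
    return (x + delta).detach()
```

When the model is a DEQ, each of these steps requires a full fixed-point solve, which is exactly the repeated cost our joint approach avoids.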
While the examples above were illustrated with non-convex networks, attempts have also been made to design networks whose output is a convex function of the input [2]. This allows one to use more sophisticated optimization algorithms, but usually at a heavy cost in model capacity. Such networks have been demonstrated on a variety of problems, including multi-label prediction and image completion [2], learning stable dynamical systems [44], optimal transport mappings [54], and MPC [12].
3 Joint inference and input optimization in DEQs
Here we present our main methodological contribution, which sets up an augmented DEQ that jointly performs inference and input optimization over an existing DEQ model. We first define the base DEQ model, and then illustrate a joint approach that simultaneously finds its forward fixed point and optimizes over its inputs. We then discuss several methodological details and extensions.
3.1 Preliminaries: DEQ-based models
To begin with, we recall the deep equilibrium model setting from Bai et al. [5], but with the notation slightly adapted to better align with its usage in this paper. Specifically, we consider an input-injected layer $f_\theta: \mathcal{Z} \times \mathcal{X} \to \mathcal{Z}$, where $\mathcal{Z}$ denotes the hidden state space of the network, $\mathcal{X}$ denotes the input space, and $\theta$ denotes the parameters of the layer. Given an input $x \in \mathcal{X}$, computing the forward pass in a DEQ model involves finding a fixed point $z^*_\theta(x) \in \mathcal{Z}$ such that

$$z^*_\theta(x) = f_\theta(z^*_\theta(x), x), \qquad (3)$$

which (under proper stability conditions) corresponds to the "infinite depth" limit of repeatedly applying the function $f_\theta$. We emphasize that under this setting, we can effectively think of $z^*_\theta$ itself as the implicitly defined network (which is thus also parameterized by $\theta$), and one can differentiate through this "network" via the implicit function theorem [8, 47].
The fixed point of a DEQ could be computed via the simple forward iteration

$$z^+ := f_\theta(z, x), \qquad (4)$$

starting at some arbitrary initial value of $z$ (typically 0). In practice, however, DEQ models typically compute this fixed point not by simply iterating the function $f_\theta$, but by using a more accelerated root-finding or fixed-point approach such as Broyden's method [11] or Anderson acceleration [4, 66]. Further, although little can be said in general about, e.g., the existence or uniqueness of these fixed points (though there do exist restrictive settings where this is possible [68, 59, 23]), in practice a wide suite of techniques has been used to ensure that such fixed points exist, can be found using relatively few function evaluations, and are able to competitively model large-scale tasks [5, 6].
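As a point of reference, a minimal forward pass of the kind described by Eqs. (3)-(4) can be sketched as below; the plain fixed-point loop, tolerance, and iteration cap stand in for the accelerated Broyden/Anderson solvers used in practice, and f denotes any input-injected layer.

```python
import torch

@torch.no_grad()
def deq_forward(f, x, z_init, max_iter=50, tol=1e-4):
    """Solve z* = f(z*, x) by naive fixed-point iteration (cf. Eqs. 3-4).

    Practical DEQ implementations replace this loop with Broyden's method or
    Anderson acceleration; z_init is typically a zero tensor of the hidden
    state's shape. Gradients are attached afterwards via the implicit
    function theorem rather than by backpropagating through this loop.
    """
    z = z_init
    for _ in range(max_iter):
        z_next = f(z, x)
        if (z_next - z).norm() <= tol * (z.norm() + 1e-8):  # relative residual
            return z_next
        z = z_next
    return z
```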
3.2 Joint inference and input optimization
Now we consider the setting of performing input optimization for such a DEQ model. Specifically, consider the task of optimizing the input $x \in \mathcal{X}$ to minimize some loss $\ell: \mathcal{Z} \times \mathcal{Y} \to \mathbb{R}_+$,

$$\underset{x \in \mathcal{X}}{\text{minimize}}\;\; \ell(z^*_\theta(x), y), \qquad (5)$$

where $y \in \mathcal{Y}$ represents the data point. To solve this, we typically perform such an optimization via, e.g., gradient descent, which repeats the update

$$x^+ := x - \alpha \frac{\partial \ell(z^*_\theta(x), y)}{\partial x}^\top \qquad (6)$$

until convergence, where we use the term $z^*$ alone to denote the fixed output of the network $z^*_\theta(x)$ (i.e., just as a fixed output rather than a function). Using the chain rule and the implicit function theorem, we can further expand update (6) using the following analytical expression of the gradient:
$$\frac{\partial \ell(z^*_\theta(x), y)}{\partial x} = \frac{\partial \ell(z^*, y)}{\partial z^*}\,\frac{\partial z^*_\theta(x)}{\partial x} = \frac{\partial \ell(z^*, y)}{\partial z^*}\left(I - \frac{\partial f_\theta(z^*, x)}{\partial z^*}\right)^{-1}\frac{\partial f_\theta(z^*, x)}{\partial x} \qquad (7)$$
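Concretely, a naive realization of updates (6)-(7) would re-solve the forward fixed point at every input step and then form the gradient in Eq. (7) with vector-Jacobian products. The sketch below computes just that gradient, using a simple fixed-point iteration in place of an explicit matrix inverse; the helper names, iteration count, and use of autograd for the vector-Jacobian products are illustrative assumptions.

```python
import torch

def input_grad_via_ift(f, loss_fn, x, y, z_star, n_backward_iter=30):
    """Evaluate d loss(z*(x), y) / dx as in Eq. (7), given the solved fixed
    point z_star. The row vector u = (dl/dz*)(I - df/dz*)^{-1} is obtained by
    iterating u <- dl/dz* + u (df/dz*) with vector-Jacobian products, which
    converges when the fixed point is stable; the result is then u (df/dx)."""
    z_star = z_star.detach().requires_grad_()
    x = x.detach().requires_grad_()
    f_val = f(z_star, x)                       # build the graph of f once
    dl_dz, = torch.autograd.grad(loss_fn(z_star, y), z_star)
    u = dl_dz.clone()
    for _ in range(n_backward_iter):
        uJ, = torch.autograd.grad(f_val, z_star, grad_outputs=u, retain_graph=True)
        u = dl_dz + uJ
    grad_x, = torch.autograd.grad(f_val, x, grad_outputs=u)
    return grad_x
```

Each outer step on $x$ would call a routine like this after freshly solving for $z^*$, which is exactly the repeated cost that the joint update below avoids.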
Thinking about $z^*_\theta$ as an implicit function of $x$ permits us to combine the fixed-point equation in Eq. 4 (on $z$) with this input optimization update (on $x$), thus performing a joint forward update:

$$\begin{bmatrix} z^+ \\ x^+ \end{bmatrix} := \begin{bmatrix} f_\theta(z, x) \\ x - \alpha \frac{\partial f_\theta(z, x)}{\partial x}^\top\left(I - \frac{\partial f_\theta(z, x)}{\partial z}\right)^{-\top}\frac{\partial \ell(z, y)}{\partial z}^\top \end{bmatrix}$$
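One possible (and deliberately simplified) rendering of this augmented iteration is sketched below: $z$ follows the layer update, an auxiliary vector $u$ tracks the term $(\partial \ell/\partial z)(I - \partial f_\theta/\partial z)^{-1}$ via vector-Jacobian products, and $x$ takes a gradient-style step using $u$. The unaccelerated loop, the step size, the iteration budget, and the helper names are illustrative; a practical implementation would hand the whole augmented system to an accelerated solver such as Anderson acceleration or Broyden's method, as discussed above.

```python
import torch

def joint_inference_input_opt(f, loss_fn, x0, y, z0, alpha=0.05, max_iter=200):
    """Simultaneously iterate the DEQ fixed point (on z) and a gradient-style
    update (on x), in the spirit of the augmented update above. All solver
    choices here are illustrative simplifications."""
    z, x, u = z0.clone(), x0.clone(), torch.zeros_like(z0)
    for _ in range(max_iter):
        z_req = z.detach().requires_grad_()
        x_req = x.detach().requires_grad_()
        f_val = f(z_req, x_req)
        dl_dz, = torch.autograd.grad(loss_fn(z_req, y), z_req)
        # u <- dl/dz + u (df/dz): fixed-point form of u = dl/dz (I - df/dz)^{-1}
        uJ_z, = torch.autograd.grad(f_val, z_req, grad_outputs=u, retain_graph=True)
        # x <- x - alpha * u (df/dx): gradient-style step on the input
        uJ_x, = torch.autograd.grad(f_val, x_req, grad_outputs=u)
        z = f_val.detach()
        x = (x_req - alpha * uJ_x).detach()
        u = (dl_dz + uJ_z).detach()
    return z, x
```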