# Suppression of Neutron Background using Deep Neural Network and Fourier Frequency Analysis at the KOTO Experiment

arXiv:2309.12063 | http://arxiv.org/abs/2309.12063v1
Authors: Y. -C. Tung, J. Li, Y. B. Hsiung, C. Lin, H. Nanjo, T. Nomura, J. C. Redeker, N. Shimizu, S. Shinohara, K. Shiomi, Y. W. Wah, T. Yamanaka
Published: 2023-09-21
###### Abstract
We present two analysis techniques for distinguishing background events induced by neutrons from photon signal events in the search for the rare \(K^{0}_{L}\to\pi^{0}\nu\bar{\nu}\) decay at the J-PARC KOTO experiment. These techniques employed a deep convolutional neural network and Fourier frequency analysis to discriminate neutrons from photons, based on their variations in cluster shape and pulse shape, in the electromagnetic calorimeter made of undoped CsI. The results effectively suppressed the neutron background by a factor of \(5.6\times 10^{5}\), while maintaining the efficiency of \(K^{0}_{L}\to\pi^{0}\nu\bar{\nu}\) at 70%.
Journal: Nuclear Instruments and Methods in Physics Research Section A
## 1 Introduction
The KOTO experiment at J-PARC was designed to search for the rare decay of \(K^{0}_{L}\to\pi^{0}\nu\bar{\nu}\), which has a theoretical branching ratio of \(\mathcal{B}_{\rm SM}(K^{0}_{L}\to\pi^{0}\nu\bar{\nu})=(3.00\pm 0.30)\times 10^{-11}\) in the standard model [1]. The current result on the \(K^{0}_{L}\to\pi^{0}\nu\bar{\nu}\) measurement is an experimental upper limit on the branching ratio, which is \(\mathcal{B}_{\rm EXP}(K^{0}_{L}\to\pi^{0}\nu\bar{\nu})<3.0\times 10^{-9}\) at the 90% confidence level set by KOTO [2; 3]. KOTO utilized the high-intensity 30 GeV proton beam incident on a gold target to produce secondary particles, and the secondary neutral particles, including kaons, were guided to the KOTO detector by two sets of collimators [4].

The only visible products in the \(K^{0}_{L}\to\pi^{0}\nu\bar{\nu}\) decay are two photons from the subsequent decay of \(\pi^{0}\to\gamma\gamma\). Therefore, a \(K^{0}_{L}\to\pi^{0}\nu\bar{\nu}\) event is identified by two photons detected in a Cesium Iodide crystal calorimeter (CSI) [5].

One of the dominant background sources in the search for \(K^{0}_{L}\to\pi^{0}\nu\bar{\nu}\) was the beam-halo neutron. These neutrons could present a similar event signature with two photon-like hits in the CSI. A halo neutron background event was caused by a single halo neutron particle that interacted inside the CSI and produced two photon-like hits. Typically, the first hit occurred near the neutron's incident point on the CSI, while the second hit, produced by the same neutron after the scattering process, was separated from the first hit by some distance. These two hits in the CSI with no trace in between could be mistaken as the two isolated photon hits from \(K^{0}_{L}\to\pi^{0}\nu\bar{\nu}\).
Suppressing the halo neutron background relied on distinguishing the interaction footprints of photon and neutron hits in the CSI, characterized by differences in the incident particle's cluster shape and pulse shape.
The CSI consisted of 2716 Cesium Iodide (CsI) crystals arranged in a grid format, with each crystal having its own individual readout. When a particle interacted with the CSI, the analog pulse shape in each crystal was digitized and recorded, and the cluster shape was formed by grouping nearby CsI crystals with energy deposited by the incident particle. Discrimination between photons and neutrons was based on variations in cluster shape and pulse shape. Although the cluster shape discrimination [5] and the pulse shape discrimination [6] had been previously studied at KOTO, in this article we introduce two new techniques: using a Convolutional Neural Network (CNN) [8] to classify cluster shapes (Section 3) and Fourier frequency analysis to discriminate pulse shapes (Section 4). In addition, we introduce a more precise method for estimating the neutron background level in Section 5. These techniques were first introduced in the \(K^{0}_{L}\to\pi^{0}\nu\bar{\nu}\) analysis of the data obtained in the years 2016-2018 [3], and the result further suppressed KOTO's dominant background source, the halo neutron, by a factor of 26 over the previous \(K^{0}_{L}\to\pi^{0}\nu\bar{\nu}\) analysis of 2015 data [2]. This article provides a detailed explanation of how neutron background events were suppressed and estimated in the 2016-2018 data analysis, which was not presented in the previous publications of KOTO.
## 2 CsI detector
Figure 1 shows a cross-sectional view of the KOTO detector, where the coordinate origin is defined at the entrance of the detector. The CSI is located at \(z=6.1\) m, the downstream end of the decay volume. It consists of 2716 undoped CsI crystals, covering a circular area with a radius of 90 cm and with a square hole for the beam to pass through, as shown in Fig. 2. The CSI has two different sizes of crystals. Crystals with a size of 2.5\(\times\)2.5\(\times\)50 cm\({}^{3}\) are located in the central 120\(\times\)120 cm\({}^{2}\) square region, and others with a size of \(5.0\times 5.0\times 50\) cm\({}^{3}\) are situated in the outer area. The small crystals are viewed by 3/4 inch Hamamatsu R5364 PMTs, while the large crystals are viewed by 1.5 inch Hamamatsu R5330 PMTs. The analog pulses from each PMT are digitized and recorded using custom-made 14-bit 125-MHz ADC modules [7]. For each event, 64 samples of voltages are recorded every 8 ns. In order to achieve a better timing resolution of the pulse, the analog pulse signal is reshaped before digitization using a 10-pole Bessel filter. This Bessel filter is used to widen and transform the PMT pulse into a Gaussian shape to increase the number of sampling points in the pulse rising edge. The 14-bit dynamic range of the ADC covers energy deposits from sub-MeV to 2 GeV, with an energy deposit of 1 MeV resulting in a pulse height of 8-10 ADC counts. Further details on the CSI can be found in Ref. [5].
## 3 Cluster shape discrimination
### Cluster shapes and data set
Cluster shape discrimination (CSD) is a method to distinguish photons from neutrons based on their different characteristics in cluster shape. These differences arose from the underlying interaction processes: electromagnetic showers for photons and hadronic interactions for neutrons. For a pulse in a crystal, the energy was calculated by integrating the ADC values of the digitized waveform, and the pulse timing was defined as the time at which the waveform crossed half of its peak height. These measurements of energy and timing from each crystal in the CSI grid formed the cluster's energy and timing distributions (shapes), which can be considered as images of particle interactions in the CSI. Figure 3 shows an illustration of the energy and timing shapes of photon and neutron clusters. Typically, photon clusters produced through electromagnetic interactions were more circular and symmetric in shape. In general, the crystals closer to the incident point tend to have higher deposited energy due to the short radiation length of CsI. If photons have finite incident angles, crystals viewing the tail of the shower tend to have earlier timing, as they have energy deposits deeper in the crystal (closer to the PMT). On the other hand, neutron clusters produced through hadronic interactions had more asymmetrical energy and timing shapes.
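As an illustration of these per-crystal measurements, the sketch below integrates a digitized waveform for the energy and finds the half-peak-height crossing time on the rising edge by linear interpolation. This is our own illustrative code with invented names and placeholder calibration, not KOTO's production software.

```python
import numpy as np

def pulse_energy_and_timing(adc, dt_ns=8.0, baseline=0.0):
    """Integrate a digitized waveform for energy and find the time at which
    the rising edge crosses half of the peak height (illustrative sketch:
    baseline handling and calibration are placeholders)."""
    wf = np.asarray(adc, dtype=float) - baseline
    energy = wf.sum()                      # proportional to deposited energy
    peak = int(wf.argmax())
    half = wf[peak] / 2.0
    # walk back from the peak to the last sample below half height
    i = peak
    while i > 0 and wf[i - 1] >= half:
        i -= 1
    if i == 0:
        t = 0.0
    else:
        # linear interpolation between samples i-1 and i
        frac = (half - wf[i - 1]) / (wf[i] - wf[i - 1])
        t = (i - 1 + frac) * dt_ns
    return energy, t
```

With 64 samples spaced 8 ns apart, as described in Section 2, `dt_ns=8.0` converts the interpolated sample index into nanoseconds.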
In the CSD study, to optimize the CSD for selecting photons with the energy and angle spectra of the \(K_{L}^{0}\rightarrow\pi^{0}\nu\bar{\nu}\) decays, the photon samples were obtained from \(K_{L}^{0}\rightarrow\pi^{0}\nu\bar{\nu}\) Monte Carlo (MC) events, generated using Geant4 simulations. To reflect actual beam activities and electronic noise, the MC events were overlaid with accidental data collected during data-taking. The photon samples were first selected by requiring
Figure 1: Cross-sectional view of the KOTO detector, with the beam entering from the left. The detector components with their names underlined represent charged-particle veto counters. The other components, except for CSI, serve as photon veto counters.
Figure 2: Eight-fold symmetrical layout of the CSI calorimeter viewed from downstream; this symmetry allows a cluster at any location on the CSI surface to be mirrored and folded to an equivalent location within the angular region of \(0^{\circ}\leq\phi\leq 45^{\circ}\).
two coincident clusters in the CSI and no in-time hits in other detector components. The decay vertex (\(Z_{vtx}\)) and transverse momentum (\(P_{T}\)) of \(\pi^{0}\) were then reconstructed by assuming the \(\pi^{0}\) decay point to lie along the beam axis and the two clusters to have the invariant mass of \(\pi^{0}\). The \(K^{0}_{L}\rightarrow\pi^{0}\nu\bar{\nu}\) events were selected by requiring the \(Z_{vtx}\) to be within the fiducial decay region of 3200-5000 mm. Additionally, the \(P_{T}\) was required to be in the range of 130-250 MeV/\(c\), to account for the missing momentum carried by the two neutrinos.
Neutron samples were obtained from special neutron data-taking runs conducted in 2016-2018. During these runs, an aluminum plate was placed at the detector entrance to scatter neutrons in the beam and enhance halo neutron events. The neutron data was collected by requiring two coincident clusters in the CSI and no hits in the major veto detectors. This sample was dominated by the events with a single neutron particle scattering within the CSI and producing two clusters. The neutron events were processed through the same reconstruction and selection procedures for the \(K^{0}_{L}\rightarrow\pi^{0}\nu\bar{\nu}\) event candidates.
### CNN and its network architecture
In the CSD study, a CNN was employed to differentiate between photon and neutron clusters based on their energy and timing shapes. The CNN is a popular deep-learning architecture used for image classification tasks. The input layer of the network consisted of the images of photon and neutron clusters, which were then processed through ten hidden layers, including four convolutional and six dense layers, as illustrated in Fig. 4.
The convolutional layers scanned each block of \(3\times 3\) image pixels to identify local features of the cluster's energy and timing images using 32 filters. Each filter was a tensor of \(3\times 3\times 2\) weights and an offset bias. The output of the fourth convolutional layer, along with the incident particle's energy (\(E\)) and direction (\(\theta\), \(\phi\)) as additional inputs, was processed by six dense layers. The dense layers were fully connected between two adjacent layers, with 2048 neurons each, to learn a non-linear function to produce the final output.
During network training, the neuron weights were calculated through the minimization of binary cross-entropy [9], which served as the loss function. This loss function provides an indication of the network model's performance based on the deviations between the values predicted by the model and the true values. To prevent overtraining, the common L2 regularization [10] and dropout [11] techniques were applied to the network model. The L2 regularization, with a hyperparameter of \(\lambda=0.001\), added a penalty term to the loss function at every layer of the model, making the neuron weights less sensitive to the training data. A dropout layer with a dropout rate of 10% was inserted between dense layers. During the training, the dropout layer randomly deactivated 10% of the neurons, promoting the network to learn multiple representations of the data. These hyperparameters were fine-tuned to ensure the model's results were consistent between training and test samples. Finally, the output layer produced a probability distribution as the CSD score, with values closer to 1 indicating photon-like and values closer to 0 indicating neutron-like.
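The loss described above can be written compactly. The sketch below (our own notation, not the actual training code) combines binary cross-entropy with an L2 weight penalty using \(\lambda=0.001\):

```python
import numpy as np

def bce_with_l2(y_true, y_pred, weights, lam=0.001):
    """Binary cross-entropy plus an L2 penalty on the network weights
    (illustrative sketch; variable names and layout are ours)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), 1e-12, 1 - 1e-12)
    bce = -np.mean(y_true * np.log(y_pred)
                   + (1 - y_true) * np.log(1 - y_pred))
    l2 = lam * sum(np.sum(w ** 2) for w in weights)
    return bce + l2
```

The clipping of `y_pred` avoids taking the logarithm of zero for saturated predictions.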
### Training details
The input cluster images were generated from the cluster shape displayed in the CSI grid. Each pixel in the cluster image contained the energy and timing measured by the corresponding CsI crystal. There were three categories of cluster images: clusters with small crystals only (Type-I), clusters with large crystals only (Type-II), and clusters with both types of crystals (Type-III). In each category, an equal number of photon and neutron images were used for the network training and were divided into three separate sets: training, validation, and test data. The training set was used for the network to learn and adjust the weights and biases of the model. The validation set was used to evaluate the model's performance during the training process. The test set was used as a final evaluation of the performance of the trained model. The ratio of data in the three sets was \(4:1:4\).
Figure 4: Architecture of the CSD Neural Network with four convolutional layers and six dense layers designed for the classification of photon and neutron cluster patterns.
Figure 3: Example of the energy and timing shapes of photon cluster from the Monte Carlo simulation (left) and neutron cluster from data (right). The color code represents the deposited energy in MeV and the timing in nanoseconds for each crystal in the cluster.
The sizes of the cluster images in the Type-I and Type-II categories were \(16\times 16\) and \(12\times 12\) pixels, respectively. The Type-III cluster was treated like a Type-I cluster by dividing each large crystal into four small crystals, each containing \(1/4\) of the energy. However, an additional layer was added to each pixel in the Type-III cluster image to indicate the crystal type. To achieve optimal performance, the CNN was trained separately on each of these three cluster categories.
The network was provided with additional inputs: the incident particle's direction (\(\theta,\phi\)) in spherical coordinates with the origin set at the reconstructed \(Z_{vtx}\). Here, \(\phi\) is the azimuthal angle of the cluster in the CSI surface plane, and \(\theta\) is the incident angle of the particle to the CSI surface, with \(\theta=0^{\circ}\) pointing along the beam axis direction. To simplify the input images and account for the eight-fold symmetrical layout of the CSI, the cluster images were mirrored and folded into the range of \(\phi=[0^{\circ},45^{\circ}]\), as shown in Fig. 2. To allow the network to recognize cluster patterns from different directions, the cluster images in the training and validation sets were duplicated by transposing the cluster image pixels with \((x,y)\rightarrow(y,x)\). Moreover, each neutron cluster image in the training and validation samples was duplicated by randomly assigning its supplied input of incident angle \(\theta\). This data augmentation technique enabled the network to identify neutron clusters from a more generalized perspective, regardless of the reconstructed \(Z_{vtx}\).
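The eight-fold folding of \(\phi\) and the transpose augmentation can be sketched with two small helper functions (our own illustrative code, not KOTO's):

```python
import numpy as np

def fold_phi(phi_deg):
    """Map an azimuthal angle to the equivalent angle in [0, 45] degrees,
    exploiting the eight-fold mirror symmetry of the CSI layout (sketch)."""
    phi = phi_deg % 360.0
    phi = phi % 90.0          # four-fold rotational symmetry
    if phi > 45.0:            # mirror across the 45-degree diagonal
        phi = 90.0 - phi
    return phi

def transpose_augment(img):
    """Duplicate a cluster image by swapping pixel coordinates (x, y) -> (y, x)."""
    return np.swapaxes(np.asarray(img), 0, 1)
```

For a 2D image, `transpose_augment` is simply the matrix transpose; `np.swapaxes` is used so the same call also works when extra per-pixel channels (energy, timing, crystal type) are stacked along a trailing axis.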
### Training results
As mentioned in Section 3.2, the training of the CSD network was optimized to prevent overfitting. Our results indicated that the CSD performed consistently on the training, validation, and test sets, with slightly better performance on the test set. This was due to the fact that the data augmentation was only applied to the training and validation samples. The consistency between the data and MC simulations was verified by using the photon clusters from the \(K_{L}^{0}\rightarrow\pi^{0}\pi^{0}\pi^{0}\) events. The results showed that the CSD score of the data can be accurately reproduced by the MC simulations, as shown in Fig. 5. This indicates that the CSD method is reliable in distinguishing between photon and neutron clusters in actual data.
The performance of the CSD algorithm is presented as the acceptance of photon clusters versus the acceptance of neutron clusters at various thresholds on the CSD score, as shown in Fig. 6. In general, the CSD had a higher discriminating power for higher-energy clusters, as larger cluster images contained more information on the cluster pattern features. After imposing veto and kinematic selection criteria for the \(K_{L}^{0}\rightarrow\pi^{0}\nu\bar{\nu}\) events, the average energies of clusters from the neutron data samples and \(K_{L}^{0}\rightarrow\pi^{0}\nu\bar{\nu}\) MC events were found to be similar, around 550 MeV. The results based on neutron data and \(K_{L}^{0}\rightarrow\pi^{0}\nu\bar{\nu}\) MC events showed that the CSD algorithm effectively suppressed neutron clusters by a factor of 150, while maintaining a 90% acceptance for photon clusters.
## 4 Pulse shape discrimination
### Pulse shapes and data set
The pulse shape discrimination (PSD) is a method to distinguish between photon and neutron particles, based on their distinctive pulse shapes in the CSI. Neutron-induced pulses through hadronic interactions have a longer tail compared to those produced by photons through electromagnetic interactions, as shown in Fig. 7.
The intrinsic differences in the detector response between photon and neutron interactions were studied using photon and neutron samples in data. Photon samples were obtained from the data with six coincident clusters in the CSI and no in-time hits in major veto detectors. This sample was primarily dominated by \(K_{L}^{0}\rightarrow\pi^{0}\pi^{0}\pi^{0}\) events. For the neutron samples, the same data set described in Section 3.1 was used.
### Discrimination method and results
In this study, the Discrete Fourier Transformation (DFT) was used to extract the differences between the neutron and photon pulses in the frequency domain. For a given CSI pulse, the DFT was applied to the ADC values of \(N_{s}=28\) samples: \(\mathcal{H}^{n}=\{\mathcal{H}^{0},\mathcal{H}^{1},...,\mathcal{H}^{N_{s}-1}\}\), where \(\mathcal{H}^{i}\) is the ADC value of the \(i^{th}\) sample. The first sample \(i=0\) was chosen such that \(\mathcal{H}^{10}\) aligned with the pulse peak. The DFT transformed \(\mathcal{H}^{n}\) into a sequence of 28 complex numbers (\(X_{k}\)) using the equation defined as:
\[X_{k}=\sum_{n=0}^{N_{s}-1}\mathcal{H}^{n}\mathrm{exp}\left(-\frac{i2\pi k}{N_{s}}n\right), \tag{1}\]
where the complex number \(X_{k}\) encodes both amplitude and phase information for the complex sinusoid at the frequency of \(2\pi k/N_{s}\). The tail of the CSI pulse was represented by the amplitudes (\(\mathcal{A}_{k}\coloneqq|X_{k}|\)) of the lower-frequency sinusoids. In this analysis, the amplitudes of the lowest five frequency sinusoids,
Figure 5: Distribution of the CSD score of photon clusters from \(K_{L}^{0}\rightarrow\pi^{0}\pi^{0}\pi^{0}\) data (dots) and MC simulation (histogram).
\(\mathcal{A}_{k}=\{\mathcal{A}_{0},\mathcal{A}_{1},\mathcal{A}_{2},\mathcal{A}_{3}, \mathcal{A}_{4}\}\), were used to create templates for photons and neutrons, as shown in Fig. 8. To account for the pulse shape variations among crystals and energies, templates were created for each CsI crystal and for 20 bins in \(\mathcal{H}^{10}\) between \(5.5<\log_{2}(\mathcal{H}^{10})<14\). Each template includes five sets of \(\bar{\mathcal{A}}_{k}\pm\sigma_{k}\), where \(\bar{\mathcal{A}}_{k}\) and \(\sigma_{k}\) are the average and the standard deviation of \(\mathcal{A}_{k}\), respectively.
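Since Eq. (1) matches the standard forward-DFT convention, the five lowest-frequency amplitudes can be obtained directly with an FFT. A minimal sketch (our own, not the production code):

```python
import numpy as np

def lowest_five_amplitudes(samples):
    """Fourier amplitudes A_k = |X_k| of the five lowest-frequency
    sinusoids of a digitized pulse, following Eq. (1)."""
    h = np.asarray(samples, dtype=float)
    X = np.fft.fft(h)          # X_k = sum_n h_n exp(-i 2 pi k n / N_s)
    return np.abs(X[:5])
```

A flat pulse contributes only to \(\mathcal{A}_{0}\), while a slowly varying tail shows up in \(\mathcal{A}_{1}\)-\(\mathcal{A}_{4}\), which is what separates the longer neutron-induced pulses from photon pulses.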
To determine whether a given cluster is more photon-like or neutron-like, the likelihood of being either case was first calculated for each crystal contained in the cluster, which was defined as
\[\mathcal{L}^{\gamma,n}_{\rm crystal}=\prod_{k=0}^{4}\frac{1}{\sqrt{2\pi} \sigma_{k}^{\gamma,n}}\exp\left[-\frac{1}{2}\left(\frac{\mathcal{A}_{k}-\bar{ \mathcal{A}}_{k}^{\gamma,n}}{\sigma_{k}^{\gamma,n}}\right)^{2}\right], \tag{2}\]
where \(\mathcal{A}_{k}\) are the Fourier amplitudes of a crystal in the cluster, and \(\bar{\mathcal{A}}_{k}^{\gamma,n}\) and \(\sigma_{k}^{\gamma,n}\) are the photon or neutron template values for that crystal. The likelihood of the cluster for being photon-like (\(\mathcal{L}^{\gamma}_{\rm cluster}\)) or neutron-like (\(\mathcal{L}^{n}_{\rm cluster}\)) was then calculated by multiplying the likelihoods of each crystal in the cluster as
\[\mathcal{L}^{\gamma,n}_{\rm cluster}=\prod_{c=1}^{N_{c}}\mathcal{L}^{\gamma,n}_{\rm crystal}, \tag{3}\]
where \(N_{c}\) is the total number of crystals in the cluster.
With the photon and neutron likelihood values of a given cluster, the final likelihood ratio \(\mathcal{R}\), or PSD score, was calculated as
\[\mathcal{R}=\frac{\mathcal{L}^{\gamma}_{\rm cluster}}{\mathcal{L}^{\gamma}_{ \rm cluster}+\mathcal{L}^{n}_{\rm cluster}}. \tag{4}\]
The value of \(\mathcal{R}\) is between 0 and 1, with \(\mathcal{R}\) closer to 1 being photon-like and \(\mathcal{R}\) closer to 0 being neutron-like.
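Putting Eqs. (2)-(4) together for one cluster, the PSD score can be sketched as follows. The container layout (a list of per-crystal amplitude arrays and a list of per-crystal `(mean, sigma)` template pairs) is our own invention for illustration, not KOTO's data format:

```python
import numpy as np

def psd_score(amps, templ_gamma, templ_n):
    """Likelihood ratio R of Eq. (4) for one cluster.
    amps: per-crystal arrays of amplitudes A_k (k = 0..4);
    templ_*: per-crystal (mean, sigma) template pairs (sketch)."""
    def cluster_likelihood(templ):
        L = 1.0
        for a, (mean, sig) in zip(amps, templ):   # product over crystals, Eq. (3)
            a = np.asarray(a, dtype=float)
            mean = np.asarray(mean, dtype=float)
            sig = np.asarray(sig, dtype=float)
            g = (np.exp(-0.5 * ((a - mean) / sig) ** 2)
                 / (np.sqrt(2.0 * np.pi) * sig))
            L *= np.prod(g)                       # product over k, Eq. (2)
        return L
    Lg = cluster_likelihood(templ_gamma)
    Ln = cluster_likelihood(templ_n)
    return Lg / (Lg + Ln)
```

In practice the products of many small Gaussian factors underflow quickly, so a real implementation would sum log-likelihoods instead; the direct product is kept here to mirror the equations.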
Figure 8: Templates of photon (blue dots) and neutron (red triangles) pulses for CsI crystal ID=1013.
Figure 6: Performance of CSD presented as the acceptance of photon versus neutron clusters at different discrimination thresholds; the top figure illustrates the energy dependence, and the bottom figure displays the dependence on incident angle. The solid line in both figures represents the average performance across the energy and incident angle spectrum of neutron data and \(K^{0}_{L}\to\pi^{0}\nu\bar{\nu}\) MC events.
Figure 7: Average pulse shape of photon samples (blue dots) and neutron samples (red triangles) for CsI crystal ID=1013.
The performance of PSD is presented as the acceptance of photon clusters versus the acceptance of neutron clusters for different discrimination thresholds on the PSD score, as shown in Fig. 9. The results indicate that the PSD is more effective in discriminating high-energy clusters. To evaluate the realistic performance in differentiating between neutron clusters and photon clusters in the \(K^{0}_{L}\rightarrow\pi^{0}\nu\bar{\nu}\) analysis, the energy spectrum of photon clusters in the \(K^{0}_{L}\rightarrow\pi^{0}\pi^{0}\pi^{0}\) data events was weighted to match that of clusters from \(K^{0}_{L}\rightarrow\pi^{0}\nu\bar{\nu}\) MC events. It was evaluated that the PSD suppressed neutron clusters by a factor of 6.5 while maintaining a 90% acceptance for photon clusters from the \(K^{0}_{L}\rightarrow\pi^{0}\nu\bar{\nu}\) events. In comparison to the method described in [6], the DFT technique was approximately twice as effective in suppressing neutron clusters.
## 5 Combined performance of CSD and PSD
The combined effectiveness of the CSD and PSD in suppressing neutron background events in the \(K^{0}_{L}\rightarrow\pi^{0}\nu\bar{\nu}\) analysis was evaluated using an event-weighted method. This approach first calculated the survival probability (\(\mathcal{W}\)) of individual neutron clusters under the combined rejections of the CSD and PSD. To take into account the energy (\(E\)) and incident angle (\(\theta\)) dependence of the CSD and PSD effectiveness, \(\mathcal{W}\) was derived as a function of \(E\) and \(\theta\) based on a large sample of neutron clusters. \(\mathcal{W}(E,\theta)\) was then used to assign event weights to neutron events subjected to the CSD and PSD rejections. For a neutron event with two clusters, the event's survival probability was calculated as the product of the individual cluster's \(\mathcal{W}\) values: \(\mathcal{W}_{1}(E_{1},\theta_{1})\times\mathcal{W}_{2}(E_{2},\theta_{2})\). To estimate the remaining neutron events in a certain region after the CSD and PSD rejections, the event-weighted method simply summed the survival probability of all events in that region, which gives
\[\mathcal{N}^{\rm Est}=\sum_{i}\mathcal{W}^{i}_{1}(E_{1},\theta_{1})\times \mathcal{W}^{i}_{2}(E_{2},\theta_{2}), \tag{5}\]
where \(i\) runs over the events in that region before the CSD and PSD rejections, and \(\mathcal{N}^{\rm Est}\) is the estimated number of neutron events that may remain after the CSD and PSD rejections.
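Equation (5) amounts to a weighted sum over events. A minimal sketch, with a hypothetical survival-probability function \(\mathcal{W}(E,\theta)\) passed in as a callable:

```python
def estimate_remaining(events, W):
    """Event-weighted background estimate of Eq. (5).
    events: iterable of ((E1, theta1), (E2, theta2)) cluster pairs;
    W: survival probability W(E, theta) of a single cluster (sketch)."""
    return sum(W(E1, t1) * W(E2, t2) for (E1, t1), (E2, t2) in events)
```

Because every event contributes its full survival probability rather than a pass/fail outcome, the estimate retains statistical power even when very few events would survive the actual cuts.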
In this study, the neutron events in the Wide Signal Box (WSB) region were used for evaluating the effectiveness of the CSD and PSD in suppressing neutron background events. The WSB region was defined as \(2900<Z_{vtx}<5100\) mm and \(120<P_{T}<260\) MeV/\(c\), as indicated in Fig. 10.
After imposing the \(K^{0}_{L}\rightarrow\pi^{0}\nu\bar{\nu}\) event selection criteria without CSD and PSD to the neutron data, there were 5973 neutron events (\(\mathcal{N}^{\rm Total}\)) in the WSB region. Based on these events, the estimated (\(\mathcal{N}^{\rm Est}\)) and observed (\(\mathcal{N}^{\rm Obs}\)) number of neutron events, and the \(K^{0}_{L}\rightarrow\pi^{0}\nu\bar{\nu}\) efficiency (\(\mathcal{E}^{\rm Sig}\)) after further imposing the CSD and PSD rejections with different thresholds are summarized in Table 1. With loose thresholds on the CSD and PSD scores, the results indicate that the number of observed neutron events (\(\mathcal{N}^{\rm Obs}\)) could be accurately predicted by \(\mathcal{N}^{\rm Est}\), demonstrating the reliability of the event-weighted method. In the \(K^{0}_{L}\rightarrow\pi^{0}\nu\bar{\nu}\) analysis of 2016-2018 data, the thresholds on the CSD and PSD scores were set at 0.985 and 0.5, respectively. Under these thresholds, the event-weighted method predicted \(0.0106\pm 0.0002\) remaining neutron events in the WSB region, while no events were actually observed. The acceptance of neutron events against the CSD and PSD rejections (\(\mathcal{R}^{\rm CSD+PSD}\)) was then calculated to be
\[\mathcal{R}^{\rm CSD+PSD}=\frac{\mathcal{N}^{\rm Est}}{\mathcal{N}^{\rm Total }}=(1.77\pm 0.03)\times 10^{-6},\]
Figure 10: \(Z_{vtx}\) vs. \(P_{T}\) distributions of the \(K^{0}_{L}\rightarrow\pi^{0}\nu\bar{\nu}\) MC events (dots) and neutron data events (contour). The dash-lined region indicates the signal region of \(K^{0}_{L}\rightarrow\pi^{0}\nu\bar{\nu}\) used in the 2016–2018 data analysis, and the solid-lined rectangular box indicates the Wide Signal Box (WSB) region used for estimating the performance of the combined CSD and PSD discriminations.
Figure 9: Performance of PSD as the acceptance of photon clusters versus the acceptance of neutron clusters at different discrimination thresholds on PSD score and in different cluster energy regions. The solid line represents the performance of PSD with the cluster energy spectrum of \(K^{0}_{L}\rightarrow\pi^{0}\nu\bar{\nu}\).
which corresponds to a suppression factor of \(5.6\times 10^{5}\) on the neutron background events. The efficiency of detecting \(K^{0}_{L}\to\pi^{0}\nu\bar{\nu}\) under the same thresholds was determined to be 69.9%.
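The quoted suppression factor is simply the inverse of this acceptance; a quick arithmetic check of the numbers above:

```python
# Consistency check of the quoted acceptance and suppression factor
n_est, n_total = 0.0106, 5973          # numbers quoted in the text
acceptance = n_est / n_total           # should be ~1.77e-6
suppression = 1.0 / acceptance         # should be ~5.6e5
```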
## 6 Conclusion
We developed two analysis techniques to distinguish between photon and neutron events in the undoped CsI calorimeter of KOTO. These methods were based on the distinct characteristics of their cluster shapes displayed on the CSI grid and pulse shapes in each CsI crystal. We employed a convolutional neural network to classify the cluster shapes and Fourier frequency analysis to differentiate between the waveform shapes. The performance of the discrimination was estimated through an event-weighted method. As a result, we suppressed the neutron background events by a factor of \(5.6\times 10^{5}\), while maintaining the acceptance of \(K^{0}_{L}\to\pi^{0}\nu\bar{\nu}\) at 69.9%.
## Acknowledgement
We would like to express our gratitude to all members of the J-PARC Accelerator and Hadron Experimental Facility groups for their support. We also thank the KEK Computing Research Center for KEKCC, the National Institute of Informatics for SINET4, and the University of Chicago Computational Institute for the GPU farm. This material is based upon work supported by the Ministry of Education, Culture, Sports, Science, and Technology (MEXT) of Japan and the Japan Society for the Promotion of Science (JSPS) under KAKENHI Grant Number JP16H06343 and through the Japan-U.S. Cooperative Research Program in High Energy Physics; the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award No. DE-SC0009798; the National Science and Technology Council (NSTC) and Ministry of Education (MOE) in Taiwan, under Grants No. NSTC-111-2112-M-002-036, NSTC-110-2112-M-002-020, MOE-111L894805, and MOE-111L104079 through National Taiwan University. We would also like to thank K. Kotera for the critical reading and suggestions for this article.
# A hybrid approach for solving the gravitational \(N\)-body problem with Artificial Neural Networks

arXiv:2310.20398 | http://arxiv.org/abs/2310.20398v1
Authors: Veronica Saz Ulibarrena, Philipp Horn, Simon Portegies Zwart, Elena Sellentin, Barry Koren, Maxwell X. Cai
Published: 2023-10-31
###### Abstract
Simulating the evolution of the gravitational \(N\)-body problem becomes extremely computationally expensive as \(N\) increases since the problem complexity scales quadratically with the number of bodies. In order to alleviate this problem, we study the use of Artificial Neural Networks (ANNs) to replace expensive parts of the integration of planetary systems. Neural networks that include physical knowledge have rapidly grown in popularity in the last few years, although few attempts have been made to use them to speed up the simulation of the motion of celestial bodies. For this purpose, we study the advantages and limitations of using Hamiltonian Neural Networks to replace computationally expensive parts of the numerical simulation of planetary systems, focusing on realistic configurations found in astrophysics. We compare the results of the numerical integration of a planetary system with asteroids with those obtained by a Hamiltonian Neural Network and a conventional Deep Neural Network, with special attention to understanding the challenges of this specific problem. Due to the non-linear nature of the gravitational equations of motion, errors in the integration propagate, which may lead to divergence from the reference solution. To increase the robustness of a method that uses neural networks, we propose a hybrid integrator that evaluates the prediction of the network and replaces it with the numerical solution if considered inaccurate.
Hamiltonian Neural Networks can make predictions that resemble the behavior of symplectic integrators but are challenging to train and in our case fail when the inputs differ \(\sim\)7 orders of magnitude. In contrast, Deep Neural Networks are easy to train but fail to conserve energy, leading to fast divergence from the reference solution. The hybrid integrator designed to include the neural networks increases the reliability of the method and prevents large energy errors without increasing the computing cost significantly. For the problem at hand, the use of neural networks results in faster simulations when the number of asteroids is \(\gtrsim\)70.
keywords: Machine Learning, Gravitational N-body problem, Numerical integrator, Planetary systems, Physics-aware Neural Networks, Hybrid method
## 1 Introduction
Planetary systems are a special case of the gravitational \(N\)-body problem in which a massive central star is orbited by multiple minor bodies, including planets and asteroids. To model the evolution of planetary systems, it is necessary to know the gravitational interaction between the different bodies, which can be calculated using the equations derived by Newton. Unlike the gravitational force itself, the equations of motion can only be solved analytically for two bodies, using the relations derived by Kepler in 1609. This means that when the system consists of three or more bodies, the equations need to be solved numerically with what we call the integrator. Hermite and Verlet integrators are frequently used for solving the general \(N\)-body problem, whereas others, such as the Wisdom-Holman integrator [5], have been developed for the specific case of planetary systems.
Currently, the study of the evolution of \(N\)-body systems is limited by the large computational resources required to obtain an accurate1 solution (Beckl, 1992; Beckl, 1993). Newton's equation of gravitation implies that the computational complexity of the problem scales with \(N^{2}\). As a consequence, for multiple applications in astrophysics such as the evolution of globular clusters or asteroids around a star (Beckl, 1992), the large number of bodies in the system is one of the main reasons for the high computational cost.
Footnote 1: Accurate refers to solutions with low energy error.
Machine Learning (ML) is a tool with the potential to ameliorate this problem (Beckl, 1992). Although the applications of ML, or more precisely Artificial Neural Networks (ANNs), are scarce for the gravitational \(N\)-body problem (Beren et al., 2012; Beren et al., 2012), ANNs have recently demonstrated their potential in other fields (Beren et al., 2013; Beren et al., 2014; Beren et al., 2015). We study the efficiency of neural networks to replace computationally expensive parts of the integration of \(N\)-body systems for astrophysics applications.
Some studies have been carried out to apply ANNs to the two- and three-body gravitational problems to predict the future state of the system. For example, Breen et al. [15] in 2020 designed a Deep Neural Network (DNN) to replace the integration of the chaotic three-body problem. Their setup consists of three coplanar bodies of equal mass with zero initial velocity, whose state is propagated in time using the arbitrarily precise Brutus integrator developed by Boekholt and Portegies Zwart in 2015. The ANN receives as inputs the state of the particles at the initial time \(t_{0}\) and the simulation time \(t\). In this simplified approach, the network is able to capture the complex motions of the three bodies at a fraction of the computational expense.
Since the introduction of Physics-Informed Neural Networks (PINNs) in 2019 Beren et al. (2019), the popularity of neural networks with physics knowledge included has grown rapidly (Beren et al., 2019; Beren et al., 2019). The claim is that the introduction of physical properties into the neural network allows for better predictions, better extrapolation capabilities, and less training data. So far, PINNs
have not been applied to astrophysics problems. Following the idea of introducing physical knowledge into the neural network, in 2019 Greydanus et al. [20] developed Hamiltonian Neural Networks (HNNs) to embed Hamiltonian mechanics within the network's architecture. To study the performance of their network, they use the gravitational two- and three-body problems as test cases. For the two-body problem, Greydanus et al. found that the HNN can predict the trajectories of the particles better than a baseline DNN. However, for the three-body problem, both networks fail to predict the trajectories. An alternative to HNNs, called Generating Function Neural Networks (GFNNs), was developed by Chen and Tao in 2021 [21]. They tested this approach on the two-body problem with similar inputs as in Greydanus et al. [20]. The comparison with other types of neural networks, such as HNNs and SympNets [19], shows that GFNNs outperform the other methods for this particular test case. Although the results of Greydanus et al. [20] and Chen and Tao [21] are promising, both references take the two- and three-body problems as test cases to demonstrate the performance of their neural networks. It is not yet certain that the introduction of physics into the neural network represents an advantage for more complicated problem configurations. For that reason, we study the advantages and disadvantages of HNNs when applied to more realistic astrophysics problems, in particular, the orbital evolution of celestial bodies.
We study the use of neural networks for the integration of planetary systems formed by two planets and up to 2,000 asteroids. Due to the popularity of physics-aware neural networks, we compare the results of direct numerical integration with the predictions of a network that includes physical knowledge (HNN) and a conventional Deep Neural Network (DNN). In subsection 2.3, we discuss the setup of a hybrid integrator that uses the neural network but switches to direct numerical integration when the former fails to produce sufficiently reliable answers. This method is faster than the classical integration and more accurate than the neural network alone. In section 3, we discuss the hyperparameter selection and the training results of the neural networks. In subsection 4.2, we quantify the performance improvement provided by the neural networks in terms of computation time as a function of the number of asteroids in the system, and in subsection 4.3 we show the results of integrating a planetary system. The code is publicly available at [https://github.com/veronicasaz/PlanetarySystem_HNN](https://github.com/veronicasaz/PlanetarySystem_HNN).
## 2 Methodology
### Numerical integration
We consider a system of \(N\) point masses interacting only via their Newtonian gravitational force. The gravitational force exerted on a body \(i\), can be written as a function of mass (\(m\)), position vector (\(\vec{q}\)), and the universal gravitational constant (\(G\)) as
\[m_{i}\;\frac{d^{2}\vec{q}_{i}}{dt^{2}}=\sum_{j=0,\;j\neq i}^{N-1}G\;\frac{m_{i }m_{j}}{||\vec{q}_{ij}||^{3}}\vec{q}_{ij},\qquad\vec{q}_{ij}=\vec{q}_{j}- \vec{q}_{i}, \tag{1}\]
where the indices \(i\) and \(j\) denote the celestial bodies.
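Equation (1) can be evaluated with a direct double loop over all body pairs, which makes the quadratic scaling explicit. A minimal NumPy sketch (the function name is ours, not from the paper's code):

```python
import numpy as np

def accelerations(m, q, G=6.674e-11):
    """Direct evaluation of Eq. (1): O(N^2) pairwise sums.

    m : (N,) array of masses; q : (N, 3) array of position vectors.
    Returns the (N, 3) array of accelerations d^2 q_i / dt^2.
    """
    N = len(m)
    a = np.zeros_like(q)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            q_ij = q[j] - q[i]  # vector from body i to body j
            a[i] += G * m[j] * q_ij / np.linalg.norm(q_ij) ** 3
    return a
```

For \(N\) bodies the double loop performs \(N(N-1)\) force evaluations, which is the quadratic cost that motivates the surrogates studied in this paper.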
Knowing the acceleration vector, the state of the system can be evolved in time using an integrator. Wisdom and Holman in 1991 [5] proposed a symplectic integrator for systems in which one body is much more massive than the others. In our case, we assume that the smaller bodies orbit this massive one and the barycenter of the system is located approximately at the center of the massive body. The other bodies orbit the barycenter in almost Keplerian trajectories.
The Hamiltonian of the system is given by
\[\mathcal{H}=\sum_{i=0}^{N-1}\frac{||\vec{p}_{i}||^{2}}{2m_{i}}-G\sum_{i=0}^{N -2}m_{i}\sum_{j=i+1}^{N-1}\ \frac{m_{j}}{||\vec{q}_{j}-\vec{q}_{i}||}, \tag{2}\]
where \(\vec{p}\) represents the linear momentum vector.
For planetary systems, Equation (2) can be split into two parts. Due to the assumption of the Sun being at the barycenter, \(i=0\) is excluded from the following equations. The first part, the Keplerian part,
\[\mathcal{H}_{\text{Kepler}}=\sum_{i=1}^{N-1}\frac{||\vec{p}_{i}||^{2}}{2m_{i} }-Gm_{0}\sum_{j=1}^{N-1}\ \frac{m_{j}}{||\vec{q}_{j}||}, \tag{3}\]
contains the terms related to the kinetic energy of the bodies and the potential energy due to the central body (body 0). The second part, called the interactive part,
\[\mathcal{H}_{\text{inter}}=-G\sum_{i=1}^{N-2}m_{i}\sum_{j=i+1}^{N-1}\ \frac{m_{j}}{||\vec{q}_{j}-\vec{q}_{i}||}, \tag{4}\]
contains the terms with the potential energy due to the mutual interaction between the orbiting bodies.
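The splitting of Equations (3) and (4) can be written compactly; the sketch below (our own illustration, in heliocentric coordinates with body 0 as the central mass) returns both parts of the Hamiltonian:

```python
import numpy as np

def split_hamiltonian(m, q, p, G=1.0):
    """Keplerian and interactive parts of Eqs. (3) and (4).

    Body 0 is the central mass; i = 0 is excluded from the sums,
    following the barycenter assumption in the text.
    """
    N = len(m)
    # Eq. (3): kinetic energy + potential due to the central body
    kinetic = sum(p[i] @ p[i] / (2.0 * m[i]) for i in range(1, N))
    h_kepler = kinetic - G * m[0] * sum(m[j] / np.linalg.norm(q[j])
                                        for j in range(1, N))
    # Eq. (4): mutual potential of the orbiting bodies only
    h_inter = -G * sum(m[i] * m[j] / np.linalg.norm(q[j] - q[i])
                       for i in range(1, N - 1) for j in range(i + 1, N))
    return h_kepler, h_inter
```

Note that `h_kepler` costs \(O(N)\) while `h_inter` costs \(O(N^2)\), matching the scaling argument made below for the two parts of the integrator.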
The Wisdom-Holman (WH) integrator first propagates the trajectory of the orbiting bodies without taking their mutual interaction into account by performing a Keplerian propagation around the central body. After that, the perturbing acceleration is calculated and converted to a correction of the velocity.
Although Equations (3) and (4) are expressed in heliocentric coordinates for clarity, WH's integrator uses Jacobian coordinates for parts of its integration, as explained in Wisdom and Holman [5].
The computing time of the Keplerian propagation scales linearly with the number of bodies (\(N\)) as seen in Equation (3). In contrast, the interactive part (Equation (4)) scales quadratically with the number of bodies. Therefore, it is interesting to find methods to speed up the latter. We use ANNs to replace the interactive part to speed up the calculation of the mutual perturbations.
### Neural Network surrogates
In examples such as Breen et al. [15] and Greydanus et al. [20], a neural network is used to replace the integrator. However, this approach falls short for many astrophysics applications. For example, for the case of a planetary system, the force exerted by the central body is orders of magnitude larger than the mutual forces exerted by the orbiting bodies. If a neural network is used to predict the future state of the system, it will fail to capture the smaller contributions of the planets. We propose a method in which the neural network is integrated into the numerical integration without losing information about the perturbations. We do so by calculating the Keplerian Hamiltonian \(\mathcal{H}_{\text{Kepler}}\) analytically and the interactive Hamiltonian using a neural network \(\mathcal{H}_{\text{inter}}\).
For systems in which energy is conserved, Hamiltonian Neural Networks (HNNs) constitute an attractive choice since the Hamiltonian of the system can be input as a physical constraint into its architecture. We therefore use HNNs to predict the interactive part of Equation (2) similarly to the Neural Interacting Hamiltonian (NIH) designed by Cai et al. [22]. We study the advantages and disadvantages of HNNs by comparing them to the numerical integration, which we consider the baseline, and to a conventional Deep Neural Network (DNN).
An HNN [20] receives as inputs the position and linear momentum of all the bodies in the system and outputs the Hamiltonian of the system. In Figure 1 we show a comparison of an uninformed neural network (DNN) and an HNN.
With the output of the HNN and automatic differentiation, the derivatives of the inputs are calculated using Hamilton's equations:
\[-\frac{\partial\mathcal{H}}{\partial\vec{q}}=\frac{d\vec{p}}{dt},\qquad \qquad\frac{\partial\mathcal{H}}{\partial\vec{p}}=\frac{d\vec{q}}{dt}. \tag{5}\]
The derivatives are then used to compute the loss function during the training of the network.
Figure 1: Schematic of (a) a Deep Neural Network and (b) a Hamiltonian Neural Network, which predict the derivatives of the inputs with respect to time and the Hamiltonian of the system, respectively. The inputs for both are the position and linear momentum of all objects in the system.
Unlike in Greydanus et al. [20], we use the neural networks for the calculation of the interactive Hamiltonian as expressed in Equation (4). This Hamiltonian is only a function of the masses and positions of the different bodies, and the universal gravitational constant. This means that the neural networks from Figure 1 can be simplified by eliminating the linear momentum from the inputs. Since the acceleration requires knowing the masses of the system, the inputs then become:
\[X=[m_{1},\vec{q}_{1},m_{2},\vec{q}_{2},...,m_{N},\vec{q}_{N}]. \tag{6}\]
Similarly, the outputs of the DNN are now reduced to the derivatives of the linear momentum with respect to time. By doing this, we achieve a substantial reduction in the number of parameters of the network. We will explain whether the symplectic structure of the integrator is conserved when using neural networks for the calculation of the interactive Hamiltonian in subsection 2.4.
Taking into account this modification of the set of inputs and outputs, the loss function \(\mathcal{L}\) for the HNN is the difference between the acceleration calculated using Newton's equation and the one obtained from differentiating the output of the HNN using Equation (5):
\[\mathcal{L}_{\text{HNN}}(\theta)=\frac{1}{M}\sum_{i=1}^{M}\left(-\frac{ \partial\mathcal{H}_{\text{pred}}}{\partial\vec{q}}-\frac{d\vec{p}}{dt} \right)^{2}. \tag{7}\]
In Equation (7), \(\theta\) represents the trainable parameters of the network, \(\mathcal{H}_{\text{pred}}\) is the output of the HNN, and the gradients of \(\mathcal{H}\) are obtained using automatic differentiation. \(M\) is the number of samples over which the loss is evaluated.
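The idea behind Equation (7) is that \(-\partial\mathcal{H}/\partial\vec{q}\) should reproduce the Newtonian \(d\vec{p}/dt\). The sketch below illustrates this for the exact two-body interactive Hamiltonian, using central finite differences as a stand-in for the automatic differentiation used in the paper (function names are ours):

```python
import numpy as np

def h_inter(q, m, G=1.0):
    """Interactive Hamiltonian of Eq. (4) for two orbiting bodies."""
    return -G * m[0] * m[1] / np.linalg.norm(q[0] - q[1])

def neg_grad_h(q, m, eps=1e-6):
    """-dH/dq by central differences (stand-in for autodiff)."""
    g = np.zeros_like(q)
    for idx in np.ndindex(*q.shape):
        dq = np.zeros_like(q)
        dq[idx] = eps
        g[idx] = -(h_inter(q + dq, m) - h_inter(q - dq, m)) / (2.0 * eps)
    return g

def hnn_loss(pred_dpdt, true_dpdt):
    """Mean squared error of Eq. (7)."""
    return np.mean((pred_dpdt - true_dpdt) ** 2)
```

For the exact Hamiltonian the loss vanishes; during training, the HNN output replaces `h_inter` and the loss drives \(-\partial\mathcal{H}_{\text{pred}}/\partial\vec{q}\) toward the Newtonian accelerations.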
For the DNN, the inputs are the same as for the HNN and the derivatives of the inputs with respect to time are the outputs of the neural network. Therefore, the loss function is written as:
\[\mathcal{L}_{\text{DNN}}(\theta)=\frac{1}{M}\sum_{i=1}^{M}\left(\left[\frac{d \vec{p}}{dt}\right]_{\text{pred}}-\frac{d\vec{p}}{dt}\right)^{2}. \tag{8}\]
### Hybrid numerical method
The use of neural networks to replace parts of the integration raises several challenges. Firstly, neural networks cannot be expected to be as accurate as the numerical calculation: the use of ANNs implies a loss in accuracy with the goal of improving computing speed. Secondly, since integration is a repetitive process in which the output of one time step is used as the input for the next one, errors propagate in time. In non-linear systems, this may quickly lead to unphysical solutions. In previous research trying to solve the \(N\)-body problem using neural networks, it is common to propagate over short time scales. This implies that the accumulation of errors is not relevant, but does not constitute a realistic case for astrophysics problems. To address this problem, we develop a hybrid method in which the prediction of the neural network is evaluated and replaced by the numerical solution if considered insufficiently accurate.
Evaluating the accuracy of the prediction is not straightforward, since we want to avoid using Newton's equation. Therefore, we use as a measure of accuracy the fact that accelerations should be fairly smooth in time. We evaluate the prediction of the network by comparing it to the prediction of the previous time step. Since the perturbations are expected to be rather smooth, we assume that a large difference between the acceleration predicted by the network at time \(t_{0}+\Delta t\) and the acceleration at \(t_{0}\) is an indication of either a poor prediction or a region with rapid changes in the acceleration. In both cases, it is beneficial to calculate those steps numerically instead of relying on the neural network. Although accelerations are expected to vary smoothly in time, by using a numerical time integrator we need to account for the discretization error when setting the tolerance \(R\) for this smoothness criterion. From now on, we use the term "flag" when the prediction of the network is not accepted. We calculate the acceleration \(\vec{a}\,^{(t)}\) at time \(t=t_{0}+\Delta t\) by numerical integration if
\[\frac{||\vec{a}\,^{(t_{0})}-\vec{a}\,^{(t)}||}{||\vec{a}\,^{(t_{0})}||+ \varepsilon}>R. \tag{9}\]
This criterion represents the relative difference between the previous acceleration and the current one. The addition of \(\varepsilon=1\times 10^{-11}\) prevents the denominator from becoming zero. We adopt \(R=0.3\) to achieve an accurate reproduction of the trajectory, whereas higher values result in larger energy errors, as we show in Appendix B. The value of \(R\) should be chosen according to the specifications of the problem at hand. If computational speed is more important than accuracy, higher values of \(R\) can be chosen, whereas if the focus is on accuracy, \(R\) should be smaller. A value of \(0.3\) represents a strict case in which ensuring accuracy is considered more important than achieving a low computational cost.
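The flagging criterion of Equation (9) reduces to a one-line check; a minimal sketch (the function name is ours):

```python
import numpy as np

def flag_prediction(a_prev, a_new, R=0.3, eps=1e-11):
    """Smoothness criterion of Eq. (9).

    Returns True when the network's prediction is rejected, i.e. the
    relative change of the acceleration exceeds the tolerance R, so the
    step must be recomputed numerically.
    """
    rel = np.linalg.norm(a_prev - a_new) / (np.linalg.norm(a_prev) + eps)
    return rel > R
```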
In Figure 2 we show the schematic diagram of the hybrid WH integrator. At time \(t_{0}\), the state of each body is propagated a time step \(\Delta t\), assuming that the particle is on a Keplerian trajectory. Afterward, the neural network (\(f_{\text{NN}}\)) calculates the perturbing accelerations for the given inputs (\(X\)). This prediction is evaluated and if considered insufficiently smooth according to the criterion defined in Equation (9), the accelerations are recalculated analytically. The perturbing accelerations are then converted into corrections in the velocity and the new state of the system is subsequently used as the starting point for the next time step.
### Symplecticity of the integrator
The original Wisdom-Holman integrator is a symplectic integrator [5]. Using a symplectic integrator is essential for long-term stability and energy conservation of Hamiltonian systems [23]. Some attempts have been made to conserve the symplectic structure of the integrator when using neural networks, such as in Zhu et al. [24]. When using neural networks as surrogates for the calculation of the interactive part in the new hybrid integrator, it is beneficial to have the same symplectic structure as the original Wisdom-Holman integrator.
The first part in the hybrid Wisdom-Holman integrator is the Keplerian propagation, which is the flow map of a Hamiltonian system and therefore a symplectic map. In the second part, the linear momentum vector is updated with accelerations either calculated by an ANN or using Newton's equation for the interactive part. In this step, the positions are
always kept unchanged. If this update forms a symplectic map, the whole hybrid integrator becomes symplectic since concatenations of symplectic maps are again symplectic.
The update of the linear momentum vector only depends on the positions and not on the current momenta. So, the left-hand side of the symplectic condition
\[J_{\vec{f}}^{T}\begin{pmatrix}0&I\\ -I&0\end{pmatrix}J_{\vec{f}}=\begin{pmatrix}0&I\\ -I&0\end{pmatrix} \tag{10}\]
of the Jacobian matrix \(J_{\vec{f}}\) of a map \(\vec{f}(\vec{p},\vec{q})\) simplifies to

\[J_{\vec{f}}^{T}\begin{pmatrix}0&I\\ -I&0\end{pmatrix}J_{\vec{f}}=\begin{pmatrix}I&0\\ \Delta t\left(\frac{\partial\vec{a}}{\partial\vec{q}}\right)^{T}&I\end{pmatrix}\begin{pmatrix}0&I\\ -I&0\end{pmatrix}\begin{pmatrix}I&\Delta t\,\frac{\partial\vec{a}}{\partial\vec{q}}\\ 0&I\end{pmatrix} \tag{11}\]
\[=\begin{pmatrix}I&0\\ \Delta t\left(\frac{\partial\vec{a}}{\partial\vec{q}}\right)^{T}&I\end{pmatrix}\begin{pmatrix}0&I\\ -I&-\Delta t\,\frac{\partial\vec{a}}{\partial\vec{q}}\end{pmatrix} \tag{12}\]
\[=\begin{pmatrix}0&I\\ -I&\Delta t\left(\left(\frac{\partial\vec{a}}{\partial\vec{q}}\right)^{T}-\frac{\partial\vec{a}}{\partial\vec{q}}\right)\end{pmatrix}. \tag{13}\]

Figure 2: Schematic of the hybrid Wisdom-Holman integration. The state of the system defined by \(\vec{p}\) and \(\vec{q}\) is propagated as a Keplerian trajectory. Then the Neural Network predicts the mutual perturbation between bodies. If the prediction is insufficiently smooth, the accelerations are calculated numerically using Equation (1). Finally, these accelerations are added as a change in linear momentum (\(\Delta\vec{p}\)). This process is repeated in the next time steps.
This implies that the Jacobian matrix of the calculated accelerations, \(\frac{\partial\vec{a}}{\partial\vec{q}}\), has to be symmetric.
If the accelerations are calculated using Newton's equation or using an HNN, they are the gradient of a scalar function, the Hamiltonian. This means that the Jacobian matrix of the accelerations is the Hessian matrix of the Hamiltonian, which is symmetric for continuous second derivatives. However, if a DNN is used in the hybrid integrator, no such statement can be made and the Jacobian matrix of the predicted accelerations can be non-symmetric.
Therefore, we can expect the energy-preserving characteristics of symplectic integrators to be present when using HNNs within the WH integrator but not when including DNNs. This result is investigated numerically in subsection 4.3.
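The symmetry condition derived above can be verified numerically. The sketch below (our own illustration, on a one-dimensional toy system) builds the finite-difference Jacobian of the Newtonian \(d\vec{p}/dt\) with respect to the positions, which is minus the Hessian of the Hamiltonian and therefore symmetric:

```python
import numpy as np

def dpdt(q, m, G=1.0):
    """Newtonian dp/dt for N bodies on a line (1-D toy problem)."""
    N = len(m)
    F = np.zeros(N)
    for i in range(N):
        for j in range(N):
            if i != j:
                d = q[j] - q[i]
                F[i] += G * m[i] * m[j] * d / abs(d) ** 3
    return F

def force_jacobian(q, m, eps=1e-6):
    """Finite-difference Jacobian of dp/dt with respect to q."""
    N = len(q)
    J = np.zeros((N, N))
    for k in range(N):
        dq = np.zeros(N)
        dq[k] = eps
        J[:, k] = (dpdt(q + dq, m) - dpdt(q - dq, m)) / (2.0 * eps)
    return J
```

A DNN's predicted accelerations carry no such structure, so the analogous Jacobian of its outputs is generally non-symmetric, which is why the DNN-based hybrid integrator loses symplecticity.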
### Problem setup
We study two cases: the first with Jupiter and Saturn orbiting the Sun, and the second with a large number of asteroids added to the first, as illustrated in Figure 3.
For the first case in which only the Sun, Jupiter, and Saturn are studied (from now on referred to as SJS), the Hamiltonian of the system is given by Equation (2) for \(N=3\) with the Sun as \(i=0\), Jupiter \(i=1\), and Saturn \(i=2\). The interactive part of the Hamiltonian corresponds to the interaction between Jupiter (\(J\)) and Saturn (\(S\)):
\[\mathcal{H}_{\text{inter}}=-G\frac{m_{J}m_{S}}{||\vec{q}_{J}-\vec{q}_{S}||}. \tag{14}\]
In this case, a single pairwise term suffices to calculate the interactive part, and as a consequence, the use of ANNs will slow down the calculation. However, this setup constitutes an interesting study case. We set up the network for the inputs (\(X\)) to be the masses and positions of the two bodies and the output to be the Hamiltonian, as explained in subsection 2.2. Therefore, for the SJS case, the inputs are:
\[X_{\text{SJS}}=[m_{J},\vec{q}_{J},m_{S},\vec{q}_{S}]. \tag{15}\]
In the second case (to which we refer as SJSa), we add \(N_{a}\) asteroids in orbit around the massive central body. The Hamiltonian can again be calculated for the star, the two
planets, and the asteroids with \(N=3+N_{a}\). For example, when \(N_{a}=2\), the interactive Hamiltonian can be expressed as follows:
\[\mathcal{H}_{inter}=-G\frac{m_{J}m_{S}}{q_{JS}}-G\frac{m_{1}m_{J}}{q_{1J}}-G \frac{m_{1}m_{S}}{q_{1S}}-G\frac{m_{2}m_{J}}{q_{2J}}-G\frac{m_{2}m_{S}}{q_{2S}} -G\frac{m_{1}m_{2}}{q_{12}}, \tag{16}\]
with \(q\) representing the magnitude of \(\vec{q}\). Because asteroids are orders of magnitude less massive than the planets, it can be safely assumed that the mutual gravitational interaction between asteroids is negligible, and we therefore neglect the last term in Equation (16). We also assume that the effect of the asteroids on Jupiter and Saturn is negligible. In contrast, the effect of the planets on the asteroids cannot be neglected. To set up a neural network that predicts the perturbations on the asteroids due to the planets, we separate this interactive Hamiltonian for each of the asteroids. For asteroids 1 and 2, their interactive Hamiltonian is defined as:
\[\mathcal{H}_{1}=-G\frac{m_{J}m_{S}}{q_{JS}}-G\frac{m_{1}m_{J}}{q_{1J}}-G \frac{m_{1}m_{S}}{q_{1S}}, \tag{17}\]
\[\mathcal{H}_{2}=-G\frac{m_{J}m_{S}}{q_{JS}}-G\frac{m_{2}m_{J}}{q_{2J}}-G\frac{ m_{2}m_{S}}{q_{2S}}. \tag{18}\]
We now set up the network such that the position and mass of each of the two asteroids correspond to one set of inputs. Therefore:
\[X_{\text{SJSa}}=[m_{J},\vec{q}_{J},m_{S},\vec{q}_{S},m_{a},\vec{q}_{a}], \tag{19}\]
where the subindex \(a\) represents one of the \(N_{a}\) asteroids. This choice of inputs allows the size of the neural network to be independent of the number of asteroids in the system, which implies that the same neural network can be used for any number of asteroids without retraining.
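The per-asteroid input layout of Equation (19) can be assembled as follows (a sketch; the exact ordering of the entries within a row is our assumption):

```python
import numpy as np

def build_inputs(m_J, q_J, m_S, q_S, m_ast, q_ast):
    """One input row per asteroid, following Eq. (19).

    The planetary masses and positions are repeated in every row, so
    the network's input size is independent of the asteroid count.
    m_ast : (N_a,) asteroid masses; q_ast : (N_a, 3) asteroid positions.
    """
    planets = np.concatenate([[m_J], q_J, [m_S], q_S])
    return np.stack([np.concatenate([planets, [m_a], q_a])
                     for m_a, q_a in zip(m_ast, q_ast)])
```

Because each asteroid forms an independent row, the same trained network evaluates any number of asteroids in a single batched forward pass.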
Figure 3: Schematic of the problem setup. (a) First study case with the Sun, Jupiter, and Saturn. (b) Second study case with the Sun, Jupiter, Saturn, and asteroids located within Jupiter’s orbit.
## 3 Neural network results
In this section, we explain the creation of the training dataset, the choice of hyperparameters, and the training results for the Hamiltonian Neural Network and the Deep Neural Network.
### Dataset
We generate training and test datasets for each of the two cases: SJS and SJSa. The ranges of values can be found in Table 1 of Appendix A. From these, the initial conditions are chosen using Latin hypercube sampling [25] and the simulations are run until the end time is reached. At each time step, the state of the system is saved as a training sample. Then, we verify if the dataset created covers the entire search space, i.e., if there are samples in the full range of true anomaly \([0^{\circ}-360^{\circ})\), which is displayed in Figure 4.
With the time step and the end time in Appendix A, the number of training samples is 3,000,000. We randomly choose a fraction of these for the training. On an AMD Ryzen 9 5900hs, it takes \(\sim\)80 min to generate this dataset. All experiments utilize this same computer architecture.
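For illustration, a minimal Latin hypercube sampler can be written in a few lines of NumPy (this is our own sketch of the sampling scheme; the paper's dataset generation may use a library implementation):

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Minimal Latin hypercube sampler.

    bounds : list of (low, high) tuples, one per input dimension.
    Each dimension is split into n_samples strata; exactly one sample
    is drawn per stratum, and the strata are shuffled independently
    per dimension.
    """
    rng = np.random.default_rng(seed)
    samples = np.empty((n_samples, len(bounds)))
    for k, (lo, hi) in enumerate(bounds):
        # one jittered point per stratum of [0, 1), then shuffle
        strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        samples[:, k] = lo + (hi - lo) * rng.permutation(strata)
    return samples
```

The stratification guarantees coverage of the full range of each parameter, which is the property verified for the true anomaly in Figure 4.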
The accelerations of the planets and asteroids differ by orders of magnitude, which means that normalization of the training data is essential to train the network. However, since HNNs have physics embedded into their architecture, we cannot normalize the inputs or outputs independently without breaking the physical constraints. For example, re-scaling the inputs between 0 and 1 implies that the relation between different inputs does not remain constant. The distributions of inputs and outputs are included in Figure 14 and Figure 15, respectively.

Figure 4: Distribution of \(x\) and \(y\) positions of the Sun, Jupiter, Saturn, and the asteroids in the training dataset.

### Architecture and training parameters

In order to make a fair comparison between the DNN and the HNN, the settings chosen are common for both of them unless otherwise stated. Each of the two cases studied (SJS
and SJSa) requires different neural network hyperparameters. For the SJS case, we adopt a Mean Squared Error (MSE) loss function as indicated in Equation (7) for the HNN and Equation (8) for the DNN. For SJSa, the accelerations of the different bodies span multiple orders of magnitude, and therefore we implement a weighted MSE for the loss function, i.e., the error in the predicted acceleration of each body is weighted. The weights are applied to the losses defined in Equation (7) and Equation (8) as:
\[\mathcal{L}_{\text{NN}}(\theta)=W_{1}\;\mathcal{L}_{a}(\theta)+W_{2}\; \mathcal{L}_{S}(\theta)+W_{3}\;\mathcal{L}_{J}(\theta), \tag{20}\]
where \(\mathcal{L}_{a}\), \(\mathcal{L}_{S}\), and \(\mathcal{L}_{J}\) represent the MSE loss for the accelerations of the asteroids, Saturn, and Jupiter, respectively. We empirically find that weights of \(W_{1}=100\), \(W_{2}=10\), and \(W_{3}=1\) produce the best results as these weight values relate to differences in orders of magnitude of the accelerations of the bodies. These weights are only necessary due to the impossibility of normalizing the inputs and outputs without breaking the physics constraints of the HNN. Although normalization is possible with the DNN, we have used the weighted loss function instead to get a fair comparison with the HNN.
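The weighted loss of Equation (20) can be sketched as follows (the row ordering, asteroids first followed by Saturn and Jupiter, is our assumption for illustration):

```python
import numpy as np

def weighted_loss(pred, true, n_ast, W=(100.0, 10.0, 1.0)):
    """Weighted MSE of Eq. (20) with W1=100, W2=10, W3=1.

    pred, true : (n_ast + 2, 3) acceleration arrays; the first n_ast
    rows are assumed to be the asteroids, followed by one row each
    for Saturn and Jupiter.
    """
    mse = lambda a, b: np.mean((a - b) ** 2)
    return (W[0] * mse(pred[:n_ast], true[:n_ast])          # asteroids
            + W[1] * mse(pred[n_ast], true[n_ast])          # Saturn
            + W[2] * mse(pred[n_ast + 1], true[n_ast + 1]))  # Jupiter
```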
For SJS, no hyperparameter optimization is carried out, but the architecture is chosen manually instead. Both the DNN and the HNN have three layers, 200 neurons in the first hidden layer and each hidden layer has 0.7 times the number of neurons of the previous one. The learning rate follows an exponential decay, with an initial learning rate of 0.01, a decay of 0.9, and \(2\times 10^{5}\) steps. We use 150,000 samples with a proportion of 90/10 for training and validation datasets, and 10,000 samples for the test dataset.
For SJSa, the training of the HNN is not straightforward. To find a suitable combination of parameters, we perform a hyperparameter optimization where the variables are the number of training samples, number of layers, number of neurons per layer, and the learning rate parameters. We use a randomized grid search to explore different combinations of those parameters and train \(\sim\)30 networks for 200 epochs. The results are presented in Figure 5, where each simulation is plotted with the training loss along the \(x\)-axis and the validation loss along the \(y\)-axis. The figure indicates that regardless of the choice of parameters, the training and validation loss cannot be improved simultaneously to achieve the desired accuracy during testing. Among the best solutions, we choose the network architecture with three layers, 300 neurons per layer, and we use 250,000 samples for the training dataset. The test dataset is chosen to consist of 10,000 samples. The learning rate is chosen to follow an exponential decay schedule with an initial learning rate of \(10^{-3}\), 800,000 steps, and a decay rate of 0.9. The same parameters are used for the DNN.
Some of the most commonly used activation functions fail to capture the characteristics of the problem. For example, the activation function has to take into account the large dynamic range of the values of the problem. Therefore, we select the SymmetricLog activation function,
\[f(x)=\text{tanh}(x)\text{log}(x\text{tanh}(x)+1), \tag{21}\]
which was specifically designed for this problem by Cai et al. in 2021 [22], together with a Glorot weight initialization [26].
This function behaves similarly to tanh close to zero, and like a logarithmic function for larger values. Moreover, it is symmetric for positive and negative values, as seen in Figure 6.
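Equation (21) translates directly into code; the sketch below captures the properties just described, namely odd symmetry and logarithmic growth for large inputs:

```python
import numpy as np

def symmetric_log(x):
    """SymmetricLog activation of Eq. (21): f(x) = tanh(x) log(x tanh(x) + 1).

    Behaves like tanh near zero, grows logarithmically for large |x|,
    and is odd, so positive and negative inputs are treated alike.
    """
    t = np.tanh(x)
    return t * np.log(x * t + 1.0)
```

Since \(x\tanh(x)\geq 0\) for all \(x\), the argument of the logarithm is always at least one, so the function is well defined over the whole real line.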
### Training results
Both the DNN and the HNN are trained using the Adam optimizer [27] for 2,000 epochs. For SJS, this takes \(\sim\)1.3 h and \(\sim\)2.3 h for the DNN and the HNN, respectively, on the same computer as we used for the creation of the dataset. For SJSa, the training time is \(\sim\)2.5 h and \(\sim\)5 h for the DNN and the HNN, respectively.
Once the networks have been trained, we check their accuracy by applying them to the test dataset. For SJS, both the DNN and the HNN converge to a low loss value. Figure 7 shows the prediction error for the accelerations obtained with each network. Both networks produce accurate results when the accelerations are large, as their output is very close to the 45\({}^{\circ}\) zero-error line. The relative error grows as the value of the acceleration decreases, since the absolute prediction error is of the order of \(10^{-5}\). The errors of the DNN are larger than those of the HNN: the DNN overestimates the accelerations in the \(y\)-direction of Jupiter and in the \(x\)-direction of Saturn, and underestimates the \(y\)-acceleration of Saturn. This asymmetry leads to a drift in the energy error, as we will explain in section 4. Due to the orbits being almost planar, the accelerations in the \(z\)-direction are smaller than in the \(x\)- and \(y\)-directions, which in Figure 7 appears as a larger dispersion of small values.

Figure 5: Results of the hyperparameter optimization for the Hamiltonian Neural Network. The training loss is plotted against the validation loss. Each point represents one trained network during the hyperparameter optimization. The points with the best training and validation loss are represented by blue and red squares, respectively, and their associated parameters are shown in the legend.

Figure 6: Comparison of activation functions. The SymmetricLog activation function was created for this problem by Cai et al. [22].
The hyperparameter optimization in subsection 3.2 shows that we fail to train the HNN for the SJSa case to a satisfactory loss value. Because of the large difference in masses between the asteroids and the planets (\(\sim\)7 orders of magnitude), when calculating the loss function, some of the gradients of the output with respect to the inputs are required to be extremely large, whereas others have to be small. This leads to the training process focusing on improving the predictions of the accelerations of the asteroids or the planets and, after a certain loss value is achieved, improving one of these implies making the others worse. As a solution, since we successfully trained a network that predicts the accelerations of Jupiter and Saturn, we now train a network that solely predicts the accelerations of the asteroids. We therefore train another HNN where we only include the accelerations of the
Figure 7: Real against predicted values of the acceleration components for the case with the Sun, Jupiter, and Saturn. The real value is compared to the one predicted by the Deep Neural Network and the Hamiltonian Neural Network.
asteroids in the loss function, ignoring the predictions for Jupiter and Saturn. These results are presented in Figure 8.
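One way to realize "only the asteroids in the loss function" is to mask the loss. The sketch below is our own minimal illustration with hypothetical shapes (one row of predicted accelerations per body), not the authors' code.

```python
import numpy as np

def masked_mse(pred, target, body_mask):
    """Mean-squared error restricted to the bodies selected by body_mask,
    so the gradients ignore the unselected bodies entirely."""
    diff = pred[body_mask] - target[body_mask]
    return float(np.mean(diff * diff))

pred   = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 2.0]])
target = np.array([[9.0, 9.0, 9.0], [9.0, 9.0, 9.0], [0.0, 0.0, 0.0]])
mask   = np.array([False, False, True])   # e.g. planets excluded, asteroid kept
loss = masked_mse(pred, target, mask)     # mean of [0, 0, 4] = 4/3
```

The large errors on the excluded rows do not affect the loss value, which is the behavior needed to train the asteroid-only network.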
The DNN is trained with all the bodies in the loss function and can accurately predict
Figure 8: Real against predicted values of the acceleration components for the case with the Sun, Jupiter, Saturn, and asteroids. The real value is compared to those predicted by the Deep Neural Network, the Hamiltonian Neural Network for the three bodies, and the Hamiltonian Neural Network with only the asteroids in the loss function.
the accelerations but, similarly to the predictions for SJS depicted in Figure 7, makes errors on the same side of the \(45^{\circ}\) zero-error line. The HNN trained for the three bodies makes poor predictions for all the outputs, and the HNN trained only for the asteroids predicts the accelerations for the asteroids accurately but (as expected since they are not included in the loss function) fails to predict the accelerations for Jupiter and Saturn (Figure 8).
### Selection of networks
For SJSa, the HNN fails to predict the accelerations of Jupiter, Saturn, and the asteroids simultaneously. However, if the network is trained with the loss only accounting for the prediction of the accelerations of the asteroids, it can predict these accurately as we discussed in subsection 3.3. For SJSa, we will therefore calculate the accelerations using a combination of two networks: the predictions for Jupiter and Saturn with the network trained for SJS (Figure 7), and the prediction for the asteroids with the network that is only trained to predict the accelerations of the asteroids (orange markers in Figure 8). This combination of two networks is done with both the HNN and the DNN.
### Output of the HNN
It is interesting to understand if the output of the HNN is the same as the actual interactive Hamiltonian of the system (Equation 4). To test this hypothesis, we set up an experiment for SJS in which we compare the output of the HNN with the interactive energy of the system.
In Figure 9, we show that the predicted values of the interactive Hamiltonian with the HNN, i.e., \(f_{\text{HNN}}(X)\) (WH-HNN H in Figure 9) are not the same as the interactive energy of the hybrid integrator \(\mathcal{H}_{\text{inter}}^{\text{WH-HNN}}\) (WH-HNN Energy in Figure 9). The energy of the numerical solution \(\mathcal{H}_{\text{inter}}^{\text{WH}}\) is also plotted as WH Energy for reference. The energy evolution of the hybrid integrator exactly coincides with the one of the numerical solution. The output of the HNN does not correspond to the energy value. Therefore, the output of the network does not have physical meaning. This can be explained by realizing that the accelerations obtained with the HNN depend on the relation between the output and the gradients. As a consequence, different combinations of these two variables may lead to similar values of the accelerations.
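A small numerical experiment makes this freedom concrete: two scalar functions that differ by a constant offset have identical gradients, and hence yield identical gradient-derived accelerations, so the network's scalar output is only determined up to such transformations. The toy potential below is our own choice, not the trained HNN.

```python
import numpy as np

def grad(f, x, h=1e-6):
    """Central finite-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Two scalar "outputs" that differ by a constant offset ...
H1 = lambda q: -1.0 / np.linalg.norm(q)
H2 = lambda q: -1.0 / np.linalg.norm(q) + 42.0

q = np.array([1.0, 2.0, 2.0])
# ... yield identical gradient-derived accelerations (-grad H = -q/|q|^3 here)
a1 = -grad(H1, q)
a2 = -grad(H2, q)
```

Although the two scalar outputs differ by 42, the derived accelerations agree, mirroring why the HNN output need not equal the physical interaction energy.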
## 4 Results of the hybrid Wisdom-Holman integrator
In this section, we use the networks trained in section 3 in a simulation to further study their performance.
### Integration parameters
We initialize the simulation with the state of the Sun, Jupiter, and Saturn from the Horizon System of the Jet Propulsion Laboratory [28]. We consider a variable number of asteroids initialized with a semi-major axis chosen randomly between 2.2 and 3.2 au, an eccentricity of 0.1, an inclination of \(0^{\circ}\), and a random true anomaly. Then, we use the Wisdom-Holman integrator with a time step (\(h\)) of 0.1 yr until a final integration time which depends on the specific case (SJS or SJSa).
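A minimal sketch of this asteroid initialization (our own code, assuming planar heliocentric coordinates; the conversion to Cartesian positions uses the conic equation r = a(1 - e^2)/(1 + e cos nu)):

```python
import math
import random

def init_asteroid(rng, a_min=2.2, a_max=3.2, ecc=0.1):
    """Random planar heliocentric position (au) from the elements described
    above: random semi-major axis in [a_min, a_max], fixed eccentricity,
    zero inclination, random true anomaly."""
    a = rng.uniform(a_min, a_max)            # semi-major axis [au]
    nu = rng.uniform(0.0, 2.0 * math.pi)     # true anomaly [rad]
    r = a * (1.0 - ecc ** 2) / (1.0 + ecc * math.cos(nu))
    return r * math.cos(nu), r * math.sin(nu), a

rng = random.Random(1)
x, y, a = init_asteroid(rng)
```

Each generated radius lies between the perihelion a(1 - e) and the aphelion a(1 + e) of the sampled orbit.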
### Validation of the code
Before discussing the results, we validate the hybrid implementation of the Wisdom-Holman integrator with the neural network. For this purpose, we compare two methods for SJSa: one without replacing the HNN result by that of the numerical integrator when the requirement (Equation (9)) is not met (without flags), and the method with flags as described in subsection 2.3. In Figure 10 we show the accelerations of Saturn and two asteroids: asteroid 1 within the limits of the training dataset and asteroid 2 outside, to study the extrapolation capabilities of the network. When the prediction of the network is accurate, as it is for Saturn, no flags are needed. However, when the network is not able to reproduce the numerical results, as in the case of asteroid 2, the hybrid integrator detects the poor predictions and replaces these with the results of the numerical calculation. By doing so, the hybrid HNN method becomes significantly more robust against prediction errors. In Appendix B, we discuss the number of flags as a function of the parameter \(R\) from Equation (9).
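The actual acceptance test is Equation (9) of the paper, which is not reproduced in this excerpt. Purely as an illustration of the flagging mechanism, the sketch below rejects a prediction when it deviates from the previously accepted acceleration by more than a fraction R and falls back to a numerical routine; both the criterion and the shapes are our own assumptions.

```python
import numpy as np

def hybrid_accel(pred, prev, numeric_fn, R=0.3):
    """Accept the network prediction unless it deviates from the previously
    accepted acceleration by more than a fraction R; otherwise 'flag' the
    step and fall back to the numerical computation. (Hypothetical criterion;
    the paper's actual test is its Equation (9).)"""
    rel = np.linalg.norm(pred - prev) / (np.linalg.norm(prev) + 1e-30)
    if rel > R:
        return numeric_fn(), True   # flagged: use the numerical result
    return pred, False              # accepted: use the network prediction

prev = np.array([1.0, 0.0, 0.0])    # previously accepted acceleration
good = np.array([1.05, 0.0, 0.0])   # small deviation -> accepted
bad  = np.array([10.0, 0.0, 0.0])   # large deviation -> replaced
acc_g, flag_g = hybrid_accel(good, prev, lambda: prev)
acc_b, flag_b = hybrid_accel(bad, prev, lambda: prev)
```

Only the flagged step pays the cost of the numerical computation, which is why the hybrid method barely increases the total computing time when the network predicts well.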
We show in Figure 10 that the hybrid integrator yields better solutions for the accelerations. However, verifying the predictions of the networks at each time step entails a cost in terms of computing time.
The numerical integration scales with \(N^{2}\), whereas the neural network result scales with \(N\). For a small number of asteroids, the additional computing time needed to include the neural networks in the integrator makes the method with neural networks more expensive than the numerical computation. We therefore study the minimum number of asteroids needed to make the use of neural networks computationally less expensive than the numerical computation. In Figure 11, three cases are displayed: the Wisdom-Holman integrator (WH), WH with HNN without flags, i.e., HNN, and hybrid WH with HNN, i.e., WH-HNN. For a number of asteroids \(\leq\)70, the use of the HNNs is not preferred over WH as it takes longer to run. However, as the number of asteroids increases, using either HNNs or the hybrid method with HNNs within the integrator results in faster computations, halving the computing time for 2,000 asteroids. Using the hybrid method with the HNN only slightly increases the
Figure 9: Comparison of the output of the Hamiltonian Neural Network with the interactive Hamiltonian of the Wisdom-Holman integrator.
computing time with respect to the pure HNN case, since the prediction for each asteroid is evaluated and replaced individually if necessary. In Figure 11(b), we see that the hybrid integrator reduces the energy error without significantly increasing the computing time. Since the energy error is dominated by the planets, a small improvement in the energy error implies a significant improvement in the predictions of the accelerations of the asteroids.
The computing times shown in Figure 11 refer to the times for the calculation of the accelerations, i.e., the training times for the neural networks are not included. Once the networks are trained, they can be used in multiple experiments. For example, if the objective is to run 100 experiments, a training time of 2 h is negligible compared to the total computing time.
### Trajectory integration
Once the neural networks have been trained, we integrate SJS for 5,000 years (Figure 12) and SJSa for 1,000 years (Figure 13). To study the extrapolation capabilities of the network, we add two asteroids to SJSa, of which the initial conditions are within the range of training parameters (asteroids 1 and 2) and one asteroid with a semi-major axis outside the range (asteroid 3).
Figure 10: Comparison of the accelerations of Saturn, asteroid 1 with an orbit inside the range of the training dataset, and asteroid 2 with an orbit outside the range of training data, using different integration setups. _First row:_ Wisdom-Holman integrator, _second row:_ Wisdom-Holman integrator with a Hamiltonian Neural Network, and _third row:_ hybrid Wisdom-Holman integrator with a Hamiltonian Neural Network and \(R=0.3\). In the third row, the dots represent the points in which the numerical integrator was used because the prediction of the neural network was not considered sufficiently accurate.
In Figure 12, we compare the trajectory, change in eccentricity, and energy error of the hybrid integrator with the HNN and the DNN with respect to the numerical integration. The integrator with the HNN (WH-HNN) reproduces the change in eccentricity of the integrator better than the one with the DNN (WH-DNN). The evolution of the eccentricity is an important indicator of how well the orbit is reproduced using the neural networks. Another indicator is the energy error (third row). Although the WH-HNN leads to a larger energy error than the WH integrator, it shows symplectic behavior. In contrast, the use of the WH-DNN leads to a systematic drift in the energy error. This causes a gradual divergence from the numerical solution. We illustrate this with Figure 7, where the DNN produces prediction errors that are asymmetrically distributed around the zero-error line. We conclude that for the SJS case the hybrid integrators with the HNN and the DNN can reproduce the numerical results for short time scales, although the use of the latter results in a systematic deviation from the reference solution.
For SJSa, the results in Figure 13 show that the trajectories of asteroids 1 and 2 can be predicted with both the WH-HNN and the WH-DNN for short integration times. For longer integration times, the DNN is not able to reproduce the trajectories of the asteroids accurately; there is a systematic drift in the evolution of the eccentricity. Regarding the extrapolation capabilities of the networks, neither the HNN nor the DNN can predict the trajectory of asteroid 3. However, the hybrid integration allows the accelerations to be adjusted to the numerical values, leading to more accurate trajectories. Regarding the energy error, the behavior observed is the same as in Figure 12, since the energy magnitudes of Jupiter and Saturn dominate over those of the asteroids. We conclude that for short time scales both networks incur a small error with respect to the numerical integration results, but the HNN achieves a more accurate reproduction of the trajectory of
Figure 11: Computing time (a), and difference in computing time and energy error with respect to the numerical solution (b) for the integration to 20 years as a function of the number of asteroids. Three cases are shown: numerical integrator (WH), numerical integrator with Hamiltonian Neural Network (HNN), and hybrid numerical integrator with Hamiltonian Neural Network (WH-HNN). The \(N\) and \(N^{2}\) lines are displayed as a reference for linear and quadratic scaling, respectively.
the asteroids over a longer time scale.
## 5 Conclusion
In this paper, we studied the use of Artificial Neural Networks for the prediction of accelerations in a planetary system with a star orbited by two planets and a number of asteroids. We compared the results produced by a Deep Neural Network and a Hamiltonian Neural Network. The latter includes physical knowledge about the conservation of energy.
In contrast to previous studies that use neural networks for the gravitational _N_-body problem, we focused on an actual astrophysics problem. By using a case-specific integrator and modifying the number of bodies and their masses and positions to represent a realistic scenario, we encountered challenges that are not found when using this problem as a test case.
We created a method that circumvents some of the major challenges of using neural networks for the \(N\)-body problem. First of all, by using a hybrid integrator that evaluates the prediction and chooses between the numerical or the neural network solution, we addressed the problem of accumulation of errors over large timescales. Secondly, our setup allows for
Figure 12: Simulation results for the Sun, Jupiter, and Saturn (SJS). The results are generated using the Wisdom-Holman integrator (_left_), hybrid Wisdom-Holman with the Hamiltonian Neural Network (_middle_), and hybrid Wisdom-Holman with the Deep Neural Network (_right_). The trajectories in the \(x\)-\(y\) plane are shown in the first row, the eccentricities in the second row, and the relative energy error in the third row.
a variable number of bodies in the system without the need to retrain the network. With the simplest setups found in the literature, an increase in the number of bodies in the system implies that the network needs to be retrained. Finally, we used custom activation functions and weights in the loss function to adapt to the characteristics of the problem.
Although, based on the optimistic results from the literature [17; 18; 20], we expected the HNN to outperform the DNN, in the case with the asteroids the HNN could not be trained to simultaneously predict the accelerations of the planets and the asteroids. Because of the presence of physics constraints in HNNs, normalization is not possible. This becomes an obstacle for training due to the differences in the masses of the bodies. We therefore trained two individual networks, one for the accelerations of the planets and one for the asteroids. Although using HNNs has its advantages for the simplified case with Jupiter and Saturn, we demonstrated their limitations for other configurations.
HNNs turn out to be more time-consuming and harder to train, in contrast to the DNN. We had to develop a dedicated activation function specifically for this problem and the hyperparameter optimization performed was time-consuming as well.
With more than 70 asteroids, the integration with the neural networks becomes faster than the direct numerical integration, and for 2,000 asteroids the use of neural networks leads to a halving of the computing time. Since the goal is to create a method that can be used multiple times, the performance comparison does not include the time used for training.
Figure 13: Simulation results for the case with the Sun, Jupiter, Saturn, and asteroids. The results are generated with the Wisdom-Holman integrator (_left_), hybrid Wisdom-Holman with the Hamiltonian Neural Network (_middle_), and hybrid Wisdom-Holman with the Deep Neural Network (_right_). The trajectory in the \(x\)-\(y\) plane is shown in the first row and the eccentricity in the second row. Asteroids 1 and 2 are within the limits of the training dataset and asteroid 3 has a semi-major axis below the lowest limit of the training dataset to study the extrapolation capabilities of the networks.
We developed a hybrid integrator to alleviate the problems induced by the introduction of neural networks in the integration process. By verifying the prediction made by the ANN at each time step and replacing this prediction by the numerical integrator if necessary, the integrator becomes more reliable and robust to prediction errors without significantly increasing the computing time. Therefore, for a sufficiently large number of asteroids (\(\sim 70\)), we find that the hybrid approach with the HNN proposed here outperforms the direct integration without losing the underlying physics of the system, as opposed to the hybrid integrator with the DNN. Although our study shows that it is beneficial to use physics-aware architectures that conserve the symplectic structure of the integrator, our hybrid method is independent of the network topology chosen. We focused on the simplest cases of neural networks to allow for a better understanding of the underlying challenges of the problem, but further studies should focus on the use of more complex network topologies.
In short, we showed that neural networks can be used to speed up the integration process for problems with a large number of asteroids. However, for long integration times, the prediction errors may accumulate causing the results to diverge with respect to the solution obtained by direct numerical integration. Moreover, if no hybrid integration method that verifies the prediction of the network is used, these prediction errors may lead to unphysical solutions on a short time scale. The use of HNNs is justified for cases in which normalization is not needed to train the network, which in this study means when the masses of the different bodies are of the same order of magnitude. When the HNNs can be trained, they show symplectic behavior, with the energy error oscillating around the initial value. In contrast, DNNs are easy to train and lead to satisfactory solutions, but are not able to extrapolate to conditions that are not part of the training data and are unsuited for finding solutions that conserve energy.
## 6 Acknowledgments
This publication is funded by the Dutch Research Council (NWO) with project number OCENW.GROOT.2019.044 of the research programme NWO XL. It is part of the project "Unravelling Neural Networks with Structure-Preserving Computing". In addition, part of this publication is funded by the Nederlandse Onderzoekschool Voor Astronomie (NOVA). |
# Using Neural Networks for Fast SAR Roughness Estimation of High Resolution Images

Li Fan, Jeova Farias Sales Rocha Neto

arXiv:2309.03351v1, 2023-09-06, http://arxiv.org/abs/2309.03351v1
###### Abstract
The analysis of Synthetic Aperture Radar (SAR) imagery is an important step in remote sensing applications, and it is a challenging problem due to its inherent speckle noise. One typical solution is to model the data using the \(G_{I}^{0}\) distribution and extract its roughness information, which in turn can be used in posterior imaging tasks, such as segmentation, classification and interpretation. This leads to the need of quick and reliable estimation of the roughness parameter from SAR data, especially with high resolution images. Unfortunately, traditional parameter estimation procedures are slow and prone to estimation failures. In this work, we proposed a neural network-based estimation framework that first learns how to predict underlying parameters of \(G_{I}^{0}\) samples and then can be used to estimate the roughness of unseen data. We show that this approach leads to an estimator that is quicker, yields less estimation error and is less prone to failures than the traditional estimation procedures for this problem, even when we use a simple network. More importantly, we show that this same methodology can be generalized to handle image inputs and, even if trained on purely synthetic data for a few seconds, is able to perform real time pixel-wise roughness estimation for high resolution real SAR imagery.
_Index terms:_ Synthetic Aperture Radar Images, Neural Networks, Image Analysis, Statistical Modeling.
## I Introduction
Synthetic Aperture Radar (SAR) data is crucial to remote sensing and earth monitoring. Its ability to capture high resolution snapshots of targets and landscapes independently of the weather conditions and sunlight has opened the way to important advancements in environmental monitoring, emergency response, evaluation of damages in natural catastrophes, urban planning and ecology, to mention a few. Because of its widespread use, there is a considerable industrial appeal for SAR image understanding algorithms. This task, however, is challenging due to the degrading speckle noise inherent to such data, which prevents the application of image processing techniques that are common in other image domains.
Thankfully, this same appeal led to the development of statistical models that describe SAR data despite its noise pattern. In particular, the \(G_{I}^{0}\)[1] distribution for intensity data, highly successful in both theory and practice [2], rose to prominence in the remote sensing community because of its capacity to model well-textured, extremely textured and textureless terrain data. To do so, this model relies on three parameters, one of which is roughness. It directly corresponds to the captured texture and has been extensively used for SAR data understanding [3, 4, 5, 6, 7]. Estimating this parameter has become crucial in many SAR imaging techniques [7, 4]. However, the best currently available algorithms for this task, i.e., the Maximum Likelihood [2], Log-cumulant [8] and Minimum Distance Estimators [5], are too computationally expensive to be applied to high resolution images within reasonable time.
Taking a different approach, deep learning has been successfully applied to SAR [9] and many other imaging domains [10]. In these techniques, one typically relies on training a large neural network on a substantial amount of labeled data, or augmentations thereof. Practically speaking, this computationally demanding process can be accomplished by implementing parallelizable learning techniques on powerful parallel processing machines, such as Graphics Processing Units (GPUs). Nowadays, many efficient and simple-to-use libraries are available for manipulating data and training deep networks on GPUs, which prompts researchers and practitioners alike to design their methodologies to leverage these powerful tools.
In this work, we propose using a neural network-based learning algorithm for estimating roughness maps in SAR images. In our methodology, we generate sets of \(G_{I}^{0}\) samples of varying sizes, compute their sample moments and use them to predict the parameters that generated the data. Once the network is trained on this fully synthetic data, we proceed with network inference on real data. To the best of our knowledge, our work is the first of its kind to train a neural network to estimate parameters of distributions based on sample moments from synthetic data. As we shall show, in fact, a very small network, trained for a few seconds on a small dataset, is sufficient to produce better estimates on unseen data than maximum likelihood and log-cumulant-based estimators in terms of mean squared error, while also being faster and less prone to failures. Finally, this same process can be fully implemented, from sample moment computation and training to network inference, on a GPU for high resolution image-sized input, with pixel-wise roughness estimation accomplished in a few milliseconds on average. Overall, we propose a simple, principled and efficient method for training neural networks for SAR image understanding, using the statistical baggage this type of data carries.
## II Preliminaries
### _SAR Statistical Modeling and Parameter Estimation_
The intensity return \(Z\) in a monopolarized SAR image can be effectively modeled by the product of two independent random variables: \(X\), the _backscatter data_, and \(Y\), the inherent _speckle noise_[2]. Typically, \(Y\) is modeled as a unitary-mean Gamma-distributed random variable with shape parameter corresponding to the number of looks \(L\in\mathbb{N}^{*}\) used to
capture the data, which is considered known during estimation [2]. Assuming that \(X\) obeys the reciprocal of the Gamma law, Frery _et al._[1] showed that \(Z\sim G_{I}^{0}(\alpha,\gamma,L)\) with density:
\[f_{\mathcal{G}_{I}^{0}}(z,\theta)=\frac{L^{L}\Gamma(L-\alpha)}{\gamma^{\alpha} \Gamma(-\alpha)\Gamma(L)}z^{L-1}(\gamma+Lz)^{\alpha-L}, \tag{1}\]
where \(-\alpha,\gamma>0\) are called _roughness_ and scale, respectively.
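As an illustration, the density (1) and the product construction \(Z=XY\) can be sketched in a few lines: the sampler below draws \(Y\) from a unit-mean Gamma law with shape \(L\) and \(X\) from the reciprocal-Gamma law with parameters \((-\alpha,\gamma)\). The code is our own sketch and the specific parameter values (\(\alpha=-5\), \(\gamma=4\), \(L=4\)) are illustrative choices.

```python
import math
import numpy as np

def g0_pdf(z, alpha, gamma, L):
    """Density of the G_I^0 law, Eq. (1): alpha < 0, gamma > 0, L >= 1."""
    logc = (L * math.log(L) + math.lgamma(L - alpha)
            - alpha * math.log(gamma) - math.lgamma(-alpha) - math.lgamma(L))
    return np.exp(logc + (L - 1) * np.log(z) + (alpha - L) * np.log(gamma + L * z))

def g0_sample(alpha, gamma, L, size, rng):
    """Draw Z = X * Y with X ~ reciprocal-Gamma(-alpha, gamma) and
    Y ~ Gamma(L, 1/L) (unit-mean speckle)."""
    X = gamma / rng.gamma(-alpha, 1.0, size)   # inverse-Gamma via 1/Gamma
    Y = rng.gamma(L, 1.0 / L, size)
    return X * Y

rng = np.random.default_rng(0)
# Choosing gamma = -alpha - 1 gives a unit-mean sample
z = g0_sample(alpha=-5.0, gamma=4.0, L=4, size=200_000, rng=rng)
```

A quick sanity check is that the density integrates to one and the samples have mean close to one under this parameterization.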
The \(G_{I}^{0}\) model has become a popular choice for SAR data due to its expressiveness [1], mathematical tractability [2], and effectiveness in various imaging tasks [11, 4]. In fact, it has been shown that the roughness parameter \(\alpha\) plays an important role in SAR understanding [5]. Roughness close to zero, typically \(\alpha>-3\), suggests that the targeted region is highly textured, therefore urban. For \(\alpha\in[-6,-3]\), we have evidence for moderately textured regions, which can correspond to forests. Finally, \(\alpha<-6\) implies textureless areas, such as seas and pastures. Hence, using \(\alpha\) estimates instead of pixel intensities in SAR images has led to improvements in image segmentation [7, 3] and region discrimination [12].
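These thresholds suggest a simple lookup. The function below is our own illustration, mapping an estimated roughness to the terrain classes just described:

```python
def terrain_class(alpha: float) -> str:
    """Map an estimated roughness to the heuristic terrain classes above."""
    if alpha > -3.0:
        return "urban"          # highly textured
    if alpha >= -6.0:
        return "forest"         # moderately textured
    return "textureless"        # seas, pastures
```

Applied pixel-wise to a roughness map, such a rule turns the estimated \(\alpha\) values into a coarse terrain labeling.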
This direct application of SAR statistical modeling fueled the study and improvement of estimators for the \(G_{I}^{0}\) distribution [12, 5]. While early work employed the Maximum Likelihood Estimator (MLE) [2] for parameter estimation, later developments led to the application of the Method of Moments [2], the Method of Log-Cumulants (LCUM) [8, 6, 7] and Minimum-Distance Estimators [5, 4] to this problem. Despite their successes, all these methods, especially LCUM, rely heavily on slow optimization procedures and are prone to frequent estimation failures [5]. These issues hinder their application to image-sized data understanding, as the current praxis relies on generating _roughness maps_ by sweeping the image with a window that collects the intensity data centered at each pixel and then running the parameter estimation algorithm [7], a process that can be excessively slow.
### _Neural Networks_
We focus on supervised learning algorithms that, given a dataset \(\mathcal{D}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{N}\), aim to compute a model \(f_{\theta}(\cdot)\) parameterized by \(\theta\) that best predicts the output \(\mathbf{y}_{i}\)'s when given the input \(\mathbf{x}_{i}\)'s. The prediction error, to be minimized, is measured via a loss, typically the Mean Squared Error (MSE):
\[\mathrm{MSE}=\frac{1}{N}\sum_{(\mathbf{x},\mathbf{y})\in\mathcal{D}}\| \mathbf{y}-f_{\theta}(\mathbf{x}))\|_{2}^{2}, \tag{2}\]
where \(\|\cdot\|_{2}\) is the Euclidean vector norm. In _Neural Networks_, we assume that \(f_{\theta}\) is composed of units with a non-linear activation \(\sigma(\cdot)\) and parameters \(\theta=\{\mathcal{W},\mathcal{B}\}\), where \(\mathcal{W}=\{W_{i}\}_{i=1}^{Q}\) is a set of _weight_ matrices and \(\mathcal{B}=\{\mathbf{b}_{i}\}_{i=1}^{Q}\) a set of _bias_ vectors. The Multilayer Perceptron (MLP), a popular network, uses a model \(NN_{\theta}(\mathbf{x})\) formed by the composition of affine transformations of the inputs followed by the element-wise application of \(\sigma(\cdot)\):
\[NN_{\theta}(\mathbf{x})=\sigma(W_{Q}\dots\sigma(W_{2}\sigma(W_{1}\mathbf{x}+ \mathbf{b}_{1})+\mathbf{b}_{2})\dots+\mathbf{b}_{Q}). \tag{3}\]
Each transformation is typically referred to as a _layer_ of the MLP, where the first and last are the input and output layers, and others are called hidden layers. A network with multiple hidden layers is commonly known as a "deep" neural network.
Another, arguably more popular, neural network model is the Convolutional Neural Network (CNN). Different from the MLP model, in CNN, the weights in each layer are shared so that they effectively perform a convolution operation on the input. For example, consider an image \(I\) of shape \(c\times w\times l\), where \(w\) and \(l\) are 2D spatial dimensions of \(I\), and \(c\) is its number of channels (three for RGB data). One can design a matrix of weights whose application on a vectorized/flattened version of \(I\) corresponds to a convolution of \(I\) on its original shape and a _kernel matrix_ of shape \(c\times k\times k\), where \(k\ll w,l\). Using such _convolutional layers_ brings several advantages for learning on image-sized data. First, the number of weights to be learned is drastically reduced, from \(O(cwl)\) to \(O(ck^{2})\). Second, the number of weights to be learned no longer depends on the size of the input. Third, one can show that this process learns separate feature detectors, which improves the network performance on important imaging tasks such as detection, denoising and segmentation [10].
More important to us is the setting where MLPs and CNNs are equivalent. Suppose the input vectors of the dataset \(\mathcal{D}\) are in \(\mathbb{R}^{d}\) and the output vectors in \(\mathbb{R}^{d^{\prime}}\). Learning a dense layer for \(\mathcal{D}\) would require finding a matrix \(W\in\mathbb{R}^{d\times d^{\prime}}\). Now, consider the tensors \(X\in\mathbb{R}^{d\times w\times l}\) and \(Y\in\mathbb{R}^{d^{\prime}\times w\times l}\), where \(w\) and \(l\) are chosen such that \(wl=N\), and each \(\mathbf{x}_{n}\) (resp. \(\mathbf{y}_{n}\)) is placed in a transversal tube \(X[:,i,j]\) (resp. \(Y[:,i,j]\))1[13]. It is easy to see that training \(d^{\prime}\) kernel matrices of shape \(d\times 1\times 1\) is the same as learning the values in \(W\)[14]. As discussed in the next section, using these \(1\times 1\) convolutional layers will enable us to cast our parameter estimation procedure as inference on a pre-trained CNN, leveraging the speed of highly parallel computing engines during estimation and avoiding time-consuming loops when generating roughness maps.
Footnote 1: We use a notation similar to Numpy's for array indexing (for example, \(X[i,j]\) is the element of \(X\) at indices \((i,j)\)) and slicing (for example, \(X[i,:]\) is all the elements in the \(i\)-th row of \(X\)).
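The dense-layer / \(1\times 1\)-convolution equivalence described above can be verified numerically. The following sketch is our own, with arbitrary small shapes, and applies the same weight matrix to every transversal tube both ways:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_out, w, l = 5, 3, 4, 6
W = rng.normal(size=(d_out, d))     # dense-layer weight matrix
X = rng.normal(size=(d, w, l))      # inputs stacked as transversal tubes X[:, i, j]

# Dense layer applied independently to every length-d tube ...
dense = np.array([[W @ X[:, i, j] for j in range(l)] for i in range(w)])
dense = dense.transpose(2, 0, 1)    # -> shape (d_out, w, l)

# ... is the same linear map as d_out convolution kernels of shape (d, 1, 1),
# written here as a single contraction over the channel axis
conv = np.einsum('od,dwl->owl', W, X)
```

Both paths produce identical outputs, which is why a trained MLP can later be evaluated on full images as a stack of \(1\times 1\) convolutions.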
## III Proposed Methodology
### _Estimating \(G_{l}^{0}\) Using Neural Networks_
Our approach to estimation relies on a neural network-based learning procedure. Let \(L\in\mathbb{N}^{*}\) be known [2] and \(\mathcal{H}=\{(\mathcal{Z}_{i},\alpha_{i})\}_{i=1}^{N}\) be a dataset composed of samples \(\mathcal{Z}_{i}\) of varying sizes from \(G_{I}^{0}(\alpha_{i},\gamma_{i},L)\) distributions, where we assume \(\gamma_{i}=-\alpha_{i}-1\)2. We also assume that \(\alpha_{i}\) is always above a given lower bound (set to \(-15\) in our experiments) [5].
Footnote 2: We follow the praxis of assuming that the samples have unit mean [2].
Our goal is to train a neural network to predict \(\alpha_{i}\) from \(\mathcal{Z}_{i}\). However, the data in \(\mathcal{Z}_{i}\) cannot be used directly, as the input vector of our network needs to be of fixed size. The final estimator also needs to be invariant to sample permutations in each input set. To solve these issues, the authors in [15] proved that all permutation-invariant functions \(t(\cdot)\) on a set \(\mathcal{S}\) can be represented as \(t(\mathcal{S})=\rho\left(\sum_{s\in\mathcal{S}}\phi(s)\right)\), where \(\rho(\cdot)\) and \(\phi(\cdot)\) are continuous functions. Setting \(\rho(x)=\frac{1}{|\mathcal{Z}_{i}|}x\) and \(\phi(x)=(\log x)^{m}\) and applying the resulting \(t(\cdot)\) to a sample \(\mathcal{Z}_{i}\) from \(\mathcal{H}\), we get:
\[\mu_{m}(\mathcal{Z}_{i})=\frac{1}{|\mathcal{Z}_{i}|}\sum_{z\in\mathcal{Z}_{i}}( \log z)^{m}, \tag{4}\]
where \(\mu_{m}(\mathcal{Z}_{i})\) conveniently represents the sample log-moment of order \(m\), a statistic commonly used in the SAR literature [6, 7]. In practice, we create a vector of \(N_{m}\) moments \(\boldsymbol{\mu}_{i}=[\mu_{1}(\mathcal{Z}_{i})\ \mu_{2}(\mathcal{Z}_{i})\ \ldots\ \mu_{N_{m}}(\mathcal{Z}_{i})]\) and train our network on the dataset \(\mathcal{D}=\{(\boldsymbol{\mu}_{i},\alpha_{i})\}_{i=1}^{N}\). We use an MLP as our network architecture, since its structure can be adapted to infer full-sized roughness maps, as explained in the next section.
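As an illustration, the moment vector of Equation (4) takes only a few lines of NumPy; `compute_moments` below is a hypothetical helper of ours, not the authors' code:

```python
import numpy as np

def compute_moments(Z, n_m):
    """Sample log-moments mu_1, ..., mu_{n_m} of a 1-D sample set Z (Eq. 4)."""
    logs = np.log(np.asarray(Z, dtype=float))
    return np.array([np.mean(logs ** m) for m in range(1, n_m + 1)])

# Example: for Z = exp([1, 2, 3]) the log-moments are the plain moments
# of [1, 2, 3]: mu_1 = 2 and mu_2 = (1 + 4 + 9) / 3.
mu = compute_moments(np.exp([1.0, 2.0, 3.0]), 2)
```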
```
Input: L: # looks.  R: dataset size.  T: # epochs.  N_m: # log-moments.
       A: set of alpha values.  S: set of sample sizes.

Function Main():
    D <- SyntheticDataset(L, A, S, N_m, R)
    NN_theta <- TrainNeuralNet(D, T)

Function D = SyntheticDataset(L, A, S, N_m, R):
    D <- empty set
    repeat R times:
        alpha, s <- sample uniformly from A and S, resp.
        gamma <- -alpha - 1
        Z <- {z | z ~ G_I^0(alpha, gamma, L), |Z| = s}
        mu <- ComputeMoments(Z, N_m)
        D <- D U {(mu, alpha)}

Function mu = ComputeMoments(Z, N_m):
    foreach m in {1, 2, ..., N_m}:
        mu[m] <- (1 / |Z|) * sum_{z in Z} (log z)^m

Function NN_theta = TrainNeuralNet(D, T):
    NN_theta <- MultilayerPerceptron with parameters theta
    repeat T times:
        foreach batch (x, y) in D:
            Optimize theta to minimize ||y - NN_theta(x)||_2^2
```
**Algorithm 1:** Training Roughness Estimator

**Algorithm 2:** Training Roughness Map Estimator
Algorithm 1 gives an overview of this training procedure. Following Main(), a synthetic \(G_{I}^{0}\) dataset is created by drawing samples of various predefined sample sizes and \(\alpha\) values. For each set, ComputeMoments() runs on each sample to find its first \(N_{m}\) log-moments and concatenate the resulting moment vector to the roughness value that generated it. Finally, an MLP is trained on these pairs of moment vectors and the \(\alpha\) parameter over a given amount of epochs.
After the network is trained, one can estimate the parameters of a given unseen sample set by (1) computing its moments via ComputeMoments() and (2) feeding them through the trained model. We hope that, although training requires extra computational time, the inference step in this process is fast.
### _Adaptation to Roughness Map Estimation_
In many practical scenarios involving parameter estimation in SAR, one wishes to estimate the roughness at all pixel locations in a potentially high-resolution image \(I\)[7]. While we could apply the algorithm described previously to all \(k\times k\) sized windows in \(I\) for a given \(k>0\), in this section we show that the functions described in Algorithm 1 can be easily adapted to image-sized data and be fully implemented on a GPU. This transition will enable us to fully parallelize our estimation algorithm in both the training and inference phases, which greatly reduces computation time.
In this new setting, let \(I\in\mathbb{R}^{w\times h}\). For our training phase, each input in our training set will be composed of samples from the \(G_{I}^{0}\) distribution for unique values of \(\alpha\), \(\gamma\) and \(L\) disposed in a \(w\times h\) matrix. In the inference phase, \(I\) is a real SAR image of any size, meaning that our method will be able to estimate the roughness of an image with a network trained on purely synthetic and random data.
Our goals are (1) to efficiently compute all the desired \(N_{m}\) moments of the data surrounding each pixel in \(I\) on a \(k\times k\) window, composing a tensor \(M\in\mathbb{R}^{N_{m}\times w\times h}\), and (2) to feed this data to an appropriate network that estimates the roughness parameters of each pixel location on the image grid. For step (1), one can use an Average Pooling layer, \(\operatorname{AvgPool}(\cdot)\), in which a convolution kernel sweeps the input data, averaging the pixel values within each window [16]. Here we add an appropriate padding composed of \(G_{I}^{0}\) samples to \(\operatorname{AvgPool}\)'s input, so its output size remains the same. Now, if the input data is \(\log(I)^{m}\), where the exponentiation is computed pixel-wise, one can estimate each pixel's log-moment of order \(m\) via \(\operatorname{AvgPool}(\log(I)^{m})\). Each channel in \(M\) is computed by applying this procedure to all \(m\in\{1,2,\cdots,N_{m}\}\).
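As a minimal sketch of step (1) (ours, not the authors' implementation): the same stride-1 box averages can be computed with an integral image; for simplicity we pad with edge values here instead of \(G_I^0\) samples:

```python
import numpy as np

def local_log_moments(I, k, n_m):
    """Per-pixel log-moments of orders 1..n_m over k x k windows (k odd),
    i.e. AvgPool(log(I)**m) with stride 1 and same-size output."""
    pad = k // 2
    h, w = I.shape
    P = np.pad(np.log(I), pad, mode='edge')   # paper pads with G_I^0 samples
    M = np.empty((n_m, h, w))
    for m in range(1, n_m + 1):
        Pm = P ** m
        # Integral image: each k x k box sum in O(1).
        S = np.pad(Pm, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
        M[m - 1] = (S[k:k + h, k:k + w] - S[:h, k:k + w]
                    - S[k:k + h, :w] + S[:h, :w]) / (k * k)
    return M

# On a constant image I = e, log(I) = 1, so every log-moment equals 1.
M = local_log_moments(np.full((5, 5), np.e), 3, 2)
```

In a GPU pipeline the same computation is exactly a stride-1 `AvgPool` applied to the channels \(\log(I)^{m}\).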
For step (2), we can use the connection between MLPs and \(1\times 1\) convolutional networks described in Section II-B and turn the MLP architecture used in Algorithm 1 into a Fully Convolutional Neural Network (FCNN). In that network, the inputs of its convolutions will be the transversal tubes of \(M\), which correspond to the sample log-moments of each pixel in \(I\). During training, its output will be a matrix \(A\in\mathbb{R}^{w\times h}\), such that \(A[i,j]=\alpha,\forall i,j\), where \(\alpha\) is the roughness parameter that generated \(I\) along with \(\gamma=-\alpha-1\) and a predetermined \(L\). Algorithm 2 explains this training algorithm in more detail and is analogous to Algorithm 1 in its execution.
## IV Numerical Experiments and Discussion
### _Data and Algorithmic Setup and Assessment Methodology_
We assess the performance of our proposed methodology in two settings. The first is a synthetic one, where we aim at estimating the roughness parameter from samples of the \(G_{I}^{0}\) distribution. We quantitatively compare the resulting network from Algorithm 1 to the standard estimators for this problem. In our second experiment, we qualitatively evaluate a model trained using Algorithm 2 on a real SAR image.
To sample synthetic SAR data from the \(G_{I}^{0}\) model, we make use of its multiplicative nature and sample \(X\sim\Gamma(1,L)\) and \(Y^{\prime}\sim\Gamma(-\alpha,\gamma)\) to get \(Z=X/Y^{\prime}\sim G_{I}^{0}(\alpha,\gamma,L)\)[2]. In our real-data experiments, we use a \(1500\times 1500\) SAR image acquired by Airborne SAR (AIRSAR) in HH polarization with \(L=1\) in C-band over Lake Superior, near Sault Ste. Marie, MI.
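A sketch of this sampler (ours; we assume the shape/scale convention under which \(X\) has unit mean and \(\gamma=-\alpha-1\) gives \(E[Z]=1\)):

```python
import numpy as np

def sample_g0(alpha, L, size, rng):
    """Draw samples from G_I^0(alpha, gamma, L) with gamma = -alpha - 1
    via the multiplicative model Z = X / Y'."""
    gamma = -alpha - 1.0
    X = rng.gamma(shape=L, scale=1.0 / L, size=size)           # unit-mean speckle
    Y = rng.gamma(shape=-alpha, scale=1.0 / gamma, size=size)  # backscatter term
    return X / Y

Z = sample_g0(alpha=-5.0, L=3, size=200_000, rng=np.random.default_rng(0))
# Under the unit-mean convention the sample mean should be close to 1.
```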
We choose simple and analogous network architectures to be trained in both Algorithms 1 and 2. The MLP in Algorithm 1 consists of two hidden layers, one with eight and the other with four units, followed by \(\tanh()\) activation functions. The FCNN in Algorithm 2 is composed of two \(1\times 1\) convolutional layers with eight and four filters, respectively, followed again by \(\tanh()\) activations. These activations were chosen as they performed better than the traditional ReLU activation. For experimental simplicity, no batch normalization, dropout or data preprocessing were used during training.
Our networks were trained following the TrainNeuralNet() procedure in Algorithms 1 and 2, using the Adaptive Moment Estimation (ADAM) optimizer [17] with a learning rate of 0.001 and a batch size of 32. For Algorithm 1, we set \(\mathcal{S}=\{100,1000,10000\}\) and tested \(N_{m}=2\) and 4; for Algorithm 2, we have \((w,h)=(10,10)\) and \(\mathcal{K}=\{2,5,\ldots,11\}\) and only used \(N_{m}=2\). Both algorithms use \(\mathcal{A}=\{-15,-13.5,\cdots,-1.5\}\), \(R=1000\) datapoints, and \(T=300\) epochs, enough for their training to converge.
We compare our algorithms to the LCUM and MLE results. To optimize both methods, we use SciPy's fsolve, a Python wrapper for an implementation of Powell's dog leg method [18]. We start each optimization at \(\alpha=-1.0001\).
In our synthetic experiments, the quantitative assessment of our methods' estimation performance compares the estimated roughness with the ground-truth values via the Mean Square Error (MSE), as depicted in (2). For each experiment, we also compare the failure rates of each algorithm, where we consider an \(\alpha\) estimate to fail if (1) it is not in the interval \([-15,-1.5]\)[5], or (2) its optimization procedure, in the case of MLE and LCUM, did not converge.
We used a Tesla T4(R) GPU with 16 GB of RAM for our neural network training. The other methods were run on an Intel(R) Xeon(R) CPU at 2.30 GHz with 26 GB of RAM. All methods were implemented in Python 3 and PyTorch3. Training each network took around 30 seconds.
Footnote 3: The code used to generate the results from this paper can be found at [https://github.com/jeovafarias/SAR-Roughens-Estimation-Neural-Nets.git](https://github.com/jeovafarias/SAR-Roughens-Estimation-Neural-Nets.git).
### _Results on Synthetic Data_
We start by evaluating our estimation methods on synthetic samples of size \(s\in\{9,25,49,121,1000\}\), generated using roughness \(\alpha\in\{-1.5,-7,-15\}\) and number of looks \(L\in\{1,3,8\}\). For each setting, we performed a Monte Carlo experiment with 1000 \(G_{I}^{0}\) samples. Figure 1 compares MLE, LCUM, and the networks trained using Algorithm 1 with 2 and 4 log-moments when estimating \(\alpha\) for each sample. Only the samples whose estimation did not fail were used to compute the MSE values. Note that we run our networks on sample sizes that they were not trained on and, despite that, both of them outperform their counterparts in most scenarios. The one trained on fewer log-moments generally performed better, potentially corroborating the premise of LCUM, which also uses only two log-moments in its algorithm. This also means that our neural network-based approach utilizes the information contained in those log-moments more competently than LCUM, especially for lower \(\alpha\) values.
Table I shows the failure rates of each method for various \(L\) values using the same data used in Figure 1. Our networks also outperform both MLE and LCUM in this regard for all depicted scenarios, especially for lower \(L\), where estimation is typically more challenging [2]. Counterintuitively, more failures are detected in our methods as \(L\) increases. Further experimentation is necessary to better understand this phenomenon, and it is left to future work. We also found that estimation with our networks is generally 3 times faster than with the other methods.
| \(L\) | MLE | LCUM | NN (2 log-moments) | NN (4 log-moments) |
| --- | --- | --- | --- | --- |
| 1 | 71.84 | 37.44 | 0.14 | 3.24 |
| 3 | 59.84 | 25.55 | 1.46 | 6.22 |
| 8 | 47.99 | 18.59 | 6.04 | 6.96 |

TABLE I: Failure rates (%) on synthetic data.
Fig. 1: MSE values for the synthetic data. Missing values on graphs correspond to the method failing in all simulations.
### _Results on Real Data_
Figure 2 shows the roughness maps estimated by networks trained according to Algorithm 2, using kernel sizes \(k=\{3,11,20,45\}\) during inference. Note that the \(\alpha\) values in all maps follow their expected interpretation: higher values are related to urban areas, moderate values suggest forest zones, and low values correspond to lake regions [5]. This demonstrates our methodology's ability to correctly estimate the desired pixel roughness, despite the networks having only been trained on purely synthetic data. Furthermore, we see the blurring effect one expects from applying larger kernels in the convolutions. More interesting, however, is the observation that the network is still able to predict an expected map using kernel sizes it was not trained on (such as \(k=\{20,45\}\)). Finally and most importantly, Figure 2 also provides the timings for the computation of each map. Here, we note that we are able to estimate the roughness of each individual pixel of an image as large as \(1500\times 1500\) pixels in less than a tenth of a second for \(k=3\). Processing an image at this rate is usually considered real time [19], meaning that this estimation can be performed as the images are being acquired. As \(k\) increases, the map computation becomes slower, but it is still quickly accomplished for low \(k\).
## V Conclusion
We proposed a neural-network based algorithm for roughness estimation in \(G_{I}^{0}\) modeled SAR data, a task that is increasingly crucial in SAR image understanding. Practically, it consists of using log-moments from the samples as the input of a network that, when trained, outputs the underlying roughness value. This same network can be easily adapted to process SAR images, from moment computation to parameter estimation, leading to quick estimation of pixel-level roughness. We empirically demonstrate that such networks trained on purely synthetic data are able to outperform traditional estimation methods and return reliable roughness maps from high resolution SAR images in real time. Overall, this result shows that one can use cheap synthetic data to train performant networks for SAR imaging tasks, where labeled data acquisition is expensive. SAR data is in fact ideal for this approach, since there are good statistical models for them that allow the generation of synthetic training data, and they are structured such that important operations on them can be implemented on GPUs. Future work will consist of applying similar techniques to other data domains, where these aspects can also be found.
---

arXiv:2309.14822 — Marieme Ngom, Carlo Graziani — published 2023-09-26 — http://arxiv.org/abs/2309.14822v1
###### Abstract
We introduce OS-net (Orbitally Stable neural NETworks), a new family of neural network architectures specifically designed for periodic dynamical data. OS-net is a special case of Neural Ordinary Differential Equations (NODEs) and takes full advantage of adjoint-method-based backpropagation. Utilizing ODE theory, we derive conditions on the network weights to ensure stability of the resulting dynamics. We demonstrate the efficacy of our approach by applying OS-net to discover the dynamics underlying the Rössler and Sprott's systems, two dynamical systems known for their period-doubling attractors and chaotic behavior.
## 1 Introduction
The study of periodic orbits of systems of the form
\[\dot{\mathbf{x}}=f(\mathbf{x}),\ \mathbf{x}(0)=\mathbf{x}_{0},\ \mathbf{x}\in\mathbf{U}\subset\mathbf{R}^ {n} \tag{1.1}\]
is an important area of research within the field of nonlinear dynamics, with applications in both the physical (astronomy, meteorology) and the nonphysical (economics, social psychology) sciences. In particular, periodic orbits play a significant role in chaos theory. In [6], chaotic systems are defined as systems that are sensitive to initial conditions, are topologically transitive (meaning that any region of the phase space can be reached from any other region), and have dense periodic orbits. Notably, chaotic systems contain infinitely many Unstable Periodic Orbits (UPOs), which essentially form a structured framework, or a "skeleton", for chaotic attractors. A periodic orbit is (orbitally) unstable if trajectories that start near the orbit do not remain close to it. Finding and stabilizing UPOs is an interesting and relevant research field with numerous applications, such as the design of lasers [18], the control of seizure activities [22], or the design of control systems for satellites [30]. An important tool when studying the stability of periodic orbits of a given system is the Poincaré or return map, which allows one to study the dynamics of the system in a lower-dimensional subspace. It is well known that the stability of a periodic orbit containing a point \(\mathbf{x}_{0}\) is inherently connected to the stability of \(\mathbf{x}_{0}\) as a fixed point of the corresponding Poincaré map. However, explicitly computing Poincaré maps has proven to be highly challenging and inefficient [29]. With the emergence of data-driven approaches, researchers in [2] proposed a data-driven computation of Poincaré maps using the SINDy method [4]. Subsequently, they leveraged this technique to develop a method for stabilizing UPOs of chaotic systems [3].
As a matter of fact, researchers have been increasingly exploring the intersection of machine learning and differential equations in recent years. For example, Partial Differential Equations (PDEs) inspired neural network architectures have been developed for image classification in [19, 27]. On the other hand, data-driven-based PDE solvers were proposed in [24] while machine learning has been effectively utilized to discover hidden dynamics from data in [17, 4, 21]. One notable example of such intersectional work is Neural Ordinary Differential Equations (NODEs), which were introduced in [5]. NODEs are equivalent to continuous
residual networks that can be viewed as discretized ODEs [9]. This innovative approach has led to several extensions that leverage well-established ODE theory and methods [7; 34; 14; 35; 33; 9] to develop more stable, computationally efficient, and generalizable architectures.
In the present work, we aim at learning dynamics obtained from chaotic systems with a shallow network featuring a single hidden layer, wherein the network's output serves as a solution to the dynamical system
\[\dot{\mathbf{x}}=\mathbf{W}_{d}^{T}\sigma(\mathbf{W}_{e}^{T}\mathbf{x}+\mathbf{b}_{e}),\ \ \mathbf{x}(0)=\mathbf{x}_{0}. \tag{1.2}\]
where \(\mathbf{W}_{e}\) are the input-to-hidden layer weights, \(\mathbf{b}_{e}\) the corresponding bias term, \(\mathbf{W}_{d}\) the hidden-to-output layer weights, and \(\sigma\) the activation function of the hidden layer. The proposed network is a specific case of NODEs and fully utilizes the adjoint-method-based [16] weight update strategy introduced in [5]. Our primary objective is to establish sufficient conditions on the network parameters to ensure that the resulting dynamics are orbitally stable. We base our argument on the finding that the stability of Poincaré maps is equivalent to the stability of the first variational equation associated with the dynamical system under consideration [29]. We then build on the stability results for linear canonical systems presented in [11] to derive a new regularization strategy that depends on the matrix \(\mathbf{J}=\mathbf{W}_{e}^{T}\mathbf{W}_{d}^{T}\) and not on the weight matrices taken independently. We name the constructed network OS-net, for Orbitally Stable neural NETworks.
Since we are dealing with periodic data, the choice of activation function is critical. Indeed, popular activation functions such as the sigmoid or \(\tanh\) functions do not preserve periodicity outside the training region. A natural choice would be sinusoidal activations; however, these lack desirable properties such as monotonicity. Furthermore, they perform poorly [15] during the training phase, because the optimization can stagnate in a local minimum owing to the oscillating nature of sinusoidal functions. In [13], the authors constructed a Fourier neural network (i.e., a neural network that mimics the Fourier decomposition) [23, 36] that uses a \(\sin\) activation, but had to enforce the periodicity in the loss function to ensure that periodicity is conserved outside of the training region. The activation functions \(x+\frac{1}{a}\sin^{2}(ax)\) (called the snake function with frequency \(a\)) and \(x+\sin x\) were proposed in [37] for periodic data and were proven to be particularly well suited to it. As such, we use both these activation functions in this work.
This paper is organized as follows: in section (2) we present the OS-net's architecture and the accompanying new regularization strategy. In section (3) we showcase its performance on simulated data from the chaotic Rossler and Sprott systems and perform an ablation study to assess the contributions of the different parts of OS-net.
## 2 Building OS-net
### Background
In this section, we recall the main results on the stability of periodic orbits of dynamical systems that we will use to build OS-net. We refer readers to the appendices for more details about orbits of dynamical systems.
We consider the system
\[\dot{\mathbf{x}}=f(\mathbf{x}),\ \ \mathbf{x}(0)=\mathbf{x}_{0}. \tag{2.1}\]
and suppose it has a periodic solution \(\phi(t,\mathbf{x}_{0})\) of period \(T\). We denote by \(\gamma(\mathbf{x}_{0})\) the periodic orbit corresponding to \(\phi(t,\mathbf{x}_{0})\). The stability of periodic orbits has been widely studied in the literature. It is, in particular, well known [29, Chapter 12] that the stability of periodic orbits of Equation (2.1) is linked to the stability of its First Variational (FV) problem
\[\dot{\mathbf{y}}=\mathbf{A}(t)\mathbf{y},\ \ \mathbf{A}(t)=d\left(f(\mathbf{x})\right)_{(\phi(t,\mathbf{x}_{0}))}\ \ \text{and}\ \mathbf{A}(t+T)=\mathbf{A}(t). \tag{2.2}\]
which is obtained by taking the gradient of Equation (2.1) with respect to \(\mathbf{x}\) at \(\phi(t,\mathbf{x}_{0})\). As such, the first variational problem describes the dynamics of the variation \(\mathbf{y}=d_{\mathbf{x}}\phi(t,\mathbf{x}_{0})\) and is a linear system, as the matrix \(\mathbf{A}(t)\) does not depend on \(\mathbf{y}\).
To assess the stability of OS-net, we investigate the first variational equation associated with Equation (1.2). OS-net's FV is given by
\[\dot{\mathbf{y}}=\mathbf{W}_{d}^{T}diag\left(\sigma^{{}^{\prime}}\left(\mathbf{W}_{e}^{T} \phi(t,\mathbf{x}_{0})+\mathbf{b}_{e}\right)\right)\mathbf{W}_{e}^{T}\mathbf{y},\]
and if we make the change of variables \(\mathbf{z}=W_{e}^{T}\mathbf{y}\), this equation becomes
\[\dot{\mathbf{z}}=\mathbf{J}\mathbf{H}(t)\mathbf{z}, \tag{2.3}\]
where \(J=\mathbf{W}_{e}^{T}\mathbf{W}_{d}^{T}\) and \(\mathbf{H}(t)=diag\left(\sigma^{{}^{\prime}}\left(\mathbf{W}_{e}^{T}\phi(t,\mathbf{x}_{0} )+\mathbf{b}_{e}\right)\right)\) is periodic. This formulation can be seen as a generalization of linear canonical systems with periodic coefficients
\[\dot{\mathbf{y}}=\lambda\mathbf{J}_{m}\mathbf{H}(t)\mathbf{y} \tag{2.4}\]
where
\[\mathbf{J}_{m}=\begin{pmatrix}0&\mathbf{I}_{m}\\ -\mathbf{I}_{m}&0\end{pmatrix},\mathbf{I}_{m}\text{ is the identity matrix of size }m,\]
\(\mathbf{H}\) is a periodic matrix-valued function and \(\lambda\in\mathrm{R}\). Stability of such systems was extensively studied in [12] and in particular in [11]. We recall the main definitions and results from [11] and build upon these to derive stability conditions for OS-net. In particular, we give the definition of stability zones for Equation (2.4) and provide the main stability results we will base our study on.
**Definition 1**.: _A point \(\lambda=\lambda_{0}\)\((-\infty<\lambda_{0}<\infty)\) is called a \(\lambda\)-point of stability of Equation (2.4) if for \(\lambda=\lambda_{0}\) all solutions of Equation (2.4) are bounded on the entire t-axis. If, in addition, for \(\lambda=\lambda_{0}\) all solutions for any equation_
\[\dot{\mathbf{y}}=\lambda\mathbf{J}_{m}\mathbf{H}_{1}(t)\mathbf{y}\]
_with a periodic symmetric matrix-valued function \(H_{1}(t)=H_{1}(t+T)\) sufficiently close to \(H(t)\) are bounded, then \(\lambda_{0}\) is a \(\lambda\)-point of strong stability of Equation (2.4)._
The set of \(\lambda\)-points of strong stability of Equation (2.4) is an open set that decomposes into a system of disjoint open intervals called \(\lambda\)-zones of stability of Equation (2.4). If a zone of stability contains the point \(\lambda=0\), then it is called a central zone of stability.
**Definition 2**.: _We say that Equation (2.4) is of positive type if_
\[\mathbf{H}\in\mathrm{P}_{n}(T)=\{\mathbf{A}(t)\text{ symmetric }s.t\text{ }\mathbf{A}(t) \geq 0\text{ }(0\leq t\leq T)\text{ and }\int_{0}^{T}\mathbf{A}(t)dt>0\}.\]
\(\mathbf{A}(t)\geq 0\) _means \(\forall x\in\mathbf{R}^{n},\langle\mathbf{A}(t)\mathbf{x},\text{ }\mathbf{x}\rangle\geq 0\) and \(\int_{0}^{T}\mathbf{A}(t)dt>0\) means \(\int_{0}^{T}\langle\mathbf{H}(t)\mathbf{x},\text{ }\mathbf{x}\rangle dt>0\)_
**Definition 3**.: _Let \(\mathbf{A}\) be a square matrix with non-negative elements. We denote by \(\mathcal{M}(\mathbf{A})\) the least positive eigenvalue among its eigenvalues of largest modulus. Note that Perron's theorem (1907) guarantees the existence of \(\mathcal{M}(\mathbf{A})\)[10]._
We can now state the main result we will derive our regularization from:
**Theorem 1** ([11] section 7, criterion \(I_{n}\)).: _A real \(\lambda\) belongs to the central zone of stability of an Equation (2.4) of positive type, if_
\[|\lambda|<2\mathcal{M}^{-1}(\mathbf{C})\]
_where \(\mathbf{C}=(\mathbf{J}_{m})_{a}\int_{0}^{T}\mathbf{H}_{a}(t)\,dt\). If \(K\) is a matrix, \(K_{a}\) is the matrix obtained by replacing the elements of \(K\) by their absolute values._
The proof of this theorem is recalled in Appendix (B).
### Architecture and stability of OS-net
To base the stability of OS-net on the stability theory for systems of type Equation (2.4), we need the matrix-valued function \(\mathbf{H}(t)\) and the matrix \(\mathbf{J}\) in Equation (2.3) to be, respectively, of positive type and skew-symmetric. To ensure \(\mathbf{H}(t)\) is of positive type, it is sufficient to use activation functions that are increasing, since they have positive derivatives, and diagonal matrices with positive elements are of positive type. Fortunately, many common activation functions (\(\tanh\) or the sigmoid) have that property. In this paper, we use the strictly monotonic activation functions \(x+\sin(x)\) and the snake function \(x+\frac{1}{a}\sin^{2}(ax),\ a\in\mathbf{R}\), displayed in Figure (2.1). These activation functions were proved to be able to learn and extrapolate periodic functions in [37].
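Both activations are nondecreasing with derivative uniformly bounded by \(L=2\) (differentiating gives \(1+\cos x\) and \(1+\sin(2ax)\), respectively); a quick numerical check of ours:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 100_001)
a = 0.2

d_sin = 1.0 + np.cos(x)            # derivative of x + sin(x)
d_snake = 1.0 + np.sin(2 * a * x)  # derivative of x + (1/a) sin^2(a x)

# Nonnegative derivatives -> monotone activations -> H(t) of positive type;
# both derivatives are uniformly bounded by L = 2.
assert d_sin.min() >= 0.0 and d_snake.min() >= 0.0
assert d_sin.max() <= 2.0 and d_snake.max() <= 2.0
```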
Let us now pay attention to the matrix \(\mathbf{J}=\mathbf{W}_{e}^{T}\mathbf{W}_{d}^{T}\). To ensure \(J\) is skew-symmetric, we introduce the matrices \(\mathbf{W}\in\mathrm{M}_{n,2m}(\mathbf{R})\) and \(\mathbf{K}\in\mathrm{M}_{2m}(\mathbf{R})\) where \(n\) is the input size (i.e the size of \(x\)) and \(2m\) the size of the hidden layer. We then set \(\mathbf{W}_{e}^{T}=\mathbf{W}\), \(\mathbf{W}_{d}^{T}=\mathbf{\Omega}\mathbf{W}^{T}\), and \(\mathbf{\Omega}=\mathbf{K}-\mathbf{K}^{T}\). Note that the size of the hidden layer which is the size of \(\Omega\) needs to be even. Otherwise, \(\mathbf{\Omega}\) would be a singular matrix. The elements of the matrices \(\mathbf{W}\) and \(\mathbf{K}\) are the hyperparameters of the network that will be optimized during training. Now, knowing that any real skew-symmetric matrix \(\mathbf{J}\) is congruent to \(\mathbf{J}_{m}\)[32], there exists a real invertible matrix \(\mathbf{S}\) such that
\[\mathbf{J}_{m}=\mathbf{S}^{T}\mathbf{J}\mathbf{S}\]
and Equation (2.3) is equivalent to Equation (2.4). In fact, let \(\mathbf{u}=\mathbf{S}^{T}\mathbf{z}\) in Equation (2.3), we obtain
\[\dot{\mathbf{u}}=\lambda\mathbf{J}_{m}\mathbf{S}^{-1}\mathbf{H}(t)(\mathbf{S}^{-1})^{T}\mathbf{u}= \lambda\mathbf{J}_{m}\tilde{\mathbf{H}}(t)\mathbf{u}\]
where \(\tilde{\mathbf{H}}(t)=\mathbf{S}^{-1}\mathbf{H}(t)(\mathbf{S}^{-1})^{T}\in\mathrm{P}_{n}(T)\). We can now apply Theorem (1) to OS-net and state that OS-net is stable if
\[1<2\mathcal{M}^{-1}\left(\mathbf{J}_{a}\int_{0}^{T}\mathbf{H}(t)dt\right). \tag{2.5}\]
Note that since \(\mathbf{H}(t)\) is a diagonal matrix with positive elements, \(\mathbf{H}_{a}(t)=\mathbf{H}(t)\). We can now prove the following result that will justify our regularization strategy:
**Corollary 1.1**.: _Suppose the activation function \(\sigma\) is strictly increasing with a uniformly bounded derivative. Then, OS-net is stable if_
\[||\mathbf{J}_{a}||_{2}<\frac{2}{LT} \tag{2.6}\]
_where \(L\) is the superior bound of the derivative of the activation function._
Proof.: Let \(\mu\) be any eigenvalue of \(\mathbf{J}_{a}\int_{0}^{T}\mathbf{H}(t)dt\); then \(|\mu|\leq\left\|\mathbf{J}_{a}\int_{0}^{T}\mathbf{H}(t)dt\right\|_{2}\). Since the spectral norm is submultiplicative (i.e., \(||AB||_{2}\leq||A||_{2}||B||_{2}\)), we obtain
\[|\mu|\leq\|\mathbf{J}_{a}\|_{2}\left\|\int_{0}^{T}\mathbf{H}(t)dt\right\|_{2}\]
which leads to
\[\mathcal{M}\left(\mathbf{J}_{a}\int_{0}^{T}\mathbf{H}(t)dt\right)\leq||\mathbf{J}_{a}||_{2 }\left\|\int_{0}^{T}\mathbf{H}(t)dt\right\|_{2}\]
If \(L\) is the superior bound of the derivative of the activation function, then, since \(\mathbf{H}(t)\) is a diagonal matrix, we have \(\mathcal{M}\left(\mathbf{J}_{a}\int_{0}^{T}\mathbf{H}(t)dt\right)\leq LT||\mathbf{J}_{a}|| _{2}\). Therefore, OS-net is stable if \(1<\frac{2}{LT}||\mathbf{J}_{a}||_{2}^{-1}\) i.e \(||\mathbf{J}_{a}||_{2}<\frac{2}{LT}\).
All in all our minimization problem becomes
\[L(\mathbf{x}_{o},g(f(\mathbf{x}_{0})))=||g(f(\mathbf{x}_{0}))-\mathbf{x}_{0}||_{2}^{2}\ \ \text{s.t.}\ \ ||\mathbf{J}_{a}||_{2}<\frac{2}{LT} \tag{2.7}\]
and this formulation is equivalent [1] to
\[L(\mathbf{x}_{0},g(f(\mathbf{x}_{0})))=||g(f(\mathbf{x}_{0}))-\mathbf{x}_{0}||_{2}^{2}+\alpha|| \mathbf{J}_{a}||_{2}^{2} \tag{2.8}\]
where \(\alpha\in\mathbf{R}\) can be fine-tuned using cross-validation. We thus have derived a new regularization strategy that stabilizes the network. By controlling the norm of \(J_{a}=|\mathbf{W}_{e}^{T}\mathbf{W}_{d}^{T}|\), we ensure solutions of Equation (2.3) and consequently periodic orbits of Equation (1.2) are stable. We validate these claims in the next section by running a battery of tests on simulated data from dynamical systems known for their chaotic behavior.
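The construction above is easy to check numerically; the sketch below (ours, with illustrative shapes and an assumed period \(T\)) verifies that \(\mathbf{J}=\mathbf{W}\mathbf{\Omega}\mathbf{W}^{T}\) is skew-symmetric and evaluates the regularization bound of Corollary 1.1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 16                       # input size, half the hidden width
W = rng.normal(size=(n, 2 * m), scale=0.1)
K = rng.normal(size=(2 * m, 2 * m), scale=0.1)

Omega = K - K.T                    # skew-symmetric by construction
J = W @ Omega @ W.T                # J = W_e^T W_d^T with the OS-net weights
assert np.allclose(J.T, -J)        # J is skew-symmetric

J_a = np.abs(J)                    # element-wise absolute values
norm_Ja = np.linalg.norm(J_a, 2)   # spectral norm used in the regularizer

L_sig = 2.0                        # derivative bound of x + sin(x)
T = 6.0                            # period of the target orbit (assumed)
stable = norm_Ja < 2.0 / (L_sig * T)
```

During training, the penalty \(\alpha||\mathbf{J}_{a}||_{2}^{2}\) in Equation (2.8) pushes `norm_Ja` toward this bound.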
Figure 2.1: Left: Activation functions \(x+\sin(x)\) and \(x+\frac{1}{0.2}\sin^{2}(0.2x)\). Right: Derivatives of the activation functions
## 3 Numerical results
In this section, we showcase the learning capabilities and stability of OS-net on different regimes of the Rössler [20] and Sprott [25] systems. In all of the following experiments, the data was generated using MATLAB's ode45 solver. We take snapshots at different time intervals to obtain the data used to train OS-net.
We used the \(LBFGS\) optimizer with a learning rate \(lr=1.0\) and the strong Wolfe [31] line search algorithm for all the experiments. Our code uses PyTorch, and all the tests were performed on a single GPU1 using Argonne Leadership Computing Facility (ALCF)'s Theta/ThetaGPU [8].
Footnote 1: We base our code on the neural ODE implementation in [28].
### _The Rössler system_
As in [3], we consider the Rössler system
\[\dot{x}=-y-z \tag{3.1}\]
\[\dot{y}=x+0.1y \tag{3.2}\]
\[\dot{z}=0.1+z(x-c) \tag{3.3}\]
where \(c\in\mathbf{R}\). Rössler introduced this system as an example of a simple chaotic system with a single nonlinear term (\(zx\)). As \(c\) increases, this system displays period-doubling bifurcations leading to chaotic behavior. Here, we consider the values \(c=6\) and \(c=18\).
#### 3.1.1 c = 6, period-2 attractor
First, we set \(c=6\) and initialize the trajectory at \([x_{0},y_{0},z_{0}]=[0,-9.1238,0]\). In this regime, the Rössler system possesses a period-2 attractor [3].
We generate the training data by solving the Rössler system using MATLAB's ode45 solver with a time step of \(0.001\) from \(t=0\) to \(t=10\). We then take snapshots of the simulated data every 50th step and feed them to OS-net. We build OS-net using the Runge-Kutta 4 (RK4) algorithm with a time step of \(0.005\). We chose the snake activation function \(x+\frac{1}{0.2}\sin^{2}(0.2x)\) and set the number of nodes in the hidden layer to \(2\times 16\). We set \(\alpha=0.07\) in Equation (2.8) and use \(10\) epochs.
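The data-generation step can be reproduced with a plain RK4 integrator; the sketch below is ours (the paper uses MATLAB's ode45 for the reference trajectories):

```python
import numpy as np

def rossler(s, c=6.0):
    """Right-hand side of the Rossler system, Eqs. (3.1)-(3.3)."""
    x, y, z = s
    return np.array([-y - z, x + 0.1 * y, 0.1 + z * (x - c)])

def rk4(f, s0, dt, n_steps):
    """Classic fourth-order Runge-Kutta, the scheme OS-net is built on."""
    traj = np.empty((n_steps + 1, len(s0)))
    traj[0] = s = np.asarray(s0, dtype=float)
    for i in range(n_steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i + 1] = s
    return traj

# t in [0, 10] with step 0.001, then every 50th state as a training snapshot.
traj = rk4(rossler, [0.0, -9.1238, 0.0], 0.001, 10_000)
snapshots = traj[::50]
```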
Figure (3.1) (left) shows the training output for the \(y\) component. OS-net was able to learn the dynamics accurately by the end of training. The norm of \(J_{a}\) is approximately \(0.99\) after training as recorded in Table (3.1). In this case, Inequality (1.1) is not strictly enforced but the norm of the matrix \(J_{a}\) is controlled enough so
Figure 3.1: Left: Training data in black along with the learned dynamics in red in the training time interval \([0,10]\). Right: Target dynamics in gray along with the data generated by OS-net on the time interval \([0,100]\).
that OS-net renders stable orbits. The elements of the matrix \(\Omega\) are concentrated in \([-0.7,\ 0.7]\) as shown in Figure 3.3.
We validate OS-net by propagating a trajectory initialized at \([x_{0},y_{0},z_{0}]=[0,-9.1238+0.01,0]\) using the learned dynamics. Figure (3.1) (right) shows the prediction using OS-net up to \(t=100\) and displays the accuracy of this prediction when compared to the correct dynamics. We assess the stability of OS-net by propagating the trajectory to \(t=10000\). OS-net converges to a stable period-1 attractor while the Rössler system converges to a period-2 one, as shown in Figure (3.2).
\begin{table}
\begin{tabular}{l l l} \hline System & Attractor type & \(||J_{a}||\) \\ \hline Rössler, \(c=6\) & Period-2 & 0.9937 \\ Rössler, \(c=18\) & Chaotic & 0.6318 \\ Sprott, \(\nu=2.1\) & Period-2 & 0.0085 \\ \hline \end{tabular}
\end{table}
Table 3.1: Norm of \(J_{a}\)
Figure 3.2: Left: Rössler’s period-2 attractor. Right: stable OS-net period-1 attractor
**Ablation study.** We compare OS-net with a network obtained by keeping the same architecture and settings as in Section (3.1.1) but with the regularization in Equation (2.8) switched off. The left side of Figure (3.4) shows that training was successful, while the right side shows that the dynamics learned by the unregularized network diverge from the true dynamics in the time interval \([0,10]\). This shows the role of the regularization term in stabilizing the dynamics learned by OS-net.
#### 3.1.2 c = 18, chaotic behavior
We now set \(c=18\) and initialize the trajectory at \([x_{0},y_{0},z_{0}]=[0,-22.9049,0]\). The Rossler system displays a chaotic behavior in this regime. We generate the training data as before but take snapshots every \(10\) steps. For OS-net, we use RK4 with a step size of \(0.005\) and \(x+\sin(x)\) as an activation function. The hidden layer size is \(2\times 32\) and the penalty coefficient \(\alpha=2\).
Figure (3.5) shows the training output and confirms the ability of OS-net to learn the target dynamics. We then use the learned dynamics to generate a trajectory starting at \([x_{0},y_{0},z_{0}]=[0,-22.9049+0.01,0]\). Since we are dealing with a chaotic system, the learned dynamics should not be expected to reproduce the training data [2]. Figure (3.5) shows that OS-net was able to track the chaotic system up to \(t\approx 30\). The norm of the matrix \(J_{a}\) was approximately \(0.6318\) at the end of training as recorded in Table (3.1). Furthermore, the elements of the matrix \(\Omega\) were concentrated between \(-0.5\) and \(+0.5\). Figure (3.6) displays the chaotic Rössler system and the stable attractor obtained by propagating OS-net's learned dynamics from \(t=0\) to \(t=10000\).
### 3.2 Simplest quadratic (Sprott's) chaotic flow
We consider the following system
\[\dot{x} =y \tag{3.4}\] \[\dot{y} =z\] \[\dot{z} =-\nu z-x+y^{2}\]
where \(\nu\in\mathbf{R}\). This system was introduced in [25] and also has period doubling bifurcations as \(\nu\) varies. Here we set \(\nu=2.1\) which yields a period-2 attractor for Equation (3.4).
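As with the Rössler system, the right-hand side of Equation (3.4) translates directly to code; a minimal sketch (function name hypothetical):

```python
def sprott(state, nu=2.1):
    # Right-hand side of Sprott's simplest quadratic chaotic flow, Eq. (3.4)
    x, y, z = state
    return (y, z, -nu * z - x + y * y)
```

This can be fed to the same fixed-step integrator used for the Rössler data to reproduce the training trajectories described below.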
We initialize the trajectory at \([x_{0},y_{0},z_{0}]=[5.7043,0.0,-2.12778]\) and solve the system using ode45 on the time interval \([0,15]\) with a step size of \(0.001\). We then take snapshots every \(10\) steps and use the data for
Figure 3.4: Left: Training data in black along with the learned dynamics in red in the training time interval \([0,10]\). Right: Target dynamics in gray along with the data generated by OS-net on the time interval \([0,100]\).
training. OS-net is solved using RK4 with a step size of \(0.01\) and \(x+\frac{1}{0.3}\sin^{2}(0.3x)\) as an activation function. The hidden layer has \(2\times 16\) nodes and the penalty coefficient \(\alpha=1\).
We show in Figure (3.7) (left) the dynamics learned by OS-net for the \(y\) component after \(20\) epochs. Figure (3.7) (right) also shows how well OS-net tracks the original system in the interval \(t=0\) to \(t=100\). In this case, the norm of the matrix \(J_{a}\) was approximately \(8\times 10^{-3}\) and the elements of the matrix \(\Omega\) are in the interval \([-0.7,\ 0.7]\). Inequality (1.1) is strictly enforced here. We then assess the stability of the learned dynamics by generating a trajectory starting at \([x_{0},y_{0},z_{0}]=[5.7043+0.01,0.0,-2.12778]\) and evolving it from \(t=0\) to \(t=10000\). Figure (3.8) shows the period-2 attractor of the original system and the stable period-1 OS-net orbit.
**Note.** The current implementation of OS-net uses the adjoint method presented in [5] which accumulates numerical errors when integrating backward. We circumvent that by using RK4 with a small step size. This
Figure 3.5: Left: Training data in black along with the learned dynamics in red in the training time interval \([0,10]\). Right: Target dynamics in gray along with the data generated by OS-net on the time interval \([0,100]\).
Figure 3.6: Left: Chaotic Rössler attractor. Right: Stable period-1 OS-net attractor
results in a computationally expensive implementation that can be improved using the methods proposed in [14, 35, 34] that we plan on incorporating into OS-net in the future.
## 4 Conclusion
We have presented a new family of stable neural network architectures for periodic dynamical data. The proposed architecture is a particular case of NODES with dynamics represented by a shallow neural network. We leveraged well-grounded ode theory to propose a new regularization scheme that controls the norm of the product of the weight matrices of the network. We have validated our theory by learning the Rossler and Sprott's systems in different regimes including a chaotic one. In all the regimes considered, OS-net was able to track the exact dynamics and converge to a stable period-1 attractor. That indicates that OS-net is a promising network architecture that can handle highly complex dynamical systems. In the future, we aim at controlling the parameters of the systems of interest by incorporating them into the state vectors that OS-net
Figure 3.7: Left: Training data in black along with the learned dynamics in red in the training time interval \([0,15]\). Right: Target dynamics in gray along with the data generated by OS-net on the time interval \([0,100]\).
Figure 3.8: Left: Sprott’s period-2 attractor. Right: stable period-1 OS-net attractor
aims at learning. Additionally, we plan on using OS-net to learn and monitor the orbits of celestial objects that have short orbital periods such as certain exoplanets or three-body systems like Mars-Phobos. This extension of OS-net's applications holds great potential in providing a broader range of stable periodic orbits for the design of spatial missions.
## Acknowledgments
This work was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program and the FASTMath Institute under Contract No. DE-AC02-06CH11357 at Argonne National Laboratory. Government License. The submitted manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory ("Argonne"). Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. The U.S. Government retains for itself, and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan. [http://energy.gov/downloads/doe-public-access-plan](http://energy.gov/downloads/doe-public-access-plan).
|
2309.08760 | Biased Attention: Do Vision Transformers Amplify Gender Bias More than Convolutional Neural Networks? | Abhishek Mandal, Susan Leavy, Suzanne Little | 2023-09-15T20:59:12Z | http://arxiv.org/abs/2309.08760v1 |

Biased Attention: Do Vision Transformers Amplify Gender Bias More than Convolutional Neural Networks?
###### Abstract
Deep neural networks used in computer vision have been shown to exhibit many social biases such as gender bias. Vision Transformers (ViTs) have become increasingly popular in computer vision applications, outperforming Convolutional Neural Networks (CNNs) in many tasks such as image classification. However, given that research on mitigating bias in computer vision has primarily focused on CNNs, it is important to evaluate the effect of a different network architecture on the potential for bias amplification. In this paper we therefore introduce a novel metric to measure bias in architectures, Accuracy Difference. We examine bias amplification when models belonging to these two architectures are used as a part of large multimodal models, evaluating the different image encoders of Contrastive Language Image Pretraining which is an important model used in many generative models such as DALL-E and Stable Diffusion. Our experiments demonstrate that architecture can play a role in amplifying social biases due to the different techniques employed by the models for feature extraction and embedding as well as their different learning properties. This research found that ViTs amplified gender bias to a greater extent than CNNs. The code for this paper is available at: [https://github.com/aibhishek/Biased-Attention](https://github.com/aibhishek/Biased-Attention)
## 1 Introduction
Vision Transformers (ViT), derived from Transformers in Natural Language Processing, have increasingly become important as they outperform Convolutional Neural Networks in many application domains [1, 2, 3]. Unlike Convolutional Neural Networks (CNN), which rely on a sequence of convolution operations extracting information from visual data, ViTs split an image into patches and process them with a multi-headed self-attention mechanism, giving them global attention over the whole input.

A related line of work detects bias in vision models using tests based on the Implicit Association Test. Such a test estimates human-like biases in vision models by measuring the association between two sets of concepts: two attributes and a target in the model's embeddings. The attributes in the case of gender can be man and woman, and the target can be a real-world concept like occupation. Thus, if a particular occupation (e.g. CEO) is closer to man than woman in a model's embedding space, then the model is biased.
**Contrastive Language Image Pretraining (CLIP)** is a large multimodal model developed by OpenAI, trained on 400 million image-text pairs crawled from the Internet []. It connects images with text, is trained using a contrastive loss, and is used in other popular generative models such as DALL-E and Stable Diffusion []. CLIP uses a text encoder and an image encoder, with the option of CNNs (ResNet 50, 50x4, and 101) or ViTs (ViT B/16 and B/32). This enables us to study the effect of bias in these two architectures from a multimodal perspective. Although CLIP has been shown to exhibit social biases [], [], the effect of image encoder architecture on bias is yet to be studied.
## 3 Measuring Bias
### Accuracy Difference
Consider a multiclass, class-balanced visual dataset \(\mathcal{D}\) containing instances \((X_{i},Y_{i},g_{i})\), where \(X_{i}\) is an image with class label \(Y_{i}\) and a protected attribute \(g_{i}\) denoting gender, \(g_{i}\in\{m,w\}\) (\(m\): men, \(w\): women). Let \(\mathcal{D}_{balanced}\subset\mathcal{D}\), with \(f(g_{i}(m=w))\), be a dataset that is both class-balanced and gender-balanced, meaning every class has an equal gender ratio. Let \(\mathcal{D}_{imbalanced}\subset\mathcal{D}\), with \(f(g_{i}(m>w\lor m<w))\), be a dataset which is class-balanced but gender-imbalanced. Let \(\mathcal{D}_{test}\subset\mathcal{D}\) be a class- and gender-balanced test dataset. The generalisation error (misclassification rate) of a classifier trained on \(\mathcal{D}\) and tested on \(\mathcal{D}_{test}\) can be estimated as:
\[E=\frac{1}{N}\sum_{i=1}^{N}\mathbb{1}\left(y_{i}\neq\hat{y_{i}}\right)\qquad \cdots eq(1)\]
where \(\mathbb{1}(.)\) is the indicator function, \(N\) is the number of samples in the dataset, and \(\hat{y_{i}}\) is the predicted class label. The generalisation error (misclassification rate) can also be given as:
\[E=bias+variance+ unavoidable\ error\qquad\cdots eq(2)\]
If we neglect the unavoidable error and express bias and variance in terms of \(g_{i}\), then \(g_{i}\) can be used as a proxy for \(E\). As the accuracy of the classifier on \(\mathcal{D}_{test}\) can be expressed as \(1-E\), then from \(eq(1)\) and \(eq(2)\), accuracy can be used as a proxy for the bias due to \(g_{i}\). Let image classifier \(M_{unbiased}\) be trained on \(\mathcal{D}_{balanced}\) and \(M_{biased}\) on \(\mathcal{D}_{imbalanced}\), having accuracies \(A_{unbiased}\) and \(A_{biased}\) on \(\mathcal{D}_{test}\), respectively.
Then we define accuracy difference (\(\Delta\)) as:
\[\Delta=\left|A_{unbiased}-A_{biased}\right|\qquad\cdots eq(3)\]
If the effect of gender bias on a classifier is minimal, then \(M_{biased}\) will perform very similarly to \(M_{unbiased}\) on the gender-balanced \(\mathcal{D}_{test}\), and \(\Delta\) will be very small. However, if the effect of gender bias on the classifier is significant, then the performances of the models will differ and \(\Delta\) will be high. The higher the value of \(\Delta\), the greater the effect of the bias.
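The metric reduces to a one-line computation once the two accuracies are known; a minimal sketch of eq(1) and eq(3) (the label lists and accuracy values below are made up for illustration, not the paper's results):

```python
def error_rate(y_true, y_pred):
    # eq(1): fraction of misclassified samples
    assert len(y_true) == len(y_pred)
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def accuracy_difference(acc_unbiased, acc_biased):
    # eq(3): Delta = |A_unbiased - A_biased|
    return abs(acc_unbiased - acc_biased)
```

Because it only needs two test-set accuracies, the metric also works on black-box models where internal embeddings are inaccessible.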
### Image-Image Association Score (IIAS)
The authors of IIAS [] used CLIP embeddings to calculate IIAS. We adapted the metric by replacing the CLIP embeddings with the image features extracted by the classifier model. In the case of CNNs, it was the output of the final pre-fully connected layer and in the case of ViTs, the final pre-MLP layer. We then used cosine distance to measure similarity. For two images \(I_{1}\) and \(I_{2}\), with extracted features \(\nu_{1}\) and \(\nu_{2}\) respectively, we calculate image similarity as:
\[sim(I_{1},I_{2})=\frac{\nu_{1}\cdot\nu_{2}}{||\nu_{1}||_{2}\cdot||\nu_{2}||_{2}}\quad \ldots eq(4)\]
\[sim(I_{1},I_{2})\in[0,1]\]
Then we calculate IIAS in the same way as the authors. Let \(A\) and \(B\) be two sets of images containing images of men and women, respectively called gender attributes. Let \(W\) be a set of images containing images corresponding to a real-world concept such as occupation, called target. Then the Image-Image Association Score, IIAS, is given by:
\[IIAS=mean_{w\in W}s(w,A,B)\quad\ldots eq(5)\]
where,
\[s(w,A,B)=mean_{a\in A}sim(\vec{w},\vec{a})-mean_{b\in B}sim(\vec{w},\vec{b}) \qquad[from\quad eq(4)]\]
\[IIAS\in[-1,1]\]
If IIAS is positive, then the target is closer to men showing a male bias and if IIAS is negative, then the target is closer to women, showing a female bias. The numeric value indicates the magnitude of the bias.
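Equations (4) and (5) can be sketched directly; the two-dimensional feature vectors in the usage notes below are purely illustrative stand-ins for the features extracted by the classifier models:

```python
import math

def cosine_sim(u, v):
    # eq(4): cosine similarity between two feature vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def iias(targets, attrs_a, attrs_b):
    # eq(5): mean over targets of (mean similarity to A) - (mean similarity to B)
    def s(w):
        mean_a = sum(cosine_sim(w, a) for a in attrs_a) / len(attrs_a)
        mean_b = sum(cosine_sim(w, b) for b in attrs_b) / len(attrs_b)
        return mean_a - mean_b
    return sum(s(w) for w in targets) / len(targets)
```

For example, a target vector identical to every attribute in \(A\) and orthogonal to every attribute in \(B\) scores \(+1\), while a target equidistant from both sets scores \(0\).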
## 4 Experiment
The experiments are divided into two parts. In the first part, we measure the effect of gender bias on eight sets of image classifiers belonging to CNNs and ViTs, using Accuracy Difference and IIAS. In the second part, we analyse the zero-shot predictions of CLIP using four different image encoders belonging to CNNs and ViTs.
### Bias Analytics using Image Classifiers
We selected four CNN models: VGG16, ResNet152, Inceptionv3, and Xception, and four ViT models: ViT B/16, B/32, L/16, and L/32. All the models were pre-trained on the Imagenet dataset. We used the feature-extracting layers of the models and added customised dense layers to all the models. Then, the models were fine-tuned and tested on our custom dataset containing about 10k images. In order to ensure controlled variables, we limited our study to simpler models such as the original ViTs and older CNNs. This allowed us to isolate the bias comparison solely to the architecture and not have any influence from complex additions.
#### 4.1.1 The Dataset
We created a custom visual dataset to measure gender bias by crawling images using Google Search using the Selenium library1 for occupation-related query terms 'CEO', 'Engineer', 'Nurse', and 'School Teacher'. The occupation categories 'CEO' and 'Engineer' are traditionally male-dominated and 'Nurse' and 'School Teacher' are female-dominated []. Two sets of training data were created: gender-balanced and imbalanced. In the balanced dataset, all categories have a 50:50 split of images of men and women. In the imbalanced dataset, the gender ratio of the classes was split in a male:female ratio of 9:1 for 'CEO' and 'Engineer' and 1:9 for 'Nurse' and 'School Teacher', as per existing workforce bias. The queried images did show gender bias as per previous research [] and the gender ratio was adjusted in order to achieve uniformity. The test dataset was also gender balanced. The image filtering to achieve the necessary gender ratios was done manually. The train dataset consists of 7,200 images: 3,600 images for balanced and imbalanced datasets with each containing 900 images for each category. The test dataset consists of 1,200 images: 300 images for each category with 150 images for each gender. The validation sets for both the biased and unbiased training were split from the balanced and imbalanced datasets manually, keeping the gender ratios intact. A separate dataset containing images of men and women was queried using the terms'man' and 'woman' for the IIAS assessment.
Footnote 1: [https://www.selenium.dev/](https://www.selenium.dev/)
#### 4.1.2 Measuring Accuracy Difference
The models were partially retrained (fine-tuned) on the balanced and imbalanced datasets, creating a total of 80 models: (4 CNNs & 4 ViTs) x 2 (biased & unbiased) x 5 iterations. The training methodology for the CNNs is as follows. First, the feature-extracting layers were frozen and the custom dense layers warmed up for 50 epochs. Then the last two convolution blocks were unfrozen and the model was trained for a further 50 epochs with a smaller learning rate and with early stopping parameters with patience set to 10 iterations. For the ViTs, first the feature extracting layers were kept frozen and the models trained for 100 epochs with early stopping parameters with patience set to 10 iterations. Then the entire model was unfrozen and trained for 50 epochs with a very small learning rate with early stopping patience set to 5. The Accuracy Difference was calculated for all the models as explained in section 3.1 and as per eq(3).
#### 4.1.3 Measuring IIAS
The fine-tuned biased and unbiased models (from the previous experiment; section 4.1.2) were saved and their classification layers were removed for this part and the models were used as feature extractors on two sets of target images. The first set is the test dataset used for the previous part and for the second set, we blacked out (masked) the faces in the images as the most important feature for determining gender. Two sets of five images of men and women each were used for each part (masked and unmasked) as targets (Table 1). Ten images of men and women each were used as gender attributes (Figure 1). Then, the biased and unbiased model feature extracting layers were used to calculate IIAS as per eq (5). The experiment was repeated five times and the images for the attributes and the targets were chosen randomly without repeating. It is important to note that only the last layers of the CNN based feature extractors were retrained on our dataset, but as the training data for all
the models are the same, it gives us an estimate of how bias is handled differently by the different model families.
### Bias Analytics using CLIP
To further understand the effect of gender bias on model architecture, four different types of CLIP image encoders were used: CNNs ResNet 50 and 50x4 and ViTs ViT B/16 and B/32. A list of 100 occupation terms was created based on official lists and CLIP's zero-shot predictions used to predict labels for images of men and women (full list of terms is provided in Appendix A). The image dataset is the same as that used for attributes in the IIAS experiment. The top predictions for men and women were then analysed to study the differences in the effect of gender bias on CNNs and ViTs.
\begin{table}
(Example target images, masked and unmasked, for the categories CEO, Engineer, Nurse, and School Teacher; the images themselves are not reproduced here.)
\end{table}
Table 1: Target images
## 5 Findings and Discussions
### Accuracy Difference
We found the Accuracy Difference for ViTs to be significantly higher than for CNNs. The figures in Table 2 show \(\Delta\) to be 54% higher and % \(\Delta\) to be 123% higher for ViTs. This means the effect of gender bias is higher on the ViTs. This may be explained by the fact that ViTs have global attention, which enables them to pick up more visual cues, allowing them to deduce gender from multiple visual features. We also see variation in \(\Delta\) among the CNNs: ResNet 152 has the highest \(\Delta\) and % \(\Delta\). This may be due to ResNet 152 having a larger receptive field [].
### IIAS
The results of the IIAS experiment showed similar results to those in the previous experiment with ViTs showing higher bias than CNNs as shown in Table 3. The scores show stereotypical bias in occupations with 'CEO' and 'Engineer' having a positive score indicating male bias and 'Nurse' and 'School Teacher' showing female bias as indicated by a negative score. This is similar to the results shown in previous research []. For the masked images, we see a 23% higher IIAS for the biased ViT models but an 80% higher IIAS for the unbiased CNN models. In the case of the unmasked images, the ViTs had a higher IIAS for both the biased and unbiased models, 111% and 176% respectively. Ideally, as there is an equal number of images of men and women in the target sets, the values should be zero or very close. In the case of masked images, where the face is hidden, the models may learn gender from other features such as the dress worn []. ViTs with their global attention may amplify bias due to this as seen from Table 3. An interesting observation is that for masked images, the unbiased CNNs show a higher bias than the ViTs. This may be due to convolutions being a high-pass filter amplifying high-frequency signals [] and the absence of the low-frequency signals in the face affecting its performance. Another reason may simply be that the CNNs are unable to localize their focus as faces generally have a higher saliency. We are, however, not fully sure of what might cause this.
### Analysis of CLIP Zero-shot Predictions
The predictions using CLIP zero-shot (Table 4) reveal the presence of gender bias in the model with the top three predictions for men being stereotypically male-dominated occupa
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Image Encoder** & **Man Occurrence** & **Top 3 Predictions** & **Woman Occurrence** & **Top 3 Predictions** \\ \hline RN 50 & 47 & mathematician, psychiatrist, youtuber & 49 & beautician, student, housekeeper \\ \hline RN 50x4 & 46 & investment banker, economist, coach & 56 & housekeeper, jewellery maker, midwife \\ \hline ViT B/16 & 50 & coach, psychiatrist, administrator & & midwife, beautician, jewellery maker \\ \hline ViT B/32 & 45 & chief executive officer, musician, hairdresser & 63 & beautician, housekeeper, jewellery maker \\ \hline
**CNN** & 46.5 & & 52.5 & \\
**ViT** & 48 (3.3 \% \(\uparrow\)) & & 59 (12.53 \% \(\uparrow\)) & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Top 3 predictions for images of men and women using CLIP. The occurrence values show the percentage of all predictions accounted for by the top 3 predictions. (\(\uparrow\)) indicates a higher concentration of biased predictions, i.e. higher bias in percentage, and is given in red.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline \hline
**Encoder Type** & **Image Encoder** & \multicolumn{2}{l|}{**Skewness**} \\ \hline & & **Man** & **Woman** \\ \hline CNN & RN 50 & 2.27 & 3.6 \\ \cline{2-4} & RN 50x4 & 2.06 & 3.84 \\ \hline ViT & ViT-B/16 & 2.54 & 3.75 \\ \cline{2-4} & ViT-B/32 & 2.73 & 4.26 \\ \hline Model Average & CNN & 2.16 & 3.7 \\ \cline{2-4} & ViT & 2.63 (21.7\% \(\uparrow\)) & 4 (8\% \(\uparrow\)) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Skewness in CLIP’s predictions using different image encoders. (\(\uparrow\)) indicates a higher skewness of biased predictions, i.e. higher bias in percentage, and is given in red.
tions such as 'chief executive officer', 'economist', and 'investment banker', whereas those for women are stereotypically female-dominated such as 'beautician', 'housekeeper', and 'jewellery maker' []. The predictions are highly skewed, with these biased predictions making up nearly half of all predictions. The skewness is higher when ViTs are used as image encoders, showing a higher bias. The skewness metrics given in Table 5 also show higher skewness for ViT encoders. Although the higher bias in CLIP's ViT encoder models shows a similar pattern to our classifier experiments, the effect is less pronounced. This may be due to the debiasing done in CLIP [].
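Skewness over the per-occupation prediction counts can be computed with, e.g., the Fisher-Pearson moment coefficient; the exact estimator used in the paper is not specified, so the sketch below (with made-up counts, not the paper's data) is only illustrative:

```python
def skewness(xs):
    # Fisher-Pearson moment coefficient of skewness: m3 / m2^(3/2)
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n  # third central moment
    return m3 / m2 ** 1.5
```

A prediction distribution where a few occupations dominate (as in Table 4) yields a strongly positive skewness, which is the pattern Table 5 reports.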
## 6 Conclusion and Future Work
In our experiments, we found evidence that the model architecture affects the amplification of social biases and show that vision transformers amplify gender bias more than convolutional neural networks. We attribute this to two features of vision transformers: 1) a shallower loss landscape leading to better generalisation and 2) global attention and a larger receptive field due to the multi-headed self-attention mechanism that enables vision transformers to capture more visual cues and long-term dependencies. Both these properties of vision transformers allow them to learn contextual information and generalise better than convolutional neural networks and learn complex concepts. But this inadvertently enables ViTs to learn social concepts such as gender. Therefore, when the training data is gender biased, the ViTs learn biased associations better than CNNs.
This paper also introduces _Accuracy Difference_, a metric for social bias in both CNNs and ViTs. It may be used for estimating and comparing bias in many different types of models with different architectures. It is simple, easy to understand and implement and can work on black box models such as closed-sourced models and APIs. We further adapted the _Image-Image Association Score_ for detecting bias in image classifiers and evaluated the effect of architecture choice in image encoders of a large multimodal model, CLIP. With the prevalence of large multimodal models and their wide applications, the potential for inadvertent amplification of biases is of particular concern and requires further consideration beyond gender in a binary sense and also to include other forms of social bias (geographic, racial, etc).
### Future Work
This research can help understand the effect of model architecture on social biases and assist developers in making informed choices about selecting vision models. One such case is CLIP, as discussed earlier. Accuracy difference can be used for bias analytics for different architectures. ViTs have been shown to outperform CNNs in many applications [], leading to widespread adoption. However, if, as this research suggests, they amplify bias to a greater extent, this aspect needs to be understood and considered as part of the adoption of ViTs.
## 7 Acknowledgements
Abhishek Mandal was partially supported by the \(<\)A+\(>\) Alliance / Women at the Table as an Inaugural Tech Fellow 2020/2021. This publication has emanated from research supported
by Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289_2, cofunded by the European Regional Development Fund.
We would like to thank Dr. Alessandra Mileo for her input in this paper.
## 8 Appendix A
**List of Occupations**
accountant, administrator, architect, artist, athlete, attendant, auctioneer, author, baker, beautician, blacksmith, broker, business analyst, carpenter, cashier, chef, chemist, chief executive officer, cleaner, clergy, clerk, coach, collector, conductor, construction worker, counsellor, customer service executive, dancer, dentist, designer, digital content creator, doctor, driver, economist, electrician, engineer, farmer, filmmaker, firefighter, fitter, food server, gardener, geologist, guard, hairdresser, handyman, housekeeper, inspector, instructor, investment banker, jewellery maker, journalist, judge, laborer, lawyer, librarian, lifeguard, machine operator, manager, mathematician, mechanic, midwife, musician, nurse, official, operator, painter, photographer, physician, physicist, pilot, plumber, police, porter, postmaster, product owner, professor, programmer, psychiatrist, psychologist, retail assistant, sailor, salesperson, scientist, secretary, sheriff, soldier, statistician, student, supervisor, supply chain associate, support worker, surgeon, surveyor, tailor, teacher, trainer, warehouse operative, welder, youtuber
**Sources:** Garg et al. [1], BBC Careers 2, LinkedIn 34, Australian Occupation List 5 and Canadian Occupation List6.
Footnote 2: [https://www.bbc.co.uk/bithesize/articles/zdqnxyc](https://www.bbc.co.uk/bithesize/articles/zdqnxyc)
Footnote 3: [https://business.linkedin.com/talent-solutions/resources/talent-acquisition/jobs-on-the-rise-nl-en-cont-fact](https://business.linkedin.com/talent-solutions/resources/talent-acquisition/jobs-on-the-rise-nl-en-cont-fact) accessed: 19-04-2023
Footnote 4: [https://business.linkedin.com/content/dam/me/business/en-us/talent-solutions/emerging-jobs-report/Emerging_Jobs_Report_U.S._FINAL.pdf](https://business.linkedin.com/content/dam/me/business/en-us/talent-solutions/emerging-jobs-report/Emerging_Jobs_Report_U.S._FINAL.pdf) accessed: 19-04-2023
Footnote 5: [https://imimni.homeaffairs.gov.au/visas/working-in-australia/skill-occupation-list](https://imimni.homeaffairs.gov.au/visas/working-in-australia/skill-occupation-list) accessed: 19-04-2023
Footnote 6: [https://www.canada.ca/en/immigration-refugees-citizenship/services/immigrate-canada/express-entry/eligibility/find-national-occupation-code.html](https://www.canada.ca/en/immigration-refugees-citizenship/services/immigrate-canada/express-entry/eligibility/find-national-occupation-code.html) accessed: 19-04-2023

---
**arXiv:2309.12937** | Evolving Spiking Neural Networks to Mimic PID Control for Autonomous Blimps | Tim Burgers, Stein Stroobants, Guido de Croon | 2023-09-22 | http://arxiv.org/abs/2309.12937v1

# Evolving Spiking Neural Networks to Mimic PID Control for Autonomous Blimps
###### Abstract
In recent years, Artificial Neural Networks (ANN) have become a standard in robotic control. However, a significant drawback of large-scale ANNs is their increased power consumption. This becomes a critical concern when designing autonomous aerial vehicles, given the stringent constraints on power and weight. Especially in the case of blimps, known for their extended endurance, power-efficient control methods are essential. Spiking neural networks (SNN) can provide a solution, facilitating energy-efficient and asynchronous event-driven processing. In this paper, we have evolved SNNs for accurate altitude control of a non-neutrally buoyant indoor blimp, relying solely on onboard sensing and processing power. The blimp's altitude tracking performance significantly improved compared to prior research, showing reduced oscillations and a minimal steady-state error. The parameters of the SNNs were optimized via an evolutionary algorithm, using a Proportional-Derivative-Integral (PID) controller as the target signal. We developed two complementary SNN controllers while examining various hidden layer structures. The first controller responds swiftly to control errors, mitigating overshooting and oscillations, while the second minimizes steady-state errors due to non-neutral buoyancy-induced drift. Despite the blimp's drivetrain limitations, our SNN controllers ensured stable altitude control, employing only 160 spiking neurons.
This material is based upon work supported by the Air Force Office of Scientific Research under award number FA8655-20-1-7044.
## I Introduction
Throughout history, humans have been fascinated by how animals gracefully and precisely navigate complex environments. This fascination has inspired efforts to understand the brain's computational processes behind such behavior, leading to the development of Artificial Neural Networks (ANN). These ANNs represent the neural processes using simplified mathematical models. Their ability to approximate complex non-linear functions makes them highly effective in controlling complex systems, such as quadrotors [1, 2]. However, as the size of ANNs grows, both response latency and computational demands increase. The latter poses particular challenges for robotic applications with restricted onboard energy capacity, such as flying robots. A solution may be found in the information transmission methods employed by biological brains. ANNs rely on continuous-valued signals, whereas the biological brain employs sparse spatial-temporal "spike" signals--brief, rapid increases in neuron voltage--for data encoding and transmission.
There are neural networks that use this spike-based approach to transmitting information, called a Spiking Neural Network (SNN) [3]. These more biologically plausible neural networks show potential for energy-efficient and low-latency controllers [4]. This fact was demonstrated in a study by Vitale et al. [5], where an SNN controller in a fully neuromorphic control loop outperformed a conventional control loop on power consumption and control latency when tracking the roll angle of a bench-fixed 1 DoF birotor. However, the application of SNN controllers in robotics is still in its early stages of development. One of the biggest challenges is the availability of suitable training algorithms. The SNN's temporal dynamics, sparse spiking activity, and non-differentiable spike signal make most existing ANN training algorithms unsuitable [6]. Recent research has enabled the use of error backpropagation for SNNs by means of surrogate-gradient algorithms [7]. Nevertheless, training SNNs using these gradient-based algorithms is still difficult due to the susceptibility to local minima, exploding/vanishing gradient, and sensitivity to initial conditions [8].
Due to the increased training complexity of SNNs compared to non-spiking counterparts, practical applications of SNNs in the robotic control domain are still limited. Designing a fully neuromorphic SNN controller to emulate low-level controllers, such as the Proportional-Integral-Derivative (PID) control, remains a complex task. Recent research showed SNNs performing differentiation and integration within the network, by manually configuring neuron connections and weights [9, 10, 11]. In these studies, the controllers were implemented on Intel's Loihi neuromorphic processor, showcasing the promise of neuromorphic hardware by exhibiting very low latencies [12]. In Zaidel et al. [13], multiple populations with pre-determined network parameters were used to implement all three pathways of the PID controller to control a 6 Degrees of Freedom (DoF) robotic arm. The integral pathway was implemented using a fully recurrent population of neurons, while differentiation was achieved by using a slow and fast time constant for two populations. Although the integral controller succeeded in reducing the steady-state error, it was unable to completely eliminate it. In another study, spiking neurons were trained to replace the rate controller of a tiny quadrotor, the Crazyflie [14]. Integration in the SNN controller was achieved through discretized Input-Weighted Threshold Adaptation (IWTA), where the threshold depends on the previous layer's spiking activity. The training process had limitations because it relied partially on predetermined connections and grouped network parameters.
Fig. 1: The proposed SNN altitude controller for the blimp, where [\(t_{0}\),\(t_{1}\),\(t_{2}\)] indicates time instances of the blimp’s altitude while approaching a setpoint, marked by the dashed line.
In this work, we investigate the different mechanisms, recurrency and IWTA, used in prior SNN PID research that enabled differentiation and/or integration. For each mechanism, we evolve an SNN to control the altitude of a real-world indoor blimp. The blimp is an interesting test platform for the SNN controller, allowing validation of all components of the PID controller. The changing buoyancy over time requires a good integrator to be present. Moreover, due to the high delays and slow system dynamics of a blimp, a high proportional gain is necessary in the reference PID controller. This reduces the blimp's rise time, requiring a strong derivative in the controller's output to prevent overshoot and oscillations. In Gonzalez et al. [15], an open-source indoor blimp was designed and used as a test vehicle for an evolved neuromorphic altitude controller, showing adequate tracking of the reference signal. However, even after including an additional non-spiking PD controller to the output of the SNN controller, there were still oscillations present of approximately \(\pm\) 0.3m. Additionally, the SNN was only trained on a neutrally buoyant blimp. Slight changes in the buoyancy of the blimp would cause a steady state error, which the controller was unable to eliminate.
We build further on this research, presenting here the following contributions: 1) We developed a fully neuromorphic height controller for a blimp (visualized in Figure 1), using an evolved SNN of only 160 neurons that is able to minimize the overshoot and oscillations while also removing the steady-state error caused by the buoyancy of the blimp. 2) We analyze the individual and combined influence of recurrent connections and IWTA on the performance of the SNN controller. 3) We made improvements to the hardware components of the open-source blimp, improving the onboard computational power and increasing the accuracy of the height measurements.
## II Methodology
The SNN controller consists of three layers of neurons, where all parameters are optimized using an evolutionary algorithm to mirror the output of a tuned PID controller. The Proportional-Derivative (PD) controller's rapid dynamics demand fast time constants, while the integral controller relies on slower dynamics and, thus, slow time constants. To facilitate the learning process, we split the controller into two separate parts based on the required time constants to model each component. After completing the training process, the evolved controllers are used to control the altitude of a helium-filled blimp. Detailed discussions on the SNN's structure, parameters, experiment setup, and the evolutionary training algorithm used in this study follow below.
### _Spiking Neural Network Controller_
We use current-based Leaky-Integrate and Fire (LIF) neurons with a soft reset for the threshold (\(\vartheta\)). The discretized equations that describe the dynamics of the three states of the neuron (synaptic current \(i(t)\), membrane potential \(v(t)\) and spike train \(s(t)\)) are described as follows:
\[i_{i}(t)=\tau_{i}^{\text{syn}}i_{i}(t-1)+\sum_{j}W_{ij}s_{j}(t)\tag{1}\]
\[v_{i}(t)=\tau_{i}^{\text{mem}}v_{i}(t-1)+i_{i}(t)-s_{i}(t-1)\vartheta_{i}\tag{2}\]
\[s_{i}(t)=H\left(v_{i}(t)-\vartheta_{i}\right)\tag{3}\]
where subscript \(i\) and \(j\) denote the post- and presynaptic neuron respectively. The discretized time constants, known as the decay parameters, of the synapses and the membrane potential are respectively referred to as \(\tau^{syn}\) and \(\tau^{mem}\). The spiking behavior of a neuron is modeled using the Heaviside step function \(H\), which outputs a spike when the membrane potential exceeds the threshold.
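The discrete update of Eqs. (1)–(3) can be sketched as a single NumPy step function. This is a minimal illustration of the neuron model, not the authors' code; variable names are ours.

```python
import numpy as np

def lif_step(i_syn, v, s_prev, spikes_in, W, tau_syn, tau_mem, theta):
    """One discrete update of current-based LIF neurons with a soft reset.

    i_syn, v, s_prev : state vectors of the postsynaptic layer
    spikes_in        : 0/1 spike vector of the presynaptic layer
    W                : weight matrix (post x pre)
    """
    i_syn = tau_syn * i_syn + W @ spikes_in       # Eq. (1): leaky synaptic current
    v = tau_mem * v + i_syn - s_prev * theta      # Eq. (2): membrane leak + soft reset
    s = (v >= theta).astype(float)                # Eq. (3): Heaviside spike condition
    return i_syn, v, s
```

Note the soft reset: instead of clamping the membrane potential to zero after a spike, the threshold value is subtracted on the next step, preserving any excess potential.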
The basic structure of a spiking neural network controller is schematically depicted in Figure 2. The input weights, indicated by \(W^{e}\), \(W^{h}\) and \(W^{d}\), are linked to the encoding, hidden and decoding layers, respectively. The encoding layer is responsible for translating the floating-point input into a sequence of spikes, and conversely, the decoding layer performs the reverse operation.
Fig. 2: The basic structure of the SNN controller. The encoding layer has an additional bias (\(b\)) added to the input current.
The input to the network is the error of the controlled state. After applying the encoding weights and biases, the error signal is directly used as the synaptic current for the encoding neuron (\(\tau^{syn}\) = 0). To facilitate the training of the encoding layer, we paired neurons with shared bias and flipped weight sign. This results in symmetric encoding, ensuring similar spiking patterns for both positive and negative errors. The effect that the input weight and bias have on the spiking behavior of a LIF neuron is presented in Figure 3. The encoding layer incorporates a bias to achieve spike activity for the encoding of error values close to zero, which would otherwise be impossible.
We study the effect of using different types of structures for the hidden layer of the SNN controller in this work. The most basic type of hidden layer (LIF) is depicted in Figure 2. In addition to the basic structure, we evaluated the influence of recurrency [13, 16] and Input-Weighted Threshold Adaptation (IWTA) [14], as both these network structures demonstrated their essential role in enabling integration within the SNN. We focused solely on threshold adaptation linked to the incoming spiking activity rather than the hidden neuron's activity itself. An overview of the different hidden layer structures is shown in Figure 4. In contrast to the original implementation in [14], the threshold for the IWTA-LIF neurons is modeled using a decay term (\(\tau^{th}\)), where the threshold converges back to a base value (\(\vartheta\)), after an increase/decrease (\(W^{th}\)) caused by an incoming spike. This method of implementing IWTA adds new dynamics to the threshold and increases the solution space.
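The IWTA threshold dynamics described above — a decay back toward a base value, plus input-weighted jumps on incoming spikes — admit a simple discretization. The exact update rule is not spelled out in the text, so the following is one plausible sketch under those assumptions:

```python
import numpy as np

def iwta_threshold_step(theta, theta_base, tau_th, W_th, spikes_in):
    """Adaptive threshold for IWTA-LIF neurons: exponentially decay toward
    the base value, then apply input-weighted jumps (W_th may be +/-)."""
    theta = theta_base + tau_th * (theta - theta_base)  # decay toward base threshold
    theta = theta + W_th @ spikes_in                    # input-weighted adaptation
    return theta
```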
The spiking neural network controller is decoded using a single leaky-integrator neuron, that calculates the exponential moving average of the spikes in the hidden layer.
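The decoding neuron amounts to an exponentially weighted moving average of the weighted hidden-layer spikes; a minimal sketch (decay and weights illustrative):

```python
import numpy as np

def decode_step(y, tau_dec, W_dec, spikes_hidden):
    """Leaky-integrator decoder: exponential moving average of weighted spikes."""
    return tau_dec * y + W_dec @ spikes_hidden
```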
### _Real-World Experiment_
To validate the performance of the SNN controller on a real-world application, we implemented an SNN altitude controller for an open-source micro-blimp developed in [15]. The combination of the buoyancy-caused drift and slow system dynamics make the blimp a useful test vehicle for this research. The input of the SNN height controller is the difference between a reference altitude and the onboard lidar measurement, which makes it a fully on-board closed-loop control system. The output of the SNN is the motor command, \(u\in[-3.3,3.3]\), where \(u<0\) indicates downward and \(u>0\) upwards movement. The blimp's control system consists of two coreless direct current (DC) motors attached to a 180-degree rotating shaft, which enable the rotors to push the blimp both up- and downwards. A visual representation of the blimp and its hardware components is provided in Figure 5. Two improvements have been made to the blimp's hardware setup. Firstly, to improve the altitude tracking ability, we implemented a LiDAR sensor, the TFmini S LiDAR-module, which significantly increased the accuracy from \(\pm\) 20cm to \(\pm\) 1cm. Secondly, the Raspberry Pi Zero 2 W, running on Ubuntu 20.04, replaced its predecessor to increase the processing power and prevent software compatibility issues. The total weight of all hardware components attached to the blimp is 140g. In order to ensure smooth communication between all system components, we have used the Robot Operating System (ROS1) framework. The control loop runs at a rate of 10 Hz.
Fig. 4: Visualization of the different hidden layer structures. The two left and right circles represent the encoding and the hidden neurons respectively.
Fig. 5: The open-source micro-blimp used for the real-world experiments [15]. Two adjustments made on the blimp are: A) TFmini S LiDAR-module B) Raspberry Pi Zero 2 W.
Fig. 3: Influence of the input weight and bias on the spiking frequency of a LIF neuron.
### _Evolutionary Training Algorithm_
The training process for the SNN controller employs an evolutionary algorithm, consisting of two recurring steps: population creation and evaluation, with each cycle representing one generation of the evolution. In the first step, a set number of individuals forms the population, each representing a unique controller with slightly varying parameters. The second step ranks these individuals based on performance via a cost function, and this information guides the creation of the new population. Further details for each step are discussed below.
Every training loop was executed on the DelftBlue supercomputer, running on 20 cores for 50,000 generations with 50 individuals [17]. To prevent overfitting, we randomly sampled one of the 100 training datasets to evaluate each generation. Additionally, both the size and the frequency of the step inputs used in each dataset were varied to enhance the diversity of the training data. We conducted a total of 30 training loops for each combination of controller type (2) and hidden layer structure (4), resulting in a total of 240 training sessions. Each controller is limited to 80 neurons to showcase the potential of small-scale networks for neuromorphic control.
#### II-C1 Population Creation
For the creation of a new population, we have used a Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES) [18]. CMA-ES is a distribution-based optimization algorithm that iteratively adjusts the mean and covariance matrix of a multi-variate Gaussian distribution, from which all network parameters of each individual are sampled.
The CMA-ES is implemented using the _Evotorch_ Python library [19]. Every network parameter that needs to be evolved is initialized using the mean and standard deviation of a Gaussian distribution. Strict-bounded parameters, such as the decay constants in the neuron model, are constrained by rejecting and resampling. The initial mean is set by sampling from a uniform distribution within parameter bounds, while the standard deviation is set to 1/10th of the parameter's range. An overview of all trained parameters including the bounds is provided in Table I, where \(W^{r}\) represents the recurrent weights.
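The reject-and-resample step used for strict-bounded parameters can be sketched as follows. For clarity this assumes a diagonal Gaussian rather than the full CMA-ES covariance, and all names are illustrative:

```python
import numpy as np

def sample_bounded(mean, std, low, high, rng):
    """Sample parameters from a diagonal Gaussian, resampling any draw
    that falls outside its strict bounds (reject-and-resample)."""
    mean, std = np.asarray(mean, float), np.asarray(std, float)
    x = rng.normal(mean, std)
    bad = (x < low) | (x > high)
    while bad.any():
        x[bad] = rng.normal(mean[bad], std[bad])  # resample only violating entries
        bad = (x < low) | (x > high)
    return x
```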
#### II-C2 Population Evaluation
The performance of each individual SNN controller is determined by comparing the output of the SNN controller (\(u\)) to the output of a tuned PD or integral controller (\(\hat{u}\)). The SNN is tasked to learn the mapping between the input signal (\(e\)) to the output of the target controller (\(\hat{u}\)). The discretized equation that describes the PID response is provided below:
\[\hat{u}_{k}=\underbrace{K_{p}e_{k}+K_{d}\frac{e_{k}-e_{k-1}}{T}}_{\text{PD controller}}+\underbrace{K_{i}\sum_{j=0}^{k}Te_{j}}_{\text{Integral controller}} \tag{4}\]
where \(K_{p}\), \(K_{i}\), and \(K_{d}\) refer to the proportional, integral and derivative gains respectively and \(T\) is denoted by the sampling period. Both the input error signal and the target controller response are recorded in a dataset.
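Equation (4) can be applied directly to a recorded error trace to generate the target signal for training. A minimal sketch (gains and the zero-initialized derivative are our assumptions):

```python
import numpy as np

def pid_targets(errors, Kp, Ki, Kd, T):
    """Discrete PID response (Eq. 4) over a recorded error signal."""
    u, integral, e_prev = [], 0.0, errors[0]  # derivative starts at zero
    for e in errors:
        integral += T * e                               # rectangular integration
        u.append(Kp * e + Kd * (e - e_prev) / T + Ki * integral)
        e_prev = e
    return np.array(u)
```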
To quantify the performance of the SNN compared to the target controller, the Mean Absolute Error (MAE) was used as the main term in the cost function. The MAE was used instead of the mean square error (MSE) to prevent over-penalization of the error that is created during the transient response to a step input, because the latter led to more oscillations in the steady state. Additionally, the cost function was augmented with the Pearson Correlation Coefficient (PCC) [20], denoted by \(\rho\), to encourage the outputs of the SNN and the PD/I controllers to share the same sign. The PCC measures the linear correlation between two signals, ranging from -1 to 1, where the latter indicates a fully linear relation. Since the fitness function is a minimization function, the PCC is included by adding \(1-\rho\) to the MAE. This results in the following cost function that was used in the population evaluation step of the evolutionary training process:
\[L(u,\hat{u})=\text{MAE}(u,\hat{u})+(1-\rho(u,\hat{u})) \tag{5}\]
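The cost of Eq. (5) is straightforward to compute with NumPy (a sketch; `np.corrcoef` is one standard way to obtain the Pearson coefficient):

```python
import numpy as np

def fitness(u, u_target):
    """Cost of Eq. (5): mean absolute error plus (1 - Pearson correlation)."""
    u, u_target = np.asarray(u, float), np.asarray(u_target, float)
    mae = np.abs(u - u_target).mean()
    rho = np.corrcoef(u, u_target)[0, 1]  # PCC in [-1, 1]
    return mae + (1.0 - rho)
```

An SNN output that exactly matches the target yields a cost of zero; an anti-correlated output is penalized by up to 2 on top of the MAE.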
### _Dataset Generation_
The methods used to gather the test and training data for each controller are discussed below.
#### II-D1 PD controller
To successfully train the PD SNN controller, we need a broad spectrum of error signals. Therefore, we used a semi-randomly tuned PID controller on the neutrally buoyant real-world blimp. The error signal of these recordings is used as the SNN input data of the dataset. The target signal for the training algorithm is generated by passing the recorded error signal through a PD controller with the tuned gains for the blimp's altitude controller.
#### II-D2 Integral controller
The Integral SNN has to learn to integrate the error within the SNN itself. For this training process, the decay parameter of the decoding neuron is purposely bound to ensure a quick decay (e.g., [0–0.3]) in order to prevent the algorithm from converging to a slow decay. A slow decoding decay parameter would imply that the integration only happens in the decoding neuron, instead of in the hidden layer.
If the buoyancy remains constant throughout the training process of the integral controller, the network might learn to add a bias to counteract the buoyancy. Therefore we must adjust the buoyancy during training to ensure that the network learns to integrate information over time. Instead of recording multiple datasets with varying buoyancy, we decided to train the Integral controller on a model where we could also change
the "buoyancy" within a single dataset to facilitate the learning process.
The blimp was modeled by the double integrator control problem and controlled using a PID controller. The integral gain was set to match that of the tuned real-world blimp, while the PD gains were adjusted to align the model's dynamics approximately with those of the tuned PID-Blimp system. In the double integrator system, the output of the controller, \(u(t)\) is directly proportional to the second derivative of the state plus an additional bias: \(\ddot{x}(t)=u(t)+b\). The bias simulates the level of buoyancy of the blimp. Every time a step input is received, the bias is randomly sampled (\(U(-4,4)\)). Each dataset consists of 5 step inputs maintained for 50s.
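The training plant described here — a double integrator \(\ddot{x}(t)=u(t)+b\) with the bias resampled at each step input — can be simulated with simple Euler integration. A sketch under our own choice of step size:

```python
import numpy as np

def simulate_double_integrator(u_seq, b, T=0.1, x0=0.0, v0=0.0):
    """Euler-integrate x'' = u + b, returning the altitude trace."""
    x, v, xs = x0, v0, []
    for u in u_seq:
        a = u + b       # bias b plays the role of (non-neutral) buoyancy
        v += T * a
        x += T * v
        xs.append(x)
    return np.array(xs)
```

With zero control input and a nonzero bias, the simulated blimp drifts away quadratically, which is exactly the steady-state error the integral controller must learn to cancel.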
## III Results
The first subsection displays the training outcome of both SNN controllers using a test dataset, followed by the assessment of their performance on an actual blimp. The results also contain a performance analysis of various neural mechanisms within the hidden layer.
### _Training of the SNN Controllers_
#### III-A1 PD SNN Controller
The result of the PD training process is provided in Figure 6, showing a single step response from the test dataset. The solid red line represents the SNN's target, while the dashed red line depicts the proportional controller. The P controller is included to visualize the additional effect of the derivative. Initially, all spiking PD controllers show clear influences of the derivative, as they match the target signal. However, after the initial damping of the proportional output, the controllers start to diverge. To prevent overshoot and oscillations, the derivative controller should counteract the proportional controller when the state is approaching the setpoint, which happens around 4 seconds in the Figure. The only controller that counteracts the P controller sufficiently is the LIF SNN. Based on this analysis and the lowest loss value across the entire test dataset, shown in Table II, we opted to use the LIF SNN for the blimp's altitude controller. The larger solution space for the recurrent and IWTA neuron structures makes the search space more complex and leads to local minima.
#### III-A2 Integral SNN Controller
The result of the integral training process is provided in Figure 7, showing the response of the different hidden layer mechanisms to a test dataset. The LIF SNN failed to learn to integrate and is hence excluded from the figure. All three remaining spiking controllers follow the test signal, without a clear standout performer. To see how well the controllers perform in the real world, they are all tested on the indoor blimp.
### _Performance of SNN controlling Real-World Blimp_
To analyze the performance of each controller separately, we first test the PD SNN controller using a neutrally buoyant blimp. Afterward, we added some weight to the blimp to make it negatively buoyant. The negatively buoyant blimp is used first to evaluate the performance of the SNN I controllers, followed by the evaluation of the fully spiking controller.
#### III-B1 PD control of neutrally buoyant blimp
We assess the performance of the PD SNN controller with LIF hidden structure by comparing it to a tuned conventional PD controller. The tracking accuracy is tested using five different step sizes \(\Delta h\)=[1,0.5,0,-0.5,-1], maintained for 50s each and the results are shown in Figure 8. Both the conventional PD and the SNN show small oscillations, \(\pm 6\) cm, around the setpoint. These oscillations are caused by the discretized mapping of the motor command to the actual voltage sent to the motors using PWM. This causes a deadzone to be present in the motor command signal, which is the region of the motor commands, \(u=\)[-0.1,0.1], that does not result in the actuation of the rotors.
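The deadzone just described — motor commands in [-0.1, 0.1] producing no rotor actuation — can be illustrated with a one-line mapping (the threshold is taken from the text; the function name is ours):

```python
def apply_deadzone(u, width=0.1):
    """Commands with |u| <= width produce no rotor actuation."""
    return 0.0 if abs(u) <= width else u
```

Because the controller's small steady-state corrections fall inside this band, the blimp drifts until the error grows large enough to escape it, producing the observed ±6 cm oscillations.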
#### III-B2 Integral control of negatively buoyant blimp
To isolate the effect of the spiking integrator, we used the non-spiking PD controller in combination with the spiking integrator to control the negatively buoyant blimp. The result of two step responses is shown in Figure 9, where the setpoint was changed after 70 seconds. When analyzing the steady-state error, we take the average over the last 10 seconds of each step. The steady-state error of the IWTA-LIF (\(\pm\)2 cm) is significantly smaller than the ones for the R-LIF and R-IWTA-LIF (\(\pm\)5 cm). Despite the small oscillations caused by the LIF SNN, we decided to use this spiking integrator for the full spiking controller due to the minimal steady-state error.
Fig. 6: The moving average of the step responses of all evolved SNN controllers compared to a PD target signal using a test dataset. The LIF shows derivative action by damping the command.
Fig. 7: The moving average of the step responses of three evolved SNN controllers compared to an integral target signal using a test dataset, with a changing bias every step input.
#### III-B3 Full spiking control of negatively buoyant blimp
The combination of both the spiking PD (LIF) and integral (IWTA-LIF) controller is shown in Figure 10. We analyzed the step response for the blimp using different setpoints \(h\)=[0.5,1,1.5]m, maintained for 70s. The combined SNN controller demonstrates effective altitude control while minimizing the steady-state error to \(\pm\)3cm. However, the SNN controller shows relatively large initial oscillations when receiving a downward step input, compared to the upward steps. The oscillations result from the delay introduced when the rotors must make a 180-degree turn during direction changes. Given the blimp's negative buoyancy, continuous upward thrust is essential for stability. In cases of upward step inputs, the rotors constantly push the blimp upwards. Conversely, when a lower setpoint is used, the blimp is initially pushed downwards by the rotor before pushing back up to attenuate the movement. This difference in overshoot is also visible for the PID controller, with an average overshoot of 14cm for upward steps and 20cm for downward steps.
## IV Conclusion
In this work, we evolved a spiking neural network (SNN) that successfully controls the altitude of a non-neutrally buoyant indoor blimp. The SNN parameters were optimized through an evolutionary algorithm, facilitating extensive exploration of the solution space. This exploratory training approach allowed for an in-depth analysis of various hidden-layer configurations, recurrency and Input Weighted Threshold Adaptation (IWTA), for the Leaky-Integrate and Fire (LIF) neuron model. As a result, we developed two complementary SNN controllers which, when combined, achieved accurate tracking of the reference state. The first controller exhibited rapid response to control errors while effectively mitigating overshoot and large oscillations, after being trained on a tuned PD controller for the blimp. In parallel, the second controller was designed to minimize steady-state errors arising from the blimp's non-neutral buoyancy-induced drift. This controller learned to perform integration of the error using IWTA within the hidden layer of the network.
Despite the limitation within the blimp's current drivetrain configuration, the developed SNN controllers have showcased their ability to maintain stable altitude control, employing just 160 spiking neurons. All processing and sensing is performed onboard the blimp, with the SNN running on the Raspberry Pi's CPU. Future research will focus on the completion of the neuromorphic control loop, integrating event-based sensors with neuromorphic processors. This integration aims to fully demonstrate the potential of neuromorphic computing in robotic control.
Fig. 8: Real-world step responses of the altitude of the neutrally buoyant blimp using a conventional PD control and an SNN controller with no recurrency or IWTA in the hidden layer. Each setpoint was tested eight times for every controller and the average is shown by the thick line.
Fig. 10: Real-world multi-step response of the altitude of the negatively buoyant blimp using a non-spiking PID & PD controller and the SNN controller, which is the combination of the SNN PD (LIF) and the SNN integral (IWTA-LIF) controller. The average over five runs is shown for each controller.
Fig. 9: Real-world step responses of the altitude of the negatively buoyant blimp using a conventional PD controller and different spiking integrators. The average over five runs is shown for each controller. |

---
**arXiv:2309.03190** | Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation | Xiaochen Zhu, Vincent Y. F. Tan, Xiaokui Xiao | 2023-09-06 | http://arxiv.org/abs/2309.03190v2

# Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation+
###### Abstract.
Graph neural networks (GNNs) have gained an increasing amount of popularity due to their superior capability in learning node embeddings for various graph inference tasks, but training them can raise privacy concerns. To address this, we propose using link local differential privacy over decentralized nodes, enabling collaboration with an untrusted server to train GNNs without revealing the existence of any link. Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology using Bayesian estimation, alleviating the negative impact of LDP on the accuracy of the trained GNNs. We bound the mean absolute error of the inferred link probabilities against the ground truth graph topology. We then propose two variants of our LDP mechanism complementing each other in different privacy settings, one of which estimates fewer links under lower privacy budgets to avoid false positive link estimates when the uncertainty is high, while the other utilizes more information and performs better given relatively higher privacy budgets. Furthermore, we propose a hybrid variant that combines both strategies and is able to perform better across different privacy budgets. Extensive experiments show that our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
**Keywords:** local differential privacy, graph neural networks

This work is licensed under a Creative Commons Attribution 4.0 International License.
nodes, which data owners are often unwilling to disclose. Moreover, the issue of link LDP in GNNs over decentralized nodes as clients has yet to be sufficiently addressed in the literature, and there is currently a lack of effective mechanisms to balance privacy and utility. (Srivastava et al., 2017) first propose locally differentially private GNNs, but they only protect node features and labels, assuming the server has full access to the graph topology. Current differential privacy techniques for protecting graph topology while training GNNs, such as those described in (Zhou et al., 2018; Zhang et al., 2019; Zhang et al., 2019), are limited by poor performance and are often outperformed by MLPs that are trained without link information at all (which naturally provides full link privacy). This issue with (Zhou et al., 2019) has been investigated in (Zhou et al., 2019), and we also demonstrate similar behaviors of other baselines in this paper. On a separate line of research, there have been recent works on privacy-preserving graph synthesis and analysis with link local privacy guarantees (Zhou et al., 2019; Zhang et al., 2019; Zhang et al., 2019). However, although some of these works do provide valid mechanisms to train GNNs with link LDP protections, these mechanisms are usually designed to estimate aggregate statistics of the graph, such as subgraph counts (Zhou et al., 2019), graph modularities and clustering coefficients (Zhou et al., 2019; Zhang et al., 2019), which are not useful for training GNNs. Hence, these works are not directly applicable to our setting, and we will later show in this paper that they perform poorly in terms of GNN test accuracy. As such, there is a clear need for novel approaches to alleviate the performance loss of GNNs caused by enforcing privacy guarantees and to achieve link privacy with acceptable utility.
**Challenges.** First, local DP is a stronger notion than central DP (CDP), and the magnitude of the required noise grows with the number of nodes, which is an issue for real-world graph datasets where the number of vertices is typically large. Moreover, as shown in Figure 2, removing links leads to a significant drop in GNN performance, indicating that graph topology is crucial for training effective graph neural networks. This is because GNN training is very sensitive to link alterations: every single wrong link leads to the aggregation of information from neighboring nodes that should have been irrelevant. When the server adopts local differential privacy, it only has access to a graph topology that has been perturbed for privacy protection, making it very challenging to train any effective GNN on it. Additionally, conventional LDP mechanisms such as randomized response (Zhou et al., 2019) flip too many bits in the adjacency matrix and render the noisy graph too dense, making it difficult to train any useful GNN. To conclude, it is challenging to alleviate the negative effects of local differential privacy on GNN performance.
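To make the density blowup concrete, here is a back-of-the-envelope calculation (the graph size is an assumed, Cora-like example, not a figure from this paper) of how many spurious links plain randomized response would introduce:

```python
import math

# Illustrative numbers (assumed): a sparse graph roughly the size of Cora.
n, m = 2708, 5278                                 # nodes, undirected edges
for eps in (1.0, 4.0, 8.0):
    p_flip = 1.0 / (1.0 + math.exp(eps))          # RR flip probability
    absent = n * (n - 1) / 2 - m                  # non-edges that could flip to 1
    print(f"eps={eps}: ~{p_flip * absent:,.0f} expected false links vs {m} true ones")
```

Even at a moderate budget, the expected number of false links dwarfs the number of true links, which is why the noisy adjacency matrix cannot be used for training directly.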
**Contribution.** In this paper, we propose Blink (Bayesian estimation for **link** local privacy), a principled mechanism for link local differential privacy in GNNs. Our approach involves separately and independently injecting noise into each node's adjacency list and degree, which guarantees LDP due to the basic composition theorem of differential privacy (Koren, 2017). On the server side, our proposed mechanism uses Bayesian estimation to denoise the received noisy information in order to alleviate the negative effects of local differential privacy on GNN performance.
Receiving the noisy adjacency lists and degrees, the server first uses maximum likelihood estimation (MLE) in the \(\beta\)-model (Chen et al., 2017) to estimate the existence probability of each link based solely on the collected noisy degree sequence. Then, the server uses the estimated link probability as a prior and the noisy adjacency lists as evidence to evaluate posterior link probabilities, where both pieces of information are taken into consideration. We theoretically explain the rationale behind our mechanism and provide an upper bound of the expected absolute error of the estimated link probabilities against the ground truth adjacency matrix. Finally, the posterior link probabilities are used to construct the denoised graph, and we propose three variants of such a construction: hard thresholding, soft thresholding, and a hybrid approach. Hard thresholding ignores links with a small posterior probability; it performs better when the privacy budget is low and uncertainty is high because the lost noisy information would not significantly help with GNN training. The soft variant keeps all the inferred information and uses the posterior link probabilities as edge weights in the GNN model, and performs better than the hard variant when the privacy budget is relatively higher thanks to the extra information. The hybrid approach combines both hard and soft variants and performs well for a wide range of privacy budgets. Extensive experiments demonstrate that all three variants of Blink outperform existing baseline mechanisms in terms of the test accuracy of trained GNNs. The hard and soft variants complement each other at different privacy budgets and the hybrid variant is able to consistently perform well across varying privacy budgets.
**Paper organization.** The rest of this paper is organized as follows. Section 2 introduces preliminaries for GNNs and LDP, and Section 3 formally formulates our problem statement. We describe our proposed solution, Blink, in Section 4 and explain its rationale and properties theoretically. We report and discuss extensive experimental results with all Blink variants and other existing methods in Section 5. In Section 6, we conduct a literature review on related topics and give a brief introduction to relevant prior work. Finally, Section 7 concludes our work and discusses possible future research directions. The appendix includes complete proofs and experimental details.
## 2. Preliminaries
### Graph neural networks
We consider the problem of semi-supervised node classification (Zhou et al., 2019; Zhang et al., 2019; Zhang et al., 2019) on a simple undirected graph \(G=(V,A,X,Y)\).

Figure 2. Test accuracy of GCN (Zhou et al., 2019) and MLP (GCN after removing links) on various graph datasets. Significant performance degeneration caused by removing links indicates the importance of graph topology in GNN training.

Vertex
set \(V=\{v_{i}:i\in\{1,2,\ldots,n\}\}\) is the set of all \(n\) nodes, consisting of labeled and unlabeled nodes. Let \(V_{L},V_{U}\) be the sets of labeled and unlabeled nodes, respectively; then \(V_{L}\cap V_{U}=\emptyset\) and \(V_{L}\cup V_{U}=V\). The adjacency matrix \(A\in\{0,1\}^{n\times n}\) represents all the links in the graph, where \(A_{i,j}=1\) if and only if a link exists between \(v_{i}\) and \(v_{j}\). The feature matrix of the graph is \(X\in\mathbb{R}^{n\times d}\), where \(d\) is the number of features on each node and for each \(i\), row vector \(X_{i}\) is the feature of node \(v_{i}\). Finally, \(Y\in\{0,1\}^{n\times c}\) is the label matrix where \(c\) is the number of classes. In the semi-supervised setting, if vertex \(v_{i}\in V_{L}\), then its label vector \(Y_{i}\) is a one-hot vector, i.e., \(Y_{i}\cdot\vec{1}=1\), where \(\vec{1}\) is an all-ones vector of compatible dimension. Otherwise, when the vertex is unlabeled, i.e., \(v_{i}\in V_{U}\), its label vector \(Y_{i}\) is the zero vector \(\vec{0}\).
A GNN learns high-dimensional representations of all nodes in the graph by aggregating node embeddings of neighboring nodes and mapping the aggregated embedding through a parameterized non-linear transformation. More formally, let \(x_{i}^{(k-1)}\) denote the node embedding of \(v_{i}\) in the \((k-1)\)-th layer. The GNN learns the node embedding of \(v_{i}\) in the \(k\)-th layer by
\[x_{i}^{(k)}=\gamma^{(k)}(\text{Aggregate}(\{x_{j}^{(k-1)}:v_{j}\in\mathcal{N} (v_{i})\})) \tag{1}\]
where \(\mathcal{N}(v_{i})\) is the set of neighboring nodes of \(v_{i}\), Aggregate(\(\cdot\)) is a differentiable, permutation invariant aggregation function such as sum or mean, and \(\gamma(\cdot)\) is a differentiable transformation such as a multi-layer perceptron (MLP). Note that the neighbor set \(\mathcal{N}(v_{i})\) may contain the node \(v_{i}\) itself, depending on the GNN architecture. When initialized, all node embeddings are the node features, i.e., \(x_{i}^{(0)}=X_{i}\) for each \(v_{i}\in V\). At the last layer, the model outputs \(c\)-dimensional vectors followed by a softmax layer to be compared against the ground truth so that the parameters in \(\gamma\) can be updated via back-propagation to minimize a pre-defined loss function such as cross-entropy loss.
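As a concrete illustration of Eq. (1), the sketch below implements a single layer with mean aggregation over \(\mathcal{N}(v_{i})\cup\{v_{i}\}\) and a linear-plus-ReLU map standing in for \(\gamma^{(k)}\); the choice of aggregator and transform here is ours, not prescribed by the text:

```python
import numpy as np

def gnn_layer(A, X, W):
    """One message-passing layer: mean-aggregate neighbor embeddings
    (including self), then apply a linear map and ReLU as gamma."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                      # add self-loops: v_i in N(v_i)
    deg = A_hat.sum(axis=1, keepdims=True)
    H = (A_hat @ X) / deg                      # Aggregate: mean over neighbors
    return np.maximum(H @ W, 0.0)              # gamma: linear map + ReLU

# Toy 3-node path graph with 2-dimensional features
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.array([[1., 0.], [0., 1.], [1., 0.]])
out = gnn_layer(A, X, np.eye(2))
```

Stacking several such layers (with learned weight matrices) yields the node embeddings that the final softmax layer consumes.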
### Local differential privacy
Differential privacy (DP) is the state-of-the-art mathematical framework to quantify and reduce information disclosure about individuals (K
exactly one edge in the graph. Therefore, if a mechanism satisfies \(\varepsilon\)-link LDP as defined in Definition 3.1, the influence of any single link on the released output is bounded and thus the link privacy is preserved.
After sending the privatized adjacency lists \(\mathcal{R}(A_{1}),\ldots,\mathcal{R}(A_{n})\) to the server, we also aim to design a server-side algorithm to denoise the received data, which yields an estimated adjacency matrix \(\hat{A}\). Finally, the server uses \((V,X,Y,\hat{A})\) to train a GNN to perform node classification as described in Equation (1). Additionally, although we assume that the server has access to \(V,X,Y\), it can be seen in Section 4 that our proposed method does not involve the server utilizing node features or labels to denoise the graph topology. Hence, our method is compatible with existing LDP mechanisms that provide protections for node features and labels, such as LPGNN (Zhu et al., 2017), and can serve as a convenient add-on to provide full local differential privacy on \(X,Y,A\).
## 4. Our Approach
To train GNNs with link local differential privacy over decentralized nodes, we propose Blink (**B**ayesian estimation for **link** local privacy), a new framework to inject noise into the graph topology on the client side to preserve privacy and to denoise it on the server side to train better GNN models. The key idea is to independently inject noise into the adjacency matrix and degree sequence such that the degree of each node can be utilized by the server to better denoise the graph structure. More specifically, as shown in Figure 3, the server uses the received noisy degree sequence as the prior and the noisy adjacency matrix as evidence to calculate posterior probabilities for each potential link. We now describe our method in more detail in the following subsections.
### Client-side noise injection
As suggested by previous studies (Zhu et al., 2017; Wang et al., 2018; Wang et al., 2018), simply randomly flipping the bits in adjacency lists will render the noisy graph too dense. Therefore, node degrees and graph density must be encoded in the private messages as well. Our main idea is to independently inject noise into the adjacency list and the degree of a node, and let the server estimate the ground truth adjacency matrix based on the gathered information. Based on this idea, we let the nodes send privatized adjacency lists and their degrees separately to the server, such that degree information can be preserved and utilized by the server to better denoise the graph topology. Specifically, for each node \(v_{i}\), we spend the total privacy budget \(\varepsilon\) separately on adjacency list \(A_{i}\) and its degree \(d_{i}\), controlled by degree privacy parameter \(\delta\in[0,1]\), such that we spend a privacy budget \(\varepsilon_{d}=\delta\varepsilon\) on the degree and the remaining \(\varepsilon_{a}=(1-\delta)\varepsilon\) on the adjacency list. This is possible because of the basic composition theorem of differential privacy (Koren et al., 2016). For the real-valued degree \(d_{i}\), we use the widely-adopted Laplace mechanism (Koren et al., 2016) to inject unbiased noise drawn from Laplace(\(0,1/\varepsilon_{d}\)). And for the bit sequence \(A_{i}\), we use randomized response (Zhu et al., 2017) to randomly flip each bit with probability \(1/(1+\exp(\varepsilon_{a}))\). This procedure is described in Algorithm 1. According to basic composition and the privacy guarantees of the Laplace mechanism and randomized response, we have the following theorem, stating that Algorithm 1 achieves \(\varepsilon\)-link LDP. The detailed proof, together with the proofs of subsequent results, will be included in Appendix A.

**Theorem 4.1**.: _Algorithm 1 satisfies \(\varepsilon\)-link local differential privacy._
```
0:\(A_{i}\in\{0,1\}^{n}\) - adjacency list of \(v_{i}\); \(\varepsilon\) - total privacy budget; \(\delta\) - degree privacy parameter
0:\(\left(\tilde{A}_{i},\tilde{d}_{i}\right)\) - the private adjacency list \(\tilde{A}_{i}\in\{0,1\}^{n}\) and the private degree \(\tilde{d}_{i}\in\mathbb{R}\) of node \(v_{i}\)
1:function LinkDP(\(A_{i},\varepsilon,\delta\))\(\triangleright\) run by node \(v_{i}\)
2:\(\varepsilon_{d}\leftarrow\delta\varepsilon\)
3:\(\varepsilon_{a}\leftarrow(1-\delta)\varepsilon\)
4:for \(j\in\{1,2,\ldots,n\}\) do
5:\(\tilde{A}_{ij}=\begin{cases}A_{ij},&\text{with probability }\exp(\varepsilon_{a})/(1+\exp(\varepsilon_{a}))\\ 1-A_{ij},&\text{with probability }1/(1+\exp(\varepsilon_{a}))\end{cases}\)
6:end for
7:\(d_{i}\leftarrow\sum_{j=1}^{n}A_{ij}\)\(\triangleright\) degree of node \(v_{i}\)
8:sample \(l_{i}\sim\text{Laplace}(0,1/\varepsilon_{d})\)
9:\(\tilde{d}_{i}\gets d_{i}+l_{i}\)\(\triangleright\) Laplace mechanism
10:return \(\left(\tilde{A}_{i},\tilde{d}_{i}\right)\)
11:end function
```
**Algorithm 1** Node-side \(\varepsilon\)-link LDP mechanism
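A minimal NumPy sketch of Algorithm 1 (an illustration under the paper's notation, not the authors' implementation):

```python
import numpy as np

def link_dp(A_i, eps, delta, rng):
    """Node-side eps-link LDP: randomized response on the adjacency list
    with budget (1-delta)*eps, Laplace noise on the degree with delta*eps."""
    eps_d = delta * eps
    eps_a = (1.0 - delta) * eps
    p_flip = 1.0 / (1.0 + np.exp(eps_a))        # RR flip probability
    flips = rng.random(A_i.shape[0]) < p_flip
    A_tilde = np.where(flips, 1 - A_i, A_i)     # flip each bit independently
    d_tilde = A_i.sum() + rng.laplace(0.0, 1.0 / eps_d)  # Laplace mechanism
    return A_tilde, d_tilde

rng = np.random.default_rng(0)
A_i = np.array([0, 1, 1, 0, 1])
A_tilde, d_tilde = link_dp(A_i, eps=200.0, delta=0.5, rng=rng)
```

With a very large budget (as in the call above) the mechanism returns the adjacency list essentially unchanged and a near-exact degree, while small \(\varepsilon\) makes both outputs heavily randomized.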
### Server-side denoising
After receiving the noisy adjacency lists \(\tilde{A}_{1},\tilde{A}_{2},\ldots,\tilde{A}_{n}\) and degrees \(\tilde{d}_{1},\tilde{d}_{2},\ldots,\tilde{d}_{n}\) from the nodes, the server first assembles them into a noisy adjacency matrix \(\tilde{A}\in\{0,1\}^{n\times n}\) and noisy degree sequence \(\tilde{d}\in\mathbb{R}^{n}\). The server then uses \(\tilde{d}\) to estimate link probabilities to be used as priors, and uses \(\tilde{A}\) as the evidence to calculate the posterior probability for each potential link to exist in the ground truth graph. At last, the server constructs graph estimations based on the posterior link probabilities and uses the estimated graph to train GNNs. These steps are described in greater detail as follows.
#### 4.2.1. Estimation of link probability given degree sequence
Given the noisy degree sequence \(\tilde{d}\), the server aims to estimate the probability that each link exists, which is then used as the prior probability in the next step. To estimate link probabilities, we adopt the \(\beta\)-model, which is widely used in social network analysis (Bahdan et al., 2016; Wang et al., 2018; Wang et al., 2018) and closely related to the well-known Bradley-Terry-Luce (BTL) model for ranking (Bahdan et al., 2016; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). Given a vector \(\beta=(\beta_{1},\beta_{2},\ldots,\beta_{n})\in\mathbb{R}^{n}\), the model assumes that a random undirected simple graph of \(n\) vertices is drawn as follows: for each \(1\leq i<j\leq n\), an edge between node \(v_{i}\) and \(v_{j}\) exists with probability
\[p_{ij}=\frac{\exp(\beta_{i}+\beta_{j})}{1+\exp(\beta_{i}+\beta_{j})} \tag{4}\]
independently of all other edges. Hence, the probability of observing the (true) degree sequence \(d=(d_{1},\ldots,d_{n})\) from a random graph drawn according to the \(\beta\)-model is
\[L_{d}(\beta)=\frac{\exp(\sum_{i}\beta_{i}d_{i})}{\prod_{i<j}(1+\exp(\beta_{i}+ \beta_{j}))}. \tag{5}\]
As a result, one can estimate the value of \(\beta\) by maximizing the likelihood \(L_{d}(\beta)\) of observing \(d\). The maximum likelihood estimate (MLE) \(\hat{\beta}\) of \(\beta\) must satisfy the system of equations
\[d_{i}=\sum_{j\neq i}\frac{\exp(\hat{\beta}_{i}+\hat{\beta}_{j})}{1+\exp(\hat{ \beta}_{i}+\hat{\beta}_{j})},\qquad i=1,2,\ldots,n. \tag{6}\]
Chatterjee et al. (2016) show that with high probability, there exists a unique MLE solution \(\hat{\beta}\) as long as the ground truth sequence \((\beta_{i})\)
is bounded from above and below, and the authors also provide an efficient algorithm for computing the MLE when it exists. Consider the following function \(\phi_{d}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) where
\[\phi_{d}(x)_{i}=\log(d_{i})-\log\left(\sum_{j\neq i}\frac{1}{\exp(-x_{j})+\exp(x_{i})}\right). \tag{7}\]
Chatterjee et al. (2017) prove that the MLE solution is a fixed point of the function \(\phi_{d}\) and hence can be found iteratively using Algorithm 2.
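The iteration follows directly from Eq. (7); the sketch below (the iteration count and the example degree sequence are our own choices) computes the fixed point and the resulting link probabilities from Eq. (4):

```python
import numpy as np

def mle_link_probability(d, iters=2000):
    """Iterate x <- phi_d(x) from Eq. (7) to approximate the beta-model MLE,
    then return the link probability matrix p_ij from Eq. (4)."""
    n = len(d)
    x = np.zeros(n)
    for _ in range(iters):
        x_new = np.empty(n)
        for i in range(n):
            others = np.delete(np.arange(n), i)
            denom = np.sum(1.0 / (np.exp(-x[others]) + np.exp(x[i])))
            x_new[i] = np.log(d[i]) - np.log(denom)
        x = x_new
    p = np.exp(x[:, None] + x[None, :])
    p = p / (1.0 + p)                 # Eq. (4): p_ij = s(beta_i + beta_j)
    np.fill_diagonal(p, 0.0)
    return p

# Degree sequence of a small realizable graph (5 nodes, 6 edges)
d = np.array([3.0, 3.0, 2.0, 2.0, 2.0])
p = mle_link_probability(d)
```

At the fixed point, the row sums \(\sum_{j\neq i}p_{ij}\) reproduce the input degrees, matching Eq. (6).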
Therefore, if the degree sequence \(d\) were to be released to and observed by the server, the server could then model the graph using the \(\beta\)-model and estimate link probabilities via MLE. However, the actual degree sequence \(d\) must be kept private from the server for the privacy guarantee. As per Algorithm 1, \(d\) is privatized through the Laplace mechanism and only the noisy \(\tilde{d}\) can be observed by the server. Although the server cannot directly maximize the likelihood \(L_{d}(\beta)\) of observing \(d\), the following theorem shows that the observable log-likelihood \(\ell_{\tilde{d}}(\beta)=\log(L_{\tilde{d}}(\beta))\) is a lower bound of the unobservable \(\ell_{d}(\beta)=\log(L_{d}(\beta))\) (up to a gap).
**Theorem 4.2**.: _For any \(\beta\in\mathbb{R}^{n}\) that is bounded from above and below, let \(M=\max_{1\leq i\leq n}|\beta_{i}|\). For any given constant \(a\), with probability at least \(1-1/a^{2}\), we have \(\ell_{\tilde{d}}(\beta)\geq\ell_{d}(\beta)-a\sqrt{2n}M/\varepsilon_{d}\), where the probability is measured over the randomness of the Laplace mechanism producing \(\tilde{d}\)._
_Remark 4.3_.: Maximizing the observable but noisy log-likelihood \(\ell_{\tilde{d}}(\beta)\) can maximize the unobservable target log-likelihood \(\ell_{d}(\beta)\) to a certain extent (up to a gap of \(\Theta(\sqrt{n}M/\varepsilon_{d})\)). Similar to (6), the solution \(\hat{\beta}\) that maximizes the log-likelihood \(\ell_{\tilde{d}}(\beta)\) satisfies the following system of equations
\[\tilde{d}_{i}=\sum_{j\neq i}\frac{\exp(\hat{\beta}_{i}+\hat{\beta}_{j})}{1+\exp(\hat{\beta}_{i}+\hat{\beta}_{j})},\qquad i=1,2,\ldots,n, \tag{8}\]
and it will be a fixed point of the function \(\phi_{\tilde{d}}\). However, Eq. (8) has a solution only if \(\tilde{d}_{i}\in(0,n-1)\) for all \(i=1,2,\ldots,n\). Therefore, the server first clips the values of \(\tilde{d}\) to \(\tilde{d}^{+}\in[1,n-2]^{n}\) and then calls the function \(\textsc{MELinkProbability}(\tilde{d}^{+})\) from Algorithm 2 to find the link probabilities that maximize \(\ell_{\tilde{d}^{+}}(\beta)\). This step is described in Lines 2 and 3 of Algorithm 3.
```
0:\(\tilde{A}\in\{0,1\}^{n\times n}\), \(\tilde{d}\in\mathbb{R}^{n}\) - privatized adjacency matrix and degree sequence, where for \(1\leq i\leq n\), \((\tilde{A}_{i},\tilde{d}_{i})=\textsc{LinkDP}(A_{i},\varepsilon,\delta)\) executed by node \(v_{i}\); \(\varepsilon\) - total privacy budget; \(\delta\) - degree privacy parameter
0:\(P\in[0,1]^{n\times n}\) - the posterior probabilities for each link to exist
1:function Denoise(\(\tilde{A},\tilde{d},\varepsilon,\delta\))
2:\(\tilde{d}^{+}\leftarrow\tilde{d}.\mathrm{clip}(\mathrm{min}=1,\mathrm{max}=n-2)\)
3:\(p\leftarrow\textsc{MELinkProbability}(\tilde{d}^{+})\)\(\triangleright\) prior
4:for \((i,j)\in\{1,2,\ldots,n\}^{2}\) do
5:\(q_{ij}\leftarrow\Pr\{(\tilde{A}_{ij},\tilde{A}_{ji})\mid(A_{ij},A_{ji})=(1,1)\}\)\(\triangleright\) evidence likelihood
6:\(q^{*}_{ij}\leftarrow\Pr\{(\tilde{A}_{ij},\tilde{A}_{ji})\mid(A_{ij},A_{ji})=(0,0)\}\)\(\triangleright\) evidence likelihood
7:\(P_{ij}\leftarrow\frac{q_{ij}p_{ij}}{q_{ij}p_{ij}+q^{*}_{ij}(1-p_{ij})}\)\(\triangleright\) Bayesian posterior probability
8:end for
9:return \(P\)
10:end function
```
**Algorithm 3** Server-side estimation of posterior link probabilities
#### 4.2.2. Estimation of posterior link probabilities
The noisy degree sequence \(\tilde{d}\) enables the server to estimate the link probabilities to be used as a prior, such that the server can use the received noisy adjacency matrix as evidence to evaluate posterior probabilities. For each potential link between \(v_{i},v_{j}\in V\), the server receives two bits \(\tilde{A}_{ij}\) and \(\tilde{A}_{ji}\) related to its existence. Because the privacy budget \(\varepsilon_{a}\) in RR (Algorithm 1) is known to the server, the server can use the flip probability \(p_{\textsc{flip}}=1/(1+\exp(\varepsilon_{a}))\) to calculate the likelihood of observing the received bits \((\tilde{A}_{ij},\tilde{A}_{ji})\) conditioned on whether a link exists between \(v_{i}\) and \(v_{j}\) in the actual graph. More specifically, we have
\[q_{ij}=\begin{cases}p_{\textsc{flip}}^{2},&(\tilde{A}_{ij},\tilde{A}_{ji})=(0,0)\\ p_{\textsc{flip}}(1-p_{\textsc{flip}}),&(\tilde{A}_{ij},\tilde{A}_{ji})=(0,1)\vee(\tilde{A}_{ij},\tilde{A}_{ji})=(1,0)\\ (1-p_{\textsc{flip}})^{2},&(\tilde{A}_{ij},\tilde{A}_{ji})=(1,1)\end{cases}\]
\[q^{*}_{ij}=\begin{cases}(1-p_{\textsc{flip}})^{2},&(\tilde{A}_{ij},\tilde{A}_{ji})=(0,0)\\ p_{\textsc{flip}}(1-p_{\textsc{flip}}),&(\tilde{A}_{ij},\tilde{A}_{ji})=(0,1)\vee(\tilde{A}_{ij},\tilde{A}_{ji})=(1,0)\\ p_{\textsc{flip}}^{2},&(\tilde{A}_{ij},\tilde{A}_{ji})=(1,1)\end{cases}\]
Here, \(q_{ij}\) is the likelihood of observing \((\tilde{A}_{ij},\tilde{A}_{ji})\) given the existence of the link \((v_{i},v_{j})\), and \(q^{*}_{ij}\) is the likelihood of observing \((\tilde{A}_{ij},\tilde{A}_{ji})\) given the non-existence of the link \((v_{i},v_{j})\). Hence, together with the link probability (without taking evidence into consideration) \(p_{ij}\) estimated solely from the noisy degree sequence, one can apply the Bayes rule to evaluate the posterior probability, i.e., for each \(1\leq i\neq j\leq n\),
\[P_{ij}=\Pr\{(A_{ij},A_{ji})=(1,1)\mid(\tilde{A}_{ij},\tilde{A}_{ji})\}=\frac{q_{ij}p_{ij}}{q_{ij}p_{ij}+q^{*}_{ij}(1-p_{ij})}.\]
For each \(1\leq i\neq j\leq n\), \(P_{ij}\) is the posterior probability that a link exists between \(v_{i}\) and \(v_{j}\) conditioned on the evidence \((\tilde{A}_{ij},\tilde{A}_{ji})\). We will show the accuracy of this estimation of graph topology in Section 4.3 by bounding the mean absolute error between \(P\) and ground truth \(A\).
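Lines 4-8 of Algorithm 3 can be vectorized as below (a sketch; the prior matrix is assumed to come from the degree-based MLE step):

```python
import numpy as np

def posterior_link_probs(A_tilde, prior, eps_a):
    """Bayes update of the prior link probabilities using the pair of
    received bits (A~_ij, A~_ji) as evidence, per Algorithm 3."""
    p_flip = 1.0 / (1.0 + np.exp(eps_a))
    ones = A_tilde + A_tilde.T                 # 0, 1 or 2 observed '1' bits per pair
    # Likelihood of the evidence given a link exists (q1) / does not exist (q0)
    q1 = np.select([ones == 2, ones == 1],
                   [(1 - p_flip) ** 2, p_flip * (1 - p_flip)], p_flip ** 2)
    q0 = np.select([ones == 2, ones == 1],
                   [p_flip ** 2, p_flip * (1 - p_flip)], (1 - p_flip) ** 2)
    P = q1 * prior / (q1 * prior + q0 * (1 - prior))
    np.fill_diagonal(P, 0.0)
    return P

A_tilde = np.array([[0, 1, 0], [1, 0, 0], [1, 0, 0]], float)
prior = np.full((3, 3), 0.1)
P = posterior_link_probs(A_tilde, prior, eps_a=4.0)
```

When \(\varepsilon_{a}=0\) the evidence is uninformative (\(p_{\textsc{flip}}=1/2\)) and the posterior reduces to the prior; as \(\varepsilon_{a}\) grows, the observed bits dominate.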
#### 4.2.3. Graph estimation given posterior link probabilities
After obtaining \(P\), we propose three different variants of Blink for the server to construct graph estimations used for GNN training.
Blink-Hard. The simplest and most straightforward approach is to keep only the links whose posterior probability of existence exceeds that of absence, i.e., keep a link between \(v_{i}\) and \(v_{j}\) in the estimated graph \(\hat{A}\) if and only if \(P_{ij}>0.5\).
It is clear that hard thresholding loses a lot of the information contained in \(P\) by simply rounding all entries to \(0\) or \(1\). However, when the privacy budget is low and uncertainty is high, the information provided by the nodes is usually too noisy to be useful for GNN training, and may even corrupt the GNN model (Zhu et al., 2017). Therefore, Blink-Hard is expected to perform better when the privacy budget is low, while as the privacy budget grows, it is likely to be outperformed by the other variants of Blink.
Blink-Soft. Instead of hard thresholding, the server can keep all the information in \(P\) by using the posterior probabilities as edge weights. In this way, the GNN formulation in (1) is modified as follows to adopt weighted aggregation:
\[x_{i}^{(k)}=\gamma^{(k)}\left(\text{Aggregate}(\{P_{ij}\cdot x_{j}^{(k-1)}:v_{j}\in V\})\right), \tag{9}\]
where \(\text{Aggregate}(\cdot)\) is a permutation invariant aggregation function such as sum or mean. Detailed modifications of specific GNN architectures will be included in Appendix B.
The soft variant utilizes extra information of \(P\) compared to Blink-Hard, and hence, is expected to achieve better performance as long as the information is not too noisy to be useful. Therefore, we form a hypothesis that Blink-Soft and Blink-Hard complement each other and the former is preferred when privacy budget is relatively higher while the latter is preferred at lower privacy budgets.
Blink-Hybrid. At last, we combine both the hard and soft variants such that the server can eliminate unhelpful noisy information while utilizing the more confident information in \(P\) via weighted aggregation. The server first takes the highest \(\|P\|_{1,1}\) entries of \(P\) and filters out the remaining ones by setting them to zero. This keeps only the top \(\|P\|_{1,1}\) possible links in the graph, as \(\|P\|_{1,1}\) is an estimation of the graph density \(\|A\|_{1,1}\) (suggested by Theorem 4.4 and Corollary 4.6). This step is inspired by the idea of only keeping the top \(|E|\) links from DpGCN (Zhu et al., 2017). Then, the server utilizes the remaining entries in \(P\) by using them as edge weights and trains the GNN as in Equation (9). Blink-Hybrid is expected to incorporate the advantages of both Blink-Hard and Blink-Soft and perform well for all privacy levels.
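The three constructions can be summarized in a few lines (a sketch; tie-breaking among equal entries and symmetrization details are simplified):

```python
import numpy as np

def estimate_graph(P, variant):
    """Build the graph estimate fed to the GNN from the posterior matrix P."""
    if variant == "hard":
        return (P > 0.5).astype(float)         # keep links with posterior > 0.5
    if variant == "soft":
        return P.copy()                        # posteriors become edge weights
    # hybrid: keep only the top round(||P||_{1,1}) entries, as weights
    k = int(round(P.sum()))
    if k == 0:
        return np.zeros_like(P)
    thresh = np.sort(P, axis=None)[-k]         # k-th largest entry
    return np.where(P >= thresh, P, 0.0)

P = np.array([[0.0, 0.9, 0.2],
              [0.9, 0.0, 0.6],
              [0.2, 0.6, 0.0]])
```

Since \(P\) is symmetric, \(\|P\|_{1,1}\) counts each undirected link twice, matching the convention for \(\|A\|_{1,1}\).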
### Theoretical analysis for utility
We have described our proposed approach, namely, Blink, in detail in previous sections. While its privacy guarantee has been shown in Theorem 4.1, we now theoretically demonstrate its utility guarantees.
Choice of utility metric. To quantify the utility of Blink, we first need to identify a metric to be bounded that is able to reflect the quality of the estimated graph, \(P\). In the related literature, many metrics have been used to demonstrate the utility of differentially private mechanisms for graph analysis. For example, Hidano and Murakami (2017) show that their estimated graph topology preserves the graph density; Imola et al. (2018) bound the error in triangle count and \(k\)-star count; Ye et al. (2019) evaluate the error in any arbitrary aggregate graph statistic, as long as the aggregate statistic can be estimated without bias from both degree and neighbor lists, such as clustering coefficient and modularity. However, none of these metrics can reflect the quality of the estimated graph topology directly and represent the performance of the GNN trained on the estimated graph, because they only involve aggregate and high-level graph statistics. In contrast, the performance of GNNs for node classification is very sensitive to link information from a microscopic or local perspective, as node information propagates along links and any perturbation of links will lead to aggregation of other nodes' information that should have been irrelevant, or missing the information of neighboring nodes. This is one of the reasons that many prior works involving privacy-preserving GNNs for node classification (Zhu et al., 2017; Wang et al., 2017; Zhu et al., 2017) only provide empirical evidence of the utility of their approaches. Although there is no metric that can directly reflect the performance of the trained GNNs, the closer the estimated adjacency matrix \(P\) is to the ground truth \(A\), the better the GNNs trained on \(P\) are expected to perform, and the closer their performance would be to that of GNNs trained with the accurate graph topology.
Therefore, we evaluate the utility of Blink as a statistical estimator of the ground truth adjacency matrix \(A\) by bounding the expectation of the \(\ell_{1}\)-distance between \(P\) and \(A\), i.e. \(\text{E}[\|P-A\|_{1,1}]\). If \(P\) is a binary matrix similar to \(A\), this metric measures the number of edges in the ground truth graph that are missing or falsely added in the estimated graph; if \(P\) is a matrix of link probabilities, this metric measures to what extent the links in \(A\) are perturbed. It can be seen that this metric, just like GNN performance, is sensitive to link perturbations from a local perspective, and it is able to reflect the overall quality of the estimated graph topology. This metric is also closely related to the mean absolute error (MAE) between \(P\) and \(A\), defined as \(\frac{1}{n^{2}}\sum_{i,j}|P_{ij}-A_{ij}|\), which is a commonly used metric in empirical evaluation. Therefore, we use the expectation of \(\ell_{1}\)-distance between \(P\) and \(A\) as the utility metric to quantify the utility of Blink, and we present an upper bound of it in the following Theorem 4.4.
**Theorem 4.4**.: _Assume that the \(\hat{\beta}\) found by MLE in Algorithm 3 is the optimal solution that maximizes \(\ell_{\tilde{d}^{+}}(\beta)\). Then the expected \(\ell_{1}\)-distance between the posterior link probabilities and the ground truth satisfies \(\mathrm{E}[\|P-A\|_{1,1}]\leq 2\|A\|_{1,1}+O(n/\varepsilon)\), where the expectation is taken over the randomness of the client-side LDP mechanism._
**Corollary 4.6**.: _For a sparse graph where \(\|A\|_{1,1}=O(n)\), the expected mean absolute error (MAE) of \(P\) against \(A\) is bounded by \(O(1/n+1/(n\varepsilon))\)._
Empirical bound tightness. To empirically evaluate the estimation accuracy of the posterior link probabilities \(P\) against the ground truth graph topology \(A\), and to inspect the tightness of our upper bound on the estimation error, we report the average mean absolute error (MAE) between \(P\) and \(A\) and its theoretical upper bound (as given in Theorem 4.4) on four well-known graph datasets in Figure 4. Figure 4 shows that in all datasets, the MAE between \(P\) and the ground truth \(A\) is very small, and decreases to almost zero (on the order of \(10^{-6}\) when \(\varepsilon=8\)) as the privacy budget \(\varepsilon\) increases. This demonstrates that the inferred link probability matrix \(P\) is a close estimation of the unseen private adjacency matrix \(A\), and thus can be used for GNN training. Furthermore, Figure 4 reports that the upper bound of the expected MAE given by Theorem 4.4 is very close to the empirical average MAE when the total privacy budget \(\varepsilon\) is small. However, empirical results also suggest that the bound given in Theorem 4.4 is not tight when \(\varepsilon\) is large, as our upper bound converges to \(2\|A\|_{1,1}\) instead of \(0\) when \(\varepsilon\to\infty\), while the empirical estimation error converges to zero when \(\varepsilon\) grows larger. This has inspired us to prove Theorem 4.7 below, which states that the estimated graph from noisy messages will be identical to the actual one when \(\varepsilon\to\infty\).
**Theorem 4.7**.: _As \(P\) is a random function of the total link privacy parameter \(\varepsilon\), we write \(P=P_{\varepsilon}\). Then, we have \(\lim_{\varepsilon\to\infty}P_{\varepsilon}=A\), i.e., when \(\varepsilon\to\infty\), the estimated graph from noisy messages converges to the ground truth._
_Remark 4.8_ (Implications of Theorem 4.7).: Theorem 4.7 demonstrates that when \(\varepsilon\) goes to infinity, the estimated graph from noisy messages will converge to the ground truth graph and hence the trained GNN will also have the same performance as its theoretical upper bound - the performance of a GNN trained with the accurate graph topology. This is a desirable property of any differentially private mechanisms, yet not enjoyed by all existing ones. For example, LDPGen (Zhu et al., 2017) clusters structurally similar nodes together and generates a synthetic graph based on noisy degree vectors via the Chung-Lu model (Chung et al., 2018). Even when no noise is injected, the generated graph is not guaranteed to be identical to the ground truth graph since only accurate degree vectors are used to construct the graph. Theorem 4.7 shows that Blink is able to achieve this desirable property, and together with Theorem 4.4, which has been shown to be quite tight when \(\varepsilon\) is small, we show that the estimation error of Blink is well controlled for all \(\varepsilon\), as demonstrated empirically in Figure 4.
_Remark 4.9_.: Note that Theorem 4.4, Corollary 4.6 and Theorem 4.7 are not violations of privacy. They indicate how well the server can estimate the ground truth, conditioned on the fact that the input information is theoretically guaranteed to satisfy \(\varepsilon\)-link LDP (as shown in Theorem 4.1). These are known as privacy-utility bounds, and it is standard practice in the local differential privacy literature for the server to denoise the received noisy information in order to aggregate useful information.
_Remark 4.10_ (Limitations).: Note that Theorems 4.4 and 4.7 only capture the estimation error of \(P\), and are not direct indicators of the performance of the GNNs trained on \(P\). As discussed at the beginning of this section, there is no metric that directly reflects the performance of the trained GNNs. In general, however, the closer the estimated adjacency matrix \(P\) is to the ground truth \(A\), the better the GNNs trained on \(P\) are expected to perform. Still, Theorems 4.4 and 4.7 alone are not sufficient to demonstrate the superior performance of the proposed approach, Blink, over existing approaches. For example, for L-DrGCN (to be introduced in Section 5.1), the mechanism retains around the same number of links in the estimated graph as in the ground truth graph, and hence its estimation error is also approximately bounded by \(2\|A\|_{1,1}\), similar to what we have proved in Theorem 4.4. This is also the case for the degree-preserving randomized response proposed in (Kolmogorov et al., 2017). Therefore, we provide extensive empirical evaluations of the performance of Blink in Section 5 and show that Blink outperforms existing approaches in terms of utility at the same level of privacy.
### Technical novelty
Splitting the privacy budget between degree information and adjacency lists has appeared in the literature (Kolmogorov et al., 2017; Kolmogorov et al., 2017). However, in both works, the noisy degrees and adjacency lists are denoised or calibrated so that a target aggregate statistic can be estimated more accurately. Hidano and Murakami (2017) use the noisy degree to sample from the noisy adjacency lists so that the overall graph density is preserved. (Kolmogorov et al., 2017) combines two estimators of the target aggregate statistic, one from noisy degrees and the other from noisy adjacency lists, and calibrates them for a better estimate of the target statistic, such as the clustering coefficient or modularity. As discussed previously, guarantees on
Figure 4. Average mean absolute error (MAE) of the inferred link probabilities \(P\) against ground truth \(A\) (\(\delta\) set to \(0.1\)).
the estimation error of these aggregate statistics are not sufficient to train useful GNNs due to their sensitivity to link perturbations. In contrast, our approach, Blink, utilizes the noisy degree information to estimate the posterior link probabilities conditioned on the evidence of noisy RR outputs _for all possible links_, via Bayesian estimation. To the best of our knowledge, this is a novel approach that has not been explored in the literature.
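The per-entry Bayes update at the heart of this idea can be sketched as follows. This is a simplified, hypothetical skeleton: Blink's actual prior is coupled across links via the \(\beta\)-model MLE over the noisy degree sequence, and we assume here the standard binary randomized-response flip probability \(1/(1+e^{\varepsilon})\).

```python
import math

def rr_flip_prob(eps):
    # Standard binary randomized response: flip each bit with prob 1/(1+e^eps).
    return 1.0 / (1.0 + math.exp(eps))

def posterior_link_prob(prior, noisy_bit, eps):
    """Bayes update of a prior link probability given one RR-perturbed bit."""
    p = rr_flip_prob(eps)
    if noisy_bit == 1:
        num = prior * (1 - p)          # a true link reported faithfully
        den = num + (1 - prior) * p    # ... or a non-link flipped to 1
    else:
        num = prior * p                # a true link flipped to 0
        den = num + (1 - prior) * (1 - p)
    return num / den
```

For example, with a weak prior of 0.1, a reported 1 at \(\varepsilon=4\) already pushes the posterior above 0.85, while a reported 0 pushes it near zero: the evidence dominates once the flip probability is small.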
## 5. Experiments
### Experimental settings
Environment. To demonstrate the privacy-utility trade-off of our proposed mechanism, we ran extensive experiments on real-world graph datasets with state-of-the-art GNN models.2 The experiments are conducted on a machine running Ubuntu 20.04 LTS, equipped with two Intel® Xeon® Gold 6326 CPUs, 256GB of RAM and an NVIDIA® A100 80GB GPU. We implement our mechanism and other baseline mechanisms using the PyTorch3 and PyTorch Geometric4 frameworks. To speed up execution, we use NVIDIA's TF32 tensor cores (Fan et al., 2017) in the hyperparameter search at a slight cost of precision. All experiments other than the hyperparameter grid search are done using the more precise FP32 format to maintain precision.
Footnote 2: Our code can be found at [https://github.com/ahcchdt/blink_gmn](https://github.com/ahcchdt/blink_gmn).
Footnote 3: Available at [https://pytorch.org](https://pytorch.org)
**Datasets.** We evaluate Blink and other mechanisms on real-world graph datasets. The datasets are described as follows:
* _Cora_ and _CiteSeer_(Xeon et al., 2017) are two well-known citation networks commonly used for benchmarking, where each node represents a document and links represent citation relationships. Each node has a feature vector of bag-of-words and a label for category.
* _LastFM_(Nishu et al., 2018) is a social network collected from music streaming service LastFM, where each node represents a user and links between them represent friendships. Each node also has a feature vector indicating the artists liked by the corresponding user and a label indicating the home country of the user.
* _Facebook_(Nishu et al., 2018) is a social network collected from Facebook, where each node represents a verified Facebook page and links indicate mutual likes. Each node is associated with a feature vector extracted from the site description and a label indicating the site category. This graph is significantly larger and denser than the previous datasets, and hence demonstrates the scalability and performance of our proposed method on larger graphs.
Table 1 summarizes the statistics of datasets used in experiments.
**Baselines.** To better present the performance of Blink, we implement the following baseline mechanisms for comparison.
1. Randomized response (RR) (Nishu et al., 2018) is included to demonstrate the effectiveness of our server-side denoising algorithm: the server directly uses the RR result of the adjacency matrix as the estimated graph, without calibration.
2. Wu et al. (Wu et al., 2018) propose DrGCN as a mechanism to achieve \(\varepsilon\)-central DP to protect graph links. It adds Laplacian noise to all entries of \(A\) and keeps the top \(\|A\|_{1,1}\) entries as estimated links. However, in the LDP setting, \(\|A\|_{1,1}\) is kept private from the server and cannot be directly utilized. Following the same idea, we propose an LDP variant of it, namely L-DrGCN, where each node adds Laplacian noise to the entries of its own adjacency list and sends the noisy list to the server, which assembles the noisy adjacency matrix \(\tilde{A}\). The server first estimates the number of links by \(\|\tilde{A}\|_{1,1}\) and keeps the top \(\|\tilde{A}\|_{1,1}\) entries as estimated links.
3. Solitude is proposed in (Zhou et al., 2018) as an LDP mechanism to protect the features, labels and links of the training graphs. Their link LDP setting is identical to ours, and we only use the link privacy component of their mechanism. In Solitude, each node perturbs its adjacency list via randomized response, and the server collects the noisy matrix \(\tilde{A}\). However, the RR result is usually too dense to be useful for GNN training. Hence, Solitude learns a sparser adjacency matrix by replacing the original GNN learning objective with \[\min_{\hat{A},\theta}\mathcal{L}(\hat{A}|\theta)+\lambda_{1}\|\hat{A}-\tilde{A}\|_{F}+\lambda_{2}\|\hat{A}\|_{1,1},\] (10) where \(\theta\) denotes the GNN trainable parameters and \(\mathcal{L}(\hat{A}|\theta)\) is the original GNN training loss under parameters \(\theta\) and graph topology \(\hat{A}\). To optimize Equation (10), Solitude uses alternating optimization over both variables.
4. Hidano and Murakami (Hidano and Murakami, 2018) propose DPRR (degree-preserving randomized response) to achieve \(\varepsilon\)-link local differential privacy when training GNNs for graph classification tasks. The algorithm denoises the noisy randomized response output by sampling from the links reported by RR such that the density of the sampled graph is an unbiased estimate of the ground-truth density. We implement DPRR as a baseline to compare with the Blink variants.
5. We also implement and include baselines designed for privacy-preserving graph synthesis and analysis. Qin et al. (Qin et al., 2018) propose LDPGen, a mechanism to generate synthetic graphs by collecting link information from decentralized nodes with link LDP guarantees, similar to ours. The key idea of the mechanism is to cluster structurally similar nodes together (via K-means (Beng et al., 2017)) and use noisy degree vectors reported by nodes to generate a synthetic graph via the Chung-Lu model of random graphs (Beng et al., 2017).
6. Imola et al. (Imola et al., 2018) propose locally differentially private mechanisms for graph analysis tasks, namely triangle counting and \(k\)-star counting. Their main idea is to use the randomized response mechanism to collect noisy adjacency lists from nodes, and then derive estimators for the target graph statistics from the noisy adjacency lists. We adopt the first part of their mechanism, i.e., randomized response, to derive the noisy graph topology to be used for GNNs. The RR mechanism used in (Imola et al., 2018) only injects noise into the lower triangular part of the adjacency matrix, i.e., node \(v_{i}\) only perturbs and sends the bits \(a_{i,1},\ldots,a_{i,i-1}\), which forces the noisy adjacency matrix to be symmetric. Hence, we denote this baseline as SymRR.
More discussions of these baseline methods and other related works can be found in Section 6.
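To make the baseline constructions above concrete, the three perturbation schemes (RR, L-DrGCN, SymRR) can be sketched as follows; the Laplace scale of \(1/\varepsilon\) and the rounding of \(\|\tilde{A}\|_{1,1}\) are illustrative assumptions rather than the exact calibrations of the original works:

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_response(A, eps):
    """RR baseline: flip every adjacency bit independently w.p. 1/(1+e^eps)."""
    p = 1.0 / (1.0 + np.exp(eps))
    flips = rng.random(A.shape) < p
    return np.where(flips, 1 - A, A)

def l_drgcn(A, eps):
    """L-DrGCN baseline: per-entry Laplace noise; the server keeps the top-k
    entries, with k itself estimated from the noisy matrix."""
    noisy = A + rng.laplace(scale=1.0 / eps, size=A.shape)
    k = min(max(int(round(noisy.sum())), 0), A.size)
    est = np.zeros_like(A)
    if k > 0:
        est.flat[np.argsort(noisy, axis=None)[-k:]] = 1
    return est

def sym_rr(A, eps):
    """SymRR baseline: perturb only the strict lower triangle, then mirror."""
    lower = np.tril(randomized_response(A, eps), k=-1)
    return lower + lower.T
```

As \(\varepsilon\to\infty\), the RR flip probability vanishes and both RR-based baselines return the input graph unchanged.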
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Dataset & \#Nodes & \#Features & \#Classes & \#Edges \\ \hline Cora(Xeon et al., 2017) & 2708 & 1433 & 7 & 5278 \\ CiteSeer(Xeon et al., 2017) & 3327 & 3703 & 6 & 4552 \\ LastFM(Nishu et al., 2018) & 7624 & 128 & 18 & 27806 \\ Facebook(Nishu et al., 2018) & 22470 & 128 & 4 & 171002 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics of the graph datasets used in experiments
Experimental setup. For all models and datasets, we randomly split the nodes into train/validation/test sets with the ratio 2:1:1. To better demonstrate the performance of Blink and other baseline methods, we apply them to multiple state-of-the-art GNN architectures, including graph convolutional networks (GCN) (Gendle et al., 2016), GraphSAGE (Gendle et al., 2017) and graph attention networks (GAT) (Kumar et al., 2018) (details of the model configurations can be found in Appendix B.1). Note that we do not conduct experiments on Blink-Soft or Solitude with the GAT architecture because it is not reasonable to let all nodes attend over all others (Kumar et al., 2018) (even in a weighted manner). To compare the DP mechanisms, we also experiment on all datasets with non-private GNNs, whose performance serves as a theoretical upper bound for all DP mechanisms. Moreover, following (Kumar et al., 2018), we also include the performance of multi-layer perceptrons (MLPs) for each dataset, which are trained after removing all links from the graph and are thus considered fully link private. We experiment with all mechanisms under all architectures and datasets with \(\varepsilon\in\{1,2,\ldots,8\}\). To showcase the full potential of our proposed method, for each combination of dataset, GNN architecture, privacy budget and mechanism, we run grid search and select the hyperparameters with the best average performance on validation data over 5 trials, and report the mean and standard deviation of model accuracy (or equivalently, micro F1-score, since each node belongs to exactly one class) on test data over 30 trials for statistical significance. Similar to previous works (Zhu et al., 2018; Wang et al., 2018), we do not consider the potential privacy loss during hyperparameter search. The hyperparameter spaces used in grid search for all mechanisms are described in detail in Appendix B.2.
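The 2:1:1 node split can be sketched as below; the seeding scheme is our own, for illustration, not necessarily the one used in the paper's experiments:

```python
import numpy as np

def split_nodes(num_nodes, seed=0):
    """Random 2:1:1 train/validation/test split over node indices."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train, n_val = num_nodes // 2, num_nodes // 4
    return (perm[:n_train],
            perm[n_train:n_train + n_val],
            perm[n_train + n_val:])

train, val, test = split_nodes(2708)  # Cora's node count, from Table 1
```

For Cora this yields 1354 training nodes and 677 nodes each for validation and test.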
### Experimental results and discussions
#### 5.2.1. Privacy-utility of the proposed Blink mechanisms
We report the average test accuracy of all three variants of Blink and other baseline methods over all datasets and GNN architectures in Figure 5. For all methods, the test accuracy increases as the total privacy budget increases, exhibiting the privacy-utility trade-off commonly found in differential privacy mechanisms. At all privacy budgets, L-DrGCN outperforms RR because the former takes
Figure 5. Performance of Blink and other mechanisms. X-axis represents \(\varepsilon\) and y-axis represents test accuracy (%).
the graph density into consideration and preserves the number of links in the estimated graph, while the latter produces too many edges after randomly flipping the bits in the adjacency matrix, which renders the graph too dense. This is consistent with Remark 4.5, where we show the huge improvement in estimation error of Blink over RR. We notice that the performance of SymRR and of our implementation of Solitude is on par with RR, which makes sense because SymRR is essentially RR applied only to the lower triangular part of the adjacency matrix, while Solitude denoises the graph topology based on RR outputs. Note that Lin et al. [27] had not made the implementation of Solitude publicly available by the time this paper was written, and the authors only performed experiments at large privacy budgets where \(\varepsilon\geq 7\); our results under similar privacy budgets agree with or outperform those presented in their paper. Additionally, the performance of LDPGen is worse than that of the other mechanisms, which is expected because it is designed for graph synthesis and not for GNN training. The performance of LDPGen on GNNs also does not improve as the privacy budget increases, which is likewise expected because, at all privacy budgets, the synthetic graph produced by LDPGen is always generated by a random graph model given noisy degree vectors. This has been discussed in Remark 4.8.
It is evident from Figure 5 that at all levels of \(\varepsilon\), the Blink variants generally outperform all baseline methods, because they also take individual degrees into consideration when estimating the prior probabilities, and utilize their confidence in links via hard thresholding, soft weighted aggregation, or both. Notably, only the Blink variants (especially Blink-Hard and Blink-Hybrid) consistently perform on par with the fully link private MLP baselines, because these variants can eliminate noisy and non-confident link predictions at low privacy budgets and high uncertainty. Additionally, among the three GNN architectures, the baseline methods perform better on GraphSAGE, with accuracy closer to MLP. This is because GraphSAGE convolutional layers have a separate weight matrix to transform the embedding of the root node and hence can learn not to be distracted by the embeddings of false positive neighbors. See Appendix B.1 for details on the GNN architectures. Finally, for \(\varepsilon\in[4,8]\), which is widely adopted in LDP settings in real-world industry practice [3, 13], Blink variants on different GNN architectures outperform the MLP and the baselines significantly in most cases, indicating their utility in real-world scenarios. Also, when \(\varepsilon\geq 6\), Blink variants achieve test accuracy on par with the theoretical upper bound on all datasets and architectures. In the following paragraphs, we describe in greater detail the performance of and trade-offs among the Blink variants.
Performance of Blink-Hard. As demonstrated in Figure 5, one main advantage of Blink-Hard is that it is almost never outperformed by the MLP trained only on node features, which is not the case for the baseline methods. Existing approaches [21, 44] to (central or local) link privacy on graphs aim to preserve the graph density in the estimated graph, i.e., they try to make \(\|\hat{A}\|_{1,1}\approx\|A\|_{1,1}\); however, when \(\varepsilon\) is small, identifying the same number of links in the estimated graph as in the actual graph runs against the promise of differential privacy. As Kolluri et al. [25] point out, 100% of the selected top \(|E|\) links estimated by DrGCN [44] at \(\varepsilon=1\) are false positives, corrupting the GNN results when aggregating neighbor embeddings. By only keeping links whose posterior probability of existence exceeds 0.5, Blink-Hard takes an alternative approach to graph density at tight privacy budgets and high uncertainty. As shown in Figure 6, \(\|\hat{A}\|_{1,1}\) estimated by Blink-Hard is much lower than the ground truth density \(\|A\|_{1,1}\) when \(\varepsilon\) is small, and gradually increases to a level similar to \(\|A\|_{1,1}\) as \(\varepsilon\) increases. In this way, Blink-Hard eliminates information that is too noisy to be useful at low privacy budgets, thus reducing false positive link estimations and preventing them from corrupting the GNN model. As shown in Figure 6, among the much fewer link estimations given by Blink-Hard, the true positive rates are much higher than those of DrGCN reported in [25]. Therefore, Blink-Hard consistently outperforms the fully link-private MLP and the other baselines.
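The hard-thresholding step itself is one line; the sketch below (with toy posterior values) shows how uncertain entries are dropped, shrinking the estimated density as described above:

```python
import numpy as np

def blink_hard(P):
    """Keep only links whose posterior probability exceeds 0.5."""
    return (P > 0.5).astype(int)

# Toy posterior matrix: only the confident entries survive thresholding.
P = np.array([[0.0, 0.9, 0.3],
              [0.9, 0.0, 0.4],
              [0.3, 0.4, 0.0]])
A_hat = blink_hard(P)
print(A_hat.sum())  # 2: the uncertain 0.3/0.4 entries are dropped
```

At low \(\varepsilon\), many posteriors hover near the prior and fall below 0.5, which is exactly why the estimated density in Figure 6 starts well below the ground truth.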
Figure 6. Density of estimated \(\hat{A}\) against ground truth \(A\) in Blink-Hard.
Figure 7. GCN test accuracy on LastFM with all three variants at \(\epsilon\in(1,8)\). This is a closer look on the results in Figure 5 on LastFM with GCN.
Performance of Blink-Soft. Although Blink-Hard outperforms the MLP and the baseline mechanisms, the elimination and rounding of link probabilities lead to a significant amount of information loss. Blink-Soft aims to improve over the hard variant at moderate privacy budgets by utilizing the extra information while it is not too noisy. As described in Section 4.2.3, Blink-Soft uses the inferred link probabilities as weights in the GNN aggregation step (see Appendix B.1 for more details), which feeds the GNN more information and lets it perform better as long as the extra information is useful. As reflected in Figures 5 and 7, Blink-Soft outperforms Blink-Hard at moderate privacy budgets (i.e., \(\varepsilon\in[4,6]\)) under almost all dataset and GNN architecture combinations. For higher privacy budgets, as both variants perform very well and are on par with the non-private upper bound, the performance gap is not significant. However, at lower privacy budgets where \(\varepsilon\in[1,3]\), Blink-Soft sometimes performs much worse than Blink-Hard and the fully private MLP baseline, for example on LastFM with the GCN model, which we examine more closely in Figure 7. This is caused by the low information-to-noise ratio of the inferred link probabilities at low privacy budgets. Here, we confirm the hypothesis proposed in Section 4.2.3 that Blink-Hard and Blink-Soft complement each other: the hard variant performs better at low privacy budgets while the soft variant performs better at higher privacy budgets.
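A minimal dense sketch of probability-weighted aggregation is given below. This is a simplified mean-style rule with unit self-loops; the actual per-architecture weighting used by Blink-Soft is detailed in Appendix B.1:

```python
import numpy as np

def soft_aggregate(P, H):
    """One aggregation step where inferred link probabilities act as edge
    weights: each node takes a weighted average of its probable neighbors'
    features, with a unit self-loop so isolated nodes keep their own features."""
    W = P + np.eye(P.shape[0])           # self-loops with weight 1
    deg = W.sum(axis=1, keepdims=True)   # weighted degree for normalization
    return (W @ H) / deg

# With all link probabilities zero, each node just keeps its own features.
H = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
out = soft_aggregate(np.zeros((3, 3)), H)
```

Confident links (probability near 1) thus contribute almost fully to the aggregation, while uncertain links contribute proportionally less, instead of being rounded to 0 or 1.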
Performance of Blink-Hybrid. The hybrid variant combines the previous two, aiming to enjoy the benefits of both across all privacy settings. As shown in Figure 5, at low privacy budgets, Blink-Hybrid successfully outperforms Blink-Soft by a significant margin and achieves test accuracy on par with Blink-Hard (and is thus not outperformed by the MLP), owing to its elimination of noisy and useless information, which prevents false positive links from poisoning the model. At higher privacy budgets, Blink-Hybrid is often able to perform better than the hard variant, thanks to keeping the link probabilities as aggregation weights. For example, for the configuration of LastFM with GCN in Figure 5, which we examine more closely in Figure 7, Blink-Hybrid achieves accuracy close to Blink-Hard at \(\varepsilon\in[1,3]\) while performing on par with Blink-Soft at \(\varepsilon\in[4,8]\), achieving the best of both worlds. Although Blink-Hybrid is seldom able to outperform both the hard and the soft variants, it enjoys both the benefits of Blink-Hard at low privacy budgets and the benefits of Blink-Soft at higher privacy budgets.
#### 5.2.2. On the effects of \(\delta\)
The degree privacy budget parameter \(\delta\) is an important hyperparameter that affects the performance of the trained GNNs. In the previous experiments, we chose the value of \(\delta\) by grid search, using the one associated with the best validation accuracy. To better understand the effect of different choices of \(\delta\) on GNN performance, we report the test accuracy of a graph convolutional network on CiteSeer with Blink-Soft over varying \(\delta\) values at \(\varepsilon\in\{1,8\}\) in Figure 8. Different privacy budgets have different implications for the choice of \(\delta\). At a small privacy budget, i.e., \(\varepsilon=1\), the GNN performance increases as \(\delta\) increases, while at a larger privacy budget, e.g., \(\varepsilon=8\), lower \(\delta\) values clearly result in better performance. This is because at very tight privacy budgets, the noisy adjacency matrix given by randomized response is too noisy to be useful; hence, the prior probabilities estimated from the noisy degree sequence become more important, and it is optimal to allocate more privacy budget towards degrees. At much higher privacy budgets, the flip probability in randomized response becomes so small that the noisy adjacency matrix itself provides sufficient information to effectively train the GNN; hence, it is preferable to allocate more privacy budget to the adjacency lists. If we denote by \(\delta^{*}(\varepsilon)\) the optimal \(\delta\) at total privacy budget \(\varepsilon\) that yields the best-performing GNNs (for instance, \(\delta^{*}(1)=0.9\) and \(\delta^{*}(8)=0.1\) in Figure 8), we conjecture that \(\delta^{*}\) decreases as \(\varepsilon\) increases, i.e., \(\delta^{*\prime}(\varepsilon)<0\), and our experiments are consistent with this conjecture.
#### 5.2.3. Ablation studies
Naturally, one would be curious about which component, the prior or the evidence, of the proposed Blink mechanisms, contributes more to the final link estimations \(P\). To answer this question, we conduct ablation studies on the proposed methods. Figure 9 reports the MAE of the estimated link probabilities \(P\) against the ground truth \(A\) under Blink, its prior component and its evidence components. Blink with prior component only is equivalent to taking \(\delta=1\) where the flip probability of RR becomes \(1/2\) and hence the noisy adjacency matrix does not provide any information as evidence. Blink with evidence only is the case where the prior probabilities are set to be all \(1/2\) to provide no extra information.
Figure 8. GCN test accuracy on CiteSeer with Blink-Soft at \(\varepsilon\in\{1,8\}\).
Figure 9. The MAE of the estimated link probabilities \(P\) against ground truth \(A\) for full Blink, Blink with prior component only and Blink with evidence component only on CiteSeer. The latter figure is a closer look at the prior component whose trend is unclear in the former figure.
First, as shown in Figure 9, all mechanisms, complete and partial alike, have their MAE decrease as the privacy budget increases. More importantly, at tighter privacy budgets, the prior-only mechanism produces better estimations than its evidence-only counterpart, indicating that the prior contributes more to the final estimation in this regime. When \(\varepsilon\) grows (i.e., \(\varepsilon\geq 7\) in Figure 9), the noisy adjacency matrix becomes less noisy, so the evidence-only method starts to produce better estimations, playing a more important role than the prior. This agrees with the findings in Section 5.2.2 that it is optimal to allocate more privacy budget to degrees (i.e., the prior component) at smaller \(\varepsilon\) and vice versa. It is important to note that at all privacy budgets, the full Blink method significantly outperforms both single-component methods, indicating that our proposed method effectively utilizes both components to make better estimations and that both components are irreplaceable in Blink.
## 6. Related Work
Graph neural networks. Recent years have witnessed a growing body of work on graph neural networks for many tasks on graphs, such as node classification, link classification and graph classification. Many novel GNN models have been proposed, including GCN (Kipf and Welling, 2015), GraphSAGE (Kipf and Welling, 2015), GAT (Kipf and Welling, 2015) and Graph Isomorphism Networks (Kipf and Welling, 2015). As our proposed mechanism estimates the graph topology and then feeds it into GNN models without interfering with the model architecture, we do not survey recent advances in GNNs here in great detail, but refer the audience to available surveys (Kipf and Welling, 2015; Kipf and Welling, 2015; Kipf and Welling, 2015) for detailed discussions of GNN models, performance and applications.
Differentially private GNNs. There have been recent attempts in the literature to incorporate the notion of differential privacy into GNNs. Wu et al. (Wu et al., 2015) study adversarial link inference attacks on GNNs and propose DrGCN, a central DP mechanism to protect edge-level privacy, which can be easily modified to adopt the stronger LDP guarantee. Daigavane et al. (Daigavane et al., 2016) attempt to extend the well-celebrated DP-SGD (Deng et al., 2015) algorithm from neural networks to GNNs and achieve stronger node-level central differential privacy. More recently, Kolluri et al. (Kolluri et al., 2016) propose a new GNN architecture to achieve edge-level central DP, where they separate out the edge structure and use only MLPs to model both node features and graph structure information. Following a similar intuition, Sajadmanesh et al. (Sajadmanesh et al., 2016) propose a new mechanism where the aggregation step is decoupled from the GNN and executed as a pre-processing step to save privacy budget. When combined with DP-SGD, Sajadmanesh et al. (Sajadmanesh et al., 2016) achieve stronger node-level central DP on the altered GNN architecture.
For local differential privacy, Sajadmanesh and Gatica-Perez (Sajadmanesh et al., 2016) propose an LDP mechanism to protect node features but not the graph topology. Lin et al. (Lin et al., 2017) extend (Sajadmanesh et al., 2016) and propose Solitude to also protect edge information in an LDP setting. The link LDP notion of (Lin et al., 2017) is identical to ours. However, their link DP mechanism is not principled, and their estimated graph structure is learned by minimizing a loss function \(\|\hat{A}-\tilde{A}\|_{F}+\lambda\|\hat{A}\|_{1,1}\) to encourage the model to choose less dense graphs. Hidano and Murakami (Hidano and Murakami, 2017) propose a link LDP mechanism for graph classification tasks and take an approach similar to ours, separately injecting noise into the adjacency matrix and the degrees. However, Hidano and Murakami (Hidano and Murakami, 2017) aim to preserve node degrees in the estimated graph like DrGCN, which is not suitable for node classification tasks and performs worse than our method, as shown in Section 5.
Privacy-preserving graph synthesis. Privacy-preserving graph publication is also closely related to what we have studied, where one aims to publish a sanitized, privacy-preserving graph given an input graph. Blocki et al. (Blocki et al., 2015) utilize the Johnson-Lindenstrauss transform to achieve graph publication with edge differential privacy. Qin et al. (Qin et al., 2016) consider local edge differential privacy, where an untrusted data curator collects information from each individual user about their adjacency lists and constructs a representative synthetic graph of the underlying ground truth graph with an edge LDP guarantee. This is achieved by incrementally clustering structurally similar users together. More recently, Yang et al. (Yang et al., 2017) achieve differentially private graph generation by injecting noise into a graph generative adversarial network (GAN) such that the output of the GAN model is privacy-preserving. It is worth noting that in the settings of (Blocki et al., 2015; Wu et al., 2015), a privacy-preserving synthetic graph is generated in a centralized way, i.e., the curator has access to the ground truth graph and perturbs it for a privacy-preserving publication, which is a weaker threat model than ours. Qin et al. (Qin et al., 2016) consider a threat model similar to ours with local differential privacy, where the curator does not need access to the actual graph, but there is no theoretical upper bound on the distance from the synthetic graph to the ground truth graph, which we provide in Theorem 4.4.
Privacy-preserving graph analysis. There exist prior works in the literature on graph analysis tasks with local differential privacy. Imola et al. (Imola et al., 2017) propose mechanisms to derive estimators for triangle counts and \(k\)-star counts in graphs with link LDP. Ye et al. (Ye et al., 2017) propose a general framework for graph analysis with local differential privacy that estimates an arbitrary aggregate graph statistic, such as the clustering coefficient or modularity. This approach combines two estimators of the target aggregate statistic, one from noisy neighbor lists and one from noisy degrees, and derives a better estimator for the target statistic. However, it does not produce an estimated graph topology that can be used for GNN training; hence, we do not include it as a baseline in our experiments in Section 5.
Link inference attacks in GNNs. As GNNs have become popular in research and practice in recent years, their privacy and security have garnered increasing attention in the research community, and several privacy attacks on GNNs have been proposed that allow an attacker to infer links in the training graph. He et al. (He et al., 2017) propose multiple link stealing attacks with which an adversary can infer links in the training graph given black-box access to the trained GNN model, guided by the heuristic that two nodes are more likely to be linked if they have more similar attributes or embeddings. Wu et al. (Wu et al., 2015) consider a scenario where a server with full access to the graph topology trains a GNN by querying node features and labels from node clients (who do not host the graph topology), and demonstrate that the nodes can infer the links held by the server by designing adversarial queries via influence analysis. Zhang et al. (Zhang et al., 2017) propose graph reconstruction attacks, where an adversary examines the trained graph embeddings and aims to reconstruct a graph similar to the ground truth graph used in GNN
training. All these attacks share the same threat model, in which the GNN is trained with complete and accurate information and an adversary aims to infer links by examining the trained model. Our proposed solution, Blink, naturally defends against this kind of attack at its source, as a local differential privacy mechanism with a more severe threat model, where even the server that trains the GNN does not have non-private access to any links in the training graph.
**Estimate of link probability given degree sequence.** The modeling and estimation of random graphs given a degree sequence is a common topic in network science and probability theory. Chatterjee et al. (2019) discuss the maximum likelihood estimation of the parameters in the \(\beta\)-model, which is closely related to the BTL model for ranking (Blink, 2018; Dwork et al., 2019). Parameters in the BTL model can be estimated via MM algorithms (Krishnan et al., 2019) or MLE (Zhu et al., 2019). Alternatively, the configuration model (Krishnan et al., 2019) can also be used to model random graphs given a degree sequence; it generates multi-graphs that allow multiple edges between two vertices. In the configuration model, the expected number of edges between two nodes \(v_{i}\) and \(v_{j}\) conditioned on the degree sequence \(d\) is given by \(d_{i}d_{j}/(\sum d_{i}-1)\). When this value is \(\ll 1\), it can be interpreted as the probability that there is (at least one) edge between \(v_{i}\) and \(v_{j}\). We also tried Blink with the configuration model instead of the \(\beta\)-model, but the link probabilities failed to stay consistently below 1.
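For illustration, the configuration-model expectation above can be computed directly; the degree sequences below are arbitrary toy examples, not data from our experiments:

```python
def expected_edges(d, i, j):
    """Configuration-model expected number of edges between v_i and v_j,
    conditioned on the degree sequence d: d_i * d_j / (sum(d) - 1)."""
    return d[i] * d[j] / (sum(d) - 1)

# For low-degree pairs the value is below 1 and can be read as a link probability:
print(expected_edges([1, 1, 2, 2], 0, 1))   # 0.2

# But between two hub nodes it exceeds 1, so it fails as a probability --
# which is why Blink uses the beta-model instead:
print(expected_edges([5, 5, 1, 1], 0, 1))   # ~2.27
```

This failure mode is exactly the inconsistency noted above: the expectation is a valid edge count, but not a valid probability once hubs are present.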
## 7. Conclusion
Overall, the presented framework, Blink, is a step towards making GNNs locally privacy-preserving while retaining their accuracy. It separately injects noise into adjacency lists and node degrees, and uses the latter as a prior and the former as evidence to evaluate posterior link probabilities, from which it estimates the ground-truth graph. We propose three variants of Blink based on different approaches to constructing the graph estimation from the inferred link probabilities. Theoretical and empirical evidence supports the state-of-the-art performance of Blink against existing link LDP approaches.
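The two noisy channels and their Bayesian combination can be illustrated with a minimal sketch. This is our own simplification, not Blink's actual estimator: the prior is a fixed scalar rather than one derived from a fitted \(\beta\)-model, and the split of the privacy budget between the two channels is omitted:

```python
import math
import random

def rr_perturb(bit, eps):
    """Warner randomized response on one adjacency bit: report the true
    bit with probability e^eps / (1 + e^eps), otherwise flip it."""
    keep = math.exp(eps) / (1.0 + math.exp(eps))
    return bit if random.random() < keep else 1 - bit

def noisy_degree(degree, eps):
    """Degree perturbed with Laplace noise of scale 1/eps (sensitivity 1)."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return degree - (1.0 / eps) * sign * math.log(1.0 - 2.0 * abs(u))

def posterior_link_prob(prior, reported_bit, eps):
    """P(edge | noisy report): combine a degree-based prior with the
    randomized-response likelihood via Bayes' rule."""
    keep = math.exp(eps) / (1.0 + math.exp(eps))
    like_edge = keep if reported_bit == 1 else 1.0 - keep       # P(report | edge)
    like_no_edge = 1.0 - keep if reported_bit == 1 else keep    # P(report | no edge)
    num = prior * like_edge
    return num / (num + (1.0 - prior) * like_no_edge)
```

At \(\varepsilon = 0\) the report is uninformative and the posterior collapses to the prior; as \(\varepsilon\) grows, the reported bit dominates.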
The area of differentially private GNNs is still novel, with many open challenges and potential directions. There are a few future research directions and improvements for this work. First, one may want to improve the bound in Theorem 4.4, which would require careful inspection of the \(\hat{\beta}\) found by MLE. Another interesting direction is to design algorithms such that each node can optimally decide its own privacy parameter \(\delta\), avoiding the hyperparameter search over \(\delta\), which may potentially lead to information leakage (Zhu et al., 2019); we leave the investigation of such potential risk to future work. Additionally, one could explore different models for graph generation from the posterior link probabilities, or extend the proposed framework to other types of graphs, such as directed or weighted graphs. Furthermore, exploring the scalability of Blink for large-scale graph data is an important future direction. Finally, one may also want to incorporate other LDP mechanisms that protect features and labels (such as (Zhu et al., 2019)) into Blink to provide complete local privacy protection over decentralized nodes.
## Acknowledgements
This work is supported by the Singapore Ministry of Education Academic Research Fund (AcRF) Tier 3 under MOE's official grant number MOE2017-T3-1-007 and AcRF Tier 1 under grant numbers A-8000980-00-0 and A-8000189-01-00.
---

# On the Correctness of Automatic Differentiation for Neural Networks with Machine-Representable Parameters

Wonyeol Lee, Sejun Park, Alex Aiken

arXiv:2301.13370v2 (submitted 2023-01-31), http://arxiv.org/abs/2301.13370v2
###### Abstract
Recent work has shown that automatic differentiation over the reals is almost always correct in a mathematically precise sense. However, actual programs work with _machine-representable numbers_ (e.g., floating-point numbers), not reals. In this paper, we study the correctness of automatic differentiation when the parameter space of a neural network consists solely of machine-representable numbers. For a neural network _with bias parameters_, we prove that automatic differentiation is correct at all parameters where the network is differentiable. In contrast, it is incorrect at all parameters where the network is non-differentiable, since it never informs non-differentiability. To better understand this non-differentiable set of parameters, we prove a tight bound on its size, which is linear in the number of non-differentiabilities in activation functions, and provide a simple necessary and sufficient condition for a parameter to be in this set. We further prove that automatic differentiation always computes a Clarke subderivative, even on the non-differentiable set. We also extend these results to neural networks possibly without bias parameters.
Keywords: Automatic Differentiation, Neural Networks, Machine-Representable Parameters
## 1 Introduction
Automatic differentiation refers to various algorithms that compute the derivative of a function represented by a program. Diverse practical systems for automatic differentiation have been developed for general-purpose programs (Baydin et al., 2016; Hascoet & Pascual, 2013; Maclaurin et al., 2015; Pearlmutter & Siskind, 2008; Revels et al., 2016; Slusanschi & Dumitrel, 2016; Walther & Griewank, 2012), and particularly for machine-learning programs (Bergstra et al., 2010; Collobert et al., 2011; Jia et al., 2014; Seide & Agarwal, 2016; Tokui et al., 2019; van Merrienboer et al., 2018), including TensorFlow (Abadi et al., 2016), PyTorch (Paszke et al., 2017), and JAX (Frostig et al., 2018). The development of such automatic differentiation systems has been a driving force of the rapid advances in deep learning (and machine learning in general) in the past 10 years (Baydin et al., 2017; Goodfellow et al., 2016; LeCun et al., 2015; Schmidhuber, 2015).
Recently, the correctness of automatic differentiation has been actively studied for various types of programs. For programs that only use differentiable functions, automatic differentiation is correct _everywhere_, i.e., it computes the derivative of a given program at all inputs (Abadi & Plotkin, 2020; Barthe et al., 2020; Brunel et al., 2020; Elliott, 2018; Huot et al., 2020; Krawiec et al., 2022; Radul et al., 2023; Smeding & Vakar, 2023; Vakar, 2021). On the other hand, for programs that use non-differentiable functions (e.g., \(\mathrm{ReLU}\)), automatic differentiation can be incorrect at some inputs (Bolte & Pauwels, 2020; Griewank & Walther, 2008; Lee et al., 2020).
There are two cases where automatic differentiation is incorrect. The first case is when the function \(f\) represented by a given program is differentiable at some \(x\), but automatic differentiation returns a value different from the derivative of \(f\) at \(x\). For instance, consider a program that represents the identity function, defined as \(\mathrm{ReLU}(x)-\mathrm{ReLU}(-x)\). If automatic differentiation uses zero as a "derivative" of \(\mathrm{ReLU}\) at \(x=0\), as is standard (e.g., in TensorFlow and PyTorch), it returns zero for this program at \(x=0\) while the true derivative is one. The second case is when \(f\) is non-differentiable at some \(x\), but automatic differentiation returns some finite value and does not signal the information about the non-differentiability of \(f\) at \(x\). For example, \(\mathrm{ReLU}(x)\) represents a function that is non-differentiable at \(x=0\), but automatic differentiation outputs some real number for this program at \(x=0\). Although automatic differentiation can be incorrect, recent works show that for a large class of programs using non-differentiable functions, automatic differentiation is correct _almost everywhere_, i.e., it is incorrect at most on a Lebesgue measure-zero subset of the input domain of a program (Bolte & Pauwels, 2020;
Lee et al., 2020; Lew et al., 2021; Mazza & Pagani, 2021).
These prior works, however, have a limitation: they consider automatic differentiation over the real numbers, but in practice, inputs to a program are always _machine-representable numbers_ such as \(32\)-bit floating-point numbers. Since the set of machine-representable numbers is countable (and usually finite), it is always a Lebesgue measure-zero subset of the real numbers. Hence, automatic differentiation could be incorrect on _all_ machine-representable inputs according to prior works, and this is indeed possible. Consider a program3 for a function from \(\mathbb{R}\) to \(\mathbb{R}\), defined as
Footnote 3: Inspired by Bolte & Pauwels (2020b); Mazza & Pagani (2021).
\[\sum_{c\in\mathbb{H}}\Big{[}\lambda x+\Big{(}\frac{1}{|\mathbb{H}|}-\lambda \Big{)}\Big{(}\mathrm{ReLU}(x-c)-\mathrm{ReLU}(-x+c)\Big{)}\Big{]},\]
where \(\mathbb{H}\subseteq\mathbb{R}\) is a finite set of machine-representable numbers and \(\lambda\in\mathbb{R}\setminus\{1\}\) is an arbitrary constant. Then, the program represents the affine function \(x\mapsto x+a\) for \(a=(\lambda-\frac{1}{|\mathbb{H}|})\times\sum_{c\in\mathbb{H}}c\), but automatic differentiation incorrectly computes its derivative at any \(x\in\mathbb{H}\) as \(\lambda\) (the arbitrarily chosen value) if zero is used as a "derivative" of \(\mathrm{ReLU}\) at \(0\) as before.4
Footnote 4: We can even make automatic differentiation return different values at different \(x\in\mathbb{H}\), by using a different \(\lambda_{i}\) for each \(c_{i}\in\mathbb{H}\). Similarly, we can also construct a program such that at all machine-representable numbers \(\mathbb{H}\), the program is non-differentiable and automatic differentiation returns arbitrary values.
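Both failure modes above can be reproduced in a few lines with forward-mode (dual-number) automatic differentiation. This is an illustrative sketch, not any particular AD system; `H` below is a four-element toy stand-in for the machine-representable numbers:

```python
class Dual:
    """Forward-mode AD value: primal part plus tangent (derivative) part."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    @staticmethod
    def lift(o):
        return o if isinstance(o, Dual) else Dual(float(o))

    def __add__(self, o):
        o = Dual.lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __neg__(self):
        return Dual(-self.val, -self.dot)

    def __sub__(self, o):
        return self + (-Dual.lift(o))

    def __rsub__(self, o):
        return Dual.lift(o) + (-self)

    def __mul__(self, o):
        o = Dual.lift(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def relu(x):
    # D^AD ReLU(0) = 0: the standard choice (e.g., TensorFlow, PyTorch).
    return Dual(max(x.val, 0.0), x.dot if x.val > 0 else 0.0)

def ad(f, x):
    """The value AD returns for f at the real input x."""
    return f(Dual(x, 1.0)).dot

# Case 1: ReLU(x) - ReLU(-x) represents the identity function, whose
# derivative is 1 everywhere, yet AD returns 0 at x = 0.
ident = lambda x: relu(x) - relu(-x)
print(ad(ident, 0.0), ad(ident, 2.0))   # 0.0 1.0

# Case 2: the program above represents x |-> x + a (true derivative 1
# everywhere), but AD is wrong at *every* point of H.
H = [0.0, 1.0, 2.0, 3.0]   # toy stand-in for the machine numbers
lam = 0.5
def g(x):
    s = Dual(0.0)
    for c in H:
        s = s + lam * x + (1.0 / len(H) - lam) * (relu(x - c) - relu(-x + c))
    return s

print([ad(g, x) != 1.0 for x in H])   # [True, True, True, True]
print(ad(g, 0.5))                      # 1.0: AD is correct off the grid H
```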
Given these observations, we raise the following questions: for a program that represents a neural network, at which machine-representable inputs to the program (i.e., parameters to the network) can automatic differentiation be incorrect, and how many such inputs can there be? In this work, we tackle these questions and present the first theoretical results. In particular, we study the two sets of machine-representable parameters of a neural network on which automatic differentiation is incorrect: the _incorrect set_, on which the network is differentiable but automatic differentiation does not compute its derivative, and the _non-differentiable set_, on which the network is non-differentiable.
**Summary of results.** We focus on neural networks consisting of alternating analytic pre-activation functions (e.g., fully-connected and convolution layers) and pointwise continuous activation functions (e.g., \(\mathrm{ReLU}\) and \(\mathrm{Sigmoid}\)). The first set of our results (§3) is for such networks _with bias parameters_ at every layer, and is summarized as follows.
* We prove that the incorrect set is _always empty_, not only over machine-representable parameters but also over real-valued ones. To our knowledge, this is the first result showing that the incorrect set can be empty for a class of neural networks using possibly non-differentiable functions; prior works only bounded the measure of this set.
* On the other hand, the non-differentiable set can be non-empty. We give a tight bound on its density over all machine-representable parameters, which has the form \(n/|\mathbb{H}|\) where \(n\) is the _total number of non-differentiable points_ in activation functions. This result implies that in practice, the non-differentiable set often has a low density, especially if we use high-precision parameters (e.g., use \(32\)-bit floating-point numbers for \(\mathbb{H}\), where \(|\mathbb{H}|\approx 2^{32}\)).
* To better describe the non-differentiable set, we provide a simple, easily verifiable _necessary and sufficient condition_ for a parameter to be in the non-differentiable set. Given that deciding the differentiability of a neural network is NP-hard in general (Bolte et al., 2022), our result is surprising: having bias parameters is sufficient to efficiently decide the differentiability.
* Given that the non-differentiable set can be non-empty, a natural question arises: what does automatic differentiation compute on this set? We prove that automatic differentiation _always computes a Clarke subderivative_ (a generalized derivative) even on the non-differentiable set. That is, automatic differentiation is an efficient algorithm for computing a Clarke subderivative in this case.
The second set of our results (§4) extends the above results to neural networks possibly _without bias parameters_ at some layers, and is summarized as follows.
* As we observed in the \(\mathrm{ReLU}(x)-\mathrm{ReLU}(-x)\) example, the incorrect set can be non-empty in this case. Thus, we prove tight bounds on the density of both the incorrect and non-differentiable sets, which have the form \(n^{\prime}/|\mathbb{H}|\) where \(n^{\prime}\) is linear in the total number of non-differentiable points in activation functions as well as the total number of boundary points in activation functions' zero sets.
* We provide simple, easily verifiable sufficient conditions on parameters under which automatic differentiation computes the standard derivative or a Clarke subderivative.
Our theoretical results carry two main practical implications: automatic differentiation for neural networks is correct on most machine-representable parameters, and it is correct more often with bias parameters. For networks with bias parameters at all layers, our results further provide an exact characterization of when automatic differentiation is correct and what it computes. We remark that our results may not be directly applicable to neural networks with non-analytic pre-activation functions or non-pointwise activation functions. We discuss this and related limitations in §5.
**Organization.** We first introduce notation and the problem setup (§2). We then present our main results for neural networks with bias parameters (§3) and extend them to neural networks possibly without bias parameters (§4). We conclude the paper with discussion (§5).
## 2 Problem Setup
### 2.1 Notation and Definitions
We use the following notation and definitions. Let \(\mathbb{N}\) and \(\mathbb{R}\) be the sets of positive integers and real numbers, respectively. For \(n\in\mathbb{N}\), we use \([n]\triangleq\{1,2,\ldots,n\}\) and \(\vec{0}_{n}\triangleq(0,\ldots,0)\in\mathbb{R}^{n}\), and often drop \(n\) from \(\vec{0}_{n}\) when the subscript is clear from context. For \(x=(x_{1},\ldots,\)\(x_{n})\in\mathbb{R}^{n}\), we use \(x_{-i}\triangleq(x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n})\). We call \(A\subseteq\mathbb{R}\) an _interval_ if it is \([a,b]\), \([a,b)\), \((a,b]\), or \((a,b)\) for some \(a,b\in\mathbb{R}\cup\{\pm\infty\}\). For \(A\subseteq\mathbb{R}^{n}\), \(\mathit{int}(A)\) and \(bd(A)\) denote the interior and the boundary of \(A\), and \(\mathbf{1}_{A}:\mathbb{R}^{n}\rightarrow\{0,1\}\) denotes the indicator function of \(A\). We say that \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) is _analytic_ if it is infinitely differentiable and its Taylor series at any \(x\in\mathbb{R}^{n}\) converges to \(f\) on some neighborhood of \(x\). For \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\),
\[Df:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m\times n}\cup\{\bot\}\]
denotes the standard derivative of \(f\), where \(Df(x)=\bot\) denotes that \(f\) is non-differentiable at \(x\). Lastly, for \(f:\mathbb{R}\rightarrow\mathbb{R}\),
\[\mathsf{ndf}(f) \triangleq\{x\in\mathbb{R}\mid\text{$f$ is non-differentiable at $x$}\},\] \[\mathsf{bdz}(f) \triangleq bd(\{x\in\mathbb{R}\mid f(x)=0\})\]
denote the set of non-differentiable points of \(f\) and the boundary of the zero set of \(f\), respectively.
### 2.2 Neural Networks
We define a neural network as follows, following (Jacot et al., 2018; Laurent and von Brecht, 2018). Given the number of layers \(L\in\mathbb{N}\), let \(N_{0}\in\mathbb{N}\) be the dimension of input data, \(N_{l}\in\mathbb{N}\) and \(W_{l}\in\mathbb{N}\cup\{0\}\) be the number of neurons and the number of parameters at layer \(l\in[L]\), and \(N\triangleq N_{1}+\cdots+N_{L}\) and \(W\triangleq W_{1}+\cdots+W_{L}\). Further, for each \(l\in[L]\), let \(\tau_{l}:\mathbb{R}^{N_{l-1}}\times\mathbb{R}^{W_{l}}\rightarrow\mathbb{R}^{ N_{l}}\) be an analytic _pre-activation function_ and \(\sigma_{l}:\mathbb{R}^{N_{l}}\rightarrow\mathbb{R}^{N_{l}}\) be a pointwise, continuous _activation function_, i.e.,
\[\sigma_{l}(x_{1},\ldots,x_{N_{l}})\triangleq\big{(}\sigma_{l,1}(x_{1}),\ldots,\sigma_{l,N_{l}}(x_{N_{l}})\big{)}\]
for some continuous \(\sigma_{l,i}:\mathbb{R}\rightarrow\mathbb{R}\). Under this setup, we define a neural network as a function of model parameters: given input data \(c\in\mathbb{R}^{N_{0}}\), a _neural network_\(z_{L}(\,\cdot\,;c):\mathbb{R}^{W}\rightarrow\mathbb{R}^{N_{L}}\) is defined as
\[z_{L}(w;c)\triangleq(\sigma_{L}\circ\tau_{L}^{(w_{L})}\circ\cdots\circ\sigma_{ 1}\circ\tau_{1}^{(w_{1})})(c),\]
where \(w\triangleq(w_{1},\ldots,w_{L})\), \(w_{l}\triangleq(w_{l,1},\ldots,w_{l,W_{l}})\in\mathbb{R}^{W_{l}}\), and \(\tau_{l}^{(w_{l})}(x)\triangleq\tau_{l}(x,w_{l})\). We say such \(z_{L}\)_has \(L\) layers, \(N\) neurons, and \(W\) parameters_.
We next define the _activation neurons_\(z_{l}(\,\cdot\,;c):\mathbb{R}^{W}\rightarrow\mathbb{R}^{N_{l}}\) and the _pre-activation values_\(y_{l}(\,\cdot\,;c):\mathbb{R}^{W}\rightarrow\mathbb{R}^{N_{l}}\) at layer \(l\in[L]\), as we defined \(z_{L}\) above:
\[z_{l}(w;c)\triangleq(\sigma_{l}\circ\tau_{l}^{(w_{l})}\circ\cdots\circ\sigma_{ 1}\circ\tau_{1}^{(w_{1})})(c),\]
\[y_{l}(w;c)\triangleq\tau_{l}^{(w_{l})}(z_{l-1}(w;c)),\]
where \(z_{0}(w;c)\triangleq c\). Since the input data \(c\) is fixed while we compute the derivative of \(z_{L}\) with respect to \(w\) (e.g., in order to train \(z_{L}\)), we often omit \(c\) and simply write \(z_{l}(w)\) and \(y_{l}(w)\) to denote \(z_{l}(w;c)\) and \(y_{l}(w;c)\), respectively.
For the set of all indices of neurons
\[\mathsf{ldx}\triangleq\{(l,i)\mid l\in[L],i\in[N_{l}]\}\]
and for each \((l,i)\in\mathsf{ldx}\), we use \(y_{l,i},z_{l,i}:\mathbb{R}^{W}\rightarrow\mathbb{R}\) and \(\tau_{l,i}:\mathbb{R}^{N_{l-1}}\times\mathbb{R}^{W_{l}}\rightarrow\mathbb{R}\) to denote the functions that take only the \(i\)-th output component of \(y_{l}\), \(z_{l}\), and \(\tau_{l}\), respectively. Note that we defined \(\sigma_{l,i}\) above in a slightly different way: its domain is not \(\mathbb{R}^{N_{l}}\) (i.e., the domain of \(\sigma_{l}\)) but \(\mathbb{R}\).
Finally, we introduce the notion of piecewise-analytic5 to consider possibly non-differentiable activation functions.
Footnote 5: It is inspired by the notion of PAP in Lee et al. (2020).
**Definition 2.1.** A function \(f:\mathbb{R}\rightarrow\mathbb{R}\) is _piecewise-analytic_ if there exist \(n\in\mathbb{N}\), a partition \(\{A_{i}\}_{i\in[n]}\) of \(\mathbb{R}\) consisting of non-empty intervals, and analytic functions \(\{f_{i}:\mathbb{R}\rightarrow\mathbb{R}\}_{i\in[n]}\) such that \(f=f_{i}\) on \(A_{i}\) for all \(i\in[n]\).
**Assumption.** \(\sigma_{l,i}\) is piecewise-analytic for all \((l,i)\in\mathsf{ldx}\).
The class of piecewise-analytic functions includes not only all analytic functions but also many non-differentiable functions widely used in neural networks such as ReLU, LeakyReLU, and HardSigmoid. Hence, our definition of neural networks includes a rich class of practical networks: \(\tau_{l}\) can be any analytic function (e.g., a fully-connected, convolution, or normalization layer), and \(\sigma_{l}\) can be any pointwise continuous and piecewise-analytic function (e.g., ReLU, LeakyReLU, or HardSigmoid).
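As a worked instance of Definition 2.1, take the HardSigmoid activation in its common implementation (e.g., in PyTorch, \(\mathrm{clamp}(x/6+1/2,0,1)\)): it is piecewise-analytic with \(n=3\), since

```latex
\sigma(x)=
\begin{cases}
0, & x\in A_1=(-\infty,-3),\\
x/6+1/2, & x\in A_2=[-3,3],\\
1, & x\in A_3=(3,\infty),
\end{cases}
\qquad
\mathsf{ndf}(\sigma)=\{-3,3\},\quad
\mathsf{bdz}(\sigma)=\{-3\},
```

with the analytic pieces \(f_1(x)=0\), \(f_2(x)=x/6+1/2\), and \(f_3(x)=1\) defined on all of \(\mathbb{R}\).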
In practice, we often apply automatic differentiation to the composition of a neural network \(z_{L}\) and a loss function \(\ell\) (e.g., Softmax followed by CrossEntropy), to compute the derivative of the loss value of \(z_{L}\) with respect to its parameters. We emphasize that all of our results continue to hold even if \(z_{L}\) in the results is replaced by \(\ell\circ z_{L}\) for any analytic \(\ell:\mathbb{R}^{N_{L}}\rightarrow\mathbb{R}^{m}\). For simplicity, however, we state our results only for \(z_{L}\) and not for \(\ell\circ z_{L}\).
### 2.3 Automatic Differentiation
As discussed in SS1, automatic differentiation operates not on mathematical functions, but on programs that represent those functions. To this end, we define a program \(\mathtt{P}\) that represents a function from \(\mathbb{R}^{W}\) to \(\mathbb{R}\) as follows:
\[\mathtt{P}::=r\mid\mathtt{w}_{l,j}\mid f\left(\mathtt{P}_{1},\ldots,\mathtt{P} _{n}\right)\]
where \(r\in\mathbb{R}\), \(l\in[L]\), \(j\in[W_{l}]\), \(f\in\{\tau_{l,i},\sigma_{l,i}\mid(l,i)\in\mathsf{ldx}\}\), and \(n\in\mathbb{N}\). This definition says that a program
can be either a real-valued constant \(r\), a real-valued parameter \(w_{l,j}\), or the application of a function \(f:\mathbb{R}^{n}\to\mathbb{R}\) to subprograms \(\mathtt{P}_{1},\ldots,\mathtt{P}_{n}\). In this paper, we focus on particular programs \(\mathtt{P}_{y_{l,i}}\) and \(\mathtt{P}_{z_{l,i}}\) that represent the functions \(y_{l,i}(\,\cdot\,;c),z_{l,i}(\,\cdot\,;c):\mathbb{R}^{W}\to\mathbb{R}\) and are defined in a canonical way as follows:
\[\mathtt{P}_{y_{l,i}} \triangleq\tau_{l,i}\left(\mathtt{P}_{z_{l-1,1}},\ldots,\mathtt{ P}_{z_{l-1,N_{l-1}}},w_{l,1},\ldots,w_{l,W_{l}}\right),\] \[\mathtt{P}_{z_{l,i}} \triangleq\sigma_{l,i}\left(\mathtt{P}_{y_{l,i}}\right),\]
where \(\mathtt{P}_{z_{0,i^{\prime}}}\triangleq c_{i^{\prime}}\) for \(i^{\prime}\in[N_{0}]\) represents the constant function \(z_{0,i^{\prime}}(\,\cdot\,;c):\mathbb{R}^{W}\to\mathbb{R}\).
Given a program \(\mathtt{P}\), we define \(\llbracket\mathtt{P}\rrbracket:\mathbb{R}^{W}\to\mathbb{R}\) as the function represented by \(\mathtt{P}\), and \(\llbracket\mathtt{P}\rrbracket^{\mathsf{AD}}:\mathbb{R}^{W}\to\mathbb{R}^{1 \times W}\) as the function that automatic differentiation computes when applied to \(\mathtt{P}\). These functions are defined inductively as follows (Abadi and Plotkin, 2020; Baydin et al., 2017; Lee et al., 2020):
\[\llbracket r\rrbracket(w)\triangleq r,\qquad\llbracket\mathtt{w}_{l,j}\rrbracket(w)\triangleq w_{l,j},\]
\[\llbracket f\left(\mathtt{P}_{1},\ldots,\mathtt{P}_{n}\right)\rrbracket(w)\triangleq f\big{(}\llbracket\mathtt{P}_{1}\rrbracket(w),\ldots,\llbracket\mathtt{P}_{n}\rrbracket(w)\big{)},\]
\[\llbracket r\rrbracket^{\mathsf{AD}}(w)\triangleq\mathtt{0},\qquad\llbracket\mathtt{w}_{l,j}\rrbracket^{\mathsf{AD}}(w)\triangleq\mathtt{1}_{l,j},\]
\[\llbracket f\left(\mathtt{P}_{1},\ldots,\mathtt{P}_{n}\right)\rrbracket^{\mathsf{AD}}(w)\triangleq D^{\mathsf{AD}}f\big{(}\llbracket\mathtt{P}_{1}\rrbracket(w),\ldots,\llbracket\mathtt{P}_{n}\rrbracket(w)\big{)}\cdot\big{[}\llbracket\mathtt{P}_{1}\rrbracket^{\mathsf{AD}}(w)\,\big{/}\,\cdots\,\big{/}\,\llbracket\mathtt{P}_{n}\rrbracket^{\mathsf{AD}}(w)\big{]}.\]
Here \(w_{l,j}\in\mathbb{R}\) is defined as \((w_{1,1},w_{1,2},\ldots,w_{L,W_{L}})\triangleq w\), \(\mathtt{0},\mathtt{1}_{l,j}\in\mathbb{R}^{1\times W}\) denote the zero matrix and the matrix whose entries are all zeros except for a single one at the \((W_{1}+\cdots+W_{l-1}+j)\)-th entry, \(D^{\mathsf{AD}}f:\mathbb{R}^{n}\to\mathbb{R}^{1\times n}\) denotes a "derivative" of \(f\) used by automatic differentiation, and \([M_{1}\,/\cdots\,/\,M_{n}]\) denotes the matrix that stacks up matrices \(M_{1},\ldots,M_{n}\) vertically. Note that \(\llbracket f\left(\mathtt{P}_{1},\ldots,\mathtt{P}_{n}\right)\rrbracket^{\mathsf{AD}}\) captures the essence of automatic differentiation: it computes derivatives based on the chain rule for differentiation.
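The inductive definitions of \(\llbracket\cdot\rrbracket\) and \(\llbracket\cdot\rrbracket^{\mathsf{AD}}\) translate almost line-for-line into a toy recursive evaluator. This is an illustrative sketch (the program encoding and helper names are our own), not a production AD system:

```python
# A program is ('const', r), ('param', j), or ('app', f, dADf, subprograms),
# where f: R^n -> R and dADf: R^n -> list of n floats is the "derivative"
# D^AD f that automatic differentiation uses for f.

def evaluate(P, w):
    """[[P]](w): the function represented by program P, at parameters w."""
    if P[0] == 'const':
        return P[1]
    if P[0] == 'param':
        return w[P[1]]
    _, f, _, subs = P
    return f(*[evaluate(Q, w) for Q in subs])

def ad(P, w):
    """[[P]]^AD(w): a row vector of length len(w), built by the chain rule."""
    if P[0] == 'const':
        return [0.0] * len(w)
    if P[0] == 'param':
        row = [0.0] * len(w)
        row[P[1]] = 1.0
        return row
    _, f, dADf, subs = P
    vals = [evaluate(Q, w) for Q in subs]
    rows = [ad(Q, w) for Q in subs]        # stacked [[P_i]]^AD(w)
    coeffs = dADf(*vals)                   # D^AD f at the subprogram values
    return [sum(c * r[k] for c, r in zip(coeffs, rows)) for k in range(len(w))]

# ReLU with D^AD ReLU(0) = 0, and the program ReLU(w_0) - ReLU(-w_0):
def app(f, dADf, *subs):
    return ('app', f, dADf, list(subs))

relu_f = lambda a: max(a, 0.0)
relu_d = lambda a: [1.0 if a > 0 else 0.0]
neg = lambda P: app(lambda a: -a, lambda a: [-1.0], P)

P = app(lambda a, b: a - b, lambda a, b: [1.0, -1.0],
        app(relu_f, relu_d, ('param', 0)),
        app(relu_f, relu_d, neg(('param', 0))))

print(evaluate(P, [2.0]), ad(P, [2.0]))   # 2.0 [1.0]
print(evaluate(P, [0.0]), ad(P, [0.0]))   # 0.0 [0.0]
```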
Using the above definitions, we define \(D^{\mathsf{AD}}z_{L}:\mathbb{R}^{W}\to\mathbb{R}^{N_{L}\times W}\) as the result of applying automatic differentiation6 to a program that canonically represents a neural network \(z_{L}:\mathbb{R}^{W}\to\mathbb{R}^{N_{L}}\):
Footnote 6: We remark that the definition of \(D^{\mathsf{AD}}z_{L}\) (and \(\llbracket\mathtt{P}\rrbracket^{\mathsf{AD}}\)) does not assume any specific choice of automatic differentiation algorithms (e.g., forward-mode or reverse-mode). Thus, all of our results are applicable to any choice of those algorithms.
\[D^{\mathsf{AD}}z_{L}(w)\triangleq\big{[}\llbracket\mathtt{P}_{z_{L,1}} \rrbracket^{\mathsf{AD}}(w)\,\big{/}\cdots\big{/}\,\llbracket\mathtt{P}_{z_{L,N _{L}}}\rrbracket^{\mathsf{AD}}(w)\big{]}.\]
The output \(D^{\mathsf{AD}}z_{L}\) of automatic differentiation depends on the choice of \(D^{\mathsf{AD}}\sigma_{l,i}\), i.e., the "derivative" of each activation function that automatic differentiation uses (e.g., \(D^{\mathsf{AD}}\mathrm{ReLU}(0)\), typically taken to be zero). To study where automatic differentiation is incorrect over machine-representable parameters, let \(\mathbb{M}\subseteq\mathbb{R}\) be a finite set of machine-representable numbers and \(\Omega\triangleq\mathbb{M}^{W}\) be the corresponding parameter domain. We define the _incorrect set_ and the _non-differentiable set_ of \(z_{L}\) over \(\Omega\) as

\[\mathsf{inc}_{\Omega}(z_{L})\triangleq\{w\in\Omega\mid Dz_{L}(w)\neq\bot\text{ and }D^{\mathsf{AD}}z_{L}(w)\neq Dz_{L}(w)\},\]
\[\mathsf{ndf}_{\Omega}(z_{L})\triangleq\{w\in\Omega\mid Dz_{L}(w)=\bot\}.\]

On the incorrect set \(\mathsf{inc}_{\Omega}(z_{L})\),
\(z_{L}\) is differentiable but automatic differentiation does not compute its standard derivative; on the non-differentiable set \(\mathsf{ndf}_{\Omega}(z_{L})\), \(z_{L}\) is non-differentiable but automatic differentiation outputs some finite values without informing non-differentiability. Note that \(\mathsf{ndf}_{\Omega}(z_{L})\subseteq\Omega\) is different from \(\mathsf{ndf}(f)\subseteq\mathbb{R}\), which was defined in §2.1 for \(f:\mathbb{R}\to\mathbb{R}\).
## 3 Correctness of Automatic Differentiation for Neural Networks with Bias Parameters
Our main objective is to understand the incorrect and non-differentiable sets. In particular, we focus on neural networks with bias parameters (defined below) in this section and consider more general neural networks in §4. For the former class of neural networks, we characterize the incorrect and non-differentiable sets in §3.1 and §3.2, and establish a connection between automatic differentiation and Clarke subderivatives (a generalized notion of derivative) in §3.3.
We start by defining neural networks with bias parameters.
**Definition 3.1.** A pre-activation function \(\tau_{l}:\mathbb{R}^{N_{l-1}}\times\mathbb{R}^{W_{l}}\to\mathbb{R}^{N_{l}}\) of a neural network _has bias parameters_ if \(W_{l}\geq N_{l}\) and for all \(i\in[N_{l}]\), there is \(f_{i}:\mathbb{R}^{N_{l-1}}\times\mathbb{R}^{W_{l}-N_{l}}\to\mathbb{R}\) such that \(\tau_{l,i}(x,(u,v))=f_{i}(x,u)+v_{i}\) for all \((u,v)\in\mathbb{R}^{W_{l}-N_{l}}\times\mathbb{R}^{N_{l}}\). Here \(v_{i}\) is called the _bias parameter of \(\tau_{l,i}\)_. A neural network \(z_{L}\) _has bias parameters_ if \(\tau_{l}\) has bias parameters for all \(l\in[L]\).
We note that many popular pre-activation functions, including fully-connected, convolution, and normalization layers, are typically implemented with bias parameters in practice.
### 3.1 Characterization of the Incorrect Set
We first show that the incorrect set of a neural network is _always empty_ if the network has bias parameters, i.e., automatic differentiation computes the standard derivative wherever the network is differentiable.
**Theorem 3.2.** _If a neural network \(z_{L}\) has bias parameters, then for all \(w\in\mathbb{R}^{W}\) at which \(z_{L}\) is differentiable,_
\[D^{\mathsf{AD}}z_{L}(w)=Dz_{L}(w). \tag{1}\]
_This implies that \(|\mathsf{inc}_{\Omega}(z_{L})|=0\)._
It should be emphasized that Eq. (1) is not only for machine-representable parameters, but also for any _real-valued_ parameters. Compared to existing results, this result is surprising. For instance, Bolte & Pauwels (2020); Lee et al. (2020) show that the incorrect set over \(\mathbb{R}^{n}\) (not over \(\mathbb{M}^{n}\)) has Lebesgue measure zero for some classes of programs, but they do not give any results on whether the set can be empty. In contrast, Theorem 3.2 states that the incorrect set over \(\mathbb{R}^{n}\) is empty for a smaller, yet still large class of programs, i.e., neural networks with bias parameters.
In Theorem 3.2, the condition that \(z_{L}\) has bias parameters plays a crucial role. Namely, Theorem 3.2 does not hold if this condition is dropped. For instance, consider a neural network \(z_{L}:\mathbb{R}\to\mathbb{R}\) that is essentially the same as \(f:\mathbb{R}\to\mathbb{R}\) with \(f(w)=\mathrm{ReLU}(w)-\mathrm{ReLU}(-w)\) (which we discussed in §1). Then, \(z_{L}\) does not have bias parameters, and \(\mathsf{inc}_{\Omega}(z_{L})\) is non-empty if \(D^{\mathsf{AD}}\mathrm{ReLU}=\mathbf{1}_{(0,\infty)}\) is used.
The proof of Theorem 3.2 consists of the following two arguments: for all \(w\in\mathbb{R}^{W}\) with \(Dz_{L}(w)\neq\bot\),
1. if \(y_{l,i}(w)\in\mathsf{ndf}(\sigma_{l,i})\), then \(\partial z_{L}/\partial z_{l,i}=\vec{0}\) at \(w\), and
2. if (i) holds, then \(D^{\mathsf{AD}}z_{L}(w)=Dz_{L}(w)\).
That is, (i) if a pre-activation value \(y_{l,i}\) touches a non-differentiable point of its activation function \(\sigma_{l,i}\), then the derivative of \(z_{L}\) with respect to \(z_{l,i}\) should always be zero; and (ii) Theorem 3.2 follows from (i). We point out that the proof of (i) relies heavily on the bias parameter condition. For more details, see Appendix C.
### 3.2 Characterization of the Non-Differentiable Set
We next show that if a neural network has bias parameters, then the density of the non-differentiable set in \(\Omega\) is bounded by \(n/|\mathbb{M}|\), where \(n\) is the total number of non-differentiable points in activation functions.
**Theorem 3.3.** _If a neural network \(z_{L}\) has bias parameters,_
\[\frac{|\mathsf{ndf}_{\Omega}(z_{L})|}{|\Omega|}\leq\frac{1}{|\mathbb{M}|}\sum_ {(l,i)\in\mathsf{ldx}}|\mathsf{ndf}(\sigma_{l,i})|\]
_where \(\mathsf{ndf}(f)\) is the set of non-differentiable points of \(f\)._
In many practical settings, the bound in Theorem 3.3 is often small, especially under high-precision parameters. For example, \(\mathbb{M}\) is frequently chosen as the set of \(32\)-bit floating-point numbers so \(|\mathbb{M}|\approx 2^{32}\), while \(|\mathsf{ldx}|\) (the number of neurons) is often smaller than \(2^{32}\) and \(|\mathsf{ndf}(\sigma_{l,i})|\) is typically small (e.g., \(0\) for differentiable \(\sigma_{l,i}\), \(1\) for \(\mathrm{ReLU}\), and \(2\) for \(\mathrm{HardSigmoid}\)). This implies that in practice, the non-differentiable set often has a low density in \(\Omega\). We remark, however, that the bound in Theorem 3.3 can grow large in low-precision settings (e.g., when parameters are represented by \(\leq 16\)-bit numbers).
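The contrast between high- and low-precision regimes amounts to simple arithmetic; the network size below is a hypothetical example, not one from the paper:

```python
# Theorem 3.3's density bound for an all-ReLU network: each ReLU neuron
# contributes |ndf(ReLU)| = 1 non-differentiable point.
num_neurons = 10**6          # assumed network size (hypothetical)
m_size = 2**32               # |M| for 32-bit floating-point parameters
bound = num_neurons * 1 / m_size
print(bound)                 # ~2.3e-4: ndf_Omega(z_L) is sparse in Omega

# The same network with 16-bit parameters: the bound is no longer small.
print(num_neurons * 1 / 2**16)   # ~15.3, i.e., the bound becomes vacuous
```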
Although the bound in Theorem 3.3 can be large in some cases (e.g., when \(|\mathbb{M}|\) is small), we prove that the bound is in general tight up to a constant multiplicative factor.
**Theorem 3.4**.: _For any \(\mathbb{M}\subseteq\mathbb{R}\) and \(n,\alpha\in\mathbb{N}\) with \(1\leq|\mathbb{M}|<\infty\), \(n\geq 2\), and \(\alpha\leq|\mathbb{M}|/(n-1)\), there is a neural network \(z_{L}:\mathbb{R}^{W}\to\mathbb{R}\) with bias parameters that satisfies_
\[\frac{|\mathsf{ndf}_{\Omega}(z_{L})|}{|\Omega|}\geq\frac{1}{2}\cdot\frac{1}{| \mathbb{M}|}\sum_{(l,i)\in\mathsf{ldx}}|\mathsf{ndf}(\sigma_{l,i})|\]
_and the following: \(z_{L}\) has \(n+1\) neurons and \(|\mathsf{ndf}(\sigma_{1,i})|=\alpha\) for all \(i\in[N_{1}]\)._
In Theorem 3.4, the condition \(\alpha\leq|\mathbb{M}|/(n-1)\) is for achieving the constant \(1/2\) in the bound. A similar bound can be derived for a larger \(\alpha\) (i.e., \(\alpha>|\mathbb{M}|/(n-1)\)) but with a constant smaller than \(1/2\).
Theorems 3.3 and 3.4 describe how large the non-differentiable set \(\mathsf{ndf}_{\Omega}(z_{L})\) can be, but give no clue about exactly which parameters constitute this set. To better understand this, we present an easily verifiable _necessary and sufficient_ condition for characterizing \(\mathsf{ndf}_{\Omega}(z_{L})\).
**Theorem 3.5**.: _If a neural network \(z_{L}\) has bias parameters, then the following are equivalent for all \(w\in\mathbb{R}^{W}\)._
* \(z_{L}\) _is non-differentiable at_ \(w\)_._
* \(y_{l,i}(w)\in\mathsf{ndf}(\sigma_{l,i})\) _and_ \(\partial^{\mathsf{AD}}z_{L}/\partial z_{l,i}\neq\vec{0}\) _at_ \(w\) _for some_ \((l,i)\in\mathsf{ldx}\)_._
Here \(\partial^{\mathsf{AD}}z_{L}/\partial z_{l,i}\) denotes the partial derivative of \(z_{L}\) with respect to \(z_{l,i}\) that _reverse-mode_ automatic differentiation (e.g., backpropagation) computes as a byproduct of computing \(D^{\mathsf{AD}}z_{L}\) (see Appendix E for more details). Hence, Theorem 3.5 implies that we can efficiently decide whether a neural network with bias parameters is differentiable at a (real-valued) parameter or not. This result is somewhat surprising given a recent, relevant result that deciding such differentiability is NP-hard in general (Bolte et al., 2022).
Footnote 7: in \(\mathcal{O}(N_{L}T)\) time for a neural network \(z_{L}:\mathbb{R}^{W}\to\mathbb{R}^{N_{L}}\) where \(T\) is the time to evaluate \(z_{L}(w)\), because reverse-mode automatic differentiation takes \(\mathcal{O}(N_{L}T)\) time to compute \(D^{\mathsf{AD}}z_{L}(w)\).
Using Theorems 3.2 and 3.5, we can efficiently determine whether the output of (reverse-mode) automatic differentiation is correct with respect to the standard derivative: if the second item in Theorem 3.5 holds, then the output is always incorrect (since the standard derivative should be \(\perp\)); otherwise, it is always correct (by Theorem 3.2).
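Theorems 3.2 and 3.5 together yield a concrete procedure: run one reverse-mode pass, and flag the parameter as a non-differentiable point exactly when some pre-activation sits on a kink of its activation while its backpropagated adjoint is nonzero. A minimal sketch for a one-hidden-layer ReLU network with bias parameters follows; all names, sizes, and values are illustrative, not from the paper:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def is_nondifferentiable(W1, b1, w2, x):
    """Check the condition of Theorem 3.5 for a toy scalar network
    z = w2 @ relu(W1 @ x + b1), viewed as a function of its parameters
    (the input x is held fixed). Illustrative sketch only."""
    y1 = W1 @ x + b1                 # hidden pre-activations y_{1,i}
    adjoint = w2                     # dz/dz_{1,i} from reverse-mode AD
    hits = (y1 == 0.0)               # y_{1,i} in ndf(ReLU) = {0}
    return bool(np.any(hits & (adjoint != 0.0)))

W1 = np.array([[1.0], [2.0]])
w2 = np.array([1.0, 1.0])
x = np.array([3.0])

# Bias b1 = (-3, 1) puts the first pre-activation exactly at the ReLU kink
# while its adjoint (= w2[0]) is nonzero, so z is non-differentiable there:
print(is_nondifferentiable(W1, np.array([-3.0, 1.0]), w2, x))  # True

# Perturbing the bias moves the pre-activation off the kink; by
# Theorem 3.2, AD then returns the standard derivative:
print(is_nondifferentiable(W1, np.array([-3.5, 1.0]), w2, x))  # False
```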
We now sketch the proof of Theorem 3.3, to explain how we obtain the bound in the theorem and where we use the bias parameter condition. First, we prove that if \(y_{l,i}(w)\) does not touch any non-differentiable point of \(\sigma_{l,i}\) for all \((l,i)\in\mathsf{ldx}\), then \(z_{L}\) is differentiable at \(w\). In other words,
\[\mathsf{ndf}_{\Omega}(z_{L})\subseteq\bigcup_{(l,i)\in\mathsf{ldx},\;c\in\mathsf{ndf}(\sigma_{l,i})}\{w\in\Omega\mid y_{l,i}(w)=c\}.\tag{2}\]

We then bound the size of each set in this union: for all \((l,i)\in\mathsf{ldx}\) and \(c\in\mathsf{ndf}(\sigma_{l,i})\),

\[|\{w\in\Omega\mid y_{l,i}(w)=c\}|\leq|\mathbb{M}|^{W-1}.\tag{3}\]

The bias parameter condition is used precisely in Eq. (3): once all coordinates of \(w\) other than the bias parameter of \(\tau_{l}\) are fixed, the constraint \(y_{l,i}(w)=c\) can be satisfied by at most one value of that bias, so at most \(|\mathbb{M}|^{W-1}\) parameters in \(\Omega\) satisfy it. Summing Eq. (3) over all \((l,i)\in\mathsf{ldx}\) and \(c\in\mathsf{ndf}(\sigma_{l,i})\) and dividing by \(|\Omega|=|\mathbb{M}|^{W}\) yields the bound in Theorem 3.3.
## 4 Correctness of Automatic Differentiation for Neural Networks without Bias Parameters
In this section, we investigate the correctness of automatic differentiation for neural networks that may or may not have bias parameters. For such general networks, however, considering only the properties of activation functions such as \(\mathsf{ndf}\left(\sigma_{l,i}\right)\) (as we did in §3) is insufficient to derive non-trivial bounds on the size of the incorrect and non-differentiable sets, as long as general pre-activation functions are used.
To illustrate this, consider neural networks \(z_{L},\widehat{z}_{L}:\mathbb{R}\to\mathbb{R}\) that are essentially the same as \(f,\widehat{f}:\mathbb{R}\to\mathbb{R}\) with \(f(w)=\mathrm{ReLU}(h(w))-\mathrm{ReLU}(-h(w))\) and \(\widehat{f}(w)=\mathrm{ReLU}(h(w))\), where \(h:\mathbb{R}\to\mathbb{R}\) is some analytic pre-activation function satisfying \(h(x)=0\) and \(Dh(x)=1\) for all \(x\in\mathbb{M}\). Suppose that \(D^{\mathsf{AD}}\mathrm{ReLU}=\mathbf{1}_{(0,\infty)}\). Then, we have \(\mathsf{inc}_{\Omega}(z_{L})=\mathsf{ndf}_{\Omega}(\widehat{z}_{L})=\Omega\) even though \(z_{L}\) and \(\widehat{z}_{L}\) have only \(\leq 2\) non-differentiable points in their activation functions. The main culprit of having such large \(\mathsf{inc}_{\Omega}(z_{L})\) and \(\mathsf{ndf}_{\Omega}(\widehat{z}_{L})\), even with a tiny number of non-differentiable points in activation functions, is that \(z_{L}\) and \(\widehat{z}_{L}\) use the unrealistic pre-activation function \(h\) which does not have bias parameters.
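The failure of \(f\) can be seen concretely. The sketch below takes the degenerate illustrative choice \(\mathbb{M}=\{0\}\) and \(h(w)=w\) (which satisfies \(h(0)=0\), \(Dh(0)=1\)), so that \(f(w)=\mathrm{ReLU}(w)-\mathrm{ReLU}(-w)=w\) exactly and the true derivative at \(0\) is \(1\), while AD with \(D^{\mathsf{AD}}\mathrm{ReLU}=\mathbf{1}_{(0,\infty)}\) returns \(0\):

```python
# Simplest concrete instance of the counterexample above, with the
# (assumed, degenerate) machine-representable set M = {0} and h(w) = w.
def relu(x):
    return max(x, 0.0)

def d_relu_ad(x):
    return 1.0 if x > 0 else 0.0     # D^AD ReLU = 1_(0, inf)

def f(w):
    return relu(w) - relu(-w)        # equals the identity function w

def df_ad(w):
    # chain rule exactly as reverse-mode AD would apply it
    return d_relu_ad(w) * 1.0 - d_relu_ad(-w) * (-1.0)

print(f(0.5), f(-0.5))   # f is the identity: 0.5 -0.5
print(df_ad(0.0))        # AD returns 0.0, but the true derivative is 1.0
```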
To exclude such extreme cases and focus on realistic neural networks, we will often consider _well-structured biaffine_ pre-activation functions when they do not have bias parameters.
**Definition 4.1**.: A pre-activation function \(\tau_{l}:\mathbb{R}^{N_{l-1}}\times\mathbb{R}^{W_{l}}\to\mathbb{R}^{N_{l}}\) is _well-structured biaffine_ if there are \(M_{i}\in\mathbb{R}^{N_{l-1}\times W_{l}}\) and \(c_{i}\in\mathbb{R}\) for all \(i\in[N_{l}]\) such that \(\tau_{l,i}(x,u)=x^{\mathsf{T}}M_{i}u+c_{i}\) and each column of \(M_{i}\) has at most one non-zero entry.
We note that any fully-connected or convolution layer is well-structured biaffine if it does not have bias parameters. Thus, a large class of neural networks is still under our consideration even after we impose the above restriction.
We now present our results for neural networks possibly without bias parameters, extending Theorems 3.2-3.6.
### Bounds for Non-Differentiable and Incorrect Sets
We first bound the density of the non-differentiable and incorrect sets in \(\Omega\), extending Theorem 3.3.
**Theorem 4.2**.: _If a pre-activation function \(\tau_{l}\) has bias parameters or is well-structured biaffine for all \(l\in[L]\), then_
\[\frac{|\mathsf{ndf}_{\Omega}(z_{L})\cup\mathsf{inc}_{\Omega}(z_{L })|}{|\Omega|}\] \[\qquad\leq\frac{1}{|\mathbb{M}|}\sum_{(l,i)\in\mathsf{ldx}}\Bigl{|} \mathsf{ndf}(\sigma_{l,i})\cup\bigl{(}\mathsf{bdz}(\sigma_{l,i})\cap S_{l+1} \bigr{)}\Bigr{|},\]
_where \(\mathsf{bdz}(f)\) is the boundary of \(f\)'s zero set (see §2.1), and_
\[S_{l}\triangleq\begin{cases}\emptyset&\text{ if }l>L\text{ or }\tau_{l}\text{ has bias parameters}\\ \mathbb{R}&\text{ otherwise}.\end{cases}\]
We note that if \(z_{L}\) has bias parameters, Theorem 4.2 reduces to Theorem 3.3 since \(\mathsf{inc}_{\Omega}(z_{L})=\emptyset\) (by Theorem 3.2) and \(S_{l}=\emptyset\) for all \(l\) (by its definition) in such a case.
As in Theorem 3.3, the bound in Theorem 4.2 is often small for neural networks that use practical activation functions, since \(|\mathsf{ndf}(\sigma_{l,i})\cup\mathsf{bdz}(\sigma_{l,i})|\) is typically small for those activation functions (e.g., \(1\) for \(\mathrm{ReLU}\) and \(2\) for \(\mathrm{HardSigmoid}\)).
We now show that the additional term \(\mathsf{bdz}(\sigma_{l,i})\) in Theorem 4.2 is indeed necessary by providing a matching lower bound up to a constant factor.
**Theorem 4.3**.: _For any \(\mathbb{M}\subseteq\mathbb{R}\) and \(n,\alpha\in\mathbb{N}\) with \(1\leq|\mathbb{M}|<\infty\), \(n\geq 4\), and \(\alpha\leq|\mathbb{M}|/(n-1)\), there is a neural network \(z_{L}:\mathbb{R}^{W}\to\mathbb{R}\) that satisfies_
\[\frac{|\mathsf{ndf}_{\Omega}(z_{L})|}{|\Omega|}\geq\frac{1}{9}\cdot\frac{1}{| \mathbb{M}|}\sum_{(l,i)\in\mathsf{ldx}}\Bigl{|}\mathsf{ndf}(\sigma_{l,i}) \cup\mathsf{bdz}(\sigma_{l,i})\Bigr{|}\]
_and the following: (i) \(\tau_{l}\) is well-structured biaffine without bias parameters for all \(l<L\), and has bias parameters for \(l=L\); (ii) \(z_{L}\) has \(n+1\) neurons; and (iii) \(|\mathsf{ndf}(\sigma_{1,i})|=\alpha\) and \(|\mathsf{bdz}(\sigma_{1,i})|=0\) for all \(i\). We obtain the same result for (i), (ii'), and (iii'): (ii') \(z_{L}\) has \(2n+1\) neurons; and (iii') \(|\mathsf{ndf}(\sigma_{1,i})|=0\) and \(|\mathsf{bdz}(\sigma_{1,i})|=\alpha\) for all \(i\)._
We next give an intuition for why the zero set of \(\sigma_{l,i}\) (from which the additional term \(\mathsf{bdz}(\sigma_{l,i})\) is defined) appears in Theorem 4.2, by examining its proof. The proof consists of two main parts that extend Eqs. (2) and (3) from the proof sketch of Theorem 3.3: we first show
\[\mathsf{ndf}_{\Omega}(z_{L})\cup\mathsf{inc}_{\Omega}(z_{L})\subseteq\bigcup_{ (l,i)\in\mathsf{ldx},\;c\in\mathsf{ndf}(\sigma_{l,i})}\{w\in\Omega\mid y_{l,i }(w)=c\}\]
and then find a reasonable bound on \(|\Lambda_{l,i,c}|\) for \(\Lambda_{l,i,c}\triangleq\{w\in\Omega\mid y_{l,i}(w)=c\}\), the set of parameters on which the pre-activation value \(y_{l,i}\) touches the non-differentiable point \(c\) of \(\sigma_{l,i}\). Among the two parts, the zero set of \(\sigma_{l,i}\) arises from the second part (i.e., bounding \(|\Lambda_{l,i,c}|\)), especially when \(\tau_{l}\) does not have bias parameters and is well-structured biaffine. For simplicity, assume that \(\tau_{l}\) is a fully-connected layer with constant biases, i.e., \(y_{l,i}(w)=\sum_{j\in[N_{l-1}]}z_{l-1,j}(w)\cdot w_{j+a}+b\) for some constants \(a,b\). Based on this, we decompose \(\Lambda_{l,i,c}\) into \(\Lambda^{\prime}\cup\Lambda^{\prime\prime}\):
\[\Lambda^{\prime} \triangleq\{w\in\Omega\mid y_{l,i}(w)=c,\;z_{l-1,j}(w)\neq 0\text{ for some }j\},\] \[\Lambda^{\prime\prime} \triangleq\{w\in\Omega\mid y_{l,i}(w)=c,\;z_{l-1,j}(w)=0\text{ for all }j\}.\]
Then, we can show \(|\Lambda^{\prime}|\leq|\mathbb{M}|^{W-1}\) as in Eq. (3), since \(w_{j+a}\) acts like a bias parameter of \(y_{l,i}\) for any \(j\) with \(z_{l-1,j}(w)\neq 0\). To bound \(|\Lambda^{\prime\prime}|\), however, we cannot apply a similar approach due to the lack of \(j\) with \(z_{l-1,j}(w)\neq 0\). Instead, we directly count the number of parameters \(w\in\Omega\) achieving \(z_{l-1,j}(w)=0\) for all \(j\) (i.e., \(\sigma_{l-1,j}(y_{l-1,j}(w))=0\) for all \(j\)), and this requires the zero set of \(\sigma_{l-1,j}\). For the full proofs of Theorems 4.2 and 4.3, see Appendices B and D.
### Bounds for the Incorrect Set
For the non-differentiable set, Theorems 4.2 and 4.3 provide tight bounds on its size. For the incorrect set, it turns out that we can further improve the upper bound in Theorem 4.2 and get a similar lower bound to Theorem 4.3.
**Theorem 4.4**.: _If a pre-activation function \(\tau_{l}\) has bias parameters or is well-structured biaffine for all \(l\in[L]\), then_
\[\frac{|\mathsf{inc}_{\Omega}(z_{L})|}{|\Omega|}\leq\frac{1}{|\mathbb{M}|}\sum_{(l,i)\in\mathsf{ldx}}\Big{|}\big{(}\mathsf{ndf}(\sigma_{l,i})\cap S_{l}\big{)}\cup\big{(}\mathsf{bdz}(\sigma_{l,i})\cap S_{l+1}\big{)}\Big{|}\]
_where \(S_{l}\) is defined as in Theorem 4.2._
**Theorem 4.5**.: _For any \(\mathbb{M}\subseteq\mathbb{R}\) and \(n,\alpha\in\mathbb{N}\) with \(1\leq|\mathbb{M}|<\infty\), \(n\geq 4\), and \(\alpha\leq|\mathbb{M}|/(n-1)\), there is a neural network \(z_{L}:\mathbb{R}^{W}\to\mathbb{R}\) that satisfies_
\[\frac{|\mathsf{inc}_{\Omega}(z_{L})|}{|\Omega|}\geq\frac{1}{13}\cdot\frac{1}{|\mathbb{M}|}\sum_{(l,i)\in\mathsf{ldx}}\Big{|}\mathsf{ndf}(\sigma_{l,i})\cup\mathsf{bdz}(\sigma_{l,i})\Big{|}\]
_and the following: (i) \(\tau_{l}\) is well-structured biaffine without bias parameters for all \(l<L\), and has bias parameters for \(l=L\); (ii) \(z_{L}\) has \(2n+1\) neurons; and (iii) \(|\mathsf{ndf}(\sigma_{1,i})|=\alpha\) and \(|\mathsf{bdz}(\sigma_{1,i})|=0\) for all \(i\). We obtain the same result for (i), (ii'), and (iii'): (ii') \(z_{L}\) has \(3n+1\) neurons; and (iii') \(|\mathsf{ndf}(\sigma_{1,i})|=0\) and \(|\mathsf{bdz}(\sigma_{1,i})|=\alpha\) for all \(i\)._
We note that if \(z_{L}\) has bias parameters, Theorem 4.4 reduces to \(|\mathsf{inc}_{\Omega}(z_{L})|=0\) as in Theorem 3.2, since \(S_{l}=\emptyset\) for all \(l\) in that case. On the other hand, if \(z_{L}\) does not have bias parameters, then the incorrect set can be non-empty as discussed in §3.1, and more importantly, its size can be bounded by Theorem 4.4. To see why the bounds on \(|\mathsf{inc}_{\Omega}(z_{L})|\) depend on both \(\mathsf{ndf}(\sigma_{l,i})\) and \(\mathsf{bdz}(\sigma_{l,i})\), refer to the proofs of Theorems 4.4 and 4.5 in Appendices C and D.
### Sufficient Conditions for Computing Standard Derivatives and Clarke Subderivatives
We extend Theorems 3.5 and 3.6 to general neural networks without the well-structured biaffinity restriction, by characterizing two sufficient conditions on parameters under which automatic differentiation computes the standard derivative or a Clarke subderivative.
**Theorem 4.6**.: _Let \(w\in\mathbb{R}^{W}\). If \(y_{l,i}(w)\notin\mathsf{ndf}(\sigma_{l,i})\) for all \((l,i)\in\mathsf{ldx}\) such that \(\tau_{l}\) does not have bias parameters or \(\partial^{\mathsf{AD}}z_{L}/\partial z_{l,i}\neq\vec{0}\) at \(w\), then_
\[D^{\mathsf{AD}}z_{L}(w)=Dz_{L}(w)\neq\bot.\]
**Theorem 4.7**.: _Let \(w\in\mathbb{R}^{W}\) and assume that \(D^{\mathsf{AD}}\sigma_{l,i}\) is consistent for all \((l,i)\in\mathsf{ldx}\). If \(y_{l,i}(w)\notin\mathsf{ncdf}(\sigma_{l,i})\) for all \((l,i)\in\mathsf{ldx}\) such that \(\tau_{l}\) does not have bias parameters, then_

\[D^{\mathsf{AD}}z_{L}(w)=\begin{cases}Dz_{L}(w)&\text{if }Dz_{L}(w)\neq\bot\\ \lim_{n\to\infty}Dz_{L}(w^{\prime}_{n})&\text{if }Dz_{L}(w)=\bot\\ \quad\text{for some }w^{\prime}_{n}\to w\end{cases}\]

_and so \(D^{\mathsf{AD}}z_{L}(w)\) is a Clarke subderivative of \(z_{L}\) at \(w\). Here \(\mathsf{ncdf}(f)\) denotes the set of real numbers at which \(f:\mathbb{R}\to\mathbb{R}\) is not continuously differentiable._

The two sufficient conditions on \(w\) given in Theorems 4.6 and 4.7 are simple enough to be checked efficiently in practice, so that we can use them to validate whether the output of automatic differentiation is the standard derivative or a Clarke subderivative. If \(w\) does not satisfy either of the sufficient conditions, then automatic differentiation may not compute the standard derivative or a Clarke subderivative; the first example discussed in §3.3 illustrates both cases. We remark that the sufficient condition in Theorem 4.7 involves \(\mathsf{ncdf}(\sigma_{l,i})\) (not \(\mathsf{ndf}(\sigma_{l,i})\)), since we use continuous differentiability (not differentiability) in the proof to properly handle the limit of derivatives \(Dz_{L}(w^{\prime}_{n})\). For the proofs of Theorems 4.6 and 4.7, see Appendices E and F.
## 5 Conclusion and Discussion
In this paper, we study for the first time the correctness of automatic differentiation for neural networks with machine-representable parameters. In particular, we provide various theoretical results on the incorrect and non-differentiable sets of a neural network, as well as closely related questions such as when automatic differentiation is correct and what it computes. Our results have two major practical implications: automatic differentiation is correct at most machine-representable parameters when applied to neural networks, and it is correct more often if more layers of the network have bias parameters. Furthermore, our theoretical analyses suggest new applications of automatic differentiation for identifying differentiability and computing Clarke subderivatives, not only for machine-representable parameters but also for any real-valued ones.
Our results have some limitations. For example, all of our results are for a class of neural networks consisting of alternating analytic pre-activation functions and pointwise continuous activation functions. Hence, if a network contains non-pointwise activation functions (e.g., MaxPool) or a residual connection bypassing a non-analytic activation function (e.g., ReLU), then our results may not be directly applicable. Our results for general neural networks (e.g., Theorems 4.2 and 4.4) additionally assume pre-activation functions to have bias parameters or to be well-structured biaffine, which does not allow, e.g., BatchNorm without bias parameters. Nevertheless, we believe that our results still cover a large class of neural networks, especially compared to prior works studying theoretical aspects of neural networks (Jacot et al., 2018; Kidger and Lyons, 2020; Laurent and von Brecht, 2018; Lu et al., 2017; Park et al., 2021). We believe extending our work to more general neural networks is an interesting direction for future work. |

---

# Graph Neural Network for Stress Predictions in Stiffened Panels Under Uniform Loading

Yuecheng Cai, Jasmin Jelovica

2023-09-22 | http://arxiv.org/abs/2309.13022v1
###### Abstract
Machine learning (ML) and deep learning (DL) techniques have gained significant attention as reduced order models (ROMs) to computationally expensive structural analysis methods, such as finite element analysis (FEA). Graph neural network (GNN) is a particular type of neural network which processes data that can be represented as graphs. This allows for efficient representation of complex geometries that can change during conceptual design of a structure or a product. In this study, we propose a novel graph embedding technique for efficient representation of 3D stiffened panels by considering separate plate domains as vertices. This approach is considered using Graph Sampling and Aggregation (GraphSAGE) to predict stress distributions in stiffened panels with varying geometries. A comparison between a finite-element-vertex graph representation is conducted to demonstrate the effectiveness of the proposed approach. A comprehensive parametric study is performed to examine the effect of structural geometry on the prediction performance. Our results demonstrate the immense potential of graph neural networks with the proposed graph embedding method as robust reduced-order models for 3D structures.
**Keywords:** Machine learning, Deep learning, Reduced order models, Graph embedding, Graph Neural Networks, Stiffened panels, Stress prediction, Geometric variations
## 1 Introduction
### Motivation
The progress in developing efficient modern structures is significantly driven by advancements in structural analysis methods such as Finite Element Analysis (FEA) [1]. With the adoption of this powerful tool, engineers can improve design of engineering structures. People can now routinely analyze structures with diverse geometries without the need for intricate mathematical or analytical solutions to governing differential equations. Consequently, there is a growing emphasis on pursuing cost-effective structural solutions, as they not only enhance performance but also lead to reductions in weight and manufacturing costs [2].
Many thin-walled structures, such as bridges, ships, and aircraft incorporate stiffened panels in their design, which are generally effective in carrying a diverse range of loads and relatively easy to manufacture. The stiffened panel consists of a plate and discrete stiffeners welded onto it. The geometry of both stiffeners and face plates used in the construction is often rectangular because this is relatively easy to fabricate and can always provide good performance. The panels often need to be optimized in design.
The optimization of stiffened panels and large structures incorporating them is mostly performed using sizing optimization [3, 4] and sometimes topology optimization [5]. Structural optimization often relies on FEA to estimate stresses under loads. Although FEA can be successfully applied to solve numerous complex problems, several challenges remain in structural optimization:
* The computational costs increase as the complexity of the model rises, including the refinement of the mesh;
* Evaluation of each design requires independent simulations;
* Re-meshing is often necessary when the structure changes geometrically.
With these challenges, the use of FEM in structural optimization is expensive. In order to overcome this problem, traditional reduced-order models (ROMs) have been used, such as Multivariate Adaptive Regression Splines (MARS), Kriging (KRG), Radial Basis Functions (RBF), and Response Surface Method (RSM) [6]. These methods aim to maintain the precision of high-fidelity models while having a relatively low computational cost.
However, applying the ROMs above to complicated engineering problems is limited due to their inherent assumptions. Moreover, they may lose fidelity when the structure changes geometrically. More recently, with the advancements in machine learning (ML) and in particular deep learning (DL) methodologies, there is a tendency to adopt ML/DL models as ROMs, taking advantage of their versatility in capturing various data properties. Particularly in mechanical engineering, ML/DL techniques are effective modelling tools and approximators that often exceed conventional ROM techniques in accuracy and capacity to represent even nonlinearities of engineering problems [7, 8, 9].
### Deep learning-based reduced order models for structural analysis
Typical ROMs require a structure to be represented parametrically, where the structural variables are identified as inputs to the ML model. A typical technique for reduced-order modelling is the artificial neural network (ANN). One of the most commonly used is the multi-layer perceptron (MLP), which has been widely implemented in various fields, as evidenced by numerous scholarly works. This is because an MLP with one hidden layer is a universal approximator, which can predict any continuous function arbitrarily well if there are enough hidden units, as stated in Ref. [10].
An early study of an MLP-based ROM for structural optimization can be found in Ref. [11], where an MLP with one hidden layer was employed as a ROM to replace the structural analysis in optimization. With the demand for higher prediction accuracy, deeper neural networks have been employed by researchers to accomplish more complex tasks such as the prediction of axial load carrying capacity [12], buckling load [13], shear [14] and lateral-torsional [15] resistance, etc. More recent advancements in ANNs for structural analysis can be found in Refs. [16, 17, 18].
However, MLPs are limited by their monolithic structure: even if additional layers are added to increase the architecture's complexity, they remain inefficient at accurately describing complicated structural behaviour, since they demand a large amount of training data and computational resources. In addition, MLPs tend to overfit, and they are less interpretable than other types of neural network (NN) approaches; their capabilities are therefore limited and not suited for advanced problems.
Therefore, to capture more complex features, some researchers have used more advanced NNs such as convolutional neural networks (CNNs) as ROMs for the structures that can be represented as images (2D matrices) or composition of images (3D matrices). A few examples can be found from Refs. [19, 20], where different composite materials' modulus, strength, and toughness are predicted by feeding enough composite configurations (around 1,000,000). In addition, researchers utilized CNN for stress predictions in different structures, for instance, the maximum stress of brittle materials [21], and stress contour in components of civil engineering structures [22].
Instead of approximating the mechanical properties and structural responses of structures, some researchers also train CNNs or generative adversarial networks (GANs) to predict the optimal structural design directly; interested readers could check Refs. [23, 24, 25, 26] for more information.
However, modelling engineering structures using fixed-size vectors or matrices, such as images, proves to be challenging, considering that one of the variables is structural geometry. It introduces a dimensional change in the design input, which is primarily handled by re-training for NN approaches such as MLP and CNN. Furthermore, stiffened panels are discrete structures, which consist of repeated structural units, e.g., stiffeners and plates, whose connection can not be neglected as they affect the mechanical response of the structure. Motivated by these two reasons, this paper proposes an approach that transforms structural models into graphs. This transformation permits
flexibility in varying the dimension of design inputs, a characteristic that aligns well with the capabilities of graph neural networks (GNNs).
GNNs have made significant strides across various fields, such as computer vision [27], traffic prediction [28], chemistry [29], biology [30], and recommender systems [31], etc. In fluid mechanics, researchers have employed GNNs in replacing the expensive CFD simulations using dynamic graphs in recent years [32, 33, 34], where they converted mesh to the vertex and edge features, consequently applying GNNs to learn the temporal evolution of the properties in a fluid system. However, since GNN just gained significant attention in recent years, not many advancements have been made in the structural engineering field. In a recent study by Zheng et al. [35], a graph embedding approach was employed to represent 2D and 3D trusses as graphs, with vertices denoting the joints and edges representing the bars. This method aimed to optimize the arrangement of bars within trusses while adhering to stress and displacement constraints. Similarly, other recent investigations applying GNNs to truss-related problems can be found in references such as [36] and [37]. These studies employed GNNs in conjunction with various techniques, including Q-learning and transfer learning, to address distinct objectives. Nevertheless, there has been a notable absence of research dedicated to the application of GNNs to more intricate 3D structures beyond the scope of truss problems.
In this paper, we extend the previous research by introducing an innovative graph embedding technique tailored for 3D stiffened panels. We use this novel graph embedding method for predicting stresses in stiffened panels, employing a Graph Sampling and Aggregation network (GraphSAGE). We compare the proposed graph embedding with the conventional finite-element-vertex embedding that has been used in fluid mechanics. Additionally, we conduct a comprehensive parametric study for various structural geometry parameters including various boundary conditions and complex geometric variations.
## 2 Methodology
### Basis of Graph Neural Networks
Graph Neural Networks (GNNs) represent a specialized subset of neural networks (NNs) renowned for their capacity to handle data with graph embeddings. Unlike CNNs, which are typically used for tasks involving data defined on a regular grid, such as images, GNNs are designed to process and analyze data represented in the form of graphs, such as data with complex relationships between entities or data that has a natural representation as a network.
In general, a graph can be defined as \(G=(V,E,A)\), where \(V\) represents the set of vertices, \(E\) indicates the set of edges between these vertices, and \(A\) is the adjacency matrix. We denote the edge going from vertex \(v_{i}\) to vertex \(v_{j}\) as \(e_{ij}=(v_{i},v_{j})\in E\). If a graph is undirected, every two vertices contain two edges \(e_{ij}=(v_{i},v_{j})\in E\), and \(e_{ji}=(v_{j},v_{i})\in E\). The adjacency matrix \(A\in\mathbb{R}^{N\times N}\) is a convenient way to represent the graph structure, where \(N=|V|\) is the number of vertices, \(A_{ij}=1\) if \(e_{ij}\in E\). For an undirected graph, \(A_{ij}=A_{ji}\). Therefore, a graph is associated with graph attributes
\(X\in\mathbb{R}^{N\times D}\) and \(A\in\mathbb{R}^{N\times N}\), where \(D\) is the number of input features of each vertex. It is worth mentioning that in this study, all graphs are defaulted as undirected graphs.
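As a minimal illustration of these definitions, the following sketch builds the adjacency matrix \(A\) and feature matrix \(X\) of a small undirected graph; the vertex count, edge list, and feature values are arbitrary:

```python
import numpy as np

# Undirected graph on N = 4 vertices with edges {(0,1), (1,2), (1,3)};
# the vertex features X (D = 2 per vertex) are placeholder values.
N, D = 4, 2
edges = [(0, 1), (1, 2), (1, 3)]

A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0   # undirected: store both e_ij and e_ji

X = np.arange(N * D, dtype=float).reshape(N, D)

assert (A == A.T).all()       # symmetry of the adjacency matrix
print(A.sum(axis=1))          # vertex degrees: [1. 3. 1. 1.]
```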
To conduct convolution on a graph, researchers developed techniques such as graph spectral methods, which leverage the eigenvalues and eigenvectors of the graph Laplacian matrix to define graph convolutions; a few of the most prominent networks are Spectral CNN [38], Chebyshev Spectral CNN (ChebNet) [39], and Graph Convolution Network (GCN) [40]. However, these methods require a large amount of memory to conduct the graph convolution, which is not efficient for handling large graphs. This motivated graph spatial methods, which operate by leveraging the spatial interconnections among vertices and their adjacent neighbours; examples include the original Graph Neural Network (GNN) [41], Graph Sampling and Aggregation (GraphSAGE) [42], and Graph Isomorphism Network (GIN) [43].
All the above-mentioned techniques can be summarized into message-passing neural networks (MPNNs) [44], which represent a general framework of graph convolution. Under this framework, we specifically employ the GraphSAGE network [42], which is a prominent and benchmark method for many graph-related tasks. To preserve the maximum information of each vertex, the 'sum' operator is chosen as the aggregation function [43]. Utilizing the MPNN framework, the GraphSAGE operator with a 'sum' aggregator is defined as:
\[\mathbf{h}_{v}^{l}=\sigma(\mathbf{W}^{l}(\mathbf{h}_{v}^{l-1}+sum_{u\in \mathcal{N}(v)}\mathbf{h}_{u}^{l-1})) \tag{1}\]
where \(\mathbf{h}_{v}^{l-1}\) and \(\mathbf{h}_{v}^{l}\) are the embeddings for vertex \(v\) at layers \(l-1\) and \(l\), respectively. \(\mathbf{W}^{l}\) is the trainable parameter at the current layer \(l\). Message propagation is conducted through vector concatenation, followed by the message update phase through the \(\tanh\) activation function \(\sigma\).

Fig. 1: Architecture of the GraphSAGE network employed in this study. Each vertex represents a structural unit, including plate span, stiffener web and flange. In the input graph, vertices hold the geometric details of their respective structural units. Meanwhile, in the output graph, vertices contain the stress information associated with their corresponding structural components.
A general structure of a GraphSAGE network can be found in Fig. 1. Structural geometric data and external loading are initially transformed into the proposed graph representation, as detailed in Section 2.2. At each layer, the GraphSAGE operator is employed to process and learn the features of the graph. Batch normalization has been utilized after each GraphSAGE convolution layer to stabilize the training process of the GNN. Mean square error (MSE) is adopted as the loss function in this study. The hyperparameters for the utilized model have been fine-tuned and are detailed in Table 1. Note that the hyperparameter settings are consistent throughout the investigation in Section 4.
### Proposed graph embedding for stiffened panels
As indicated in the previous section, graph embedding is the prerequisite for a structure being handled by a GNN. In this study, we propose a simple but efficient method to graphically represent the stiffened panel.
Fig. 2 contrasts the conventional finite-element-vertex graph representation with our approach. The conventional method maps each finite element to a corresponding vertex, with the connections between elements determined as edges; this allows the finite element (FE) model of the structure to be represented directly as a graph. Consequently, as the mesh is refined, the graph grows proportionally in size. Our approach, by contrast, represents each stiffener or plate between stiffeners as a single vertex.
However, it is essential to consider that the time complexity of GNN training is proportional to the number of vertices \(|V|\). The employed graphs in this study are sparse graphs, with the following assumptions:
* The GNN architecture is fixed, with a depth of \(L\), and a transformed graph embedding size of \(F_{i}\) at layer \(i\);
* The average number of neighbours \(k\) of vertices in the graph is about the same for different graph representations;
\begin{table}
\begin{tabular}{l l} \hline \hline Category & Value \\ \hline Number of layers & 32 \\ Number of hidden neurons for each layer & 64 \\ Activation function & tanh \\ Optimizer & adam \\ Learning rate & 0.02 \\ Batch size & 512 \\ L2 regularization factor & 1e-4 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Hyperparameter setting of the Employed GraphSAGE Network
* The time complexity of feature transformation and aggregation dominates other operators.
The time complexity for feature transformation is \(\mathcal{O}(N\times\Sigma_{i=1}^{L}F_{i-1}\times F_{i})\), and \(\mathcal{O}(N\times k\times\Sigma_{i=1}^{L}F_{i})\) for aggregation. Combining the time complexity for both feature transformation and aggregation, we can get the total training complexity:
\[\mathcal{O}(N\times\Sigma_{i=1}^{L}F_{i-1}\times F_{i})+\mathcal{O}(N\times k \times\Sigma_{i=1}^{L}F_{i}) \tag{2}\]
Given constants \(C_{1}=\Sigma_{i=1}^{L}F_{i-1}\times F_{i}\) and \(C_{2}=k\times\Sigma_{i=1}^{L}F_{i}\), the combined complexity can be written as: \(\mathcal{O}(N\times(C_{1}+C_{2}))\). We can clearly see that \(\mathcal{O}(N\times(C_{1}+C_{2}))\propto\mathcal{O}(N)\), where the primary factor is the number of vertices \(N\) in a graph. Therefore, adopting the conventional finite-element-vertex graph representation can lead to a significant increase in computational resources as the level of structural discretization grows.
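As a quick numeric illustration of this \(\mathcal{O}(N)\) scaling (the vertex counts below are our own assumptions, used only for illustration): a panel with \(k\) stiffeners has \(k+1\) plate spans, \(k\) webs, and \(k\) flanges, i.e. \(3k+1\) entity vertices, while a finite-element-vertex graph has one vertex per shell element. Since both embeddings share \(C_{1}+C_{2}\), the training-cost ratio reduces to \(N/N^{\prime}\).

```python
def entity_vertices(k_stiffeners: int) -> int:
    # Assumed count: (k+1) plate spans + k webs + k flanges = 3k + 1.
    return 3 * k_stiffeners + 1

def cost_ratio(n_fe_vertices: int, k_stiffeners: int) -> float:
    # O(N x (C1 + C2)) with shared constants -> ratio is just N / N'.
    return n_fe_vertices / entity_vertices(k_stiffeners)

print(entity_vertices(8))        # 25 vertices for an 8-stiffener panel
print(cost_ratio(10_000, 8))     # 400.0
```

The measured speedup reported later (Section 4.1) is smaller than this idealized ratio because other costs do not vanish, but the trend is the same.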
In this paper, we introduce an efficient graph embedding technique designed for modelling stiffened panels with a reduced number of vertices, as indicated in Fig. 2. More specifically, we represent each structural unit, including the plates between stiffeners, the stiffener webs, and the flanges, as individual vertices within the graph. Each vertex incorporates both the geometrical information of the physical entity and its boundary conditions. Our objective is to reduce the total number of vertices from \(N\) to a smaller number \(N^{\prime}\), thereby expediting the GNN's training process by approximately a factor of \(N/N^{\prime}\) relative to the conventional approach. For each vertex, we employ eight variables: the structural unit width, length, and thickness, the boundary condition for each of its four edges, and the value of the applied pressure. The connectivity between each
Figure 2: Proposed graph representation versus finite element-vertex graph representation. The representation of the finite element-vertex graph is displayed on the top right. Each element is substituted by a vertex, and the connections between elements are denoted as edges. The proposed graph layout is presented at the bottom. Every vertex stands for a structural component, such as the plate span, stiffener web, and flange, with their interrelationships defined by edges.
structural unit is encoded by the adjacency matrix and is not reflected in the vertex input embedding.
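The adjacency structure can be sketched as follows. The exact connectivity pattern is not spelled out in the text, so the pattern below (each web linked to its two neighbouring plate spans and to its own flange) is an assumption made for illustration:

```python
def panel_edges(k: int):
    """Illustrative edge list for a panel with k stiffeners.
    Vertex ids (assumed): plate spans 0..k, webs k+1..2k, flanges 2k+1..3k.
    Each web joins its two neighbouring plate spans and its own flange."""
    edges = []
    for i in range(k):
        web, flange = k + 1 + i, 2 * k + 1 + i
        edges += [(i, web), (i + 1, web), (web, flange)]
    return edges

print(len(panel_edges(3)))   # 9 edges for a 3-stiffener panel
```

From such an edge list, the (symmetric) adjacency matrix used by the GNN follows directly.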
The objective of this study is to predict the von Mises stress distribution across stiffened panels, which is crucial for structural design. Thus, it is necessary to obtain the stress information on a grid space (mesh) for each structural unit. Utilizing the proposed graph embedding, and given that the vertex output dimensions do not affect the GNN training time, we have allocated a grid space of \(10\times 20\) to each structural unit. For each vertex, the von Mises stress at each node of this grid is taken as the output. Since vertices in the graph can only hold vector-valued attributes, the \(10\times 20\) stress grid is flattened into an output vector of dimension \(1\times 200\) for each vertex.
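Putting the input and output conventions together, each vertex carries an 8-dimensional feature vector and a 200-dimensional flattened stress target. The sketch below is illustrative; the boundary-condition encoding (0 = free, 1 = simply supported, 2 = fixed) is our assumption, not specified in the text.

```python
import numpy as np

def vertex_features(width, length, thickness, bc_edges, pressure):
    """Eight input variables per vertex: three dimensions, four per-edge
    boundary-condition codes, and the applied pressure."""
    assert len(bc_edges) == 4
    return np.array([width, length, thickness, *bc_edges, pressure])

def vertex_target(stress_grid):
    """Flatten the 10 x 20 von Mises grid into a 1 x 200 output vector."""
    assert stress_grid.shape == (10, 20)
    return stress_grid.reshape(1, 200)

x = vertex_features(0.5, 3.0, 0.012, (2, 2, 2, 2), 0.07)
y = vertex_target(np.zeros((10, 20)))
print(x.shape, y.shape)   # (8,) (1, 200)
```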
## 3 Data preparation
The case studies are stiffened panels because they represent the basic repeating unit of many large-scale structures. To approximate the stiffened panels of real-life structures, the span of the panel has been set to \(3m\times 3m\). The main plate thickness ranges from \(10mm\) to \(20mm\). Each panel contains 2 to 8 stiffeners, with a random height from \(0.1m\) to \(0.4m\). All stiffeners have a T-shaped cross-section, with a rectangular flange whose width ranges from \(0.05m\) to \(0.15m\). The thickness of both the stiffener web and the flange can vary from \(5mm\) to \(20mm\). We allow the stiffener height and the web and flange thicknesses to vary continuously within these ranges, which yields a wider design space. We assume that the plate of the panel is subjected to a uniformly distributed pressure ranging from \(0.05MPa\) to \(0.1MPa\). A summary of the upper and lower limits of the structure's geometrical parameters can be found in Table 2.
In this study, we have systematically considered two key variables for a comprehensive parametric study: structural boundary conditions and geometric variations. In both cases, we maintain the remaining variables fixed to carefully assess the effect of introducing a specific variable under investigation. For each case separately we prepare a total of 2000 randomly generated design configurations, allocating 80% of them for training and 20% for validation. Detailed analysis for each parametric test case is exhibited and discussed in Section 4.
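A configuration-sampling step of this kind can be sketched as follows (illustrative only; the paper's actual generator is implemented in MATLAB). The continuous variables are drawn uniformly within the Table 2 limits, the stiffener count is an integer in [2, 8], and the 2000 designs are split 80/20:

```python
import random

LIMITS = {                      # Table 2 ranges (mm)
    "plate_thickness": (10, 20),
    "stiffener_thickness": (5, 20),
    "stiffener_height": (100, 400),
    "flange_thickness": (5, 20),
    "flange_width": (50, 150),
}

def sample_design(rng: random.Random) -> dict:
    design = {k: rng.uniform(lo, hi) for k, (lo, hi) in LIMITS.items()}
    design["n_stiffeners"] = rng.randint(2, 8)
    design["pressure_MPa"] = rng.uniform(0.05, 0.1)
    return design

rng = random.Random(42)
dataset = [sample_design(rng) for _ in range(2000)]
split = int(0.8 * len(dataset))
train, val = dataset[:split], dataset[split:]
print(len(train), len(val))   # 1600 400
```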
The dataset is obtained using parametric models prepared in MATLAB and executed through ABAQUS FEM software. The static analysis is performed on a model
\begin{table}
\begin{tabular}{l l l l} \hline \hline Category & Lower limit & Upper limit & Unit \\ \hline Plate thickness & 10 & 20 & mm \\ Stiffener thickness & 5 & 20 & mm \\ Stiffener height & 100 & 400 & mm \\ Flange thickness & 5 & 20 & mm \\ Flange width & 50 & 150 & mm \\ Number of stiffeners & 2 & 8 & \(-\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Lower and upper limits of stiffened panel geometrical variables
discretized using the 'S4R' shell element. The training procedure of the GNN is implemented with PyTorch Geometric and carried out on a computing device with an RTX 3090 GPU.
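The training setup (Table 1's Adam optimizer with learning rate 0.02, L2 factor 1e-4, and the MSE loss) can be sketched as below. This is a minimal stand-in written in plain PyTorch rather than PyTorch Geometric; the shared per-layer weight matrix, the linear regression head, and the reduced layer count are our assumptions for brevity.

```python
import torch
from torch import nn

class SageSumLayer(nn.Module):
    """Plain-PyTorch stand-in for one GraphSAGE 'sum' layer (Eq. 1)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, H, adj):
        return torch.tanh(self.lin(H + adj @ H))

class PanelGNN(nn.Module):
    """Fig. 1 pipeline: SAGE layers + batch norm, linear regression head."""
    def __init__(self, in_dim=8, hidden=64, out_dim=200, n_layers=4):
        super().__init__()
        dims = [in_dim] + [hidden] * (n_layers - 1)
        self.layers = nn.ModuleList(
            SageSumLayer(a, b) for a, b in zip(dims, dims[1:]))
        self.norms = nn.ModuleList(nn.BatchNorm1d(d) for d in dims[1:])
        self.head = nn.Linear(dims[-1], out_dim)

    def forward(self, H, adj):
        for layer, norm in zip(self.layers, self.norms):
            H = norm(layer(H, adj))
        return self.head(H)

torch.manual_seed(0)
model = PanelGNN(n_layers=4)            # the paper uses 32 layers (Table 1)
opt = torch.optim.Adam(model.parameters(), lr=0.02, weight_decay=1e-4)
loss_fn = nn.MSELoss()                  # MSE loss, as in the paper

# Toy batch: a 3-vertex graph with 8 input features per vertex.
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X, y = torch.randn(3, 8), torch.randn(3, 200)
for _ in range(5):
    opt.zero_grad()
    loss = loss_fn(model(X, adj), y)
    loss.backward()
    opt.step()
print(float(loss))
```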
## 4 Results and discussion
In this section, we first compare the proposed entity-vertex graph embedding with the finite element-vertex embedding, to demonstrate the efficiency of the proposed approach. Afterwards, we present the results of our investigation into the influence of various parameters on the accuracy of the trained GNN model. The parameters under examination include boundary conditions and geometric complexity. Understanding how these parameters affect the neural network's performance is crucial in optimizing its accuracy and robustness for real-world applications. For each parameter, we first discuss its influence on neural network accuracy, followed by a detailed analysis and comparison of predictions with the ground truth data.
### Impact of graph representation on GNN computational resources
To quantitatively study the influence of graph embedding on GNN training time complexity, as discussed in Section 2.2, we compare our proposed stiffened panel graph embedding with a finite-element-vertex graph representation through a simple test case. This comparison primarily focuses on the training time difference between the two graph embedding techniques; for simplicity, the test case presented here omits the stiffener flange. All other structural variables remain consistent with those described in Table 2. Two graph neural networks have been trained independently based on the same dataset, GraphSAGE architecture, and hyperparameter setting, differing only in graph embedding. An example of both embeddings can be found in Fig. 2.
Figure 3 illustrates the comparison of stress contour predictions using GraphSAGE with the two graph embeddings, both of which yield comparable performance levels. The training computational resources for both approaches are presented in Table 3. Utilizing the same batch size, the GNN trained with the proposed graph embedding achieves approximately 27 times faster training per epoch compared to the conventional method. The GPU memory requirements also differ significantly: the proposed approach consumes 0.5 GB for a batch size of 64, while the finite element-vertex embedding requires 23.4 GB. These results demonstrate the benefits of the proposed approach, making it usable across a broader spectrum of computing devices.
### Parametric study
#### 4.2.1 The effect of boundary conditions
In this subsection, we examine the impact of structural boundary conditions on the performance of the GraphSAGE model. The entity-vertex embedding is used, where the entities are the plates between stiffeners (or between an edge and a stiffener), the stiffener webs, and the stiffener flanges. While considering all the variables outlined in Table 2, we also incorporate the structural boundary conditions as an additional variable. Specifically, the plate edges and the stiffeners are assigned random boundary conditions separately. All edges of the plate are either simply supported or fixed, while the edges of the stiffener webs and flanges are free, simply supported, or fixed.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Test examples & Model & 3D Stress Distribution & Stiffener webs and Flanges & Stress Range \\ \hline
1 & GNN & & & \\ & FEA & & & \\ \hline
2 & GNN & & & \\ & FEA & & & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of GraphSAGE predictions and FEA results for stiffened panel with varying structural boundary conditions
Figure 3: Comparison between finite element-vertex embedding and the proposed entity-vertex embedding
In Table 4, we present a comparison between the predicted stress distribution of the stiffened panel and the distribution obtained through finite element analysis (FEA). We have selected two test examples at random to illustrate the prediction accuracy of our model. The 3D view of the structure is shown, where the contour shows the stress distribution and stress intensity across the panel. For each test example, the same color in both the GraphSAGE prediction and the FEA ground truth represents the same stress value. The geometry of the structure and the external loading amplitude for both test examples can be found in Table 5.
For both test examples in Table 4, the prediction of the stress distribution in the plate and stiffeners shows a high consistency with the FEA. This can also be verified by the detailed stress comparison in Fig. 4, where we exhibit the stress distribution at different locations of the stiffened panel. The red circles denote the FEA results and the blue solid curve denotes the GNN-predicted stress distribution along the specified path. The locations chosen for evaluation depend on the geometry and stress distribution of the stiffened panel; the focus is on areas that experience relatively higher stress, such as plate edges and stiffener edges.
We can observe that although the two test examples in this test case employ different boundary conditions, the GraphSAGE model with the proposed graph embedding accurately captures the stress distribution of the structure. The predicted stress distribution exhibits high consistency with the FEA results for both test examples at different locations. Although the prediction shows some slight deviation for lower stresses, such as in the plate center area of test example 1, the trained GraphSAGE network performs well in capturing the high-stress regions. The maximum stress occurs at the intersection of the plate and stiffener web edges in test example 1, and at the plate center in test example 2. For the high-stress estimation in both test examples (Fig. 4(a) (3) and Fig. 4(b) (1)), the prediction error is less than 1%.
As is generally known, the amount of data and data quality are paramount for accurate predictions with neural networks. Fig. 5 depicts the relationship between training data size and the MSE value for the validation set. The GraphSAGE model, utilized in this study, demonstrates consistent performance when the amount of data
\begin{table}
\begin{tabular}{l c c c} \hline \hline Category & Test Example 1 & Test Example 2 & Unit \\ \hline Plate thickness & 14.51 & 12.58 & mm \\ Stiffener web thickness & 9.08 & 16.62 & mm \\ Stiffener web height & 308.37 & 223.98 & mm \\ Flange thickness & 16.83 & 15.61 & mm \\ Flange width & 111.12 & 86.52 & mm \\ Number of stiffeners & 8 & 3 & \(-\) \\ Uniform pressure & 0.071 & 0.072 & MPa \\ BCs @ Plate edges & Simply support & Fixed & \(-\) \\ BCs @ Stiffener web and Flange edges & Free & Simply support & \(-\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Structural geometry details in parametric study for boundary conditions.
is 400 and above. This suggests that the proposed approach holds promise for practical applications, as it demands a relatively small amount of data.
#### 4.2.2 The effect of structural geometry
An inherent advantage of graph neural networks (GNNs) over other types of neural networks lies in their adaptability to structural geometric changes. In this section, we aim to demonstrate the capacity of GNNs with the proposed graph embedding technique. Instead of employing uniformly distributed stiffeners, we introduce randomness to the distribution of stiffeners. Furthermore, we allow variations in the height of each
Figure 4: Stress distribution comparison along specified paths in the stiffened Panel under parametric study with varying boundary conditions
stiffener separately and the width of each flange separately, aiming to introduce more complexity into the structural geometries.
Specifically, in this section, the geometry of each stiffener and its corresponding flange can be changed individually within the domain set in Table 2, and the stiffeners can be randomly positioned on the stiffened panel. With the remaining structural geometric parameters the same as in Section 4.2.1, two test examples are exhibited; their stress contours and detailed stress analyses are shown in Table 6 and Fig. 6, respectively. The corresponding structural geometry details can be found in Table 7. In each bracket, the numerical values represent the dimensions of the corresponding category of structural geometry, with the order of the numbers corresponding to the sequence of the stiffeners from left to right as shown in Table 6.
From Table 6, we can observe that increasing the complexity of the structural geometry does not hinder the prediction capability of the GraphSAGE model with the proposed graph embedding. For both test examples, the predicted stress contours for the plate and stiffeners exhibit good consistency with the FEA results. As shown in Fig. 6(a), the predicted stress distribution aligns well with the FEA results for all specified paths, including the stiffener edge and flange edge, where the maximum stress occurs. The GraphSAGE estimation for the second test example is also accurate and captures all the stress information of the structure. The maximum prediction error occurs at the plate edge and is about 6.18%; the average prediction accuracy over the other paths is 96.7%.
Following a similar methodology as discussed in Section 4.2.1, we examine the necessary amount of training data in this particular test case. In contrast to the previous scenario, the GraphSAGE model requires a larger quantity of training data to effectively capture stress distributions within more intricate structures. As depicted in Figure 7, the curve reaches a plateau after 1200 training data points. However,
Figure 5: GraphSAGE training data versus mean square error for parametric study considering the effect of boundary conditions.
it is important to note that such a large dataset might not be required in practical real-life cases. The test structures considered in this subsection include many unrealistic configurations, which can hinder the training process of the GraphSAGE model.
## 5 Conclusion and future work
In this study, we explored the potential of graph neural networks (GNNs), specifically Graph Sampling and Aggregation (GraphSAGE), as a promising avenue for developing a reduced-order model for stress prediction in stiffened panels. By proposing a novel
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Test examples & Model & 3D Stress Distribution & Stiffeners and Flanges & Stress Range \\ \hline
1 & GNN & & & \\ & FEA & & & \\ \hline
2 & GNN & & & \\ & FEA & & & \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of GraphSAGE predictions and FEA results for stiffened panel with complex geometry.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Category & Test Example 1 & Test Example 2 & Unit \\ \hline Plate thickness & 19.85 & 16.14 & mm \\ Stiffener web thickness & (11.8, 17.9, 12.8, 12.6, 10.8) & (19.8, 17.0, 11.5) & mm \\ Stiffener web height & (320, 267, 320, 160, 293) & (347, 160, 347) & mm \\ Flange thickness & (14.9, 19.0, 14.0, 6.7, 6.2) & (14.1, 14.8, 9.8) & mm \\ Flange width & (74.3, 66.7, 107.8, 146.6, 69.8) & (52.1, 142.2, 104.0) & mm \\ Number of stiffeners & 5 & 3 & \(-\) \\ Uniform pressure & 0.079 & 0.090 & MPa \\ BCs @ Plate edges & Fixed & Fixed & \(-\) \\ BCs @ Stiffener and Flange edges & Fixed & Fixed & \(-\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Structural geometry details for the two test examples when considering complex geometry.
graph embedding technique, this research showcased its superiority over the conventional finite element-vertex embedding. A parametric analysis was conducted to demonstrate the versatility and robustness of the employed model, emphasizing its ability to handle various boundary conditions and complex geometric variations. The results indicate that the integration of GNNs and the proposed graph embedding can revolutionize structural modelling and analysis, offering both accuracy and efficiency. It can be concluded that:
* The proposed structural (physical unit) entity-vertex graph embedding is a viable method to model the 3D structures, such as stiffened panels;
Figure 6: Stress distribution comparison along specified paths in the stiffened panel under parametric study with complex structural geometry.
* The proposed structural entity-vertex embedding is more efficient than the conventional finite element-vertex embedding;
* GraphSAGE with the proposed embedding can handle various variables including boundary conditions and geometric variations;
* The predicted stress distribution exhibits high consistency with the FEA simulation.
Compared with the conventional reduced-order models employed in solid and structural mechanics, the modelling technique proposed in this study is the first to introduce structural geometry as one of the input variables, which provides a foundation and potential for further development of reduced-order models for more complex structures. Although this paper utilizes data from FEA simulations, obtaining such data in real life is very difficult. This, however, is a common issue for most data-driven models and could be addressed in future work.
## Acknowledgments
This research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). This research was supported in part through computational resources and services provided by Advanced Research Computing at The University of British Columbia.
|
2310.00129 | ILB: Graph Neural Network Enabled Emergency Demand Response Program For
Electricity | Demand Response (DR) programs have become a crucial component of smart
electricity grids as they shift the flexibility of electricity consumption from
supply to demand in response to the ever-growing demand for electricity. In
particular, in times of crisis, an emergency DR program is required to manage
unexpected spikes in energy demand. In this paper, we propose the
Incentive-Driven Load Balancer (ILB), a program designed to efficiently manage
demand and response during crisis situations. By offering incentives to
flexible households likely to reduce demand, the ILB facilitates effective
demand reduction and prepares them for unexpected events. To enable ILB, we
introduce a two-step machine learning-based framework for participant
selection, which employs a graph-based approach to identify households capable
of easily adjusting their electricity consumption. This framework utilizes two
Graph Neural Networks (GNNs): one for pattern recognition and another for
household selection. Through extensive experiments on household-level
electricity consumption in California, Michigan, and Texas, we demonstrate the
ILB program's significant effectiveness in supporting communities during
emergencies. | Sina Shaham, Bhaskar Krishnamachari, Matthew Kahn | 2023-09-29T20:38:04Z | http://arxiv.org/abs/2310.00129v1 | # ILB: Graph Neural Network Enabled Emergency Demand Response Program For Electricity
###### Abstract
Demand Response (DR) programs have become a crucial component of smart electricity grids as they shift the flexibility of electricity consumption from supply to demand in response to the ever-growing demand for electricity. In particular, in times of crisis, an emergency DR program is required to manage unexpected spikes in energy demand. In this paper, we propose the Incentive-Driven Load Balancer (ILB), a program designed to efficiently manage demand and response during crisis situations. By offering incentives to flexible households likely to reduce demand, the ILB facilitates effective demand reduction and prepares them for unexpected events. To enable ILB, we introduce a two-step machine learning-based framework for participant selection, which employs a graph-based approach to identify households capable of easily adjusting their electricity consumption. This framework utilizes two Graph Neural Networks (GNNs): one for pattern recognition and another for household selection. Through extensive experiments on household-level electricity consumption in California, Michigan, and Texas, we demonstrate the ILB program's significant effectiveness in supporting communities during emergencies.
Demand Response Program, GNN
## I Introduction
Demand Response (DR) programs have emerged as an integral component of smart electricity grids. By shifting the flexibility of power consumption from the supply side to the demand side, these programs manage the continually increasing demand for electricity. Such programs incentivize households to reduce their electricity usage, thereby enhancing the agility and responsiveness of the network. The benefits of DR programs are manifold, including, but not limited to, preventing the need for new power plants, lowering electricity costs by avoiding high-priced energy purchases, enhancing grid reliability to prevent blackouts, and mitigating environmental damage by reducing the consumption of fossil fuels. An example of such a program is Time-of-Use pricing (TOU), where the cost of electricity varies based on the time of day, typically increasing during peak periods [1].
A more diligent and critical type of DR programs is designed for times of crisis and is referred to as an emergency DR program. Such programs are developed to manage unexpected spikes in energy demand during emergency situations. Unfortunately, most of the existing emergency DR programs are only activated after the accident and without prior provisions made for households, such as during extreme weather events, unexpected equipment failures, or other emergencies. Hence, they mostly work as mediators of the damage rather than the prevention mechanism, which can have severe consequences for households and businesses. For example, during a heat wave in California in August 2020, the California Independent System Operator (CAISO) called for an emergency DR program to reduce electricity usage and prevent blackouts [2]. This included asking businesses to reduce their energy consumption during peak demand periods, as well as calling on residential customers to save energy during the hottest parts of the day. In another unfortunate incident during a winter storm in Texas in February 2021, the Electric Reliability Council of Texas (ERCOT) called for an emergency DR program to reduce electricity usage and prevent blackouts [3]. This included asking businesses and residents to conserve energy by reducing heating and hot water usage, as well as turning off non-essential appliances and electronics.
The earlier examples illustrate the requirement for more advanced emergency DR programs that can greatly ease the pressure on the power grid during crises and avoid losses to communities. Nevertheless, there exist several obstacles that have impeded the development of efficient DR programs, including the need for real-time communication and monitoring of energy usage, precise prediction of energy demand, and ensuring sufficient resources are available to meet the heightened demand during emergencies. Thankfully, recent progress in Machine Learning (ML) combined with the increased adoption of Advanced Metering Infrastructure (AMI) in households has cleared the way for the implementation of new DR programs [4]. AMI meters are capable of measuring and recording electricity usage at least once per hour and transmitting this information to both the utility and the customer at least once per day. The range of AMI installations varies from basic hourly interval meters to real-time meters with built-in two-way communication capabilities that allow for instantaneous data recording and transmission.
Considering recent advancements, we propose an incentive-based program accompanied by a novel ML-driven approach to address the existing challenges and help to balance supply and demand in times of crisis. In particular, we have the following contributions:
* We propose an innovative incentive-based strategy designed to manage demand and response effectively during critical scenarios. This approach, termed the Incentive
Driven Load Balancer (ILB), strategically identifies households with the flexibility to adjust their electricity consumption and then operates on a meticulously designed incentive system, motivating these families to moderate their energy usage in emergency situations.
* We develop an ML-driven two-step framework for the efficient selection of participants in the program. The framework involves two GNNs: a pattern recognition GNN and a household selection GNN. The pattern recognition GNN is a time-series forecasting dynamic graph with the goal of revealing similarities among users based on attention mechanism, taking into account socio-economic factors influencing the elasticity of demand. Meanwhile, the household selection GNN models communities and aids in selecting suitable households for the program based on the pattern recognition GNN's output. Our framework is thoughtfully developed to factor in geo-spatial neighborhoods.
* We have devised and publicly shared a dataset based on the factors that influence the elasticity of electricity demand for data mining objectives. This dataset is greatly empowered by a sample from the synthetically generated household-level data in [5], which offers a digital twin of residential energy consumption. We further enrich this dataset by integrating education and awareness factors from high schools [6], college education data from the Census [7], median household income and unemployment statistics from the US Department of Agriculture [8], and climate data [9].
* We perform an extensive number of experiments on the household-level electricity consumption of people in California, Michigan, and Texas. The results demonstrate the significant effectiveness of the ILB program in assisting communities during emergency situations.
## II Related Work
### _Demand Response Programs_
A critical advantage of DR programs lies in their potential to substantially mitigate peak demand - a pressing issue in electricity systems that necessitates significant investment in underutilized infrastructure. Violette et al. [10] have demonstrated the potential of DR programs to curtail peak demand by up to \(15\%\). Moreover, DR programs have been effective in enhancing the reliability of the electricity system, with comprehensive reviews like the one by Albadi and El-Saadany [11] elucidating a reduction in the frequency and duration of blackouts.
Existing DR programs fall into two broad categories: price-based programs (PBP) and incentive-based programs (IBP). PBP programs typically revolve around dynamic pricing rates, with the objective of reducing consumption during peak times. Notable approaches in this category include Time of Use [1], where the rate changes based on the time of day, and Real-Time Pricing [12], where the price fluctuates according to the market price of electricity. The second category, i.e. IBP programs, often reward participants with financial benefits either for their performance or for their participation. Direct Load Control (DLC) is one such program that allows the utility to control certain appliances within a consumer's home, such as air conditioning, in exchange for a financial incentive [13]. Similarly, the Demand Bidding/Buyback program provides consumers the opportunity to bid for payments in exchange for their desired load reductions [14].
### _Time Series Forecasting_
Time-series methods can be grouped into two broad categories: univariate and multivariate techniques. Univariate time series methods focus on analyzing individual observations sequentially without considering the correlations between different time series. ARIMA methods [15], for instance, assume linearity, where the prediction is a weighted linear sum of past observations. On the other hand, multivariate time series techniques consider and model the interactions and co-movements among a group of time series variables [16, 17]. Wan et al. [18] propose an encoder-decoder model based on an attention mechanism to capture correlations. The proposed end-to-end approach includes bi-directional long short-term memory (Bi-LSTM) layers as the encoder network to adaptively learn long-term dependencies and hidden correlation features of multivariate temporal data. The authors in [19] use filters to extract temporal patterns that remain consistent over time, and then apply an attention mechanism to select pertinent time series and utilize their frequency domain information for multivariate prediction.
## III Preliminaries
### _Notation_
Consider a map that contains \(n\) households \(\mathcal{U}=\{u_{1},...,u_{n}\}\), which are distributed among \(m\) non-overlapping neighborhoods \(\mathcal{N}=\{N_{1},...,N_{m}\}\). Each neighborhood
Table I: Summary of Notations.

| Symbol | Description |
| --- | --- |
| \(n\), \(m\) | Number of households and neighborhoods |
| \(\mathcal{U}=\{u_{1},...,u_{n}\}\) | Set of users |
| \(X_{d}^{\text{supply}}\), \(X_{d}^{\text{demand}}\) | Supply and demand power on day \(d\) |
| \(X_{d}^{u}\) | Power consumption of user \(u\) on day \(d\) |
| \(k\) | Number of features |
| \(x^{u}\) | Power consumption time series of user \(u\) |
| \(\mathcal{D}\), \(\mathcal{D}^{\prime}\) | Set of all days and of emergency days |
| \(r_{\text{baseline}}^{u}\), \(r_{\text{emergency}}^{u}\) | Baseline and emergency price rates for ILB participants |
| \(I^{u}\) | Incentive for household \(u\) |
| \(e^{u}\) | PE of household \(u\) |
| \(A_{real}\), \(A_{est}\) | Real and estimated similarity matrices |
contains a population of size \(|N_{i}|\). We represent the hourly electricity consumption of household \(u\in\mathcal{U}\) using a time series \(x^{u}\in\mathbb{R}^{1\times T}\), and we denote the set of all time series by \(X=(x^{1},...x^{n})\in\mathbb{R}^{n\times T}\). Additionally, we model the total power consumption of user \(u\) on day \(i\) as \(X_{i}^{u}\), and the set of all days in a billing cycle by \(\mathcal{D}\). Table I summarizes the important notations used throughout the manuscript.
### _Message Passing Framework_
Most GNNs use message-passing and aggregation to learn improved node representations, or so-called embeddings, of the graph. At propagation step \(i\), the embeddings of node \(v\) is derived by:
\[H_{v}^{i}=f(\text{aggr}(H_{u}^{i-1}|u\in N_{1}(v))). \tag{1}\]
In the above formulation, \(H_{v}^{i}\) represents the \(i\)-th set of features for the nodes, \(f(.)\) is the function used to transform the embeddings between propagation steps, \(N_{1}(.)\) retrieves the 1-hop embeddings of a node, and aggr combines the embeddings of its 1-hop neighbors. For instance, in GCN, node aggregation and message-passing are expressed as:
\[H_{v}^{i}=\sigma(W\sum_{u\in N_{1}(v)}\frac{1}{\sqrt{\hat{d}_{u}\hat{d}_{v}}}H_ {u}^{i-1}), \tag{2}\]
where \(\sigma\) is the activation function, \(W\) is a matrix of learnable weights, \(\hat{d}_{v}\) is the degree of node \(v\). The propagation length of the message-passing framework is commonly limited to avoid over-smoothing.
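As a concrete illustration, one GCN propagation step (Equation 2) can be written in a few lines of NumPy. Adding self-loops before normalization is standard GCN practice and our assumption here, with ReLU taken as the activation \(\sigma\):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step (Eq. 2): symmetric-normalized neighbor
    aggregation followed by a linear map and a ReLU activation.
    Self-loops are added so a node's own features survive aggregation."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops (our assumption)
    d = A_hat.sum(axis=1)                     # degrees \hat{d}_v
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(H_next, 0.0)            # ReLU as sigma
```

Stacking a small number of such layers gives the node embeddings; as noted above, the propagation depth is kept small to avoid over-smoothing.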
## IV Proposed Scheme
In this section, we present our proposed emergency DR program.
### _Program Overview_
With the rising trend of electricity consumption, combined with the limited capacity of supply, electricity disruptions in household networks are becoming inevitable. Suppose the utility provides \(X_{i}^{\text{supply}}\) kWh of power on a particular day \(i\), but the electricity demand is \(X_{i}^{\text{demand}}\) kWh, where \(X_{i}^{\text{supply}}<X_{i}^{\text{demand}}\). This necessitates a strategy to balance the supply and demand. Currently, there are two extreme strategies in place: (I) implementing the same level of power outage for all users by reducing each household's electricity by \((X_{i}^{\text{demand}}-X_{i}^{\text{supply}})/n\), and (II) focusing on only \(l\) households and cutting their power for a longer duration, while ensuring uninterrupted power supply for the others. The second strategy results in an outage of \((X_{i}^{\text{demand}}-X_{i}^{\text{supply}})/l\) in this group of households.
Sudden and unexpected power cuts can disrupt the daily life of households, regardless of their efforts to manage power demand. To tackle this issue, we propose a price-driven demand response program named the Incentive-driven Load Balancer (ILB), which prioritizes the user's preferences. The ILB program enables households to voluntarily participate in a program that offers them financial incentives upfront, in exchange for paying higher rates during a few unanticipated days. The households are informed of such occurrences a day before and encouraged to reduce their electricity consumption during those times. For those who do not opt in to the program, higher rates will be charged throughout the period to cover the program's expenses. The formal definition of the ILB program is provided in Definition 1.
**Definition 1**.: (_Incentive-Driven Load Balancer_). Under this strategy, households are allowed to voluntarily participate in a program, where they agree to pay higher rates for a number of unplanned days during the upcoming billing cycle, which they will be notified of only one day in advance. In return for their participation, they receive a monetary incentive at the start of the program. For the remaining duration of the billing cycle, the standard rates will be applied. To cover the cost of the incentives, the electricity rate for all other households is increased.
The aim of ILB is to incentivize flexible users who opt in to reduce their power usage during emergency days, rather than enforcing power outages to balance demand and supply or imposing higher prices on all users. This is achieved by charging the opt-in users higher rates on emergency days, encouraging them to adjust their consumption behavior. The incentive provided to users needs to be carefully determined: if it is too low, not enough customers will accept the offer, resulting in an insufficient reduction of demand during emergency days; if it is too high, too many households may sign up for the incentive, making the program too expensive and raising the burden on non-participating households beyond acceptable levels.
To quantify the utility of the proposed program, we propose the following two indicators. The first utility function aims to reveal if offers are successful in attracting customers.
**Definition 2**.: (_Utility: Acceptance Rate_). The acceptance rate indicates the percentage of households who agree to participate in the program after receiving an offer.
\[\text{Acceptance rate}=\frac{\#\,\text{accepted offers}}{\#\,\text{ offers made}}\times 100. \tag{3}\]
The second utility focuses on the amount of reduction made in consumption given the incentives formulated in Definition 3.
**Definition 3**.: (_Utility: Responsiveness Cost_). Let \(\mathcal{U}^{\prime}\) denote the set of users who voluntarily participate in the program, and \(\mathcal{D}^{\prime}=\{d_{i},...,d_{j}\}\) denote the set of emergency days that occurred during the billing cycle. The responsiveness cost of ILB is defined as
\[\text{Responsiveness Cost}=(\sum_{u\in\mathcal{U}^{\prime}}I^{u})/(\sum_{d\in D^{ \prime}}\sum_{u\in\mathcal{U}^{\prime}}\Delta X_{d}^{u}). \tag{4}\]
In the above formulation, \(\Delta X_{d}^{u}=X_{d}^{u}-\bar{X}_{d}^{u}\) represents the reduction in the consumption of user \(u\) on day \(d\). Here, the original consumption of user \(u\) is represented by \(X_{d}^{u}\), and the adjusted consumption due to their participation is represented by \(\bar{X}_{d}^{u}\). The symbol \(I^{u}\) refers to the incentive provided to user \(u\) at the beginning of the billing cycle.
The rationale behind the responsiveness utility function is to determine the amount of power consumption reduction that can be achieved for a given amount of incentives to participants. The utility company uses the following optimization formulation to balance supply and demand while minimizing incentive expenditure. The responsiveness cost is a metric that measures how well this objective is met in practice.
\[\text{Minimize} \sum_{u\in\mathcal{U}^{\prime}}I^{u}\] (5) subject to \[X_{d}^{\text{demand}}-X_{d}^{\text{supply}}\leq\sum_{u\in \mathcal{U}^{\prime}}\Delta X_{d}^{u},\quad\forall\,d\in\mathcal{D}^{\prime}\]
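The two utility indicators defined above translate directly into code; a minimal sketch of Equations 3 and 4 (function and variable names are ours):

```python
def acceptance_rate(accepted_offers: int, offers_made: int) -> float:
    """Eq. (3): percentage of made offers that were accepted."""
    return 100.0 * accepted_offers / offers_made

def responsiveness_cost(incentives, daily_reductions) -> float:
    """Eq. (4): total incentives paid to participants divided by the total
    consumption reduction they achieved over all emergency days.

    incentives: iterable of I^u for each participant u in U'
    daily_reductions: one iterable per emergency day of Delta X_d^u values (kWh)
    """
    total_reduction = sum(sum(day) for day in daily_reductions)
    return sum(incentives) / total_reduction
```

A lower responsiveness cost is better: each dollar of incentive bought more load reduction.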
### _Pricing_
As utility functions outlined in the previous section reveal, ILB requires a diligent selection of participants and careful pricing of incentives. In this subsection, we address the latter and explain how incentives are calculated, and in Section V, the selection process of applicants is illustrated. Recall that \(\mathcal{D}^{\prime}\subset\mathcal{D}\) are the emergency days. For any \(u\in\mathcal{U}^{\prime}\), let us denote the price rate for emergency days (\(d\in\mathcal{D}^{\prime}\)) by \(r_{\text{emergency}}^{u}\) and their regular rate for \(d\in D\backslash D^{\prime}\) by \(r_{\text{baseline}}^{u}\). Thus, by participating in the program the cost of user \(u\) will be modified from their baseline cost,
\[Cost_{\text{baseline}}(u)=\sum_{d\in D}X_{d}^{u}\times r_{\text{baseline}}^{u}, \tag{6}\]
to the modified cost of,
\[Cost_{\text{lib}}(u)=\sum_{d\in D\backslash D^{\prime}}X_{d}^{u}\times r_{ \text{baseline}}^{u}+\sum_{d\in D^{\prime}}\bar{X}_{d}^{u}\times r_{\text{emergency}}^{u}-I^{u}. \tag{7}\]
For the offer to be worth considering, the household should understandably expect
\[Cost_{\text{baseline}}(u)\geq Cost_{\text{lib}}(u). \tag{8}\]
Otherwise, the offer will not be beneficial for the user. The households who are not part of the program are charged an extra rate of \(r_{\text{extra}}\) and pay the modified rate of,

\[r_{\text{others}}=r_{\text{baseline}}+r_{\text{extra}}. \tag{9}\]
The rate hike for users not included in the program is derived based on the incentives provided in ILB, calculated as,
\[r_{\text{extra}}=(\sum_{u\in\mathcal{U}^{\prime}}I^{u})/(\sum_{u\in\mathcal{U} \backslash\mathcal{U}^{\prime}}\sum_{d\in D}X_{d}^{u}). \tag{10}\]
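Equations 9 and 10 also translate directly into code; a small sketch with illustrative names:

```python
def extra_rate(incentives, nonparticipant_consumption):
    """Eq. (10): spread the total incentive budget over the total
    billing-cycle consumption of households outside the program."""
    return sum(incentives) / sum(nonparticipant_consumption)

def nonparticipant_rate(r_baseline, incentives, nonparticipant_consumption):
    """Eq. (9): baseline rate plus the surcharge that funds the incentives."""
    return r_baseline + extra_rate(incentives, nonparticipant_consumption)
```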
### _Rate Hikes_
A critical factor in the above formulation is the rate hikes on emergency days. This factor directly influences the amount of demand response on such days. We derive this rate based on the _price elasticity of demand for electricity_.
**Definition 4**.: (_Elasticity of Demand_). The elasticity of demand refers to the degree to which demand responds to a change in an economic factor.
The appropriate value of \(r_{\text{emergency}}\) must be chosen such that the increase in rates results in a change in consumers' demand, which can be measured by the price elasticity of demand. The price elasticity of demand is derived by,
\[PE=\frac{\%\text{Change in Quantity}}{\%\text{Change in price}}. \tag{11}\]
The equation indicates the percentage of change in demanded quantity for a given percentage of change in price. For example, if the price of electricity increases by \(10\%\), and the quantity of electricity demanded decreases by \(5\%\), the price elasticity of demand for electricity can be calculated as:
\[PE=(-5\%/10\%)=-0.5 \tag{12}\]
The PE for electricity has been measured and estimated in many countries around the world, including the US. The factor is commonly considered over the short and long term. The average PE for electricity at the state level is estimated to be \(-0.1\) in the short run and \(-1.0\) in the long run [20, 21].
It is crucial to note that the numbers mentioned earlier represent average statistics for households. However, on an individual household level, the PE for electricity can vary greatly, which can impact the necessary amount of incentive required for them to accept an offer. Let us denote the PE for household \(u\) by \(e^{u}\). Hence, given the goal of ILB that every household in the program reduces its consumption by \(i\%\), the percentage of change in price for the household is calculated by
\[\%\,\text{Change in price}=i/e^{u}. \tag{13}\]
Once the percentage of change in price is calculated, the rates on emergency days are calculated by
\[r_{\text{emergency}}^{u}=(1+\%\,\text{Change in price})\times r_{\text{ baseline}}^{u}. \tag{14}\]
Armed with this knowledge, and applying the inequality in Equation 8, the minimum incentive value can be derived as,
\[Cost_{\text{baseline}}(u)\geq Cost_{\text{lib}}(u)\rightarrow \tag{15}\] \[\sum_{d\in D}X_{d}^{u}\times r_{\text{baseline}}^{u}\geq \sum_{d\in D\backslash D^{\prime}}X_{d}^{u}\times r_{\text{baseline}}^{u}\] (16) \[+\sum_{d\in D^{\prime}}\bar{X}_{d}^{u}\times r_{\text{emergency }}^{u}-I^{u}\rightarrow\] \[I^{u}\geq\sum_{d\in D^{\prime}}\bar{X}_{d}^{u}\times r_{\text{emergency }}^{u}-\sum_{d\in D^{\prime}}X_{d}^{u}\times r_{\text{baseline}}^{u}. \tag{17}\]
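A minimal sketch combining Equations 13, 14, and 17 (function and variable names are ours). Since \(e^{u}<0\), the targeted reduction is entered as a negative quantity change so that the price change of Equation 13 comes out positive, a sign convention the text leaves implicit:

```python
def emergency_rate(r_baseline, pe, reduction_pct):
    """Eqs. (13)-(14): rate hike needed for a household with price
    elasticity `pe` (< 0) to cut consumption by `reduction_pct` percent."""
    pct_price_change = (-reduction_pct) / pe      # e.g. -10 / -0.5 -> +20 (%)
    return (1.0 + pct_price_change / 100.0) * r_baseline

def min_incentive(x_emergency, x_adjusted, r_baseline, r_emergency):
    """Eq. (17): smallest incentive that keeps participation beneficial.

    x_emergency: original consumption X_d^u on each emergency day (kWh)
    x_adjusted:  reduced consumption \\bar{X}_d^u on those days (kWh)
    """
    return sum(x_adjusted) * r_emergency - sum(x_emergency) * r_baseline
```

For example, a household with \(e^{u}=-0.5\) asked to cut \(10\%\) faces a \(20\%\) rate hike on emergency days, and its minimum incentive follows from its consumption on those days.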
## V Implementation Framework
The success of the program relies heavily on the process of selecting candidates. Combining a sequential querying procedure with machine learning helps determine when enough households have been selected, made an offer, and have accepted the incentive. This is essential in ensuring that a satisfactory
number of high-demand flexibility users have accepted the incentives and that the expected reduction in demand meets the shortfall on emergency days. Additionally, this method prevents over-recruitment, which could result in unnecessary program expenses.
Our proposed approach for choosing potential participants for the ILB program is based on two graph models: the pattern recognition GNN and the household selection GNN. The dynamic pattern recognition model is designed to generate a similarity matrix among users, which indicates the probability that a household would respond positively to an offer based on the response of its neighbors and other households. The household selection model, on the other hand, is a node classification model that aims to identify the candidates who are most likely to accept the offer.
### _Pattern Recognition GNN_
The main objective of the pattern recognition model is to create an accurate similarity matrix that reflects the degree of similarity between two individuals based on their socio-economic status and electricity consumption pattern. This model helps us understand the likelihood of a household accepting an offer, given the responses of those already queried. The underlying assumption is that households with a high similarity score are likely to respond similarly to the offer. Although there may be some margin of error, a well-designed model can mitigate this issue to a reasonable extent. Our proposed model considers three factors: (I) socio-economic factors affecting demand response, (II) intra-series temporal correlations in time series, and (III) inter-series correlations captured by an attention mechanism that reveals pattern similarity. While the model is primarily designed to predict future demand, the inter-series attention matrix enables enhanced modeling of user similarity.
We use \(A_{real}\in\mathbb{R}^{n\times n}\) and \(A_{est}\in\mathbb{R}^{n\times n}\) to denote the actual and predicted probability matrices of accepting the offer, respectively. The element (\(a_{ij}\)) located in the \(i\)-th row and \(j\)-th column of these matrices represents the likelihood of household \(u_{i}\) accepting the offer, given that household \(u_{j}\) has already accepted the offer. An overview of the model is provided in Figure 1 and the components of the model are elaborated layer by layer in the following.
#### V-A1 Intra-series Temporal Correlation Layer
The first layer of the model aims to capture the temporal correlation within each time series. For a given window \(s\), the training data \(X\in\mathbb{R}^{n\times s}\) is input to the model. Each time series is processed by two components within this layer: a Gated Recurrent Unit (GRU) and a self-attention module.

The purpose of the GRU units is to handle sequential data by capturing temporal dependencies; GRUs have been shown to use less memory and run faster than Long Short-Term Memory (LSTM) networks. The self-attention components are placed after the GRU units to further enhance their performance. The input data \(X\) is fed through the GRU units and self-attention components, generating embeddings \(C=(c_{1},...,c_{s})\in\mathbb{R}^{s\times M}\), where \(M\) is the embedding size.
#### V-A2 Inter-series Correlation Layer
Next, the generated embeddings are fed to a multi-head attention layer [22]. The multi-head attention layer consists of multiple attention heads, each of which learns to attend to different parts of the input sequence. This is the critical step where the correlation and similarity between households are captured. The ultimate output of this unit is the matrix \(A_{\text{est}}\), or the so-called attention matrix, representing the pairwise correlations of households.
The input of the attention layer, as formulated in Equation 18, consists of queries (\(Q\)), keys (\(K\)), and values (\(V\)), each of which is a sequence of vectors. To capture the similarity between time series, all three inputs are set to the embedding matrix \(C\) generated in the previous layer. The layer then applies multiple attention heads, each of which computes a weighted sum of the values using a query-key pair. The outputs of the attention heads are concatenated and linearly transformed to produce the final output of the layer.
\[A_{est}=MultiHead(Q,K,V)=(head_{1}\parallel\ldots\parallel head_{j})W^{O}, \tag{18}\]
where each head consists of,
\[head_{i}=Attention(QW^{Q}_{i},KW^{K}_{i},VW^{V}_{i}), \tag{19}\]
\[Attention\left(Q,\ K,\ V\right)=Softmax\left(\frac{QK^{T}}{\sqrt{M}}\right)V. \tag{20}\]
In the above formulation, \(W^{O}\), \(W^{Q}_{i}\), \(W^{K}_{i}\), and \(W^{V}_{i}\) are weight matrices learned during the training.
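The inter-series layer is standard multi-head self-attention; the NumPy sketch below implements Equations 18-20. How the per-head attention maps are combined into the single matrix \(A_{\text{est}}\) is not specified above, so averaging the heads here is our assumption:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(C, Wq, Wk, Wv, Wo):
    """Eqs. (18)-(20): self-attention over household embeddings C (n x M).
    Wq, Wk, Wv are per-head projection lists; Wo maps the concatenated heads.
    Returns the layer output and an n x n attention matrix (heads averaged,
    our assumption) usable as A_est."""
    M = C.shape[1]
    heads, attn_maps = [], []
    for Wq_i, Wk_i, Wv_i in zip(Wq, Wk, Wv):
        Q, K, V = C @ Wq_i, C @ Wk_i, C @ Wv_i
        A = softmax(Q @ K.T / np.sqrt(M))   # Eq. (20), scaled dot-product
        attn_maps.append(A)
        heads.append(A @ V)                 # Eq. (19), one head
    out = np.concatenate(heads, axis=1) @ Wo  # Eq. (18), concat + linear map
    return out, np.mean(attn_maps, axis=0)
```

Each row of the returned attention matrix sums to one, so entry \((i,j)\) can be read as the relative weight household \(j\) carries when modeling household \(i\).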
#### V-A3 Socio-economic Factors
Next, the embedding produced in the preceding layers will be combined with the socio-economic factors to serve as node features in the dynamic
Figure 1: Pattern Recognition Graph.
graph. It is important to note that our focus is on factors that have been proven to have a considerable effect on demand elasticity, not just on user consumption. For instance, even though the number of individuals in a household can impact consumption, it has not been identified as a significant coefficient of demand elasticity in some studies [23]. We have summarized the key determinants in the following. (I) _Income_: Studies, such as the one by Brounen et al. [24], have demonstrated that the level of income has a notable influence on the elasticity of demand for electricity. (II) _Demographic characteristics_: Factors such as race and age significantly impact household electricity consumption [25, 23]. (III) _Dwelling physical factors_: Aspects including the type of building, its size, and its thermal and quality characteristics are linked to the amount of energy consumed by a household [26]. (IV) _Climate_: Temperature has been demonstrated to significantly impact electricity consumption [23]. (V) _Living area_: The geographical location of households, including regional variables such as urban and rural areas, cities and counties, critically affects their elasticity of demand [23]. (VI) _Awareness and education_: The level of awareness about policies and the education level of individuals can influence their electricity demand and compliance with ILB, as shown by Du et al. [23], who found the coefficient representing the extent to which users understand the policy to be a significant factor in demand response.
Let us denote the socio-economic features for \(i\)-th household by \(\hat{c}_{i}\in\mathbb{R}^{\bar{M}}\). The concatenation of embeddings from time series and socio-economic features leads to node features of the dynamic graph, mathematically represented by,
\[c^{\prime}_{i}=c_{i}||\hat{c}_{i}\in\mathbb{R}^{(M+\bar{M})}, \tag{21}\]
resulting in the final node feature matrix \(C^{\prime}=(c^{\prime}_{1},...,c^{\prime}_{n})\in\mathbb{R}^{n\times(M+\bar{ M})}\).
#### V-A4 Dynamic Graph
In the final stage of the pattern recognition graph, the dynamic GNN is formulated as \(G(V,E)\), where \(V\) represents the node set and \(E\) represents the edge set. To create the graph, each household is assigned a node, and their node features are generated using the concatenated features derived from Equation 21. The attention matrix \(A_{est}\) obtained from Equation 18 is utilized as the edge weights of the graph. Subsequently, the GNN block is applied to the graph. The result of this process is the embeddings for the nodes, represented by
\[C^{\prime\prime}=F_{\text{GNN}}(G(V,E))\in\mathbb{R}^{n\times M^{\prime}}. \tag{22}\]
where \(F_{\text{GNN}}\) denotes a custom GNN model. The graph uses the mean square error (MSE) as its objective loss function to predict demand. After training, the pattern recognition graph outputs the attention matrix \(A_{\text{est}}\), which is utilized to represent the similarity between households.
### _Household Selection GNN_
The purpose of the household selection graph is to efficiently select the users who have the highest likelihood of participating in the program. The graph is therefore a node classification GNN performing a semi-supervised task. Let \(G_{\text{HSG}}=(V_{\text{HSG}},E_{\text{HSG}})\) denote the household selection graph. The similarity matrix generated by the pattern recognition graph provides the weighted edges of the graph, i.e., \(E_{\text{HSG}}=A_{\text{est}}\). At this stage, all socio-economic and power-consumption information has been incorporated into the similarity matrix. The set of nodes in the graph consists of a single node for each household, i.e., \(V_{\text{HSG}}=\mathcal{U}\). One-hot encodings are used as node features of the graph. For example, if \(4\) households exist in the network, their feature set would be \(0001,\,0010,\,0100,\,1000\).
The goal of this graph is to efficiently label nodes as accepting or rejecting an offer. The labeling proceeds in two steps. First, spectral graph analysis is conducted to cluster nodes by similarity, which provides an initial, unsupervised view of the node labels. Based on this clustering, a small portion of households in each neighborhood is queried to obtain true labels. These labels are then used in the graph to conduct semi-supervised node labeling.
#### V-B1 Spectral Graph Analysis
Spectral clustering methods rely on the spectrum (eigenvalues) of the data's similarity matrix to embed the data in a lower-dimensional space before clustering it there. The similarity matrix \(A_{\text{est}}\) is given as input to the clustering.
First, the method computes the normalized Laplacian of the graph as defined by the following equation:
\[L=I^{\prime}-D^{\prime\prime-1/2}A_{\text{est}}D^{\prime\prime-1/2}. \tag{23}\]
In this equation, \(I^{\prime}\) represents the identity matrix, while \(D^{\prime\prime}\) is defined as \(diag(d)\), where \(d(i)\) denotes the degree of node \(i\).
Second, it computes the \(k\) eigenvectors corresponding to the \(k\) smallest eigenvalues of the Laplacian matrix. A new matrix is then formed from these \(k\) eigenvectors, where each row is treated as the feature vector of the corresponding node in the graph. Finally, the nodes are clustered on these features using the \(k\)-means algorithm into two clusters, representing households that are likely to accept or reject the offer.

Figure 2: Household Selection Graph.
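The spectral step can be sketched in NumPy as follows; the deterministic initialization of \(k\)-means along the second eigenvector is our simplification:

```python
import numpy as np

def spectral_cluster(A, n_iter=50):
    """Cluster the nodes of a similarity matrix A into two groups:
    build the normalized Laplacian (Eq. 23), embed nodes with the two
    smallest-eigenvalue eigenvectors, then run a minimal k-means."""
    A = (A + A.T) / 2.0                         # symmetrize A_est
    d = A.sum(axis=1)
    d_is = 1.0 / np.sqrt(np.maximum(d, 1e-12))  # D''^{-1/2}, guarded
    L = np.eye(len(A)) - d_is[:, None] * A * d_is[None, :]
    w, v = np.linalg.eigh(L)
    feats = v[:, np.argsort(w)[:2]]             # spectral node features
    # deterministic init: the extremes along the Fiedler-like direction
    centers = feats[[np.argmin(feats[:, -1]), np.argmax(feats[:, -1])]]
    for _ in range(n_iter):
        dist = ((feats[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dist.argmin(axis=1)
        for j in range(2):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(axis=0)
    return labels
```

On a similarity matrix with two well-connected blocks, the two clusters recover the blocks, which is exactly the accept/reject split the selection graph needs as its starting point.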
#### V-B2 Semi-Supervised Node Classification
Up to this point, no offers have been made to determine real-world responses. However, using spectral graph analysis, households have been grouped into two clusters and assigned a label. It is important to note that these cluster labels do not by themselves indicate which group is more likely to accept the offers. The goal at this stage is to use the insight gained from clustering to survey a small portion of individuals from each of the \(m\) neighborhoods, and to use this data alongside the adjacency matrix in a semi-supervised approach to identify individuals who are likely to accept the offers.
To ensure that all parts of the graph are properly discovered and fairly treated, it is crucial to diversify the initial queries across all neighborhoods. For this reason, in each neighborhood, \(5\%\) of users from each cluster generated by spectral graph analysis are selected to be offered incentives. Querying this sample of users yields true labels for \(10\%\) of the population. With this information in hand, a node classification GNN model is applied on top of \(G_{\text{HSG}}=(V_{\text{HSG}},E_{\text{HSG}})\) to classify whether the remaining users will accept the offer. The resulting labels are used to make offers.
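The stratified querying step (a fixed fraction of each spectral cluster in each neighborhood) can be sketched as below; all helper names are ours:

```python
import random

def initial_query_set(households, neighborhood_of, cluster_of,
                      frac=0.05, seed=0):
    """Sample `frac` of each (neighborhood, cluster) group to receive the
    first round of offers, so every region and both spectral clusters are
    represented among the ground-truth labels."""
    rng = random.Random(seed)
    groups = {}
    for u in households:
        groups.setdefault((neighborhood_of[u], cluster_of[u]), []).append(u)
    queried = []
    for members in groups.values():
        n_pick = max(1, round(frac * len(members)))  # at least one per group
        queried.extend(rng.sample(members, n_pick))
    return queried
```

The responses of the queried households become the labeled nodes for the semi-supervised classification described above.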
## VI Experimental Evaluation
### _Datasets_
We conduct our experiments by combining multiple datasets from various sources, taking into consideration significant aspects such as geographical diversity, the time series of household electricity consumption, and socioeconomic factors.
_Household-level Electricity Time-Series._ The privacy concerns regarding household-level electricity consumption have limited the availability of publicly accessible datasets, with current datasets containing no more than \(50\) households. A comprehensive overview of these datasets can be found in [27]. Nonetheless, the work in [5] has tackled this issue by creating a synthetic dataset that covers households throughout the United States, functioning as a digital twin of residential-sector energy-use data. For our experiments, we use the household-level time-series data from this dataset.
_Education and Awareness._ To indicate the level of awareness, the average ACT scores obtained from the EdGap dataset [6] are used. This dataset contains socioeconomic features of high school students across the United States. The geospatial coordinates of schools are derived by linking their identification numbers to data provided by the National Center for Education Statistics [7], from which the average ACT scores are calculated. Additionally, the percentage of people who have completed some college education, provided by [8], is used to further characterize the level of awareness.
_Median Household Income and Unemployment Percentage._ The county-level information for the median household income as well as the unemployment percentage in 2014 is extracted from the US Department of Agriculture website [8] and used as static features for counties.
_Climate._ Two key attributes provided by the National Centers for Environmental Information [9] are used as indicators of climate in counties: average temperature and precipitation amount.
_Geographic Diversity._ Three states, California (CA), Texas (TX), and Michigan (MI), are selected for the experiments, including the first five counties in each based on codes published by the U.S. Census Bureau [28]. The geospatial locations of the counties used in the experiments and their corresponding statistics are provided in Figure 3.
### _Experimental Setup_
_Overview._ The experiments are divided into two parts, examining performance before and after applying the framework, respectively. Subsection VI-C looks at how well the ILB performs before the selection framework is applied, and investigates how the program affects responsiveness cost and how much it can reduce total consumption for a given amount of incentive. Subsection VI-D focuses on performance after the framework has been applied, thoroughly evaluating factors such as responsiveness cost, the effect of rate increases on non-participants, and noise in the selection process.
_Hyper-parameters Setting._ The household electricity time series is used on an hourly basis between September and December of \(2014\). The number of households participating in the program in each county is \(50\), adding up to \(250\) households in each state. The dataset was separated into three parts: training, evaluation, and testing sets, with a ratio of 7:2:1 respectively. Z-normalization was applied to normalize the input data, and training was conducted using the RMSProp optimizer with a learning rate of 3e-4. Training took place over \(100\) epochs, with a batch size of \(32\). The number of emergency days per month is set to three days. The number of participants in the experiment is set to be one-quarter of the population being considered and the default incentive provided to participants is \(100\) dollars unless stated otherwise. The electricity PE for households is randomly selected based on a Gaussian distribution with a mean of \(-0.25\) and a standard deviation of \(0.1\).
_Hardware and Software Setup._ Our experiments were performed on a cluster node equipped with an \(18\)-core Intel i9-9980XE CPU, \(125\) GB of memory, and two \(11\) GB NVIDIA GeForce RTX 2080 Ti GPUs. Furthermore, all neural network models are implemented based on PyTorch version 1.13.0 with CUDA 11.7 using Python version 3.10.8.
### _Performance Analysis of ILB_
In this subsection, we focus on the ILB program's performance independent of the framework. Initially, we evaluate the ILB's efficiency in terms of responsiveness cost, and subsequently, the program's effectiveness in total demand reduction of electricity.
#### VI-C1 Responsiveness Cost
Figure 4 illustrates the results for the ILB program's responsiveness cost. Recall that the responsiveness cost reflects the total incentive amount offered in
relation to the observed reduction, thus a lower cost indicates better performance. The figure showcases the performance across three states: California, Michigan, and Texas. In each subfigure, given a specified incentive (highlighted with a bar) offered to the entire community, each individual assesses the benefit of participating in the program. This assessment is conducted using Equation 17, which allows us to derive the community's acceptance rate and represent it on the \(x\)-axis, while the corresponding responsiveness cost is plotted on the \(y\)-axis. The figure reveals that an increase in the incentive amount expectedly raises the acceptance rate, but it also results in a higher responsiveness cost. The rate of increase for responsiveness cost tends to be lower for smaller incentive amounts and grows for larger values.
#### VI-C2 Total Demand Reduction
Figure 5 shows the assessment of the total percentage reduction in consumption on emergency days. The structure of subfigures is analogous to Figure 4, but they illustrate the impact on the total consumption reduction instead of responsiveness cost on the \(y\)-axis. An increasing trend is observed across all three states indicating that a rise in the incentive amount and acceptance rate leads to a higher reduction in consumption on emergency days, enhancing the agility and performance of the electricity network. This pattern hints at a crucial trade-off between responsiveness cost and overall demand reduction, which becomes increasingly apparent as the incentives and participant count increase.
### _Evaluating the Integrated Framework and ILB_
Unlike the previous subsection, where offers were made to every household, this subsection evaluates performance based on a proposed framework that incorporates socio-economic factors and other determinants affecting the elasticity of demand for electricity when selecting participants.
#### VI-D1 Responsiveness Cost
The performance evaluation in terms of responsiveness cost is depicted in Figure 6. The
Figure 4: Evaluation of total responsiveness cost.
Figure 3: Counties used for the purpose of experiments and their corresponding average hourly consumption. The standard deviation of consumption is shown as error bars.
\(x\)-axis represents the reduction in consumption by ILB program participants during emergency days, while the \(y\)-axis corresponds to the responsiveness cost. A quarter of the households with the highest scores based on our framework are selected to participate in the program. As expected, as the reduction by participants increases, a considerable decrease in responsiveness cost is observed in all three states. The horizontal red lines in the figure indicate the total reduction in demand for the whole community. The intersection of a curve and a horizontal line shows when the reduction in demand by ILB users accounts for the amount of total demand reduction specified by the red line. The figures demonstrate that even small reductions of \(10\) to \(20\) percent by participants can lead to a significant reduction of approximately \(5\) to \(10\) percent in total demand. When the reduction by ILB participants is \(50\%\), the total reduction accounts for \(20\) to \(25\) percent. Therefore, the proposed approach is a viable way to effectively manage supply and demand during emergencies and prevent severe outages across states and counties.
#### VI-B2 Rate Hikes on Non-participants
Figure 5: Evaluation of total demand reduction.

Figure 6: Responsiveness performance of ILB.

Figure 7: Rate hike on non-participants in ILB.

Figure 7 shows the impact of the ILB program on non-participants. It displays the rate hikes in \(c/kWh\) for non-participants at different percentages of ILB participation and corresponding incentives paid. Consistent with previous experiments, the PE for electricity is considered, and participants with the highest scores based on our framework are selected for the program. As can be seen in the figure, increasing the percentage of participants and the amount of incentives paid leads to a higher cost for non-participants. However, the figure illustrates that in all three states, with a billing period of one month, keeping the incentive amount within the range of \(100\) to \(200\) dollars imposes only a minimal additional cost on the hourly consumption of non-participants.
#### VI-B3 Candidate Selection and Noise Analysis
In Table II, the proposed framework's effectiveness for selecting candidates to participate in ILB is presented alongside noise analysis on the similarity matrix. To introduce inaccuracies in the attention matrix, uniform random noise is added to each entry of the matrix, following a distribution of \(Uniform(0,b)\) where \(b\) is set to three values: \(25\%,50\%\) and \(75\%\) of the average value of the attention matrix. The results demonstrate that when the attention matrix is accurate, the model's accuracy is approximately \(90\%\). As inaccuracies increase, the model's performance consistently declines across all three states. Nevertheless, it is notable that even when the amount of inaccuracy is large, it does not severely impact the performance of the final model, and the accuracy remains in an acceptable range.
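The noise-injection step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function name `perturb_attention` and the toy matrix are assumptions; only the \(Uniform(0,b)\) scheme with \(b\) as a fraction of the matrix mean follows the text.

```python
import numpy as np

def perturb_attention(attention, fraction, rng=None):
    """Add Uniform(0, b) noise to every entry, with b a given
    fraction of the attention matrix's average value."""
    rng = np.random.default_rng() if rng is None else rng
    b = fraction * attention.mean()
    return attention + rng.uniform(0.0, b, size=attention.shape)

# 25% noise level on a toy similarity matrix
A = np.array([[1.0, 0.4],
              [0.4, 1.0]])
A_noisy = perturb_attention(A, 0.25, rng=np.random.default_rng(0))
```

The same call with `fraction` set to 0.50 or 0.75 reproduces the other two noise levels from Table II.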
## VII Conclusion
In conclusion, this paper introduced ILB, a novel program specifically designed for efficient demand management and response during crisis events. The ILB program encourages flexible households with the potential to lower their energy demand through incentives, thus facilitating effective demand reduction and preparing them for unforeseen circumstances. We also developed a two-step machine learning framework for the selection of participants. This framework, which incorporates a graph-based method to pinpoint households capable of readily modifying their energy usage, leverages two GNNs - one for recognizing patterns and another for household selection. Our comprehensive experiments on household-level electricity consumption across CA, MI, and TX provided compelling evidence of the ILB program's significant capacity to aid communities during emergency situations.
# A Multidimensional Graph Fourier Transformation Neural Network for Vehicle Trajectory Prediction

Marion Neumeier, Andreas Tollkühn, Michael Botsch, Wolfgang Utschick

arXiv:2305.07416v1, 2023-05-12, http://arxiv.org/abs/2305.07416v1
###### Abstract
This work introduces the multidimensional Graph Fourier Transformation Neural Network (GFTNN) for long-term trajectory predictions on highways. Similar to Graph Neural Networks (GNNs), the GFTNN is a novel network architecture that operates on graph structures. While several GNNs lack discriminative power due to suboptimal aggregation schemes, the proposed model aggregates scenario properties through a powerful operation: the multidimensional Graph Fourier Transformation (GFT). The spatio-temporal vehicle interaction graph of a scenario is converted into a spectral scenario representation using the GFT. This beneficial representation is input to the prediction framework composed of a neural network and a descriptive decoder. Even though the proposed GFTNN does not include any recurrent element, it outperforms state-of-the-art models in the task of highway trajectory prediction. For experiments and evaluation, the publicly available datasets highD and NGSIM are used.
## I Introduction
Predicting the motion intention of nearby road users in different traffic scenarios is crucial for autonomous driving systems to operate safely. However, the trajectory prediction of other participants is a challenging task since it highly depends on the scenario's cooperative context. The cooperative context describes the situational and social interactions between traffic participants, that influence the behavior of each individual. These interdependencies of the cooperative context mainly result from the temporal and spatial relations between all participants. Existing approaches therefore attempt to model or statistically learn causalities within these two key dimensions. Since trajectory prediction can be regarded as a sequence-to-sequence learning task, most machine learning based methods use Recurrent Neural Networks (RNNs) [1][7][19]. Recently, Graph Neural Networks (GNNs) have also gained increasing popularity in the field of autonomous driving. GNNs are deep learning networks that operate on graph structures. Therefore, this class of deep learning architectures is especially powerful when applied on data generated from non-Euclidean domains or information represented by graphs.
Although GNNs are successfully used in several applications [24][8][5], there has been a limited amount of work that studies their representational properties. Recent works with a focus on mathematical properties revealed theoretical limitations of the representative power of GNNs [22][17][3]. It has been shown that popular GNN variants like the Graph Convolutional Network (GCN) [13] and GraphSAGE [11] lack expressive power, since these networks are unable to distinguish different graph structures [29]. The expressive power of any GNN mainly depends on the discriminative power of the aggregation scheme. A maximally powerful GNN provides an _injective_ neighborhood aggregation scheme, which means that two different neighborhoods are never mapped to the same representation [29]. GNNs that respect this principle are able to effectively learn complex relationships and interdependencies.
In this work, the relational representation power of graphs is used to find an efficient scenario representation in order to predict a vehicle's trajectory. It introduces the multidimensional Graph Fourier Transformation Neural Network (GFTNN), which models the traffic scenario through a multidimensional graph. The aggregation of the scenario graph is done by applying the multidimensional Graph Fourier Transformation (GFT). This embedding is injective and preserves dimension properties. By forwarding the GFT embedding through a basic Feedforward Neural Network (FNN), the architecture outperforms current state-of-the-art deep neural networks in the task of trajectory prediction on highways. To the best of the authors' knowledge, this work is the first to simultaneously utilize spatial and temporal dependencies through the multidimensional GFT within a neural network setup for trajectory prediction in traffic.
**Contribution.** The work contributes towards the usage of multidimensional dependencies for trajectory prediction through the definition of graphs and dimension perceiving operations. The main contributions are summarized as follows:
* Introduction of the GFTNN for vehicle trajectory predictions on highways.
* Performance evaluation on the publicly available highway datasets highD and NGSIM.
* Interpretation of the prediction performance of the proposed GFTNN.
In this work, vectors are denoted as bold lowercase letters and matrices as bold capital letters.
## II Related Work
Being aware of the cooperative traffic context and the behaviour of individual traffic participants is essential for autonomous driving systems to safely plan their motion. Therefore, it is necessary to contrive architectures that are able to capture the interactive nature of traffic scenarios and predict the future motions of participants. Motion prediction of vehicles can be categorized into two main categories: short-term predictions (\(<1\,\mathrm{s}\)) and long-term predictions (\(\geq 1\,\mathrm{s}\)) [21]. Current state-of-the-art deep learning methods for long-term trajectory prediction of vehicles are mainly dominated by neural network architectures that are not based on graph structures [18][19][6]. An extensive survey on different approaches for motion prediction was done by Lefevre _et al._[16]. Important representatives are the Social LSTM [1] and the Convolutional Social Pooling LSTM (CS-LSTM) [7]. These models use RNNs and Convolutional Neural Networks (CNNs) to evaluate the sequential and interdependent nature of the trajectory prediction task. Messaoud _et al._[19] extended this idea by introducing the Multi-head Attention Social Pooling (MHA-LSTM) model. The core idea of the MHA-LSTM is to use the attention mechanism [26] to better evaluate the cooperative context of a scenario. Although such RNN-based models are widespread and perform well in sequence analysis, these methods suffer from time-consuming iterative propagation and gradient explosion/vanishing issues [23]. The proposed GFTNN does not contain recurrent network structures for the prediction and therefore prevents these issues.
Over the last years, the field of GNNs has experienced increasing attention and various models were introduced. For spatio-temporal sequence learning tasks, which relate to the trajectory prediction task, often spatio-temporal Graph Neural Networks (STGNNs) are applied. The surveys [32] and [28] provide an introduction to the group of STGNNs. In [30], a STGNN was applied to traffic forecasting. The STGNN model is based on a graph structure followed by operations of stacked sequential convolutions. In contrast to the proposed network in this work, their algorithm does not define a spatio-temporal graph but rather creates a stacked set of spatial graphs. In [20], a graph- and RNN-based vehicle trajectory prediction model for highway driving was introduced. The model includes a Time-Extrapolator Convolution Neural Network (TXPCNN) layer to set up a stateless system for the trajectory prediction. Zhou _et al._[31] propose an Attention-based Spatio-Temporal Graph Neural Network (AST-GNN) for interaction-aware pedestrian trajectory prediction. This model uses the attention mechanism in order to extract interactions and motion patterns within the spatial-temporal domain. Contrary to the architecture proposed in this work, the AST-GNN handles the spatio-temporal dimensions separately. It has a spatial GNN and an additional temporal GNN. In [9], the Repulsion and Attraction Graph Attention (RA-GAT) model for trajectory prediction is presented. The model is based on two stacked Graph Attention Networks [27], which address either free space or vehicle state information through distinct graph definitions. Through this setup, the authors follow the idea of repulsive and attractive forces within a traffic scenario. To encode and decode the vehicle movements, LSTMs are used. VectorNet, introduced in [10], is a hierarchical GNN that initially aggregates the agents' trajectories and map features to polyline subgraphs.
The information is then passed to a global interaction graph (GNN) to fuse the features among the subgraphs. The global graph aggregation function is implemented through a self-attention mechanism.
The approach most related to the one of this work is the Spectral Temporal Graph Neural Network (SpecTGNN) of Cao _et al._[4]. The SpecTGNN handles the environmental and agent modelling separately. Both matters are computed by defining a 1D spatial connectivity graph structure, where each node contains a feature vector stacking the information of all considered time steps. By separately applying a spectral graph convolution and a temporal convolution, the state information of the interactive agents (and the environment) are encoded. After combining the resulting spatial and temporal latent spaces, the inverse Fourier transform is applied. Following, the environmental and agent modelling latent spaces are added and a multi-head attention mechanism is applied. The trajectory is predicted through a temporal CNN. In contrast to the proposed model of this work, the SpecTGNN does not define a spatio-temporal graph, but rather addresses both dimensions separately and successively.
## III Method
The proposed GFTNN is novel due to the extraction of expressive representations of traffic scenarios through a global and injective aggregation function based on the graph properties. By defining a spatio-temporal graph based on the scenario and applying the multidimensional GFT, the graph's spectral features are computed. This spectrum holds the temporal and spatial dynamic characteristics of a scenario. The transformed scenario representation is input to a neural network. The GFTNN does not contain any computationally expensive components for the trajectory prediction and thus model complexity is reduced.
### _Preliminaries_
**Graph definition**: Let \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{W})\) be an undirected weighted graph with \(\mathcal{V}\) being the set of nodes, \(\mathcal{E}\) the set of edges and the weight matrix \(\mathbf{W}\in\mathbb{R}^{N\times N}\). The weight matrix is symmetric and satisfies \(\mathbf{W}_{ij}=1\) for all connections \(i\to j\) if the graph is unweighted. The graph consists of \(N=|\mathcal{V}|\) nodes and \(E=|\mathcal{E}|\) edges. The connectivity of the graph is represented by the adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\) where \(\mathbf{A}_{ij}=1\) if there exists an edge between the nodes \(i\) and \(j\), otherwise \(\mathbf{A}_{ij}=0\). The degree matrix \(\mathbf{D}\in\mathbb{R}^{N\times N}\) is a diagonal matrix with the degrees on the diagonals such that \(\mathbf{D}_{ii}=\sum_{j}\mathbf{W}_{ij}\). The node feature of node \(i\) is denoted by \(f(i)\). Particularly important for the GFT is the Laplacian matrix \(\mathbf{L}=\mathbf{D}-\mathbf{W}\odot\mathbf{A}\), where \(\odot\) indicates the Hadamard product operation. The Laplacian matrix \(\mathbf{L}\in\mathbb{R}^{N\times N}\) is a real, symmetric and positive-semidefinite matrix.
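The definitions above translate directly into numpy. A minimal sketch (helper name assumed) that builds the degree matrix and the Laplacian \(\mathbf{L}=\mathbf{D}-\mathbf{W}\odot\mathbf{A}\) for a small unweighted graph:

```python
import numpy as np

def laplacian(W, A):
    """Graph Laplacian L = D - W ⊙ A for an undirected weighted graph."""
    Wm = W * A                    # Hadamard product: weights masked by connectivity
    D = np.diag(Wm.sum(axis=1))   # degree matrix, D_ii = sum_j W_ij
    return D - Wm

# unweighted 3-node path graph 0-1-2, so W_ij = 1 on every edge
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
W = np.ones_like(A)
L = laplacian(W, A)
```

The resulting matrix is real, symmetric, and positive-semidefinite, with rows summing to zero, matching the properties stated above.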
**Cartesian product graph**: A Cartesian product \(\mathcal{G}_{1}\square\mathcal{G}_{2}\) is a type of graph multiplication that satisfies specific properties as described in [15]. Such a Cartesian product operation for the graphs \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) is qualitatively illustrated in Fig. 1. A beneficial property of the Cartesian product graph is that
solving the eigenproblem of the resulting graph \(\mathcal{G}_{1}\square\mathcal{G}_{2}\) can be broken down into addressing the eigenproblem of each factor graph \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) separately.
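This factorization property can be verified numerically: the Laplacian of a Cartesian product graph is the Kronecker sum of the factor Laplacians, and its spectrum consists of all pairwise sums of the factor eigenvalues. A small sketch (the `path_laplacian` helper is an assumption for illustration):

```python
import numpy as np

def path_laplacian(n):
    """Laplacian of an unweighted path (line) graph with n nodes."""
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

L1, L2 = path_laplacian(3), path_laplacian(4)
I1, I2 = np.eye(3), np.eye(4)

# Laplacian of G1 □ G2 as a Kronecker sum
L_prod = np.kron(L1, I2) + np.kron(I1, L2)

# the product-graph spectrum is the set of pairwise sums of factor eigenvalues,
# so the eigenproblem can be solved factor by factor
lam1 = np.linalg.eigvalsh(L1)
lam2 = np.linalg.eigvalsh(L2)
pairwise = np.sort(np.add.outer(lam1, lam2).ravel())
```

This is exactly why the GFTNN can decompose the two small factor Laplacians instead of one large product-graph Laplacian.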
**Eigenvalue decomposition of graphs**: The eigenvalue problem of graphs can be solved by using the Laplacian matrix \(\mathbf{L}\)[15]. Due to the mathematical properties of the Laplacian matrix \(\mathbf{L}\in\mathbb{R}^{N\times N}\) its eigenvalue decomposition results in \(N\) real, non-negative eigenvalues \(\lambda_{0},\ldots,\lambda_{N-1}\) and the eigenvectors \(\mathbf{u}_{0},\ldots,\mathbf{u}_{N-1}\) such that
\[\mathbf{L}\mathbf{u}_{i}=\lambda_{i}\mathbf{u}_{i}, \tag{1}\]
where index \(i=0,\ldots,N-1\). The resulting set of eigenvalues (usually sorted by size) are denoted as the spectrum of a graph. The corresponding eigenvectors form the orthogonal basis \(\mathfrak{B}=\{\mathbf{u}_{0},\ldots,\mathbf{u}_{N-1}\}\) which can be used to perform frequency transformations.
**Multidimensional graph Fourier transformation**: Analogous to the classic Fourier transformation, the set of eigenvalues \(\mathbf{\Lambda}\in\mathbb{R}^{N}\) of a Laplacian matrix represent frequencies and the eigenvectors \(\mathbf{U}\in\mathbb{R}^{N\times N}\) form the basis \(\mathfrak{B}\). The multidimensional GFT of a Cartesian product graph \(\mathcal{G}_{1}\square\mathcal{G}_{2}\) of the signal \(f\) is described by [15]
\[\hat{f}(\lambda_{l_{1}}^{(1)},\lambda_{l_{2}}^{(2)})=\sum_{i_{1}=0}^{N_{1}-1} \sum_{i_{2}=0}^{N_{2}-1}f(i_{1},i_{2})\overline{u_{l_{1}}^{(1)}(i_{1})u_{l_{2} }^{(2)}(i_{2})}, \tag{2}\]
for \(l_{1}=0,\ldots,N_{1}-1\) and \(l_{2}=0,\ldots,N_{2}-1\). The superscript \({}^{(1)}\) denotes the eigenvalues/eigenvectors of graph \(\mathcal{G}_{1}\) and \({}^{(2)}\) of graph \(\mathcal{G}_{2}\), respectively. The expression \(\overline{u_{l_{1}}^{(1)}(i_{1})u_{l_{2}}^{(2)}(i_{2})}\) denotes the element-wise complex conjugate of the multiplication.
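Because the Laplacians are real and symmetric, their eigenvectors are real, and (2) reduces to two matrix products. A hedged numpy sketch (function name and toy matrices are assumptions):

```python
import numpy as np

def gft_2d(F, L1, L2):
    """Multidimensional GFT of a signal F on the product graph G1 □ G2."""
    _, U1 = np.linalg.eigh(L1)   # real, orthonormal eigenvectors of each factor
    _, U2 = np.linalg.eigh(L2)
    # for real symmetric Laplacians the complex conjugate in (2)
    # is trivial, so the transform is a pair of transposed products
    return U1.T @ F @ U2

# toy example: temporal path graph (3 steps) × spatial graph (2 vehicles)
L_t = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
L_s = np.array([[1., -1.], [-1., 1.]])
F = np.arange(6.0).reshape(3, 2)
F_hat = gft_2d(F, L_t, L_s)
```

Since both eigenvector bases are orthonormal, the transform preserves the signal energy and is exactly invertible via \(\mathbf{U}_1\hat{\mathbf{F}}\mathbf{U}_2^{\mathsf{T}}\).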
### _Problem Definition_
The task is to predict the trajectory of a single target vehicle in a traffic scenario, given motion observations of the target vehicle and its surrounding vehicles. In total, the \(N_{V}\) road users including the target vehicle itself are considered in the prediction process. At each timestep \(t\) the relative spatial coordinates of the surrounding road users with respect to the target vehicle are given. The motion information of the \(j\)-th participating road user is represented as \(\mathbf{\xi}_{j}=[\mathbf{x}_{j,\text{rel}},\mathbf{y}_{j,\text{rel}},\mathbf{v}_{j,x,\text{rel}},\mathbf{v}_{j,y,\text{rel}}]\). The input matrix of the \(j\)-th road user contains the relative distances \((\mathbf{x}_{j,\text{rel}},\mathbf{y}_{j,\text{rel}})\) and velocities \((\mathbf{v}_{j,x,\text{rel}},\mathbf{v}_{j,y,\text{rel}})\) within a specified observation period \(t_{\text{obs}}\) up to the current time step \(t_{0}\). The scenario origin is fixed at the start position of the target vehicle. The input of the prediction module is given by \(\mathbf{X}=[\mathbf{\xi}_{1},\mathbf{\xi}_{2},\ldots,\mathbf{\xi}_{N_{V}}]^{\text{T}}\), where \(\mathbf{\xi}_{1}\) is the prediction vehicle's motion information. By processing the input relations, the model predicts the trajectory of the target vehicle \(\hat{\mathbf{\epsilon}}=[\hat{\mathbf{x}}^{\text{T}},\hat{\mathbf{y}}^{\text{T}}]^{\text{T}}\) within the prediction period from \(t_{0}\) to \(t_{\text{pred}}\). The total numbers of time steps covered by the observation and prediction periods are given by \(T_{\text{obs}}\) and \(T_{\text{pred}}\), respectively.
### _Model Structure_
The model is structured into three elements, namely the spectral feature generation, an encoder and a decoder. In Fig. 2, the complete architecture of the prediction model resulting from the sub-elements is shown. The proposed GFTNN represents a scenario as 2D graph. From this graph representation, spectral features are extracted by applying the multidimensional GFT and these features are used as input to a Multilayer Perceptron (MLP).
In a first step, a multidimensional graph structure is built through the Cartesian product graph considering the temporal and spatial interdependencies. The definition of graph \(\mathcal{G}_{\text{S}}\) is based on the spatial relation between the traffic participants of a scenario. Within the graph, each node represents an agent and the edges describe possible interdependencies due to cooperative interactions. The spatial graph construction is defined based on the interaction assumptions between the traffic participants, e. g.:
* _Spider graph_: All traffic participants primarily interact with the target vehicle, but not among each other,
* _Mesh graph_: All traffic participants interact with the target vehicle and also among each other, and
* _Scenario-dependent graph_: Dependent on pre-defined rules that define interactions, the graph can be build up accordingly, e. g., \(k\)-Nearest-Neighbor or \(\epsilon\)-Neighborhoods.
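The spider and mesh variants from the list above amount to two simple adjacency constructions. The function names below are illustrative, not part of the paper:

```python
import numpy as np

def spider_adjacency(n_vehicles, target=0):
    """Spider graph: every participant connects only to the target vehicle."""
    A = np.zeros((n_vehicles, n_vehicles))
    A[target, :] = 1.0
    A[:, target] = 1.0
    A[target, target] = 0.0       # no self-loop on the target node
    return A

def mesh_adjacency(n_vehicles):
    """Mesh graph: every participant connects to every other one."""
    return np.ones((n_vehicles, n_vehicles)) - np.eye(n_vehicles)

A_spider = spider_adjacency(9)    # N_V = 9 as in the later experiments
A_mesh = mesh_adjacency(9)
```

A scenario-dependent variant would instead fill the adjacency matrix from a rule such as \(k\)-nearest neighbors, evaluated per scenario.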
In the proposed GFTNN, a spider graph is applied and hence, the spatial graph definition is similar to graph \(\mathcal{G}_{2}\) in Fig. 1. Note, that there also might be interdependencies between neighboring agents. Despite not modelling these explicitly through the spider graph, these interdependencies are implicitly transformed into the spectral scenario since the prediction vehicle itself is connected to all participants. If the goal was a multi-modal prediction, it would be reasonable to explicitly model all connections by using a mesh graph. To represent the temporal dependencies of each vehicle, a line-graph is set up. Each vehicle has dependencies with itself as time evolves. Therefore, each step of the observed time sequence represents a node and each node is connected to its previous and following time step node. The resulting temporal graph \(\mathcal{G}_{\text{T}}\) is qualitatively similar to graph \(\mathcal{G}_{1}\) of
Fig. 1. Hence, the resulting Cartesian product graph is \(\mathcal{G}_{\mathrm{T}}\square\mathcal{G}_{\mathrm{S}}\cong\mathcal{G}_{1}\square\mathcal{G}_{2}\).
In the proposed GFTNN, the graph definition is kept constant and is not adapted depending on the scenario. The multidimensional GFT can be considered as the aggregation scheme of the graph structure. As stated initially, such graph aggregations are optimally chosen to be injective. Since the GFT is based on the eigendecomposition of the graph, which is a non-injective operation, the aggregation would also be non-injective if the graph were non-static. By keeping the graph definition constant, the GFT represents an injective aggregation function. In the appendix, a proof of the injective nature of the GFT based on a static graph is provided.
### _Spectral Feature Generation_
The spectral feature generation of the GFTNN is similar to a deterministic aggregation scheme within GNNs. Along with the graph definition, the Laplacian matrices \(\mathbf{L}_{\mathrm{T}}\in\mathbb{R}^{T_{\text{obs}}\times T_{\text{obs}}}\) and \(\mathbf{L}_{\mathrm{S}}\in\mathbb{R}^{N_{V}\times N_{V}}\) are computed, where \(T_{\text{obs}}\) is the dimension of the observation time steps and \(N_{V}\) the total number of traffic participants in a scenario. The eigenvalues \(\mathbf{\Lambda}^{(\mathrm{T})},\mathbf{\Lambda}^{(\mathrm{S})}\) and eigenvectors \(\mathbf{U}^{(\mathrm{T})},\mathbf{U}^{(\mathrm{S})}\) result from the individual eigenvalue decomposition (cf. (1)) of the Laplacian matrices \(\mathbf{L}_{\mathrm{T}}\) and \(\mathbf{L}_{\mathrm{S}}\). Based on this information, the scenario graph is transformed into the spectral domain by a multidimensional GFT. The advantage of the multidimensional GFT is that the transformation retains the dimensional context of the graph signal. This differentiation of the spatial and temporal domains within the graph spectrum would be lost by using only a one-dimensional GFT. Since the dynamics along these two dimensions imply different scenario conditions, it is meaningful to keep the multidimensional nature. Semantically, the spectral feature space indicates how smooth a vehicle's movement is with respect to the neighboring vehicles and along time. Without loss of generality, the signal function is extended such that each node of the graph holds a feature vector, yielding the signal tensor \(\mathbf{F}\in\mathbb{R}^{K\times N_{1}\times N_{2}}\). Features can be, for example, velocity, acceleration, etc.; in total there are \(K\) features. The extended formula describing the GFT, considering the extended feature information, is
\[\mathbf{\hat{F}}(k,\lambda_{l_{1}}^{(1)},\lambda_{l_{2}}^{(2)})=\sum_{i_{1}=0}^{N_{ 1}-1}\sum_{i_{2}=0}^{N_{2}-1}\mathbf{F}(k,i_{1},i_{2})\overline{u_{l_{1}}^{(1)}(i_ {1})u_{l_{2}}^{(2)}(i_{2})}, \tag{3}\]
where \(k=1,\ldots,K\) represents the feature dimensions. The spectral representation maps the temporal and spatial dynamics within a traffic scenario under consideration of the context interdependencies. The transformation results in a GFT tensor \(\mathbf{\hat{F}}\in\mathbb{R}^{K\times N_{1}\times N_{2}}\), where \(N_{1}\equiv N_{T_{\text{obs}}}\) is the total number of observed time steps and \(N_{2}\equiv N_{V}\) the number of considered traffic participants including the target vehicle. An example of a spectral scenario representation for one-dimensional feature information with \(K=1\) can be seen in Fig. 3(c). The spectral representation results from the GFT of the scenario illustrated in Fig. 3(a), where the velocity information for 5 different vehicles is given over a period of 30 time steps. The information of the resulting spectrum can be interpreted as classical frequencies: small (eigen-)values indicate low frequencies and vice versa. For this application, the high frequencies can be interpreted as noise; hence, they do not carry important scenario information. Large eigenvalues within the temporal spectrum can possibly be neglected for further computations, since the important scenario characteristic is represented through the small eigenvalues. By selecting the \(p\) most important eigenvalues \(\mathbf{\Lambda}_{[0:p-1]}^{(\mathrm{T})}\), irrelevant information for the trajectory prediction can be filtered out. The filtering process within the frequency domain results in a low-pass characteristic in the original time domain. This low-pass characteristic can be seen in Fig. 3(b), where the inverse GFT is applied by only using the \(p=10\) smallest eigenvalues \(\mathbf{\Lambda}_{[0:p-1]}^{(\mathrm{T})}\) of the spectrum illustrated in Fig. 3(c). Since no inverse GFT is applied within the GFTNN, the hyper-parameter \(p\) defines the temporal eigenvalue information used for further computations.
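The low-pass effect of keeping only the \(p\) smallest temporal eigenvalues can be reproduced with a small sketch. This assumes a path-graph temporal Laplacian as in the preliminaries; the function names and toy signal are illustrative:

```python
import numpy as np

def path_laplacian(n):
    """Laplacian of an unweighted path graph with n nodes (temporal line graph)."""
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

def lowpass_time(F, L_t, p):
    """Keep only the p smallest temporal graph frequencies of F, then transform back."""
    _, U = np.linalg.eigh(L_t)   # eigh sorts eigenvalues ascending: columns 0..p-1 are low frequencies
    F_hat = U.T @ F              # temporal GFT
    F_hat[p:, :] = 0.0           # discard high-frequency (large-eigenvalue) content
    return U @ F_hat             # inverse GFT

# toy signal: smooth velocity ramps plus noise, 30 time steps for 5 vehicles
rng = np.random.default_rng(1)
F = np.linspace(0.0, 1.0, 30)[:, None] + 0.01 * rng.standard_normal((30, 5))
F_smooth = lowpass_time(F, path_laplacian(30), p=10)
```

Zeroing spectral coefficients is an orthogonal projection, so the filtered signal never gains energy and filtering twice changes nothing.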
Hence, from the GFT representation \(\mathbf{\hat{F}}\), the subsection \(\mathbf{s}=[s_{1},s_{2},\ldots,s_{Z}]\), where \(Z=|K\times N_{T_{\text{obs}}[0:p-1]}\times N_{V}|\), is chosen to be forwarded as input to the encoder.
### _Encoder_
Fig. 2: Architecture of the GFTNN. The observed traffic scenario is represented as a 2D graph and mapped into a frequency scenario representation through the GFT. The GFT tensor operates as the input to a MLP that computes the parameters of the descriptive decoder in order to predict the future trajectory of the target vehicle.

The encoder is designed as a lightweight neural network which does not contain any recurrence or computationally expensive component. Its core element is the MLP block as illustrated in Fig. 4. After an element-wise multiplication of the GFT scenario representation
\[\mathbf{h}_{s}=\mathbf{s}\odot\mathbf{w}_{s}, \tag{4}\]
where \(\mathbf{w}_{s}\in\mathbb{R}^{Z}\) are learnable parameters, the result is passed on to the MLP block. This block is stacked and computationally repeated \(N_{x}\) times. Note that each of the \(K\) features is passed through a separate MLP block. The feature-selective representations \(\mathbf{h}_{l}^{k}\) are concatenated afterwards. Mathematically, the MLP block is represented as
\[\mathbf{h}_{\text{norm}}^{k}=\frac{\mathbf{h}_{s}^{k}-\mathbb{E}[\mathbf{h}_{s}^{k}]}{ \sqrt{\text{VAR}[\mathbf{h}_{s}^{k}]+\epsilon}},\quad\epsilon=10^{-5} \tag{5}\]
\[\mathbf{h}_{l}^{k}=\mathbf{W}_{l}^{k}\Phi(\mathbf{W}_{h}^{k}\mathbf{h}_{\text{norm}}^{k}+\mathbf{b} _{h}^{k})+\mathbf{b}_{l}^{k} \tag{6}\]
\[\mathbf{h}_{c}=concat(\mathbf{h}_{l}^{1},\dots,\mathbf{h}_{l}^{K}), \tag{7}\]
where \(\mathbf{h}_{s}^{k}\in\mathbb{R}^{|N_{T_{\text{obs}}[0:p-1]}\times N_{V}|}\) is the corresponding feature information from \(\mathbf{h}_{s}\) and \(\Phi\) is the non-linear activation through the GELU function [12]. Following [2], the layer normalization (5) computes the mean \(\mathbb{E}[\cdot]\) and variance \(\text{VAR}[\cdot]\) based on a single feature dimension and training case. The combined information vector \(\mathbf{h}_{c}\) is forwarded through a sigmoid function and another fully connected linear layer, that outputs the latent space representation \(\mathbf{h}_{z}\in\mathbb{R}^{3}\).
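Equations (5)–(7) amount to a layer normalization followed by two linear layers with a GELU in between, applied per feature. A minimal numpy sketch with assumed toy dimensions (hidden width 50 and latent size 3 follow the implementation details reported later; weights are random stand-ins, not trained parameters):

```python
import numpy as np

def gelu(x):
    """tanh approximation of the GELU activation."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def mlp_block(h_s_k, W_h, b_h, W_l, b_l, eps=1e-5):
    """Feature-selective MLP block: layer normalization (5) and the two linear layers (6)."""
    h_norm = (h_s_k - h_s_k.mean()) / np.sqrt(h_s_k.var() + eps)
    return W_l @ gelu(W_h @ h_norm + b_h) + b_l

# toy dimensions: per-feature spectral input of size 12, hidden width 50, output size 3
rng = np.random.default_rng(0)
Z_k, hidden, out = 12, 50, 3
h_l_k = mlp_block(rng.standard_normal(Z_k),
                  rng.standard_normal((hidden, Z_k)), rng.standard_normal(hidden),
                  rng.standard_normal((out, hidden)), rng.standard_normal(out))
```

In the full encoder, one such block runs per feature \(k\), the \(K\) outputs are concatenated as in (7), and a sigmoid plus final linear layer maps the result to \(\mathbf{h}_{z}\in\mathbb{R}^{3}\).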
### _Decoder_
The decoder of the GFTNN is implemented as the descriptive decoder proposed in [21]. Instead of using neural networks for creating the future trajectory, a model-based approach is used. The usage of the descriptive decoder setup introduces interpretability in the latent space \(\mathbf{h}_{z}\). In this descriptive decoder, the longitudinal (\(\hat{\mathbf{x}}\)) and lateral (\(\hat{\mathbf{y}}\)) trajectories are approximated through functions capable of representing the vehicle dynamics on highways. This simple vehicle dynamics model provides three parametrizable variables that define the trajectory characteristic. The parametrization is based on the computations of the encoder network: the variables of the latent space \(\mathbf{h}_{z}\) are used as variables in the model-based decoder, where
\[\hat{\mathbf{x}}=v_{0}\mathbf{t}+0.5h_{z_{1}}\mathbf{t}^{2}, \tag{8}\]
\[\hat{\mathbf{y}}=\frac{h_{z_{2}}}{1+e^{h_{z_{3}}\mathbf{\tau}}}-\frac{h_{z_{2}}}{1+e^ {h_{z_{3}}\mathbf{\tau}_{0}}}. \tag{9}\]
The variables \(\mathbf{t}=0,\dots,T_{\text{pred}}\) and \(\mathbf{\tau}=\mathbf{t}-0.5T_{\text{pred}}\) define the temporal information of the trajectory, and \(v_{0}\) is the latest observed longitudinal velocity of the prediction vehicle.
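A hedged sketch of the descriptive decoder (8)–(9): the constant-acceleration longitudinal model and the sigmoid lateral model. The latent values, initial velocity, and time sampling below are made-up illustrations, not values from the paper:

```python
import numpy as np

def decode_trajectory(h_z, v0, t_pred=5.0, steps=125):
    """Descriptive decoder: longitudinal model (8) and sigmoid lateral model (9)."""
    a, h2, h3 = h_z                           # the three latent variables from the encoder
    t = np.linspace(0.0, t_pred, steps + 1)   # prediction horizon (e.g. 5 s)
    tau = t - 0.5 * t_pred                    # shifted time variable of (9)
    x = v0 * t + 0.5 * a * t**2                                   # eq. (8)
    y = h2 / (1.0 + np.exp(h3 * tau)) - h2 / (1.0 + np.exp(h3 * tau[0]))  # eq. (9)
    return x, y

# hypothetical lane change: mild acceleration, ~3.5 m lateral offset
x, y = decode_trajectory(h_z=(0.5, 3.5, -2.0), v0=30.0)
```

The second term in (9) anchors the lateral trajectory at zero for \(t=0\), so the sigmoid describes the offset relative to the current lane position.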
## IV Experimental Evaluation
In order to evaluate the prediction performance of the GFTNN, it is compared to state-of-the-art approaches in the task of vehicle trajectory prediction. Thereby, prediction should only be based on traffic dynamics and no additional network to handle the infrastructural conditions is added. To yet allow a fair comparison with baseline models, datasets and models where infrastructural information is crucial for the prediction performance are excluded. The proposed GFTNN is hence benchmarked only upon highway datasets. For training and testing the publicly available datasets highD [14] and NGSIM [25] are used due to their extend of application-oriented scenarios. The datasets provide traffic observations of different German or US highways, respectively. The raw datasets, however, hold a naturally huge imbalance of scenarios since lane changes happen less often than lane keepings. When training on such a highly unbalanced data, a pretty high accuracy can be achieved by solely predicting the trajectory belonging to the majority class. Simultaneously, the model most probably fails to capture the minority class of lane change predictions. This problem was investigated in more depth by Ding _et al._[9]: As analyzed, about \(96.37\,\mathrm{\char 37}\) of the scenarios contained in NGSIM dataset are keep lane scenarios. While the performance of their introduced trajectory prediction model on the overall dataset is superior to other baseline models, it performs poorly on the lane change scenarios. In order to enable fair benchmarking, each dataset is pre-selected and pre-processed in a fashion, that the contained trajectory scenarios are balanced. Furthermore, the global description of the datasets is transformed
Fig. 4: Feature-selective MLP block architecture.
Fig. 3: The traffic scenario regarding the feature velocity in (a) is mapped into a frequency scenario representation (c) through a multidimensional GFT. By applying the inverse GFT that only considers the \(p\) smallest eigenvectors results in a low-pass filtered time signal as shown in (b).
into relative (target-vehicle-centered) descriptions. The resulting data format aligns with the problem definition explained beforehand. In total, \(N_{V}=9\) vehicles are considered. This definition allows the inclusion of the eight possible immediate positional neighbors and the target vehicle itself. When a scenario holds too many vehicles, the participants with the smallest Euclidean distance to the target vehicle are selected. When a scenario lacks participants, "ghost vehicles" are inserted. These ghost vehicles are attributed with the same motion features as the target vehicle itself, so that no artificially simulated dynamics are added. Each scenario consists of an observation period and a prediction period. The observation period is set to \(3\,\mathrm{s}\), so that \(T_{\text{obs}}\) is adaptively parametrized by the dataset frequency (\(fps_{\text{highD}}=25\,\mathrm{Hz}\), \(fps_{\text{NGSIM}}=10\,\mathrm{Hz}\)). The prediction horizon is set to \(t_{\text{pred}}=5\,\mathrm{s}\), and the number of prediction steps results from the respective dataset frequency. From the highD/NGSIM dataset, 9000/1100 highway scenarios are extracted that represent an equal distribution of the maneuvers _keep lane_, _lane change right_ and _lane change left_. The smaller number of scenarios for NGSIM results from the limited number of lane change scenarios in the dataset itself.
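To illustrate this preprocessing step, a minimal sketch of the target-centered scenario construction with ghost-vehicle padding; the feature layout `[x, y, vx, vy]`, the track container, and neighbor selection by distance at the last observed frame are assumptions, not details taken from the paper:

```python
import numpy as np

def build_scenario(tracks, target_id, n_vehicles=9):
    # tracks: dict vehicle_id -> (T_obs, 4) array with features [x, y, vx, vy].
    # Returns a target-centered scenario tensor of shape (n_vehicles, T_obs, 4).
    target = tracks[target_id]
    others = [t for vid, t in tracks.items() if vid != target_id]
    # Keep the neighbors closest to the target (Euclidean distance, last frame).
    others.sort(key=lambda t: np.linalg.norm(t[-1, :2] - target[-1, :2]))
    others = others[: n_vehicles - 1]
    # Pad missing participants with "ghost vehicles" that copy the target's
    # own motion, so no artificially simulated dynamics are added.
    while len(others) < n_vehicles - 1:
        others.append(target.copy())
    scene = np.stack([target] + others)        # (n_vehicles, T_obs, 4)
    scene[:, :, :2] -= target[-1, :2]          # relative, target-centered positions
    return scene
```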
### _Implementation Details_
With the scope of implementing a lightweight neural network, the MLP block count is set to \(N_{x}=1\). The encoder hence holds one element-wise multiplication layer and three MLP layers with the dimensions \(T_{\text{obs}[0:p-1]}\times N_{V}\)-50-3 for each feature \(K\). In the following, the different graph and hyperparameter settings of the GFTNN that are included in the evaluation are explained.
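The feature-selective MLP block itself (cf. Fig. 4) can be sketched as follows; the element-wise gating vector, the ReLU activation, and all parameter shapes are illustrative assumptions:

```python
import numpy as np

def encoder_forward(spectrum, params):
    # spectrum: (K, p, N_V) spectral coefficients per feature.
    # Per feature: element-wise multiplication (gating), then two dense
    # layers mapping (p * N_V) -> 50 -> 3.
    outs = []
    for k in range(spectrum.shape[0]):
        h = spectrum[k].ravel() * params["gate"][k]                  # gating layer
        h = np.maximum(params["W1"][k] @ h + params["b1"][k], 0.0)   # hidden, ReLU
        outs.append(params["W2"][k] @ h + params["b2"][k])           # 3 outputs
    return np.concatenate(outs)
```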
**gftnn**: The spatio-temporal graph is an undirected and unweighted Cartesian product graph. The node feature dimension is set to \(K=4\) so that all features of \(\mathbf{\xi}_{j}\) are considered in the GFT. The complete spectrum is forwarded to the encoder, hence \(p=T_{\text{obs}}\).
**gftnn-w**: The spatio-temporal graph is an undirected and _weighted_ Cartesian product graph. The weights are defined by the inverse Euclidean distance to the target vehicle. The node feature dimension is therefore reduced to \(K=2\), only considering the velocities of \(\mathbf{\xi}_{j}\) in the GFT. The complete spectrum is forwarded to the encoder, hence, \(p=T_{\text{obs}}\). Since the graph weighting is adapted according to the scenario, the aggregation scheme is non-injective.
**gftnn-rdcby5/15**: The spatio-temporal graph is an undirected and unweighted Cartesian product graph. The node feature dimension is set to \(K=4\) so that all features of \(\mathbf{\xi}_{j}\) are considered in the GFT. Only a reduced (rdc) fraction of the complete spectrum is forwarded to the encoder, i.e., the spectrum is truncated by a factor of 5 or 15, respectively, so that \(p<T_{\text{obs}}\).

All model variants are trained by minimizing the mean squared error between the predicted and ground-truth trajectories:

\[\mathcal{L}=\text{MSE}(\mathbf{x},\mathbf{\hat{x}})+\text{MSE}(\mathbf{y},\mathbf{\hat{y}}). \tag{10}\]
### _Evaluation_
The GFTNN is compared with state-of-the-art baselines including conventional recurrent models (CS-LSTM [7], MHA-LSTM(+f) [19]) and novel graph-based approaches (Two-Channel GNN [20], RA-GAT [9]). For evaluation the
Fig. 5: Average displacement error histogram on highD trajectory predictions. The GFTNN variants show an exponential decrease for error values greater than \(\sim 0.4\,\mathrm{m}\). In contrast, the baseline models indicate a more frequent occurrence of greater prediction errors.
\begin{table}
\begin{tabular}{c|c||c|c|c|c}
Architecture & \(n_{params}\) & \multicolumn{2}{c|}{highD} & \multicolumn{2}{c}{NGSIM} \\
 & & ADE[m]@5s & FDE[m]@5s & ADE[m]@5s & FDE[m]@5s \\ \hline \hline
**gftnn** & 567.253 & **1.04** & **2.41** & **3.08** & 7.31 \\ \hline
**gftnn-w** & 283.653 & 1.15 & 2.60 & 3.13 & **7.21** \\ \hline
**gftnn-rdcby5** & 113.653 & 1.10 & 2.58 & 3.56 & 8.75 \\ \hline
**gftnn-rdcby15** & 38.053 & 1.15 & 2.57 & 4.00 & 9.97 \\ \hline
VAE [21] & 253.736 & 4.21 & 4.38 & 3.45 & 7.36 \\ \hline
CS-LSTM & 556.362 & 3.27*1/2.88 & 5.71 & 8.65 & 16.93 \\ \hline
MHA-LSTM(+f) & 673.305 & 1.18*1/2.58 & 5.44 & 13.10 & 27.45 \\ \hline
Two-channel & 80.370 & 2.97 & 6.30 & 6.55 & 14.13 \\ \hline
RA-GAT & 91.578 & 3.46 & 6.93 & 4.23*2/7.05 & 15.49 \\ \hline \hline
\end{tabular}
\end{table} TABLE I: Trajectory prediction performance of different AI-models including state-of-the-art approaches. Where needed, the grid definitions of the baseline models were adopted, so that the same traffic scenario is provided to each model. Results marked with *1 are taken from [19] and *2 from [9], since their problem definition is equal to the one of this work.
widely used metrics _Average Displacement Error (ADE)_
\[\text{ADE}=\sqrt{\frac{\sum_{i=1}^{N}\frac{1}{t_{\text{pred}}}\left(\|\mathbf{x}^{(i)}-\hat{\mathbf{x}}^{(i)}\|^{2}+\|\mathbf{y}^{(i)}-\hat{\mathbf{y}}^{(i)}\|^{2}\right)}{N}} \tag{11}\]
and _Final Displacement Error (FDE)_
\[\text{FDE}=\frac{\sum_{i=1}^{N}\sqrt{(x_{t_{\text{pred}}}^{(i)}-\hat{x}_{t_{\text{pred}}}^{(i)})^{2}+(y_{t_{\text{pred}}}^{(i)}-\hat{y}_{t_{\text{pred}}}^{(i)})^{2}}}{N} \tag{12}\]
are applied. As shown in Table I, the GFTNN achieves the best results on both metrics and datasets. More specifically, on the highD dataset it improves the ADE by \(\sim 13\,\mathrm{\char 37}\) with respect to the literature results of the best performing baseline model. Evaluation on the balanced dataset used in this work emphasizes the prediction performance of the GFTNN: the error metrics ADE and FDE indicate a performance improvement of \(\sim 60\,\mathrm{\char 37}\) and \(\sim 45\,\mathrm{\char 37}\), respectively, with respect to the best performing baseline model. On the NGSIM dataset, the performance of all models decreases. Yet, the prediction performance of the GFTNN without parameter reduction remains superior. On this dataset, the best performing baseline model is the LSTM-based VAE [21], which the vanilla gftnn outperforms by \(\sim 12\,\mathrm{\char 37}\). Since average values do not depict the error variance, the prediction performance is additionally evaluated through the error distribution. Fig. 5 illustrates the distribution of the ADEs of the best performing models on highD. Both GFTNN implementations yield distributions that peak at an ADE of \(0.4\,\mathrm{m}\) and decrease exponentially as the prediction error increases. The compared models Two-Channel and MHA-LSTM(+f) show suboptimal error distributions that indicate a generally worse prediction accuracy; the distribution peaks of both models (\(1.7\,\mathrm{m}/1.2\,\mathrm{m}\)) indicate a greater universal prediction error. Evaluating the different hyperparameter settings of the GFTNN shows that the best performing model is the gftnn setup. For the highD dataset, however, performance does not decrease drastically even when the model complexity is reduced as in gftnn-rdcby15, which corresponds to a parameter reduction to \(7\,\mathrm{\char 37}\) of the original number of parameters.
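The two displacement metrics above can be computed directly from stacked trajectories; a minimal numpy sketch, where the `(N, t_pred, 2)` array layout is an assumption:

```python
import numpy as np

def ade(xy_true, xy_pred):
    # xy_*: (N, t_pred, 2) ground-truth / predicted positions.
    sq = np.sum((xy_true - xy_pred) ** 2, axis=-1)  # (N, t_pred) squared errors
    return float(np.sqrt(sq.mean()))                # RMSE over samples and steps

def fde(xy_true, xy_pred):
    # Mean Euclidean displacement at the final prediction step.
    final_sq = np.sum((xy_true[:, -1] - xy_pred[:, -1]) ** 2, axis=-1)
    return float(np.sqrt(final_sq).mean())
```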
Applied to the NGSIM dataset, the reduction of parameters affects the prediction performance substantially. Note that the sampling frequency of the NGSIM dataset (\(10\,\mathrm{Hz}\)) is lower than that of highD (\(25\,\mathrm{Hz}\)). Consequently, the embedded temporal spectra of the NGSIM dataset cover a reduced frequency range. Neglecting the high frequencies of the NGSIM spectra therefore removes dynamics information that is essential for motion prediction. This characteristic indicates that when the data frequency is low, neglecting high frequencies leads to inaccurate predictions. Furthermore, prediction performance decreases when using a non-injective neighborhood aggregation scheme. The gftnn-w adapts its spectral transformation for each scenario by weighting the graph edges with distance information. Even though this approach is provided the same information as the non-weighted GFTNNs, the prediction accuracy generally decreases.
### _Interpretation of Prediction Performance_
The GFT is a global aggregation scheme that comprises the complete scenario graph instead of several local \(k\)-hop neighborhoods. This global characteristic is important for understanding the cooperative context of the scenario and ensures that the sequence is considered without recurrent layers. The transformation retains the multidimensional context of the graph signal. Since the Fourier transformation is an integral transform, the resulting spectrum holds the information of the complete time series. The spectral feature space indicates how smooth the prediction vehicle's movement is within the cooperative spatio-temporal context. By explicitly providing the network with relational information through the graph structure, substantial prior knowledge is injected that supports the learning process. The resulting relation-based spectral scenario representation is a beneficial mapping that facilitates capturing the scenario context and extracting the key information for trajectory prediction.
## V Conclusion
In this work, the GFTNN is proposed, which combines the advantages of graph structures with the expressive power of conventional FNNs. By representing a traffic scenario through a spatio-temporal graph and applying a multidimensional GFT, an efficient time-space-relation representation of the scenario is obtained. Even though the overall GFTNN architecture is computationally simple, it outperforms the baseline models in the task of vehicle trajectory prediction on highways. Quantitatively, the vanilla GFTNN setup surpasses the best performing state-of-the-art method by \(\sim 13\,\mathrm{\char 37}\) on the highD dataset and \(\sim 12\,\mathrm{\char 37}\) on NGSIM. The architecture also enables a lightweight GFTNN version with greatly reduced complexity and parameter count that still achieves performance competitive with state-of-the-art networks. In future work, the intuition behind the graph properties and the usage of non-static graphs will be addressed. Furthermore, the GFTNN will be extended to include infrastructural scenario information and compared to state-of-the-art models that consider infrastructure, such as SpecTGNN [4].
## Appendix A Appendix: GFT on static graphs
The 2D GFT is represented as a chain of matrix multiplications. By using \(N_{1}\times N_{2}\) matrices \(\mathbf{F}_{i_{1},i_{2}}=f(i_{1},i_{2})\) and \(\hat{\mathbf{F}}_{k_{1},k_{2}}=\hat{f}(\lambda_{k_{1}}^{(1)},\lambda_{k_{2}}^{(2)})\), the 2D GFT applied to the signal \(f\) is expressed as
\[\mathcal{F}(\mathbf{F})=\hat{\mathbf{F}}=\mathbf{U}_{1}^{*}\mathbf{F}\overline{\mathbf{U}}_{2} \tag{13}\]
where \(\mathbf{U}_{n}\) is an \(N_{n}\times N_{n}\) unitary matrix with (\(i,k\))-th element \(u_{k}^{(n)}(i)\) for \(n=1,2\); \(\mathbf{U}_{n}^{*}\) denotes its Hermitian transpose and \(\overline{\mathbf{U}}_{n}\) its element-wise complex conjugate. The eigenvector matrices \(\mathbf{U}_{n}\in\mathbb{R}^{N_{n}\times N_{n}}\) result from the constant Laplacian matrices, which are real, symmetric and positive-semidefinite.
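Equation (13) can be verified numerically; a small sketch using path-graph Laplacians, where the particular graph choice is an illustrative assumption:

```python
import numpy as np

def path_laplacian(n):
    # Combinatorial Laplacian L = D - A of an unweighted path graph.
    A = np.zeros((n, n))
    i = np.arange(n - 1)
    A[i, i + 1] = A[i + 1, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def gft2(F, L1, L2):
    # 2D GFT on a Cartesian product graph (eq. (13)): F_hat = U1^* F conj(U2).
    # For real symmetric Laplacians the eigenvector matrices are real orthogonal.
    _, U1 = np.linalg.eigh(L1)
    _, U2 = np.linalg.eigh(L2)
    return U1.conj().T @ F @ U2.conj(), U1, U2

def igft2(F_hat, U1, U2):
    # Inverse transform in the real orthogonal case: F = U1 F_hat U2^T.
    return U1 @ F_hat @ U2.T
```

The round trip `igft2(gft2(F, ...))` recovers `F` exactly, which also reflects the injectivity argument in the proof below.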
_Proof of injective characteristic_
A function is injective if \(\forall a,b\in\mathcal{X},\mathcal{F}(a)=\mathcal{F}(b)\to a=b\). Assume \(\mathcal{F}(\mathbf{F}_{A})=\mathbf{U}_{1}^{*}\mathbf{F}_{A}\overline{\mathbf{U}}_{2}\) and \(\mathcal{F}(\mathbf{F}_{B})=\mathbf{U}_{1}^{*}\mathbf{F}_{B}\overline{\mathbf{U}}_{2}\) where \(\mathcal{F}:\mathbb{R}^{N_{1}\times N_{2}}\to\mathbb{R}^{N_{1}\times N_{2}}\).
\[\mathcal{F}(\mathbf{F}_{A}) =\mathcal{F}(\mathbf{F}_{B}) \tag{14}\] \[\mathbf{U}_{1}^{*}\mathbf{F}_{A}\overline{\mathbf{U}}_{2} =\mathbf{U}_{1}^{*}\mathbf{F}_{B}\overline{\mathbf{U}}_{2}\] (15) \[\mathbf{F}_{A} =\mathbf{F}_{B} \tag{16}\]

The last step follows by left-multiplying (15) with \(\mathbf{U}_{1}\) and right-multiplying with \(\overline{\mathbf{U}}_{2}^{*}\); this is valid because the unitary matrices \(\mathbf{U}_{1}\) and \(\overline{\mathbf{U}}_{2}\) are invertible. Hence, \(\mathcal{F}\) is injective.
|
2310.16975 | Efficient Neural Network Approaches for Conditional Optimal Transport
with Applications in Bayesian Inference | We present two neural network approaches that approximate the solutions of
static and dynamic conditional optimal transport (COT) problems. Both
approaches enable conditional sampling and conditional density estimation,
which are core tasks in Bayesian inference, particularly in the
simulation-based ("likelihood-free") setting. Our methods represent the target
conditional distributions as transformations of a tractable reference
distribution and, therefore, fall into the framework of measure transport.
Although many measure transport approaches model the transformation as COT
maps, obtaining the map is computationally challenging, even in moderate
dimensions. To improve scalability, our numerical algorithms use neural
networks to parameterize COT maps and further exploit the structure of the COT
problem. Our static approach approximates the map as the gradient of a
partially input-convex neural network. It uses a novel numerical implementation
to increase computational efficiency compared to state-of-the-art alternatives.
Our dynamic approach approximates the conditional optimal transport via the
flow map of a regularized neural ODE; compared to the static approach, it is
slower to train but offers more modeling choices and can lead to faster
sampling. We demonstrate both algorithms numerically, comparing them with
competing state-of-the-art approaches, using benchmark datasets and
simulation-based Bayesian inverse problems. | Zheyu Oliver Wang, Ricardo Baptista, Youssef Marzouk, Lars Ruthotto, Deepanshu Verma | 2023-10-25T20:20:09Z | http://arxiv.org/abs/2310.16975v2 | Efficient neural network approaches for conditional optimal transport with applications in Bayesian inference+
###### Abstract
We present two neural network approaches that approximate the solutions of static and dynamic conditional optimal transport (COT) problems, respectively. Both approaches enable sampling and density estimation of conditional probability distributions, which are core tasks in Bayesian inference. Our methods represent the target conditional distributions as transformations of a tractable reference distribution and, therefore, fall into the framework of measure transport. COT maps are a canonical choice within this framework, with desirable properties such as uniqueness and monotonicity. However, the associated COT problems are computationally challenging, even in moderate dimensions. To improve the scalability, our numerical algorithms leverage neural networks to parameterize COT maps. Our methods exploit the structure of the static and dynamic formulations of the COT problem. PCP-Map models conditional transport maps as the gradient of a partially input convex neural network (PICNN) and uses a novel numerical implementation to increase computational efficiency compared to state-of-the-art alternatives. COT-Flow models conditional transports via the flow of a regularized neural ODE; it is slower to train but offers faster sampling. We demonstrate their effectiveness and efficiency by comparing them with state-of-the-art approaches using benchmark datasets and Bayesian inverse problems.
AMS subject classifications: 62F15, 62M45
## 1 Introduction
Bayesian inference models the relationship between the parameter \(\mathbf{x}\in\mathbb{R}^{n}\) and noisy indirect measurements \(\mathbf{y}\in\mathbb{R}^{m}\) via the posterior distribution, whose density we denote by \(\pi(\mathbf{x}|\mathbf{y})\). From Bayes' rule, the posterior density is proportional to the product of the likelihood \(\pi(\mathbf{y}|\mathbf{x})\) and the prior \(\pi(\mathbf{x})\). A key goal in many Bayesian inference applications is to characterize the posterior by obtaining independent, identically distributed (i.i.d.) samples from this distribution. When the conditional distribution does not belong to a family of tractable distributions (e.g., Gaussian distributions), computational sampling algorithms such as Markov chain Monte Carlo (MCMC) schemes or variational inference are required.
Most effective sampling techniques are limited to the conventional Bayesian setting as they require a tractable likelihood model (often given by a forward operator that maps \(\mathbf{x}\) to \(\mathbf{y}\) and a noise model) and prior. Even in the conventional setting, producing thousands of approximately i.i.d. samples from the conditional distribution often requires millions of forward operator evaluations. This can be prohibitive for complex forward operators, such as those in science and engineering applications based on stochastic differential equations (SDE) or partial differential equations
(PDE). Moreover, MCMC schemes are sequential and difficult to parallelize.
Beyond the conventional Bayesian setting, one is usually limited to likelihood-free methods; see, e.g., [12]. Among these methods, measure transport provides a general framework to characterize complex posteriors using samples from the joint distribution. The key idea is to construct transport maps that push forward a simple reference (e.g., a standard Gaussian) toward a complex target distribution; see [3] for discussions and reviews. Once obtained, these transport maps provide an immediate way to generate i.i.d. samples from the target distribution by evaluating the map at samples from a reference distribution.
Under mild assumptions, there exist infinitely many transport maps that fulfill the push-forward constraint but have drastically different theoretical properties. One way to establish uniqueness is to identify the transport map that satisfies the push-forward constraint and incurs minimal transport cost. Adding transport costs renders the measure transport problem for the conditional distribution into a conditional optimal transport (COT) problem [52].
Solving COT problems is computationally challenging, especially when the number of parameters, \(n\), or the number of measurements, \(m\), are large or infinite. The curse of dimensionality affects methods that use grids or polynomials to approximate transport maps and renders most of them impractical when \(n+m\) is larger than ten. Due to their function approximation properties, neural networks are a natural candidate to parameterize COT maps. This choice bridges the measure transport framework and deep generative modeling [38, 27]. While showing promising results, many recent neural network approaches for COT such as [32, 9, 3] rely on adversarial training, which requires solving a challenging stochastic saddle point problem.
This paper contributes two neural network approaches for COT that can be trained by maximizing the likelihood of the target samples, and we demonstrate their use in Bayesian inference. Our approaches exploit the known structure of the COT map in different ways. Our first approach parameterizes the map as the gradient of a PICNN [2]; we name it _partially convex potential map_ (PCP-Map). By construction, this yields a monotone map for any choice of network weights. When trained to sufficient accuracy, we obtain the optimal transport map with respect to an expected \(L_{2}\) cost, known as the conditional Brenier map [10]. Our second approach builds upon the relaxed dynamical formulation of the \(L_{2}\) optimal transport problem that instead seeks a map defined by the flow map of an ODE; we name it _conditional optimal transport flow_ (COT-Flow). Here, we parameterize the velocity of the map as the gradient of a scalar potential and obtain a neural ODE. To ensure that the network achieves sufficient accuracy, we monitor and penalize violations of the associated optimality conditions, which are given by a Hamilton Jacobi Bellman (HJB) PDE.
A series of numerical experiments are conducted to evaluate PCP-Map and COT-Flow comprehensively. The first experiment demonstrates our approaches' robustness to hyperparameters and superior numerical accuracy for density estimation compared to results from other approaches in [4] using six UCI tabular datasets [29]. The second experiment demonstrates our approaches' effectiveness and efficiency by comparing them to a provably convergent approximate Bayesian computation approach on the task of conditional sampling using a Bayesian inference problem involving the stochastic Lotka-Volterra equations, which give rise to an intractable likelihood. The third experiment compares our approaches' competitiveness against the flow-based neural posterior estimation (NPE) approach studied in [40] on a real-world high-dimensional Bayesian inference problem involving the 1D shallow water equations. The final experiment demonstrates PCP-Map's improvements in computational stability and efficiency over an amortized version of the approach from [20]. Through these experiments, we conclude that the proposed approaches characterize conditional distributions with improved numerical accuracy and efficiency. Moreover, they improve
upon recent computational methods for the numerical solution of the COT problem.
Like most neural network approaches, the effectiveness of our methods relies on an adequate choice of network architecture and an accurate solution to a stochastic non-convex optimization problem. As there is little theoretical guidance for choosing the network architecture and optimization hyper-parameters, our numerical experiments show the effectiveness and robustness of a simple random grid search. The choice of the optimization algorithm is a modular component in our approach; we use the common Adam method [24] for simplicity of implementation.
The remainder of the paper is organized as follows: Section 2 contains the mathematical formulation of the conditional sampling problem and reviews related learning approaches. Section 3 presents our partially convex potential map (PCP-Map). Section 4 presents our conditional optimal transport flow (COT-Flow). Section 5 describes our effort to achieve reproducible results and procedures for identifying effective hyperparameters for neural network training. Section 6 contains a detailed numerical evaluation of both approaches using six open-source data sets and experiments motivated by Bayesian inference for the stochastic Lotka-Volterra equation and the 1D shallow water equation. Section 7 features a detailed discussion of our results and highlights the advantages and limitations of the presented approaches.
## 2 Background and Related Work
Given i.i.d. samples \(\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{N}\) in \(\mathbb{R}^{n+m}\) from the joint distribution with density \(\pi(\mathbf{x},\mathbf{y})\), our goal is to learn the conditional distribution \(\pi(\mathbf{x}|\mathbf{y})\) for any conditioning variables \(\mathbf{y}\). In the context of Bayesian inference, these samples can be obtained by sampling parameters from the prior distribution \(\pi(\mathbf{x})\) and observations from the likelihood model \(\pi(\mathbf{y}|\mathbf{x}_{i})\). Under the measure transport framework, we aim to represent the posterior distribution of parameters given any observation realization as a transformation of a tractable reference distribution. Since our proposed approaches tackle the curse of dimensionality by parameterizing transport maps with neural networks, we discuss related deep generative modeling approaches from the machine learning community that inform our methods.
Deep generative modeling seeks to characterize complex target distributions by enabling sampling. We refer to [42] for a general introduction to this vibrant research area, including three main classes of approaches: normalizing flows, variational autoencoders (VAEs), and generative adversarial networks (GANs). Very recently, stochastic generative procedures based on score-based diffusion models have also emerged as a new and promising framework that has achieved impressive results for image generation [46]. For the task of sampling conditional distributions, examples of the corresponding generative models are conditional normalizing flows [31], conditional VAEs [45], conditional GANs [34, 30], and conditional diffusion models [5, 49]. Lastly, neural network approaches that directly perform conditional density estimation have also been studied in [1, 41, 6, 44].
In the following, we focus on normalizing flows to enable maximum likelihood training. We are particularly interested in the family of diffeomorphic transport maps, \(g\colon\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}^{n}\), for which we can estimate the conditional probability using the change of variables formula
\[\pi(\mathbf{x}|\mathbf{y})\approx g(\cdot,\mathbf{y})_{\sharp}\rho_{z}( \mathbf{x})\coloneqq\rho_{z}\left(g^{-1}(\mathbf{x},\mathbf{y})\right)\cdot| \det\nabla_{\mathbf{x}}g^{-1}(\mathbf{x},\mathbf{y})|. \tag{1}\]
Here, \(g(\cdot,\mathbf{y})_{\sharp}\rho_{z}\) denotes the push-forward measure of some tractable reference measure \(\rho_{z}\), and the inverse and gradient of \(g\) are computed with respect to \(\mathbf{x}\). Once trained, the map \(g\) can produce more samples from the conditional distribution using samples from \(\rho_{z}\), which is a primary goal in applications such as uncertainty quantification. While there is some flexibility in choosing the reference distribution, we set \(\rho_{z}\) as the \(n\)-dimensional standard Gaussian for simplicity.
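To make (1) concrete, consider a toy diffeomorphic map that is affine in \(\mathbf{x}\), namely \(g(\mathbf{z},\mathbf{y})=\mu(\mathbf{y})+\sigma\mathbf{z}\) with \(\sigma>0\); the mean function \(\mu(\mathbf{y})=\tanh(\mathbf{y})\) and the assumption \(m=n\) are illustrative choices, not part of the methods developed here:

```python
import numpy as np

def log_cond_density(x, y, sigma=0.5):
    # Toy conditional generator g(z, y) = mu(y) + sigma * z, so that
    # g^{-1}(x, y) = (x - mu(y)) / sigma and det grad_x g^{-1} = sigma^{-n}.
    mu = np.tanh(y)                    # illustrative mean function (requires m = n)
    z = (x - mu) / sigma               # g^{-1}(x, y)
    n = x.size
    log_ref = -0.5 * float(z @ z) - 0.5 * n * np.log(2 * np.pi)  # log rho_z(z)
    log_det = -n * np.log(sigma)       # log |det grad_x g^{-1}(x, y)|
    return log_ref + log_det           # log pi(x | y) via change of variables
```

By construction this recovers the log-density of \(\mathcal{N}(\mu(\mathbf{y}),\sigma^{2}I_{n})\), which provides a convenient correctness check for the change-of-variables bookkeeping.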
In some instances, it is possible to obtain \(g\) from another transport map \(h\colon\mathbb{R}^{n+m}\to\mathbb{R}^{n+m}\) that pushes forward the \((n+m)\)-dimensional standard Gaussian \(\mathcal{N}(0,I_{n+m})\) to the joint distribution
\(\pi(\mathbf{x},\mathbf{y})\). One example is when \(h\) has a block-triangular structure, i.e., it can be written as a function of the form
\[h(\mathbf{z}_{x},\mathbf{z}_{y})=\begin{bmatrix}h_{x}(\mathbf{z}_{x},\mathbf{y} )\\ h_{y}(\mathbf{z}_{y})\end{bmatrix} \tag{2}\]
where \(\mathbf{z}_{x}\sim\mathcal{N}(0,I_{n})\) and \(\mathbf{z}_{y}\sim\mathcal{N}(0,I_{m})\). The block-triangular structure is critical to obtain a conditional generator \(g=h_{x}\), and generally requires careful network design and training; see, e.g., the strictly triangular approaches [33, 4, 47] and their statistical complexity analysis in [22], and the block triangular approaches in [3, 13, 48]. A more flexible alternative is when \(h\) is a score-based diffusion model [5]. Following the procedure in [46, Appendix I.4], \(h\) can be used to obtain various conditional distributions as long as there is a tractable and differentiable log-likelihood function; see, e.g., applications to image generation and time series imputation in [5, 49]. Related to score-based approaches is conditional flow matching [51]. In this paper, we bypass learning \(h\) and train \(g\) directly using samples from the joint distribution. If \(h\) is indeed desired, we can obtain \(h_{y}\) by training another (unconditional) transport map that pushes forward \(\mathcal{N}(0,I_{m})\) to \(\pi(\mathbf{y})\) and use \(g\) as \(h_{x}\). We will demonstrate this further in our numerical experiment in Section 6.
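The block-triangular construction in (2) amounts to a simple composition, reading \(\mathbf{y}=h_{y}(\mathbf{z}_{y})\) as the conditioning input to the first block; a minimal sketch (the function names are placeholders):

```python
def make_block_triangular(h_x, h_y):
    # Compose a block-triangular map (cf. eq. (2)): the second block transports
    # z_y to y = h_y(z_y), which then conditions the first block h_x(z_x, y).
    # The conditional generator is recovered as g = h_x.
    def h(z_x, z_y):
        y = h_y(z_y)
        return h_x(z_x, y), y
    return h
```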
In addition to conditional sampling, the diffeomorphic transport map \(g\) allows us to estimate the conditional density of a sample \(\mathbf{x}\) through (1). Even when density estimation is not required, this can simplify the training. In this work, we seek a map that maximizes the likelihood of the target samples from \(\pi(\mathbf{x},\mathbf{y})\); see Section 3 and Section 4. It is worth emphasizing that achieving both efficient training and sampling puts some constraints on the form of map \(g\) and has led to many different variants. Some of the most commonly used normalizing flows include Glow [25], Conditional Glow [31], FFJORD [19], and RNODE [17]. To organize the relevant works on diffeomorphic measure transport, we provide a schematic overview grouped by the modeling assumptions on \(g\) in Figure 1.
Since there are usually infinitely many diffeomorphisms that satisfy (1), imposing additional properties such as monotonicity can effectively limit the search. Optimal transport theory can
Figure 1: Schematic overview of related measure transport approaches grouped by the constraints imposed on the transport map. The red dot represents optimal transport maps, and the red dashed circles represent the notion of approximately OT. When categorizing the approaches, we only consider the transport map constraints enforced in their associated literature.
further motivate the restriction to monotone transport maps. Conditional optimal transport maps ensure equality in (1) while minimizing a specific _transport cost_. These maps often have additional structural properties that can be exploited during learning, leading to several promising works. For example, when using the \(L_{2}\) transport costs, a unique solution to the OT problem exists and is indeed a monotone operator; see [10] for theoretical results on the conditional OT problem. As observed in related approaches for (unconditional) deep generative modeling [36, 56, 58, 17], adding a transport cost penalty to the maximum likelihood training objective does not limit the ability to match the distributions.
Measure transport approaches that introduce monotonicity through optimal transport theory include the CondOT approach [9], normalizing flows [20, 36, 58], and a GAN approach [3]. A more general framework for constructing OT-regularized transport maps is proposed by [56]. Many other measure transport approaches do not consider transport cost but enforce monotonicity by incorporating specific structures in their map parameterizations; see, for example, [14, 26, 21, 39, 13, 15]. More generally, UMNN [53] provides a framework for constructing monotone neural networks. Other methods like the adaptive transport map (ATM) algorithm [4] enforce monotonicity through rectifier operators.
There are some close relatives among the family of conditional optimal transport maps to our proposed approaches. The PCP-Map approach is most similar to the CP-Flow approach in [20]. CP-Flow is primarily designed to approximately solve the joint optimal transport problem. Its variant, however, allows for variational inference in the setting of a variational autoencoder (VAE). Inside the GitHub repository associated with [20], a script enables CP-Flow to approximately solve the COT problem over a 1D Gaussian mixture target distribution in a similar fashion to PCP-Map. Therefore, we compare our approach to the amortized CP-Flow using numerical experiments and find that the amortized CP-Flow either fails to solve the problems or is considerably slower than PCP-Map; see subsection 6.4. Our implementation of PCP-Map differs in the following aspects: a simplified transport map architecture, new automatic differentiation tools that avoid the need for stochastic log-determinant estimators, and a projected gradient method to enforce non-negativity constraints on parts of the weights.
Another close relative is the CondOT approach in [9]. The main distinction between CondOT and PCP-Map is the definition of the learning problem. The former solves the \(W_{2}\) COT problem in an adversarial manner, which leads to a challenging stochastic saddle point problem. Our approach minimizes the \(L_{2}\) transport costs, which results in a minimization problem. The COT-Flow approach extends the dynamic OT formulation in [36] to enable conditional sampling and density estimation.
## 3 Partially Convex Potential Maps (PCP-Map) for Conditional OT
Our first approach, PCP-Map, approximately solves the static COT problem through maximum likelihood training over a set of partially monotone maps. Motivated by the structure of transport maps that are optimal with respect to the quadratic cost function, we parameterize these maps as the gradients of scalar-valued neural networks that are strictly convex in \(\mathbf{x}\) for any choice of the weights.
_Training problem._ Given samples from the joint distribution, our algorithm seeks a conditional generator \(g\) by solving the maximum likelihood problem
\[\min_{g\in\mathcal{M}}J_{\mathrm{NLL}}[g]. \tag{2}\]
The objective \(J_{\rm NLL}\) is the expected negative log-likelihood functional
\[J_{\rm NLL}[g]=\mathbb{E}_{\pi({\bf x},{\bf y})}\left[\frac{1}{2}\left\|g^{-1}({ \bf x},{\bf y})\right\|^{2}-\log\det\nabla_{\bf x}g^{-1}({\bf x},{\bf y})\right], \tag{10}\]
which agrees up to an additive constant with the negative logarithm of (1), and \({\mathcal{M}}\) is the set of maps that are monotonically increasing in their first argument, i.e.,
\[{\mathcal{M}}=\left\{g\ :\ (g({\bf v},{\bf y})-g({\bf w},{\bf y}))^{\top}({\bf v }-{\bf w})\geq 0,\ \forall\ {\bf v},{\bf w}\in{\mathbb{R}}^{n},{\bf y}\in{\mathbb{R}}^{m}\right\}. \tag{11}\]
We note that \(J_{\rm NLL}\) also agrees (up to an additive constant) with the Kullback-Leibler (KL) divergence from the push forward of \(\rho_{z}\) through the generator \(g\) to the conditional distribution in expectation over the conditioning variable.
Since the learning problem in (10) only involves the inverse generator \(g^{-1}\), we seek this map directly in the same space of monotone functions \({\mathcal{M}}\) in (11). This avoids inverting the generator during training. The drawback is that sampling from the target conditional requires inverting the learned map. In this work, we directly learn the map \(g^{-1}(\cdot,{\bf y})\) that pushes forward the conditional distribution \(\pi({\bf x}|{\bf y})\) to the reference distribution for each \({\bf y}\).
Limiting the search to monotone maps is motivated by the celebrated Brenier's theorem, which ensures there exists a unique monotone map \(g^{-1}\) such that \(g^{-1}(\cdot,{\bf y})_{\sharp}\pi({\bf x}|{\bf y})=g(\cdot,{\bf y})^{\sharp} \pi({\bf x}|{\bf y})=\rho_{z}\) among all maps written as the gradient of a convex potential; see [8] for the original result and [10] for conditional transport maps. Theorem 2.3 in [10] also shows that \(g^{-1}\) is _optimal_ in the sense that among all maps that match the distributions, it minimizes the integrated \(L_{2}\) transport costs
\[P_{\rm OT}[g]=\mathbb{E}_{\pi({\bf x},{\bf y})}\left[\left\|{\bf x}-g^{-1}({ \bf x},{\bf y})\right\|^{2}\right]. \tag{12}\]
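To make the training objective concrete, the following numpy sketch Monte-Carlo-estimates \(J_{\rm NLL}\) for a hand-picked monotone map on a toy 1D Gaussian problem; the joint distribution, the map \(g^{-1}(x,y)=(x-y)/s\), and all constants are illustrative assumptions rather than part of our method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint: y ~ N(0, 1) and x | y ~ N(y, s^2), so the exact monotone
# inverse generator is g^{-1}(x, y) = (x - y) / s  (illustrative choice).
s = 2.0
n_samples = 200_000
y = rng.standard_normal(n_samples)
x = y + s * rng.standard_normal(n_samples)

z = (x - y) / s                    # g^{-1}(x, y); maps x | y to N(0, 1)
log_det = -np.log(s)               # d/dx g^{-1} = 1/s for every sample

# Monte-Carlo estimate of J_NLL[g] = E[ 0.5 ||g^{-1}(x, y)||^2 - log det ]
J_nll = np.mean(0.5 * z**2) - log_det
```

For this map the analytic value is \(\tfrac{1}{2}+\log s\approx 1.19\), and the sample estimate matches it up to Monte-Carlo error.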
#### Neural Network Representation
In this work, we leverage the structural form of the conditional Brenier map by using partially input convex neural networks (PICNNs) [2] to express the inverse generator directly. In particular, we parameterize \(g^{-1}\) as the gradient of a PICNN \(\tilde{G}_{\theta}:{\mathbb{R}}^{n}\times{\mathbb{R}}^{m}\to{\mathbb{R}}\) that depends on weights \(\theta\). A PICNN is a feed-forward neural network that is specifically designed to ensure convexity in some of its inputs; see the original work that introduced this neural network architecture in [2] and its use for generative modeling in [20]. To the best of our knowledge, investigating whether PICNNs are universal approximators of partially input convex functions is still an open issue (see also [23, Section IV]) that is beyond the scope of our paper; a perhaps related result for fully input convex neural networks is given in [20, Appendix C].
To ensure the monotonicity of \(g^{-1}\), we construct \(\tilde{G}\) to be strictly convex as a linear combination of a PICNN and a positive definite quadratic term. That is,
\[\tilde{G}_{\theta}({\bf x},{\bf y})=\psi(\gamma_{1})\cdot{\bf w}_{K}+(\sigma_ {\rm ReLU}(\gamma_{2})+\psi(\gamma_{3}))\cdot\frac{1}{2}\|{\bf x}\|^{2}, \tag{13}\]
where \(\gamma_{1},\gamma_{2}\), and \(\gamma_{3}\) are scalar parameters that are re-parameterized via the soft-plus function \(\psi(x)=\log(1+\exp(x))\) and the ReLU function \(\sigma_{\rm ReLU}(x)=\max\{0,x\}\) to ensure strict convexity of \(\tilde{G}_{\theta}\). Here, \({\bf w}_{K}\) is the output of a \(K\)-layer PICNN and is computed through forward propagation through the layers \(k=0,\ldots,K-1\) starting with the inputs \({\bf v}_{0}={\bf y}\) and \({\bf w}_{0}={\bf x}\)
\[{\bf v}_{k+1}=\sigma^{(v)}\ \left({\bf L}_{k}^{(v)}{\bf v}_{k}\ +\ { \bf b}_{k}^{(v)}\right), \tag{14}\] \[{\bf w}_{k+1}=\sigma^{(w)}\ \left(\sigma_{\rm ReLU}\left({\bf L}_{k}^{(w)} \right)\left({\bf w}_{k}\odot\ \sigma_{\rm ReLU}\left({\bf L}_{k}^{(wv)}{\bf v}_{k}+{\bf b}_{k}^{(wv)} \right)\right)+\right.\] \[\left.{\bf L}_{k}^{(x)}\left({\bf x}\ \odot\ \left({\bf L}_{k}^{(xv)}{\bf v}_{k}+{\bf b}_{k}^{(xv)} \right)\right)+{\bf L}_{k}^{(vw)}{\bf v}_{k}+{\bf b}_{k}^{(w)}\right).\]
Here, \(\mathbf{v}_{k}\), termed context features, are layer activations of the input \(\mathbf{y}\), and \(\odot\) denotes the element-wise Hadamard product. We implement the non-negativity constraints on \(\mathbf{L}_{k}^{(w)}\) and \(\mathbf{L}_{k}^{(wv)}\mathbf{v}_{k}\ +\ \mathbf{b}_{k}^{(wv)}\) via the ReLU activation and set \(\sigma^{(w)}\) and \(\sigma^{(v)}\) to the softplus and ELU functions, respectively, where
\[\sigma_{\mathrm{ELU}}(x)=\begin{cases}x&\text{if }x>0\\ e^{x}-1&\text{if }x\leq 0\end{cases}.\]
We list the dimensions of the weight matrices in Table 1. The sizes of the bias terms equal the number of rows of their corresponding weight matrices. The trainable parameters are
\[\theta=(\gamma_{1:3}, \mathbf{L}_{0:K-2}^{(v)},\mathbf{b}_{0:K-2}^{(v)},\mathbf{L}_{0:K- 1}^{(vw)},\mathbf{L}_{0:K-1}^{(w)},\mathbf{b}_{0:K-1}^{(w)},\] \[\mathbf{L}_{0:K-1}^{(wv)},\mathbf{b}_{0:K-1}^{(wv)},\mathbf{L}_{1 :K-1}^{(xv)},\mathbf{b}_{1:K-1}^{(xv)},\mathbf{L}_{1:K-1}^{(x)}). \tag{12}\]
Using properties for the composition of convex functions [18], it can be verified that the forward propagation in (11) defines a function that is convex in \(\mathbf{x}\), but not necessarily in \(\mathbf{y}\) (which is not needed), as long as \(\sigma^{(w)}\) is convex and non-decreasing.
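As a concrete illustration, the numpy sketch below implements the recursion above for small illustrative sizes and checks midpoint convexity in \(\mathbf{x}\) numerically. The random weights, biases, and sizes are stand-ins; the weight shapes follow Table 1, except that the final \(\mathbf{L}^{(x)}\) is shaped \((1,n)\) so the network emits a scalar.

```python
import numpy as np

rng = np.random.default_rng(1)
softplus = lambda x: np.logaddexp(x, 0.0)        # sigma^(w): convex, non-decreasing
elu = lambda x: np.where(x > 0, x, np.expm1(x))  # sigma^(v)
relu = lambda x: np.maximum(x, 0.0)

n, m, w, u, K = 3, 2, 8, 4, 3                    # small illustrative sizes
W = lambda *s: 0.3 * rng.standard_normal(s)

# Per-layer weights; names and shapes follow the recursion and Table 1.
Lv,  bv  = [W(u, m), W(u, u)],          [W(u), W(u)]
Lvw      = [W(w, m), W(w, u), W(1, u)]
Lw,  bw  = [W(w, n), W(w, w), W(1, w)], [W(w), W(w), W(1)]
Lwv, bwv = [W(n, m), W(w, u), W(w, u)], [W(n), W(w), W(w)]
Lxv, bxv = [None, W(n, u), W(n, u)],    [None, W(n), W(n)]
Lx       = [None, W(w, n), W(1, n)]     # no x-path at layer 0
g1 = g2 = g3 = 0.5                      # scalars re-parameterized as in the text

def G(x, y):
    v, wk = y, x
    for k in range(K):
        pre = relu(Lw[k]) @ (wk * relu(Lwv[k] @ v + bwv[k])) \
              + Lvw[k] @ v + bw[k]
        if Lx[k] is not None:
            pre = pre + Lx[k] @ (x * (Lxv[k] @ v + bxv[k]))
        wk = softplus(pre)
        if k < K - 1:
            v = elu(Lv[k] @ v + bv[k])
    return softplus(g1) * wk[0] + (relu(g2) + softplus(g3)) * 0.5 * (x @ x)

# Numerical midpoint-convexity check in x for a fixed y.
y0 = rng.standard_normal(m)
xa, xb = rng.standard_normal(n), rng.standard_normal(n)
midpoint_gap = 0.5 * (G(xa, y0) + G(xb, y0)) - G(0.5 * (xa + xb), y0)
```

Because the \(\mathbf{w}\)-path weights and gates are ReLU-clipped and the activations are convex and non-decreasing, the midpoint gap is positive for any weight draw.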
To compute the log-determinant of \(\nabla_{\mathbf{x}}g^{-1}\), we use vectorized automatic differentiation to obtain the Hessian of \(\tilde{G}_{\theta}\) with respect to its first input and then compute its eigenvalues. This is feasible when the dimension of the Hessian is moderate; e.g., in our experiments, it is less than one hundred. We use efficient implementations of these methods that parallelize the computations over all samples in a batch.
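The eigenvalue route to the log-determinant vectorizes naturally over a batch. The sketch below applies numpy's batched `eigvalsh` to stand-in SPD Hessians; in our code the Hessians come from PyTorch's vectorized automatic differentiation, which the random stand-ins here do not reproduce.

```python
import numpy as np

rng = np.random.default_rng(2)
batch, n = 32, 6
B = rng.standard_normal((batch, n, n))
H = B @ B.transpose(0, 2, 1) + n * np.eye(n)   # one SPD stand-in Hessian per sample

# Batched eigenvalue route: log det of every Hessian in a single call.
logdet = np.log(np.linalg.eigvalsh(H)).sum(axis=-1)

# Reference: per-sample slogdet.
ref = np.array([np.linalg.slogdet(Hi)[1] for Hi in H])
```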
Our algorithm enforces the non-negativity constraints by projecting the parameters onto the non-negative orthant after each optimization step using ReLU. Thereby, we alleviate the need for re-parameterization, for example, via the softplus function as in [2]. Another novelty we introduce in the PICNN is the use of trainable affine layers \(\mathbf{L}_{k}^{(v)}\) and a context feature width \(u\) as a hyperparameter, which increases the expressiveness of the conditioning variables that are pivotal for characterizing conditional distributions; existing works such as [20] set \(\mathbf{L}_{1:K-2}^{(v)}=\mathbf{I}\).
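A minimal sketch of the projected step; the optimizer update and gradients are stand-ins for an actual training iteration.

```python
import numpy as np

rng = np.random.default_rng(3)
W = np.abs(rng.standard_normal((4, 4)))   # a constrained weight, e.g. L_k^(w)
lr = 0.1
for _ in range(5):
    grad = rng.standard_normal(W.shape)   # stand-in for the stochastic gradient
    W = W - lr * grad                     # plain (unconstrained) optimizer step
    W = np.maximum(W, 0.0)                # ReLU projection onto the orthant
```

The projection keeps the constrained weights feasible after every step without re-parameterizing them.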
_Sample generation._ Due to our neural network parameterization, there is generally no closed-form relation between \(g\) and \(g^{-1}=\tilde{G}_{\theta}\). As in [20], we approximate the inverse of \(\tilde{G}_{\theta}\) during sampling as the Legendre-Fenchel dual. That is, we solve the convex optimization problem
\[\operatorname*{arg\,min}_{\mathbf{v}}\;\tilde{G}_{\theta}(\mathbf{v},\mathbf{ y})-\mathbf{z}^{\top}\mathbf{v}. \tag{13}\]
Due to the strict convexity of \(\tilde{G}_{\theta}\) in its first argument, the first-order optimality condition gives
\[\mathbf{v}^{*}\approx\nabla_{\mathbf{z}}\tilde{G}_{\theta}^{-1}(\mathbf{z}, \mathbf{y})=g(\mathbf{z},\mathbf{y})\sim\pi(\mathbf{x}|\mathbf{y}).\]
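For intuition, the sketch below inverts a stand-in strictly convex potential \(G(\mathbf{v},\mathbf{y})=\frac{1}{2}\mathbf{v}^{\top}A\mathbf{v}+\mathbf{v}^{\top}C\mathbf{y}\) by gradient descent on the Legendre-Fenchel objective; at the optimum, \(\nabla_{\mathbf{v}}G(\mathbf{v}^{*},\mathbf{y})=\mathbf{z}\). The quadratic form and solver settings are illustrative (our implementation solves this convex problem for the learned PICNN potential).

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 4, 2
A = 2.0 * np.eye(n)                         # Hessian of the stand-in potential
C = rng.standard_normal((n, m))

def grad_G(v, y):                           # nabla_v G for G(v,y) = 0.5 v'Av + v'Cy
    return A @ v + C @ y

z = rng.standard_normal(n)
y = rng.standard_normal(m)

v = np.zeros(n)                             # gradient descent on G(v, y) - z'v
for _ in range(300):
    v = v - 0.2 * (grad_G(v, y) - z)

v_star = np.linalg.solve(A, z - C @ y)      # closed-form optimum for this quadratic
```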
\begin{table}
\begin{tabular}{|c|c c c c c c|} \hline \(k\), layer & \(\mathrm{size}(\mathbf{L}_{k}^{(v)})\) & \(\mathrm{size}(\mathbf{L}_{k}^{(vw)})\) & \(\mathrm{size}(\mathbf{L}_{k}^{(w)})\) & \(\mathrm{size}(\mathbf{L}_{k}^{(wv)})\) & \(\mathrm{size}(\mathbf{L}_{k}^{(xv)})\) & \(\mathrm{size}(\mathbf{L}_{k}^{(x)})\) \\ \hline \hline
0 & \((u,\,m)\) & \((w,\,m)\) & \((w,\,n)\) & \((n,\,m)\) & 0 & 0 \\
1 & \((u,\,u)\) & \((w,\,u)\) & \((w,\,w)\) & \((w,\,u)\) & \((n,\,u)\) & \((w,\,n)\) \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ \(K-2\) & \((u,\,u)\) & \((w,\,u)\) & \((w,\,w)\) & \((w,\,u)\) & \((n,\,u)\) & \((w,\,n)\) \\ \(K-1\) & 0 & \((1,\,u)\) & \((1,\,w)\) & \((w,\,u)\) & \((n,\,u)\) & \((w,\,n)\) \\ \hline \end{tabular}
\end{table}
Table 1: Parameter dimensions of a \(K\)-layer PICNN architecture from (11). Since the dimensions of the inputs, \(\mathbf{x}\in\mathbb{R}^{n}\) and \(\mathbf{y}\in\mathbb{R}^{m}\), are given, and the network outputs a scalar, we vary only the depth, \(K\), feature width, \(w\), and context width, \(u\), in our experiments.
_Hyperparameters._ In our numerical experiments, we vary only three hyperparameters to adjust the complexity of the architecture. As described in section 5 we randomly sample the depth, \(K\), the feature width, \(w\), and the context width, \(u\), from the values in Table 2.
## 4 Conditional OT flow (COT-Flow)
In this section, we extend the OT-regularized continuous normalizing flows in [36] to the conditional generative modeling problem introduced in section 2.
_Training problem._ Following the general approach of continuous normalizing flows [11, 19], we express the transport map that pushes forward the reference distribution \(\rho_{z}\) to the target \(\pi(\mathbf{x}|\mathbf{y})\) via the flow map of an ODE. That is, we define \(g(\mathbf{z},\mathbf{y})=\mathbf{u}(1)\) as the terminal state of the ODE
\[\frac{d}{dt}\mathbf{u}=v(t,\mathbf{u},\mathbf{y}),\quad t\in(0,1],\quad\mathbf{ u}(0)=\mathbf{z}, \tag{10}\]
where the evolution of \(\mathbf{u}\colon[0,1]\to\mathbb{R}^{n}\) depends on the velocity field \(v\colon[0,1]\times\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}^{n}\) and will be parameterized with a neural network below. When the velocity is trained to minimize the negative log-likelihood loss in (11), the resulting mapping is called a continuous normalizing flow [19]. We add \(\mathbf{y}\) as an additional input to the velocity field to enable conditional sampling. Consequently, the resulting flow map and generator \(g\) depend on \(\mathbf{y}\).
One advantage of defining the generator through an ODE is that the loss function can be evaluated efficiently for a wide range of velocity functions. Recall that the loss function requires the inverse of the generator and the log-determinant of its Jacobian. For sufficiently regular velocity fields, the inverse of the generator can be obtained by integrating backward in time. To be precise, we define \(g^{-1}(\mathbf{x},\mathbf{y})=\mathbf{p}(0)\) where \(\mathbf{p}:[0,1]\to\mathbb{R}^{n}\) satisfies (10) with the terminal condition \(\mathbf{p}(1)=\mathbf{x}\). As derived in [58, 19, 56], constructing the generator through an ODE also simplifies computing the log-determinant of the Jacobian, i.e.,
\[\log\det\nabla_{x}g^{-1}(\mathbf{x},\mathbf{y})=\int_{0}^{1}\operatorname{ trace}\left(\nabla_{x}v(t,\mathbf{p},\mathbf{y})\right)dt. \tag{11}\]
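The trace identity can be verified for a linear velocity field \(v(t,\mathbf{u})=\mathbf{A}\mathbf{u}\): the flow-map Jacobian is the matrix exponential \(e^{\mathbf{A}}\), whose log-determinant equals \(\operatorname{trace}(\mathbf{A})\). The numpy sketch below integrates the Jacobian ODE with RK4; the sizes and integrator settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = 0.3 * rng.standard_normal((n, n))   # linear stand-in velocity v(t, u) = A u

# RK4-integrate the Jacobian ODE dM/dt = A M, M(0) = I; M(1) is the flow Jacobian.
nt = 64
h = 1.0 / nt
M = np.eye(n)
f = lambda M: A @ M
for _ in range(nt):
    k1 = f(M); k2 = f(M + 0.5 * h * k1)
    k3 = f(M + 0.5 * h * k2); k4 = f(M + h * k3)
    M = M + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

logdet_flow = np.linalg.slogdet(M)[1]   # log det of the flow-map Jacobian
logdet_trace = np.trace(A)              # integral of trace(A) dt for constant A
```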
Penalizing transport costs during training leads to theoretical and numerical advantages; see, for example, [58, 56, 36, 17]. Hence, we consider the OT-regularized training problem
\[\min_{v}J_{\mathrm{NLL}}[g]+\alpha_{1}P_{\mathrm{DOT}}[v] \tag{12}\]
where \(\alpha_{1}>0\) is a regularization parameter that trades off matching the distributions (for \(\alpha_{1}\ll 1\)) and minimizing the transport costs (for \(\alpha_{1}\gg 1\)) given by the dynamic transport cost penalty
\[P_{\mathrm{DOT}}[v]=\mathbb{E}_{\pi(\mathbf{x},\mathbf{y})}\left[\int_{0}^{1} \frac{1}{2}\|v(t,\mathbf{p},\mathbf{y})\|^{2}dt\right]. \tag{13}\]
This penalty is stronger than the static counterpart in (12), in other words
\[P_{\mathrm{OT}}[g]\leq P_{\mathrm{DOT}}[g]\quad\forall g:\mathbb{R}^{n}\times \mathbb{R}^{m}\to\mathbb{R}^{n}. \tag{14}\]
However, the values of both penalties agree when the velocity field in (10) is constant along the trajectories, which is the case for the optimal transport map. Hence, we expect the solution of the dynamic problem to be close to that of the static formulation when \(\alpha_{1}\) is chosen well.
To provide additional theoretical insight and motivate our numerical method, we note that (4.3) is related to the potential mean field game (MFG)
\[\begin{split}\min_{\rho,v}&\int_{\mathbb{R}^{m}}\int_{ \mathbb{R}^{n}}\left(-\log\rho(1,\mathbf{x})\pi(\mathbf{x},\mathbf{y})+\alpha_ {1}\int_{0}^{1}\frac{1}{2}\|v(t,\mathbf{x},\mathbf{y})\|^{2}\rho(t,\mathbf{x},\mathbf{y})dt\right)d\mathbf{x}d\mathbf{y}\\ \text{subject to}&\partial_{t}\rho(t,\mathbf{x}, \mathbf{y})+\nabla_{x}\cdot\left(\rho(t,\mathbf{x},\mathbf{y})v(t,\mathbf{x},\mathbf{y})\right)=0,\quad t\in(0,1]\\ &\rho(0,\mathbf{x},\mathbf{y})=\rho_{z}(\mathbf{x}).\end{split} \tag{4.6}\]
Here, the terminal and running costs in the objective functional are the \(L_{2}\) anti-derivatives of \(J_{\text{NLL}}\) and \(P_{\text{DOT}}\), respectively, and the continuity equation is used to represent the density evolution under the velocity \(v\). To be precise, (4.3) can be seen as a discretization of (4.6) in the Lagrangian coordinates defined by the reversed ODE (4.14). Note that both formulations differ in the way the transport costs are measured: (4.4) computes the cost of pushing the conditional distribution to the Gaussian, while (4.6) penalizes the costs of pushing the Gaussian to the conditional. While these two terms do not generally agree, they coincide for the \(L_{2}\) optimal transport map; see [16, Corollary 2.5.13] for a proof for unconditional optimal transport. For more insights into MFGs and their relation to optimal transport and generative modeling, we refer to our prior work [43] and the more recent work [57, Sec 3.4]. The fact that the solutions of the microscopic version in (4.3) and the macroscopic version in (4.6) agree is remarkable and was first shown in the seminal work [28].
By Pontryagin's maximum principle, the solution of (4.3) satisfies the feedback form
\[v(t,\mathbf{p},\mathbf{y})=-\frac{1}{\alpha_{1}}\nabla_{x}\Phi(t,\mathbf{p}, \mathbf{y}) \tag{4.7}\]
where \(\Phi:[0,1]\times\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}\) and \(\Phi(\cdot,\cdot,\mathbf{y})\) can be seen as the value function of the MFG and, alternatively, as the Lagrange multiplier in (4.6) for a fixed \(\mathbf{y}\). Therefore, we model the velocity as a conservative vector field as also proposed in [36, 43]. This also simplifies the computation of the log-determinant since the Jacobian of \(v\) is symmetric, and we note that
\[\text{trace}\nabla_{x}\left(\nabla_{x}\Phi(t,\mathbf{u},\mathbf{y})\right)= \Delta_{x}\Phi(t,\mathbf{u},\mathbf{y}). \tag{4.8}\]
Another consequence of optimal control theory is that the value function satisfies the Hamilton Jacobi Bellman (HJB) equations
\[\partial_{t}\Phi(t,\mathbf{x},\mathbf{y})-\frac{1}{2\alpha_{1}}\|\nabla_{x} \Phi(t,\mathbf{x},\mathbf{y})\|^{2}=0,\quad t\in[0,1) \tag{4.9}\]
with the terminal condition
\[\Phi(1,\mathbf{x},\mathbf{y})=-\frac{\pi(\mathbf{x},\mathbf{y})}{\rho(1, \mathbf{x},\mathbf{y})}, \tag{4.10}\]
which is only tractable if the joint density, \(\pi(\mathbf{x},\mathbf{y})\), is available. These \(n\)-dimensional PDEs are parameterized by \(\mathbf{y}\in\mathbb{R}^{m}\).
_Neural network approach._ We parameterize the value function, \(\Phi\), with a scalar-valued neural network. In contrast to the static approach in the previous section, the choice of function approximator is more flexible in the dynamic approach. The approach can be effective as long as \(\Phi\) is parameterized with any function approximation tool that is effective in high dimensions, allows efficient evaluation of its gradient and Laplacian, and the training problem can be solved sufficiently
well. For our numerical experiments, we use the architecture considered in [36] and model \(\Phi\) as the sum of a simple feed-forward neural network and a quadratic term. That is,
\[\Phi_{\theta}(\mathbf{q})=\mathrm{NN}_{\theta_{\mathrm{NN}}}(\mathbf{q})+Q_{ \theta_{\mathrm{Q}}}(\mathbf{q}),\quad\text{with}\quad\mathbf{q}=(t,\mathbf{x},\mathbf{y}),\quad\theta=(\theta_{\mathrm{NN}},\theta_{\mathrm{Q}}). \tag{11}\]
As in [36], we model the neural network as a two-layer residual network of width \(w\) that reads
\[\mathbf{h}_{0} =\sigma(\mathbf{A}_{0}\mathbf{q}+\mathbf{b}_{0}) \tag{12}\] \[\mathbf{h}_{1} =\mathbf{h}_{0}+\sigma(\mathbf{A}_{1}\mathbf{h}_{0}+\mathbf{b}_ {1})\] \[\mathrm{NN}_{\theta_{\mathrm{NN}}}(\mathbf{q}) =\mathbf{a}^{\top}\mathbf{h}_{1}\]
with trainable weights \(\theta_{\mathrm{NN}}=(\mathbf{a},\mathbf{A}_{0},\mathbf{b}_{0},\mathbf{A}_{1},\mathbf{b}_{1})\) where \(\mathbf{a}\in\mathbb{R}^{w},\mathbf{A}_{0}\in\mathbb{R}^{w\times(m+n+1)}\), \(\mathbf{b}_{0}\in\mathbb{R}^{w},\mathbf{A}_{1}\in\mathbb{R}^{w\times w}\), and \(\mathbf{b}_{1}\in\mathbb{R}^{w}\). The quadratic term depends on the weights \(\theta_{\mathrm{Q}}=(\mathbf{A},\mathbf{b},c)\) where \(\mathbf{A}\in\mathbb{R}^{(n+m+1)\times r},\mathbf{b}\in\mathbb{R}^{m+n+1},c \in\mathbb{R}\) and is defined as
\[Q_{\theta_{\mathrm{Q}}}(\mathbf{q})=\frac{1}{2}\mathbf{q}^{\top}(\mathbf{A} \mathbf{A}^{\top})\mathbf{q}+\mathbf{b}^{\top}\mathbf{q}+c. \tag{13}\]
Adding the quadratic term provides a simple and efficient way to model affine shifts between the distributions. In our experiments \(r\) is chosen to be \(\min(10,n+m+1)\).
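A numpy sketch of this architecture with random stand-in weights; the activation is an assumption, and the rank-\(r\) factorization makes \(\mathbf{A}\mathbf{A}^{\top}\) positive semi-definite by construction.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, w = 3, 2, 16                      # illustrative sizes
d = n + m + 1                           # q = (t, x, y)
r = min(10, d)
act = lambda x: np.logaddexp(x, 0.0)    # smooth stand-in for sigma

# Residual-network weights theta_NN = (a, A0, b0, A1, b1)
a  = rng.standard_normal(w)
A0 = rng.standard_normal((w, d)); b0 = rng.standard_normal(w)
A1 = rng.standard_normal((w, w)); b1 = rng.standard_normal(w)
# Quadratic-term weights theta_Q = (A, b, c) with a rank-r factor
Aq = rng.standard_normal((d, r)); bq = rng.standard_normal(d); c = 0.1

def Phi(q):
    h0 = act(A0 @ q + b0)
    h1 = h0 + act(A1 @ h0 + b1)                       # two-layer residual network
    quad = 0.5 * q @ (Aq @ (Aq.T @ q)) + bq @ q + c   # PSD quadratic + affine
    return a @ h1 + quad

q = np.concatenate(([0.5], rng.standard_normal(n + m)))
val = Phi(q)
psd_min = np.linalg.eigvalsh(Aq @ Aq.T).min()          # A A^T is PSD
```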
_Training problem._ In summary, we obtain the training problem
\[\min_{\theta} J_{\mathrm{NLL}}[g_{\theta}]+\alpha_{1}P_{\mathrm{DOT}}[\nabla_{x} \Phi_{\theta}]+\alpha_{2}P_{\mathrm{HJB}}[\Phi_{\theta}] \tag{14}\] \[\text{where} g_{\theta}^{-1}(\mathbf{x},\mathbf{y})=\mathbf{p}(0)\text{ and }\frac{d}{dt}\mathbf{p}(t)=-\frac{1}{\alpha_{1}}\nabla_{x}\Phi_{\theta}(t,\mathbf{p}, \mathbf{y}),\;t\in(0,1],\;\mathbf{p}(1)=\mathbf{x}.\]
Here, \(\alpha_{2}\geq 0\) controls the influence of the HJB penalty term from [36], which reads
\[P_{\mathrm{HJB}}[\Phi]=\mathbb{E}_{\pi(\mathbf{x},\mathbf{y})}\left[\int_{0}^{ 1}\left|\partial_{t}\Phi(t,\mathbf{p},\mathbf{y})-\frac{1}{2\alpha_{1}}\| \nabla_{x}\Phi(t,\mathbf{p},\mathbf{y})\|^{2}\right|dt\right]. \tag{15}\]
When \(\alpha_{1}\) is chosen so that the minimizer of the above problem matches the densities exactly, the solution is the optimal transport map. In this situation, the relationship between the value function, \(\Phi_{\theta}\), and the optimal potential, \(\tilde{G}_{\theta}\), is given by
\[\tilde{G}_{\theta}(\mathbf{x},\mathbf{y})+C=\frac{1}{2}\mathbf{x}^{\top} \mathbf{x}+\frac{1}{\alpha_{1}}\Phi_{\theta}(1,\mathbf{x},\mathbf{y}). \tag{16}\]
for some constant \(C\in\mathbb{R}\). To train the COT-Flow, we use a discretize-then-optimize paradigm. In our experiments, we use \(n_{t}\) equidistant steps of the Runge-Kutta-4 scheme to discretize the ODE constraint in (14) and apply the Adam optimizer to the resulting unconstrained optimization problem. Note that following the implementation by [36], we enforce a box constraint of \([-1.5,1.5]\) to the network parameters \(\theta_{NN}\). Since the velocities defining the optimal transport map will be constant along the trajectories, we expect the accuracy of the discretization to improve as we get closer to the solution.
_Sample generation._ After training, we draw i.i.d. samples from the approximated conditional distribution for a given \(\mathbf{y}\) by sampling \(\mathbf{z}\sim\rho_{z}\) and solving the ODE (1). Since we train the inverse generator using a fixed number of integration steps, \(n_{t}\), it is interesting to investigate how close our solution is to the continuous problem. One indicator is to compare the samples obtained with different numbers of integration steps during sampling. Another is to compute the variance of \(\nabla\Phi\) along the trajectories.
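The first indicator can be sketched as follows, with the stand-in velocity \(v(t,\mathbf{u})=-\mathbf{u}\) playing the role of the learned \(-\frac{1}{\alpha_{1}}\nabla_{x}\Phi_{\theta}\). For this field, the exact sample is \(e^{-1}\mathbf{z}\), and the gap between the \(n_{t}=8\) and \(n_{t}=16\) RK4 solutions is tiny when the flow is resolved.

```python
import numpy as np

def rk4_sample(z, nt):
    # Integrate du/dt = v(t, u) from t = 0 to 1 with nt RK4 steps.
    u, h = z.astype(float).copy(), 1.0 / nt
    f = lambda u: -u                  # stand-in for the learned velocity field
    for _ in range(nt):
        k1 = f(u); k2 = f(u + 0.5 * h * k1)
        k3 = f(u + 0.5 * h * k2); k4 = f(u + h * k3)
        u = u + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

rng = np.random.default_rng(7)
z = rng.standard_normal(5)
x8, x16 = rk4_sample(z, 8), rk4_sample(z, 16)
gap = np.max(np.abs(x8 - x16))        # small gap => flow is well resolved
```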
_Hyperparameters._ As described in section 5, we randomly sample the width, \(w\), of the neural network, the number of time integration steps during training, the penalty parameters \(\alpha_{1},\alpha_{2}\), and the hyperparameters of the Adam optimizer (batch size and initial learning rate) from the values listed in Table 2.
## 5 Implementation and Experimental Setup
This section describes our implementations and experimental setups and provides guidance for applying our techniques to new problems.
_Implementation._ The scripts for implementing our neural network approaches and running our numerical experiments are written in Python using PyTorch. For datasets that are not publicly available, we provide the binary files we use in our experiments and the Python scripts for generating the data. We have published the code and data along with detailed instructions on how to reproduce the results in our main repository [https://github.com/EmoryMLIP/PCP-Map.git](https://github.com/EmoryMLIP/PCP-Map.git). Since the COT-Flow approach is a generalization of a previous approach, we have created a fork for this paper [https://github.com/EmoryMLIP/COT-Flow.git](https://github.com/EmoryMLIP/COT-Flow.git).
_Hyperparameter selection._ Finding hyperparameters, such as network architectures and optimization parameters, is crucial in neural network approaches. In our experience, the choice of hyperparameters is often approach and problem-dependent. Establishing rigorous mathematical principles for choosing the parameters in these models (as is now well known for regularizing convex optimization problems based on results from high-dimensional statistics [35]) is an important area of future work and is beyond the scope of this paper. Nevertheless, we present an objective and robust way of identifying an effective combination of hyperparameters for our approaches.
We limit the search for optimal hyperparameters to the search space outlined in Table 2. Due to the typically large number of possible combinations, we employ a two-step procedure to identify effective ones. In the initial step, called the pilot run, we randomly sample 50 or 100 combinations and conduct a relatively small number of training steps, which will be specified for each experiment in section 6. For each experiment, the space from which we sample hyperparameters is a subset of the search space defined in Table 2; the selection of the subset depends on the properties of the training dataset and is explained in more detail in each experiment. Subsequently, we select the models that exhibit the best performance on the validation set and continue training them for the desired number of epochs. This iterative process allows us to refine the hyperparameters and identify the most effective settings for the given task. We use the Adam optimizer for all optimization problems in the pilot and training runs.
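The two-step procedure can be sketched as follows; the search-space entries and the mock `pilot_loss` (a stand-in for the validation loss after a short pilot training run) are illustrative.

```python
import random

random.seed(0)
space = {"batch_size": [32, 64, 128, 256],
         "lr": [5e-2, 1e-2, 1e-3, 1e-4],
         "width": [32, 64, 128, 256, 512],
         "depth": [2, 3, 4, 5, 6]}

def pilot_loss(cfg):
    # Stand-in for the validation loss after a short pilot training run.
    return 10 * cfg["lr"] + 1.0 / cfg["width"] + 0.01 * random.random()

# Step 1 (pilot run): randomly sample combinations, score them cheaply.
configs = [{k: random.choice(v) for k, v in space.items()} for _ in range(100)]
scored = sorted(((pilot_loss(c), c) for c in configs), key=lambda t: t[0])

# Step 2: keep the best models on the validation set for full training.
top = [cfg for _, cfg in scored[:10]]
```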
The number of samples in the pilot runs, the number of models selected for the final training, and the number of repetitions can be adjusted based on the computational resources and expected complexity of the dataset. Further details for each experiment are provided in section 6.
## 6 Numerical Experiments
We test the accuracy, robustness, efficiency, and scalability of our approaches from sections 3 and 4 using three problem settings that lead to different challenges and benchmark methods for comparison. In subsection 6.1, we compare our proposed approaches to the Adaptive Transport Maps (ATM) approach developed in [4] on estimating the joint and conditional distributions of six UCI tabular datasets [29]. In subsection 6.2, we compare our approaches to a provably convergent approximate Bayesian computation (ABC) approach on accuracy and computational cost using the stochastic Lotka-Volterra model, which yields intractable likelihood. Using this dataset, we also compare PCP-Map's and COT-Flow's sampling efficiency for different settings. In subsection 6.3, we demonstrate the scalability of our approaches to higher-dimensional problems by comparing them to the flow-based neural posterior estimation (NPE) approach on an inference problem involving the 1D shallow water equations. To demonstrate the improvements of
PCP-Map over the amortized CP-Flow approach in the repository associated with [20], we compare computational cost in subsection 6.4.
### UCI Tabular Datasets
We follow the experimental setup in [4] by first removing the discrete-valued features and one variable of every pair with a Pearson correlation coefficient greater than 0.98. We then partition the datasets into training, validation, and testing sets using an 8:1:1 split, followed by normalization. For the joint and conditional tasks, we set \(\mathbf{x}\) to be the second half of the features and the last feature, respectively. The conditioning variable \(\mathbf{y}\) is set to be the remaining features for both tasks.
To perform joint density estimation, we use the block-triangular generator \(h\colon\mathbb{R}^{n+m}\to\mathbb{R}^{n+m}\) as in (2.2), which leads to
\[h^{-1}(\mathbf{x},\mathbf{y})=\begin{bmatrix}h_{x}^{-1}\left(\mathbf{x}, \mathbf{y}\right)\\ h_{y}^{-1}(\mathbf{y})\end{bmatrix}\text{ and }\nabla h^{-1}(\mathbf{x},\mathbf{y})= \begin{bmatrix}\nabla_{\mathbf{x}}h_{x}^{-1}\left(\mathbf{x},\mathbf{y}\right) &\nabla_{\mathbf{y}}h_{x}^{-1}\left(\mathbf{x},\mathbf{y}\right)\\ 0&\nabla_{\mathbf{y}}h_{y}^{-1}(\mathbf{y})\end{bmatrix}. \tag{6.1}\]
Here, the transformation in the first block, \(h_{x}\), is either PCP-Map or COT-Flow, and the transformation in the second block, \(h_{y}\), is their associated unconditional version. We learn the weights by minimizing the expected negative log-likelihood functional
\[J_{\text{NLL}}[h]=\mathbb{E}_{\pi(\mathbf{x},\mathbf{y})}\left[\frac{1}{2} \left\|h^{-1}(\mathbf{x},\mathbf{y})\right\|^{2}-\log\det\nabla h^{-1}( \mathbf{x},\mathbf{y})\right]. \tag{6.2}\]
An alternative approach for joint density estimation is to learn a generator \(g\) where each component depends on variables \(\mathbf{x}\) and \(\mathbf{y}\), i.e., \(g\) does not have the (block)-triangular structure in (6.1). This map, however, does not immediately provide a way to sample conditional distributions; for example, it requires performing variational inference to model the conditional distribution [54]. Instead, when the variables that will be used for conditioning are known in advance, learning a generator with the structure in (6.1) can be used to characterize both the joint distribution \(\pi(\mathbf{x},\mathbf{y})\) and the conditional \(\pi(\mathbf{x}|\mathbf{y})\).
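A practical payoff of the block-triangular structure in (6.1) is that the joint log-determinant splits into the sum of the two block log-determinants; a quick numpy check with stand-in Jacobian blocks:

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 3, 2
Jx  = rng.standard_normal((n, n)) + 3 * np.eye(n)   # nabla_x h_x^{-1}(x, y)
Jxy = rng.standard_normal((n, m))                   # nabla_y h_x^{-1}(x, y)
Jy  = rng.standard_normal((m, m)) + 3 * np.eye(m)   # nabla_y h_y^{-1}(y)

# Assemble the block-triangular Jacobian from (6.1).
J = np.block([[Jx, Jxy], [np.zeros((m, n)), Jy]])

logdet_joint = np.linalg.slogdet(J)[1]
logdet_split = np.linalg.slogdet(Jx)[1] + np.linalg.slogdet(Jy)[1]
```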
Since the reference density, \(\rho_{z}=\mathcal{N}(0,I_{n+m})\), is block-separable (i.e., it factorizes into the product \(\rho_{z}(\mathbf{z}_{x},\mathbf{z}_{y})=\rho_{x}(\mathbf{z}_{x})\rho_{y}( \mathbf{z}_{y})\)), we decouple the objective functional into the following two terms
\[J_{\text{NLL}}[h]=J_{\text{NLL},\mathbf{x}}[h_{x}]+J_{\text{NLL},\mathbf{y}}[ h_{y}]\]
\begin{table}
\begin{tabular}{|l||c||c|} \hline Hyperparameters & PCP-Map & COT-Flow \\ \hline \hline Batch size & \(\{2^{5},2^{6},2^{7},2^{8}\}\) & \(\{2^{5},2^{6},...,2^{10}\}\) \\ Learning rate & \(\{0.05,0.01,10^{-3},10^{-4}\}\) & \(\{0.05,0.01,10^{-3},10^{-4}\}\) \\ Feature Layer width, \(w\) & \(\{2^{5},2^{6},...,2^{9}\}\) & \(\{2^{5},2^{6},...,2^{10}\}\) \\ Context Layer width, \(u\) & \(\{\frac{w}{2^{i}}|\frac{w}{2^{i}}>m\), i=0,1,...\}\cup\{m\}\) & \\ Embedding feature width, \(w_{y}\) & & \(\{2^{5},2^{6},2^{7}\}\) \\ Embedding output width, \(w_{yout}\) & & \(\{2^{5},2^{6},2^{7}\}\) \\ Number of layers, \(K\) & \(\{\text{2, 3, 4, 5, 6}\}\) & \(\{2\}\) \\ Number of time steps, \(n_{t}\) & & \(\{8,\,16\}\) \\ \([\log(\alpha_{1}),\log(\alpha_{2})]\) & & \([\mathcal{U}\text{(-1, 3), }\mathcal{U}\text{(-1, 3)]}\) or \\ & & \([\mathcal{U}(10^{2},10^{5}),\mathcal{U}(10^{2},10^{5})]\) \\ \hline \end{tabular}
\end{table}
Table 2: Hyperparameter search space for PCP-Map and COT-Flow. Here \(m\) denotes the size of the context feature or observation \(\mathbf{y}\)
with \(J_{\rm NLL,\mathbf{x}}\) defined in (3.2) and
\[J_{\rm NLL,\mathbf{y}}[h_{y}]=\mathbb{E}_{\mathbf{y}\sim\pi(\mathbf{y})}\left[ \frac{1}{2}\left\|h_{y}^{-1}(\mathbf{y})\right\|^{2}-\log\det\nabla_{\mathbf{y}}h_{y}^{-1 }(\mathbf{y})\right]. \tag{6.3}\]
For PCP-Map, as proposed in [2], we employ the gradient of a fully input convex neural network (FICNN) \(\tilde{F}_{\theta_{F}}:\mathbb{R}^{m}\rightarrow\mathbb{R}\), parameterized by weights \(\theta_{F}\), to represent the second block \(h_{y}^{-1}\). A general \(K\)-layer FICNN can be expressed as the following sequence starting with \(\mathbf{z}_{0}=\mathbf{y}\):
\[\mathbf{z}_{k+1}=\sigma^{(z)}\left(\mathbf{L}_{k}^{(w)}\mathbf{z}_{k}+\mathbf{ L}_{k}^{(y)}\mathbf{y}+\mathbf{b}_{k}\right) \tag{6.4}\]
for \(k=0,\ldots,K-1\). Here, \(\sigma^{(z)}\) is the softplus activation function. Since the FICNN map is not part of our contribution, we use the input-augmented ICNN implementation from [20]; the only difference is that we remove activation normalization. For the first map component \(h_{x}^{-1}\), we use the gradient of a PICNN as described in section 3, which takes all the features as input. For COT-Flow, we construct two distinct neural-network-parameterized potentials \(\Phi_{x}\) and \(\Phi_{y}\). Here, \(\Phi_{y}\) only takes the conditioning variable \(\mathbf{y}\) as input and can be constructed exactly as in [36]. \(\Phi_{x}\) acts on all features and is the same potential described in section 4.
We only learn the first block \(h_{x}^{-1}\) to perform conditional density estimation. This is equivalent to solely learning the weights \(\theta\) of the PICNN \(\tilde{G}_{\theta}\) for PCP-Map. For COT-Flow, we only construct and learn the potential \(\Phi_{x}\).
The hyperparameter sample space we use for this experiment is presented in Table 3. We select smaller batch sizes and large learning rates as this leads to fast convergence on these relatively simple problems. For each dataset, we select the ten best hyperparameter combinations based on a pilot run for full training. For PCP-Map's pilot run, we performed 15 epochs for all three conditional datasets, three epochs for the Parkinson's and the White Wine datasets, and four epochs for Red Wine using 100 randomly sampled combinations. For COT-Flow, we limit the pilot runs to only 50 sampled combinations due to a narrower sample space on model architecture. To assess the robustness of our approaches, we performed five full training runs with random initializations of network weights for each of the ten hyperparameter combinations for each dataset.
In Table 4, we report the best, median, and worst mean negative log-likelihood on the test data across the six datasets for the two proposed approaches and the best results for ATM. The table demonstrates that the best models outperform ATM for all datasets and that the median performance is typically superior. Overall, COT-Flow slightly outperforms PCP-Map in terms of loss values for the best models. The improvements are more pronounced for the conditional sampling tasks, where even the worst hyperparameters from PCP-Map and COT-Flow improve over ATM by a substantial margin.
\begin{table}
\begin{tabular}{|l||c||c|} \hline Hyperparameters & PCP-Map & COT-Flow \\ \hline \hline Batch size & \{32, 64\} & \{32, 64\} \\ Learning rate & \{0.01, 0.005, 0.001\} & \{0.01, 0.005, 0.001\} \\ Feature Layer width, \(w\) & \{32, 64, 128, 256, 512\} & \{32, 64, 128, 256, 512\} \\ Context Layer width, \(u\) & \{\(\frac{w}{2^{i}}|\frac{w}{2^{j}}>m,i=0,1,...\}\cup\{m\}\) & \\ Number of layers, \(K\) & \{2, 3, 4, 5, 6\} & \{2\} \\ Number of time steps, \(n_{t}\) & & \{8, 16\} \\ \([\log(\alpha_{1}),\log(\alpha_{2})]\) & & \([\mathcal{U}\)(-1, 3), \(\mathcal{U}\)(-1, 3)] \\ \hline \end{tabular}
\end{table}
Table 3: Hyperparameter sample space for the UCI tabular datasets experiment.
### Stochastic Lotka-Volterra
We compare our approaches to an ABC approach based on Sequential Monte Carlo (SMC) for likelihood-free Bayesian inference using the stochastic Lotka-Volterra (LV) model [55]. The LV model is a stochastic process whose dynamics describe the evolution of the populations \(S(t)=(S_{1}(t),S_{2}(t))\) of two interacting species, e.g., predators and prey. These populations start from a fixed initial condition \(S(0)=(50,100)\). The parameter \(\mathbf{x}\in\mathbb{R}^{4}\) determines the rate of change of the populations over time, and the observation \(\mathbf{y}\in\mathbb{R}^{9}\) contains summary statistics of the time series generated by the model: the mean and log-variance of each population, the auto-correlation of each population at lags 1 and 2, and the cross-correlation coefficient between the two populations. The procedure for sampling a trajectory of the species populations is known as Gillespie's algorithm.
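A sketch of the nine summary statistics for a simulated population pair; the exact normalization of each statistic is an assumption here, and the Poisson series are stand-ins for Gillespie trajectories.

```python
import numpy as np

def summary_stats(S1, S2):
    """Nine summary statistics of a two-species time series (sketch)."""
    def autocorr(s, lag):
        s = (s - s.mean()) / s.std()
        return float(np.mean(s[:-lag] * s[lag:]))
    return np.array([
        S1.mean(), S2.mean(),                      # means (2)
        np.log(S1.var()), np.log(S2.var()),        # log-variances (2)
        autocorr(S1, 1), autocorr(S1, 2),          # auto-correlations (4)
        autocorr(S2, 1), autocorr(S2, 2),
        float(np.corrcoef(S1, S2)[0, 1]),          # cross-correlation (1)
    ])

rng = np.random.default_rng(9)
S1 = rng.poisson(50, 200).astype(float)            # stand-in predator series
S2 = rng.poisson(100, 200).astype(float)           # stand-in prey series
y = summary_stats(S1, S2)
```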
Given a prior distribution for the parameter \(\mathbf{x}\), we aim to sample from the posterior distribution corresponding to an observation \(\mathbf{y}^{*}\). As in [37], we consider a log-uniform prior distribution for the parameters whose density (of each component) is given by \(\pi(\log x_{i})=\mathcal{U}(-5,2)\). As a result of the stochasticity that enters non-linearly in the dynamics, the likelihood function is not available in closed form. Hence, this model is a popular benchmark for likelihood-free inference algorithms as they avoid evaluating \(\pi(\mathbf{y}|\mathbf{x})\)[37].
We generate two training sets consisting of 50k and 500k samples from the joint distribution \(\pi(\mathbf{x},\mathbf{y})\) obtained using Gillespie's algorithm. To account for the strict positivity of the parameter, which follows a log-uniform prior distribution, we perform a log transformation of the \(\mathbf{x}\) samples. This ensures that the conditional distribution of interest has full support, which is needed to find a diffeomorphic map to a Gaussian. We split the log-transformed data into ten folds and use nine folds of the samples as training data and one fold as validation data. We normalize the training and validation sets using the training set's empirical mean and standard deviation.
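The preprocessing described above (log transform, 9:1 split, normalization by training statistics) can be sketched as follows; the random-permutation split is an assumption standing in for the ten-fold procedure.

```python
import numpy as np

def preprocess(x, y, seed=0):
    """Log-transform the parameters, hold out one of ten folds for
    validation, and normalize with the training set's statistics."""
    rng = np.random.default_rng(seed)
    data = np.concatenate([np.log(x), y], axis=1)   # full support for a Gaussian reference
    idx = rng.permutation(len(data))
    n_val = len(data) // 10                         # one of ten folds held out
    val, trn = data[idx[:n_val]], data[idx[n_val:]]
    mean, std = trn.mean(0), trn.std(0)
    return (trn - mean) / std, (val - mean) / std, mean, std
```

The returned mean and standard deviation are what the text later reuses, together with the inverse maps, to recover parameter samples in the original domain.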
For the pilot run, we use the same sample space in Table 3 except expanding the batch size space to {64, 128, 256} for PCP-Map and {32, 64, 128, 256} for COT-Flow to account for the increase in sample size. We also fixed, for PCP-Map, \(w=u\). During PCP-Map's pilot run, we perform two training epochs with 100 hyperparameter combination samples using the 50k dataset. For COT-Flow, we only perform one epoch of pilot training as it is empirically observed to be sufficient. We then use the best hyperparameter combinations to train our models on the 50k and 500k datasets to learn the posterior for the normalized parameter in the log-domain. After learning the maps, we used their inverses, the training data mean and standard deviation, and the
\begin{table}
\begin{tabular}{|l||c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{joint} & \multicolumn{3}{c|}{conditional} \\ \hline dataset name & Parkinson’s & white wine & red wine & concrete & energy & yacht \\ dimensionality & \(d=15\) & \(d=11\) & \(d=11\) & \(d=9\) & \(d=10\) & \(d=7\) \\ no. samples & \(N=5875\) & \(N=4898\) & \(N=1599\) & \(N=1030\) & \(N=768\) & \(N=308\) \\ \hline \hline ATM [4] & \(2.8\pm 0.4\) & \(11.0\pm 0.2\) & \(9.8\pm 0.4\) & \(3.1\pm 0.1\) & \(1.5\pm 0.1\) & \(0.5\pm 0.2\) \\ PCP-Map (best) & 1.59\(\pm\)0.08 & 10.81\(\pm\)0.15 & 8.80\(\pm\)0.11 & 0.19\(\pm\)0.14 & -1.15\(\pm\)0.08 & -2.76\(\pm\)0.18 \\ PCP-Map (median) & 1.96\(\pm\)0.08 & 10.99\(\pm\)0.24 & 9.90\(\pm\)0.53 & 0.28\(\pm\)0.07 & -1.02\(\pm\)0.16 & -2.42\(\pm\)0.25 \\ PCP-Map (worst) & 2.34\(\pm\)0.09 & 12.53\(\pm\)2.68 & 11.08\(\pm\)0.72 & 1.18\(\pm\)0.54 & 0.30\(\pm\)0.63 & -0.27\(\pm\)1.28 \\ COT-Flow (best) & **1.58\(\pm\)0.09** & **10.45\(\pm\)0.08** & **8.54\(\pm\)0.13** & **0.15\(\pm\)0.05** & **-1.19\(\pm\)0.09** & **-3.14\(\pm\)0.14** \\ COT-Flow (median) & 2.72\(\pm\)0.34 & 10.73\(\pm\)0.05 & 8.71\(\pm\)0.12 & 0.21\(\pm\)0.04 & -0.83\(\pm\)0.05 & -2.77\(\pm\)0.12 \\ COT-Flow (worst) & 3.27\(\pm\)0.15 & 11.04\(\pm\)0.28 & 9.00\(\pm\)0.05 & 0.35\(\pm\)0.04 & -0.56\(\pm\)0.04 & -2.38\(\pm\)0.11 \\ \hline \end{tabular}
\end{table}
Table 4: Mean negative log-likelihood comparisons between ATM, PCP-Map, and COT-Flow on test data. For our approaches, we report the best, median, and worst results over different hyperparameter combinations and five training runs. Lower is better, and we highlight the best results in bold.
log transformations to yield parameter samples in the original domain.
The SMC-ABC algorithm finds parameters that match the observations with respect to a selected distance function by gradually reducing a tolerance \(\epsilon>0\), which leads to samples from the true posterior exactly as \(\epsilon\to 0\). For our experiment, we allow \(\epsilon\) to converge to 0.1 for the first true parameter and 0.15 for the second.
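For intuition, a single round of the simplest ABC variant, rejection ABC, looks as follows; SMC-ABC extends this by annealing \(\epsilon\) over a sequence of such rounds and reweighting the surviving particles.

```python
import numpy as np

def abc_rejection(simulate, prior_sample, y_obs, eps, n=200, rng=None):
    """Keep prior draws whose simulated summaries land within eps of the
    observation (one round of rejection ABC; function signatures are
    hypothetical placeholders for a simulator and a prior sampler)."""
    rng = np.random.default_rng(rng)
    kept = []
    while len(kept) < n:
        x = prior_sample(rng)
        if np.linalg.norm(simulate(x, rng) - y_obs) < eps:
            kept.append(x)
    return np.array(kept)
```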
We evaluate our approaches for maximum-a-posteriori (MAP) estimation and posterior sampling. We consider a true parameter \(\mathbf{x}^{*}=(0.01,0.5,1,0.01)^{\top}\), which was chosen to give rise to oscillatory behavior in the population time series. Given one observation \(\mathbf{y}^{*}\sim\pi(\mathbf{y}|\mathbf{x}^{*})\), we first identify the MAP point by maximizing the estimated log-likelihoods provided by our approaches. Then, we generate 2000 samples from the approximate posterior \(\pi(\mathbf{x}|\mathbf{y}^{*})\) using our approaches. Figure 2 presents one and two-dimensional marginal histograms and scatter plots of the MAP point
Figure 2: Posterior samples in log scale and MAP point quality comparisons between proposed approaches and ABC with \(\mathbf{x}^{*}=(0.01,0.5,1,0.01)^{\top}\). **Left**: posterior samples generated by proposed approaches trained on 50k samples. **Middle**: posterior samples generated by proposed approaches trained on 500k samples. The red dots and bars correspond to \(\mathbf{x}^{*}\), and the black crosses and bars correspond to the MAP point. **Right**: posterior samples from ABC.
and samples, compared against 2000 samples generated by the SMC-ABC algorithm from [7]. Our approaches yield samples tightly concentrated around the MAP points that are close to the true parameter \(\mathbf{x}^{*}\).
To provide more evidence that our learning approaches indeed solve the amortized problem, Figure 3 shows the MAP points and approximate posterior samples generated from a new random observation \(\mathbf{y}^{*}\) corresponding to the true parameter \(\mathbf{x}^{*}=(0.02,0.02,0.02,0.02)^{\top}\). We observe similar concentrations of the MAP point and posterior samples around the true parameter and similar correlations learned by the generative model and ABC, for example, between the third and other parameters.
Efficiency-wise, PCP-Map and COT-Flow yield approximations similar to ABC at a fraction of its computational cost. The latter requires approximately 5 or 18 million model simulations for each conditioning observation, while the learned approaches use the same 50 thousand simulations to amortize over the observation. These savings generally offset the hyperparameter search and training time for the proposed approaches, which is typically less than half an hour per full training run on a GPU. For comparison, SMC-ABC took 15 days to reach \(\epsilon=0.1\) for \(\mathbf{x}^{*}=(0.01,0.5,1,0.01)^{\top}\).
To further validate that we approximate the posterior distribution as well, Figure 4 compares the population time series generated from approximate posterior samples given by our approaches and SMC-ABC. This corresponds to comparing the posterior predictive distributions for the states \(S(t)\) given different posterior approximations for the parameters. While the simulations have inherent stochasticity, we observe that the generated posterior parameter samples from all approaches recover the expected oscillatory time series simulated from the true parameter \(\mathbf{x}^{*}\), especially at earlier times.
In the experiments above, we employed \(n_{t}=32\) to generate posterior samples during testing for COT-Flow. However, one can also decrease \(n_{t}\) after training to generate samples faster without sacrificing much accuracy, as shown in Figure 5. In this way, one can achieve faster sampling with COT-Flow than with PCP-Map, as demonstrated in Table 5. To establish this comparison, we increase the \(l\)-BFGS tolerance when sampling with PCP-Map, which has a similar effect on sampling accuracy to decreasing the number of time steps for COT-Flow.
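The role of \(n_{t}\) can be illustrated with a generic forward-Euler integrator for a velocity field; the signature of `v` below is a hypothetical stand-in for the trained network.

```python
import numpy as np

def flow_sample(v, z0, y, n_t=8):
    """Generate a sample by integrating dz/dt = v(z, y, t) from t=0 to t=1
    with n_t forward-Euler steps; n_t is the knob one can lower after
    training to trade a little accuracy for faster sampling."""
    z, h = z0, 1.0 / n_t
    for k in range(n_t):
        z = z + h * v(z, y, k * h)  # one explicit Euler step
    return z
```

For the linear field \(v=z\), the exact flow at \(t=1\) is \(e\cdot z_0\), and the Euler iterate converges to it as \(n_t\) grows.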
### 1D Shallow Water Equations
The shallow water equations model wave propagation through shallow basins described by the depth profile parameter \(\mathbf{x}\in\mathbb{R}^{100}\), which is discretized at 100 equidistant points in space. After solving the equations over a time grid with 100 cells, the resulting wave amplitudes form the 10k-dimensional raw observations. As in [40], we perform a 2D Fourier transform on the raw observations and concatenate the real and imaginary parts, since the waves are periodic. We define this simulation-then-Fourier-transform process as our forward model
\begin{table}
\begin{tabular}{||c c||c c||} \hline \hline \multicolumn{2}{||c||}{COT-Flow Sampling (s)} & \multicolumn{2}{c||}{PCP-Map Sampling (s)} \\ \hline \hline \(n_{t}\)=32 & 0.538 \(\pm\) 0.003 & tol=1e-6 & 0.069 \(\pm\) 0.004 \\ \(n_{t}\)=16 & 0.187 \(\pm\) 0.004 & tol=1e-5 & 0.066 \(\pm\) 0.004 \\ \(n_{t}\)=8 & 0.044 \(\pm\) 0.000 & tol=1e-4 & 0.066 \(\pm\) 0.004 \\ \(n_{t}\)=4 & 0.025 \(\pm\) 0.001 & tol=1e-3 & 0.066 \(\pm\) 0.004 \\ \(n_{t}\)=2 & 0.012 \(\pm\) 0.000 & tol=1e-2 & 0.059 \(\pm\) 0.001 \\ \(n_{t}\)=1 & 0.006 \(\pm\) 0.000 & tol=1e-1 & 0.049 \(\pm\) 0.004 \\ \hline \end{tabular}
\end{table}
Table 5: Sampling efficiency comparisons between PCP-Map and COT-Flow in terms of GPU time in seconds (s). We report the mean and standard deviation over five runs, respectively.
and denote it as \(\Psi(\mathbf{x})\). Additive Gaussian noise is then introduced to the outputs of \(\Psi\), which gives us the observations \(\mathbf{y}=\Psi(\mathbf{x})+0.25\boldsymbol{\epsilon}\), where \(\mathbf{y}\in\mathbb{R}^{200\times 100}\) and \(\boldsymbol{\epsilon}_{i,j}\sim\mathcal{N}(0,1)\). We aim to use the proposed approaches to learn the posterior \(\pi(\mathbf{x}|\mathbf{y})\).
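The observation step can be sketched as follows, assuming 100×100 raw wave amplitudes come out of the solver; the FFT-then-stack construction mirrors \(\mathbf{y}=\Psi(\mathbf{x})+0.25\boldsymbol{\epsilon}\).

```python
import numpy as np

def observe(amplitudes, noise_scale=0.25, rng=None):
    """2D FFT of the raw wave amplitudes, real and imaginary parts stacked
    along the first axis, plus additive Gaussian noise."""
    rng = np.random.default_rng(rng)
    f = np.fft.fft2(amplitudes)
    y = np.concatenate([f.real, f.imag], axis=0)  # shape (200, 100)
    return y + noise_scale * rng.standard_normal(y.shape)
```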
We follow instructions from [40] to set up the experiment and obtain 100k samples from the joint distribution \(\pi(\mathbf{x},\mathbf{y})\) as the training dataset using the provided scripts. We use the prior distribution
\[\pi(\mathbf{x})=\mathcal{N}(10\cdot\mathbf{1}_{100},\boldsymbol{\Sigma})\ \ \text{with}\ \ \boldsymbol{\Sigma}_{\text{ij}}=\sigma\exp\left(\frac{-(\text{i}-\text{j})^{2}}{2\tau}\right),\ \sigma=15,\ \tau=100.\]
Using a principal component analysis (PCA), we analyze the intrinsic dimensions of \(\mathbf{x}\) and \(\mathbf{y}\). To
Figure 3: Posterior samples in log scale and MAP point quality comparisons between proposed approaches and ABC with \(\mathbf{x}^{*}=(0.02,0.02,0.02,0.02)^{\top}\). **Left**: posterior samples generated by proposed approaches trained on 50k samples. **Middle**: posterior samples generated by proposed approaches trained on 500k samples. The red dots and bars correspond to \(\mathbf{x}^{*}\), and the black crosses and bars correspond to the MAP point. **Right**: posterior samples from ABC.
ensure that the large additive noise \(0.25\mathbf{\epsilon}\) does not affect our analysis, we first study another set of 100k noise-free prior predictives using the estimated covariance \(\text{Cov}(\mathbf{Y})\approx\frac{1}{N-1}\mathbf{Y}^{\top}\mathbf{Y}\). Here, \(\mathbf{Y}\in\mathbb{R}^{100000\times 20000}\) stores the samples row-wise in a matrix. The top 3500 modes explain around 96.5% of the variance. A similar analysis on the noise-present training dataset shows that the top 3500 modes, in this case, only explain around 75.6% of the variance due to the added noise. To address the rank deficiency, we construct a projection matrix \(\mathbf{V}_{\text{proj}}\) using the top 3500 eigenvectors of \(\text{Cov}(\mathbf{Y})\) and obtain the projected observations \(\mathbf{y}_{\text{proj}}=\mathbf{V}_{\text{proj}}^{\top}\mathbf{y}\) from the training datasets. We
Figure 4: Posterior predictives quality comparisons between proposed approaches and ABC with \(\mathbf{x}^{\star}=(0.01,0.5,1,0.01)^{\top}\). The solid lines in each plot represent a simulated time series using \(\mathbf{x}^{\star}\). Dotted lines represent time series simulated using ten randomly selected posterior samples from the 2000. **Left**: posterior predictives from proposed approaches trained on 50k samples. **Middle**: posterior predictives from proposed approaches trained on 500k samples. **Right**: posterior predictives from ABC.
then perform a similar analysis for \(\mathbf{x}\) and discover that the top 14 modes explain around 99.9% of the variance. Hence, we obtain \(\mathbf{x}_{\text{proj}}\in\mathbb{R}^{14}\) as the projected parameters.
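The projection step can be sketched with a small PCA helper; here the covariance eigenvectors are obtained from an SVD of the centered data, which is equivalent to diagonalizing \(\text{Cov}(\mathbf{Y})\) but numerically more stable.

```python
import numpy as np

def pca_project(Y, k):
    """Project samples (rows of Y) onto the top-k eigenvectors of the
    sample covariance. Returns the projected samples, the projection
    matrix, and the fraction of variance the top-k modes explain."""
    Yc = Y - Y.mean(0)
    _, s, Vt = np.linalg.svd(Yc, full_matrices=False)
    evr = (s[:k] ** 2).sum() / (s ** 2).sum()   # explained-variance ratio
    V_proj = Vt[:k].T                           # columns = top-k directions
    return Yc @ V_proj, V_proj, evr
```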
We then trained our approaches to learn the reduced posterior, \(\pi_{\text{proj}}(\mathbf{x}_{\text{proj}}|\mathbf{y}_{\text{proj}})\). For comparison, we trained the flow-based NPE approach to learn the same posterior, exactly as was done in [40]. For COT-Flow, we add a 3-layer fully connected neural network with tanh activation to embed \(\mathbf{y}_{\text{proj}}\). To pre-process the training dataset, we randomly select 5% of the dataset as the validation set and use the rest as the training set. We project both \(\mathbf{x}\) and \(\mathbf{y}\) and then normalize them by subtracting the empirical mean and dividing by the empirical standard deviation of the training data.
We employ the sample space presented in Table 6 for the pilot runs. For COT-Flow, we select a \(w\) sample space with larger values for maximum expressiveness and allow multiple optimization steps over one batch, randomly selected from \(\{8,16\}\). We then use the best hyperparameter combination based on the validation loss for the full training. For NPE, we used the sbi package [50] and the scripts provided by [40].
We first compare the accuracy of the MAP points, posterior samples, and posterior predictives across the three approaches. The MAP points are obtained using the same method as in subsection 6.2. For posterior sampling, we first sample a "ground truth" \(\mathbf{x}^{*}\sim\pi(\mathbf{x})\) and obtain the associated ground truth reduced observation \(\mathbf{y}_{\text{proj}}^{*}=\mathbf{V}_{\text{proj}}^{\top}(\Psi(\mathbf{x}^{* })+0.25\epsilon)\). Then, we use the three approaches to sample from the posterior \(\pi_{\text{proj}}(\mathbf{x}_{\text{proj}}|\mathbf{y}_{\text{proj}}^{*})\). This allows us to obtain approximate samples \(\mathbf{x}\sim\pi(\mathbf{x}|\mathbf{y}^{*})\). The posterior predictives are obtained by solving the forward model for the generated parameters. Through Figure 6, we observe that the MAP points, posterior samples, and predictives produced by PCP-Map and COT-Flow are more concentrated around the ground truth than those produced by NPE.
We perform the simulation-based calibration (SBC) analysis described in [40, App. D.2] to further assess the three approaches' accuracy; see Figure 7. We can see that, while they are all well calibrated, the cumulative density functions of the rank statistics produced by PCP-Map align almost perfectly with the CDF of a uniform distribution besides a few outliers.
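A minimal version of the SBC rank computation, shown for a scalar parameter (the cited analysis applies it per parameter dimension); the three function signatures are hypothetical placeholders.

```python
import numpy as np

def sbc_ranks(prior_sample, simulate, posterior_sample, n_trials=200, L=100, rng=None):
    """Draw a ground truth from the prior, simulate an observation, draw L
    posterior samples, and record the rank of the truth among them. For a
    calibrated posterior the ranks are uniform on {0, ..., L}."""
    rng = np.random.default_rng(rng)
    ranks = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        x = prior_sample(rng)
        y = simulate(x, rng)
        ranks[i] = np.sum(posterior_sample(y, L, rng) < x)
    return ranks
```

Plotting the empirical CDF of these ranks against the uniform CDF gives exactly the curves shown in the SBC figure.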
Finally, we analyze the three approaches' efficiency in terms of the number of forward model evaluations. We train the models using the best hyperparameter combinations from the pilot run on two extra datasets with 50k and 20k samples. We compare the posterior samples' mean and standard deviation against \(\mathbf{x}^{*}\) across three approaches trained on the 100k, 50k, and 20k sized datasets as presented in Figure 8. We see that PCP-Map and COT-Flow can generate posterior samples centered more closely around the ground truth than NPE using only 50k training samples, which
\begin{table}
\begin{tabular}{|l||c||c|} \hline Hyperparameters & PCP-Map & COT-Flow \\ \hline \hline Batch size & \{64, 128, 256\} & \(\{2^{7},2^{8},2^{9},2^{10}\}\) \\ Learning rate & \(\{10^{-2},10^{-3},10^{-4}\}\) & \(\{10^{-2},10^{-3},10^{-4}\}\) \\ Feature layer width, \(w\) & \{32, 64, 128, 256, 512\} & \{512, 1024\} \\ Context layer width, \(u\) & \(\{u=w\}\) & \\ Embedding feature width, \(w_{y}\) & & \(\{2^{5},2^{6},2^{7}\}\) \\ Embedding output width, \(w_{yout}\) & & \(\{2^{5},2^{6},2^{7}\}\) \\ Number of layers, \(K\) & \{2, 3, 4, 5, 6\} & \{2\} \\ Number of time steps, \(n_{t}\) & & \{8, 16\} \\ \([\log(\alpha_{1}),\log(\alpha_{2})]\) & & \([\mathcal{U}(10^{2},10^{5}),\,\mathcal{U}(10^{2},10^{5})]\) \\ \hline \end{tabular}
\end{table}
Table 6: Hyperparameter sample space for the 1D shallow water equations experiment.
translates to higher computational efficiency since fewer forward model evaluations are required.
### Comparing PCP-Map to amortized CP-Flow
We conduct this comparative experiment using the shallow water equations problem for its high dimensionality. We include the more challenging task of learning \(\pi(\mathbf{x}|\mathbf{y}_{\mathrm{proj}})\), obtained without projecting the parameter, to test the two approaches most effectively. We followed [20] and its associated GitHub repository as closely as possible to implement the amortized CP-Flow. To ensure as fair of a comparison as possible, we used the hyperparameter combination from the amortized CP-Flow pilot run for learning \(\pi(\mathbf{x}|\mathbf{y}_{\mathrm{proj}})\) and the combination from the PCP-Map pilot run for learning \(\pi_{\mathrm{proj}}(\mathbf{x}_{\mathrm{proj}}|\mathbf{y}_{\mathrm{proj}})\). Note that for learning \(\pi(\mathbf{x}|\mathbf{y}_{\mathrm{proj}})\), we limited the learning rate to \(\{0.0001\}\) for which we observed reasonable convergence.
In the experiment, we observed that amortized CP-Flow's Hessian vector product function gave NaNs consistently when computing the stochastic log-determinant estimation. Thus, we resorted to exact computation for the pilot and training runs. PCP-Map circumvents this as it uses exact log-determinant computations. Each model was trained five times to capture possible variance.
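The contrast between the two log-determinant computations can be illustrated in a toy setting. The series-based stochastic estimator below is one common construction (an assumption; it is not necessarily the one CP-Flow uses) and is only valid for \(\|A\|<1\), which hints at why such estimators can break down numerically outside their safe regime.

```python
import numpy as np

def logdet_exact(B):
    """Exact log-determinant via slogdet, as in PCP-Map's computation."""
    sign, ld = np.linalg.slogdet(B)
    assert sign > 0, "expects a positive determinant"
    return ld

def logdet_hutchinson_series(A, n_terms=20, n_probe=64, rng=None):
    """Stochastic estimate of log det(I + A) from the power series
    sum_k (-1)^(k+1) tr(A^k) / k, with Hutchinson (Rademacher) probes
    for the traces. Only valid when ||A|| < 1."""
    rng = np.random.default_rng(rng)
    d = A.shape[0]
    V = rng.choice([-1.0, 1.0], size=(d, n_probe))  # Rademacher probe vectors
    est, W = 0.0, V.copy()
    for k in range(1, n_terms + 1):
        W = A @ W                                   # W now holds A^k V
        tr_k = np.mean(np.sum(V * W, axis=0))       # Hutchinson estimate of tr(A^k)
        est += (-1.0) ** (k + 1) * tr_k / k
    return est
```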
For comparison, we use the exact mean negative log-likelihood as the metric for accuracy and
Figure 6: Prior, MAP point, posterior samples, and predictives quality comparisons between PCP-Map, COT-Flow, and NPE. Each row: **left**: prior (first row) or posterior samples in gray and \(\mathbf{x}^{*}\) in black. **middle**: 2D image of the wave amplitudes simulated using \(\mathbf{x}^{*}\) (first row) or posterior samples for 100 time grids. **right**: wave amplitudes simulated using 50 prior (first row) or posterior samples at \(t\) = 22, 69, 94. Here \(\mathbf{y}^{*}\) is plotted in black.
GPU time as the metric for computational cost. We only record the time for loss function evaluations and optimizer steps for both approaches. The comparative results are presented in Table 7. We see that PCP-Map and amortized CP-Flow reach similar training and validation accuracy. However, PCP-Map takes roughly 7 and 3 times less GPU time, respectively, on average, to achieve that accuracy than amortized CP-Flow. Possible reasons for PCP-Map's increased efficiency are its use of the ReLU non-negative projection, vectorized Hessian computation, removal of activation normalization, and gradient clipping.
## 7 Discussion
We present two measure transport approaches, PCP-Map and COT-Flow, that learn conditional distributions by approximately solving the static and dynamic conditional optimal transport problems, respectively. Specifically, penalizing transport costs in the learning problem yields unique optimal transport maps, known as conditional Brenier maps, between the target conditional distribution and the reference. Furthermore, for PCP-Map, minimizing the quadratic transport cost motivates us to exploit the structure of the Brenier map by constraining the search to monotone maps given as gradients of convex potentials. Similarly, for COT-Flow this choice leads to a conservative vector field, which we enforce by design.
Our comparison to the SMC-ABC approach for the stochastic Lotka-Volterra problem shows common trade-offs when selecting conditional sampling approaches. Advantages of the ABC approach include its strong theoretical guarantees and well-known guidelines for choosing the involved hyper-parameters (annealing, burn-in, number of samples to skip to reduce correlation, etc.). The disadvantages are that ABC typically requires a large number of likelihood evaluations to produce (approximately) i.i.d. samples and produce low-variance estimators in high-dimensional parameter
\begin{table}
\begin{tabular}{|l||c c||c c||} \hline & \multicolumn{2}{c||}{\(\pi(\mathbf{x}|\mathbf{y}_{\mathrm{proj}})\)} & \multicolumn{2}{c||}{\(\pi_{\mathrm{proj}}(\mathbf{x}_{\mathrm{proj}}|\mathbf{y}_{\mathrm{proj}})\)} \\ \hline Approach & PCP-Map & CP-Flow & PCP-Map & CP-Flow \\ Number of Parameters & \(\sim\)8.9M & \(\sim\)5.7M & \(\sim\)2.5M & \(\sim\)1.4M \\ Training Mean NLL & \(-540.3\pm 4.5\) & \(-534.2\pm 9.1\) & \(-5.8\pm 0.6\) & \(-6.6\pm 0.3\) \\ Validation Mean NLL & \(-519.0\pm 4.1\) & \(-506.4\pm 5.1\) & \(8.5\pm 0.1\) & \(6.1\pm 0.2\) \\ \hline \hline Training(s) & **6652.2\(\pm\)1030.3** & 47207.4\(\pm\)7221.0 & **706.9\(\pm\)61.0** & 1989.4\(\pm\)10.5 \\ Validation(s) & **108.5\(\pm\)17.1** & 232.1\(\pm\)35.5 & **6.6\(\pm\)0.6** & 10.3\(\pm\)0.1 \\ \hline Total(s) & **6760.7\(\pm\)1047.4** & 47439.5\(\pm\)7256.4 & **713.5\(\pm\)61.5** & 1999.7\(\pm\)10.5 \\ \hline \end{tabular}
\end{table}
Table 7: Computational cost comparisons between amortized CP-Flow and PCP-Map in terms of GPU time in seconds (s). We report the mean and standard deviation of the training and validation mean NLL, the mean and standard deviation of the GPU times over five training runs, and the number of parameters in million(M), respectively.
Figure 7: SBC Analysis for PCP-Map, COT-Flow and NPE. Each colored line represents the empirical cumulative density function (CDF) of the SBC rank associated with one posterior sample dimension.
spaces; the computation is difficult to parallelize in the sequential Monte Carlo setting, and the sampling process is not amortized over the conditioning variable \(\mathbf{y}^{*}\), i.e., it needs to be recomputed whenever \(\mathbf{y}^{*}\) changes.
Comparisons to the flow-based NPE method for the high-dimensional 1D shallow water equations problem illustrate the superior numerical accuracy achieved by our approaches. In terms of numerical efficiency, the PCP-Map approach, while providing a working computational scheme to the static COT problem, achieves significantly faster convergence than the amortized CP-Flow approach.
Learning posterior distributions using our techniques or similar measure transport approaches is attractive for real-world applications where samples from the joint distributions are available (or can be generated efficiently), but evaluating the prior density or the likelihood model is intractable. Common examples where a non-intrusive approach for conditional sampling can be fruitful include inverse problems where the predictive model involves stochastic differential equations (as in subsection 6.2) or legacy code and imaging problems where only prior samples are available.
Figure 8: Posterior sample quality comparisons between PCP-Map, COT-Flow, and NPE trained using 20k, 50k, and 100k samples. We report the relative normed error between the posterior sample mean (over 100 samples) and the ground truth parameter. The gray bands represent regions within one standard deviation of the means.
Given the empirical nature of our study, we paid particular attention to the setup and reproducibility of our numerical experiments. To show the robustness of our approaches to hyperparameters and to provide guidelines for hyperparameter selection in future experiments, we report the results of a simple two-step heuristic that randomly samples hyperparameters and identifies the most promising configurations after a small number of training steps. We stress that the same search space of hyperparameters is used across all numerical experiments.
The results of the shallow water dataset subsection 6.3 indicate that both methods can learn high-dimensional COT maps. Here, the number of effective parameters in the dataset was \(n=14\), and the number of effective measurements was \(m=3500\). Particularly worth noting is that PCP-Map, on average, converges in around 715 seconds on this challenging high-dimensional problem.
Since both approaches perform similarly in our numerical experiments, we want to comment on some distinguishing factors. One advantage of the PCP-Map approach is that it only depends on three hyperparameters (feature width, context width, and network depth), and we observed consistent performance for most choices. This feature is particularly attractive when experimenting with new problems. The limitation is that a carefully designed network architecture needs to be imposed to guarantee partial convexity of the potential. On the other hand, the value function (i.e., the velocity field) in COT-Flow can be designed almost arbitrarily. Thus, the latter approach may be beneficial when new data types and their invariances need to be modeled, e.g., permutation invariances or symmetries, that might conflict with the network architecture required by the direct transport map. Both approaches also differ in terms of their numerical implementation. Training the PCP-Map via backpropagation is relatively straightforward, but sampling requires solving a convex program, which can be more expensive than integrating the ODE defined by the COT-Flow approach, especially when that model is trained well, and the velocity is constant along trajectories. Training the COT-Flow model, however, is more involved due to the ODE constraints.
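The convex program behind PCP-Map sampling can be sketched as follows; `grad_G` is a hypothetical handle to the gradient of the learned partially convex potential with \(\mathbf{y}\) fixed, and plain gradient descent stands in for the \(l\)-BFGS solver used in practice.

```python
import numpy as np

def invert_gradient_map(grad_G, z, x0, lr=0.5, n_iter=200):
    """Sampling solves min_x G(x, y) - z.x, a convex program whose
    first-order optimality condition is grad_x G(x, y) = z."""
    x = x0
    for _ in range(n_iter):
        x = x - lr * (grad_G(x) - z)  # descend x -> G(x) - z.x
    return x
```

For the quadratic potential \(G(x)=\tfrac12\|x\|^2\) the solution is simply \(x=z\), which the iteration recovers geometrically fast.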
Although our numerical evidence supports the effectiveness of our methods, important remaining limitations of our approaches include the absence of theoretical guarantees for sample efficiency and optimization. In particular, statistical complexity and approximation theoretic analysis for approximating COT maps using PCP-Map or COT-Flow in a conditional sampling context will be beneficial. We also point out that it can be difficult to quantify the produced samples' accuracy without a benchmark method.
# Certified Invertibility in Neural Networks via Mixed-Integer Programming

Tianqi Cui, Thomas Bertalan, George J. Pappas, Manfred Morari, Ioannis G. Kevrekidis, Mahyar Fazlyab

2023-01-27 · arXiv:2301.11783v2 · http://arxiv.org/abs/2301.11783v2
###### Abstract
Neural networks are notoriously vulnerable to adversarial attacks - small imperceptible perturbations that can change the network's output drastically. In the reverse direction, there may exist large, meaningful perturbations that leave the network's decision unchanged (excessive invariance, noninvertibility). We study the latter phenomenon in two contexts: (a) discrete-time dynamical system identification, as well as (b) calibration of the output of one neural network to the output of another (neural network matching). We characterize noninvertibility through the lens of mathematical optimization, in which the global solution quantifies the "safety" of the network predictions: their distance from the noninvertibility boundary. For ReLU networks and \(L_{p}\) norms (\(p=1,2,\infty\)), we formulate these optimization problems as mixed-integer programs (MIPs) that apply to neural network approximators of dynamical systems. We also discuss the applicability of our results to invertibility certification in transformations between neural networks (e.g. at different levels of pruning).
## 1 Introduction
Despite achieving high performance in a variety of classification and regression tasks, neural networks are not always guaranteed to satisfy certain desired properties after training. A prominent example is adversarial robustness. Neural networks can be overly sensitive to carefully designed input perturbations (Szegedy et al. (2013)). This intriguing property holds in the reverse direction too. In classification problems, neural networks can also be excessively insensitive to large perturbations, causing two semantically different inputs (e.g., images) to be classified in the same category (Jacobsen et al. (2018)). Indeed, a fundamental trade-off has been shown between adversarial robustness and excessive invariance (Tramer et al. (2020)), which is mathematically related to the noninvertibility of the map defined by the neural network.
To mitigate noninvertibility, and hence excessive invariance, one can consider invertible-by-design architectures. Invertible neural networks (INNs) have been used to design generative models (Donahue and Simonyan (2019)), implement memory-saving gradient computation (Gomez et al. (2017)), and solve inverse problems (Ardizzone et al. (2018)). However, commonly-used INN architectures suffer from exploding inverses; in this paper, we therefore consider the problem of certifying the (possible) noninvertibility of conventional neural networks after training. Specifically, we study two relevant invertibility problems: _(i) local invertibility of neural networks:_ given a dynamical system whose time-\(\tau\) map is parameterized by a neural network, we verify whether it is locally invertible around a certain input (or trajectory) and compute the largest region of local invertibility; and _(ii) local invertibility of transformations between neural networks:_ we certify whether two (assumed "equivalent") neural networks (e.g., related through pruning) can be transformed (i.e. calibrated) to each other locally via an invertible transformation. We develop mathematical tools based on mixed-integer linear/quadratic programming for the characterization of noninvertibility that are applicable to both (a) neural network approximators of dynamics, as well as to (b) transformations between neural networks.
**Related work.** Noninvertibility in neural networks was studied in the 1990s (Gicquel et al. (1998); Rico-Martinez et al. (1993)); more recently, several papers focus on the global invertibility property of neural networks (see Chang et al. (2018); Teshima et al. (2020); Chen et al. (2018); MacKay et al. (2018); Jaeger (2014)). Analyzing the invertibility of neural networks (Behrmann et al. (2018)) and constructing invertible architectures arises in many contexts, such as generative modeling (Chen et al. (2019)), inverse problems (Ardizzone et al. (2019)), and probabilistic inference (Radev et al. (2020)). Neural networks that are invertible by design have been developed for these applications. Some of these networks (e.g., RevNet (Gomez et al. (2017)), NICE (Dinh et al. (2015)), real NVP (Dinh et al. (2017))) partition the input domains and use affine or coupling transformations as the forward pass, keeping the Jacobians (block-)triangular with nonzero diagonal elements, resulting in nonzero determinants; others, like i-ResNet (Behrmann et al. (2019)), have no analytical form for the inverse dynamics, yet their finite bi-Lipschitz constants can be derived: both approaches guarantee global invertibility. A comprehensive analysis can be found in (Behrmann et al. (2021); Song et al. (2019)). However, a theoretical understanding of the expressiveness of these architectures, as well as of their universal approximation properties, is still incomplete. Compared to standard networks like multi-layer perceptrons (MLPs) or convolutional neural networks (CNNs), these invertible neural networks (INNs) are computationally demanding. Neural ODEs (Chen et al. (2018)) use an alternative method to compute gradients for backward propagation; i-ResNet (Behrmann et al. (2019)) enforces restrictions on the norm of every weight matrix during training. In most cases, the input domain of interest is a small subset of the whole space.
For example, the grey-scale image domain in computer vision problems is \([0,1]^{H\times W}\) (where \(H\) and \(W\) are height and width of images), and it is unnecessary to consider the whole space \(\mathbb{R}^{H\times W}\). We thus focus on _local invertibility_: how do we know if our network is invertible on a given finite domain, and if not, how do we quantify noninvertibility?
Beyond classification problems, noninvertibility can also lead to catastrophic consequences in regression, and more specifically in dynamical systems prediction. The flow of smooth differential equations is invertible when it exists; yet traditional numerical integrators used to approximate them can be noninvertible. Neural network approximations of the corresponding time-\(\tau\) map also suffer
from this potential pathology. In this paper, we initially study noninvertibility in the context of dynamical systems predictions.
## 2 Local invertibility of dynamical systems and neural networks
Continuous-time dynamical systems, in particular autonomous ordinary differential equations (ODEs) have the form \(dX(t)/dt=f(X(t)),X(t=t_{0})=X_{0}\), where \(X(t)\in\mathbb{R}^{m}\) are the state variables of interest; \(f:\mathbb{R}^{m}\mapsto\mathbb{R}^{m}\) relates the states to their time derivatives and \(X_{0}\in\mathbb{R}^{m}\) is the initial condition at \(t_{0}\). If \(f\) is uniformly Lipschitz continuous in \(X\) and continuous in \(t\), the Cauchy-Lipschitz theorem guarantees the existence and uniqueness of the solution.
In practice, we observe the states \(X(t)\) at discrete points in time, starting at \(t_{0}=0\). For a fixed timestep \(\tau\in\mathbb{R}^{+}\), and \(\forall n\in\mathbb{N}\), \(t_{n}=n\tau\) denotes the \(n\)-th time stamp, and \(X_{n}=X(t=t_{n})\) the corresponding state values. Now we will have:
\[X_{n+1}:=F(X_{n})=X_{n}+\int_{t_{n}}^{t_{n+1}}f(X(t))dt;\;X_{n}=F^{-1}(X_{n+1}). \tag{1}\]
This equation also works as the starting point of many numerical ODE solvers.
For the time-\(\tau\) map in (1), the inverse function theorem provides a sufficient condition for its invertibility: If \(F\) is a continuously differentiable function from an open set \(\mathcal{B}\) of \(\mathbb{R}^{m}\) into \(\mathbb{R}^{m}\), and the Jacobian determinant of \(F\) at \(p\) is non-zero, then \(F\) is invertible near \(p\). Thus, if we define the _noninvertibility locus_ as the set \(J_{0}(F)=\{p\in\mathcal{B}:\det(\mathbf{J}_{F}(p))=0\}\), then the condition \(J_{0}(F)=\emptyset\) guarantees global invertibility of \(F\) (notice that this condition is not _necessary_: the scalar function \(F(X)=X^{3}\) provides a counterexample). If \(F\) is continuous over \(\mathcal{B}\) but not everywhere differentiable, then the definition of \(J_{0}\) set should be altered to:
\[J_{0}(F)=\{p\in\mathcal{B}:\forall N_{0}(p),\exists\,p_{1},p_{2}\in N_{0}(p), p_{1}\neq p_{2},\;\text{s.t.}\;\det(\mathbf{J}_{F}(p_{1}))\det(\mathbf{J}_{F}(p_{2})) \leq 0\}\,., \tag{2}\]
the set of points where the determinant discontinuously changes sign.
**Numerical integrators are (often) noninvertible.** Numerically approximating the finite integral in (1) can introduce noninvertibility in the transformation. Here is a simple one-dimensional illustrative ODE example: \(dX/dt=f(X)=X^{2}+bX+c,\quad X(t=0)=X_{0}\), where \(b,c\in\mathbb{R}\) are two fixed parameters. The analytical solution (1) is invertible; however a forward-Euler discretization with step \(\tau\) gives
\[X_{n+1}=F(X_{n})=X_{n}+\tau(X_{n}^{2}+bX_{n}+c)\Rightarrow\tau X_{n}^{2}+(\tau b +1)X_{n}+(\tau c-X_{n+1})=0. \tag{3}\]
Given a fixed \(X_{n+1}\), Equation (3) is quadratic w.r.t. \(X_{n}\); this determines the local invertibility of \(F\) based on \(\Delta=(\tau b+1)^{2}-4\tau(\tau c-X_{n+1})\): no real root if \(\Delta<0\); one real root with multiplicity 2 if \(\Delta=0\); and two distinct real roots if \(\Delta>0\). In practice, one uses small timesteps \(\tau\ll 1\) for accuracy/stability, leading to the last case: there will always exist a solution \(X_{n}\) close to \(X_{n+1}\), and a second preimage, far away from the region of our interest, and arguably physically irrelevant (this second \(X_{n}\rightarrow-\infty\) as \(\tau\to 0\)). On the other hand, as \(\tau\) grows, the two roots move closer to each other, \(J_{0}(F)\) moves close to the regime of our simulations, and noninvertibility can have visible implications on the predicted dynamics. Thus, choosing a small timestep in explicit integrators guarantees desirable accuracy, and simultaneously _practically_ mitigates noninvertibility pathologies in the dynamics.
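This preimage structure is easy to verify numerically. The sketch below (a minimal numpy illustration; the parameter values \(b=1\), \(c=-2\) and the target state \(0.5\) are arbitrary choices, and the helper names are ours) recovers both roots of (3) with `np.roots` and checks that the spurious preimage escapes to \(-\infty\) as \(\tau\to 0\):

```python
import numpy as np

def euler_step(x, b, c, tau):
    """One forward-Euler step of dX/dt = X^2 + b X + c."""
    return x + tau * (x**2 + b * x + c)

def preimages(x_next, b, c, tau):
    """Real solutions X_n of tau X_n^2 + (tau b + 1) X_n + (tau c - X_{n+1}) = 0,
    sorted ascending (the spurious root comes first for small tau)."""
    roots = np.roots([tau, tau * b + 1, tau * c - x_next])
    return np.sort(roots[np.abs(roots.imag) < 1e-12].real)

b, c = 1.0, -2.0
spurious = []
for tau in (0.1, 0.01, 0.001):
    far, near = preimages(0.5, b, c, tau)
    # both preimages indeed map forward onto the same state
    assert np.isclose(euler_step(near, b, c, tau), 0.5)
    assert np.isclose(euler_step(far, b, c, tau), 0.5)
    spurious.append(far)
# the second (physically irrelevant) preimage runs off to -infinity as tau shrinks
assert spurious[0] > spurious[1] > spurious[2] < -1000
```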
**Invertibility in transformations between neural networks.** Training two neural networks for the same regression or classification task practically never gives identical network parameters. Numerous criteria exist for comparing the performance of different models (e.g. accuracy in classification, or mean-squared loss in regression). Here we explore whether two different models _can be calibrated to each other_ (leading to a _de facto_ implicit function problem). Extending our analysis provides invertibility guarantees for the transformation from the output of network 1 to the output of network 2 (and vice versa).
## 3 Invertibility certification of neural networks and of transformations between them
Here we pose the verification of local invertibility of continuous functions as an optimization problem. We then show that for ReLU networks, this leads to a mixed-integer linear/quadratic program. For an integer \(q\geq 1\), we denote the \(L_{q}\)-ball centered at \(x_{c}\) by \(\mathcal{B}_{q}(x_{c},r)=\{x\in\mathbb{R}^{n}\mid\|x-x_{c}\|_{q}\leq r\}\) (the notation also holds when \(q\to+\infty\)).
**Problem 1** (Local Invertibility of NNs): _Given a neural network \(f:\mathbb{R}^{m}\mapsto\mathbb{R}^{m}\) and a point \(x_{c}\in\mathbb{R}^{m}\) in the input space, we want to find the largest radius \(r>0\) such that \(f\) is invertible on \(\mathcal{B}_{q}(x_{c},r)\), i.e., \(f(x_{1})\neq f(x_{2})\) for all \(x_{1},x_{2}\in\mathcal{B}_{q}(x_{c},r)\), \(x_{1}\neq x_{2}\)._
Another relevant problem is to verify whether, for a particular point, a nearby point exists with the same forward image. This is of particular interest in assessing invertibility of discrete-time dynamical systems around a given trajectory. We formally state the problem as follows:
**Problem 2** (Pseudo Local Invertibility of NNs): _Given a neural network \(f:\mathbb{R}^{m}\mapsto\mathbb{R}^{m}\) and a point \(x_{c}\in\mathbb{R}^{m}\) in the input space, we want to find the largest radius \(R>0\) such that \(f(x)\neq f(x_{c})\) for all \(x\in\mathcal{B}_{q}(x_{c},R)\), \(x\neq x_{c}\)._
If \(r\) and \(R\) are the optimal radii in problems 1 and 2 respectively, we must have \(r\leq R\). For Problem 1, the ball \(\mathcal{B}_{q}(x_{c},r)\) just "touches" the \(J_{0}\) set; for Problem 2, the ball \(\mathcal{B}_{q}(x_{c},R)\) extends to the "other" closest preimage of \(f(x_{c})\). Figure 1 illustrates both concepts in the one-dimensional case. For the scalar function \(y=f(x)\) and around a particular input \(x_{c}\), we show the nearest bounds of local invertibility and pseudo invertibility. The points \(Q_{1}=(x_{Q_{1}},y_{Q_{1}})\) and \(Q_{2}=(x_{Q_{2}},y_{Q_{2}})\) are the two closest turning points (elements of the \(J_{0}\) set) to the point \(C=(x_{c},y_{c})\); \(f\) is uniquely invertible (bi-Lipschitz) on the open interval \((x_{Q_{1}},x_{Q_{2}})\), so that the optimal solution to Problem 1 is: \(r=\min\{|x_{Q_{1}}-x_{c}|,|x_{Q_{2}}-x_{c}|\}=|x_{Q_{1}}-x_{c}|\). Noting that \(M_{1}=(x_{M_{1}},y_{M_{1}})\) and \(M_{2}=(x_{M_{2}},y_{M_{2}})\) are the two closest points that have the same \(y\)-coordinate as the point \(C=(x_{c},y_{c})\), the optimal solution to Problem 2 is \(R=\min\{|x_{M_{1}}-x_{c}|,|x_{M_{2}}-x_{c}|\}=|x_{M_{1}}-x_{c}|\).
We now state our first result, posing the local invertibility of a function (such as a neural network) as a constrained optimization problem.
**Theorem 1** (Local Invertibility of Continuous Functions): _Let \(f\colon\mathbb{R}^{m}\to\mathbb{R}^{m}\) be a continuous function and \(\mathcal{B}\subset\mathbb{R}^{m}\) be a compact set. Consider the following optimization problem,_
\[p^{\star}\leftarrow\max\quad\|x_{1}-x_{2}\|\quad\text{subject to }x_{1},x_{2} \in\mathcal{B},\quad f(x_{1})=f(x_{2}). \tag{4}\]
_Then \(f\) is invertible on \(\mathcal{B}\) if and only if \(p^{\star}=0\)._
**Theorem 2** (Pseudo Local Invertibility): _Let \(f\colon\mathbb{R}^{m}\to\mathbb{R}^{m}\) be a continuous function and \(\mathcal{B}\subset\mathbb{R}^{m}\) be a compact set. Suppose \(x_{c}\in\mathcal{B}\). Consider the following optimization problem,_
\[P^{\star}\leftarrow\max\quad\|x-x_{c}\|\quad\text{subject to }x\in\mathcal{B}, \quad f(x)=f(x_{c}). \tag{5}\]
_Then we have \(f(x)\neq f(x_{c})\) for all \(x\in\mathcal{B}\setminus\{x_{c}\}\) if and only if \(P^{\star}=0\)._
Note that by adding the equality constraints \(x=x_{1},x_{c}=x_{2}\) to the optimization problem (4), we obtain the optimization problem (5). Hence, we will only focus on (4) in what follows.
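Before turning to the exact mixed-integer formulation, problems (4) and (5) can be approximated by brute force in one dimension. The sketch below (illustrative only; the \(\epsilon\)-tolerance replaces the exact equality constraint, and the two scalar functions are our own toy choices) estimates \(p^{\star}\) on a grid:

```python
import numpy as np

def p_star_grid(f, lo, hi, n=401, eps=0.05):
    """Grid-search estimate of p* in (4): the largest |x1 - x2| over pairs
    whose images agree up to eps (a surrogate for the constraint f(x1) = f(x2))."""
    x = np.linspace(lo, hi, n)
    fx = f(x)
    same = np.abs(fx[:, None] - fx[None, :]) <= eps   # "f(x1) = f(x2)" up to eps
    dist = np.abs(x[:, None] - x[None, :])
    return dist[same].max()

p_noninv = p_star_grid(lambda x: x**3 - 3 * x, -2.0, 2.0)   # folds twice on [-2, 2]
p_inv = p_star_grid(lambda x: x + 0.1 * x**3, -2.0, 2.0)    # strictly increasing
assert p_noninv > 3.0   # two distant points share a value: not invertible on the box
assert p_inv < 0.1      # p* is (numerically) zero: invertible on the box
```

The same grid with the extra constraints \(x_{2}=x_{c}\) estimates \(P^{\star}\) of (5).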
**Invertibility certification of ReLU networks via mixed-integer programming.** We now show that for a given ball \(\mathcal{B}_{\infty}(x_{c},r)\) in the input space, and piecewise linear networks with ReLU activations, the optimization problem in (4) can be cast as an MILP.
A single ReLU constraint \(y=\max(0,x)\) with pre-activation bounds \(\underline{x}\leq x\leq\bar{x}\) can be equivalently described by the following mixed-integer linear constraints (Tjeng et al. (2017)):
\[y=\max(0,x),\ \underline{x}\leq x\leq\bar{x}\iff\{y\geq 0,\ y\geq x,y\leq x- \underline{x}(1-t),\ y\leq\bar{x}t,\ t\in\{1,0\}\}, \tag{6}\]
where the binary variable \(t\in\{1,0\}\) is an indicator of the activation function being active (\(y=x\)) or inactive (\(y=0\)). Now consider an \(\ell\)-layer feed-forward fully-connected ReLU network with input \(x\) given by the following recursions,
\[x^{(k+1)}=\max(W^{(k)}x^{(k)}+b^{(k)},0)\text{ for }k=0,\cdots,\ell-1;\ f(x^{(0)} )=W^{(\ell)}x^{(\ell)}+b^{(\ell)}, \tag{7}\]
Figure 1: Illustration of problems 1 (distance to invertibility boundary, red) and 2 (distance to pseudo invertibility boundary, blue).
where \(x^{(k)}\in\mathbb{R}^{n_{k}}\) gives the input to the \((k+1)\)-th layer (specifically, we have \(x=x^{(0)}\) and \(n_{0}=m\)), \(W^{(k)}\in\mathbb{R}^{n_{k+1}\times n_{k}},b^{(k)}\in\mathbb{R}^{n_{k+1}}\) are the weight matrices and bias vectors of the affine layers. We denote \(n=\sum_{k=1}^{\ell}n_{k}\) the total number of neurons. Suppose \(l^{(k)}\) and \(u^{(k)}\) are known elementwise lower and upper bounds on the input to the \((k+1)\)-th activation layer, i.e., \(l^{(k)}\leq W^{(k)}x^{(k)}+b^{(k)}\leq u^{(k)}\). Then the neural network equations are equivalent to a set of mixed-integer constraints as follows:
\[x^{(k+1)}=\max(W^{(k)}x^{(k)}+b^{(k)},0)\Leftrightarrow\begin{cases}x^{(k+1)} \geq W^{(k)}x^{(k)}+b^{(k)}\\ x^{(k+1)}\leq W^{(k)}x^{(k)}+b^{(k)}-l^{(k)}\odot(1_{n_{k+1}}-t^{(k)})\\ x^{(k+1)}\geq 0,\quad x^{(k+1)}\leq u^{(k)}\odot t^{(k)},\end{cases} \tag{8}\]
where \(t^{(k)}\in\{1,0\}^{n_{k+1}}\) is a vector of binary variables for the \((k+1)\)-th activation layer and \(1_{n_{k+1}}\) denotes vector of all \(1\)'s in \(\mathbb{R}^{n_{k+1}}\). We note that the element-wise pre-activation bounds \(\{l^{(k)},u^{(k)}\}\) can be precomputed by, for example, interval bound propagation or linear programming, assuming known bounds on the input of the neural network (Weng et al. (2018); Zhang et al. (2018); Hein and Andriushchenko (2017); Wang et al. (2018); Wong and Kolter (2018)). Since the state-of-the-art solvers for mixed-integer programming are based on branch \(\&\) bound algorithms (Land and Doig (1960); Beasley (1996)), tight pre-activation bounds will allow the algorithm to prune branches more efficiently and reduce the total running time.
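For instance, the interval-bound-propagation variant can be sketched in a few lines of numpy (a minimal sketch with random weights, not the tightest available bounds; the helper name is ours):

```python
import numpy as np

def ibp_bounds(Ws, bs, x_lo, x_hi):
    """Elementwise pre-activation bounds (l^(k), u^(k)) for a ReLU MLP,
    propagated layer by layer with interval arithmetic."""
    lo, hi, bounds = np.asarray(x_lo, float), np.asarray(x_hi, float), []
    for W, b in zip(Ws, bs):
        mid, rad = (lo + hi) / 2, (hi - lo) / 2
        center, radius = W @ mid + b, np.abs(W) @ rad
        l, u = center - radius, center + radius
        bounds.append((l, u))
        lo, hi = np.maximum(l, 0), np.maximum(u, 0)   # bounds after the ReLU
    return bounds

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((10, 2)), rng.standard_normal((10, 10)), rng.standard_normal((2, 10))]
bs = [rng.standard_normal(10), rng.standard_normal(10), rng.standard_normal(2)]
bounds = ibp_bounds(Ws, bs, x_lo=[-1, -1], x_hi=[1, 1])

# sampled pre-activations must respect the certified bounds
for _ in range(1000):
    z = rng.uniform(-1, 1, size=2)
    for (l, u), W, b in zip(bounds, Ws, bs):
        z = W @ z + b
        assert np.all(l <= z + 1e-9) and np.all(z <= u + 1e-9)
        z = np.maximum(z, 0)
```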
**Local invertibility certificates via mixed-integer programming.** Having represented the neural network equations by mixed-integer constraints, it remains to encode the objective function \(\|x_{1}-x_{2}\|\) of (4) as well as the set \(\mathcal{B}\). We assume that \(\mathcal{B}\) is an \(L_{\infty}\) ball around a given point \(x_{c}\), i.e., \(\mathcal{B}=\mathcal{B}_{\infty}(x_{c},r)\). Furthermore, for the sake of space, we only consider \(L_{\infty}\) norms for the objective function. Specifically, consider the equality \(w=\|x_{1}-x_{2}\|_{\infty}\). This equality can be encoded as mixed-integer linear constraints by introducing \(2n_{0}\) mutually exclusive indicator variables, which leads to the following MILP:
\[p^{\star}\leftarrow\max w\quad\text{subject to }\|x_{1}-x_{c}\|_{\infty}\leq r,\ \ \|x_{2}-x_{c}\|_{\infty}\leq r\] \[\text{(I)}:\ \begin{cases}(x_{1}-x_{2})\leq w1_{n_{0}}\leq(x_{1}-x_{2})+4r(1_{n_{0}}-f)\\ -(x_{1}-x_{2})\leq w1_{n_{0}}\leq-(x_{1}-x_{2})+4r(1_{n_{0}}-f^{\prime})\\ f+f^{\prime}\leq 1_{n_{0}},\ 1_{n_{0}}^{\top}(f+f^{\prime})=1,\ f,f^{\prime}\in\{0,1\}^{n_{0}}\end{cases}\] \[\text{(II)}:\ W^{(\ell)}x_{1}^{(\ell)}=W^{(\ell)}x_{2}^{(\ell)}\] \[\text{for }k=0,\cdots,\ell-1:\] \[\text{(III)}:\ \begin{cases}x_{1}^{(k+1)}\geq W^{(k)}x_{1}^{(k)}+b^{(k)},\ x_{2}^{(k+1)}\geq W^{(k)}x_{2}^{(k)}+b^{(k)}\\ x_{1}^{(k+1)}\leq W^{(k)}x_{1}^{(k)}+b^{(k)}-l^{(k)}\odot(1-t^{(k)}),\ x_{2}^{(k+1)}\leq W^{(k)}x_{2}^{(k)}+b^{(k)}-l^{(k)}\odot(1-s^{(k)})\\ x_{1}^{(k+1)}\geq 0,\ x_{2}^{(k+1)}\geq 0,\ x_{1}^{(k+1)}\leq u^{(k)}\odot t^{(k)},\ x_{2}^{(k+1)}\leq u^{(k)}\odot s^{(k)};\ t^{(k)},s^{(k)}\in\{0,1\}^{n_{k+1}}\end{cases} \tag{9}\]
where the set of constraints in (I) models the objective function \(\|x_{1}-x_{2}\|_{\infty}\), the set of constraints \(\text{(III)}\) encodes the network recursions \(x_{1}^{(k+1)}=\max(W^{(k)}x_{1}^{(k)}+b^{(k)},0)\) and \(x_{2}^{(k+1)}=\max(W^{(k)}x_{2}^{(k)}+b^{(k)},0)\) (with binary variables \(t^{(k)}\) and \(s^{(k)}\), respectively), and the constraint \(\text{(II)}\) enforces that \(f(x_{1})=f(x_{2})\). This optimization problem (9) has \(2(n_{0}+n)\) integer variables.
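As a sanity check on the building block (6), one can verify by enumeration that the four mixed-integer constraints admit exactly the ReLU input-output pairs and nothing else. The helper name and the bounds below are illustrative choices of ours:

```python
import numpy as np

def milp_relu_feasible(x, y, t, lb, ub):
    """Check the four mixed-integer constraints of the big-M encoding (6)
    for a given triple (x, y, t) with pre-activation bounds lb <= x <= ub."""
    return (y >= 0) and (y >= x) and (y <= x - lb * (1 - t)) and (y <= ub * t)

lb, ub = -3.0, 2.0
for x in np.linspace(lb, ub, 101):
    t = int(x > 0)                       # active / inactive indicator
    assert milp_relu_feasible(x, max(0.0, x), t, lb, ub)   # true ReLU pair is feasible
    # ...while any other output value is infeasible for both choices of t:
    for y_bad in (max(0.0, x) + 0.5, max(0.0, x) - 0.5):
        assert not any(milp_relu_feasible(x, y_bad, tt, lb, ub) for tt in (0, 1))
```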
**Remark 3**: _If we instead use the \(\ell_{2}\) norm both for the objective function and the ball \(\mathcal{B}_{2}(x_{c},r)\), we will arrive at a mixed-integer quadratic program (MIQP). However, (9) remains an MILP if we change them to \(\ell_{1}\) norms._
**Largest region of invertibility.** For a fixed radius \(r\geq 0\), the optimization problem (9) either verifies whether \(f\) is invertible on \(\mathcal{B}_{\infty}(x_{c},r)\) or it finds counterexamples \(x_{1}\neq x_{2}\) such that \(f(x_{1})=f(x_{2})\). Thus, we can find the maximal \(r\) by performing a bisection search on \(r\) (Problem 1).
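The bisection itself is independent of the certifier. A minimal sketch (the `certify` oracle below is a toy closed-form stand-in for one MILP solve at radius \(r\); for \(f(x)=x^{2}\) around \(x_{c}=1\), invertibility on \([1-r,1+r]\) holds exactly when \(r\leq 1\)):

```python
def largest_radius(certify, r_max, tol=1e-4):
    """Bisection for the largest r such that certify(r) holds, assuming
    certify is monotone (true for small r, false beyond a threshold)."""
    lo, hi = 0.0, r_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if certify(mid):
            lo = mid    # invertibility certified: try a larger ball
        else:
            hi = mid    # counterexample found: shrink the ball
    return lo

# toy oracle: f(x) = x^2 is invertible on [1 - r, 1 + r] iff r <= 1
certify = lambda r: r <= 1.0
r = largest_radius(certify, r_max=5.0)
assert abs(r - 1.0) < 1e-3
```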
To close this section, we consider the problem of invertibility certification in transformations between two functions (and in particular two neural networks).
**Problem 3** (Transformation Invertibility): _Given two functions \(f_{1},f_{2}\colon\mathbb{R}^{m}\to\mathbb{R}^{m}\) and a particular point \(x_{c}\in\mathbb{R}^{m}\) in the input space, we would like to find the largest ball \(\mathcal{B}_{q}(x_{c},r)\) over which the output of \(f_{2}\) is a function of the output of \(f_{1}\) (and vice versa)._
**Theorem 4**: _Let \(f_{1}\colon\mathbb{R}^{m}\to\mathbb{R}^{n}\), \(f_{2}\colon\mathbb{R}^{m}\to\mathbb{R}^{n}\) be two continuous functions and \(\mathcal{B}\subset\mathbb{R}^{m}\) be a compact set. Consider the following optimization problem,_
\[p_{12}^{\star}\leftarrow\max\quad\|f_{2}(x_{1})-f_{2}(x_{2})\|\quad\text{ subject to }x_{1},x_{2}\in\mathcal{B},\quad f_{1}(x_{1})=f_{1}(x_{2}). \tag{10}\]
_Then the output of \(f_{2}\) is a function of the output of \(f_{1}\) on \(\mathcal{B}\) if and only if \(p_{12}^{\star}=0\)._
Similar to Problem 1, we can pose Problem 3 as a mixed-integer program. Furthermore, we can also define \(p_{21}^{\star}\), whose zero value determines whether output of \(f_{1}\) is a function of output of \(f_{2}\) over \(\mathcal{B}\). It is straightforward to see that \(p_{12}^{\star}=p_{21}^{\star}=0\) if and only if output of \(f_{2}\) is an invertible function of output of \(f_{1}\).
## 4 Numerical Experiments
We now present experiments with \(\mathrm{ReLU}\) multi-layer perceptrons (MLPs), both on (a) regression problems and on (b) transformations between two \(\mathrm{ReLU}\) networks.
**1D Example.** We use a 1-10-10-1 randomly generated fully-connected neural network \(f(x)\) with \(\mathrm{ReLU}\) activations. We find the largest interval around the points \(x=-1.8;-1;-0.3\) on which \(f\) is invertible (Problem 1); we also find the largest interval around the point \(x=-1\) for which no other interior points map to \(f(-1)\) (Problem 2). The results are plotted in Figure 2, where intervals in red and blue respectively represent the optimal solutions for the two problems. The largest certified radii are 0.157, 0.322 and 0.214 for Problem 1 and 0.553 for Problem 2.
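The same quantities can be cross-checked by dense sampling on a toy network. Below, a hand-crafted two-neuron ReLU net (a tent map, chosen instead of the random network above so the answers are known exactly; grid search stands in for the MILP) yields \(r=0.5\), the distance from \(x_{c}=-0.5\) to the nearest \(J_{0}\) point in the sense of (2), and \(R=1.0\), the distance to the nearest other preimage:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0)
f = lambda x: relu(x + 1) - 2 * relu(x)   # tent map: 0, then x + 1, then 1 - x

xc, x = -0.5, np.linspace(-2.0, 2.0, 4001)
fx, step = f(x), x[1] - x[0]

# Problem 1: distance from xc to the J_0 set (slope sign changes, cf. (2))
slopes = np.diff(fx) / step
j0 = x[1:-1][slopes[:-1] * slopes[1:] <= 0]
r = np.min(np.abs(j0 - xc))

# Problem 2: distance from xc to the nearest other preimage of f(xc)
others = x[(np.abs(fx - f(xc)) < 1e-6) & (np.abs(x - xc) > 0.01)]
R = np.min(np.abs(others - xc))

assert abs(r - 0.5) < 0.01   # nearest kink of the tent map is at x = 0
assert abs(R - 1.0) < 0.01   # f(0.5) = f(-0.5) = 0.5
assert r <= R
```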
**2D Example: a discrete-time integrator.** The Brusselator (Tyson (1973)) is a system of two ODEs for the two variables \((x,y)\), depending on the parameters \((a,b)\); it describes oscillatory dynamics in a theoretical chemical reaction scheme. We use its forward-Euler discretization with step \(\tau\),
\[x_{n+1}=x_{n}+\tau(a+x_{n}^{2}y_{n}-(b+1)x_{n}),\;y_{n+1}=y_{n}+\tau(bx_{n}-x_{n }^{2}y_{n}). \tag{11}\]
Rearranging and eliminating \(y_{n}\) in (11) we obtain:
\[\tau(1-\tau)x_{n}^{3}+\tau(\tau a-x_{n+1}-y_{n+1})x_{n}^{2}+(\tau b+\tau-1)x_{n }+(x_{n+1}-\tau a)=0. \tag{12}\]
Equation (12) is a cubic for \(x_{n}\) given \((x_{n+1},y_{n+1})\) when \(\tau\neq 1\). By varying the parameters \(a\), \(b\) and \(\tau\), we see that the past states \((x_{n},y_{n})^{T}\) of a point \((x_{n+1},y_{n+1})^{T}\) (also called "inverses" or "preimages") may be multi-valued, so that this discrete-time system is, in general, noninvertible. We fix \(a=1\) and consider how the inverses change (a) with \(b\) for fixed \(\tau=0.15\); and (b) with \(\tau\), for fixed \(b=2\).
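The cubic (12) can be checked directly: starting from a state, step forward with (11), then recover all preimages of the image point via `np.roots` and the \(y\)-update. A sketch with the text's values \(a=1\), \(b=2\), \(\tau=0.15\) (the starting state is an arbitrary choice of ours):

```python
import numpy as np

def brusselator_step(x, y, a, b, tau):
    """One forward-Euler step of the Brusselator, eq. (11)."""
    return x + tau * (a + x**2 * y - (b + 1) * x), y + tau * (b * x - x**2 * y)

def preimages(x1, y1, a, b, tau):
    """All real preimages (x0, y0) of (x1, y1): x0 solves the cubic (12),
    and y0 follows by inverting the linear-in-y update of (11)."""
    coeffs = [tau * (1 - tau), tau * (tau * a - x1 - y1), tau * b + tau - 1, x1 - tau * a]
    xs = np.roots(coeffs)
    xs = xs[np.abs(xs.imag) < 1e-9].real
    ys = (y1 - tau * b * xs) / (1 - tau * xs**2)
    return list(zip(xs, ys))

a, b, tau = 1.0, 2.0, 0.15
x1, y1 = brusselator_step(1.3, 1.8, a, b, tau)
pre = preimages(x1, y1, a, b, tau)
assert len(pre) == 3   # this point has three real preimages: noninvertible map
assert any(np.isclose(px, 1.3) and np.isclose(py, 1.8) for px, py in pre)
# every preimage maps forward onto (x1, y1) again
for px, py in pre:
    assert np.allclose(brusselator_step(px, py, a, b, tau), (x1, y1))
```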
We are interested in training a neural network that learns this time-\(\tau\) mapping; for a fixed set of parameter values, this is a network from 3D to 2D: \((x_{n+1},y_{n+1})^{T}\approx\mathcal{N}(x_{n},y_{n};p)^{T}\), where \(p\in\mathbb{R}\) is the parameter. The network dynamics will be parameter-dependent if we set \(p\equiv b\), or timestep-dependent if \(p\equiv\tau\). The first layer of such an MLP reads
\[W^{(0)}\begin{bmatrix}x_{n}\\ y_{n}\\ p\end{bmatrix}+b^{(0)}=\left(W^{(0)}\begin{bmatrix}e_{1}&e_{2}\end{bmatrix}\right)\begin{bmatrix}x_{n}\\ y_{n}\end{bmatrix}+(pW^{(0)}e_{3}+b^{(0)}), \tag{13}\]
where \(e_{1,2,3}\in\mathbb{R}^{3}\) are indicator vectors. Here we trained two separate MLPs, one with \(b\) and one with \(\tau\) dependence. For fixed \(p\) (either \(b\) or \(\tau\)) each of these two networks \(\mathcal{N}\) can be thought of as an MLP mapping from \(\mathbb{R}^{2}\) to \(\mathbb{R}^{2}\), by slightly modifying the weights and biases in the first linear layer.
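Equation (13) amounts to folding the frozen parameter into the first layer's weights and bias. A small numpy sketch (the hidden width and the numerical values are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
W0, b0 = rng.standard_normal((32, 3)), rng.standard_normal(32)  # first layer, inputs (x, y, p)

def fold_parameter(W0, b0, p):
    """Absorb a fixed parameter p into the first layer, as in (13):
    a 3-input affine layer becomes an equivalent 2-input one."""
    return W0[:, :2], p * W0[:, 2] + b0

p, state = 2.1, rng.standard_normal(2)
W_eff, b_eff = fold_parameter(W0, b0, p)
full = W0 @ np.concatenate([state, [p]]) + b0
assert np.allclose(W_eff @ state + b_eff, full)   # identical first-layer output
```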
**Parameter-dependent Inverses.** It is useful to start with a brief discussion of the dynamics and noninvertibilities in the ground-truth system (see Figure 3). Consider a state located on the invariant circle (IC, shown in orange); since the IC is invariant, we know there exists at least one preimage _also on this IC_. In Figure 3 we indeed see that every point on the IC has three preimages: one still on the IC, and two extra inverses (in green and purple); after one iteration, all three loops map to the orange one,
Figure 2: Solutions to Problem 1 (left, red) and Problem 2 (right, blue) for the MLP corresponding to a randomly-generated ReLU network (see text).
and then remain forward invariant. The phase space, upon iteration, _folds_ along the two branches of the \(J_{0}\) curve (sets of red points). For lower values of \(b\), these three closed loops _do not intersect each other_. As \(b\) increases the (orange) attractor will become tangent to, and subsequently intersect \(J_{0}\), leading to an interaction with the other (green) preimage branch. At this point the dynamics predicted by the network become unphysical (beyond just inaccurate).
After convergence of training, we employ our algorithm to obtain noninvertibility certificates for the resulting MLP, and plot results for \(b=2.1\) in Figure 4. In Figure 4, we arbitrarily select one representative point, marked by a triangle (\(\triangle\)), on the attractor (the orange invariant circle); we know there exists one inverse _also_ located on the attractor, see the nearby cross (\(+\)); we call this the _primal_ inverse. Our algorithm produces two regions for this point, one for each of our problems (squares of constant \(L_{\infty}\) distance in 2D). As a sanity check, we also compute the \(J_{0}\) sets (the red points), as well as a few additional inverses beyond the primal one, with the help of a numerical root solver and automatic differentiation (Baydin et al. (2017)). Clearly, the smaller square neighborhood just hits the \(J_{0}\) curve, while the larger one extends to the closest non-primal inverse of the attractor.
**Timestep-dependent Inverses.** In the right two subfigures of Figure 4, we explore the effect of varying the time horizon \(\tau\). We compare a single Euler step of the ground truth ODE to the MLP approximating the same time \(\tau\) map, and find that, for both of them, smaller time horizons lead to larger regions of invertibility.
Figure 3: Attractors (and their multiple inverses) for several parameter values of the discrete Brusselator neural network for \(\tau=0.15\). Notice the relative positions of the \(J_{0}\) curves (red), the “main” preimage locus (yellow), and the “extra” preimages (green, purple). When the attractor starts interacting with the \(J_{0}\) curve and, therefore, with these extra preimages, the dynamic behavior degenerates quantitatively and qualitatively (see also Rico-Martinez et al. (1993)).
**Network Transformation Example: Learning the Van der Pol Equation.** Here, to test our algorithm on the problem of transformations between networks (Problem 3), we trained two networks on the same regression task. Our data comes from the 2D Van der Pol equation \(dx_{1}/dt=x_{2},dx_{2}/dt=\mu(1-x_{1}^{2})x_{2}-x_{1}\), where the input and output are the initial and final states of 1000 short solution trajectories of duration 0.2 for \(\mu=1\), when a stable limit cycle exists. The initial states are uniformly sampled in the region \([-3,3]\times[-3,3]\). The neural network A used to learn the time-\(\tau=0.2\) map is a 2-32-32-2 MLP, while the neural network B is a retrained sparse version of A, where half of the weight entries are pruned (set to zero) based on Zhu and Gupta (2018). To visualize the performance of the two networks, two trajectories, generated by iterating each network function a fixed number of times from a common initial state, are plotted in the left subplot of Figure 5. The ODE solution trajectory starting at the same initial state with the same overall time duration is also shown. We see that both network functions A and B exhibit long term oscillations; the shapes of both attractors appear to have only small visual differences from the true ODE solution (the red curve).
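A minimal sketch of the pruning step (one-shot magnitude pruning; Zhu and Gupta (2018) actually increase sparsity gradually and retrain, which is omitted here, and the helper name is ours):

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the smallest-magnitude fraction of the entries of W."""
    k = int(sparsity * W.size)
    thresh = np.sort(np.abs(W), axis=None)[k]   # k-th smallest magnitude
    return np.where(np.abs(W) < thresh, 0.0, W)

rng = np.random.default_rng(0)
W = rng.standard_normal((32, 32))
Wp = magnitude_prune(W, 0.5)
assert np.mean(Wp == 0) >= 0.5          # at least half the entries pruned
assert np.all((Wp == 0) | (Wp == W))    # surviving weights unchanged
```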
These two network functions were then used to illustrate the algorithm for Problem 3. Here we chose a center point \(x_{c}=(0,0)^{T}\), and computed and plotted the mappable regions (the regions over which there is a one-to-one mapping between the output of one network and the output of the other, i.e. where one network can be _calibrated_ to the other). This was done for two subcases (see the right subfigure of Figure 5): (a) where the output of network \(B\) is a function of the output of network \(A\) (the square with white bounds centered at the red point, radius 3.0820), and (b) where, vice versa, the output of network \(A\) is a function of the output of network \(B\) (the square with black bounds centered at the red point, radius 3.6484). This also gives us the "common" region (the interior of the white square) where both networks can be calibrated _to each other_. For validation we also computed the Jacobian values of network \(A\) and network \(B\) on every grid point of the input domain, and showed that the white square touches the \(J_{0}\) curve of network \(A\), while the black square touches the \(J_{0}\) curve of network \(B\). Inside the black square the Jacobian of network \(B\) remains positive, so that network \(B\) is invertible (i.e. there exists a mapping from \(f_{B}(x)\) to \(x\), or equivalently, \(f_{B}^{-1}(x)\));
Figure 4: Left: illustration of our solution to Problems 1 and 2 for the Brusselator network with \((a,b)=(1,2.1)\). For a particular reference point on the attractor, we show the neighborhoods found by our algorithms. They clearly locate the closest point on the \(J_{0}\) curve / the closest “extra preimage” of the point of interest. Last two: plots of \(J_{0}\) curves at different \(\tau\) with \((a,b)=(1,2)\), for both the Euler integrator and our Brusselator ReLU network. Small timesteps lead to progressively more remote \(J_{0}\) curves. Notice also the piecewise linear nature of the \(J_{0}\) curve for the ReLU network; its accurate computation constitutes an interesting challenge by itself.
therefore we can find the mapping from \(f_{B}(x)\) to \(f_{A}(x)\) by composing the mapping from \(f_{B}(x)\) to \(x\) with the mapping from \(x\) to \(f_{A}(x)\) (the function \(f_{A}(x)\) itself). The size of the white square can be similarly rationalized, validating our computation.
As a sanity check, we constructed eight more pruned networks; two of them have \(50\%\) sparsity (networks \(B_{5}\) and \(B_{6}\)), three have \(40\%\) sparsity (networks \(B_{1},B_{2}\) and \(B_{3}\)) and the others have \(60\%\) sparsity (networks \(B_{7},B_{8}\) and \(B_{9}\)). Above we discussed network \(B_{4}\). For each pruned network, we computed the radii of the regions of interest (denoted \(r_{AB}\) and \(r_{BA}\)). The results are listed in Table 1. All pruned networks \(\{B_{i}\}\) share the same radius \(r_{AB}\), consistent with the invertibility of \(A\) itself. Since \(r_{A}=3.0820\), \(A\) is invertible in the ball we computed, and the existence of the mapping \(f_{A}(x)\mapsto f_{B}(x)\) follows by composition of \(f_{A}(x)\mapsto x\) and \(x\mapsto f_{B}(x)\). Based on these few computational experiments one might very tentatively surmise a trend: the higher the pruning (e.g. \(60\%\)) the larger the invertibility guarantee for the pruned network. In our work the input and output
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \hline Sparsity & \multicolumn{3}{c|}{40 \%} & \multicolumn{3}{c|}{50 \%} & \multicolumn{3}{c}{60 \%} \\ \hline Network \(B\) & \(B_{1}\) & \(B_{2}\) & \(B_{3}\) & \(B_{4}\) & \(B_{5}\) & \(B_{6}\) & \(B_{7}\) & \(B_{8}\) & \(B_{9}\) \\ \hline \(r_{AB}\) & 3.0820 & 3.0820 & 3.0820 & 3.0820 & 3.0820 & 3.0820 & 3.0820 & 3.0820 & 3.0820 \\ \hline \(r_{BA}\) & 3.4609 & 3.1055 & 3.8555 & 3.6484 & 2.6523 & 3.8203 & 3.6328 & 3.9727 & 4.5547 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The radii of the mappable regions between the original network \(A\) and its pruned versions \(B\). \(r_{AB}\) relates to the region within which \(f_{B}(x)\) is a function of \(f_{A}(x)\).
Figure 5: Left: Trajectories of the ODE solution for the Van der Pol system (red), and their discrete-time neural network approximations (blue and green). All three trajectories begin at the same initial state. While the ODE solution curve is smooth due to its continuous-time nature, the others are just straight line segments connecting consecutive states (discrete-time dynamics). However, it is clear that all three systems have visually nearby long-time dynamic attractors, corroborating the good performance of the network and its pruned version. Right: visualization of MILP computation results, along with signs of Jacobian values of networks on the grid points of the input domain. Here, the center of the region is shown in red, while the white and black boundaries quantify the mappable region between outputs of network A and network B.
dimensions are the same (e.g. \(m=n\) in Problem 3). However, this condition is not necessary, and our algorithm can be conceptually extended to classification problems, where in general \(m\gg n\).
## 5 Conclusions
In this paper, we revisited noninvertibility issues that arise in discrete-time dynamical systems (integrators) as well as in neural networks that perform approximations of the same (time-series related) task. We argued that such noninvertibility may have dramatic pathological consequences, going beyond mere inaccuracies, in the dynamics predicted by the networks. We also extended the analysis to transformations between different neural networks. We formulated three problems that provide a quantifiable assessment of "local" invertibility for any given, arbitrarily selected input. Specifically, for functions like MLPs with ReLU activations, these problems were formulated as mixed-integer programs. We then performed experiments on regression tasks. An extension of our algorithm to ResNets can be found in the Appendix.
Future directions include developing structure-exploiting methods to globally solve these MIPs more efficiently, and for larger networks. On the other hand, given that convolution and average pooling are linear operations, while max pooling is piecewise linear, it is natural to adapt our algorithms to convolutional neural networks like AlexNet (Krizhevsky et al. (2017)) or VGG (Simonyan and Zisserman (2015)). The successful application of our algorithm to ResNet architectures (He et al. (2016)) holds promise for applicability also to recursive architectures (Lu et al. (2018); E (2017)), such as fractal networks (Larsson et al. (2017)), poly-inception networks (Zhang et al. (2016)), and RevNet (Gomez et al. (2017)). We are working on making the algorithm practical for continuous differentiable activations like tanh or Swish (Ramachandran et al. (2017)), and for other piecewise activations like gaussian error linear units (GELUs, Hendrycks and Gimpel (2016)). We are particularly interested in the case when the input and output domains are of different dimension (e.g., classifiers).
---

# Normalization-Equivariant Neural Networks with Application to Image Denoising

Sébastien Herbreteau, Emmanuel Moebel, Charles Kervrann (2023-06-08, arXiv:2306.05037, http://arxiv.org/abs/2306.05037v2)
###### Abstract
In many information processing systems, it may be desirable to ensure that any change of the input, whether by shifting or scaling, results in a corresponding change in the system response. While deep neural networks are gradually replacing all traditional automatic processing methods, they surprisingly do not guarantee such normalization-equivariance (scale + shift) property, which can be detrimental in many applications. To address this issue, we propose a methodology for adapting existing neural networks so that normalization-equivariance holds by design. Our main claim is that not only ordinary convolutional layers, but also all activation functions, including the ReLU (rectified linear unit), which are applied element-wise to the pre-activated neurons, should be completely removed from neural networks and replaced by better conditioned alternatives. To this end, we introduce affine-constrained convolutions and channel-wise sort pooling layers as surrogates and show that these two architectural modifications do preserve normalization-equivariance without loss of performance. Experimental results in image denoising show that normalization-equivariant neural networks, in addition to their better conditioning, also provide much better generalization across noise levels.
## 1 Introduction
Sometimes wrongly confused with the invariance property which designates the characteristic of a function \(f\) not to be affected by a specific transformation \(\mathcal{T}\) applied beforehand, the equivariance property, on the other hand, means that \(f\) reacts in accordance with \(\mathcal{T}\). Formally, invariance is \(f\circ\mathcal{T}=f\) whereas equivariance reads \(f\circ\mathcal{T}=\mathcal{T}\circ f\), where \(\circ\) denotes the function composition operator. Both invariance and equivariance play a crucial role in many areas of study, including physics, computer vision, signal processing and have recently been studied in various settings for deep-learning-based models [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14].
In this paper, we focus on the equivariance of neural networks \(f_{\theta}\) to a specific transformation \(\mathcal{T}\), namely normalization. Although highly desirable in many applications and in spite of its omnipresence in machine learning, current neural network architectures do not equivary to normalization. With application to image denoising, for which _normalization-equivariance_ is generally guaranteed for a lot of conventional methods [18, 20, 27, 22], we propose a methodology for adapting existing neural networks, and in particular denoising CNNs [30, 32, 31, 34, 33], so that _normalization-equivariance_ holds by design. In short, the proposed adaptation is based on two innovations:
1. affine convolutions: the weights from one layer to each neuron from the next layer, _i.e._ the convolution kernels in a CNN, are constrained to encode affine combinations of neurons (the sum of the weights is equal to 1).
2. channel-wise sort pooling: all activation functions that apply element-wise, such as the ReLU, are substituted with higher-dimensional nonlinearities, namely two by two sorting along channels that constitutes a fast and efficient _normalization-equivariant_ alternative.
Despite strong architectural constraints, we show that these simple modifications do not degrade performance and, even better, increase robustness to noise levels in image denoising both in practice and in theory.
## 2 Related Work
A non-exhaustive list of application fields where equivariant neural networks were studied includes graph theory, point cloud analysis and image processing. Indeed, graph neural networks are usually expected to equivary, in the sense that a permutation of the nodes of the input graph should permute the output nodes accordingly. Several specific architectures were investigated to guarantee such a property [5; 6; 7]. In parallel, rotation and translation-equivariant networks for dealing with point cloud data were proposed in a recent line of research [12; 13; 14]. A typical application is the ability for these networks to produce direction vectors consistent with the arbitrary orientation of the input point clouds, thus eliminating the need for data augmentation. Finally, in the domain of image processing, it may be desirable that neural networks produce outputs that equivary with regard to rotations of the input image, whether these outputs are vector fields [8], segmentation maps [9; 10], labels for image classification [9] or even bounding boxes for object tracking [11].
In addition to their better conditioning, equivariant neural networks by design are expected to be more robust to outliers. A spectacular example has been revealed by S. Mohan _et al._[1] in the field of image denoising. By simply removing the additive constant ("bias") terms in neural networks with ReLU activation functions, they showed that a much better generalization at noise levels outside the training range was ensured. Although they do not fully elucidate why biases prevent generalization, and their removal allows it, the authors establish some clues that the answer is probably linked to the _scale-equivariant_ property of the resulting encoded function: rescaling the input image by a positive constant value rescales the output by the same amount.
## 3 Overview of normalization equivariance
### Definitions and properties of three types of fundamental equivariances
We start with formal definitions of the different types of equivariances studied in this paper.
**Definition 1**: _A function \(f:\mathbb{R}^{n}\mapsto\mathbb{R}^{m}\) is said to be:_
* _scale-equivariant if_ \(\forall x\in\mathbb{R}^{n},\forall\lambda\in\mathbb{R}^{+}_{*},\;f(\lambda x)= \lambda f(x)\,,\)__
* _shift-equivariant if_ \(\forall x\in\mathbb{R}^{n},\forall\mu\in\mathbb{R},\;f(x+\mu)=f(x)+\mu\,,\)__
* _normalization-equivariant if it is both scale-equivariant and shift-equivariant:_ \[\forall x\in\mathbb{R}^{n},\forall\lambda\in\mathbb{R}^{+}_{*},\forall\mu \in\mathbb{R},\;f(\lambda x+\mu)=\lambda f(x)+\mu\,,\]
_where addition with the scalar shift \(\mu\) is applied element-wise._
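As a concrete, runnable illustration of Definition 1 (our own sketch, not part of the paper), each property can be checked numerically for any candidate function. Here a simple 1-D median filter, one of the noise-reduction filters discussed in Section 3.2, passes all three checks; all helper names are ours.

```python
import numpy as np

def is_scale_equivariant(f, x, lam=2.5, tol=1e-10):
    # f(lam * x) == lam * f(x) for lam > 0
    return np.allclose(f(lam * x), lam * f(x), atol=tol)

def is_shift_equivariant(f, x, mu=0.7, tol=1e-10):
    # f(x + mu) == f(x) + mu (element-wise shift)
    return np.allclose(f(x + mu), f(x) + mu, atol=tol)

def is_normalization_equivariant(f, x, lam=2.5, mu=0.7, tol=1e-10):
    # f(lam * x + mu) == lam * f(x) + mu
    return np.allclose(f(lam * x + mu), lam * f(x) + mu, atol=tol)

def median_filter(x, k=3):
    # 1-D sliding median with edge replication: a simple nonlinear smoother
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

rng = np.random.default_rng(0)
x = rng.normal(size=32)
print(is_scale_equivariant(median_filter, x),
      is_shift_equivariant(median_filter, x),
      is_normalization_equivariant(median_filter, x))   # True True True
```

Replacing the median filter by, say, an element-wise ReLU would make the shift check fail, which foreshadows the architectural changes of Section 4.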
Note that the _scale-equivariance_ property is more often referred to as positive homogeneity in pure mathematics. Like linear maps, which are completely determined by their values on a basis, the above-described equivariant functions are actually entirely characterized by the values they take on specific subsets of \(\mathbb{R}^{n}\), as stated by the following proposition (see proof in Appendix D.1).
**Proposition 1** (Characterizations): \(f:\mathbb{R}^{n}\mapsto\mathbb{R}^{m}\) _is entirely determined by its values on the:_
* _unit sphere_ \(\mathcal{S}\) _of_ \(\mathbb{R}^{n}\) _if it is scale-equivariant,_
* _orthogonal complement of_ \(\operatorname{Span}(\mathbf{1}_{n})\)_, i.e._ \(\operatorname{Span}(\mathbf{1}_{n})^{\perp}\)_, if it is shift-equivariant,_
* _intersection_ \(\mathcal{S}\cap\operatorname{Span}(\mathbf{1}_{n})^{\perp}\) _if it is normalization-equivariant,_
_where \(\mathbf{1}_{n}\) denotes the all-ones vector of \(\mathbb{R}^{n}\)._
Finally, Proposition 2 highlights three basic equivariance-preserving mathematical operations that can be used as building blocks for designing neural network architectures.
**Proposition 2** (Operations preserving equivariance): _Let \(f\) and \(g\) be two equivariant functions of the same type (either in scale, shift or normalization). Then, subject to dimensional compatibility, all of the following functions are still equivariant:_
* \(f\circ g\) _(_\(f\) _composed with_ \(g\)_),_
* \(x\mapsto(f(x)^{\top}\,g(x)^{\top}\,)^{\top}\) _(concatenation of_ \(f\) _and_ \(g\)_),_
* \((1-t)f+tg\) _for all_ \(t\in\mathbb{R}\) _(affine combination of_ \(f\) _and_ \(g\)_)._
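Proposition 2 can be sanity-checked numerically. The sketch below (ours) builds two normalization-equivariant maps, a box filter (a linear filter whose weights sum to 1) and a median filter, and verifies that their composition, concatenation and affine combination remain normalization-equivariant.

```python
import numpy as np

def box_filter(x, k=3):
    # Linear filter whose k weights (1/k each) sum to 1: norm-equivariant
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

def median_filter(x, k=3):
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

def ne_holds(f, x, lam=3.0, mu=-1.2):
    return np.allclose(f(lam * x + mu), lam * f(x) + mu)

rng = np.random.default_rng(1)
x = rng.normal(size=64)

compose = lambda v: box_filter(median_filter(v))                     # f o g
concat = lambda v: np.concatenate([box_filter(v), median_filter(v)])
affine = lambda v: 0.3 * box_filter(v) + 0.7 * median_filter(v)      # t = 0.7

print(all(ne_holds(f, x) for f in (compose, concat, affine)))   # True
```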
### Examples of normalization-equivariant conventional denoisers
A ("blind") denoiser is basically a function \(f:\mathbb{R}^{n}\mapsto\mathbb{R}^{n}\) which, given a noisy image \(y\in\mathbb{R}^{n}\), tries to recover the corresponding noise-free image \(x\in\mathbb{R}^{n}\). Since scaling up an image by a positive factor \(\lambda\) or adding a constant shift \(\mu\) to it does not change its contents, it is natural to expect scale and shift equivariance, _i.e._ normalization equivariance, from the denoising procedure emulated by \(f\). In image denoising, a majority of methods usually assume an additive white Gaussian noise model with variance \(\sigma^{2}\). The corruption model then reads \(y\sim\mathcal{N}(x,\sigma^{2}I_{n})\), where \(I_{n}\) denotes the identity matrix of size \(n\), and the noise standard deviation \(\sigma>0\) is generally passed as an additional argument to the denoiser ("non-blind" denoising). In this case, the augmented function \(f:(y,\sigma)\in\mathbb{R}^{n}\times\mathbb{R}^{+}_{*}\mapsto\mathbb{R}^{n}\) is said to be _normalization-equivariant_ if:
\[\forall(y,\sigma)\in\mathbb{R}^{n}\times\mathbb{R}^{+}_{*},\forall\lambda\in \mathbb{R}^{+}_{*},\forall\mu\in\mathbb{R},\ f(\lambda y+\mu,\lambda\sigma)= \lambda f(y,\sigma)+\mu\,, \tag{1}\]
as, according to the laws of statistics, \(\lambda y+\mu\sim\mathcal{N}(\lambda x+\mu,(\lambda\sigma)^{2}I_{n})\). In what follows, we give some well-known examples of traditional denoisers that are _normalization-equivariant_ (see proofs in Appendix D.2).
Noise-reduction filters: The most rudimentary methods for image denoising are the smoothing filters, among which we can mention the averaging filter or the Gaussian filter for the linear filters, and the median filter, which is nonlinear. These elementary "blind" denoisers all implement a _normalization-equivariant_ function. More generally, one can prove that a linear filter is _normalization-equivariant_ if and only if its coefficients add up to \(1\). In other words, _normalization-equivariant_ linear filters process images by affine combinations of pixels.
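To illustrate the "coefficients add up to 1" criterion (our own sketch), the following compares a normalized Gaussian-like kernel with a rescaled copy whose coefficients sum to 0.9: only the former is normalization-equivariant.

```python
import numpy as np

def linear_filter(x, kernel):
    # 1-D linear filtering with reflect padding (pads with existing samples)
    pad = len(kernel) // 2
    xp = np.pad(x, pad, mode="reflect")
    return np.convolve(xp, kernel, mode="valid")

rng = np.random.default_rng(2)
x = rng.normal(size=50)
lam, mu = 2.0, 5.0

gauss = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
k_affine = gauss / gauss.sum()   # coefficients sum to 1
k_broken = 0.9 * k_affine        # coefficients sum to 0.9

ok_affine = np.allclose(linear_filter(lam * x + mu, k_affine),
                        lam * linear_filter(x, k_affine) + mu)
ok_broken = np.allclose(linear_filter(lam * x + mu, k_broken),
                        lam * linear_filter(x, k_broken) + mu)
print(ok_affine, ok_broken)   # True False
```

The broken kernel shifts the output by \(0.9\mu\) instead of \(\mu\), which is exactly the loss of shift-equivariance described above.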
Patch-based denoising: The popular N(on)-L(ocal) M(eans) algorithm [20] and its variants [22, 23, 25] consist in computing, for each pixel, an average of its neighboring noisy pixels, weighted by the degree of similarity of the patches they belong to. In other words, they process images by convex combinations of pixels. More precisely, NLM can be defined as:
\[f_{\mathrm{NLM}}(y,\sigma)_{i}=\frac{1}{W_{i}}\sum_{y_{j}\in\Omega(y_{i})}e^{ -\frac{\|p(y_{i})-p(y_{j})\|_{2}^{2}}{h^{2}}}y_{j}\quad\text{with}\quad W_{i} =\sum_{y_{j}\in\Omega(y_{i})}e^{-\frac{\|p(y_{i})-p(y_{j})\|_{2}^{2}}{h^{2}}} \tag{2}\]
where \(y_{i}\) denotes the \(i^{th}\) component of vector \(y\), \(p(y_{i})\) represents the vectorized patch centered at \(y_{i}\), \(\Omega(y_{i})\) the set of its neighboring pixels, and the smoothing parameter \(h\) is proportional to \(\sigma\), as proposed by several authors [21, 24, 25]. Defined as such, \(f_{\mathrm{NLM}}\) is a _normalization-equivariant_ function. More recently, NL-Ridge [27] and LIChI [28] propose to process images by linear combinations of similar patches and achieve state-of-the-art performance in unsupervised denoising. When restricting the coefficients of the combinations to sum to \(1\), that is, imposing affine combination constraints, the resulting algorithms encode _normalization-equivariant_ functions as well.
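A minimal 1-D version of NLM in the spirit of Eq. (2) makes the equivariance argument concrete. This is our own sketch: the patch size, search-window size and the proportionality constant in \(h=c\sigma\) are illustrative choices, not the paper's settings. Because the weights depend only on the ratio \(d^{2}/h^{2}\), which is invariant under \(y\mapsto\lambda y+\mu\), \(\sigma\mapsto\lambda\sigma\), property (1) holds exactly:

```python
import numpy as np

def nlm_1d(y, sigma, patch=3, search=7, c=2.0):
    # Non-local means in the spirit of Eq. (2): each pixel is a convex
    # combination of its neighbors, weighted by patch similarity; h = c*sigma
    h, p, s = c * sigma, patch // 2, search // 2
    yp = np.pad(y, p, mode="reflect")
    patches = np.stack([yp[i:i + patch] for i in range(len(y))])
    out = np.empty_like(y)
    for i in range(len(y)):
        js = np.arange(max(0, i - s), min(len(y), i + s + 1))
        d2 = np.sum((patches[js] - patches[i]) ** 2, axis=1)
        w = np.exp(-d2 / h ** 2)
        out[i] = np.sum(w * y[js]) / np.sum(w)   # weights sum to 1
    return out

rng = np.random.default_rng(3)
y, sigma = rng.normal(size=40), 0.5
lam, mu = 1.7, -0.3
lhs = nlm_1d(lam * y + mu, lam * sigma)   # property (1), left-hand side
rhs = lam * nlm_1d(y, sigma) + mu
print(np.allclose(lhs, rhs))   # True
```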
TV denoising: Total variation (TV) denoising [18] is finally one of the most famous image denoising algorithms, appreciated for its edge-preserving properties. In its original form [18], a TV denoiser is defined as a function \(f:\mathbb{R}^{n}\times\mathbb{R}^{+}_{*}\mapsto\mathbb{R}^{n}\) that solves the following equality-constrained problem:
\[f_{\mathrm{TV}}(y,\sigma)=\operatorname*{arg\,min}_{x\in\mathbb{R}^{n}}\ \|x\|_{\mathrm{TV}}\quad\text{s.t.}\quad\|y-x\|_{2}^{2}=n\sigma^{2} \tag{3}\]
where \(\|x\|_{\mathrm{TV}}:=\|\nabla x\|_{2}\) is the total variation of \(x\in\mathbb{R}^{n}\). Defined as such, \(f_{\mathrm{TV}}\) is a _normalization-equivariant_ function.
### The case of neural networks
Deep learning hides a subtlety about normalization equivariance that deserves to be highlighted. Usually, the weights of neural networks are learned on a training set containing data all normalized to the same arbitrary interval \([a_{0},b_{0}]\). This training procedure improves the performance and allows for more stable optimization of the model. At inference, unseen data are processed within the interval \([a_{0},b_{0}]\) via a \(a\)-\(b\) linear normalization with \(a_{0}\leq a<b\leq b_{0}\) denoted \(\mathcal{T}_{a,b}\) and defined by:
\[\mathcal{T}_{a,b}:y\mapsto(b-a)\frac{y-\min(y)}{\max(y)-\min(y)}+a\,. \tag{4}\]
Note that this transform is actually the unique linear one with positive slope that exactly bounds the output to \([a,b]\). The data is then passed to the trained network and its response is finally returned to the original range via the inverse operator \(\mathcal{T}_{a,b}^{-1}\). This proven pipeline is actually relevant in light of the following proposition (see proof in Appendix D.1).
**Proposition 3**: \(\forall\,a<b\in\mathbb{R},\forall\,f:\mathbb{R}^{n}\mapsto\mathbb{R}^{m},\mathcal{T }_{a,b}^{-1}\circ f\circ\mathcal{T}_{a,b}\) _is a normalization-equivariant function._
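Proposition 3 in code (our illustration): wrapping an arbitrary, non-equivariant map \(f\) between \(\mathcal{T}_{a,b}\) and \(\mathcal{T}_{a,b}^{-1}\) yields a normalization-equivariant function.

```python
import numpy as np

def T(y, a, b):
    # a-b linear normalization of Eq. (4) and its inverse, fitted on y
    m, M = y.min(), y.max()
    scale = (b - a) / (M - m)
    fwd = lambda z: scale * (z - m) + a
    inv = lambda z: (z - a) / scale + m
    return fwd, inv

def make_norm_equivariant(f, a=0.0, b=1.0):
    # Proposition 3: T^{-1} o f o T is normalization-equivariant
    def g(y):
        fwd, inv = T(y, a, b)
        return inv(f(fwd(y)))
    return g

# A map that is neither scale- nor shift-equivariant (soft thresholding)
f = lambda y: np.maximum(y - 0.1, 0.0)

rng = np.random.default_rng(4)
y = rng.normal(size=30)
g = make_norm_equivariant(f)
lam, mu = 2.3, 4.0
ok = np.allclose(g(lam * y + mu), lam * g(y) + mu)
print(ok)   # True
```

The key is that \(\mathcal{T}_{a,b}\) fitted on \(\lambda y+\mu\) feeds \(f\) exactly the same normalized input as \(\mathcal{T}_{a,b}\) fitted on \(y\), so the dependence on \((\lambda,\mu)\) is entirely carried by the inverse transform.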
While normalization-equivariance appears to be solved, one question still remains: how should the hyperparameters \(a\) and \(b\) be chosen for a given function \(f\)? Obviously, a natural choice for neural networks is to take the same parameters \(a\) and \(b\) as in the learning phase whatever the input image is, _i.e._ \(a=a_{0}\) and \(b=b_{0}\), but are they really optimal? The answer to this question is generally negative. Figure 1 depicts an example of this phenomenon in image denoising, taken from a real-world application. In this example, the straightforward choice is largely sub-optimal. This suggests that there are always inherent performance leaks for deep neural networks due to the two degrees of freedom induced by the normalization (_i.e._, the choice of \(a\) and the choice of \(b\)). In addition, this poor conditioning can be a source of confusion and misinterpretation in critical applications.
### Categorizing image denoisers
Table 1 summarizes the equivariance properties of several popular denoisers, either conventional [18; 20; 27; 28; 19; 26; 29] or deep-learning-based [30; 32; 35; 36]. Interestingly, while _scale-equivariance_ is generally guaranteed for traditional denoisers, not all of them are equivariant to shifts. In particular, the widely used algorithms DCT [19] and BM3D [26] are sensitive to offsets, mainly because the hard thresholding function at their core is not _shift-equivariant_. Regarding the deep-learning-based networks, only DRUNet [32] is insensitive to scale, because it is a bias-free convolutional neural network with only ReLU activation functions [1]. In the next section, we show how to adapt existing neural architectures to guarantee _normalization-equivariance_ without loss of performance, and study the resulting class of parameterized functions \((f_{\theta})\).
## 4 Design of Normalization-Equivariant Networks
### Affine convolutions
| | TV | NLM | NL-Ridge | LIChI | DCT | BM3D | WNNM | DnCNN | NLRN | SwinIR | DRUNet |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Scale | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✓ |
| Shift | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |

Table 1: Equivariance properties of several image denoisers (left: traditional, right: learning-based)

Figure 1: Influence of normalization for deep-learning-based image denoising. The raw input data is a publicly available real noisy image of the _Convallaria_ dataset [17]. "Blind" DnCNN [30] with official pre-trained weights is used for denoising and is applied on four different normalization intervals displayed in red, each of which is included in \([0,1]\), over which it was learned. PSNR is calculated with the average of \(100\) independent noisy static acquisitions of the same sample (called ground truth). Interestingly, the straightforward interval \([0,1]\) does not give the best results. Normalization intervals are (a) \([0,1]\), (b) \([0.08,0.12]\), (c) \([0.48,0.52]\) and (d) \([0.64,0.96]\). In light of the denoising results (b)-(c) and (c)-(d), DnCNN is neither _shift-equivariant_ nor _scale-equivariant_.

To justify the introduction of a new type of convolutional layer, let us study one of the most basic neural networks, namely the linear (parameterized) function \(f_{\Theta}:x\in\mathbb{R}^{n}\mapsto\Theta x\), where the parameters \(\Theta\) form a matrix of \(\mathbb{R}^{m\times n}\). Indeed, \(f_{\Theta}\) can be interpreted as a dense neural network with no bias, no hidden layer and no activation function. Obviously, \(f_{\Theta}\) is always _scale-equivariant_, whatever the weights \(\Theta\). As for _shift-equivariance_, a simple calculation shows that:
\[x\mapsto\Theta x\text{ is {shift-equivariant} }\;\Leftrightarrow\;\forall x\in\mathbb{R}^{n},\forall\mu\in \mathbb{R},\Theta(x+\mu\mathbf{1}_{n})=\Theta x+\mu\mathbf{1}_{m}\; \Leftrightarrow\;\Theta\mathbf{1}_{n}=\mathbf{1}_{m}\,. \tag{5}\]
Therefore, \(f_{\Theta}\) is _normalization-equivariant_ if and only if each row of the matrix \(\Theta\) sums to \(1\). In other words, for _normalization-equivariance_ to hold, the rows of \(\Theta\) must encode weights of affine combinations. Transposing the demonstration to any convolutional neural network, a convolutional layer preserves _normalization-equivariance_ if and only if the weights of each convolutional kernel sum to \(1\). In the following, we call such convolutional layers "affine convolutions".
As a consequence, since _normalization-equivariance_ is preserved through function composition, concatenation and affine combination (see Prop. 2), a (linear) convolutional neural network composed of only affine convolutions with no bias and possibly skip or _affine_ residual connections (trainable affine combination of two layers), is guaranteed to be _normalization-equivariant_, provided that padding is performed with existing features (reflect, replicate or circular padding for example). Obviously, in their current state, these neural networks are of little interest, as linear functions do not encode best-performing functions for many applications, image denoising being no exception. Nevertheless, based on such networks, we show in the next subsection how to introduce nonlinearities without breaking the _normalization-equivariance_.
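One simple way to realize an affine convolution (our own sketch; the authors' implementation may differ) is to project an unconstrained kernel \(w\) onto the constraint set via \(w\mapsto w-\operatorname{mean}(w)+1/k\), which always sums to \(1\). Combined with reflect padding (padding with existing features), the resulting layer is exactly normalization-equivariant:

```python
import numpy as np

def affine_kernel(w):
    # Project an arbitrary kernel onto the set {w : sum(w) = 1}
    return w - w.mean() + 1.0 / w.size

def affine_conv1d(x, k):
    pad = len(k) // 2
    xp = np.pad(x, pad, mode="reflect")   # pad with existing features
    return np.convolve(xp, k, mode="valid")

rng = np.random.default_rng(5)
k = affine_kernel(rng.normal(size=5))     # unconstrained weights, projected
x, lam, mu = rng.normal(size=40), 3.1, -2.0

lhs = affine_conv1d(lam * x + mu, k)
rhs = lam * affine_conv1d(x, k) + mu
print(np.isclose(k.sum(), 1.0), np.allclose(lhs, rhs))   # True True
```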
### Channel-wise sort pooling as a normalization-equivariant alternative to ReLU
The first idea that comes to mind is to apply a nonlinear activation function \(\varphi:\mathbb{R}\mapsto\mathbb{R}\) preserving _normalization-equivariance_ after each affine convolution. In other words, we look for a nonlinear solution \(\varphi\) of the characteristic functional equation of _normalization-equivariant_ functions (see Def. 1) for \(n=1\). Unfortunately, the unique solution is the identity function which is linear (see Prop. 1: \(\mathcal{S}\cap\operatorname{Span}(\mathbf{1}_{n})^{\perp}=\emptyset\) for \(n=1\)). Therefore, activation functions that apply element-wise are to be excluded.
To find interesting nonlinear functions, one needs to examine multi-dimensional activation functions, _i.e._ ones of the form \(\varphi:\mathbb{R}^{n}\mapsto\mathbb{R}^{m}\) with \(n\geq 2\). In order to preserve the dimensions of the neural layers and to limit the computational costs, we focus on the case \(n=m=2\), meaning that \(\varphi\) processes pre-activated neurons by pairs. According to Prop. 1, _normalization-equivariant_ functions are completely determined by their values on \(\mathcal{S}\cap\operatorname{Span}(\mathbf{1}_{n})^{\perp}\), which reduces to the characteristic set \(\mathcal{C}=\{-u,u\}\), where \(u=(-1/\sqrt{2},1/\sqrt{2})\), when considering the Euclidean distance of \(\mathbb{R}^{2}\). By arbitrarily setting \(\varphi(-u)=\varphi(u)=u\), the resulting function simply reads:
\[\varphi:(x,y)\in\mathbb{R}^{2}\mapsto\begin{pmatrix}\min(x,y)\\ \max(x,y)\end{pmatrix}\,, \tag{6}\]
which is nothing else than the sorting function in \(\mathbb{R}^{2}\). More generally, it is easy to show that all the sorting functions of \(\mathbb{R}^{n}\) are _normalization-equivariant_. The good news is that these functions are nonlinear as soon as \(n\geq 2\). Therefore, they are candidates to replace conventional activation functions such as the popular ReLU (rectified linear unit) function.
Since the sorting function (6) is to be applied on non-overlapping pairs of neurons, the partitioning of layers needs to be determined. In order not to mix unrelated neurons, we propose to apply this two-dimensional activation function channel-wise across layers and call this operation "sort pooling", in reference to the max pooling operation widely used for downsampling, from which it can be efficiently implemented. Figure 2 illustrates the sequence of the two proposed innovations, namely affine convolution followed by channel-wise sort pooling, to replace the traditional scheme "conv+ReLU" while guaranteeing _normalization-equivariance_.

Figure 2: Illustration of the proposed alternative for replacing the traditional scheme "convolution + element-wise activation function" in convolutional neural networks: affine convolutions supersede ordinary ones by restricting the coefficients of each kernel to sum to one, and the proposed sort pooling patterns introduce nonlinearities by sorting the pre-activated neurons two by two along the channels.
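A sketch of channel-wise sort pooling on a \((C,H,W)\) feature map (names ours): consecutive channels are paired and each pair is sorted element-wise. The operation is normalization-equivariant yet nonlinear, as the second check shows.

```python
import numpy as np

def sort_pool(z):
    # z: (C, H, W) with C even; sort each pair of consecutive channels
    c, h, w = z.shape
    pairs = z.reshape(c // 2, 2, h, w)
    lo, hi = pairs.min(axis=1), pairs.max(axis=1)
    return np.stack([lo, hi], axis=1).reshape(c, h, w)

rng = np.random.default_rng(6)
z = rng.normal(size=(4, 8, 8))
lam, mu = 2.0, 0.5
ne_ok = np.allclose(sort_pool(lam * z + mu), lam * sort_pool(z) + mu)

# Nonlinearity: sort_pool(a) + sort_pool(b) != sort_pool(a + b) here,
# because the two inputs sort their channel pairs in opposite orders
a = np.array([1.0, 0.0, 1.0, 0.0]).reshape(4, 1, 1)
b = np.array([0.0, 1.0, 0.0, 1.0]).reshape(4, 1, 1)
nonlinear = not np.allclose(sort_pool(a) + sort_pool(b), sort_pool(a + b))
print(ne_ok, nonlinear)   # True True
```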
### Encoding adaptive affine filters
Based on Prop. 2, we can formulate the following proposition which tells more about the class of parameterized functions \((f_{\theta})\) encoded by the proposed networks (see proof in Appendix D.1).
**Proposition 4**: _Let \(f_{\theta}^{\text{NE}}:\mathbb{R}^{n}\mapsto\mathbb{R}^{m}\) be a CNN composed of only:_
* _affine convolution kernels with no bias and where padding is made of existing features,_
* _sort pooling nonlinearities,_
* _possibly skip or affine residual connections._
_Then, \(f_{\theta}^{\text{NE}}\) is a normalization-equivariant continuous piecewise-linear function with finitely many pieces. Moreover, on each piece represented by the vector \(y_{r}\),_
\[f_{\theta}^{\text{NE}}(y)=A_{\theta}^{y_{r}}y,\text{ with }A_{\theta}^{y_{r}} \in\mathbb{R}^{m\times n}\text{ such that }A_{\theta}^{y_{r}}\mathbf{1}_{n}=\mathbf{1}_{m}\,.\]
In Prop. 4, the subscripts on \(A_{\theta}^{y_{r}}\) serve as a reminder that this matrix depends on the sort pooling activation patterns, which in turn depend on both the input vector \(y\) and the weights \(\theta\). As already revealed for bias-free networks with ReLU [1], \(A_{\theta}^{y_{r}}\) is the Jacobian matrix of \(f_{\theta}^{\text{NE}}\) taken at any point \(y\) in the interior of the piece represented by vector \(y_{r}\). Moreover, as \(A_{\theta}^{y_{r}}\mathbf{1}_{n}=\mathbf{1}_{m}\), the output vector of such networks are locally made of fixed affine combinations of the entries of the input vector. And since a CNN has a limited receptive field centered on each pixel, \(f_{\theta}^{\text{NE}}\) can be thought of as an adaptive filter that produces an estimate of each pixel through a custom affine combination of pixels. By examining these filters in the case of image denoising (see Fig. 3), it becomes apparent that they vary in their characteristics and are intricately linked to the contents of the underlying images. Indeed, these filters are specifically designed to cater to the specific local features of the noisy image: averaging is done over uniform areas without affecting the sharpness of edges. Note that this behavior has already been extensively studied by [1] for unconstrained filters.
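Proposition 4 can be probed numerically (our own toy example, not the paper's network): for a small map built from two affine convolutions, a two-channel sort pooling (element-wise min/max) and an affine combination, the finite-difference Jacobian at a generic point has rows summing to \(1\), _i.e._ \(A_{\theta}^{y_{r}}\mathbf{1}_{n}=\mathbf{1}_{n}\).

```python
import numpy as np

def affine_conv(x, k):
    pad = len(k) // 2
    xp = np.pad(x, pad, mode="reflect")
    return np.convolve(xp, k, mode="valid")

def tiny_net(x):
    # two affine convolutions -> two-channel sort pooling -> affine combination
    k1 = np.array([0.25, 0.5, 0.25])       # sums to 1
    k2 = np.array([-0.5, 2.0, -0.5])       # sums to 1 as well
    c1, c2 = affine_conv(x, k1), affine_conv(x, k2)
    lo, hi = np.minimum(c1, c2), np.maximum(c1, c2)
    return 0.3 * lo + 0.7 * hi             # affine combination (0.3 + 0.7 = 1)

rng = np.random.default_rng(7)
x = rng.normal(size=12)
eps = 1e-6
J = np.empty((12, 12))
for j in range(12):                        # central finite differences
    e = np.zeros(12)
    e[j] = eps
    J[:, j] = (tiny_net(x + e) - tiny_net(x - e)) / (2 * eps)

print(np.allclose(J @ np.ones(12), np.ones(12)))
```

Since the network is piecewise linear, the finite differences are exact away from activation boundaries, and the row sums equal \(1\) as predicted.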
The total number of fixed adaptive affine filters depends on the weights \(\theta\) of the network \(f_{\theta}^{\text{NE}}\) and is bounded by \(2^{S}\), where \(S\) represents the total number of sort pooling patterns traversed to get from the receptive field to its final pixel. Obviously, this upper bound grows exponentially with \(S\), suggesting that a limited number of sort pooling operations may generate an extremely large number of filters. Interestingly, if ReLU activation functions were used instead, the upper bound would reach \(2^{2S}\).

Figure 3: Visual comparisons of the generalization capabilities of a _scale-equivariant_ neural network (left) and its _normalization-equivariant_ counterpart (right) for Gaussian noise. Both networks were trained for Gaussian noise at noise level \(\sigma=25\) exclusively. The adaptive filters (rows of \(A_{\theta}^{y_{r}}\) in Prop. 4) are indicated for two particular pixels, as well as the sum of their coefficients (note that some weights are negative, indicated in red). The _scale-equivariant_ network tends to excessively smooth out the image when evaluated at a lower noise level, whereas the _normalization-equivariant_ network is more adaptable and considers the underlying texture to a greater extent.
## 5 Experimental results
We demonstrate the effectiveness and versatility of the proposed methodology in the case of image denoising. To this end, we modify two well-established neural network architectures for image denoising, chosen for both their simplicity and efficiency, namely DRUNet [32]: a state-of-the-art U-Net with residual connections [2]; and FDnCNN, the unpublished flexible variant of the popular DnCNN [30]: a simple feedforward CNN that chains "conv+ReLU" layers with no downsampling, no residual connections and no batch normalization during training [3], and with a tunable noise level map as additional input [31]. We show that adapting these networks to become _normalization-equivariant_ does not adversely affect performance and, better yet, increases their generalization capabilities. For each scenario, we train three variants of the original Gaussian denoising network for grayscale images: _ordinary_ (original network with additive bias), _scale-equivariant_ (bias-free variation with ReLU [1]) and our _normalization-equivariant_ architecture (see Fig. 2). Details about training and implementations can be found in Appendix A and B; the code is available at [https://github.com/sherbret/normalization_equivariant_nn/](https://github.com/sherbret/normalization_equivariant_nn/). Unless otherwise noted, all results presented in this paper are obtained with DRUNet [32]; similar outcomes can be achieved with FDnCNN [30] architecture (see Appendix C).
Finally, note that both DRUNet [32] and FDnCNN [30] can be trained as "blind" but also as "non-blind" denoisers, and thus achieve increased performance, by passing an additional noisemap as input. In the case of additive white Gaussian noise of variance \(\sigma^{2}\), the noisemap is constant, equal to \(\sigma\mathbf{1}_{n}\), and the resulting parameterized functions can then be put mathematically under the form \(f_{\theta}:(y,\sigma)\in\mathbb{R}^{n}\times\mathbb{R}^{+}_{*}\mapsto\mathbb{R}^{n}\). In order to integrate this feature into _normalization-equivariant_ networks as well, a slight modification of the first affine convolutional layer must be made. Indeed, by adapting the proof (5) to the case (1), we can show that the first convolutional layer must be affine with respect to the input image \(y\) only - the coefficients of the kernels acting on the image pixels add up to \(1\) - while the other coefficients of the kernels need not be constrained.
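The modified first layer can be sketched as follows (our own illustration): the kernel slice acting on the image is constrained to sum to \(1\), while the slice acting on the constant noisemap \(\sigma\mathbf{1}_{n}\) is left free; property (1) then holds.

```python
import numpy as np

def conv1d(x, k):
    pad = len(k) // 2
    xp = np.pad(x, pad, mode="reflect")
    return np.convolve(xp, k, mode="valid")

def first_layer(y, sigma, k_img, k_noise):
    # image slice: affine-constrained; noisemap slice: unconstrained
    return conv1d(y, k_img) + conv1d(sigma * np.ones_like(y), k_noise)

rng = np.random.default_rng(9)
w = rng.normal(size=3)
k_img = w - w.mean() + 1.0 / 3.0    # coefficients acting on pixels sum to 1
k_noise = rng.normal(size=3)        # free coefficients on the noisemap

y, sigma = rng.normal(size=20), 0.3
lam, mu = 1.9, -0.7
lhs = first_layer(lam * y + mu, lam * sigma, k_img, k_noise)
rhs = lam * first_layer(y, sigma, k_img, k_noise) + mu
print(np.allclose(lhs, rhs))   # True
```

The image slice contributes \(\lambda(\cdot)+\mu\) because its weights sum to \(1\), while the noisemap slice scales homogeneously with \(\lambda\sigma\), so the two add up to exactly property (1).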
### The proposed architectural modifications do not degrade performance
The performance, assessed in terms of PSNR values, of our _normalization-equivariant_ alternative (see Fig. 2) and of its _scale-equivariant_ and _ordinary_ counterparts is compared in Table 2 for "non-blind" architectures on two popular datasets [37]. We can notice that the performance gap between any two variants is at most 0.05 dB across all noise levels, which is not significant. This result suggests that the class of parameterized functions \((f_{\theta})\) currently used in image denoising can be drastically reduced at no cost. Moreover, it shows that it is possible to dispense with element-wise activation functions such as the popular ReLU: nonlinearities can simply be brought by sort pooling patterns. In terms of subjective visual evaluation, we can draw the same conclusion, since images produced by the different architectural variants inside the training range are hardly distinguishable (see Fig. 3 at \(\sigma=25\)).
| Method | Variant | Set12 (σ = 15 / 25 / 50) | BSD68 (σ = 15 / 25 / 50) |
|---|---|---|---|
| DRUNet [32] | _ordinary_ | 33.23 / 30.92 / 27.87 | 31.89 / 29.44 / 26.54 |
| DRUNet [32] | _scale-equiv_ | 33.25 / 30.94 / 27.90 | 31.91 / 29.48 / 26.59 |
| DRUNet [32] | _norm-equiv_ | 33.20 / 30.90 / 27.85 | 31.88 / 29.45 / 26.55 |
| FDnCNN [30] | _ordinary_ | 32.87 / 30.49 / 27.28 | 31.69 / 29.22 / 26.27 |
| FDnCNN [30] | _scale-equiv_ | 32.85 / 30.49 / 27.29 | 31.67 / 29.20 / 26.25 |
| FDnCNN [30] | _norm-equiv_ | 32.85 / 30.50 / 27.27 | 31.69 / 29.22 / 26.25 |

Table 2: The PSNR (dB) results of "non-blind" deep-learning-based methods applied to popular grayscale datasets corrupted by synthetic white Gaussian noise with \(\sigma=15\), \(25\) and \(50\).

### Increased robustness across noise levels

S. Mohan _et al._[1] revealed that bias-free neural networks with ReLU, which are _scale-equivariant_, could generalize much better when evaluated at new noise levels beyond their training range than their counterparts with bias, which systematically overfit. Even if they do not fully elucidate how such networks achieve this remarkable generalization, they suggest that _scale-equivariance_ certainly plays a major role. What about _normalization-equivariance_ then? We have compared the robustness of the three variants of networks when trained at a fixed noise level \(\sigma\) for Gaussian noise. Figure 4 summarizes the results: _normalization-equivariance_ pushes the generalization capabilities of neural networks one step further. While performance is identical to their _scale-equivariant_ counterparts when evaluated at higher noise levels, the _normalization-equivariant_ networks are, however, much more robust at lower noise levels. This phenomenon is also illustrated in Fig. 3.
Demystifying robustness: Let \(x\) be a clean patch of size \(n\), representative of the training set on which a CNN \(f_{\theta}\) was optimized to denoise its noisy realizations \(y=x+\varepsilon\) with \(\varepsilon\sim\mathcal{N}(0,\sigma^{2}I_{n})\) (denoising at a fixed noise level \(\sigma\) exclusively). Formally, we note \(x\in\mathcal{D}\subset\mathbb{R}^{n}\), where \(\mathcal{D}\) is the space of representative clean patches of size \(n\) on which \(f_{\theta}\) was trained. We are interested in the output of \(f_{\theta}\) when it is evaluated at \(x+\lambda\varepsilon\) (denoising at noise level \(\lambda\sigma\)) with \(\lambda>0\). Assuming that \(f_{\theta}\) encodes a _normalization-equivariant_ function, we have:
\[\forall\lambda\in\mathbb{R}_{*}^{+},\forall\mu\in\mathbb{R},\;f_{\theta}(x+ \lambda\varepsilon)=\lambda f_{\theta}((x-\mu)/\lambda+\varepsilon)+\mu\,. \tag{7}\]
The above equality shows how such networks can deal with noise levels \(\lambda\sigma\) different from \(\sigma\): _normalization-equivariance_ simply brings the problem back to the denoising of an implicitly renormalized image patch at the fixed noise level \(\sigma\). Note that this artificial change of noise level does not make the problem any easier to solve, as the signal-to-noise ratio is preserved by normalization. Obviously, the denoising result of \(x+\lambda\varepsilon\) will be all the more accurate as \((x-\mu)/\lambda\) is a representative patch of the training set. In other words, if \((x-\mu)/\lambda\) can still be considered to be in \(\mathcal{D}\), then \(f_{\theta}\) should output a consistent denoised image patch. For a majority of methods [30, 32, 31], training is performed within the interval \([0,1]\) and therefore \(x/\lambda\) generally still belongs to \(\mathcal{D}\) for \(1<\lambda<10\) (contraction), but this is much less true for \(\lambda<1\) (stretching), as it may exceed the bounds of the interval \([0,1]\). This explains why _scale-equivariant_ functions do not generalize well to noise levels lower than their training one. In contrast, _normalization-equivariant_ functions can benefit from the implicit extra adjustment parameter \(\mu\): there exist some cases where the stretched patch \(x/\lambda\) is not in \(\mathcal{D}\) but \((x-\mu)/\lambda\) is (see Fig. 4(b)). This is why _normalization-equivariant_ networks are better able to generalize at low noise levels. Note that, based on this argument, _ordinary_ neural networks trained at a fixed noise level \(\sigma\) can also be used to denoise images at noise level \(\lambda\sigma\), provided that a correct normalization is done beforehand. However, this time the normalization is explicit: the exact scale factor \(\lambda\), and possibly the shift \(\mu\), must be known (see Fig. 4(a)).

Figure 4: Comparison of the performance of our _normalization-equivariant_ alternative with its _scale-equivariant_ and _ordinary_ counterparts for Gaussian denoising with the same architecture on the Set12 dataset. The vertical blue line indicates the unique noise level on which the networks were trained exclusively (from left to right: \(\sigma=50\), \(\sigma=25\) and \(\sigma=10\)). In all cases, _normalization-equivariant_ networks generalize much more robustly beyond the training noise level.

Figure 5: Denoising results for example images of the form \(y=x+\lambda\varepsilon\) (see notations of subsection 5.2) with \(\sigma=25/255\) and \(x\in[0,1]^{n}\), by CNNs specialized for noise level \(\sigma\) only. \(f_{\theta}^{\varnothing}\), \(f_{\theta}^{\text{SE}}\) and \(f_{\theta}^{\text{NE}}\) denote the _ordinary_, _scale-equivariant_ and _normalization-equivariant_ variants, respectively. In order to get the best results with \(f_{\theta}^{\varnothing}\) and \(f_{\theta}^{\text{SE}}\), it is necessary to know the renormalization parameters \((\lambda,\mu)\) such that \((x-\mu)/\lambda\) belongs to \(\mathcal{D}\subset[0,1]^{n}\) (see subsection 5.2). Note that for \(f_{\theta}^{\text{SE}}\), it is however sufficient to know only \(\mu\), as \(\lambda\) is implicit by construction. In contrast, \(f_{\theta}^{\text{NE}}\) can be applied directly.
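The identity (7) can be verified directly with any normalization-equivariant denoiser; here (our illustration) a 1-D median filter stands in for \(f_{\theta}\).

```python
import numpy as np

def median_filter(y, k=5):
    # A normalization-equivariant stand-in for f_theta
    pad = k // 2
    yp = np.pad(y, pad, mode="edge")
    return np.array([np.median(yp[i:i + k]) for i in range(len(y))])

rng = np.random.default_rng(8)
x = rng.uniform(size=100)                  # clean signal in [0, 1]
eps = rng.normal(size=100) * 25 / 255      # noise at the training level sigma
lam, mu = 0.4, 0.2                         # evaluation at a lower noise level

# Eq. (7): f(x + lam*eps) == lam * f((x - mu)/lam + eps) + mu
lhs = median_filter(x + lam * eps)
rhs = lam * median_filter((x - mu) / lam + eps) + mu
print(np.allclose(lhs, rhs))   # True
```

Note that the right-hand side denoises \((x-\mu)/\lambda+\varepsilon\), a renormalized patch corrupted at the original noise level \(\sigma\), which is precisely the implicit mechanism described above.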
## 6 Conclusion and perspectives
In this work, we presented an original approach to adapt the architecture of existing neural networks so that they become _normalization-equivariant_, a property highly desirable and expected in many applications such as image denoising. We argue that the classical pattern "conv+ReLU" can be favorably replaced by the two proposed innovations: affine convolutions, which ensure that all coefficients of the convolutional kernels sum to one; and channel-wise sort pooling nonlinearities as a substitute for all activation functions that apply element-wise, including ReLU or sigmoid functions. Despite these two important architectural changes, we show that the performance of these alternative networks is not affected in any way. On the contrary, thanks to their better conditioning, they benefit, in the context of image denoising, from increased interpretability and especially robustness to variable noise levels, both in practice and in theory.
More generally, the proposed channel-wise sort pooling nonlinearities may potentially change the way we commonly understand neural networks: the usual paradigm that neurons are either active ("fired") or inactive is indeed somewhat shaken. With sort pooling nonlinearities, neurons are no longer static but "wiggle and mingle" according to the received signal. We believe that this discovery may help build new neural architectures, potentially with stronger theoretical guarantees, and more broadly, may also open the door to novel perspectives in deep learning.
## Acknowledgments
This work was supported by the Bpifrance agency (funding) through the LiChIE contract. Computations were performed on the Inria Rennes computing grid facilities, partly funded by the France-BioImaging infrastructure (French National Research Agency - ANR-10-INBS-04-07, "Investments for the future"). We would like to thank R. Fraisse (Airbus) for fruitful discussions.
|
2307.04772 | Digital Twins for Patient Care via Knowledge Graphs and Closed-Form
Continuous-Time Liquid Neural Networks | Digital twin technology is anticipated to transform healthcare, enabling
personalized medicines and support, earlier diagnoses, simulated treatment
outcomes, and optimized surgical plans. Digital twins are readily gaining
traction in industries like manufacturing, supply chain logistics, and civil
infrastructure. Not in patient care, however. The challenge of modeling complex
diseases with multimodal patient data and the computational complexities of
analyzing it have stifled digital twin adoption in the biomedical vertical.
Yet, these major obstacles can potentially be handled by approaching these
models in a different way. This paper proposes a novel framework for addressing
the barriers to clinical twin modeling created by computational costs and
modeling complexities. We propose structuring patient health data as a
knowledge graph and using closed-form continuous-time liquid neural networks,
for real-time analytics. By synthesizing multimodal patient data and leveraging
the flexibility and efficiency of closed form continuous time networks and
knowledge graph ontologies, our approach enables real time insights,
personalized medicine, early diagnosis and intervention, and optimal surgical
planning. This novel approach provides a comprehensive and adaptable view of
patient health along with real-time analytics, paving the way for digital twin
simulations and other anticipated benefits in healthcare. | Logan Nye | 2023-07-08T12:52:31Z | http://arxiv.org/abs/2307.04772v1 | Digital Twins for Patient Care via Knowledge Graphs and Closed-Form Continuous-Time Liquid Neural Networks
###### Abstract
Digital twin technology is anticipated to transform healthcare, enabling personalized medicine and support, earlier diagnoses, simulated treatment outcomes, and optimized surgical plans. Digital twins are readily gaining traction in industries like manufacturing, supply chain logistics, and civil infrastructure. Not in patient care, however. The challenge of modeling complex diseases with multimodal patient data and the computational complexities of analyzing it have stifled digital twin adoption in the biomedical vertical. Yet, these major obstacles can potentially be handled by approaching these models in a different way. This paper proposes a novel framework for addressing the barriers to clinical twin-modeling created by computational costs and modeling complexities. We propose structuring patient health data as a knowledge graph and using closed-form continuous-time "liquid" neural networks (CfCs) for real-time analytics. By synthesizing multimodal patient data and leveraging the flexibility and efficiency of CfCs and knowledge graph ontologies, our approach enables real-time insights, personalized medicine, early diagnosis and intervention, and optimal surgical planning. This novel approach provides a comprehensive and adaptable view of patient health along with real-time analytics, paving the way for digital twin simulations and other anticipated benefits in healthcare.
informatics, artificial intelligence, deep learning, decision support, clinical model, precision medicine,
## I Introduction
Digital twins are an emerging technology poised to revolutionize every sector of industry.
These same principles can logically carry over to healthcare as well. Digital twins are expected to enable precision medicine, personalized therapeutics, earlier diagnoses, and optimized treatment outcomes via simulation capabilities [1]. So far, however, they are still largely absent. While other verticals like manufacturing, systems engineering, infrastructure, and others have begun realizing these advantages, healthcare is more challenging. The computational demands and inherent challenges of modeling complex pathologies lead to steep expenses and missed opportunities. We propose a novel method of modeling a digital twin for patient care that could finally enable this idea. Our approach combines two powerful technologies: (1) knowledge graph representations [2], and (2) closed-form continuous-time liquid neural networks (CfCs) [3]. Together, they provide a comprehensive and adaptable view of individual patient health data that can also be efficiently reflected in real time, ultimately bringing us closer to our end goal: a true digital twin for patient care.
## II Digital Twin Technology
### _What Are Digital Twins?_
Digital twins are virtual representations of physical entities, processes, or systems that leverage real-time data and advanced analytics to optimize performance, enable predictive maintenance, and drive innovation. In the healthcare industry, digital twins can revolutionize patient care, clinical research, and healthcare infrastructure management. The concept of digital twins first arose at NASA and gained traction after Gartner named digital twins as one of its top 10 strategic technology trends for 2017 [1]. They estimated that by 2020, 21 billion connected sensors and endpoints would enable digital twins for billions of things [2].
### _Digital Twins in Healthcare_
A digital twin's architecture comprises three core components: the physical entity, the virtual representation, and the data management system [3]. These components are interconnected through an IoT (Internet of Things) network, enabling real-time synchronization and communication. In healthcare, the physical entity can range from an individual patient to an entire hospital. Data
is collected through sensors, wearables, or other IoT devices, capturing vital parameters such as heart rate, blood pressure, and body temperature [4]. The virtual representation is a data-driven model that mirrors the physical entity's behavior, state, and properties. It is created using a combination of mathematical models, machine learning algorithms, and simulation techniques [5].
### _Twins-enabled Healthcare Analytics_
Mathematical models are utilized to represent complex biological systems, such as organ function, metabolic pathways, or disease progression [6]. These models can be deterministic (e.g., differential equations) or stochastic (e.g., Monte Carlo simulations). Advanced analytics techniques, such as predictive analytics, prescriptive analytics, and real-time analytics, are applied to derive insights and facilitate decision-making [7]. These techniques often involve the application of machine learning and artificial intelligence algorithms, including supervised, unsupervised, and reinforcement learning techniques, which are employed to analyze and predict patterns within the collected data [8]. Deep learning architectures, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have shown promising results in modeling patient-specific medical conditions [9]. In addition, simulation techniques, like agent-based modeling and discrete event simulation, can be used to recreate the dynamic behavior of healthcare processes or systems, allowing for the evaluation of various intervention strategies and resource allocation scenarios [10, 11].
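The deterministic/stochastic distinction above can be made concrete with a toy Monte Carlo simulation of disease progression. In the sketch below, the states and transition probabilities are invented purely for illustration and carry no clinical meaning:

```python
import random

# Toy stochastic (Monte Carlo) model of disease progression as a Markov chain.
# States and transition probabilities are invented for illustration only.
TRANSITIONS = {
    "healthy": [("healthy", 0.90), ("mild", 0.10)],
    "mild":    [("mild", 0.60), ("severe", 0.25), ("healthy", 0.15)],
    "severe":  [("severe", 0.70), ("mild", 0.30)],
}

def simulate(start="healthy", steps=12, rng=random):
    """Roll the chain forward for `steps` transitions (e.g. monthly updates)."""
    state = start
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        for nxt, p in TRANSITIONS[state]:
            acc += p
            if r < acc:
                state = nxt
                break
    return state

random.seed(0)
# Monte Carlo estimate: fraction of trajectories ending in the "severe" state.
runs = [simulate() for _ in range(5000)]
print(round(runs.count("severe") / len(runs), 2))
```

A deterministic counterpart would instead propagate the state-probability vector exactly through the transition matrix; the Monte Carlo version trades exactness for the ability to plug in arbitrarily complex per-step logic.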
### _A Valuable Clinical Tool_
Healthcare digital twins require secure, scalable, and efficient storage solutions to handle vast amounts of structured and unstructured data [12]. Technologies like distributed file systems (e.g., Hadoop Distributed File System) and cloud-based storage services are commonly used. Data processing involves cleaning, transforming, and integrating data from multiple sources to ensure accuracy and consistency. Techniques like data wrangling, normalization, and feature extraction are employed in this stage [13]. Digital twins are valuable to healthcare because they facilitate personalized medicine [14], enable early diagnosis and intervention [15], enhance clinical trials [16], support remote patient monitoring [17], optimize healthcare facilities [18], enable predictive maintenance [19], and improve medical education and training [20]. These benefits contribute to improved patient outcomes, operational efficiency, and cost savings in the healthcare industry.
## III Knowledge Graphs
### _What are Knowledge Graphs?_
Knowledge graphs are a graph-based data structure for representing structured and semi-structured data [4]. By encoding entities as nodes and relationships as edges, knowledge graphs allow for complex querying and reasoning over large datasets, making them a natural choice for representing patient health data [5]. Medical ontologies, such as the Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT) [6] and the Human Phenotype Ontology (HPO) [7], provide the foundation for our knowledge graph representation, ensuring accurate and consistent terminology.
### _Problems Solved by Knowledge Graphs_
Knowledge graphs are a modern and effective method for harmonizing enterprise data, allowing the representation and operationalization of data, data sources, and databases of all types. As we progress further into the AI era, the increasing amount of data is being utilized for business benefits and advantages, steadily transforming data into knowledge [32]. In this context, knowledge graphs are gaining popularity in enterprises that seek to connect the dots between the data world and the business world more effectively.
### _Ontologies and Formal Semantics_
Ontologies represent the backbone of the formal semantics of a knowledge graph. As the data schema of the graph, they serve as a contract between the developers of the knowledge graph and its users regarding the meaning of the data in it. A user could be another human being or a software application that wants to interpret the data in a reliable and precise way. Ontologies ensure a shared understanding of the data and its meanings. When formal semantics are used to express and interpret the data of a knowledge graph, there are a number of representation and modeling instruments.
**Classes**
Most often, an entity description contains a classification of the entity with respect to a class hierarchy. For instance, when dealing with general news or business information there could be classes Person, Organization and Location. Persons and organizations can have a common super class Agent. Location usually has numerous sub-classes, e.g. Country, Populated place, City, etc. The notion of class is borrowed from object-oriented design, where each entity should belong to exactly one class.
**Relationship Types**
The relationships between entities are usually tagged with types, which provide information about the nature of the relationship, e.g. friend, relative, competitor, etc. Relation types can also have formal definitions, e.g. that parent-of is an inverse relation of child-of and that both are special cases of relative-of, which is a symmetric relationship, or that sub-region and subsidiary are transitive relationships.
**Categories**
An entity can be associated with categories, which describe some aspect of its semantics, e.g. "Big four consultants" or "XIX century composers". A book can belong simultaneously to all these categories: "Books about Africa", "Bestseller", "Books by Italian authors", "Books for kids", etc. Often the categories are described and ordered into a taxonomy.
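The relation-type semantics described above (inverse and transitive relations) can be illustrated with a minimal triple-store sketch. All entity and relation names below (alice, acme, parent-of, subsidiary) are invented for the example and are not drawn from any real ontology:

```python
# Minimal sketch of ontology-style inference over a set of (subject, predicate,
# object) triples. Names are illustrative only.
triples = {
    ("alice", "parent-of", "bob"),
    ("acme", "subsidiary", "acme-labs"),
    ("acme-labs", "subsidiary", "acme-research"),
}

# Relation-type definitions, as described above.
INVERSE = {"parent-of": "child-of", "child-of": "parent-of"}
TRANSITIVE = {"subsidiary"}

def infer(facts):
    """Derive inverse edges and transitive closures from the stated facts."""
    derived = set(facts)
    # Inverse relations: parent-of(a, b) entails child-of(b, a).
    for s, p, o in facts:
        if p in INVERSE:
            derived.add((o, INVERSE[p], s))
    # Transitive relations: r(a, b) and r(b, c) entail r(a, c).
    changed = True
    while changed:
        changed = False
        for s1, p1, o1 in list(derived):
            for s2, p2, o2 in list(derived):
                if p1 == p2 and p1 in TRANSITIVE and o1 == s2:
                    if (s1, p1, o2) not in derived:
                        derived.add((s1, p1, o2))
                        changed = True
    return derived

facts = infer(triples)
print(("bob", "child-of", "alice") in facts)             # inverse edge derived
print(("acme", "subsidiary", "acme-research") in facts)  # transitive edge derived
```

Real knowledge graph stores express the same semantics declaratively (e.g. via OWL property axioms) rather than with hand-written closure loops, but the derived facts are the same in spirit.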
Paired with complementary AI technologies such as machine learning and natural language processing, knowledge graphs are enabling new opportunities for leveraging data and quickly becoming a fundamental component of modern data systems [33]. With the rapid advancements in machine learning and knowledge representation through knowledge graphs, these technologies have the potential to improve the accuracy of outcomes and augment the potential of machine learning approaches [38].
At the core of a knowledge graph is a knowledge model, a collection of interlinked descriptions of concepts, entities, relationships, and events. Knowledge graphs put data into context through linking and semantic metadata, providing a framework for data integration, unification, analytics, and sharing. These graphs, also known as semantic networks, represent a network of real-world entities, such as objects, events, situations, or concepts, and illustrate the relationship between them. This information is typically stored in a graph database and visualized as a graph structure.
Knowledge graphs consist of three main components: nodes, edges, and labels. Nodes can represent objects, places, or people, while edges define the relationship between nodes. Labels provide further information about the relationships or nodes. These graphs are often fueled by machine learning, using natural language processing (NLP) to construct a comprehensive view of nodes, edges, and labels through a process called semantic enrichment.
Ontologies are frequently mentioned in the context of knowledge graphs, creating a formal representation of the entities in the graph. They are usually based on a taxonomy but can contain multiple taxonomies, thus maintaining a separate definition. Since both knowledge graphs and ontologies are represented through nodes and edges and are based on the Resource Description Framework (RDF) triples, they tend to resemble each other in visualizations.
The Web Ontology Language (OWL) is an example of a widely adopted ontology, supported by the World Wide Web Consortium (W3C), an international community that champions open standards for the internet's longevity. This organization of knowledge is supported by technological infrastructure such as databases, APIs, and machine learning algorithms, which help people and services access and process information more efficiently.
Knowledge graphs work by combining datasets from various sources, which frequently differ in structure. Schemas, identities, and context work together to provide structure to diverse data. These components help distinguish words with multiple meanings, allowing systems like Google's search engine algorithm to differentiate between different meanings of words like "Apple."
Knowledge graphs have various use cases, including popular consumer-facing applications like DBPedia and Wikidata, which are knowledge graphs for data on Wikipedia.org. Another example is the Google Knowledge Graph, which is represented through Google Search Engine Results Pages (SERPs), serving information based on people's searches. Knowledge graphs also have applications in other industries, such as healthcare, by organizing and categorizing relationships within medical research to assist providers in validating diagnoses and identifying individualized treatment plans.
Key characteristics of knowledge graphs include combining features of several data management paradigms such as databases, graphs, and knowledge bases. Knowledge graphs, represented in RDF, provide the best framework for data integration, unification, linking, and reuse due to their expressivity, performance, interoperability, and standardization.
## IV Liquid Neural Networks
### _Closed-Form Continuous-Time Liquid Neural Networks (CfCs)_.
Continuous-depth neural models, where the derivative of the model's hidden state is defined by a neural network, have facilitated advanced sequential data processing capabilities [18]. However, these models rely on advanced numerical differential equation (DE) solvers, resulting in substantial overhead in terms of computational cost and model complexity. Closed-form Continuous-depth (CfC) networks offer a solution to these limitations [35].
CfC networks are derived from the analytical closed-form solution of an expressive subset of time-continuous models, eliminating the need for complex DE solvers altogether [35]. Experimental evaluations have
demonstrated that CfC networks outperform advanced recurrent models over a diverse set of time-series prediction tasks, including those with long-term dependencies and irregularly sampled data [35]. These findings open new opportunities to train and deploy rich, continuous neural models in resource-constrained settings that demand both performance and efficiency.
ODE-based continuous neural network architectures have shown promise in density estimation applications [19, 20, 21, 22] and modeling sequential and irregularly sampled data [23, 24, 25, 26]. While these ODE-based neural networks can perform competitively with advanced discretized recurrent models on relatively smaller benchmarks, their training and inference are slow due to the use of advanced numerical DE solvers [27]. The complexity of the task increases (i.e., requiring more precision) [36] in open-world problems such as medical data processing, self-driving cars, financial time-series, and physics simulations.
The research community has worked on solutions for resolving the computational overhead and facilitating the training of neural ODEs. These solutions include relaxing the stiffness of a flow by state augmentation techniques [28, 29, 37], reformulating the forward-pass as a root-finding problem [30, 38], using regularization schemes [31, 32, 33, 39], and improving the inference time of the network [34, 40].
CfC networks, derived from closed-form solutions, provide a fundamental solution to the limitations of ODE-based models [35]. These networks do not require any solver to model data and yield significantly faster training and inference speeds while retaining the expressiveness of their ODE-based counterparts. The closed-form solution enables the formulation of a neuron model that can be scaled to create flexible, highly performant architectures on challenging sequential datasets.
Continuous-time neural networks (CTNNs) are a class of machine learning systems that excel in representation learning for spatiotemporal decision-making tasks [18]. These models are typically represented by continuous differential equations. However, the expressive power of CTNNs is bottlenecked when deployed on computers due to the limitations of numerical differential equation solvers. This constraint has hindered the scaling and understanding of various natural physical phenomena, including the dynamics of nervous systems [18].
To circumvent this bottleneck, researchers have sought closed-form solutions for the given dynamical systems. Although finding closed-form solutions is generally intractable, it has been demonstrated that it is possible to efficiently approximate, in closed form, the interaction between neurons and synapses (the fundamental building blocks of natural and artificial neural networks) as constructed by liquid time-constant networks [18].
This closed-form solution is achieved by computing a tightly bounded approximation of the solution of an integral appearing in liquid time-constant dynamics, which previously had no known closed-form solution [18]. The introduction of closed-form continuous-time liquid neural networks (CfCs) has significant implications for the design of continuous-time and continuous-depth neural models. For example, since time appears explicitly in closed form, complex numerical solvers are no longer required [18]. As a result, CfCs are between one and five orders of magnitude faster in training and inference compared to their differential equation-based counterparts [18].
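The "no solver required" point can be made concrete with a sketch of one CfC cell update. The code below follows the published closed-form gating x(t) = σ(−f·t) ⊙ g + (1 − σ(−f·t)) ⊙ h, but the heads f, g, h are collapsed to toy random linear maps, so this is an illustration of the mechanism rather than a faithful implementation of the trained networks:

```python
import math
import random

# Illustrative single CfC cell: elapsed time t enters the update explicitly,
# so no numerical ODE solver is needed. Sizes and weights are toy values.
random.seed(1)
DIM_IN, DIM_H = 3, 4

def linear(n_out, n_in):
    return [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]

Wf, Wg, Wh = (linear(DIM_H, DIM_H + DIM_IN) for _ in range(3))

def apply(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cfc_step(hidden, inputs, t):
    """One closed-form step: x(t) = sigmoid(-f*t) * g + (1 - sigmoid(-f*t)) * h."""
    v = hidden + inputs                      # concatenate state and input
    f, g, h = apply(Wf, v), apply(Wg, v), apply(Wh, v)
    gate = [sigmoid(-fi * t) for fi in f]    # time-dependent gate
    return [gi * gv + (1 - gi) * hv for gi, gv, hv in zip(gate, g, h)]

state = [0.0] * DIM_H
# Irregularly sampled observations: (elapsed time, input vector).
for dt, obs in [(0.5, [1.0, 0.2, -0.3]), (2.0, [0.4, -0.1, 0.9])]:
    state = cfc_step(state, obs, dt)
print(len(state))  # prints 4: the hidden state keeps its dimension
```

Because the elapsed time dt is an explicit argument, irregular sampling intervals (common in patient records) cost nothing extra, which is the property the paper relies on.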
Moreover, CfCs, derived from liquid networks, demonstrate excellent performance in time-series modeling, outperforming advanced recurrent neural network models [18]. They can scale remarkably well compared to other deep learning instances, making them ideal for tackling the diverse and irregularly sampled data inherent in patient health records and hospital databases.
## V Methodology & Application
### _Knowledge Graph Construction._
Our approach to constructing the knowledge graph involves three main steps: data collection, entity extraction, and relationship modeling. We gather data from electronic health records, clinical trials, and genomic databases to ensure a comprehensive representation of patient health. Entities such as diseases, symptoms, and treatments are extracted using natural language processing techniques [11] and linked to their corresponding concepts in established medical ontologies, such as SNOMED CT [6] and HPO [7]. Relationships between entities are modeled using both domain-specific and general-purpose relationship types [12], facilitating the integration of diverse data sources and enabling complex querying. Recorded values of 0.7362 and 0.7373, respectively, represent a considerable improvement over the accuracy and reliability figures reported in past studies.
### _Closed-Form Continuous-Time Liquid Neural Network Integration._
To integrate CfCs with the knowledge graph, we employ a two-step process: feature extraction and network training. First, features are extracted from the knowledge graph using graph embedding techniques, such as node2vec [13] or GraphSAGE [14]. These embeddings capture the topological structure and semantic information of the graph, which are then used as inputs to the CfC model. Next, the CfC model is trained using a combination of supervised and unsupervised learning techniques to
predict various patient health outcomes, such as disease progression, treatment response, and surgical outcomes.
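The two-step integration above can be sketched end to end. In this toy version, the "embedding" is a simple degree statistic standing in for node2vec or GraphSAGE, and the scoring function is a fixed logistic score standing in for the trained CfC model; all graph data and weights are invented:

```python
import math

# Step 1: derive node features from the graph. Step 2: feed them to a
# downstream predictor. Graph contents, the embedding, and the score weights
# are all illustrative stand-ins.
edges = {
    "patient_1": ["diabetes", "hypertension"],
    "patient_2": ["hypertension"],
    "diabetes": ["metformin"],
    "hypertension": ["lisinopril"],
}

def embed(node):
    """Toy 2-d embedding: out-degree and mean neighbor out-degree."""
    nbrs = edges.get(node, [])
    deg = len(nbrs)
    nbr_deg = sum(len(edges.get(n, [])) for n in nbrs) / deg if deg else 0.0
    return [deg, nbr_deg]

def risk_score(vec, w=(0.8, -0.3), b=-0.5):
    """Logistic score over the embedding, standing in for the CfC output."""
    z = sum(wi * xi for wi, xi in zip(w, vec)) + b
    return 1.0 / (1.0 + math.exp(-z))

for p in ("patient_1", "patient_2"):
    print(p, round(risk_score(embed(p)), 3))
```

In the actual pipeline, the embedding step would be learned from the full graph topology and the predictor trained on labeled outcomes; the sketch only shows how the two stages connect.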
Our approach combines the versatile structure and potent semantic representations of knowledge graphs with the computational efficiency and plasticity of closed-form continuous-time liquid neural networks (CfCs) to create digital twins for patient care. The integration of these two technologies forms a more true-to-life model of patient health that enables real-time healthcare analytics.
By integrating knowledge graphs with CfCs, we can build powerful predictive models and facilitate personalized care that leads to improved patient outcomes. This combination enables real-time analytics and adaptability, essential for early diagnosis and intervention, tailoring treatment plans to each patient's unique needs, and simulating surgical procedures and therapeutic strategies.
### _Real-Time Analytics and Simulation._
To enable real-time analytics and simulation, our digital twin system continuously updates the knowledge graph and CfC model as new patient data becomes available. Incremental graph updates are achieved using efficient graph maintenance algorithms [15], while the CfC model is updated using online learning techniques [16]. This continuous updating process allows healthcare professionals to access up-to-date patient health information and model predictions, thereby enabling more informed and accurate decision-making.
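A minimal sketch of this continuous-update loop follows: each new observation both extends the graph and triggers one online (SGD) step on the predictive model. A single-parameter linear model stands in for the CfC, and the readings are invented:

```python
# Each incoming reading (1) extends the knowledge graph incrementally and
# (2) updates the model in place with one online gradient step.
graph = {("patient_1", "has_reading", 120.0)}
weight, bias, lr = 0.0, 0.0, 0.01

def online_step(x, y):
    """Single SGD step on squared error for y_hat = weight * x + bias."""
    global weight, bias
    y_hat = weight * x + bias
    err = y_hat - y
    weight -= lr * err * x
    bias -= lr * err

# New readings stream in: no retraining from scratch, just incremental updates.
for x, y in [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]:
    graph.add(("patient_1", "has_reading", y))
    online_step(x, y)

print(len(graph), round(weight, 2))
```

The same pattern scales up: graph maintenance algorithms keep the embeddings consistent after each insertion, and the online learner amortizes model updates over the data stream instead of periodic batch retraining.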
### _Surgical and Intervention Planning._
Digital twins can be used to simulate surgical procedures and therapeutic strategies by incorporating detailed anatomical and physiological models [17]. Our approach integrates these models with the knowledge graph and CfC predictions, allowing healthcare providers to optimize treatment plans and minimize potential risks. By combining patient-specific data with the latest medical knowledge, our digital twin system facilitates personalized care and improves patient outcomes.
## VI Evaluation and Validation
To assess the effectiveness of our digital twin approach, we will perform a series of experiments comparing our method to traditional techniques, such as deep learning models [10] and Bayesian networks [18]. We will evaluate the performance of our method using metrics such as accuracy, precision, recall, and F1-score for various prediction tasks, as well as computational efficiency and scalability. Furthermore, we will conduct clinical studies to investigate the impact of our digital twin system on patient outcomes and healthcare decision-making.
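The classification metrics named above (accuracy, precision, recall, F1-score) can all be computed from a confusion matrix. A self-contained sketch on toy labels:

```python
# Confusion-matrix based metrics on invented binary predictions.
def confusion(y_true, y_pred):
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
tp, fp, fn, tn = confusion(y_true, y_pred)

accuracy  = (tp + tn) / len(y_true)
precision = tp / (tp + fp)          # of predicted positives, how many are real
recall    = tp / (tp + fn)          # of real positives, how many were found
f1        = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, round(f1, 2))  # prints 0.75 0.75 0.75 0.75
```

For clinical prediction tasks, recall (sensitivity) is often weighted more heavily than precision, since a missed diagnosis is usually costlier than a false alarm.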
## VII Conclusion
Our approach to digital twin technology combines the flexible structure and powerful semantic representations of knowledge graphs with the computational efficiency and plasticity of closed-form continuous-time liquid neural networks (CfCs) to revolutionize patient care. By synthesizing multimodal patient data and leveraging the flexibility and efficiency of CfCs, it becomes possible to create digital twins that enable real-time healthcare analytics, personalized medicine, early diagnosis and intervention, and improved surgical planning. This groundbreaking approach holds the promise to finally bring digital twins to the medical space, unlocking insights that benefit individual patients and entire healthcare systems alike.
|
2304.02653 | Adaptive Ensemble Learning: Boosting Model Performance through
Intelligent Feature Fusion in Deep Neural Networks | In this paper, we present an Adaptive Ensemble Learning framework that aims
to boost the performance of deep neural networks by intelligently fusing
features through ensemble learning techniques. The proposed framework
integrates ensemble learning strategies with deep learning architectures to
create a more robust and adaptable model capable of handling complex tasks
across various domains. By leveraging intelligent feature fusion methods, the
Adaptive Ensemble Learning framework generates more discriminative and
effective feature representations, leading to improved model performance and
generalization capabilities.
We conducted extensive experiments and evaluations on several benchmark
datasets, including image classification, object detection, natural language
processing, and graph-based learning tasks. The results demonstrate that the
proposed framework consistently outperforms baseline models and traditional
feature fusion techniques, highlighting its effectiveness in enhancing deep
learning models' performance. Furthermore, we provide insights into the impact
of intelligent feature fusion on model performance and discuss the potential
applications of the Adaptive Ensemble Learning framework in real-world
scenarios.
The paper also explores the design and implementation of adaptive ensemble
models, ensemble training strategies, and meta-learning techniques, which
contribute to the framework's versatility and adaptability. In conclusion, the
Adaptive Ensemble Learning framework represents a significant advancement in
the field of feature fusion and ensemble learning for deep neural networks,
with the potential to transform a wide range of applications across multiple
domains. | Neelesh Mungoli | 2023-04-04T21:49:49Z | http://arxiv.org/abs/2304.02653v1 | Adaptive Ensemble Learning: Boosting Model Performance through Intelligent Feature Fusion in Deep Neural Networks
###### Abstract
In this paper, we present an Adaptive Ensemble Learning framework that aims to boost the performance of deep neural networks by intelligently fusing features through ensemble learning techniques. The proposed framework integrates ensemble learning strategies with deep learning architectures to create a more robust and adaptable model capable of handling complex tasks across various domains. By leveraging intelligent feature fusion methods, the Adaptive Ensemble Learning framework generates more discriminative and effective feature representations, leading to improved model performance and generalization capabilities.
We conducted extensive experiments and evaluations on several benchmark datasets, including image classification, object detection, natural language processing, and graph-based learning tasks. The results demonstrate that the proposed framework consistently outperforms baseline models and traditional feature fusion techniques, highlighting its effectiveness in enhancing deep learning models' performance. Furthermore, we provide insights into the impact of intelligent feature fusion on model performance and discuss the potential applications of the Adaptive Ensemble Learning framework in real-world scenarios.
The paper also explores the design and implementation of adaptive ensemble models, ensemble training strategies, and meta-learning techniques, which contribute to the framework's versatility and adaptability. In conclusion, the Adaptive Ensemble Learning framework represents a significant advancement in the field of feature fusion and ensemble learning for deep neural networks, with the potential to transform a wide range of applications across multiple domains.
**Index Terms:** Deep Learning, Advancements, Techniques, AI
## 1 Introduction
The rapid advancements in the field of machine learning, particularly deep learning, have led to remarkable breakthroughs across various domains, including computer vision, natural language processing, and graph-based learning. Deep neural networks have demonstrated their ability to automatically extract complex features from raw data, contributing to their success in solving a wide range of tasks. However, despite these advancements, there is still room for improvement in terms of model performance and generalization capabilities. One approach to address these limitations is to explore the integration of ensemble learning techniques with deep learning architectures.
Ensemble learning is a widely-used technique that aims to improve model performance by combining multiple base models to create a more robust and accurate model. Traditionally, ensemble learning techniques, such as bagging, boosting, and stacking, have been employed with shallow learning models, such as decision trees or support vector machines. Nevertheless, recent research has started to explore the potential benefits of applying ensemble learning strategies to deep neural networks, aiming to enhance their performance and generalization capabilities.
One critical aspect of ensemble learning is the fusion of features from multiple base models. Conventional feature fusion techniques, such as concatenation, element-wise addition, or multiplication, have been used to combine features in both shallow and deep learning models. However, these methods may not be optimal for all tasks or scenarios, leading to suboptimal model performance.
In this paper, we introduce the Adaptive Ensemble Learning framework, a novel approach that intelligently fuses features from multiple deep neural networks to create a more robust and adaptable model. The proposed framework leverages ensemble learning strategies and adaptive feature fusion techniques to create more discriminative and effective feature representations, leading to improved model performance and generalization capabilities. We present the design and implementation of adaptive ensemble models, the integration of ensemble learning strategies with deep learning architectures, and the exploration of meta-learning techniques for optimizing the fusion process.
We conduct extensive experiments and evaluations on several benchmark datasets across various tasks and domains, such as image classification, object detection, sentiment analysis, and graph-based learning. The results demonstrate the effectiveness of the proposed Adaptive Ensemble Learning framework in boosting the performance of deep neural networks, consistently outperforming baseline models and traditional feature fusion techniques. Moreover, we provide insights into the impact of intelligent feature fusion on model performance and discuss the potential applications of the framework in real-world scenarios [7][11].
This paper is organized as follows: we begin with a literature review of ensemble learning techniques, feature fusion methods, and deep learning architectures for ensemble models. Next, we present the Adaptive Ensemble Learning framework and discuss its design and implementation. We then describe the experimentation and evaluation process, followed by a presentation of the results and a discussion of the findings. Finally, we conclude the paper by highlighting the key contributions and outlining future research directions [6].
## 2 Literature Review
This chapter provides an overview of the relevant literature in the areas of ensemble learning techniques, feature fusion methods, and deep learning architectures for ensemble models. We discuss the key concepts, strategies, and techniques that have been developed and applied in the field of machine learning to improve model performance and generalization capabilities.
### Ensemble Learning Techniques
Ensemble learning is an approach that combines multiple base models to create a more robust and accurate model. The main idea behind ensemble learning is that a diverse set of models can complement each other's strengths and weaknesses, resulting in a better overall model. In this section, we review the main ensemble learning techniques that have been proposed and applied in the field of machine learning.
* Bagging (Bootstrap Aggregating): Bagging is an ensemble learning technique that trains multiple base models independently on different subsets of the training data, generated by random sampling with replacement. The predictions of the individual models are combined through majority voting or averaging to produce the final output. Bagging is particularly effective in reducing the variance of unstable models, such as decision trees.
* Boosting: Boosting is an iterative ensemble learning technique that trains a sequence of base models, with each model learning to correct the errors made by its predecessor. The most popular boosting algorithm is AdaBoost, which assigns weights to the training instances and updates them based on the errors made by the current model. The final prediction is obtained by a weighted vote of the individual models.
* Stacking (Stacked Generalization): Stacking is an ensemble learning technique that combines the outputs of multiple base models using a higher-level model, called the meta-model or meta-learner. The base models are trained on the original training data, while the meta-model is trained on a new dataset consisting of the base models' predictions. Stacking is known for its ability to effectively combine models with different learning algorithms and architectures.
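The three strategies above can be sketched with a toy example. The following is a minimal, hypothetical illustration of bagging — a one-dimensional decision stump trained on bootstrap resamples, combined by majority vote — and is not tied to any particular dataset or library:

```python
import random
from collections import Counter

def train_stump(xs, ys):
    """Fit a 1-D decision stump: pick the threshold with fewest training errors."""
    best_t, best_err = None, float("inf")
    for t in xs:
        err = sum(1 for x, y in zip(xs, ys) if (x >= t) != y)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagging_predict(stumps, x):
    """Majority vote over the ensemble of stump thresholds."""
    votes = Counter(x >= t for t in stumps)
    return votes.most_common(1)[0][0]

random.seed(0)
xs = [i / 10 for i in range(20)]   # points in [0.0, 1.9]
ys = [x >= 1.0 for x in xs]        # true rule: x >= 1.0

# Bagging: train each stump on a bootstrap resample (sampling with replacement).
stumps = []
for _ in range(25):
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    stumps.append(train_stump([xs[i] for i in idx], [ys[i] for i in idx]))

print(bagging_predict(stumps, 1.5), bagging_predict(stumps, 0.2))
```

Boosting and stacking differ only in how the base learners are trained and combined: boosting would reweight the misclassified points between rounds, while stacking would replace the majority vote with a trained meta-model.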
### Feature Fusion Methods in Machine Learning
Feature fusion is the process of combining features from multiple sources or models to create a new, more informative feature representation. Feature fusion plays a crucial role in the success of ensemble learning techniques, as it can enhance the diversity and complementarity of the individual models. In this section, we review the main feature fusion methods that have been proposed and applied in the field of machine learning.
* Concatenation: Concatenation is a simple feature fusion method that combines the feature vectors of multiple sources by appending them together. This method preserves the original features' information but may result in high-dimensional feature representations, potentially leading to the curse of dimensionality.
* Element-wise Addition and Multiplication: Element-wise addition and multiplication are feature fusion methods that combine the feature vectors of multiple sources by computing the element-wise sum or product, respectively. These methods can reduce the dimensionality of the fused feature representation but may lose some information from the original features.
* Linear and Nonlinear Transformations: Linear and nonlinear transformations are feature fusion methods that project the feature vectors of multiple sources into a common feature space using linear or nonlinear functions, respectively. Examples of these methods include Principal Component Analysis (PCA), Canonical Correlation Analysis (CCA), and kernel-based techniques.
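The fusion methods above can be summarized in a short numpy sketch. The arrays, shapes, and the projection matrix below are purely illustrative; in practice the projection would be learned, e.g., via PCA or CCA:

```python
import numpy as np

rng = np.random.default_rng(0)
f1 = rng.normal(size=(4, 3))   # features from base model 1 (4 samples, 3 dims)
f2 = rng.normal(size=(4, 3))   # features from base model 2

# Concatenation: preserves all information but doubles the dimensionality.
fused_cat = np.concatenate([f1, f2], axis=1)   # shape (4, 6)

# Element-wise addition / multiplication: keep dimensionality, may lose detail.
fused_add = f1 + f2                             # shape (4, 3)
fused_mul = f1 * f2                             # shape (4, 3)

# Linear transformation: project concatenated features into a common space.
W = rng.normal(size=(6, 2))                     # learned in practice; random here
fused_proj = fused_cat @ W                      # shape (4, 2)

print(fused_cat.shape, fused_add.shape, fused_proj.shape)
```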
### Deep Learning Architectures for Ensemble Models
In recent years, deep learning architectures have been successfully applied to various tasks and domains, demonstrating their ability to learn complex and hierarchical feature representations from raw data. In this section, we review the main deep learning architectures that have been proposed and applied in the context of ensemble learning, focusing on their unique characteristics and capabilities.
## 3 Adaptive Ensemble Learning Framework
The Adaptive Ensemble Learning framework is a novel approach designed to enhance the performance of deep neural networks by intelligently combining features through ensemble learning techniques. This framework aims to overcome the limitations of traditional feature fusion methods by dynamically adapting the fusion process based on the underlying data and task at hand. By incorporating adaptive feature fusion techniques into the ensemble learning process, the framework is capable of generating more discriminative and effective feature representations, leading to improved model performance and generalization capabilities.
In the Adaptive Ensemble Learning framework, multiple base models, such as deep neural networks, are trained on the input data to learn diverse and complementary features. These base models can be of the same or different architectures, depending on the specific requirements of the task. The learned features from the base models are then intelligently combined using adaptive feature fusion strategies, which are designed to optimize the fusion process according to the characteristics of the input data and the task objectives.
One key aspect of the Adaptive Ensemble Learning framework is the integration of meta-learning techniques to guide the adaptive feature fusion process. Meta-learning, also known as learning-to-learn, involves training a higher-level model, called the meta-model or meta-learner, to learn the optimal way of combining the features from the base models. The meta-model is trained on a new dataset, which consists of the base models' predictions and the corresponding ground-truth labels. By learning the optimal feature fusion strategy from the data, the meta-model is capable of adapting to different tasks and datasets, ensuring that the ensemble model remains versatile and robust.
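As a hedged illustration of this meta-learning step, the sketch below simulates probability outputs of three hypothetical base models and fits a least-squares combiner as a simple stand-in for the meta-model; the data and noise levels are invented for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
y = rng.integers(0, 2, size=n).astype(float)   # ground-truth labels

# Simulated probability outputs of three base models with different error profiles.
base_preds = np.stack([
    np.clip(y + rng.normal(0, 0.4, n), 0, 1),
    np.clip(y + rng.normal(0, 0.6, n), 0, 1),
    np.clip(0.5 * np.ones(n) + rng.normal(0, 0.1, n), 0, 1),  # uninformative model
], axis=1)                                      # shape (n, 3)

# Meta-model: least-squares weights mapping base predictions to the labels.
w, *_ = np.linalg.lstsq(base_preds, y, rcond=None)
meta_out = base_preds @ w

acc_meta = np.mean((meta_out > 0.5) == y)
acc_singles = [np.mean((base_preds[:, j] > 0.5) == y) for j in range(3)]
print(acc_meta, acc_singles)
```

In the full framework the combiner would itself be a neural network trained on held-out base-model predictions, but the principle — learning how to weight the base models from data — is the same.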
Another important aspect of the Adaptive Ensemble Learning framework is the exploration of various ensemble training strategies, such as bagging, boosting, and stacking, in the context of deep learning architectures. By incorporating these ensemble learning techniques into the framework, the base models can be combined in a more effective and diverse manner, further enhancing the performance and generalization capabilities of the ensemble model [3][8][5].
The Adaptive Ensemble Learning framework is not limited to a specific domain or application, and can be applied to a wide range of tasks, such as image classification, object detection, natural language processing, and graph-based learning. By leveraging the power of deep learning architectures and the adaptability of ensemble learning techniques, the framework offers a powerful and versatile solution for boosting the performance of machine learning models across various domains.
## 4 Design and Implementation of Adaptive Ensemble Models
This chapter presents the design and implementation of adaptive ensemble models within the Adaptive Ensemble Learning framework. We discuss the model architectures, fusion layers, ensemble training strategies, and meta-learning techniques that contribute to the framework's versatility and adaptability.
### Model Architectures and Fusion Layers
The Adaptive Ensemble Learning framework allows for the integration of various deep learning architectures as base models, depending on the specific requirements of the task. Examples of base model architectures include Convolutional Neural Networks (CNNs) for image processing tasks, Recurrent Neural Networks (RNNs) for sequence-based tasks, and Graph Neural Networks (GNNs) for graph-based learning tasks. Each base model is designed to learn diverse and complementary features from the input data, which are then intelligently combined using adaptive fusion layers.
Fusion layers are a crucial component of the adaptive ensemble model, as they are responsible for merging the features from the base models. The fusion layers can be designed using various techniques, such as linear and nonlinear transformations, attention mechanisms, or gating mechanisms. These techniques can be combined or adapted to create more sophisticated fusion layers tailored to the specific requirements of the task and dataset [15].
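A gating mechanism of the kind mentioned above can be sketched as follows. The gate parameters `Wg` and `bg` would normally be learned jointly with the base models; here they are random for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(f1, f2, Wg, bg):
    """A gate g in (0, 1) decides, per dimension, how much of each source to keep."""
    g = sigmoid(np.concatenate([f1, f2], axis=-1) @ Wg + bg)   # same shape as f1
    return g * f1 + (1.0 - g) * f2

rng = np.random.default_rng(0)
d = 4
f1 = rng.normal(size=(2, d))             # features from base model 1
f2 = rng.normal(size=(2, d))             # features from base model 2
Wg = rng.normal(size=(2 * d, d)) * 0.1   # gate parameters (learned in practice)
bg = np.zeros(d)

fused = gated_fusion(f1, f2, Wg, bg)
print(fused.shape)
```

Because the gate is a convex combination, each fused entry lies between the corresponding entries of the two sources — a useful sanity check when debugging such layers.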
### Ensemble Training and Meta-Learning Strategies
The ensemble training process within the Adaptive Ensemble Learning framework involves training multiple base models independently, followed by training the meta-model to learn the optimal feature fusion strategy. Various ensemble training strategies, such as bagging, boosting, and stacking, can be employed to enhance the diversity and complementarity of the base models.
Bagging, for example, involves training the base models on different subsets of the training data, generated by random sampling with replacement. Boosting, on the other hand, trains a sequence of base models, with each model learning to correct the errors made by its predecessor. Stacking, as another alternative, trains the base models on the original training data, while the meta-model is trained on a new dataset consisting of the base models' predictions.
Meta-learning techniques play a significant role in guiding the adaptive feature fusion process. By training the meta-model on a dataset consisting of the base models' predictions and the corresponding ground-truth labels, the meta-model learns to optimally combine the features from the base models. This process enables the ensemble model to adapt to different tasks and datasets, ensuring its versatility and robustness [12].
### Hyperparameter Optimization and Model Selection
The performance of adaptive ensemble models is highly dependent on the choice of hyperparameters, such as the number of base models, the depth and complexity of the fusion layers, and the ensemble training strategy parameters. To optimize the performance of the ensemble model, a systematic hyperparameter optimization process can be employed, which involves exploring the hyperparameter space and evaluating the performance of various candidate models.
Hyperparameter optimization techniques, such as grid search, random search, or Bayesian optimization, can be used to efficiently search the hyperparameter space and identify the optimal configuration for the ensemble model. Cross-validation is often used in conjunction with hyperparameter optimization to obtain a more accurate estimation of the model's performance on unseen data, which helps prevent overfitting and ensures better generalization capabilities.
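The combination of random search and k-fold cross-validation can be sketched on a toy ridge-regression problem; the model, data, and search range below are hypothetical stand-ins for the ensemble hyperparameters discussed above:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_score(X, y, lam, k=5):
    """Mean squared error averaged over k cross-validation folds."""
    n = len(y)
    folds = np.array_split(np.arange(n), k)
    errs = []
    for f in folds:
        train = np.setdiff1d(np.arange(n), f)
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[f] @ w - y[f]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
w_true = rng.normal(size=10)
y = X @ w_true + rng.normal(0, 0.5, size=100)

# Random search: sample candidate hyperparameters, keep the best CV score.
candidates = 10 ** rng.uniform(-4, 2, size=20)
best_lam = min(candidates, key=lambda lam: cv_score(X, y, lam))
print(best_lam, cv_score(X, y, best_lam))
```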
Once the optimal hyperparameters have been identified, the final ensemble model can be selected based on its performance on a validation dataset. This model is then evaluated on a separate test dataset to provide an unbiased assessment of its performance and generalization capabilities [13].
In summary, the design and implementation of adaptive ensemble models within the Adaptive Ensemble Learning framework involve selecting appropriate deep learning architectures for the base models, designing intelligent fusion layers, employing ensemble training and meta-learning strategies, and optimizing hyperparameters. These components work together to create a versatile and robust ensemble model capable of boosting the performance of deep neural networks across various tasks and domains.
## 5 Experimentation and Evaluation
In this chapter, we describe the experimentation and evaluation process of the Adaptive Ensemble Learning framework, including the selection of benchmark datasets, performance metrics, and the comparison with baseline models and traditional feature fusion techniques. The objective of this experimentation and evaluation process is to demonstrate the effectiveness of the proposed framework in enhancing the performance of deep neural networks and to provide insights into its generalization capabilities across different tasks and domains.
### Benchmark Datasets
To assess the performance of the Adaptive Ensemble Learning framework, we conduct experiments on several benchmark datasets across various tasks and domains. These datasets have been widely used in the machine learning community and serve as a standard for comparing the performance of different models and techniques. The selected datasets cover a range of tasks, such as image classification, object detection, sentiment analysis, and graph-based learning, to ensure a comprehensive evaluation of the framework's capabilities.
### Performance Metrics
To evaluate the performance of the adaptive ensemble models and compare them with baseline models and traditional feature fusion techniques, we use several performance metrics that are commonly employed in the machine learning community. These metrics provide a quantitative assessment of the models' performance and allow for a fair comparison across different tasks and domains. Examples of performance metrics include accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC).
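For binary classification, the listed metrics can be computed directly from the confusion-matrix counts, as in this self-contained sketch:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1-score from confusion-matrix counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
print(acc, prec, rec, f1)  # 0.75 0.75 0.75 0.75
```

AUC-ROC additionally requires ranking the model's scores rather than thresholded labels, so it is typically computed from the raw predicted probabilities.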
### Baseline Models and Traditional Feature Fusion Techniques
To demonstrate the effectiveness of the Adaptive Ensemble Learning framework, we compare the performance of the adaptive ensemble models with several baseline models and traditional feature fusion techniques. The baseline models include single deep learning architectures, such as CNNs, RNNs, or GNNs, as well as ensemble models that employ conventional feature fusion methods, such as concatenation, element-wise addition, or multiplication. By comparing the performance of the adaptive ensemble models with these baseline models and traditional feature fusion techniques, we aim to highlight the advantages of the proposed framework in terms of model performance and generalization capabilities [4][14].
### Experimental Setup and Results
The experimental setup involves training the adaptive ensemble models on the selected benchmark datasets using the specified performance metrics and ensemble training strategies. The hyperparameters of the ensemble models are optimized through a systematic hyperparameter optimization process, as described in the previous chapter. The performance of the adaptive ensemble models is then evaluated on a separate test dataset and compared with the performance of the baseline models and traditional feature fusion techniques.
The results of the experiments demonstrate the effectiveness of the Adaptive Ensemble Learning framework in boosting the performance of deep neural networks across various tasks and domains. The adaptive ensemble models consistently outperform the baseline models and traditional feature fusion techniques, indicating the advantages of the proposed framework in terms of model performance and generalization capabilities. Furthermore, the results provide insights into the impact of intelligent feature fusion on model performance and highlight the potential applications of the Adaptive Ensemble Learning framework in real-world scenarios [1].
### Discussion and Analysis
In this section, we discuss the experimental results and analyze the factors that contribute to the success of the Adaptive Ensemble Learning framework. We explore the role of adaptive feature fusion techniques in enhancing the performance of the ensemble models and discuss the importance of ensemble training strategies and meta-learning techniques in ensuring the framework's versatility and adaptability. Additionally, we investigate the generalization capabilities of the adaptive ensemble models across different tasks and domains, highlighting the potential of the proposed framework in addressing a wide range of real-world challenges.
In conclusion, the experimentation and evaluation process demonstrates the effectiveness of the Adaptive Ensemble Learning framework in boosting the performance of deep neural networks and provides valuable insights into its generalization capabilities across various tasks and domains. These results serve as a foundation for further research and development in the field of adaptive ensemble learning and intelligent feature fusion techniques.
## 6 Results and Discussion
In this chapter, we present the results of our experiments with the Adaptive Ensemble Learning framework and discuss the key findings and insights derived from the evaluation process. The main objective of this chapter is to analyze the performance of the adaptive ensemble models and investigate the factors that contribute to their success in enhancing the performance of deep neural networks across various tasks and domains.
The results of our experiments indicate that the Adaptive Ensemble Learning framework consistently outperforms the baseline models and traditional feature fusion techniques across the selected benchmark datasets. These findings demonstrate the effectiveness of the proposed framework in boosting the performance of deep neural networks, and they provide evidence of the benefits of adaptive feature fusion techniques and ensemble training strategies in improving model performance and generalization capabilities.
One key insight derived from the evaluation process is the importance of intelligently combining features from multiple base models. The adaptive fusion layers, which are designed using various techniques such as linear and nonlinear transformations, attention mechanisms, or gating mechanisms, play a crucial role in merging the features from the base models and creating more discriminative and effective feature representations. By learning to optimally combine the features from the base models, the adaptive ensemble models are able to exploit the diversity and complementarity of the individual models, leading to improved performance and generalization capabilities.
Another important finding from our experiments is the impact of ensemble training strategies and meta-learning techniques on the versatility and adaptability of the Adaptive Ensemble Learning framework. By incorporating techniques such as bagging, boosting, and stacking into the ensemble training process, the framework is able to effectively combine the base models in a diverse and complementary manner, further enhancing the performance and generalization capabilities of the ensemble model. Additionally, the integration of meta-learning techniques into the framework enables the meta-model to learn the optimal feature fusion strategy from the data, ensuring that the ensemble model remains versatile and robust across different tasks and datasets.
The generalization capabilities of the adaptive ensemble models across different tasks and domains are another key aspect of our findings. The results demonstrate that the Adaptive Ensemble Learning framework is capable of adapting to a wide range of tasks, such as image classification, object detection, sentiment analysis, and graph-based learning. This adaptability is crucial for addressing real-world challenges, where the complexity of the data and the diversity of the tasks require versatile and robust machine learning models.
In conclusion, the results and discussion presented in this chapter highlight the effectiveness of the Adaptive Ensemble Learning framework in boosting the performance of deep neural networks and provide valuable insights into the factors that contribute to its success. These findings serve as a foundation for further research and development in the field of adaptive ensemble learning and intelligent feature fusion techniques, with the potential to significantly impact various domains and applications [10][2][9].
## 7 Conclusion
In conclusion, this paper has presented the Adaptive Ensemble Learning framework, a novel approach designed to enhance the performance of deep neural networks through intelligent feature fusion and ensemble learning techniques. The framework overcomes the limitations of traditional feature fusion methods by dynamically adapting the fusion process based on the underlying data and task at hand. Our experiments on various benchmark datasets and tasks demonstrate the effectiveness of the Adaptive Ensemble Learning framework in boosting the performance of deep neural networks and providing insights into its generalization capabilities across different domains.
The key contributions of this paper include the integration of adaptive feature fusion techniques into the ensemble learning process, the incorporation of meta-learning strategies to guide the adaptive feature fusion, and the exploration of various ensemble training strategies in the context of deep learning architectures. Our results and discussion highlight the importance of intelligently combining features from multiple base models and emphasize the impact of ensemble training strategies and meta-learning techniques on the framework's versatility and adaptability.
Future work on the Adaptive Ensemble Learning framework may focus on several directions. First, the development of more sophisticated and task-specific fusion layers could further improve the performance of the adaptive ensemble models by allowing for more fine-grained control over the feature fusion process. Additionally, the exploration of alternative meta-learning techniques and architectures, such as few-shot learning or memory-augmented neural networks, could enhance the adaptability of the framework and enable it to learn more complex fusion strategies from limited data.
Another direction for future work involves the application of the Adaptive Ensemble Learning framework to new tasks and domains, such as reinforcement learning, unsupervised learning, or transfer learning. By adapting the framework to these challenging learning scenarios, we can further demonstrate its versatility and potential impact on a wide range of real-world problems.
Lastly, the integration of the Adaptive Ensemble Learning framework with other emerging machine learning paradigms, such as federated learning, edge computing, or privacy-preserving learning, could open new avenues for research and development. By combining the advantages of adaptive ensemble learning with these cutting-edge technologies, we can develop more powerful, efficient, and secure machine learning models, ultimately benefiting various applications and industries.
In summary, the Adaptive Ensemble Learning framework offers a promising direction for enhancing the performance of deep neural networks and addressing the challenges posed by complex data and diverse tasks. Through continued research and development, this framework has the potential to significantly impact the field of machine learning and contribute to the advancement of artificial intelligence.
|
2302.08796 | Approaching epidemiological dynamics of COVID-19 with physics-informed
neural networks | A physics-informed neural network (PINN) embedded with the
susceptible-infected-removed (SIR) model is devised to understand the temporal
evolution dynamics of infectious diseases. Firstly, the effectiveness of this
approach is demonstrated on synthetic data as generated from the numerical
solution of the susceptible-asymptomatic-infected-recovered-dead (SAIRD) model.
Then, the method is applied to COVID-19 data reported for Germany and shows
that it can accurately identify and predict virus spread trends. The results
indicate that an incomplete physics-informed model can approach more
complicated dynamics efficiently. Thus, the present work demonstrates the high
potential of using machine learning methods, e.g., PINNs, to study and predict
epidemic dynamics in combination with compartmental models. | Shuai Han, Lukas Stelz, Horst Stoecker, Lingxiao Wang, Kai Zhou | 2023-02-17T10:36:58Z | http://arxiv.org/abs/2302.08796v2 | # Approaching epidemiological dynamics of COVID-19 with physics-informed neural networks
###### Abstract
A physics-informed neural network (PINN) embedded with the susceptible-infected-removed (SIR) model is devised to understand the temporal evolution dynamics of infectious diseases. Firstly, the effectiveness of this approach is demonstrated on synthetic data as generated from the numerical solution of the susceptible-asymptomatic-infected-recovered-dead (SAIRD) model. Then, the method is applied to COVID-19 data reported for Germany and shows that it can accurately identify and predict virus spread trends. The results indicate that an incomplete physics-informed model can approach more complicated dynamics efficiently. Thus, the present work demonstrates the high potential of using machine learning methods, e.g., PINNs, to study and predict epidemic dynamics in combination with compartmental models.
keywords: COVID-19, Epidemiological Dynamics, Physics-informed machine learning.
## 1 Introduction
The coronavirus SARS-CoV-2 was discovered in Wuhan, China, in December 2019. The virus spread quickly around the world. It was declared a pandemic by the World Health Organization (WHO)[1] in March 2020. By January 2023, there had been 733 million confirmed cases of the resulting disease, COVID-19, and 6.69 million fatalities[1]. The spread of the virus and the impact of policy decisions on containing the disease have been studied in compartmental models[2; 3]. Non-pharmaceutical interventions, such as social distancing, were found to be effective[4; 5; 6; 7]. The role of vaccination has been explored in recent studies[8].
The unknown numbers of infectious individuals have been a major challenge for obtaining precise real-time data on the spatiotemporal spread of COVID-19[9]. Based on ambiguous reports and predicted cases [10, 11], governments had difficulties in implementing effective intervention policies, such as the allocation of detection resources [12], the mobilization and delivery of protective and therapeutic materials [13], and the stringency of lockdown measures [14, 15, 16, 17]. The early shortage of detection supplies and the massive number of asymptomatic or mildly symptomatic cases also made forecasting difficult [18]. An alternative approach to tracking the spread of infectious diseases is to estimate the parameters of epidemic models, such as the basic reproduction number (\(R_{0}\)) and the infection rate [19]. Studies presently explore various data-driven methods to infer these parameters from the available but limited data [20].
Epidemiological models, such as the Susceptible-Infected-Recovered (SIR) model [21], have been helpful in understanding the spread of infectious diseases. The SIR model is one of the earliest compartmental models; it divides the population into three compartments: susceptible, infected, and removed. Various models have been derived from the SIR model, including the Susceptible-Exposed-Infected-Removed (SEIR) [22, 23] and the Susceptible-Exposed-Asymptomatic-Infectious-Recovered (SEAIR) model [24]. Mathematical modeling based on numerical solutions of systems of Ordinary Differential Equations (ODEs) has also been used to study COVID-19 spread. These studies have provided valuable insights into the spread of the disease, including, more recently, disease prevalence curves with machine learning assistance [25, 26, 27], the impact of asymptomatic infected individuals [28], the effectiveness of wearing masks [29], and the effectiveness of prevention and control measures [30, 31, 32].
Deep Learning (DL) models, such as recurrent neural networks (RNNs), have been used to analyze the patterns in COVID-19 time series data [33, 34]. Chimmula et al. [35] used RNNs and their variant, long short-term memory (LSTM), for predicting COVID-19 prevalence trends in Canada and Italy, showing reasonable predictive capabilities. Zeroual et al. [36] applied LSTM, bi-directional LSTM (BiLSTM), and gated recurrent units (GRUs) to different countries' COVID-19 data for data-driven simulations. Zhang et al. [37] used a residual neural network (ResNet) to account for model uncertainties, parameters, and other external factors which affect prediction accuracy for trend analysis of COVID-19. Chen et al. [38] showed that a generalized ResNet can learn the structure of complex unknown dynamical systems; its predictions are more accurate than those of standard ResNet structures. However, these models require large sets of training data, while the current datasets for COVID-19 are relatively small, which leads to a lack of robustness of the models [39]. In addition, these DL models are only able to identify the dynamics of the virus based on available data; they might not be stable or accurate enough in predicting future trends [40] due to the variability of the virus and the influence of external factors like weather [41]. Frameworks that can accurately capture the epidemic dynamics, which are governed by systems of ordinary differential equations (ODEs) or systems of partial differential equations (PDEs) [42], should be developed to effectively handle the limitations of the recorded data. It is crucial to rigorously incorporate the known domain knowledge, e.g., the laws governing the physical system, into the machine learning treatment [43]. Physics-Informed Neural Networks (PINNs) [44] were proposed to address this need. PINNs can introduce physical constraints into the training explicitly. 
PINNs can make accurate predictions based on tiny datasets when combined with epidemiological models [42]. They may even help to identify the underlying epidemiological dynamics [45].
The present study introduces a SIR-dynamics-informed machine learning method to approach the complicated epidemiological dynamics hidden in the spread of COVID-19, by incorporating prior knowledge from the SIR model, in the form of its ODEs, into the loss functions of Deep Neural Networks (DNNs) as a physical regularizer. The new method is first confronted with synthetic data generated from the ODE system of a SAIRD model, which is used to simulate different scenarios and test the effectiveness of the approach. The method is then validated on COVID-19 data reported for Germany between March 1, 2021 and July 1, 2021, taken from the COVID-19 Data Repository at Johns Hopkins University. Such simple, SIR-physics-informed models can apparently approach more complicated dynamics efficiently.
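The role of the ODE-based physical regularizer can be illustrated with a simplified numpy sketch. A PINN would differentiate the network outputs by automatic differentiation; here, finite differences on a candidate trajectory stand in for that step, and all parameter values are illustrative:

```python
import numpy as np

def sir_residuals(t, S, I, R, beta, gamma, N):
    """Finite-difference residuals of the SIR ODEs along a candidate trajectory."""
    dS, dI, dR = np.gradient(S, t), np.gradient(I, t), np.gradient(R, t)
    rS = dS + beta * I * S / N
    rI = dI - beta * I * S / N + gamma * I
    rR = dR - gamma * I
    return np.mean(rS**2 + rI**2 + rR**2)   # the "physics loss" term

# A trajectory generated from the SIR equations themselves (small Euler steps),
# so its residual should be close to zero; wrong parameters inflate the loss.
N, beta, gamma = 1e4, 0.3, 0.1
dt, steps = 0.01, 5000
S, I, R = [N - 10.0], [10.0], [0.0]
for _ in range(steps):
    s, i = S[-1], I[-1]
    S.append(s - beta * i * s / N * dt)
    I.append(i + (beta * i * s / N - gamma * i) * dt)
    R.append(R[-1] + gamma * i * dt)
t = np.arange(steps + 1) * dt
loss_true = sir_residuals(t, np.array(S), np.array(I), np.array(R), beta, gamma, N)
loss_wrong = sir_residuals(t, np.array(S), np.array(I), np.array(R), 2 * beta, gamma, N)
print(loss_true, loss_wrong)
```

Minimizing this residual jointly with a data-fitting loss is, in essence, how the physical regularizer steers the network toward trajectories consistent with the SIR dynamics.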
This paper is organized into three sections. The methodology section details the general concepts of the SIR and SAIRD models and defines their parameters; it also introduces the basics of the PINN framework and how PINNs can be used to solve systems of ODEs and the associated optimization problems. There, the SIR-dynamics-informed neural networks, together with the definition and calculation of the loss functions and physical residuals used to estimate the dynamics of an infectious system, are detailed. The third section contains the numerical experiments on synthetic data and the results for real, recorded data; it also covers the set-up of the neural network, the pre-processing of synthetic and reported data, and the analysis of the results. The conclusions summarize the findings and give recommendations for future research.
## 2 Methodology
This section introduces the two mathematical epidemiological models used in the present paper. Furthermore, physics-informed neural networks are described in conjunction with the epidemiological models and their optimization approaches.
### Mathematical Epidemiology
Mathematical modelling of epidemiological dynamics is a popular area of research in applied mathematics. Conventionally, compartmental models are developed for modelling the dynamics of epidemics within a population.
#### 2.1.1 The SIR Model
A plain yet powerful and well-known compartmental model is the original SIR model [18]. It assumes that the size of the population \(N\) remains constant. \(N\) is divided into three separate groups or compartments: susceptible (S), infectious (I) and removed (R). Individuals are transferred between compartments, as shown in Figure 1, with certain rates, called transition rates, \(\beta I/N\) and \(\gamma\):
Figure 1: Schematic illustration of the interactions between the compartments in the SIR model.
Epidemiological compartmental models treat all individuals in the same compartment as sharing identical features. Therefore each compartment is homogeneous. The SIR model is described by the following set of differential equations:
\[\begin{split}\frac{dS}{dt}&=-\frac{\beta I}{N}S\;,\\ \frac{dI}{dt}&=\frac{\beta I}{N}S-\gamma I\;,\\ \frac{dR}{dt}&=\gamma I\;.\end{split} \tag{1}\]
The parameter \(\beta\) is the effective contact or transmission rate. It denotes the number of effective contacts made by one infectious and one susceptible individual leading to one infection per unit of time. The removal rate \(\gamma\) indicates the fraction of infectious individuals who recover or die per unit of time. Thus, \(\gamma\) can be calculated as 1/D, with D the average time duration during which an infected individual can carry and transmit the virus. Equation (1) is subject to the initial conditions, \(S\left(t_{0}\right)>0,I\left(t_{0}\right)\geq 0,\text{ and }R\left(t_{0}\right)\geq 0\) at the initial time \(t_{0}\). The model conserves the total number of individuals; thus \(S(t)+I(t)+R(t)=N\) holds at any time \(t\). In general, the time scale of the epidemic dynamics is assumed to be short compared to the length of the lives of the individuals in the population: the effects of births and deaths on the population are simply ignored.
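For reference, the SIR system (1) can be integrated numerically with a simple forward-Euler scheme. The following minimal sketch uses illustrative parameter values, step size, and initial conditions that are not taken from the paper:

```python
# Hedged sketch: forward-Euler integration of the SIR equations (1).
# All numeric values below are illustrative assumptions.

def simulate_sir(beta, gamma, S0, I0, R0, days, dt=1.0):
    """Integrate dS/dt = -beta*I*S/N, dI/dt = beta*I*S/N - gamma*I, dR/dt = gamma*I."""
    N = S0 + I0 + R0
    S, I, R = [S0], [I0], [R0]
    for _ in range(int(days / dt)):
        s, i, r = S[-1], I[-1], R[-1]
        new_inf = beta * i * s / N * dt   # new infections in this step
        new_rem = gamma * i * dt          # removals in this step
        S.append(s - new_inf)
        I.append(i + new_inf - new_rem)
        R.append(r + new_rem)
    return S, I, R

S, I, R = simulate_sir(beta=0.3, gamma=0.1, S0=990, I0=10, R0=0, days=160)
```

Since the right-hand sides of (1) sum to zero, \(S+I+R=N\) is conserved at every step (up to rounding), which provides a simple sanity check for any such integrator.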
#### 2.1.2 The SAIRD Model
The SAIRD model is an extension of the SIR model: two more compartments, as shown in Figure 2, are introduced. The compartment \(I\) is split into two. Here, compartment \(A\) represents the asymptomatic or unidentified, but infectious, individuals, i.e., \(A\) is the number of individuals who, despite being infected, are not identified or detected. \(A\) consists of infected individuals who do not show symptoms. The new compartment \(I\) here contains those individuals who have been detected as infectious. The compartment \(R\) is also split into the number of recovered, \(R\), and the number of deceased, \(D\), individuals.
Figure 2: Diagram for the SAIRD model which is inspired from [46] illustrating the interactions of compartment.
The SAIRD model is given by the following set of differential equations:
\[\begin{array}{l}\frac{dS}{dt}=-\beta I\frac{S}{N}-\alpha A\frac{S}{N}\;,\\ \frac{dA}{dt}=(1-\rho_{1})\alpha A\frac{S}{N}+\rho_{2}\beta I\frac{S}{N}-\gamma A -\delta A\;,\\ \frac{dI}{dt}=(1-\rho_{2})\beta I\frac{S}{N}+\rho_{1}\alpha A\frac{S}{N}- \gamma I-\delta I\;,\\ \frac{dR}{dt}=\gamma I+\gamma A\;,\\ \frac{dD}{dt}=\delta I+\delta A\;.\end{array} \tag{2}\]
The transition rates and the flow between the compartments are shown in Figure 2. Infections happen at the rates \(\alpha\) and \(\beta\), due to the contact of a susceptible individual with an asymptomatic or a symptomatic infectious individual, respectively. The probability of a susceptible individual becoming a symptomatically infected individual, through contact with an asymptomatic individual, is \(\rho_{1}\). The probability of a susceptible individual becoming an asymptomatically infected individual, through contact with a symptomatically infected individual, is \(\rho_{2}\). Infected individuals recover at a rate \(\gamma\). They pass away at a rate \(\delta\), independently of their symptoms. The system (2) is subject to the initial conditions \(S\left(t_{0}\right)>0,A\left(t_{0}\right)\geq 0,I\left(t_{0}\right)\geq 0,R \left(t_{0}\right)\geq 0\;\text{and}\;D\left(t_{0}\right)\geq 0\), with an initial time \(t_{0}\). The SAIRD model assumes, just as the SIR model does, that the total population, including the deceased, remains constant at \(N\). Hence, this model satisfies \(S(t)+A(t)+I(t)+R(t)+D(t)=N\) at any time \(t\).
### Physics-Informed Neural Network
The basic idea of PINNs is to integrate any a-priori knowledge of the system into the learning process of the deep neural network. This knowledge, e.g., about basic physical laws or domain know-how, is usually given in the form of ordinary or partial differential equations (ODEs/PDEs) and is incorporated into PINNs through the loss functions. The training is performed to optimize the network weights and biases, and also to optimize the model parameters (e.g., those inside the physical laws). The loss consists of two terms, the data loss and the residual loss, the latter representing the regularization from the physics prior, i.e., the governing differential equations. Figure 3 illustrates the training process for an ODE-dynamics-informed neural network.
Figure 3 shows how a PINN can be used to obtain a solution of a system of ODEs by training the neural network with the combined losses. A system of first-order ordinary differential equations is usually written in the general form:
\[\frac{\partial U}{\partial t}(t)+F(U(t);\lambda)=0,\quad t\in[t_{0},T] \tag{3}\]
with
\[U(t)=[u^{1}(t),...,u^{n}(t)],\quad F(U)=[f^{1}(U),...,f^{n}(U)] \tag{4}\]
where \(u^{i}\in\mathbb{R}\) and \(f^{i}:\mathbb{R}\rightarrow\mathbb{R},i=1,...,n\). \(t_{0}\) and \(T\) are the initial and final time, respectively; \(F\) is the function and \(U\) the solution. The \(\lambda\in\mathbb{R}^{k}\) are the unknown parameters of the system (3). \(U_{s}\) are the observed data at times \(t_{1},...,t_{m}\), which determine \(\lambda\). The data loss is, naturally,
\[\mathcal{L}_{data}=\sum_{s=1}^{m}\left\|U(t_{s})-U_{s}\right\|^{2}\;. \tag{5}\]
For a conventional fit, in contrast to PINNs, one identifies the optimum vector of model parameters \(\lambda\) by minimizing eq.(5). Thus, a solution \(U(t)\) is obtained which best suits the observed data, in the sense of the least squares deviations. The PINNs are based on a general neural network, as shown in Figure 3 (black dashed frame). Its form can be represented as follows:
\[N\!N^{\omega,b}(t):\mathbb{R}\rightarrow\mathbb{R}^{n} \tag{6}\]
which approximates the solution
\[U(t):\mathbb{R}\rightarrow\mathbb{R}^{n} \tag{7}\]
of the system of first-order ODEs. The weights \(\omega\) and biases \(b\) of \(N\!N^{\omega,b}\) are trainable parameters of the neural network. For the purpose of solving ODEs like (3) with neural networks (6), the weights \(\omega\) and biases \(b\) are optimized such that the neural network (6) offers the best fit of the observed data \(U_{s},s=1,...,m\), in the sense of the least squares differences,
\[\operatorname*{arg\,min}_{\omega,b}(\text{MSE}_{U}^{\omega,b}) \tag{8}\]
Thus the loss function with respect to the observed data is
\[\text{MSE}_{U}^{\omega,b}:=\frac{1}{m}\sum_{s=1}^{m}\left\|N\!N^{\omega,b} \left(t_{s}\right)-U_{s}\right\|^{2}. \tag{9}\]
Figure 3: A schematic diagram of physics-informed neural networks (PINNs). The black dashed-line block is a common neural network that takes a time \(t\) as input and outputs \(U\); \(\omega\) and \(b\) are its weights and biases, respectively. The orange dashed-line block stands for the calculation of the residual loss. The loss function consists of the mismatch with the observed data, boundaries, and initial conditions (data loss) and the residuals of the ODEs, evaluated at a set of points in the temporal domain (residuals). The parameters of the PINNs can be optimized by minimising the loss \(MSE=MSE_{data}+MSE_{residuals}\).
The residual loss of the set of ODEs in (3), which can be expressed as
\[\mathcal{F}\left(NN^{\omega,b},t;\lambda\right)=\frac{\partial NN^{\omega,b}}{ \partial t}(t)+F\left(NN^{\omega,b}(t)\right), \tag{10}\]
allows extending the \(NN^{\omega,b}\) to a PINN by adding the residual term (10) to the loss function (9). The automatic differentiation technique of neural networks can be used to compute the derivatives (\(\frac{\partial NN^{\omega,b}}{\partial t}\)) of the output of the network with respect to the input (see Figure 3). Thus, requiring \(\mathcal{F}\left(NN^{\omega,b},t;\lambda\right)=0,\forall t\in[t_{0},T]\), is equivalent to forcing the neural network (6) to fulfill the ODE dynamics (3).
In other words, the standard neural network can be turned into a PINN by adding a mean squared residual error \(\text{MSE}_{\mathcal{F}}^{\omega,b,\lambda}\) to the loss function (8). Thus, PINNs can be trained to identify the optimum neural network parameters, \(\omega\) and \(b\), as well as the parameters \(\lambda\) for the ODEs. In that way, the following ODE-dynamics-regularized optimization is solved:
\[\operatorname*{arg\,min}_{\omega,b,\lambda}(\text{MSE}_{U}^{\omega,b}+\text{ MSE}_{\mathcal{F}}^{\omega,b,\lambda}). \tag{11}\]
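The derivative \(\frac{\partial NN^{\omega,b}}{\partial t}\) entering the residual (10) is obtained by automatic differentiation. As a hedged, dependency-free illustration of this idea (a stand-in for a framework facility such as `torch.autograd.grad` in PyTorch, which the paper's implementation presumably uses), the following sketch implements forward-mode automatic differentiation with dual numbers and differentiates a one-neuron toy "network" exactly; all names and values here are illustrative assumptions:

```python
import math

# Hedged sketch: forward-mode automatic differentiation via dual numbers.
# A dual number a + b*eps (eps**2 = 0) carries the value in .val and the
# derivative in .dot; chaining the arithmetic rules yields exact derivatives.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__
    def tanh(self):
        t = math.tanh(self.val)
        return Dual(t, (1.0 - t * t) * self.dot)   # d tanh/dx = 1 - tanh^2

def nn_and_derivative(t, w=1.7, b=-0.3):
    """Toy one-neuron 'network' u(t) = tanh(w*t + b); returns (u(t), du/dt)."""
    out = (Dual(t, 1.0) * w + b).tanh()            # seed with dt/dt = 1
    return out.val, out.dot
```

The exactness (no finite-difference error) is what makes enforcing \(\mathcal{F}=0\) pointwise practical in PINN training.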
### SIR Model-Informed Machine Learning
This section explicitly introduces how to incorporate the SIR model into the PINN formalism as prior information. The dynamic parameters of the SIR model are estimated. This includes both the definition and the calculation of the data loss function and of the physical residuals.
#### Architecture
For the SIR model, Figure 4 shows a fully connected neural network (marked by the black-dashed frame) used to evaluate \((S(t),I(t),R(t))^{\top}\) as defined in (6). Here, \(S(t)\), \(I(t)\) and \(R(t)\) obey the SIR model at a given input time \(t\). The residual term of the ODEs, as defined in (10), is minimized so that the network (6) fulfills the SIR dynamics. Thus, here
\[\mathcal{F}\left(NN^{\omega,b},t;\beta,\gamma\right)=\begin{bmatrix}\frac{dS( t)}{dt}+\frac{\beta S(t)I(t)}{N}\\ \frac{dI(t)}{dt}-\frac{\beta S(t)I(t)}{N}+\gamma I(t)\\ \frac{dR(t)}{dt}-\gamma I(t)\end{bmatrix}. \tag{12}\]
Hence, the mean residual squared error of the present work is
\[\text{MSE}_{SIR}=\text{MSE}_{Sresidual}+\text{MSE}_{Iresidual}+\text{MSE }_{Rresidual} \tag{13}\]
with
\[\text{MSE}_{Sresidual} =\frac{1}{q}\sum_{i=1}^{q}\left|\frac{dS(t_{i})}{dt_{i}}+\frac{ \beta S(t_{i})I(t_{i})}{N}\right|^{2}, \tag{14}\] \[\text{MSE}_{Iresidual} =\frac{1}{q}\sum_{i=1}^{q}\left|\frac{dI(t_{i})}{dt_{i}}-\frac{ \beta S(t_{i})I(t_{i})}{N}+\gamma I(t_{i})\right|^{2},\] \[\text{MSE}_{Rresidual} =\frac{1}{q}\sum_{i=1}^{q}\left|\frac{dR(t_{i})}{dt_{i}}-\gamma I (t_{i})\right|^{2}.\]
Here \(q\) is the total number of discrete time points. Note that, the discrete time points are chosen to be consistent with the observed time step, chosen in units of one natural day. In other words, the time step between two time points is \(\Delta t\)=1.
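As a hedged illustration of (13)-(14), the following sketch evaluates the mean squared SIR residuals with forward differences at \(\Delta t=1\). The test trajectory is generated with a forward-Euler step, an assumption made here purely for illustration, so that the residuals of the true parameters vanish up to rounding error:

```python
# Hedged sketch: discrete SIR residual losses (14) with forward differences.

def sir_residual_mse(S, I, R, beta, gamma, N):
    """Mean of the squared SIR residuals, forward differences, dt = 1."""
    q = len(S) - 1
    rS = rI = rR = 0.0
    for k in range(q):
        dS, dI, dR = S[k + 1] - S[k], I[k + 1] - I[k], R[k + 1] - R[k]
        rS += (dS + beta * S[k] * I[k] / N) ** 2
        rI += (dI - beta * S[k] * I[k] / N + gamma * I[k]) ** 2
        rR += (dR - gamma * I[k]) ** 2
    return (rS + rI + rR) / q

# a discrete trajectory exactly consistent with beta = 0.3, gamma = 0.1
beta, gamma, N = 0.3, 0.1, 1000.0
S, I, R = [990.0], [10.0], [0.0]
for _ in range(50):
    ni = beta * S[-1] * I[-1] / N   # new infections in one day
    nr = gamma * I[-1]              # removals in one day
    S.append(S[-1] - ni)
    I.append(I[-1] + ni - nr)
    R.append(R[-1] + nr)
```

For the correct \((\beta,\gamma)\) this residual is zero up to floating-point rounding, while mismatched parameters produce a clearly nonzero loss, which is the signal that drives the parameter identification.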
Simultaneously, the mean squared error of the data is, for the SIR model,
\[\text{MSE}_{data}=\text{MSE}_{S_{data}}+\text{MSE}_{I_{data}}+\text{MSE}_{R_{ data}}. \tag{15}\]
Here
\[\text{MSE}_{S_{data}} =\frac{1}{s}\sum_{i=1}^{s}\left|S(t_{i})-S_{i}^{o}\right|^{2},\] \[\text{MSE}_{I_{data}} =\frac{1}{s}\sum_{i=1}^{s}\left|I(t_{i})-I_{i}^{o}\right|^{2}, \tag{16}\] \[\text{MSE}_{R_{data}} =\frac{1}{s}\sum_{i=1}^{s}\left|R(t_{i})-R_{i}^{o}\right|^{2}.\]
\(S_{i}^{o},I_{i}^{o}\) and \(R_{i}^{o}\) are the observed data at time points \(t_{i}\), and \(s\) is the number of observed data points.
The total loss of the present study consists of both, the data loss and the residual loss, according to which, both, the weights and biases, as well as the trainable dynamic parameters, are optimized,
Figure 4: Schematic diagram of the SIR-dynamics-informed neural network. The black-dashed frame represents the dense neural network used here. The green-dashed frame, on the other hand, represents the SIR-informed neural network, which takes time \(t\) as input and outputs the susceptible (\(S\)), infected (\(I\)) and removed (\(R\)) populations. The box labeled ‘ODEs’ represents the computation of the residual with respect to the SIR model. The ‘Loss’ comprises two parts: the mismatch between the available data and the network output, on one hand, and the physical residual, on the other hand. By minimizing the loss function, the NN simultaneously fits the data and infers the dynamic parameters \(\beta\) and \(\gamma\) while satisfying the ODE dynamics.
\[\operatorname*{arg\,min}_{\omega,b,\beta,\gamma}(\text{MSE}_{data}+\text{MSE}_{ SIR}). \tag{17}\]
#### Parameter Identifications
Algorithm 1 shows how PINNs can be used to determine trainable parameters, including the NN- and SIR-model parameters. The input is the time point \(t\), the output is the value of each compartment of the SIR model at a given \(t\) value. Both, the weights \(\omega\) and biases \(b\), as well as the model parameters \(\beta\) and \(\gamma\), are initialized randomly (\(\omega\) and \(b\) use the PyTorch default initialization function, and \(\beta\) and \(\gamma\) were chosen randomly from (0,1). To ensure reproducibility, they were fixed during the practical experiments).
```
Data: \(t,S^{o},I^{o},R^{o}\)
Randomly initialize the weights \(\omega\), biases \(b\), and dynamics parameters \(\beta\), \(\gamma\);
for epoch in epochs do
    Obtain the values of each compartment of the SIR model from the forward propagation of the neural network with input \(t\):
    \[S,I,R=N\!N(t).\]
    Evaluate the composed loss function, including the data loss (with \(s\) the number of observations in each compartment, i.e., the number of time points collected):
    \[\text{MSE}_{data}=\frac{1}{s}\sum_{i=1}^{s}(|S_{i}-S_{i}^{o}|^{2}+|I_{i}-I_{i}^{o}|^{2}+|R_{i}-R_{i}^{o}|^{2}),\]
    denoting the mismatch between the output of the neural network and the observed data, and the residual loss:
    \[\text{MSE}_{Residuals}=\frac{1}{q}\sum_{i=1}^{q}\left(\left|\frac{dS_{i}}{dt_{i}}+\frac{\beta S_{i}I_{i}}{N}\right|^{2}+\left|\frac{dI_{i}}{dt_{i}}-\frac{\beta S_{i}I_{i}}{N}+\gamma I_{i}\right|^{2}+\left|\frac{dR_{i}}{dt_{i}}-\gamma I_{i}\right|^{2}\right),\]
    which stands for the sum of the residual errors for each compartment of the SIR model. Here, the residuals and the data loss are calculated with the same time step \(\Delta t=1\). Thus, the total loss function is obtained:
    \[Loss=\text{MSE}_{data}+\text{MSE}_{Residuals}\]
    The Adam optimizer in PyTorch is utilized to update the weights \(\omega\) and biases \(b\), as well as \(\beta\) and \(\gamma\), by minimizing the loss function.
end
```
**Algorithm 1**PINNs used to determine simultaneously the parameters of the neural network and the embedded SIR model.
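Algorithm 1 learns \(\beta\) and \(\gamma\) jointly with the network weights by gradient-based minimization of the combined loss. As a hedged side illustration of why these parameters are identifiable from clean trajectories (this is not the PINN optimization itself), the SIR residuals admit closed-form least-squares estimates on noiseless discrete data; the generator and all values below are illustrative assumptions:

```python
# Hedged sketch: closed-form least-squares recovery of beta and gamma from a
# clean discrete SIR trajectory (dt = 1), using the same relations the PINN
# residuals drive to zero.

def identify_sir_params(S, I, R, N):
    """beta from dS/dt = -beta*S*I/N, gamma from dR/dt = gamma*I."""
    m = len(S) - 1
    beta_hat = sum(S[k] - S[k + 1] for k in range(m)) * N / sum(S[k] * I[k] for k in range(m))
    gamma_hat = (R[-1] - R[0]) / sum(I[k] for k in range(m))
    return beta_hat, gamma_hat

# generate a noiseless forward-Euler trajectory with known parameters
S, I, R = [990.0], [10.0], [0.0]
for _ in range(50):
    ni = 0.3 * S[-1] * I[-1] / 1000.0
    nr = 0.1 * I[-1]
    S.append(S[-1] - ni)
    I.append(I[-1] + ni - nr)
    R.append(R[-1] + nr)

beta_hat, gamma_hat = identify_sir_params(S, I, R, 1000.0)
```

On noisy reported data no such closed form is reliable, which is one motivation for the joint, regularized optimization of Algorithm 1 instead.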
## 3 Experiments and Results
This section first presents the setup for generating the synthetic data from the SAIRD model, and then the pre-processing of the data reported for Germany. The details of the physics-informed neural networks devised here for these two situations follow. Finally, the performance of the proposed PINNs is demonstrated on the test data sets.
### Data Preparation
#### 3.1.1 Synthetic data generation and processing
To test and validate the PINN frameworks, whose physical laws are derived from the simple SIR model, a more complex mathematical model, the SAIRD model (2), is used to generate the mock data. The advantage of using this type of data is that the dynamics are clean and nearly noise-free, which allows verifying that the approach does work. The data shown in Figure 5 are generated by starting the model evolution with the model parameters and initial conditions given in Table 1.
Figure 5 shows the time evolution of the SAIRD model. Here, the number of susceptibles in the population decreases as the number of actively infected people increases. The population sizes for the recovered and deceased (removed) then increase until they reach a maximum or stabilize.
Regarding the epidemiological characteristics of the SAIRD model, both its S (Susceptible) and A (Asymptomatic) compartments correspond to the S (Susceptible) compartment of the SIR model. The SAIRD R (Recovered) and D (Dead) compartments correspond to the R (Removed) compartment of the SIR model.
Hence, the values of the S and R compartments of the SIR model are obtained by adding the values of A to S in the SAIRD model, and the values of D to R, respectively. The compartment I is defined identically in both models. The values of the three compartments of the SIR model, shown in Figure 6, are thus used as simulation data for the straightforward tests and validations that follow.
Since the values of all compartments stabilize after approximately 100 days, the first 100 days, in which compartment I exhibits a broad, 30-day-wide peak, are chosen as the synthetic dataset for the calculations and tests shown in Figure 7.
\begin{table}
\begin{tabular}{c c c} \hline \hline Parameter & Description & Value \\ \hline \(\rho_{1}\) & Probability of becoming symptomatically infected by exposure to an asymptomatic carrier & 0.80 \\ \(\rho_{2}\) & Probability of becoming asymptomatically infected by exposure to a symptomatic carrier & 0.29 \\ \(\alpha\) & Infection rate of an individual exposed to an asymptomatic carrier & 0.1 \\ \(\beta\) & Infection rate of an individual exposed to a symptomatic carrier & 0.17 \\ \(\gamma\) & Recovery rate & 1/16 \\ \(\delta\) & Death rate & 0.001 \\ \(N\) & Population & 1000 \\ \(A_{0}\) & Initial number of asymptomatic infectious individuals & 10 \\ \(I_{0}\) & Initial number of infectious individuals & 20 \\ \(R_{0}\) & Initial number of recovered individuals & 0 \\ \(D_{0}\) & Initial number of dead individuals & 0 \\ \(S_{0}\) & Initial number of susceptible individuals & 970 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The value of parameters and initial conditions for the SAIRD model.
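The generation of the mock data can be sketched as follows, under stated assumptions: the Table 1 values, a forward-Euler step of one day (the paper does not state the integrator used), and the S+A and R+D mapping to SIR compartments described in the text:

```python
# Hedged sketch: forward-Euler integration of the SAIRD system (2) with the
# Table 1 parameter values, then derivation of the SIR mock data.

def simulate_saird(days=100):
    rho1, rho2 = 0.80, 0.29
    alpha, beta = 0.1, 0.17
    gamma, delta, N = 1.0 / 16.0, 0.001, 1000.0
    S, A, I, R, D = [970.0], [10.0], [20.0], [0.0], [0.0]
    for _ in range(days):
        s, a, i = S[-1], A[-1], I[-1]
        fa = alpha * a * s / N          # new infections via asymptomatic contact
        fi = beta * i * s / N           # new infections via symptomatic contact
        S.append(s - fi - fa)
        A.append(a + (1 - rho1) * fa + rho2 * fi - (gamma + delta) * a)
        I.append(i + (1 - rho2) * fi + rho1 * fa - (gamma + delta) * i)
        R.append(R[-1] + gamma * (i + a))
        D.append(D[-1] + delta * (i + a))
    return S, A, I, R, D

S, A, I, R, D = simulate_saird()
# derive the SIR mock data as described in the text
S_sir = [s + a for s, a in zip(S, A)]
I_sir = list(I)
R_sir = [r + d for r, d in zip(R, D)]
```

Both the SAIRD trajectory and the derived SIR data conserve the total population \(N=1000\), consistent with the constraint \(S+A+I+R+D=N\).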
#### 3.1.2 Pre-processing of reported data
The COVID-19 epidemic data of Germany are used for our model analysis. They are based on the cumulative number of infected individuals, the cumulative number of recovered individuals, and the cumulative number of deaths, as reported in the Robert Koch-Institut (RKI) COVID-19 data for the period from March 1 to July 1, 2021. The SIR dynamics are integrated into our method. Hence, the first step of the pre-processing of the reported data is to extract the values for the different compartments from the records. The data for Germany track the cumulative
Figure 5: Time evolution of an example SAIRD model, generated by the mathematical model. The solid yellow line represents the number of susceptible people in the population, the solid grey line the number of asymptomatically infected people, the solid blue line the number of recovered people, and the solid red line the number of actively infected people. The solid black line is the deceased population. The population is assumed to be constant (N=1000).
Figure 6: SIR mock data derived from the SAIRD-model-generated data.
infectious, \(I_{c}\), recovered, \(R_{c}\), and deceased cases, \(D_{c}\). In accordance with the definition of the compartment \(R\) in the SIR model, \(R\) contains both the recovered as well as the virus-induced deceased individuals, \(R=R_{recovered}+D_{death}\). As the compartment \(R\) does not have any outflow, \(R=Re_{c}\). Here, \(Re_{c}\) is the cumulative value of the removed. Given the reported values, the theoretical values for the removed, infectious, and susceptible cases are obtained by \(R=R_{recovered}+D_{death}\), \(I=I_{c}-R\), and \(S=N-I-R\). Here, \(N\) is the total population. All values of all compartments are normalized by dividing by the total population. Thus, N is set to 1.
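The derivation above can be sketched as follows; the input lists are illustrative toy values, not the RKI records, and the function name is an assumption:

```python
# Hedged sketch: deriving normalized S, I, R series from cumulative counts,
# following R = R_recovered + D_death, I = I_c - R, S = N - I - R.

def compartments_from_cumulative(I_c, R_recovered, D_death, N):
    S, I, R = [], [], []
    for ic, rc, dc in zip(I_c, R_recovered, D_death):
        r = rc + dc          # removed = recovered + deceased (no outflow)
        i = ic - r           # currently infectious
        s = N - i - r        # remaining susceptibles
        S.append(s / N)
        I.append(i / N)
        R.append(r / N)
    return S, I, R

S_n, I_n, R_n = compartments_from_cumulative([100, 150], [20, 60], [5, 10], 1000)
```

After the division by \(N\), the three normalized series sum to 1 at every time point, matching the convention N = 1 used in the analysis.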
Reliability and cleanliness of the reported data matter greatly for the predictability of compartmental models, in particular for the soundness of the parameter estimates. Reported data are, however, quite noisy, due to misreporting, late reporting, and other reasons. The reported data from Germany vary substantially between weekdays and weekends. This is due to the significant reduction in detection at weekends. Therefore, the ability to make valuable estimates from the available reported data may be limited. Thus, before the data were analysed, we pre-processed the dataset by applying a 7-day moving-average window to smooth out the weekday-weekend zigzags in the outbreak reports. As shown in Figure 8, the reported data of the different compartments are significantly smoother and less noisy than the raw data after the sliding-window pre-processing.
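The smoothing step can be sketched as follows; the window alignment is an assumption, since the text only states that a 7-day moving average is applied:

```python
# Hedged sketch: a centered 7-day moving average whose window shrinks at the
# edges of the series, one common choice for this kind of smoothing.

def moving_average(x, window=7):
    half = window // 2
    out = []
    for k in range(len(x)):
        lo, hi = max(0, k - half), min(len(x), k + half + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out
```

Averaging over a full week cancels the systematic weekday-weekend reporting pattern, since each window contains every day of the week exactly once (away from the edges).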
### Setup of SIR-Informed Neural Networks
The neural network structure we use in our experiments takes a single value, the time \(t\), as input. Weights \(W_{i,j}\) are associated with each hidden layer, where \(i\) is the index of the start node and \(j\) is the index of the end node. The non-linear activation function \(tanh\) is applied at each node in the hidden layers,
\[\tanh\left(x\right)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}. \tag{18}\]
Figure 7: Selected SIR mock data derived from the SAIRD model generation data are used for the synthetic data simulations.
For the output nodes of the network, \(sigmoid\) activation function is applied,
\[\sigma(x)=\frac{1}{1+e^{-x}}, \tag{19}\]
to account for the normalization applied to \(S(t)\), \(I(t)\) and \(R(t)\), respectively. The neural network contains 4 hidden layers with 64 neurons in each layer. The Adam algorithm of the PyTorch package was selected as the optimizer. The learning rate used is 0.0001 and the number of training epochs is 200k. To prevent overfitting and to improve the generalization ability of the model on new data, we applied early stopping, for which the epoch limit is set to 300.
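A minimal, dependency-free sketch of the described architecture follows (input \(t\), four hidden layers of 64 tanh units, a sigmoid output layer for S, I and R). Only the shapes and activations are taken from the text; the weights here are random placeholders rather than trained parameters, and the initialization scheme is an assumption:

```python
import math, random

def tanh(x):                      # eq. (18)
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def sigmoid(x):                   # eq. (19)
    return 1.0 / (1.0 + math.exp(-x))

def init_layer(n_in, n_out, rng):
    scale = 1.0 / math.sqrt(n_in)
    W = [[rng.uniform(-scale, scale) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return W, b

def forward(t, layers):
    h = [t]
    for idx, (W, b) in enumerate(layers):
        act = sigmoid if idx == len(layers) - 1 else tanh
        h = [act(sum(w * x for w, x in zip(row, h)) + bi)
             for row, bi in zip(W, b)]
    return h  # [S(t), I(t), R(t)], each in (0, 1)

rng = random.Random(0)
sizes = [1, 64, 64, 64, 64, 3]    # 1 input, 4 hidden layers of 64, 3 outputs
layers = [init_layer(sizes[k], sizes[k + 1], rng) for k in range(len(sizes) - 1)]
out = forward(0.5, layers)
```

The sigmoid on the output layer guarantees that all three outputs lie in (0, 1), matching the normalization of the compartments by the total population.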
### Performance on synthetic data
We investigated using 15, 25 and 37 days of data, respectively, as the training set and, to ensure reproducibility, the same initial values \(\beta=0.15\) and \(\gamma=0.15\), to test the identifiability and predictability of our proposed approach. In Figures 9, 10 and 11, we present the identification and prediction results of our proposed method for different training-set sizes on the mock data. Clearly, for smooth compartments such as S and R, the PINNs integrated with the simple SIR dynamics fit and predict well on the training and test sets. The numerical solutions of the ODEs with the PINN-learned parameters also perform well and essentially overlap the PINN results. However, for data with peaks, such as compartment I, the SIR-dynamics-informed neural network model gives slightly better results than the numerical solution of the ODEs, especially at the peaks. This can also be seen in the loss values shown in Table 2.
Figure 8: The figure compares the reported data before and after pre-processing. I, S and R correspond to the trends in the number of infectious, susceptible, and removed individuals, respectively; the black scatter shows the raw, unprocessed data, and the solid red line is the curve of the data processed with the sliding window. The graphs show that the pre-processed data curves are smoother and that the fluctuations are effectively reduced.
### Performance on Reported Germany Data
After validating the method on synthetic data, we move on to test it on realistic reported data. With the pre-processing performed on the raw reported German data from March 1 to July 1, 2021, we selected the first 30, 40, and 50 days of records of compartment I (infected) as the training set and, to ensure reproducibility, the same initial values \(\beta=0.25\) and \(\gamma=0.15\), to apply
Figure 10: The graph shows the results identified and predicted using the first 25 days as the training set. The panels show compartments I, S and R, respectively. The black stars indicate the training set, the solid black line is the synthetic data, and the solid red line is the result from the PINNs.
Figure 9: The graph shows the results identified and predicted using the first 15 days as the training set. The panels show compartments I, S and R, respectively. The black stars indicate the training set, the solid black line is the synthetic data, and the solid red line is the result from the PINNs.
the method for analysis and prediction testing. Figures 12, 13 and 14 show the identification and prediction results using the first 30, 40 and 50 days as the training set, respectively, including the results from the PINNs, the solutions of the ODE system (1) with the PINN-learned parameters, and the reported data (after pre-processing). Here, we adopt the LSODA algorithm [48] to solve the SIR model numerically with the PINN-learned parameters.
The results fit the training data well, and the trend of the predictions is also reasonably consistent with the true data, especially around the appearance of the peak in I. Compared to the solutions of the ODEs with parameters learned from the PINN, the results from the SIR-informed NN have a better match on the training set, e.g., for the I compartment, while the prediction trend is maintained, because of the balance introduced between the data loss and the physical residual loss. In particular, the predictions of the proposed method are closer to the actual values around the peaks. This can be seen in Table 4. Compared to a purely data-driven machine learning method on the same problem, the residual loss contained in our method serves as a physics-informed regularizer. On the other hand, the data loss can also be viewed as a regularizer for conventional fitting of the physical dynamics (e.g., using the SIR or any other compartmental model).
Then, the PINN-learned parameters are used to produce the numerical solution of the SIR model (1), which is obtained using the LSODA algorithm [48], a solver for stiff and non-stiff systems of first-order ordinary differential equations. This solution of the ODE system is then compared with the results from the PINNs.
We found that, in the process of identifying the dynamics and making forecasts on the reported German data using PINNs, several factors have an impact on the training convergence speed and the final results: the initial values of the trainable parameters and the relative weights assigned to the data loss and the residual loss in the loss function. However, this effect is negligible for the synthetic data, which are less noisy and have cleaner dynamics. In Part A of the Appendix, we analyze the impact of different weight assignments for the loss function's data loss and residual loss on the error and on the prediction and identification of the number of infections. The goal is to analyze which weighting assignment of the data and residual loss in the loss
Figure 12: The graph shows the compartment results identified and predicted using the first 30 days as the training set. The panels show compartments I, S and R, respectively. The black stars indicate the training set, the solid black line is the reported data, the solid red line is the result from the PINNs, and the dashed green line indicates the result obtained by solving the SIR system with the parameters learned from the PINNs.
function is optimal with respect to the resulting error, i.e., leads to the lowest error and to a more accurate identification and prediction in compartment I.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & & MSE & MAE \\ \hline \multirow{2}{*}{30} & PINNs & 0.00644 & 0.08468 \\ & SIR-ODEs & 0.00745 & 0.10278 \\ \multirow{2}{*}{40} & PINNs & 0.00248 & 0.04984 \\ & SIR-ODEs & 0.00244 & 0.05179 \\ \multirow{2}{*}{50} & PINNs & 0.00499 & 0.06934 \\ & SIR-ODEs & 0.00568 & 0.08474 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Error metrics for PINNs and SIR-ODEs on Germany reported data.
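Tables 2 and 4 report MSE and MAE values; the paper does not spell out the formulas, so the standard definitions are assumed in this sketch:

```python
# Hedged sketch: the standard mean squared error and mean absolute error,
# assumed to be the metrics reported in Tables 2 and 4.

def mse(pred, true):
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

def mae(pred, true):
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)
```

MSE penalizes large deviations (such as a missed peak) quadratically, while MAE weighs all deviations linearly, which is why reporting both gives a more complete picture of the fit quality.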
Figure 13: The graph shows the compartment results identified and predicted using the first 40 days as the training set. The panels show compartments I, S and R, respectively. The black stars indicate the training set, the solid black line is the reported data, the solid red line is the result from the PINNs, and the dashed green line indicates the result obtained by solving the SIR system with the parameters learned from the PINNs.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & & \multicolumn{4}{c}{Parameters learned} \\ \cline{2-5} & initial & 30 & 40 & 50 \\ \hline \(\beta\) & 0.25 & 0.1268 & 0.1167 & 0.1233 \\ \(\gamma\) & 0.15 & 0.0392 & 0.0404 & 0.0391 \\ \hline \hline \end{tabular}
\end{table}
Table 5: The result of parameter learned from PINNs on Germany reported data.
## 4 Conclusions
A machine learning approach based on physics-informed neural networks is considered to identify and predict the evolution of dynamic systems and to estimate their parameters. We combine it with a simple SIR compartmental model from mathematical epidemiology to learn and explore complex epidemiological dynamics. We first demonstrate the proposed method on synthetic data from a SAIRD model in multiple numerical experiments, with different-sized training sets generated by solving a system of ODEs with fixed parameters and initial conditions. It is found that the PINNs with the SIR model perform better on the training set and provide a better prediction trend than the solutions of the ODEs with model parameters estimated by the PINNs alone.
Then, we extended our tests to realistic reported data on the COVID-19 epidemic in Germany. Specifically, we picked a period in which compartment I had a peak for the demonstration. The experimental results show that the PINNs combined with the SIR model also give better results on the training set and in the trend prediction for the test set than solving the ODE system with the parameters estimated by the PINNs. The experimental results from both the simulations and the reported data show that it is feasible to put physical information from simple epidemic models into PINNs to study more complex epidemiological dynamics. Moreover, with some fine-tuning, even better results can be achieved.
However, to obtain more accurate results on actual epidemiological data, we should extend our study to more detailed models in mathematical epidemiology while exploring other architectures of PINNs. This will be the subject of future work.
Figure 14: The graph shows the compartment results identified and predicted using the first 50 days as the training set. The panels show compartments I, S and R, respectively. The black stars indicate the training set, the solid black line is the reported data, the solid red line is the result from the PINNs, and the dashed green line indicates the result obtained by solving the SIR system with the parameters learned from the PINNs.
#### Funding
The present work was supported by XF-IJRC (S. Han, L. Wang), the ENABLE Project of HMWK (L. Stelz), the BMBF under ErUM-Data (K. Zhou), the AI grant of SAMSON AG, Frankfurt (L. Wang, K. Zhou), and the Walter Greiner Gesellschaft zur Förderung der physikalischen Grundlagenforschung e.V. through the Judah M. Eisenberg Laureatus Chair at Goethe Universität Frankfurt am Main (H. Stöcker). We also thank NVIDIA Corporation for the donation of NVIDIA GPUs.
#### Data Availability Statement
The reported COVID-19 data for Germany used in the present work are available at the COVID-19 Data Repository ([https://github.com/robert-koch-institut](https://github.com/robert-koch-institut)) and are provided by the German Robert Koch Institute (RKI, Berlin, Germany).
#### Conflicts of Interest
The authors declare that they do not see any conflict of interest.
|
2306.02002 | Can Directed Graph Neural Networks be Adversarially Robust? | The existing research on robust Graph Neural Networks (GNNs) fails to
acknowledge the significance of directed graphs in providing rich information
about networks' inherent structure. This work presents the first investigation
into the robustness of GNNs in the context of directed graphs, aiming to
harness the profound trust implications offered by directed graphs to bolster
the robustness and resilience of GNNs. Our study reveals that existing directed
GNNs are not adversarially robust. In pursuit of our goal, we introduce a new
and realistic directed graph attack setting and propose an innovative,
universal, and efficient message-passing framework as a plug-in layer to
significantly enhance the robustness of GNNs. Combined with existing defense
strategies, this framework achieves outstanding clean accuracy and
state-of-the-art robust performance, offering superior defense against both
transfer and adaptive attacks. The findings in this study reveal a novel and
promising direction for this crucial research area. The code will be made
publicly available upon the acceptance of this work. | Zhichao Hou, Xitong Zhang, Wei Wang, Charu C. Aggarwal, Xiaorui Liu | 2023-06-03T04:56:04Z | http://arxiv.org/abs/2306.02002v1 | # Can Directed Graph Neural Networks be Adversarially Robust?
###### Abstract
The existing research on robust Graph Neural Networks (GNNs) fails to acknowledge the significance of directed graphs in providing rich information about networks' inherent structure. This work presents the first investigation into the robustness of GNNs in the context of directed graphs, aiming to harness the profound trust implications offered by directed graphs to bolster the robustness and resilience of GNNs. Our study reveals that existing directed GNNs are not adversarially robust. In pursuit of our goal, we introduce a new and realistic directed graph attack setting and propose an innovative, universal, and efficient message-passing framework as a plug-in layer to significantly enhance the robustness of GNNs. Combined with existing defense strategies, this framework achieves outstanding clean accuracy and state-of-the-art robust performance, offering superior defense against both transfer and adaptive attacks. The findings in this study reveal a novel and promising direction for this crucial research area. The code will be made publicly available upon the acceptance of this work.
## 1 Introduction
Graph neural networks (GNNs) have emerged as a promising approach for learning feature representations from graph data, owing to their ability to capture node features and graph topology information through message-passing frameworks [1; 2]. However, extensive research has revealed that GNNs are vulnerable to adversarial attacks [3; 4; 5; 6; 7]. Even slight perturbations in the graph structure can lead to significant performance deterioration. Despite the existence of numerous defense strategies, their effectiveness has been questioned due to a potential false sense of robustness against transfer attacks [8]. In particular, a recent study [8] demonstrated that existing robust GNNs are much less robust when facing stronger adaptive attacks. In many cases, these models even underperform simple multi-layer perceptrons (MLPs) that disregard graph topology information, indicating the failure of GNNs in the presence of adversarial attacks. As existing research fails to deliver satisfying robustness, new strategies are needed to effectively enhance the robustness of GNNs.
It is worth noting that existing research on robust GNNs largely overlooks the significance of directed graphs in providing rich information about the networks' inherent structure. Directed graphs allow one to encode pairwise relations between entities with many relations being directed [9]. Examples of such directed graphs include citation networks [10], social networks [11], and web networks [12] where edges represent paper citations, user following relationships, or website hyperlinks, respectively. Despite many datasets being naturally modeled as directed graphs, most existing GNNs, particularly those designed for robustness, primarily focus on undirected graphs. Consequently, directed graphs
and adversarial attacks are often converted to undirected graphs through symmetrization, leading to the loss of valuable directional information.
We highlight that the link directions in graphs have inherent implications for trustworthiness [13; 14; 15]: (1) out-links are usually more reliable than in-links; and (2) it is practically more challenging to attack out-links than in-links of target nodes. For instance, in social media platforms as shown in Figure 1, it is relatively straightforward to create fake users and orchestrate large-scale link spam (i.e., in-links) targeting specific users [16]. However, hacking into the accounts of those target users and manipulating their following behaviors (i.e., out-links) are considerably more difficult [17]. Due to these valuable implications, directed graphs have played a crucial role in trust computing and spam detection in link analysis algorithms such as TrustRank [15] and EigenTrust [14] over the last two decades. However, to the best of our knowledge, the potential of directed graphs remains unexplored in the robustness and trustworthiness of GNNs.
To address this research gap, we provide the first investigation into the robustness of GNNs in the context of directed graphs and explore the potential of leveraging directed graphs to enhance the robustness of GNNs. In pursuit of this goal, our contributions can be summarized as follows:
* We analyze the limitations of existing research on attacks and defenses of GNNs. To overcome these limitations, we introduce Restricted Directed Graph Attack (RDGA), a new and realistic adversarial graph attack setting that differentiates between in-link and out-link attacks on target nodes while restricting the adversary's capability to execute out-link attacks on target nodes.
* Our performance evaluation of popular directed GNNs reveals that they suffer from lower clean accuracy and notably inadequate robustness, indicating their inability to effectively leverage the rich directed link information offered by directed graphs.
* We propose an innovative, universal, and efficient Biased Bidirectional Random Walk (BBRW) message-passing framework that effectively leverages directional information in directed graphs to enhance both the clean and robust performance of GNNs. Our solution provides a plug-in layer that substantially enhances the robustness of various GNN backbones.
* Our comprehensive comparison showcases that BBRW achieves outstanding clean accuracy and state-of-the-art robustness against both transfer and adaptive attacks. We provide detailed ablation studies to further understand the working mechanism of the proposed approach.
## 2 Preliminary: Are Directed GNNs Robust?
In this section, we first discuss the limitations of existing adversarial graph attack settings, and we introduce a new and realistic adversarial graph attack setting to overcome these limitations. We then present an insightful performance analysis of existing directed GNNs against adversarial attacks.
**Notations.** In this paper, we consider a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with \(|\mathcal{V}|=n\) nodes and \(|\mathcal{E}|=m\) edges. The adjacency matrix of \(\mathcal{G}\) is denoted as \(\mathbf{A}\in\{0,1\}^{n\times n}\). The feature matrix of the \(n\) nodes is denoted as \(\mathbf{X}\in\mathbb{R}^{n\times d}\). The label vector is denoted as \(\mathbf{Y}\in\mathbb{R}^{n}\). The out-degree matrix of \(\mathbf{A}\) is \(\mathbf{D}_{\text{out}}=\text{diag}\left(d_{1}^{+},d_{2}^{+},...,d_{n}^{+}\right)\), where \(d_{i}^{+}=\sum_{j}\mathbf{A}_{ij}\) is the out-degree of node \(i\). The in-degree matrix of \(\mathbf{A}\) is \(\mathbf{D}_{\text{in}}=\text{diag}\left(d_{1}^{-},d_{2}^{-},...,d_{n}^{-}\right)\), where \(d_{i}^{-}=\sum_{j}\mathbf{A}_{ji}\) is the in-degree of node \(i\). \(f_{\theta}(\mathbf{A},\mathbf{X})\) denotes the GNN encoder that extracts features from \(\mathbf{A}\) and \(\mathbf{X}\) with network parameters \(\theta\).
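As a concrete illustration of the degree notation, the following minimal numpy sketch (the toy graph is our own, not from the paper) builds \(\mathbf{D}_{\text{out}}\) and \(\mathbf{D}_{\text{in}}\) from a small adjacency matrix:

```python
import numpy as np

# Toy directed graph on 3 nodes with edges 0->1, 0->2, 1->2.
A = np.array([[0., 1., 1.],
              [0., 0., 1.],
              [0., 0., 0.]])

D_out = np.diag(A.sum(axis=1))  # d_i^+ = sum_j A_ij  (row sums)
D_in  = np.diag(A.sum(axis=0))  # d_i^- = sum_j A_ji  (column sums)
```

Node 0 has out-degree 2 and in-degree 0, while node 2 has out-degree 0 and in-degree 2, reflecting the asymmetry that directed GNNs must handle.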
### Limitations of Existing Adversarial Graph Attack
The existing research on the attacks and defenses of GNNs focuses on undirected graphs where the original graphs and adversarial attacks are eventually converted to undirected graphs, which disregards the link direction information. As a result, existing adversarial graph attacks mostly conduct undirected graph attacks that flip both directions (in-link and out-link) of an adversarial edge once being selected [18; 19; 6; 20]. However, this common practice has some critical limitations:
Figure 1: Large-scale link spam.
* First, it is often impractical to attack both directions of an edge in graphs. For instance, flipping the out-links of users in social media platforms or financial systems usually requires hacking into their accounts to change their following or transaction behaviors, which can be easily detected by security countermeasures such as Intrusion Detection Systems [21];
* Second, the undirected graph attack setting does not distinguish the different roles of in-links and out-links, which fundamentally undermines the resilience of networks. For instance, a large-scale link spam attack targeting a user does not imply that the targeted user fully trusts these in-links. But the link spam attack can destroy the features of target nodes if the adversarial links are made undirected.
Due to these limitations, existing graph attacks are not practical in many real-world applications, and existing defenses can not effectively leverage useful information from directed graphs.
### Restricted Directed Graph Attack
To overcome the limitations of existing attack and defense research on GNNs, we propose Restricted Directed Graph Attack (RDGA), a new and realistic graph attack setting that differentiates between in-link and out-link attacks on target nodes while restricting the adversary's capability to execute out-link attacks on target nodes.
**Remark 1**.: _While the directed attack performs the same as the undirected attack on undirected GNNs due to the symmetrization operation, this offers unprecedented opportunities to distinguish different roles between in-links and out-links in directed graphs for directed GNNs._
**Restricted Directed Graph Attack.** Mathematically, we denote the directed adversarial attack on the directed graph \(\mathbf{A}\in\{0,1\}^{n\times n}\) as an asymmetric perturbation matrix \(\mathbf{P}\in\{0,1\}^{n\times n}\). The adjacency matrix being attacked is given by \(\tilde{\mathbf{A}}=\mathbf{A}+(\mathbf{1}\mathbf{1}^{\top}-2\mathbf{A})\odot \mathbf{P}\) where \(\mathbf{1}=[1,1,\ldots,1]^{\top}\in\mathbb{R}^{n}\) and \(\odot\) denotes element-wise product. \(\mathbf{P}_{ij}=1\) means flipping the edge \((i,j)\) (i.e., \(\tilde{\mathbf{A}}_{ij}=0\) if \(\mathbf{A}_{ij}=1\) or \(\tilde{\mathbf{A}}_{ij}=1\) if \(\mathbf{A}_{ij}=0\)) while \(\mathbf{P}_{ij}=0\) means keeping the edge \((i,j)\) unchanged (i.e., \(\tilde{\mathbf{A}}_{ij}=\mathbf{A}_{ij}\)). The asymmetric nature of this perturbation matrix indicates the adversarial edges have directions so that one direction will not necessarily imply the attack from the opposite direction.
Given the practical difficulty of attacking the out-links of target nodes, we impose restrictions on the adversary's capacity to execute out-link attacks on target nodes. The Restricted Directed Graph Attack (RDGA) is given by \(\tilde{\mathbf{A}}=\mathbf{A}+(\mathbf{1}\mathbf{1}^{\top}-2\mathbf{A})\odot(\mathbf{P}\odot\mathbf{M})\), where \(\tilde{\mathbf{P}}=\mathbf{P}\odot\mathbf{M}\) denotes the restricted perturbation. When restricting the out-links of nodes \(\mathcal{T}\) (e.g., the target nodes), the mask matrix is defined as \(\mathbf{M}_{ij}=0\ \forall i\in\mathcal{T},j\in\mathcal{V}\) and \(\mathbf{M}_{ij}=1\) otherwise. The attacking process closely follows existing undirected graph attacks such as the PGD attack [18] or Nettack [22], but it additionally considers different attack budgets for in-links and out-links when selecting the edges. In Section 4.3, we also study a more general RDGA that allows some portion of the attack budget on targets' out-links, where the masking matrix is only partially masked.
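The restricted flip operation can be sketched in a few lines of numpy (a hypothetical illustration: the function name, toy graph, and proposed perturbation are our own, not the authors' code):

```python
import numpy as np

def restricted_attack(A, P, targets):
    """Apply A~ = A + (11^T - 2A) o (P o M), masking out-link edits on targets."""
    n = A.shape[0]
    M = np.ones((n, n))
    M[list(targets), :] = 0.0        # adversary cannot touch targets' out-links
    P_restricted = P * M
    return A + (np.ones((n, n)) - 2.0 * A) * P_restricted

# Toy graph 0->1, 1->2, 2->0; adversary proposes flipping both directions of (0, 2).
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
P = np.zeros((3, 3))
P[0, 2] = P[2, 0] = 1.0
A_tilde = restricted_attack(A, P, targets={0})
# The in-link flip 2->0 goes through (edge removed); the out-link edit 0->2 is masked.
```

Because node 0 is a target, only the flip of its in-link \(2\to 0\) is realized; the proposed edit on its out-link \(0\to 2\) is masked away, matching the asymmetric trust assumption.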
### Performance Analysis
To answer the question of whether existing directed GNNs are robust against adversarial attacks, we evaluate the performance of directed GNNs under the common Undirected Graph Attack (UGA) setting and the proposed Restricted Directed Graph Attack (RDGA) setting. We choose the state-of-the-art directed GNNs designed for directed graphs including DGCN [23], DiGCN [24], and MagNet [25]. DGCN [23] combines a normalized first-order proximity matrix \(\hat{\mathbf{A}}_{\mathbf{F}}\) and two normalized second-order proximity matrices (\(\hat{\mathbf{A}}_{\mathbf{S}_{in}}\) and \(\hat{\mathbf{A}}_{\mathbf{S}_{out}}\)) to construct a directed graph convolution. DiGCN [24] defines the digraph Laplacian in the symmetrically normalized form \(\mathbf{I}-\frac{1}{2}\left(\mathbf{\Pi}_{pr}^{\frac{1}{2}}\mathbf{P}_{pr}\mathbf{\Pi}_{pr}^{-\frac{1}{2}}+\mathbf{\Pi}_{pr}^{-\frac{1}{2}}\mathbf{P}_{pr}^{\top}\mathbf{\Pi}_{pr}^{\frac{1}{2}}\right)\), where \(\mathbf{P}_{pr}=(1-\alpha)\mathbf{D}_{\text{out}}^{-1}\mathbf{A}+\frac{\alpha}{n}\mathbf{1}^{n\times n}\) is the PageRank transition matrix and \(\mathbf{\Pi}_{pr}\) is the diagonal matrix of the stationary distribution of \(\mathbf{P}_{pr}\). MagNet [25] leverages a complex Hermitian matrix that encodes undirected and directed information in the magnitude and phase of its entries, respectively. Note that we test two versions of MagNet according to the setting of its hyperparameter \(q\): Undirected-MagNet sets \(q=0\) while Directed-MagNet sets \(q>0\). Additionally, we compare their performance with simple baselines such as MLP and GCN (as an example of undirected GNNs) [26].
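For concreteness, the PageRank transition matrix in the DiGCN description can be sketched as follows (a dense toy implementation under the assumption that every node has at least one out-link; not the authors' code, which additionally handles larger sparse graphs):

```python
import numpy as np

def pagerank_transition(A, alpha=0.1):
    """P_pr = (1 - alpha) * D_out^{-1} A + (alpha / n) * 1 1^T."""
    n = A.shape[0]
    d_out = np.clip(A.sum(axis=1, keepdims=True), 1.0, None)  # guard dangling nodes
    return (1.0 - alpha) * (A / d_out) + (alpha / n) * np.ones((n, n))

A = np.array([[0., 1., 1.],
              [0., 0., 1.],
              [1., 0., 0.]])
P_pr = pagerank_transition(A, alpha=0.1)
# Every row sums to 1, i.e. P_pr is a valid random-walk transition matrix.
```

The teleport term \(\frac{\alpha}{n}\mathbf{1}\mathbf{1}^{\top}\) makes the chain irreducible, so the stationary distribution \(\mathbf{\Pi}_{pr}\) used in the DiGCN Laplacian is well defined.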
We test their node classification performance under different in-link attack budgets on the Cora-ML dataset following the experimental setting described in Section 4.1. We use PGD [18] for the local evasion attack. For simplicity, we transfer the attack from the surrogate model GCN. From the
results in Table 1, we can make the following observations: **(1)** Undirected GNNs (e.g., GCN and Undirected MagNet) perform the same under UGA and RDGA, which confirms the conclusion in Remark 1. It also indicates that undirected GNNs can not benefit from directed links; **(2)** In terms of clean accuracy, directed GNNs (e.g., DGCN, DiGCN, and Directed-MagNet) perform worse than undirected GNNs (e.g., GCN and Undirected-MagNet) by significant margins; **(3)** In terms of robust accuracy, directed GNNs can perform better under directed attacks (RDGA) than under undirected attacks (UGA) in some cases, but the improvements are not stable. Overall, directed GNNs are still vulnerable to adversarial attacks. They often underperform simple graph-agnostic MLP which does not use graph information even if they are tested under weak attacks transferred from the surrogate model. These observations demonstrate that existing directed GNNs are not strong in either clean or adversarial settings and they can not effectively leverage the rich information in directed links.
## 3 Methodology: Robust Directed GNNs
The preliminary study in Section 2 indicates the incapability of existing directed GNNs [23; 24; 25], which calls for novel message passing solutions that effectively leverage the rich information in directed links. In this section, we first investigate the performance of directed random walk message passing. The discovery of catastrophic failures and the false sense of robustness due to indirect attacks motivates the design of a novel biased bidirectional random walk message passing framework.
### Motivation: Directed Random Walk Message Passing
In order to differentiate the roles of in-links and out-links, we propose to explore two variants of random walk message passing: (1) \(\mathbf{RW_{out}}\): aggregates node features following out-links: \(\mathbf{X}^{l+1}=\mathbf{D}_{\text{out}}^{-1}\mathbf{A}\mathbf{X}^{l}\); and (2) \(\mathbf{RW_{in}}\): inversely aggregates node features following in-links: \(\mathbf{X}^{l+1}=\mathbf{D}_{\text{in}}^{-1}\mathbf{A}^{\top}\mathbf{X}^{l}\). We select two popular GNNs, GCN [26] and APPNP [27], as the backbone models and substitute their symmetric aggregation matrix \(\mathbf{D}^{-\frac{1}{2}}\mathbf{A}_{\text{sym}}\mathbf{D}^{-\frac{1}{2}}\) with \(\mathbf{RW_{out}}\) and \(\mathbf{RW_{in}}\) respectively, which leads to four variants: GCN-\(\mathbf{RW_{out}}\), GCN-\(\mathbf{RW_{in}}\), APPNP-\(\mathbf{RW_{out}}\), and APPNP-\(\mathbf{RW_{in}}\).
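The two directed aggregations can be sketched with dense numpy (an illustrative toy version under the assumption of a small graph; the degree clipping for isolated nodes is our own guard, not from the paper):

```python
import numpy as np

def rw_out(A, X):
    """X' = D_out^{-1} A X: average the features of out-linking neighbors."""
    d_out = np.clip(A.sum(axis=1, keepdims=True), 1.0, None)
    return (A / d_out) @ X

def rw_in(A, X):
    """X' = D_in^{-1} A^T X: average the features of in-linking neighbors."""
    d_in = np.clip(A.sum(axis=0, keepdims=True), 1.0, None)
    return (A / d_in).T @ X

A = np.array([[0., 1., 1.],
              [0., 0., 1.],
              [0., 0., 0.]])
X = np.eye(3)  # one-hot features make the averaging easy to read off
H_out = rw_out(A, X)  # node 0 averages the features of nodes 1 and 2
H_in  = rw_in(A, X)   # node 2 averages the features of nodes 0 and 1
```

With one-hot features, the rows of `H_out` and `H_in` directly expose which neighbors each node trusts under the two aggregation directions.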
We evaluate the clean and robust node classification accuracy of these variants on the Cora-ML dataset under RDGA, following the experimental setting detailed in Section 4. It is worth emphasizing that while we transfer attacks from the surrogate model GCN as usual, we additionally test the robust performance of adaptive attacks which directly attack the victim model to avoid a potential false sense of robustness. The results in Table 2 provide the following insightful observations:
Table 1: Classification accuracy (%) under targeted transfer attacks (Cora-ML).
Table 2: Classification accuracy (%) of GNNs under transfer and adaptive attacks (Cora-ML).
* In terms of clean accuracy, we have GCN > GCN-\(\text{RW}_{\text{out}}\) > GCN-\(\text{RW}_{\text{in}}\) > MLP and APPNP > APPNP-\(\text{RW}_{\text{out}}\) > APPNP-\(\text{RW}_{\text{in}}\) > MLP. This indicates that both out-links and in-links in the clean directed graph provide useful graph topology information and out-links are indeed more reliable than in-links. Moreover, undirected GNNs (GCN and APPNP) achieve the best clean performance since both in-links and out-links are utilized through the symmetrization operation.
* Under transfer attack, we have GCN-\(\text{RW}_{\text{out}}\) > GCN > GCN-\(\text{RW}_{\text{in}}\) and APPNP-\(\text{RW}_{\text{out}}\) > APPNP > APPNP-\(\text{RW}_{\text{in}}\). In particular, the transfer attack barely impacts GCN-\(\text{RW}_{\text{out}}\) and APPNP-\(\text{RW}_{\text{out}}\) since no out-link attack on target nodes is allowed under the RDGA setting. It indicates that \(\text{RW}_{\text{out}}\) is free from the impact of adversarial in-links. However, the adversarial in-links in the transfer attack hurt GCN and APPNP badly and completely destroy GCN-\(\text{RW}_{\text{in}}\) and APPNP-\(\text{RW}_{\text{in}}\) that only rely on in-links.
* Although \(\text{RW}_{\text{out}}\) performs extremely well under transfer attacks, we surprisingly find that GCN-\(\text{RW}_{\text{out}}\) and APPNP-\(\text{RW}_{\text{out}}\) suffer from _catastrophic failures_ under stronger adaptive attacks and they significantly underperform simple MLP, which uncovers a severe _false sense of robustness_.
### New Approach: Biased Bidirectional Random Walk Message Passing (BBRW)
The studies on existing directed GNNs (Section 2.3) and the directed random walk message passing (Section 3.1) indicate that it is highly non-trivial to robustify GNNs using directed graphs. But these studies provide insightful motivations to develop a better approach. In this section, we start with a discussion on the catastrophic failures of \(\text{RW}_{\text{out}}\) and propose an innovative and effective approach.
Catastrophic Failures due to Indirect Attacks.The catastrophic failures of \(\text{RW}_{\text{out}}\) (GCN-\(\text{RW}_{\text{out}}\) and APPNP-\(\text{RW}_{\text{out}}\)) under adaptive attacks indicate their false sense of robustness.
In order to understand this phenomenon and gain deeper insights, we perform statistical analyses on the adversary behaviors when attacking different victim models such as GCN, GCN-\(\text{RW}_{\text{out}}\) and GCN-\(\text{RW}_{\text{in}}\) using attack budget \(50\%\). Note that similar observations can be made under other attack budgets as shown in Appendix A.2. In particular, we separate adversarial links into different groups according to whether they directly connect target nodes or targets' neighbors. The distributions of adversarial links shown in Figure 2 indicate:
* When attacking GCN and GCN-\(\text{RW}_{\text{in}}\), the adversary mainly attacks the in-links of target nodes directly, using 96.32% and 80.34% of the perturbation budget respectively, which badly hurts their performance since both GCN and GCN-\(\text{RW}_{\text{in}}\) rely on in-links. However, the attacks transferred from these two victim models barely impact GCN-\(\text{RW}_{\text{out}}\), which only trusts out-links.
* When attacking GCN-\(\text{RW}_{\text{out}}\), the adversary can not manipulate the out-links of target nodes under the restricted setting (RDGA). Therefore, they do not focus on attacking the target nodes directly, since in-links of target nodes can not influence GCN-\(\text{RW}_{\text{out}}\) either. Instead, the adversary artfully identifies the targets' neighbors and conducts indirect out-link attacks on these neighbors using 65.55% of the budget. As a result, it catastrophically destroys the predictions of target nodes indirectly through their out-linking neighbors, most of which have been attacked.
The systematic studies and analyses in Section 2.3 and Section 3.1 offer two valuable lessons: **(1)** Both in-links and out-links provide useful graph topology information; **(2)** While out-links are more reliable than in-links, full trust in out-links can cause catastrophic failures and a false sense of robustness under adaptive attacks due to the existence of indirect attacks. These lessons motivate us to develop a novel message-passing framework that not only fully utilizes the out-links and in-links information but also differentiates their roles. Importantly, it also needs to avoid a false sense of robustness under adaptive attacks.
Figure 2: Adversary behaviors: the yellow portion represents attacks on the target (direct in-link attacks); the red portion represents attacks on the targets’ neighbors (indirect out-link attacks); and the blue portion represents other attacks.
To this end, we propose a Biased Bidirectional Random Walk (BBRW) Message Passing framework represented by the propagation matrix that balances the trust on out-links and in-links:
\[\tilde{\mathbf{A}}_{\beta}=\mathbf{D}_{\beta,\text{out}}^{-1}\mathbf{A}_{\beta}\ \ \text{where}\ \ \mathbf{D}_{\beta,\text{out}}=\text{diag}(\mathbf{A}_{\beta}\mathbf{1}),\ \ \mathbf{A}_{\beta}=\beta\mathbf{A}+(1-\beta)\mathbf{A}^{\top}.\]
\(\mathbf{A}_{\beta}\) is the weighted sum of \(\mathbf{A}\) and \(\mathbf{A}^{\top}\) that combines the out-links (directed random walk) and in-links (inversely directed random walk), i.e., \(\{\mathbf{A}_{\beta}\}_{ij}=\beta\mathbf{A}_{ij}+(1-\beta)\mathbf{A}_{ji}\). \(\mathbf{D}_{\beta,\text{out}}\) is the out-degree matrix of \(\mathbf{A}_{\beta}\). \(\tilde{\mathbf{A}}_{\beta}\) denotes the random walk normalized propagation matrix that aggregates node features from both out-linking and in-linking neighbors. The bias weight \(\beta\in[0,1]\) controls the relative trustworthiness of out-links compared with in-links. When \(\beta=0\), it reduces to RW\({}_{\text{in}}\) that fully trusts in-links. But RW\({}_{\text{in}}\) suffers from adversarial attacks, and even a weak transfer attack destroys it. When \(\beta=1\), it reduces to RW\({}_{\text{out}}\) that fully trusts out-links. But RW\({}_{\text{out}}\) suffers from catastrophic failures under adaptive attacks as shown in Section 3.1. Therefore, \(\beta\) is typically recommended to be selected in the range \((0.5,1)\) to reflect the reasonable assumption that out-links are more reliable than in-links but out-links are not fully trustworthy due to the existence of indirect in-link attacks on the neighbors.
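A minimal numpy sketch of this propagation matrix (illustrative only; the choice \(\beta=0.7\) and the toy cycle graph are our own assumptions):

```python
import numpy as np

def bbrw_propagation(A, beta=0.7):
    """A~_beta = D_{beta,out}^{-1} A_beta with A_beta = beta*A + (1-beta)*A^T."""
    A_beta = beta * A + (1.0 - beta) * A.T
    d = np.clip(A_beta.sum(axis=1, keepdims=True), 1e-12, None)
    return A_beta / d  # row-normalized: each node's weights sum to 1

# Directed 3-cycle 0->1->2->0: every node has one out-link and one in-link.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
P = bbrw_propagation(A, beta=0.7)
# Row 0 puts weight 0.7 on its out-neighbor (node 1) and 0.3 on its in-neighbor (node 2).
```

On the cycle, each node assigns weight \(\beta\) to its out-neighbor and \(1-\beta\) to its in-neighbor before normalization, making the trust bias between link directions explicit.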
**Advantages.** The proposed BBRW enjoys the following advantages: (1) **Effectiveness**: BBRW is able to leverage both in-link and out-link graph topology information in directed graphs, which leads to excellent clean accuracy. (2) **Trustworthiness**: the hyperparameter \(\beta\) provides the flexibility to adjust the trust between out-links and in-links, which helps avoid the catastrophic failures and false sense of robustness caused by the unconditional trust on out-links as discussed in the case of RW\({}_{\text{out}}\). (3) **Simplicity**: BBRW is simple due to its clear motivation and easy implementation. It is easy to tune with only one hyperparameter; (4) **Universality**: It is universal so that it can be readily used as a plug-in layer to improve the robustness of various GNN backbones. It is also compatible with existing defense strategies developed for undirected GNNs. (5) **Efficiency**: BBRW shares the same computational and memory complexities and costs as vanilla GNNs such as GCN and APPNP.
## 4 Experiment
In this section, we provide comprehensive experiments to verify the advantages of the proposed BBRW. Further ablation studies are presented to illustrate the working mechanism of BBRW.
### Experimental Setting
**Datasets.** For the attack setting, we use the two most widely used datasets in the literature, namely Cora-ML [28] and Citeseer [29]. We use the directed graphs downloaded from the work [25] and follow their data splits (10% training, 10% validation, and 80% testing). We repeat the experiments for 10 random data splits and report the mean and variance of the node classification accuracy.
**Baselines.** We compare our models with seven undirected GNNs: GCN [26], APPNP [27], Jaccard-GCN [5], RGCN [30], GRAND [31], GCN-Soft-Median [32], and ElasticGNN [33], most of which are designed as robust GNNs. Additionally, we also select three state-of-the-art directed GNNs including DGCN [23], DiGCN [24] and MagNet [25] as well as the graph-agnostic MLP.
**Hyperparameter settings.** For all methods, hyperparameters are tuned from the following search space: 1) learning rate: {0.05, 0.01, 0.005}; 2) weight decay: {5e-4, 5e-5, 5e-6}; 3) dropout rate: {0.0, 0.5, 0.8}. For APPNP, we use the teleport probability \(\alpha=0.1\) and propagation step \(K=10\) as [27]. For BBRW-based methods, we tune \(\beta\) from 0 to 1 with the interval 0.1. For a fair comparison, the proposed BBRW-based methods share the same architectures and hyperparameters with the backbone models except for the plugged-in BBRW layer. For all models, we use 2 layer neural networks with 64 hidden units. Other hyperparameters follow the settings in their original papers.
**Adversary attacks & evaluations.** We conduct evasion target attacks using PGD topology attack algorithm [18] under the proposed RDGA setting. The details of the attacking algorithm are presented in Appendix A.1. We randomly select 20 target nodes per split for robustness evaluation and run the experiments for multiple link budgets \(\Delta\in\{0\%,25\%,50\%,100\%\}\) of the target node's total degree. _Transfer_ and _Adaptive_ refer to transfer and adaptive attacks, respectively. For transfer attacks, we choose a 2-layer GCN as the surrogate model following existing works [8; 6]. For adaptive attacks, the victim models are the same as the surrogate models, avoiding a false sense of robustness in transfer attacks. _Clean (total)_ and _Target_ denote the accuracy on the entire set of test nodes and the subset of 20 target nodes, respectively. "\(\backslash\)" means we do not find a trivial solution for adaptive attack since it is non-trivial to compute the gradient of the adjacency matrix for those victim models.
### Robust Performance
To demonstrate the effectiveness, robustness, and universality of the proposed BBRW message-passing framework, we develop multiple variants of it by plugging BBRW into classic GNN backbones: GCN [26], APPNP [27] and GCN-Soft-Median [32]. The clean and robust performance are compared with plenty of representative GNN baselines on Cora-ML and Citeseer datasets as summarized in Table 3 and Table 4, respectively. From these results, we can observe the following:
* In most cases, all baseline GNNs underperform the graph-agnostic MLP under adaptive attacks, which indicates their incapability to robustly leverage graph topology information. However, most of the BBRW variants outperform MLP. Taking Cora-ML as an instance, the best BBRW variant (BBRW-GCN-Soft-Median) significantly outperforms MLP by \(\{18\%,16\%,13.5\%\}\) (transfer attack) and \(\{18.5\%,14.5\%,11\%\}\) (adaptive attack) under \(\{25\%,50\%,100\%\}\) attack budgets. Even under 100% perturbation, BBRW-GCN-Soft-Median still achieves 84.5% robust accuracy under strong adaptive attacks, which indicates the powerful value of trusting out-links.
* The proposed BBRW is a highly effective plug-in layer that significantly and consistently enhances the robustness of GNN backbones in both transfer and adaptive attack settings. Taking Cora-ML as an instance, under increasing attack budgets \(\{25\%,50\%,100\%\}\): (1) BBRW-GCN outperforms GCN by \(\{23.5\%,45.5\%,73\%\}\) (transfer attack) and \(\{23\%,44.5\%,63\%\}\) (adaptive attack); (2) BBRW-APPNP outperforms APPNP by \(\{7.5\%,18.5\%,39.5\%\}\) (transfer attack) and \(\{7\%,15\%,23\%\}\) (adaptive attack); (3) BBRW-GCN-Soft-Median outperforms GCN-Soft-Median by \(\{5.5\%,14.5\%,38.5\%\}\) (transfer attack) and \(\{9\%,15\%,37\%\}\) (adaptive attack). The improvements are stronger under larger attack budgets.
* The proposed BBRW not only significantly outperforms existing directed GNNs such as DGCN, DiGCN, and MagNet in terms of robustness but also exhibits consistently better clean accuracy. BBRW also overwhelmingly outperforms existing robust GNNs under attacks. Compared with undirected GNN backbones such as GCN, APPNP, and GCN-Soft-Median, BBRW maintains the same or comparable clean accuracy.
### Ablation Study
In this section, we conduct further ablation studies on the attacking patterns, hyperparameter setting, and adversary capacity in RDGA to understand the working mechanisms of the proposed BBRW.
**Attacking patterns.** In Table 3, we observe that BBRW-GCN-Soft-Median overwhelmingly outperforms all baselines in terms of robustness. To investigate the reason, we show the adversarial attack patterns of transfer and adaptive attacks on BBRW-GCN-Soft-Median (\(\beta=0.7\)) in Figure 3. In the transfer attack, the adversary spends 96.32% of its budget on in-link attacks on the
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Clean (total)} & 0\% & \multicolumn{2}{c}{25\%} & \multicolumn{2}{c}{50\%} & \multicolumn{2}{c}{100\%} \\
 & & Target & Transfer & Adaptive & Transfer & Adaptive & Transfer & Adaptive \\ \hline
MLP & 64.6\(\pm\)2.2 & 73.5\(\pm\)7.4 & 73.5\(\pm\)7.4 & 73.5\(\pm\)7.4 & 73.5\(\pm\)7.4 & 73.5\(\pm\)7.4 & 73.5\(\pm\)7.4 & 73.5\(\pm\)7.4 \\ \hline
DGCN & 75.0\(\pm\)3.1 & 89.5\(\pm\)7.6 & 76.5\(\pm\)13.0 & \(\backslash\) & 54.5\(\pm\)7.9 & \(\backslash\) & 38.0\(\pm\)14.2 & \(\backslash\) \\
DiGCN & 75.5\(\pm\)2.2 & 85.0\(\pm\)7.4 & 50.0\(\pm\)6.7 & \(\backslash\) & 40.5\(\pm\)9.1 & \(\backslash\) & 29.0\(\pm\)6.2 & \(\backslash\) \\
Directed-MagNet & 57.1\(\pm\)5.2 & 69.5\(\pm\)10.4 & 65.0\(\pm\)9.7 & \(\backslash\) & 59.5\(\pm\)10.6 & \(\backslash\) & 54.0\(\pm\)7.0 & \(\backslash\) \\
Undirected-MagNet & 79.6\(\pm\)2.1 & 88.5\(\pm\)3.2 & 70.5\(\pm\)10.6 & \(\backslash\) & 55.5\(\pm\)6.9 & \(\backslash\) & 35.5\(\pm\)6.1 & \(\backslash\) \\ \hline
Jaccard-GCN & 81.0\(\pm\)6.6 & 90.5\(\pm\)6.5 & 69.5\(\pm\)7.9 & 65.5\(\pm\)7.9 & 44.0\(\pm\)6.2 & 34.0\(\pm\)7.0 & 21.0\(\pm\)7.0 & 8.0\(\pm\)4.6 \\
RGCN & 81.4\(\pm\)1.5 & 88.0\(\pm\)6.0 & 72.5\(\pm\)8.4 & 66.0\(\pm\)7.7 & 44.0\(\pm\)8.9 & 36.0\(\pm\)5.4 & 17.5\(\pm\)8.7 & 7.0\(\pm\)4.6 \\
GRAND & 81.2\(\pm\)0.9 & 85.5\(\pm\)6.1 & 74.0\(\pm\)7.0 & 65.0\(\pm\)7.4 & 64.0\(\pm\)9.2 & 51.0\(\pm\)8.6 & 45.0\(\pm\)7.1 & 24.0\(\pm\)7.7 \\
ElasticGNN & 79.0\(\pm\)7.0 & 89.0\(\pm\)6.2 & 86.0\(\pm\)5.4 & \(\backslash\) & 74.0\(\pm\)5.8 & \(\backslash\) & 50.0\(\pm\)9.7 & \(\backslash\) \\ \hline
GCN & 81.8\(\pm\)1.5 & 89.5\(\pm\)6.1 & 66.0\(\pm\)9.7 & 66.0\(\pm\)9.7 & 30.5\(\pm\)8.5 & 40.5\(\pm\)8.5 & 40.6\(\pm\)4.7 & 12.0\(\pm\)6.4 \\
BBRW-GCN & 80.5\(\pm\)1.3 & 90.0\(\pm\)5.5 & 89.5\(\pm\)6.1 & 80.0\(\pm\)6.2 & 86.0\(\pm\)5.4 & 85.0\(\pm\)6.3 & 85.0\(\pm\)7.1 & 72.0\(\pm\)10.2 \\ \hline
APPNP & **82.5\(\pm\)1.6** & 90.5\(\pm\)4.7 & 81.5\(\pm\)9.5 & 80.5\(\pm\)10.4 & 66.5\(\pm\)8.7 & 68.0\(\pm\)12.1 & 44.0\(\pm\)9.2 & 46.0\(\pm\)7.3 \\
BBRW-APPNP & **82.5\(\pm\)1.2** & 91.0\(\pm\)4.9 & 89.0\(\pm\)4.5 & 87.5\(\pm\)5.6 & 85.0\(\pm\)7.1 & 83.0\(\pm\)6.4 & 83.5\(\pm\)6.3 & 69.0\(\pm\)9.7 \\ \hline
GCN-Soft-Median & 81.6\(\pm\)1.3 & 91.5\(\pm\)5.5 & 86.0\(\pm\)7.0 & 83.0\(\pm\)7.1 & 75.0\(\pm\)8.4 & 73.0\(\pm\)7.1 & 48.5\(\pm\)11.4 & 47.5\(\pm\)9.3 \\
BBRW-GCN-Soft-Median & 82.4\(\pm\)1.3 & **92.0\(\pm\)4.6** & **91.5\(\pm\)5.0** & **92.0\(\pm\)4.6** & **89.5\(\pm\)6.9** & **88.0\(\pm\)5.1** & **87.0\(\pm\)8.4** & **84.5\(\pm\)8.8** \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Classification accuracy (%) under different perturbation rates of graph attack. The best results are in **bold**, and the second-best results are underlined. (Cora-ML)
target nodes directly, which has only a minor effect on BBRW-GCN-Soft-Median because it places greater trust in out-links. In the adaptive attack, the adversary is aware of the biased trust of BBRW and realizes that in-link attacks alone are not sufficient. Therefore, besides direct in-link attacks, it allocates 14.01% and 14.40% of its budget to indirect out-link attacks on targets' neighbors and to other attacks, respectively. Even though the adversary optimally adjusts its attack strategy, BBRW-GCN-Soft-Median still achieves 87% and 84.5% robust accuracy under 50% and 100% total attack budgets. This verifies BBRW's strong capability to defend against attacks.
**Hyperparameter in BBRW.** BBRW is a simple and efficient approach. The only hyperparameter is the bias weight \(\beta\) that provides the flexibility to differentiate and adjust the trust between out-links and in-links. We study the effect of \(\beta\) by varying \(\beta\) from 0 to 1 with an interval of 0.1 using BBRW-GCN. The accuracy under different attack budgets on Cora-ML is summarized in Figure 4. The accuracy on Citeseer is shown in Figure 5 in Appendix A.2. We can make the following observations:
* In terms of clean accuracy (0% attack budget), BBRW-GCN with \(\beta\) ranging from 0.2 to 0.8 exhibits stable performance, while the special cases GCN-RW\({}_{\text{in}}\) (\(\beta=0\)) and GCN-RW\({}_{\text{out}}\) (\(\beta=1\)) perform worse. This suggests that both in-links and out-links provide useful graph information that benefits clean performance, consistent with the conclusion in Section 3.1.
* Under transfer attacks, BBRW-GCN becomes more robust as \(\beta\) grows, demonstrating that a larger \(\beta\) indeed reduces the trust in, and the impact of, in-links on target nodes.
* Under adaptive attacks, BBRW-GCN becomes more robust as \(\beta\) grows, but as it approaches \(\beta=1\) (RW\({}_{\text{out}}\)), it suffers catastrophic failures due to indirect out-link attacks on targets' neighbors, which is consistent with the discovery in Section 3.1. This also indicates the false sense of robustness obtained when evaluating only under transfer attacks.
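The bias-weighted trust mechanism discussed above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it assumes BBRW mixes a random walk over out-links and a random walk over in-links with the bias weight \(\beta\), and the normalization details are assumptions.

```python
import numpy as np

def bbrw_propagation(A, beta=0.7, eps=1e-12):
    """Bias-weighted bidirectional random-walk propagation (sketch).

    A[i, j] = 1 encodes a directed link i -> j. Row i of rw_out is a
    uniform walk over i's out-neighbors; row i of rw_in is a uniform
    walk over i's in-neighbors. beta > 0.5 trusts out-links more, which
    weakens an adversary that can only inject in-links to a target.
    The exact normalization is an assumption, not the paper's form.
    """
    A = np.asarray(A, dtype=float)
    rw_out = A / np.maximum(A.sum(axis=1, keepdims=True), eps)
    At = A.T
    rw_in = At / np.maximum(At.sum(axis=1, keepdims=True), eps)
    return beta * rw_out + (1.0 - beta) * rw_in

# Directed 3-cycle: 0 -> 1 -> 2 -> 0.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
P = bbrw_propagation(A, beta=0.7)
# A message-passing layer would then compute H_next = P @ H @ W.
```

With \(\beta=0.7\), node 0 weights its out-neighbor (node 1) by 0.7 and its in-neighbor (node 2) by 0.3, so edges injected toward a node carry less influence than edges the node itself emits.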
**Adversary capacity in RDGA.** One of the major reasons BBRW achieves extraordinary robustness is that it differentiates the roles and trust of in-links and out-links. In RDGA, we assume that the adversary cannot manipulate the out-links of target nodes, i.e., all of the targets' out-links are masked (masking rate = 100%). This reflects practical constraints in real-world applications, as explained in Section 1 and Section 2. In reality, however, it is useful to consider more dangerous cases in which the adversary can manipulate some proportion of the targets' out-links. Therefore, we also provide an ablation study on the general RDGA setting by varying the masking rate of the targets' out-links
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Clean (total)} & 0\% & \multicolumn{2}{c}{25\%} & \multicolumn{2}{c}{50\%} & \multicolumn{2}{c}{100\%} \\
 & & Target & Transfer & Adaptive & Transfer & Adaptive & Transfer & Adaptive \\ \hline
MLP & 55.4\(\pm\)2.2 & 49.0\(\pm\)4.4 & 49.0\(\pm\)4.4 & 49.0\(\pm\)4.4 & 49.0\(\pm\)4.4 & 49.0\(\pm\)4.4 & 49.0\(\pm\)4.4 & 49.0\(\pm\)4.4 \\ \hline
DGCN & 62.5\(\pm\)2.3 & 64.0\(\pm\)7.0 & 54.0\(\pm\)8.3 & \(\backslash\) & 34.5\(\pm\)10.6 & \(\backslash\) & 27.0\(\pm\)10.1 & \(\backslash\) \\
DiGCN & 60.7\(\pm\)2.4 & 66.0\(\pm\)8.6 & 41.5\(\pm\)10.5 & \(\backslash\) & 29.5\(\pm\)8.2 & \(\backslash\) & 21.5\(\pm\)5.9 & \(\backslash\) \\
Directed-MagNet & 45.3\(\pm\)5.5 & 42.5\(\pm\)4.3 & 42.5\(\pm\)11.5 & \(\backslash\) & 35.0\(\pm\)12.0 & \(\backslash\) & 35.0\(\pm\)7.7 & \(\backslash\) \\
Undirected-MagNet & 66.9\(\pm\)1.6 & 68.0\(\pm\)6.0 & 51.5\(\pm\)11.2 & \(\backslash\) & 29.0\(\pm\)10.2 & \(\backslash\) & 17.0\(\pm\)7.1 & \(\backslash\) \\ \hline
Jaccard-GCN & 66.2\(\pm\)1.4 & 57.0\(\pm\)7.1 & 45.5\(\pm\)4.9 & 38.5\(\pm\)4.5 & 23.0\(\pm\)7.8 & 11.5\(\pm\)5.5 & 20.0\(\pm\)10.2 & 6.5\(\pm\)5.0 \\
RGCN & 64.2\(\pm\)2.0 & 61.5\(\pm\)4.7 & 34.5\(\pm\)4.1 & 34.0\(\pm\)10.2 & 9.5\(\pm\)2.4 & 7.0\(\pm\)5.6 & 6.5\(\pm\)5.4 & 4.5\(\pm\)3.5 \\
GRAND & 68.1\(\pm\)1.2 & 67.5\(\pm\)6.0 & 56.5\(\pm\)3.6 & 56.0\(\pm\)8.9 & 43.0\(\pm\)5.1 & 42.5\(\pm\)9.0 & 37.5\(\pm\)8.1 & 27.5\(\pm\)6.8 \\
ElasticGNN & 60.0\(\pm\)2.6 & 59.0\(\pm\)8.6 & 54.0\(\pm\)6.6 & \(\backslash\) & 27.5\(\pm\)6.8 & \(\backslash\) & 13.5\(\pm\)9.0 & \(\backslash\) \\ \hline
GCN & 66.2\(\pm\)1.4 & 59.0\(\pm\)5.4 & 36.5\(\pm\)4.5 & 36.5\(\pm\)4.5 & 10.5\(\pm\)5.7 & 10.5\(\pm\)5.7 & 4.5\(\pm\)4.2 & 4.5\(\pm\)4.2 \\
BBRW-GCN & 65.3\(\pm\)1.4 & 61.5\(\pm\)4.4 & 50.0\(\pm\)7.7 & 43.0\(\pm\)10.3 & 31.5\(\pm\)4.3 & 27.0\(\pm\)14.4 & 26.0\(\pm\)8.0 & 20.5\(\pm\)9.6 \\ \hline
APPNP & **68.5\(\pm\)1.4** & **72.0\(\pm\)6.0** & 53.5\(\pm\)4.5 & 51.0\(\pm\)6.2 & 16.0\(\pm\)10.7 & 13.5\(\pm\)9.8 & 9.0\(\pm\)4.4 & 8.5\(\pm\)9.0 \\
BBRW-APPNP & 68.3\(\pm\)1.8 & 69.0\(\pm\)4.4 & **66.0\(\pm\)8.3** & **59.0\(\pm\)9.7** & **55.0\(\pm\)8.1** & 26.5\(\pm\)8.4 & 43.5\(\pm\)6.3 & 14.5\(\pm\)6.1 \\ \hline
GCN-Soft-Median & 66.6\(\pm\)1.7 & 61.5\(\pm\)5.9 & 56.0\(\pm\)8.3 & 56.0\(\pm\)8.3 & 34.5\(\pm\)10.8 & 35.0\(\pm\)10.7 & 26.5\(\pm\)9.8 & 26.0\(\pm\)9.0 \\
BBRW-GCN-Soft-Median & 65.7\(\pm\)2.0 & 59.5\(\pm\)4.2 & 58.5\(\pm\)4.8 & 55.5\(\pm\)4.8 & 53.0\(\pm\)7.5 & 48.0\(\pm\)7.0 & **49.0\(\pm\)2.7** & 48.0\(\pm\)8.1 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Classification accuracy (%) under different perturbation rates of graph attack. The best results are in **bold**, and the second-best results are underlined. (Citeseer)
Figure 3: Distributions of adversarial links.
from 50% to 100%. The total attack budget, including in-links and out-links, is set to 50% of the degree of the target node. The results in Table 5 offer the following observations: (1) the robustness of undirected backbone GNNs is not affected by constraints on out-link attacks of the target node, as they cannot differentiate out-links from in-links; (2) BBRW significantly enhances the robustness of backbone models (e.g., GCN-Soft-Median) under varying masking rates, and the improvements are stronger when out-links are better protected (higher masking rate).
## 5 Related Work
The existing research on attacks and defenses of GNNs focuses on undirected GNNs that convert graphs into undirected ones [19; 20; 6; 18; 30; 34; 31; 35; 36; 32]. Therefore, these works cannot fully leverage the rich directed link information in directed graphs. A recent study [8] categorized 49 defenses published at major conferences/journals and evaluated 7 of them, covering the spectrum of defense techniques, under adaptive attacks. Their systematic evaluations show that while some defenses are effective, their robustness under stronger adaptive attacks is much lower than claimed in the original papers. This not only reveals the pitfall of a false sense of robustness but also calls for new effective solutions. Our work differs from existing works by studying robust GNNs in the context of directed graphs, which provides unprecedented opportunities for improvements orthogonal to existing efforts.
There exist multiple directed GNNs specifically designed for directed graphs although robustness is not considered. The work [37] proposed a spectral-based GCN for directed graphs by constructing a directed Laplacian matrix using the random walk matrix and its stationary distribution. DGCN [23] extended spectral-based graph convolution to directed graphs by utilizing first-order and second-order proximity, which can retain the connection properties of the directed graph and expand the receptive field of the convolution operation. MotifNet [38] used convolution-like anisotropic graph filters based on local sub-graph structures called motifs. DiGCN [24] proposed a directed Laplacian matrix with a PageRank matrix rather than the random-walk matrix. MagNet [25] utilized a complex Hermitian matrix called the magnetic Laplacian to encode undirected geometric structures in the magnitudes and directional information in the phases. The BBRW proposed in this work is a general framework that can equip various GNNs with the superior capability to handle directed graphs more effectively.
## 6 Conclusion
This work conducts the first investigation into the robustness and trustworthiness of GNNs in the context of directed graphs. To achieve this objective, we introduce a new and realistic graph attack setting for directed graphs. Additionally, we propose an innovative and universal message-passing approach as a plug-in layer to significantly enhance the robustness of various GNN backbones, tremendously surpassing the performance of existing methods. Although the primary focus of this
Figure 4: Ablation study on \(\beta\) (Cora-ML). Colors denote the accuracy under different attack budgets.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Model \(\backslash\) Masking Rate & 50\% & 60\% & 70\% & 80\% & 90\% & 100\% \\ \hline GCN-Soft-Median & 73.0\(\pm\)7.1 & 73.0\(\pm\)7.1 & 73.0\(\pm\)7.1 & 73.0\(\pm\)7.1 & 73.0\(\pm\)7.1 & 73.0\(\pm\)7.1 \\ BBRW-GCN-Soft-Median & 86.5\(\pm\)5.9 & 87.0\(\pm\)5.1 & 87.5\(\pm\)5.6 & 87.5\(\pm\)5.6 & 87.5\(\pm\)4.6 & 89.0\(\pm\)4.9 \\ Best \(\beta\) & 0.7 & 0.7 & 0.7 & 0.7 & 0.7 & 0.8 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study on masking rates of target nodes’ out-links under adaptive attack (Cora-ML).
study is targeted evasion attacks, the findings suggest a promising direction for future research: enhancing the robustness of GNNs against adversarial attacks by leveraging the inherent network structure in directed graphs. Moving forward, further exploration of this potential will encompass other attack settings, such as poisoning attacks and global attacks, in this crucial research area. |
2307.01622 | Renewable energy management in smart home environment via forecast
embedded scheduling based on Recurrent Trend Predictive Neural Network | Smart home energy management systems help the distribution grid operate more
efficiently and reliably, and enable effective penetration of distributed
renewable energy sources. These systems rely on robust forecasting,
optimization, and control/scheduling algorithms that can handle the uncertain
nature of demand and renewable generation. This paper proposes an advanced ML
algorithm, called Recurrent Trend Predictive Neural Network based Forecast
Embedded Scheduling (rTPNN-FES), to provide efficient residential demand
control. rTPNN-FES is a novel neural network architecture that simultaneously
forecasts renewable energy generation and schedules household appliances. By
its embedded structure, rTPNN-FES eliminates the utilization of separate
algorithms for forecasting and scheduling and generates a schedule that is
robust against forecasting errors. This paper also evaluates the performance of
the proposed algorithm for an IoT-enabled smart home. The evaluation results
reveal that rTPNN-FES provides near-optimal scheduling $37.5$ times faster than
the optimization while outperforming state-of-the-art forecasting techniques. | Mert Nakıp, Onur Çopur, Emrah Biyik, Cüneyt Güzeliş | 2023-07-04T10:18:16Z | http://arxiv.org/abs/2307.01622v2 | Renewable Energy Management in Smart Home Environment via Forecast Embedded Scheduling based on Recurrent Trend Predictive Neural Network
###### Abstract
Smart home energy management systems help the distribution grid operate more efficiently and reliably, and enable effective penetration of distributed renewable energy sources. These systems rely on robust forecasting, optimization, and control/scheduling algorithms that can handle the uncertain nature of demand and renewable generation. This paper proposes an advanced ML algorithm, called Recurrent Trend Predictive Neural Network based Forecast Embedded Scheduling (rTPNN-FES), to provide efficient residential demand control. rTPNN-FES is a novel neural network architecture that simultaneously forecasts renewable energy generation and schedules household appliances. By its embedded structure, rTPNN-FES eliminates the utilization of separate algorithms for forecasting and scheduling and generates a schedule that is robust against forecasting errors. This paper also evaluates the performance of the proposed algorithm for an IoT-enabled smart home. The evaluation results reveal that rTPNN-FES provides near-optimal scheduling 37.5 times faster than the optimization while outperforming state-of-the-art forecasting techniques.
keywords: energy management, forecasting, scheduling, neural networks, recurrent trend predictive neural network +
Footnote †: journal: Applied Energy
## 1 Introduction
Residential loads account for a significant portion of the demand on the power system. Therefore, intelligent control and scheduling of these loads enable a more flexible, robust, and economical power system operation. Moreover, the distributed nature of the local residential load controllers increases system scalability. On the distribution level, the smart grid benefits from the increased adoption of residential demand and generation control systems, because they improve system flexibility, help to achieve a better demand-supply balance, and enable increased penetration of renewable energy sources. Increasing flexibility of the building energy demand depends on multiple developments, including accurate forecasting and effective scheduling of the loads, incorporation of renewable energy sources such as solar and wind power, and integration of suitable energy storage technologies (e.g. batteries and/or electric vehicle charging) into the building energy management system. Advanced control, optimization and forecasting approaches are necessary to operate these complex systems seamlessly.
In this paper, in order to address this problem, we propose a novel embedded neural network architecture, called Recurrent Trend Predictive Neural Network based Forecast Embedded Scheduling (rTPNN-FES), which simultaneously forecasts the renewable energy generation and schedules the household appliances (loads). rTPNN-FES is a unique neural network architecture that enables both accurate forecasting and heuristic scheduling in a single neural network. This architecture is comprised of two main layers: 1) the Forecasting Layer which consists of replicated Recurrent Trend Predictive Neural Networks (rTPNN) with weight-sharing properties, and 2) the Scheduling Layer which contains parallel softmax layers with customized inputs each of which is assigned to a single load. In this paper, we also develop a 2-Stage Training algorithm that trains rTPNN-FES to learn the optimal scheduling along with the forecasting. However, the proposed rTPNN-FES architecture does not depend on the particular training algorithm, and the main contributions and advantages are provided by the architectural design. Note that the rTPNN model was originally proposed by Nakip et al. [1] for multivariate time series prediction, and its superior performance compared to other ML models was demonstrated when making predictions based on multiple time series features in the case of multi-sensor fire detection. On the other hand, rTPNN has not yet been used in an energy management system and for forecasting renewable energy generation.
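A shape-level sketch of this two-layer design may help fix ideas. Everything below is illustrative only: the lag window, the random weights, and the per-slot forecaster are assumptions standing in for the paper's replicated rTPNN cells and customized softmax inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
S, N = 24, 3      # slots in the scheduling window, number of appliances

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Forecasting layer: one shared (weight-sharing) toy forecaster applied at
# every slot, standing in for the paper's replicated rTPNN cells.
W_f = rng.normal(size=4)                      # assumed lag window of 4
history = rng.uniform(0.0, 2.0, size=S + 4)   # past generation readings
g_hat = np.array([history[s:s + 4] @ W_f for s in range(S)])

# Scheduling layer: one softmax head per appliance over the S slots; the
# random weights are placeholders for the paper's customized inputs.
W_s = rng.normal(size=(N, S, S))
schedule_probs = np.stack([softmax(W_s[n] @ g_hat) for n in range(N)])
start_slots = schedule_probs.argmax(axis=1)   # most likely start slot per device
```

The key structural point is that each appliance gets its own probability distribution over start slots, conditioned on the same shared generation forecast, so forecasting and scheduling live in one differentiable network.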
Furthermore, the advantages of using rTPNN-FES instead of a separate forecaster and scheduler are threefold:
1. rTPNN-FES learns how to construct a schedule adapted to forecast energy generation by emulating (mimicking) optimal scheduling. Thus, the scheduling via rTPNN-FES is highly robust against forecasting errors.
2. The requirements of rTPNN-FES for the memory space and computation time are significantly lower compared to the combination of a forecaster and an optimal scheduler.
3. rTPNN-FES proposes a considerably high scalability for the systems in which the set of loads varies over time, e.g. adding new devices into a smart home Internet of Things (IoT) network.
We numerically evaluate the performance of the proposed rTPNN-FES architecture against 7 different well-known ML algorithms combined with optimal scheduling. To this end, publicly available datasets [2; 3] are utilized for a smart home environment with 12 distinct appliances. Our results reveal that the proposed rTPNN-FES architecture achieves significantly high forecasting accuracy while generating a close-to-optimal schedule over a period of one year. It also outperforms existing techniques in both forecasting and scheduling tasks.
The remainder of this paper is organized as follows: Section 2 reviews the differences between this paper and the state-of-the-art. Section 3 presents the system set-up and initiates the optimization problem. Section 4 presents the rTPNN-FES architecture and the 2-Stage Training algorithm which is used to learn and emulate the optimal scheduling. Section 5 presents the performance evaluation and comparison. Finally, Section 6 summarizes the main contributions of this paper.
## 2 Related Works
In this section, we present the comparison of this paper with the-state-of-the art works in three categories: 1) The works in the first category develop an optimization-based energy management system without interacting with ML. 2) The works in the second category focus on forecasting renewable energy generation using either statistical or deep learning techniques. 3) The works in the last category develop energy management systems using ML algorithms.
### Optimization-based Energy Management Systems
We first review the recent works which developed optimization-based energy management
systems. In [4], Shareef et al. gave a comprehensive summary of heuristic optimization techniques used for home energy management systems. In [5], Nezhad et al. presented a model predictive controller for a home energy management system with loads, photovoltaic (PV) and battery electric storage. They formulated the MPC as a mixed-integer programming problem and evaluated its economic performance under different energy pricing schemes. In [6], Albogamy et al. utilized Lyapunov-based optimization to regulate HVAC loads in a home with battery energy storage and renewable generation. In [7], S. Ali et al. considered heuristic optimization techniques to develop a demand response scheduler for smart homes with renewable energy sources, energy storage, and electric and thermal loads. In [8], G. Belli et al. resorted to mixed integer linear programming for optimal scheduling of thermal and electrical appliances in homes within a demand response framework. They utilized a cloud service provider to compute and share aggregate data in a distributed fashion. In [9], variants of several heuristic optimization methods (optimal stopping rule, particle swarm optimization, and grey wolf optimization) were applied to the scheduling of home appliances under a virtual power plant framework for the distribution grid. Then, their performance was compared for three types of homes with different demand levels and profiles.
There is a wealth of research on optimization and model predictive controller-based scheduling of residential loads. In this literature, the prediction of load demand and generation (if available) is usually pursued independently from the scheduling algorithm and is merely used as a constraint parameter in the optimization problem. The discrepancy between predicted and observed demand and generation may lead to poor performance and robustness issues. The proposed rTPNN-FES in this paper handles forecasting and scheduling in a unified way and therefore provides robustness in the presence of forecasting errors.
### Forecasting of Renewable Energy Generation
We now briefly review the related works on forecasting renewable energy generation, which have also been reviewed in more detail in the literature, i.e. [10; 11].
The earlier research in this category forecast energy generation using statistical methods. For example, in [12], Kushwaha et al. used the well-known seasonal autoregressive integrated moving average technique to forecast PV generation in 20-minute intervals. In [13], Rogier et al. evaluated the performance of a nonlinear autoregressive neural network for forecasting PV generation data collected through a LoRa-based IoT network. In [14], Fentis et al. used a Feed Forward Neural Network and Least Squares Support Vector Regression with exogenous inputs to perform short-term forecasting of PV generation. In [15], the authors analyzed the performance of the Autoregressive Integrated Moving Average (ARIMA) model and an Artificial Neural Network (ANN) for forecasting PV energy generation. In [16], Atique et al. used ARIMA, with parameter selection based on the Akaike information criterion and the sum of squared estimates, to forecast PV generation. In [17], Erdem and Shi analyzed the performance of autoregressive moving averages for forecasting wind speed and direction using four different approaches, such as decomposing the lateral and longitudinal components of the speed. In [18], Cadenas et al. performed a comparative study between ARIMA and a nonlinear autoregressive exogenous artificial neural network for forecasting wind speed.
The recent trend of research focuses on the development of ML and (neural network-based) deep learning techniques. In [19], Pawar et al. combined an ANN and a Support Vector Regressor (SVR) to predict renewable energy generated via PV. In [20], Corizzo et al. forecast renewable energy using a regression tree with an adapted Tucker tensor decomposition. In [21], the authors forecast PV generation based on historical data of features such as irradiance, temperature, and relative humidity. In [22], Shi et al. proposed a pooling-based deep recurrent neural network technique to prevent overfitting in household load forecasting. In [23], Zheng et al. developed an adaptive neuro-fuzzy system that forecasts the generation of wind turbines in conjunction with forecasts of weather features such as wind speed. In [24], Vandeventer et al. used
a genetic algorithm to select the parameters of an SVM to forecast residential PV generation. In [25], van der Meer et al. performed probabilistic forecasting of solar power using quantile regression and a dynamic Gaussian process. In [26], He and Li combined quantile regression with kernel density estimation to predict wind power density. In [27], Alessandrini et al. used an analogue ensemble method to probabilistically forecast wind power. In [28], Cervone et al. combined an ANN with the analogue ensemble method to forecast PV generation in both deterministic and probabilistic ways. Recently, in [29], Guo et al. proposed a combined load forecasting method for Multi Energy Systems (MES) based on Bi-directional Long Short-Term Memory (BiLSTM). The combined load forecasting framework is trained with a multi-tasking approach for sharing the coupling information among the loads.
Although there is a significantly large number of studies to forecast renewable energy generation and/or other factors related to generation, this paper differs sharply from the existing literature as it proposes an embedded neural network architecture called rTPNN-FES that performs both forecasting and scheduling simultaneously.
### Machine Learning Enabled Energy Management Systems
In this category, we review the recent studies that aim to develop energy management systems enabled by ML, especially for residential buildings.
The first group of works in this category performed scheduling (based on either optimization or heuristics) using forecasts provided by an ML algorithm. In [30], Elkazaz et al. developed a heuristic energy management algorithm for hybrid systems using autoregressive ML for forecasting and optimization for parameter settings. In [31], Zaouali et al. developed an auto-configurable middleware using Long Short-Term Memory (LSTM) based forecasting of renewable energy generated via PV. In [32], Shakir et al. developed a home energy management system using LSTM for forecasting and a Genetic Algorithm for optimization. In [33], Manue et al. used LSTM to forecast the load for battery utilization in a solar-powered smart home system. In [34], the authors developed a hybrid system of renewable and grid-supplied energy via exponentially weighted moving average based forecasting and a heuristic load control algorithm. In [35], Aurangzeh et al. developed an energy management system that uses a convolutional neural network to forecast renewable energy generation. Finally, in [36], in order to distribute the load and decrease costs, Sarker et al. developed a home energy management system based on heuristic scheduling.
The second group of works in this category developed energy management systems based on reinforcement learning. In [37], Ren et al. developed a model-free Dueling-double deep Q-learning neural network for home energy management systems. In [38], Lissa et al. used ANN-based deep reinforcement learning to minimize energy consumption by adjusting the hot water temperature in the PV-enabled home energy management system. In [39], Yu et al. developed an energy management system using a deep deterministic policy gradient algorithm. In [40], Wan et al. used a deep reinforcement learning algorithm to learn the energy management strategy for a residential building. In [41], Mathew et al. developed a reinforcement learning-based energy management system to reduce both the peak load and the electricity cost. In [42], Liu et al. developed a home energy management system using deep and double deep Q-learning techniques for scheduling home appliances. In [43], Lu et al. developed an energy management system with hybrid CNN-LSTM based forecasting and rolling horizon scheduling. In [44], Ji et al. developed a microgrid energy management system using the Markov decision process for modelling and ANN-based deep reinforcement learning for determining actions.
Deep learning-based control systems are also very popular for off-grid scenarios, as off-grid energy management systems are gaining increasing attention to provide sustainable and reliable energy services. In References [45] and [46], the authors developed algorithms based on deep reinforcement learning to deal with the uncertain and stochastic nature of renewable energy sources.
All of these works have used ML techniques, especially deep learning and reinforcement learning, for forecasting and for optimization- or heuristic-based scheduling. However, in contrast with rTPNN-FES proposed in this paper, none of them has used an ANN to generate the schedule itself or combined forecasting and scheduling in a single neural network architecture.
## 3 System Setup and Optimization Problem
In this section, we present the assumptions, mathematical definitions and the optimization problem related to the system setup which is used for embedded forecasting scheduling via rTPNN-FES and shown in Figure 1. During this paper, rTPNN-FES is assumed to perform at the beginning of a scheduling window that consists of equal-length \(S\) slots and has a total duration of \(H\) in actual time (i.e. the horizon length). In addition, the length of each slot \(s\) equals \(H/S\), and the actual time instance at which the slot \(s\) starts is denoted by \(m_{s}\). Then, we let \(g^{m_{s}}\) denote the power generation by the renewable energy source within slot \(s\). Also, \(\hat{g}^{m_{s}}\) denotes the forecast of \(g^{m_{s}}\).
We let \(\mathcal{N}\) be the set of devices that need to be scheduled until \(H\) (in other words until the end of slot \(S\)), and \(N\) denote the total number of devices, i.e. \(|\mathcal{N}|=N\). Each device \(n\in\mathcal{N}\) has a constant power consumption per slot denoted by \(E_{n}\). In addition, \(n\) should be active uninterruptedly for \(a_{n}\) successive slots. That is, when \(n\) is started, it consumes \(a_{n}E_{n}\) until it stops. Moreover, we assume that the considered renewable energy system contains a battery with a capacity of \(B_{max}\), where the stored energy in this battery is used via an inverter with a supply limit of \(\Theta\). We assume that there is enough energy in total (the sum of the stored energy in the battery and total generation) to supply all devices within \([0,H]\).
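The bookkeeping implied by these definitions is straightforward; the following sketch (variable names and values are assumptions for illustration) computes the slot length, the slot start times \(m_s\), and the stated energy-sufficiency assumption:

```python
# Bookkeeping for the scheduling window defined above (names assumed).
H, S = 24.0, 48                        # horizon length and number of equal slots
slot_len = H / S                       # each slot lasts H/S
m = [s * slot_len for s in range(S)]   # m_s: start time of slot s (0-indexed here)

def enough_energy(battery, g_hat, E, a):
    """Feasibility assumption from the text: stored energy plus total
    forecast generation covers the total demand sum_n a_n * E_n over [0, H]."""
    demand = sum(E_n * a_n for E_n, a_n in zip(E, a))
    return battery + sum(g_hat) >= demand
```

For example, with a 24-hour horizon split into 48 slots, each slot lasts half an hour and slot 1 starts at \(t=0.5\) h.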
At the beginning of the scheduling window, we forecast the renewable energy generation and schedule the devices accordingly. To this end, as the main contribution of this paper, we combine the forecaster and scheduler in a single neural network architecture, called rTPNN-FES, which shall be presented in Section 4.
**Optimization Problem:** We now define the optimization problem for the non-preemptive scheduling of the starting slots of devices to minimize _user dissatisfaction_. In other words, this optimization problem aims to distribute the energy consumption over slots while prioritizing user satisfaction, assuming that the operation of each device is uninterruptible. In this article, we consider a completely off-grid system, which utilizes only renewable energy sources, where it is crucial to achieve near-optimal scheduling
Figure 1: The illustration of the system considered by rTPNN-FES
to use limited available resources. Recall that this optimization problem is re-solved at the beginning of each scheduling window for the available set of devices \(\mathcal{N}\) using the forecast generation \(\hat{g}^{m_{s}}\) over the scheduling window in Figure 1.
Moreover, for each \(n\in\mathcal{N}\), there is a predefined cost of user dissatisfaction, denoted by \(c_{(n,s)}\), for scheduling the start of \(n\) at slot \(s\). This cost can take any value in \([0,+\infty)\), and \(c_{(n,s)}\) is set to \(+\infty\) if the user does not want slot \(s\) to be reserved for device \(n\). As we shall explain in more detail in Section 5, we define the user dissatisfaction cost \(c_{(n,s)}\) as an increasing function of the distance between \(s\) and the desired start time of device \(n\). We should note that the particular definition of the user dissatisfaction cost only affects the numerical results, since the proposed rTPNN-FES methodology does not depend on it.
Then, we let \(x_{(n,s)}\) denote a binary schedule for the start of the activity of device \(n\) at slot \(s\). That is, \(x_{(n,s)}=1\) if device \(n\) is scheduled to start at the beginning of slot \(s\), and \(x_{(n,s)}=0\) otherwise. In addition, in our optimization program, we let \(x_{(n,s)}^{*}\) be a binary decision variable and denote the optimal value of \(x_{(n,s)}\). Accordingly, we define the optimization problem as follows:
\[\textit{min}\ \ \sum_{n\in\mathcal{N}}\sum_{s=1}^{S}x_{(n,s)}^{*}c_{(n,s)} \tag{1}\]
subject to
\[\sum_{s=1}^{S-(a_{n}-1)}x_{(n,s)}^{*}=1,\qquad\forall n\in \mathcal{N} \tag{2}\] \[\sum_{n\in\mathcal{N}}\sum_{s^{\prime}=[s-(a_{n}-1)]^{*}}^{s}E_{n }x_{(n,s^{\prime})}^{*}\leq\Theta,\quad\forall s\in\{1,\ldots,S\}\] (3) \[\sum_{n\in\mathcal{N}}\sum_{s^{\prime}=[s-(a_{n}-1)]^{*}}^{s}E_{n }x_{(n,s^{\prime})}^{*}\leq\hat{g}^{m_{s}}+B_{max},\] (4) \[\forall s\in\{1,\ldots,S\}\] \[\sum_{n\in\mathcal{N}}\sum_{s^{\prime}=1}^{s}\sum_{s^{\prime\prime }=[s^{\prime}-(a_{n}-1)]^{*}}^{s^{\prime}}E_{n}x_{(n,s^{\prime\prime})}^{*} \leq B+\sum_{s^{\prime}=1}^{s}\hat{g}^{m_{s^{\prime}}},\] (5) \[\forall s\in\{1,\ldots,S\}\]
where \([\Xi]^{+}=\Xi\) if \(\Xi\geq 1\); otherwise, \([\Xi]^{+}=1\). The objective function (1) minimizes the total user dissatisfaction cost over all devices as (\(\sum_{n\in\mathcal{N}}\sum_{s=1}^{S}x_{(n,s)}^{*}c_{(n,s)}\)). While minimizing user dissatisfaction, the optimization problem also considers the following constraints:
* **Uniqueness and Operation** constraint in (2) ensures that each device \(n\) is scheduled to start exactly at a single slot between 1-st and \([S-(a_{n}-1)]\)-th slot. The upper limit for the starting of the operation of device \(n\) is set to \([S-(a_{n}-1)]\) because \(n\) must operate for successive \(a_{n}\) slots before the end of the last slot \(S\).
* **Inverter Limitation** constraint in (3) limits the total power consumption at each slot \(s\) to the maximum power \(\Theta\) that can be provided by the inverter. Note that the term \(\sum_{s^{\prime}=[s-(a_{n}-1)]^{+}}^{s}x_{(n,s^{\prime})}^{*}\) is a convolution-like sum which equals 1 if device \(n\) is scheduled to be active at slot \(s\) (i.e. \(n\) is scheduled to start between \([s-(a_{n}-1)]^{+}\) and \(s\)).
* **Maximum Storage** constraint in (4) ensures that the scheduled consumption at each slot \(s\) does not exceed the sum of the predicted generation (\(\hat{g}^{m_{s}}\)) at this slot and the maximum energy (\(B_{max}\)) that can be stored in the battery.
* **Total Consumption** constraint in (5) ensures that the scheduled total power consumption until each slot \(s\) is not greater than the summation of the stored energy, \(B\), at the beginning of the scheduling window and the total generation until \(s\). This constraint is used as we are considering a completely off-grid system.
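To make the optimization problem concrete, the following sketch solves (1)-(5) by exhaustive search over all feasible start-slot combinations for a toy instance. All device names and numbers here are illustrative, not from the paper's experiments; a real deployment would use an ILP solver instead of brute force.

```python
from itertools import product

def consumption(starts, E, a, S):
    """Per-slot consumption implied by the start slots (0-indexed here,
    whereas the paper indexes slots from 1)."""
    load = [0.0] * S
    for n, s0 in enumerate(starts):
        for s in range(s0, s0 + a[n]):
            load[s] += E[n]
    return load

def solve_schedule(c, E, a, g_hat, B, B_max, theta):
    """Exhaustively solve (1)-(5) for a tiny instance; returns (cost, starts)."""
    N, S = len(E), len(g_hat)
    best = (float("inf"), None)
    # Constraint (2): each device starts exactly once, no later than S - a_n
    domains = [range(S - a[n] + 1) for n in range(N)]
    for starts in product(*domains):
        load = consumption(starts, E, a, S)
        if any(l > theta for l in load):                       # (3) inverter limit
            continue
        if any(load[s] > g_hat[s] + B_max for s in range(S)):  # (4) max storage
            continue
        cum_load = cum_gen = 0.0
        feasible = True
        for s in range(S):                                     # (5) total consumption
            cum_load += load[s]
            cum_gen += g_hat[s]
            if cum_load > B + cum_gen:
                feasible = False
                break
        if not feasible:
            continue
        cost = sum(c[n][starts[n]] for n in range(N))          # objective (1)
        if cost < best[0]:
            best = (cost, starts)
    return best

# Toy instance: 2 devices, 4 slots (all numbers are illustrative)
c = [[0.1, 0.5, 0.9, 1.0], [1.0, 0.2, 0.1, 0.8]]
E, a = [2.0, 1.0], [2, 1]
cost, starts = solve_schedule(c, E, a, g_hat=[3, 3, 2, 2], B=1.0, B_max=5.0, theta=3.0)
print(cost, starts)
```

Enumerating start slots (rather than all binary matrices) keeps constraint (2) satisfied by construction; the remaining constraints are checked per candidate.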
## 4 Recurrent Trend Predictive Neural Network based Forecast Embedded Scheduling (rTPNN-FES)
In this section, we present our rTPNN-FES neural network architecture. Figure 2 displays the architectural design of rTPNN-FES which aims to generate scheduling for the considered window while forecasting the power generation through this window automatically and simultaneously. To this
end, rTPNN-FES is comprised of two main layers of "Forecasting Layer" and "Scheduling Layer", and it is trained using the "2-Stage Training Procedure".
We let \(\mathcal{F}\equiv\{1,\ldots,F\}\) be the set of features. In addition, \(z_{f}^{m_{s}}\) denotes the value of input feature \(f\) in slot \(s\) (which starts at \(m_{s}\)); a feature can be any external data, such as weather predictions, that is directly or indirectly related to the power generation \(g^{m_{s}}\). We also let \(\tau_{f}\) be a duration of time over which the system developer has observed that feature \(f\) exhibits periodicity; \(\tau_{0}\) represents the periodicity duration for \(g^{m_{s}}\). Note that we do not assume that the features are periodic: if no periodicity is observed, \(\tau_{f}\) can be set to \(H\).
As shown in Figure 2, the inputs of rTPNN-FES are \(\{g^{m_{s}-2\tau_{0}},g^{m_{s}-\tau_{0}}\}\) and \(\{z_{f}^{m_{s}-2\tau_{f}},z_{f}^{m_{s}-\tau_{f}}\}\) for \(f\in\mathcal{F}\), and the output of that is \(\{x_{n,s}\}_{n\in\{1,\ldots,N\}}\).
Figure 2: Recurrent Trend Predictive Neural Network based Forecast Embedded Scheduling (rTPNN-FES)
### Forecasting Layer
The Forecasting Layer is responsible for forecasting the power generation within the architecture of rTPNN-FES. For each slot \(s\) in the scheduling window, rTPNN-FES forecasts the renewable energy generation \(\hat{g}^{m_{s}}\) based on the collection of the past feature values for two periods, \(\{z_{f}^{m_{s}-2\tau_{f}},z_{f}^{m_{s}-\tau_{f}}\}_{f\in\mathcal{F}}\), as well as the past generation for two periods, \(\{g^{m_{s}-2\tau_{0}},g^{m_{s}-\tau_{0}}\}\). To this end, this layer consists of \(S\) parallel rTPNN models that share the same parameter set (connection weights and biases). That is, this layer contains \(S\) replicas of a single trained rTPNN; in other words, a single rTPNN is used with different inputs to forecast the power generation for each slot \(s\). Therefore, all but one of the Trained rTPNN blocks are shown as transparent in Figure 2.
The weight sharing among rTPNN models (i.e. using replicated rTPNNs) has the following advantages:
* The number of parameters in the Forecasting Layer decreases by a factor of \(S\); thus reducing both time and space complexity.
* By avoiding rTPNN training repeated \(S\) times, the training time is also reduced by a factor of \(S\).
* Because a single rTPNN is trained on the data collected over \(S\) different slots, the rTPNN can now capture recurrent trends and relationships with higher generalization ability.
#### 4.1.1 Structure of rTPNN
We now briefly explain the structure of rTPNN, which has been originally proposed in [1], for our rTPNN-FES neural network architecture. As shown in Figure 3 displaying the structure of rTPNN, for any \(s\), the inputs of rTPNN are \(\{g^{m_{s}-2\tau_{0}},g^{m_{s}-\tau_{0}}\}\) and \(\{z_{f}^{m_{s}-2\tau_{f}},z_{f}^{m_{s}-\tau_{f}}\}\) for \(f\in\mathcal{F}\), and the output is \(\hat{g}^{m_{s}}\). In addition, the rTPNN architecture consists of \((F+1)\) Data Processing (DP) units and \(L\) fully connected layers, including the output layer.
#### 4.1.2 DP units
In the architecture of rTPNN, there is one DP unit for the past values of energy generation, denoted by DP\({}_{0}\), and one for each time-series feature \(f\), denoted by DP\({}_{f}\). That is, DP\({}_{f}\) has the same structure for every \(f\) (including \(f=0\)), but its input differs for each \(f\). The input of DP\({}_{f}\) is \(\{z_{f}^{m_{s}-2\tau_{f}},z_{f}^{m_{s}-\tau_{f}}\}\) for any time-series feature \(f\in\{1,\ldots,F\}\), while the input of DP\({}_{0}\) is the past values of energy generation \(\{g^{m_{s}-2\tau_{0}},g^{m_{s}-\tau_{0}}\}\). Thus, DP\({}_{0}\) is the only unit with a special input.
During the explanation of the DP unit, we focus on a particular instance DP\({}_{f}\), which is also shown in detail in Figure 3. Using \(\{z_{f}^{m_{s}-2\tau_{f}},z_{f}^{m_{s}-\tau_{f}}\}\) input pair, DP\({}_{f}\) aims to learn the relationship between this pair and each of the predicted trend \(t_{f}^{s}\) and the predicted level \(l_{f}^{s}\). To this end, DP\({}_{f}\) consists of Trend Predictor and Level Predictor sub-units each of which is a linear recurrent neuron.
As shown in Figure 3, Trend Predictor of DP\({}_{f}\) computes the weighted sum of the change in the value of feature \(f\) from \(m_{s}-2\tau_{f}\) to \(m_{s}-\tau_{f}\) and the previous value of the predicted trend. That is, DP\({}_{f}\) calculates the sum of the difference between \((z_{f}^{m_{s}-\tau_{f}}-z_{f}^{m_{s}-2\tau_{f}})\) with connection weight of \(\alpha_{f}^{1}\) and the previous value of the predicted trend \(t_{f}^{s-1}\) with the connection weight of \(\alpha_{f}^{2}\) as
\[t_{f}^{s}=\alpha_{f}^{1}\,(z_{f}^{m_{s}-\tau_{f}}-z_{f}^{m_{s}-2\tau_{f}})+ \alpha_{f}^{2}\,t_{f}^{s-1} \tag{6}\]
By calculating the trend of a feature and learning the parameters in (6), rTPNN is able to capture behavioural changes over time, particularly those related to the forecasting of \(\hat{g}^{m_{s}}\).
Level Predictor sub-unit of DP\({}_{f}\) predicts the level of feature value, which is the smoothed version of the value of feature \(f\), using only \(z_{f}^{m_{s}-\tau_{f}}\) and the previous state of the predicted level \(l_{f}^{s-1}\). To this end, it computes the sum of the \(z_{f}^{m_{s}-\tau_{f}}\) and \(l_{f}^{s-1}\) with weights of \(\beta_{f}^{1}\) and \(\beta_{f}^{2}\) respectively as
\[l_{f}^{s}=\beta_{f}^{1}\,z_{f}^{m_{s}-\tau_{f}}+\beta_{f}^{2}\,l_{f}^{s-1} \tag{7}\]
By predicting the level, we reduce the effect of anomalous instantaneous changes in the measurements of feature \(f\) on the forecast.
Note that parameters \(\alpha_{f}^{1}\), \(\alpha_{f}^{2}\), \(\beta_{f}^{1}\) and \(\beta_{f}^{2}\) of Trend Predictor and Level Predictor sub-units are learned
during the rTPNN training like all other parameters (i.e. connection weights).
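As a minimal sketch, the Trend Predictor recurrence (6) and Level Predictor recurrence (7) of a single DP unit can be written as a loop over the scheduling window. The weights and input values below are hypothetical placeholders, not trained parameters:

```python
def dp_unit(z_hist, alpha, beta):
    """Trend (6) and level (7) recurrences of one DP unit over the window.
    z_hist[s] = (z^{m_s - 2*tau_f}, z^{m_s - tau_f}); alpha and beta are
    (w1, w2) weight pairs; recurrent states start at zero."""
    t, l = 0.0, 0.0
    trends, levels = [], []
    for z_2tau, z_tau in z_hist:
        t = alpha[0] * (z_tau - z_2tau) + alpha[1] * t   # eq. (6): predicted trend
        l = beta[0] * z_tau + beta[1] * l                # eq. (7): predicted level
        trends.append(t)
        levels.append(l)
    return trends, levels

# Two slots of illustrative history for one feature
trends, levels = dp_unit([(1.0, 2.0), (2.0, 4.0)], alpha=(0.5, 0.1), beta=(0.8, 0.2))
print(trends, levels)
```

The trend state reacts to changes between the two lagged observations, while the level state smooths the most recent observation, matching the roles described above.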
#### 4.1.3 Feed-forward of rTPNN
We now describe the calculations performed during the execution of the rTPNN; that is, when making a prediction via rTPNN. To this end, first, let \(\mathbf{W}_{l}\) denote the connection weight matrix for the inputs of hidden layer \(l\), and \(\mathbf{b}_{l}\) denote the vector of biases of \(l\). Thus, for each \(s\), the forward pass of rTPNN is as follows:
1. Trend and Level Predictors of DP\({}_{0}\)-DP\({}_{F}\): \[t_{0}^{s}=\alpha_{0}^{1}(g^{m_{s}-\tau_{0}}-g^{m_{s}-2\tau_{0}})+ \alpha_{0}^{2}t_{0}^{s-1},\] \[t_{f}^{s}=\alpha_{f}^{1}(z_{f}^{m_{s}-\tau_{f}}-z_{f}^{m_{s}-2 \tau_{f}})+\alpha_{f}^{2}t_{f}^{s-1},\quad\forall f\in\mathcal{F}\] (8) \[l_{0}^{s}=\beta_{0}^{1}g^{m_{s}-\tau_{0}}+\beta_{0}^{2}l_{0}^{s-1},\] \[l_{f}^{s}=\beta_{f}^{1}z_{f}^{m_{s}-\tau_{f}}+\beta_{f}^{2}l_{f}^{s-1},\qquad\forall f\in\mathcal{F}\] (9)
2. Concatenation of the outputs of DP\({}_{0}\)-DP\({}_{F}\) to feed to the hidden layers: \[\mathbf{z}^{s}=[t_{0}^{s},l_{0}^{s},g^{m_{s}-\tau_{0}},\ldots,t_{F}^{s},l_{F}^{s},z_{F}^{m_{s}-\tau_{F}}]\] (11)
3. Hidden Layers from \(l=1\) to \(l=L\): \[\mathbf{O}_{1}^{s}=\Psi(\mathbf{W}_{1}(\mathbf{z}^{s})^{T}+ \mathbf{b}_{1}),\] (12) \[\mathbf{O}_{l}^{s}=\Psi(\mathbf{W}_{l}\mathbf{O}_{l-1}^{s}+ \mathbf{b}_{l}),\quad\forall l\in\{2,\ldots,L-1\}\] (13) \[\hat{g}^{m_{s}}=\Psi(\mathbf{W}_{L}\mathbf{O}_{L-1}^{s}+\mathbf{ b}_{L}),\] (14)
where \((\mathbf{z}^{s})^{T}\) is the transpose of the input vector \(\mathbf{z}^{s}\)
Figure 3: The structure of rTPNN used in rTPNN-FES
\(\mathbf{O}_{l}^{s}\) is the output vector of hidden layer \(l\), and \(\Psi(\cdot)\) denotes the activation function as an element-wise operator.
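The feed-forward pass of steps (11)-(14) can be sketched as follows. The weights, layer sizes, and DP-unit outputs below are illustrative placeholders rather than trained values, and sigmoid stands in for the generic activation \(\Psi\):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(W, b, x):
    """One fully connected layer: Psi(W x + b) with element-wise sigmoid."""
    return [sigmoid(sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i)
            for row, b_i in zip(W, b)]

def rtpnn_forward(dp_outputs, layers):
    """dp_outputs: one (t_f, l_f, z_f) triple per DP unit, f = 0..F.
    Concatenates them as in eq. (11), then applies the hidden and output
    layers of eqs. (12)-(14) to produce the scalar forecast g_hat."""
    z = [v for triple in dp_outputs for v in triple]   # concatenation, eq. (11)
    for W, b in layers:                                # eqs. (12)-(14)
        z = dense(W, b, z)
    return z[0]

# Tiny illustration: F = 1 feature (6 concatenated inputs), one hidden layer
dp_out = [(0.5, 1.6, 2.0), (1.05, 3.52, 4.0)]
hidden = ([[0.1] * 6, [-0.1] * 6], [0.0, 0.0])
output = ([[1.0, 1.0]], [0.0])
g_hat = rtpnn_forward(dp_out, [hidden, output])
print(g_hat)
```

Each DP unit contributes three entries to the concatenated vector (trend, level, and the lagged raw observation), so the first hidden layer sees \(3(F+1)\) inputs.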
### Scheduling Layer
The Scheduling Layer consists of \(N\) parallel softmax layers, each responsible for generating a schedule for a single device's start time. A single softmax layer for device \(n\) is shown in Figure 4. Since this layer is cascaded behind the Forecasting Layer, each device \(n\) is scheduled to be started at each slot \(s\) based on the output of the Forecasting Layer \(\hat{g}^{m_{s}}\) as well as the system parameters \(c_{(n,s)}\), \(E_{n}\), \(B\), \(B_{max}\) and \(\Theta\) for this device \(n\) and this slot \(s\).
In Figure 4, each arrow represents a connection weight. Accordingly, for device \(n\) for slot \(s\) in a softmax layer of the Scheduling Layer, a neuron first calculates the weighted sum of the inputs as
\[\alpha_{(n,s)} = w_{(n,s)}^{g}\hat{g}^{m_{s}}+w_{(n,s)}^{B}\frac{B}{S}-w_{(n,s)}^{c}c_{(n,s)}\] \[-w_{(n,s)}^{E}E_{n}-w_{(n,s)}^{\Theta}\Theta-w_{(n,s)}^{B_{max}}B_ {max}\]
where all connection weights \(w_{(n,s)}^{g}\), \(w_{(n,s)}^{B}\), \(w_{(n,s)}^{c}\), \(w_{(n,s)}^{E}\), \(w_{(n,s)}^{\Theta}\), and \(w_{(n,s)}^{B_{max}}\) are _strictly positive_. In addition, the sign of each term is determined by the intuitive effect of the corresponding parameter on the schedule decision for device \(n\) at slot \(s\). For example, a higher \(\hat{g}^{m_{s}}\) makes slot \(s\) a better candidate for scheduling \(n\), while a higher user dissatisfaction cost \(c_{(n,s)}\) makes slot \(s\) a worse candidate. Finally, a softmax activation is applied at the output of this neuron:
\[x_{(n,s)}\ =\ \Phi(\alpha_{(n,s)})\ =\ \frac{e^{\alpha_{(n,s)}}}{\sum_{s^{\prime}=1}^{S}e^{\alpha_{(n,s^{\prime})}}} \tag{15}\]
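A minimal sketch of one softmax layer of the Scheduling Layer for a single device follows. The weight values and system parameters are hypothetical (only their strict positivity is taken from the text):

```python
import math

def softmax(scores):
    """Eq. (15): normalize per-slot scores into a scheduling distribution."""
    exps = [math.exp(v) for v in scores]
    total = sum(exps)
    return [e / total for e in exps]

def slot_scores(g_hat, c_n, E_n, B, B_max, theta, w):
    """Weighted sum alpha_{(n,s)} of the neuron inputs for device n at every
    slot, with the sign convention described above (all weights positive)."""
    S = len(g_hat)
    return [w["g"] * g_hat[s] + w["B"] * B / S - w["c"] * c_n[s]
            - w["E"] * E_n - w["theta"] * theta - w["Bmax"] * B_max
            for s in range(S)]

# Illustrative weights and a 3-slot window
w = dict(g=1.0, B=0.5, c=2.0, E=0.1, theta=0.05, Bmax=0.02)
x_n = softmax(slot_scores([3.0, 1.0, 2.0], [0.9, 0.1, 0.5], 2.0, 1.0, 5.0, 3.0, w))
print(x_n)
```

The output is a probability distribution over slots for device \(n\); the slot combining high forecast generation with low dissatisfaction cost receives the largest mass.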
### 2-Stage Training Procedure
We train our rTPNN-FES architecture to learn the optimal scheduling of devices as well as the forecasting of energy generation in a single neural network. To this end, we first assume that there is a collected dataset comprised of the actual values of \(g^{m_{s}}\) and \(\{z_{f}^{m_{s}}\}_{f\in\mathcal{F}}\) for \(s\in\{1,\ldots,S\}\) for multiple scheduling windows. Note that rTPNN-FES does not depend on the developed 2-stage training procedure, so it can be used with any training algorithm. For each window in this dataset, the 2-stage procedure works as follows:
#### 4.3.1 Stage 1 - Training of rTPNN Separately for Forecasting
In this first stage of training, in order to create a forecaster, the rTPNN model (Figure 3) is trained separately from the rTPNN-FES architecture (Figure 2). To this end, the deviation of \(\hat{g}^{m_{s}}\) from \(g^{m_{s}}\) for \(s\in\{1,\ldots,S\}\), i.e. the forecasting error of rTPNN, is measured via Mean Squared Error as
\[MSE_{\text{forecast}}\equiv\frac{1}{S}\sum_{s=1}^{S}(g^{m_{s}}-\hat{g}^{m_ {s}})^{2} \tag{16}\]
We update the parameters (connection weights and biases) of rTPNN via back-propagation with gradient descent, in particular the Adam algorithm, to minimize \(MSE_{\text{forecast}}\), where the initial parameters are set to those found in the previous training. We repeat the parameter updates for as many epochs as needed while avoiding over-fitting to the training samples.
When Stage 1 is completed, the parameters of "Trained rTPNN" in Figure 2 are replaced by the resulting parameters found in this stage. Then, the parameters of Trained rTPNN are frozen to continue further training of rTPNN-FES in Stage 2. That is, the parameters of Trained rTPNN are not updated in Stage 2.
#### 4.3.2 Stage 2 - Training of rTPNN-FES for Scheduling
In Stage 2 of training, in order to create a scheduler emulating optimization, the rTPNN-FES architecture (Figure 2) is trained following the steps shown in Figure 5.
Figure 4: The structure of Scheduling Layer
The steps in Stage 2 shown in Figure 5 are as follows:
1. The optimal schedule, \(\{x_{n,s}^{*}\}_{n\in\{1,\ldots,N\}}^{s\in\{1,\ldots,S\}}\) is computed by solving the optimization problem given in Section 3 in (1)-(5).
2. The feed-forward output of rTPNN-FES, \(\{x_{n,s}\}_{n\in\{1,\ldots,N\}}^{s\in\{1,\ldots,S\}}\), which is the estimation of scheduling, is computed through (6)-(15) using the architecture in Figure 2.
3. The performance of rTPNN-FES for scheduling, i.e. total estimation error of rTPNN-FES, is measured via Categorical Cross-Entropy as \[CCE_{\text{schedule}}\equiv-\sum_{n=1}^{N}\sum_{s=1}^{S}x_{n,s}^{*}\log(x_{n,s})\] (17)
4. The parameters (connection weights and biases) in the "Scheduling Layers" of rTPNN-FES are updated via back-propagation with gradient descent (using the Adam optimization algorithm) to minimize \(CCE_{\text{schedule}}\).
As soon as this training procedure is completed, i.e. during real-time operation, rTPNN-FES generates both forecasts of renewable energy generations, \(\{\hat{g}^{m_{s}}\}_{s\in\{1,\ldots,S\}}\) and a schedule \(\{x_{n,s}\}_{n\in\{1,\ldots,N\}}^{s\in\{1,\ldots,S\}}\) that emulates the optimization.
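As an illustration of the loss in step 3, the categorical cross-entropy (17) between a hypothetical optimal schedule and the softmax outputs of the Scheduling Layer can be computed as:

```python
import math

def cce_schedule(x_star, x):
    """Eq. (17): categorical cross-entropy between the optimal schedule
    x_star[n][s] (one-hot per device) and the rTPNN-FES output x[n][s]."""
    return -sum(x_star[n][s] * math.log(x[n][s])
                for n in range(len(x)) for s in range(len(x[n])))

# Illustrative values: 2 devices, 3 slots; the optimizer assigns device 0 to
# slot 0 and device 1 to slot 2, and the softmax outputs lean the same way.
x_star = [[1, 0, 0], [0, 0, 1]]
x_pred = [[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]]
print(cce_schedule(x_star, x_pred))
```

Because each \(x^{*}\) row is one-hot, the loss reduces to the negative log-probability that the Scheduling Layer assigns to each device's optimal start slot.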
## 5 Results
In this section, we evaluate the performance of our rTPNN-FES. To this end, we first present the considered datasets and hyper-parameter settings. We also perform a brief time-series data analysis to determine the most important features for forecasting PV energy generation. Then, we numerically evaluate the performance of our technique and compare it with existing techniques.
### Methodology of Experiments
#### 5.1.1 Datasets
For the performance evaluation of the proposed rTPNN-FES, we combine two publicly available datasets [2] and [3]. The first dataset [2] consists of hourly solar power generation (kW) of various residential buildings in Konstanz, Germany between 22-05-2015 and 12-03-2017. Within this dataset, we consider only the residential building called "freq_DE_KN_residential1_pv" which corresponds to 15864 samples in total. The second dataset contains weather-related information which is scraped with World Weather Online (WWO) API [3]. This API provides 19 features related to temperature, precipitation, illumination and wind.
#### 5.1.2 Experimental Set-up
Considering the limitations of the available dataset, we perform our experiments on a virtual
Figure 5: The steps in Stage 2 training of rTPNN-FES to learn to schedule
residential building which is, each year, actively used between May and September. It is assumed that there are 12 different smart home appliances in the active months. These appliances are shown in Table 1, where each appliance should operate at least once a day. Note that the Electric Water Heater and Central AC operate twice a day, where the desired start times are 6:00 and 17:00 for the heater, and 6:00 and 18:00 for the AC. In order to produce sufficient energy for the operation of these appliances, the building has its own PV system which consists of the following elements: 1) PV panels, for which the generations are taken from the dataset [2] explained above, 2) three batteries with a capacity of 13.5 kWh each, and 3) an inverter with a power rating of 10 kW.
Furthermore, during our experimental work, we set \(H=24\)\(h\), and we define the user dissatisfaction cost \(c_{(n,s)}\) for each device \(n\) at each slot \(s\) based on the "Desired Start Time", which is given in Table 1, as
\[c_{(n,s)}=1-\frac{1}{\sigma_{n}\sqrt{2\pi}}\,\exp\left(-\frac{1}{2}\left(\frac{ s-\mu_{n}}{\sigma_{n}}\right)^{2}\right) \tag{18}\]
where \(\mu_{n}\) is the desired start time of \(n\), and \(\sigma_{n}\) is the acceptable variance for the start of \(n\). The value of \(\sigma_{n}\) is 1 for the Iron and Electric Water Heater, 2 for the TV, Oven, Dishwasher and AC, 3 for the Washing Machine and Dryer, and 5 for the Robot Vacuum Cleaner. Also, the value of \(c_{(n,s)}\) is set to infinity for slots \(s\) earlier than the earliest start time and later than the latest start time.
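Equation (18) can be transcribed directly. The example below evaluates the cost around the Iron's desired start time (\(\mu_{n}=8\), \(\sigma_{n}=1\), as given above); the slot range is illustrative:

```python
import math

def dissatisfaction_cost(s, mu_n, sigma_n):
    """Eq. (18): user dissatisfaction cost of starting device n at slot s."""
    gauss = math.exp(-0.5 * ((s - mu_n) / sigma_n) ** 2) / (sigma_n * math.sqrt(2 * math.pi))
    return 1 - gauss

# Iron: desired start 8, sigma 1 -> cost is lowest at slot 8 and grows
# symmetrically with distance from it
costs = [dissatisfaction_cost(s, mu_n=8, sigma_n=1) for s in range(6, 11)]
print([round(c, 3) for c in costs])
```

Subtracting the Gaussian density from 1 yields a cost that is minimal at the desired start time, which is exactly the "increasing function of the distance" behaviour required by the formulation in Section 3.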
Recall that the Water Heater and AC, which are activated twice a day, are modelled as two separate devices.
#### 5.1.3 Implementation and Hyper-Parameter Settings for rTPNN-FES
We implemented rTPNN-FES by using Keras API on Python 3.7.13. The experiments are executed on the Google Colab platform with an operating system of Linux 5.4.144 and a 2.2GHz processor with 13 GB RAM.
The Forecasting Layer is trained on this platform via the Adam optimizer for 40 epochs with a \(10^{-3}\) initial learning rate. In order to exploit the PV generation trend on a daily basis, the batch size is fixed at 24. Moreover, an \(L_{2}\) regularization term is applied to the Trend and Level Predictors in the rTPNN layer in order to avoid vanishing gradients. Finally, the fully connected layers of rTPNN are comprised of \(F+1\) and \(\lceil(F+1)/2\rceil\) neurons, respectively, with sigmoid activation. The Scheduling Layer of each device is trained on the same platform, also using the Adam optimizer, for 20 epochs with a batch size of 1 and an initial learning rate of \(10^{-3}\). Note that setting the batch size to 1 is due to the particular Keras-based implementation of rTPNN-FES. In addition, the infinite values of \(c_{(n,s)}\) are set to 100 at the inputs of the Scheduling Layer in order to be able to calculate the neuron activation. We also set the periodicity \(\tau_{0}\) of \(g^{m_{s}}\) to 24 \(h\).
\begin{table}
\begin{tabular}{|l l l l|} \hline Appliance Name & Power Consumption (kW) & Active Duration & Desired Start Time \\ \hline \hline Washing Machine (warm wash) & 2.3 & 2 & 14 \\ Dryer (avg. load) & 3 & 2 & 16 (earliest 15) \\ Robot Vacuum Cleaner & 0.007 & 2 & 15 \\ Iron & 1.08 & 2 & 8 \\ TV & 0.15 & 3 & 20 \\ *Refrigerator & 0.083 & 24 & non-stop \\ Oven & 2.3 & 1 & 18 \\ Dishwasher & 2 & 2 & 21 \\ Electric Water Heater & 0.7 & 1 & 6, 17 \\ Central AC & 3 & 2 & 6, 18 \\ Pool Filter Pump & 1.12 & 8 & 10 \\ Electric Vehicle Charger & 7.7 & 8 & 21 (earliest 18 latest 23) \\ \hline \end{tabular}
\end{table}
Table 1: Household Appliances in the Smart Home Environment
Furthermore, the source codes of the rTPNN-FES and experiments in this paper are shared in [48] in addition to the repository of the original rTPNN.
#### 5.1.4 Genetic Algorithm-based Scheduling for Comparison
Genetic algorithms (GAs) have been widely used in scheduling tasks due to their ability to effectively solve complex optimization problems. GAs are able to incorporate various constraints and prior knowledge into the optimization process, making them well-suited for scheduling tasks with many constraints. GAs are also able to efficiently search through a vast search space to find near-optimal solutions, even for problems with a large number of variables [49]. These characteristics make GAs powerful tools for finding high-quality solutions in our experimental setup and good candidates to compare against rTPNN-FES.
The experiments are executed on the Google Colab platform with the same hardware configuration as rTPNN-FES. In this experimental setting, a chromosome is a daily schedule matrix. Cross-over is performed by swapping device schedules at a randomly selected cross-over point out of the total number of devices, and mutation is introduced by randomly changing the scheduled time of a single device with probability 0.1. The GA starts by sampling feasible solutions out of 5000 random solutions as the initial population. After that, 1000 new generations are simulated while the population size is fixed at 200, with selections made in an elitist style.
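The cross-over and mutation operators described above can be sketched as follows. This is a simplified illustration in which a chromosome is a list of start slots rather than a full daily schedule matrix, and the feasibility check of the paper's GA is omitted:

```python
import random

def crossover(parent_a, parent_b):
    """Swap device schedules after a random cross-over point
    chosen out of the total number of devices."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(schedule, S, p=0.1):
    """With probability p, re-draw the start slot of one randomly
    chosen device."""
    child = list(schedule)
    if random.random() < p:
        child[random.randrange(len(child))] = random.randrange(S)
    return child

random.seed(0)  # reproducible illustration
child = mutate(crossover([0, 2, 5, 7], [1, 3, 4, 6]), S=8)
print(child)
```

In the paper's setting, each offspring would additionally be checked against constraints (2)-(5) before entering the population.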
### Forecasting Performance of rTPNN-FES
We now compare the forecasting performance of rTPNN with the performances of LSTM, MLP, Linear Regression, Lasso, Ridge, ElasticNet, Random Forest as well as 1-Day Naive Forecast.1 Recall that in recent literature, References [31; 32; 33] used LSTM, and Reference [14; 15; 19; 38] used MLP.
Footnote 1: 1-Day Naive Forecast equals to the original time series with 1-day lag.
During our experimental work, the dataset is partitioned into training and test sets consisting of the first 300 days (corresponding to 7200 samples) and the remaining 361 days (corresponding to 8664 samples), respectively.
First, Table 2 presents the performances of all models on both training and test sets with respect to Mean Squared Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Symmetric Mean Absolute Percentage Error (SMAPE) metrics, which are calculated as
\[MSE = \frac{1}{S}\sum_{s=1}^{S}(g^{m_{s}}-\hat{g}^{m_{s}})^{2} \tag{19}\] \[MAE = \frac{1}{S}\sum_{s=1}^{S}\left|g^{m_{s}}-\hat{g}^{m_{s}}\right|\] (20) \[MAPE = \frac{100\%}{S}\sum_{s=1}^{S}\left|\frac{g^{m_{s}}-\hat{g}^{m_{ s}}}{g^{m_{s}}}\right|\] (21) \[SMAPE = \frac{100\%}{S}\sum_{s=1}^{S}\frac{\left|g^{m_{s}}-\hat{g}^{m_{ s}}\right|}{(|g^{m_{s}}|+|\hat{g}^{m_{s}}|)/2} \tag{22}\]
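The four metrics can be computed directly from definitions (19)-(22); the sketch below evaluates them on a toy two-sample window:

```python
def error_metrics(g, g_hat):
    """MSE, MAE, MAPE (%), and SMAPE (%) over a window, eqs. (19)-(22)."""
    S = len(g)
    mse = sum((a - b) ** 2 for a, b in zip(g, g_hat)) / S
    mae = sum(abs(a - b) for a, b in zip(g, g_hat)) / S
    mape = 100 / S * sum(abs((a - b) / a) for a, b in zip(g, g_hat))
    smape = 100 / S * sum(abs(a - b) / ((abs(a) + abs(b)) / 2)
                          for a, b in zip(g, g_hat))
    return mse, mae, mape, smape

# Toy window: actual generation [4, 2] kW, forecasts [3, 3] kW
mse, mae, mape, smape = error_metrics([4.0, 2.0], [3.0, 3.0])
print(mse, mae, mape, smape)
```

Note that SMAPE's denominator averages the actual and forecast magnitudes, which is the source of the scaling effect and the under-/over-forecasting asymmetry discussed below.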
In Table 2, the results on the test set show that rTPNN outperforms all of the other forecasters for the majority of the error metrics while some forecasters may perform better in individual metrics. However, observations on an individual error metric (without considering the other metrics) may be misleading due to its properties. For example, the MAPE of Ridge Regression is significantly low but MSE, MAE and SMAPE of that are high. The reason is that Ridge is more accurate in forecasting samples with high energy generation than forecasting those with low generations. Moreover, rTPNN is shown to have high generalization ability since it performs well for both training and test sets with regard to all metrics. Also, only rTPNN and LSTM are able to achieve better performances than the benchmark performance of the 1-Day Naive Forecast with respect to MSE, MAE and SMAPE.
We also see that SMAPE yields significantly larger values than those of other metrics (including MAPE) because SMAPE takes values in [0, 200] and has a scaling effect as a result of the denominator in (22). In particular, the absolute deviation of forecast values from the actual values is divided by the sum of those. Therefore, under- and over-forecasting have different effects on SMAPE,
where under-forecasting results in higher SMAPE.
Next, in Figure 6, we present the actual energy generation between the fifth and the seventh days of the test set as well as those forecast by the best three techniques (rTPNN, LSTM and MLP). Our results show that the predictions of rTPNN are the closest to the actual generation within the predictions of these three techniques. In addition, we see that rTPNN can successfully capture both increases and decreases in energy generation while LSTM and MLP struggle to predict sharp increases and decreases.
Finally, Figure 7 displays the histogram of the forecasting error realized by each of rTPNN, LSTM, and MLP on the test set. The results in this figure show that the forecasting error of rTPNN is around zero for a significantly large number of samples (around 5000 out of 8664), and that the absolute error is smaller than 2 for 93% of the samples. Overall, the forecasting error is lower for rTPNN than for both LSTM and MLP.
### Scheduling Performance of rTPNN-FES
We now evaluate the scheduling performance of rTPNN-FES for the considered smart home energy management system. To this end, we compare the schedule generated by rTPNN-FES with that by optimization (solving (1)-(5)) using actual energy generations as well as the GA-based scheduling (presented in Section 5.1.4). Note that although the schedule generated by the optimization using actual generations is the best achievable schedule, it is practically not available due to the lack of future information about the actual generations.
\begin{table}
\begin{tabular}{|c||c|c|c|c||c|c|c|c|} \hline \multirow{2}{*}{**Forecasting Methods**} & \multicolumn{4}{c||}{**Training Set**} & \multicolumn{4}{c|}{**Test Set**} \\ \cline{2-10} & **MSE** & **MAE** & **MAPE** & **SMAPE** & **MSE** & **MAE** & **MAPE** & **SMAPE** \\ \hline
**rTPNN** & 2.23 & 1.13 & 3.72 & 51.84 & 2.58 & 1.21 & 10.67 & 54.42 \\ \hline
**LSTM** & 2.18 & 1.18 & 4.95 & 54.83 & 2.56 & 1.26 & 13.59 & 57.98 \\ \hline
**MLP** & 2.77 & 1.35 & 6.33 & 60.57 & 3.09 & 1.42 & 14.25 & 63.06 \\ \hline
**Linear Regression** & 2.78 & 1.28 & 4.92 & 57.71 & 3.16 & 1.35 & 6.08 & 60.38 \\ \hline
**Lasso Regression** & 8.61 & 2.12 & 4.06 & 88.68 & 8.7 & 2.14 & 11.16 & 90.68 \\ \hline
**Ridge Regression** & 2.78 & 1.29 & 4.93 & 57.74 & 3.16 & 1.36 & 6.11 & 60.41 \\ \hline
**ElasticNet Regression** & 8.61 & 2.12 & 4.06 & 88.68 & 8.7 & 2.14 & 11.16 & 90.69 \\ \hline
**RandomForestRegressor** & 0.3 & 0.41 & 1.5 & 24.75 & 3.18 & 1.36 & 6.82 & 60 \\ \hline
**1-Day Naive Forecast** & 3.68 & 1.25 & 2.76 & 56.63 & 4.25 & 1.37 & 1.26 & 58.29 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of the forecasting performance of rTPNN with that of state-of-the-art forecasters with respect to MSE, MAE, MAPE, and SMAPE excluding nights
Figure 6: Forecasting results of the three most competitive models (rTPNN, LSTM and MLP) with respect to results in Table 2 for the time between fifth and seventh days in the test set
Figure 7: Histogram of the forecasting error in kW measured as (\(\hat{g}^{m_{s}}-g^{m_{s}}\)) for each \(m_{s}\) in the test set
Figure 8 (top) displays the comparison of rTPNN-FES against the optimal scheduling and the GA-based scheduling regarding the cost value for the days of the test set. In this figure, we see that rTPNN-FES significantly outperforms GA-based scheduling achieving close-to-optimal cost. In other words, the user dissatisfaction cost - which is defined in (1) - of rTPNN-FES is significantly lower than the cost of GA-based scheduling, and it is slightly higher than that of optimal scheduling. The average cost difference between rTPNN-FES and optimal scheduling is 1.3% and the maximum difference is about 3.48%.
Furthermore, Figure 8 (bottom) displays the summary of the statistics for the cost difference between rTPNN-FES and the optimal scheduling as well as the difference between GA-based and optimal scheduling as a boxplot. In Figure 8 (bottom), we first see that the cost difference is significantly lower for rTPNN-FES, where even the upper quartile of rTPNN-FES is smaller than the lower quartile of GA-based scheduling. We also
Figure 8: Comparison of rTPNN-FES against the optimal scheduling and GA-based scheduling with respect to the scheduling cost (top) for the days of the test set and (bottom) as the boxplot of the cost difference.
see that the median of the cost difference between rTPNN-FES and optimal scheduling is 0.13, and the upper quartile of that is about 0.146. That is, the cost difference is less than 0.146 for 75% of the days in the test set. In addition, we see that there are only 7 outlier days for which the cost difference is between 0.19 and 0.3. According to the results presented in Figure 8, rTPNN-FES can be considered a successful heuristic with only a small increase in cost.
### Evaluation of the Computation Time
In Table 3, we present measurements on the training and execution times of each forecasting model. Our results first show that the execution time of rTPNN (0.17 \(ms\)) is comparable with the execution time of LSTM and highly acceptable for real-time applications. On the other hand, the training time measurements show that the training of rTPNN takes longer than that of other forecasting models. Accordingly, one may say that there is a trade-off between training time and the forecasting performance of rTPNN.
Figure 9 displays the computation time of rTPNN-FES and that of optimization combined with LSTM (the second-best forecaster after rTPNN) in seconds. Note that we do not present the computation time of GA-based scheduling in this figure, since it takes 4.61 seconds on average (approximately 3 orders of magnitude higher than the computation time of rTPNN-FES and 1 order of magnitude higher than that of optimization) to find a schedule for a single window. Our results in this figure show that rTPNN-FES requires significantly lower computation time than optimization to generate a daily schedule of household appliances. The average computation time of rTPNN-FES is about 4 \(ms\), while that of optimization with LSTM is 150 \(ms\); that is, rTPNN-FES is 37.5 times faster at simultaneously forecasting and scheduling. Although the absolute computation time difference seems insignificant for a small use case (as in this paper), it would have important effects on the operation of large renewable energy networks with a high number of sources and devices.
## 6 Conclusion
We have proposed a novel neural network architecture, called Recurrent Trend Predictive Neural Network based Forecast Embedded Scheduling (namely rTPNN-FES), for smart home energy management systems. The rTPNN-FES architecture forecasts renewable energy generation and schedules household appliances to use renewable energy efficiently and to minimize user dissatisfaction. As the main contribution of rTPNN-FES, it performs both forecasting and scheduling in a single architecture. Thus, it 1)
| Forecasting Method | Training Time (seconds) | Execution Time (milliseconds) |
| --- | --- | --- |
| rTPNN | 210 | 0.17 |
| LSTM | 70 | 0.14 |
| MLP | 47 | 0.08 |
| Random Forest | 11.8 | 0.12 |
| Linear Regression | 0.004 | 0.0025 |
| Lasso Regression | 0.005 | 0.0012 |
| Ridge Regression | 0.004 | 0.0012 |
| Elastic Net Regression | 0.007 | 0.0012 |

Table 3: Training and Execution Times for Forecasting
Figure 9: Computation time (in seconds) comparison between rTPNN-FES and optimal scheduling under LSTM forecaster
provides a schedule that is robust against forecasting and measurement errors, 2) requires significantly less computation time and memory space by eliminating the use of two separate algorithms for forecasting and scheduling, and 3) offers high scalability, allowing the load set to grow (i.e., adding devices) over time.
We have evaluated the performance of rTPNN-FES for both forecasting renewable energy generation and scheduling household appliances using two publicly available datasets. During the performance evaluation, rTPNN-FES is compared against 8 different techniques for forecasting and against the optimization and genetic algorithm for scheduling. Our experimental results have drawn the following conclusions:
* The forecasting layer of rTPNN-FES outperforms all of the other forecasters for the majority of MSE, MAE, MAPE, and SMAPE metrics.
* rTPNN-FES achieves a highly successful schedule which is very close to the optimal schedule with only 1.3% of the cost difference.
* rTPNN-FES requires a much shorter time than both optimal and GA-based scheduling to generate the embedded forecast and schedule, although the forecasting time alone is slightly higher than that of other forecasters.
Future work shall improve the training of rTPNN-FES by directly minimizing the cost of user dissatisfaction (or other scheduling costs) to eliminate the collection of optimal schedules for training. In addition, the integration of a predictive dynamic thermal model into the rTPNN-FES framework shall be pursued in future studies. (Such integration is required to utilize more advanced HVAC scheduling/control system designs.) It would also be interesting to observe the performance of rTPNN-FES for large-scale renewable energy networks. Furthermore, since the architecture of rTPNN-FES is not dependent on the particular optimization problem formulated in this paper, rTPNN-FES shall be applied for other forecasting/scheduling problems such as optimal dispatch in microgrids, flow control in networks, and smart energy distribution in future work.
# PINNslope: seismic data interpolation and local slope estimation with physics informed neural networks

Francesco Brandolin, Matteo Ravasi, Tariq Alkhalifah

arXiv:2305.15990v2, published 2023-05-25, http://arxiv.org/abs/2305.15990v2
###### Abstract
Interpolation of aliased seismic data constitutes a key step in a seismic processing workflow to obtain high quality velocity models and seismic images. Building on the idea of describing seismic wavefields as a superposition of local plane waves, we propose to interpolate seismic data by utilizing a physics informed neural network (PINN). In the proposed framework, two feed-forward neural networks are jointly trained using the local plane wave differential equation as well as the available data as two terms in the objective function: a primary network assisted by positional encoding is tasked with reconstructing the seismic data, whilst an auxiliary, smaller network estimates the associated local slopes. Results on synthetic and field data validate the effectiveness of the proposed method in handling aliased (coarsely sampled) data and data with large gaps. Our method compares favorably against a classic least-squares inversion approach regularized by the local plane-wave equation as well as a PINN-based approach with a single network and pre-computed local slopes. We find that by introducing a second network to estimate the local slopes whilst at the same time interpolating the aliased data, the overall reconstruction capabilities and convergence behavior of the primary network are enhanced. An additional positional encoding, embedded as a network layer, confers to the network the ability to converge faster, improving the accuracy of the data term.
## 1 Introduction
The idea of describing seismic data as a superposition of local plane waves was introduced by Jon Claerbout back in 1992. These elementary waves can be modelled by the plane-wave partial differential equation (plane-wave PDE), which is parameterized only by the local slope factor (also referred to as slowness, or ray parameter). [1] demonstrated how a plane wave defined by its local slope can be annihilated within a given wavefield by means of a plane-wave partial differential operator. Leveraging this simple concept, he developed a linear approach for local-slope estimation using small moving windows across the data, where a single slope at the center of each window is computed by linear least-squares. Later, [2] proposed plane-wave destruction filters (PWD), a global approach for local slope estimation that requires the solution of a non-linear system to estimate the dip, but removes the need for windowing the data. Several techniques that utilize the plane-wave approximation have been developed over the years, with applications ranging from denoising ([3]), trace interpolation, and detection of local discontinuities ([2]) to velocity-independent imaging ([4]) and regularization of seismic estimation problems ([5]).
In this work we build upon the concept introduced by Claerbout, utilizing neural networks informed by the local plane-wave equation to simultaneously interpolate seismic data and estimate the local slopes of the events. In the machine learning literature, the idea of integrating the governing laws of physics into the learning process of a neural network is commonly referred to as physics informed neural networks (PINNs - [6]). PINNs have recently emerged as a novel computational paradigm in the field of scientific machine learning and have been shown to be very effective in representing solutions of partial differential equations (PDEs). The PINN framework can be utilized to solve both forward and inverse problems and has been successfully applied in various domains of computational physics. In the context of exploration seismology, PINNs have been utilized with different PDEs: to model wave propagation in the time domain using the wave equation ([7], [8]) and to model wavefields in the frequency domain leveraging the Helmholtz
equation and the vertical transversely isotropic wave equation ([9], [10] and [11]). PINNs have also been applied to the eikonal equation to help overcome some limitations of conventional techniques in seismic tomography problems ([12], [13], [14], [15]).
PINNs are a special class of feed-forward neural networks that predict a solution satisfying the PDE governing the physics of the problem, restricting the space of possible solutions in favor of physically reliable ones. In the standard scenario, where the loss function includes a data fitting term (as in our implementation), the PDE term acts as a soft constraint on the network optimization problem. Specifically, PINNs are generic function approximators promoting reconstructions that are naturally consistent with the available data. With this advantage we obtain an algorithm that leverages the interpolation capabilities of feed-forward neural networks, but whose solution is strongly driven by the physics of the problem. Moreover, compared to traditional numerical approaches that rely on the discretization of the derivatives involved in a PDE, PINNs learn a direct mapping from spatial coordinates to wavefield amplitudes (or any other physical quantity), removing the need for finite-difference approximations and relying instead on functional derivatives, which are more stable.
A deep learning framework named coordinate-based learning has recently emerged in the literature ([16], [17]). Coordinate-based learning aims to solve imaging inverse problems by utilizing a feed-forward network to learn a continuous mapping from the measurement coordinates to the corresponding sensor responses. The approach bears resemblance to PINNs, except that it does not utilize any physical law to constrain the network solution. [16] found that having the feed-forward network act directly on the input coordinates performs poorly at representing the high-frequency components of signals, consistent with the low-frequency bias of neural networks demonstrated by [18]. To overcome this problem, they proposed to map the input coordinates to a higher dimensional space through _positional encoding_ before feeding them to the network. Successful implementations of _positional encoding_ in seismic applications can be found in the works of [19] and [20]. In our implementation, we also leverage positional encoding to ensure that the network is capable of reconstructing multi-scale, oscillating signals like those encountered in seismic data.
To support our claims, we present numerical examples focused on two of the most challenging tasks in seismic interpolation: namely, interpolation of regularly subsampled data (beyond aliasing) and of data with large gaps. We first evaluate the performance of the proposed approach on two simple synthetic data examples, comparing the results obtained by the PINN approach with a simple plane-wave regularized least-squares inversion (PWLS) and against a previous version of our framework called PWD-PINN ([21]). We then consider a field data example where, given the challenge of the higher frequency content and higher complexity of the recorded signals, we leverage positional encoding to overcome the low-frequency bias of these types of architectures ([18]). The proposed framework, which we refer to as PINNslope, not only allows interpolating seismic data, but can also be used to estimate slopes from fully sampled data with quality comparable to that of PWD filters. In the context of sparsely sampled data, the procedure is advantageous because we can directly estimate the slopes during the interpolation of the recorded wavefield (i.e., using the aliased data), without the need to low-pass filter the data to obtain an alias-free version on which to perform a reliable slope estimation, as in our previous work. The slope estimated by the slope network turns out to be a smooth, yet more accurate, version than the one estimated from the low-frequency data by means of PWD filters. Thanks to the physical constraint, the network is very efficient in steering the interpolation process towards an accurate, physically driven solution.
To summarize, our main contributions comprise:
1. A novel machine learning framework for slope assisted seismic data interpolation.
2. An innovative procedure for local slope attribute estimation by the mean of physics informed neural networks.
3. Successful application of the framework on field data.
The paper is organized as follows. First, we present the theoretical background of our methodology. We then describe the network architecture, focusing on some of the key implementation details needed to achieve a stable training process. Finally, our method is applied to a range of synthetic and field data. Its reconstruction capabilities are compared to those of a classic least-squares inversion regularized by the discretised local plane-wave equation (PWLS), as well as to our previously published PWD-PINN approach, which uses pre-computed local slopes.
Theory
### Problem statement
The objective of this paper is to formulate the problem of seismic data interpolation within the framework of physics informed neural networks.
To begin with, let us define the basic mathematical model used to obtain a decimated version of the original seismic data: a restriction operator **R** is defined such that it samples the columns of the data matrix \(\textbf{u}=[\textbf{u}_{1}^{T},\textbf{u}_{2}^{T},...,\textbf{u}_{N}^{T}]\) at the desired locations (where \(N\) is the total number of traces in the dataset), removing the missing traces that we wish to interpolate. In matrix notation, the operation of subsampling a gather of traces can be written as:
\[\textbf{d}=\textbf{Ru} \tag{1}\]
where **d** is the subsampled data with either missing traces at a regular interval to simulate spatial aliasing, or missing a large number of consecutive traces to simulate a gap in the acquisition geometry.
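The action of the restriction operator in equation 1 can be sketched in a few lines of NumPy; the function name and the decimation factor below are illustrative choices, not part of the paper:

```python
import numpy as np

def restrict(u, factor=5, offset=0):
    """Apply a restriction operator R to a gather u of shape (nt, nx):
    keep every `factor`-th trace (column) starting at `offset`,
    mimicking d = R u with regularly missing traces."""
    return u[:, offset::factor]

# Toy gather: 100 time samples, 40 traces, decimated to 8 traces.
u = np.random.randn(100, 40)
d = restrict(u, factor=5)
```

For the large-gap scenario, **R** would instead select two blocks of contiguous columns on either side of the gap.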
### Slope estimation with plane-wave destructors
The physical model used to express seismic data as local plane-waves is represented by the local plane-wave differential equation:
\[\frac{\partial u(t,x)}{\partial x}+\sigma(t,x)\frac{\partial u(t,x)}{\partial t }=r(t,x)\approx 0, \tag{2}\]
where \(u(t,x)\) is the pressure wavefield, \(r(t,x)\) is the PDE residual, and the parameter \(\sigma(t,x)\) is the local slope (or wavenumber) with units equal to the inverse of the velocity of propagation. An analytical expression exists for the solution of equation 2 in case of a constant slope, which is simply represented by a plane-wave
\[u(t,x)=f(t-\sigma x), \tag{3}\]
where \(f(t)\) is an arbitrary waveform at \(x=0\). We can see that the left-hand side of equation 2 decreases as the observation \(u(t,x)\) matches the plane wave \(f(t-\sigma x)\) ([22]).
In our work, we are interested in computing a slope varying both in time and space, but no analytical solution exists for such a case. Hence, Claerbout (1992) casts the dip estimation as a linear least-squares problem, through an operation named plane-wave destruction. In this approach, the curvature of the events is linearly approximated by computing the slope in a small window of the entire data. The slope is estimated through equation 2, minimizing the quadratic residual:
\[Q(\sigma)=(\textbf{u}_{x}+\sigma\textbf{u}_{t})\cdot(\textbf{u}_{x}+\sigma \textbf{u}_{t}), \tag{4}\]
where \(\textbf{u}_{x}\) and \(\textbf{u}_{t}\) are respectively defined as the spatial and temporal derivatives of the wavefield **u**. Setting the derivative of \(Q(\sigma)\) to zero, we can find its minimum as:
\[\sigma=-\frac{\textbf{u}_{x}\cdot\textbf{u}_{t}}{\textbf{u}_{t}\cdot\textbf{ u}_{t}}. \tag{5}\]
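Equation 5 translates into a compact windowed estimator; the following NumPy sketch (our illustrative reading of Claerbout's windowed approach, not his original code) computes one slope per small window:

```python
import numpy as np

def local_slope(u, dt=1.0, dx=1.0, win=8):
    """Windowed least-squares slope estimate.

    For each (win x win) window of the gather u (time x space), the slope
    minimizing Q(sigma) = ||u_x + sigma * u_t||^2 is computed in closed
    form as sigma = -(u_x . u_t) / (u_t . u_t), following equation 5.
    """
    ut = np.gradient(u, dt, axis=0)   # temporal derivative
    ux = np.gradient(u, dx, axis=1)   # spatial derivative
    nt, nx = u.shape
    sigma = np.zeros((nt // win, nx // win))
    for i in range(sigma.shape[0]):
        for j in range(sigma.shape[1]):
            st = ut[i*win:(i+1)*win, j*win:(j+1)*win].ravel()
            sx = ux[i*win:(i+1)*win, j*win:(j+1)*win].ravel()
            # Small epsilon guards against division by zero in dead windows.
            sigma[i, j] = -np.dot(sx, st) / (np.dot(st, st) + 1e-12)
    return sigma
```

Applied to a synthetic plane wave \(u(t,x)=\sin(\omega(t-\sigma x))\), the estimator recovers the constant slope \(\sigma\) up to finite-difference accuracy.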
Fomel (2002), on the other hand, frames the slope estimation as a non-linear least-squares problem, computing the slope globally over the entire data by means of plane-wave destruction filters. Given a wavefield **u** as a gather of seismic traces \(\textbf{u}=[\textbf{u}_{1},\textbf{u}_{2},...,\textbf{u}_{N}]^{T}\), the destruction operator predicts each trace from the previous one by shifting the observed trace along the dominant local slopes of the seismic data and subtracts the prediction from the original trace. The phase-shift operation on the traces is approximated by an all-pass digital filter (or prediction filter in 2D). The filter coefficients are determined by fitting the filter frequency response (at low frequencies) to the response of the phase-shift operator. In this implementation, the slope (\(\sigma\)) enters the filter coefficients in a non-linear way. To characterize the entire gather, the prediction of several plane waves (and not only one) is needed. This is achieved by cascading various filters of the above-mentioned form. The filter is applied to the data **u** as a convolutional operator \(\textbf{D}(\sigma)\). In matrix notation, the slope estimation problem can be written as
\[\textbf{D}(\sigma)\textbf{u}=\textbf{r}, \tag{6}\]
where **r** is the residual. This non-linear least-squares problem is solved via Gauss-Newton iterations, which implies solving
\[\textbf{D}^{\prime}(\sigma_{0})\Delta\sigma\textbf{u}+\textbf{D}(\sigma_{0}) \textbf{u}=\textbf{r}, \tag{7}\]
where \(\textbf{D}^{\prime}(\sigma_{0})\) is the derivative of the filter coefficients \(\textbf{D}(\sigma)\) with respect to \(\sigma\). The minimization problem is solved for the update \(\Delta\sigma\), which at every iteration is added to the current estimate, starting from an initial guess \(\sigma_{0}\). The problem can be regularized by adding an appropriate penalty term that avoids oscillatory solutions of the slope attribute.
### Plane-wave regularized least-squares interpolation
In this section, we describe a conventional approach to take into account pre-computed slopes whilst interpolating seismic data (i.e., restoring missing traces). This method will be later used as a benchmark for our PINNs approach. The inverse problem is cast as follows: finding the shot-gather **u** (i.e., the full gather of traces) that minimizes the Euclidean distance between the subsampled data **d** and the estimated subsampled data **Ru**, whilst at the same time satisfying the plane-wave differential equation with pre-computed slopes. The objective function is formally defined as
\[f(\textbf{u})=\|\textbf{d}-\textbf{Ru}\|_{2}^{2}+\epsilon_{r}\|\textbf{u}_{x }+\boldsymbol{\Sigma}\textbf{u}_{t}\|_{2}^{2} \tag{8}\]
where \(\textbf{u}_{x}\) and \(\textbf{u}_{t}\) are respectively defined as the spatial and temporal derivatives of the data **u**, \(\boldsymbol{\Sigma}\) is a diagonal matrix that applies element-wise multiplication by the pre-computed local slope, and \(\epsilon_{r}\) is a weight controlling the contribution of the PDE to the solution. The data term of the objective function aims at accurately reproducing the available traces from the estimated full shot-gather **u** subsampled by the restriction operator **R**. This means that the entire interpolation between the traces is performed by the local plane-wave regularization term. In other words, the regularization term fills the gaps between the subsampled traces, spraying the information available from two neighboring traces along the curvature of the provided local slope field.
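A minimal sketch of this inversion is given below, assuming first-order finite-difference approximations of the derivatives and SciPy's LSQR as solver (both our choices for illustration, not necessarily those used in the paper):

```python
import numpy as np
from scipy.sparse import diags, eye, kron, vstack
from scipy.sparse.linalg import lsqr

def pwls_interp(d, mask, sigma, eps=1.0):
    """Plane-wave regularized least-squares interpolation (sketch).

    d     : observed gather, shape (nt, nx), zeros at the missing traces
    mask  : boolean (nt, nx), True where samples are available
    sigma : local slope field, shape (nt, nx)
    Minimizes ||d - R u||^2 + eps * ||u_x + Sigma u_t||^2, with first-order
    finite differences standing in for the derivatives of equation 8.
    """
    nt, nx = d.shape
    n = nt * nx
    # Forward-difference operators on the F-ordered (time-fastest) grid.
    d1t = diags([-np.ones(nt), np.ones(nt - 1)], [0, 1], shape=(nt, nt))
    d1x = diags([-np.ones(nx), np.ones(nx - 1)], [0, 1], shape=(nx, nx))
    Dt = kron(eye(nx), d1t)            # d/dt, applied trace by trace
    Dx = kron(d1x, eye(nt))            # d/dx, applied across traces
    Sig = diags(sigma.flatten(order="F"))
    R = eye(n, format="csr")[mask.flatten(order="F")]
    # Stack the data-fitting and regularization terms into one system.
    A = vstack([R, np.sqrt(eps) * (Dx + Sig @ Dt)])
    rhs = np.concatenate([d.flatten(order="F")[mask.flatten(order="F")],
                          np.zeros(n)])
    u = lsqr(A, rhs, atol=1e-10, btol=1e-10, iter_lim=3000)[0]
    return u.reshape(nt, nx, order="F")
```

On a smooth linear event with a known constant slope, the plane-wave term fills the decimated traces far better than leaving them empty.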
### Physics Informed Neural Networks
In this section, we aim to show that, starting from knowledge of a slope field estimated via PWD filters or any other algorithm, the problem of seismic interpolation can be formulated within the PINN framework.
PINNs have been designed to blend the universal function approximator capabilities of neural networks ([23]) with a physical constraint given by a PDE, which describes the physical system under study. In our specific case, the PDE that we seek to satisfy is the local plane-wave differential equation (equation 2).
A neural network \(\phi_{\theta}(t,x)\) is designed to approximate the function \(u(t,x)\), where \(\theta\) refers to the weights (and biases) to be optimized and the pair (\(t,x\)) represents the input to the network. The network predicts the recorded wavefield \(u(t,x)\) at the corresponding location in the time-space domain of interest. A remarkable convenience of PINNs is that, in contrast to traditional numerical methods, they do not require a discretization of the computational domain. The partial derivatives of the underlying PDEs are computed by means of automatic differentiation (AD), which is a general and efficient way to compute derivatives based on the chain rule. AD is usually implemented in neural network training to compute the derivatives of the loss function with respect to the parameters of the network. However, AD can be applied more broadly to every computational program that performs simple arithmetic operations and calculates elementary functions (linear transformations and non-linear activation functions in the case of neural networks), by keeping track of the operation dependencies via a computational graph and successively computing their derivatives using the chain rule. The PINN framework is trained in an unsupervised manner, using a loss function which includes both the local plane-wave differential equation and a set of \(N_{t}\) (number of traces in the subsampled gather) boundary conditions corresponding to the available traces
\[\mathcal{L}=\frac{1}{N_{u}}\sum_{i=0}^{N_{u}}\left(\frac{\partial\phi_{\theta }(t_{i},x_{i})}{\partial x}+\sigma(t_{i},x_{i})\frac{\partial\phi_{\theta}(t_ {i},x_{i})}{\partial t}\right)^{2}+\lambda\left(\frac{1}{N_{t}}\sum_{j=0}^{N_ {t}}|u(t_{j},x_{j})-\phi_{\theta}(t_{j},x_{j})|\right), \tag{9}\]
where \((t_{i},x_{i})\) are points randomly sampled from the input space, with \(N_{u}\) the total number of grid points, \(u(t_{j},x_{j})\) is the known solution at the points indexed by \(j\) (available traces), and \(\lambda\) is a scalar weight for the second term.
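The loss of equation 9 can be sketched in PyTorch, with the partial derivatives obtained through automatic differentiation (`torch.autograd.grad`); the function and argument names below are illustrative, not the authors' code:

```python
import torch

def pinn_loss(model, sigma_fn, t_col, x_col, t_obs, x_obs, u_obs, lam=1000.0):
    """PINN objective of equation 9 (sketch).

    model    : network phi_theta mapping stacked (t, x) to u
    sigma_fn : callable returning the (fixed, pre-computed) slope at (t, x)
    (t_col, x_col): random collocation points
    (t_obs, x_obs, u_obs): samples from the available traces
    """
    t = t_col.clone().requires_grad_(True)
    x = x_col.clone().requires_grad_(True)
    u = model(torch.stack([t, x], dim=-1)).squeeze(-1)
    # Functional derivatives of the network output via autograd.
    ut = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    ux = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    pde = (ux + sigma_fn(t, x) * ut) ** 2
    data = (model(torch.stack([t_obs, x_obs], dim=-1)).squeeze(-1)
            - u_obs).abs()
    return pde.mean() + lam * data.mean()
```

For a toy linear "wavefield" \(u=2t+3x\), the loss vanishes exactly when the provided slope annihilates the plane wave (\(\sigma=-3/2\)) and is large otherwise.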
In this first approach, named PWD-PINN (fig.1), the slope \(\sigma(t_{i},x_{i})\) is pre-computed by means of PWD filters, outside the training process of the network. The slope array remains fixed during training and is not updated.
### Simultaneous data interpolation and slope estimation
In this section, we introduce the slope estimation framework using physics informed neural networks, named PINNslope. We propose to estimate the local slopes while at the same time interpolating the aliased data (or performing any other type of interpolation task). Specifically, we simultaneously train two neural networks to predict the data and the local slopes that satisfy the plane-wave PDE. This approach bears similarity with previous works by [12] and [24] in the context of traveltime tomography.
As shown in the diagram of fig.2, both networks have fully-connected architectures and utilize \(Tanh(\cdot)\) activation functions; moreover, a positional encoding is added to the wavefield network to tackle multi-scale signals (i.e., signals exhibiting both low and high frequency components) such as our seismic traces. The two networks also differ in the number and size of the layers: the wavefield network aims at reconstructing the shot gather data and requires a much larger number of degrees of freedom to fit the complexity of seismic signals (i.e., the seismic traces \(u(t_{j},x_{j})\)); for the slope, on the other hand, a smooth solution is desired, which can be achieved with a more compact architecture. After computing the loss function, both networks are simultaneously updated. Two separate ADAM optimizers are utilized to allow two different learning rate values if necessary.
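A minimal PyTorch sketch of the joint training step follows, with two small MLPs and two ADAM optimizers; the layer sizes are illustrative and much smaller than those used in the experiments, and the names are ours, not the authors':

```python
import torch
import torch.nn as nn

def make_mlp(width, depth, in_dim=2, out_dim=1):
    """Fully-connected network with Tanh activations."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.Tanh()]
        d = width
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

# Larger wavefield network and compact slope network (sizes illustrative).
u_net = make_mlp(width=64, depth=4)
s_net = make_mlp(width=16, depth=2)
opt_u = torch.optim.Adam(u_net.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(s_net.parameters(), lr=1e-3)

def step(t, x, t_obs, x_obs, u_obs, lam=1000.0):
    """One joint update of both networks on a batch of points."""
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    inp = torch.stack([t, x], dim=-1)
    u = u_net(inp).squeeze(-1)
    sigma = s_net(inp).squeeze(-1)      # slope is predicted, not pre-computed
    ut = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    ux = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    data_fit = (u_net(torch.stack([t_obs, x_obs], dim=-1)).squeeze(-1)
                - u_obs).abs().mean()
    loss = ((ux + sigma * ut) ** 2).mean() + lam * data_fit
    opt_u.zero_grad()
    opt_s.zero_grad()
    loss.backward()
    opt_u.step()                        # both networks updated together
    opt_s.step()
    return loss.item()
```

Both optimizers act on the same scalar loss, so the slope network is driven purely by the PDE term while the wavefield network also sees the data term.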
### Positional Encoding
During the numerical experiments, both frameworks struggled to fit signals with high frequency content. In our previous experiments, the low-frequency bias of neural networks ([18]) was addressed with frequency upscaling by the
Figure 1: A diagram explaining the PWD-PINN algorithm. The network is trained while maintaining the slope array fixed during training.
Figure 2: Double network scheme for joint wavefield and slope estimation in the same training procedure.
means of _neuron splitting_ ([25]) and with locally adaptive activation functions ([26]). Here, the low-frequency bias of multi-layer perceptrons (MLPs) is tackled by including _positional encoding_ of the input coordinates ([16]).
Unlike the classical Transformer approach to positional encoding, where it is used to track token positions, in our application we use it to map the input coordinate grid into a higher dimensional space, which allows for a better fit of high-frequency signals. The approach implemented in this work resembles the one previously presented in [17], referred to as _Fourier feature mapping_, where the authors utilized a linear sampling in the Fourier space that allows for a large number of frequency components in the low-frequency regions. This choice is fundamental also for our implementation, as other forms of encoding introduced noise into the reconstruction: the higher frequencies present in those encodings were able to fit the noise in the traces. Additionally, [20] proposed an anisotropic version of positional encoding, justified by the idea that the seismic data coordinates present different features and should not be equally encoded. The formulation utilized here can be summarized as follows:
\[\gamma_{X}(x)=[cos(k_{0}x),sin(k_{0}x),...,cos(k_{X-1}x),sin(k_{X-1}x)], \tag{10}\]
where \(k_{x}=\frac{\pi x}{2}\), with \(x=0,\ldots,X-1\in\mathbb{N}\), is a simple linear sampling, and \(X\) represents the number of encoded frequencies for the \(x\) coordinate. The \(t\) coordinate is encoded in the same way, with the number of frequencies equal to \(T\in\mathbb{N}\), and the encoded coordinates are subsequently concatenated together as
\[\Gamma_{X,T}(x,t)=[\gamma_{X}\left(x\right),\gamma_{T}(t)]. \tag{11}\]
The positional encoding operation has been embedded as a network layer inside the architecture of the data network, and the number of frequencies corresponding to each coordinate is decided through trial and error.
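Equations 10 and 11 translate directly into code; a NumPy sketch (illustrative) of the anisotropic encoding is:

```python
import numpy as np

def positional_encoding(x, t, X=8, T=32):
    """Anisotropic positional encoding of equations 10-11.

    x, t : 1-D arrays of input coordinates
    X, T : number of encoded frequencies per coordinate, with the linear
           frequency sampling k_i = pi * i / 2.
    Returns an array of shape (len(x), 2*X + 2*T).
    """
    def gamma(c, n):
        k = np.pi * np.arange(n) / 2.0
        phase = np.outer(c, k)
        out = np.empty((len(c), 2 * n))
        out[:, 0::2] = np.cos(phase)   # cos(k_0 c), cos(k_1 c), ...
        out[:, 1::2] = np.sin(phase)   # interleaved with the sin terms
        return out
    return np.concatenate([gamma(x, X), gamma(t, T)], axis=1)
```

The default frequency counts match those used later for the field data (\(X=8\), \(T=32\)); in practice these are the hyperparameters tuned by trial and error.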
## 3 Numerical experiments
In this section, the proposed methodology is tested on synthetic and field data. For both the PWD-PINN and PINNslope approaches, a feed-forward neural network architecture with 4 layers and a \(Tanh(\cdot)\) activation function was utilized. Within each experiment, the number of neurons is the same for all layers; this number varies between experiments and is specified in the corresponding subsections. In both frameworks, the networks are trained in an unsupervised manner, passing as input an ensemble of \((x,t)\) points. The ensemble is passed to the networks in batches of 1000 randomly sampled points. For every batch, the ensemble of collocation points is concatenated with an array containing half of the points \((x_{j},t_{j})\) associated with the available traces to be fitted. All networks in all experiments are trained using the ADAM optimizer, with the learning rate fixed at \(10^{-3}\). These parameters were chosen based on some initial tests and kept fixed throughout the study.
### Synthetic data examples
#### 3.1.1 Local slope estimation
In this first example, we estimate the slope with the PWD algorithm and with the PINNslope framework, to compare their performance. The synthetic seismic image (Sigmoid model, [1]) is assumed to be fully sampled and all the traces have been utilized in the training process.
Figure 3: Slope comparison between the PWD algorithm and PINNslope. a) Seismic image, b) local slope estimate from plane-wave destruction filters [2], c) local slope estimate obtained with the PINNslope approach, d) difference between the PWD and PINNslope estimated slopes.
As shown in Fig.3, the PINNslope framework can accurately estimate the local slope of complex subsurface geometries, and it results in a slightly smoother version with fewer artefacts near the major fault compared to the local slope estimated via the PWD algorithm.
#### 3.1.2 Interpolation beyond aliasing with local slope estimation
The goal of this second example is to reduce the spatial aliasing present in the recorded data by interpolating the missing traces. The synthetic data in fig.4a have a trace spacing of 10 meters and have been subsampled by a factor of 5 through the operator **R**, to obtain the aliased version in fig.4b.
It is not possible to apply the PWD filter directly to estimate the slope from the subsampled data, since it can lead to erroneous estimates by picking the aliased dips instead of the true ones of the fully sampled data [2].
To avoid this issue, the following pre-processing steps have been performed:
1. Apply \(f-k\) filter to the spectrum of the aliased data.
2. Inverse transform the filtered spectrum to get a low frequency alias-free version of the data.
3. Apply the PWD filters algorithm to the low frequency data and estimate the slope.
4. Utilize the PWD estimated slope from low frequency data inside the PINN loss function.
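Steps 1 and 2 amount to a low-pass filter applied in the Fourier domain; the NumPy sketch below is our illustrative reading of them, filtering only the temporal frequency axis (a production f-k filter would typically also taper the cut-off to avoid ringing):

```python
import numpy as np

def fk_lowpass(d, dt, fmax):
    """Zero all temporal frequencies above fmax in the f-k domain and
    transform back, yielding a low-frequency, alias-free version of the
    gather d (shape: nt x nx)."""
    D = np.fft.fft2(d)
    f = np.fft.fftfreq(d.shape[0], dt)   # temporal frequency axis in Hz
    D[np.abs(f) > fmax, :] = 0.0         # hard cut above fmax
    return np.real(np.fft.ifft2(D))
```

The output is the low-frequency gather on which the PWD slope can then be safely estimated (step 3).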
The network capacity corresponds to 4 layers with 512 neurons each, with the number of encoding frequencies set to \(X=8\) and \(T=32\) for the \(x\) and \(t\) coordinates, respectively. The network is trained using the loss function in eq.9, with the parameter \(\lambda\) set to 1000 and \(\sigma\) corresponding to the PWD estimated slope displayed in fig.4f.
Fig.5 compares the results obtained with the different approaches. The output of the regularized least-squares inversion in fig.5a demonstrates the importance of the plane-wave penalty term, which helps in filling the gap between the
Figure 4: a) Original seismic data, b) seismic data with missing traces, c) low frequency data from which the PWD slope has been computed, d) \(f-k\) spectrum of the data, e) \(f-k\) spectrum of the subsampled data, f) PWD estimated slope from the low frequency data in fig.4c.
available traces following the correct overall geometry of the arrivals. Unfortunately, as soon as the reflections start bending, their resolution decreases, worsening towards the far offsets. In this interpolation attempt, the sharp and definite seismic response that characterizes this simple synthetic dataset is slightly smeared into a fuzzy pattern, a sign that the algorithm cannot properly restore the energy in the correct position. The difference with the original data in fig.5d shows the amount of energy lost, as well as some artifacts. The achieved result is almost perfect where the arrivals are generally linear. However, the LS inversion is almost instantaneous compared to the neural network approaches. The result displayed in fig.5b requires a runtime of approximately 26 minutes for 2000 epochs, as shown in the plot of the loss curves in fig.6a.
As in the previous result, the quality of the interpolation of the PWD-PINN algorithm decreases at the far offsets, although only for the first few reflections. This is a limitation of the algorithms that rely on the PWD estimated slope, which is inaccurate at the far offsets where the events are steeper; the slope estimated via PWD inherently contains errors because of the procedure through which it has been computed and, even more so, because it has been estimated from a low-frequency version of the original data. Despite the poor interpolation of the above-mentioned arrivals, all the others look adequately restored. Most of the energy is in the correct position, as we can see from the spectrum in fig.5h.
Figure 5: a) Plane-wave regularized least-squares inversion interpolation result, b) PWD-PINN interpolation result, c) PINNslope interpolation result, d) difference between fig.4a and fig.5a, e) difference between fig.4a and fig.5b, f) difference between fig.4a and fig.5c, g) \(f-k\) spectrum of fig.5a, h) \(f-k\) spectrum of fig.5b, i) \(f-k\) spectrum of fig.5c.
However, in this result the resolution is lower; in fact, the traces interpolated at the far offsets contain gaps. Moreover, in the first two events the amplitude is not properly reproduced.
The best reconstruction is clearly given by the PINNslope framework. The architecture of the network has the same capacity as that of the PWD-PINN algorithm, and the loss function in fig.6b shows that it has been trained for the same number of epochs as PWD-PINN. The key difference in the result is made by the second, smaller network that approximates the local slope function. The slope estimation is carried out simultaneously with the interpolation performed by the bigger data network on the original shot gather (Fig. 4a), with no filtering required. This simultaneous updating of data and slope allows for a larger search space, speeding up convergence to accurate data. As can be seen from fig.7, the PINN estimated slope closely matches the accurate PWD slope computed from the data of figure 4a (ignoring the upper-right part, where there are no arrivals and the two approaches clearly extrapolate the values).
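As we understand the framework described above, the joint update of the data network and the slope network can be sketched as follows. This is a minimal PyTorch illustration on a toy linear event; the network sizes, learning rate, sampling and loss weighting are our assumptions for illustration, not the authors' code:

```python
import torch

torch.manual_seed(0)

# Two coordinate-based MLPs trained jointly: a "data" network u(x, t) that
# outputs the wavefield amplitude, and a tiny "slope" network sigma(x, t).
def mlp(sizes):
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [torch.nn.Linear(a, b), torch.nn.Tanh()]
    return torch.nn.Sequential(*layers[:-1])   # no activation on the output

u_net = mlp([2, 64, 64, 1])   # interpolating network (smaller than the paper's 4x512)
s_net = mlp([2, 2, 2, 1])     # slope network: 2 hidden layers of 2 neurons, as in the text
opt = torch.optim.Adam(list(u_net.parameters()) + list(s_net.parameters()), lr=1e-3)
lam = 100.0                   # weight on the data-fitting term

# Toy "available traces": a single linear event u = sin(6 (t - 0.5 x))
xs, ts = torch.rand(256, 1), torch.rand(256, 1)
obs = torch.sin(6.0 * (ts - 0.5 * xs))

history = []
for step in range(200):
    opt.zero_grad()
    # Data term: fit the recorded traces
    data_loss = torch.mean((u_net(torch.cat([xs, ts], 1)) - obs) ** 2)
    # Physical term: plane-wave PDE residual u_x + sigma * u_t = 0,
    # evaluated via autograd at random collocation points
    xc = torch.rand(256, 1, requires_grad=True)
    tc = torch.rand(256, 1, requires_grad=True)
    u = u_net(torch.cat([xc, tc], 1))
    u_x = torch.autograd.grad(u.sum(), xc, create_graph=True)[0]
    u_t = torch.autograd.grad(u.sum(), tc, create_graph=True)[0]
    sigma = s_net(torch.cat([xc, tc], 1))
    pde_loss = torch.mean((u_x + sigma * u_t) ** 2)
    loss = lam * data_loss + pde_loss
    loss.backward()
    opt.step()
    history.append(data_loss.item())
```

Because the plane-wave residual drives the same loss that fits the traces, the slope network is updated by the interpolation itself, which is the "simultaneous updating" described above.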
### Field data examples
#### 3.2.1 Interpolation beyond aliasing with local slope estimation
The numerical examples below are performed on a field dataset from the Gulf of Mexico. Here, only the results of the PINNslope and PWD-PINN approaches will be compared, as we want to focus on the PINN approaches. The trace spacing in the original shot gather of fig.8a is 26.7 meters, and we subsampled it by a factor of 5, increasing the spacing between traces to 133.5 meters (fig.8b).
Figure 6: a) Loss of the PWD-PINN training. b) Loss resulting from the PINNslope training. The insets in the two plots show the contribution of each term of the loss function in log-scale, that is: data term (in green), physical term (in blue) and total loss (in orange).
Figure 7: a) Accurate slope estimated with the PWD algorithm from the data shown in fig.4a. b) Slope estimated through the PINNslope framework while simultaneously interpolating the data in fig.5c, c) difference between fig.7a and fig.7b.
As mentioned earlier, computing the slope directly from the subsampled gather is not feasible, and some pre-processing steps are required. Filtering out the aliased part of the field data in fig.8e is far more challenging than for the synthetic data; the part of the signal that is not aliased is very small and does not contain significant energy. The retrieved low frequency data (fig.8c) are fed into the PWD algorithm and, due to the low frequency nature of the data, the resulting local slope is a low resolution, rough estimate of the slope of the high-resolution data (fig.8f). The network has an architecture of 4 layers with 512 neurons in each layer, equal to the one used for the synthetic data. That is because, even though the traces are far more complex in the field data, the synthetic traces have an amplitude that does not decrease as much in time: from our initial tests, the network requires the same capacity to fit the strong oscillations of the synthetic signals. The slope network (as in the synthetic case) has 2 layers with 2 neurons to estimate a smooth version of the slope field.
The PWD-PINN algorithm (fig.9a) does not achieve a good result. Only the interpolation in the near offset could be considered reasonable. In the far offsets the reconstruction is worse, as we already observed on the synthetic data. In this part of the dataset, the PWD estimated slope is prone to errors and does not allow a good interpolation. The loss curves (fig.10a) show that we had to increase the \(\lambda\) parameter to very high values (\(\lambda=10000\)) to make the network properly fit the traces. The network struggles to accurately fit the traces if the accuracy of the local slope is poor, as this negatively affects the PDE term of the loss function. As a result, increasing the weight on the data fitting term reduced the PWD-PINN algorithm to a data fitting algorithm.
In contrast, the PINNslope approach achieves good performance. It reproduces the original dataset; most of the energy has been restored and the aliasing has been suppressed. The extra degrees of freedom provided by the small slope network helped the convergence.
Figure 8: a) The original seismic data, b) subsampled seismic data, c) low frequency data from which the PWD slope has been computed, d) \(f-k\) spectrum of the original data, e) \(f-k\) spectrum of the subsampled data, f) PWD estimated slope from the low frequency data in fig.8c.
In fig.11, the PINN estimated slope is compared to the PWD slope computed from the full data (fig.8a). The PINN slope is smoother than the PWD one and again takes generally lower values, probably because it has been estimated on less dense data. However, the overall trend of the PINN slope is correct and its smoothness serves its purpose in the plane-wave regularization term. Compared to the PWD slope in fig.8f, which is the realistically achievable slope when solving an interpolation problem of this kind, the PINN slope is a far better and more precise estimate.
The residuals shown in fig.9f and fig.9e, unlike in the synthetic case, are partially due to the field data containing secondary events with conflicting dips that cannot be recovered by our method. We note that this is a general weakness of interpolation methods relying on the plane-wave PDE.
#### 3.2.2 Performance assessment
In this section, the PINNslope framework is tasked with a harder interpolation where fewer traces are available. The aim is to assess its performance on the current dataset and evaluate its limits. Moreover, as subsequent shot gathers in the dataset have only minor changes between them, we test the convergence behaviour of the pre-trained network when applied to the next gathers in the data. We first apply PINNslope to the shot gather subsampled by various factors: 6, 7 and 8 (respectively 160.2 meters, 186.9 meters and 213.6 meters between traces). Finally, we also test the ability of our framework to interpolate a dataset with a large gap of traces (i.e. 15 traces, for a total of 400.5
Figure 9: a) PWD-PINN interpolation result, b) Difference between the original full data (fig.8a) and PWD-PINN result, c) \(f-k\) spectrum of PWD-PINN result, d) PINNslope interpolation result, e) difference between the original full data (fig.8a) and PINNslope result, f) \(f-k\) spectrum of PINNslope result.
meters gap) placed in the middle of the gather. This is a very hard task for the network, which has to rely solely on the information obtainable from the left and right sides of the gather, as in the gap region there is no knowledge of the shape of the arrivals nor of their slope field. So far, we are not aware of any interpolation algorithm that can solve this category of interpolation tasks in an automatic and physically driven manner.
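The decimation and gap scenarios described above amount to masking columns of the gather; a small NumPy sketch (array sizes and the gap position are illustrative, not taken from the dataset):

```python
import numpy as np

def subsample_traces(gather, factor):
    """Keep every `factor`-th trace and zero the rest, simulating a coarser
    receiver spacing (the dropped traces are the ones to be interpolated)."""
    mask = np.zeros(gather.shape[1], dtype=bool)
    mask[::factor] = True
    return np.where(mask[None, :], gather, 0.0), mask

def mask_gap(gather, start, width):
    """Zero a block of `width` consecutive traces, simulating an acquisition gap."""
    gapped = gather.copy()
    gapped[:, start:start + width] = 0.0
    return gapped

rng = np.random.default_rng(0)
gather = rng.standard_normal((100, 60))       # (time samples, traces), 26.7 m apart
dec5, mask5 = subsample_traces(gather, 5)     # 5x decimation -> 133.5 m spacing
gap15 = mask_gap(gather, start=22, width=15)  # 15 missing traces -> 400.5 m gap
```

The boolean mask is also what selects which traces contribute to the data-fitting term of the loss, since only the surviving columns carry observations.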
The performance results are shown in fig.12. For this shot gather, a higher subsampling of 6 and 7 (respectively fig.12a and fig.12b) does not impact the network performance. The signal to noise ratio remains almost constant with respect to the result described in the previous section, and the arrivals are perfectly interpolated. The framework starts to face challenges when the gather is subsampled by a factor of 8 (213.6 meters between traces). In the near offset the interpolation is still accurate, but as the dip of the arrivals starts to increase, PINNslope is unable to retrieve the correct slopes and struggles to interpolate the arrivals. We consider this sampling to be the threshold limit of the framework for this gather (and the frequency range involved). The presented results are obtained by increasing the number of epochs as the subsampling increases (1500 epochs for a subsampling of 6, and 2500 for a subsampling of 8). In the case of the highest subsampling factor, a \(\lambda\) value of 10000 was additionally needed instead of the usual 100.
By enlarging fig.12 at early times (fig.13) we can see the main difference between the original data and the PINNslope interpolation. In fact, most of the events are well reproduced; what is missing is the energy corresponding to the often weaker events with conflicting dips, so the interpolated data looks cleaner than the original. As mentioned before, most algorithms that rely on slope estimation cannot leverage the energy of the second
Figure 11: a) The slope estimated with PWD algorithm from the data shown in 8a. b) The slope estimate through the PINNslope framework while simultaneously interpolating the data in fig.9e, c) the difference between fig.11a and fig.11b.
Figure 10: a) Loss curves of the PWD-PINN training on real data. b) Loss curves of the PINNslope training on real data. In the small box inside the two plots we show the contribution of each term of the loss function in log-scale, that is: data-term (in green), physical-term (in blue) and total loss (in orange).
order dips for their reconstruction unless additional slopes are included in the process.
The result of the gap interpolation (fig.12d and fig.13c) is remarkable. The PINNslope framework is able to extrapolate the main reflections from the left and right and connect them together. Of course, we are aware of the high errors in the middle of the gap compared to the same part of the original data. The algorithm cannot completely restore the missing part as, as mentioned above, it lacks information on the conflicting dips. It is worth mentioning that this experiment has been done to show the interpolation capabilities of neural networks and especially of PINNs. Fig.14 shows the convergence capabilities of the PINNslope network (pre-trained on the gather of fig.9a with a subsampling factor of 5) on the subsequent gathers of the dataset (which have also been subsampled by a factor of 5).
Specifically, the gathers of fig.14e and fig.14f are consecutive to the one used as benchmark until now, while the ones in fig.14g and fig.14h are much more distant. The idea is to use only one pre-training step (on the gather of fig.9a) and a small fine-tuning step to make the network fit the subsequent gathers. As we can see, the number of epochs needed for fine-tuning the network depends on how different the gathers are from the one used for pre-training. Nevertheless, all the gathers have been well reproduced with a small number of epochs.
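The pre-training plus fine-tuning workflow can be sketched as follows; this is our simplified illustration with only a data-fitting term and toy data, not the authors' implementation:

```python
import copy
import torch

def fine_tune(pretrained, coords, obs, epochs=200, lr=1e-3):
    """Warm-start from a network trained on a previous gather and briefly
    refit it on the next gather's available traces."""
    net = copy.deepcopy(pretrained)   # keep the pre-trained weights intact
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.mean((net(coords) - obs) ** 2)
        loss.backward()
        opt.step()
    return net

# Usage: `net0` stands in for the network pre-trained on the benchmark gather;
# `coords`/`obs` stand in for the next gather's coordinates and traces.
torch.manual_seed(0)
net0 = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
coords = torch.rand(128, 2)
obs = torch.sin(4.0 * coords.sum(dim=1, keepdim=True))
tuned = fine_tune(net0, coords, obs)
```

The more the new gather resembles the pre-training one, the fewer epochs this refit needs, which matches the behaviour reported above.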
Figure 12: The performance of PINNslope with high subsampling rates and a particular case where an entire part of the gather is missing: a) the result obtained starting from data subsampled by 6, b) result obtained starting from data subsampled by 7, c) result obtained starting from data subsampled by 8, d) result obtained starting from data that contains a large gap of 15 traces (white hatched lines delimit the gap area).
Figure 13: Zoom in of some of the results in fig.12. a) Zoom in of the original data, b) Zoom in of the result obtained starting from a data subsampled by a factor of 7, c) Zoom in of the result starting from a data that contains a large gap of 15 traces (white hatched lines delimit the gap area).
## 4 Discussion
Out of all the implementations we tested, PINNslope achieves the best performance. It can interpolate the missing data, eliminating the aliasing present in the data, whether related to coarse recordings or to the presence of obstacles during the acquisition. Parameterising the plane-wave PDE in the loss function with a small neural network does not significantly affect the runtime with respect to PWD-PINN. For PWD-PINN and PWLS inversion, a number of pre-processing steps should be performed before being able to compute the local slope used in the PDE regularization term, which is also a time consuming part of these approaches. On the other hand, the PWLS inversion itself is almost instantaneous compared to the PINNs.
All three methodologies present similar drawbacks due to the limitations of the plane-wave approximation. In fact, the slopes estimated with the PWD filters (or any other algorithm that can estimate slopes) and with PINNslope are computed with respect to the main events, and they are not able to retrieve information on the conflicting slopes (slopes that have opposite or different directions) in the data. The PINNslope framework cannot reproduce the events that have a reverse slope with respect to the main trend of the arrivals. Moreover, the estimate of the slope at a data point where two events cross each other will always induce errors in the data fit. Currently, it is not possible to compute the value of two slopes at one single point of the dataset unless we include two slopes in the framework, which will be investigated in the future.
PINNslope has the potential to fill a large gap of traces via a physically driven approach not achievable by any other type of algorithm, while furthermore estimating the full slope field. This example illustrates the interpolation capabilities of this type of implementation. A substantial help to the network's fitting ability derives from the _positional encoding_ layer. In past experiments and implementations, various methodologies have been tested to make the network able to fit complex signals such as field seismic traces. _Locally-adaptive activation functions_ ([26]) have been implemented along with a _Sin()_ activation function ([21]) to allow the network to be more expressive without the need to extend its capacity, but this was still not enough. In addition, a technique named frequency upscaling by means of neuron splitting ([25]) has been tested, achieving good results in fitting complex high frequency signals. One of its drawbacks is that it injects low level noise into the reconstructed result and requires training the network several times to adequately fit the entire frequency content of the dataset. On the other hand, _positional encoding_ solved this issue, allowing the network to
Figure 14: The results of fine-tuning PINNslope on different gathers from the one used to pre-train the network (fig.9a). a) interpolated gather subsequent to the one in fig.9a, b) interpolated gather subsequent to the one in fig.14b, c) interpolated gather at position 100 in the dataset, d) interpolated gather at position 150 in the dataset, e) original gather at position 2, f) original gather at position 3, g) original gather at position 100, h) original gather at position 150.
accurately reproduce the field data and achieve faster convergence, without the need to train several times on the same dataset (as with frequency upscaling by neuron splitting).
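A common form of positional encoding maps each input coordinate to sin/cos features at geometrically increasing frequencies before feeding the MLP; the sketch below is a standard Fourier-feature variant and not necessarily the exact layer used in PINNslope:

```python
import numpy as np

def positional_encoding(coords, num_freqs=8):
    """Map low-dimensional coordinates to sin/cos features at frequencies
    2^k * pi, helping an MLP fit high-frequency oscillations in the traces."""
    feats = [coords]
    for k in range(num_freqs):
        feats.append(np.sin((2.0 ** k) * np.pi * coords))
        feats.append(np.cos((2.0 ** k) * np.pi * coords))
    return np.concatenate(feats, axis=-1)

xt = np.random.rand(4, 2)                 # (x, t) pairs
print(positional_encoding(xt).shape)      # (4, 2 + 2*2*8) = (4, 34)
```

The encoded width (here 34) replaces the raw 2-D input of the network's first layer; the number of frequencies trades fitting power against noise sensitivity.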
## 5 Conclusions
We introduced a novel PINN framework in the seismic signal processing field for simultaneous seismic data interpolation and local slope estimation. It can also be regarded as an innovative procedure for local slope attribute estimation of complex subsurface images. The obtained results are compared, on synthetic and field datasets, against PWD-PINN, which previously showed good results in synthetic data experiments, and against plane-wave regularized least-squares inversion (for the synthetic example), to better examine the PINNs' performance against an approach that does not rely on neural networks to optimize the same objective function.
We found that introducing a second network to estimate the local slope attribute while simultaneously interpolating the aliased data achieves better results in terms of signal to noise ratio, while also improving the overall network convergence. The positional encoding layer was a fundamental addition to the architecture and helped overcome previous difficulties such as high frequency fitting and noise introduction during the interpolation process. The PINN estimated slopes look accurate and consistent with the interpolated data, and their accuracy is comparable to that obtainable with the plane-wave destruction filter estimate.
## Acknowledgments
This publication is based on work supported by the King Abdullah University of Science and Technology (KAUST). The authors thank the DeepWave sponsors for supporting this research.
2307.07521 | Artistic Strategies to Guide Neural Networks | Varvara Guljajeva, Mar Canet Sola, Isaac Joseph Clarke | 2023-07-06T22:57:10Z | http://arxiv.org/abs/2307.07521v1

# Artistic Strategies to Guide Neural Networks
###### Abstract
Artificial Intelligence is present in the generation and distribution of culture. How do artists exploit neural networks? What impact do these algorithms have on artistic practice? Through a practice-based research methodology, this paper explores the potentials and limits of current AI technology, more precisely deep neural networks, in the context of image, text, form and translation of semiotic spaces. In a relatively short time, the generation of high-resolution images and 3D objects has been achieved. There are models, like CLIP and text2mesh, that do not need the same kind of media input as the output; we call them translation models. Such a twist contributes toward creativity arousal, which manifests itself in art practice and feeds back to the developers' pipeline. Yet again, we see how artworks act as catalysts for technology development. Those creative scenarios and processes are enabled not solely by AI models, but by the hard work behind implementing these new technologies. AI does not create a 'push-a-button' masterpiece but requires a deep understanding of the technology behind it, and a creative and critical mindset. Thus, AI opens new avenues for inspiration and offers novel tool sets, and yet again the question of authorship is asked.
## 1 Introduction
Recent advancements in AI, such as the CLIP-based products Midjourney and DALL-E, are claimed to augment our creativity. For the first time, it does not sound so absurd that artists could find themselves out of jobs (Nicholas, 2017). Not that artists have ever had a secure and stable job, but deep learning (DL) tools might eventually cost them some commercial commissions. Such thinking relies on a modern art approach where skills are at the centre of attention, not the conceptual idea. Quoting Lev Manovich: "Since 1970 the contemporary art world has become conceptual, ie focused on ideas. It is no longer about visual skills but semantic skills." (Manovich, 2022) As these new tools advance, the interfaces and techniques become more complex and sophisticated, as our eyes become accustomed to not being easily surprised.
Echoing Aaron Hertzmann, painters once found themselves in a similar situation, when photography was invented and took over the niche of portrait-making. Visual artists then had to re-invent themselves and re-think the meaning of painting. Photography itself had to wait another 40 years before it was recognized as an artistic medium (Hertzmann, 2018). So-called AI artists have faced similar challenges in gaining acceptance within the art world, and even within the digital art niche (Roose, 2022).
Computer art emerged with the invention of the computer. Artists, such as Vera Molnar and Manfred Mohr, created their first computer-generated artworks in the 1960s using scientific lab computers at night when they were not used by scientists. Early computer artists were re-purposing a machine for artistic use and
the question of authorship emerged: is the artist a machine or human?
Today, with the appearance of neural networks (NN) and their creative applications, the same question re-appears. Hertzmann has written several articles arguing that people make art, not computers (Hertzmann 2018; Hertzmann 2020). Manovich also describes how AI-generated images that imitate realist and modernist paintings are claimed to be art (Manovich 2022). At the same time, experimental art forms, like installation, interactive formats, performance and sound art, are often overlooked unless they are promoted by a large corporation. Instead of re-telling the short but very dense history of DL technology development, in the next section we focus on the appearance of the neural network tools that raised interest amongst artists and led to meaningful artwork production.
## 2 Historical overview of DL development
DL is a subset of machine learning (ML) using Deep Neural Networks (DNN) to learn underlying patterns and structures in large datasets. In 2012, a DNN designed by Alex Krizhevsky outperformed other computer vision algorithms to achieve the new state of the art in the ImageNet Large Scale Visual Recognition Challenge (Heravi et al. 2016). This model, AlexNet, signalled the start of a new DL era. As AI technology has developed and become more prevalent in real-world systems, artists have been exploring its limits and potentials, adapting these models to their own practices. As the number of scientific publications on AI grows exponentially it is useful to map out the influential papers, and related applications, to help track the evolution of the AI-Art space in relation to the technological advances (Krenn et al. 2022). Figure 1 shows a timeline of the development of generative models for images and text. Using this diagram we can make a few observations on the past ten years: the dominance of GANs for image generation, the influence of the Transformer on
Large Language Models (LLM), and the growing interest in multi-modal approaches and translation models. The starting period of image generation using DNNs can be traced back to the creation of the Variational Auto-Encoder (VAE) in 2013, and the Generative Adversarial Network (GAN) in 2014 (Kingma and Welling 2013; Goodfellow et al. 2020). These models showed different ways in which a NN can be trained on a large dataset, and then used to generate outputs that resemble but do not copy the original dataset.
For much of the past decade, GAN art has been a dominant and defining element of AI Art. GANs are trained using a competitive lying game, played by two players: the Generator and the Discriminator. The Generator wins by making an image that the Discriminator thinks is from the original dataset. The Discriminator wins
Figure 1: Timeline of creative deep learning development.
by successfully identifying which images the Generator has made. By playing this game repeatedly, both sides slowly learn when they have been fooled and remember information so they don't fall for the same tricks again. The Generator gets better at making images, and the Discriminator gets better at detecting these fakes. At the end of the game we are left with a Generator that is very good at generating new images, with the qualities and style of our original inputs. After the original GAN paper, there was a rush of exploration of this new technique for generating images. Alongside general improvements to the models architecture and stability, new ways of guiding the outputs and applying GANs to specific problems were also explored (Radford et al. 2015; Arjovsky et al. 2017).
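The adversarial game described above can be written as a short training loop; this is a toy PyTorch sketch on 2-D points standing in for images, with illustrative architectures and hyperparameters rather than any published GAN's settings:

```python
import torch

torch.manual_seed(0)

# Generator maps noise to 2-D "samples"; Discriminator scores real vs fake.
G = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
D = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
bce = torch.nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(100):
    real = torch.randn(64, 2) * 0.5 + 2.0          # samples from the "dataset"
    fake = G(torch.randn(64, 8))
    # Discriminator turn: label real as 1, generated as 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator turn: try to make the discriminator label fakes as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

After enough rounds of this game, `G` alone is kept as the generative model, as the paragraph above describes.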
Image-to-Image Translation with Conditional Adversarial Nets (2016), also known as pix2pix, showed a process of converting one type of image into another type (Isola et al. 2017). Mario Klingemann's work _Alternative Face1_ used the pix2pix model with a dataset of biometric face markers and the music videos of the singer Francois Hardy. This allowed him to control the movement of the face with this form of digital puppetry, which he then demonstrated by transferring the facial expressions of the political consultant Kellyanne Conway onto Hardy's face as she talks about "alternative facts".
Footnote 1: [https://underdestruction.com/2017/02/04/alternative-face/](https://underdestruction.com/2017/02/04/alternative-face/)
In 2015, on the Google research blog, the post Inceptionism: Going Deeper into Neural Networks described a tool that attempted to understand how image features are represented in the hidden layers of the NN (Mordvintsev et al. 2015). Alongside this post they released a tool called DeepDream. This model enhances an image with the NN's attempts to find the features of the dataset it was trained on. The creative use of DeepDream was proposed by the authors in the original article: "It also makes us wonder whether neural networks could become a tool for artists--a new way to remix visual concepts--or perhaps even shed a little light on the roots of the creative process in general" (Mordvintsev et al. 2015).
DeepDream's psychedelic imagery quickly caught the attention of the internet and of artists around the world, resonating with those interested in understanding the cross-over between biological and neurological construction of images. Memo Akten's work _All Watched Over By Machines Of Loving Grace2_: Deepdream edition, hallucinated over an aerial photograph of the GCHQ headquarters. This work raises questions around the motivations of the organisations funding the development of AI, and in doing so makes the dreamlike qualities a little more nightmarish.
Footnote 2: [https://www.memo.tv/works/all-watched-over-by-machines-of-loving-grace-deepdream-edition/](https://www.memo.tv/works/all-watched-over-by-machines-of-loving-grace-deepdream-edition/)
In the same year, the paper A Neural Algorithm of Artistic Style introduced a DNN "to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic image" (Gatys et al. 2015). Neural Style Transfer (later known simply as StyleTransfer) takes two inputs, a style image and a content image, it extracts textural information from the style image and compositional information from the content image, then generates an image with minimal distance between the two. The paper demonstrates this with images of a photograph represented in various styles of famous paintings, such as Van Gogh's _The Starry Night_.
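In Gatys et al.'s formulation, the "style" of an image is captured by correlations between feature channels (Gram matrices), while content lives in the spatial layout of the features. A minimal NumPy sketch of this representation (our simplified illustration, not their implementation):

```python
import numpy as np

def gram_matrix(features):
    """Style representation: channel-by-channel correlations of a feature
    map of shape (channels, height, width), discarding spatial layout."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feats_gen, feats_style):
    """Mean squared distance between the Gram matrices of two feature maps."""
    return np.mean((gram_matrix(feats_gen) - gram_matrix(feats_style)) ** 2)
```

Minimizing this style loss together with a content loss on the raw feature maps is what "separates and recombines" style and content in the generated image.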
In 2017, CycleGAN continued with the problem of image-to-image generation shown in pix2pix, but removed the requirement for aligned image pairs in training (Zhu et al., 2017). Instead, a set of source images and a set of target images that are not directly related can be used. The advantage is that it is simpler to scale to larger datasets, making the process more accessible for artists. Helena Sarin has been using CycleGAN for a number of years, and recently in _Leaves of Manifold34_ she collected and photographed thousands of leaves to build her own training dataset, then implemented a custom pipeline with changes that improve results when working with smaller datasets. This personalised approach to crafting the models resonates with the hand-made, collaged aesthetic of the generated images.
Footnote 3: [https://www.nvidia.com/en-us/research/ai-art-gallery/artists/helena-sarin/](https://www.nvidia.com/en-us/research/ai-art-gallery/artists/helena-sarin/)
Footnote 4: [https://twitter.com/NeuralBricolage/status/954027624728354821](https://twitter.com/NeuralBricolage/status/954027624728354821)
Other notable developments to GANs brought improvements to image quality and resolution (Karras et al., 2017; Wang et al., 2018). In late 2018, the release of StyleGAN, a model built on a combination of ideas from Style Transfer and PGGAN, demonstrated very convincing images of human faces (Karras et al., 2019). In his article "How to recognize fake AI-generated Images", the artist Kyle McDonald investigated the images generated by StyleGAN, and highlighted the visual artefacts he found (McDonald, 2018). At a glance these images look like photographs, but on closer inspection irregularities such as patches of straight hair, misaligned eyelines, or mismatched earrings reveal the difficulties GANs have in managing "long-distance dependencies" in images.
In 2017 the paper Attention Is All You Need proposed a new network architecture called the Transformer (Vaswani et al., 2017). This model addressed the long-distance dependency issue in RNNs and CNNs by re-thinking how sequences are handled. Rather than looking at a sentence word by word, the Transformer observes the relationships between all elements of the sequence simultaneously. Being able to better handle long distance dependencies meant the Transformer was appropriate for natural language generation. Artists have explored the use of VAEs for short text generation, but with the emergence of LLMs, passages of long, coherent text could be generated (Brown et al., 2020). As dataset sizes increased, along with the hardware costs of training these large models, they have become harder for individuals to train themselves, and the mode of interaction has shifted from curated datasets and homemade scripts to web APIs and third party services. While it is more difficult to participate in the training process, the availability of services and interfaces provides new ways of working with these models that can produce less technical and more playful approaches. For example, Hito Steyerl used GPT-3 to create Twenty-One Art Worlds: A Game Map and described the process as "fooling around" with GPT-3 to write descriptions of different Art Worlds (Steyerl, 2022). In the resulting text it is difficult to distinguish which words may have been written by Steyerl and which were written by GPT-3.
The learnings from LLM for text generation were soon applied to image generation (Image GPT, Vision Transformer), and the simultaneous release of CLIP and DALL-E in January 2021 signalled the start of a new era of image generation (Chen et al., 2020; Dosovitskiy et al., 2020). Although the DALL-E model was not released, CLIP was made available to the public, and the model was quickly adopted by AI artists who applied the idea of CLIP guidance to various image generation techniques. Ryan Murdock produced the colab notebooks DeepDaze5 (combining CLIP and SIREN) and BigSleep6 (CLIP and BIGGAN), which were subsequently adapted by Katherine Crowson in the widely distributed VQGAN+CLIP7 notebook.
Footnote 5: [https://github.com/lucidrains/deep-daze](https://github.com/lucidrains/deep-daze)
Footnote 6: [https://github.com/lucidrains/big-sleep](https://github.com/lucidrains/big-sleep)
Footnote 7: [https://github.com/EleutherAI/vqgan-clip](https://github.com/EleutherAI/vqgan-clip)
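CLIP guidance, as used in these notebooks, is at heart an optimization loop: repeatedly nudge a generator's latent so that the embedding of the generated image moves closer to the embedding of the text prompt. Below is a toy sketch of that loop with random linear stand-ins for the generator and CLIP's encoders; the dimensions, stand-in matrices, and numerical gradients are illustrative assumptions, not the real models.

```python
import numpy as np

# Toy CLIP-guidance loop: maximize cosine similarity between the
# "image embedding" of a generated latent and a fixed "text embedding".
# G and E are random linear stand-ins for the generator and image encoder.
rng = np.random.default_rng(0)
D = 16
G = 0.1 * rng.normal(size=(D, D))          # stand-in generator: latent -> image
E = 0.1 * rng.normal(size=(D, D))          # stand-in encoder: image -> embedding
text_emb = rng.normal(size=D)
text_emb /= np.linalg.norm(text_emb)       # stand-in text-prompt embedding

def similarity(latent):
    v = E @ (G @ latent)                   # embed the "generated image"
    return float(v @ text_emb / (np.linalg.norm(v) + 1e-9))

latent = rng.normal(size=D)
start = similarity(latent)
lr, eps = 0.5, 1e-4
for _ in range(300):                       # gradient ascent on the similarity
    grad = np.zeros(D)
    base = similarity(latent)
    for i in range(D):                     # cheap numerical gradient
        pert = latent.copy()
        pert[i] += eps
        grad[i] = (similarity(pert) - base) / eps
    latent += lr * grad

final = similarity(latent)
```

In the real notebooks the generator is SIREN, BigGAN, or VQGAN, the encoder is CLIP, and the gradients come from backpropagation rather than finite differences, but the structure of the loop is the same.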
The paper Denoising Diffusion Probabilistic Models introduced a different method for creating generative models (Ho et al., 2020). This technique trains a model by adding increasing amounts of noise to an image and then having the model remove the noise, resulting in a model that can generate images from only noise. Diffusion models, when combined with CLIP or other conditioning processes, enable much faster text-to-image processing. The popularity and accessibility of these techniques was further raised by the release of
DALL-E 2 and Midjourney in 2022. Midjourney became so popular that it is now the largest Discord server, with over 5 million members. Following the releases of these products, open-source models such as Stable Diffusion have also been developed. There are many benefits to using free and open-source models for artists. Being able to modify the code and develop your own software allows the artist to pursue their own experimental approaches, not restricted to the interface designed by a service provider.
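The noising-and-denoising training idea can be stated in a few lines: the forward process blends an image with Gaussian noise according to a variance schedule, and the network is trained to predict that noise; a perfect noise predictor lets you invert the blend exactly. The sketch below uses a stand-in vector for the image and an oracle in place of the trained network; the schedule values follow the common linear choice but are otherwise illustrative.

```python
import numpy as np

# DDPM-style forward process on a stand-in "image" x0.
rng = np.random.default_rng(1)
T = 1000
betas = np.linspace(1e-4, 0.02, T)         # linear variance schedule
alphas_bar = np.cumprod(1.0 - betas)       # cumulative signal-retention factor

x0 = rng.normal(size=8)                    # stand-in image
t = 500                                    # a middle timestep
eps = rng.normal(size=8)                   # the Gaussian noise to be predicted

# forward (noising) step: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps
x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

# an oracle noise predictor (what training approximates) recovers x0 exactly
x0_hat = (x_t - np.sqrt(1.0 - alphas_bar[t]) * eps) / np.sqrt(alphas_bar[t])
```

Training replaces the oracle with a network minimizing the error between predicted and true noise; sampling then runs this inversion step by step from pure noise, which is why these models can generate images "from only noise".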
The artist's involvement in generating new images with these models is vastly different from working with GANs. Rather than building custom datasets and training models, the focus has shifted to writing prompts that can generate the images the artist wants to find, and to designing interfaces for exploring these prompts and their translations. The artist Johannez coined the term Promptism to describe his art practice, and wrote a humorous Promptism manifesto using GPT-3. Against a backdrop of models trained on hundreds of millions of images scraped from the internet, including many artists' portfolios, the manifesto asserts "The prompt must always be yours" (Johannez 2022).
## 3 Artist-Guided Neural Networks
Many papers discuss AI from the point of view of creativity, mostly taking one of two positions: either AI is an amazing tool for artists and creativity, or AI is something negative in art. It is easy to see that people from industry advocate for the first position, and theory scholars for the second. But how do practitioners themselves see contemporary AI technology? And in which ways is AI deployed in art practice? It is therefore not the focus of this paper to discuss whether AI can make art, but rather how AI can be useful for artists and what new ideas it can offer. Using a practice-based research methodology, we decode the role of AI tools in artistic practice and trace the evolution of such artistic work. In this paper, the practice of the artist duo Varvara & Mar was used as a case study, which provided us with the insights for this research. We divide the case studies into four categories based on medium: synthetic image, synthetic text, synthetic form, and translation models. From the view of the practitioner, the limitations, new possibilities, and changes in production processes are discussed.
Figure 2: A single still image from the VR 360° video _Neural Landscape_ (2017). ©Varvara & Mar.
### Synthetic Image
Our DL exploration began in 2017 with Google DeepDream, focusing on image generation (Fig.2). The concept behind the _Neuronal Landscapes8_ project was to imagine what the Estonian landscape will look like in 100 years' time (a commission for the Estonian History Museum). Through synthetic vistas created by machines, the artwork offers a glimpse into the environment from a machine's perspective, immersing viewers in a hallucinated neural-net simulacrum. To depict the evolution of Estonian society over time, from forests and farmlands to urbanization and digitalization, a 360° VR video was created. Filmed with two drone-mounted 360° cameras, the footage was edited and processed using DeepDream. The rendering process spanned 30 days on powerful machines with Nvidia TitanX GPUs. While some customization was possible, the algorithm's aesthetic footprint remained prominent.
Footnote 8: [https://var-mar.info/neuronal-landscapes/](https://var-mar.info/neuronal-landscapes/)
In the next art project, ProGAN was deployed; for the first time we worked with datasets and trained GAN models. _Plasticland9_ (2019) addresses plastic waste and the ecological problems this material causes (Fig.3). We composed four datasets of images of layered plastics on our planet: landfills, plastic on top of water, plastic underwater, and plastiglomerates. The ProGAN model was trained on a local machine using PyTorch, which took a week, and the artists used a selection of generated images to create a video composition. With a metal totem displaying those synthetic, as plastic is, layers, we draw attention not only to the problem of waste but also question whether AI has some similarity with this material. Since the invention of plastic, the material was applied almost everywhere because of its perfect qualities, until we realised that it is neither sustainable nor eco-friendly. Will a similar story happen with AI? From the practice-based research perspective, this work shows the artists' desire to move from still to moving images and towards sculptural form, held back by the early stage of machine learning technology: low-resolution images jumping from one frame to another.
Footnote 9: [https://var-mar.info/postcard-landscapes-from-lanzarote/](https://var-mar.info/postcard-landscapes-from-lanzarote/)
The next artworks _POSTcard Landscapes from Lanzarote I_ (00:18:37) and II (00:18:40)10 in 2021 demonstrate
Figure 3: Left: installation view of _Plasticland_ (2019). Right: an AI-generated image from the dataset of plastic under the water. _Plasticland_ (2019). ©Varvara & Mar.
the artists' ability to create video works with StyleGAN2 (Fig.4). The hypnotic appearance of these works, where one frame morphs naturally into another, shows the artists' skill in guiding the outputs of the neural network. Vector curation and the composition of a journey through the latent space, created by training the model on specific datasets of 2000+ images, were crucial and integral parts of the artistic process. The artwork addresses critical tourism and how the circulation of images representing the touristic gaze overpowers the nature of seeing. In the words of Jonas Larsen, "'reality' becomes touristic, an item for visual consumption" (Larsen, 2006). Hence, we scraped, where the licence allowed, location-tagged images from Flickr and composed two datasets of photos categorised as tourism or landscape. As we have written earlier: "The two videos are random walks in the latent space of the Stylegan2 trained models, creating a cinematic synthetic space. The audiovisual piece shows an animated image through the melted liquid trip of learning acquired from the dataset composed of static images. The video flows from point to point, generating new views and meaning spaces through the latent space's movement. The audio was created after the video was generated in response to the visual material to complete the art piece." (Guljajeva and Canet Sola, 2022). The sound for the local, or landscape, view was created by a sound artist from Lanzarote, Adrian Rodd, who aimed to give a socio-political voice to the piece. In contrast, the sound design created by Taavi Varn is a soundscape replying to the touristic gaze. The artists aimed to initiate collaborations with others but also to experiment with human-AI co-creation. In a similar vein is the artwork Phantom Landscapes of Buenos Aires (00:20:00, 2021), with sound work by Cecilia
Figure 4: Single still images from the two AI-generated videos _POSTcard Landscapes from Lanzarote LII_ (2020). ©Varvara & Mar.
Figure 5: _ENA_ (2020). Left: ENA Book with all conversations. Right: Screenshot of the website app in the Theatre Lliure installed during May 2020 during Covid lock down times. ©Varvara & Mar.
Castro.
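The "journey through the latent space" behind these videos can be sketched as interpolation between curated latent vectors: each intermediate latent would be decoded by the trained model into one frame, which is what makes one frame morph naturally into the next. A minimal sketch with plain linear interpolation (the decoding step of StyleGAN2 itself is omitted; the vector sizes are illustrative):

```python
import numpy as np

def latent_walk(z_start, z_end, n_frames):
    """Latents along a straight path between two curated points
    of the latent space; each one would be decoded into a frame."""
    ts = np.linspace(0.0, 1.0, n_frames)
    return np.stack([(1.0 - t) * z_start + t * z_end for t in ts])

# five latent "frames" between two curated latent vectors
frames = latent_walk(np.zeros(4), np.ones(4), 5)
```

Chaining many such segments through a sequence of curated latents yields the continuous, loopable camera path through the model's learned image space.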
Our last experiment with GAN models, _Synthetic-scapes of Tartu_ (00:10:00, 2022), demonstrates a different approach. Taking a dataset composed from our own video footage (flâneur walks), we first produced the sound (a composition by Taavi Varm and Ville MJ Hyvonen, with piano by J. Kujampai) and used this to inform the direction of the video. The result was a sound-guided, AI-generated visual output.
### Synthetic Text
In this section, we focus on artwork incorporating AI text generation as part of the artistic concept. Our journey to text generation started with the online participative theatre project _ENA11_ and ended with a hand-bound publication (Fig.5).
Footnote 11: [https://var-mar.info/ena/](https://var-mar.info/ena/)
During the first lockdown in May 2020, together with theatre maker Roger Bernat, we created an online participative theatre piece, _ENA_, on the website of the Theatre Lliure in Barcelona. _ENA_ is a generative chatbot that talks to its audience, and together (AI and audience) they make theatre. As we have described before: "Although in the description of the project it was stated explicitly that people were talking to a machine, multiple participants were convinced that on the other side of the screen another human was replying to them--more precisely the theatre director himself, or at least an actor." (Guljajeva and Canet Sola 2021).
Analysing synthetic books, Varvara Guljajeva has stressed the importance of human input in the AI text-generation systems (Guljajeva 2021). In addition, one also needs to guide the audience participation and interaction with the chatbot. For this purpose, we have adopted the traditional theatre method for guiding actors, as a way to guide the audience, and thus, the bot, too. Stage directions were used as a guiding method, which triggered thematic conversation and offered meaningful dialogue between humans and the AI system. We found the conversations so meaningful that we decided to publish a book that contains all the conversations with _ENA_.
With this project, we learned that it is essential to guide neural networks via audience interaction. In order to do this, it is also necessary to guide the audience. Without audience interaction guidance, it is nearly impossible to achieve meaningful navigation of neural networks.
### Translation models
This category focuses on translation models that enable interactive and installation-based formats. Translation here refers to the conversion of mediums, or as we put it, the translation of semiotic spaces. To illustrate this, we introduce _Dream Painter12_, an art installation that translates the audience's spoken dreams into a line drawing produced by a robot (Fig.6). As described earlier: "_Dream Painter_ is an interactive robotic art installation that explores the creative potential of speech-to-AI-drawing transformation, which is a translation of different semiotic spaces performed by a robot. We extended the AI model CLIPdraw, which uses a CLIP encoder and the differentiable rasterizer diffvg, for transforming the spoken dreams into a robot-drawn image." (Canet Sola and Guljajeva 2022). "Design- and technology-wise, the installation is composed of four larger parts: audience interaction via the spoken word, AI-driven multi-colored drawing software, control of an industrial robot arm, and a kinetic mechanism that makes the paper progress after each painting has been completed. All these interconnected parts are orchestrated into an interactive and autonomous system in the form of an art installation [...]." (Guljajeva and Canet Sola 2022). Of all the projects discussed, this was the most difficult to realise, because of the large scale of the artwork and the multiple parts of software and hardware that need to run automatically and synchronously.
Footnote 12: [https://var-mar.info/dream-painter/](https://var-mar.info/dream-painter/)
In this project we investigated how the guidance of neural networks could be interactive and real-time instead of non-interactive and pre-determined, as in the previous examples of our work. It is important to note that methods such as dataset composition and output curation were not used in this case; in fact, visual output curation is entirely absent. The artists created an interactive system to be experienced and discovered by the audience, which means the audience determines the output. Instead of curating a dataset, a CLIP model is used that can produce nearly real-time output guided by a text prompt. As we have written earlier: "Translation of semiotic spaces, such as spoken dreams to AI-generated robot-drawn painting, allowed us to deviate from image-to-image or text-to-text creation, and thus, imagine different scenarios for interaction and participation." (Guljajeva and Canet Sola 2022a).
This project indicates our search for transformative outputs of AI technology and thus shows the evolution of the practice. By extending available DL tools and combining them with other technologies, for example speech-to-text models, real-time industrial robot control, and physical computing, it offered an interactive robotic and kinetic experience of neural-network latent-space navigation. This contributes towards the explainability of AI, because the audience could experience how the words affected the drawing and which concept triggered which outcome.
Inspired by Sigmund Freud's work on the interpretation of the unconscious human mind, we speculatively ask if AI is powerful enough to understand our dreamworld. Through practice we question the capacities of neural networks and investigate how far we can push this technology in the art context. This artwork allows the audience to experience the limits of concept-based navigation with AI. The system is unable to interpret our dreams and can only illustrate them; it cannot understand the prompt semantically and only picks up the concepts.
Figure 6: Kuka industrial robot painting audience’s dreams. Installation view of Dream Painter (2021). ©Varvara & Mar.
### Synthetic Form
In this section, we ask how artists can guide neural networks when creating volumetric forms, and what happens when AI meets materiality. After working for a while with DL tools that produce 2D outputs, exploring possibilities to produce 3D results was an obvious next step. To our surprise, finding a solution was not an easy task (Oct 2021). _Psychedelic Forms_ is a series of sculptures produced in ceramics and recycled plastic through which we investigated the possibilities of AI in producing physical sculptures. The project re-interprets antique culture in contemporary language and tools (Guljajeva and Canet Sola 2023). Following the same paradigm shift as in the previous section, text2mesh is a CLIP-based model that does not require a dataset, but takes a 3D object and a text prompt as input (Michel et al. 2022). Hence, the model does not actually create a 3D model but stylises the inserted one, guided by the inputted text.
We decided to go back to the origins, in terms of ancient sculptures and material selection. Although it was said that no dataset was needed, we still had a collection of 3D models of ancient sculptures, because by far not all of them produced a desirable output. In this sense, output curation was definitely present in the process. The criteria for selection were the following: first, the form had to be intriguing, and second, it should be possible to produce it in material afterwards. It was clear that we had to modify each model, because the physical world has gravity and the DL model does not take this into account. Some generated models were discarded because they were seen as unfixable, although interesting in their shape.
The process demonstrated here is quite an unusual way to create an object. After extensive experimentation
Figure 7: Ceramic sculpture guided by 3D object and text prompt, 3D printed in clay, and glazed manually. This piece belongs to the series Psychedelic Forms (2022). ©Varvara & Mar.
with the tool, we learned how certain words triggered certain shapes and colours. This knowledge gave us the chance to treat text prompts as poetic input. Thus, we created short poems to guide the NN. The best ones survived as titles and are reflected in the forms. The artists did not strictly follow the original model but took the creative liberty to modify the shape and determine the colour by manually glazing the sculptures. The dripping technique was used for colouring the sculptures. This served as a metaphor for the liquid latent space and the psychedelic production process (this was the artists' inner feeling about the creative process, because they did not know what results would be achieved in the end). Sometimes AI-generated vertex colouring was taken as inspiration, sometimes totally ignored. Nevertheless, the digital sculptures were exhibited alongside the physical ones to underline the transformation and the human role in the creative process. Although the ceramic sculptures were 3D printed in clay, the fabrication process had to follow the traditional way of producing pottery (Fig.7). Since the artists had never engaged in ceramics before, the whole production process felt psychedelic: unexpected neural network processes led to transformation by numerical, physical, and chemical processes, all guided by both the artists and chance. Hence, the art project highlights the relationship between different agencies.
In the end, we can say that AI is not prepared for the physical world. It creates nice images, but when one wants to materialise the output, considerable additional work is required. However, those extra processes were very rewarding and creative in our case. In this project, AI served as an inspiration or a point of departure more than anything else. In other words, the experimental phase of a technology is necessary for experimental practices, and this can lead to the creation of a new production pipeline. The fine line between control and chance when guiding the neural networks and related processes is likely the main creative drive for the artists.
## 4 Discussion
According to the media hype around AI, this technology is intelligent enough to create art autonomously (Perez 2018; Vallance 2022). However, the reality is different. According to Luc Julia, a computer scientist and co-inventor of Siri, AI does not exist. He advocates instead for machines' multiple intelligences, which often outperform humans; yet machine intelligence is limited and discontinuous compared to human intelligence (Julia 2020). Therefore, it is vital to have artistic practices around this technology as a counterbalance to the AI fantasies served by industry and the mass media.
We see AI as a creative tool with its own possibilities and limitations, which can stimulate artists' creativity through unexpected outputs. Research has shown that tool-making expands human cognitive capacities and constitutes cultural evolution (Stout 2011; Stout 2016). Similarly, as a new tool, generative AI could potentially enrich creativity by enabling new production pipelines that can create unique results.
Coming back to the synthetic images, we can say that all the machine-created, image-based works discussed here have particular aesthetics, with both DeepDream and GANs. Unlike the output of GANs, DeepDream has a more recognizable style and can be seen more as a filter that transforms every inputted image rather than learning from a given dataset. Regarding GAN aesthetics, the visual appearance is inherited to a large extent from two entities: the dataset and the model itself. GANs have a particular footprint, as seen in all the works produced with this model, while the visual palette comes from the datasets used. For example, if a dataset is homogeneous (only landscape images), then we will easily recognize landscapes in the generated output. However, if the images in the dataset have a lot of visual variation, the output is rather abstract. _POSTcard Landscapes from Lanzarote II_ illustrates this well. Also, when the photos in the dataset look similar, the output will be similar too, as was the case with the _Synthetic-scapes of Tartu_ video work, where frames were extracted from recorded flâneur walks in a city. As for the video works generated with the neural net, manual guidance of the latent space offered more variation than the audio-led approach.
Synthetic image works have encouraged us to work with formats like images and videos that we had not engaged with before in our art practice, and we found working with AI and video exciting. For example, AI video generation has some affordances: the start and end can form a perfect loop, since the images are synthetically generated. However, creating real-time AI work is much more complex, because some models are too slow; it might take a few minutes to render a single image. These limitations inspire us to devise new solutions and work in new mediums. Moreover, the limitations of a medium have always been a good challenge for our creativity.
Working with GANs or other image-generation tools has become much easier in recent years, although it used to be quite difficult. We must note that for practitioners, easy-to-use tools such as DALL-E and Midjourney offer little creative freedom and are thus less attractive to artists. Those products tend to instrumentalize the user rather than the other way around. At the same time, open-source models offer more creative freedom and enable a broader use of artistic ideas.
The work with generated text demonstrates that AI is not context-aware but maps concepts automatically without understanding semantics. More importantly, as shown in the _ENA_ project, the audience must be guided alongside the AI. In the case of _ENA_, stage directions were used, and in the Dream Painter project, the concept of dream-telling was applied to guide the participants, who in turn guided the neural net through their interaction, creating a chain reaction. Navigating concepts in latent space is artistically interesting and inspiring; this was especially evident when working with form, where the artists went beyond semantics and learned how to guide neural networks with a text prompt and a 3D object.
The presented practice represents a paradigm shift in machine learning, moving away from composing datasets for GANs and toward translating semiotic spaces enabled by diffusion models. The evolution in practice shows how artists discover and learn to work with the DL toolset, embracing its possibilities and limitations. In the case of practice-based research, practice can be seen as a lab for testing artistic ideas with technology through chance until control is encountered.
## 5 Conclusion
In this article, we have summarised DL development from the perspective of artists' interests, concentrating on image, video, text, and 3D object generation, as well as translation models. We applied a practice-based research methodology to investigate the role and possibilities of recent co-creative AI tools in artistic practice.
It is difficult to keep pace with AI development. In less than a decade, we have gone from blurry black-and-white faces to impressive high-resolution images guided by text prompts. The user level has gone from difficult to easy, which on one side broadens the possibilities for creation, but on the other diminishes experimentation and creativity, since AI outputs seem ready-made. This is also demonstrated by the explorative nature of the body of work presented here.
Furthermore, it was noticed that creative AI, especially GAN models, has recognizable aesthetics, which in the long run become repetitive. This led the artists to change tools. The curation of datasets, models, and outputs, along with neural network guidance, has become the toolset of an artist working with AI. Finally, these models can generate multitudes of outputs, but the art lies in giving the right input to guide towards the desired output and in selecting the results that best serve the concept.
As Andy Warhol had envisioned in 1963, eventually, art production will become mechanised and automated (Sichel 2018). In his own words: "I want to be a machine" (Bergin 1967), which was also a reflection on that time's vast industrialization process. Resonating with today's deep learning age: I want my machine to do art.
## Author contributions, acknowledgments and funding
MSC is supported as a CUDAN research fellow and ERA Chair for Cultural Data Analytics, funded through the European Union's Horizon 2020 research and innovation program (Grant No.810961).
|
2308.08097 | S-Mixup: Structural Mixup for Graph Neural Networks | Existing studies for applying the mixup technique on graphs mainly focus on
graph classification tasks, while the research in node classification is still
under-explored. In this paper, we propose a novel mixup augmentation for node
classification called Structural Mixup (S-Mixup). The core idea is to take into
account the structural information while mixing nodes. Specifically, S-Mixup
obtains pseudo-labels for unlabeled nodes in a graph along with their
prediction confidence via a Graph Neural Network (GNN) classifier. These serve
as the criteria for the composition of the mixup pool for both inter and
intra-class mixups. Furthermore, we utilize the edge gradient obtained from the
GNN training and propose a gradient-based edge selection strategy for selecting
edges to be attached to the nodes generated by the mixup. Through extensive
experiments on real-world benchmark datasets, we demonstrate the effectiveness
of S-Mixup evaluated on the node classification task. We observe that S-Mixup
enhances the robustness and generalization performance of GNNs, especially in
heterophilous situations. The source code of S-Mixup can be found at
\url{https://github.com/SukwonYun/S-Mixup} | Junghurn Kim, Sukwon Yun, Chanyoung Park | 2023-08-16T02:08:46Z | http://arxiv.org/abs/2308.08097v1 |

# S-Mixup: Structural Mixup for Graph Neural Networks
###### Abstract.
Existing studies for applying the mixup technique on graphs mainly focus on graph classification tasks, while the research in node classification is still under-explored. In this paper, we propose a novel mixup augmentation for node classification called Structural Mixup (S-Mixup). The core idea is to take into account the structural information while mixing nodes. Specifically, S-Mixup obtains pseudolabels for unlabeled nodes in a graph along with their prediction confidence via a Graph Neural Network (GNN) classifier. These serve as the criteria for the composition of the mixup pool for both inter and intra-class mixups. Furthermore, we utilize the edge gradient obtained from the GNN training and propose a gradient-based edge selection strategy for selecting edges to be attached to the nodes generated by the mixup. Through extensive experiments on real-world benchmark datasets, we demonstrate the effectiveness of S-Mixup evaluated on the node classification task. We observe that S-Mixup enhances the robustness and generalization performance of GNNs, especially in heterophilous situations. The source code of S-Mixup can be found at [https://github.com/SukwonYun/S-Mixup](https://github.com/SukwonYun/S-Mixup).
Graph Neural Networks, Mixup, Node Classification
The performance gain stems from accurately classifying high-homophily nodes. However, we observe a performance drop for nodes with a relatively low homophily ratio compared to the vanilla GCN baseline. This suggests that the nodes newly generated through mixup do not significantly contribute to smoothing decision boundaries and achieving generalizability. Moreover, given that nodes with a low homophily ratio comprise around 1/3 of all nodes (33.3% in Cora and 35.7% in Citeseer), their relatively low performance should not be ignored. This becomes critical in real-world scenarios, where disassortative nodes inevitably emerge within assortative graphs, especially when such nodes are linked to malicious activities like fraud or bots (Han et al., 2017; Li et al., 2018) that negatively affect their neighbors. Hence, a balanced approach that prevents bias towards high-homophily nodes, considering structural information, is vital for optimal performance.
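The node-level homophily ratio discussed above (for each node, the fraction of its neighbors sharing its label) can be sketched as follows; the toy edges and labels are illustrative, not data from the paper:

```python
from collections import defaultdict

def node_homophily(edges, labels):
    """Node-level homophily ratio: for each node with at least one neighbor,
    the fraction of its neighbors that share its label."""
    nbrs = defaultdict(list)
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    ratio = {}
    for node, ns in nbrs.items():
        same = sum(labels[n] == labels[node] for n in ns)
        ratio[node] = same / len(ns)
    return ratio

# Toy graph: node 0 is surrounded by same-label neighbors (high homophily),
# node 3 by different-label neighbors (low homophily).
edges = [(0, 1), (0, 2), (3, 1), (3, 4)]
labels = {0: "a", 1: "a", 2: "a", 3: "b", 4: "a"}
r = node_homophily(edges, labels)
```

Grouping nodes by this ratio is what exposes the performance gap described above.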
Furthermore, existing methods such as GraphMix and Manifold Mixup lack _the ability to manage newly generated nodes_ in terms of their structural information. These methods primarily focus on obtaining representations of mixed nodes without considering their local neighborhood context. As a result, they resort to replacing existing nodes with newly generated ones instead of adding them to the graph and connecting them to existing nodes. This approach loses the opportunity to utilize the original features of the given nodes and leverage message-passing through the newly generated edges.
In this regard, with the goal of properly utilizing structural information, we propose a novel mixup method for node classification, called Structural Mixup (S-Mixup), which equips the ability to connect the relevant edges to the newly generated nodes that seamlessly align with the current graph. More precisely, we first pass the original graph through the GNN classifier to obtain two key components that enable the mixup process while incorporating structural information: pseudo-labels with prediction confidence and edge gradients. We utilize pseudo-labels with prediction confidence to expand the candidate pool for node mixup. Specifically, nodes with high and low prediction confidence are selected for the intra-class mixup, while nodes with medium confidence are used for the inter-class mixup. We then utilize the edge gradients to identify edges with high gradient values. The newly generated nodes are connected to the existing nodes and are passed through the GNN classifier. Extensive experiments illustrate that S-Mixup outperforms existing mixup-based GNNs in node classification task.
## 2. Methodology
**Notations.** Given a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})\), let \(\mathcal{V}=\{v_{1},...,v_{N}\}\) denote the set of nodes, \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) denote the set of edges, and \(\mathbf{X}\in\mathbb{R}^{N\times F}\) denote the node feature matrix. We use \(\mathcal{C}\) to denote the set of classes of nodes in \(\mathcal{G}\). We define \(\mathbf{A}\in\mathbb{R}^{N\times N}\) as the adjacency matrix where \(\mathbf{A}_{ij}=1\) iff \((v_{i},v_{j})\in\mathcal{E}\) and \(\mathbf{A}_{ij}=0\) otherwise. Our main goal is to enhance performance in a node classification task.
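As a small illustration of the adjacency-matrix notation above (the edge list is hypothetical):

```python
import numpy as np

def adjacency(num_nodes, edges):
    """Symmetric adjacency matrix: A[i, j] = 1 iff (v_i, v_j) is in E."""
    A = np.zeros((num_nodes, num_nodes))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return A

# A 4-node path graph: 0 - 1 - 2 - 3
A = adjacency(4, [(0, 1), (1, 2), (2, 3)])
```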
**Our approach.** We propose a novel mixup method, Structural Mixup (S-Mixup), that incorporates structural information within newly generated nodes. We first revisit vanilla GNN to obtain two key outputs, pseudo-labels with prediction confidence and edge gradient, in Section 3.1. Then, we describe how we expand the mixup pool through the prediction confidence and conduct inter-class and intra-class mixups in Section 3.2. With edge gradients obtained via a vanilla GNN, we then demonstrate how we select edges of the newly generated nodes in Section 3.3. Finally, we propose an overall training process of S-Mixup in Section 3.4. The overall architecture of S-Mixup is illustrated in Figure 2.
### Revisiting Graph Neural Networks
Before we perform node mixup and edge selection, we first revisit the vanilla GNN to obtain key outputs that will play a significant role in the following sections. Specifically, we pass a graph through a conventional two-layer Graph Convolutional Network (Gan et al., 2017) as: \(\hat{\mathbf{Y}}=\text{softmax}(\hat{\mathbf{A}}\sigma(\hat{\mathbf{A}}\mathbf{X}\mathbf{W}^{1})\mathbf{W}^{2})\), where \(\hat{\mathbf{Y}}\in\mathbb{R}^{N\times|\mathcal{C}|}\) denotes the prediction probability, and \(\hat{\mathbf{A}}=\tilde{\mathbf{D}}^{-1/2}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-1/2}\) denotes the transition matrix, where \(\tilde{\mathbf{A}}\) is the adjacency matrix with self-loops and \(\tilde{\mathbf{D}}\) is its diagonal degree matrix. \(\sigma\) is the ReLU activation function, and \(\mathbf{W}^{1}\in\mathbb{R}^{F\times D}\) and \(\mathbf{W}^{2}\in\mathbb{R}^{D\times|\mathcal{C}|}\) denote trainable weight matrices that map features into the hidden embedding space of dimension \(D\), and the hidden embedding space into the class space \(|\mathcal{C}|\), respectively. Here, we define the one-hot transformed _pseudo-labels_ of the nodes as \(\tilde{\mathbf{Y}}=\text{one-hot}(\text{argmax}(\hat{\mathbf{Y}}))\in\mathbb{R}^{N\times|\mathcal{C}|}\), and the _prediction confidence_ of each node as \(\mathbf{y}_{\text{conf}}=\text{max}(\hat{\mathbf{Y}})\in\mathbb{R}^{N}\), where \(0\leq\mathbf{y}_{\text{conf}}\leq 1\) due to the softmax operation. These will later serve as criteria for node mixup. The node classification loss is then calculated as: \(\mathcal{L}_{\text{ce}}=-\sum_{v\in\mathcal{V}_{tr}}\sum_{c\in\mathcal{C}}\mathbf{y}_{v}[c]\log(\hat{\mathbf{y}}_{v}[c])\), where \(\mathcal{V}_{tr}\) denotes the set of training nodes and \(\mathbf{y}_{v}[c]\) is either 1 or 0 depending on the value in the \(c\)-th index of the one-hot class vector of training node \(v\).
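A dependency-light numpy sketch of this forward pass and the derived pseudo-labels and confidences (random toy inputs, illustrative only, not the paper's implementation):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN: Y_hat = softmax(A_norm @ relu(A_norm @ X @ W1) @ W2),
    where A_norm is the symmetrically normalized adjacency with self-loops."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))   # D^{-1/2} A_hat D^{-1/2}
    H = np.maximum(A_norm @ X @ W1, 0.0)       # ReLU
    return softmax(A_norm @ H @ W2)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = rng.normal(size=(3, 4))
Y_hat = gcn_forward(A, X, rng.normal(size=(4, 8)), rng.normal(size=(8, 2)))

pseudo = Y_hat.argmax(axis=1)   # pseudo-labels
conf = Y_hat.max(axis=1)        # prediction confidence, in (0, 1]
```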
During backpropagation with gradient descent on this loss, we can naturally obtain gradients with respect to the transition matrix \(\hat{\mathbf{A}}\) by applying the chain rule, which expands through both propagation layers of the GCN:

\[\frac{\partial\mathcal{L}_{\text{ce}}}{\partial\hat{\mathbf{A}}}=\frac{\partial\mathcal{L}_{\text{ce}}}{\partial\hat{\mathbf{Y}}}\cdot\frac{\partial\hat{\mathbf{Y}}}{\partial\hat{\mathbf{A}}} \tag{1}\]

It is important to note that while the adjacency matrix does not possess trainable edge weights and thus maintains a static state, its gradient does change with each epoch due to the modifications of the trainable weight parameters \(\mathbf{W}^{1}\) and \(\mathbf{W}^{2}\). Here, we define the _edge gradient_ for each edge as \(e_{ij}=\left|\frac{\partial\mathcal{L}_{\text{ce}}}{\partial\hat{\mathbf{A}}_{ij}}\right|,\ \forall i,j\leq N\), which will later serve as the criterion for edge selection.
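To make the edge gradient concrete, the sketch below estimates \(e_{ij}\) for a toy graph by central finite differences on the loss; in practice an autograd framework computes the same quantity analytically during backpropagation. All inputs are illustrative:

```python
import numpy as np

def ce_loss(A_norm, X, W1, W2, y, train_idx):
    """Cross-entropy loss of the two-layer GCN over the training nodes."""
    H = np.maximum(A_norm @ X @ W1, 0.0)
    Z = A_norm @ H @ W2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    return -np.log(P[train_idx, y[train_idx]] + 1e-12).sum()

def edge_gradients(A_norm, X, W1, W2, y, train_idx, eps=1e-5):
    """e_ij = |dL_ce / dA_ij| for every nonzero entry of A_norm, estimated
    by central finite differences."""
    grads = {}
    for i, j in zip(*np.nonzero(A_norm)):
        Ap, Am = A_norm.copy(), A_norm.copy()
        Ap[i, j] += eps
        Am[i, j] -= eps
        g = (ce_loss(Ap, X, W1, W2, y, train_idx)
             - ce_loss(Am, X, W1, W2, y, train_idx)) / (2 * eps)
        grads[(int(i), int(j))] = abs(g)
    return grads

rng = np.random.default_rng(0)
A_norm = np.array([[0.5, 0.5, 0.0], [0.5, 0.34, 0.5], [0.0, 0.5, 0.5]])
grads = edge_gradients(A_norm, rng.normal(size=(3, 4)),
                       rng.normal(size=(4, 8)), rng.normal(size=(8, 2)),
                       y=np.array([0, 1, 0]), train_idx=np.array([0, 2]))
```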
### Confidence-based Node Mixup
Given the _pseudo-labels_ \(\tilde{\mathbf{Y}}\) with _prediction confidence_ \(\mathbf{y}_{\text{conf}}\), we now perform mixup while considering the class information as well as its uncertainty, i.e., confidence, in terms of Inter-class Mixup and Intra-class Mixup. In essence, during the Inter-class Mixup, we aim to use nodes from different classes that have medium-level confidence. This approach helps to smooth the decision boundary between classes. On the other hand, during the Intra-class Mixup, we aim to use nodes from the same class with high and low confidence levels. This approach enhances generalizability within a class.

Figure 2. Overall architecture of S-Mixup.
**Inter-class Mixup.** The key success of mixup lies in its ability to create smoother decision boundaries, which are well-established factors contributing to a model's generalizability (Bang et al., 2017; Chen et al., 2018). Thus, to achieve such smooth decision boundaries among different classes, we utilize nodes with medium-level confidence. The intuition behind using medium-level confidence nodes is that nodes with high confidence tend to be far from the decision boundary and too discriminative (i.e., easy to classify), whereas nodes with low confidence may be too close to the decision boundary (i.e., hard to classify). Hence, we choose nodes with medium-level confidence for obtaining smooth boundaries between classes. Formally, we consider the middle \(2r\)\% among \(\mathbf{y}_{\text{conf}}\) in each class as the inter-class mixup pool, where \(r\) is a hyperparameter. The generation of new nodes can then be achieved by randomly sampling from the inter-class mixup pool in each class, as follows:
\[\mathbf{x}_{\text{new}}^{\text{Inter}}=\lambda\mathbf{x}_{i}+(1-\lambda)\mathbf{x}_{j},\quad\mathbf{y}_{\text{new}}^{\text{Inter}}=\lambda\tilde{\mathbf{y}}_{i}+(1-\lambda)\tilde{\mathbf{y}}_{j},\ \forall i,j\leq N\ s.t.\ \tilde{\mathbf{y}}_{i}\neq\tilde{\mathbf{y}}_{j} \tag{2}\]

where \(\mathbf{x}_{\text{new}}^{\text{Inter}}\in\mathbb{R}^{F}\) and \(\mathbf{y}_{\text{new}}^{\text{Inter}}\in\mathbb{R}^{|\mathcal{C}|}\) denote the feature and label of the newly generated node, respectively, and \(\lambda\in[0,1]\) is the mixing ratio, drawn from a Beta distribution with parameter \(\alpha\) fixed as 1.0.
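Equation (2) amounts to the following (toy one-hot inputs; \(\lambda\sim\text{Beta}(1,1)\)):

```python
import numpy as np

def inter_class_mixup(x_i, x_j, y_i, y_j, alpha=1.0, rng=None):
    """Mix features and one-hot pseudo-labels of two nodes from different
    classes with lambda drawn from Beta(alpha, alpha) (alpha fixed to 1.0)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x_i + (1 - lam) * x_j, lam * y_i + (1 - lam) * y_j

x_new, y_new = inter_class_mixup(
    np.array([1.0, 0.0]), np.array([0.0, 1.0]),   # toy features
    np.array([1.0, 0.0]), np.array([0.0, 1.0]),   # one-hot pseudo-labels
    rng=np.random.default_rng(0))
```

The mixed label stays a valid probability vector, which is what lets it act as a soft target near the decision boundary.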
**Intra-class Mixup.** At the same time, considering nodes within the same class is also crucial, as it can significantly influence the decision boundaries of the classes. Thanks to the pseudo-labels we previously obtained, we can now conduct intra-class mixup to achieve robust characteristics for each class. Specifically, we aim to obtain robust features for each class by interpolating between nodes with high confidence and those with low confidence. Formally, we consider the upper \(r\)\% among \(\mathbf{y}_{\text{conf}}\) as the mixup pool for high-confidence nodes and the lower \(r\)\% among \(\mathbf{y}_{\text{conf}}\) as the mixup pool for low-confidence nodes. Then, through random sampling from the intra-class mixup pools, i.e., the high-confidence and low-confidence mixup pools, we generate new nodes in each class as follows:

\[\mathbf{x}_{\text{new}}^{\text{Intra}}=\lambda\mathbf{x}_{i}+(1-\lambda)\mathbf{x}_{j},\quad\mathbf{y}_{\text{new}}^{\text{Intra}}=\tilde{\mathbf{y}}_{i},\ \forall i,j\leq N\ s.t.\ \tilde{\mathbf{y}}_{i}=\tilde{\mathbf{y}}_{j} \tag{3}\]
where \(\mathbf{x}_{\text{new}}^{\text{Intra}}\in\mathbb{R}^{F}\) denotes the feature of the newly generated node, and \(\mathbf{y}_{\text{new}}^{\text{Intra}}\) shares the same one-hot class vector as nodes \(i\) and \(j\).
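The confidence-based pool selection for the intra-class mixup can be sketched as follows (toy confidences and pseudo-labels; `r` is the upper/lower \(r\)\% cutoff):

```python
import numpy as np

def intra_class_pools(conf, pseudo, r=0.2):
    """Per class, return the top-r% (high-confidence) and bottom-r%
    (low-confidence) node indices by prediction confidence."""
    pools = {}
    for c in np.unique(pseudo):
        idx = np.nonzero(pseudo == c)[0]
        order = idx[np.argsort(conf[idx])]       # ascending confidence
        k = max(1, int(round(r * len(idx))))
        pools[int(c)] = (order[-k:], order[:k])  # (high pool, low pool)
    return pools

conf = np.array([0.9, 0.2, 0.6, 0.95, 0.3, 0.55])
pseudo = np.array([0, 0, 0, 1, 1, 1])
pools = intra_class_pools(conf, pseudo, r=0.34)
```

Sampling one node from each pool of the same class and interpolating their features, as in Eq. (3), yields the intra-class node.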
### Gradient-based Edge Selection
Recall that we are dealing with graph-structured data where both nodes and edges play significant roles. In this regard, the subsequent challenge that naturally emerges is: _how do we create connections between the newly generated nodes and the existing ones?_ To address this question, we propose to leverage the edge gradient, \(e_{ij}\), which we acquired during the training of the vanilla GNN in Section 3.1. The rationale behind utilizing the edge gradient is twofold: **(1)** The newly generated nodes would favor being connected to nodes that can convey supervisory signals to them. Intuitively, as the training loss is derived from labeled training samples, the gradient value of an edge that is either directly connected to or located within a 1-hop distance from the original training nodes would be larger than that of an edge that is neither connected to nor in the vicinity of the training nodes. This tendency is corroborated by Figure 3 (a), where we report the edge gradient with respect to labeled nodes as the training epoch increases. In short, an edge with a high gradient implies that it is close to the labeled nodes, and thus such an edge would be preferred for newly generated nodes in order to transmit supervised signals. **(2)** The existing nodes would prefer to be connected to newly generated nodes if these new connections can help alleviate their local and structural difficulties, particularly in heterophilous scenarios. More precisely, nodes that are surrounded by disassortative neighbors would encounter difficulties in being correctly classified, thus necessitating mitigation. This phenomenon is substantiated by Figure 3 (b), which illustrates the edge gradient dynamics for both homophilous and heterophilous situations. We observe that with increasing epochs, the edge gradient value intensifies for difficult samples, i.e., those on heterophilous edges.
Consequently, for existing nodes, selecting high-gradient edges emerges as a preferable strategy for existing nodes, as it provides opportunities to better manage such heterophilous situations. To sum up, both newly generated nodes and existing nodes would benefit from being connected to edges with high gradient values.
Now, to create edges for the newly generated nodes \(\mathbf{x}_{\text{new}}^{\text{Inter}}\), \(\mathbf{x}_{\text{new}}^{\text{Intra}}\), we leverage the edge gradient obtained during the training of the vanilla GNN. In each epoch, we initially connect a generated node with the two nodes from which the node is generated. Subsequently, we expand the connections to encompass high-gradient edges that fall within the top \(m\)% of the total edge gradients with hyperparameter \(m\) and are connected to either of the two nodes involved in the node mixup. Formally, the adjacency for a newly generated node stemming from the source node \(i\) and \(j\) can be expressed as follows:
\[\mathbf{A}_{\text{new},k}=\begin{cases}1,&\text{if }m\text{-th Percentile}(e_{..})\leq e_{kq},\ \forall k\leq N\ \text{s.t.}\ q\in\{i,j\}\\ 0,&\text{otherwise}\end{cases} \tag{4}\]

where \(e_{..}\) denotes the list of all edge gradient values and \(e_{kq}\) denotes the edge gradient value between node \(k\) and node \(q\).
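One possible reading of Eq. (4) — connect the new node to its two source nodes plus the endpoints of incident edges whose gradients fall in the top \(m\)\% — can be sketched as follows (edge gradients are hypothetical):

```python
import numpy as np

def select_neighbors(edge_grads, i, j, m=0.25):
    """Neighbors for a node generated from source nodes i and j: the two
    source nodes themselves, plus the endpoints of any existing edge that is
    incident to i or j and whose gradient lies in the top m% of all edge
    gradients."""
    vals = np.array(list(edge_grads.values()))
    thresh = np.percentile(vals, 100 * (1 - m))   # top-m% cutoff
    neighbors = {i, j}
    for (u, v), g in edge_grads.items():
        if g >= thresh and (u in (i, j) or v in (i, j)):
            neighbors.update((u, v))
    return sorted(neighbors)

# Toy edge-gradient table; only edge (0, 1) clears the top-25% cutoff.
grads = {(0, 1): 0.9, (1, 2): 0.1, (2, 3): 0.8, (0, 3): 0.05}
```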
### Model Training
To sum up, the overall training process of S-Mixup is expressed as: \(\mathcal{L}_{\text{final}}=\mathcal{L}_{\text{ce}}+\eta\mathcal{L}_{\text{mixup}}^{\text{Intra}}+(1-\eta)\mathcal{L}_{\text{mixup}}^{\text{Inter}}\), where \(\mathcal{L}_{\text{ce}}\) denotes the cross-entropy loss computed from the GNN classifier with the labeled training nodes, and \(\mathcal{L}_{\text{mixup}}^{\text{Intra}}\) and \(\mathcal{L}_{\text{mixup}}^{\text{Inter}}\) represent the cross-entropy losses computed from the GNN classifier with the newly added nodes and their corresponding pseudo-labels in the intra-class and inter-class mixup, respectively, with a loss-balancing hyperparameter \(\eta\). During training, since node mixup and edge connection are implemented based on the pseudo-labels and prediction confidence derived from the initial GNN, we prioritize the optimization of the \(\mathcal{L}_{\text{ce}}\) loss, followed by optimizing the \(\eta\mathcal{L}_{\text{mixup}}^{\text{Intra}}+(1-\eta)\mathcal{L}_{\text{mixup}}^{\text{Inter}}\) loss in our implementation.
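As a one-line sketch of the combined objective (the function name and scalar inputs are illustrative):

```python
def s_mixup_loss(l_ce, l_intra, l_inter, eta=0.5):
    """L_final = L_ce + eta * L_intra + (1 - eta) * L_inter.
    In practice, L_ce is optimized first, since the mixup terms depend on the
    pseudo-labels and confidences produced by the initial GNN pass."""
    return l_ce + eta * l_intra + (1 - eta) * l_inter
```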
Figure 3. Edge gradient of vanilla GCN on the Cora dataset. **(a)** ‘Direct & O’ and ‘1-Hop & X’ represent a correctly predicted edge directly linked to labeled nodes and an incorrectly predicted edge linked to labeled nodes within 1-hop, respectively. **(b)** ‘Homophilous’ denotes edges connecting same-labeled nodes.
## 3. Experiments
**Datasets.** We evaluate S-Mixup on five benchmark citation datasets, namely Cora, CiteSeer, PubMed, Coauthor-CS, and Coauthor-Physics. For the split, we follow the data split setting of GraphMix (Gan et al., 2017). The detailed statistics can be found in Table 1.
**Compared Methods.** We compare S-Mixup with GCN (Golovolov et al., 2015), node-perspective mixup models such as GraphMix (Gan et al., 2017) and Manifold Mixup (Gan et al., 2017), as well as an oversampling-based model, GraphSMOTE (Gan et al., 2018), and a feature saliency-based mixup model, GraphENS (Gan et al., 2018).
**Evaluation Protocol.** All experiments are repeated ten times with randomly initialized parameters, and we present the mean accuracy and standard deviation conducted on RTX 3090 (24GB). Common hyperparameters include a learning rate chosen from \(\{0.01,0.05,0.1\}\), hidden dimensions from \(\{16,64\}\), and a fixed dropout rate of 0.5. For each model, individual hyperparameters are tuned within a range as recommended by the respective authors.
**Overall Performance.** In Table 2, S-Mixup outperforms other mixup-based models such as GraphMix and Manifold Mixup, as well as oversampling-based models like GraphSMOTE and GraphENS. The vanilla GCN model serves as the lower performance bound. Specifically, on the CiteSeer dataset, our method achieves the highest performance, showing a significant improvement of 4.69% over the vanilla GCN, which is a substantial improvement considering that the performance gain of GraphMix over the vanilla GCN is 2.35%.
**Ablation Studies.** As S-Mixup employs a confidence-based node mixup for both inter-class and intra-class nodes with a gradient-based edge selection strategy, we observe several key outcomes from Table 2. **(1)** Incorporation of both inter-class and intra-class mixup is crucial to the efficacy of the node mixup technique. This integration aids not only in generating smoother decision boundaries but also in establishing a more generalized representation for each class. Figure 4 provides a detailed view of the progression of node representations as epochs grow. We observe that generated nodes in both the inter-class and intra-class mixup are placed near the decision boundaries, which aligns with the objectives of smoothing decision boundaries and creating generalized representations. **(2)** It is essential to connect the newly generated nodes with the pre-existing nodes. Our findings show that even when the proposed method is used with a low edge gradient (S-Mixup w/ low Edge), it surpasses the performance of the same method without any edge connection (S-Mixup w/o Edge). **(3)** Our proposed method is superior when using high-gradient edges rather than low-gradient ones. High-gradient edges encapsulate a greater degree of supervised information, providing an opportunity to mitigate heterophilous situations, as mentioned in Section 3.3.
**Sensitivity Analysis.** Figure 5 (a) demonstrates the hyperparameter sensitivity with respect to the sampling hyperparameter \(r\in\{0.1,0.2,0.3,0.4,0.5\}\), which is responsible for the inter- and intra-class mixup, and \(m\in\{0.1,0.2,0.3,0.4,0.5\}\) that dictates high edge gradient selection. Interestingly, our proposed method exhibits robustness against significant variations in both hyperparameters. However, favoring a joint small range for \(m\) and \(r\) appears to be beneficial. This is likely because node generation via high-confidence values (i.e., small \(r\)) and node connections via high edge gradient values (i.e., small \(m\)) introduce less uncertainty compared to their low-confidence and low-gradient counterparts. In Figure 5 (b), we also investigate the impact of the loss controlling parameter \(\eta\in\{0.1,0.2,...,0.9\}\), keeping \(m\) and \(r\) fixed. We find that a balanced consideration of both inter- and intra-class mixup, represented by \(\eta=0.5\), is advantageous for training S-Mixup.
## 4. Conclusion
In this paper, we present a novel mixup augmentation technique that thoroughly accounts for the structural information inherent in the graph domain. We identified that existing works, which do not fully incorporate structural information, fail to generalize well in heterophilous scenarios. In response to this, S-Mixup initially processes the vanilla GNN to acquire pseudo-labels along with their prediction confidence. Subsequently, we constitute the mixup pool for both inter-class and intra-class cases using nodes with medium levels of prediction confidence, and a combination of high and low confidence levels, respectively. Furthermore, to establish connections between newly generated nodes and existing ones, we employ the edge gradient-based edge selection. Through comprehensive experiments, we showed the superiority of S-Mixup in enhancing the robustness and generalization of node classification task.
**Acknowledgement: No.2021R1C1C1009081 and No.2022-0-00157.**
2307.01636 | HAGNN: Hybrid Aggregation for Heterogeneous Graph Neural Networks | Heterogeneous graph neural networks (GNNs) have been successful in handling
heterogeneous graphs. In existing heterogeneous GNNs, meta-path plays an
essential role. However, recent work pointed out that simple homogeneous graph
model without meta-path can also achieve comparable results, which calls into
question the necessity of meta-path. In this paper, we first present the
intrinsic difference about meta-path-based and meta-path-free models, i.e., how
to select neighbors for node aggregation. Then, we propose a novel framework to
utilize the rich type semantic information in heterogeneous graphs
comprehensively, namely HAGNN (Hybrid Aggregation for Heterogeneous GNNs). The
core of HAGNN is to leverage the meta-path neighbors and the directly connected
neighbors simultaneously for node aggregations. HAGNN divides the overall
aggregation process into two phases: meta-path-based intra-type aggregation and
meta-path-free inter-type aggregation. During the intra-type aggregation phase,
we propose a new data structure called fused meta-path graph and perform
structural semantic aware aggregation on it. Finally, we combine the embeddings
generated by each phase. Compared with existing heterogeneous GNN models, HAGNN
can take full advantage of the heterogeneity in heterogeneous graphs. Extensive
experimental results on node classification, node clustering, and link
prediction tasks show that HAGNN outperforms the existing models, demonstrating
the effectiveness of HAGNN. | Guanghui Zhu, Zhennan Zhu, Hongyang Chen, Chunfeng Yuan, Yihua Huang | 2023-07-04T10:40:20Z | http://arxiv.org/abs/2307.01636v1 | # HAGNN: Hybrid Aggregation for Heterogeneous Graph Neural Networks
###### Abstract
Heterogeneous graph neural networks (GNNs) have been successful in handling heterogeneous graphs. In existing heterogeneous GNNs, meta-path plays an essential role. However, recent work pointed out that a simple homogeneous graph model without meta-path can also achieve comparable results, which calls into question the necessity of meta-path. In this paper, we first present the intrinsic difference between meta-path-based and meta-path-free models, i.e., how to select neighbors for node aggregation. Then, we propose a novel framework to utilize the rich type semantic information in heterogeneous graphs comprehensively, namely HAGNN (Hybrid Aggregation for Heterogeneous GNNs). The core of HAGNN is to leverage the meta-path neighbors and the directly connected neighbors simultaneously for node aggregation. HAGNN divides the overall aggregation process into two phases: meta-path-based intra-type aggregation and meta-path-free inter-type aggregation. During the intra-type aggregation phase, we propose a new data structure called fused meta-path graph and perform structural semantic aware aggregation on it. Finally, we combine the embeddings generated by each phase. Compared with existing heterogeneous GNN models, HAGNN can take full advantage of the heterogeneity in heterogeneous graphs. Extensive experimental results on node classification, node clustering, and link prediction tasks show that HAGNN outperforms the existing models, demonstrating the effectiveness of HAGNN.
Heterogeneous graph, Graph neural network, Graph representation learning.
## I Introduction
Much real-world data can be naturally represented as a graph structure. Meanwhile, in many practical scenarios, such as knowledge graphs [1], scholar networks [2, 3, 4], and biochemical networks [5], the graphs are heterogeneous. Compared with homogeneous graphs, heterogeneous graphs have more than one type of node or link, which encodes more semantic information [6].
Graph neural networks (GNNs) [7, 8] have achieved remarkable success on graph-structured data by learning a low-dimensional representation for each node. Moreover, to tackle the challenge of heterogeneity, many representation learning models are proposed to utilize the rich semantic information in heterogeneous graphs. Among these methods, meta-path [6] is considered a natural way to decouple the diversified connection patterns among nodes. Specifically, meta-path is a composite relation consisting of multiple edge types. For example, in Figure 1, Paper-Author-Paper is a typical meta-path, which reflects that two papers are published by the same author. metapath2vec [9] formalizes meta-path-based random walks to compute node embeddings. HAN [10] and MAGNN [11] employ hierarchical attention to aggregate information from meta-path-based neighbors. GTN [12] implicitly learns meta-paths by fusing different node types based on the attention mechanism. Meta-path is also used for knowledge distillation [13], text summarization [14], and contrastive learning [15, 16] on heterogeneous graphs.
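The Paper-Author-Paper meta-path mentioned above can be made concrete with adjacency-matrix products; the toy incidence matrix below is illustrative, not data from the paper:

```python
import numpy as np

# Hypothetical paper-author incidence matrix: A_pa[p, a] = 1 iff author a
# wrote paper p (3 papers, 2 authors).
A_pa = np.array([[1, 0],
                 [1, 1],
                 [0, 1]])

# Meta-path Paper-Author-Paper: papers i and j are meta-path neighbors iff
# they share at least one author; the product also counts how many shared
# authors (i.e., path instances) connect them.
pap = A_pa @ A_pa.T
np.fill_diagonal(pap, 0)   # drop trivial paths from a paper to itself
```

Longer meta-paths (e.g., APCPA) compose in the same way by chaining the corresponding incidence matrices.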
Meta-path plays an essential role in the existing heterogeneous GNNs. However, recent researchers [17] experimentally found that homogeneous GNNs such as GAT [18] actually perform pretty well on heterogeneous graph by revisiting the model design, data preprocessing, and experimental settings of the heterogeneous GNNs, which calls into question the necessity of meta-paths [17, 19]. To answer this question, we first give an in-depth analysis about the intrinsic difference about meta-path-based models (e.g., HAN, MAGNN, GTN) and meta-path-free models (e.g., RGCN [20], GAT, Simple-HGN [17]). For a given node, how to select neighbors for node aggregation is a fundamental step in heterogeneous GNNs, which is also the key difference between the two types of models. For meta-path-based models, they usually construct the meta-path-based graph for the target node type. In the meta-path-based graph, all nodes have the same type and any two adjacent nodes have at least one path instance following the specific symmetric meta-path. The final representation of a node is calculated by aggregating its neighbors in the meta-path-based graph. For meta-path-free models, they directly aggregate neighbors in the original graph. In most heterogeneous graphs, a node is of different types with its immediate neighbors. Thus, the final embedding is the aggregation of the nodes with different types.
Take an example as shown in Figure 1. If we want to know the category of a paper, meta-path-based models check the categories of other papers written by the same author, while meta-path-free models collect information about the author of the paper, the conference where the paper was published, the terms of the paper, etc. Overall, the immediate neighbors of a node contain key attributes, and the meta-path-based neighbors can easily supply high-order connectivity information of the same node type. Both meta-path-based neighbors and immediate neighbors are useful, and they can complement each other. Therefore, to improve the performance of heterogeneous GNNs, it is essential to design a new representation learning method that can leverage both meta-path-based neighbors and immediate neighbors.

Fig. 1: Different aggregation schemes on the DBLP dataset.
A straightforward idea is to directly combine the meta-path-based intra-type aggregation with the immediate-neighbor-based aggregation. But as shown in Figure 2, simply combining the typical meta-path-based model (i.e., HAN) with the SOTA meta-path-free model (i.e., SimpleHGN) even leads to performance decreases on both the node classification datasets (i.e., DBLP and IMDB) and the link prediction dataset (i.e., PubMed), especially on the IMDB dataset. The reason for this problem is that existing meta-path-based models suffer from information redundancy and excessive additional parameters, which may lead to over-parameterization as well as over-fitting if directly further combined with meta-path-free models.
Table I shows the issue of information redundancy. In existing meta-path-based models [10, 11], each meta-path corresponds to a graph. The node representations learned from the separate meta-path-based graphs are then aggregated with hierarchical attention. For the DBLP dataset in Table I, the meta-path-based graph produced by Author-Paper-Author (APA) is a subgraph of that produced by Author-Paper-Conference-Paper-Author (APCPA). The meta-path-based graphs generated by the meta-path Author-Paper-Conference-Paper-Author and the meta-path Author-Paper-Term-Paper-Author (APTPA) have 61.84% duplicate edges. A similar problem exists for the ACM dataset. It can be seen that if we build a separate meta-path-based graph for each different meta-path, unnecessary computation will be squandered on redundant information. Moreover, too many duplicate edges lead to excessive additional learnable parameters for node aggregation, which may degrade the learning performance.
Furthermore, Table I also shows that the number of meta-path-based neighbors is much larger than that of direct neighbors in the original graph and thus too many neighbors in the meta-path-based graph cause difficulties in the learning of attention weights. Existing meta-path-based models only consider the node connectivity in the meta-path-based graph, ignoring the _structural semantic information_ (e.g., the number of path instances following the specific meta-path), which can be exploited to improve the learning of node representation.
Based on the above analysis, we propose a novel framework to utilize the rich type semantic information in heterogeneous graphs comprehensively, namely HAGNN1 (Hybrid Aggregation for Heterogeneous Graph Neural Networks). The core of HAGNN is to leverage the meta-path-based neighbors and the directly connected neighbors simultaneously for node aggregation. Specifically, we divide the overall aggregation process into two phases: meta-path-based intra-type aggregation phase and meta-path-free inter-type aggregation phase. During the intra-type aggregation phase, we first propose a new data structure called _fused meta-path graph_ to avoid information redundancy. For a specific node type, the meta-path neighbor relationships of multiple meta-paths are fused in a single graph, where all nodes have the same type. Also, the fused meta-path graph contains the connectivity information of multiple meta-graphs. Then, we perform attention-based intra-type aggregation in the fused meta-path graph.
Footnote 1: HAGNN is available at [https://github.com/Pasal_ab/HAGNN](https://github.com/Pasal_ab/HAGNN)
To further improve the learning of attention weights, we propose a _structural semantic aware aggregation_ method. For each two neighbor nodes of the fused meta-path graph, we view the number of path instances in the original graph as the structural semantic weight, which is used to guide the learning of attention weights. During the inter-type aggregation phase, we directly perform node aggregation with the self-attention mechanism in the original heterogeneous graph to capture the information of immediate neighbors. Finally, the node embeddings generated by the intra-type aggregation and the inter-type aggregation are combined.
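A minimal sketch of the fusion idea: take the union of the edges of several meta-path-based graphs (so duplicate edges such as APA vs. APCPA are stored only once) while keeping the total number of path instances per edge as a structural semantic weight. The count matrices are hypothetical, and this illustrates the idea only; it is not HAGNN's exact construction:

```python
import numpy as np

def fuse_meta_path_graphs(count_mats):
    """Fuse several meta-path-based graphs (given as matrices of path-instance
    counts over the same node type): union the edges and sum the counts."""
    total = sum(count_mats)        # per-edge path-instance counts
    adj = (total > 0).astype(int)  # union of meta-path neighbors
    return adj, total

apa = np.array([[0, 2], [2, 0]])     # hypothetical counts of A-P-A instances
apcpa = np.array([[0, 1], [1, 0]])   # hypothetical counts of A-P-C-P-A instances
adj, weight = fuse_meta_path_graphs([apa, apcpa])
```

The `weight` matrix is what a structural-semantic-aware aggregation could use to guide attention weights between meta-path neighbors.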
To summarize, the main contributions are highlighted as follows:
Fig. 2: Comparison between the simple combination of intra-type aggregation and inter-type aggregation (i.e., HAN + SimpleHGN) vs. HAGNN (Proposed)

* Based on the analysis that both meta-path-based and meta-path-free aggregation should be beneficial to heterogeneous graphs, we propose a novel hybrid aggregation framework consisting of two phases: meta-path-based intra-type aggregation and meta-path-free inter-type aggregation. It is the first attempt to combine the two kinds of aggregation.
* To eliminate information redundancy and make the intra-type aggregation more effective, we propose the fused meta-path graph to capture the meta-path neighbors for a specific node type.
* To improve the learning of attention weights in the intra-type aggregation phase, we propose a structural semantic aware aggregation mechanism, which leverages the number of path instances as auxiliary aggregation weights.
* Extensive experimental results on five heterogeneous graph datasets reveal that HAGNN outperforms existing heterogeneous GNNs on node classification, node clustering, and link prediction tasks. The discussion of HAGNN also provides insightful guidance on the use of meta-paths in heterogeneous graph neural networks.
## II Related Work
### _Homogeneous Graph Representation Learning_
Graph representation learning aims to learn low-dimensional representations from non-Euclidean graph structures. For homogeneous graphs, most methods learn node representations from neighborhoods. LINE [21] utilizes the first-order and second-order proximity between nodes to learn node embeddings. DeepWalk [22], node2vec [23], TADW [24], and Struc2vec [25] extract node sequences by random walks and feed the sequences to a skip-gram model. Graph neural networks, following the message passing framework, have been widely exploited in graph representation learning. GNNs can be divided into spectral-based and spatial-based models [7]. GCN [26] is a typical spectral-based model that achieves spectral graph convolution via a localized first-order approximation, while spatial-based models such as GAT [18] leverage the attention mechanism for node aggregation.
Representation learning models for homogeneous graphs are often considered unsuitable for heterogeneous graphs because they ignore type information, but recent work points out that some homogeneous graph models actually perform well on heterogeneous graphs, which is thought-provoking.
### _Heterogeneous Graph Representation Learning_
Heterogeneous graphs have more than one type of nodes or edges. To utilize the semantics encoded in heterogeneous graphs, many models designed for heterogeneous graphs are proposed [10, 27].
Depending on whether meta-paths are used, we can divide these models into meta-path-based and meta-path-free models. Among meta-path-based models, metapath2vec [9] utilizes the node paths traversed by meta-path-guided random walks to model the context of a node. HIN2Vec [28] carries out multiple prediction training tasks to learn latent vectors of nodes and meta-paths. HAN [10] leverages the semantic information of meta-paths and uses hierarchical attention to aggregate neighboring nodes. MAGNN [11] utilizes RotatE [29] to encode intermediate nodes along each meta-path and mixes multiple meta-paths using hierarchical attention. GTN [12] learns a soft selection of edge types and composite relations for generating useful meta-paths. SHGNN [30] uses a tree-based attention module to aggregate information along the meta-path and considers the graph structure of multiple meta-path instances. R-HGNN [31] learns node representations on heterogeneous graphs at a fine-grained level by considering relation-aware characteristics. CKD [13] learns meta-path-based embeddings by collaboratively distilling knowledge from intra-meta-path and inter-meta-path views simultaneously.
The meta-path-free models extract rich semantic information without meta-path. RGCN [20] introduces relation-specific transformations to handle different edge types. HetGNN [32] uses Bi-LSTM to aggregate node features for each type and among types. SimpleHGN [17] revisits existing models and proposes a simple framework with GAT as backbone.
The difference between meta-path-based and meta-path-free models is that they disagree on which kind of neighbors is more informative. In this paper, we propose a hybrid aggregation framework that can leverage both meta-path-based neighbors and immediate neighbors effectively.
## III Preliminaries and Notations
**Definition 1**: **Heterogeneous Graph [6]**_. A heterogeneous graph is a directed graph with the form \(\mathcal{H}=\{\mathcal{V},\mathcal{E},\mathcal{T},\mathcal{R},\phi,\psi\}\), where \(\mathcal{V}\) and \(\mathcal{E}\) denote the node set and the edge set in \(\mathcal{H}\). Each node \(v_{i}\in\mathcal{V}\) is associated with a node type \(\phi(v_{i})=t_{i}\in\mathcal{T}\). Similarly, each edge \(e_{ij}\in\mathcal{E}\) is associated with an edge type \(\psi(e_{ij})=r_{ij}\in\mathcal{R}\). In graph \(\mathcal{H}\), \(|\mathcal{T}|+|\mathcal{R}|>2\). Every node has an attribute \(x_{i}\in\mathbb{R}^{d_{i}}\), where \(d_{i}\) varies with the type of node \(v_{i}\). Let \(\mathbf{A}_{t_{i},t_{j}}\) denote the adjacency matrix between types \(t_{i}\) and \(t_{j}\); \(\mathbf{A}_{t_{i},t_{j}}[u][v]=1\) indicates that there is an edge between nodes \(u\) and \(v\) with \(\phi(u)=t_{i}\) and \(\phi(v)=t_{j}\)._
**Definition 2**: **Meta-Path [6]**_. A meta-path \(p\) is a composite relation, which consists of multiple edge types, i.e., \(p=t_{1}\xrightarrow{r_{1}}t_{2}\xrightarrow{r_{2}}\ldots\xrightarrow{r_{l}}t_{l+1}\), where \(t_{1},\ldots,t_{l+1}\in\mathcal{T}\) and \(r_{1},\ldots,r_{l}\in\mathcal{R}\). One meta-path corresponds to many meta-path instances in \(\mathcal{H}\)._
**Definition 3**: **Meta-path-based Neighbors [11]**_. Given a meta-path \(p\) in \(\mathcal{H}\), the meta-path-based neighbors of node \(v\) is defined as the set of nodes that connect with node \(v\) via a meta-path instance of \(p\). If meta-path \(p\) is symmetrical, the meta-path-based neighbors of node \(v\) contain itself._
**Definition 4**: **Meta-path-based Graph [11]**_. Given a meta-path \(p\) in \(\mathcal{H}\), the meta-path-based graph \(G_{p}\) of \(p\) is a graph constructed from all meta-path-based neighbor pairs. \(G_{p}\) is homogeneous if the head and tail node types of \(p\) are the same. The neighbors of node \(v\) in \(G_{p}\) can be denoted as \(N_{G_{p}}^{v}\)._
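Definitions 3 and 4 can be made concrete with a small sketch: composing the author-paper relation with its transpose yields the APA meta-path adjacency, and thresholding it gives the meta-path-based graph. The incidence matrix below is made up for illustration; this is not the paper's implementation.

```python
import numpy as np

# Toy author-paper incidence matrix (illustrative, not from the paper's
# datasets): A_ap[i, j] = 1 iff author i wrote paper j.
A_ap = np.array([[1, 1, 0],
                 [1, 0, 1],
                 [0, 0, 1]])

# Composing the A-P and P-A relations yields the meta-path APA. Each entry
# (u, v) counts the APA path instances between authors u and v.
apa_counts = A_ap @ A_ap.T

# Definition 4: the meta-path-based graph keeps a binary edge wherever at
# least one path instance exists.
G_apa = (apa_counts > 0).astype(int)
```

Note the nonzero diagonal of `G_apa`: since APA is symmetric, every author is its own meta-path-based neighbor, as Definition 3 states.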
Table II shows the used notations and their definitions.
## IV The proposed methodology
### _Overall Framework_
Figure 3 shows the overall framework of HAGNN, which consists of three phases: meta-path-based intra-type aggregation, meta-path-free inter-type aggregation, and combination of semantic information. In the intra-type aggregation phase, we first construct the fused meta-path graph that contains all meta-path-based neighbors. Then, we leverage the structural semantic weights to guide the node aggregation in the fused meta-path graph. The node embeddings obtained in the intra-type aggregation phase are fed to the inter-type aggregation phase, which performs node aggregation directly on the original heterogeneous graph. The embeddings generated by the two aggregation phases are combined to form the final embeddings of the target node types, which are further used for downstream tasks.
The intra-type aggregation phase aims to capture the information of high-order meta-path neighbors, while the inter-type aggregation phase directly aggregates the attribute information of immediate neighbors. The two phases perform node aggregation from two different perspectives. Next, we introduce each phase of HAGNN.
### _Meta-path-based Intra-type Aggregation_
Since the intra-type aggregation is performed on the fused meta-path graph, we first introduce the definition of the proposed fused meta-path graph.
#### Iv-B1 Fused Meta-path Graph
The core of heterogeneous GNNs is to use the graph topology to perform message aggregation; thus, the graph topology plays an essential role in heterogeneous GNNs. Most existing heterogeneous GNNs construct a meta-path-based graph for node aggregation, where each meta-path corresponds to one meta-path-based graph. As shown in Table I, information redundancy arises when we put the meta-path-based neighbors of different meta-paths in separate graphs. A large number of duplicate edges leads to computational redundancy and excessive additional parameters when computing and optimizing attention weights in the meta-path-based graphs, which negatively impacts both computation efficiency and learning performance. To address these issues, we propose a novel data structure called the fused meta-path graph to carry the information of multiple meta-paths.
**Definition 5**: **Type of meta-path:** Given a meta-path \(p\) with length \(l\), \(p=t_{1}\xrightarrow{r_{1}}t_{2}\xrightarrow{r_{2}}\ldots\xrightarrow{r_{l}}t_{l+1}\), where \(t_{1},\ldots,t_{l+1}\in\mathcal{T}\) and \(r_{1},\ldots,r_{l}\in\mathcal{R}\). The type of \(p\) is its head node type \(t_{1}\), and meta-path \(p\) can also be denoted as \(p_{t_{1}}\).
**Definition 6**: **Fused meta-path graph:** Given a node type \(t\in\mathcal{T}\) of a heterogeneous graph \(\mathcal{H}\) and the meta-path set of type \(t\), \(\mathcal{P}_{t}=\{p_{t}^{1},p_{t}^{2},\ldots,p_{t}^{l}\}\), the fused meta-path graph \(\mathcal{G}_{t}\) is a single graph that contains all meta-path neighbors in \(\mathcal{P}_{t}\). The neighbors of node \(v\) in \(\mathcal{G}_{t}\) are denoted as \(N_{\mathcal{G}_{t}}^{v}\).
\(\mathcal{G}_{t}\) is a homogeneous graph and defines the topological adjacency relationship between nodes of the same node type. To construct \(\mathcal{G}_{t}\), we first obtain the meta-path-based graph \(G_{p_{t}}\) for \(p_{t}\in\mathcal{P}_{t}\), and then we take the union of \(G_{p_{t}}\), i.e.,
\[\mathcal{G}_{t}=\bigcup_{p_{t}\in\mathcal{P}_{t}}G_{p_{t}} \tag{1}\]
Figure 3 shows the construction process of the fused meta-path graph. Unlike the meta-path-based graph, each node type instead of each meta-path corresponds to a fused meta-path graph. For node \(v\) in fused meta-path graph \(\mathcal{G}_{t}\), \(N_{\mathcal{G}_{t}}^{v}\) contains all meta-path neighbors from \(\mathcal{P}_{t}\). The relationship between \(v\) and its neighbor \(u\in N_{\mathcal{G}_{t}}^{v}\) may belong to multiple meta-paths. Since the repeated meta-path-based neighbor relationships are reflected in a single edge, the information redundancy between different meta-paths can be eliminated.
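Equation 1 can be sketched as an element-wise union of the per-meta-path adjacency matrices, so neighbor pairs shared by several meta-paths are kept only once. The two meta-path-based graphs below are made-up dense NumPy adjacencies over one node type, not real data.

```python
import numpy as np

# Two toy meta-path-based graphs over the same four nodes of one type
# (illustrative adjacency matrices).
G_apa = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]])
G_apcpa = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]])

# Eq. 1: the fused meta-path graph is the union of the meta-path graphs.
G_fused = np.logical_or(G_apa, G_apcpa).astype(int)

# Neighbor pairs present in both graphs collapse into a single edge.
dup_pairs = np.logical_and(G_apa, G_apcpa).sum() // 2
fused_edges = G_fused.sum() // 2
```

Here the fused graph has 5 edges where the two separate graphs together would store 7, with 2 duplicate pairs merged.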
Table III compares the fused meta-path graph with existing methods that extract homogeneous graphs from heterogeneous graphs using meta-paths. For HAN and MAGNN, meta-path-based graphs are directly constructed from meta-path-based neighbors, but as we point out in Table I, there is a serious data redundancy problem between different meta-path graphs. GTN is proposed to learn meta-path graphs: it uses 1x1 convolutions to softly select different edge types and then uses matrix multiplication to generate new meta-paths. Since the selection of edge types is continuous rather than discrete, the meta-path graphs generated by GTN are dense, and the computation overhead of GTN is huge. Overall, the proposed fused meta-path graph solves the data redundancy problem and achieves better performance with less time overhead.
#### Iii-A2 Type Selection for Intra-type Aggregation
HAGNN is a two-phase aggregation model. As shown in Figure 3, in the latter phase (i.e., the inter-type aggregation phase), the representations of other node types are absorbed into the embeddings of the target node type. Unlike previous models [10, 11], where other node types only serve as bridges between nodes of the target type and do not directly participate in representation learning, we also perform intra-type aggregation for non-target node types.
Moreover, not all node types are suitable for the intra-type aggregation phase. We select node types for intra-type aggregation based on the following two aspects. First, in heterogeneous graphs, the number of nodes varies across types. For example, the DBLP dataset has 14,328 paper nodes but only 20 conference nodes. For types with a small number of nodes, we believe the relationships between the nodes are already clear enough, so intra-type aggregation is unnecessary. Second, having a closed meta-path is also an important condition: a meta-path \(p\) is closed if its head and tail types are the same, and only a closed meta-path can generate a homogeneous graph.
Formally, we denote the type set participating in the intra-type aggregation phase as \(\mathcal{T}^{\prime}\).
\[\mathcal{T}^{\prime}=\{t\ |\ \frac{|\mathcal{V}_{t}|}{|\mathcal{V}|}>threshold \text{ and }t\text{ has closed meta-paths}\} \tag{2}\]
where \(\mathcal{V}_{t}\) denotes the set of nodes with type \(t\). In practice, the threshold can be set to 1%.
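The selection rule of Equation 2 can be sketched in a few lines. Only the paper (14,328) and conference (20) counts below come from the text; the remaining counts and closed-meta-path flags are illustrative assumptions.

```python
# Hypothetical per-type node counts and closed-meta-path availability.
node_counts = {"author": 4057, "paper": 14328, "term": 7723, "conference": 20}
has_closed_metapath = {"author": True, "paper": True,
                       "term": False, "conference": True}

total = sum(node_counts.values())
threshold = 0.01  # the 1% value suggested in the text

# Eq. 2: keep types that are frequent enough AND have a closed meta-path.
selected = {t for t, n in node_counts.items()
            if n / total > threshold and has_closed_metapath[t]}
```

With these numbers, `conference` is dropped for being too rare and `term` for lacking a closed meta-path, leaving authors and papers for intra-type aggregation.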
#### Iii-A3 Type-specific Linear Transformation
Before the intra-type aggregation, since nodes of different types have different feature dimensions, we apply a type-specific linear transformation to each type of node, projecting the features of each type of node into the same latent factor space. For node \(v\in\mathcal{V}\) of \(t\in\mathcal{T}\):
\[\tilde{h}_{v}=\tilde{W}_{t}\cdot x_{v}^{t} \tag{3}\]
where \(x_{v}^{t}\in\mathbb{R}^{d_{t}}\) is the original feature vector of node \(v\). The dimension \(d_{t}\) varies from the node type. \(\tilde{W}_{t}\in\mathbb{R}^{d\times d_{t}}\) is the learnable transformation matrix.
After the linear transformation, all nodes have the same dimension \(d\). Next, the intra-type node aggregation can be carried out to aggregate meta-path-based higher-order neighbors.
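Equation 3 amounts to one projection matrix per node type. The sketch below uses random NumPy matrices as stand-ins for the learned parameters; the feature dimensions are illustrative, not the datasets' real ones.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # shared latent dimension after projection

# Per-type raw feature dimensions (illustrative).
feat_dims = {"author": 334, "paper": 4231}

# Eq. 3: one transformation matrix W_t per node type (random init here
# stands in for learned parameters).
W = {t: rng.standard_normal((d, d_t)) * 0.01 for t, d_t in feat_dims.items()}

def project(node_type, x):
    """Type-specific linear transformation: h_v = W_t @ x_v."""
    return W[node_type] @ x

# Features of different dimensions land in the same d-dimensional space.
h_author = project("author", rng.standard_normal(334))
h_paper = project("paper", rng.standard_normal(4231))
```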
#### Iii-A4 Structural Semantic Aware Aggregation
For the specific node type \(t\), once the corresponding fused meta-path graph \(\mathcal{G}_{t}\) is constructed, we perform intra-type node aggregation according to the topology (i.e., the neighborhood of each node) of \(\mathcal{G}_{t}\). For each node \(v\) in the fused meta-path graph, the neighborhood is much larger than that in the original graph. Optimizing the attention weight of each neighbor becomes a challenging task. To address the challenge, we propose a structural semantic aware mechanism to guide the learning of the attention weight for node aggregation.
For meta-path \(p_{t}\), suppose that the length of \(p_{t}\) is \(l\) and the node type sequence of \(p_{t}\) is \(t_{1}\to t_{2}\rightarrow\cdots\to t_{l-1}\to t_{l}\). Let \(\mathcal{A}\) be the weighted adjacent matrix following meta-path \(p_{t}\):
\[\mathcal{A}(\mathcal{H},p_{t})=\prod_{i=1}^{l-1}\mathbf{A}_{t_{i},t_{i+1}} \tag{4}\]
where \(\mathbf{A}_{t_{i},t_{i+1}}\) is the adjacency matrix of types \(t_{i}\) and \(t_{i+1}\). For the node pair \((u,v)\), \(\mathcal{A}(\mathcal{H},p_{t})[u][v]\) denotes the number of path instances from node \(u\) to node \(v\) in heterogeneous graph \(\mathcal{H}\) whose node types conform to the pattern of meta-path \(p_{t}\).

Fig. 3: The overall framework of HAGNN. Different circle colors represent different node types.
\(\mathcal{A}\) defines the meta-path-based similarity between nodes of the same type. In the fused meta-path graph \(\mathcal{G}_{t}\), since the neighbor pair \((u,v)\) may contain the meta-path neighbor relationships of multiple meta-paths, we further additively mix these meta-path-based similarities:
\[\delta_{uv}^{t}=\sum_{p_{t}\in\mathcal{P}_{t}}\mathcal{A}(\mathcal{H},p_{t})[u] [v] \tag{5}\]
\(\delta_{uv}^{t}\) can be viewed as the common-neighbor count [33, 34] of nodes \(u\) and \(v\) in \(\mathcal{G}_{t}\). We call \(\delta\) the _structural semantic weight_ because it reflects the structural semantic similarity between nodes. For example, as shown in Figure 4, the number of papers co-authored by authors \(A\) and \(B\) is larger than the number co-authored by authors \(A\) and \(C\). Thus, \(A\) and \(B\) should have a stronger relationship than \(A\) and \(C\), and their representations should be more similar.
To make the structural semantic weight participate in node aggregation, we normalize it with softmax:
\[\tilde{\delta}_{uv}^{t}=\frac{exp(\delta_{uv}^{t})}{\sum_{(u^{\prime},v^{ \prime})\in\mathcal{G}_{t}}exp(\delta_{u^{\prime}v^{\prime}}^{t})} \tag{6}\]
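Equations 4-6 can be reproduced on a toy graph: adjacency products count path instances per meta-path, the counts are summed across the type's meta-paths, and a softmax over all neighbor pairs normalizes the result. The bipartite matrices below are made up for illustration.

```python
import numpy as np

# Toy bipartite adjacencies: 2 authors x 3 papers, 3 papers x 2 conferences.
A_ap = np.array([[1, 1, 0],
                 [1, 0, 1]])
A_pc = np.array([[1, 0],
                 [1, 0],
                 [0, 1]])

# Eq. 4: the weighted adjacency along a meta-path is a product of the
# per-relation adjacency matrices; entries count path instances.
apa = A_ap @ A_ap.T                      # author-paper-author
apcpa = A_ap @ A_pc @ A_pc.T @ A_ap.T    # author-paper-conf-paper-author

# Eq. 5: the structural semantic weight sums counts over the meta-paths.
delta = apa + apcpa

# Eq. 6: softmax over the neighbor pairs of the fused meta-path graph.
mask = delta > 0
exp_w = np.exp(delta[mask])
delta_norm = np.zeros(delta.shape)
delta_norm[mask] = exp_w / exp_w.sum()
```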
However, using only \(\delta\) as the attention weight for intra-type aggregation has the following two problems:
* \(\delta\) cannot distinguish the importance of different meta-paths, because it directly sums all meta-path-based similarities. As higher-order relationships between nodes, different meta-paths represent different levels of intimacy. For example, consider three nodes \(A_{1}\), \(A_{2}\), \(A_{3}\) with the meta-path instances \(A_{1}-P_{1}-A_{2}\) and \(A_{1}-P_{1}-T_{1}-P_{2}-A_{3}\). Even though the meta-paths connecting \(A_{1}\)-\(A_{2}\) and \(A_{1}\)-\(A_{3}\) are different, \(\delta_{(A_{1},A_{2})}=\delta_{(A_{1},A_{3})}\) in the fused meta-path graph.
* \(\delta\) only reflects intuitive semantic information. As a fixed value, \(\delta\) cannot express small differences within the same meta-path. For example, for the APA meta-path, although co-authors are often in the same field, there are also papers in cross-cutting fields whose authors are in different fields.
Therefore, \(\delta\) contains useful information, but cannot be completely relied upon. Learnable adaptive weights are also necessary. In this paper, we adopt a graph self-attention mechanism to calculate the adaptive weights.
\[\begin{split} e_{uv}^{t}&=LeakyReLU(a_{t}^{T}\cdot W_{t}[\tilde{h}_{v}||\tilde{h}_{u}]),\quad u\in N_{\mathcal{G}_{t}}^{v},\\ \alpha_{uv}^{t}&=\frac{exp(e_{uv}^{t})}{\sum_{s\in N_{\mathcal{G}_{t}}^{v}}exp(e_{sv}^{t})}\end{split} \tag{7}\]
where \(||\) denotes the concatenation operator, and \(a_{t}\) and \(W_{t}\) are learnable parameters in the self-attention mechanism that vary with node type \(t\). \(\alpha\) represents the learned attention weights, which can be adaptively adjusted according to the performance of downstream tasks. Inspired by [17, 35], we introduce \(\delta\) as an edge residual into \(\alpha\):
\[\eta_{uv}^{t}=(1-\beta)\alpha_{uv}^{t}+\beta\tilde{\delta}_{uv}^{t} \tag{8}\]
\(\beta\) is a hyperparameter that controls how much structural semantic information we add into the attention weight. At each layer of intra-type aggregation, the attention weight between nodes perceives the structural semantic information in the fused meta-path graph, and the learnable parameters refine the structural semantic weight, helping the target node to select neighbors more effectively.
Then, we can perform intra-type aggregation as follows:
\[h_{v}^{intra}=\begin{cases}\tilde{h}_{v}&\phi(v)\notin\mathcal{T}^{\prime}\\ \sigma(\sum_{u\in N_{\mathcal{G}_{t}}^{v}}(\eta_{uv}^{t}\cdot\tilde{h}_{u}))& \phi(v)\in\mathcal{T}^{\prime}\end{cases} \tag{9}\]
For a node \(v\) of type \(t\in\mathcal{T}^{\prime}\), the new embedding \(h_{v}^{intra}\) is the weighted sum of its neighbors; otherwise, it stays the same. After the intra-type aggregation phase, \(h_{v}^{intra}\) is fed to the following inter-type aggregation phase.
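Putting Equations 7-9 together for a single node, the sketch below mixes GAT-style learned attention with an illustrative, pre-normalized structural semantic weight via the edge residual of Equation 8, using tanh as the activation \(\sigma\). It is a minimal stand-in under random parameters, not the released implementation.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d, beta = 4, 0.3  # beta = 0.3, as in the paper's experimental setting

# Node v and three meta-path neighbors in the fused graph (toy features).
h_v = rng.standard_normal(d)
h_u = rng.standard_normal((3, d))

# Eq. 7: e^t_uv = LeakyReLU(a^T . W [h_v || h_u]), normalized by softmax.
W = rng.standard_normal((d, 2 * d)) * 0.5
a = rng.standard_normal(d)
scores = np.array([a @ (W @ np.concatenate([h_v, u])) for u in h_u])
alpha = softmax(leaky_relu(scores))

# Eq. 8: the edge residual mixes learned and structural weights. The
# delta_norm values are an illustrative stand-in for the normalized weights.
delta_norm = np.array([0.5, 0.3, 0.2])
eta = (1 - beta) * alpha + beta * delta_norm

# Eq. 9: the new embedding is the weighted sum of neighbors.
h_intra = np.tanh(eta @ h_u)
```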
### _Meta-path-free Inter-type Aggregation_
The inter-type aggregation considers the direct neighbors \(N_{\mathcal{H}}^{v}\) of node \(v\) in the original graph \(\mathcal{H}\). Since a node and its first-order neighbors often belong to different types, the neighbors of a node reflect its attributes. Thus, the inter-type aggregation is actually a process of continuously integrating node attributes. When fusing information between types, different neighbors contribute differently to the target node. For each neighbor \(u\in N_{\mathcal{H}}^{v}\), we learn a normalized importance weight \(\alpha_{uv}\):
\[\begin{split} e_{uv}&=LeakyReLU(a^{T}\cdot W[h_{v}^{intra}||h_{u}^{intra}]),\\ \alpha_{uv}&=\frac{exp(e_{uv})}{\sum_{s\in N_{\mathcal{H}}^{v}}exp(e_{sv})},\\ h_{v}^{inter}&=\sum_{u\in N_{\mathcal{H}}^{v}}\alpha_{uv}\cdot h_{u}^{intra}\end{split} \tag{10}\]
where \(a\) and \(W\) are learnable parameters shared across all node types. Moreover, to stabilize the learning process and reduce the large variation caused by the heterogeneity of \(\mathcal{H}\), we further employ the multi-head attention mechanism. Specifically, we run \(K\) independent attention processes and concatenate their outputs:
\[h_{v}^{inter}=\|_{k=1}^{K}\sum_{u\in N_{\mathcal{H}}^{v}}\alpha_{uv}^{k}\cdot h_ {u}^{intra} \tag{11}\]
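Equation 11 can be sketched for one node as \(K\) independent heads whose outputs are concatenated. For brevity, the per-head score below is simplified to a dot product with a random vector rather than the full concatenation-based attention of Equation 10; it illustrates only the multi-head concatenation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(2)
d, K = 4, 2  # latent dimension and number of heads (illustrative)

# Intra-type embeddings of node v's immediate neighbors in H.
h_neighbors = rng.standard_normal((5, d))

# Eq. 11: K independent attention heads, outputs concatenated.
heads = []
for k in range(K):
    a_k = rng.standard_normal(d)          # simplified per-head scoring vector
    alpha_k = softmax(h_neighbors @ a_k)  # normalized weights over neighbors
    heads.append(alpha_k @ h_neighbors)   # weighted sum per head
h_inter = np.concatenate(heads)           # shape (K * d,)
```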
Fig. 4: Structural semantic weight of the meta-path APA.
### _Combination of Semantic Information_
Due to the different neighborhoods selected in the aggregation phase, intra-type aggregation and inter-type aggregation actually extract the semantic information encoded in heterogeneous graphs from different perspectives. To explicitly capture the information from these two perspectives, we further propose an information combination method.
\[z_{v}=COMBINE(h_{v}^{intra},W_{m}\cdot h_{v}^{inter}) \tag{12}\]
First, we unify the dimensions of \(h_{v}^{intra}\) and \(h_{v}^{inter}\), where \(W_{m}\in\mathbb{R}^{Kd\times d}\) is the learnable transformation matrix. Then, we combine the two embeddings by addition, concatenation, or other pooling operations. As described before, the immediate neighbors of a node contain its key attributes, while the meta-path-based neighbors supply high-order connectivity information within the same node type. Hence, we choose concatenation as the COMBINE function, i.e., we view the representations from the intra-type and inter-type aggregation phases as the characteristics of different channels. After the two-phase aggregation, we obtain the embedding \(z_{v}\), which fuses high-order intra-type information and direct inter-type information and can be further used in downstream tasks. Algorithm 1 shows the details of HAGNN.
```
Input: Heterogeneous graph H; selected types T'; meta-paths P = {P_t | t in T'};
       node features {x_v, v in V}; number of intra-type aggregation layers
       L_intra; number of inter-type aggregation layers L_inter.
Output: Embeddings of target nodes.

for node type t in T do
    perform linear transformation for nodes of type t;
end for

/* Building the fused meta-path graph */
for t in T' do
    G_t <- empty;
    for p_t in P_t do
        construct G^{p_t};
        G_t <- G_t ∪ G^{p_t};
    end for
end for

/* Intra-type aggregation phase */
for i = 1 to L_intra do
    for t in T' do
        for v in V_t do
            for u in N^v_{G_t} do
                calculate learnable weight alpha^t_uv and structural
                semantic weight delta~^t_uv;
                combine alpha^t_uv and delta~^t_uv with the edge residual;
            end for
            calculate h^intra_v using Equation 9;
        end for
    end for
end for

/* Inter-type aggregation phase */
for i = 1 to L_inter do
    for v in V do
        calculate h^inter_v using multi-head attention;
    end for
end for

/* Combination of semantic information */
z_v = h^intra_v || (W_m · h^inter_v);
return {z_v, v in V}
```
**Algorithm 1**HAGNN
### _Complexity Analysis_
In this section, we theoretically analyze the time complexity of HAGNN during the training stage and compare it with the typical meta-path-based model HAN and the meta-path-free model SimpleHGN. Suppose the number of nodes in the heterogeneous graph is \(N\), the number of edges is \(E\), the dimension of the raw node features is \(D\), and the node embedding dimension after the linear transformation is \(D^{\prime}\). For the fused meta-path graph, the number of edges is \(E^{\prime}\) and the number of nodes is \(N^{\prime}\).
For HAGNN, the complexities of the linear transformation phase and the semantic information combination phase are both \(O(NDD^{\prime})\). In the intra-type aggregation phase, attention is calculated pairwise between nodes in the fused meta-path graph, so its complexity is \(O(E^{\prime}D^{\prime}+N^{\prime}D^{\prime 2})\). The inter-type aggregation phase is carried out on the original graph, so its time complexity is \(O(ED^{\prime}+ND^{\prime 2})\), which is consistent with GAT.
For SimpleHGN, the complexity of its linear transformation phase is \(O(NDD^{\prime})\). Since it introduces edge features, let the dimension of edge features be \(F\) and the number of edge types be \(T\); then the complexity of calculating attention and aggregating neighbors is \(O(E(D^{\prime}+F)+ND^{\prime 2}+TF^{2})\).
Overall, the complexity of HAGNN is \(O(NDD^{\prime})+O(E^{\prime}D^{\prime}+ED^{\prime}+ND^{\prime 2})\), and the complexity of SimpleHGN is \(O(NDD^{\prime})+O(TF^{2}+EF+ED^{\prime}+ND^{\prime 2})\). It can be seen that the complexities of the two models are similar; the difference lies mainly in the intra-type aggregation phase of HAGNN and the edge feature usage of SimpleHGN.
Meanwhile, the complexity of HAN is \(O(K\epsilon D^{\prime 2}+KD^{\prime 2}+ND^{\prime 2})\), where \(K\) is the number of meta-paths, \(\epsilon\) is the number of edges in each meta-path-based graph. Thanks to the fused meta-path graph, HAGNN can reduce redundant edges and avoid double-layer attention. The experimental results in Section V-D reveal that HAGNN can achieve better performance with higher efficiency than existing HGNNs.
## V Experiments
In this section, we conduct extensive experiments to answer the following questions:
* **RQ1**: How is the effectiveness of the proposed HAGNN compared with existing heterogeneous GNN models?
* **RQ2**: What is the impact of each major component of HAGNN?
* **RQ3**: How about the efficiency of HAGNN?
* **RQ4**: How to evaluate the quality of node representations learned by HAGNN in a visual way?
* **RQ5**: How robust is HAGNN to hyperparameter?
* **RQ6**: Are meta-paths or variants still useful in heterogeneous GNNs? [17] How to select suitable meta-paths?
### _Experimental Setup_
#### V-A1 Experimental Setting
All experiments are conducted under the recently proposed Heterogeneous Graph Benchmark (HGB) [17]. HGB provides a unified data split and data pre-processing to ensure fair comparison. In the node classification task, the node labels in each dataset are split into 24% for training, 6% for validation, and 70% for testing. In the link prediction task, the test set uses 2-hop neighbors as negative samples. To prevent data leakage, the evaluation metrics are obtained by submitting predictions to the HGB website2. All experiments are run on a single GPU (NVIDIA Tesla V100) with 32 GB memory.
Footnote 2: [https://www.biendata.xyz/competition/hgb-1/](https://www.biendata.xyz/competition/hgb-1/)
#### V-A2 Datasets
We use five real-world datasets for node classification, node clustering, and link prediction. The statistics of datasets are summarized in Table IV.
* **DBLP** is a computer science bibliography website containing author (A), paper (P), term (T), and conference (C).
* **IMDB** is a movie website, which contains movie (M), director (D), actor (A), and keyword (K).
* **Freebase** is a huge knowledge graph with book (B), film (F), music (M), organization (O), business (U), etc.
* **LastFM** is an online music website containing user (U), artist (A), and tag (T). The target edge type is user-artist.
* **PubMed** is a biomedical literature library, which has gene (G), disease (D), chemical (C), and specie (S). The target is to predict the connection between diseases.
#### V-A3 Implementation Details
The parameters are randomly initialized. We use Adam [36] to optimize parameters. \(\beta\) in intra-type aggregation is set to 0.3. We set the number of intra-type aggregation layers to 2 for all datasets, the number of inter-type aggregation layers to 5 for the IMDB dataset and 2 for other datasets. All GNN models are implemented with PyTorch. The selected meta-paths are listed in Table X (Section V-G).
### _Performance Comparison (RQ1)_
#### V-B1 Node Classification
We select baselines depending on whether or not the meta-path is used.
* **Meta-path-based:** HAN [10], GTN [12], MAGNN [11], HetSANN [37], HGNN-AC [38], R-HGNN [31], and CKD [13].
* **Meta-path-free:** R-GCN [20], HGT [39], HetGNN [32], GCN [26], GAT [18], and SimpleHGN [17].
We run these baselines using the official codes. We adopt the Macro F1 and Micro F1 metrics for node classification. All models are run five times and the mean and standard deviation are reported. The results are shown in Table V.
Neither the meta-path-based models nor the meta-path-free models always achieve better performance on all datasets. Among the meta-path-based models, HAN, MAGNN, and R-HGNN perform better on the DBLP dataset, while R-HGNN and CKD perform better on the IMDB dataset. Nevertheless, in contrast with the meta-path-based models, meta-path-free models such as the commonly used GAT and GCN can also achieve competitive or even better performance. Also, SimpleHGN serves as a strong baseline, indicating that it is necessary to fuse different types of information in the node classification task. The results show that intra-type aggregation and inter-type aggregation have their own advantages and disadvantages. For relatively large datasets such as Freebase, some existing models run out of memory due to either focusing too much on meta-paths (e.g., MAGNN, GTN, HetSANN) or excessive model complexity (e.g., HetGNN).
In comparison, HAGNN not only runs on large datasets but also consistently achieves the best performance on all three datasets, which demonstrates that intra-type and inter-type aggregation are complementary and that making good use of type information in heterogeneous graphs is crucial.
#### Iv-B2 Node Clustering
To verify the quality of node embeddings generated by different models, we conduct node clustering on the IMDB and DBLP datasets. The labeled nodes (i.e., movies in IMDB and authors in DBLP) are clustered with the \(k\)-means algorithm. The number of clusters in \(k\)-means is set to the number of classes for each dataset, i.e., 3 for IMDB and 4 for DBLP. We employ the normalized mutual information (NMI) and the adjusted rand index (ARI) as evaluation metrics. From Table VI, we see that HAGNN consistently outperforms all other baselines on the node clustering task. Note that all models perform significantly worse on IMDB than on DBLP. This is due to the noisy labels of movies in IMDB: every movie node in the original IMDB dataset has multiple genres, and we only choose the first one as its class label [11]. As shown in Table VI, the traditional heterogeneous models do not have many advantages over the traditional homogeneous models in node clustering, while the node embeddings generated by HAGNN have higher quality, leading to better clustering results.
#### Iv-B3 Link Prediction
We select widely used link prediction models as baselines, including RGCN, GATNE [40], HetGNN, MAGNN, HGT, GCN, GAT, and SimpleHGN. The link prediction task is performed on the LastFM and Pubmed datasets. We adopt the MRR and ROC-AUC metrics and the mean and standard deviation of five runs are reported. From Table VII, we see that GCN, GAT, and other direct neighbor aggregation models perform better than MAGNN and HGT on the LastFM dataset. This is mainly due to the weak heterogeneity of LastFM, which contains only three node types and three edge types. The heterogeneous graph models on the PubMed dataset can achieve comparable results. In comparison, HAGNN outperforms existing models on both datasets, especially on PubMed, where the MRR metric improves by 2% compared to SimpleHGN.
Overall, HAGNN achieves better performance in different tasks and different datasets. Moreover, HAGNN can easily handle larger datasets.
### _Ablation Study (RQ2)_
We design the following variants of HAGNN.
* **HAGNN-wo-inter** removes the inter-type aggregation phase and directly performs downstream tasks on the embeddings obtained after the intra-type aggregation phase.
* **HAGNN-wo-intra** removes the intra-type aggregation phase and aggregates the direct neighbors of all nodes.
* **HAGNN-wo-sw** follows the two-phase framework, but in the intra-type aggregation phase, the semantic structure information is not utilized.
* **HAGNN-wo-fused** conducts intra-type aggregation on meta-path-based graphs rather than the fused meta-path graph.
* **HAGNN-wo-combine** removes the combination of the embedding obtained in intra-type aggregation and inter-type aggregation.
* **HAGNN-combine-add** uses the _add_ operation to combine the embeddings obtained in the intra-type and inter-type aggregation phases.
Table VIII shows the performance comparison between HAGNN and its six variants on the node classification (i.e., DBLP and IMDB) and link prediction (i.e., PubMed) tasks. HAGNN-wo-intra performs better than HAGNN-wo-inter on the DBLP dataset, while HAGNN-wo-inter is much better than HAGNN-wo-intra on the PubMed dataset. Regardless of the variant, its performance is far inferior to that of HAGNN. That is, both the higher-order intra-type information and the direct inter-type information of the target nodes are important, but their importance varies across datasets. The proposed two-phase aggregation method in HAGNN can leverage the information within and between types simultaneously.
Moreover, the performance of HAGNN-wo-sw is inferior to that of HAGNN, which indicates that the proposed structural semantic weight helps aggregate intra-type information. Also, HAGNN-wo-fused performs worse than HAGNN, especially on the IMDB and PubMed datasets. The main reason is that the information redundancy between multiple meta-path-based graphs has a negative effect on performance. In addition, HAGNN-wo-combine and HAGNN-combine-add perform worse than HAGNN, which indicates that the concatenation-based combination works better.
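The difference between the concatenation used by HAGNN and the _add_ variant can be sketched in a few lines of NumPy (the dimensions and projection matrix below are illustrative, not the paper's actual ones):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 8                        # 4 target nodes, embedding dim 8
h_intra = rng.normal(size=(n, d))  # embeddings from intra-type aggregation
h_inter = rng.normal(size=(n, d))  # embeddings from inter-type aggregation

# HAGNN (concatenation): preserves both views but doubles the width ...
h_concat = np.concatenate([h_intra, h_inter], axis=1)   # shape (4, 16)
# ... so a projection maps back to the model dimension.
W = rng.normal(size=(2 * d, d))
z_concat = h_concat @ W                                 # shape (4, 8)

# HAGNN-combine-add: element-wise sum keeps the width but mixes the views.
z_add = h_intra + h_inter                               # shape (4, 8)

assert h_concat.shape == (n, 2 * d)
assert z_concat.shape == z_add.shape == (n, d)
```

Concatenation lets the projection learn how much of each view to keep per dimension, whereas addition fixes the mix to an equal-weight sum, which matches the ablation finding that concatenation works better.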
### _Efficiency Comparison (RQ3)_
We further evaluate and compare the efficiency of HAGNN and SimpleHGN from the following perspectives: parameter size, FLOPs, and runtime per training epoch. Let HAN-HAGNN denote the heterogeneous GNN that uses HAN's separate meta-path-based graphs instead of the proposed fused meta-path graph in the intra-type aggregation phase. From Table IX, we see that HAGNN achieves the best efficiency among the three models, which is consistent with the complexity analysis in Section IV-F. Moreover, during the training process, HAGNN is also faster than SimpleHGN and HAN-HAGNN. Although HAGNN is composed of two phases (i.e., intra-type and inter-type aggregation phases), it achieves better performance with higher efficiency than existing models with one aggregation phase, owing to the fused meta-path graph and the structural semantic aware aggregation mechanism.
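As a rough illustration of how the parameter-size and FLOPs entries of such a comparison can be estimated for a single dense transformation (a generic back-of-envelope count, not the paper's exact accounting):

```python
def dense_cost(n_nodes, d_in, d_out):
    """Parameter count and multiply-add FLOPs for one dense layer
    (weights + bias) applied to n_nodes feature vectors."""
    params = d_in * d_out + d_out
    flops = 2 * n_nodes * d_in * d_out  # one multiply + one add per weight
    return params, flops

params, flops = dense_cost(n_nodes=10_000, d_in=64, d_out=64)
print(params, flops)  # 4160 81920000
```

Summing such terms over all layers (plus the attention computations) gives the totals reported per model; runtime per epoch is then measured empirically on top of this.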
### _Visualization (RQ4)_
#### IV-E1 Quality of node representation
To demonstrate the quality of node representations, we project the low-dimensional node embeddings into the two-dimensional space using t-SNE [41], and the visualization results are shown in Figure 5. Different colors represent different classes. We can see that HAGNN generates clearer classification boundaries than commonly used models (i.e., MAGNN and GAT). Moreover, the points within the same class are distributed more closely, indicating that the node embeddings generated by HAGNN have higher quality. The clustering results of other baselines can be seen in Table VI. Overall, the visualization results further demonstrate the effectiveness of HAGNN.
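A minimal sketch of this visualization step with scikit-learn's t-SNE, using random vectors in place of the learned embeddings:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in for learned node embeddings: 150 nodes, 64-dim, 3 classes.
emb = np.concatenate([rng.normal(loc=c, size=(50, 64)) for c in (-2.0, 0.0, 2.0)])

# Project to 2-D for plotting, as in Figure 5.
xy = TSNE(n_components=2, perplexity=30, init="random",
          random_state=0).fit_transform(emb)
print(xy.shape)  # (150, 2)
```

The resulting `xy` coordinates are what gets scattered, colored by class label, to inspect how cleanly the classes separate.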
#### IV-E2 Difference Between Intra-type Aggregation and Inter-type Aggregation
Due to the different neighborhoods selected in the aggregation phase, intra-type aggregation and inter-type aggregation actually extract the semantic information encoded in heterogeneous graphs from different perspectives. To further verify the difference between the information obtained in the two phases, we project the node embeddings of the target type generated by each phase into a three-dimensional space using t-SNE. The result is illustrated in Figure 6. We see that the node embeddings from intra-type aggregation are denser, while those from inter-type aggregation are looser. More importantly, the two types of embeddings are distributed on different planes, which indicates that they extract the semantic information from different perspectives.

Fig. 5: Node visualization on the DBLP dataset.
### _Hyperparameter Sensitivity (RQ5)_
\(\beta\) in Equation 8 controls how much structural semantic information is added into the attention weight. We further evaluate the sensitivity of HAGNN to \(\beta\), whose value is selected from \([0.1,0.2,0.3,0.4,0.5]\). Figure 7 shows the performance comparison of HAGNN with different \(\beta\). It can be seen that HAGNN is very robust to \(\beta\), and the performance changes are insignificant. The main reason is that we use the learnable adaptive weight and the structural semantic weight simultaneously, which increases the robustness of HAGNN.
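Equation 8 is not reproduced in this section; assuming it mixes the learnable score and the structural semantic weight convexly before the softmax normalization (an assumption for illustration only), the role of \(\beta\) can be sketched as:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

beta = 0.3
learned = np.array([0.8, 0.1, -0.4, 0.5])    # learnable adaptive scores
structural = np.array([1.2, 0.2, 0.2, 0.9])  # structural semantic weights

# beta controls how much structural information enters the attention
# before normalizing over the neighborhood.
alpha = softmax((1 - beta) * learned + beta * structural)
print(alpha)  # a valid attention distribution (non-negative, sums to 1)
```

Because both signals contribute for every \(\beta\) in \([0.1, 0.5]\), moderate changes in \(\beta\) only reweight two already-informative terms, which is consistent with the observed robustness.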
### _Discussion on Meta-path (RQ6)_
Previous work [17] has questioned the effect of meta-paths. The performance improvement achieved by HAGNN verifies that meta-paths are still indispensable for heterogeneous graphs. Table X shows the meta-paths selected in HAGNN. To verify the effectiveness of meta-paths, we further conduct experiments with different meta-paths in Table XI. As shown in Table XI, for the DBLP dataset, meta-paths need to be selected for all types in \(\mathcal{T}^{\prime}\); if only one type is selected, the performance degrades. Moreover, the selection of meta-paths should be comprehensive, as reflected on the IMDB dataset.
Another question is how to choose a suitable meta-path. Meta-paths are important, but choosing them arbitrarily may damage model performance. We divide meta-paths into two categories: **strong-relational** and **weak-relational meta-paths**. Strong-relational meta-path-based graphs tend to be sparse, while weak-relational meta-path-based graphs are denser. Take the DBLP dataset as an example. The meta-path 'APA' represents the co-author relationship, which is a strong relationship, and the average degree of the 'APA'-based graph is only 3. 'APTPA' is a weak relationship, because two articles sharing a keyword do not indicate that the two authors are closely related; the average degree of the corresponding meta-path-based graph is 1232.
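The average degree of a meta-path-based graph can be computed directly from the biadjacency matrices along the path; a toy 'APA' example (matrices invented for illustration):

```python
import numpy as np

# Toy author-paper biadjacency (3 authors x 2 papers).
A_ap = np.array([[1, 0],
                 [1, 1],
                 [0, 1]])

# 'APA' meta-path graph: authors linked through a shared paper.
apa = A_ap @ A_ap.T          # path counts between author pairs
adj = (apa > 0).astype(int)
np.fill_diagonal(adj, 0)     # drop self-loops

avg_degree = adj.sum() / adj.shape[0]
print(avg_degree)  # a sparse, strong-relational graph in this toy case
```

Chaining more biadjacency products (e.g., author-paper-term-paper-author for 'APTPA') yields progressively denser graphs, which is exactly the sparsity contrast described above.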
We believe that sparse strong relationships struggle to provide enough information in aggregation, while dense weak relationships provide too much redundant information. Thus, we choose a combination of strong and weak relationships. Moreover, owing to the proposed fused meta-path graph and the structural semantic aware aggregation mechanism, HAGNN can effectively avoid redundant information and incorporate the weight of strong relationships into the learning of the attention weights.
## VI Conclusion
To utilize the intra-type and inter-type information in heterogeneous graphs, we proposed a novel hybrid aggregation framework called HAGNN, which consists of two phases: meta-path-based intra-type aggregation and meta-path-free inter-type aggregation. To make the intra-type aggregation more effective, we proposed the fused meta-path graph to capture the meta-path neighbors for a specific node type. Moreover, we proposed a structural semantic aware aggregation mechanism to improve the learning of attention weights in the intra-type aggregation. Extensive experimental results on five heterogeneous graph datasets reveal that HAGNN outperforms the existing heterogeneous GNNs on node classification and link prediction tasks. Finally, we discussed the role of meta-paths in heterogeneous GNNs.

Fig. 6: Node embedding visualization of intra-type and inter-type aggregation phases on the DBLP dataset.

Fig. 7: Performance comparison of HAGNN under different \(\beta\).
## Acknowledgment
This work was supported by the National Natural Science Foundation of China (#62102177), the Natural Science Foundation of Jiangsu Province (#BK20210181), the Key R&D Program of Jiangsu Province (#BE2021729), Open Research Projects of Zhejiang Lab (#2022PG0AB07), and the Collaborative Innovation Center of Novel Software Technology and Industrialization, Jiangsu, China.
|
2310.02069 | TOaCNN: Adaptive Convolutional Neural Network for Multidisciplinary
Topology Optimization | This paper presents an adaptive convolutional neural network (CNN)
architecture that can automate diverse topology optimization (TO) problems
having different underlying physics. The architecture uses the encoder-decoder
networks with dense layers in the middle which includes an additional adaptive
layer to capture complex geometrical features. The network is trained using the
dataset obtained from the three open-source TO codes involving different
physics. The robustness and success of the presented adaptive CNN are
demonstrated on compliance minimization problems with constant and
design-dependent loads and material bulk modulus optimization. The architecture
takes the user's input of the volume fraction. It instantly generates optimized
designs resembling their counterparts obtained via open-source TO codes with
negligible performance and volume fraction error. | Khaish Singh Chadha, Prabhat Kumar | 2023-10-03T14:12:36Z | http://arxiv.org/abs/2310.02069v1 | # TOaCNN: Adaptive Convolutional Neural Network for Multidisciplinary Topology Optimization
###### Abstract
This paper presents an adaptive convolutional neural network (CNN) architecture that can automate diverse topology optimization (TO) problems having different underlying physics. The architecture uses the encoder-decoder networks with dense layers in the middle which includes an additional adaptive layer to capture complex geometrical features. The network is trained using the dataset obtained from the three open-source TO codes involving different physics. The robustness and success of the presented adaptive CNN are demonstrated on compliance minimization problems with constant and design-dependent loads and material bulk modulus optimization. The architecture takes the user's input of the volume fraction. It instantly generates optimized designs resembling their counterparts obtained via open-source TO codes with negligible performance and volume fraction errors.
Keywords: Topology optimization, Machine learning, Convolutional neural network, Standard architecture
## 1 Introduction
Topology optimization (TO) is a computational technique that determines the efficient material distribution within a design domain while optimizing an objective under predetermined constraints and boundary conditions [1]. With the remarkable progress in additive manufacturing processes, TO's utility and demand steadily increase for various design problems [2]. The TO process typically involves four stages: (i) parameterizing the given design domain, (ii) conducting finite element analyses for the relevant physical aspects, (iii) assessing the objective function, constraints, and their corresponding sensitivities, and (iv) iterating through the optimization procedure. TO becomes notably intricate and computationally demanding, especially when dealing with multi-physics problems [3] and design-dependent loads [4, 5]. These computational requirements present significant challenges and hinder the broader practical implementation of TO methods [3]. This paper introduces an adaptive convolutional neural network-based architecture for multidisciplinary design optimization to complement traditional TO methods. Once trained, this architecture instantly provides optimized designs for the boundary and force conditions for which it was trained.
Integration of deep learning into optimization tasks has emerged as a promising avenue [6, 7, 8, 9]. TO codes generate visual representations of optimized designs in the form of images. On the other hand, Convolutional Neural Networks (CNNs) have demonstrated exceptional proficiency in extracting valuable features and discerning intricate patterns and relationships within image data. This is one of the key reasons for their adoption in automating the TO process. Previous attempts at employing CNNs for TO have shown remarkable performance [7, 10, 8]. However, the neural network architectures developed in the aforementioned references are tailored to a specific optimization task and lack the generalization ability to adapt to new optimization problems. To fill this gap, in this work, we propose an adaptive CNN architecture capable of automating TO across various problems. The efficiency of the proposed architecture is demonstrated on three distinct TO problems with different physics: compliance minimization with constant and design-dependent loads, and material bulk modulus optimization (Fig. 1).
## 2 Methodology
This section presents the problem description, the proposed neural network architecture, and the training data generation methodology.
### Problem description
As mentioned above, we select a set of three problems involving different physics to train the proposed adaptive CNN architecture and obtain the optimized designs. First, the compliance minimization problem for designing a cantilever beam (Fig. 1(a)) is taken. The data pertaining to it is generated using the top88 code [11]. Second, the compliance minimization problem for the loadbearing arch structure with the design-dependent load (Fig. 1(b)) is considered. The TOPress code [12] is used to generate the data for the arch problem. Design-dependent loads within a TO setting pose several challenges [4, 5, 12], as these loads change their magnitude, direction, and location as TO evolves. However, once the architecture is trained, those challenges no longer exist. Third, we solve the material bulk modulus optimization problem per [13]. The topX code [13] is used to generate the training data.

Figure 1: Problem descriptions
### Neural Network Architecture
We propose a deep learning model using a convolutional neural network. The network contains an encoder-decoder architecture with dense layers added at the bottleneck region. The encoder and decoder parts of the network are purely convolutional, whereas the dense block is adaptive: one can change the number of neurons \(n\) in the middle layer of the dense layers block (Fig. 2), termed the "adaptive layer," as per the complexity of the geometrical features of the optimized designs. The architecture combines the strengths of convolutional layers for feature extraction from images and fully connected layers (dense layers) for relatively more abstract, high-level processing. The three main parts of the proposed architecture are discussed below.

Figure 2: General Architecture of the Adaptive Convolutional Neural Network (CNN)
**Encoder network** It plays a prominent role in extracting meaningful information from the training data while reducing the dimensionality of the input data. Herein, the input image has a size of \((100\times 100\times 1)\), which is downsampled by the encoder network to a size of \((5\times 5\times 512)\). This is achieved by three successive convolutional and max pooling operations (Fig. 2). All the convolution operations performed are "same" convolutions to ensure that the information at the edges of the input image is fully considered in the output feature map. Max-pooling operations are responsible for the down-sampling of the image. The stride value of each convolution operation is kept as \((1\times 1)\), whereas that for the max-pooling is kept equal to the filter size.
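The individual pool sizes are not all listed in the text; one combination consistent with the stated \(100\to 5\) reduction over three same-convolution plus max-pooling stages is \((2,2,5)\) (an assumption for illustration), as this shape-arithmetic sketch shows:

```python
def encoder_spatial_sizes(size, pool_sizes):
    """Spatial size after each (same-conv + max-pool) stage: 'same'
    convolutions keep the size, and each pool divides it by its
    filter size (pool stride equals filter size, as in the text)."""
    sizes = [size]
    for p in pool_sizes:
        size //= p
        sizes.append(size)
    return sizes

# Assumed pool sizes (2, 2, 5) reproduce the 100 -> 5 reduction of Fig. 2.
print(encoder_spatial_sizes(100, (2, 2, 5)))  # [100, 50, 25, 5]
```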
**Dense layers** The output of the encoder network, of size \((5\times 5\times 512)\), is flattened to create the first dense layer with 12800 neurons (Fig. 2); this is followed by the optional adaptive dense layer (indicated by the dotted boundary in Fig. 2) and another dense layer having 12800 neurons (Fig. 2). The adaptive layer equips the network with the capability to automate a broad spectrum of optimization tasks. The choice to include the adaptive layer, and the number of neurons in that layer, should be based on the complexity of the optimized designs for the particular TO problem.
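The adaptive layer size \(n\) directly drives the trainable-parameter count of the dense block; a quick count for the \(12800\to n\to 12800\) stack (weights plus biases only; \(n=8000\) is one value explored later in the paper):

```python
def dense_block_params(flat=12800, n=8000):
    """Weights + biases for flat -> n -> flat fully connected layers."""
    layer1 = flat * n + n       # 12800 -> n  (adaptive layer)
    layer2 = n * flat + flat    # n -> 12800
    return layer1 + layer2

print(f"{dense_block_params():,}")  # 204,820,800 trainable parameters
```

This is why \(n\) is an effective knob for model capacity: the dense block dwarfs the convolutional layers in parameter count.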
**Decoder network** The output of the dense layers is reshaped to form a multi-channel image of size \((5\times 5\times 512)\), which is then up-scaled to \((100\times 100\times 1)\) via successive transpose convolution operations (Fig. 2). The filter size and number of filters used in each transpose convolution are mentioned in Fig. 2. Each transpose convolution layer has a stride value equal to the filter size. We provide zero padding to both input and output for all the transpose convolution operations.
We use "ReLU" activation function for the convolution and transpose convolution. Mean squared error is the cost function. "adam" optimizer is employed for efficient training of the neural network.
### Generation of training data
The proposed architecture takes input data as images. The top88 [11], TOPress [12], and topX [13] MATLAB codes are used to generate target images (optimized designs, cf. Fig. 3) of size \(100\times 100\) (i.e., \(100\times 100\) finite elements are used to parameterize the design domains) for the three problems by varying the volume fraction from \(0.01\) to \(0.95\) with an increment of \(0.01\).
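The sweep of target designs can be reproduced by iterating the volume fraction exactly as described:

```python
# Volume fractions used to generate training targets: 0.01 to 0.95
# in increments of 0.01, giving 95 optimized designs per problem.
volume_fractions = [round(0.01 * i, 2) for i in range(1, 96)]
print(len(volume_fractions), volume_fractions[0], volume_fractions[-1])  # 95 0.01 0.95
```

Each value would be passed to the respective MATLAB code (top88, TOPress, or topX) to produce one \(100\times 100\) target image.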
## 3 Results and discussion
This section demonstrates the efficacy of the proposed architecture on the three problems involving different physics (Fig. 3) that were used to train it.
Table 1: Optimized cantilever beam results for different numbers of neurons \(n\) in the adaptive layer (Fig. 2). Each cell lists \(\{V_{\rm err}\), \(Obj_{\rm err}\}\) in %, i.e., the volume and objective errors between the results obtained by the proposed CNN and the target output (generated by the MATLAB code used). The Input and Target image columns of the original table are omitted.

| \(V_{f}\) | \(n=1000\) | \(n=2000\) | \(n=4000\) | \(n=8000\) | \(n=12000\) | \(n=16000\) |
| --- | --- | --- | --- | --- | --- | --- |
| 0.03 | {3.3, 2.76} | {17, 7.9} | {10.3, 15.02} | {1, 2.48} | {1, 2.76} | {0.33, 0.89} |
| 0.25 | {0.6, 0.71} | {0.12, 0.45} | {0.12, 0.31} | {0.44, 0.41} | {0.024, 0.34} | {0.6, 0.56} |
| 0.50 | {0.22, 0.056} | {0.04, 0.28} | {0.72, 0.28} | {0.28, 0.037} | {0.08, 0.09} | {0.16, 0.14} |
| 0.75 | {0.08, 0.13} | {0.41, 0.47} | {0.6, 0.68} | {0.0, 0.01} | {0.34, 0.34} | {0.36, 0.41} |
Figure 3: Training data set: Input and target images. (a) Input image. Target images: (b) Cantilever beam (Fig. 1(a)), (c) Loadbearing arch (Fig. 1(b)), and (d) Material bulk modulus (Fig. 1(c))
### Cantilever beam problem
The cantilever beam design domain (Fig. 1(a)) is solved herein. First, the proposed CNN architecture without the adaptive layer is trained for 2000 epochs with the data generated per Sec. 2.3. However, this network's optimized designs (outputs) do not meet the desired quality. After a thorough analysis of the training process, it is noted that, despite prolonging the network's training for a substantial number of epochs, the loss function remains persistently elevated and shows no substantial improvement with further training. These designs feature intricate patterns, and this architecture lacks sufficient trainable parameters to effectively capture the underlying patterns embedded within the training data. To circumvent this behavior and enhance the model's learning capability, we introduce an adaptive layer containing \(n\) neurons within the dense layers (Fig. 2). Users can change \(n\) as per the complexity of the optimized geometrical features.
Table 1 depicts the optimized results with different \(n\). Errors in volume and performance are denoted by \(\{V_{\text{err}},\,Obj_{\text{err}}\}\) in the table. The errors with \(n=8000\) are low, which shows that the optimized designs obtained by the proposed CNN are in close agreement with those generated via the MATLAB code. However, using \(n=8000\) in the adaptive layer may not always give low errors for different optimization problems. To demonstrate the diversity and capability of the proposed network, we next present the design-dependent loadbearing arch structure [12] and material bulk modulus optimization problems [13].
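The exact error definitions are not given in the text; assuming both are relative percentage errors against the MATLAB-generated target (an assumption for illustration), a sketch for \(V_{\text{err}}\) on toy \(100\times 100\) designs:

```python
import numpy as np

def relative_error_pct(predicted, target):
    """Percentage error of a scalar quantity relative to the target."""
    return 100.0 * abs(predicted - target) / abs(target)

# Toy 100x100 binary designs: volume fraction = mean material density.
target = np.zeros((100, 100)); target[:, :50] = 1.0   # V_f = 0.50
cnn = np.zeros((100, 100));    cnn[:, :49] = 1.0      # V_f = 0.49

v_err = relative_error_pct(cnn.mean(), target.mean())
print(f"V_err = {v_err:.1f}%")  # V_err = 2.0%
```

\(Obj_{\text{err}}\) would be computed the same way, with the compliance (objective) values of the two designs in place of the volume fractions.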
### Pressure loadbearing arch structure
The design domain for the pressure loadbearing arch structure is depicted in Fig. 1(b). Though the physics involved is complex [4, 12], the geometrical features of the optimized arch are simple. Therefore, \(n=0\) is used, i.e., the adaptive layer is omitted while training the proposed network in this case.
Figure 4 shows the optimized loadbearing arch designs obtained for different volume fractions (\(V_f\)) using the proposed neural network. The output results resemble those obtained via the TOPress code [12] with less than 1% \(V_{\text{err}}\), i.e., a negligible error. Developing such a network for 3D design-dependent problems [14] can be one of the future research directions.
### Material bulk modulus optimization problem
Next, the material bulk modulus optimization problem is solved (Fig. 1(c)). This is considered one of the more involved problems in TO [13].
Noting the complexity of the optimized designs, the adaptive layer with \(n=4000\) is employed while training the network. The outputs of the network for different volume fractions are depicted in Fig. 5. The volume fraction error is negligible.
## 4 Conclusion
This paper proposes an adaptive deep Convolutional Neural Network approach to tackle the TO problem. The architecture employs an encoder-decoder network with dense layers, wherein an adaptive layer with \(n\) neurons is introduced to help capture the complex geometrical features of the optimized designs. Users can vary \(n\) as per the desired level of accuracy. Three publicly available MATLAB codes are used to generate the data for training purposes. The efficacy and success of the developed CNN architecture are tested by generating optimized designs for compliance minimization problems with constant and design-dependent loads and for material bulk modulus optimization. Once the network is trained for a certain number of epochs, it gives the sought optimized designs instantaneously, with negligible error in volume with respect to the target designs.
The proposed model has a fixed domain of \(100\times 100\), and output (optimized designs) can be generated explicitly for the boundary and force conditions for which the training data has been provided. Tapping the power of deep neural networks, generalizing the proposed network with respect to design domain size, and obtaining output results for boundary and force conditions that were not used in the training data open up exciting avenues for further research.

Figure 4: Optimized loadbearing arch structures at different volume fractions. (a) \(V_f=0.25\) (b) \(V_f=0.50\). \(V_{\text{err}}\) indicates the volume fraction error with respect to the target results.

Figure 5: Results for material bulk modulus optimization for different volume fractions. (a) \(V_f=0.25\) (b) \(V_f=0.50\).
|
2308.05202 | Learnable Gabor kernels in convolutional neural networks for seismic
interpretation tasks | The use of convolutional neural networks (CNNs) in seismic interpretation
tasks, like facies classification, has garnered a lot of attention for its high
accuracy. However, its drawback is usually poor generalization when trained
with limited training data pairs, especially for noisy data. Seismic images are
dominated by diverse wavelet textures corresponding to seismic facies with
various petrophysical parameters, which can be suitably represented by Gabor
functions. Inspired by this fact, we propose using learnable Gabor
convolutional kernels in the first layer of a CNN network to improve its
generalization. The modified network combines the interpretability features of
Gabor filters and the reliable learning ability of original CNN. More
importantly, it replaces the pixel nature of conventional CNN filters with a
constrained function form that depends on 5 parameters that are more in line
with seismic signatures. Further, we constrain the angle and wavelength of the
Gabor kernels to certain ranges in the training process based on what we expect
in the seismic images. The experiments on the Netherland F3 dataset show the
effectiveness of the proposed method in a seismic facies classification task,
especially when applied to testing data with lower signal-to-noise ratios.
Besides, we also test this modified CNN using different kernels on
salt$\&$pepper and speckle noise. The results show that we obtain the best
generalization and robustness of the CNN to noise when Gabor kernels are used
in the first layer. | Fu Wang, Tariq Alkhalifah | 2023-08-09T19:46:48Z | http://arxiv.org/abs/2308.05202v1 | # Learnable Gabor kernels in convolutional neural networks for seismic interpretation tasks
###### Abstract
The use of convolutional neural networks (CNNs) in seismic interpretation tasks, like facies classification, has garnered a lot of attention for its high accuracy. However, its drawback is usually poor generalization when trained with limited training data pairs, especially for noisy data. Seismic images are dominated by diverse wavelet textures corresponding to seismic facies with various petrophysical parameters, which can be suitably represented by Gabor functions. Inspired by this fact, we propose using learnable Gabor convolutional kernels in the first layer of a CNN network to improve its generalization. The modified network combines the interpretability features of Gabor filters and the reliable learning ability of original CNN. More importantly, it replaces the pixel nature of conventional CNN filters with a constrained function form that depends on 5 parameters that are more in line with seismic signatures. Further, we constrain the angle and wavelength of the Gabor kernels to certain ranges in the training process based on what we expect in the seismic images. The experiments on the Netherland F3 dataset show the effectiveness of the proposed method in a seismic facies classification task, especially when applied to testing data with lower signal-to-noise ratios. Besides, we also test this modified CNN using different kernels on salt\(\&\)pepper and speckle noise. The results show that we obtain the best generalization and robustness of the CNN to noise when Gabor kernels are used in the first layer.
## 1 Introduction
Seismic interpretation is a fundamental step in the seismic exploration value chain, and it is often the most critical step in the decision-making process (Dumay and Fournier, 1988). Within the seismic interpretation tasks, identifying seismic facies from seismic images plays a vital role in hydrocarbon exploration and development. Seismic facies with different petrophysical parameters often induce different seismic responses. The identification of seismic facies can be regarded as a pattern-recognition problem. The traditional manual interpretation of seismic facies is highly dependent on the skills of interpreters and it is also time-consuming. In order to overcome these limitations, multiple seismic attributes with explainable correspondence have been proposed to assist seismic interpretation. Subsequently, some machine learning methods including support vector machine (Zhao et al., 2014; Zhang et al., 2015; Wrona et al., 2018), the random forest (Ao et al., 2019), self-organizing maps (Saraswat and Sen, 2012; Zhao et al., 2017), generative topographic mapping (Roy et al., 2014), and independent component analysis (Lubo-Robles and Marfurt, 2019) use these attributes to classify seismic facies.
Recently, with the rising popularity of deep learning, many have proposed using convolutional neural networks (CNNs) to promote seismic interpretation tasks with reasonable success, like seismic facies classification (Zhao, 2018; Di et al., 2020; Feng et al., 2021; Liu et al., 2020), salt body identification (Waldeland and Solberg, 2017; Shi et al., 2019; Di and AlRegib, 2020), fault detection
[Di et al., 2018; Zhao and Mukhopadhyay, 2018; Wu et al., 2019, 2019, 2019], and horizon tracking [Wu et al., 2019, 2019]. The advantage of CNNs is their strong representation and nonlinear mapping abilities in handling large datasets, which improve the speed and accuracy of seismic facies interpretation. However, CNNs also have some disadvantages. Without enough training data pairs, they can easily overfit, resulting in poor generalization. In other words, the performance on test and especially inference data could be poor, which is a serious limitation of CNNs. One of the many suitable ways to remedy this weakness is to embed prior information into CNNs. Specifically, for seismic facies classification, the input seismic images mainly include texture features constituted by band-limited wavelets and some additive or correlated noise. Thus, we consider the Gabor filter a good prior for constraining the CNNs. Seismic data have long been represented and filtered by Gabor functions [Womack and Cruz, 1994]. For machine learning, Daugman [1985] first proposed the Gabor function to model the spatial summation properties (of the receptive fields) of simple cells in the visual cortex. Soon afterward, Gabor filters were widely used in diverse pattern analysis applications and are regarded as an efficient tool for extracting spatial local texture features. Besides, Krizhevsky et al. [2017] demonstrated that after training on real-life images, the first convolution layer of deep CNNs tends to learn Gabor-like filters. Thus, using Gabor filters in the first layer of a CNN for seismic pattern recognition problems could provide more reliable results.
In this work, encouraged by the good interpretability of the Gabor filter and the reliable learning ability of CNNs, we use a Gabor convolution kernel in the first layer of a CNN for seismic facies classification, as an example of an interpretation task. Such a modification replaces the pixel focus of the common CNN filters with a constrained function form that is in line with seismic signatures. As a result, the Gabor filter parameters are learnable, and these parameters, like wavelength and angle, provide identifiable characteristics of seismic signals. In fact, each Gabor kernel is controlled by 5 parameters with distinct meanings and can be constrained within the range we expect in the seismic images. Thus, we utilize constraints on the wavelength and angle range of the Gabor filter. We test our method on the Netherland F3 dataset to validate the features gained by replacing the conventional convolution kernel with the Gabor kernel.
Specifically, the main contributions of this paper can be summarized as follows:
* We propose a simple modified CNN with Gabor kernels in the first layer, which embeds the prior seismic texture into neural networks. We regularize certain parameters of Gabor kernels to filter the input based on our expectations of seismic images.
* Our approach provides new insights into incorporating the prior information provided by Gabor functions into CNN for seismic facies classification, and these insights could be easily adapted to other seismic data tasks.
* We evaluate the effectiveness of the CNN with Gabor kernels on seismic images with lower signal-to-noise ratios, and with added salt & pepper and speckle noise, and show the improvements in accuracy compared to the conventional CNN.
The rest of the paper is organized as follows: we first introduce the Gabor function. Next, we explain how to replace Conv kernels with Gabor kernels in the first layer of a CNN, and how to regularize the parameters of the Gabor kernels during training. Then we show the results of the conventional CNN, our modified CNN, and the modified CNN with constrained Gabor kernels. To further validate the generalization and robustness to noise, we also test on salt & pepper and speckle noise. Finally, we share our thoughts and summarize our developments in the discussion and conclusion sections.
## 2 Method
### The Gabor function
A two-dimensional Gabor function is defined as a complex sinusoidal plane wave weighted by a Gaussian filter, as follows
\[g(x,y;\lambda,\theta,\phi,\sigma,\gamma)=\exp\left(-\frac{x^{\prime 2}+ \gamma^{2}y^{\prime 2}}{2\sigma^{2}}\right)\exp\left(i\left(2\pi\frac{x^{ \prime}}{\lambda}+\phi\right)\right), \tag{1}\]
where \(x^{\prime}=x\cos\theta+y\sin\theta\) and \(y^{\prime}=-x\sin\theta+y\cos\theta\); \(i\) is the imaginary unit and \(\Theta=\{\lambda,\theta,\phi,\gamma,\sigma\}\) are the parameters controlling the shape of Gabor function. Thereinto, \(\lambda\) represents the wavelength of the sinusoidal plane wave component of the Gabor function; \(\theta\) defines the
orientation (angle) of the plane wave; \(\phi\) is the phase shift; \(\gamma\) represents the ellipticity of the Gaussian support of the Gabor function; \(\sigma\) controls the standard deviation of the Gaussian filter of the Gabor function.
Equation 1 can be rewritten in real numbers form, by splitting it into its real and imaginary parts, with the real part
\[g_{r}(x,y;\lambda,\theta,\phi,\sigma,\gamma)=\exp\left(-\frac{x^{\prime 2}+\gamma^{2}y^{\prime 2}}{2\sigma^{2}}\right)\cos\left(2\pi\frac{x^{\prime}}{\lambda}+\phi\right), \tag{2}\]
and the imaginary part
\[g_{i}(x,y;\lambda,\theta,\phi,\sigma,\gamma)=\exp\left(-\frac{x^{\prime 2}+\gamma^{2}y^{\prime 2}}{2\sigma^{2}}\right)\sin\left(2\pi\frac{x^{\prime}}{\lambda}+\phi\right). \tag{3}\]
In this work, we use the real part (equation 2) to replace the classic convolution kernel in the first layer of our CNN, as shown in Figure 1.
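For reference, the real part in equation 2 can be sampled on a discrete grid to produce a convolution kernel. Below is a minimal NumPy sketch; the 11×11 size matches the kernel size used in the experiments later in the paper, while the parameter values are purely illustrative:

```python
import numpy as np

def gabor_kernel(size, lam, theta, phi, sigma, gamma):
    """Sample the real part of the Gabor function (equation 2) on a size x size grid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate the coordinates by the orientation angle theta.
    x_p = x * np.cos(theta) + y * np.sin(theta)
    y_p = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_p**2 + gamma**2 * y_p**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_p / lam + phi)
    return envelope * carrier

# Illustrative parameter values (not taken from the paper).
k = gabor_kernel(size=11, lam=8.0, theta=0.0, phi=0.0, sigma=3.0, gamma=0.5)
```

With `theta = 0` and `phi = 0` the kernel is symmetric in both axes and peaks at 1 in the center, which is a quick sanity check on the implementation.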
### Learnable Gabor kernels in CNNs
Traditionally, we specify the Gabor function parameters in advance before using them as a filter. Gabor filters generated in this way may not be optimal for certain inputs. Besides, it is not an easy task to select suitable parameter values for the Gabor function. So in our work, all the parameters in the Gabor function are learned from the training data during the training process. In other words, we use Gabor convolution kernels that adaptively update their parameters instead of using hand-crafted parameters. Compared with classical convolution kernels in which each pixel is learned independently, the parameters \(\Theta=\{\lambda,\theta,\phi,\gamma,\sigma\}\) in the Gabor kernel constrain the relation between the pixels to produce the Gabor filter. In addition, a Gabor convolution kernel only has 5 parameters, whereas a classical convolution kernel of size \(k\) has \(k^{2}\) independent pixels to determine. Luckily, the Gabor convolution module can be embedded into any network, and we can train the 5 parameters using the gradient descent method as part of the full network, for as many CNN channels as we need to represent the input.
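To illustrate that a kernel parameterized this way is indeed optimizable by gradient descent, the toy example below recovers the orientation of a target Gabor kernel by minimizing a mean-squared error with finite-difference gradients. This is a standalone sketch, not the network training code: in the actual CNN the gradients flow through backpropagation, and the learning rate, iteration count, and fixed parameter values here are arbitrary choices:

```python
import numpy as np

def gabor(size, lam, theta, phi, sigma, gamma):
    """Real part of the Gabor function (equation 2) sampled on a size x size grid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_p = x * np.cos(theta) + y * np.sin(theta)
    y_p = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(x_p**2 + gamma**2 * y_p**2) / (2 * sigma**2)) * np.cos(2 * np.pi * x_p / lam + phi)

# Target kernel whose (unknown) orientation we want to recover.
target = gabor(11, lam=8.0, theta=0.6, phi=0.0, sigma=3.0, gamma=0.5)

def loss(theta):
    return np.mean((gabor(11, 8.0, theta, 0.0, 3.0, 0.5) - target) ** 2)

theta, lr, eps = 0.0, 0.1, 1e-5
for _ in range(800):
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)  # finite-difference gradient
    theta -= lr * grad
```

The loss on the single orientation parameter decreases steadily, which is the essential property the learnable Gabor layer relies on (in the network, all 5 parameters of every kernel are updated this way, via autodiff).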
### Constrained Gabor kernels in first layers
Due to the distinct meaning of the five parameters in the Gabor convolution kernel, we can regularize them to control (filter) the input. For seismic images, two key parameters are the wavelength and direction of the plane wave. In this work, we set a range for these parameters that is in line with what we want to include from the seismic images for facies classification. This range can also help reduce the propagation of noise into our CNN.
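One simple way to enforce such ranges during training is to project (clamp) the wavelength and angle back into their admissible intervals after every update. The paper does not spell out its exact constraint mechanism, so the clamping below and the default bounds are an assumption for illustration:

```python
import numpy as np

def project_gabor_params(lam, theta, lam_min=4.0, theta_max=np.pi / 4):
    """Clamp the wavelength to [lam_min, +inf) and the angle to [-theta_max, theta_max]."""
    lam = max(lam, lam_min)
    theta = float(np.clip(theta, -theta_max, theta_max))
    return lam, theta

# Suppose a gradient step pushed both parameters out of range:
lam, theta = project_gabor_params(lam=2.5, theta=1.1)
```

Applying such a projection after each optimizer step keeps the learned kernels within the wavelength and angle ranges expected in the seismic images.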
Figure 1: The U-Net structure with learnable Gabor kernels in the first layer.
## 3 Numerical examples
We use part of the Netherland F3 dataset (specifically, inlines [300,700] and crosslines [300,1000]), which includes six types of facies [Alaudah et al., 2019] to demonstrate the generalization abilities of our proposed method. The dataset includes six groups of lithostratigraphic units, from top to bottom consisting of the Upper North Sea group, the Middle North Sea group, the Lower North Sea group, the Rijnland/chalk group, the Scruff group, and the Zechstein group. We uniformly select 21 of the 401 inline profiles as the training dataset, while testing on the rest. To make it more challenging, we add 0.4 dB random noise to the training dataset, but add random noise with the same and lower signal-to-noise ratios to the testing dataset. The backbone network used here is a UNet with 23 layers. First, we test two conventional UNets with 3\(\times\)3 and 11\(\times\)11 Conv kernels, and a modified UNet with Gabor convolutional kernels (11\(\times\)11) without any constraints in the first layer. As for the residual layers, we use 3\(\times\)3 Conv kernels. Then we test the Gabor kernels with different angle constraints (\(\theta\in[-\pi/6,\pi/6]\), \([-\pi/4,\pi/4]\) and \([-\pi/3,\pi/3]\)) and wavelength constraints (\(\lambda\in[4,+\infty)\), \([8,+\infty)\), \([12,+\infty)\)), respectively. Besides, to further test the generalization of the Gabor kernels, we include salt & pepper and speckle noise in our tests.
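The noisy training and testing data can be reproduced generically by scaling white Gaussian noise to a prescribed SNR in dB. The paper does not publish its noise-generation code, so the following is a standard sketch rather than the authors' implementation:

```python
import numpy as np

def add_awgn(image, snr_db, rng=None):
    """Add white Gaussian noise so that 10*log10(P_signal / P_noise) == snr_db."""
    rng = np.random.default_rng(rng)
    p_signal = np.mean(image ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(p_noise), size=image.shape)
    return image + noise, noise

# Illustrative stand-in for a seismic profile (the real input would be an F3 inline slice).
img = np.random.default_rng(0).normal(size=(256, 256))
noisy, noise = add_awgn(img, snr_db=0.4, rng=1)
```

The same routine with `snr_db=-5.6` produces the noisier testing condition used below.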
Here we use the following metrics: pixel accuracy (PA), class accuracy (CA) for a single class, mean class accuracy (MCA) over all classes, frequency-weighted intersection over union (FWIU), and mean intersection over union (Mean IU) to evaluate the performance of seismic facies segmentation with various kernels in the first layer. A detailed explanation of these metrics is provided in the Appendix.
### The comparison between Gabor kernel and Conv kernel
We first display the training metric curves, including FWIU, MCA, Mean IU, PA, and loss, for the 3\(\times\)3 and 11\(\times\)11 Conv kernels and the 11\(\times\)11 Gabor kernels in Figure 2. Figure 3 shows the corresponding test metric curves for the test data with 0.4 dB random noise. During training, we observe that the FWIU and PA of the CNNs with 3\(\times\)3 and 11\(\times\)11 Conv kernels and the 11\(\times\)11 Gabor kernel are close to each other, though there is an improvement using the Gabor kernel. In terms of MCA and Mean IU, however, the CNN with the 11\(\times\)11 Gabor kernel shows noticeable improvements compared to the conventional CNNs, and it converges much faster (at 45 epochs) than those with the Conv kernels, which converge at 90 epochs. Recall that FWIU and PA (see Appendix), unlike the other measures, reflect the overall performance of the classification, while MCA and Mean IU average the scores over the different classes, which means a high score only happens when the accuracy for each class reaches a high level. Thus, these metrics demonstrate that the CNN with the Gabor kernel can quickly fit the classes with fewer samples.
At the testing stage, we observe that when the training accuracy reaches the same level, the testing accuracy of the CNN with the Gabor kernel is much better than that of the CNN with Conv kernel. This implies that the CNN with the Gabor kernel has better generalization properties.
Figure 4 visualizes the predictions for inline #355 using the trained models after 50 training epochs. It shows that with limited training epochs, the CNN with the Conv kernel could not reasonably predict the classes with limited training samples, while the CNN with the Gabor kernel could. Figure 5 visualizes the predictions for inline #375 using the trained models after 100 training epochs. For this case, all types of CNNs predict well.
To further test the generalization of the CNN with different kernels, we test these types of CNNs on testing images containing -5.6 dB white Gaussian noise, shown in Figure 6. We see that even when dealing with noisy images, whose signal-to-noise level is lower than that of the training data, the Gabor kernels still produce good performance when considering early stopping. It means that the CNN with the Gabor kernel is more robust to the noise.
In this case, we adopt early stopping and evaluate the CNNs. We show the predictions for inlines #355 and #609 in Figures 7 and 8, respectively. The CNN with the 3\(\times\)3 Conv kernel is the model trained for 36 epochs, the CNN with the 11\(\times\)11 Conv kernel is trained for 32 epochs, and the CNN with the 11\(\times\)11 Gabor kernel is trained for 50 epochs. We can again see that the CNN with the Gabor kernel is robust to noise.
Figure 4: The prediction results for inline #355 with 0.4dB Gaussian noise using the trained CNN models after 50 training epochs in Figure 2.
Figure 5: The prediction results for inline #375 with 0.4dB Gaussian noise using the trained CNN models after 50 training epochs in Figure 2.
Figure 6: The testing metric curves for the rest (validation) of the 380 inline profiles with -5.6 dB Gaussian noise, including (a)FWIU, (b)MCA, (c)Mean IU, (d)PA, and (e)loss, for the 3\(\times\)3 and 11\(\times\)11 Conv kernels and the 11\(\times\)11 Gabor kernels.
Figure 3: The testing metric curves for the rest (validation) of the 380 inline profiles with 0.4dB white Gaussian noise, including (a)FWIU, (b)MCA, (c)Mean IU, (d)PA, and (e)loss, for the 3\(\times\)3 and 11\(\times\)11 Conv kernels and the 11\(\times\)11 Gabor kernels.
Figure 7: The prediction results for inline #355 with -5.6 dB Gaussian noise using the trained CNN models with different numbers of epochs: 36 epochs for the 3\(\times\)3 Conv kernel, 32 epochs for the 11\(\times\)11 Conv kernel, and 50 epochs for the 11\(\times\)11 Gabor kernel.
### Constraining the Gabor parameters
As we know, the most obvious and important features in seismic imaging results, for interpretation, are textures, angles, and frequencies. As mentioned before, the beauty of the Gabor kernels is that we can control these parameters, as they define the Gabor basis function, to improve the performance of the CNN with Gabor kernels and also to avoid overfitting to some extent. Thus, in this section, we focus on tests where the parameters of the Gabor kernel in the CNN are constrained within a range. Specifically, we apply constraints on the angles and wavelengths of the Gabor basis function. Figure 9 shows the training metric curves of the CNN with the Gabor kernels under different constraints on the angles using the training data containing 0.4 dB white Gaussian noise. The constraints admit slightly faster convergence than no constraints. Figure 10 shows the corresponding test curves on the testing data containing -5.6 dB Gaussian noise. It is obvious that with the angle constraint, the CNN with Gabor kernels is much better than that without the constraint, avoiding overfitting to some extent. We further show in Figure 11 the predictions on inline #333 after 50 training epochs. When we constrain the angle \(\theta\in[-\pi/6,\pi/6]\), for the facies denoted by the red color, the CNN with the Gabor kernel has a hard time identifying the class where the slope of the texture is large. As we increase the range of the angles in the Gabor kernel, the CNN can identify the red class with higher accuracy.
Figure 12 shows the training metric curves for the CNN with the Gabor kernels under different constraints applied to the wavelength using the training data containing 0.4 dB Gaussian noise. The CNN with a \([12,+\infty)\) wavelength constraint converges slightly faster. Figure 13 shows the corresponding test metric curves on the data containing -5.6 dB Gaussian noise. It is obvious that with the wavelength constraint, the CNN with Gabor kernels is more stable than that without the constraint and also avoids overfitting to some extent. We further show in Figure 14 the predictions on inline #465 after 50 training epochs. When the minimum wavelength of the Gabor kernel increases, the predictions for the classes whose imaging results contain long-wavelength components are improving. The application of constraints on particular Gabor parameters helps the training of the other Gabor parameters, as we reduce the search space. So, the improvements of the CNN with a constrained Gabor kernel come not only from the constrained parameters but also from the other parameters.

Figure 8: The prediction results for inline #609 with -5.6 dB Gaussian noise using the trained CNN models with different numbers of epochs: 36 epochs for the 3\(\times\)3 Conv kernel, 32 epochs for the 11\(\times\)11 Conv kernel, and 50 epochs for the 11\(\times\)11 Gabor kernel.

Figure 10: The testing metric curves of the CNN with the Gabor kernels under different constraints of angles (no constraints, \(\theta\in[-\pi/6,\pi/6]\), \([-\pi/4,\pi/4]\) and \([-\pi/3,\pi/3]\)) using the testing data containing -5.6 dB Gaussian noise.

Figure 9: The training metric curves of the CNN with the Gabor kernels under different constraints of angles (no constraints, \(\theta\in[-\pi/6,\pi/6]\), \([-\pi/4,\pi/4]\) and \([-\pi/3,\pi/3]\)) using the training data containing 0.4 dB Gaussian noise.
Figure 11: The prediction results for inline #333 with -5.6 dB Gaussian noise using the trained CNN models with different angle constraints in the Gabor kernel after 50 epochs.
Figure 14: The prediction results for inline #465 with -5.6 dB white Gaussian noise using the trained CNN models with different wavelength constraints in the Gabor kernel after 50 epochs.
Figure 12: The training metric curves of the CNN with the Gabor kernels under different constraints of wavelengths (no-constraints, \(\lambda\in[4,+\infty)\), \([8,+\infty)\), \([12,+\infty)\)) using the training data containing 0.4 dB Gaussian noise.
Figure 13: The testing metric curves of the CNN with the Gabor kernels under different constraints of wavelengths (no-constraints, \(\lambda\in[4,+\infty)\), \([8,+\infty)\), \([12,+\infty)\)) using the testing data containing -5.6 dB Gaussian noise.
### Test on salt & pepper and speckle noise
In this section, we test the robustness of the CNN with different kernels (3\(\times\)3 Conv kernel, 11\(\times\)11 Conv kernel, and 11\(\times\)11 Gabor kernel) on different types of noise, e.g., salt & pepper noise and speckle noise. For the different kernels, the CNN models used here are the same as those in Figures 7 and 8. Figures 15 and 16 show the predictions on the imaging results with 30% salt & pepper noise for inlines #510 and #695. The CNN with the Gabor kernel obviously outperforms the others. Table 1 shows the quantitative evaluation results on the whole 3D volume excluding the training profiles. The CNN with the Gabor kernel shows better performance compared to the CNNs with Conv kernels.
As for the speckle noise, we add it by multiplying the image with random noise, where the mean and variance of the noise are 0 and 0.49, respectively. Then we test the models with different kernels. The models used here are the same as above. Figures 17 and 18 show the predictions for inlines #388 and #557. We also show the quantitative evaluation on the whole 3D volume excluding the training profiles in Table 2. The metrics demonstrate that the CNN with the Gabor kernel is robust to noise even when dealing with noise it has never seen during training (the salt & pepper and speckle noises can be regarded as out-of-distribution noise).
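The two out-of-distribution noise types can be generated as follows. The salt & pepper corruption amount and the `I + n*I` form of the speckle noise (with `n` of mean 0 and variance 0.49, as stated above) follow common conventions; the exact recipe used in the paper is not given, so treat this as an illustrative sketch:

```python
import numpy as np

def salt_and_pepper(image, amount, rng=None):
    """Corrupt a fraction `amount` of pixels: half to the minimum (pepper), half to the maximum (salt)."""
    rng = np.random.default_rng(rng)
    out = image.copy()
    mask = rng.random(image.shape)
    out[mask < amount / 2] = image.min()          # pepper
    out[mask > 1 - amount / 2] = image.max()      # salt
    return out

def speckle(image, var=0.49, rng=None):
    """Multiplicative speckle noise: I + n * I, with n ~ N(0, var)."""
    rng = np.random.default_rng(rng)
    n = rng.normal(0.0, np.sqrt(var), size=image.shape)
    return image + n * image

# Illustrative stand-in for a seismic image.
img = np.linspace(-1.0, 1.0, 64 * 64).reshape(64, 64)
sp = salt_and_pepper(img, amount=0.3, rng=0)
spk = speckle(img, rng=0)
```

With `amount=0.3` this matches the 30% salt & pepper corruption level used in Figures 15 and 16.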
## 4 Discussion
The Gabor kernel layer added to the UNet CNN architecture in the first layer provides a favorable filter for input seismic images. With its five trainable parameters and functional form, it is inherently biased against noise. The Gabor kernel also allows us to constrain some of the Gabor parameters like wavelength and angle to further control the type of data we admit to the convolutional layers. So if certain angle ranges are preferred in the facies classification, we can emphasize these angles in the network by constraining the range. Though the parameters are still learned as part of the training of the network, the learned values will be within the specified range. The network also allows us to fix some of these parameters if needed. If a certain wavelength is preferred to isolate certain classes of facies, we can set that wavelength in the network and it is not learned.
Though we show the effectiveness of having a Gabor layer for a certain network Unet on a certain task, facies classification, we feel that the Gabor layer for seismic inputs is beneficial for any network, convolutional or not, including networks like ResNet and Vision Transformers. We also speculate that the benefits go beyond facies classification tasks to other seismic tasks like salt segmentation and horizon picking and even denoising. The message here is that Gabor functions are optimal basis functions for seismic data, and having them represent input seismic data should only help in the feature extraction of seismic data.
## 5 Conclusions
Here, we proposed a modified UNet with learnable Gabor convolutional kernels for facies classification. Because Gabor functions are suitable for representing seismic wavefields and images, the features extracted by the Gabor convolutional kernels adapt easily to the seismic texture. Meanwhile, since the Gabor parameters have recognizable meanings with respect to direction and wavelength, the CNN performance can be further improved by constraining these parameters based on our expectations of them in seismic images. The experiments on the Netherland F3 dataset show the effectiveness of the proposed method. The tests on salt & pepper and speckle noise further demonstrate the good generalization and robustness of the CNN with Gabor kernels in the first layer. In many practical exploration applications of deep learning, where we train the CNN model on limited data, adopting Gabor kernels in shallow layers shows high potential for improving the model's generalization.
## 6 Appendix
To evaluate the performance of CNN models with various kernels on the F3 dataset, we use several evaluation metrics. First, we define \(G_{i}\) as the set of pixels that belong to class \(i\), and \(F_{i}\) represents the set of pixels predicted as class \(i\). Then, the set of correctly predicted pixels is denoted by \(G_{i}\cap F_{i}\), and the number of elements in a set is extracted by the operator \(|\cdot|\). Thus, the evaluation metrics can be defined as follows:
* Pixel accuracy (PA) represents the percentage of pixels in the image correctly identified to the right class \[\mathrm{PA}=\frac{\sum_{i}|F_{i}\cap G_{i}|}{\sum_{i}|G_{i}|}\] (4) This metric could reflect the overall accuracy of the classification but may fail for the class with limited testing samples.
| Model | PA | MCA | FWIU | Upper N.S. | Middle N.S. | Lower N.S. | Rijnland/Chalk | Scruff | Zechstein |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Conv 3\(\times\)3 | 0.619 | 0.302 | 0.408 | 0.354 | 0.164 | 0.998 | 0.181 | 0.114 | 0.000 |
| Conv 11\(\times\)11 | 0.899 | 0.646 | 0.815 | 0.990 | 0.786 | 0.983 | 0.383 | 0.737 | 0.000 |
| Gabor 11\(\times\)11 | **0.954** | **0.886** | **0.916** | **0.997** | **0.947** | 0.969 | 0.820 | 0.674 | 0.911 |

Table 2: The accuracy of prediction using different kernels with speckle noise. The last six columns are the per-class accuracies (CA).
* Class accuracy (CA) represents the prediction accuracy for each class, and \(\mathrm{CA}_{i}\) represents the percentage of pixels correctly classified in a class \(i\) \[\mathrm{CA}_{i}=\frac{|F_{i}\cap G_{i}|}{|G_{i}|}\] (5) MCA is defined as the average of CA over all classes \[\mathrm{MCA}=\frac{1}{n_{c}}\sum_{i}\mathrm{CA}_{i}=\frac{1}{n_{c}}\sum_{i} \frac{|F_{i}\cap G_{i}|}{|G_{i}|},\] (6) where \(n_{c}\) is the number of classes. A high MCA only happens when the accuracy for each class reaches a high level.
* Intersection over union \((\mathrm{IU}_{i})\) is defined as the number of elements of the intersection of \(G_{i}\) and \(F_{i}\) over the number of elements of their union set \[\mathrm{IU}_{i}=\frac{|F_{i}\cap G_{i}|}{|F_{i}\cup G_{i}|}\] (7) This metric measures the overlap between the ground truth and the predictions, and higher scores are better. It equals one only when all pixels are correctly identified to the right class. Averaging IU over all classes gives the mean intersection over union (Mean IU) \[\mathrm{Mean\ IU}=\frac{1}{n_{c}}\sum_{i}\mathrm{IU}_{i}=\frac{1}{n_{c}}\sum_{ i}\frac{|F_{i}\cap G_{i}|}{|F_{i}\cup G_{i}|}.\] (8) To avoid the bias of this metric toward imbalanced classes and its sensitivity to classes with few samples, the frequency-weighted version of the metric results in FWIU: \[\mathrm{FWIU}=\frac{1}{\sum_{i}|G_{i}|}\sum_{i}|G_{i}|\cdot\frac{|F_{i}\cap G_{i}|}{|F_{i}\cup G_{i}|}.\] (9)
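The four metrics can be computed directly from predicted and ground-truth label maps. The sketch below follows the definitions above, with FWIU as the frequency-weighted average of the per-class IU; the tiny label maps at the end are purely illustrative:

```python
import numpy as np

def segmentation_metrics(pred, gt, n_classes):
    """Return PA, MCA, Mean IU, and FWIU for integer label maps `pred` and `gt`."""
    inter = np.array([np.sum((pred == i) & (gt == i)) for i in range(n_classes)], dtype=float)
    g = np.array([np.sum(gt == i) for i in range(n_classes)], dtype=float)    # |G_i|
    f = np.array([np.sum(pred == i) for i in range(n_classes)], dtype=float)  # |F_i|
    union = g + f - inter                                                     # |F_i ∪ G_i|
    pa = inter.sum() / g.sum()
    ca = inter / np.maximum(g, 1)       # per-class accuracy CA_i
    iu = inter / np.maximum(union, 1)   # per-class IU_i
    return pa, ca.mean(), iu.mean(), (g * iu).sum() / g.sum()

gt = np.array([[0, 0, 1], [1, 2, 2]])
pred = np.array([[0, 1, 1], [1, 2, 2]])
pa, mca, mean_iu, fwiu = segmentation_metrics(pred, gt, n_classes=3)
```

On this toy example five of six pixels are correct, so PA = 5/6, while the per-class IU values (1/2, 2/3, 1) drive Mean IU and FWIU.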
## Acknowledgment
The authors thank KAUST and the DeepWave Consortium sponsors for their support, and Xinquan Huang for useful discussion. We would also like to thank the SWAG group for the collaborative environment.
# Hybrid Spiking Neural Network Fine-tuning for Hippocampus Segmentation

arXiv:2302.07328 | 2023-02-14 | http://arxiv.org/abs/2302.07328v1

###### Abstract
Over the past decade, artificial neural networks (ANNs) have made tremendous advances, in part due to the increased availability of annotated data. However, ANNs typically require significant power and memory consumptions to reach their full potential. Spiking neural networks (SNNs) have recently emerged as a low-power alternative to ANNs due to their sparsity nature.
SNNs, however, are not as easy to train as ANNs. In this work, we propose a hybrid SNN training scheme and apply it to segment human hippocampi from magnetic resonance images. Our approach takes ANN-SNN conversion as an initialization step and relies on spike-based backpropagation to fine-tune the network. Compared with the conversion and direct training solutions, our method has advantages in both segmentation accuracy and training efficiency. Experiments demonstrate the effectiveness of our model in achieving the design goals.
Ye Yue\({}^{1}\), Marc Baltes\({}^{1}\), Nidal Abujahar\({}^{1}\), Tao Sun\({}^{1}\), Charles D. Smith\({}^{2}\), Trevor Bihl\({}^{3}\), Jundong Liu\({}^{1}\)

\({}^{1}\)School of Electrical Engineering and Computer Science, Ohio University
\({}^{2}\)Department of Neurology, University of Kentucky
\({}^{3}\)Department of Biomedical, Industrial & Human Factors Engineering, Wright State University

_Keywords:_ Spiking neural network, image segmentation, hippocampus, brain, U-Net, ANN-SNN conversion
## 1 Introduction
Artificial Neural Networks (ANNs) have revolutionized many AI-related areas, producing state-of-the-art results for a variety of tasks in computer vision and medical image analysis. The remarkable performance of ANNs, however, often comes with a huge computational burden, which limits their applications in power-constrained systems such as edge and portable devices. Bio-inspired spiking neural networks (SNNs), whose neurons imitate the temporal and sparse spiking nature of biological neurons [1, 2, 3, 4], have recently emerged as a low-power alternative for ANNs. SNN neurons process information with temporal binary spikes, leading to sparser activations and natural reductions in power consumption.
An SNN can be obtained by either converting from a fully trained ANN, or through a direct training (training from scratch) procedure, where a surrogate gradient is needed for the network to conduct backpropagation. Most ANN-SNN conversion solutions [5, 6, 7, 8] focus on setting proper firing thresholds after copying the weights from a trained ANN model. The converted SNNs commonly require a large number of time steps to achieve comparable accuracy, reducing the gains in power savings [9]. Direct training solutions, on the other hand, often suffer from expensive computation burdens on complex network architectures [10, 11, 9, 12]. For many pre-trained ANNs on large datasets, e.g., ImageNet or LibriSpeech, training equivalent SNNs from scratch would be very difficult.
Furthermore, most existing SNN works focus on recognition related tasks. Image segmentation, a very important task in medical image analysis, is rarely studied, with the exception of [13, 14]. In [13], Kim _et al._ take a direct training approach, which inevitably suffers from the common drawbacks of this category. Patel _et al._[14] use leaky _integrate-and-fire_ (LIF) neurons for both ANNs and SNNs. While convenient for conversion, the ANN networks are limited to a specific type of activation functions and must be trained from scratch.
In this paper, we propose a hybrid SNN training scheme and apply it to segment the human hippocampus from magnetic resonance (MR) images. We use an ANN-SNN conversion step to initialize the weights and layer thresholds in an SNN, and then apply a spike-based fine-tuning process to adjust the network weights in dealing with potentially sub-optimal thresholds set by the conversion. Compared with conversion-only and direct training methods, our approach can significantly improve segmentation accuracy, as well as decrease the training effort for convergence.
We choose the hippocampus as the target brain structure as accurate segmentation of the hippocampus provides a quantitative foundation for many other analyses [15, 16], and it has therefore long been an important task in neuroimage research. A modified U-Net [17] is used as the baseline ANN model in our work. To the best of our knowledge, this is the first hybrid SNN fine-tuning work proposed for the image segmentation task, as well as on U-shaped networks.
## 2 Background
### Hippocampus segmentation
Segmentation of brain structures from MR images is an important task in many neuroimage studies because it often influences the outcomes of other analysis steps. Among the anatomical structures to be delineated, the hippocampus is of particular interest, as it plays a crucial role in memory formation. It is also among the brain structures that suffer tissue damage in Alzheimer's disease. Traditional solutions for automatic hippocampal segmentation include atlas-based and patch-based methods [18, 19, 20], which commonly rely on identifying certain similarities between the target image and the anatomical atlas, to infer labels for individual voxels.
In recent years, deep learning models, especially U-net [17] and its variants, have become the dominant solutions for medical image segmentation. We have developed two network-based solutions for hippocampus segmentation [21, 22], producing state-of-the-art results. In [21], we proposed a multi-view ensemble convolutional neural network (CNN) framework in which multiple decision maps generated along different 2D views are integrated. In [22], an end-to-end deep learning architecture is developed that combines CNN and recurrent neural network (RNN) to better leverage the dimensional anisotropism in volumetric medical data.
### Spiking neural network optimization
Gradient descent-based backpropagation is by far the most widely used method to train ANN models. Unfortunately, the spiking neurons in SNNs are non-differentiable and carry a strong temporal component, which makes gradient descent not directly applicable to training SNNs.
The direct training approach works around this issue through surrogate gradients [11, 23], which are approximations of the step function that allow the backpropagation algorithm to be conducted to update the network weights. In terms of assigning spatial and temporal gradients along neurons, spike-timing-dependent plasticity (STDP) is a popular solution, which actively adjusts connection weights based on the firing timing of associated neurons [24].
Training an ANN first and converting it into an SNN can completely circumvent the non-differentiability problem. One major group of conversion solutions [5, 6, 7, 8] train ANNs with rectified linear unit (ReLU) neurons and then convert them to SNNs with IF neurons by setting appropriate firing thresholds. Hunsberger and Eliasmith [25, 26] use soft LIF neurons, which have smoothing operations applied around the firing threshold. As a result of the smoothing, gradient-based backpropagation can be carried out to train the network. This design makes the conversion from ANN to SNN rather straightforward.
## 3 Method
**ANN baseline model** We use a modified U-Net as the baseline ANN model in this work, which also follows an encoding and decoding architecture, as shown in Fig. 1. Taking 2D images as inputs, the encoding part repeats the conventional convolution + pooling layers to extract high-level latent features. The decoding part reconstructs the segmentation mask using transpose/deconvolution layers. To fully exploit local information, _skip connections_ are introduced to concatenate the corresponding feature maps between the encoding and decoding stacks.
Due to the constraints imposed by ANN-SNN conversion, a number of modifications have to be made to the original U-Net, as well as to our previous hippocampus segmentation networks [21, 22]. First, we replace max-pooling with average-pooling, as there is no effective implementation of max-pooling in SNNs. Second, the bias components of the neurons are removed, as they may interfere with the voltage thresholds in SNNs, complicating the training of the networks. Third, batch normalizations are removed, due to the absence of the bias terms. Last, _dropout layers_ are added to provide regularization for both ANN and SNN training.
### Our proposed hybrid SNN fine-tuning
Inspired by a previous work on image classification network [9], we develop a hybrid SNN fine-tuning scheme for our segmentation network. We first train an ANN to its full convergence and then convert it to a spiking network with reduced time steps. The converted SNN is then taken as an initial state for a fine-tuning procedure.
**The SNN neuron** model in our work is the _integrate-and-fire_ (IF) model, in which, unlike LIF neurons, the membrane potential does not decay while the neuron is not firing. The dynamics of our IF neurons can be described as:
\[u_{i}^{t}=u_{i}^{t-1}+\sum_{j}w_{ij}o_{j}-vo_{i}^{t-1} \tag{1}\]
\[o_{i}^{t-1}=\begin{cases}1&\text{if }u_{i}^{t-1}>v\\ 0&\text{otherwise}\end{cases} \tag{2}\]
Figure 1: Network architecture of our baseline ANN model.
where \(u\) is the membrane potential, \(t\) is the time step, subscripts \(i\) and \(j\) represent the post- and pre-neuron, respectively, \(w\) is the weight connecting the pre- and post-neuron, \(o\) is the binary output spike, and \(v\) is the firing threshold. Each neuron integrates the inputs from the previous layer into its membrane potential, and reduces the potential by the threshold voltage when a spike is fired.
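As a concrete illustration, the IF dynamics of Eqns. (1)-(2) can be simulated directly. The following is a minimal numpy sketch (function and variable names are our own, not taken from any released code):

```python
import numpy as np

def simulate_if_layer(weights, input_spikes, v_th=1.0):
    """Simulate one IF layer per Eqns. (1)-(2): integrate weighted input
    spikes into the membrane potential, emit a spike when it exceeds the
    threshold, then reduce the potential by the threshold voltage."""
    n_steps = input_spikes.shape[0]
    u = np.zeros(weights.shape[0])          # membrane potentials
    out = np.zeros((n_steps, weights.shape[0]))
    for t in range(n_steps):
        u = u + weights @ input_spikes[t]   # integrate pre-synaptic spikes
        fired = u > v_th
        out[t] = fired.astype(float)
        u = u - v_th * fired                # soft reset: subtract threshold
    return out
```

Note the soft reset: subtracting \(v\) instead of zeroing the potential preserves the residual charge, which matters for rate coding.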
Our SNN network has the same architecture as the baseline ANN, where the signals transmitted within the SNN are rate-coded spike trains generated through a Poisson generator. During the conversion process, we load the weights of the trained ANN into the SNN and initialize the thresholds of all layers to 1. Then a threshold balancing procedure [7] is carried out to update the threshold of each layer.
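One common way to realize threshold balancing is to set each layer's threshold to the maximum pre-activation observed on sample inputs, proceeding layer by layer. The sketch below is a simplified variant of the procedure in [7], assuming ReLU ANN activations and bias-free linear layers (as in our modified U-Net); it is illustrative, not the exact implementation:

```python
import numpy as np

def balance_thresholds(layer_weights, samples):
    """Set each layer's firing threshold to the maximum pre-activation
    seen over the sample inputs (simplified threshold balancing)."""
    thresholds = []
    acts = [np.asarray(x, dtype=float) for x in samples]
    for W in layer_weights:
        pre = [W @ a for a in acts]
        thresholds.append(max(float(p.max()) for p in pre))
        acts = [np.maximum(p, 0.0) for p in pre]  # ReLU output feeds the next layer
    return thresholds
```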
**Fine-tuning** of the converted SNN is conducted using spike-based backpropagation. It starts at the output layer, where the signals are continuous membrane potentials, generated through the summation:
\[u_{i}^{t}=u_{i}^{t-1}+\sum_{j}w_{ij}o_{j} \tag{3}\]
The number of neurons in the output layer is the same as the size of the input image. Compared with the hidden layer neurons in Eqn. 1, the output layer does not fire and therefore the voltage reduction term is removed. Each neuron in the output layer is connected to a Sigmoid activation function to produce the predictive probability of the corresponding pixel belonging to the target area (hippocampus).
Let \(L(\cdot)\) be the loss function defined based on the ground-truth mask and the predictions. In the output layer, neurons do not generate spikes and thus do not have the non-differentiable problem. The update of the hidden layer parameters \(W_{ij}\) is described by:
\[\Delta W_{ij}=\sum_{t}\frac{\partial L}{\partial W_{ij}^{t}}=\sum_{t}\frac{ \partial L}{\partial o_{i}^{t}}\frac{\partial o_{i}^{t}}{\partial u_{i}^{t} }\frac{\partial u_{i}^{t}}{\partial W_{ij}^{t}} \tag{4}\]
Due to the non-differentiability of spikes, a surrogate gradient-based method is used in backpropagation. In [9], the authors propose a surrogate gradient function \(\frac{\partial o^{t}}{\partial u^{t}}=\alpha e^{-\beta\Delta t}\). In this work, we choose a linear approximation proposed in [27], which is described as:
\[\frac{\partial o^{t}}{\partial u^{t}}=\alpha\max\{0,1-\left|u^{t}-V_{t}\right|\} \tag{5}\]
where \(V_{t}\) is the threshold potential at time \(t\), and \(\alpha\) is a constant.
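In autograd frameworks this is typically wrapped in a custom gradient function; the numpy sketch below just places the non-differentiable forward step and the surrogate backward step of Eqn. (5) side by side (names are ours):

```python
import numpy as np

def spike_forward(u, v_th):
    """Non-differentiable forward pass: spike iff the potential exceeds v_th."""
    return (u > v_th).astype(float)

def spike_surrogate_grad(grad_out, u, v_th, alpha=1.0):
    """Backward pass with the linear surrogate of Eqn. (5): a triangular
    window around the threshold replaces the derivative of the step."""
    return grad_out * alpha * np.maximum(0.0, 1.0 - np.abs(u - v_th))
```

The gradient is largest for potentials near the threshold and vanishes for potentials more than one unit away, so only neurons close to firing receive weight updates.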
Our _conversion + fine-tuning_ scheme follows the framework proposed in [9], which demonstrates that such a hybrid approach can achieve, with far fewer time steps, accuracy similar to purely converted SNNs, as well as faster convergence than direct training methods. It should be noted that our work is the first attempt to apply spike-based fine-tuning and threshold balancing to fully convolutional networks (FCNs), including the U-Net.
### Different losses
We explore different loss functions in this work: _binary cross entropy_ (BCE), _Dice loss_, and a combination of the two (BCE-Dice). BCE loss aggregates per-pixel losses and gives an unbiased measurement of pixel-wise similarity between the prediction and ground-truth:
\[L_{\text{BCE}}=\sum_{i=1}^{N}-[r_{i}\log s_{i}+(1-r_{i})\log(1-s_{i})] \tag{6}\]
where \(s_{i}\in[0,1]\) is the predicted value of a pixel and \(r_{i}\in\{0,1\}\) is the ground-truth label for the same pixel. _Dice loss_ focuses more on the extent to which the predicted and ground-truth masks overlap:

\[L_{\text{Dice}}=1-\frac{2\sum_{i}s_{i}r_{i}+\epsilon}{\sum_{i}s_{i}+\sum_{i}r_{i} +\epsilon} \tag{7}\]

where \(\epsilon\) is a small constant to smooth the loss, which is set to \(10^{-5}\) in our experiments.
We also explore the effects of a weighted combination of BCE and Dice: \(L_{\text{BCE,Dice}}=0.3\times L_{\text{BCE}}+0.7\times L_{\text{Dice}}\)
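A minimal numpy sketch of the combined loss (our own illustrative code): BCE is mean-reduced over pixels, the prediction is clipped to avoid \(\log(0)\), and the Dice term is taken as one minus the smoothed Dice coefficient:

```python
import numpy as np

def bce_dice_loss(s, r, w_bce=0.3, w_dice=0.7, eps=1e-5):
    """Weighted combination of mean BCE and Dice loss for per-pixel
    predictions s in [0, 1] and binary ground-truth labels r."""
    s = np.clip(s, eps, 1.0 - eps)                 # guard against log(0)
    bce = -np.mean(r * np.log(s) + (1.0 - r) * np.log(1.0 - s))
    dice_coeff = (2.0 * np.sum(s * r) + eps) / (np.sum(s) + np.sum(r) + eps)
    return w_bce * bce + w_dice * (1.0 - dice_coeff)  # Dice loss = 1 - coefficient
```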
## 4 Experiments and results
**The data** used in this work were obtained from the ADNI database ([https://adni.loni.usc.edu/](https://adni.loni.usc.edu/)). In total, 110 baseline T1-weighted whole brain MRI images from different subjects along with their hippocampus masks were downloaded. In our experiments we only included normal control subjects. Because the hippocampus occupies only a very small part of the whole brain and its position within the brain is relatively fixed, we roughly cropped the right hippocampus of each subject and used the cropped volumes as the input for segmentation. The size of the cropping box is \(24\times 56\times 48\).
### Training and testing
We train and evaluate our proposed model using a 5-fold cross-validation, with 22 and 88 subjects in the test and training sets, respectively, in each fold. The batch size in training and testing of ANN and SNN is set to 26. Training of both the ANN and the SNN networks uses the Adam optimizer with slightly different parameters. The learning rate for training both networks is initially set to 0.001 and is later adjusted by the ReduceLROnPlateau scheduler in PyTorch, which monitors the training loss and reduces the learning rate when the loss stops decreasing. Following a similar setup to [9], we use 200 time steps for the ANN-SNN conversion routine. The ANN models were trained for 100 epochs and the SNN models for 35 epochs. Also, we repeat the ANN \(\rightarrow\) Conversion \(\rightarrow\) Fine-tuning procedure with three different loss functions: BCE only, Dice only, and combined BCE-Dice.
### Results
In this section, we present and evaluate the experimental results for the proposed model. Two different performance metrics, 3D Dice ratio and 2D slice-wise Dice ratio, were used to measure the accuracy of the segmentation models. The 3D Dice ratio was calculated subject-wise for each 3D volume. Mean and standard deviation averaged from 5 folds are reported. The 2D slice based Dice ratio was calculated slice by slice, and the mean and standard deviation were averaged from all test subjects' slices.
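For reference, the two metrics can be computed as in the sketch below (our own code): the 3D ratio applies `dice_ratio` to a whole volume, while the slice-wise ratio averages per-slice scores:

```python
import numpy as np

def dice_ratio(pred, gt, eps=1e-5):
    """Dice overlap between two binary masks, in percent."""
    inter = np.sum(pred * gt)
    return 100.0 * (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def slice_wise_dice(pred_vol, gt_vol):
    """2D slice-wise Dice: score each slice separately, then average."""
    return float(np.mean([dice_ratio(p, g) for p, g in zip(pred_vol, gt_vol)]))
```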
Accuracies of the model on the test data are summarized in Table 1. The best performances for the ANN and the fine-tuned SNN are highlighted in bold font. It is evident that network accuracies drop significantly after conversion (middle column) and that our fine-tuning procedure brings the performance of the SNNs (right column) back close to the ANN level. The models built on the three loss functions have comparable performance as ANNs and fine-tuned SNNs, with Dice loss having a slight edge over BCE and the BCE-Dice combination.
Most fine-tuning procedures converge much faster than the maximal 35 epochs. On average, the three models in our experiments take 10.11 epochs to converge (with a standard deviation of 4.90). This achieves a great reduction in training complexity compared to the direct training models, which take roughly 60 epochs on average.
In order to find out how the fine-tuning procedure improves the segmentation accuracy, we examine both the outputs and the internal spiking patterns of the networks. Fig. 2 shows an example input slice, the ground-truth mask, and the corresponding outputs from the converted and fine-tuned SNNs. We can see that the output mask from the converted SNN (Fig. 2.(c)), while similar in shape, is much smaller than the ground-truth. We believe the reason is that many neurons are not sufficiently activated due to the suboptimal thresholds set by the conversion procedure. The proposed fine-tuning step, on the other hand, can update the network weights to compensate for such thresholds, bringing the neurons back to an active state and improving accuracy. To confirm this hypothesis, we record the firing frequency of each layer in the SNN models before and after fine-tuning and plot them in Fig. 3. It is evident that neurons become more active after fine-tuning, producing more accurate segmentation predictions.
## 5 Conclusions
In this work, we present a hybrid ANN-SNN fine-tuning scheme for the image segmentation problem. Our approach is rather general and can potentially be applied to many applications based on U-shaped networks. We take the human hippocampus as the target structure, and demonstrate the effectiveness of our approach through experiments on ADNI data. Exploring more applications is among our next steps.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Loss} & \multicolumn{2}{c|}{ANN Accuracy} & \multicolumn{2}{c|}{Converted-SNN Accuracy} & \multicolumn{2}{c|}{Fine-tuned SNN Accuracy} \\ \cline{2-7} & 2D & 3D & 2D & 3D & 2D & 3D \\ \hline \hline BCE & \(77.76\pm 2.40\) & \(83.17\pm 1.44\) & \(30.51\pm 9.93\) & \(21.60\pm 12.39\) & \(76.92\pm 3.77\) & \(81.83\pm 2.99\) \\ \hline Dice & \(\mathbf{78.86\pm 2.39}\) & \(\mathbf{84.21\pm 1.66}\) & \(61.78\pm 3.32\) & \(65.90\pm 6.94\) & \(\mathbf{78.09\pm 3.85}\) & \(81.86\pm 4.40\) \\ \hline BCE-Dice & \(78.58\pm 2.44\) & \(83.14\pm 1.59\) & \(52.03\pm 9.91\) & \(52.78\pm 14.73\) & \(77.72\pm 3.69\) & \(\mathbf{81.95\pm 3.24}\) \\ \hline \end{tabular}
\end{table}
Table 1: Average accuracies of ANNs, converted SNNs and fine-tuned SNNs built on three difference loss functions.
Figure 3: Firing frequencies of neurons in different layers. Blue bars show those for a converted-SNN and orange bars are for the fine-tuned SNN. \(v\) is the layer threshold.
Figure 2: An example slice of (a) input; (b) ground-truth mask; (c) segmentation result from the converted SNN; and (d) result from the fine-tuned SNN. |
2305.01932 | Fully Automatic Neural Network Reduction for Formal Verification | Formal verification of neural networks is essential before their deployment
in safety-critical applications. However, existing methods for formally
verifying neural networks are not yet scalable enough to handle practical
problems involving a large number of neurons. We address this challenge by
introducing a fully automatic and sound reduction of neural networks using
reachability analysis. The soundness ensures that the verification of the
reduced network entails the verification of the original network. To the best
of our knowledge, we present the first sound reduction approach that is
applicable to neural networks with any type of element-wise activation
function, such as ReLU, sigmoid, and tanh. The network reduction is computed on
the fly while simultaneously verifying the original network and its
specifications. All parameters are automatically tuned to minimize the network
size without compromising verifiability. We further show the applicability of
our approach to convolutional neural networks by explicitly exploiting similar
neighboring pixels. Our evaluation shows that our approach can reduce the
number of neurons to a fraction of the original number of neurons with minor
outer-approximation and thus reduce the verification time to a similar degree. | Tobias Ladner, Matthias Althoff | 2023-05-03T07:13:47Z | http://arxiv.org/abs/2305.01932v2 | # Specification-Driven Neural Network Reduction for Scalable Formal Verification
###### Abstract
Formal verification of neural networks is essential before their deployment in safety-critical settings. However, existing methods for formally verifying neural networks are not yet scalable enough to handle practical problems that involve a large number of neurons. In this work, we propose a novel approach to address this challenge: A conservative neural network reduction approach that ensures that the verification of the reduced network implies the verification of the original network. Our approach constructs the reduction on-the-fly, while simultaneously verifying the original network and its specifications. The reduction merges all neurons of a nonlinear layer with similar outputs and is applicable to neural networks with any type of activation function such as ReLU, sigmoid, and tanh. Our evaluation shows that our approach can reduce a network to less than \(5\%\) of the number of neurons and thus to a similar degree the verification time is reduced.
## 1 Introduction
Neural networks achieve impressive results in a variety of fields such as autonomous cars [1]. However, the applicability of neural networks in safety-critical environments is still limited as small perturbations of the input can lead to unexpected outputs of the neural network [2]. Thus, the formal verification of neural networks gained importance in recent years [3], where approaches rigorously prove that the output of neural networks meets given specifications. These approaches are often based on satisfiability modulo theory solvers [4; 5], symbolic interval propagation [6; 7], or deploy reachability analysis [8; 9; 10; 11]. However, scalability is still a major issue for all of them [3].
While several approaches [12; 13] exist which reduce the size of the network by approximating the original network, to the best of our knowledge, there exist only a few reduction approaches with formal error bounds. An early approach [14] splits the neurons based on analytic properties and merges similar neurons afterward. This work is extended using interval neural networks [15; 16] and residual reasoning [17]. More closely related to our work is the approach in [18], which approximates the neural network output by merging neurons using clustering algorithms on a given dataset; however, \(80-90\%\) of the neurons remain when formal error bounds are demanded. Most approaches only consider ReLU neurons; however, [15] also considers tanh neurons. A network reduction algorithm with formal error bounds for general neurons is still missing.
We propose a novel approach that reduces the neural network for given specifications. For example, consider an image as an input to a neural network. Neurons representing neighboring pixels often have similar values and thus can be merged during the verification process, which helps to reduce the high dimensionality of these neural networks. Such properties cannot be inferred when analyzing a neural network without considering a specific input. Our novel approach is orthogonal to many verification techniques, thus, many of them can be used as an underlying verification engine. We demonstrate our approach by deploying reachability analysis using zonotopes [9], the extension to
other set-based verification tools is straightforward including Taylor models [19; 10] and polynomial zonotopes [11].
To summarize our main contributions, we present a novel, formally sound approach to reduce large neural networks by merging similar neurons for given specifications. The reduced network is constructed on-the-fly and the verification of the reduced network entails the verification of the original network. Our approach works on a variety of common activation functions, including ReLU, sigmoid, and tanh. The evaluation of high-dimensional datasets and benchmarks shows that the networks can be reduced to less than \(5\%\) of the original number of neurons using our novel approach. The overhead of constructing the reduced network is computationally cheap and thus the overall time for verifying the original network primarily depends on the number of remaining neurons in the reduced network.
The rest of this work is structured as follows. Sec. 2 introduces the notation and background for this work, then Sec. 3 presents our novel neuron-merging approach. We first show how similar neurons can be merged with formal error bounds, followed by the algorithm to construct the reduced network on-the-fly while verifying the original network. Finally, we evaluate our approach in Sec. 4 and conclude this work in Sec. 5.
## 2 Preliminaries
### Notation
We denote vectors by lower-case letters, matrices by upper-case letters, and sets by calligraphic letters. The \(i\)-th element of a vector \(b\in\mathbb{R}^{n}\) is written as \(b_{(i)}\). Consequently, \(b_{2(i)}\) is the \(i\)-th element of a vector \(b_{2}\). The element in the \(i\)-th row and \(j\)-th column of a matrix \(A\in\mathbb{R}^{n\times m}\) is written as \(A_{(i,j)}\), the entire \(i\)-th row and \(j\)-th column are written as \(A_{(i,\cdot)}\) and \(A_{(\cdot,j)}\), respectively. The concatenation of two matrices \(A\) and \(B\) is denoted by \([A\ B]\). Given \(n\in\mathbb{N}\), then \([n]=\{1,\ldots,n\}\). Let \(\mathcal{C}\subseteq[n]\), then \(A_{(\mathcal{C},\cdot)}\) denotes all rows \(i\in\mathcal{C}\). The cardinality of a discrete set \(\mathcal{C}\) is denoted by \(|\mathcal{C}|\). Let \(\mathcal{S}\subset\mathbb{R}^{n}\) be a set of dimension \(n\), then \(\mathcal{S}_{(i)}\) is its projection on the \(i\)-th dimension. Given a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), then \(f(\mathcal{S})=\{f(x)\mid x\in\mathcal{S}\}\). An interval with bounds \(a,b\in\mathbb{R}^{n}\) is denoted by \([a,b]\).
### Neural Networks
In this work, we describe the reduction of feed-forward neural networks [20, Sec. 5.1]. We further note that our approach is also applicable to convolutional neural networks [20, Sec. 5.5.8], as convolutional layers and subsampling layers, e.g. average pooling layers, can be transformed into linear layers as defined subsequently.
**Definition 1**.: _(Layers of Neural Networks [20, Sec. 5.1]) Let \(v_{k}\) denote the number of neurons in a layer \(k\) and \(h_{k-1}\in\mathbb{R}^{v_{k-1}}\) the input. Further, let \(W\in\mathbb{R}^{v_{k}\times v_{k-1}},b\in\mathbb{R}^{v_{k}}\), and \(\sigma_{k}(\cdot)\) be the respective continuous activation function (e.g. sigmoid and ReLU), which is applied element-wise. Then, the operation \(L_{k}:\mathbb{R}^{v_{k-1}}\rightarrow\mathbb{R}^{v_{k}}\) on layer \(k\) is given by_
\[L_{k}(h_{k-1})=\left\{\begin{array}{ll}W_{k}h_{k-1}+b_{k},&\text{if layer $k$ is linear},\\ \sigma_{k}(h_{k-1}),&\text{otherwise.}\end{array}\right. \tag{1}\]
**Definition 2**.: _(Neural Networks [20, Sec. 5.1]) Given \(K\) alternating linear and nonlinear layers, \(v_{0}\) input and \(v_{K}\) output neurons, and let \(x\in\mathbb{R}^{v_{0}}\) be the input and \(y\in\mathbb{R}^{v_{K}}\) be the output of a neural network, then, a neural network \(\Phi\) with \(y=\Phi(x)\) can be formulated as_
\[h_{0}=x,\quad h_{k}=L_{k}(h_{k-1}),\quad y=h_{K},\qquad k=1\ldots K. \tag{2}\]
We call the last linear and the last nonlinear layer output layers, all other layers are hidden layers. If all hidden layers output the same number of neurons, we write \(6\times 200\) to refer to a network with 6 linear and \(6\) nonlinear hidden layers with \(200\) neurons each.
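The alternating structure of Defs. 1-2 can be evaluated with a simple forward pass. The sketch below (our own illustrative code) represents linear layers as \((W, b)\) pairs and nonlinear layers by their activation function:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, layers):
    """Def. 2: alternate linear layers (W, b) and element-wise activations.
    `layers` is a list of ('linear', W, b) or ('nonlinear', sigma) tuples."""
    h = x
    for layer in layers:
        if layer[0] == 'linear':
            _, W, b = layer
            h = W @ h + b        # Eqn. (1), linear case
        else:
            _, sigma = layer
            h = sigma(h)         # Eqn. (1), nonlinear case
    return h
```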
### Set-based Computation
We use sets for the formal verification of neural networks. Let \(\mathcal{X}\) be the input set of the neural network. Then, the exact output sets of each layer are denoted by
\[\mathcal{H}_{0}^{*}=\mathcal{X},\quad\mathcal{H}_{k}^{*}=L_{k}(\mathcal{H}_{k-1}^{* }),\quad\mathcal{Y}^{*}=\mathcal{H}_{K}^{*},\qquad k=1\ldots K. \tag{3}\]
We use zonotopes as set representation to demonstrate our novel network reduction approach.
**Definition 3**.: _(Zonotope [21, Def. 1]) Given a center vector \(c\in\mathbb{R}^{n}\) and a generator matrix \(G\in\mathbb{R}^{n\times q}\), a zonotope is defined as_
\[\mathcal{Z}=\left\langle c,G\right\rangle_{Z}=\left\{c+\sum_{j=1}^{q}\beta_{j} G_{(\cdot,j)}\ \Bigg{|}\ \beta_{j}\in[-1,1]\right\}. \tag{4}\]
Further, we define two basic operations on zonotopes, which are essential for our approach. These operations can also be applied to many other set representations such as Taylor models and polynomial zonotopes.
**Proposition 1**.: _(Interval Enclosure [22, Prop. 2.2]) Given a zonotope \(\mathcal{Z}=\left\langle c,G\right\rangle_{Z}\), then the interval \([l,u]=\texttt{interval}(\mathcal{Z})\supseteq\mathcal{Z}\) is given by_
\[\begin{array}{l}l=c-\Delta g\\ u=c+\Delta g\end{array},\quad\text{with}\ \Delta g=\sum_{j=1}^{q}|G_{(\cdot,j)}|. \tag{5}\]
**Proposition 2**.: _(Interval Addition [22, (2.1)]) Given a zonotope \(\mathcal{Z}=\left\langle c,G\right\rangle_{Z}\subset\mathbb{R}^{n}\) and an interval \(\mathcal{I}=[l,u]\subset\mathbb{R}^{n}\), then_
\[\mathcal{Z}\oplus\mathcal{I}=\left\langle c+c_{\mathcal{I}},[G\texttt{ diag}(u-c_{\mathcal{I}})]\right\rangle_{Z}, \tag{6}\]
_where \(c_{\mathcal{I}}=\frac{l+u}{2}\) and diag creates a diagonal matrix._
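Both operations are cheap to implement; a numpy sketch for zonotopes \(\langle c,G\rangle_{Z}\) following Props. 1 and 2 (our own code):

```python
import numpy as np

def zonotope_interval(c, G):
    """Prop. 1: tight axis-aligned interval enclosure [c - dg, c + dg]."""
    dg = np.abs(G).sum(axis=1)
    return c - dg, c + dg

def zonotope_add_interval(c, G, l, u):
    """Prop. 2: Minkowski sum of a zonotope and an interval [l, u]:
    shift the center by the interval midpoint and append one axis-aligned
    generator per dimension for the interval radius."""
    c_i = (l + u) / 2.0
    return c + c_i, np.hstack([G, np.diag(u - c_i)])
```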
### Neural Network Verification
Finally, we briefly introduce the main steps to propagate a zonotope through a neural network. The propagation cannot be computed exactly in general, thus, the exact output of each layer needs to be enclosed.
**Proposition 3**.: _(Image Enclosure [9, Sec. 3]) Let \(\mathcal{H}_{k-1}\supseteq\mathcal{H}_{k-1}^{*}\) be an input set to layer \(k\), then_
\[\mathcal{H}_{k}=\texttt{enclose}(L_{k},\mathcal{H}_{k-1})\supseteq\mathcal{H }_{k}^{*} \tag{7}\]
_computes an over-approximative output set._
Linear layers can be computed exactly using zonotopes [21], however, nonlinear layers introduce over-approximations. We refer to [9, Sec. 3] for a detailed explanation, the main steps are as follows and are visualized in Fig. 1: For each nonlinear layer, we iterate over all neurons \(i\) in the current layer by projecting the input set \(\mathcal{H}_{k-1}\) onto its \(i\)-th dimension (step 1) and determine the input bounds using Prop. 1 (step 2). We then find a linear approximation polynomial within the input bounds via regression [20, Sec. 3] (step 3). A key challenge is bounding the approximation error (step 4): For piece-wise linear activation functions, e.g. ReLU, we can compute the approximation error exactly using the extreme points of the difference between the approximation polynomial and each linear segment. For other activation functions, e.g. sigmoid, the approximation error can be determined by sampling evenly within the input bounds and bounding the approximation error between two points via global bounds of the derivative. Finally, we evaluate \(\mathcal{H}_{k-1(i)}\) on the polynomial (step 5) and add the approximation error as an additional generator (step 6, Prop. 2).
Figure 1: Main steps of enclosing a nonlinear layer. Step 1: Neuron-wise Sigmoid function. Step 2: Input bounds. Step 3: Approximation polynomial. Step 4: Approximation error. Step 5: Evaluate input on polynomial. Step 6: Add approximation error.
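For a single sigmoid neuron, steps 2-5 can be sketched as follows: fit a secant-line approximation on the input bounds and bound the approximation error by dense sampling. This is a simplified illustration (a sound implementation additionally accounts for the derivative bound between sample points, as described above); names are ours:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def enclose_sigmoid_neuron(l, u, n_samples=1000):
    """Linear enclosure sigma(x) in [a*x + b + err_lo, a*x + b + err_hi]
    on [l, u]; the error is estimated by sampling."""
    a = (sigmoid(u) - sigmoid(l)) / (u - l)   # secant slope as approximation
    b = sigmoid(l) - a * l
    xs = np.linspace(l, u, n_samples)
    err = sigmoid(xs) - (a * xs + b)
    return a, b, float(err.min()), float(err.max())
```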
### Problem Statement
Given an input set \(\mathcal{X}\subset\mathbb{R}^{v_{0}}\), a neural network \(\Phi\) as specified in Def. 2, and an unsafe set \(\mathcal{S}\subset\mathbb{R}^{v_{K}}\) in the output space of \(\Phi\), we want to find a reduced network \(\widehat{\Phi}\) for which the verification entails the safety of the original network for \(\mathcal{X},\mathcal{S}\). Thus, it must hold
\[\widehat{\Phi}(\mathcal{X})\cap\mathcal{S}=\emptyset\ \implies\ \Phi(\mathcal{X})\cap\mathcal{S}=\emptyset. \tag{8}\]
## 3 Specification-Driven Neural Network Reduction
Our novel approach is based on the observation that many neurons in a layer \(k\) behave similarly for a specific input \(x\), e.g. many sigmoid neurons are fully saturated and thus output a value near 1 as shown in Fig. 2. Neuron saturation [23], neural activation patterns [24], and over-parametrization [25] have been observed in the literature; however, to the best of our knowledge, this redundancy has not been exploited during the verification of neural networks. Our main idea is to merge similar neurons, such as these saturated neurons, and provide the corresponding error bounds. Our novel approach is not restricted to the saturation values of an activation function but can merge neurons with any activation.
### Neuron Merging
Subsequently, we explain how this observation can help to construct a much smaller network \(\widehat{\Phi}\), where the verification of \(\widehat{\Phi}\) entails the verification of the original network \(\Phi\). We denote the neurons which are merged using merge buckets:
**Definition 4**.: _(Merge Buckets) Given output bounds \(\mathcal{I}_{k}\supseteq\mathcal{H}_{k}^{*}\) of a nonlinear layer \(k\) with \(v_{k}\) neurons, an output \(y\in\mathbb{R}\), and a tolerance \(\delta\in\mathbb{R}\), then a merge bucket is defined as_
\[\mathcal{B}_{k,y,\delta}=\left\{w\in[v_{k}]\ \big{|}\ \mathcal{I}_{k(w)} \subseteq[y-\delta,y+\delta]\right\}. \tag{9}\]
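Given elementwise output bounds of layer \(k\), the bucket of Def. 4 is a simple containment check; a numpy sketch (our names):

```python
import numpy as np

def merge_bucket(lower, upper, y, delta):
    """Def. 4: indices of neurons whose output interval [lower_i, upper_i]
    is contained in [y - delta, y + delta]."""
    return np.flatnonzero((lower >= y - delta) & (upper <= y + delta))
```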
Conceptually, we replace all neurons in \(\mathcal{B}_{k,y,\delta}\) by a single neuron with constant output \(y\) and adjust the weight matrices of the linear layers \(k-1,k+1\) such that the reduced network \(\widehat{\Phi}\) approximates the behavior of the original network \(\Phi\). Finally, we add an approximation error to the output to obtain a sound over-approximation (Fig. 3). As the new neuron is constant, we can propagate it forward to the bias of the layer \(k+1\) without inducing an over-approximation.
**Proposition 4**.: _(Neuron Merging) Given a nonlinear hidden layer \(k\) of a network \(\Phi\), output bounds \(\mathcal{I}_{k}\supseteq\mathcal{H}_{k}^{*}\), and a merge bucket \(\mathcal{B}_{k,y,\delta}\), then we can construct a reduced network \(\widehat{\Phi}\), where we remove
Figure 2: Sigmoid activations of a 6x200 neural network with an image input from the MNIST digit dataset. In this neural network, many neurons output values close to the saturation values 0 and 1.
the merged neurons by adjusting the linear layers \(k-1\), \(k+1\) such that_
\[\widehat{W}_{k-1} =W_{k-1(\overline{\mathcal{B}}_{k,y,\delta},\cdot)}, \quad\widehat{b}_{k-1} =b_{k-1(\overline{\mathcal{B}}_{k,y,\delta})}, \tag{10}\] \[\widehat{W}_{k+1} =W_{k+1(\cdot,\overline{\mathcal{B}}_{k,y,\delta})}, \quad\widehat{b}_{k+1} =b_{k+1}+\underbrace{W_{k+1(\cdot,\mathcal{B}_{k,y,\delta})} \mathcal{I}_{k(\mathcal{B}_{k,y,\delta})}}_{\text{approximation error}},\]
_where \(\overline{\mathcal{B}}_{k,y,\delta}=[v_{k}]\backslash\mathcal{B}_{k,y,\delta}\) and \(\widehat{b}_{k+1}\) includes the approximation error. We denote the layer operations of the reduced network \(\widehat{\Phi}\) with \(\widehat{L}_{k}\). The construction is sound._
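The weight adjustment of Prop. 4 amounts to index slicing plus an interval-valued bias update; a numpy sketch (our own code), where the interval matrix-vector product splits the weights into their positive and negative parts:

```python
import numpy as np

def merge_neurons(W_prev, b_prev, W_next, b_next, bucket, l_b, u_b):
    """Prop. 4: remove merged rows from layer k-1, remove merged columns
    from layer k+1, and absorb the merged neurons' output interval
    [l_b, u_b] into the bias of layer k+1 (which becomes interval-valued)."""
    keep = np.setdiff1d(np.arange(W_prev.shape[0]), bucket)
    Wb = W_next[:, bucket]
    Wp, Wm = np.maximum(Wb, 0.0), np.minimum(Wb, 0.0)  # split by sign
    b_lo = b_next + Wp @ l_b + Wm @ u_b                # interval product
    b_hi = b_next + Wp @ u_b + Wm @ l_b
    return W_prev[keep, :], b_prev[keep], W_next[:, keep], (b_lo, b_hi)
```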
Proof.: _Soundness._ We show that the output \(\widehat{\mathcal{H}}_{k+1}\) of layer \(k+1\) of the reduced network \(\widehat{\Phi}\) is an over-approximation of the exact set \(\mathcal{H}_{k+1}^{*}\) as defined in (3). We drop the indices of \(\mathcal{B}_{k,y,\delta},\overline{\mathcal{B}}_{k,y,\delta}\) for conciseness:
\[\mathcal{H}_{k+1}^{*}\overset{(3)}{=}W_{k+1}\mathcal{H}_{k}^{*}\oplus b_{k+1}\subseteq W_{k+1(\cdot,\overline{\mathcal{B}})}\mathcal{H}_{k(\overline{\mathcal{B}})}^{*}\oplus W_{k+1(\cdot,\mathcal{B})}\mathcal{I}_{k(\mathcal{B})}\oplus b_{k+1}\overset{(10)}{=}\widehat{W}_{k+1}\mathcal{H}_{k(\overline{\mathcal{B}})}^{*}\oplus\widehat{b}_{k+1}=\widehat{L}_{k+1}\big{(}\mathcal{H}_{k(\overline{\mathcal{B}})}^{*}\big{)},\]

where the inclusion holds because the linear map distributes over the kept and merged neurons, \(\mathcal{H}_{k(\mathcal{B})}^{*}\subseteq\mathcal{I}_{k(\mathcal{B})}\) by assumption, and treating the kept and merged projections independently only enlarges the set. Thus, the output of layer \(k+1\) in the reduced network over-approximates \(\mathcal{H}_{k+1}^{*}\), and this over-approximation propagates through the remaining layers, which establishes (8).
qeq:eq:eqeq:eq:eq:eqeq:eqeq:eqeq:eq:eq:eq:eqeq:eq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eqeq:eq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eq:eqeq:eq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eq:eqeq:eqeq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eqeq:eqeq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eqeqeq:eq:eqeq:eqeq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eqeq:eq:eq:eqeq:eq
Static buckets. The merge buckets are determined by the asymptotic values of the respective activation function:
\[\mathbf{\mathcal{B}}_{\text{Sigmoid}}=\left\{\mathcal{B}_{\cdot,0,\delta},\;\mathcal{ B}_{\cdot,1,\delta}\right\},\quad\mathbf{\mathcal{B}}_{\text{Tanh}}=\left\{ \mathcal{B}_{\cdot,-1,\delta},\;\mathcal{B}_{\cdot,1,\delta}\right\},\quad\mathbf{ \mathcal{B}}_{\text{ReLU}}=\left\{\mathcal{B}_{\cdot,0,\delta}\right\}. \tag{13}\]
Dynamic buckets. The merge buckets are dynamically created based on the output bounds \(\mathcal{I}_{k}=[l_{k},u_{k}]\subset\mathbb{R}^{v_{k}}\):
\[\mathbf{\mathcal{B}}_{k,\delta}=\left\{\mathcal{B}_{k,c_{k(w)},\delta}\;\middle|\; c_{k}=(l_{k}+u_{k})/2,\;w\in[v_{k}]\right\}. \tag{14}\]
Bucket tolerance. The bucket tolerance \(\delta\) is obtained by initially setting a large \(\delta\) that allows for aggressive neuron merging and then decreasing \(\delta\) adaptively if the specifications \(\mathcal{S}\) are violated.
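The dynamic bucket creation above can be sketched as follows. This is an illustrative, simplified sketch (not the paper's CORA implementation, and the function name is ours): each neuron's output interval \([l,u]\) is assigned to an existing bucket only if the whole interval stays within the tolerance \(\delta\) of that bucket's center; otherwise the interval's midpoint opens a new bucket.

```python
# Sketch: assign neurons to dynamic merge buckets by output bounds.
def make_dynamic_buckets(bounds, delta):
    """bounds: list of (l, u) per neuron; returns {center: [neuron indices]}."""
    buckets = {}
    for w, (l, u) in enumerate(bounds):
        c = (l + u) / 2.0
        placed = False
        for center in buckets:
            # merge only if the whole interval lies within delta of the center
            if abs(l - center) <= delta and abs(u - center) <= delta:
                buckets[center].append(w)
                placed = True
                break
        if not placed:
            buckets[c] = [w]  # midpoint becomes a new bucket center
    return buckets
```

With a small \(\delta\), neurons whose outputs are nearly constant (e.g., saturated sigmoid neurons) collapse into few buckets, while well-separated neurons remain unmerged.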
### On-the-fly Neural Network Reduction
Note that we require the output bounds \(\mathcal{I}_{k}\) of the next nonlinear layer \(k\) to determine which neurons can be merged (Prop. 4). However, computing them directly requires constructing high-dimensional zonotopes via the next linear layer \(k-1\) and propagating them through the nonlinear layer \(k\), where all neurons would have to be evaluated, which is precisely what should be avoided. Thus, we deploy a look-ahead algorithm using interval arithmetic [26] to sidestep these expensive computations and reduce the network on-the-fly. The algorithm is summarized in Alg. 1.
```
1:Input \(\mathcal{X}\), neural network layers \(k\in[K]\), bucket tolerance \(\delta\)
2:\(\mathcal{H}_{0}\leftarrow\mathcal{X}\)
3:\(\widehat{L}_{1}\gets L_{1}\)
4:for\(k=2,4,\dots,K\)do\(\triangleright\) Look ahead
5:if\(k<K\)then\(\mathcal{I}_{k-2}\leftarrow\texttt{interval}(\mathcal{H}_{k-2})\)\(\triangleright\) Prop. 1
6:if\(k>2\)then\(\mathcal{I}_{k-2}\leftarrow\mathcal{I}_{k-2}\cap\widehat{L}_{k-2}(\mathcal{I}_{k-3})\)endif\(\triangleright\) Tighten bounds
7:\(\mathcal{I}_{k}\gets L_{k}(\widehat{L}_{k-1}(\mathcal{I}_{k-2}))\)
8: Create merge buckets \(\mathbf{\mathcal{B}}_{k,\delta}\)\(\triangleright\) Sec. 3.2
9:\(\widehat{L}_{k-1},\widehat{L}_{k},\widehat{L}_{k+1}\leftarrow\text{Reduce network}\)\(\triangleright\) Prop. 4
10:endif\(\triangleright\) Verify
11:\(\mathcal{H}_{k-1}\leftarrow\texttt{enclose}(\widehat{L}_{k-1},\mathcal{H}_{k-2})\)\(\triangleright\) Prop. 3
12:\(\mathcal{H}_{k}\leftarrow\texttt{enclose}(\widehat{L}_{k},\mathcal{H}_{k-1})\)
13:endfor
14:\(\mathcal{Y}\leftarrow\mathcal{H}_{K}\)
15:return\(\mathcal{Y}\)
```
**Algorithm 1** On-the-fly Neural Network Reduction
Instead of propagating the zonotope itself forward, we just propagate the interval bounds to the next nonlinear layer \(k\) (lines 5-7). The overhead is computationally cheap, and the bound computation is over-approximative. The bounds \(\mathcal{I}_{k-2}\) can be intersected with \(L_{k-2}(\mathcal{I}_{k-3})\) if \(k>2\) to get a tighter estimate (line 6), as \(\mathcal{I}_{k-3}\) is computed anyway during the enclosure of the nonlinear layer \(k-2\) (line 13; Fig. 1). After \(\mathcal{I}_{k}\) is obtained, the merge buckets are determined (line 8) and the network is reduced by merging the respective neurons (line 9). Finally, we propagate the zonotope \(\mathcal{H}_{k-2}\) through the reduced network. The addition of the interval bias of layer \(k+1\) in (10) is computed using Prop. 2. Thus, we never construct a high-dimensional zonotope during the verification. Note that the number of input and output neurons remains unchanged.
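The cheap look-ahead step boils down to standard interval arithmetic through a linear layer: for \(y=Wx+b\) with elementwise input bounds \([l,u]\), each output bound picks the interval endpoint according to the sign of the weight. A minimal sketch (pure Python, names ours, not the paper's code):

```python
# Sketch: propagate interval bounds [l, u] through a linear layer y = Wx + b.
def interval_linear(W, b, l, u):
    """W: list of weight rows, b: bias list; l, u: elementwise input bounds."""
    lo, hi = [], []
    for row, bias in zip(W, b):
        # negative weights swap which endpoint minimizes/maximizes the sum
        s_lo = bias + sum(w * (l[j] if w >= 0 else u[j]) for j, w in enumerate(row))
        s_hi = bias + sum(w * (u[j] if w >= 0 else l[j]) for j, w in enumerate(row))
        lo.append(s_lo)
        hi.append(s_hi)
    return lo, hi
```

These bounds are over-approximative but cost only one pass over the weight matrix, which is why the look-ahead never needs a high-dimensional zonotope.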
**Proposition 5**.: _(Reduced Network) Given a neural network \(\Phi\) and an input set \(\mathcal{X}\), then Alg. 1 constructs a reduced network \(\widehat{\Phi}\), which satisfies the problem statement in Sec. 2.5._
Proof.: The algorithm is sound as each step is over-approximative.
## 4 Evaluation
We evaluate our novel network reduction approach using several benchmarks and neural network variants from the VNN Competition [3]. For all datasets and networks, we sample \(100\) correctly classified images from the test set and average the results. All following figures show the mean remaining input neurons per layer \(k\in[K]\) as well as the number of output neurons of the network at \(K+1\). The number of neurons of the original network is shown in the same color with reduced opacity and dynamic merge bucket creation (14) is used in all figures if not otherwise stated. We implement our approach in MATLAB and use CORA [27; 11] to verify the neural networks. All computations were performed on an Intel(r) Core(tm) Gen. 11 i7-11800H CPU @2.30GHz with 64GB memory.
ERAN benchmark. We report the verification results for the ERAN benchmark in Tab. 1. While it is expected that more aggressive neuron merging (larger \(\delta\)) results in fewer verified instances, we want to stress that even a very small \(\delta\) can already result in many merged neurons. We observe similar results for the other benchmarks as well. All following figures use a perturbation radius \(r=0.001\) and a bucket tolerance \(\delta=0.005\). Fig. 4 shows the neuron reduction per layer for this combination. The number of remaining neurons in the reduced networks is very similar for both activation functions. The overhead of constructing the reduced network is computationally cheap, as shown in Fig. 5: we normalize the time to construct and verify the reduced network per image, where the verification of the original network takes \(\sim 0.67s\). The verification time primarily depends on the remaining number of neurons. Our evaluation shows that the reduction on the ReLU network uses up to 5 dynamic merge buckets per layer and on the sigmoid network around 10 dynamic merge buckets per layer. Finally, Fig. 6 shows that the benefit of our novel reduction approach increases with the size of the network, as fewer neurons remain in the reduced network relative to the size of the original network.
Many activation neurons produce such extreme outputs, as shown in Fig. 2. Fig. 10 shows another extreme case, where many ReLU neurons output zeros and can thus be removed. For this benchmark, we use the static merge buckets defined in (13) to remove only ReLU neurons with negative inputs. In general, however, it is advisable to use dynamic merge buckets, as previously shown in Fig. 10.
### CIFAR-10 Colored Image Dataset
Finally, we show that our approach is also applicable to the CIFAR-10 colored image dataset. The size of networks trained on CIFAR-10 is typically much larger due to the complexity of the dataset. We note that the CIFAR-10 images are not normalized to \([0,1]\).
Marabou benchmark. We show the network reduction on the Marabou benchmark in Fig. 10. The networks consist of two convolutional layers followed by three linear layers with ReLU activation. Our novel approach reduces these networks significantly, where more neurons are merged in larger networks.
Cifar2020 benchmark. The network consists of four convolutional layers with up to \(32,768\) neurons per layer, followed by three linear layers and ReLU activation. Dynamic merge bucket creation reduces the number of neurons to \(25\%\) while still verifying all images.
### Discussion
We want to stress that our novel reduction method barely increases the over-approximation of the output \(\mathcal{Y}\) by construction, as the over-approximation induced by the neuron merging is determined by the bucket tolerance \(\delta\). It is advisable to start with a large \(\delta\) and automatically decrease \(\delta\) until an image is verified. For example, consider the sigmoid network in Tab. 1: about \(85\%\) of the images can be verified with \(\delta\geq 0.01\) in less than a third of the original verification time. While in some networks many activation neurons output values near their asymptotic limits, this is not the case for all networks. Thus, a dynamic creation of the merge buckets is important. Our approach is also applicable to convolutional neural networks, as shown in the evaluation, which are usually especially high-dimensional. As neighboring pixels are often similar and remain similar through convolutional layers, our approach can reduce the dimensionality drastically for these networks.
## 5 Conclusion
We present a novel, conservative neural network reduction approach, where the verification of the reduced network entails the verification of the original network. To our best knowledge, our approach is the first that works on a variety of activation functions and considers the specifications. The neural network reduction is computed on-the-fly while verifying the original network. Our approach merges neurons of nonlinear layers based on the output bounds of these neurons. These output bounds are computed by looking ahead to the next nonlinear layer using interval arithmetic,
such that the more expensive verification algorithm only needs to be executed on the reduced network. Our novel approach is orthogonal to many verification tools and thus can be used in combination with them. We show the applicability of our approach on various benchmarks and network architectures. The overhead of computing the reduced network is not computationally expensive and the over-approximation of the output set barely increases. We believe that our work is a significant step toward more scalable neural network verification.
## Acknowledgments and Disclosure of Funding
The authors gratefully acknowledge financial support from the project FAI funded by the German Research Foundation (DFG) under project number 286525601.
arXiv: 2307.03784 (v2: http://arxiv.org/abs/2307.03784v2), published 2023-07-07T18:10:41Z
Title: NeuroBlend: Towards Low-Power yet Accurate Neural Network-Based Inference Engine Blending Binary and Fixed-Point Convolutions
Authors: Arash Fayyazi, Mahdi Nazemi, Arya Fayyazi, Massoud Pedram

Abstract: This paper introduces NeuroBlend, a novel neural network architecture featuring a unique building block known as the Blend module. This module incorporates binary and fixed-point convolutions in its main and skip paths, respectively. There is a judicious deployment of batch normalizations on both main and skip paths inside the Blend module and in between consecutive Blend modules. Additionally, we present a compiler and hardware architecture designed to map NeuroBlend models onto FPGA devices, aiming to minimize inference latency while maintaining high accuracy. Our NeuroBlend-20 (NeuroBlend-18) model, derived from ResNet-20 (ResNet-18) trained on CIFAR-10 (CIFAR-100), achieves 88.0% (73.73%) classification accuracy, outperforming state-of-the-art binary neural networks by 0.8% (1.33%), with an inference time of 0.38ms per image, 1.4x faster than previous FPGA implementations for BNNs. Similarly, our BlendMixer model for CIFAR-10 attains 90.6% accuracy (1.59% less than full-precision MLPMixer), with a 3.5x reduction in model size compared to full-precision MLPMixer. Furthermore, leveraging DSP blocks for 48-bit bitwise logic operations enables low-power FPGA implementation, yielding a 2.5x reduction in power consumption.

# BlendNet: Design and Optimization of a Neural Network-Based Inference Engine Blending Binary and Fixed-Point Convolutions
###### Abstract
This paper presents BlendNet, a neural network architecture employing a novel building block called the Blend module, which relies on performing binary and fixed-point convolutions in its main and skip paths, respectively. There is a judicious deployment of batch normalizations on both main and skip paths inside the Blend module and in between consecutive Blend modules. This paper also presents a compiler for mapping various BlendNet models, obtained by replacing some blocks/modules in various vision neural network models with BlendNet modules, to FPGA devices with the goal of minimizing the end-to-end inference latency while achieving high output accuracy. BlendNet-20, derived from ResNet-20 trained on the CIFAR-10 dataset, achieves 88.0% classification accuracy (0.8% higher than the state-of-the-art binary neural network) while taking only 0.38ms to process each image (1.4x faster than the state-of-the-art). Similarly, our BlendMixer model trained on the CIFAR-10 dataset achieves 90.6% accuracy (1.59% less than the full-precision MLPMixer) while achieving a 3.5x reduction in model size. Moreover, the reconfigurability of DSP blocks for performing 48-bit bitwise logic operations is utilized to achieve a low-power FPGA implementation. Our measurements show that the proposed implementation yields 2.5x lower power consumption.
## I Introduction
Deep neural networks (DNNs) have surpassed the accuracy of conventional machine learning models in many challenging domains, including computer vision [1] and natural language processing (NLP) [2]. Recently, inspired by the successes in NLP, _transformers_[3] are adopted by the computer vision community. Built with self-attention layers, multi-layer perceptrons (MLPs), and skip connections, transformers make numerous breakthroughs on visual tasks [4]. To reduce the transformer model complexity, MLPMixers [5] which replace the multi-head self-attention module in transformers with a two-layer spatial MLP, are introduced.
Advances in building both general-purpose and custom hardware have been among the key enablers for shifting DNNs and MLPMixers from rather theoretical concepts to practical solutions for a wide range of problems [6]. Unfortunately, many DNN-based inference engines have a high latency cost and use enormous hardware resources, which, in turn, prevents their deployment in latency-critical applications, especially on resource-constrained platforms. The high latency and large hardware cost are due to the fact that practical, high-quality deep learning models entail billions of arithmetic operations and millions of parameters, which exert considerable pressure on both the processing and memory subsystems.
Quantization has emerged as a promising model compression method where parameters and/or activations are replaced with low-precision, quantized, fixed-point values. Despite such a transformation, quantized models can match the accuracy of full-precision models utilizing 32-bit floating-point (FP-32) while requiring fewer data transfers and storage space. Early works on binary neural networks (BNNs) [7] and XNOR-net [8] demonstrated the potential advantages of extreme quantization, i.e., binarization. BNNs are 1-bit quantized models where all weights and activations are represented by two values, -1/0 and +1, significantly decreasing the memory footprint. Additionally, to speed up the inference, the multiplication/addition operations are switched out for less complex operations like the XNOR logical operation and bit count [8]. However, this superior performance is achieved at the cost of a significant accuracy drop in deep neural networks.
To address the issue of significant accuracy loss, some prior works [9, 10, 11, 12] propose modifying well-known architectures (e.g., ResNet), as they show that the network architecture can affect BNN performance. Although some of these works achieve improved accuracy, their proposed models cannot be efficiently deployed on hardware platforms. For instance, ReActNet [11] significantly improves the accuracy by activation shifting and by reshaping the MobileNet V1 [13] architecture, at the cost of increasing the parameter count such that the total number of parameters is about 30 million more than that of MobileNet V2 [14].
This paper presents a novel building block called the Blend module, which utilizes binary convolution on its main path and fixed-point convolution on its skip path. We show how the Blend module is used to produce BlendNet versions of some well-known vision models, such as ResNet and MLPMixer. The key contributions of this work can be summarized as follows.
* We present BlendNet, a hardware-friendly neural network architecture with binary activations, where all convolutional layers are computed using binary multiply-accumulate (BMAC) operations (except on skip paths that utilize fixed-point convolutions). On a ResNet-20-like architecture designed for the CIFAR-10 dataset, BlendNet outperforms the state-of-the-art binary-based implementation of FracBNN [12] by 0.8% in top-1 accuracy.
* We introduce a powerful and flexible compiler for mapping a given BlendNet inference engine running on any
dataset onto our optimized accelerator design by converting the network model to a computational graph, scheduling the graph's execution, and optimizing its nodes by leveraging intrinsic fusions of the convolution (activation function) and batch normalization layers.
* We present a flexible FPGA-based design that enables the DSP-based realization of BMAC operations. The reconfigurability of DSPs for performing bitwise logic operations is utilized to achieve a low-power implementation.
* We also apply our transformations, integrate our blocks to MLPMixers and improve the naively binarized MLPMixer models by 6%. To the best of our knowledge, this is the first paper that presents a binary model of MLPMixers with negligible accuracy drop.
## II Preliminaries
In this section, we briefly introduce the MLPMixer and then describe the conventional BNN model and architecture used for the hardware implementation. After that, we discuss some recent BNN architectures that have significantly improved accuracy.
### _MLPMixer_
To further reduce the inductive biases introduced by CNNs, MLPMixer has recently been proposed as a more straightforward solution that is fully based on multi-layer perceptrons (MLPs) [5] (see Fig. 1). The basic layer in MLPMixer consists of two components: the channel-mixing block and the token-mixing block. Each of these mixing blocks has the same units and is shown in Fig. 1(b). In the channel-mixing block, the feature map is projected along the channel dimension to enable communication among the various channels, while in the token-mixing block, the feature map is projected along the spatial dimension so that spatial locations communicate with one another.
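The two mixing directions differ only in which axis the shared MLP runs over. A shape-level sketch (illustrative only; `f` stands in for the learned two-layer MLP, and the transpose trick mirrors the token-mixing description above):

```python
# Sketch: channel mixing applies f to each token's channel vector;
# token mixing applies f across the token axis by transposing first.
def mix(x, f, axis):
    """x: tokens-by-channels matrix (list of lists); f: vector -> vector."""
    if axis == "channel":
        return [f(row) for row in x]
    # token mixing: transpose, mix each (now per-channel) row, transpose back
    xt = [list(col) for col in zip(*x)]
    mixed = [f(row) for row in xt]
    return [list(col) for col in zip(*mixed)]
```

In the real model, `f` would be a two-layer MLP with GELU; here any vector function demonstrates the data movement.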
### _Conventional BNN_
BNNs have several properties that enable a more efficient mapping to FPGAs without affecting network accuracy. For implementing conventional BNN models, which are fully binarized (both weights and activations are quantized to 1-bit values), the product of a binary weight and an activation can be replaced with a binary XNOR operation.
Furthermore, by assuming that an unset bit represents -1 and a set bit represents +1, there are only two possible values, +1 and -1, for the result of the XNOR operation and, thus, for the synapse input. Therefore, the summation of a binary dot product can be implemented by a popcount operation that counts the number of set bits instead of an accumulation with signed arithmetic. Using popcount roughly halves the resource usage in terms of LUTs and FFs compared to using signed accumulation. Furthermore, all BNN layers apply batch normalization to the convolutional or fully connected layer outputs and then apply the sign function to determine the output activation. For hardware implementation, the same output activation can be computed via thresholding [15]. Lastly, max-pooling on binary activations can be implemented in hardware using the Boolean OR function.
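The XNOR-popcount trick above can be checked with a few lines. This sketch follows the bit convention from the text (set bit = +1, unset bit = -1); the function name is ours. With `p` set bits out of `n` in the XNOR result, the signed dot product is `2*p - n`:

```python
# Sketch: binary dot product via XNOR + popcount on packed bit words.
def bnn_dot(a_bits, w_bits, n):
    """a_bits, w_bits: n-bit packed activations/weights; returns signed dot."""
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ w_bits) & mask   # bit is 1 where signs agree (+1 product)
    p = bin(xnor).count("1")           # popcount of agreeing positions
    return 2 * p - n                   # agreements minus disagreements
```

For example, activations (+1,-1,+1,+1) and weights (+1,+1,-1,+1), packed LSB-first as `0b1101` and `0b1011`, give the expected dot product 0.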
### _Prior Work on State-of-the-art BNN Architectures_
The state-of-the-art BNN models achieve high accuracy using blocks similar to those of ResNet models [16]. Martinez et al. [10] presented a strong baseline for BNN models, based on a modified ResNet block suitable for 1-bit CNNs. They used double skip connections and PReLU (first introduced in [17]) as the activation function. They follow the idea of using real-valued downsample layers proposed in [9], which improves the accuracy significantly. In addition, they tried to decrease the discrepancy between the output of a binary convolution and that of the corresponding real-valued convolution. They proposed a real-to-binary attention matching scheme suitable for training 1-bit CNNs. They also presented a method wherein the architectural gap between real and binary networks is bridged step by step via a sequence of teacher-student pairs.
More recently, a more accurate BNN model called ReActNet [11] was proposed to mitigate the precision gap between a binarized model and its real-valued counterpart. ReActNet took one step further and is based on the MobileNetV1 [13] architecture. It reaches a top-1 accuracy of 69.4% on the ImageNet dataset using 4.82G BMACs with a 4.6 MB model size. The key block in ReActNet is a biased PReLU (BPReLU) activation function that shifts and reshapes the feature maps between two convolutional layers. This substantially improves the accuracy of the model. However, this model has as many as 29.3 million parameters (29.3 million bits).
To improve upon ReActNet [11], the authors of [12] proposed FracBNN, which employs a dual-precision activation scheme to compute features with up to two bits, using an additional sparse binary convolution layer. They achieved MobileNetV2-level accuracy with a competitive model size.
## III The Proposed Building Blocks
The proposed BNN model comprises two types of building blocks, as shown in Fig. 2: one with no operations on the skip path and another with average pooling, convolution, and batch normalization on its skip path. There are a few differences between the presented building blocks compared to prior work.
Fig. 1: Overview of MLPMixer model. (a) Overall MLPMixer and (b) mixing block architectures.
First, both types of building blocks include a batch normalization layer as their final output layer. This batch normalization layer, which does not include any data-driven, trainable channel rescaling/shifting parameters, ensures the output of each block is in a range centered around zero, and is, therefore, amenable to fixed-point or binary quantization. Adding channel rescaling/shifting to this batch normalization layer tends to reduce the classification accuracy by a few percentage points.
Second, a PReLU activation is placed in the main path and before the addition operation, which yields improvements in classification accuracy compared to other activation functions such as ReLU [10].
Last but not least, the building blocks are designed with the hardware implementation cost in mind. For example, as shown in Fig. 4, the final batch normalization layer of a building block can be fused into the next building block, resulting in a thresholding operation in the main path and modified convolutional weights in the skip path (see details in Sec. IV-A). A similar fusing can be performed for batch normalization in the skip path, all leading to reduced end-to-end inference latency. We extend our building blocks to the MLPMixer model and propose a new Mixing block, as shown in Fig. 3.
## IV Proposed Accelerator Design
In this section, we describe the proposed hardware accelerator and the associated compiler, which performs optimizations tailored to the employed accelerator design.
### _Compiler Optimization_
Our compiler performs optimizations tailored to the employed accelerator design and fuses operations to reduce the hardware cost and generate an intermediate graph. After such optimization and fusions, our compiler compiles the intermediate graph, extracts the required parameters for the accelerator design, and generates a static schedule.
All BNN layers proposed in this paper (cf. Fig. 2) begin with a sign function (SG) and end with a batch normalization (BN) operation. First, we move the last BN block of a layer to the next layer and pass it through both the feed-forward and skip paths of the next layer (cf. Fig. 3(b)). Moreover, the BN block that comes before the SG block can be replaced by a thresholding (TH) function in order to reduce the hardware cost (cf. Fig. 3(c)). Using such a technique, we can process the input feature map using an unsigned comparison and avoid expensive operations, such as multiplications, that are required in the BN block. Reference [15] explains how the hardware cost of a regular BN-SG block is reduced from 2 DSPs, 55 FFs, and 40 LUTs for separate BN and SG computations to merely 6 LUTs for the TH block computations using such a technique.
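The BN-to-thresholding simplification can be sketched numerically. For \(\gamma>0\), \(\operatorname{sign}(\gamma\,(x-\mu)/\sqrt{\sigma^2+\epsilon}+\beta)\) is nonnegative exactly when \(x\) exceeds a precomputed threshold \(t\); a negative \(\gamma\) flips the comparison direction. This scalar sketch (names are ours) derives \(t\):

```python
import math

# Sketch: fold BN followed by sign() into a single threshold comparison.
def bn_sign_threshold(gamma, beta, mu, var, eps=1e-5):
    """Returns t such that BN(x) >= 0 iff x >= t (assumes gamma > 0)."""
    return mu - beta * math.sqrt(var + eps) / gamma
```

At \(x=t\) the BN output is exactly zero, so a single unsigned comparison against \(t\) reproduces the BN-SG result without any multiplication at inference time.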
In addition, the BN block that is passed onto the skip path is also fused with the CONV layer in the skip path if one exists. In summary, we replace a BN-CONV-BN sequence of layers on the skip path with a single CONV layer (cf. Fig. 3(c)). More details of such fusion are provided in the following. Note that the BN block that is passed onto the skip path is passed through the average pooling block during inference (cf. Fig. 3(b)). This simplification also reduces the chances of encountering overflow/underflow even if we assign fewer bits than required for the summation, because the weights/biases of the fused BN-CONV-BN layer help normalize this layer's outputs. In other words, the output of the BMAC can be considered 16 bits, and no quantization is required.
Algorithm 1 shows the well-known batch normalization algorithm where the two parameters \(\gamma\) and \(\beta\) are learned during the training process. Note that \(\epsilon\) is a small constant value used to ensure that division-by-zero error is not encountered.
```
Input \(\mathcal{B}=\{y_{1},y_{2},\ldots,y_{m}\}\); \(\gamma\); \(\beta\) Output \(y^{\prime}_{i}=BN_{\gamma,\beta}(y_{i})\)
1:\(\mu_{\mathcal{B}}\leftarrow\frac{1}{m}\sum_{i=1}^{m}y_{i}\)// mini-batch mean
2:\(\sigma_{\mathcal{B}}^{2}\leftarrow\frac{1}{m}\sum_{i=1}^{m}(y_{i}-\mu_{\mathcal{B}})^{2}\)// mini-batch variance
3:\(\hat{y}_{i}\leftarrow\frac{y_{i}-\mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^{2}+\epsilon}}\)// normalize
4:\(y^{\prime}_{i}\leftarrow\gamma\hat{y}_{i}+\beta\equiv BN_{\gamma,\beta}(y_{i})\)// scale and shift
```
**Algorithm 1** Batch normalization for activation \(y\) in a mini-batch, \(y^{\prime}\) is the normalized result
When the BN layer is fused into a subsequent convolutional
Fig. 3: The proposed Mixing block in this paper.
Fig. 2: The proposed building blocks. The differences with respect to real-to-binary blocks are highlighted in red.
layer, the fused layer's computations may be written as,
\[y^{\prime}=\mathbf{w}(\gamma\frac{\mathbf{x}-\mathbf{\mu}^{\prime}_{\text{ \emph{B}}}}{\sqrt{\mathbf{\sigma}^{\prime}_{\text{ \emph{B}}}^{2}+\epsilon}}+\mathbf{\beta}^{\prime})+\mathbf{b} \tag{1}\]
Hence, the new fused parameters can be calculated as:
\[\mathbf{w}^{\prime}=\frac{\mathbf{\gamma}^{\prime}\mathbf{w}}{\sqrt{\mathbf{\sigma}^{\prime}_{ \text{\emph{B}}}^{2}+\epsilon}},\quad\mathbf{b}^{\prime}=-\frac{\mathbf{\gamma}^{ \prime}\mathbf{w}\mathbf{\mu}^{\prime}_{\text{\emph{B}}}}{\sqrt{\mathbf{\sigma}^{\prime}_ {\text{\emph{B}}}^{2}+\epsilon}}+\mathbf{w}\mathbf{\beta}^{\prime}+\mathbf{b} \tag{2}\]
Note that \(\mathbf{x}\) in Eqn. 1 is the output of the previous layer, so the 4-D \(\mathbf{w}\) must be squeezed into a 2-D matrix of size (\(c_{\text{in}}\), \(c_{\text{out}}\)\(\times\)\(w_{\text{k}}\)\(\times\)\(h_{\text{k}}\)). Indeed, if we use zero padding in convolutions, 0 values enter the convolution after the BN. When we fuse the parameters, this must still be the case. So, to correctly fold \(\mathbf{w}\) into the 2-D matrix, we have to replace the default 0 values for padding by \(\mathbf{\mu}^{\prime}_{\text{\emph{B}}}-\frac{\mathbf{\beta}^{\prime}\sqrt{\mathbf{\sigma}^{\prime}_{\text{\emph{B}}}^{2}+\epsilon}}{\mathbf{\gamma}^{\prime}}\), that is, we must apply the inverse of the BN transformation to the padding so that, after multiplication with the fused weights \(\mathbf{w}^{\prime}\), the padding behaves as 0 again.
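The fusion in Eqns. (1)-(2) can be verified with a quick scalar check. This single-channel sketch (names are ours, not the paper's compiler code) computes the fused weight and bias and compares applying BN followed by the convolution weight against the fused form:

```python
import math

# Sketch: fuse a preceding BN into the next conv weight/bias (Eqns. 1-2).
def fuse_bn_into_next(w, b, gamma, beta, mu, var, eps=1e-5):
    """Returns (w_f, b_f) with w*(BN(x)) + b == w_f*x + b_f."""
    s = gamma / math.sqrt(var + eps)
    w_f = w * s
    b_f = -w * s * mu + w * beta + b
    return w_f, b_f
```

The same algebra extends channel-wise to the full 2-D folded weight matrix; only the padding values need the special treatment described above.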
Using algorithm 1 and proceeding with fused parameters as in the previous case, the weights and bias of the resulting fused BN-CONV-BN block may be expressed as:
\[\mathbf{w}^{\prime\prime}=\frac{\mathbf{\gamma}^{\prime\prime}\mathbf{w}^{\prime}}{\sqrt{ \mathbf{\sigma}^{\prime\prime}_{\text{\emph{B}}}^{2}+\epsilon}},\quad\mathbf{b}^{ \prime\prime}=\mathbf{\beta}^{\prime\prime}+\mathbf{\gamma}^{\prime\prime}\frac{\mathbf{ b}^{\prime}-\mathbf{\mu}^{\prime\prime}_{\text{\emph{B}}}}{\sqrt{\mathbf{\sigma}^{\prime \prime}_{\text{\emph{B}}}^{2}+\epsilon}} \tag{3}\]
### _Accelerator Architecture_
For this work, we adopted a heterogeneous streaming architecture where each layer uses its own hardware resources.
Our accelerator supports the following types of operations:
* \(3\times 3\) BNN convolution
* \(1\times 1\) FPNN convolution
* Average pooling
* Max Pooling
* Linear layer
* Thresholding
* BN-PReLU/BN-PReLU-BN
* Residual connection and summation
We have designed and implemented an efficient accelerator that supports these operations on FPGA devices. We separate our design into three domains: the binary, fixed-point, and joint domains. The BNN (FPNN) convolutions are placed in the binary (fixed-point) domain, whereas all other operations are in the joint domain. Our hardware is designed using Xilinx's High-Level Synthesis (HLS) tools. In the following, we describe the hardware engines and the HLS design techniques used in detail.
#### Iv-B1 BNN and FPNN Block
The BMACs/FPMACs of the proposed accelerator are mapped to DSPs. Each DSP block (i.e., DSP48E2 in Xilinx FPGAs) is capable of performing an 18-by-27-bit multiplication or a 48-bit bitwise logic operation, including AND, OR, NOT, NAND, NOR, XOR, and XNOR. We used this feature to perform 48 XNOR operations simultaneously as one BMAC operation. Note that the same 48 XNOR operations implemented on LUTs would use 48 LUTs and 49 FFs. So, the input feature map and weights must be packed into groups of size 48. For this purpose, we pack bits along the channel dimension into 48-bit unsigned integers for concurrent access. We will highlight the advantage of mapping logic operations to DSPs in Section V-C.
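The channel-dimension packing described above can be sketched as follows (an illustrative sketch, not the HLS code; the word width 48 matches the DSP logic-operation width stated in the text):

```python
# Sketch: pack binary activations along the channel dimension into 48-bit
# words so one 48-bit XNOR covers 48 BMACs at once.
def pack48(bits):
    """bits: list of 0/1 values; returns list of 48-bit ints (zero-padded)."""
    words = []
    for i in range(0, len(bits), 48):
        word = 0
        for j, bit in enumerate(bits[i:i + 48]):
            word |= (bit & 1) << j   # LSB-first packing within each word
        words.append(word)
    return words
```

A 100-channel feature vector, for example, packs into three words, with the last word zero-padded.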
Our design results in a balanced usage of hardware resources, since LUTs are mostly used for the other operations while DSPs perform the BMAC/FPMAC operations. Note that the reconfigurability of DSPs in each clock cycle can be used to switch between BMAC and FPMAC operations. This is very helpful in a homogeneous single-execution-engine design, where a fixed set of resources is shared by all layers; such a design, however, is beyond the scope of the present paper. In the streaming architecture, each unit has its own resources. A 2D array of DSPs of size 32x32 is designed to perform the FPMAC operations.
#### Iv-B2 FP/B Joint Blocks
In addition to the accelerators for the convolution operations, which account for the majority of the computations in a vision neural network, we also implement hardware accelerators for the operations that must be performed in the joint domain, including the max (average) pooling and thresholding blocks. For instance, the TH unit compares each output activation from the previous layer with a programmable threshold and outputs +1 (0) if the activation is greater (smaller) than the corresponding threshold. Since the TH blocks only contain channel-wise parameters, their impact on the total number of model parameters is negligible. Although we could achieve maximum concurrency by processing all output activations simultaneously, this would require substantial resources, including both memories (the number of access ports must be increased by partitioning the data into several memory blocks) and TH blocks, and the performance gain may not be worth the cost. Instead, we use the greatest common divisor (GCD) of the height of the systolic array in the FPNN block (e.g., 32) and the width of the array of BMACs in the BNN block (e.g., 48) as the parallelism factor for all blocks in the joint domain.
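A minimal software model of the TH block and the shared parallelism factor (function names and shapes are illustrative, not taken from the HLS code):

```python
import math
import numpy as np

# parallelism factor shared by all joint-domain blocks:
# GCD of the FPNN systolic-array height (32) and the BNN BMAC array width (48)
P = math.gcd(32, 48)
assert P == 16

def threshold_unit(acts, thresholds):
    """TH block: emit +1 where an activation exceeds its channel threshold, else 0.
    Processes P channels per 'cycle' to mimic the chosen parallelism factor."""
    acts = np.asarray(acts, dtype=float)
    out = np.empty_like(acts, dtype=np.int8)
    for start in range(0, acts.shape[0], P):   # one group of P channels at a time
        sl = slice(start, start + P)
        out[sl] = (acts[sl] > thresholds[sl]).astype(np.int8)
    return out

acts = np.array([0.2, -1.0, 3.1, 0.0] * 12)    # 48 channel activations (toy values)
thr = np.zeros(48)                             # programmable per-channel thresholds
y = threshold_unit(acts, thr)
assert y.tolist()[:4] == [1, 0, 1, 0]
```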
## V Experimental Results
For evaluation purposes, we targeted a high-end Virtex UltraScale+ FPGA (the Xilinx VU9P FPGA, which is available in the cloud as the AWS EC2 F1 instance). This FPGA contains approximately 2.5 million logic elements and approximately 6,800 DSP units. We use Vitis 2020.2 for mapping to the FPGA and set the target frequency to 340 MHz. We also use the Vivado power report provided by Xilinx to assess the power consumption of each design. Finally, we evaluate our proposed method on well-known models, namely ResNet-20 [16] and MLPMixers [5], using a commonly used computer-vision dataset for object recognition, the CIFAR-10 [18] dataset.
### _Experimental setup_
In the case of MLPMixers, the resolution of the input image is 32x32, and the experiments are based on a patch size of 4x4. We thus have \(S\) non-overlapping image patches that are mapped to a hidden dimension \(C\). \(D_{S}\) and \(D_{C}\) are tunable hidden widths in the token-mixing and channel-mixing MLPs, respectively. The summary of design specifications is shown in Table I.
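The patch bookkeeping behind Table I can be illustrated for the S/4 model: a 32x32 input split into 4x4 patches yields S = 64 tokens, which a (here untrained, all-zero) embedding maps to the hidden size C = 128. This is a shape-level sketch only:

```python
import numpy as np

H = W = 32; p = 4            # input resolution and patch size (model S/4)
C, D_S, D_C = 128, 64, 512   # hidden size and MLP widths from Table I

img = np.zeros((H, W, 3))
# split into non-overlapping p x p patches and flatten each one
patches = (img.reshape(H // p, p, W // p, p, 3)
              .transpose(0, 2, 1, 3, 4)
              .reshape(-1, p * p * 3))
S = patches.shape[0]
assert S == 64                       # sequence length S in Table I

E = np.zeros((p * p * 3, C))         # patch-embedding projection (toy weights)
tokens = patches @ E
assert tokens.shape == (S, C)
# token-mixing MLPs act along the S dimension (hidden width D_S);
# channel-mixing MLPs act along the C dimension (hidden width D_C)
```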
### _ResNet20_
Table II demonstrates the superiority of BlendNet on ResNet20 compared to other binary-based approaches. BlendNet achieves a top-1 accuracy 0.8% higher than the state-of-the-art FracBNN and improves the accuracy by 1.5% to about 9% compared to the other approaches. BlendNet-20 matches the model size of FracBNN even while keeping the first and last layers in full precision (i.e., 16-bit fixed-point).
### _Hardware cost and performance of BlendNet_
In this section, we evaluate our BlendNet-20 model on the FPGA platform. Compared to the BNN accelerator in FracBNN [12], our design achieves a higher frame rate (3846 FPS compared to the 2807 FPS reported in FracBNN [12]) and a higher working frequency (342 MHz compared to 250 MHz) while yielding higher accuracy. Note that the FPGA used in this paper is a server-class FPGA, while the authors of [12] deployed their design on an embedded FPGA with fewer resources. However, the frame-rate comparison is fair, since they unroll the entire model, as we do. The unrolling is feasible because ResNet-20-based models on CIFAR-10 are very compact and can be fitted into an FPGA, eliminating unnecessary transactions between the logic blocks and the DDR memory.
Table III compares the hardware cost of two approaches for implementing BlendNet-20 on the CIFAR-10 dataset, where DSP-based BlendNet-20 is the approach presented in Section IV-B2. The LUT-based design is a naive implementation in which the logic operations are performed using LUTs. Note that the DSP usage in the LUT-based implementation stems from the first and last layers. Decreasing the LUT usage results in power savings: our measurements show that the proposed implementation yields 2.5x lower power consumption.
TABLE II: Classification accuracy of ResNet20-like models on CIFAR-10.

| Approach | Accuracy (%) |
| --- | --- |
| DoReFa-Net [19] | 79.3 |
| DSQ [20] | 84.1 |
| ReActNet [11] | 85.8 |
| IR-Net [21] | 86.5 |
| FracBNN [12] | 87.2 |
| **BlendNet** | **88.0** |
TABLE I: Summary of design specifications for the MLPMixers used in this paper. The "S" and "B" (small and base) models scale down following Tolstikhin et al. [5]. The notation "B/4" means the base-scale model with patches of resolution 4x4.

| Specification | S/4 | B/4 | 2S/4 |
| --- | --- | --- | --- |
| Sequence length S | 64 | 64 | 64 |
| Hidden size C | 128 | 192 | 256 |
| Patch resolution | 4x4 | 4x4 | 4x4 |
| MLP dimension \(D_{C}\) | 512 | 768 | 1024 |
| MLP dimension \(D_{S}\) | 64 | 96 | 128 |
| Number of layers | 8 | 12 | 8 |
Fig. 4: (a) The proposed building block. (b) Distributing the BN from the preceding BlendNet block to both the main and skip paths. (c) Merging components, resulting in the optimized BlendNet block. Note that the red and yellow dashed boxes show the preceding and current building blocks, respectively. The green dashed boxes show the candidate components for merging.
TABLE III: Comparison between the hardware metrics of different BlendNet-20 implementations on CIFAR-10.

| Approach | DSP48E2 | LUT | FF | BRAM_18K | \(f_{target}\) | Power |
| --- | --- | --- | --- | --- | --- | --- |
| **DSP-based BlendNet-20** | 2240 (32%) | 10K (1%) | 30K (1%) | 912 (42%) | 342 MHz | 6.1 W |
| **LUT-based BlendNet-20** | 1132 (17%) | 500K (50%) | 900K (34%) | 912 (42%) | 342 MHz | 15.3 W |
### _MLPMixers_
As Table IV shows, our model (i.e., BlendMixer) outperforms the naively binarized version of MLPMixer (i.e., BinaryMixer) while achieving a smaller memory footprint (i.e., model size). Increasing the model size (e.g., model B/4) reduces the accuracy drop due to binarization. Note that the FPMAC operations in both BinaryMixer and BlendMixer stem from the patch-embedding block and the last FC layer. In terms of performance, BlendMixer achieves 0.02 ms (0.06 ms) latency for the small (base) model, while MLPMixer processes an image in 0.24 ms (1.14 ms).
### _Ablation study with normalization unit in mixing blocks_
To further evaluate the influence of the normalization unit on the proposed mixing blocks, we performed an ablation study with different normalization units: layer normalization as suggested in the original paper, batch normalization over channels, and batch normalization over patches. As shown in Table V, the mixing block with batch normalization over channels outperforms the other variants. Using batch normalization also reduces the memory footprint, since fewer parameters need to be stored. Note that the MLPMixer consisting of the proposed mixing blocks is called BlendMixer.
### _Computation/memory cost vs accuracy in mixing blocks_
In this section, we assess the trade-off between computation/memory complexity and result accuracy. As demonstrated in Table VI, we can vary the type of computation in the FC layers of the mixing blocks to improve the accuracy at the cost of computation and memory complexity. BlendMixer(BB/BB-2S/4) is simply the widened version of BlendMixer S/4, and BlendMixer(BB/FPB-2S/4) is the same architecture except that the FC layer of the second mixing block is computed in FP format. The other models are named accordingly. BlendMixer(BB/FPB-2S/4) achieves comparable accuracy (less than a 1% accuracy drop compared to MLPMixer S/4).
## VI Conclusion
This paper introduces BlendNet, an innovative neural network design that makes use of the Blend module, a new building block that performs binary and fixed-point convolutions in the main and skip paths, respectively. Batch normalizations are deployed intelligently on both the main and skip paths within the Blend module, as well as between adjacent Blend modules. BlendNet-20, a descendant of ResNet-20 trained on the CIFAR-10 dataset, achieves 88.0% classification accuracy (0.8% better than the state of the art) with 1.4x higher throughput. BlendMixer outperforms the naively binarized version of MLPMixer while achieving a smaller memory footprint.
---

arXiv:2310.08578v1 (2023-10-12). Hannah Lange, Fabian Döschl, Juan Carrasquilla, Annabelle Bohrdt. http://arxiv.org/abs/2310.08578v1

# Neural network approach to quasiparticle dispersions in doped antiferromagnets
###### Abstract
Numerically simulating spinful, fermionic systems is of great interest from the perspective of condensed matter physics. However, the exponential growth of the Hilbert space dimension with system size renders an exact parameterization of large quantum systems prohibitively demanding. This is a perfect playground for neural networks, owing to their immense representative power that often allows using only a fraction of the parameters that are needed in exact methods. Here, we investigate the ability of neural quantum states (NQS) to represent the bosonic and fermionic \(t-J\) model - the high interaction limit of the Fermi-Hubbard model - on different 1D and 2D lattices. Using autoregressive recurrent neural networks (RNNs) with 2D tensorized gated recurrent units, we study the ground state representations upon doping the half-filled system with holes. Moreover, we present a method to calculate dispersion relations from the neural network state representation, applicable to any neural network architecture and any lattice geometry, which allows us to infer the low-energy physics from the NQS. To demonstrate our approach, we calculate the dispersion of a single hole in the \(t-J\) model on different 1D and 2D square and triangular lattices. Furthermore, we analyze the strengths and weaknesses of the RNN approach for fermionic systems, pointing the way toward an accurate and efficient parameterization of fermionic quantum systems using neural quantum states.
The simulation of quantum systems has remained a persistent challenge until today, primarily due to the exponential growth of the Hilbert space, making it exceedingly difficult to parameterize the wave functions of large systems using exact methods. Since the seminal work of Carleo and Troyer [1], the idea of using neural networks to simulate quantum systems [1; 2; 3; 4; 5] has been applied successfully for a large number of quantum systems, leveraging various neural network architectures. These architectures include restricted Boltzmann machines [2; 3], convolutional neural networks (CNNs) [6], group CNNs [7], autoencoders [8] as well as autoregressive neural networks such as recurrent neural networks (RNNs) [9; 10; 11; 12; 13], with neural network representations of both amplitude and phase distributions of the quantum state under consideration. These neural quantum states (NQS) make use of the innate ability of neural networks to efficiently represent probability distributions. When applying them to represent quantum systems, this ability can help to reduce the number of parameters required to encode the system.
Despite their representative power, NQS have been shown to face challenges during the training process, for example when they are trained to minimize the energy, i.e. to represent ground states. This results from the intricate nature of the loss landscape, characterized by numerous saddle points and local minima that complicate the search for the global minimum [14]. One promising avenue to overcome this problem is the use of many uncorrelated samples during the training. This strategy is facilitated when using autoregressive neural networks [15; 16], allowing to directly sample from the wave functions' amplitudes. Autoregressive networks have already been applied in the physics context [17; 18], such as for variational simulation of spin systems [10; 11; 12; 13].
Many works have so far focused on NQS representations of spin systems at half-filling, revealing that NQS can be used to study a variety of phenomena that are relevant to state-of-the-art research, as e.g. shown for RNN representations on various lattice geometries, including frustrated spin systems [10; 19] and systems with topological order [20]. For all of these systems, the physics becomes even richer when introducing mobile impurities, e.g. holes, into the system, yielding a competition between the magnetic background and the kinetic energy of the impurity. Simulating such systems holds particular relevance for understanding high-temperature superconductivity, where the superconducting dome arises upon doping the antiferromagnetic half-filled state with holes [21]. The search for NQS that are capable of representing such spinful fermionic systems is still in its early stages. In recent years, first NQS have been developed that obey the fermionic statistics, simulating molecules [22; 23; 24], spinless fermions [16] and spinful fermions [25; 26; 27; 28]. Among those architectures are FermiNet [22; 23], Slater-Jastrow ansätze [16; 25; 27; 28] and variants of Jordan-Wigner transformations [24; 26; 29; 30; 31].
Here, we use an autoregressive neural network architecture, supplemented with a Jordan-Wigner transformation, to simulate ground states of the high interaction
limit of the Fermi-Hubbard model, believed to capture essential features of high-temperature cuprate superconductors. Specifically, we use RNNs, proven to successfully model spin systems [9; 10; 19; 20; 32; 33], and simulate the ground states of the fermionic (bosonic) \(t-J\) model, both in one and two dimensions. In its more generalized form, known as the fermionic (bosonic) \(t-\)XXZ model, with anisotropic superexchange interactions denoted as \(J_{z}\) and \(J_{\pm}\), the Hamiltonian under consideration reads as follows:
\[\begin{split}\mathcal{H}_{t\text{XXZ}}=&-t\sum_{\langle\mathbf{i},\mathbf{j}\rangle,\sigma}\mathcal{P}_{G}\left(\hat{c}_{\mathbf{i},\sigma}^{\dagger}\hat{c}_{\mathbf{j},\sigma}+\text{h.c.}\right)\mathcal{P}_{G}\\&+J_{z}\sum_{\langle\mathbf{i},\mathbf{j}\rangle}\left(\hat{S}_{\mathbf{i}}^{z}\hat{S}_{\mathbf{j}}^{z}-\frac{1}{4}\hat{n}_{\mathbf{i}}\hat{n}_{\mathbf{j}}\right)\\&+J_{\pm}\sum_{\langle\mathbf{i},\mathbf{j}\rangle}\frac{1}{2}\left(\hat{S}_{\mathbf{i}}^{+}\hat{S}_{\mathbf{j}}^{-}+\hat{S}_{\mathbf{i}}^{-}\hat{S}_{\mathbf{j}}^{+}\right),\end{split} \tag{1}\]
with the fermionic (bosonic) creation and annihilation operators \(\hat{c}_{\mathbf{i},\sigma}^{\dagger}\) and \(\hat{c}_{\mathbf{i},\sigma}\) for particles at site \(\mathbf{i}\) with spin \(\sigma\); spin operators are denoted by \(\hat{\mathbf{S}}_{\mathbf{i}}=\sum_{\sigma,\sigma^{\prime}}\hat{c}_{\mathbf{i},\sigma}^{ \dagger}\frac{1}{2}\mathbf{\sigma}_{\sigma\sigma^{\prime}}\hat{c}_{\mathbf{i},\sigma^ {\prime}}\) as well as density operators by \(\hat{n}_{\mathbf{i}}\)[34]. For \(J_{z}=J_{\pm}\), Eq. (1) reduces to the \(t-J\) model and for \(J_{\pm}=0\) to the \(t-J_{z}\) model.
In the absence of doping (\(\hat{n}_{\mathbf{i}}=1\)), Eq. (1) reduces to the XXZ model or, in the case of \(J_{z}=J_{\pm}\), the Heisenberg model. Prior studies have already utilized RNNs to simulate these spin models [19; 36], with the possibility of rendering the model stoquastic by making use of the Marshall sign rule [37]. This is done by implementing the sign rule directly in the RNN architecture [19], yielding a simplified optimization procedure for the wave function's phase.
When the ground state at \(\hat{n}_{\mathbf{i}}=1\) is doped with a single hole, the resulting mobile impurity gets dressed with a cloud of magnetic excitations. This yields the formation of a magnetic polaron, which has already been observed in ultracold atom experiments [38]. Its properties strongly depend on the spin background, see Fig. 1a and b. Upon further doping, the strong correlations in the model make the simulation of the Fermi-Hubbard or \(t-J\) models numerically challenging, despite impressive numerical advances in the past years [39; 40; 41; 42]: commonly used methods all come with their specific limitations; e.g., the density matrix renormalization group [43; 44] is limited by the area law of entanglement, making it challenging to apply this method in 2D or higher dimensions. Finally, the calculation of spectral functions or dispersion relations \(E(\mathbf{k})\) [35], as shown for example in Fig. 1, is of great interest to many fields of physics, since they reveal the emergent physics of the system under investigation. In condensed matter physics, they are typically used to infer the dominant excitations in the ground state or in higher-energy states, e.g. upon doping the system. This information is contained in specific features of the spectra, e.g. the bandwidth of the quasiparticle dispersion \(E(\mathbf{k})\). However, the calculation of spectra or dispersions \(E(\mathbf{k})\) is in general computationally costly with conventional methods, e.g. density-matrix renormalization group (DMRG) simulations: the former typically involves a generally expensive time evolution of the state [45], and the latter the evaluation of a global operator, the momentum \(\mathbf{k}\), which is typically very costly for matrix product states.
The remaining part of the paper is structured as follows: In the first section, we introduce the fermionic RNN architecture and its training. Second, we apply the RNN architecture to the ground state search of the \(t-\)XXZ model on different lattice geometries, including 1D and 2D lattices. Furthermore, we present a method to map out the dispersion relation of the system under consideration. This method is not limited to our specific RNN quantum state representation, but applicable to any NQS architecture. Moreover, it can in principle be combined with spatial symmetries. This could
Figure 1: \(t-J\) and \(t-J_{z}\) square lattice with \(10\times 4\) sites, \(t/J_{z}=3\) and open boundaries in \(x\), periodic boundaries in \(y\) direction: a) Quasiparticle dispersion of a single hole for the \(t-J\) system obtained with the RNN (blue markers), compared to the MPS spectral function from Ref. [35] with the spectral weight \(S\) indicated by the colormap and shown in the inset figure for \(\mathbf{k}=(0.44\pi,0.44\pi)\). We average the energy over the last 100 training iterations, each with 200 samples, with the respective error bars denoted in blue. b) Dispersion of the \(t-J_{z}\) system obtained with the RNN, compared to the MPS spectral function. c) Relative errors \(\Delta\epsilon=\frac{E_{\text{RNN}}-E_{\text{DMRG}}}{|E_{\text{DMRG}}|}\) during the training for \(t-J\) and \(t-J_{z}\) systems, both with \(d_{h}=300\). Dashed vertical lines denote the training step where the training was restarted. In the last restart the number of samples per minimization step was increased from 200 to 600 (\(t-J\)) or 1000 (\(t-J_{z}\)).
help to improve the accuracy, and furthermore enable the analysis of low-lying excitations in a specific symmetry sector, e.g. \(m_{4}\) rotational resonances [46; 47]. We present the results for different lattice geometries, including a triangular ladder. Finally, we address the limitations and drawbacks of our RNN ansatz, provide tests on the effects of more sophisticated training procedures, and discuss possible improvements.
## I Architecture and training
In the present paper we use a recurrent neural network (RNN) [48] to represent a quantum state defined on a 2D lattice with \(N_{\rm sites}=N_{x}\cdot N_{y}\) positions occupied by \(N_{p}\) particles. RNNs and similar generative architectures combined with variational energy minimization have already been applied successfully to spin systems [5; 32; 36; 10]. One of the advantages of these architectures is their autoregressive property, which allows extremely efficient independent sampling from the RNN wave function [49; 18], an important ingredient for the training procedure.
In order to represent fermionic wave functions, we start from the same approach as for bosonic spin systems and use an RNN architecture consisting of \(N_{\rm sites}\) (tensorized) gated recurrent units (GRUs), each one representing one site of the system. The information is passed from the first cell, corresponding to the first lattice site, to the last site in a recurrent fashion, see Fig. 13 in Appendix A.
The RNN architecture and its application to model quantum states can most easily be understood for 1D systems: At each lattice site \(i\) we define \(\mathbf{\sigma}_{i}\), an \(N_{s}\times d_{v}\) matrix, to denote the \(N_{s}\) local sample configurations at the respective site, and \(\mathbf{\sigma}\), the complete configuration of system size \(L\), an \(N_{s}\times L\times d_{v}\) array, with \(d_{v}\) the visible dimension. For the \(t-J\) model, each (local) configuration consists of zeros, ones and minus ones to denote holes, spin up and spin down particles, respectively, i.e. the visible dimension is \(d_{v}=3\). Furthermore, we define the _hidden state_ \(\mathbf{h}_{i}\) of dimension \(N_{s}\times d_{h}\) that is used to pass information from previous lattice sites through the network, with \(d_{h}\) the hidden dimension. Given the configuration \(\sigma_{i}\) at site \(i\) and a hidden state \(\mathbf{h}_{i-1}\), the RNN cell outputs the updated hidden state \(\mathbf{h}_{i}\) as well as a conditional probability distribution and a local phase. Hereby, the hidden dimension \(d_{h}\) determines the number of parameters of our RNN quantum state.
Since it is possible to generate \(N_{s}\geq 1\) samples at once, by passing sets of local configurations \(\mathbf{\sigma}_{i}\) through the network in parallel, we will use the notation as vectors \(\mathbf{\sigma}_{i}\) and \(\mathbf{\sigma}\) in the following, where each entry in \(\mathbf{\sigma}\) (\(\mathbf{\sigma}_{i}\)) corresponds to one configuration (local configuration).
The RNN wave function is represented by an RNN with cells that have two output layers, one for the local phase \(\phi_{\mathbf{\lambda}}(\sigma_{i}|\sigma_{<i})\), and one for the local amplitude \(P_{\mathbf{\lambda}}(\sigma_{i}|\sigma_{<i})\)[10]. In total, the RNN wave function is given by
\[|\psi_{\mathbf{\lambda}}\rangle=\sum_{\mathbf{\sigma}}\exp(i\phi_{\mathbf{\lambda}}(\sigma))\sqrt{P_{\mathbf{\lambda}}(\sigma)}\,|\sigma\rangle, \tag{2}\]
where \(\phi_{\mathbf{\lambda}}(\sigma)=\sum_{i}^{N}\phi_{\mathbf{\lambda}}(\sigma_{i}|\sigma _{<i})\) is the phase and \(\sqrt{P_{\mathbf{\lambda}}(\sigma)}\) with \(P_{\mathbf{\lambda}}(\sigma)=\Pi_{i}^{N}P_{\mathbf{\lambda}}(\sigma_{i}|\sigma_{<i})\) is the amplitude of the respective configuration \(\sigma\).
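The autoregressive structure of Eq. (2) can be illustrated with a toy recurrent cell (random weights, amplitude head only; the phase head is omitted): the conditional probabilities multiply to a properly normalized distribution over all configurations. All weights and sizes below are invented for illustration:

```python
from itertools import product

import numpy as np

d_v, L = 3, 6          # local dim (hole, spin up, spin down) and chain length
rng = np.random.default_rng(2)
Wh = rng.normal(scale=0.3, size=(8, 8))       # toy recurrent weights, d_h = 8
Wx = rng.normal(scale=0.3, size=(8, d_v))
Wp = rng.normal(scale=0.3, size=(d_v, 8))     # amplitude output head

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def conditionals(sigma):
    """Return p(sigma_i | sigma_<i) for each site of one configuration."""
    h = np.zeros(8)
    x = np.zeros(d_v)                          # input for the first site
    probs = []
    for s in sigma:
        h = np.tanh(Wh @ h + Wx @ x)           # toy recurrent update
        p = softmax(Wp @ h)                    # conditional over d_v values
        probs.append(p[s])
        x = np.eye(d_v)[s]                     # feed the sampled value forward
    return np.array(probs)

sigma = [0, 1, 2, 1, 0, 2]
P_sigma = conditionals(sigma).prod()           # P(sigma) = product of conditionals

# normalization: summing P over all d_v**L configurations gives 1
total = sum(conditionals(list(c)).prod() for c in product(range(d_v), repeat=L))
assert np.isclose(total, 1.0)
```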
In the present work we use the tensorized 2D version of the RNN wave function introduced above, as proposed in Ref. [50], where the information encoded in the hidden states is passed in a 2D manner, see Appendix A. Furthermore, we use a variant of a gated recurrent unit (GRU) instead of a simple RNN cell, that are more successful in capturing long-term dependencies [51; 52; 53].
Our RNN ansatz uses a \(U(1)=U(1)_{\hat{N}}\times U(1)_{\hat{S}^{z}}\) symmetry, i.e. conserved total particle number and total magnetization, as in Refs. [9; 10; 19; 54]. Further details on the RNN architecture can be found in Appendix A. Moreover, in contrast to previous RNN works on the Heisenberg model [10], we do not implement any bias on the phase of the quantum state, such as the Marshall sign rule [37], in order to make our architecture applicable to any number of holes in the system.
### Minimization Procedure
In order to find the ground state of the system under consideration, we use the variational Monte Carlo (VMC) minimization of the energy [55; 49]. VMC has already been used in a wide range of machine learning applications (see e.g. Refs. [56; 17] for an overview). In VMC, the expectation value of the energy of the RNN trial wave function,
\[\langle E_{\mathbf{\lambda}}\rangle=\sum_{\mathbf{\sigma}}|\psi_{\mathbf{\lambda}}(\sigma)|^{2}\,E_{\mathbf{\lambda}}^{\rm loc}(\sigma), \tag{3}\]
is minimized. Here, we have defined the local energy
\[E_{\mathbf{\lambda}}^{\rm loc}(\sigma)=\frac{\langle\sigma|\mathcal{H}|\psi_{\mathbf{ \lambda}}\rangle}{\langle\sigma|\psi_{\mathbf{\lambda}}\rangle}\,. \tag{4}\]
As shown e.g. in Refs. [10; 26] one can use the cost function
\[\mathcal{C}=\sum_{\mathbf{\sigma}}|\psi_{\mathbf{\lambda}}(\sigma)|^{2}\underbrace{\left[E_{\mathbf{\lambda}}^{\rm loc}(\sigma)-\langle E_{\mathbf{\lambda}}^{\rm loc}\rangle\right]}_{=:-\sqrt{N_{s}}\,\bar{\epsilon}(\sigma)} \tag{5}\]
to minimize both the local energy as well as the variance of the local energy to make the training more stable. In Eq. (5), we have defined \(\bar{\epsilon}(\sigma):=-\frac{1}{\sqrt{N_{s}}}\left[E_{\mathbf{\lambda}}^{\rm loc }(\sigma)-\langle E_{\mathbf{\lambda}}^{\rm loc}\rangle\right]\), where \(N_{s}\) denotes the number of samples.
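A toy example illustrates Eqs. (3)-(5) and the zero-variance principle behind the cost function: for an exact eigenstate, the local energy of Eq. (4) is constant over samples, so the centered residual \(\bar{\epsilon}(\sigma)\) vanishes. The 8-dimensional random Hamiltonian below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 8
A = rng.normal(size=(D, D))
H = (A + A.T) / 2                      # a toy Hermitian "Hamiltonian"
evals, evecs = np.linalg.eigh(H)
E0, psi = evals[0], evecs[:, 0]        # exact ground state

def e_loc(sigma):
    """Local energy <sigma|H|psi> / <sigma|psi>, Eq. (4)."""
    return H[sigma] @ psi / psi[sigma]

# sample sigma ~ |psi|^2 and form the centered residual of Eq. (5)
N_s = 2000
samples = rng.choice(D, size=N_s, p=psi**2)
el = np.array([e_loc(s) for s in samples])
eps_bar = -(el - el.mean()) / np.sqrt(N_s)

assert np.allclose(el, E0)             # zero variance: exact state -> constant E_loc
assert np.allclose(eps_bar, 0.0)
```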
One of the main difficulties of neural network quantum states is the optimization of Eq. (5), due to its typically
rugged landscape with many local minima and saddle points [14]. Unless stated otherwise, we use the Adam optimizer [57] for the optimization of Eq. (5), following previous works on NQS using RNNs [9; 10; 36]. To improve the optimization, stochastic reconfiguration (SR) [58; 59] is often used. In this method, the parameter updates \(\delta\mathbf{\lambda}_{k}\) of the neural network are determined by
\[\bar{O}_{\mathbf{\sigma}k}\,\delta\mathbf{\lambda}_{k}=\bar{\epsilon}(\mathbf{\sigma})\,, \tag{6}\]
with \(O_{\mathbf{\sigma}k}=\frac{1}{\psi(\mathbf{\sigma})}\,\frac{\partial\psi(\mathbf{\sigma})}{\partial\mathbf{\lambda}_{k}}\) and \(\bar{O}_{\mathbf{\sigma}k}=(O_{\mathbf{\sigma}k}-\langle O_{\mathbf{\sigma}k}\rangle)/\sqrt{N_{s}}\). In the cases where SR is applied, we use two recently proposed SR variants, namely minimum-step stochastic reconfiguration (minSR) and the SR variant based on a linear-algebra trick by Rende et al. [60]. Both enable the use of a large number of NQS parameters, see Appendix B.2. In the minSR update, Eq. (6) is solved by
\[\delta\mathbf{\lambda}_{k}=O^{\dagger}_{k\mathbf{\sigma}^{\prime}}(T^{-1})_{\mathbf{ \sigma}^{\prime}\mathbf{\sigma}}\,\bar{\epsilon}(\mathbf{\sigma})\,, \tag{7}\]
with \(T=\bar{O}\bar{O}^{\dagger}\)[61]. In the version of Rende et al.,
\[\delta\mathbf{\lambda}_{k}=X_{k\mathbf{\sigma}^{\prime}}(X^{T}X)^{-1}_{\mathbf{\sigma}^{ \prime}\mathbf{\sigma}}\mathbf{f}_{\mathbf{\sigma}}\,, \tag{8}\]
with \(X=\mathrm{Concat}(\mathrm{Re}\,\bar{O},\mathrm{Im}\,\bar{O})\) and \(\mathbf{f}_{\mathbf{\sigma}}=\mathrm{Concat}(\mathrm{Re}\,\bar{\epsilon}(\mathbf{\sigma}),-\mathrm{Im}\,\bar{\epsilon}(\mathbf{\sigma}))\)[60].
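The minSR update of Eq. (7) can be checked numerically: for fewer samples than parameters, it reproduces the minimum-norm least-squares solution of Eq. (6) while only inverting a small \(N_{s}\times N_{s}\) matrix. A sketch with random (illustrative) \(\bar{O}\) and \(\bar{\epsilon}\):

```python
import numpy as np

rng = np.random.default_rng(4)
N_s, N_p = 20, 200                      # few samples, many parameters
O_bar = rng.normal(size=(N_s, N_p)) + 1j * rng.normal(size=(N_s, N_p))
eps_bar = rng.normal(size=N_s) + 1j * rng.normal(size=N_s)

# minSR, Eq. (7): invert the small N_s x N_s matrix T = O_bar O_bar^dagger
T = O_bar @ O_bar.conj().T
delta = O_bar.conj().T @ np.linalg.solve(T, eps_bar)

# it is the minimum-norm solution of O_bar @ delta = eps_bar (Eq. (6))
delta_lstsq = np.linalg.lstsq(O_bar, eps_bar, rcond=None)[0]
assert np.allclose(O_bar @ delta, eps_bar)
assert np.allclose(delta, delta_lstsq)
```

The cost of the linear solve scales with the number of samples rather than the (much larger) number of network parameters, which is what makes the update tractable for large NQS.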
### Fermionic RNN Wave Functions
The architecture introduced above is per se bosonic. When considering fermionic systems, we need to take the antisymmetry of the wave function into account. This antisymmetry is included during the variational Monte Carlo steps when calculating the local energy introduced in Eq. (4). We can expand the local energy to
\[E_{\mathbf{\lambda}}^{\rm loc}(\sigma)=\sum_{\mathbf{\sigma}^{\prime}}\frac{\langle\sigma|\mathcal{H}|\sigma^{\prime}\rangle\,\langle\sigma^{\prime}|\psi_{\mathbf{\lambda}}\rangle}{\langle\sigma|\psi_{\mathbf{\lambda}}\rangle}. \tag{9}\]
In this sum, we multiply each term with a factor \((-1)^{P}\) if \(\sigma^{\prime}\) is connected to \(\sigma\) by \(P\) two-particle permutations, as suggested in Ref. [26]. In order to do so, we take the permutations along the sampling path into account. For the \(t-\)XXZ Hamiltonian under consideration we only need to consider the hopping term for calculating the antisymmetric signs. An example is shown in Fig. 2. This procedure is similar to the implementation of Jordan-Wigner strings as e.g. in Ref. [24].
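For a 1D ordering of the sites, this sign bookkeeping amounts to counting the occupied sites crossed by the Jordan-Wigner string of a hop. A minimal sketch (the occupations are illustrative; this is not the RNN implementation):

```python
import numpy as np

def hop_sign(occ, i, j):
    """Sign (-1)^P picked up when a particle hops from site i to site j,
    with sites ordered along the 1D sampling path (Jordan-Wigner string):
    P = number of occupied sites strictly between i and j."""
    lo, hi = sorted((i, j))
    P = int(np.sum(occ[lo + 1:hi]))
    return (-1) ** P

# 0 = hole, 1 = occupied particle, on the 1D-ordered path
occ = np.array([1, 0, 1, 1, 0, 1, 0, 1])
assert hop_sign(occ, 1, 4) == +1   # two occupied sites (2, 3) in between
assert hop_sign(occ, 0, 4) == +1   # same string; site 1 is empty
assert hop_sign(occ, 4, 6) == -1   # one occupied site (5) in between
```

In 2D, a nearest-neighbor hop in the horizontal direction crosses all sites between \(i\) and \(j\) in the 1D labeling, which is why the effective number of exchanged particles \(P\) can exceed one, as in the example of Fig. 2.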
## II NQS dispersion relations
A lot of information on a physical system under investigation is contained in its dispersion relation \(E(\mathbf{k})\), e.g. in the bandwidth (effective mass) and low-lying elementary excitations relative to the ground state, that determine the physical properties. Hence, it is of high relevance to access \(E(\mathbf{k})\). However, its calculation is in general computationally costly [62], since it typically requires a time-evolution of the state [45].
In this section, we calculate the dispersion relations \(E(\mathbf{k})\) of \(t-\)XXZ models in different dimensions and on different lattice geometries using NQS. Specifically, we use the RNN wave function introduced in Sec. I. However, the method is applicable to any NQS architecture, in contrast to e.g. Ref. [63]. It only requires the possibility to draw samples from the NQS and to calculate the respective probabilities, making the calculation of \(E_{\text{NQS}}(k_{x},k_{y})\) computationally efficient. Furthermore, the scheme can also be combined with spatial symmetries, as discussed further in Sec. III.3. This could help to improve the accuracy, e.g. when using an NQS with built-in translational invariance, but additional symmetries could also be used to calculate e.g. \(m_{4}\) rotational resonances [46].
Figure 2: Left: A typical configuration \(\sigma\) for a \(5\times 5\) system with five holes and ten spin up (red) and spin down (blue) particles each. Sites are labeled in a 1D manner, as denoted by the white numbers. Right: An exemplary hopping process to the nearest neighbor in horizontal direction ends in the configuration \(\sigma^{\prime}\) and effectively exchanges \(P\) particles, here \(P=3\). The respective sign of \(\sigma^{\prime}\) relative to \(\sigma\) is calculated using Eq. (9).
Figure 3: Adding the momentum constraint \(\mathcal{C}_{k_{\mathrm{target}}}\), Eq. (13), on top of the energy minimization \(\mathcal{C}\), Eq. (5) (top right), changes the loss landscape as schematically shown on the bottom right and forces the NQS into a higher energy state with the desired momentum \(k_{\mathrm{target}}\) (top left vs. bottom left).
In order to calculate the dispersion relation from the NQS under consideration, we train our NQS to represent the ground state and then turn on a constraint in the loss function that forces the system to a higher energy state with the respective target momentum, see Fig. 3.
The momentum \(\mathbf{k}_{\text{NQS}}\) of the NQS wave function is calculated from the translation operator \(\hat{T}_{\mathbf{R}}\), which translates a state \(\psi(\mathbf{r})\) by the respective vector \(\mathbf{R}\), i.e. \(\hat{T}_{\mathbf{R}}\psi(\mathbf{r})=\psi(\mathbf{r}+\mathbf{R})\). Furthermore, it can be written as [64]
\[\hat{T}_{\mathbf{R}}=\exp\left(-i\mathbf{R}\cdot\mathbf{\hat{k}}\right)\,, \tag{10}\]
with the momentum operator \(\mathbf{\hat{k}}\). To determine the expectation value \(\mathbf{k}_{\text{NQS}}=(k_{x},k_{y})\) using samples \(\mathbf{\sigma}\) drawn from the NQS wave function, we calculate the expectation value of \(\hat{T}_{\mathbf{R}}\). For example, for a square lattice, this is done by translating all snapshots by \(\mathbf{R}=\mathbf{e}_{x}\) and \(\mathbf{R}=\mathbf{e}_{y}\) with \(|\mathbf{e}_{\mu}|=a\) for lattice distance \(a\) and \(\mu=x,y\). Then, we calculate the respective NQS amplitudes of the translated states, \(P_{\mathbf{\lambda}}(\hat{T}_{\mathbf{e}_{\mu}}\sigma)\), to determine the expectation value
\[\bra{\psi_{\mathbf{\lambda}}}\hat{T}_{\mathbf{e}_{\mu}}\ket{\psi_{\mathbf{ \lambda}}}=\frac{1}{N_{s}}\sum_{\mathbf{\sigma}}\frac{P_{\mathbf{\lambda}}(\hat{T}_{ \mathbf{e}_{\mu}}\sigma)}{P_{\mathbf{\lambda}}(\sigma)}=\exp\left(-i\mathbf{e}_{\mu}\cdot \mathbf{k}_{\text{NQS}}\right), \tag{11}\]
with the last equality due to the translational invariance of the ground state of a square lattice, which we assume to be (approximately) present for our NQS ground states, see also Appendix C. Hence,
\[\mathbf{k}_{\text{NQS}}^{\mu}=\frac{i}{a}\text{log}\bra{\psi_{\mathbf{ \lambda}}}\hat{T}_{\mathbf{e}_{\mu}}\ket{\psi_{\mathbf{\lambda}}}. \tag{12}\]
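Eqs. (11)-(12) translate directly into a sample estimator. The sketch below is a minimal version; the names `momentum_estimate`, `log_psi`, and `translate` are illustrative, and the sign convention is chosen such that an amplitude \(\propto e^{-ikx}\) yields \(+k\):

```python
import numpy as np

def momentum_estimate(samples, log_psi, translate, a=1.0):
    """Estimate k_NQS = (i/a) log <T> from samples drawn from |psi|^2,
    using <T> ~ (1/N_s) sum_sigma psi(T sigma)/psi(sigma), cf. Eqs. (11)-(12).
    `log_psi` returns the complex log-amplitude of a configuration and
    `translate` shifts a configuration by one lattice site."""
    ratios = [np.exp(log_psi(translate(s)) - log_psi(s)) for s in samples]
    T_exp = np.mean(ratios)          # expectation value of the translation operator
    return (1j / a) * np.log(T_exp)  # complex; Im k_NQS should vanish for eigenstates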
Using a sufficiently converged NQS ground state wave function as initial state, we train using VMC with an additional term in the loss function,
\[\mathcal{C}(\mathbf{k}_{\text{target}})=\gamma(t)\sum_{\mu}\left(\mathbf{k}_{\text{ NQS}}^{\mu}-\mathbf{k}_{\text{target}}^{\mu}\right)^{2}, \tag{13}\]
with the RNN momentum \(\mathbf{k}_{\text{NQS}}\) and the target momentum \(\mathbf{k}_{\text{target}}\). We use a prefactor \(\gamma(t)=\gamma_{0}\text{log}_{10}(1+9(t-t_{\text{warmup}})/\tau)\) that is turned on with typically \(\tau=100,\dots,1000\) and \(\gamma_{0}=1,\dots,10\) and gradually lifts all areas in the loss landscape that correspond to a NQS wave function with momentum \(\mathbf{k}_{\text{NQS}}\neq\mathbf{k}_{\text{target}}\), forcing the NQS to a higher energy state at momentum \(\mathbf{k}_{\text{NQS}}=\mathbf{k}_{\text{target}}\), see Fig. 3.
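A minimal sketch of the ramp \(\gamma(t)\) and the penalty of Eq. (13); the default values of \(\tau\) and \(\gamma_{0}\) are taken from the ranges quoted above, and restricting the penalty to the real part of \(\mathbf{k}_{\text{NQS}}\) is a simplification:

```python
import numpy as np

def gamma(t, t_warmup=0, tau=500.0, gamma0=5.0):
    """Ramp gamma(t) = gamma0 * log10(1 + 9 (t - t_warmup)/tau): zero at
    t = t_warmup and equal to gamma0 at t = t_warmup + tau."""
    return gamma0 * np.log10(1.0 + 9.0 * max(t - t_warmup, 0) / tau)

def momentum_penalty(k_nqs, k_target, t, **kw):
    """Additional loss term of Eq. (13), using the real part of k_NQS."""
    diff = np.real(np.asarray(k_nqs)) - np.asarray(k_target)
    return gamma(t, **kw) * np.sum(diff ** 2)
```

The penalty vanishes at \(\mathbf{k}_{\text{NQS}}=\mathbf{k}_{\text{target}}\) and grows as the ramp is turned on, lifting all other regions of the loss landscape as sketched in Fig. 3.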
For \(\mathbf{k}_{\text{target}}\) far away from the ground state momentum, we observe empirically that the imaginary part of \(\mathbf{k}_{\text{NQS}}\) can become large, on the same order as the real part, in particular if the ground state accuracy was not sufficiently high. In these cases, the RNN ends up in states that are not eigenstates of the momentum operator. In order to prevent our RNN wave function from getting trapped in these states, we apply an additional constraint in the loss function, penalizing large imaginary parts of the momentum, \(\text{Im}\,\mathbf{k}_{\text{NQS}}\).
#### II.1 \(t-\)XXZ model in 1D
Fig. 4a shows the dispersion of an antiferromagnetic \(t-\)XXZ chain with 20 sites and \(J_{\pm}=1\), \(J_{z}=4\) and \(t=8\), obtained with a 1D RNN and exact diagonalization (ED). The relative error on the ground state energy at \(k_{x}=0.5\pi\), obtained during a training with 20000 iterations, is shown in Fig. 4b. The energies away from the ground state at \(k_{x}=0.5\pi\), see Fig. 4a, are in relatively good agreement with the exact values from ED. However, at some values of \(k_{x}\neq 0.5\pi\) it can be seen that the RNN gets trapped in local minima close to the ground state. Overall, the RNN succeeds in capturing physical properties like the bandwidth very accurately, revealing the underlying physical excitations:
Figure 4: 1D \(t-\)XXZ system with 20 sites and \(J_{\pm}=1\), \(J_{z}=4\) and \(t=8\): a) Quasiparticle dispersion for a single hole obtained with the RNN (red markers), compared to exact energies from ED (light red lines) and the combined spinon and holon dispersions from Eq. (14) (gray). We average the RNN energy over the last 100 training iterations, each with 200 samples, with the errors denoted by the errorbars. We show the exact low-energy excited states as well. b) Relative error \(\Delta\epsilon=\frac{E_{\text{RNN}}-E_{\text{ED}}}{|E_{\text{ED}}|}\) during the ground state training. a) and b) are obtained using a 1D RNN architecture with \(d_{h}=100\).
For the system under consideration, the bandwidth and the shape of the dispersion in Fig. 4a are a result of spin-charge separation in 1D systems. Spin-charge separation denotes the fact that the motion of a hole in such an AFM spin chain with coupling \(J_{\pm},J_{z}\ll t\) can be approximated by an almost free hole that is only weakly coupled to the spin chain. Hence, the dispersion in Fig. 4 can be approximated by two separate dispersions, i.e. holon and spinon dispersions. Hereby, the holon is the charge excitation, associated with the energy scale \(t\), and the spinon is the spin excitation, associated with the energies \(J_{\pm},J_{z}\). In Ref. [46] it is shown that the combined dispersion is
\[E(k_{x})=-2t\cos(k_{h})+J_{\pm}\cos\left(2(k_{x}-k_{h})\right)+J_{\pm}+J_{z}, \tag{14}\]
where \(k_{h}\) is the momentum of the holon and \(k_{x}=k_{h}+k_{s}\) is the combined momentum of holon and spinon. Eq. (14) is denoted by the gray line in Fig. 4. Again, the agreement with the RNN is relatively good.
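Eq. (14) can be evaluated numerically. The sketch below assumes that the gray line in Fig. 4 corresponds to minimizing Eq. (14) over the holon momenta \(k_{h}\) allowed on the 20-site chain; this minimization is our reading of the figure, not stated explicitly in the text:

```python
import numpy as np

def combined_dispersion(kx, t=8.0, Jpm=1.0, Jz=4.0, L=20):
    """Spinon+holon dispersion of Eq. (14), minimized over the holon
    momenta k_h allowed on an L-site chain (an assumption about how the
    gray line in Fig. 4 is obtained)."""
    kh = 2.0 * np.pi * np.arange(L) / L
    E = -2.0 * t * np.cos(kh) + Jpm * np.cos(2.0 * (kx - kh)) + Jpm + Jz
    return E.min()
```

With the parameters of Fig. 4, the minimum \(E=-2t-J_{\pm}+J_{\pm}+J_{z}=-12\) is reached at \(k_{x}=0.5\pi\), consistent with the ground state momentum discussed above.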
#### II.2 \(t-J\) model on a square lattice
Due to the layered structure of high-T\({}_{c}\) superconductors like cuprates [21] or nickelates [65; 66], the physics of \(t-J\) systems upon doping is particularly interesting in 2D. In Figs. 1 and 5, the quasiparticle dispersions for a single hole on \(10\times 4\) and \(4\times 4\) \(t-J\) and \(t-J_{z}\) lattices are presented. In both cases, Figs. 1b and 5b show that the ground state convergence is better for the \(t-J_{z}\) model, with relative errors on the order of \(\Delta\epsilon\approx 10^{-3}\) for both system sizes, yielding a good agreement with the reference energies from DMRG (\(10\times 4\) system) and ED (\(4\times 4\) system) for all considered energies \(E(k_{x},k_{y})\) away from the ground state. With a relative error of \(\Delta\epsilon\approx 10^{-2}\), the error of the \(t-J\) ground states lies above that of the \(t-J_{z}\) systems, which is also reflected in the accuracy of the dispersion \(E_{\rm RNN}(k_{x},k_{y})\) in Figs. 1a and 5a.
In contrast to the previous section, there is no spin-charge separation in the strict sense in two-dimensional systems. In the case \(t\gg J_{\pm}=J_{z}=:J\) that we consider here (\(t/J=3\)), the mobile dopant can be described by fractionalized spinons and chargons that are confined by a string-like potential, which arises from the distortion of the spin background when the dopant moves through the system [67; 68]. Based on this idea, Laughlin [69] drew the analogy with the 1D Fermi-Hubbard or \(t-J\) systems and suggested that the dispersion in the respective 2D systems can be interpreted in terms of pointlike partons, spinons and chargons, that interact with each other. This _parton picture_ explains that the quasiparticle dispersion for a single hole is dominated by the spinon, with a bandwidth on the order of \(J_{\pm}\), with corrections by the chargon on energy scales of \(t\) [35]. This mechanism also provides the explanation for the flat dispersion of the \(t-J_{z}\) model in contrast to the \(t-J\) model, as captured by the RNN, see Figs. 1 and 5. Despite the small deviations from the dispersions calculated with ED or DMRG, our RNN architecture succeeds in capturing the respective bandwidths of the \(t-J_{z}\) and \(t-J\) models very accurately, allowing to gain valuable insights on the spinon and chargon physics from the RNN dispersions. Furthermore, the fact that node (\(\pi/2,\pi/2\)) and antinode (\(\pi,0\)) are degenerate in the \(4\times 4\) system is correctly reproduced.
Lastly, we would like to mention that there is a small region of suppressed spectral weight near (\(\pi,\pi\)) in the DMRG results of the \(t-J\) system [46]. This suppression yields difficulties for our RNN scheme that are further discussed in Appendix C.
#### II.3 \(t-J\) model on a triangular lattice
On triangular lattices, the observed physical phenomena are distinctly different from the physics of bipartite lattices, due to frustration and the absence of particle-hole symmetry in non-bipartite lattices; among these phenomena is e.g. kinetic frustration [70; 71]. In particular, the underlying constituents upon doping the triangular ladder are not known [71], making the triangular lattice an intriguing system to study. Recent advancements have shown that these lattices can also be studied experimentally using triangular optical lattices [72; 73; 74] and solid state platforms based on Moiré heterostructures [75; 76; 77].
Triangular spin systems have already been studied using RNNs [19]. Here, we consider a triangular \(t-J\) ladder with length \(L_{x}=9\), with the quasiparticle dispersion for a single hole and the learning curves with and without doping shown in Fig. 6.
Figure 5: \(t-J\) (blue) and \(t-J_{z}\) (red) square lattice with \(4\times 4\) sites, \(t/J=3\) and periodic boundaries: a) Quasiparticle dispersion for a single hole obtained with the RNN (blue and red markers), compared to the exact energies from ED (lines). We average the energy over the last 100 training iterations, each with 200 samples, with the respective error bars shown in blue and red. We show the exact low-energy excited states as well. b) Relative error \(\Delta\epsilon\) during the ground state training for \(t-J\) (light blue) and \(t-J_{z}\) (light red) square lattice ground states, with \(d_{h}=100\) and minSR (\(t-J\)) and \(d_{h}=70\) and Adam (\(t-J_{z}\)). Thick lines are averages over 100 training iterations to guide the eye.
As suggested in Ref. [19], we use variational annealing for the training on the triangular lattice, which was shown to improve the performance for frustrated systems like the triangular Heisenberg model [19]. The idea of annealing is to avoid getting stuck in local minima by including an artificial temperature \(T\) in the learning process. In order to do so, the variational free energy of the model,
\[F_{\mathbf{\lambda}}=\langle H_{\mathbf{\lambda}}\rangle-T(t)\cdot S \tag{15}\]
instead of the energy (3) is minimized. Here, the averaged Hamiltonian \(\langle H_{\mathbf{\lambda}}\rangle\) is given by \(\langle H_{\mathbf{\lambda}}\rangle=\sum_{\mathbf{\sigma}}|\psi_{\mathbf{\lambda}}(\sigma )|^{2}H_{\mathbf{\lambda}}(\sigma)\). Furthermore, \(S\) denotes the Shannon entropy
\[S=-\sum_{\mathbf{\sigma}}|\psi_{\mathbf{\lambda}}(\sigma)|^{2}{\rm log}\left[|\psi_{ \mathbf{\lambda}}(\sigma)|^{2}\right]\,. \tag{16}\]
The minimization procedure that we use starts with a warmup phase with a constant temperature \(T_{0}\), before decreasing the temperature \(T(t)=T_{0}(1-(t-t_{\rm warmup})/\tau)\) linearly with the minimization steps \(t\) with \(\tau=10000\) and \(t_{\rm final}=40000\) training steps.
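When the configurations are drawn from \(|\psi_{\mathbf{\lambda}}|^{2}\), the annealing objective of Eqs. (15)-(16) reduces to simple sample averages. The sketch below uses illustrative function names and the schedule parameters quoted above:

```python
import numpy as np

def temperature(t, T0=1.0, t_warmup=1000, tau=10000):
    """Linear annealing schedule T(t) = T0 (1 - (t - t_warmup)/tau),
    constant during warmup and clipped at zero afterwards."""
    if t < t_warmup:
        return T0
    return max(T0 * (1.0 - (t - t_warmup) / tau), 0.0)

def free_energy(log_probs, local_energies, t, **kw):
    """Sample estimate of F = <H> - T(t) S, Eqs. (15)-(16), from
    configurations drawn from |psi|^2: both <H> and the Shannon
    entropy S become sample averages."""
    E = np.mean(local_energies)
    S = -np.mean(log_probs)  # S = -E_{sigma ~ |psi|^2}[ log |psi(sigma)|^2 ]
    return E - temperature(t, **kw) * S
```

At \(T=0\) the objective reduces to the plain energy minimization of Eq. (3).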
In Fig. 6b it can be seen that this procedure yields relatively good results for the ground states, with errors of \(\Delta\epsilon\approx 0.001\) for both \(N_{h}=0\) and \(N_{h}=1\). For the dispersion shown in Fig. 6a, we consider the momentum \(k\) defined along the ladder, as shown in the inset figure. When enforcing \(k\neq 0.444\pi\) away from the ground state, the exact energy gaps from ED to the first excited states strongly decrease, and the RNN gets trapped in these states in most cases, in particular for \(k>0.444\pi\). Furthermore, the errorbars of the enforced momenta are much higher compared to the other lattice geometries that were studied in Figs. 1, 4 and 5, suggesting that the RNN states partly break the translation invariance and hence challenge the momentum optimization scheme.
## III Performance of the RNN ansatz
In this section, we discuss the capability of our bosonic and fermionic RNN ansätze presented in Sec. I to learn and represent the ground states of the \(t-\)XXZ model. For our analysis, we focus on \(t-J\) and \(t-J_{z}\) models on a \(4\times 4\) square lattice.
Figs. 7 and 8 show the relative error for the ground state energies of \(t-J_{z}\) and \(t-J\) models obtained with our RNN ansatz upon doping the half-filled system with \(N_{h}\) holes. Starting from \(N_{h}=0\) in the \(t-J_{z}\) model, the accuracy of the respective Ising ground state is very high in both cases, with relative errors \(\Delta\epsilon=\frac{E_{\rm RNN}-E_{\rm ED}}{|E_{\rm ED}|}\) below the numerical precision. The \(t-J\) model, reducing to the Heisenberg model at \(N_{h}=0\), features spin-flip terms besides the Ising interactions, making the ground state search more difficult. Our RNN reaches a ground state energy error \(\Delta\epsilon\approx 10^{-4}\) after 20000 training steps. For both models, the phase and amplitude distributions shown in Figs. 7b and 8b are relatively simple, with a low variance for the logarithmic amplitude and only two values for the phase, \(0\) and \(\pi\). In particular, the Ising state for the \(N_{h}=0\) case of the \(t-J_{z}\) model features basically only two Néel states with non-zero amplitude (i.e. approx. zero log-amplitudes), shown in Fig. 7b on the very left. Note that when comparing to the literature of ground state representations using RNNs for the Heisenberg model [10; 36], the optimization problem in our setup is more challenging due to the following reasons: (_i_) The RNN that we use has a local Hilbert space dimension of three states instead of two, allowing for all values of \(N_{h}\) in principle. (_ii_) Our RNN learns the sign structure without any bias, i.e. we do not implement the Marshall sign rule already in the RNN, which would only work for \(N_{h}=0\). (_iii_) We do not include the knowledge of spatial symmetries yet, which will be done later in Sec. III.3.
Figure 6: \(t-J\) model on a triangular lattice with \(9\times 2\) sites, \(t/J=3\) and periodic boundaries along \(x\) direction: a) Quasiparticle dispersion for a single hole obtained with the RNN (blue markers), compared to the exact energies from ED (light blue lines). We average the energy over the last 100 training iterations, each with 200 samples, with the error denoted by the blue errorbars. We show the exact low-energy excited states as well. b) Relative error \(\Delta\epsilon\) during the ground state training without doping (orange) and with one hole (blue).
Upon doping, the relative error of the ground states without antisymmetrization of the RNN wave function for the \(t-J_{z}\) model in Fig. 7 is below \(\Delta\epsilon=5\cdot 10^{-4}\) for all considered hole dopings \(1\leq N_{h}\leq 12\). As exemplarily shown for the bosonic \(N_{h}=6\) case in Fig. 7b in blue, the true ground state from exact diagonalization does not have a phase structure in this case and the logarithmic amplitudes are very similar. When including the antisymmetry for the fermionic wave functions, the variance of both phase and amplitude distributions increases, from \(\sigma^{\rm b}_{N_{h}=6}({\rm log}|\psi|^{2})=2.23\) to \(\sigma^{\rm f}_{N_{h}=6}({\rm log}|\psi|^{2})=19.00\), and from \(\sigma^{\rm b}_{N_{h}=6}({\rm Im}\psi)=0\) to \(\sigma^{\rm f}_{N_{h}=6}({\rm Im}\psi)=2.47\), which can be seen by eye when comparing the bosonic and fermionic ED distributions in Fig. 7b. This complicates the ground state search, and the ground state error increases significantly between \(2\leq N_{h}\leq 9\) for the fermionic \(t-J_{z}\) model. At \(N_{h}=10\), when only four particles remain in the system and probably a Fermi-liquid regime is entered, the error decreases again to \(\Delta\epsilon<1\%\) in the fermionic case, coinciding with a lower variance of the exact log-probabilities than for \(N_{h}=6\), \(\sigma^{\rm f}_{N_{h}=10}(\log|\psi|^{2})=9.48\).
The exact log-amplitude and phase distributions from ED for \(N_{h}>0\) of the \(t-J\) model are typically more complicated than for the \(t-J_{z}\) model. For example, for \(N_{h}=4\), the variance of the exact amplitudes becomes very large, \(\sigma^{\rm b}_{N_{h}=4}(\log|\psi|^{2})=15.91\), see Fig. 8b. This yields larger ground state energy errors than for the \(t-J_{z}\) model, and is further complicated when including the antisymmetry in the fermionic case. Again, we make the observation that for larger hole dopings, \(N_{h}\geq 6\) for bosons and \(N_{h}\geq 10\) for fermions, the distributions for phase and amplitude become less complicated than in the low to intermediate doping regime, yielding a higher accuracy of the RNN wave function with errors \(\Delta\epsilon\leq 10^{-4}\) for bosons and \(\Delta\epsilon\leq 10^{-2}\) for fermions in the respective doping regimes.
Our results show that in the low doping regime of the \(t-J\) model, both fermionic and bosonic systems are difficult to learn, see Fig. 8. This suggests that not only the fermionic sign structure is challenging, but also the motion of bosonic holes in the AFM Heisenberg background. When these holes move through the system, the spin background is affected, giving rise to an effective \(J_{1}-J_{2}\) spin model with nearest and next-nearest spin exchange interactions, which is hence more difficult to learn [78]. For the \(t-J_{z}\) model, we observe that, probably due to the lack of spin dynamics resulting from the absence of spin-flip terms, the relative errors are comparatively low in the bosonic case.
Furthermore, for all states with high \(\log|\psi|^{2}\) variance, there are several configurations \(\mathbf{\sigma}\) with a large negative log-amplitude, i.e. \(|\psi(\mathbf{\sigma})|^{2}\approx 0\). This makes an accurate determination of expectation values extremely costly and can affect the training process. For example, in Ref. [79] it was shown that this yields higher variances for the gradients determined by stochastic reconfiguration.
Given these relatively high errors on the ground state energies in some cases, we test potential bottlenecks of our approach in the following, namely: \((i)\) Difficulties in learning either the phase or the amplitude, by considering the partial learning problems separately. \((ii)\) The optimization procedure. \((iii)\) The optimization landscape. \((iv)\) The expressivity of the RNN ansatz, compared to the complexity of the learning problem.
#### III.1 The partial learning problem
One potential bottleneck of our approach is the way the RNN wave function is split into amplitude and phase. In order to test if there are problems with the optimization of the phase or amplitude alone, we consider their learning problems separately as suggested e.g. in Refs. [14, 80].
1. Phase training: We sample from the exact ground state distribution \(|\psi|^{2}\), calculated with ED, and optimize only the phase.
2. Amplitude training: Given the correct phase distribution from ED, we optimize only the logarithmic amplitude to check if the ground-state probability amplitudes can be learned.
Figure 8: RNN representation for ground states of the bosonic and fermionic \(t-J\) model with \(t/J=3\), \(0\leq N_{h}\leq 12\) for a \(4\times 4\) square lattice with open boundaries. a) Relative error for bosons (blue) and fermions (orange). b) Logarithmic amplitude and phase distributions from ED for exemplary bosonic (blue) and fermionic (orange) hole numbers. We use a hidden dimension of \(h_{d}=100\).
Figure 7: RNN representation for ground states of the bosonic and fermionic \(t-J_{z}\) model with \(t/J_{z}=3\), \(0\leq N_{h}\leq 12\) for a \(4\times 4\) square lattice with open boundaries. a) Relative error for bosons (blue) and fermions (orange). b) Logarithmic amplitude and phase distributions from ED for exemplary bosonic (blue) and fermionic (orange) hole numbers. On the very left, the two states \(\sigma_{\text{Néel}}\) with \(\log|\psi(\sigma_{\text{Néel}})|^{2}=0\) are the Néel states. We use a hidden dimension of \(h_{d}=100\).
Fig. 9 shows the results of amplitude and phase trainings (dark and light blue), compared to the full training of both amplitude and phase (red). For all considered systems, the results of the partial trainings are closer to the exact ground state, e.g. for open boundaries and \(N_{h}=1\), the relative error decreases from \(\Delta\epsilon=0.0147(37)\) to \(\Delta\epsilon=0.0040(30)\) for the amplitude training and to \(\Delta\epsilon=0.0039(33)\) for the phase training. However, for all considered cases we observe the same problem as in the full training: the RNN gets stuck in a plateau that survives up to 20000 training steps. Although the relative error of the plateau decreases when considering the partial learning problems, the improvement is surprisingly low given the amount of information that is added to the training. Furthermore, whether the amplitude or the phase training is more problematic remains unclear: even for the phase training, for which the training samples are generated from the exact distribution \(|\psi|^{2}\) calculated with ED, the improvement is not significantly larger than for the amplitude training. This is in agreement with the results of Bukov et al. [14].
#### iv.2.2 Comparison of optimizers
As a next test, we compare the optimization results of different optimizers in Fig. 10a, namely stochastic gradient descent (SGD), adaptive methods like AdaBound [81] and Adam [57], and more advanced methods such as Adam+Annealing [19] and the recently developed variant of stochastic reconfiguration (SR), minimum-step SR (minSR) [61]. We show the optimization results for the \(t-J_{z}\) model on the left and for the \(t-J\) model on the right, both for \(N_{h}=1\).
Typically, Adam is used for RNN wave function optimization [10, 19, 36, 50], adapting the learning rate in each VMC update. For 200 samples used in each optimization step, Adam yields relative errors on the order of \(\Delta\epsilon\approx 10^{-3}\) for the \(t-J_{z}\) model and \(\Delta\epsilon\approx 10^{-2}\) for the \(t-J\) model. AdaBound, which employs dynamic bounds on the learning rates and thereby yields a gradual transition from Adam to SGD during the training, gives similar results.
Another modification of the Adam training is the use of variational annealing as introduced in Sec. II.3, shown to improve the performance for frustrated systems [19]. The minimization procedure that we use starts with a warmup phase with a constant temperature \(T_{0}=1\), before decreasing the temperature \(T(t)=T_{0}(1-(t-t_{\text{warmup}})/\tau)\) linearly with the minimization steps \(t\). Typically, we use \(\tau=5000\) and stop the training after \(t_{\text{final}}=20000\) training iterations, but tests up to \(\tau=20000\) and \(t_{\text{final}}=40000\) did not yield any improvements. Fig. 10a shows that for the square lattice, the use of annealing does not bring any advantage within the errorbars.
Figure 10: a) Testing different optimizers: Optimization results for the \(t-J_{z}\) model (left) and the \(t-J\) model (right) on a \(4\times 4\) square lattice with \(t/J_{z}=3\), both for \(N_{h}=1\) and periodic boundaries, using SGD, AdaBound, Adam+Annealing and minSR, and 200 samples (1000 samples for minSR) in each VMC step. b) Eigenvalues of the \(T\)-matrix (minSR algorithm [61], solid lines) and of the \(X^{T}X\) matrix (SR variant of Rende et al. [60], dotted lines) before the training, for the \(4\times 4\) \(t-J\) system with one hole and open boundaries and \(h_{d}=30,70\), using 1000 samples.
Figure 9: Partial training, i.e. separate amplitude (dark blue) and phase (light blue) training, for ground states of the \(t-J\) model on a \(4\times 4\) square lattice with \(t/J=3\), open boundaries and \(N_{h}=0\) (top) and \(N_{h}=1\) (bottom), compared to the full training in red. We use a hidden dimension of \(h_{d}=70\).
Lastly, we apply minSR, a recently developed variant of SR [61], as introduced in Sec. I.1. For a stable training, we ensure non-exploding gradients by adding an offset \(\delta(t)\) to the diagonal of the \(T\)-matrix, with \(\delta(t)\) exponentially decaying from 1 to \(10^{-10}\). After determining the gradients using Eq. (7), we apply the Adam update rule, which we empirically find to perform better than the GD update. Moreover, since it is crucial to use enough samples for a sufficiently good approximation of the gradients in SR, typically more samples than for the other optimization routines are needed. Here, we use 1000 samples in each minSR update and find that the results for the one-hole \(t-J\) ground state errors improve below the values obtained with Adam, see Fig. 10a on the right. However, we show in Appendix B.2 that a comparison with Adam using the same number of samples does not lead to a conclusive result as to which optimization routine is better, similar to the SR results in Ref. [14].
The reason behind this can be understood when considering the spectrum of the \(T\)-matrix of the minSR algorithm: Similar to the results of Ref. [82] for the \(S\)-matrix of the SR algorithm, Fig. 10b shows that the eigenvalues of \(T\), \(\lambda_{i}\), decrease extremely rapidly, in particular at the beginning of the training, indicating a very flat optimization landscape. This is a typical problem of autoregressive architectures [82] and causes uncontrolled, high values of \(T^{-1}\) and consequently also of the gradients \(\delta\theta\), see Eq. (7). Furthermore, the shape of the spectrum does not have any feature that indicates that the spectrum could be cut off at a specific eigenvalue, making a regularization very difficult. Hence, the diagonal offset \(\delta(t)\) must be chosen relatively large, yielding parameter updates that are very similar to the plain vanilla Adam optimization as long as \(\delta(t)\) is larger than many of the \(T\)-eigenvalues. The spectrum of the \((X^{T}X)\) matrix of the SR variant by Rende et al. [60], see Eq. (8), exhibits the same problem.
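A minimal sketch of the regularized minSR solve in sample space; the function names are illustrative, the exact decay schedule of \(\delta(t)\) is an assumption, and the Adam post-processing of the gradients mentioned in the text is omitted here:

```python
import numpy as np

def delta_offset(t, t_final=20000, d0=1.0, d1=1e-10):
    """Diagonal offset delta(t), decaying exponentially from d0 to d1
    over the training (an assumed schedule matching the quoted endpoints)."""
    return d0 * (d1 / d0) ** (min(t, t_final) / t_final)

def minsr_update(O, eps, t, lr=1e-2):
    """minSR-style parameter update: solve in the N_s-dimensional sample
    space via the T-matrix T = O O^dagger (cf. Eq. (7)), regularized by
    delta(t) on the diagonal.  O is the (N_s x N_p) centered Jacobian of
    log psi and eps the vector of centered local energies."""
    T = O @ O.conj().T
    T = T + delta_offset(t) * np.eye(T.shape[0])
    return -lr * (O.conj().T @ np.linalg.solve(T, eps))
```

For large \(\delta(t)\), the update approaches a plain gradient step scaled by \(1/\delta\), which is the behavior discussed above when \(\delta(t)\) exceeds many of the \(T\)-eigenvalues.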
When comparing the results for different hidden dimensions, e.g. for minSR in Fig. 10a (right), this may suggest that a hidden dimension \(h_{d}>100\) could in principle improve the results further. However, we will show in Sec. III.4 that for such a large number of parameters it is even possible, by restricting to a fixed number of holes and hence reducing the Hilbert space dimension to \(\ll 3^{N_{\text{sites}}}\), to encode the wave function using exact methods.
#### III.3 Spatial symmetries
The RNN ansatz we use has implemented \(U(1)=U(1)_{\hat{N}}\times U(1)_{\hat{S}_{z}}\) symmetry, i.e. conserved total particle number and total magnetization [24, 10]. This is done by calculating the current particle number \(N_{p}(i)\) (magnetization \(S_{z}(i)\)) after the \(i\)-th RNN cell during the sampling process and, once \(N_{p}(i)=N_{\text{target}}\) (\(S_{z}(i)=S_{z,\text{target}}\)) is reached, assigning a zero conditional probability to adding further particles at all sites \(j>i\) that are considered afterwards, see Appendix A.3. As a next test, we employ additional spatial symmetries: For a symmetry operation \(\mathcal{T}\) according to the lattice symmetry, we know that
\[|\psi(\sigma)|^{2}=|\psi(\mathcal{T}\sigma)|^{2} \tag{17}\]
for the exact ground state. For the rotational \(C_{4}\) symmetry of the square lattice, we employ this constraint \((i)\) in the training, by implementing it in the cost function, or \((ii)\) in the RNN ansatz as in Ref. [10].
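Before turning to these rotational symmetries, the \(U(1)\) masking during autoregressive sampling described above can be sketched as follows. This is a simplified illustration (local states ordered as hole, up, down are our convention; the feasibility check for holes is our addition to keep the target sector reachable):

```python
import numpy as np

def mask_probs(p, n_up, n_dn, sites_left, N_up, N_dn):
    """Project conditional probabilities p = (p_hole, p_up, p_down) onto
    the U(1) x U(1) sector with N_up spin-up and N_dn spin-down particles:
    states that can no longer lead to the target quantum numbers get zero
    probability, and the rest is renormalized."""
    p = np.array(p, dtype=float)
    if n_up >= N_up:                 # target number of up spins reached
        p[1] = 0.0
    if n_dn >= N_dn:                 # target number of down spins reached
        p[2] = 0.0
    needed = (N_up - n_up) + (N_dn - n_dn)
    if needed >= sites_left:         # remaining sites must host all particles
        p[0] = 0.0
    return p / p.sum()
```

Weights feeding into the masked entries are never updated, which is the origin of the unused parameters discussed in Sec. III.4.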
The constraint in the cost function that we use in \((i)\) is calculated by rotating all samples drawn from \(|\psi_{\mathbf{\lambda}}|^{2}\) according to \(C_{4}\) in each VMC step, calculating \(p_{\mathbf{\lambda}}(\mathcal{T}_{i}\sigma)=|\psi_{\mathbf{\lambda}}(\mathcal{T}_{i}\sigma)|^{2}\) for all \(\{\mathcal{T}_{i}\}_{i}\), and adding the squared difference \(\gamma(t)\sum_{\mathbf{\sigma}}\left(|\psi_{\mathbf{\lambda}}(\sigma)|^{2}-|\psi_{\mathbf{\lambda}}(\mathcal{T}_{i}\sigma)|^{2}\right)^{2}\) with a prefactor \(\gamma(t)=\gamma_{0}\text{log}_{10}(1+9(t-t_{\text{warmup}})/\tau)\) to the cost function. Typically, we use long time constants on the order of \(\tau=5000\) steps.
For \((ii)\), we assign
\[p_{\mathbf{\lambda}}(\sigma)=\frac{1}{|\{\mathcal{T}_{i}\}_{i}|}\sum_{\mathcal{T}\in\{\mathcal{T}_{i}\}_{i}}|\psi_{\mathbf{\lambda}}(\mathcal{T}\sigma)|^{2} \tag{18}\]
for all operations \(\mathcal{T}_{i}\) in the symmetry group, similar to Ref. [10].
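Eq. (18) amounts to averaging the RNN probability over the symmetry group. A minimal sketch for \(C_{4}\) on a square-lattice configuration stored as a 2D array (illustrative names; `prob` stands in for \(|\psi_{\mathbf{\lambda}}(\cdot)|^{2}\)):

```python
import numpy as np

def c4_rotations(config):
    """All four C4 rotations (including the identity) of a square-lattice
    configuration given as a 2D array."""
    return [np.rot90(config, k) for k in range(4)]

def symmetrized_prob(prob, config):
    """C4-symmetrized probability of Eq. (18): average |psi(T sigma)|^2
    over the symmetry group, where `prob` maps a configuration to
    |psi_lambda(sigma)|^2."""
    rots = c4_rotations(config)
    return sum(prob(r) for r in rots) / len(rots)
```

By construction, the symmetrized probability is invariant under rotating the input configuration, so Eq. (17) holds exactly for the symmetrized ansatz.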
The optimization results using \((i)\) and \((ii)\) are shown in Fig. 11 for the \(t-J\) and \(t-J_{z}\) model on a \(4\times 4\) square lattice. It can be seen that constraining the RNN wave function directly via \((ii)\) is more successful than via the cost function \((i)\): Using \((ii)\), we obtain an order of magnitude lower relative errors compared to the results without spatial symmetries for the \(t-J_{z}\) model. The weaker performance of \((i)\) possibly results from the fact that the additional constraint on the symmetry leads to barriers in the loss landscape in the regions where the symmetry is violated. Even when increasing the symmetry constraint gradually during the training, as described above, these barriers can prevent getting close to the minimum.
The \(t-J\) model results do not improve significantly for either symmetry implementation \((i)\) or \((ii)\), with an error on the order of \(\Delta\epsilon\approx 10^{-2}\) with and without spatial symmetries. Hence, we conclude that applying symmetries only helps to improve the accuracy if the ground state can already be learned sufficiently well, as for the \(t-J_{z}\) model.
Figure 11: Relative error for \(t-J\) (dark blue) and \(t-J_{z}\) (light blue) models on a \(4\times 4\) square lattice with one hole, \(t/J_{z}=3\) and periodic boundaries, for RNNs with implemented \(U(1)=U(1)_{\hat{N}}\times U(1)_{\hat{S}_{z}}\) symmetry, \(U(1)\) and \(C_{4}\) symmetry, implemented via the cost function and the RNN ansatz. We use a hidden dimension of \(h_{d}=70\). For the \(t-J_{z}\) model, we provide the relative errors as numbers in light blue.
For systems with sufficiently high convergence, also rotational symmetries like \(s\), \(p\) or \(d\)-wave symmetries could be enforced to probe the competition between the ground state energies in the respective symmetry sectors [83], which is highly relevant for the study of high-T\({}_{c}\) superconductivity. In addition, low-energy excited states for these symmetry sectors could be calculated by making use of the dispersion scheme from Sec. II, e.g. \(m_{4}\) rotational spectra [47].
#### III.4 Complexity of the learning problem
Lastly, we consider the complexity of our learning problem and compare it to the expressivity of our RNN ansatz in terms of the number of parameters that are encoded in the RNN. In Fig. 12 on the left, we show the number of parameters used in the RNN ansatz for the \(4\times 4\) \(t-J\) square lattice for hidden dimensions \(30\leq h_{d}\leq 100\). The number of parameters actually used is slightly lower than the number of parameters encoded in the ansatz (gray circles on the left). This is due to the way we encode the \(U(1)\) symmetry in our approach, resulting in a small fraction of weights that are never updated since the respective probabilities are set to zero to obey the \(U(1)\) symmetry, see Appendix A.3. Furthermore, we show the dimension of the Hilbert space for the same system, \(3^{16}\), in black, i.e. the dimension of the distribution that needs to be learned by our RNN. For the small system size that we consider in Fig. 12, the Hilbert space dimension is two orders of magnitude larger than the number of RNN parameters. For the \(10\times 4\) system in Fig. 1, however, our RNN representation has \(13\) orders of magnitude fewer parameters than the Hilbert space of dimension \(3^{40}\) that is learned.
The Hilbert space dimension \(3^{N_{\text{sites}}}\) that was considered so far allows for three states per site - spin up, down and hole -, i.e. for a variable number of holes in the system. For a fixed number of holes, the number of parameters to describe the exact state can be reduced to the Hilbert space dimension of the spin system multiplied by all combinations of how holes can be distributed on the lattice. This yields a much lower number of parameters than \(3^{N_{\text{sites}}}\), as shown by the blue lines in Fig. 12 for \(1\leq N_{h}\leq 4\). In fact, for \(N_{h}=1\) our RNNs encode even more parameters than this exact parameterization when \(h_{d}>70\). This reveals one main problem of our RNN ansatz, namely the flexibility to encode any number of holes and hence a \(3^{N_{\text{sites}}}\)-dimensional parameter space. For future studies, we envision an RNN ansatz for a fixed number of holes, reducing the dimension of the parameter space that needs to be learned and hence facilitating the learning problem.
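These counts are easy to reproduce. The sketch below (plain Python) compares the full Hilbert space dimension of the \(4\times 4\) system with the exact fixed-hole parameterization, and also checks the \(d=2\) vs. \(d=3\) scaling quoted for the \(10\times 4\) system; the lattice sizes and hole numbers match the examples in the text.

```python
from math import comb

N_SITES = 16  # 4 x 4 lattice

# Hilbert space with a variable hole number: 3 local states per site
dim_full = 3 ** N_SITES  # 43,046,721

def dim_fixed_holes(n_sites: int, n_holes: int) -> int:
    """Exact parameter count for a fixed hole number: choose the hole
    positions, then assign a spin-1/2 to every remaining site."""
    return comb(n_sites, n_holes) * 2 ** (n_sites - n_holes)

print(f"dim(H), variable holes: {dim_full:,}")
for n_h in range(1, 5):
    print(f"N_h = {n_h}: {dim_fixed_holes(N_SITES, n_h):,}")

# For the 10 x 4 system, going from a spin (d = 2) to a t-J (d = 3) local
# Hilbert space enlarges the state space by roughly seven orders of magnitude
ratio = 3 ** 40 / 2 ** 40
print(f"3^40 / 2^40 = {ratio:.2e}")
```

The last line confirms the "seven orders of magnitude" statement made for the comparison of local dimensions in Fig. 12 on the right.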
Lastly, we would like to point out that the learning problem that we consider here is more complex than for the spin systems that are typically considered with this architecture [32; 33; 36; 10], as can be seen when comparing the Hilbert space dimensions for local dimension \(d=2\) (spin systems) vs. \(d=3\) (the \(t-J\) model) in Fig. 12 on the right. For larger systems, this difference increases, e.g. for the \(10\times 4\) system in Fig. 1 the Hilbert space dimension increases by seven orders of magnitude when going from a spin to a \(t-J\) system (with a flexible number of holes). This problem becomes even more pronounced when the Fermi-Hubbard model with local dimension \(d=4\) is considered.
## IV Summary and outlook
To conclude, we present a neural network architecture, based on RNNs [10], to simulate ground states of the fermionic and bosonic \(t-J\) model upon finite hole doping. We show that, despite many challenges due to the increased complexity of the learning problem compared to spin systems, the RNN succeeds in capturing remarkable physical properties like the shape of the dispersion, indicating the dominating emergent excitations of the systems. In order to calculate the dispersion, we present a new method that can be used with any NQS ansatz and for any lattice geometry, and we map out the quasiparticle dispersion using the RNN ansatz for several different lattice geometries, including 1D and 2D systems. Moreover, it enables an extremely efficient calculation of dispersion relations compared to conventional methods like DMRG [62], which usually require a time-evolution of the state [45]. The dispersion scheme yields a good agreement when comparing to exact diagonalization or DMRG results, and is expected to perform even better for a better ground state convergence. In principle, it can also be combined with a translationally symmetric NQS ansatz to improve the accuracy. Furthermore, the scheme could be combined with additional symmetries, e.g. rotational symmetries, enabling the calculation of \(m_{4}\) rotational spectra [84].
Figure 12: Number of parameters for the exact wave function of a \(4\times 4\) system compared to the RNN ansatz. Left: We compare the number of parameters of the exact wave function using \(U(1)_{\hat{N}}\times U(1)_{\hat{S}_{z}}\) symmetry for \(0\leq N_{h}\leq 4\) (blue) to the Hilbert space dimension \(3^{16}\) that we want to learn with the RNN ansatz. The number of parameters of the RNN ansatz with hidden dimension \(30\leq h_{d}\leq 100\) is denoted in gray. Right: Hilbert space dimension for a local dimension of 2 (Heisenberg model), 3 (\(t-J\) model) and 4 (Fermi-Hubbard model).
In addition, we provide a detailed discussion on the challenges that are encountered during training our \(t-J\) RNN architecture, namely \((i)\) the enlarged local Hilbert space with three states for spin up particles, spin down particles and holes, respectively, yielding \(3^{N_{\text{sites}}}\) possible configurations instead of \(2^{N_{\text{sites}}}\) as for spin systems; \((ii)\) the significant number of wave function amplitudes that are close to zero; \((iii)\) the learning plateau associated with a local minimum that is encountered for all considered optimization routines - including annealing [19], minimum-step stochastic reconfiguration (minSR) [61] and the recently proposed SR variant based on a linear algebra trick [60] - and the fact that SR algorithms have problems with autoregressive architectures [82]; \((iv)\) the complicated interplay between phase and amplitude optimization [14]; \((v)\) the difficulty of implementing constraints on the symmetry sector under consideration, e.g. the particle number, magnetization and spatial symmetries, directly into the RNN architecture [10, 36]. Remarkably, all of these challenges are inherent to the simulation of both bosonic and fermionic systems. Our results indicate that the bottleneck for simulating fermionic spinful systems is the training and not the expressivity of the ansatz, and point the way to possible improvements concerning the ansatz and the training procedure.
_Code availability.-_ The code and the data used for this paper is provided here: [https://github.com/HannahLange/Fermionic-RNNs/](https://github.com/HannahLange/Fermionic-RNNs/).
_Acknowledgements.-_ We thank Ao Chen, Ejaaz Merali, Estelle Inack, Fabian Grusdt, Lukas Vetter, Markus Heyl, Markus Schmitt, Mohammed Hibat-Allah, Moritz Reh, Roeland Wiersema, Roger Melko, Schuyler Moss, Stefan Kienle, Stefanie Czischek and Tizian Blatz for helpful and inspiring discussions. We acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2111 - 390814868 and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement no 948141) -- ERC Starting Grant SimUcQuam. HL acknowledges support by the International Max Planck Research School. JC acknowledges support from the Natural Sciences and Engineering Research Council (NSERC) and the Canadian Institute for Advanced Research (CIFAR) AI chair program. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute www.vectorinstitute.ai/#partners.
|
2309.02650 | Convolutional Neural Network-based RoCoF-Constrained Unit Commitment | The fast growth of inverter-based resources such as wind plants and solar
farms will largely replace and reduce conventional synchronous generators in
the future renewable energy-dominated power grid. Such transition will make the
system operation and control much more complicated; and one key challenge is
the low inertia issue that has been widely recognized. However, locational
post-contingency rate of change of frequency (RoCoF) requirements to
accommodate significant inertia reduction has not been fully investigated in
the literature. This paper presents a convolutional neural network (CNN) based
RoCoF-constrained unit commitment (CNN-RCUC) model to guarantee RoCoF stability
following the worst generator outage event while ensuring operational
efficiency. A generic CNN based predictor is first trained to track the highest
locational RoCoF based on a high-fidelity simulation dataset. The RoCoF
predictor is then formulated as MILP constraints into the unit commitment
model. Case studies are carried out on the IEEE 24-bus system, and simulation
results obtained with PSS/E indicate that the proposed method can ensure
locational post-contingency RoCoF stability without conservativeness. | Mingjian Tuo, Xingpeng Li | 2023-09-06T01:22:10Z | http://arxiv.org/abs/2309.02650v1 | # Convolutional Neural Network-based RoCoF-Constrained Unit Commitment
###### Abstract
The fast growth of inverter-based resources such as wind plants and solar farms will largely replace and reduce conventional synchronous generators in the future renewable energy-dominated power grid. Such transition will make the system operation and control much more complicated; and one key challenge is the low inertia issue that has been widely recognized. However, locational post-contingency rate of change of frequency (RoCoF) requirements to accommodate significant inertia reduction has not been fully investigated in the literature. This paper presents a convolutional neural network (CNN) based RoCoF-constrained unit commitment (CNN-RCUC) model to guarantee RoCoF stability following the worst generator outage event while ensuring operational efficiency. A generic CNN based predictor is first trained to track the highest locational RoCoF based on a high-fidelity simulation dataset. The RoCoF predictor is then formulated as MILP constraints into the unit commitment model. Case studies are carried out on the IEEE 24-bus system, and simulation results obtained with PSS/E indicate that the proposed method can ensure locational post-contingency RoCoF stability without conservativeness.
Convolutional neural network, Deep learning, Frequency stability, Low-inertia power systems, Rate of change of frequency, Unit commitment. +
Footnote †: publication: Submitted to the 23rd Power Systems Computation Conference (PSCC 2024).
## I Introduction
Integration of converter-based resources such as wind plants and solar farms helps realize the decarbonization of electricity generation. However, the increased penetration of renewable energy sources (RES) imposes great challenges on maintaining power system frequency stability for reliable system operations [1]. Traditionally, synchronous generators play an important role in regulating frequency excursion and rate of change of frequency (RoCoF) after a disturbance as they ensure slower frequency dynamics [2]. The conventional source of system kinetic energy, provided by rotating masses, decreases significantly as synchronous generators are retired and replaced. This makes the system more susceptible to large fluctuations in load or generation [3]. It is also reported that the lack of system inertia has already caused wind curtailment in Ireland [4].
With high penetration levels of renewable energy sources, transmission system operators (TSOs) pay more attention to the increased frequency stability challenges. There is considerable interest in incorporating post-contingency rate of change of frequency (RoCoF) constraints into the traditional security-constrained unit commitment (SCUC) model. Such incorporation helps determine the minimum requirement for synchronous inertia online and ensure the stability of system frequency [5].
EirGrid has introduced a synchronous inertial response constraint to ensure that the available inertia is always above a minimum limit of 23 GWs in the Ireland grid [6]. The Swedish TSO once ordered one of its nuclear power plants to reduce output by 100 MW to mitigate the risk of loss of that power plant considering system frequency stability [7]. References [8] and [9] both implement a system frequency stability constrained multiperiod SCUC model which incorporates frequency related constraints derived from the uniform frequency dynamic model. In [10], the uniform frequency response model is extended by including converter-based control, and constraints on RoCoF are then derived and incorporated into SCUC formulations. Yang proposes a data-driven distributionally robust chance-constrained approach which optimizes the SCUC problem while limiting the risk of frequency related constraint violations [11]. The work in [12] studies a mixed analytical-numerical approach based on multi-regions and investigates a model combining the evolution of the center of inertia and certain inter-area oscillations. However, these approaches oversimplify the problem as they neglect locational frequency dynamics and the oscillation within the system. The actual need for frequency ancillary services would be subsequently underestimated.
Reference [13] considers the geographical discrepancies and connectivity impacts on nodal frequency dynamics. However, simulation results show that these physical model-based approaches may fail to handle higher order characteristics and nonlinearities in system frequency response. Model approximation may introduce large errors
into the derived constraints, resulting in more conservative solutions. Recently, a pioneering data-driven approach has been proposed in [14], which incorporates neural network-based frequency nadir constraints against the worst-case contingency into RoCoF-constrained unit commitment (RCUC). However, a power system is an interconnected network of generators and loads which has embedded spatial information. Traditional methods neglect the spatial information embedded in the system, and RoCoF predictions for each node were not considered.
In this paper, we propose a novel convolutional neural network (CNN)-based RCUC (CNN-RCUC) model to address the aforementioned gaps. A unique feature of this model is that its solution can ensure system frequency stability after the occurrence of the most severe contingency event. CNN-based RoCoF predictor is first trained using system operation data, which can reflect geographical discrepancies and locational frequency dynamics due to non-uniform distribution of inertia. The major contributions of this work are summarized as follows:
* First, to enhance the data-driven RCUC model, we incorporate locational post-contingency RoCoF requirements as constraints. Unlike existing data-driven methods that solely rely on fully connected layer based deep neural network (DNN), we introduce a CNN-based RoCoF predictor that utilizes spatial information processing. This predictor effectively monitors locational RoCoF, even in post-contingency scenarios where frequency oscillations must be taken into account.
* Secondly, we demonstrate the dynamic model of a power system that includes a realistic number of generators located at various nodes; heterogeneous responses of each node are then derived. Instead of creating random grid operation data that may not be reasonable or realistic, we implemented a model-based approach that enforces system locational frequency security to efficiently generate realistic data samples which covers a vast range of operating conditions. It can also eliminate samples with post-contingency instability issues.
* Last, the condition of disturbance propagation is considered [13]. Methods focusing on the initial RoCoF value may fail to capture the highest locational RoCoF value during the oscillation. The proposed CNN based RoCoF predictor can track the highest RoCoF within the period following the contingency and secure frequency stability.
The remainder of this paper is organized as follows. In section II, the frequency dynamic model and data-driven approach are demonstrated. Section III details the definition of input feature vector and the architecture of convolutional neural network based RoCoF predictor. Section IV describes the formulation of proposed CNN-RCUC model and the linearization of CNN forward propagation. The simulation results and analysis are presented in Section V. Section VI presents the concluding remarks and future work.
## II System Frequency Dynamics and Overview of Solution
### System Frequency Dynamics
The power system's frequency is a crucial measure that signifies the stability of the system. In the past, frequency was usually interpreted either as the frequency of a single bus or via the center of inertia (COI) representation. The total inertia of the power system was regarded as the sum of the kinetic energy stored in all the generators synchronized with the system [15].
\[E_{sys}=\sum_{i=1}^{N}2H_{i}S_{h_{i}} \tag{1}\]
where \(S_{h_{i}}\) is the generator rated power in MVA and \(H_{i}\) denotes the inertia constant of the generator which is usually provided by the generator manufacturer.
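Equation (1) is straightforward to evaluate; the snippet below computes \(E_{sys}\) for a small illustrative fleet. The ratings and inertia constants are made-up example values, not taken from the paper's test system.

```python
# Total synchronous kinetic energy E_sys = sum_i 2 * H_i * S_h,i  (Eq. 1).
fleet = [
    # (rated power S_h [MVA], inertia constant H [s])
    (500.0, 4.0),
    (300.0, 3.5),
    (200.0, 6.0),
]

e_sys = sum(2.0 * h * s for s, h in fleet)   # MW*s
print(f"E_sys = {e_sys:.0f} MWs")            # 8500 MWs

# Decommitting a unit directly removes its share of kinetic energy, which
# is why unit commitment decisions drive post-contingency RoCoF.
e_without_largest = e_sys - 2.0 * 4.0 * 500.0
print(f"without the 500 MVA unit: {e_without_largest:.0f} MWs")  # 4500 MWs
```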
When a disturbance occurs in the electrical power system, the dynamics between power and frequency can be modeled by the swing equation described in (2) with \(M=2H\) denoting the normalized inertia constant and \(D\) denoting damping constant respectively [16].
\[\Delta P_{m}-\Delta P_{e}=M\frac{\partial\Delta\omega}{\partial t}+D\Delta\omega \tag{2}\]
where \(\Delta P_{m}\) is the total change in mechanical power and \(\Delta P_{e}\) is the total change in electric power of the power system. \(\partial\Delta\omega/\partial t\) is commonly known as RoCoF. However only considering system uniform metrics would neglect the geographical discrepancies in locational frequency dynamics on each bus, which has imposed risks on power system stability [12]. The topological information and system parameters can be embedded into the model by using swing equation on each individual bus to describe the oscillatory behavior within the system,
\[m_{i}\ddot{\theta_{i}}+d_{i}\dot{\theta_{i}}=P_{n_{i},i}-\sum_{j=1}^{n}b_{i,j }\sin(\theta_{i}-\theta_{j}) \tag{3}\]
where \(m_{i}\) and \(d_{i}\) denote the inertia coefficient and damping ratio for node \(i\) respectively, while \(P_{n_{i},i}\) denotes the power input. A network-reduced model with \(N_{g}\) generator buses can be obtained by eliminating passive load buses via Kron reduction [17]. By focusing on the network connectivity's impact on the power system nodal dynamics, the phase angle \(\theta\) of generator buses can be expressed by the following dynamic equation [18],
\[M\ddot{\theta}+D\dot{\theta}=P-L\theta \tag{4}\]
where \(M=\text{diag}\left(\left\{m_{i}\right\}\right)\), \(D=\text{diag}\left(\left\{d_{i}\right\}\right)\); for the Laplacian matrix \(L\), its off-diagonal elements are \(l_{ij}=-b_{ij}V_{i}^{(0)}V_{j}^{(0)}\), and its diagonal elements are \(l_{ii}=\sum_{j=1,j\neq i}^{N}b_{ij}V_{i}^{(0)}V_{j}^{(0)}\). The RoCoF value at bus \(i\) can then be derived,
\[f_{RoCoF,i}(t_{0})=\frac{\Delta P\,e^{-\frac{\gamma t_{0}}{2}}}{2\pi m}\sum_{\sigma}\Psi_{i}^{\sigma}\,\psi_{\sigma}(t_{0}) \tag{5}\]

where the sum runs over the oscillatory modes \(\sigma\) of the network, \(\Psi_{i}^{\sigma}\) denotes the participation of bus \(i\) in mode \(\sigma\), and \(\psi_{\sigma}(t_{0})\) is the corresponding damped oscillation term.
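The nodal model (4) can also be integrated numerically to see how a disturbance propagates through the network and why the highest locational RoCoF must be tracked over time rather than only at \(t=0\). The sketch below uses an illustrative 3-bus system; all numbers are assumptions for the example, not the paper's test case.

```python
import numpy as np

# Second-order network model (Eq. 4): M ddtheta + D dtheta = P - L theta.
M = np.diag([6.0, 4.0, 2.0])        # per-bus inertia
D = np.diag([0.8, 0.6, 0.4])        # per-bus damping
B = np.array([[0., 5., 2.],
              [5., 0., 3.],
              [2., 3., 0.]])        # effective line susceptances (p.u.)
L = np.diag(B.sum(axis=1)) - B      # network Laplacian
dP = np.array([-0.3, 0.0, 0.0])     # disturbance: generation loss at bus 0

dt, steps = 1e-3, 5000
theta = np.zeros(3)
omega = np.zeros(3)
Minv = np.linalg.inv(M)

max_abs_rocof = np.zeros(3)
for _ in range(steps):
    domega = Minv @ (dP - D @ omega - L @ theta)  # per-bus RoCoF (rad/s^2)
    max_abs_rocof = np.maximum(max_abs_rocof, np.abs(domega))
    omega += dt * domega
    theta += dt * omega

# The disturbed bus sees the largest RoCoF immediately (|dP|/m = 0.05 here),
# while remote buses ramp up later as the disturbance propagates, so the
# maximum must be tracked over the whole post-contingency window.
print("initial |RoCoF| at bus 0:", abs(dP[0] / M[0, 0]))
print("max |RoCoF| per bus:     ", np.round(max_abs_rocof, 4))
```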
the input features \(x_{i}\), which helps improve the performance of machine learning-assisted SCUC.
The CNN model used in this study is illustrated in Fig. 1, created using the NN-SVG tool. The proposed model consists of two types of layers, namely convolutional and fully connected layers. The convolutional layers aim to learn feature representations of the input. Each convolutional layer is composed of several convolutional kernels (indexed by \(\xi\)) which are used to compute different feature maps. In essence, the neurons within a feature map establish connections with neighboring neurons in the preceding layer; this region of influence forms the receptive field of each neuron. To generate a new feature map, the input is convolved with a learned kernel and an element-wise nonlinear activation function is then applied to the convolved results. It is important to note that in the process of generating each feature map, the kernel is applied across all spatial locations of the input, effectively sharing the same kernel weights throughout the entire input volume. The forward propagation equations are defined as,
\[\hat{z}_{i,j,\xi}^{o}=x_{i,j}w_{\xi}^{o}+b_{\xi}^{o},\forall i,\forall j,\forall o,\forall\xi \tag{12}\]

\[z_{i,j,\xi}^{o}=\max\left(\hat{z}_{i,j,\xi}^{o},0\right),\forall i,\forall j,\forall o,\forall\xi \tag{13}\]
where \(w_{\xi}^{o}\) and \(b_{\xi}^{o}\) are the weight vector and bias term of the \(\xi\) -th filter of the \(o\) -th layer respectively, and \(x_{i,j}\) is the input patch centered at location \((i,j)\) of the \(o\) -th layer. It should be noted that the kernel \(w_{\xi}^{o}\) that generates the feature map is shared, such mechanism can reduce the model complexity and improve the efficiency of the model. ReLU is used as the activation function for introducing nonlinearities to CNN.
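As a concrete illustration of Eqs. (12)-(13), a minimal "valid" convolution with a single shared kernel followed by ReLU can be written as follows; the input grid, kernel and bias are illustrative values, not the predictor's trained parameters.

```python
import numpy as np

def conv2d_relu(x, w, b):
    """Eqs. (12)-(13): shared-kernel 'valid' convolution plus ReLU.
    x: (H, W) input, w: (kh, kw) kernel, b: scalar bias."""
    kh, kw = w.shape
    H, W = x.shape
    z = np.empty((H - kh + 1, W - kw + 1))
    for i in range(z.shape[0]):
        for j in range(z.shape[1]):
            z[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b  # Eq. (12)
    return np.maximum(z, 0.0)                                # Eq. (13)

x = np.arange(16.0).reshape(4, 4)
w = np.array([[1.0, 0.0], [0.0, -1.0]])
# x[i, j] - x[i+1, j+1] = -5 everywhere for this input, so with b = 6
# the whole 3x3 feature map activates to 1.
print(conv2d_relu(x, w, b=6.0))
```

Because the same `w` slides over every location, the feature map is produced with far fewer parameters than a fully connected layer, which is the weight-sharing property described above.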
Before feeding into the fully connected layer, the convolved features should be flattened in advance. Denoting the flatten function as flatten(\(\cdot\)), for the generated feature map \(z_{i,j,\xi}^{o}\) we have,
\[z^{full}=\text{flatten}\left(z_{\xi}^{o_{N}}\right) \tag{14}\]
\[z^{1}=z^{full}W^{1}+b^{1} \tag{15}\]
\[\hat{z}^{q}=z^{q-1}W^{q}+b^{q} \tag{16}\]
\[z^{q}=\max\left(\hat{z}^{q},0\right) \tag{17}\]
and
\[R_{h,s}=z_{N_{L}}W_{N_{L+1}}+b_{N_{L+1}} \tag{18}\]
where \(o_{N}\) is the last convolutional layer, and \(W^{q}\) and \(b^{q}\) represent the weight and bias of the \(q\)-th fully connected layer. \(W_{N_{L+1}}\) and \(b_{N_{L+1}}\) represent the weight and bias of the output layer. The training process minimizes the total mean squared error between the predicted and labeled outputs over all training samples as follows,
\[\min_{\Phi}\frac{1}{N_{S}}\sum_{s=1}^{N_{S}}\left(\hat{f}_{\text{max},s}-f_{\text{ref},s}\right)^{2} \tag{19}\]
where \(\Phi=\left\{w_{\xi}^{o},b_{\xi}^{o},W^{q},b^{q},W_{N_{L+1}},b_{N_{L+1}}\right\}\) represents the set of optimization variables.
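A minimal sketch of the flatten/fully-connected head (Eqs. (14)-(18)) trained by gradient descent on the MSE objective (19). The data, layer sizes, label-generating map and learning rate are all illustrative assumptions; the paper trains a full CNN with an autograd framework rather than the hand-written gradients used here.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# Toy setup: 64 samples whose last-conv-layer output is 4 feature maps of
# size 3x3; labels come from a known linear map for illustration only.
maps = rng.normal(size=(64, 4, 3, 3))
z_full = maps.reshape(64, -1)                 # Eq. (14): flatten, dim 36
w_true = 0.2 * rng.normal(size=(36, 1))
f_ref = z_full @ w_true                       # labeled RoCoF values

W1, b1 = 0.1 * rng.normal(size=(36, 16)), np.zeros(16)
Wo, bo = 0.1 * rng.normal(size=(16, 1)), np.zeros(1)

def forward(z):
    z1 = relu(z @ W1 + b1)                    # Eqs. (15)-(17)
    return z1 @ Wo + bo, z1                   # Eq. (18): linear output

init_mse = float(np.mean((forward(z_full)[0] - f_ref) ** 2))
lr = 0.05
for _ in range(500):                          # minimize Eq. (19)
    f_hat, z1 = forward(z_full)
    g = 2.0 * (f_hat - f_ref) / len(f_ref)    # d(MSE)/d(f_hat)
    g1 = (g @ Wo.T) * (z1 > 0)                # backprop through ReLU
    Wo -= lr * z1.T @ g
    bo -= lr * g.sum(axis=0)
    W1 -= lr * z_full.T @ g1
    b1 -= lr * g1.sum(axis=0)

final_mse = float(np.mean((forward(z_full)[0] - f_ref) ** 2))
print(f"MSE: {init_mse:.4f} -> {final_mse:.6f}")
```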
## IV CNN-Based RoCoF-Constrained Unit Commitment
### _Basic SCUC Model_
In this section, the proposed CNN-RCUC considering frequency related constraints is formulated. The objective of the modified CNN-RCUC model is to minimize total operating cost subject to various system operational constraints and guarantee system frequency stability. The formulation is shown below:
\[\min\sum_{g\in G}\sum_{t\in T}\left(c_{g}P_{g,t}+c_{g}^{NL}u_{g,t}+c_{g}^{SU}v_{g,t}+c_{g}^{RE}r_{g,t}\right) \tag{20a}\]
\[\sum_{g\in G_{n}}P_{g,t}+\sum_{k\in K_{n}^{+}}P_{k,t}-\sum_{k\in K_{n}^{-}}P_{k,t}-D_{n,t}+E_{n,t}=0,\forall n,t \tag{20b}\]
\[P_{k,t}-b_{k}\left(\theta_{n,t}-\theta_{m,t}\right)=0,\forall k,t \tag{20c}\]
\[-P_{k}^{\max}\leq P_{k,t}\leq P_{k}^{\max},\forall k,t \tag{20d}\]
\[P_{g}^{\min}u_{g,t}\leq P_{g,t},\forall g,t \tag{20e}\]
\[P_{g,t}+r_{g,t}\leq u_{g,t}P_{g}^{\max},\forall g,t \tag{20f}\]
\[0\leq r_{g,t}\leq R_{g}^{re}u_{g,t},\forall g,t \tag{20g}\]
\[\sum_{j\in G,j\neq g}r_{j,t}\geq P_{g,t}+r_{g,t},\forall g,t \tag{20h}\]
\[P_{g,t}-P_{g,t-1}\leq R_{g}^{hr},\forall g,t \tag{20i}\]
\[P_{g,t-1}-P_{g,t}\leq R_{g}^{hr},\forall g,t \tag{20j}\]
\[v_{g,t}\geq u_{g,t}-u_{g,t-1},\forall g,t \tag{20k}\]
\[v_{g,t+1}\leq 1-u_{g,t},\forall g,t\leq nT-1 \tag{20l}\]
\[v_{g,t}\leq u_{g,t},\forall g,t \tag{20m}\]
\[\sum_{s=t-UT_{g}+1}^{t}v_{g,s}\leq u_{g,t},\forall g,t\geq UT_{g} \tag{20n}\]
\[\sum_{s=t+1}^{t+DT_{g}}v_{g,s}\leq 1-u_{g,t},\forall g,t\leq nT-DT_{g} \tag{20o}\]
\[u_{g,t},v_{g,t}\in\{0,1\},\forall g,t \tag{20p}\]
Equation (20a) is the objective function, and the basic constraints include (20b)-(20o). Equation (20b) enforces the
Fig. 1: Architecture of proposed CNN model.
nodal power balance. Network power flows are calculated in (20c) and are restricted by the transmission capacity as shown in (20d). The scheduled energy production and generation reserves are bounded by unit generation capacity and ramping rate (20e)-(20j). As defined in (20h), the reserve requirements ensure the reserve is sufficient to cover any loss of a single generator. The start-up status and on/off status of conventional units are defined as binary variables (20k)-(20m). The minimum-up time before a generator can be shut down and the minimum-down time before it can be started up again are depicted in (20n) and (20o), respectively. Indicating variables for generator start-up and commitment status are binary and are represented by (20p).
### _Feature Encoding_
Since \(\boldsymbol{\sigma}_{i}^{G}\) contains the max operator, it cannot be directly used in the encoding formulation. Thus, we introduce supplementary binary variables \(v_{g,t}^{G}\) to indicate whether generator \(g\) outputs the largest active power in scheduling period \(t\). The reformulations are expressed as follows,
\[P_{\chi,t}^{G}-P_{g,t}^{G}\leq A\left(1-v_{g,t}^{G}\right),\forall g,\chi,t \tag{21}\] \[\sum_{g\in G}v_{g,t}^{G}=1,\forall t \tag{22}\]
where \(A\) is a big number. Equation (21) enforces \(v_{g,t}^{G}\) to be zero if the dispatched power of any generator \(\chi\) is larger than that of the interested generator \(g\) at period \(t\). Equation (22) ensures that only one largest generator is considered as the potential largest contingency. Equations (21) and (22) together enforce that generator \(g\) has the largest output power and \(v_{g,t}^{G}\) would be set to 1. To further encode the magnitude and spatial information of the disturbance into the feature vector, variable \(\rho_{g,t}^{G}\) is introduced; its value equals the largest generation \(P_{g,t}^{G}\) when \(v_{g,t}^{G}=1\) and zero otherwise. The constraints can be expressed as follows,
\[\rho_{g,t}^{G}-P_{g,t}^{G}\geq-A\left(1-v_{g,t}^{G}\right),\forall g,t \tag{23}\] \[\rho_{g,t}^{G}-P_{g,t}^{G}\leq A\left(1-v_{g,t}^{G}\right),\forall g,t \tag{24}\] \[0\leq\rho_{g,t}^{G}\leq Av_{g,t}^{G},\forall g,t \tag{25}\]
Thus, the overall feature vector of a case \(x_{\text{g}}\) can be then defined as follows,
\[x_{t}=[u_{1,t},\cdots,u_{g,t},\rho_{1,t}^{G},\cdots,\rho_{g,t}^{G},P_{1,t}, \cdots,P_{g,t}],\forall g,t \tag{26}\]
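The indicator logic of Eqs. (21)-(25) can be checked by brute force on a toy dispatch; the generator outputs and the big-M value below are illustrative, and the check simply confirms that only the assignment flagging the largest unit is feasible.

```python
# Brute-force check of the big-M indicator logic in Eqs. (21)-(25):
# v[g] = 1 only for the generator with the largest output, and rho[g]
# then copies that output (and is 0 otherwise). A must exceed any P_g.
A = 1e4
P = [120.0, 350.0, 80.0]          # illustrative dispatches, MW

def feasible(P, v, rho, A):
    if sum(v) != 1:                                        # Eq. (22)
        return False
    for pg, vg, rg in zip(P, v, rho):
        for px in P:
            if px - pg > A * (1 - vg):                     # Eq. (21)
                return False
        if not (-A * (1 - vg) <= rg - pg <= A * (1 - vg)):  # (23)-(24)
            return False
        if not (0 <= rg <= A * vg):                        # Eq. (25)
            return False
    return True

# Only the assignment flagging generator 1 (the 350 MW unit) is feasible
assert feasible(P, v=[0, 1, 0], rho=[0.0, 350.0, 0.0], A=A)
assert not feasible(P, v=[1, 0, 0], rho=[120.0, 0.0, 0.0], A=A)
print("indicator logic verified")
```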
### _CNN Linearization_
Since ReLU activation functions are nonlinear, to include the CNN in the MILP, binary variables \(\alpha_{i,j,\xi,t}^{o}\) and \(\alpha_{l,t}^{q}\) are introduced, which represent the activation status of the ReLUs within the CNN model. Considering that \(A\) is a big number larger than the absolute value of all \(\hat{z}_{i,j,\xi,t}^{o}\) and \(\hat{z}_{l,t}^{q}\): when the pre-activated value \(\hat{z}_{i,j,\xi,t}^{o}\) or \(\hat{z}_{l,t}^{q}\) is larger than zero, constraints (27b)-(27e) and (27i)-(27l) force the binary variable \(\alpha_{i,j,\xi,t}^{o}\) or \(\alpha_{l,t}^{q}\) to one, and the activated value equals the pre-activated value. When \(\hat{z}_{i,j,\xi,t}^{o}\) or \(\hat{z}_{l,t}^{q}\) is less than or equal to zero, the constraints force the binary variable to zero, and the activated value is subsequently set to zero.
\[\hat{z}_{i,j,\xi,t}^{o}=x_{i,j,t}w_{\xi}^{o}+b_{\xi}^{o},\forall i,j,t,o,\xi \tag{27a}\] \[z_{i,j,\xi,t}^{o}\leq\hat{z}_{i,j,\xi,t}^{o}+A\left(1-\alpha_{i,j,\xi,t}^{o}\right),\forall i,j,t,o,\xi \tag{27b}\] \[z_{i,j,\xi,t}^{o}\geq\hat{z}_{i,j,\xi,t}^{o},\forall i,j,t,o,\xi \tag{27c}\] \[z_{i,j,\xi,t}^{o}\leq A\alpha_{i,j,\xi,t}^{o},\forall i,j,t,o,\xi \tag{27d}\] \[z_{i,j,\xi,t}^{o}\geq 0,\forall i,j,t,o,\xi \tag{27e}\] \[z_{t}^{\text{full}}=\text{flatten}\left(z_{t}^{o_{N}}\right),\forall t \tag{27f}\] \[z_{l,t}^{1}=z_{t}^{\text{full}}W_{l}^{1}+b_{l}^{1},\forall l,t \tag{27g}\] \[\hat{z}_{l,t}^{q}=z_{t}^{q-1}W_{l}^{q}+b_{l}^{q},\forall q,l,t \tag{27h}\] \[z_{l,t}^{q}\leq\hat{z}_{l,t}^{q}+A\left(1-\alpha_{l,t}^{q}\right),\forall q,l,t \tag{27i}\] \[z_{l,t}^{q}\geq\hat{z}_{l,t}^{q},\forall q,l,t \tag{27j}\] \[z_{l,t}^{q}\leq A\alpha_{l,t}^{q},\forall q,l,t \tag{27k}\] \[z_{l,t}^{q}\geq 0,\forall q,l,t \tag{27l}\] \[\alpha_{i,j,\xi,t}^{o},\alpha_{l,t}^{q}\in\{0,1\},\forall i,j,o,q,\xi,l,t \tag{27m}\]
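The big-M encoding (27b)-(27e) (and likewise (27i)-(27l)) admits exactly \(z=\max(\hat{z},0)\) as the feasible activated value. A brute-force check on a grid of candidate values illustrates this; the big-M value and test points are illustrative.

```python
# For z_hat > 0, only alpha = 1 is feasible (forcing z = z_hat); for
# z_hat < 0, only alpha = 0 is feasible (forcing z = 0).
A = 100.0

def feasible_z(z_hat, alpha):
    """All z on a coarse grid satisfying (27b)-(27e) for this alpha."""
    grid = [k / 100.0 for k in range(-10000, 10001, 25)]
    return [z for z in grid
            if z <= z_hat + A * (1 - alpha)   # (27b)
            and z >= z_hat                    # (27c)
            and z <= A * alpha                # (27d)
            and z >= 0]                       # (27e)

for z_hat in (-3.0, 2.5):
    sols = feasible_z(z_hat, 0) + feasible_z(z_hat, 1)
    assert all(abs(z - max(z_hat, 0.0)) < 1e-9 for z in sols)
print("big-M ReLU encoding matches max(z_hat, 0)")
```

In the MILP this means the solver is free to choose the binaries, yet any feasible solution reproduces the exact forward pass of the trained network.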
## V Results Analysis
A case study on the IEEE 24-bus system [20] is provided to demonstrate the effectiveness of the proposed methods. This test system contains 24 buses, 33 generators and 38 lines, and also has wind power as a renewable resource. The mathematical model-based data generation is operated in Python using Pyomo [21]. The PSS/E software is used for time domain simulation and the labeling process [22]. We use full-scale models for the dynamic simulation during the labeling process: GENROU and GENTPJ for the synchronous machine; IEEEX1 for the excitation system; IESGO for the turbine-governor; PSS2A for the power system stabilizer. Standard wind turbine generator (WTG) and corresponding control modules are employed. The data creation and verification steps are implemented using Pyomo and PSS/E. SCUC is solved using the Gurobi solver. The machine learning step is implemented in Python 3.6. A computer with Intel(R) Xeon(R) W-2295 CPU @ 3.00GHz, 192 GB of RAM and NVIDIA GeForce RTX 2060, 6GB GPU was utilized.
### _Predictor Training_
We first generate 8300 samples for predictor training. Each case is labeled with security status, 0 for insecure and 1 for secure based on post contingency conditions. To ensure the
practicality of the dataset and the generality of the trained model, load and RES profiles are sampled from a Gaussian distribution while the deviation of the mean value ranges within [-20%, 20%] of the base value. The optimality gap of the solver is set to 0.1%. We assume synchronous generators have adequate reactive power capacity, and WTGs are controlled with a unity power factor.
Several common classification methods are first compared with the proposed CNN model on the IEEE 24-Bus system data. The proposed CNN based RoCoF predictor is compared with the benchmark DNN in TABLE I. The results show that with a tolerance of 5%, the proposed CNN model has a validation accuracy of 93.53%. For the benchmark DNN model, the validation accuracy with 5% tolerance is calculated as 92.78% on the same validation dataset, which is relatively lower than that of the proposed CNN model. Fig. 2 presents the evolution of MSE losses on the training and validation sets over the training process of the proposed CNN model.
Additionally, we validate the performance of the CNN predictor using the following metrics to demonstrate the prediction accuracy: (1) median absolute error (MED-E), (2) mean absolute error (MEA-E), and (3) R2 score. From TABLE II we can observe that the CNN based RoCoF predictor has lower MSE as well as MEA-E, indicating that the CNN model has a better performance in processing power system data with graphical information embedded.
Electricity demand ranges from 1,300 MW to a peak of 1,786 MW. The results presented in TABLE II show that all RoCoF-constrained SCUC models reduce the reserve cost relative to T-SCUC. Especially for CNN-RCUC, the reserve cost is reduced from $83,475 to $61,882, giving a reduction of 25.87%. An increase in startup cost can also be observed in TABLE III when RoCoF related constraints are applied in period 19, which indicates extra cost is introduced to improve generator flexibility.
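The quoted reduction follows directly from the two reserve-cost figures; a quick arithmetic check:

```python
# Reserve-cost reduction of CNN-RCUC relative to T-SCUC (values from TABLE II)
reserve_t_scuc = 83475.0     # $
reserve_cnn_rcuc = 61882.0   # $
reduction = (reserve_t_scuc - reserve_cnn_rcuc) / reserve_t_scuc
print(f"reserve cost reduction: {reduction:.2%}")  # 25.87%
```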
We assume the worst-case contingency takes place in period 19, and the generator outputting the largest power is tripped. The system uniform RoCoF responses of different schedule cases are shown in Fig. 3. Fig. 3 (a) shows the uniform RoCoF response of ERC-SCUC model. With system equivalent RoCoF constraints incorporated, the highest RoCoF absolute value of ERC-SCUC is strictly 0.5 Hz/s, which satisfies the RoCoF constraint. Fig. 3 (b), (c) and (d) show that system uniform RoCoF doesn't violate the threshold in both LRC-SCUC and DNN-RCUC cases. It should be noted that the proposed CNN-RCUC model has the lowest RoCoF value.
The locational RoCoF dynamics of all models are plotted in Fig. 4. Combining Fig. 3 (a) and Fig. 4 (a) we can observe that even though the system uniform RoCoF doesn't violate the threshold when the equivalent RoCoF constraint is applied, the locational RoCoF on several generator buses violates its threshold due to oscillations. The ERC-SCUC is insecure as the dispatched condition cannot withstand the trip of the largest generator; cascading generator contingencies may occur under such a condition. From Fig. 3 (b) we can find that the highest RoCoF is mitigated for LRC-SCUC, yet it still
Fig. 3: Uniform RoCoF curves of different model following worst contingency case.
Fig. 2: Learning curve of CNN model.
violates the threshold due to introduced approximation error. For data driven methods, Fig. 4 (b) and Fig. 4 (c) show better RoCoF dynamics. As we can observe, DNN-RCUC can mitigate the highest RoCoF to -0.55Hz/s which slightly violates the threshold. The highest RoCoF if the proposed CNN-RCUC method is -0.52 Hz/s which outperforms all other SCUC schedules.
## VI Conclusions
The presence of a high level of renewable energy sources in the power grid reduces its overall inertia, which could lead to frequency instability during worst-case G-1 contingencies. Frequency-related constraints have been included in the SCUC process to ensure post-contingency frequency stability. Current physical model-based approaches face limitations. They either struggle to maintain locational RoCoF stability due to approximation errors or provide overly conservative solutions that result in additional costs. This paper proposes a novel CNN-RCUC model that integrates frequency-related constraints derived from convolutional neural networks capturing spatial correlations. Simulations conducted with PSS/E demonstrate that the proposed CNN-RCUC model effectively ensures system frequency stability without unnecessary conservatism.
# Hierarchical Training of Deep Neural Networks Using Early Exiting

Yamin Sepehri, Pedram Pad, Ahmet Caner Yüzügüler, Pascal Frossard, L. Andrea Dunbar

4 March 2023, http://arxiv.org/abs/2303.02384v4
###### Abstract
Deep neural networks provide state-of-the-art accuracy for vision tasks but they require significant resources for training. Thus, they are trained on cloud servers far from the edge devices that acquire the data. This issue increases communication cost, runtime and privacy concerns. In this study, a novel hierarchical training method for deep neural networks is proposed that uses early exits in a divided architecture between edge and cloud workers to reduce the communication cost, training runtime and privacy concerns. The method proposes a brand-new use case for early exits to separate the backward pass of neural networks between the edge and the cloud during the training phase. We address the issues of most available methods that due to the sequential nature of the training phase, cannot train the levels of hierarchy simultaneously or they do it with the cost of compromising privacy. In contrast, our method can use both edge and cloud workers simultaneously, does not share the raw input data with the cloud and does not require communication during the backward pass. Several simulations and on-device experiments for different neural network architectures demonstrate the effectiveness of this method. It is shown that the proposed method reduces the training runtime by \(29\%\) and \(61\%\) in CIFAR-10 classification experiment for VGG-16 and ResNet-18 when the communication with the cloud is done at a low bit rate channel. This gain in the runtime is achieved whilst the accuracy drop is negligible. This method is advantageous for online learning of high-accuracy deep neural networks on low-resource devices such as mobile phones or robots as a part of an edge-cloud system, making them more flexible in facing new tasks and classes of data.
Hierarchical Training, Early Exiting, Neural Network, Deep Learning, Edge-Cloud Systems
## I Introduction
Deep Neural Networks (DNNs) have shown their effectiveness in different computer vision tasks such as classification [1][2], object detection [3][4], and body-pose estimation [6]. These methods outperform the previous classical approaches in all of these areas in terms of accuracy. However, in general, these state-of-the-art DNNs are made of complex structures with numerous layers that are resource-demanding. For example, ResNet-18 is made of 72 layers structured as 18 deep layers and around 11 million trainable parameters [5]. Implementation of DNNs requires high computational resources to perform many multiplications and accumulations and a high amount of memory to store the vast number of parameters and feature maps. This issue is far more critical in the training phase of DNNs as it is more intense in terms of computations than the inference phase [7] due to the higher number of FLOPs in the backward pass in comparison to the forward pass and the additional operations needed to update the parameters [8]. All these demanding operations are often done in multiple passes in the training phase. Moreover, as the parameters, their updates and the layers' activation maps should be stored, the memory requirement is also higher in the training phase [9].
The large resource requirements for training classical DNNs make their implementation often unsuitable for resource-limited edge devices used in IoT systems. The conventional solution to this issue is to offload DNN training to cloud servers, which are abundant in terms of computational resources and memory. However, training DNNs on the cloud requires sending the collected input data from the edge to the cloud. This data communication increases the total latency of training. This training latency is crucial when one needs fast online learning and seamless adaptation of models in tasks like human-robot interaction [11]. Additionally, in many tasks, the dataset contains sensitive content such as personal information like identity, gender, etc., which raises privacy concerns if they are sent to the cloud due to the possibility of untrusted connections or cloud service providers [10].
To overcome the above problems, different hierarchical training methods have been proposed. The goal of hierarchical training is to train a complex DNN more efficiently by bringing it closer to where the data is acquired by sensors, i.e. the edge worker. As the edge cannot handle the whole training task of the DNN, a part of the training phase is outsourced to another device with substantial resources, i.e., the cloud worker. In other words, it offers a method to efficiently train the DNNs on a heterogeneous hierarchy of workers. Other optional levels may also be added in between, that represent local server workers which are closer to the edge but with lower resources in comparison to the cloud. For example, in [12], the authors analyzed the different workers as different graph nodes and found the shortest path to decide the schedule of execution on different workers, and in [13], the authors divided the data batches between the different workers with respect to their resources. Although these methods successfully divided the training phase between the workers, they suffered from issues such as high communication cost due to the several data transfers between the workers [12], and privacy concerns as they send directly the raw input data over the network [13].
In this work, we propose a novel hierarchical training framework using the idea of early exiting that provides efficient training of DNNs on edge-cloud systems. The proposed method addresses the issues of training on edge-cloud systems and mitigates the communication cost, reduces the latency, and improves privacy. Our method reduces the latency of training and privacy concerns as it does not send the raw data to the cloud; instead, it shares a set of features only. In order to achieve this goal, we benefit from the idea of early exiting at the edge, which was solely used in full-cloud training to achieve higher accuracy in specific architectures like GoogLeNet [14], or the inference phase of hierarchical systems [15]. We use early exiting to split the backward pass of training in the DNN architecture parts implemented at the edge and the cloud. It provides the possibility to partially parallelize the training over the edge and the cloud workers, enabling the use of their full potential and reducing the latency. Our early exiting approach does not need to communicate the gradients between the workers in the backward pass. Moreover, it enables the use of non-differentiable operations such as quantization to compress the communicated data in the forward pass. It also has the side benefits of providing robustness against network failures and a possibility of reduction in the edge power consumption, since there is no need to communicate the backward pass gradients from the cloud back to the edge.
We perform extensive experiments and compare the performance of our novel model with the baseline of full-cloud training, that is the scenario of communicating the input data directly to the cloud and training the neural network there. We show in on-device experiments that our method can reduce the latency of training in comparison to the baseline while having a negligible amount of accuracy reduction. It improves the training runtime significantly, especially when the communication bandwidth is low. The advantage of our method in terms of computational and memory requirements and communication burden is also shown in our experiments.
Our proposed method is especially useful in online training on resource-constrained devices that acquire their own data from their built-in cameras, like mobile phones or robots. It allows training high-accuracy DNNs for the given tasks without the need for having high computational and memory resources, nor to share their private raw data with cloud severs.
The main contributions of this work are summarized as:
* We propose a novel approach to train DNNs in hierarchical edge-cloud systems, using early exiting. It results in lower latency, reduced communication cost, improved privacy, and robustness against network failures with respect to a classical full-cloud training framework.
* We propose guidelines to efficiently select the partitioning strategy of the DNN between the edge and the cloud based on specific requirements such as runtime, accuracy or memory.
* We conduct a performance analysis and show the advantage of the proposed hierarchical training method in terms of memory consumption, computational resource requirement and communication burden. We perform extensive simulations and calculate the latency of different architectures of DNNs when they are trained with our edge-cloud framework and show their superiority over the full-cloud training while having a negligible accuracy drop.
* We implement the system and perform on-device experiments to show the runtime improvements in an experimental edge-cloud setup to validate the proposed idea.
The manuscript is structured as follows: the related works are described in Section II. After that, the proposed hierarchical training approach is elaborated in Section III. Section IV is dedicated to the experiments and their results and the study is concluded in Section V.
## II Related Works
In this section, we describe the related works and divide them into three groups: hierarchical training, early exiting, and hierarchical inference.
### _Hierarchical Training_
The idea of training a DNN on a system of hierarchical workers with different available resources has recently received attention from researchers. In these works, the authors try to train a DNN directly on an edge-cloud system. Eshratifar et al. [12] proposed the idea of JointDNN that defines the blocks of a DNN that are executed on each worker as graph nodes and proposed a method of DNN division by solving the shortest path problem in this graph. Although their method is able to improve the runtime and energy consumption of the training phase, it cannot benefit all levels of the hierarchy at the same time due to the sequential nature of the training phase. Additionally, the communication cost is still high between the mobile and the cloud levels as their proposed method sometimes has more than the two usual communication stages (for the forward pass and the backward pass).
Liu et al. [13] proposed the idea of HierTrain that benefits from hybrid parallelism to use the capacity of the different workers. They proposed a method that finds an optimized dividing position of the model between the workers and is also able to divide the input data in terms of batches with different sizes between the different levels of the hierarchy. Their method is able to reduce the latency of execution with respect to the full-cloud framework; however, it compromises the privacy of the users. The hybrid parallelism method sends a significant amount of raw input data directly to the cloud. Sending this raw data to the higher level workers also increases the communication cost of their method, in addition to the privacy issues. Moreover, this method is vulnerable in the case of network connection failures.
In contrast to these works, our goal is to propose a hierarchical training method that has a low communication cost and respects the privacy of users by not communicating the raw data. This method should be able to exploit the computational potentials of the edge and the cloud at the same time as well.
### _Early Exiting_
Early exiting is a method in DNNs that performs the decision-making in earlier layers in addition to the last layer. In non-hierarchical systems, it has been used in the training
of DNNs in architectures such as GoogLeNet [14] to achieve better accuracy of inference. In these methods, the early exit is used during the training phase and is removed in the inference phase. In hierarchical systems, there are methods such as [15] and [16] that use early exiting to reduce the communication burden of DNN's inference. In these models, the samples that have high confidence levels in the early exit are classified on the edge while the others are sent to the cloud for making the decision.
In contrast, in our work, the focus is on the benefit of early exits to improve the training phase of hierarchical edge-cloud systems. It reduces the latency of the training phase by parallelizing it on the edge and the cloud workers and provides robustness against network failure.
### _Hierarchical Inference_
To be complete on the related studies, we also discuss the hierarchical inference methods. In this group of works, the authors proposed a hierarchical framework for the inference phase of a DNN. Teerapittayanon et al. [15] proposed a distributed computing hierarchy that is made of three levels of cloud, fog and edge where they can execute the inference phase of a DNN when it is separated between them. They benefit from early exiting to reduce the required communication cost between the different levels. After that, Wang et al. [16] proposed adaptive distributed acceleration of DNNs where they proposed a method to select the best position to divide neural networks between the two workers in the inference phase. Recently, Xue et al. [17] proposed a more advanced algorithm that, instead of using iterative approaches to find the best position of separation on hierarchical systems, benefits from the decision-making ability of reinforcement learning to perform the offloading strategy, in systems with complex conditions. Another idea that has been used to reduce the communication cost, relates to using methods of lossy and lossless compression on the communicated data [18].
Although these methods are effective in executing the inference phase on the edge-cloud frameworks, they do not consider the problem of training, which is more complex.
## III Hierarchical Training with Early Exiting
### _Proposed Hierarchical Training Framework_
As mentioned in Section II-A, the idea of training DNNs on a hierarchy of multiple workers is a new area of research that is not yet fully explored. The previous works generally do not exploit the potential of all levels and have high communication cost and privacy concerns. Our method addresses these issues by using the early exiting scheme, that enables a level of parallelism between the edge and the cloud in the different steps of the training phase.
More specifically we divide a conventional DNN architecture between an edge worker and a cloud worker at one of the middle layers, assuming the cloud has higher computational resources in comparison to the edge device, which is often the case (Figure 1-a). In the first step of training (shown in Figure 1-b), the data acquired by the sensor at the edge passes through the layers of the neural network that are located at the edge worker, and the forward pass of the edge is performed. This neural network has two main building blocks: a feature extractor and a local decision maker. The output of the edge feature extractor is transmitted to the cloud to be fed to the rest of the DNN architecture layers. The local decision maker also takes the output of this feature extractor enabling an early exit. The edge loss that is generated in this early exit later allows the neural network parameters at the edge to be updated during the backward pass.
In the second step (shown in Figure 1-c), the feature map, which is generated by the feature extractor at the edge, is communicated to the cloud server. The forward pass of the cloud layers of the DNN is completed, allowing a more computationally intensive feature extraction to take place. Then, the final output generates the cloud loss using the targets. At this time, the backward pass of the edge layer is also done at the edge level for training the parameters of the edge layer. Thus, these two tasks are done in parallel, the backward pass of the edge worker and the forward pass of the cloud worker. In the third step (shown in Figure 1-d), the backward pass of the cloud layers is done to update the parameters of these layers.
In addition to exploiting the potential of both workers simultaneously, an important benefit of this approach is that there is no need to perform any communication during the backward pass, contrary to the previous works (e.g., [13]). The backward pass of each worker is done independently, resulting in a significant reduction in communication cost and total runtime. Indeed, the edge worker only transmits data and does not need to have the receiving ability. Thus, a less complex communicator device is required at the edge which is favorable in practice.
Another benefit of this method is that, as no backward pass happens in the position between the edge and the cloud, non-differentiable functions can be applied to the feature map to further compress it before communication. For example, the activations can be quantized before communicating to the cloud server, reducing the communication burden. These lossy compression methods should however be implemented while considering their possible penalty on the total accuracy.
The training phase is run for the number of iterations selected by the user. After it finishes, the inference phase is done on the proposed edge-cloud framework. The steps of the inference phase are simply the forward pass at the edge, the communication of the feature map to the cloud, the forward pass on the cloud, and the decision making using the final exit on the cloud. It is worth mentioning that the early exit at the edge device can also be used as a local decision maker in case of network failures.
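The three-step training schedule above can be illustrated with a minimal NumPy sketch. The model here is a hypothetical two-layer linear network with an early-exit head, trained with plain SGD on MSE losses; none of these choices are the authors' actual architecture, they only make the control flow concrete. The key point is that the cloud treats the received feature map as a constant, so no gradient ever crosses the edge-cloud boundary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions): input 8 -> edge feature 4 -> output 2.
W_edge = rng.normal(0, 0.1, (4, 8))    # edge feature extractor
W_exit = rng.normal(0, 0.1, (2, 4))    # early-exit head (local decision maker)
W_cloud = rng.normal(0, 0.1, (2, 4))   # cloud part of the network

def train_step(x, y, lr=0.05):
    """One hierarchical training step; returns (edge_loss, cloud_loss)."""
    global W_edge, W_exit, W_cloud
    # Step 1: edge forward pass; f is the feature map sent to the cloud.
    f = W_edge @ x
    # Step 2 (edge): the early-exit loss drives the edge backward pass locally.
    e_edge = (W_exit @ f) - y                  # MSE error at the early exit
    g_exit = np.outer(e_edge, f)
    g_edge = np.outer(W_exit.T @ e_edge, x)
    # Step 2 (cloud, runs in parallel with the edge backward pass): the
    # feature map is a constant here, so no gradient is sent back down.
    e_cloud = (W_cloud @ f) - y
    # Step 3 (cloud): backward pass confined to the cloud parameters.
    g_cloud = np.outer(e_cloud, f)
    W_edge -= lr * g_edge
    W_exit -= lr * g_exit
    W_cloud -= lr * g_cloud
    return float(e_edge @ e_edge), float(e_cloud @ e_cloud)

# Fit a single fixed sample; both losses shrink as training proceeds.
x = rng.normal(size=8)
y = np.array([1.0, -1.0])
losses = [train_step(x, y) for _ in range(1000)]
print(losses[0], losses[-1])
```

In a deep-learning framework the same effect would be obtained by detaching the feature tensor from the autograd graph before transmission, which is what makes the edge backward pass independent of the cloud.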
### _Runtime Analysis_
One of the main goals of this work is to improve the overall runtime of the training phase in our proposed hierarchical training method in comparison to the full-cloud training. This target can be achieved by splitting a neural network at a suitable point between the edge and the cloud, based on the performance of the devices, the selected communication protocol, and the selected DNN architecture. However, the training runtime cannot be estimated easily without physically
implementing it on the devices. The reason is the different choices for the devices and the different internal structures and delays that result in different overall runtimes. To tackle this issue, we propose a method to simplify this procedure and estimate the runtime by just performing one epoch of forward pass on the selected edge and the cloud. We use this forward pass runtime to estimate the runtime of the backward pass. We also calculate the runtime of the update phase and communication phase and propose a method to sum them up to compute the estimated training runtime.
We propose Equation 1 to estimate the training runtime of our proposed hierarchical training method.
\[T_{\text{total}}^{\text{hierarchical}}=T_{\text{comp,forw}}^{\text{edge}}+\max\left(T_{\text{comm}}^{\text{hierarchical}}+T_{\text{comp,forw}}^{\text{cloud}}+T_{\text{comp,backw}}^{\text{cloud}},\ T_{\text{comp,backw}}^{\text{edge}}\right) \tag{1}\]
where \(T_{\text{total}}^{\text{hierarchical}}\) is the total runtime, \(T_{\text{comp,forw}}^{\text{edge}}\) and \(T_{\text{comp,backw}}^{\text{edge}}\) are the runtimes of the forward pass and the backward pass of the edge part of the DNN architecture, \(T_{\text{comp,forw}}^{\text{cloud}}\) and \(T_{\text{comp,backw}}^{\text{cloud}}\) are the runtimes of the forward pass and the backward pass of the cloud part of the DNN architecture, and \(T_{\text{comm}}^{\text{hierarchical}}\) is the duration of communication between the edge and the cloud. The \(\max\) function reflects that, in our proposed hierarchical training method, the backward pass at the edge is executed at the same time as the communication and the execution on the cloud.
Only one forward pass is done on the connected edge and cloud to measure the values of \(T_{\text{comp,forw}}^{\text{edge}}\) and \(T_{\text{comp,forw}}^{\text{cloud}}\). Afterwards, we calculate the backward pass runtime terms in Equation (1) as
\[T_{\text{comp,backw}}^{\text{edge}}=\alpha\,T_{\text{comp,forw}}^{\text{edge}}+\beta\sum_{i=1}^{e}\frac{P_{i}}{S^{\text{edge}}}, \tag{2}\]
\[T_{\text{comp,backw}}^{\text{cloud}}=\alpha\,T_{\text{comp,forw}}^{\text{cloud}}+\beta\sum_{i=e+1}^{c}\frac{P_{i}}{S^{\text{cloud}}}. \tag{3}\]
In Equations (2) and (3), the first term shows the time needed to perform the calculations of the backward pass. We calculate this by multiplying the backward-to-forward ratio \(\alpha\) by the measured forward pass runtime. In general, the scalar \(\alpha\) is a value that, for most DNNs with convolution layers and large batch sizes, is close to \(2\)[8]. The second term shows the update phase duration, and the summations run over all the layers that are implemented at the edge or the cloud, where \(e\) represents the last layer that is executed on the edge and \(c\) is the total number of layers in the DNN. \(P_{i}\) represents the number of parameters in layer \(i\) that should be updated. \(S\) indicates the theoretical performance (computation speed) of the edge or the cloud device. The scalar \(\beta\) shows the number of FLOPs that are needed to update every parameter of the DNN using the selected optimizer. As an example,
Fig. 1: a) A schematic view of the different parts in the proposed hierarchical execution framework. b) In the first step of training, the forward pass is done at the edge feature extractor to generate the feature set that is sent to the cloud. A local decision maker also takes the output of this feature extractor enabling an early exit that is later used for the backward pass at the edge. c) The feature set is communicated to the cloud and a more computationally intensive feature extraction and the final decision making is done at the cloud. Simultaneously, the backward pass of the edge is done to train the edge parameters. d) The backward pass of the cloud is done to update the cloud parameters. The green borders indicate the running process at each step.
stochastic gradient descent requires \(2\) FLOPs per parameter and the Adam optimizer [19] requires \(18\) FLOPs per parameter [8], which results in \(\beta=2\) or \(18\) for these cases, respectively.
The communication duration in Equation (1) can be calculated as
\[T_{\text{comm}}^{\text{hierarchical}}=\frac{D_{\text{comm}}}{S_{\text{network}}} \tag{4}\]
where \(D_{\text{comm}}\) is the size of data that is communicated. For our proposed method of hierarchical training, it is the size of the feature map that is communicated. \(S_{\text{network}}\) represents the bandwidth of the selected communication network.
In this study, we compare our work with the full-cloud training method as the baseline. In this case, the edge device just captures the inputs and transmits them to the cloud and the whole training procedure for all layers of the DNN is done on the cloud. The runtime of the full-cloud method is simply calculated as
\[T_{\text{total}}^{\text{full-cloud}}=T_{\text{comm}}^{\text{full-cloud}}+T_{\text{comp,forw}}^{\text{cloud}}+T_{\text{comp,backw}}^{\text{cloud}}. \tag{5}\]
The backward pass term of Equation (5) can be similarly calculated with one forward pass runtime measurement by exploiting Equation (3). The communication term of Equation (5) can also be calculated with Equation (4). The only difference is the \(D_{\text{comm}}\) that is now the size of the raw input data that is communicated to the cloud.
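Equations (1)-(5) can be wrapped in a small Python estimator. The helper names and the sample numbers below are illustrative assumptions (the defaults \(\alpha=2\) and \(\beta=2\) follow the SGD figures quoted above); the estimator only needs one measured forward-pass time per worker:

```python
def t_backward(t_forw, params, speed, alpha=2.0, beta=2.0):
    """Eqs. (2)-(3): backward time = alpha * forward time + update time."""
    return alpha * t_forw + beta * sum(params) / speed

def t_comm(data_bits, bandwidth):
    """Eq. (4): communication time of `data_bits` over `bandwidth` (bit/s)."""
    return data_bits / bandwidth

def t_hierarchical(t_forw_edge, t_forw_cloud, params_edge, params_cloud,
                   s_edge, s_cloud, feat_bits, bandwidth):
    """Eq. (1): the edge backward pass overlaps communication + cloud work."""
    return t_forw_edge + max(
        t_comm(feat_bits, bandwidth) + t_forw_cloud
        + t_backward(t_forw_cloud, params_cloud, s_cloud),
        t_backward(t_forw_edge, params_edge, s_edge))

def t_full_cloud(t_forw_cloud, params_all, s_cloud, input_bits, bandwidth):
    """Eq. (5): raw input is sent up; the whole DNN is trained on the cloud."""
    return (t_comm(input_bits, bandwidth) + t_forw_cloud
            + t_backward(t_forw_cloud, params_all, s_cloud))

# Illustrative numbers: slow uplink (1 Mbit/s), compressed feature map an
# order of magnitude smaller than the raw input batch.
hier = t_hierarchical(t_forw_edge=0.2, t_forw_cloud=0.5,
                      params_edge=[1e5], params_cloud=[1e7],
                      s_edge=1e9, s_cloud=1e12,
                      feat_bits=2e6, bandwidth=1e6)
full = t_full_cloud(t_forw_cloud=0.6, params_all=[1e5, 1e7],
                    s_cloud=1e12, input_bits=2e7, bandwidth=1e6)
print(hier, full)
```

With these (made-up) values the hierarchical schedule is several times faster than full-cloud training, because the dominant cost at low bandwidth is the communication term and the feature map is much smaller than the raw input.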
### _Separation Point Selection_
In the proposed hierarchical method, selecting the position of the separation point of the DNN between the edge and the cloud is important as it can affect the computational and memory burden on the edge and the cloud, the total runtime, and the accuracy. In this section, we propose an algorithm to select the edge-cloud separation position based on user requirements. As the full training of DNNs is computationally heavy, it is highly demanding to try all possible splitting points. Hence, the proposed algorithm smartly confines the splitting point candidates in order to lower the number of full-training trials. The user requirements may contain a maximum possible number of parameters at the edge (memory), a maximum duration of training (runtime), and a minimum precision (accuracy).
We propose Algorithm 1 for this purpose. The algorithm benefits from Equations (1)-(4) to reduce the number of full-training iterations needed on different separation points and to finally find a good position of separation. It finds the desired separation point for the given hardware specifications.
```
Require: DNN, Available Edge Memory, Speed of Computation (edge and cloud) and Communication, Accepted Runtime, Accepted Accuracy
Ensure: Position of Separation P
Initialization:
 1: L^i | i = 1 : N : layers/blocks in the DNN
 2: S_1, S_2, S_3 <- empty sets
Loop 1:
 3: for L^i | i = 1 : N do
 4:   Measure M^i_edge by counting parameters at the edge
 5:   if M^i_edge < Available Edge Memory then
 6:     S_1.append(i)
 7:   end if
 8: end for
Loop 2:
 9: for j in S_1 do
10:   Calculate T^j_hierarchical,calc by Eq. (1)-(4)
11:   if T^j_hierarchical,calc < Accepted Runtime then
12:     S_2.append(j)
13:   end if
14: end for
Loop 3:
15: for m in S_2 do
16:   Train DNN for one epoch and measure T^m_hierarchical,exp
17:   if T^m_hierarchical,exp < Accepted Runtime then
18:     S_3.append(m)
19:   end if
20: end for
21: S_3 <- sort {S_3 in descending order on m, m in S_3}
Loop 4:
22: for n in S_3 do
23:   Train DNN for the rest of the epochs
24:   if Acc^n_hierarchical,exp > Accepted Accuracy then
25:     P <- n
26:     break
27:   end if
28: end for
29: return P
```
**Algorithm 1** Selecting the position of separation of DNN between the edge and the cloud with specific runtime and accuracy criteria.
The algorithm inputs are the DNN architecture, the memory of the edge device, the computational power of the edge and cloud devices, the communication speed between the edge and the cloud and the accepted runtime and accuracy selected by the user. In Loop 1, we consider the memory requirement. We simply measure the number of parameters when the architecture is separated on different points. We add the separation points that satisfy the user's edge memory criterion to set \(S_{1}\).
In Loop 2, we compute the runtime based on Equations (1)-(4) for each possible separation point in \(S_{1}\). If for a separation point, the calculated training time is less than the acceptable criterion (\(T_{\text{hierarchical,calc}}^{j}<Accepted\ Runtime\)), it is added to set \(S_{2}\). As we mentioned in Section III-B, it is required to measure the time of just one forward pass before doing these calculations.
In Loop 3, the network is trained for one epoch for the separation points in Set \(S_{2}\) to measure the experimental runtime \(T_{\text{hierarchical,exp}}^{m}\). If the experimental runtime is acceptable for a separation position, it is added to Set \(S_{3}\). Afterwards, \(S_{3}\) is sorted based on the separation points, from the deeper to the earlier separation positions. The reason for this strategy is that we observe that deeper separation positions provide better accuracy, as will be shown in the experiments in the next sections.
In Loop 4, the network is trained for the rest of the epochs to
calculate the accuracy for the separation points selected from the sorted Set \(S_{3}\). If the accuracy is higher than the accepted value, the separation point is reported and the procedure is finished; otherwise, the next separation position from the set is selected and the same procedure is repeated. Notice that, in order to measure the accuracy, there is no way but to train the network completely; however, our method requires a low effort to achieve this goal by doing it on a carefully confined and sorted set.
In this algorithm, the full training just exists in Loop 4, after we limit the size of possible candidate separation points in Loop 1 by a measurement of parameters' size and in Loop 2 by a calculation of estimated runtime. Moreover, in Loop 3, we again reduce the size of possible separation points by performing only 1 epoch of training. This often results in few splitting points left to be fully trained in Loop 4. The Loop 4 also might not be done completely since we sorted the set of possible separation points for the loop in a way that there is a higher chance to achieve the required accuracy in the first iteration.
It is worth mentioning that Algorithm 1 can be modified easily for simpler scenarios. For example, when the user has no accuracy requirements, the only difference is that Loop 4 should be removed and all the separation points of Set \(S_{3}\) are acceptable, or when the user has no memory criterion, Loop 1 should be omitted.
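Loops 1 and 2 of Algorithm 1, the cheap filters that confine the candidate splitting points before any training happens, can be sketched as follows. The `layer_params` list and the toy `runtime_fn` are hypothetical stand-ins for the real per-layer parameter counts and the Eq. (1)-(4) runtime estimate:

```python
def select_candidates(layer_params, edge_mem_limit, runtime_fn, accepted_runtime):
    """Confine splitting points by edge memory (Loop 1), then by the
    estimated training runtime (Loop 2); return them deepest-first, the
    order in which Loops 3-4 time and fully train the survivors."""
    n = len(layer_params)
    # Loop 1: keep splits whose edge-side parameter count fits in edge memory.
    s1 = [i for i in range(1, n) if sum(layer_params[:i]) < edge_mem_limit]
    # Loop 2: keep splits whose Eq. (1)-(4) runtime estimate is acceptable.
    s2 = [i for i in s1 if runtime_fn(i) < accepted_runtime]
    # Descending order: deeper splits tend to give better accuracy, so they
    # are fully trained first in Loop 4.
    return sorted(s2, reverse=True)

# Hypothetical per-layer parameter counts and a toy runtime model in which
# deeper splits cost more edge compute time.
layers = [1e4, 1e5, 1e6, 1e7]
cands = select_candidates(layers, edge_mem_limit=2e6,
                          runtime_fn=lambda i: 0.5 * i,
                          accepted_runtime=1.2)
print(cands)
```

Only the splits that survive both filters reach the expensive one-epoch timing and full-training stages, which is the source of the algorithm's efficiency.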
In the next section, we perform experiments to validate the benefits of the proposed hierarchical training method.
## IV Experiments
In this section, we elaborate on the hierarchical training framework for different DNN architectures, describe the experiments performed using the proposed hierarchical training method and report the results. These particular DNN architectures are selected as they are widely used in vision tasks, and they are relatively intensive in terms of resource requirements which makes them challenging to run on low-resource devices.
### _Hierarchical Models_
Figure 2 shows VGG-16 [1] architecture implemented in a hierarchical fashion. In this figure, the network is divided between the edge and the cloud after the third convolution layer. This position of separation can however be moved along all layers based on the user's requirements.
The output of the early exit provides the loss for the backward pass on the edge, and the final output of the cloud provides the loss for the backpropagation in the cloud. The important point here is that most conventional DNN architectures such as VGG are not designed to be executed hierarchically. As a result, after most of the layers, the size of the feature map is large, even in comparison to the original input image, which makes it expensive to communicate over the network. This
Fig. 3: The structure of the proposed hierarchical training method applied on ResNet-18 architecture [2]. The position of separation can be moved along the different residual blocks of the architecture.
Fig. 2: The structure of the proposed hierarchical training method applied on VGG-16 architecture [1]. The position of separation can be moved along the different layers of the architecture.
increase in the communication cost is not desirable in the hierarchical execution of DNNs. In order to address this issue, a compression convolution layer and a quantizer are used at the division point which reduce the size of the feature map before sending it to the cloud. In our experiments, a \(4\)-bit quantization is used. This bit width is the maximum value that provides a lower communication burden for all the possible separation points of our DNNs, in comparison to the full-cloud training.
Since the range of the feature map may change significantly during the training phase [20], for every quantized batch that is sent to the cloud, one full-precision scale value is also communicated [21]. The scale value is calculated as the difference between the maximum value and the minimum value in that batch of data divided by the maximum quantization level (\(2^{4}-1=15\) in our case). The minimum value is \(0\) since ReLU activation functions are used. All the members of the batch are divided by this scale and then quantized before being communicated to the cloud. The scale is communicated to the cloud as well, where it is multiplied back into the values of the batch. This single scale per batch has a negligible impact on the communication cost, but plays a significant role in covering the data range and keeping the overall accuracy high across the training iterations. Finally, in order to avoid a high memory burden for the edge worker, only one fully connected layer is used in the early exit.
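The per-batch quantization scheme described above can be sketched as follows; this is a minimal illustration of the scale computation (with a guard for an all-zero batch, which is our addition), not the paper's implementation:

```python
def quantize_batch(batch, bits=4):
    levels = 2 ** bits - 1                        # 15 levels for 4-bit
    peak = max(batch)
    scale = peak / levels if peak > 0 else 1.0    # min is 0 after ReLU
    q = [round(x / scale) for x in batch]         # integers in [0, levels]
    return q, scale

def dequantize_batch(q, scale):
    # the cloud multiplies the received integers back by the scale
    return [v * scale for v in q]
```

One full-precision `scale` per batch is all the extra payload, which is why its communication cost is negligible.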
The same form of hierarchical implementation is applied on ResNet-18 [2] architecture. Figure 3 shows the proposed structure. There is again the compression convolution and the quantizer, and the edge-cloud separation can be done after every residual block in this network.
It is worth mentioning that the two DNN architectures VGG-16 and ResNet-18 are analyzed in this study because they are relatively large models that cannot easily be trained on low-resource devices, and the results are representative of a large group of convolutional and residual network models that are widely used in vision systems.
Fig. 4: The test accuracy on CIFAR-10 for hierarchical training when the network is separated at different positions along the architecture, compared to the full-cloud training accuracy. The left figure shows the results for VGG-16 and the right figure shows the results for ResNet-18. In addition to the main accuracy, the edge-only test accuracy is presented as a side benefit of the proposed method. Although this accuracy is lower than that of hierarchical training, it shows how the edge handles the classification problem alone in case of a communication failure.
Fig. 5: The number of parameters of the deep neural network on the edge side and the cloud side of the hierarchical system when the network is separated at different positions along the architecture, in comparison to the number of parameters in the full-cloud system. The left figure shows the results for VGG-16 and the right figure shows the results for ResNet-18. Notice the low number of parameters on the edge for most separation positions in hierarchical training, which is desirable given the possible memory constraints.
### _Performance Analysis_
In this part, the performance analysis setup is elaborated. We analyze the important performance factors of accuracy, memory footprint, computational burden, communication rate and runtime in our proposed hierarchical training framework and compare the results with the baseline of full-cloud training. The experiments are done on CIFAR-10 dataset [22]. The VGG-16 architecture is trained for \(100\) epochs with Adam optimizer [19] with a learning rate of \(2\times 10^{-4}\). The number of output channels of the compression convolution varies between \(4\) and \(512\) based on the position of the separation point. The ResNet-18 architecture is trained for \(200\) epochs with stochastic gradient descent optimizer and an initial learning rate of \(0.1\) that is reduced through every epoch with a cosine annealing scheduler [23]. Here, the number of channels in the output of the compression convolution varies between \(4\) and \(64\) based on the position of the separation point.
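The cosine annealing schedule used for ResNet-18 can be sketched as below, assuming the rate decays from the initial 0.1 to 0 over the 200 epochs (the minimum rate is not stated in the text):

```python
import math

def cosine_annealed_lr(epoch, total_epochs, lr_max=0.1, lr_min=0.0):
    # learning rate follows half a cosine period from lr_max down to lr_min
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1 + math.cos(math.pi * epoch / total_epochs))
```

At epoch 0 this returns the full rate of 0.1, at the halfway point 0.05, and it approaches 0 at the final epoch.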
Figure 4 shows the test accuracy of the hierarchically trained DNN when it is separated at different positions along the architecture in comparison to the full-cloud training. For most of the separation points, the accuracy of the final exit of hierarchical training is close to the accuracy of the full-cloud training, even though the backpropagation is not connected between the edge and the cloud, and the information communicated in the forward pass is compressed. As a side benefit, the test accuracy of the early exit in the edge is shown. Although this accuracy is lower in comparison to the hierarchical cloud, it shows that our proposed method can also provide a level of robustness against network failures, as the edge can still handle the classification problem up to an acceptable accuracy on its own. As an example, at separation position 3, our proposed hierarchical training method provides \(89.29\%\) accuracy for the VGG-16 architecture in the final exit that is close to the accuracy of \(91.86\%\) in the baseline full-cloud training. The early exit of our proposed hierarchical training method is able to provide \(85.49\%\) on its own in case of a communication
Fig. 6: The computational cost of the deep neural network in terms of MACC on the edge side and the cloud side during hierarchical training when the network is separated at different positions along the architecture, in comparison to the computational cost of full-cloud training. The left figure shows the results for VGG-16 and the right figure shows the results for ResNet-18. Notice the low computational burden on the edge for many of the separation points in hierarchical training, which makes them desirable since the edge's computational power is often constrained.
Fig. 7: The communication burden in bits required by the proposed hierarchical training when the network is separated at different positions along the architecture, compared to full-cloud training, which directly sends the original inputs to the cloud. The left figure shows the results for VGG-16 and the right figure shows the results for ResNet-18. The proposed hierarchical training approach has a lower communication burden than full-cloud training, which communicates the original inputs.
failure, which is an acceptable level of accuracy.
Figure 5 shows the number of parameters of the DNN that exists in the edge and the cloud. These values are obtained by counting the parameters on both hierarchical and full-cloud training models when the DNN is implemented. The important point in these figures is that, for many of the separation points, especially the ones in the earlier layers, the number of parameters in the edge is significantly lower in comparison to the number in the full-cloud model. As an example, at the separation point 3, there are just \(1.73\times 10^{5}\) parameters implemented at the edge for our proposed hierarchical training method for VGG-16 architecture while this value is equal to \(1.53\times 10^{7}\) for the full-cloud training. This result is desirable in edge-cloud frameworks since the memory of the edge is usually confined.
Figure 6 shows the computational burden of hierarchical training of DNNs in the edge and the cloud in comparison to the full-cloud training in terms of multiplications and accumulations (MACC). The value is measured by counting all the MACC operations in each of these models, on each device. The crucial point is that the computational burden of the edge is significantly low for many of the separation positions, especially in the earlier separation points. For example, there are only \(1.90\times 10^{8}\) MACC operations at the edge in the VGG-16 architecture in our proposed hierarchical training method, while there are \(9.58\times 10^{8}\) MACC operations in the baseline full-cloud training for this architecture. This is favorable for the edge-cloud systems since there are often restrictions in the computational resources of the edge devices which may cause high latency. As a side benefit, one can see that in all separation points, the computational burden of the cloud is also lower in comparison to the full-cloud training, which provides a lower cost of cloud services.
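Counting parameters and MACC operations for a convolutional layer follows directly from its shape. The sketch below uses the standard formulas (including bias terms in the parameter count, which is an assumption about how the counting was done):

```python
def conv2d_params(c_in, c_out, k):
    # weight tensor of shape c_out x c_in x k x k, plus one bias per
    # output channel
    return c_out * c_in * k * k + c_out

def conv2d_maccs(c_in, c_out, k, h_out, w_out):
    # one multiply-accumulate per weight per output spatial position
    return c_out * c_in * k * k * h_out * w_out
```

For the first VGG-16 convolution (3 to 64 channels, 3×3 kernel, 32×32 output on CIFAR-10), this gives 1792 parameters and about \(1.77\times 10^{6}\) MACCs; summing over the layers assigned to each device yields per-device profiles like those in Figures 5 and 6.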
Figure 7 shows the communication requirement of the proposed hierarchical training method in comparison to the full-cloud training. This value is the measured size of the quantized feature maps and the scales communicated in the hierarchical training model, versus the raw images communicated in the full-cloud model. As shown in these figures, for all of the separation points this communication burden is lower than that of full-cloud training, since our method does not communicate in the backward pass and compresses the information sent to the cloud in the forward pass. For example, at separation point 3, \(16384\) bits are communicated by the proposed hierarchical training method for the VGG-16 architecture, while the baseline full-cloud training communicates \(24576\) bits. In this experiment, we choose the number of channels of the compression convolution such that the size of the output feature map stays below a predefined fixed value, which results in the almost flat curves.
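The bit counts above can be reproduced with a simple calculation. The assumption that the compression convolution at separation point 3 yields a 4-channel, 32×32 feature map is ours, chosen because it matches the reported 16384 bits:

```python
def feature_map_bits(channels, height, width, bits_per_value):
    # payload of one feature map; the single 32-bit per-batch scale is
    # negligible and omitted here
    return channels * height * width * bits_per_value

raw_image = feature_map_bits(3, 32, 32, 8)    # raw CIFAR-10 image: 24576 bits
compressed = feature_map_bits(4, 32, 32, 4)   # assumed 4-ch, 4-bit: 16384 bits
```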
The next important performance factor is the training runtime. As improving the training runtime is a main goal of our proposed hierarchical training method, we need a good estimate of this value to decide where to separate a DNN architecture between the edge and the cloud (see Section III). In Section III-B, we proposed a set of equations to estimate the training runtime. In this part, we evaluate their performance by experimenting on a system of two devices, consisting of an NVIDIA Quadro K620 [24] (edge) and an NVIDIA GeForce RTX 2080 Ti [25] (cloud). As mentioned before, we perform a simple forward-pass runtime measurement and then use Equations (1)-(3) to estimate the training runtime. We do this for different separation positions and compare the estimated runtime with the experimental runtime on the two devices. The devices are directly connected in this simplified experiment; consequently, the communication term of Equation (1) is negligible.
Figure 8 shows the result of this experiment. It shows that our calculation method provides a good estimate of the runtime for different separation points along the DNN architectures and can therefore be used for separation point selection (see Section III-C). It is worth mentioning that, for the VGG-16 experiment, the memory of the selected edge device could not handle the separation points after point 7; as a result, they do not appear in Figure 8.
Using Equations (1)-(5) from Section III-B and the accuracies measured in Figure 4 of Section IV, we
Fig. 8: The calculated runtime of one epoch of training at different separation positions along the architecture in comparison to the experimental values, for the proposed hierarchical training framework implemented on a simplified edge-cloud system. The left figure shows the results for VGG-16 and the right figure shows the results for ResNet-18. Our calculation method provides a good estimate of the experimental values for most of the points.
obtain the guide Figure 9. This figure shows the accuracy versus the calculated training runtime for the different separation positions on the neural networks (the separation positions are indicated by the point labels) for our proposed hierarchical training method in comparison to the full-cloud training baseline. These values are computed for 3G and 4G communication protocols between edge and cloud. 3G and 4G are selected because these networks cover \(94.9\%\) and \(85.0\%\) of the world population, respectively, according to [26], and can be used for hierarchical systems.
The figures show that for both VGG-16 and ResNet-18 architectures and both 3G and 4G communication protocols, our proposed method provides a good reduction in runtime, while the accuracy penalty is low for many of the separation points. The gain in runtime is higher when the 3G communication protocol is used, since the communication bandwidth is lower and reducing its burden has a more significant effect on the total runtime. These figures help to choose the best hierarchical division between the edge and the cloud based on the specific accuracy and runtime requirements of the user. Recall that Figures 5 and 6 also help the user satisfy the memory footprint and computational cost requirements.
### _On-Device Experiments_
In this section, we demonstrate the performance of the proposed hierarchical training method with on-device experiments. In this setting, the edge device has a low-resource NVIDIA Quadro K620 [24] with \(863.2\) GFLOPS theoretical performance and the cloud has a high-end NVIDIA GeForce RTX 2080 Ti [25] with \(13.45\) TFLOPS theoretical performance. For the data communication stage, we simulate two different telecommunication protocols, 3G and 4G. Notice that in these setups, the data transfer bottleneck is the upload link which has an average speed of \(1.1\) Mbps for 3G and \(5.85\) Mbps for 4G in the United States [12].
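Under these uplink speeds, the per-batch communication time is simply the payload size divided by the link rate; a minimal sketch:

```python
def upload_time_s(bits, uplink_mbps):
    # the uplink is the bottleneck, so transfer time = payload / uplink rate
    return bits / (uplink_mbps * 1e6)

# quantized 16384-bit feature map of separation point 3 (see Figure 7)
t_3g = upload_time_s(16384, 1.1)     # ~14.9 ms over 3G
t_4g = upload_time_s(16384, 5.85)    # ~2.8 ms over 4G
```

This is why the slower 3G link benefits more from the compressed payload: the same bit reduction removes a larger slice of the total runtime.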
Table I shows the results of the experiments for VGG-16 and ResNet-18. The computational runtime results and the final accuracies are also included in Table I for comparison. For these experiments, separation position 3 has been selected for both architectures, as it provides a good balance between accuracy, edge memory, computational cost and runtime (see Figures 4-9). As shown, for the two DNNs and both telecommunication technologies, the hierarchical training scenarios provide a lower runtime than the full-cloud systems while incurring only a marginal reduction in accuracy. With 3G communication, this results in a \(28.96\%\) runtime improvement for VGG-16 and \(60.91\%\) for ResNet-18. For 4G communication, the improvements are \(13.78\%\) for VGG-16 and \(36.26\%\) for ResNet-18. As expected, the proposed hierarchical method is more advantageous for less efficient communication links, since the communication link is generally the bottleneck of the overall efficiency of hierarchical systems. Moreover, our calculations generally provide a good estimate of the experimental values. The calculated values differ slightly from the experimental ones due to complexities of real devices and experiments that are not accounted for in our simplified calculations, such as the overhead of communication between different parts inside a single GPU.
## V Summary and Future Works
In this study, a novel hierarchical training approach for DNN architectures in edge-cloud scenarios has been proposed. It provides lower communication cost, lower runtime, higher
Fig. 9: The accuracy and the calculated runtime of the proposed hierarchical training method when the network is separated at different positions along the architecture, in comparison to full-cloud training. The label numbers show the positions of the separation points. Left: the 3G communication protocol is considered. Right: the 4G communication protocol is considered.
privacy for the user, and improved robustness against possible network failures compared to the full-cloud training. We performed simulations on different neural network architectures and implemented the proposed approach on a two-device framework and validated its superiority with respect to the full-cloud training.
In the domain of hierarchical training of neural networks, a further topic to investigate is the design of hierarchical-friendly neural network architectures. As we saw in this study, the available DNN architectures are not made to be executed hierarchically. They have issues like the larger size of intermediate feature maps in comparison to the inputs, which reduces the effectiveness of hierarchical training approaches. New neural network architectures could be developed that inherently consider this issue. They could also take into account other important points such as keeping a low computational cost in the parts that are executed on the edge.
|
2306.12749 | Physical informed neural networks with soft and hard boundary constraints for solving advection-diffusion equations using Fourier expansions | Deep learning methods have gained considerable interest in the numerical solution of various partial differential equations (PDEs). One particular focus is physics-informed neural networks (PINN), which integrate physical principles into neural networks. This transforms the process of solving PDEs into optimization problems for neural networks. To address a collection of advection-diffusion equations (ADE) in a range of difficult circumstances, this paper proposes a novel network structure. This architecture integrates the solver, a multi-scale deep neural networks (MscaleDNN) utilized in the PINN method, with a hard constraint technique known as HCPINN. This method introduces a revised formulation of the desired solution for ADE by utilizing a loss function that incorporates the residuals of the governing equation and penalizes any deviations from the specified boundary and initial constraints. By surpassing the boundary constraints automatically, this method improves the accuracy and efficiency of the PINN technique. To address the ``spectral bias'' phenomenon in neural networks, a subnetwork structure of MscaleDNN and a Fourier-induced activation function are incorporated into the HCPINN, resulting in a hybrid approach called SFHCPINN. The effectiveness of SFHCPINN is demonstrated through various numerical experiments involving ADE in different dimensions. The numerical results indicate that SFHCPINN outperforms both standard PINN and its subnetwork version with Fourier feature embedding. It achieves remarkable accuracy and efficiency while effectively handling complex boundary conditions and high-frequency scenarios in ADE. | Xi'an Li, Jiaxin Deng, Jinran Wu, Shaotong Zhang, Weide Li, You-Gan Wang | 2023-06-22T09:04:40Z | http://arxiv.org/abs/2306.12749v2 | Physical informed neural networks with soft and hard boundary constraints for solving advection-diffusion equations using Fourier expansions
###### Abstract
Deep learning methods have gained considerable interest in the numerical solution of various partial differential equations (PDEs). One particular focus is on physics-informed neural networks (PINNs), which integrate physical principles into neural networks. This transforms the process of solving PDEs into optimization problems for neural networks. In order to address a collection of advection-diffusion equations (ADE) in a range of difficult circumstances, this paper proposes a novel network structure. This architecture integrates the solver, which is a multi-scale deep neural network (MscaleDNN) utilized in the PINN method, with a hard constraint technique known as HCPINN. This method introduces a revised formulation of the desired solution for advection-diffusion equations (ADE) by utilizing a loss function that incorporates the residuals of the governing equation and penalizes any deviations from the specified boundary and initial constraints. By surpassing the boundary constraints automatically, this method improves the accuracy and efficiency of the PINN technique. To address the "spectral bias" phenomenon in neural networks, a subnetwork structure of MscaleDNN and a Fourier-induced activation function are incorporated into the HCPINN, resulting in a hybrid approach called SFHCPINN. The effectiveness of SFHCPINN is demonstrated through various numerical experiments involving advection-diffusion equations (ADE) in different dimensions. The numerical results indicate that SFHCPINN outperforms both standard PINN and its subnetwork version with Fourier feature embedding. It achieves remarkable accuracy and efficiency while effectively handling complex boundary conditions and high-frequency scenarios in ADE.
keywords: Advection-Diffusion equation, PINN, Hard constraint, Subnetworks, Fourier feature mapping +
Footnote †: journal: Journal of LaTeX Templates
## 1 Introduction
The advection-diffusion equation (ADE) is a fundamental equation with widespread applications in various scientific and engineering domains. It finds relevance in fields such as physical biology [1], marine science [2], earth and atmospheric sciences [3], as well as mantle dynamics [4]. This article primarily focuses on investigating the dynamics of the unsteady advection-diffusion equation (ADE) under various boundary constraints. The ADE is mathematically described by the following equation:
\[\frac{\partial u}{\partial t}=\textbf{div}\big{(}\boldsymbol{p}\cdot\nabla u \big{)}-\boldsymbol{q}\cdot\nabla u+f. \tag{1}\]
This equation captures the interaction between convection and diffusion at different temporal and spatial scales. It involves a scalar variable denoted as \(u\), which is transported through advection and diffusion. The constant
or vector parameters \(\mathbf{p}\) and \(\mathbf{q}\) represent the advection field's speed and the diffusion coefficient in different directions, respectively. The term \(f\) signifies the internal source or sink's capacity, while the concentration gradient is represented by \(\nabla u\), where \(\nabla\) denotes the gradient operator and \(\mathbf{div}\) denotes the divergence operator.
Similar to other types of partial differential equations (PDEs), analytical solutions for the ADE are seldom available, so these PDEs must be solved numerically using approximation methods. Numerical methods such as the finite element method (FEM) [5, 6, 7], finite difference method (FDM) [8, 9] and finite volume method (FVM) [10] are commonly used to solve the ADE. In these approaches, the computational domain of interest is divided into a set of simple regular meshes, and the solution is computed on these mesh patches. Generally, to reduce the numerical error, the mesh size must be small when solving PDEs with these mesh-dependent methods, which yields significant computational and storage challenges [11]. Given a specific mesh size, several numerical techniques have been developed to reduce the errors of mesh methods [12], such as the upwind scheme [13] and the Galerkin least squares strategy [14]. However, mesh-dependent methods can be challenging, time-consuming, and computationally expensive when dealing with complex domains of interest and boundary constraints. In contrast, meshless methods that use a set of collocation points without grids have been developed to approximate the solution of the ADE, such as radial basis functions [15, 16], Monte Carlo methods [17], and B-spline collocation approaches [18]. While these methods are easy to implement and straightforward, their accuracy may deteriorate compared to grid-based methods.
In the past few years, deep neural networks (DNN) have demonstrated significant potential in solving ordinary and partial differential equations as well as inverse problems. This is due to their ability to handle strong nonlinearity and high-dimensional problems, as highlighted in various studies [19, 20, 21, 22, 23, 24, 25, 26]. This methodology is preferable because it transforms a PDE problem into an optimization problem, and then approximates the PDE solution through gradient backpropagation and automatic differentiation of the DNN. Furthermore, they are inherently mesh-free and can address high-dimensional and geometrically complex problems more efficiently than mesh-based methods. In the early 1990s, the concept of physics-constrained learning was introduced for solving conventional differential equations [27, 28, 29]. Physics Informed Neural Networks (PINN) were proposed by Raissi et al. [21], which incorporates physical laws into neural networks by adding the residuals of both the PDEs and the boundary conditions (BCs) as multiple terms in the loss function. The Deep Ritz Method was proposed by Yu et al. [19] for numerically solving variational problems, and various PINN methods have been proposed for solving PDEs with complex boundaries by researchers such as Wang and Zhang [30], Gao et al. [31], Sheng and Yang [32]. Some researchers have also investigated using physical constraints to train PINNs [33, 34, 35, 36]. Despite some impressive developments in the field, including DPM [37], PINN still faces significant challenges, as pointed out in the study [38]. Several theoretical papers [39, 40] have highlighted an imbalanced competition between the terms of PDEs and BCs in the loss function, limiting PINN's applicability in complex geometric domains. To address this problem, researchers such as Berg and Nystrom [20], Sun et al. [41], Lu et al. 
[42] have proposed incorporating BCs into the ansatz such that any instance from the ansatz automatically satisfies the BCs, resulting in so-called hard-constraint methods. By satisfying the BCs precisely, the PDE solution becomes an unconstrained problem, which can be more effectively trained in a neural network.
In addition, the standard PINN method may not perform well on problems with high-frequency components because of the low-frequency bias of DNNs, which has been documented by Xu et al. [43] and Rahaman et al. [44]. For instance, when using a DNN to fit the function \(y=\sin(x)+\sin(2x)\), the DNN output initially approximates \(\sin(x)\) and then gradually converges to \(\sin(x)+\sin(2x)\). This phenomenon is called the Frequency Principle (F-Principle) or spectral bias, which stems from the inherent divergence in the gradient-based DNN training process. To address this issue, researchers have explored the relationship between DNNs and Fourier analysis, which was inspired by the F-Principle [45, 46, 47]. Recent experimental studies by Zhong et al. [48], Mildenhall et al. [49] have suggested that a heuristic sinusoidal encoding of input coordinates, termed "positional encoding", can enable PINNs to capture higher frequency content. To enhance the computational efficiency of the aforementioned PINN, we construct a separate subnetwork for each frequency to capture signals at different frequencies.
Despite many researchers having discovered the F-Principle phenomenon and the unbalanced rivalry between the terms of PDEs and BCs, there is still no universal solution for both issues, which presents an opportunity for further research. Additionally, there is an opportunity to explore the Neumann boundary in more depth, as the majority of research on ADE has focused on the Dirichlet boundary, while the Neumann boundary is less
studied and more intricate.
This paper introduces a novel approach called sub-Fourier Hard-constraint PINN (SFHCPINN) to solve a class of ADEs using Fourier analysis and the hard constraint technique. The SFHCPINN approach uses hard-constraint PINN to enforce initial and boundary conditions and incorporates the residual of the governed equation for ADE into the cost function to guide the training process. This transforms the solution of ADE into an unconstrained optimization problem that can be efficiently solved using gradient-based optimization techniques. To further reduce the approximation error of the DNN, a subnetworks framework of DNN with sine and cosine as the activation function is introduced by the F-Principle and Fourier theory. While previous studies primarily focused on simulated data using mean squared error (MSE) loss, the primary contributions of this paper are:
1. Our proposed method involves a PINN with a subnetwork architecture and a Fourier-based activation function. This approach aims to address the issue of gradient leakage in DNN parameters by leveraging the F-principle and Fourier theorem.
2. Our approach involves the use of a structured PINN with hard constraints, which allows for the automatic satisfaction of initial and boundary conditions. This approach enables the accurate resolution of ADEs with complex boundary conditions, including the Dirichlet boundary, Neumann boundary, and mixed form.
3. We provide compelling evidence for the efficacy of the proposed method by demonstrating the superiority of hard-constraint PINN over soft-constraint PINN in solving a class of ADEs under both Dirichlet and Neumann boundaries.
The structure of this paper is outlined as follows. Section 2 presents an overview of deep neural networks and the standard PINN framework for PDEs. In Section 3, we introduce a novel approach to solving ADE using PINN with hard constraints, as well as an activation function based on Fourier analysis. Section 4 details the SFHCPINN algorithm for approximating the solution of ADE. In Section 5, we provide numerical examples to demonstrate the effectiveness of the proposed method for ADE. Finally, we present the conclusions of the paper in Section 6.
## 2 Preliminaries
This section has presented a detailed exposition of the relevant mathematical principles and formulae about DNN and PINN.
### Deep Neural Networks
Initially, we present the standard neural cell of Deep Neural Networks (DNN) and the mapping relationship between input \(\mathbf{x}\in\mathbb{R}^{d}\) and an output \(\mathbf{y}\in\mathbb{R}^{m}\), as expressed by (2):
\[\mathbf{y}=\sigma(\mathbf{W}\mathbf{x}+\mathbf{b}). \tag{2}\]
Here, the activation function \(\sigma(\cdot)\) is an element-wise non-linear function, and \(\mathbf{W}=(w_{ij})\in\mathbb{R}^{m\times d}\) and \(\mathbf{b}\in\mathbb{R}^{m}\) are the weight matrix and bias vector, respectively. The standard unit (2) is usually known as a hidden layer, and its output is fed into another activation function after transformation with a new weight and bias. Hence, a DNN is constructed by stacking linear transformations and nonlinear activation functions. The mathematical expression for a DNN with input data \(\mathbf{x}\in\mathbb{R}^{d}\) can be formulated as:
\[\begin{cases}\mathbf{y}^{[0]}=\mathbf{x}\\ \mathbf{y}^{[\ell]}=\sigma\circ(\mathbf{W}^{[\ell]}\mathbf{y}^{[\ell-1]}+\mathbf{b}^{[\ell]}), \ \ \text{for}\ \ \ell=1,2,3,\cdots\cdots,L\end{cases}. \tag{3}\]
Here, \(\mathbf{W}^{[\ell]}\in\mathbb{R}^{n_{\ell}\times n_{\ell-1}}\) and \(\mathbf{b}^{[\ell]}\in\mathbb{R}^{n_{\ell}}\) denote the weights and biases of the \(\ell\)-th layer, respectively, with \(n_{0}=d\) and \(n_{L}\) the dimension of the output. The notation "\(\circ\)" indicates an element-wise operation. The parameter set \(\mathbf{W}^{[1]},\cdots\mathbf{W}^{[L]},\mathbf{b}^{[1]},\cdots\mathbf{b}^{[L]}\) is represented by \(\mathbf{\theta}\), and the DNN output is denoted by \(\mathbf{y}(\mathbf{x};\mathbf{\theta})\).
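The stacked map (3) can be sketched as a small PyTorch module. The layer widths and activation below are illustrative choices, not the configuration used in the paper:

```python
import torch
import torch.nn as nn

class DNN(nn.Module):
    """Fully connected network implementing the stacked map (3):
    y[0] = x, y[l] = sigma(W[l] y[l-1] + b[l]), with a linear output layer."""
    def __init__(self, dims, activation=torch.tanh):
        super().__init__()
        # dims = (n_0, ..., n_L): n_0 is the input dimension d
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1))
        self.activation = activation

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = self.activation(layer(x))
        return self.layers[-1](x)

net = DNN((2, 16, 16, 1))   # d = 2, e.g. one space and one time variable
y = net(torch.rand(5, 2))   # batch of 5 inputs -> output of shape (5, 1)
```

The output layer is kept linear, as is conventional for regression-type PDE solvers.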
### Physics-Informed Neural Networks
Let us consider a system of parametrized PDEs given by:
\[\begin{split}&\mathcal{N}_{\boldsymbol{\lambda}}[\hat{u}(\boldsymbol{x},t)]=\hat{f}(\boldsymbol{x},t),\hskip 14.226378pt\boldsymbol{x}\in\Omega,t\in[t_{0},T] \\ &\mathcal{B}\hat{u}\left(\boldsymbol{x},t\right)=\hat{g}( \boldsymbol{x},t),\hskip 28.452756pt\boldsymbol{x}\in\partial\Omega,t\in[t_{0},T] \\ &\mathcal{I}\hat{u}(\boldsymbol{x},t_{0})=\hat{h}(\boldsymbol{x} ),\hskip 28.452756pt\boldsymbol{x}\in\Omega,\end{split} \tag{4}\]
in which \(\mathcal{N}_{\boldsymbol{\lambda}}\) stands for the linear or nonlinear differential operator with parameters \(\boldsymbol{\lambda}\), and \(\mathcal{B}\) and \(\mathcal{I}\) are the boundary and initial operators, respectively. \(\Omega\) and \(\partial\Omega\) denote the domain of interest and its boundary, respectively. In a general PINN, one substitutes a DNN model for the solution of the PDEs (4), then obtains the optimal solution by minimizing the following loss function:
\[L=Loss_{PDE}+\omega_{1}Loss_{IC}+\omega_{2}Loss_{BC} \tag{5}\]
with
\[\begin{split}& Loss_{PDE}=\frac{1}{N_{P}}\sum_{i=1}^{N_{P}}\left|\mathcal{N}_{\boldsymbol{\lambda}}[u_{NN}(\boldsymbol{x}^{i},t^{i})]-\hat{f}(\boldsymbol{x}^{i},t^{i})\right|^{2}\\ & Loss_{BC}=\frac{1}{N_{B}}\sum_{i=1}^{N_{B}}\left|\mathcal{B}u_{NN}\left(\boldsymbol{x}^{i},t^{i}\right)-\hat{g}(\boldsymbol{x}^{i},t^{i})\right|^{2}\\ & Loss_{IC}=\frac{1}{N_{I}}\sum_{i=1}^{N_{I}}\left|\mathcal{I}u_{NN}(\boldsymbol{x}^{i},t_{0})-\hat{h}(\boldsymbol{x}^{i})\right|^{2}\end{split} \tag{6}\]
where \(\omega_{1}\) and \(\omega_{2}\) are weighting coefficients for the initial and boundary condition losses, respectively. \(Loss_{PDE}\), \(Loss_{IC}\), and \(Loss_{BC}\) depict the residual of the governing equations, the loss of the given initial condition, and the loss of the prescribed BC, respectively. In addition, if some data are available inside the domain, an extra loss term penalizing the mismatch between the predictions and the data can be taken into account:
\[Loss_{D}=\frac{1}{N_{D}}\sum_{i=1}^{N_{D}}\left|u_{NN}(\boldsymbol{x}^{i},t^{i })-u_{Data}^{i}\right|^{2}. \tag{7}\]
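As a concrete illustration of the residual term \(Loss_{PDE}\) in (6), the sketch below evaluates, via automatic differentiation, the residual of a toy operator \(u_t-\alpha u_{xx}\); the trial function and sample points are hypothetical stand-ins for a trained network:

```python
import torch

def pde_residual(u_net, x, t, alpha=0.1):
    """Residual of u_t - alpha*u_xx at collocation points, computed with
    automatic differentiation (a toy stand-in for N_lambda in (4))."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = u_net(torch.stack([x, t], dim=1)).squeeze(-1)
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - alpha * u_xx

# the mean-squared residual plays the role of Loss_PDE in (6);
# here a hypothetical trial function u = x^2 * t replaces a real network
net = lambda z: (z[:, :1] ** 2) * z[:, 1:]
x, t = torch.rand(8), torch.rand(8)
loss_pde = pde_residual(net, x, t).pow(2).mean()
```

For \(u=x^2 t\) the residual reduces analytically to \(x^2-2\alpha t\), which provides a quick correctness check for the autograd chain.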
The structure of a PINN for solving the parametrized PDEs (4) is depicted in Figure 1.
## 3 Fourier induced Subnetworks Hard Constraint PINN to Advection-Diffusion Equation
### Unified architecture of Hard Constraint PINN to Advection-Diffusion Equation
In this section, we now consider the following advection-diffusion equation with prescribed boundary and initial conditions, it is
\[\begin{split}&\frac{\partial u(\boldsymbol{x},t)}{\partial t}= \textbf{div}\big{(}\boldsymbol{p}\cdot\nabla u(\boldsymbol{x},t)\big{)}- \boldsymbol{q}\cdot\nabla u(\boldsymbol{x},t)+f(\boldsymbol{x},t),\quad \boldsymbol{x}\in\Omega,t\in[t_{0},T]\\ &\mathcal{B}u(\boldsymbol{x},t)=g(\boldsymbol{x},t),\hskip 28.452756pt \boldsymbol{x}\in\partial\Omega,t\in[t_{0},T]\\ &\mathcal{I}u(\boldsymbol{x},t_{0})=h(\boldsymbol{x}),\hskip 28.452756pt \boldsymbol{x}\in\Omega\end{split} \tag{8}\]
where \(\Omega\) is a bounded subset of \(\mathbb{R}^{d}\) with piecewise Lipschitz boundary satisfying the interior cone condition, and \(\partial\Omega\) represents the boundary of the domain of interest. Generally, the boundary is a complicated geometry composed of essential and natural boundaries, i.e., \(\partial\Omega=\Gamma_{D}\cup\Gamma_{N}\) and \(\Gamma_{D}\cap\Gamma_{N}=\emptyset\). The operator \(\mathcal{B}\) indicates the type of BC, such as a Dirichlet, Neumann, or Robin boundary.
When a standard PINN trial function \(u_{NN}\) is used to approximate the solution of the ADE (8), it is first differentiated with respect to the variables \(\boldsymbol{x}\) and \(t\), and the result is then embedded into the residual of the governing equation, which constitutes the main part of the loss function of the neural network. Under the imposed boundary and initial conditions, the neural network solution with parameter \(\mathbf{\theta}\) can be obtained by minimizing
\[Loss(S_{R},S_{B},S_{I};\mathbf{\theta})=Loss_{R}(S_{R};\mathbf{\theta})+\frac{\gamma}{N_ {B}}\sum_{i=1}^{N_{B}}\left|\mathcal{B}u_{NN}(\mathbf{x}_{i}^{B},t_{i}^{B})-g(\mathbf{x }_{i}^{B},t_{i}^{B})\right|^{2}+\frac{\omega}{N_{I}}\sum_{i=1}^{N_{I}}\left| \mathcal{I}u_{NN}(\mathbf{x}_{i}^{I},t_{0})-h(\mathbf{x}_{i}^{I})\right|^{2} \tag{9}\]
for \((\mathbf{x}_{i}^{B},t_{i}^{B})\in S_{B}\) and \((\mathbf{x}_{i}^{I},t_{0})\in S_{I}\), as well as
\[Loss_{R}(S_{R};\mathbf{\theta})=\frac{1}{N_{R}}\sum_{i=1}^{N_{R}}\left|\frac{ \partial u_{NN}(\mathbf{x}_{i}^{R},t_{i}^{R})}{\partial t}-\textbf{div}\big{(}\bm {p}\cdot\nabla u_{NN}(\mathbf{x}_{i}^{R},t_{i}^{R})\big{)}+\mathbf{q}\cdot\nabla u_{NN }(\mathbf{x}_{i}^{R},t_{i}^{R})-f(\mathbf{x}_{i}^{R},t_{i}^{R})\right|^{2}\]
for \((\mathbf{x}_{i}^{R},t_{i}^{R})\in S_{R}\). Here and hereinafter, \(S_{R}=\{(\mathbf{x}_{i}^{R},t_{i}^{R})\}_{i=1}^{N_{R}}\), \(S_{B}=\{(\mathbf{x}_{i}^{B},t_{i}^{B})\}_{i=1}^{N_{B}}\) and \(S_{I}=\{(\mathbf{x}_{i}^{I},t_{0})\}_{i=1}^{N_{I}}\) stand for the sets of sample points distributed on \(\Omega\times T\), \(\partial\Omega\times T\) and \(\Omega\times\{t_{0}\}\), respectively. In addition, two penalty parameters \(\gamma\) and \(\omega\) are introduced to control the contributions of the boundary and initial terms to the loss function.
Many scholars have carefully studied the choice of the residual term in the loss function, for example, Mixed PINN [24], XPINN [26], cPINN [25], two-stage PINN [50] and gPINN [51]. Another problem to be addressed is how to enforce the initial and boundary conditions (I/BCs). The imposition of I/BCs is crucial for solving PDEs because it guarantees a unique solution. Given the optimization nature of the PINN, the primitive way of applying I/BCs is to penalize the discrepancy of the initial and boundary constraints of the PDEs in a soft manner.
In PINN-based deep collocation methods, the performance of the optimization depends on the relative weight of each term. However, assigning these weights may be difficult, in which case the approximations of the I/BCs may be poor, resulting in an unsatisfactory solution. Alternatively, we may apply the boundary constraint in a "hard" manner by building in particular solutions that satisfy the I/BCs, so that the constraints on the boundaries are met exactly. The identity operator \(\mathcal{B}\) in (8) will be used in the following to investigate PDEs with Dirichlet BCs. In this instance, our proposed ansatz for the solution is
\[u_{NN}(\mathbf{x},t)=G(\mathbf{x},t)+D(\mathbf{x},t)NN^{L}(\mathbf{x},t;\mathbf{\theta}) \tag{10}\]
where \(NN^{L}\) is a fully-connected deep neural network, \(G(\mathbf{x},t)\) is a smooth extension meeting the I/BCs constraints \(\mathcal{B}u(\mathbf{x},t)\) and \(\mathcal{I}u(\mathbf{x},t_{0})\), and \(D(\mathbf{x},t)\) is a smooth distance function giving the distance from \((\mathbf{x},t)\in\Omega\times T\) to
Figure 1: Schematic diagram of physical information neural network (PINN).
\(\partial\Omega\times\{t_{0}\}\). The objective of this construction is to compel the approximate solution to conform to a set of restrictions, notably the Dirichlet BCs. In other words, when \((\mathbf{x},t)\) lies on \(\partial\Omega\times\{t_{0}\}\), \(D(\mathbf{x},t)\) equals zero, and its value increases as points move away from the I/BCs.
It is worth noting that the ansatz (10) attains the prescribed I/BC values on the boundary. For ADE problems with simple I/BCs and an easy form of \(\partial\Omega\), \(D(\mathbf{x},t)\) can be defined analytically. Nevertheless, if the geometry is too complicated for an analytic formulation, both the extension function \(G(\mathbf{x},t)\) and the smooth distance function \(D(\mathbf{x},t)\) may be parameterized using small-scale NNs according to the given I/BC constraints and a small set of configuration points sampled from the domain of interest and its boundary. Therefore, this adds no extra complexity when optimizing the loss of the hard-constraint PINN (HCPINN):
\[Loss_{HCPINN}(S_{R},S_{B},S_{I};\mathbf{\theta})=Loss_{R}(S_{R};\mathbf{\theta}). \tag{11}\]
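As an illustration of the ansatz (10), the sketch below uses a hypothetical 1D problem with homogeneous Dirichlet BCs on \([0,1]\); the particular \(G\) and \(D\) are illustrative analytic choices, not taken from the experiments:

```python
import torch

def hard_constrained_u(nn_out, x, t, G, D):
    """Ansatz (10): u_NN = G(x,t) + D(x,t) * NN(x,t), with G and D
    supplied by the user (here assumed analytic)."""
    return G(x, t) + D(x, t) * nn_out

# hypothetical problem on x in [0,1] with u(x,0)=sin(pi x), u(0,t)=u(1,t)=0
G = lambda x, t: torch.sin(torch.pi * x)   # matches the I/BCs
D = lambda x, t: x * (1 - x) * t           # vanishes on the boundary and at t=0
x = torch.tensor([0.0, 0.3, 1.0])
t = torch.tensor([0.0, 0.7, 0.2])
u = hard_constrained_u(torch.randn(3), x, t, G, D)
# at x=0, x=1 and t=0 the network term is multiplied by zero,
# so u equals G there regardless of the (random) network output
```

This makes concrete why no boundary or initial loss terms are needed: the constraint holds by construction, for any network parameters.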
Now consider \(\partial\Omega\) as a Neumann boundary and \(\mathcal{B}\) as a differential operator. In contrast to a Dirichlet BC, which is built into the particular solution, a Neumann BC is included in the loss. The ansatz for the Neumann case is the same as (10), but \(G(\mathbf{x},t)\) is now a smooth extension meeting only the IC constraint \(\mathcal{I}u(\mathbf{x},t_{0})\), and \(D(\mathbf{x},t)\) is a smooth distance function giving the distance from \((\mathbf{x},t)\in\Omega\times T\) to \(\Omega\times\{t_{0}\}\). The Neumann BCs are encoded into the loss function:
\[Loss_{HCPINN}(S_{R},S_{B},S_{I};\mathbf{\theta})=Loss_{R}(S_{R};\mathbf{\theta})+\frac {\gamma}{N_{B}}\sum_{i=1}^{N_{B}}\left|\mathcal{B}u_{NN}(\mathbf{x}_{i}^{B},t_{i}^ {B})-g(\mathbf{x}_{i}^{B},t_{i}^{B})\right|^{2}. \tag{12}\]
The optimal parameters of the HCPINN are then obtained by minimizing the unified loss function:
\[\mathbf{\theta}^{*}=\underset{\mathbf{\theta}}{\arg\min}Loss_{HCPINN}(\mathbf{\theta}). \tag{13}\]
To obtain \(\mathbf{\theta}^{*}\), one can update the parameters \(\mathbf{\theta}\) by means of the gradient descent method over all training samples, or over a few training samples at each iteration. In particular, stochastic gradient descent (SGD) is the common optimization technique for deep learning. In its basic form, the SGD method requires only one of the \(n\) function evaluations at each iteration, compared with the full gradient descent method. Additionally, instead of picking one term, one can also choose a "mini-batch" of terms at each step. In this context, the SGD update is given by:
\[\mathbf{\theta}^{k+1}=\mathbf{\theta}^{k}-\alpha_{k}\nabla_{\mathbf{\theta}^{k}}Loss_{ HCPINN}(\mathbf{x};\mathbf{\theta}^{k}),\ \ \mathbf{x}\in S_{R}\ \text{or}\ \mathbf{x}\in S_{R}\ \cup S_{B},\]
where the "learning rate" \(\alpha_{k}\) decreases with \(k\) increasing.
**Remark 1**.: To compute the smooth distance function \(D(\mathbf{x},t)\) in the Dirichlet condition, we first calculate the non-smooth distance function \(d\) and estimate it using a low-capacity NN. At each point \((\mathbf{x},t)\), we define \(d\) as the shortest distance to a boundary point at which a BC must be applied. Indeed,
\[d(\mathbf{x},t)=\min_{(\mathbf{x},t)^{*}\in\partial\Omega\times\{t_{0}\}}\|(\mathbf{x},t )-(\mathbf{x},t)^{*}\|\,. \tag{14}\]
The exact form of \(d\) (and \(D\)) is not important other than that \(D\) is smooth and
\[|D(\mathbf{x},t)|<\epsilon,\quad\forall(\mathbf{x},t)\in\partial\Omega\times\{t_{0}\}. \tag{15}\]
We can use a small subset from \(\partial\Omega\times\{t_{0}\}\) to compute \(d\).
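Following Remark 1, the non-smooth distance (14) can be approximated from a finite sample of boundary/initial points; the sample points below are illustrative:

```python
import torch

def raw_distance(points, boundary_points):
    """Non-smooth distance d in (14): for each space-time point, the minimum
    Euclidean distance to a sampled point of the boundary/initial set."""
    # points: (N, d+1), boundary_points: (M, d+1)
    diff = points[:, None, :] - boundary_points[None, :, :]  # (N, M, d+1)
    return diff.norm(dim=-1).min(dim=-1).values              # (N,)

# illustrative sample: two spatial boundary points and one t=0 point in (x, t)
bnd = torch.tensor([[0.0, 0.5], [1.0, 0.5], [0.5, 0.0]])
pts = torch.tensor([[0.5, 0.5], [0.0, 0.5]])
d = raw_distance(pts, bnd)  # second point lies on the boundary, so d = 0 there
```

A low-capacity NN can then be fit to these \(d\) values to obtain the smooth \(D(\mathbf{x},t)\).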
**Remark 2**.: Instead of computing the actual distance function, we could use the more extreme version
\[d(\mathbf{x},t)=\left\{\begin{array}{ll}0,&(\mathbf{x},t)\in\partial\Omega\times\{t _{0}\}\\ 1,&\text{otherwise}\end{array}\right.. \tag{16}\]
Moreover, for issues where the I/BCs are enforced in simple geometry, \(D(\mathbf{x},t)\) and \(G(\mathbf{x},t)\) may be derived analytically [28, 52]. For instance, we can define \(D(\mathbf{x},t)=t/T\) when there is simply initial boundary enforcement on \(t=0\), and we can choose \(D(\mathbf{x},t)=(x-a)(b-x)\) or \((1-e^{a-\mathbf{x}})(1-e^{\mathbf{x}-b})\) in \(\Omega=[a,b]\) when the BCs are only imposed on \(\partial\Omega\). For complex cases, it is difficult to identify an analytical formula for \(D(\mathbf{x},t)\), but it is possible to approximate it by means of spline functions [32].
**Remark 3**.: The ansatz (10) demands that \(G\) be globally defined and smooth, as well as that
\[|G(\mathbf{x},t)-g(\mathbf{x},t)|<\epsilon,\quad\forall(\mathbf{x},t)\in\partial\Omega\times \{t_{0}\} \tag{17}\]
where \(g(\mathbf{x},t)\) is the function specifying the I/BCs. To compute \(G\), we simply train an NN to fit \(g(\mathbf{x},t)\) for all \((\mathbf{x},t)\in\partial\Omega\times\{t_{0}\}\). The loss function used is given by
\[Loss_{G}=\frac{1}{N_{G}}\sum_{i=1}^{N_{G}}\left|G(\mathbf{x},t)-g(\mathbf{x},t)\right|^ {2} \tag{18}\]
and apply SGD as the optimization technique as well.
**Remark 4**.: For some given BCs, \(G(\mathbf{x},t)\) can be defined directly from the I/BCs. For example, if in (8) \(\mathcal{B}u(\mathbf{x},t)=0\) and \(\mathcal{I}u(\mathbf{x},t_{0})=0\), then we can directly define \(G(\mathbf{x},t)=0\).
### Sub-Fourier PINN and its activation function
The activation function is one of the critical issues for designing the architecture of DNN. As a non-linear transformation that bounds the value for given input data, it directly affects the performance of DNN models in practical applications. Several different types of activation functions have been used in DNN, such as \(\text{ReLU}(\mathbf{z})=\max\{0,\mathbf{z}\}\) and \(\tanh(\mathbf{z})\).
From the viewpoint of function approximation, the first layer with activation functions in the DNN framework can be regarded as a series of basis functions, and the network output is a (nonlinear) combination of those basis functions. Recent works found a phenomenon of spectral bias or frequency preference for DNNs, that is, a DNN will first capture the low-frequency components of the input data [43; 44]. Explanations of this phenomenon have since been given by means of the Neural Tangent Kernel (NTK) [47; 53]. Based on these insights, many efforts have been made to improve the performance of DNNs, concerning both their structures and their activation functions. By introducing scale factors \(\Lambda=(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3}\cdots,\mathbf{k}_{Q-1},\mathbf{k}_{Q})^{T}\) (each \(\mathbf{k}_{i}\) is a vector or matrix), a variety of multi-scale DNN (MscaleDNN) frameworks have been proposed which use the radial scale factors \(\Lambda\) to shift high-frequency components into lower ones, thereby accelerating convergence and improving the accuracy of the DNN [47; 54; 55]. Figure 2 presents the schematic diagram of the MscaleDNN with \(N\) subnetworks.
Indeed, recent works have shown that using a Fourier feature mapping as the activation function of the first hidden layer of each subnetwork can remarkably improve the capacity of MscaleDNN: it mitigates the pathology of spectral bias for DNNs and enables networks to learn the target function well [43; 44; 46; 47; 56]. It is expressed as follows:
\[\mathbf{\zeta}(\mathbf{x})=\left[\begin{array}{c}\cos(\mathbf{\kappa}\mathbf{x})\\ \sin(\mathbf{\kappa}\mathbf{x})\end{array}\right], \tag{19}\]
where \(\mathbf{\kappa}\) is a user-specified vector or matrix (trainable or not) whose size is consistent with the number of neural units in the first hidden layer of the DNN. By applying the Fourier feature mapping to the input data, input points in \(\mathbb{R}^{d}\) are mapped into the range \([-1,1]\). After that, the subsequent layers of the neural network can process the feature information efficiently. For convenience, we denote the PINN model whose solver is an MscaleDNN with Fourier feature mapping as Sub-Fourier PINN (SFPINN).
According to the above description, we denote the Fourier feature information of the \(n_{th}\) subnetwork by \(\mathbf{\zeta}_{n}(\tilde{\mathbf{x}})\) with \(\tilde{\mathbf{x}}=(\mathbf{x},t)\), and obtain its output by passing this information through the remaining blocks of the SFPINN model with general activation functions, such as sigmoid, \(\tanh\), and ReLU. Finally, the overall output of the SFPINN model is a linear combination of all subnetwork outputs, denoted by \(\mathbf{NN}(\tilde{\mathbf{x}})\). In sum, the detailed procedure is as follows:
\[\begin{split}\hat{\mathbf{x}}&=\mathbf{k}_{n}\tilde{\mathbf{x}},\quad n=1,2,\ldots,N,\\ \mathbf{\zeta}_{n}(\tilde{\mathbf{x}})&=\left[\cos\left(\mathbf{W }_{1}^{[n]}\hat{\mathbf{x}}\right),\sin\left(\mathbf{W}_{1}^{[n]}\hat{\mathbf{x}}\right) \right]^{\mathrm{T}},\quad n=1,2,\ldots,N,\\ \mathbf{F}_{n}(\tilde{\mathbf{x}})&=\widetilde{\mathcal{FCN}}_{n }\left(\mathbf{\zeta}_{n}(\tilde{\mathbf{x}})\right),\quad n=1,2,\ldots,N,\\ \mathbf{NN}(\tilde{\mathbf{x}})&=\mathbf{W}_{O}\cdot[\mathbf{F}_{1}, \mathbf{F}_{2},\cdots,\mathbf{F}_{N}]+\mathbf{b}_{O},\end{split} \tag{20}\]
Figure 2: A schematic diagram of Sub-Fourier PINN (SFPINN) with \(N\) subnetworks.
where \(\mathbf{W}_{1}^{[n]}\) represents the weight matrix of the first hidden layer of the \(n_{th}\) subnetwork in the SFPINN model and \(\widetilde{\mathcal{FCN}}_{n}\) stands for the remaining blocks of the \(n_{th}\) subnetwork. \(\mathbf{W}_{O}\) and \(\mathbf{b}_{O}\) represent the weights and bias of the last linear layer, respectively (see Figure 2). Notably, all subnetworks in SFPINN are standalone and their sizes can be adjusted independently.
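The procedure (20) can be sketched in PyTorch as follows; the widths and scale factors are illustrative, not the experimental configuration:

```python
import torch
import torch.nn as nn

class FourierSubnet(nn.Module):
    """One subnetwork of (20): a scale factor k_n, a Fourier feature layer
    zeta_n = [cos(W1 x_hat), sin(W1 x_hat)], and a small FCN tail."""
    def __init__(self, in_dim, width, scale):
        super().__init__()
        self.scale = scale
        self.lin1 = nn.Linear(in_dim, width, bias=False)  # W1^[n]
        self.tail = nn.Sequential(nn.Linear(2 * width, width), nn.Tanh(),
                                  nn.Linear(width, 1))

    def forward(self, x):
        z = self.lin1(self.scale * x)                     # x_hat = k_n * x
        feat = torch.cat([torch.cos(z), torch.sin(z)], dim=-1)  # in [-1, 1]
        return self.tail(feat)

class SFPINN(nn.Module):
    """Linear combination (W_O, b_O) of N standalone subnetworks."""
    def __init__(self, in_dim, width, scales):
        super().__init__()
        self.subnets = nn.ModuleList(
            FourierSubnet(in_dim, width, s) for s in scales)
        self.out = nn.Linear(len(scales), 1)              # W_O, b_O

    def forward(self, x):
        F = torch.cat([s(x) for s in self.subnets], dim=-1)
        return self.out(F)

model = SFPINN(2, 8, scales=(1.0, 2.0, 3.0))
u = model(torch.rand(4, 2))  # shape (4, 1)
```

Each subnetwork is trained end-to-end; only the scalar scale factor differs between them, implementing the multi-scale idea.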
## 4 The process of SFHCPINN algorithm
Our proposed SFHCPINN is the combination of HCPINN and SFPINN that imposes "hard" constraints on the I/BCs and employs a subnetwork structure shown in Figure 2. The solution for ADE (8) is expressed as
\[u_{NN}(\mathbf{x},t)=G(\mathbf{x},t)+D(\mathbf{x},t)\mathbf{NN}(\mathbf{x},t;\mathbf{\theta}) \tag{21}\]
where the parameter definitions are identical to those in Section 3. To start with, the smooth extension function \(G(\mathbf{x},t)\) satisfying the I/BCs and the smooth distance function \(D(\mathbf{x},t)\), measuring the distance from interior points to the I/BC set \(\partial\Omega\times\{t_{0}\}\), are constructed (see Section 3). Thus, before the training procedure of the neural network, the proposed solution already satisfies the I/BCs. For the SFPINN consisting of \(N\) subnetworks (see Section 3.2), the input data for each subnetwork are transformed by the following operation
\[\widehat{\mathbf{x}}=a_{n}*(\mathbf{x},t),\quad n=1,2,\ldots,N,\]
where \(a_{n}>0\) is a scalar factor. Denoting the output of each subnetwork by \(\mathbf{F}_{n}\;(n=1,2,\ldots,N)\), the overall output of the SFPINN model is obtained by
\[\mathbf{NN}(\mathbf{x},t)=\frac{1}{N}\sum_{n=1}^{N}\frac{\mathbf{F}_{n}}{a_{n}}.\]
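The scale-weighted average above can be sketched as follows (the outputs and scales are illustrative values):

```python
import torch

def combine_subnet_outputs(F_list, scales):
    """Overall output NN = (1/N) * sum_n F_n / a_n for subnetwork outputs F_n."""
    return sum(F / a for F, a in zip(F_list, scales)) / len(F_list)

F_list = [torch.tensor([2.0]), torch.tensor([4.0])]
out = combine_subnet_outputs(F_list, (1.0, 2.0))  # (2/1 + 4/2)/2 = 2.0
```

Dividing each \(\mathbf{F}_{n}\) by its scale \(a_{n}\) rebalances the contributions of the differently scaled subnetworks before averaging.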
In the SFHCPINN algorithm, we let \(u_{NN}^{0}(\mathbf{x},t)=G(\mathbf{x},t)+D(\mathbf{x},t)NN^{0}(\mathbf{x},t;\mathbf{\theta})\) be the initial stage. In this stage, the proposed solution \(u_{NN}^{0}(\mathbf{x},t)\) satisfies the I/BCs in (8) automatically, and we can focus on the loss at the interior points. Then, at the \(k_{th}\) iteration step, a set of randomly sampled collocation points \(\mathcal{S}^{k}\) is provided, and the \(k_{th}\) loss can be obtained from (11) or (12). The loss function for the Dirichlet boundary is expressed as follows
\[Loss_{SFHCPINN}^{Dir}(\mathcal{S}^{k};\mathbf{\theta})=Loss_{in}(\mathcal{S}^{k}_{R} ;\mathbf{\theta}) \tag{22}\]
with
\[Loss_{in}=\frac{1}{|\mathcal{S}^{k}_{R}|}\sum_{i=1}^{|\mathcal{S}^{k}_{R}|} \left|\frac{\partial u_{NN}(\mathbf{x}^{i},t^{i})}{\partial t}-\mathbf{div}\big{(} \mathbf{p}\cdot\nabla u_{NN}(\mathbf{x}^{i},t^{i})\big{)}+\mathbf{q}\cdot\nabla u_{NN}( \mathbf{x}^{i},t^{i})-f(\mathbf{x}^{i},t^{i})\right|^{2}.\]
for \((\mathbf{x}^{i},t^{i})\in\mathcal{S}^{k}_{R}\). The loss function for the Neumann boundary is formulated as
\[Loss_{SFHCPINN}^{Neu}(\mathcal{S}^{k};\mathbf{\theta})=Loss_{in}(\mathcal{S}^{k}_{R};\mathbf{\theta})+\frac{\gamma}{|\mathcal{S}^{k}_{B}|}\sum_{i=1}^{|\mathcal{S}^{k}_{B}|}\left|\mathcal{B}u_{NN}(\mathbf{x}^{i}_{B},t^{i}_{B})-g(\mathbf{x}^{i}_{B},t^{i}_{B})\right|^{2}. \tag{23}\]
To sum up, the SFHCPINN method for solving the ADE with Dirichlet and/or Neumann boundaries is briefly described in Algorithm 1.
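A minimal Dirichlet-case training loop for the hard-constrained ansatz might look as follows; the toy 1D ADE, the choices of \(G\), \(D\), \(f\), and the network size are all illustrative:

```python
import torch

# toy 1D advection-diffusion: u_t = p*u_xx - q*u_x + f, hard-constrained
# ansatz u = G + D*NN (illustrative G, D, f; not the paper's examples)
p, q = 0.1, 1.0
G = lambda x, t: torch.sin(torch.pi * x)
D = lambda x, t: x * (1 - x) * t
f = lambda x, t: torch.zeros_like(x)

net = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def residual_loss(n_pts=64):
    # sample a fresh collocation set S_R^k on (0,1) x (0,1) each call
    x = torch.rand(n_pts, requires_grad=True)
    t = torch.rand(n_pts, requires_grad=True)
    u = G(x, t) + D(x, t) * net(torch.stack([x, t], 1)).squeeze(-1)
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x),
                               create_graph=True)[0]
    return (u_t - p * u_xx + q * u_x - f(x, t)).pow(2).mean()

for epoch in range(5):       # a few illustrative optimization steps
    opt.zero_grad()
    loss = residual_loss()
    loss.backward()
    opt.step()
```

Because the I/BCs are hard-constrained, the loss contains only the interior residual, matching (22).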
```
Algorithm 1: SFHCPINN for solving the ADE (8)
1: Construct the extension function G(x,t) satisfying the I/BCs and the
   distance function D(x,t), either analytically or with small-scale NNs
   (Section 3).
2: Initialize the parameters θ of the sub-Fourier network NN(x,t; θ).
3: for k = 1, 2, ..., T_max do
4:    Sample collocation points S_R^k in Ω × [t_0, T] (and boundary points
      S_B^k for Neumann BCs).
5:    Form the ansatz u_NN = G + D · NN and evaluate the loss (22) or (23).
6:    Update θ with an SGD-type optimizer (e.g., Adam).
7: end for
```
### Model and training setup
#### 5.1.1 Model setup
The details of all the models in the numerical experiments are elaborated in the following and summarized in Table 1.
* _SFHCPINN_: A solution approach for solving the ADE with different boundary constraints is presented in this article. The approach employs a composite physics-informed neural network (PINN) model composed of a distance function \(D(x,t)\), a smooth extension function \(G(x,t)\), and a subnetwork deep neural network (DNN). The distance function \(D(x,t)\) and smooth extension function \(G(x,t)\) are obtained through small-scale DNN training or defined analytically based on boundary conditions. The SFHCPINN model is composed of 20 subnetworks according to the manually defined frequencies \(\Lambda=(1,2,3,\ldots,20)\), and each subnetwork has 5 hidden layers with sizes (10, 25, 20, 20, 15). The activation function of the first hidden layer for each subnetwork is set as Fourier feature mapping \(\boldsymbol{\zeta}(\boldsymbol{\hat{x}})\), and the other activation functions (except for the output layer) are set as \(\sin\). The final output of the composite PINN model is a weighted sum of the outputs of all subnetworks. The overall structure of the sub-Fourier PINN model is depicted in Figure 2.
* _SFPINN_: The PINN model we consider here uses a subnetwork DNN as the solver, with the activation function of the first hidden layer in each subnetwork set as a Fourier feature mapping and the other activation functions (except for the output layer) set as \(\sin\). The I/B constraints in this model are applied in a soft manner, which is the classical approach.
* _PINN_: The solver of the vanilla PINN model is a normal DNN, where all activation functions except for the output layer are set to \(\tanh\). The I/B constraints in this model are applied in a soft manner.
#### 5.1.2 Training setup
We use the following mean square error and relative \(L^{2}\) error to evaluate the accuracy of different models:
\[MSE=\frac{1}{N^{\prime}}\sum_{i=1}^{N^{\prime}}\left(\tilde{u}(x^{i},t^{i})-u^{ *}(x^{i},t^{i})\right)^{2}\ \ \text{and}\ \ REL=\frac{\sum_{i=1}^{N^{\prime}}\left(\tilde{u}(x^{i},t^{i})-u^{*}(x^{i}, t^{i})\right)^{2}}{\sum_{i=1}^{N^{\prime}}\left(u^{*}(x^{i},t^{i})\right)^{2}}\]
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Model & Subnetwork & Numbers of subnetwork & Activation & Constraint & Size of the network \\ \hline SFHCPINN & DNN & 20 & \(\sin\) & hard & (10,25,20,20,10) \\ SFPINN & DNN & 20 & \(\sin\) & soft & (10,25,20,20,10) \\ PINN & - & - & \(\tanh\) & soft & (100,150,80,80,50) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparisons for the above models
where \(\tilde{u}(x^{i},t^{i})\) is the approximate DNN solution, \(u^{*}(x^{i},t^{i})\) is the exact/reference solution, \(\{(x^{i},t^{i})\}_{i=1}^{N^{\prime}}\) is the set of testing points, and \(N^{\prime}\) is the number of testing points.
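These two metrics can be computed directly; a small sketch with illustrative inputs:

```python
import torch

def mse_rel(u_pred, u_exact):
    """MSE and the relative L2-type error (REL) as defined above."""
    sq_err = (u_pred - u_exact).pow(2)
    mse = sq_err.mean()
    rel = sq_err.sum() / u_exact.pow(2).sum()
    return mse.item(), rel.item()

mse, rel = mse_rel(torch.tensor([1.0, 2.0]), torch.tensor([1.0, 1.0]))
```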
In our numerical experiments, we uniformly sample all training and testing data within \(\Omega\) (or \(\partial\Omega\)), and use the Adam optimizer [57] to train all networks. An exponential learning rate with an initial learning rate of \(0.001\) and a decay rate of \(0.0005\) every \(100\) training epochs is utilized. For visualization purposes, we evaluate our models every \(1000\) epochs during training and record the final results. The penalty parameter \(\gamma\) for the boundary constraint in (9), (12), and (23) is specified as:
\[\gamma=\left\{\begin{aligned} \gamma_{0},&\quad\text{if }i_{\text{epoch}}<0.1T_{\text{max}}\\ 10\gamma_{0},&\quad\text{if }0.1T_{\text{max}}\leq i_{\text{epoch}}<0.2T_{\text{max}}\\ 50\gamma_{0},&\quad\text{if }0.2T_{\text{max}}\leq i_{\text{epoch}}<0.25T_{\text{max}}\\ 100\gamma_{0},&\quad\text{if }0.25T_{\text{max}}\leq i_{\text{epoch}}<0.5T_{\text{max}}\\ 200\gamma_{0},&\quad\text{if }0.5T_{\text{max}}\leq i_{\text{epoch}}<0.75T_{\text{max}}\\ 500\gamma_{0},&\quad\text{otherwise}\end{aligned}\right. \tag{24}\]
where \(\gamma_{0}=100\) in all our tests and \(T_{\text{max}}\) represents the total epoch number. We implement our code in Pytorch (version 1.12.1) on a workstation (256-GB RAM, single NVIDIA GeForce GTX 2080Ti 12-GB).
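The staged schedule (24) can be written as a small helper (with \(\gamma_{0}=100\) as in the experiments):

```python
def penalty_gamma(i_epoch, t_max, gamma0=100):
    """Staged boundary-penalty schedule (24)."""
    frac = i_epoch / t_max
    if frac < 0.10:
        return gamma0
    if frac < 0.20:
        return 10 * gamma0
    if frac < 0.25:
        return 50 * gamma0
    if frac < 0.50:
        return 100 * gamma0
    if frac < 0.75:
        return 200 * gamma0
    return 500 * gamma0

# e.g. with t_max = 50000 training epochs as in the experiments
gammas = [penalty_gamma(e, 50000) for e in (0, 10000, 40000)]
```

The growing weight lets the optimizer first reduce the PDE residual, then progressively tighten the boundary fit.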
### Performance of SFHCPINN for solving ADEs
This section demonstrates the feasibility of SFHCPINN in solving the ADE with Dirichlet and/or Neumann BCs in one- to three-dimensional Euclidean space. These examples are common in engineering practice, and two of them involve multi-frequency scenarios to illustrate the ability of SFHCPINN to handle high-frequency problems.
#### 5.2.1 One-dimensional ADE
We first consider the one-dimensional ADE problem expressed as follows:
\[\frac{\partial u(x,t)}{\partial t}=\alpha\frac{\partial^{2}u(x,t)}{\partial x ^{2}}-\beta\frac{\partial u(x,t)}{\partial x}+f(x,t),\,\text{for}\,\,x\in[a,b] \text{ and}\,\,t\in[t_{0},T]\,, \tag{25}\]
where \(f(x,t)\) is the source term, \(\alpha\) is the diffusion coefficient, and \(\beta\) is the advection rate. The IC can be formulated as:
\[u(x,t_{0})=A_{0}(x). \tag{26}\]
The Dirichlet BCs are denoted as:
\[u(a,t)=A_{1}(t), \tag{27}\] \[u(b,t)=A_{2}(t),\]
and the Neumann BCs could be formulated as follow:
\[u_{x}(a,t)=N_{1}(t), \tag{28}\] \[u_{x}(b,t)=N_{2}(t). \tag{29}\]
**Example 1**.: To emphasize the capability of the proposed SFHCPINN framework, we first consider a simple case with a high-frequency component. We solve the one-dimensional advection-diffusion equation (25) in the spatio-temporal domain \(\Omega\times T=[0,2]\times[0,5]\). The source term \(f(x,t)\) and the I/BCs are given by the following known solution:
\[u(x,t)=e^{-\alpha t}(\sin(\pi x)+0.1\sin(\beta\pi x)) \tag{30}\]
with \(\alpha=0.1\) and \(\beta=50\).
We employ the proposed SFHCPINN model to solve Example 1 by defining explicitly a distance function \(D(x,t)=\frac{(x-a)(x-b)t}{5(b-a)^{2}}\) and a smooth function \(G(x,t)=\sin(\pi x)+0.1\sin(50\pi x)\) according to the BCs. To compare with the SFHCPINN model, we use two simple neural networks with only one hidden layer of 20 neurons each to fit the distance function \(D(x,t)\) and the extension function \(G(x,t)\) before starting the training process. This model is called SFHCPINN\({}_{NN}\). Both SFHCPINN models are the same except for the distance and extension functions.
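A quick numerical check that the analytic \(D\) and \(G\) of Example 1 have the required properties: \(D\) vanishes on the spatial boundary and at \(t=0\), and \(G\) reproduces the exact solution at \(t=0\) (the sample point \(x=0.7\) is arbitrary):

```python
import math

a, b, alpha, beta = 0.0, 2.0, 0.1, 50.0
u_exact = lambda x, t: math.exp(-alpha * t) * (
    math.sin(math.pi * x) + 0.1 * math.sin(beta * math.pi * x))
D = lambda x, t: (x - a) * (x - b) * t / (5 * (b - a) ** 2)
G = lambda x, t: math.sin(math.pi * x) + 0.1 * math.sin(50 * math.pi * x)

# D = 0 at x = a, x = b and t = 0, so the network term drops out there;
# G coincides with u(x, 0) since e^0 = 1, so the ansatz meets the I/BCs
print(D(a, 3.0) == 0.0, D(b, 3.0) == 0.0, D(1.0, 0.0) == 0.0)  # all True
print(G(0.7, 0.0) == u_exact(0.7, 0.0))                        # True
```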
The models have comparable network sizes and parameter numbers, which are listed in Table 1. All models are trained for 50,000 epochs, and in each epoch, PINN and SFPINN are trained with \(N_{R}=8,000\) collocation points, \(N_{B}=4,000\) boundary points, and \(N_{I}=3,000\) initial points, while SFHCPINN is trained with \(N_{R}=8,000\) collocation points. To assess the accuracy of the neural network approximations, we uniformly sample 10,000 test points from \(\Omega\times T\) every 1000 epochs. We present the results of the four models in Figure 3 and Table 2.
First, the heatmaps and error curves in Figure 3 show that SFHCPINN attains a higher level of accuracy than PINN and SFPINN, with the testing MSE and REL decreasing at a faster rate. This suggests that SFHCPINN is effective in addressing the issue of gradient oscillation in the DNN parameters, thanks to its use of the Fourier expansion and the subnetwork framework.
The second observation is that SFHCPINN exhibits a smaller initial error and a faster convergence rate than both the standard PINN and the SFPINN, as shown in Figure 3. This suggests that the hard constraint included in SFHCPINN enforces better adherence to the boundary conditions, leading to a significant improvement in performance.
Lastly, we observe from the experimental results that SFHCPINN, using analytically defined distance and extension functions, outperforms SFHCPINN\({}_{NN}\) in terms of both accuracy and training speed. This is because the analytic distance and extension functions provide a more precise expression of the I/BCs, and simple NNs cannot capture functions that vary rapidly on the boundaries. Therefore, we use analytically defined distance and extension functions in all following experiments. In summary, SFHCPINN proves to be superior to PINN and SFPINN in one-dimensional problems with Dirichlet BCs.
**Example 2**.: The ADE (25) with Neumann BCs is considered in \(\Omega\times T=[0,2]\times[0,5]\), and the source term \(f(x,t)\) and the I/BCs are specified by the known solution \(u(x,t)=e^{-\alpha t}\sin(x)\), where \(\alpha=0.25\) and \(\beta=1\). The details of the SFHCPINN with Neumann BCs are outlined in Algorithm 1. At first, \(D(x,t)=1-e^{-t}\) and \(G(x,t)=\sin(x)\) are defined according to the prescribed conditions, specifying the distance from the interior to the initial boundary and the values on it, respectively. All settings for SFHCPINN, SFPINN, and PINN are the same as in Example 1. In each epoch, we randomly collect 8000 points from the interior of the defined domain and 3000 points from the Neumann boundary. The sampling procedures of PINN and SFPINN are identical to those in Example 1. We train all models for 50,000 epochs and use Adam with default parameters as the optimizer. The testing data are uniformly generated from \(\Omega\times T\).
\begin{table}
\begin{tabular}{l l c c} \hline \hline & constraint & MSE & REL \\ \hline PINN & soft & 0.135 & 0.428 \\ SFPINN & soft & 0.006 & 0.019 \\ SFHCPINN\({}_{NN}\) & hard & 82.63 & 1.000 \\ SFHCPINN & hard & 0.0008 & 0.002 \\ \hline \hline \end{tabular}
\end{table}
Table 2: MSE and REL of SFHCPINN, SFPINN, and PINN for Example 1
Figure 3: Testing results for Example 1.
The following conclusions can be drawn. First, the diminishing color depth of the heatmaps in Figures 4(b) - 4(d) indicates that the accuracy of the three models improves steadily. In addition, comparing the point-wise absolute error, MSE, and REL tracks of PINN and SFPINN in Figures 4(e) and 4(f) shows that, by adopting a subnetwork architecture and a Fourier activation function, the performance of the DNN is enhanced with a faster training rate and higher precision under Neumann BCs. Moreover, the accuracy of SFHCPINN is significantly higher than that of PINN almost all the time, especially at the initial stage. This is because the ansatz of the hard-constraint PINN satisfies the BCs throughout training, preventing the approximations from violating the physical restrictions at the borders. Table 3 further reveals that SFHCPINN, with hard constraints and a subnetwork topology, outperforms PINN and SFPINN by multiple orders of magnitude when Neumann BCs are present in the one-dimensional setting.
\begin{table}
\begin{tabular}{l l l l} \hline \hline & constraint & MSE & REL \\ \hline PINN & soft & 0.010 & 0.100 \\ SFPINN & soft & 2.41e-9 & 2.39e-8 \\ SFHCPINN & hard & 4.61e-10 & 4.57e-9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: MSE and REL of SFHCPINN, SFPINN, and PINN for Example 2
Figure 4: Testing results for Example 2.
#### 5.2.2 Two-dimensional ADE
**Example 3**.: Consider the two-dimensional ADE with Dirichlet boundaries as follows:
\[\frac{\partial u}{\partial t}+4\frac{\partial u}{\partial x}+4\frac{\partial u}{ \partial y}-\left(\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{ \partial y^{2}}\right)=f(x,y,t)\quad 0\leqslant x,y\leqslant 1,0\leqslant t\leqslant 5. \tag{31}\]
The exact solution is \(u(x,y,t)=e^{-0.25t}\sin(5\pi x)\sin(5\pi y)\), and it specifies the initial and Dirichlet boundary conditions. We employ PINN, SFPINN, and SFHCPINN to solve (31). First, we define the distance function \(D(x,y,t)=x(1-x)y(1-y)t\) and the extension function \(G(x,y,t)=\sin(5\pi x)\sin(5\pi y)\) according to the BCs. The three models for solving Equation (31) mentioned in Section 5.1.1 are identical to the ones described in Examples 1 and 2, sharing the same optimizer, learning rate, and decay rate. In the training stage, we randomly sample 8,000 interior points as training points for SFHCPINN, and we sample an additional 3,000 initial points and 3,000 boundary points for PINN and SFPINN in every epoch. In addition, we uniformly sample 1,000 points in \([0,1]\times[0,1]\) at \(t=1.0\) as the testing set.
\begin{table}
\begin{tabular}{l l c c} \hline \hline & constraint & MSE & REL \\ \hline PINN & soft & 0.1959 & 1.0268 \\ SFPINN & soft & 4.51e-6 & 2.36e-5 \\ SFHCPINN & hard & 5.97e-8 & 4.02e-7 \\ \hline \hline \end{tabular}
\end{table}
Table 4: MSE and REL of three models for Example 3 at \(t=1.0\)
Figure 5: Testing results for Example 3.
The decreasing colors of the point-wise absolute error heatmaps of the three models in Figures 5(b)-5(d) and the data in Table 4 suggest that the accuracy of PINN, SFPINN, and SFHCPINN rises in that order. After training with the sub-Fourier structure, the MSE of the model drops from \(10^{-1}\) to \(10^{-6}\) compared with the standard PINN. This is consistent with our analysis that a sub-Fourier structure can more effectively address the spectral bias induced by the frequency terms in the problem. In addition, by incorporating a hard-constraint architecture, the model's accuracy improves from \(10^{-6}\) to \(10^{-8}\) with a faster convergence rate than SFPINN. This is because the hard-constraint model satisfies the BCs prior to the training process. In conclusion, SFHCPINN still outperforms the baseline models on the two-dimensional ADE problem with Dirichlet BCs.
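The sub-Fourier ingredient — embedding the input through sine/cosine features at several frequency scales before the hidden layers — can be sketched as follows; the scale values and the simple concatenation are illustrative assumptions, not the exact architecture used in the experiments (where each scale feeds its own subnetwork).

```python
import numpy as np

def fourier_features(X, scales=(1.0, 5.0)):
    """Embed raw coordinates as sin/cos features at several frequency scales.

    In the sub-Fourier design each scale would feed a separate subnetwork;
    here the embeddings are simply concatenated for illustration.
    """
    feats = []
    for s in scales:
        feats.append(np.sin(np.pi * s * X))
        feats.append(np.cos(np.pi * s * X))
    return np.concatenate(feats, axis=-1)
```

Exposing the high-frequency scale (here 5, matching the \(\sin(5\pi x)\) content of the solution) directly in the features is what lets the network fit such modes without fighting spectral bias.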
**Example 4**.: Consider the particular case below [58]:
\[\frac{\partial u}{\partial t}+4\frac{\partial u}{\partial x}+4\frac{\partial u }{\partial y}-\frac{\partial^{2}u}{\partial x^{2}}-\frac{\partial^{2}u}{ \partial y^{2}}=f(x,y,t)\quad 0\leqslant x,y\leqslant 1,0\leqslant t\leqslant 5 \tag{32}\]
and the initial and Neumann BCs are drawn from the known solution as follows:
\[u(x,y,t)=e^{-0.25t}\sin(\pi x)\sin(\pi y). \tag{33}\]
The exact solution contains two frequency elements in which we are interested. According to the BCs, we first define \(D(x,y,t)=t\) and \(G(x,y,t)=\sin(\pi x)\sin(\pi y)\). All model setups are identical to those in Example 3. We train each model for 30,000 epochs; in each epoch, we randomly sample 8,000 points from the interior of the defined domain \(\Omega\) and 3,000 points from the Neumann boundary. In addition, we uniformly sample 1,000 points in \([0,1]\times[0,1]\) at \(t=1.0\) as the testing set to validate the three models.
The results in Figure 6 show that, by employing the subnetwork structure and the Fourier expansion of the multi-scale input, SFHCPINN attains lower MSE and relative errors as well as a faster convergence rate than the standard PINN. In addition, through the decomposition of the solution, the ansatz automatically satisfies the IC, enabling SFHCPINN to be tuned to greater precision on two-dimensional problems with Neumann boundaries. This example exhibits the feasibility and high accuracy of SFHCPINN in solving the two-dimensional ADE under Neumann BCs, whereas the performance of PINN and SFPINN is only ordinary.
#### 5.2.3 Three-dimensional ADE
**Example 5.** Consider the following non-homogeneous ADE:
\[\frac{\partial u}{\partial t}-\Delta u+p\frac{\partial u}{\partial x}+q\frac{ \partial u}{\partial y}+r\frac{\partial u}{\partial z}=f,\ \ (x,y,z,t)\in[0,1)^{3}\times[0,1] \tag{34}\]
where \(p=q=r=1\) with the Dirichlet boundary condition:
\[u|_{\Gamma}=g(x,y,z,t),\ t\in[0,1] \tag{35}\]
and initial condition:
\[u(x,y,z,0)=u_{0}(x,y,z),(x,y,z)\in[0,1]^{3} \tag{36}\]
where \(\Delta\) is the Laplace operator, \(\Gamma\) is the border of the defined domain \(\Omega\), \(T\) is a positive constant, and \(u\) is the function to be solved. \(f\), \(g\), and \(u_{0}\) are specified by the known solution \(u=e^{t}\sin(\pi x)\sin(\pi y)\sin(\pi z)\), which consists of three frequency components. According to the BCs, we define the distance function
Figure 6: Testing results for Example 4.
\begin{table}
\begin{tabular}{l l l l} \hline \hline & constraint & MSE & REL \\ \hline PINN & soft & 0.0545 & 0.371 \\ SFPINN & soft & 0.0038 & 0.026 \\ SFHCPINN & hard & 1.84e-5 & 1.24e-4 \\ \hline \hline \end{tabular}
\end{table}
Table 5: MSE and REL of SFHCPINN, SFPINN, and PINN for Example 4 at \(t=1.0\)
\(D(x,y,z,t)=xyz(1-x)(1-y)(1-z)t\) and the smooth extension function \(G(x,y,z,t)=\sin(\pi x)\sin(\pi y)\sin(\pi z)\). The network setup, optimizer, and hyperparameters of the three models are identical to those in Section 5.1.2. In each epoch, PINN and SFPINN are trained with \(N_{R}=15{,}000\) collocation points, \(N_{B}=4{,}000\) boundary points, and \(N_{I}=3{,}000\) initial points, while SFHCPINN is trained with only \(N_{R}=15{,}000\) collocation points. We train the above models for 50,000 epochs and test them on 10,000 uniform points sampled from \([0,1]\times[0,1]\) at \(z=0.27\) and \(t=0.5\).
Based on the data in Table 6 and the heatmaps of the three models, it is evident that the standard PINN is less accurate than the two models with hard constraints and/or the sub-Fourier architecture when dealing with 3D problems. In addition, we can deduce from Figures 7(e)-7(f) that SFHCPINN, with its hard-constraint design, has lower initial errors, a faster convergence rate, and higher precision. This is consistent with our analysis that hard-constraint models enhance the performance of PINN because they satisfy the BCs by construction and transform the problem into a simpler optimization problem without additional physics constraints. In conclusion, SFHCPINN retains its high precision, fast convergence, and robust stability when solving three-dimensional ADE problems.
\begin{table}
\begin{tabular}{l l l l} \hline \hline & constraint & MSE & REL \\ \hline PINN & soft & 0.038 & 0.764 \\ SFPINN & soft & 1.03e-4 & 2.04e-3 \\ SFHCPINN & hard & 7.9e-14 & 1.38e-12 \\ \hline \hline \end{tabular}
\end{table}
Table 6: MSE and REL of SFHCPINN, SFPINN, and PINN for Example 5 at \(t=0.5\)
Figure 7: Testing results for Example 5.
## 6 Conclusion
This study introduces SFHCPINN, a novel neural network approach that combines hard-constraint PINN with sub-networks featuring Fourier feature embedding. The purpose is to solve a specific class of advection-diffusion equations with Dirichlet and/or Neumann boundary conditions. The methodology transforms the original problem into an unconstrained optimization problem by utilizing a well-trained PINN, a distance function denoted as \(D(\mathbf{x},t)\), and a smooth function denoted as \(G(\mathbf{x},t)\).
To handle high-frequency modes, a Fourier activation function is employed for inputs with different frequencies, and a sub-network is designed to match the target function. The computational results demonstrate that this novel method is highly effective and efficient for solving advection-diffusion equations with Dirichlet or Neumann boundaries in one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D) domains.
Importantly, SFHCPINN maintains high precision even as the dimension and/or frequency of the problem increases, unlike the soft-constraint PINN approach, which becomes degenerate in such scenarios. However, the selection of the distance function \(D(\mathbf{x},t)\) and extension function \(G(\mathbf{x},t)\) significantly impacts SFHCPINN's performance and may not be accurately determined in real-world engineering problems. Consequently, appropriate modifications such as employing a robust deep neural network (DNN) to fit the boundary conditions might be necessary, presenting an opportunity for future research.
## Declaration of interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Credit authorship contribution Statement
Jiaxin Deng: Investigation, Formal analysis, Validation, Writing - Original Draft. Xi'an Li: Conceptualization, Methodology, Investigation, Formal analysis, Validation, Writing - Review & Editing. Jinran Wu: Writing - Original Draft, Writing - Review & Editing. Shaotong Zhang: Writing - Review & Editing. Weide Li: Writing - Review & Editing, Project administration. You-Gan Wang: Writing - Review & Editing, Project administration.
## Acknowledgements
This study was supported by the National Natural Sciences Foundation of China (No. 42130113).
---

# Ground State Properties of Quantum Skyrmions described by Neural Network Quantum States

Ashish Joshi, Robert Peters, Thore Posske

arXiv:2304.09504v2 | submitted 2023-04-19 | http://arxiv.org/abs/2304.09504v2
###### Abstract
We investigate the ground state properties of quantum skyrmions in a ferromagnet using variational Monte Carlo with the neural network quantum state as variational ansatz. We study the ground states of a two-dimensional quantum Heisenberg model in the presence of the Dzyaloshinskii-Moriya interaction (DMI). We show that the ground state accommodates a quantum skyrmion for a large range of parameters, especially at large DMI. The spins in these quantum skyrmions are weakly entangled, and the entanglement increases with decreasing DMI. We also find that the central spin is completely disentangled from the rest of the lattice, establishing a non-destructive way of detecting this type of skyrmion by local magnetization measurements. While neural networks are well suited to detect weakly entangled skyrmions with large DMI, they struggle to describe skyrmions in the small DMI regime due to nearly degenerate ground states and strong entanglement. In this paper, we propose a method to identify this regime and a technique to alleviate the problem. Finally, we analyze the workings of the neural network and explore its limits by pruning. Our work shows that neural network quantum states can be efficiently used to describe the quantum magnetism of large systems exceeding the size manageable in exact diagonalization by far.
## I Introduction
Classical magnetic skyrmions are topologically protected magnetic structures with vortex-like configurations. Skyrmions have been discovered in a variety of materials, including MnSi, FeCoSi, FeGe, and others [1; 2; 3; 4; 5], with sizes ranging from micrometers to nanometers. These quasiparticles can be created by competition between the exchange interaction and the anti-symmetric Dzyaloshinskii-Moriya interaction (DMI) [5], or by frustration [6], or dynamically by an electric current [7; 8] or by boundary effects [9]. Magnetic skyrmions have potential uses in magnetic storage devices due to their topological protection and ease of motion under electric currents [10; 11; 12; 13; 14]. The observation of skyrmions with sizes a few times the atomic lattice spacing raises the question about the importance of quantum effects in these systems, meriting a purely quantum mechanical analysis to study them.
In the past few years, there have been some works addressing the quantum nature of magnetic skyrmions, with the focus on quantum spin systems in the presence of DMI or frustration [15; 16; 17; 18; 19; 6; 9], or very recently in systems with itinerant magnetism [20] and with f-electron systems [21]. Most studies have used exact diagonalization techniques to tackle the problem of quantum skyrmions in spin lattices, which inherently puts a limit on the system size they can consider. Recently, quantum skyrmions on a larger spin lattice were considered by Haller et al. [17], using density matrix renormalization group (DMRG) methods. This work discovered a skyrmion lattice phase that would not be tractable using exact diagonalization. However, in two dimensions, the entanglement of the system scales with the system size due to the area law, making DMRG presumably a less-than-ideal choice. On the other hand, due to the presence of DMI, quantum Monte Carlo methods suffer from the sign problem, which slows down the optimization process.
In recent years, artificial neural network-based variational methods have been introduced to approximate the quantum many-body problem, achieving results comparable to state-of-the-art methods [22; 23; 24; 25; 26; 27; 28; 29; 30]. In these variational methods, an artificial neural network is used to represent the variational wave function, known as a neural network quantum state (NQS), which then learns the target state using a gradient-based optimization scheme. NQS-based variational methods offer a novel approach to studying a wide range of quantum many-body systems, especially in two and three dimensions where existing methods involve a high level of complexity. NQSs with various structures have been successfully applied to frustrated spin systems in two dimensions [22; 26; 27; 28; 31] and have recently been shown to have the ability to capture long-range quantum entanglement with an expressive capacity greater than conventional methods [32].
In this paper, using NQSs we show that quantum skyrmions (QSs) are the ground states for a wide range of parameters in the two-dimensional spin-1/2 Heisenberg Hamiltonian with DMI in a ferromagnetic medium. To study quantum entanglement in this system, we calculate Renyi entropy of second order and demonstrate that the entanglement in the QS ground state decreases with increasing DMI. Previous work indicated that the central spin of a quantum skyrmion is weakly entangled to the rest of the system [17]. Interestingly, we find that the central spin in the QS ground state is completely disentangled from the rest of the spins within the error
bars of our method. This opens up a way of detecting quantum skyrmions experimentally without destroying their quantum nature. While we find stable QSs at large DMI, the variational method is insufficient to learn the ground state wave function at small DMI. An analysis of small systems reveals that the variational method finds a superposition of the ground state and the first excited state due to a tiny excitation gap. Motivated by this, we present a projection-based method to improve the variational ground state in this region. Finally, we analyze the internal structure of our NQS ansatz by inspecting the trained network weights and pruning. While the lowly entangled NQS does not change significantly upon pruning, the performance degrades rapidly with pruning in the highly entangled NQS. Our work shows that an NQS variational ansatz can be used to efficiently approximate spin systems with medium to high DMI at system sizes out of reach for exact methods.
The paper is organized as follows: In Sec. II, we describe the Heisenberg Hamiltonian on a square lattice in the presence of DMI. In Sec. III, we briefly discuss the variational method, along with the neural network structure, with more details in Appendix A. In Sec. IV, we study the different ground states of this system. In Sec. V, we study the entanglement in the ground state by calculating the Renyi entropies. In Sec. VI, we obtain and present insights into the workings of the trained NQS. Finally, we conclude the paper in Sec. VII.
## II Model
We study the ground state of the two-dimensional spin-1/2 Heisenberg Hamiltonian on a square lattice in the presence of the Dzyaloshinskii-Moriya interaction (DMI) and a strong external magnetic field at the boundaries that simulates a ferromagnetic background,
\[\begin{split} H=&-J\sum_{\langle ij\rangle}(\sigma _{i}^{x}\sigma_{j}^{x}+\sigma_{i}^{y}\sigma_{j}^{y})-A\sum_{\langle ij\rangle} \sigma_{i}^{z}\sigma_{j}^{z}\\ &-D\sum_{\langle ij\rangle}(u_{ij}\times\hat{z})\cdot(\sigma_{i} \times\sigma_{j})+B^{z}\sum_{b}\sigma_{b}^{z}.\end{split} \tag{1}\]
Here, \(J\) is the Heisenberg exchange interaction, \(A\) is the Heisenberg anisotropy term, \(D\) is the DMI, and \(B^{z}\) is the external magnetic field along the \(\hat{z}\)-axis acting only on the boundary spins indexed with \(b\). The Pauli operator at the \(i\)-th lattice site is \(\sigma_{i}=(\sigma_{i}^{x},\sigma_{i}^{y},\sigma_{i}^{z})\) and \(u_{ij}\) is the unit vector pointing from site \(i\) to site \(j\). We consider \(\hbar=1\). The sum in the first three terms is over the nearest neighbor lattice sites, while the last term only covers the boundary sites.
The Hamiltonian in Eq. (1) can be considered the quantum analog of a classical spin model in which the competition between the noncolinear DMI and the ferromagnetic Heisenberg exchange interaction gives rise to the formation of magnetic skyrmions. In classical systems, these skyrmions are often stabilized by an external magnetic field over the whole lattice. However, we only apply the magnetic field (\(B^{z}=10J\)) to fix the spins at the boundaries. This is the main difference between the Hamiltonian in our work and that in Ref. [17], where the authors study a similar system but with a bulk external magnetic field. Thus, our model describes a single quantum skyrmion embedded in a ferromagnetic medium. We leave the study of a quantum skyrmion lattice with NQS and the comparison with DMRG results [17] for future works.
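For concreteness, the two-site (single-bond) matrix of the terms in Eq. (1) can be written down directly in a dense Pauli basis. This is an illustrative sketch only — the actual computations use NetKet operators — with the bond direction `u` taken as an in-plane unit vector and the boundary-field term omitted.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bond_hamiltonian(J, A, D, u):
    """Two-site term of Eq. (1) for one bond along the in-plane unit vector u."""
    k = np.kron
    H = -J * (k(sx, sx) + k(sy, sy)) - A * k(sz, sz)
    d = np.cross(u, [0.0, 0.0, 1.0])          # DM vector (u_ij x z-hat)
    cross = [k(sy, sz) - k(sz, sy),           # (sigma_i x sigma_j)_x
             k(sz, sx) - k(sx, sz),           # (sigma_i x sigma_j)_y
             k(sx, sy) - k(sy, sx)]           # (sigma_i x sigma_j)_z
    for dk, ck in zip(d, cross):
        H -= D * dk * ck                      # -D (u x z) . (sigma_i x sigma_j)
    return H
```

The full lattice Hamiltonian is the sum of such terms over all nearest-neighbor bonds, plus the boundary Zeeman term.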
## III Method
The idea behind neural network quantum states is to use the output of an artificial neural network to represent the complex-valued coefficients \(\psi_{\theta}(\sigma)\) in the variational wave function,
\[\ket{\psi_{\theta}}=\sum_{\sigma}\psi_{\theta}(\sigma)\ket{\sigma}. \tag{2}\]
Here, \(\ket{\sigma}\) are local basis states, which in our case are the eigenstates of the \(\sigma^{z}\) operators, and \(\theta\) are the variational parameters of the neural network. In this paper, we use two fully connected feed-forward neural networks, one representing the phase and one the modulus part of the wave function (see Fig. 1), and take the logarithm of the wave function as the total output [26]
\[\ln{(\psi_{\theta}(\sigma))}=\rho(\sigma)+i\phi(\sigma). \tag{3}\]
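A minimal NumPy sketch of this two-network ansatz may clarify Eq. (3); the layer sizes follow the architecture of Fig. 1, while the random weights are mere stand-ins for the trained parameters \(\theta\).

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    # random weights as a stand-in for trained parameters theta
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # reLU activation, as in the text
    return x[0]

N, alpha = 5, 2                     # lattice side length and width factor
sizes = [N * N, alpha * N * N, alpha * N * N, 1]
rho_net, phi_net = init_mlp(sizes), init_mlp(sizes)  # modulus and phase networks

def log_psi(sigma):
    """ln psi_theta(sigma) = rho(sigma) + i * phi(sigma), cf. Eq. (3)."""
    s = np.asarray(sigma, dtype=float).reshape(-1)
    return mlp(rho_net, s) + 1j * mlp(phi_net, s)
```

Working with \(\ln\psi_\theta\) rather than \(\psi_\theta\) keeps amplitude ratios numerically stable during Monte Carlo sampling.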
The network takes the configuration of the spins on the two-dimensional lattice as the input. Both the phase and the modulus part of the network consist of two fully
Figure 1: Neural network structure used as NQS. The inputs are the spin configurations in the \(\sigma^{z}\) basis, and the output is the logarithm of the wave function. There are two fully connected networks, with two hidden layers in each, to learn the phase and the amplitude part of the wave function separately. Each hidden layer consists of \(\alpha N^{2}\) neurons.
connected hidden layers with \(\alpha N^{2}\) neurons in each layer, where \(N\) is the length of one side of the lattice, and we use \(\alpha=2\) in this paper. We use the rectified linear unit (reLU) as the nonlinear activation function. The optimization of the variational wave function is achieved by minimizing the loss function \(L_{\theta}\), i.e., the variational energy, with respect to the variational parameters
\[L_{\theta}=\langle\psi_{\theta}|H|\psi_{\theta}\rangle\,. \tag{4}\]
The phase part of the network is trained first while keeping the modulus part constant before optimizing the whole network. This method of optimization results in better learning of the sign structure of the ground state wave function, as demonstrated in Ref. [26] and also found by our testing. We use Adam as the optimizer [33]. The input samples are generated using the Markov chain Monte Carlo. We use NetKet to implement the NQS and Monte Carlo algorithms [34, 35, 36]. Details of the optimization procedure and hyperparameters are given in Appendix A.
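In the Monte Carlo scheme, the variational energy of Eq. (4) is estimated by averaging local energies \(E_{\mathrm{loc}}(\sigma)=\sum_{\sigma'}H_{\sigma\sigma'}\,\psi_{\theta}(\sigma')/\psi_{\theta}(\sigma)\) over sampled configurations. The dense-basis illustration below is an assumption for the sketch (the helper name and dense \(H\) are not part of the NetKet implementation):

```python
import numpy as np

def local_energy(H, log_psi, s):
    """E_loc(s) = sum_{s'} H[s, s'] * psi(s') / psi(s), for a dense H over basis indices."""
    row = H[s]
    connected = np.nonzero(row)[0]  # only configurations coupled to s contribute
    return sum(row[sp] * np.exp(log_psi(sp) - log_psi(s)) for sp in connected)
```

For sparse spin Hamiltonians, each configuration couples to only a few others, which is what makes the stochastic estimate tractable.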
## IV Ground state
First, we discuss the ability of the NQS ansatz to represent the ground state of the Hamiltonian in Eq. (1). To check that our method works correctly, we compare the NQS ground state energy, \(E_{NQS}\), for \(3\times 3\) and \(5\times 5\) spin lattices with the exact ground state energy, \(E_{\text{exact}}\), obtained using exact diagonalization. We find that the NQS correctly describes all parameter regimes except the small DMI regime. While the NQS ground state energies agree with the exact energies within the error margin in this regime, the NQS spin expectation values do not match those of the exact ground state. The reason for this problem lies in a near degeneracy of the ground state with the first excited state, which results in a significant overlap of the NQS ground state with the excited state found by exact diagonalization. Therefore, we first present our results for the parameter regime where the NQS is accurate and discuss the small DMI regime afterward.
The energy convergence plot for the \(5\times 5\) lattice at \(D=0.8J\) and \(A=0.3J\) is shown in Figure 2(a). Here, the NQS correctly describes the quantum skyrmion ground state. The inset shows the relative error \(\Delta E\) in the ground state energy over the number of iterations,
\[\Delta E=\frac{|E_{\text{NQS}}-E_{\text{exact}}|}{|E_{\text{exact}}|}. \tag{5}\]
The energy convergence for a \(9\times 9\) lattice over the number of iterations for the Hamiltonian parameters \(D=0.5J\) and \(A=0.2J\) is shown in Fig. 2(b).
The ground state diagram for this lattice is shown in Fig. 3(a), depending on the DMI, \(D\), and the anisotropy, \(A\). The quantum skyrmion (QS) is the ground state for a wide range of parameters (triangles in Fig. 3(a)), especially at stronger DMI, which favors a noncolinear alignment of the neighboring spins. The spin expectation value at the \(i\)-th site, \(\langle\mathbf{S}_{i}\rangle=\langle\sigma_{i}/2\rangle\), for the ground state at \(D=0.5J\) and \(A=0.2J\) is shown in Fig. 3(c). A fundamental difference from the case of classical magnetic skyrmions is that the expectation value of the length of the spins, \(|\left\langle\mathbf{S}_{i}\right\rangle|\), is reduced in the QS state. The spins are not merely rotated from the boundary to the center, as is the case with classical spins, but are a superposition of the local eigenstates of spin operators in different directions.
For large \(A\) and small DMI, the ground state is a ferromagnet (FM) as the spins align in the direction parallel
Figure 2: Convergence of the NQS training procedure: The figure shows the variational energy convergence to the ground state of a \(5\times 5\) lattice (a) and a \(9\times 9\) lattice (b) over the number of iterations. The inset in (a) shows the relative error \(\Delta E\) in the ground state energy (see Eq. (5)) with respect to the exact ground state energy (black line). The light blue (orange) lines are the energy (\(\Delta E\)) values at each iteration, and the dark blue and red plots represent the moving averages over 30 iterations.
to the boundary fields (squares in Fig. 3(a)). An example is shown in Fig. 3(d) for \(D=0.1J\) and \(A=J\). Now, we discuss the parameter regime where the NQS struggles to find the correct ground state. As both DMI and \(A\) decrease, the magnitude of the spin expectation values also decreases. We find that in this regime, marked by circles in Fig. 3(a), the quantum skyrmion only exists as a metastable state for some parameters, observed in the form of a local minimum during the optimization procedure where the NQS is stuck for some iterations before converging to the ground state. The ground state is characterized by almost vanishing spin expectation values aligned along the \(x\) or \(y\) direction. As mentioned earlier, for small DMI values, the NQS is not able to resolve the nearly degenerate ground state from the first excited state even in smaller lattices. Hence, we label this regime where our method does not find either a QS or an FM ground state as a "mixed state" (MS) (circles in Fig. 3(a)).
A conclusion that must be drawn from this result is that energy convergence cannot be taken as the sole measure of accuracy for the variational ground state. To have an additional metric for quantifying the accuracy of our approach, we calculate the gap between the ground state and the first excited state. This is achieved in the variational Monte Carlo scheme by optimizing a second NQS, \(\ket{\psi_{\theta}^{1}}\), orthogonal to the ground state NQS, \(\ket{\psi_{\theta}^{0}}\), by adding an additional term in the loss function
\[L_{\theta}=\left\langle\psi_{\theta}^{1}|H|\psi_{\theta}^{1}\right\rangle+J| \left\langle\psi_{\theta}^{0}|\psi_{\theta}^{1}\right\rangle|^{2}. \tag{6}\]
We calculate the relative energy gap as \(\Delta E_{g}=(E_{0}-E_{1})/E_{0}\), where \(E_{0}\) and \(E_{1}\) are the energies corresponding to \(\ket{\psi_{\theta}^{0}}\) and \(\ket{\psi_{\theta}^{1}}\) respectively, and plot it over the DMI in Fig. 3(b). For \(\Delta E_{g}<2\times 10^{-4}\), we do not obtain a QS or FM ground state. This corresponds to the MS region in the parameter space, where quantum skyrmions with very low spin expectation values might exist for some parameters that our method is not able to resolve, as found for small systems by exact diagonalization [18; 6; 9]. This suggests that the NQS-based variational methods generally struggle with almost degenerate states. This scenario observed here for quantum spin systems, is well known from finite size electronic topological systems, which only reach exact degeneracy in the thermodynamic limit.
Projection Monte Carlo techniques exist to improve the variational ground state. However, the presence of
Figure 3: Ground state diagram and spin expectation values of different ground states of Eq. (1) for a \(9\times 9\) square lattice. (a) Ground state diagram. QS denotes the quantum skyrmion state, and FM the ferromagnetic state aligned with the boundary fields, and MS the mixed state (see the main text). The color map shows the maximum Renyi entropy. (b) Relative energy gap \(\Delta E_{g}\) between the ground state and the first excited state found by the neural network quantum state method over DMI at \(A=0.2J\) and \(A=0.4J\). (c)-(e) Spin expectation values of different ground states. (c) QS at \(D=0.5J,A=0.2J\), (d) FM at \(D=0.1J,A=J\), and (e) spin wave at \(D=J,A=0.5J\) with periodic boundary conditions.
complex off-diagonal terms in the Hamiltonian makes it difficult to use them stochastically [37]. We use an alternative projection method to remove the excited state contributions in the variational ground state (see Appendix B). After this improvement of the wave function, we obtain the correct ground state for the \(3\times 3\) lattice but not for the \(5\times 5\) lattice. Thus, while the NQS is able to represent the correct ground state for all parameters in the case of a \(3\times 3\) lattice, it is not able to learn it in the small DMI region in our variational Monte Carlo scheme.
In addition to the spin expectation values, we calculate the skyrmion number \(C\) using the normalized spin expectation values, \(\mathbf{n}_{i}=\left\langle\mathbf{S}_{i}\right\rangle/\left|\left\langle\mathbf{S}_ {i}\right\rangle\right|\), to define quantum skyrmions [9]
\[C=\frac{1}{2\pi}\sum_{\Delta}\tan^{-1}\left(\frac{\mathbf{n}_{i}\cdot(\mathbf{ n}_{j}\times\mathbf{n}_{k})}{1+\mathbf{n}_{i}\cdot\mathbf{n}_{j}+\mathbf{n}_{j} \cdot\mathbf{n}_{k}+\mathbf{n}_{k}\cdot\mathbf{n}_{i}}\right), \tag{7}\]
where the sum runs over all elementary triangles \(\Delta\) of the triangular tessellation of the quadratic lattice, having the sites \(i\), \(j\), and \(k\) as corners. \(C\) gives the number of times the spins wind around a unit sphere and is an integer for quantum skyrmions. In our model, we find \(C=1\) for the quantum skyrmion ground state and \(C=0\) otherwise. Furthermore, using unnormalized spin expectation values in Eq. (7), \(\mathbf{n}_{i}=\left\langle 2\mathbf{S}_{i}\right\rangle\), results in a non-integer number \(Q\) that indicates the 'quantum' nature of skyrmions [9], similar to other quantum measures [18]. \(Q\) decreases as the entanglement increases and the spin expectation values decrease. For the QS ground states, we find a lower threshold of \(Q=0.9\).
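Eq. (7) can be evaluated triangle by triangle over the tessellation; the sketch below uses `arctan2` for quadrant-safe branch handling, an implementation choice that agrees with the \(\tan^{-1}\) in Eq. (7) on the principal branch.

```python
import numpy as np

def triangle_term(ni, nj, nk):
    """One summand of Eq. (7) (before the 1/2pi prefactor)."""
    num = np.dot(ni, np.cross(nj, nk))
    den = 1.0 + np.dot(ni, nj) + np.dot(nj, nk) + np.dot(nk, ni)
    return np.arctan2(num, den)

def skyrmion_number(triangles):
    """C = (1/2pi) * sum over oriented triangles of normalized spin vectors."""
    return sum(triangle_term(*t) for t in triangles) / (2.0 * np.pi)
```

Each `triangles` entry holds the three unit vectors \(\mathbf{n}_i,\mathbf{n}_j,\mathbf{n}_k\) at the corners of one elementary triangle; summing over the whole tessellation counts how often the spin texture wraps the unit sphere.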
Lastly, we note that using periodic boundary conditions without ferromagnetic boundaries (\(B^{z}=0\)), we do not find a QS ground state. Instead, we obtain a spin wave state (Fig. 3(e)), which is consistent with DMRG findings [17] and the fact that unfrustrated classical skyrmions require a magnetic field for stabilization. Here, a QS state minimizes the energy of a finite region of the lattice if the boundary of this region is ferromagnetically ordered. Furthermore, the quantum skyrmion ground state is stable in the presence of an additional bulk magnetic field \(B^{z}_{\text{ext}}\sum_{j}\sigma_{j}^{z}\) with \(B^{z}_{\text{ext}}\) up to the order of \(2J\) (not shown here), above which the ground state is a ferromagnet aligned along the applied field.
## V Quantum entanglement
Entanglement is an important property of quantum systems that is absent in classical systems. In this section, we investigate whether the spins in the ground state are entangled by calculating the Renyi entropy as a measure of entanglement. The Renyi entropy of the order \(\alpha\), where \(\alpha\geq 0\) and \(\alpha\neq 1\), is defined as,
\[S_{\alpha}(\rho_{A})=\frac{1}{1-\alpha}\text{log}(\text{Tr}(\rho_{A}^{\alpha} )). \tag{8}\]
Here, \(\rho_{A}\) is the reduced density matrix obtained after splitting the system into two regions \(A\) and \(B\) and tracing out the degrees of freedom in region \(B\). The Renyi entropy is a non-negative quantity that is zero for a pure state and takes the maximum value \(\text{log}(\text{min}(d_{1},d_{2}))\), where \(d_{1}\) and \(d_{2}\) are the dimensions of the Hilbert space in region \(A\) and \(B\), respectively. We take region \(A\) as a single spin and region \(B\) as the rest of the lattice to obtain the entanglement of each spin with its environment. We calculate the \(\alpha=2\) Renyi entropy, \(S_{2}(\rho_{A})\), using the expectation value of the 'Swap' operator [27; 38] (see Appendix C).
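For a pure state given as an explicit vector, Eq. (8) with \(\alpha=2\) reduces to \(S_{2}=-\log\mathrm{Tr}(\rho_{A}^{2})\); the Swap-operator Monte Carlo estimate used in the paper targets the same quantity, which the dense sketch below computes directly (tractable only for small systems).

```python
import numpy as np

def renyi2(psi, dims, keep=0):
    """S_2 of subsystem `keep` for a pure state psi over subsystems of dimensions `dims`."""
    psi = np.asarray(psi, dtype=complex).reshape(dims)
    # move the kept subsystem's axis to the front and flatten the environment
    psi = np.moveaxis(psi, keep, 0).reshape(dims[keep], -1)
    rho_A = psi @ psi.conj().T            # reduced density matrix of subsystem A
    return -np.log(np.trace(rho_A @ rho_A).real)
```

A product state gives \(S_2=0\), while a maximally entangled qubit pair gives \(S_2=\log 2\), the bounds quoted after Eq. (8).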
The maximum Renyi entropy associated with the parameters is shown as a heatmap in the ground state diagram in Fig. 3(a). In the QS state for large values of DMI, we find \(S_{2}(\rho_{A})\approx 0\) irrespective of which spin \(A\) we consider, which means that these quantum skyrmions can be approximated as product states. However, as we reduce the DMI, we find that the entanglement among the spins increases, with the maximum reaching \(\text{max}_{A}S_{2}(\rho_{A})=0.09\) at \(D=0.6J\) and \(A=0.0\) for the most entangled spin. Here, the quantum skyrmion cannot be described as a product state. We plot the Renyi entropy \(S_{2}(\rho_{A})\) as a heat map over the QS ground state in Fig. 4 for the parameters \(D=0.5J\) and \(A=0.2J\). As the boundary spins are fixed with a large magnetic field, they are not entangled with the rest of the spins. The entropy first increases and then decreases from the boundary to the center, reaching its maximum between the two. One unexpected feature of this QS state is that the central spin is also disentangled from the surrounding spins, even though there is no external magnetic field acting on this site. We find that the Renyi entropy of the central spin is numerically zero for all quantum skyrmions that we obtain in our analysis; there are no accepted spin configurations during the Monte Carlo integration where the central spin points in the direction opposite to the ferromagnetic environment. This means that the QS is a product state of the central spin and a superposition of the rest of the spins. The disentangled central spin can be used to detect quantum skyrmions using the central spin magnetization as an observable in measurements without destroying the quantum nature of the skyrmionic state. We note that our results of the entropy for the QS ground state match qualitatively with those in [17], in which the authors considered a bulk magnetic field instead of a ferromagnetic boundary. There, the DMRG calculations indicate a vanishing entanglement of the central spin in a quantum skyrmion with the rest of the system for a certain parameter regime. Thus, a disentangled central spin might be a general feature of quantum skyrmions.

Figure 4: Renyi entropy of each spin with its environment for the QS at \(D=0.5J\) and \(A=0.2J\).
In the FM parameter region, the entropy is \(S_{2}(\rho_{A})=0\), and these states can be represented as product states of the spins aligned with the boundary fields. Decreasing \(A\) for small DMI, we approach the MS, and the entropy reaches its maximum. Thus, the difficulties in obtaining a correct solution in this parameter region might also be due to the highly entangled spins that have almost vanishing spin expectation values, along with the small energy gap between the eigenstates.
## VI Network interpretation
In this final results section, we shift our focus towards interpreting the working and training of the neural network. Understanding how the network learns the target problem is integral to machine learning research and provides insights that cannot be obtained only through the final prediction. However, the interpretation of neural networks is a nontrivial problem, and a large number of neurons in multiple layers, as in the present network shown in Fig. 1, makes it even more challenging.
For the case of NQS and many-body physics, inspecting the weights of the neural network may offer clues towards understanding the inner workings of the network [26, 22]. To achieve this and to avoid dealing with an unmanageable amount of variational parameters, we study the QS ground state of the \(5\times 5\) lattice. We also use a smaller, fully connected feed-forward neural network as our variational ansatz, with two hidden layers and each layer consisting of 25 neurons for the phase and modulus parts, corresponding to \(\alpha=1\) in Fig. 1. We then transfer the results of our analysis to the calculation in the \(9\times 9\) lattice.
We plot the weights of all neurons of our NQS after training, in a \(5\times 5\) grid for each layer, in Fig. 5. We consider the QS solution at \(D=J\) and \(A=0.5J\) for our analysis. Inspecting the weights of the first hidden layer, we see that in the phase part, which is trained first to improve the learning of the sign structure of the wave function, each neuron learns a specific part of the wave function. In the modulus part of the first hidden layer, we find that most of the neurons have a skyrmion-like distribution of the weights. This is because the first hidden layer directly takes the spins as inputs; it learns the most important features of the ground state. However, in the second hidden layer, we find that most of the weights in both the phase and the modulus neurons are distributed in a similar pattern and, visually, do not offer a physical interpretation. This raises two questions: first, whether the second hidden layer is essential in the network, and second, whether the neurons with a similar distribution of weights are redundant and can be removed without loss in the accuracy of the network.
In machine learning, pruning is often used to reduce the number of parameters in a neural network to increase computational efficiency without any loss in the accuracy of the network [39, 40, 41]. In most cases, pruning is done post-training by removing the weights with the smallest magnitude and adjusting the remaining weights. After all the pruning steps, only the most important weights are left in the neural network, which can shed some light on the most significant underlying features of the target problem. Pruning could also be important for NQS as a variational ansatz since, with increasing system sizes, the size of the network increases [42].
Figure 5: Weight distribution in the hidden layers of the quantum skyrmion ground state in a \(5\times 5\) lattice at \(D=J\) and \(A=0.5J\). Phase1 (Phase2) and Modulus1 (Modulus2) denote the phase and modulus parts in the first (second) hidden layer, respectively. Each block shows the weights inside one hidden neuron. While the first hidden layer learns the essential features of the ground state, most of the neurons in the second hidden layer show a similar pattern.

We analyze the effects of pruning to answer the questions we raised above. Again, we consider the \(5\times 5\) lattice with two types of ground states - the low entanglement QS state at \(D=J\), \(A=0.5J\) and the high entanglement MS state at \(D=0.1J\), \(A=0.1J\) in Fig. 6(a)-(b). Starting from the second hidden layer, at each pruning step, \(10\%\) of the neurons from both phase and modulus parts are randomly deleted until only one neuron is left in each of them. Then the same procedure is applied to the first hidden layer. After deleting the neurons, the pruned network is trained to adjust the remaining weights (\(pr\)). Furthermore, a network with the same structure as the pruned network is also trained from scratch (\(prsc\)) to compare with the \(pr\) networks.
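The random neuron-removal step can be sketched as follows (illustrative code with hypothetical variable names; the actual networks also carry biases and separate phase and modulus parts). Deleting neuron \(k\) from a hidden layer removes the \(k\)-th row of its incoming weight matrix and the \(k\)-th column of the next layer's weight matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_neurons(w_in, w_out, frac=0.10):
    """Randomly delete a fraction of hidden neurons from one layer.

    w_in:  (n_hidden, n_inputs)  weights into the hidden layer
    w_out: (n_next, n_hidden)    weights out of the hidden layer
    Removing neuron k deletes row k of w_in and column k of w_out.
    """
    n = w_in.shape[0]
    n_drop = max(1, int(frac * n))
    drop = rng.choice(n, size=n_drop, replace=False)
    keep = np.setdiff1d(np.arange(n), drop)
    return w_in[keep], w_out[:, keep]

w1 = rng.normal(size=(25, 25))   # hidden layer of the 5x5-lattice ansatz
w2 = rng.normal(size=(25, 25))   # following layer
w1p, w2p = prune_neurons(w1, w2, frac=0.10)
print(w1p.shape, w2p.shape)      # (23, 25) (25, 23): 10% of 25 neurons removed
```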
As metrics for the performance, we use the relative error, \(\Delta E_{p}\), and the fidelity, \(F\), between the original network and the \(pr\) or \(prsc\) networks [43],
\[\Delta E_{p} =\frac{|E_{\text{full}}-E_{p}|}{|E_{\text{full}}|}, \tag{9}\] \[F =|\langle\psi_{\text{full}}|\psi_{p}\rangle|^{2}\,, \tag{10}\]
where \(p=pr,prsc\) (see details in Appendix A). In Fig. 6(a)-(b), we plot \(\Delta E_{p}\) and \(F\) over the pruning for the \(5\times 5\) solution, with the maximum Renyi entropies in the insets. For the low entanglement QS solution, the degradation in performance is small even after removing \(97\%\) of the weights, and the fidelity stays over \(97\%\) for both \(pr\) and \(prsc\) networks. However, for the high entanglement MS solution, the \(pr\) and \(prsc\) networks show different behavior. The fidelity gradually decreases in the \(prsc\) network as the weights are removed. The performance of the pruned network in the high entanglement MS state is worse than in the low entanglement solution. This is expected as it becomes considerably more difficult for fewer neurons to describe the highly entangled state correctly. Moreover, the performance degradation in the \(pr\) network is much more severe than in the \(prsc\) network. This could be due to the difficulty in leaving the local minimum by the optimizer for the already trained \(pr\) network, while \(prsc\) networks have the advantage of starting from random weights and thus more flexibility.
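Both metrics are straightforward to evaluate once the energies and state vectors are available. The following sketch assumes small systems where the state vectors can be stored explicitly; for larger systems the fidelity is estimated by sampling, as in Appendix A:

```python
import numpy as np

def relative_error(e_full, e_pruned):
    """Relative error in energy, |E_full - E_p| / |E_full|."""
    return abs(e_full - e_pruned) / abs(e_full)

def fidelity(psi_full, psi_pruned):
    """F = |<psi_full|psi_p>|^2 for (re)normalized state vectors."""
    a = psi_full / np.linalg.norm(psi_full)
    b = psi_pruned / np.linalg.norm(psi_pruned)
    return abs(np.vdot(a, b)) ** 2

psi = np.array([1.0, 1.0, 0.0])   # toy 'full' state
phi = np.array([1.0, 0.0, 0.0])   # toy 'pruned' state
print(relative_error(-10.0, -9.7))   # 0.03
print(fidelity(psi, phi))            # |<psi|phi>|^2 = 0.5
```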
Figure 6: Performance metrics after pruning the neural network. \(\Delta E_{p}\) denotes the relative error in energy and \(F\) denotes the fidelity. \(pr\) denotes a pruned NQS trained after removing the weights from the full NQS and \(prsc\) denotes an identical network to the \(pr\) one but trained from scratch. (a) \(5\times 5\) lattice QS ground state at \(D=J\) and \(A=0.5J\), (b) \(5\times 5\) lattice MS ground state at \(D=0.1J\) and \(A=0.1J\), (c) \(9\times 9\) lattice QS ground state at \(D=0.5J\) and \(A=0.2J\), and (d) \(9\times 9\) lattice MS ground state at \(D=0.1J\) and \(A=0.1J\). The inset shows the maximum values of the Renyi entropies.
The maximum Renyi Entropy in both Figs. 6(a) and (b) decreases as the weights are removed. Interestingly, it only becomes zero when only one neuron is left in both hidden layers, showing that NQS can represent entanglement even with a minimal number of neurons. Lastly, we note that on reducing the number of neurons, the optimization process becomes unstable and requires much fine-tuning to converge near the ground state.
In Figs. 6(c) and (d), we show the same results for the \(9\times 9\) lattice, calculating only \(prsc\) networks as they have better performance than the \(pr\). The degradation in energy and fidelity, while qualitatively similar to the \(5\times 5\) case, is more severe. In all four cases, we find that removing neurons from the first hidden layer affects the network's performance more than removing them from the second hidden layer, signifying the importance of the former over the latter. This is seen in the very low error in energy until about half of the total weights are removed, after which the error rises drastically. Does this mean we can remove the second hidden layer entirely without strongly deteriorating the performance? We find that this is not the case because the performance drastically drops, and the optimization, especially in the high entanglement region, becomes unstable with only one hidden layer. We find that (not shown here) having even a single neuron in the second hidden layer results in greater accuracy than having only one hidden layer with as much as four times the number of neurons. Thus, increasing the width of the network is not the optimal strategy here. On the other hand, having three or more hidden layers makes the optimization process more challenging, and the network is prone to get stuck in a local minimum. Hence, we conclude that the optimal network for our problem should have two hidden layers, with a large number of neurons in the first hidden layer and fewer neurons in the second hidden layer.
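A minimal sketch of the architecture this analysis suggests - a wide first hidden layer and a narrow second one, with separate phase and modulus parts combined into one complex log-amplitude - is given below (illustrative NumPy code with hypothetical layer sizes, not the trained ansatz of Fig. 1):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(sizes):
    """Fully connected net: list of (W, b) pairs, tanh hidden activations."""
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(m))
            for n, m in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for w, b in layers[:-1]:
        x = np.tanh(w @ x + b)
    w, b = layers[-1]
    return (w @ x + b)[0]

n_spins = 81                                  # 9x9 lattice
# Wide first hidden layer, narrow second one, as the pruning study suggests.
modulus_net = make_mlp([n_spins, 2 * n_spins, n_spins // 4, 1])
phase_net   = make_mlp([n_spins, 2 * n_spins, n_spins // 4, 1])

def log_psi(sigma):
    """log psi(sigma) = log-modulus + i * phase."""
    return forward(modulus_net, sigma) + 1j * forward(phase_net, sigma)

sigma = rng.choice([-1.0, 1.0], size=n_spins)
print(log_psi(sigma))                         # one complex log-amplitude
```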
## VII Summary
In this work, we have studied the ground states of the spin-1/2 Heisenberg model in the presence of Dzyaloshinskii-Moriya interaction and Heisenberg anisotropy on a square lattice with ferromagnetic boundaries using variational Monte Carlo. We use a neural network as the variational wave function, with different parts to learn the phase and amplitude of the wave function. We show that a weakly entangled quantum skyrmion ground state, with the skyrmion number \(C=1\), exists for a wide range of Hamiltonian parameters. The entanglement increases with decreasing DMI. For large DMI values, a product state can describe the QS ground state. Remarkably, the central spin in the QS state is disentangled from the rest of the spins. Furthermore, we analyze the weights of our NQS ansatz and find that while the first hidden layer learns the most important features of the ground state, the second hidden layer is essential to achieve high accuracy. We then test the limits of the NQS by pruning and find that the higher the entanglement, the more deterioration in the performance.
Finally, we emphasize two of our results: First, our finding that the central spin decouples from the rest of the system and points in the opposite direction to the surrounding ferromagnet can potentially be used as a nondestructive detection scheme for quantum skyrmions by local spin measurements, e.g., by a magnetic scanning tunneling microscope. Second, we obtain a region in the parameter space where our method cannot resolve the correct ground state. Instead, we find a superposition between the ground state and the first excited state. This can be traced back to a tiny excitation gap between the ground state and the first excited state and reveals that the NQS ansatz has problems with almost degenerate states, which typically appear in finite size topological systems. While we could devise a scheme to improve the variational state further and separate the ground state from the first excited state in small systems, we could not do this in large spin systems. Thus, while NQS-based variational methods offer an effective tool to study quantum skyrmion systems at medium to large DMI, they struggle in the small DMI regime. It is an open question whether other methods like DMRG also struggle in this regime. Improvement of the learning algorithm for NQS-based methods and its comparison with established methods will be our focus for future work.
###### Acknowledgements.
A. J. is supported by the MEXT Scholarship and Graduate School of Science, Kyoto University under Ginfu Fund. A.J. also acknowledges the funding towards this work from the Kyoto University - University of Hamburg fund. R.P. is supported by JSPS KAKENHI No. JP18K03511 and JP23K03300. T.P. acknowledges funding by the Deutsche Forschungsgemeinschaft (project no. 420120155) and the European Union (ERC, QUANTUMWIST, project number 101039098). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Parts of the numerical simulations in this work have been done using the facilities of the Supercomputer Center at the Institute for Solid State Physics, the University of Tokyo.
## Appendix A Optimization procedure
Given a variational wave function \(\left|\psi_{\theta}\right\rangle\), the expectation value of an operator \(O\) can be calculated as [22; 44],
\[\left\langle O\right\rangle =\frac{\left\langle\psi_{\theta}\right|O\left|\psi_{\theta}\right\rangle }{\left\langle\psi_{\theta}\right|\psi_{\theta}\rangle}\] \[=\frac{\sum_{\sigma,\sigma^{\prime}}\left\langle\psi_{\theta} \right|\sigma\rangle\left\langle\sigma|O|\sigma^{\prime}\right\rangle\left\langle \sigma^{\prime}|\psi_{\theta}\right\rangle}{\sum_{\sigma}\left|\psi_{\theta} \left(\sigma\right)\right|^{2}}\] \[=\frac{\sum_{\sigma}\left|\psi_{\theta}\left(\sigma\right) \right|^{2}\sum_{\sigma^{\prime}}\left\langle\sigma|O|\sigma^{\prime}\right\rangle \frac{\psi_{\theta}\left(\sigma^{\prime}\right)}{\psi_{\theta}\left(\sigma \right)}}{\sum_{\sigma}\left|\psi_{\theta}\left(\sigma\right)\right|^{2}}\] \[=\sum_{\sigma}p_{\theta}(\sigma)O_{\theta}^{\text{loc}}(\sigma), \tag{10}\]
where \(\psi_{\theta}(\sigma)=\left\langle\sigma|\psi_{\theta}\right\rangle\) and
\[p_{\theta}(\sigma) =\frac{\left|\psi_{\theta}(\sigma)\right|^{2}}{\sum_{\sigma} \left|\psi_{\theta}(\sigma)\right|^{2}}, \tag{11}\] \[O_{\theta}^{\text{loc}}(\sigma) =\sum_{\sigma^{\prime}}\left\langle\sigma|O|\sigma^{\prime} \right\rangle\frac{\psi_{\theta}(\sigma^{\prime})}{\psi_{\theta}(\sigma)}. \tag{12}\]
Here, \(O_{\theta}^{\text{loc}}(\sigma)\) is the local estimator and \(p_{\theta}(\sigma)\) is the probability distribution of \(\left|\sigma\right\rangle\). Thus, the quantum expectation value of an observable \(O\) is the average of a random variable \(O_{\theta}^{\text{loc}}(\sigma)\) over the probability distribution \(p_{\theta}(\sigma)\). Since the sum over all the states \(\left|\sigma\right\rangle\) scales exponentially with the system size, Markov chain Monte Carlo with the Metropolis-Hastings algorithm is used to sample a series of states \(\left|\sigma\right\rangle_{n}\) and stochastically estimate the expectation values
\[\left\langle O\right\rangle\approx\frac{1}{N}\sum_{n=1}^{N}O_{\theta}^{\text{ loc}}(\sigma_{n}), \tag{13}\]
where \(N\) is the total number of samples.
The energy of the system can be calculated by taking \(O\) to be the Hamiltonian. The NQS is then optimized for the ground state iteratively by minimizing the energy using a gradient descent algorithm. Here, we use the Adam optimizer, with the moments \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\)[33]. The learning rate \(\eta\) is set to \(\eta=0.001\) for the phase part and increases linearly from \(0\) to \(0.001\) over the first \(5000\) iterations for the modulus part of the NQS. The learning rate is then reduced to \(\eta=0.0001\) after some iterations, as evidenced by the small kinks in the energy convergence plots near \(20000\) iterations in Fig. 2(a) and near \(40000\) iterations in Fig. 2(b). We also tried a stochastic gradient descent optimizer with a stochastic reconfiguration method as a preconditioner to the gradient [22] and obtained similar results as with the Adam optimizer but with increased computational cost. A critical step in the optimization procedure is to first optimize the phase part of the network while keeping the modulus part constant, to facilitate the learning of the phase of the wave function. According to the variational principle, the variational energy is bounded from below by the actual ground state energy, which makes the energy a convenient loss function to minimize. We sample by flipping a spin locally \(N\) times, each at a random location, where \(N\) is the total number of spins in the lattice. This makes one Monte Carlo sweep. We use \(10^{4}\) samples for energy calculation and \(10^{7}\) for all the other expectation values.
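The sampling and local-estimator machinery can be illustrated on a toy wave function for which the answer is known in closed form. The sketch below (a hypothetical example, not the paper's code) uses single-spin-flip Metropolis sampling of \(p_{\theta}(\sigma)\propto|\psi(\sigma)|^{2}\) for a product state \(\psi(\sigma)=e^{a\sum_{i}\sigma_{i}}\) and estimates \(\langle\sum_{i}\sigma_{i}^{x}\rangle/N\) through its local estimator, which should converge to \(1/\cosh(2a)\):

```python
import numpy as np

rng = np.random.default_rng(1)
N, a = 10, 0.3

def log_psi(s):
    """Toy real log-amplitude of a product state."""
    return a * s.sum()

def metropolis(n_samples, n_burn=500):
    s = rng.choice([-1, 1], size=N)
    samples = []
    for sweep in range(n_samples + n_burn):
        for _ in range(N):               # one sweep = N single-spin-flip trials
            k = rng.integers(N)
            s2 = s.copy(); s2[k] *= -1
            # accept with probability |psi(s')|^2 / |psi(s)|^2
            if rng.random() < np.exp(2 * (log_psi(s2) - log_psi(s))):
                s = s2
        if sweep >= n_burn:
            samples.append(s.copy())
    return np.array(samples)

def local_sx(s):
    """Local estimator of sum_i sigma^x_i: sum_i psi(flip_i s)/psi(s)."""
    return np.exp(-2 * a * s).sum()

samples = metropolis(4000)
est = np.mean([local_sx(s) for s in samples]) / N
print(est, 1 / np.cosh(2 * a))           # both close to 0.84
```

The same structure carries over to the Hamiltonian: since each term of Eq. (1) connects \(\left|\sigma\right\rangle\) to at most a few flipped configurations, the local energy is a short sum of amplitude ratios.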
To calculate the fidelity between two NQSs, \(\left|\psi_{1}\right\rangle\) and \(\left|\psi_{2}\right\rangle\) (dropping the dependence on \(\theta\) for clarity), we follow a similar procedure as in Eq. (10),
\[F =\frac{\left|\left\langle\psi_{1}|\psi_{2}\right\rangle\right|^{2 }}{\left\langle\psi_{1}|\psi_{1}\right\rangle\left\langle\psi_{2}|\psi_{2}\right\rangle}\] \[=\frac{\sum_{\sigma,\sigma^{\prime}}\left\langle\psi_{1}|\sigma \right\rangle\left\langle\sigma|\psi_{2}\right\rangle\left\langle\psi_{2}| \sigma^{\prime}\right\rangle\left\langle\sigma^{\prime}|\psi_{1}\right\rangle}{ \sum_{\sigma}\left|\psi_{1}(\sigma)\right|^{2}\sum_{\sigma^{\prime}}\left| \psi_{2}(\sigma^{\prime})\right|^{2}}\] \[=\sum_{\sigma}p_{1}(\sigma)\frac{\psi_{2}(\sigma)}{\psi_{1}(\sigma )}\sum_{\sigma^{\prime}}p_{2}(\sigma^{\prime})\frac{\psi_{1}(\sigma^{\prime})} {\psi_{2}(\sigma^{\prime})}.\]
Thus, \(F\) can be evaluated by first sampling from two different probability distributions corresponding to the two NQSs, and then computing the ratio of the wave function amplitudes.
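The identity behind this estimator can be checked by brute-force enumeration on a small spin system (illustrative code; in practice the two sums become Monte Carlo averages over samples drawn from \(p_{1}\) and \(p_{2}\)):

```python
import numpy as np
from itertools import product

# Two toy (unnormalized, real) wave functions on 3 spins.
psi1 = lambda s: np.exp(0.3 * sum(s))
psi2 = lambda s: np.exp(0.1 * sum(s) + 0.2 * s[0] * s[1])

configs = list(product([-1, 1], repeat=3))
a1 = np.array([psi1(s) for s in configs])
a2 = np.array([psi2(s) for s in configs])
p1, p2 = a1**2 / (a1**2).sum(), a2**2 / (a2**2).sum()

# Estimator form: F = E_{p1}[psi2/psi1] * E_{p2}[psi1/psi2]
f_est = (p1 * (a2 / a1)).sum() * (p2 * (a1 / a2)).sum()

# Direct definition: F = |<psi1|psi2>|^2 / (<psi1|psi1><psi2|psi2>)
f_exact = (a1 @ a2) ** 2 / ((a1 @ a1) * (a2 @ a2))
print(f_est, f_exact)   # identical
```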
## Appendix B An alternative projection method
Variational wave functions can be improved by projection techniques, which require the variational state to have a finite overlap with the exact ground state. Then, the high-energy components can be projected out by applying the "power method". However, this method can be done exactly only for systems manageable by exact diagonalization. In other cases, stochastic methods have to be used, requiring the Hamiltonian's off-diagonal terms to be real and non-negative. When this condition is not fulfilled, as in our case with Eq. (1) with complex off-diagonal terms, there is the fixed-node approximation for Hamiltonians with real and negative off-diagonal terms, and its modification, the fixed-phase approximation, for complex off-diagonal terms [37].
Here, we propose another method to filter out high-energy components by projecting the Hamiltonian on a few low-energy states, which can be directly obtained by the variational Monte Carlo scheme introduced in the main text.
Given a Hamiltonian \(H\), its eigenvalue equation is
\[H\left|\phi\right\rangle=E\left|\phi\right\rangle, \tag{14}\]
where \(E\) and \(\left|\phi\right\rangle\) are the eigenvalues and eigenvectors, respectively. By expanding this equation in a complete but not necessarily orthonormal basis \(\left|n\right\rangle\), we obtain the
generalized eigenvalue equation
\[\frac{1}{\Omega}\left(\sum_{n_{i},n_{j}}\left\langle n_{j}|H|n_{i}\right\rangle \left\langle n_{i}|\phi\right\rangle-E\left\langle n_{j}|n_{i}\right\rangle \left\langle n_{i}|\phi\right\rangle\right)=0. \tag{10}\]
Using an incomplete set of states, \(|n_{i}\rangle\), we can define the projection of the Hamiltonian into the space spanned by these states as \(H_{\mathbf{proj}}=\left\langle n_{j}|H|n_{i}\right\rangle\) and the overlap matrix \(X=\left\langle n_{j}|n_{i}\right\rangle\). If \(|n_{i}\rangle\) are approximations of the ground state and the lowest excited states of the Hamiltonian, the ground state of the projected Hamiltonian will be an improved version of the variational ground state of the full Hamiltonian.
In the converged variational NQS ground state \(|n_{0}\rangle\), the main component is the exact ground state with small contributions from the excited states. By optimizing a second NQS, which is nearly orthogonal to the ground state NQS, using the cost function
\[L_{\theta}=\left\langle n_{1}|H|n_{1}\right\rangle+J|\left\langle n_{0}|n_{1} \right\rangle|^{2}, \tag{11}\]
as described in the main text in Eq. (6), the first excited state can be approximated as \(|n_{1}\rangle\). This procedure can be repeated to approximate the excited states of \(H\). We can then use these variational low-energy states to calculate the projected Hamiltonian and the overlap matrix in a Markov chain Monte Carlo scheme. We note that even by using the cost function Eq. (11), there is no guarantee that the overlap of \(|n_{0}\rangle\) and \(|n_{1}\rangle\) exactly vanishes. We use a similar procedure as in Eq. (10) to calculate the matrix elements of the projected Hamiltonian and the overlap matrix. For the projection on two low-energy states, we sample using the product of these two wave functions \(|n_{1}(\sigma)||n_{2}(\sigma)|\), as it gave us the best results. The projected Hamiltonian and the overlap matrix are then given as
\[\frac{\left\langle n_{i}|H|n_{j}\right\rangle}{\Omega}=\frac{\sum_{\sigma\sigma^{\prime}}|n_{1}(\sigma)||n_{2}(\sigma)|\,\frac{n_{i}^{\star}(\sigma)\left\langle\sigma|H|\sigma^{\prime}\right\rangle n_{j}(\sigma^{\prime})}{|n_{1}(\sigma)||n_{2}(\sigma)|}}{\sum_{\sigma}|n_{1}(\sigma)||n_{2}(\sigma)|} \tag{12}\]
\[\frac{\left\langle n_{i}|n_{j}\right\rangle}{\Omega}=\frac{\sum_{\sigma}|n_{1}(\sigma)||n_{2}(\sigma)|\,\frac{n_{i}^{\star}(\sigma)n_{j}(\sigma)}{|n_{1}(\sigma)||n_{2}(\sigma)|}}{\sum_{\sigma}|n_{1}(\sigma)||n_{2}(\sigma)|} \tag{13}\]
which determines the normalization constant in Eq. (10) as \(\Omega=\sum_{\sigma}|n_{1}(\sigma)||n_{2}(\sigma)|\). The wave functions in Eq. (10) do not need to be normalized because the overlap matrix \(X\) takes care of any factors arising due to the absence of normalization. Hence, this method can be used in the variational Monte Carlo scheme, which usually considers unnormalized wave functions.
Then, by solving the generalized eigenvalue problem, Eq. (10), we can filter out the high energy components from the variational ground state. The new variational ground state is \(\left|n_{0}\right\rangle_{\text{new}}=\sum_{i}\phi_{0i}\left|n_{i}\right\rangle\), where \(\phi_{0i}\) are the components of the lowest energy eigenvector of Eq. (10). This procedure is feasible when only a few excited states are mixed in the approximation of the ground state, as the calculation of the excited state itself is variational, and the errors build up with each excited state calculation. This method works well for the \(3\times 3\) lattice over the entire parameter range, as the variational ground state has negligible overlap with the second and higher excited states. Then, only the calculation of the first variational excited state is required. However, while it improves the variational energy slightly for larger lattices, we do not obtain the correct ground state in the small DMI and \(A\) region of the ground state diagram.
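Once \(H_{\mathbf{proj}}\) and \(X\) have been estimated, the filtering step amounts to a small dense generalized eigenvalue problem. The sketch below uses hypothetical \(2\times 2\) matrices (the numbers are purely illustrative) and SciPy's generalized Hermitian eigensolver:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical projected Hamiltonian and overlap matrix built from two
# variational states |n0>, |n1>; the entries are illustrative only.
h_proj = np.array([[-10.0, -0.5],
                   [-0.5,  -9.8]])
x = np.array([[1.00, 0.02],
              [0.02, 1.00]])     # <n_i|n_j>: nearly, but not exactly, orthogonal

# Generalized eigenvalue problem H_proj phi = E X phi.
energies, vecs = eigh(h_proj, x)
phi0 = vecs[:, 0]                # components of the improved ground state
print(energies[0])               # lowest generalized eigenvalue, ~ -10.21 here
```

The new variational ground state is then \(\left|n_{0}\right\rangle_{\text{new}}=\phi_{00}\left|n_{0}\right\rangle+\phi_{01}\left|n_{1}\right\rangle\), with the coefficients read off from `phi0`.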
## Appendix C Renyi Entropy
When a system is divided into two parts, \(A\) and \(B\), the variational wave function can be written as
\[\left|\psi_{\theta}\right\rangle=\sum_{\sigma_{A}\sigma_{B}}\psi_{\theta}( \sigma_{A}\sigma_{B})\left|\sigma_{A}\right\rangle\left|\sigma_{B}\right\rangle, \tag{14}\]
where \(\sigma_{A}\) and \(\sigma_{B}\) are the basis states in region \(A\) and region \(B\), respectively. The Renyi entropy of order \(\alpha\) between \(A\) and \(B\) is
\[S_{\alpha}(\rho_{A})=\frac{1}{1-\alpha}\text{log}(\text{Tr}(\rho_{A}^{\alpha})), \tag{15}\]
where \(\rho_{A}\) is the reduced density matrix obtained after tracing out the degrees of freedom in region \(B\). To calculate the Renyi entropy of the second order (\(\alpha=2\)), we use the replica trick to evaluate the expectation value of the 'Swap' operator on two copies of the variational wave function. The Swap operator exchanges the spins in region \(A\) between the two copies of the wave function [38]
\[\text{Swap}_{A}\left|\psi_{\theta}\right\rangle\otimes\left|\psi_{ \theta}\right\rangle=\text{Swap}_{A}\left(\sum_{\sigma_{A}\sigma_{B}}\psi_{ \theta}(\sigma_{A}\sigma_{B})\left|\sigma_{A}\right\rangle\left|\sigma_{B} \right\rangle\right)\] \[\otimes\left(\sum_{\sigma^{\prime}_{A}\sigma^{\prime}_{B}}\psi_{ \theta}(\sigma^{\prime}_{A}\sigma^{\prime}_{B})\left|\sigma^{\prime}_{A}\right\rangle \left|\sigma^{\prime}_{B}\right\rangle\right)\] \[=\sum_{\sigma_{A}\sigma_{B}}\psi_{\theta}(\sigma_{A}\sigma_{B}) \sum_{\sigma^{\prime}_{A}\sigma^{\prime}_{B}}\psi_{\theta}(\sigma^{\prime}_{A} \sigma^{\prime}_{B})\left|\sigma^{\prime}_{A}\right\rangle\left|\sigma_{B} \right\rangle\otimes\left|\sigma_{A}\right\rangle\left|\sigma^{\prime}_{B} \right\rangle, \tag{16}\]
where \(\sigma\) and \(\sigma^{\prime}\) are the basis states for the two copies of the wave function. The expectation value of \(\text{Swap}_{A}\) is then given by,
\[\left\langle\text{Swap}_{A}\right\rangle=\frac{\left\langle\psi_{ \theta}\otimes\psi_{\theta}\right|\text{Swap}_{A}\left|\psi_{\theta}\otimes \psi_{\theta}\right\rangle}{\left\langle\psi_{\theta}\otimes\psi_{\theta} \middle|\psi_{\theta}\otimes\psi_{\theta}\right\rangle}\] \[=\frac{\sum_{\sigma_{A}\sigma_{B}\sigma^{\prime}_{A}\sigma^{ \prime}_{B}}\psi^{\star}_{\theta}(\sigma_{A}\sigma_{B})\psi^{\star}_{\theta}( \sigma^{\prime}_{A}\sigma^{\prime}_{B})\psi_{\theta}(\sigma^{\prime}_{A}\sigma_{B })\psi_{\theta}(\sigma_{A}\sigma^{\prime}_{B})}{\sum_{\sigma\sigma^{\prime}} \left|\left\langle\psi_{\theta}\otimes\psi_{\theta}\middle|\sigma\otimes\sigma^{ \prime}\right\rangle\right|^{2}}\] \[=\text{Tr}(\rho_{A}^{2})\] \[=\text{exp}(-S_{2}(\rho_{A})). \tag{17}\]
For the final step we use the definition in Eq. (C2) with \(\alpha=2\). In the Monte Carlo scheme, Eq. (C4) can be evaluated as
\[\langle\mathrm{Swap}_{A}\rangle\] \[=\sum_{\sigma_{A}\sigma_{B}\sigma_{A}^{\prime}\sigma_{B}^{\prime} }\frac{\left|\psi_{\theta}(\sigma_{A}\sigma_{B})\right|^{2}}{\sum_{\sigma} \left|\psi_{\theta}(\sigma)\right|^{2}}\frac{\left|\psi_{\theta}(\sigma_{A}^{ \prime}\sigma_{B}^{\prime})\right|^{2}}{\sum_{\sigma^{\prime}}\left|\psi_{ \theta}(\sigma^{\prime})\right|^{2}}\cdot\frac{\psi_{\theta}(\sigma_{A}^{ \prime}\sigma_{B})\psi_{\theta}(\sigma_{A}\sigma_{B}^{\prime})}{\psi_{\theta}( \sigma_{A}\sigma_{B})\psi_{\theta}(\sigma_{A}^{\prime}\sigma_{B}^{\prime})}\] \[=\sum_{\sigma_{A}\sigma_{B}\sigma_{A}^{\prime}\sigma_{B}^{\prime} }p_{\theta}(\sigma)p_{\theta}(\sigma^{\prime})\frac{\psi_{\theta}(\sigma_{A}^ {\prime}\sigma_{B})\psi_{\theta}(\sigma_{A}\sigma_{B}^{\prime})}{\psi_{ \theta}(\sigma_{A}\sigma_{B})\psi_{\theta}(\sigma_{A}^{\prime}\sigma_{B}^{ \prime})}. \tag{18}\]
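For small systems, this replica identity can be verified against the exact reduced density matrix by direct summation. The following sketch (illustrative code for a real-valued toy wave function, so the conjugates can be dropped) evaluates both sides for a four-spin state with region \(A\) taken as a single spin:

```python
import numpy as np
from itertools import product

# Toy real wave function on 4 spins; region A = first spin, B = the rest.
psi = lambda s: np.exp(0.2 * sum(s) + 0.4 * s[0] * s[1])

configs = list(product([-1, 1], repeat=4))
amp = {s: psi(s) for s in configs}
z = sum(a**2 for a in amp.values())

# <Swap_A> via the replica identity: average over pairs (s, s') of
# psi(s'_A s_B) psi(s_A s'_B) / (psi(s) psi(s')).
swap = 0.0
for s in configs:
    for sp in configs:
        num = amp[(sp[0],) + s[1:]] * amp[(s[0],) + sp[1:]]
        swap += (amp[s]**2 / z) * (amp[sp]**2 / z) * num / (amp[s] * amp[sp])

# Direct Tr(rho_A^2) from the reduced density matrix.
m = np.array([[amp[(a,) + b] for b in product([-1, 1], repeat=3)]
              for a in [-1, 1]]) / np.sqrt(z)
rho_a = m @ m.T
print(swap, np.trace(rho_a @ rho_a))   # identical; S2 = -log of this value
```

In the Monte Carlo calculation, the double sum over configurations becomes an average over independent pairs of samples drawn from \(p_{\theta}\).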
|
2310.01951 | Probabilistic Reach-Avoid for Bayesian Neural Networks | Model-based reinforcement learning seeks to simultaneously learn the dynamics
of an unknown stochastic environment and synthesise an optimal policy for
acting in it. Ensuring the safety and robustness of sequential decisions made
through a policy in such an environment is a key challenge for policies
intended for safety-critical scenarios. In this work, we investigate two
complementary problems: first, computing reach-avoid probabilities for
iterative predictions made with dynamical models, with dynamics described by
Bayesian neural network (BNN); second, synthesising control policies that are
optimal with respect to a given reach-avoid specification (reaching a "target"
state, while avoiding a set of "unsafe" states) and a learned BNN model. Our
solution leverages interval propagation and backward recursion techniques to
compute lower bounds for the probability that a policy's sequence of actions
leads to satisfying the reach-avoid specification. Such computed lower bounds
provide safety certification for the given policy and BNN model. We then
introduce control synthesis algorithms to derive policies maximizing said lower
bounds on the safety probability. We demonstrate the effectiveness of our
method on a series of control benchmarks characterized by learned BNN dynamics
models. On our most challenging benchmark, compared to purely data-driven
policies the optimal synthesis algorithm is able to provide more than a
four-fold increase in the number of certifiable states and more than a
three-fold increase in the average guaranteed reach-avoid probability. | Matthew Wicker, Luca Laurenti, Andrea Patane, Nicola Paoletti, Alessandro Abate, Marta Kwiatkowska | 2023-10-03T10:52:21Z | http://arxiv.org/abs/2310.01951v1 | # Probabilistic Reach-Avoid for Bayesian Neural Networks
###### Abstract
Model-based reinforcement learning seeks to simultaneously learn the dynamics of an unknown stochastic environment and synthesise an optimal policy for acting in it. Ensuring the safety and robustness of sequential decisions made through a policy in such an environment is a key challenge for policies intended for safety-critical scenarios. In this work, we investigate two complementary problems: first, computing reach-avoid probabilities for iterative predictions made with dynamical models, with dynamics described by Bayesian neural network (BNN); second, synthesising control policies that are optimal with respect to a given reach-avoid specification (reaching a "target" state, while avoiding a set of "unsafe" states) and a learned BNN model. Our solution leverages interval propagation and backward recursion techniques to compute lower bounds for the probability that a policy's sequence of actions leads to satisfying the reach-avoid specification. Such computed lower bounds provide safety certification for the given policy and BNN model. We then introduce control synthesis algorithms to derive policies maximizing said lower bounds on the safety probability. We demonstrate the effectiveness of our method on a series of control benchmarks characterized by learned BNN dynamics models. On our most challenging benchmark, compared to purely data-driven policies the optimal synthesis algorithm is able to provide more than a four-fold increase in the number of certifiable states and more than a three-fold increase in the average guaranteed reach-avoid probability.
keywords: Reinforcement Learning, Formal Verification, Certified Control Synthesis, Bayesian Neural Networks, Safety, Reach-while-avoid
## 1 Introduction
The capacity of deep learning to approximate complex functions makes it particularly attractive for inferring process dynamics in control and reinforcement learning problems (Schrittwieser et al., 2019). In safety-critical scenarios where the environment and system state are only partially known or observable (e.g., a robot with noisy actuators/sensors), Bayesian models have recently been investigated as a safer alternative to standard, deterministic, Neural Networks (NNs): the uncertainty estimates of Bayesian models can be propagated through the system decision pipeline to enable safe decision making despite unknown system conditions (McAllister and Rasmussen, 2017; Carbone et al., 2020; Depeweg et al., 2017). In particular, _Bayesian Neural Networks_ (BNNs) retain the same advantages of NNs (relative to their approximation capabilities) and also enable reasoning about uncertainty in a principled probabilistic manner (Neal, 2012; Murphy, 2012), making them very well-suited to tackle safety-critical problems.
In problems of sequential planning, time-series forecasting, and model-based reinforcement learning, evaluating a model with respect to a control policy (or strategy) requires making several predictions that are mutually dependent across time (Liang, 2005; Deisenroth and Rasmussen, 2011). While multiple models can be learned for each time step, a common setting is for these predictions to be made iteratively by the same machine learning model (Huang and Rosendo, 2020), where the state of the predicted model at each step is a function of the model state at the previous step and possibly of an action (from the policy). We refer to this setting as _iterative predictions_.
Unfortunately, performing iterative predictions with BNN models poses several practical issues. In fact, BNN models output probability distributions, so that at each successive timestep the BNN needs to be evaluated over a probability distribution, rather than a fixed input point - thus posing the problem of successive predictions over a stochastic input. Even when the posterior distribution of the BNN weights is inferred using analytical approximations, the deep and non-linear structure of the network makes the resulting predictive distribution analytically intractable (Neal, 2012). In iterative prediction settings, the problem is compounded and exacerbated by the fact that one would have to evaluate the BNN, sequentially, over a distribution that cannot be computed analytically (Depeweg et al., 2017). Hence, computing sound, formal bounds on the probability of BNN-based iterative predictions remains an open problem. Such bounds would enable one to provide safety guarantees over a given (or learned) control policy, which is a necessary precondition before deploying the policy in a real-world environment (Polymenakos et al., 2020; Vinogradska et al., 2016).
In this paper, we develop a new method for the computation of probabilistic guarantees for iterative predictions with BNNs over _reach-avoid_ specifications. A reach-avoid specification, also known as constrained reachability (Soudjani and Abate, 2013), requires that the trajectories of a dynamical system reach a goal/target region over a given (finite) time horizon, whilst avoiding a given set of states that are deemed "unsafe". Probabilistic reach-avoid is a key property for the formal analysis of stochastic processes (Abate et al., 2008), underpinning richer temporal logic specifications: its computation is the key component for probabilistic model checking algorithms for various temporal logics such as PCTL, csLTL, or BLTL (Kwiatkowska et al., 2007; Cauchi et al., 2019).
Even though the exact computation of reach-avoid probabilities for iterative prediction with BNNs is in general not analytically possible, with our method, we can derive a guaranteed (conservative) lower bound by solving a backward iterative problem obtained via a discretisation of the state space. In particular, starting from the final time step and the goal region, we back-propagate the probability lower bounds for each discretised portion of the state space. This backwards reachability approach leverages recently developed bound propagation techniques for BNNs (Wicker et al., 2020). In addition to providing guarantees for a given policy, we also devise methods to synthesise policies that are maximally certifiable, i.e., that maximize the lower bound of the reach-avoid probability. We first describe a numerical solution that, by using dynamic programming, can synthesize policies that are maximally safe. Then, in order to improve the scalability of our approach, we present a method for synthesizing approximately optimal strategies parametrised as a neural network. While our method does not yet scale to state-of-the-art reinforcement learning environments, we are able to verify and synthesise challenging non-linear control case studies.
We validate the effectiveness of our certification and synthesis algorithms on a series of control benchmarks. Our certification algorithm is able to produce non-trivial safety guarantees for each system that we test. On each proposed benchmark, we also show how our synthesis algorithm results in actions whose safety is significantly more certifiable than policies derived via deep reinforcement learning. Specifically, in a challenging planar navigation benchmark, our synthesis method results in policies whose certified safety
probabilities are eight to nine times higher than those for learned policies.
We further investigate how factors like the choice of approximate inference method, BNN architecture, and training methodology affect the quality of the synthesised policy. In summary, this paper makes the following contributions:
* We show how probabilistic reach-avoid for iterative predictions with BNNs can be formulated as the solution of a backward recursion.
* We present an efficient certification framework that produces a lower bound on probabilistic reach-avoid by relying on convex relaxations of the BNN model and said recursive problem definition.
* We present schemes for deriving a maximally certified policy (i.e., maximizing the lower bound on safety probability) with respect to a BNN and given reach-avoid specification.
* We evaluate our methodology on a set of control case studies to provide guarantees for learned and synthesized policies and conduct an empirical investigation of model-selection choices and their effect on the quality of policies synthesised by our method.
A previous version of this work (Wicker et al., 2021) has been presented at the thirty-seventh Conference on Uncertainty in Artificial Intelligence. Compared to the conference paper, in this work, we introduce several new contributions. Specifically, compared to Wicker et al. (2021) we present novel algorithms for the synthesis of control strategies based on both a numerical method and a neural network-based approach. Moreover, the experimental evaluation has been substantially extended to include, among others, an analysis of the role of approximate inference and NN architecture on safety certification and synthesis, as well as an in-depth analysis of the scalability of our methods. Further discussion of related works can be found in Section 7.
## 2 Bayesian Neural Networks
In this work, we consider fully-connected neural network (NN) architectures \(f^{w}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) parametrised by a vector \(w\in\mathbb{R}^{n_{w}}\) containing all the weights and biases of the network. Given a NN \(f^{w}\) composed of \(L\) layers, we denote by \(f^{w,1},...,f^{w,L}\) the layers of \(f^{w}\) and we have that \(w=\left(\{W_{i}\}_{i=1}^{L}\right)\cup\left(\{b_{i}\}_{i=1}^{L}\right)\), where \(W_{i}\) and \(b_{i}\) represent the weights and biases of the \(i\)-th layer of \(f^{w}.\) For \(x\in\mathbb{R}^{m}\), the output of layer \(i\in\{1,...,L\}\) can be explicitly written as \(f^{w,i}(x)=a(W_{i}f^{w,i-1}(x)+b_{i})\) with \(f^{w,1}(x)=a(W_{1}x+b_{1})\), where \(a:\mathbb{R}\rightarrow\mathbb{R}\) is the activation function, applied component-wise. We assume that \(a\) is a continuous monotonic function, which holds for the vast majority of activation functions used in practice such as sigmoid, ReLU, and tanh (Goodfellow et al., 2016). This guarantees that \(f^{w}\) is a continuous function.
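The layer recursion above can be transcribed directly. The following is a minimal numpy sketch of the forward pass \(f^{w}\); the network shape and parameter values are illustrative, not taken from any trained model.

```python
import numpy as np

def forward(x, weights, biases, act=np.tanh):
    """Forward pass of a fully-connected network f^w, following the
    layer recursion f^{w,i}(x) = a(W_i f^{w,i-1}(x) + b_i), with the
    activation applied component-wise at every layer."""
    h = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        h = act(W @ h + b)
    return h

# A 2-layer network R^2 -> R^1 with arbitrary (illustrative) parameters.
weights = [np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([[1.0, 1.0]])]
biases = [np.zeros(2), np.zeros(1)]
y = forward(np.array([0.3, -0.2]), weights, biases)
```

Since tanh is continuous and monotonic, `forward` is a continuous function of its input, matching the assumption made in the text.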
Bayesian Neural Networks (BNNs), denoted by \(f^{\mathbf{w}}\), extend NNs by placing a prior distribution over the network parameters, \(p_{\mathbf{w}}(w)\), with \(\mathbf{w}\) being the vector of random variables associated to the parameter vector \(w\). Given a dataset \(\mathcal{D}\), training a BNN on \(\mathcal{D}\) requires computing the posterior distribution, \(p_{\mathbf{w}}(w|\mathcal{D})\), which can be obtained via Bayes' rule (Neal, 2012). Unfortunately, because of the non-linearity introduced by the neural network architecture, the computation of the posterior is generally intractable. Hence, various approximation methods have been studied to perform inference with BNNs in practice. Among these methods, we consider Hamiltonian Monte Carlo (HMC) (Neal, 2012) and Variational Inference (VI) (Blundell et al., 2015). In our experimental evaluation in Section 6.3 we employ both HMC and VI.
Hamiltonian Monte Carlo (HMC).HMC proceeds by defining a Markov chain whose invariant distribution is \(p_{\mathbf{w}}(w|\mathcal{D})\), and relies on Hamiltonian dynamics to speed up the exploration of the space. Differently from VI discussed below, HMC does not make any parametric assumptions on the form of the posterior distribution and is asymptotically correct. The result of HMC is a set of samples that approximates \(p_{\mathbf{w}}(w|\mathcal{D})\). We refer interested readers to (Neal et al., 2011; Izmailov et al., 2021) for further details.
Variational Inference (VI).VI proceeds by finding a Gaussian approximating distribution \(q(w)\approx p_{\mathbf{w}}(w|\mathcal{D})\) in a trade-off between approximation accuracy and scalability. The core idea is that \(q(w)\) depends on some hyperparameters that are then iteratively optimized by minimizing a divergence measure between \(q(w)\) and \(p_{\mathbf{w}}(w|\mathcal{D})\). Samples can then be efficiently extracted from \(q(w)\). See (Khan and Rue, 2021; Blundell et al., 2015) for recent developments in variational inference in deep learning.
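To make the VI setting concrete, the sketch below draws weight samples from a mean-field Gaussian \(q(w)\) (one independent Gaussian per parameter) and uses them for Monte Carlo prediction. All parameter values here are hypothetical placeholders; a trained \(q(w)\) would come from optimising a divergence-based objective such as the ELBO, which is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative mean-field Gaussian variational posterior q(w) over the
# parameters of a 1-hidden-layer network (2 inputs -> 4 hidden -> 2 out).
shapes = [(4, 2), (4,), (2, 4), (2,)]
mu = [np.zeros(s) for s in shapes]          # variational means
log_std = [np.full(s, -2.0) for s in shapes]  # log standard deviations

def sample_weights():
    """One draw w ~ q(w) via the reparameterisation trick."""
    return [m + np.exp(s) * rng.standard_normal(m.shape)
            for m, s in zip(mu, log_std)]

def f(x, w):
    W1, b1, W2, b2 = w
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Monte Carlo estimate of the predictive mean at input x, approximating
# the marginalisation over the (approximate) posterior.
x = np.array([0.5, -0.5])
preds = np.stack([f(x, sample_weights()) for _ in range(200)])
pred_mean = preds.mean(axis=0)
```

The spread of `preds` around `pred_mean` is exactly the epistemic uncertainty that the certification method of Section 4 must propagate soundly, rather than by sampling.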
## 3 Problem Formulation
Given a trained BNN \(f^{\mathbf{w}}\) we consider the following discrete-time stochastic process given by iterative predictions of the BNN:
\[\mathbf{x}_{k}=f^{\mathbf{w}}(\mathbf{x}_{k-1},\mathbf{u}_{k-1})+\mathbf{v}_{k}, \quad\mathbf{u}_{k}=\pi_{k}(\mathbf{x}_{k}),\quad k\in\mathbb{N}_{>0}, \tag{1}\]
where \(\mathbf{x}_{k}\) is a random variable taking values in \(\mathbb{R}^{n}\) modelling the state of System (1) at time \(k\), \(\mathbf{v}_{k}\) is a random variable modelling an additive noise term with stationary, zero-mean Gaussian distribution \(\mathcal{N}(0,\sigma^{2}\cdot I)\), where \(I\) is the identity matrix of size \(n\times n\). \(\mathbf{u}_{k}\) represents the action applied at time \(k\), selected from a compact set \(\mathcal{U}\subset\mathbb{R}^{c}\) by a (deterministic) feedback Markov strategy (a.k.a. policy, or controller) \(\pi:\mathbb{R}^{n}\times\mathbb{N}\to\mathcal{U}\).1
Footnote 1: We can limit ourselves to consider deterministic Markov strategies as they are optimal in our setting (Bertsekas and Shreve, 2004; Abate et al., 2008). Also, in the following, we denote with \(\pi\) the time-varying policy described, at each step \(k\), by policy \(\pi_{k}:\mathbb{R}^{n}\to\mathcal{U}\).
The model in Eqn. (1) is commonly employed to represent noisy dynamical models driven by a BNN and controlled by the policy \(\pi\)(Depeweg et al., 2017). In this setting, \(f^{\mathbf{w}}\) defines the transition probabilities of the model and, correspondingly, \(p(\bar{x}|(x,u),\mathcal{D})\) is employed to describe the _posterior predictive distribution_, namely the probability density of the model state at the next time step being \(\bar{x}\), given that the current state and action are \((x,u)\), as:
\[p(\bar{x}|(x,u),\mathcal{D})=\int_{\mathbb{R}^{n_{w}}}\mathcal{N}(\bar{x}\mid f ^{w}(x,u),\sigma^{2}\cdot I)p_{\mathbf{w}}(w|\mathcal{D})dw, \tag{2}\]
where \(\mathcal{N}(\cdot\mid f^{w}(x,u),\sigma^{2}\cdot I)\) is the Gaussian likelihood induced by noise \(\mathbf{v}_{k}\) and centered at the NN output (Neal, 2012).
Observe that the posterior predictive distribution induces a probability density function over the state space. In iterative prediction settings, this implies that at each step the state vector \(\mathbf{x}_{k}\) fed into the BNN is a random variable. Hence, an \(N\)-step _trajectory_ of the dynamic model in Eqn (1) is a sequence of states \(x_{0},...,x_{N}\in\mathbb{R}^{n}\) sampled from the predictive distribution. As a consequence, a principled propagation of the BNN uncertainty through consecutive time steps poses the problem of predictions over stochastic inputs. In Section 4.1 we will tackle this problem for the particular case of reach-avoid properties, by designing a backward computation scheme that starts its calculations from the goal region, and proceeds according to Bellman iterations (Bertsekas and Shreve, 2004).
We remark that \(p(\bar{x}|(x,u),\mathcal{D})\) is defined by marginalizing over \(p_{\mathbf{w}}(w|\mathcal{D})\), hence, the particular \(p(\bar{x}|(x,u),\mathcal{D})\) depends on the specific approximate inference method employed to estimate the posterior distribution. As such, the results that we derive are valid w.r.t. a specific BNN posterior.
Probability MeasureFor an action \(u\in\mathbb{R}^{c}\), a subset of states \(X\subseteq\mathbb{R}^{n}\) and a starting state \(x\in\mathbb{R}^{n}\), we call \(T(X|x,u)\) the _stochastic kernel_ associated (and equivalent (Abate et al., 2008)) to the dynamical model of Equation (1). Namely, \(T(X|x,u)\) describes the one-step transition probability of the model of Eqn. (1) and is defined by integrating the predictive posterior distribution with input \((x,u)\) over \(X\), as:
\[T(X|x,u)= \int_{X}p(\bar{x}|(x,u),\mathcal{D})d\bar{x}. \tag{3}\]
In what follows, it will be convenient at times to work over the space of parameters of the BNN. To do so, we can re-write the stochastic kernel by combining Equations (2) and (3) and applying Fubini's theorem (Fubini, 1907) to switch the integration order, thus obtaining:
\[T(X|x,u)=\int_{\mathbb{R}^{n_{w}}}\left[\int_{X}\mathcal{N}(\bar{x}|f^{w}(x,u ),\sigma^{2}\cdot I)d\bar{x}\right]p_{\mathbf{w}}(w|\mathcal{D})dw. \tag{4}\]
From this definition of \(T\) it follows that, under a strategy \(\pi\) and for a given initial condition \(x_{0}\), \(\mathbf{x}_{k}\) is a Markov process with a well-defined probability measure \(\Pr\) uniquely generated by the stochastic kernel \(T\)(Bertsekas and Shreve, 2004, Proposition 7.45) and such that for \(X_{0},X_{k}\subseteq\mathbb{R}^{n}\):
\[\Pr[\mathbf{x}_{0}\in X_{0}]=\mathbf{1}_{X_{0}}(x_{0}),\] \[\Pr[\mathbf{x}_{k}\in X_{k}|\mathbf{x}_{k-1}=x,\pi]=T(X_{k}|x, \pi_{k-1}(x)),\]
where \(\mathbf{1}_{X_{0}}\) is the indicator function (that is, \(1\) if \(x\in X_{0}\) and \(0\) otherwise). Having a definition of \(\Pr\) allows one to make probabilistic statements over the stochastic model in Eqn (1).
**Remark 1**.: _Note that, as is common in the literature (Depeweg et al., 2017), according to the definition of the probability measure \(\Pr\) we marginalise over the posterior distribution at each time step. Consequently, according to our modelling framework, the weights of the BNN are not kept fixed during each trajectory, but we re-sample from \(\mathbf{w}\) at each time step._
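The sampling semantics of Eqn (1) under Remark 1 can be illustrated by a short rollout routine: at every step a fresh weight sample is drawn and Gaussian noise is added. The dynamics, policy, and "posterior" below are hypothetical stand-ins, not a trained BNN.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_weights():
    # Stand-in for a draw from the BNN posterior p(w | D): here, a
    # random perturbation of a nominal linear dynamics matrix.
    return np.eye(2) * 0.9 + 0.01 * rng.standard_normal((2, 2))

def f(x, u, W):
    # Stand-in for the BNN mean function f^w(x, u).
    return W @ x + np.array([0.0, 0.1]) * u

def policy(x, k):
    return -x[0]          # illustrative feedback law pi_k(x)

def rollout(x0, N, sigma=0.05):
    """One N-step trajectory of Eqn (1): weights are re-sampled and
    noise v_k ~ N(0, sigma^2 I) is added at every time step."""
    xs = [np.asarray(x0, dtype=float)]
    for k in range(N):
        W = sample_weights()                      # fresh w each step
        u = policy(xs[-1], k)
        noise = sigma * rng.standard_normal(2)
        xs.append(f(xs[-1], u, W) + noise)
    return np.stack(xs)

traj = rollout([1.0, 0.0], N=10)
```

Note the per-step call to `sample_weights()`: keeping one weight sample fixed for the whole trajectory would correspond to a different (non-Markov) semantics than the one adopted here.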
### Problem Statements
We consider two problems concerning, respectively, the certification and the control of dynamical systems modelled by BNNs. We first consider safety certification with respect to probabilistic reach-avoid specifications. That is, we seek to compute the probability that from a given state, under a selected control policy, an agent navigates to the goal region without encountering any unsafe states. Next, we consider the formal synthesis of policies that maximise this probability and thus attain maximal certifiable safety.
**Problem 1** (Computation of Probabilistic Reach-Avoid): _Given a strategy \(\pi\), a goal region \(\mathrm{G}\subseteq\mathbb{R}^{n}\), a finite-time horizon \([0,N]\subseteq\mathbb{N},\) and a safe set \(\mathrm{S}\subseteq\mathbb{R}^{n}\) such that \(\mathrm{G}\cap\mathrm{S}=\emptyset\), compute for any \(x_{0}\in\mathrm{G}\cup\mathrm{S}\)_
\[P_{reach}(\mathrm{G},\mathrm{S},x_{0},[0,N]|\pi)=\] \[\Pr\big{[}\exists k\in[0,N],\mathbf{x}_{k}\in\mathrm{G}\,\wedge \forall 0\leq k^{\prime}<k,\mathbf{x}_{k^{\prime}}\in\mathrm{S}\mid\mathbf{x}_{0}= x_{0},\pi\big{]}. \tag{5}\]
Outline of the ApproachIn Section 4.1 we show how \(P_{reach}(\mathrm{G},\mathrm{S},x_{0},[0,N]|\pi)\) can be formulated as the solution of a backward iterative computational procedure, where the uncertainty of the BNN is propagated backward in time, starting from the goal region. Our approach allows us to compute a sound lower bound on \(P_{reach}\), thus guaranteeing that \(\mathbf{x}_{k}\), as defined in Eqn (1), satisfies the specification with a given probability. This is achieved by extending existing lower bounding techniques developed to certify BNNs (Wicker et al., 2020) and applying these at each propagation step through the BNN.
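Before turning to certified bounds, it is useful to contrast them with the naive alternative: a plain Monte Carlo estimate of \(P_{reach}\) obtained by sampling trajectories and counting successes. Such an estimate carries no formal guarantee, which is precisely why the lower-bounding machinery below is needed. The 1D dynamics, regions, and policy here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def step(x, u, sigma=0.1):
    # Stand-in for one BNN transition, with posterior and noise sampled.
    drift = 0.8 + 0.05 * rng.standard_normal()
    return drift * x + u + sigma * rng.standard_normal()

GOAL = (0.9, 1.1)    # goal region G
SAFE = (-0.5, 0.9)   # safe region S (disjoint from G)

def reach_avoid(x0, N, policy):
    """1 if the trajectory enters G within N steps while staying in S."""
    x = x0
    for k in range(N):
        if GOAL[0] <= x <= GOAL[1]:
            return 1.0
        if not (SAFE[0] <= x <= SAFE[1]):
            return 0.0
        x = step(x, policy(x))
    return 1.0 if GOAL[0] <= x <= GOAL[1] else 0.0

policy = lambda x: 0.25          # constant push toward the goal
p_hat = np.mean([reach_avoid(0.0, 12, policy) for _ in range(2000)])
```

`p_hat` converges to \(P_{reach}\) only in expectation and gives no one-sided guarantee; the certified approach instead returns a value that is provably below the true probability.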
Note that, in Problem 1, the strategy \(\pi\) is provided, and the goal is to quantify the probability with which the trajectories of \(\mathbf{x}_{k}\) satisfy the given specification. In Problem 2 below, we expand the previous problem and seek to synthesise a controller \(\pi\) that maximizes \(P_{reach}\). The general formulation of this optimization is given below.
**Problem 2** (Strategy Synthesis for Probabilistic Reach-Avoid): _For an initial state \(x_{0}\in\mathrm{G}\cup\mathrm{S}\), and a finite time horizon \(N\), find a strategy \(\pi^{*}:\mathbb{R}^{n}\times\mathbb{N}\to\mathcal{U}\) such that_
\[\pi^{*}=\operatorname*{arg\,max}_{\pi}P_{reach}(\mathrm{G},\mathrm{S},x_{0},[0,N]\mid\pi). \tag{6}\]
In Section 5, we will provide specific schemes for synthesizing optimal strategies when \(\pi\) is either a look-up table or a deterministic neural network.
Outline of the ApproachTo solve this problem, we notice that the backward iterative procedure outlined to solve Problem 1 has a substructure such that dynamic programming will allow us to compute optimal actions for each state that we verify, thus producing an optimal policy with respect to the given posterior and reach-avoid specification. With low-dimensional or discrete action spaces, we can then derive a tabular policy by solving the resulting dynamic programming problem. For higher-dimensional action spaces instead, in Section 5.1 we consider (generalising) policies represented as neural networks.
## 4 Methodology
In this section, we illustrate the methodology used to compute lower bounds on the reach-avoid probability, as described in Problem 1. We begin by encoding the reach-avoid probability through a sequence of value functions.
### Certifying Reach-Avoid Specifications
We begin by showing that \(P_{reach}(\mathrm{G},\mathrm{S},x,[k,N]|\pi)\) can be obtained as the solution of a backward iterative procedure, which allows us to compute a lower bound on its value. In particular, given a time \(0\leq k<N\) and a strategy \(\pi\), consider the value functions \(V_{k}^{\pi}:\mathbb{R}^{n}\to[0,1]\), recursively defined as
\[V_{N}^{\pi}(x) =\mathbf{1}_{\mathrm{G}}(x),\] \[V_{k}^{\pi}(x) =\mathbf{1}_{\mathrm{G}}(x)+\mathbf{1}_{\mathrm{S}}(x)\!\!\! \int\!\!V_{k+1}^{\pi}(\bar{x})p\big{(}\bar{x}|(x,\pi_{k}(x)),\mathcal{D}\big{)} d\bar{x}. \tag{7}\]
Intuitively, \(V_{k}^{\pi}\) is computed starting from the goal region \(\mathrm{G}\) at \(k=N\), where it is initialised at value \(1\). The computation proceeds backwards at each state \(x\), by combining the current values with the transition probabilities from Eqn (1). The following proposition, proved inductively over time in the Supplementary Material, guarantees that \(V_{0}^{\pi}(x)\) is indeed equal to \(P_{reach}(\mathrm{G},\mathrm{S},x,[0,N]|\pi)\).
**Proposition 1**.: _For \(0\leq k\leq N\) and \(x_{0}\in\mathrm{G}\cup\mathrm{S},\) it holds that_
\[P_{reach}(\mathrm{G},\mathrm{S},x_{0},[k,N]|\pi)=V_{k}^{\pi}(x_{0}).\]
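On a finite-state abstraction where the one-step transition probabilities are known exactly, the recursion of Eqn (7) can be computed in closed form, which makes the structure of the backward pass easy to see. The 5-region transition matrix below is a hypothetical example, not derived from any BNN.

```python
import numpy as np

# Finite abstraction: 5 regions, region 4 is the goal G, region 0 is
# unsafe; T[i, j] plays the role of the kernel T(q_j | q_i, pi(q_i)).
T = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],   # unsafe region is absorbing
    [0.2, 0.3, 0.5, 0.0, 0.0],
    [0.0, 0.2, 0.3, 0.5, 0.0],
    [0.0, 0.0, 0.2, 0.3, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],   # goal region is absorbing
])
goal = np.array([0, 0, 0, 0, 1], dtype=float)
safe = np.array([0, 1, 1, 1, 0], dtype=float)

def reach_avoid_values(T, goal, safe, N):
    """Backward recursion of Eqn (7) on the finite abstraction:
    V_N = 1_G, and V_k = 1_G + 1_S * E[V_{k+1}]."""
    V = goal.copy()
    for _ in range(N):
        V = goal + safe * (T @ V)
    return V

V0 = reach_avoid_values(T, goal, safe, N=10)
```

For the BNN model the expectation `T @ V` is replaced by an intractable integral over the posterior predictive, which is exactly what the lower-bounding scheme of the next section replaces with sound interval computations.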
The backward recursion in Eqn (7) does not generally admit a solution in closed-form, as it would require integrating over the BNN posterior predictive distribution, which is in general analytically intractable. In the following section, we present a computational scheme utilizing convex relaxations to lower bound \(P_{reach}\).
### Lower Bound on \(P_{reach}\)
We develop a computational approach based on the discretisation of the state space, which allows convex relaxation methods such as (Wicker et al., 2020) to be used. The proposed computational approach is illustrated in Figure 1 and formalized in Section 4.3. Let \(Q=\{q_{1},...,q_{n_{q}}\}\) be a partition of \(\mathrm{S}\cup\mathrm{G}\) into \(n_{q}\) regions and denote by \(z:\mathbb{R}^{n}\to Q\) the function that associates to a state in \(\mathbb{R}^{n}\) the corresponding partitioned state in \(Q\). For each \(0\leq k\leq N\) we iteratively build a set of functions \(K_{k}^{\pi}:Q\rightarrow[0,1]\) such that for all \(x\in\mathrm{G}\cup\mathrm{S}\) we have that \(K_{k}^{\pi}(z(x))\leq V_{k}^{\pi}(x)\). Intuitively, \(K_{k}^{\pi}\) provides a lower bound for the value functions used in the computation of \(P_{reach}\).
Figure 1: Examples of functions \(K_{N-1}^{\pi}\) (left) and \(K_{N-2}^{\pi}\) (right), which are lower bounds of the value functions \(V_{k}^{\pi}\). On the left, we consider the first step of our backward algorithm, where we compute \(K_{N-1}^{\pi}(q)\) by computing the probability that \(\mathbf{x}_{N}\in\mathrm{G}\) given that \(\mathbf{x}_{N-1}\in q.\) On the right, we consider the subsequent step. We outline the state we want to verify in red and the goal region in green. With the orange arrow, we represent the \(0.95\) transition probability of the BNN dynamical model, and in pink we represent the worst-case probabilities spanned by the BNN output. On top, we show where each of these key terms comes into play in Eqn (9).
The functions \(K_{k}^{\pi}\) are obtained by propagating backward the BNN predictions from time \(N\), where we set \(K_{N}^{\pi}(q)=\mathbf{1}_{\mathrm{G}}(q)\), with \(\mathbf{1}_{\mathrm{G}}(q)\) being the indicator function (that is, \(1\) if \(q\subseteq\mathrm{G}\) and \(0\) otherwise). Then, for each \(k<N\), we first discretize the set of possible probabilities in \(n_{p}\) sub-intervals \(0=v_{0}\leq v_{1}\leq...\leq v_{n_{p}}=1\). Hence, for any \(q\in Q\) and probability interval \([v_{i-1},v_{i}]\), one can compute a lower bound, \(R(q,k,\pi,i)\), on the probability that, starting from any state in \(q\) at time \(k\), we reach in the next step a region that has probability \(\in[v_{i-1},v_{i}]\) of safely reaching the goal region. The resulting values are used to build \(K_{k}^{\pi}\) (as we will detail in Eqn (9)). For a given \(q\subset\mathrm{S}\), \(K_{k}^{\pi}(q)\) is obtained as the sum over \(i\) of \(R(q,k,\pi,i)\) multiplied by \(v_{i-1}\), i.e., the lowest value that \(K_{k+1}^{\pi}\) attains in all the states of the \(i\)-th region. Note that the discretisation of the probability values does not have to be uniform, but can be adaptive for each \(q\in Q\). A heuristic for picking the value of thresholds \(v_{i}\) will be given in Algorithm 1. In what follows, we formalise the intuition behind this computational procedure.
### Lower Bounding of the Value Functions
For a given strategy \(\pi\), we consider a constant \(\eta\in(0,1)\) and \(\epsilon=\sqrt{2\sigma^{2}}\mathrm{erf}^{-1}(\eta)\), which are used to bound the value of the noise, \(\mathbf{v}_{k}\), at any given time. Intuitively, \(\eta\) represents the proportion of observational error we consider.2 Then, for \(0\leq k<N\), \(K_{k}^{\pi}:Q\to[0,1]\) are defined recursively as follows:
Footnote 2: The threshold is such that it holds that \(Pr(|\mathbf{v}_{k}^{(i)}|\leq\epsilon)=\eta\). In the experiments of Section 6 we select \(\eta=0.99\).
\[K_{N}^{\pi}(q) =\mathbf{1}_{\mathrm{G}}(q), \tag{8}\] \[K_{k}^{\pi}(q) =\mathbf{1}_{\mathrm{G}}(q)+\mathbf{1}_{\mathrm{S}}(q)\sum_{i=1 }^{n_{p}}v_{i-1}R(q,k,\pi(q),i), \tag{9}\]
where
\[R(q,k,\pi(q),i)=\eta^{n}\int_{H_{k,i}^{q,\pi,\epsilon}}p_{\mathbf{w}}(w|\mathcal{D})dw, \tag{10}\]
\[H_{k,i}^{q,\pi,\epsilon}=\{w\in\mathbb{R}^{n_{w}}\mid\forall x\in q,\,\forall\gamma\in[-\epsilon,\epsilon]^{n},\text{ it holds that }v_{i-1}\leq K_{k+1}^{\pi}(q^{\prime})\leq v_{i},\text{ with }q^{\prime}=z(f^{w}(x,\pi_{k}(x))+\gamma)\}.\]
The key component for the above backward recursion is \(R(q,k,\pi,i)\), which bounds the probability that, starting from \(q\) at time \(k\), we have that \(\mathbf{x}_{k+1}\) will be in a region \(q^{\prime}\) such that \(K_{k+1}^{\pi}(q^{\prime})\in[v_{i-1},v_{i}]\). By definition, the set \(H_{k,i}^{q,\pi,\epsilon}\) collects the weights for which the BNN maps every state covered by \(q\), under action \(\pi(q)\), into a region \(q^{\prime}\) whose value \(K_{k+1}^{\pi}(q^{\prime})\) lies in \([v_{i-1},v_{i}]\). Given this, it is clear that integrating the posterior \(p_{\mathbf{w}}(w|\mathcal{D})\) over \(H_{k,i}^{q,\pi,\epsilon}\) returns the probability mass of system (1) transitioning in one time step from \(q\) to a region \(q^{\prime}\) with value in \([v_{i-1},v_{i}]\). The computation of Eqn (9) then reduces to computing the set of weights \(H_{k,i}^{q,\pi,\epsilon}\), which we call the _projecting weight set_. A method to compute a safe under-approximation \(\bar{H}\subseteq H_{k,i}^{q,\pi,\epsilon}\) is discussed below. Before describing that, we analyze the correctness of the above recursion.
**Theorem 1**.: _Given \(x\in\mathbb{R}^{n}\), for any \(k\in\{0,...,N\}\) and \(q=z(x)\), assume that \(H_{k,i}^{q,\pi,\epsilon}\cap H_{k,j}^{q,\pi,\epsilon}=\emptyset\) for \(i\neq j.\) Then:_
\[\inf_{x\in q}V_{k}^{\pi}(x)\geq K_{k}^{\pi}(q).\]
A proof of Theorem 1 is given in the Supplementary Material. Note that the assumption on the null intersection between different projecting weight sets required in Theorem 1 can always be enforced by taking their intersection and complement.
### Computation of Projecting Weight Set
Theorem 1 allows us to compute a safe lower bound to Problem 1, by relying on an abstraction of the state space, that is, through the computation of \(K_{0}^{\pi}(q)\). This can be evaluated once the projecting set of weight values \(H_{k,i}^{q,\pi,\epsilon}\) associated to \([v_{i-1},v_{i}]\) is known.3 Unfortunately, direct computation of \(H_{k,i}^{q,\pi,\epsilon}\) is intractable. Nevertheless, a method for its lower bounding was developed by Wicker et al. (2020) in the context of adversarial perturbations for one-step BNN predictions, and can be directly adapted to our settings.
Footnote 3: In the case of Gaussian VI the integral of Equation (10) can be computed in terms of the _erf_ function, whereas more generally Monte Carlo or numerical integration techniques can be used.
The idea is that an under-approximation \(\bar{H}\subseteq H_{k,i}^{q,\pi,\epsilon}\) is built by sampling weight boxes of the form \(\hat{H}=[w^{L},w^{U}]\), according to the posterior, and checking whether:
\[v_{i-1}\leq K_{k+1}^{\pi}(z(f^{w}(x,\pi_{k}(x))+\gamma))\leq v_{i},\] \[\forall x\in q,\,\forall w\in\hat{H},\,\forall\gamma\in\left[- \epsilon,\epsilon\right]^{n}. \tag{11}\]
Finally, \(\bar{H}\) is built as a disjoint union of boxes \(\hat{H}\) satisfying the above condition. For a full discussion of the details of this method we refer interested readers to (Wicker et al., 2020). In order to apply this method to our setting, we propagate the abstract state \(q\) through the policy function \(\pi_{k}(x)\), so as to obtain a bounding box \(\widehat{\Pi}=[\pi^{L},\pi^{U}]\) such that \(\pi^{L}\leq\pi_{k}(x)\leq\pi^{U}\) for all \(x\in q\). In the experiments, this bounding is only necessary when \(\pi_{k}(x)\) is given by an NN controller, for which bound propagation of NNs can be used for the computation of \(\widehat{\Pi}\)(Gowal et al., 2018; Gehr et al., 2018).
The results of Proposition 2 and Proposition 3 from Wicker et al. (2020) can then be used to propagate \(q\), \(\widehat{\Pi}\) and \(\hat{H}\) through the BNN. For discrete posteriors (e.g., those resulting from HMC) one can use the method described by Gowal et al. (2018) (Equations 6 and 7). Propagation of \(q\) and \(\widehat{\Pi}\) amounts to using these methods to compute values \(f^{L}_{q,\epsilon,k}\) and \(f^{U}_{q,\epsilon,k}\) such that, for all \(x\in q,\gamma\in[-\epsilon,\epsilon]^{n},w\in\hat{H}\), it holds that:
\[f^{L}_{q,\epsilon,k}\leq f^{w}(x,\pi_{k}(x))+\gamma\leq f^{U}_{q,\epsilon,k}. \tag{12}\]
Furthermore, \(f^{L}_{q,\epsilon,k}\) and \(f^{U}_{q,\epsilon,k}\) are differentiable w.r.t. the input vector (Gowal et al., 2018; Wicker et al., 2021a).
Finally, the two bounding values can be used to check whether or not the condition in Eqn (11) is satisfied, by simply checking whether \([f^{L}_{q,\epsilon,k},f^{U}_{q,\epsilon,k}]\) propagated through \(K_{k+1}^{\pi}\) is within \([v_{i-1},v_{i}]\). We highlight that computing this probability yields a conservative (lower-bound) estimate of \(R(q,k,\pi,i)\).
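The core primitive behind bounds of the form of Eqn (12) is interval propagation of a box of inputs and a box of weights through an affine layer. One standard way to realise it, sketched below, enumerates the four endpoint combinations of each scalar product \(w_{ij}x_{j}\); the full method of Wicker et al. (2020) composes such layer bounds through the whole network, and the example values here are purely illustrative.

```python
import numpy as np

def interval_affine(Wl, Wu, bl, bu, xl, xu):
    """Sound bounds on W x + b when both the input x and the parameters
    (W, b) range over boxes.  For each scalar product w * x the extrema
    are attained at one of the four endpoint combinations."""
    cands = np.stack([
        Wl * xl[None, :], Wl * xu[None, :],
        Wu * xl[None, :], Wu * xu[None, :],
    ])
    lo = cands.min(axis=0).sum(axis=1) + bl
    hi = cands.max(axis=0).sum(axis=1) + bu
    return lo, hi

def interval_relu(lo, hi):
    # Monotonic activations map interval endpoints to interval endpoints.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Illustrative usage: a weight box around a nominal 1x2 matrix and an
# input box playing the role of the abstract state q.
Wl = np.array([[0.9, -0.1]]); Wu = np.array([[1.1, 0.1]])
bl = np.array([-0.05]); bu = np.array([0.05])
xl = np.array([0.0, -1.0]); xu = np.array([1.0, 1.0])
lo, hi = interval_affine(Wl, Wu, bl, bu, xl, xu)
```

Because each term \(w_{ij}x_{j}\) in a row involves distinct variables, summing per-term extrema is exact for a single affine layer; looseness only accumulates when layers are composed.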
### Probabilistic Reach-Avoid Algorithm
In Algorithm 1 we summarize our approach for computing a lower bound for Problem 1. For simplicity of presentation, we consider the case \(n_{p}=2\) (i.e., we partition the range of probabilities in just two intervals \([0,v_{1}]\), \([v_{1},1]\); the case \(n_{p}>2\) follows similarly). The algorithm proceeds by first initializing the reach-avoid probability for the partitioned states \(q\) inside the goal region \(\mathrm{G}\) to \(1\), as per Eqn (8). Then, for each of the \(N\) time steps and for each one of the remaining abstract states \(q\), in line 4 we set the threshold probability \(v_{1}\) equal to the maximum value that \(K^{\pi}\) attains at the next time step over the states in the neighbourhood of \(q\) (which we capture with a hyper-parameter \(\rho_{x}>0\)). We found this heuristic for the choice of \(v_{1}\) to work well in practice (notice that the obtained bound is formal irrespective of the choice of \(v_{1}\), and different choices could potentially be explored). We then proceed with the computation of Eqn (9). This computation is performed in lines 5-15. First, we initialise to the null set the current under-approximation of the projecting weight set, \(\bar{H}\). We then sample \(n_{s}\) weight boxes \(\hat{H}\) by sampling weights from the posterior and expanding them with a heuristically selected margin \(\rho_{w}\) (lines 6-8). Then, for each of these sets, we first propagate the state \(q\), the policy function, and the weight set \(\hat{H}\) to build a box \(\bar{X}\) according to Eqn (12) (line 9), which is then accepted or rejected based on the value that \(K^{\pi}\) at the next time step attains in states in \(\bar{X}\) (lines 10-12). \(K^{\pi}_{k}(q)\) is then computed in line 15 by integrating \(p_{\mathbf{w}}(w|\mathcal{D})\) over the union of the accepted sets of weights.
**Input:** BNN model \(f^{\mathbf{w}}\), safe region S, goal region G, discretization \(Q\) of \(\mathrm{S}\cup\mathrm{G}\), time horizon \(N\), neural controller \(\pi\), number of BNN samples \(n_{s}\), weight margin \(\rho_{w}\), state space margin \(\rho_{x}\)
**Output:** Lower bound on \(V^{\pi}\)
```
1:For all \(0\leq k\leq N\) set \(K^{\pi}_{k}(q)=1\) iff \(q\subseteq\mathrm{G}\) and \(0\) otherwise
2:for\(k\gets N\) to \(1\)do
3:for\(q\in Q\setminus\mathrm{G}\)do
4:\(v_{1}\leftarrow\max_{x\in[q-\rho_{x},q+\rho_{x}]}K^{\pi}_{k+1}(z(x))\)
5:\(\bar{H}\leftarrow\emptyset\quad\)# \(\bar{H}\) is the set of safe weights
6:for desired number of samples, \(n_{s}\)do
7:\(w^{\prime}\sim P(w|\mathcal{D})\)
8:\(\hat{H}\leftarrow[w^{\prime}-\rho_{w},w^{\prime}+\rho_{w}]\)
9:\(\bar{X}\leftarrow[f^{L}_{q,\epsilon,k},f^{U}_{q,\epsilon,k}]\quad\)# Computed according to Eqn (12)
10:if\(\min_{x\in\bar{X}}K^{\pi}_{k+1}(z(x))\geq v_{1}\)then
11:\(\bar{H}\leftarrow\bar{H}\bigcup\hat{H}\)
12:endif
13:endfor
14: Ensure \(H_{i}\cap H_{j}=\emptyset\quad\forall H_{i},H_{j}\in\bar{H},\ i\neq j\)
15:\(K^{\pi}_{k}(q)=v_{1}\cdot\eta^{n}\int_{\bar{H}}p_{\mathbf{w}}(w|\mathcal{D})dw\quad\)(Eqn (9))
16:endfor
17:endfor
18:return\(K^{\pi}\)
```
**Algorithm 1** Probabilistic Reach-Avoid for BNNs
## 5 Strategy Synthesis
We now focus on synthesising a strategy that maximizes our lower bound on \(P_{reach}\), thus solving Problem 2. Notice that, while no global optimality claim can be made about the strategy that we obtain, maximising the lower bound guarantees that the true reach-avoid probability will still be greater than the improved bound obtained after the maximisation.
**Definition 1**.: _A strategy \(\pi^{*}\) is called maximally certified (max-cert), w.r.t. the discretised value function \(K^{\pi}\), if and only if, for all \(x\in\mathrm{G}\cup\mathrm{S}\), it satisfies_
\[K_{0}^{\pi^{*}}(z(x))=\sup_{\pi}K_{0}^{\pi}(z(x)),\]
_that is, the strategy \(\pi^{*}\) maximises the lower bound of \(P_{reach}\)._
It follows that, if \(K_{0}^{\pi^{*}}(z(x))>1-\delta\) for all \(x\in\mathrm{G}\cup\mathrm{S}\), then the max-cert strategy \(\pi^{*}\) is a solution of Problem 2. Note that a max-cert strategy is guaranteed to exist when the set of admissible controls \(\mathcal{U}\) is compact (Bertsekas and Shreve, 2004, Lemma 3.1), as we assume in this work. In the next theorem, we show that a max-cert strategy can be computed via dynamic programming with a backward recursion similar to that of Eqn (9).
**Theorem 2**.: _For \(0\leq k<N\) and \(0=v_{0}<...<v_{n_{p}}=1,\) define the functions \(K_{k}^{*}:\mathbb{R}^{n}\to[0,1]\) recursively as follows_
\[K_{k}^{*}(q)=\sup_{u\in\mathcal{U}}\big{(}\mathbf{1}_{\mathrm{G}}(q)+\mathbf{ 1}_{\mathrm{S}}(q)\sum_{i=1}^{n_{p}}v_{i}R(q,k,u,i)\big{)},\]
_where \(R(q,k,u,i)\) and \(H_{k,i}^{q,u,\epsilon}\) are defined as in Eqn (10). If \(\pi^{*}\) is s.t. \(K_{0}^{*}=K_{0}^{\pi^{*}}\), then \(\pi^{*}\) is a max-cert strategy. Furthermore, for any \(x\), it holds that \(K_{0}^{\pi^{*}}(z(x))\leq P_{reach}(\mathrm{G},\mathrm{S},[0,N],x|\pi^{*})\)._
Theorem 2 is a direct consequence of the Bellman principle of optimality (Abate et al., 2008, Theorem 2) and it guarantees that for each state \(q\in\mathrm{S}\) and time \(k\), we have that \(\pi^{*}(q,k)=\arg\max_{u\in\mathcal{U}}\sum_{i=1}^{n_{p}}v_{i}R(q,k,u,i)\).
In Algorithm 2 we present a numerical scheme based on Theorem 2 to find a max-cert policy \(\pi^{*}\). Note that the optimization problem to be solved at each time step and state, i.e. \(\arg\max_{u\in\mathcal{U}}\sum_{i=1}^{n_{p}}v_{i}R(q,k,u,i)\), is non-convex. Hence, in Line 1 of Algorithm 2, we start by partitioning the action space \(\mathcal{U}\). Then, in Lines 4-15, for each action in the partition we estimate
the expectation of \(K_{k+1}^{*}\) starting from \(q\) via \(n_{s}\) samples taken from the BNN posterior (250 in all our experiments). Finally, in Lines 11-14 we keep track of the action maximising \(K_{k+1}^{*}\).
The described approach for synthesis, while optimal in the limit of an infinitesimal discretization of \(\mathcal{U}\), may become infeasible for large state and action spaces. As a consequence, in the next subsection, we also consider when \(\pi\) is parametrised by a neural network and thus can serve as a function over a larger (even infinite) state space. Specifically, we show how a set of neural controllers, one for each time step, can be trained in order to maximize probabilistic reach-avoid via Theorem 2. In Section 6 we empirically investigate both controller strategies.
```
1:\(\Upsilon\leftarrow\) middle points of each region in a partition of \(\mathcal{U}\)
2:\(\kappa^{*}\gets 0\)
3:\(u^{*}\gets 0\)
4:for\(u\in\Upsilon\)do
5:\(\hat{\kappa}\gets 0\)
6:for\(j\) from \(0\) to \(n_{s}\)do
7:\(w^{\prime}\sim P(w|\mathcal{D})\)
8:\(\bar{X}\leftarrow[f_{q,\epsilon,k}^{L},f_{q,\epsilon,k}^{U}]\)# Computed for \(q\) and \(u\) via (Eqn (12))
9:\(\hat{\kappa}=\hat{\kappa}+\frac{\min_{x\in\bar{X}}K_{k+1}^{*}(z(x))}{n_{s}}\)
10:endfor
11:if\(\hat{\kappa}>\kappa^{*}\)then
12:\(\kappa^{*}\leftarrow\hat{\kappa}\)
13:\(u^{*}\gets u\)
14:endif
15:endfor
16:return\(u^{*}\)
```
**Algorithm 2** Numerical Synthesis of Action for region \(q\) at time \(k\)
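The action sweep of Algorithm 2 can likewise be sketched in plain Python. As in the earlier sketch, `sample_posterior`, `propagate_bounds`, and `K_next` are hypothetical stand-ins for the BNN posterior sampler, the bound propagation of Eqn (12), and the value function \(K^{*}_{k+1}\); the minimum over the output box is approximated by its endpoints.

```python
def synthesize_action(q, actions, n_s, sample_posterior, propagate_bounds, K_next):
    """Sketch of Algorithm 2 for one abstract state q: for each candidate action,
    average the worst-case value of K_next over n_s posterior samples, and
    return the best action together with its estimate."""
    best_u, best_val = None, float("-inf")
    for u in actions:                           # Line 4: sweep the discretised action set
        kappa = 0.0
        for _ in range(n_s):                    # Lines 6-10: Monte Carlo estimate
            w = sample_posterior()
            lo, hi = propagate_bounds(q, u, w)  # output box for (q, u) under sampled w
            kappa += min(K_next(lo), K_next(hi)) / n_s  # endpoint min stands in for min over the box
        if kappa > best_val:                    # Lines 11-14: keep the maximiser
            best_val, best_u = kappa, u
    return best_u, best_val
```

For instance, with toy dynamics \(x^{\prime}=q+u+w\) and a value function peaked at the origin, the sketch selects the action that steers the state toward zero.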
### An Approach for Strategy Synthesis Based on Neural Networks
In this subsection we show how we can train a set of NN policies \(\pi_{0},...,\pi_{N-1}:\mathbb{R}^{n}\rightarrow\mathcal{U}\) such that at each time step \(k\), \(\pi_{k}\) approximately solves the dynamic programming equation in Theorem 2. Note that, because of the approximate nature of the NN training, the resulting neural policies will necessarily
be sub-optimal, but have the potential to scale to larger and more complex systems, compared to the approach presented in Algorithm 2.
At time \(k\) we start with an initial set of parameters (weights and biases) \(\theta_{k}\) for policy \(\pi_{k}\). These parameters can either be initialized to \(\theta_{k+1}\), the parameters synthesised at the previous time step of the value iteration for \(\pi_{k+1}\), or to the parameters of a policy employed to collect the data used to train the BNN, as in Gal et al. (2016b), or simply selected at random. In our implementation, where no previous policy is available, we start with a randomly initialized NN, and then at time \(k\) we initialize the neural policy to that obtained at time \(k+1\). We then employ a scheme to learn a "safer" set of parameters via backpropagation. In particular, we first define the following loss function penalizing policy parameters that lead to an unsafe behavior for an ensemble of NNs sampled from the BNN posterior distribution:
\[\mathcal{L}(x,\theta_{k})=-\alpha||\sum_{w\in\bar{W}}f^{w}(x,\pi_ {k}(x))-\mathbf{A}_{k}||_{2}+(1-\alpha)||\sum_{w\in\bar{W}}f^{w}(x,\pi_{k}(x)) -\mathbf{R}_{k}||_{2}, \tag{13}\]
where \(\bar{W}\) is a set of parameters independently sampled from the BNN posterior \(p_{\mathbf{w}}(w|\mathcal{D})\), and, for a probability threshold \(0\leq p_{t}\leq 1\), \(\mathbf{A}_{k}=\{x:K_{k+1}^{\pi_{k+1}}(x)\geq p_{t}\}\) and \(\mathbf{R}_{k}=\{x:K_{k+1}^{\pi_{k+1}}(x)\leq 1-p_{t}\}\) are the sets of states for which the probability of satisfying the specification at time \(k+1\) is respectively greater than \(p_{t}\) and smaller than \(1-p_{t}\). For \(X\subset\mathbb{R}^{n}\), \(||x-X||_{2}=\inf_{\bar{x}\in X}||x-\bar{x}||_{2}\) is the standard \(L_{2}\) distance of a point from a set, and \(0\leq\alpha\leq 1\) is a parameter, taken to be \(0.25\) in our experiments, that weights between reaching the goal and staying away from "bad" states. Intuitively, the first term in \(\mathcal{L}(x,\theta_{k})\) favours parameters \(\theta_{k}\) that lead to high values of \(K_{k+1}^{\pi_{k+1}}\), while the second term penalizes parameter sets that lead to small values of this quantity.
\(\mathcal{L}(x,\theta_{k})\) only considers the behaviour of the dynamical system of Equation (1) starting from initial state \(x\). Then, in order to also enforce robustness in a neighbourhood of initial states around \(x\), similarly to the adversarial training case (Madry et al., 2017), we consider the robust loss
\[\bar{\mathcal{L}}(x,\theta_{k})=\max_{x^{\prime}:||x-x^{\prime}||_{2}\leq \epsilon}\mathcal{L}(x^{\prime},\theta_{k}). \tag{14}\]
Note that by employing Eqn (12) we obtain a differentiable upper bound of \(\bar{\mathcal{L}}(x,\theta_{k})\), which can be employed for training \(\theta_{k}\).
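To make the loss of Eqn (13) concrete, the following plain-Python sketch implements a finite-sample version. It is illustrative only: `ensemble` plays the role of the sampled dynamics functions \(f^{w}\), the sets \(\mathbf{A}_{k}\) and \(\mathbf{R}_{k}\) are approximated by finite point samples, the ensemble is aggregated by its mean rather than the raw sum, and the sign convention is chosen so that minimising the loss pulls the aggregated prediction toward \(\mathbf{A}_{k}\) and away from \(\mathbf{R}_{k}\), matching the stated intuition.

```python
import math

def dist_to_set(p, points):
    """L2 distance from point p to a finite point-sample `points` of a set."""
    return min(math.dist(p, s) for s in points)

def safety_loss(x, policy, ensemble, A_k, R_k, alpha=0.25):
    """Finite-sample sketch of the loss in Eqn (13): reward ensemble predictions
    that land close to the high-probability set A_k and far from the
    low-probability set R_k. Minimising this loss pulls the mean ensemble
    prediction toward A_k and pushes it away from R_k."""
    u = policy(x)
    preds = [f(x, u) for f in ensemble]
    mean_pred = [sum(c) / len(preds) for c in zip(*preds)]  # mean ensemble prediction
    return alpha * dist_to_set(mean_pred, A_k) - (1 - alpha) * dist_to_set(mean_pred, R_k)
```

A policy whose predictions land inside \(\mathbf{A}_{k}\) scores a strictly lower loss than one whose predictions land inside \(\mathbf{R}_{k}\).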
### Discussion on the Algorithms
In this section we provide further discussion of our proposed algorithms including the complexity and the various sources of approximation that may lead to looser guarantees. To frame this discussion, we start by highlighting the complexity and approximation introduced by the chosen bound propagation technique shared by both of the algorithms. We then proceed to discuss how discretisation choices made with respect to the state-space, the weight-space, and the observational noise, practically affect the tightness of our probability bounds for both algorithms, and finally how the action-space discretisation affects our synthesis algorithm.
**Bound Propagation Techniques.** Given that there are currently no methods for BNN certification that are both sound and complete (Wicker et al., 2020, 2021b; Berrada et al., 2021), the evaluation of the \(R\) function will always introduce some approximation error. While it is difficult to characterize this error in general, it is known that BNN certification methods introduce more approximation for deeper networks than for shallow ones (Wicker, 2021). The recently developed bounds from Berrada et al. (2021) have been shown to be tighter than the IBP and LBP approaches from Wicker et al. (2020), at the cost of a computational complexity that is exponential in the number of dimensions of the state-space. In contrast, each iteration of the interval bound propagation method proposed in Wicker et al. (2020) has the computational cost of four forward passes through the neural network architecture.
**Discretization Error and Complexity.** While our formulation supports any form of state-space discretisation, we can assume for simplicity that each dimension of the \(n\)-dimensional state-space is broken into \(m\) equal-sized abstract states. This implies that certification of the system requires us to evaluate the \(R\) function \(\mathcal{O}\big{(}N(m^{n})\big{)}\) many times, where \(N\) is the time horizon we would like to verify. Given that \(n\) is fixed, the user has control over \(m\), the size of each abstract state. For large abstract states, small \(m\), one introduces more approximation, as the \(R\) function must account for all possible behaviors in the abstract state. For small abstract states, large \(m\), there is much less approximation, but considerably larger runtime. If the \(c\)-dimensional action space is broken into \(t\) equal portions along each dimension, then the computational complexity of the algorithm becomes \(\mathcal{O}\big{(}t^{c}N(m^{n})\big{)}\), as each of the \(m^{n}\) states must be evaluated \(t^{c}\)-many times to determine the approximately
optimal action. As with the state-space discretization, larger \(t\) will lead to a more-optimal action choice but requires greater computational time.
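The evaluation counts above can be made concrete with a two-line calculation. The grid sizes below are hypothetical, with \(n=4\) and \(c=2\) as in the planar point-mass benchmark of Section 6.

```python
def certification_evals(m, n, N):
    """Number of R-function evaluations for certification: O(N * m**n)."""
    return N * m ** n

def synthesis_evals(m, n, N, t, c):
    """Synthesis additionally sweeps t**c discretised actions per state."""
    return t ** c * certification_evals(m, n, N)

# Hypothetical sizes: m = 10 bins per state dimension, t = 10 bins per action
# dimension, horizon N = 50, with n = 4 and c = 2 as in the planar benchmark.
print(certification_evals(10, 4, 50))     # 500000
print(synthesis_evals(10, 4, 50, 10, 2))  # 50000000
```

Even for this modest grid, synthesis multiplies the certification cost a hundredfold, which motivates the neural-policy alternative of Section 5.1.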
## 6 Experiments
We provide an empirical analysis of our BNN certification and policy synthesis methods. We begin by providing details on the experimental setting in Section 6.1. We then analyse the performance of our certification approach on synthesized policies in Section 6.2. Next, in Section 6.3, we discuss how the choice of the BNN inference algorithm affects synthesis and certification results. Finally, in Section 6.4 we study how our method scales with larger neural network architectures and in higher-dimensional control settings.
### Experimental Setting
We consider a planar control task consisting of a point-mass agent navigating through various obstacle layouts. The point-mass agent is described by four dimensions, two encoding position information and two encoding velocity (Astrom and Murray, 2008). To control the agent there are two continuous action dimensions, which represent the force applied on the point-mass in each of the two planar directions. The task of the agent is to navigate to a goal region while avoiding various obstacle layouts. The knowledge supplied to the agent about the environment is the locations of the goal and obstacles. The full set of equations describing the agent dynamics is given in Appendix A.1. In our experiments, we analyse three obstacle layouts of varying difficulty, which we name v1, v2 and Zigzag - visualized in the left column of Figure 3. Obstacle layout v1 places an obstacle directly between the agent's initial position and the goal, forcing the agent to navigate its way around it. Obstacle layout v2 extends this setting by adding two further obstacles that block off one side of the state space. Finally, scenario Zigzag has 5 interleaving triangles and requires the agent to navigate around them in order to reach the goal.
In order to learn an initial policy to solve the task, we employ the episodic learning framework described in Gal et al. (2016a). This consists of iteratively collecting data from deploying our learned policy, updating the BNN dynamics model to the new observations, and updating our policy. When collecting data, we start by randomly sampling state-action pairs and observing their resulting state according to the ground-truth dynamics. After this initial sample, all future observations from the ground-truth environment
are obtained from deploying our learned policy. The initial policy is set by assigning a random action to each abstract state. This is equivalent to tabular policy representations in standard reinforcement learning (Sutton and Barto, 1998). We additionally discuss neural network policies in Section 6.4. Actions in the policy are updated by performing gradient descent on a sum of discounted rewards over a pre-specified finite horizon. The reward of an action is taken to be the \(\ell_{2}\) distance moved towards the goal region penalized by the \(\ell_{2}\) proximity to obstacles as is done in (Sutton and Barto, 1998; Gal et al., 2016a). For the learning of the BNN, we perform approximate Bayesian inference over the neural network parameters. For our primary investigation, we select an NN architecture with a single fully connected hidden layer comprising 50 hidden units, and learn the parameters via Hamiltonian Monte Carlo (HMC). Larger neural network architectures are considered in Section 6.4, while results for variational approximate inference are given in Section 6.3.
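The reward shaping described above can be sketched as follows. The inverse-distance form of the obstacle penalty and its weight are our own illustrative choices; the paper's exact shaping is not reproduced here.

```python
import math

def shaped_reward(pos, next_pos, goal, obstacles, penalty_weight=1.0):
    """Sketch of the reward described in the text: l2 progress toward the goal,
    penalised by proximity to the nearest obstacle. `obstacles` is a list of
    obstacle centre points; the penalty form is an illustrative assumption."""
    progress = math.dist(pos, goal) - math.dist(next_pos, goal)
    nearest = min(math.dist(next_pos, o) for o in obstacles)
    return progress - penalty_weight / (1e-6 + nearest)
```

Under this shaping, a step that moves toward the goal while skirting the obstacle scores higher than a step that moves toward the goal along the obstacle.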
Unless otherwise specified, in performing certification and synthesis we employ abstract states spanning a width of 0.02 around each position dimension and 0.08 around each velocity dimension. Velocity values are clipped to the range \([-0.5,0.1]\). When performing optimal synthesis, we discretise the two action dimensions for the point-mass problem into 100 possible vectors which uniformly cover the continuous space of actions \([-1,1]\). When running our backward reachability scheme, at each state, we test all 100 action vectors and take the action that maximizes our lower bound to be the policy action at that state, thus giving us the locally optimal action within the given discretisation. Further experimental details are presented in Appendix A and code to reproduce all results in this paper can be found at [https://github.com/matthewwicker/BNNReachAvoid](https://github.com/matthewwicker/BNNReachAvoid).
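The action discretisation can be realised, for instance, as a 10 x 10 grid, which is one plausible uniform cover of \([-1,1]^{2}\) with 100 vectors; the paper does not fix the exact layout.

```python
# 100 action vectors uniformly covering the continuous action space [-1, 1]^2,
# laid out here as a 10 x 10 grid (an illustrative realisation of the text).
steps = [-1.0 + 2.0 * i / 9 for i in range(10)]
actions = [(ax, ay) for ax in steps for ay in steps]
assert len(actions) == 100
assert all(-1.0 <= a <= 1.0 for pair in actions for a in pair)
```

At each abstract state, the backward reachability scheme then evaluates all 100 vectors and keeps the maximiser of the lower bound.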
The computational times for each system were roughly equivalent, which is to be expected given that each has the same state space. The following average times are reported for a parallel implementation of our algorithm run on 90 logical CPU cores across 4 Intel Xeon 6230 CPUs clocked at 2.10GHz. Training of the initial policy and BNN model takes on the order of 10 minutes; certification with a horizon of 50 time steps takes around 6 hours, and synthesis around 8 hours.
### Comparing Certification of Learned and Max-Cert Policies
In Figure 2 and Figure 3 we visualize systems from both learned and synthesized policies. Each row represents one of our control environments and comprises four figures. These figures show, respectively, simulations from the dynamical system, BNN uncertainty, the control policy plotted as a gradient field, and the certified safety probabilities. The first column of the figures depicts 200 simulated trajectories of the learned (Figure 2) or synthesized (Figure 3) control policies on the BNN (whose uncertainty is plotted along the second column). Notice how in both cases we visually obtain the expected behaviour, with the overwhelming majority of the simulated trajectories safely avoiding the obstacles (red regions in the figure) and terminating in the goal (green region). A vector field associated with the policy is depicted in the third column of the figures. Notice that the actions returned by our synthesis method intuitively align with the reach-avoid specification, that is, synthesized actions near the obstacles and out-of-bounds regions point away from such unsafe states. Exceptions are locations where the agent is already unsafe and that, as such, are not fully explored during the BNN learning phase (e.g., the lower triangles in the Zigzag scenario), locations where two directions are equally optimal (e.g., in the top right corner of the v1 environment), and locations which are not along any feasibly optimal path (e.g., the lower right corner of v2) and as such are not accounted for by the BNN learning.

Figure 2: **Left Column:** 200 simulated trajectories for the learned policy starting from the initial state. **Center Left Column:** A 2D visualization of the learned policy. Each arrow represents the direction of the applied force. **Center Right Column:** The epistemic uncertainty for the learned dynamics model. **Right Column:** Certified lower-bounds of probabilistic reach avoid for each abstract state according to the BNN and final learned policy.

Figure 3: A version of each learned system after a new policy has been synthesized from the reach-avoid specification. **First column:** 200 simulations of the synthesized policy in the real environment. **Second column:** BNN epistemic uncertainty given as the variance of the BNN predictive distribution. **Third column:** A visualization of the maximally certifiable policies, which demonstrate a clearer tendency to avoid obstacles throughout the state space compared to the policies in Figure 2. **Fourth column:** Synthesized policies have remarkably higher lower-bounds than learned policies; corresponding plots for learned systems are in Figure 2.
In Table 1 we compare the certification results of the synthesized policy against the initial learned policy. As the synthesized policy is computed by improving on the latter, we expect the former to outperform the learned policy in terms of the guarantees obtained. This is in fact confirmed and quantified by the results of Table 1, which lists, for each of the three environments, the average reach-avoid probability estimated over 500 trajectories, the average certification lower bound across the state space, and the certification coverage (i.e., the proportion of states where our algorithm returns a non-zero probability lower bound). This notion of coverage only requires a state to be certified with a probability above 0, and so it is most informative when evaluated together with the average lower-bound and visual inspection of Figure 2 and Figure 3.
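The three reported quantities can be computed from per-state lower bounds and simulated outcomes as follows; the inputs here are toy stand-ins for the paper's per-state certificates and 500-trajectory estimates.

```python
def certification_metrics(lower_bounds, sim_outcomes):
    """Compute the three quantities reported in Table 1: the proportion of
    successful simulated trajectories, the mean certified lower bound over all
    states, and the proportion of states with a non-zero lower bound."""
    performance = sum(sim_outcomes) / len(sim_outcomes)
    avg_lower_bound = sum(lower_bounds) / len(lower_bounds)
    coverage = sum(1 for b in lower_bounds if b > 0.0) / len(lower_bounds)
    return performance, avg_lower_bound, coverage
```

Note that coverage counts any state with a strictly positive bound, which is why it is best read alongside the average lower bound.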
Indeed, the synthesized policy significantly improves on the certification guarantees given by the learned policy, and consistently so across the three environments analysed, with the lower bound improving by a factor of roughly 3.5. This considerable improvement is to be expected, as worst-case guarantees can be poor for deep learning systems that are not trained with specific safety objectives (Mirman et al., 2018; Gowal et al., 2018; Wicker et al., 2021a). In particular, for both the V1 and Zigzag case studies, we observe that the average lower bound jumps from roughly 0.2 to greater than 0.7. Moreover, the most significant improvements are obtained in the most challenging case, i.e., the Zigzag environment, with the certification coverage increasing by a factor of 4.75. Interestingly, the average model performance also increases for the synthesized models. Intuitively this occurs because, while in the learning of the initial policy passing through the obstacle is only penalised by a continuous factor, the synthesized policy strives to rigorously enforce safety across the BNN posterior. A visual representation of these results is provided in the last column of Figure 2 for the learned policy and in Figure 3 for the synthesized policy. We note that the uncertainty maps in these figures are identical, as the BNN model is not changed, only the policy.
### On the Effect of Approximate Inference
The results provided so far have been generated with BNN dynamical models learned via HMC training. However, different inference methods produce different approximations of the BNN posterior thus leading to different dynamics approximations and hence synthesized policies.
Table 2 and the plots in Figures 4 and 5 analyse the effect of approximate inference on both learned and synthesized policies, comparing results obtained by HMC with those obtained by VI training on the v1 scenario. We notice that also in the case of VI the synthesized policy significantly improves on the initial policy. Interestingly, the certification results over the learned policy for VI are higher than those obtained for HMC, but the results are comparable for the synthesized policies. In fact, it is known in the literature that VI tends to under-estimate uncertainty (Myshkov and Julier, 2016; Michelmore et al., 2020) and is more susceptible to model misspecification (Masegosa, 2020). Since the bound is probabilistic, it is therefore tighter for VI, whose uncertainty is lower, than for HMC, which provides a more conservative representation of the agent dynamics. For example, we see in the first two rows of Table 2 that the average lower bound achieved for the variational inference posterior is 1.88 times higher than the bound for the HMC posterior. However, our synthesis method reduces this gap between HMC and VI, while still accounting for the higher uncertainty of the former, and hence the more conservative guarantees.

| _Learned Policy_ | Performance | Avg. Lower Bound | Cert. Coverage |
|---|---|---|---|
| V1 | 0.789 | 0.212 | 0.639 |
| V2 | 0.805 | 0.192 | 0.484 |
| Zigzag | 0.815 | 0.189 | 0.193 |

Table 1: Certification comparisons for learned and synthesized policy across the three environments. **Performance** indicates the proportion of simulated trajectories that respect the reach-avoid specification. **Avg. Lower Bound** is the mean certification probability across all states. **Cert. Coverage** is the proportion of states that we are able to certify (i.e., with a non-zero lower bound for the reach-avoid probability).
While HMC approximates the posterior by relying on a Monte Carlo estimate of it, VI is a gradient-based technique, where the number of training epochs (i.e., the number of full sweeps through the dataset) is a key hyperparameter. We thus analyse the effect of the number of training epochs on the quality of the learned dynamics in Figure 6, along with its effect on the synthesized policies and the certificates obtained for such policies. The left plot of the figure shows a set of predicted trajectories over a 10 time-step horizon for a varying number of training epochs, with the ground-truth behaviour highlighted in red. The BNN trajectories are color-coded based on the number of epochs each dynamics model was trained for. In yellow, we see that the BNN which has only been trained for 10 epochs displays considerable error in its iterative predictions. This is reduced considerably for a model trained for 50 epochs, but the cumulative error over the 10-step horizon is still considerable. Finally, as expected, for models trained for 250 and 1500 epochs we empirically observe a trend toward convergence to the ground truth.

| _Learned Policy_ | Performance | Avg. Lower Bound | Coverage |
|---|---|---|---|
| Var. Inference | 0.832 | 0.399 | 0.696 |
| Ham. Monte Carlo | 0.789 | 0.212 | 0.639 |

Table 2: Certification comparisons for learned and synthesized policy between VI and HMC BNN learning on obstacle layout V1. **Performance** indicates the proportion of simulated trajectories that respect the reach-avoid specification. **Avg. Lower Bound** is the mean certification probability across all states. **Cert. Coverage** is the proportion of states that we are able to certify with non-zero probability.
We notice that the policy and certifications directly reflect the quality of the approximation. In fact, as we increase the model fit, we see that there are significant improvements in both the intuitive behavior of the synthesized policy as well as the resulting guarantees we are able to compute.
### Depth Experiments
Figure 4: **Top Row:** Visualization of the learned system using HMC to approximately infer the dynamics. **Bottom Row:** Visualization of the learned system using VI to approximately infer the dynamics. We highlight that the VI approximation displays a 5 to 10 times reduction in epistemic uncertainty.

In this section, we evaluate how our method performs when we vary the depth of the BNN dynamics model considered. In Figure 7, we plot the certified reach-avoid probabilities for a learned policy and a one-, two-, and three-layer BNN dynamics model, where each layer has a width of 12 neurons. Similar architectures are found in the BNNs studied in recent related work (Lechner et al., 2021). Other than the depth of the BNN, all the other variables in the experiment are held equal (e.g., number of episodes during learning, discretization of the state-space, and number of BNN samples considered for the lower bound). The learning procedure results in BNN models with roughly equivalent losses and in policies that are qualitatively similar; see Appendix A.3 for further visualizations. Given that the key factors of the system have been held equal, we notice a decrease in our certified lower bound as depth increases. Specifically, the average lower bound for the one-layer model is 0.811, for the two-layer model it is 0.763, and for the three-layer model it decreases further to 0.621. Figure 7 clearly demonstrates that as the BNN dynamics model becomes deeper, our lower bound becomes more conservative. This finding is consistent with existing results in certification of BNNs (Wicker et al., 2020; Berrada et al., 2021) and DNNs (Gowal et al., 2018; Mirman et al., 2018). We note, however, that when the verification parameters are refined, i.e., more samples from the BNN are taken, we are able to certify the three-layer system with an average certified lower bound of 0.867 (see Appendix A.3). These additional BNN samples increase the runtime of our certification procedure by 1.5 times.
Figure 5: **Top Row:** Visualization of the synthesized policy and its performance based on the HMC dynamics model. **Bottom Row:** Visualization of the synthesized policy and its performance based on the VI dynamics model.
## 7 Related Work
Certification of machine learning models is a rapidly growing area (Gehr et al., 2018; Katz et al., 2017; Gowal et al., 2018; Wicker et al., 2022). While most of these methods have been designed for deterministic NNs, recently safety analysis of Bayesian machine learning models has been studied both for Gaussian processes (GPs) (Grosse et al., 2017; Cardelli et al., 2019; Blaas et al., 2020) and BNNs (Athalye et al., 2018; Cardelli et al., 2019; Wicker et al., 2020), including methods for adversarial training (Liu et al., 2019; Wicker et al., 2021). The above works, however, focus exclusively on the input-output behaviour of the models, that is, can only reason about static properties. Conversely, the problem we tackle in this work has additional complexity, as we aim to formally reason about iterative predictions, i.e., trajectory-level behaviour of a BNN interacting in a closed loop with a controller.
Figure 6: Analysis for the number of training epochs used in performing VI training on the V1 environment. Left: sample of 10-step agent trajectories obtained with BNNs trained with VI and different numbers of epochs (red: ground truth trajectory). Right: synthesis and certification results for a selection of training epochs.

Iterative predictions have been widely studied for Gaussian processes (Girard et al., 2003) and safety guarantees have been proposed in this setting in the context of model-based RL with GPs (Jackson et al., 2020; Polymenakos et al., 2019; Berkenkamp et al., 2017, 2016). However, all these works are specific to GPs and cannot be extended to BNNs, whose posterior predictive distribution is intractable and non-Gaussian even for the more commonly employed approximate Bayesian inference methods (Neal, 2012). Recently, iterative prediction of _neural network dynamic models_ has been studied (Wei and Liu, 2021; Adams et al., 2022) and methods to certify these models against temporal logic formulae have been derived (Adams et al., 2022). However, these works only focus on standard (i.e., non-Bayesian) neural networks with additive Gaussian noise. Closed-loop systems with known (deterministic) models and control policies modelled as BNNs are considered in (Lechner et al., 2021). In contrast with our work, Lechner et al. (2021) can only support deterministic models without noisy dynamics, only focus on the safety verification problem, and are limited to BNN posteriors with unimodal weight distributions.
Various recent works consider verification or synthesis of RL schemes against reachability specifications (Sun et al., 2019; Konighofer et al., 2020; Bacci and Parker, 2020). None of these approaches, however, support both continuous state-action spaces and probabilistic models, as in this work. Continuous action spaces are supported in (Hasanbeig et al., 2019), where the authors provide RL schemes for the synthesis of policies maximising given temporal requirements, which is also extended to continuous state- and
Figure 7: We vary the depth of the BNN used to learn the dynamics and observe its effects on our certified safety probabilities over the first half of the Puck-V1 state-space. From left to right we plot the lower-bound reach-avoid probabilities for a one-layer BNN dynamics model, a two-layer BNN dynamics model, and a three-layer BNN dynamics model.
action-spaces in (Hasanbeig et al., 2020). However, the guarantees resulting from these model-free algorithms are asymptotic and thus of a different nature than those in this work. The work of Haesaert et al. (2017) integrates Bayesian inference and formal verification over control models, additionally proposing strategy synthesis approaches for active learning (Wijesuriya and Abate, 2019). In contrast to our paper these works do not support unknown noisy models learned via BNNs.
A related line of work concerns the synthesis of runtime monitors for predicting the safety of the policy's actions and, if necessary, correct them with fail-safe actions (Alshiekh et al., 2018; Avni et al., 2019; Bouton et al., 2019; Fulton and Platzer, 2019; Phan et al., 2020). These approaches, however, do not support continuous state-action spaces or require some form of ground-truth mechanistic model for safety verification (as opposed to our data-driven BNN models).
## 8 Conclusions
In this paper, we considered the problem of computing the probability of time-bounded reach-avoid specifications for dynamic models described by iterative predictions of BNNs. We developed methods and algorithms to compute a lower bound of this reach-avoid probability. Additionally, relying on techniques from dynamic programming and non-convex optimization, we synthesized certified controllers that maximize probabilistic reach-avoid. In a set of experiments, we showed that our framework enables certification of strategies on non-trivial control tasks. A future research direction will be to investigate techniques to enhance the scalability of our methods so that they can be applied to state-of-the-art reinforcement learning environments. However, we emphasise that the benchmark considered in this work remains a challenging one for certification purposes, due to both the non-linearity and stochasticity of the models, and the sequential, multi-step dependency of the predictions. Thus, this paper makes an important step toward the application of BNNs in safety-critical scenarios.
## Acknowledgements
This project received funding from the ERC under the European Union's Horizon 2020 research and innovation programme (FUN2MODEL, grant agreement No. 834115). |
2304.07501 | Transition Propagation Graph Neural Networks for Temporal Networks | Tongya Zheng, Zunlei Feng, Tianli Zhang, Yunzhi Hao, Mingli Song, Xingen Wang, Xinyu Wang, Ji Zhao, Chun Chen | 2023-04-15T08:06:16Z | http://arxiv.org/abs/2304.07501v1

# Transition Propagation Graph Neural Networks for Temporal Networks
###### Abstract
Researchers of temporal networks (e.g., social networks and transaction networks) have been interested in mining dynamic patterns of nodes from their diverse interactions. Inspired by recently powerful graph mining methods like skip-gram models and Graph Neural Networks (GNNs), existing approaches focus on generating temporal node embeddings sequentially with nodes' sequential interactions. However, the sequential modeling of previous approaches cannot handle the transition structure between nodes' neighbors with limited memorization capacity. Specifically, an effective method for the transition structures is required to both model nodes' personalized patterns adaptively and capture node dynamics accordingly. In this paper, we propose a method, namely Transition Propagation Graph Neural Networks (TIP-GNN), to tackle the challenges of encoding nodes' transition structures. The proposed TIP-GNN focuses on the bilevel graph structure in temporal networks: besides the explicit interaction graph, a node's sequential interactions can also be constructed as a transition graph. Based on the bilevel graph, TIP-GNN further encodes transition structures by multi-step transition propagation and distills information from neighborhoods by a bilevel graph convolution. Experimental results over various temporal networks reveal the efficiency of our TIP-GNN, with improvements of up to 7.2% in accuracy on temporal link prediction. Extensive ablation studies further verify the effectiveness and limitations of the transition propagation module. Our code is available at [https://github.com/doujiang-zheng/TIP-GNN](https://github.com/doujiang-zheng/TIP-GNN).
Graph Embedding, Graph Neural Networks, Social Networks, Temporal Networks, Link Prediction.
## I Introduction
Temporal networks are widespread over real-life scenarios: users create, like, and dislike posts in social networks, and employees send, forward, and reply to emails in email networks [1, 2, 3, 4]. Nodes in these temporal networks exhibit their personalized dynamic patterns when keeping interacting with other nodes. For both researchers and industrial practitioners, it is beneficial to capture node dynamics to predict whether an account is malicious in risk management systems [5, 6], and recommend interesting items to users in online advertising systems[7, 8].
Previous approaches focus on mining node dynamics by aggregating neighborhood features chronologically from nodes' sequential interactions [5, 6, 7, 8, 9, 10, 11, 12], following a generalized sequential graph convolution paradigm as shown in Fig. 1(a). These methods are dedicated to modeling neighborhood impacts on nodes while mostly neglecting complex transitions between neighbors, as shown in Fig. 1(b). On the one hand, recurrent methods, which can be seen as one-layer temporal Graph Neural Networks (GNNs), update temporal node embeddings iteratively upon new interactions based on specific temporal point processes, such as the triad closure process in DynamicTriad [11], the attention-based Hawkes process in HTNE [7], and the mutual evolution process in JODIE [5]. On the other hand, temporal GNNs [6, 8, 10] have been proposed recently to deal with the dynamic high-order structure of the interaction graph, which demonstrate superior performance compared with recurrent methods (shallow GNNs with the one-layer neighborhood). Overall, existing approaches target generating temporal node embeddings based on the dynamic structure of a temporal network.
Fig. 1: A toy email network where an employee finishes two receive-forward-reply loops with a leader and two subordinates from \(t_{1}\) to \(t_{6}\). (a) The sequential graph convolution performs graph convolution sequentially over historical interactions. (b) The proposed bilevel graph convolution focuses on the transition structures (black arrows) between neighbors and distills the transition information with a hierarchical attention mechanism (highlighted orange arrows).

However, sequential modeling, the core of existing approaches, can hardly capture the transition structure of nodes' temporal neighbors with limited memorization capacity [13]. Fig. 1 provides an example in which an employee finishes two consecutive receive-forward-reply loops in an email network; such loop patterns challenge the representation capability of sequential modeling. As another example, users in social networks often propagate news through a hierarchical diffusion mechanism. These nodes' neighbors play different roles in their neighborhoods, presenting complex patterns in chronological order. By taking the transition structure into account, temporal network methods can make more precise predictions of whether the employee will forward the email to the subordinate or reply to the leader, as shown in Fig. 1(b).
Nonetheless, there exist two challenges in distilling the transition structure of nodes' neighbors into temporal node embeddings. Firstly, nodes in a network usually present interactions over a wide range, indicating that their personalized preferences require adaptive modeling. For instance, a leader may attend several consecutive meetings in one day, while a junior employee may only attend a weekly meeting during workdays. Thus, temporal node embeddings for the leader and the junior should capture their distinctive structures from their sequential interactions. Secondly, node dynamics of their temporal interactions imply the variation of their transition structures with time elapsing. For example, an employee will perform more interactions with high-level colleagues after getting promoted in a company. Its temporal node embeddings should also evolve forward according to the node dynamics of the transition structure. An effective method should handle both personalized and dynamic transition structures of neighbors.
To tackle these two challenges mentioned above, we investigate the bilevel graph structure inside temporal networks as shown in Fig. 1(b). Besides the explicit interaction graph, a node's sequential interactions can also be constructed as a transition graph according to chronological order. Specifically, the transition structure of neighbors is then encoded by multi-step propagation via transition links denoted by black arrows in Fig. 1(b). The multi-step propagation is proposed to deal with the first challenge of personalized structures: nodes with simple transitions are modeled by short-range propagations, while nodes with complex transitions can benefit from long-range propagations. Further, the bilevel graph convolution over multi-step neighborhood embeddings (highlighted arrows in Fig. 1(b)) is proposed to distill node dynamics accordingly from sequential interactions, aiming at the second challenge of dynamic structures.
Concretely, we propose the method, namely **T**rans**I**tion **P**ropagation **G**raph **N**eural **N**etworks (TIP-GNN), to handle nodes' transition structures in temporal networks adaptively and dynamically. Firstly, TIP-GNN translates a node's interactions into a small transition graph of neighbors and prepares the initialized neighborhood features for propagation. Secondly, TIP-GNN propagates neighborhood embeddings (latent features) in multiple steps, obtaining both short-range and long-range structures for nodes. Thirdly, TIP-GNN distills useful information from multi-step neighborhood embeddings by the bilevel graph convolution mechanism: the first transition pooling module encodes transition structures dynamically, and the second attention fusion module detects transition patterns adaptively. We conduct experiments over various real-life temporal networks and observe significant improvements of TIP-GNN compared with other state-of-the-art methods on most networks. Further, we perform extensive ablation studies and parameter sensitivity experiments, validating the effectiveness and limitations of our dedicated transition structures in temporal networks.
Our contributions can be summarized as follows:
* Besides the explicit interaction graph of temporal networks, we propose a novel transition graph between nodes' neighbors, revealing the latent dependencies between nodes' neighbors explicitly beyond previous sequential modeling.
* To distill transition structures from neighbors, we propose the TIP-GNN, consisting of a transition propagation module and a bilevel graph convolution module, which generates temporal node embeddings adaptively and dynamically with various interactions.
* Experimental results demonstrate the robustness and efficiency of our proposed TIP-GNN. Further, we discuss the effectiveness and limitations of the transition structure in temporal networks with a series of ablation studies.
## II Related Work
### _Static Graph Embedding_
Learning low-dimensional embeddings has shown great success in many graph mining applications such as risk monitoring [14], link prediction [15], and item recommendation [1], dating back decades [16, 17, 18, 1, 15]. Researchers have recently been inspired to develop deep graph learning methods [19, 20, 21] by the breakthroughs of deep learning in computer vision [22] and natural language processing [23]. Deep graph learning methods mainly fall into two categories: skip-gram models [20] and graph neural networks [19]. DeepWalk [24] is the first skip-gram model, modeling random walks on graphs as sampled node sequences. Node2Vec [25] further extends DeepWalk with two additional hyper-parameters controlling the preferences of random walks. Meanwhile, LINE [26] explores the high-order similarity of the graph topology between nodes. On the other hand, graph neural networks [27] attempt to define the convolution operation in non-Euclidean space. GCN [28] simplifies existing GNNs and applies successfully to semi-supervised node classification. GAT [29] further introduces a self-attention mechanism to compute the weights of neighbors adaptively and achieves substantial improvements in node classification. Nonetheless, static graph embedding methods are suitable for invariant relationships between real-life entities but insufficient for temporal networks.
### _Temporal Network Embedding_
There are numerous temporal networks such as citation and collaboration [11, 12, 30], commodity purchasing [7, 5], and fraud detection [6]. According to the summarization of temporal network methods, existing approaches can be divided into two categories based on the property of graphs [31]. On
the one hand, a discrete-time temporal network refers to a sequence of graph snapshots, where edges in different snapshots may change accordingly. However, this kind of graph cannot model edges at the finest granularity (e.g., at a time scale of seconds) [32]. On the other hand, continuous-time graph methods generate the node embeddings dynamically, along with the upcoming new interactions. HTNE [7] and DyRep [12] follow a recurrent paradigm to update node embeddings with historical neighbors. Additionally, TigeCMN [30] adopts an external memory network to encode the network dynamics. Meanwhile, TGAT [8] and APAN [6] achieve inductive learning abilities by using graph neural networks. In comparison, our TIP-GNN proposes a novel transition structure instead of a sequential structure used by previous approaches.
### _Recommender System_
Research on recommender systems [33, 34, 5, 35, 36, 37] is highly related to temporal networks, since both are interested in modeling user dynamics given historical interactions (e.g., views, clicks, and buys). However, their two-tower architectures and specific training objectives like the BPR loss [38] are mainly optimized for recommendation tasks and are not suitable for temporal networks that are homogeneous or span a long time range. Methods related to temporal networks can be categorized into two kinds: recurrent recommender systems and session-based recommendation. On the one hand, Time-LSTM [33], Deep Coevolve [34], and JODIE [5] stand for recurrent recommender systems, ranking items according to nodes' temporal embeddings. On the other hand, session-based recommendation [35, 36, 37] explores recommendations based on anonymous user sessions, utilizing an item-to-item graph to encode co-occurrence relationships. Despite the similarity between the item-to-item graph and our proposed transition graph, session-based recommendation assumes stationarity of the item-to-item graph, while we use dynamic transition graphs to represent node dynamics.
### _Comparisons between TIP-GNN and Previous Works_
TGAT [8] and GC-SAN [36] are the two most related works to our proposed TIP-GNN. Firstly, TGAT and TIP-GNN are both based on graph neural networks [28, 39], and the attention mechanism [13]. However, as depicted in Fig. 1, TGAT uses sequential graph convolution and can hardly capture the dynamic patterns in the transition graph. Secondly, GC-SAN and TIP-GNN are both based on the context graph and the attention mechanism [13]. GC-SAN, designed for sequential recommendation, utilizes the item co-occurrence matrix to recommend similar items for users. In contrast to TIP-GNN, GC-SAN cannot mine the high-order relationship in the interaction graph, and is not adapted for dynamic patterns as our multi-step transition propagation. Specifically, TIP-GNN focuses on nodes' temporal transition graphs and produces temporal node embeddings based on personalized transition graphs.
## III Method
Our proposed **T**rans**I**tion **P**ropagation **G**raph **N**eural **N**etworks (TIP-GNN) focuses on the bilevel graph structure of nodes in temporal networks and extracts the transition relationship between nodes' neighbors by bilevel graph neural networks. As depicted in Fig. 2, TIP-GNN mainly consists of three modules: the sequence translation module accounts
\begin{table}
\begin{tabular}{l|l} \hline \hline Symbol & Meaning \\ \hline \(V,E\) & the vertex set and edge set of a graph \\ \hline \(G\) & \(G=(V,E)\), the notation of a graph \\ \hline \(u\) & a node in a graph, \(u\in V\) \\ \hline \(t\) & a timestamp, \(t\in\mathbb{R}^{+}\) \\ \hline \(h_{u,t}^{l}\) & \(u\)’s embedding at the \(l\)-th TIP-GNN layer \\ \hline \(h_{u}^{k}\) & \(u\)’s embedding at the \(k\)-th propagation step \\ \hline \(S_{u,t}\) & the interaction sequence of node \(u\) before \(t\) \\ \hline \(\mathcal{N}_{u,t}\) & the node set of neighbors in \(S_{u,t}\) \\ \hline \(s_{i}\) & an interaction \(s_{i}=(u,v_{i},t_{i})\) in \(S_{u,t}\) \\ \hline \(e_{i}\) & the edge feature of the interaction \(s_{i}\) \\ \hline \(A_{u,t}\) & the adjacency matrix of node \(u\)’s transition graph at \(t\) \\ \hline \(B_{u,t}\) & the incidence matrix between \(\mathcal{N}_{u,t}\) and \(S_{u,t}\), indicating whether a node \(v\) is associated with an interaction \(s_{i}\) \\ \hline \(H_{S}\) & the stacked feature matrix of the interaction set \(S_{u,t}\) \\ \hline \(H_{\mathcal{N}_{u,t}}^{l}\) & node embeddings of \(\mathcal{N}_{u,t}\) at the \(l\)-th TIP-GNN layer \\ \hline \(v\) & \(u\)’s neighbor in the set \(\mathcal{N}_{u,t}\) \\ \hline \(z_{v}^{k}\) & node embedding of \(u\)’s neighbor \(v\) at the \(k\)-th propagation step \\ \hline \(Z_{\mathcal{N}_{u,t}}^{k}\) & node embeddings of \(\mathcal{N}_{u,t}\) at the \(k\)-th propagation step \\ \hline \end{tabular}
\end{table} TABLE I: Mathematical symbols and their meanings used in this work.
Fig. 2: An illustration instance of the one-layer TIP-GNN model. (a) Given a query node \(u\) and timestamp \(t\), the sequence translation module translates node \(u\)’s interactions \(S_{u,t}\) before \(t\) into a directed transition graph \(A_{u,t}\), whose initialized node embeddings \(Z_{\mathcal{N}_{u,t}}^{0}\) are formed according to Eq. 4. (b) The transition propagation module extracts dynamic patterns from the transition graph by propagating node embeddings through multiple steps according to Eq. 6. (c) The bilevel graph convolution module distills useful information for node \(u\) by a hierarchical attention mechanism: firstly, it performs a multi-head attention pooling at each propagation step according to Eq. 7; secondly, it aggregates \(u\)’s node embeddings at different steps by another attention function according to Eq. 9.
for producing the directed transition graph; the transition propagation module is designed to represent nodes' dynamic patterns; the bilevel graph convolution module aims at distilling information from the transition graph and the interaction graph. Finally, our multi-layer TIP-GNN is optimized by the temporal link prediction task.
### _Sequence Translation_
#### III-A1 Transition Graph
Since proximity structures can be encoded by a graph with an adjacency matrix [28], we can also encode transition structures with a transition graph. Different from a static network, a temporal network denoted by \(G=(V,E)\) keeps expanding with time moving forward, where \(V\) represents nodes, and \(E\) represents edges. Given a node \(u\) and a timestamp \(t\), our problem is learning the temporal node embedding \(h_{u,t}\), which captures node dynamics precisely from its historical interactions \(S_{u,t}\). We construct the transition graph as a transition matrix \(A_{u,t}\in\{0,1\}^{|\mathcal{N}_{u,t}|\times|\mathcal{N}_{u,t}|}\) from these interactions, where \(\mathcal{N}_{u,t}=\{v_{i}|(u,v_{i},t_{i})\in S_{u,t}\}\) is the neighbor set of node \(u\)'s interactions.
\(S_{u,t}=\{\cdots,s_{i},s_{j},\cdots\}\) denotes the interaction set ordered by timestamps, where \(s_{i}\) and \(s_{j}\) are two consecutive interactions of the node \(u\). \(A_{u,t}\) denotes the transition matrix between node \(u\)'s neighbors defined by
\[A_{u,t}[\text{id}(v_{i}),\text{id}(v_{j})]=\begin{cases}1,&\text{if }s_{i}=(u,v_{i},t_{i}),s_{j}=(u,v_{j},t_{j})\\ 0,&\text{otherwise},\end{cases} \tag{1}\]
where the function \(\text{id}(\cdot)\) gives the node index in the neighbor set \(\mathcal{N}_{u,t}\). For instance, Fig. 2 shows that \(v_{1}\) appears in two interactions, and \(\text{id}(\cdot)\) maps both occurrences to the same node index. The transition graph \(A_{u,t}\) includes only chronologically ascending transition links and excludes bi-directional links because inverse links may obscure the causal relations in the chronological order. Moreover, the above definition assumes that \(S_{u,t}\) has at least two interactions and makes nodes with sparse interactions less representative in the transition graph. The self-loop is thus added to the transition graph to help both numerical stability [28] and interaction sparsity, defined as \(\tilde{A}_{u,t}=I+A_{u,t}\).
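As a concrete sketch of Eq. 1 together with the self-loop augmentation \(\tilde{A}_{u,t}=I+A_{u,t}\), the following numpy snippet builds the transition matrix from a chronologically ordered interaction list; the function and variable names are ours, not from the authors' released code.

```python
import numpy as np

def build_transition_graph(interactions):
    """Build the directed transition matrix A_{u,t} of Eq. 1, plus the
    self-loop tilde(A) = I + A, from a node u's chronologically ordered
    interactions [(v_i, t_i), ...]."""
    ids = {}  # id(.): neighbor -> index in N_{u,t}; repeats share an index
    for v, _ in interactions:
        ids.setdefault(v, len(ids))
    n = len(ids)
    A = np.zeros((n, n))
    # One transition link v_i -> v_j per pair of consecutive interactions.
    for (v_i, _), (v_j, _) in zip(interactions, interactions[1:]):
        A[ids[v_i], ids[v_j]] = 1.0
    return np.eye(n) + A, ids
```

For the interaction sequence a→b→a, this yields a 2×2 matrix with both off-diagonal transitions set and self-loops on the diagonal.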
#### III-A2 Feature Initialization
To prepare for transition propagation, besides existing node embeddings, we also transform edge features carried by these interactions into node features by an incidence matrix \(B_{u,t}\in\{0,1\}^{|\mathcal{N}_{u,t}|\times|S_{u,t}|}\), which connects each interaction \(s_{i}\) with each neighborhood node \(v_{i}\). The incidence matrix \(B_{u,t}\) is defined by
\[B_{u,t}[\text{id}(v_{i}),i]=\begin{cases}1,&\text{if }s_{i}=(u,v_{i},t_{i})\\ 0,&\text{otherwise},\end{cases} \tag{2}\]
where \(\text{id}(v_{i})\) and \(i\) are indices of nodes and edges, respectively. The incidence matrix \(B_{u,t}\) can sum up the edge features of each neighbor's interactions in order to highlight the numerical impacts of individual neighbors.
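A minimal numpy illustration of Eq. 2 (helper names are ours): multiplying the incidence matrix by the stacked edge features, \(B_{u,t}H_{S}\), sums each neighbor's edge features, as used later in Eq. 4.

```python
import numpy as np

def build_incidence(interactions, ids):
    """Incidence matrix B_{u,t} of Eq. 2: entry (id(v_i), i) is 1 iff
    interaction s_i involves neighbor v_i. `ids` maps each neighbor to
    its row index in N_{u,t} (an assumed helper mapping)."""
    B = np.zeros((len(ids), len(interactions)))
    for i, (v_i, _) in enumerate(interactions):
        B[ids[v_i], i] = 1.0
    return B
```

Then `B @ H_S` aggregates edge features over each neighbor's interactions.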
A specific interaction \(s_{i}=(u,v_{i},t_{i})\) contains its original edge feature \(e_{i}\) and timestamp \(t_{i}\). Due to the wide range of timespans in different temporal networks (e.g., seconds, days, and weeks), the temporal kernel function of TGAT [8] is adopted for robust encoding of timespans, defined by
\[\Phi(\Delta t)=\textit{concat}(cos(\omega_{1}\Delta t),\cdots,cos(\omega_{d_ {t}}\Delta t)), \tag{3}\]
where \(\Delta t=t-t_{i}\) is the timespan between the prediction timestamp and the edge timestamp, \(\omega_{1},\cdots,\omega_{d_{t}}\) are trainable frequencies of the cosine functions, and \(d_{t}\) is the dimension of the output vector. The edge features of the interaction \(s_{i}\) are thus defined by \(\tilde{e}_{i}=\textit{concat}(e_{i},\Phi(t-t_{i}))\), and the edge matrix of the interaction set \(S_{u,t}\) stacks the interactions' features, defined by \(H_{S}=[\cdots,\tilde{e}_{i},\cdots]^{\intercal}\).
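The cosine kernel of Eq. 3 can be sketched as follows; in the actual model the frequencies \(\omega_{i}\) are trainable parameters, whereas they are fixed constants in this illustration.

```python
import numpy as np

def time_encode(delta_t, omega):
    """Phi(dt) of Eq. 3: concatenated cosines at frequencies omega
    (trainable in the model, fixed here for illustration)."""
    return np.cos(np.asarray(omega) * delta_t)
```

A zero timespan maps to an all-ones vector, so recent interactions get a distinctive encoding regardless of the dataset's time scale.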
Let \(H_{V}^{l}\) be the embeddings of all nodes \(V\) in the \(l\)-th TIP-GNN layer; we then extract the neighborhood embeddings \(H_{\mathcal{N}_{u,t}}^{l}\in\mathbb{R}^{|\mathcal{N}_{u,t}|\times d}\) from the previous layer (raw node features at the 0-th layer), where \(d\) is the embedding dimension. Finally, the initialized node embeddings at each layer are calculated by adding the node embeddings and edge features defined by
\[Z_{\mathcal{N}}^{0}=W\times ReLU(W_{n}H_{\mathcal{N}}^{l}+W_{e}B_{u,t}H_{S})+b, \tag{4}\]
where \(W,W_{n},W_{e},b\) are transformation parameters, edges features \(H_{S}\) are aggregated through the incidence matrix \(B_{u,t}\), \(H_{\mathcal{N}}^{l}\) is short for \(H_{\mathcal{N}_{u,t}}^{l}\), and \(Z_{\mathcal{N}}^{0}\) represents the node embeddings at the 0-th propagation step, as shown in Fig. 2.
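A numpy sketch of Eq. 4 under simplifying assumptions (weights as plain matrices, a single broadcast bias; the function name is ours):

```python
import numpy as np

def init_node_embeddings(H_N, H_S, B, W, W_n, W_e, b):
    """Eq. 4: Z^0 = W * ReLU(W_n H_N + W_e B H_S) + b, applied row-wise.
    B @ H_S sums edge features per neighbor via the incidence matrix."""
    hidden = np.maximum(H_N @ W_n.T + (B @ H_S) @ W_e.T, 0.0)  # ReLU
    return hidden @ W.T + b
```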
### _Transition propagation_
As shown in Fig. 2, the graph propagation module is designed for encoding dynamic patterns from the transition graph. Inspired by PageRank [16] and recent graph isomorphism representation works [40], nodes' dynamic patterns in this paper refer to nodes' complex transitions among nodes' historical neighbors and can be encoded by graph neural networks through multiple propagation steps. Concretely, the transition propagation rule of a single transition graph is defined by
\[\tilde{Z}_{\mathcal{N}}^{k+1}=\tilde{A}_{u,t}MLP(Z_{\mathcal{N}}^{k}), \tag{5}\]
where \(k\) denotes the propagation step, \(Z_{\mathcal{N}}^{0}\) is defined in Eq. 4, \(\tilde{A}_{u,t}\) is the transition matrix with self-loop, and \(MLP\) is a multilayer perceptron. The multilayer perceptron is proposed to enhance the representation capacity of models by its nonlinear transformation. With \(k\) propagation steps, \(Z_{\mathcal{N}}^{k}\) can encode \(k\)-hop subgraphs of the transition graph around neighbors. Further, a damping factor \(\alpha\) is introduced to preserve the uniqueness of node embeddings, which the over-smoothing of graph neural networks would otherwise erode, defined by
\[Z_{\mathcal{N}}^{k+1}=\alpha Z_{\mathcal{N}}^{k}+(1-\alpha)\tilde{Z}_{\mathcal{ N}}^{k+1}. \tag{6}\]
The damping factor [41] can also be seen as a residual connection between the previous step and the current step, which reduces the learning difficulty of models.
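Eqs. 5 and 6 amount to the following loop, with the per-step MLPs abstracted as callables (a sketch under our own naming, not the authors' implementation):

```python
import numpy as np

def propagate(A_tilde, Z0, mlps, alpha=0.1, steps=2):
    """Multi-step transition propagation (Eqs. 5-6). `mlps` holds one
    callable per step standing in for the MLP; returns [Z^0, ..., Z^K]."""
    Zs, Z = [Z0], Z0
    for k in range(steps):
        Z_prop = A_tilde @ mlps[k](Z)           # Eq. 5: propagate over tilde(A)
        Z = alpha * Z + (1.0 - alpha) * Z_prop  # Eq. 6: damping residual
        Zs.append(Z)
    return Zs
```

Keeping every intermediate \(Z^{k}\) matters because the bilevel graph convolution later attends over all propagation steps, not only the last one.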
### _Bilevel Graph Convolution_
#### III-C1 Transition Pooling
The transition pooling module treats neighborhood embeddings at each propagation step separately and produces a summarized graph embedding \(h_{u}^{(k,l+1)}\) for the specific node \(u\). Specifically, we extract useful information from neighborhood embeddings by performing the attention
mechanism [13] between node embeddings \(h^{l}_{u,t}\) of the previous TIP-GNN layer (node features at the 0-th layer) and neighborhood embeddings \(z^{k}_{v}\in Z^{k}_{N}\), defined by
\[\alpha^{(k,l+1)}_{uv}=softmax\{(W^{l+1}_{Q}h^{l}_{u,t})(W^{l+1}_{K}z^{k}_{v})^{\intercal}\}, \tag{7}\] \[h^{(k,l+1)}_{u}=\sum_{v\in\mathcal{N}_{u,t}}\alpha^{(k,l+1)}_{uv}W^{l+1}_{V}z^{k}_{v},\]
where \(h^{l}_{u,t}\) is extracted from \(H^{l}_{V}\) (node features at the 0-th layer), \(W^{l+1}_{Q}\) and \(W^{l+1}_{K}\) transform node embeddings into the same space, and \(W^{l+1}_{V}\) provides additional transformation nonlinearity. The first equation computes the attention scores between node \(u\) and its neighborhood embeddings, and the second equation sums up transformed neighborhood embeddings according to their impacts on the target node \(u\). Moreover, the multi-head attention [13] is introduced to enhance the model capacity of graph representation, which computes \(h^{(k,l+1)}_{u}\) in Eq. 7 with different parameter matrices several times in parallel.
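A single-head numpy sketch of Eq. 7 (the paper uses multi-head attention; helper names are ours):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def transition_pooling(h_u, Z_k, W_Q, W_K, W_V):
    """Eq. 7, single head: the query comes from the node's previous-layer
    embedding, keys/values from neighbor embeddings at propagation step k."""
    scores = (W_Q @ h_u) @ (Z_k @ W_K.T).T  # one score per neighbor
    attn = softmax(scores)                  # alpha_{uv}^{(k,l+1)}
    return attn @ (Z_k @ W_V.T)             # weighted sum of values
```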
#### III-C2 Attention Fusion
The attention fusion module is the second level of the attention mechanism, aiming at fusing node embeddings \(h^{(k,l+1)}_{u}\) at different propagation steps into a unified representation. Instead of the similarity-based attention used in Eq. 7, a projection-based attention is employed here to produce weights for different node embeddings, defined by
\[\omega^{(k,l+1)}_{u}=q^{\intercal}\cdot sigmoid(W^{(k,l+1)}h^{(k,l+1)}_{u}+b ^{(k,l+1)}), \tag{8}\]
where \(W^{(k,l+1)}\) and \(b^{(k,l+1)}\) are specified transformation parameters for each step, \(q\) is a shared projection vector across different steps, and \(\omega^{(k,l+1)}_{u}\) is an unnormalized scalar weight of each step. The final node embedding \(h^{l+1}_{u,t}\) at \(l+1\) TIP-GNN layer is computed by
\[\alpha^{k}_{u} =\frac{exp(\omega^{k}_{u})}{\sum_{k^{\prime}}exp(\omega^{k^{ \prime}}_{u})}, \tag{9}\] \[h^{l+1}_{u,t} =\sum_{k}\alpha^{k}_{u}h^{(k,l+1)}_{u},\]
where \(\alpha^{k}_{u}\) is the normalized weight of each step, and \(h^{l+1}_{u,t}\) sums up the weighted node embeddings at different steps.
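Eqs. 8 and 9 can be sketched as follows, with per-step parameters passed as lists (names are ours):

```python
import numpy as np

def attention_fusion(h_steps, W_list, b_list, q):
    """Eqs. 8-9: a projection-based scalar weight per propagation step,
    normalized by softmax, then a weighted sum of step embeddings."""
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))
    w = np.array([q @ sigmoid(W @ h + b)  # Eq. 8: unnormalized weights
                  for h, W, b in zip(h_steps, W_list, b_list)])
    a = np.exp(w - w.max()) / np.exp(w - w.max()).sum()  # Eq. 9: softmax
    return sum(ak * hk for ak, hk in zip(a, h_steps))
```

The shared projection vector `q` lets the model compare steps on a common scale while each step keeps its own transformation.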
### _Model Optimization_
#### III-D1 Loss Function
Predicting whether a temporal interaction will happen between a node pair at a given timestamp is an effective pretext task for generating temporal node embeddings [6, 8]. The probability \(\hat{p}^{t}_{uv}\) of an interaction between node \(u\) and node \(v\) at \(t\) is thus defined as
\[\hat{p}^{t}_{uv}=sigmoid(W\times ReLU(W_{u}h^{L}_{u,t}+W_{v}h^{L}_{v,t})+b), \tag{10}\]
where \(W,W_{u},W_{v}\) are transformation matrices, \(h^{L}_{u,t}\) and \(h^{L}_{v,t}\) denote node embeddings at the \(L\)-th TIP-GNN layer, and \(L\) is the last TIP-GNN layer. The cross-entropy loss is adopted to classify the existence of the edge, which is defined as follows,
\[\underset{\theta}{\arg\min}-\sum_{(u,v,t)\in E}\{log(\hat{p}^{t}_{uv})+c\cdot\mathbb{E}_{j\sim P_{n}(u)}log(1-\hat{p}^{t}_{uj})\}, \tag{11}\]
where \(E\) is the set of temporal edges, \(P_{n}(u)\) is a uniform distribution over nodes for negative sampling, and \(c\) is the number of negative samples. The negative sampling loss is useful for boosting the training efficiency in network embedding methods [23]. For simplicity, the number of negative samples is set as 1 in our experiments. The model parameters are then updated using the Adam [42] optimizer, which uses the weight-decay strategy to regularize the magnitude of model parameters.
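For illustration, Eq. 10 and the \(c=1\) case of Eq. 11 reduce to the following (a numpy sketch with our own names; in practice the loss is optimized with Adam over mini-batches):

```python
import numpy as np

def link_prob(h_u, h_v, W, W_u, W_v, b):
    """Eq. 10: edge probability from the two nodes' final-layer embeddings;
    here W is a weight vector producing a scalar logit."""
    hidden = np.maximum(W_u @ h_u + W_v @ h_v, 0.0)  # ReLU
    return 1.0 / (1.0 + np.exp(-(W @ hidden + b)))   # sigmoid

def bce_with_negative(p_pos, p_neg):
    """Eq. 11 restricted to a single edge with c = 1 negative sample."""
    return -(np.log(p_pos) + np.log(1.0 - p_neg))
```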
#### III-D2 Model Size
The proposed TIP-GNN model introduces additional modules on the transition graph compared with previous methods based on graph neural networks. Let \(d\), \(d_{e}\), \(d_{t}\) be the embedding dimensions of node features, edge features, and time features, respectively. Also, all hidden dimensions are set as \(d\) for simplicity. The first module, sequence translation, does not include trainable parameters. Let \(q\) be the number of MLP layers in the transition propagation, \(K\) be the number of propagation steps, and \(L\) be the number of TIP-GNN layers. The parameters of the transition propagation module are \(\mathcal{O}(d(d+d_{e}+d_{t})L+qd^{2}KL)\), which is mainly attributed to the feature initialization and the multilayer perceptron. The parameters of the bilevel graph convolution module are \(\mathcal{O}(3d^{2}KL+d^{2}KL+dL)\), which consist of the \(K\)-step transition pooling and the attention fusion, respectively. The final prediction layer also contains \(\mathcal{O}(3d^{2})\) parameters. To summarize, the model size of TIP-GNN is \(\mathcal{O}(3d^{2}L+qd^{2}KL+4d^{2}KL+dL+3d^{2})\), where we assume \(d,d_{e},d_{t}\) are of the same order.
#### III-D3 Complexity Analysis
For each node \(u\) at \(t\), a one-layer TIP-GNN samples the \(b\) latest interactions from its interaction set \(S_{u,t}\), and provides node features, edge features, and edge timestamps. Let \(\hat{\mathcal{N}}_{u,t}\) be the sampled neighbor set; a \(2\)-layer TIP-GNN will sample the second-order interactions recursively from \(S_{\hat{\mathcal{N}}_{u,t}}\) following Section 3.4 of TGAT [8]. Let the number of TIP-GNN layers be \(L\); the total number of interactions for a node is denoted by \(\mathcal{B}=1+b+\cdots+b^{L}\). The time complexity of sequence translation is \(\mathcal{O}(\mathcal{B})\). In practice, these transition graphs are cached to boost the training speed. The time complexity of transition propagation is \(\mathcal{O}(3d^{2}L\mathcal{B}+qd^{2}KL\mathcal{B}+dKLb\mathcal{B})\), where the last term refers to the propagation step. The time complexity of the bilevel graph convolution and the prediction layer is \(\mathcal{O}(3KLb\mathcal{B}+d^{2}KL\mathcal{B}+dL\mathcal{B})\) and \(\mathcal{O}(3bd^{2})\), respectively. Let \(\mathcal{M}\) be the model size of TIP-GNN; the time complexity of TIP-GNN is mainly \(\mathcal{O}(\mathcal{B}+b\mathcal{B}\mathcal{M})\), which is \(b\mathcal{B}\) times the model size, apart from the sequence translation module.
## IV Experiment
We conduct elaborate experiments on various temporal networks to evaluate the efficiency and robustness of TIP-GNN, especially on the effectiveness of the transition propagation module. We aim to answer the following key research questions:
* **RQ1**: How does TIP-GNN perform compared with other state-of-the-art temporal network methods?
* **RQ2**: What are the effects of our specific transition propagation module on task performance?
* **RQ3**: How do different parameter settings (e.g., the number of TIP-GNN layers, the number of attention heads, and so on) affect the TIP-GNN model?
### _Datasets and Tasks_
Table II lists temporal networks collected from a wide range of domains via Network Repository [4] and SNAP [5]. In consideration of scalability, nine medium-scale networks without node or edge features from Network Repository [4] are used for temporal link prediction; they contain various temporal patterns due to huge variations in timespan. Meanwhile, two networks with preprocessed edge features from SNAP [5] are used for temporal link prediction in inductive settings and for temporal node classification, whose label ratios are highly imbalanced. For clarity of analysis, the experimental networks are divided according to their node number and graph density. A network with more than 5,000 nodes is referred to as a **large** network, and the others are seen as **small** networks. Meanwhile, the density of a **sparse** network is less than 0.005, and the other networks are considered **dense** networks.
#### IV-A1 Temporal Link Prediction
Temporal networks for temporal link prediction only consist of temporal interactions and do not carry any node features or edge features. These networks could sufficiently validate the robustness of existing methods because of huge differences in their essential properties. According to the above divisions of networks, existing networks mainly consist of two kinds, large and sparse networks, and small and dense networks. Notably, large and sparse networks often spread across a long timespan (over one year), and small and dense networks usually occur during a small timespan (less than two weeks). Further, networks with mixed properties challenge the representation ability of compared methods. For example, the timespans of small and dense networks like _fb-forum_ and _ia-radoslaw-email_ are around half a year. Experiments on these complex networks could reveal the performance of methods in industrial scenarios to some degree.
Temporal link prediction refers to predicting whether a given node pair will have an interaction at a given timestamp. We split the datasets into training, validation, and testing sets with ratios 70:15:15 chronologically. We use the observed interactions in the testing set as positive samples and obtain negative samples by replacing target nodes in the testing set with nodes that have never interacted with the source nodes. The labeled datasets are used for the evaluation of temporal network methods. Our collected temporal graphs from Network Repository [4] provide only graph topology without node or edge features, making many compared methods unsuitable for new nodes in the testing set. Therefore, we remove new nodes from the validation and testing sets for a fair comparison.
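The chronological split and negative-sampling procedure described above can be sketched as follows (illustrative helper names; the actual evaluation pipeline may differ):

```python
import random

def chronological_split(interactions, train_ratio=0.70, val_ratio=0.15):
    """Split (src, dst, t) interactions, sorted by timestamp, into
    train/validation/test sets with ratios 70:15:15."""
    n = len(interactions)
    a = int(n * train_ratio)
    b = int(n * (train_ratio + val_ratio))
    return interactions[:a], interactions[a:b], interactions[b:]

def negative_samples(test_edges, all_nodes, interacted):
    """For each positive test edge, replace the target with a node the
    source has never interacted with (assumes such a node exists)."""
    negatives = []
    for src, _, t in test_edges:
        dst = random.choice(all_nodes)
        while (src, dst) in interacted:
            dst = random.choice(all_nodes)
        negatives.append((src, dst, t))
    return negatives
```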
#### Iv-A2 Inductive Temporal Link Prediction and Temporal Node Classification
Temporal networks for inductive temporal link prediction and temporal node classification are two bipartite graphs consisting of user behaviors on Wikipedia pages and subreddits. Wikipedia consists of 8,227 users and the 1,000 most edited pages, while Reddit contains 10,000 active users and 1,000 active subreddits. The edge features in both Wikipedia and Reddit are text features of interactions, obtained by converting Wikipedia page edits and Reddit posts into 172-dimensional vectors under the _linguistic inquiry and word count_ (LIWC) categories [44]. Specifically, an interaction in Wikipedia is a user editing a page, while an interaction in Reddit is a user creating a post in a subreddit.
Inductive temporal link prediction validates the inductive capability of TIP-GNN on new nodes in the testing set by hiding 10% of nodes from the training set, following the task settings of TGAT [8] and TGN [43]. Temporal node classification refers to classifying a user's temporal state, namely whether the user is banned from editing a Wikipedia page or posting in the subreddit. Also, we split the datasets into training, validation, and testing sets with ratios 70:15:15 chronologically. The labels of these two additional datasets are extremely imbalanced: Wikipedia has 217 positive labels among 157,474 interactions (=0.14%), while Reddit has 366 positive labels among 672,447 interactions (=0.05%).
#### Iv-A3 Evaluation Metrics
For temporal link prediction, we compute the classification _accuracy_ and the _area under the ROC curve_ (AUC-ROC) from the predicted probabilities of edge existence. For temporal link prediction in transductive and inductive settings, we adopt the Average Precision (AP) following TGAT [8] and TGN [43]. For temporal node classification, we use the AUC-ROC due to the highly imbalanced labels.
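These metrics can be computed with scikit-learn; the labels and probabilities below are toy values, not results from the paper:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, average_precision_score

y_true = np.array([1, 1, 0, 0, 1, 0])              # edge exists / user banned
y_prob = np.array([0.9, 0.8, 0.3, 0.4, 0.6, 0.2])  # predicted probabilities

acc = accuracy_score(y_true, y_prob >= 0.5)   # accuracy at threshold 0.5
auc = roc_auc_score(y_true, y_prob)           # AUC-ROC, threshold-free
ap = average_precision_score(y_true, y_prob)  # AP, suited to imbalanced labels
print(acc, auc, ap)  # → 1.0 1.0 1.0 (every positive outscores every negative)
```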
### _Experiment Setting_
#### Iv-B1 Baseline Methods
Most baseline methods generate temporal node embeddings with the pretext task of temporal link prediction. We employ a logistic classifier for temporal link prediction if methods cannot predict temporal links directly. These methods can be categorized as follows:
* **Static graph methods.** Node2Vec [25] and SAGE [39] are two classic and effective algorithms for skip-gram models and graph neural networks, respectively.
* **Discrete-time network methods.** TNODE [9] builds a recurrent model upon node embeddings in each time step, which is a state-of-the-art method on discrete-time networks.
* **Continuous-time network methods.** CTDNE [32] and HTNE [7] are initially designed for continuous-time networks. However, they are not flexible for generating temporal node embeddings at a given timestamp. JODIE [5], TGAT [8], APAN [6] and TGN [43] are four state-of-the-art temporal network methods.
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline Temporal Graph & \(|V|\) & \(|E|\) & Density & Repetition & Timespan \\ \hline \multicolumn{6}{c}{Temporal Link Prediction} \\ \hline ia-workplace-contacts & 92 & 9.8K & 2.34 & 77.1\% & 11.43 \\ ia-contacts-hypertext2009 & 113 & 20.8K & 3.28 & 59.0\% & 2.46 \\ ia-contact & 274 & 28.2K & 0.75 & 6.9\% & 3.97 \\ fb-forum & 899 & 33.7K & 0.08 & 20.8\% & 164.94 \\ soc-bitcoin & 835 & 35.8K & 0.002 & 0.9\% & 1903.27 \\ ia-radoslaw-email & 167 & 82.9K & 5.98 & 18.8\% & 271.19 \\ ia-movielens-user2tags-10m & 17K & 95.5K & 0.0007 & 19.9\% & 1108.97 \\ ia-primary-school-proximity & 242 & 125.7K & 4.31 & 38.3\% & 1.35 \\ ia-slashdot-reply-dir & 51K & 140.7K & 0.0001 & 4.2\% & 977.36 \\ \hline \multicolumn{6}{c}{Inductive Temporal Link Prediction \& Temporal Node Classification} \\ \hline Wikipedia & 9.2K & 157.4K & 0.0036 & 79.1\% & 29.77 \\ Reddit & 10.9K & 672.4K & 0.011 & 61.4\% & 31.00 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Temporal network datasets and statistics. \(|V|\) is the number of nodes in the dataset. \(|E|\) is the number of interactions. The graph density is the ratio between \(|E|\) and \(\frac{|V|(|V|-1)}{2}\). The repetition of interactions is the fraction of interactions in which a node interacts with the same neighbor as in its previous interaction. The time unit of the timespan is one day.
#### Iv-A2 Settings of Temporal Link Prediction
The embedding dimension for all methods is set to 128 for a fair comparison. For each method, we report the average testing performance and standard deviation over five runs with different hyper-parameters. For static graph methods, we grid-search the random walk parameters \(p,q\) of Node2Vec [25] over \(\{0.25,0.5,1,2,4\}\), and implement an inductive SAGE [39] that uniformly samples 100 neighbors from historical interactions. For discrete-time graph methods, we run TNODE over three different graph snapshot divisions, namely \(\{8,32,128\}\). For CTDNE [32], the number of walks is set to 10, the walk length to 80, and the context window size to 10. The history length of HTNE [7] is searched over \(\{10,20,30\}\). The hyper-parameter settings of JODIE [5], TGAT [8], APAN [6] and TGN [43] are listed in Appendix B.
The proposed TIP-GNN is implemented in PyTorch [45]. In default settings, the learning rate is set to 0.0001, and the dropout ratio is set to 0.1. The number of TIP-GNN layers is searched over \(\{1,2\}\), and the number of attention heads of
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Wikipedia} & \multicolumn{2}{c}{Reddit} \\ \cline{2-5} & Transductive & Inductive & Transductive & Inductive \\ \hline Node2Vec [25] & \(0.915\pm 0.003\) & \(\dagger\) & \(0.846\pm 0.005\) & \(\dagger\) \\ SAGE [39] & \(0.956\pm 0.003\) & \(0.911\pm 0.003\) & \(0.977\pm 0.002\) & \(0.963\pm 0.002\) \\ CTDNE [32] & \(0.922\pm 0.005\) & \(0.914\pm 0.003\) & \(\dagger\) \\ JODIE & \(0.946\pm 0.005\) & \(0.931\pm 0.004\) & \(0.971\pm 0.003\) & \(0.944\pm 0.011\) \\ APAN & \(0.981\pm 0.002\) & \(\dagger\) & \(\mathbf{0.992\pm 0.002}\) & \(\dagger\) \\ TGAT & \(0.953\pm 0.001\) & \(0.940\pm 0.003\) & \(0.981\pm 0.002\) & \(\dagger\) \\ TGN & \(0.985\pm 0.001\) & \(0.978\pm 0.001\) & \(0.987\pm 0.001\) & \(0.976\pm 0.001\) \\ TIP-GNN & \(\mathbf{0.986\pm 0.001}\) & \(\mathbf{0.982\pm 0.001}\) & \(0.988\pm 0.001\) & \(\mathbf{0.977\pm 0.001}\) \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Average Precision for temporal link prediction in transductive and inductive settings.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{**ia-workplace**} & \multicolumn{2}{c}{**ia-hypertext**} & \multicolumn{2}{c}{**ia-contact**} \\ \cline{2-5} & Accuracy & AUC & Accuracy & AUC & Accuracy & AUC \\ \hline Node2Vec [25] & \(0.649\pm 0.033\) & \(0.688\pm 0.019\) & \(0.642\pm 0.047\) & \(0.678\pm 0.029\) & \(0.760\pm 0.028\) & \(0.801\pm 0.012\) \\ SAGE [39] & \(0.746\pm 0.008\) & \(0.857\pm 0.008\) & \(0.709\pm 0.009\) & \(0.830\pm 0.006\) & \(0.818\pm 0.004\) & \(0.856\pm 0.003\) \\ CTDNE [32] & \(0.625\pm 0.014\) & \(0.673\pm 0.010\) & \(0.546\pm 0.069\) & \(0.572\pm 0.036\) & \(0.821\pm 0.006\) & \(0.851\pm 0.004\) \\ HTNE [7] & \(0.621\pm 0.015\) & \(0.661\pm 0.015\) & \(0.517\pm 0.044\) & \(0.540\pm 0.021\) & \(0.806\pm 0.010\) & \(0.831\pm 0.010\) \\ TNODE [9] & \(0.801\pm 0.009\) & \(0.883\pm 0.006\) & \(0.623\pm 0.071\) & \(0.683\pm 0.019\) & \(0.815\pm 0.008\) & \(0.852\pm 0.002\) \\ JODIE [5] & \(0.538\pm 0.011\) & \(0.600\pm 0.054\) & \(0.610\pm 0.035\) & \(0.667\pm 0.067\) & \(0.812\pm 0.005\) & \(0.850\pm 0.005\) \\ APAN [6] & \(0.695\pm 0.016\) & \(0.763\pm 0.035\) & \(0.725\pm 0.007\) & \(0.807\pm 0.009\) & \(0.832\pm 0.003\) & \(0.884\pm 0.002\) \\ TGAT [8] & \(0.878\pm 0.001\) & \(0.959\pm 0.001\) & \(\mathbf{0.894\pm 0.001}\) & \(\mathbf{0.950\pm 0.001}\) & \(\mathbf{0.883\pm 0.001}\) & \(0.921\pm 0.001\) \\ TGN [43] & \(0.883\pm 0.004\) & \(0.960\pm 0.001\) & \(0.885\pm 0.001\) & \(0.953\pm 0.001\) & \(\mathbf{0.891\pm 0.001}\) & \(\mathbf{0.927\pm 0.001}\) \\ TIP-GNN & \(\mathbf{0.885\pm 0.010}\) & \(\mathbf{0.961\pm 0.007}\) & \(0.889\pm 0.009\) & \(0.958\pm 0.005\) & \(0.887\pm 0.003\) & \(0.922\pm 0.003\) \\ \hline Improvements & 0.2\% & 0.1\% & -0.5\% & -0.1\% & -0.4\% & -0.5\% \\ \hline \hline \multicolumn{5}{c}{**fb-form**} & \multicolumn{2}{c}{**soc-bitcoin**} & \multicolumn{2}{c}{**ia-radoslaw**} \\ \cline{2-5} & Accuracy & AUC & Accuracy & AUC & Accuracy & AUC \\ \hline Node2Vec [25] & \(0.744\pm 0.009\) & 
\(0.823\pm 0.008\) & \(0.708\pm 0.014\) & \(0.774\pm 0.012\) & \(0.708\pm 0.011\) & \(0.773\pm 0.010\) \\ SAGE [39] & \(0.636\pm 0.006\) & \(0.724\pm 0.010\) & \(0.651\pm 0.008\) & \(0.734\pm 0.009\) & \(0.804\pm 0.006\) & \(0.894\pm 0.005\) \\ CTDNE [32] & \(0.745\pm 0.007\) & \(0.817\pm 0.005\) & \(0.778\pm 0.004\) & \(0.836\pm 0.003\) & \(0.723\pm 0.005\) & \(0.795\pm 0.004\) \\ HTNE [7] & \(0.668\pm 0.004\) & \(0.715\pm 0.005\) & \(0.611\pm 0.009\) & \(0.639\pm 0.005\) & \(0.679\pm 0.005\) & \(0.744\pm 0.007\) \\ TNODE [9] & \(0.716\pm 0.004\) & \(0.795\pm 0.007\) & \(0.724\pm 0.007\) & \(0.793\pm 0.008\) & \(0.775\pm 0.003\) & \(0.863\pm 0.001\) \\ JODIE [5] & \(0.632\pm 0.034\) & \(0.751\pm 0.045\) & \(0.814\pm 0.056\) & \(0.880\pm 0.019\) & \(0.713\pm 0.012\) & \(0.784\pm 0.022\) \\ APAN [6] & \(0.776\pm 0.01
the bilevel graph convolution is searched over \(\{1,2,3,4\}\). The early-stopping strategy is adopted until the validation AUC score does not improve for three epochs.
#### Iv-B3 Settings of Temporal Node Classification
Continuous-time graph methods, including JODIE [5], APAN [6], TGAT [8], TGN [43], and our TIP-GNN, predict users' states using temporal node embeddings. For other methods, we concatenate the node embeddings of the user and the corresponding Wikipedia page or subreddit as input features. We use a three-layer MLP [8] as the classifier, whose hidden dimensions are {80, 10, 1} respectively. In addition, the MLP classifier is trained with the Adam optimizer, the Glorot initialization, and the early-stopping strategy with ten epochs. Due to the label imbalance, the positive labels in each batch are oversampled for better performance.
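Per-batch oversampling of the rare positive labels can be sketched with NumPy (an illustrative scheme; the exact oversampling procedure is not specified in the text, and `pos_fraction` is our assumption):

```python
import numpy as np

def oversample_batch(features, labels, rng, pos_fraction=0.5):
    """Build a batch in which positives are resampled (with replacement)
    up to roughly `pos_fraction` of the batch, countering the extreme
    label imbalance (~0.1% positives)."""
    pos = np.flatnonzero(labels == 1)
    neg = np.flatnonzero(labels == 0)
    n_pos = max(1, int(pos_fraction * len(labels)))
    idx = np.concatenate([
        rng.choice(pos, size=n_pos, replace=True),        # oversample rare class
        rng.choice(neg, size=len(labels) - n_pos, replace=False),
    ])
    rng.shuffle(idx)
    return features[idx], labels[idx]
```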
### _RQ1: Overall Performance_
#### Iv-C1 Temporal Link Prediction
Table III reports the average performance of baseline methods over five runs on temporal link prediction. The discussions of their performance are organized as follows:
* Node2Vec [25] and SAGE [39] are two robust baselines on temporal networks, although they are designed for static networks. Specifically, SAGE [39] outperforms CTDNE [32] and HTNE [7] on the first three small and dense networks.
* CTDNE [32], HTNE [7], and TNODE [9] are three temporal network methods. Specifically, CTDNE performs best on large and sparse networks, while its mechanism is less suited to small and dense networks. TNODE beats HTNE on most datasets and especially outperforms CTDNE on small and dense networks.
* JODIE, which is initially designed for temporal item recommendation in temporal networks, shows a performance decline on small and dense networks, namely _ia-workplace_, _ia-hypertext_, _fb-form_, _ia-radoslaw_, and _ia-primary_. Moreover, its high computational complexity causes the experiments on _ia-slashdot_, with 51,000 nodes, to fail on our P6000 GPU with 24GB of memory.
* APAN [6] and TGAT [8] are both based on graph neural networks. However, the dedication of APAN to inference efficiency may make it unsuitable for small networks, while TGAT achieves the second-best on most datasets. Nevertheless, TGAT shows performance degradation on the large and sparse network, namely _ia-slashdot_.
* TGN [43] is a lightweight and powerful temporal network method built upon the recurrent mechanism and graph neural networks. Table III shows that TGN achieves comparable AUC scores with TIP-GNN on several datasets. However, TGN underperforms TIP-GNN on _fb-form_ and _soc-bitcoin_, which may require the information of high-order transition structures.
* Our proposed TIP-GNN performs on par with or outperforms state-of-the-art methods on all datasets. Concretely, TIP-GNN and TGAT [8] obtain top-2 performance on the first three networks, indicating that existing methods can well model small networks. As for large networks, TIP-GNN demonstrates consistent and significant improvements, as shown in Table III. Further, TIP-GNN shows stable performance on experimental datasets, compared with the performance degradation of JODIE [5], APAN [6], and TGAT [8] on several networks. Overall, the improvements of TIP-GNN validate the importance of transition structures between neighbors.
#### Iv-C2 Inductive Temporal Link Prediction
Table IV reports the average AP scores of different methods, where most results are inherited from APAN [6] and TGN [43]. Overall, APAN, TGN, and TIP-GNN obtain close AP scores on _Wikipedia_ and _Reddit_, which indicates that predicting temporal links is somewhat easy for those two datasets. Table II also shows the high repetition ratios of those two datasets, suggesting that a model could achieve high prediction precision by simply predicting the last neighbor.
#### Iv-C3 Temporal Node Classification
Table V reports the average AUC scores and standard deviations of methods over five runs, except for TNODE [9], because the original node classifier of TNODE presents poor performance. The temporal node classification task on _Wikipedia_ and _Reddit_ is challenging for all methods due to their highly imbalanced label ratios. Specifically, static graph methods (Node2Vec [25] and SAGE [39]) outperform CTDNE and HTNE on both datasets, revealing that whether users are banned from editing Wikipedia pages or posting in the subreddits is strongly related to the target pages or subreddits. Also, the limited memorization capacity of CTDNE and HTNE may cause poor performance on temporal node classification. As for continuous-time network methods that can generalize to unseen nodes, JODIE [5], APAN [6], TGAT [8], TGN [43], and our TIP-GNN all demonstrate much better performance than other methods. Specifically, APAN [6] performs best on _Wikipedia_ and TGN [43] achieves the second-best result on _Reddit_, indicating the dataset preferences of models.
On _Reddit_, TIP-GNN performs significantly better than other methods, which may be attributed to the higher density of _Reddit_ compared with _Wikipedia_.
and the damping factor \(\alpha\) for personalized embedding. In default settings, the number of propagation steps is 2, MLP layers are 2, and the damping factor \(\alpha\) is 0. As shown in Fig. 3, the effects of these designs are summarized as follows:
* The propagation at different steps aims at representing dynamic patterns at different granularities. Fig. 3(a) shows clear accuracy improvements as the number of propagation steps increases. However, the performance on most datasets peaks at three propagation steps, which may be caused by aggregated noise from distant neighborhoods and the well-known over-smoothing problem of GNNs [28].
* The designed MLP aims at enhancing the nonlinearity of embeddings during transition propagation. On the contrary, Fig. 3(b) shows that the performance is almost insensitive to the number of MLP layers and even suffers from a deep MLP. Notably, the 0-layer MLP, which propagates embeddings without any parametrized projection, achieves performance comparable to the best on most datasets. This finding can be attributed to (1) the additional nonlinearity of the bilevel graph convolution module and (2) the uselessness of nonlinear transformations during graph convolution [46, 47].
* The damping factor \(\alpha\)[41] works as a residual connection to overcome the over-smoothing problem and the training difficulty of GNNs. Fig. 3(c) shows complex relations between \(\alpha\) and the performance: social networks with comments and replies (i.e., _ia-radoslaw_ and _ia-slash_)
Fig. 4: Attention weights at each propagation step of TIP-GNN.
Fig. 3: Ablation studies on the transition propagation module of TIP-GNN.
benefit significantly from the damping factor, while other networks perform stably regardless of their sparsity and repetition ratios. In general, a small damping factor could boost the performance, but an optimal damping factor depends on the network properties.
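The residual role of the damping factor can be illustrated with an APPNP-style update in the spirit of [41] (a sketch; the exact TIP-GNN propagation rule may differ):

```python
import numpy as np

def damped_propagation(A_hat, Z0, steps=2, alpha=0.1):
    """Propagate embeddings Z0 over a row-normalized transition matrix
    A_hat, with damping factor alpha acting as a residual connection:

        Z_{k+1} = (1 - alpha) * A_hat @ Z_k + alpha * Z0

    alpha = 0 recovers plain propagation (prone to over-smoothing),
    while alpha = 1 keeps the initial embeddings untouched.
    """
    Z = Z0
    for _ in range(steps):
        Z = (1 - alpha) * (A_hat @ Z) + alpha * Z0
    return Z
```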
#### V-B2 Discussions on the propagation step
The propagation step defined by Eq. 6 plays an essential role in the transition propagation module, where the number of MLP layers is 2 and the damping factor \(\alpha\) is 0. Since most datasets achieve peak performance with three propagation steps in Fig. 3(a), we visualize the corresponding attention weights of each propagation step, defined by Eq. 9, in the last TIP-GNN layer. Findings from Fig. 4 can be summarized as follows:
* The 0-th step, which has not yet involved any transition propagation, usually obtains the largest weights on most datasets compared with other propagation steps. Specifically, attention weights of the 0-th step are larger than 0.6 on average on the first three networks, indicating that transition propagation may only boost the performance marginally for simple networks.
* With the increase of interactions, attention weights of the 0-th step decrease accordingly, and attention weights of
Fig. 5: Parameter sensitivity of TIP-GNN.
Fig. 6: Effects of the number of TIP-GNN layers and attention heads.
other steps even become comparable to those of the 0-th step on several networks, namely _ia-radoslaw_, _ia-movielens_, and _ia-primary_. This observation validates the existence and benefits of transition structures in temporal networks.
* Lastly, attention weights of different temporal networks present various distributions of propagation steps. For example, _soc-bitcoin_ and _ia-slashdot_ are both large and sparse networks. However, _soc-bitcoin_ prefers short-range transitions, while _ia-slashdot_ prefers long-range transitions. Nevertheless, our proposed attention fusion module could learn preferences adaptively across different propagation steps.
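The adaptive fusion across propagation steps can be sketched as a softmax attention over step-wise embeddings (illustrative; the paper's Eq. 9 may use a different scoring function, and `query` is our assumption):

```python
import numpy as np

def fuse_propagation_steps(step_embs, query):
    """Attention fusion across propagation steps for one node.

    step_embs: (K+1, d) embeddings at propagation steps 0..K;
    query:     (d,) query vector.
    Returns the fused embedding and the attention weights, i.e. the
    per-step contributions visualized in Fig. 4.
    """
    scores = step_embs @ query / np.sqrt(len(query))  # scaled dot-product
    scores -= scores.max()                            # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax over steps
    return weights @ step_embs, weights
```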
### _RQ3: Parameter Sensitivity_
For quantitative comparison, the following experiments are conducted with fixed parameters of the transition propagation module that the number of propagation steps is 2, the number of MLP layers is 2, and the damping factor \(\alpha\) is 0.
#### Iv-E1 Model Architecture
The parameters of the model architecture refer to the number of TIP-GNN layers and the attention heads of the transition pooling module. The training parameters are also fixed for simplicity: the number of sampled interactions is 20, the dropout ratio is 0.1, and the batch size is 200. We plot the grid search results over the number of TIP-GNN layers \(\{1,2\}\) and the number of attention heads \(\{1,2,3,4\}\) in Fig. 6, which can be summarized as follows:
* A deeper TIP-GNN perceives a wider neighborhood in networks. Fig. 6 demonstrates the superiority of a deeper model on all networks, owing to the benefits of collaborative filtering [48, 47] and neighborhood smoothing [28, 46].
* A larger number of attention heads enlarges the model capacity [29, 13]. The results in Fig. 6 can be divided into two kinds: networks with low repetition ratios in Table II benefit from more attention heads, while networks with high repetition ratios (such as _ia-radoslaw_ and _ia-primary_) suffer from them. This indicates that a complex model may degrade performance on simple networks.
#### Iv-E2 Hyper-parameters
The compared hyper-parameters presented here include the number of neighbors, the dropout ratio, and the batch size. The model-architecture parameters are fixed: the number of TIP-GNN layers is 2, and the number of attention heads is 2. The first three networks present stable performance under different hyper-parameters; thus, the following discussion only concerns the last six networks:
* Interactions with recent neighbors usually reveal nodes' dynamic patterns in temporal networks. Fig. 5(a) shows that most networks benefit from sampling a larger window of recent interactions, while _ia-slashdot_ presents the contrary result. The reason may be that nodes in most networks demonstrate long-range interests, while _ia-slashdot_ demonstrates fast shifts of node interests because it is a post-then-reply network.
* Dropout is effective for training a robust neural network model by setting some neurons of the model to zero and adjusting the outputs accordingly [49]. Fig. 5(b) shows that our original TIP-GNN is robust even without the dropout technique. Nevertheless, TIP-GNN achieves slight performance improvements when using dropout.
* It is a _de facto_ standard to train neural network models with mini-batches. Similar to the effects of the dropout technique, Fig. 5(c) shows the robustness of our TIP-GNN on most networks under different batch sizes. Specifically, the network _ia-slashdot_ benefits from a larger batch size, which may be attributed to its large-scale nodes and sparsity of interactions.
## V Conclusion
In this paper, we propose a novel TIP-GNN based on graph neural networks to model the bilevel graph structure in temporal networks, which highlights nodes' personalized transition patterns among neighbors. The transition propagation module encodes nodes' dynamic patterns with personalized propagation rules; then, the bilevel graph convolution module extracts useful neighborhood embeddings with a bilevel attention mechanism. Experimental results demonstrate the benefits of encoding dynamic patterns in temporal networks: our TIP-GNN outperforms existing approaches consistently and significantly on the experimental datasets. Extensive experiments further reveal the adaptivity of the multistep propagation of TIP-GNN to various temporal networks, as well as the robustness of TIP-GNN.
As for future work, on the one hand, we will explore the representation limitations of GNNs to overcome the performance decline in Fig. 3(a). Our ablation studies on propagation steps in Fig. 3(a) demonstrate that TIP-GNN achieves peak performance when the number of propagation steps is enlarged properly. Following works that mitigate the over-smoothing problem of GNNs [50, 41], we additionally design controllable modules with the zero-layer MLP and the attention mechanism for different propagation steps. However, the bilevel attention mechanism cannot suppress the additional noise induced by a larger propagation step. In the future, we will investigate the over-smoothing problem from the perspective of temporal networks and propose an enhanced GNN.
On the other hand, we will explore the interpretability of our proposed transition graphs following previous works [51, 52]. IGMC [51] reveals the common patterns of enclosing subgraphs between users and items, while KGIN [52] captures user intents with an attentive mechanism over knowledge graph relations. In the future, we will investigate how to annotate user interactions with auxiliary information such as public knowledge graphs.
## VI Acknowledgment
This work is supported by National Natural Science Foundation of China (U20B2066, 61976186), the Starry Night Science Fund of Zhejiang University Shanghai Institute for
Advanced Study (Grant No. SN-ZIU-SIAS-001), and the Fundamental Research Funds for the Central Universities (226-2022-00064).
|
2304.04008 | Infinitely wide limits for deep Stable neural networks: sub-linear,
linear and super-linear activation functions | There is a growing literature on the study of large-width properties of deep
Gaussian neural networks (NNs), i.e. deep NNs with Gaussian-distributed
parameters or weights, and Gaussian stochastic processes. Motivated by some
empirical and theoretical studies showing the potential of replacing Gaussian
distributions with Stable distributions, namely distributions with heavy tails,
in this paper we investigate large-width properties of deep Stable NNs, i.e.
deep NNs with Stable-distributed parameters. For sub-linear activation
functions, a recent work has characterized the infinitely wide limit of a
suitable rescaled deep Stable NN in terms of a Stable stochastic process, both
under the assumption of a ``joint growth" and under the assumption of a
``sequential growth" of the width over the NN's layers. Here, assuming a
``sequential growth" of the width, we extend such a characterization to a
general class of activation functions, which includes sub-linear,
asymptotically linear and super-linear functions. As a novelty with respect to
previous works, our results rely on the use of a generalized central limit
theorem for heavy tails distributions, which allows for an interesting unified
treatment of infinitely wide limits for deep Stable NNs. Our study shows that
the scaling of Stable NNs and the stability of their infinitely wide limits may
depend on the choice of the activation function, bringing out a critical
difference with respect to the Gaussian setting. | Alberto Bordino, Stefano Favaro, Sandra Fortini | 2023-04-08T13:45:52Z | http://arxiv.org/abs/2304.04008v1 | Infinitely wide limits for deep Stable neural networks: sub-linear, linear and super-linear activation functions
###### Abstract
There is a growing literature on the study of large-width properties of deep Gaussian neural networks (NNs), i.e. deep NNs with Gaussian-distributed parameters or weights, and Gaussian stochastic processes. Motivated by some empirical and theoretical studies showing the potential of replacing Gaussian distributions with Stable distributions, namely distributions with heavy tails, in this paper we investigate large-width properties of deep Stable NNs, i.e. deep NNs with Stable-distributed parameters. For sub-linear activation functions, a recent work has characterized the infinitely wide limit of a suitable rescaled deep Stable NN in terms of a Stable stochastic process, both under the assumption of a "joint growth" and under the assumption of a "sequential growth" of the width over the NN's layers. Here, assuming a "sequential growth" of the width, we extend such a characterization to a general class of activation functions, which includes sub-linear, asymptotically linear and super-linear functions. As a novelty with respect to previous works, our results rely on the use of a generalized central limit theorem for heavy tails distributions, which allows for an interesting unified treatment of infinitely wide limits for deep Stable NNs. Our study shows that the scaling of Stable NNs and the stability of their infinitely wide limits may depend on the choice of the activation function, bringing out a critical difference with respect to the Gaussian setting.
## 1 Introduction
Deep (feed-forward) neural networks (NNs) play a critical role in many domains of practical interest, and nowadays they are the subject of numerous studies. Of special interest is the study of prior distributions over the NN's parameters or weights, namely random initializations of NNs. In such a context, there is a growing interest on large-width properties of deep NNs with Gaussian-distributed parameters, with emphasis on the interplay between infinitely wide limits of such NNs and Gaussian stochastic processes. Neal (1996) characterized the infinitely wide limit of a shallow Gaussian NN. In particular, let: i) \(\mathbf{x}\in\mathbb{R}^{d}\) be the input of the NN; ii) \(\tau:\mathbb{R}\rightarrow\mathbb{R}\) be an activation function; iii) \(\theta=\{w_{i}^{(0)},w,b_{i}^{(0)},b\}_{i\geq 1}\) be the collection of NN's parameters such
that \(w^{(0)}_{i,j}\stackrel{{ d}}{{=}}w_{j}\stackrel{{ iid}}{{\sim}}N(0,\sigma_{w}^{2})\) and \(b^{(0)}_{i}\stackrel{{ d}}{{=}}b\stackrel{{ iid}}{{\sim}}N(0,\sigma_{b}^{2})\) for \(\sigma_{w}^{2},\sigma_{b}^{2}>0\), with \(N(\mu,\sigma^{2})\) being the Gaussian distribution with mean \(\mu\) and variance \(\sigma^{2}\). Then, consider a rescaled shallow Gaussian NN defined as
\[f_{\mathbf{x}}(n)[\tau,n^{-1/2}]=b+\frac{1}{n^{1/2}}\sum_{j=1}^{n}w_{j}\tau(\langle w ^{(0)}_{j},\mathbf{x}\rangle_{\mathbb{R}^{d}}+b^{(0)}_{j}), \tag{1}\]
with \(n^{-1/2}\) being the scaling factor. Neal (1996) showed that, as \(n\rightarrow+\infty\) the NN output \(f_{\mathbf{x}}(n)[\tau,n^{-1/2}]\) converges in distribution to a Gaussian random variable (RV) with mean zero and a suitable variance. The proof follows by an application of the Central Limit Theorem (CLT), thus relying on minimal assumptions on \(\tau\), as it is sufficient to ensure that \(\mathbb{E}[(g_{j}(\mathbf{x}))^{2}]\) is finite, where \(g_{j}(\mathbf{x})=w_{j}\tau(\langle w^{(0)}_{j},\mathbf{x}\rangle_{\mathbb{R}^{d}}+b^{ (0)}_{j})\). The result of Neal (1996) has been extended to a general input matrix, i.e. \(k>1\) inputs of dimension \(d\), and deep Gaussian NNs, assuming both a "sequential growth" (Der and Lee, 2005) and a "joint growth" (de G. Matthews et al., 2018) of the width over the NN's layers. See Theorem A.1. In general, all these large-width asymptotic results rely on some minimal assumptions for the function \(\tau\), thus allowing to cover the most popular activation functions.
Neal (1996) first discussed the problem of replacing the Gaussian distribution of the NN's parameters with a Stable distribution, namely a distribution with heavy tails (Samorodnitsky and Taqqu, 1994), leaving to future research the study of infinitely wide limits of Stable NNs. In a recent work, Favaro et al. (2020, 2022) characterized the infinitely wide limit of deep Stable NNs in terms of a Stable stochastic process, assuming both a "joint growth" and a "sequential growth" of the width over the NN's layers. Critical to achieve the infinitely wide Stable process is the assumption of a sub-linear activation function \(\tau\), i.e. \(|\tau(x)|\leq a+b|x|^{\beta}\), with \(a,b>0\) and \(0<\beta<1\). In particular, for a shallow Stable NN, let: \(\mathbf{x}\in\mathbb{R}^{d}\) be the input of the NN; ii) \(\tau:\mathbb{R}\rightarrow\mathbb{R}\) be the sub-linear activation function of the NN; iii) \(\theta=\{w^{(0)}_{i},w,b^{(0)}_{i},b\}_{i\geq 1}\) be the NN's parameters such that \(w^{(0)}_{i,j}\stackrel{{ d}}{{=}}w_{j}\stackrel{{ iid}}{{\sim}}S_{\alpha}(\sigma_{w})\) and \(b^{(0)}_{i}\stackrel{{ d}}{{=}}b\stackrel{{ iid}}{{\sim}}S_{\alpha}(\sigma_{b})\) for \(\alpha\in(0,2]\) and \(\sigma_{w},\sigma_{b}>0\), with \(S_{\alpha}(\sigma)\) being the symmetric Stable distribution with stability \(\alpha\) and scale \(\sigma\). Then, consider the rescaled shallow Stable NN
\[f_{\mathbf{x}}(n)[\tau,n^{-1/\alpha}]=b+\frac{1}{n^{1/\alpha}}\sum_{j=1}^{n}w_{j} \tau(\langle w^{(0)}_{j},\mathbf{x}\rangle_{\mathbb{R}^{d}}+b^{(0)}_{j}), \tag{2}\]
with \(n^{-1/\alpha}\) being the scaling factor. The NN (1) is recovered from (2) by setting \(\alpha=2\). Favaro et al. (2020) showed that, as \(n\rightarrow+\infty\), the NN output \(f_{\mathbf{x}}(n)[\tau,n^{-1/\alpha}]\) converges in distribution to a Stable RV with stability \(\alpha\) and a suitable scale. See Theorem A.2 in the Appendix. Differently from the Gaussian setting of (de G. Matthews et al., 2018), the result of Favaro et al. (2020) relies on the assumption of a sub-linear activation function. This is a strong assumption, as it excludes some popular activation functions.
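A simulation sketch of (2): symmetric \(\alpha\)-Stable weights can be drawn with the Chambers-Mallows-Stuck method (for \(\alpha=2\) the sampler reduces to \(N(0,2\sigma^{2})\), a useful sanity check), and \(\tau=\tanh\) is bounded, hence sub-linear as required. Parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sym_stable(alpha, scale, size):
    """Symmetric alpha-stable samples via Chambers-Mallows-Stuck (beta=0)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    X = (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
         * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))
    return scale * X

def shallow_stable_nn(x, n, alpha, sw=1.0, sb=1.0):
    """One draw of the rescaled shallow Stable NN output in (2)."""
    W0 = sym_stable(alpha, sw, (n, len(x)))
    b0 = sym_stable(alpha, sb, n)
    w = sym_stable(alpha, sw, n)
    b = sym_stable(alpha, sb, 1)[0]
    return b + (w * np.tanh(W0 @ x + b0)).sum() / n ** (1 / alpha)

x = np.ones(3) / np.sqrt(3)
out = shallow_stable_nn(x, n=1000, alpha=1.5)  # one heavy-tailed output draw
```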
The use of Stable distributions for the NN's parameters, in place of Gaussian distributions, was first motivated through empirical analyses in Neal (1996), which show that while all Gaussian weights vanish in the infinitely wide limit, some Stable weights retain a non-negligible contribution, allowing the network to represent "hidden features". See also Der and Lee (2005) and Lee et al. (2022), and references therein, for an up-to-date discussion on random initializations of NNs with classes of distributions beyond the Gaussian distribution. In particular, in the context of heavy-tailed distributions, Fortuin et al. (2022) showed that wide Stable (convolutional) NNs trained with gradient descent lead to a higher classification accuracy than Gaussian NNs. Still in such a context, Favaro et al. (2022) considered a Stable NN with a ReLU activation function, showing that the large-width training dynamics of the NN is characterized in terms of kernel regression
with a Stable random kernel, in contrast with the well-known (deterministic) neural tangent kernel in the Gaussian setting (Jacot et al., 2018; Arora et al., 2019). In general, the different behaviours between the Gaussian setting and the Stable setting arise from the large-width sample path properties of the NNs, as shown in Favaro et al. (2022, 2022), which make \(\alpha\)-Stable NNs more flexible than Gaussian NNs. See Figure 1 and Figure 2 in the Appendix.
### Our contributions
In this paper, we investigate the large-width asymptotic behaviour of deep Stable NNs with a general activation function. Given \(f:\mathbb{R}\to\mathbb{R}\) and \(g:\mathbb{R}\to\mathbb{R}\) we write \(f(z)\;\simeq\;g(z)\) for \(z\to+\infty\) if \(\lim_{z\to+\infty}f(z)/g(z)=1\), and \(f(z)=\mathcal{O}(g(z))\) for \(z\to+\infty\) if there exists \(C>0\) and \(z_{0}\) such that \(f(z)/g(z)\leq C\) for every \(z\geq z_{0}\). Analogously for \(z\to-\infty\). We omit the explicit reference to the limit of \(z\) when there is no ambiguity or when the relations hold both for \(z\to+\infty\) and for \(z\to-\infty\). Now, let \(\tau:\mathbb{R}\to\mathbb{R}\) be a continuous function and define:
\[E_{1} =\{\tau\in\mathcal{C}(\mathbb{R};\mathbb{R}):\,|\tau(z)|= \mathcal{O}(|z|^{\beta})\text{ with }0\leq\beta<1\};\] \[E_{2} =\{\tau\in\mathcal{C}(\mathbb{R};\mathbb{R}):\,|\tau(z)|\; \simeq\;|z|^{\gamma}\text{ and }\tau\text{ strictly increasing for }|z|>a,\text{ for some }\gamma,a>0\};\] \[E_{3} =\{\tau\in\mathcal{C}(\mathbb{R};\mathbb{R}):\,\tau(z)\;\simeq\; z^{\gamma}\text{ for }z\to+\infty,|\tau(z)|=\mathcal{O}(|z|^{\beta})\text{ with }\beta<\gamma\text{ for }z\to-\infty\] \[\qquad\text{ and }\tau\text{ strictly increasing for }z>a,\text{ for some }\gamma,a>0\}.\]
We characterize the infinitely wide limits of shallow Stable NNs with activation functions in \(E_{1}\), \(E_{2}\) and \(E_{3}\), assuming a \(d\)-dimensional input. Such a characterization is then applied recursively to derive the behaviour of a deep Stable NN under the simplified setting of "sequential growth", i.e. when the hidden layers grow wide one at a time. Our results extend the work of Favaro et al. (2020, 2022) to general asymptotically linear functions, i.e. \(E_{2}\cup E_{3}\) choosing \(\gamma=1\), and super-linear functions, i.e. \(E_{2}\cup E_{3}\) choosing \(\gamma>1\). As a novelty with respect to previous works, our results rely on the use of a generalized CLT (Uchaikin and Zolotarev, 2011; Otiniano and Goncalves, 2010), which reduces the characterization of the infinitely wide limit of a deep Stable NN to the study of the tail behaviour of a suitable transformation of Stable random variables. This allows for a unified treatment of infinitely wide limits for deep Stable NNs, providing an alternative proof of the result of Favaro et al. (2020, 2022) under the class \(E_{1}\). Our results show that the scaling of a Stable NN and the stability of its infinitely wide limits depend on the choice of the activation function, thus bringing out a critical difference with respect to the Gaussian setting. While in the Gaussian setting the choice of \(\tau\) does not affect the scaling \(n^{-1/2}\) required to achieve the Gaussian process, in the Stable setting the use of an asymptotically linear function results in a change of the scaling \(n^{-1/\alpha}\), through an additional \((\log n)^{-1/\alpha}\) term, to achieve the Stable process. Such a phenomenon was first observed in Favaro et al. (2022) for a shallow Stable NN with a ReLU activation function, which is indeed an asymptotically linear activation function.
### Organization of the paper
Section 2 contains the main results of the paper: i) the weak convergence of a shallow Stable NN with an activation function \(\tau\) in the classes \(E_{1}\), \(E_{2}\) and \(E_{3}\), for an input \(x=1\) and no biases; ii) the weak convergence of a deep Stable NN with an activation function \(\tau\) in the classes \(E_{1}\), \(E_{2}\) and \(E_{3}\), for an input \(x\in\mathbb{R}^{d}\) and biases. In Section 3 we discuss some natural extensions of our work, as well as some directions for future research.
## 2 Main results
Let \((\Omega,\mathcal{H},\mathbb{P})\) be a generic probability space on which all the RVs are assumed to be defined. Given a RV \(Z\), we define its cumulative distribution function (CDF) as \(P_{Z}(z)=\mathbb{P}(Z\leq z)\), its survival function as \(\overline{P}_{Z}(z)=1-P_{Z}(z)\), and its density function with respect to the Lebesgue measure as \(p_{Z}(z)=\frac{dP_{Z}(z)}{dz}\), using the notation \(P_{Z}(dz)\) to indicate \(p_{Z}(z)dz\). A RV \(Z\) is symmetric if \(Z\stackrel{{ d}}{{=}}-Z\), i.e. if \(Z\) and \(-Z\) have the same distribution, that is \(\overline{P}_{Z}(z)=P_{Z}(-z)\) for all \(z\in\mathbb{R}\). We say that \(Z_{n}\) converges to \(Z\) in distribution, as \(n\to+\infty\), if for every point of continuity \(z\in\mathbb{R}\) of \(P_{Z}\) it holds \(P_{Z_{n}}(z)\to P_{Z}(z)\) as \(n\to+\infty\), in which case we write \(Z_{n}\stackrel{{ d}}{{\to}}Z\). Given \(f:\mathbb{R}\to\mathbb{R}\) and \(g:\mathbb{R}\to\mathbb{R}\) we write \(f(z)=o(g(z))\) for \(z\to+\infty\) if \(\lim_{z\to+\infty}f(z)/g(z)=0\). Analogously for \(z\to-\infty\). As before, we omit the reference to the limit of \(z\) when there is no ambiguity or when the relation holds for both \(z\to+\infty\) and for \(z\to-\infty\). Recall that a measurable function \(L:(0,+\infty)\to(0,+\infty)\) is called slowly varying at \(+\infty\) if \(\lim_{x\to+\infty}L(ax)/L(x)=1\) for all \(a>0\).
**Definition 2.1**.: _A \(\mathbb{R}\)-valued RV \(X\) has Stable distribution with stability \(\alpha\in(0,2]\), skewness \(\beta\in[-1,1]\), scale \(\sigma>0\) and shift \(\mu\in\mathbb{R}\), and we write \(X\sim S_{\alpha}(\sigma,\beta,\mu)\), if its characteristic function is \(\varphi_{X}(t)=\mathbb{E}[e^{itX}]=e^{\psi(t)}\), for \(t\in\mathbb{R}\), where_
\[\psi(t)=\begin{cases}-\sigma^{\alpha}|t|^{\alpha}[1+i\beta\tan(\frac{\alpha \pi}{2})\text{sign}(t)]+i\mu t&\alpha\neq 1\\ -\sigma|t|[1+i\beta\frac{2}{\pi}\text{sign}(t)\log{(|t|)}]+i\mu t&\alpha=1. \end{cases}\]
By means of Samorodnitsky and Taqqu (1994, Property 1.2.16), if \(X\sim S_{\alpha}(\sigma,\beta,\mu)\) with \(0<\alpha<2\) then \(\mathbb{E}[|X|^{r}]<+\infty\) for \(0<r<\alpha\), and \(\mathbb{E}[|X|^{r}]=+\infty\) for any \(r\geq\alpha\). A \(\mathbb{R}\)-valued RV \(X\) is distributed as the symmetric \(\alpha\)-Stable distribution with scale parameter \(\sigma\), and we write \(X\sim S_{\alpha}(\sigma)\), if \(X\sim S_{\alpha}(\sigma,0,0)\), which implies that \(\varphi_{X}(t)=\mathbb{E}[e^{itX}]=e^{-\sigma^{\alpha}|t|^{\alpha}},\quad t\in \mathbb{R}\). This allows one to prove that if \(X\sim S_{\alpha}(\sigma)\), then \(aX\sim S_{\alpha}(|a|\sigma)\); see Samorodnitsky and Taqqu (1994, Property 1.2.3). Furthermore, one has the following complete characterization of the tail behaviour of the CDF and PDF of Stable RVs: for a symmetric \(\alpha\)-Stable distribution, Samorodnitsky and Taqqu (1994, Proposition 1.2.15) states that, if \(X\sim S_{\alpha}(\sigma)\) with \(0<\alpha<2\),
\[\overline{P}_{X}(x)=P_{X}(-x)\;\simeq\;\frac{1}{2}C_{\alpha}\sigma^{\alpha}|x |^{-\alpha},\]
where
\[C_{\alpha}=\Big{(}\int_{0}^{+\infty}x^{-\alpha}\sin(x)dx\Big{)}^{-1}=\frac{2} {\pi}\Gamma(\alpha)\sin\Big{(}\alpha\frac{\pi}{2}\Big{)}=\begin{cases}\frac{1- \alpha}{\Gamma(2-\alpha)\cos(\pi\frac{\alpha}{2})}&\alpha\neq 1\\ \frac{2}{\pi}&\alpha=1.\end{cases}\]
As before, if \(X\sim S_{\alpha}(\sigma)\) with \(0<\alpha<2\), then \(p_{X}(x)=p_{X}(-x)\;\simeq\;(\alpha/2)C_{\alpha}\sigma^{\alpha}|x|^{-\alpha-1}\) for \(x\to+\infty\) holds true.
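The two closed forms of \(C_{\alpha}\) above can be checked against each other numerically; a small sketch (function names ours):

```python
from math import gamma, sin, cos, pi

def c_form1(a):
    """C_alpha via (2/pi) * Gamma(alpha) * sin(alpha * pi / 2)."""
    return 2 / pi * gamma(a) * sin(a * pi / 2)

def c_form2(a):
    """C_alpha via (1 - alpha) / (Gamma(2 - alpha) * cos(pi * alpha / 2)), alpha != 1."""
    return (1 - a) / (gamma(2 - a) * cos(pi * a / 2))

# the two expressions agree on (0, 2) \ {1}, and both tend to 2/pi as alpha -> 1
for a in (0.3, 0.7, 1.3, 1.9):
    print(a, c_form1(a), c_form2(a))
print(1.0, c_form1(1.0), 2 / pi)
```

The agreement follows from Euler's reflection formula \(\Gamma(\alpha)\Gamma(1-\alpha)=\pi/\sin(\pi\alpha)\) together with the double-angle identity for the sine.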
For an activation function belonging to the classes \(E_{1}\), \(E_{2}\) and \(E_{3}\), we characterize the infinitely wide limit of a deep Stable NN, assuming a \(d\)-dimensional input and a "sequential growth" of the width over the NN's layers. Critical is the use of the following generalized CLT (Uchaikin and Zolotarev, 2011; Otiniano and Goncalves, 2010).
**Theorem 2.1** (Generalized CLT).: _Let \(Z\) be a RV such that \(\overline{P}_{Z}(z)\;\simeq\;cz^{-p}L(z)\) and \(P_{Z}(-z)\;\simeq\;dz^{-p}L(z)\) for some \(c,d>0\), \(0<p<2\) and with \(L\) being a slowly varying function. Moreover, let
\((Z_{n})_{n\geq 1}\) be a sequence of RVs iid as \(Z\). If_
\[a_{n}=\begin{cases}0&0<p<1\\ (c-d)\log n&p=1\\ \mathbb{E}[Z]&1<p<2,\end{cases}\]
_then, as \(n\to+\infty\)_
\[\frac{1}{(nL(n))^{1/p}}\sum_{i=1}^{n}(Z_{i}-a_{n})\overset{d}{\to}S_{p}\left( \left[\frac{c+d}{C_{p}}\right]^{\frac{1}{p}},\frac{c-d}{c+d},0\right). \tag{3}\]
For \(p<1\), no centering turns out to be necessary in (3), due to the "large" normalizing constants \(n^{-1/p}\), which smooth out the differences between the right and the left tail of \(Z\). For \(p>1\), the centering in (3) is the common one, namely the expectation. The case \(p=1\) is special: the expectation does not exist, so it cannot be used as a centering in (3); on the other hand, centering is necessary for convergence because the normalizing constant \(n^{-1}\) does not grow sufficiently fast to smooth the differences between the right and the left tail of \(Z\). In particular, the term including \(\log n\) comes from the asymptotic behaviour of truncated moments.
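Theorem 2.1 lends itself to a Monte Carlo sanity check (a sketch with our own helper names, not code from the paper): take \(Z\) exactly Pareto with survival function \(z^{-p}\), \(p=0.8\), so that \(c=1\), \(d=0\), \(L\equiv 1\) and \(a_{n}=0\); the normalized sum should then inherit the tail index \(p\), which we estimate with a Hill estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.8
n, reps = 2_000, 4_000

# Z has survival function P(Z > z) = z^{-p} for z >= 1 (exact Pareto: c = 1, d = 0, L = 1)
z = rng.uniform(size=(reps, n)) ** (-1.0 / p)

# p < 1, so a_n = 0: no centering is needed
s = z.sum(axis=1) / n ** (1.0 / p)   # one draw of the normalized sum per row

def hill(sample, k):
    """Hill estimator of the tail index from the k largest order statistics."""
    top = np.sort(sample)[-k:]
    return 1.0 / np.mean(np.log(top[1:] / top[0]))

t_z = hill(z.ravel(), 4000)   # tail index of Z itself
t_s = hill(s, 200)            # tail index of the normalized sum
print(t_z, t_s)               # both should be close to p = 0.8
```

The normalized sum thus remains heavy-tailed with the same index \(p\), in line with the \(p\)-Stable limit in (3).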
### Shallow Stable NNs: large-width asymptotics for an input \(x=1\) and no biases
We start by considering a shallow Stable NN, for an input \(x=1\) and no biases. Let \(w^{(0)}=[w_{1}^{(0)},w_{2}^{(0)},\dots]^{T}\) and \(w=[w_{1},w_{2},\dots]^{T}\) be independent sequences of RVs such that \(w_{j}^{(0)}\overset{iid}{\sim}S_{\alpha_{0}}(\sigma_{0})\) and \(w_{j}\overset{iid}{\sim}S_{\alpha_{1}}(\sigma_{1})\). Then, we set \(Z_{j}=w_{j}\tau(w_{j}^{(0)})\), where \(\tau:\mathbb{R}\to\mathbb{R}\) is a continuous non-decreasing function, and define the shallow Stable NN
\[f(n)[\tau,p]=\frac{1}{n^{1/p}}\sum_{j=1}^{n}Z_{j} \tag{4}\]
with \(p>0\). Since the \(Z_{1},Z_{2},\dots\) in the definition of the shallow Stable NN (4) are iid copies of a RV \(Z\), it is sufficient to study the tail behaviour of \(\overline{P}_{Z}(z)\) and \(P_{Z}(-z)\) in order to obtain the convergence in distribution of \(f(n)[\tau,p]\). As a general strategy, we proceed as follows: i) we study the tail behaviour of \(X\cdot\tau(Y)\), where \(X\sim S_{\alpha_{x}}(\sigma_{x})\), \(Y\sim S_{\alpha_{y}}(\sigma_{y})\), \(X\perp\!\!\!\perp Y\) and \(\tau\in E_{1}\), \(\tau\in E_{2}\) or \(\tau\in E_{3}\); ii) we make use of the generalized CLT, i.e. Theorem 2.1, in order to characterize the infinitely wide limit of the shallow Stable NN (4).
Note that to find the tail behaviour of \(X\cdot\tau(Y)\) it is sufficient to find the tail behaviour of \(|X\cdot\tau(Y)|\), and then use the fact that \(\mathbb{P}[X\cdot\tau(Y)>z]=(1/2)\cdot\mathbb{P}[|X\cdot\tau(Y)|>z]\) for every \(z\geq 0\), since \(X\cdot\tau(Y)\) is symmetric as \(X\) is so. Then, to find the asymptotic behaviour of the survival function of \(|X\cdot\tau(Y)|\) we make use of some results in the theory of convolution tails and domains of attraction of Stable distributions. Hereafter, we recall some basic facts. Given two CDFs \(F\) and \(G\), the convolution \(F*G\) is defined as \(F*G(t)=\int F(t-y)dG(y)\), which inherits linearity and commutativity from the integral operator. Recall that a function \(F\) on \([0,+\infty]\) has exponential tails with rate \(\alpha\) (\(F\in\mathcal{L}_{\alpha}\)) if and only if
\[\lim_{t\to+\infty}\frac{\overline{F}(t-y)}{\overline{F}(t)}=e^{\alpha y},\quad \text{for all real $y\in\mathbb{R}$}.\]
Then,
\[\bar{F}(t)=a(t)\exp\left[-\int_{0}^{t}\alpha(v)dv\right],\text{ where }a(t)\to a>0,\alpha(t)\to\alpha,\text{ as }t\to+\infty.\]
A complementary definition is the following: a function \(U\) on \([0,+\infty]\) is regularly varying with exponent \(\rho\) (\(U\in\mathcal{RV}_{\rho}\)) if and only if
\[\lim_{t\to+\infty}\frac{U(yt)}{U(t)}=y^{\rho},\quad\text{for all }y>0.\]
Then,
\[U(t)=a(t)\exp\left[\int_{0}^{t}\frac{\rho(v)}{v}dv\right],\text{ where }a(t)\to a>0,\rho(t)\to\rho,\text{ as }t\to+\infty,\]
i.e. the Karamata's representation of \(U\). Clearly \(F\in\mathcal{L}_{\alpha}\) if and only if \(\bar{F}(\ln t)\in\mathcal{RV}_{-\alpha}\). The next lemma provides the tail behaviour of the convolution of \(F\) and \(G\), assuming that they have exponential tails with the same rates.
**Lemma 2.2** (Theorem 4 of Cline (1986)).: _Let \(F,G\in\mathcal{L}_{\alpha}\) for some \(\alpha>0\), \(f\in\mathcal{RV}_{\beta}\) and \(g\in\mathcal{RV}_{\gamma}\) where \(f(t)=e^{\alpha t}\overline{F}(t)\) and \(g(t)=e^{\alpha t}\overline{G}(t)\) and \(\beta>-1\) and \(\gamma>-1\). Then_
\[\overline{F*G}(t)\;\simeq\;\frac{\Gamma(1+\beta)\Gamma(1+\gamma)}{\Gamma(1+ \beta+\gamma)}\alpha te^{\alpha t}\overline{F}(t)\overline{G}(t),\quad\text{ as }t\to+\infty.\]
We make use of Lemma 2.2 to find the tail behaviour of \(|X\cdot\tau(Y)|\) when \(|X|\) and \(|\tau(Y)|\) have regularly varying truncated CDFs with same rates. If \(|X|\) and \(|\tau(Y)|\) have regularly varying truncated CDFs with different rates, then we make use of the next lemma, which describes the tail behaviour of \(U\cdot W\), where \(U\) and \(W\) are two independent non-negative RVs such that \(\mathbb{P}[U>u]\) is regularly varying of index \(-\alpha\leq 0\) and \(\mathbb{E}[W^{\alpha}]<+\infty\).
**Lemma 2.3**.: _Suppose \(U\) and \(W\) are two independent non-negative RVs such that \(\mathbb{E}[W^{\alpha}]<+\infty\) and \(\mathbb{P}[W>u]=\mathrm{o}(\mathbb{P}[U>u])\). If \(\mathbb{P}[U>u]\;\simeq\;cu^{-\alpha}\), with \(c>0\), then \(\mathbb{P}[UW>u]\;\simeq\;\mathbb{E}[W^{\alpha}]\cdot\mathbb{P}[U>u]\)._
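Lemma 2.3 (Breiman's lemma) lends itself to a quick numerical check; in the sketch below (illustrative parameter choices, not from the source) \(U\) is exactly Pareto with index \(\alpha=1.5\) and \(W\) is half-normal, so that \(\mathbb{E}[W^{\alpha}]=2^{\alpha/2}\Gamma((\alpha+1)/2)/\sqrt{\pi}\) is finite and \(\mathbb{P}[W>u]=\mathrm{o}(\mathbb{P}[U>u])\):

```python
import numpy as np
from math import gamma, sqrt, pi

rng = np.random.default_rng(2)
alpha = 1.5
N = 2_000_000

u_rv = rng.uniform(size=N) ** (-1.0 / alpha)   # P(U > u) = u^{-alpha}, u >= 1
w_rv = np.abs(rng.standard_normal(N))          # half-normal: all moments finite

u = 20.0
lhs = np.mean(u_rv * w_rv > u)                 # Monte Carlo P(U W > u)
rhs = np.mean(w_rv ** alpha) * u ** (-alpha)   # E[W^alpha] * P(U > u)
# closed form for the half-normal fractional moment:
ew = 2 ** (alpha / 2) * gamma((alpha + 1) / 2) / sqrt(pi)
print(lhs / rhs, ew)   # the ratio should be close to 1
```

The ratio `lhs / rhs` approaching 1 is exactly the statement \(\mathbb{P}[UW>u]\;\simeq\;\mathbb{E}[W^{\alpha}]\cdot\mathbb{P}[U>u]\).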
Lemma 2.3 was stated in Breiman (1965) for \(\alpha\in[0,1]\), and then extended by Cline and Samorodnitsky (1994) for all values of \(\alpha\), still under the hypothesis that \(\mathbb{E}[W^{\alpha+\epsilon}]<+\infty\) for some \(\epsilon>0\). Lemma 2.3 provides a further extension in the case \(\mathbb{P}[U>x]\;\simeq\;cx^{-\alpha}\), with \(c>0\), and has been proved in Denisov and Zwart (2005). Based on Lemma 2.2 and Lemma 2.3, it remains to find the tail behaviour of \(|X|\) and \(|\tau(Y)|\). For the former, it is easy to show that \(\overline{P}_{|X|}(t):=\mathbb{P}[|X|>t]\;\simeq\;C_{\alpha_{x}}\sigma_{x}^{ \alpha_{x}}t^{-\alpha_{x}}\), while, for the latter, we have the next lemma.
**Lemma 2.4** (Tail behaviour of \(\tau(Y)\), \(\tau\in E_{2}\cup E_{3}\)).: _Assuming \(Y\sim S_{\alpha}(\sigma)\), then: i) \(\mathbb{P}[|\tau(Y)|>t]\;\simeq\;C_{\alpha}\sigma^{\alpha}t^{-\alpha/\gamma}\) if \(\tau\in E_{2}\); ii) \(\mathbb{P}[|\tau(Y)|>t]\;\simeq\;\frac{1}{2}C_{\alpha}\sigma^{\alpha}t^{- \alpha/\gamma}\) if \(\tau\in E_{3}\)._
Proof.: If \(\tau(z)\) is strictly increasing for \(z>a\) and \(\tau(z)\;\simeq\;z^{\gamma}\) for \(z\to+\infty\) with \(\gamma>0\), then \(\tau^{-1}(y)\;\simeq\;y^{1/\gamma}\). Analogously at \(-\infty\). We refer to Theorem 5.1 of Olver (1974) for the case \(\gamma=1\). Now, starting with \(\tau\in E_{2}\) and defining the inverse of \(\tau\) where the activation is strictly increasing, we can write for a sufficiently large \(t\):
\[P(|\tau(Y)|>t) =P(\tau(Y)>t)+P(\tau(Y)<-t)\;\simeq\;\frac{1}{2}C_{\alpha}\sigma^{ \alpha}|\tau^{-1}(t)|^{-\alpha}+\frac{1}{2}C_{\alpha}\sigma^{\alpha}|\tau^{-1} (-t)|^{-\alpha}\] \[\;\simeq\;\frac{1}{2}C_{\alpha}\sigma^{\alpha}t^{-\alpha/\gamma}+ \frac{1}{2}C_{\alpha}\sigma^{\alpha}|t|^{-\alpha/\gamma} =C_{\alpha}\sigma^{\alpha}t^{-\alpha/\gamma}.\]
Instead, if \(\tau\in E_{3}\), then there exists \(b>0\) and \(y_{0}<0\) such that \(|\tau(y)|<b|y|^{\beta}\) for \(y\leq y_{0}\). Then, for \(t\) sufficiently large,
\[P(|\tau(Y)|>t,Y<0)\leq P(b|Y|^{\beta}>t,Y<0)\;\simeq\;\frac{1}{2}C_{\alpha} \sigma^{\alpha}(bt)^{-\alpha/\beta}.\]
Furthermore,
\[P(|\tau(Y)|>t,Y>0)=P(Y>\tau^{-1}(t))\;\simeq\;\frac{1}{2}C_{\alpha}\sigma^{ \alpha}|\tau^{-1}(t)|^{-\alpha}\;\simeq\;\frac{1}{2}C_{\alpha}\sigma^{\alpha}t ^{-\alpha/\gamma},\]
hence, since \(\beta<\gamma\), it holds that \(P(|\tau(Y)|>t)\;\simeq\;\frac{1}{2}C_{\alpha}\sigma^{\alpha}t^{-\alpha/\gamma}\), which concludes the proof.
Based on the previous results, it is easy to derive the tail behaviour of \(|X\cdot\tau(Y)|\), which is stated in the next theorem.
**Theorem 2.5** (Tail behaviour of \(|X\cdot\tau(Y)|\)).: _Let \(|Z|=|X\cdot\tau(Y)|\) where \(X\) and \(Y\) are independent and distributed respectively as \(S_{\alpha_{x}}(\sigma_{x})\) and \(S_{\alpha_{y}}(\sigma_{y})\). If \(\tau\in E_{1}\) and \(\beta\alpha_{x}<\alpha_{y}\), then_
\[\overline{P}_{|Z|}(t)\;\simeq\;C_{\alpha_{x}}\sigma_{x}^{\alpha_{x}}\mathbb{E }[|\tau(Y)|^{\alpha_{x}}]t^{-\alpha_{x}}.\]
_For \(\tau\in E_{2}\cup E_{3}\), define \(\underline{\alpha}=\min(\alpha_{x},\alpha_{y}/\gamma)\) and \(c_{\tau}=\frac{1}{2}\) if \(\tau\in E_{3}\) and \(c_{\tau}=1\) otherwise. Then_
\[\overline{P}_{|Z|}(z)\;\simeq\;\left\{\begin{array}{ll}c_{\tau}C_{\underline {\alpha}\gamma}\sigma_{y}^{\underline{\alpha}\gamma}\mathbb{E}[|X|^{\underline {\alpha}}]z^{-\underline{\alpha}}&\text{ if }\quad\gamma>\alpha_{y}/\alpha_{x}\\ c_{\tau}\underline{\alpha}C_{\underline{\alpha}}C_{\underline{\alpha}\gamma} \sigma_{x}^{\underline{\alpha}}\sigma_{y}^{\underline{\alpha}\gamma}z^{- \underline{\alpha}}\log z&\text{ if }\quad\gamma=\alpha_{y}/\alpha_{x}\\ C_{\underline{\alpha}}\sigma_{x}^{\underline{\alpha}}\mathbb{E}[|\tau(Y)|^{ \underline{\alpha}}]z^{-\underline{\alpha}}&\text{ if }\quad\gamma<\alpha_{y}/\alpha_{x}.\end{array}\right.\]
Proof.: We start from the proof of the first case, i.e. \(\tau\in E_{1}\). Here, \(|\tau(Y)|<b|Y|^{\beta}\) for certain \(\beta\in(0,1)\) and \(b>0\), when \(|Y|\) is larger than some \(y_{0}>0\), hence there exists \(c>0\) such that \(\mathbb{E}[|\tau(Y)|^{\alpha_{x}}]\leq c+\mathbb{E}[b|Y|^{\beta\cdot\alpha_{x} }]<+\infty\), being \(\beta\alpha_{x}<\alpha_{y}\) by hypothesis. The thesis then follows from Lemma 2.3. An analogous strategy can be used in the case \(\alpha_{x}\neq\alpha_{y}/\gamma\). Indeed, \(\mathbb{E}[|X|^{\alpha_{y}/\gamma}]<+\infty\) if \(\alpha_{x}>\alpha_{y}/\gamma\) and \(\mathbb{E}[|\tau(Y)|^{\alpha_{x}}]<+\infty\) if \(\alpha_{x}<\alpha_{y}/\gamma\). Hence Lemma 2.3 allows to conclude. A different situation arises when \(\alpha_{x}=\alpha_{y}/\gamma\). In this case, consider the RVs \(\log|X|\) and \(\log|\tau(Y)|\) and observe that, for \(t>0\),
\[\overline{P}_{\log|X|}(t):=\mathbb{P}[\log|X|>t]=\mathbb{P}[|X|>e^{t}]\;\simeq \;C_{\alpha_{x}}\sigma_{x}^{\alpha_{x}}e^{-\alpha_{x}t}\in\mathcal{L}_{\alpha_ {x}},\]
i.e. \(\mathbb{P}[\log|X|>t]\) has an exponential tail with index \(\alpha_{x}\), and the same holds for \(\overline{P}_{\log|\tau(Y)|}:=\mathbb{P}[\log|\tau(Y)|>t]\) since \(\alpha_{x}=\alpha_{y}/\gamma\). Furthermore, \(e^{\alpha_{x}t}\cdot\overline{P}_{\log|X|}(t)\in\mathcal{RV}_{0}\) and \(e^{\alpha_{x}t}\cdot\overline{P}_{\log|\tau(Y)|}\in\mathcal{RV}_{0}\), hence we apply Lemma 2.2 with \(\beta=\gamma=0\), and obtain that
\[\mathbb{P}[\log|X\cdot\tau(Y)|>t] =\mathbb{P}[\log|X|+\log|\tau(Y)|>t]=\overline{P}_{\log|X|}* \overline{P}_{\log|\tau(Y)|}(t)\] \[\;\simeq\;\alpha_{x}te^{\alpha_{x}t}\overline{P}_{\log|X|}(t)\overline{P}_{ \log|\tau(Y)|}(t)\] \[\;\simeq\;\alpha_{x}C_{\alpha_{x}}C_{\alpha_{y}}\sigma_{x}^{\alpha_{x}} \sigma_{y}^{\alpha_{y}}te^{-\alpha_{x}t}.\]
It is sufficient to evaluate this expression in \(\log t\) to obtain the thesis. As for the case \(\tau\in E_{3}\), the proof is the same except for an extra \(\frac{1}{2}\) in the tail behaviour of \(\overline{P}_{|\tau(Y)|}(t)\).
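The tail regimes of Theorem 2.5 can be probed by simulation. The sketch below (our own helpers `rstable` and `hill`, with illustrative parameters) uses \(\tau(z)=\mathrm{sign}(z)|z|^{\gamma}\in E_{2}\), so that \(|\tau(Y)|=|Y|^{\gamma}\) exactly, and estimates the tail index of \(X\cdot\tau(Y)\), which should match \(\underline{\alpha}=\min(\alpha_{x},\alpha_{y}/\gamma)\):

```python
import numpy as np

def rstable(alpha, size, rng):
    # Chambers-Mallows-Stuck sampler, standard symmetric alpha-Stable (alpha != 1)
    th = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * th) / np.cos(th) ** (1 / alpha)
            * (np.cos((1 - alpha) * th) / w) ** ((1 - alpha) / alpha))

def hill(sample, k):
    # Hill estimator of the tail index from the k largest absolute values
    top = np.sort(np.abs(sample))[-k:]
    return 1.0 / np.mean(np.log(top[1:] / top[0]))

rng = np.random.default_rng(3)
N = 1_000_000

def tail_index_product(ax, ay, g):
    x, y = rstable(ax, N, rng), rstable(ay, N, rng)
    tau_y = np.sign(y) * np.abs(y) ** g      # tau(z) = sign(z)|z|^g, in E2
    return hill(x * tau_y, 1000)

t1 = tail_index_product(1.2, 1.8, 0.5)   # predicted min(1.2, 3.6) = 1.2
t2 = tail_index_product(1.8, 1.2, 2.0)   # predicted min(1.8, 0.6) = 0.6
print(t1, t2)
```

The first call falls in the regime \(\gamma<\alpha_{y}/\alpha_{x}\) (tail driven by \(X\)), the second in the regime \(\gamma>\alpha_{y}/\alpha_{x}\) (tail driven by \(\tau(Y)\)).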
Based on Theorem 2.5, the next theorem is an application of the generalized CLT, i.e. Theorem 2.1, that provides the infinitely wide limit of the shallow Stable NN (4), with the activation function \(\tau\) belonging to the classes \(E_{1},E_{2},E_{3}\).
**Theorem 2.6** (Shallow Stable NN, \(\tau\in E_{1},E_{2},E_{3}\)).: _Consider \(f(n)[\tau,p]\) defined in (4). If \(\tau\in E_{1}\) and \(\beta\alpha_{1}<\alpha_{0}\), then_
\[f(n)[\tau,\alpha_{1}]\stackrel{{ d}}{{\longrightarrow}}S_{\alpha_ {1}}\left(\sigma_{1}\left(\mathbb{E}_{Z\sim S_{\alpha_{0}}(\sigma_{0})}[|\tau( Z)|^{\alpha_{1}}]\right)^{1/\alpha_{1}}\right).\]
_If \(\tau\in E_{2}\cup E_{3}\), define \(\underline{\alpha}=\min(\alpha_{1},\alpha_{0}/\gamma)\), \(c_{\tau}=\frac{1}{2}\) if \(\tau\in E_{3}\) and \(c_{\tau}=1\) otherwise, and \(m_{n}(\gamma)=\log n\) if \(\gamma=\alpha_{0}/\alpha_{1}\) and \(m_{n}(\gamma)=1\) otherwise. Then_
\[m_{n}(\gamma)^{-1/\underline{\alpha}}f(n)[\tau,\underline{\alpha}]\stackrel{{ d}}{{\longrightarrow}}S_{\underline{\alpha}}(\sigma),\]
_where_
\[\sigma=\left\{\begin{array}{ll}\sigma_{0}^{\gamma}\sigma_{1}\left(c_{\tau} \frac{C_{\underline{\alpha}\gamma}}{C_{\underline{\alpha}}}\mathbb{E}_{Z\sim S _{\alpha_{1}}(1)}[|Z|^{\underline{\alpha}}]\right)^{1/\underline{\alpha}}& \text{ if }\quad\gamma>\alpha_{0}/\alpha_{1}\\ \sigma_{0}^{\gamma}\sigma_{1}\left(c_{\tau}\underline{\alpha}C_{\gamma \underline{\alpha}}\right)^{1/\underline{\alpha}}&\text{ if }\quad\gamma=\alpha_{0}/\alpha_{1}\\ \sigma_{1}\left(\mathbb{E}_{Z\sim S_{\underline{\alpha}}(\sigma_{0})}[|\tau( Z)|^{\underline{\alpha}}]\right)^{1/\underline{\alpha}}&\text{ if }\quad\gamma<\alpha_{0}/\alpha_{1}.\end{array}\right.\]
Proof.: Observe that, since \(w_{j}\cdot\tau(w_{j}^{(0)})\) is symmetric, then
\[\mathbb{P}[w_{j}\cdot\tau(w_{j}^{(0)})>t]=\mathbb{P}[w_{j}\cdot\tau(w_{j}^{( 0)})<-t]=\frac{1}{2}\mathbb{P}[|w_{j}\cdot\tau(w_{j}^{(0)})|>t].\]
Hence, the proof of this theorem follows by combining Theorem 2.5 and the generalized CLT, i.e. Theorem 2.1, after observing that \(a_{n}=0\) due to the symmetry of \(w_{j}\cdot\tau(w_{j}^{(0)})\).
The term \((\log n)^{-1/\underline{\alpha}}\) in the scaling, in the case \(\tau\in E_{2}\cup E_{3}\) with \(\alpha_{1}=\alpha_{0}/\gamma\), is a novelty with respect to the Gaussian setting. That is, NNs with Gaussian-distributed parameters are not affected by the presence of one activation in place of another as the scaling is always \(n^{-1/2}\), while this is not true for Stable NNs as shown above.
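Theorem 2.6 can also be illustrated end-to-end: for \(\tau(z)=z^{3}\in E_{2}\) (\(\gamma=3\)) and \(\alpha_{1}=\alpha_{0}=\alpha\), the limiting output of the shallow NN (4) should be \((\alpha/3)\)-Stable. The Monte Carlo sketch below (helper names and parameter values ours) builds many independent network outputs with the scaling \(n^{-1/\underline{\alpha}}=n^{-3/\alpha}\) and Hill-estimates their tail index:

```python
import numpy as np

def rstable(alpha, size, rng):
    # Chambers-Mallows-Stuck sampler, standard symmetric alpha-Stable (alpha != 1)
    th = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * th) / np.cos(th) ** (1 / alpha)
            * (np.cos((1 - alpha) * th) / w) ** ((1 - alpha) / alpha))

def hill(sample, k):
    # Hill estimator of the tail index from the k largest absolute values
    top = np.sort(np.abs(sample))[-k:]
    return 1.0 / np.mean(np.log(top[1:] / top[0]))

rng = np.random.default_rng(5)
alpha, n, reps = 1.5, 500, 5_000

w0 = rstable(alpha, (reps, n), rng)     # inner weights; input x = 1, no biases
w1 = rstable(alpha, (reps, n), rng)     # outer weights
# f(n)[(.)^3, alpha/3]: scaling n^{-1/(alpha/3)} = n^{-3/alpha}
f = (w1 * w0**3).sum(axis=1) / n ** (3.0 / alpha)

h = hill(f, 250)
print(h)   # should be close to alpha / 3 = 0.5
```

The estimated index is far below the stability \(\alpha=1.5\) of the parameters, which is precisely the change of stability caused by a super-linear activation.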
### Deep Stable NNs: large-width asymptotics for an input \(x\in\mathbb{R}^{d}\) and biases
The above results can be extended to deep Stable NNs, assuming a "sequential growth" of the width over the NN's layers, for an input \(\mathbf{x}=(x_{1},...,x_{d})\in\mathbb{R}^{d}\) and biases. Differently from the "joint growth", under which the widths of the layers grow simultaneously, the "sequential growth" implies that the widths of the layers grow one at a time. Because of the assumption of a "sequential growth", the study of the large width behaviour of a deep Stable NN reduces to a recursive application of Theorem 2.6. In particular, let \(\theta=\{w_{i}^{(0)},...,w_{i}^{(L-1)},w,b_{i}^{(0)},...,b_{i}^{(L-1)},b\}_{i \geq 1}\) be the set of all parameters and \(\mathbf{x}\in\mathbb{R}^{d}\) be the input. Define \(\forall i\geq 1\) and \(\forall l=1,...,L-1\)
\[\begin{cases}w_{i}^{(0)}=[w_{i,1}^{(0)},w_{i,2}^{(0)},\dots w_{i,d}^{(0)}]& \in\mathbb{R}^{d}\\ w_{i}^{(l)}:=[w_{i,1}^{(l)},w_{i,2}^{(l)},\dots w_{i,n}^{(l)}]&\in\mathbb{R}^{ n}\\ w=[w_{1},w_{2},\dots w_{n}]&\in\mathbb{R}^{n}\\ w_{i,j}^{(0)},w_{i,j}^{(l)},w_{i},b_{i}^{(l)},b&\in\mathbb{R}\\ w_{i,j}^{(0)}\stackrel{{ d}}{{=}}w_{i,j}^{(l)}\stackrel{{ d}}{{=}}w_{j}\stackrel{{ d}}{{=}}b_{i}^{(l)}\stackrel{{ d}}{{=}}b\stackrel{{ iid}}{{\sim}}S_{\alpha}(1).\end{cases} \tag{5}\]
Then, we define the deep Stable NN as
\[\begin{cases}g_{j}^{(1)}(\mathbf{x})=\sigma_{w}\langle w_{j}^{(0)},\mathbf{x}\rangle_{\mathbb{R}^{d}}+\sigma_{b}b_{j}^{(0)}\\ g_{j}^{(l)}(\mathbf{x})=\sigma_{b}b_{j}^{(l-1)}+\sigma_{w}\nu(n)^{-\frac{1}{\alpha} }\sum_{i=1}^{n}w_{j,i}^{(l-1)}\tau(g_{i}^{(l-1)}(\mathbf{x})),\quad\forall l=2,...,L \\ f_{\mathbf{x}}(n)[\tau,\alpha]=g_{1}^{(L+1)}(\mathbf{x})=\sigma_{b}b+\sigma_{w}\nu(n)^{ -\frac{1}{\alpha}}\sum_{j=1}^{n}w_{j}\tau(g_{j}^{(L)}(\mathbf{x})),\end{cases} \tag{6}\]
where \(\nu(n)=n\cdot\log(n)\) if \(\tau\in E_{2}\cup E_{3}\) with \(\gamma=1\) and \(\nu(n)=n\) otherwise, and \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{d}}\) denotes the Euclidean inner product in \(\mathbb{R}^{d}\). Note that the definition (6) coincides with the definition (4) provided that \(L=1\), \(\sigma_{b}=0\), \(d=1\) and \(x=1\). For the sake of simplicity and readability of the results, we have restricted ourselves to the case where all the parameters are Stable-distributed with the same index \(\alpha\), but this setting can be further generalized.
The next theorem provides the infinitely wide limit of the deep Stable NN (6), assuming a "sequential growth" of the width over the NN's layers. In particular, if we expand the width of the hidden layers to infinity one at a time, from \(l=1\) to \(l=L\), then it is sufficient to apply Theorem 2.6 recursively through the NN's layers.
**Theorem 2.7** (Deep Stable NN, \(\tau\in E_{1}\) and \(\tau\in E_{2}\cup E_{3}\) with \(\gamma\leq 1\)).: _Consider \(g_{j}^{(l)}(\mathbf{x})\) for fixed \(j=1,...,n\) and \(l=2,\dots,L+1\) as defined in (6). Then, as the width goes to infinity sequentially over the NN's layers, it holds_
\[g_{j}^{(l)}(\mathbf{x})\stackrel{{ d}}{{\longrightarrow}}S_{\alpha} (\sigma_{x}^{(l)}),\]
_where \(\sigma_{x}^{(1)}=(\sigma_{w}^{\alpha}\sum_{j=1}^{d}|x_{j}|^{\alpha}+\sigma_{b} ^{\alpha})^{1/\alpha}\), and, for \(l=2,\dots,L+1\)_
\[\sigma_{x}^{(l)}=\left\{\begin{array}{ll}\left(\sigma_{w}^{\alpha}\mathbb{E }_{Z\sim S_{\alpha}(\sigma_{x}^{(l-1)})}[|\tau(Z)|^{\alpha}]+\sigma_{b}^{\alpha }\right)^{1/\alpha}&\text{ if }\quad\tau\in E_{1}\text{ or }\tau\in E_{2}\cup E_{3}\text{, with }\gamma<1,\\ \left(c_{\tau}\alpha C_{\alpha}\sigma_{w}^{\alpha}(\sigma_{x}^{(l-1)})^{\alpha }+\sigma_{b}^{\alpha}\right)^{1/\alpha}&\text{ if }\quad\tau\in E_{2}\cup E_{3}\text{, with }\gamma=1, \end{array}\right.\]
_with \(c_{\tau}=1/2\) if \(\tau\in E_{3}\) and \(c_{\tau}=1\) otherwise._
Proof.: The case \(L=1\) deals again with a shallow Stable NN but considering non-null Stable biases and a more complex type of input. The result follows from Theorem 2.6 by replacing \(w_{j}^{(0)}\) with \(g_{j}^{(1)}(\mathbf{x})\) and \(\sigma_{0}\) with \(\sigma_{x}=(\sigma_{b}^{\alpha}+\sigma_{w}^{\alpha}\sum_{j=1}^{d}|x_{j}|^{ \alpha})^{\frac{1}{\alpha}}\) thanks to the fact that \(g_{j}^{(1)}(\mathbf{x})\stackrel{{ iid}}{{\sim}}S_{\alpha}(\sigma_{x})\) for \(j=1,...,n\). This can be easily proved using the following properties of the Stable distribution (Samorodnitsky and Taqqu, 1994, Chapter 1): i) if \(X_{1}\perp\!\!\!\perp X_{2}\) and \(X_{i}\sim S_{\alpha}(\sigma_{i})\) then \(X_{1}+X_{2}\sim S_{\alpha}([\sigma_{1}^{\alpha}+\sigma_{2}^{\alpha}]^{\frac{1} {\alpha}})\); ii) if \(c\neq 0\) and \(X_{1}\sim S_{\alpha}(\sigma_{1})\) then \(c\cdot X_{1}\sim S_{\alpha}(|c|\sigma_{1})\). The proof for the case \(L>1\) is based on the fact that the \(g_{i}^{(l-1)}(\mathbf{x})\)'s are independent and identically distributed as \(S_{\alpha}(\sigma_{x}^{(l-1)})\) since they inherit these properties from the iid initialization of weights and biases: the thesis then follows by applying the result for \(L=1\) layer after layer and substituting \(\sigma_{x}^{(l-1)}\) in place of \(\sigma_{x}\).
Theorem 2.7 includes the limiting behaviour of \(f_{\mathbf{x}}(n)[\tau,\alpha]\) in the case \(l=L+1\). It is possible to write an explicit form of the scale parameter by recursively expanding the scale parameters of the hidden layers. See Subsection 2.3 for an example in the case of the ReLU activation function. Before concluding, we point out that when using a sub-linear activation, i.e. \(\tau\in E_{1}\) or \(\tau\in E_{2}\cup E_{3}\) with \(\gamma\in(0,1)\), or an asymptotically linear activation, i.e. \(\tau\in E_{2}\cup E_{3}\) with \(\gamma=1\), the index \(\alpha\) of the limiting Stable distribution does not change as the depth of a node increases, so that, even for a very deep NN, the limiting output is distributed as an \(\alpha\)-Stable distribution.
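For concreteness, the recursion of Theorem 2.7 is straightforward to iterate numerically. The sketch below (our own function names, illustrative values of \(\alpha\), \(\sigma_{w}\), \(\sigma_{b}\)) computes the scale parameters \(\sigma_{x}^{(l)}\) for \(\tau=\mathrm{ReLU}\in E_{3}\), i.e. \(\gamma=1\) and \(c_{\tau}=1/2\):

```python
from math import gamma, cos, pi

def c_alpha(a):
    """Tail constant C_alpha of Proposition 1.2.15 (alpha != 1)."""
    return (1 - a) / (gamma(2 - a) * cos(pi * a / 2))

def relu_scales(x, alpha, sw, sb, L):
    """Scale parameters sigma_x^{(l)}, l = 1..L+1, for tau = ReLU (E3, gamma = 1)."""
    # first layer: sigma_x^{(1)} = (sw^a sum_j |x_j|^a + sb^a)^{1/a}
    s = (sw**alpha * sum(abs(xj) ** alpha for xj in x) + sb**alpha) ** (1 / alpha)
    out = [s]
    for _ in range(L):
        # gamma = 1 case of Theorem 2.7, with c_tau = 1/2 for ReLU in E3
        s = (0.5 * alpha * c_alpha(alpha) * sw**alpha * s**alpha
             + sb**alpha) ** (1 / alpha)
        out.append(s)
    return out

scales = relu_scales(x=[1.0, 1.0], alpha=1.5, sw=1.0, sb=1.0, L=4)
print(scales)
```

The stability index stays fixed at \(\alpha\) across layers; only the scale evolves through the recursion.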
Such a behaviour is not preserved for super-linear activation functions, i.e. \(\tau\in E_{2}\cup E_{3}\) with \(\gamma>1\). When \(\alpha_{1}>\alpha_{0}/\gamma\), the convergence result of Theorem 2.6 involves a Stable RV with index equal to \(\alpha_{0}/\gamma\), and not \(\alpha_{0}\). When \(\alpha_{x}=\alpha_{y}=\alpha\), this occurs precisely when \(\gamma>1\), which corresponds to a super-linear activation in \(E_{2}\cup E_{3}\). The fact that the limiting RV takes a factor \(1/\gamma\) prevents us from writing a theorem in the setting of Definition (6), because we would not be able to apply property i) above, as it describes the distribution of the sum of independent Stable RVs with different scales but the same index. We are then forced to adjust the initialization of the biases, and to this end we define a new setting. Let \(\theta=\{w_{i}^{(0)},...,w_{i}^{(L-1)},w,b_{i}^{(0)},...,b_{i}^{(L-1)},b\}_{i\geq 1}\) be the set of all parameters and \(\mathbf{x}\in\mathbb{R}^{d}\) be the input. Define \(\forall i\geq 1\) and \(\forall l=0,...,L-1\)
\[\begin{cases}w_{i}^{(0)}=[w_{i,1}^{(0)},w_{i,2}^{(0)},\dots w_{i,d}^{(0)}]& \in\mathbb{R}^{d}\\ w_{i}^{(l)}=[w_{i,1}^{(l)},w_{i,2}^{(l)},\dots w_{i,n}^{(l)}]&\in\mathbb{R}^{n }\\ w=[w_{1},w_{2},\dots w_{n}]&\in\mathbb{R}^{n}\\ w_{i,j}^{(0)},w_{i,j}^{(l)},w_{i},b_{i}^{(l)},b&\in\mathbb{R}\\ w_{i,j}^{(0)}\stackrel{{ d}}{{=}}w_{i,j}^{(l)}\stackrel{{ d}}{{=}}w_{j}\stackrel{{ iid}}{{\sim}}S_{\alpha}(1)\\ b_{i}^{(l)}\stackrel{{ iid}}{{\sim}}S_{\alpha/\gamma^{l}}(1) \\ b\stackrel{{ iid}}{{\sim}}S_{\alpha/\gamma^{L}}(1).\end{cases} \tag{7}\]
Then, we define the deep Stable NN as
\[\begin{cases}g_{j}^{(1)}(\mathbf{x})=\sigma_{w}\langle w_{j}^{(0)},\mathbf{x}\rangle_ {\mathbb{R}^{d}}+\sigma_{b}b_{j}^{(0)}\\ g_{j}^{(l)}(\mathbf{x})=\sigma_{b}b_{j}^{(l-1)}+\sigma_{w}n^{-\frac{1}{\alpha}}\sum _{i=1}^{n}w_{j,i}^{(l-1)}\tau(g_{i}^{(l-1)}(\mathbf{x})),\quad\forall l=2,...,L\\ f_{\mathbf{x}}(n)[\tau,\alpha]=g_{1}^{(L+1)}(\mathbf{x})=\sigma_{b}b+\sigma_{w}n^{-\frac{1 }{\alpha}}\sum_{j=1}^{n}w_{j}\tau(g_{j}^{(L)}(\mathbf{x})).\end{cases} \tag{8}\]
The next theorem is the counterpart of Theorem 2.7 for the deep Stable NN (8): it provides its infinitely wide limit, assuming a "sequential growth" of the width over the NN's layers.
**Theorem 2.8** (Deep Stable NN, \(\tau\in E_{2}\cup E_{3}\) with \(\gamma>1\)).: _Consider \(g_{j}^{(l)}(\mathbf{x})\) for fixed \(j=1,...,n\) and \(l=2,\dots,L+1\) as defined in (8). As the width goes to infinity sequentially over the NN's layers,_
\[g_{j}^{(l)}(\mathbf{x})\stackrel{{ d}}{{\longrightarrow}}S_{\alpha/ \gamma^{l-1}}\left(\sigma_{x}^{(l)}\right),\]
_where \(\sigma_{x}^{(1)}=(\sigma_{w}^{\alpha}\sum_{j=1}^{d}|x_{j}|^{\alpha}+\sigma_{b} ^{\alpha})^{1/\alpha}\), and_
\[\sigma_{x}^{(l)}=\left(\frac{c_{\tau}C_{\alpha/\gamma^{l-2}}}{C_{\alpha/ \gamma^{l-1}}}\sigma_{w}^{\alpha/\gamma^{l-1}}(\sigma_{x}^{(l-1)})^{\alpha/ \gamma^{l-2}}\mathbb{E}_{Z\sim S_{\alpha}(1)}[|Z|^{\alpha/\gamma^{l-1}}]+ \sigma_{b}^{\alpha/\gamma^{l-1}}\right)^{\gamma^{l-1}/\alpha},\]
_with \(c_{\tau}=1/2\) if \(\tau\in E_{3}\) and \(c_{\tau}=1\) otherwise._
Proof.: The proof is along lines similar to the proof of Theorem 2.7. Notice that the fact that \(b_{i}^{(l-1)}\stackrel{{ iid}}{{\sim}}S_{\alpha/\gamma^{l-1}}(1)\) is critical to conclude the proof.
As a corollary of Theorem 2.8, the limiting distribution of \(f_{\mathbf{x}}(n)[\tau,\alpha]\), as \(n\to+\infty\), is an \((\alpha/\gamma^{L})\)-Stable distribution with a scale parameter that can be computed recursively. In particular, for a large number \(L\) of layers, the stability index of the limiting distribution is close to zero. As we have pointed out for a shallow Stable NN, this is a peculiar feature of the class \(\tau\in E_{2}\cup E_{3}\) with \(\gamma>1\), and it may be the object of further analysis.
### Some examples
As Theorem 2.6 is quite abstract, we present some concrete examples based on well-known activation functions. First, consider \(\tau=\tanh\), which belongs to \(E_{1}\) since it is bounded. Then, the output of a shallow Stable NN (4) is such that
\[f(n)[\tanh,\alpha_{1}]\stackrel{{ d}}{{\longrightarrow}}S_{ \alpha_{1}}\left(\sigma_{1}\left(\mathbb{E}_{Z\sim S_{\alpha_{0}}(\sigma_{0})} [|\tanh(Z)|^{\alpha_{1}}]\right)^{1/\alpha_{1}}\right).\]
See also Favaro et al. (2020). As for the new classes of activations introduced here, we start by considering the super-linear activation \(\tau(z)=z^{3}\in E_{2}\) with \(\gamma=3\), in the case of a shallow NN with \(\alpha_{1}=\alpha_{0}=\alpha\), and obtain that
\[f(n)[(\cdot)^{3},\alpha/3]\stackrel{{ d}}{{\longrightarrow}}S_{ \alpha/3}\left(\sigma_{0}^{3}\sigma_{1}\left(\frac{C_{\alpha}}{C_{\alpha/3}} \mathbb{E}_{Z\sim S_{\alpha}(1)}[|Z|^{\alpha/3}]\right)^{3/\alpha}\right),\]
with the novelty here lying in the fact that the index of the limiting output is \(\alpha/3\) instead of \(\alpha\). As for asymptotically linear activations, taking \(\tau=id\), i.e. the identity function, again under the hypothesis of a shallow NN with \(\alpha_{1}=\alpha_{0}=\alpha\), we obtain that
\[(\log n)^{-1/\alpha}f(n)[id,\alpha]\stackrel{{ d}}{{\longrightarrow}}S_{ \alpha}\left(\left[\alpha C_{\alpha}\right]^{1/\alpha}\sigma_{0}\sigma_{1} \right),\]
which shows, for the first time, the presence of an extra logarithmic factor \((\log n)^{-1/\alpha_{1}}\) in the scaling. Note that this behaviour, which is a critical difference with the Gaussian case, does not show up only with asymptotically linear activations: for example, taking \(\alpha_{1}=1\), i.e. the Cauchy distribution, \(\alpha_{0}=3/2\), i.e. the Holtsmark distribution, and \(\tau(z)=z^{3/2}\), then
\[(\log n)^{-1}f(n)[(\cdot)^{3/2},1]\stackrel{{ d}}{{\longrightarrow}}S_{ 1}\left(C_{1}\sigma_{0}\sigma_{1}\right).\]
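The limiting scale in the \(\tanh\) example above involves the moment \(\mathbb{E}_{Z\sim S_{\alpha_{0}}(\sigma_{0})}[|\tanh(Z)|^{\alpha_{1}}]\), which is finite for any \(\alpha_{1}\) because \(|\tanh|\leq 1\). A quick Monte Carlo estimate of that scale is easy to set up; the sketch below is our own illustration for \(\alpha_{0}=\alpha_{1}=1\) and \(\sigma_{0}=\sigma_{1}=1\), i.e. Cauchy weights, for which a sampler is built into NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma0, sigma1, alpha1 = 1.0, 1.0, 1.0

Z = sigma0 * rng.standard_cauchy(500_000)        # Z ~ S_1(sigma0), the Cauchy law
moment = np.mean(np.abs(np.tanh(Z)) ** alpha1)   # E[|tanh(Z)|^{alpha_1}]
limit_scale = sigma1 * moment ** (1.0 / alpha1)
# since |tanh| <= 1, the estimated limiting scale must lie strictly in (0, sigma1)
```

The same recipe applies to any bounded \(\tau\in E_{1}\) and any \(\alpha_{0}\), provided one substitutes an appropriate Stable sampler for `standard_cauchy`.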
Finally, we consider the ReLU activation, one of the most popular activation functions. The following theorems deal with shallow and deep Stable NNs, respectively, with a ReLU activation function.
**Theorem 2.9** (Shallow Stable NN, ReLU).: _Consider \(Z=X\cdot\text{ReLU}(Y)\) where \(X\sim S_{\alpha}(\sigma_{x})\), \(Y\sim S_{\alpha}(\sigma_{y})\), \(X\perp\!\!\!\perp Y\). Then_
\[\overline{P}_{Z}(z)=P_{Z}(-z)\;\simeq\;\frac{\alpha}{4}C_{\alpha}^{2}\sigma_{ y}^{\alpha}\sigma_{x}^{\alpha}z^{-\alpha}\log z.\]
_Furthermore, if \(f(n)[\text{ReLU},\alpha]=n^{-\frac{1}{\alpha}}\sum_{j=1}^{n}w_{j}\text{ReLU}(w_{j}^{(0)})\), where \(\text{ReLU}(t)=\max\left\{0,t\right\}\) and \(w_{j}^{(0)}\stackrel{{ iid}}{{\sim}}S_{\alpha}(\sigma_{0})\perp\!\!\!\perp w _{j}\stackrel{{ iid}}{{\sim}}S_{\alpha}(\sigma_{1})\), then_

\[(\log n)^{-\frac{1}{\alpha}}f(n)[\text{ReLU},\alpha]\stackrel{{ d}}{{ \longrightarrow}}S_{\alpha}\left(\left[\frac{1}{2}\alpha C_{\alpha}\right]^{ \frac{1}{\alpha}}\sigma_{0}\sigma_{1}\right).\]
Proof.: The theorem follows easily from Theorem 2.5 and Theorem 2.6, after noticing that \(\text{ReLU}\in E_{3}\) with \(\beta=0\) and \(\gamma=1\). In addition, we provide an alternative proof which can be useful in other applications. First, the distribution of \(Q:=\text{ReLU}(Y)\) is \(\mathbb{P}[Q\leq q]=\mathbb{P}(\max\left\{0,Y\right\}\leq q)=\mathbb{P}[Y\leq q ]\mathds{1}\{q\geq 0\}\), from which we observe that \(Q\) is neither discrete nor absolutely continuous with respect to the Lebesgue measure: it has a point mass of \(\frac{1}{2}\) at \(q=0\), while the remaining \(\frac{1}{2}\) of the mass is distributed on \(\mathbb{R}_{+}\) according to the Stable law of \(Y\) on \((0,+\infty)\). Hence, keeping in mind the shape of \(P_{Q}(q)=\mathbb{P}[Q\leq q]\), we derive the approximation for the tails of the distribution of \(X\cdot\text{ReLU}(Y)\) and then, as usual, apply the generalized CLT. We prove the tail behaviour of \(\overline{P}_{Z}(z)\) first. For any \(z>0\) we can write that
\[\overline{P}_{Z}(z)=\int_{0}^{+\infty}\mathbb{P}\left[Q>\frac{z}{x}\right]p_{ X}(x)dx=\int_{0}^{+\infty}\mathbb{P}\left[Y>\frac{z}{x}\right]p_{X}(x)dx,\]
since \(Y\) and \(Q\) have the same distribution on \((0,+\infty)\). Now, observe that
\[\mathbb{P}[XY\geq z]=\int_{\mathbb{R}}\mathbb{P}[xY\geq z]P_{X}(dx)=2\int_{0}^ {+\infty}\mathbb{P}\left[Y\geq\frac{z}{x}\right]p_{X}(x)dx\]
where the second equality holds by splitting the integral on \(\mathbb{R}\) into the sum of the integrals on \((-\infty,0)\) and \((0,+\infty)\) and using the fact that \(Y\) is symmetric. It follows that, for every \(z>0\), \(\overline{P}_{Z}(z)=\frac{1}{2}\mathbb{P}[XY\geq z]\). Applying the results for \(\tau=id\), we find that
\[\overline{P}_{Z}(z)=\frac{1}{2}\mathbb{P}[XY\geq z]\;\simeq\;\frac{\alpha}{4} C_{\alpha}^{2}\sigma_{y}^{\alpha}\sigma_{x}^{\alpha}z^{-\alpha}\log z.\]
The proof for the asymptotic behaviour of \(P_{Z}(z)\) works in the same way after fixing \(z<0\) and using a change of variable while the convergence in distribution of \((\log n)^{-\frac{1}{\alpha}}f(n)[\tau,\alpha]\) follows by a direct application of the generalized CLT.
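The two facts driving the alternative proof, the point mass of \(1/2\) at zero of \(\text{ReLU}(Y)\) and the identity \(\overline{P}_{Z}(z)=\frac{1}{2}\mathbb{P}[XY\geq z]\), are easy to check by simulation. The sketch below is our own illustration for \(\alpha=1\), i.e. Cauchy variates.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400_000
X = rng.standard_cauchy(n)       # X ~ S_1(1)
Y = rng.standard_cauchy(n)       # Y ~ S_1(1), independent of X
Q = np.maximum(0.0, Y)           # Q = ReLU(Y)

frac_zero = np.mean(Q == 0.0)    # point mass at 0: close to 1/2 by symmetry of Y

z = 50.0
tail_Z = np.mean(X * Q > z)      # estimate of P[Z > z], Z = X * ReLU(Y)
tail_XY = np.mean(X * Y > z)     # estimate of P[XY > z]
ratio = tail_Z / tail_XY         # close to 1/2, matching the identity above
```

The ratio is exactly \(1/2\) in expectation: the event \(\{XY>z,\,Y>0\}\) is mapped bijectively onto \(\{XY>z,\,Y<0\}\) by the symmetry \((X,Y)\mapsto(-X,-Y)\), which preserves \(XY\).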
Theorem 2.9 extends to deep Stable NNs with input \(\mathbf{x}=(x_{1},...,x_{d})\in\mathbb{R}^{d}\) and with biases.
**Theorem 2.10** (Deep Stable NN, ReLU).: _Consider the deep Stable NN with ReLU activation defined as follows_
\[\begin{cases}g_{j}^{(1)}(\mathbf{x})=\sigma_{w}\langle w_{j}^{(0)},\mathbf{x}\rangle_ {\mathbb{R}^{d}}+\sigma_{b}b_{j}^{(0)}\\ g_{j}^{(l)}(\mathbf{x})=\sigma_{b}b_{j}^{(l-1)}+\sigma_{w}(n\log n)^{-\frac{1}{ \alpha}}\sum_{i=1}^{n}w_{j,i}^{(l-1)}\text{ReLU}(g_{i}^{(l-1)}(\mathbf{x})), \forall l=2,...,L\\ f_{\mathbf{x}}(n)[\text{ReLU},\alpha]=g_{1}^{(L+1)}(\mathbf{x})=\sigma_{b}b+\sigma_{w} (n\log n)^{-\frac{1}{\alpha}}\sum_{j=1}^{n}w_{j}\text{ReLU}(g_{j}^{(L)}(\mathbf{ x})).\end{cases} \tag{9}\]
_Then, under the hypothesis of Stable initialization for weights and biases as in (6), as the width of the previous layers goes to infinity sequentially,_
\[g_{j}^{(l)}(\mathbf{x})\overset{d}{\longrightarrow}S_{\alpha}\left(\sigma_{x}^{( l)}\right),\]
_where \(\sigma_{x}^{(1)}=(\sigma_{w}^{\alpha}\sum_{j=1}^{d}|x_{j}|^{\alpha}+\sigma_{b}^{ \alpha})^{1/\alpha}\), and, for \(l=2,\ldots,L+1\),_
\[\sigma_{x}^{(l)}=\left(\frac{1}{2}\alpha C_{\alpha}(\sigma_{x}^{(l-1)})^{ \alpha}\sigma_{w}^{\alpha}+\sigma_{b}^{\alpha}\right)^{1/\alpha}.\]
Proof.: The proof is along lines similar to the proof of Theorem 2.7 with \(\gamma=1\) and \(\tau\in E_{3}\).
Then, as a corollary of Theorem 2.10, the limiting distribution of \(f_{\mathbf{x}}(n)[\text{ReLU},\alpha]\), as \(n\to+\infty\), is that of an \(\alpha\)-Stable RV whose scale can be computed recursively. In particular, we can state the following.
**Corollary 2.10.1**.: _Under the setting of Theorem 2.10 with a generic depth \(L\),_
\[f(n)[\text{ReLU},\alpha]\overset{d}{\longrightarrow}S_{\alpha}\left(\left[ \left(\frac{1}{2}\alpha C_{\alpha}\sigma_{w}^{\alpha}\right)^{L}\sigma_{x}^{ \alpha}+\sum_{i=0}^{L-1}(\frac{1}{2}\alpha C_{\alpha}\sigma_{w}^{\alpha})^{i} \sigma_{b}^{\alpha}\right]^{\frac{1}{\alpha}}\right).\]
Proof.: The claim holds for \(L=1\), which can be proved using the two standard properties of the Stable distribution. Moreover, for a NN of depth \(L+1\), by Theorem 2.10 the scale is
\[\left[\frac{1}{2}\alpha C_{\alpha}\sigma_{w}^{\alpha}\left[\left( \frac{1}{2}\alpha C_{\alpha}\sigma_{w}^{\alpha}\right)^{L}\sigma_{x}^{\alpha}+ \sum_{i=0}^{L-1}\left(\frac{1}{2}\alpha C_{\alpha}\sigma_{w}^{\alpha}\right)^ {i}\sigma_{b}^{\alpha}\right]+\sigma_{b}^{\alpha}\right]^{\frac{1}{\alpha}}\] \[=\left[\left(\frac{1}{2}\alpha C_{\alpha}\sigma_{w}^{\alpha} \right)^{L+1}\sigma_{x}^{\alpha}+\sum_{i=0}^{L}\left(\frac{1}{2}\alpha C_{ \alpha}\sigma_{w}^{\alpha}\right)^{i}\sigma_{b}^{\alpha}\right]^{\frac{1}{ \alpha}},\]
which concludes the proof.
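Corollary 2.10.1 can also be sanity-checked numerically: unrolling the recursion of Theorem 2.10 must reproduce the closed form, whatever the value of the tail constant \(C_{\alpha}\). In the sketch below (ours), the combination \(\frac{1}{2}\alpha C_{\alpha}\) is treated as an opaque positive constant `A`, and \(\sigma_{x}\) denotes the first-layer scale \(\sigma_{x}^{(1)}\).

```python
def scale_recursive(L, alpha, sw, sb, sx1, A):
    # Theorem 2.10: sigma^{(l)} = (A * sw^a * (sigma^{(l-1)})^a + sb^a)^(1/a),
    # applied L times starting from sigma^{(1)} = sx1
    s = sx1
    for _ in range(L):
        s = (A * sw ** alpha * s ** alpha + sb ** alpha) ** (1.0 / alpha)
    return s

def scale_closed(L, alpha, sw, sb, sx1, A):
    # Corollary 2.10.1 with B = A * sw^a:
    # sigma^a = B^L * sx1^a + (sum_{i=0}^{L-1} B^i) * sb^a
    B = A * sw ** alpha
    return (B ** L * sx1 ** alpha
            + sum(B ** i for i in range(L)) * sb ** alpha) ** (1.0 / alpha)
```

The agreement of the two functions for every depth is exactly the induction step carried out in the proof above.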
## 3 Discussion
In a recent work, Favaro et al. (2020, 2022a) have characterized the infinitely wide limit of deep Stable NNs under the assumption of a sub-linear activation function. Here, we made use of a generalized CLT to characterize the infinitely wide limit of deep Stable NNs with a general activation function belonging to the classes \(E_{1}\), \(E_{2}\) and \(E_{3}\). For \(\alpha_{1}=\alpha_{0}/\gamma\), and in particular for the choices \(\tau=id\) and \(\tau=\text{ReLU}\) with \(\alpha_{0}=\alpha_{1}\), Theorem 2.6 shows that the right scaling of the NN is \((n\log n)^{-1/\alpha}\), which includes the extra factor \((\log n)^{-1/\alpha}\) with respect to sub-linear activation functions. For \(\alpha_{1}>\alpha_{0}/\gamma\), and in particular for a super-linear activation with \(\alpha_{0}=\alpha_{1}\), Theorem 2.8 shows that the distribution of the limiting output is \((\alpha_{0}/\gamma)\)-Stable, with \(\gamma>1\), which may have undesirable consequences for posterior estimates in the case of a very deep NN. In general, our work brings out the critical role of the generalized CLT, which is far less popular than the classical CLT: just as the classical CLT underpins the study of the large-width behaviour of deep Gaussian NNs (Neal, 1996), the generalized CLT plays the same role in the study of the large-width behaviour of deep Stable NNs.
A natural direction for future research consists in extending our results to deep Stable NNs with \(k>1\) inputs of dimension \(d\), i.e. a \(d\times k\) input matrix \(\mathbf{X}\). A unified treatment of such a problem would require a multidimensional version of the generalized CLT, i.e. a CLT dealing with \(k>1\) dimensional Stable distributions, which is not available in the probabilistic/statistical literature. For a shallow Stable NN with a ReLU activation function, this problem has been considered in Favaro et al. (2022b), where the infinitely wide limit of the NN is characterized through a careful analysis of the large-width behaviour of the characteristic function of the NN. A further natural problem consists in extending our results to the case of a "joint growth" of the width over the NN's layers, i.e. the widths of the layers grow simultaneously (Favaro et al., 2022a). In general, under the setting specified in definition (6) or definition (8), one may consider a deep
NN defined as follows:
\[f_{i}^{(1)}(\mathbf{X},n)=\sum_{j=1}^{d}w_{i,j}^{(1)}\mathbf{x}_{j}+b_{i}^{(1)} \mathbf{1}^{T}\]
and
\[f_{i}^{(l)}(\mathbf{X},n)=\frac{1}{f(n)^{1/\alpha}}\sum_{j=1}^{n}w_{i,j}^{(l)} \left(\tau\circ f_{j}^{(l-1)}(\mathbf{X},n)\right)+b_{i}^{(l)}\mathbf{1}^{T},\]
with \(f_{i}^{(1)}(\mathbf{X},n)=f_{i}^{(1)}(\mathbf{X})\), where \(\mathbf{1}\) denotes the \(k\)-dimensional unit (column) vector, \(\circ\) is the element-wise application and \(f(n)=n\cdot\log n\) if \(\tau\in E_{2}\) and \(f(n)=n\) otherwise. Then, the goal consists in extending our results to \(f_{i}^{(l)}(\mathbf{X},n)\), assuming a "joint growth" of the width over the NN's layers. The case \(\tau\in E_{1}\) was already tackled by Favaro et al. (2020, 2022a) but the other two cases are missing. In particular, Favaro et al. (2022a) showed that the assumptions of a "joint growth" and of a "sequential growth" lead to the same infinitely wide limit for a deep Stable NN with a sub-linear activation function. Instead, a critical difference between the assumption of a "joint growth" and the assumption of a "sequential growth" arises in the study of rate of convergence of the NN to its infinitely wide limit. In particular, Favaro et al. (2022a) investigated rates of convergence, in the sup-norm distance, for deep Stable NNs with a sub-linear activation function, showing that the assumption of a "joint growth" leads to a rate that depends on the depth, whereas the assumption of a "sequential growth" leads to a rate that is independent of the depth. We conjecture that an analogous phenomenon holds true for deep Stable NNs with linear and super-linear activation functions. In particular, we expect that the infinitely wide limits presented in our work hold true under the assumption that the width grows jointly over the layers, suggesting that a difference between the "joint growth" and the "sequential growth" may require the study of convergence rates. To study the large width asymptotic behaviour under the assumption of a "joint growth", it might be useful to use Theorem 1 of Fortini et al. 
(1997), which gives sufficient conditions for the convergence to a mixture of infinitely divisible laws: to apply this theorem, one should prove the convergence of a certain sequence of random measures \(\nu_{n}\) to the Levy measure of the infinitely divisible law, and then show that this limiting measure is the Levy measure of a Stable law.
Another interesting research direction consists in studying the training dynamics of Stable NNs. For Gaussian NNs, Jacot et al. (2018) and Arora et al. (2019) established the equivalence between a specific training setting of deep Gaussian NNs and kernel regression. In particular, they considered a deep Gaussian NN where the hidden layers are trained jointly under quadratic loss and gradient flow, i.e. gradient descent with infinitesimal learning rate, and it was shown that, as the width of the NN goes to infinity simultaneously, the point predictions are arbitrarily close to those given by a kernel regression with respect to the so-called neural tangent kernel. Such an analysis is typically referred to as the neural tangent kernel analysis of the NN (Arora et al., 2019). The large-width training dynamics of shallow Stable NNs with ReLU activation function, and input \(\mathbf{X}\), has been considered in Favaro et al. (2022b). In particular, they proved linear convergence of the squared error loss for a suitable choice of the learning rate. The equivalence between gradient flow and kernel regression is connected to the so-called "lazy training" phenomenon, which is one of the hottest topics in the field of machine learning since it is a phenomenon which can affect any model, not only NNs. More precisely, Chizat et al. (2019) showed that lazy training is caused by an implicit choice of the scaling and that every parametric model can be trained in the lazy regime provided that its output is initialized close to zero. Furthermore, coming back to NNs, they considered a two-layer NN with Gaussian weights and proved a sufficient condition for achieving lazy training, provided that \(\mathbb{E}[w_{i}^{(0)}\tau(\langle w_{i}\cdot x\rangle)]=0\). Clearly, the theorem also applies in the case of symmetric Stable weights and biases when \(\alpha>1\), but not when \(\alpha\leq 1\) as the
expectation of such RVs is undefined. It would then be interesting to study what happens in that case, in order to find a new theoretical result leading to a suitable scaling under which the lazy training regime is achieved.
## Acknowledgements
The authors are grateful to Stefano Peluchetti for the many stimulating conversations, and to anonymous Referees for comments, corrections, and numerous suggestions that improved remarkably the paper. Stefano Favaro received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 817257. Stefano Favaro is also affiliated to IMATI-CNR "Enrico Magenes" (Milan, Italy).
---

arXiv:2307.02129v5 (2023-07-05), by Francesco Cagnetta, Leonardo Petrini, Umberto M. Tomasini, Alessandro Favero, Matthieu Wyart. http://arxiv.org/abs/2307.02129v5

# How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model
###### Abstract
Learning generic high-dimensional tasks is notably hard, as it requires a number of training data exponential in the dimension. Yet, deep convolutional neural networks (CNNs) have shown remarkable success in overcoming this challenge. A popular hypothesis is that learnable tasks are highly structured and that CNNs leverage this structure to build a low-dimensional representation of the data. However, little is known about how much training data they require, and how this number depends on the data structure. This paper answers this question for a simple classification task that seeks to capture relevant aspects of real data: the Random Hierarchy Model. In this model, each of the \(n_{c}\) classes corresponds to \(m\) synonymic compositions of high-level features, which are in turn composed of sub-features through an iterative process repeated \(L\) times. We find that the number of training data \(P^{*}\) required by deep CNNs to learn this task _(i)_ grows asymptotically as \(n_{c}m^{L}\), which is only polynomial in the input dimensionality; _(ii)_ coincides with the training set size such that the representation of a trained network becomes invariant to exchanges of synonyms; _(iii)_ corresponds to the number of data at which the correlations between low-level features and classes become detectable. Overall, our results indicate how deep CNNs can overcome the curse of dimensionality by building invariant representations, and provide an estimate of the number of data required to learn a task based on its hierarchically compositional structure.
The achievements of deep learning algorithms [1] are outstanding. These methods exhibit superhuman performances in areas ranging from image recognition [2] to Go playing [3], and large language models such as GPT4 [4] can generate unexpectedly sophisticated levels of reasoning. However, despite these accomplishments, we still lack a fundamental understanding of the underlying factors. Indeed, Go configurations, images, and patches of text lie in high-dimensional spaces, which are hard to sample due to the _curse of dimensionality_[5]: the distance \(\delta\) between neighboring data points decreases very slowly with their number \(P\), as \(\delta=\mathcal{O}(P^{-1/d})\) where \(d\) is the space dimension. A generic task such as regression of a continuous function [6] requires a small \(\delta\) for high performance, implying that \(P\) must be _exponential_ in the dimension \(d\). Such a number of data is unrealistically large: for example, the benchmark dataset ImageNet [7], whose effective dimension is estimated to be \(\approx 50\)[8], consists of only \(\approx 10^{7}\) data, significantly smaller than \(e^{50}\approx 10^{20}\). This immense difference implies that learnable tasks are not generic, but highly structured. What is then the nature of this structure, and why are deep learning methods able to exploit it? Without a quantitative answer, it is impossible to predict even the order of magnitude of the number of data necessary to learn a specific task.
A popular idea attributes the efficacy of deep learning methods to their ability to build a useful representation of the data, which becomes increasingly complex across the layers. In simple terms, neurons closer to the input learn to detect simple features like edges in a picture, whereas those deeper in the network learn to recognize more abstract features, such as faces [9, 10]. Intuitively, if these representations are also invariant to aspects of the data unrelated to the task, such as the exact position of an object in a frame for image classification [11], they may effectively reduce the dimensionality of the problem and make it tractable. This view is supported by several empirical studies of the hidden representations of trained networks. In particular, measures such as _(i)_ the mutual information between such
representations and the input [12, 13], _(ii)_ their intrinsic dimensionality [14, 15], and _(iii)_ their sensitivity toward transformations that do not affect the task (e.g. smooth deformations for image classification [16, 17]), all eventually decay with the layer depth. In some cases, the magnitude of this decay correlates with performance [16]. However, these studies do not indicate how much data is required to learn such representations, and thus the task.
Here we study this question for tasks which are hierarchically compositional--arguably a key property for the learnability of real data [18, 19, 20, 21, 22, 23, 24, 25]. To provide a concrete example, consider the picture of a dog (see Fig. 1). The image consists of several high-level features like head, body, and limbs, each composed of sub-features like ears, mouth, eyes, and nose for the head. These sub-features can be further thought of as combinations of low-level features such as edges. Recent studies have revealed that: _(i)_ deep networks represent hierarchically compositional tasks more efficiently than shallow networks [21]; _(ii)_ the minimal number of data that contains enough information to reconstruct such tasks is polynomial in the input dimension [24], although extracting this information remains impractical with standard optimization algorithms; _(iii)_ correlations between the input data and the task are critical for learning [19, 26] and can be exploited by algorithms based on the iteration of clustering methods [27, 22]. While these seminal works offer important insights, they do not directly address practical settings, specifically deep convolutional neural networks (CNNs) trained using gradient descent. Consequently, we currently do not know how the hierarchically compositional structure of the task influences the _sample complexity_, i.e., the number of data necessary to learn the task.
In this work, we adopt the physicist's approach [28, 29, 30, 31] of introducing a simplified model of data, which we then investigate quantitatively via a combination of theoretical arguments and numerical experiments. The task we consider, introduced in Section 1, is a multinomial classification where the class label is determined by the hierarchical composition of input features into progressively higher-level features (see Fig. 1). This model belongs to the class of generative models introduced in [27, 22], corresponding to the specific choice of random composition rules. More specifically, we consider a classification problem with \(n_{c}\) classes, where the class label is expressed as a hierarchy of \(L\)_randomly-chosen_ composition rules. In each rule, \(m\) distinct tuples of \(s\) adjacent low-level features are grouped together and assigned the same high-level feature taken from a finite vocabulary of size \(v\) (see Fig. 1). Then, in Section 3, we show empirically that the sample complexity \(P^{*}\) of deep CNNs trained with gradient descent scales as \(n_{c}m^{L}\). Furthermore, we find that \(P^{*}\) coincides with both _a)_ the number of data that allows for learning a representation that is invariant to exchanging the \(m\) semantically equivalent low-level features (subsection 3.1) and _b)_ the size of the training set for which the correlations between low-level features and class label become detectable (Section 4). Via _b)_, \(P^{*}\) can be derived under our assumption on the randomness of the composition rules.
## 1 The Random Hierarchy Model
In this section, we introduce our model task, which is a multinomial classification problem with \(n_{c}\) classes, where the input-output relation is _compositional_, _hierarchical_, and _local_. To build the dataset, we let each class label \(\alpha\!=\!1,\ldots,n_{c}\) generate the set of input data with label \(\alpha\) as follows.
1. Each label generates \(m\) distinct representations consisting of \(s\)-tuples of _high-level features_ (see Fig. 2 for an example with \(s\!=\!2\) and \(m\!=\!n_{c}\!=\!3\)). Each of these features belongs to a finite vocabulary of size \(v\) (\(v\!=\!3\) in the figure), so that there are \(v^{s}\) possible representations and \(n_{c}m\!\leq\!v^{s}\). We call the assignment of \(m\) distinct \(s\)-tuples to each label a _composition rule_;1 Footnote 1: Composition rules are called _production rules_ in formal language theory [32].
2. Each of the \(v\) high-level features (level-\(L\)) generates \(m\) distinct representations of \(s\) sub-features (level-(\(L\!-\!1\))), out of the \(v^{s}\) possible ones. Thus, \(m\leq v^{s-1}\). After two generations, labels are represented as \(s^{2}\)-tuples and there are \(m\times m^{s}\) data per class;
3. The input data are obtained after \(L\) generations (level-1 representation) so that each datum \(\mathbf{x}\) consists of \(d\!=\!s^{L}\) input features \(x_{j}\). We apply one-hot encoding to the input features: each of the \(x_{j}\)'s is a \(v\)-dimensional sequence with one element set to 1 and the others to 0, the index of the non-zero component representing the encoded feature. The number of data per class is \[m\times m^{s}\times\cdots\times m^{s^{L-1}}=m^{\sum_{i=0}^{L-1}s^{i}}=m^{\frac{ s^{L}-1}{s-1}},\] (1) hence the total number of data \(P_{\text{max}}\) reads \[P_{\text{max}}\equiv n_{c}m^{\frac{s^{L}-1}{s-1}}=n_{c}m^{\frac{d-1}{s-1}}.\] (2) A generic classification task is thus specified by \(L\) composition rules and can be represented as an \(s\)-ary tree--an example with \(s\!=\!2\) and \(L\!=\!3\) is shown in Fig. 1(c) as a binary tree. The tree representation highlights that the class label \(\alpha(\mathbf{x})\) of a datum \(\mathbf{x}\) can be written as a hierarchical composition of \(L\) local functions of \(s\) variables [20, 21]. For instance, with \(s\!=\!L\!=\!2\) (\(\mathbf{x}\!=\!(x_{1},x_{2},x_{3},x_{4})\)),
\[\alpha(x_{1},\ldots,x_{4})=g_{2}\left(g_{1}(x_{1},x_{2}),g_{1}(x_{3},x_{4}) \right), \tag{3}\]
where \(g_{1}\) and \(g_{2}\) represent the 2 composition rules.
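Since \(\sum_{i=0}^{L-1}s^{i}=(s^{L}-1)/(s-1)\), the per-class count \(m\times m^{s}\times\cdots\times m^{s^{L-1}}\) and the resulting total \(P_{\max}=n_{c}\,m^{(s^{L}-1)/(s-1)}\) can be cross-checked directly. The sketch below is our own illustration (function names are ours).

```python
def p_max(n_c, m, s, L):
    # closed form: n_c * m^{(s^L - 1)/(s - 1)} derivations in total
    return n_c * m ** ((s ** L - 1) // (s - 1))

def p_max_by_levels(n_c, m, s, L):
    # per class: m * m^s * ... * m^{s^{L-1}} choices, one per generation step
    per_class = 1
    for i in range(L):
        per_class *= m ** (s ** i)
    return n_c * per_class
```

For instance, with \(n_{c}\!=\!m\!=\!3\), \(s\!=\!2\), \(L\!=\!3\) both formulas give \(3\cdot 3^{7}=6561\).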
In the _Random Hierarchy Model_ (RHM) the \(L\) composition rules are chosen uniformly at random over all the possible assignments of \(m\) low-level representations to each high-level feature. As sketched in Fig. 2, the random choice induces correlations between low- and high-level features. In simple terms, each of the high-level features--1, 2 or 3 in the figure--is more likely to be represented with a certain low-level feature in a given position--blue on the left for 1, yellow for 2 and green for 3. These correlations are crucial for our predictions and are analyzed in detail in Appendix B.
Let us remark that the \(L\) composition rules can be chosen such that the low-level features are homogeneously distributed across high-level features for all positions, as sketched in Fig. 3. We refer to this choice as the _Homogeneous Features Model_. In this model, none of the low-level features is predictive of the high-level feature. With \(s\!=\!2\) and Boolean features \(v=m=2\), the Homogeneous Features Model reduces to the problem of learning a parity function [33].
Finally, note that we only consider the case where the parameters \(s\), \(m\) and \(v\) are constant through the hierarchy levels for clarity of exposition. It is straightforward to extend the model, together with the ensuing conclusions, to the case where all the levels of the hierarchy have different parameters.
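The generative process described above can also be sketched in a few lines of code. The following is our own illustration (function names and structure are ours, not the authors' code): we sample \(L\) random composition rules, each mapping a symbol to \(m\) distinct \(s\)-tuples, and enumerate every derivation of a class label down to the input level; the number of derivations per label is \(m^{(s^{L}-1)/(s-1)}\), matching Eq. (1).

```python
import itertools
import random

def random_rule(v, m, s, rng):
    # one composition rule: each of the v symbols gets m distinct s-tuples
    tuples = list(itertools.product(range(v), repeat=s))
    return {h: rng.sample(tuples, m) for h in range(v)}

def all_derivations(symbol, rules):
    # enumerate every input string derivable from `symbol` through the rules
    if not rules:
        return [(symbol,)]
    head, rest = rules[0], rules[1:]
    out = []
    for rep in head[symbol]:                     # one of the m representations
        parts = [all_derivations(sub, rest) for sub in rep]
        for combo in itertools.product(*parts):  # expand each sub-symbol
            out.append(sum(combo, ()))
    return out
```

Derivations coincide with distinct inputs whenever the sampled representations of different symbols do not collide; in general the count above is the number of generation paths.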
Figure 1: Illustrating the hierarchical structure of real-world and artificial data. (a) An example of the hierarchical structure of images: the class (dog) consists of high-level features (head, paws), that in turn can be represented as sets of lower-level features (eyes, nose, mouth, and ear for the head). Notice that, at each level, there can be multiple combinations of low-level features giving rise to the same high-level feature. (b) A similar hierarchical structure can be found in natural language: a sentence is made of clauses, each having different parts such as subject and predicate, which in turn may consist of several words. (c) An illustration of the artificial data structure we propose. The samples reported here were drawn from an instance of the Random Hierarchy Model for depth \(L=3\) and tuple length \(s=2\). Different features are shown in different colors.
Figure 2: Label to lower-level features mapping in the Random Hierarchy Model (RHM). Each of the \(n_{c}\!=\!3\) classes (numbered boxes at the top) corresponds to \(m\!=\!3\) distinct couples (unnumbered boxes at the bottom) of features. These features belong to a finite vocabulary (blue, orange and green, with size \(v\!=\!3\)). Iterating this mapping \(L\) times with the lower-level features as high-level features of the next step yields the full dataset. Notice that some features appear more often in the representation of a certain class than in those of the others, e.g. blue on the left appears twice in class 1, once in class 2 and never in class 3. As a result, low-level features are generally correlated with the label.
## 2 Characteristic Sample Sizes
The main focus of our work is the answer to the following question:
**Q:** _How much data is required to learn a typical instance of the Random Hierarchy Model?_
In this section, we first discuss two characteristic scales of the number of training data for an RHM with \(n_{c}\) classes, vocabulary size \(v\), multiplicity \(m\), depth \(L\), and tuple size \(s\). The first, related to the curse of dimensionality, represents the sample complexity of methods that are not able to learn the hierarchical structure of the data. The second, which comes from information-theoretic considerations, represents the minimal number of data necessary to reconstruct an instance of the RHM. These two sample sizes can be thought of as an upper and lower bound to the sample complexity of deep CNNs, which indeed lies between the two bounds (cf. Section 3).
### Curse of Dimensionality (\(P_{\max}\))
Let us recall that the curse of dimensionality predicts an exponential growth of the sample complexity with the input dimension \(d\!=\!s^{L}\). Fig. 4 shows the test error of a one-hidden-layer fully-connected network trained on instances of the RHM while varying the number of training data \(P\) (see methods for details of the training procedure) in the maximal \(m\) case, \(m\!=\!v^{s-1}\). The bottom panel demonstrates that the sample complexity is proportional to the total dataset size \(P_{\max}\). Since, from Eq. 2, \(P_{\max}\) grows exponentially with \(d\), we conclude that shallow fully-connected networks suffer from the curse of dimensionality. By contrast, we will see that using CNNs results in a much gentler growth (i.e. polynomial) of the sample complexity with \(d\).
### Information-Theoretic Limit (\(P_{\min}\))
An algorithm with full prior knowledge of the generative model can reconstruct an instance of the RHM with a number of points \(P_{\min}\!\ll\!P_{\max}\). For instance, we can consider an exhaustive search within the hypothesis class of all possible hierarchical models with fixed \(n_{c}\), \(m\), \(v\), and \(L\). Then, if \(n_{c}\!=\!v\) and \(m\!=\!v^{s-1}\), so that the model generates all possible input data, we can use a classical result of the PAC (Probably Approximately Correct) framework of statistical learning theory [34] to relate \(P_{\min}\) with the logarithm of the cardinality of the hypothesis class, that is, the number of possible instances of the hierarchical model. The number of possible composition rules equals the number of ways of allocating \(v^{s-1}\) of \(v^{s}\) possible tuples to each
Figure 4: **Sample complexity for one-hidden-layer fully-connected networks, \(v\!=\!n_{c}\!=\!m\) and \(s\!=\!2\). Top: Test error vs. the number of training data. Different colors correspond to different vocabulary sizes \(v\). Bottom: number of data corresponding to test error \(\epsilon\!=\!0.7\) as a function of \(P_{\max}\). The black dashed line indicates a linear relationship: one-hidden-layer fully-connected networks achieve a small test error only when trained on a finite fraction of the whole dataset, thus their sample complexity grows exponentially with the input dimension.**
Figure 3: Label to lower-level features mapping in the Homogeneous Feature Model with \(v\!=\!m\!=\!n_{c}\!=\!3\) and \(s\!=\!2\). In contrast with the case illustrated in Fig. 2, this mapping is such that each of the 3 possible low-level features appears exactly once in each of the 2 elements of the representation of each class. In maths, denoting with \(N_{i}(\mu;\alpha)\) the number of times that the low-level feature \(\mu\) appears in the \(i\)-th position of the representation of class \(\alpha\), one has that \(N_{i}(\mu;\alpha)\!=\!1\) for all \(i\!=\!1\), 2, for all \(\mu\!=\!\text{green}\), blue, orange and for all \(\alpha\!=\!1\), 2, 3.
of the \(v\) classes/features, i.e., a multinomial coefficient,
\[\#\left\{\text{rules}\right\}=\frac{(v^{s})!}{((v^{s-1})!)^{v}} \tag{4}\]
Since an instance consists of \(L\) independently chosen composition rules, we have
\[\#\left\{\text{instances}\right\}=\left(\#\left\{\text{rules}\right\}\right)^{L} \left(\frac{1}{v!}\right)^{L-1} \tag{5}\]
where the additional multiplicative factor \((v!)^{1-L}\) takes into account that the input-label mapping is invariant for relabeling of the features of the \(L-1\) internal representations. Upon taking the logarithm and approximating the factorials for large \(v\) via Stirling's formula,
\[P_{\text{min}}=\log\left(\#\left\{\text{instances}\right\}\right)\xrightarrow{ v\gg 1}Lv^{s}, \tag{6}\]
Intuitively, the problem boils down to understanding the \(L\) composition rules, each needing \(m\times v\) examples (\(v^{s}\) for \(m\!=\!v^{s-1}\)). \(P_{\text{min}}\) grows only linearly with the depth \(L\)--hence logarithmically in \(d\)--whereas \(P_{\text{max}}\) is exponential in \(d\). Having used full knowledge of the generative model, \(P_{\text{min}}\) can be thought of as a lower bound for the sample complexity of a generic supervised learning algorithm which is agnostic of the data structure.
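The counting in Eqs. 4-6 can be checked numerically with log-factorials. The sketch below is ours: it evaluates the exact logarithm of the number of instances for the maximal RHM and compares it to \(Lv^{s}\log v\) (the \(\log v\) factor is dropped in the asymptotic statement of Eq. 6).

```python
import math

# Numerical check of Eqs. 4-6 for the maximal RHM (n_c = v, m = v^(s-1)).
def log_num_instances(v, s, L):
    # log #rules = log[(v^s)! / ((v^(s-1))!)^v]                       (Eq. 4)
    log_rules = math.lgamma(v**s + 1) - v * math.lgamma(v**(s - 1) + 1)
    # L independent rules, modulo (v!)^(L-1) feature relabelings      (Eq. 5)
    return L * log_rules - (L - 1) * math.lgamma(v + 1)

v, s, L = 30, 2, 4
ratio = log_num_instances(v, s, L) / (L * v**s * math.log(v))
# Stirling's formula gives log #rules ~ v^s log v, so the ratio approaches 1
```

Even at the moderate value \(v=30\) the ratio is already close to 1, confirming the Stirling estimate behind Eq. 6.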
## 3 Sample Complexity of Deep CNNs
In this section, we focus on deep learning methods. In particular, we ask
**Q:**: _How much data is required to learn a typical instance of the Random Hierarchy Model with a deep CNN?_
Thus, after generating an instance of the RHM with fixed parameters \(n_{c}\), \(s\), \(m\), \(v\), and \(L\), we train a deep CNN with \(L\) hidden layers, filter size and stride equal to \(s\) (see Fig. 5 for an illustration) with stochastic gradient descent (SGD) on \(P\) training points selected at random among the RHM data. Further details of the training are in _Materials and Methods_.
By looking at the test error of trained networks as a function of the training set size (top panels of Fig. 6 and Fig. 7, see also Fig. 15 in G for a study with varying \(n_{c}\)), we notice the existence of a characteristic value of \(P\) where the error decreases dramatically, thus the task is learned. In order to study the behavior of this threshold with the parameters of the RHM, we define the sample complexity as the smallest \(P\) such that the test error \(\epsilon(P)\) is smaller than \(\epsilon_{\text{rand}}/10\), with \(\epsilon_{\text{rand}}\!=\!1-n_{c}^{-1}\) denoting the average error when choosing the label uniformly at random. The bottom panels of Fig. 6 (for the case \(n_{c}\!=\!m\!=\!v\)) and Fig. 7 (with \(m\!<\!v\), see G for varying \(n_{c}\)) show that the sample complexity scales as
\[P^{\star}=n_{c}m^{L}\Leftrightarrow\frac{P^{\star}}{n_{c}}=d^{\ln(m)/\ln(s)}, \tag{7}\]
independently of the vocabulary size \(v\). Eq. 7 shows that deep CNNs only require a number of samples that scales as a power of the input dimension \(d\!=\!s^{L}\) to learn the RHM: the curse of dimensionality is beaten. This evidences the ability of CNNs to harness the hierarchical compositionality inherent to the task. The question then becomes: what mechanisms do these networks employ to achieve this feat?
### Emergence of Synonymic Invariance in Deep CNNs
A natural approach to learning the RHM would be to identify the sets of \(s\)-tuples of input features that correspond to the same higher-level feature. Examples include the pairs of low-level features in Fig. 2 and Fig. 3 which belong to the same column. In general, we refer to \(s\)-tuples that share the same higher-level representation as _synonyms_. Identifying synonyms at the first level would allow us to replace each \(s\)-dimensional patch of the input with a single symbol, reducing the dimensionality of the problem from \(s^{L}\) to \(s^{L-1}\). Repeating this procedure \(L\) times would lead to the class labels and, consequently, to the solution of the task.
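The bottom-up procedure just described can be sketched directly. The toy rules below are illustrative, not taken from the paper: each pass replaces every \(s\)-tuple by its higher-level symbol, so \(L\) passes map an input of size \(s^L\) to its class label.

```python
# Iterated synonym replacement: rules[l] maps an s-tuple of level-l features
# to the level-(l+1) feature it encodes (the class label at the top level).
def decode(x, rules, s):
    for level in rules:
        x = [level[tuple(x[i:i + s])] for i in range(0, len(x), s)]
    return x[0]

# Toy instance with s = 2, L = 2, v = 2: (0,1) and (1,0) are synonyms encoding
# feature 0, while (0,0) and (1,1) are synonyms encoding feature 1.
rules = [
    {(0, 1): 0, (1, 0): 0, (0, 0): 1, (1, 1): 1},  # input tuples -> level-1 features
    {(0, 0): 0, (1, 1): 0, (0, 1): 1, (1, 0): 1},  # level-1 tuples -> class label
]
label = decode([0, 1, 1, 1], rules, s=2)  # (0,1)->0, (1,1)->1, then (0,1) -> class 1
```

Swapping a patch for one of its synonyms, e.g. replacing \((0,1)\) with \((1,0)\), leaves the output label unchanged, which is exactly the invariance measured next.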
In order to test if deep CNNs trained on the RHM resort to a similar solution, we introduce the _synonymic sensitivity_, which is a measure of the invariance of any given function of the input with respect to the exchange of synonymic \(s\)-tuples. We define \(S_{k,l}\) as the sensitivity of the \(k\)-th layer representation of a trained network with respect to exchanges of synonymous tuples of level-\(l\) features.
Figure 5: Neural network architecture that matches the RHM hierarchy. This is a deep CNN with \(L\) hidden layers, and stride and filter size equal to the tuple length \(s\). Filters that act on different input patches are the same (weight sharing). The number of input channels equals \(v\) and the output is a vector of size \(n_{c}\).
Namely,
\[S_{k,l}=\frac{\langle\|f_{k}(x)-f_{k}(P_{l}x)\|^{2}\rangle_{x,P_{l}}}{\langle\|f_ {k}(x)-f_{k}(z)\|^{2}\rangle_{x,z}}, \tag{8}\]
where: \(f_{k}\) is the vector of the activations of the \(k\)-th layer in the network; \(P_{l}\) is an operator that replaces all the level-\(l\) tuples with synonyms selected uniformly at random; \(\langle\cdot\rangle\) with subscripts \(x,z\) denote an average over all the inputs in an instance of the RHM; the subscript \(P_{l}\) denotes average over all the exchanges of synonyms.
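Eq. 8 can be estimated by Monte Carlo. In the sketch below (names and toy data are ours), `f` is any representation mapping an input to a list of activations and `swap` plays the role of the exchange operator \(P_{l}\); both are supplied by the caller.

```python
import random

# Monte-Carlo estimate of the synonymic sensitivity of Eq. 8.
def sensitivity(f, swap, inputs, n_pairs=1000, rng=random):
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    # numerator: mean squared change of f under exchanges of synonyms
    num = sum(sqdist(f(x), f(swap(x))) for x in inputs) / len(inputs)
    # denominator: mean squared distance between representations of random pairs
    den = sum(sqdist(f(rng.choice(inputs)), f(rng.choice(inputs)))
              for _ in range(n_pairs)) / n_pairs
    return num / den

random.seed(0)
inputs = [(0, 1), (1, 0), (0, 0), (1, 1)]
def reverse(x):               # toy "synonym exchange" operator
    return x[::-1]
s_inv = sensitivity(lambda x: [float(sum(x))], reverse, inputs)  # invariant f
s_var = sensitivity(lambda x: [float(x[0])], reverse, inputs)    # sensitive f
```

A representation that is invariant under the exchange gives sensitivity 0, while one that depends on the within-tuple ordering gives a strictly positive value.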
In particular, \(S_{k,1}\) quantifies the invariance of the hidden representations learned by the network at layer \(k\) with respect to exchanges of synonym tuples of input features. Fig. 8 reports \(S_{2,1}\) as a function of the training set size \(P\) for different combinations of the model parameters. We focused on \(S_{2,1}\)--the sensitivity of the second layer of the deep CNN to permutations at the first level of the hierarchy--since synonymic invariance can generally be achieved at all layers \(k\) starting from \(k=l+1\), and not before (see Footnote 2). Notice that all curves display a sigmoidal shape, signaling the existence of a characteristic sample size which marks the emergence of synonymic sensitivity in the learned representations. Remarkably, by rescaling the \(x\)-axis by the sample complexity of Eq. 7 (bottom panel), curves corresponding to different parameters collapse. We conclude that the generalization ability of a network relies on the synonymic invariance of its hidden representations.
Footnote 2: To illustrate this, consider a hierarchy of depth \(L=2\), \(s=2\), and a two-hidden-layers CNN. In the general case, synonymic invariance to permutations at level one, cannot be achieved at the first layer of the network. This is because, say a level-\(1\) feature can be represented at the input as \((\alpha,\beta)\), \((\alpha,\alpha)\) and \((\beta,\alpha)\), but not as \((\beta,\beta)\). Then, it is impossible to build a neuron that would have the same response to the first three pairs but not the fourth. Instead, a simple solution exists for layer \(2\) to become invariant to exchanges at level \(1\). This consists in building \(v^{2}\) neurons at the first layer \(k=1\), each responding to one input pair. Clearly, the representation at \(k=1\) is not invariant to the substitution of synonyms. The second layer, though, can assign identical weights to all the \(v\) neurons that encode for the same feature, hence becoming invariant to permutations at \(l=1\).
Measures of the synonymic sensitivity \(S_{k,1}\) for different layers \(k\) are reported in Fig. 9 (blue lines), showing indeed that small values of \(S_{k,1}\) are achieved for \(k\!\geq\!2\). Fig. 9 also shows the sensitivities to exchanges of synonyms in the higher-level representations of the RHM: all levels are learned together as \(P\) increases, and invariance to level-\(l\) exchanges is achieved at layer \(k=l+1\), as expected. The figure displays the test error too (gray dashed), to further emphasize its correlation with synonymic invariance.
Figure 6: **Sample complexity for deep CNNs, \(m=n_{c}=v\) and \(s=2\). Top: Test error vs number of training points. Different colors correspond to different vocabulary sizes \(v\); markers correspond to hierarchy depths \(L\). Deep CNNs are able to achieve zero generalization error when enough training points are provided. Bottom: sample complexity \(P^{*}\) corresponding to a test error \(\epsilon^{*}=0.1\). Remarkably, the neural networks can generalize with a number of samples \(P^{*}=v^{L+1}\ll P_{\text{max}}\).**
Figure 7: **Sample complexity for deep CNNs, \(m<v\), \(n_{c}=v\) and \(s=2\). Top: Test error vs number of training points. Different colors correspond to different vocabulary sizes \(v\); markers correspond to hierarchy depths \(L\). Bottom: sample complexity \(P^{*}\) corresponding to a test error \(\epsilon^{*}=0.1\). Similarly to the previous plot, this confirms that the sample complexity of deep CNNs scales as \(P^{*}=n_{c}m^{L}\).**
## 4 Correlations Govern Synonymic Invariance
We now provide a theoretical argument for understanding the scaling of \(P^{*}\) of Eq. 7 with the parameters of the RHM. First, we compute a third characteristic sample size \(P_{c}\), defined as the size of the training set for which the _local_ correlation between any of the input patches and the label becomes detectable. Remarkably, \(P_{c}\) coincides with \(P^{*}\) of Eq. 7. Secondly, we demonstrate how a one-hidden-layer neural network acting on a single patch can use such correlations to build a synonymic invariant representation in a single step of gradient descent, so that \(P_{c}\) and \(P^{*}\) also correspond to the emergence of an invariant representation.
### Identifying Synonyms by Counting
The invariance of the RHM labels with respect to exchanges of synonymous input patches can be inferred by counting the occurrences of such patches in all the data belonging to a given class \(\alpha\). Intuitively, tuples of features that appear with identical frequencies are likely synonyms. More specifically, let us denote an \(s\)-dimensional input patch with \(\mathbf{x}_{j}\) for \(j\) in \(1,\ldots,s^{L-1}\), a \(s\)-tuple of input features with \(\mathbf{\mu}\!=\!(\mu_{1},\ldots,\mu_{s})\), and the number of data in class \(\alpha\) which display \(\mathbf{\mu}\) in the \(j\)-th patch with \(N_{j}(\mathbf{\mu};\alpha)\). Normalizing this number by \(N_{j}(\mathbf{\mu})\!=\!\sum_{\alpha}N_{j}(\mathbf{\mu};\alpha)\) yields the conditional probability \(f_{j}(\alpha|\mathbf{\mu})\) for a datum to belong to class \(\alpha\) conditioned on displaying the \(s\)-tuple \(\mathbf{\mu}\) in the \(j\)-th input patch,
\[f_{j}(\alpha|\mathbf{\mu}):=\Pr\left\{\mathbf{x}\in\alpha\,|\,\mathbf{x}_{j}=\mathbf{\mu}\right\}=\frac{N_{j}(\mathbf{\mu};\alpha)}{N_{j}(\mathbf{\mu})}, \tag{9}\]

where the notation \(\mathbf{x}_{j}\!=\!\mathbf{\mu}\) means that the elements of the patch \(\mathbf{x}_{j}\) encode the tuple of features \(\mathbf{\mu}\).
If the low-level features are homogeneously spread across classes, as in the Homogeneous Feature Model of Fig. 3, then \(f\!=\!n_{c}^{-1}\), independently of \(\alpha\), \(\mathbf{\mu}\) and \(j\). In contrast, due to the aforementioned correlations, the conditional probabilities of the RHM all differ from \(n_{c}^{-1}\) (see Fig. 2). We refer to this difference as the _signal_ (see Footnote 4). Distinct level-1 tuples \(\mathbf{\mu}\) and \(\mathbf{\nu}\) yield a different \(f\) (and thus a different signal) with high probability unless they share the same level-2 representation. Therefore, this signal can be used to identify synonymous level-1 tuples.
Footnote 4: Cases in which all features are homogeneously spread across classes can also appear in the RHM, but with vanishing probability in the limit of large \(n_{c}\) and \(m\), see Appendix E.
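The conditional frequencies entering this argument are simple normalized counts. A stdlib sketch (helper name and toy data are ours) that estimates \(\hat f_{j}(\alpha|\mathbf{\mu})\) from labeled data:

```python
from collections import Counter

# Empirical conditional class probabilities f_j(alpha|mu) = N_j(mu; alpha) / N_j(mu)
# for the j-th s-dimensional patch of each input (cf. Eq. 9).
def patch_frequencies(data, j, s):
    joint, marginal = Counter(), Counter()
    for x, alpha in data:
        mu = tuple(x[j * s:(j + 1) * s])
        joint[(mu, alpha)] += 1
        marginal[mu] += 1
    return {(mu, a): n / marginal[mu] for (mu, a), n in joint.items()}

data = [([0, 1, 1, 1], 1), ([0, 1, 0, 0], 0), ([1, 0, 1, 1], 1)]
f_hat = patch_frequencies(data, j=0, s=2)
# (0, 1) appears twice in patch 0, once per class: f_hat[((0, 1), 1)] == 0.5
```

Tuples whose frequency vectors coincide across all classes are candidate synonyms; the next subsection quantifies how many samples are needed for these estimates to be reliable.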
### Signal vs. Sampling Noise
When measuring the conditional class probabilities with only \(P\) training data, the occurrences in the right-hand side
Figure 8: Sensitivity \(S_{2,1}\) of the second layer of a deep CNN to permutations in the first level of the RHM with \(L=2,3\), \(s=2\), \(n_{c}=m=v\), as a function of the training set size (top) and after rescaling by \(P^{*}=n_{c}m^{L}\) (bottom). Sensitivity decreases from 1 to approximately zero, i.e. deep CNNs are able to learn synonymic invariance with enough training points. The collapse after rescaling highlights that this can be done with \(P^{*}\) training points.
Figure 9: Permutation sensitivity \(S_{k,l}\) of the layers of a deep CNN trained on the RHM with \(L=3\), \(s=2\), \(n_{c}=m=v=8\), as a function of the training set size \(P\). The permutation of synonyms is performed at different levels, as indicated in colors. The different panels correspond to the sensitivity of different layers’ activations, indicated by the gray box. Synonymic invariance is learned at the same time for all layers, and most of the invariance to level \(l\) is obtained at layer \(k=l+1\).
of Eq. 9 are replaced with empirical occurrences, which induce a sampling _noise_ on the \(f\)'s. For the identification of synonyms to be possible, this noise must be smaller in magnitude than the aforementioned signal--a visual representation of the comparison between signal and noise is depicted in Fig. 10.
The magnitude of the signal can be computed as the ratio between the standard deviation and mean of \(f_{j}(\alpha|\mathbf{\mu})\) over realizations of the RHM. The full calculation is presented in Appendix B: here we present a simplified argument based on an additional independence assumption. Given a class \(\alpha\), the tuple \(\mathbf{\mu}\) appearing in the \(j\)-th input patch is determined by a sequence of \(L\) choices--one choice per level of the hierarchy--of one among \(m\) possible lower-level representations. These \(m^{L}\) possibilities lead to all the \(mv\) distinct \(s\)-tuples. \(N_{j}(\mathbf{\mu};\alpha)\) is proportional to how often the tuple \(\mathbf{\mu}\) is chosen--\(m^{L}/(mv)\) times on average. Under the assumption of independence of the \(m^{L}\) choices, the fluctuations of \(N_{j}(\mathbf{\mu};\alpha)\) relative to its mean are given by the central limit theorem and read \((m^{L}/(mv))^{-1/2}\) in the limit of large \(m\). If \(n_{c}\) is sufficiently large, the fluctuations of \(N_{j}(\mathbf{\mu})\) are negligible in comparison. Therefore, the relative fluctuations of \(f_{j}\) are the same as those of \(N_{j}(\mathbf{\mu};\alpha)\): the size of the signal is \((m^{L}/(mv))^{-1/2}\).
The magnitude of the noise is given by the ratio between the standard deviation and mean, over independent samplings of a training set of fixed size \(P\), of the empirical conditional probabilities \(\hat{f}_{j}(\alpha|\mathbf{\mu})\). Only \(P/(n_{c}mv)\) of the training points will, on average, belong to class \(\alpha\) while displaying feature \(\mu\) in the \(j\)-th patch. Therefore, by the convergence of the empirical measure to the true probability, the sampling fluctuations of \(\hat{f}\) relative to the mean are of order \([P/(n_{c}mv)]^{-1/2}\)--see Appendix B for details. Balancing signal and noise yields the characteristic \(P_{c}\) for the emergence of correlations. For large \(m\), \(n_{c}\) and \(P\),
\[P_{c}=n_{c}m^{L}, \tag{10}\]
which coincides with the empirical sample complexity of deep CNNs discussed in Section 3.
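The balance can be made explicit: equating the noise magnitude \([P/(n_{c}mv)]^{-1/2}\) with the signal magnitude \((m^{L}/(mv))^{-1/2}\) yields \(P_{c}=n_{c}m^{L}\). A quick numerical check (helper names are ours):

```python
# Signal and sampling-noise magnitudes from the argument above.
def signal(m, v, L):
    return (m**L / (m * v)) ** -0.5

def noise(P, n_c, m, v):
    return (P / (n_c * m * v)) ** -0.5

# Setting noise(P) = signal gives P/(n_c m v) = m^L/(m v), i.e. P_c = n_c m^L (Eq. 10).
n_c = m = v = 8
L = 3
P_c = n_c * m**L   # 8 * 8^3 = 4096
```

At \(P=P_{c}\) the two magnitudes match exactly; for \(P\gg P_{c}\) the noise drops below the signal and synonyms become distinguishable.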
### Learning Synonymic Invariance by the Gradients
To complete the argument, we consider a simplified one-step gradient descent setting [35, 36], where \(P_{c}\) marks the number of training examples required to learn a synonymic invariant representation. In this setting (details presented in Appendix C), we train a one-hidden layer fully-connected network on the first \(s\)-dimensional patches of the data. This network cannot fit data which have the same features on the first patch while belonging to different classes. Nevertheless, the hidden representation of the network can become invariant to exchanges of synonymous patches.
More specifically, as we show in Appendix C, with identical initialization of the hidden weights and orthogonalized inputs, the update of the hidden representation \(f_{h}(\mathbf{\mu})\) of the \(s\)-tuple of low-level features \(\mathbf{\mu}\) after one step of gradient descent follows
\[\Delta f_{h}(\mathbf{\mu})=\frac{1}{P}\sum_{\alpha=1}^{n_{c}}a_{h, \alpha}\left(\hat{N}_{1}(\mathbf{\mu};\alpha)-\frac{1}{n_{c}}\sum_{\beta=1}^{n_{c }}\hat{N}_{1}(\mathbf{\mu};\beta)\right), \tag{11}\]
where \(f_{h}(\mathbf{\mu})\) coincides with the pre-activation of the \(h\)-th neuron and \(\mathbf{a}_{h}\!=\!(a_{h,1},\ldots,a_{h,n_{c}})\) denotes the associated \(n_{c}\) dimensional readout weight. \(\hat{N}_{1}\) is used to denote the empirical estimate of the occurrences in the first input patch. Hence, by the result of the previous section, the hidden representation becomes insensitive to the exchange of synonymic features for \(P\!\gg\!P_{c}\).
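Eq. 11 is just a readout-weighted sum of centered empirical class counts, which a few lines of stdlib Python make concrete (the data and readout weights below are illustrative, not from the paper):

```python
from collections import Counter

# One-step update of the hidden representation of tuple mu, following Eq. 11:
# a readout-weighted sum of the centered empirical counts N_1(mu; alpha).
def delta_f(mu, data, a, P):
    counts = Counter()
    for x, alpha in data:
        if tuple(x[:len(mu)]) == mu:     # occurrences of mu in the first patch
            counts[alpha] += 1
    n_c = len(a)
    mean = sum(counts.values()) / n_c
    return sum(a[alpha] * (counts[alpha] - mean) for alpha in range(n_c)) / P

data = [([0, 1, 1, 1], 0), ([0, 1, 0, 0], 0), ([0, 1, 1, 0], 1), ([1, 0, 1, 1], 1)]
update = delta_f((0, 1), data, a=[1.0, -1.0], P=len(data))
# counts = {0: 2, 1: 1}, mean = 1.5, so update = (0.5 + 0.5) / 4 = 0.25
```

Since synonymous tuples have nearly identical empirical count profiles once \(P\gg P_{c}\), they receive nearly identical updates, which is precisely how the invariant representation emerges.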
This prediction is confirmed empirically in Fig. 11, which shows the sensitivity \(S_{1,1}\) of the hidden representation of shallow fully-connected networks trained in the setting of this section, as a function of the number \(P\) of training data for different combinations of the model parameters. The bottom panel, in particular, highlights that the sensitivity is close to 1 for \(P\!\ll\!P_{c}\) and close to 0 for \(P\!\gg\!P_{c}\). In addition, notice that the collapse of the pre-activations of synonymic tuples onto the same, synonymic invariant value, implies that the rank of the hidden weights matrix tends to \(v\)--the vocabulary size of higher-level features. This low-rank structure is typical in the weights of deep networks trained on image classification [37, 38, 39, 40].
Using all patches via weight sharing. Notice that using a one-hidden-layer CNN which looks at all patches via
Figure 10: **Signal vs. noise illustration.** The dashed function represents the distribution of \(f(\alpha|\mathbf{\mu})\) resulting from the random sampling of the RHM rules. The solid dots illustrate the _true_ frequencies \(f(\alpha|\mathbf{\mu})\) sampled from this distribution, with different colors corresponding to different groups of synonyms. The typical spacing between the solid dots, given by the width of the distribution, represents the _signal_. Transparent dots represent the empirical frequencies \(\hat{f}_{j}(\alpha|\mathbf{\mu})\), with dots of the same color corresponding to synonymous features. The spread of transparent dots of the same color, which is due to the finiteness of the training set, represents the _noise_.
weight sharing and global average pooling would yield the same result since the average over patches reduces both the signal and the noise by the same factor--see subsection C.1 for details.
Improved Performance via Clustering. Note that our signal-vs-noise argument is based on a single class \(\alpha\), as it considers the scalar quantity \(\hat{f}(\alpha|\mathbf{\mu})\). However, an observer seeking to identify synonyms could in principle use the information from all classes, represented by the \(n_{c}\)-dimensional vector of empirical frequencies \((\hat{f}(\alpha|\mathbf{\mu}))_{\alpha=1,\dots,n_{c}}\). Following this idea, one can devise a layer-wise algorithm where the representations of each layer are first updated with a single step of gradient descent (as in Eq. 82), then clustered into synonymic groups [22, 27]. Such an algorithm can solve the RHM with less than \(n_{c}m^{L}\) training points--\(\sqrt{n_{c}}m^{L}\) in the maximal dataset case \(n_{c}\!=\!v\) and \(m\!=\!v^{s-1}\), as we show empirically and justify theoretically in Appendix D. Notably, the dependence on the dimensionality \(m^{L}\) is unaffected by the change of algorithm, although the prefactor reveals the advantage of the dedicated clustering algorithm over standard CNNs.
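As a toy version of this clustering idea (a full implementation would use a proper algorithm such as k-means; the tolerance-based merge and all numbers below are ours), tuples whose empirical frequency vectors \((\hat{f}(\alpha|\mathbf{\mu}))_{\alpha}\) are close get grouped as synonyms:

```python
# Greedy grouping of tuples by proximity of their class-frequency vectors.
def cluster_synonyms(freq, tol=0.1):
    groups = []   # list of (representative vector, member tuples)
    for mu, vec in freq.items():
        for rep, members in groups:
            if max(abs(a - b) for a, b in zip(rep, vec)) < tol:
                members.append(mu)
                break
        else:
            groups.append((vec, [mu]))
    return [sorted(members) for _, members in groups]

freq = {(0, 1): (0.62, 0.38), (1, 0): (0.60, 0.40),   # nearly equal: synonyms
        (0, 0): (0.18, 0.82), (1, 1): (0.21, 0.79)}
groups = cluster_synonyms(freq)   # [[(0, 1), (1, 0)], [(0, 0), (1, 1)]]
```

Using the whole \(n_{c}\)-dimensional vector, rather than a single scalar frequency, is what buys the \(\sqrt{n_{c}}\) improvement in the prefactor mentioned above.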
## 5 Conclusions
We have introduced a hierarchical model of classification tasks, where each class is identified by a number of equivalent high-level features (synonyms), themselves consisting of a number of equivalent sub-features according to a hierarchy of random composition rules. First, we established via a combination of extensive experiments and theoretical arguments that the sample complexity of deep CNNs is a simple function of the number of classes \(n_{c}\), the number of synonymic features \(m\) and the depth of the hierarchy \(L\). This result provides a rule of thumb for estimating the order of magnitude of the sample complexity of real datasets. In the case of CIFAR10 [41], for instance, having 10 classes, taking reasonable values for the RHM parameters such as \(m\in[5,15]\) and \(L=3\), yields \(P^{*}\in[10^{3},3\times 10^{4}]\), comparable with the sample complexity of modern architectures (see Fig. 16 in Appendix G).
Secondly, our results indicate a separation between shallow networks, which are cursed by the input dimensionality, and sufficiently deep CNNs, which are not. We thus complement previous analyses based on expressivity [21] or information-theoretical considerations [24] with a generalization result.
Last but not least, we proposed to characterize the quality of internal representations by their sensitivity to transformations of the data that leave the task invariant. This analysis bypasses the issues of previous characterizations: approaches based on mutual information [12] are ill-defined when the network representations are deterministic functions of the input [13], while approaches based on intrinsic dimension [14, 15] can display counter-intuitive results (see Appendix F for a more in-depth discussion of how this quantity behaves in our setup). Interestingly, our analysis indicates that performance should strongly correlate with the invariance of the internal representation toward synonyms. This prediction could in principle be tested in natural language processing models, but also on image datasets, by performing discrete changes to images that leave the class unchanged.
Looking forward, the Random Hierarchy Model is a rich but minimal model where open questions in the theory of deep learning could be clarified. For instance, a formidable challenge such as the description of the gradient descent dynamics of deep networks becomes significantly simpler for the RHM, owing to the simple structure of the target representations. Other important questions, including the ability of fully-connected networks to learn
Figure 11: Synonymic sensitivity of the hidden representation vs \(P\) for a one-hidden-layer fully-connected network trained on the first patch of the inputs of an RHM with \(s\!=\!2\) and \(m\!=\!v\), for several values of \(L\), \(v\), and \(n_{c}\!\leq\!v\). The top panel shows the bare curves whereas, in the bottom panel, the x-axis is rescaled by \(P_{c}=n_{c}m^{L}\). The collapse of the rescaled curves highlights that \(P_{c}\) coincides with the threshold number of training data for building a synonymic invariant representation.
local connections [30, 42, 43], the benefits of residual connections [44] or the advantages of deep learning over kernel methods [45, 46, 25, 47] can be studied quantitatively within this model, as functions of the multiple parameters that define the hierarchical structure of the task.
## Materials and Methods
### Experimental Setup
The experiments are performed using the PyTorch deep learning framework [48]. The code used for the experiments is available online at [https://github.com/pcsl-epfl/hierarchy-learning](https://github.com/pcsl-epfl/hierarchy-learning).
### RHM implementation
The code implementing the RHM is available online at [https://github.com/pcsl-epfl/hierarchy-learning/blob/master/datasets/hierarchical.py](https://github.com/pcsl-epfl/hierarchy-learning/blob/master/datasets/hierarchical.py). The inputs sampled from the RHM are represented as a one-hot encoding of low-level features. This makes each input of size \(s^{L}\times v\). The inputs are whitened so that the average pixel value over channels is equal to zero.
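The encoding just described amounts to subtracting \(1/v\) from a one-hot indicator at each position; a minimal stdlib sketch (function name is ours, not from the linked implementation):

```python
# One-hot encode an input of low-level features and center over channels,
# so that the average pixel value over the v channels is zero.
def encode(x, v):
    return [[(1.0 if mu == c else 0.0) - 1.0 / v for c in range(v)] for mu in x]

enc = encode([0, 2, 1], v=3)
# every position sums to 1 - v * (1/v) = 0 over channels
```

The result has shape \(s^{L}\times v\): one centered indicator row per input position.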
### Model Architecture
One-hidden-layer fully-connected networks have input dimension equal to \(s^{L}\times v\), \(H=10^{4}\) hidden neurons, and \(n_{c}\) outputs. The deep convolutional neural networks (CNNs) have weight sharing, stride equal to filter size equal to \(s\) and \(L\) hidden layers. In this case, we set the width \(H\) to be larger than the number of possible \(s\)-tuples that can exist at a given layer, \(H\gg v^{s}\).
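The matched architecture can be sketched without any deep-learning framework: the same filter (weight sharing) is applied to non-overlapping size-\(s\) patches at every layer, so \(s^{L}\) input positions collapse to one after \(L\) layers. The weights below are placeholders; in the paper the network is trained with SGD and the width satisfies \(H\gg v^{s}\).

```python
# Forward pass of a deep CNN with filter size = stride = s (cf. Fig. 5).
def layer(x, w, s):
    # x: list of positions, each a feature vector; w: rows of the shared filter
    out = []
    for i in range(0, len(x), s):
        patch = [a for vec in x[i:i + s] for a in vec]          # flatten s positions
        out.append([max(0.0, sum(wi * p for wi, p in zip(row, patch)))
                    for row in w])                               # ReLU activation
    return out

def forward(x, hidden_weights, readout, s):
    for w in hidden_weights:          # L weight-shared hidden layers
        x = layer(x, w, s)
    return [sum(r * f for r, f in zip(row, x[0])) for row in readout]

# Shapes for v = 2, s = 2, L = 2, width H = 3, n_c = 2 (placeholder weights):
x = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]            # s^L = 4 positions
w1 = [[0.5, -0.5, 0.5, -0.5]] * 3                               # H rows of size s*v
w2 = [[0.1] * 6] * 3                                            # H rows of size s*H
readout = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]                    # n_c x H
logits = forward(x, [w1, w2], readout, s=2)                     # n_c outputs
```

Each layer halves (for \(s=2\)) the number of positions, mirroring the tree of the RHM; the readout finally maps the single remaining position to \(n_{c}\) class scores.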
### Training Procedure
Neural networks are trained using stochastic gradient descent (SGD) on the cross-entropy loss, with a batch size of 128 and a learning rate equal to 0.3. Training is stopped when the training loss decreases below a certain threshold fixed to \(10^{-3}\).
### Measurements
The performance of the models is measured as the percentage error on a test set. The test set size is chosen to be \(\min(P_{\max}-P,\,20^{\prime}000)\). Synonymic sensitivity, as defined in Eq. 8, is measured on a test set of size \(\min(P_{\max}-P,\,1^{\prime}000)\). Reported results for a given value of RHM parameters are averaged over 10 independent draws of the RHM instance and network initialization.
arXiv:2306.15768v1 | An Efficient Deep Convolutional Neural Network Model For Yoga Pose Recognition Using Single Images | Santosh Kumar Yadav, Apurv Shukla, Kamlesh Tiwari, Hari Mohan Pandey, Shaik Ali Akbar | 2023-06-27T19:34:46Z | http://arxiv.org/abs/2306.15768v1

# An Efficient Deep Convolutional Neural Network Model For Yoga Pose Recognition Using Single Images
###### Abstract
Pose recognition deals with designing algorithms to locate human body joints in a 2D/3D space and run inference on the estimated joint locations for predicting the poses. Yoga poses consist of some very complex postures. It imposes various challenges on the computer vision algorithms like occlusion, inter-class similarity, intra-class variability, viewpoint complexity, _etc._ This paper presents YPose, an efficient deep convolutional neural network (CNN) model to recognize yoga asanas from RGB images. The proposed model consists of four steps as follows: (a) first, the region of interest (ROI) is segmented using segmentation based approaches to extract the ROI from the original images; (b) second, these refined images are passed to a CNN architecture based on the backbone of EfficientNets for feature extraction; (c) third, dense refinement blocks, adapted from the architecture of densely connected networks are added to learn more diversified features; and (d) fourth, global average pooling and fully connected layers are applied for the classification of the multi-level hierarchy of the yoga poses. The proposed model has been tested on the Yoga-82 dataset. It is a publicly available benchmark dataset for yoga pose recognition. Experimental results show that the proposed model achieves the state-of-the-art on this dataset. The proposed model obtained an accuracy of 93.28%, which is an improvement over the earlier state-of-the-art (79.35%) with a margin of approximately 13.9%. The code will be made publicly available.
keywords: Pose recognition, Yoga, Image classification, Segmentation and classification
## 1 Introduction
The pose of a person represents a particular orientation which is essentially related to understanding various physical and behavioral aspects. Human pose recognition has numerous applications ranging from human-computer interaction, animation, gestural control, virtual reality, sports, sign language recognition, _etc._[1]. It is challenging due to factors such as a large variety of human poses, rotation, occlusion of limbs, body orientation, and large degrees of freedom in the human body mechanics [2; 3].
With the growth of online media, the amount of video and image databases are tremendously increasing on web platforms like YouTube, Netflix, Bing, _etc._ The computer vision community has made use of this and many works exploited the indefinite source of data available on the web to build large pose recognition datasets comprising a variety of postures, thus enhancing the application scope of recognition algorithms. Despite several efforts of building large-scale datasets, not many datasets deal with complex human poses especially that involved in performing yoga asanas. For example, Andriluka _et al._[4] proposed MPII dataset containing approximately 25,000 images with over 40,000 people. Each image is extracted from a YouTube video. The dataset covers 410 specific categories of human activity and 20 general categories. Though the aforementioned large-scale dataset has introduced pose diversity, in terms of the complexity of human pose, they are nowhere close to the complex body postures of yoga exercises [5].
Yoga is popular across the world as a safe and effective exercise [6]. It comprises body postures of various complexities. Yoga postures offer some of the most complex body orientations, which can be hard to capture from a single viewpoint. The complexity further increases with occlusions and changes in image resolution. Figure 1 presents some of the challenging yoga postures consisting of inter-pose similarity, intra-pose variability, different styles of doing a particular asana, self-occlusion, and synthetic images from the Yoga-82 dataset [5]. Because of these complexities, generating fine annotations of body keypoints is not feasible and, therefore, the existing state-of-the-art keypoint detection-based approaches may not be suitable for yoga pose recognition [5]. Due to the inherent challenges in the pose, current pose estimation methods are not able to correctly predict the pose on yoga asanas [1].
Few works, like [7; 8], proposed video-based yoga recognition systems. The dataset of [7] consisted of videos for 6 asanas (_i.e._ bhujangasana, padmasana, shavasana, tadasana, trikonasana, and vrikshasana) recorded with an RGB webcam, whereas [8] used Kinect to capture depth maps for the recognition of 12 yoga poses. However, the datasets they built contain relatively simple yoga poses with less occlusion and a small number of classes, and do not offer many challenges to the learning algorithms; consequently, these works [7; 8] lack generalization ability. Recently, Verma _et al._[5] introduced the Yoga-82 dataset of static images with 82 yoga pose classes, consisting of a total of 28.4k images. The dataset has three levels of hierarchy: the top, mid, and class levels consist of 6, 20, and 82 classes, respectively. The images were collected from various online sources and are of different resolutions, illuminations, viewpoints, and occlusions. Moreover, the dataset also contains a few synthetic yoga images, dealing with silhouette and cartoon based poses. We utilize the Yoga-82 dataset as a testbed for our proposed network.
In the last decade, convolutional neural networks (CNNs) have achieved significant progress on pose estimation, object detection, and semantic segmentation tasks. However, existing pose estimation approaches, like OpenPose [9], HRNet [10], PifPaf [11], Fast Pose [12], _etc._, usually perform poorly and lack robustness when applied to complex yoga postures. Due to the various challenges in yoga poses, pose estimation based approaches often fail to predict self-occluded body joints. The vision community has made significant progress in semantic segmentation
and object detection [13]. For example, Mask R-CNN [13] extends the Faster R-CNN [14] by adding a branch for predicting an object mask in parallel to the existing branch of bounding box recognition. It has been tested for three tasks _i.e._ instance segmentation, person keypoints detection, and bounding box detection for the object. Motivated by these recent advances, we utilized the object detection and semantic segmentation-based approach in our proposed approach for refining the yoga poses.
In this paper, a yoga pose recognition network is proposed to efficiently recognize complex postures using static images of different yoga asanas. The proposed approach aims at directly predicting the pose of yoga being performed by a practitioner. First, the pose image is refined using a segmentation framework, which detects the person in a given image and segments it along with bounding box annotations. For object detection and segmentation, we followed an approach inspired by Mask R-CNN [13]. The predicted segmentation masks on the region of interest (ROI) are used for extracting bounding boxes. The extraction of the ROI from 2D images reduces the irrelevant background information in the input images, which helps the proposed YPose network learn better. The refined images are then inputted to a deep neural network with an EfficientNets [15] backbone. The EfficientNet architecture has been shown to outperform state-of-the-art CNN networks including DenseNet [16], and maintains an optimum scaling which achieves improved performance with balanced computation costs. The proposed model utilizes the B4 variant of EfficientNets. We further modify the network architecture by adding a number of dense refinement blocks adapted from the work on DenseNets, followed by global average pooling and fully connected layers in an end-to-end manner. From our study, it is evident that the addition of dense refinement blocks yields an improvement over the baseline variants. The proposed YPose network has been evaluated on a publicly available benchmark dataset named Yoga-82 [5]. Figure 2
Figure 1: Complexity in Yoga poses due to factors like inter-pose similarity, intra-pose variability, self-occlusion and cartoon images in the Yoga-82 [5] dataset.
presents a naive representation of our proposed yoga pose recognition method.
In particular, the major contributions of the proposed model are as follows.
* We propose a novel approach for yoga pose recognition using single images. The proposed network is shown to extract high-level pose specific features to recognise the complex yoga poses in an end-to-end manner. Our approach aims to filter the irrelevant information of yoga images to learn informative representations of a yoga pose. The proposed network is robust to noise caused by challenging backgrounds, body keypoint occlusions and complex yoga poses.
* We combine image segmentation based approaches to first segment and locate the region of interest (ROI) in the input RGB image. The extracted segmentation masks and ROI are then used to refine the image to filter the background contexts that are not informative to the yoga pose recognition task. The refined images are passed to a deep CNN architecture, further modified to learn fine-grained pose features. Finally, we add global average pooling and fully connected layers to obtain accurate pose predictions.
* Due to the unavailability of a large scale yoga dataset, we exhaustively conducted computer simulation experiments on the Yoga-82 dataset to evaluate the robustness of our proposed approach. We applied the baseline EfficientNet architectures B0, B4, and B5 and obtained accuracies of 86.76%, 89.25%, and 89.46%, respectively, on the third level hierarchy. The proposed approach was not evaluated on the deeper variants of the aforementioned architecture to avoid the high computational cost due to the increased number of floating-point operations (FLOPS). Instead, to improve the model performance with the current backbone parameters, we add several dense refinement blocks adapted from the architecture of DenseNet [16]. The YPose network achieved an accuracy of 93.28%.
* The proposed YPose network has been compared with the state-of-the-art results on the Yoga-82 dataset. The previous state-of-the-art on this dataset was 79.35% and 93.47% for the Top-1 and Top-5 accuracies on the 82 classes. Our proposed model has achieved 93.28% and 98.04% for Top-1 and Top-5 accuracies, respectively, which outperforms the previous state-of-the-art by a margin of approximately 13.9%.
* The proposed network is lightweight and robust to complex yoga poses and their associated challenges such as illumination and occlusion. From the results, we observe that the
Figure 2: Naive representation of the approach for building the proposed network. As shown in the figure, the input image is first refined using a segmentation based network and passed to the proposed CNN architecture consisting of EfficientNet [15] backbone and dense refinement blocks for feature extraction. Finally, the yoga is classified using a fully connected layer. From the activation map of the final layer it can be observed that the proposed model is able to learn meaningful representation of complex and occluded yoga posture.
network learns pose specific features and is able to address the inherent challenges in a particular yoga pose such as inter-pose similarity and intra-pose variability. The total number of parameters of our proposed model is 22.68 M, which are also comparable with the model of Yoga-82 consisting of 22.59 M parameters. The proposed model can easily run on the freely available GPUs provided by the Google Colaboratory.
Section 2 presents the literature study of pose estimation and yoga pose recognition methods. In Section 3, the proposed methodology is described, followed by the process of pose extraction, the model description, and training the network. The training outcomes and final results are discussed in Section 5, which contains the dataset details, experimental results, performance evaluation of the proposed network, and discussion and comparison, showing the potential of our technique for real-world applications. Finally, in Section 6, the concluding remarks and future research directions are presented.
## 2 Related Works
In this section, we review the recent pose estimation and yoga pose recognition literature.
### Pose Estimation
In the last decade, the performance of computer vision algorithms has seen a significant improvement in terms of complex human pose estimation, and the area has attracted the attention of many researchers. Conventional pose estimation methods focus on detecting joint keypoints, and consequently estimating poses, from the target images [17; 18; 19]. Cao _et al_. [9] presented OpenPose, a real-time approach to detecting the 2D pose keypoints of multiple people in an image. The approach uses part affinity fields (PAFs) to learn to associate body parts with individuals in the image. Similarly, PifPaf [11] proposed a pose estimation method that used part intensity fields and part association fields to first localize the body parts and then associate these parts to the human body pose. However, they fail to give accurate keypoint predictions for yoga poses involving inversion, self-occlusion, and other complex styles. Ni _et al_. [20] proposed a human pose detection system by acquiring kinematic parameters of the human body using multi-node sensors. However, multi-node sensors might not be available in common households. HRNet [10] utilized both low and high-resolution representations of the image encodings for forming more stages and performing multiscale fusion between sub-networks. UniPose [3] proposed a unified framework for human pose estimation using contextual segmentation and joint localization. Fast-Pose [12] proposed a fast pose distillation model learned by training lightweight pose neural networks with a low computational cost. The recently proposed BlazePose [1] assumes that the face, or at least the head, of the user must be visible for pose estimation. However, this assumption is unrealistic and is violated under typical yoga pose scenarios.
Moreover, most of the algorithms have been evaluated on datasets that are relatively simple in terms of human pose complexity, activity diversity and thus, do not present enough challenges to the recognition algorithms, limiting their application scope.
### Yoga Pose Recognition
Few works have been proposed on yoga pose recognition. For example, Chen _et al_. [21; 22; 8] proposed a self-training system for yoga pose recognition and correction using Kinect depth sensors. In [21], the contour information, skeletal features, and body coordinates were extracted using two
Kinect depth sensors by placing them perpendicular to each other. In [22], the model first extracted the contour information and then applied the star skeleton method for pose representation. The model was tested on twelve different yoga postures, where five yoga practitioners performed each asana five times. [8] extracted features using Microsoft Kinect and the OpenNI library for recognizing twelve poses. However, they built separate models for each asana and calculated features manually. Moreover, depth-sensor cameras may not be available in common households. Maddala _et al_. [23] presented a 3D motion capture system for yoga poses in videos using joint angular distance maps (JADMs) and CNNs. They evaluated their model performance on a self-collected dataset using 9 cameras (8 IR and 1 RGB video camera) for 42 yoga poses, and on two publicly available datasets, namely CMU [24] and HDM05 [25]. Yadav _et al_. [26] proposed a yoga recognition system to classify six asanas. They used OpenPose [27] to detect eighteen keypoints of the human body and passed them to a hybrid deep learning model consisting of a CNN and an LSTM. Likewise, Jain _et al_. [28] proposed a yoga pose recognition system using 3D-CNNs for ten yoga poses.
However, these works involve a yoga dataset with a less number of images or videos and do not consider the vast variety of poses. They lack generalization capabilities and are far from
Figure 3: Three level hierarchy of the yoga poses in the Yoga-82 dataset [5]. The plum, orange, and yellow color dots represent the first, second and third level hierarchies, consisting of 6, 20, and 82 classes, respectively.
recognizing complex yoga poses. Recently, Verma _et al._[5] introduced a fine-grained hierarchical yoga pose recognition dataset consisting of 28.4k images for 82 classes of yoga _asanas_. The dataset consists of a three-level hierarchy as shown in Figure 3. They tested different variants of CNN architectures, _i.e._ ResNet [29], DenseNet [16], MobileNet [30], and ResNext [31]. However, the model performance was quite low because the dataset presents inherent challenges like inter-pose similarity, intra-pose variability, occlusion, and the presence of synthetic images involving silhouette and cartoon poses. This paper presents a novel approach to recognize complex yoga asanas in single images.
## 3 Proposed Methodology
This section presents the methodology of the proposed model. The proposed network recognizes complex yoga poses efficiently from RGB images. The proposed model consists of four main components, _i.e._ ROI segmentation, the EfficientNet [15] backbone, dense refinement blocks, and fully connected layers. In the first component, instance segmentation and object detection are applied to the original images to extract the ROI from the in-the-wild yoga images. We define the ROI as the region of the image where the yoga practitioner is present. In the second component, the refined images are passed to the EfficientNet [15] based backbone to compute the spatial features. In the third component, dense refinement blocks are applied to obtain more diverse features. Finally, in the fourth component, global average pooling and fully connected layers are applied to get the prediction scores for the first, second, and third-level hierarchy of the yoga poses. Figure 5
Figure 4: ROI segmentation of the yoga pose images. First column presents the original images from the Yoga-82 [5] dataset. The second column presents the mask generated on the original images. The third column shows bounding boxes generated for the person instance. Finally, the fourth column presents the obtained refined pose images.
illustrates the overall pipeline of the proposed model. This is further described in detail in the following subsections.
### ROI Segmentation
This subsection describes the proposed approach of ROI segmentation and extraction. Apart from the various inherent complexities of the yoga poses due to self-occlusion, interclass similarities, intraclass variabilities, and illumination, the Yoga-82 [5] dataset consists of synthetic images with cartoon sketches and silhouette poses, as well as images where the person is performing yoga in the wild (_e.g._ hills, outdoor terrain), which makes yoga pose recognition even more challenging. To deal with these challenges associated with the practitioner's environment, we propose an ROI segmentation approach for yoga poses as shown in Figure 4.
The ROI segmentation model is used to classify every pixel location to perform segmentation for each object instance in the image. It typically consists of ResNet-101 [29] backbone, region proposal network (RPN), and ROIAlign layers. The Resnet-101 backbone is used to extract the features maps from the input image. The RPN layer runs over the image to obtain bounding box coordinates for the region proposals _i.e._ regions in the feature map containing 'Person' as an instance. Further, the outputs from the RPN layer are fed into the ROI align layer to obtain segmentation masks for each object present in the image.
This generates bounding box coordinates along with high-quality segmentation masks for each object instance in the image. The ROI segments are cropped from the images. As the size of the generated annotation was different for each image, all these images are resized to 224 \(\times\) 224. However, the synthetic images were passed directly to the CNN architecture, as these were leading to erroneous ROI predictions.
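As a minimal illustration of this refinement step, the sketch below crops a toy single-channel image to a given person bounding box and resizes the crop with nearest-neighbour sampling. The bounding box is assumed to be supplied by the segmentation network, and all names here are illustrative rather than taken from the paper's code.

```python
# Hedged sketch of the ROI refinement step: crop to a person bounding box,
# then resize with nearest-neighbour sampling (224 x 224 in the paper).

def crop_and_resize(img, box, size=224):
    """img: 2D list (single channel); box: (y0, x0, y1, x1)."""
    y0, x0, y1, x1 = box
    roi = [row[x0:x1] for row in img[y0:y1]]      # crop to the ROI
    h, w = len(roi), len(roi[0])
    ys = [min(i * h // size, h - 1) for i in range(size)]
    xs = [min(j * w // size, w - 1) for j in range(size)]
    return [[roi[y][x] for x in xs] for y in ys]  # nearest-neighbour resize
```

In practice the crop would be applied per colour channel and the resize done with a library routine; the index arithmetic above is only the idea in miniature.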
Figure 5: The schematic representation of the proposed model. The proposed network consists of four steps. In the first step, ROI segmentation is performed on the input image using segmentation network, RPN, and ROI align. Using this we obtained object mask, bounding box and object label. Next, refined images are inputted to the EfficientNet [15] backbone network followed by several refinement block units to calculate diverse features. Finally, fully connected layers give output for the three-class hierarchical predictions for yoga poses.
### Proposed Model
The refined images of size 224 \(\times\) 224 obtained from the ROI segmentation are passed to the deep CNN architecture. The deep CNN architecture consists of the EfficientNet [15] backbone and dense refinement block units, adapted from the architecture of DenseNet [16], followed by global average pooling and fully connected layers. The EfficientNet architecture comprises seven blocks with varying filter sizes, strides, and numbers of channels. Each block consists of the mobile inverted bottleneck unit, namely MBConv [30]. The MBConv units use depthwise convolution and add squeeze-and-excitation optimization. Figure 6 represents one such MBConv unit. In the expansion stage, the network width is increased by increasing the number of channels. Next, the depthwise convolution operations are performed and, finally, in the squeeze-and-excitation stage, the global features are extracted.
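The squeeze-and-excitation stage can be sketched as follows: global average pooling squeezes each channel to a scalar, a small bottleneck with ReLU and sigmoid produces per-channel weights, and the feature map is rescaled channel-wise. The weight matrices `w1` and `w2` stand in for the learned fully connected layers; this is an illustration of the mechanism, not the paper's implementation.

```python
import math

# Toy sketch of squeeze-and-excitation: pool each channel, pass the pooled
# vector through a small bottleneck (ReLU then sigmoid), and rescale channels.
# w1 (c_red x c) and w2 (c x c_red) are placeholders for learned FC layers.

def se_block(x, w1, w2):
    """x: list of channels, each a 2D list of values."""
    squeeze = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in x]
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeeze))) for row in w1]
    excite = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
              for row in w2]
    return [[[v * e for v in row] for row in ch] for ch, e in zip(x, excite)]
```

With zero second-layer weights, every excitation is sigmoid(0) = 0.5, so each channel is simply halved; trained weights would instead emphasize informative channels.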
Starting from the baseline variant B0 of EfficientNets, the architecture is systematically scaled according to the width and depth parameters to the deeper variant B7. As demonstrated in [15], for the models deeper than B4, the performance does not improve significantly with the increase in the number of network parameters.
Settings used for the depth parameter, \(d\) and width parameter, \(w\) were 1.8 and 1.4, respectively. The number of convolutional layers and filters were modified according to the scaling parameters (\(d\) and \(w\)) as \(f_{inc}=\)w\(\times\)\(f_{input}\), where \(f_{inc}\) denotes modified number of filters and \(f_{input}\) denotes number of input filters. The depth divisor (\(d_{div}\)) is set to 8 for scaling operations and the value of minimum depth, \(d_{min}\) is kept equal to \(d_{div}\). For scaling the width of the network we modify the filters as Equation 1, where \(f_{scaled}\) is the new number of filters.
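The width-scaling and rounding rule described above can be sketched as a small helper. The final 10% guard against excessive rounding-down follows the reference EfficientNet implementation and is an assumption, as it is not spelled out in the text.

```python
# Hedged sketch of filter scaling: multiply the input filter count by the
# width parameter w, then round to a multiple of the depth divisor d_div,
# with a floor of d_min (= d_div, as in the text).

def round_filters(f_input, w=1.4, d_div=8, d_min=None):
    if d_min is None:
        d_min = d_div
    f_inc = f_input * w                        # width-scaled filter count
    f_scaled = max(d_min, int(f_inc + d_div / 2) // d_div * d_div)
    if f_scaled < 0.9 * f_inc:                 # guard against >10% shrinkage
        f_scaled += d_div
    return int(f_scaled)
```

For example, under w = 1.4 a 32-filter layer is widened to 44.8 and rounded up to 48 filters.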
\[f_{scaled}=\max\left(d_{min},\;\left\lfloor\frac{f_{inc}+\frac{d_{div}}{2}}{d_{div}}\right\rfloor\times d_{div}\right) \tag{1}\]

To the backbone, we append several dense
refinement block units to further improve the performance. These blocks are adapted from the architecture of DenseNet [32]. Each refinement block typically consists of several 2D convolution layers with varying numbers of filters. The number of these blocks to be added is a crucial hyper-parameter for the proposed approach. Figure 9 depicts a detailed analysis of this hyper-parameter. From the experiments, we infer that 16 such units achieve optimum and efficient performance. Figure 5 represents the detailed description of the modified architecture. There are considerable improvements in predictions after the addition of the refinement blocks. Each block concatenates its features with the feature maps of the preceding layers, as shown in Equation 2.
\[\mathbf{x}_{l}=F[\mathbf{x}_{l-1},...,\mathbf{x}_{1},\mathbf{x}_{0}] \tag{2}\]
where, \(\mathbf{x}_{l}\) denotes the feature vector of _lth_ layer and \(F_{l}\) denotes the corresponding mapping.
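A toy sketch of this dense connectivity follows, with the learned mapping \(F_{l}\) replaced by a simple averaging placeholder; it illustrates only the concatenation pattern of Equation 2, not the actual refinement block.

```python
# Each block consumes the concatenation of all preceding feature vectors
# (Equation 2). The real mapping F_l is convolutional; here it is a mean
# placeholder producing `growth` features per block.

def dense_forward(x0, num_blocks=3, growth=2):
    features = [list(x0)]                           # [x_0]
    for _ in range(num_blocks):
        concat = [v for f in features for v in f]   # [x_{l-1}, ..., x_1, x_0]
        x_l = [sum(concat) / len(concat)] * growth  # stand-in for F_l
        features.append(x_l)
    return features
```

Note how the input to successive blocks keeps growing (here 2, then 4, then 6 values), which is exactly the feature-reuse property the refinement blocks exploit.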
Throughout the proposed model, the swish activation function was used. We preferred the swish activation function over ReLU because of its superior performance for complex image recognition tasks [33]. The smoothness of the swish activation function helps the deep neural network to optimize and generalize better. The learnable parameter \(\mathbf{\theta}\) was set to 1.0. If \(\mathbf{f}(\mathbf{x})\) denotes the swish activation function and \(\sigma(\mathbf{x})=\frac{1}{1+e^{-\mathbf{x}}}\), then
\[\mathbf{f}(\mathbf{x})=\mathbf{x}\times\sigma(\mathbf{\theta}\times\mathbf{x}) \tag{3}\]
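Equation 3 can be written directly, with \(\theta\) fixed at 1.0 as stated above:

```python
import math

# Swish activation f(x) = x * sigmoid(theta * x), with theta = 1.0.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def swish(x, theta=1.0):
    return x * sigmoid(theta * x)
```

Unlike ReLU, swish is smooth and non-monotonic near zero; for instance, `swish(-1.0)` is a small negative value rather than exactly 0.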
The output of the final block is passed as an input to the fully connected layers for the classification. The net output of the fully connected layers, \(\mathbf{z}^{(i)}\) is defined as \(\mathbf{z}^{(i)}=w_{m}\times\mathbf{x}_{m}+w_{m-1}\times\mathbf{x}_{m-1}+...+w_{1}\times \mathbf{x}_{1}+w_{0}\times\mathbf{x}_{0}\), where \(w_{i}\) and \(\mathbf{x}_{i}\) are the weight and the learned feature vectors, respectively.
\[\hat{P}(class=i\,|\,\mathbf{z}^{(i)})=\Phi(\mathbf{z}^{(i)}) \tag{4}\]

where \(\Phi(\mathbf{z}^{(i)})\) denotes the Softmax function, as given in Equation 5.

\[\Phi(\mathbf{z}^{(i)})=\frac{e^{\mathbf{z}^{(i)}}}{\sum_{j}e^{\mathbf{z}^{(j)}}} \tag{5}\]
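A minimal sketch of the Softmax in Equation 5 applied to the fully connected outputs. Subtracting the maximum before exponentiation is a standard numerical-stability step, not something stated in the text.

```python
import math

# Softmax over the fully connected outputs z (Equation 5).

def softmax(z):
    m = max(z)                              # stability shift
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]        # class probabilities
```

The output sums to 1 and preserves the ordering of the logits, so the predicted class is simply the index with the largest input score.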
It is clear that the model was able to converge after 30 epochs, followed by minor fluctuations. It can be noted that for the initial epochs the validation accuracy was higher than that of training. This is because of the regularisation used while training: with a dropout rate of 0.4, 40% of the features are set to zero. However, for testing, all of the features are used, resulting in better generalization ability and robustness of the model. The best model weights were saved for evaluation on the test data.
### Evaluation Parameters
This section provides a detailed analysis of the evaluation parameters. The proposed approach involves adding several dense refinement blocks to improve the pose predictions. The number of such units to be used is a crucial hyperparameter of our approach (Figure 9). From the experiments, it has been observed that, on each level of the hierarchy, the optimum performance was achieved after the addition of 16 such blocks. It can also be noted that the addition of more units scales the depth of the architecture and increases the number of parameters to be trained, leading to possible overfitting on the limited data. Hence, to maintain the efficiency of yoga pose recognition and at the same time achieve optimum performance, we select 16 as the number of such units to be added.
Figure 8: Confusion matrices for (a) 6, (b) 20, and (c) 82 classes for the three level hierarchies of Yoga-82 [5] dataset.
Figure 7: Model accuracy and loss curves for (a) 6, (b) 20, and (c) 82 classes for the three level hierarchies of Yoga-82 [5] dataset.
## 5 Experimental Results
This section presents the experimental results of the proposed model on the Yoga-82 [5] dataset. It consists of five subsections. In the first subsection, dataset details are presented. In the second subsection, experimental settings are explained. In the third subsection, the performance of our proposed model is analyzed. In the fourth subsection, a detailed ablation study is presented to highlight the contribution of each module in the proposed model. Finally, in the fifth subsection, the results of our proposed model are discussed and compared with the state-of-the-art.
### Dataset Details
The performance of the proposed model has been evaluated on the Yoga-82 [5] dataset. This dataset is a publicly available benchmark dataset for large-scale yoga pose recognition with 82 classes. It consists of various complex yoga poses that a human body can perform. All the poses are structured into a three-level hierarchy including body positions, variations in body positions, and the actual pose names. The class labels consist of three levels with 6, 20, and 82 classes for the first, second, and third levels of the hierarchy, respectively. The dataset contains links to the images from the web for a total of 82 yoga-pose classes. The dataset has been downloaded from the links provided by the authors of the dataset. The total number of images in the dataset is 28.4k. Along with the dataset, the annotations and hierarchical class labels were provided. Many images contain _in the wild_ yoga poses, along with a number of synthetic images containing silhouette poses and cartoon sketches. Also, in a few of the images, multiple practitioners are present. Furthermore, images are captured from different viewing angles. Overall, the dataset is challenging in terms of pose diversity and body keypoint occlusion [5]. The dataset consists of a minimum of 64 images and a maximum of 347 images of yoga poses per class.
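For concreteness, resolving a class-level prediction to its mid- and top-level labels amounts to two parent-label lookups. The entries below are illustrative placeholders only; they are not copied from the actual Yoga-82 hierarchy tables.

```python
# Toy parent-label maps; the real dataset defines full 82 -> 20 -> 6 mappings.
CLASS_TO_MID = {"Padmasana": "Seated-legs-crossed",
                "Tadasana": "Standing-straight"}
MID_TO_TOP = {"Seated-legs-crossed": "Sitting",
              "Standing-straight": "Standing"}

def resolve_hierarchy(pose_class):
    """Return (top-level, mid-level, class-level) labels for a prediction."""
    mid = CLASS_TO_MID[pose_class]
    return MID_TO_TOP[mid], mid, pose_class
```

Such a mapping also lets a classifier trained only on the 82 classes report consistent predictions at all three levels, which is one way to read the hierarchical results in the next subsection.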
### Performance Evaluation
In this subsection, the evaluation results of the proposed model are presented. As the dataset contains a three-level hierarchy of yoga poses, we present the performance of our model for the classification of all three of these levels of classes. To evaluate the proposed YPose network, we further performed various experiments on the model architecture. Extensive experiments are conducted with different CNN architectures like DenseNet [32], EfficientNet [15] for the choice of model
Figure 9: Performance evaluation with different numbers of refinement blocks.
backbone and their performances are compared to find an architecture with efficient performance. Finally, we modified the architecture of the network to obtain better representation of the input yoga pose images to get the predictions.
Figure 10 shows a detailed comparison of the performance of different state-of-the-art pose estimation networks along with our proposed model on a few yoga images. In the figure, the first column presents the RGB images of the yoga pose. The second to fifth columns demonstrate the performance of pose estimation using OpenPose [9], HRNet [10], PifPaf [11], and Fast Pose [12]. The sixth column presents the results of ROI segmentation for the instance of yoga practitioners, which consists of a segmentation mask along with the bounding boxes using our proposed method. The last column shows the activation heatmaps learned by the proposed model on the refined images. As can be seen from the figure, many of the state-of-the-art pose estimation methods struggle to produce accurate pose estimations. A number of keypoint detection errors were observed in the predictions. For a few yoga images, not a single keypoint was detected. Also, in the case of cartoon and silhouette poses, these estimation networks catastrophically fail to predict the complex body keypoints. The activation maps of our YPose network show that the proposed model was able to learn high-level features for these complex yoga poses.
The experimental results of the proposed model are presented in Table 1 for the 82 classes. The performance metrics are precision, recall, f1-score, and accuracy on the test split. The precision, recall, and f1-scores are calculated as macro averages. Table 3 presents the Top-1 and Top-5 accuracies for the three-level hierarchies of the test dataset. The Top-1 accuracies for six, twenty, and eighty-two classes are 95.33%, 93.38%, and 93.28%, respectively.
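The macro-averaged metrics reported in Table 1 can be computed as below: per-class precision, recall, and f1 are derived from true/predicted label pairs and averaged with equal weight per class. This is a generic sketch of the metric, not the paper's evaluation script.

```python
def macro_scores(y_true, y_pred):
    """Macro-averaged (precision, recall, f1) over all classes present."""
    classes = sorted(set(y_true) | set(y_pred))
    precs, recs, f1s = [], [], []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec)
        recs.append(rec)
        f1s.append(f1)
    n = len(classes)
    return sum(precs) / n, sum(recs) / n, sum(f1s) / n
```

Because every class contributes equally regardless of its image count, macro averaging is a reasonable choice for Yoga-82, whose per-class counts range from 64 to 347 images.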
The six-class hierarchy classifier achieves better accuracy compared to the other two hierarchies as it comprises simple poses of sitting, standing, balancing, _etc._ as shown in Figure 3. A small performance drop is observed for the class-level hierarchy classifier, as the 82 classes of yoga are very complex in terms of intra-pose variability, inter-pose similarity, and pose complexity. For the classification of the 82 classes of yoga poses, a training accuracy of 99.09% and a validation accuracy of 92.86% were achieved. On the testing split, 93.28% accuracy was obtained. The accuracy and loss curves are plotted in Figure 7. Figure 8 illustrates the confusion matrices for the 6, 20, and 82 class predictions. From the figure, it can be seen that the proposed network is able to classify each class of yoga pose correctly with very few misclassifications. The numbers on the boxes show the accuracy of that particular class.
Figure 12 presents the prediction results of our proposed model for the three-level hierarchical classes for some of the complex yoga postures. These images have diverse environments. The three-level hierarchy predictions along with their bounding boxes are displayed. As can be inferred, in some of the images the yoga practitioner is not properly visible and occupies a small portion of the image; still, the proposed model is able to correctly recognise the pose.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**Model Variant** & **Precision** & **Recall** & **f2-score** & **Accuracy** \\ \hline YPose Lite & 0.8429 & 0.8018 & 0.8109 & 84.51\% \\ YPose Network & 0.9287 & 0.9223 & 0.9241 & 93.28\% \\ \hline \end{tabular}
\end{table}
Table 1: Experimental results of the proposed model for the 82 classes.
### Ablation study
This section presents a detailed study of contribution from each module used in the proposed approach. To demonstrate the effectiveness of refinement blocks we evaluate the performance after removing them from the network. From the Table 4 it is observed that removing the refinement
\begin{table}
\begin{tabular}{l c c} \hline \hline Baseline & Top-1 & Number of Parameters \\ \hline EfficientNet B0 & 86.76\% & 4.11 M \\ EfficientNet B4 & 89.25\% & 17.69 M \\ EfficientNet B5 & 89.46\% & 28.51 M \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of the EfficientNet baselines on the Yoga-82 [5] dataset. Accuracies for variants B0, B4, and B5 of EfficientNet [15] are presented. We pass refined yoga images to each baseline; the reported prediction results are on the 82-class hierarchy of Yoga-82.
Figure 10: Yoga pose estimation using different pose estimation methods. The first, second, third, and fourth columns present yoga poses estimated using OpenPose [9], HRNet [10], PifPaf [11], and Fast Pose [12], respectively. It can be seen from the figure that most pose-estimation-based frameworks fail to correctly predict the keypoints, especially in the case of self-occlusion. ✗ indicates that not a single keypoint was detected.
blocks leads to a significant performance drop, and the CNN backbone struggles to learn representations of complex yoga poses. The refinement blocks are able to learn fine-grained representations of self-occluded and other complex yoga poses. We also conduct experiments to show the contribution of the ROI extraction module. From the results, it is clear that using person instance segmentation to first extract the region of interest improves the results to some extent. This is largely due to challenging yoga scenarios where the practitioner occupies only a small region of the image. The information about the practitioner's environment is not relevant to the recognition task. Finally, we achieve the best performance after adding both the ROI module and the refinement blocks to the model.
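The ROI-extraction step evaluated in this ablation can be illustrated with a minimal sketch: given a binary person mask (e.g., from an instance-segmentation model), crop the image to the mask's bounding box so the irrelevant background occupies less of the input. The `margin` padding and the synthetic mask below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def crop_to_roi(image, mask, margin=4):
    """Crop an image to the bounding box of a binary person mask,
    with a small pixel margin, as a stand-in for ROI extraction."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:           # no person detected: return the full frame
        return image
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    y0 = max(y0 - margin, 0); x0 = max(x0 - margin, 0)
    y1 = min(y1 + margin, image.shape[0]); x1 = min(x1 + margin, image.shape[1])
    return image[y0:y1, x0:x1]

img = np.arange(100 * 100 * 3).reshape(100, 100, 3)
mask = np.zeros((100, 100), dtype=bool)
mask[30:60, 40:70] = True          # hypothetical segmentation output
roi = crop_to_roi(img, mask)
```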
To further study the contribution of the refinement blocks and the generalization ability of the YPose network, we change the CNN backbone to MobileNet-V2, which previously achieved baseline Top-1 and Top-5 accuracies of 71.11% and 88.50%, respectively (Table 5). In the next step, we add 16 refinement units and obtain more accurate predictions, with Top-1 and Top-5 accuracies of 84.51% and 94.49%, respectively (Table 5). We refer to this as the "Lite" variant of the proposed YPose network. It is interesting to note that YPose Lite is lightweight, with a significantly reduced number of parameters and FLOPs, making it suitable for mobile and embedded-system-based yoga recognition applications.
### Discussion and Comparison
This section presents a discussion and comparison of the performance of our approach with the state-of-the-art results. The experiments involved multiple network architectures, and the proposed approach was evaluated by training each of the models. Table 5 shows a detailed comparison of the performance, number of parameters, and FLOPs of the proposed model against the state-of-the-art. For the choice of CNN backbone, we studied the EfficientNet [15] architecture. With a smaller number of parameters, EfficientNet B0 achieves good performance. As we move from variant B0 to B7 of the EfficientNet backbones, the number of trainable parameters, the model size, and the width, resolution, and depth scaling increase, consequently resulting in performance improvements. However, it was observed that for the deeper variants of EfficientNet, the accuracy does not improve significantly (Table 2); from B0 to B5, a gradually diminishing improvement can be observed. Keeping in mind the number of parameters of the state-of-the-art work of Verma _et al._[5], and to lower the computational cost of the network, we selected variant B4 of EfficientNet as the backbone. The proposed YPose network, based on EfficientNet B4 followed by 16 refinement blocks, was trained to predict the 82 yoga poses. The model has approximately 22.68 million parameters, which is comparable with the models implemented in earlier works. The network achieves a state-of-the-art accuracy of 93.28% for the prediction of 82 classes of yoga pose.

Figure 11: The first row presents the ROIs along with segmentation masks obtained using the proposed approach. The last column presents activation maps on the refined images. Our approach aims to directly recognise the poses by learning the difficult representations of yoga poses.

\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Level of Hierarchy** & **Top-1 accuracy** & **Top-5 accuracy** \\ \hline Six Class Hierarchy & 95.33\% & 99.89\% \\ Twenty Class Hierarchy & 93.38\% & 98.37\% \\ Eighty Two Class Hierarchy & 93.28\% & 98.04\% \\ \hline \end{tabular}
\end{table}
Table 3: Top-1 and Top-5 accuracies of the proposed model for the 3-level hierarchy of the Yoga-82 [5] dataset.

\begin{table}
\begin{tabular}{l|l|l|l} \hline Method & Top-1 accuracy & Top-1 accuracy & Top-1 accuracy \\ & (6-class) & (20-class) & (82-class) \\ \hline ROI+Backbone & 92.99 \% & 90.16 \% & 89.25 \% \\ Backbone+Refinement Blocks & 93.85\% & 92.32\% & 89.86 \% \\ ROI+Backbone+Refinement Blocks & 95.33\% & 93.76 \% & 93.28 \% \\ \hline \end{tabular}
\end{table}
Table 4: Ablation study highlighting the contribution of the different modules used in the proposed model.

Figure 12: Three-level hierarchical class prediction results for some of the complex yoga pose images performed in diverse backgrounds. The bounding frames represent the detected ROIs. The six-level, twenty-level, and 82-class level hierarchies are shown in orange, blue, and red colors, respectively.
The proposed model achieved Top-1 and Top-5 accuracies of 93.28% and 98.04%, respectively, while the best results of Verma _et al._[5] were 79.35% and 93.47% for Top-1 and Top-5 accuracies, respectively. The proposed model thus outperforms the earlier work by a large margin of approximately 13.9% in Top-1 accuracy, while the number of parameters and FLOPs remain comparable.
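Top-1 and Top-5 accuracies such as these are computed by checking whether the ground-truth label appears among the k highest-scoring classes for each sample. A small self-contained sketch with a toy score matrix:

```python
import numpy as np

def top_k_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k
    highest-scoring classes. scores: (n_samples, n_classes)."""
    topk = np.argsort(scores, axis=1)[:, -k:]          # k best classes per row
    hits = (topk == np.asarray(labels)[:, None]).any(axis=1)
    return hits.mean()

scores = np.array([[0.1, 0.7, 0.2],    # true 1: top-1 hit
                   [0.5, 0.3, 0.2],    # true 1: top-1 miss, top-2 hit
                   [0.2, 0.2, 0.6]])   # true 2: top-1 hit
labels = [1, 1, 2]
top1 = top_k_accuracy(scores, labels, 1)
top2 = top_k_accuracy(scores, labels, 2)
```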
It is interesting to observe the cases where the proposed network fails to correctly recognise the yoga pose. Figure 13 presents some of the failure cases on the hierarchical poses of Yoga-82. It can be observed that the proposed network correctly recognises most hierarchies, even for silhouette and cartoon poses.
## 6 Conclusion
In this paper, a novel approach has been proposed for recognizing yoga poses from a single RGB image. The proposed model consists of four main steps. In the first step, ROI segmentation is performed to detect the person instance in the image, followed by ROI Align to generate the corresponding segmentation masks and bounding boxes. In the second step, the refined images obtained from ROI extraction are passed to an EfficientNet [15] CNN backbone for feature extraction. In the third step, several dense refinement blocks are added to extract more diverse features specific to complex yoga postures. Finally, in the fourth step, global average pooling and fully connected layers are applied to obtain the class predictions. The proposed network has been tested on the Yoga-82 dataset [5] and outperforms the previous state-of-the-art by a significant margin of approximately 13.9%. This work serves as a demonstration of yoga pose recognition in single images, where temporal context is unavailable. A similar approach
Figure 13: Some of the cases where the proposed network fails to correctly recognise the yoga pose. The ground-truth and predicted hierarchical yoga poses are marked in green and red, respectively. However, the predictions on most of the pose hierarchies are accurate. The predictions in the last two columns show that the approach is robust to silhouette and cartoon poses as well.
can also be used for action recognition in single RGB images. In future work, data augmentation techniques can be applied to further improve the results, and better segmentation approaches can be developed for single-image-based yoga pose recognition or action classification.
## Acknowledgements
The authors would like to thank the anonymous reviewers and our parent organizations for extending their support towards the betterment of the manuscript. We appreciate the assistance provided by CSIR, India. We would also like to acknowledge Manisha Verma _et al_. for making their dataset (Yoga-82) publicly available.
## Conflict of Interest
The authors declare that they have no conflict of interest.
|
2307.03223 | Neural Network Field Theories: Non-Gaussianity, Actions, and Locality | Both the path integral measure in field theory and ensembles of neural
networks describe distributions over functions. When the central limit theorem
can be applied in the infinite-width (infinite-$N$) limit, the ensemble of
networks corresponds to a free field theory. Although an expansion in $1/N$
corresponds to interactions in the field theory, others, such as in a small
breaking of the statistical independence of network parameters, can also lead
to interacting theories. These other expansions can be advantageous over the
$1/N$-expansion, for example by improved behavior with respect to the universal
approximation theorem. Given the connected correlators of a field theory, one
can systematically reconstruct the action order-by-order in the expansion
parameter, using a new Feynman diagram prescription whose vertices are the
connected correlators. This method is motivated by the Edgeworth expansion and
allows one to derive actions for neural network field theories. Conversely, the
correspondence allows one to engineer architectures realizing a given field
theory by representing action deformations as deformations of neural network
parameter densities. As an example, $\phi^4$ theory is realized as an
infinite-$N$ neural network field theory. | Mehmet Demirtas, James Halverson, Anindita Maiti, Matthew D. Schwartz, Keegan Stoner | 2023-07-06T18:00:01Z | http://arxiv.org/abs/2307.03223v2 | # Neural Network Field Theories:
###### Abstract
Both the path integral measure in field theory and ensembles of neural networks describe distributions over functions. When the central limit theorem can be applied in the infinite-width (infinite-\(N\)) limit, the ensemble of networks corresponds to a free field theory. Although an expansion in \(1/N\) corresponds to interactions in the field theory, others, such as in a small breaking of the statistical independence of network parameters, can also lead to interacting theories. These other expansions can be advantageous over the \(1/N\)-expansion, for example by improved behavior with respect to the universal approximation theorem. Given the connected correlators of a field theory, one can systematically reconstruct the action order-by-order in the expansion parameter, using a new Feynman diagram prescription whose vertices are the connected correlators. This method is motivated by the Edgeworth expansion and allows one to derive actions for neural network field theories. Conversely, the correspondence allows one to engineer architectures realizing a given field theory by representing action deformations as deformations of neural network parameter densities. As an example, \(\phi^{4}\) theory is realized as an infinite-\(N\) neural network field theory. |
2306.11268 | LightRidge: An End-to-end Agile Design Framework for Diffractive Optical
Neural Networks | To lower the barrier to diffractive optical neural networks (DONNs) design,
exploration, and deployment, we propose LightRidge, the first end-to-end
optical ML compilation framework, which consists of (1) precise and
differentiable optical physics kernels that enable complete explorations of
DONNs architectures, (2) optical physics computation kernel acceleration that
significantly reduces the runtime cost in training, emulation, and deployment
of DONNs, and (3) versatile and flexible optical system modeling and
user-friendly domain-specific-language (DSL). As a result, LightRidge framework
enables efficient end-to-end design and deployment of DONNs, and significantly
reduces the efforts for programming, hardware-software codesign, and chip
integration. Our results are experimentally conducted with physical optical
systems, where we demonstrate: (1) the optical physics kernels precisely
correlated to low-level physics and systems, (2) significant speedups in
runtime with physics-aware emulation workloads compared to the state-of-the-art
commercial system, (3) effective architectural design space exploration
verified by the hardware prototype and on-chip integration case study, and (4)
novel DONN design principles including successful demonstrations of advanced
image classification and image segmentation task using DONNs architecture and
topology. | Yingjie Li, Ruiyang Chen, Minhan Lou, Berardi Sensale-Rodriguez, Weilu Gao, Cunxi Yu | 2023-06-20T03:45:46Z | http://arxiv.org/abs/2306.11268v2 | # LightRidge: An End-to-end Agile Design Framework for Diffractive Optical Neural Networks
###### Abstract
To lower the barrier to diffractive optical neural networks (DONNs) design, exploration, and deployment, we propose **LightRidge**, the first end-to-end optical ML compilation framework, which consists of **(1)** precise and differentiable optical physics kernels that enable complete explorations of DONNs architectures, **(2)** optical physics computation kernel acceleration that significantly reduces the runtime cost in training, emulation, and deployment of DONNs, and **(3)** versatile and flexible optical system modeling and user-friendly domain-specific-language (DSL). As a result, LightRidge framework enables efficient end-to-end design and deployment of DONNs, and significantly reduces the efforts for programming, hardware-software codesign, and chip integration. Our results are experimentally conducted with physical optical systems, where we demonstrate: **(1)** the optical physics kernels precisely correlated to low-level physics and systems, **(2)** significant speedups in runtime with physics-aware emulation workloads compared to the state-of-the-art commercial system, **(3)** effective architectural design space exploration verified by the hardware prototype and on-chip integration case study, and **(4)** novel DONN design principles including successful demonstrations of advanced image classification and image segmentation task using DONNs architecture and topology.1
Footnote 1: To appear at ASPLOS 2024.
## 1 Introduction
Deep neural networks (DNNs) have undergone significant growth in recent years, with significant contributions in many application domains such as autonomous systems, natural language processing, and health care [3, 11, 30, 51, 15, 16, 2]. However, large DNN models that produce high system throughput usually suffer from a high carbon footprint. For example, recent studies estimated that training a Transformer network produces 626,000 pounds of planet-warming carbon dioxide, equal to the lifetime emissions of five cars [50, 47]. On the other side, embedded accelerators [49, 4, 57, 52, 46, 58], which are designed to improve resource and power efficiency, suffer from limited functionality and throughput. Thus, while there has recently been great progress in customized accelerators that balance computing performance with efficiency in hardware architectures and systems, the Pareto frontier of conventional accelerators remains the same [12, 30, 43, 19, 26, 45].
To advance the Pareto frontier of ML systems, i.e., to offer high computing performance as well as extreme power efficiency, accelerators taking advantage of optics, namely _optical neural networks_ (ONNs), have recently attracted significant interest in machine learning and hardware acceleration. The main advantages of ONNs over digital accelerators can be summarized as follows: **(1)** In optical computing systems, since the input features are encoded and carried by light, computation and data movement happen at the speed of light in the medium, with orders-of-magnitude advantages in computation speed [22, 48, 23, 60, 18, 53]. **(2)** The laser implemented in an optical system can easily be expanded to multiple channels with passive optical devices, such as beam splitters, which means parallel computation can easily be realized with ONN systems, increasing the system throughput significantly [34, 38, 67, 33]. **(3)** The trained ONN system can be deployed with passive optical devices, which means there is no additional energy cost for the all-optical inference process, thus improving energy efficiency significantly [48, 64, 22, 17, 32, 42, 61, 66, 6]. _Diffractive Optical Neural Networks_ (DONNs) are one of the most promising research areas in ONNs; they mimic the propagation and connectivity properties of conventional neural networks by utilizing the natural physics of light diffraction and the phase modulation of coherent light [34, 32, 38, 44, 67, 7, 6]. Even though the inference of the physical DONN is all-optical, the training that leads to its design is done on digital platforms, where a precise, efficient, and hardware-aware emulation engine is required.
Existing optical emulation engines, such as MathWorks BeamLab [56] and LightPipes [55], mainly focus on the emulation of physical phenomena while lacking the key functionalities and domain-specific runtime optimizations needed to support the development of DONNs. Specifically, it is particularly challenging for existing optical emulation frameworks to deal with DONN training and inference for the following reasons: (1) The core emulation functions are not differentiable, which makes backpropagation-based training hard to implement. (2) The implementation is not optimized for runtime. For example, LightPipes does not support tensor representations and operator fusion, which significantly limits runtime performance (see Table 1). (3) There is no hardware/device-aware emulation support, so significant extra effort is required to correlate numerical emulations with physical deployments.
As a result, there are several critical technical barriers in design, training, exploration, and hardware deployment of DONNs, summarized as follows:
**Challenge 1:** Sufficient multi-disciplinary domain knowledge in optical physics, fabrication, and machine learning (ML) is required for DONN system design and deployment, which poses a critical technical barrier to exploring and advancing DONN systems in real-world applications. At this point, there is no end-to-end design framework that supports design and exploration for full-stack DONN design, optimization, fabrication, and on-chip integration. Moreover, the broad architectural search space spanning software, optics, and fabrication hyperparameters can be an obstacle to efficient design space exploration (DSE), which also motivates the development of an end-to-end design framework.
**Challenge 2:** Significant performance degradation has been observed when deploying trained DONN models to practical hardware; namely, there is an algorithm-hardware miscorrelation gap between the numerical modeling and the physical system. The miscorrelation gap can come from two aspects: **(1)** imprecise numerical modeling of the DONN system, i.e., the lack of a precisely implemented physics emulation intermediate representation (IR). Classic numerical models for the fundamental physics kernels in DONNs, such as _finite-difference time-domain_ (FDTD) and _scalar diffraction_ modeling via the _Fast Fourier Transform_ (FFT), are both verified to be sufficiently precise for DONN system emulation [37]; **(2)** the lack of domain-specific hardware-software codesign algorithms to realize quantization-aware hardware deployment and deal with the intrinsic noise (such as fabrication variations, non-uniform optical response, etc.) in optical devices. These challenges have been confirmed by Zhou et al. [67], who report \(\geq\) 30% accuracy degradation when deploying the model to the physical optical system (Figure 1).
**Challenge 3:** Training and emulation of DONN systems are challenging due to the high computational cost of modeling the optical physics. For example, [34, 67, 39] reported that training a 5-layer DONN for MNIST-10 over 5 epochs takes 3-4 days (Figure 1). Besides, existing optics simulation frameworks lack runtime optimization in their physics kernels and offer no domain-specific language (DSL) support. Table 1 summarizes the limitations of existing frameworks for DONN design. More importantly, the choice of numerical physics modeling has a great impact on runtime efficiency, while it is also required to offer high fidelity to the hardware deployment and fabrication.
Thus, we propose _LightRidge_, an agile end-to-end framework aiming to significantly lower the barriers to design, training, design space exploration, and hardware deployment of DONN systems. In particular, _LightRidge_ is implemented with high-performance, precise, and versatile optical physics kernels that correlate precisely with experimental physical systems, enabling out-of-the-box software-to-hardware realization in an end-to-end fashion and showing its capability to explore advanced DONN architectures for complex ML applications. The contributions of this paper are summarized as follows:
* We propose a novel agile physics-aware design framework _LightRidge_ for end-to-end design, exploration, and deployment for DONNs, consisting of versatile and optimized physics modeling kernels and hardware-software codesign algorithms that enable efficient and precise DONNs modeling w.r.t real-world hardware systems (Section 3).
* We propose LightRidge-DSE to accelerate the end-to-end design cycle for DONN design, exploration, and on-chip integration, verified by our physical prototype and an on-chip integration case study. Moreover, LightRidge-DSE confirms critical domain-knowledge insights [5] for designing an efficient DONN system from a physics standpoint (Section 4).
* We experimentally validate the effectiveness and precision of LightRidge in designing practical DONN systems and on-chip integration, via a visible-range DONN prototype and an end-to-end on-chip integration case study (Section 5.1-5.5).
* Furthermore, two novel advanced DONN architecture principles are developed via LightRidge to advance DONNs in complex image classification tasks and to demonstrate first-ever all-optical image segmentation (Section 5.6).
* Finally, LightRidge will be released as an open-source hardware project.2

\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline & \begin{tabular}{c} Optics \\ kernels \\ \end{tabular} & DSE & \begin{tabular}{c} LoC \\ (val) \\ \end{tabular} & \begin{tabular}{c} LoC \\ (train) \\ \end{tabular} &
\begin{tabular}{c} Runtime \\ (pre-fab) \\ \end{tabular} \\ \hline LightRidge & ✓ & ✓ & 1\(\times\) & 1\(\times\) & mins – hrs \\ \hline LightPipes[55] & ✓ & ✗ & 2\(\times\) & n/a & days \\ \hline Customized & & & & & \\ PyTorch/TF & ✗ & ✗ & 20\(\times\) & 50\(\times\) & days \\
[34, 67, 39] & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview comparison of existing programming frameworks for DONN compilation. Lines-of-Code (LoC) efforts are evaluated with a 5-layer DONN [34].

Figure 1: Model performance and time-to-deployment runtime comparison between manual hardware calibration and LightRidge: (1) LightRidge reduces the design cycle from days to hours with high-performance emulation kernels and a DSE engine; (2) LightRidge results in significantly improved correlation in out-of-the-box deployment, eliminating expensive manual calibration processes.
Footnote 2: [https://lightridge.github.io/lightridge](https://lightridge.github.io/lightridge).
## 2 Diffractive Optical Neural Networks
Compared to conventional neural networks (NNs) on digital platforms, the information carrier changes from electrons to photons in DONN systems: instead of manipulating electrons between transistors to realize computation, a DONN realizes computation by manipulating the information-carrying light through its physical features. Specifically, the DONN system is composed of multiple diffractive layers stacked in sequence, as shown in Figure 2(a), which embed the phase modulations trained w.r.t the ML task for manipulating and encoding information onto the light signal. The connection between layers is realized by light diffraction as the light signal propagates between layers. Thus, in DONN systems, light diffraction can be considered the "neural operators" and the phase change patterns can be seen as the "weights" of a conventional NN. However, the light-signal-based DONN system requires an analog-to-digital converter to read out the prediction results; thus, a detector is employed at the end of the system to capture the light intensity pattern for analysis and prediction. As a result, DONN systems take advantage of the light signal to encode and propagate information and of its physical nature to realize computation. Since the physical phenomena occur naturally as light propagates, the computation happens at light speed with no extra energy cost, while the practical computation efficiency of the DONN system is determined by the analog-to-digital conversions.
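The detector readout described above, summing the light intensity over pre-defined per-class regions and picking the brightest one, can be sketched in a few lines; the 8x8 detector layout and region coordinates below are illustrative assumptions, not the actual detector geometry used in the experiments:

```python
import numpy as np

def predict_from_detector(intensity, regions):
    """Mimic the DONN readout: sum the light intensity inside each
    pre-defined detector region and pick the brightest one.
    `regions` is a list of (y0, y1, x0, x1) boxes, one per class."""
    sums = np.array([intensity[y0:y1, x0:x1].sum()
                     for (y0, y1, x0, x1) in regions])
    return int(np.argmax(sums)), sums

# Toy 2-class layout on an 8x8 detector.
pattern = np.zeros((8, 8))
pattern[1:3, 1:3] = 0.2    # some light on the class-0 region
pattern[5:7, 5:7] = 0.9    # more light on the class-1 region
regions = [(1, 3, 1, 3), (5, 7, 5, 7)]
pred, sums = predict_from_detector(pattern, regions)   # pred == 1
```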
This section presents an overview of DONNs, including the emulation, training, and hardware deployment of DONN systems. To obtain an effective DONN model w.r.t a specific ML task, the propagation of the light signal is emulated and the model is trained based on this emulation on digital platforms, which requires a precise mathematical approximation of the optical phenomena, i.e., light diffraction and phase modulation, as illustrated in detail in Section 3.1. Each point on a given diffractive layer acts as a secondary source of the input light wave in accordance with the Huygens-Fresnel principle. The phase of the outgoing wave at each point is determined by the product of the input wave and the complex-valued phase modulation at that point, and propagation through the diffraction space generates the diffraction pattern at the receiving plane. The phase modulation at each point, w.r.t its location on the layer, is a learnable parameter iteratively adjusted during training with the error back-propagation method [34, 67]. The physics kernel implemented in LightRidge for DONN emulation and training is constructed with widely used, precise, and efficient mathematical approximations of the scalar diffraction formulas. Finally, the trained model can be physically deployed with optical devices, as shown in Figure 2(b), or in an on-chip integrated system, as shown in Figure 11, to realize fully optical inference with little energy cost, high computation speed, and high system throughput.
### DONN Emulation and Training
Enabling precise, hardware-software-codesign-aware emulation of the physical phenomena in DONN systems, i.e., input encoding, light diffraction, phase modulation, and detector readout, is critical for the practical realization of DONN systems. There are two main mathematical methods for formulating light diffraction: (1) The _finite-difference time-domain_ (FDTD) method [63], which performs a full-vector differentiable numerical simulation of photonic structures by solving Maxwell's equations directly without physical approximations. It is a sophisticated and powerful method for emulating light propagation, but it suffers from heavy computation and data dependencies that prevent parallelism in kernel development. Specifically, FDTD requires the entire computational domain to be sufficiently finely gridded, which means the DONN system size expands exponentially in FDTD-based emulation. Since DONN systems target large-scale machine learning tasks, FDTD-based emulation is infeasible in computation runtime and memory due to the system scalability. (2) The _Fast Fourier Transform_ (FFT) method [21], which performs mathematical approximation based on scalar diffraction theory. It simplifies the computation with scenario-specific approximations while maintaining emulation precision. There are three widely used approximations for light diffraction in different application scenarios: the _Rayleigh-Sommerfeld_ approximation, the _Fresnel_ approximation, and the _Fraunhofer_ approximation. While both FDTD and FFT-based approximations are differentiable, FFT-based scalar diffraction modeling is more capable of large-scale DONN emulation without requiring a finely gridded computational domain. More importantly, [37] and our physical experiments in Section 5 verify that the FFT-based approximations are sufficiently precise to close the codesign gap in DONN system emulation. **Therefore, we
Figure 2: Overview of DONN system and hardware implementation – (a) Illustration of the DONN system, including the input plane, three diffractive layers, and a light intensity readout plane. (b) The reconfigurable optical hardware system to deploy the DONN system.
implement the FFT-based physics kernel in LightRidge as IR to provide precise and efficient DONN emulation and training (Section 3.1.1).** The phase modulation is applied to the input light wave by complex-valued matrix multiplication as illustrated in Section 3.1.2.
In our framework, the FFT-based mathematical approximation of light diffraction is designed to be fully differentiable from the detector back to the laser input w.r.t the loss function computed from the diffraction pattern captured at the detector. Specifically, during training, the prediction is generated according to the intensity of the diffraction pattern on the detector, with pre-defined detector regions for the different classes, where the light intensity \(I\) collected by each detector region mimics the post-softmax output probability of a conventional DNN. Thus, the class whose corresponding detector region collects the highest light intensity is selected as the final prediction. With the one-hot encoded ground-truth class \(t\), the loss function \(L\) is computed with **MSELoss** as \(L=\parallel\text{Softmax}(I)-t\parallel_{2}\). The whole system is therefore differentiable and compatible with conventional automatic differentiation engines.
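The readout-to-loss computation can be written out directly from the formula in the text: a softmax over the per-region intensities followed by the L2 distance to the one-hot target. The intensity values below are hypothetical:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())        # shifted for numerical stability
    return e / e.sum()

def donn_loss(region_intensities, true_class):
    """L = || softmax(I) - t ||_2 with t one-hot, following the text."""
    t = np.zeros_like(region_intensities)
    t[true_class] = 1.0
    return np.linalg.norm(softmax(region_intensities) - t)

I = np.array([0.2, 2.5, 0.3])      # hypothetical per-region intensities
loss_correct = donn_loss(I, 1)     # brightest region matches the label
loss_wrong = donn_loss(I, 0)       # label disagrees with the brightest region
```

As expected, the loss is smaller when the brightest detector region agrees with the ground-truth class.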
### Hardware Deployment
The materials used to deploy the trained DONN model on physical hardware need to be carefully selected, as optical devices made from different materials can respond very differently at different laser wavelengths. For example, SLMs can function as diffractive layers in DONN systems operating at visible laser wavelengths, while for systems with laser wavelengths in the terahertz (THz) range, a 3D-printed mask made of UV-curable resin, with a designed thickness at each pixel, serves as the diffractive layer [34].
In our experimental hardware system, shown in Figure 2(b), the wavelength of the laser source is 532 nm, which is in the operating range of the SLM3. Specifically, the SLM is an array of twisted nematic liquid crystals, where each pixel (liquid crystal) can be independently twisted to a different angle by a different applied control voltage, providing a different phase modulation for the input light beam. However, such analog optical devices rarely have a unified optical response to the control signal and can vary from device to device due to fabrication errors, worsening the correlation between numerical emulation and hardware deployment. This highlights the importance of designing precise computation kernels for emulation and hardware-software codesign algorithms for DONN systems.
Footnote 3: [https://holoeye.com/lc-2012-spatial-light-modulator/](https://holoeye.com/lc-2012-spatial-light-modulator/)
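As a concrete illustration of quantization-aware deployment on such a device, a trained (continuous) phase value must be mapped to one of the SLM's discrete control levels through its measured, generally nonlinear response curve. The sketch below uses a synthetic 256-level response curve standing in for a per-device calibration; LightRidge's actual codesign algorithms are more involved:

```python
import numpy as np

def nearest_level(target_phase, response_curve):
    """Map an ideal (continuous) phase value to the control level whose
    measured phase response is closest. `response_curve[v]` is the
    measured phase (radians) produced by control level v."""
    target = np.mod(target_phase, 2 * np.pi)
    return int(np.argmin(np.abs(response_curve - target)))

# Synthetic, slightly nonlinear 256-level response covering [0, 2*pi).
levels = np.arange(256)
response = 2 * np.pi * (levels / 256.0) ** 1.05
trained_phase = 1.234
v = nearest_level(trained_phase, response)
realized = response[v]             # phase the device actually applies
```

The residual between `trained_phase` and `realized` is the per-pixel quantization error that hardware-aware training can account for.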
## 3 LightRidge Framework
Figure 3 shows the end-to-end design flow of DONN systems with the automation provided by LightRidge. Given the user-defined design specification and the targeted ML task, the architectural and fabrication parameters, such as diffraction distance, diffraction unit size, and chip dimensions, are selected automatically by conducting fast and efficient design space exploration (DSE) with the emulation model in LightRidge, which circumvents the critical domain-knowledge requirements for designing a functioning DONN model (Section 4). This exploration is enabled by our accelerated and precise computation engine, which improves runtime efficiency significantly. Once satisfactory hyperparameters are obtained from the fast DSE, the emulation model is updated with the hardware information for physical deployment, e.g., the optical response curve of the SLMs w.r.t the control voltages; this is codesign training with hardware-aware optimization. Optical devices for practical deployment are fabricated or configured according to the parameters in the trained model, i.e., the phase modulations of the diffractive layers. The device fabrication information is dumped and generated automatically by LightRidge. With all components ready for deployment, the targeted all-optical DONN system can be set up for efficient and energy-saving optical inference (Section 5). The LightRidge automation is realized efficiently through the user-friendly DSL support in LightRidge.
In this section, we illustrate the physics kernel with mathematical approximation modelling for DONN systems implemented in LightRidge, and introduce the front-end DSL designed for the LightRidge compilation implementations. We also propose a novel complex-valued regularization algorithm to improve training performance.
### Physics Kernel Implementation
The DONN system functions as a neural network based on two physical phenomena, i.e., light diffraction and phase modulation. In our framework, we adopt FFT-based scalar diffraction theory to build our modeling kernels.
First, the continuous-wave (CW) laser source is deployed to encode the input information. In physics, the light wave is described with complex-valued numbers via two properties, the amplitude and the phase of the wave, i.e., \(E(t)=Ae^{j\theta}\), where \(j=\sqrt{-1}\), \(A\) is the amplitude, and \(\theta\) is the phase. The input information is encoded in the intensity \(I\) of the light wave with the phase initialized to 0, i.e., \(\theta=0\), \(A=I\). Then, as shown in Figure 4, the information-carrying light wave is diffracted over the diffraction distance \(z\), which is emulated with the mathematical diffraction approximations described in Section 3.1.1. At each diffractive layer, each diffraction unit embeds a phase modulator, where the trainable parameter, the phase modulation, is applied to the light signal as described in Section 3.1.2. The forward function for a multiple-layer DONN system calculates diffraction and phase modulation iteratively through all stacked diffractive layers. Finally, the diffraction pattern is captured and converted to light intensity at the detector for analog-to-digital conversion and computer processing.
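The forward pass described above can be sketched end to end in a few lines of NumPy. This is a hedged illustration rather than LightRidge's actual kernel: the propagation step uses a standard angular-spectrum transfer function as a stand-in, and the geometry values (532nm laser, 36um pixels, 0.28m spacing) are taken from the prototype described later in the text.

```python
import numpy as np

def donn_forward(intensity, phase_masks, wavelength=532e-9, pixel=36e-6, z=0.28):
    """Encode the input intensity as a zero-phase complex field (A = I,
    theta = 0), then alternate free-space diffraction and phase
    modulation, and read out intensity at the detector."""
    n = intensity.shape[0]
    U = intensity.astype(complex)
    fx = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    arg = np.maximum(1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0.0)
    H = np.exp(1j * k * z * np.sqrt(arg))      # angular-spectrum transfer function
    for phi in phase_masks:                    # one trainable mask per layer
        U = np.fft.ifft2(np.fft.fft2(U) * H)   # diffraction over distance z
        U = U * np.exp(1j * phi)               # phase modulation at the layer
    U = np.fft.ifft2(np.fft.fft2(U) * H)       # final hop to the detector plane
    return np.abs(U) ** 2                      # the detector records intensity

rng = np.random.default_rng(0)
img = rng.random((32, 32))                     # toy input intensity
masks = [rng.uniform(0.0, 2 * np.pi, (32, 32)) for _ in range(3)]
out = donn_forward(img, masks)
```

Because both the transfer function and the phase masks have unit modulus, the total optical power is conserved through the stack, which is a quick sanity check on the propagation.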
#### 3.1.1 Light Diffraction approximation
There are typically three approximation methods for the scalar theory of diffraction, i.e., the _Rayleigh-Sommerfeld approximation_, the _Fresnel approximation_, and the _Fraunhofer approximation_. They apply to specific application scenarios under different assumptions about the system, such as the aperture size and the propagation distance.
The Rayleigh-Sommerfeld approximation is the most commonly used, as it requires the fewest physical approximations of the system and is reported to give accurate results. It is implemented with Equation 1 in our framework. As shown in Figure 4(a), the diffracting aperture is in the \((\xi,\eta)\) plane, and is illuminated in the positive \(z\) direction. We calculate the wavefield across the \((x,y)\) plane, which is parallel to the \((\xi,\eta)\) plane at distance \(z\) from it. The \(z\) axis pierces both planes at their origins. Then, when \(r_{01}\gg\lambda\), the _Rayleigh-Sommerfeld approximation_ is described as
\[U(x,y,z)=\frac{z}{j\lambda}\iint_{\Sigma}U(\xi,\eta,0)\frac{\exp(jkr_{01})}{r_ {01}^{2}}d\xi d\eta \tag{1}\]
where \(j=\sqrt{-1}\), \(U(x,y,z)\) describes the wavefield on the target \((x,y)\) plane after diffraction distance \(z\), \(U(\xi,\eta,0)\) describes the wavefield on the emission \((\xi,\eta)\) plane, \(\lambda\) is the wavelength of the input laser, \(k\) is the wave number with \(k=\frac{2\pi}{\lambda}\), and \(r_{01}\) is the distance between \(P_{0}\) and \(P_{1}\), given by
\[r_{01}=\sqrt{z^{2}+(x-\xi)^{2}+(y-\eta)^{2}} \tag{2}\]
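For intuition, Equation 1 can be evaluated directly as a discrete double sum. The sketch below is a naive O(N^4) reference implementation for tiny grids, assuming uniform sampling on both planes; it is not the accelerated kernel that LightRidge ships.

```python
import numpy as np

def rayleigh_sommerfeld(U0, pixel, z, wavelength):
    """Direct discretization of Equation 1: for every target point (x, y),
    sum the source field weighted by z/(j*lambda) * exp(j*k*r01)/r01**2."""
    n = U0.shape[0]
    coords = (np.arange(n) - n / 2) * pixel    # sample coordinates on both planes
    k = 2 * np.pi / wavelength
    dA = pixel ** 2                            # area element of the integral
    out = np.zeros((n, n), dtype=complex)
    for i, x in enumerate(coords):
        for j, y in enumerate(coords):
            r01 = np.sqrt(z ** 2 + (x - coords[:, None]) ** 2
                          + (y - coords[None, :]) ** 2)
            out[i, j] = (z / (1j * wavelength)) * np.sum(
                U0 * np.exp(1j * k * r01) / r01 ** 2) * dA
    return out

# Point-like source at the aperture center; the diffracted field should be
# symmetric about the optical axis.
U0 = np.zeros((8, 8), dtype=complex)
U0[4, 4] = 1.0
U1 = rayleigh_sommerfeld(U0, pixel=36e-6, z=0.01, wavelength=532e-9)
```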
When the diffraction angle \(\theta\) shown in Figure 4 is small enough, the computational complexity can be further reduced while maintaining the emulation accuracy. In the _Fresnel approximation_, \(r_{01}\) is simplified by a binomial expansion of the square root in Equation 2, and all terms except \(z\) are dropped from the \(r_{01}^{2}\) appearing in the denominator of Equation 1, which gives
\[U(x,y,z)=\frac{e^{jkz}}{j\lambda z}\iint_{\Sigma}U(\xi,\eta,0)\exp\{j \frac{k}{2z}[(x-\xi)^{2}+(y-\eta)^{2}]\}d\xi d\eta \tag{3}\]
In the Fresnel approximation, the critical step is the approximation of the exponent: the spherical secondary wavelets are replaced by wavelets with parabolic wavefronts. Thus, the condition on the distance \(z\) is \(z^{3}\gg\frac{\pi}{4\lambda}[(x-\xi)^{2}+(y-\eta)^{2}]_{max}^{2}\), i.e., the observer (the \((x,y)\) plane) is in the near field of the aperture.
Furthermore, when \(z\gg\frac{k(\xi^{2}+\eta^{2})_{max}}{2}\) is satisfied, which means the quadratic phase factor under the integral sign in Equation 3 is approximately unity over the entire aperture, the _Fraunhofer approximation_ further simplifies the calculation. Thus, in the far field of the aperture, the diffraction can be
Figure 4: Diffraction illustration. – (a) \((\xi,\eta)\) is the plane for diffraction aperture and illuminated by the input light beam in positive \(z\) direction, \(\Sigma\) on the plane \((\xi,\eta)\) denotes the illuminated area. \((x,y)\) plane is the target plane. \(P_{0}\) and \(P_{1}\) are illuminated points on two planes. \(\theta\) is the angle between the outward normal and the vector pointing from \(P_{1}\) to \(P_{0}\). (b) Light propagation and phase modulation through the diffractive layer w.r.t the input light wave.
Figure 3: Agile DONN design flow overview – (1) Design space exploration (DSE) w.r.t the design specification and ML task with LightRidge to automatically search for the best system parameters; (2) Co-design training with the DSE-explored parameters and physical hardware/device parameters; (3) LightRidge backend support for co-design fabrication; (4) Post-fabrication system integration; (5) LightRidge-DSL that simplifies (1)–(4) with user-friendly front-end APIs. Note that (1)(2)(3)(5) are executed automatically with LightRidge and (4) is physically demonstrated in this work.
approximated as
\[U(x,y,z)=\frac{e^{jkz}e^{j\frac{k}{2z}(x^{2}+y^{2})}}{j\lambda z}\iint_{\Sigma}U(\xi,\eta,0)\exp[-j\frac{2\pi}{\lambda z}(x\xi+y\eta)]d\xi d\eta \tag{4}\]
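In code, Equation 4 reduces to a single FFT of the aperture field plus a quadratic phase prefactor, which is what makes the far-field case so cheap. A hedged sketch, assuming uniform sampling and FFT-bin frequencies mapped to target coordinates via \(x=\lambda z f_x\):

```python
import numpy as np

def fraunhofer(U0, pixel, z, wavelength):
    """Far-field pattern per Equation 4: a scaled Fourier transform of
    the aperture field, evaluated at x = lambda*z*fx, y = lambda*z*fy."""
    n = U0.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftshift(np.fft.fftfreq(n, d=pixel))
    X, Y = np.meshgrid(wavelength * z * fx, wavelength * z * fx)
    F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U0))) * pixel ** 2
    prefactor = (np.exp(1j * k * z) * np.exp(1j * k * (X ** 2 + Y ** 2) / (2 * z))
                 / (1j * wavelength * z))
    return prefactor * F

# A uniformly illuminated square aperture: the intensity peaks on axis,
# i.e., at the center of the shifted FFT grid.
U0 = np.ones((64, 64), dtype=complex)
I = np.abs(fraunhofer(U0, pixel=36e-6, z=1.0, wavelength=532e-9)) ** 2
```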
Thus, the diffraction process can be formulated more generally: when the wave \(U_{l-1}(\xi,\eta,0)\) leaving the \((l-1)\)-th layer \((\xi,\eta)\) diffracts over the diffraction distance \(z\) to the \(l\)-th layer \((x,y)\), the resulting wavefield \(U_{l}^{1}(x,y,z)\) in the spatial domain is described as
\[U_{l}^{1}(x,y,z)=\iint U_{l-1}(\xi,\eta,0)h(x-\xi,y-\eta,z)d\xi d\eta \tag{5}\]
where \(h\) is the diffraction function of free space. The integral can be evaluated with a spectral algorithm based on the Fast Fourier Transform (FFT) for fast and differentiable computation. By the convolution theorem, it is calculated as
\[\mathcal{F}_{xy}(U_{l}^{1}(x,y,z))=\mathcal{F}_{xy}(U_{l-1}(\xi,\eta,0)) \mathcal{F}_{xy}(h(x,y,z)) \tag{6}\]
\[F_{l}(\alpha,\beta,z)=F_{l-1}(\alpha,\beta,0)H(\alpha,\beta,z) \tag{7}\]
Then, the product \(F_{l}(\alpha,\beta,z)\) is transformed back to the spatial domain as \(U_{l}^{2}(x,y,z)\) by the _inverse Fast Fourier Transform_ (iFFT), which serves as the input to the phase modulation.
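Equations 5-7 translate directly into three NumPy calls: an FFT2, a pointwise product with the transfer function \(H\), and an iFFT2. The sketch below uses the standard Fresnel transfer function as one common choice for \(h\); it illustrates the spectral algorithm, not the exact kernel in LightRidge.

```python
import numpy as np

def propagate_fft(U_prev, pixel, z, wavelength):
    """Spectral free-space propagation: F_l = F_{l-1} * H, then iFFT2."""
    n = U_prev.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    # Fresnel transfer function (unit modulus, so power is conserved).
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    F_prev = np.fft.fft2(U_prev)   # Equation 6: to the frequency domain
    F_l = F_prev * H               # Equation 7: pointwise product with H
    return np.fft.ifft2(F_l)       # back to the spatial domain

rng = np.random.default_rng(1)
U = rng.random((64, 64)) * np.exp(1j * rng.uniform(0.0, 2 * np.pi, (64, 64)))
U_next = propagate_fft(U, pixel=36e-6, z=0.28, wavelength=532e-9)
```

Since \(|H|=1\) everywhere, the total power \(\sum|U|^{2}\) is unchanged by propagation, a useful numerical check on any spectral implementation.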
#### 3.1.2 Phase modulation
The phase modulation functions like the _weight parameters_ in conventional neural networks and is updated iteratively during the training process. Specifically, the input wave \(U_{l}^{2}(x,y)\) (for simplicity, we drop \(z\) from the phase computation representation, as \(z\) is not involved) can be described by its amplitude and phase; by _Euler's formula_, it can be written as a complex number in the spatial domain, i.e.,
\[U_{l}^{2}(x,y)=A(x,y)e^{j\theta(x,y)}=A\cos\theta+jA\sin\theta \tag{8}\]
where \(j=\sqrt{-1}\), \(A\) is the amplitude, and \(\theta\) is the phase of the input wave; \(A\cos\theta\) is the real part and \(A\sin\theta\) is the imaginary part. After the phase modulation \(\phi(x,y)\), the wave function becomes:
\[\begin{split} U_{l}(x,y)&=Ae^{j(\theta+\phi)}\\ &=(A\cos\theta+jA\sin\theta)\times(\cos\phi+j\sin\phi)\\ &=U_{l}^{2}(x,y)\times e^{j\phi(x,y)}\end{split} \tag{9}\]
which can be realized with complex-valued matrix multiplications. \(U_{l}(x,y)\) is the input wavefunction for the forward function (Equation 5) for the \(l+1\)-th diffractive layer.
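A quick numeric check of Equation 9 using only the standard library: applying a phase modulation \(\phi\) to the wave \(Ae^{j\theta}\) is exactly a multiplication by the unit-modulus complex number \(e^{j\phi}\). The values below are arbitrary illustrative numbers.

```python
import cmath

A, theta, phi = 0.8, 0.3, 1.2
direct = A * cmath.exp(1j * (theta + phi))                    # A e^{j(theta+phi)}
product = (A * cmath.exp(1j * theta)) * cmath.exp(1j * phi)   # U * e^{j phi}
```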
### Physics-aware Complex-valued Regularization
Considering the physics of optics, the DONN system is described and emulated with complex numbers. According to Equation 9, the training of the DONN system is dominated by the phase modulation, and the intensity at the end of diffraction decreases exponentially as the number of diffractive layers increases, which means a regularization between amplitude and phase is required to avoid gradient vanishing and explosion during training. With this insight, we introduce a novel regularization factor \(\gamma\) in the forward function to improve the training efficiency, which can flexibly change the gradient scales between the amplitude and phase modulations. Specifically, \(\gamma\) is applied to the amplitude \(A\) in Equation 9.
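A minimal sketch of how the regularization factor enters the forward function, assuming it simply rescales the amplitude term of Equation 9 (the value gamma=0.5 below is illustrative, not one reported in the text):

```python
import numpy as np

def modulate_regularized(U, phi, gamma):
    """Apply phase modulation with the amplitude rescaled by gamma,
    rebalancing gradient scales between amplitude and phase."""
    A = np.abs(U)
    theta = np.angle(U)
    return (gamma * A) * np.exp(1j * (theta + phi))

U = np.full((4, 4), 1.0 + 0.0j)       # toy unit-amplitude field
phi = np.zeros((4, 4))                # identity phase mask
U_reg = modulate_regularized(U, phi, gamma=0.5)
```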
### LightRidge Framework
LightRidge framework (Table 2) consists of four major components to simplify and accelerate the process of design, exploration and deployment of the DONN system, including **a)** versatile programming modules for precise physics modeling, **b)** domain-specific neural architecture modules of DONNs, **c)** accelerated physics kernels for training and inference runtime improvements, and **d)** hardware deployment supports.
**Low-level physics modeling** - Three components are required to design a DONN model: the laser source, the diffractive layers, and the optical/photon detector. To model the full physical behavior of DONNs, we first introduce the mathematical modeling modules for the implementation of DONN systems - **(1)** Various laser source models with flexible wavelength settings and beam profiles. **(2)** Precise light diffraction approximations, which fall into three categories - _Rayleigh-Sommerfeld_, which handles both far and near fields but with the highest computational complexity (Equation 1); _Fresnel_, which approximates the propagation with parabolic wavefronts, namely near-field propagation (Equation 3); and _Fraunhofer_, implemented with Equation 4, which approximates the propagation with planar wavefronts in the far field [54]. **(3)** The optical/photon detector, which digitizes the analog light intensity to make it processable by the computer.
**Model-level APIs** - The DONN model is constructed with flexible model-level modules in LightRidge, where the architectural parameters can be used to customize the system - **(1)** the laser source module lr.laser offers precise laser customization, including laser specifications such as wavelength, src_profile, etc. **(2)** The physics modeling of diffraction with trainable phase modulation is implemented in lr.layers. Two diffraction modeling modules, with and without hardware-specific information, are provided as lr.layers.diffractlayer and lr.layers.diffractlayer_raw. Specifically, to deal with **challenge 2** in Section 1, lr.layers.diffractlayer employs the co-design algorithm, where the device-level information is carefully integrated into the training process with quantization methods, such as Gumbel-Softmax [25, 36, 31] and quantization-aware training (QAT) [28], applied to the trainable parameters, i.e., the phase modulation in the diffractive layers, for efficient modeling-to-hardware deployment. Both modules can switch among the three diffraction approximation algorithms according to the user definition. Additionally, user-defined system hyperparameters such as the size of the diffraction unit (pixel_size), the diffraction distance
(distance), the precision of the hardware implementing diffractive layers (precision) can also be customized easily with our language. **(3)** The detector is employed to capture the light intensity after propagation through the system, which is the interface component for linking training loss construction and the DONN model. In lr.layers.detector, x_loc and y_loc are lists of spatial coordinates of the detector, and the size of the detector regions is customized by det_size. **(4)** Finally, lr.models is a sequential container that stacks arbitrary numbers of customized diffractive layers in the order of light propagation in the DONN system and a detector plane. As a result, we construct a complete DONN system just like constructing a conventional neural network.
**Training support** - The DONN model is trained with conventional automatic differentiation engines in the complex domain, which is supported by our differentiable physics kernels and training utility functions. Specifically, the original one-dimensional input is processed into a complex-valued input by initializing the phase information in data_to_cplex. Training parameters such as the optimizer, the complex-valued regularization regu_factor, the loss function loss, etc., are also enabled in the complex domain by lr.train.utils. The CPU and GPU accelerations are enabled by to(device). Finally, lr.train.dse enables physics-aware DSE for DONNs design and integration (Section 4).
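To make the training loop concrete, here is a toy, hedged stand-in for the complex-domain training: a one-dimensional "DONN" (a unitary FFT as diffraction, one phase mask) whose phase parameters are pushed by finite-difference gradient ascent to concentrate intensity on a target detector pixel. Real LightRidge training uses automatic differentiation rather than finite differences; everything here is for illustration only.

```python
import numpy as np

n = 16
rng = np.random.default_rng(2)
x = rng.random(n)                         # input amplitudes, phase = 0

def detector_signal(phi):
    U = x * np.exp(1j * phi)              # phase modulation (the "weights")
    U = np.fft.fft(U) / np.sqrt(n)        # unitary FFT as a toy diffraction
    return np.abs(U[0]) ** 2              # intensity at the target pixel

phi = rng.uniform(0.0, 2 * np.pi, n)
before = detector_signal(phi)
eps, lr = 1e-6, 0.1
for _ in range(100):                      # finite-difference gradient ascent
    base = detector_signal(phi)
    grad = np.array([(detector_signal(phi + eps * np.eye(n)[i]) - base) / eps
                     for i in range(n)])
    phi += lr * grad
after = detector_signal(phi)
```

Training drives the phases toward alignment, so the collected intensity rises toward its physical upper bound \((\sum_i x_i)^2/n\).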
**Hardware deployment** - To practically deploy the digitally trained model to hardware, quantization to the specific precision of the hardware (post-training quantization) is provided by lr.layers.weight_fab. More importantly, LightRidge directly supports various popular hardware deployments, e.g., SLMs for lasers in the visible range with lr.model.to_slm(), and 3D-printed phase masks for THz systems with lr.model.to_3d_render().
## 4 Design Space Exploration
Taking advantage of LightRidge, we introduce the first explicit architectural design space exploration (DSE) engine for DONNs, namely LightRidge-DSE. As discussed earlier, the domain knowledge of optics and optical hardware is a critical technical barrier to designing DONNs. Therefore, there is a great need for automatic DSE in LightRidge, which significantly shortens the design and hardware deployment cycle of DONNs and lowers the optical domain-knowledge requirements. We propose an analytical-model-based DSE approach to accelerate the DSE process, where the analytical model is extracted from an ML regression model. The main goal of the DSE engine is to provide guidance for designing DONN systems under new design parameters, with fabrication and chip integration requirements (e.g., fabrication technologies, chip dimension, etc.) as inputs.
**Design space of DONNs** - We consider the DONN design space from two aspects: (1) The major physical architectural design parameters of DONNs include - the diffraction unit size (the dimension of each diffractive unit), and the diffraction distance, i.e., the physical distance between the source to the first diffractive layer, layer to layer and the last layer to the detector (\(z\) in Figure 2). These two are critical architectural parameters under a fixed laser profile (wavelength). (2) The space exploration over DONNs spatial architectural parameters - system size (or system resolution) paired with
\begin{table}
\begin{tabular}{|p{42.7pt}|p{113.8pt}|p{113.8pt}|} \hline Classes & Modular Programs & Description \\ \hline \multirow{4}{*}{Low-level modeling} & Laser source \& profiles & Modeling coherent laser beams with various wavelength/profiles, e.g., Gaussian beam, Bessel beam, etc. \\ \cline{2-4} & Diffraction approximation & High-performance tensor implementations of numerical diffraction approximations, including **Rayleigh-Sommerfeld** (Equation 1), **Fresnel** (Equation 3), and **Fraunhofer** (Equation 4). \\ \cline{2-4} & Optical/photon detector & A photon detector to capture the light intensity and convert the analog intensity information to digital computer-processable information. \\ \hline \multirow{4}{*}{Model-level APIs} & lr.laser & Define the laser source for the system, including laser wavelength and its profile. \\ \cline{2-4} & \multirow{2}{*}{lr.layers APIs} & Includes modules of different types of diffractive modeling, e.g., hardware-specific layer module lr.layers.diffractlayer and general diffractive layer lr.layers.diffractlayer\_raw, that can be configured with various approximation methods, distance, diffraction unit size, etc. \\ \cline{2-4} & lr.layers.detector & Define detector designs for various computer vision ML tasks, e.g., classification detector with coordinate information and the size of the detector regions. \\ \cline{2-4} & lr.models & Sequential container to customize DONN system by stacking diffractive layer and detector modules. \\ \hline \multirow{4}{*}{Training} & lr.train.utils & Training utility modules including data handling (e.g., utils.data\_to\_cplex), complex-valued regularization, loss function, optimizer, etc. \\ \cline{2-4} & lr.train.to(device) & Enable CPU and GPU accelerations for accelerating diffraction emulation and DONNs training. \\ \cline{2-4} & lr.train.dse(specs) & Perform pre-fabrication design space exploration with chip integration specifications as input. 
\\ \cline{2-4} & lr.layers.weight\_fab & Quantize the phase weights in the trained model w.r.t the hardware (e.g., SLMs) specifications. \\ \cline{2-4} & lr.model.to\_system & HW deployments modules, e.g., lr.model.to\_slm(), lr.model.to\_3d\_render(), etc. \\ \hline \end{tabular}
\end{table}
Table 2: Overview of the LightRidge programming modules and partial front-end APIs. Note that we use lr to represent our integrated Python package lightridge.
hardware/device precision, i.e., the discrete phase modulation levels provided by the device, which are sensitive parameters w.r.t the performance of ML tasks. We take the physical architectural DSE as an example in this section.
**DSE features and data collection** - In our case, we show the process of conducting DSE over the physical architectural design parameters, i.e., the diffraction unit size \(d\) and the diffraction distance \(D\), for DONN systems under different laser wavelengths \(\lambda\). With a fixed system size of \(200\times 200\) and SLM precision of 256 optical states covering [0,2\(\pi\)], we collect training data by sweeping the diffraction unit size from \(10\lambda\) to \(110\lambda\) and the diffraction distance \(D\) from \(0.1\)m to \(0.6\)m on a 5-layer DONN system, i.e., 121 data points per wavelength, for laser wavelengths \(\lambda\) of \(632\)nm and \(432\)nm.
**Analytical model based DONN DSE** - We employ a gradient boosting regression [41] model to obtain a polynomial analytical model that transfers optics-aware DONN DSE knowledge to a new nearby \(\lambda\). Specifically, our analytical model is trained with diffraction unit size and diffraction distance exploration data points from systems with \(\lambda=632\)nm and \(\lambda=432\)nm (Figure 5 (a) and (b)) to estimate the DONNs design space in ML performance given a different laser profile with \(\lambda=532\)nm (Figure 5(c)). The regression model takes the wavelength \(\lambda\), \(d\), and \(D\) as inputs and predicts (regresses) the accuracy w.r.t the MNIST dataset, trained with the mean squared error (MSE) loss. The regression model is built with n_estimators=3500, learning_rate=0.2, max_depth=3, and random_state=25. The approximate prediction from the analytical model is employed to guide DONN DSE under a new target \(\lambda\). To evaluate the analytical-model-based DSE strategy, we compare the predicted design space (Figure 5(c)) with the emulation-verified design space (Figure 5(d)) under \(\lambda=532\)nm. The star point in Figure 5(d) shows that our analytical DSE can find the best design points, which is verified by the end-to-end LightRidge development process in Section 5.
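The DSE idea (fit a regression model on swept wavelength, unit-size, and distance points, then interpolate to a new wavelength) can be illustrated with a self-contained stand-in. Everything below is synthetic and hedged: a least-squares quadratic surrogate replaces the paper's gradient-boosting regressor, and fake_accuracy is an invented smooth response rather than measured data.

```python
import numpy as np

def fake_accuracy(lam, d, D):
    # Invented response peaking near d = 60*lam and D = 0.3 m (illustration only).
    return np.exp(-((d / lam - 60) / 30) ** 2 - ((D - 0.3) / 0.2) ** 2)

def features(lam, d, D):
    # Simple quadratic feature map standing in for the boosted trees.
    return np.stack([np.ones_like(lam), lam, d, D, d * d, D * D, d * D], axis=-1)

# "Training" grids at 432 nm and 632 nm, mirroring the 11x11 sweeps
# (unit size 10*lam..110*lam, distance 0.1..0.6 m) described in the text.
lam = np.repeat([432e-9, 632e-9], 121)
d = np.concatenate([np.linspace(10 * w, 110 * w, 11).repeat(11)
                    for w in (432e-9, 632e-9)])
D = np.tile(np.linspace(0.1, 0.6, 11), 22)
coef, *_ = np.linalg.lstsq(features(lam, d, D), fake_accuracy(lam, d, D), rcond=None)

# Predict the design space at the unseen 532 nm wavelength.
lam_new = np.full(121, 532e-9)
d_new = np.linspace(10 * 532e-9, 110 * 532e-9, 11).repeat(11)
D_new = np.tile(np.linspace(0.1, 0.6, 11), 11)
pred = features(lam_new, d_new, D_new) @ coef
```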
The analytical model produced by DSE can generalize the learned optical relationship to DONN systems with a new laser wavelength while following the traditional maximum half-cone diffraction angle theory [5], i.e., the analytical model should only be applied to a nearby wavelength within the range of the training data allowed by the theory. In our DSE example (Figure 5), we use the analytical model trained from \(432\) nm and \(632\) nm for predictions under \(532\) nm. However, such an analytical model trained with wavelengths in the visible range will not work for predictions at wavelengths in other ranges, such as infrared (IR) and microwaves, as this violates the theory.
**Sensitivity analysis** - We perform single-parameter control-variable tests for all three parameters. Our results confirm that the most sensitive parameter is the diffraction unit size, while the wavelength and distance are almost equally sensitive w.r.t the accuracy performance. As shown in Table 3, by shifting the DSE-explored best parameters (the star point in Figure 5(d)) by +10%/+5% or -10%/-5%, we observe sharp accuracy drops for the unit size (down to 30% accuracy on MNIST when shifting by only ±5%), and much milder drops for the other two parameters (down to 74% when shifting by ±5%).
With the guidance of the analytical model, LightRidge-DSE finds the best architecture dimensions and training parameters with several emulation iterations over selected candidate parameters instead of sweeping through the grid-based search space. For example, in our case shown in Figure 5, aided by the analytical model, only a few emulation iterations (e.g., two emulations) instead of grid-searching over 121 data points are required for DSE, resulting in \(60\times\) speedups. In addition, the DSE engine is able to provide general design parameters for similar types of ML tasks. For example, the DSE model trained on the MNIST dataset is also confirmed to be applicable to other
Figure 5: Results of architectural DSE of DONN systems w.r.t diffractive unit size, and diffraction distance under different laser wavelength (\(\lambda\)) with each grid colored according to accuracy on MNIST10. (a) and (b) are training data from emulations w.r.t design space under \(\lambda=432\)nm and \(\lambda=632\)nm for the inference model. (c) Predicted performance w.r.t design space under \(\lambda=532\)nm with the ML DSE model trained with data points from (a) and (b). (d) Grid-search validation under \(\lambda=532\)nm that verify the ML-based DSE quality. The DSE-guided setup at the star point is verified practically in Section 5.
MNIST-like datasets such as FashionMNIST [59], Kuzushiji-MNIST [8], Extension-MNIST-Letters [9].
## 5 Evaluation
In this section, we first demonstrate that LightRidge and LightRidge-DSE offer precise hardware-software correlations w.r.t real-world DONN system realization (Figure 6) via a visible-range DONN prototype. Second, we demonstrate the effectiveness of the LightRidge framework over SOTA experimental baselines [34, 67] in training performance and emulation runtime (Figures 7–9). Finally, we demonstrate that LightRidge and LightRidge-DSE enable comprehensive DONN system on-chip integration (Figure 11) and the capability to design advanced DONN architectures, including a multi-channel DONNs classifier on the Places365 [65] dataset (Figure 12) and an all-optical image segmentation architecture (Figure 13). Note that the experiments in Section 5.1 are physically deployed on the optical hardware shown in Figure 6(a), while the other results are from emulations with LightRidge.
### LightRidge and LightRidge-DSE Validations via Physical DONNs Prototyping
**Model construction via LightRidge-DSE and training -** This section demonstrates the hardware-software co-design precision and the effectiveness of LightRidge-DSE, where the parameters of the DONN model used for the physical validation experiments are automatically produced by LightRidge-DSE with a system size of \(200\times 200\). Specifically, the emulation model for DONN training is constructed with 3 sequentially stacked diffractive layers in lr.model, where each layer is defined with lr.layers.diffractlayer integrating the hardware specifications: (1) the diffraction pixel size is \(36\text{um}\times 36\text{um}\); (2) the laser wavelength is \(532\text{nm}\). Consulting the DSE results shown in Figure 5, the distance is explored to be \(\sim 0.3\text{m}\), which is further adjusted to 11 inches (\(0.28\text{m}\)) on our optical table. There are 10 pre-defined detector regions for the labels, placed evenly on the detector plane. All experiments are trained for 100 epochs with a learning rate of 0.5, using Adam [27] with batch size 500 and the MSE loss.
**Hardware prototype and validation** - The laser source CPS532 from Thorlabs, Inc. is employed to realize the physical DONN system, where SLMs (LC 2012 HOLOEYE) implement the diffractive layers. The SLMs are experimentally measured and cover a phase modulation range close to \([0,2\pi]\). The final diffraction pattern is captured on a CMOS camera (CS165MU1, Thorlabs, Inc.). To make the input easier for hardware deployment, we train and validate the model with binarized MNIST images as shown in Figure 6(a), where the trained phase modulation parameters are loaded on the SLMs.
The resulting detector patterns for the inputs are shown in Figure 6(b). The SLM used to encode the input binary images is illuminated by the laser source, and the input information is encoded in the intensity of the input laser. The intermediate propagation results are not accessible in the DONN system, as the information is carried by the light beam. At the end of the system, we capture the intensity information on the detector for the prediction. As shown in Figure 6, the DONN emulation results in LightRidge precisely match the experimental measurements, which demonstrates: (1) precise correlations between the implemented high-level modeling and the low-level physical experimental system, which improves the design efficiency significantly without the manual HW calibration requirements shown in Figure 1; and (2) the effectiveness of LightRidge-DSE in exploring architecture parameters, which has been further utilized for on-chip integration (Section 5.5).
### Emulation-level Evaluation
We further verify the design parameters from the DSE model, as discussed in Section 5.1, at the emulation level. The accuracy results for image classification on the MNIST [29] and FashionMNIST (FMNIST) [59] datasets are shown in Figure 7, where the baseline results are conducted with the training methods in [34, 67] without the proposed physics-aware complex-valued regularization.
Figure 6: Evaluation results of a 3-layer DONN system in the visible range explored, trained, and deployed by LightRidge – (a) The experimental system trained and deployed by LightRidge; the corresponding detector patterns from experiments and simulations produced by LightRidge are shown in (b). (b) Corresponding detector patterns of experimental measurements and simulation results (the simulation results generated with lr.model.prop_view()) of the 3-layer DONN system.
The inputs are encoded with the amplitude of the laser beam. To fit the input to the DONN system, we first extend the images from their original size of \(28\times 28\) in the MNIST10 and FMNIST datasets to \(200\times 200\) in the SLM resolution, and transfer the original one-dimensional image to a complex-valued image in the emulation. **With the regularization factor \(\gamma\) implemented, our training algorithm has a significant advantage in training less complex DONN models.** For example, when the DONN model is implemented with only one diffractive layer (D=1), the accuracy is improved by 31% (34%) for the MNIST (FMNIST) dataset, compared with the baseline. Additionally, our algorithm can achieve a similar accuracy performance (0.98 for MNIST, 0.89 for FMNIST) for DONN systems regardless of their complexity, i.e., the number of diffractive layers implemented in the system, by adjusting \(\gamma\) for the model training. However, according to the discussion in [34], the performance of DONNs with fewer layers is fundamentally limited by the optical physics, which contrasts with our accuracy results.
To understand the increase in accuracy, we analyze the robustness of the DONNs trained with the complex-valued regularization. Specifically, we explore the confidence of the predictions produced by the system by adding random uniform noise at the detector plane with upper bounds of 1%, 3%, and 5% of the intensity. As a result, for both datasets, **as the depth of the DONNs increases, the prediction confidence increases, while the prediction accuracy with no noise applied remains relatively the same.** For example, there is no accuracy degradation for the five-layer DONNs on MNIST, and less than 1% degradation on FMNIST with up to 5% applied noise. However, for the single-layer DONNs, the accuracy drops by 63% for MNIST and 54% for FMNIST with 1% noise applied, and drops to 0 when the applied noise increases to 3% and 5%.
### LightRidge Runtime Evaluation
The runtime efficiency of emulating DONNs is crucial for simulation, training, and exploration. Thus, optimizing runtime performance is another key contribution of the LightRidge framework. We first analyze the DONN workloads, where we identify that the majority (\(\geq\)90%) of the runtime complexity comes from the numerical modeling of light diffraction. Thus, the major optimization efforts should target the diffraction kernels. Second, to effectively utilize modern computing platforms, we aim to maximize the parallelism of the fundamental physics modeling, which is the main reason for implementing scalar diffraction modeling instead of FDTD in the computation kernel, as mentioned earlier in Sections 1 and 2. The diffraction approximation functions with scalar diffraction modeling (Equations 1 - 4) can be broken down into three major tensor-level operators: complex-domain 2-D FFT (FFT2), inverse 2-D FFT (iFFT2), and complex matrix multiplications (Complex MM). Based on this analysis and the kernel breakdown, we take advantage of modern CPU and GPU platforms by incorporating efficient complex-tensor datatypes and operators. For CPU, the diffraction kernel is optimized via Intel Math Kernel Library (MKL-DNN) complex kernels with AVX-512 support; for GPU, the cuFFT, cuFFTW, and cuTENSOR libraries with efficient complex-domain FFTs and MM are deployed.
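The operator decomposition can be reproduced with a small micro-benchmark: time the FFT2, the pointwise complex product, and the iFFT2 separately on one propagation step. This is a hedged NumPy sketch; the absolute numbers depend entirely on the machine and libraries, and only the breakdown itself mirrors the analysis above.

```python
import time
import numpy as np

def timed(fn, reps=10):
    """Average wall-clock time of fn over reps runs, plus its last result."""
    t0 = time.perf_counter()
    for _ in range(reps):
        out = fn()
    return (time.perf_counter() - t0) / reps, out

n = 256
rng = np.random.default_rng(4)
U = rng.random((n, n)) + 1j * rng.random((n, n))     # complex input field
fx = np.fft.fftfreq(n)
H = np.exp(-1j * np.pi * (fx[:, None] ** 2 + fx[None, :] ** 2))  # toy transfer fn

t_fft, F = timed(lambda: np.fft.fft2(U))       # FFT2 kernel
t_mm, FH = timed(lambda: F * H)                # pointwise complex product with H
t_ifft, U2 = timed(lambda: np.fft.ifft2(FH))   # iFFT2 kernel
```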
To demonstrate the runtime improvements, we compare the runtime of our proposed framework with up-to-date commercial version LightPipes(2021), running various emulation loads, i.e., {1,3,5,7,10}-layer DONNs while system resolution sweeps from 100\(\times\)100 to 500\(\times\)500. All LightPipe-CPU and LightRidge-CPU results are conducted on Intel Xeon Gold 6230 20x CPU. To make fair GPU comparisons, we re-implement the kernels in LightPipes with cupy [40], and runtime results are collected on Nvidia 3090 Ti GPU platform.
Figure 8 shows LightRidge consistently outperforms LightPipes on both CPU and GPU backends.
Figure 8: LightRidge runtime speedups over LightPipes with various DONNs system sizes – (a) CPU speedups. (b) GPU speedups.
Figure 7: Confidence evaluation of DONNs trained with complex-domain regularization under various system complexity. Baseline results are conducted on methods in [34, 67] without noise assumption.
Figure 9: Runtime speedups breakdown with 5-layer 500\(\times\)500 DONNs. FFT2, iFFT2, and Complex MM are the main operators for DONN numerical modelling.
Figure 8(a) shows at most a 6.4\(\times\) speedup of LightRidge-CPU over LightPipes-CPU at depth = 5 and system size = \(500^{2}\). Figure 8(b) shows at most a 12\(\times\) speedup of LightRidge-GPU over LightPipes-GPU at depth = 1 and system size = \(500^{2}\). To understand the runtime speedups offered by LightRidge, we provide a normalized speedup breakdown analysis w.r.t LightPipes CPU/GPU, shown in Figure 9 with the 5-layer DONNs workload. We observe that the 6.4\(\times\) CPU runtime speedups are contributed by the FFT2 (11\(\times\)), iFFT2 (10\(\times\)), and Complex MM (4\(\times\)) kernels; similarly for GPU, the 8.6\(\times\) overall speedups are primarily contributed by the three kernels with 7\(\times\), 7\(\times\), and 12\(\times\) speedups, respectively.
Furthermore, we evaluate the capability of LightRidge for training large DONN systems (Figure 10). The runtime is acquired on a single Nvidia 3090 Ti GPU. We can see that LightRidge handles 30-layer DONNs training in \(\sim\) 280 seconds per epoch, with an input image resolution of 500\({}^{2}\). Besides, we observe that the runtime increases almost linearly w.r.t the DONNs depth, while there is a runtime jump when the system size increases beyond 300\({}^{2}\), mainly due to the limited resources on a single GPU. This poses strong motivation for further CUDA optimization and multiple-GPU training support in future work.
### Performance Comparison between DONNs and conventional NNs
Compared with conventional NN models on digital platforms, current optical-device-deployed DONN systems at this early stage suffer from accuracy degradation while featuring significantly improved energy efficiency. As shown in Table 4, we evaluate two conventional NNs: an MLP, which consists of two linear layers with a hidden size of 128, where the input image is flattened into a one-dimensional tensor, i.e., MLP (40000 \(\rightarrow\) 128 \(\rightarrow\) 10); and a CNN, which consists of two Conv2D layers, where the kernel size of both layers is set to (5, 5), with 32 filters for the first layer and 64 filters for the second layer and stride and padding both set to 2, two MaxPooling2D layers with kernel size (3, 3) and stride 2, followed by two linear layers. Additionally, we deploy the conventional NNs on different digital platforms including an Nvidia 2080 Ti GPU, an Nvidia 3090 Ti GPU, an Intel Xeon 6230 20-core CPU, and a Google EdgeTPU [62].
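The two baseline architectures described above can be sketched in PyTorch as follows. This is a hedged reconstruction from the text, not the authors' code; in particular, the CNN's final hidden size of 128 is our assumption, as it is not stated.

```python
import torch
import torch.nn as nn

# MLP baseline as described: 40000 -> 128 -> 10 on flattened 200x200 input.
mlp = nn.Sequential(
    nn.Flatten(),                 # 1x200x200 -> 40000
    nn.Linear(40000, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# CNN baseline as described: two 5x5 convs (32, then 64 filters, stride=2,
# padding=2), two 3x3 max-pools with stride 2, then two linear layers.
cnn = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),                 # feature map is 64x12x12 = 9216 values
    nn.Linear(64 * 12 * 12, 128), nn.ReLU(),   # hidden size 128 is assumed
    nn.Linear(128, 10),
)

x = torch.randn(4, 1, 200, 200)   # batch of 4 gray-scale 200x200 images
print(mlp(x).shape, cnn(x).shape) # both produce (4, 10) logits
```

With a 200\(\times\)200 input, the two conv/pool stages reduce the feature map to 64\(\times\)12\(\times\)12 before the linear layers.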
As a result, the conventional NNs produce accuracies of 0.99/0.99 on MNIST and 0.91/0.91 on FashionMNIST with the MLP and the CNN model, respectively, while the DONN system reaches accuracies of 0.98/0.89 on MNIST/FashionMNIST, i.e., a 1–2% accuracy degradation. For practical realization with DONN systems, we take our visible-range prototype as an example: the power of the CW 532nm laser source is \(\sim\) 5mW. The diffractive layers are passive optical devices and require no extra energy for computation. The power consumption at the CMOS detector is \(\sim\) 1 W (max) @ 1000 fps with a system size of 200\(\times\)200. Thus, the power efficiency of the DONN system can be estimated as 995 fps/Watt. The corresponding energy-efficiency results for conventional NNs on various digital platforms are shown in Table 4: the DONN system is roughly two orders of magnitude more efficient than the desktop CPU and GPUs, and one order of magnitude more efficient than digital edge devices at batch size 1. The energy-efficiency advantage of DONN systems can be even more significant when dealing with more complex ML tasks (e.g., the applications in Section 5.6), as the computation part (built with passive optical devices) consumes zero power. Note that DONN energy efficiency can be further optimized with integrated fabrication and a high-end detector.
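The quoted 995 fps/Watt figure follows directly from the stated power budget; a quick back-of-the-envelope check:

```python
# Sanity check of the DONN power-efficiency figure quoted in the text:
# ~5 mW laser + ~1 W (max) CMOS detector at 1000 fps.
laser_power_w = 0.005      # CW 532 nm laser, ~5 mW
detector_power_w = 1.0     # CMOS detector, ~1 W max @ 1000 fps
fps = 1000.0

efficiency = fps / (laser_power_w + detector_power_w)
print(round(efficiency))   # ~995 fps/Watt, matching the value in the text
```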
Therefore, the DONN system shows great potential for completing ML tasks much more energy-efficiently than conventional NNs. However, the degradation in accuracy and the challenges in deploying practical inference systems call for more future work across broad disciplines, such as complex-domain training algorithms, domain-specific co-design, and optics, which also highlights the potential of our framework.
### On-chip DONNs Integration via LightRidge
The bulky 3D free-space DONN systems can be integrated as 3D monolithic on-chip DONNs via 3D additive fabrication [13, 14, 20, 35], e.g., galvo-dithered two-photon nanolithography [20], an electron-beam lithography overlay process [35], etc. Such monolithic on-chip DONNs can be integrated into a hybrid computing system, with the DONN performing as an optical co-processor hosted by a central processor via system interconnects (e.g., PCIe 4.0). The host processor controls the laser encoding for loading images and the result collection over the co-processor interconnects, as illustrated in Figure 11. Each diffractive layer is a thin film, where the trained phase information is encoded in the thickness of the material used for layer fabrication. Between diffractive layers, optical clear adhesive is employed to provide free-space light propa
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Platform** & \multicolumn{2}{c|}{**fps/Watt**} & **Accuracy (MNIST / FashionMNIST)** \\ \hline
**GPU 2080 Ti** & 3.3 (301\(\times\)) & 3.8 (261\(\times\)) & 0.99 / 0.91 \\ \hline
**GPU 3090 Ti** & 2.4 (414\(\times\)) & 1.7 (585\(\times\)) & 0.99 / 0.91 \\ \hline
**CPU Xeon** & 1.5 (663\(\times\)) & 2.0 (497\(\times\)) & 0.99 / 0.91 \\ \hline
**XPU (EdgeTPU)** & 23 (43\(\times\)) & 26 (38\(\times\)) & 0.99 / 0.91 \\ \hline
**Our DONNs prototype** & \multicolumn{2}{c|}{995} & 0.98 / 0.89 \\ \hline
\end{tabular}
\end{table}
Table 4: Energy efficiency (fps/Watt) and accuracy comparisons between DONN systems and conventional NNs; factors in parentheses denote the DONN prototype's efficiency advantage over each platform.
Figure 10: Large-scale DONNs training runtime.
gation, whose thickness is the diffraction distance. Diffractive layers and optical clear adhesive are stacked sequentially to construct an on-chip DONN system. The final prediction is captured on the detector, with the Analog-to-Digital Converter (ADC), I/O interface, and memory buffers integrated in the peripheral circuits. An example of such real-world on-chip DONN integration is realized in [35]. However, due to the three challenges we discussed earlier, the design cycle can take months to a year of effort. The LightRidge framework can significantly simplify the end-to-end on-chip design process, as demonstrated by the following case study.
**Case study** - We target a 5-layer DONN system integration at wavelength 532nm for a CMOS detector chip (CS165MU1 from Thorlabs, Inc.), shown in Figure 11, where the CMOS chip defines a pixel size of 3.45um. The key to on-chip integration is searching for valid fabrication parameters with high prediction performance w.r.t. the ML task. Therefore, following the four steps of the LightRidge design flow (Figure 3), we first deploy LightRidge-DSE to explore the 3D fabrication dimensions, including distance, resolution, and diffraction unit size. According to the emulation results in Section 4 and Figure 5(c), when we fix the wavelength at 532nm and the diffraction unit size (the pixel size of the CMOS chip) at 3.45um, considering image classification as the ML task (e.g., MNIST), LightRidge-DSE returns a diffraction distance of 532um and a resolution of 200\(\times\)200 to fit the CMOS chip. Thus, the DONN fabrication dimension is finalized as \(690\text{um}\times 690\text{um}\times 2660\text{um}\), where 2660um is the height and \(690\times 690\text{um}^{2}\) is the flat chip dimension, which aligns with the chip fabrication procedure in [35]. Next, after training is completed, each layer is fabricated w.r.t. the phase parameters optimized in the codesign stage via nano-printing on the targeted CMOS detector chip. The integrated DONN can then be used as a co-processor via the ADCs and I/O integrated with the CMOS detector chip, where the pre-fabrication design process takes less than a day via LightRidge.
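The quoted fabrication dimensions follow arithmetically from the DSE outputs; a quick check:

```python
# Sanity check of the case-study fabrication dimensions quoted in the text.
pixel_um = 3.45                 # CMOS pixel size of the CS165MU1 chip
resolution = 200                # 200 x 200 diffraction units per layer
layers = 5
diffraction_distance_um = 532   # layer spacing returned by LightRidge-DSE

side_um = pixel_um * resolution          # flat chip dimension per side
height_um = layers * diffraction_distance_um
print(side_um, height_um)                # 690.0 um per side, 2660 um height
```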
### Advanced DONN Architectures
With the design capabilities of LightRidge and LightRidge-DSE verified by physical optical systems, we further explore the potentials of DONN systems with more advanced architectures dealing with complex computer vision tasks. Specifically, we propose and evaluate **(1)** a multi-channel DONN architecture implemented with diffractive layers to deal with RGB image classifications, and **(2)** the first-ever optical image segmentation demonstration using DONNs with _optical skip connection_, to deal with image segmentation and potentially other image-to-image synthesis tasks.
#### 5.6.1 All-optical RGB image classification
To deal with more complex datasets in image classification, e.g., Places365 [65], a high-resolution RGB image dataset, we propose a multi-channel RGB-DONN architecture. As shown in Figure 12, three optical channels are employed in the DONN system to process the 'R', 'G', and 'B' channels of the original image separately. The input laser beam is split with a beam splitter into three beams and reflected with mirrors into the three channels to encode the corresponding input information. Note that the image information is encoded with light intensity at the encoding layer of each channel, so each channel takes a gray-scaled image as input and propagates it through five diffractive layers. Each channel is constructed with the same system parameters as in Section 5.1, except for using 5 diffractive layers. The output laser beams from all channels are projected onto a single detector, where the light intensity is merged for the final prediction. Similar to the detector design for classification shown in Figure 2, the single detector collects the output intensity within each pre-defined detector region and produces the predicted class by argmax. All three channels are trained w.r.t. the same shared loss function. The training setups are the same as discussed in Section 5.1.
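A minimal sketch of the intensity-merging readout described above. This is our illustrative assumption, not the authors' implementation; the detector-region layout and the random channel outputs are toy placeholders.

```python
import numpy as np

# Each channel's propagated output intensity is summed on a shared detector,
# then class scores are read from hypothetical pre-defined detector regions
# and the predicted class is taken by argmax.
rng = np.random.default_rng(0)
H = W = 200
channel_outputs = [rng.random((H, W)) for _ in range(3)]  # |E|^2 per channel
merged = sum(channel_outputs)      # intensities add on the single detector

# Carve the detector into 10 hypothetical regions (2 rows x 5 columns).
regions = [(r, c) for r in range(2) for c in range(5)]
scores = [merged[r * 100:(r + 1) * 100, c * 40:(c + 1) * 40].sum()
          for r, c in regions]
prediction = int(np.argmax(scores))  # predicted class index in [0, 10)
```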
The emulation accuracy results for image classification with Place365 are shown in Table 5, including top-1, top-3, and
Figure 11: Monolithic on-chip DONNs design and overall hybrid architecture system integration.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Places365[65]** & _Top-1_ & _Top-3_ & _Top-5_ \\ \hline Our (Fig. 12) & **0.52** & **0.73** & **0.84** \\ \hline Baseline [67] & 0.23 & 0.48 & 0.67 \\ \hline \end{tabular}
\end{table}
Table 5: Classification accuracy on Places365 (standard, 256-by-256) with _type of environment_ as classes.
Figure 12: Multi-channel RGB-DONNs architecture for image classification using the Places365 dataset [65]. The "R/G/B" channels are all encoded as three gray-scaled images using a 532 nm laser source.
top-5 accuracy. The baseline is the emulation accuracy of the DONN model trained with the algorithm in [67]. The model trained with our compiler outperforms the baseline on all accuracy metrics (29% improvement in top-1, 25% in top-3, and 17% in top-5 accuracy), with the largest margin at top-1.
#### 5.6.2 All-Optical image segmentation
Image segmentation is an interesting and challenging task in modern computer vision, with great impact on autonomous physical systems such as autonomous driving, robotics, etc. Unlike image classification, image segmentation is a process of generating pixel-wise representations of an image, i.e., an image-to-image objective. In DONN classification systems, we observe that the system (the output detector in particular) is not fully utilized, as only a given number of small detector regions are used for classification. Since the DONN system propagates the input image w.r.t. the trained phase modulations across the full spatial dimension of the system, it is expected to be able to deal with image-to-image tasks. Thus, we design and demonstrate the first-ever all-optical image segmentation.
Figure 13(a) shows the proposed 5-layer DONN system, in which we introduce two innovations to the DONN architecture: 1) an optical skip connection, inspired by the residual block design in the conventional ResNet [24] architecture. It aims to smooth gradient descent for better training performance and is also involved in inference for more detailed segmentation. Since the light signal is aggressively diffracted during propagation, the optical skip connection helps restore some features from the less-diffracted input, making the model prediction aware of the original information, which we confirm introduces better image segmentation performance in our results; and 2) layer normalization [1] before the detector plane, which is employed only during training to improve the DONN's training by smoothing the gradients. The dataset we demonstrate here is selected from the CityScapes dataset [10], where the images are converted to gray scale and resized to \(350\times 350\). We use binary labels in this case study to generate segmentation masks for _buildings_ versus everything else. The baseline is the DONN model constructed without the optical skip connection and trained without layer normalization, as proposed in [34, 67]. The system parameters and training setups are the same as discussed in Section 5.1, except for the system size changing to \(350\times 350\) and the model structure changing to Figure 13(a). The results shown in Figure 13(b) demonstrate that the advanced model trained with LightRidge outperforms the baseline in both edge detection and segmentation, with significant clarity improvements on small-object segmentation. These advanced DONN architecture designs and validations demonstrate the generalizability and power of LightRidge in exploring new architectural designs and applications.
## 6 Conclusion and Future Work
This work presents LightRidge, an agile end-to-end design framework that enables seamless design-to-deployment of DONNs. LightRidge accelerates and simplifies design, exploration, and on-chip integration by offering highly versatile, runtime-efficient programming modules and a DSE engine (LightRidge-DSE) to construct and train DONN systems across a wide range of optical settings. The high-performance physics emulation kernels are optimized for runtime efficiency and verified, together with the hardware-software codesign algorithm, on our visible-range prototype. Additionally, two advanced DONN architectural designs constructed with LightRidge show the capabilities and generalizability of LightRidge for various ML tasks and system design explorations. We believe LightRidge will enable collaborative and open-source research in optical accelerators, not only for ML tasks but also for other optics-related research areas such as optical structure emulation, chip fabrication (lithography), meta-material exploration, etc.
In the future, we will further optimize the runtime efficiency of LightRidge, including high-performance CUDA kernel optimization and multi-GPU computation. We also expect more functionality to be integrated into the framework and our hardware prototype. For example, non-linearity in DONN systems, which can be realized with nonlinear optical materials (crystals, polymers, graphene, etc.), is an important ingredient for more complex DONN systems.
Figure 13: Image segmentation demonstrations using CityScapes [10] datasets with (a) a novel advanced DONNs architecture with optical skip connection and layer normalization for improving training efficiency, and (b) evaluations and comparisons to SOTA baselines [34, 67]. |
2310.02294 | Beyond-Accuracy: A Review on Diversity, Serendipity and Fairness in
Recommender Systems Based on Graph Neural Networks | By providing personalized suggestions to users, recommender systems have
become essential to numerous online platforms. Collaborative filtering,
particularly graph-based approaches using Graph Neural Networks (GNNs), have
demonstrated great results in terms of recommendation accuracy. However,
accuracy may not always be the most important criterion for evaluating
recommender systems' performance, since beyond-accuracy aspects such as
recommendation diversity, serendipity, and fairness can strongly influence user
engagement and satisfaction. This review paper focuses on addressing these
dimensions in GNN-based recommender systems, going beyond the conventional
accuracy-centric perspective. We begin by reviewing recent developments in
approaches that improve not only the accuracy-diversity trade-off but also
promote serendipity and fairness in GNN-based recommender systems. We discuss
different stages of model development including data preprocessing, graph
construction, embedding initialization, propagation layers, embedding fusion,
score computation, and training methodologies. Furthermore, we present a look
into the practical difficulties encountered in assuring diversity, serendipity,
and fairness, while retaining high accuracy. Finally, we discuss potential
future research directions for developing more robust GNN-based recommender
systems that go beyond the unidimensional perspective of focusing solely on
accuracy. This review aims to provide researchers and practitioners with an
in-depth understanding of the multifaceted issues that arise when designing
GNN-based recommender systems, setting our work apart by offering a
comprehensive exploration of beyond-accuracy dimensions. | Tomislav Duricic, Dominik Kowald, Emanuel Lacic, Elisabeth Lex | 2023-10-03T09:25:01Z | http://arxiv.org/abs/2310.02294v1 | # Beyond-Accuracy: A Review on Diversity, Serendipity and Fairness in Recommender Systems Based on Graph Neural Networks
###### Abstract
By providing personalized suggestions to users, recommender systems have become essential to numerous online platforms. Collaborative filtering, particularly graph-based approaches using Graph Neural Networks (GNNs), have demonstrated great results in terms of recommendation accuracy. However, accuracy may not always be the most important criterion for evaluating recommender systems' performance, since beyond-accuracy aspects such as recommendation diversity, serendipity, and fairness can strongly influence user engagement and satisfaction. This review paper focuses on addressing these dimensions in GNN-based recommender systems, going beyond the conventional accuracy-centric perspective. We begin by reviewing recent developments in approaches that improve not only the accuracy-diversity trade-off but also promote serendipity and fairness in GNN-based recommender systems. We discuss different stages of model development including data preprocessing, graph construction, embedding initialization, propagation layers, embedding fusion, score computation, and training methodologies. Furthermore, we present a look into the practical difficulties encountered in assuring diversity, serendipity, and fairness, while retaining high accuracy. Finally, we discuss potential future research directions for developing more robust GNN-based recommender systems that go beyond the unidimensional perspective of focusing solely on accuracy. This review aims to provide researchers and practitioners with an in-depth understanding of the multifaceted issues that arise when designing GNN-based recommender systems, setting our work apart by offering a comprehensive exploration of beyond-accuracy dimensions.
Review, Survey, Recommender Systems, Graph Neural Networks, Beyond Accuracy, Diversity, Serendipity, Fairness
## 1 Introduction
With their ability to provide personalized suggestions, recommender systems have become an integral part of numerous online platforms by helping users find relevant products and content Aggarwal et al. (2016). There are various methods employed to implement recommender systems, among which collaborative filtering (CF) has proven to be particularly effective due to its ability to leverage user-item interaction data to generate personalized recommendations Koren et al. (2021). Recent advances in Graph Neural
Networks (GNNs) have also had a significant impact on the field of recommender systems, and especially on collaborative filtering. GNN-based CF approaches have demonstrated exceptional results in terms of recommendation accuracy, which has traditionally been the main criterion for evaluating the performance of recommender systems He et al. (2020); Pu et al. (2012).
However, most studies have focused only on accuracy and have often neglected other equally or sometimes even more important aspects of recommender systems, such as diversity, serendipity, and fairness. The importance of these _beyond-accuracy_ dimensions is increasingly being recognized, as studies have shown that these aspects can have a significant impact on user satisfaction Abdollahpouri et al. (2019). For example, diverse and serendipitous recommendations can prevent the over-specialization of content and enhance user discovery. Fairness, on the other hand, ensures that the system does not discriminate against certain users or item providers, thereby promoting equitable user experiences Gao et al. (2023).
This review paper further explores these dimensions in the context of GNN-based recommender systems, going beyond the traditional accuracy-centric viewpoint. We discuss recent advances in approaches that not only improve the accuracy-diversity trade-off, but also promote serendipity and fairness. Furthermore, we highlight the practical issues encountered in assuring these dimensions when constructing GNN-based CF approaches, while preserving high recommendation accuracy. This review is intended to provide researchers and practitioners with a comprehensive understanding of the multifaceted optimization issues that arise when designing GNN-based recommender systems, thereby contributing to the development of more robust and user-centric recommender systems.
## 2 Background
Graph neural networks (GNNs) have recently emerged as an effective way to learn from graph-structured data by capturing complex patterns and relationships Hamilton (2020). Through the propagation and transformation of feature information among interconnected nodes in a graph, GNNs can effectively capture the local and global structure of the given graphs. Consequently, they emerge as an ideal method especially suitable for dealing with tasks involving interconnected, relational data such as social network analysis, molecular chemistry, and recommender systems among others.
In recommender systems, integrating Graph Neural Networks (GNNs) with traditional collaborative filtering techniques has been shown to be beneficial. Representing users and items as nodes in a graph, with interactions acting as edges, allows GNNs to provide more accurate personalized recommendations by discovering and utilizing intricate connections that would otherwise remain undetected Wang et al. (2019). In particular, higher-order connectivity together with transitive relationships plays an essential role when trying to extract user preferences in certain scenarios.
GNN-based recommender systems represent an evolving field with continuous advancements and innovations. Recent research has focused on multiple aspects of GNNs in recommender systems, ranging from optimizing propagation layers to effectively managing large-scale graphs and integration of auxiliary information Zhou et al. (2022). Aside from these aspects, an expanding interest lies in exploring beyond-accuracy objectives for recommender systems. Such objectives include diversity, explainability/interpretability, fairness, serendipity/novelty, privacy/security, and robustness which offer a more comprehensive evaluation of the system's performance Wu et al. (2022); Gao et al. (2023). However, our work focuses primarily on three key aspects: diversity, serendipity, and fairness, since these aspects have a significant impact on user satisfaction, while also considering ethical concerns in the field of recommender systems. Ensuring diversity amongst recommendations minimizes over-specialization effects,
benefiting users in product/content discovery and exploration Kunaver and Pozrl (2017). Considering serendipity also helps to overcome the over-specialization problem by allowing the system to recommend novel, relevant, and unexpected items, thus improving user satisfaction Kaminskas and Bridge (2016). The aspect of fairness ensures that the system does not discriminate against certain users or item providers, thereby promoting equitable user experiences Deldjoo et al. (2023).
Diversity, serendipity, and fairness in recommender systems are interconnected and often influence each other. For instance, increasing diversity can lead to more serendipitous recommendations, since users are exposed to a wider range of unexpected and less-known items Kotkov et al. (2020). Furthermore, focusing on diversity and serendipity can also promote fairness, since it ensures a more equitable distribution of recommendations across items and prevents the system from consistently suggesting only popular items Mansoury et al. (2020). However, it's important to note that these aspects need to be balanced with the system's accuracy and relevance to maintain user satisfaction. Considering beyond-accuracy dimensions contributes to supporting the development of GNN-based recommender systems that are not only robust and accurate but also user-centric and ethically considerate.
While GNNs have seen rapid advancements, their application in recommender systems has also been the subject of several surveys. Wu et al. (2022) and Gao et al. (2023) provide a broad overview of GNN methods in recommender systems, touching upon aspects of diversity and fairness. Dai et al. (2022) delves into fairness in graph neural networks in general, briefly discussing fairness in GNN-based recommender systems. Meanwhile, Fu et al. (2023) explores serendipity in deep learning recommender systems, with limited focus on GNN-based recommenders. Building on these insights, our review distinctively emphasizes the importance of diversity, serendipity, and fairness in GNN-based recommender systems, offering a deeper dive into these dimensions.
To conduct our review, we searched for literature on Google Scholar using keywords such as "diversity", "serendipity", "novelty", "fairness", "beyond-accuracy", "graph neural networks" or "recommender system". We manually checked the resulting papers for their relevance and retrieved 21 publications overall from relevant journals and conferences in the field (see Table 1). While re-ranking and post-processing methods are often used when optimizing beyond-accuracy metrics in recommender systems Gao et al. (2023), this paper specifically concentrates on advancements within GNN-based models, thus leaving these methods outside the discussion. Finally, it is important to highlight that diversity, serendipity, and fairness are extensively researched in recommender systems beyond GNNs. Broader literature across various architectures has provided insights into these challenges and their overarching solutions. While our paper primarily focuses on GNNs, we direct readers to consult these works for a comprehensive perspective Kaminskas and Bridge (2016); Wang et al. (2023).
## 3 Model Development
The construction of a GNN-based recommender system is a complex, multi-stage process that requires careful planning and execution at each step. These stages include data preprocessing (DP), graph construction (GC), embedding initialization (EI), propagation layers (PL), embedding fusion (EF), score computation (SC), and training methodologies (TM). In this section, we provide an overview of this multi-stage process as it is crucial for understanding the specific stages at which current research has concentrated efforts to address the beyond-accuracy aspects of diversity, serendipity, and fairness in GNN-based recommender systems.
### Data preprocessing, graph construction, embedding initialization
The initial stage of developing a GNN-based collaborative filtering model is data preprocessing, where user-item interaction data and auxiliary information such as user/item features or social connections are collected and processed Lacic et al. (2015); Duricic et al. (2018); Fan et al. (2019); Wang et al. (2019); Duricic et al. (2020). Techniques like data imputation ensure that missing data is filled, providing a more complete dataset, while outlier detection helps in maintaining the data's integrity. Feature normalization ensures consistent data scales, enhancing model performance. Addressing the cold-start problem at this stage ensures that new users or items without sufficient interaction history can still receive meaningful recommendations Lacic et al. (2015); Liu et al. (2020).
The graph construction stage is crucial, as the graph's structure directly influences the model's efficacy. Choosing the type of graph determines the nature of relationships between nodes. Adjusting edge weights can prioritize certain interactions while adding virtual nodes/edges can introduce auxiliary information to improve recommendation quality Wang et al. (2020); Kim et al. (2022); Wang et al. (2023).
In the embedding initialization stage, nodes are assigned low-dimensional vectors or embeddings. The choice of embedding size balances computational efficiency and representation power. Different initialization methods offer trade-offs between convergence speed and stability. Including diverse information in the embeddings can capture richer user-item relationships, enhancing recommendation quality Wang et al. (2021). This initialization can be represented as \(H^{(0)}=\left[h^{(0)}_{\text{user}};h^{(0)}_{\text{item}}\right]\), where \(h^{(0)}_{\text{user}}\) and \(h^{(0)}_{\text{item}}\) are the initial embeddings of the user and item nodes, respectively.
### Propagation layers, embedding fusion, score computation, training methodologies
Propagation layers in GNNs aggregate and transform features of neighboring nodes to generate node embeddings, represented as \(H^{(l+1)}=\sigma\left(D^{-1}AH^{(l)}W^{(l)}\right)\), where \(H^{(l)}\) is the matrix of node features at layer \(l\), \(A\) is the adjacency matrix, \(D\) is the degree matrix, \(W^{(l)}\) is the weight matrix at layer \(l\), and \(\sigma\) is the activation function Hamilton (2020). There are numerous approaches built on this concept. For instance, He et al. (2020) adopt a simplified approach, emphasizing straightforward neighborhood aggregation to enhance the quality of node embeddings; whereas Fan et al. (2019) integrate user-item interactions with user-user and item-item relations, capturing complex interactions through a comprehensive graph structure.
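The propagation rule above can be illustrated with a minimal NumPy sketch on a toy user-item graph; all values here are illustrative assumptions.

```python
import numpy as np

# One propagation layer, H^(l+1) = sigma(D^{-1} A H^(l) W^(l)),
# on a 4-node user-item graph.
A = np.array([[0, 0, 1, 1],   # nodes 0-1: users, nodes 2-3: items;
              [0, 0, 1, 0],   # an edge marks an observed interaction
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
D_inv = np.diag(1.0 / A.sum(axis=1))    # inverse of the degree matrix D
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))             # H^(0): 8-dimensional node embeddings
W = rng.normal(size=(8, 8))             # layer weight matrix W^(0)

H_next = np.maximum(0.0, D_inv @ A @ H @ W)  # ReLU as the activation sigma
print(H_next.shape)                          # (4, 8): one embedding per node
```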
Afterward, these embeddings are combined during the embedding fusion stage, forming a latent user-item representation used for score computation by applying a weighted summation, concatenation, or a more complex method of combining user and item embeddings Wang et al. (2019); He et al. (2020).
Figure 1: The simplified multi-stage process of developing a GNN-based recommender system, each of these stages strongly impacts resulting recommendations and can be considered when designing a model that takes into account beyond-accuracy objectives.
The score computation stage involves a scoring function to output a score for each user-item pair based on the fused embeddings. The scoring function can be as simple as a dot product between user and item embeddings, or it can be a more complex function that takes into account additional factors Wang et al. (2019); He et al. (2020).
Finally, in the training methodologies stage, a suitable loss function is selected, and an optimization algorithm, typically a variant of stochastic gradient descent, is used to update model parameters Rendle et al. (2012); Fan et al. (2019).
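A common instantiation of the last two stages pairs dot-product scoring with the BPR pairwise loss of Rendle et al. (2012); a minimal sketch follows, where the embeddings and the sampled (user, positive, negative) triple are toy assumptions rather than values from any cited model.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
u = rng.normal(size=d)   # user embedding
i = rng.normal(size=d)   # positive (interacted) item embedding
j = rng.normal(size=d)   # sampled negative item embedding

x_uij = u @ i - u @ j                      # pairwise score difference
sigmoid = 1.0 / (1.0 + np.exp(-x_uij))
loss = -np.log(sigmoid)                    # BPR loss for this triple

# Gradient w.r.t. the user embedding (before L2 regularization),
# as consumed by SGD-style updates in the training stage:
grad_u = -(1.0 - sigmoid) * (i - j)
```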
Understanding the unique strengths of each stage outlined in this section is essential, and a comparative evaluation can guide the selection of the most suitable approach for specific collaborative filtering scenarios, such as addressing the challenges associated with beyond-accuracy metrics. In Table 1, we provide a comprehensive overview of existing literature, aiding readers in navigating the diverse methodologies and findings discussed throughout this review.
## 4 Diversity in GNN-based recommender systems
### Definition and importance of diversity
Diversity in recommender systems indicates how different the suggested items are to a user. It's vital for recommendation quality, preventing over-specialization, and boosting user discovery. Diverse recommendations offer users a wider item range, enhancing satisfaction and user engagement Kunaver and Pozrl (2017); Duricic et al. (2021). Diversity has two types: intra-list (variety within one recommendation list) and inter-list (variety across lists for different users) Kaminskas and Bridge (2016).
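Intra-list diversity is commonly quantified as the average pairwise dissimilarity among recommended items; a minimal sketch using 1 − cosine similarity, where the item vectors are toy stand-ins for learned item embeddings:

```python
import numpy as np

def intra_list_diversity(item_vecs):
    """Average pairwise (1 - cosine similarity) over a recommendation list."""
    norms = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    sims = norms @ norms.T
    n = len(item_vecs)
    # average over ordered pairs, excluding each item's self-similarity
    return (1.0 - sims)[~np.eye(n, dtype=bool)].mean()

identical = np.tile([1.0, 0.0], (5, 1))   # five identical items
spread = np.eye(5)                        # five mutually orthogonal items
print(intra_list_diversity(identical))    # 0.0 -> no diversity
print(intra_list_diversity(spread))       # 1.0 -> maximal diversity
```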
### Review of recent developments in improving accuracy-diversity trade-off
A number of innovative approaches have emerged recently to tackle recommendation diversity using graph neural networks (GNNs). These methods can be broadly categorized based on the specific mechanisms or strategies they employ:
* **Neighbor-based mechanisms 1:** An approach introduced by Isufi et al. (2021) combines nearest neighbors (NN) and furthest neighbors (FN) within a joint convolutional framework. The _DGRec_ method diversifies embedding generation through submodular neighbor selection, layer attention, and loss reweighting Yang et al. (2023). Additionally, the _DGCN_ model leverages graph convolutional networks to capture collaborative effects in the user-item bipartite graph, ensuring diverse recommendations through rebalanced neighbor discovery Zheng et al. (2021).

Footnote 1: Neighbor-based mechanisms aggregate and propagate information from neighboring nodes (users or items) to enhance the representation of a target node, capturing intricate relational patterns for improved recommendations Wu et al. (2022).

* **Disentangling mechanisms 2:** The _DGCF_ framework diversifies recommendations by disentangling user intents in collaborative filtering using intent-aware graphs and a graph disentangling layer Wang et al. (2020).

Footnote 2: Disentangling mechanisms aim to separate and capture distinct factors or patterns within graph data, ensuring more interpretable and robust recommendations by reducing the entanglement of various latent factors Ma et al. (2019).

* **Dynamic graph construction 3:** The _DDGraph_ approach involves dynamically constructing a user-item graph to capture both user-item interactions and non-interactions, and then applying a novel candidate

Footnote 3: Dynamic graph construction involves continuously updating and evolving the graph structure to incorporate new interactions and/or entities Skarding et al. (2021).
Table 1: This table summarizes key literature on GNN-based recommender systems emphasizing beyond-accuracy metrics: Diversity, Serendipity, and Fairness. Each entry specifies the paper's publication venue/journal, targeted metric, a broad strategy categorization, and the model development stages the method utilizes or adapts to enhance the respective metric. These stages include data preprocessing (DP), graph construction (GC), embedding initialization (EI), propagation layers (PL), embedding fusion (EF), score computation (SC), and training methodologies (TM).

| Beyond-accuracy metric | Paper | Venue/journal | Topic/contribution | Stages utilized |
| --- | --- | --- | --- | --- |
| Diversity | Isufi et al. (2021) | Information Processing and Management | Neighbor-based mechanisms | GC, PL, EF, TM |
| Diversity | Wang et al. (2020) | ACM SIGKDD conf. | Disentangling mechanisms | GC, EI, PL, EF, TM |
| Diversity | Ye et al. (2021) | ACM RecSys conf. | Dynamic graph construction | |
| Diversity | Yang et al. (2023a) | ACM WSDM conf. | Neighbor-based mechanisms | |
| Diversity | Zuo et al. (2023) | MDPI Applied Sciences | Adversarial learning | |
| Diversity | Ma et al. (2022) | IEEE IJCNN conf. | Contrastive learning | |
| Diversity | Zheng et al. (2021) | ACM Web Conf. | Neighbor-based mechanisms; adversarial learning | |
| Diversity | Xie et al. (2021) | IEEE Trans. on Big Data | Heterogeneous graph neural networks | |
| Serendipity | Dhawan et al. (2022) | Electronic Commerce Research and Applications | General GNN architecture enhancements | |
| Serendipity | Liu and Zheng (2020) | ACM RecSys conf. | Long-tail recommendations | |
| Serendipity | Sun et al. (2020) | ACM SIGKDD conf. | General GNN architecture enhancements | |
| Serendipity | Zhao et al. (2022) | ACM SIGIR conf. | Normalization techniques | |
| Serendipity | Boo et al. (2023) | ACM IUI conf. | Neighbor-based mechanisms | |
| Fairness | Xu et al. (2023) | Information Sciences | Contrastive learning | GC, TM |
| Fairness | Li et al. (2019) | ACM CIKM conf. | Multimodal feature learning; long-tail recommendations | |
| Fairness | Liu et al. (2022a) | Applied Soft Computing | Self-training mechanisms | |
| Fairness | Kim et al. (2022) | ACM CIKM conf. | Neighbor-based mechanisms | |
| Fairness | Yang et al. (2023b) | ACM Web Conf. | Contrastive learning | |
| Fairness | Wu et al. (2022b) | ACM ASONAM conf. | Neighbor-based mechanisms | |
| Fairness | Gupta et al. (2019) | ACM CIKM conf. | Long-tail recommendations; normalization techniques | |
| Fairness | Liu et al. (2022b) | Neural Computing and Applications | Neighbor-based mechanisms; adversarial learning | |
item selection operator to choose items from different sub-regions based on distance metrics Ye et al. (2021).
* **Adversarial learning 4:** To improve the accuracy-diversity trade-off in tag-aware systems, the _DTGCF_ model utilizes personalized category-boosted negative sampling, adversarial learning for category-free embeddings, and specialized regularization techniques Zuo et al. (2023). Furthermore, the above-mentioned _DGCN_ model employs adversarial learning to make item representations more category-independent. Footnote 4: Adversarial examples in recommender systems, as a form of data augmentation, bolster data diversity for improved generalization, counteract inherent biases, and ensure fair node representation in GNNs for fairer recommendations Deldjoo et al. (2021).
* **Contrastive learning 5:** The Contrastive Co-training (_CCT_) method by Ma et al. (2022) employs an iterative pipeline that augments recommendation and contrastive graph views with pseudo edges, leveraging diversified contrastive learning to address popularity and category biases in recommendations. Footnote 5: Contrastive learning pushes similar item or user embeddings closer and dissimilar ones apart to enhance recommendation quality Liu et al. (2021).
* **Heterogeneous Graph Neural Networks 6:** The _GraphDR_ approach by Xie et al. (2021) utilizes a heterogeneous graph neural network, capturing diverse interactions and prioritizing diversity in the matching module. Footnote 6: Heterogeneous graph neural networks process diverse types of nodes and edges, capturing complex relationships using a heterogeneous graph as input Wu et al. (2022).
Each of these methods offers a unique approach to the accuracy-diversity challenge. While all aim to improve the trade-off, their strategies vary, highlighting the multifaceted nature of the challenge at hand.
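The methods above diversify inside the GNN itself, but a useful mental baseline for the same accuracy-diversity trade-off is post-hoc greedy re-ranking in the spirit of maximal marginal relevance: pick the next item that is relevant yet dissimilar to what was already chosen. The sketch below is a generic illustration with made-up scores and similarities; it does not correspond to any one surveyed model.

```python
def greedy_diverse_rerank(scores, sim, k, lam=0.5):
    # Greedy MMR-style selection: repeatedly pick the candidate maximizing
    # lam * relevance - (1 - lam) * max similarity to the items chosen so far.
    candidates = set(scores)
    selected = []
    while candidates and len(selected) < k:
        def mmr(i):
            penalty = max((sim[i][j] for j in selected), default=0.0)
            return lam * scores[i] - (1 - lam) * penalty
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected

scores = {"a": 0.9, "b": 0.85, "c": 0.4}
sim = {"a": {"b": 0.95, "c": 0.1},
       "b": {"a": 0.95, "c": 0.1},
       "c": {"a": 0.1, "b": 0.1}}
# With the diversity penalty active, the near-duplicate "b" is displaced
# by the dissimilar "c"; with lam=1.0 the ranking is purely by relevance.
top2 = greedy_diverse_rerank(scores, sim, k=2, lam=0.5)
```

Here `lam` plays the same accuracy-versus-diversity balancing role that the surveyed models learn end-to-end.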
## 5 Serendipity in GNN-based recommender systems
### Definition and importance of serendipity and novelty
Serendipity and the closely related notion of novelty are crucial in recommender systems, as both aim to boost user discovery. Serendipity refers to surprising yet relevant recommendations, promoting exploration and curiosity. Novelty suggests new or unfamiliar items, expanding user exposure. Both prevent over-specialization and encourage user curiosity Kaminskas and Bridge (2016).
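One common operationalization of novelty (one choice among several in the literature) is the mean self-information of the recommended items: the fewer users have interacted with an item, the more novel it is to recommend. The interaction counts below are invented for illustration.

```python
import math

def novelty(recommended, interactions, n_users):
    # Mean self-information: -log2 of each recommended item's interaction rate.
    return sum(-math.log2(interactions[i] / n_users)
               for i in recommended) / len(recommended)

interactions = {"hit": 512, "hidden_gem": 2}  # users who consumed each item
nov_hit = novelty(["hit"], interactions, n_users=1024)         # common item
nov_gem = novelty(["hidden_gem"], interactions, n_users=1024)  # rare item
```

Serendipity metrics typically combine such an unexpectedness signal with a relevance check, so that surprising but useless items do not score well.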
### Review of recent developments in promoting serendipity and novelty
Recent advancements in GNN-based recommender systems have shown promising results in promoting serendipity and novelty, although notably fewer efforts have been directed towards balancing the accuracy-serendipity and accuracy-novelty trade-offs in comparison to the accuracy-diversity trade-off. In our exploration, we identified several studies addressing these efforts and have categorized them based on the primary theme of their contribution:
* **Neighbor-based mechanisms:** The approach proposed by Boo et al. (2023) enhances session-based recommendations by incorporating serendipitous session embeddings, leveraging session data and user preferences to amplify global embedding effects and enabling users to control the explore-exploit trade-off.
* **Long-tail recommendations 7:** The _TailNet_ architecture is designed to enhance long-tail recommendation performance. It classifies items into short-head and long-tail based on click frequency and integrates a unique preference mechanism to balance between recommending niche items for serendipity and maintaining overall accuracy Liu and Zheng (2020). Footnote 7: Long-tail recommendations focus on suggesting less popular or niche items Kowald et al. (2020).
* **Normalization techniques 8:** Zhao et al. (2022) proposed _r-AdjNorm_, a simple yet effective GNN modification that improves the accuracy-novelty trade-off by controlling the normalization strength in the neighborhood aggregation process. Footnote 8: Normalization techniques in GNN-based recommender systems stabilize and scale node features or edge weights, ensuring consistent and improved model convergence and recommendation quality Gupta et al. (2019).
* **General GNN architecture enhancements 9:** Similarly to the popular _LightGCN_ approach by He et al. (2020), the _ImprovedGCN_ model by Dhawan et al. (2022) adapts and simplifies the graph convolution process in GCNs for item recommendation, inadvertently boosting serendipity. On the other hand, the _BGCF_ framework by Sun et al. (2020), designed for diverse and accurate recommendations, also boosts serendipity and novelty through its joint training approach. These GNN-based models, while focusing on accuracy, inadvertently elevate recommendation serendipity and/or novelty. Footnote 9: We refer to general GNN architecture enhancements in recommender systems as the advancements in architectures, aggregators, or training procedures that better capture graph structures for improved recommendation accuracy.
These studies collectively demonstrate the potential of GNNs in enhancing the serendipity and novelty of recommender systems, while also highlighting the need for further research to address existing challenges.
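As a rough illustration of "controlling the normalization strength in neighborhood aggregation," one plausible reading (our assumption, not necessarily _r-AdjNorm_'s exact formulation) is to generalize the symmetric GCN normalization \(a_{ij}/\sqrt{d_i d_j}\) to \(a_{ij}/(d_i^{r} d_j^{1-r})\), so a single exponent tunes how strongly high-degree (popular) nodes are damped:

```python
def adjnorm(adj, r):
    # Weight each edge by 1 / (deg_i**r * deg_j**(1 - r)).
    # r = 0.5 recovers the standard symmetric GCN normalization.
    n = len(adj)
    deg = [sum(row) for row in adj]
    return [[adj[i][j] / (deg[i] ** r * deg[j] ** (1 - r)) if adj[i][j] else 0.0
             for j in range(n)] for i in range(n)]

# Tiny graph: node 0 has degree 2, nodes 1 and 2 have degree 1.
A = [[0, 1, 1],
     [1, 0, 0],
     [1, 0, 0]]
sym = adjnorm(A, 0.5)   # symmetric: sym[0][1] == sym[1][0]
skew = adjnorm(A, 1.0)  # normalizes by the target node's degree only
```

Sweeping the exponent shifts aggregation weight between popular and long-tail neighbors, which is the lever such methods use to trade accuracy against novelty.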
## 6 Fairness in GNN-based recommender systems
### Definition and importance of fairness
Fairness in recommender systems ensures no bias towards certain users or items. It can be divided into user fairness, which avoids algorithmic bias among users or demographics, and item fairness, which ensures equal exposure for items, countering popularity bias Leonhardt et al. (2018); Kowald et al. (2020); Abdollahpouri et al. (2021); Lacic et al. (2022); Kowald et al. (2023); Lex et al. (2020). Fairness helps to mitigate bias, supports diversity, and boosts user satisfaction. In GNN-based systems, which can amplify bias, fairness is crucial for balanced recommendations and optimal performance Ekstrand et al. (2018); Chizari et al. (2022); Chen et al. (2023); Gao et al. (2023).
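Item fairness and popularity bias are often quantified via how concentrated exposure is across items; the Gini coefficient is one standard summary (the exposure counts below are fabricated for the example):

```python
def gini(exposures):
    # Gini coefficient of item exposure counts: 0 means perfectly equal
    # exposure, values near 1 mean exposure concentrated on few items.
    xs = sorted(exposures)
    n = len(xs)
    total = sum(xs)
    cum = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
    return cum / (n * total)

equal = gini([10, 10, 10, 10])   # every item shown equally often
skewed = gini([0, 0, 0, 40])     # all exposure goes to one item
```

Analogous summaries over per-user utility (instead of per-item exposure) give user-side fairness measures.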
### Review of recent developments in promoting fairness
In the evolving landscape of GNN-based recommender systems, the pursuit of user and item fairness has become a prominent topic. Recent advancements can be broadly categorized based on the thematic emphasis of their contributions:
* **Neighbor-based mechanisms:** The _Navip_ method debiases the neighbor aggregation process in GNNs using "neighbor aggregation via inverse propensity", focusing on user fairness Kim et al. (2022). Additionally, the _UGRec_ framework by Liu et al. (2022b) employs an information aggregation component and a multihop mechanism to aggregate information from users' higher-order neighbors, promoting user fairness by accounting for discrimination between male and female users. The _SKIPHOP_ approach targets user fairness by capturing both direct user-item interactions and latent knowledge-graph interests, i.e., both first-order and second-order proximity. Using fairness-based regularization, it ensures balanced recommendations for users with similar profiles Wu et al. (2022b).
* **Multimodal feature learning 10:** The method proposed by Li et al. (2019) fuses hashtag embeddings with multi-modal features, considering interactions among users, micro-videos, and hashtags. Footnote 10: Multimodal feature learning integrates diverse data sources, like text, images, and graphs, into unified embeddings to enrich recommendation context and accuracy Zhou et al. (2023).
* **Adversarial learning:** The _UGRec_ model additionally incorporates adversarial learning to eliminate gender-specific features while preserving common features.
* **Contrastive learning:** The _DCRec_ model by Yang et al. (2023b) leverages debiased contrastive learning to counteract popularity bias and to disentangle user conformity from genuine interest, focusing on user fairness. The _TAGCL_ framework also capitalizes on the contrastive learning paradigm, ensuring item fairness by reducing biases in social tagging systems Xu et al. (2023).
* **Long-tail recommendations:** The _NISER_ method by Gupta et al. (2019) addresses the long-tail issue by focusing on popularity bias in session-based recommendation systems. It aims to ensure item fairness by normalizing item and session representations, thereby improving recommendations, especially for less popular items. Additionally, the above-mentioned approach by Li et al. (2019) also focuses on long-tail recommendations.
* **Self-training mechanisms 11:** The _Self-Fair_ approach by Liu et al. (2022a) employs a self-training mechanism using unlabeled data with the goal of improving user fairness in recommendations for users of different genders. By iteratively refining predictions as pseudo-labels and incorporating fairness constraints, the model balances accuracy and fairness without relying heavily on labeled data. Footnote 11: Self-training mechanisms leverage unlabeled data by iteratively predicting and refining labels, enhancing the model's performance with augmented training data Yu et al. (2023).
In the broader context of graph neural networks, researchers have also tackled fairness in non-recommender systems tasks, such as classification Dai and Wang (2021); Ma et al. (2021); Dong et al. (2022); Zhang et al. (2022). Their insights provide valuable lessons for future development of fair recommender systems.
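The "aggregation via inverse propensity" idea behind _Navip_ can be sketched generically: down-weight messages from neighbors whose edges were likely to occur anyway (e.g., because the item is heavily exposed). The code below is our simplified interpretation with made-up propensities, not the paper's actual estimator.

```python
def ipw_aggregate(neighbor_embs, propensities):
    # Average neighbor embeddings with inverse-propensity weights, so edges
    # to over-exposed (high-propensity) items count for less.
    weights = [1.0 / p for p in propensities]
    total = sum(weights)
    dim = len(neighbor_embs[0])
    return [sum(w * e[d] for w, e in zip(weights, neighbor_embs)) / total
            for d in range(dim)]

# One popular neighbor (propensity 0.9) and one rare neighbor (propensity 0.1):
# the rare neighbor dominates the aggregated representation.
agg = ipw_aggregate([[1.0, 0.0], [0.0, 1.0]], [0.9, 0.1])
```

In a full model, the propensities would themselves be estimated (e.g., from item exposure statistics) rather than supplied by hand.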
## 7 Discussion and Future Directions
In this paper, we have conducted a comprehensive review of the literature on diversity, serendipity, and fairness in GNN-based recommender systems, with a focus on optimizing beyond-accuracy metrics. Throughout our analysis, we have explored various aspects of model development and discussed recent advancements in addressing these dimensions.
To further advance the field and guide future research, we have formulated three key questions:
_Q1: What are the practical challenges in optimizing GNN-based recommender systems for beyond-accuracy metrics?_
GNNs are able to capture complex relationships within graph structures. However, this sophistication can lead to overfitting, especially when prioritizing accuracy Fu et al. (2023). Data sparsity and the need for auxiliary data, such as demographic information, challenge the optimization of high-quality node representations, introducing biases Dhawan et al. (2022). An overemphasis on past preferences can limit novel discoveries Dhawan et al. (2022), and while addressing popularity bias is essential, it might inadvertently inject noise, reducing accuracy Liu and Zheng (2020). Balancing diverse objectives, like fairness, accuracy, and diversity, is nuanced, especially when optimizing one can compromise another Liu et al. (2022b). These challenges emphasize the need for focused research on effective modeling of GNN-based recommender systems focused on beyond-accuracy optimization.
_Q2: Which model development stages of GNN-based recommender systems have seen the most innovation for tackling beyond-accuracy optimization, and which stages have been underutilized?_
By conducting a thorough analysis of the reviewed papers (see Table 1), we have observed that the graph construction, propagation layer, and training methodologies have seen significant innovation in GNN-based recommender systems. This includes advanced graph construction methods, innovative graph convolution operations, and unique training methodologies. However, stages like embedding initialization, embedding fusion, and score computation are relatively underutilized. These stages could offer potential avenues for future research and could provide novel ways to balance accuracy, fairness, diversity, novelty, and serendipity in recommendations.
_Q3: What are potentially unexplored areas of beyond-accuracy optimization in GNN-based recommender systems?_
A less explored aspect in GNN-based recommender systems is personalized diversity, which modifies the diversity in recommendations to match individual user preferences. Users favoring more diversity get more diverse recommendations, whereas those liking less diversity get less diverse ones Eskandanian et al. (2017). This concept of personalized diversity, currently under-researched in GNN-based systems, hints at an intriguing future research direction. It can also relate to personalized serendipity or novelty, tailoring unexpected or novel recommendations to user preferences. Thus, incorporating personalized diversity, serendipity, and novelty in GNN-based systems could enrich beyond-accuracy optimization.
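A minimal way to make this concrete is to estimate, per user, how much diversity they have historically tolerated and use that estimate as a personal re-ranking weight. The heuristic below is a deliberately simple toy (the category share is our assumption, not a published estimator):

```python
def personal_diversity_weight(profile_categories):
    # Share of distinct categories in a user's interaction history,
    # used as a crude proxy for their appetite for diverse recommendations.
    return len(set(profile_categories)) / len(profile_categories)

# An eclectic user gets a higher diversity weight than a focused one.
eclectic = personal_diversity_weight(["jazz", "sci-fi", "cooking", "sports"])
focused = personal_diversity_weight(["jazz", "jazz", "jazz", "jazz"])
```

Such a per-user weight could then steer any diversity-aware objective or re-ranker, giving each user their preferred exploration level.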
Overall, this review aims to help researchers and practitioners gain a deeper understanding of the multifaceted issues and potential avenues for future research in optimizing GNN-based recommender systems beyond traditional accuracy-centric approaches. By addressing the practical challenges, identifying underutilized model development stages, and highlighting unexplored areas of optimization, we hope to contribute to the development of more robust, diverse, serendipitous, and fair recommender systems that cater to the evolving needs and expectations of users.
## Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## Author Contributions
TD: literature analysis, conceptualization, writing. ELA: conceptualization, writing. ELE and DK: conceptualization, writing, supervision. All authors contributed to the article and approved the submitted version.
## Acknowledgments
This work is supported by the "DDIA" COMET Module within the COMET - Competence Centers for Excellent Technologies Programme, funded by the Austrian Federal Ministry for Transport, Innovation and Technology (bmvit), the Austrian Federal Ministry for Digital and Economic Affairs (bmdw), FFG, SFG, and partners from industry and academia. The COMET Programme is managed by FFG. This research received support by the TU Graz Open Access Publishing Fund. Additional credit is given to OpenAI for the generative AI models, GPT-4 and ChatGPT, used in this work for text summarization and sentence rephrasing. Verification of accuracy and originality was performed for all content generated by these tools. |
2302.11947 | Real-Time Damage Detection in Fiber Lifting Ropes Using Convolutional Neural Networks | The health and safety hazards posed by worn crane lifting ropes mandate periodic inspection for damage. This task is time-consuming, prone to human error, halts operation, and may result in the premature disposal of ropes. Therefore, we propose using deep learning and computer vision methods to automate the process of detecting damaged ropes. Specifically, we present a novel vision-based system for detecting damage in synthetic fiber rope images using convolutional neural networks (CNN). We use a camera-based apparatus to photograph the lifting rope's surface, while in operation, and capture the progressive wear-and-tear as well as the more significant degradation in the rope's health state. Experts from Konecranes annotate the collected images in accordance with the rope's condition; normal or damaged. Then, we pre-process the images, design a CNN model in a systematic manner, evaluate its detection and prediction performance, analyze its computational complexity, and compare it with various other models. Experimental results show the proposed model outperforms other techniques with 96.4% accuracy, 95.8% precision, 97.2% recall, 96.5% F1-score, and 99.2% AUC. Besides, they demonstrate the model's real-time operation, low memory footprint, robustness to various environmental and operational conditions, and adequacy for deployment in industrial systems. | Tuomas Jalonen, Mohammad Al-Sa'd, Roope Mellanen, Serkan Kiranyaz, Moncef Gabbouj | 2023-02-23T11:44:43Z | http://arxiv.org/abs/2302.11947v1 | # Real-Time Damage Detection in Fiber Lifting Ropes Using Convolutional Neural Networks
###### Abstract
The health and safety hazards posed by worn crane lifting ropes mandate periodic inspection for damage. This task is time-consuming, prone to human error, halts operation, and may result in the premature disposal of ropes. Therefore, we propose using deep learning and computer vision methods to automate the process of detecting damaged ropes. Specifically, we present a novel vision-based system for detecting damage in synthetic fiber rope images using convolutional neural networks (CNN). We use a camera-based apparatus to photograph the lifting rope's surface, while in operation, and capture the progressive wear-and-tear as well as the more significant degradation in the rope's health state. Experts from Konecranes annotate the collected images in accordance with the rope's condition; normal or damaged. Then, we pre-process the images, design a CNN model in a systematic manner, evaluate its detection and prediction performance, analyze its computational complexity, and compare it with various other models. Experimental results show the proposed model outperforms other techniques with 96.4% accuracy, 95.8% precision, 97.2% recall, 96.5% F1-score, and 99.2% AUC. Besides, they demonstrate the model's real-time operation, low memory footprint, robustness to various environmental and operational conditions, and adequacy for deployment in industrial systems.
Computer vision, damage detection, deep learning, fiber rope, industrial safety.
## I Introduction
Traditional industries are transitioning to smart manufacturing under the Industry 4.0 paradigm [1, 2]. This transition allows the use of recent advances in artificial intelligence and computer vision to increase productivity and improve manufacturing safety [3, 4]. Nonetheless, lifting heavy payloads is still a major health and safety hazard in many manufacturing environments [5]. For example, cranes such as the one shown in Fig. 1 can move objects weighing several metric tons; hence, inspecting its ropes for damage is paramount to prevent serious accidents, injuries, and additional costs [6]. More specifically, lifting ropes are major points of failure that require periodic inspection and replacement [8]. However, manual inspection procedures are labor intensive, time consuming, subjective, and often require halting the production process [9, 10, 11]. Therefore, we propose a novel real-time damage detection system for synthetic fiber lifting ropes based on deep learning and computer vision techniques.
The main contributions of this paper are:
* Developing the first fiber lifting rope image dataset.
* Proposing a deep learning based system for detecting damage in fiber lifting rope images.
* Designing an industrial solution that achieves high performance with a light memory footprint.
Lifting ropes are commonly manufactured from steel wires or, more recently, from synthetic fibers such as polyethylene [12]. Synthetic lifting ropes have many benefits over traditional steel wire ropes. For example, they demonstrate higher resistance to corrosion, do not require greasing, and are easier to install [13]. Moreover, despite their higher purchase price, synthetic fiber ropes are lightweight, which allows utilizing smaller cranes; leading to cost reductions [14]. However, synthetic ropes, just as steel wires, do suffer from wear-and-tear and get damaged over time. Common damage types in fiber ropes are: strand cuts, abrasion, melting, compression damage, pulled strands and inconsistent diameter. In contrast to steel wires, which tend to break from the inside [15], synthetic rope damage manifests on the rope's surface and can be visually inspected by an expert [14]. Currently, monitoring the condition of synthetic fiber ropes is performed manually by inspectors following the ISO-9554 standard [9, 10]. Although it is the standard practice, this procedure is cumbersome, discontinuous in time, interrupts operation, and may result in the premature disposal of ropes [11]. Therefore, automatic
Fig. 1: The fiber rope cranes used in this work. Photos are published with permission from Konecranes [7].
damage detection by leveraging the recent advancements in computer vision, image processing, and deep learning techniques is needed [14]. On the one hand, image processing and related feature extraction methods utilize expert knowledge and attempt to characterize damage in rope images similar to the ones identified by expert inspectors [16]. These techniques generally perform well in a controlled environment, but they poorly integrate the varying conditions and operations found in a real-life setting e.g., noise, lighting conditions, oil residue, and dust [17, 18]. On the other hand, deep learning tools discard the notion of hand-crafted features by learning abstractions that maximize the detection of damaged ropes. In fact, they yield discriminatory features without predisposition to the standard markings and can accommodate a wider range of environmental and/or operational conditions [17]. Therefore, deep learning techniques are more suited to detect damage in synthetic fiber rope images compared to engineering-based feature extraction methods.
The construct of damage indicators in fiber ropes was first articulated in [11] where changes in the rope's width and length were found to be important. This particular finding was verified in [19] using computer vision and thermal imaging; however, explicit identification for damaged ropes was not performed. In fact, detecting damage in synthetic fiber rope images has received less attention in the literature compared to steel wire cables. Fortunately, these detection techniques are suitable for adoption due to the similarity between the two problems; they both deal with detecting damaged yarns or strands in rope images. For instance, the health condition of balancing tail ropes was monitored in [20] using a convolutional neural network (CNN). The rope image was captured and then fed to the CNN model to classify its health state as either normal, or if it suffers from one out of eight common damage types. Moreover, a CNN-based approach was designed in [21] to detect surface defects in steel wire rope images. The model classified the acquired images into normal, broken, or damaged, and achieved a 99.7% overall accuracy. The same problem was tackled in [22] using support vector machines trained with texture-based hand-crafted features. The proposed system achieved a 93.3% classification accuracy and it was further improved in [23] to reach 95.9%. Nonetheless, the limited sample size and the reliance on hand-crafted features hampered robustness in noisy environments. This was evidenced in [24] which showed that the model accuracy drops to 80.5% when training/testing with a different dataset. Moreover, the utility of CNNs combined with image processing techniques was shown to increase the accuracy of the model in [22] from 93.3% to 95.5% [18]. This has motivated us to design a CNN-based solution for detecting damage in synthetic fiber rope images. 
However, our proposed solution will be developed to have both high performance and low computational requirements, allowing for easy integration into industrial systems and efficient deployment [25].
The rest of this paper is organized as follows: section II describes our methodology for building the experimental setup, collecting data, designing the damage detection system, and evaluating its performance. Afterwards, we present and discuss the system's performance and compare it to various other models in section III. Finally, section IV concludes the paper and suggests topics for future research.
## II Methodology
The proposed fiber rope damage detection system is overviewed in Fig. 2 and consists of the following stages:
1. Setup an experimental apparatus with a three-camera circular array to photograph the ropes' surface area.
2. Collect the captured images and label them as normal or damaged according to the ropes' health condition.
3. Preprocess the collected images to enhance contrast and down-sample to reduce computational complexity.
4. Split the pre-processed images into train and test sets in a 5-fold cross-validation fashion.
5. Train and test a classification model on the train and test folds, respectively, and repeat for every data split.
6. Evaluate and analyze the model's fold-averaged performance using various metrics.
The design process undertaken in this work is governed by the following requirements:
* High performance in detecting damaged ropes and robustness to different environmental and operational conditions.
* Lightweight for implementation and deployment.
* Remote sensing by neither interfering with the crane operation nor the rope structure.
* Modularity to facilitate maintenance, upgrades, and compatibility with IoT and edge devices.
The remaining subsections discuss and detail each stage in the proposed system.
### _Experimental setup_
The experimental setup was built and operated by Konecranes and the following experiment was repeated for three different synthetic fiber ropes; see Table I for the ropes' properties. First, the crane illustrated in Fig. 1 was set to continuously lift a payload of 5 metric tons in a controlled setting. The payload lifting height was approximately 5 meters and during a lifting cycle, the crane was stopped at the top and bottom (payload resting on the floor). After that, a circular camera array, comprised of three RGB cameras placed 120° apart, was used to capture approximately 13 mm of the lifting rope. The camera framerate was selected such that subsequent rope images have roughly 1/3 spatial overlap, resulting in 20 meters of rope being photographed during a lifting cycle. Finally, the crane lifting and rope imaging steps were repeated for weeks to cover the ropes' lifespan; from new to unusable.
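For intuition, the stated numbers imply roughly how many frames each camera records per lifting cycle. This back-of-the-envelope calculation is ours, assuming the ~13 mm field of view, 1/3 overlap, and 20 m of rope per cycle quoted above:

```python
view_mm = 13.0               # rope length visible in one frame
overlap = 1.0 / 3.0          # spatial overlap between consecutive frames
rope_per_cycle_mm = 20000.0  # ~20 m of rope photographed per cycle

advance_mm = view_mm * (1.0 - overlap)   # fresh rope covered by each frame
frames_per_cycle = rope_per_cycle_mm / advance_mm
```

With these assumptions, each camera advances roughly 8.7 mm of rope per frame, on the order of a couple of thousand frames per cycle.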
### _Data collection_
The rope imaging experiment generated 4,984,000 high-resolution photos; each being tagged with a timestamp and the rope's imaged position. The raw photos were then screened for duplicates by discarding images that examined the same rope position. In other words, we ensured that images for the same rope position would be distinct by capturing different health
conditions. This is important to avoid cross-contamination in data splits. After that, we selected 143,000 random samples and experts from Konecranes labeled them as _normal_ or _damaged_ according to the lifting rope health condition. Out of those images, 10,000 samples were labeled as damaged. Finally, to avoid data imbalance issues, we formed a balanced dataset containing 20,000 samples; 10,000 images from each class. The collected dataset is available from Konecranes and it was used under license for this study1.
Footnote 1: Please contact Roope Mellanen at [email protected] for data inquiry.
### _Preprocessing and data splitting_
The annotated high-resolution rope images were down-sampled to \(256\times 256\times 3\) pixels. After that, we enhanced the photos' contrast via histogram equalization [26]; see Fig. 3 for a sample, and we standardized the pixel values to range between 0 and 1. Finally, the pre-processed images were randomly divided into five equally sized portions while maintaining class balance (5-fold stratified cross-validation); see stage 4 in Fig. 2. In other words, each split had 16,000 (8,000 damaged and 8,000 normal) and 4,000 (2,000 damaged and 2,000 normal) images for training and testing, respectively. Fig. 4 demonstrates samples of normal and damaged ropes from the acquired dataset. By examining the images, one notes a significant variation in the clarity of the rope's health state and in the severity of the damage. For example, Fig. 4a conveys a more damaged rope when compared to the one presented in Fig. 4e. However, the damage can also be minuscule without clear visual indications, as presented in Fig. 4d. Finally, the dirt and oil stains found in most rope images present a challenge for any vision-based tool.
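The contrast-enhancement step can be sketched in pure Python (a real pipeline would use a library such as OpenCV; the 2x2 toy image and 256 gray levels here are illustrative):

```python
def equalize(image, levels=256):
    # Histogram equalization: remap gray levels through the normalized
    # cumulative histogram, stretching a narrow intensity range over the
    # full scale to enhance contrast.
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    scale = (levels - 1) / (n - cdf_min) if n > cdf_min else 0.0
    lut = [round((c - cdf_min) * scale) for c in cdf]
    return [[lut[p] for p in row] for row in image]

# A low-contrast patch occupying only levels 100-103 is stretched to 0-255.
eq = equalize([[100, 101], [102, 103]])
```

The subsequent standardization step simply divides the equalized pixel values by 255 so they fall in the range 0 to 1.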
### _Proposed deep learning model_
The collected rope images constitute an over-complete description for the rope as a whole; hence, the problem of damage detection reduces to classifying each image independently. We designed a lightweight CNN architecture to classify the fiber lifting rope images, and we tested different variants to find the best performing model.
The architecture design starts with a convolutional layer (\(3\times 3\) kernel with ReLU activation) to extract preliminary feature maps from the input images. After that, those initial features are passed through a number of blocks each consisting of the following sequential elements: (1) convolutional layer to
Fig. 3: Histogram equalization for an example rope image.
Fig. 2: The proposed vision-based damage detection system for synthetic fiber lifting ropes. The system is comprised of the following stages: (1) experimental setup with a three-camera circular array to capture rope images; (2) collection and annotation of the captured images; (3) preprocessing to enhance quality and down-sampling to reduce complexity; (4) data splitting into 5-fold training and testing sets; (5) training/testing the proposed deep learning model; and (6) evaluating and analyzing the system’s performance and computational complexity.
extract features (\(3\times 3\) kernel with ReLU activation), (2) Max Pooling to down-sample the features (\(2\times 2\) kernel), (3) and dropout to regularize the network by reducing the neurons' interdependent learning (\(0.4\) rate). Finally, the learned abstractions are flattened and passed through a dropout layer (\(0.4\) rate), a fully connected layer (\(20\) nodes), another dropout layer (\(0.2\) rate), and lastly, a binary classification layer with a Softmax activation function. In this work, 16 model variants were generated from this architecture by altering the number of blocks (1, 2, or 3), input image sizes (\(16\times 16\), \(32\times 32\), or \(64\times 64\)), and the input image color state (color or grayscale); see Appendix A for the model variants' structure and Table II for details on the variant that we selected for further analysis and comparison.
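The data flow through the described architecture can be traced as a shape calculation; the per-layer filter count (16) and 'same' convolution padding below are assumptions, as the paper specifies only the kernel sizes, dropout rates, and the dense head (20 nodes, 2 outputs):

```python
def variant_shapes(input_hw=32, n_blocks=2, filters=16):
    """Trace feature-map sizes through the block architecture described above.
    Returns (layer-name, spatial-or-unit size, channels) triples."""
    h, c = input_hw, filters
    layers = [("initial-conv3x3+relu", h, c)]   # 'same' padding assumed
    for b in range(1, n_blocks + 1):
        layers.append((f"block{b}-conv3x3+relu", h, c))
        h //= 2                                  # 2x2 max pooling halves H and W
        layers.append((f"block{b}-maxpool2x2+dropout0.4", h, c))
    layers.append(("flatten+dropout0.4", h * h * c, 1))
    layers.append(("dense20+dropout0.2", 20, 1))
    layers.append(("softmax", 2, 1))
    return layers
```

Varying `input_hw` (16/32/64) and `n_blocks` (1/2/3) reproduces the combinatorics behind the 16 variants.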
The models' training was performed for 200 epochs using an Adam optimizer [27] to minimize the cross-entropy loss regularized by a weight decay to reduce overfitting [28, 29], i.e.:
\[\mathcal{L}=-y\log(\hat{y})-(1-y)\log(1-\hat{y})+\lambda||\mathbf{w}||_{2}^{2 }\,, \tag{1}\]
where \(\mathcal{L}\) denotes the regularized loss, \(y\) and \(\hat{y}\) are the true and predicted labels, respectively, \(\lambda=5\times 10^{-4}\) is the selected \(L_{2}\) regularization rate, and \(\mathbf{w}\) is the network's weight matrix [28]. Moreover, the training batch size was set to 32 and to ensure convergence the learning rate was decayed by [30]:
\[\eta(n)=\begin{cases}10^{-3}&:n\leq 120\\ 10^{-4}&:120<n\leq 150\\ 10^{-5}&:150<n\leq 180\\ 10^{-6}&:180<n\leq 200\end{cases}\,, \tag{2}\]
where \(\eta\) is the learning rate and \(n\) is the epoch number.
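Eqs. (1) and (2) translate directly into code; the small epsilon guard in the loss is an implementation detail not stated in the paper:

```python
import math

def learning_rate(n):
    """Piece-wise constant learning-rate decay schedule of Eq. (2)."""
    if n <= 120:
        return 1e-3
    if n <= 150:
        return 1e-4
    if n <= 180:
        return 1e-5
    return 1e-6

def regularized_loss(y, y_hat, weights, lam=5e-4):
    """Cross-entropy with L2 weight decay, Eq. (1); `weights` is a flat list."""
    eps = 1e-12  # numerical guard against log(0)
    ce = -y * math.log(y_hat + eps) - (1 - y) * math.log(1 - y_hat + eps)
    return ce + lam * sum(w * w for w in weights)
```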
This training process was conducted for each model using the train set in each data split (five training sets). Additionally, apart from the generated variants, we also trained the following three baseline models for comparison: Zhou _et al._ (2019) [21], Zhou _et al._ (2021) [18], and Schuler _et al._ (2022) [31]. Detailed descriptions of these models can be found in Appendixes B-D.
### _Performance evaluation and analysis_
The trained models were evaluated using the test set in each data split (five test sets). Their performance was analyzed by various tools and metrics to quantify their fold-averaged detection, prediction, and misclassification outcomes.
Fig. 4: Example images from the acquired dataset show significant variation in the severity and clarity of damages because of dirt and oil stains. The first row (a)-(e) show damaged ropes while the second row (f)-(j) present some healthy samples.
#### II-E1 Classification
we quantified the models' classification performance by accuracy, precision, recall, false positive rate (FPR), and the F1-score, i.e.:
\[\text{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN}\,, \tag{3}\]
\[\text{Precision}=\frac{TP}{TP+FP}\,, \tag{4}\]
\[\text{Recall}=\frac{TP}{TP+FN}\,, \tag{5}\]
\[\text{FPR}=\frac{FP}{FP+TN}\,, \tag{6}\]
\[\text{F1-score}=2\left(\frac{\text{Precision}\times\text{Recall}}{\text{ Precision}+\text{Recall}}\right)\,, \tag{7}\]
where \(TP\), \(TN\), \(FP\), and \(FN\) are true positives, true negatives, false positives, and false negatives, respectively (positive/negative denotes a damaged/normal rope).
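All five measures of Eqs. (3)-(7) follow directly from the confusion-matrix counts; a minimal sketch:

```python
def classification_metrics(tp, tn, fp, fn):
    """Eqs. (3)-(7) from the raw confusion-matrix counts
    (positive = damaged rope, negative = normal rope)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall": recall,
        "fpr": fp / (fp + tn),
        "f1": 2 * precision * recall / (precision + recall),
    }
```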
Moreover, we used the area under the averaged receiver operating curve (AUC) and confusion matrices to fully characterize the classification quality. The AUC was computed by averaging linearly interpolated receiver operating curves.
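The fold-averaging of linearly interpolated ROC curves can be sketched as follows; the 101-point FPR grid is an assumption, as the paper does not state the grid size:

```python
import numpy as np

def averaged_roc(fold_curves, grid_points=101):
    """Average per-fold ROC curves (lists of (fpr, tpr) arrays) by linear
    interpolation onto a shared FPR grid, then integrate the mean curve."""
    grid = np.linspace(0.0, 1.0, grid_points)
    tprs = [np.interp(grid, fpr, tpr) for fpr, tpr in fold_curves]
    mean_tpr = np.mean(tprs, axis=0)
    # trapezoidal rule for the area under the averaged curve
    auc = float(((mean_tpr[1:] + mean_tpr[:-1]) / 2 * np.diff(grid)).sum())
    return grid, mean_tpr, auc
```

Per-fold curves themselves would typically come from `sklearn.metrics.roc_curve`.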
#### II-E2 Prediction
we assessed the models' predictive capacity using Gradient-weighted Class Activation Mapping (Grad-CAM), which uses gradients of the last convolutional layer to measure the relevance of the input image pixels for classification [32]. Specifically, Grad-CAM yields a distribution with high values for pixels that contributed more to the outcome.
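Given the last convolutional layer's activations and the class-score gradients (obtained, e.g., with `tf.GradientTape`), the Grad-CAM heatmap [32] reduces to a weighted channel sum; a sketch on precomputed arrays:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: weight each feature map (H, W, K) by its global-average-pooled
    gradient, sum over channels, apply ReLU, and normalize to [0, 1]."""
    weights = gradients.mean(axis=(0, 1))                 # shape (K,)
    cam = np.maximum((activations * weights).sum(-1), 0)  # ReLU, shape (H, W)
    return cam / cam.max() if cam.max() > 0 else cam
```

The resulting low-resolution heatmap is upsampled to the input size for overlay, as in Fig. 7.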
Furthermore, we utilized t-Distributed Stochastic Neighbor Embedding (t-SNE), a dimensionality reduction method that clusters similar high-dimensional samples and separates dissimilar ones in two- or three-dimensional space [33]. Specifically, given an array of learned features \(\mathbf{x}=[\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{N}]\), the similarity between features \(i\) and \(j\) can be measured by:
\[p_{ij}=\frac{p_{j|i}+p_{i|j}}{2N}\,, \tag{8}\]
\[p_{j|i}=\left\{\begin{array}{ll}\frac{\exp(-||\mathbf{x}_{i}-\mathbf{x}_{j} ||^{2}/2\sigma_{i}^{2})}{\sum_{k\neq i}\exp(-||\mathbf{x}_{i}-\mathbf{x}_{k}|| ^{2}/2\sigma_{i}^{2})}&:i\neq j\\ 0&:i=j\end{array}\right., \tag{9}\]
where \(p_{ij}\) is a probabilistic measure for the similarity between \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\), \(\sum_{i,j}p_{ij}=1\), \(\sum_{j}p_{j|i}=1\), and \(\sigma_{i}\) is the adaptive Gaussian kernel bandwidth. Now, t-SNE aims to learn the two- or three-dimensional map \(\mathbf{y}=[\mathbf{y}_{1},\mathbf{y}_{2},\cdots,\mathbf{y}_{N}]\) with a probabilistic similarity \(q_{ij}\) that resembles \(p_{ij}\), i.e. [33]:
\[q_{ij}=\frac{(1+||\mathbf{y}_{i}-\mathbf{y}_{j}||^{2})^{-1}}{\sum_{k\neq l}(1+ ||\mathbf{y}_{k}-\mathbf{y}_{l}||^{2})^{-1}}\,. \tag{10}\]
The similarity matching between \(q_{ij}\) and \(p_{ij}\) in t-SNE is maximized by minimizing the Kullback-Leibler divergence of \(p_{ij}\) from \(q_{ij}\) via gradient descent, i.e.:
\[\min_{\mathbf{y}_{i}}\left(\sum_{i\neq j}p_{ij}\log\left(\frac{p_{ij}}{q_{ij} }\right)\right)\,. \tag{11}\]
Both the Grad-CAM and t-SNE help in characterizing the models' predictive power when supplied with new data. In other words, given large enough training samples, if the Grad-CAM and t-SNE show genuine learning and clear separability, one may infer the model adequacy for unseen samples.
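The input-space similarities of Eqs. (8) and (9) can be sketched as follows; a single fixed bandwidth is used here for brevity, whereas t-SNE adapts \(\sigma_i\) per point to a target perplexity:

```python
import numpy as np

def conditional_p(x, sigma=1.0):
    """p_{j|i} of Eq. (9) for features x of shape (N, d); each row sums to 1."""
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    p = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(p, 0.0)                              # p_{i|i} = 0
    return p / p.sum(axis=1, keepdims=True)

def joint_p(x, sigma=1.0):
    """Symmetrized similarities p_ij of Eq. (8); they sum to 1 overall."""
    p = conditional_p(x, sigma)
    return (p + p.T) / (2.0 * len(x))
```

The full embedding (Eqs. (10)-(11)) is then produced by gradient descent, e.g., via `sklearn.manifold.TSNE`.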
#### II-E3 Misclassification
visualizing the model's misclassified samples is paramount for interpretability and for outlining performance caps. Moreover, it enables a better understanding for the model's weaknesses and for identifying human errors in annotation. For example, by assuming some error in the labeling process, the performance of a genuine model will be limited, or capped, by the labels' quality.
### _Computational complexity_
The complexity of the models was assessed by their total number of trainable parameters, required input image size, the models' memory size requirement, processing time, and their processing rate (frame rate). We analyzed their computational complexity by Monte-Carlo simulations where we fed the models with 4,000 test samples, predicted their health state (normal or damaged), and repeated the process ten times for validation. Note that this process does not include the imaging, data loading, nor preprocessing stages. It only quantifies the models' inference complexity. We used an Apple MacBook Pro with an ARM-based M1 Pro chip, 10-core CPU, integrated 16-core GPU, 16-core neural engine, and 16 GB of RAM. The experiments' codes were written in Python 3.9.7 using Tensorflow 2.7.0 and are publicly available.
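The Monte-Carlo timing procedure can be sketched as follows; `predict` stands for any trained model's inference call, and the dictionary keys are illustrative:

```python
import time

def inference_benchmark(predict, samples, repeats=10):
    """Monte-Carlo timing of the inference stage only (no imaging, data
    loading, or preprocessing), averaged over `repeats` passes."""
    per_sample = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        for s in samples:
            predict(s)
        per_sample.append((time.perf_counter() - t0) / len(samples))
    mean = sum(per_sample) / len(per_sample)
    return {"seconds_per_frame": mean, "fps": 1.0 / mean}
```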
## III Results and discussion
### _Model selection_
The best model, out of the 16 generated variants, was selected based on its ability to balance between precision and recall with minimum computational requirements. By examining the results in Table III, one notes that the model variants CNN9 and CNN15 yield the highest 5-fold averaged precision/recall balance (96.5% average). Besides, they are the top-2 models in terms of accuracy, precision, F1-score, and AUC. Nevertheless, due to the apparent disparity in computational resources (input image sizes: \(32\times 32\times 3\) vs. \(64\times 64\times 3\)), we opted for model CNN9 and used it for further analysis and comparison. Note that “Proposed CNN” refers to the CNN9 variant in the remainder of this paper.
### _Performance analysis_
Fig. 5 compares the training and testing accuracy/loss curves (averaged over the data splits) of the proposed CNN to the three baseline models. The results show the curves converging successfully after epoch 150, and the learning rate decay, scheduled at epoch 151, ensured stability by suppressing perturbations. This indicates that the training phase was executed long enough and was not terminated prematurely. Moreover, by examining the difference between the training and testing accuracy/loss curves, one notes that the three baseline models exhibit more overfitting than the proposed network, with Schuler _et al._ (2022) being the extreme case, followed by Zhou _et al._ (2019), and lastly, Zhou _et al._ (2021). We argue that this is caused by the unnecessarily high number of parameters, which hinders generalization.
The models' testing performance for detecting damaged rope images is demonstrated in Table IV and Fig. 6. The measures in Table IV show that the Zhou _et al._ (2021) model has the highest recall at 97.9%, which is the most important metric in order to prevent accidents. However, its precision, at 90.3%, is relatively low, suggesting that the model labels most rope images as damaged, making it impractical for a real-life setting. Moreover, both the Schuler _et al._ (2022) and the proposed models yield similar recall levels (97.1% and 97.2%, respectively). Nevertheless, the proposed CNN results in higher accuracy, precision, and F1-score with 2.3, 4.2, and 2.2 percentage point gains, respectively. In addition, it demonstrates a better precision/recall trade-off, which is reflected by the averaged ROC curve in Fig. 6 along with its AUC value (99.2%). Finally, the results suggest that the Zhou _et al._ (2019) and Schuler _et al._ (2022) models are not the best in any of the six metrics.
### _Computational complexity analysis_
Table V summarizes the complexity analysis results, which indicate that the proposed model is the fastest one, requiring approximately 19 milliseconds per input image and running in real-time at 54 fps. Nonetheless, it is important to note that the Zhou _et al._ (2019) and (2021) models are comparatively fast, but the Schuler _et al._ (2022) model operates below real-time at 12 fps. Besides, the listed prediction speeds could be further improved by running them in C++. However, the most notable and important difference is the proposed model's lighter memory footprint. Specifically, our model accepts small-sized images and requires less disk space for storage. These advantages can lead to savings in equipment and operational costs, improve latency, and mitigate privacy issues by not using cloud services.
### _The Grad-CAM and t-SNE analysis_
The Grad-CAM and t-SNE results are depicted in Figs. 7 and 8, respectively. In Fig. 7, we generated the Grad-CAM heatmap for two example images showing damaged ropes that were correctly classified by the proposed model. The results show that our CNN model is indeed focusing on the intuitively relevant parts of the input image, which are the broken strands. One also notes that the network does not focus on the ropes' oil and dirt residue, as demonstrated in Fig. 7(b). This suggests genuine learning by the model and robustness to environmental
Fig. 5: Comparing the models’ training and testing accuracy/loss curves averaged over the data splits in solid/dotted lines.
Fig. 6: The models’ 5-fold averaged ROC curves alongside their computed AUC values. The corner portion is magnified to ease visualization.
and operational conditions. However, it is important to note that the CNN still slightly focuses on the image background. Moreover, the t-SNE results in Fig. 8 demonstrate good class separation, but they also show a need for verifying some ground-truth labels. Specifically, the t-SNE shows a few rope images labeled as normal within the damaged rope support.
### _Misclassifications_
Two example prediction errors by our CNN are presented in Fig. 9. The rope in Fig. 9a is labeled as damaged, but predicted as normal, while the one in Fig. 9b is labeled as normal, but predicted as damaged. Such instances pose a challenge for the system because they are clearly in between the two classes; they are slightly worn with a few broken strings, but strictly not damaged, according to our experts. Despite that, these ropes are not likely to break at these spots and they would be classified as damaged after more wear. Moreover, the similarities between the two images suggest
Fig. 8: The proposed CNN t-SNE results showing good separation between the two classes. We used the model’s last layer features (FC(20) in Table A.1) as input to the t-SNE algorithm.
Fig. 7: The proposed CNN Grad-CAM heatmaps for two correctly classified damaged ropes. The heatmaps indicate the model’s adequacy by focusing on the pixels that are relevant for detecting damage. We used the model’s last convolutional layer features as input to the Grad-CAM algorithm.
possible annotation errors which may prevent the proposed model from reaching its full potential. Nonetheless, human errors are expected, and the model outcome still shows good potential and applicability.
## IV Conclusions
Damaged lifting ropes are a major safety hazard in manufacturing, cargo loading/unloading, and construction because they can lead to serious accidents, injuries, and financial losses. The visual inspection of damage in synthetic lifting ropes is a time-consuming task, interrupts operation, and may result in the premature disposal of ropes. Therefore, combining computer vision and deep learning techniques is intuitive for automation and advancement.
This work presents a novel vision-based deep learning solution for detecting damage in fiber lifting rope images. First, we built a three-camera circular array apparatus to photograph the rope's surface. Afterward, the rope surface images were collected in a database, annotated by experts, preprocessed to improve contrast, and split into train and test sets in a 5-fold cross-validation fashion. Moreover, we systematically designed a CNN-based model to classify damaged rope images, evaluated its detection and prediction performance using various tools, and compared it to three different baseline models. Additionally, we analyzed its computational complexity in terms of processing time and memory footprint. In summary, the results indicated various performance and computational advantages for using the proposed system when compared to similar solutions. Specifically, the system testing yielded 96.4% accuracy, 95.8% precision, 97.2% recall, 96.5% F1-score, 99.2% AUC, and a significant generalization capability. Besides, it runs at 54 fps, occupies 1.7 MB in memory, and requires low-resolution input images; thus, making the proposed system a real-time lightweight solution. The developed system was also found robust to various environmental and operational conditions e.g., oil residue and dust.
The proposed model's main drawback is that it does not determine the rope health state as a whole but assesses each image individually. Besides, the system's output is binary and does not directly communicate the rope's health condition. These limitations suggest extending the developed solution in various ways, such as: (1) collecting a larger training dataset with different rope sizes and types to improve generalization; (2) investigating other machine learning solutions and techniques; (3) including the cost of the imaging apparatus in the design process, e.g., using cheaper cameras; (4) extending the model's output to multiple classes, e.g., normal, worn, and damaged, or to a continuous score indicating the rope's health condition (regression); and (5) incorporating the proposed solution to automate or recommend spare-part ordering.
## Acknowledgment
We would like to thank Konecranes and Juhani Kerovuori for their collaboration on this project.
## Appendix A The proposed model variants
Table A.1 summarizes the proposed CNN model variants' architecture for implementation. The variants were generated by altering the number of blocks, input image sizes, and input image color state.
## Appendix B The Zhou et al. (2019) and (2021) models
The proposed damage detection solution was compared to the Zhou _et al._ (2019) [21] and Zhou _et al._ (2021)2[18] models in terms of performance and computational requirements. These models were originally designed to detect surface damage in steel wire rope images with high performance. Although the rope material is different from our experiments (steel vs. fiber), these detection models are still suitable for adoption due to the similarity between the two problems; they both deal with detecting damaged yarns or strands in rope images. The Zhou _et al._ (2019) and (2021) architectures accept grayscale input images of size \(64\times 64\) and \(96\times 96\), and produce outputs of size 3 and 2, respectively. In this work, we changed the Zhou _et al._ (2019) output shape to 2 in order to match our problem definition, and we added six dropout layers (\(0.5\) rate) to mitigate overfitting. Moreover, we increased the Zhou _et al._ (2021) original two dropout rates to 0.6 and added three more dropout layers to avoid overfitting.
Footnote 2: The adopted Zhou _et al._ (2021) model is named WRIPDCNN1 in [18].
## Appendix C The Schuler et al. (2022) model
The proposed damage detection solution was also compared to the Schuler _et al._ (2022)3[31] model; a highly efficient CNN-based classifier. The Schuler _et al._ (2022) model was designed in a clever manner to reduce the number of required parameters while maintaining high performance; the accuracy drop was 2% for a 55% reduction in parameters when tested on the CIFAR-10 dataset [31]. Therefore, we opted for this architecture for comparison because it is aligned with our design requirements; high efficiency and performance. The Schuler _et al._ (2022) model accepts colored input images of size \(32\times 32\times 3\), and in this work, we used its original implementation. Nonetheless, we reduced its output size from 10 to 2 in order to match our problem definition, and added dropout layers (\(0.4\) rate) to minimize overfitting.
Fig. 9: Two example misclassification samples by the proposed CNN.
## Appendix D Baseline training and evaluation settings
The baseline models' training followed the same procedure described in section II-D using the train set in each data split (five training sets), but with no weight decay. Besides, their performance was evaluated using the test set in each data split (five test sets) and quantified by the measures described in section II-E. Note that the Schuler model has caused out-of-memory issues when trained with a batch size above 32 using the equipment described in section II-F.
|
2301.08013 | Towards Rigorous Understanding of Neural Networks via
Semantics-preserving Transformations | In this paper we present an algebraic approach to the precise and global
verification and explanation of Rectifier Neural Networks, a subclass of
Piece-wise Linear Neural Networks (PLNNs), i.e., networks that semantically
represent piece-wise affine functions. Key to our approach is the symbolic
execution of these networks that allows the construction of semantically
equivalent Typed Affine Decision Structures (TADS). Due to their deterministic
and sequential nature, TADS can, similarly to decision trees, be considered as
white-box models and therefore as precise solutions to the model and outcome
explanation problem. TADS are linear algebras which allows one to elegantly
compare Rectifier Networks for equivalence or similarity, both with precise
diagnostic information in case of failure, and to characterize their
classification potential by precisely characterizing the set of inputs that are
specifically classified or the set of inputs where two network-based
classifiers differ. All phenomena are illustrated along a detailed discussion
of a minimal, illustrative example: the continuous XOR function. | Maximilian Schlüter, Gerrit Nolte, Alnis Murtovi, Bernhard Steffen | 2023-01-19T11:35:07Z | http://arxiv.org/abs/2301.08013v2 | # Towards Rigorous Understanding of Neural Networks via Semantics-preserving Transformations
###### Abstract
In this paper we present an algebraic approach to the precise and global verification and explanation of _Rectifier Neural Networks_, a subclass of _Piece-wise Linear Neural Networks_ (PLNNs), i.e., networks that semantically represent piece-wise affine functions. Key to our approach is the symbolic execution of these networks that allows the construction of semantically equivalent _Typed Affine Decision Structures_ (TADS). Due to their deterministic and sequential nature, TADS can, similarly to decision trees, be considered as white-box models and therefore as precise solutions to the model and outcome explanation problem. TADS are linear algebras which allows one to elegantly compare Rectifier Networks for equivalence or similarity, both with precise diagnostic information in case of failure, and to characterize their classification potential by precisely characterizing the set of inputs that are specifically classified or the set of inputs where two network-based classifiers differ. All phenomena are illustrated along a detailed discussion of a minimal, illustrative example: the continuous XOR function.
## 1 Introduction
Neural networks are perhaps today's most important machine learning models, with exciting results, e.g., in image recognition [10], speech recognition [12, 13] and even in highly complex games [14, 15, 16]. As the name suggests, neural networks are learned from data using efficient, but approximative training algorithms [13, 1]. At their core, neural networks are (dataflow-oriented) computation graphs [1]. They consist of many computation units, called _neurons_, that are arranged in layers such that computations in each layer can be performed in parallel, with successive layers only depending on the preceding layer. Modern neural networks, in practice, possess up to multiple billions of parameters [13] and leverage parallel hardware such as GPUs to perform computations of this scale [1]. This highly quantitative approach is responsible for exciting success stories, but also for their main weakness: Neural network behavior is often chaotic and hard to comprehend for a human. Perhaps most infamously, a neural network's prediction can change drastically under imperceptible changes to its input, so-called _adversarial examples_[1, 14, 15].
The explainability of neural networks, which are computationally considered as black-boxes due to their highly parallel and non-linear nature, is therefore one of the current core challenges in AI research [15]. The fact that neural networks are increasingly used in safety-critical systems such as self-driving cars [1] turns trustworthiness of machine learning into a must [15]. However, state-of-the-art explanation technology is more about reassuring intuition, e.g., to support cooperative work of humans with AI systems, such as in the field of medical diagnostics [16], than about precise explanation or guarantees [17]. Moreover, current approaches to Neural Network verification are still in their infancy in that they are not yet sufficiently tailored to the nature of Neural Networks to achieve the required scalability or to provide diagnostic information beyond individual witness traces in cases where the verification attempts fail (cf. [1, 18, 19] and Section 8 for a more detailed discussion).
In this paper, we present an algebraic approach to the verification and explanation of Rectifier Neural Networks (PLNN), a very popular subclass of neural networks that semantically represent piece-wise affine functions (PAF) [20]. Key to our approach are _Typed Affine Decision Structures_ (TADS) that concisely represent PAF in a white-box fashion that is as accessible to human understanding as decision trees. TADS can nicely be derived from PLNNs via symbolic execution [14, 15], or, alternatively, compositionally along the PLNN's layering structure, and their algebraic structure allows for elegant solutions to verification and explanation tasks:
* TADS can be used for PLNNs similarly as _Algebraic Decision Diagrams_ (ADDs) have been used for Random Forests in [13] to elegantly provide model and outcome explanations as well as class characterizations.
* Using the algebraic operations of TADS one can not only decide the equivalence problem, i.e., whether two PLNNs are semantically equivalent, but also whether they are \(\epsilon\)-similar, i.e, never differ more than \(\epsilon\). In both cases, diagnostic information in terms of a corresponding 'difference' TADS is provided that precisely specifies where one of these properties is violated.
* TADS comprise non-continuous piece-wise linear operations which cannot be represented by PLNNs. This is necessary to not only deal with _regression tasks_, where one aims at approximating continuous functions, but also with _classification tasks_ with discrete output domains.1 In the latter case, TADS-based class characterization allows one to precisely characterize the set of inputs that are classified as members of a given class, or the set of inputs where two (PLNN-based) classifiers differ. Footnote 1: As PLNNs always represent continuous functions, an additional outcome interpretation mechanism is needed to bridge the gap from continuous networks to discrete classification tasks.
* Finally, TADS can also profitably be used for the verification of preconditions and postconditions, the illustration of which is beyond the scope of this paper but will be discussed in [12] in the setting of digit recognition.
The paper illustrates the essential features of TADS using a minimal, illustrative example: the continuous XOR function. The simplicity of XOR is ideally suited to provide an intuitive entry into the presented theory. A more comprehensive example is presented in [12], where digit recognition based on the MNIST data base is considered. In this highly dimensional setting, specific scalability measures are required to apply our TADS technology.
After specifying the details of our running example in Section 2, Section 3 sketches Algebraic Decision Structures that later on will be instantiated with Affine Functions recalled in Section 4 to introduce the central notion of this paper, Typed Affine Decision Structures (TADS). Semantically, TADS represent piece-wise affine functions which mark them as a fitting representation for Rectifier Networks that represent continuous piece-wise affine functions2 and that are discussed in Section 5. Our main contribution is the derivation of TADS, using both symbolic execution and compositionality along the layering structure of PLNN, as a complete and precise model explanation of PLNNs. We introduce TADS in Section 6 and state important algebraic properties that allow the manipulations mentioned beforehand. Subsequently, Section 7 illustrates the impact on verification and explanation of the algebraic properties of TADS that are also established in Section 6 along the running example. The paper closes, after a discussion of related work in Section 8, with conclusions and directions to future work in Section 9.
Footnote 2: Rectifier Networks are often also called Piece-wise Linear Neural Networks, the reason for us to denote them as PLNNs.
## 2 Running Example - XOR
As a running example throughout this paper, we discuss the XOR function under the perspective of a regression task and a classification task as specified below. We chose the XOR problem for illustration for two reasons:
* The XOR problem concerns a two-dimensional function which can be visualized as a function plot.
* While simple, the XOR problem has been a roadblock in early AI research due to it not being solvable by linear approaches [16]. Therefore, it
Figure 1: A baseline solution to the XOR-regression problem given by \(f_{*}(x,y)=|x-y|\). Note that this function is piece-wise linear, having two separate linear regions, which is minimal for the problem.
is a minimal problem that still _requires_ a more powerful, non-linear model such as a PLNN.
For the following formalizations, let us fix some basic notation: The set of natural numbers (including zero) is denoted \(\mathbb{N}\) and the set of real numbers \(\mathbb{R}\). Unions and intersections of sets are defined as usual. The cartesian product of two sets is defined as
\[M\times N:=\{\,(x,y)\,|\,x\in M\,\wedge\,y\in N\,\}.\]
A sequence of cartesian products over the same set may be abbreviated as \(M^{n}\) (\(n\geq 0\)) where
\[M^{0}=\emptyset\qquad M^{1}=M\qquad M^{n+2}=M\times M^{n+1}\]
and the Kleene star operator is defined as
\[M^{*}:=\bigcup_{n\in\mathbb{N}}M^{n}\]
Moreover, intervals of \(\mathbb{R}\) are of the form \([a,b]\) for \(a,b\in\mathbb{R}\) with \(a\leq b\) and denote the set of all real values between \(a\) and \(b\).
As a _regression_ task, the XOR problem is stated as follows:
**2.1 Definition** (XOR Regression).: Find a piecewise affine function \(f\colon[0,1]^{2}\to[0,1]\) that is continuous and satisfies:
\[f(0,0)\approx 0\approx f(1,1)\quad\text{and}\quad f(1,0)\approx 1\approx f(0,1)\]
Thus, a learning algorithm is tasked with approximating a continuous version of an XOR gate, interpolating between the four edge points for which the XOR function is defined.3
Footnote 3: The restriction to the interval \([0,1]\) is meant to ease the exposition.
When posing the XOR problem as a classification task, the XOR function can be regarded as a function with _discrete_ binary output 1 or 0 but with a continuous domain \(\mathbb{R}^{2}\).
**2.2 Definition** (XOR Classification).: Find a piece-wise affine function \(f\colon[0,1]^{2}\to\{0,1\}\) such that:
\[f(0,0)=0=f(1,1)\quad\text{and}\quad f(1,0)=1=f(0,1)\]
As the XOR-problem requires fixed values only at four points, there exist infinitely many solutions. This is typical for machine learning problems where only some few points are fixed and others are left for the machine learning model to freely interpolate. Different machine learning models have different principles that dictate this interpolation. For example, concerning PLNNs the interpolation is (piece-wise) linear.
In line with the principle of Occam's razor [20], humans4 would optimally solve the XOR-regression problem with a function as simple as:
Footnote 4: In contrast to, e.g., solutions learned by machines.
\[f_{*}(x,y)=|x-y|\]
A visualisation of \(f_{*}\) can be found in Figure 1. Similarly, a human would probably choose the following corresponding straightforward extension to the classification problem:
\[g_{*}(x,y)=\begin{cases}1&\text{if }f_{*}(x,y)\geq 0.5\\ 0&\text{otherwise}\end{cases}\]
An illustration of \(g_{*}\) can be found in Figure 2. It is straightforward to check that these functions solve the XOR-regression and XOR-classification problems optimally in the sense that they match the traditional XOR function on all points where it is defined.
The continuous XOR problems will serve as running examples throughout this work: We will demonstrate different representations of piece-wise linear functions (such as \(f_{*}\)) and transformations between them along the development of our theory, and showcase differences between the manually constructed solutions to the regression and classification tasks and their learned counterparts in Section 7.
## 3 Algebraic Decision Structures
In order to prepare the algebraic treatment of decision structures, we focus on decision structures whose leaves
Figure 2: A baseline solution to the XOR-classification problem given by \(g_{*}(x,y)=1\) iff \(f_{*}(x,y)\geq 0.5\).
are labeled with the elements of an algebra \(A=(\mathcal{A},O)\), so-called _Algebraic Decision Structures_ (ADS). This subsumes the classical case where leaves of decision structures are elements of a set, as these are simply algebras where \(O\) is empty. In this section we summarize the definitions and theorems of [10] which are required later in this paper.5
Footnote 5: Some definitions and theorems were slightly improved or adjusted from [10] for better alignment with the rest of this paper. We omit the proofs for the adjustments because they are straightforward.
**3.1 Definition** (Algebraic Structure).:
An _algebraic structure_, or _algebra_ for short, is a pair \((\mathcal{A},O)\) of a carrier set \(\mathcal{A}\) and a set of operations \(O\). Operations \(\mathit{op}\in O\) have a fixed arity and are closed under \(\mathcal{A}\).
In the following the algebra is identified with its carrier set and both are written calligraphically.
**3.2 Definition** (Algebraic Decision Tree).: An Algebraic Decision Tree (ADT) over the algebra \(A\) and the predicates \(\mathcal{P}\) is inductively defined by the following BNF:
\[T\;::=\;a\;\mid\;(p,T,T)\]

where \(a\in\mathcal{A}\) is a leaf and \(p\in\mathcal{P}\) is a predicate guarding a true subtree and a false subtree.
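To make the recursive structure concrete, here is a hypothetical Python encoding of ADTs (our own naming, not from the paper): a leaf is any non-tuple value of the carrier set, and an inner node is a triple of a predicate and two subtrees. Evaluation follows the predicate at each node:

```python
# Hypothetical encoding of the BNF T ::= a | (p, T, T): a leaf is a carrier
# element, an inner node is (predicate, true-subtree, false-subtree).

def evaluate(adt, x):
    """Evaluate an algebraic decision tree on input x."""
    if not isinstance(adt, tuple):
        return adt  # leaf: an element of the algebra's carrier set
    p, t_true, t_false = adt
    return evaluate(t_true, x) if p(x) else evaluate(t_false, x)

# Example: a tree over R^2 with symbolic labels as leaves.
tree = (lambda v: v[0] - v[1] >= 0,
        (lambda v: v[0] - v[1] > 0, "pos", "zero"),
        "neg")
assert evaluate(tree, (2, 1)) == "pos"
assert evaluate(tree, (1, 1)) == "zero"
assert evaluate(tree, (0, 1)) == "neg"
```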
**3.8 Definition (Vacuity)**.: Let \(\bar{\mathcal{P}}\) be the set of negated predicates of \(\mathcal{P}\). Then we call \(\pi=p_{0}\cdots p_{m}\in(\mathcal{P}\cup\bar{\mathcal{P}})^{*}\) a predicate path.
* \(\pi\) is called a predicate path of a decision structure \(t\in\mathcal{S}_{A}\) iff there exists a path \(\pi^{\prime}=p_{0}^{\prime}\cdots p_{m}^{\prime}\in\mathcal{P}^{*}\) from the root of \(t\) to one of its other nodes such that \(p_{i}=p_{i}^{\prime}\) in case that \(\pi^{\prime}\) follows the left/true branch at \(p_{i}\) in \(t\) and \(p_{i}=\bar{p}_{i}^{\prime}\) otherwise. We denote the last predicate \(p_{m}\in\mathcal{P}\cup\bar{\mathcal{P}}\) of \(\pi\) by \(\mathit{final}(\pi)\).
* Given a predicate path \(\pi=p_{0}\cdots p_{m}\) the predicate \(\mathit{final}(\pi)\) is called _vacuous_ for \(\pi\) iff the conjunction of the preceding predicates \(p_{0}\wedge\cdots\wedge p_{m-1}\) in \(\pi\) implies \(\mathit{final}(\pi)\).
* Let \(\Pi_{n}\) be the set of predicate paths of \(t\in\mathcal{S}_{A}\) that end in a given node \(n\). We call \(n\) vacuous in \(t\) iff \(\mathit{final}(\pi)\) is vacuous for all paths \(\pi\in\Pi_{n}\) and \(\mathit{final}(\pi)\) coincides for all \(\pi\in\Pi_{n}\).
* A decision structure \(t\in\mathcal{S}_{A}\) is called _vacuity-free_ iff there exists no vacuous node.
This allows us to define the following optimization step.
**3.9 Definition (Vacuity Reduction)**.: Let \(t\in\mathcal{S}_{A}\) be a decision structure with a vacuous node \(n\) and \(\mathit{final}(\pi)\in\mathcal{P}\cup\bar{\mathcal{P}}\) be the last predicate of some predicate path \(\pi\) ending in \(n\). Then, re-routing all incoming edges of \(n\) to the 'true'-successor of \(n\) in case of \(\mathit{final}(\pi)\in\mathcal{P}\) and to the 'false'-successor otherwise is called a _vacuity reduction_ step.
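A simplified illustration of this step, using our own names and a decision *tree* whose predicates are syntactically comparable: a node repeating a predicate already decided on the path is vacuous, and we reroute to the branch that the recorded polarity implies. Note that Definition 3.8 is more general, since it allows the preceding predicates to *semantically imply* the final one; the sketch below only covers syntactic repetition.

```python
# Simplified vacuity reduction for decision trees: if a node repeats a
# predicate already decided on the current path, replace the node by the
# subtree that the recorded polarity selects.

def reduce_vacuity(t, decided=None):
    decided = decided or {}  # predicate -> polarity on the current path
    if not isinstance(t, tuple):
        return t
    p, t_true, t_false = t
    if p in decided:  # vacuous: the outcome of p is already fixed
        return reduce_vacuity(t_true if decided[p] else t_false, decided)
    return (p,
            reduce_vacuity(t_true, {**decided, p: True}),
            reduce_vacuity(t_false, {**decided, p: False}))

t = ("p", ("p", "a", "b"), "c")  # the inner "p" is vacuous on both branches
assert reduce_vacuity(t) == ("p", "a", "c")
```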
ADSs, being DAGs, only have finitely many predicate paths which can be effectively analysed for vacuous predicates as long as the individual predicates are decidable. As the elimination of vacuous predicates is a simple semantics-preserving transformation we have:
**3.10 Theorem (Minimality)**.: _Every ADS can be effectively transformed into a semantically equivalent, vacuity-free ADS that is minimal in the sense that any further reduction would change its semantics._
In the remainder of the paper, we will not explicitly discuss the effects of semantic reduction and vacuity reduction. Rather, we will concentrate on the algebraic properties of ADS that they inherit from their leaf algebra via lifting.
### Lifting
It is well-known that algebraic structures \(A=(\mathcal{A},O)\) can point-wise be lifted to cartesian products and arbitrary function spaces \(M\to A\). This has successfully been exploited for Binary Decision Diagrams (BDDs) and Algebraic Decision Diagrams (ADDs) that canonically represent functions of type \(\mathbb{B}^{n}\rightarrow\mathbb{B}\) and \(\mathbb{B}^{n}\rightarrow\mathcal{A}\) respectively. In fact, the canonicity of these representations allows one to identify the BDD/ADD representations directly with their semantics which, in particular, reduces the verification of semantic equivalence to checking for isomorphism.
In our case, canonicity is unrealistic for two reasons (cf. Section 6.1):
1. Considering predicates rather than Boolean values introduces infeasibility and thereby prohibits minimal canonical representations.
2. The ordering of predicates may lead to an exponential explosion of the representation. Note that, in contrast to, e.g., the typical BDD setting, we do not have just a few (64, 128, 256, ... or the like) input bits that specify the control of some circuit; rather, predicates capture the effect of the ReLU function in a history-dependent way: predicates that result from computations in a later layer depend on predicates from earlier layers. Moreover, as predicates are continuous objects, the probability of two of them coinciding can be considered 0. Thus, ordering predicates would typically lead to representations that are doubly exponential in the number of neurons of a neural network.
We will see, however, that all the algebraic properties we need also hold for unordered ADSs, and that we can conveniently compute on (arbitrary) representatives of the partition defined by semantic equivalence. This way we arrive at an exponential worst-case complexity (in the size of the argument PLNNs) both for the algebraic operations and for deciding semantic equivalence.
Although ADSs are not canonical one can effectively apply operators on concrete representatives while preserving semantics. Every operator can be lifted inductively as follows
**3.11 Definition (Lifted Operators)**.: For every operator \(\Box\colon\mathcal{A}^{2}\to\mathcal{A}\) of an algebra \(A=(\mathcal{A},O)\) we define the lifted operator \(\blacksquare\colon\mathcal{S}_{A}{}^{2}\to\mathcal{S}_{A}\) inductively as:

\[a_{1}\mathbin{\blacksquare}a_{2} :=a_{1}\mathbin{\Box}a_{2}\qquad\text{for leaves }a_{1},a_{2}\in\mathcal{A}\] \[(p,t_{1},t_{2})\mathbin{\blacksquare}t :=(p,\,t_{1}\mathbin{\blacksquare}t,\,t_{2}\mathbin{\blacksquare}t)\] \[a\mathbin{\blacksquare}(p,t_{1},t_{2}) :=(p,\,a\mathbin{\blacksquare}t_{1},\,a\mathbin{\blacksquare}t_{2})\qquad\text{for }a\in\mathcal{A}\]
Intuitively, for two ADS \(t_{1}\) and \(t_{2}\), this construction replaces leaves in \(t_{1}\) with copies of \(t_{2}\). Thus, each path of the resulting ADS \(t_{3}\) expresses a conjunction of one path in \(t_{1}\) and one path in \(t_{2}\). The partition of the domain imposed by all paths of \(t_{2}\) therefore coincides with the intersection imposed by the intersection of partitions imposed by \(t_{1}\) and \(t_{2}\). The required lifting of the operators to leaf nodes is straightforward (cf. Figure 4 for illustration).
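This lifting can be sketched directly on the tuple encoding of decision trees (a minimal sketch with our own names, assuming leaves are plain values and nodes are triples):

```python
# Lifting a binary leaf operation to decision trees: descend into t1 first;
# at each leaf of t1, descend into a copy of t2; apply op on leaf pairs.

def lift(op, t1, t2):
    if isinstance(t1, tuple):
        p, a, b = t1
        return (p, lift(op, a, t2), lift(op, b, t2))
    if isinstance(t2, tuple):
        p, a, b = t2
        return (p, lift(op, t1, a), lift(op, t1, b))
    return op(t1, t2)  # both arguments are leaves: apply the leaf operation

t1 = ("p", 1, 2)
t2 = ("q", 10, 20)
# Each leaf of t1 is replaced by a copy of t2 with op applied at the leaves.
assert lift(lambda x, y: x + y, t1, t2) == ("p", ("q", 11, 21), ("q", 12, 22))
```

Each path of the result indeed conjoins one path of `t1` with one path of `t2`, mirroring the intersection of the two induced partitions.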
The following theorem which states the correctness of the lifted operators can straightforwardly be proved by induction:
**3.12 Theorem** (Correctness of Lifted Operators): _Let \(t_{1},t_{2}\in\mathcal{S}_{\mathcal{A}}\) be two ADS over some algebra \(A=(\mathcal{A},O)\). Let \(\blacksquare\colon\mathcal{S}_{\mathcal{A}}{}^{2}\to\mathcal{S}_{\mathcal{A}}\) denote the lifted version of the operator \(\Box\in O\). Then the following equation holds for all \(\sigma\in\Sigma\):_
\[\llbracket t_{1}\mathbin{\blacksquare}t_{2}\rrbracket_{\mathcal{S}_{\mathcal{A}}}(\sigma)=\llbracket t_{1}\rrbracket_{\mathcal{S}_{\mathcal{A}}}(\sigma)\mathbin{\Box}\llbracket t_{2}\rrbracket_{\mathcal{S}_{\mathcal{A}}}(\sigma)\]
### Abstraction
Abstraction is one of the most powerful means for achieving scalability. The following easy to prove theorem concerns the interplay of abstractions imposed by a homomorphism of the leaf algebra and their effect on some classification function.
**3.13 Theorem** (Abstraction): _Let \(A=(\mathcal{A},O)\) and \(A^{\prime}=(\mathcal{A}^{\prime},O^{\prime})\) be two algebras, and \(\alpha\colon A\to A^{\prime}\) a homomorphism. Then \(\alpha_{S}\colon\mathcal{S}_{A}\to\mathcal{S}_{A^{\prime}}\), defined by simply applying \(\alpha\) to all the leaves of the argument ADS, commutes with the semantics, i.e., \(\llbracket\alpha_{S}(t)\rrbracket_{\mathcal{S}_{A^{\prime}}}=\alpha\circ\llbracket t\rrbracket_{\mathcal{S}_{A}}\) for all \(t\in\mathcal{S}_{A}\)._
We will see in Section 7 how elegantly abstraction can be dealt with in the TADS setting: The abstraction that transforms the XOR regression setting into a classification setting can be easily realized via the TADS composition operator.
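The commuting property of Theorem 3.13 can be illustrated concretely on the tuple encoding of decision trees (a sketch with our own helper names; the thresholding map from regression values to class labels plays the role of the homomorphism \(\alpha\)):

```python
# Applying a leaf map alpha before evaluation (on the abstracted tree) or
# after evaluation (on the concrete result) yields the same outcome.

def evaluate(t, x):
    if not isinstance(t, tuple):
        return t
    p, t_true, t_false = t
    return evaluate(t_true, x) if p(x) else evaluate(t_false, x)

def map_leaves(alpha, t):
    if not isinstance(t, tuple):
        return alpha(t)
    p, t_true, t_false = t
    return (p, map_leaves(alpha, t_true), map_leaves(alpha, t_false))

alpha = lambda v: 1 if v >= 0.5 else 0  # regression value -> class label
t = (lambda v: v[0] >= 0.0, 0.8, 0.2)
for x in ([1.0], [-1.0]):
    assert evaluate(map_leaves(alpha, t), x) == alpha(evaluate(t, x))
```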
## 4 Affine Functions
The following notations of linear algebra are based on the book [1]. The real vector space \((\mathbb{R}^{n},+,\cdot)\) with \(n>0\) is an algebraic structure with the operations
\[+\colon\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}^{n} \text{vector addition}\] \[\cdot\colon\mathbb{R}\ \times\mathbb{R}^{n}\to\mathbb{R}^{n} \text{scalar multiplication}\]
which are defined as
\[(x_{1},\ldots,x_{n})+(y_{1},\ldots,y_{n}) =(x_{1}+y_{1},\ldots,x_{n}+y_{n})\] \[\lambda\cdot(x_{1},\ldots,x_{n}) =(\lambda\cdot x_{1},\ldots,\lambda\cdot x_{n})\]
A real vector \((x_{1},\ldots,x_{n})\) of \(\mathbb{R}^{n}\) is abbreviated as \(\vec{x}\). To refer to one of the components we write \(\vec{x}_{i}:=x_{i}\) (note the arrow ends before the subscript in contrast to \(\vec{x_{i}}\) which denotes the \(i\)-th vector). The dimension of a real vector space \(\mathbb{R}^{n}\) is given as \(\dim\mathbb{R}^{n}=n\).
A matrix \(\mathbf{W}\) is a collection of real values arranged in a rectangular array with \(n\) rows and \(m\) columns.
\[\mathbf{W}=\begin{pmatrix}w_{1,1}&w_{1,2}&\ldots&w_{1,m}\\ w_{2,1}&w_{2,2}&\ldots&w_{2,m}\\ \vdots&\vdots&\ddots&\vdots\\ w_{n,1}&w_{n,2}&\ldots&w_{n,m}\end{pmatrix}\]
To indicate the number of rows and columns one says \(\mathbf{W}\) has _type_\(n\times m\) also notated as \(\mathbf{W}\in\mathbb{R}^{n\times m}\).
An element at position \(i,j\) of the matrix \(\mathbf{W}\) is denoted by \(\mathbf{W}_{i,j}:=w_{i,j}\) (where \(1\leq i\leq n\) and \(1\leq j\leq m\)). A matrix \(\mathbf{W}\in\mathbb{R}^{n\times m}\) can be reflected along the main diagonal resulting in the transpose \(\mathbf{W}^{\top}\) of shape \(m\times n\) defined by the equation
\[(\mathbf{W}^{\top})_{i,j}:=\mathbf{W}_{j,i}\]
The \(i\)-th row of \(\mathbf{W}\) can be regarded as a \(1\times m\) matrix given by
\[\mathbf{W}_{i,\cdot}:=(w_{i,1},\ldots,w_{i,m}).\]
Similarly, the \(j\)-th column of \(\mathbf{W}\) can be regarded as a \(n\times 1\) matrix defined as
\[\mathbf{W}_{\cdot,j}:=(w_{1,j},\ldots,w_{n,j})^{\top}.\]
Matrix addition is defined over matrices with the same type to be component-wise, i.e,
\[(\mathbf{W}+\mathbf{N})_{i,j}:=\mathbf{W}_{i,j}+\mathbf{N}_{i,j}\]
and scalar multiplication as
\[(\lambda\cdot\mathbf{W})_{i,j}:=\lambda\cdot\mathbf{W}_{i,j}.\]
The (type-correct) multiplication of two matrices \(\mathbf{W}\in\mathbb{R}^{n\times r}\) and \(\mathbf{N}\in\mathbb{R}^{r\times m}\) is defined as
\[(\mathbf{W}\cdot\mathbf{N})_{i,j}:=\sum_{k=1}^{r}\mathbf{W}_{i,k}\cdot\mathbf{N}_{k,j}\]
Identifying
* \(n\times 1\) matrices with (column) vectors
* \(1\times m\) matrices with row vectors
* \(1\times 1\) matrices with scalars
as indicated above, makes the well known dot product of \(\vec{v},\vec{w}\in\mathbb{R}^{n}\)
\[\vec{v}^{\top}\cdot\vec{w}:=\sum_{i=1}^{n}\vec{v}_{i}\cdot\vec{w}_{i}\]
just a special case of matrix multiplication. The same holds for matrix-vector multiplication, which is defined for an \(n\times m\) matrix \(\mathbf{W}\) and a vector \(\vec{x}\in\mathbb{R}^{m}\) as

\[(\mathbf{W}\vec{x})_{i}:=(\mathbf{W}_{i,\cdot})\,\vec{x}\]
Matrices with the same number of rows and columns, i.e., with type \(n\times n\) for some \(n\in\mathbb{N}\), are said to be _square matrices_. Square matrices have a neutral element for matrix multiplication, called _identity matrix_, that is zero everywhere except for the entries on the main diagonal which are one.
\[\mathbf{I}^{n}:=\begin{pmatrix}1&&0\\ &\ddots&\\ 0&&1\end{pmatrix}\]
The \(i\)-th unit vector \(\vec{e_{i}}\) is a vector where all entries are zero except the \(i\)-th which is one.
### Piece-wise Affine Functions
#### 4.1 Definition (Affine Function)
A function \(\alpha\colon\mathbb{R}^{n}\to\mathbb{R}^{m}\) is called _affine_ iff it can be written as
\[\alpha(\vec{x})=\mathbf{W}\vec{x}+\vec{b}\]
for some matrix \(\mathbf{W}\in\mathbb{R}^{m\times n}\) and vector \(\vec{b}\in\mathbb{R}^{m}\).7
Footnote 7: Note that, in a traditional neural network application, the \(\mathbf{W}\) and \(\vec{b}\) occurring in a network are the result of some training procedure. In this work, we assume that they are always known and fixed.
We identify the semantics and syntax of affine functions with the pair \((\mathbf{W},\vec{b})\) which can be considered as a canonical representation of affine functions. Furthermore, we denote the set of all affine functions \(\mathbb{R}^{n}\to\mathbb{R}^{m}\) as \(\Phi(n,m)\) and call \((n,m)\) the _type_ of \(\Phi(n,m)\). The untyped version \(\Phi\) is meant to refer to the set of all affine functions, independently of their type.
#### 4.2 Lemma (Operations on Affine Functions)
_Let \(\alpha_{1},\alpha_{2}\) be two affine functions in canonical form, i.e.,_
\[\alpha_{1}(\vec{x}) =\mathbf{W_{1}}\vec{x}+\vec{b_{1}}\] \[\alpha_{2}(\vec{x}) =\mathbf{W_{2}}\vec{x}+\vec{b_{2}}\]
_Assuming matching types, the operations \(+\) (addition), \(\cdot\) (scalar multiplication), and \(\circ\) (function application) can be calculated on the representation as_
\[(s\cdot\alpha_{1})(\vec{x}) =(s\cdot\mathbf{W_{1}})\,\vec{x}+(s\cdot\vec{b_{1}})\] \[(\alpha_{1}+\alpha_{2})(\vec{x}) =(\mathbf{W_{1}}+\mathbf{W_{2}})\,\vec{x}+(\vec{b_{1}}+\vec{b_{2}})\] \[(\alpha_{2}\circ\alpha_{1})(\vec{x}) =(\mathbf{W_{2}}\mathbf{W_{1}})\,\vec{x}+(\mathbf{W_{2}}\vec{b_{1}}+\vec{b_{2 }})\]
_resulting again in an affine function in canonical representation._
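The composition rule of Lemma 4.2 can be spot-checked with a few lines of plain Python (a sketch with made-up values; helper names are ours):

```python
# Check of the composition rule (a2 o a1)(x) = (W2*W1) x + (W2*b1 + b2)
# on concrete 2x2 data, using plain-Python matrix helpers.

def matvec(W, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def matmul(A, B):
    cols = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in cols] for row in A]

def affine(W, b):
    return lambda x: [v + bi for v, bi in zip(matvec(W, x), b)]

W1, b1 = [[1, -1], [-1, 1]], [1, 2]
W2, b2 = [[1, 2], [3, 4]], [5, 6]

# Canonical representation of the composition, per Lemma 4.2.
composed = affine(matmul(W2, W1), [v + c for v, c in zip(matvec(W2, b1), b2)])
for x in ([1, 0], [0, 1], [2, -3]):
    assert composed(x) == affine(W2, b2)(affine(W1, b1)(x))
```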
It is well-known that the type resulting from function composition evolves as follows
\[\circ\colon\Phi(r,m)\times\Phi(n,r)\to\Phi(n,m).\]
The type of the operation is important for the closure axiom, the basis for most algebraic structures. This leads to the following well-known theorem [1]:
#### 4.3 Theorem (Algebraic Properties)
_Denoting, as usual, scalar multiplication with \(\cdot\) and function composition with \(\circ\), we have:_
* \((\Phi(n,m),+,\cdot)\) _forms a vector space and_
* \((\Phi(n,n),\circ)\) _forms a monoid._
This theorem can straightforwardly be lifted to the untyped \(\Phi\) by simply restricting all operations to the cases where they are well-typed, i.e., where addition is restricted to functions of the same type \((+_{t})\), and function composition to situations where the output type of the first function matches the input type of the second \((\circ_{t})\):
#### 4.4 Theorem (Properties of Typed Operations)
\((\Phi,+_{t},\cdot,\circ_{t})\) _forms a typed algebra, i.e, an algebraic structure that is closed under well-typed operations._
Piece-wise affine functions are usually defined over a polyhedral partitioning of the pre-image space [1, 1, 2, 10].
#### 4.5 Definition (Halfspace)
Let \(\vec{w}\in\mathbb{R}^{n}\) and \(b\in\mathbb{R}\). Then the set
\[p=\{\,\vec{x}\in\mathbb{R}^{n}\,|\,\vec{w}^{\top}\cdot\vec{x}+b=0\,\}\]
is called a _hyperplane_ of \(\mathbb{R}^{n}\). A hyperplane partitions \(\mathbb{R}^{n}\) into two convex subspaces, called _halfspaces_. The positive respectively negative halfspace of \(p\) is defined as
\[p^{+} :=\{\,\vec{x}\in\mathbb{R}^{n}\,|\,\vec{w}^{\top}\cdot\vec{x}+b>0\,\}\] \[p^{-} :=\{\,\vec{x}\in\mathbb{R}^{n}\,|\,\vec{w}^{\top}\cdot\vec{x}+b<0\,\}\]
**4.6 Definition (Polyhedron).** A polyhedron \(Q\subseteq\mathbb{R}^{n}\) is the intersection of \(k\) halfspaces for some natural number \(k\).
\[Q=\bigcap_{i=1}^{k}\{\,\vec{x}\in\mathbb{R}^{n}\,|\,\vec{w_{i}}^{\top}\cdot\vec {x}+b_{i}\leq 0\,\}\]
**4.7 Definition (Piece-wise Affine Function).** A function \(\psi\colon\mathbb{R}^{n}\to\mathbb{R}^{m}\) is called _piece-wise affine_ if it can be written as
\[\psi(\vec{x})=\alpha_{i}(\vec{x})\ \ \text{for}\ \ \vec{x}\in Q_{i}\]
where \(Q=\{\,Q_{1},\ldots,Q_{k}\,\}\) is a set of polyhedra that partitions the space of \(\vec{x}\) and \(\alpha_{1},\ldots,\alpha_{k}\) some affine functions. We call \(\alpha_{i}=\mathbf{W}_{i}\vec{x}+\vec{b}_{i}\) (\(1\leq i\leq k\)) the function associated with polyhedron \(Q_{i}\).
The proof of the following property is straightforward:
**4.8 Proposition (Continuity).**_A piece-wise affine function is continuous iff at each border between two connected polyhedra the affine functions associated with the two polyhedra agree._
### The Activation Function ReLU
In this paper, we focus on neural network architectures that use the ReLU activation function:
**4.9 Definition (ReLU).** The Rectified Linear Unit (ReLU)
\[\text{ReLU}^{k}:\mathbb{R}^{k}\to\mathbb{R}^{k}_{+}\]
is a projection of \(\mathbb{R}^{k}\) onto the space of positive vectors \(\mathbb{R}^{k}_{+}\) defined by replacing each component \(x_{i}\) of a vector \(\vec{x}\) by \(\max\{\,0,x_{i}\,\}\):
\[\big{(}\ \text{ReLU}^{k}(\vec{x})\,\big{)}_{j}:=\max\{\,0,x_{j}\,\}\]
If the input dimension is clear, we omit the index and just write ReLU.
The term \(\max\{\,0,x_{i}\,\}\) is continuous and piece-wise linear. As ReLU operates independently on all dimensions of its input, it is itself piece-wise linear.
From a practical perspective, ReLU is one of the best understood activation functions, and the corresponding rectifier networks are one of the most popular modern neural network architectures [1].
For ease of notation in later sections, we use the fact that ReLU operates on each component of a vector independently and can therefore be decomposed into
\[\text{ReLU}^{k}=\phi_{k}^{k}\circ\phi_{k-1}^{k}\circ\cdots\circ\phi_{1}^{k}\]
where \(\phi_{i}^{k}:\mathbb{R}^{k}\to\mathbb{R}^{k}\) is the _partial ReLU function_ defined by setting the \(i\)-th component of a vector \(\vec{x}\) to \(0\) iff \(x_{i}<0\). More formally,
\[\big{(}\,\phi_{i}^{k}(\vec{x})\,\big{)}_{j}:=\begin{cases}x_{j}&\text{if }i \neq j\\ \max\{\,0,x_{j}\,\}&\text{if }i=j\end{cases}\,.\]
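The decomposition of ReLU into partial ReLUs is easy to validate numerically (a minimal sketch; note that the code indexes components from 0 while the paper's \(\phi_{i}^{k}\) indexes from 1):

```python
# ReLU^k equals the composition of the k partial ReLUs phi_1, ..., phi_k,
# where phi_i clamps only the i-th component at zero.

def relu(x):
    return [max(0.0, xi) for xi in x]

def phi(i):
    """Partial ReLU acting on the i-th component only (0-indexed here)."""
    def apply(x):
        return [max(0.0, xj) if j == i else xj for j, xj in enumerate(x)]
    return apply

x = [1.5, -2.0, 0.0, -0.5]
y = x
for i in range(len(x)):  # apply phi_1, ..., phi_k in sequence
    y = phi(i)(y)
assert y == relu(x)
assert y == [1.5, 0.0, 0.0, 0.0]
```

Since the partial ReLUs act on disjoint components, they also commute, which is why the order of composition is irrelevant.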
## 5 Piece-wise Linear Neural Network
Piece-wise linear neural networks are specific representations of continuous piece-wise affine functions. Calling them piece-wise linear is formally incorrect (the term piece-wise affine would be correct), but established. For the ease of exposition, we restrict the following development to the case where all activation functions are partial ReLU functions. This suffices to capture the entire class of Rectifier Networks which, themselves, can represent all piece-wise affine functions [1]. We adopt the popular naming in the following definition:
**5.1 Definition (Rectifier Neural Networks).** The syntax for _Rectifier Neural Networks_, or here synonymously used, _Piece-wise Linear Neural Networks_ (PLNNs), is defined by the following BNF
\[\mathcal{N}\,:=\,\varepsilon\ |\ \alpha\;;\mathcal{N}\ |\ \phi\;;\mathcal{N}\]
where the meta variables \(\alpha\) and \(\phi\) stand for affine functions and partial ReLU functions, respectively. Writing PLNNs as \(N=f_{0}\;;\cdots;f_{l}\) where \(f\in\{\,\alpha,\phi\,\}\) we denote the set of all PLNNs with \(\text{dom}(f_{0})=\mathbb{R}^{n}\) and \(\text{codom}(f_{l})=\mathbb{R}^{m}\) as \(\mathcal{N}(n,m)\) and the set of all PLNNs as
\[\mathcal{N}=\bigcup_{n,m\in\mathbb{N}}\mathcal{N}(n,m)\]
This definition of a PLNN slightly relaxes the classical definition, as it does not require the strict alternation of affine functions and activation functions and uses partial ReLU functions instead of ReLU. We will exploit this flexibility to arrive directly at the right granularity for defining the corresponding operational semantics (cf. Section 5.2).
**5.2 Example** (Representing XOR as PLNN).:
As stated in Section 2, our baseline solution to the XOR regression model is defined by the function \(|x-y|\). We can represent this function as a PLNN \(N_{*}\). It consists of two affine functions
\[\alpha_{1}=\begin{pmatrix}1&-1\\ -1&1\end{pmatrix}\qquad\qquad\alpha_{2}=\begin{pmatrix}1&1\end{pmatrix}\]
and two partial ReLUs applied in this order:
\[N_{*}:=\alpha_{1}\,;\phi_{1}^{2}\,;\phi_{2}^{2}\,;\alpha_{2}\]
Note that, typically, \(N_{*}\) would be defined as
\[N_{*}^{\prime}=\alpha_{1}\,;\text{ReLU}\,;\alpha_{2}\]
in the context of machine learning. However, as both definitions impose the same semantics
\[\llbracket N_{*}\rrbracket_{\text{\sc DS}}=\llbracket N_{*}^{\prime} \rrbracket_{\text{\sc DS}}\]
we defined it directly using the notational conventions of this paper. This construction uses the observation that
\[\text{ReLU}(x-y)=\begin{cases}x-y&\text{if }x>y\\ 0&\text{otherwise}\end{cases}\.\]
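This construction can be executed directly, layer by layer (a sketch with our own helper names for the two affine layers):

```python
# Layer-by-layer evaluation of N_* = alpha1 ; ReLU ; alpha2, confirming
# that the network computes |x - y|.

def alpha1(v):
    x, y = v
    return [x - y, -x + y]

def relu(v):
    return [max(0, t) for t in v]

def alpha2(v):
    return v[0] + v[1]

def n_star(x, y):
    return alpha2(relu(alpha1([x, y])))

# Exactly one of the two hidden units is nonzero, and it holds |x - y|.
for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert n_star(x, y) == abs(x - y)
```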
The following figure shows the (instantiated) corresponding network architecture:
(Figure omitted: the instantiated two-layer network architecture realizing \(N_{*}\).)
Thus, the operational semantics \([\![\,\cdot\,]\!]_{\mathrm{os}}\) provides a constructive, local, and correct semantic interpretation of PLNNs.
**5.7 Example (Semantics of \(\boldsymbol{N_{\star}}\)).** We consider the baseline solution to the XOR regression model defined by the function \(|x-y|\). The network \(N_{\star}\) implements this function as a PLNN. We calculate \([\![N_{\star}]\!]_{\mathrm{os}}\) by applying the SOS rules to the initial configuration \(\langle N_{\star},(1,0)^{\top}\rangle\).
\[\langle\alpha_{1}\,;\phi_{1}^{2}\,;\phi_{2}^{2}\,;\alpha_{2},(1,0)^{\top}\rangle\] \[\stackrel{{\mathrm{tt}}}{{\longrightarrow}}\langle\phi_{1}^{2}\,;\phi_{2}^{2}\,;\alpha_{2},(1,-1)^{\top}\rangle\] \[\stackrel{{1}}{{\longrightarrow}}\langle\phi_{2}^{2}\,;\alpha_{2},(1,-1)^{\top}\rangle\] \[\stackrel{{0}}{{\longrightarrow}}\langle\alpha_{2},(1,0)^{\top}\rangle\] \[\stackrel{{\mathrm{tt}}}{{\longrightarrow}}\langle\varepsilon,1\rangle\]
This is the correct outcome. Note that the SOS rules correspond to an iterative processing of each component function (i.e., layer) of the neural network, much like function composition.
Next we will naturally adapt the presented rules to symbolic execution which, by itself, provides the first outcome explanation of PLNNs.
### Symbolic Execution of PLNNs
Symbolic execution aims at characterizing program states in terms of symbolic input values and corresponding path conditions. In particular, it reveals how program states depend on the initial values during execution. PLNNs are ideally suited for symbolic execution as they are acyclic computation graphs and contain only affine computations.
* Affine functions are closed under composition. This allows one to aggregate (partially evaluate) the entire symbolic computation history corresponding to some symbolic execution path in terms of a single affine function, and to express all paths conditions as affine inequalities also expressed in terms of the initial values.
* PLNNs possess finite, acyclic computation graphs which, conceptually, allows for precise execution without need for abstractions.
In Section 6, we will see that this results in a directed, acyclic, side-effect-free computation graph whose leaves are affine functions in \(\Phi(n,m)\) that express the PLNN's effect on inputs belonging to the polyhedron specified by the path condition.
We define the required symbolic execution via derivation rules that transform configurations of the form \(\langle N,\alpha,\pi\rangle\) where
* \(N\in\mathcal{N}(r,m)\),
* \(\alpha\colon\mathbb{R}^{n}\to\mathbb{R}^{r}\) with representation \(\alpha(\vec{x})=\boldsymbol{W}\vec{x}+\vec{b}\),
* and \(\pi\) is a path condition
throughout the transformation. The dimensions of \(n\) and \(m\) are bound by the initial PLNN while \(r\) is the dimension of some hidden layer. The following definition operates on the concrete representations of \(N\), \(\alpha\), and \(\pi\). In the case of the last two, the representation is expected to be canonical and therefore syntax and semantics can be identified.8 Operations are expected to be applied directly to the representation. Thus, the effect of a concrete execution path of \([\![\,\cdot\,]\!]_{\mathrm{os}}\) is _aggregated_ (instead of simply recorded) into the components \(\alpha\) and \(\pi\) while \(N\) is destructed further and further until all layers have been considered.
Footnote 8: By using canonical representations it is impossible to trace the history of operations. One effectively cannot distinguish between isomorphic objects.
**5.8 Definition (Symbolic Execution of PLNNs).**
\[\langle\alpha^{\prime}\,;N,\alpha,\pi\rangle \stackrel{{\mathrm{tt}}}{{\longrightarrow}}_{\mathrm{SOS}}\langle N,\alpha^{\prime}\circ\alpha,\pi\rangle\] \[\langle\phi_{i}^{k}\,;N,\alpha,\pi\rangle \stackrel{{1}}{{\longrightarrow}}_{\mathrm{SOS}}\langle N,\alpha,\pi^{\prime}\wedge\pi\rangle\] \[\langle\phi_{i}^{k}\,;N,\alpha,\pi\rangle \stackrel{{0}}{{\longrightarrow}}_{\mathrm{SOS}}\langle N,\boldsymbol{E}_{i}^{k}\circ\alpha,\neg\pi^{\prime}\wedge\pi\rangle\]
where \(\pi^{\prime}=\alpha(x)_{i}\geq 0\).
For a sequence
\[c_{0}\stackrel{{ a_{1}}}{{\longrightarrow}}_{ \mathrm{SOS}}c_{1}\,\cdots\,c_{n-1}\stackrel{{ a_{n}}}{{ \longrightarrow}}_{\mathrm{SOS}}c_{n}\]
of derivations we write \(c_{0}\stackrel{{ a_{1}\cdots a_{n}}}{{\longrightarrow}}_{\mathrm{SOS}}c_{n}\).
The second states the effect of other starting values in the initial configuration. Note that this relation does not hold in the reversed case, as \(\circ\) and \(\wedge\) are not injective and the configuration is only determined up to isomorphism. The last identity corresponds to the result that the outcome of Definition 5.4 is uniquely determined. As the symbolic rules are more general, the result is restricted to the case where the word \(w\) is known.
Moreover, the path conditions induce a partition of the input space \(\mathbb{R}^{n}\).
**5.10 Lemma** (Partition of \(\pi\)).: _For an arbitrary but fixed \(N\in\mathcal{N}\) define the set of all derivations with depth \(k\) as_
\[V_{k}(N):=\{\,c\mid\langle N,\operatorname{id},\operatorname{tt}\rangle \stackrel{{ w}}{{\longrightarrow}}_{\operatorname{SOS}}c\, \wedge\,|w|=k\,\}\]
_Define the set of all path conditions occurring in \(V_{k}\) as \(\Pi_{k}\). Then_
* _each_ \(\pi\in\Pi_{k}\) _defines a polyhedron for_ \(k>0\)__
* _the polyhedra of_ \(\Pi_{k}\) _are a partition of_ \(\mathbb{R}^{n}\)_._
Proof.: Induction over derivation sequences of \(N\).
Specifically, for each input vector \(\vec{x}\in\mathbb{R}^{n}\), there exists exactly one sequence of derivations \(\langle N,\operatorname{id},\operatorname{tt}\rangle\stackrel{{ w}}{{\longrightarrow}}_{\operatorname{SOS}}\langle \varepsilon,\alpha,\pi\rangle\) such that \(\pi(\vec{x})\) holds. Therefore, the following is well-defined:
**5.11 Definition** (Semantic Functional \(\llbracket\,\cdot\,\rrbracket_{\operatorname{SOS}}\)).: The semantic functional for the symbolic operational semantics
\[\llbracket\,\cdot\,\rrbracket_{\operatorname{SOS}}:\mathcal{N}(n,m)\to( \mathbb{R}^{n}\to\mathbb{R}^{m})\]
is defined as \(\llbracket N\rrbracket_{\operatorname{SOS}}(\vec{x})=\vec{y}\) iff
\[\langle N,\operatorname{id},\operatorname{tt}\rangle\longrightarrow_{\operatorname{SOS}}^{*}\langle\varepsilon,\alpha,\pi\rangle\;\;\;\wedge\;\;\;\pi(\vec{x})\;\;\;\wedge\;\;\alpha(\vec{x})=\vec{y}\]
Also the symbolic operational semantics is fully aligned with the denotational semantics:
**5.12 Theorem** (Correctness of \(\llbracket\,\cdot\,\rrbracket_{\operatorname{SOS}}\)).: _For any \(N\in\mathcal{N}\) we have: \(\llbracket N\rrbracket_{\operatorname{DS}}=\llbracket N\rrbracket_{ \operatorname{SOS}}\)._
Proof.: By the correctness of the concrete operational semantics, it suffices to show that \(\llbracket\,\cdot\,\rrbracket_{\operatorname{OS}}\) and \(\llbracket\,\cdot\,\rrbracket_{\operatorname{SOS}}\) coincide. As both the concrete and the symbolic operational semantics define unique computation paths for each input vector, the proof follows straightforwardly by induction, establishing the desired equivalence as an invariant while simultaneously following these paths. More concretely, we can prove
\[\forall\vec{x}\in\mathbb{R}^{n}:\llbracket N\rrbracket_{\operatorname{ OS}}(\vec{x})=\llbracket N\rrbracket_{\operatorname{SOS}}(\vec{x})\]
using the following induction hypothesis
\[\langle N_{0},\vec{x_{0}}\rangle\stackrel{{ w}}{{\longrightarrow}}_{ \operatorname{OS}}\langle N_{k},\vec{x_{k}}\rangle\iff\] \[\langle N_{0},\operatorname{id},\operatorname{tt}\rangle\stackrel{{ w}}{{\longrightarrow}}_{\operatorname{SOS}}\langle N_{k},\alpha_{k},\pi_{k} \rangle\wedge\alpha_{k}(\vec{x_{0}})=\vec{x_{k}}\wedge\pi_{k}(\vec{x_{0}})\]
by a simple analysis of the following three cases:
1. \(N_{k+1}=\alpha^{\prime}\,;N_{k}\)
2. \(N_{k+1}=\phi_{i}^{k}\,;N_{k}\;\wedge\;\vec{x}_{i}\geq 0\)
3. \(N_{k+1}=\phi_{i}^{k}\,;N_{k}\;\wedge\;\vec{x}_{i}<0\)
The symbolic operational semantics of PLNNs is sufficient to derive local explanations and decision boundaries similar to the ones presented in [1, 1, 2, 3]. In the following, we will show how symbolic operational semantics can be used to define semantically equivalent Typed Affine Decision Structures (TADS) which, themselves, are specific Algebraic Decision Structures (ADS) as defined in Section 3. TADS collect all the local explanations in an efficient query structure such that we arrive at model explanations.
**5.13 Example** (XOR-Regression).: As a brief example for the symbolic execution of PLNNs, we will calculate \(\llbracket N_{\star}\rrbracket_{\operatorname{SOS}}\) by applying the symbolic SOS rules to the initial configuration \(\langle N_{\star},\operatorname{id},\operatorname{tt}\rangle\). Symbolic interpretation is not deterministic for the partial ReLU functions. We therefore choose the execution path that corresponds to the former example \(\vec{x}=(1,0)^{\top}\), i.e., with the label sequence \(w=(\operatorname{tt},1,0,\operatorname{tt})\), for illustration:
\[\langle\alpha_{1}\,;\phi_{1}^{2}\,;\phi_{2}^{2}\,;\alpha_{2},\begin{pmatrix}1&0\\ 0&1\end{pmatrix},\operatorname{tt}\rangle\] \[\stackrel{{\operatorname{tt}}}{{\longrightarrow}}_{\operatorname{SOS}}\langle\phi_{1}^{2}\,;\phi_{2}^{2}\,;\alpha_{2},\begin{pmatrix}1&-1\\ -1&1\end{pmatrix},\operatorname{tt}\rangle\] \[\stackrel{{1}}{{\longrightarrow}}_{\operatorname{SOS}}\langle\phi_{2}^{2}\,;\alpha_{2},\begin{pmatrix}1&-1\\ -1&1\end{pmatrix},x_{1}-x_{2}\geq 0\rangle\] \[\stackrel{{0}}{{\longrightarrow}}_{\operatorname{SOS}}\langle\alpha_{2},\begin{pmatrix}1&-1\\ 0&0\end{pmatrix},x_{1}-x_{2}>0\rangle\] \[\stackrel{{\operatorname{tt}}}{{\longrightarrow}}_{\operatorname{SOS}}\langle\varepsilon,\begin{pmatrix}1&-1\end{pmatrix},x_{1}-x_{2}>0\rangle\]
Note that the path conditions and the affine functions have been simplified in every step. The affine functions are given in their canonical representation \(\boldsymbol{W}\vec{x}+\vec{b}\) (as \(\vec{b}\) is zero in all steps it is omitted). For the path conditions we have not fixed a representation, instead they are simplified to aid readability. The most important simplifications are
\[\left(\begin{pmatrix}1&-1\\ -1&1\end{pmatrix}\vec{x}\right)_{1}\geq 0 \iff x_{1}-x_{2}\geq 0\] \[\neg(-x_{1}+x_{2}\geq 0)\wedge x_{1}-x_{2}\geq 0 \iff x_{1}-x_{2}>0\]
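The symbolic execution along one activation pattern can be sketched in a few lines of code. The following Python sketch is illustrative, not the paper's implementation; the second-layer weights \((1,1)\) are an assumption, since the example only fixes the first layer:

```python
import numpy as np

def symbolic_execute(layers, x0):
    """Symbolically execute a ReLU network along the activation pattern
    of the concrete input x0. Returns (W, b, conds): on the region
    described by conds (which contains x0) the network equals
    x -> W @ x + b; each condition (w, c, s) reads s * (w @ x + c) >= 0."""
    n = len(x0)
    W, b = np.eye(n), np.zeros(n)        # start from the identity (id)
    a = np.asarray(x0, dtype=float)
    conds = []
    for Wl, bl, relu in layers:
        W, b = Wl @ W, Wl @ b + bl       # rule for affine layers alpha
        a = Wl @ a + bl
        if relu:
            for i in range(len(a)):      # rules for partial ReLUs phi_i
                if a[i] >= 0:
                    conds.append((W[i].copy(), b[i], +1))
                else:
                    conds.append((W[i].copy(), b[i], -1))
                    W[i], b[i], a[i] = 0.0, 0.0, 0.0   # projection E_i
    return W, b, conds

# alpha_1 = ((1,-1),(-1,1)) followed by ReLUs; alpha_2 = (1,1) (assumed)
layers = [(np.array([[1., -1.], [-1., 1.]]), np.zeros(2), True),
          (np.array([[1., 1.]]), np.zeros(1), False)]
W, b, conds = symbolic_execute(layers, np.array([1., 0.]))
print(W, b)  # on this region the network computes x1 - x2
```

On the region \(x_{1}-x_{2}>0\) selected by \(\vec{x}=(1,0)^{\top}\), the returned affine map is the canonical representation \((1,-1)\vec{x}\), matching the trace above.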
## 6 Typed Affine Decision Structures
Consider the transition system \((V,\rightarrow_{\textsc{sos}})\) that represents the symbolic operational semantics \([\![\,\cdot\,]\!]_{\textsc{sos}}\) of some \(N\in\mathcal{N}(n,m)\) where
\[V=\{\,c\mid\langle N,\mathrm{id},\mathrm{tt}\rangle\rightarrow_{\textsc{SOS}}^{*}\,c\,\}\]
is the set of configurations which are reachable from \(\langle N,\mathrm{id},\mathrm{tt}\rangle\) and let (recall Definitions 3.4 and 4.1)
\[\tau\colon V\rightarrow\mathcal{S}_{\Phi}\]
denote the following inductively defined transformation that closely follows the symbolic SOS rules:
* \(\tau(\langle\varepsilon,\alpha,\,\cdot\,\rangle)\ :=\ \alpha\)
* \(\tau(\langle\alpha^{\prime}\,;N,\alpha,\,\cdot\,\rangle)\ :=\ (\mathrm{tt},\tau(\langle N,\alpha^{\prime}\circ\alpha,\, \cdot\,\rangle),\varepsilon)\)
* \(\tau(\langle\phi_{i}^{k}\,;N,\alpha,\,\cdot\,\rangle)\) \(:=\ (\alpha(x)_{i}\geq 0,\tau(\langle N,\alpha,\,\cdot\,\rangle),\tau( \langle N,\boldsymbol{E}_{i}^{k}\circ\alpha,\,\cdot\,\rangle))\)
where \(``\,\cdot\,"\) should be considered a don't care entry. Identifying \(N\) with its computation tree which is specified by its set of configurations that are reachable from \(\langle N,\mathrm{id},\mathrm{tt}\rangle\),9\(\tau\) can be regarded as an injective relabeling of this tree which results in the structure of an ADT:
Footnote 9: Please note that the transition labels \(\mathrm{tt},\,1\), and \(0\) are redundant.
**6.1 Theorem (TADT)**.: _Let \(N\in\mathcal{N}(n,m)\). Then \(\tau(N)\) is an ADT over \(\Phi(n,m)\) whose predicates are all of the form of affine inequalities._
Proof.: The proof follows by straightforward induction along the isomorphic structure of the two trees. The following invariants hold for all steps of the transformation
\[\tau(c)=(p,\tau(c_{t}),\,\cdot\,) \iff c\xrightarrow{1}_{\textsc{sos}}c_{t}\] \[\tau(c)=(p,\,\cdot\,,\tau(c_{f})) \iff c\xrightarrow{0}_{\textsc{sos}}c_{f}\]
where we abbreviate \(c=\langle N,\alpha,\pi\rangle\), \(c_{t}=\langle N^{\prime},\alpha^{\prime},p\wedge\pi\rangle\), and \(c_{f}=\langle N^{\prime},\alpha^{\prime},\neg p\wedge\pi\rangle\)
We call the structures resulting from the \(\tau\)-transformation _Typed Affine Decision Trees_ (TADT). A TADT inherits its type from the underlying algebra of typed affine functions \(\Phi(n,m)\) (cf. Lemma 4.2 and Theorem 4.4). Similar to ADTs, TADTs can also be generalized to acyclic graph structures:
**6.2 Definition (Typed Affine Decision Structure)**.: An ADS over the algebra \((\Phi(n,m),+_{t},\cdot,\circ_{t})\) where all predicates are linear inequalities in \(\mathbb{R}^{n}\) is called _Typed Affine Decision Structure_ of type \(n\times m\).
The set of all such decision structures is denoted by \(\Theta(n,m)\), and the set of all typed affine decision structures of any type with:
\[\Theta=\bigcup_{n,m\in\mathbb{N}^{+}}\Theta(n,m)\]
TADS are special kinds of ADS. Thus, they inherit the ADS semantics (cf. Definition 3.5) when specializing \(\Sigma\) to \(\mathbb{R}^{n}\) and \(\sigma\) to \(\vec{x}\). The fact that the semantics of leafs is given by affine functions that are also applied to \(\vec{x}\) is not important for the resulting specialized definition which reads:
**6.3 Definition (Semantics of TADS)**.: The semantic function
\[[\![\,\cdot\,]\!]_{\mathfrak{o}}\colon\Theta(n,m)\rightarrow(\mathbb{R}^{n} \rightarrow\Phi(n,m))\]
for TADS is inductively defined as
\[[\![\alpha]\!]_{\mathfrak{o}}(\vec{x}) :=\alpha(\vec{x})\] \[\![(p,l,r)]_{\mathfrak{o}}(\vec{x}) :=\begin{cases}[\![l]\!]_{\mathfrak{o}}(\vec{x})&\text{if }[\![p]\!](\vec{x})=1\\ [\![r]\!]_{\mathfrak{o}}(\vec{x})&\text{if }[\![p]\!](\vec{x})=0\end{cases}\]
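A minimal sketch of this semantics, assuming a simple tagged-tuple encoding of TADS (an illustrative assumption, not the paper's implementation): inner nodes `("node", w, c, l, r)` carry the affine predicate \(w\cdot x+c\geq 0\), leaves `("leaf", W, b)` carry an affine function \(x\mapsto Wx+b\):

```python
import numpy as np

def evaluate(t, x):
    """TADS semantics of Definition 6.3: follow predicates to a leaf,
    then apply the leaf's affine function to x."""
    x = np.asarray(x, dtype=float)
    while t[0] == "node":
        _, w, c, l, r = t
        t = l if w @ x + c >= 0 else r   # descend the decision structure
    _, W, b = t
    return W @ x + b                     # affine leaf applied to x

# Illustrative TADS computing ReLU(x1 - x2)
relu_diff = ("node", np.array([1., -1.]), 0.0,
             ("leaf", np.array([[1., -1.]]), np.zeros(1)),
             ("leaf", np.zeros((1, 2)), np.zeros(1)))
```

For instance, `evaluate(relu_diff, [2., 1.])` follows the true branch and applies \((1,-1)\vec{x}\), while inputs with \(x_{1}<x_{2}\) reach the zero leaf.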
Every PLNN \(N\) defines an ADT \(t_{N}\) over \(\Phi\). We can therefore apply the results of Section 3. In particular, we can apply the various reduction techniques which transform \(t_{N}\) into the more general form of an ADS, or more precisely, of a TADS.
Optimizations in terms of semantic reduction and infeasible path elimination do not alter the semantics of a (T)ADS. In other words
\[\Theta(N)\ =\ \{t\mid[\![t]\!]_{\mathfrak{o}}=[\![\tau(N)]\!]_{\mathfrak{o}}\}\]
is closed under semantic reduction and infeasible path elimination. Moreover, we have:
**6.4 Theorem (Correctness of \([\![t]\!]_{\mathfrak{o}}\))**.: _Let \(N\ \in\ \mathcal{N}\) and \(t\in\Theta(N)\). Then we have:_
\[[\![N]\!]_{\textsc{ds}}=[\![t]\!]_{\mathfrak{o}}\]
In the following, we sometimes abuse notation and also write \(\tau(N)\) for other members \(t\in\Theta(N)\) when the concrete structure of the TADS does not matter. This concerns, in particular, Section 7 where we always apply semantic reduction and infeasible path elimination to reduce size.
Following [GMR\({}^{+}\)18]:
In the state of the art a small set of existing interpretable models is recognized: decision tree, rules, linear models [...]. These models are considered easily understandable and interpretable for humans. ([GMR\({}^{+}\)18])
we have:
**6.5 Corollary (Model Explanation)**.: _TADS provide precise solutions to the model explanation problem, and therefore also to the outcome explanation problem._
Please note that outcome explanation is easily derived from model explanation simply by following the respective evaluation path.
**6.6 Example (XOR-TADS)**.: As an example, the resulting TADS of the symbolic execution ADS of \(N_{*}\) is shown in Figure 3.
### The TADS Linear Algebra
According to Theorem 4.4, \((\Phi,+_{t},\,\cdot_{t}\,)\) forms a typed algebra. Moreover, due to the canonical representation of affine functions, \(\Phi\) also supports the equality relation \(=\). Applying Theorem 3.12, all these operations can be lifted to obtain the following corresponding operations on TADS:
1. \(\oplus\colon\Theta(n,m)\times\Theta(n,m)\to\Theta(n,m)\)
2. \(\ominus\colon\Theta(n,m)\times\Theta(n,m)\to\Theta(n,m)\)
3. \(\odot\colon\mathbb{R}\times\Theta(n,m)\to\Theta(n,m)\)
4. \(\bigodot\colon\Theta(n,m)\times\Theta(n,m)\to\Theta(n,1)\)
These operations lift, in the order given, (1) addition, (2) subtraction10, (3) scalar multiplication, and (4) equality. The resulting TADS has size \(\mathcal{O}(|t_{1}|\cdot|t_{2}|)\) where \(|t_{i}|\) is the number of nodes of the considered TADS \(t_{i}\). An example of addition is given in Figure 4.
Footnote 10: Subtraction is usually not stated explicitly as it can be defined using addition and scalar multiplication.
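The lifted operations can be sketched on a simple tagged-tuple encoding of TADS (an illustrative assumption, not the paper's data structure); subtraction \(\ominus\) then derives from addition and scalar multiplication:

```python
import numpy as np

def evaluate(t, x):
    """Follow predicates to a leaf and apply its affine function."""
    x = np.asarray(x, dtype=float)
    while t[0] == "node":
        _, w, c, l, r = t
        t = l if w @ x + c >= 0 else r
    _, W, b = t
    return W @ x + b

def add(t1, t2):
    """Lifted addition: descend t1 first, then t2, and add the affine
    leaves; the result has O(|t1| * |t2|) nodes."""
    if t1[0] == "node":
        _, w, c, l, r = t1
        return ("node", w, c, add(l, t2), add(r, t2))
    if t2[0] == "node":
        _, w, c, l, r = t2
        return ("node", w, c, add(t1, l), add(t1, r))
    return ("leaf", t1[1] + t2[1], t1[2] + t2[2])

def scale(s, t):
    """Lifted scalar multiplication."""
    if t[0] == "node":
        _, w, c, l, r = t
        return ("node", w, c, scale(s, l), scale(s, r))
    return ("leaf", s * t[1], s * t[2])

# Illustrative TADS for ReLU(x1) and ReLU(x2)
relu_x1 = ("node", np.array([1., 0.]), 0.0,
           ("leaf", np.array([[1., 0.]]), np.zeros(1)),
           ("leaf", np.zeros((1, 2)), np.zeros(1)))
relu_x2 = ("node", np.array([0., 1.]), 0.0,
           ("leaf", np.array([[0., 1.]]), np.zeros(1)),
           ("leaf", np.zeros((1, 2)), np.zeros(1)))
```

For example, `add(relu_x1, relu_x2)` represents \(\mathrm{ReLU}(x_{1})+\mathrm{ReLU}(x_{2})\), and `add(relu_x1, scale(-1.0, relu_x2))` their difference.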
The operations \(\oplus\) and \(\odot\) are characteristic for vector spaces. Indeed, TADS form a (function) vector space (cf. Theorems 4.3 and 4.4):
**6.7 Theorem (The TADS Linear Algebra)**.: \((\Theta,\oplus_{t},\odot_{t})\) forms a typed linear algebra.
We will exploit this theorem in Section 7.
However, when lifting these two operators over affine predicates, a second interpretation occurs naturally: that of _piece-wise affine functions_. Both interpretations are compatible, as stated in the following theorem.
**6.8 Theorem (Two Consistent Views on TADS)**.: _Let \(\psi_{1},\psi_{2}\colon\mathbb{R}^{n}\to\mathbb{R}^{m}\) be two piece-wise affine functions and \(\vec{x}\in\mathbb{R}^{n}\) be a real vector. Define \(\alpha_{1}\) as the affine function of \(\psi_{1}\) that is associated with the region for \(\vec{x}\) and \(\alpha_{2}\) for \(\psi_{2}\), respectively and denote with \(\Box\) a generic operation over (piece-wise) affine functions. Then if for all such \(\vec{x}\)_
\[\psi_{1}(\vec{x})\Box\psi_{2}(\vec{x})=\alpha_{1}(\vec{x})\Box\alpha_{2}(\vec {x})\]
_holds, both interpretations agree for \(\Box\)._
One can easily show that this is indeed the case for \(\Box\in\{\,\oplus,\ominus,\odot\,\}\). However, there is a slight difference between the interpretations. The first _lifts_ affine functions over affine predicates and the latter _associates_ affine functions with affine predicates. This distinction can, for example, be seen in the signatures of the respective semantics:
\[\llbracket t\rrbracket_{S_{A}}\colon\mathbb{R}^{n} \to(\mathbb{R}^{n}\to\mathbb{R}^{m})\] \[\llbracket t\rrbracket_{\Theta}\colon\mathbb{R}^{n} \to\ \mathbb{R}^{m}\]
For TADS to be equivalent to piece-wise affine functions the semantics have to be adapted to \(\llbracket\cdot\,\rrbracket_{\Theta}\), which slightly differs from \(\llbracket\cdot\,\rrbracket_{S_{A}}\) in that the leafs are also evaluated under the input.
Considering Lemma 5.10, one can easily see that every path in a TADS defines a polyhedron and that the set of all paths partitions \(\mathbb{R}^{n}\). As all terminals of TADS are affine functions, it is straightforward to prove that for every TADS \(t\) the semantics \(\llbracket t\rrbracket_{\Theta}\) is a piece-wise affine function.
The complexity of piece-wise affine functions is commonly defined as the smallest number of classes (so-called regions) that are needed to partition the input space [12, 13, 14], and which we call _region complexity_. This complexity measure can easily be adopted for TADS using the above reasoning as it is simply the number of all paths from the root to its terminals. In other words, TADS are linear-size representations of PAF with respect to their region complexity which implies:
**6.9 Theorem (Complexity of Operations)**.: _The operations \(\oplus,\ominus,\bigodot\) are of quadratic and \(\odot\) of linear time region complexity._
Figure 3: The TADS \(\tau(N_{*})\).
Proof.: Via structural induction along Definition 3.11 it is easy to establish that each node of the tree underlying \(t_{1}\) is processed at most once, while the nodes of the tree underlying \(t_{2}\) may be processed at most once for each leaf of \(t_{1}\). The theorem follows from the fact that the number of nodes in a binary tree is at most twice the number of its paths.
Of particular interest is the expression \(t_{1}\ominus t_{2}\), which evaluates to the constant function \(0\) iff \(t_{1}\) and \(t_{2}\) are semantically equivalent. Thus we have the following:
**6.10 Corollary** (Complexity of \(\equiv\)).: _Deciding semantic equivalence between two TADS has quadratic time region complexity._
Please note that this way of deciding semantic equivalence provides not only a Yes/No answer but, in case of failure, also precise diagnostic information: For \(t_{2}-t_{1}\) we have (see Figure 11)
* positive parts mark regions where \(t_{2}\) is bigger
* zero marks regions where both TADS agree
* negative parts mark regions where \(t_{1}\) is bigger
This is particularly interesting when combined with a threshold \(\varepsilon\) (see Figure 13).
### The TADS Typed Monoid
As shown in previous sections, TADS are a comprehensible and efficient representation of piece-wise affine functions. In the following, we will go even further and show that TADS also directly support all common operations on piece-wise affine functions.
Piece-wise affine functions form a typed monoid under function composition, i.e., the composition of two piece-wise affine functions is again piece-wise affine, assuming that domain and co-domain match adequately. This property is highly useful both for the design of neural networks, which are themselves fundamentally compositions of multiple simple piece-wise affine functions, and for neural network analysis, as will be shown in Section 7.
Consider the following result which follows as a consequence of the previous correctness theorems and the compositionality of \(\llbracket\cdot\rrbracket_{\mathrm{DS}}\):
**6.11 Corollary** (Compositionality).: _Let \(N_{0},N_{1},N_{2}\in\mathcal{N}\) with \(N_{0}=N_{1}\;;N_{2}\) and \(t_{i}\in\Theta(N_{i})\). Then we have:_
\[\llbracket N_{0}\rrbracket_{\mathrm{DS}} =\llbracket N_{1}\;;N_{2}\rrbracket_{\mathrm{DS}}\] \[=\llbracket N_{2}\rrbracket_{\mathrm{DS}}\circ\llbracket N_{1} \rrbracket_{\mathrm{DS}}\] \[=\llbracket t_{2}\rrbracket_{\mathrm{e}}\circ\llbracket t_{1} \rrbracket_{\mathrm{e}}=\llbracket t_{0}\rrbracket_{\mathrm{e}}\]
Obviously, there is a gap in this result that poses the question: "Is it possible to define a composition operator that works directly on TADS?" Just composing the affine functions at the leafs, which would be sufficient for, e.g., \(\oplus\), is insufficient because of the side effect of the first TADS. Thus we end up with the following composition operator that handles this side effect in a way that is typical for structured operational semantics:
**6.12 Definition** (TADS Composition).: _The composition operator \(\ltimes\) of TADS with type_
\[\ltimes\colon\Theta(n,r)\times\Theta(r,m)\to\Theta(n,m)\]
_is inductively defined as_
\[\alpha\ltimes\alpha^{\prime} =\alpha^{\prime}\circ\alpha\] \[\alpha\ltimes(p,l,r) =(p\circ\alpha,\alpha\ltimes l,\alpha\ltimes r)\] \[(p,l,r)\ltimes t =(p,l\ltimes t,r\ltimes t)\]
_where \(\alpha,\alpha^{\prime}\in\Phi\) are TADS identified with their affine function, \(t,l,r\in\Theta\) are TADS, and \(p\in\mathcal{P}\) is a predicate. Here \(p\circ\alpha\) with \(p=\alpha^{\prime}(x)_{i}\geq 0\) is defined as_
\[(\alpha^{\prime}\circ\alpha)(x)_{i}\geq 0\]
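This composition operator can be sketched on a simple tagged-tuple encoding of TADS (an illustrative assumption, not the paper's implementation); note how the predicates of the second TADS are pre-composed with the affine leaf of the first:

```python
import numpy as np

def evaluate(t, x):
    """Follow predicates to a leaf and apply its affine function."""
    x = np.asarray(x, dtype=float)
    while t[0] == "node":
        _, w, c, l, r = t
        t = l if w @ x + c >= 0 else r
    _, W, b = t
    return W @ x + b

def compose(t1, t2):
    """TADS composition t1 * t2 (Definition 6.12): run t1 first, then t2."""
    if t1[0] == "node":                      # (p, l, r) * t
        _, w, c, l, r = t1
        return ("node", w, c, compose(l, t2), compose(r, t2))
    _, W, b = t1                             # t1 is an affine leaf alpha
    if t2[0] == "node":                      # alpha * (p, l, r)
        _, w, c, l, r = t2
        return ("node", w @ W, w @ b + c,    # p composed with alpha
                compose(t1, l), compose(t1, r))
    _, W2, b2 = t2                           # alpha * alpha'
    return ("leaf", W2 @ W, W2 @ b + b2)

# Compose the affine map x -> x1 - x2 with a one-dimensional ReLU TADS
t_aff = ("leaf", np.array([[1., -1.]]), np.zeros(1))
t_relu1d = ("node", np.array([1.]), 0.0,
            ("leaf", np.eye(1), np.zeros(1)),
            ("leaf", np.zeros((1, 1)), np.zeros(1)))
comp = compose(t_aff, t_relu1d)   # semantics: ReLU(x1 - x2)
```

The composed TADS satisfies \(\llbracket t_{1}\ltimes t_{2}\rrbracket=\llbracket t_{2}\rrbracket\circ\llbracket t_{1}\rrbracket\), here yielding \(\mathrm{ReLU}(x_{1}-x_{2})\).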
Figure 4: Example for (T)ADS addition. The (T)ADS in (a) and (b) are based on partial ReLUs and (c) is the sum of both. The input vector is given as \(\vec{x}=(a,b)\) with \(a,b\in\mathbb{R}\).
Notice that this definition is similar to the lifted operators of Definition 3.11. However, TADS composition is not side-effect free as can be seen by the modification of the predicate in the second case. This is due to the fact that the first TADS distorts the input vector space of the second TADS. Again, let us formalize the correctness of this operation.
**6.13 Theorem (TADS Composition)**.: _Let \(t_{1}\in\Theta(n,r)\) and \(t_{2}\in\Theta(r,m)\). Then we have:_
\[\llbracket t_{1}\ltimes t_{2}\rrbracket_{\mathrm{o}}=\llbracket t_{2} \rrbracket_{\mathrm{o}}\circ\llbracket t_{1}\rrbracket_{\mathrm{o}}\]
Proof.: By structural induction along the second component and, in the inductive step, induction along the first component.
An example of a composition can be found in Figure 5. This directly yields:
**6.14 Corollary (The TADS typed Monoid)**.: \((\Theta,\ltimes)\) _forms a typed monoid, i.e., an algebraic structure that is closed under type-correct composition and that has typed neutral elements \(\varepsilon\)._
_On this structure, \(\tau\) is a homomorphism between the monoids \((\mathcal{N},;)\) and \((\Theta,\ltimes)\), i.e., the following diagram commutes_
Due to their similarity to the lifted operators, it is easy to show that composing two TADS results in a third TADS whose size is the product of the sizes of its inputs and whose complexity with respect to the measure of affine regions is quadratic in its inputs. Following the same line of reasoning as for Theorem 6.9 yields:
**6.15 Theorem (Complexity of Composition)**.: _TADS composition has quadratic time region complexity._
One may argue that semantic equivalence between two TADS is of limited practical value, in particular as, in most applications of neural networks, small errors are accepted to a certain degree. In contrast, \(\epsilon\)-similarity, i.e., whether two TADS differ by more than \(\epsilon\) for some small threshold \(\epsilon\in\mathbb{R}\), can be regarded as a practically very relevant notion, in particular to study robustness properties. The corresponding property required for TADS leaves is easily defined:
\[l_{\epsilon}(x,y):=(|x-y|-\epsilon)\,\mathbb{I}(|x-y|\geq\epsilon)\]
Intuitively, this function yields \(0\) if the difference of \(x\) and \(y\) is less than \(\epsilon\) and the absolute value (minus epsilon) of their difference otherwise. \(l_{\epsilon}\) can easily be realized using only standard algebraic operations and ReLU applications which are already defined:
\[l_{\epsilon}=\mathrm{ReLU}(x-y-\epsilon)+\mathrm{ReLU}(y-x-\epsilon)\]
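A quick numeric sanity check (our own, not from the paper's artifact) that the ReLU realization agrees with the indicator form of \(l_{\epsilon}\):

```python
import numpy as np

def l_eps_indicator(x, y, eps):
    # (|x - y| - eps) * 1(|x - y| >= eps)
    d = np.abs(x - y)
    return (d - eps) * (d >= eps)

def l_eps_relu(x, y, eps):
    # ReLU(x - y - eps) + ReLU(y - x - eps)
    relu = lambda z: np.maximum(z, 0.0)
    return relu(x - y - eps) + relu(y - x - eps)

rng = np.random.default_rng(0)
x, y = rng.normal(size=1000), rng.normal(size=1000)
assert np.allclose(l_eps_indicator(x, y, 0.3), l_eps_relu(x, y, 0.3))
```

Both forms vanish whenever \(|x-y|\leq\epsilon\) and otherwise return \(|x-y|-\epsilon\), since exactly one of the two ReLU arguments is then positive.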
Just lifting this function to the TADS level
\[t_{4}=\mathrm{ReLU}(t_{1}\ominus t_{2}\ominus\epsilon)\oplus\mathrm{ReLU}(t _{2}\ominus t_{1}\ominus\epsilon)\]
(where \(\mathrm{ReLU}(t)=t\ltimes\tau(\mathrm{ReLU})\)) is sufficient to decide \(\epsilon\)-similarity. Thus we have:
**6.16 Corollary (Deciding \(\epsilon\)-similarity)**.: \(\epsilon\)_-similarity has quadratic time region complexity._
Please note that, again, this way of deciding \(\epsilon\)-similarity provides not only a Yes/No answer but, in case of failure, also precise diagnostic information. All this will be showcased in Section 7.
In the remainder of this section we elaborate on the compositionality that is imposed by \(\ltimes\):
**6.17 Corollary (Layer-wise Transformation)**.: _By Theorem 6.13, we can transform a PLNN layer-wise into a TADS._
\[\llbracket N\rrbracket_{\mathrm{DS}} =\llbracket\alpha_{1}\,;\ldots;\alpha_{n}\rrbracket_{\mathrm{ DS}}\] \[=\llbracket\tau(\alpha_{n})\rrbracket_{\mathrm{o}}\circ\cdots \circ\llbracket\tau(\alpha_{1})\rrbracket_{\mathrm{o}}\] \[=\llbracket\tau(\alpha_{1})\ltimes\cdots\ltimes\tau(\alpha_{n}) \rrbracket_{\mathrm{o}}\] \[=\llbracket\tau(N)\rrbracket_{\mathrm{o}}\]
As a consequence, the transformation function \(\tau\) can also be inductively defined using the following three atomic TADS
* identity \(\varepsilon\)
* affine functions \(\alpha\colon\mathbb{R}^{n}\to\mathbb{R}^{m}\) where \(n,m\in\mathbb{N}\)
* single ReLUs \(\phi_{i}^{n}\) where \(n,i\in\mathbb{N}\), \(i\leq n\)
which are illustrated in Figure 6.
**6.18 Corollary (Inductive Definition of \(\tau\))**.: _The transformation of a network to a TADS can be defined inductively as_
\[\tau^{\prime}(\varepsilon) =\varepsilon\] \[\tau^{\prime}(\alpha\,;N) =\alpha\ltimes\tau^{\prime}(N)\] \[\tau^{\prime}(\phi_{i}^{k}\,;N) =\left(\vec{e}_{i}^{\top}\!\cdot\vec{x}\geq 0,\mathbf{I}^{k},\mathbf{E} _{i}^{k}\right)\ltimes\tau^{\prime}(N)\]
_such that_
\[\tau^{\prime}(N) =\tau(N).\]
This consistency of viewpoints and operational handling indicates that the TADS setup is natural, and that it supports approaching PLNN analysis and explanation from various perspectives.
## 7 TADS at Work
In this section, we continue the discussion of the XOR function as a minimal example to showcase the power of TADS for:
* **Model Explanation.** For a given PLNN, describe precisely its behavior in a comprehensible manner. This allows for a semantic comparison of PLNNs comprising (approximative) semantic equivalence with precise diagnostic information in case of violation.
* **Class Characterization.** PLNNs are frequently extended by the so-called argmax function to be used as classifiers. TADS-based class characterization allows one to precisely characterize the set of inputs that are classified in a specific way, or the set of inputs where two (PLNN-based) classifiers differ.
* **Verification.** Verification is beyond the scope of this section but will be discussed in [20] in the setting of digit recognition.
In the remainder of this section we focus on the impact of Model Explanation and Class Characterization. Two properties of TADS are important here:
**Compositionality.** Due to the compositional nature of TADS, any TADS that represents a given PLNN can be modified and extended by _output interpretation_ mechanisms. This mirrors a very important use case of neural networks: while neural networks are fundamentally functions \(\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), they are often used for discrete problems, which requires a different interpretation of their output.
**Precision.** As the TADS transformation of a PLNN is semantics preserving, all results are precise.
Based on these properties it is possible to solve all the aforementioned problems elegantly by simple algebraic transformations of TADS.
### Model Explanation and Algebraic Implications
To start, we train a small neural network to solve the continuous XOR problem. The resulting network, \(N_{1}\), represents a continuous function \(f_{1}=\llbracket N_{1}\rrbracket_{\text{\tiny DS}}\) (see Figure 7a). \(N_{1}\) solves the XOR problem relatively well, with all corners being within a distance of less than \(0.1\) of the desired values of \(1\) and \(0\), respectively.
The architecture of \(N_{1}\) is shown in Figure 9. Note that this architecture is much bigger than the architecture for \(N_{*}\) (cf. Section 5). This is needed as the
Figure 5: Example for TADS composition. The TADS (a) and (b) are based on partial ReLUs. TADS (c) is the composition of (a) and (b). Note the difference between _lifting_ (Figure 4) and _composing_ in the inner nodes. The input vector is given as \(\vec{x}=(a,b)\) with \(a,b\in\mathbb{R}\).
Figure 6: A few examples for atomic TADS. The input vector is given as \(\vec{x}=(a,b)\) with \(a,b\in\mathbb{R}\).
training procedure is approximate and does not reach a global optimum. For all substantially smaller architectures, we failed to train a network that came close to the specification of XOR.
#### 7.1.1 Model Explanation
First, we consider full model explanation of \(N_{1}\).11 We can attain a precise and complete characterization of \(f_{1}\) by creating the corresponding TADS \(t_{1}=\tau(N_{1})\) as shown in Figure 8a. This TADS precisely and completely describes the behavior of \(f_{1}\) in a whitebox manner.
Footnote 11: Of course, in this two-dimensional case a function plot akin to Figure 7a might seem sufficient, but this is not feasible beyond two-dimensional problems.
Similarly to the function plot shown in Figure 7a, the TADS gives a comprehensible view of \(f_{1}\). In contrast to the mere function plot, the TADS of Figure 8a is a solid basis for further systematic analyses and extends to more than two dimensions.
Model Explanation as illustrated here is the basic use case of a TADS as a white-box model for PLNNs; however, the true power of TADS becomes apparent when they are used for high-level analyses via algebraic operations on TADS.
#### 7.1.2 Algebraic Implications
As mentioned in the last section, the training process of neural networks is approximate and can lead to many different solutions. A very natural question to ask is: "How differently do two neural networks solve the same problem?". This question can be answered using algebraic operations on TADS.
Consider \(N_{2}\), a PLNN that has also been trained with the network architecture shown in Figure 9, but with a different setting of hyperparameters.12 Its represented (semantic) function \(f_{2}=[\![N_{2}]\!]_{\mathrm{DS}}\) is depicted in Figure 7b and the corresponding TADS \(t_{2}=\tau(N_{2})\) in Figure 8b.
Footnote 12: The discussion of the learning process is beyond the scope of this paper.
As TADS form a linear algebra, one can easily mirror the computation \(f_{2}-f_{1}\) by \(t_{2}-t_{1}\) on the TADS level. The result is identical because the transformation process is precise, i.e.,
\[[\![N_{2}]\!]_{\mathrm{DS}}-[\![N_{1}]\!]_{\mathrm{DS}}=f_{2}-f_{1}=[\![t_{2}- t_{1}]\!]_{\mathrm{\Theta}}\]
The resulting difference TADS \(t_{3}\) is shown in Figure 10 and the corresponding function graph in Figure 11.
\(t_{3}\) is ideal to study the semantic difference between the PLNNs \(N_{1}\) and \(N_{2}\). Most interestingly, as can be seen visually in Figure 11, the largest differences between both networks occur in the middle of the function domain, i.e., in the region most distant from the edges where the XOR problem is clearly defined. This matches the intuition that points further away from known points are more uncertain under neural network training.
Further, observe that the difference of both networks yields a TADS of roughly double the size. This moderate increase in size indicates the similarity of \(N_{1}\) and \(N_{2}\), as the linear regions of the difference TADS \(t_{3}\) result from intersecting the regions of \(f_{1}\) and \(f_{2}\), whose number could, in the worst case, grow quadratically.
As mentioned above (cf. Corollary 6.16), it is also possible to analyse \(\epsilon\)-similarity via algebraic operations to, in this case, obtain the TADS shown in Figure 12, which is much smaller than the full difference TADS (cf. Figure 10). The piece-wise affine function of this TADS is visualised in Figure 13.
Figure 7: Function graphs corresponding to the PLNNs \(N_{1}\) and \(N_{2}\). Observe that both PLNNs fulfill the conditions of the XOR problem very closely.
There are 8 regions in which the difference values exceed 0.3, all close to the center of \([0,1]^{2}\). This, again, matches the intuition that the volatility of the solution grows with the distance to the defined values.
This result is interesting as it shows that, while the two neural networks that we trained differ, they do not differ by more than 0.3 except for a small region in the center of the input space. Similar constructions can be used to analyze the robustness of neural networks. Robustness of neural networks is of great interest to neural network research [14] and the application of TADS to this problem is discussed in more detail in [13].
### Classification
Applications of neural networks are traditionally split into _regression tasks_ and _classification tasks_. In regression tasks, one seeks to approximate a function with continuous values, whereas classification tasks have discrete outputs.
Figure 8: TADS corresponding to the PLNNs \(N_{1}\) and \(N_{2}\). Note that both TADS are a full characterization of the semantic functions \(f_{1}\) and \(f_{2}\), respectively.
Figure 9: The architecture of the networks \(N_{1}\) and \(N_{2}\). Weights and biases are omitted for brevity.
As learned piece-wise linear functions are inherently continuous, classification tasks require an additional step that _interprets_ the continuous output of a neural network as one of multiple discrete classes. Note that this is a change of mindset, with the same neural network being interpreted differently depending on the context.
In our context, one might be interested in a model that classifies each input point \(\vec{x}\in\mathbb{R}^{2}\) as either \(1\) or \(0\) instead of returning a real-value.
#### 7.2.1 Class Characterization
A standard method for classification tasks is the interpretation of neural network outputs as a probability distribution over classes [1]. In our XOR example, it is natural to interpret \(f_{1}(\vec{x})\) as the probability of \(\vec{x}\) belonging to class \(1\) and \(1-f_{1}(\vec{x})\) as the probability of \(\vec{x}\) belonging to class \(0\).
At evaluation time, one might naturally choose the class with the highest probability. Thus, \(N_{1}\)'s output is set to \(1\) if it is at least \(0.5\) and to \(0\) otherwise, which is, actually, in line with the definition of \(g_{*}\). Applying
\[\mathbb{I}(x\geq 0.5)=\begin{cases}1&\text{if }x\geq 0.5\\ 0&\text{otherwise}\end{cases}\]
to the continuous learned function \(f_{1}\) therefore results in a suitable classifier for this problem:
\[g=\mathbb{I}(x\geq 0.5)\circ f_{1}\]
Note that \(\mathbb{I}(x\geq 0.5)\) is not continuous and therefore cannot be represented by a PLNN.13
Footnote 13: This is a general observation that holds for all discrete valued classification tasks. Most notably, the argmax function, a standard method for n-ary classification, can also not be represented by a PLNN and must be handled on the side of TADS. More discussions on the role of argmax in classification can be found in [13].
To construct the TADS, we use the compositionality of TADS. We manually construct the simple TADS \(\tau(\mathbb{I}(x\geq 0.5))\) as shown in Figure 15 and compose it with the TADS \(t_{1}\) of \(f_{1}\). The resulting TADS
\[t_{1}^{c}=t_{1}\ltimes\tau(\mathbb{I}(x\geq 0.5))\]
is shown in Figure 13(a). Note that this TADS is reminiscent of a binary decision diagram with just two final nodes. Figure 15(a) shows precisely which inputs are interpreted as \(1\) and which as \(0\). As we only have two classes here, this classification can be considered as what is called _class characterization_ in [1] for both classes \(0\) and \(1\). Please note that class characterizations allow one to change the perspective from a classification task to the task of finding adequate candidates of a specific profile, here given as a corresponding class.
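The thresholding construction can be sketched end-to-end on a simple tagged-tuple encoding of TADS. All ingredients here are illustrative assumptions: the encoding, the composition code, and the stand-in piece-wise affine function \(f(x)=\mathrm{ReLU}(x_{1}-x_{2})\), which replaces the trained \(f_{1}\):

```python
import numpy as np

def evaluate(t, x):
    """Follow predicates to a leaf and apply its affine function."""
    x = np.asarray(x, dtype=float)
    while t[0] == "node":
        _, w, c, l, r = t
        t = l if w @ x + c >= 0 else r
    _, W, b = t
    return W @ x + b

def compose(t1, t2):
    """TADS composition: run t1 first, then t2 (Definition 6.12)."""
    if t1[0] == "node":
        _, w, c, l, r = t1
        return ("node", w, c, compose(l, t2), compose(r, t2))
    _, W, b = t1                          # t1 is an affine leaf
    if t2[0] == "node":                   # pre-compose the predicate
        _, w, c, l, r = t2
        return ("node", w @ W, w @ b + c, compose(t1, l), compose(t1, r))
    _, W2, b2 = t2
    return ("leaf", W2 @ W, W2 @ b + b2)

# Stand-in for the learned function: f(x) = ReLU(x1 - x2)
t_f = ("node", np.array([1., -1.]), 0.0,
       ("leaf", np.array([[1., -1.]]), np.zeros(1)),
       ("leaf", np.zeros((1, 2)), np.zeros(1)))

# The (non-continuous) step 1(x >= 0.5) as a TADS with constant leaves
t_step = ("node", np.array([1.]), -0.5,
          ("leaf", np.zeros((1, 1)), np.ones(1)),
          ("leaf", np.zeros((1, 1)), np.zeros(1)))

t_c = compose(t_f, t_step)   # classifier TADS: outputs 1 iff f(x) >= 0.5
```

The resulting `t_c` has only the constant leaves \(1\) and \(0\), mirroring the binary-decision-diagram shape described above.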
This shows that, given an output interpretation that maps the continuous network outputs to discrete classes, it is possible to transform neural networks, which are fundamentally blackbox representations of real-valued functions, into semantically equivalent decision diagrams, which are fundamentally whitebox representations of discrete-valued functions.
#### 7.2.2 Comparison of Classifiers
After having constructed TADS that characterize the classification behavior of neural networks, we can also characterize the _difference_ in classification behavior of two neural networks. We can do so simply by lifting the equality relation to the TADS level and computing the TADS:
\[t_{1}^{c}\ \bigodot\ t_{2}^{c}\]
The resulting TADS is shown in Figure 16(a) and the corresponding function graph in Figure 15(c). This plot describes precisely the areas where both functions differ and where they coincide.
Notably, it shows that, while the absolute difference of \(f_{1}\) and \(f_{2}\) is highest in the center of the interval \([0,1]^{2}\), the networks agree in that area with respect to classification. Indeed, it appears that the largest difference with respect to classification occurs in the diagonals separating the classes \(1\) and \(0\). This is not too surprising, as it is at the borderline between classes where small changes may affect the classification result.
Using an encoding of boolean values as \(1\) and \(0\) respectively, we can also compute the difference of \(t_{1}^{c}\) and \(t_{2}^{c}\)
\[t_{1}^{c}\ \ \ominus\ t_{2}^{c}\]
This TADS not only describes where \(t_{1}^{c}\) and \(t_{2}^{c}\) disagree, but also how they disagree. The corresponding TADS is shown in Figure 16(b).
This shows the utility of TADS for output interpretation. While the absolute difference between two networks is a suitable measure of difference for _regression_ tasks, the difference of the classification functions is suitable for _classification_. Playing with this difference, e.g., by modifying the classification function, is a powerful analytical instrument. E.g., in settings with many classes, separately analyzing the class characterizations of the individual classes typically leads to much smaller and easier-to-comprehend TADS and may therefore be a good means to increase scalability.
In machine learning, one often compares learned classifiers to groundtruth solutions by sampling from the
groundtruth solution and checking whether the neural network matches the groundtruth predictions. TADS enable a straightforward and precise way of evaluating a neural network in instances where one has access to the groundtruth model. E.g., the TADS of Figure 13 precisely specifies where \(N_{1}\) differs from the baseline solution \(\mathbb{I}(|x-y|\geq 0.5)\).
## 8 Related Work
The presented TADS-based approach towards understanding of neural networks is explicitly meant to bridge between the various existing initiatives that aim in the same direction, but typically with quite different means. In this section, we review the state of the art from three perspectives:
* The intent, explainability, as approached in the neural networks community.
* Applied concepts, e.g., symbolic execution, that aim at (locally) precise results.
* Applied background, in particular concerning piece-wise affine functions.
Whereas the first perspective (Section 8.1) is conceptually distant, both in its applied technologies and in its achievements, the mindset of the second perspective (Section 8.2) is similar in aims and means, but, except for our previous work, restricts its attention to a locally precise analysis close to some (partly symbolic) input. The third perspective concerns only the mathematical background (Section 8.3). We are not aware of any previous work that systematically applies algebraic reasoning to achieve precise explanation and verification results about neural networks.
### Machine Learning Explainability
In recent years, explainable machine learning (XML) as a subfield of machine learning has seen a surge of activity. In line with existing machine learning research, XML focuses on approaches that scale efficiently at the cost of precision and comprehensibility.
Due to the vast amount of work in this direction we can only provide a sketch of the field here, which, from our perspective, is characterized by its use of 'traditional' deep learning technologies such as gradient-based optimization, and by its focus on directly investigating the neural networks themselves in an approximative fashion, without an explicit link to some semantic model.
A typical example of a gradient-based method is activation maximization [16, SCD\({}^{+}\)17], which seeks to find, for a class \(C\) and network \(N\), the input \(\vec{x}\) for which \(\llbracket N\rrbracket(\vec{x})\) is maximal for class \(C\). Being based on gradient-based optimization, this approach is clearly approximate.
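As a concrete illustration of the approximate nature of such methods, the following numpy sketch (our toy example; the two-layer tanh model and all constants are hypothetical) performs activation maximization by plain gradient ascent with numerically estimated gradients, accepting only improving steps. The result is a local optimum with no guarantee of global maximality.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # hypothetical hidden-layer weights
w2 = rng.normal(size=8)        # output weights of the target class C

def class_score(x):            # network score of class C for input x
    return w2 @ np.tanh(W1 @ x)

def num_grad(f, x, h=1e-5):    # central finite-difference gradient
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.zeros(4)
s0 = class_score(x)
for _ in range(300):           # gradient ascent, keeping only improving steps
    cand = x + 0.01 * num_grad(class_score, x)
    if class_score(cand) > class_score(x):
        x = cand
s1 = class_score(x)
```

The ascent strictly improves the class score, but since the score is bounded and non-convex in the input, the found \(\vec{x}\) is only a local maximizer.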
Other examples of approaches working on the neural network level are frequently found among attribution methods. Attribution methods focus on attributing a prediction \(\llbracket N\rrbracket(\vec{x})=y\) to the parts of the input that one deems responsible for this prediction. In general, this question is ill-defined and subjective. As a consequence, there exist multiple methods that attribute the prediction differently to the original input. Examples include gradient-based saliency maps [11, 10], layer-wise relevance propagation (LRP) [1] and deep Taylor decomposition [11]. As attribution has a natural answer for linear models, these methods focus on linearly approximating the model (gradient-based saliency maps) or parts of the model (LRP and deep Taylor decomposition). The latter methods depend strongly on the neural network architecture and not on its semantics.
These methods are useful to gain rough intuition, but they do not offer any guarantees or reliable results. This is a direct consequence of most of these methods working with classical machine learning tools such as backpropagation and numerical optimization, which are fast but approximative [1].
The class of XML methods most closely related to TADS are local proxy models like LIME [14] and SHAP [10]. Both methods consider one fixed input \(\vec{x}\) and treat the model as a black box. They observe the model's behavior on multiple perturbations of the form \(\vec{x}+p\), with \(p\) sampled randomly. Then, they use simple machine learning models, such as a linear classifier or a decision tree, to describe the model's behavior on the observed perturbations \(\vec{x}+p\).
These methods are similar in intent to the TADS approach, as they use conceptually simple models to represent the black-box behavior of a neural network. However, both LIME and SHAP are imprecise. They sample only a few points \(\vec{x}+p\) in the neighborhood of \(\vec{x}\) and might miss important properties of the neural network model. Further, both methods use a machine learning classifier to represent these points. These classifiers are usually linear models (or comparably simple models) and cannot capture the full behavior of the network, which leads to potentially large and uncontrolled errors.
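The sampling-based nature of these proxies is easy to reproduce. The following sketch (ours; the black-box function and all constants are stand-ins) fits a LIME-style local linear surrogate around a fixed input by least squares over random perturbations. The recovered coefficients approximate the local gradient, but only up to sampling error and only in the sampled neighborhood.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(x):                     # stand-in for the model [[N]](x); any smooth map works
    return x[0] * x[1] + np.sin(x[2])

x0 = np.array([0.5, -0.3, 0.8])       # the fixed input to be explained
P = 0.05 * rng.normal(size=(500, 3))  # random local perturbations p
ys = np.array([black_box(x0 + p) for p in P])

# Fit the local linear surrogate y ~ a . p + b by least squares.
A = np.hstack([P, np.ones((500, 1))])
coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
a, b = coef[:3], coef[3]
```

Here `a` approximates the local gradient \((-0.3,\ 0.5,\ \cos 0.8)\) and `b` the local value; unlike a TADS, the surrogate carries sampling error and says nothing about the model outside the perturbation neighborhood.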
We are not aware of XML methods that provide guarantees strong enough to justify responsible use in safety-critical applications.
### Conceptually Related Work
Symbolic Execution of Neural Networks.More closely related to TADS are approaches to explainability based on symbolic execution of neural networks.
The idea of explainability via symbolic (or rather concolic) execution of neural networks was already explored in the works of [14]. In their work, the authors translate a given PLNN into an imperative program and concolically execute it on one given input \(\vec{x}\in\mathbb{R}^{n}\). This corresponds to exploring the single path of a TADS corresponding to \(\vec{x}\). This yields the path condition and the affine transformation that are responsible for the prediction \(\mathcal{N}(\vec{x})\). The authors further use these explanations to find adversarial examples and to identify parts of an input that they deem important for a given classification. The results of this work are promising, but (very) local, as they restrict themselves to one linear region of the input.
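The core of such a concolic pass is easy to state. The sketch below (ours, with random weights as stand-ins) records which ReLUs fire on a given input and folds the network into the single affine map that is valid on the surrounding linear region — i.e., it extracts one root-to-leaf path of the corresponding TADS.

```python
import numpy as np

rng = np.random.default_rng(2)
# A small random PLNN with two hidden ReLU layers (weights are stand-ins).
Ws = [rng.normal(size=(6, 3)), rng.normal(size=(5, 6)), rng.normal(size=(2, 5))]
bs = [rng.normal(size=6), rng.normal(size=5), rng.normal(size=2)]

def forward(x):
    for W, b in zip(Ws[:-1], bs[:-1]):
        x = np.maximum(W @ x + b, 0.0)
    return Ws[-1] @ x + bs[-1]

def affine_along_path(x):
    """Concolic pass: record which ReLUs fire on input x and fold the
    network into the affine map W_eff z + b_eff valid on that linear
    region (one root-to-leaf path of the TADS)."""
    W_eff, b_eff = np.eye(len(x)), np.zeros(len(x))
    z = x.copy()
    for W, b in zip(Ws[:-1], bs[:-1]):
        pre = W @ z + b
        D = np.diag((pre > 0).astype(float))   # activation pattern = path condition
        W_eff, b_eff = D @ W @ W_eff, D @ (W @ b_eff + b)
        z = np.maximum(pre, 0.0)
    return Ws[-1] @ W_eff, Ws[-1] @ b_eff + bs[-1]

x = np.array([0.3, -1.2, 0.7])
W_eff, b_eff = affine_along_path(x)
```

On the input itself the folded affine map reproduces the network output exactly; it remains valid for every input satisfying the same activation pattern.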
The authors of [1] propose a method that closely mirrors the method of [14]. In essence, both methods are almost identical but differ in their conceptual derivation of the method. The authors of [1] also consider sets of predictions and work out what features act as discriminators in many of these predictions.
Moving from explanation to testing, the authors of [14] consider concolic testing instead. Similar to the work of [14], they execute individual inputs concolically. They use the results from concolic execution to heuristically derive new inputs that cover large areas of the input space.
TADS improve on these approaches in two ways. First, TADS offer a global viewpoint on neural network semantics, independent of a sample set. Second, TADS support algebraic operations on a conceptual level to derive globally precise explanation and verification results. As illustrated in Section 7, algebraic operations nicely serve as a toolbox to derive tailored and precise analyses.
Neural Network Verification.Neural network verification aims to verify properties of neural networks, usually piece-wise linear neural networks, using techniques from SMT solving and abstract interpretation, extended by domain-specific techniques [15, 16, 17]. Verification approaches are usually precise, or at least provide a counterexample if a property is shown false. Modern solvers can scale quite well but are still far from being able to tackle practically relevant applications [1].
Verification approaches are related to TADS as they also provide tools for the precise analysis of piece-wise linear neural networks. However, while SMT-based verification approaches currently scale better than TADS, they focus only on binary answers to a verification problem. They are not able to provide full diagnostics and descriptions of where and how an error occurs. Note, however, that SMT-based approaches should not be considered an alternative but rather a provider of technologies that can also be applied at the TADS level. In fact, we use SMT solving, e.g., to eliminate infeasible paths in TADS.
Precise Explainability of Random Forests.This work is conceptually closely related to and builds upon the work of [14, 15]. There, ADDs are used and extended to derive explainable global models for random forests. Similar to the approach in this paper, these models are derived through a sequence of semantics-preserving transformations and later on refined by performing algebraic operations on the white-box representation of random forests. In fact, considering the underlying mindset, the work in this paper can be regarded as an extension of our work on random forests to neural networks. However, the much higher complexity of PLNNs requires substantial generalization which, to our surprise, did not clutter the theory, but rather added to its elegance.
### Technologically Related Work
Linear Regions of Neural Networks.Vast amounts of research have been conducted regarding the number and shape of linear regions in a given PLNN [11, 12, 13, 14, 15, 16, 17, 18, 19, 18, 19, 18, 17, 20, 16]. Linear regions are of huge interest to neural network research as they give a natural characterization of the expressive power of neural network classes. This research is beneficial to the understanding of TADS as it can be used to bound the size of TADS and to understand where and when explosions in size occur. On the other hand, TADS give a precise and minimal representation of the linear regions belonging to a given neural network and can be used to facilitate experiments in this field, e.g., to find a linear region containing a negative example for a given property that could not be verified [15].
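A common empirical tool in this line of research is to lower-bound the number of linear regions by counting distinct ReLU activation patterns over random samples, as in the following sketch (ours; the weights are random stand-ins). A TADS of the same network would instead enumerate the feasible patterns exactly, one per path.

```python
import numpy as np

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(10, 2)), rng.normal(size=10)
W2, b2 = rng.normal(size=(10, 10)), rng.normal(size=10)

def pattern(x):
    """The activation pattern (sign vector of all pre-activations):
    inputs with equal patterns lie in the same linear region."""
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0.0) + b2
    return tuple((h1 > 0).astype(int)) + tuple((h2 > 0).astype(int))

samples = rng.uniform(-2.0, 2.0, size=(5000, 2))
patterns = {pattern(x) for x in samples}
n_regions = len(patterns)   # a lower bound on the number of linear regions
```

The sample-based count can only grow with more samples and never exceeds the true region count over the sampled box.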
Structures for Polyhedral Sets.At their core, TADS are efficient representations of multiple polyhedral regions within high-dimensional spaces. Similar problems occur in other branches of computer science, most notably computer graphics.
TADS are closely related to Binary Space Partition Trees (BSP-trees) [17] and comparable structures [16]. These structures are built to represent a partition of a real vector space into polygons, much like TADS do. TADS extend these structures with optimizations from ADDs to account for domain-specific properties of piece-wise linear functions that are not present in the general case of polygonal partitions.
## 9 Conclusion and Future Work
We have presented an algebraic approach to the precise and global explanation of Rectifier Neural Networks, one of the most popular kinds of piece-wise linear neural networks (PLNNs). Key to our approach is the symbolic execution of these networks, which allows the construction of semantically equivalent _Typed Affine Decision Structures_ (TADS). Due to their deterministic and sequential nature, TADS can be considered as white-box models and therefore as precise solutions to the model explanation problem, which directly yields solutions also to the outcome explanation and class characterization problems [11, 12]. Moreover, as linear algebras, TADS support operations that allow one to elegantly compare Rectifier Networks for equivalence or \(\epsilon\)-similarity, both with precise diagnostic information in case of failure, and to characterize their classification potential by precisely characterizing the set of inputs that are classified in a specific way or the set of inputs where two network-based classifiers differ. These are steps towards the more rigorous understanding of neural networks that is required when applying them in safety-critical domains without the possibility of human interference, such as self-driving cars.
This elegant situation at the semantic TADS level is in contrast with today's practical reality, where people directly work with learned PLNNs that are characterized by their hidden layers, which often comprise millions, sometimes even billions, of parameters. The reason for this complex structure is learning efficiency, a property paid for with semantic intractability: there is essentially no way to control the impact of minor changes of a parameter or of input values, and even the mere evaluation for a sample input exceeds the capacity of a human's mind by far. This is why PLNNs are considered black-box models.
The reason why TADS have not yet been studied may be their size: they may be exponentially larger than the corresponding PLNN. The reason for this expansion is the transformation of the incomprehensible hidden-layer structure into a large decision structure which, conceptually, is as easy to comprehend as decision trees and linear classifiers. In this sense, our transformation into TADS can be regarded as a trade of size for transparency, turning the verification and explanation problem into a scalability issue. There are at least three promising angles for attacking the scalability problem:
1. Learned PLNNs have a high amount of noise resulting from the underlying learning process, which works by locally optimizing individual parameters of the hidden layers. Noise reduction may have a major impact on size. Detecting noise is clearly a semantic task and can therefore profit from TADS-based semantic analyses.
2. PLNNs are accepted to be approximative. Thus controlled modifications with minor semantic impact are easily tolerated. TADS provide the means to control the effect of modifications and thereby to keep modifications in the tolerable range.
3. Modern neural network architectures are typically compositions of multiple sub-networks that are intended to support the learning of different subtasks. However, this structure at the representational layer gets semantically blurred during the joint learning process, which, e.g., prohibits compositional approaches as known from formal methods. The semantic transparency of TADS may provide means to reinforce the intended compositional structure also at the semantic level in order to support compositional reasoning and incremental construction.
Of course, there seems to be a chicken-and-egg problem here: if we can construct the TADS, we are able to reduce it in order to achieve scalability; on the other hand, we need scalability first to construct the TADS. This is a well-known problem in the formal methods world, and, despite a wealth of heuristics and domain-specific technologies, the answer is _compositionality_ and _incremental construction_. This is exactly in line with the observation reported in the third item above: we need to learn how to use divide-and-conquer techniques for PLNNs in a semantics-aware fashion. TADS are designed to support this quest by providing both a leading mindset and a tool-supported technology.
---

**arXiv:2303.16454** · Bangti Jin, Xiyao Li, Qimeng Quan, Zhi Zhou · 2023-03-29
respect to data noise and capability of solving high-dimensional problems. | Bangti Jin, Xiyao Li, Qimeng Quan, Zhi Zhou | 2023-03-29T04:43:03Z | http://arxiv.org/abs/2303.16454v3 | # Conductivity Imaging from Internal Measurements with Mixed Least-Squares Deep Neural Networks+
###### Abstract
In this work we develop a novel approach using deep neural networks to reconstruct the conductivity distribution in elliptic problems from one internal measurement. The approach is based on a mixed reformulation of the governing equation and utilizes the standard least-squares objective to approximate the conductivity and flux simultaneously, with deep neural networks as ansatz functions. We provide a thorough analysis of the neural network approximations for both continuous and empirical losses, including rigorous error estimates that are explicit in terms of the noise level, various penalty parameters and neural network architectural parameters (depth, width and parameter bound). We also provide extensive numerical experiments in two- and multi-dimensions to illustrate distinct features of the approach, e.g., excellent stability with respect to data noise and capability of solving high-dimensional problems.
**Key words**: conductivity imaging, least-squares approach, neural network, generalization error, error estimate
## 1 Introduction
The conductivity value varies widely with the composition and type of materials and its accurate imaging can provide valuable structural information about the object. This observation underpins several important imaging modalities, e.g., electrical impedance tomography, current density impedance imaging, and acousto-electrical tomography; see the works [6, 41] for overviews on mathematical models and theory. In this work, we aim at identifying the conductivity distribution in elliptic problems from internal data using deep neural networks. Let \(\Omega\subset\mathbb{R}^{d}\) be a simply connected open bounded domain with a smooth boundary \(\partial\Omega\). Consider the following Neumann boundary value problem for the function \(u\)
\[\left\{\begin{aligned} -\nabla\cdot(q\nabla u)&=f, \quad\text{ in }\Omega,\\ q\partial_{\mathbf{n}}u&=g,\quad\text{ on }\partial \Omega,\end{aligned}\right. \tag{1.1}\]
where \(\mathbf{n}\) denotes the unit outward normal direction to the boundary \(\partial\Omega\) and \(\partial_{\mathbf{n}}\) denotes taking the normal derivative. The functions \(f\) and \(g\) in (1.1) are the given source and flux, respectively, and satisfy the standard compatibility condition \(\int_{\Omega}f\;\mathrm{d}x+\int_{\partial\Omega}g\;\mathrm{d}S=0\) in order to ensure solvability, and the solution \(u\) is unique under suitable normalization condition, e.g., \(\int_{\Omega}u\mathrm{d}x=0\). The conductivity \(q\) is assumed to belong to the following admissible set
\[\mathcal{A}=\{q\in H^{1}(\Omega):c_{0}\leq q\leq c_{1}\text{ a.e. in }\Omega\},\]
with the constants \(0<c_{0}<c_{1}<\infty\) being the lower and upper bounds on \(q\), respectively. We use the notation \(u(q)\) to explicitly indicate the dependence of the solution \(u\) to problem (1.1) on the coefficient \(q\).
The concerned inverse problem is to recover the conductivity \(q\) from an internal observation of the solution \(u\). It has been extensively studied in both the engineering and mathematics communities. For example, the model (1.1) is often used to describe the behavior of a confined inhomogeneous aquifer, where the variable \(u\) represents the piezometric head, \(f\) is the recharge, and \(q\) is hydraulic conductivity (or transmissivity in the two-dimensional case); see the works [19, 44] for extensive discussions on parameter identifications in hydrology. Theoretically, Hölder-type stability estimates of the inverse problem were established under different settings [4, 10, 35]. Numerically, the reconstruction can be obtained using the regularized output least-squares approach [1, 12, 14], equation error approach [3, 18, 26] and mixed type formulation [29] etc. Error bounds on the numerical approximation obtained by the Galerkin finite element method (FEM) of the regularized formulation were established in [25, 40].
In this work, we develop a new approach for conductivity reconstruction using deep neural networks (DNNs). It is based on a least-squares mixed-type reformulation of the governing equation, with an \(H^{1}(\Omega)\) penalty on the unknown conductivity \(q\). The mixed least-squares formulation was first proposed by Kohn and Lowe [29] (and hence also known as the Kohn-Lowe approach), where both \(q\) and \(u\) were approximated using the Galerkin FEM. In our approach, we approximate both current density \(\sigma\) and conductivity \(q\) separately using two DNNs, adopt a least-squares objective for all the equality constraints, and minimize the overall loss with respect to the DNN parameters. The use of DNNs in place of FEM allows exploiting inductive bias and expressivity of DNNs for function approximations, which can be highly beneficial for numerical recovery. By leveraging the approximation theory of DNNs [20], nonstandard energy argument [25, 29] and statistical learning theory [5, 9], we derive novel error bounds on the approximations in terms of the accuracy of the observational data, DNN architecture (depth, width and parameter bound), and the numbers of sampling points in the domain and on the boundary etc. This is carried out for both population and empirical losses (resulting from Monte Carlo quadrature of the integrals). These error bounds provide theoretical underpinnings for the approach. In practice, the proposed approach is easy to implement, robust with respect to noise and can handle high-dimensional inverse problems (e.g., \(d=5\)). For example, the approach can still yield reasonable approximations in the presence of 10% noise in the data, as is confirmed by the extensive numerical experiments in both low and high-dimensional settings. These distinct features make the method highly attractive, and the numerical results clearly show its significant potential for solving the inverse problem. 
The development of the DNN formulations, error analysis and extensive numerical validation represent the main contributions of the present work.
In recent years, the use of DNNs for solving direct and inverse problems for PDEs has received a lot of attention; see [15, 39] for overviews. Existing neural inverse schemes using DNNs can roughly be divided into two groups: supervised approaches (see, e.g., [21, 27, 37]) and unsupervised approaches (see, e.g., [7, 8, 24, 32, 43]). Supervised methods exploit the availability of (abundant) paired training data to extract problem-specific features, and are concerned with learning forward operators or their (regularized) inverses. In contrast, unsupervised approaches exploit expressivity of DNNs as universal function approximators for the unknown coefficient and the state, i.e., using DNNs as ansatz functions in the approximation scheme, which enjoy excellent approximation properties for high-dimensional functions (in favorable situations), and the associated inductive biases [33]. The works [7] and [8] investigated image reconstruction in the classical EIT problem, using the weak and strong formulations (also with the \(L^{\infty}\) norm consistency for the latter), respectively. The work [24] applied the deep Ritz method to a least-gradient reformulation for the current density impedance imaging, and derived a generalization error for the loss function. The approach performs reasonably well for both full and partial interior current density data, and shows remarkable robustness against data noise. Pakravan et al [32] developed a hybrid approach, blending high expressivity of DNNs with the accuracy and reliability of traditional numerical methods for PDEs, and showed the approach for recovering the diffusion coefficient in one- and two-dimensional elliptic PDEs; see also [11] for a hybrid DNN-FEM approach for recovering the conductivity coefficient in elliptic and parabolic problems, where the conductivity and state are approximated using DNNs and Galerkin FEM, respectively. 
The work [11] aims at combining the strengths of neural networks and the classical Galerkin FEM, i.e., expressivity of DNNs and solid theoretical foundations of the FEM, and provides a thorough theoretical analysis. However, the approach is limited to low-dimensional problems. All these works have presented very encouraging empirical results for a range of PDE inverse problems, and clearly showed the enormous potential of DNNs in solving PDE inverse problems. Our approach follows the unsupervised paradigm, but unlike existing approaches, it employs a mixed formulation of the governing equation and thus differs greatly from the aforementioned ones. In addition, we have established rigorous error bounds on the approximation. Note that the theoretical analysis of neural inverse schemes is largely elusive, due to the outstanding challenges, mostly associated with nonconvexity of the objective function and a lack of linear structure of the approximation space.
The rest of the paper is organized as follows. In Section 2 we recall preliminaries on neural networks, especially approximation theory. In Section 3, we develop the reconstruction approach for a Neumann boundary value problem
(1.1) based on a mixed formulation of the governing equation. Further, we present an error analysis of the approach for both population and empirical losses, using tools from partial differential equations and statistical learning theory. In Section 4, we describe the extension of the approach to the case of a Dirichlet boundary value problem. In Section 5, we present extensive numerical experiments to validate the effectiveness of the proposed approach, including highly challenging cases with large noise and high-dimensionality. Throughout, we denote by \(W^{k,p}(\Omega)\) and \(W_{0}^{k,p}(\Omega)\) the standard Sobolev spaces of order \(k\) for any integer \(k\geq 0\) and real \(p\geq 1\), equipped with the norm \(\|\cdot\|_{W^{k,p}(\Omega)}\). Further, we denote by \(W^{-k,p^{\prime}}(\Omega)\) the dual space of \(W_{0}^{k,p}(\Omega)\), with the pair \((p,p^{\prime})\) being the Hölder conjugate exponents. We also write \(H^{k}(\Omega)\) and \(H_{0}^{k}(\Omega)\) with the norm \(\|\cdot\|_{H^{k}(\Omega)}\) if \(p=2\) and write \(L^{p}(\Omega)\) with the norm \(\|\cdot\|_{L^{p}(\Omega)}\) if \(k=0\). The spaces on the boundary \(\partial\Omega\) are defined similarly. The notation \((\cdot,\cdot)\) denotes the standard \(L^{2}(\Omega)\) inner product. For a Banach space \(B\), the notation \(B^{d}\) represents the \(d\)-fold product space. We denote by \(c\) a generic constant not necessarily the same at each occurrence, but it is always independent of the approximation accuracy \(\epsilon\) of the DNN, the noise level \(\delta\) and the penalty parameters (\(\gamma_{\sigma}\), \(\gamma_{b}\) and \(\gamma_{q}\)).
## 2 Preliminaries on DNNs
In this section, we describe useful notation and properties on fully connected feedforward neural networks. Let \(\{d_{\ell}\}_{\ell=0}^{L}\subset\mathbb{N}\) be fixed natural numbers with the input dimensionality \(d_{0}=d\) and output dimensionality \(d_{L}\), and a parameterization \(\Theta=\{(A^{(\ell)},b^{(\ell)})_{\ell=1}^{L}\}\) consisting of weight matrices and bias vectors, with \(A^{(\ell)}=[W_{ij}^{(\ell)}]\in\mathbb{R}^{d_{\ell}\times d_{\ell-1}}\) and \(b^{(\ell)}=[b_{i}^{(\ell)}]\in\mathbb{R}^{d_{\ell}}\) the weight matrix and bias vector at the \(\ell\)-th layer. Then a DNN function \(v_{\theta}:=v^{(L)}:\Omega\subset\mathbb{R}^{d}\rightarrow\mathbb{R}^{d_{L}}\) realized by the parameter \(\theta\in\Theta\) is defined recursively by
\[\text{DNN realization:}\quad\left\{\begin{aligned} & v^{(0)}=x,\quad x\in\Omega \subset\mathbb{R}^{d},\\ & v^{(\ell)}=\rho(A^{(\ell)}v^{(\ell-1)}+b^{(\ell)}),\quad\ell=1,2,\cdots,L-1,\\ & v^{(L)}=A^{(L)}v^{(L-1)}+b^{(L)},\end{aligned}\right. \tag{2.1}\]
where the nonlinear activation function \(\rho:\mathbb{R}\rightarrow\mathbb{R}\) is applied componentwise to a vector. The DNN has a depth \(L\) and width \(W:=\max_{\ell=0,\ldots,L}(d_{\ell})\) and the total number of parameters \(\sum_{\ell=1}^{L}d_{\ell}d_{\ell-1}+d_{\ell}\). Given the parametrization \(\Theta\) (i.e., architecture) and the DNN realization, we denote the associated DNN function class \(\mathcal{N}_{\Theta}\) by \(\mathcal{N}_{\Theta}:=\{v_{\theta},\ \theta\in\Theta\}\). Throughout, we fix \(\rho\equiv\tanh\): \(x\rightarrow\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}\) and denote the corresponding DNN as \(\tanh\)-DNN. The following approximation result holds [20, Proposition 4.8].
**Lemma 2.1**.: _Let \(s\in\mathbb{N}_{0}\) and \(p\in[1,\infty]\) be fixed, and \(v\in W^{k,p}(\Omega)\) with \(\mathbb{N}\ni k\geq s+1\). Then for any \(\epsilon>0\), there exists at least one \(\theta\in\Theta\) with depth \(O(\log(d+k))\) and number of nonzero weights \(O\big{(}\epsilon^{-\frac{d}{k-s-\mu(s=2)}}\big{)}\) and weight parameters bounded by \(O(\epsilon^{-2-\frac{2(d/p+d+k)+\mu(s=2)+d/p+d}{k-s-\mu(s=2)}})\) in the maximum norm, where \(\mu>0\) is arbitrarily small, such that the DNN realization \(v_{\theta}\in\mathcal{N}_{\Theta}\) satisfies_
\[\|v-v_{\theta}\|_{W^{s,p}(\Omega)}\leq\epsilon.\]
**Remark 2.1**.: _On the domain \(\Omega=(0,1)^{d}\), Guhring and Raslan [20, pp. 127-128] proved Lemma 2.1 using two steps. They first approximate a function \(v\in W^{k,p}(\Omega)\) by a localized Taylor polynomial \(v_{\mathrm{poly}}\):_
\[\|v-v_{\mathrm{poly}}\|_{W^{s,p}(\Omega)}\leq c_{\mathrm{poly}}N^{-(k-s-\mu(s=2 ))},\]
_where the construction \((\)see [20, Definition 4.4] for details\()\) of \(v_{\mathrm{poly}}\) relies on an approximate partition of unity \((\)depending on \(N\in\mathbb{N})\) and the constant \(c_{\mathrm{poly}}=c_{\mathrm{poly}}(d,p,k,s)>0\). Next they show that there exists a neural network parameter \(\theta\), satisfying the conditions in Lemma 2.1, such that [20, Lemma D.5]:_
\[\|v_{\mathrm{poly}}-v_{\theta}\|_{W^{s,p}(\Omega)}\leq c_{\mathrm{NN}}\|v\|_{W^{ s,p}(\Omega)}\bar{\epsilon},\]
_where the constant \(c_{\mathrm{NN}}=c_{\mathrm{NN}}(d,p,k,s)>0\) and \(\bar{\epsilon}\in(0,\frac{1}{2})\). Now for small \(\epsilon>0\), the desired estimate follows directly from the choice below_
\[N=\Big{(}\frac{\epsilon}{2c_{\mathrm{poly}}}\Big{)}^{-\frac{1}{k-s-\mu(s=2)}}\quad\text{and}\quad\bar{\epsilon}=\frac{\epsilon}{2c_{\mathrm{NN}}\|v\|_{W^{s,p}(\Omega)}}.\]
We denote the set of DNNs of depth \(L\), \(N_{\theta}\) nonzero weights, and maximum bound \(R\) on the parameter vector \(\theta\) by
\[\mathcal{N}(L,N_{\theta},R)=:\{v_{\theta}\text{ is an DNN of depth }L:\|\theta\|_{\ell^{0}}\leq N_{\theta},\|\theta\|_{\ell^{ \infty}}\leq R\},\]
where \(\|\cdot\|_{\ell^{0}}\) and \(\|\cdot\|_{\ell^{\infty}}\), denote respectively, the number of nonzero entries in and the maximum norm of a vector. Furthermore, for any \(\epsilon>0\) and \(p\geq 1\), we denote the DNN parameter set by \(\mathfrak{P}_{p,\epsilon}\) for the set
\[\mathcal{N}\Big{(}c\log(d+2),c\epsilon^{-\frac{d}{1-\mu}},c\epsilon^{-2-\frac{2p+3d+3p+2p\epsilon}{p(1-\mu)}}\Big{)}.\]
Below, for a vector-valued function, we use the DNN function class to approximate its components. This can be easily achieved by DNN parallelization, which combines multiple \(\tanh\)-DNNs into one larger DNN. An induction argument then allows assembling multiple DNNs into one big DNN. Moreover, the new DNN does not change the depth, and its width equals the sum of the widths of the subnetworks.
**Lemma 2.2**.: _Let \(\bar{\theta}=\{(\bar{A}^{(\ell)},\bar{b}^{(\ell)})\}_{\ell=1}^{L},\tilde{ \theta}=\{(\bar{A}^{(\ell)},\tilde{b}^{(\ell)})\}_{\ell=1}^{L}\in\Theta\), let \(\bar{v}\) and \(\tilde{v}\) be their DNN realizations, and define \(\theta=\{(A^{(\ell)},b^{(\ell)})\}_{\ell=1}^{L}\) by_
\[A^{(1)} =\begin{bmatrix}\bar{A}^{(1)}\\ \tilde{A}^{(1)}\end{bmatrix}\in\mathbb{R}^{2d_{1}\times d_{0}},\ A^{(\ell)}= \begin{bmatrix}\bar{A}^{(\ell)}&0\\ 0&\bar{A}^{(\ell)}\end{bmatrix}\in\mathbb{R}^{2d_{\ell}\times 2d_{\ell-1}}, \quad\ell=2,\ldots,L;\] \[b^{(\ell)} =\begin{bmatrix}\bar{b}^{(\ell)}\\ \tilde{b}^{(\ell)}\end{bmatrix}\in\mathbb{R}^{2d_{\ell}},\quad\ell=1,\ldots,L.\]
_Then \(v_{\theta}=(\bar{v}_{\theta},\tilde{v}_{\tilde{\theta}})^{\top}\) is the DNN realization of \(\theta\), of depth \(L\) and width \(2W\)._
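A minimal sketch of Lemma 2.2 (ours; the helper `realize` implements the recursion (2.1) with \(\rho=\tanh\)): stacking the first-layer matrices and making all later layers block-diagonal yields a single network whose output is the concatenation \((\bar{v}_{\bar{\theta}},\tilde{v}_{\tilde{\theta}})^{\top}\) of the subnetwork outputs, with unchanged depth and summed widths.

```python
import numpy as np

def realize(params, x):
    """DNN realization (2.1) with rho = tanh; params = [(A_1, b_1), ..., (A_L, b_L)]."""
    *hidden, (A_L, b_L) = params
    v = x
    for A, b in hidden:
        v = np.tanh(A @ v + b)
    return A_L @ v + b_L

rng = np.random.default_rng(4)
def rand_net(widths):           # random parameters for the given layer widths
    return [(rng.normal(size=(m, n)), rng.normal(size=m))
            for n, m in zip(widths[:-1], widths[1:])]

nb = rand_net([3, 5, 2])        # subnetwork for \bar{theta}
nt = rand_net([3, 4, 2])        # subnetwork for \tilde{theta} (same depth)

def parallelize(p1, p2):
    out = []
    for ell, ((A1, b1), (A2, b2)) in enumerate(zip(p1, p2)):
        if ell == 0:            # first layer: stack vertically (shared input)
            A = np.vstack([A1, A2])
        else:                   # later layers: block diagonal
            A = np.block([[A1, np.zeros((A1.shape[0], A2.shape[1]))],
                          [np.zeros((A2.shape[0], A1.shape[1])), A2]])
        out.append((A, np.concatenate([b1, b2])))
    return out

par = parallelize(nb, nt)
x = rng.normal(size=3)
joint = realize(par, x)
```

The combined network evaluates both subnetworks simultaneously, exactly as the lemma states.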
## 3 Inverse conductivity problem in the Neumann case
In this section we discuss the inverse conductivity problem for the Neumann boundary value problem (1.1).
### Mixed formulation and its DNN approximation
To develop the reconstruction method, we let \(\sigma=q\nabla u\) and rewrite problem (1.1) into a first-order system
\[\begin{cases}\sigma=q\nabla u,&\text{in }\Omega,\\ -\nabla\cdot\sigma=f,&\text{in }\Omega,\\ \sigma\cdot\mathbf{n}=g,&\text{on }\partial\Omega.\end{cases} \tag{3.1}\]
To recover the conductivity \(q\) from the observation \(z^{\delta}\), we employ a residual minimization scheme based on the first-order system (3.1). We approximate both \(\sigma\) and \(q\) using DNNs, and substitute the noisy data \(z^{\delta}\) for the scalar field \(u\) in the first equation. For the scalar field \(q\), we use a \(\tanh\)-DNN function class (of depth \(L_{q}\) and width \(W_{q}\)) with the parametrization \(\mathfrak{P}_{p,\epsilon_{q}}\) (with \(p\geq 2\) and tolerance \(\epsilon_{q}\)). Similarly, for the vector field \(\sigma:\Omega\to\mathbb{R}^{d}\), we employ \(d\) identical \(\tanh\)-DNN function classes (of depth \(L_{\sigma}\) and width \(W_{\sigma}\)) with the parametrizations \(\mathfrak{P}_{2,\epsilon_{\sigma}}\) (with tolerance \(\epsilon_{\sigma}\)), and stack them into one multi-output DNN (via the parallelization technique in Lemma 2.2).
To enforce the box constraint of the coefficient \(q\), we employ a cutoff operation \(P_{\mathcal{A}}:H^{1}(\Omega)\to\mathcal{A}\) defined by \(P_{\mathcal{A}}(v)=\min(\max(c_{0},v),c_{1})\). The operator \(P_{\mathcal{A}}\) is stable in the following sense [45, Corollary 2.1.8]
\[\|\nabla P_{\mathcal{A}}(v)\|_{L^{p}(\Omega)}\leq\|\nabla v\|_{L^{p}(\Omega)},\quad\forall v\in W^{1,p}(\Omega),p\in[1,\infty], \tag{3.2}\]
and moreover, for all \(v\in\mathcal{A}\), there holds
\[\|P_{\mathcal{A}}(w)-v\|_{L^{p}(\Omega)}\leq\|w-v\|_{L^{p}(\Omega)},\quad \forall w\in L^{p}(\Omega),\ p\in[1,\infty]. \tag{3.3}\]
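The cutoff operator acts pointwise, so its properties are easy to check on sample values. The following sketch (interval endpoints are illustrative) implements \(P_{\mathcal{A}}\) and spot-checks the non-expansiveness property (3.3) in the pointwise sense.

```python
# Pointwise cutoff P_A(v) = min(max(c0, v), c1) enforcing the box constraint
# q in [c0, c1]; the endpoints c0, c1 below are illustrative.
c0, c1 = 0.5, 2.0

def P_A(v):
    return min(max(c0, v), c1)

# Spot-check of (3.3): |P_A(w) - v| <= |w - v| whenever v already lies in
# [c0, c1], i.e., projection onto the interval is non-expansive.
vs = [0.5, 1.0, 1.7, 2.0]                 # admissible values v
ws = [-1.0, 0.3, 0.9, 1.4, 2.5, 4.0]      # arbitrary trial values w
assert all(abs(P_A(w) - v) <= abs(w - v) for v in vs for w in ws)
```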
Using the standard least-squares formulation on all equality constraints, we obtain the following objective
\[J_{\boldsymbol{\gamma}}(\theta,\kappa)=\|\sigma_{\kappa}-P_{\mathcal{A}}(q_{ \theta})\nabla z^{\delta}\|_{L^{2}(\Omega)}^{2}+\gamma_{\sigma}\|\nabla\cdot \sigma_{\kappa}+f\|_{L^{2}(\Omega)}^{2}+\gamma_{b}\|\sigma_{\kappa}\cdot \mathbf{n}-g\|_{L^{2}(\partial\Omega)}^{2}+\gamma_{q}\|\nabla q_{\theta}\|_{L^{2 }(\Omega)}^{2}.\]
Then the DNN reconstruction problem reads
\[\min_{(\theta,\kappa)\in(\mathfrak{P}_{p,\epsilon_{q}},\mathfrak{P}_{2,\epsilon_{\sigma}}^{\otimes d})}J_{\boldsymbol{\gamma}}(\theta,\kappa), \tag{3.4}\]
where the superscript \(\otimes d\) denotes the \(d\)-fold direct product, \(\gamma_{\sigma}>0\), \(\gamma_{b}>0\) and \(\gamma_{q}>0\) are penalty parameters that balance the different terms and have to be tuned suitably, and we write \(\boldsymbol{\gamma}=(\gamma_{\sigma},\gamma_{b},\gamma_{q})\in\mathbb{R}_{+}^{3}\) below. The \(H^{1}(\Omega)\) seminorm penalty on the approximation \(q_{\theta}\) stabilizes the minimization process and is essential for overcoming the ill-posedness of the inverse problem [17, 23]. Here \(z^{\delta}\) denotes the noisy measurement of the exact data \(u(q^{\dagger})\) in the domain \(\Omega\), and the noise level \(\delta\) is defined by
\[\delta:=\|\nabla(u(q^{\dagger})-z^{\delta})\|_{L^{2}(\Omega)}. \tag{3.5}\]
Note that in (3.5) we impose a mild regularity condition on the noisy data \(z^{\delta}\) (smoother than the popular \(L^{2}(\Omega)\) condition), which may be obtained by presmoothing the raw noisy data beforehand. This assumption is commonly used in energy type formulations, e.g., the Kohn-Lowe [29] and equation error [3] formulations. The well-posedness of problem (3.4) follows by a standard argument in the calculus of variations. Indeed, the compactness of the parametrizations \(\mathfrak{P}_{p,\epsilon_{q}}\) and \(\mathfrak{P}_{2,\epsilon_{\sigma}}\) holds due to the uniform boundedness of the parameter vectors and the finite-dimensionality of the space. Meanwhile, the smoothness of the \(\tanh\) activation function implies the continuity of the loss \(J_{\boldsymbol{\gamma}}\) in the DNN parameters \((\theta,\kappa)\). These two properties imply the existence of a minimizer \((\theta^{*},\kappa^{*})\).
Note that the objective \(J_{\boldsymbol{\gamma}}\) involves high-dimensional integrals, and hence quadrature is needed in practice. This may be achieved using any standard quadrature rule, with Monte Carlo methods being the predominant choice in high dimensions. In this work, we employ the Monte Carlo method. Let \(\mathcal{U}(\Omega)\) and \(\mathcal{U}(\partial\Omega)\) be the uniform distributions over the domain \(\Omega\) and the boundary \(\partial\Omega\), respectively, and \((q_{\theta},\sigma_{\kappa})\) be the DNN realization of \((\theta,\kappa)\). Using the expectation \(\mathbb{E}_{\mathcal{U}(\cdot)}[\cdot]\) with respect to \(\mathcal{U}(\cdot)\), we can rewrite the loss \(J_{\boldsymbol{\gamma}}\) as
\[J_{\boldsymbol{\gamma}}(\theta,\kappa) =|\Omega|\mathbb{E}_{X\sim\mathcal{U}(\Omega)}\Big{[}\|\sigma_{ \kappa}(X)-P_{\mathcal{A}}(q_{\theta}(X))\nabla z^{\delta}(X)\|_{\ell^{2}}^{2} \Big{]}+\gamma_{\sigma}|\Omega|\mathbb{E}_{X\sim\mathcal{U}(\Omega)}\Big{[} \big{(}\nabla\cdot\sigma_{\kappa}(X)+f(X)\big{)}^{2}\Big{]}\] \[\quad+\gamma_{b}|\partial\Omega|\mathbb{E}_{Y\sim\mathcal{U}( \partial\Omega)}\Big{[}\big{(}\sigma_{\kappa}(Y)\cdot\mathbf{n}-g(Y)\big{)}^{ 2}\Big{]}+\gamma_{q}|\Omega|\mathbb{E}_{X\sim\mathcal{U}(\Omega)}\Big{[}\| \nabla q_{\theta}(X)\|_{\ell^{2}}^{2}\Big{]}\] \[=:\mathcal{E}_{d}(\sigma_{\kappa},q_{\theta})+\gamma_{\sigma} \mathcal{E}_{\sigma}(\sigma_{\kappa})+\gamma_{b}\mathcal{E}_{b}(\sigma_{ \kappa})+\gamma_{q}\mathcal{E}_{q}(q_{\theta}),\]
where \(|\Omega|\) and \(|\partial\Omega|\) denote the Lebesgue measure of \(\Omega\) and \(\partial\Omega\), respectively, and \(\|\cdot\|_{\ell^{2}}\) denotes the Euclidean norm on \(\mathbb{R}^{d}\). Next let \(X=\{X_{j}\}_{j=1}^{n_{r}}\) and \(Y=\{Y_{j}\}_{j=1}^{n_{b}}\) be independent and identically distributed (i.i.d.) samples drawn from the uniform distributions \(\mathcal{U}(\Omega)\) and \(\mathcal{U}(\partial\Omega)\), respectively, where \(n_{r}\) and \(n_{b}\) are the numbers of sampling points in the domain \(\Omega\) and on the boundary \(\partial\Omega\), respectively. Then the empirical loss \(\widehat{J}_{\boldsymbol{\gamma}}(\theta,\kappa)\) is given by
\[\widehat{J}_{\boldsymbol{\gamma}}(\theta,\kappa):=\widehat{\mathcal{E}}_{d}( \sigma_{\kappa},q_{\theta})+\gamma_{\sigma}\widehat{\mathcal{E}}_{\sigma}( \sigma_{\kappa})+\gamma_{b}\widehat{\mathcal{E}}_{b}(\sigma_{\kappa})+\gamma_ {q}\widehat{\mathcal{E}}_{q}(q_{\theta}), \tag{3.6}\]
where \(\widehat{\mathcal{E}}_{d}(\sigma_{\kappa},q_{\theta})\), \(\widehat{\mathcal{E}}_{\sigma}(\sigma_{\kappa})\), \(\widehat{\mathcal{E}}_{b}(\sigma_{\kappa})\) and \(\widehat{\mathcal{E}}_{q}(q_{\theta})\) are Monte Carlo approximations of \(\mathcal{E}_{d}(\sigma_{\kappa},q_{\theta})\), \(\mathcal{E}_{\sigma}(\sigma_{\kappa})\), \(\mathcal{E}_{b}(\sigma_{\kappa})\) and \(\mathcal{E}_{q}(q_{\theta})\), obtained by replacing the expectation with a sample mean, and are defined by
\[\widehat{\mathcal{E}}_{d}(\sigma_{\kappa},q_{\theta}) :=n_{r}^{-1}|\Omega|\sum_{j=1}^{n_{r}}\|\sigma_{\kappa}(X_{j})-P_{ \mathcal{A}}(q_{\theta}(X_{j}))\nabla z^{\delta}(X_{j})\|_{\ell^{2}}^{2}, \tag{3.7}\] \[\widehat{\mathcal{E}}_{\sigma}(\sigma_{\kappa}) :=n_{r}^{-1}|\Omega|\sum_{j=1}^{n_{r}}\big{(}\nabla\cdot\sigma_{ \kappa}(X_{j})+f(X_{j})\big{)}^{2},\] (3.8) \[\widehat{\mathcal{E}}_{b}(\sigma_{\kappa}) :=n_{b}^{-1}|\partial\Omega|\sum_{j=1}^{n_{b}}\big{(}\sigma_{ \kappa}(Y_{j})\cdot\mathbf{n}-g(Y_{j})\big{)}^{2},\] (3.9) \[\widehat{\mathcal{E}}_{q}(q_{\theta}) :=n_{r}^{-1}|\Omega|\sum_{j=1}^{n_{r}}\|\nabla q_{\theta}(X_{j})\|_ {\ell^{2}}^{2}. \tag{3.10}\]
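To make the Monte Carlo discretization concrete, the following self-contained 1-D sketch evaluates the empirical terms (3.7)-(3.10) on \(\Omega=(0,1)\). All functions are illustrative stand-ins (not the paper's DNNs), chosen so that the exact first-order system (3.1) is satisfied; consequently only the penalty term is nonzero.

```python
import random

random.seed(1)

# Toy 1-D instance: candidate conductivity, flux, and data with their
# derivatives supplied analytically (all names are illustrative).
q_theta  = lambda x: 1.0 + 0.5 * x                 # candidate conductivity (inside A)
dq_theta = lambda x: 0.5                           # its gradient
sigma    = lambda x: 1.0 - x                       # candidate flux sigma_kappa
dsigma   = lambda x: -1.0                          # div(sigma_kappa)
dz       = lambda x: (1.0 - x) / (1.0 + 0.5 * x)   # gradient of the noisy data z^delta
f        = lambda x: 1.0                           # source, consistent with -div(sigma) = f
g        = {0.0: -1.0, 1.0: 0.0}                   # Neumann data sigma . n at the endpoints

n_r = 10_000
X = [random.random() for _ in range(n_r)]          # i.i.d. uniform samples in Omega

# |Omega| = 1; the boundary is two points with outward normals -1 and +1,
# so n_b^{-1} |partial Omega| = 1 under the counting measure.
E_d = sum((sigma(x) - q_theta(x) * dz(x)) ** 2 for x in X) / n_r   # (3.7)
E_s = sum((dsigma(x) + f(x)) ** 2 for x in X) / n_r                # (3.8)
E_b = sum((sigma(y) * n - g[y]) ** 2 for y, n in [(0.0, -1.0), (1.0, 1.0)])  # (3.9)
E_q = sum(dq_theta(x) ** 2 for x in X) / n_r                       # (3.10)

gamma_s, gamma_b, gamma_q = 1.0, 1.0, 1e-4
J_hat = E_d + gamma_s * E_s + gamma_b * E_b + gamma_q * E_q        # (3.6)
```

Since the toy functions solve the system exactly, the data, divergence, and boundary residuals vanish and \(\widehat{J}_{\boldsymbol{\gamma}}\) reduces to the penalty \(\gamma_{q}\widehat{\mathcal{E}}_{q}\).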
Additionally, we define variants of \(\mathcal{E}_{b}\) and \(\widehat{\mathcal{E}}_{b}\) by
\[\mathcal{E}_{b^{\prime}}=|\partial\Omega|\mathbb{E}_{\mathcal{U}(\partial\Omega)}[ \|\sigma_{\kappa}(Y)-q^{\dagger}(Y)\nabla z^{\delta}(Y)\|_{\ell^{2}}^{2}]\quad\text{and} \quad\widehat{\mathcal{E}}_{b^{\prime}}=n_{b}^{-1}|\partial\Omega|\sum_{j=1}^{n_{ b}}\|\sigma_{\kappa}(Y_{j})-q^{\dagger}(Y_{j})\nabla z^{\delta}(Y_{j})\|_{\ell^{2}}^{2}. \tag{3.11}\]
These quantities will be needed in the investigation of the inverse problem in the Dirichlet case in Section 4. The estimation of Monte Carlo errors will be discussed in Section 3.3 below.
### Error analysis of the population loss
Now we derive (weighted) \(L^{2}(\Omega)\) error bounds on \(q^{\dagger}-q^{*}_{\theta}\) for the DNN realization \(q^{*}_{\theta}\) of a minimizer \((\theta^{*},\kappa^{*})\) to the population loss \(J_{\mathbf{\gamma}}(\theta,\kappa)\) under Assumption 3.1 below.
**Assumption 3.1**.: \(q^{\dagger}\in W^{2,p}(\Omega)\cap\mathcal{A}\)_, \(f\in H^{1}(\Omega)\cap L^{\infty}(\Omega)\) and \(g\in H^{\frac{3}{2}}(\partial\Omega)\cap L^{\infty}(\partial\Omega)\), with \(p=\max(2,d+\nu)\) for small \(\nu>0\)._
The following _a priori_ regularity holds under Assumption 3.1 for \(u^{\dagger}:=u(q^{\dagger})\) and \(\sigma^{\dagger}:=q^{\dagger}\nabla u(q^{\dagger})\).
**Lemma 3.1**.: _Under Assumption 3.1, the solution \(u^{\dagger}\) to problem (1.1) (with \(q=q^{\dagger}\)) satisfies \(u^{\dagger}\in H^{3}(\Omega)\cap H^{1}_{0}(\Omega)\) and \(\sigma^{\dagger}\in(H^{2}(\Omega))^{d}\)._
Proof.: By the Sobolev embedding theorem [2, Theorem 4.12, p. 85], \(q^{\dagger}\in W^{2,p}(\Omega)\hookrightarrow W^{1,\infty}(\Omega)\), since \(p=\max(2,d+\nu)>d\). Since \(f\in L^{\infty}(\Omega)\) and \(g\in L^{\infty}(\partial\Omega)\), standard elliptic regularity theory implies \(u^{\dagger}\in H^{2}(\Omega)\cap W^{1,\infty}(\Omega)\). Next, upon expansion, we have
\[\begin{cases}-\Delta u^{\dagger}=\frac{f}{q^{\dagger}}+\frac{\nabla q^{\dagger }\cdot\nabla u^{\dagger}}{q^{\dagger}},&\text{ in }\Omega,\\ \partial_{n}u^{\dagger}=\frac{g}{q^{\dagger}},&\text{ on }\partial\Omega, \end{cases}\]
where \(\cdot\) denotes Euclidean inner product on \(\mathbb{R}^{d}\). Since \(q^{\dagger}\in W^{1,\infty}(\Omega)\cap\mathcal{A}\) and \(f\in L^{\infty}(\Omega)\cap H^{1}(\Omega)\), we have for \(i=1,\ldots,d\), \(\partial_{x_{i}}\Big{(}\frac{f}{q^{\dagger}}\Big{)}=-\frac{f\partial_{x_{i}}q ^{\dagger}}{(q^{\dagger})^{2}}+\frac{\partial_{x_{i}}f}{q^{\dagger}}\in L^{2}(\Omega)\), i.e., \(\frac{f}{q^{\dagger}}\in H^{1}(\Omega)\). Likewise, since \(q^{\dagger}\in W^{1,\infty}(\Omega)\cap W^{2,p}(\Omega)\) and \(u^{\dagger}\in W^{1,\infty}(\Omega)\cap H^{2}(\Omega)\), we have for any \(i=1,\ldots,d\),
\[\partial_{x_{i}}\Big{(}\frac{\nabla q^{\dagger}\cdot\nabla u^{\dagger}}{q^{ \dagger}}\Big{)}=-\frac{\partial_{x_{i}}q^{\dagger}(\nabla q^{\dagger}\cdot \nabla u^{\dagger})}{(q^{\dagger})^{2}}+\frac{\nabla(\partial_{x_{i}}q^{ \dagger})\cdot\nabla u^{\dagger}}{q^{\dagger}}+\frac{\nabla(\partial_{x_{i}}u ^{\dagger})\cdot\nabla q^{\dagger}}{q^{\dagger}}\in L^{2}(\Omega),\]
i.e., \(\frac{\nabla q^{\dagger}\cdot\nabla u^{\dagger}}{q^{\dagger}}\in H^{1}(\Omega)\) also. Since \(g\in L^{\infty}(\partial\Omega)\cap H^{\frac{3}{2}}(\partial\Omega)\), by the trace theorem [2, Theorem 5.36, p. 164], there exists an extension \(\tilde{g}\in L^{\infty}(\Omega)\cap H^{2}(\Omega)\) with \(\tilde{g}|_{\partial\Omega}=g\). Then repeating the preceding argument gives for any \(i,j=1,\ldots,d\)
\[\partial_{x_{i}x_{j}}\Big{(}\frac{\tilde{g}}{q^{\dagger}}\Big{)}=\frac{q^{ \dagger}\partial_{x_{i}x_{j}}\tilde{g}-\partial_{x_{i}}\tilde{g}\partial_{x_ {j}}q^{\dagger}}{(q^{\dagger})^{2}}-\frac{(\partial_{x_{j}}\tilde{g})(\partial _{x_{i}}q^{\dagger})+\tilde{g}\partial_{x_{i}x_{j}}q^{\dagger}}{(q^{\dagger} )^{2}}+2\frac{\tilde{g}(\partial_{x_{i}}q^{\dagger})(\partial_{x_{j}}q^{ \dagger})}{(q^{\dagger})^{3}}\in L^{2}(\Omega),\]
i.e., \(\frac{\tilde{g}}{q^{\dagger}}\in H^{2}(\Omega)\). This and the trace theorem imply \(\frac{g}{q^{\dagger}}\in H^{\frac{3}{2}}(\partial\Omega)\). Applying standard elliptic regularity theory again yields \(u^{\dagger}\in H^{3}(\Omega)\). Finally, it follows from \(q^{\dagger}\in W^{2,p}(\Omega)\) and \(u^{\dagger}\in H^{3}(\Omega)\cap W^{1,\infty}(\Omega)\) that \(\sigma^{\dagger}\equiv q^{\dagger}\nabla u^{\dagger}\in(H^{2}(\Omega))^{d}\).
The next lemma gives an _a priori_ estimate on the DNN realization \((q^{*}_{\theta},\sigma^{*}_{\kappa})\) of the minimizer \((\theta^{*},\kappa^{*})\).
**Lemma 3.2**.: _Let Assumption 3.1 hold. Fix small \(\epsilon_{\sigma}\), \(\epsilon_{q}>0\), and let \((\theta^{*},\kappa^{*})\in(\mathfrak{P}_{p,\epsilon_{q}},\mathfrak{P}_{2, \epsilon_{\sigma}}^{\otimes d})\) be a minimizer of problem (3.4). Then the following estimate holds_
\[J_{\mathbf{\gamma}}(\theta^{*},\kappa^{*})\leq c\big{(}\epsilon_{q}^{2}+(\gamma_{ \sigma}+\gamma_{b}+1)\epsilon_{\sigma}^{2}+\delta^{2}+\gamma_{q}\big{)}.\]
Proof.: First, Assumption 3.1 and Lemma 3.1 imply \(\sigma^{\dagger}\in(H^{2}(\Omega))^{d}\). By Lemma 2.1, there exists a pair \((\theta_{\epsilon},\kappa_{\epsilon})\in(\mathfrak{P}_{p,\epsilon_{q}},\mathfrak{P}_ {2,\epsilon_{\sigma}}^{\otimes d})\) such that its DNN realization \((q_{\theta_{\epsilon}},\sigma_{\kappa_{\epsilon}})\) satisfies
\[\|q^{\dagger}-q_{\theta_{\epsilon}}\|_{W^{1,p}(\Omega)}\leq\epsilon_{q}\quad \text{and}\quad\|\sigma^{\dagger}-\sigma_{\kappa_{\epsilon}}\|_{H^{1}(\Omega)} \leq\epsilon_{\sigma}. \tag{3.12}\]
Then the triangle inequality leads to \(\|\nabla q_{\theta_{\epsilon}}\|_{L^{2}(\Omega)}\leq c\). The embedding \(W^{1,p}(\Omega)\hookrightarrow L^{\infty}(\Omega)\)[2, Theorem 4.12, p. 85] implies
\[\|q^{\dagger}-q_{\theta_{\epsilon}}\|_{L^{\infty}(\Omega)}\leq c\epsilon_{q}. \tag{3.13}\]
By the minimizing property of \((\theta^{*},\kappa^{*})\) and the triangle inequality, we obtain
\[J_{\mathbf{\gamma}}(\theta^{*},\kappa^{*})\leq J_{\mathbf{\gamma}}( \theta_{\epsilon},\kappa_{\epsilon})\] \[=\|\sigma_{\kappa_{\epsilon}}-P_{\mathcal{A}}(q_{\theta_{ \epsilon}})\nabla z^{\delta}\|_{L^{2}(\Omega)}^{2}+\gamma_{\sigma}\|\nabla \cdot\sigma_{\kappa_{\epsilon}}+f\|_{L^{2}(\Omega)}^{2}+\gamma_{b}\|\sigma_{ \kappa_{\epsilon}}\cdot\mathbf{n}-g\|_{L^{2}(\partial\Omega)}^{2}+\gamma_{q} \|\nabla q_{\theta_{\epsilon}}\|_{L^{2}(\Omega)}^{2}\]
\[\|q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*})\|_{L^{2}(\Omega)}^{2}= \big{(}(q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*}))\nabla u^{\dagger},\nabla v_{ \psi}\big{)}=(\sigma^{\dagger},\nabla v_{\psi})-(P_{\mathcal{A}}(q_{\theta}^{*} )\nabla u^{\dagger},\nabla v_{\psi})\] \[=-\big{(}\nabla\cdot\sigma^{\dagger},v_{\psi}\big{)}-\big{(}P_{ \mathcal{A}}(q_{\theta}^{*})\nabla u^{\dagger},\nabla v_{\psi}\big{)}+(g,v_{ \psi})_{L^{2}(\partial\Omega)}\] \[=\big{(}f+\nabla\cdot\sigma_{\kappa}^{*},v_{\psi}\big{)}+\big{(} \sigma_{\kappa}^{*}-P_{\mathcal{A}}(q_{\theta}^{*})\nabla z^{\delta},\nabla v_ {\psi}\big{)}+\big{(}P_{\mathcal{A}}(q_{\theta}^{*})\nabla(z^{\delta}-u^{ \dagger}),\nabla v_{\psi}\big{)}+(g-\sigma_{\kappa}^{*}\cdot\mathbf{n},v_{ \psi})_{L^{2}(\partial\Omega)}.\]
Then the Cauchy-Schwarz inequality, the trace theorem [2, Theorem 5.36, p. 164], Lemma 3.2 and Condition 3.2 imply
\[\|q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*})\|_{L^{2}(\Omega)} \leq c\big{[}\|f+\nabla\cdot\sigma_{\kappa}^{*}\|_{L^{2}(\Omega)}+\| \sigma_{\kappa}^{*}-P_{\mathcal{A}}(q_{\theta}^{*})\nabla z^{\delta}\|_{L^{2}( \Omega)}+\|P_{\mathcal{A}}(q_{\theta}^{*})\|_{L^{\infty}(\Omega)}\|\nabla(z^{ \delta}-u^{\dagger})\|_{L^{2}(\Omega)}\] \[\quad+\|g-\sigma_{\kappa}^{*}\cdot\mathbf{n}\|_{L^{2}(\partial \Omega)}\big{]}\|v_{\psi}\|_{H^{1}(\Omega)}\leq c\big{(}1+\gamma_{\sigma}^{- \frac{1}{2}}+\gamma_{b}^{-\frac{1}{2}}\big{)}\eta\|q^{\dagger}-P_{\mathcal{A}} (q_{\theta}^{*})\|_{H^{1}(\Omega)}.\]
By the definition of the \((H^{1}(\Omega))^{\prime}\)-norm, we have
\[\|q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*})\|_{(H^{1}(\Omega))^{\prime}}\leq c \big{(}1+\gamma_{\sigma}^{-\frac{1}{2}}+\gamma_{b}^{-\frac{1}{2}}\big{)}\eta.\]
This, duality pairing and the bound on \(\|\nabla q_{\theta}^{*}\|_{L^{2}(\Omega)}\) in Lemma 3.2 and the stability of \(P_{\mathcal{A}}\) in (3.2) imply
\[\|q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*})\|_{L^{2}(\Omega)} \leq\|q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*})\|_{(H^{1}(\Omega))^{ \prime}}^{\frac{1}{2}}\|q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*})\|_{H^{1}( \Omega)}^{\frac{1}{2}}\] \[\leq c\|q^{\dagger}-q_{\theta}^{*}\|_{(H^{1}(\Omega))^{\prime}}^{ \frac{1}{2}}\big{(}1+\|\nabla P_{\mathcal{A}}(q_{\theta}^{*})\|_{L^{2}(\Omega) }^{\frac{1}{2}}\big{)}\leq c\|q^{\dagger}-q_{\theta}^{*}\|_{(H^{1}(\Omega))^{ \prime}}^{\frac{1}{2}}\big{(}1+\|\nabla q_{\theta}^{*}\|_{L^{2}(\Omega)}^{ \frac{1}{2}}\big{)}\] \[\leq c(1+\gamma_{\sigma}^{-\frac{1}{4}}+\gamma_{b}^{-\frac{1}{4}})(1+ \gamma_{q}^{-\frac{1}{4}}\eta^{\frac{1}{2}})\eta^{\frac{1}{2}}.\]
Thus we complete the proof of the theorem.
**Remark 3.1**.: _Theorem 3.3 provides useful guidelines for choosing the parameters: \(\gamma_{\sigma}=O(1)\), \(\gamma_{b}=O(1)\), \(\gamma_{q}=O(\delta^{2})\), \(\epsilon_{q}=O(\delta)\) and \(\epsilon_{\sigma}=O(\delta)\). Then we obtain the following error estimate \(\|q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*})\|_{L^{2}(\Omega)}\leq c\delta^{ \frac{1}{2}}\)._
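The selections in Remark 3.1 can be packaged as a small helper mapping the noise level \(\delta\) to the algorithmic parameters; `select_parameters` is a hypothetical name and the hidden \(O(1)\) constants are set to one purely for illustration.

```python
# Parameter choices suggested by Remark 3.1 for a given noise level delta;
# the O(1) constants are set to 1 for illustration only.
def select_parameters(delta):
    return {
        "gamma_sigma": 1.0,           # gamma_sigma = O(1)
        "gamma_b":     1.0,           # gamma_b     = O(1)
        "gamma_q":     delta ** 2,    # gamma_q     = O(delta^2)
        "eps_q":       delta,         # epsilon_q   = O(delta)
        "eps_sigma":   delta,         # epsilon_sigma = O(delta)
    }

params = select_parameters(1e-2)
```

With these choices the theorem yields the \(O(\delta^{\frac{1}{2}})\) rate stated above.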
### Error analysis of the empirical loss
Now we analyze the error of the approximation \(\widehat{q}_{\theta}^{*}\), i.e., the DNN realization of a minimizer \((\widehat{\theta}^{*},\widehat{\kappa}^{*})\) of the empirical loss \(\widehat{J}_{\boldsymbol{\gamma}}(\theta,\kappa)\). The loss \(\widehat{J}_{\boldsymbol{\gamma}}(\theta,\kappa)\) also involves the quadrature error arising from approximating the integrals via Monte Carlo methods. The analysis requires the following assumption. The \(L^{\infty}\) bound is needed in order to apply the standard Rademacher complexity argument (cf. Dudley's formula in Lemma A.4).
**Assumption 3.4**.: \(f\in L^{\infty}(\Omega)\)_, and \(z^{\delta}\in W^{1,\infty}(\Omega)\)._
The key of the analysis is to bound the error \(\sup_{q_{\theta}\in\mathcal{N}_{q},\sigma_{\kappa}\in\mathcal{N}_{\sigma}} \big{|}J_{\boldsymbol{\gamma}}(q_{\theta},\sigma_{\kappa})-\widehat{J}_{\boldsymbol{\gamma}}(q_{\theta},\sigma_{\kappa})\big{|}\) for suitable DNN function classes \(\mathcal{N}_{q}\) and \(\mathcal{N}_{\sigma}\) (corresponding to the sets \(\mathfrak{P}_{p,\epsilon_{q}}\) and \(\mathfrak{P}_{2,\epsilon_{\sigma}}^{\otimes d}\), respectively), which are also known as statistical errors in statistical learning theory [5, 38]. The starting point of the analysis is the following splitting:
\[\sup_{q_{\theta}\in\mathcal{N}_{q},\sigma_{\kappa}\in\mathcal{N}_{\sigma}} \big{|}J_{\gamma}(q_{\theta},\sigma_{\kappa})-\widehat{J}_{\gamma}(q_{\theta },\sigma_{\kappa})\big{|}\leq\Delta\mathcal{E}_{d}+\gamma_{\sigma}\Delta \mathcal{E}_{\sigma}+\gamma_{b}\Delta\mathcal{E}_{b}+\gamma_{q}\Delta \mathcal{E}_{q},\]
with the error components given respectively by
\[\Delta\mathcal{E}_{d} :=\sup_{\sigma_{\kappa}\in\mathcal{N}_{\sigma},q_{\theta}\in \mathcal{N}_{q}}\big{|}\mathcal{E}_{d}(\sigma_{\kappa},q_{\theta})-\widehat{ \mathcal{E}}_{d}(\sigma_{\kappa},q_{\theta})\big{|}, \Delta\mathcal{E}_{\sigma}:=\sup_{\sigma_{\kappa}\in\mathcal{N}_{ \sigma}}\big{|}\mathcal{E}_{\sigma}(\sigma_{\kappa})-\widehat{\mathcal{E}}_{ \sigma}(\sigma_{\kappa})\big{|},\] \[\Delta\mathcal{E}_{b} :=\sup_{\sigma_{\kappa}\in\mathcal{N}_{\sigma}}\big{|}\mathcal{E} _{b}(\sigma_{\kappa})-\widehat{\mathcal{E}}_{b}(\sigma_{\kappa})\big{|}, \Delta\mathcal{E}_{q}:=\sup_{q_{\theta}\in\mathcal{N}_{q}}\big{|} \mathcal{E}_{q}(q_{\theta})-\widehat{\mathcal{E}}_{q}(q_{\theta})\big{|}.\]
Further we define \(\Delta\mathcal{E}_{b^{\prime}}:=\sup_{\sigma_{\kappa}\in\mathcal{N}_{\sigma}} \big{|}\mathcal{E}_{b^{\prime}}(\sigma_{\kappa})-\widehat{\mathcal{E}}_{b^{ \prime}}(\sigma_{\kappa})\big{|}\).
Now we state bounds on the quadrature error for each term (holding with high probability). The proof is based on PAC-type generalization bounds in Lemma A.1, which in turn employs the Rademacher complexity of the associated function classes via Dudley's formula in Lemma A.4. The overall proof is lengthy and hence deferred to the appendix.
**Theorem 3.5**.: _Let Assumptions 3.1 and 3.4 hold, and \(q_{\theta}\in\mathcal{N}(L_{q},N_{\theta},R_{q})\) and \(\sigma_{\kappa}\in\mathcal{N}(L_{\sigma},N_{\kappa},R_{\sigma})\). Fix \(\tau\in(0,\frac{1}{8})\), and define the following bounds_
\[e_{d} :=c\frac{R_{\sigma}^{2}N_{\kappa}^{2}(N_{\kappa}+N_{\theta})^{ \frac{1}{2}}(\log^{\frac{1}{2}}R_{\sigma}+\log^{\frac{1}{2}}N_{\kappa}+\log^{ \frac{1}{2}}R_{q}+\log^{\frac{1}{2}}N_{\theta}+\log^{\frac{1}{2}}n_{r})}{\sqrt{ n_{r}}}+\tilde{c}R_{\sigma}^{2}N_{\kappa}^{2}\sqrt{\frac{\log\frac{1}{\tau}}{2n_{r}}},\] \[e_{\sigma} :=c\frac{R_{\sigma}^{2L_{\sigma}}N_{\kappa}^{2L_{\sigma}-\frac{ 3}{2}}\big{(}\log^{\frac{1}{2}}R_{\sigma}+\log^{\frac{1}{2}}N_{\kappa}+\log^{ \frac{1}{2}}n_{r}\big{)}}{\sqrt{n_{r}}}+\tilde{c}R_{\sigma}^{2L_{\sigma}}N_{ \kappa}^{2L_{\sigma}-2}\sqrt{\frac{\log\frac{1}{\tau}}{2n_{r}}},\] \[e_{b} :=c\frac{R_{\sigma}^{2}N_{\kappa}^{\frac{5}{2}}\big{(}\log^{ \frac{1}{2}}R_{\sigma}+\log^{\frac{1}{2}}N_{\kappa}+\log^{\frac{1}{2}}n_{b} \big{)}}{\sqrt{n_{b}}}+\tilde{c}R_{\sigma}^{2}N_{\kappa}^{2}\sqrt{\frac{\log \frac{1}{\tau}}{2n_{b}}},\] \[e_{b^{\prime}} :=c\frac{R_{\sigma}^{2}N_{\kappa}^{\frac{5}{2}}\big{(}\log^{ \frac{1}{2}}R_{\sigma}+\log^{\frac{1}{2}}N_{\kappa}+\log^{\frac{1}{2}}n_{b} \big{)}}{\sqrt{n_{b}}}+\tilde{c}R_{\sigma}^{2}N_{\kappa}^{2}\sqrt{\frac{\log \frac{1}{\tau}}{2n_{r}}},\] \[e_{q} :=c\frac{R_{q}^{2L_{q}}N_{\theta}^{2L_{q}-\frac{3}{2}}\big{(}\log ^{\frac{1}{2}}R_{q}+\log^{\frac{1}{2}}N_{\theta}+\log^{\frac{1}{2}}n_{r}\big{)} }{\sqrt{n_{r}}}+\tilde{c}R_{q}^{2L_{q}}N_{\theta}^{2L_{q}-2}\sqrt{\frac{\log \frac{1}{\tau}}{2n_{r}}},\]
_where the constants \(c\) and \(\tilde{c}\) may depend on \(|\Omega|\), \(|\partial\Omega|\), \(d\), \(\|z^{\delta}\|_{W^{1,\infty}(\Omega)}\), \(\|f\|_{L^{\infty}(\Omega)}\), and \(\|g\|_{L^{\infty}(\partial\Omega)}\) at most polynomially. Then, with probability at least \(1-\tau\), each of the following statements holds_
\[\Delta\mathcal{E}_{i}\leq e_{i},\quad i\in\{d,\sigma,b,b^{\prime},q\}.\]
Now we can state an error estimate on the numerical approximation \(\widehat{q}_{\theta}^{*}\).
**Theorem 3.6**.: _Let Assumptions 3.1 and 3.4 and Condition 3.2 hold. Fix small \(\epsilon_{q}\), \(\epsilon_{\sigma}>0\), let \((\widehat{\theta}^{*},\widehat{\kappa}^{*})\in(\mathfrak{P}_{p,\epsilon_{q}},\mathfrak{P}_{2,\epsilon_{\sigma}}^{\otimes d})\) be a minimizer of the empirical loss (3.6), and let \(\widehat{q}_{\theta}^{*}\) and \(\widehat{\sigma}_{\kappa}^{*}\) be their DNN realizations. Fix \(\tau\in(0,\frac{1}{8})\), let the bounds \(e_{d}\), \(e_{\sigma}\), \(e_{b}\) and \(e_{q}\) be as defined in Theorem 3.5, and further define \(\eta:=e_{d}+\gamma_{\sigma}e_{\sigma}+\gamma_{b}e_{b}+\gamma_{q}e_{q}+\epsilon_{q}^ {2}+(\gamma_{\sigma}+\gamma_{b}+1)\epsilon_{\sigma}^{2}+\delta^{2}+\gamma_{q}\). Then the following error bound holds with probability at least \(1-4\tau\)_
\[\|q^{\dagger}-P_{\mathcal{A}}(\widehat{q}_{\theta}^{*})\|_{L^{2}(\Omega)}\leq c \big{(}(e_{d}+\eta)^{\frac{1}{2}}+(e_{\sigma}+\gamma_{\sigma}^{-1}\eta)^{ \frac{1}{2}}+(e_{b}+\gamma_{b}^{-1}\eta)^{\frac{1}{2}}+(e_{q}+\gamma_{q}^{-1} \eta)^{\frac{1}{2}}\delta\big{)}^{\frac{1}{2}}\big{(}1+(e_{q}+\gamma_{q}^{-1} \eta)^{\frac{1}{4}}\big{)}.\]
Proof.: Let \((\theta^{*},\kappa^{*})\in(\mathfrak{P}_{p,e_{q}},\mathfrak{P}_{2,e_{\sigma}}^{ \otimes d})\) be a minimizer of problem (3.4). Then the minimizing property of \((\widehat{\theta}^{*},\widehat{\kappa}^{*})\) to the empirical loss \(\widehat{J}_{\boldsymbol{\gamma}}(\theta,\kappa)\) implies the following decomposition
\[\widehat{J}_{\boldsymbol{\gamma}}(\widehat{\theta}^{*},\widehat{\kappa}^{*}) \leq[\widehat{J}_{\boldsymbol{\gamma}}(\theta^{*},\kappa^{*})-J_{ \boldsymbol{\gamma}}(\theta^{*},\kappa^{*})]+J_{\boldsymbol{\gamma}}(\theta^{* },\kappa^{*})\leq|J_{\boldsymbol{\gamma}}(\theta^{*},\kappa^{*})-\widehat{J}_ {\boldsymbol{\gamma}}(\theta^{*},\kappa^{*})|+J_{\boldsymbol{\gamma}}(\theta^{* },\kappa^{*}).\]
Consequently, we deduce
\[\widehat{J}_{\boldsymbol{\gamma}}(\widehat{\theta}^{*},\widehat{\kappa}^{*}) \leq J_{\boldsymbol{\gamma}}(\theta^{*},\kappa^{*})+\sup_{(\theta,\kappa)\in( \mathfrak{P}_{p,e_{q}},\mathfrak{P}_{2,e_{\sigma}}^{\otimes d})}|J_{ \boldsymbol{\gamma}}(\theta,\kappa)-\widehat{J}_{\boldsymbol{\gamma}}(\theta, \kappa)|.\]
The two terms represent the approximation error and statistical error, respectively, and the former was already bounded in Section 3.2. It follows directly from Lemma 3.2 and Theorem 3.5 that with a probability at least \(1-4\tau\),
\[\widehat{J}_{\boldsymbol{\gamma}}(\widehat{\theta}^{*},\widehat{\kappa}^{*}) \leq c\big{(}e_{d}+\gamma_{\sigma}e_{\sigma}+\gamma_{b}e_{b}+\gamma_{q}e_{q}+ \epsilon_{q}^{2}+(\gamma_{\sigma}+\gamma_{b}+1)\epsilon_{\sigma}^{2}+\delta^{ 2}+\gamma_{q}\big{)},\]
i.e., \(\widehat{J}_{\boldsymbol{\gamma}}(\widehat{\theta}^{*},\widehat{\kappa}^{*}) \leq c\eta\). This estimate and the triangle inequality imply that with probability at least \(1-4\tau\),
\[\|\widehat{\sigma}_{\kappa}^{*}-P_{\mathcal{A}}(\widehat{q}_{\theta}^{*}) \nabla z^{\delta}\|_{L^{2}(\Omega)}^{2}\leq\big{[}\mathcal{E}_{d}(\widehat{ \sigma}_{\kappa}^{*},\widehat{q}_{\theta}^{*})-\widehat{\mathcal{E}}_{d}( \widehat{\sigma}_{\kappa}^{*},\widehat{q}_{\theta}^{*})\big{]}+\widehat{ \mathcal{E}}_{d}(\widehat{\sigma}_{\kappa}^{*},\widehat{q}_{\theta}^{*})\leq c (e_{d}+\eta).\]
Similarly, the following estimates hold simultaneously with a probability at least \(1-4\tau\),
\[\|f+\nabla\cdot\widehat{\sigma}_{\kappa}^{*}\|_{L^{2}(\Omega)}^{2}\leq c(e_{ \sigma}+\gamma_{\sigma}^{-1}\eta),\quad\|g-\widehat{\sigma}_{\kappa}^{*}\cdot \mathbf{n}\|_{L^{2}(\partial\Omega)}^{2}\leq c(e_{b}+\gamma_{b}^{-1}\eta), \quad\|\nabla\widehat{q}_{\theta}^{*}\|_{L^{2}(\Omega)}^{2}\leq c(e_{q}+\gamma _{q}^{-1}\eta).\]
Replacing \(q_{\theta}^{*}\) by \(\widehat{q}_{\theta}^{*}\) and repeating the argument of Theorem 3.3 gives
\[\|q^{\dagger}-P_{\mathcal{A}}(\widehat{q}_{\theta}^{*})\|_{L^{2}( \Omega)}^{2}\leq c\big{(}\|f+\nabla\cdot\widehat{\sigma}_{\kappa}^{*}\|_{L^{2}( \Omega)}+\|\widehat{\sigma}_{\kappa}^{*}-P_{\mathcal{A}}(\widehat{q}_{\theta}^ {*})\nabla z^{\delta}\|_{L^{2}(\Omega)}\] \[+\|P_{\mathcal{A}}(\widehat{q}_{\theta}^{*})\|_{L^{\infty}( \Omega)}\|\nabla(z^{\delta}-u^{\dagger})\|_{L^{2}(\Omega)}+\|g-\widehat{ \sigma}_{\kappa}^{*}\cdot\mathbf{n}\|_{L^{2}(\partial\Omega)}\big{)}\|v_{ \psi}\|_{H^{1}(\Omega)}.\]
This and the preceding estimates and Condition 3.2 imply (with \(v_{\psi}\) from Condition 3.2 for \(\psi=q^{\dagger}-P_{\mathcal{A}}(\widehat{q}_{\theta}^{*})\))
\[\|q^{\dagger}-P_{\mathcal{A}}(\widehat{q}_{\theta}^{*})\|_{H^{-1}(\Omega)} \leq c\big{(}(e_{d}+\eta)^{\frac{1}{2}}+(e_{\sigma}+\gamma_{\sigma}^{-1}\eta)^{ \frac{1}{2}}+(e_{b}+\gamma_{b}^{-1}\eta)^{\frac{1}{2}}+(e_{q}+\gamma_{q}^{-1} \eta)^{\frac{1}{2}}\delta\big{)}.\]
The desired estimate now follows from the argument of Theorem 3.3 and the bound on \(\|\nabla\widehat{q}_{\theta}^{*}\|_{L^{2}(\Omega)}^{2}\).
**Remark 3.2**.: _Under the assumptions of Theorem 3.5 and the parameter selections in Remark 3.1, we may choose the numbers \(n_{r}\) and \(n_{b}\) of sample points in the domain \(\Omega\) and on the boundary \(\partial\Omega\) by_
\[n_{r}=O\Big{(}\max\Big{(}\frac{R_{\sigma}^{4}N_{\kappa}^{4}(N_{\theta}+N_{ \kappa})}{\delta^{4+s}},\frac{R_{\sigma}^{4L_{\sigma}}N_{\kappa}^{4L_{\sigma}-3 }}{\delta^{4+s}},\frac{R_{q}^{4L_{q}}N_{\theta}^{4L_{q}-3}}{\delta^{4}}\Big{)} \Big{)}\quad\text{and}\quad n_{b}=O\Big{(}\frac{R_{\sigma}^{4}N_{\kappa}^{5}}{ \delta^{4+s}}\Big{)},\]
_for any \(s>0\) (i.e., using the power \(s\) to absorb the \(\log\) factor). Note that by Lemma 2.1, \(R_{q}=O(\delta^{-2.\frac{2p+3d+3pd+2pn}{p(1-\mu)}})\), \(N_{\theta}=O(\delta^{-\frac{d}{1-\mu}})\), \(R_{\sigma}=O(\delta^{-2-\frac{4(1+\mu)+pd}{2(1-\mu)}})\) and \(N_{\kappa}=O(\delta^{-\frac{d}{1-\mu}})\). Then with probability at least \(1-4\tau\), we have_
\[\|q^{\dagger}-P_{\mathcal{A}}(\widehat{q}_{\theta}^{*})\|_{L^{2}(\Omega)}\leq c \delta^{\frac{1}{2}}.\]
## 4 Inverse conductivity problem in the Dirichlet case
In this section, we extend the approach to the Dirichlet boundary value problem:
\[\begin{cases}-\nabla\cdot(q\nabla u)=f,&\text{ in }\Omega,\\ \qquad\qquad u=0,&\text{ on }\partial\Omega.\end{cases} \tag{4.1}\]
We also provide relevant analysis of the reconstruction scheme.
### Mixed formulation and its DNN approximation
Like in Section 3, to develop a reconstruction algorithm, we rewrite (4.1) into a first-order system:
\[\begin{cases}\sigma=q\nabla u,&\text{in}\ \ \Omega,\\ -\nabla\cdot\sigma=f,&\text{in}\ \Omega,\\ u=0,&\text{on}\ \partial\Omega.\end{cases} \tag{4.2}\]
To recover the conductivity coefficient \(q\), we discretize the scalar field \(q\) and vector field \(\sigma\) by two DNN function classes \(\mathfrak{P}_{p,e_{q}}\) and \(\mathfrak{P}_{2,e_{\sigma}}^{\otimes d}\), respectively. Then with the notation in Section 3, the DNN approximation scheme reads
\[\min_{(\theta,\kappa)\in(\mathfrak{P}_{p,e_{q}},\mathfrak{P}_{2,e_{\sigma}}^{ \otimes d})}J_{\boldsymbol{\gamma}}(\theta,\kappa)=\|\sigma_{\kappa}-P_{ \mathcal{A}}(q_{\theta})\nabla z^{\delta}\|_{L^{2}(\Omega)}^{2}+\gamma_{ \sigma}\|\nabla\cdot\sigma_{\kappa}+f\|_{L^{2}(\Omega)}^{2}+\gamma_{q}\| \nabla q_{\theta}\|_{L^{2}(\Omega)}^{2}, \tag{4.3}\]
where \(z^{\delta}\) is a noisy measurement of the exact data \(u(q^{\dagger})\), with the noise level \(\delta:=\|u(q^{\dagger})-z^{\delta}\|_{W^{\frac{3}{2},2}(\Omega)}\). In practice, however, the formulation (4.3) does not lend itself to high-quality reconstructions. This is attributed to the absence of boundary data on the current density \(\sigma\): without information on \(\sigma\) along \(\partial\Omega\), the first term alone does not allow learning the current density \(\sigma\) accurately. Hence, we supplement the loss (4.3) with an additional boundary term
\[\begin{split}\min_{(\theta,\kappa)\in(\mathfrak{P}_{p,e_{q}}, \mathfrak{P}_{2,e_{\sigma}}^{\otimes d})}J_{\boldsymbol{\gamma}}(\theta, \kappa)=&\|\sigma_{\kappa}-P_{\mathcal{A}}(q_{\theta})\nabla z^{ \delta}\|_{L^{2}(\Omega)}^{2}+\gamma_{\sigma}\|\nabla\cdot\sigma_{\kappa}+f \|_{L^{2}(\Omega)}^{2}\\ &+\gamma_{q}\|\nabla q_{\theta}\|_{L^{2}(\Omega)}^{2}+\gamma_{b} \|\sigma_{\kappa}-q^{\dagger}\nabla z^{\delta}\|_{L^{2}(\partial\Omega)}^{2}. \end{split} \tag{4.4}\]
This modified formulation requires a knowledge of the exact conductivity \(q^{\dagger}\) on the boundary \(\partial\Omega\). Note that this assumption is frequently made in existing uniqueness analysis [4, 35] and numerical studies [7, 8]. Similarly, one can ensure the existence of a minimizer of (4.4). Let \((\theta^{*},\kappa^{*})\) be a minimizer of problem (4.4) and \((q_{\theta}^{*},\sigma_{\kappa}^{*})\) be its DNN realization. In practice, we approximate the integrals using Monte Carlo methods. Using the uniform distributions \(\mathcal{U}(\Omega)\) and \(\mathcal{U}(\partial\Omega)\), we rewrite the population loss (4.4) as
\[J_{\boldsymbol{\gamma}}(\theta,\kappa) =|\Omega|\mathbb{E}_{X\sim\mathcal{U}(\Omega)}\Big{[}\|\sigma_{ \kappa}(X)-P_{\mathcal{A}}(q_{\theta}(X))\nabla z^{\delta}(X)\|_{\ell^{2}}^{2} \Big{]}+\gamma_{\sigma}|\Omega|\mathbb{E}_{X\sim\mathcal{U}(\Omega)}\Big{[} \big{(}\nabla\cdot\sigma_{\kappa}(X)+f(X)\big{)}^{2}\Big{]}\] \[\quad+\gamma_{b}|\partial\Omega|\mathbb{E}_{Y\sim\mathcal{U}( \partial\Omega)}\Big{[}\|\sigma_{\kappa}(Y)-q^{\dagger}(Y)\nabla z^{\delta}(Y)\|_{ \ell^{2}}^{2}\Big{]}+\gamma_{q}|\Omega|\mathbb{E}_{X\sim\mathcal{U}(\Omega)} \Big{[}\|\nabla q_{\theta}(X)\|_{\ell^{2}}^{2}\Big{]}\] \[=:\mathcal{E}_{d}(\sigma_{\kappa},q_{\theta})+\gamma_{\sigma} \mathcal{E}_{\sigma}(\sigma_{\kappa})+\gamma_{b}\mathcal{E}_{b^{\prime}}( \sigma_{\kappa})+\gamma_{q}\mathcal{E}_{q}(q_{\theta}).\]
Now we draw i.i.d. samples \(X=\{X_{j}\}_{j=1}^{n_{r}}\) and \(Y=\{Y_{j}\}_{j=1}^{n_{b}}\) from the uniform distributions \(\mathcal{U}(\Omega)\) and \(\mathcal{U}(\partial\Omega)\). The empirical loss \(\widehat{J}_{\boldsymbol{\gamma}}(\theta,\kappa)\) is given by
\[\widehat{J}_{\boldsymbol{\gamma}}(\theta,\kappa):=\widehat{\mathcal{E}}_{d}( \sigma_{\kappa},q_{\theta})+\gamma_{\sigma}\widehat{\mathcal{E}}_{\sigma}( \sigma_{\kappa})+\gamma_{q}\widehat{\mathcal{E}}_{q}(q_{\theta})+\gamma_{b} \widehat{\mathcal{E}}_{b^{\prime}}(\sigma_{\kappa}), \tag{4.5}\]
where \(\widehat{\mathcal{E}}_{d}(\sigma_{\kappa},q_{\theta})\), \(\widehat{\mathcal{E}}_{\sigma}(\sigma_{\kappa})\) and \(\widehat{\mathcal{E}}_{q}(q_{\theta})\) are given by (3.7), (3.8) and (3.10), respectively, and \(\widehat{\mathcal{E}}_{b^{\prime}}(\sigma_{\kappa})\) is given in (3.11).
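The only structural difference from the Neumann case is the boundary term \(\widehat{\mathcal{E}}_{b^{\prime}}\), which uses the known boundary values of \(q^{\dagger}\). A 1-D sketch (all functions illustrative, on \(\Omega=(0,1)\) with the exact system satisfied) is:

```python
# 1-D sketch of the Dirichlet boundary term (3.11) entering (4.5): on the
# boundary we penalize sigma_kappa - q^dagger * grad(z^delta), which requires
# knowing the exact conductivity on partial Omega only. All names illustrative.
sigma    = lambda x: 1.0 - x                        # candidate flux sigma_kappa
dz       = lambda x: (1.0 - x) / (1.0 + 0.5 * x)    # gradient of the noisy data
q_dagger = {0.0: 1.0, 1.0: 1.5}                     # q^dagger on the two boundary points

boundary = [0.0, 1.0]
# n_b^{-1} |partial Omega| = 1 here (two points under the counting measure)
E_bp = sum((sigma(y) - q_dagger[y] * dz(y)) ** 2 for y in boundary)
```

Since the toy flux matches \(q^{\dagger}\nabla z^{\delta}\) at both endpoints, the residual vanishes; with noisy data this term anchors \(\sigma_{\kappa}\) on \(\partial\Omega\).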
**Remark 4.1**.: _Instead of (4.4), there are alternative formulations. For example, one may use the loss_
\[J_{\boldsymbol{\gamma}}(\theta,\kappa)=\|\sigma_{\kappa}-P_{\mathcal{A}}(q_{ \theta})\nabla z^{\delta}\|_{L^{2}(\Omega)}^{2}+\gamma_{\sigma}\|\nabla \cdot\sigma_{\kappa}+f\|_{L^{2}(\Omega)}^{2}+\gamma_{q}\|\nabla q_{\theta}\| _{L^{2}(\Omega)}^{2}+\gamma_{b}\|q_{\theta}-q^{\dagger}\|_{L^{2}(\partial \Omega)}^{2}, \tag{4.6}\]
_which enforces the boundary condition \(q_{\theta}=q^{\dagger}\) on \(\partial\Omega\) directly. It can be analyzed analogously. However, numerically, it is less robust than (4.4) for noisy data. See Section 5 for numerical illustrations._
### Error analysis of the population loss
First we analyze the error of the DNN realization \(q_{\theta}^{*}\in\mathcal{N}_{q}\) of a minimizer \(\theta^{*}\) of the population loss (4.4) under the following assumption.
**Assumption 4.1**.: \(q^{\dagger}\in W^{2,p}(\Omega)\cap\mathcal{A}\)_, and \(f\in H^{1}(\Omega)\cap L^{\infty}(\Omega)\), with \(p=\max(2,d+\nu)\) for some small \(\nu>0\)._
The following regularity result holds for \(u^{\dagger}:=u(q^{\dagger})\) and \(\sigma^{\dagger}:=q^{\dagger}\nabla u^{\dagger}\). The proof follows exactly as that of Lemma 3.1 and hence is omitted.
**Lemma 4.1**.: _Let Assumption 4.1 hold. Then the solution \(u^{\dagger}\left(\text{with }q=q^{\dagger}\right)\) to problem (4.1) satisfies \(u^{\dagger}\in H^{3}(\Omega)\cap W^{2,p}(\Omega)\cap H^{1}_{0}(\Omega)\) and \(\sigma^{\dagger}\in(H^{2}(\Omega))^{d}\)._
**Lemma 4.2**.: _Let Assumption 4.1 hold. Fix small \(\epsilon_{q}\), \(\epsilon_{\sigma}>0\), and let \((\theta^{*},\kappa^{*})\in(\mathfrak{P}_{p,\epsilon_{q}},\mathfrak{P}_{2, \epsilon_{\sigma}}^{\otimes d})\) be a minimizer of problem (4.4). Then the following estimate holds_
\[J_{\boldsymbol{\gamma}}(\theta^{*},\kappa^{*})\leq c\big{(}\epsilon_{q}^{2}+(1 +\gamma_{\sigma}+\gamma_{b})\epsilon_{\sigma}^{2}+(1+\gamma_{b})\delta^{ 2}+\gamma_{q}\big{)}.\]
Proof.: Assumption 3.1 and Lemma 4.1 imply \(\sigma^{\dagger}\in(H^{2}(\Omega))^{d}\). Then by Lemma 2.1, there exists at least one \((\theta_{\epsilon},\kappa_{\epsilon})\in(\mathfrak{P}_{p,\epsilon_{q}}, \mathfrak{P}_{2,\epsilon_{\sigma}}^{\otimes d})\) such that its DNN realization \((q_{\theta_{\epsilon}},\sigma_{\kappa_{\epsilon}})\) satisfies
\[\|q^{\dagger}-q_{\theta_{\epsilon}}\|_{W^{1,p}(\Omega)}\leq\epsilon_{q}\quad \text{and}\quad\|\sigma^{\dagger}-\sigma_{\kappa_{\epsilon}}\|_{H^{1}(\Omega)} \leq\epsilon_{\sigma}.\]
Then by the minimizing property of \((\theta^{*},\kappa^{*})\) and the triangle inequality, we have
\[J_{\boldsymbol{\gamma}}(\theta^{*},\kappa^{*})\leq J_{\boldsymbol {\gamma}}(\theta_{\epsilon},\kappa_{\epsilon})\] \[=\|\sigma_{\kappa_{\epsilon}}-P_{\mathcal{A}}(q_{\theta_{\epsilon }})\nabla z^{\delta}\|_{L^{2}(\Omega)}^{2}+\gamma_{\sigma}\|\nabla\cdot \sigma_{\kappa_{\epsilon}}+f\|_{L^{2}(\Omega)}^{2}+\gamma_{b}\|\sigma_{\kappa_ {\epsilon}}-q^{\dagger}\nabla z^{\delta}\|_{L^{2}(\partial\Omega)}^{2}+\gamma_ {q}\|\nabla q_{\theta_{\epsilon}}\|_{L^{2}(\Omega)}^{2}\] \[\leq c\big{(}\|\sigma_{\kappa_{\epsilon}}-\sigma^{\dagger}\|_{L^{ 2}(\Omega)}^{2}+\|(q^{\dagger}-P_{\mathcal{A}}(q_{\theta_{\epsilon}}))\nabla u ^{\dagger}\|_{L^{2}(\Omega)}^{2}+\|P_{\mathcal{A}}(q_{\theta_{\epsilon}}) \nabla(u^{\dagger}-z^{\delta})\|_{L^{2}(\Omega)}^{2}\] \[\qquad+\gamma_{\sigma}\|\nabla\cdot(\sigma_{\kappa_{\epsilon}}- \sigma^{\dagger})\|_{L^{2}(\Omega)}^{2}+\gamma_{q}\|\nabla q_{\theta_{\epsilon }}\|_{L^{2}(\Omega)}^{2}+\gamma_{b}\|\sigma_{\kappa_{\epsilon}}-\sigma^{ \dagger}\|_{H^{1}(\Omega)}^{2}+\gamma_{b}\|q^{\dagger}\nabla(u^{\dagger}-z^{ \delta})\|_{H^{\frac{1}{2}}(\partial\Omega)}^{2}\big{)}.\]
Now the estimate (3.2) and Assumption 4.1 imply
\[J_{\boldsymbol{\gamma}}(\theta^{*},\kappa^{*}) \leq c\big{(}\epsilon_{\sigma}^{2}+\|q^{\dagger}-P_{\mathcal{A}}(q _{\theta_{\epsilon}})\|_{L^{\infty}(\Omega)}^{2}\|\nabla u^{\dagger}\|_{L^{2 }(\Omega)}^{2}\] \[\quad+\|P_{\mathcal{A}}(q_{\theta_{\epsilon}})\|_{L^{\infty}( \Omega)}^{2}\|\nabla(u^{\dagger}-z^{\delta})\|_{L^{2}(\Omega)}^{2}+\gamma_{ \sigma}\epsilon_{\sigma}^{2}+\gamma_{q}+\gamma_{b}\epsilon_{\sigma}^{2}+\gamma _{b}\delta^{2}\big{)}\] \[\leq c\big{(}\epsilon_{q}^{2}+(1+\gamma_{\sigma}+\gamma_{b}) \epsilon_{\sigma}^{2}+(1+\gamma_{b})\delta^{2}+\gamma_{q}\big{)}.\]
This completes the proof of the lemma.
**Condition 4.2**.: _There exist some \(\beta\geq 0\) and \(c>0\) such that \(q^{\dagger}|\nabla u^{\dagger}|^{2}+fu^{\dagger}\geq c\operatorname{dist}(x, \partial\Omega)^{\beta}\) almost everywhere in \(\Omega\)._
This condition holds under mild assumptions [10, Lemmas 3.3 and 3.7]: it holds with \(\beta=2\) if \(q^{\dagger}\in\mathcal{A}\) and \(f\in L^{2}(\Omega)\) with \(f\geq c_{f}>0\) (for some constant \(c_{f}\)) over a Lipschitz domain \(\Omega\), and with \(\beta=0\) if \(q^{\dagger}\in C^{1,\alpha}(\overline{\Omega})\cap\mathcal{A}\), \(f\in C^{0,\alpha}(\overline{\Omega})\) and \(f\geq c_{f}>0\) on a \(C^{2,\alpha}\) domain \(\Omega\) for some \(\alpha>0\).
**Theorem 4.3**.: _Let Assumption 4.1 hold. Fix small \(\epsilon_{q}\), \(\epsilon_{\sigma}>0\), and let \((\theta^{*},\kappa^{*})\in(\mathfrak{P}_{p,\epsilon_{q}},\mathfrak{P}_{2, \epsilon_{\sigma}}^{\otimes d})\) be a minimizer of problem (4.4) and \(q_{\theta}^{*}\) the DNN realization of \(\theta^{*}\). Then with \(\eta:=\big{(}\epsilon_{q}^{2}+(1+\gamma_{\sigma}+\gamma_{b})\epsilon_{\sigma }^{2}+(1+\gamma_{b})\delta^{2}+\gamma_{q}\big{)}^{\frac{1}{2}},\) there holds_
\[\int_{\Omega}\Big{(}\frac{q^{\dagger}-q_{\theta}^{*}}{q^{\dagger}}\Big{)}^{2} \big{(}q^{\dagger}|\nabla u^{\dagger}|^{2}+fu^{\dagger}\big{)}\;\mathrm{d}x \leq c\big{(}\gamma_{\sigma}^{-\frac{1}{2}}+\gamma_{q}^{-\frac{1}{2}}\eta+1 \big{)}\eta.\]
_Moreover, if Condition 4.2 holds, then_
\[\|q^{\dagger}-q_{\theta}^{*}\|_{L^{2}(\Omega)}\leq c\big{[}\big{(}\gamma_{\sigma }^{-\frac{1}{2}}+\gamma_{q}^{-\frac{1}{2}}\eta+1\big{)}\eta\big{]}^{\frac{1}{2( \beta+1)}}.\]
Proof.: For any test function \(\varphi\in H^{1}_{0}(\Omega)\), we have
\[\big{(}(q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*}))\nabla u^{ \dagger},\nabla\varphi\big{)} =\big{(}\sigma^{\dagger}-\sigma_{\kappa}^{*},\nabla\varphi\big{)}+ \big{(}P_{\mathcal{A}}(q_{\theta}^{*})\nabla(z^{\delta}-u^{\dagger}),\nabla \varphi\big{)}+\big{(}\sigma_{\kappa}^{*}-P_{\mathcal{A}}(q_{\theta}^{*})\nabla z ^{\delta},\nabla\varphi\big{)}\] \[=-\big{(}\nabla\cdot(\sigma^{\dagger}-\sigma_{\kappa}^{*}),\varphi \big{)}+\big{(}P_{\mathcal{A}}(q_{\theta}^{*})\nabla(z^{\delta}-u^{\dagger}), \nabla\varphi\big{)}+\big{(}\sigma_{\kappa}^{*}-P_{\mathcal{A}}(q_{\theta}^{*}) \nabla z^{\delta},\nabla\varphi\big{)}.\]
Let \(\varphi\equiv\frac{q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*})}{q^{\dagger}}u^{\dagger}\). Then by direct computation, we have
\[\nabla\varphi=\frac{\nabla(q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*}))}{q^{ \dagger}}u^{\dagger}+\frac{(q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*}))}{q^{ \dagger}}\nabla u^{\dagger}-\frac{(q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*}))}{(q ^{\dagger})^{2}}(\nabla q^{\dagger})u^{\dagger}.\]
Using Assumption 4.1, the box constraint on \(q^{\dagger}\) and \(P_{\mathcal{A}}(q_{\theta}^{*})\), and the stability estimate (3.2) of \(P_{\mathcal{A}}\), we arrive at
\[\left\|\frac{\nabla(q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*}))}{q ^{\dagger}}u^{\dagger}\right\|_{L^{2}(\Omega)} \leq c\big{(}1+\|\nabla P_{\mathcal{A}}(q_{\theta}^{*})\|_{L^{2}( \Omega)}\big{)}\leq c\big{(}1+\|\nabla q_{\theta}^{*}\|_{L^{2}(\Omega)}\big{)},\] \[\left\|\frac{(q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*}))}{q^{ \dagger}}\nabla u^{\dagger}\right\|_{L^{2}(\Omega)}+\left\|\frac{(q^{\dagger} -P_{\mathcal{A}}(q_{\theta}^{*}))}{(q^{\dagger})^{2}}(\nabla q^{\dagger})u^{ \dagger}\right\|_{L^{2}(\Omega)}\leq c.\]
This implies \(\varphi\in H^{1}_{0}(\Omega)\) with the following _a priori_ bound
\[\|\varphi\|_{L^{2}(\Omega)}\leq c\quad\text{and}\quad\|\nabla\varphi\|_{L^{2} (\Omega)}\leq c(1+\|\nabla q_{\theta}^{*}\|_{L^{2}(\Omega)}).\]
By Lemma 4.2 and the Cauchy-Schwarz inequality, we have
\[|(\nabla\cdot(\sigma^{\dagger}-\sigma_{\kappa}^{*}),\varphi)|\leq\|\nabla \cdot(\sigma^{\dagger}-\sigma_{\kappa}^{*})\|_{L^{2}(\Omega)}\|\varphi\|_{L^{2 }(\Omega)}\leq c\gamma_{\sigma}^{-\frac{1}{2}}\eta.\]
Similarly, we deduce
\[|\big{(}\sigma_{\kappa}^{*}-P_{\mathcal{A}}(q_{\theta}^{*})\nabla z^{\delta}, \nabla\varphi\big{)}|\leq\|\sigma_{\kappa}^{*}-P_{\mathcal{A}}(q_{\theta}^{*})\nabla z^{\delta }\|_{L^{2}(\Omega)}\|\nabla\varphi\|_{L^{2}(\Omega)}\leq c(1+\gamma_{q}^{- \frac{1}{2}}\eta)\eta.\]
Meanwhile, it follows from the Cauchy-Schwarz inequality that
\[|\big{(}P_{\mathcal{A}}(q_{\theta}^{*})\nabla(z^{\delta}-u^{\dagger}),\nabla \varphi\big{)}|\leq c\|P_{\mathcal{A}}(q_{\theta}^{*})\|_{L^{\infty}(\Omega)} \|\nabla(z^{\delta}-u^{\dagger})\|_{L^{2}(\Omega)}\|\nabla\varphi\|_{L^{2}( \Omega)}\leq c(1+\gamma_{q}^{-\frac{1}{2}}\eta)\delta.\]
Upon repeating the argument in [10, 25], we obtain
\[\big{(}(q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*}))\nabla u^{\dagger},\nabla \varphi\big{)}=\frac{1}{2}\int_{\Omega}\Big{(}\frac{q^{\dagger}-P_{\mathcal{A }}(q_{\theta}^{*})}{q^{\dagger}}\Big{)}^{2}\big{(}q^{\dagger}|\nabla u^{ \dagger}|^{2}+fu^{\dagger}\big{)}\;\mathrm{d}x.\]
Combining the preceding estimates yields the first assertion. Next we bound \(\|q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*})\|_{L^{2}(\Omega)}.\) Upon fixing \(\rho>0\), we split the domain \(\Omega\) into two disjoint sets \(\Omega=\Omega_{\rho}\cup\Omega_{\rho}^{c}\), with \(\Omega_{\rho}=\{x\in\Omega:\mathrm{dist}(x,\partial\Omega)\geq\rho\}\) and \(\Omega_{\rho}^{c}=\Omega\setminus\Omega_{\rho}\). Then by the box constraint \(q^{\dagger}\in\mathcal{A}\), we have
\[\|q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*})\|_{L^{2}(\Omega_{ \rho})}^{2} =\rho^{-\beta}\int_{\Omega_{\rho}}(q^{\dagger}-P_{\mathcal{A}}(q_{ \theta}^{*}))^{2}\rho^{\beta}\mathrm{d}x\leq\rho^{-\beta}\int_{\Omega_{\rho}}( q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*}))^{2}\mathrm{dist}(x,\partial\Omega)^{ \beta}\mathrm{d}x\] \[\leq c\rho^{-\beta}\int_{\Omega_{\rho}}\Big{(}\frac{q^{\dagger}-P_ {\mathcal{A}}(q_{\theta}^{*})}{q^{\dagger}}\Big{)}^{2}\big{(}q^{\dagger}| \nabla u^{\dagger}|^{2}+fu^{\dagger}\big{)}\mathrm{d}x\leq c\rho^{-\beta}\big{(} \gamma_{\sigma}^{-\frac{1}{2}}+\gamma_{q}^{-\frac{1}{2}}\eta+1\big{)}\eta.\]
Meanwhile, using the box constraint \(q^{\dagger},P_{\mathcal{A}}(q_{\theta}^{*})\in\mathcal{A}\), we obtain
\[\|q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*})\|_{L^{2}(\Omega_{ \rho}^{c})}^{2} \leq c\int_{\Omega_{\rho}^{c}}1\;\mathrm{d}x\|q^{\dagger}-P_{ \mathcal{A}}(q_{\theta}^{*})\|_{L^{\infty}(\Omega_{\rho}^{c})}^{2}\leq c\rho.\]
By combining the last two estimates and then optimizing in \(\rho\), we get the desired bound on \(\|q^{\dagger}-q_{\theta}^{*}\|_{L^{2}(\Omega)}\).
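For the reader's convenience, the final balancing step can be made explicit. Abbreviating \(E:=\big{(}\gamma_{\sigma}^{-\frac{1}{2}}+\gamma_{q}^{-\frac{1}{2}}\eta+1\big{)}\eta\), the last two estimates combine to

\[\|q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*})\|_{L^{2}(\Omega)}^{2}\leq c\big{(}\rho^{-\beta}E+\rho\big{)},\]

and minimizing the right-hand side over \(\rho>0\), attained at \(\rho\sim E^{\frac{1}{\beta+1}}\), yields \(\|q^{\dagger}-P_{\mathcal{A}}(q_{\theta}^{*})\|_{L^{2}(\Omega)}^{2}\leq cE^{\frac{1}{\beta+1}}\); taking square roots gives the exponent \(\frac{1}{2(\beta+1)}\) in the final bound.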
### Error analysis of the empirical loss
Now we analyze the impact of quadrature error on the reconstruction, under the following condition.
**Assumption 4.4**.: \(f\in L^{\infty}(\Omega)\) _and \(\nabla z^{\delta}\in L^{\infty}(\Omega)\cap L^{\infty}(\partial\Omega)\)._
We have the following error bound on the DNN realization \(\widehat{q}_{\theta}^{*}\) of a minimizer \(\widehat{\theta}^{*}\) of the loss (4.5).
**Theorem 4.5**.: _Let Assumptions 4.1 and 4.4 hold. Fix small \(\epsilon_{q}\), \(\epsilon_{\sigma}>0\), and let \((\widehat{\theta}^{*},\widehat{\kappa}^{*})\in(\mathfrak{P}_{p,\epsilon_{q}}, \mathfrak{P}_{\infty,\epsilon_{\sigma}}^{\otimes d})\) be a minimizer of (4.5) and \(\widehat{q}_{\theta}^{*}\) the DNN realization of \(\widehat{\theta}^{*}\). Fix \(\tau\in(0,\frac{1}{8})\), let the quantities \(e_{d}\), \(e_{\sigma}\), \(e_{b^{\prime}}\) and \(e_{q}\) be defined in
_Theorem 3.5, and define \(\eta\) by \(\eta:=e_{d}+\gamma_{\sigma}e_{\sigma}+\gamma_{b}e_{b^{\prime}}+\gamma_{q}e_{q}+\epsilon _{q}^{2}+(1+\gamma_{\sigma}+\gamma_{b})\epsilon_{\sigma}^{2}+(1+\gamma_{b}) \delta^{2}+\gamma_{q}\). Then with probability at least \(1-4\tau\), there holds_
\[\int_{\Omega}\Big{(}\frac{q^{\dagger}-P_{\mathcal{A}}(\widehat{q}_{\theta}^{*} )}{q^{\dagger}}\Big{)}^{2}\big{(}q^{\dagger}|\nabla u^{\dagger}|^{2}+fu^{ \dagger}\big{)}\;\mathrm{d}x\leq c\big{(}(e_{d}+\eta)^{\frac{1}{2}}+(e_{ \sigma}+\gamma_{\sigma}^{-1}\eta)^{\frac{1}{2}}+\delta(1+e_{q}+\gamma_{q}^{-1 }\eta)^{\frac{1}{2}}\big{)}\big{(}1+e_{q}+\gamma_{q}^{-1}\eta\big{)}^{\frac{1 }{2}},\]
_where the constant \(c>0\) depends on \(|\Omega|\), \(|\partial\Omega|\), \(d\), \(\|z^{\delta}\|_{W^{1,\infty}(\partial\Omega)}\), \(\|f\|_{L^{\infty}(\Omega)}\) and \(\|q^{\dagger}\|_{L^{\infty}(\partial\Omega)}\) at most polynomially. Moreover, if Condition 4.2 holds, then with probability at least \(1-4\tau\),_
\[\|q^{\dagger}-P_{\mathcal{A}}(\widehat{q}_{\theta}^{*})\|_{L^{2}(\Omega)}\leq c \big{(}\big{(}(e_{d}+\eta)^{\frac{1}{2}}+(e_{\sigma}+\gamma_{\sigma}^{-1}\eta )^{\frac{1}{2}}+\delta(1+e_{q}+\gamma_{q}^{-1}\eta)^{\frac{1}{2}}\big{)}\big{(} 1+e_{q}+\gamma_{q}^{-1}\eta\big{)}^{\frac{1}{2}}\big{)}^{\frac{1}{2(\beta+1 )}}.\]
Proof.: The proof is similar to Theorem 3.6, using instead the estimate on \(e_{b^{\prime}}\). Indeed, the following estimate holds
\[\sup_{(\theta,\kappa)\in(\mathfrak{P}_{p,\epsilon_{q}},\mathfrak{P}_{\infty,\epsilon_{ \sigma}}^{\otimes d})}\big{|}J_{\boldsymbol{\gamma}}(q_{\theta},\sigma_{\kappa})- \widehat{J}_{\boldsymbol{\gamma}}(q_{\theta},\sigma_{\kappa})\big{|}\leq\Delta\mathcal{ E}_{d}+\gamma_{\sigma}\Delta\mathcal{E}_{\sigma}+\gamma_{b}\Delta\mathcal{E}_{b^{ \prime}}+\gamma_{q}\Delta\mathcal{E}_{q}.\]
Then for any minimizer \((\widehat{\theta}^{*},\widehat{\kappa}^{*})\) of the empirical loss (4.5), with probability at least \(1-4\tau\),
\[\widehat{J}_{\boldsymbol{\gamma}}(\widehat{\theta}^{*},\widehat{\kappa}^{*})\leq c \big{(}e_{d}+\gamma_{\sigma}e_{\sigma}+\gamma_{b}e_{b^{\prime}}+\gamma_{q}e_{q}+ \epsilon_{q}^{2}+(1+\gamma_{\sigma}+\gamma_{b})\epsilon_{\sigma}^{2}+(1+\gamma _{b})\delta^{2}+\gamma_{q}\big{)}.\]
Then by repeating the argument from Theorem 3.6 and replacing \(q_{\theta}^{*}\) by \(\widehat{q}_{\theta}^{*}\), we deduce that with probability at least \(1-4\tau\), the following three estimates hold simultaneously
\[\|\widehat{\sigma}_{\kappa}^{*}-P_{\mathcal{A}}(\widehat{q}_{\theta}^{*}) \nabla z^{\delta}\|_{L^{2}(\Omega)}^{2}\leq c(e_{d}+\eta),\quad\|f+\nabla \cdot\widehat{\sigma}_{\kappa}^{*}\|_{L^{2}(\Omega)}^{2}\leq c(e_{\sigma}+\gamma_ {\sigma}^{-1}\eta),\quad\|\nabla\widehat{q}_{\theta}^{*}\|_{L^{2}(\Omega)}^ {2}\leq c(e_{q}+\gamma_{q}^{-1}\eta).\]
Last, the desired estimates follow from the argument of Theorem 4.3.
**Remark 4.2**.: _Under the assumptions in Theorem 4.5 and the choice of the numbers of sampling points in Remark 3.1, there holds with probability at least \(1-4\tau\), \(\|q^{\dagger}-P_{\mathcal{A}}(\widehat{q}_{\theta}^{*})\|_{L^{2}(\Omega)}\leq c \delta^{\frac{1}{2(1+\beta)}}\)._
## 5 Numerical experiments and discussions
Now we showcase the performance of the proposed approach. All computations are performed with TensorFlow 1.15.0 on an Intel Core i7-11700K processor with 16 CPUs. We measure the accuracy of a reconstruction \(\hat{q}\) (with respect to the exact one \(q^{\dagger}\)) by the relative \(L^{2}(\Omega)\) error \(e(\hat{q})\) defined by
\[e(\hat{q})=\|q^{\dagger}-\hat{q}\|_{L^{2}(\Omega)}/\|q^{\dagger}\|_{L^{2}( \Omega)}.\]
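In practice this error is evaluated on a grid or sample set; a tiny self-contained sketch (function names ours, not from the paper's code) reads:

```python
import math

def rel_l2_error(q_hat, q_true, pts):
    """Relative L^2(Omega) error of q_hat against q_true, estimated on sample points."""
    num = sum((q_true(p) - q_hat(p)) ** 2 for p in pts)
    den = sum(q_true(p) ** 2 for p in pts)
    return math.sqrt(num / den)

# sanity check: a constant offset of 0.1 against q_true = 1 gives e = 0.1 exactly
pts = [(i / 10.0, j / 10.0) for i in range(11) for j in range(11)]
e = rel_l2_error(lambda p: 1.1, lambda p: 1.0, pts)
```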
Throughout, for an elliptic problem in \(\mathbb{R}^{d}\), we use DNNs with output dimension \(1\) and \(d\) to approximate the conductivity \(q\) and the flux \(\sigma\), respectively. Unless otherwise stated, both DNNs have 4 hidden layers (i.e., depth 5) with 26, 26, 26, and 10 neurons on each layer. The activation function \(\rho\) is taken to be \(\tanh\). The penalty parameters \(\gamma_{\sigma}\), \(\gamma_{b}\) and \(\gamma_{q}\) weight the divergence term, the boundary term and the \(H^{1}(\Omega)\)-penalty term, respectively, in the losses (3.4) and (4.4), and are determined by trial and error. The numbers of training points in the domain \(\Omega\) and on the boundary \(\partial\Omega\) are denoted by \(n_{r}\) and \(n_{b}\), respectively. The empirical loss \(\widehat{J}_{\boldsymbol{\gamma}}\) is minimized by ADAM [28]. We adopt an exponentially decaying learning rate schedule, determined by the starting learning rate (lr), the decay rate (dr) and the epoch number at which the decay takes place (step). The entry epoch refers to the total number of epochs used for the reconstruction. Table 1 summarizes the algorithmic parameters for the experiments, where the numbers in brackets indicate the parameters used for noisy data (\(\delta=10\%\)).
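The learning rate schedule can be sketched as follows; the staircase form \(\mathrm{lr}\cdot\mathrm{dr}^{\lfloor\mathrm{epoch}/\mathrm{step}\rfloor}\) is an assumption on our part, since the text only lists the triple (lr, dr, step) (it matches the behavior of TensorFlow 1.x's `exponential_decay` with `staircase=True`):

```python
def staircase_lr(epoch, lr=2e-3, dr=0.7, step=2000):
    """Assumed staircase schedule: decay by a factor dr every `step` epochs.
    Defaults are the Example 5.1 settings from Table 1."""
    return lr * dr ** (epoch // step)
```

For the Example 5.1 settings, this gives 2e-3 on epochs 0-1999, 1.4e-3 on epochs 2000-3999, and so on.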
### The Neumann problem
The first example is about recovering a smooth conductivity \(q^{\dagger}\) with three modes in 2D.
**Example 5.1**.: \(\Omega=(-1,1)^{2}\)_, \(q^{\dagger}=1+s_{1}(x_{1},x_{2})+s_{2}(x_{1},x_{2})+s_{3}(x_{1},x_{2})\), with \(s_{1}=0.3e^{-20(x_{1}-0.3)^{2}-15(x_{2}-0.3)^{2}}\), \(s_{2}=-0.3e^{-10x_{1}^{2}-10(x_{2}+0.5)^{2}}\) and \(s_{3}=0.2e^{-15(x_{1}+0.4)^{2}-15(x_{2}-0.35)^{2}}\), and \(u^{\dagger}=x_{1}+x_{2}+\frac{1}{3}(x_{1}^{3}+x_{2}^{3})\)._
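For concreteness, the exact conductivity and the observed data \(\nabla u^{\dagger}\) of this example are direct transcriptions of the formulas above (pure-Python sketch; function names are ours):

```python
import math

def q_true(x1, x2):
    """Exact three-mode conductivity q-dagger of Example 5.1."""
    s1 = 0.3 * math.exp(-20 * (x1 - 0.3) ** 2 - 15 * (x2 - 0.3) ** 2)
    s2 = -0.3 * math.exp(-10 * x1 ** 2 - 10 * (x2 + 0.5) ** 2)
    s3 = 0.2 * math.exp(-15 * (x1 + 0.4) ** 2 - 15 * (x2 - 0.35) ** 2)
    return 1 + s1 + s2 + s3

def grad_u(x1, x2):
    """Analytic gradient of u = x1 + x2 + (x1^3 + x2^3)/3, i.e., the exact observation."""
    return (1 + x1 ** 2, 1 + x2 ** 2)
```

For instance, \(q^{\dagger}(0.3,0.3)\approx 1.3\) (the positive bump), and \(\nabla u^{\dagger}=(1+x_{1}^{2},1+x_{2}^{2})\) never vanishes on \(\Omega\).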
\begin{table}
\begin{tabular}{c|c c c c|c c c c} \hline \hline para. \(\backslash\) Ex. No. & 5.1 & 5.2 & 5.3 & 5.4 & 5.5 & 5.6 & 5.7 & 5.8 \\ \hline \(\gamma_{\sigma}\) & 10(100) & 10(100) & 10 & 1(10) & 10 & 5(1) & 10 & 10 \\ \(\gamma_{b}\) & 10(100) & 10(100) & 10(100) & 10(100) & 10 & 10 & 10 & 10(50) \\ \(\gamma_{q}\) & 1e-5 & 1e-5 & 1e-5 & 1e-5 & 1e-5 & 1e-5 & 1e-5 & 1e-5 \\ \(n_{r}\) & 4e4 & 4e4 & 4e4 & 6e4 & 4e4 & 4e4 & 4e4 & 6e4 \\ \(n_{b}\) & 4e3 & 4e3 & 6e3 & 3e4 & 4e3 & 4e3 & 6e3 & 3e4 \\ lr & 2e-3 & 1e-3 & 3e-3 & 3e-3 & 2e-3 & 3e-3 & 3e-3 & 3e-3 \\ dr & 0.7 & 0.75 & 0.7 & 0.75 & 0.7 & 0.8 & 0.8 & 0.75 \\ step & 2000 & 1500 & 2500 & 3000 & 2000 & 2500 & 3000 & 3000 \\ epoch & 6e4(3e4) & 5e4(2e4) & 6e4(3e4) & 6e4(2e4) & 6e4(2e4) & 8e4(2e4) & 6e4(3e4) & 6e4(2e4) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The algorithmic parameters used for the examples.
Figure 1: The reconstructions for Example 5.1 with exact data \((\text{top})\) and noisy data \((\delta=10\%\), bottom\()\).
This choice of \(u^{\dagger}\) ensures that both \(\nabla u^{\dagger}\) and \(\Delta u^{\dagger}\) do not vanish over the domain \(\Omega\), which is beneficial for the numerical reconstruction and is also important for the theoretical analysis [4]. Fig. 1 shows the reconstructions for exact data (top) and noisy data (bottom), together with the pointwise errors \(|\hat{q}-q^{\dagger}|\). The shape and location of the modes and the overall structure of \(q^{\dagger}\) are well resolved, except that the middle part between the top two bumps is slightly bridged. The method is observed to be very robust with respect to data noise: even in the presence of 10% noise, there is only a very slight underestimation of the peak values of the top two bumps. Surprisingly, the approach is also fairly stable with respect to the iteration index, cf. Fig. 2. Indeed, the presence of noise does not much affect the convergence behavior of the loss and the error, and the final values of the error are close for all noise levels. This observation agrees well with that for the pointwise error in Fig. 1.
There are several algorithmic parameters influencing the overall accuracy of the DNN approximation \(\hat{q}\), e.g., the number of training points (\(n_{r}\) and \(n_{b}\)), DNN architectural parameters (width, depth, and activation function) and noise level \(\delta\). A practical yet provable guidance for choosing these parameters is unfortunately still missing. Instead, we explore the issue empirically. Table 2(a) shows the relative \(L^{2}(\Omega)\)-error of the reconstruction \(\hat{q}\) at different noise levels and different \(\gamma_{q}\). The method is observed to be very robust with respect to the presence of data noise - the results remain fairly accurate even for up to 10% data noise, and do not vary much with the penalty parameter \(\gamma_{q}\) (for the \(H^{1}(\Omega)\)-penalty), provided that it is properly chosen. However, there is also an inherent accuracy limitation of the approach, i.e., the reconstruction cannot be made arbitrarily accurate for exact data \(\nabla u^{\dagger}\). This may be attributed to the optimization error: due to the nonconvexity of the loss landscape, the optimizer may fail to find a global minimizer but instead only an approximate local minimizer. The saturation phenomenon has been consistently observed across a broad range of solvers based on DNNs [16, 24, 34]. Tables 2(b)-2(c) show that the relative \(L^{2}(\Omega)\) error \(e(\hat{q})\) of the reconstruction \(\hat{q}\) does not vary much with different DNN architectures and numbers of training points. This observation agrees with the convergence behavior of the optimization algorithm in Fig. 2, where the value of the loss \(\hat{J}_{\mathbf{\gamma}}\) and the error \(e(\hat{q})\) eventually stagnates at a certain level.
The second example is about recovering a discontinuous conductivity \(q^{\dagger}\). The notation \(\chi_{S}\) denotes the characteristic function of the set \(S\).
**Example 5.2**.: \(\Omega=(-1,1)^{2}\)_, \(q^{\dagger}=1+0.25\cdot\chi_{\{(x_{1}+0.15)^{2}+(x_{2}+0.3)^{2}\leq 0.25^{2}\}}\), \(f\equiv 0\) and \(g=x_{1}\)._
Given the exact conductivity \(q^{\dagger}\), the field \(u^{\dagger}\) is computed by solving the Neumann problem (1.1) using the public software package FreeFEM++ [22], and the (exact) observation \(\nabla u^{\dagger}\) at the training points is evaluated by numerical interpolation. Since \(q^{\dagger}\) is piecewise constant, we may also add the popular total variation penalty [36], i.e., \(\gamma_{tv}|q|_{\mathrm{TV}}\), to the loss \(J_{\boldsymbol{\gamma}}(\theta,\kappa)\) to promote piecewise constancy of the reconstruction. Fig. 3 shows the error plots without the total variation term, and Fig. 4 those with it (\(\gamma_{tv}=0.01\) for both exact and noisy data). The reconstruction quality is improved by including the total variation term. However, the reconstruction \(\hat{q}_{\theta}\) still exhibits a slight blurring effect around the discontinuous interface. Moreover, it is worth noting that the reconstruction \(\hat{q}\) remains accurate for up to \(10\%\) data noise, indicating high robustness of the approach with respect to noise.
Figure 2: The variation of the loss (top) and the relative \(L^{2}(\Omega)\) error \(e\) (bottom) during the training process for Example 5.1 at different noise levels.
\begin{table}
\end{table}
Table 2: The variation of the relative \(L^{2}(\Omega)\) error \(e(\hat{q})\) with respect to various algorithmic parameters.
Figure 3: The reconstructions for Example 5.2 with exact data \((\text{top})\) and noisy data \((\delta=10\%\), bottom\()\).
The third example is about recovering a conductivity coefficient in 3D.
**Example 5.3**.: \(\Omega=(0,1)^{3}\)_, \(q^{\dagger}=1+0.3e^{-20(x_{1}-0.5)^{2}-20(x_{2}-0.5)^{2}-20(x_{3}-0.5)^{2}}\), and \(u^{\dagger}=x_{1}+x_{2}+x_{3}+\frac{1}{3}(x_{1}^{3}+x_{2}^{3}+x_{3}^{3})\)._
Fig. 5 shows the reconstruction at a 2D cross section, i.e., the surface \(x_{3}=0.5\), using the exact data \(\nabla u^{\dagger}\) and noisy data (\(\delta=10\%\)). The shape and the overall structure of \(q^{\dagger}\) are well recovered in both cases. The relative \(L^{2}(\Omega)\)-error \(e(\hat{q})\) is 1.49e-2 and 3.42e-2, for exact and noisy data, respectively. The error in the noisy case is higher than that for exact data, but still quite acceptable. This again shows the high robustness of the approach with respect to noise.
The last example is about recovering a conductivity function in 5D.
**Example 5.4**.: \(\Omega=(0,1)^{5}\)_, \(q^{\dagger}=1-(x_{1}-0.5)^{2}-(x_{2}-0.5)^{2}+\cos(\pi(x_{3}+1.5))+\cos(\pi(x _{4}+1.5))+\cos(\pi(x_{5}+1.5))\), and \(u^{\dagger}=x_{1}+x_{2}+x_{3}+x_{4}+x_{5}+\frac{1}{3}(x_{1}^{3}+x_{2}^{3}+x_{3 }^{3}+x_{4}^{3}+x_{5}^{3})\)._
Fig. 6 shows the reconstruction on a 2D cross section of the domain \(\Omega\), i.e., the surface \(x_{3}=x_{4}=x_{5}=0.5\). The relative \(L^{2}(\Omega)\)-error \(e(\hat{q})\) for the exact and noisy data is 3.44e-3 and 2.27e-2, respectively. The DNN approximations capture the overall structure of the exact conductivity \(q^{\dagger}\) for both exact and noisy data, showing again the remarkable robustness of the method with respect to the presence of noise. Moreover, it also shows the advantage of using DNNs: they can solve inverse problems for high-dimensional PDEs, which are not easily tractable with more traditional approaches, e.g., FEM. Surprisingly, the approach can produce high-quality results without using many training points in the domain \(\Omega\), despite the high dimensionality of the problem.
### The Dirichlet problem
The first example is about recovering a smooth conductivity in 2D.
Figure 4: The reconstructions for Example 5.2 using a loss function including the total variation term, with exact data (top) and noisy data (\(\delta=10\%\), bottom).
Figure 5: The reconstructions for Example 5.3 with exact data \((\text{top})\) and noisy data \((\delta=10\%,\text{bottom})\).
Figure 6: The reconstructions for Example 5.4 with exact data \((\text{top})\) and noisy data \((\delta=10\%,\text{bottom})\).
**Example 5.5**.: \(\Omega=(-1,1)^{2}\)_, \(q^{\dagger}=1+s_{1}(x_{1},x_{2})+s_{2}(x_{1},x_{2})\) with \(s_{1}=0.4e^{-15(x_{1}-0.5)^{2}-15x_{2}^{2}}\) and \(s_{2}=-0.4e^{-15(x_{1}+0.5)^{2}-15x_{2}^{2}}\), and \(u^{\dagger}=x_{1}+x_{2}+\frac{1}{3}(x_{1}^{3}+x_{2}^{3})\)._
Fig. 7 shows the reconstruction \(\hat{q}\) using the loss (4.4) with exact data (top) and noisy data (bottom, \(\delta=10\%\)). The relative \(L^{2}(\Omega)\)-error of the reconstructions is 9.72e-3 and 3.78e-2 for exact and noisy data, respectively. Again we observe the robustness of the proposed approach in the presence of noise: there is almost no degradation in the reconstruction quality, except a slight underestimation of the peak values of the two bumps and very mild oscillations near the boundary. The locations and shapes of the bumps are well captured. The convergence behavior of the optimizer in Fig. 8 shows that the loss \(J_{\boldsymbol{\gamma}}\) and the error \(e(\hat{q})\) stagnate at a comparable level regardless of the noise level. In light of Remark 4.1, Fig. 9 shows the results obtained with the loss (4.6) for exact data (top), \(\delta=1\%\) (middle) and \(\delta=10\%\) (bottom). The accuracy of the reconstructions with exact data and \(1\%\) noise is satisfactory, but the neural network fails to learn accurately with the loss (4.6) when the data is very noisy.
The second example is concerned with recovering a nearly piecewise constant conductivity function in 2D.
**Example 5.6**.: \(\Omega=(0,1)^{2}\)_, \(q^{\dagger}=1+0.3/(1+\tau(x,y))\), with \(\tau(x,y)=e^{400((x-0.65)^{2}+2(y-0.7)^{2}-0.15^{2})}\), and \(u^{\dagger}=\sin(\pi x)\sin(\pi y)\)._
Note that in this example, \(\nabla u^{\dagger}\) vanishes at the four corners of the domain and at the point \((\frac{1}{2},\frac{1}{2})\). Similar to Example 5.2, an additional total variation penalty \(\gamma_{tv}|q|_{\mathrm{TV}}\) (with \(\gamma_{tv}=0.01\)) is added to the loss \(J_{\boldsymbol{\gamma}}(\theta,\kappa)\) in order to promote piecewise constancy of the reconstruction. Fig. 10 shows the reconstruction using the loss (4.4) with exact (top) and noisy (bottom, \(\delta=10\%\)) data. It is observed that the reconstruction is accurate for exact data, and in the presence of \(10\%\) data noise, the reconstruction quality deteriorates only near the points where \(\nabla u^{\dagger}\) vanishes. This is expected, since substituting \(\nabla u=0\) into the formulation (4.4) leads to a loss essentially independent of \(q_{\theta}\), hence the inaccuracy of the reconstruction in a small neighbourhood of these points.
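The vanishing points of the data are easy to verify from the stated \(u^{\dagger}=\sin(\pi x)\sin(\pi y)\) (a pure-Python sketch; the function name is ours):

```python
import math

def grad_u(x, y):
    """Analytic gradient of u = sin(pi x) sin(pi y) for Example 5.6."""
    return (math.pi * math.cos(math.pi * x) * math.sin(math.pi * y),
            math.pi * math.sin(math.pi * x) * math.cos(math.pi * y))

# |grad u| vanishes at the four corners and at the center (0.5, 0.5), where the
# data loss carries no information on q_theta, but is bounded away from zero
# elsewhere, e.g., at (0.25, 0.25)
```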
The third example is about recovering a conductivity in 3D.
**Example 5.7**.: \(\Omega=(0,1)^{3}\)_, \(q^{\dagger}=1+e^{-(12(x_{1}-0.5)(x_{2}-0.5))^{2}}+x_{3}\), and \(u^{\dagger}=x_{1}+x_{2}+x_{3}+\frac{1}{3}(x_{1}^{3}+x_{2}^{3}+x_{3}^{3})\)._
Fig. 11 shows the reconstruction using the loss (4.4) on a 2D cross section at \(x_{3}=0.5\), with exact (top) and noisy (bottom, \(\delta=10\%\)) data; the relative \(L^{2}(\Omega)\)-errors are 3.28e-3 and 3.48e-2, respectively, indicating the excellent robustness of the approach with respect to data noise: there is only very mild deterioration in the reconstruction \(\hat{q}\).
The last example is concerned with recovering a conductivity function in high dimensions.
Figure 7: The reconstructions for Example 5.5 with exact data (top) and noisy data (\(\delta=10\%\), bottom).
**Example 5.8**.: \(\Omega=(0,1)^{5}\)_, \(q^{\dagger}=1+0.5(x_{1}x_{5}+x_{2}x_{4}+x_{3}^{2})-0.3e^{-25(x_{1}-0.5)^{2}-25(x_{2 }-0.5)^{2}}\), and \(u^{\dagger}=x_{1}+x_{2}+x_{3}+x_{4}+x_{5}+\frac{1}{3}(x_{1}^{3}+x_{2}^{3}+x_{3 }^{3}+x_{4}^{3}+x_{5}^{3})\)._
Fig. 12 shows the reconstructions using the loss (4.4) on a 2D cross section with \(x_{3}=x_{4}=x_{5}=0.5\), with exact (top) and noisy (bottom, \(\delta=10\%\)) data. The relative \(L^{2}(\Omega)\)-error is 5.78e-3 and 2.86e-2 for exact and noisy data, respectively. The features of the true conductivity have been successfully recovered and visually there is almost no difference between the reconstructions from exact and noisy data. This again showcases the significant potential of NN approximations compared to more traditional approaches for solving high-dimensional inverse problems.
## Appendix A The proof of Theorem 3.5
In this appendix, we prove Theorem 3.5, which gives high probability bounds on the statistical errors \(\Delta\mathcal{E}_{i}\), \(i\in\{d,b,b^{\prime},q,\sigma\}\). These bounds play a crucial role in deriving error estimates in Theorems 3.6 and 4.5. By slightly abusing the notation, let \(\mathcal{N}_{\sigma}\equiv\mathcal{N}(L_{\sigma},W_{\sigma},R_{\sigma})\) and \(\mathcal{N}_{q}=\mathcal{N}(L_{q},W_{q},R_{q})\) be two DNN function classes of given depth, width and parameter bound for approximating the current density \(\sigma\) and the conductivity \(q\), respectively. Then we define the following function classes
\[\mathcal{H}_{d} =\{h:\Omega\rightarrow\mathbb{R}|\ h(x)=\|\sigma_{\kappa}(x)-P_{ \mathcal{A}}(q_{\theta}(x))\nabla z^{\delta}(x)\|_{\ell^{2}}^{2},\sigma_{\kappa }\in\mathcal{N}_{\sigma},q_{\theta}\in\mathcal{N}_{q}\},\] \[\mathcal{H}_{\sigma} =\{h:\Omega\rightarrow\mathbb{R}|\ h(x)=|\nabla\cdot\sigma_{ \kappa}(x)+f(x)|^{2},\sigma_{\kappa}\in\mathcal{N}_{\sigma}\},\] \[\mathcal{H}_{b} =\{h:\partial\Omega\rightarrow\mathbb{R}|\ h(x)=|\mathbf{n}\cdot \sigma_{\kappa}(x)-g(x)|^{2},\sigma_{\kappa}\in\mathcal{N}_{\sigma}\},\] \[\mathcal{H}_{b^{\prime}} =\{h:\partial\Omega\rightarrow\mathbb{R}|\ h(x)=\|\sigma_{\kappa }(x)-q^{\dagger}(x)\nabla z^{\delta}(x)\|_{\ell^{2}}^{2},\sigma_{\kappa}\in \mathcal{N}_{\sigma}\},\] \[\mathcal{H}_{q} =\{h:\Omega\rightarrow\mathbb{R}|\ h(x)=\|\nabla q_{\theta}(x) \|_{\ell^{2}}^{2},q_{\theta}\in\mathcal{N}_{q}\}.\]
To bound these errors, we employ Rademacher complexity [9] and boundedness and Lipschitz continuity of DNN functions and their derivatives with respect to the DNN parameters. Rademacher complexity measures the complexity of a collection of functions by the correlation between function values with Rademacher random variables.
Figure 8: The variation of the loss (top) and relative \(L^{2}(\Omega)\) error \(e(\hat{q})\) (bottom) during the training process for Example 5.5 at three different noise levels.
Figure 9: The reconstructions for Example 5.5 with exact data \((\text{top})\) and noisy data \((\delta=1\%\), middle and \(\delta=10\%\), bottom).
Figure 10: The reconstructions for Example 5.6 with exact data \((\text{top})\) and noisy data \((\delta=10\%,\text{bottom})\).
Figure 11: The reconstructions for Example 5.7 with exact data \((\text{top})\) and noisy data \((\delta=10\%,\text{bottom})\).
**Definition A.1**.: _Let \(\mathcal{F}\) be a real-valued function class defined on the domain \(\Omega\) (or the boundary \(\partial\Omega\)), and let \(\xi=\{\xi_{j}\}_{j=1}^{n}\) be i.i.d. samples from the distribution \(\mathcal{U}(\Omega)\) (or the distribution \(\mathcal{U}(\partial\Omega)\)). Then the Rademacher complexity \(\mathfrak{R}_{n}(\mathcal{F})\) of the class \(\mathcal{F}\) is defined by_
\[\mathfrak{R}_{n}(\mathcal{F})=\mathbb{E}_{\xi,\omega}\bigg{[}\sup_{v\in\mathcal{F} }\ n^{-1}\bigg{|}\ \sum_{j=1}^{n}\omega_{j}v(\xi_{j})\ \bigg{|}\bigg{]},\]
_where \(\omega=\{\omega_{j}\}_{j=1}^{n}\) are i.i.d. Rademacher random variables with probability \(P(\omega_{j}=1)=P(\omega_{j}=-1)=\frac{1}{2}\)._
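For intuition, the Rademacher complexity of a small finite class can be estimated directly by Monte Carlo over the signs \(\omega\) (a sketch with the sample points \(\xi\) frozen; the helper name is ours):

```python
import random

def rademacher_complexity(funcs, xs, n_trials=2000, seed=0):
    """Monte Carlo estimate of R_n for a finite class `funcs` on fixed points `xs`."""
    rng = random.Random(seed)
    n = len(xs)
    acc = 0.0
    for _ in range(n_trials):
        omega = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        acc += max(abs(sum(w * f(x) for w, x in zip(omega, xs))) / n for f in funcs)
    return acc / n_trials

xs = [i / 10.0 for i in range(10)]
r_zero = rademacher_complexity([lambda x: 0.0], xs)                  # singleton zero class
r_sign = rademacher_complexity([lambda x: 1.0, lambda x: -1.0], xs)  # two constant functions
```

Here `r_zero` is exactly 0, while `r_sign` approximates \(\mathbb{E}|n^{-1}\sum_{j}\omega_{j}|\), roughly 0.25 for \(n=10\); richer classes correlate better with random signs and hence have larger complexity.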
We use the following PAC-type generalization bound [31, Theorem 3.1] via Rademacher complexity. Note that the statement in [31, Theorem 3.1] only discusses the case of the function class ranging in \([0,1]\). For a general bounded function, applying McDiarmid's inequality and the original argument yields the following result.
**Lemma A.1**.: _Let \(X_{1},\ldots,X_{n}\) be a set of i.i.d. random variables. Let \(\mathcal{F}\) be a function class defined on \(D\) such that \(\sup_{v\in\mathcal{F}}\|v\|_{L^{\infty}(D)}\leq M_{\mathcal{F}}<\infty\). Then for any \(\tau\in(0,1)\), with probability at least \(1-\tau\):_
\[\sup_{v\in\mathcal{F}}\bigg{|}n^{-1}\sum_{j=1}^{n}v(X_{j})-\mathbb{E}[v(X)] \bigg{|}\leq 2\mathfrak{R}_{n}(\mathcal{F})+2M_{\mathcal{F}}\sqrt{\frac{ \log\frac{1}{\tau}}{2n}}.\]
To apply Lemma A.1, we need to bound the Rademacher complexity of the function classes \(\mathcal{H}_{i}\), \(i\in\{d,\sigma,b,b^{\prime},q\}\). This is achieved by combining the Lipschitz continuity of DNN functions (or their derivatives) in the target function classes with respect to the DNN parameters with Dudley's formula in Lemma A.4. The next lemma gives useful boundedness and Lipschitz continuity of a \(\tanh\)-DNN function class with respect to the DNN parameters. Note that Lemma A.2 also holds with the \(L^{\infty}(\partial\Omega)\) norm in place of the \(L^{\infty}(\Omega)\) norm, since the overall argument depends only on the boundedness of the activation function \(\rho=\tanh\) and its derivative on \(\mathbb{R}\).
**Lemma A.2**.: _Let \(\Theta\) be a parametrization with depth \(L\) and width \(W\), and \(\theta=\{(A^{(\ell)},b^{(\ell)})_{\ell=1}^{L}\},\tilde{\theta}=\{(\tilde{A}^ {(\ell)},\tilde{b}^{(\ell)})_{\ell=1}^{L}\}\in\Theta\). Then for the DNN realizations \(v,\tilde{v}:\Omega\to\mathbb{R}\) of \(\theta,\tilde{\theta}\) with \(\|\theta\|_{\ell^{\infty}},\|\tilde{\theta}\|_{\ell^{\infty}}\leq R\), the following estimates hold_
* \(\|v\|_{L^{\infty}(\Omega)}\leq R(W+1),\quad\|\nabla v\|_{L^{\infty}(\Omega; \mathbb{R}^{d})}\leq\sqrt{d}R^{L}W^{L-1}\)_;_
* \(\|v-\tilde{v}\|_{L^{\infty}(\Omega)}\leq 2LR^{L-1}W^{L}\|\theta-\tilde{\theta}\|_{ \ell^{\infty}},\quad\|\nabla(v-\tilde{v})\|_{L^{\infty}(\Omega;\mathbb{R}^{d })}\leq\sqrt{d}L^{2}R^{2L-2}W^{2L-2}\|\theta-\tilde{\theta}\|_{\ell^{\infty}}\)_._
Proof.: All the estimates are already contained in [24]; see [24, p. 19 and p. 22 of Lemma 3.4] and [24, Remark 3.3].
Figure 12: The reconstructions for Example 5.8 with exact data \((\text{top})\) and noisy data \((\delta=10\%\), \(\text{bottom})\).
Next we reduce the estimates to the DNN parametrization. This is achieved using the Lipschitz continuity of the functions in the classes \(\mathcal{H}_{i}\) with respect to the DNN parameters, which follows from Lemma A.2.
**Lemma A.3**.: _Let \(c_{z}=\|\nabla z^{\delta}\|_{L^{\infty}(\Omega)}\) and \(c^{\prime}_{z}=\|\nabla z^{\delta}\|_{L^{\infty}(\partial\Omega)}\). For the function classes \(\mathcal{H}_{i}\), \(i\in\{d,\sigma,b,b^{\prime},q\}\), the functions are uniformly bounded:_
\[\|h\|_{L^{\infty}(\Omega)}\leq\begin{cases}M_{d}=2(dR_{\sigma}^{2}(W_{\sigma} +1)^{2}+c_{1}^{2}c_{z}^{2}),&h\in\mathcal{H}_{d},\\ M_{\sigma}=2(d^{2}R_{\sigma}^{2L_{\sigma}}W_{\sigma}^{2L_{\sigma}-2}+\|f\|_{L^{ \infty}(\Omega)}^{2}),&h\in\mathcal{H}_{\sigma},\\ M_{q}=dR_{q}^{2L_{q}}W_{q}^{2L_{q}-2},&h\in\mathcal{H}_{q},\\ M_{b}=2(dR_{\sigma}^{2}(W_{\sigma}+1)^{2}+\|g\|_{L^{\infty}(\partial\Omega)}^{ 2}),&h\in\mathcal{H}_{b},\\ M_{b^{\prime}}=2(dR_{\sigma}^{2}(W_{\sigma}+1)^{2}+c_{1}^{2}(c^{\prime}_{z})^ {2}),&h\in\mathcal{H}_{b^{\prime}}.\end{cases}\]
_Moreover, the following Lipschitz continuity estimates in the DNN parameters hold_
\[\|h-\tilde{h}\|_{L^{\infty}(\Omega)}\leq\Lambda_{d}(\|\theta-\tilde{\theta}\|_{\ell^{\infty}}+\|\kappa-\tilde{\kappa}\|_{\ell^{\infty}}),\quad\forall h,\tilde{h}\in\mathcal{H}_{d},\] \[\|h-\tilde{h}\|_{L^{\infty}(\Omega)}\leq\Lambda_{\sigma}\|\kappa-\tilde{\kappa}\|_{\ell^{\infty}},\quad\forall h,\tilde{h}\in\mathcal{H}_{\sigma},\] \[\|h-\tilde{h}\|_{L^{\infty}(\partial\Omega)}\leq\Lambda_{b}\|\kappa-\tilde{\kappa}\|_{\ell^{\infty}},\quad\forall h,\tilde{h}\in\mathcal{H}_{b},\] \[\|h-\tilde{h}\|_{L^{\infty}(\partial\Omega)}\leq\Lambda_{b^{\prime}}\|\kappa-\tilde{\kappa}\|_{\ell^{\infty}},\quad\forall h,\tilde{h}\in\mathcal{H}_{b^{\prime}},\] \[\|h-\tilde{h}\|_{L^{\infty}(\Omega)}\leq\Lambda_{q}\|\theta-\tilde{\theta}\|_{\ell^{\infty}},\quad\forall h,\tilde{h}\in\mathcal{H}_{q},\]
_with the Lipschitz constants \(\Lambda_{i}\), \(i\in\{d,\sigma,b,b^{\prime},q\}\), given by_
\[\Lambda_{d}=2\big{(}\sqrt{d}R_{\sigma}(W_{\sigma}+1)+c_{1}c_{z} \big{)}\max(2\sqrt{d}L_{\sigma}R_{\sigma}^{L_{\sigma}-1}W_{\sigma}^{L_{\sigma} },c_{z}L_{q}R_{q}^{L_{q}-1}W_{q}^{L_{q}}),\] \[\Lambda_{\sigma}=2(dR_{\sigma}^{L_{\sigma}}W_{\sigma}^{L_{\sigma} -1}+\|f\|_{L^{\infty}(\Omega)})dL_{\sigma}^{2}R_{\sigma}^{2L_{\sigma}-2}W_{ \sigma}^{2L_{\sigma}-2},\] \[\Lambda_{b}=4(\sqrt{d}R_{\sigma}(W_{\sigma}+1)+\|g\|_{L^{\infty}( \partial\Omega)})\sqrt{d}L_{\sigma}R_{\sigma}^{L_{\sigma}-1}W_{\sigma}^{L_{ \sigma}},\] \[\Lambda_{b^{\prime}}=4\big{(}\sqrt{d}R_{\sigma}(W_{\sigma}+1)+c_ {1}c^{\prime}_{z}\big{)}\sqrt{d}L_{\sigma}R_{\sigma}^{L_{\sigma}-1}W_{\sigma}^{ L_{\sigma}},\] \[\Lambda_{q}=2dL_{q}^{2}R_{q}^{3L_{q}-2}W_{q}^{3L_{q}-3}.\]
Proof.: The assertions follow directly from Lemma A.2. Indeed, for \(h_{\theta,\kappa}\in\mathcal{H}_{d}\), we have
\[|h_{\theta,\kappa}(x)|\leq 2(\|\sigma_{\kappa}(x)\|_{\ell^{2}}^{2}+\|P_{ \mathcal{A}}(q_{\theta}(x))\nabla z^{\delta}(x)\|_{\ell^{2}}^{2})\leq 2(dR_{ \sigma}^{2}(W_{\sigma}+1)^{2}+c_{1}^{2}c_{z}^{2}).\]
For any \(h_{\theta,\kappa},h_{\tilde{\theta},\tilde{\kappa}}\in\mathcal{H}_{d}\), by factoring the difference of squares as \(\|a\|_{\ell^{2}}^{2}-\|b\|_{\ell^{2}}^{2}=(a+b,a-b)\) and applying the Cauchy–Schwarz inequality, while noting the stability of \(P_{\mathcal{A}}\) in (3.2), we have
\[h_{\theta,\kappa}(x)-h_{\tilde{\theta},\tilde{\kappa}}(x)=\|\sigma_{\kappa}(x)-P_{\mathcal{A}}(q_{\theta}(x))\nabla z^{\delta}(x)\|_{\ell^{2}}^{2}-\|\sigma_{\tilde{\kappa}}(x)-P_{\mathcal{A}}(q_{\tilde{\theta}}(x))\nabla z^{\delta}(x)\|_{\ell^{2}}^{2}\] \[=\big{(}\sigma_{\kappa}(x)-P_{\mathcal{A}}(q_{\theta}(x))\nabla z^{\delta}(x)+\sigma_{\tilde{\kappa}}(x)-P_{\mathcal{A}}(q_{\tilde{\theta}}(x))\nabla z^{\delta}(x),\] \[\quad\sigma_{\kappa}(x)-\sigma_{\tilde{\kappa}}(x)+(-P_{\mathcal{A}}(q_{\theta}(x))+P_{\mathcal{A}}(q_{\tilde{\theta}}(x)))\nabla z^{\delta}(x)\big{)}\] \[\leq 2\big{(}\sup_{\sigma_{\kappa}\in\mathcal{N}_{\sigma}}\|\sigma_{\kappa}\|_{L^{\infty}(\Omega;\mathbb{R}^{d})}+c_{1}c_{z}\big{)}\big{(}\|\sigma_{\kappa}-\sigma_{\tilde{\kappa}}\|_{L^{\infty}(\Omega;\mathbb{R}^{d})}+c_{z}\|q_{\theta}-q_{\tilde{\theta}}\|_{L^{\infty}(\Omega)}\big{)}.\]
Then by Lemma A.2, we deduce
\[|h_{\theta,\kappa}(x)-h_{\tilde{\theta},\tilde{\kappa}}(x)|\] \[\leq 2\big{(}\sqrt{d}R_{\sigma}(W_{\sigma}+1)+c_{1}c_{z}\big{)}\times( 2\sqrt{d}L_{\sigma}R_{\sigma}^{L_{\sigma}-1}W_{\sigma}^{L_{\sigma}}\|\kappa- \tilde{\kappa}\|_{\ell^{\infty}}+c_{z}L_{q}R_{q}^{L_{q}-1}W_{q}^{L_{q}}\|\theta- \tilde{\theta}\|_{\ell^{\infty}})\] \[\leq 2\big{(}\sqrt{d}R_{\sigma}(W_{\sigma}+1)+c_{1}c_{z}\big{)}\max(2 \sqrt{d}L_{\sigma}R_{\sigma}^{L_{\sigma}-1}W_{\sigma}^{L_{\sigma}},c_{z}L_{q}R_{q}^{L _{q}-1}W_{q}^{L_{q}})(\|\kappa-\tilde{\kappa}\|_{\ell^{\infty}}+\|\theta- \tilde{\theta}\|_{\ell^{\infty}}).\]
The remaining estimates follow similarly. This completes the proof of the lemma.
Next we bound the Rademacher complexities \(\mathfrak{R}_{n}(\mathcal{H}_{i})\), \(i\in\{d,\sigma,b,b^{\prime},q\}\), using the concept of the covering number. Let \(\mathcal{G}\) be a real-valued function class equipped with the metric \(\rho\). An \(\epsilon\)-cover of the class \(\mathcal{G}\) with respect to the metric \(\rho\) is a collection of points \(\{x_{i}\}_{i=1}^{n}\subset\mathcal{G}\) such that for every \(x\in\mathcal{G}\), there exists at least one \(i\in\{1,\dots,n\}\) such that \(\rho(x,x_{i})\leq\epsilon\). The \(\epsilon\)-covering number \(\mathcal{C}(\mathcal{G},\rho,\epsilon)\) is the minimum cardinality among all \(\epsilon\)-covers of the class \(\mathcal{G}\) with respect to the metric \(\rho\). Then we can state the well-known Dudley's theorem [30, Theorem 9] and [42, Theorem 1.19].
**Lemma A.4**.: _Let \(M_{\mathcal{F}}:=\sup_{f\in\mathcal{F}}\|f\|_{L^{\infty}(\Omega)},\) and \(\mathcal{C}(\mathcal{F},\|\cdot\|_{L^{\infty}(\Omega)},\epsilon)\) be the covering number of the set \(\mathcal{F}\). Then the Rademacher complexity \(\mathfrak{R}_{n}(\mathcal{F})\) is bounded by_
\[\mathfrak{R}_{n}(\mathcal{F})\leq\inf_{0<s<M_{\mathcal{F}}}\bigg{(}4s\;+\;12n^ {-\frac{1}{2}}\int_{s}^{M_{\mathcal{F}}}\big{(}\log\mathcal{C}(\mathcal{F},\| \cdot\|_{L^{\infty}(\Omega)},\epsilon)\big{)}^{\frac{1}{2}}\;\mathrm{d} \epsilon\bigg{)}.\]
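To make the covering-number quantities entering Dudley's formula concrete, the following sketch (an illustration, not part of the proof) builds an explicit \(\epsilon\)-cover of the \(\ell^{\infty}\) ball \(B_{R}\subset\mathbb{R}^{n}\) by an axis-aligned grid with spacing \(2\epsilon\), whose size is consistent with the bound \(\log\mathcal{C}(B_{R},\|\cdot\|_{\ell^{\infty}},\epsilon)\leq n\log(2R\epsilon^{-1})\) used in the proof below; all names are illustrative.

```python
import math
import random

def grid_points(R, eps):
    """1-d grid with spacing 2*eps that eps-covers the interval [-R, R]."""
    k = math.ceil(R / eps)                      # points needed per axis
    return [-R + (2 * i + 1) * eps for i in range(k)]

n_dim, R, eps = 3, 2.0, 0.25
axis = grid_points(R, eps)
cover_size = len(axis) ** n_dim                 # explicit eps-cover of B_R
bound = (2 * R / eps) ** n_dim                  # exp(n * log(2R/eps))

# Spot-check the covering property: a random point of B_R is within eps
# (in l-infinity distance) of some grid point.
random.seed(0)
x = [random.uniform(-R, R) for _ in range(n_dim)]
dist = max(min(abs(xi - g) for g in axis) for xi in x)
```

Here the grid has \(\lceil R/\epsilon\rceil^{n}\) points, which is at most \((2R/\epsilon)^{n}\), matching the logarithmic bound above.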
Now we can present the proof of Theorem 3.5.
Proof.: By the Lipschitz continuity of DNN functions with respect to the DNN parameters, the covering number of the corresponding function class can be bounded by that of the parametrization. For any \(n\in\mathbb{N}\), \(R\in[1,\infty)\), \(\epsilon\in(0,1)\), and \(B_{R}:=\{x\in\mathbb{R}^{n}:\;\|x\|_{\ell^{\infty}}\leq R\}\), we have [13, Proposition 5]
\[\log\mathcal{C}(B_{R},\|\cdot\|_{\ell^{\infty}},\epsilon)\leq n\log(2R \epsilon^{-1}).\]
It follows directly from the Lipschitz continuity in Lemma A.2 that
\[\log\mathcal{C}(\mathcal{H}_{d},\|\cdot\|_{L^{\infty}(\Omega)},\epsilon)\leq\log\mathcal{C}(\Theta_{\sigma}\otimes\Theta_{q},\|\cdot\|_{\ell^{\infty}},\Lambda_{d}^{-1}\epsilon)\leq(N_{\kappa}+N_{\theta})\log(2\max(R_{q},R_{\sigma})\Lambda_{d}\epsilon^{-1}),\]
with \(\Theta_{\sigma}\) and \(\Theta_{q}\) denoting the neural network parameter sets for \(\sigma\) and \(q\), respectively, and \(\Lambda_{d}:=2\big{(}\sqrt{d}R_{\sigma}(W_{\sigma}+1)+c_{1}c_{z}\big{)}\max(2\sqrt{d}L_{\sigma}R_{\sigma}^{L_{\sigma}-1}W_{\sigma}^{L_{\sigma}},c_{z}L_{q}R_{q}^{L_{q}-1}W_{q}^{L_{q}})\), cf. Lemma A.3. By Lemma A.3, we also have \(M_{d}=2(dR_{\sigma}^{2}(W_{\sigma}+1)^{2}+c_{1}^{2}c_{z}^{2})\). Then letting \(s=n^{-\frac{1}{2}}\) in Lemma A.4 gives
\[\mathfrak{R}_{n}(\mathcal{H}_{d})\leq 4n^{-\frac{1}{2}}+12n^{- \frac{1}{2}}\int_{n^{-\frac{1}{2}}}^{M_{d}}\big{(}(N_{\kappa}+N_{\theta})\text {log}(2\max(R_{q},R_{\sigma})\Lambda_{d}\epsilon^{-1})\big{)}^{\frac{1}{2}} \,\mathrm{d}\epsilon\] \[\leq 4n^{-\frac{1}{2}}+12n^{-\frac{1}{2}}M_{d}\big{(}(N_{\kappa}+N _{\theta})\text{log}(2\max(R_{q},R_{\sigma})\Lambda_{d}n^{\frac{1}{2}})\big{)} ^{\frac{1}{2}}\] \[\leq 4n^{-\frac{1}{2}}+24n^{-\frac{1}{2}}(dR_{\sigma}^{2}(W_{ \sigma}+1)^{2}+c_{1}^{2}c_{z}^{2})(N_{\kappa}+N_{\theta})^{\frac{1}{2}}\big{(} \log\max(R_{q},R_{\sigma})+\log\Lambda_{d}+\log n+\log 2\big{)}^{\frac{1}{2}}.\]
Since \(1\leq R_{q},R_{\sigma}\), \(1\leq W_{q}\leq N_{\theta}\), \(1\leq W_{\sigma}\leq N_{\kappa}\) and \(2\leq L_{q},L_{\sigma}\leq c\log(d+2)\) (due to Lemma 2.1), we can bound the term \(\log\Lambda_{d}\) by
\[\log\Lambda_{d}\leq c(\log R_{\sigma}+\log N_{\kappa}+\log R_{q}+\log N_{ \theta}+\tilde{c}),\]
with the constants \(c\) and \(\tilde{c}\) depending on \(c_{1}\), \(c_{z}\), \(d\), \(L_{q}\) and \(L_{\sigma}\) at most polynomially. Hence, we have
\[\mathfrak{R}_{n}(\mathcal{H}_{d})\leq c_{d}n^{-\frac{1}{2}}R_{\sigma}^{2}N_{ \kappa}^{2}(N_{\kappa}+N_{\theta})^{\frac{1}{2}}(\log^{\frac{1}{2}}R_{\sigma}+ \log^{\frac{1}{2}}N_{\kappa}+\log^{\frac{1}{2}}R_{q}+\log^{\frac{1}{2}}N_{ \theta}+\log^{\frac{1}{2}}n),\]
where \(c_{d}>0\) depends on \(d\), \(c_{1}\) and \(c_{z}\) at most polynomially. Similarly, repeating the preceding argument leads to
\[\mathfrak{R}_{n}(\mathcal{H}_{\sigma}) \leq c_{\sigma}n^{-\frac{1}{2}}R_{\sigma}^{2L_{\sigma}}N_{\kappa}^{ 2L_{\sigma}-\frac{3}{2}}\big{(}\log^{\frac{1}{2}}R_{\sigma}+\log^{\frac{1}{2}} N_{\kappa}+\log^{\frac{1}{2}}n\big{)},\] \[\mathfrak{R}_{n}(\mathcal{H}_{b}) \leq c_{b}n^{-\frac{1}{2}}R_{\sigma}^{2}N_{\kappa}^{\frac{5}{2}} \big{(}\log^{\frac{1}{2}}R_{\sigma}+\log^{\frac{1}{2}}N_{\kappa}+\log^{\frac{1}{ 2}}n\big{)},\] \[\mathfrak{R}_{n}(\mathcal{H}_{b^{\prime}}) \leq c_{b^{\prime}}n^{-\frac{1}{2}}R_{\sigma}^{2}N_{\kappa}^{\frac{5 }{2}}\big{(}\log^{\frac{1}{2}}R_{\sigma}+\log^{\frac{1}{2}}N_{\kappa}+\log^{ \frac{1}{2}}n\big{)},\] \[\mathfrak{R}_{n}(\mathcal{H}_{q}) \leq c_{q}n^{-\frac{1}{2}}R_{q}^{2L_{q}}N_{\theta}^{2L_{q}-\frac{ 3}{2}}\big{(}\log^{\frac{1}{2}}R_{q}+\log^{\frac{1}{2}}N_{\theta}+\log^{ \frac{1}{2}}n\big{)},\]
where the involved constants depend on \(d\) at most polynomially. Finally, the desired estimates follow from the PAC-type generalization bound in Lemma A.1.
|
2301.11556 | Conformal inference is (almost) free for neural networks trained with early stopping | Ziyi Liang, Yanfei Zhou, Matteo Sesia | 2023-01-27T06:43:07Z | http://arxiv.org/abs/2301.11556v2 |
# Conformal inference is (almost) free for neural networks trained with early stopping
###### Abstract
Early stopping based on hold-out data is a popular regularization technique designed to mitigate overfitting and increase the predictive accuracy of neural networks. Models trained with early stopping often provide relatively accurate predictions, but they generally still lack precise statistical guarantees unless they are further calibrated using independent hold-out data. This paper addresses the above limitation with _conformalized early stopping_: a novel method that combines early stopping with conformal calibration while efficiently recycling the same hold-out data. This leads to models that are both accurate and able to provide exact predictive inferences without multiple data splits nor overly conservative adjustments. Practical implementations are developed for different learning tasks--outlier detection, multi-class classification, regression--and their competitive performance is demonstrated on real data.
## 1 Introduction
Deep neural networks can detect complex data patterns and leverage them to make accurate predictions in many applications, including computer vision, natural language processing, and speech recognition, to name a few examples. These models can sometimes even outperform skilled humans [1], but they still make mistakes. Unfortunately, the severity of these mistakes is compounded by the fact that neural networks are often prone to overfitting and may become overconfident [2]. Several training strategies have been developed to mitigate overconfidence, including dropout [3], batch normalization [4], weight normalization [5], data augmentation [6], and early stopping [7]; the latter is the focus of this paper.
Early stopping consists of continuously evaluating after each batch of stochastic gradient updates (or _epoch_) the predictive performance of the current model on _hold-out_ independent data. After a large number of gradient updates, only the intermediate model achieving the best performance on the hold-out data is utilized to make predictions. This strategy is often effective at mitigating overfitting and can produce relatively accurate predictions compared to fully trained models, but it does not fully resolve overconfidence because it does not lead to models with finite-sample guarantees.
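The early-stopping loop described above can be sketched in a few lines. The toy one-parameter least-squares model and all names below are illustrative assumptions, not the paper's implementation; the essential pattern is tracking the best hold-out loss across epochs and keeping the corresponding intermediate model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: fit y = w*x by gradient descent; early stopping evaluates the
# hold-out loss after every epoch and keeps the best intermediate model.
x_tr = rng.normal(size=200); y_tr = 2.0 * x_tr + rng.normal(scale=0.5, size=200)
x_es = rng.normal(size=100); y_es = 2.0 * x_es + rng.normal(scale=0.5, size=100)

def loss(w, x, y):
    return float(np.mean((y - w * x) ** 2))

w, lr = 0.0, 0.05
best_w, best_es_loss = w, loss(w, x_es, y_es)
for epoch in range(100):
    grad = -2.0 * np.mean((y_tr - w * x_tr) * x_tr)   # d/dw of training MSE
    w -= lr * grad
    es_loss = loss(w, x_es, y_es)
    if es_loss < best_es_loss:                        # early-stopping criterion
        best_w, best_es_loss = w, es_loss
```

After training, `best_w` plays the role of the early-stopped model \(\hat{M}_{\mathrm{es}}\): the intermediate iterate with the smallest loss on the hold-out set.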
A general framework for quantifying the predictive uncertainty of any _black-box_ machine learning model is that of conformal inference [8]. The key idea of conformal inference is to apply a pre-trained model to a _calibration_ set of hold-out observations drawn at random from the target population. If the calibration data are exchangeable with the test point of interest, the model performance on the calibration set can be translated into statistically rigorous predictive inferences. This framework is flexible and can accommodate different learning tasks, including out-of-distribution testing [9], classification [10], and regression [8]. For example, in the context of classification, conformal inference can give prediction sets that contain the correct label for a new data point with high probability. In theory, the quality of the trained model has no consequence on the _average_ validity of conformal inferences, but it does affect their reliability and usefulness on a case-by-case level. In particular, conformal uncertainty estimates obtained after calibrating an overconfident model may be too conservative for some test cases and too optimistic for others [11]. The goal of this paper is to combine conformal calibration with standard early stopping training techniques as efficiently as possible, in order to produce more reliable predictive inferences with a finite amount of available data.
Achieving high accuracy with deep learning often requires large training sets [12], and conformal inference makes the overall pipeline even more data-intensive. As high-quality observations can be expensive to collect, in some situations practitioners may naturally wonder whether the advantage of having principled uncertainty estimates is worth a possible reduction in predictive accuracy due to fewer available training samples. This concern is relevant because the size of the calibration set cannot be too small if one wants stable and reliable conformal inferences [13, 14]. In fact, very large calibration sets may be necessary to obtain stronger conformal inferences that are valid not only on average but also conditionally on some important individual features; see Vovk et al. [10], Romano et al. [15], and Barber et al. [16].
This paper resolves the above dilemma by showing that conformal inferences for deep learning models trained with early stopping can be obtained almost "for free"--without spending precious data. More precisely, we present an innovative method that blends model training with early stopping and conformal calibration using the same hold-out samples, essentially obtaining rigorous predictive inferences at no additional data cost compared to standard early stopping. It is worth emphasizing this result is not trivial. In fact, naively applying existing conformal calibration methods using the same hold-out samples utilized for early stopping would not lead to theoretically valid inferences, at least not without resorting to very conservative corrections.
The paper is organized as follows. Section 2 develops our _conformalized early stopping_ (CES) method, starting from outlier detection and classification, then addressing regression. Section 3 demonstrates the advantages of CES through numerical experiments. Section 4 concludes with a discussion and some ideas for further research. Additional details and results, including a theoretical analysis of the naive benchmark mentioned above, can be found in the Appendices, along with all mathematical proofs.
### Related work
Conformal inference [8, 17, 18] has become a very rich and active area of research [19, 20, 21, 22]. Many prior works studied the computation of efficient conformal inferences starting from pre-trained _black-box_ models, including for example in the context of outlier detection [9, 23, 24, 25], classification [10, 11, 26, 27, 28], and regression [8, 20, 29]. Other works have studied the general robustness of conformal
inferences to distribution shifts [30, 31] and, more broadly, to failures of the data exchangeability assumption [32, 33]. Our research is orthogonal, as we look inside the black-box model and develop a novel early-stopping training technique that is naturally integrated with conformal calibration. Nonetheless, the proposed method could be combined with those described in the aforementioned papers. Other recent research has explored different ways of bringing conformal inference into the learning algorithms [34, 35, 36, 37], and some of those works apply standard early stopping techniques, but they do not address our problem.
This paper is related to Yang and Kuchibhotla [38], which proposed a general theoretical adjustment for conformal inferences computed after model selection. That method could be utilized to account for early stopping without further data splits, as detailed in Appendix A1. However, we will demonstrate that even an improved version of such analysis remains overly conservative in the context of model selection via early stopping, and the alternative method developed in this paper performs much better in practice. Our solution is inspired by Liang, Sesia, and Sun [25], which deals with the problem of selecting the best model from an arbitrary machine learning toolbox to obtain the most powerful conformal p-values for outlier testing. The idea of Liang, Sesia, and Sun [25] extends naturally to the early stopping problem in the special cases of outlier detection and classification, but the regression setting requires substantial technical innovations. The work of Liang, Sesia, and Sun [25] is also related to Marandon et al. [39], although the latter is more distant from this paper because it focuses on theoretically controlling the false discovery rate [40] in multiple testing problems. Finally, this paper draws inspiration from Kim, Xu, and Barber [41], which shows that machine learning models trained with bootstrap (or bagging) techniques can also lead to valid conformal inferences essentially for free.
## 2 Methods
### Standard conformal inference and early stopping
Consider \(n\) data points, \(Z_{i}\) for \(i\in\mathcal{D}=[n]=\{1,\ldots,n\}\), sampled exchangeably (e.g., i.i.d.) from an unknown distribution \(P_{Z}\) with support on some space \(\mathcal{Z}\). Consider also an additional test sample, \(Z_{n+1}\). In the context of outlier detection, one wishes to test whether \(Z_{n+1}\) was sampled exchangeably from \(P_{Z}\). In classification or regression, one can write \(Z_{i}=(X_{i},Y_{i})\), where \(X_{i}\) is a feature vector while \(Y_{i}\) is a discrete category or a continuous response, and the goal is to predict the unobserved value of \(Y_{n+1}\) given \(X_{n+1}\) and the data in \(\mathcal{D}\).
The standard pipeline begins by randomly splitting the data in \(\mathcal{D}\) into three disjoint subsets: \(\mathcal{D}_{\mathrm{train}},\mathcal{D}_{\mathrm{es}},\mathcal{D}_{\mathrm{ cal}}\subset[n]\). The samples in \(\mathcal{D}_{\mathrm{train}}\) are utilized to train a model \(M\) via stochastic gradient descent, in such a way as to (approximately) minimize the desired loss \(\mathcal{L}\), while the observations in \(\mathcal{D}_{\mathrm{es}}\) and \(\mathcal{D}_{\mathrm{cal}}\) are held out. We denote by \(M_{t}\) the model learnt after \(t\) epochs of stochastic gradient descent, for any \(t\in[t_{\mathrm{max}}]\), where \(t_{\mathrm{max}}\) is a pre-determined maximum number of epochs. For simplicity, \(\mathcal{L}\) is assumed to be an additive loss, in the sense that its value calculated on the training data after \(t\) epochs is \(\mathcal{L}_{\mathrm{train}}(M_{t})=\sum_{i\in\mathcal{D}_{\mathrm{train}}} \ell(M_{t};Z_{i})\), for some appropriate function \(\ell\). For example, a typical choice for regression would be the squared-error loss: \(\ell(M_{t};Z_{i})=\left[Y_{i}-\hat{\mu}(X_{i};M_{t})\right]^{2}\), where \(\hat{\mu}(X_{i};M_{t})\) indicates the value of the regression function at \(X_{i}\), as estimated by \(M_{t}\). Similarly, the loss evaluated on \(\mathcal{D}_{\mathrm{es}}\) is denoted as \(\mathcal{L}_{\mathrm{es}}(M_{t})=\sum_{i\in\mathcal{D}_{\mathrm{es}}}\ell(M_ {t};Z_{i})\). After training for \(t_{\mathrm{max}}\) epochs, early stopping selects the model \(\hat{M}_{\mathrm{es}}\) that minimizes the loss on \(\mathcal{D}_{\mathrm{es}}\): \(\hat{M}_{\mathrm{es}}=\arg\min_{M_{t}\,:\,0\leq t\leq t_{\mathrm{max}}} \mathcal{L}_{\mathrm{es}}(M_{t})\). Conformal calibration of \(\hat{M}_{\mathrm{es}}\) is then conducted using the independent hold-out data set \(\mathcal{D}_{\mathrm{cal}}\), as
sketched in Figure 1 (a). This pipeline requires a three-way data split because: (i) \(\mathcal{D}_{\text{train}}\) and \(\mathcal{D}_{\text{es}}\) must be disjoint to ensure the early stopping criterion is effective at mitigating overfitting; and (ii) \(\mathcal{D}_{\text{cal}}\) must be disjoint from \(\mathcal{D}_{\text{train}}\cup\mathcal{D}_{\text{es}}\) to ensure the performance of the selected model \(\hat{M}_{\text{es}}\) on the calibration data gives us an unbiased preview of its future performance at test time.
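For concreteness, the calibration step of this standard pipeline can be sketched as follows for regression with absolute-residual scores. The stand-in model and data are illustrative assumptions; the finite-sample quantile correction is the usual split-conformal one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Split-conformal calibration of a fixed (already early-stopped) model mu_hat,
# using a calibration set disjoint from training and early-stopping data.
mu_hat = lambda x: 2.0 * x                       # stand-in for the trained model
x_cal = rng.normal(size=500)
y_cal = 2.0 * x_cal + rng.normal(scale=0.5, size=500)

alpha = 0.1
scores = np.abs(y_cal - mu_hat(x_cal))           # nonconformity scores
n = len(scores)
# Finite-sample-corrected quantile level: ceil((n+1)(1-alpha)) / n.
level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
q = np.quantile(scores, level, method="higher")

x_test = 1.3
interval = (mu_hat(x_test) - q, mu_hat(x_test) + q)   # 90% prediction interval
```

The resulting interval covers the test response with probability at least \(1-\alpha\) marginally, provided calibration and test samples are exchangeable.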
### Preview of our contribution
This paper develops a novel method to jointly carry out both early stopping and conformal inference using a single hold-out data set, denoted in the following as \(\mathcal{D}_{\text{es-cal}}\). The advantage of this approach is that it allows more samples to be allocated to \(\mathcal{D}_{\text{train}}\). This is not a straightforward problem. For example, one cannot naively apply standard conformal inference methods using the same hold-out set \(\mathcal{D}_{\text{es-cal}}\) previously used for early stopping, as detailed in Appendix A1. In that case, the early stopping decision would invalidate the conformal inferences by breaking the exchangeability between the calibration data and the test point, as the latter is not used to select the predictive model. As explained in Appendix A1.2, it is possible to correct conformal inferences obtained with this naive approach by adjusting the nominal coverage level conservatively, leveraging suitable concentration inequalities [38]. However, such theoretical corrections tend to be overly pessimistic in practice and may often be too conservative to be useful; this is demonstrated by the numerical experiments described in Section 3 and previewed here in Figure 2.
By contrast, the CES method proposed in this paper is based on the following idea inspired by Liang, Sesia, and Sun [25]. Valid conformal inferences can be obtained by calibrating \(\hat{M}_{\text{es}}\) using the same data set \(\mathcal{D}_{\text{es-cal}}\) used for model selection, as long as the test sample \(Z_{n+1}\) is also involved in the early stopping rule exchangeably with all other samples in \(\mathcal{D}_{\text{es-cal}}\). This concept, illustrated schematically in Figure 1 (b), is not straightforward to translate into a practical method, for two reasons. First, the ground truth for the test point (i.e., its outlier status or its outcome label) is unknown. Second, the method may need to be repeatedly applied to a large number of distinct test points in a computationally efficient way, and one cannot re-train the model separately for each test point. In the next section, we will explain how to overcome these challenges in the special case of early stopping for outlier detection; then, the solution will be extended to the classification and regression settings.
Figure 1: Conformal inference for models trained with early stopping. (a) Conventional pipeline requiring a three-way sample split. (b) Conformalized early stopping, requiring a two-way split.
### Conformalized early stopping for outlier detection
Consider testing whether \(Z_{n+1}\) is an _inlier_, in the sense that it was sampled from \(P_{Z}\) exchangeably with the data in \(\mathcal{D}\). Following the notation of Section 2.1, consider a partition of \(\mathcal{D}\) into two subsets, \(\mathcal{D}_{\text{train}}\) and \(\mathcal{D}_{\text{es-cal}}\), chosen at random independently of everything else, such that \(\mathcal{D}=\mathcal{D}_{\text{train}}\cup\mathcal{D}_{\text{es-cal}}\). The first step of CES consists of training a deep one-class classifier \(M\) using the data in \(\mathcal{D}_{\text{train}}\) via stochastic gradient descent for \(t^{\text{max}}\) epochs, storing all parameters characterizing the intermediate model every \(\tau\) epochs. We refer to \(\tau\in[t^{\text{max}}]\) as the _storage period_, a parameter pre-defined by the user. Intuitively, a smaller \(\tau\) increases the memory cost of CES but may also lead to the selection of a more accurate model. While the memory cost of this approach is higher compared to that of standard early-stopping training techniques, which only require storing one model at a time, it is not prohibitively expensive. In fact, the candidate models do not need to be kept in precious RAM memory but can be stored on a relatively cheap hard drive. As reasonable choices of \(\tau\) typically correspond to \(T=\lfloor t^{\text{max}}/\tau\rfloor\approx 100\) stored models, the cost of CES is not excessive in many real-world situations. For example, it takes approximately 100 MB to store a pre-trained standard ResNet50 computer vision model, implying that CES would require approximately 10 GB of storage in such applications--today this costs less than $0.25/month in the cloud.
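The pre-training-with-checkpoints phase can be sketched as below. The toy one-parameter model is an illustrative assumption, and in practice each checkpoint would be written to disk rather than kept in a list.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of the CES pre-training phase: run t_max epochs of gradient descent
# and store a snapshot of the model every tau epochs.
x_tr = rng.normal(size=200); y_tr = 2.0 * x_tr + rng.normal(scale=0.5, size=200)

t_max, tau, lr = 100, 10, 0.05
w = 0.0
checkpoints = []                                  # the T = t_max // tau candidates
for epoch in range(1, t_max + 1):
    grad = -2.0 * np.mean((y_tr - w * x_tr) * x_tr)
    w -= lr * grad
    if epoch % tau == 0:
        checkpoints.append(w)                     # store candidate model M_{t_j}

T = len(checkpoints)
```

The stored list plays the role of the candidate models \(M_{t_{1}},\ldots,M_{t_{T}}\) among which CES later selects.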
After pre-training and storing \(T\) candidate models, namely \(M_{t_{1}},\ldots,M_{t_{T}}\) for some sub-sequence \((t_{1},\ldots,t_{T})\) of \([t^{\text{max}}]\), the next step is to select the appropriate early-stopped model based on the hold-out data in \(\mathcal{D}_{\text{es-cal}}\) as well as the test point \(Z_{n+1}\). Following the notation of Section 2.1, define the value of the one-class classification loss \(\mathcal{L}\) for model \(M_{t}\), for any \(t\in[T]\), evaluated on \(\mathcal{D}_{\text{es-cal}}\) as: \(\mathcal{L}_{\text{es-cal}}(M_{t})=\sum_{i\in\mathcal{D}_{\text{es-cal}}} \ell(M_{t};Z_{i})\). Further, for any \(z\in\mathcal{Z}\), define also \(\mathcal{L}_{\text{es-cal}}^{+1}(M_{t},z)\) as:
\[\mathcal{L}_{\text{es-cal}}^{+1}(M_{t},z)=\mathcal{L}_{\text{es-cal}}(M_{t}) +\ell(M_{t};z). \tag{1}\]
Therefore, \(\mathcal{L}_{\text{es-cal}}^{+1}(M_{t},Z_{n+1})\) can be interpreted as the cumulative value of the loss function calculated on an augmented hold-out data set including also \(Z_{n+1}\). Then, we select the model \(\hat{M}_{\text{ces}}(Z_{n+1})\)
Figure 2: Average performance, as a function of the sample size, of conformal inferences based on neural networks trained and calibrated with different methods, on the _bio_ regression data [42]. Ideally, the coverage of the conformal prediction intervals should be close to 90% and their width should be small. All methods shown here guarantee 90% marginal coverage.
minimizing \(\mathcal{L}_{\text{es-cal}}^{+1}(M_{t},Z_{n+1})\):

\[\hat{M}_{\text{ces}}(Z_{n+1})=\operatorname*{arg\,min}_{M_{t_{j}}\;:\,1\leq j\leq T}\mathcal{L}_{\text{es-cal}}^{+1}(M_{t_{j}},Z_{n+1}). \tag{2}\]
Note that the computational cost of evaluating (2) is negligible compared to that of training the models.
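A minimal sketch of the selection rule (2): for each stored candidate, add the test point's loss to the cumulative hold-out loss and take the argmin. The toy candidate models and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# CES selection rule (2): among stored candidates, pick the model minimizing
# the hold-out loss augmented with the test point's own loss.
models = [0.5, 1.5, 2.0, 2.4]                    # toy candidates M_{t_1..t_T} (slopes)
x_ec = rng.normal(size=300)                      # D_es-cal features
y_ec = 2.0 * x_ec + rng.normal(scale=0.5, size=300)

def ell(w, x, y):                                # per-sample loss ell(M; z)
    return (y - w * x) ** 2

def select_ces(z_x, z_y):
    # L^{+1}_es-cal(M_t, z) = sum over D_es-cal of ell + ell at the test point
    losses = [float(np.sum(ell(w, x_ec, y_ec)) + ell(w, z_x, z_y))
              for w in models]
    return models[int(np.argmin(losses))]

w_hat = select_ces(0.7, 1.4)                     # selected model for this test point
```

Since the hold-out sum dominates the single test-point term, different test points rarely select different models, yet including the test point is what restores exchangeability.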
Next, the selected model \(\hat{M}_{\text{ces}}(Z_{n+1})\) is utilized to compute a conformal p-value [24] to test whether \(Z_{n+1}\) is an inlier. In particular, \(\hat{M}_{\text{ces}}(Z_{n+1})\) is utilized to compute _nonconformity scores_ \(\hat{S}_{i}(Z_{n+1})\) for all samples \(i\in\mathcal{D}_{\text{es-cal}}\cup\{n+1\}\). These scores rank the observations in \(\mathcal{D}_{\text{es-cal}}\cup\{n+1\}\) based on how the one-class classifier \(\hat{M}_{\text{ces}}(Z_{n+1})\) perceives them to be similar to the training data; by convention, a smaller value of \(\hat{S}_{i}(Z_{n+1})\) suggests \(Z_{i}\) is more likely to be an outlier. Suitable scores are typically included in the output of standard one-class classification models, such as those provided by the Python library PyTorch. For simplicity, we assume all scores are almost-surely distinct; otherwise, ties can be broken at random by adding a small amount of independent noise. Then, the conformal p-value \(\hat{u}_{0}(Z_{n+1})\) is given by the usual formula:

\[\hat{u}_{0}(Z_{n+1})=\frac{1+|i\in\mathcal{D}_{\text{es-cal}}:\hat{S}_{i}\leq \hat{S}_{n+1}|}{1+|\mathcal{D}_{\text{es-cal}}|}, \tag{3}\]
making the dependence of \(\hat{S}_{i}\) on \(Z_{n+1}\) implicit in the interest of space. This method, outlined by Algorithm A4 in Appendix A2, gives p-values that are exactly valid in finite samples, in the sense that they are stochastically dominated by the uniform distribution under the null hypothesis. The only technical requirement is that the learning algorithm should be invariant to permutations of the training data, which is a mild and realistic assumption.
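The p-value in (3) is a one-liner once the scores are in hand. The sketch below is ours, with invented scores; it follows the convention stated above that a smaller score suggests an outlier.

```python
# Conformal p-value of Eq. (3). A test point whose score falls below most
# calibration scores receives a small p-value, flagging a likely outlier.
# The calibration scores are illustrative.

def conformal_pvalue(cal_scores, test_score):
    """p-value for the null hypothesis that the test point is an inlier."""
    n_smaller = sum(1 for s in cal_scores if s <= test_score)
    return (1 + n_smaller) / (1 + len(cal_scores))

cal = [0.9, 0.8, 0.7, 0.6]           # inlier calibration scores
p_out = conformal_pvalue(cal, 0.1)   # outlier-like test point -> 0.2
p_in = conformal_pvalue(cal, 0.95)   # inlier-like test point  -> 1.0
```

With only four calibration points the smallest attainable p-value is \(1/5\), which illustrates why the granularity of conformal p-values is limited by the calibration sample size.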
**Theorem 1**.: _Assume \(Z_{1},\ldots,Z_{n},Z_{n+1}\) are exchangeable random samples, and let \(\hat{u}_{0}(Z_{n+1})\) be the output of Algorithm A4, as given in (3). Suppose the underlying machine learning algorithm is invariant to permutations of the training data. Then, \(\mathbb{P}\left[\hat{u}_{0}(Z_{n+1})\leq\alpha\right]\leq\alpha\) for any \(\alpha\in(0,1)\)._
### 2.4 Conformalized early stopping for classification
The above CES method will now be extended to deal with \(K\)-class classification problems, for any \(K\geq 2\). Consider \(n\) exchangeable pairs of observations \((X_{i},Y_{i})\), for \(i\in\mathcal{D}=[n]\), and a test point \((X_{n+1},Y_{n+1})\) whose label \(Y_{n+1}\in[K]\) has not yet been observed. The goal is to construct an informative prediction set for \(Y_{n+1}\) given the observed features \(X_{n+1}\) and the rest of the data, assuming \((X_{n+1},Y_{n+1})\) is exchangeable with the observations indexed by \(\mathcal{D}\). An ideal goal would be to construct the smallest possible prediction set with guaranteed _feature-conditional coverage_ at level \(1-\alpha\), for any fixed \(\alpha\in(0,1)\). Formally, a prediction set \(\hat{C}_{\alpha}(X_{n+1})\subseteq[K]\) has feature-conditional coverage at level \(1-\alpha\) if \(\mathbb{P}[Y_{n+1}\in\hat{C}_{\alpha}(X_{n+1})\mid X_{n+1}=x]\geq 1-\alpha\), for any \(x\in\mathcal{X}\), where \(\mathcal{X}\) is the feature space. Unfortunately, perfect feature-conditional coverage is extremely difficult to achieve unless the feature space \(\mathcal{X}\) is very small. Therefore, in practice, one must be satisfied with obtaining relatively weaker guarantees, such as _label-conditional coverage_ and _marginal coverage_. Formally, \(\hat{C}_{\alpha}(X_{n+1})\) has \(1-\alpha\) label-conditional coverage if \(\mathbb{P}[Y_{n+1}\in\hat{C}_{\alpha}(X_{n+1})\mid Y_{n+1}=y]\geq 1-\alpha\), for any \(y\in[K]\), while marginal coverage corresponds to \(\mathbb{P}[Y_{n+1}\in\hat{C}_{\alpha}(X_{n+1})]\geq 1-\alpha\). Label-conditional coverage is stronger than marginal coverage, but both criteria are useful because the latter is easier to achieve with smaller (and hence more informative) prediction sets.
We begin by focusing on label-conditional coverage, as this follows most easily from the results of Section 2.3. This solution will be extended in Appendix A3 to target marginal coverage. The first step of CES consists of randomly splitting \(\mathcal{D}\) into two subsets, \(\mathcal{D}_{\text{train}}\) and \(\mathcal{D}_{\text{es-cal}}\), as in Section 2.3. The samples in \(\mathcal{D}_{\text{es-cal}}\) are further divided into subsets \(\mathcal{D}^{y}_{\text{es-cal}}\) with homogeneous labels; that is, \(\mathcal{D}^{y}_{\text{es-cal}}=\{i\in\mathcal{D}_{\text{es-cal}}:Y_{i}=y\}\) for each \(y\in[K]\). The data in \(\mathcal{D}_{\text{train}}\) are utilized to train a neural network classifier via stochastic gradient descent, storing the intermediate candidate models \(M_{t}\) after each \(\tau\) epochs. This is essentially the same approach as in Section 2.3, with the only difference being that the neural network is now designed to perform \(K\)-class classification rather than one-class classification. Therefore, this neural network should have a soft-max layer with \(K\) nodes near its output, whose values corresponding to an input data point with features \(x\) are denoted as \(\hat{\pi}_{y}(x)\), for all \(y\in[K]\). Intuitively, we will interpret \(\hat{\pi}_{y}(x)\) as approximating (possibly inaccurately) the true conditional data-generating distribution; i.e., \(\hat{\pi}_{y}(x)\approx\mathbb{P}\left[Y=y\mid X=x\right]\).
For any model \(M_{t}\), any \(x\in\mathcal{X}\), and any \(y\in[K]\), define the augmented loss \(\mathcal{L}^{+1}_{\text{es-cal}}(M_{t},x,y)\) as:
\[\mathcal{L}^{+1}_{\text{es-cal}}(M_{t},x,y)=\mathcal{L}_{\text{es-cal}}(M_{t} )+\ell(M_{t};x,y). \tag{4}\]
Concretely, a typical choice for \(\ell\) is the cross-entropy loss: \(\ell(M_{t};x,y)=-\log\hat{\pi}_{y}^{t}(x)\), where \(\hat{\pi}^{t}\) denotes the soft-max probability distribution estimated by model \(M_{t}\). Intuitively, \(\mathcal{L}^{+1}_{\text{es-cal}}(M_{t},x,y)\) is the cumulative value of the loss function calculated on an augmented hold-out data set including also the imaginary test sample \((x,y)\). Then, for any \(y\in[K]\), CES selects the model \(\hat{M}_{\text{ces}}(X_{n+1},y)\) minimizing \(\mathcal{L}^{+1}_{\text{es-cal}}(M_{t},X_{n+1},y)\) among the \(T\) stored models:
\[\hat{M}_{\text{ces}}(X_{n+1},y)=\operatorname*{arg\,min}_{M_{t_{j}}:1\leq j \leq T}\mathcal{L}^{+1}_{\text{es-cal}}(M_{t_{j}},X_{n+1},y). \tag{5}\]
The selected model \(\hat{M}_{\text{ces}}(X_{n+1},y)\) is then utilized to compute a conformal p-value for testing whether \(Y_{n+1}=y\). In particular, we compute nonconformity scores \(\hat{S}^{y}_{i}(X_{n+1})\) for all \(i\in\mathcal{D}^{y}_{\text{es-cal}}\cup\{n+1\}\), imagining that \(Y_{n+1}=y\). Different types of nonconformity scores can be easily accommodated, but in this paper, we follow the _adaptive_ strategy of Romano, Sesia, and Candes [11]. The computation of these nonconformity scores based on the selected model \(\hat{M}_{\text{ces}}\) is reviewed in Appendix A4. Here, we simply note the p-value is given by:
\[\hat{u}_{y}(X_{n+1})=\frac{1+|i\in\mathcal{D}^{y}_{\text{es-cal}}:\hat{S}^{y} _{i}\leq\hat{S}^{y}_{n+1}|}{1+|\mathcal{D}^{y}_{\text{es-cal}}|}, \tag{6}\]
again making the dependence of \(\hat{S}^{y}_{i}\) on \(X_{n+1}\) implicit. Finally, the prediction set \(\hat{C}_{\alpha}(X_{n+1})\) is constructed by including all possible labels for which the corresponding null hypothesis cannot be rejected at level \(\alpha\):
\[\hat{C}_{\alpha}(X_{n+1})=\{y\in[K]:\hat{u}_{y}(X_{n+1})\geq\alpha\}\,. \tag{7}\]
This method, outlined by Algorithm A5 in Appendix A2, guarantees label-conditional coverage at level \(1-\alpha\).
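To make the construction in (5)-(7) concrete, here is a self-contained sketch in which the stored checkpoints and the adaptive scores of [11] are replaced by a toy per-label score (smaller means less conforming, matching the direction of the p-value in (6)). The class centers and calibration points are invented; this is an illustration, not the paper's implementation.

```python
# Sketch of the label-conditional prediction set (Eqs. 5-7). The per-label
# model selection and the adaptive nonconformity scores are folded into a
# fixed toy score s = -|x - center_y|, purely for illustration.

def prediction_set(x_test, cal_by_label, score, alpha):
    out = set()
    for y, cal_xs in cal_by_label.items():
        s_test = score(x_test, y)
        s_cal = [score(xc, y) for xc in cal_xs]
        p = (1 + sum(s <= s_test for s in s_cal)) / (1 + len(s_cal))  # Eq. (6)
        if p >= alpha:                                                # Eq. (7)
            out.add(y)
    return out

centers = {0: 0.0, 1: 5.0}                 # illustrative class centers
score = lambda x, y: -abs(x - centers[y])  # stand-in nonconformity score
cal = {0: [-1.0, 0.5, 1.0, -0.5], 1: [4.0, 5.5, 6.0, 4.5]}
pset = prediction_set(0.2, cal, score, alpha=0.25)   # -> {0}
```

A test feature near the center of class 0 conforms well with the class-0 calibration points (large p-value) and poorly with class 1, so only label 0 enters the set.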
**Theorem 2**.: _Assume \((X_{1},Y_{1}),\ldots,(X_{n+1},Y_{n+1})\) are exchangeable, and let \(\hat{C}_{\alpha}(X_{n+1})\) be the output of Algorithm A5, as given in (7), for any given \(\alpha\in(0,1)\). Suppose the underlying machine learning algorithm is invariant to permutations of the training data. Then, \(\mathbb{P}[Y_{n+1}\in\hat{C}_{\alpha}(X_{n+1})\mid Y_{n+1}=y]\geq 1-\alpha\) for any \(y\in[K]\)._
### 2.5 Conformalized early stopping for regression
This section extends CES to regression problems with a continuous outcome. As in the previous sections, consider a data set containing \(n\) exchangeable observations \((X_{i},Y_{i})\), for \(i\in\mathcal{D}=[n]\), and a test point \((X_{n+1},Y_{n+1})\) with a latent label \(Y_{n+1}\in\mathbb{R}\). The goal is to construct a reasonably narrow _prediction interval_\(\hat{C}_{\alpha}(X_{n+1})\) for \(Y_{n+1}\) that is guaranteed to have marginal coverage above some level \(1-\alpha\), i.e., \(\mathbb{P}[Y_{n+1}\in\hat{C}_{\alpha}(X_{n+1})]\geq 1-\alpha\), and can also practically achieve reasonably high feature-conditional coverage. Developing a CES method for this problem is more difficult compared to the classification case studied in Section 2.4 due to the infinite number of possible values for \(Y_{n+1}\). In fact, a naive extension of Algorithm A5 would be computationally unfeasible in the regression setting, for the same reason why full-conformal prediction [8] is generally impractical. The novel solution described below is designed to leverage the particular structure of an early stopping criterion based on the squared-error loss evaluated on hold-out data. Focusing on the squared-error loss makes CES easier to implement and explain using classical _absolute residual_ nonconformity scores [8, 43]. However, similar ideas could also be repurposed to accommodate other scores, such as those based on quantile regression [29], conditional distributions [44, 45], or conditional histograms [46].
As usual, we randomly split \(\mathcal{D}\) into \(\mathcal{D}_{\text{train}}\) and \(\mathcal{D}_{\text{es-cal}}\). The data in \(\mathcal{D}_{\text{train}}\) are utilized to train a neural network via stochastic gradient descent, storing the intermediate models \(M_{t}\) after each \(\tau\) epochs. The approach is similar to those in Sections 2.3-2.4, although now the output of a model \(M_{t}\) applied to a sample with features \(x\) is denoted by \(\hat{\mu}_{t}(x)\) and is designed to approximate (possibly inaccurately) the conditional mean of the unknown data-generating distribution; i.e., \(\hat{\mu}_{t}(x)\approx\mathbb{E}\left[Y\mid X=x\right]\). (Note that we will omit the superscript \(t\) unless necessary to avoid ambiguity). For any model \(M_{t}\), any \(x\in\mathcal{X}\), and any \(y\in\mathbb{R}\), define
\[\mathcal{L}_{\text{es-cal}}^{+1}(M_{t},x,y)=\mathcal{L}_{\text{es-cal}}(M_{t })+[y-\hat{\mu}_{t}(x)]^{2}. \tag{8}\]
Consider now the following optimization problem,
\[\hat{M}_{\text{ces}}(X_{n+1},y)=\operatorname*{arg\,min}_{M_{t_{j}}\,:\,1\leq j \leq T}\mathcal{L}_{\text{es-cal}}^{+1}(M_{t_{j}},X_{n+1},y), \tag{9}\]
which can be solved simultaneously for all \(y\in\mathbb{R}\) thanks to the amenable form of (8). In fact, each \(\mathcal{L}_{\text{es-cal}}^{+1}(M_{t},x,y)\) is a simple quadratic function of \(y\); see the sketch in Figure 3. This implies \(\hat{M}_{\text{ces}}(X_{n+1},y)\) is a step function, whose parameters can be computed at cost \(\mathcal{O}(T\log T)\) with an efficient divide-and-conquer algorithm designed to find the lower envelope of a family of parabolas [47, 48]; see Appendix A5.
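The structure of (8)-(9) can be seen with a brute-force stand-in for the lower-envelope computation: each stored model \(t\) contributes the parabola \(\mathcal{L}_{\text{es-cal}}(M_{t})+(y-\hat{\mu}_{t}(X_{n+1}))^{2}\), and the model selected at a given \(y\) is whichever parabola is lowest there. The numbers below are made up for illustration.

```python
# Brute-force illustration of Eqs. (8)-(9): for a placeholder outcome y,
# pick the model whose parabola base_loss[t] + (y - mu[t])**2 is lowest.
# (The paper computes the whole step function at once with an O(T log T)
# lower-envelope algorithm rather than evaluating y by y.)

def ces_choice(y, base_loss, mu):
    vals = [b + (y - m) ** 2 for b, m in zip(base_loss, mu)]
    return vals.index(min(vals))

base_loss = [0.0, 1.0]   # hold-out losses of two stored models (illustrative)
mu = [0.0, 2.0]          # their predictions at X_{n+1} (illustrative)

# f_0(y) = y^2 and f_1(y) = 1 + (y - 2)^2 cross at y = 1.25, so the induced
# step function has a single knot there: model 0 below it, model 1 above.
```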
Therefore, \(\hat{M}_{\text{ces}}(X_{n+1},y)\) has \(L\) distinct steps, for some \(L=\mathcal{O}(T\log T)\) that may depend on \(X_{n+1}\), and it can be written as a function of \(y\) as:
\[\hat{M}_{\text{ces}}(X_{n+1},y)=\sum_{l=1}^{L}m_{l}(X_{n+1})\mathbbm{1}\left[y \in(k_{l-1},k_{l}]\right], \tag{10}\]
where \(m_{l}(X_{n+1})\in[T]\) represents the best model selected within the interval \((k_{l-1},k_{l}]\), with \(m_{l}(X_{n+1})\neq m_{l-1}(X_{n+1})\) for all \(l\in\{2,\ldots,L\}\). Above, \(k_{1}\leq k_{2}\leq\cdots\leq k_{L-1}\) denote the _knots_ of \(\hat{M}_{\text{ces}}(X_{n+1},y)\), which also depend on \(X_{n+1}\) and are defined as the boundaries in the domain of \(y\) between each consecutive pair of steps, with the understanding that \(k_{0}=-\infty\) and \(k_{L}=+\infty\).
Then, for each step \(l\in[L]\), let \(\mathcal{B}_{l}\) indicate the interval \(\mathcal{B}_{l}=(k_{l-1},k_{l}]\) and, for all \(i\in\mathcal{D}_{\text{es-cal}}\), evaluate the nonconformity score \(\hat{S}_{i}(X_{n+1},\mathcal{B}_{l})\) for observation \((X_{i},Y_{i})\) based on the regression model indicated by \(m_{l}(X_{n+1})\); i.e.,
\[\hat{S}_{i}(X_{n+1},\mathcal{B}_{l})=|Y_{i}-\hat{\mu}_{m_{l}(X_{n+1})}(X_{i})|. \tag{11}\]
Let \(\hat{Q}_{1-\alpha}(X_{n+1},\mathcal{B}_{l})\) denote the \(\lceil(1-\alpha)(1+|\mathcal{D}_{\text{es-cal}}|)\rceil\)-th smallest value among all nonconformity scores \(\hat{S}_{i}(X_{n+1},\mathcal{B}_{l})\), assuming for simplicity that there are no ties; otherwise, ties can be broken at random. Then, define the interval \(\hat{C}_{\alpha}(X_{n+1},\mathcal{B}_{l})\) as that obtained by applying the standard conformal prediction method with absolute residual scores based on the regression model \(\hat{\mu}_{m_{l}(X_{n+1})}(X_{n+1})\):
\[\hat{C}_{\alpha}(X_{n+1},\mathcal{B}_{l})=\hat{\mu}_{m_{l}(X_{n+1})}(X_{n+1}) \pm\hat{Q}_{1-\alpha}(X_{n+1},\mathcal{B}_{l}). \tag{12}\]
Finally, the prediction interval \(\hat{C}_{\alpha}(X_{n+1})\) is given by:
\[\hat{C}_{\alpha}(X_{n+1})=\text{Convex}\left(\cup_{l=1}^{L}\{\mathcal{B}_{l} \cap\hat{C}_{\alpha}(X_{n+1},\mathcal{B}_{l})\}\right), \tag{13}\]
where \(\text{Convex}(\cdot)\) denotes the convex hull of a set. This procedure is summarized in Algorithm A6 and it is guaranteed to produce prediction sets with valid marginal coverage.
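The assembly in (11)-(13) can be sketched compactly, with the per-step regression models stubbed as constant predictions; the bins, predictions, and residuals below are invented, and this is an illustration rather than the paper's code.

```python
import math

# Sketch of Eqs. (11)-(13): for each step l, a conformal interval around the
# selected model's prediction is intersected with the bin B_l, and the union
# of the surviving pieces is convexified into one interval.

def conformal_halfwidth(abs_residuals, alpha):
    """The ceil((1 - alpha)(1 + n))-th smallest absolute residual (Eq. 12)."""
    k = math.ceil((1 - alpha) * (1 + len(abs_residuals)))
    return sorted(abs_residuals)[k - 1]

def ces_interval(bins, preds, residuals_per_bin, alpha):
    pieces = []
    for (lo, hi), c, res in zip(bins, preds, residuals_per_bin):
        q = conformal_halfwidth(res, alpha)
        piece = (max(lo, c - q), min(hi, c + q))   # B_l ∩ C_alpha(X, B_l)
        if piece[0] < piece[1]:                    # keep non-empty pieces
            pieces.append(piece)
    # convex hull of a union of intervals on the real line
    return (min(p[0] for p in pieces), max(p[1] for p in pieces))

bins = [(-10.0, 1.0), (1.0, 10.0)]                  # steps B_1, B_2
preds = [0.0, 2.0]                                  # mu_{m_l}(X_{n+1}) per step
res = [[0.1, 0.2, 0.3, 0.4], [0.1, 0.2, 0.3, 0.4]]  # hold-out |residuals|
interval = ces_interval(bins, preds, res, alpha=0.25)
```

Here each step yields half-width 0.4, so the two pieces are \((-0.4,0.4)\) and \((1.6,2.4)\), and the convex hull returns \((-0.4,2.4)\); the hull step is what bridges the gap between non-adjacent pieces.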
**Theorem 3**.: _Assume \((X_{1},Y_{1}),\ldots,(X_{n+1},Y_{n+1})\) are exchangeable, and let \(\hat{C}_{\alpha}(X_{n+1})\) be the output of Algorithm A6, as given by (13), for any given \(\alpha\in(0,1)\). Suppose the underlying machine learning algorithm is invariant to permutations of the training data. Then, \(\mathbb{P}[Y_{n+1}\in\hat{C}_{\alpha}(X_{n+1})]\geq 1-\alpha\)._
The intuition behind the above method is as follows. Each intermediate interval \(\hat{C}_{\alpha}(X_{n+1},\mathcal{B}_{l})\), for \(l\in[L]\), may be thought of as being computed by applying, under the null hypothesis that \(Y_{n+1}\in\mathcal{B}_{l}\), the classification method from Section 2.4 for a discretized version of our problem based on the partition \(\{\mathcal{B}_{l}\}_{l=1}^{L}\). Then, leveraging the classical duality between confidence intervals and p-values, it becomes clear that taking the intersection of \(\mathcal{B}_{l}\) and \(\hat{C}_{\alpha}(X_{n+1},\mathcal{B}_{l})\) essentially amounts to including the "label" \(\mathcal{B}_{l}\) in the output prediction if the null hypothesis \(Y_{n+1}\in\mathcal{B}_{l}\) cannot be rejected. The purpose of the final convex hull operation is to ensure that CES outputs a contiguous prediction interval, which is what we originally set out to construct.

Figure 3: Squared-error loss on test-augmented hold-out data for three alternative regression models \(M_{1},M_{2}\) and \(M_{3}\), as a function of the place-holder outcome \(y\) for the test point. The CES method utilizes the best model for each possible value of \(y\), which is identified by the lower envelope of these three parabolas. In this case, the lower envelope has two knots, \(k_{1}\) and \(k_{3}\).
Although it is unlikely, Algorithm A6 may sometimes produce an empty set, which is an uninformative and potentially confusing output. A simple solution consists of replacing any empty output with the naive conformal prediction interval computed by Algorithm A3 in Appendix A1, which leverages an early-stopped model selected by looking at the original calibration data set without the test point. This approach is outlined by Algorithm A9 in Appendix A6. As the intervals given by Algorithm A9 always contain those output by Algorithm A6, it follows that Algorithm A9 also enjoys guaranteed coverage; see Corollary A3.
## 3 Numerical experiments
### 3.1 Outlier detection
The use of CES for outlier detection is demonstrated using the _CIFAR10_ data set [49], a collection of 60,000 32-by-32 RGB images from 10 classes including common objects and animals. A convolutional neural network with ReLU activation functions is trained on a subset of the data to minimize the cross-entropy loss. The maximum number of epochs is set to be equal to 50. The trained classification model is then utilized to compute conformity scores for outlier detection with the convention that cats are inliers and the other classes are outliers. In particular, a nonconformity score for each \(Z_{n+1}\) is defined as 1 minus the output of the soft-max layer corresponding to the label "cat". This can be interpreted as an estimated probability of \(Z_{n+1}\) being an outlier. After translating these scores into a conformal p-value \(\hat{u}_{0}(Z_{n+1})\), the null hypothesis that \(Z_{n+1}\) is a cat is rejected if \(\hat{u}_{0}(Z_{n+1})\leq\alpha=0.1\).
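The decision rule of this experiment can be sketched as follows. Note that, because the score here is defined as one minus the soft-max probability of "cat", a larger score suggests an outlier, which is the opposite of the convention in Section 2.3; the p-value therefore counts calibration scores at least as large as the test score. The probabilities below are invented for illustration.

```python
# Toy version of the outlier-detection rule in this experiment: score each
# image as 1 minus the soft-max probability of "cat", compare the test score
# against inlier calibration scores, and reject "inlier" when p <= alpha.

def is_outlier(cal_cat_probs, test_cat_prob, alpha=0.1):
    s_cal = [1.0 - p for p in cal_cat_probs]       # scores of calibration cats
    s_test = 1.0 - test_cat_prob
    # larger score = more outlier-like, so count calibration scores >= s_test
    pval = (1 + sum(s >= s_test for s in s_cal)) / (1 + len(s_cal))
    return pval <= alpha

cal = [0.9, 0.8, 0.85, 0.7, 0.95, 0.75, 0.88, 0.92, 0.8]  # cats' P(cat)
flagged = is_outlier(cal, test_cat_prob=0.05)   # confident non-cat
kept = is_outlier(cal, test_cat_prob=0.82)      # cat-like image
```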
The total number of samples utilized for training, early stopping, and conformal calibration is varied between 500 and 2000. In each case, CES is applied using 75% of the samples for training and 25% for early stopping and calibration. Note that the calibration step only utilizes inliers, while the other data subsets also contain outliers. The empirical performance of CES is measured in terms of the probability of falsely rejecting a true null hypothesis--the false positive rate (FPR)--and the probability of correctly rejecting a false null hypothesis--the true positive rate (TPR). The CES method is compared to three benchmarks. The first benchmark is naive early stopping with the best (_hybrid_) theoretical correction for the nominal coverage level described in Appendix A1.2. The second benchmark is early stopping based on data splitting, which utilizes 50% of the available samples for training, 25% for early stopping, and 25% for calibration. The third benchmark is full training without early stopping, which simply selects the model obtained after the last epoch. The test set consists of 100 independent test images, half of which are outliers. All results are averaged over 100 trials based on independent data subsets.
Figure 4 summarizes the performance of the four methods as a function of the total sample size; see Table A1 in Appendix A7 for the corresponding standard errors. All methods control the FPR below 10%, as expected, but CES achieves the highest TPR. The increased power of CES compared to data splitting is not surprising, as the latter relies on a less accurate model trained on less data. By contrast, the naive benchmark trains a model more similar to that of CES, but its TPR is not as high because the theoretical correction for the naive conformal p-values is overly pessimistic. Finally, full training is the least powerful competitor for large sample sizes because its underlying model becomes more and more overconfident as the training set grows.
### 3.2 Multi-class classification
The same _CIFAR10_ data [49] are utilized here to demonstrate the performance of CES for a 10-class classification task. These experiments are conducted similarly to those in Section 3.1. The only difference is that now the soft-max output of the convolutional neural network is translated into conformal prediction sets, as explained in Appendix A4, instead of conformal p-values. The CES method is compared to the same three benchmarks adopted in Section 3.1. All prediction sets are calibrated to guarantee 90% marginal coverage, and their performances are evaluated based on cardinality.
Figure 5 summarizes the results averaged over 100 independent realizations of these experiments, while Table A2 in Appendix A7 reports on the corresponding standard errors. While all approaches
Figure 4: Average performance, as a function of the sample size, of conformal inferences for outlier detection based on neural networks trained and calibrated with different methods, on the _CIFAR10_ data [49]. Ideally, the TPR should be as large as possible while maintaining the FPR below 0.1.
Figure 5: Average performance, as a function of the sample size, of conformal prediction sets for multi-class classification based on neural networks trained and calibrated with different methods, on the _CIFAR10_ data [49]. Ideally, the coverage should be close to 90% and the cardinality should be small.
always achieve the nominal coverage level, the CES method is able to do so with the smallest, and hence most informative, prediction sets. As before, the more disappointing performance of the data splitting benchmark can be explained by the more limited amount of data available for training, that of the naive benchmark by the excessive conservativeness of its theoretical correction, and that of the full training benchmark by overfitting.
### 3.3 Regression
We now apply the CES method to the following 3 public-domain regression data sets from the UCI Machine Learning repository [50]: physicochemical properties of protein tertiary structure (_bio_) [42], hourly and daily counts of rental bikes (_bike_) [51], and concrete compressive strength (_concrete_) [52]. These data sets were previously also considered by Romano, Patterson, and Candes [29], to which we refer for further details. As in the previous sections, we compare CES to the usual three benchmarks: naive early stopping with the _hybrid_ theoretical correction for the nominal coverage level, early stopping based on data splitting, and full model training without early stopping. All methods utilize the same neural network with two hidden layers of width 128 and ReLU activation functions, trained for up to 1000 epochs. The models are calibrated in such a way as to produce conformal prediction sets with guaranteed 90% marginal coverage for a test set of 100 independent data points. The total sample size available for training, early stopping and calibration is varied between 200 and 2000. These data are allocated for specific training, early-stopping, and calibration operations as in Sections 3.1-3.2. The performance of each method is measured in terms of marginal coverage, worst-slab conditional coverage [53]--estimated as described in Sesia and Candes [14]--and average width of the prediction intervals. All results are averaged over 100 independent experiments, each based on a different random sample from the original raw data sets.
Figure 2 summarizes the performance of the four alternative methods on the _bio_ data, as a function of the total sample size; see Table A8 in Appendix A7 for the corresponding standard errors. These results show that all methods reach 90% marginal coverage in practice, as anticipated by the mathematical guarantees, although the theoretical correction for the naive early stopping method appears to be overly conservative. The CES method clearly performs best, in the sense that it leads to the shortest prediction intervals while also achieving approximately valid conditional coverage. By contrast, the conformal prediction intervals obtained without early stopping have significantly lower conditional coverage, which is consistent with the prior intuition that fully trained neural networks can sometimes suffer from overfitting. More detailed results from these experiments can be found in Table A3 in Appendix A7. Analogous results corresponding to the _bike_ and _concrete_ data sets can be found in Figures A8-A9 and Tables A4-A5 in Appendix A7.
Finally, it must be noted that the widths of the prediction intervals output by the CES method in these experiments are very similar to those of the corresponding intervals produced by naively applying early stopping without data splitting and without the theoretical correction described in Appendix A1. This naive approach was not taken as a benchmark because it does not guarantee valid coverage, unlike the other methods. Nonetheless, it is interesting to note that the rigorous theoretical properties of the CES method do not come at the expense of a significant loss of power compared to this very aggressive heuristic, and in this sense, one may say that the conformal inferences computed by CES are "almost free".
## 4 Discussion
This paper has focused on early stopping and conformal calibration because these are two popular techniques designed to mitigate overconfidence that were previously combined inefficiently, but the relevance of our methodology is much more general. In fact, similar ideas have already been utilized in the context of outlier detection to tune hyper-parameters and select the most promising candidate from an arbitrary toolbox of machine learning models [25]. The techniques developed in this paper, however, also allow one to calibrate, without further data splits, the most promising model selected in a data-driven way from an arbitrary machine learning toolbox in the context of multi-class classification and regression.
As detailed in Appendix A1, the naive benchmark that uses the same hold-out data twice, both for standard early stopping and standard conformal calibration, is not theoretically valid without conservative corrections. Nonetheless, we have observed this naive approach often performs similarly to CES in practice. Of course, the naive benchmark may sometimes fail, and thus we would advise practitioners to apply the theoretically principled CES whenever its additional memory costs are not prohibitive. However, the empirical evidence suggests the naive benchmark may not be a completely unreasonable heuristic when CES is not applicable.
Finally, it is worth noting that CES for regression was implemented in this paper using classical nonconformity scores [8, 43] that are not designed to deal efficiently with heteroscedastic data [14]. However, the general idea can be extended to accommodate virtually any other type of nonconformity score, including existing options based on quantile regression [29], conditional distributions [44, 45], or conditional histograms [46]. The reason why this paper has focused on the classical absolute residual scores is that they are more intuitive to apply in conjunction with an early stopping criterion based on the squared-error loss. In the future, it would be interesting to extend the CES method to accommodate early stopping criteria based on alternative loss functions, including for example the pinball loss for quantile regression.
Software implementing the algorithms and data experiments are available online at [https://github.com/ZiyiLiang/Conformalized_early_stopping](https://github.com/ZiyiLiang/Conformalized_early_stopping).
### Acknowledgements
The authors thank the Center for Advanced Research Computing at the University of Southern California for providing computing resources to carry out numerical experiments. M. S. and Y. Z. are supported by NSF grant DMS 2210637. M. S. is also supported by an Amazon Research Award.
|
2310.17100 | Network Design through Graph Neural Networks: Identifying Challenges and
Improving Performance | Graph Neural Network (GNN) research has produced strategies to modify a
graph's edges using gradients from a trained GNN, with the goal of network
design. However, the factors which govern gradient-based editing are
understudied, obscuring why edges are chosen and if edits are grounded in an
edge's importance. Thus, we begin by analyzing the gradient computation in
previous works, elucidating the factors that influence edits and highlighting
the potential over-reliance on structural properties. Specifically, we find
that edges can achieve high gradients due to structural biases, rather than
importance, leading to erroneous edits when the factors are unrelated to the
design task. To improve editing, we propose ORE, an iterative editing method
that (a) edits the highest scoring edges and (b) re-embeds the edited graph to
refresh gradients, leading to less biased edge choices. We empirically study
ORE through a set of proposed design tasks, each with an external validation
method, demonstrating that ORE improves upon previous methods by up to 50%. | Donald Loveland, Rajmonda Caceres | 2023-10-26T01:45:20Z | http://arxiv.org/abs/2310.17100v1 | # Network Design through Graph Neural Networks: Identifying Challenges and Improving Performance
###### Abstract
Graph Neural Network (GNN) research has produced strategies to modify a graph's edges using gradients from a trained GNN, with the goal of network design. However, the factors which govern gradient-based editing are understudied, obscuring _why_ edges are chosen and _if_ edits are grounded in an edge's importance. Thus, we begin by analyzing the gradient computation in previous works, elucidating the factors that influence edits and highlighting the potential over-reliance on structural properties. Specifically, we find that edges can achieve high gradients due to structural biases, rather than importance, leading to erroneous edits when the factors are unrelated to the design task. To improve editing, we propose **ORE**, an iterative editing method that (a) edits the highest scoring edges and (b) re-embeds the edited graph to refresh gradients, leading to less biased edge choices. We empirically study ORE through a set of proposed design tasks, each with an external validation method, demonstrating that ORE improves upon previous methods by up to 50%.
Keywords:Graph Neural Network, Network Design, Graph Editing
## 1 Introduction
Learning over graphs has become paramount in machine learning applications where the data possesses a connective structure, such as social networks [7], chemistry [8], and finance [25]. Fortunately, the field of graph mining has provided methods to extract useful information from graphs, albeit often needing heavy domain guidance [18]. The advent of graph neural networks (GNNs), a neural network generalized to learn over graph structured data, has helped alleviate some of these requirements by learning representations that synthesize both node and structure information [8, 9, 13]. Complimentary to inference, recent work has proposed methods that edit and design network structures using gradients from a trained GNN [11, 17, 19], enabling the efficient optimization of downstream learning tasks [31] in cyber security [5, 15], urban planning [4], drug discovery [12], and more [14, 3, 16]. However, as gradient-based editing is applied more broadly, scrutinizing the conditions that allow for successful editing is critical. For instance, discerning the factors which influence gradient computation
is still unknown, making it unclear when proposed edits can be trusted. In addition, it is unknown if gradient quality is dependent on graph structure and GNN architecture, causing further concern for practical applications.
Focusing strictly on gradient-based edit quality, we analyze the common mask learning paradigm [11, 19, 20, 29], where a continuous scoring mask is learned over the edges in a graph. Specifically, we elucidate how structural factors, such as degree, neighborhood label composition, and edge-to-node distance (i.e., how far an edge is from a node) can influence the mask through the gradient. When these factors are not beneficial to the learning task, e.g. edge-to-node distance for a de-noising task when noise is uniformly-distributed across the graph, the learned mask can lead to erroneous edits. We additionally highlight how editing methods that rely on thresholding are more susceptible to such structural biases due to smoothing of the ground truth signal at the extreme values of the distribution. To improve editing, we propose a more fine-tuned sequential editing process, **ORE**, with two steps: (1) We **O**rder the edge scores and edit the top-\(k\) edges to prioritize high quality edges, and (2) we **R**e-embed the modified graph after the top-\(k\) edges have been **E**dited. These properties help prevent choosing edges near the expected mask value, and thus more likely to be based on irrelevant structural properties, as well as encourage edits that consider the influence of other removed edges with higher scores. We highlight the practical benefit of ORE by designing a systematic study that probes editing quality across a variety of common GNN tasks, graph structures, and architectures, demonstrating up to a 50% performance improvement for ORE over previous editing methods.
## 2 Related Work
Early network design solutions choose edits based on fixed heuristics, such as centrality scores [16] or triangle closing properties [14]. However, fixed heuristics generally require significant domain guidance and may not generalize to broader classes of networks and tasks. Reinforcement learning (RL) has enabled the ability to learn more flexible heuristics, such as in chemistry [30] and social networks [23]; however, RL can be prohibitively expensive due to data and computation requirements. To fulfill the need for efficient and flexible editing methods, gradient-based optimization has subsequently been applied to edge editing, facilitated through trained GNNs. Although computing gradients for edges can be infeasible given the discrete nature of the input network, previous methods have adopted a continuous relaxation of the edge set, operating on a soft edge scoring mask that can be binarized to recover the hard edge set [11, 19, 20, 24, 29]. In its simplest form, the gradient of an edge is approximated as the gradient of the score associated with that edge, with respect to a loss objective [29]. As this is dependent on the initialization of the scoring mask, GNNExplainer proposes to leverage multiple rounds of gradient descent over the mask to arrive at a final score, rather than use the gradient directly [29]. CF-GNNExplainer extends GNNExplainer by generating counterfactual instances and measuring the change in the downstream objective [19]. Both of these methods convert the
soft mask to a hard mask through fixed thresholding, which, when incorrectly chosen, can introduce noisy edits. Moreover, as mask learning is usually used to support broader objectives, such as robustness or explainability, studies fail to consider what conditions can inhibit the mask learning sub-component, instead focusing simply on the downstream objective. _Our work provides a direct analysis of mask quality through a systematic study across a wide array of tasks, GNNs, and topologies. We highlight that current mask-based editing methods can become susceptible to bias within the mask scores, prompting the development of ORE as a means of improving gradient-based edge editing_.
## 3 Notation
Let \(G=(V,E,\mathbf{X},\mathbf{Y})\) be a simple graph with nodes \(V\), edges \(E\), feature matrix \(\mathbf{X}\in\mathbb{R}^{|V|\times d}\) with \(d\) node features, and label matrix \(\mathbf{Y}\). \(\mathbf{Y}\in\{0,1\}^{|V|\times c}\) with \(c\) classes for node classification, \(\mathbf{Y}\in\mathbb{R}^{|V|}\) for node regression, and \(\mathbf{Y}\in\{0,1\}^{c}\) for graph classification. \(\mathbf{A}\in\{0,1\}^{|V|\times|V|}\) is the adjacency matrix of \(G\), where \(\mathbf{A}_{i,j}=1\) denotes an edge between nodes \(i\) and \(j\) in \(G\), otherwise \(\mathbf{A}_{i,j}=0\). While \(E\) and \(\mathbf{A}\) represent similar information, \(E\) is used when discussing edge sets and \(\mathbf{A}\) is used for matrix computations. Additionally, a \(k\)-hop neighborhood of a node \(i\in V\), \(N_{k}(i)\), denotes the nodes and edges that are reachable within \(k\) steps of \(i\). For simplicity, \(k\) is dropped when referring to the 1-hop neighborhood. We further denote \(||\mathbf{B}||_{1}\) as the L\({}^{1}\)-norm of a matrix \(\mathbf{B}\), \(G-e_{i}\) as the removal of an edge from \(G\), and \(G-i\) as the removal of a node from \(G\). For a \(k\)-layer GNN, learning is facilitated through message passing over \(k\)-hop neighborhoods of a graph [8]. A node \(i\)'s representations are updated by iteratively aggregating the features of nodes in \(i\)'s 1-hop neighborhood, denoted AGGR, and embedding the aggregated features with \(i\)'s features, usually through a non-linear transformation parameterized by a weight matrix \(\mathbf{W}\), denoted ENC. The update for node \(i\) is expressed as \(\mathbf{r}_{i}^{(l)}=\text{ENC}(\mathbf{r}_{i}^{(l-1)},\text{AGGR}(\mathbf{r }_{u}^{(l-1)},u\in N(i)))\) for \(l\in\{1,2,...,k\}\), where \(\mathbf{r}_{i}^{(0)}=\mathbf{x}_{i}\). The update function is applied \(k\) times, resulting in node representations that can be used to compute predictions.
For graph-level tasks, a readout function aggregates the final representation of all nodes into a single graph-level representation.
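As a concrete illustration of the update rule above, the following sketch runs \(k\) rounds of message passing in NumPy. The sum aggregation, ReLU encoder, and identity weight matrices are illustrative stand-ins for AGGR, ENC, and \(\mathbf{W}\), not the specific operators used by any particular GNN in this paper.

```python
import numpy as np

def message_passing(A, X, weights, k=2):
    """One GNN forward pass: k rounds of neighbor aggregation.

    A: (n, n) adjacency matrix, X: (n, d) node features,
    weights: list of k weight matrices. Sum aggregation (AGGR) and a
    ReLU combine step (ENC) are illustrative choices.
    """
    R = X
    for l in range(k):
        aggregated = A @ R  # AGGR: sum the features of 1-hop neighbors
        R = np.maximum(0.0, (R + aggregated) @ weights[l])  # ENC: combine + nonlinearity
    return R

# Two-node path graph, 2 features, identity weights for readability
A = np.array([[0.0, 1.0], [1.0, 0.0]])
X = np.array([[1.0, 0.0], [0.0, 1.0]])
W = [np.eye(2), np.eye(2)]
print(message_passing(A, X, W, k=2))
```

After two rounds every node has accumulated its own features plus its neighbor's twice over, which is exactly the 2-hop receptive field the text describes.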
## 4 Optimization for Network Editing
The network design objective is given in Equation 1, where we want to find a new adjacency matrix, \(\mathbf{A}^{*}\), that improves a function \(f\), parameterized by a GNN,
\[\begin{split}\min_{\mathbf{A}^{*}}&||\mathbf{A}- \mathbf{A}^{*}||_{1}\\ \text{s.t.}& f(\mathbf{X},\mathbf{A}^{*})-f(\mathbf{X },\mathbf{A})\geq 0.\end{split} \tag{1}\]
As \(\mathbf{A}\) is discrete and \(f\) introduces non-linear and non-convex constraints, it is difficult to find an exact solution. Thus, we soften the constraints and focus on increasing \(f\) while maintaining the size of \(\mathbf{A}\), as shown in Equation 2,
\[\min_{\mathbf{A}^{*}}\quad-f(\mathbf{X},\mathbf{A}^{*})+\lambda||\mathbf{A}- \mathbf{A}^{*}||_{1}, \tag{2}\]
where \(\lambda\) trades off the objective and the size of the remaining edge set. The negative term incentivizes the optimizer to improve \(f\). As the optimization is still over a discrete adjacency matrix, we re-parameterize \(\mathbf{A}\), as done in [10, 29], and introduce a continuous mask \(\mathbf{M}\in\mathbb{R}^{n\times n}\). \(\mathbf{M}\) is introduced into a GNN's aggregation function as \(\text{AGGR}(m_{u,v}\cdot\mathbf{r}_{u}^{(l-1)},u\in N(v))\), where \(m_{u,v}\) is the mask value on the edge that connects nodes \(u\) and \(v\). By introducing \(\mathbf{M}\) into AGGR, it is possible to directly compute partial derivatives over \(\mathbf{M}\), enabling gradient-based optimization over the mask values. As the aggregation function is model-agnostic, we can easily inject the mask into any model that follows this paradigm.
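A minimal sketch of this re-parameterization: the binary adjacency is scaled elementwise by the continuous mask before aggregation, so each message is weighted by \(m_{u,v}\) and the output is differentiable with respect to the mask entries. The dense-matrix formulation and the identity features below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def masked_aggregate(A, M, R):
    """Continuous-mask aggregation: each neighbor message is scaled by
    its mask value m_{u,v} before summation, so partial derivatives
    with respect to M exist. A is the binary adjacency, M the
    real-valued edge mask, R the node features."""
    return (A * M) @ R  # elementwise product keeps only real edges, scaled by the mask

A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
M = np.array([[0., 0.9, 0.1],
              [0.9, 0., 0.],
              [0.1, 0., 0.]])
R = np.eye(3)  # identity features expose the mask values directly
print(masked_aggregate(A, M, R))
```

With identity features the output reproduces the mask itself, making it easy to see that down-weighted edges (here the 0.1 edge) contribute proportionally less to the aggregated representation.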
### Graph Properties that Influence Edge Scores
We aim to study the gradient of the scoring mask \(\mathbf{M}\) for a graph \(G\). We assume access to a trained, 2-layer GNN with structure \((\mathbf{A}+\mathbf{I})^{2}\mathbf{X}\mathbf{W}\), where \(\mathbf{I}\) is the identity matrix. We analyze a node classification setting, where a node \(i\)'s feature vector is \(\mathbf{x}_{i}=\mathbf{y}_{i}+\mathcal{N}(\mu,\Sigma)\), and \(\mathbf{y}_{i}\) is the one-hot encoding of class \(y_{i}\). After two layers of propagation, the feature vector for node \(i\) becomes,
\[\mathbf{r}_{i}^{(2)}=\mathbf{x}_{i}+\sum_{j\in N(i)}\mathbf{M}_{i,j}\mathbf{x }_{j}+\sum_{j\in N(i)}\mathbf{M}_{i,j}(\mathbf{x}_{j}+\sum_{k\in N(j)} \mathbf{M}_{j,k}\mathbf{x}_{k}). \tag{3}\]
Then, the class prediction for \(i\) is \(\operatorname{argmax}_{c}\,(\mathbf{z}_{i})_{c}\), where \(\mathbf{z}_{i}=\mathbf{r}_{i}^{(2)}\mathbf{W}\). As \(\mathbf{M}\) is commonly learned through gradient ascent, and only \(\mathbf{r}_{i}^{(2)}\) depends on \(\mathbf{M}\), we focus on the partial derivative of \(\mathbf{r}_{i}^{(2)}\) with respect to a mask value \(\mathbf{M}_{u,v}\), where \(u,v\) are nodes in \(G\). As the GNN has two layers, the edges must be within two hops of \(i\) to have a non-zero partial derivative. The partial derivatives for the one- and two-hop scenarios are given by the first and second cases of Equation 4, respectively,
\[\frac{\partial\mathbf{r}_{i}^{(2)}}{\partial\mathbf{M}_{u,v}}=\begin{cases}2 (\mathbf{y}_{j}+\mathbf{M}_{i,j}\mathbf{y}_{i}+(\mathbf{M}_{i,j}+1)\mathcal{ N}(\mu,\Sigma))\\ \qquad+\sum_{k\in N(j)-i}\mathbf{M}_{j,k}(\mathbf{y}_{k}+\mathcal{N}(\mu, \Sigma)),&\quad u=i,v=j\in N(i)\\ \mathbf{M}_{i,j}(\mathbf{y}_{k}+\mathcal{N}(\mu,\Sigma)),&\quad u=j\in N(i),v= k\in N(j)\end{cases} \tag{4}\]
To understand the gradient ascent process, we consider when \(y_{i}=0\), without loss of generality, and simplify Equation 4. This leads to four scenarios, \(y_{j}\in\{0,1\}\) where \(j\in N(i)\) and \(y_{k}\in\{0,1\}\) where \(k\in N_{2}(i)\); however, \(y_{j}\) only impacts case 1 and \(y_{k}\) only impacts case 2, thus we can analyze each in isolation. To elucidate possible biases, we show the difference in gradients by subtracting each possible scenario (for similarly initialized \(\mathbf{M}_{i,j}\)), denoted as \(\Delta\partial\mathbf{r}_{i,0}^{(2)}\), in Equation 5,
\[\Delta\partial\mathbf{r}_{i,0}^{(2)}=\begin{cases}(\mathbf{M}_{i,j}+2) \mathcal{N}(\mu+1,\Sigma),&y_{j}=0,y_{k}=0\\ \mathbf{M}_{i,j}+(\mathbf{M}_{i,j}+2)\mathcal{N}(\mu,\Sigma),&y_{j}=1,y_{k}=0\\ 2(\mathbf{M}_{i,j}+1)+(\mathbf{M}_{i,j}+2)\mathcal{N}(\mu,\Sigma),&y_{j}=0,y_{ k}=1\\ 2\mathbf{M}_{i,j}+(\mathbf{M}_{i,j}+2)\mathcal{N}(\mu,\Sigma),&y_{j}=1,y_{k}=1 \end{cases}\] \[+\sum_{k\in N(j)-i,y_{k}=y_{j}}M_{j,k}\mathcal{N}(\mu+1,\Sigma)+ \sum_{k\in N(j)-i,y_{k}\neq y_{j}}M_{j,k}\mathcal{N}(\mu,\Sigma). \tag{5}\]
First, all cases in Equation 5 tend to be greater than 0, leading to higher scores for edges closer to \(i\). Additionally, if elements of \(\mathbf{M}\sim U(-1,1)\) as in [19, 29], the last two summation terms in Equation 5 scale as \(h_{j}(d_{j}-1)\) and \((1-h_{j})(d_{j}-1)\), respectively, where \(h_{j}\) and \(d_{j}\) represent the homophily and degree properties of the node \(j\). Thus, high degree and high homophily can additionally bias edge selection, similar to the heuristic designed by [26] where they use \(h_{j}d_{j}\) to optimize network navigation. Each of the above structural factors can either coincide with the true edge importance, or negatively influence edits when such structural properties are uninformative to the network design task.
### ORE: Improved Edge Editing
Previous mask learning methods [11, 19, 29] have focused on fixed thresholding to generate an edge set. As shown above, it is possible that the gradients are biased towards unrelated structural properties, and thus thresholding near the expected mask value can introduce incorrect edits. To improve the mask, we introduce **ORE**, which operates by sorting the learned mask values, editing only a fixed budget of the highest scoring edges, and then re-embedding the edited graph to obtain an updated mask. Ordering the mask values and only operating on the extreme ends of the mask value distribution allows ORE to choose edges that are likely to be governed by the mask learning procedure, rather than edges with high scores due to structural biases. Additionally, as seen in Equation 5, the gradient for an edge is dependent on downstream edges aggregated during message passing, motivating our re-embedding step to account for interactions between edits. The total editing budget is denoted as \(b\), where \(b/s\) edges are removed for \(s\) steps. If a task requires the solution to contain a single connected component, edges that would disconnect the graph are preserved, their gradients are deactivated, and their mask values are set to one.
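The Order/Edit/Re-embed steps can be sketched as the following loop, where `score_edges` is a hypothetical stand-in for the gradient-based mask-learning routine described above; the connectivity-preservation check is omitted for brevity.

```python
def ore_edit(edges, score_edges, budget, steps):
    """Sketch of ORE: per step, (re)learn edge scores on the current
    graph, Order them, Edit the top budget//steps edges, then Re-embed
    by rescoring on the edited graph in the next iteration.
    `score_edges` is a placeholder for the mask-learning routine."""
    edges = set(edges)
    removed = []
    per_step = budget // steps
    for _ in range(steps):
        scores = score_edges(edges)  # re-embed: fresh mask on the edited graph
        ranked = sorted(edges, key=lambda e: scores[e], reverse=True)
        for e in ranked[:per_step]:  # edit only the extreme, high-scoring edges
            edges.discard(e)
            removed.append(e)
    return edges, removed

# Toy scorer: prefer edges with a larger endpoint-index sum
toy_edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
scorer = lambda es: {e: e[0] + e[1] for e in es}
remaining, removed = ore_edit(toy_edges, scorer, budget=2, steps=2)
print(removed)  # → [(3, 4), (2, 3)]
```

Because the scorer is re-invoked each step, later edits can react to the influence of earlier removals, which is the interaction effect Equation 5 motivates.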
## 5 Experimental Setup
### Network Editing Process
We study four GNN architectures: GCN [13], GraphSage [9], GCN-II [22], and Hyperbolic GCN [2]. As attention weights have been shown to be unreliable for edge scoring [29], we leave them out of this study. After training, each model's
weights are frozen and the edge mask variables are optimized to modify the output prediction. We train three independent models on different train-val-test (50-25-25) splits for each task and the validation set is used to choose the best hyperparameters over a grid search. Then, editing is performed over 50 random data points sampled from the test set. For regression tasks, we directly optimize the output of the GNN, and for classification tasks, we optimize the cross entropy loss between the prediction and class label. For ORE, \(s=b\) so that one edge is edited per step. Additionally, \(b\) is set such that roughly 10% (or less) of the edges of a graph (or computational neighborhood) are edited. The exact budget is specified for each task. All hyperparameters and implementation details for both the GNN training and mask learning are outlined in an anonymous repo3.
Footnote 3: [https://anonymous.4open.science/r/ORE-93CC/GNN_details.md](https://anonymous.4open.science/r/ORE-93CC/GNN_details.md)
**Editing Baselines** We utilize two fixed heuristics for editing: iterative edge removal through random sampling and edge centrality scores [1]. We also study CF-GNNExplainer [19], though we extend the algorithm to allow for learning objectives beyond counterfactuals and for variable thresholds that produce \(b\) edits, enabling a fair comparison across methods. These changes do not hurt performance and are simple generalizations. Note that while we focus on CF-GNNExplainer, as it is the only previous mask-learning work to consider editing, its mask generation is highly similar to other previous non-editing methods, allowing us to indirectly compare to thresholding-based methods in general [20, 24, 29].
### Learning Tasks
In this section we detail the proposed tasks. For each, the generation process, parameters, and resultant dataset stats are provided in an anonymous repo4.
Footnote 4: [https://anonymous.4open.science/r/ORE-93CC/Dataset_details.mat](https://anonymous.4open.science/r/ORE-93CC/Dataset_details.mat)
**Improving Motif Detection:** We begin with node classification tasks similar to [19, 20, 29] with a goal of differentiating nodes from two different generative
models. _Tree-grid_ and _tree-cycle_ are generated by attaching either a 3x3 grid or a 6-node cycle motif to random nodes in an 8-level balanced binary tree. We train the GNNs using cross entropy, and then train the mask to maximize a node's class prediction. As the generation process is known, we extrinsically verify if an edit was correct by determining if it corresponds to an edge inside or outside of the motifs. The editing budget is set to the size of the motifs, i.e. \(b=6\) for tree-cycle and \(b=12\) for tree-grid. Each model is trained to an accuracy of 85%.
**Increasing Shortest Paths (SP):** The proposed task is to delete edges to increase the SP between two nodes in a graph. This task has roots in adversarial attacks [21] and network interdiction [27], with the goal of forcing specific traffic routes. The task is performed on three synthetic graphs: Barabasi-Albert (BA), Stochastic Block Model (SBM), and Erdos-Renyi (ER). The parameters are set to enforce that each graph has an average SP length of 8. The GNN is trained through MSE of SP lengths, where the SP is estimated by learning an embedding for each node and then computing the \(L^{2}\) distance between the embeddings of node pairs in the training set. The GNN is then used to increase the SP for pairs of nodes in the test set, which is externally verified through NetworkX. The editing budget is \(b=30\) given the larger graphs. Each model is trained to an RMSE of 2.
**Decreasing the Number of Triangles:** The proposed task is to delete edges to decrease the number of triangles in a graph. Since triangles are often associated with influence, this task can support applications that control the spread of a process in a network, such as disease or misinformation [6]. We consider the same graphs as in the SP task, BA, SBM, and ER, but instead generate 100,000 different graphs, each with 100 nodes. Each generation method produces graphs that, on average, have between 20 and 25 triangles, as computed by NetworkX's triangle counter. The GNNs are trained using MSE and then used to minimize the number of triangles in the graph, which is externally verified through NetworkX. The editing budget is \(b=20\). Each GNN is trained to an RMSE of 6.
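For intuition on the external verification step, a standalone triangle counter over an edge list (mirroring what NetworkX's counter reports, but written from scratch here for illustration) can look like:

```python
def count_triangles(edges):
    """Count triangles in an undirected simple graph given its edge list.
    Each edge contributes one count per common neighbor of its endpoints,
    so every triangle is counted once per each of its three edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for u, v in edges:
        count += len(adj[u] & adj[v])  # common neighbors close a triangle
    return count // 3                  # correct for the triple counting

# A square with one diagonal contains exactly two triangles
square_with_diagonal = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(count_triangles(square_with_diagonal))  # → 2
```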
**Improving Graph-level Predictions:** MUTAG is a common dataset of molecular graphs used to evaluate graph classification algorithms. The proposed task is to turn mutagenic molecules into non-mutagenic molecules by deleting mutagenic functional groups [20, 29]. We first train the GNN models to sufficiently predict whether a molecule is mutagenic, then edit the molecules to reduce the probability of mutagenicity. We only edit mutagenic molecules that possess mutagenic functional groups, as in [20]. The editing budget is \(b=5\). Each GNN is trained to an accuracy above 75%. To focus on edit quality, we do not include chemistry-based feasibility checks; however, it is possible to incorporate constraints into ORE either through the mask learning objective, when the constraint is differentiable, or by rejecting edits when the constraint is not differentiable.
## 6 Results
We present the empirical results for each task, beginning with an in-depth analysis on motif detection. Then, we collectively analyze the shortest path, triangle counting, and mutag tasks, noting trends in editing method and GNN design.
### Motif Detection
In Figure 1 we show the percent change metrics for the tree-grid and tree-cycle datasets across the GNN models. Better performance is indicated by a higher percentage of edges removed outside the motif, and a lower percentage of edges removed from inside the motif. We include performance for ORE and CF-GNNExplainer with different GNN backbones. On both datasets, the Pareto front is comprised primarily by variants of ORE, highlighting that ORE is generally better at maintaining in motif edges while removing out of motif edges.
**How do editing methods vary across GNNs?** In Figure 1, ORE with GCNII yields the best performance; however, nearly every ORE and GNN combination outperforms the CF-GNNExplainer variant with the same GNN, demonstrating the intrinsic benefit of ORE, as well as the dependence on GNN model. To probe how performance varies across GNNs, we stratify performance by structural factors, as motivated by our analysis in Equation 5. In Figure 2, we focus on the edge-to-node distance, showing that GCN is more susceptible than GCNII to this bias as the correlation between mask score and distance is higher. This result suggests that GCNII is able to suppress the use of factors unrelated to the editing task and better leverage the true importance of the edited edges.
Figure 1: Performance on tree-grid and tree-cycle across GNNs (shapes) and editing methods (colors). The axes show the percent change in edges outside and inside the motifs. Error bars indicate standard deviation in experiments. _Performance improves towards the bottom right_, as the goal is to remove edges outside the motif and retain edges inside the motif, as shown by the gray Pareto front.
We hypothesize that GCNII's ability to retain distinct node representations by combatting oversmoothing can enable more salient gradients, however further theoretical analysis is required to confirm this behavior.
**How does ORE improve performance?** In Figure 2(a), granular performance metrics are presented in a 2D histogram for ORE and CF-GNNExplainer with GCNII, demonstrating the percent change of inside and outside motif edges for tree-grid. Result trends are similar for tree-cycle. ORE is shown to drop significantly fewer edges inside the motifs, denoted by the dark red boxes in the bottom right, indicating stronger performance. While both editing methods perform well at removing edges outside the motifs, CF-GNNExplainer tends to additionally remove inside edges, indicating a poorer trade-off between outside and inside motif edges. We further analyze how this arises in Figure 2(b), where the percent change metrics are presented across edit iterations (CF-GNNExplainer is not iterative and thus constant). For ORE, we see that the rates of change for inside and outside edges are significantly different - ORE more rapidly removes outside edges while maintaining inside edges, improving the final edit solution. In addition, ORE achieves similar outside edge removal to CF-GNNExplainer, while achieving a 10% increase in retained inside edges, supporting our hypothesis that knowledge of earlier edits allows ORE to adjust mask scores, improving editing.
### Shortest Path, Triangle Counting, and Graph Classification
In Table 1, we outline the performance metrics for the SP, triangle counting, and mutag tasks. For each task, we measure the average percent change in their associated metric. In the SP experiments, all GNNs improve over the baselines, demonstrating the learned masked values extracted from the GNNs can outperform crafted heuristics, such as centrality, which leverages shortest path information in its computation. Given that ORE with GCN performs well on this task,
Figure 2: Mask score distribution stratified by distance to ego-node for **GCN** and **GCNII**. Yellow denotes Tree-Grid, green denotes Tree-Cycle. For GCN, the closer an edge is to the ego-node, the higher the scores, leading to bias within the editing. GCNII minimizes bias for this unrelated property, improving editing.
it is possible that the structural biases identified previously, such as reliance on degree, could coincide with the SP task and improve mask scores. In the triangle counting task, edge centrality is a strong baseline for BA graphs, likely due to centrality directly editing the hub nodes that close a large number of triangles. Across the ER and SBM graphs, which do not possess a hub structure, we find that ORE with a GCNII backbone performs significantly better than both the baselines and other GNN models. MUTAG reinforces these findings, with GCNII removing nearly all of the mutagenic bonds for the mutagenic molecules. Notably, the Hyperbolic GCN performs poorly across experiments, possibly explained by most tasks possessing Euclidean geometry, e.g. 82% of the molecules in the mutagenic dataset are roughly Euclidean as computed by the Gromov hyperbolicity metric [28]. Comparing editing methods, ORE with GCN and GCNII significantly outperforms CF-GNNExplainer with GCN across all three downstream tasks, highlighting the value of refined and iteratively optimized edge masks.
## 7 Conclusion
In this work, we focused on studying network design through gradient-based edge editing. We began by identifying structural factors that influence the common mask-based learning paradigm, and empirically demonstrated how these factors can impact performance across complex models and tasks. To improve editing, we introduced a sequential editing framework, ORE, that allowed for (a) the identification of higher-quality edges near the extremes of the mask distribution and (b) mask scores that reflect updates from higher-scoring edges. As network design evaluation has limited datasets, we proposed a set of editing tasks
Figure 3: Analysis on **GCNII** and **Tree-Grid**. (a) Histograms where the axes denote the percent change in edges inside and outside of the motif, boxes capture the counts. _ORE outperforms CF-GNNExplainer, as shown by the darker boxes in the bottom right._ (b) Performance across edit iterations. Blue denotes ORE, red denotes CF-GNNExplainer, dashed lines denote out motif change, and solid lines denote in motif change. _ORE rapidly removes edges outside the motifs while maintaining edges inside the motif, improving upon CF-GNNExplainer._
with external validation mechanisms, and studied both ORE and a strong editing baseline, CF-GNNExplainer, with different GNN backbones. We found that ORE outperformed CF-GNNExplainer across all experiments, while additionally demonstrating the impact of GNN architecture on the success of editing.
|
2303.14519 | Stochastic Model Predictive Control Utilizing Bayesian Neural Networks | Integrating measurements and historical data can enhance control systems
through learning-based techniques, but ensuring performance and safety is
challenging. Robust model predictive control strategies, like stochastic model
predictive control, can address this by accounting for uncertainty. Gaussian
processes are often used but have limitations with larger models and data sets.
We explore Bayesian neural networks for stochastic learning-assisted control,
comparing their performance to Gaussian processes on a wastewater treatment
plant model. Results show Bayesian neural networks achieve similar performance,
highlighting their potential as an alternative for control designs,
particularly when handling extensive data sets. | J. Pohlodek, H. Alsmeier, B. Morabito, C. Schlauch, A. Savchenko, R. Findeisen | 2023-03-25T16:58:11Z | http://arxiv.org/abs/2303.14519v1 | # Stochastic Model Predictive Control Utilizing Bayesian Neural Networks
###### Abstract
Integrating measurements and historical data can enhance control systems through learning-based techniques, but ensuring performance and safety is challenging. Robust model predictive control strategies, like stochastic model predictive control, can address this by accounting for uncertainty. Gaussian processes are often used but have limitations with larger models and data sets. We explore Bayesian neural networks for stochastic learning-assisted control, comparing their performance to Gaussian processes on a wastewater treatment plant model. Results show Bayesian neural networks achieve similar performance, highlighting their potential as an alternative for control designs, particularly when handling extensive data sets.
## I Introduction
Optimal, flexible, safe, and reliable control close to the feasibility boundary is becoming increasingly important to achieve many processes' economic, energy-efficient, and sustainable operation. Optimization-based control, such as model predictive control (MPC) [1, 2], is, in principle, well suited to achieve these objectives, as it allows the formulation of elaborate control goals while incorporating state and input constraints. However, the MPC performance heavily depends on the quality of the underlying process model [1, 3], as it is employed to predict the system's behavior and determine the optimal control action. This brings forward the question of counteracting inevitable model-plant mismatch and measurement uncertainties in real systems. Though nominal MPC exhibits inherent robustness properties [1, 4] through repeated optimization in the closed loop, uncertainties degrade its performance and can potentially lead to constraint violations or stability loss [1].
Learning-supported MPC approaches aim to reduce the uncertainty in model dynamics by augmenting it with data-driven parts. However, process uncertainty and stochasticity are often not considered in the MPC formulation if combined with learning approaches; one assumes that the learned part adjusts and compensates for these uncertainties.
In contrast, robust and stochastic MPC approaches [1, 5, 6] are explicitly designed for such a scenario, incorporating the information on model uncertainties directly into the MPC formulation. These formulations typically trade off some performance for guaranteeing constraint satisfaction under assumptions on the uncertainty, which is especially important in safety-critical scenarios.
Combining the two strategies -- robust model predictive control and learning -- has been a subject of recent work [7, 8, 9, 10, 11]. The learned model is often expected to provide some measure of the uncertainty, which is then delivered to the robust or stochastic MPC. Gaussian processes (GPs) have been widely employed in this role, as they explicitly yield a posterior variance along with the regression mean [12]. Though GPs are generally easy to tune, their computational complexity grows cubically with the dataset size, restricting their application to relatively small data sets [12].
An alternative is the use of Bayesian neural networks (BNNs) designed to provide a measure of uncertainty besides the regression [13]. Belonging to the family of deep learning models, they are computationally efficient, especially as most of the computational effort is spent offline during training. However, compared to other deep learning methods, BNNs are not yet as widely researched -- it is generally challenging to infer the distributions analytically. Instead, one relies on posterior approximations [13].
In this work, we aim to determine the suitability of BNNs for (stochastic) predictive control applications, with a particular focus on comparing them to state-of-the-art GPs. BNNs have been used in combination with model predictive control, e.g., [14] employs BNNs in combination with a hierarchical MPC approach for the control of a surgical robot to reduce the uncertainty. Here, the kinematics and dynamics of a highly nonlinear robotic system were modeled with the help of BNNs. The uncertainty information provided by the BNN is used in a hierarchical MPC scheme, achieving superior performance. In [15], a learning-based adaptive-scenario-tree model predictive control (MPC) approach is used to achieve probabilistic safety guarantees using BNNs to learn the model uncertainty.
We produced all results in this work using the open-source Python toolbox HILO-MPC1[16]. In a simple-to-use way, the toolbox allows combining (robust) predictive and optimization-based control and estimation approaches with methods from machine learning, such as Gaussian processes and Bayesian neural networks. 2
Footnote 1: [https://www.ccps.tu-darmstadt.de/research_ccps/hilo_mpc/](https://www.ccps.tu-darmstadt.de/research_ccps/hilo_mpc/)
Footnote 2: The case study used as an example, including all code, will be made freely available in HILO-MPC.
## II Problem Formulation and Methods
We consider dynamic systems which can be described by
\[x_{k+1}=f(x_{k},u_{k})+B\left(d\left(x_{k},u_{k}\right)+w_{k}\right). \tag{1}\]
Here, \(x\in\mathbb{R}^{n_{x}}\) are the dynamical states, \(u\in\mathbb{R}^{n_{u}}\) are the inputs to the system and \(B\in\mathbb{R}^{n_{x}\times n_{d}}\) is a known
matrix. The function \(f\colon\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{u}}\rightarrow\mathbb{R}^{n_{x}}\) describes the known part of the system dynamics, while the function \(d\colon\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{u}}\rightarrow\mathbb{R}^{n_{x}}\) describes an unknown or difficult to model effect. This unknown effect will be learned using specific machine learning algorithms. The variable \(w\in\mathbb{R}^{n_{d}}\) is assumed to be zero-mean normally distributed process noise \(w\sim\mathcal{N}\left(0,\Sigma^{w}\right)\) with the variance matrix \(\Sigma^{w}=\mathrm{diag}\left(\left[\sigma_{1}^{2}\quad\ldots\quad\sigma_{n_{d} }^{2}\right]\right)\). We focus on Gaussian processes and Bayesian neural networks to learn the unknown effect, which we briefly introduce in the following.
**Gaussian Process.** We briefly review the main concepts of Gaussian processes. For a more detailed introduction to GPs, we refer to [12]. A GP is a stochastic supervised machine learning algorithm used in regression and classification tasks. GPs are less prone to overfitting and naturally provide uncertainty measures on predictions. We formulate a regression task as a mapping \(\psi\colon\mathbb{R}^{n_{\chi}}\rightarrow\mathbb{R}:\chi\rightarrow\varphi \left(\chi\right)+\nu\) with the input vector \(\chi\in\mathbb{R}^{n_{\chi}}\), the output \(\psi\in\mathbb{R}\) and the zero-mean normally distributed noise \(\nu\sim\mathcal{N}\left(0,\sigma^{2}\right)\) affecting the output. The unknown function \(\varphi\) is assumed to be normally distributed \(\varphi\left(\chi\right)\sim\mathcal{N}\left(m\left(\chi\right),k\left(\chi, \chi\right)\right)\), with the mean function \(m\colon\mathbb{R}^{n_{\chi}}\rightarrow\mathbb{R}\) and covariance function \(k\colon\mathbb{R}^{n_{\chi}}\times\mathbb{R}^{n_{\chi}}\rightarrow\mathbb{R}\). These functions are assumed to depend on hyperparameters that are determined by the GP training procedure, resulting in scalar-valued predictions. For simplicity, multiple outputs are handled independently as separate one-dimensional GPs.
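For concreteness, the standard GP posterior equations, \(\mu_{*}=K_{*}^{\top}(K+\sigma^{2}I)^{-1}y\) and \(\sigma_{*}^{2}=k(x_{*},x_{*})-K_{*}^{\top}(K+\sigma^{2}I)^{-1}K_{*}\), can be sketched in a few lines of NumPy. The RBF kernel, zero prior mean, and hyperparameter values below are illustrative choices, not the hyperparameters used in the case study.

```python
import numpy as np

def gp_posterior(X, y, Xs, length=1.0, sigma_n=0.1):
    """GP regression posterior (zero prior mean, RBF kernel) at test
    inputs Xs, given 1-D training inputs X and noisy targets y."""
    def rbf(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)
    K = rbf(X, X) + sigma_n**2 * np.eye(len(X))   # noisy train covariance
    Ks = rbf(X, Xs)                               # train-test covariance
    Kinv = np.linalg.inv(K)
    mean = Ks.T @ Kinv @ y
    var = rbf(Xs, Xs).diagonal() - np.einsum('ij,jk,ki->i', Ks.T, Kinv, Ks)
    return mean, var

X = np.array([0.0, 1.0, 2.0])
y = np.sin(X)
mean, var = gp_posterior(X, y, np.array([1.0, 5.0]))
print(mean, var)  # near-data point: low variance; far point reverts to the prior
```

The returned variance is exactly the uncertainty measure that the stochastic MPC formulation later consumes: small near observed data, approaching the prior variance far from it.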
**Bayesian Neural Network.** As a second learning method, we consider Bayesian neural networks [17], which can also provide a measure of uncertainty similar to GPs. In contrast to GPs, the computational complexity of BNNs does not grow with the number of data points. Unlike traditional feed-forward NNs, BNNs learn distributions over the output instead of single-value predictions. To accomplish this task, the weights of conventional neural networks are replaced by distributions over the weights in each layer. Thus, BNNs extend the class of NNs to estimate posterior distributions instead of most likely values, cf. Fig. 1.
A common way of implementing the weights \(\mathcal{W}\) as distributions is to assume that they are independently Gaussian distributed with zero mean and some arbitrary variance \(\lambda\)[13] as follows
\[p(\mathcal{W})=\prod_{l=1}^{L}\prod_{i=0}^{V_{l}}\prod_{j=1}^{V_{l-1}+1}\mathcal{ N}(w_{i,j}|0,\lambda^{-1}). \tag{2}\]
Here, \(L\) denotes the number of layers in the network and \(V_{l}\) the number of nodes in layer \(l\). The likelihoods with regard to the network weights are given by
\[p(y|\mathcal{W},\mathcal{Z})=\prod_{i=1}^{M}\mathcal{N}(y_{i}|d(z_{i}|\mathcal{ W}),\gamma^{-1}), \tag{3}\]
where \(\gamma\) is the precision and \(\mathcal{Z}\) is the collection of nodes \(z_{i}\). Now we can estimate the posterior distribution over the weights given the data via Bayes' rule
\[p(\mathcal{W}|\mathcal{D})=\frac{p(y|\mathcal{W},\mathcal{Z})p(\mathcal{W})}{p( y|\mathcal{Z})}, \tag{4}\]
where we can calculate the marginal likelihood via
\[p(y|\mathcal{Z})=\int p(y|\mathcal{W},\mathcal{Z})p(\mathcal{W})\,d\mathcal{W}. \tag{5}\]
In reality, the marginal likelihood in equation (5) cannot be calculated; it becomes intractable because of the nonlinearities in the network caused by the activation \(d(\cdot,\mathcal{W})\). This results in the need to approximate the posterior distribution. This is a common challenge in the Bayesian inference domain, and different methods exist to approximate the needed posterior. The most popular are Markov chain Monte Carlo (MCMC), Laplace approximation (LA), probabilistic backpropagation (PBP), and variational inference (VI); for a survey see [18] and [19].
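Whatever approximation is chosen, predictions are typically formed by Monte Carlo: sample weights from the approximate posterior, push them through the network, and combine the resulting spread with the observation noise \(\gamma^{-1}\). A toy sketch with a single tanh unit and an assumed Gaussian weight posterior (the shapes and values are illustrative, not a fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)

def bnn_predict(x, w_mean, w_std, n_samples=500, gamma=100.0):
    """Monte Carlo approximation of the BNN predictive distribution:
    draw weights from an (assumed Gaussian) approximate posterior,
    run the network, and add the observation-noise variance 1/gamma.
    Toy single-tanh-unit network for illustration."""
    preds = []
    for _ in range(n_samples):
        w = rng.normal(w_mean, w_std)  # one posterior weight sample
        preds.append(np.tanh(w * x))   # forward pass d(x | W)
    preds = np.array(preds)
    mean = preds.mean()
    var = preds.var() + 1.0 / gamma    # epistemic + aleatoric parts
    return mean, var

m_tight, v_tight = bnn_predict(1.0, w_mean=0.5, w_std=0.01)
m_loose, v_loose = bnn_predict(1.0, w_mean=0.5, w_std=0.5)
print(v_tight, v_loose)  # a wider weight posterior yields a wider predictive
```

The predictive mean and variance obtained this way play the same role as the GP posterior mean and variance in the stochastic MPC formulation below.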
_Nominal Model Predictive Control._ As a comparison and baseline, we consider a nominal model predictive controller, which solves a finite-horizon optimal control problem at the sampling times [1, 2] based on the nominal model3:
Footnote 3: In principle the nominal MPC could also use the learned “nominal” model part, which would be the mean of the predicted state for case of the GP model. This is avoided here, due to space limitations.
\[\min_{\begin{subarray}{c}\{u_{k}\}\end{subarray}} J(\{x_{k}\},\{u_{k}\}) \tag{6a}\] \[\mathrm{s.t.} x_{k+1}=f\left(x_{k},u_{k}\right),\quad x_{0}=\tilde{x}_{j},\] (6b) \[x_{k}\in\mathcal{X},\quad u_{k}\in\mathcal{U}, \tag{6c}\]
where \(J(\{x_{k}\},\{u_{k}\})=\sum_{k=0}^{N}L\left(x_{k},u_{k}\right)+E\left(x_{N}\right)\) is the cost function, \(N\) is the control horizon, \(L\colon\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{u}}\rightarrow\mathbb{R}\) is the stage cost, \(E\colon\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}\) is the terminal cost, \(\mathcal{X}\) and \(\mathcal{U}\) are the feasible state and input sets. The first input \(u_{0}^{*}\) of the obtained optimal input sequence is applied. The optimization problem is solved again at every sampling time, updating \(x_{0}\) with the measured state \(\tilde{x}_{j}\).
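The receding-horizon principle of (6) can be sketched for a scalar toy system; the model, cost weights, and bounds below are illustrative assumptions, not the bioreactor setup of this paper:

```python
import numpy as np
from scipy.optimize import minimize

def nominal_mpc_step(f, x0, x_ref, N, u_bounds, r=0.1):
    """Solve the finite-horizon problem (6) for a scalar system and
    return the first optimal input u_0*."""
    def cost(u_seq):
        x, J = x0, 0.0
        for u in u_seq:
            J += (x - x_ref) ** 2 + r * u ** 2   # stage cost L
            x = f(x, u)                          # nominal model (6b)
        return J + (x - x_ref) ** 2              # terminal cost E
    res = minimize(cost, np.zeros(N), bounds=[u_bounds] * N)
    return float(res.x[0])

# Receding horizon: apply u_0*, measure, re-solve (toy model x+ = 0.9 x + u)
f = lambda x, u: 0.9 * x + u
x = 0.0
for _ in range(40):
    x = f(x, nominal_mpc_step(f, x, x_ref=1.0, N=5, u_bounds=(-2.0, 2.0)))
```

After the loop, the closed-loop state sits close to the reference; the small residual offset comes from the input penalty \(r\,u^2\).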
_Stochastic Model Predictive Control._ To exploit the stochastic uncertainty information of the model, we use stochastic MPC. Compared to the nominal case, we consider chance constraints, allowing for a violation of the constraints
Fig. 1: a) Bayesian neural network with two features (inputs), one hidden layer, and one label (output). b) Single neuron.
with a certain probability [5]. Assuming a cost that contains the expected value of the now stochastic state and control variables, and minimizing over a sequence of policies \(\Pi(x)=\{\pi_{0}(x),...,\pi_{N}(x)\}\) instead of the optimal open-loop inputs, one obtains the following stochastic optimal control problem
\[\min_{\Pi\left(x\right)} \mathbb{E}\left[J(\{x_{k}\},\{u_{k}\})\right] \tag{7a}\] \[\mathrm{s.t.} x_{k+1}=f_{\text{prob}}\left(x_{k},u_{k}\right),\quad x_{0}=\tilde{x }_{j},\] (7b) \[p\left(x_{k}\in\mathcal{X}\right)\geq p_{x},\quad\forall k\in 0,\, \ldots,\,N,\] (7c) \[p\left(u_{k}\in\mathcal{U}\right)\geq p_{u},\quad\forall k\in 0, \,\ldots,\,N,\] (7d) \[u_{k}=\pi\left(x_{k}\right). \tag{7e}\]
with \(f_{\text{prob}}\) defining the probabilistic model of the system, \(p_{x}\) and \(p_{u}\) the probabilities with which the chance constraints must be satisfied, and \(k\) the time steps of the discrete model.
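Chance constraints such as (7c) are typically made tractable by deterministic tightening: for a Gaussian state, the probabilistic bound is replaced by a shifted hard bound on the mean. A minimal sketch (the function name is ours):

```python
from statistics import NormalDist

def tightened_lower_bound(lb, sigma, p):
    """Deterministic surrogate for the chance constraint P(x >= lb) >= p
    when x ~ N(mu, sigma**2): it is satisfied if mu >= lb + z_p * sigma."""
    z_p = NormalDist().inv_cdf(p)   # standard-normal quantile
    return lb + z_p * sigma
```

For \(p=0.99\), the bound on the mean is shifted by roughly 2.33 standard deviations, so larger predicted uncertainty automatically makes the controller more cautious.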
Since the stochastic optimal control problem is generally intractable, we reformulate and approximate it to make it tractable and include the learning-based hybrid model. The problem's intractability results from the fact that we need to consider infinitely many state trajectories if we solve for an optimal sequence of policies \(\Pi\).
The reformulations and approximations are based on the approximation presented in [9] and the theory of reachable sets presented in [8]. We omit the necessary assumptions for the sake of brevity and state the derived outcome: the system state, the control input, and the uncertainty, which is to be learned by a GP or a BNN, are jointly Gaussian distributed:
\[\mathcal{N}(\mu_{k},\Sigma_{k})=\mathcal{N}\left(\begin{bmatrix}\mu_{k}^{x}\\ \mu_{k}^{u}\\ \mu_{k}^{d}\end{bmatrix},\begin{bmatrix}\Sigma_{k}^{x}&\Sigma_{k}^{xu}&\Sigma_{k}^{xd}\\ \Sigma_{k}^{ux}&\Sigma_{k}^{u}&\Sigma_{k}^{ud}\\ \Sigma_{k}^{dx}&\Sigma_{k}^{du}&\Sigma_{k}^{d}+\Sigma_{k}^{w}\end{bmatrix}\right). \tag{8}\]
Furthermore, we assume that the chance constraints are tightened around the means of our input and state variables, \(\mu^{x}\) and \(\mu^{u}\), which are defined by the assumed Gaussian distributions. With these assumptions, we can formulate a tractable stochastic optimal control problem of the following structure (as we only want to show the basic structure and due to limited space, we omit introducing all symbols)
\[\min_{\mu_{k}^{u}} \mathbb{E}\left[J(\{x_{k}\},\{u_{k}\})\right]\] (9a) \[\mathrm{s.t.} \mu_{k+1}^{x}=\hat{f}\left(x_{k},u_{k},k\right)+B_{d}\mu_{k}^{d},\] (9b) \[\Sigma_{k+1}^{x}\!=\!\Big{[}\nabla\hat{f}\left(\mu_{k}^{x},\mu_{k}^{u}\right)B_{d}\Big{]}\Sigma\!\Big{[}\nabla\hat{f}\left(\mu_{k}^{x},\mu_{k}^{u}\right)B_{d}\Big{]}^{\!\top}\!,\] (9c) \[\mu_{0}^{x}=\tilde{x}_{j},\mu_{k}^{d}=\mu^{d}\left(\mu_{k}^{x},\mu_{k}^{u}\right),\Sigma_{0}^{x}=0,\] (9d) \[\Sigma_{k}\ \text{according to eq. (8)}.\] (9e)
The stage cost \(L\) and the terminal cost \(E\) are defined as \(L\left(x_{k},u_{k}\right)=\left\|x_{k}-x_{\text{ref}}\right\|_{Q_{s}}^{2}+\left\|u_{k}-u_{\text{ref}}\right\|_{R_{s}}^{2}+\left\|u_{k}-u_{k-1}\right\|_{R_{c}}^{2},E\left(x_{N}\right)=\left\|x_{N}-x_{\text{ref}}\right\|_{Q_{t}}^{2},\) with the reference values \(x_{\text{ref}}=\left[X_{\text{ref}}\text{ }S_{\text{ref}}\right]=\left[1046.28\,\,\,101.615\right],u_{\text{ref}}=F_{\text{ref}}=0.714286\), and the weights \(Q_{s}=Q_{t}=\left[\begin{smallmatrix}10&0\\ 0&10\end{smallmatrix}\right],\quad R_{s}=1,\quad R_{c}=5\cdot 10^{3}\). The reference of the flow rate \(F_{\text{ref}}\) translates to a hydraulic retention time of \(\tau=7\,\text{d}\), i.e., the average time a volume of wastewater will remain in a particular part of the plant, and lies within the range of hydraulic retention times referenced in [20]. The reference values of the biomass concentration \(X_{\text{ref}}\) and the substrate concentration \(S_{\text{ref}}\) are the steady states of the open loop simulation using the chosen reference flow rate \(F_{\text{ref}}\). The initial conditions of the states were \(X_{0}=0.2\) and \(S_{0}=0\), and both states were constrained to be at least zero over the whole process time. Furthermore, the input was constrained to lie within the range \(0\leq F(t)\leq 2\). The sampling interval was \(\Delta t=0.125\,\text{d}\) and the length of the control horizon was \(N=80\), which translates to a time of \(10\,\text{d}\). The influent substrate concentration was calculated using (III) and was kept constant over the control horizon.
We ran six closed loop simulations using this nominal MPC for a simulation time of \(T_{\text{sim}}=70\,\text{d}\) each, which ensured that the steady state was reached in every simulation. Additionally, for each simulation we sampled the reference states from a uniform distribution around the actual reference states to gain a sufficient distribution of data in the state-space region of interest: \(\tilde{X}_{\text{ref}}\sim\mathcal{U}\left(0.9X_{\text{ref}},1.1X_{\text{ref}}\right),\tilde{S}_{\text{ref}}\sim\mathcal{U}\left(0.9S_{\text{ref}},1.1S_{\text{ref}}\right).\) Both states are assumed to be measured at all times and serve as the features of the machine learning models. The discrepancy between the true plant model and the nominal control model (the \(d\) part of (1)) is used as the label for the machine learning models. Overall, \(2800\) data points were assembled and split between training and test sets with the ratio \(0.8:0.2\). The data was scaled to zero mean and unit standard deviation to improve training performance.
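The scaling and splitting steps above can be sketched as follows; the function names are illustrative, not from a specific library:

```python
import numpy as np

def standardize(X):
    """Scale features to zero mean and unit standard deviation."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    return (X - mu) / sd, mu, sd

def train_test_split(X, y, ratio=0.8, seed=0):
    """Random split of the assembled data set, e.g. 0.8 : 0.2."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n = int(ratio * len(X))
    return X[idx[:n]], y[idx[:n]], X[idx[n:]], y[idx[n:]]
```

Storing the fitted mean and standard deviation matters: the same transform must later be applied to controller queries, and predictions must be rescaled back to physical units.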
_Gaussian Process._ Since the training of a GP is computationally expensive, a sparser subset of the training data was generated. This was done by iterating through all observations in the training data and disregarding all observations that have a Euclidean distance to the current observation below a certain threshold. This way we obtain more observations from regions that have been observed very little and disregard observations from well-observed regions of the state space. Since the data was normalized, the threshold was set to \(0.2\). The new training data set obtained using this threshold amounts to \(92\) data points.
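The greedy thinning procedure described above can be sketched in a few lines (the function name is ours):

```python
import numpy as np

def thin_dataset(X, threshold=0.2):
    """Greedy subset selection: keep an observation only if it is at least
    `threshold` (Euclidean distance) away from every observation kept so far."""
    kept = []
    for x in X:
        if all(np.linalg.norm(x - k) >= threshold for k in kept):
            kept.append(x)
    return np.array(kept)
```

Note that the result depends on the iteration order: densely sampled regions contribute only their first representative, while sparsely sampled regions keep all of their points.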
As the model of the wastewater treatment plant has two states, we trained two GPs. The noise variance of each GP was fixed to \(\sigma_{X}^{2}=0.1\) and \(\sigma_{S}^{2}=0.01\), respectively. We chose a squared exponential kernel with automatic relevance determination as the covariance function for both GPs \(k\left(\chi_{i},\chi_{j}\right)=\sigma_{f}^{2}\exp\left(-\frac{\left(\chi_{i}-\chi_{j}\right)^{2}}{2\ell^{2}}\right),\) where \(\sigma_{f}\) is the signal variance and \(\ell\) are the length scales. The mean function was assumed to be zero for both GPs, as is typically done [23]. The hyperparameters of the trained GPs can be found in Table II. The length scales of both GPs are relatively close to each other, indicating that every input to the GPs is equally important. Fig. 2 shows the resulting predictive standard deviation for GP\({}_{X}\) in the top row. Its model uncertainty is small in the region around the training data points. In regions where there are no observations available, it predicts a very high uncertainty. Fig. 3 shows the calibration curves of both GPs [24]. The calibration curve for GP\({}_{X}\) is very close to the ideal calibration line, indicating reliable predictions of the model-plant mismatch \(d_{X}\) for the biomass concentration. This can also be shown by calculating the miscalibration area, i.e., the area between the curve and the ideal calibration line. The closer the calibration curve is to the perfect calibration line, the lower the miscalibration area, resulting in higher prediction reliability. The resulting miscalibration area
\begin{table}
\begin{tabular}{|c|c c|c|} \hline & \multicolumn{2}{c|}{Length scales \(\ell\)} & Signal variance \(\sigma_{f}\) \\ \hline GP\({}_{X}\) & \(2.35425\) & \(2.91528\) & \(5.86936\) \\ GP\({}_{S}\) & \(2.82233\) & \(2.44933\) & \(10.493\) \\ \hline \end{tabular}
\end{table} TABLE II: Hyperparameters of the trained GPs.
Fig. 3: Calibration curves, the ideal calibration is shown red.
Fig. 2: Heat map of the predictive standard deviation for the biomass \(X\). The black dots are the training data points.
is listed in Table III. The calibration curve for the GP trained on the model-plant mismatch in the substrate concentration (GP\({}_{S}\)) is farther away from the ideal calibration line, leading to a higher miscalibration area (see Table III). This indicates a lower prediction reliability of GP\({}_{S}\) compared to GP\({}_{X}\). Two additional metrics, the root-mean-square error (RMSE) as well as the negative log-likelihood (NLL), are listed in Table III. The RMSE reflects the correctness of the predictions, and the NLL is another indicator of the predictions' reliability. Both metrics respectively show that GP\({}_{X}\) has higher accuracy and reliability in the forecasts compared to GP\({}_{S}\), as both values are lower for GP\({}_{X}\).
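The squared-exponential ARD covariance function used above can be sketched as follows; per the formula in the text, the prefactor is \(\sigma_{f}^{2}\), and ARD simply assigns one length scale per input dimension (the function name is ours):

```python
import numpy as np

def se_ard_kernel(x1, x2, lengthscales, sigma_f):
    """Squared-exponential kernel with one length scale per input dimension
    (automatic relevance determination)."""
    d = (np.asarray(x1) - np.asarray(x2)) / np.asarray(lengthscales)
    return sigma_f ** 2 * np.exp(-0.5 * np.dot(d, d))
```

A large length scale in one dimension makes the kernel insensitive to that input, which is why comparable length scales across dimensions indicate that all inputs matter roughly equally.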
#### IV-B1 Bayesian Neural Network
In this work, we used probabilistic backpropagation [25] to approximate the posterior distribution (4) of the BNN, which is a closed-form approximation based on the assumed density filtering approach [26]. Its full derivation is beyond the scope of this work, so we refer the reader to [25] for details. In contrast to our work, [14] and [15] employ variational inference to approximate the posterior distribution.
In probabilistic backpropagation, the prior precision \(\lambda\) and the noise precision \(\gamma\) are assumed to have Gamma-distributed hyperpriors \(p\left(\theta\,|\,\alpha_{\theta},\beta_{\theta}\right)\sim\Gamma\left(\alpha_{\theta},\beta_{\theta}\right)\), \(\theta\in\{\lambda,\gamma\}\), where \(\alpha_{\theta}\) are the shape parameters and \(\beta_{\theta}\) are the inverse scale parameters. Here, we specified the weight hyperpriors with shape \(\alpha_{\lambda}=6\) and inverse scale \(\beta_{\lambda}=6\) for both outputs. The noise hyperpriors were \(\alpha_{\gamma,X}=\beta_{\gamma,X}=40\) and \(\alpha_{\gamma,S}=\beta_{\gamma,S}=6\), respectively. As with the GPs, the BNNs also need to be trained for each output individually. Both BNNs had one hidden layer with \(50\) nodes and the ReLU activation function [25]. The BNNs were trained for \(10\) epochs each.
Fig. 2 shows the resulting predictive distribution for BNN\({}_{X}\) in the bottom row. Similar to GP\({}_{X}\), it predicts a very low uncertainty in the regions where the training data are located. Unlike GP\({}_{X}\), BNN\({}_{X}\) centers its posterior on the best observed region, shows higher uncertainty in regions with fewer observations, but lower uncertainties in regions without observations. The calibration curves of both BNNs are also very close to the ideal calibration line, indicating high reliability in the predictions. Overall, BNNs and GPs show competitive performance, which is also reflected when comparing the metrics (RMSE, NLL, miscalibration area -- see Table III).
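The three evaluation metrics used in Table III can be computed from predictive means and variances; this is a minimal sketch (the miscalibration area is approximated by averaging the gap between empirical and nominal coverage over a grid of quantiles, and the function names are ours):

```python
import numpy as np
from statistics import NormalDist

def rmse(y, mu):
    """Root-mean-square error of the predictive means."""
    y, mu = np.asarray(y), np.asarray(mu)
    return float(np.sqrt(np.mean((y - mu) ** 2)))

def gaussian_nll(y, mu, var):
    """Mean negative log-likelihood under Gaussian predictions."""
    y, mu, var = np.asarray(y), np.asarray(mu), np.asarray(var)
    return float(np.mean(0.5 * np.log(2 * np.pi * var)
                         + 0.5 * (y - mu) ** 2 / var))

def miscalibration_area(y, mu, sigma, n_bins=99):
    """Approximate area between the empirical calibration curve and the
    diagonal: |empirical coverage - nominal coverage| averaged over quantiles."""
    ps = np.linspace(0.01, 0.99, n_bins)
    emp = np.array([np.mean(y <= mu + NormalDist().inv_cdf(float(p)) * sigma)
                    for p in ps])
    return float(np.mean(np.abs(emp - ps)) * (ps[-1] - ps[0]))
```

A perfectly calibrated model traces the diagonal, giving an area near zero; the RMSE scores accuracy while the NLL additionally penalizes over- or under-confident variances.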
#### IV-C Open Loop Simulation of Hybrid Models
Given the trained GPs and BNNs, we generated the respective hybrid models. To illustrate how well they recover the model-plant mismatch, we compared their open loop step responses with those of the true plant model and the nominal control model, without process noise. The result for a step response is shown in Fig. 4. Both hybrid models significantly improve on the nominal control model, especially for the substrate concentration. The performance improvement compared to the nominal control model is substantial, and the remaining mismatch in the steady state of the biomass concentration would be negligible if process noise were considered.
#### IV-D Closed Loop Control
Finally, the learning-supported stochastic MPC for both learning approaches is evaluated. The box constraints of the nominal MPC are augmented with a time-varying constraint that ensures survival of the biomass despite a disturbed substrate concentration \(S_{f}\). This constraint is realistic for practical applications that assume model-plant mismatches, since drastic drops in \(S_{f}\) could wash out the biomass and drastic increases of \(S_{f}\) could lead to unmodeled behavior, e.g., cell aggregation. Hence, we demand the biomass concentration to lie within \(X\left(t\right)\in[(X_{\text{ref}}-20)b_{l},(X_{\text{ref}}+20)b_{u}]\) once the steady state is reached. For \(t=30\,\mathrm{d}\) after the steady state is reached, the parameters \(b_{l}\) and \(b_{u}\) are set to \(b_{l}\left(t\right)=\tanh\left(0.1t+0.01\right)\), \(b_{u}\left(t\right)=\tanh\left(0.1t+1\right).\) For the stochastic MPC we reformulated this constraint into a chance constraint with an acceptance probability of \(99\%\). In order to ease the computational burden, the control horizon was shortened to \(N=32\) time steps.
Fig. 5 shows the closed loop behavior of the control designs. The difference between the approaches is mainly
Fig. 4: Open loop step response of the true plant model.
Fig. 5: Closed loop simulations.
\begin{table}
\begin{tabular}{||c|c|c|c||} \hline & RMSE & NLL & Miscalibration area \\ \hline GP\({}_{X}\) & \(0.173\) & \(\mathbf{-0.277}\) & \(0.117\) \\ \hline BNN\({}_{X}\) & \(\mathbf{0.165}\) & \(0.478\) & \(\mathbf{0.107}\) \\ \hline GP\({}_{S}\) & \(\mathbf{0.205}\) & \(0.148\) & \(0.135\) \\ \hline BNN\({}_{S}\) & \(0.264\) & \(\mathbf{0.043}\) & \(\mathbf{0.101}\) \\ \hline \end{tabular}
\end{table} TABLE III: Root-mean-square error (RMSE), negative log likelihood (NLL) and miscalibration area of the trained GPs and BNNs. Bold indicates the lowest value for each metric.
present in the vicinity of the constraints. Disturbances in \(S_{f}\) lead to the learning-supported MPC violating the lower constraint, showing its inability to guarantee constraint satisfaction. In contrast, the learning-supported stochastic MPC designs satisfy the constraints.
Fig. 6 compares the computation times of both stochastic MPC controllers. Multiple GPs were trained and run in the closed loop control setup, varying the number of training points. One can see the increase in total computation time of the corresponding stochastic MPC; at around 130 training data points it surpasses the computation time of the stochastic MPC using the BNN.
## IV Conclusion and Outlook
State-of-the-art learning-supported stochastic MPC requires uncertainty prediction, often using Gaussian processes. However, their computational complexity increases with data set size. We explored Bayesian neural networks (BNNs) for hybrid models in learning-supported stochastic MPC, comparing their performance to Gaussian processes. BNNs achieved similar performance, minimized model-plant mismatch in open-loop simulations, and proved effective in closed-loop simulations. BNNs offer a valuable alternative, efficiently handling large data sets. All results used the open-source Python toolbox HILO-MPC [16].
Future research will investigate BNNs for more complex models and provide strict stability and performance guarantees for specific model classes.
## Acknowledgment
The authors acknowledge funding of the DIGIPOL project (Magdeburg, Saxony-Anhalt) funded in the EU-ERDF scheme and the KI-Embedded project funded by the BMWK.
---

# DIME-Net: Neural Network-Based Dynamic Intrinsic Parameter Rectification for Cameras with Optical Image Stabilization System

Shu-Hao Yeh, Shuangyu Xie, Di Wang, Wei Yan, Dezhen Song (2023-03-20, arXiv: http://arxiv.org/abs/2303.11307v1)
###### Abstract
The Optical Image Stabilization (OIS) system in mobile devices reduces image blurring by steering the lens to compensate for hand jitters. However, OIS changes the intrinsic camera parameters (i.e. the \(\mathrm{K}\) matrix) dynamically, which hinders accurate camera pose estimation or 3D reconstruction. Here we propose a novel neural network-based approach that estimates the \(\mathrm{K}\) matrix in real time so that pose estimation or scene reconstruction can be run at camera native resolution for the highest accuracy on mobile devices. Our network design takes a gridified projection model discrepancy feature and 3D point positions as inputs and employs a Multi-Layer Perceptron (MLP) to approximate the \(f_{\mathrm{K}}\) manifold. We also design a unique training scheme for this network by introducing a Back-propagated PnP (BPnP) layer so that reprojection error can be adopted as the loss function. The training process utilizes precise calibration patterns for capturing the accurate \(f_{\mathrm{K}}\) manifold, but the trained network can be used anywhere. We name the proposed Dynamic Intrinsic Manifold Estimation network DIME-Net and have it implemented and tested on three different mobile devices. In all cases, DIME-Net can reduce reprojection error by at least \(64\%\), indicating that our design is successful.
## I Introduction
Fig. 1a illustrates how typical OIS functions. When a camera on a mobile device looks at a point in the scene and the camera-holding hand jitters, the image blurs because the point is imaged as a short trajectory instead of a point. To mitigate this effect, the camera is often equipped with a motion sensor to sense the hand/camera motion. The sensed motion is used to generate a countering motion to actuate camera lens or part of lens array so that the imaged point remains at the same 2D location in the imaging sensor and the stationary part of image remains sharp.
Unfortunately, OIS dynamically changes camera intrinsics (i.e. the \(\mathrm{K}\) matrix), which makes it difficult to accurately estimate camera pose or perform scene reconstruction. Such applications are often seen in visual simultaneous localization and mapping or augmented reality, which are widely deployed on mobile devices. Existing practice opts to reduce the camera resolution so that \(\mathrm{K}\) can be approximated by an averaged value, which clearly sacrifices accuracy in the results.
Here we propose a novel neural network based approach to predict the \(\mathrm{K}\) matrix. To our best knowledge, this is a new method for a new problem, since there is no existing work solving the dynamic \(\mathrm{K}\) under the OIS effect. To illustrate the network design with a real application, we use the Perspective-n-Point (PnP) problem [1] for camera pose estimation as an application example. Our network achieves real-time estimation of the \(\mathrm{K}\) matrix regardless of input image resolution so that downstream tasks, such as camera pose estimation and scene reconstruction algorithms, can run at camera native resolution. For the network input, we propose a gridified projection model discrepancy feature and 3D point positions. The network architecture employs an MLP to predict a new \(\mathrm{K}\) matrix. We also design a unique training scheme for this network by introducing a Back-propagated PnP (BPnP) layer [2] so that reprojection error can be adopted as the loss function. The training can be done using precise calibration patterns in lab settings, which builds the manifold approximation using carefully collected data to ensure good knowledge embedding. The network inference does not require a large number of high quality features and can be applied to natural objects. We name the Dynamic Intrinsic Manifold Estimation network DIME-Net and have it implemented and tested on three different mobile devices. DIME-Net can reduce reprojection error by at least \(64.0\%\), indicating that our design is successful.
## II Related Work
In a nutshell, our approach is to train a neural network to track dynamic intrinsic camera parameters. It is related to camera projection modeling, calibration, and geometry-guided neural network.
Camera projection modeling describes the mapping between a 3D world coordinate and a 2D image coordinate.
Fig. 1: (a) Illustration of OIS working principle. (b) System diagram of OIS intrinsics rectification algorithm for PnP problem.
It often consists of extrinsic and intrinsic camera parameters which are often referred to as extrinsics and intrinsics for brevity, respectively. Extrinsics are \(6\)-Degrees of Freedom (DoFs) camera pose in world coordinate system whereas intrinsics characterize camera and lens internal properties (e.g. focal length, principal point). The perspective projection model [3, 4], also known as pinhole model, is the most widely adopted camera model. For a fixed lens camera, its intrinsics are a constant matrix. Of course, this is not true for an OIS-activated camera. Cameras with alterable optical configurations like telephoto lens exist and have variable intrinsics [5, 6, 7, 8, 9]. To model this type of camera, control parameters of optical settings become inputs to intrinsic functions. Similarly, camera developers can obtain servo actuator measurement of OIS to estimate intrinsics, such as CIP-VMobile [10]. For regular users, most manufacturers do not provide the lens motion measurement. These factors make it difficult to directly model intrinsics as a function of lens motion, so we have to resort to a hardware-independent approach.
If we know the geometry property of the observed object, we can recover all camera parameters using an estimation method. This is often known as camera calibration. Such methods require an ample number of features and are often assisted with carefully-designed calibration patterns [6, 11, 12, 13, 14, 15, 16, 17] to increase accuracy. Among existing calibration methods, self-calibration [18, 19, 20, 21] (or auto-calibration) does not rely on a calibration pattern, and finds the camera intrinsics through projective geometry properties existing in image sequences (e.g. the absolute conic [3]). Existing calibration methods provide a way to estimate intrinsics, but they cannot be directly applied to our problem because 1) camera intrinsics are dynamic when OIS is activated, and 2) there may not be enough corresponding features in a single frame to recover intrinsics accurately. Therefore, we propose a neural network-based approach to address these problems. After being trained in lab settings, DIME-Net is able to capture the dynamic intrinsic properties and infer high quality intrinsics in applications with a small set of features from natural objects.
Although there is no prior work on the dynamic intrinsic estimation directly, the design of DIME-Net is inspired by existing progress in learning-based methods. Recent research begins to transfer the knowledge in geometry domain into the network architecture design for geometry related problem such as pose estimation [22, 23] and 3D reconstruction [24, 25, 26]. In camera pose estimation, PoseNet [22] uses CNNs and fully connected layers to solve camera pose regression. These works focus on camera poses which are extrinsics. Although being different from the intrinsic parameter estimation, their methods enforcing the geometric constraints (e.g. reprojection error [22], position and orientation error [27] ) into the loss function for training shed light on how to approach our problem. We employ reprojection error as loss function which is enabled by building on the recent progress on the PnP [2, 1, 28] problem. BPnP [2] considers the optimization as a layer and enables the backpropagation of network as a whole with the help of the implicit theorem. This eventually enables us to employ the reprojection error as our loss function in DIME-Net training design.
In our design, DIME-Net use MLP to approximate the manifold that characterizes dynamic intrinsics. There are existing methods using learning based approach for manifold and distance field approximation [29, 30]. Specifically, existing effort has been made on using MLP to represent manifold field. For example, Moser Flow [31] uses MLP to represent geometry manifold such as Torus. Pose-NDF [32] designs a generative model with geometry implicit function representing as feature to create human pose sequence in a manifold.
## III OIS Effect Mitigation Framework and Problem Definition
Let us first introduce the imaging process and analyze why the existing OIS effect mitigation scheme is problematic, before introducing our framework and problem definition.
### _Perspective Projection under OIS_
Before we introduce camera imaging, let us define the following coordinate systems and points in them,
* \(\{C\}\) is the 3D camera coordinate system (CCS), where its origin is at the camera center, and its X-axis and Y-axis are parallel to the horizontal and vertical axes of its 2D image coordinate system \(\{I\}\), respectively.
* \(\{W\}\) is a fixed 3D world coordinate system.
* \(\mathbf{x}\) is a homogeneous 3-vector describing a 2D point position in \(\{I\}\), \(\mathbf{x}\in\mathbb{P}^{2}\), the 2D projective space.
* \(\mathbf{X}\) is a 3-vector describing a 3D point position. As a convention, we use a left superscript to indicate the reference frame of 3D points. For example, \({}^{W}\mathbf{X}\) is a point in \(\{W\}\).
All 3D coordinate systems are right-handed system. For a regular camera and according to pinhole perspective projection model, it projects a 3D point \({}^{W}\mathbf{X}\) to a 2D image point \(\mathbf{x}\) that can be described by the following model [3]
\[\mathbf{x}=\lambda\mathrm{K}[^{C}_{W}\mathrm{R}\ ^{C}_{W}\mathbf{t}]\left[{}^{W} \mathbf{X}\atop 1\right], \tag{1}\]
where \(\lambda\) is a scalar and matrix \(\mathrm{K}=\begin{bmatrix}f_{x}&0&c_{x}\\ 0&f_{y}&c_{y}\\ 0&0&1\end{bmatrix}\) is the intrinsic matrix of the camera, \(f_{x}\) and \(f_{y}\) are focal lengths in pixel counts using pixel width and height, respectively, and \((c_{x},c_{y})\) is principal point location on the image. Note that this is a 4-DoF intrinsic matrix model which fits most cameras. We also call \(\mathrm{K}\) intrinsics for brevity. Similarly \([^{C}_{W}\mathrm{R},\ ^{C}_{W}\mathbf{t}]\in\mathcal{SE}(3)\), is called extrinsics. If 3D points live in \(\{C\}\), then \({}^{C}_{W}\mathrm{R}=\mathrm{I}_{3}\) becomes an identity matrix and \({}^{C}_{W}\mathbf{t}=\mathbf{0}_{3}\), a zero vector in \(\mathbb{R}^{3}\).
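A minimal numeric sketch of the projection model (1); the intrinsic values below are made-up examples, not calibrated parameters of any device:

```python
import numpy as np

def project(K, R, t, X_w):
    """Pinhole projection of eq. (1): x ~ K (R X_w + t), then dehomogenize."""
    x = K @ (R @ X_w + t)
    return x[:2] / x[2]

# Illustrative 4-DoF intrinsics: f_x = f_y = 1000 px, principal point (640, 360)
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
```

With \(\mathrm{R}=\mathrm{I}_3\) and \(\mathbf{t}=\mathbf{0}_3\), a point on the optical axis projects exactly onto the principal point, and the division by the third homogeneous coordinate is what makes the overall scale \(\lambda\) drop out.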
When the camera is equipped with OIS and OIS is activated, both the relative orientation and the distance between the lens and the 2D imaging sensor are no longer constants in the camera. From Fig. 1(a), each element in \(\mathrm{K}\) is a function of
lens pose \([\mathbf{R}_{\texttt{lens}},\mathbf{t}_{\texttt{lens}}]\in\mathcal{SE}(3)\). Hence we write it in function format \(\mathrm{K}(\mathrm{R}_{\texttt{lens}},\mathbf{t}_{\texttt{lens}})\).
One immediate thought would be whether we can directly model the function \(\mathrm{K}(\mathrm{R}_{\texttt{lens}},\mathbf{t}_{\texttt{lens}})\) based on the lens motion \((\mathbf{R}_{\texttt{lens}},\mathbf{t}_{\texttt{lens}})\). Unfortunately, it is very difficult to do so due to the lack of information about the OIS system design of each individual mobile device. Depending on how sophisticated the OIS system is, the camera lens may have up to 5 DoFs, although a typical mobile device camera may only have 2 rotational DoFs due to cost and size concerns. Lack of detailed information about the motion model is not the only issue. Also, we do not have access to the lens motion feedback since most device software development kits (SDKs) do not provide it. Finally, we do not know which time epoch the OIS aligns frames to. These factors determine that modelling OIS is not a viable approach and we have to opt for a data-driven approach that can be used across a wide range of devices and OIS types.
### _Existing OIS Effect Mitigation Scheme_
The dynamic intrinsics immediately lead to a problem for any vision algorithm that requires a constant or known \(\mathrm{K}\) matrix. More specifically, any 3D reconstruction or camera pose estimation algorithm would be severely impacted. Existing practices adopted by cellphone manufacturers such as Apple\({}^{\text{TM}}\) or Google\({}^{\text{TM}}\) often resort to a prior approximated camera intrinsic matrix, denoted as \(\mathrm{K}_{c}\), on reduced resolution images in their SDKs. Such a \(\mathrm{K}_{c}\) is often obtained by averaging a large number of \(\mathrm{K}\)'s at different OIS states or at the neutral stationary position. For \(\mathrm{K}_{c}\), we can associate it with extrinsics which define a unique camera frame \(\{C_{0}\}\).
In fact, we also have tested the prior \(\mathrm{K}_{c}\) in our experiment setup using the PnP problem as an example [1]. Denote the \(i\)-th \(2\)D point as \(\mathbf{x}_{i}\). The \(2\)D and \(3\)D point correspondences are defined as \(\{\mathbf{x}_{i}\leftrightarrow{}^{W}\mathbf{X}_{i}:i=1,\cdots,n\}\), where \(n\) is the total number of point correspondences. The PnP algorithm computes the camera pose using the 2D-3D correspondences by minimizing the reprojection error
\[[^{C}_{W}\mathrm{R},\ ^{C}_{W}\mathbf{t}]=\mathrm{argmin}\sum_{i}\left\| \lambda_{i}\mathrm{K}(^{C}_{W}\mathrm{R}^{W}\mathbf{X}_{i}+^{C}_{W}\mathbf{t} )-\mathbf{x}_{i}\right\|_{\Sigma}^{2}, \tag{2}\]
where \(\|\cdot\|_{\Sigma}\) is the Mahalanobis norm with the covariance matrix \(\Sigma\) for pixel location distribution.
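The residual in (2) is just an average pixel distance between projected 3D points and their 2D observations. A minimal sketch, using an identity covariance in place of the Mahalanobis norm (values in the test are synthetic, not from the paper's experiments):

```python
import numpy as np

def mean_reprojection_error(K, R, t, X_w, x_obs):
    """Average pixel distance between projected 3D points and 2D observations,
    i.e. the residual minimized in (2) with an identity covariance."""
    errs = []
    for Xw, xo in zip(X_w, x_obs):
        xc = K @ (R @ np.asarray(Xw) + t)
        errs.append(np.linalg.norm(xc[:2] / xc[2] - np.asarray(xo)))
    return float(np.mean(errs))
```

Evaluating this metric with a wrong \(\mathrm{K}\) (e.g. a prior \(\mathrm{K}_{c}\) while OIS has shifted the true intrinsics) produces exactly the kind of residual growth described next.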
To test the quality of \(\mathrm{K}_{c}\), we assume \(\mathrm{K}=\mathrm{K}_{c}\) when solving (2). Our point correspondences are from a precise calibration pattern and serve as inputs for the PnP. When the camera resolution is \(4032\times 3024\), the resulting average reprojection error from PnP is about \(3.44\) pixels for an OIS-equipped Samsung Galaxy 8 phone camera, whereas that of a camera without OIS can reach \(0.42\) pixels under the same settings. In real world applications, the average reprojection error would be much higher because pixelization error from a real scene is much higher than that of the precise and sharp inputs from the calibration pattern. Higher error would make it hard for the algorithm to converge under noisy inputs. Consequently, the existing practice is to lower the image resolution to increase the pixel size. This approach sacrifices image resolution and camera pose accuracy for algorithm stability, which is not ideal because we cannot fully utilize the true potential of the camera resolution.
### _OIS Intrinsics Rectification Framework_
One immediate idea is to rectify \(\mathrm{K}_{c}\). If an accurate \(\mathrm{K}\) can somehow be obtained in real time, then the problem is solved. Again, let us use PnP as an example to show how such an approach works. It is worth noting that our framework can easily be extended to other applications in 3D scene reconstruction or motion estimation. One quick thought would be to add \(\mathrm{K}\) as an additional decision variable in the estimation problem in (2). Unfortunately, this does not work because the number of point correspondences in an application is usually insufficient, or the points are unevenly distributed, which fails the necessary conditions for estimating a good-quality \(\mathrm{K}\).
Since we do not have a clear pathway to estimate \(\mathrm{K}\) analytically, the question becomes whether we can find a data-driven approach. The overall framework is illustrated in Fig. 1(b) with three main blocks as follows.
The first step (Box 1(a)) is the initial pose estimation using the prior \(\mathrm{K}_{c}\). We know this pose estimate will not be accurate enough, but its residual errors are caused by the discrepancy between \(\mathrm{K}_{c}\) and the actual \(\mathrm{K}\) and hence are an important input to the next step.
The second step (Box 1(b)) recovers \(\mathrm{K}\), and the third step (Box 1(c)) is pose refinement with the newly-obtained \(\mathrm{K}\), which simply re-solves the PnP problem with the new \(\mathrm{K}\). It is clear that the second step is the key problem here. Let us define this problem,
**Definition 1**: _Given \(\mathrm{K}_{c}\) and \(n\) point correspondences \(\{\mathbf{x}_{i}\leftrightarrow{}^{C_{0}}\mathbf{X}_{i}\}_{i=1}^{n}\), design and train DIME-Net to represent \(f_{\mathrm{K}}\) manifold that can be used to predict the dynamic intrinsic camera matrix \(\mathrm{K}\)._
Here we assume that nonlinear lens distortion has been removed from images. Cameras with OIS usually have nonlinear lens distortion removed to facilitate OIS.
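The three blocks can be orchestrated as in the sketch below; `rectify_and_refine`, `solve_pnp` and `dime_net` are hypothetical stand-ins (not names from the paper) for the orchestration, a PnP solver, and the trained network, injected as callables:

```python
import numpy as np

def rectify_and_refine(pts2d, pts3d, K_c, solve_pnp, dime_net):
    """Sketch of the three-block framework: (a) initial pose with the prior K_c,
    (b) intrinsics recovery from the residuals, (c) pose refinement with the
    rectified K. `solve_pnp` and `dime_net` are injected stand-ins."""
    # (a) Initial pose estimation with K_c; residuals encode the K_c-vs-K gap.
    R0, t0 = solve_pnp(pts2d, pts3d, K_c)
    cam = (R0 @ pts3d.T).T + t0
    proj = (K_c @ cam.T).T
    residual = proj[:, :2] / proj[:, 2:3] - pts2d   # Delta x per correspondence
    # (b) Predict the dynamic intrinsics from the discrepancy features.
    K = K_c + dime_net(residual, cam)
    # (c) Re-solve PnP with the rectified intrinsics.
    R, t = solve_pnp(pts2d, pts3d, K)
    return K, R, t

# Smoke test with trivial stand-ins: an identity "solver" and a zero network
# leave K unchanged, confirming only the orchestration.
K_c = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
pts3d = np.array([[0.0, 0.0, 2.0], [0.1, -0.1, 3.0]])
hom = (K_c @ pts3d.T).T
pts2d = hom[:, :2] / hom[:, 2:3]
K, R, t = rectify_and_refine(
    pts2d, pts3d, K_c,
    solve_pnp=lambda p2, p3, K: (np.eye(3), np.zeros(3)),
    dime_net=lambda res, cam: np.zeros((3, 3)))
print(np.allclose(K, K_c))   # True: a zero correction keeps K = K_c
```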
## IV DIME-Net Design and Training
The OIS actuation-caused \(\mathrm{K}\) variation can be considered as an \(f_{\mathrm{K}}\) manifold, despite the fact that we do not have a closed-form representation of \(\mathrm{K}(\mathrm{R}_{\texttt{lens}},\mathbf{t}_{\texttt{lens}})\). In fact, the lens pose \([\mathrm{R}_{\texttt{lens}},\mathbf{t}_{\texttt{lens}}]\) is just the camera extrinsics \([^{C}_{W}\mathrm{R},\ ^{C}_{W}\mathbf{t}]\) in a different reference system under the actual \(\mathrm{K}\). Therefore, we know that these point correspondences have to satisfy (1). On the other hand, Step 1 of Sec. III-C also produces projected points
\[\mathbf{x}_{c}=\lambda_{c}\mathrm{K}_{c}\begin{bmatrix}^{C_{0}}_{W}\mathrm{R} \ ^{C_{0}}_{W}\mathbf{t}\\ \end{bmatrix}\begin{bmatrix}^{W}\mathbf{X}\\ \end{bmatrix}, \tag{3}\]
where corresponding variables with subscription \(c\) indicate that they are estimated based on \(\mathrm{K}_{c}\). Define \(\Delta\mathbf{x}=\mathbf{x}_{c}-\mathbf{x}\). We know that
\[\Delta\mathbf{x}=\left\{\lambda_{c}\mathrm{K}_{c}\begin{bmatrix}^{C_{0}}_{W} \mathrm{R}\ ^{C_{0}}_{W}\mathbf{t}\\ \end{bmatrix}-\lambda\mathrm{K}\begin{bmatrix}^{C}_{W}\mathrm{R}\ ^{C}_{W}\mathbf{t}\\ \end{bmatrix}\right\}\begin{bmatrix}^{W}\mathbf{X}\\ \end{bmatrix}. \tag{4}\]
With the same point correspondences, keep in mind that extrinsics \([^{C_{0}}_{W}\mathrm{R},\ ^{C_{0}}_{W}\mathbf{t}]\) and \([^{C}_{W}\mathrm{R},\ ^{C}_{W}\mathbf{t}]\) are functions of corresponding intrinsics \(\mathrm{K}_{c}\) and \(\mathrm{K}\), respectively. This means that
(4) defines an input-dependent \(f_{\mathrm{K}}\) manifold:
\[f_{\mathrm{K}}(\mathrm{K},\mathrm{K}_{c},\{\Delta\mathbf{x}_{i},\mathbf{X}_{i}, \forall i\})=0. \tag{5}\]
It is not difficult to see that \(f_{\mathrm{K}}\) becomes less dependent on individual \(\{\mathbf{x}_{i},\mathbf{X}_{i}\}\) as \(n\) grows large. At this stage, \(f_{\mathrm{K}}\) can be used to predict \(\mathrm{K}\) even for a small number of correspondences. This inspires us to develop a data-driven approach to represent the \(f_{\mathrm{K}}\) manifold using our DIME-Net. Constructing the approximated \(f_{\mathrm{K}}\) manifold, i.e., the mapping from the input feature vector built from \(\mathrm{K}_{c}\) and \(\{\Delta\mathbf{x}_{i},\mathbf{X}_{i},\forall i\}\) to the dynamic \(\mathrm{K}\), is the DIME-Net training process. It can be done with carefully-collected data under different OIS states using calibration patterns in lab settings. Later, in the application, this DIME-Net can be used as a \(\mathrm{K}\) predictor.
Fig. 2 shows our DIME-Net architecture. We first design the input for DIME-Net, which converts the point correspondences and the prior camera matrix into the 1D OIS discrepancy feature. Given the 1D OIS discrepancy feature, DIME-Net utilizes an MLP to rectify the camera intrinsics. We explain below how we design DIME-Net in terms of its unique feature, network structure and loss function.
### _OIS Discrepancy Feature_
Let us begin with notation. Denote the 3D position \({}^{C_{0}}\mathbf{X}_{i}:=[X_{i},Y_{i},Z_{i}]^{\mathsf{T}}\in\mathbb{R}^{3}\) and the corresponding 2D pixel position \(\tilde{\mathbf{x}}_{i}:=[x_{i},y_{i}]^{\mathsf{T}}\in\mathbb{R}^{2}\), where the symbol ~ on a variable means that it is in inhomogeneous coordinates. Note that the 3D points are in \(\{C_{0}\}\), so the manifold is defined in \(\{C_{0}\}\) instead of \(\{W\}\) as in (4). This change makes the neural network insensitive to the choice of world coordinate system. There are 3 steps to obtain the input feature of the neural network: (1) point-based OIS discrepancy feature conversion, (2) grid-based OIS discrepancy feature conversion and (3) 1D OIS discrepancy feature flattening.
#### IV-A1 Point-Based OIS Discrepancy Feature
Each point-based OIS discrepancy feature is composed of two main components: (1) the inhomogeneous representation of \(\Delta\mathbf{x}\), named the projection model discrepancy (PMD) feature because it is the 2D reprojection error between the observed image points and their reprojected points using \(\mathrm{K}_{c}\), and (2) the 3D point position in \(\{C_{0}\}\). The PMD is the direct result of the \(\mathrm{K}\) change introduced by OIS [33] when the 3D point positions in \(\{C_{0}\}\) are given.
Denote the PMD of \(\mathbf{x}_{i}\leftrightarrow{}^{C_{0}}\mathbf{X}_{i}\) as \(\left[\Delta x_{i},\Delta y_{i}\right]^{\mathsf{T}}\in\mathbb{R}^{2}\), and we have
\[\begin{bmatrix}\Delta x_{i}\\ \Delta y_{i}\end{bmatrix}=\begin{bmatrix}\mathbf{k}_{c}^{\mathsf{1}}\\ \mathbf{k}_{c}^{\mathsf{2}}\end{bmatrix}\begin{bmatrix}X_{i}/Z_{i}\\ Y_{i}/Z_{i}\\ 1\end{bmatrix}-\begin{bmatrix}x_{i}\\ y_{i}\end{bmatrix}, \tag{6}\]
where \(\mathbf{k}_{c}^{j}\) is the \(j\)-th row of \(\mathrm{K}_{c}\). It is worth noting that (6) is the simplification of (4) in \(\{C_{0}\}\). We then concatenate PMD and the \(3\)D point position in \(\{C_{0}\}\) to form the point-based OIS discrepancy feature. Denote the point-based OIS discrepancy feature of \(\mathbf{x}_{i}\leftrightarrow{}^{C_{0}}\mathbf{X}_{i}\) as \(\mathbf{f}_{i}\) which is defined as
\[\mathbf{f}_{i}:=\left[\Delta x_{i},\Delta y_{i},X_{i},Y_{i},1/Z_{i}\right]^{\mathsf{T}}\in\mathbb{R}^{5}. \tag{7}\]
It is worth noting that here we employ the inverse depth \(1/Z_{i}\), since the projection in (6) depends linearly on the inverse depth through \(\mathbf{k}_{c}^{\mathsf{1}}\) and \(\mathbf{k}_{c}^{\mathsf{2}}\), which makes the model more linear and a better fit for the neural network.
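A minimal numpy sketch of the feature construction in (6)-(7), with the five components matching those averaged in (9); the function and variable names are our own, and the closing check assumes the 2D points were generated by \(\mathrm{K}_{c}\) itself so the PMD part vanishes:

```python
import numpy as np

def point_based_features(K_c, pts2d, pts3d_c0):
    """Point-based OIS discrepancy features f_i = [dx, dy, X, Y, 1/Z],
    with the PMD computed against the prior K_c (eqs 6-7)."""
    X, Y, Z = pts3d_c0[:, 0], pts3d_c0[:, 1], pts3d_c0[:, 2]
    norm = np.stack([X / Z, Y / Z, np.ones_like(Z)], axis=1)  # normalized coords
    pmd = norm @ K_c[:2].T - pts2d                            # eq (6): [dx_i, dy_i]
    return np.column_stack([pmd, X, Y, 1.0 / Z])              # eq (7), in R^5

# If the 2D points were produced by K_c exactly, the PMD part is ~0.
K_c = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
pts3d = np.array([[0.2, -0.1, 2.0], [-0.3, 0.4, 4.0]])
hom = (K_c @ pts3d.T).T
pts2d = hom[:, :2] / hom[:, 2:3]
feats = point_based_features(K_c, pts2d, pts3d)
print(feats.shape)    # (2, 5)
print(feats[:, :2])   # ~0: no discrepancy when K = K_c
```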
#### IV-A2 Grid-Based OIS Discrepancy Feature
However, there are two remaining issues when using the point-based features: 1) the number of point-based features is not fixed because it is input-dependent, and 2) the order of the features should be irrelevant; in fact, the features should be related to 2D position in the image. If we blindly fed the point-based features into a neural network, we would run into problems because 1) the neural network needs a fixed input dimension, and 2) the neural network would inevitably learn the order of the inputs instead of their spatial location in the image. To address these issues, we convert the point-based features into grid-based features by using grid cells and merging the feature information within each grid cell. This fixes the input dimension and order issues since there is a constant number of grid cells and we can arrange the grid features using the lexicographic order of the cells.
First, we create a 2D feature map which has the same size as the original image but with 5 channels, containing the point-based OIS discrepancy feature \(\mathbf{f}_{i}\) for each point correspondence. The feature is indexed by its 2D point position
Fig. 2: DIME-Net architecture and training scheme. This pipeline reflects the process of training the DIME-Net. In the inference stage, a user only needs the grey box to estimate \(\mathrm{K}\) and the pose can be calculated using standard PnP algorithm [1].
\((x_{i},y_{i})\) in the 2D feature map. Next we divide the 2D point-based feature map into a 2D grid pattern consisting of \(u\times v\) equal-sized square grid cells. For each cell, we average the point-based features in the cell to obtain the corresponding grid-based feature. Let \(\big{[}a_{k},b_{j}\big{]}^{\mathsf{T}}\) be the bottom-right corner pixel position of the grid cell in the \(j\)-th row and the \(k\)-th column of the 2D grid pattern. The point-based feature set residing in the grid cell in the \(j\)-th row and the \(k\)-th column is defined as
\[\mathcal{F}_{j,k}:=\Big{\{}\mathbf{f}_{i}:x_{i}\in[a_{k-1},a_{k})\text{ and }y_{i}\in[b_{j-1},b_{j})\Big{\}}. \tag{8}\]
Denote the grid-based OIS discrepancy feature of the grid in the \(j\)-th row and the \(k\)-th column as \(\mathbf{y}_{j,k}\). The grid-based OIS discrepancy feature \(\mathbf{y}_{j,k}\) is defined as
\[\begin{split}\mathbf{y}_{j,k}:=&\frac{1}{| \mathcal{F}_{j,k}|}\sum_{\mathbf{f}_{i}\in\mathcal{F}_{j,k}}\mathbf{f}_{i}\\ =&\Big{[}\Delta\overline{x}_{j,k},\Delta\overline{y }_{j,k},\overline{X}_{j,k},\overline{Y}_{j,k},\overline{1/Z}_{j,k}\Big{]}^{ \mathsf{T}}\in\mathbb{R}^{5},\end{split} \tag{9}\]
where \(|\cdot|\) is the set cardinality and symbol \(\ {}^{-}\) on a variable indicates the average value.
#### IV-A3 1D OIS Discrepancy Feature Flattening
We flatten the grid-based features into one dimension as the final input to the MLP in the next step. Denote the flattened feature vector as \(\mathbf{y}\). The flattened vector \(\mathbf{y}\) is obtained by concatenating the grid-based features,
\[\mathbf{y}:=\big{[}\mathbf{y}_{1,1}^{\mathsf{T}},\mathbf{y}_{1,2}^{\mathsf{T} },\ \ldots,\ \mathbf{y}_{u,v}^{\mathsf{T}}\big{]}^{\mathsf{T}}\in\mathbb{R}^{m_{y}}, \tag{10}\]
where \(u\) and \(v\) are the numbers of the grid cells in row and column, respectively, and dimension \(m_{y}=5\cdot u\cdot v\).
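The gridification and flattening in (8)-(10) can be sketched as follows; the row/column bookkeeping and the convention that empty cells stay all-zero (matching the zero-input behaviour discussed later) are assumptions of this sketch:

```python
import numpy as np

def flattened_grid_features(feats, pts2d, img_w, img_h, n_rows, n_cols):
    """Average point features into an n_rows x n_cols grid (eq 9) and flatten
    the cells in lexicographic order (eq 10). Empty cells stay all-zero."""
    cell_w, cell_h = img_w / n_cols, img_h / n_rows
    acc = np.zeros((n_rows, n_cols, feats.shape[1]))
    cnt = np.zeros((n_rows, n_cols))
    cols = np.minimum((pts2d[:, 0] // cell_w).astype(int), n_cols - 1)
    rows = np.minimum((pts2d[:, 1] // cell_h).astype(int), n_rows - 1)
    for r, c, f in zip(rows, cols, feats):     # eq (8): assign points to cells
        acc[r, c] += f
        cnt[r, c] += 1
    filled = cnt > 0
    acc[filled] /= cnt[filled][:, None]        # eq (9): per-cell average
    return acc.reshape(-1)                     # eq (10): length 5 * n_rows * n_cols

# Two points landing in the same cell are averaged; output length is 5*u*v.
feats = np.array([[1.0, 2.0, 3.0, 4.0, 5.0], [3.0, 2.0, 1.0, 0.0, 1.0]])
pts2d = np.array([[10.0, 10.0], [20.0, 15.0]])   # both fall in cell (0, 0)
y = flattened_grid_features(feats, pts2d, 4032, 3024, 6, 8)
print(y.shape)   # (240,)
print(y[:5])     # [2. 2. 2. 2. 3.]
```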
### _DIME-Net Architecture and Loss Function_
As illustrated in Fig. 2, in order to learn the \(f_{\mathrm{K}}\) manifold, we design a generative model using a multi-layer perceptron to generate the dynamic \(\mathrm{K}\) from \(\mathbf{y}\). We employ the geometric error as the loss function to link the network's performance to the camera projection model.
#### IV-B1 Multi-layer Perceptron
As introduced in the previous sections, \(\mathbf{y}\) is the feature vector of the point correspondences that describes the camera OIS effect. Our goal is to generate the camera intrinsics \(\mathrm{K}\) from \(\mathbf{y}\). From the OIS feature perspective, the intrinsics \(\mathrm{K}\) is a latent variable that directly describes the camera model. We design an MLP as an autoencoder-style mapping from the high-dimensional feature variable \(\mathbf{y}\) to the low-dimensional latent variable \(\mathrm{K}\). Specifically, we employ a fully connected 3-layer perceptron to generate \(\mathrm{K}\). We design the network output to be \(\Delta\mathrm{K}=\mathrm{K}-\mathrm{K}_{c}\). The output layer has 4 nodes that represent the four components of \(\Delta\mathrm{K}\): \(\Delta f_{x}\), \(\Delta f_{y}\), \(\Delta c_{x}\) and \(\Delta c_{y}\). This design helps regularize the network. In the special case when the input vector \(\mathbf{y}=\mathbf{0}_{m_{y}}\), the network should output \(\mathbf{0}_{4}\) so that \(\Delta\mathrm{K}=0_{3\times 3}\) and \(\mathrm{K}=\mathrm{K}_{c}\) due to lack of information, which also ensures network stability.
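The zero-input behaviour can be guaranteed structurally, e.g., with a bias-free perceptron and an odd activation; the paper does not specify layer sizes or this exact mechanism, so the sketch below (illustrative hidden widths, tanh activations, no biases) is only one way to obtain the stated property that \(\mathbf{y}=\mathbf{0}\) yields \(\Delta\mathrm{K}=0\):

```python
import numpy as np

def mlp_delta_k(y, weights):
    """Bias-free 3-layer perceptron mapping the OIS feature y to the four
    intrinsics offsets [dfx, dfy, dcx, dcy]. With no bias terms and
    tanh(0) = 0, a zero feature vector yields a zero Delta K."""
    h = y
    for W in weights[:-1]:
        h = np.tanh(W @ h)
    return weights[-1] @ h

rng = np.random.default_rng(1)
m_y = 5 * 8 * 6                          # feature length for an 8 x 6 grid
weights = [rng.normal(size=(256, m_y)),  # hidden widths are illustrative
           rng.normal(size=(256, 256)),
           rng.normal(size=(4, 256))]
print(mlp_delta_k(np.zeros(m_y), weights))   # [0. 0. 0. 0.] -> K = K_c
```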
#### IV-B2 BPnP Layer in Training and Loss Function Design
For the network training, we employ the reprojection error as the loss function. This directly ties network performance to model quality. Given the predicted intrinsics \(\mathrm{K}\), the extrinsics \(\mathrm{\overset{C}{W}R}\) and \(\mathrm{\overset{C}{W}\mathbf{t}}\), and the point correspondences \(\{\mathbf{x}_{i}\leftrightarrow\overset{W}{\mathbf{X}}_{i}\}\), the loss can be calculated by
\[L_{\text{rep}}=\sum_{i}\norm{\lambda_{i}\mathrm{K}(\overset{C}{W}\mathrm{R}^{W} \mathbf{X}_{i}+\overset{C}{W}\mathbf{t})-\mathbf{x}_{i}}^{2}. \tag{11}\]
Note that we need the extrinsics \([\overset{C}{W}\mathrm{R},\overset{C}{W}\mathbf{t}]\) to compute the loss function. To obtain the extrinsics and enable end-to-end training, we connect the network with a BPnP layer [2] that estimates \([\overset{C}{W}\mathrm{R},\overset{C}{W}\mathbf{t}]\) from the predicted \(\mathrm{K}\). Compared with a general PnP solver, BPnP treats the optimization as a layer and enables backpropagation through the network as a whole with the help of the implicit function theorem [34]. Using the reprojection error [3] as our loss function makes the overall model akin to a maximum likelihood estimator. Common loss functions like the L1 or L2 norm are algebraic distances, which are not robust and can lead to spurious solutions since they carry no geometric meaning. The reprojection error, on the other hand, is a geometric distance. Hence, the loss function in (11) can guide the network in learning the \(f_{\mathrm{K}}\) manifold.
### _Training and Inference Using DIME-Net_
To gather good training samples, as shown in Fig. 3(a), we have designed a calibration rig. It contains 4 checkerboard patterns located on 4 different planes. Each checkerboard pattern contains \(8\times 10\) inner vertices and is mounted on planar glass to ensure flatness. Each cell side length is \(22.0\) mm. The 3D point positions are computed in \(\{C_{0}\}\) and the 2D points are read out from the vertex coordinates in the image. It is worth noting that the 4-checkerboard rig design allows us to directly obtain \(\mathrm{K}\) for each image using a calibration procedure, because there are enough inputs to estimate both \(\mathrm{K}\) and the extrinsics. This is very important in training and verification because it provides ground truth. With this setup, we can obtain a set of 2D-3D correspondences with a moving camera at different perspectives that covers the normal working range of the camera.
We use the accurate point correspondences to obtain the feature vector \(\mathbf{y}\) for the network training by monitoring the convergence of the loss function in (11). The good coverage of the training data ensures that our neural network can approximate \(f_{\mathrm{K}}\) manifold with good accuracy.
With a trained network, we can deploy it for inference in applications. Our DIME-Net has the ability to predict \(\Delta\mathrm{K}\)
Fig. 3: (a) Example image of the training inputs for DIME-Net using the calibration rig. (b) Example image of natural object feature test setup where two LEGO buildings are the natural objects.
given the input feature vector in (10) converted from the 2D-3D point correspondence set.
## V Experiments
We have implemented our DIME-Net using PyTorch [35]. We first perform an ablation study of DIME-Net. Then we evaluate its inference accuracy using both the calibration rig and natural object features. Let us first introduce our OIS datasets.
### _Calibration Rig OIS Datasets_
We have collected data under the OIS effect using our calibration rig detailed in Sec. IV-C. To activate OIS, we hand-hold the camera and capture images at different poses. We use three different cameras, as detailed in Tab. I. For each camera, we collect a dataset and split it into a training set and a testing set (shown as numbers of images in the "Train" and "Test" columns).
For each device, we also obtain \(\mathrm{K}_{c}\) according to the method in Sec. III-C. In addition, for each image, we use the 4-board rig as input to estimate \(\mathrm{K}^{\star}\) using a camera calibration method. The calibration process yields a reprojection error \(e^{\star}\). The average reprojection errors are shown in the \(\text{Avg}(e^{\star})\) column, in pixels, and provide a baseline for the best possible reprojection error.
### _DIME-Net Ablation Study_
Here we test the impact of different feature setups on DIME-Net performance, using the Samsung Galaxy S8 data from Tab. I.
#### V-B1 2D Grid Resolution and Occupancy Tests
Now we test how the 2D grid resolution and grid cell occupancy affect DIME-Net performance under the reprojection error \(e\). Define the average reprojection error as \(\text{Avg}(e)\), which is used as the primary metric. The 2D grid resolution \(u\times v\) determines the number of point-based OIS discrepancy features in each cell and affects the uncertainty of the grid-based OIS discrepancy features, which are the direct inputs of DIME-Net. The 2D grid occupancy, on the other hand, indicates the distribution of the OIS information preserved. As shown in Tab. II, we chose 3 different 2D grid resolutions: \(16\times 12\), \(12\times 9\) and \(8\times 6\). To simulate the occupancy, we uniformly sample cells and empty the 2D and 3D point correspondences in them. The ratio of emptied cells is measured by \(\eta=1-\frac{m_{p}^{\prime}}{m_{p}}\), where \(m_{p}\) and \(m_{p}^{\prime}\) are the numbers of cells with non-zero OIS features before and after the sampling, respectively. The 2D grid occupancy is then measured by \(\gamma=\frac{m_{p}}{u\times v}\).
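For concreteness, the two occupancy measures can be computed directly from cell counts; the counts below are made up for illustration:

```python
def occupancy_metrics(m_p, m_p_after, u, v):
    """Emptied-cell ratio eta = 1 - m_p'/m_p and grid occupancy gamma = m_p/(u*v),
    where m_p and m_p' count cells with non-zero OIS features before and
    after the uniform cell-emptying, respectively."""
    eta = 1.0 - m_p_after / m_p
    gamma = m_p / (u * v)
    return eta, gamma

# e.g. an 8 x 6 grid with 40 occupied cells, 10 of which are emptied
eta, gamma = occupancy_metrics(40, 30, 8, 6)
print(eta, gamma)   # 0.25 0.8333...
```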
Tab. II shows that the 2D grid resolution of \(8\times 6\) achieves the lowest \(\text{Avg}(e)\). This is expected, since the size of each cell and the number of point-based OIS discrepancy features per cell increase as the grid resolution reduces. The average in (9) reduces feature noise when there are more point features in each cell. The lowest \(\text{Avg}(e)\) of \(0.68\) pixels is close to the calibration accuracy of \(\text{Avg}(e^{\star})=0.45\) in Tab. I, which confirms that our DIME-Net works effectively in learning the \(f_{\mathrm{K}}\) manifold. The results also show the effectiveness of the DIME-Net feature design, because it is capable of predicting intrinsics even when the grid occupancy is extremely low.
#### V-B2 OIS Discrepancy Feature Tests
Next, we examine OIS discrepancy feature components in (6). Again, \(\text{Avg}(e)\) in pixels is used as the metric. We choose \(8\times 6\) for the \(2\)D grid resolution since it has the best performance. We compare five different setups.
* **A**: Complete OIS discrepancy feature in (9) using both PMD and 3D point positions.
* **B**: Only use PMD in (6).
* **C**: Combine PMD with the inverse depth \(1/Z\).
* **D**: Similar to "C", but combine PMD with the \(X\) and \(Y\) positions of the 3D points.
* **E**: Only use 3D point positions.
Tab. III shows that option A achieves the lowest \(\text{Avg}(e)\), which means that all features are necessary to achieve the best result. This is not surprising, since (4) has told us so. What is interesting is that the performance of options B-D is only slightly worse than that of A, which indicates that PMD is the dominating feature.
### _Inference Accuracy Comparison_
Knowing the best setup for DIME-Net, we are ready to compare it to the state of the art in inference tests.
**Evaluation Metric for Accuracy:** We use \(\text{Avg}(e)\) as the basic performance metric. From Sec. III-B, we know the popular existing approach is to employ the prior \(\mathrm{K}_{c}\), which is obtained when the camera is stationary or by averaging a large number of \(\mathrm{K}\)'s under different OIS states. Let us define \(\text{Avg}(e_{c})\) as the average reprojection error when only using \(\mathrm{K}_{c}\).
We also set up the baseline for comparison. The baseline is characterized by \(\mathrm{K}^{\star}\), the best intrinsics one can obtain for the test case. Recall that \(\text{Avg}(e^{\star})\) is its reprojection error. \(\text{Avg}(e^{\star})\) reflects the noise in the points, i.e., the level of noise that cannot be canceled by adjusting the intrinsics without over-fitting. It is not difficult to see that \(\text{Avg}(e^{\star})\leq\text{Avg}(e_{c})\) given a reasonably large population of point correspondences. It is also clear that, if our design is effective, then \(\text{Avg}(e)\) should fall between the two. The closer \(\text{Avg}(e)\) is to \(\text{Avg}(e^{\star})\), the better. This can be measured by a new metric: the average reprojection error reduction ratio,
\[\rho=\frac{\text{Avg}(e_{c})-\text{Avg}(e)}{\text{Avg}(e_{c})-\text{Avg}(e^{ \star})}. \tag{12}\]
Higher \(\rho\) is more desirable, and \(0\leq\rho\leq 1\). Now we are ready to compare the inference quality under different datasets.
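Equation (12) is straightforward to evaluate. The sketch below plugs in numbers quoted earlier in the text (\(\text{Avg}(e_{c})\approx 3.44\) px from the initial PnP test, \(\text{Avg}(e)=0.68\) px, \(\text{Avg}(e^{\star})=0.45\) px); pairing them this way is our illustration, not a table entry from the paper:

```python
def error_reduction_ratio(avg_e_c, avg_e, avg_e_star):
    """Average reprojection error reduction ratio rho in (12); rho approaches 1
    as the predicted K approaches the per-image calibrated K*."""
    return (avg_e_c - avg_e) / (avg_e_c - avg_e_star)

print(error_reduction_ratio(3.44, 0.68, 0.45))   # ~0.923
```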
#### V-C1 Calibration Rig Inference Accuracy Tests
The first test is done with data shown in Tab. I based on the calibration rig data.
**Point Dropping and Noise Injection Tests:** We want to test the inference accuracy of DIME-Net after we decrease the number of point correspondences and/or inject noise into the 2D and 3D points. This is important because real-world applications do not always have an ample number of features at calibration-board accuracy. To generate the testing condition of decreased point numbers, we uniformly sample the point correspondences to be dropped. For noise injection, we inject zero-mean Gaussian noise with standard deviations \(\sigma_{x}^{+}\) and \(\sigma_{X}^{+}\) into the 2D and 3D points, respectively. It is worth noting that the injected noise \(\sigma_{x}^{+}\) and \(\sigma_{X}^{+}\) is additional noise added on top of the checkerboard vertices. The units of \(\sigma_{x}^{+}\) and \(\sigma_{X}^{+}\) are pixels and mm, respectively. In this test, we use the Samsung Galaxy S8 camera with an \(8\times 6\) 2D grid resolution for DIME-Net.
Tab. IV shows the results. Note that in our experimental setup \(1\) mm corresponds to about \(5.45\) pixels (px). The upper half of the table shows the results with zero injected noise (\(\sigma_{x}^{+}=0\) px, \(\sigma_{X}^{+}=0\) mm), and the lower half shows the results with \(\sigma_{x}^{+}=3\) px and \(\sigma_{X}^{+}=0.1\) mm. The average reprojection error reduction ratio \(\rho\) shows that our DIME-Net is insensitive to the injected noise and to low numbers of point correspondences. It remains close to or above 90% until the sample size drops to 64 in either case. Our design is thus effective and robust against point dropping and noisy inputs.
**Multi-device Tests:** We repeat the tests for all three devices using the data in Tab. I under the same settings as the upper half of Tab. IV. The results in the upper half of Tab. V are consistent with the previous tests: our DIME-Net achieves over 91% in \(\rho\) in all cases.
system and proposed to use a gridified PMD feature set along with 3D point positions to train DIME-Net using calibration patterns. The trained network becomes an approximation of the intrinsics manifold that can predict rectified intrinsics in application. We have implemented and extensively tested our design. The experimental results confirmed that our design is robust and effective and can significantly reduce the reprojection error. In the future, we will improve our design with better geometric insights and external sensors to further reduce the reliance on the number of features required.
## Acknowledgment
We thank D. Shell, Y. Xu and Z. Shaghaghian for their insightful discussions. We are also grateful to A. Kingery, F. Guo, C. Qian, and Y. Jiang for their inputs and feedback.
# From Continuous Dynamics to Graph Neural Networks: Neural Diffusion and Beyond

Andi Han, Dai Shi, Lequan Lin, Junbin Gao

2023-10-16 (arXiv:2310.10121, http://arxiv.org/abs/2310.10121v2)

###### Abstract
Graph neural networks (GNNs) have demonstrated significant promise in modelling relational data and have been widely applied in various fields of interest. The key mechanism behind GNNs is the so-called message passing, where information is iteratively aggregated to central nodes from their neighbourhood. Such a scheme has been found to be intrinsically linked to a physical process known as heat diffusion, where the propagation of GNNs naturally corresponds to the evolution of heat density. Analogizing the process of message passing to heat dynamics allows one to fundamentally understand the power and pitfalls of GNNs and consequently informs better model design. Recently, a plethora of works has emerged that propose GNNs inspired by the continuous dynamics formulation, in an attempt to mitigate the known limitations of GNNs, such as oversmoothing and oversquashing. In this survey, we provide the first systematic and comprehensive review of studies that leverage the continuous perspective of GNNs. To this end, we introduce foundational ingredients for adapting continuous dynamics to GNNs, along with a general framework for the design of graph neural dynamics. We then review and categorize existing works based on their driving mechanisms and underlying dynamics. We also summarize how the limitations of classic GNNs can be addressed under the continuous framework. We conclude by identifying multiple open research directions.
## 1 Introduction
Graph neural networks (GNNs) [58, 91, 38] have emerged as one of the most popular choices for processing and analyzing relational data. The main goal of GNNs is to acquire expressive representations at node, edge or graph level, primarily for tasks including but not limited to node classification, link prediction and graph property prediction. Such a learning process requires not only utilizing the node features but also the underlying graph topology. GNNs have been successful in various fields of application where relational structures are critical for learning quality, including recommendation systems [31], transportation networks [54], particle system [82], molecule and protein designs [50, 95], neuroscience [8] and material science [100], among many others.
The effectiveness of GNNs is primarily rooted in the message passing paradigm, in which information is iteratively aggregated among the neighbouring nodes to update the representation of the central node. One prominent example is the graph convolutional network (GCN) [58], where each layer updates a node's representation by taking a degree-weighted average of its neighbours' representations. While effective in capturing dependencies among the nodes, the number of message passing layers needs to be carefully chosen so as to avoid performance degradation. This is unlike classic (feedforward) neural networks, where increasing
depth generally leads to improved predictive performance. In particular, due to the nature of message passing, deeper GNNs have a tendency to over-smooth the features, leading to uninformative node representations. On the other hand, shallow GNNs are likely to suffer from an information bottleneck, where signals from distant nodes (with regard to graph topology) exert little influence on the central node.
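The degree-weighted neighbourhood average of GCN mentioned above can be sketched in a few lines of numpy; the sketch follows the standard symmetric normalization with self-loops and omits the learned weight matrix and nonlinearity that complete a GCN layer:

```python
import numpy as np

def gcn_propagate(A, X):
    """One GCN aggregation step: D^{-1/2} (A + I) D^{-1/2} X, i.e. a
    degree-weighted average over each node's self-loop-augmented neighbourhood."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))      # inverse sqrt degrees
    return (d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]) @ X

# Toy graph: nodes 0-1-2 form a path, node 3 is isolated.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]], float)
X = np.array([[1.0], [0.0], [1.0], [5.0]])
print(gcn_propagate(A, X))   # isolated node 3 keeps its own feature, 5.0
```

Stacking this aggregation many times averages features over ever larger neighbourhoods, which is precisely the oversmoothing tendency described above.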
In order to analyze and address the aforementioned issues, many recent works have cast GNNs as discretization of certain continuous dynamical systems, by analogizing propagation through the layers to time evolution of dynamics. Indeed, the idea of viewing neural networks in the continuous limits is not new and has been explored for classic neural networks [27, 45, 65, 61, 80]. The continuous formulation provides a unified framework for understanding and designing the propagation of neural networks, leveraging various mathematical tools from control theory and differential equations. One seminal work [19] proposes neural ordinary differential equation (Neural ODE), which can be viewed as a continuous-depth residual network [48]. Another work explores the connections between the convolutional networks with partial differential equations (PDEs) [80].
In GNNs, in particular, the message passing mechanism can be viewed as a realization of a class of PDEs, namely the heat diffusion equation [15]. Such a novel perspective allows us to better dissect the behaviours of GNNs. As an example, because heat diffusion is known to dissipate energy and converge to a steady state of heat equilibrium, the phenomenon of oversmoothing corresponds to the equilibrium state of diffusion, where heat is distributed uniformly across spatial locations. Apart from offering theoretical insights into the evolution of GNNs, the continuous dynamics perspective also allows easy integration of structural constraints and desired properties into the dynamical system, such as energy conservation, (ir)reversibility, and boundary conditions. Meanwhile, advanced numerical integrators, like high-order and implicit schemes, can be employed to enhance the efficiency and stability of discretization [13]. Finally, the variety of continuous dynamics grounded in physical substance can inform better designs of GNNs to enhance representation power and overcome performance limitations [15, 86, 29, 78, 68, 103, 21, 30, 42, 57].
The theory of differential equations and dynamical systems is well-developed with a long-standing history [92, 72], while graph neural networks are a comparatively nascent field, garnering attention only within the past decade. This offers great potential for harnessing the rich foundations of dynamical system theory to enhance the understanding and functionality of GNNs. In this work, we provide a comprehensive review of existing developments in continuous-dynamics-inspired GNNs, which we collectively refer to as graph neural dynamics. To the best of our knowledge, this is the first survey on the connections between continuous dynamical systems and graph neural networks. Nevertheless, we would like to highlight a related line of research that utilizes deep learning to solve differential equations [52, 44, 60, 59]. In particular, graph neural operators [3, 62] employ GNNs to solve PDEs by discretizing the domains and constructing graph structures based on spatial proximity. In contrast, the direction we focus on in this work is the reverse, where PDEs inform the designs of graph neural networks.
Organization.We organize the rest of the paper as follows. In Section 2, we introduce diffusion equations from first principles and then review discrete operators on graphs such as gradient and divergence, which are crucial for formulations of continuous GNNs. This section concludes with the connection between diffusion equation with the famous graph convolutional network. Section 3 then presents the framework for designing GNNs from continuous dynamics and summarizes the existing works based on the underlying physical processes. We then in Section 4 discuss the various numerical schemes for propagating and learning the continuous GNN dynamics. Finally, in Section 5, we explain how the various dynamics help to tackle the shortcomings of existing GNNs, in terms of oversmoothing, oversquashing, poor performance on heterophilic graphs (where neighbouring nodes do not share similar information), as well as adversarial robustness and
training stability.
## 2 From diffusion equations to graph neural networks
A fundamental building block of graph neural networks is message passing where information flows between neighbouring nodes. Message passing has natural connection to diffusion equations, which describe how certain quantities of interest, such as mass or heat disperse spatially, as a function of time.
Diffusion equation in continuous domains.The two laws governing any diffusion equation are _Fick's law_ and the _mass conservation law_. The former states that diffusion proceeds from regions of higher concentration to regions of lower concentration, at a rate proportional to the concentration difference (the gradient). The latter states that mass can neither be created nor destroyed during diffusion. Formally, let \(x:\Omega\times\mathbb{R}\rightarrow\mathbb{R}\) represent the mass distribution, i.e., \(x(u,t)\) over the spatial locations \(u\in\Omega\) and time \(t\in\mathbb{R}\). Denote \(\nabla x:=\frac{\partial x}{\partial u}\) as the gradient of mass across space. Fick's law states that a flux \(J\) exists towards regions of lower mass concentration, i.e., \(J=-D\nabla x\), where \(D\) is the diffusivity coefficient that potentially depends on both space and time. When \(\Omega\subseteq\mathbb{R}^{d}\), we can write \(\nabla x=[\frac{\partial x}{\partial u_{1}},...,\frac{\partial x}{\partial u_{d}}]\) and \(D\in\mathbb{R}^{d\times d}\), and \(J\) corresponds to a flux in \(d\) directions. The mass conservation law then leads to the continuity equation \(\frac{\partial x}{\partial t}=-\mathrm{div}J\), where \(\mathrm{div}\) is the divergence operator that computes the mass change at a given location and time. Combining the two equations yields the fundamental (heat) diffusion equation
\[\frac{\partial x}{\partial t}=\mathrm{div}(D\nabla x). \tag{1}\]
The diffusion process is called _homogeneous_ when the diffusivity coefficient \(D\) is space independent, and is called _isotropic_ when \(D\) depends only on the location, and called _anisotropic_ when \(D\) depends on both the location and the direction of the gradient. In the case of homogeneous diffusion, we can write the diffusion equation in terms of the Laplacian operator \(\Delta\coloneqq\mathrm{div}\cdot\nabla\), i.e., \(\frac{\partial x}{\partial t}=D\Delta x\).
Graphs and discrete differential operators.In order to properly characterize diffusion dynamics over graphs, it is necessary to generalize the concepts from differential geometry to the discrete space, such as gradient and divergence.
A graph can be represented by a tuple \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) where \(\mathcal{V},\mathcal{E}\) denote the sets of nodes and edges, respectively. In this work, we focus on undirected graphs, i.e., if \((i,j)\in\mathcal{E}\), then \((j,i)\in\mathcal{E}\). Leveraging tools from differential geometry, let \(L^{2}(\mathcal{V})\) and \(L^{2}(\mathcal{E})\) be Hilbert spaces of real-valued functions on \(\mathcal{V}\) and \(\mathcal{E}\) respectively, with the inner products given by
\[\langle f,g\rangle_{L^{2}(\mathcal{V})}=\sum_{i\in\mathcal{V}}f_{i}g_{i}, \qquad\langle F,G\rangle_{L^{2}(\mathcal{E})}=\sum_{(i,j)\in\mathcal{E}}F_{ij} G_{ij}\]
for \(f,g:\mathcal{V}\rightarrow\mathbb{R}\) and \(F,G:\mathcal{E}\rightarrow\mathbb{R}\). This allows us to generalize the definitions of gradient and divergence to graph domains [12]. Formally, the graph gradient is defined as \(\nabla:L^{2}(\mathcal{V})\to L^{2}(\mathcal{E})\) such that \((\nabla f)_{ij}=f_{j}-f_{i}\). Here we assume the edge functions are anti-symmetric, namely \((\nabla f)_{ij}=-(\nabla f)_{ji}\). The graph divergence \(\mathrm{div}:L^{2}(\mathcal{E})\to L^{2}(\mathcal{V})\) is defined in the opposite direction, mapping edge functions back to node functions, such that \((\mathrm{div}F)_{i}=\sum_{j:(i,j)\in\mathcal{E}}F_{ij}\).
Owing to their discrete nature, graphs and functions/signals on a graph can be compactly represented via vectors and matrices. In a graph with \(n\) nodes (\(|\mathcal{V}|=n\)), each \(f\in L^{2}(\mathcal{V})\) can be written as an \(n\)-dimensional vector \(\mathbf{f}\), and each \(F\in L^{2}(\mathcal{E})\) as a matrix \(\mathbf{F}\) of size \(n\times n\) (with nonzero \((i,j)\)-th entry \(\mathbf{F}_{i,j}\) only when \((i,j)\in\mathcal{E}\)). The adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\) satisfies \(\mathbf{A}_{i,j}=1\) if \((i,j)\in\mathcal{E}\) and \(0\) otherwise. The degree matrix \(\mathbf{D}\) is diagonal with \(i\)-th diagonal entry \(\mathbf{D}_{i,i}=\deg_{i}=\sum_{j}\mathbf{A}_{i,j}\). The graph Laplacian is then defined as \(\mathbf{L}=\mathbf{D}-\mathbf{A}\), and one can verify
\[\operatorname{div}(\nabla\mathbf{f})_{i}=\sum_{j:(i,j)\in\mathcal{E}}(f_{j}-f_{i})=-(\mathbf{L}\mathbf{f})_{i}.\]
Further, graph gradient can be represented with the incidence matrix \(\mathbf{G}\in\mathbb{R}^{e\times n}\). Specifically \(\mathbf{G}_{k,i}=1\) if edge \(k\) enters node \(i\), \(-1\) if edge \(k\) leaves node \(i\) and \(0\) otherwise. Graph divergence is thus given by \(-\mathbf{G}^{\top}\), which is the negative adjoint of the gradient. The edge direction in \(\mathbf{G}\) can be arbitrarily chosen for undirected graphs because the Laplacian is indifferent to the choice of direction as \(\mathbf{L}=\mathbf{G}^{\top}\mathbf{G}\).
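These identities are easy to verify numerically. The following sketch (using NumPy, with an arbitrary triangle graph chosen purely for illustration) checks that \(\mathbf{L}=\mathbf{G}^{\top}\mathbf{G}\) and that \(\mathrm{div}(\nabla\mathbf{f})=-\mathbf{L}\mathbf{f}\):

```python
import numpy as np

# Triangle graph: edges with an arbitrary (but fixed) orientation.
edges = [(0, 1), (1, 2), (0, 2)]
n, e = 3, len(edges)

# Adjacency, degree, and combinatorial Laplacian L = D - A.
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Incidence matrix G: row k has -1 at the tail and +1 at the head of edge k.
G = np.zeros((e, n))
for k, (i, j) in enumerate(edges):
    G[k, i], G[k, j] = -1.0, 1.0

f = np.array([1.0, 2.0, 4.0])       # an arbitrary node signal
grad_f = G @ f                      # edge-wise differences f_j - f_i
div_grad_f = -G.T @ grad_f          # divergence is the negative adjoint -G^T

assert np.allclose(G.T @ G, L)          # L = G^T G, orientation-independent
assert np.allclose(div_grad_f, -L @ f)  # div(grad f) = -L f
```

Flipping the orientation of any edge changes \(\mathbf{G}\) but leaves \(\mathbf{G}^{\top}\mathbf{G}\) unchanged, consistent with the remark above.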
Graph diffusion and graph neural networks.In this work, we consider \(\mathbf{x}:\mathcal{V}\rightarrow\mathbb{R}^{c}\) as a multi-channel signal (or feature) over the node set. We denote \(\mathbf{x}_{i}\in\mathbb{R}^{c}\) as the signal on node \(i\) and let \(\mathbf{X}\in\mathbb{R}^{n\times c}\) collect the signals over all the nodes. The previously defined discrete gradient and divergence operators lead to the following graph diffusion process, which can be seen as a discrete version of the heat diffusion equation in (1):
\[\frac{\partial\mathbf{x}_{i}}{\partial t}=\operatorname{div}(D\nabla\mathbf{X })_{i}=\sum_{j:(i,j)\in\mathcal{E}}D(\mathbf{x}_{i},\mathbf{x}_{j},t)(\mathbf{ x}_{j}-\mathbf{x}_{i}),\]
where the diffusivity \(D(\mathbf{x}_{i},\mathbf{x}_{j},t)\) is often scalar-valued and positive and applies channel-wise. In the special case of homogeneous diffusion, i.e., \(D(\mathbf{x}_{i},\mathbf{x}_{j},t)=1\), the diffusion process can be written as \(\frac{\partial\mathbf{x}_{i}}{\partial t}=\operatorname{div}(\nabla\mathbf{X})_{i}=-(\mathbf{L}\mathbf{X})_{i}\). This process is known as Laplacian smoothing, where the signals become progressively smoother through within-neighbourhood averaging. The solution to the diffusion equation is given by the heat kernel, i.e., \(\mathbf{X}(t)=\exp(-t\mathbf{L})\mathbf{X}(0)\). In fact, the graph heat equation can also be derived as the gradient flow of the so-called Dirichlet energy, which measures local variation: \(\mathcal{E}_{\operatorname{dir}}(\mathbf{X})=\sum_{(i,j)\in\mathcal{E}}\|\mathbf{x}_{i}-\mathbf{x}_{j}\|^{2}=\operatorname{tr}(\mathbf{X}^{\top}\mathbf{L}\mathbf{X})\).
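The heat-kernel solution and its qualitative behaviour can be checked concretely. The sketch below (NumPy only; the 4-node path graph and the point-mass initial condition are our illustrative choices) computes \(\exp(-t\mathbf{L})\) via the eigendecomposition of the symmetric Laplacian and verifies that mass is conserved, the Dirichlet energy decays, and the signal converges to its average:

```python
import numpy as np

# Path graph on 4 nodes and its combinatorial Laplacian.
A = np.zeros((4, 4))
for i in range(3):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Heat kernel exp(-tL) via the eigendecomposition of the symmetric L.
w, V = np.linalg.eigh(L)
def heat(t, X):
    return V @ np.diag(np.exp(-t * w)) @ V.T @ X

def dirichlet(X):
    # E_dir(X) = tr(X^T L X) = sum over edges of ||x_i - x_j||^2
    return float(np.trace(X.T @ L @ X))

X0 = np.array([[1.0], [0.0], [0.0], [0.0]])  # all mass on node 0

assert np.isclose(heat(1.0, X0).sum(), X0.sum())  # mass is conserved
assert dirichlet(heat(5.0, X0)) < dirichlet(heat(1.0, X0)) < dirichlet(X0)
# As t grows the signal approaches the uniform average (oversmoothing).
assert np.allclose(heat(100.0, X0), 0.25, atol=1e-6)
```

The last assertion is precisely the equilibrium-state picture of oversmoothing mentioned in the introduction: on a connected graph, heat diffusion forgets everything except the mean.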
The graph diffusion process is related to the famous graph convolutional network (GCN) [58], where the latter can be viewed as taking the Euler discretization of the former with a unit stepsize. In addition, GCN defines diffusion with (symmetrically) normalized adjacency \(\widehat{\mathbf{A}}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\) and Laplacian \(\widehat{\mathbf{L}}=\mathbf{I}-\widehat{\mathbf{A}}\). The use of normalized Laplacian \(\widehat{\mathbf{L}}\) in place of the combinatorial Laplacian \(\mathbf{L}\) switches the dynamics from being homogeneous to isotropic (where diffusion is weighted by node degrees). More precisely, the discretized dynamics gives the update \(\mathbf{X}^{\ell+1}=\mathbf{X}^{\ell}-\widehat{\mathbf{L}}\mathbf{X}^{\ell}= \widehat{\mathbf{A}}\mathbf{X}^{\ell}\). It can be readily verified that GCN corresponds to the gradient flow of a normalized Dirichlet energy \(\mathcal{E}_{\operatorname{dir}}(\mathbf{X})=\sum_{(i,j)\in\mathcal{E}}\| \mathbf{x}_{i}/\sqrt{\deg_{i}}-\mathbf{x}_{j}/\sqrt{\deg_{j}}\|^{2}=\operatorname {tr}(\mathbf{X}^{\top}\widehat{\mathbf{L}}\mathbf{X})\). Additional channel mixing \(\mathbf{W}\) and nonlinear activation \(\sigma(\cdot)\) are added, leading to a single GCN layer, \(\mathbf{X}^{\ell+1}=\sigma\big{(}\widehat{\mathbf{A}}\mathbf{X}^{\ell}\mathbf{ W}^{\ell}\big{)}\).
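The correspondence can be verified directly: dropping the channel mixing and nonlinearity, one explicit-Euler step of \(\frac{\partial\mathbf{X}}{\partial t}=-\widehat{\mathbf{L}}\mathbf{X}\) with unit stepsize equals \(\widehat{\mathbf{A}}\mathbf{X}\). A minimal NumPy sketch (the toy triangle graph and the omission of self-loops are our simplifications):

```python
import numpy as np

# Normalized adjacency and Laplacian of a toy triangle graph
# (no self-loops here, a simplification of the usual GCN preprocessing).
A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))   # D^{-1/2} A D^{-1/2}
L_hat = np.eye(3) - A_hat

X = np.random.default_rng(0).normal(size=(3, 2))

# One explicit-Euler step of dX/dt = -L_hat X with unit stepsize ...
euler_step = X - L_hat @ X
# ... equals the linear part of a GCN layer, A_hat X.
assert np.allclose(euler_step, A_hat @ X)

# A full GCN layer then adds channel mixing W and a nonlinearity.
W = np.random.default_rng(1).normal(size=(2, 2))
gcn_out = np.tanh(A_hat @ X @ W)
assert gcn_out.shape == X.shape
```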
## 3 A general framework of continuous dynamics informed GNNs beyond diffusion
Diffusion dynamics has been shown to underpin the design of message passing and graph convolutional networks. While showing promise in many applications, vanilla diffusion dynamics may suffer from performance degradation due to overly smoothing graph signals (oversmoothing), message passing bottlenecks (oversquashing) and graph heterophily. This has motivated the consideration of alternative dynamics other than isotropic diffusion, including anisotropic diffusion, diffusion with source term, geometric diffusion,
oscillation, convection, advection and reaction. Many of them are directly adapted from the existing physical processes.
This section summarizes existing developments on graph neural dynamics under a general framework and categorizes the literature by the underlying continuous system. We follow the same notations as in Section 2, where a graph is represented as \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with \(|\mathcal{V}|=n\) and \(|\mathcal{E}|=e\). The graph signals or features are encoded in a matrix \(\mathbf{X}\), which is in general time-dependent. Unless mentioned otherwise, we omit the time dependence and treat \(\mathbf{X}\) as \(\mathbf{X}(t)\) for notational clarity. We also denote \(\mathbf{X}^{0}=\mathbf{X}(0)\) as the initial condition of the system.
The general framework for designing continuous graph dynamics is given as follows, which relates the time derivatives of the signals with spatial derivatives on graphs.
\[\Big{[}\frac{\partial\mathbf{X}}{\partial t},\frac{\partial^{2}\mathbf{X}}{ \partial t^{2}}\Big{]}=\mathcal{F}_{\mathcal{G}}(\mathbf{X},\nabla\mathbf{X}), \tag{2}\]
where \(\mathcal{F}_{\mathcal{G}}\) is a spatial coupling function that is usually parameterized by neural networks. The initial condition of the system is generally the input graph signals (after an encoder). We have summarized and compared existing works discussed in the survey in Table 1 in terms of the driving mechanisms and problems addressed. As we shall see, most of the existing works consider the first-order time derivative while some works explore the second-order time derivative to encode oscillatory systems.
### Anisotropic diffusion
The first class of dynamics, _anisotropic diffusion_, generalizes the isotropic diffusion in GCN, offering great flexibility in controlling the local diffusivity patterns. In image processing, anisotropic diffusion has been extensively applied for low-level tasks, such as image denoising, restoration and segmentation [96]. In particular, the Perona-Malik model [73] sets the diffusivity coefficient \(D\propto|\nabla x|^{-1}\), which is often called the edge indicator as it preserves the sharpness of signals by slowing down diffusion in regions of high variations.
The idea of using anisotropic diffusion to define continuous GNN dynamics was first considered by [75] and formalized by [15], where a class of graph neural diffusion (GRAND) dynamics is proposed. The anisotropic diffusion dynamics is formally given by
\[\text{GRAND}:\quad\frac{\partial\mathbf{X}}{\partial t}=\mathrm{div}\big{(} \mathbf{A}(\mathbf{X})\cdot\nabla\mathbf{X}\big{)},\]
where \(\mathbf{A}(\mathbf{X})\in\mathbb{R}^{n\times n}\) encodes the anisotropic diffusivity along the edges. Here \(\nabla\mathbf{X}\in\mathbb{R}^{n\times n\times c}\) collects the gradient along edges and we use \(\cdot\) to represent the elementwise multiplication of diffusivity coefficients broadcast across the channel dimension. The coefficient is determined by the feature similarity, i.e., \(\mathbf{A}(\mathbf{X})=[a(\mathbf{x}_{i},\mathbf{x}_{j})]_{(i,j)\in\mathcal{E}}\) where \(a(\mathbf{x}_{i},\mathbf{x}_{j})=(\mathbf{W}_{K}\mathbf{x}_{i})^{\top}\mathbf{ W}_{Q}\mathbf{x}_{j}\) computes the dot product attention with learnable parameters \(\mathbf{W}_{K},\mathbf{W}_{Q}\). Softmax normalization is performed on \(\mathbf{A}(\mathbf{X})\) to ensure it is right-stochastic (row sums up to one). Notably, the explicit-Euler discretization of GRAND corresponds to the propagation of graph attention network (GAT) [91]. Several versions of GRAND are proposed to render the dynamics more adaptable compared to the discretized version in [91]. In particular, the diffusivity matrix \(\mathbf{A}(\mathbf{X})\) can be fixed as \(\mathbf{A}(\mathbf{X}^{0})\) that only depends on the initial features. This leads to a GAT propagation with shared attention weights. In addition, \(\mathbf{A}(\mathbf{X})\) can vary according to a dynamically rewired edge set based on the attention score, i.e., \(\mathcal{E}\leftarrow\{(i,j):(i,j)\in\mathcal{E},a(\mathbf{x}_{i},\mathbf{x}_ {j})>\rho\}\) for some threshold \(\rho>0\). Instructively, this interprets the graph structure as discrete realization of certain underlying domains where graph rewiring dynamically changes the spatial discretization.
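A minimal sketch of one explicit-Euler step of GRAND with dot-product attention diffusivity (the function name, toy graph, stepsize, and per-node loop are illustrative assumptions; real implementations batch this with tensor operations and multiple attention heads):

```python
import numpy as np

def grand_euler_step(X, neighbours, WK, WQ, dt=0.1):
    # One explicit-Euler step of dX/dt = div(A(X) . grad X):
    #   x_i <- x_i + dt * sum_j a_ij (x_j - x_i),
    # with dot-product attention a_ij = (WK x_i)^T (WQ x_j),
    # softmax-normalized over each node's neighbourhood (row-stochastic).
    X_new = X.copy()
    for i, nbrs in enumerate(neighbours):
        scores = np.array([(WK @ X[i]) @ (WQ @ X[j]) for j in nbrs])
        a = np.exp(scores - scores.max())
        a /= a.sum()
        X_new[i] = X[i] + dt * sum(a_ij * (X[j] - X[i])
                                   for a_ij, j in zip(a, nbrs))
    return X_new

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 4))
neighbours = [[1, 2], [0, 2], [0, 1]]          # triangle graph
WK, WQ = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

X1 = grand_euler_step(X, neighbours, WK, WQ)
assert X1.shape == X.shape and np.isfinite(X1).all()
```

With unit stepsize this update has the same form as a GAT propagation layer, matching the discretization remark above.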
Building on the formulation of GRAND, BLEND [16] further augments the input signals \(\mathbf{x}_{i}\) with position
\begin{table}
\begin{tabular}{l|l l|l l l l} \hline \hline
 & Methods & Mechanism & \multicolumn{4}{c}{Problems Tackled} \\
 & & & OSM & OSQ & HETERO & STAB \\ \hline
\multirow{10}{*}{Anisotropic diffusion} & GRAND [15] & Attention diffusivity \& rewiring & & \(\boldsymbol{\bigvee}^{*}\) & & \(\boldsymbol{\bigvee}\) \\
 & BLEND [16] & Position augmentation & & \(\boldsymbol{\bigvee}^{*}\) & & \(\boldsymbol{\bigvee}\) \\
 & Mean Curvature [83] & \multirow{2}{*}{Non-smooth edge indicators} & & & \(\boldsymbol{\bigvee}\) & \\
 & Beltrami [83] & & & & & \\
 & \(p\)-Laplacian [35] & \(p\)-Laplacian regularization & \(\boldsymbol{\bigvee}^{\dagger}\) & & & \\
 & DIFFormer [98] & Full graph transformer & & \(\boldsymbol{\bigvee}\) & & \\
 & DIGNN [34] & Parameterized Laplacian & \(\boldsymbol{\bigvee}^{\dagger}\) & & & \\
 & GRAND++ [86] & Source term & & & & \\
 & PDE-GCN\({}_{\text{D}}\) [29] & Nonlinear diffusion & & \(\boldsymbol{\bigvee}\) & & \\
 & GIND [18] & Implicit nonlinear diffusion & & \(\boldsymbol{\bigvee}\) & \(\boldsymbol{\bigvee}\) & \\ \hline
\multirow{2}{*}{Oscillation} & PDE-GCN\({}_{\text{H}}\) [29] & Wave equation & & \(\boldsymbol{\bigvee}\) & & \\
 & GraphCON [78] & Damped coupled oscillation & & \(\boldsymbol{\bigvee}\) & \(\boldsymbol{\bigvee}\) & \\ \hline
\multirow{4}{*}{Non-local dynamics} & FLODE [68] & Fractional Laplacian & \(\boldsymbol{\bigvee}\) & \(\boldsymbol{\bigvee}\) & \(\boldsymbol{\bigvee}\) & \\
 & QDC [67] & Quantum diffusion & & \(\boldsymbol{\bigvee}\) & \(\boldsymbol{\bigvee}\) & \\
 & TIDE [7] & Learnable heat kernel & & \(\boldsymbol{\bigvee}\) & & \\
 & G2TN [89] & Hypo-elliptic diffusion & & \(\boldsymbol{\bigvee}\) & & \\ \hline
\multirow{8}{*}{Diffusion with external forces} & CDE [103] & Convection diffusion & & \(\boldsymbol{\bigvee}\) & & \\
 & GREAD [21] & Reaction diffusion & & \(\boldsymbol{\bigvee}\) & & \\
 & ACMP [94] & Allen--Cahn reaction with negative diffusivity & & \(\boldsymbol{\bigvee}\) & & \\
 & ODNet [66] & Diffusion with confidence & & \(\boldsymbol{\bigvee}\) & \(\boldsymbol{\bigvee}\) & \\
 & ADR-GNN [30] & Advection reaction diffusion & & \(\boldsymbol{\bigvee}\) & & \\
 & G\({}^{2}\) [79] & Diffusion gating & & \(\boldsymbol{\bigvee}\) & \(\boldsymbol{\bigvee}\) & \\
 & MHKG [81] & Reverse diffusion & & \(\boldsymbol{\bigvee}\) & \(\boldsymbol{\bigvee}\) & \\
 & A-DGN [41] & Anti-symmetric weight & & \(\boldsymbol{\bigvee}\) & \(\boldsymbol{\bigvee}\) & \\ \hline
\multirow{4}{*}{Geometry underpinned} & NSD [10] & Sheaf diffusion & & \(\boldsymbol{\bigvee}\) & & \\
 & Hamiltonian\({}_{G}\), etc. [42] & \multirow{2}{*}{Bracket dynamics with higher-order cliques} & & & & \\
 & HamGNN [57] & & & & & \\
 & HamGNN [102] & Learnable Hamiltonian dynamics & & & & \(\boldsymbol{\bigvee}\) \\ \hline
Gradient flow & GRAFF [39] & Parameterized gradient flow & & \(\boldsymbol{\bigvee}\) & & \\ \hline
Multi-scale diffusion & GradFUFG [46] & Separated diffusion for low-pass and high-pass at different scales & & \(\boldsymbol{\bigvee}\) & & \\ \hline \hline
\end{tabular}
* The ability of the methods to mitigate oversquashing is through graph rewiring. \({}^{\dagger}\)\(p\)-Laplacian and DIGNN avoid oversmoothing provided the input-dependent regularization (i.e., a source term) is added.
Table 1: Summary of continuous dynamics informed graph neural networks, including the driving mechanism and the problems addressed: oversmoothing (OSM), oversquashing (OSQ), graph heterophily (HETERO), and stability (STAB) with respect to perturbations or training, such as gradient vanishing or explosion with increased depth. Note that we show the problems addressed either theoretically or empirically in each paper (unless proven otherwise in subsequent literature). More detailed discussions are in Section 5.
\end{table}
coordinates \(\mathbf{u}_{i}\) for each node. The diffusion process then becomes
\[\text{BLEND}:\quad\frac{\partial[\mathbf{X},\mathbf{U}]}{\partial t}=\mathrm{div} (\mathbf{A}([\mathbf{X},\mathbf{U}])\cdot\nabla[\mathbf{X},\mathbf{U}]).\]
The joint diffusion over both positions and signals is motivated by diffusion on Riemannian manifolds with the Laplace-Beltrami operator, in which the gradient, diffusivity and divergence all depend on the Riemannian metric (that varies according to the position). In a similar vein, augmenting the position information while diffusing on graphs produces joint evolution of features as well as topology, which further allows graph rewiring to improve the information flow. In particular, based on the evolved positional information, the graph is dynamically rewired as \(\mathcal{E}\leftarrow\{(i,j):d_{\mathcal{C}}(\mathbf{u}_{i},\mathbf{u}_{j})<r\}\) or via a \(k\)-nearest-neighbour graph. The positional information can be pre-computed by personalized PageRank [36], DeepWalk [74] or even learned hyperbolic embeddings [17].
In BLEND, the diffusivity is given by the attention score over both the positional features and signals, which is in fact the core idea of transformers [90] that augments samples with positional encoding. [98] develop a transformer-based diffusion process (called DIFFormer) where attention diffusivity coefficients are computed on a fully connected graph \(\mathcal{V}\times\mathcal{V}\). The input graph (represented via input adjacency matrix \(\mathbf{A}^{0}\)) serves as a geometric prior that augments the learned attention matrix.
\[\text{DIFFormer}:\quad\frac{\partial\mathbf{X}}{\partial t}=\mathrm{div}_{ \mathcal{V}\times\mathcal{V}}\big{(}(\mathbf{A}^{0}+\mathbf{A}(\mathbf{X})) \cdot\nabla\mathbf{X}\big{)},\]
where \(\mathrm{div}_{\mathcal{V}\times\mathcal{V}}(\mathbf{F})_{i}=\sum_{j\in\mathcal{V}}\mathbf{F}_{i,j}\) denotes the divergence operator on the complete graph and \(\mathbf{A}(\mathbf{X})\) computes the diffusivity by a transformer block [90], different from the ones used by GRAND and BLEND. A recent work [97] has shown that evolution via both local message passing through \(\mathbf{A}^{0}\) and global attention diffusion through \(\mathbf{A}(\mathbf{X})\) improves generalization under topological distribution shift, i.e., when the training and test graph topologies differ.
Apart from its connection to transformer-based dynamics, BLEND is in fact motivated by a geometric evolution known as the Beltrami flow, where diffusion over a non-Euclidean domain also depends on the varying metric. In [83], the Beltrami flow, along with the mean-curvature flow, is generalized to graph domains by explicitly factorizing out the edge indicator \(\|\delta\mathbf{x}_{i}\|:=\sqrt{\sum_{j:(i,j)\in\mathcal{E}}\|\mathbf{x}_{j}-\mathbf{x}_{i}\|^{2}}\). Formally, let \(\mathbf{S}_{\mathrm{mc}}(\mathbf{X})\), \(\mathbf{S}_{\mathrm{bel}}(\mathbf{X})\) be the diffusivity coefficients of mean curvature and Beltrami diffusion, with their elements defined as \([\mathbf{S}_{\mathrm{mc}}(\mathbf{X})]_{i,j}=\frac{1}{\|\delta\mathbf{x}_{i}\|}+\frac{1}{\|\delta\mathbf{x}_{j}\|}\) and \([\mathbf{S}_{\mathrm{bel}}(\mathbf{X})]_{i,j}=\frac{1}{\|\delta\mathbf{x}_{i}\|^{2}}+\frac{1}{\|\delta\mathbf{x}_{i}\|\|\delta\mathbf{x}_{j}\|}\). The diffusion processes that [83] propose are
\[\text{Mean Curvature}:\quad\frac{\partial\mathbf{X}}{\partial t} =\mathrm{div}\Big{(}\big{(}\mathbf{A}(\mathbf{X})\odot\mathbf{S}_{ \mathrm{mc}}(\mathbf{X})\big{)}\cdot\nabla\mathbf{X}\Big{)},\] \[\text{Beltrami}:\quad\frac{\partial\mathbf{X}}{\partial t} =\mathrm{div}\Big{(}\big{(}\mathbf{A}(\mathbf{X})\odot\mathbf{S} _{\mathrm{bel}}(\mathbf{X})\big{)}\cdot\nabla\mathbf{X}\Big{)},\]
where \(\mathbf{A}(\mathbf{X})\) is computed from the dot-product attention following the previous works [15, 16]. For both dynamics, non-smooth signals are preserved by slowing down diffusion where the signal changes abruptly. As noted in the paper, positional information can be added in a similar way as in BLEND [16]. It should be noted that although BLEND originates from the Beltrami flow, its dynamics turns out to be the same as GRAND, augmented with positional embeddings.
A more general \(p\)-Laplacian based graph neural network is proposed by [35] where the diffusion is derived from a \(p\)-Laplacian regularization framework.
\[p\text{-Laplacian}:\quad\frac{\partial\mathbf{X}}{\partial t}=\mathrm{div} \big{(}\|\nabla\mathbf{X}\|^{p-2}\cdot\nabla\mathbf{X}\big{)}-\mu(\mathbf{X}- \mathbf{S}),\]
with \(\mathbf{S}\) as a source term and \(\mu>0\) controlling the regularization strength. The diffusivity \(\|\nabla\mathbf{X}\|^{p-2}\) is an \(n\times n\) matrix with elements \([\|\nabla\mathbf{X}\|^{p-2}]_{i,j}=\|\mathbf{x}_{j}-\mathbf{x}_{i}\|^{p-2}\) if \((i,j)\in\mathcal{E}.\) The injection of source information can be understood physically as heat exchange between the system and the outside. In the paper the source term is simply the input feature matrix, i.e., \(\mathbf{S}=\mathbf{X}^{0}\). When \(p=2\), the diffusion reduces to heat diffusion with the classic Laplacian. When \(p=1\), the dynamics recovers the mean curvature flow (although the definition slightly differs from the one in [83]). As a result, with properly selected \(p\), the dynamics is flexible in adapting to different types of graphs and able to preserve boundaries without oversmoothing the signals. It is worth mentioning that unlike previous works that use discretization to update \(\mathbf{X}\), the paper directly solves for the equilibrium state by setting \(\frac{\partial\mathbf{X}}{\partial t}=0\), which leads to an implicit graph diffusion layer given by \(p\)-Laplacian message passing.
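To make the diffusivity construction concrete, the following sketch takes one explicit-Euler step of the \(p\)-Laplacian diffusion with source term \(\mathbf{S}=\mathbf{X}^{0}\). The function name and the `eps` guard against division by zero for \(p<2\) are our additions; the paper itself solves for the equilibrium state rather than stepping forward in time:

```python
import numpy as np

def p_laplacian_step(X, edges, S, p=1.5, mu=0.1, dt=0.05, eps=1e-8):
    # One explicit-Euler step of
    #   dX/dt = div(||grad X||^{p-2} . grad X) - mu (X - S),
    # with edge diffusivity ||x_j - x_i||^{p-2}. The eps guard (our
    # addition) avoids division by zero when p < 2 and x_i = x_j.
    dX = -mu * (X - S)
    for i, j in edges:
        w = (np.linalg.norm(X[j] - X[i]) + eps) ** (p - 2)
        dX[i] += w * (X[j] - X[i])
        dX[j] += w * (X[i] - X[j])
    return X + dt * dX

rng = np.random.default_rng(0)
X0 = rng.normal(size=(4, 2))
edges = [(0, 1), (1, 2), (2, 3)]

X1 = p_laplacian_step(X0, edges, S=X0)   # the paper takes S = X^0
assert X1.shape == X0.shape and np.isfinite(X1).all()
```

Setting `p=2` makes the edge weight constant and recovers plain heat diffusion, in line with the special cases discussed above.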
The \(p\)-Laplacian diffusion corresponds to the gradient flow of a \(p\)-Dirichlet energy together with a regularization term \(\|\mathbf{X}-\mathbf{S}\|^{2}\). A similar idea has been considered in [22], where the Dirichlet energy is replaced with the total variation of graph gradients, which leverages the \(L_{1}\) norm (different from the case of \(p=1\) in the \(p\)-Laplacian diffusion [83]). A dual-optimization scheme is introduced due to the non-differentiability of the objective at zero.
A recent paper [34] parameterizes the graph gradient and divergence, rather than the diffusivity coefficients as in previous works, and defines a parameterized graph Laplacian for the diffusion process. In particular, [34] consider weighted inner products for both \(L^{2}(\mathcal{V})\) and \(L^{2}(\mathcal{E})\), i.e., \(\langle f,g\rangle_{L^{2}(\mathcal{V})}=\sum_{i\in\mathcal{V}}\chi_{i}f_{i}g_{i}\) and \(\langle F,G\rangle_{L^{2}(\mathcal{E})}=\sum_{(i,j)\in\mathcal{E}}\phi_{i,j}F_{i,j}G_{i,j}\). The graph gradient is defined as \((\nabla_{\Theta}f)_{i,j}\coloneqq\psi_{i,j}(f_{j}-f_{i})\), which also leads to a notion of graph divergence \((\mathrm{div}_{\Theta}F)_{i}=\frac{1}{2\chi_{i}}\sum_{j\in\mathcal{N}_{i}}\psi_{i,j}\phi_{i,j}(F_{i,j}-F_{j,i})\), where \(\chi_{i},\phi_{i,j},\psi_{i,j}\) are strictly positive real-valued functions on nodes and edges respectively. Here we write \(\nabla_{\Theta},\mathrm{div}_{\Theta}\) to emphasize that the gradient and divergence operators are parameterized. Because the graph gradient is parameterized and may not be anti-symmetric, i.e., \((\nabla_{\Theta}f)_{i,j}\neq-(\nabla_{\Theta}f)_{j,i}\), the divergence encodes directional information. The paper also parameterizes the weighting functions to be node-dependent, i.e., \(\chi_{i}=\chi_{i}(\mathbf{x}_{i}),\phi_{i,j}=\phi_{i,j}(\mathbf{x}_{i},\mathbf{x}_{j})\) and \(\psi_{i,j}=\psi_{i,j}(\mathbf{x}_{i},\mathbf{x}_{j})\), which involves learnable parameters. The diffusion process the paper considers is thus
\[\text{DIGNN}:\quad\frac{\partial\mathbf{X}}{\partial t}=\mathrm{div}_{\Theta} (\nabla_{\Theta}\mathbf{X})-\mu(\mathbf{X}-\mathbf{S}),\]
where a regularization term \(\|\mathbf{X}-\mathbf{S}\|^{2}\) is added similarly in [35].
The idea of adding an energy source has also been considered in GRAND++ [86] based on the framework of anisotropic diffusion of GRAND:
\[\text{GRAND++}:\quad\frac{\partial\mathbf{X}}{\partial t}=\mathrm{div}( \mathbf{A}(\mathbf{X})\cdot\nabla\mathbf{X})+\mathbf{S}\]
where \(\mathbf{S}\in\mathbb{R}^{n\times c}\) is a source term. The paper proposes a random-walk viewpoint to show that, without the source term, the dynamics reduces to GRAND and is guaranteed to converge to a stationary distribution independent of the initial condition \(\mathbf{X}^{0}\). Each row of the source term \(\mathbf{S}\) is defined as \(\mathbf{s}_{i}=\sum_{j\in\mathcal{I}}\delta_{ij}(\mathbf{x}_{j}^{0}-\bar{\mathbf{x}}^{0})\), with \(\mathcal{I}\subseteq\mathcal{V}\) a selected node subset used as the source and \(\bar{\mathbf{x}}^{0}=\frac{1}{|\mathcal{I}|}\sum_{j\in\mathcal{I}}\mathbf{x}_{j}^{0}\) the average signal; \(\delta_{ij}\) denotes the initial transition probability from node \(j\) to \(i\). By construction, the limiting signal distribution is close to an interpolation of the source signals in the selected subset \(\mathcal{I}\). A similar idea of source-term injection has also been considered in the earlier work [99], which can be seen as homogeneous diffusion with both a source term and a residual term. The proposed model (called CGNN) follows the dynamics \(\frac{\partial\mathbf{X}}{\partial t}=-\mathbf{L}\mathbf{X}+\mathbf{X}\mathbf{W}+\mathbf{X}^{0}\), for some learnable channel-mixing matrix \(\mathbf{W}\).
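A small sketch of how such a source term can be assembled (the choice of subset \(\mathcal{I}\) and the particular row-stochastic \(\delta_{ij}\) below are illustrative assumptions, not the construction from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.normal(size=(5, 3))          # initial node features

src = [0, 2]                          # chosen source subset I
xbar = X0[src].mean(axis=0)           # average source signal

# delta[i, j]: initial transition probability from source node j to node i
# (here an arbitrary row-stochastic matrix, purely for illustration).
delta = rng.random((5, len(src)))
delta /= delta.sum(axis=1, keepdims=True)

# s_i = sum_{j in I} delta_ij (x_j^0 - xbar^0)
S = delta @ (X0[src] - xbar)
assert S.shape == X0.shape
```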
Anisotropic diffusion is generally nonlinear in the sense that the diffusivity depends nonlinearly on the mass along the evolution. This is the case in edge-preserving dynamics, where the diffusivity explicitly depends nonlinearly on the gradient. GRAND-based dynamics is also nonlinear as long as the attention coefficients are recomputed at each timestep. In [29, 18], additional nonlinearity is incorporated by factoring the diffusivity \(D\) as the composition of a linear operator \(\mathcal{K}\) and its adjoint \(\mathcal{K}^{*}\), i.e., \(D=\mathcal{K}^{*}\mathcal{K}\). A pointwise nonlinear function \(\sigma(\cdot)\) is then added, leading to
\[\frac{\partial\mathbf{X}}{\partial t}=\mathrm{div}\big{(}\mathcal{K}^{*} \sigma(\mathcal{K}\nabla\mathbf{X})\big{)}.\]
In the case when \(\sigma\) is the identity map, the dynamics recovers the anisotropic diffusion. Such a nonlinear system was first proposed by [80] for defining convolutional residual networks for images. In [29], the idea is generalized to graphs by defining \(\mathcal{K}\) as a learnable pointwise convolution. Specifically, the dynamics, called PDE-\(\text{GCN}_{\text{D}}\), can be written in terms of the gradient and divergence operators as follows.
\[\text{PDE-GCN}_{\text{D}}:\quad\frac{\partial\mathbf{X}}{\partial t}=-\mathbf{ G}^{\top}\mathbf{K}^{\top}\sigma(\mathbf{K}\mathbf{G}\mathbf{X}),\]
where \(\mathbf{K}\) is a learnable parameter and \(\mathbf{G}\) is the gradient operator defined in Section 2. It can be readily observed that when \(\sigma\) is identity and \(\mathbf{K}=\mathbf{I}\), the dynamics reduces to the heat diffusion implemented by GCN.
In the follow-up work [18], the linear operator \(\mathcal{K}\) (parameterized by \(\mathbf{K}\)) is applied over the channel space instead of the edge space, i.e.,
\[\text{GIND}:\quad\frac{\partial\mathbf{X}}{\partial t}=-\mathbf{G}^{\top} \sigma(\mathbf{G}\mathbf{X}\mathbf{K}^{\top})\mathbf{K}.\]
Motivated by [43], [18] consider an implicit propagation of GIND as \(\mathbf{Z}=-\mathbf{G}^{\top}\sigma\big{(}\mathbf{G}(\mathbf{Z}+b_{\mathbf{\Omega}}(\mathbf{X}^{0}))\mathbf{K}^{\top}\big{)}\mathbf{K}\), where \(b_{\mathbf{\Omega}}(\cdot)\) is an affine transformation with parameter \(\mathbf{\Omega}\). The model corresponds to a refinement process for the flux \(\mathbf{Z}\), and the output is given by a decoder over \(\mathbf{Z}+\mathbf{X}^{0}\). It has been shown that the equilibrium state of the implicit diffusion corresponds to the minimizer of a convex objective function, provided the nonlinearity is monotone and Lipschitz and \(\mathbf{K}\otimes\mathbf{G}\) is upper-bounded in norm. This result guarantees the convergence of the dynamics and allows structural constraints to be embedded into the dynamics by explicitly modifying the objective.
### Oscillations
The phenomenon of oscillation is widely found in physics and primarily features repetitive motion. Unlike diffusion, which dissipates energy, an oscillatory system often preserves energy and is thus reversible. Oscillatory processes are typically modelled as second-order ordinary/partial differential equations. One simple example is characterized by the wave equation \(\frac{\partial^{2}x}{\partial t^{2}}=c\Delta x\), a hyperbolic PDE. The wave equation has been considered in [29] for defining dynamics on graphs, following the nonlinear formalism of [80]:
\[\text{PDE-GCN}_{\text{H}}:\quad\frac{\partial^{2}\mathbf{X}}{\partial t^{2}} =\mathrm{div}\big{(}\mathcal{K}^{*}\sigma(\mathcal{K}\nabla\mathbf{X})\big{)}= -\mathbf{G}^{\top}\mathbf{K}^{\top}\sigma(\mathbf{K}\mathbf{G}\mathbf{X}).\]
In addition, GraphCON [78] considers a more general oscillatory dynamics which combines a damped oscillating process with a coupling function. That is,
\[\text{GraphCON}:\quad\frac{\partial^{2}\mathbf{X}}{\partial t^{2}}=\sigma(F_{ \mathcal{G}}(\mathbf{X}))-\gamma\mathbf{X}-\alpha\frac{\partial\mathbf{X}}{ \partial t},\]
where \(F_{\mathcal{G}}(\mathbf{X})_{i}=F_{\mathcal{G}}(\mathbf{x}_{i},\{\mathbf{x}_{j}\}_{j\in\mathcal{N}_{i}})\) is the coupling function, \(\alpha\geq 0\), and \(\sigma(\cdot)\) is some nonlinear activation. When \(\sigma(F_{\mathcal{G}}(\mathbf{X}))=0\) and \(\alpha=0\), the system reduces to classic harmonic oscillation for each node independently. The damping term \(-\alpha\frac{\partial\mathbf{X}}{\partial t}\) mimics a frictional force that diminishes the oscillation. Finally, due to the interdependence between the nodes, the coupling function is required to model their interactions. The paper mainly considers two choices of coupling function, with isotropic and anisotropic diffusion (which lead to GCN and GAT respectively). Formally, \(F_{\mathcal{G}}(\mathbf{X})_{i}=\sum_{j:(i,j)\in\mathcal{E}}\mathbf{A}(\mathbf{X})_{i,j}\mathbf{x}_{j}\) represents message passing with the normalized adjacency or learned attention scores. When \(\gamma=1\) and \(\sigma\) is the identity, GraphCON can be rewritten as \(\frac{\partial^{2}\mathbf{X}}{\partial t^{2}}=\mathrm{div}(\mathbf{A}(\mathbf{X})\cdot\nabla\mathbf{X})-\alpha\frac{\partial\mathbf{X}}{\partial t}\), which is effectively the wave equation with a damping term. GraphCON is flexible in that the coupling term can accommodate an arbitrary message passing scheme. In addition, it possesses greater expressive power: the GNN induced by the coupling function only approaches the steady state of GraphCON, while the latter explores the entire trajectory.
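A second-order system like GraphCON is typically integrated by rewriting it as two coupled first-order equations in \((\mathbf{X},\mathbf{Y})\) with \(\mathbf{Y}=\frac{\partial\mathbf{X}}{\partial t}\). A minimal sketch with a symplectic-Euler step and the isotropic (GCN-style) coupling \(F_{\mathcal{G}}(\mathbf{X})=\widehat{\mathbf{A}}\mathbf{X}\) (the stepsize, damping values, and toy graph are our assumptions, simplified from the paper's scheme):

```python
import numpy as np

def graphcon_step(X, Y, A_hat, gamma=1.0, alpha=0.5, dt=0.1):
    # Rewrite the second-order ODE with Y = dX/dt and take one
    # symplectic-Euler step; the coupling here is the isotropic
    # (GCN-style) choice F(X) = A_hat X with sigma = tanh.
    Y = Y + dt * (np.tanh(A_hat @ X) - gamma * X - alpha * Y)
    X = X + dt * Y
    return X, Y

# Two-node toy graph; A_hat is already degree-normalized here.
A_hat = np.array([[0.0, 1.0], [1.0, 0.0]])
X = np.array([[1.0], [-1.0]])
Y = np.zeros_like(X)
for _ in range(50):
    X, Y = graphcon_step(X, Y, A_hat)
assert np.isfinite(X).all() and np.isfinite(Y).all()
```

Updating \(\mathbf{Y}\) first and then \(\mathbf{X}\) with the new \(\mathbf{Y}\) (rather than a plain forward Euler on both) is a common choice for oscillatory systems, as it better preserves the energy of the undamped dynamics.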
### Non-local dynamics
We have so far focused on local dynamics, in the sense that the diffusion or oscillation happens locally within the neighbourhood. It thus usually requires a sufficiently large timestep for one node's influence to reach a distant node (with respect to the graph topology). Graph rewiring, as employed in GRAND and BLEND, can be utilized to enable long-range diffusion by modifying the graph topology. This section explores various dynamics-based formulations that transcend the local neighbourhood when propagating information, resulting in non-local dynamics.
Fractional Laplacian.The fractional Laplacian [76, 64] has been effective in representing complex anomalous processes, such as fluid dynamics in porous media [14]. A recent work [68] utilizes the fractional graph Laplacian/adjacency matrix \(-\widehat{\mathbf{A}}^{\alpha}\) (for some \(\alpha\in\mathbb{R}\)) in order to define a non-local diffusion process as
\[\text{FLODE}:\quad\frac{\partial\mathbf{X}}{\partial t}=-\widehat{\mathbf{A}} ^{\alpha}\mathbf{X}.\]
where \(\widehat{\mathbf{A}}\) is the symmetric normalized adjacency matrix. A critical difference compared to \(p\)-Laplacian in terms of the order is that here \(\alpha\) can be fractional, instead of being restricted to integers. The fractional Laplacian is often dense, and thus the corresponding diffusion is non-local where long-range interactions are captured. When coupled with a symmetric channel mixing matrix \(\mathbf{W}\), i.e., \(\frac{\partial\mathbf{X}}{\partial t}=-\widehat{\mathbf{A}}^{\alpha}\mathbf{ X}\mathbf{W}\), the flexibility in the choice of \(\alpha\) allows dynamics to accommodate both smoothing and sharpening effects, which avoids oversmoothing and is suited for heterophilic graphs. The work also extends the formulation to directed graphs and correspondingly defines the notions of oversmoothing and Dirichlet energy with the asymmetric Laplacian. On directed graphs, the fractional Laplacian is defined through singular value decomposition, i.e., \(\widehat{\mathbf{A}}^{\alpha}\coloneqq\mathbf{U}\Sigma^{\alpha}\mathbf{V}^{H}\), where \(\mathbf{U},\mathbf{V}\in\mathbb{C}^{n\times n}\) are unitary singular vectors and \(\mathbf{\Sigma}\in\mathbb{R}^{n\times n}\) contains singular values on the diagonal. Finally, the paper also considers a Schrodinger equation based diffusion as \(\frac{\partial\mathbf{X}}{\partial t}=\mathrm{i}\widehat{\mathbf{A}}^{\alpha} \mathbf{X}\), where \(\mathrm{i}=\sqrt{-1}\) represents the imaginary unit.
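The induced non-locality can be checked on a tiny example. The sketch below computes a fractional power of the symmetric normalized adjacency of a 4-node path via eigendecomposition (the signed handling of negative eigenvalues is one illustrative convention, not necessarily the paper's); the result couples the two endpoint nodes even though they are not adjacent:

```python
import numpy as np

def fractional_power(M, alpha):
    """M^alpha for a symmetric matrix via eigendecomposition; alpha may be
    fractional. Negative eigenvalues get a signed power (illustrative choice)."""
    lam, U = np.linalg.eigh(M)
    lam_a = np.sign(lam) * np.abs(lam) ** alpha
    return (U * lam_a) @ U.T

# symmetric normalized adjacency of a 4-node path 0-1-2-3
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
d_inv_sqrt = 1.0 / np.sqrt(A.sum(1))
A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

A_frac = fractional_power(A_hat, 0.5)      # dense: couples non-adjacent nodes
X = np.ones((4, 2))
X_next = X - 0.1 * A_frac @ X              # Euler step of dX/dt = -A_hat^alpha X
```

Here nodes 0 and 3 share no edge, yet `A_frac[0, 3]` is nonzero, which is exactly the long-range interaction the fractional power buys.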
Quantum diffusion kernel.The Schrodinger's equation \(\mathrm{i}\frac{\partial\psi(u,t)}{\partial t}=\mathcal{H}\psi(u,t)\) has also been considered in [67], where \(\mathcal{H}\) is the Hamiltonian operator (composed of a kinetic and potential energy operator). \(\psi(u,t)\) denotes the (complex-valued) quantum wave function at position \(u\) at time \(t\), and the square modulus of the wave function \(|\psi(u,t)|^{2}\) indicates the probability density of a particle. With the quantum system defined by \(\psi(u,t)\), a quantum state \(|\psi(t)\rangle\) refers to the superposition of all the states, i.e., \(|\psi(t)\rangle=\int\psi(u,t)|u\rangle du\)
where \(|u\rangle\) is the position state (a basis vector for representing the quantum state). One can recover the components of the quantum state by the inner product between quantum state vector \(|\psi(t)\rangle\) and position vector \(|u\rangle\), i.e., \(\psi(u,t)=\langle u|\psi(t)\rangle\). In a simple quantum system where the Hamiltonian \(\mathcal{H}\) is time-independent, the solution to the Schrodinger's equation can be written in terms of quantum state as \(|\psi(t)\rangle=e^{-\mathrm{i}\mathcal{H}t}|\psi(0)\rangle\).
On a graph with \(n\) nodes, \(|\psi(t)\rangle\in\mathbb{C}^{n}\) can be interpreted as the state vector of particles across all nodes, and the position state refers to the node set. In a system without a potential, the Hamiltonian reduces to the negative Laplacian, and the Schrodinger's equation reduces to \(\mathrm{i}\frac{\partial|\psi(t)\rangle}{\partial t}=-\mathbf{L}|\psi(t)\rangle\). The eigenvectors of \(\mathbf{L}\), denoted as \(\{|\mathbf{\phi}_{i}\rangle\}_{i=1}^{n}\) provides a complete orthogonal basis, which can be treated as the position basis so that one can express the solution as \(|\psi(t)\rangle=\sum_{k=1}^{n}c_{k}e^{\mathrm{i}\lambda_{k}t}|\mathbf{\phi}_{k}\rangle\), where \(\lambda_{k}\) is the \(k\)-th eigenvalue and \(c_{k}=\langle\psi(0)|\mathbf{\phi}_{k}\rangle\). Instead of working with the solution which involves the complex unit, [67] define a kernel that models the the average overlap between any two nodes \(i,j\) as the inner product between \(|\psi(t)\rangle_{i}\) and \(|\psi(t)\rangle_{j}\), which is \(\sum_{k,l=1}^{n}c_{k}^{\star}c_{l}|\mathbf{\phi}_{k}\rangle_{i}^{\ast}|\mathbf{\phi}_{ l}\rangle_{j}\). To engineer observation operators as a spectral filter, the paper further leverages a Gaussian filter \(\mathcal{P}=\sum_{k=1}^{n}e^{-(\lambda_{k}-\mu)^{2}/2\sigma^{2}}\), which leads to the proposed quantum diffusion kernel (QDC) \(\mathbf{Q}\in\mathbb{R}^{n\times n}\), where
\[\text{QDC : }\quad\mathbf{Q}_{i,j}=\sum_{k=1}^{n}e^{-(\lambda_{k}-\mu)^{2}/2 \sigma^{2}}|\mathbf{\phi}_{k}\rangle_{i}^{\ast}|\mathbf{\phi}_{k}\rangle_{j}\]
where the initial quantum state is assumed to be equally delocalized. The kernel matrix \(\mathbf{Q}\) is interpreted as the transition matrix between the nodes and hence can be supplemented for any message-passing-based graph neural networks. Hence the resulting quantum diffusion corresponds to the anisotropic diffusion where the diffusivity is given by the quantum diffusion kernel. The kernel matrix can be further sparsified either using a threshold or KNN. The kernel allows non-local message passing due to the quantum interference across all the position states. Further a multi-scale variant of quantum diffusion is proposed that combines the propagation from quantum diffusion and standard graph diffusion.
Time derivative diffusion.Another work [7] introduces a non-local message passing scheme by combining local message passing with a learnable-timestep heat kernel. Recall the heat diffusion follows \(\frac{\partial\mathbf{X}}{\partial t}=-\mathbf{L}\mathbf{X}\), where its solution is given by the heat kernel as \(\mathbf{X}(t)=\exp(-t\mathbf{L})\mathbf{X}(0)\). One can generalize the heat kernel to capture the transition between any two states in time, i.e., \(\mathbf{X}(t)=\exp(-(t-s)\mathbf{L})\mathbf{X}(s)\). Instead of setting a pre-defined \(t,s\), the paper parameterizes the heat kernel as \(\exp(-t_{\theta}\mathbf{L})\) where \(t_{\theta}\) is the learnable timestep in order to adapt the diffusion range to different types of a dataset. Further, to simultaneously account for local message passing, the paper combines the adjacency matrix \(\mathbf{A}\) with learned heat kernel, which leads to the proposed dynamics as \(\mathbf{X}(t)=\mathbf{A}\exp(-t_{\theta}\mathbf{L})\mathbf{X}(s)\). This corresponds to the following continuous dynamics (up to some normalizing constants):
\[\text{TIDE : }\quad\frac{\partial\mathbf{X}}{\partial t}=\mathrm{div}( \mathbf{A}\exp(-t_{\theta}\mathbf{L})\cdot\nabla\mathbf{X}).\]
It is clear that when \(t_{\theta}=0\), the model reduces to the classic GCN. The learnability of \(t_{\theta}\) ensures the model is flexible to capture both local and multi-hop communication.
Hypo-elliptic diffusion.In [89], non-local and higher-order information is captured via the so-called _hypo-elliptic diffusion_, which is based on a tensor-valued Laplacian that aggregates the entire trajectories of random walks on graphs. The sequential nature of the path can be characterized with the free (associative) algebra, which lifts the sequence injectively to a vector space with non-commutative multiplication (i.e., an
algebra). More formally, an algebra \(H\) over \(\mathbb{R}^{c}\) can be realized as a sequence of tensors with increasing order, i.e., \(H\coloneqq\{\mathbf{v}=(\mathbf{v}^{0},\mathbf{v}^{1},\mathbf{v}^{2},...)\colon \mathbf{v}^{m}\in(\mathbb{R}^{c})^{\otimes m}\}\), where \((\mathbb{R}^{c})^{\otimes m}\) denotes the space of \(m\)-order tensors. For example, \((\mathbb{R}^{c})^{\otimes 0}\equiv\mathbb{R}\), \((\mathbb{R}^{c})^{\otimes 1}\equiv\mathbb{R}^{c}\). The scalar multiplication and vector addition of \(H\) is defined according to \(\lambda\mathbf{v}\coloneqq(\lambda\mathbf{v}^{m})_{m\geq 0}\) for \(\lambda\in\mathbb{R}\) and \(\mathbf{v}+\mathbf{w}\coloneqq(\mathbf{v}^{m}+\mathbf{w}^{m})_{m\geq 0}\). The algebra multiplication of \(H\) is given by \(\mathbf{v}\cdot\mathbf{w}\coloneqq\big{(}\sum_{k=0}^{m}\mathbf{v}^{k}\otimes \mathbf{w}^{m-k}\big{)}_{m\geq 0}\). \(H\) can be further made into a Hilbert space with the chosen inner product \(\langle\mathbf{v},\mathbf{w}\rangle\coloneqq\sum_{m\geq 0}\langle\mathbf{v}^{m}, \mathbf{w}^{m}\rangle^{m}\) where \(\langle\cdot,\cdot\rangle^{m}\) denotes the classic inner product on the tensor space \((\mathbb{R}^{c})^{\otimes m}\).
With the properly defined algebra, one can lift a sequence to such space, thus summarizing the full details of its trajectory. Specifically, denote the space of sequences in \(\mathbb{R}^{c}\) as \(\mathrm{Seq}(\mathbb{R}^{c})\coloneqq\bigcup_{k=0}^{\infty}(\mathbb{R}^{c})^{ k+1}\), where \((\mathbb{R}^{c})^{k+1}\) denotes the \(k+1\)-product space of \(\mathbb{R}^{c}\). A sequence, denoted as \(\mathbf{x}=(\mathbf{x}^{0},\mathbf{x}^{1},...,\mathbf{x}^{k})\in(\mathbb{R}^{ c})^{k+1}\) is an element of the sequence space \(\mathrm{Seq}(\mathbb{R}^{c})\). Let \(\varphi:\mathbb{R}^{c}\to H\) be an algebra lifting, which allows to define a sequence feature map \(\tilde{\varphi}(\mathbf{x})=\varphi(\mathbf{x}^{0})\cdot\varphi(\mathbf{x}^{1 }-\mathbf{x}^{0})\cdots\varphi(\mathbf{x}^{k}-\mathbf{x}^{k-1})\in H\). One example of such injective map is the tensor exponential given by \(\varphi(\mathbf{u})=\exp_{\otimes}(\mathbf{u})\coloneqq\big{(}\frac{\mathbf{ u}^{\otimes m}}{m!}\big{)}_{m\geq 0}\) for \(\mathbf{u}\in\mathbb{R}^{c}\). Such a feature map is able to summarize the entire sequence path up to step \(k\).
On a graph, let \(\mathbf{x}_{i}^{k}\in\mathbb{R}^{c}\) denote the signals at node \(i\in\mathcal{V}\) at diffusion step \(k\geq 0\). Instead of focusing on the signals at a particular timestep \(k\), the paper leverages the sequence map to capture the entire past trajectory of the diffusion process through \(\tilde{\varphi}(\mathbf{x}_{i})\in H\) where \(\mathbf{x}_{i}\coloneqq(\mathbf{x}_{i}^{0},\mathbf{x}_{i}^{1},...,\mathbf{x}_{ i}^{k})\). The corresponding diffusion process requires a tensor adjacency matrix, \(\widetilde{\mathbf{A}}\in H^{n\times n}\) with the entries \(\widetilde{\mathbf{A}}_{i,j}=\varphi(\mathbf{x}_{j}^{0}-\mathbf{x}_{i}^{0})\in H\) if \((i,j)\in\mathcal{E}\) and \(0\) otherwise. The associated Laplacian \(\widetilde{\mathbf{L}}\) can be defined accordingly. For example, the random walk Laplacian has entries \(\widetilde{\mathbf{L}}_{i,i}=1\) and \(\widetilde{\mathbf{L}}_{i,j}=-\frac{1}{\deg_{i}}\varphi(\mathbf{x}_{j}^{0}- \mathbf{x}_{i}^{0})\) if \((i,j)\in\mathcal{E}\) and \(0\) otherwise. The classic graph heat diffusion is generalized to hypo-elliptic graph diffusion as
\[\text{G2TN}:\quad\frac{\partial\tilde{\varphi}(\mathbf{X})}{\partial t}=- \widetilde{\mathbf{L}}\tilde{\varphi}(\mathbf{X}),\]
where we let \(\tilde{\varphi}(\mathbf{X})\coloneqq[\tilde{\varphi}(\mathbf{x}_{1}),...,\tilde {\varphi}(\mathbf{x}_{n})]\in H^{n}\). The multiplication of \(\widetilde{\mathbf{L}}\) and \(\tilde{\varphi}(\mathbf{X})\) is defined over the space of algebra \(H\) (similarly as how classic matrix multiplication works). In the case of random walk Laplacian, the paper verifies that the solution of the hypo-elliptic diffusion summarizes the entire random walk histories of each node, in contrast to the snapshot state at each timestep given by the classic diffusion equation. Notice here instead of working with node signals \(\mathbf{x}\) directly, the diffusion concerns all the node signals along the trajectory, i.e., \(\varphi(\mathbf{x})\). Thus \(\tilde{\varphi}(\mathbf{x})\) is more expressive compared to \(\mathbf{x}\). The paper further adapts the attention mechanism to define a weighted hypo-elliptic adjacency as \(\widetilde{\mathbf{A}}_{i,j}=a(\mathbf{x}_{i}^{0},\mathbf{x}_{j}^{0})\varphi( \mathbf{x}_{j}^{0}-\mathbf{x}_{i}^{0})\), which correspondingly defines an anisotropic hypo-elliptic diffusion.
### Diffusion with external forces
Most of the aforementioned dynamics is only controlled by a single mechanism, either diffusion or oscillation. This section discusses systems that impart external forces to the diffusion dynamics, such as _convection_, _advection_, and _reaction_. In particular, convection and advection are widely known in physical sciences that describe how the mass transports as a result of the movement of the underlying fluid. Such a process is characterized by \(\frac{\partial x}{\partial t}=-\mathrm{div}(\mathbf{v}x)\) where \(\mathbf{v}\) represents the velocity field of the fluid motion. Reaction process is more general and often found in chemistry where chemical substance interacts with each other and leads to a change of mass and substance, i.e., \(\frac{\partial x}{\partial t}=r(x)\), for some reaction function \(r(\cdot)\). Other mechanisms such as gating, reverse diffusion and anti-symmetry have also been exploited in literature to modulate and control the diffusion dynamics.
Convetion-diffusion.Convection-diffusion dynamics combines the convection with diffusion process in which mass not only transports but also disperse in space. [103] generalize the convection-diffusion equation (CDE) to graphs as
\[\text{CDE}:\quad\frac{\partial\mathbf{X}}{\partial t}=\operatorname{div}( \mathbf{A}(\mathbf{X})\cdot\nabla\mathbf{X})+\operatorname{div}(\mathbf{V} \circ\mathbf{X})\]
where \(\mathbf{V}\) denotes a velocity field and \((\mathbf{V}\circ\mathbf{X})_{i,j}:=\mathbf{V}_{i,j}\odot\mathbf{x}_{j}\) with \(\odot\) representing the elementwise product. In particular, [103] define \(\mathbf{V}_{i,j}=\sigma(\mathbf{W}(\mathbf{x}_{j}-\mathbf{x}_{i}))\in\mathbb{ R}^{c}\) for some nonlinear activation \(\sigma(\cdot)\) and learnable matrix \(\mathbf{W}\). Such a choice is motivated from the heterophilic graphs where neighbouring nodes exhibit diverse features. Hence \(\mathbf{V}_{i,j}\) captures the dissimilarity between the nodes, which ultimately guides the diffusion process. Accordingly, \(\operatorname{div}(\mathbf{V}\circ\mathbf{X})_{i}=\sum_{j:(i,j)\in\mathcal{E} }\mathbf{V}_{i,j}\odot\mathbf{x}_{j}\) measures the flow of density at node \(i\) in the direction of \(\mathbf{V}_{i,j}\). The nonlinear parameterization of the gradient further enhances the dynamics to adapt to different graphs with varying homophily levels.
Reaction-diffusion.A more general reaction-diffusion dynamics is considered in [21]:
\[\text{GREAD}:\quad\frac{\partial\mathbf{X}}{\partial t}=\operatorname{div}( \mathbf{A}(\mathbf{X})\cdot\nabla\mathbf{X})+R(\mathbf{X}),\]
where \(R(\mathbf{X})\) is the reaction term, and proper choice of \(R(\cdot)\) recovers many existing works as special cases. For example, the Fisher reaction [33] is given by \(R(\mathbf{X})=\kappa\mathbf{X}\odot(\mathbf{1}-\mathbf{X})\), which can be used to model the spread of biological populations where \(\kappa\) represents the intrinsic growth rate. Other reaction processes include Allen-Cahn [1]\(R(\mathbf{X})=\mathbf{X}\odot(1-\mathbf{X}\odot\mathbf{X})\) and Zeldovich [37]\(R(\mathbf{X})=\mathbf{X}\odot(\mathbf{X}-\mathbf{X}\odot\mathbf{X})\). Apart from the physics informed choices of reaction term, [21] also consider \(R(\mathbf{X})=\mathbf{X}^{0}\), which follows GRAND++, CGNN to incorporate a source term, and also proposes several high-pass reaction term based on graph structure, aiming to induce a sharpening effect. For example, the blurring-sharpening reaction is defined as \(R(\mathbf{X})=(\mathbf{A}(\mathbf{X})-\mathbf{A}(\mathbf{X})^{2})\mathbf{X}\), which corresponds to performing a low-pass filter \(\mathbf{A}(\mathbf{X})-\mathbf{I}\) followed by a high-pass filter \(\mathbf{I}-\mathbf{A}(\mathbf{X})\). The paper also considers two learnable coefficients \(\alpha,\beta\) to control the emphasis on diffusion and reaction terms respectively, i.e., \(\alpha\operatorname{div}(\mathbf{A}(\mathbf{X})\cdot\nabla\mathbf{X})+\beta R( \mathbf{X})\).
A closely related work [94] adopts the Allen-Cahn reaction term for the reaction-diffusion process. Further, the paper allows negative diffusion coefficients, which is able to induce a repulsive force between the nodes:
\[\text{ACMP}:\quad\frac{\partial\mathbf{X}}{\partial t}=\operatorname{div}(( \mathbf{A}(\mathbf{X})-\mathbf{B})\cdot\nabla\mathbf{X})+\mathbf{X}\odot( \mathbf{1}-\mathbf{X}\odot\mathbf{X})\]
where \(\mathbf{B}>0\) is a bias term controlling the strength and direction of message passing. This allows \(\mathbf{A}(\mathbf{X})-\mathbf{B}\) to model the interactive forces and can become negative. Further, the first term of ACMP corresponds to the gradient flow of a pseudo Dirichlet energy given by \(\sum_{(i,j)\in\mathcal{E}}(\mathbf{A}_{i,j}-\mathbf{B}_{i,j})\|\mathbf{x}_{i}- \mathbf{x}_{j}\|^{2}\) where we ignore the dependence of \(\mathbf{A}\) on \(\mathbf{X}\) for the time being. This suggests, when \(\mathbf{A}_{i,j}-\mathbf{B}_{i,j}>0\), node \(i\) is attracted by node \(j\) by minimizing the difference between \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) and when \(\mathbf{A}_{i,j}-\mathbf{B}_{i,j}<0\), node \(i\) is repelled by node \(j\). It is noticed that the presence of negative weights can cause the energy to be unbounded and thus the dynamics may not be convergent. To resolve the issue, the paper considers an external potential, namely the double-well potential \((\delta/4)\sum_{i\in\mathcal{V}}(1-\|\mathbf{x}_{i}\|^{2})^{2}\). ACMP is indeed derived as the gradient flow of the pseudo Dirichlet energy combined with the double-well potential. Theoretically, the Dirichlet energy \(\mathcal{E}_{\operatorname{dir}}\) of ACMP evolution is upper bounded due to the Allen-Cahn reaction term as well as lower bounded due to the repulsive forces. Hence, the system remains stable while avoiding oversmoothing. For practical implementation, ACMP sets \(\mathbf{B}=\beta>0\), a tunable hyperparameter for simplicity of optimization. Two channel-wise coefficient vectors are added to balance the diffusion and reaction similarly in [21].
The idea of incorporating repulsive force in the message passing has also been explored in [66]. The work proposes to view the message passing mechanism on graphs in the framework of opinion dynamics. The work explores the notion of _bounded confidence_ from the Hegselmann-Krause (HK) model [49] where only similar opinions (up to some cut-off threshold) are exchanged. This motivates the following dynamics on graphs that separates messages according to the similarity level of graph signals:
\[\text{ODNet}:\quad\frac{\partial\mathbf{X}}{\partial t}=\operatorname{div}_{ \mathcal{V}\times\mathcal{V}}(\Phi(\mathbf{A}(\mathbf{X}))\cdot\nabla\mathbf{ X})+R(\mathbf{X}),\]
where \(\operatorname{div}_{\mathcal{V}\times\mathcal{V}}\) defines message aggregation over the complete graph. \(\Phi(\mathbf{A}(\mathbf{X}))\) is an elementwise scalar-valued function (called influence function) on diffusivity and is required to be non-decreasing. In [66], \(\Phi(\cdot)\) is chosen to be piecewise linear as follows.
\[\Phi(s)=\begin{cases}\mu s,&\text{if }s>\epsilon_{2},\\ s,&\text{if }\epsilon_{1}\leq s\leq\epsilon_{2},\\ \nu(1-s),&\text{otherwise}\end{cases}\]
where \(\mu>0\) and \(\nu\leq 0\) are hyperparameters. In addition, \(\epsilon_{1},\epsilon_{2}\) define the influence regions (which resemble the bounded confidence in the HK model). Because \(\nu\) can be negative, it is able to induce repulsive forces by separating the node representations. Empirically, the work chooses \(\nu=0\) for homophilic graphs and \(\nu<0\) for heterophilic graphs; in the latter case, messages can propagate even between unconnected nodes.
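The influence function is straightforward to implement elementwise (the slopes and thresholds below are placeholder hyperparameters):

```python
import numpy as np

def influence(s, mu=1.5, nu=-0.5, eps1=0.3, eps2=0.7):
    """Piecewise-linear influence function Phi of ODNet, applied elementwise
    to diffusivity scores s; nu < 0 induces repulsion for dissimilar nodes."""
    s = np.asarray(s, dtype=float)
    return np.where(s > eps2, mu * s,
                    np.where(s >= eps1, s, nu * (1.0 - s)))
```

For example, with the defaults above, a high-similarity score is amplified, a mid-range score passes through unchanged, and a low score yields a negative (repulsive) weight.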
Advection-diffusion-reaction.ADR-GNN [30] further adds an explicit advection term on top of the reaction-diffusion process considered in [21].
\[\text{ADR-GNN}:\quad\frac{\partial\mathbf{X}}{\partial t}=\operatorname{div}( \mathbf{A}(\mathbf{X})\cdot\nabla\mathbf{X})+\operatorname{div}(\mathbf{V} \circ\mathbf{X})+R(\mathbf{X}).\]
The work considers homogeneous diffusion with channel scaling, i.e., \(\operatorname{div}(\mathbf{A}(\mathbf{X})\cdot\nabla\mathbf{X})=-\mathbf{L} \mathbf{X}\mathrm{diag}(\boldsymbol{\theta})\), where \(\boldsymbol{\theta}\in\mathbb{R}^{c}\) is the channel-wise scaling factor. In contrast to CDE [103], the advection term concerns two directional velocity fields \(\mathbf{V}_{i,j}\), \(\mathbf{V}_{j,i}\in\mathbb{R}^{c}\), which measures both in-flow and out-flow of density, with \((\mathbf{V}\circ\mathbf{X})_{i,j}\coloneqq\mathbf{V}_{j,i}\odot\mathbf{x}_{j }-\mathbf{V}_{i,j}\odot\mathbf{x}_{i}\). By further ensuring channel-wise row stochasticity of both \(\mathbf{V}_{i,j}\) and \(\mathbf{V}_{j,i}\), \(\mathbf{V}_{i,j}\odot\mathbf{x}_{i}\) is interpreted as the mass of node \(j\) to be transported to node \(i\). Thus the advection term, given by \(\operatorname{div}(\mathbf{V}\circ\mathbf{X})_{i}=\sum_{j:(i,j)\in\mathcal{E }}\mathbf{V}_{j,i}\odot\mathbf{x}_{j}-\mathbf{x}_{i}\), quantifies the net flow of density at node \(i\). The reaction term \(R(\mathbf{X})\) is parameterized by additive and multiplicative MLPs with a source term, \(R(\mathbf{X})=\sigma(\mathbf{X}\mathbf{W}_{1}+\tanh(\mathbf{X}\mathbf{W}_{2}) \odot\mathbf{X}+\mathbf{X}^{0}\mathbf{W}_{3})\). Different to previous works, the paper considers operator splitting for discretizing the continuous dynamics, i.e., by separating the propagation of the three processes. Such a scheme allows separate treatment and analysis of each process. Particularly, [30] demonstrate the mass preserving property and stability of the advection operator. Empirically, ADR-GNN has shown promising performance for modelling not only the static graphs but also spatial temporal graphs where advection-diffusion-reaction process has been successful [32].
Gating.It has been shown that controlling the speed of diffusion through convection/advection term is able to counteract the smoothing process. [79] adopt a different strategy by explicitly modelling a gating function on the diffusion:
\[\text{G}^{2}:\quad\frac{\partial\mathbf{X}}{\partial t}=\mathbf{T}(\mathbf{X} )\odot\operatorname{div}(\mathbf{A}(\mathbf{X})\cdot\nabla\mathbf{X})\]
where \(\mathbf{T}(\mathbf{X})\in[0,1]^{n\times c}\) collects the rate of speed for each node and across each channel. Specifically, the rates depend on the graph gradient as \(T(\mathbf{X})_{i,k}=\tanh(\sum_{j\in\mathcal{N}_{i}}|\hat{\mathbf{X}}_{j,k}- \hat{\mathbf{X}}_{i,k}|^{p}),p>0\) where \(\hat{\mathbf{x}}_{i}=\sum_{j\in\mathcal{N}_{i}}\widehat{\mathbf{A}}(\mathbf{X })_{i,j}\mathbf{x}_{j}\) and \(\widehat{\mathbf{A}}(\mathbf{X})\) is another message aggregation. Conceptually, the gating rates \(T(\mathbf{X})_{i,:}\) for node \(i\) depend on the channel-wise graph gradients to all its neighbours. The use of \(\tanh(\cdot)\) ensures when \(\sum_{j\in\mathcal{N}_{i}}|\hat{\mathbf{X}}_{j,k}-\hat{\mathbf{X}}_{i,k}|^{p}\to 0\), the rate \(T(\mathbf{X})_{i,k}\) vanishes at a faster rate. This correspondingly shuts down the update for node \(i\) and thus avoid oversmoothing. The paper also considers more general choices of coupling functions in place of \(\operatorname{div}(\mathbf{A}(\mathbf{X})\cdot\nabla\mathbf{X})\) where nonlinearity is added.
The idea of gating has been similarly explored in DeepGRAND [71], which utilizes a channel-wise scaling factor \(\langle\mathbf{X}\rangle^{p}\in\mathbb{R}^{n\times d}\) in place of \(T(\mathbf{X})\) where \(\langle\mathbf{X}\rangle^{p}_{:,k}=\|\mathbf{X}_{:,k}\|^{p}\mathbf{1}_{n}\). The dynamics also incorporates a perturbation to the diffusivity as \(\mathbf{A}(\mathbf{X})-(1+\epsilon)\mathbf{I}\). The scaling factor and perturbation help regulate the convergence of node features so that the node features neither explodes nor converges too fast to the steady state.
Reverse diffusion.Similar to the idea in [21] that simultaneously accounts for low-pass and high-pass filters, [81] introduce a reverse diffusion process based on the heat kernel. When coupled with heat diffusion, it leads to a process that accommodates both smoothing and sharpening effects:
\[\text{MHKG}:\quad\frac{\partial\mathbf{X}}{\partial t}=\big{(}\mathrm{diag}( \boldsymbol{\theta}_{1})\exp(f(\widehat{\mathbf{L}}))+\mathrm{diag}(\boldsymbol {\theta}_{2})\exp(g(\widehat{\mathbf{L}}))-\mathbf{I}\big{)}\mathbf{X},\]
where \(\boldsymbol{\theta}_{1},\boldsymbol{\theta}_{2}\) are learnable filters and \(f,g\) are scalar-valued functions defined over the eigenvalues of \(\widehat{\mathbf{L}}\), e.g., \(f(\widehat{\mathbf{L}})\coloneqq\mathbf{U}f(\boldsymbol{\Lambda})\mathbf{U}^ {\top}\) where \(f(\boldsymbol{\Lambda})=\mathrm{diag}([f(\lambda_{i})]_{i=1}^{n})\), with \(\boldsymbol{\Lambda}\) being the diagonal matrix collecting the eigenvalues and \(\mathbf{U}\) collecting the eigenvectors of the normalized Laplacian \(\widehat{\mathbf{L}}\). In particular, \(f,g\) are assumed to be opposite in terms of monotonicity. In the simplest case, suppose \(f(\widehat{\mathbf{L}})=-\widehat{\mathbf{L}}\) and \(g(\widehat{\mathbf{L}})=\widehat{\mathbf{L}}\), then the two terms in MHKG correspond to the heat kernel and its reverse. The filtering coefficients \(\boldsymbol{\theta}_{1},\boldsymbol{\theta}_{2}\) controls the relative dominance of the two terms, where the former smooths while the latter sharpens the signals.
Anti-symmetry.In [41], a stable and non-dissipative system is proposed by imposing the additional anti-symmetric structure for channel mixing matrix as
\[\text{A-DGN}:\quad\frac{\partial\mathbf{X}}{\partial t}=\mathbf{X}(\mathbf{W}- \mathbf{W}^{\top})+F_{\mathcal{G}}(\mathbf{X}),\]
where we omit the nonlinearity and a bias term to show only the driving factors. Here, \(F_{\mathcal{G}}(\mathbf{X})\) is a coupling function similar as in [78], such as a simple homogeneous message passing \(F_{\mathcal{G}}(\mathbf{X})_{i}=\sum_{j\in\mathcal{N}_{i}}\mathbf{V}\mathbf{x }_{j}\) for some weight matrix \(\mathbf{V}\) or the one with attention mechanism [91]. The incorporation of anti-symmetry constraint for the channel mixing renders the system to be stable and non-dissipative, both due to the fact that the Jacobian of dynamics has pure imaginary eigenvalues, i.e., all real parts of the eigenvalues are zero. This suggests the solutions to the system remain bounded under perturbation of initial conditions, which concludes the stability of the evolution. In addition, the zero real part of the eigenvalues suggests the sensitivity of the node signals to its initial values, i.e., the magnitude of \(\frac{\partial\mathbf{x}_{i}(t)}{\partial\mathbf{x}_{i}(0)},\forall i,t\) stays constant throughout the dynamics. This result infers that oversmoothing in the limit does not occur as the final state still depends on the initial conditions as \(\lim_{t\to\infty}\frac{\partial\mathbf{x}_{i}(t)}{\partial\mathbf{x}_{i}(0)}\neq 0\). Further, this also suggests the magnitude of \(\frac{\partial L_{\mathrm{loss}}}{\partial\mathbf{x}_{i}(0)}\) remains unchanged over time and hence gradient vanishing or explosion is avoided during backpropagation. This allows the dynamics to be propagating to the limit and capture long-range interactions without facing the issue of oversmoothing or training instability.
### Geometry-underpinned dynamics
Previous sections have viewed graphs from its trivial topology, and standard dynamics on graphs amounts to propagating information from node to edge space and back, only utilizing the connectivity between nodes. In fact, graphs usually possess more intricate geometries and viewed as discrete realization of general topological spaces. In this section, we show how dynamics on graphs can be underpinned with additional geometric structure, such as sheaves and stalks in [10], and cliques and cochains in [42].
Sheaf diffusion.[47, 4] and [10] leverage _cellular sheave theory_ to endow a geometric structure for graphs. Specifically, each node \(i\in\mathcal{V}\) and edge \(e_{ij}=\{i,j\}\in\mathcal{E}\) (undirected) is equipped with a vector space structure \(\mathcal{F}(i),\mathcal{F}(e_{ij})\), with a linear map \(\boldsymbol{\mathcal{F}}_{i\,\triangle\,e_{ij}}:\mathcal{F}(i)\to\mathcal{F}( e_{ij})\) (called restriction map) that connects the node to edge spaces. Its adjoint operator \(\boldsymbol{\mathcal{F}}_{i\,\triangle\,e_{ij}}^{\top}:\mathcal{F}(e_{ij}) \to\mathcal{F}(i)\) does the reverse. The direct sum of all vector spaces of nodes is called the space of \(0\)-cochains, denoted by \(C^{0}(\mathcal{G};\mathcal{F})\coloneqq\bigoplus_{i\in\mathcal{V}}\mathcal{F} (i)\). Suppose, without loss of generality, that all vector spaces \(\mathcal{F}(i),\mathcal{F}(e_{ij})\) are \(d\)-dimensional, we can represent \(\boldsymbol{\mathcal{F}}_{i\,\triangle\,e_{ij}}\) as a \(d\times d\) matrix. Further, suppose \(\mathbf{x}\in C^{0}(\mathcal{G};\mathcal{F})\), then each \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) and \(\mathbf{x}\in\mathbb{R}^{nd}\) by stacking the feature vectors across all nodes.
Under the construction of vector spaces on nodes and edges, the concepts of graph gradient and divergence require to utilize the restriction maps because the vector spaces are not directly comparable. That is, the graph gradient (also known as the co-boundary map) is defined as \((\nabla_{\mathcal{F}}\mathbf{x})_{i,j}\coloneqq\boldsymbol{\mathcal{F}}_{j\, \triangle\,e_{ij}}\mathbf{x}_{j}-\boldsymbol{\mathcal{F}}_{i\,\triangle\,e_{ ij}}\mathbf{x}_{i}\), where the restriction maps transport the signals to a common disclosure space. The graph divergence is thus similarly defined as \((\operatorname{div}_{\mathcal{F}}\mathbf{G})_{i}=\sum_{j\in\mathcal{N}_{i}} \boldsymbol{\mathcal{F}}_{i\,\triangle\,e_{ij}}^{\top}\mathbf{G}_{e_{ij}}\), for \(\mathbf{G}_{e_{ij}}\in\mathcal{F}(e_{ij})\). This leads to the definition of sheaf Laplacian as \(\mathbf{L}_{\mathcal{F}}(\mathbf{x})_{i}=\operatorname{div}_{\mathcal{F}}( \nabla_{\mathcal{F}}\mathbf{x})_{i}=\sum_{j\in\mathcal{N}_{i}}\boldsymbol{ \mathcal{F}}_{i\,\triangle\,e_{ij}}^{\top}(\boldsymbol{\mathcal{F}}_{i\, \triangle\,e_{ij}}\mathbf{x}_{i}-\boldsymbol{\mathcal{F}}_{j\,\triangle\,e_{ ij}}\mathbf{x}_{j})\). The sheaf Laplacian is an \(nd\times nd\) block positive semi-definite matrix, with the diagonal blocks \((\mathbf{L}_{\mathcal{F}})_{i,i}=\sum_{j\in\mathcal{N}_{i}}\boldsymbol{ \mathcal{F}}_{i\,\triangle\,e_{ij}}^{\top}\boldsymbol{\mathcal{F}}_{i\, \triangle\,e_{ij}}\) and off-diagonal blocks \((\mathbf{L}_{\mathcal{F}})_{i,j}=-\boldsymbol{\mathcal{F}}_{i\,\triangle\,e_{ ij}}^{\top}\boldsymbol{\mathcal{F}}_{j\,\triangle\,e_{ij}}\). A symmetrically normalized sheaf Laplacian can be similarly computed by \(\mathbf{D}_{\mathcal{F}}^{-1/2}\mathbf{L}_{\mathcal{F}}\mathbf{D}_{\mathcal{F} }^{-1/2}\) with \(\mathbf{D}_{\mathcal{F}}\) is the block diagonal of \(\mathbf{L}_{\mathcal{F}}\). The corresponding sheaf diffusion process is
\[\frac{\partial\mathbf{X}}{\partial t}=\operatorname{div}_{\mathcal{F}}( \nabla_{\mathcal{F}}\mathbf{X})=-\mathbf{L}_{\mathcal{F}}\mathbf{X}\]
where here \(\mathbf{X}\in\mathbb{R}^{(nd)\times c}\), where \(c\) is the feature channels and \(\operatorname{div}_{\mathcal{F}},\nabla_{\mathcal{F}},\mathbf{L}_{\mathcal{F}}\) are applied channel-wise. The Sheaf diffusion turns out to be the gradient flow of the sheaf Dirichlet energy \(\operatorname{tr}(\mathbf{X}^{\top}\mathbf{L}_{\mathcal{F}}\mathbf{X})\), which measures the smoothness of signals in the disclosure space. For practical settings where sheaf structure is unavailable, one can construct such a feature through input embedding. It is worth highlighting that, when \(d=1\), the sheaf Laplacian reduces to the classic graph Laplacian and the sheaf diffusion becomes the heat diffusion. In [10], a variety of restriction maps are constructed, which leads to dynamics flexible enough to handle different types of graphs and avoid oversmoothing. The paper also develops a general framework for learning the sheaf Laplacian from the features and include channel mixing and nonlinearity to increase the expressive power:
\[\text{NSD}:\quad\frac{\partial\mathbf{X}}{\partial t}=-\sigma(\mathbf{L}_{ \mathcal{F}(\mathbf{X})}(\mathbf{I}\otimes\mathbf{W}_{1})\mathbf{X}\mathbf{W} _{2})\]
where \(\mathbf{W}_{1}\in\mathbb{R}^{d\times d}\) transforms the feature vectors and \(\mathbf{W}_{2}\in\mathbb{R}^{c\times c^{\prime}}\) mixes the channels and \(\otimes\) denotes the Kronecker product. \(\mathbf{L}_{\mathcal{F}(\mathbf{X})}\) is parameterized via a matrix-valued function on the current feature values.
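To make the block structure of the sheaf Laplacian concrete, here is a minimal numerical sketch (the tiny graph and random restriction maps are illustrative choices, not taken from [10]) that assembles \(\mathbf{L}_{\mathcal{F}}\) block by block and checks that it is symmetric positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 3 nodes, edges (0,1) and (1,2); stalk dimension d = 2.
n, d = 3, 2
edges = [(0, 1), (1, 2)]

# One restriction map F_{v -> e} per incident (node, edge) pair, chosen at random.
F = {(v, e): rng.standard_normal((d, d)) for e in edges for v in e}

# Assemble the nd x nd sheaf Laplacian block by block.
L = np.zeros((n * d, n * d))
def blk(a, b):
    return slice(a * d, (a + 1) * d), slice(b * d, (b + 1) * d)

for e in edges:
    i, j = e
    Fi, Fj = F[(i, e)], F[(j, e)]
    L[blk(i, i)] += Fi.T @ Fi          # diagonal blocks
    L[blk(j, j)] += Fj.T @ Fj
    L[blk(i, j)] -= Fi.T @ Fj          # off-diagonal blocks
    L[blk(j, i)] -= Fj.T @ Fi

# The sheaf Laplacian is symmetric positive semi-definite by construction.
assert np.allclose(L, L.T)
assert np.linalg.eigvalsh(L).min() > -1e-10
```

With \(d=1\) and identity restriction maps, the same assembly produces the classic graph Laplacian.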
In [10], the sheaf Laplacian is parameterized by \(\boldsymbol{\mathcal{F}}_{i\,\triangle\,e_{ij}}=\sigma(\mathbf{V}[\mathbf{x}_{ i}\|\mathbf{x}_{j}])\), where \(\cdot\|\cdot\) denotes concatenation. A follow-up work [5] devises a sheaf attention mechanism to further enhance the diffusion process. Let \(\mathbf{A}(\mathbf{X})\in\mathbb{R}^{n\times n}\) be a matrix of learnable attention coefficients (the same as in GAT [91]), and let \(\widehat{\mathbf{A}}(\mathbf{X})\coloneqq\mathbf{A}(\mathbf{X})\otimes\mathbf{1}_{d\times d}\), which assigns uniform attention coefficients to each feature dimension within a vector space. The attentive sheaf diffusion (SheafAN) is introduced as \(\frac{\partial\mathbf{X}}{\partial t}=(\widehat{\mathbf{A}}(\mathbf{X})\odot \mathbf{A}_{\mathcal{F}(\mathbf{X})}-\mathbf{I})\mathbf{X}\), where \((\mathbf{A}_{\mathcal{F}(\mathbf{X})})_{i,j}=\boldsymbol{\mathcal{F}}_{i\, \triangle\,e_{ij}}^{\top}\boldsymbol{\mathcal{F}}_{j\,\triangle\,e_{ij}}\) is the sheaf adjacency matrix with self-loops, i.e., \(e_{ii}\in\mathcal{E}\). A second-order sheaf PDE (NSP) is proposed in [84] using the wave equation \(\frac{\partial^{2}\mathbf{X}}{\partial t^{2}}=\mathrm{div}_{\mathcal{F}}( \nabla_{\mathcal{F}}\mathbf{X})\).
Bracket dynamics. [42] propose to use _geometric brackets_ that implicitly parameterize dynamics on graphs satisfying certain properties, while equipping graphs with higher-order structures. The formulation requires concepts from structure-preserving bracket-based dynamics and exterior calculus. In general, for a state variable \(x\), the dynamics can be given by some combination of reversible and irreversible brackets. The _reversible_ bracket (also known as the Poisson bracket) is denoted by \(\{A,E\}\coloneqq\langle\frac{\partial A}{\partial x},\tilde{L}\frac{\partial E }{\partial x}\rangle\) for some skew-symmetric operator \(\tilde{L}\)1 and some inner product \(\langle\cdot,\cdot\rangle\). The reversibility is a result of energy conservation. The _irreversible_ bracket is defined by \([A,E]\coloneqq\langle\frac{\partial A}{\partial x},M\frac{\partial E}{ \partial x}\rangle\) for some (either positive or negative) semi-definite operator \(M\). The irreversibility describes the loss of energy from the system due to friction or dissipation. The double bracket \(\{\{A,E\}\}\coloneqq\langle\frac{\partial A}{\partial x},\tilde{L}^{2}\frac{ \partial E}{\partial x}\rangle\) is an irreversible bracket.
Footnote 1: Here, in order not to be confused with the Laplacian \(L\) used in previous discussions, we use \(\tilde{L}\).
For simplicity, the paper considers \(A=x\) and one can simplify the brackets as \([x,E]=M\frac{\partial E}{\partial x}\) and \(\{x,E\}=\tilde{L}\frac{\partial E}{\partial x}\). The paper considers four different types of dynamics leveraging both reversible and irreversible brackets:
\[\begin{array}{ll}\text{Hamiltonian}:&\frac{\partial x}{\partial t}=\{x,E\} ;\\ \text{Gradient}:&\frac{\partial x}{\partial t}=-[x,E];\\ \text{Double bracket}:&\frac{\partial x}{\partial t}=\{x,E\}+\{\{x,E\}\};\\ \text{Metriplectic}:&\frac{\partial x}{\partial t}=\{x,E\}+[x,S],\end{array} \tag{3}\]
where \(E(x)\) is referred to as the energy of the state and \(S(x)\) is the entropy. Each of these dynamics captures a fundamentally different system and has a natural physical interpretation. Hamiltonian dynamics yields a complete, isolated system, in the sense that no energy is lost to the external environment. In contrast, both the gradient and double-bracket dynamics are incomplete: energy is dissipated over the course of the evolution. Metriplectic dynamics is complete under the additional degeneracy conditions \(\tilde{L}\frac{\partial S}{\partial x}=M\frac{\partial E}{\partial x}=0\). These conditions ensure conservation of energy, i.e., \(\frac{\partial E}{\partial t}=0\), and the entropy inequality, i.e., \(\frac{\partial S}{\partial t}\geq 0\), in an isolated system.
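The qualitative difference between the reversible and irreversible brackets can be checked numerically. The sketch below is illustrative only: it takes the simple energy \(E(x)=\frac{1}{2}\|x\|^{2}\), a random skew-symmetric \(\tilde{L}\) and a random positive semi-definite \(M\), and integrates the Hamiltonian flow with an energy-preserving Cayley (implicit midpoint) step:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
S = rng.standard_normal((n, n))
Lt = S - S.T                      # skew-symmetric operator (Poisson bracket)
B = rng.standard_normal((n, n))
M = B @ B.T                       # positive semi-definite operator (irreversible bracket)

E = lambda x: 0.5 * x @ x         # energy E(x) = |x|^2 / 2, so dE/dx = x

x_h = rng.standard_normal(n)      # Hamiltonian trajectory
x_g = x_h.copy()                  # gradient trajectory
E0 = E(x_h)

tau, steps = 1e-2, 1000
I = np.eye(n)
# Cayley (implicit midpoint) step for dx/dt = Lt x; exactly norm-preserving for skew Lt.
Cay = np.linalg.solve(I - 0.5 * tau * Lt, I + 0.5 * tau * Lt)
for _ in range(steps):
    x_h = Cay @ x_h               # Hamiltonian: dx/dt = {x, E} = Lt dE/dx
    x_g = x_g - tau * (M @ x_g)   # Gradient:    dx/dt = -[x, E] = -M dE/dx

assert abs(E(x_h) - E0) < 1e-8    # reversible bracket: energy conserved
assert E(x_g) < E0                # irreversible bracket: energy dissipated
```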
To properly generalize these dynamics to discrete domains like graphs, one must identify the state variable, an inner product structure, and the energy and entropy. Rather than only considering node features as the state variable, [42] extend the framework to higher-order clique cochains, including edges and cycles. Formally, let \(\Omega_{k}\) be the set of \(k\)-cliques on a graph \(\mathcal{G}\), which contains the ordered, complete subgraphs generated by \(k+1\) nodes. For example, nodes, edges and triangles correspond to \(0\)-cliques, \(1\)-cliques and \(2\)-cliques respectively. The exterior derivative operator is denoted as \(d_{k}:\mathfrak{F}(\Omega_{k})\rightarrow\mathfrak{F}(\Omega_{k+1})\), where \(\mathfrak{F}(\Omega)\) represents a function space over the domain \(\Omega\). The specific structure of \(\mathfrak{F}\) depends on the chosen inner product \(\langle\cdot,\cdot\rangle\). The dual derivative \(d_{k}^{*}:\mathfrak{F}(\Omega_{k+1})\rightarrow\mathfrak{F}(\Omega_{k})\) is given as the adjoint of \(d_{k}\), satisfying \(\langle d_{k}f,G\rangle_{k+1}=\langle f,d_{k}^{*}G\rangle_{k}\) for any \(f\in\mathfrak{F}(\Omega_{k}),G\in\mathfrak{F}(\Omega_{k+1})\). One common choice of \(\mathfrak{F}\) is the \(L^{2}\) space, in which case \(d_{k}^{*}=d_{k}^{\top}\). For example, \(d_{0}\) is the graph gradient on nodes and \(d_{0}^{*}\) becomes the graph divergence on edges. The classic graph Laplacian can be computed as \(d_{0}^{*}d_{0}=d_{0}^{\top}d_{0}=\mathbf{G}^{\top}\mathbf{G}\), as shown in Section 2.
Instead, the paper pursues an inner product parameterized by positive definite matrices \(\mathbf{A}_{0},\mathbf{A}_{1},...,\mathbf{A}_{k}\) up to \(k\)-cliques. For example, on the node space \(\Omega_{0}\), where the \(L^{2}\) space has inner product \(\mathbf{f}^{\top}\mathbf{g}\) for \(\mathbf{f},\mathbf{g}\in\mathfrak{F}(\Omega_{0})\),
the generalized inner product is given by \(\mathbf{f}^{\top}\mathbf{A}_{0}\mathbf{g}\). Under such choice, one can show \(d_{k}^{\star}=\mathbf{A}_{k}^{-1}d_{k}^{\top}\mathbf{A}_{k+1}\).
The state variable is set to be a node-edge feature pair, denoted by \(\mathbf{x}=(\mathbf{q},\mathbf{p})\), which can be treated as the position and momentum of a phase space. The following operators are then used to extend the dynamics in (3) to graphs,
\[\tilde{\mathbf{L}}=\begin{pmatrix}0&-d_{0}^{\ast}\\ d_{0}&0\end{pmatrix},\qquad\tilde{\mathbf{G}}=\begin{pmatrix}d_{0}^{\ast}d_{0} &0\\ 0&d_{1}^{\ast}d_{1}+d_{0}d_{0}^{\ast}\end{pmatrix},\quad\text{and}\quad\tilde{ \mathbf{M}}=\begin{pmatrix}0&0\\ 0&\mathbf{A}_{1}d_{1}^{\ast}d_{1}\mathbf{A}_{1}\end{pmatrix}.\]
It can be verified that \(\tilde{\mathbf{L}}\) is skew-symmetric and \(\tilde{\mathbf{G}},\tilde{\mathbf{M}}\) are symmetric positive definite with respect to the block-diagonal inner product parameterized by \(\mathbf{A}=\mathrm{diag}(\mathbf{A}_{0},\mathbf{A}_{1})\). Furthermore, let \(\mathbf{X}=(\mathbf{Q},\mathbf{P})\) denote the tuple of node and edge feature matrices, the energy considered is the total kinetic energy on both node and edge spaces, i.e., \(E(\mathbf{X})=\frac{1}{2}(\|\mathbf{Q}\|^{2}+\|\mathbf{P}\|^{2})\). The gradient with respect to the generalized inner product (called \(\mathbf{A}\)-gradient) can be computed as \(\nabla_{\mathbf{A}}E(\mathbf{X})=[\mathbf{A}_{0}^{-1}\frac{\partial E}{ \partial\mathbf{Q}},\mathbf{A}_{1}^{-1}\frac{\partial E}{\partial\mathbf{P}} ]^{\top}=[\mathbf{A}_{0}^{-1}\mathbf{Q},\mathbf{A}_{1}^{-1}\mathbf{P}]^{\top}\). For the Metriplectic dynamics, it is in general non-trivial to identify an entropy such that the degeneracy conditions hold. Hence the paper constructs a separate energy and entropy function pair as \(E_{m}(\mathbf{X})=f_{E}(\mathbf{Q})+g_{E}(d_{0}d_{0}^{\top}\mathbf{P})\) and \(S_{m}(\mathbf{X})=g_{S}(d_{1}^{\top}d_{1}\mathbf{P})\), for some node function \(f_{E}\) and edge functions \(g_{E},g_{S}\) applied channel-wise. The \(\mathbf{A}\)-gradient is derived as
\[\nabla_{\mathbf{A}}E_{m}(\mathbf{X})=\begin{pmatrix}\mathbf{A}_{0}^{-1} \mathbf{1}\otimes\nabla_{\mathbf{A}}f_{E}(\mathbf{Q})\\ d_{0}d_{0}^{\top}\mathbf{1}\otimes\nabla_{\mathbf{A}}g_{E}(d_{0}d_{0}^{\top} \mathbf{P})\end{pmatrix},\qquad\qquad\nabla_{\mathbf{A}}S_{m}(\mathbf{X})= \begin{pmatrix}0\\ \mathbf{A}_{1}^{-1}d_{1}^{\top}d_{1}\mathbf{1}\otimes\nabla_{\mathbf{A}}g_{S}( d_{1}d_{1}^{\top}\mathbf{P})\end{pmatrix}.\]
Importantly, the degeneracy conditions \(\tilde{\mathbf{L}}\nabla_{\mathbf{A}}S=\tilde{\mathbf{M}}\nabla_{\mathbf{A}} E=0\) are satisfied by construction.
Finally, the dynamics in (3) generalize to graphs as
\[\text{Hamiltonian}_{G}: \frac{\partial\mathbf{X}}{\partial t}=\tilde{\mathbf{L}}(\mathbf{X})\nabla_{\mathbf{A}}E(\mathbf{X});\] \[\text{Gradient}_{G}: \frac{\partial\mathbf{X}}{\partial t}=-\tilde{\mathbf{G}}(\mathbf{X})\nabla_{\mathbf{A}}E(\mathbf{X});\] \[\text{Double bracket}_{G}: \frac{\partial\mathbf{X}}{\partial t}=\tilde{\mathbf{L}}(\mathbf{X})\nabla_{\mathbf{A}}E(\mathbf{X})+\tilde{\mathbf{L}}^{2}(\mathbf{X})\nabla_{\mathbf{A}}E(\mathbf{X});\] \[\text{Metriplectic}_{G}: \frac{\partial\mathbf{X}}{\partial t}=\tilde{\mathbf{L}}(\mathbf{X})\nabla_{\mathbf{A}}E_{m}(\mathbf{X})+\tilde{\mathbf{M}}(\mathbf{X})\nabla_{\mathbf{A}}S_{m}(\mathbf{X}),\]
where the operators \(\tilde{\mathbf{L}}(\mathbf{X}),\tilde{\mathbf{G}}(\mathbf{X}),\tilde{\mathbf{ M}}(\mathbf{X})\) are state-dependent: an attention mechanism constructs the metric tensors \(\mathbf{A}_{0},\mathbf{A}_{1}\), which in turn parameterize the exterior derivative operators.
**Remark 1** (Connection to GCN and GAT/GRAND).: GCN can be seen as the Gradient\({}_{G}\) dynamics with \(\mathbf{A}_{0},\mathbf{A}_{1}\) parameterized by node degrees. Setting \(\mathbf{A}_{0}\) as a diagonal matrix (on nodes) with diagonal entries \(a_{0,ii}=\sqrt{\deg_{i}}\) and \(\mathbf{A}_{1}\) as the identity matrix (over edges), we can verify \((d_{0}^{\ast}d_{0}\mathbf{A}_{0}^{-1}\mathbf{Q})_{i}=\sum_{j\in\mathcal{N}_{i }}(\mathbf{q}_{i}/\deg_{i}-\mathbf{q}_{j}/\sqrt{\deg_{i}\deg_{j}})=(\widehat{ \mathbf{L}}\mathbf{Q})_{i}\), where \(\widehat{\mathbf{L}}\) denotes the symmetrically normalized graph Laplacian. In this case, Gradient\({}_{G}\) recovers the heat equation (with normalized Laplacian) when \(\mathbf{P}=0\) and thus leads to GCN under discretization.
Similarly, GAT can be seen as the same Gradient\({}_{G}\) dynamics while learning a metric structure \(\mathbf{A}_{0},\mathbf{A}_{1}\). That is, choose \(a_{0,ii}=\sqrt{\sum_{j\in\mathcal{N}_{i}}\exp(\mathrm{attn}(\mathbf{q}_{i}, \mathbf{q}_{j}))}\) and \(a_{1,e_{ij}}=\exp(\mathrm{attn}(\mathbf{q}_{i},\mathbf{q}_{j}))\). Let the attention coefficient be \(a(\mathbf{q}_{i},\mathbf{q}_{j})=a_{1,e_{ij}}/a_{0,ii}\). Then one can show Gradient\({}_{G}\) dynamics with \(\mathbf{P}=0\) corresponds to a (symmetrically) normalized version of GRAND.
This result demonstrates the irreversible nature of GCN or GAT/GRAND dynamics where energy dissipates, while other dynamics are either conservative or partially dissipative.
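The identity stated in Remark 1 can be verified directly on a small example. The sketch below (a path graph chosen purely for illustration) builds \(d_{0}\) as an incidence matrix, sets \(\mathbf{A}_{0}=\mathrm{diag}(\sqrt{\deg_{i}})\), \(\mathbf{A}_{1}=\mathbf{I}\), and checks that \(d_{0}^{*}d_{0}\mathbf{A}_{0}^{-1}\) equals the symmetrically normalized graph Laplacian:

```python
import numpy as np

# Toy graph: path 0-1-2-3.
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
G = np.zeros((len(edges), n))          # incidence matrix, i.e., d_0 (graph gradient)
for k, (i, j) in enumerate(edges):
    G[k, i], G[k, j] = -1.0, 1.0

A = np.zeros((n, n))                   # adjacency matrix
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(1)

A0 = np.diag(np.sqrt(deg))             # node metric, a_{0,ii} = sqrt(deg_i); A_1 = I
# d_0^* = A_0^{-1} d_0^T A_1, so the Gradient_G operator acting on Q is:
op = np.linalg.inv(A0) @ G.T @ G @ np.linalg.inv(A0)

# Symmetrically normalized Laplacian I - D^{-1/2} A D^{-1/2}.
L_norm = np.eye(n) - np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)
assert np.allclose(op, L_norm)
```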
Hamiltonian mechanics. _Hamiltonian mechanics_ has also been considered in [57], where, instead of using edge features as the momentum, the momentum is parameterized with a neural network applied to the node features. A key distinction from the previous works is that the evolution of the node features is decoupled from the graph structure. Let \(\mathbf{Q}\) denote the node features; the momentum is computed as \(\mathbf{P}=\mathrm{MLP}_{\Theta}(\mathbf{Q})\). The Hamiltonian mechanics is determined by a Hamiltonian function \(\mathcal{H}(\mathbf{Q},\mathbf{P})\), which characterizes the total energy of the system. The dynamics is governed by the Hamiltonian equations
\[\frac{\partial\mathbf{Q}}{\partial t}=\frac{\partial\mathcal{H}}{\partial \mathbf{P}},\qquad\frac{\partial\mathbf{P}}{\partial t}=-\frac{\partial \mathcal{H}}{\partial\mathbf{Q}}.\]
The paper motivates a variety of parameterizations for the Hamiltonian \(\mathcal{H}\). One specific choice is \(\mathcal{H}(\mathbf{Q},\mathbf{P})=\mathrm{tr}(\mathbf{P}^{\top}\mathbf{M}( \mathbf{Q})\mathbf{P})\), where \(\mathbf{M}(\mathbf{Q})\) represents the inverse metric tensor at \(\mathbf{Q}\) (also learnable in the local neighbourhood). Its solution \(\mathbf{Q}(t)\) recovers a geodesic (a locally shortest curve) on the manifold with metric parameterized by \(\mathbf{M}(\mathbf{Q})^{-1}\). Hence, nodes are effectively embedded into an (implicit) manifold with a learnable metric. Unlike previous methods, where the dynamics of the features depends on a coupling function regulated by the graph topology, here the evolution of \(\mathbf{Q}\) is independent across the nodes. To further incorporate the graph structure, message passing by neighbourhood aggregation is performed after the feature evolution via the Hamiltonian equations. Multiple layers of Hamiltonian dynamics and message passing are stacked to model complex geometries and node embeddings.
A follow-up work [102] extends the formulation by considering a general graph-coupled Hamiltonian function \(\mathcal{H}_{\mathcal{G}}(\mathbf{Q},\mathbf{P})\). For example, the paper considers a Hamiltonian function defined as the norm of the output of a two-layer GCN, where \(\frac{\partial\mathcal{H}_{\mathcal{G}}}{\partial\mathbf{Q}},\frac{\partial \mathcal{H}_{\mathcal{G}}}{\partial\mathbf{P}}\) are computed by auto-differentiation. Further, the paper studies various notions of stability on graphs from the theory of dynamical systems, including BIBO, Lyapunov, structural and conservative stability. The work conducts a systematic analysis and comparison of the proposed Hamiltonian-based dynamics with existing graph neural dynamics, such as GRAND, BLEND, Mean Curvature and Beltrami, and finds that the conservative Hamiltonian dynamics shows improved robustness against adversarial attacks.
### Dynamics as gradient flow
Most of the aforementioned designs of GNNs are inspired by evolution of some underlying dynamics. The learnable parameters, such as channel mixing, are usually added after discretization to increase the expressive power. In [39], the dynamics is instead given as the _gradient flow_ of some learnable energy. The framework is general and includes many of the existing works as special cases (as long as the channel mixing matrix is symmetric)2. The parameterized energy takes the following form:
Footnote 2: Although many existing works can be written as gradient flow of some energy, their motivation comes mostly from the dynamics, not from the energy.
\[\mathcal{E}_{\theta}(\mathbf{X})=\frac{1}{2}\mathrm{tr}(\mathbf{X}^{\top} \mathbf{X}\mathbf{\Omega})-\frac{1}{2}\mathrm{tr}(\mathbf{X}^{\top}\mathbf{A} \mathbf{X}\mathbf{W})+\varphi^{0}(\mathbf{X},\mathbf{X}^{0}),\]
where \(\mathbf{A}\) is the (normalized) adjacency matrix and \(\mathbf{\Omega},\mathbf{W}\in\mathbb{R}^{c\times c}\) are assumed to be symmetric3. The first term determines the external forces exerted upon the system and the second term reflects the pairwise interactions while the last term quantifies the energy preserved by the source term \(\mathbf{X}^{0}\). Although \(\varphi^{0}\) can be general, the paper considers a form \(\varphi^{0}(\mathbf{X},\mathbf{X}^{0})=\mathrm{tr}(\mathbf{X}^{\top}\mathbf{X }^{0}\tilde{\mathbf{W}})\). The gradient flow of \(\mathcal{E}_{\theta}(\mathbf{X})\) yields the
dynamics of the following general form,
\[\text{GRAFF}:\quad\frac{\partial\mathbf{X}}{\partial t}=-\nabla\mathcal{E}_{ \theta}(\mathbf{X})=-\mathbf{X}\mathbf{\Omega}+\mathbf{A}\mathbf{X}\mathbf{W}- \mathbf{X}^{0}\tilde{\mathbf{W}}.\]
This formulation includes many of the existing dynamics-motivated GNNs. When \(\mathbf{\Omega}=\mathbf{W},\tilde{\mathbf{W}}=0\), this corresponds to the evolution of (residual) GCN, or GAT/GRAND [15] if \(\mathbf{A}\) is constructed by an attention mechanism. If \(\tilde{\mathbf{W}}\neq 0\), this corresponds to GRAND++ [86] and thus also CGNN [99]. The decrease of the general energy does not necessarily lead to a decrease in the Dirichlet energy (which is the special case of \(\mathcal{E}_{\theta}(\mathbf{X})\) with \(\mathbf{\Omega}=\mathbf{W}=\mathbf{I},\varphi^{0}=0\)). This is mainly due to the occurrence of both attractive and repulsive effects along the positive and negative eigen-directions of \(\mathbf{W}\). More formally, one decomposes \(\mathbf{W}=\mathbf{\Theta}_{+}^{\top}\mathbf{\Theta}_{+}-\mathbf{\Theta}_{- }^{\top}\mathbf{\Theta}_{-}\) and rewrites the energy (without \(\varphi^{0}\)) as
\[\mathcal{E}_{\theta}(\mathbf{X})=\frac{1}{2}\sum_{i\in\mathcal{V}}\langle \mathbf{x}_{i},(\mathbf{\Omega}-\mathbf{W})\mathbf{x}_{i}\rangle+\frac{1}{4} \sum_{(i,j)\in\mathcal{E}}\|\mathbf{\Theta}_{+}(\nabla\mathbf{X})_{i,j}\|^{2 }-\frac{1}{2}\sum_{(i,j)\in\mathcal{E}}\|\mathbf{\Theta}_{-}(\nabla\mathbf{X} )_{i,j}\|^{2}.\]
It has been shown that the gradient flow minimizes the gradient along the positive eigen-directions (a smoothing effect) while maximizing the gradient along the negative eigen-directions (a sharpening effect). In contrast, minimizing the Dirichlet energy always induces smoothing, because it corresponds to \(\mathbf{W}=\mathbf{I}\), which has only positive eigen-directions. This allows GRAFF to avoid oversmoothing and produce sharpening effects as long as \(\mathbf{W}\) has a sufficiently large negative spectrum.
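The attraction/repulsion mechanism can be observed numerically. In this illustrative sketch (a toy path graph with hand-picked \(\mathbf{\Omega}=0\) and \(\mathbf{W}=\mathrm{diag}(1,-1)\), not a trained model), the energy \(\mathcal{E}_{\theta}\) decreases monotonically along the discretized flow, while the channel aligned with the negative eigen-direction of \(\mathbf{W}\) sharpens toward the highest graph frequency:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy path graph of 5 nodes; A is the symmetrically normalized adjacency.
n, c = 5, 2
Adj = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
deg = Adj.sum(1)
A = np.diag(deg ** -0.5) @ Adj @ np.diag(deg ** -0.5)
L_hat = np.eye(n) - A                      # normalized Laplacian

Omega = np.zeros((c, c))
W = np.diag([1.0, -1.0])                   # one positive, one negative eigen-direction

energy = lambda X: 0.5 * np.trace(X.T @ X @ Omega) - 0.5 * np.trace(X.T @ A @ X @ W)
rayleigh = lambda v: (v @ L_hat @ v) / (v @ v)   # normalized Dirichlet energy of one channel

X = rng.standard_normal((n, c))
E_prev, tau = energy(X), 0.05
for _ in range(1000):
    X = X + tau * (-X @ Omega + A @ X @ W) # forward-Euler gradient flow of E_theta
    assert energy(X) <= E_prev             # the learnable energy decreases
    E_prev = energy(X)

# Positive eigen-direction smooths (frequency -> 0); negative one sharpens
# (frequency -> 2 on this bipartite graph).
assert rayleigh(X[:, 0]) < 0.1
assert rayleigh(X[:, 1]) > 1.9
```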
### Multi-scale diffusion
Previous sections have mostly focused on dynamics with local diffusion; the signal/density at a node changes depending only on its immediate neighbourhood, and communication with distant nodes happens only when the diffusion time is sufficiently large. Section 3.3 discusses dynamics that leverage non-local diffusion, but these are nonetheless mostly restricted to a single scale at each diffusion step, e.g., a single \(\alpha\) in fractional diffusion [68] and a single \(t_{\theta}\) in time-derivative diffusion [7]. Multi-scale graph neural networks, such as ChebyNet [23], LanczosNet [63] and Framelet GNN [104], are capable of capturing multi-scale graph properties through spectral filtering on the graph Laplacian. The eigen-pairs of the graph Laplacian encode structural information at different levels of granularity, and separate processing of each resolution provides insight into both local and global patterns.
Several recent works have adapted the continuous dynamics formulation to multi-scale GNNs. [46] introduce a multi-scale diffusion process via _graph framelets_. Apart from the multi-scale properties shared with other spectral GNNs, graph framelets further separate low-pass from high-pass filters. Let \(\mathcal{W}_{0,J}\in\mathbb{R}^{n\times n}\) denote the low-pass framelet transform and \(\mathcal{W}_{r,j}\in\mathbb{R}^{n\times n}\), \(r=1,...,R\), \(j=1,...,J\), denote the high-pass framelet transforms, where \(R\) is the number of high-pass filter banks and \(J\) is the scale level. For a multi-channel graph signal \(\mathbf{X}\in\mathbb{R}^{n\times c}\), \(\mathcal{W}_{0,J}\mathbf{X}\) and \(\mathcal{W}_{r,j}\mathbf{X}\) represent the low-pass and high-pass framelet coefficients. For notational clarity, let \(\mathcal{I}=\{(0,J)\}\cup\{(r,j)\}_{1\leq r\leq R,1\leq j\leq J}\) be the framelet index set. Due to the reconstruction property of tight framelets, \(\sum_{(r,j)\in\mathcal{I}}\mathcal{W}_{r,j}^{\top}\mathcal{W}_{r,j}\mathbf{X}= \mathbf{X}\).
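The reconstruction property is easy to verify for a spectrally defined one-scale tight frame. The filters below (\(g_{0}=\cos\) and \(g_{1}=\sin\) of a rescaled spectrum, so that \(g_{0}^{2}+g_{1}^{2}=1\)) are an illustrative choice, not the specific framelet construction of [46]:

```python
import numpy as np

# Toy graph Laplacian (path of 4 nodes), eigendecomposition L = U diag(lam) U^T.
n = 4
Adj = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(Adj.sum(1)) - Adj
lam, U = np.linalg.eigh(L)

# One low-pass filter g0 and one high-pass filter g1 with g0^2 + g1^2 = 1.
theta = (np.pi / 2) * lam / lam.max()
W_low  = U @ np.diag(np.cos(theta)) @ U.T   # low-pass framelet transform
W_high = U @ np.diag(np.sin(theta)) @ U.T   # high-pass framelet transform

X = np.random.default_rng(3).standard_normal((n, 2))
# Tightness / reconstruction: sum_r W_r^T W_r X = X.
X_rec = W_low.T @ W_low @ X + W_high.T @ W_high @ X
assert np.allclose(X_rec, X)
```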
The spatial framelet diffusion proposed in [46] is given as
\[\text{GradFUFG}:\quad\frac{\partial\mathbf{X}}{\partial t}=-\sum_{(r,j)\in \mathcal{I}}\big{(}\mathcal{W}_{r,j}^{\top}\mathcal{W}_{r,j}\mathbf{X} \mathbf{\Omega}_{r,j}-\mathcal{W}_{r,j}^{\top}\mathbf{\widehat{A}}\mathcal{W}_ {r,j}\mathbf{X}\mathbf{W}_{r,j}\big{)},\]
where \(\mathbf{\Omega}_{r,j},\mathbf{W}_{r,j}\) are symmetric channel mixing matrices. When \(\mathbf{\Omega}_{r,j}=\mathbf{W}_{r,j}=\mathbf{I}\), by the tightness of framelet transform, GradFUFG reduces to the heat equation as \(\frac{\partial\mathbf{X}}{\partial t}=-\mathbf{\widehat{L}}\mathbf{X}\). The spatial-based framelet
diffusion can be seen as the gradient flow of a generalized Dirichlet energy, which includes the parameterized energy considered by GRAFF [39] as a special case. Due to the separate modelling of low-pass and high-pass filters as well as of different scale levels, the dynamics can adapt to datasets of different homophily levels and has the potential to avoid oversmoothing.
## 4 Numerical solvers for differential equations on graphs
Most of the works based on continuous GNN dynamics adopt the following architecture, first introduced in GRAND [15].
\[\mathbf{X}(0)=\mathrm{Emb}(\mathbf{X}_{\mathrm{in}}),\qquad\mathbf{X}(T)= \mathbf{X}(0)+\int_{0}^{T}\frac{\partial\mathbf{X}(t)}{\partial t}dt,\qquad \mathbf{Y}=\mathrm{Out}(\mathbf{X}(T)),\]
where \(\mathrm{Emb}(\cdot),\mathrm{Out}(\cdot)\) are respectively the input embedding and output decoding functions, which are usually learnable. For second-order dynamics like in PDE-GCN\({}_{\mathrm{H}}\) and GraphCON, the same formulation applies if the second-order ODE is rewritten as a system of first-order ODEs.
There exists a variety of numerical solvers for differential equations once the continuous dynamics is formulated on graphs. In fact, with the discrete differential operators defined on graphs, the PDE reduces to an ODE, which can be solved with standard numerical integrators. One natural strategy for discretizing a continuous dynamics is through _finite differences_. In particular, the forward Euler and leapfrog methods are commonly employed for first-order and second-order ODEs respectively. For a first-order ODE given by \(\frac{\partial\mathbf{X}}{\partial t}=F(\mathbf{X},t)\), the forward Euler discretization leverages a forward finite time difference, which gives the update \(\mathbf{X}(t+1)=\mathbf{X}(t)+\tau F(\mathbf{X}(t),t)\) for some stepsize \(\tau>0\). Forward Euler is an explicit method, in that \(\mathbf{X}(t+1)\) depends on \(\mathbf{X}(t)\) explicitly. In contrast, the backward Euler method is implicit, with the update \(\mathbf{X}(t+1)=\mathbf{X}(t)+\tau F(\mathbf{X}(t+1),t+1)\), which involves solving a (nonlinear) system of equations to obtain \(\mathbf{X}(t+1)\).
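As a concrete illustration, both Euler schemes applied to the graph heat equation \(\frac{\partial\mathbf{X}}{\partial t}=-\mathbf{L}\mathbf{X}\) on a toy path graph conserve total mass and converge to the constant state (for backward Euler, the linear system \((\mathbf{I}+\tau\mathbf{L})\mathbf{X}(t+1)=\mathbf{X}(t)\) is solved at each step):

```python
import numpy as np

# Graph heat equation dx/dt = -L x on a path of 4 nodes.
n = 4
Adj = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(Adj.sum(1)) - Adj
x0 = np.array([1.0, 0.0, 0.0, 0.0])   # all mass on the first node
tau, steps = 0.1, 100
I = np.eye(n)

x_fwd, x_bwd = x0.copy(), x0.copy()
for _ in range(steps):
    x_fwd = x_fwd + tau * (-L @ x_fwd)            # explicit (forward Euler)
    x_bwd = np.linalg.solve(I + tau * L, x_bwd)   # implicit (backward Euler)

# Heat diffusion conserves total mass and converges to the constant state 1/n.
assert np.isclose(x_fwd.sum(), 1.0) and np.isclose(x_bwd.sum(), 1.0)
assert np.allclose(x_fwd, 0.25, atol=1e-2) and np.allclose(x_bwd, 0.25, atol=1e-2)
```

Note that the explicit step is only stable for \(\tau<2/\lambda_{\max}(\mathbf{L})\), whereas the implicit step is unconditionally stable.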
For a second-order ODE \(\frac{\partial^{2}\mathbf{X}}{\partial t^{2}}=F(\mathbf{X},t)\), the leapfrog method, which leverages centered finite differences, yields the update \(\mathbf{X}(t+1)=2\mathbf{X}(t)-\mathbf{X}(t-1)+\tau^{2}F(\mathbf{X}(t),t)\), and has been used for PDE-GCN\({}_{H}\)[29]. The leapfrog method can be equivalently derived by rewriting the second-order ODE as a system of first-order ODEs (for the position and velocity) and applying the forward Euler method to update the two alternately. In GraphCON [78], a similar idea is applied to a more complex second-order dynamics.
Higher-order methods employ higher-order approximations for solving both first- and second-order ODEs and can be either explicit or implicit. Common choices include the Runge-Kutta family, a class of single-step methods that evaluate \(F\) at multiple extrapolated states. A popular variant is the fourth-order Runge-Kutta method (RK4), which balances accuracy and efficiency. It computes \(\mathbf{X}(t+1)=\mathbf{X}(t)+\frac{\tau}{6}(\mathbf{K}_{1}+2\mathbf{K}_{2}+2\mathbf{K}_{3}+\mathbf{K}_{4})\), where \(\mathbf{K}_{1}=F(\mathbf{X}(t),t)\), \(\mathbf{K}_{2}=F(\mathbf{X}(t)+\tau\mathbf{K}_{1}/2,t+\tau/2)\), \(\mathbf{K}_{3}=F(\mathbf{X}(t)+\tau\mathbf{K}_{2}/2,t+\tau/2)\), and \(\mathbf{K}_{4}=F(\mathbf{X}(t)+\tau\mathbf{K}_{3},t+\tau)\). Multi-step methods, such as Adams-Bashforth, store multiple previous steps in order to approximate higher-order derivatives. More advanced solvers adapt the stepsize based on the error estimated at each iteration; Dormand-Prince, a Runge-Kutta method with stepsize control, is widely adopted as the default solver for ODEs [26].
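A minimal implementation of the RK4 step, checked here against the exact solution of \(\frac{dx}{dt}=-x\) (an arbitrary test problem chosen for illustration) and compared against forward Euler at the same step size:

```python
import numpy as np

def rk4_step(F, X, t, tau):
    """One classical fourth-order Runge-Kutta step for dX/dt = F(X, t)."""
    K1 = F(X, t)
    K2 = F(X + tau * K1 / 2, t + tau / 2)
    K3 = F(X + tau * K2 / 2, t + tau / 2)
    K4 = F(X + tau * K3, t + tau)
    return X + (tau / 6) * (K1 + 2 * K2 + 2 * K3 + K4)

# Test problem dx/dt = -x with exact solution e^{-t}.
F = lambda x, t: -x
x, tau = np.array([1.0]), 0.1
for k in range(10):
    x = rk4_step(F, x, k * tau, tau)
err_rk4 = abs(x[0] - np.exp(-1.0))

# Forward Euler on the same problem, same step size, for comparison.
y = 1.0
for _ in range(10):
    y += tau * (-y)
err_euler = abs(y - np.exp(-1.0))

assert err_rk4 < 2e-6          # fourth-order accurate
assert err_rk4 < err_euler     # far more accurate than first-order Euler
```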
For backpropagation, the adjoint sensitivity method introduced in [19] can be used, which requires solving an adjoint ODE for the gradient.
## 5 On oversmoothing, oversquashing, heterophily and robustness of GNN dynamics
As we have briefly mentioned, the main hurdles to successful GNN designs include oversmoothing, oversquashing, graph heterophily and adversarial perturbations. The framework of continuous GNNs allows principled analysis and treatment of these issues. In fact, many of the previously introduced dynamics are motivated by overcoming the limitations of existing GNN designs. Nonetheless, there exist possible trade-offs, such as between oversmoothing and oversquashing [40, 81], which make it challenging for GNN dynamics to avoid both phenomena at the same time. Conceptually, common strategies for mitigating oversquashing rely on graph rewiring [88] to increase propagation strength, which likely enhances smoothing effects. Apart from the trade-off, new problems may emerge as a result of resolving existing ones. One example is the training difficulty (from gradient vanishing or explosion) associated with complex and long-range dynamics.
This section provides a holistic overview on these undesired behaviours of GNNs through the lens of continuous dynamics, along with a discussion on how existing approaches address the issues.
Oversmoothing. Oversmoothing refers to the phenomenon in which node signals become progressively similar as a result of message passing, converging to a state independent of the input signals. Although there are many different characterizations of oversmoothing, most if not all rely on the (normalized) Dirichlet energy, which measures node similarity. Recall from Section 2 that the Dirichlet energy is defined as \(\mathcal{E}_{\mathrm{dir}}(\mathbf{X})=\frac{1}{2}\sum_{(i,j)\in\mathcal{E}} \|\mathbf{x}_{i}/\sqrt{\mathrm{deg}_{i}}-\mathbf{x}_{j}/\sqrt{\mathrm{deg}_{ j}}\|^{2}=\mathrm{tr}(\mathbf{X}^{\top}\widehat{\mathbf{L}}\mathbf{X})\), where \(\widehat{\mathbf{L}}\) is the symmetrically normalized Laplacian. Because oversmoothing entails a loss of distinguishability across the nodes, a natural quantification of oversmoothing is \(\mathcal{E}_{\mathrm{dir}}(\mathbf{X}(t))\to 0\) as \(t\to\infty\). By the definition of the Dirichlet energy, in the limiting state, nodes with the same degree collapse into a single representation. A stronger notion of oversmoothing is considered in [78, 77], requiring an exponential decay of the Dirichlet energy, i.e., \(\mathcal{E}_{\mathrm{dir}}(\mathbf{X}(t))\leq C_{1}e^{-C_{2}t},\forall t>0\), for some constants \(C_{1},C_{2}>0\). This is equivalent to requiring that the system has a node-wise constant, exponentially stable equilibrium state. In this work, we focus on a notion of oversmoothing called low-frequency dominance (LFD), put forward by [39]. A dynamics is said to be low-frequency dominant if \(\mathcal{E}_{\mathrm{dir}}(\mathbf{X}(t)/\|\mathbf{X}(t)\|)\to 0\). It can be readily verified that \(\mathcal{E}_{\mathrm{dir}}(\mathbf{X}(t))\to 0\) implies \(\mathcal{E}_{\mathrm{dir}}(\mathbf{X}(t)/\|\mathbf{X}(t)\|)\to 0\), while the reverse implication is false (see Appendix B.2 of [39] for more discussion).
The normalization by \(\|\mathbf{X}(t)\|\) reveals the dependence of asymptotic behaviour to the dominant spectrum of the dynamics, where low-frequency is related to the smoothing effect.
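LFD is easy to observe for plain heat diffusion on a toy graph: the state converges to the kernel of \(\widehat{\mathbf{L}}\), and the normalized Dirichlet energy vanishes. A minimal sketch (path graph and step size chosen for illustration):

```python
import numpy as np

# Symmetrically normalized Laplacian of a path of 5 nodes.
n = 5
Adj = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
d = Adj.sum(1)
L_hat = np.eye(n) - np.diag(d ** -0.5) @ Adj @ np.diag(d ** -0.5)

dirichlet = lambda X: np.trace(X.T @ L_hat @ X)

X = np.random.default_rng(4).standard_normal((n, 2))
E0 = dirichlet(X / np.linalg.norm(X))
tau = 0.05
for _ in range(2000):
    X = X - tau * (L_hat @ X)       # forward-Euler heat diffusion dX/dt = -L_hat X

# Low-frequency dominance: the *normalized* Dirichlet energy vanishes.
assert dirichlet(X / np.linalg.norm(X)) < 1e-3
```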
In the sense of LFD, it can be shown that dynamics relying on (anisotropic) diffusion incur oversmoothing in the limit, such as the linear versions of GRAND [15], BLEND [16], and DIFFormer [98]; see, for example, [39, Theorem B.4] for a formal proof. Dynamics that involve (non-smooth) edge indicators, like Mean Curvature and Beltrami flows [83], \(p\)-Laplacian [35] and DIGNN [34], are also LFD, although their convergence is slower than that of (smooth) diffusion dynamics. Such a statement has been formalized in [34].
We summarize existing remedies for oversmoothing as follows.
1. _Source term_: One remedy that steers the dynamics away from the low-frequency dominance is to include a source term. For example, in \(p\)-Laplacian [35] and DIGNN [34], the source term is implicitly added in terms of input dependent regularization, while GRAND++ [86] and CGNN [99] explicitly inject the source information to the dynamics. The presence of a source term ensures the Dirichlet energy is lower bounded above zero where the limiting state is non-constant.
2. _Non-smoothing dynamics_: Choosing a non-smoothing dynamics, such as an oscillatory process as in PDE-GCN\({}_{\mathrm{H}}\)[29] and GraphCON [78], can help avoid oversmoothing. Oscillatory systems usually conserve rather than dissipate energy. In particular, it has been proved that the constant equilibrium state of such dynamics is not exponentially stable, because any perturbation persists due to energy conservation. Reversible dynamics, such as the Hamiltonian and metriplectic dynamics in [42], are also non-smoothing by construction.
3. _Diffusion modulators_: Imposing external forces to counteract or modulate the diffusion mechanism is beneficial for mitigating oversmoothing. Examples include the high-pass filters in GREAD [21] and GradFUFG [46], repulsive diffusion coefficients in ACMP [94] and ODNet [66], negative spectrum in GRAFF [39], reverse heat kernel in MHKG [81], anti-symmetry in A-DGN [41], diffusion gating in \(\mathrm{G}^{2}\)[79] and DeepGRAND [71]. In essence, oversmoothing can be avoided in the above dynamics by diminishing smoothing effects in the limit of evolution.
4. _Higher-order geometries_: [10] demonstrate the cause of oversmoothing from the perspective of choosing a trivial geometry. In contrast, by enriching the graph with a higher-order sheaf topology, the asymptotic behaviour of the diffusion process can be better controlled. Indeed, the sheaf Dirichlet energy can be increased with a suitable choice of non-symmetric sheaf structure on graphs, thus avoiding oversmoothing under this regime [10, Proposition 17].
Heterophily. Unlike homophilic graphs, where neighbouring nodes tend to share similar features and labels, nodes in heterophilic graphs usually exhibit dissimilar patterns compared to their neighbours. A GNN dominated by smoothing effects is generally less suited to such graphs. Following [39], the ability of a dynamics to adapt to heterophilic graphs is measured in terms of high-frequency dominance (HFD). A dynamics is said to be high-frequency dominant if \(\mathcal{E}_{\mathrm{dir}}(\mathbf{X}(t)/\|\mathbf{X}(t)\|)\to\rho_{L}\), where \(\rho_{L}\leq 2\) is the largest eigenvalue of \(\widehat{\mathbf{L}}\). Intuitively, an HFD dynamics is dominated by a sharpening effect and eventually converges to the highest-frequency eigenvectors of the Laplacian, where node information gets separated. In this work, we equate the ability of a dynamics to handle heterophily with its ability to become dominated by the highest frequency. Although an HFD dynamics is also undesired due to the loss of information related to the low frequencies, here we focus on the ability itself, meaning that a system should have the flexibility to converge to the eigenspace associated with the highest frequency, which allows separability on bipartite graphs. Under such a notion, a purely smoothing dynamics cannot be HFD. The incorporation of a source term or (positive) graph re-weighting alone is not sufficient to turn a smoothing dynamics into a sharpening one, even though empirically these strategies boost performance on heterophilic datasets, because the driving mechanism of the resulting system is still diffusion.
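Conversely, a purely sharpening flow such as reverse diffusion \(\frac{\partial\mathbf{X}}{\partial t}=+\widehat{\mathbf{L}}\mathbf{X}\) is HFD. The illustrative sketch below uses a path graph, which is bipartite, so \(\rho_{L}=2\) and the normalized Dirichlet energy approaches this maximum:

```python
import numpy as np

# Path graphs are bipartite, so the largest eigenvalue of L_hat is rho = 2.
n = 4
Adj = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
d = Adj.sum(1)
L_hat = np.eye(n) - np.diag(d ** -0.5) @ Adj @ np.diag(d ** -0.5)

rayleigh = lambda x: (x @ L_hat @ x) / (x @ x)   # normalized Dirichlet energy

x = np.random.default_rng(5).standard_normal(n)
tau = 0.05
for _ in range(2000):
    x = x + tau * (L_hat @ x)       # reverse diffusion: a purely sharpening flow
    x /= np.linalg.norm(x)          # renormalize to avoid overflow

# High-frequency dominance: the normalized Dirichlet energy approaches rho = 2.
assert rayleigh(x) > 2.0 - 1e-6
```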
Existing dynamics that provably accommodates separating effects can be classified as follows.
1. _Sharpening induced forces_: One scheme for a system to be HFD is to explicitly introduce sharpening-induced forces. Most of the diffusion modulators identified for tackling oversmoothing amount to incorporating such anti-smoothing forces, including GREAD [21], GradFUFG [46], ACMP [94], ODNet [66], GRAFF [39], and MHKG [81]. Conceptually, the dynamics becomes dominated by the high frequency if the magnitude of the sharpening effect surpasses that of the smoothing effect.
2. _Higher-order geometries_: Once the graph is equipped with a sheaf topology, [10] have shown that choosing higher-dimensional stalks and non-symmetric restriction maps allows sheaf diffusion to gain linear separability in the limit for heterophilic graphs.
We highlight that other schemes, such as the oscillatory dynamics in GraphCON [78] and the convection/advection processes in [103, 30], also demonstrate empirical success under heterophilic settings. Nevertheless, theoretical guarantees for these schemes in adapting to heterophilic graphs are currently unavailable.
Oversquashing.The phenomenon of oversquashing was first observed empirically by [2] and later formally studied by [88, 25]. It has been identified as a key factor hindering the expressive power of GNNs [25]. Oversquashing concerns a scenario where long-range communication between distant nodes matters for the downstream task, but message passing squashes information due to the exponentially growing size of the receptive field. A closely related concept is under-reaching, which refers to a shallow GNN not being able to fully explore long-range interactions [6]. A critical difference between the two is that under-reaching can often be alleviated by increasing the depth of a GNN, while oversquashing can occur even with deep GNNs. One major cause of the phenomenon is the local diffusion process over the given graph structure, where messages are exchanged only within each neighbourhood. This can be formally characterized by the sensitivity across nodes, measured through the Jacobian. More formally, consider a (discretized) dynamics \(\mathbf{X}^{\ell+1}=\mathcal{A}_{\mathcal{G}}(\mathbf{X}^{\ell})\), where \(\mathcal{A}_{\mathcal{G}}\) is a coupling function that aggregates information based on the input graph \(\mathcal{G}\). The sensitivity between nodes \(i,j\in\mathcal{V}\) after \(\ell\) iterations of updates can be computed as \(\big{\|}\frac{\partial\mathbf{x}_{i}^{\ell}}{\partial\mathbf{x}_{j}^{0}}\big{\|}_{2}\), where a smaller value indicates lower sensitivity and thus a higher degree of oversquashing.
In the simple case of isotropic diffusion \(\mathcal{A}_{\mathcal{G}}(\mathbf{X}^{\ell})=\sigma(\mathbf{A}\mathbf{X}^{ \ell}\mathbf{W})\), one can show \(\big{\|}\frac{\partial\mathbf{x}_{i}^{\ell}}{\partial\mathbf{x}_{j}^{0}} \big{\|}_{2}\leq c^{\ell}(\mathbf{A}^{\ell})_{i,j}\), where \(\mathbf{A}^{\ell}\) denotes the \(\ell\)-th matrix power of \(\mathbf{A}\) and \(c>0\) is a Lipschitz constant that depends on the activation function and \(\mathbf{W}\). It can be readily noticed that the graph structure encoded in \(\mathbf{A}\) critically determines the severity of oversquashing.
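The sensitivity bound can be made concrete with a small numerical example. The sketch below is an illustrative assumption, not code from any cited work: it uses the linear case \(\sigma=\mathrm{id}\), so the Jacobian \(\partial\mathbf{x}_{i}^{\ell}/\partial\mathbf{x}_{j}^{0}=(\mathbf{A}^{\ell})_{ij}(\mathbf{W}^{\top})^{\ell}\) can be computed exactly on a path graph, where distant nodes are fully insensitive until depth reaches their distance.

```python
import numpy as np

# Linear diffusion X^{l+1} = A X^l W on a 6-node path graph 0-1-2-3-4-5.
n = 6
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # path adjacency
rng = np.random.default_rng(1)
W = rng.standard_normal((3, 3)) * 0.3
c = np.linalg.norm(W, 2)          # Lipschitz constant for the linear model

def jacobian_norm(i, j, ell):
    """Exact sensitivity ||d x_i^ell / d x_j^0||_2 for the linear model."""
    A_ell = np.linalg.matrix_power(A, ell)
    # d x_i^ell / d x_j^0 = (A^ell)_{ij} (W^T)^ell in feature space
    return A_ell[i, j] * np.linalg.norm(np.linalg.matrix_power(W.T, ell), 2)

# Nodes 0 and 5 are 5 hops apart: with ell < 5 layers, sensitivity is exactly 0
assert jacobian_norm(0, 5, 4) == 0.0
# At ell = 5 it is nonzero but obeys the bound c^ell * (A^ell)_{0,5}
ell = 5
bound = c**ell * np.linalg.matrix_power(A, ell)[0, 5]
assert 0.0 < jacobian_norm(0, 5, ell) <= bound + 1e-12
```

The zero sensitivity below depth 5 illustrates under-reaching; the bound at depth 5 shows how \(\mathbf{A}\) governs how severely the single connecting walk squashes the signal.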
Below are strategies employed in GNN system that avoids oversquashing.
1. _Graph topology modification_: One natural strategy for addressing oversquashing is to enhance communication by rewiring the graph structure. GRAND [15] and BLEND [16] dynamically modify the connectivity according to the updated node features, which amounts to changing the spatial discretization over time. DIFFormer [98], on the other hand, adopts a fully connected graph and thus avoids oversquashing by allowing message passing between all pairs of nodes.
2. _Non-local dynamics_: Alternatively, non-local dynamics construct a dense graph communication function and thus mitigate oversquashing by design. This can be achieved, for example, by taking a fractional power of the message-passing matrix as in FLODE [68], leveraging the dense quantum diffusion kernel in QDC [67], learning the timestep of the heat kernel in TIDE [7], and capturing the entire path of diffusion as in G2TN [89].
3. _Multi-scale and spectral filtering_: Multi-scale diffusion in GradFUFG [46] has the potential to mitigate oversquashing by simultaneously accounting for information at different frequencies. Further, properly controlling the magnitude of high-pass versus low-pass filtering coefficients as in MHKG [81] (in the HFD setting) can provably improve the upper bound of the node sensitivity and thus oversquashing is alleviated.
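To see why the fractional-power strategy in item 2 yields non-local communication, the sketch below computes a fractional power of a normalized adjacency via eigendecomposition. This is an illustrative construction, not FLODE's actual implementation; in particular, clipping negative modes to keep the power real-valued is an assumption made here.

```python
import numpy as np

# Fractional power of a symmetric message-passing operator on a 4-node path.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(1)
S = np.diag(deg**-0.5) @ A @ np.diag(deg**-0.5)   # symmetric normalization

def frac_power(M, alpha):
    """M^alpha for symmetric M; negative modes are clipped (an assumption)."""
    lam, U = np.linalg.eigh(M)
    lam = np.clip(lam, 0.0, None)
    return U @ np.diag(lam**alpha) @ U.T

S_half = frac_power(S, 0.5)
# The original operator only connects 1-hop neighbours: entry (0, 3) is zero...
assert S[0, 3] == 0.0
# ...while the fractional power couples all node pairs (dense communication)
assert abs(S_half[0, 3]) > 1e-8
```

The density of \(S^{1/2}\) is exactly what lets a single propagation step exchange information between distant nodes, at the cost of a dense matrix-vector product per update.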
Despite the success of the aforementioned techniques in addressing oversquashing, a potential downside is the increased cost of propagating information at each update, which is often associated with a denser message-passing matrix. We also remark that some methods, including GIND [18] and A-DGN [41], are able to capture long-range dependencies without suffering from oversquashing. In particular, the driving mechanisms for this purpose are the implicit diffusion in GIND and the anti-symmetric weights in A-DGN, where the former can be viewed as an infinite-depth GNN and the latter allows building deep GNNs whose constant sensitivity avoids oversquashing.
Stability, gradient vanishing and explosion.Lastly, we highlight several other potential pitfalls of GNN dynamics that are often less explored in the literature than the previous issues, namely robustness and stability, and gradient vanishing and explosion during training.
Stability and robustness against adversarial attacks is a critical factor in assessing the performance of GNNs. For continuous dynamics, [83] verify that graph neural diffusion models, such as GRAND [15] and BLEND [16], are empirically more stable to graph topology perturbation than other discrete variants, which follows from derived theoretical results based on the heat kernel. [83] also prove that, due to the row-stochastic normalization of the diffusivity matrix in graph neural diffusion, stability against node perturbation is guaranteed. In [41], robustness to changes in initial conditions is explicitly enforced by the required anti-symmetry (due to the zero real parts of the Jacobian eigenvalues). In [102], the conservative stability offered by Hamiltonian dynamics has been shown to enhance robustness to adversarial attacks. In addition, [97] improve the generalization ability of GNNs with respect to topological distribution shifts via local diffusion and global attention.
Another pitfall of deep neural networks is gradient vanishing and explosion, which refers to the gradient converging exponentially to zero or diverging to infinity as depth increases. Because classic graph neural networks are often designed to be shallow, as in GCN [58] and GAT [91], in order to avoid oversmoothing, gradient vanishing and explosion have received less attention in the GNN community. However, in the continuous regime, especially as depth increases, these problems can emerge and severely escalate the training difficulty. In [78], it has been verified that GraphCON is able to mitigate the vanishing and exploding gradient problems because the gradient of GraphCON is upper bounded at most quadratically in depth and its decaying behaviour is shown to be independent of depth. Due to the anti-symmetric channel mixing in A-DGN [41], the magnitude of the gradient stays constant during backpropagation, hence avoiding the problem.
## 6 Open research questions
The recent success of graph neural dynamics has presented numerous opportunities for future explorations. This section summarizes several exciting research directions that remain open.
Dynamics with high-order graph structures and spatial derivatives.Most existing continuous GNNs are limited to evolution over nodes, except [42], which considers higher-order cliques. Meanwhile, graphs often encode more intricate topology in which higher-order substructures, such as paths, triangles and cycles, are crucial for downstream tasks [85]. For example, aromatic rings are cycle structures commonly appearing in molecules, which determine various chemical properties, such as solubility and acidity. Thus, designing a dynamics aware of such local substructures is beneficial. In addition, current dynamics often rely on a coupling function that involves only the first-order spatial derivative (i.e., the gradient). It is thus interesting to investigate how to properly define higher-order spatial derivatives (e.g., the Hessian) and incorporate them into the dynamics formulation.
Explore the potential of graph neural dynamics for other applications.Existing studies proposing continuous GNNs mostly focus on node-level prediction tasks, while the continuous formalism presents a general framework for modelling not only the evolution of nodes, but also edges, communities and even the entire graph. It is thus rewarding to adapt graph neural dynamics for other types of applications, such as link- and graph-level prediction, anomaly detection, and time-series forecasting. For example, [28] leverage both linear and anisotropic diffusion for molecule property prediction; [9] utilize graph heat and wave equations for graph topology recovery; [30] demonstrate the promise of using an advection process for spatial-temporal graph learning.
Expressivity of graph neural dynamics.The unprecedented success of GNNs has propelled researchers to study their expressive power in distinguishing non-isomorphic graphs [101, 70], counting subgraph structures [20], etc. While the continuous formalism presents a framework for interpreting and designing GNNs, few works characterize the expressivity of graph neural dynamics, especially at the graph and subgraph levels. Theoretical understanding of how the choice of numerical integrator affects the performance of the neural dynamics is also lacking. In addition, an equally fruitful direction is to explore whether the theory of dynamical systems, such as energy conservation and reversibility, can be leveraged to design theoretically more powerful graph neural networks.
Continuous formulation for other graph types.Most of the continuous GNNs focus primarily on dynamics over static, undirected, homogeneous graphs. Nevertheless, other graph types have also witnessed wide applicability, including _signed_ or _directed graphs_ (where edges can be negative or directed) [24, 51, 69, 87], _heterogeneous graphs_ (where nodes and edges have multiple types) [93, 53], _geometric graphs_ (where nodes or edges respect geometric constraints) [11], and _spatial-temporal graphs_ (where graph topology also evolves in time) [56, 55]. The continuous formulation of GNNs for these more complex graph types could be beneficial both for enhancing understanding and for improving the representation power of existing GNNs. However, such generalization requires nontrivial effort. For example, the dynamics should preserve symmetries for geometric graphs, and account for the evolution of both graph topology and signals for spatial-temporal graphs.
## 7 Conclusion
In this work, we conduct a comprehensive review of recent developments in continuous-dynamics-informed GNNs, an increasingly active field that marries the theory of dynamical systems and differential equations with graph representation learning. In particular, we provide mathematical formulations for a diverse range of graph neural dynamics, and show how the common pitfalls of GNNs can be alleviated through the continuous formulation. We highlight several open challenges and fruitful research directions that warrant further exploration. We hope this survey brings attention to the potential of the continuous formalism of GNNs, and serves as a starting point for future endeavors in harnessing classic theories of continuous dynamics to enhance the explainability and expressivity of GNN designs.
# Synaptic motor adaptation: A three-factor learning rule for adaptive robotic control in spiking neural networks

Samuel Schmidgall, Joe Hays

arXiv:2306.01906v1 · 2023-06-02 · http://arxiv.org/abs/2306.01906v1
###### Abstract
Legged robots operating in real-world environments must possess the ability to rapidly adapt to unexpected conditions, such as changing terrains and varying payloads. This paper introduces the Synaptic Motor Adaptation (SMA) algorithm, a novel approach to achieving real-time online adaptation in quadruped robots through the utilization of neuroscience-derived rules of synaptic plasticity with three-factor learning. To facilitate rapid adaptation, we meta-optimize a three-factor learning rule via gradient descent to adapt to uncertainty by approximating an embedding produced by privileged information using only locally accessible onboard sensing data. Our algorithm performs similarly to state-of-the-art motor adaptation algorithms and presents a clear path toward achieving adaptive robotics with neuromorphic hardware.
robot learning, spiking neural network, synaptic plasticity, neuromodulation, online learning
## 1 Introduction
Legged robots have made significant progress in the last four decades using physical dynamics modeling and control theory, requiring considerable expertise from the designer [1, 2, 3, 4]. In recent years, researchers have shown interest in using reinforcement and imitation learning techniques to reduce the designer's burden and enhance performance [5, 6, 7]. However, adaptation to new domains has remained a challenging problem due to various factors such as the differences in data distribution between the source and target domains, as well as the inherent complexity of the underlying relationships between the input and output variables (i.e. dynamic system uncertainties), which often necessitate significant modifications to the learning algorithms and architectures in order to achieve satisfactory results in the target domain [8].
Neuromorphic computing offers a promising approach to address the challenges of adaptation in legged robotics by enabling the development of more efficient and adaptive algorithms that can better emulate the neural structures and functions of biological systems. In addition, these systems are extremely energy efficient [9, 10, 11], enabling robotic learning algorithms to operate across long timescales without recharging. Many neuromorphic chips are betting on local learning rules, such as Hebbian and spike-timing dependent plasticity rules, to provide _on-chip learning_ for efficient learning on edge-computing applications [11, 12, 9, 13]. Local learning rules offer several advantages beyond their biological inspiration, including computational efficiency, scalability, and the ability to adapt to dynamic environments. Unlike traditional machine learning algorithms that require large amounts of training data and significant computational resources, local learning rules can learn from small amounts of data and adapt in real-time, making them particularly useful for applications in edge computing [14, 15, 16]. Furthermore, since the learning is distributed across the network of neurons, local learning rules are highly parallelizable, allowing for efficient processing of large amounts of data.
Recently, there has been notable progress in developing algorithms that employ local learning rules, due to the advancements in the theory of three-factor learning in neuroscience [17, 18]. This theory offers a solution for assigning credit to synapses over time without relying on the backpropagation of errors, which is typically used for credit assignment in machine learning applications. The current most effective local learning rules for neuromorphic devices
are based on this theory, and show promising potential for enabling on-chip learning in various real-world applications [19; 20; 16].
Independently and in parallel, significant strides have been made toward developing adaptive robotic controllers for legged robots. These methods, termed _motor adaptation_ (MA) algorithms, learn to estimate current environmental factors (e.g. friction coefficients, terrain) from locally accessible data, which is provided as state input to the network [21; 22; 23; 24; 25]. In this work, we introduce a motor adaptation algorithm that uses neuroscience-derived rules of plasticity together with a third-factor signal to dynamically update the _synaptic weights_ of the network. This method, called Synaptic Motor Adaptation (SMA), provides a novel approach to motor adaptation by enabling the policy to learn from new experiences in real time rather than simply updating its state input.
SMA is particularly well-suited to legged robots as it allows the network to update its connections with respect to the current environment conditions, such as uneven terrain, while maintaining stable control over the robot. The proposed three-factor learning rule used in SMA builds on work on differentiable plasticity [20; 26; 27], which makes it amenable to gradient-descent optimization. This approach has the potential to significantly improve the performance and adaptability of legged robots, which could have wide-ranging applications in the field of robotics, particularly for the deployment of neuromorphic devices.
## 2 Background & Related work
### Motor Adaptation Algorithms
Robotic learning has remained a major challenge in AI since successful deployment would require the algorithm to adapt in real-time to unseen situations, such as dynamic payloads, novel terrain dynamics, as well as hardware degradation over time. This problem has remained a major hurdle since the majority of deep learning algorithms would train a network in simulation _offline_ and then fix the network weights for _online_ deployment. Significant advancements have been realized recently with the introduction of _motor adaptation_ algorithms [21; 22; 23; 24; 25], which act much like a system identification estimator, with the difference that (1) the estimate is a learned embedding containing only the most vital information for adaptation rather than the entirety of the system dynamics and (2) the estimate is made very rapidly from a temporal history of sensory information.
Motor adaptation algorithms typically consist of two components: a base policy \(\pi\) and an environment factor encoder \(\mu\). During the first phase of simulated training, the factor encoder \(\mu\) takes as input privileged information from the environment \(e(t)\) that would not be accessible to a deployed system (e.g. friction, motor strength, robot center of mass) and produces a low-dimensional output embedding \(z(t)\) referred to as an latent extrinsic vector. The latent vector \(z(t)\) is then provided as input to the base policy \(\pi\) and optimized by the base policy loss such that \(z(t)\) proves a useful latent representation for \(\pi\) so that it can better solve its objective. This process can be described by the following equations:
Figure 1: Graphical description of three-factor learning. (Left) Long-term potentiation and depression based on pre- and post-synaptic spike timings. (Middle) Membrane potential dynamics of the post-synaptic neuron. (Right) Neuromodulator quantity affects the growth and consolidation of the synaptic weights.
\[z(t)=\mu(e(t)) \tag{1}\] \[a(t)=\pi(x(t),a(t-1),z(t)). \tag{2}\]
In the second phase of training, an environment factor _estimator_\(\phi\) is trained via regression to match the output of the environment factor encoder \(\mu\) using a time history of state and action pairs (\(a(t-N-1),s(t-N),...,a(t-1),s(t)\)). In essence, an online approximation of the extrinsics embedding \(z(t)\) is generated using information accessible to the robot.
\[\hat{z}(t)=\phi(a(t-N-1),s(t-N),...,a(t-1),s(t)) \tag{3}\] \[a(t)=\pi(x(t),a(t-1),\hat{z}(t)). \tag{4}\]
Motor adaptation algorithms have been demonstrated to significantly improve the adaptation abilities of robotic learning in simulation, and have also been used to deploy networks trained entirely in simulation on robotic hardware (sim2real) [21; 22; 23; 24; 25].
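The two-phase scheme of Equations 1-4 can be sketched numerically. The linear modules and the synthetic observation model below are illustrative assumptions, not the architectures or data of the cited papers: a fixed linear map stands in for the trained encoder \(\mu\), and the estimator \(\phi\) is fit by least-squares regression on a fake state-action history that correlates with the privileged information.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_priv, d_hist, d_z = 2000, 8, 30, 3

# Phase 1: the factor encoder mu maps privileged info e(t) to an extrinsics
# embedding z(t); here mu is a fixed linear map standing in for the trained encoder.
M = rng.standard_normal((d_priv, d_z))
E = rng.standard_normal((n, d_priv))       # privileged info samples e(t)
Z = E @ M                                  # z(t) = mu(e(t)), Eq. 1

# The state-action history correlates with e(t) through the (unknown) dynamics;
# we fake that correlation with a noisy linear observation model.
H = E @ rng.standard_normal((d_priv, d_hist)) + 0.01 * rng.standard_normal((n, d_hist))

# Phase 2: fit the estimator phi by regression so z_hat = phi(history)
# matches the privileged embedding z(t) (Eq. 3).
Phi, *_ = np.linalg.lstsq(H, Z, rcond=None)
Z_hat = H @ Phi

rel_err = np.linalg.norm(Z_hat - Z) / np.linalg.norm(Z)
assert rel_err < 0.05   # onboard estimate closely tracks the privileged embedding
```

The point of the sketch is the asymmetry of the two phases: \(\mu\) needs privileged information, while \(\phi\) recovers an approximation of the same embedding from locally accessible history alone.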
#### Synaptic plasticity and three-factor learning
Plasticity in the brain refers to the capacity of experience to modify the function of neural circuits. The plasticity of synapses refers to the modification of the strength of synaptic transmission based on local activity and is currently the most widely investigated mechanism by which the brain adapts to new information [28, 29]. Methods in deep learning are based on changing weights from experience, typically through the use of the algorithm _backpropagation_, which makes predictions based on input and uses the chain rule to back-propagate errors through the network [30]. While there are parallels between backpropagation and synaptic plasticity, there are many significant ways in which they differ in operation compared to the brain [31]. Three-factor learning rules have been proposed as a much more plausible
Figure 2: **Overview of synaptic motor adaptation algorithm**. (Left) Privileged and sensory information vectors. (Middle) Training phase 1 consists of reinforcement learning with privileged sensing with the neuromodulatory extrinsics embedding \(\mu(t)\) and the actor \(\pi(t)\). Phase 2 consists of approximating the dynamics of the neuromodulatory extrinsics embedding. (Right) Description of the network structures of each model used. (Right bottom) Image of robot on rough terrain, not encountered during the training period.
theoretical framework for understanding how meaningful changes are made in the brain [17, 18]. Below, we introduce a pair-based model of plasticity and the theory of three-factor learning.
#### Pair-based spike-timing dependent plasticity
The pair-based spike-timing-dependent plasticity (STDP) model is a rule that governs changes in synapses based on the timing relationship between pairs of pre- and post-synaptic spikes [32]. This model was derived from experiments observing that the precise timing of spikes can describe synaptic long-term potentiation (LTP, an increase in weight) and long-term depression (LTD, a decrease in weight).
We begin by describing the timing dynamics of pre- and post-synaptic spikes through an iterative update rule, referred to as a _synaptic trace_ (_also see Figure 1_):
\[\mathbf{x}_{i}^{(l)}(t+\Delta\tau)=\alpha_{x}\mathbf{x}_{i}^{(l)}(t)+f(\mathbf{x}_{i}^{(l) }(t))\mathbf{s}_{i}^{(l)}(t). \tag{5}\]
The precise physiological interpretation of the activity trace \(\mathbf{x}_{i}^{(l)}(t)\in\mathbb{R}>0\) is not well-defined, as there are several possible representations for this activity. In the case of pre-synaptic events, it could correspond to the quantity of bound glutamate or the number of activated NMDA receptors, while for post-synaptic events it could reflect the synaptic voltage generated by a backpropagating action potential or the amount of calcium influx through a backpropagating action potential [33].
The activity trace \(\mathbf{x}_{i}^{(l)}(t)\) decays toward zero at a rate governed by \(\alpha_{x}\in(0,1)\), where \(\alpha_{x}\) is commonly expressed as \((1-\frac{1}{\tau})\) with time constant \(\tau\in\mathbb{R}>1\). The update of the synaptic trace is determined by a function \(f:\mathbb{R}\rightarrow\mathbb{R}\) and is gated by the presence of a spike \(\mathbf{s}_{i}^{(l)}(t)\). This all-to-all synaptic trace scheme pairs each pre-synaptic spike with every post-synaptic spike indirectly via the decaying trace. In the linear update rule, which is used in this work, the trace is incremented by a constant factor \(\mathbf{\beta}\) when a spike \(\mathbf{s}_{i}^{(l)}(t)\) occurs.
\[\mathbf{x}_{i}^{(l)}(t+\Delta\tau)=\alpha_{x}\mathbf{x}_{i}^{(l)}(t)+\mathbf{\beta}\mathbf{s}_ {i}^{(l)}(t). \tag{6}\]
Next, we describe the pair-based STDP rule, which describes LTP (left-hand side of Equation 8) and LTD (right-hand side of Equation 8) via pairs of spikes and synaptic traces:
\[\mathbf{W}_{i,j}^{(l)}(t+\Delta\tau)=\mathbf{W}_{i,j}^{(l)}(t)+\Delta_{ \mathbf{W}}(t) \tag{7}\] \[\Delta_{\mathbf{W}}(t)= \mathbf{A}_{+,i,j}\mathbf{x}_{i}^{(l-1)}(t)\mathbf{s}_{j}^{(l)}(t)-\mathbf{A}_{-,i,j}\mathbf{x}_{j}^{(l)}(t)\mathbf{s}_{i}^{(l-1)}(t). \tag{8}\]
When a post-synaptic firing occurs (\(\mathbf{s}_{j}^{(l)}(t)=1\)), weight potentiation occurs by a quantity proportional to the pre-synaptic trace (\(\mathbf{x}_{i}^{(l-1)}(t)\)). Similarly, when a pre-synaptic firing occurs (\(\mathbf{s}_{i}^{(l-1)}(t)=1\)), weight depression occurs by a quantity proportional to the post-synaptic trace (\(\mathbf{x}_{j}^{(l)}(t)\)). Potentiation and depression are scaled by constants \(\mathbf{A}_{+,i,j}\in\mathbb{R}\) and \(\mathbf{A}_{-,i,j}\in\mathbb{R}\), respectively, which characterize the rate of change of LTP and LTD. Typically, Hebbian pair-based STDP models define \(\mathbf{A}_{+,i,j}>0\) and \(\mathbf{A}_{-,i,j}>0\), while anti-Hebbian models define \(\mathbf{A}_{+,i,j}<0\) and \(\mathbf{A}_{-,i,j}<0\). We initialize our learning rule to be _Hebbian_, but do not constrain the optimization, thus allowing our initially Hebbian rule to become anti-Hebbian or any other variations of the pair-based STDP rule.
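The pair-based rule of Equations 6-8 can be sketched for a single synapse. The constants below (\(\alpha_x\), \(\beta\), \(A_+\), \(A_-\)) and the spike trains are illustrative choices, not values from the paper; the example only demonstrates the sign of the weight change for "pre before post" versus "post before pre" orderings.

```python
# Pair-based STDP for one synapse, linear all-to-all trace update (Eqs. 6-8).
alpha_x, beta = 0.9, 1.0      # trace decay (1 - 1/tau) and spike increment
A_plus, A_minus = 0.1, 0.1    # Hebbian LTP / LTD rates

def run_stdp(pre_spikes, post_spikes, w=0.5):
    x_pre = x_post = 0.0
    for s_pre, s_post in zip(pre_spikes, post_spikes):
        # Eq. 8: potentiate on a post spike (scaled by the pre trace),
        #        depress on a pre spike (scaled by the post trace)
        w += A_plus * x_pre * s_post - A_minus * x_post * s_pre
        # Eq. 6: linear synaptic trace updates
        x_pre = alpha_x * x_pre + beta * s_pre
        x_post = alpha_x * x_post + beta * s_post
    return w

# pre fires at t=0, post at t=2 ("pre before post") -> LTP, weight increases
w_ltp = run_stdp([1, 0, 0, 0], [0, 0, 1, 0])
# post fires at t=0, pre at t=2 ("post before pre") -> LTD, weight decreases
w_ltd = run_stdp([0, 0, 1, 0], [1, 0, 0, 0])
assert w_ltp > 0.5 > w_ltd
```

Because the traces decay geometrically, the magnitude of each change shrinks with the spike-timing gap, reproducing the timing dependence sketched in Figure 1 (left).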
#### Eligibility traces and three-factor plasticity
Rather than directly modifying the synaptic weight, local synaptic activity leaves an activity flag, or eligibility trace, at the synapse [18]. The eligibility trace does not immediately produce a change, rather, weight change is realized in the presence of an additional signal, which is discussed below. In a Hebbian learning rule, the eligibility trace can be described by the following equation:
\[\mathbf{E}_{i,j}^{(l)}(t+\Delta\tau)=\gamma\mathbf{E}_{i,j}^{(l)}(t)+\mathbf{\alpha}_{i,j }f_{i}(x_{i}^{(l-1)})g_{j}(x_{j}^{(l)}). \tag{9}\]
The decay rate of the trace is determined by the constant \(\gamma\in[0,1]\), where a smaller value of \(\gamma\) results in a faster decay. The constant \(\mathbf{\alpha}_{i,j}\in\mathbb{R}\) determines the rate at which activity trace information is incorporated into the eligibility trace. The functions \(f_{i}\) and \(g_{j}\) depend on the pre- and post-synaptic activity traces, \(x_{i}^{(l-1)}\) and \(x_{j}^{(l)}\), respectively. These
functions are indexed by the corresponding pre- and post-synaptic neuron, \(i\) and \(j\), as the eligibility dynamics of synaptic activity may be influenced by neuron type or the region of the network.
Theoretical neuroscience literature suggests that eligibility traces alone cannot bring about a change in synaptic efficacy [18; 17]. Rather, weight changes require the presence of a third signal.
\[\textbf{{W}}_{i,j}^{(l)}(t+\Delta\tau)=\textbf{{W}}_{i,j}^{(l)}(t)+M_{j}(t) \textbf{{E}}_{i,j}^{(l)}(t). \tag{10}\]
Here, \(M_{j}(t)\in\mathbb{R}\) is a regional _third factor_ known as a neuromodulator, acting as an abstract representation of a biological process. Without the presence of the neuromodulatory signal (\(M_{j}(t)=0\)), weight changes do not occur. In the presence of certain stimuli, the magnitude and sign of \(M_{j}(t)\) determine both long-term potentiation (LTP) and long-term depression (LTD), scaling and even reversing the weight update. Three-factor learning rules are powerful in their descriptive capabilities, and have been used to describe approximations to Backpropagation Through Time (BPTT) [34; 19] and Bayesian inference [35].
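Equations 9-10 can be sketched for a single synapse to show the gating role of the third factor. The trace constants, activity values, and modulator schedule below are illustrative assumptions: Hebbian coincidences flag the synapse, but the weight only changes when a (delayed) modulator arrives.

```python
# Eligibility trace with a third factor (Eqs. 9-10) for one synapse.
gamma, alpha_ij = 0.8, 1.0
E = w = 0.0

pre_act  = [1.0, 1.0, 0.0, 0.0, 0.0]   # f_i(x_i): pre-synaptic activity
post_act = [1.0, 1.0, 0.0, 0.0, 0.0]   # g_j(x_j): post-synaptic activity
modulator = [0.0, 0.0, 0.0, 0.5, 0.0]  # M_j(t): delayed third factor

weights = []
for f, g, M in zip(pre_act, post_act, modulator):
    E = gamma * E + alpha_ij * f * g   # Eq. 9: flag coincident activity
    w = w + M * E                      # Eq. 10: change only when M != 0
    weights.append(w)

# No weight change during the Hebbian coincidences (M = 0 at t = 0, 1, 2)...
assert weights[2] == 0.0
# ...but the decaying eligibility trace lets the delayed modulator at t = 3
# consolidate the earlier correlation into the weight
assert weights[3] > 0.0 and weights[4] == weights[3]
```

This delayed consolidation is what allows credit assignment across the gap between local synaptic activity and a later global signal, as sketched in Figure 1 (right).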
## 3 Synaptic Motor Adaptation
Recent advances in machine learning and theoretical neuroscience have led to the ability to optimize neuroscience-derived three-factor learning rules with backpropagation through time [20; 26; 27], making powerful gradient-descent based approaches accessible for the optimization of local learning rules. These algorithms can be meta-trained through a bi-level optimization to adapt the underlying behavior of the network toward an objective _during deployment_ (inner-loop) via gradient descent of an objective function _after deployment_ (outer-loop). We extend these ideas toward the development of a motor adaptation algorithm whereby the synaptic weights of the network change based on a meta-optimized three-factor learning rule to adapt in real-time to environmental conditions which we call Synaptic Motor Adaptation (SMA).
### A three-factor synaptic motor adaptation rule
In MA algorithms, the role of the factor encoding module \(\mu\) (Equation 1) is to provide a _context signal_ so the robot can adapt its behavior to its constantly changing environment, such as walking on uneven surfaces, sustaining limb damage, or encountering slippery ground. This context signal changes the behavior of the robot by providing a learned embedding from the factor encoder as _input_ to another policy network. While this elegantly allows the robot to adapt to new environmental challenges, the _fundamental behavior_ of the policy (i.e. its synaptic weights) is not capable of changing; rather, only the information the robot has about the environment (its state input) is constantly re-estimated. This prevents the policy from actually _learning_ from new experience; instead, it can only update its state input based on the time history of events.
Like other motor adaptation algorithms, SMA consists of a base policy \(\pi\) which takes in robot sensory information, \(x_{t}\), and an environment factor encoder \(\mu\) which takes in privileged information \(e_{t}\). However, SMA differs from other MA algorithms because it uses the environment factor encoder \(\mu\) to produce a neuromodulatory _learning signal_\(m(t)\) (in our model \(m_{-}(t)\) and \(m_{+}(t)\)) which dictates the degree with which connections are updated. This can be explained by the following equations:
\[m_{+}(t),m_{-}(t)=\mu(e(t)) \tag{11}\] \[a(t)=\pi(x(t),a(t-1),\textbf{{W}}(t),m_{+}(t),m_{-}(t)). \tag{12}\]
In this equation, instead of a time-varying adaptive signal \(z(t)\) being produced by \(\mu\) there are two modulatory signals \(m_{+}(t),m_{-}(t)\), and instead of \(z(t)\) being given as input to \(\pi\) there is a time-dependent weight parameter _W_(_t_). The adaptive weight parameter _W_(_t_) is updated by the following equations:
\[\textbf{{W}}_{i,j}^{(l)}(t+\Delta\tau)=\textbf{{W}}_{i,j}^{(l)}(t )+\alpha(t)\Delta_{\textbf{{W}}}(t) \tag{13}\] \[\Delta_{\textbf{{W}}}(t)=m_{+,i}(t)\textbf{{E}}_{+,i,j}^{(l)}(t)+ m_{-,j}(t)\textbf{{E}}_{-,i,j}^{(l)}(t). \tag{14}\]
We note here that unlike in Equation 8, there are _two_ eligibility traces, one for the LTP dynamics \(\textbf{{E}}_{+,i,j}^{(l)}(t)\) and another for the LTD dynamics \(\textbf{{E}}_{-,i,j}^{(l)}(t)\). This necessitates the incorporation of two modulatory signals (\(m_{+}(t)\)
and \(m_{-}(t)\)), one for each of the eligibility traces. We see that Equations 13 and 14 update the weights of \(\pi\) using the modulatory dynamics \(m_{+}(t),m_{-}(t)\) produced by the environment factor encoder. That is to say, instead of determining how privileged information can best inform the network at the sensory level like traditional MA algorithms, SMA determines how to utilize privileged information to best _update the base policy \(\pi\) synaptic weights_. This is possible since recent work enabled the dynamics of the three-factor learning rule to be differentiated through in spiking neural networks [20; 26], and thus the policy gradient loss is backpropagated through the plasticity dynamics to optimize the modulatory signals \(m_{+}(t),m_{-}(t)\) given privileged information \(e(t)\). Furthermore, the modulatory signal dynamics \(m_{+}(t),m_{-}(t)\) are approximated by an environment factor estimator \(\phi\), enabling the learned adaptive dynamics to be utilized without privileged information.
Once the weight delta \(\Delta_{\textbf{W}}(t)\) has been computed via the eligibility and modulatory trace dynamics, it is multiplied by the time-varying term \(\alpha(t)=\text{exp}(1/t)-1\) from Equation 13 before being incorporated into the synaptic weights. We refer to this term as the stabilization variable, and it exponentially decays to zero as \(t\rightarrow\infty\) in order to stabilize the weight dynamics as the quadruped adapts to its environment conditions. We found that without this term, the weight dynamics are unstable across time horizons greater than what the network was trained for and, with the addition of the stabilization term, sustained control of the quadruped can be maintained over long time horizons.
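As a concrete illustration, Equations 13 and 14 with the stabilization term can be sketched in a few lines of NumPy. This is a hedged sketch only: the array shapes and the per-neuron indexing of \(m_{+}\) and \(m_{-}\) are our assumptions, not the paper's exact implementation.

```python
import numpy as np

def weight_update(W, E_plus, E_minus, m_plus, m_minus, t):
    """One step of the modulated plasticity rule (Eqs. 13-14), illustrative.

    W        : (n_post, n_pre) synaptic weights of layer l
    E_plus   : (n_post, n_pre) LTP eligibility trace E+^{(l)}(t)
    E_minus  : (n_post, n_pre) LTD eligibility trace E-^{(l)}(t)
    m_plus   : (n_post,) modulatory signal gating LTP (postsynaptic index i)
    m_minus  : (n_pre,)  modulatory signal gating LTD (presynaptic index j)
    t        : current time; stabilization decays as t grows
    """
    # Eq. 14: combine the two traces, each gated by its own modulator.
    delta_W = m_plus[:, None] * E_plus + m_minus[None, :] * E_minus
    # Stabilization term alpha(t) = exp(1/t) - 1 decays to zero as t -> inf,
    # consolidating the weights after an initial period of rapid adaptation.
    alpha = np.exp(1.0 / t) - 1.0
    # Eq. 13: scaled, stabilized weight update.
    return W + alpha * delta_W
```

Note how at large \(t\) the update vanishes, which is exactly the consolidation behavior described above.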
## Experimental setup
### Parallel reinforcement learning
We use a modified implementation of the Proximal Policy Optimization (PPO) algorithm [36] specifically designed for massively parallelized reinforcement learning [7] on the GPU. This algorithm allows learning from thousands of robots in parallel with minimal algorithmic adjustments.
The batch size, \(B=n_{steps}n_{robots}\), is a critical hyper-parameter for successful learning in on-policy algorithms such as PPO. If the batch size is too small, the algorithm will not learn effectively, while if it is too large, the samples become repetitive, leading to wasted simulation time and slower training. To optimize training times, a small \(n_{steps}\) must be chosen, where \(n_{steps}\) is the number of steps each robot takes per policy update, and \(n_{robots}\) is the number of robots simulated in parallel. The algorithm requires trajectories with coherent temporal information to learn effectively, and the Generalized Advantage Estimation (GAE) [37] requires rewards from multiple time steps to be effective. In previous work [7], a minimum of 25 consecutive steps or 0.5 s of simulated time is demonstrated to be sufficient for the algorithm to converge effectively. It is shown that using mini-batches of tens of thousands of samples can stabilize the learning process without increasing the total training time for massively parallel use cases.
During the training of the PPO algorithm, robots need to be reset periodically to encourage exploration of new trajectories and terrains. However, resets based on time-outs can lead to inferior critic performance if not handled carefully. These resets break the infinite horizon assumption made by the critic, which predicts an infinite horizon sum of future discounted rewards. To address this issue, like in [7], the environment interface is modified to detect time-outs and implement a bootstrapping solution that maintains the infinite horizon assumption. This approach mitigates the negative impact of resets on critic performance and overall learning, as demonstrated through its effect on the total reward and critic loss.
| | No noise | Rough terrain | Motor gain | P-gain | D-gain | Friction |
|---|---|---|---|---|---|---|
| Non-Adaptive SNN | 7.4 | 4.6 | 4.5 | 6.1 | 6.4 | 7.0 |
| Plastic SNN | 7.2 | 4.5 | 4.2 | 6.5 | 6.7 | 7.2 |
| RMA | 8.2 | 5.7 | 5.1 | 6.8 | 7.3 | 7.6 |
| **SMA** | 8.1 | 5.9 | 5.7 | 6.9 | 7.1 | 7.7 |
| RMA Expert | 8.5 | 5.9 | 5.4 | 7.2 | 7.7 | 8.0 |
| **SMA Expert** | 8.2 | 6.2 | 6.1 | 7.5 | 7.5 | 7.9 |
| Noise range | — | \(V_{scale}\)=0.25, \(H_{scale}\)=0.8 | [0.8, 1.2] | [12.5, 37.5] | [0.25, 0.75] | [0.1, 2.75] |

Table 1: Simulation testing results. Each entry is defined as \(\sum_{i}R_{i}\cdot P_{i}\), where \(R_{i}\) is the total sum of rewards for a rollout and \(P_{i}\) is the probability of the domain randomization sample; \(P_{i}\)'s are sampled at discrete intervals along the noise ranges listed in the last row. SMA and RMA experts are the SMA and RMA models provided with privileged information \(e(t)\) together with their extrinsics encoding module \(\mu\). The non-experts use approximations.
The handling of resets must also take into account the temporal dynamics of the synaptic state variables, e.g. the eligibility and synaptic traces. When working with PPO, which iteratively recalculates log probabilities from old data, this is not necessarily trivial. The challenge lies in the PPO algorithm's non-temporal treatment of data, where typically minibatches randomly sample _states_ at arbitrary points in _time_. This is a challenge because as a temporally-dependent policy changes across time (e.g. recurrent networks, plastic networks), unlike non-temporal ANNs, the dynamic equation that led to an action \(a(t)\) at time \(t\) was dependent on all timesteps \(0\leq\tau<t\). Thus, to calculate \(a(t)\), PPO must be modified to incorporate _rollout_ mini-batches where, instead of randomly sampling points in time for evaluation, entire robotic trajectories are randomly sampled and the dynamic equations (e.g. synaptic weights) are rolled out in time. In other words, since \(B=n_{steps}n_{robots}\), minibatches would normally sample along \(n_{steps}\), but because there are temporal dynamics and the entire \(n_{steps}\) must be rolled out, we sample along \(n_{robots}\) instead.
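The rollout mini-batching described above can be sketched as follows. This is an illustrative sketch under our own assumptions: the batch is laid out as `(n_steps, n_robots, ...)` and the function name is ours.

```python
import numpy as np

def rollout_minibatches(batch, n_minibatches, rng):
    """Yield minibatches of whole trajectories for a temporally-dependent
    policy: shuffle along the robot axis and keep each robot's full
    n_steps rollout intact and in temporal order, so recurrent or plastic
    dynamics can be re-rolled when recomputing log probabilities."""
    n_robots = batch.shape[1]
    robot_ids = rng.permutation(n_robots)
    for ids in np.array_split(robot_ids, n_minibatches):
        # Every minibatch contains complete trajectories: all n_steps
        # for the selected subset of robots.
        yield batch[:, ids]
```

Contrast this with standard PPO minibatching, which would flatten the `(n_steps, n_robots)` axes and sample states at arbitrary timesteps.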
#### Observations, actions, and noise
The observation space consists of base linear and angular velocities, a measurement of the gravity vector, velocity commands, joint positions and velocities, and the previous actions taken by the policy. Each of these values is scaled by a constant factor _(see Appendix)_. Additionally, random noise sampled from the following uniform distributions is added to the sensor readings:
1. Joint positions: \(\pm 0.01\) rad
2. Joint velocities: \(\pm 1.5\) rad/s
3. Projected gravity: \(\pm 0.05\) m/s\(^{2}\)
4. Base linear velocities: \(\pm 0.1\) m/s
5. Base angular velocities: \(\pm 0.2\) rad/s
Observation noise is added to account for the inherent variability in the environment, such as sensor noise and measurement errors. Introducing noise to the observations helps the policy learn to be robust to variations, improves its ability to generalize to new situations, and otherwise better benchmarks adaptivity of the controller.
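The noise injection above amounts to a few lines of code. A hedged sketch (the dictionary keys and function name are ours; the half-widths are the values listed above):

```python
import numpy as np

# Uniform noise half-widths per observation group, from the list above.
NOISE_SCALES = {
    "joint_pos": 0.01,   # rad
    "joint_vel": 1.5,    # rad/s
    "gravity":   0.05,   # m/s^2
    "lin_vel":   0.1,    # m/s
    "ang_vel":   0.2,    # rad/s
}

def add_observation_noise(obs, rng):
    """Return a copy of the observation dict with uniform noise
    drawn from [-s, +s] added to each sensor group."""
    return {k: v + rng.uniform(-NOISE_SCALES[k], NOISE_SCALES[k],
                               size=np.shape(v))
            for k, v in obs.items()}
```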
The action \(a(t)\) taken by the policy \(\pi\) is a desired joint position which is sent to a PD controller to calculate torques for the joints of the robot via the following equation:
\[\tau(t)=K_{p}(c_{a}a(t)+q_{0}-q(t))-K_{d}\dot{q}(t) \tag{15}\]
where: \(\tau(t)\) is the torque output at time \(t\). \(q(t)\) and \(\dot{q}(t)\) are the current joint position and velocity, respectively. \(q_{0}\) is the default joint position. \(c_{a}a(t)\) is the scaled action at time \(t\), with \(c_{a}\) being the action scale factor. \(K_{p}\) and \(K_{d}\) are the PD gains, which are optimized by the experimenter as a hyperparameter. For our experiments we chose \(K_{p}=20\) and \(K_{d}=0.5\). Actions are further scaled by a constant \(0.25\) to account for a physics simulator decimation size of four (simulation updates per policy update). Directly outputting torques is also an option, but we found outputting a target position into a PD controller provides quicker learning and smoother gaits.
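Equation 15 translates directly into code. A minimal sketch with the gains and action scale stated above (function and variable names are ours):

```python
import numpy as np

K_P, K_D = 20.0, 0.5   # PD gains used in our experiments
ACTION_SCALE = 0.25    # c_a, accounts for a simulator decimation of four

def pd_torques(action, q, q_dot, q_default):
    """Eq. 15: convert a policy action (desired joint offset) to torques:
    tau = Kp * (c_a * a + q0 - q) - Kd * q_dot."""
    q_target = ACTION_SCALE * action + q_default
    return K_P * (q_target - q) - K_D * q_dot
```

With a zero action and the robot at its default pose, the controller outputs zero torque, which is the resting behavior one expects from this parameterization.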
#### Reward Terms
The reward function reinforces the robot to follow a velocity command along the \(x\), \(y\), and angular (\(\omega\)) axes and penalizes inefficient and unnatural motions. The total reward is a weighted sum of the terms detailed below. To create smoother motions we penalize joint torques, joint accelerations, joint target changes, and collisions. Additionally, there is a reward term that encourages taking longer steps, which produces a more visually appealing set of behaviors.
1. Tracking forward velocity: \(\phi(\mathbf{v}_{b,xyz}^{*}-\mathbf{v}_{b,xyz})\)
2. Tracking angular velocity: \(\phi(\mathbf{\omega}_{b,z}^{*}-\mathbf{\omega}_{b,z})\)
3. Angular velocity penalty: \(\cdot||\mathbf{\omega}_{b,xy}^{*}||^{2}\)
4. Torque penalty: \(\cdot||\mathbf{\tau}_{j}||^{2}\)
5. DOF acceleration penalty: \(\cdot||\ddot{\mathbf{q}}_{j}||^{2}\)
6. Action rate penalty: \(\cdot||\mathbf{q}_{j}^{*}||^{2}\)
7. Collision: \(\cdot n_{collisions}\)
8. Feet air time: \(\sum_{f=0}^{4}(\mathbf{t}_{air,f}-0.5)\)
In these equations we define \(\phi(x)=\exp(-\frac{\|x\|^{2}}{0.25})\). Values \(\textbf{v}_{b,xyz}^{*}\), \(\boldsymbol{\omega}_{b,z}^{*}\), and \(\boldsymbol{\omega}_{b,xy}^{*}\) are superscripted with \(*\) to represent that they are the target command, with the non-superscripted value as the true value. Finally, the total sum of reward terms \(r(t)=\sum_{i}r_{i}(t)\) at each timestep \(t\) is clipped to be a positive value. This requires more careful reward tuning upon initialization, but prevents the robot from finding self-terminating solutions.
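The tracking kernel \(\phi\) and the positive clipping of the total reward can be sketched as follows (an illustrative sketch; names are ours):

```python
import numpy as np

def phi(x):
    """Tracking kernel phi(x) = exp(-||x||^2 / 0.25): equals 1 for a
    perfect command match and decays with squared tracking error."""
    return np.exp(-np.dot(x, x) / 0.25)

def total_reward(terms):
    """Weighted sum of reward terms, clipped to be positive as in the
    text, preventing self-terminating solutions."""
    return max(0.0, sum(terms))
```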
#### Pre-training an SNN
Instead of training the SMA network entirely from scratch, which takes significant compute resources, we initially train a non-plastic SNN without any noise in the simulation to act as the foundation. Once this network is fully trained, plasticity is added to the third layer of the policy network and noise is added to the simulator, from which the network is optimized as is outlined in the section _Synaptic Motor Adaptation_.
## Results and analysis
We report the performance measurements of several models including: a non-plastic SNN (fixed-weights), a plastic SNN without SMA, RMA without adaptation, RMA with adaptation, and SMA. Additionally, the performance of the RMA and SMA experts are recorded, with an expert being defined as the motor adaptation algorithm provided with exact extrinsics information as defined in [21] rather than its embedding approximation (see Equation 3). The performance measurements in Table 1 are defined as follows: \(\sum_{i}R_{i}\cdot P_{i}\) where the total sum of rewards for a rollout is \(R_{i}\) and the probability of the domain randomization sample is \(P_{i}\), with \(P_{i}\)'s sampled at discrete intervals along the noise ranges listed at the bottom of Table 1.
**Adaptation to noise.** The discrepancies between the physics simulator and real hardware are what lead to difficulties translating models trained in simulation to real robots. Recent ideas in robotic learning have led to the belief that adding significant "domain noise," which is noise added to the environment during training to change the physical dynamics (e.g. contact dynamics and friction), could prevent simulation overfitting and lead to a policy that can provide control in a variety of physical conditions. However, these methods tend to provide _robust_ policies that have unsophisticated and jerky movements. Thus, recent efforts have gone toward developing policies that _adapt_ to domain noise, fine tuning their control with respect to noise instead of simply becoming robust to all forms of noise.
Both the motor gain and P-gain were areas in which the SMA policy demonstrated improvements in adaptation over the RMA policy. On D-gain and friction noise, SMA did not outperform RMA but was close in performance. Compared with the three non-MA algorithms, the MA algorithms show clear performance benefits. Between RMA and SMA, however, the performance difference is relatively small, even on the tasks where SMA obtains higher performance. This could suggest one of two things: (1) RMA and SMA are approaching a performance upper bound on adaptivity given the defined degree of noise, or (2) the two algorithms happened to reach similar performance and there is more progress to be made. More experimentation is needed to determine which.
**Adaptation to terrain.** The ability to adapt to novel forms of terrain outside the scope of training is a crucial capability for legged robots. This is because the complexity of the real world cannot adequately be captured by simulated environments, and thus, in addition to model-derived forms of noise, adaptation to terrain must be demonstrated for a motor adaptation algorithm that will be useful in the real world. For the introduction of rough terrain in our work we used _Perlin_ fractal noise. Perlin noise generates natural-looking noise by defining the slope of the noise function at regular intervals, creating peaks and valleys at irregular intervals, instead of defining the value of the noise function at regular intervals. Perlin fractal noise is a type of Perlin noise that uses multiple octaves (layers) of noise to create a more complex and varied pattern. Each octave is a version of the Perlin noise function with a different frequency and amplitude, and the outputs of each octave are combined together to create a final noise pattern. By adjusting the frequency, amplitude, and number of octaves used, the resulting noise can range from smooth and gentle to rough and jagged, making it useful for generating natural-looking textures and terrain. The parameters for the fractal noise are as follows: number of octaves = 2, fractal lacunarity = 2.5, fractal gain = 1.5, fractal amplitude = 1, vertical scale = 0.35, and horizontal scale = 0.08.
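A minimal 1D sketch of Perlin-style fractal noise as described above, defining gradients (slopes) at integer lattice points and summing octaves. This is an illustrative reimplementation, not the terrain generator actually used in the experiments:

```python
import numpy as np

def fade(t):
    # Perlin's smoothstep 6t^5 - 15t^4 + 10t^3: zero first and second
    # derivatives at lattice points.
    return t * t * t * (t * (t * 6 - 15) + 10)

def perlin1d(x, gradients):
    """1D Perlin noise: slopes are defined at integer lattice points and
    blended between them; the value at every lattice point is zero while
    the slope there equals the chosen gradient."""
    i = np.floor(x).astype(int)
    t = x - i
    g0 = gradients[i % len(gradients)]
    g1 = gradients[(i + 1) % len(gradients)]
    return (1 - fade(t)) * g0 * t + fade(t) * g1 * (t - 1)

def fractal_noise(x, octaves=2, lacunarity=2.5, gain=1.5, seed=0):
    """Sum `octaves` layers of Perlin noise; each layer's frequency is
    scaled by `lacunarity` and its amplitude by `gain` (parameter values
    as in the terrain described above)."""
    rng = np.random.default_rng(seed)
    out = np.zeros_like(x, dtype=float)
    freq, amp = 1.0, 1.0
    for _ in range(octaves):
        grads = rng.uniform(-1, 1, size=256)
        out += amp * perlin1d(x * freq, grads)
        freq *= lacunarity
        amp *= gain
    return out
```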
As is demonstrated in Figure 2, robots are trained to produce locomotion entirely on flat terrain. Unlike the analysis of adaptation to noise (e.g. motor strength noise), terrain noise is not explicitly encountered during training. Adaptation to rough terrain was among the least transferable skill from the algorithms without motor adaptation, with the non-adaptive SNN, plastic SNN, and RMA without adaptation failing to demonstrate clear generalization to the rough terrain domain. However, both the RMA and the SMA trained robots were successfully able to walk across rough terrain (without falling) despite being trained entirely on flat terrain. Interestingly, this is in spite of terrain and foothold data not being provided to the MA algorithms as privileged information.
## Discussion
We presented the SMA algorithm for real-time adaptation of a quadrupedal robot toward changes in motor strength, P and D gains, friction coefficients, and rough terrain using three-factor learning. This algorithm was compared to the state-of-the-art motor adaptation algorithm RMA [21] and was demonstrated to perform similarly or better on motor control problems that required real-time adaptation. While adaptation improvements are relatively modest compared to the RMA algorithm, we expect further improvements from using more dynamically rich plasticity rules (e.g. triplet, voltage-based), neuron models (e.g. adaptive, resonate-and-fire), propagation delays, surrogate gradient techniques, and modulatory dynamics. Another potential direction is toward developing methods of synaptogenesis, such that the network connectivity mapping is learned along with the values of the weights. Previous methods have incorporated synaptogenesis through genetic algorithms [38], neural cellular automata [39; 40; 41], and online random mutations [42]; a solution utilizing backpropagation has yet to be developed. Further work aims to enable learning completely novel behaviors (e.g. vaulting) purely through meta-optimized three-factor learning.
There are two clear directions that this work intends to build toward: (1) using three-factor learning to transfer from simulation to real hardware and (2) exemplifying this algorithm on neuromorphic hardware. While the path toward transferring from simulation to hardware is clear, further advancement in the optimization of plasticity rules is required before utilizing current neuromorphic systems. This is because many current neuromorphic systems (1) have propagation delays which are not incorporated into the plasticity dynamics of this work, and (2) are heavily numerically quantized, whereas this work was built on floating-point math. While much of the work toward differentiating through these dynamics has already been solved [20], meta-optimizing three-factor learning rules through these dynamics is a less explored direction (_see_[27]).
A primary limitation to our approach lies in the addition of the stabilization term \(\alpha(t)\) in Equation 13. This stabilization term allows for rapid weight modifications at the beginning of the episode as the quadruped learns from interacting with the environment, and then exponentially decays its effect over time to consolidate the weights. This decay is important for long-term adaptation, particularly for an additive pair-based STDP rule which is not temporally stable [43]. While the neuromodulatory dynamics are capable of modulating these changes, we found that without the stabilization term, the weights still tend to diverge into bimodal distributions. This is potentially an effect of truncating the gradient in time, which does not allow proper credit assignment caused by temporally distant modifications. Future work will aim to determine _when_ weight modifications should occur and _which_ synapses should be modified in the meta-optimization dynamics.
Overall, this work introduces an exciting path toward rapid adaptation on robotic systems using neuroscience-derived models of three-factor learning and we hope it inspires further applications of three-factor learning on robotic systems.
---

arXiv:2304.12865 (published 2023-04-24): Constraining Chaos: Enforcing dynamical invariants in the training of recurrent neural networks. Jason A. Platt, Stephen G. Penny, Timothy A. Smith, Tse-Chun Chen, Henry D. I. Abarbanel. http://arxiv.org/abs/2304.12865v1

# Constraining Chaos: Enforcing dynamical invariants in the training of recurrent neural networks
###### Abstract
Drawing on ergodic theory, we introduce a novel training method for machine learning based forecasting methods for chaotic dynamical systems. The training enforces dynamical invariants--such as the Lyapunov exponent spectrum and fractal dimension--in the systems of interest, enabling longer and more stable forecasts when operating with limited data. The technique is demonstrated in detail using the recurrent neural network architecture of reservoir computing. Results are given for the Lorenz 1996 chaotic dynamical system and a spectral quasi-geostrophic model, both typical test cases for numerical weather prediction.
## 1 Introduction
Predicting the future trajectory of a dynamical system--a time series whose evolution is governed by a set of differential equations--is crucial in fields such as weather prediction, economics, chemistry, physics and many others [1, 2]. A prediction can be generated by deriving the governing equations of motion (EOM) for the system and integrating forward in time, perhaps with data being used to determine the value of particular constants or the initial conditions. Machine learning (ML), on the other hand, allows the construction of a forecast purely from observational data in lieu of a physical model. When the EOM are expensive to evaluate numerically, ML can be used to construct a surrogate model; such models can be integrated into data assimilation [3] algorithms--such as the Kalman filter [4, 5]--a typical use case when data are noisy and the model imperfect, such as in numerical weather prediction [6].
The inclusion of physical knowledge--EOM, conservation laws and dynamical invariants--into ML algorithms has been a topic of ongoing interest [7, 8, 9, 10, 11, 12, 13, 14, 15]. Enforcing these laws effectively reduces the searchable parameter space for a workable model, decreasing the training time and increasing the accuracy of the resulting models. An ML model trained without knowledge of invariants may fail to generalize and can produce solutions that violate fundamental constraints on the physical system [16]. Many of the examples cited above involve conservation of quantities based on the symmetry of the equations of motion, such as conservation of energy and momentum [13], or the inclusion of previously derived differential equations [11] as components of the ML training. "Physics informed" neural networks [14, 15, 17, 18, 19] add the known or partially known differential equations as a soft constraint in the loss function of the neural network, but conservation laws are not necessarily enforced and the equations need to be known.
Many physical dynamical systems of interest are dissipative--_e.g._, any dynamical system containing friction--meaning they exchange energy and mass with the surrounding environment [20]. High dimensional dissipative systems are very likely to exhibit chaos [21]--making them extremely sensitive to initial conditions. Enforcing conservation of quantities such as momentum, mass, or energy [11, 13, 22, 23] for dissipative systems in isolation may not be
sufficient for generalization due to the exchange of energy/momentum at the boundaries. Problems concerning chaotic dynamics, such as weather forecasting, exhibit fractal phase space trajectories that make it difficult to write down analytic constraints [11].
With the goal of enforcing dynamical invariants, we suggest an alternative cost function for dissipative systems based on ergodicity, rather than symmetry. This has broad implications for time series prediction of dynamical systems. After formulating the invariants, we give a recurrent neural network (RNN) [24] example applied to the Lorenz 1996 system [25] and quasi-geostrophic dynamics [26] where we add soft constraints into the training of the network in order to ensure that these quantities are conserved.
## 2 Deriving Dynamical Invariants
Ergodicity is a property of the evolution of a dynamical system. A system exhibiting ergodicity, called ergodic, is one in which the trajectories of that system will eventually visit the entire available phase space [27], with time spent in each part proportional to its volume. In general, the available phase space is a subset of the entire phase space volume. For instance a Hamiltonian system will only visit the hypersurface with constant energy [20]. Ergodicity implies that time averages over the system trajectories can be replaced with spatial averages
\[\lim_{t_{f}\rightarrow\infty}\frac{1}{t_{f}}\int_{0}^{t_{f}}g(F^{t}(\mathbf{u }_{0}))dt=\int_{\mathbf{u}\in B}g(\mathbf{u})\rho_{B}(\mathbf{u})du\qquad \forall\mathbf{u}_{0}\in B \tag{1}\]
for an arbitrary function \(g\), where \(F^{t}\) is the application of the flow of the dynamical system of \(t\) iterations, and \(\rho_{B}(\mathbf{u})\) defines an invariant density over the finite set \(B\)[27]. The invariant density gives an intuitive measure of how often a trajectory visits each part of \(B\). The invariant density defines the invariant measure [27]
\[\mu(B\subset R)=\int_{B}\rho_{B}(u)du. \tag{2}\]
\(B\) will often consist of exotic geometries such as quasi-periodic orbits and strange attractors [20]. The strange attractor is a hypersurface that contains the available subspace for a chaotic dynamical system--see Fig.(1). Deterministic chaotic systems are of importance to a vast array of applications such as in numerical weather prediction [3], chemical mixing [31, 32], optics [33], robotics [34] and many other fields.
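Equation (1) can be checked numerically on a simple ergodic system. A hedged sketch using the fully chaotic logistic map (an illustrative example, not a system studied in this paper), whose invariant density is known in closed form:

```python
import numpy as np

def time_average(g, step, u0, n=200_000, burn=1_000):
    """Left-hand side of Eq. (1): average g(F^t(u0)) along one long
    trajectory of the map `step`."""
    u = u0
    for _ in range(burn):      # discard the transient onto the attractor
        u = step(u)
    total = 0.0
    for _ in range(n):
        u = step(u)
        total += g(u)
    return total / n

# Fully chaotic logistic map u -> 4u(1-u). Its invariant density on [0, 1]
# is rho(u) = 1 / (pi * sqrt(u(1-u))), symmetric about 1/2, so the spatial
# average of g(u) = u is exactly 1/2; the time average should match it.
logistic = lambda u: 4.0 * u * (1.0 - u)
```

Running `time_average(lambda u: u, logistic, 0.2)` gives a value near 0.5, illustrating the replacement of the spatial average by a time average.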
Figure 1: left) 2D slice through the strange attractor of the spectral QG model reproduced from Fig.(3) in [26] using the implementation in [28]. An attractor is a hypersurface that draws in nearby trajectories of the system, such that the system will eventually be constrained to stay on the manifold. The motion of the dynamical system on a strange attractor is chaotic with extreme sensitivity to initial conditions. Strange attractors can be analyzed through the invariant measure that describes how often the system visits each part of the attractor [29, 30]. Related quantities such as the fractal dimension and the Lyapunov exponents [30] are globally invariant through any smooth change of coordinates and thus are natural invariant quantities for chaotic systems. right) The Lyapunov exponents of the spectral QG model. There are two positive exponents making the system chaotic.
Although chaotic systems are deterministic, their precise long term prediction is impossible due to the exponential growth of errors, as quantified by the system's Lyapunov spectrum. The Lyapunov spectrum, composed of a system's Lyapunov exponents (LEs), characterizes a dynamical system [29, 35] by giving a quantitative measure of how a volume of phase space stretches or shrinks over time.
For the prediction of chaotic systems, we suggest that although short term predictions will inevitably diverge, long term prediction of any system must preserve the invariants of motion characterized by the invariant measure \(\mu_{B}\) Eq.(2). Furthermore, enforcing such invariants could help to generalize the training of neural networks designed to emulate dissipative chaotic systems, in much the same way that conservation of energy and momentum has for conservative systems. While any function \(g(\mathbf{u})\) integrated with the invariant density is a constant--as seen in Eq.(1)--by the multiplicative ergodic theorem [36] the LEs and the fractal dimension are both invariant under smooth coordinate transformations and have algorithms that make them feasible to compute from observed data [29].
In the next sections we provide a concrete example using the fractal dimension and LEs as invariants that must be enforced when training a neural network. We use an RNN based on the reservoir computer (RC) architecture [37, 38, 39]. We impose a loss function that takes into account the preservation of the LEs and fractal dimension and detail the benefits of doing so. We stress that the concept is not limited to RC models and can in fact apply to any neural network architecture.
## 3 Recurrent Neural Networks and Reservoir Computing
An RNN is a network composed of nonlinear elements that are connected in such a way as to enable self excitation [40]. Therefore, given a state of the network \(\mathbf{r}(t-1)\), the next state
\[\mathbf{r}(t)=F_{r}(\mathbf{r}(t-1),\mathbf{u}(t-1),\theta) \tag{3}\]
is a function of the input \(\mathbf{u}\), the RNN equations \(F_{r}\) and the internal weights \(\theta\). The label over the input data \(t\in\mathbb{Z}\)--conveniently called time--gives the natural order and allows the analysis of the RNN as a dynamical map. \(\mathbf{r}(t)\) can then be decoded by a function \(W_{\mathrm{out}}(\mathbf{r}(t))=\hat{\mathbf{u}}\). \(W_{\mathrm{out}}\) is trained so that \(\hat{\mathbf{u}}\) is as close to the target output as possible [24]. In time series prediction tasks \(\hat{\mathbf{u}}\sim\mathbf{u}(t)\) so that the driven system Eq.(3) can become autonomous (with no external input)
\[\mathbf{r}(t)=F_{r}(\mathbf{r}(t-1),W_{\mathrm{out}}(\mathbf{r}(t-1)),\theta) \tag{4}\]
and predict the future of the dynamical system.
Reservoir computing (RC) [37, 38, 41, 42, 43, 44] is a simplified form of RNN for which only the large scale parameters of the network are varied with the detailed weights selected from probability distributions. For an RC with \(\tanh\) units at the nodes the RNN equations become [45]
\[\mathbf{r}(t)=\alpha\tanh(\mathbf{Ar}(t-1)+\mathbf{W}_{\mathrm{in}}\mathbf{u} (t-1)+\sigma_{b})+(1-\alpha)\mathbf{r}(t-1). \tag{5}\]
The elements of the \(N\times N\) adjacency matrix \(\mathbf{A}\) are fixed _i.e._, not trained--in contrast to other RNN architectures--with only its overall properties chosen such as the size \(N\), density \(\rho_{A}\) and spectral radius \(\rho_{SR}\). \(\mathbf{W}_{\mathrm{in}}\in\mathbb{R}^{N\times D}\) maps the input into the high dimensional reservoir space \(\mathbf{u}\in\mathbb{R}^{D}\rightarrow\mathbf{W}_{\mathrm{in}}\mathbf{u}\in \mathbb{R}^{N}\); the elements of \(\mathbf{W}_{\mathrm{in}}\) are chosen between \([-\sigma,\sigma]\). \(\alpha\) is the leak rate and is related to the time constant of the RC [39]. \(\sigma_{b}\) is an input bias governing the fixed point of the RC and the strength of the \(\tanh\) nonlinearity. See [39] for detailed explanations of the architecture and parameter choices.
Training the RC includes training the function \(W_{\mathrm{out}}\)--often taken to be a matrix \(\mathbf{W}_{\mathrm{out}}\) and trained through linear regression--as well as finding the correct parameters \(N,\ \rho_{A},\ \rho_{SR},\ \sigma,\ \sigma_{b}\). In [6, 45, 46] it is shown how to train the RC network through a two step training procedure that takes into account both the one step prediction accuracy as well as the long term forecast skill.
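A minimal sketch of the RC defined by Eq. (5), together with a linear ridge-regression readout for \(\mathbf{W}_{\mathrm{out}}\). The hyperparameter values here are placeholders, not the tuned values used in the experiments:

```python
import numpy as np

def reservoir_step(r, u, A, W_in, alpha=0.6, sigma_b=1.0):
    """Eq. (5): leaky-tanh reservoir update, r(t) from r(t-1) and
    input u(t-1)."""
    return alpha * np.tanh(A @ r + W_in @ u + sigma_b) + (1 - alpha) * r

def train_readout(u_data, r_states, beta=1e-6):
    """Linear readout by ridge regression,
    W_out = u r^T (r r^T + beta I)^{-1},
    with u_data shaped (D, T) and r_states shaped (N, T)."""
    N = r_states.shape[0]
    return u_data @ r_states.T @ np.linalg.inv(
        r_states @ r_states.T + beta * np.eye(N))
```

For prediction, the trained readout closes the loop: the next input to `reservoir_step` is `W_out @ r`, turning the driven system Eq. (3) into the autonomous system Eq. (4).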
RC has been shown to be extremely successful in time series prediction tasks. Its simple form allows the easy computation of the Jacobian and other quantities that can help in a dynamical systems analysis of the RNN. In Platt et al. [48] the authors showed that a well trained RC can reproduce invariants of the motion such as the LEs and fractal dimension, and that the reproduction of these quantities maximized the prediction time and ensured the stability of the predictions. The training procedure in those previous works does not enforce the invariants explicitly; the hope is that the RC is both capable of reproducing these quantities and that the loss function over short and long term forecasts guides the RC towards them by proxy. Here we reformulate the training to take the invariant quantities into account.
## 4 Enforcing Invariants
The training of an RC is determined by the training data and the selection of the parameters governing the global properties of the RC: \(N,\ \rho_{A},\ \rho_{SR},\ \sigma,\ \sigma_{b}\) and a regularization coefficient \(\beta\). Once these quantities are chosen and the weights instantiated, then \(\mathbf{W}_{\mathrm{out}}\) is given
\[\mathbf{W}_{\mathrm{out}}=\mathbf{u}\mathbf{r}^{T}(\mathbf{r}\mathbf{r}^{T}+ \beta\mathbb{I})^{-1}\]
where \(\mathbf{u}\) is the \(D\times T\) matrix of input data, \(T\) is the number of time steps and \(\mathbf{r}\) is the \(N\times T\) matrix of reservoir states [45]. For an RC we can add into the selection of these parameters knowledge of the global invariants of the system \(\mathbf{u}\). Therefore we construct a loss function
\[\mathrm{Loss}=\epsilon_{1}\|\mathrm{C_{u}}-\mathrm{C_{RC}}\|^{2}+\epsilon_{2} \sum_{k=1}^{M}\sum_{t=t_{i}}^{t_{f}}\left\|\mathbf{u}_{k}^{\mathrm{f}}(t)- \mathbf{u}_{k}(t)\right\|^{2}\exp\biggl{\{}-\frac{t-t_{i}}{t_{f}-t_{i}}\biggr{\}} ;t\in\mathbb{Z} \tag{6}\]
that can be minimized over the RC parameters and with \(\epsilon_{x}\) hyperparameters. The selection of the parameters leads to the matrix \(\mathbf{W}_{\mathrm{out}}\) based on training data \(\mathbf{u}_{\mathrm{train}}\). Platt et al. [45] generated a number of long term forecasts \(\mathbf{u}^{f}(t)\) and compared them to the data \(\mathbf{u}\); with enough data this procedure often leads to a model that reproduces the correct dynamical invariants. Without the explicit enforcement of these invariants, however, the model can fail to capture the dynamics--particularly for high dimensional systems and in cases where the number of trajectories \(M\) is constrained. Here we add the dynamical invariants (\(C_{x}\)) as a constraint in order to directly train for generalizability, similar to [49]. This scheme is illustrated in Fig.(3). The global optimization routine used to minimize the cost function was the covariance matrix adaption evolution strategy (CMA-ES) [50].
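The loss in Eq. (6) is straightforward to evaluate once the invariants and forecasts are in hand. A hedged sketch (the array shapes and the discretization of the exponential weight are our assumptions):

```python
import numpy as np

def invariant_loss(C_true, C_rc, forecasts, truths, eps1=1.0, eps2=1.0):
    """Eq. (6): invariant-matching term plus exponentially down-weighted
    forecast error over M trajectories.

    C_true, C_rc      : vectors of dynamical invariants (e.g. Lyapunov
                        spectrum, fractal dimension) of data and RC
    forecasts, truths : arrays of shape (M, T, D), T forecast steps
    """
    inv_term = eps1 * np.sum((np.asarray(C_true) - np.asarray(C_rc)) ** 2)
    M, T, _ = forecasts.shape
    # exp{-(t - t_i)/(t_f - t_i)}: early forecast errors weigh more.
    w = np.exp(-np.arange(T) / (T - 1))
    err = np.sum((forecasts - truths) ** 2, axis=2)   # (M, T) squared norms
    return inv_term + eps2 * np.sum(w * err)
```

A derivative-free global optimizer such as CMA-ES can then minimize this scalar loss over the RC macro-parameters \(N,\ \rho_{A},\ \rho_{SR},\ \sigma,\ \sigma_{b},\ \beta\).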
We show the Lyapunov exponents and the fractal dimension as examples of dynamical invariants in order to demonstrate the technique. With the equations of motion, such as Eq.(4) for the RC, it is quite simple to calculate these quantities using well known and efficient algorithms [30]. When training directly from data--without knowledge of the underlying system--we may not know the equations of motion so these quantities must be estimated. The largest LE can often be approximated from time series data [29, 51, 52] and the fractal dimension can be calculated using various techniques [29, 53]. A calculation of the full LE spectrum is more difficult. Use of other dynamical invariants derived from the invariant measure Eq.(2) are also possible, for instance the energy density spectrum of a fluid dynamical system as a function of wavenumber.
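When only trajectory data are available, the largest LE can be estimated by tracking the divergence of two nearby trajectories with periodic renormalization (a Benettin-style scheme, one of the standard approaches; the implementation below is a minimal sketch demonstrated on the logistic map, whose exponent at \(r=4\) is known analytically to be \(\ln 2\)):

```python
import numpy as np

def largest_le_map(f, x0, n=100_000, d0=1e-9):
    """Benettin-style largest-LE estimate for a 1-D map: evolve two nearby
    states, accumulate the log stretching, and renormalize each step."""
    x, y = x0, x0 + d0
    s = 0.0
    for _ in range(n):
        x, y = f(x), f(y)
        d = abs(y - x)
        s += np.log(d / d0)
        y = x + d0 * np.sign(y - x)  # reset the separation to d0
    return s / n

lam = largest_le_map(lambda x: 4 * x * (1 - x), 0.3)  # ≈ ln 2 ≈ 0.693
```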
Figure 2: Example prediction and the probability distribution function of the valid prediction time (VPT) for 200 initial conditions over the 20 dimensional QG system described in section 5.2. The valid prediction time [6, 47] is the forecast time at which the normalized forecast error \(\sqrt{\frac{1}{D}\sum_{i}\big(\frac{u_{i}^{f}-u_{i}}{\sigma_{i}}\big)^{2}}\) first exceeds a threshold \(\epsilon\), in this case \(\epsilon=0.3\), approximately in line with [6, 45, 47]. \(D\) is the system dimension, \(\sigma\) is the long term standard deviation of the time series and \(u^{f}\) is the RC forecast. The inverse of the largest LE, \(1/\lambda_{1}\sim 12\) days, gives the natural time scale for error growth of the system and thus can be used as a measurement of predictive skill. The RC returns excellent prediction times for this low resolution model. The size of the RC is \(N=1500\) and 100,000 training steps were provided with \(\Delta t=80\) min, giving about 30 years of data; we consider this the “data rich” case.
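For concreteness, a VPT of this kind can be computed from a forecast/truth pair as follows (a minimal sketch; the per-component \(\sigma\) normalization is one common choice and the names are ours):

```python
import numpy as np

def valid_prediction_time(u_f, u, sigma, dt, eps=0.3):
    """First time the normalized forecast error exceeds eps.

    u_f, u : (T, D) forecast and truth; sigma : (D,) long-term std per component.
    Returns the VPT in the units of dt."""
    err = np.sqrt(np.mean(((u_f - u) / sigma) ** 2, axis=1))
    idx = int(np.argmax(err > eps))
    if err[idx] <= eps:          # threshold never exceeded in the window
        return len(u) * dt
    return idx * dt

# Toy check: the forecast departs from the truth at step 5.
u_truth = np.zeros((10, 2))
u_fore = np.zeros((10, 2))
u_fore[5:] = 1.0
vpt = valid_prediction_time(u_fore, u_truth, np.ones(2), dt=1.0)  # 5.0
```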
## 5 Results
### Lorenz 1996
Our first test case for the RC is the Lorenz 1996 system (L96), a standard testbed for data assimilation applications in numerical weather prediction. L96 describes the evolution of a scalar quantity over a number of sites scattered uniformly over a periodic spatial lattice of constant latitude, with terms approximating advection, damping, and forcing
\[\frac{\mathrm{d}u_{k}}{\mathrm{d}t}=-u_{k-1}(u_{k-2}-u_{k+1})-u_{k}+F. \tag{7}\]
In this case we take the number of sites to be \(D=10\) and forcing \(F=8\) with the purpose of making the system hyperchaotic, with three positive Lyapunov exponents as shown in Fig.(4).
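Eq. (7) is straightforward to integrate; a minimal sketch with a fourth-order Runge-Kutta step (the spin-up length and \(\Delta t\) here are illustrative choices, not the paper's settings):

```python
import numpy as np

def l96_rhs(u, F=8.0):
    """Lorenz 1996 tendency of Eq. (7) with periodic site index k."""
    return np.roll(u, 1) * (np.roll(u, -1) - np.roll(u, 2)) - u + F

def rk4_step(u, dt, F=8.0):
    k1 = l96_rhs(u, F)
    k2 = l96_rhs(u + 0.5 * dt * k1, F)
    k3 = l96_rhs(u + 0.5 * dt * k2, F)
    k4 = l96_rhs(u + dt * k3, F)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u = 8.0 + 0.01 * np.random.default_rng(0).standard_normal(10)  # D = 10 sites
for _ in range(1000):  # spin up onto the attractor
    u = rk4_step(u, 0.01)
```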
Figure 4: Lyapunov exponents of the 10D Lorenz 1996 system with F=8. There are three positive LEs, a single zero exponent, and the rest negative.
Figure 3: Parameter optimization of a reservoir computer showing the introduction of dynamical invariants into the routine. The observed data is split into training, validation and testing sets with the invariants calculated from the data [29]. These quantities can then be incorporated into the loss function to improve the overall training of the reservoir computer. A general discussion of the training strategy is found in [45].
The results for using the LEs as \(C_{RC}\) in Eq.(6) are shown in Fig.(5). When no global information is given to the RC it can fail to generalize when presented with unseen input. Simply providing the largest LE to the RC during training enables the neural network to
1. generalize to unseen data so that there are good predictions over the entire range of possible initial conditions
2. reconstruct the attractor as in [54] and [48] with the prediction giving the correct ergodic properties of the data even after the prediction necessarily diverges from the ground truth.
In this case providing the largest exponent was enough to improve the predictions, with no further gains coming from providing the smaller exponents. This could perhaps be due to the parameter space being constrained enough for those exponents to be matched by the RNN even though they are not directly given.
The second invariant given is the fractal dimension of the data calculated through the Kaplan-Yorke formulation
\[\text{Dimension}=\alpha+\frac{\sum_{i=1}^{\alpha}\lambda_{i}}{|\lambda_{\alpha +1}|}, \tag{8}\]
with \(\lambda_{i}\) the ordered Lyapunov exponents and \(\alpha\) the largest index for which the cumulative sum \(\sum_{i=1}^{\alpha}\lambda_{i}\) remains non-negative [29]. There are alternate definitions and methods for calculating the fractal dimension [53].
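Eq. (8) translates directly into code; a minimal sketch (the Lorenz '63 spectrum in the example is a standard literature value, used only as a sanity check):

```python
import numpy as np

def kaplan_yorke(lyap):
    """Kaplan-Yorke dimension, Eq. (8), from a Lyapunov spectrum."""
    lam = np.sort(np.asarray(lyap, dtype=float))[::-1]  # order descending
    cs = np.cumsum(lam)
    if cs[0] < 0:
        return 0.0
    alpha = int(np.max(np.nonzero(cs >= 0)[0])) + 1  # largest index with sum >= 0
    if alpha == len(lam):
        return float(len(lam))
    return alpha + cs[alpha - 1] / abs(lam[alpha])

dim = kaplan_yorke([0.906, 0.0, -14.57])  # Lorenz '63 spectrum -> ≈ 2.06
```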
Providing the fractal dimension as an invariant has a similar effect to providing the LEs by raising the mean valid prediction time from \(\sim 2\to 4\). The fractal dimension may not, however, be unique to a particular set of data while the LE spectrum has a much greater chance of constraining the shape of the resulting strange attractor. Therefore we see that there is not as much improvement in the forecast when providing the fractal dimension compared to the LE spectrum.
### Synoptic Scale Atmospheric Model
For more complex higher-dimensional dynamical systems, the Lyapunov spectrum or Kaplan-Yorke dimension are quite difficult, if not impossible, to calculate. However, our previous results showed that capturing the leading Lyapunov exponent (LLE) enhanced prediction skill greatly, and even with complex models this quantity can be estimated more readily either from data [51, 52] or from a model [55, 56]. We therefore explore the value of representing the LLE
Figure 5: The RC is initialized 10 times, providing k LEs and M=7 long term forecasts Eq.(6); we report on the average of the distribution of predictions for the 10 dimensional Lorenz 1996 system [25] Eq.(7). There are 10 total LEs—3 positive, shown in Fig.(4). When 0 LEs are provided the RC has no global information and the prediction time is poor. Giving the largest LE is sufficient for generalizable predictions and this quantity is quite easily calculated from numerical data [29]. The last column shows the result for providing the fractal dimension as another example of an invariant quantity. The size of the RC is \(N=400\) and the number of test initial conditions was 1000. The forecast time is given as an average over the 1000 trajectories and then scaled by the largest Lyapunov exponent to give the prediction in terms of the number of Lyapunov timescales.
in the more complex case of quasi-geostrophic (QG) dynamics [57], which assumes that large-scale atmospheric disturbances are governed by the conservation of potential temperature and absolute potential vorticity, while the horizontal velocity is quasi-geostrophic and the pressure quasi-hydrostatic. Numerical models based on the QG approximation were a precursor to larger scale primitive equation models used for global numerical weather prediction [3], and the approximation is frequently used in data assimilation studies targeting the atmosphere and ocean [58, 59]. Here, we consider the two-layer baroclinic model of Charney and Strauss (1980) [60] used to study the planetary-scale motions of a thermally driven atmosphere in the presence of topography. We further incorporate the adaptation of Reinhold and Pierrehumbert (1982) [26] to include an additional wave in the zonal direction, making the model highly baroclinically unstable. We use the implementation of [28], which provides a truncated 2-layer QG atmospheric model on a mid-latitude \(\beta-\)plane frictionally coupled to a mountain and a valley with a dimension of 20 in the spectral space of the model.
For the atmospheric streamfunctions \(\psi_{a}^{1}/\psi_{a}^{3}\) at heights 250/750 hPa and the vertical velocity \(\omega=\frac{\mathrm{d}p}{\mathrm{d}t}\), the equations of motion are derived to be
\[\frac{\partial}{\partial t}\big(\nabla^{2}\psi_{a}^{1}\big)+\overbrace{J(\psi_{a}^{1},\nabla^{2}\psi_{a}^{1})}^{\text{horizontal advection}}+\overbrace{\beta\frac{\partial\psi_{a}^{1}}{\partial x}}^{\text{Coriolis force}}=-\overbrace{k_{d}^{\prime}\nabla^{2}(\psi_{a}^{1}-\psi_{a}^{3})}^{\text{friction}}+\overbrace{\frac{f_{0}}{\Delta p}\omega}^{\text{vertical stretching}} \tag{9}\]
\[\frac{\partial}{\partial t}\big{(}\nabla^{2}\psi_{a}^{3}\big{)}+J(\psi_{a}^{3},\nabla^{2}\psi_{a}^{3})+J(\psi_{a}^{3},f_{0}h/H_{a})+\beta\frac{\partial\psi_ {a}^{3}}{\partial x}=k_{d}^{\prime}\nabla^{2}(\psi_{a}^{1}-\psi_{a}^{3})-k_{d }\nabla^{2}\psi_{a}^{3}+\frac{f_{0}}{\Delta p}\omega \tag{10}\]
with \(\nabla=\frac{\partial}{\partial x}\hat{x}+\frac{\partial}{\partial y}\hat{y}\), \(k_{d}^{\prime}\) the friction between the layers, \(k_{d}\) the friction between the atmosphere and the ground, \(h/H_{a}\) the ratio of ground height to the characteristic depth of the atmospheric layer, \(\Delta p=500\) hPa the pressure differential between the layers and \(J(g_{1},g_{2})=\frac{\partial g_{1}}{\partial x}\frac{\partial g_{2}}{\partial y}-\frac{\partial g_{1}}{\partial y}\frac{\partial g_{2}}{\partial x}\) the Jacobian. More details are given in [26, 28].
After integrating the model forward in time we ask if an RC model is capable of predicting the dynamics given a significant amount of training data. An example forecast and distribution of predictions is shown in Fig.(2) where the RC successfully predicts the synoptic scale atmospheric dynamics for a number of months. Such significant predictive power on a low resolution QG model is an interesting result in and of itself, showcasing the ability of RNNs to resolve more realistic atmospheric dynamics.
When we reduce the amount of data and set \(M=1\)--the limited data case in Eq. (6) with a single forecast as part of the training loss--the reservoir loses its predictive power. Adding in the information contained in the LLE, which is \(\sim 0.01\), enables the model to recover a large amount of predictive capability. The calculated LLE of the RC with no provided exponents is 0.14, compared to \(\sim 0.01\) when it is provided. The mismatch between the LEs of the two systems is a clear indication that synchronization between the two is not achieved [48]. The impact of the added information provided via the LLE is clear in Figure 6, where the average VPT has extended from only a few days to multiple weeks.
## 6 Discussion and Conclusion
Chaotic dynamical systems are difficult to predict due to their sensitivity to initial conditions [61]. Better understanding and accounting for dynamical uncertainties has, however, allowed fields like numerical weather prediction to provide useful and continually improving forecasts [62]. Previous works (e.g. [13, 49]) proposed that including conserved quantities such as energy/momentum may help to improve the application of neural networks to physical systems. However, the introduction of the proposed conserved quantities is not generally applicable to dissipative chaotic dynamical systems. Thus, we instead considered dynamical invariants based on the invariant measure.
We provided a concrete example using quantities derived from the invariant measure, such as the Lyapunov exponents and the fractal dimension, to train a particular RNN architecture called reservoir computing. Previous RC training algorithms used long-term forecasts initialized from many different initial conditions in order to improve generalizability [6, 46], essentially imposing these invariant measures by proxy. Here, we imposed the invariant measures as constraints directly in the training algorithm, allowing the RC to generalize with fewer data. Fortunately, we have found that much of the value of this additional constraint is achieved through the use of the leading Lyapunov exponent. While the entire Lyapunov spectrum can be quite difficult to calculate, particularly for large systems, the leading Lyapunov exponent can be estimated by using numerical techniques such as the breeding method [63, 64] or other methods described in [21, 55, 56]. This provides an opportunity for extension of this technique to higher-dimensional systems.
Recent works from [65, 66, 67] have shown promise in producing data-driven surrogate weather models that are competitive by some metrics with conventional operational forecast models. A key property that has not yet been demonstrated
with such surrogate models is their ability to reproduce dynamical quantities such as the LEs, which indicate an average measure of the response to small errors in the initial conditions. For weather models in particular, the enforcement of LEs is crucial for the correct operation of data assimilation algorithms [3]. Platt et al. [48] demonstrated the importance of reconstructing the LE spectrum for producing a skillful deterministic forecast model. Similarly, Penny et al. [6] indicated the ability of the RC to reproduce accurate finite-time LEs as a requirement for RC-based ensemble forecasts to produce good estimates of the forecast error covariance, which is the primary tool used in conventional data assimilation methods to project observational data to unobserved components of the system. This information can then be used to make the RC robust to sparse and noisy measurements. While this is the more realistic scenario in operational weather prediction systems, it is rarely taken into account in neural network applications. The introduction of explicit constraints in the training cost function both improves prediction and trains the RNN to reconstruct the correctly shaped attractor [45, 54].
## 7 Acknowledgements
J.A. Platt, S.G. Penny, and H.D.I. Abarbanel acknowledge support from the Office of Naval Research (ONR) grants N00014-19-1-2522 and N00014-20-1-2580. S.G. Penny and T.A. Smith acknowledge support from NOAA grant NA20OAR4600277. T.-C. Chen acknowledges support from the NOAA Cooperative Agreement with the Cooperative Institute for Research in Environmental Sciences at the University of Colorado Boulder, NA17OAR4320101.
## 8 Source Code
The basic RC implementation used in this study is available at
[https://github.com/japlatt/BasicReservoirComputing](https://github.com/japlatt/BasicReservoirComputing)
| 2310.01986 | A Vision-Based Tactile Sensing System for Multimodal Contact Information Perception via Neural Network | In general, robotic dexterous hands are equipped with various sensors for acquiring multimodal contact information such as position, force, and pose of the grasped object. This multi-sensor-based design adds complexity to the robotic system. In contrast, vision-based tactile sensors employ specialized optical designs to enable the extraction of tactile information across different modalities within a single system. Nonetheless, the decoupling design for different modalities in common systems is often independent. Therefore, as the dimensionality of tactile modalities increases, it poses more complex challenges in data processing and decoupling, thereby limiting its application to some extent. Here, we developed a multimodal sensing system based on a vision-based tactile sensor, which utilizes visual representations of tactile information to perceive the multimodal contact information of the grasped object. The visual representations contain extensive content that can be decoupled by a deep neural network to obtain multimodal contact information such as classification, position, posture, and force of the grasped object. The results show that the tactile sensing system can perceive multimodal tactile information using only one single sensor and without different data decoupling designs for different modal tactile information, which reduces the complexity of the tactile system and demonstrates the potential for multimodal tactile integration in various fields such as biomedicine, biology, and robotics. | Weiliang Xu, Guoyuan Zhou, Yuanzhi Zhou, Zhibin Zou, Jiali Wang, Wenfeng Wu, Xinming Li | 2023-10-03T11:58:14Z | http://arxiv.org/abs/2310.01986v2 | A Vision-Based Tactile Sensing System for Multimodal Contact Information Perception via Neural Network
###### Abstract
In general, robotic dexterous hands are equipped with various sensors for acquiring multimodal contact information such as position, force, and pose of the grasped object. This multi-sensor-based design adds complexity to the robotic system. In contrast, vision-based tactile sensors employ specialized optical designs to enable the extraction of tactile information across different modalities within a single system. Nonetheless, the decoupling design for different modalities in common systems is often independent. Therefore, as the dimensionality of tactile modalities increases, it poses more complex challenges in data processing and decoupling, thereby limiting its application to some extent. Here, we developed a multimodal sensing system based on a vision-based tactile sensor, which utilizes visual representations of tactile information to perceive the multimodal contact information of the grasped object. The visual representations contain extensive content that can be decoupled by a deep neural network to obtain multimodal contact information such as classification, position, posture, and force of the grasped object. The results show that the tactile sensing system can perceive multimodal tactile information using only one single sensor and without different data decoupling designs for different modal tactile information, which reduces the complexity of the tactile system and demonstrates the potential for multimodal tactile integration in various fields such as biomedicine, biology, and robotics.
multimodal tactile sensing, vision-based tactile sensor, deep neural network, grasped object perception
## I Introduction
The ability of humans to manipulate objects with dexterity relies on the thousands of sensory receptors in hands that detect various types of tactile information, including vibration, pressure, and friction [1-4]. This multimodal tactile information is also essential for robots to achieve dexterous manipulation. To enable accurate measurement of external physical stimuli in dexterous manipulation tasks, several tactile sensing mechanisms such as piezoelectric, capacitive, cantilever-based, and resistive sensing systems have been developed [5-16]. However, obtaining multimodal tactile information remains a significant challenge for tactile sensing systems [17-19]. One of the main reasons is that most conventional tactile sensing systems are based on point measurements, resulting in low spatial resolution [20]. Additionally, due to the limitations of sensing principles, numerous multimodal tactile perception systems rely on different sensing mechanisms to achieve decoupling of various tactile modes, making it difficult to achieve decoupled sensing of multiple tactile modes within a single system [21-23]. This restricts their applications in compact sensor structures and multimodal tactile sensing tasks.
To address these issues, among different types of tactile sensors, vision-based tactile sensors have been extensively investigated [24-30], which use an elastomer and a complementary metal-oxide-semiconductor (CMOS) sensor to convert tactile information into visual representations. By modifying the design of the elastic design, these sensors can detect different modalities of tactile information, such as force, contact location, and contact shape, enabling high-resolution, multimodal tactile information sensing [31-36]. These systems utilize the characteristics of vision-based tactile sensors to capture different modalities of tactile information (such as force, contact position, and contact posture, etc.) in different physical domains. By establishing effective mapping relationships specific to each physical domain, visual representations are transformed into corresponding tactile information, and appropriate algorithms and models are designed to achieve accurate feature extraction and recognition. However, as the dimensions of different tactile modalities increase, it becomes necessary to adopt different optical designs and a suitable decoupling method for different contact information. As shown in **Table I**, to realize the simultaneous recognition of multiple tactile information, the researchers have tried to combine a variety of structural designs and data processing techniques, such as the combination of elastomer markers or finite element analysis to provide feedback on multidimensional forces, the combination of structured light design or photometric stereo methods to achieve three-dimensional reconstruction, and combining thermochromic materials to realize temperature sensing. This implies more complex challenges in system integration and data decoupling to ensure the accurate extraction and recognition of multimodal tactile features. Therefore, a primary challenge for vision-based tactile sensing systems is
how to effectively process and decouple multimodal tactile information with visual representations to achieve accurate feature extraction and recognition.
In the face of the significant demand and challenges in extracting multimodal tactile information from visual representations, we expanded the tactile perception technology based on vision-based tactile sensors to achieve multimodal tactile perception with the use of reflective-only optical design, as shown in **Fig. 1**. Vision-based tactile sensing system accesses tactile images with high-density contact information, and deep neural networks can be used to achieve the recognition of multimodal tactile information. Compared with the traditional tactile sensing system, the proposed vision-based tactile sensing system has the ability to sense high-resolution and high-density tactile information which can characterize contact information for different modalities with only one sensor. Based on the output of the tactile image from the vision-based tactile sensor, a deep neural network to decouple the tactile images was proposed to obtain the contact object classification, position, pose angle, and normal force information. In contrast to traditional methods, the proposed approach does not require different algorithmic processes and optical designs for different modalities of contact information. Instead, it only requires the tactile images to recognize contact information for different modalities. Moreover, since the proposed method is data-driven and the extracted features are common to other tasks, it is possible to further extend the recognition of more modalities of tactile information by simply adding output dimensions without redesigning the data processing methods. The proposed system provides a new tactile solution with high resolution and integration to be applied to multimodal sensing tasks such as biomedical, biology, and robotics.
## II Design of the vision-based sensor
### _Design and mechanism of the vision-based sensor_
The structure of the proposed vision-based tactile sensor design is shown in **Fig. 2(a)**. The sensor consists of three main components: a soft contact interface with a reflection layer, a shell, and a CMOS image sensor with a light source. The soft contact interface consists of a reflection layer made of aluminum film manufactured by a sputter coater (Quorum Q150T S Plus), a deformation layer made of polydimethylsiloxane (PDMS Sylgard 184, manufactured by Dow), and a support layer made of acrylic, which converts the external physical contact information into optical information. The CMOS sensor equipped with a wide-angle lens is responsible for capturing optical information. To keep the system compact, three pairs of white Light Emitting Diodes (LEDs) in the CMOS peripheral circuitry provide a light source to illuminate the contact interface. The shell is 3D printed (Ultimaker S3 3D printer) and has interfaces for installation with the CMOS sensor and the contact interface. In the fabrication process, an acrylic sheet was fixed on a model made from the 3D printer to manufacture the support layer first. Then 2 mL of a mixture of uncured PDMS and curing agent at a ratio of 10:1 by mass was poured over the support layer to create a deformation layer, which was then placed in a vacuum chamber to remove air bubbles and allowed to cure naturally. Once the deformation layer was fully cured, the sample was plasma cleaned to improve the adhesion of the coating. A reflection layer was then manufactured by sputtering an Al film onto the sample surface. Finally, the assembly of the CMOS sensor with LED module, the shell made of 3D printing, and the soft contact interface
Fig. 1: Schematic diagram of the vision-based multimodal tactile sensing system. (a) Robotic dexterous manipulation tasks. (b) Schematic diagram of the vision-based tactile sensor. (c) Schematic diagram of the information flowing.
completed the overall design of the vision-based tactile sensor with an overall size of 35\(\times\)35\(\times\)30 mm. However, if the three components were assembled directly, there would be overexposure on the tactile image due to the excessive light intensity at the light source projection on the reflection layer (as shown in **Fig. 2(b)**), thus introducing system noise. Therefore, a shell with a diffusion layer was designed, as shown in **Fig. 2(c)**. When the light passes through the plane where the diffusion layer is located, it is attenuated before reaching the reflection layer, which eliminates the overexposure phenomenon.
The working principle of vision-based tactile sensors is to capture the optical changes of the surface caused by contact using a CMOS camera to characterize the contact characteristics as visual representations. Here, assuming that the inner surface of the sensor is a diffuse reflecting surface, the surface can be described using a height function \(z=f(x,y)\)[26]. In this case, the reflection intensity at the point \((x,y)\) can be represented as:
\[I(x,y)=R(\partial f/\partial x,\partial f/\partial y)E(x,y), \tag{1}\]
where \(R\) is a reflective function defined based on the surface gradient \(\partial f/\partial x\) and \(\partial f/\partial y\), and \(E\) represents the intensity of the incident light. When an object contacts the sensor, the surface of the sensor deforms, causing a change in the reflection function \(R\) of the sensor surface [26]. This change is characterized by local intensity variations in the image, allowing us to obtain tactile information about the deformation of the contact surface. As shown in **Fig. 3**, when an external force \(F\) is applied to the flexible deformation layer with a reflection film, the optical reflectance under the reflection film changes. The embedded CMOS sensor at the bottom can sense the changes in the reflection field caused by different contact situations due to deformation, and then convert the tactile information into visual representations as tactile images. Information about different contact objects, contact positions, and contact postures can be obtained by analyzing the distribution patterns of reflection field intensity in the visual representation of the tactile image. For different contact loads, the normal force can be estimated based on the reflection field intensity and the area of the reflection field pattern in the visual representation. These details will be discussed in detail in the following sections.
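The intensity model in (1) can be illustrated numerically: a Gaussian indentation of the surface changes the local gradients and hence the rendered intensity. The Lambertian reflectance and overhead illumination below are simplifying assumptions for illustration, not the sensor's actual optics:

```python
import numpy as np

# Toy rendering of Eq. (1): a Gaussian indentation z = f(x, y) changes the
# surface gradients and hence the intensity I = R(f_x, f_y) * E.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
f = -0.2 * np.exp(-(x**2 + y**2) / 0.1)     # indented surface height
fy, fx = np.gradient(f, y[:, 0], x[0])      # numerical surface gradients
# Assumed Lambertian reflectance under overhead light: cosine between the
# surface normal (-f_x, -f_y, 1)/|.| and the light direction (0, 0, 1).
R = 1.0 / np.sqrt(fx**2 + fy**2 + 1.0)
E = np.ones_like(f)                         # uniform incident intensity
I = R * E                                   # darker where the slope is steep
```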
### _Design of experiments for vision-based sensor_
As shown in **(1)**, the intensity of the incident light is closely related to the decoupling of the final tactile information. By regulating the area size and the relative position of the diffusion layer, the light intensity on the inner surface of the reflection layer can be regulated to improve the system resolution of the sensor. Here, two different types of diffusion layers were designed by regulating their relative position. The first type is shown in **Fig. 4(a)** and is called Type 1 (T1), where the diffusion layer is located at a position 18 mm above the CMOS sensor, and the light enters the surface of the reflection layer directly to illuminate the reflection layer. The second type is shown in **Fig. 4(b)** and is called Type 2 (T2), where the diffusion layer is located at 10 mm above the CMOS sensor. The light enters the edge of the deformation layer, illuminating the inner surface of the reflection layer by a light-guided action of the deformation layer. In addition, with these two types, four different diffusion layer patterns were designed as shown in **Fig. 4(a)** and **Fig. 4(b)** by adjusting the light transmission area size. To ensure that the diffusion layer does not affect the CMOS imaging, with the camera lens diameter of 15 mm as the standard, a circular pattern with 15 mm diameter called Category 1 (C1), a square pattern with 15 mm side length called Category 2 (C2), a rectangular pattern with 15mm and 24mm side length called Category 3 (C3), and a rectangular with 15mm and 33mm side length called Category 4 (C4) were fabricated. We built a resolution test platform to compare the resolution images of each design (**Fig. 5(a)**). Vision-based tactile sensors use CMOS to capture optical information to extract contact information. 
Due to its sensing properties, the upper limit of tactile sensing resolution is theoretically the imaging resolution of a CMOS imaging system, which is 7.13 lp/mm for the horizontal stripe and 8.98 lp/mm for the vertical stripe. Therefore, a resolution test target based on the USAF 1951 resolution test target with 100 μm gradients was proposed (**Fig. 5(b)**, left). Using the proposed
Fig. 4: (a) Four different diffusion pattern designs of type 1. (b) Four different diffusion pattern designs of type 2.
Fig. 3: Illustration of the principle of the tactile sensor.
Fig. 2: (a) Schematic diagram of the vision-based tactile sensor. (b) The tactile image without diffusion layer design. (c) The tactile image with diffusion layer design.
resolution test target as a quantitative standard enables a direct comparison of the resolution performance of different designs and their relationship to the theoretical upper image resolution limit.
The results are shown in **Fig. 5(c)** and **Fig. 6**, where **Fig. 6(a)** shows the comparison of the resolution performance of four different designs of T1 type with the original CMOS imaging system, and **Fig. 6(b)** shows the comparison of the resolution performance of four different designs of T2 type with the original CMOS imaging system. It can be found that the average resolution of both horizontal and vertical stripes of the T1 type design is better than that of T2. The T1 type makes the light from the light source reach the inner surface directly, while the T2 type makes the light reach the inner surface through the action of the deformation layer. So, the T1 design makes the light reaching the inner surface of the reflection layer more intense, thereby enhancing the resolution of the system to a certain extent. Similarly, for the same design type, when the area of the design pattern is larger, the intensity of the light reaching the inner surface of the reflection layer is more intense. Therefore, it can be found that the average resolution of the C1 pattern design is the best in both T1 and T2 designs. **Fig. 5(b)** shows the resolution test target and the imaging resolution images of the CMOS camera, which represents the theoretical limit of spatial resolution for vision-based tactile sensors. **Fig. 5(c)** shows the tactile resolution images of T1C1 and T2C4, which have the best and the worst resolution performance respectively. To conclude, the main influence on the designs of diffusion layers is the light field distribution of the light reaching the reflection layer. By regulating the relative positions of the diffusion layers and the pattern areas, the T1C1 design has a better performance with an average resolution of 7.13 lp/mm, which means that this design can resolve line pairs at the hundred-micron level comparable to the human tactile perception capability [37] and is the closest to the theoretical maximum tactile sensing resolution.
Therefore, the T1C1 type will be used for the following experiments.
## III Design of the neural network
### _Network structure_
In robotic grasping tasks, multimodal information sensing is of interest. Current multimodal tactile systems are often difficult to integrate in hardware, and their data processing for different modalities is typically handled separately, which complicates multimodal feature extraction. When an object touches the vision-based tactile sensor, the visual representations often contain multimodal information such as the pose angle, location, and normal force of the contact object, so the key to the problem lies in how to decouple the information of different tactile modes from the visual representations. Here, a multimodal information recognition method based on a neural network is introduced as shown in **Fig. 7(a)**, to accomplish contact part classification, object location recognition, posture angle recognition, and normal force recognition.
The structure of the proposed network, which draws on ideas from target detection algorithms, is shown in **Fig. 7(b)**. The original image of size 640\(\times\)640 is passed through the backbone network to obtain 3 feature maps of different dimensions, of size 80\(\times\)80, 40\(\times\)40, and 20\(\times\)20. This is equivalent to dividing the original image into a total of 8,400 regions of size 8\(\times\)8, 16\(\times\)16, and 32\(\times\)32, which are used to detect small, medium, and large objects respectively. Then, the three feature maps of different dimensions are fused by an FPN (Feature Pyramid Network) [38], and finally the outputs of four different kinds of contact information in the 8,400 regions are obtained after passing through a decoder with four channels (each channel has two 3 \(\times\) 3 convolutional layers and one 1 \(\times\) 1 convolutional layer). The first channel outputs the probability of positive samples and the coordinates and size of the predicted box in each region. The second channel outputs object category information, the third channel outputs pose angle information, and the fourth channel outputs normal
Fig. 5: (a) The resolution test platform for the vision-based tactile sensor. (b) The resolution image of the CMOS camera. (c) The resolution image of the vision-based tactile sensor output using the T1C1 (left) and T2C4 (right) type.
Fig. 6: (a) Resolution performance of four different designs of the T1 type and the original CMOS imaging system. (b) Resolution performance of four different designs of the T2 type and the original CMOS imaging system.
force information. After the tactile images are processed by the network, we can obtain the contact part class, pose angle, location, and normal force information of different target objects. The PC used for training and testing the neural network has an NVIDIA GeForce RTX 3080 GPU and an Intel Xeon Gold 5222 CPU, and runs Windows Server 2019.
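The decoder's region bookkeeping described above can be verified in a few lines (a sketch using the stated numbers, not the authors' code):

```python
input_size = 640
strides = (8, 16, 32)  # regions for small, medium, and large objects

# Feature map side lengths produced by the backbone at each stride
feature_sizes = [input_size // s for s in strides]  # 80, 40, 20

# Each feature-map cell corresponds to one stride-by-stride region of
# the input image, so the total number of predicted regions is:
total_regions = sum(n * n for n in feature_sizes)  # 8400
```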
### _Loss function_
The proposed network is based on the idea of target detection algorithms [39-44], whose core idea is to divide the input image into \(s\times s\) small grids in the backbone. Each grid is responsible for predicting the class and the location of the object whose center falls on it. However, to introduce the recognition of force and pose angle, we altered the decoder, which is responsible for outputting the final prediction information. According to the idea of the Decoupled Head in YOLOX, replacing the coupled detection head with a decoupled head separates the classification problem from the regression problem, which can effectively improve model performance [40]. In this regard, we designed the decoder with four parallel channels, each of which is responsible for predicting class, location, normal force, and pose angle information, respectively. Correspondingly, the loss function (**Eq. 2**) can be divided into four blocks corresponding to the four outputs. First, the Binary Cross-Entropy loss (_BCE_) between the predicted probability of a positive sample and the ground truth is calculated over the 8,400 regions; then the normal force regression loss, the classification loss, the pose angle loss, and the prediction box loss are calculated over the \(B\) samples filtered by simOTA. Here, to address the boundary problem in the pose angle recognition task, the angle values were encoded using the _CSL_ (Circular Smooth Label) technique [45], converting the regression task into an angle classification task at a fine granularity of 1 \({}^{\circ}\). Also, _smoothL1_ (the Smooth L1 loss function) [46] was introduced to make the network less sensitive to outliers and anomalies in the force regression task. Eventually, the loss function of the network is expressed as follows,
\[Loss=\sum_{i=1}^{B}\Big\{BCE\big(cls_{pre},cls_{gt}\big)+BCE\big[ \theta_{pre},CSL\big(\theta_{gt}\big)\big]\] \[+smoothL1\big(f_{pre},f_{gt}\big)\] \[+\Big[1-IoU\big(box_{pre},box_{gt}\big)^{2}\Big]\Big\}\] \[+\sum_{i=1}^{8400}BCE\big(obj_{pre},obj_{gt}\big), \tag{2}\]
where \(B\) represents the total number of samples filtered by simOTA [40]. 8400 represents the output dimension of the backbone. \(obj_{pre}\) represents the probability that the prediction is a positive sample, and \(obj_{gt}\) is 0 or 1 (where 0 represents a negative sample and 1 represents a positive sample). \(cls_{pre}\) and \(cls_{gt}\) represent the predicted and the real class information respectively. \(\theta_{pre}\) and \(\theta_{gt}\) represent the predicted and the real pose angle information respectively. \(f_{pre}\) and \(f_{gt}\) represent the predicted and the real normal force information respectively. \(CSL\) represents the _CSL_ encoding function with the Gaussian function as the window function. \(smoothL1\) is a robust \(L1\) loss that is less sensitive to outliers [46]. \(IoU\) (Intersection over Union) evaluates the overlap of the predicted box \(box_{pre}\) and the true box \(box_{gt}\).
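The two loss ingredients introduced above can be sketched in NumPy (an illustrative re-implementation, not the authors' code; the Gaussian window width `sigma` below is our own assumption, and the paper's `beta` for Smooth L1 is not stated):

```python
import numpy as np

def csl_encode(theta_deg, n_bins=180, sigma=6.0):
    """Circular Smooth Label: encode an angle as a soft label vector
    using a Gaussian window that wraps around the 0/180 boundary."""
    bins = np.arange(n_bins)
    d = np.abs(bins - theta_deg)
    d = np.minimum(d, n_bins - d)  # circular distance to the target
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 loss: quadratic near zero, linear for large errors,
    which makes it less sensitive to outliers than plain L2."""
    diff = np.abs(pred - target)
    return np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)

label = csl_encode(1.0)
# The peak sits at the target bin, and bins near 179 deg also receive
# non-negligible mass, since 1 deg and 179 deg are only 2 deg apart.
```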
### _Definition of the pose angle error_
In the pose angle prediction task, the predicted pose angles range from 0\({}^{\circ}\) to 180\({}^{\circ}\), and the actual angle numerical difference does not represent the actual angle error, such as the difference between 1\({}^{\circ}\) and 179\({}^{\circ}\) is 178\({}^{\circ}\) in numerical value while the actual difference is only 2\({}^{\circ}\). In this regard, as
Fig. 7: (a) Architecture of the proposed data processing process. (b) The structure of the proposed network.
mentioned above, the CSL technique was used to solve this problem during the training period. In the evaluation phase, we use the definition of rotation matrix to get the error of the pose angle shown as follows,
\[\Delta\theta=\arccos\left[\left|\frac{tr\left(R_{pre}R_{gt}{}^{-1}\right)}{2} \right|\right]\,, \tag{3}\]
where \(R_{gt}\) represents the rotation matrix of the ground truth angle. \(R_{pre}\) represents the rotation matrix of the predicted angle. \(tr\) represents the trace of the matrix. \(R_{pre}\cdot{R_{gt}}^{-1}\) characterizes the rotation from the ground truth angle to the predicted angle. Here, we use the definition of the rotation matrix to derive the pose angle error, and the derivation process is as follows.
As shown in **Fig. 8**, we define the actual and predicted angle as \(\theta_{1}\) and \(\theta_{2}\), and the actual frame \(A_{1}\) and predicted frame \(A_{2}\) of the target can be regarded as rotated by \(\theta_{1}\) and \(\theta_{2}\) along the x-axis respectively. So, \(A_{1}\) and \(A_{2}\) can be expressed as follows,
\[A_{1}=\begin{bmatrix}x_{1}\\ y_{1}\end{bmatrix}=\begin{bmatrix}cos\theta_{1}&-sin\theta_{1}\\ sin\theta_{1}&cos\theta_{1}\end{bmatrix}\begin{bmatrix}x_{0}\\ y_{0}\end{bmatrix}=C_{0}^{1}A_{0}\,, \tag{4}\] \[A_{2}=\begin{bmatrix}x_{2}\\ y_{2}\end{bmatrix}=\begin{bmatrix}cos\theta_{2}&-sin\theta_{2}\\ sin\theta_{2}&cos\theta_{2}\end{bmatrix}\begin{bmatrix}x_{0}\\ y_{0}\end{bmatrix}=C_{0}^{2}A_{0}\,. \tag{5}\]
At the same time, \(A_{2}\) can be also considered as \(A_{1}\) rotated by \(\Delta\theta\), which is why \(A_{2}\) can also be expressed as follows,
\[A_{2}=\begin{bmatrix}x_{2}\\ y_{2}\end{bmatrix}=\begin{bmatrix}cos\Delta\theta&-sin\Delta\theta\\ sin\Delta\theta&cos\Delta\theta\end{bmatrix}\begin{bmatrix}x_{1}\\ y_{1}\end{bmatrix}=C_{1}^{2}A_{1}\,. \tag{6}\]
By combining **(4)**, **(5)**, and **(6)**, we can find that the rotation matrix from \(A_{1}\) to \(A_{2}\) can be expressed as follows,
\[C_{1}^{2}=C_{0}^{2}C_{1}^{0}=\begin{bmatrix}cos\Delta\theta&-sin\Delta\theta \\ sin\Delta\theta&cos\Delta\theta\end{bmatrix}\,. \tag{7}\]
Therefore, the absolute error between the actual angle and the predicted angle is finally obtained as **(3)**.
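Eq. (3) handles exactly the wrap-around case described earlier. A minimal NumPy check (an illustrative sketch, not the paper's code):

```python
import numpy as np

def rot2d(theta_deg):
    """2D rotation matrix for an angle given in degrees."""
    t = np.deg2rad(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def angle_error_deg(theta_pred, theta_gt):
    """Eq. (3): absolute pose-angle error recovered from the trace of
    the relative rotation between prediction and ground truth."""
    R = rot2d(theta_pred) @ np.linalg.inv(rot2d(theta_gt))
    c = np.clip(np.abs(np.trace(R) / 2.0), -1.0, 1.0)
    return np.degrees(np.arccos(c))

# Numerically 179 deg and 1 deg differ by 178, but the actual pose
# error under the 0-180 deg convention is only 2 deg.
err = angle_error_deg(179.0, 1.0)
```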
## IV Experimental Evaluation
For accurate robot manipulation tasks, it is necessary to obtain multimodal contact information such as category, pose, location, and forces applied to the target object during contact, etc. In this regard, this section demonstrates the performance of the proposed system in recognizing multimodal contact information through experiments. The experimental setup, as shown in **Fig. 9(a)**, consists primarily of a digital force gauge, a Raspberry Pi, and a translation platform. The sensor is fixed on the translation platform to control the contact location and pose angle with the probe, while the digital force gauge with a probe is mounted on the vertical translation stages. The vertical translation stage is then controlled to establish contact between the probe and the sensor, at which point the force gauge starts recording data and sends a rising edge signal to the Raspberry Pi, which then drives the camera to take a picture for a synchronized, real-time measurement of the tactile sensing system and the force gauge.
As shown in **Fig. 9(b)**, to investigate the relationship between visual representation and normal forces, we collected more than 3,000 samples of five spherical contact probes with diameters of 10 mm, 15 mm, 20 mm, 25 mm, and 30 mm. For pose angle analysis, we used a strip-shaped contact probe for collecting different posture data with more than 2,000 samples. We developed a contact probe with a contact interface that aligns with the proposed vision-based tactile sensor, allowing us to simulate scenarios where a robot grasps a screw and when it grasps five different objects.
### _Visual representation characteristics of different contact situations in tactile images_
Based on the mechanism and design optimization of the vision-based tactile sensor we developed a multimodal tactile sensing system. The vision-based tactile sensor converts tactile information into high-density, high-resolution visual representations as tactile images, which can be decoupled as multimodal tactile feature information from redundant visual information. In order to investigate the ability of the multimodal vision-based tactile sensing system to distinguish between different modes of tactile information, we used different types of contact probes as test objects and analyzed their visual representations characteristics separately.
First, in this study, a sensing scheme based on single-point contact was used to investigate the recognition of normal force. Five spherical contact probes of varying diameters were used in conjunction with the vision-based tactile sensor to apply different loads, as shown in **Fig. 10(a)**.
Fig. 8: Diagram of angle error calculation.
Fig. 9: (a) The experimental setup of multimodal contact information recognition. (b) Photos of different contact probes.
Because the spherical probes are geometrically isotropic, there is no issue of pose recognition during contact. By using probes of different diameters, the relationship between force conditions and tactile image features can be explored in an intuitive manner. As shown in **Fig. 10(a)**, three contact probes were selected to demonstrate that the normal force applied to the target object is related to the area and light field distribution of the tactile image. Under the same contact force, the tactile image produced by a larger-diameter contact probe has a larger contact area and shallower shading. For example, in the load range of 0 to 3 N, as the diameter of the contact probe increases, the tactile image has a larger contact area and the contrast of the edge shading gradually decreases. When the same-diameter probe was used for contact, the tactile image produced under higher load conditions had a larger contact area and stronger contrast. For instance, when using a 10 mm diameter contact probe, as the load increases, the resulting tactile image has a larger contact area, the intensity of the edge reflection light field gradually decreases, and the contrast of its edge shadow gradually increases. Furthermore, the proposed vision-based tactile sensing system can characterize different contact poses. The five spherical probes used in the previous section are geometrically isotropic, so there is no difference in the contact surface for different contact poses. Therefore, a strip probe (**Fig. 10(b)**) was introduced to apply different contact poses to the sensor. For ease of subsequent data processing and discussion, a five-parameter method [45], with an angle range of 0\({}^{\circ}\) to 180\({}^{\circ}\), was used to define posture information.
For different contact postures, the visual representation of the tactile image also presents different posture angles, and the distribution of the reflection light field is related to the object's contact posture angle. Additionally, as shown in the comparison between **Fig. 10(a)** and **Fig. 10(b)**, the proposed sensing system can also characterize different contact positions and object information, such as shape and texture features. Tactile images are generated when different objects are in contact with the sensor at different positions, and different object categories can be distinguished by their unique shape and texture features of the light field distribution. The position information of the objects on the contact surface can also be clearly distinguished.
### _Integrating a neural network enables tactile sensing of different modes_
To validate the accuracy of the system in recognizing forces and poses through the neural network, we collected more than 3,000 samples with varying loads and more than 2,000 samples with different pose angles, and evaluated the ability to distinguish different modalities of tactile information. All the samples were divided into training and test sets in a ratio of nine to one, and 10 % of the samples in the training set were used as a validation set for tuning the hyper-parameters. Furthermore, distinguishing different tactile information modes requires the network to learn more features from a single tactile image. Therefore, we compare three different backbone networks: a lightweight backbone, ShuffleNet [47]; a common backbone for target detection, CSPNet [48]; and a transformer-based image processing backbone, Swin Transformer [49]. All three backbone networks compress the original image by factors of 8, 16, and 32, so they can produce feature maps with three different dimensions of 80\(\times\)80, 40\(\times\)40, and 20\(\times\)20 for decoder input.
**Table II** shows the evaluation of the networks for pose angle and normal force recognition. The design with the Swin Transformer backbone has an MAE (Mean Absolute Error) of 1.1 N for normal force recognition and 0.45 \({}^{\circ}\) for pose angle recognition; the design with the ShuffleNet backbone has an MAE of 0.7 N for normal force recognition and 0.67 \({}^{\circ}\) for pose angle recognition; and the design with the CSPNet backbone has an MAE of 0.2 N for normal force recognition and 0.41 \({}^{\circ}\) for pose angle recognition. The results show that the model with the CSPNet backbone outperforms those with ShuffleNet and Swin Transformer. For angle recognition, the proposed neural network subdivides angles from 0\({}^{\circ}\) to 180\({}^{\circ}\) into 180 categories with a subdivision of 1\({}^{\circ}\). Therefore, the discriminative limit of the model is an angular difference of 1\({}^{\circ}\), and the average accuracy of angle recognition for the three networks with different backbones is within this systematic error. For
Fig. 10: Tactile images of different contact situations (the scale in the figure is 10 mm). (a) Tactile images of different normal forces. (b) Tactile images of different contact postures.
normal force recognition, a larger number of model parameters theoretically yields better recognition accuracy, so the network with the CSPNet backbone performs better than ShuffleNet. In addition, the network with the Swin Transformer backbone lacks some of the inductive biases inherent to convolutional neural networks, and therefore does not generalize well when trained on the insufficient amount of data in this study, despite having a larger number of parameters. Furthermore, the FPS (Frames per Second) is related to the number of network parameters, so the model with ShuffleNet achieves the best FPS.
To summarize, the model with CSPNet has better performance; therefore, we choose CSPNet as the backbone in the subsequent experiments. The detailed performance of the model with CSPNet is shown in **Fig. 11**, where **Fig. 11(a)** visualizes the multimodal tactile decoupling results of the proposed system for different contact scenarios, demonstrating its ability to accurately perceive different contact poses, positions, forces, and object categories. **Fig. 11(b)** shows the box plot of the normal force prediction error with 1 N intervals within the range of 0 to 10 N. The relative absolute error over all samples is 5.4 %. The average prediction errors of the sensing system are within 5 % for the 3 to 10 N range, while the average relative errors for the 0 to 3 N range are greater than 5 %. Moreover, as shown in **Fig. 11(c)**, the absolute prediction errors in all force ranges are within 0.3 N, and the MAE over all samples is 0.2 N. Because the MAE of normal force recognition is about 0.2 N, this fixed absolute error becomes comparable to the load itself at small loading forces, so the MRE (Mean Relative Error) is relatively large in the small load range. **Fig. 11(d)** shows the box plot of the estimation error with 30\({}^{\circ}\) intervals in different angular ranges. The average prediction error across different pose angles is within 1\({}^{\circ}\), and only a few samples have a prediction error of up to 2\({}^{\circ}\). The experimental results show that, apart from the systematic error introduced by manual data labeling, the proposed system has relatively high accuracy in pose recognition tasks.
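The effect described here, where a roughly fixed absolute error translates into a large relative error at small loads, is easy to reproduce with synthetic numbers (illustrative data, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
true_force = np.linspace(0.5, 10.0, 200)                   # loads in newtons
pred_force = true_force + rng.normal(0.0, 0.2, size=200)   # ~0.2 N absolute error

abs_err = np.abs(pred_force - true_force)
rel_err = abs_err / true_force

small = true_force < 3.0      # the 0-3 N range
mae = abs_err.mean()          # roughly constant absolute error overall
mre_small = rel_err[small].mean()   # large: error is comparable to the load
mre_large = rel_err[~small].mean()  # small: the load dominates the error
```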
In our experiment, distinct tactile features associated with different contact situations were clearly discernible in the corresponding contact images. In contrast, conventional approaches to multimodal tactile detection often entail the integration of structures based on disparate mechanisms, thereby introducing additional complexity to hardware design and data processing. In light of these observations, our findings demonstrate that the proposed sensing system can effectively exploit image-based features to represent diverse tactile information and achieve high-performance recognition through the integration of machine learning techniques. Thus, our results underscore the potential advantages of utilizing visual representations of tactile information in the context of multimodal tactile sensing.
### _Vision-based tactile sensing intuitively and accurately identified multimodal tactile information_
Multimodal tactile information, including object pose, category, position, and force, plays a crucial role in the precise manipulation of objects. For instance, during grasping, multimodal tactile feedback can provide the grasping force and position feedback, ensuring the accurate grasping of
Fig. 11: (a) Demonstration of the recognition output for normal force and pose angle. (b) Box plot of the relative error of the normal force identification. (c) Box plot of absolute error of normal force identification. (d) Box plot of the absolute error of angle identification.
objects. The proposed sensing system described in the previous section enables us to quantitatively perceive the multimodal tactile information of objects during the grasping process by decoupling tactile images. Here, as shown in **Fig. 12**, we simulated four grasping scenarios of a screw (including the head, body, top, and bottom contact points) and collected more than 2,000 samples, divided into train and test sets in a ratio of 9 to 1. The performance of the proposed sensing system in the grasping simulation for a screw is shown in **Fig. 13**. Here, the AP@50 (Average Precision with an _IoU_ threshold of 50 %) value [50], i.e., the average precision obtained when a prediction is counted as a positive sample if its _IoU_ with the ground truth exceeds 50 %, was chosen as the metric to evaluate the estimation of contact object location and contact part classes. The errors of normal force and pose angle for different contact parts are shown in **Fig. 13(a)**. The MAEs of normal force identification for contacting the head, body, top, and bottom are 0.19 N, 0.21 N, 0.15 N, and 0.17 N respectively, and the posture angle identification errors for different contact parts are also all within 1 \({}^{\circ}\). **Fig. 13(b)** shows the AP values of the four contact parts, which all reach 100 %, proving that the proposed system has high accuracy in object location and contact part class prediction tasks.
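The IoU criterion underlying AP@50 can be sketched as follows (a generic implementation, not the authors' code); a prediction counts as a positive sample when this value exceeds 0.5:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction shifted by a quarter of the box width still overlaps
# well enough to count as a positive sample under the 50 % threshold.
iou = box_iou((0, 0, 4, 4), (1, 0, 5, 4))
```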
In addition, to demonstrate the universality and practicality of distinguishing different tactile modes, as shown in **Fig. 14(a) and (b)**, we additionally collected more than 2,000 samples of five common objects (a USB plug, a LEGO brick, an amplifier IC, a potentiometer, and a screwdriver), which were then divided into train and test sets in a ratio of 9 to 1. **Table III** shows the recall, precision, and AP values of the system for the five common object test sets, where the AP values of the amplifier IC, LEGO brick, and potentiometer all reach 100 %. The AP values of the screwdriver and USB plug are 94.81 % and 97.06 %, respectively; the recall and precision of the screwdriver are 89.66 % and 92.86 %, and those of the USB plug are 97.14 % and 94.44 %. The violin plots of normal force and pose angle errors for different objects are shown in **Fig. 14(c)**. The mean absolute errors of normal force recognition for the USB plug, LEGO brick, amplifier IC, potentiometer, and screwdriver are 0.26 N, 0.17 N, 0.18 N, 0.17 N, and 0.27 N, respectively, and the overall MAE of the normal force for the test set is 0.2 N. The mean absolute errors of pose angle recognition for the five different objects are all within 1 \({}^{\circ}\), and the overall MAE of the pose angle for the test set is 0.61 \({}^{\circ}\). In addition, there is no significant difference between the error distributions of normal force recognition and pose angle recognition for the five types of objects.
The system achieves impressive performance in the classification and location of contact objects, while maintaining a decent capability in normal force recognition and pose angle recognition, which is maintained under the same level for different contact cases as in the above experiments. Overall, the results illustrate that the proposed system can perform a perception of contact information for different modalities using only one sensor and without different data processing methods for different contact information while maintaining decent performance in each task. This ability is critical for a wide range of applications
Fig. 14: (a) Schematic diagram of five common objects. From left to right are a USB plug, a Lego brick, an amplifier IC, a potentiometer, and a screwdriver. (b) Tactile images of five common objects. (c) Violin plots of errors in normal force recognition and pose angle recognition for five objects.
Fig. 12: Schematic diagram of the grasping simulation for four cases (the scale in the figure is 10 mm).
Fig. 13: (a) Box plot of the estimation error of normal force and pose angle for different contact parts. (b) AP values for the 4 contact parts.
that require accurate manipulation of objects, such as manufacturing, robotics, and prosthetics. A video demonstration of the system's performance is also available (see **movie S1**).
## V Conclusion
Accurately obtaining multimodal tactile information has been a challenge for most tactile sensing systems. Here, this study proposes a vision-based tactile perception system that achieves multimodal tactile information perception. Compared to existing multi-sensor multimodal tactile perception systems, the proposed system uses only one sensor to achieve multimodal tactile perception, providing higher integration. The system uses a soft tactile interface combined with a CMOS sensor design to convert the optical response of external tactile stimuli of different modalities into a high-density, high-resolution visual representation. Because the visually represented tactile information is high-density, the proposed system does not need specific decoupling designs for different modalities of tactile information as in traditional methods, thus greatly reducing the difficulty of processing data from different tactile modalities. By using neural networks, the system can decouple the tactile information of different modalities hidden in the optical tactile image, thus achieving multimodal perception. In experiments, the proposed system achieved spatial resolution at the hundred-micron level, comparable to human tactile perception, and accurately identified the location, part category, contact posture, and contact load of grasped objects in simulated grasping applications, demonstrating decent tactile feature recognition ability. We believe that the proposed system provides a new solution with high resolution, high integration, and decent information decoupling ability, and is expected to be applied to multimodal tactile sensing tasks in various fields such as biomedicine, biology, and robotics.
# Attributing Learned Concepts in Neural Networks to Training Data

Nicholas Konz, Charles Godfrey, Madelyn Shapiro, Jonathan Tu, Henry Kvinge, Davis Brown

arXiv:2310.03149v4, 2023-10-04, http://arxiv.org/abs/2310.03149v4
###### Abstract
By now there is substantial evidence that deep learning models learn certain human-interpretable features as part of their internal representations of data. As having the right (or wrong) concepts is critical to trustworthy machine learning systems, it is natural to ask which inputs from the model's original training set were most important for learning a concept at a given layer. To answer this, we combine data attribution methods with methods for probing the concepts learned by a model. Training network and probe ensembles for two concept datasets on a range of network layers, we use the recently developed _TRAK_ method for large-scale data attribution. We find some evidence for _convergence_, where removing the 10,000 top attributing images for a concept and retraining the model does not change the location of the concept in the network nor the probing sparsity of the concept. This suggests that rather than being highly dependent on a few specific examples, the features that inform the development of a concept are spread in a more diffuse manner across its exemplars, implying robustness in concept formation.
+
Footnote †: \(\dagger\) Work done at Pacific Northwest National Laboratory.
## 1 Introduction
Given the role that concepts play in understanding and explaining human reasoning, measuring their use in neural networks is important for the goal of developing explainable and trustworthy AI. Driven by this, substantial effort has gone into developing methods that measure the presence of a concept within a neural network. Relatedly, a growing body of empirical work shows that deep neural networks learn to encode features as _directions_ in their intermediate hidden layers (Merullo et al., 2023; Wang et al., 2023). A common approach to finding these directions (or **concept vectors**) is via linear probing (Alain and Bengio, 2016). While probing has well-known shortcomings (Ravichander et al., 2020), it is hard to overstate the impact that concept probing has had on deep neural network interpretability. Prominent examples include probing for syntactic concepts in 'BERTology' (Rogers et al., 2021) and chess concepts in the AlphaZero network (McGrath et al., 2021).
A separate thread in explainability research explores **data attribution**, which, rather than measuring the importance of a concept for a model prediction, quantifies the impact of individual _training datapoints_ on a given model prediction (e.g., which images in the training set were most relevant for a classifier's prediction "zebra"?). Data attribution methods have proven to be effective for identifying brittle predictions, quantifying train-test leakage, and tracing factual knowledge in language models back to training examples (Ilyas et al., 2022; Park et al., 2023; Grosse et al., 2023).
In this work, we explore an interplay between concept vectors and data attribution, with the goal of obtaining a better understanding of how neural networks utilize human-understandable concepts. Namely, we ask the natural question:
_Which examples in a model's training data were important for learning hidden-layer concepts?_
Overall, we find that the process of learning a concept is robust to both removal of examples (no small subset of examples are critical to learning a concept) and stable across independent model training runs. While this stability may not be surprising from a human perspective, given that concepts are, by design, supposed to be relatively unambiguous between observers, it is interesting that a similar phenomenon is seen in models.
## 2 Experimental Methods
In this section we describe the two methods that are central to our study: using a linear probe to detect a concept and then applying data attribution to concept predictions. Let \(f(x)\) be an image classification neural network (the "base model") and assume that \(f(x)\) has been pretrained on a training set \(\mathcal{X}_{\mathrm{tr}}\). Assume that \(\mathcal{X}_{\mathrm{val}}\) is the corresponding validation set. We write \(f_{\leq i}\) for the composition of the first \(i\) layers of \(f\). Finally, for each of the concepts \(c\) that we study, we assume that we have a concept training and test set \(\mathcal{X}_{\mathrm{tr}}^{c}\) and \(\mathcal{X}_{\mathrm{val}}^{c}\) where elements of these sets are labeled by whether or not the elements are examples of \(c\).
### Probing for Concept Learning
The purpose of training a concept probe is to detect whether a specific human-interpretable concept is encoded in a hidden layer of a model. If the linear probe can effectively separate encoded exemplars of the concept from encoded examples that are unrelated to the concept, then we take this as evidence that the model has learned the concept. More specifically, having chosen the \(i^{th}\) hidden layer of \(f\) for investigation and a concept \(c\) captured by concept training and test sets \(\mathcal{X}_{\mathrm{tr}}^{c}\) and \(\mathcal{X}_{\mathrm{val}}^{c}\), we follow the common approach of training an affine linear probe \(g\) on the outputs of \(f_{\leq i}(\mathcal{X}_{\mathrm{tr}}^{c})\)[12, 15]. Because \(g\)'s decision boundary is linear, it is effectively summarized by a normal vector which, when the probe is effective, we take to point in the "direction" of concept \(c\) (up to a sign). This vector is called a _concept activation vector_ (CAV). For short-hand we will use \(g_{\leq i}(x):=g(f_{\leq i}(x))\) to describe the "subnetwork + probe" model.
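A minimal version of this probing setup can be sketched with synthetic features standing in for \(f_{\leq i}(x)\) (an illustrative sketch, not the paper's code): a logistic-regression probe is trained on the activations, and its normalized weight vector plays the role of the CAV.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32  # hidden-layer feature dimension

# Synthetic stand-in for hidden activations: concept exemplars are
# shifted along a planted "ground truth" concept direction.
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)
pos = rng.normal(size=(200, d)) + 4.0 * true_dir
neg = rng.normal(size=(200, d))
X = np.vstack([pos, neg])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Affine linear probe trained by plain gradient descent on the
# logistic loss; its weight vector is the concept direction.
w, b = np.zeros(d), 0.0
for _ in range(500):
    z = np.clip(X @ w + b, -30.0, 30.0)
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

cav = w / np.linalg.norm(w)            # concept activation vector
acc = ((X @ w + b > 0) == y).mean()    # probe accuracy on exemplars
alignment = abs(cav @ true_dir)        # recovered vs. planted direction
```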
### Attribution of Concept Predictions
The data attribution question that we seek to answer in this paper is: "which examples in \(f\)'s original training set \(\mathcal{X}_{\mathrm{tr}}\) were most important for it learning a concept \(c\)?" Since we can quantify how well \(f\) learned a concept at a layer \(i\) by the accuracy of a trained probe \(g_{\leq i}(x)\) on \(\mathcal{X}_{\mathrm{val}}^{c}\), it is more convenient to ask "which examples in the network's original training set \(\mathcal{X}_{\mathrm{tr}}\) were most important for the concept predictions of \(g_{\leq i}\) on a set of test images?"
A _data attribution method_\(\tau(x_{\mathrm{val}},x_{\mathrm{tr}};h)\) is a function that assigns a real-valued score to a training point \(x_{\mathrm{tr}}\in\mathcal{X}_{\mathrm{tr}}\) according to its importance to the prediction of a model \(h\) on some test/validation
Figure 1: Schematic of our approach for concept attribution: (1) train \(N\) models with different random seeds (in green) on the training set. (2) We choose a hidden layer \(i\), append a probing classifier \(g\) to its output, freeze the weights of \(f_{\leq i}\), and train (\(g\circ f_{\leq i}\)) on the concept dataset. (3) We calculate attributions with TRAK [12] for \(g\circ f_{\leq i}\) on elements of the test set in terms of the original training data and aggregate across fixed layers and concepts.
point \(x_{\rm val}\) [Park et al., 2023]. We will define the expected importance of a training point \(x_{\rm tr}\) to the concept predictions of \(g_{\leq i}\) as
\[\tau_{c}(x_{\rm tr}):=\mathbb{E}_{x_{\rm val}\sim\mathcal{X}_{\rm val}}\tau(x_{ \rm val},x_{\rm tr};g_{\leq i}), \tag{1}\]
as suggested by Park et al. [2023]. For all attribution experiments we use a recently developed data attribution method, TRAK (Tracing with the Randomly-projected After Kernel) [Park et al., 2023]. For details on this method we defer to the original paper.
TRAK requires an ensemble of \(M\) trained models; we use \(M=20\), but found similar results for as few as \(M=5\). Each model in the ensemble is a "subnetwork + probe" \(g_{\leq i}^{(j)}\), where \(1\leq j\leq M\). \(g_{\leq i}^{(j)}\) is created by training the same base model \(f\) on \(\mathcal{X}_{\rm tr}\) to obtain \(f^{(j)}\), then training a probe for a concept \(c\) on the \(i^{th}\) layer of \(f^{(j)}\) with the concept training set \(\mathcal{X}_{\rm tr}^{c}\), using \(\mathcal{X}_{\rm val}^{c}\) for validation.
The first step of TRAK is to "featurize" (process \(\mathcal{X}_{\rm tr}\)) and score (process \(\mathcal{X}_{\rm val}\)) each \(g_{\leq i}^{(j)}\), which we run in parallel over the ensemble. After this, the attribution scores of all \(M\) networks are aggregated, resulting in \(|\mathcal{X}_{\rm val}\times\mathcal{X}_{\rm tr}|\) final scores total, one for each pair \((x_{\rm val},x_{\rm tr})\). An important note here is that TRAK requires a task/loss function and corresponding target labels to evaluate the predictions of the models \(g_{\leq i}^{(j)}\) -- in our case, concept prediction and binary concept labels, respectively. For consistency, unlike the concept probe training and validation sets \((\mathcal{X}_{\rm tr}^{c},\mathcal{X}_{\rm val}^{c})\) which use manually-defined concept labels, we use one of the trained \(g_{\leq i}^{(j)}\) to assign these labels to \(\mathcal{X}_{\rm tr}\) and \(\mathcal{X}_{\rm val}\) for attribution. We summarize our experimental design in Fig 1.
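The aggregation in Eq. (1) reduces to averaging per-pair scores, first over the model ensemble and then over the validation set. A minimal sketch, where the score tensor, its shape, and the planted "important" training point are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical TRAK scores tau(x_val, x_tr; g^(j)) for M ensemble models,
# n_val test points and n_tr training points.
M, n_val, n_tr = 20, 10, 100
scores = rng.normal(size=(M, n_val, n_tr))
scores[:, :, 7] += 5.0            # plant one clearly important training point

per_pair = scores.mean(axis=0)    # aggregate over the ensemble of M models
tau_c = per_pair.mean(axis=0)     # Eq. (1): expectation over x_val

ranking = np.argsort(tau_c)[::-1] # highest-attributed training points first
```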
## 3 Datasets, Base Models and Concepts
We use "ImageNet10p" as the training and validation sets \(\mathcal{X}_{\rm tr}\) and \(\mathcal{X}_{\rm val}\), respectively, to train the base models. ImageNet10p is defined by randomly sampling \(10\%\) of the images of each class from the ImageNet [Deng et al., 2009] training and validation sets. The resulting ResNet-18 models obtained about \(45\%\) top-1 accuracy on \(\mathcal{X}_{\rm val}\) (see Appendix F for training details), from which we assume that the model learned meaningful enough visual representations for concepts to be present. We build two different concept probing datasets,1 and show example images of each concept in Appendix B.
Footnote 1: We initially experimented with using concepts from the Broden dataset [Bau et al., 2017] but we found probes trained on this dataset did not generalize well to arbitrary ImageNet images.
Concept 1: Snakes. We define the "Snakes" concept with the \(17\) ImageNet snake classes \(477\)-\(493\). The probe training set \(\mathcal{X}_{\rm tr}^{c}\) is constructed with (i) all examples of these classes in \(\mathcal{X}_{\rm tr}\) and (ii) the same number of non-snake images randomly sampled from \(\mathcal{X}_{\rm tr}\). The probe validation set \(\mathcal{X}_{\rm val}^{c}\) is constructed in the same manner from \(\mathcal{X}_{\rm val}\); this gives a concept training/validation split of \(4,374\)/\(170\).
Concept 2: High-Low Spatial Frequencies. We define the "High-Low Frequencies" concept with images where a directional transition from a region with high spatial frequency to a region with low spatial frequency is present [Schubert et al., 2021].2 We created this concept dataset by computing the top \(0.001\%\) highest-activating ImageNet images (by \(L_{\infty}\) norm) of the 'high-low frequency neurons' defined in layer mixed3a of InceptionV1 [Schubert et al., 2021]. These images are used with an equal number of random non-concept images to define the concept training and validation sets \(\mathcal{X}_{\rm tr}^{c}\) and \(\mathcal{X}_{\rm val}^{c}\), respectively, resulting in a training/validation split of \(362\)/\(20\). We found the high selection threshold necessary to identify images where the concept is clearly present.
Footnote 2: An analogous variant of this concept was arguably also discovered in biological neurons [Ding et al., 2023].
## 4 Experiments and Results
### Concept Attribution
In Fig. 2 we display the images in the training set \(\mathcal{X}_{\rm tr}\) which received the highest and lowest concept prediction attribution scores \(\tau_{c}(x_{\rm tr})\) (Eq. (1)) for each concept, for various layers of the base model. In Fig. 3 we show how the presence of each concept varies with network layer depth, where the two concepts were most present on average in layer3. Additional highest-attributed images are in Appendix C.1.
How does learned concept attribution vary between network layers? For the snakes concept, full snake images appear to be important for concept learning in deeper network layers, while images that possess textures common for snakes are most important for the earlier layer1. The concept does not appear to be present in very early layers (layer1.0.conv2), which is reasonable given that "snakes" is an abstract concept (see also Fig. 3). These observations are compatible with the conventional wisdom that deeper network layers learn more complex abstract features (such as objects), while earlier layers learn more basic features (such as textures).
The high-low frequency concept is fairly present throughout the network (Fig. 2, right and Fig. 3). Highest-attributed (and certain lowest-attributed) training set images for this concept contain transitions from high to low spatial frequency, such as pomegranate seeds over a flat background (layer1.0.conv2, image 1), baseball threading alongside a flat casing (layer1, image 4), interwoven threads over a smooth background (layer2, image 1), and fur over a smooth background (layer3, image 2). In comparison to the snakes concept probe, which has increasing accuracy with network depth, likely due to its connection to the base model's classification task, the high-low frequency concept fades after layer 3 as it is synthesized into higher-level concepts related to the label classes.
Are the concepts that a model learns the result of a few select exemplars? We analyze the importance of images in the base model's training set \(\mathcal{X}_{\text{tr}}\) for concept learning by (1) removing the \(T\) highest-attributed images from \(\mathcal{X}_{\text{tr}}\) to obtain \(\mathcal{X}_{\text{tr}}^{-T}\) (\(T\in\{100,1,000,10,000\}\)), (2) re-training the \(M\) base models on \(\mathcal{X}_{\text{tr}}^{-T}\), and (3) training concept probes on each of them for a given layer.3 If the probe concept detection validation accuracy changes after the training set is pruned of the most important examples of a concept, then we conclude that these examples were primarily responsible for the model learning the concept. If this does not happen, it suggests that a model learns a concept in a more flexible way, from a broad range of examples. For this experiment we measure concepts in the layer where both were most present on average, layer3. Our results are shown in the middle and right plots in Figure 4 (where these first experiments correspond to sparsity equal to \(10^{2}\)): concept validation accuracy did not change on models trained on \(\mathcal{X}_{\text{tr}}^{-T}\) for varying \(T\) compared to the baseline
Figure 3: **Concept presence within different network layers.** Average concept detection validation accuracy of probes trained on different layers, for each concept; confidence bands are std. deviation over 5 base models.
Figure 2: **Training set attributions for concept learning.** The four highest and two lowest attributed training set images (decreasing \(\tau_{c}(x_{\text{tr}})\) from left to right) for concept learning at different network layers. **Left half:** snakes concept. **Right half:** high-low frequencies concept.
of those on \(\mathcal{X}_{\mathrm{tr}}\), for either concept (Figure 3). This provides further evidence that the learning of a concept is diffuse among exemplars and does not depend on a few special examples. We note that this result may not be surprising given that the attribution scores are mostly similar across examples from \(\mathcal{X}_{\mathrm{tr}}\) (Figure 4, left): if all examples have the same importance for learning a concept, removing a fraction of them will not have a large effect. In particular, the high-low frequencies concept could be learnable from the majority of images in the training set, especially if it suffices for the probes simply to learn to be boundary detectors; we investigate this with "relative probing" to discriminate between generic object boundaries and the high-low frequency concept in Appendix D.
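The removal experiment reduces to a simple pruning step before retraining. The sketch below shows only that step, with invented attribution scores and an invented training-set size; the retraining of base models and probes is left as a comment, since it depends on the full training pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical concept-attribution scores tau_c(x_tr) over a training set
# (size and scores are invented for illustration).
n_tr = 50_000
tau_c = rng.normal(size=n_tr)

def prune_top_T(tau, T):
    """Indices of X_tr with the T highest-attributed points removed."""
    top_T = np.argsort(tau)[::-1][:T]
    return np.setdiff1d(np.arange(len(tau)), top_T)

pruned = {T: prune_top_T(tau_c, T) for T in (100, 1_000, 10_000)}
# For each T: retrain the M base models on X_tr^{-T}, retrain the layer3
# probes, and compare concept validation accuracy against the baseline.
```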
Finally, given evidence that semantically meaningful representations tend to be sparse in the neuron basis (Gurnee et al., 2023), we also trained sparse probes, thinking that in this regime removing a fraction of exemplars might have a larger effect. As the middle and right plots in Figure 4 and in Figure 5 show, the concepts also remained robust in this setting.
How similar are different probes trained for the same concept/layer? We show how probe accuracy changes with respect to network layer depth in Fig. 3. For a fixed layer, different probes typically converge to similar performance.4
Footnote 4: In Appendix E we discuss why we do not compare probes via CAV similarity.
In the case of the high-low frequency concept, we see that probe accuracy is highest at intermediate layers, and comparatively low at the earliest and latest layers. This is consistent with the original work of Schubert et al. (2021), which discovered "high-low frequency detector neurons" in intermediate layers of InceptionV1 networks (but not in the earliest or latest layers). This stands in contrast to the snakes concept, where probe accuracy increases monotonically with network depth. One possible explanation for this observation is that the snakes concept dataset was obtained from a subset of ImageNet classes. As such, the base models have been trained to correctly classify positively-labelled concept images using their output logits. Here our observations are consistent with Alain and Bengio (2016), which trained multi-class linear probes for ImageNet classification and found monotonically increasing accuracy with depth.
Figure 4: **Left: sorted distribution of concept learning attribution scores for the training set \(\mathcal{X}_{\mathrm{tr}}\), averaged over the validation set \(\mathcal{X}_{\mathrm{val}}\), for both concepts. Right: Effect of re-training the base model on \(\mathcal{X}_{\mathrm{tr}}\) with the top \(T\) concept-attributed training examples removed (\(\mathcal{X}_{\mathrm{tr}}^{-T}\)), for sparse probe concept detection on layer3. High-low frequency probe results averaged over 5 base models, with entirely overlapping std. deviation confidence intervals shown (probes at different \(T\) for the same base model obtained the same performance due to the small size of \(\mathcal{X}_{\mathrm{val}}^{c}\)).**
Figure 5: Dependence of concept detection validation accuracy of sparse concept probes on their sparsity \(k\).
### Sparse Concept Probing and Attribution
How does forcing probes to be sparse affect concept detection? Forcing a probe to be sparse (at most \(k\) of the CAV elements are non-zero) allows for even more interpretable concept directions (Gurnee et al., 2023). To evaluate this for our concepts, we trained probes with a range of sparsities on different layers of \(f\), using an approach similar to iterative hard thresholding (Jin et al., 2016). After the first half of the training epochs, we set all but the \(k\) parameters of highest absolute value to zero, freeze the zeroed parameters so they receive no further updates, and then continue training.
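The thresholding scheme can be sketched as follows, on toy data where, by construction, only three of the features carry the concept; the data, sizes, and learning rate are our own assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy activations where only features 0, 1, 2 carry the concept signal.
d, k = 64, 3
X = rng.normal(size=(2000, d))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(float)

w, b, mask = np.zeros(d), 0.0, np.ones(d)
for epoch in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * mask * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)
    if epoch == 99:                     # halfway through training:
        keep = np.argsort(np.abs(w))[-k:]
        mask = np.zeros(d)              # freeze everything except the top-k
        mask[keep] = 1.0
        w *= mask                       # zero all other parameters

acc = np.mean(((X @ w + b) > 0) == y)
```

Masking the gradient (rather than re-thresholding every step) is what keeps the zeroed parameters frozen for the second half of training.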
In Fig 5 we show how the concept detection validation accuracy changes with probe sparsity \(k\) at multiple layers, for both concepts. Reasonably, probe accuracy typically increases with \(k\), and we see a similar relative accuracy ranking of different layers as in the non-sparse case (Fig. 3). We see that the concepts are both somewhat learnable with sparse probes, but not nearly to the degree of the non-sparse probes (Fig. 3).
Fig. 6 displays the highest- and lowest-attributed images for probes of varying sparsity \(k\), for each concept at layer3 (compare with the attributions of non-sparse probes in Fig. 2). For the snakes concept, the attributions are different than those for the non-sparse probe, and yet very similar among the sparse probes. Interestingly, we see that almost all of the highest- or lowest-attributed images possess a "honeycomb"-like texture which appears similar to snake scales. For the high-low frequencies concept, we see that the \(k=1\) sparse probe obtained the same attributions as the non-sparse probe, yet the probes with more non-zero entries both obtained the same distinct attributions, which also appear to contain examples of the concept.
## 5 Limitations
In order to experimentally vary factors including model layer, concept and concept training data, we were forced to restrict other variables. We only experiment with ResNet18 image classifiers trained on ImageNet10p and two concept datasets (snakes and high-low frequency) -- adding additional base models and concepts would increase the breadth of this study. Another interesting future direction would be conducting analogous experiments on a different modality (e.g., natural language) or task. Finally, we only use TRAK for data attribution, and it would be interesting to know the extent to which our experimental results are particular to TRAK.
## Conclusion
In this paper we explored the importance of individual datapoints in concept learning. We found evidence of "convergence" in several senses, including stability under removal of exemplars and across independent training runs. Although more extensive experiments are needed (with better aggregation methods), our results suggest a robustness to the way that concepts are learned and stored in a deep learning model.
Figure 6: **Training set attributions for sparse probe concept learning. The four highest and two lowest attributed training set images (decreasing \(\tau_{c}(x_{\text{tr}})\) from left to right) for concept learning at layer3 for probes of different sparsity \(k\). Left half: snakes concept. Right half: high-low frequencies concept.**
## Acknowledgments
This research was supported by the Mathematics for Artificial Reasoning in Science (MARS) initiative at Pacific Northwest National Laboratory. It was conducted under the Laboratory Directed Research and Development (LDRD) Program at Pacific Northwest National Laboratory (PNNL), a multiprogram National Laboratory operated by Battelle Memorial Institute for the U.S. Department of Energy under Contract DE-AC05-76RL01830.
The authors would also like to thank Andrew Engel for useful conversations and feedback related to the paper.
# Noise impact on recurrent neural network with linear activation function

V. M. Moskvitin, N. Semenova

arXiv:2303.13262v1, submitted 2023-03-23, http://arxiv.org/abs/2303.13262v1
###### Abstract
Over the past few years, artificial neural networks (ANNs) have found their application in solving many problems from pattern recognition to predicting climate phenomena. Despite the existence of high-power computing clusters with the ability to make parallel calculations, neural network modeling on digital equipment is a bottleneck in network scaling and the speed of obtaining and processing information. In recent years, more and more researchers in the field of neural networks are interested in creating hardware implementations where neurons and the connection between them are realized physically.
The physical implementation of an ANN fundamentally changes the features of noise influence. In the case of hardware ANNs, there are many internal sources of noise with different properties. The purpose of this paper is to study the peculiarities of internal noise propagation in a recurrent ANN, using the echo state network (ESN) as an example, to reveal ways to suppress such noise, and to justify the stability of networks to some types of noise.
In this paper we analyse the ESN in the presence of uncorrelated additive and multiplicative white Gaussian noise. Here we consider the case when artificial neurons have a linear activation function with different slope coefficients. Starting from studying only one noisy neuron, we complicate the problem by considering how the input signal and the memory property affect the accumulation of noise in the ESN. In addition, we consider the influence of the main types of coupling matrices on the accumulation of noise. As such matrices, we take a uniform matrix and diagonal-like matrices with different values of a coefficient called the "blurring" coefficient.
We have found that the general behavior of the variance and signal-to-noise ratio of the ESN output signal is similar to that of a single neuron. The noise is accumulated less in an ESN with a diagonal-like reservoir connection matrix with a large "blurring" coefficient. This especially concerns uncorrelated multiplicative noise.
## 1 Introduction
Over the past few years, artificial neural networks (ANNs) have been applied in solving many problems [1]. Such tasks include image recognition [2, 3], image classification, improvement of sound recordings, speech recognition [4], prediction of climatic phenomena [5] and many others.
The basic principle of ANN construction is signal propagation between neurons using connections with some coefficients. The greatest efficiency and speed can be achieved by parallelizing calculations on high-performance computing clusters. However, the bottleneck is then the speed of memory access and data processing. The maximum performance of calculations can be achieved only if the ANN is completely hardware-implemented. In this case, the problem of memory access and mathematical operations over a large amount of data disappears, since each neuron corresponds to an analog nonlinear component, and each connection to a physical connection channel.
In recent years, there has been an exponential increase in work with hardware implementations of ANNs. Currently, the most effective ANNs are based on lasers [6], memristors [7], and spin-torque oscillators [8]. Connection between neurons in optical ANN implementations is based on the principles of holography [9], diffraction [10, 11], integrated networks of Mach-Zender modulators [12], wavelength division multiplexing [13], and 3D printed optical interconnects [14, 15, 16]. Recently, the so-called photonic neural networks are gaining popularity [17, 18].
The physical implementation of an ANN fundamentally changes the features of noise influence. In the case of a digital computer implementation of an ANN, noise can enter the system exclusively with the input signal, whereas in an analog ANN there are many internal sources of noise with different properties. The purpose of this paper is to study the peculiarities of internal noise propagation in recurrent ANNs, to reveal ways to suppress such noise, and to justify the stability of networks to some types of noise.
In our previous studies we focused on the effects of additive and multiplicative, correlated and uncorrelated noise on deep neural networks [19, 20]. Several models of varying complexity were considered. General features depending on the nonlinear activation function and the depth of the ANN were shown for simplified symmetric ANNs with global uniform connectivity between layers. All the findings and results were then validated for three trained deep ANNs used for number recognition, classification of clothing images, and prediction of chaotic realizations. Using the analytical methods described in Ref. [20], several noise reduction strategies were proposed in our next study [21].
In this work, we make the noise study more challenging by taking time dependence into account. In contrast to the previously considered deep neural networks, here we focus on a recurrent neural network, using the echo state network (ESN) as an example. This network consists of three main parts: 1 - the input layer, which receives the input signal and transmits it to the next layer; 2 - a single layer called the reservoir, whose state depends both on the input signal at the current moment and on the reservoir states at previous times; 3 - the output layer, which produces the final output signal. Such networks are often used to work with signals that are highly dependent on time, for example, prediction of chaotic temporal realizations, speech recognition, etc.
## 2 System under study
### Noise types
In this paper we consider only white Gaussian noise with zero mean and some constant dispersion \(D\). The noise values are different for each neuron at each time step, so the noise is uncorrelated in time and across the network. Mathematically speaking, it is introduced into each artificial neuron according to the noise operator \(\mathbf{\hat{N}}\) as
\[y_{i}(t)=\mathbf{\hat{N}}x_{i}(t)=x_{i}(t)\cdot(1+\xi_{M}(t,i))+\xi_{A}(t,i), \tag{1}\]
where \(x_{i}\) and \(y_{i}\) are the noise-free and noisy outputs of the \(i\)th artificial neuron, respectively, and \(\xi\) are sources of white Gaussian noise with zero mean. The indices 'A' and 'M' indicate the noise types, namely additive (A) and multiplicative (M) noise with noise dispersions \(D_{A}\) and \(D_{M}\). As can be seen from (1), the additive noise is added to the noise-free output, while the multiplicative noise multiplies it. The part \((1+\ldots)\) is needed to keep the useful signal. The notation of the noise operator \(\mathbf{\hat{N}}\) will be used further to indicate which outputs of neurons become noisy.
The noise dispersions will be fixed throughout the paper as \(D_{A}=D_{M}=10^{-2}\). This order of values corresponds to what we have previously obtained in an RNN realized in an optical experiment [6, 19].
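The noise operator of Eq. (1) can be sketched directly in NumPy; the sampling-based check at the end is our own illustration, not from the paper. Noise is drawn independently on every call, matching the assumption of noise uncorrelated over neurons and time.

```python
import numpy as np

rng = np.random.default_rng(4)
D_A = D_M = 1e-2   # noise dispersions used throughout the paper

def noisy(x):
    """Noise operator N-hat of Eq. (1): y = x*(1 + xi_M) + xi_A,
    with xi drawn independently for each neuron and time step."""
    xi_M = rng.normal(0.0, np.sqrt(D_M), size=np.shape(x))
    xi_A = rng.normal(0.0, np.sqrt(D_A), size=np.shape(x))
    return x * (1.0 + xi_M) + xi_A

# Repeating one noise-free output many times exposes the statistics:
# the mean is preserved, and the variance is D_A + D_M * x^2.
x = 0.5
samples = noisy(np.full(100_000, x))
```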
### Recurrent neural network
There are many different types of neural networks. Their topology and neuron type strongly depend on the signal type and the features of the tasks being solved. If the network is trained to work with signals changing in time, as in speech recognition, prediction of chaotic phenomena, etc., then the neural network must have the property of memory. Here recurrent neural networks (RNNs) come to the aid. In RNNs, some of the neurons keep a memory of their previous states. In this paper we consider an echo state network (ESN), schematically shown in Fig. 1, as an example of an RNN. This network contains input and output neurons (orange) and a hidden layer with multiple neurons called the reservoir (gray). The connectivity and weights of neurons inside the reservoir \(\mathbf{W}^{\text{res}}\) are usually fixed and randomly assigned. The output connection matrix \(\mathbf{W}^{\text{out}}\) is varied during the training process to make the network produce correct responses to certain input signals.
In this paper we are mainly interested in the impact of noise on the reservoir part of the network; therefore, only the gray neurons are subject to noise.
In accordance with the notations in Fig. 1, the input signal \(x^{\text{in}}\) passes through the input neuron and comes to the reservoir neurons coupled via the input matrix \(\mathbf{W}^{\text{in}}\) of size \((1\times N)\), where \(N\) is the number of neurons inside the reservoir. It is fixed throughout the paper as \(N=100\). The reservoir neurons are connected to their previous states via the connection matrix \(\mathbf{W}^{\text{res}}\) of size \((N\times N)\). Thus, the equation of the reservoir neurons is
\[\mathbf{x}^{\text{res}}_{t}=f(\beta x^{\text{in}}_{t}\cdot\mathbf{W}^{\text{ in}}+\gamma\mathbf{y}^{\text{res}}_{t-1}\cdot\mathbf{W}^{\text{res}}); \hskip 28.452756pt\mathbf{y}^{\text{res}}_{t}=\mathbf{\hat{N}}\mathbf{x}^{ \text{res}}_{t}, \tag{2}\]
where \(f(\cdot)\) is the activation function of reservoir neurons. The type of activation
Figure 1: Schematic representation of considered recurrent neural network.
function often depends on the current task. In this paper we are mainly focused on the linear activation function \(f(x)=\alpha x\), since linear and piecewise linear functions are often used in RNNs. A nonlinear activation function can lead to completely different dynamics and noise accumulation, and will therefore be the subject of another study.
The index \(t\) corresponds to the current time moment, while \((t-1)\) in the term \(\mathbf{y}^{\mathrm{res}}_{t-1}\) indicates that the outputs of the reservoir neurons are taken from the previous time moment. The bold font used for \(\mathbf{x}^{\mathrm{res}}_{t}\) and \(\mathbf{y}^{\mathrm{res}}_{t}\) indicates that they are row-vectors of size \((1\times N)\).
Parameters \(\beta\) and \(\gamma\) control the impact of input signal (\(\beta\)) and memory (\(\gamma\)). In order to keep the same range of the output signals, the condition \(\beta+\gamma=1\) is imposed on them.
The output of the network comes from the output neuron connected with reservoir via connection matrix \(\mathbf{W}^{\mathrm{out}}\) of size (\(N\times 1\)):
\[x^{\mathrm{out}}=\mathbf{y}^{\mathrm{res}}_{t}\cdot\mathbf{W}^{\mathrm{out}}. \tag{3}\]
In order to see the pure impact and statistics of noise, the output connection matrix \(\mathbf{W}^{\mathrm{out}}\) is fixed and uniform with elements \(1/N\). The input connection matrix \(\mathbf{W}^{\mathrm{in}}\) is responsible for sending an input signal to the reservoir. That is why its values are set to be unity.
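Eqs. (2)-(3) with the fixed matrices described above can be simulated directly. The sketch below uses the paper's parameters (\(N=100\), linear activation, \(D_A=D_M=10^{-2}\), a uniform readout), while the choice \(\beta=\gamma=0.5\) and the random input signal are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 100, 200                      # reservoir size and signal length
alpha, beta, gamma = 1.0, 0.5, 0.5   # f(x) = alpha*x; beta + gamma = 1
D_A = D_M = 1e-2

W_in = np.ones(N)                    # input weights set to unity
W_res = np.full((N, N), 1.0 / N)     # uniform reservoir matrix
W_out = np.full(N, 1.0 / N)          # fixed uniform readout

x_in = rng.uniform(-1.0, 1.0, size=T)
y_res = np.zeros(N)
x_out = np.empty(T)
for t in range(T):
    # Eq. (2): linear activation, then the noise operator on each neuron
    x_res = alpha * (beta * x_in[t] * W_in + gamma * (y_res @ W_res))
    xi_M = rng.normal(0.0, np.sqrt(D_M), N)
    xi_A = rng.normal(0.0, np.sqrt(D_A), N)
    y_res = x_res * (1.0 + xi_M) + xi_A
    x_out[t] = y_res @ W_out         # Eq. (3): readout
```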
## 3 One noisy neuron
As a first step, let us consider how noise impacts one isolated neuron with a linear activation function. The neuron receives the input signal \(x^{\mathrm{in}}_{t}\) and produces the noise-free signal \(x^{\mathrm{out}}_{t}=f(x^{\mathrm{in}}_{t})\), which becomes \(y^{\mathrm{out}}_{t}=\hat{\mathbf{N}}x^{\mathrm{out}}_{t}\) under the noise influence. The input signal contains \(T=200\) random values from \(-1\) to \(1\).
Figure 2 shows the impact of additive (green), multiplicative (orange) and mixed (blue) noise.
In order to see the noise impact we will use the following two characteristics. The first one is the dispersion of the output signal, calculated in the following way. Each input signal \(x^{\mathrm{in}}_{t}\) is repeated \(K=1000\) times to get the statistics of output values. The obtained sequence of noisy output values is then used to compute the corresponding mean value \(\mu(y^{\mathrm{out}}_{t})\) and dispersion \(\sigma^{2}(y^{\mathrm{out}}_{t})\). The dispersion for different noise types is given in Fig. 2a.
The second characteristic is the signal-to-noise ratio (SNR), showing the relation between the mean output value and its variance or dispersion. In our previous papers [19, 20] we calculated a characteristic similar to the SNR, which worked only for positive input and output values. Now we consider both positive and negative values and therefore use the more common form of SNR, calculated as \(\mathrm{SNR}[y^{\mathrm{out}}]=\mu[y^{\mathrm{out}}]/\sigma^{2}[y^{\mathrm{out}}]\) (see Ref. [22]). The SNR for different noise types is given in Fig. 2b.
As can be seen from Fig. 2a, the additive noise (green points) leads to a constant level of dispersion which does not depend on the input and output signal. The corresponding dispersion can be found as the variance of a random signal:
\[\sigma^{2}[y^{\mathrm{out}}_{t}]=\mathrm{Var}[y^{\mathrm{out}}_{t}]=\mathrm{ Var}[f(x^{\mathrm{in}}_{t})+\xi_{A}]=\mathrm{Var}[\xi_{A}]=D_{A}. \tag{4}\]
In the case of multiplicative noise, the variance becomes completely different (orange). There is a quadratic relationship between mean output value and its variance. In terms
of expectation value and variance, the variance of the output signal of one neuron with multiplicative noise can be found as follows
\[\mathrm{Var}[y_{t}^{\mathrm{out}}]=\mathrm{Var}[f(x_{t}^{\mathrm{in}})\cdot(1+ \xi_{M})]=(\mathrm{E}[f(x_{t}^{\mathrm{in}})])^{2}\cdot\mathrm{Var}[\xi_{M}]=D_{ M}\cdot(\mathrm{E}[y_{t}^{\mathrm{out}}])^{2}, \tag{5}\]
where \(\mathrm{E}[\cdot]\) is the expectation value. The expectation value of the output signal is \(\mathrm{E}[y_{t}^{\mathrm{out}}]=\mathrm{E}[x_{t}^{\mathrm{out}}]=\mathrm{E}[ f(x_{t}^{\mathrm{in}})]\).
In the case of a linear activation function the dispersion strongly depends on the parameter \(\alpha\):

\[\sigma^{2}[y_{t}^{\mathrm{out}}]=\mathrm{Var}[y_{t}^{\mathrm{out}}]=\mathrm{ Var}[f(x_{t}^{\mathrm{in}})\cdot(1+\xi_{M})]=D_{M}(\alpha x_{t}^{\mathrm{in}})^{2}. \tag{6}\]
Mixed noise (blue points in Fig. 2) combines features of both additive and multiplicative noise. Thus, the dispersion (panel a) is the sum of the additive and multiplicative variances. For this reason, the SNR in the case of mixed noise with \(D_{A}=D_{M}\) is reduced by a factor of two (panel b).
The impact of the \(\alpha\)-parameter on the SNR and dispersion is shown in Fig. 3, confirming Eqs. (4) and (6). The input signal and \(\alpha\) do not change the dispersion in the case of additive noise (Fig. 3a) or the SNR in the case of multiplicative noise (Fig. 3e). The parameter \(\alpha\) has no impact on the SNR with additive noise (Fig. 3d), while both \(\alpha\) and the input signal change the dispersion in the case of multiplicative noise (Fig. 3b).
## 4 ESN with uniform connection matrix
In this section, we focus on the interplay of input signal and memory. In order to see pure noise accumulation without the impact of the connectivity matrices, we consider a uniform connection matrix in the reservoir: \(W_{ij}^{\mathrm{res}}=1/N\).
Figure 2: Dispersion \(\sigma^{2}\) (panel a) and SNR (panel b) calculated for the output of one noisy neuron with only additive noise (green points), only multiplicative noise (orange) and mixed noise (blue), depending on the corresponding mean output value. The dispersions of the noise sources are \(D_{A}=D_{M}=10^{-2}\). The neuron has a linear activation function with \(\alpha=1\).
As a first step, we set \(\gamma=0\), when the reservoir has no memory, and the state of reservoir depends only on the input signal. According to Sect. 2.2, if \(\gamma=0\), then \(\beta=1\).
If the memory property is turned off, the time dependence of the input signal is irrelevant. Therefore, we use the same random input signal from \(-1\) to \(1\) as in the previous section.
Figure 4a shows the dispersions calculated from the output signal of the ESN with \(\gamma=0\) for additive (green) and multiplicative (orange) noise. The general view of these dependencies is the same as for one noisy neuron; the difference is the range of the values. Comparing Figs. 2a and 4a, the dispersion of the ESN's output is reduced by a factor of 100. Thus, the final output signal becomes less noisy.
This can be explained as follows. We introduce the noise only to the neurons of the reservoir. In the case of only additive noise, their dispersion can be calculated according to Eq. (4). If \(\gamma=0\), then the output signal can be calculated as \(x_{t}^{\rm out}=\frac{1}{N}\sum\limits_{j=1}^{N}x_{t,j}^{\rm res}\). Then the output dispersion and variance for additive noise is
\[{\rm Var}[x_{t}^{\rm out}]=\Big{(}\frac{1}{N}\Big{)}^{2}\sum\limits_{j=1}^{N} {\rm Var}[\alpha x_{t}^{\rm in}+\xi_{A}(j,t)]=\Big{(}\frac{1}{N}\Big{)}^{2}\sum \limits_{j=1}^{N}{\rm Var}[\xi_{A}(j,t)]=\frac{1}{N}{\rm Var}[\xi_{A}]=\frac{D _{A}}{N}. \tag{7}\]
Comparing this equation with Eq. (4), the variance is reduced by a factor of \(N=100\).
Figure 3: Dispersion (panels a–c) and SNR (panels d–f) calculated for the output of one noisy neuron with only additive noise (a,d), only multiplicative noise (b,e) and mixed noise (c,f), depending on the input value \(x^{\rm in}\) and the parameter \(\alpha\). The dispersions of the noise sources are \(D_{A}=D_{M}=10^{-2}\).
Therefore, the dispersion level is reduced from \(10^{-2}\) for one neuron to \(10^{-4}\) for the ESN.
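The \(1/N\) reduction in Eq. (7) can be reproduced numerically with a minimal sketch (assuming independent Gaussian additive noise in each reservoir neuron and the uniform read-out \(x^{\rm out}_{t}=\frac{1}{N}\sum_{j}x^{\rm res}_{t,j}\)):

```python
import numpy as np

rng = np.random.default_rng(1)
N, D_A, alpha, T = 100, 1e-2, 1.0, 50_000

x_in = rng.uniform(-1.0, 1.0, T)                  # random input, as in the text
xi_A = rng.normal(0.0, np.sqrt(D_A), (T, N))      # independent noise per neuron

x_res = alpha * x_in[:, None] + xi_A              # gamma = 0: no memory
x_out = x_res.mean(axis=1)                        # uniform read-out, W_ij = 1/N

out_disp = (x_out - alpha * x_in).var()           # should be D_A / N = 1e-4
```

The averaged noise dispersion comes out at \(D_A/N=10^{-4}\), the value seen in Fig. 4a.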
Now we turn on the memory property by setting \(\gamma=\beta=0.5\). The sequence of the input signal then becomes important. To keep the same range of signal and include growing and decreasing input values, we use a sine function as the input (Fig. 4e). Figure 4b shows the dispersions for this case. From this point on we plot these characteristics as functions of time \(t\) to underline the peculiarities of the input signal. Comparing the scales of dispersion in panels a and b, one can see that the dispersion grows with increasing \(\gamma\), as the noise is now accumulated in the reservoir.
Another thing that should be pointed out is the form of these dependencies. Comparing Figs. 4a and 4c, one might get the erroneous impression that the general form of the dependencies has changed a lot. However, this is not so. If we change the dependence on time in panels (c,d) to the dependence on the mean output value as before, then the general view is exactly the same as in Fig. 4a,b.
Figure 4: Dispersion (a,b) calculated for the output of ESN with \(\gamma=0\) (left panels) and \(\gamma=0.5\) (right panels). The input signal (c) was used when \(\gamma\neq 0\). Parameters: \(\alpha=1\), \(\beta=1-\gamma\), \(D_{A}=D_{M}=10^{-2}\).
If \(\gamma\) grows further up to \(0.9\), the maximal dispersion level increases to \(\approx 5\cdot 10^{-4}\). The general view of the dispersion dependencies remains the same as shown in Fig. 4b. In the case of additive noise the dispersion dependency is almost constant, while for multiplicative noise it looks like a doubled sine function and covers a range from zero to some maximal level depending on \(\gamma\). In order to reveal the impact of the parameter \(\gamma\), Fig. 5 shows how the mean dispersion level for additive noise and the dispersion range for multiplicative noise change depending on the parameter \(\gamma\).
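The growth of the output dispersion with \(\gamma\) can be illustrated with a small simulation. The leaky update used below, \(x_{t}=\gamma W^{\rm res}x_{t-1}+\xi_{A}\), is our assumption about the reservoir dynamics (the exact update rule is defined earlier in the paper), and the input is set to zero, since for additive noise the accumulated dispersion does not depend on it (Eq. 4). With uniform \(W^{\rm res}_{ij}=1/N\) this predicts a stationary output dispersion of \((D_{A}/N)/(1-\gamma^{2})\), i.e. \(\approx 5\cdot 10^{-4}\) at \(\gamma=0.9\), consistent with the value quoted above:

```python
import numpy as np

rng = np.random.default_rng(2)
N, D_A, T = 100, 1e-2, 20_000

def output_noise_dispersion(gamma):
    """Dispersion of the averaged reservoir state, driven by additive noise only."""
    x = np.zeros(N)
    out = np.empty(T)
    for t in range(T):
        # assumed leaky update; uniform W_res means (W_res x)_i = mean(x)
        x = gamma * x.mean() + rng.normal(0.0, np.sqrt(D_A), N)
        out[t] = x.mean()
    return out[1000:].var()          # discard the transient

d0  = output_noise_dispersion(0.0)   # expected D_A / N = 1e-4
d09 = output_noise_dispersion(0.9)   # expected (D_A/N)/(1 - 0.81), roughly 5.3e-4
```

The factor-of-five growth between \(\gamma=0\) and \(\gamma=0.9\) matches the behavior reported for Fig. 5.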
A uniform reservoir connection matrix \(\mathbf{W}^{\mathrm{res}}\) may look like a rather degenerate case. However, this matrix is sometimes set randomly and is not changed during the training process. According to our previous studies [20], in terms of network noise accumulation, such uniform connectivity can be interpreted as a matrix with random values and a mean value of \(1/N\). Thus, the conclusion that the smallest level of dispersion is obtained with weak reservoir memory and then grows exponentially with the memory parameter \(\gamma\) also holds for a uniform random \(\mathbf{W}^{\mathrm{res}}\).
## 5 ESN with diagonal-like connection matrix
As mentioned in the previous section, the reservoir connection matrix \(\mathbf{W}^{\mathrm{res}}\) can be set once and left unchanged during the training process. Usually, it is set to be uniform or diagonal-like [23]. In this section we consider the latter type of network. Figure 6a,d shows the diagonal matrices that we will use in the reservoir. Both networks are set with some _blurring_ coefficient \(\zeta\). We set this "blurring" effect using a Gaussian function. For example, in Fig. 6a this coefficient is set to \(\zeta=2\), meaning that the main diagonal and two terms on either side of the main diagonal are set according to a Gaussian function, while the rest are set to zero. In order to keep the same range of values, we make the sum of the elements in each row and column of the matrix \(\mathbf{W}^{\mathrm{res}}\)
Figure 5: Mean dispersion level (dashed line) for additive noise and dispersion range for multiplicative noise depending on parameter \(\gamma\). Other parameters: \(\alpha=1\), \(\beta=1-\gamma\), \(D_{A}=D_{M}=10^{-2}\).
equal to one, as before. Therefore, the non-zero elements of this matrix are set as follows:
\[W_{k,i}^{\rm res}=\frac{e^{-((k-i)^{2}/\zeta^{2})}}{\sum\limits_{j=-\zeta}^{\zeta}e^{-(j^{2}/\zeta^{2})}},\ \ \ \ k\in[i-\zeta;i+\zeta]. \tag{8}\]
In this section we will consider two diagonal-like matrices \(\mathbf{W}^{\rm res}\) with two "blurring" coefficients \(\zeta=2\) (Fig. 6a) and \(\zeta=20\) (Fig. 6d).
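A possible construction of such a matrix is sketched below (our reading of Eq. (8): a Gaussian profile of width \(\zeta\) around the main diagonal, normalized so that a full, untruncated column sums to one; the edge handling is an assumption):

```python
import numpy as np

def diagonal_reservoir(N, zeta):
    """Diagonal-like W_res with Gaussian 'blurring' coefficient zeta, cf. Eq. (8)."""
    norm = sum(np.exp(-j**2 / zeta**2) for j in range(-zeta, zeta + 1))
    W = np.zeros((N, N))
    for i in range(N):                                  # column index
        for k in range(max(0, i - zeta), min(N, i + zeta + 1)):
            W[k, i] = np.exp(-(k - i)**2 / zeta**2) / norm
    return W

W2 = diagonal_reservoir(100, 2)
col_sum = W2[:, 50].sum()    # an interior (untruncated) column sums to one
```

With \(\zeta=2\), each column carries five non-zero Gaussian weights centered on the diagonal, as in Fig. 6a.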
Figures 6b,e show the dispersion of the output signal for the reservoir connection matrices with \(\zeta=2\) and \(\zeta=20\) given in the corresponding left panels, with the parameter \(\gamma=0.8\). There is a clear difference between the dispersion in the case of diagonal matrices and the dispersion obtained for uniform matrices (Fig. 4b). The form of the \(\sigma^{2}\)-dependencies for additive and multiplicative noise remains almost the same, but there is a clear quantitative difference. In the case of the uniform connection matrix and the diagonal matrix with small \(\zeta\) (Fig. 6b), the maximal value of the dispersion for multiplicative noise coincides with its
Figure 6: Dispersion of the output signal of ESN with diagonal matrices \(\mathbf{W}^{\rm res}\). Panels a and d show the connection matrices with \(\zeta=2\) and \(\zeta=20\), respectively. Panel b shows the dispersion for additive (green) and multiplicative (orange) noise in the case of \(\zeta=2\). Panel c shows how this dispersion changes depending on \(\gamma\). Panels e,f are the same but for \(\zeta=20\). Other fixed parameters: \(\alpha=1\), \(\gamma=0.8\), \(\beta=1-\gamma\), \(D_{A}=D_{M}=10^{-2}\).
mean value for additive noise. These dependencies move away from each other faster for the diagonal matrix (Fig. 6c) compared with the uniform matrix (Fig. 5).
These dependencies drift apart faster for the larger "blurring" coefficient \(\zeta=20\) (see Figs. 6e,f).
Comparing the impact of additive noise in Figs. 5 and 6c, one can see that the dispersions for additive and multiplicative noise as functions of \(\gamma\) are very similar for the diagonal matrix with a small "blurring" coefficient (Fig. 6c) and the uniform connection matrix (Fig. 5). Moreover, less noise is accumulated for a large "blurring" coefficient \(\zeta\) (Fig. 6f): for large \(\gamma\), the final dispersion level decreases with growing \(\zeta\). This difference is clearer for multiplicative noise. Comparing the dispersion ranges in panels c and f of Fig. 6, one can see that the multiplicative dispersion level for large \(\zeta=20\) is much lower than for small \(\zeta=2\).
## Conclusion and discussion
In this paper we have studied the impact of uncorrelated additive and multiplicative noise on an echo state network. The noise was added only to neurons inside the reservoir. These neurons had a linear activation function. To analyse the output level of noise we mainly used the dispersion and the signal-to-noise ratio derived from it.
The parameter \(\alpha\) controlling the slope of the activation function has no impact on the accumulation of additive noise. At the same time, the dispersion for multiplicative noise depends quadratically on \(\alpha\) and the input signal. The dispersion can be predicted analytically by Eqs. (4)–(6). In our next studies we plan to consider other types of activation functions, such as sigmoid and piecewise linear activation functions.
ESNs are usually set with a random uniform reservoir connection matrix or a diagonal-like matrix, which is not changed during the training process. Therefore, in this paper we considered both types of connection matrices and studied the impact of memory on the accumulation of noise. We found that noise is accumulated less in an ESN with a diagonal reservoir connection matrix \(\mathbf{W}^{\mathrm{res}}\) with a large "blurring" coefficient; this especially concerns uncorrelated multiplicative noise. The accumulation of noise with a uniform \(\mathbf{W}^{\mathrm{res}}\) is almost the same as with a diagonal \(\mathbf{W}^{\mathrm{res}}\) with a small "blurring" coefficient.
## Acknowledgements
This work was supported by the Russian Science Foundation (Project no. 21-72-00002).
|
2308.14938 | Entropy-based Guidance of Deep Neural Networks for Accelerated
Convergence and Improved Performance | Neural networks have dramatically increased our capacity to learn from large,
high-dimensional datasets across innumerable disciplines. However, their
decisions are not easily interpretable, their computational costs are high, and
building and training them are not straightforward processes. To add structure
to these efforts, we derive new mathematical results to efficiently measure the
changes in entropy as fully-connected and convolutional neural networks process
data. By measuring the change in entropy as networks process data effectively,
patterns critical to a well-performing network can be visualized and
identified. Entropy-based loss terms are developed to improve dense and
convolutional model accuracy and efficiency by promoting the ideal entropy
patterns. Experiments in image compression, image classification, and image
segmentation on benchmark datasets demonstrate these losses guide neural
networks to learn rich latent data representations in fewer dimensions,
converge in fewer training epochs, and achieve higher accuracy. | Mackenzie J. Meni, Ryan T. White, Michael Mayo, Kevin Pilkiewicz | 2023-08-28T23:33:07Z | http://arxiv.org/abs/2308.14938v2 | # Entropy-based Guidance of Deep Neural Networks for Accelerated Convergence and Improved Performance
###### Abstract
Neural networks have dramatically increased our capacity to learn from large, high-dimensional datasets across innumerable disciplines. However, their decisions are not easily interpretable, their computational costs are high, and building and training them are uncertain processes. To add structure to these efforts, we derive new mathematical results to efficiently measure the changes in entropy as fully-connected and convolutional neural networks process data, and introduce entropy-based loss terms. Experiments in image compression and image classification on benchmark datasets demonstrate these losses guide neural networks to learn rich latent data representations in fewer dimensions, converge in fewer training epochs, and achieve better test metrics.
## 1 Introduction
In the past 15 years, neural networks have revolutionized our capabilities in computer vision using CNNs [1, 2] and vision Transformers [3], natural language processing [4, 5, 6] with Transformers [7], synthetic data generation with GANs [8] and diffusion models [9], and innumerable other domains.
While neural networks provide immense ability across innumerable applications, their decision processes are difficult for humans to decipher. More interpretable models could provide actionable rationale for model architecture decisions, increase model efficiency, facilitate better collaboration with domain experts, and lead to a stronger understanding of features and their importance. In addition, interpretability would build the trust in neural networks that is critical for high-stakes use cases impacting human lives. From health care to national security, the ability to determine how these models make their decisions is crucial.
Complex neural decision processes create further complications. Neural networks often learn from very high dimensional datasets, optimize millions to billions of parameters during training, and propagate data through huge architectures during inference. Hence, training and deploying these large models is demanding from a computational perspective, while explaining their decisions proves even more difficult.
Information theory demonstrates the potential to provide probabilistic explanations of how neural networks process data and make decisions. This can allow us to interpret model decisions, get more from smaller neural architectures to reduce data processing complexity, and optimize them more efficiently. In addition, analysis of patterns in information flow through neural networks provides a pathway to understand what trends during training and/or inference promote good
performance. This leads to information-theoretic metrics to consider and an array of hyperparameters to encourage ideal behavior.
This article seeks to explore these opportunities. We first derive novel probabilistic results for measuring entropy propagation through critical structures within neural networks: fully-connected and 2D convolutional layers. We then use these results to guide the training of neural networks, leading to better convergence times and model performance.
The main contributions of this work include:
* Development of mathematical formulas for the entropy propagation through dense and convolutional layers.
* Construction of novel entropy-based loss terms for both dense and convolutional layers that exploit the entropy change formula to enable entropy-based guidance of neural network training.
* An empirical analysis of entropy propagation patterns through well-trained supervised neural networks, providing an understanding of ideal information flows.
* Experimental demonstrations that entropy-based loss terms speed up convergence in image compression on MNIST and CIFAR10 and improve image classification performance on CIFAR10.
## 2 Related Work
**Information theoretic learning.** Principe et al. [10] address the idea of information-theoretic learning as a way of decoding how machines learn from data. In this work, they discuss a possible framework utilizing Renyi's quadratic entropy to train linear or nonlinear mappers with the goal of entropy minimization or maximization. In later works [11], the authors continue the search for explainability by proposing a stochastic entropy estimator based on Parzen window estimates. While both ideas show promise for entropy manipulation, the computational complexity of these methods hinders their practical applicability in real-time or resource-constrained environments.
Many additional methods have been suggested in the utilization of information theory to decode computer decisions and processes. Specific to deep learning models, information-theoretic loss functions have been a promising approach. Such loss functions include cross-entropy, F-divergence, MI losses, KL-divergence, and others derived from information theory [12].
**InfoMax.** Alternatively, InfoMax [13, 14, 15], short for Information Maximization, is a learning principle that aims to maximize the mutual information between the input data and some learned representation, typically a latent variable. The core idea is to design models that capture and retain as much information as possible from the input data in the learned representations, with the assumption that this information will be useful for downstream tasks like classification or generation. Deep InfoMax (DIM) [16] was created to estimate and maximize the MI between the input data and the latent representations. However, this has been shown to result in excessive and noisy information [12]. While DIM may lead to representations that capture more information from the input data, it might not guarantee that the learned representations are interpretable or meaningful to humans.
**Information Bottleneck.** One suggested method to mitigate such issues is the Information Bottleneck (IB) principle proposed by Tishby et al. [17]. The IB is an information-theoretic framework for learning representations in a way that balances the amount of information captured from the input data with its relevance to a target output. The main focus of IB is to extract a compressed and informative representation from the input data, retaining only the essential information needed to predict the target output [18]. For example, Xu et al. [19] recently used a "teacher" object detection Transformer model to distill knowledge into an efficient "student" quantized Transformer model. They use an IB-inspired idea to take alternating training steps that minimize the entropy of the student's latent representations conditioned on the teacher's while also maximizing the (unconditional) entropy of the student queries. While effective in such cases, IB assumes that the input-output relationship in the data can be accurately captured by a single target variable. This assumption might not hold for all real-world scenarios where complex relationships exist between multiple variables.
**Generative AI.** Information theory has also been incorporated into generative AI. Vincent et al. [20] show that minimizing an autoencoder's reconstruction error coincides with maximizing a lower bound on information, similar to InfoMax. They pair this with a denoising criterion to learn latent representations from which the decoder then generates synthetic data. InfoGAN [21] enhances the original generative adversarial network (GAN) loss with a term that maximizes a lower bound for the mutual information between a small subset of the latent variables and the observation. Unlike standard GANs, this provides interpretable latent variables that can be manipulated to generate synthetic data with specific properties (e.g. synthetic MNIST digits from specified classes or with specific rotation or boldness). These efforts demonstrate information-theoretic losses can enable rich, interpretable latent representations that avoid mode collapse for generative neural networks, suggesting their application in supervised domains.
**RLHF and ChatGPT.** Reinforcement learning with human feedback (RLHF) allows reward learning where the rewards are defined by human judgment and has been used in recent years to fine-tune language models to generate better synthetic text. Recent works [22, 23] use human preference labels on text generated by GPT-2 to develop a reward model and train a policy that generates higher-quality text as judged by human preferences. In the reward model, KL divergence between the policy and language is penalized, which serves as an entropy bonus that encourages the policy to explore and avoid collapsing modes in addition to preventing the model from generating text straying far from what the reward model saw during training to maintain coherency and topicality. InstructGPT [6] incorporated this approach into GPT-3.5. Here, information-theoretic loss terms permit meaningfully constrained explorations of the latent space, though the scale of these models (13-175B parameters) results in an extremely high-dimensional latent space that is difficult to interpret.
## 3 Probabilistic Results
In this section, we state some definitions and results from the information-theoretic literature. We then establish several new results that allow the application of these ideas to fully-connected feedforward neural networks (i.e. multilayer perceptrons or MLPs) and convolutional neural networks (CNN). These results enable tracking the evolution of entropy of data as it passes through a neural network.
First, we provide a definition of entropy for use in analyzing neural networks.
**Definition 1**.: The (joint) differential entropy of a continuous random variable \(X\) valued in \(\mathbb{R}^{d}\) with joint probability density function (pdf) \(f\) is
\[H(X)=\mathbb{E}\left[-\log f(X)\right]. \tag{1}\]
Unless otherwise noted, we will use the word _entropy_ to refer to joint differential entropy, and we use natural logarithms throughout the article. While the pdf \(f\) of the random variable \(X\) is assumed to exist, it is not assumed to be known. We almost never know the pdf of high-dimensional datasets for computer vision, NLP, or other domains where deep neural networks are effective.
We will represent dense and 2D convolutional neural network layers as matrix-vector products with invertible, constant matrices multiplying random input data or latent representations of that data. Next is a known formula for the entropy propagation of a matrix-vector product \(WX\), where \(X\) is a random vector and \(W\) is a constant matrix. This permits efficient estimation of the change in entropy as data propagates through the dense and convolutional layers of a neural network.
**Theorem 2**.: _(Cover and Thomas [24], Corollary to Theorem 8.6.4) Let \(X\) be a random variable valued in \(\mathbb{R}^{d}\) and constant \(W\in\mathbb{R}^{d\times d}\). If \(W\) is invertible, then the entropy of \(WX\) is_
\[H(WX)=H(X)+\log\left(\left|\det W\right|\right). \tag{2}\]
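Theorem 2 admits a quick numerical check using the closed-form entropy of a Gaussian, \(H(X)=\frac{1}{2}\log\left((2\pi e)^{d}\det\Sigma\right)\) (a sketch of ours, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
A = rng.normal(size=(d, d))
Sigma = A @ A.T + np.eye(d)        # a valid covariance matrix for X
W = rng.normal(size=(d, d))        # almost surely invertible

def gaussian_entropy(cov):
    """Closed-form differential entropy of a Gaussian with covariance cov."""
    dim = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** dim * np.linalg.det(cov))

H_X  = gaussian_entropy(Sigma)
H_WX = gaussian_entropy(W @ Sigma @ W.T)   # Cov[WX] = W Sigma W^T
gap  = H_WX - H_X                          # Eq. (2): equals log|det W|
```

The entropy gap agrees with \(\log|\det W|\) to machine precision, since \(\det(W\Sigma W^{T})=(\det W)^{2}\det\Sigma\).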
However, weight matrices in dense layers may not be invertible or even square, so it is unclear how to use this result to measure entropy propagation. Even worse, convolutions are typically not considered a matrix-vector product at all. The next two subsections offer remedies to these issues.
### Dense Layers
The following theorem is a novel result that computes the entropy of pre-activation values \(W^{\prime}X\) as the (unknown) input entropy of \(X\) plus an easily computable term.
**Theorem 3**.: _Suppose \(X:\Omega\rightarrow\mathbb{R}^{d}\) is a random vector and \(W\in\mathbb{R}^{\min(d,N)\times\min(d,N)}\), the square part of an \(N\times d\) weight matrix, is invertible. Then,_
\[H(W^{\prime}X)=H(X)+\log\left(\left|\det W\right|\right) \tag{3}\]
_where_
\[W^{\prime}=\begin{cases}\begin{pmatrix}W&W_{N\times(d-N)}\\ 0&I_{d-N}\end{pmatrix},&\text{if }N<d\\ W,&\text{if }N=d\\ \begin{pmatrix}W&0\\ W_{(N-d)\times d}&I_{N-d}\end{pmatrix},&\text{if }N>d\end{cases} \tag{4}\]
Proof.: We refer to \(W\) as the square part of \(W^{\prime}\). Note \(W^{\prime}\) is block upper diagonal, square, or block lower diagonal, depending on the input dimension \(d\) and output dimension \(N\). In any of the three cases,
\[\det\left(W^{\prime}\right)=\det\left(W\right).\]
Since \(W\) is invertible, \(\det\left(W^{\prime}\right)=\det\left(W\right)\neq 0\), so \(W^{\prime}\) is invertible. Then, Theorem 2 implies
\[H(W^{\prime}X)=H(X)+\log\left(\left|\det W^{\prime}\right|\right)=H(X)+\log \left(\left|\det W\right|\right).\]
This formula clarifies how entropy propagates from input to pre-activations within dense layers.
**Example 4**.: If a weight matrix has more columns than rows (\(3=N<d=5\) in this case), Theorem 3 suggests modifying it as:
\[\text{Weight Matrix}=\begin{pmatrix}3&0&9&-3&4\\ 1&5&-1&4&2\\ 0&4&-2&1&5\end{pmatrix}\rightarrow\left(\begin{array}{cccc|c}3&0&9&-3&4\\ 1&5&-1&4&2\\ 0&4&-2&1&5\\ \hline 0&0&0&1\\ \end{array}\right)=\begin{pmatrix}W&W_{3\times 2}\\ 0&I_{2}\end{pmatrix}=W^{\prime},\]
while a similar manipulation occurs when there are more rows than columns, with a block of zeros placed on the upper right and identity block placed on the lower right.
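The padding of Theorem 3 is straightforward to implement; the following sketch builds \(W^{\prime}\) for the \(N<d\) case of Example 4 and confirms \(\det W^{\prime}=\det W\):

```python
import numpy as np

weight = np.array([[3., 0, 9, -3, 4],
                   [1., 5, -1, 4, 2],
                   [0., 4, -2, 1, 5]])     # N = 3 rows, d = 5 columns

N, d = weight.shape                        # here N < d
W_square = weight[:, :N]                   # the invertible square part W

# Append (0 | I_{d-N}) below the full weight matrix, as in Eq. (4)
bottom  = np.hstack([np.zeros((d - N, N)), np.eye(d - N)])
W_prime = np.vstack([weight, bottom])

det_W      = np.linalg.det(W_square)       # = 18 for this matrix
det_Wprime = np.linalg.det(W_prime)        # identical, per Theorem 3
```

The block-triangular padding leaves the determinant, and hence the entropy change \(\log|\det W|\), unchanged.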
### 2D Convolutions
Consider a \(p\)-by-\(q\) convolutional filter \(C\). If \(C\) convolves with stride \(1\times 1\) over an input image \(X\in\mathbb{R}^{l\times w}\), then the output pre-activation map \(Z=C*X\) is defined as
\[Z_{ij}=\sum_{k=0}^{p-1}\sum_{l=0}^{q-1}C_{kl}X_{i+k,j+l} \tag{5}\]
for \(i=0,1,...,l-p\) and \(j=0,1,...,w-q\). While this is typically envisioned as the 2D convolutional filter \(C\) scanning over the image \(X\) to extract features, the operation can alternatively be represented as a matrix-vector product. We will establish \(Z_{F}=C_{M}X_{F}\), where \(X_{F}\) and \(Z_{F}\) are flattened versions of \(X\) and \(Z\), and \(C_{M}\) has a special structure constructed below.
First, we transpose the rows \(x_{i}^{T}\) of matrix \(X\in\mathbb{R}^{l\times w}\) into column vectors and then concatenate them into a single column vector \(X_{F}\in\mathbb{R}^{lw}\):
\[X_{F}=\texttt{flatten}(X)=\begin{pmatrix}x_{1}\\ \vdots\\ x_{l}\end{pmatrix}_{lw\times 1} \tag{6}\]
In addition, we will consider an arbitrary \(p\times q\) convolutional filter \(C\) made up of rows \(c_{1}^{T}\),..., \(c_{p}^{T}\). With these pieces, we show a numerical example to motivate the path to manipulating a 2D convolution operation into a matrix-vector product.
**Example 5**.: Suppose we have a small gray-scale image \(X\in\mathbb{R}^{4\times 4}\) and a convolutional filter \(C\in\mathbb{R}^{3\times 2}\), with the resulting feature map \(Z=C*X\):
\[X=\begin{pmatrix}3&4&1&2\\ 0&0&5&6\\ 2&1&0&3\\ 1&4&2&5\end{pmatrix}\qquad\qquad C=\begin{pmatrix}2&1\\ 4&3\\ -2&1\end{pmatrix}\qquad\qquad Z=C*X=\begin{pmatrix}7&22&45\\ 13&3&26\end{pmatrix}\]
Alternatively, the operation can be defined as \(Z_{F}=C_{M}X_{F}\) where
\[C_{M}X_{F}=\left(\begin{array}{cccc|cccc|cccc|cccc}2&1&0&0&4&3&0&0&-2&1&0&0&0&0&0&0\\ 0&2&1&0&0&4&3&0&0&-2&1&0&0&0&0&0\\ 0&0&2&1&0&0&4&3&0&0&-2&1&0&0&0&0\\ \hline 0&0&0&0&2&1&0&0&4&3&0&0&-2&1&0&0\\ 0&0&0&0&0&2&1&0&0&4&3&0&0&-2&1&0\\ 0&0&0&0&0&0&2&1&0&0&4&3&0&0&-2&1\end{array}\right)\left(\begin{array}{c}3\\ 4\\ 1\\ 2\\ 0\\ 0\\ 5\\ 6\\ 2\\ 1\\ 0\\ 3\\ 1\\ 4\\ 2\\ 5\end{array}\right)=\left(\begin{array}{c}7\\ 22\\ 45\\ 13\\ 3\\ 26\end{array}\right)=Z_{F}\]
Annotating the blocks, this simplifies as
\[C_{M}X_{F}=\left(\begin{array}{cccc}B_{1}&B_{2}&B_{3}&0\\ 0&B_{1}&B_{2}&B_{3}\\ \end{array}\right)\left(\begin{array}{c}x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end{array}\right)=\left(\begin{array}{c}z_{1}\\ z_{2}\\ \end{array}\right)=Z_{F}\]
Reshaping \(Z_{F}\) to \(2\times 3\), we reconstruct \(Z\) as follows.
\[Z=\texttt{reshape}\left(Z_{F}\right)=\left(\begin{array}{c}z_{1}^{T}\\ z_{2}^{T}\end{array}\right)=\left(\begin{array}{rrrrrrrr}7&22&45\\ 13&3&26\end{array}\right).\]
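The block-Toeplitz construction of Example 5 can be coded directly; this sketch rebuilds \(C_{M}\) for the example and checks that the matrix-vector product reproduces the convolution (the helper names are ours):

```python
import numpy as np

X = np.array([[3., 4, 1, 2],
              [0., 0, 5, 6],
              [2., 1, 0, 3],
              [1., 4, 2, 5]])
C = np.array([[2., 1],
              [4., 3],
              [-2., 1]])
l, w = X.shape
p, q = C.shape

# Direct "valid" 2D convolution with stride 1, Eq. (5)
Z = np.array([[(C * X[i:i + p, j:j + q]).sum() for j in range(w - q + 1)]
              for i in range(l - p + 1)])

def toeplitz_block(row):
    """B_j: a (w-q+1) x w Toeplitz matrix built from one filter row."""
    B = np.zeros((w - q + 1, w))
    for i in range(w - q + 1):
        B[i, i:i + q] = row
    return B

C_M = np.zeros(((l - p + 1) * (w - q + 1), l * w))
for r in range(l - p + 1):        # block row r holds B_1..B_p at block cols r..r+p-1
    for j in range(p):
        C_M[r * (w - q + 1):(r + 1) * (w - q + 1),
            (r + j) * w:(r + j + 1) * w] = toeplitz_block(C[j])

Z_F = C_M @ X.flatten()           # equals flatten(Z)
```

Both routes give the feature map \(Z=\begin{psmallmatrix}7&22&45\\13&3&26\end{psmallmatrix}\) from the example.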
Next, use of Theorem 3 to measure entropy propagation through a multiplication by a constant matrix \(C_{M}\) requires us to construct a square version of \(C_{M}\). We do this by adding identity matrices on the lower right of each block \(B_{j}\) as per (4) in the case where \(N<d\) (\(I_{1}\) in this case). Then, we do the same with the version of \(C_{M}\) consisting of the square \(B_{j}^{\prime}\) blocks, appending zero blocks and an identity matrix (\(I_{8}\) in this case). This results in the following.
\[C_{M}^{\prime}=\left(\begin{array}{cccc|cccc|cccc|cccc}2&1&0&0&4&3&0&0&-2&1&0&0&0&0&0&0\\ 0&2&1&0&0&4&3&0&0&-2&1&0&0&0&0&0\\ 0&0&2&1&0&0&4&3&0&0&-2&1&0&0&0&0\\ 0&0&0&1&0&0&0&1&0&0&0&1&0&0&0&0\\ \hline 0&0&0&0&2&1&0&0&4&3&0&0&-2&1&0&0\\ 0&0&0&0&0&2&1&0&0&4&3&0&0&-2&1&0\\ 0&0&0&0&0&0&2&1&0&0&4&3&0&0&-2&1\\ 0&0&0&0&0&0&0&1&0&0&0&1&0&0&0&1\\ \hline 0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&1&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&1\end{array}\right)\]
Since \(C_{M}^{\prime}\) is an upper triangular square matrix, its determinant is the product of its diagonal elements. In general, it is equal to \(c_{11}\) raised to the power of the number of rows of blocks (2) times the number of rows per block (3), where \(c_{11}\) is the upper left term in the convolutional filter \(C\). Here, \(\det C_{M}^{\prime}=c_{11}^{2\cdot 3}=2^{6}=64\). This determinant allows the use of Theorem 2 to measure entropy propagation.
Generalizing the pattern observed in the example, we can construct the matrix \(C_{M}\) as
\[C_{M}=\begin{pmatrix}B_{1}&B_{2}&B_{3}&\cdots&B_{p}&0&0&\cdots&0\\ 0&B_{1}&B_{2}&\cdots&B_{p-1}&B_{p}&0&\cdots&0\\ 0&0&B_{1}&\cdots&B_{p-2}&B_{p-1}&B_{p}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&B_{1}&B_{2}&B_{3}&\cdots&B_{p}\end{pmatrix}=\left(C_{M}^{s} \quad C_{M}^{r}\right), \tag{7}\]
where
\[B_{j}=\begin{pmatrix}c_{j1}&c_{j2}&c_{j3}&\cdots&c_{jq}&0&0& \cdots&0\\ 0&c_{j1}&c_{j2}&\cdots&c_{j,q-1}&c_{jq}&0&\cdots&0\\ 0&0&c_{j1}&\cdots&c_{j,q-2}&c_{j,q-1}&c_{jq}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&c_{j1}&c_{j2}&c_{j3}&\cdots&c_{jq}\end{pmatrix}=\left(B_{j}^{s} \quad B_{j}^{r}\right). \tag{8}\]
The matrix \(C_{M}\) and the blocks \(B_{j}\) both have more columns (width) than rows (length). In both cases, a superscript of \(s\) indicates the largest square submatrix made by taking columns 1 through the length of the matrix. For example, the square portion \(B_{j}^{s}\) is the first \(w-q+1\) columns of \(B_{j}\) and \(B_{j}^{r}\) is the remaining \(q-1\) columns (the rectangular portion). In other words, \(C_{M}\) is a \(p\)-diagonal block Toeplitz matrix with blocks \(B_{1}\),..., \(B_{p}\), which are \(q\)-diagonal Toeplitz matrices.
Similar to Theorem 3, we convert \(C_{M}\) into a square version whose determinant is determined entirely by the elements of \(C_{M}\). First, the blocks are adjusted to \(w\times w\) as
\[B_{j}^{\prime}=\begin{pmatrix}B_{j}^{s}&B_{j}^{r}\\ 0&I_{q-1}\end{pmatrix}, \tag{9}\]
Then, we have the resulting \(lw\times lw\) matrix \(C_{M}^{\prime}\),
\[C_{M}^{\prime}=\begin{pmatrix}C_{M}^{s}&C_{M}^{r}\\ 0&I_{(p-1)w}\end{pmatrix} \tag{10}\]
The output pre-activation map \(Z\) is now obtained via the matrix-vector product \(C_{M}^{\prime}X_{F}\) if we then reshape the result into shape \((l-p+1)\times(w-q+1)\) by transposing each successive component into a row. Thus, applying a 2D convolution is equivalent to a matrix-vector product, just like dense layers. This gives the following corollary to the previous theorem.
**Corollary 6**.: _Suppose \(X:\Omega\rightarrow\mathbb{R}^{l\times w}\) is a random matrix with rows \(X_{1}^{T},...,X_{l}^{T}\), then_
\[H(C*X) =H(X)+\log\left(|\det C_{M}^{\prime}|\right)\] \[=H(X)+(w-q+1)(l-p+1)\log\left(|c_{11}|\right) \tag{11}\]
Hence, the change in entropy from applying a single 2D convolutional filter is proportional to the product of the output dimensions, \((l-p+1)\) and \((w-q+1)\), and the logarithm of the magnitude of the first weight in the filter.
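The determinant identity behind Eq. (11) can be checked numerically. The sketch below follows our reading of Eqs. (9)-(10) (every block-row of \(C_{M}\) uses the adjusted blocks \(B_{j}^{\prime}\), and identity rows pad the bottom) and verifies \(\log|\det C_{M}^{\prime}|=(l-p+1)(w-q+1)\log|c_{11}|\):

```python
import numpy as np

rng = np.random.default_rng(1)
l, w, p, q = 5, 6, 3, 3
C = rng.normal(size=(p, q))

def B_prime(row, w, q):
    """B'_j from Eq. (9): the (w-q+1) x w banded block padded with I_{q-1}."""
    M = np.zeros((w, w))
    for i in range(w - q + 1):
        M[i, i:i + q] = row
    M[w - q + 1:, w - q + 1:] = np.eye(q - 1)
    return M

CM = np.zeros((l * w, l * w))
for i in range(l - p + 1):                  # block-rows of the adjusted C_M
    for j in range(p):
        CM[i * w:(i + 1) * w, (i + j) * w:(i + j + 1) * w] = B_prime(C[j], w, q)
CM[(l - p + 1) * w:, (l - p + 1) * w:] = np.eye((p - 1) * w)

# Corollary 6: log|det C'_M| = (l-p+1)(w-q+1) log|c_11|
_, logabsdet = np.linalg.slogdet(CM)
assert np.isclose(logabsdet, (l - p + 1) * (w - q + 1) * np.log(abs(C[0, 0])))
```

The identity follows because \(C_{M}^{\prime}\) is block triangular with \(B_{1}^{\prime}\) on the block diagonal, and each \(B_{1}^{\prime}\) is triangular with \(c_{11}\) appearing \(w-q+1\) times on its diagonal.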
## 4 Entropy-Based Guidance of Dense and Convolutional Neural Networks
The prior section established formulas for the entropy of hidden representations within dense and convolutional neural networks. In each case, the entropy is unknown, but the change in entropy can be computed using known parameters and hyperparameters.
The article proposes to guide the training of neural networks to produce ideal entropy propagation patterns. This provides a new lens through which models can be constructed, trained, and tuned. In addition, we provide two distinct tools to guide the training by controlling entropy propagation through:
1. Dense layers via a loss using determinants of modified weight matrices \(W^{\prime}\).
2. 2D convolutional layers via a loss using determinants of modified convolutional operations using \(C_{M}^{\prime}\).
These new loss terms will be used to construct a compound loss function:
\[L(\mathcal{C},\mathcal{W})=L_{\text{acc}}(\mathcal{C},\mathcal{W})+\lambda_{1}L_{ \text{dense}}(\mathcal{W})+\lambda_{2}L_{\text{conv}}(\mathcal{C}), \tag{12}\]
where \(\mathcal{C}\) consists of the 2D convolutional filters, and \(\mathcal{W}\) consists of the weight matrices of the dense layers.
\(L_{\text{acc}}\) is a standard loss measuring the error of the model's primary task; for example, mean squared error (MSE) for regression or cross-entropy for classification. \(L_{\text{dense}}\) and \(L_{\text{conv}}\) are the entropy-based loss terms from dense and convolutional layers, respectively. The hyperparameters \(\lambda_{1}\) and \(\lambda_{2}\) control the strengths of the loss terms relative to the primary loss \(L_{\text{acc}}\). The specific formulas and variations for each entropy-based loss term are established in the following subsections. Note the losses below generalize this setup to accommodate fine-grained tuning with layer-specific and channel-specific hyperparameters \(\lambda_{1}^{\ell}\) and \(\lambda_{2}^{\ell d}\).
### Dense Entropy Loss
From Theorem 3, we determined how entropy changes as dense layers process data. It is established that the term
\[\log\left(\left|\det W\right|\right) \tag{13}\]
describes the change in entropy. An entropy-based loss term for dense layers is therefore:
\[L_{\text{dense}}(\mathcal{W})=-\sum_{\ell}\lambda_{1}^{\ell}\log\left(\left| \det W_{\ell}\right|\right) \tag{14}\]
where \(\lambda_{1}^{\ell}\in\mathbb{R}\) are hyperparameters controlling the strength of the penalty in the \(\ell\)th dense layer, generalizing the single \(\lambda_{1}\) hyperparameter if losses are applied layer-wise. When \(\lambda_{1}^{\ell}>0\), the added loss term penalizes decreases in entropy, encouraging weight matrices to preserve it; \(\lambda_{1}^{\ell}<0\) instead encourages entropy reduction. This provides the opportunity to minimize loss further or faster, and opens the door for smaller, more efficient architectures.
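A minimal NumPy implementation of Eq. (14) might look as follows (our sketch; the paper does not prescribe an implementation). We use `slogdet` for a numerically stable \(\log|\det W|\):

```python
import numpy as np

def dense_entropy_loss(weights, lambdas):
    """Eq. (14): L_dense = -sum_l lambda_1^l * log|det W_l| (square W_l)."""
    loss = 0.0
    for W, lam in zip(weights, lambdas):
        _, logabsdet = np.linalg.slogdet(W)   # stable log|det W|
        loss -= lam * logabsdet
    return loss

# a layer with |det W| = 1 leaves entropy unchanged and contributes no loss
W = np.diag([2.0, 0.5, 1.0])
assert np.isclose(dense_entropy_loss([W], [1.0]), 0.0)
# shrinking the determinant (entropy reduction) increases the loss when lambda > 0
assert dense_entropy_loss([0.5 * W], [1.0]) > 0.0
```

In an actual training loop, the same expression would be written in the framework's autodiff operations so its gradient flows into the weight update.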
### 2D Convolutional Entropy Loss
According to Corollary 6, the change in entropy as an input is processed by a 2D convolution is
\[(l-p+1)(w-q+1)\log|c_{11}|, \tag{15}\]
so we introduce a loss term proportional to \(\log|c_{11}|\) for each convolutional filter.
Suppose the term \(c_{ij}^{\ell d}\) corresponds to the element in position \((i,j)\) of the \(d\)th convolutional channel of the \(\ell\)th convolutional layer. Then, the entropy-based loss term is
\[L_{\text{conv}}(\mathcal{C})=-\sum_{\ell,d}\lambda_{2}^{\ell d}\log\left( \left|c_{11}^{\ell d}\right|\right) \tag{16}\]
where \(\lambda_{2}^{\ell d}\in\mathbb{R}\) are layer- and channel-wise weighting hyperparameters, generalizing the simpler global \(\lambda_{2}\) from (12). If \(\lambda_{2}^{\ell d}>0\), decreases in entropy will be penalized, resulting in entropy amplification/preservation. Similarly, \(\lambda_{2}^{\ell d}<0\) results in entropy suppression.
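Eq. (16) is similarly direct to implement; a sketch with hypothetical filter values, chosen so the two contributions cancel when the weights are equal:

```python
import numpy as np

def conv_entropy_loss(filters, lambdas):
    """Eq. (16): L_conv = -sum_{l,d} lambda_2^{ld} * log|c_11^{ld}|."""
    return -sum(lam * np.log(abs(F[0, 0]))
                for F, lam in zip(filters, lambdas))

# two 3x3 filters: |c_11| = 0.5 is penalized, |c_11| = 2 is rewarded,
# and with equal positive weights the contributions cancel exactly
filters = [np.full((3, 3), 0.5), np.full((3, 3), 2.0)]
assert np.isclose(conv_entropy_loss(filters, [1.0, 1.0]), 0.0)
# a negative lambda flips the preference toward entropy suppression
assert conv_entropy_loss([filters[0]], [-1.0]) < 0.0
```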
Note that we are considering all convolutions as convolving over single-channel 2D inputs. However, a 2D convolutional filter running over a multi-channel input (e.g. an RGB image) is equivalent to several parallel single-channel 2D convolutions, one per channel, whose outputs are summed. Hence, the formula above works regardless of the number of channels of the input.
## 5 Experiments
We first carried out some qualitative analysis of entropy propagation patterns in well-trained, effective neural networks.
The left panel of Figure 1 displays the change in entropy per filter at each layer within a VGG16 CNN well-trained to classify the ImageNet dataset [25]. It preserves most of its entropy in its early convolutional layers, with a mean entropy change of only around \(-2.2\) in the first layer. The entropy then drops precipitously as data propagates through the later convolutional layers: the drops grow in magnitude roughly exponentially before settling near \(-5.5\) per filter. In contrast, the right panel shows that the randomly initialized (untrained) network fails to preserve early entropy as strongly and experiences entropy drops that grow only linearly in magnitude in later layers.
There is also an interesting pattern in the outlier filters. The ImageNet-trained network has few outliers, all of which correspond to larger drops in entropy than the norm. They are especially rare later in the network. In contrast, the randomly initialized network has far more outliers, again all corresponding to reductions in entropy. Further, the outliers are especially common in the later layers. These trends suggest well-trained networks learn filters that reduce entropy more uniformly across filters.
This pattern was observed across multiple well-trained neural networks; hence, we hypothesize that penalizing entropy decay in early layers and encouraging entropy drops in later layers would promote better performance.
### Experimental Modification of Entropy-based Losses
The dense entropy-based loss term includes \(-\log(|\det W|)\), which diverges as \(|\det W|\) approaches 0. Since weights tend to be small to avoid exploding gradients, tiny determinants are frequent. Even worse, the convolutional entropy-based loss term is a sum of \(-\log\left(|c_{11}^{\ell d}|\right)\) terms, which diverges if even a single \(c_{11}^{\ell d}\) approaches 0.
Figure 1: This plot shows the average change in entropy per filter in each layer of two VGG16 networks, one trained to classify ImageNet and one randomly initialized. The closed dots are means, box plots show first and third quartiles of entropy change per filter at each layer, and outliers are plotted as open dots.
To sidestep this issue, we frequently substitute the entropy-based loss terms above with similar functions that are more stable as follows.
\[\frac{1}{|\det W|+\varepsilon}\] (dense) \[\frac{1}{|c_{11}^{\ell d}|+\varepsilon}\] (convolutional)
Figure 2 shows the curves behave similarly near 0, but the loss terms modified with \(\varepsilon\) do not explode, even with the tiny determinants and \(|c_{11}|\) values seen during training. 1 Though the curves have different signs for inputs outside \([-1,1]\), such values are uncommon enough that the stability of backpropagation is maintained.
Footnote 1: Note \(\varepsilon=0.5\) is too large in practice and we use \(\varepsilon<10^{-3}\), but it was chosen to make the plot more easily visible.
In addition, \(\frac{1}{|x|}\) has a larger derivative near 0 than \(\log(|x|)\). Note in the first panel of Figure 2, \(\frac{1}{|x|+\varepsilon}\) approaches its maximum at \(x=0\) more steeply. The second panel shows that its absolute derivative is more extreme in this region. This amplifies the gradients computed during training to help avoid the vanishing gradient problem, encouraging quicker convergence to better minima.
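The contrast between the two terms can be verified directly (a small numerical check of our own; \(\varepsilon=10^{-3}\) is of the order the footnote suggests):

```python
import numpy as np

eps = 1e-3

# -log|x| diverges as x -> 0 ...
with np.errstate(divide="ignore"):
    assert np.isinf(-np.log(abs(0.0)))
# ... while 1/(|x| + eps) is bounded above by 1/eps for every x
assert np.isclose(1.0 / (abs(0.0) + eps), 1.0 / eps)

# near (but not at) zero, the surrogate also has the steeper slope,
# which amplifies gradients as claimed
x = 0.01
grad_log = 1.0 / x                       # |d/dx (-log|x|)| at x
grad_inv = 1.0 / (x + eps) ** 2          # |d/dx 1/(|x|+eps)| at x
assert grad_inv > grad_log
```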
### Dense Autoencoders for Image Compression
To test the efficacy of the entropy-based loss function for dense layers, we trained simple autoencoders with different values of \(\lambda_{1}\) and hidden dimensions for image compression using two benchmark datasets, MNIST [26] and CIFAR-10 [27]. The \(L_{\text{acc}}\) and number of iterations required for convergence were then compared.
The autoencoder includes an input layer, one dense hidden layer, and an output layer. It maps the input data into a latent space, passes this latent representation through an activation function, and then reconstructs the input data with another dense layer. The overall goal is to reduce the dimensionality of the input data while retaining enough of its total variation in the latent space for effective reconstruction. Models such as these can be beneficial in decreasing model sizes, detecting anomalies, denoising, and potentially making downstream tasks more interpretable.
MNIST is a classic benchmark dataset that includes 70,000 labeled images of handwritten digits from 0 to 9. Each image is grayscale, contains a centered handwritten digit, and is 28x28 pixels (784 dimensions). For higher dimensional experiments, we use CIFAR-10, a benchmark dataset of 60,000 tiny color images belonging to 10 classes of objects, including cats and airplanes. Images are all RGB and 32x32 pixel format with a total dimension of 3072. Training on these datasets allows us to assess the implications of the input dimensionality.
Autoencoder models were trained on each dataset separately with the same activation and optimizer to ensure a fair comparison. In this experiment, we compare results on latent dimension widths of 20, 60, 100,..., 260 and \(\lambda_{1}\) values in \(\{0,0.0001,0.001,0.01,0.1,1,10\}\). Each autoencoder uses the Adam optimizer, sigmoid activation, and MSE for the base \(L_{\text{acc}}\). We use early stopping to end training when the loss saturates for 5 epochs.
Use of the dense entropy-based loss term results in convergence up to 4x faster to minima within \(10^{-3}\) of the standard MSE reconstruction error, which are often lower than those of models trained without the entropy loss.
Figure 3 depicts the number of iterations the models required to converge with different latent dimensions. The dashed lines show autoencoders trained without the addition of the dense entropy-based loss. Note that the number of iterations they take is much higher than the models trained with nonzero \(\lambda_{1}\), while maintaining similar and often smaller test \(L_{\text{acc}}\) (MSE reconstruction error).
This speed-up occurs when the autoencoder is sufficiently wide. The networks have a harder time converging when the latent dimension is 20. However, when we widen the latent dimension to 60, the minima are found at a faster rate. This can provide guidance when choosing the width of these networks, since sufficient width is required for fast convergence.
### CNN Image Classifiers
Next, we perform image classification experiments on the benchmark dataset CIFAR10 with the entropy-based loss function for convolutional layers, \(L_{\text{conv}}\). Here, we explored the impact of the convolutional entropy-based loss term with different weighting hyperparameters \(\lambda_{2}\) for CNN classifiers of varying widths and depths.
CNNs in these experiments have 1-3 successive blocks containing 1 convolutional layer (filter size \(3\times 3\), stride \(1\times 1\)) and 1 max pooling layer (size \(2\times 2\), stride \(2\times 2\)) followed by a softmax classifier. Leaky ReLU activations are used with each convolutional layer. Widths of 32, 64, and 128 convolutional filters in each layer are tested with each depth. The CNNs use cross-entropy for the base classification loss \(L_{\text{acc}}\) and are trained with the Adam optimizer. Each architecture
Figure 3: This plot shows how the addition of \(L_{\text{dense}}(\mathcal{W})\) affects the training speed and accuracy of an autoencoder on MNIST and CIFAR-10 data.
is trained with varying \(\lambda_{2}\in\{0,0.0001,0.001,0.01,0.1,1,10\}\), applied in only the first convolutional layer in one set of experiments and applied to all layers in another set.
The pattern in entropy changes in the high-quality VGG classifier observed in Figure 1 prompted a hypothesis that encouraging entropy preservation in the early layers will have a positive impact on classification accuracy. The results of experiments with the entropy-based loss applied only to the first convolutional layer are shown in Table 1.
As expected, the base performance tends to improve with more filters and depth. Training accuracy almost always improves with the entropy-based loss, with gains as high as 6.6%, where more depth and width enhance the benefits. These gains are not preserved in test accuracy in shallower nets, but we see up to 2.8% gains in test accuracy in deeper nets.
Drilling down into the deepest 3-convolutional-block nets, Figure 4 (left) displays a strongly positive relationship between the \(\lambda_{2}\) values and training performance, particularly for the wider nets with 64 and 128 filters per layer. On the right, note that test accuracy always peaks with nonzero \(\lambda_{2}\), indicating improved classification performance when the entropy-based loss is enabled.
Interestingly, applying the entropy-based loss to all layers with the same weighting hyperparameters yields no significant gains, neither for train nor test accuracy. This supports the hypothesis that encouraging entropy preservation in the early layers trains the CNN to extract higher-quality latent representations of the input data, enabling better downstream classification performance.
\begin{table}
\begin{tabular}{l|c|c|c|c|} Architecture & Base Train Acc. & Base Test Acc. & \(\Delta\) Train Acc. & \(\Delta\) Test Acc. \\ \hline \hline \([32]\) & 0.7616 & 0.6479 & \(-0.0002\) & \(-0.0004\) \\ \([64]\) & 0.7779 & 0.6441 & +0.0229 & +0.0121 \\ \([128]\) & 0.7543 & 0.6519 & +0.0307 & \(-0.0013\) \\ \hline \([32,32]\) & 0.7211 & 0.6648 & +0.0384 & +0.0168 \\ \([64,64]\) & 0.7972 & 0.7050 & +0.0123 & +0.0013 \\ \([128,128]\) & 0.8451 & 0.7064 & +0.0167 & +0.0041 \\ \hline \([32,32,32]\) & 0.7265 & 0.6737 & +0.0414 & +0.0156 \\ \([64,64,64]\) & 0.7952 & 0.7012 & +0.0483 & +0.0091 \\ \([128,128,128]\) & 0.8389 & 0.7098 & +0.0660 & +0.0280 \\ \end{tabular}
\end{table}
Table 1: Experimental results for CIFAR10 classification with CNNs. The base train and test accuracy for \(\lambda_{2}=0\) is displayed alongside the best gain in accuracies with \(\lambda_{2}>0\) applied to the first convolutional layer.
Figure 4: Train and test accuracies on CIFAR10 classification for CNNs with three convolutional blocks and different entropy-based loss hyperparameters.
## 6 Conclusion
This article addresses the complexity of the decision-making process of neural networks by utilizing information theory. We derived novel information-theoretic results on entropy propagation through dense and 2D convolutional layers. These results provided a foundation for entropy-based loss terms that allow us to guide the neural decision process.
We analyzed the performance of networks with these loss terms by performing experiments on image compression and image classification tasks on the benchmark datasets MNIST and CIFAR10. Specifically, the incorporation of the \(L_{\text{dense}}(\mathcal{W})\) term in an autoencoder for image compression showed an increase in convergence speed and often provided smaller minima. Additionally, performance gains are demonstrated on image classification with CNNs utilizing the \(L_{\text{conv}}(\mathcal{C})\) term, and we confirm the hypothesis that encouraging entropy preservation in early layers promotes better downstream performance.
This work provides strong foundational findings that allow practical information-theoretic guidance of neural networks. It can allow theory-backed, principled construction of neural architectures, specifically in terms of depth, width, and layer structure.
Related works have often searched for similar methods, but have been focused on the estimation of entropy. This often leads to highly complex calculations that can be difficult for many practical use cases. Our work avoids these difficult estimations of entropy, and focuses instead on the more easily computable change in entropy, localized to specific layers and channels of the network. This calculation is cheap in comparison, and can be used in larger practical models.
This work took strides towards practical use of information-theoretic guidance of neural network training. This suggests that exploring larger models, incorporating diverse architectural components, and using higher-dimensional datasets could yield valuable insights for building and training efficient neural networks. Additionally, such explorations could provide a deeper understanding of the decision-making processes within them.
## Acknowledgement
The authors would like to thank Olivia Raney for her invaluable editorial improvements to this article. M. Meni would also like to thank Dr. Kaleb Smith for his willingness to share his knowledge and provide thoughtful recommendations to overcome challenges. R. T. White wishes to thank the NVIDIA Applied Research Accelerator Program for providing hardware support for this effort.
2305.05611 | Metric Space Magnitude and Generalisation in Neural Networks | Rayna Andreeva, Katharina Limbeck, Bastian Rieck, Rik Sarkar | 2023-05-09T17:04:50Z | http://arxiv.org/abs/2305.05611v1

# Metric Space Magnitude and Generalisation in Neural Networks
###### Abstract
Deep learning models have seen significant successes in numerous applications, but their inner workings remain elusive. The purpose of this work is to quantify the learning process of deep neural networks through the lens of a novel topological invariant called _magnitude_. Magnitude is an isometry invariant; its properties are an active area of research as it encodes many known invariants of a metric space. We use magnitude to study the internal representations of neural networks and propose a new method for determining their generalisation capabilities. Moreover, we theoretically connect magnitude dimension and the generalisation error, and demonstrate experimentally that the proposed framework can be a good indicator of the latter.
by Adams et al. (2020) and demonstrate that ours benefits from a better computational complexity and interpretability. In short, our contributions are as follows:
* We propose a novel method for evaluating the generalisation of neural networks based on magnitude and the effective number of models, which allows us to monitor performance without a validation set.
* We prove a new upper bound for the generalisation error, linking the generalisation error to a magnitude-based characteristic of the training trajectories.
* We empirically show that the evolution of these measures throughout the training process correlates with the accuracy of the test set.
* We prove that all notions of previously proposed intrinsic dimensions are the same as the magnitude dimension, and we verify this result empirically.
## 2 Related Work
We briefly review the literature related to generalisation in neural networks, intrinsic dimension, and magnitude.
Generalisation Bounds and Intrinsic Dimension. Several works explore intrinsic dimension for capturing the generalisation capabilities of neural networks. Simsekli et al. (2020) demonstrated that the fractal dimension of a hypothesis class is associated with the generalisation error, which is further linked to the heavy-tailed behavior of the trajectory of networks Hodgkinson and Mahoney (2021); Mahoney and Martin (2019); Simsekli et al. (2019). However, many assumptions were required for the bound to be computed in practice. A more recent work relaxed some of the assumptions, and developed the notion of the persistent homology dimension Adams et al. (2020), \(\dim_{\mathrm{PH}}\). The authors in Birdal et al. (2021) were the first to offer a theoretical justification for using topological invariants for the analysis of deep neural networks. Another work Magai and Ayzenberg (2022) investigated \(\dim_{\mathrm{PH}}\) at different depths and layers of the network and observed its evolution. In Dupuis et al. (2023), the authors developed a data-driven dimension and compared it with the \(\dim_{\mathrm{PH}}\) of Birdal et al. (2021). They have demonstrated stronger correlation with the generalisation error than previously shown and managed to relax some of the restrictive assumptions.
Magnitude and its Applications in Machine Learning. Magnitude was first proposed in Solow and Polasky (1994) for measuring biodiversity, albeit without any reference to its mathematical properties. It was only approximately twenty years later when Leinster (2013) formalised its mathematical properties using the language of category theory. Further, magnitude has been realised as the Euler characteristic in magnitude homology Leinster and Shulman (2021). While magnitude has theoretical foundations, its applications to machine learning are scarce. Recently, there has been renewed interest in introducing magnitude into the machine learning community. The first work to develop the concept of magnitude in the context of machine learning demonstrated that the individual summands of magnitude, known as magnitude weights, can be used as an efficient boundary detector Bunch et al. (2021). Further, it has been used for working in the space of images and it has demonstrated its usefulness as an effective edge detector Adamer et al. (2021). However, our contribution constitutes the first direct application of magnitude to deep learning.
Using Topology to Characterise Neural Networks. Earlier research has established a connection between neural network training and topological invariants (Fernandez et al., 2021), using topological complexity as a proxy for generalisation performance, for instance (Rieck et al., 2019). However, these studies focused solely on analyzing the trained network after completing the training process Fernandez et al. (2021), potentially missing critical aspects of the training dynamics Birdal et al. (2021). By contrast, we propose the use of another topological invariant--magnitude--which affords more interpretability than previous approaches. Moreover, we compute magnitude on the training trajectories instead of on the trained network, offering crucial topological insights into training dynamics.
Figure 1: **Overview of the procedure. We first train a neural network and monitor the training trajectory. At each 1000 iterations of the trajectory, we collect the training weights \(W\in\mathbb{R}^{d}\) into a point cloud. Then, we compute the magnitude curve of the point cloud at selected scales \(t\), create the log-log plot and estimate the magnitude dimension based on it.**
## 3 Background
We start by defining magnitude and the magnitude dimension, and then proceed to define intrinsic dimensions.
### Magnitude
While the magnitude of metric spaces is a general concept Leinster (2013), we restrict our focus to subsets of \(\mathbb{R}^{n}\), where magnitude is known to exist Meckes (2013).
**Definition 3.1**.: Let \((X,d)\) be a finite metric space with distance metric \(d\). Then, the similarity matrix of \(X\) is defined as \(\zeta_{ij}=e^{-d_{ij}}\) for \(1\leq i,j\leq n\), where \(n\) denotes the cardinality of \(X\).
**Definition 3.2**.: Let \(X\) be a metric space with similarity matrix \(\zeta_{ij}\). If \(\zeta_{ij}\) is invertible, magnitude is defined as
\[\mathrm{Mag}(X)=\sum_{ij}(\zeta^{-1})_{ij}. \tag{1}\]
When \(X\) is a finite subset of \(\mathbb{R}^{n}\), then \(\zeta_{ij}\) is a symmetric positive definite matrix as proven in Leinster (2013) (Theorem 2.5.3). Then, \((\zeta^{-1})_{ij}\) exists, and hence magnitude exists as well. Magnitude is best illustrated when considering a few sample spaces with a small number of points.
_Example 3.3_.: Let \(X\) denote the metric space with a single point \(a\). Then, \(\zeta_{X}\) is a \(1\times 1\) matrix with \(\zeta_{X}^{-1}=1\) and using the formula for magnitude, we get \(\mathrm{Mag}(X)=1\).
_Example 3.4_.: A more illustrative example is given by the space of two points. Let \(X=\{a,b\}\) be a finite metric space where \(d_{X}(a,b)=d\). Then
\[\zeta_{X}=\begin{bmatrix}1&e^{-d}\\ e^{-d}&1\end{bmatrix}, \tag{2}\]
so that
\[\zeta_{X}^{-1}=\frac{1}{1-e^{-2d}}\begin{bmatrix}1&-e^{-d}\\ -e^{-d}&1\end{bmatrix}, \tag{3}\]
and therefore
\[\mathrm{Mag}(X)=\frac{2-2e^{-d}}{1-e^{-2d}}=\frac{2}{1+e^{-d}}. \tag{4}\]
This example is also illustrated in Figure 2. Using Eq. (4), if \(d\) is very small, i.e. \(d\to 0\), then \(\mathrm{Mag}(X)\to 1\). Similarly, when \(d\to\infty\), \(\mathrm{Mag}(X)\to 2\). In practice, as can be seen from Figure 2, \(\mathrm{Mag}(X)\approx 2\) already for a value of \(d\) as small as \(5\).
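Definition 3.2 and Eq. (4) can be verified in a few lines of NumPy (our own sketch; `magnitude` is a hypothetical helper name):

```python
import numpy as np

def magnitude(X, t=1.0):
    """Mag(tX) via Definition 3.2: sum of the entries of the inverse
    similarity matrix of the rescaled point cloud X."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    zeta = np.exp(-t * D)
    return np.linalg.inv(zeta).sum()

# Eq. (4): for two points at distance d, Mag = 2 / (1 + e^{-d})
d = 1.5
X2 = np.array([[0.0], [d]])
assert np.isclose(magnitude(X2), 2.0 / (1.0 + np.exp(-d)))
# limits: Mag -> 1 as the scaled distance -> 0, and Mag -> 2 as it grows
assert np.isclose(magnitude(X2, t=1e-6), 1.0, atol=1e-3)
assert np.isclose(magnitude(X2, t=50.0), 2.0, atol=1e-6)
```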
More information about a metric space can be obtained by looking at its rescaled counterparts. The resulting representation is richer, and it can be summarised compactly in the magnitude function. For this purpose, the scale parameter \(t\) is introduced. By varying \(t\), we obtain a scaled version of the metric space, which permits us to answer what the number of _effective points_ of a metric space is.
The Magnitude Function.Magnitude assigns to each metric space not just a scalar number, but a function. This works as follows: we introduce a scale parameter \(t\), and for each value of \(t\), we consider the space where the distances between points are scaled by \(t\). More concretely, for \(t>0\), the notation \(tX\) means \(X\) scaled by a factor of \(t\). Computing the magnitude for each value of \(t\) then gives the magnitude function. More formally, we have the following definition:
**Definition 3.5**.: Let \((X,d)\) be a finite metric space. We define \((tX,d_{t})\) to be the metric space with the same points as \(X\) and the metric \(d_{t}(x,y)=td(x,y)\).
The intuition here is that we are looking at the same space but through different scales, for example, changing the distances from metres to centimeters. Then the definition of the magnitude function follows naturally.
**Definition 3.6**.: The magnitude function of a finite metric space \((X,d)\) is the partially-defined function \(t\to\mathrm{Mag}(tX)\), which is defined for all \(t\in(0,\infty)\).
_Example 3.7_.: Consider the magnitude function of the 3-point space in Figure 2. In this example, the points \(x\), \(y\) and \(z\) form an isosceles triangle. When \(t=0.0001\), all the three points are very close to each other and almost indistinguishable. Hence, we say that the space has 1 effective point. In contrast, when \(t=0.01\), the distance between the
Figure 2: **Magnitude of the 2- and 3-point space.** On the left, we see the 2-point space, where the distance between the points is \(d\). On the right we see the 3-point space for an isosceles triangle with distance \(t\) between \(y\) and \(z\), and \(1000t\) between \(x\) and \(y\) and \(x\) and \(z\). Below each space, we see the respective magnitude function.
two points on the right \(y\) and \(z\) is very small, and \(x\) is quite far. Therefore, we say that the space looks like two points. Finally, when \(t\) is large, all the three points are far away from each other, and the value of magnitude is 3, which is also the cardinality of the space.
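The three regimes of this example can be reproduced numerically. The configuration below uses our own concrete coordinates (unit distance between \(y\) and \(z\), roughly 1000 units to \(x\)), not the exact values of Figure 2:

```python
import numpy as np

def magnitude(X, t):
    """Mag(tX) for a finite point cloud X (Definition 3.2)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.linalg.inv(np.exp(-t * D)).sum()

# isosceles configuration: y and z one unit apart, x ~1000 units from both
pts = np.array([[0.0, 0.0], [1000.0, 0.5], [1000.0, -0.5]])

# three regimes of the magnitude function: the "effective number of points"
assert abs(magnitude(pts, t=1e-4) - 1.0) < 0.1   # all points indistinguishable
assert abs(magnitude(pts, t=1e-2) - 2.0) < 0.1   # {y, z} merge; x stands apart
assert abs(magnitude(pts, t=10.0) - 3.0) < 0.1   # all points distinguishable
```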
### Intrinsic Dimension
There are various notions that can be used to measure the intrinsic dimension of a space. In this work, we will focus on three such notions: the upper-box dimension (Minkowski), the magnitude dimension and the persistent homology dimension. The box dimension is based on covering numbers and can be linked to generalization via Simsekli et al. (2020), whereas the magnitude dimension is built upon the concepts we defined earlier.
**Definition 3.8** (Minkowski dimension).: For a bounded metric space \(X\), let \(N_{\delta}(X)\) denote the maximal number of disjoint closed \(\delta\)-balls with centers in \(X\). The upper box/Minkowski dimension is defined as
\[\mathrm{dim}_{\mathrm{Mink}}X=\limsup_{\delta\to 0}\frac{\log(N_{\delta}(X))}{\log(\frac{1}{\delta})}. \tag{5}\]
There is a subtle point to be made here. In general, the Minkowski and Hausdorff dimensions do not coincide and are not equivalent. However, in Simsekli et al. (2020) the authors provide conditions under which the Hausdorff dimension of the space we are interested in coincides with the Minkowski dimension. In fact, for many fractal-like sets, these two notions of dimensions are equal to each other; see Mattila (1999, Chapter 5).
**Definition 3.9** (Magnitude dimension).: When
\[\mathrm{dim}_{\mathrm{Mag}}X=\lim_{t\to\infty}\frac{\log(\mathrm{Mag}(tX))}{\log t} \tag{6}\]
exists, we define this to be the magnitude dimension of \(X\)(Meckes, 2015).
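In practice, the limit in Eq. (6) is estimated as the slope of a log-log plot of \(\mathrm{Mag}(tX)\) against \(t\). A sketch (ours) for the unit interval, whose magnitude function is known to be \(\mathrm{Mag}(t[0,1])=1+t/2\), so the slope approaches 1 from below:

```python
import numpy as np

def magnitude(X, t):
    """Mag(tX) for a finite 1D point cloud (Definition 3.2)."""
    D = np.abs(X[:, None] - X[None, :])
    return np.linalg.inv(np.exp(-t * D)).sum()

# 400 evenly spaced points approximating the unit interval (dim = 1)
X = np.linspace(0.0, 1.0, 400)
ts = np.array([10.0, 20.0, 40.0, 80.0])
mags = np.array([magnitude(X, t) for t in ts])

# slope of the log-log plot estimates dim_Mag; here it should be close to 1
slope = np.polyfit(np.log(ts), np.log(mags), 1)[0]
assert 0.85 < slope < 1.05
```

The estimate undershoots 1 slightly because \(\mathrm{Mag}(t[0,1])=1+t/2\) has log-log slope \(t/(t+2)<1\) at any finite scale.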
The magnitude dimension can be approximately interpreted as the rate of change of the magnitude function for a suitable interval of values for \(t\). We can introduce another notion of the fractal dimension, known as the persistent homology dimension (\(\mathrm{dim}_{\mathrm{PH}}\)) Adams et al. (2020).
**Definition 3.10**.: The persistent homology dimension of a bounded metric space \((X,d)\), denoted by \(\mathrm{dim}_{\mathrm{PH}}\), is defined as
\[\inf\{\alpha>0:\exists C>0,\ \forall W\subset X\ \text{finite},\ E_{\alpha}(W)<C\}, \tag{7}\]
where \(E_{\alpha}\) is the \(\alpha\)-lifetime sum, defined as
\[E_{\alpha}(W)\coloneqq\sum_{(b,d)\in PH^{0}(Rips(W))}(d-b)^{\alpha},\]
and \(b\) and \(d\) are the birth and death values respectively for the persistent homology of degree 0 (\(PH^{0}\)). It measures all connected components in the Vietoris-Rips filtration of W, denoted by \(Rips(W)\).
The definition is rather technical and not crucial for this work; for more details, please refer to Adams et al. (2020); Edelsbrunner and Harer (2022); Memoli and Singhal (2019).
## 4 Theoretical Results
We first elucidate connections between different notions of intrinsic dimension before proving connections to the generalisation error.
### Connection between Notions of Intrinsic Dimension
After having introduced three different notions of intrinsic dimension, we demonstrate that they are in fact the same under some mild assumptions. Our novel contribution is proving that \(\mathrm{dim}_{\mathrm{Mag}}X\) and \(\mathrm{dim}_{\mathrm{PH}}^{0}X\) coincide. The result is formalised in the following theorem, which assumes that all notions of dimension exist.
**Theorem 4.1**.: _Let \(X\subset\mathbb{R}^{n}\) be a compact set and either \(\mathrm{dim}_{\mathrm{Mag}}X\) or \(\mathrm{dim}_{\mathrm{Mink}}X\) exist. Then_
\[\mathrm{dim}_{\mathrm{Mag}}X=\mathrm{dim}_{\mathrm{PH}}^{0}X \tag{8}\]
Proof.: Since \(X\) is compact and either \(\mathrm{dim}_{\mathrm{Mag}}X\) or \(\mathrm{dim}_{\mathrm{Mink}}X\) exist, from Corollary 7.4, Meckes (2015) it follows that both \(\mathrm{dim}_{\mathrm{Mag}}X\) and \(\mathrm{dim}_{\mathrm{Mink}}X\) exist and \(\mathrm{dim}_{\mathrm{Mag}}X=\mathrm{dim}_{\mathrm{Mink}}X\). Since \(X\) is compact, from the Heine-Borel Theorem, \(X\) is both closed and bounded, and therefore from a result in Kozma et al. (2006); Schweinhart (2021), we have that \(\mathrm{dim}_{\mathrm{Mink}}X=\mathrm{dim}_{\mathrm{PH}}^{0}X\). Hence, \(\mathrm{dim}_{\mathrm{Mag}}X=\mathrm{dim}_{\mathrm{Mink}}X=\mathrm{dim}_{ \mathrm{PH}}^{0}X\), which implies that \(\mathrm{dim}_{\mathrm{Mag}}X=\mathrm{dim}_{\mathrm{PH}}^{0}X\).
### Connection to the Generalisation Error
After having established equality between the various notions of dimensions, we proceed to formalise the required language of machine learning theory, culminating in a novel generalisation result.
For the beginning of this section, we follow the notation from Shalev-Shwartz and Ben-David (2014). We briefly recall some standard definitions to make our paper self-contained. In a standard statistical learning setting, \(\mathcal{X}\) denotes the set of features and \(\mathcal{Y}\) the set of labels. Together, the cross
product \(\mathcal{X}\times\mathcal{Y}\) represents the space of data \(\mathcal{Z}\). The learner has access to a sequence of \(m\) samples, called the training data, denoted by \(S=((x_{1},y_{1}),...,(x_{m},y_{m}))\in\mathcal{Z}^{m}\). We assume that the training set \(S\) is generated by some unknown probability distribution over \(\mathcal{Z}\), which we denote by \(\mathcal{D}\), and that the samples \((x_{i},y_{i})\) are independent and identically distributed (i.i.d.) draws from \(\mathcal{D}\). We focus on a restricted search space of predictors, called a hypothesis class \(\mathcal{H}\). Each element \(h\in\mathcal{H}\), called a hypothesis, is a function from \(\mathcal{X}\) to \(\mathcal{Y}\). A hypothesis (in our setting, a parameter vector) \(h\in\mathcal{H}\) is selected by evaluating a loss function \(\ell:\mathcal{H}\times\mathcal{Z}\rightarrow\mathbb{R}_{+}\). The empirical error is then \(L_{S}(h)=\frac{1}{m}\sum_{i=1}^{m}\ell(h,z_{i})\) for \(z_{i}\in S\), and the true error, or risk, is \(L_{\mathcal{D}}(h)=\mathbb{E}_{z\sim\mathcal{D}}[\ell(h,z)]\). The generalisation error is defined as the difference between the true error and the empirical risk, \(|L_{S}(h)-L_{\mathcal{D}}(h)|\).
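As a toy instance of these definitions (our illustration, not an experiment from the paper), take \(\mathcal{X}=[0,1]\) with the uniform distribution, labels \(y=\mathbb{1}[x>1/2]\), threshold hypotheses \(h_{\theta}(x)=\mathbb{1}[x>\theta]\) and the 0-1 loss. Then \(L_{\mathcal{D}}(h_{\theta})=|\theta-1/2|\) in closed form, so the generalisation gap can be evaluated exactly:

```python
import numpy as np

def empirical_risk(theta, xs, ys):
    """L_S(h_theta): average 0-1 loss of the threshold classifier on S."""
    return float(np.mean((xs > theta).astype(int) != ys))

def true_risk(theta):
    """L_D(h_theta) = |theta - 1/2| for x ~ Uniform(0, 1), y = 1[x > 1/2]."""
    return abs(theta - 0.5)

rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, size=10_000)
ys = (xs > 0.5).astype(int)

theta = 0.3
gap = abs(empirical_risk(theta, xs, ys) - true_risk(theta))
print(gap)  # small: the empirical risk concentrates around the true risk 0.2
```

With \(m=10{,}000\) samples the gap is on the order of the sampling noise, illustrating how the empirical risk concentrates around the true risk as \(m\) grows.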
In the case of neural networks, the hypothesis class is \(\mathcal{W}\subset\mathbb{R}^{d}\), and each \(w\in\mathcal{W}\) is a parameter vector. Given a training algorithm \(\mathcal{A}\), we want to study the set of all hypothesis classes returned from the optimisation procedure \(\mathcal{A}\) for a given training data \(S\). We call this the _optimisation trajectories_, and denote them by \(\mathcal{W}\). To access the iterates at every step of the process, we denote by \(w_{i}\) an element of \(\mathcal{W}\) at iteration \(i\). More concretely, \(\mathcal{W}:=\{w\in\mathbb{R}^{d}:\exists i\in[0,I],w=[\mathcal{A}(S)_{i}]\}\), where \(I\) is the number of training iterations. In other words, when we fix \(i\), we are interested in the weights at iteration \(i\), returned by the optimisation algorithm \(\mathcal{A}\).
For the following result, there is a more technical condition required from Birdal et al. (2021, Assumption H1), which can be found in the Appendix. We denote this assumption as H1. We require the existence of the constant \(M>0\), which quantifies how dependent the set \(\mathcal{W}_{S}\) is on the training sample \(S\). Now that we have all the required definitions, we can proceed with the novel result.
**Theorem 4.2**.: _Let \(\mathcal{W}\subset\mathbb{R}^{d}\) be a compact set. Assume that \(H1\) holds and that \(\ell\) is bounded by a constant \(C\) and \(K\)-Lipschitz continuous in \(w\). For \(n\) sufficiently large, we then have the following bound:_
\[\begin{split}\sup_{w\in\mathcal{W}}|L_{S}(w)-L_{\mathcal{D}}(w)| \leq\\ 2C\sqrt{\frac{[\dim_{\mathrm{Mag}}\mathcal{W}+1]\log^{2}(nK^{2} )}{n}+\frac{\log(7M/\gamma)}{n}}\end{split} \tag{9}\]
_with probability \(1-\gamma\) over \(S\sim\mathcal{D}^{\bigotimes n}\), where \(M\) is the constant from Assumption H1._
Proof.: Since \(\mathcal{W}\) is compact, Theorem 4.1 gives \(\dim_{\mathrm{PH}}^{0}\mathcal{W}=\dim_{\mathrm{Mag}}\mathcal{W}\). Therefore, the result follows from substituting \(\dim_{\mathrm{Mag}}\mathcal{W}\) for \(\dim_{\mathrm{PH}}^{0}\mathcal{W}\) in Proposition 1 of Birdal et al. (2021).
## 5 Methods
In contrast with previous work on the intrinsic dimension, we propose to estimate the fractal dimension using the magnitude. As we have seen, the concept of magnitude dimension coincides with the Minkowski dimension and the persistent homology dimension. This theoretical connection enables us to confidently explore the concept of magnitude in the context of neural networks. We do this as follows: at selected points on the weight trajectory \(\mathcal{W}\), we subsample a number of models \(W\), which can be interpreted as a point cloud, where each point is a model in the model space. This space has a very high dimension equal to the number of parameters in the network. We compute the magnitude and the magnitude dimension of each such point cloud. We then investigate the connection between these quantities and the test accuracy.
In light of our novel result in Theorem 4.2, linking the intrinsic dimension and the generalisation bound, it is natural to ask whether the magnitude function can also be used to explore the learning process of neural networks. The magnitude dimension can be roughly interpreted as the rate of change of the magnitude function; therefore, if there is a link between the magnitude dimension and the generalisation error, there should also be a link between the values of the magnitude function at different scales and the generalisation error. What we want to study is how the space of model trajectories changes as the learning process advances. In other words, does the space look like a larger number of distinct points, or does it resemble fewer points, as training progresses? Translating this idea into the language of magnitude: is the _effective number of models_ increasing or decreasing as the learning algorithm performs more iterations?
Since magnitude is a function, we would like to take a cross-sectional slice of the magnitude curve and examine the magnitude values for a particular choice of the scale parameter \(t\). Therefore, we formulate the definition of the effective number of models for a fixed \(t\).
**Definition 5.1**.: We define the **effective number of models** as the value of \(\mathrm{Mag}(t_{i}\mathcal{W})\) at scale \(t_{i}\).
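For a finite sample \(W=\{w_{1},\dots,w_{n}\}\), \(\mathrm{Mag}(tW)\) is computed by inverting the similarity matrix \(Z_{ij}=e^{-t\,d(w_{i},w_{j})}\) and summing its entries. The toy example below (ours; the cluster geometry is invented for illustration) shows the "effective number of models" behaviour: two tight, well-separated clusters of five points each have magnitude close to 2 at moderate scales, and magnitude close to the cardinality 10 once \(t\) is large enough to resolve every point:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def magnitude(points, t):
    """Mag(tW): sum of all entries of the inverse similarity matrix."""
    D = squareform(pdist(points))
    Z = np.exp(-t * D)            # positive definite for distinct points
    return float(np.sum(np.linalg.inv(Z)))

# Two tight clusters of five points each, separated by distance 100.
cluster_a = np.linspace(0.0, 0.004, 5).reshape(-1, 1)
cluster_b = cluster_a + 100.0
W = np.vstack([cluster_a, cluster_b])

print(magnitude(W, t=1.0))       # ≈ 2: the space "looks like" two models
print(magnitude(W, t=10_000.0))  # ≈ 10: all ten models resolved
```

As \(t\to 0\) the magnitude tends to 1 (everything merges into one point), and as \(t\to\infty\) it converges to the cardinality of the set, which is what makes a cross-sectional slice at fixed \(t\) a sensible "effective number of models".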
### Analyzing Deep Neural Network Dynamics via the Magnitude Dimension
Estimating the magnitude dimension is not a straightforward task: the limit in Equation 6 has to be approximated by finding the longest straight part of the magnitude function, which cannot be computed automatically but must be chosen manually. Here we provide an algorithm for computing \(\dim_{\mathrm{Mag}}\mathcal{W}\) from a finite sample \(W\), approximating an infinite process. We follow a procedure similar to existing estimations of the magnitude dimension (O'Malley et al., 2023; Willerton, 2009). The details are described in Algorithm 1. After choosing a suitable representative interval \([t_{i},t_{j}]\), the log-log plot of magnitude versus \(t\) is generated, and the slope \(m\) and intercept \(b\) of a fitted line are computed on the selected interval \([t_{i},t_{j}]\). Then, \(m\) is taken as the estimate of \(\dim_{\mathrm{Mag}}\mathcal{W}\). This procedure can essentially be seen as computing the limit via the slope, an application of l'Hopital's rule.
```
Input: the training trajectories \(\mathcal{W}=\{w_{i}\}^{n}\) of size \(n\); scale grid \(t:[0,t_{k}]\); fitting interval \([t_{i},t_{j}]\), \(i<j<k\)
Output: \(\dim_{\mathrm{Mag}}\mathcal{W}\)
for \(r=1\) to \(k\) do
  compute \(\mathrm{Mag}(t_{r}\mathcal{W})\) from \(\mathcal{W}\)
end for
\(m,b\leftarrow\text{fitting}(\log(\mathrm{Mag}(t\mathcal{W})[t_{i}:t_{j}]),\log([t_{i}:t_{j}]))\)
\(\dim_{\mathrm{Mag}}\mathcal{W}\gets m\)
```
**Algorithm 1** Estimation of \(\dim_{\mathrm{Mag}}\mathcal{W}\)
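Algorithm 1 can be sketched in a few lines (our code; the scale grid and fitting interval are illustrative choices). As a sanity check on a set whose dimension is known, evenly spaced points on a unit segment should yield a slope close to 1, the Minkowski dimension of the interval:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def magnitude(points, t):
    """Mag(tW): sum of all entries of the inverse similarity matrix."""
    Z = np.exp(-t * squareform(pdist(points)))
    return float(np.sum(np.linalg.inv(Z)))

def magnitude_dimension(points, ts):
    """Slope of log Mag(tW) versus log t over the scale grid ts."""
    mags = np.array([magnitude(points, t) for t in ts])
    slope, _intercept = np.polyfit(np.log(ts), np.log(mags), 1)
    return float(slope)

# 200 evenly spaced points on [0, 1]; the interval [t_i, t_j] = [10, 100]
# lies on the straight part of the log-log magnitude curve for this set.
segment = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
ts = np.linspace(10.0, 100.0, 20)
print(magnitude_dimension(segment, ts))  # close to 1 for a 1-dimensional set
```

In practice the manual step is exactly the choice of `ts`: too small a scale and the set looks like one point, too large and the finite sample saturates at its cardinality; the slope is meaningful only on the straight region in between.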
## 6 Experimental Results
The first goal of this section is to verify our main claim, namely that our estimate of the magnitude dimension is capable of measuring generalisation. The second goal is to establish a connection between magnitude itself and the test accuracy. The third goal is to empirically compare two very close measures of the fractal dimension, namely \(\dim_{\mathrm{Mag}}\mathcal{W}\) and \(\dim_{\mathrm{PH}}\mathcal{W}\). The fourth goal is to explain what our results mean for generalisation in neural networks in general, arriving at novel insights. To this end, we apply our procedure to a number of different neural network architectures, training settings and datasets. In particular, we train a 5-layer (fcn-5) and a 7-layer (fcn-7) fully connected network on MNIST, and AlexNet Krizhevsky et al. (2017) on CIFAR10, over a range of learning rates with a batch size of 100. We consider a sliding window of 1000 training iterations and estimate \(\dim_{\mathrm{Mag}}\mathcal{W}\) of each window, following the steps in Algorithm 1.
### Exploring the learning process
We assess the main claim of the paper, which links the magnitude dimension with the generalisation error. Figure 3 reveals multiple findings. First, we observe that there is a correlation between the magnitude dimension and the test accuracy--the lower the magnitude dimension, the higher the test accuracy. This result agrees with what has previously been observed in Birdal et al. (2021); Simsekli et al. (2020), albeit from the perspective of _magnitude_, an invariant that is arguably simpler to compute and more interpretable in practice than persistent homology. Second, this holds across both MNIST and CIFAR10, and also across different architectures, which shows that even though the parameters differ considerably, the intrinsic dimension of different datasets can still be similar. To achieve good generalisation performance, it is important to keep the dimension as small as possible, without losing important representational features by collapsing them onto the same dimension.
### Analysing and visualising network trajectories
Next, we investigate the effect on the magnitude function as the network is trained for more iterations. For this purpose, we select four values of \(t\) across the range \([0,40]\), \(t\in\{1.36,6.78,16.95,30.51\}\), and compute the effective number of models \(\mathrm{Mag}(t\mathcal{W})\) to give us a glimpse into the space of training trajectories. We further fix the learning rate and vary the number of iterations. The experiments were performed with a learning rate of \(0.1\) for illustrative purposes; however, we note that this pattern holds for multiple learning rates. In Figure 4 we see the resulting magnitude value at each of the selected scales of \(t\), with colour depicting the number of iterations. We note that
Figure 3: **The magnitude dimension correlates with the test accuracy.** In plots (a-c), we see the magnitude dimension plotted against the test accuracy over a varying number of iterations, which are depicted in different colours, from red to yellow. We note that there is correlation between the magnitude dimension and the test accuracy across both MNIST and CIFAR-10, and across different architectures (FCN-5, FCN-7, AlexNet), which is stronger for MNIST than for CIFAR-10.
there is a concentration of red points in the upper left corner, indicating that higher test accuracy corresponds to a lower value of magnitude. Similarly, for the blue points, a higher value of magnitude is linked to worse test-set performance. This pattern holds across both MNIST and CIFAR10, implying that models with lower magnitude values generalise better. This result makes intuitive sense: when the magnitude value is small, the space looks like a smaller number of points, and as \(t\) increases, the magnitude converges to the cardinality of the set. The effective number of models in (a) equals approximately 2 for high test accuracy, which is an interesting phenomenon. This means that out of the 1000 models, for \(t=1.38\), the space looks like 2 models, indicating that the training trajectories form 2 large clusters. Surprisingly, this is not the case for the network trained for 1000 iterations, which has the lowest test accuracy; in that case, the space of models looks like 10 points and is more scattered.
Note that the model with high test accuracy has smaller magnitude even at larger scales, suggesting that the models exhibit some sort of clustering behavior. In the magnitude terminology, the effective number of distinct models
Figure 4: **The effective number of models correlates with the test accuracy. Here we see the cross-sectional evaluation of the magnitude curve at different values of \(t\). Each point represents the model trajectory over 1000 iterations, over which the magnitude is computed, as well as the test accuracy at the last model in the sliding window. The first row (plots (a-d)) shows the magnitude for MNIST. The second row (plots (e-h)) shows the plots for CIFAR10. We note that across all scales, there is similar pattern of correlation between the test accuracy and the magnitude values.**
Figure 5: **The magnitude dimension, persistent homology dimension and ground truth for an \(\alpha\)-Levy stable process. In plot (a) we compare the \(\dim_{\mathrm{Mag}}\mathcal{W}\) and \(\dim_{\mathrm{PH}}\mathcal{W}\) against the number of iterations. There is strong correlation of \(0.96\), with statistically significant p-value (\(p<0.05\)). In plot (b) we see the magnitude curves for different values of \(\alpha\). In plot (c) we see the magnitude curves of the \(\alpha\)-stable Levy process for multiple values of \(\alpha\).**
is smaller when the network generalises better; a "good optimisation trajectory" has a lower magnitude than a "bad optimisation trajectory"; the learning algorithm is implicitly optimising the magnitude, by clustering the models into a small number of clusters. Throughout our experiments we thus observed strong correlation between magnitude itself and the test accuracy, which persists across the different scales, learning rates and datasets. Therefore, the main insight from the results in Figure 4 is that a small effective number of models is good for generalisation and it indicates that there is a clustering pattern.
### Similarities between \(\mathrm{dim_{\mathrm{Mag}}}\mathcal{W}\) and \(\mathrm{dim_{\mathrm{PH}}}\mathcal{W}\)
In Figure 5(a), we demonstrate that there is a strong correlation between \(\mathrm{dim_{\mathrm{Mag}}}\mathcal{W}\) and \(\mathrm{dim_{\mathrm{PH}}}\mathcal{W}\) on MNIST. Although there is a good correspondence between the two, \(\mathrm{dim_{\mathrm{PH}}}\mathcal{W}\) is more difficult to interpret. In order to provide an interpretation, the authors had to take a step back and produce the persistence diagrams used to calculate the dimension. However, because the space has to be sampled to estimate the dimension, these calculations involve a large number of different diagrams, so linking \(\mathrm{dim_{\mathrm{PH}}}\mathcal{W}\) to the original trajectory is not straightforward. By contrast, there is a clearer connection between magnitude, the magnitude dimension, and the appearance of clusters of weight parameters.
Ablation study.In Figure 5 we see the results of an ablation study. We simulate data with known ground truth from a process similar to the weight trajectories, using a \(d\)-dimensional \(\alpha\)-stable Levy process Seshadri and West (1982), where we take \(d=10\), and compare the values of the magnitude dimension to the fractal dimension. Further, we compare it with the persistent homology dimension, which, as we have proven in Theorem 4.1, equals the magnitude dimension. The purpose of this study is to empirically evaluate this theoretical result. As seen in Figure 5(b), our dimension correlates highly with the ground truth, though it is consistently lower than the true dimension by approximately \(0.3\). Nonetheless, the statistically significant correlation indicates that the magnitude dimension is indeed very close to the fractal dimension for a process resembling the iterates of SGD. This gives us high confidence that the magnitude dimension measures exactly what it is supposed to measure.
## 7 Conclusion
Our work provides the first connection between magnitude and the generalisation error. We proved a theoretical result linking the magnitude dimension of the optimisation trajectories and the test accuracy, and verified it experimentally by exploring the evolution of magnitude across multiple experimental settings and datasets. We further expanded our understanding of the clustering property of the weight trajectory: through both theoretical and empirical results, we demonstrated that models with better generalisation error tend to cluster more, compared to models with worse test performance, which have higher magnitude and are more spread out. This phenomenon has been previously described by Birdal et al. (2021); Bruel-Gabrielsson et al. (2019), and our observations supplement it. In future work, we will improve the estimation of the magnitude dimension and investigate the use of magnitude as a regulariser. Moreover, using magnitude itself, we demonstrated that the performance of a neural network can be monitored without a validation set; future work can explore magnitude as an early-stopping criterion, similar to Rieck et al. (2019). Furthermore, in our empirical ablation study the magnitude dimension was consistently lower than the ground truth, which will be investigated further. In particular, we want to consider to what extent it is possible to improve on the generalisation bound in Theorem 4.2.
### Acknowledgements
The authors wish to thank Alexandros Keros and Emily Roff for helpful comments and insightful discussions that helped improve the manuscript. RA is supported by the United Kingdom Research and Innovation (grant EP/S02431X/1), UKRI Centre for Doctoral Training in Biomedical AI at the University of Edinburgh, School of Informatics and the International Helmholtz-Edinburgh Research School for Epigenetics (EpiCrossBorders). KL is supported by Helmholtz Association under the joint research school "Munich School for Data Science - MUDS."
|
2305.05163 | Cooperating Graph Neural Networks with Deep Reinforcement Learning for
Vaccine Prioritization | This study explores the vaccine prioritization strategy to reduce the overall
burden of the pandemic when the supply is limited. Existing methods conduct
macro-level or simplified micro-level vaccine distribution by assuming the
homogeneous behavior within subgroup populations and lacking mobility dynamics
integration. Directly applying these models for micro-level vaccine allocation
leads to sub-optimal solutions due to the lack of behavioral-related details.
To address the issue, we first incorporate the mobility heterogeneity in
disease dynamics modeling and mimic the disease evolution process using a
Trans-vaccine-SEIR model. Then we develop a novel deep reinforcement learning
to seek the optimal vaccine allocation strategy for the high-degree
spatial-temporal disease evolution system. The graph neural network is used to
effectively capture the structural properties of the mobility contact network
and extract the dynamic disease features. In our evaluation, the proposed
framework reduces 7% - 10% of infections and deaths than the baseline
strategies. Extensive evaluation shows that the proposed framework is robust to
seek the optimal vaccine allocation with diverse mobility patterns in the
micro-level disease evolution system. In particular, we find the optimal
vaccine allocation strategy in the transit usage restriction scenario is
significantly more effective than restricting cross-zone mobility for the top
10% age-based and income-based zones. These results provide valuable insights
for areas with limited vaccines and low logistic efficacy. | Lu Ling, Washim Uddin Mondal, Satish V, Ukkusuri | 2023-05-09T04:19:10Z | http://arxiv.org/abs/2305.05163v1 | # Cooperating Graph Neural Networks with Deep Reinforcement Learning for Vaccine Prioritization
###### Abstract.
This study explores vaccine prioritization strategies to reduce the overall burden of the pandemic when the supply is limited. Existing methods conduct macro-level or simplified micro-level vaccine distribution by assuming homogeneous behavior within subgroup populations and lacking mobility dynamics integration. Directly applying these models to micro-level vaccine allocation leads to sub-optimal solutions due to the lack of behavior-related details. To address the issue, we first incorporate mobility heterogeneity in disease dynamics modeling and mimic the disease evolution process using a Trans-vaccine-SEIR model. Then we develop a novel deep reinforcement learning method to seek the optimal vaccine allocation strategy for the high-degree spatial-temporal disease evolution system. A graph neural network is used to effectively capture the structural properties of the mobility contact network and extract the dynamic disease features. In our evaluation, the proposed framework reduces infections and deaths by 7% - 10% compared to the baseline strategies. Extensive evaluation shows that the proposed framework is robust in seeking the optimal vaccine allocation under diverse mobility patterns in the micro-level disease evolution system. In particular, we find the optimal vaccine allocation strategy in the transit usage restriction scenario is significantly more effective than restricting cross-zone mobility for the top 10% age-based and income-based zones. These results provide valuable insights for areas with limited vaccines and low logistic efficacy.
vaccine prioritization, mobility dynamics, reinforcement learning, graph neural networks, disease prevention +
Footnote †: ccs: Computing methodologies \(\rightarrow\) Simulation types and techniques; Reinforcement learning; \(\bullet\)**Mathematics of computing \(\rightarrow\)** Graph algorithms.
## 1. Introduction
The pandemic has posed unprecedented global health, economic, and social challenges and has consequently raised immediate concerns about effective vaccine allocation strategies (Bradbury et al., 2016; Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017). However, the vaccine supply has been limited in various phases of the pandemic. For example, vaccines were in inadequate supply during the early outbreaks (Goyal et al., 2017); a limited amount of vaccine boosters were available between the initial outbreak and widespread control (Goyal et al., 2017); and vaccines are in short supply in developing countries that rely on vaccine donations from developed nations (Goyal et al., 2017). Additionally, vaccine distribution faces challenges with logistics, trained personnel, and efficient scheduling. Based on these considerations, vaccines must be given to areas with the greatest need; thus, prioritizing vaccines to curb the spread and severity of infections is of great importance to policymakers.
The study of vaccine prioritization strategies has drawn widespread attention (Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017). Existing studies have explored macroscopic vaccine prioritization strategies at the national, state, and city levels. Although macroscopic strategies give an aggregated guide for stockpiles and top-level vaccine planning, vaccine prioritization at the micro-geographical level would help monitor and adjust long-term vaccine planning effectively. In particular, it is essential for influencing uptake by individuals via their nearby clinics. Indeed, studies (Goyal et al., 2017; Goyal et al., 2017) suggested that distance is a prime factor influencing vaccine refusal or hesitancy in areas of South Africa and India. Besides, Mazar et al. (Mazar et al., 2017) found that being 0.25 miles vs. 5 miles from the vaccine site (CVS or Walgreens) was associated with a 9% lower vaccination rate during COVID-19 in California. These observations indicate that behavior at a micro-geographical scale is needed when designing vaccine prioritization strategies.
A major challenge in vaccine prioritization strategies is the simulation and prediction of disease propagation. In addition to epidemiological factors, most studies focus on the impact of demographic characteristics such as age (Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017) and occupations (Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017) (e.g., essential workers recommended by the Centers for Disease Control and Prevention (CDC)). Some studies included additional features like comorbidity status (Goyal et al., 2017), pregnancy status (Goyal et al., 2017), and the contact pattern (Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017), which is generally assumed to be homogeneous within the population subgroup. Meanwhile, extensive social activities and mobility promote the spatial and temporal heterogeneity of disease propagation (Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017). Activity contagion, at work, school, entertainment, and engagement, and travel contagion from close contact with other commuters via travel modes would significantly amplify contagion risk and promote disease propagation. The interaction between mobility risk and disease dynamics complicates spatiotemporal vaccine distribution. However, few studies have integrated the heterogeneity of mobility contact in disease dynamics when seeking the optimal vaccine prioritization strategy. The reason might be that micro-level mobility data is hard to obtain and couple with disease dynamics, leading to new challenges in understanding this problem.
Current computational approaches to vaccine distribution can be divided into three groups: disease compartmental models (Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017), agent-based models (Goyal et al., 2017; Goyal et al., 2017) (ABM), and deep learning models (Goyal et al., 2017; Goyal et al., 2017). These models assume homogeneity in population subgroups to decrease the computational burden (Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017). However, that comes at the cost of reduced prediction accuracy for disease propagation and reduced credibility when evaluating the vaccine allocation strategy. Although deep learning methods are able to model complex systems, they are sensitive to diverse disease parameters. More importantly, the interpretability and credibility of vaccine distribution from deep learning methods may be greatly reduced without incorporating behavior-related heterogeneity in disease dynamics modeling.
**Our contributions.** Given the challenges discussed and the need to reduce the pandemic burden when vaccine supply is limited, optimizing vaccine distribution is of significant interest to policymakers. Thus, studying micro-geographical level vaccine allocation strategies is crucial in determining where, when, and how many vaccines to distribute.
To address these issues, we first develop a Trans-vaccine-SEIR model (SEIR refers to the susceptible-exposed-infected-recovered model) by extending the prior Trans-SEIR model (Wang et al., 2017) to incorporate the impact of vaccines. Our model describes how, in the presence of a certain vaccine distribution strategy in the census block level, the disease propagates in a temporally varying graph whose nodes represent the census zones in the city, and edges describe the mobility connections between them. We employ a Graph Neural Network (GNN)-based policy approximator to yield vaccine distribution strategy as a function of the current state of the graph and train the GNN via an actor-critic-based reinforcement learning (RL) algorithm.
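The full Trans-vaccine-SEIR specification is not reproduced here, but its core mechanism can be sketched with a single-zone SEIR model in which a fixed daily dose budget moves susceptibles directly into the removed compartment; the actual model additionally couples many such zones through the mobility contact network. All parameter values below are illustrative, not calibrated:

```python
def simulate_seir(beta, sigma, gamma, n, i0, doses_per_day, days=200, dt=0.1):
    """Forward-Euler SEIR with a hard daily vaccination budget.

    beta: transmission rate, sigma: incubation rate, gamma: recovery rate.
    Returns the peak number of simultaneously infectious individuals.
    """
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    peak = i
    for _ in range(round(days / dt)):
        new_exposed = beta * s * i / n * dt
        new_infectious = sigma * e * dt
        new_removed = gamma * i * dt
        # Vaccinate after infections are drawn, never below zero susceptibles.
        vax = max(0.0, min(doses_per_day * dt, s - new_exposed))
        s += -new_exposed - vax
        e += new_exposed - new_infectious
        i += new_infectious - new_removed
        r += new_removed + vax
        peak = max(peak, i)
    return peak

# Illustrative parameters only (R0 = beta / gamma = 4).
peak_no_vax = simulate_seir(0.4, 0.2, 0.1, n=10_000, i0=10, doses_per_day=0)
peak_vax = simulate_seir(0.4, 0.2, 0.1, n=10_000, i0=10, doses_per_day=100)
print(peak_no_vax, peak_vax)  # vaccination lowers the infection peak
```

In the multi-zone setting, the RL agent's action is how the daily dose budget is split across zones, and the GNN policy reads the current compartment states off the mobility graph to produce that split.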
There are two novelties in this study. First, we integrate micro-geographical level mobility into disease dynamics modeling and propose the Trans-vaccine-SEIR model to mimic disease propagation. Second, we propose an RL-GNN framework to find the optimal vaccine allocation strategy. Within the framework, deep RL is used to find the optimal solution in a high-degree spatiotemporal disease evolution graph. In particular, the GNN is regarded as a policy approximator that effectively captures the structural properties of the mobility contact network and extracts the dynamic disease features. Our RL-GNN framework overcomes the limitations faced by SEIR and ABM optimization in addressing complex system dynamics problems.
We verify the effectiveness of our disease simulator with real-world data and show that the optimal vaccine allocation strategy from the RL-GNN framework reduces infections and deaths by 7% and 10%, respectively, compared to the baseline strategies. We also examine the effectiveness of the proposed RL-GNN framework under multiple non-pharmaceutical interventions (NPIs). The optimal vaccine allocation strategy outperforms the baseline strategies and decreases infections and deaths by 10% and 17%, respectively. These experiments consistently show our proposed framework's superiority and robustness under diverse mobility patterns. In particular, among various NPIs, we find the optimal vaccine allocation strategy under transit system usage restriction (defined over bus, ride-sharing vehicle, taxi, and van) is more effective than restricting cross-zone mobility in the top 10% oldest/youngest/highest-income/lowest-income zones.
## 2. Related Work
Vaccine prioritization has sparked an unprecedented discussion during the pandemic, including disease dynamics modeling and the optimization of the vaccine allocation strategy.
It is well understood that disease dynamics modeling is the key to evaluating a vaccine prioritization strategy. Based on the study scope, vaccine prioritization strategies that mimic the reality of disease propagation can be classified into two groups. Extensive studies have addressed macroscopic-level vaccine allocation strategies at the nation (Shen et al., 2017; Wang et al., 2017; Wang et al., 2017), state (Wang et al., 2017), region (Wang et al., 2017), or city (Wang et al., 2017; Wang et al., 2017) level. Macroscopic vaccine allocation is essential for regional vaccine allocation planning; the federal-level vaccine allocation strategy in the US is centered around the framework developed by the National Academies of Sciences, Engineering, and Medicine (NASEM) (Wang et al., 2017). However, these strategies rely strongly on homogeneity assumptions within population subgroups, such as age (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) and occupation (Wang et al., 2017; Wang et al., 2017), and ignore behavior-related heterogeneity among individuals, which might introduce inaccurate estimation. In addition to macroscopic strategies, a growing number of micro-level vaccine allocation studies have drawn attention in recent years. Micro-level studies focus on risk approximation using social networks (Wang et al., 2017), contact patterns (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), and social distance (Wang et al., 2017; Wang et al., 2017) for population subgroups. These methods are selected based on the trade-off between prediction accuracy and computational complexity: to reduce computational complexity, they either ignore the internal epidemiological dynamics or limit the spatial and temporal heterogeneity of physical mobility when modeling disease dynamics.
However, non-epidemiological factors (Wang et al., 2017; Wang et al., 2017), such as mobility and activity, significantly influence disease propagation. Ignoring these factors would hurt the accuracy of disease propagation simulation and, thereby, the credibility of the vaccine prioritization strategy.
Vaccine allocation optimization models have attracted widespread attention. These methods fall into three classes. The first is based on deterministic or stochastic disease compartmental models (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), which use a system of differential equations to represent the disease dynamics. This approach suits small, simplified systems, and the optimization formulation can be solved by closed-form solutions (Wang et al., 2017), brute-force search (Wang et al., 2017), or greedy strategies (Wang et al., 2017). The second class comprises network or ABM models (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). These models allow more group-level behavioral variation. Since optimizing epidemic outcomes in these models is much harder than in the conventional SEIR model, they tend to simplify the network structure by assuming a static topology or a homogeneous contact matrix when modeling the disease transmission rate. Besides, they usually construct a limited set of strategies over population subgroups to reduce computational complexity. The last class is based on deep learning methods such as deep RL (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). Deep learning models excel at modeling complex systems but are sensitive to their parameters. Studies that combine deep learning models with disease compartment models such as SEIR assume homogeneous behavior within population subgroups and ignore the mobility dynamics in the compartment models, which reduces the interpretability of the disease dynamics and the credibility of the resulting vaccine allocation strategy.
As suggested by prior studies (Wang et al., 2017; Wang et al., 2017), a micro-geographical-level vaccine prioritization strategy is needed to improve the vaccination rate and to monitor long-term vaccine planning effectively. However, few studies address micro-level spatiotemporal vaccine allocation, and those that do ignore the impacts of mobility dynamics, which would significantly underestimate disease propagation and introduce biases into strategy evaluation. Although deep learning models enlarge the search space and improve the computational efficiency of finding the optimal vaccine allocation strategy, they fall short in considering the heterogeneity of disease dynamics, hindering the interpretability of the results. In this study, we address these issues in the proposed framework and examine its robustness under the diverse mobility patterns induced by NPI schemes.
## 3. System dynamics framework
This section presents the details of the problem formulation and the designed framework.
### Trans-vaccine-SEIR Formulation
#### 3.1.1. Notation
The notation and descriptions for the Trans-vaccine-SEIR model are listed in Table 1.
#### 3.1.2. Model details
In this study, we address the impacts of mobility dynamics on disease propagation in urban areas. In particular, we incorporate the mobility contact patterns between different census block zones into our model and circumvent the overly simplified homogeneity assumption used in the conventional SEIR model. We integrate micro-level mobility heterogeneity into the disease dynamics by following the prior Trans-SEIR model (Shen et al., 2017). Infected individuals spread the virus through physical contact both while traveling to a destination and during the activities they perform after arriving. Travel and activity thus amplify the probability of exposure to the disease, and we denote \(h\) and \(f\) as the travel and activity contagion risks, respectively. We define two contagion periods each day. The first contagion period occurs when the residents of zone \(i\) are influenced by the activity contagion within zone \(i\) and by the travel contagion while traveling to zone \(j\). The second contagion period occurs when the residents of zone \(i\), having traveled to zone \(j\), are influenced by the activity contagion within zone \(j\) and by the travel contagion on their trip back to zone \(i\).
We extend the Trans-SEIR model into the Trans-vaccine-SEIR model by incorporating the effect of the vaccine. The superscript \(u\) represents non-vaccinated states, and \(v\) represents vaccinated states. Susceptible individuals (\(S\)), comprising vaccinated \(S^{v}\) and non-vaccinated \(S^{u}\) individuals, are healthy individuals who have not been exposed to the disease. Susceptible individuals \(S\) shift to exposed individuals \(E\) under the daily activity contagion risk \(f\) and travel contagion risk \(h\). The exposed individuals \(E\), including vaccinated \(E^{v}\) and non-vaccinated \(E^{u}\) individuals, have been exposed to the virus but are not yet infectious; they shift to the infectious state at the end of the incubation time \(\frac{1}{\sigma}\). Infected individuals are either symptomatic (\(I_{s}^{u}\) or \(I_{s}^{v}\)) or asymptomatic (\(I_{a}^{u}\) or \(I_{a}^{v}\)). The parameter \(q\) represents the fraction of infected individuals who are symptomatic. Symptomatic individuals (\(I_{s}^{u},I_{s}^{v}\)) either recover (\(R^{u},R^{v}\)) or die (\(D^{u},D^{v}\)) after \(\frac{1}{\gamma_{s}}\) time. The infection fatality ratio (IFR) represents the death ratio among infected individuals. Asymptomatic individuals (\(I_{a}^{u},I_{a}^{v}\)) recover from the disease (\(R^{u},R^{v}\)) at recovery rate \(\gamma_{a}\). In the Trans-vaccine-SEIR model, the mobility dynamics capture population movement across regions, and the disease parameters govern the population transitions through the contagion process. An overview of the Trans-vaccine-SEIR framework is presented in Figure 1.
The mathematical formulations of the activity and travel contagion risks are presented below.
\[f_{i}=\beta^{a}k_{i}\frac{S_{i}[(f_{a}I_{a,i}+I_{s,i})+\sum_{k=1}^{N}(f_{a}I_{a,ki}+I_{s,ki})]}{N_{i}} \tag{1}\]
\begin{table}
\begin{tabular}{l l} \hline Notation & Description \\ \hline \(S_{ij}\) & Susceptible population who are residents of zone \(i\) and currently in zone \(j\). Similar notation for \(E_{ij}\): exposed (latent) population, \(I_{ij}\): infected population, \(R_{ij}\): recovered population, \(D_{ij}\): death population. \\ \(S_{i}^{t}\) & Susceptible population who are residents of zone \(i\); \(t\in(u,v)\), where \(u\) refers to non-vaccinated and \(v\) refers to vaccinated. Similar notations are used for the \(E\), \(I\), \(R\), and \(D\) populations. \\ \(I_{a,i}^{t},I_{s,i}^{t}\) & Vaccinated (\(t=v\)) and non-vaccinated (\(t=u\)) infected population at zone \(i\) who are asymptomatic (\(a\)) or present symptoms (\(s\)). \\ & **Fixed parameters** \\ \(N_{i}\) & Total population of zone \(i\) \\ \(d\) & Travel mode, including low- and median-capacity travel modes \\ \(\beta^{a}\) & Disease transmission rate per valid contact during activity \\ \(\beta_{d}^{t}\) & Disease transmission rate per valid contact during travel using mode \(d\) \\ \(c^{d}\) & The ratio of people who choose travel mode \(d\) \\ \(\frac{1}{\sigma}\) & The expected latent duration for people remaining in \(E\) before moving to \(I\) \\ \(\frac{1}{\gamma_{e}}\) & The expected recovery duration for infected individuals who present symptoms (\(e\in s\)) or are asymptomatic (\(e\in a\)) remaining in \(I\) before moving to \(R\) \\ \(k_{ij}\) & Expected number of valid contacts for residents of \(i\) who are currently at zone \(j\) \\ \(k_{ij,kl}^{d}\) & Expected number of valid contacts for travelers from \(i\) to \(j\) who come across travelers from \(k\) to \(l\) using the same travel mode \(d\) \\ \(g_{i}\) & Total departure rate of zone \(i\) \\ \(m_{ij}\) & The rate of movement from zone \(i\) to zone \(j\), where \(\sum_{j}m_{ij}=1\) \\ \(r_{ij}\) & The return rate from zone \(j\) to zone \(i\) \\ \(f_{i}\) & Activity contagion risk at zone \(i\) \\ \(f_{ij}\) & Activity contagion risk for residents of zone \(i\) currently in zone \(j\) \\ \(h_{\overline{ij}},h_{\overline{ji}}\) & Travel contagion risk from zone \(i\) to zone \(j\) and from zone \(j\) to zone \(i\) \\ \(\delta\) & Vaccine effectiveness \\ \(q\) & Clinical fraction: the percentage of the infected population presenting symptoms \\ \(IFR\) & Infection fatality rate \\ \(f_{a}^{t}\) & Relative contagiousness of truly asymptomatic individuals for non-vaccinated (\(t\in u\)) and vaccinated (\(t\in v\)) people \\ \(V_{i}\) & Assigned vaccine in zone \(i\) \\ \hline \end{tabular}
\end{table}
Table 1. Variables and parameters in Trans-vaccine-SEIR formulation
\[f_{ij}=\beta^{a}k_{ij}\frac{S_{ij}[(f_{a}I_{a,j}+I_{s,j})+\sum_{k=1}^{N}(f_{a}I_{a,kj}+I_{s,kj})]}{N_{j}} \tag{2}\]
\[h_{\overline{ij}}=\sum_{d=1}^{D}c_{ij}^{d}\beta_{d}^{t}S_{ij}\Big[\sum_{k=1}^{P}\sum_{l=1}^{P}\frac{k_{kl,ij}^{d}(f_{a}I_{a,kl}+I_{s,kl})}{N_{kl}}\Big] \tag{3}\]
Equations 1 and 2 describe the activity contagions, where healthy residents \(S\) become infected through contact with infectious residents \(I\) of zone \(i\) and with visitors from all other zones. The difference between equations 1 and 2 is the location of infection for the healthy residents and the corresponding contacts. Equations 3 and 4 give a mathematical representation of the travel contagion. Equation 3 describes the healthy population \(S\) who become infected while traveling from their residence to activity locations, and equation 4 represents healthy residents \(S\) of zone \(i\) who are infected by the infectious population \(I\) elsewhere during travel.
Then, the mathematical formulations of the disease compartments in the Trans-vaccine-SEIR model are expressed below (\(t\in(u,v)\)). The susceptible population compartment is presented in equation 5:
\[\begin{split}\frac{dS_{i}^{t}}{dt}&=\sum_{j=1}^{P}r _{ij}S_{ij}^{t}-g_{i}S_{i}^{t}-f_{i}^{t}-\sum_{j=1}^{P}h_{\overline{ij}}^{t}\\ \frac{dS_{ij}^{t}}{dt}&=g_{i}m_{ij}S_{i}^{t}-r_{ij}S_ {ij}^{t}-h_{\overline{ij}}^{t}-f_{ij}^{t}\end{split} \tag{5}\]
The exposed population compartment is presented in equation 6:
\[\begin{split}\frac{dE_{i}^{t}}{dt}&=\sum_{j=1}^{P}r _{ij}E_{ij}^{t}+\sum_{j=1}^{P}h_{\overline{ij}}+f_{i}^{t}-g_{i}E_{i}^{t}- \sigma_{i}E_{i}^{t}\\ \frac{dE_{ij}^{t}}{dt}&=g_{i}m_{ij}E_{i}^{t}+f_{ij }^{t}+h_{\overline{ij}}^{t}-r_{ij}E_{ij}^{t}-\sigma_{i}E_{ij}^{t}\end{split} \tag{6}\]
The infected population compartment is presented in equation 7:
\[\begin{split}\frac{dI_{a,i}^{t}}{dt}&=\sum_{j=1}^{P}r_{ij}I_{a,ij}^{t}+\sigma_{i}(1-q)E_{i}^{t}-g_{i}I_{a,i}^{t}-\gamma_{a}I_{a,i}^{t}\\ \frac{dI_{s,i}^{t}}{dt}&=\sum_{j=1}^{P}r_{ij}I_{s,ij}^{t}+\sigma_{i}qE_{i}^{t}-g_{i}I_{s,i}^{t}-\gamma_{s}I_{s,i}^{t}\\ \frac{dI_{a,ij}^{t}}{dt}&=\sigma_{i}(1-q)E_{ij}^{t}+g_{i}m_{ij}I_{a,i}^{t}-r_{ij}I_{a,ij}^{t}-\gamma_{a}I_{a,ij}^{t}\\ \frac{dI_{s,ij}^{t}}{dt}&=\sigma_{i}qE_{ij}^{t}+g_{i}m_{ij}I_{s,i}^{t}-r_{ij}I_{s,ij}^{t}-\gamma_{s}I_{s,ij}^{t}\end{split} \tag{7}\]
The recovered population compartment is presented in equation 8:
\[\begin{split}\frac{dR_{i}^{t}}{dt}&=\sum_{j=1}^{P}r_{ij}R_{ij}^{t}+\gamma_{a}I_{a,i}^{t}+\gamma_{s}(1-IFR)I_{s,i}^{t}-g_{i}R_{i}^{t}\\ \frac{dR_{ij}^{t}}{dt}&=g_{i}m_{ij}R_{i}^{t}+\gamma_{a}I_{a,ij}^{t}+\gamma_{s}(1-IFR)I_{s,ij}^{t}-r_{ij}R_{ij}^{t}\end{split} \tag{8}\]
The death compartment is presented in equation 9:
\[\frac{dD_{i}^{t}}{dt}=\gamma_{s}\,IFR\,I_{s,i}^{t},\qquad\frac{dD_{ij}^{t}}{dt}=\gamma_{s}\,IFR\,I_{s,ij}^{t} \tag{9}\]
Equations 5 - 9 describe how mobility enters the disease dynamics, as shown in Figure 1. These equations are consistent with the mobility contact pattern from location \(i\longrightarrow j\longrightarrow i\) and with disease dynamics that follow the contagion process.
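As a concrete illustration, the coupled compartment updates above can be integrated numerically. The sketch below is a minimal Euler-step integration for two zones with hypothetical parameter values; for brevity it collapses the vaccinated/non-vaccinated split and the travel compartments \(S_{ij}\) into a single stratum per zone, so it illustrates the bookkeeping rather than the full model.

```python
# Minimal Euler-step sketch of the compartment dynamics for two zones.
# Parameter values and the mixing matrix are hypothetical illustrations.

def step(state, params, dt=1.0):
    """Advance compartments [S, E, Ia, Is, R, D] of each zone by one Euler step."""
    beta_a = params["beta_a"]    # activity transmission rate
    sigma = params["sigma"]      # 1 / incubation time
    q = params["q"]              # clinical (symptomatic) fraction
    gamma_a = params["gamma_a"]  # asymptomatic recovery rate
    gamma_s = params["gamma_s"]  # symptomatic removal rate
    ifr = params["IFR"]          # infection fatality ratio
    mix = params["mix"]          # mobility mixing matrix
    new = []
    for i in range(len(state)):
        S, E, Ia, Is, R, D = state[i]
        N = sum(state[i])
        # activity contagion: susceptibles of zone i meet infectious people of
        # every zone, weighted by the mobility mixing matrix
        force = sum(mix[i][j] * (state[j][2] + state[j][3])
                    for j in range(len(state))) / N
        f = beta_a * S * force
        new.append([
            S + dt * (-f),
            E + dt * (f - sigma * E),
            Ia + dt * (sigma * (1 - q) * E - gamma_a * Ia),
            Is + dt * (sigma * q * E - gamma_s * Is),
            R + dt * (gamma_a * Ia + gamma_s * (1 - ifr) * Is),
            D + dt * (gamma_s * ifr * Is),
        ])
    return new

params = {"beta_a": 0.03, "sigma": 1 / 5, "q": 0.59,
          "gamma_a": 1 / 7, "gamma_s": 1 / 10, "IFR": 0.009,
          "mix": [[0.8, 0.2], [0.3, 0.7]]}
state = [[9900.0, 50.0, 30.0, 20.0, 0.0, 0.0],
         [4950.0, 25.0, 15.0, 10.0, 0.0, 0.0]]
for _ in range(30):
    state = step(state, params)
```

Because every outflow term reappears as an inflow elsewhere, the total population of each zone is conserved, which is a useful sanity check when implementing equations of this kind.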
### The RL-GNN Framework
As described in the Trans-vaccine-SEIR model, we represent the census block zones of the city as a graph \(G(t)=(V,E(t))\), where \(V\) is a set of fixed nodes representing the zones and \(E(t)=\{e_{ij}(t)\}\) is the set of edges between zones \(i\) and \(j\), \((i,j)\in V\), at time \(t\). An edge represents the mobility connection between zones. Each node \(i\) is associated with disease compartment features \(\zeta_{i}(t)\), which are random variables that vary in time. Each edge \(e_{ij}(t)\) is associated with mobility-related disease compartment features \(\Psi_{ij}(t)\). The node and edge states at time \(t\) depend on their own states, their neighbors' states, and the mobility interaction between neighbors at time \(t-1\). Please refer to the supplementary materials for node and edge feature details.
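The graph construction can be sketched as follows; the field names (`zeta`, `psi`, `trips`) are illustrative stand-ins for the node and edge feature sets described in the supplementary materials.

```python
# Sketch of the evolving mobility graph G(t) = (V, E(t)): nodes carry a zone's
# disease-compartment features and edges carry mobility-weighted features.
# All field names and values here are hypothetical illustrations.

def build_graph(zones, flows, t):
    """zones: {zone_id: compartment dict}; flows: {(i, j): trips at time t}."""
    nodes = {i: {"zeta": dict(z), "t": t} for i, z in zones.items()}
    # keep only edges with actual mobility flow at time t
    edges = {(i, j): {"psi": {"trips": f}, "t": t}
             for (i, j), f in flows.items() if f > 0}
    return nodes, edges

zones = {0: {"S": 900, "E": 40}, 1: {"S": 800, "E": 10}}
flows = {(0, 1): 120, (1, 0): 95, (0, 0): 0}
nodes, edges = build_graph(zones, flows, t=3)
```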
The objective is to find an optimal vaccine allocation strategy that minimizes the total infected population \((I,R,D)\) and the death population \(D\) over time in a high-degree graph network. To seek an optimal vaccine allocation in a high-degree graph with high-dimensional node and edge features, we propose an RL-GNN framework and apply a deep RL approach to solve the system dynamics problem. RL (Zhou et al., 2017) has the advantage of dealing with uncertainty in a complex system and making decisions from incomplete information as the environment changes, which suits learning the optimal solution in an evolving graph.
Figure 2 shows the structure of the RL-GNN framework. Within the framework, the Trans-vaccine-SEIR module is the environment simulator, and a GNN serves as the agent module. The GNN-based agent module first observes the state \(S(t)\) of the graph environment \(G(t)\) in the Trans-vaccine-SEIR module. Based on the observed state, the GNN-based agent makes a vaccine
Figure 1. Trans-vaccine-SEIR model: mobility contact risk based disease dynamics under vaccine treatments
allocation \(\pi(t)\), which changes the states of the graph environment. The Trans-vaccine-SEIR module mimics disease propagation based on the mobility dynamics and vaccine impact, evaluates the vaccine allocation action from the GNN-based agent module, and outputs a reward \(r(t)\). The proximal policy optimization (PPO) optimizer in the RL framework then optimizes the agent's action based on the reward \(r(t)\). The architecture of the GNN-based agent module can be found in the supplementary materials.
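The agent-environment interaction can be sketched as below. The GNN policy and the PPO update are stubbed out: `score_zones` is a hypothetical stand-in for the agent network (here it simply scores zones by their exposed population), and a softmax turns the scores into an allocation of the daily vaccine supply.

```python
# Sketch of one agent step in the RL-GNN loop; the real agent is a GNN trained
# with PPO, which is omitted here. All names and values are illustrative.
import math

def score_zones(state):
    # stand-in for the GNN agent: favor zones with more exposed population
    return [zone["E"] for zone in state]

def allocate(state, supply):
    """Softmax the per-zone scores into an allocation summing to `supply`."""
    logits = score_zones(state)
    m = max(logits)
    w = [math.exp(x - m) for x in logits]  # shift for numerical stability
    z = sum(w)
    return [supply * x / z for x in w]

state = [{"S": 900.0, "E": 40.0}, {"S": 800.0, "E": 10.0}]
action = allocate(state, supply=100.0)
```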
To improve the stability of the gradient descent direction in the PPO optimizer, we present our reward function:
\[\begin{split} R_{t}=\beta_{E}[(E_{t-1}-E_{t})-(E_{t-1}^{0}-E_{t} ^{0})]+\\ \beta_{I}[(I_{t-1}-I_{t})-(I_{t-1}^{0}-I_{t}^{0})]+\\ \beta_{R}[(R_{t-1}-R_{t})-(R_{t-1}^{0}-R_{t}^{0})]+\\ \beta_{D}[(D_{t-1}-D_{t})-(D_{t-1}^{0}-D_{t}^{0})]\end{split} \tag{10}\]
Where \(E_{t}^{0},I_{t}^{0},R_{t}^{0},D_{t}^{0}\) are the disease compartments under the baseline strategy; \(E_{t}\), \(I_{t}\), \(R_{t}\), \(D_{t}\) are the disease compartments under the current vaccine allocation strategy decided by the agent; and \(\beta_{E}\), \(\beta_{I}\), \(\beta_{R}\), \(\beta_{D}\) are the coefficients for each disease compartment. In practice, \(\beta_{E}=1,\beta_{I}=5,\beta_{R}=100,\beta_{D}=500\). Naively minimizing the daily increase in infections (\(I_{t}-I_{t-1}\), \(R_{t}-R_{t-1}\), \(D_{t}-D_{t-1}\)) makes the optimization gradient noisy, because infections at time \(t\) appear in the simulator with a one- or two-day delay. The daily reduction in the exposed population (\(E_{t}-E_{t-1}\)) directly reflects the vaccine's influence at time \(t\), since vaccinated susceptible individuals \(S_{t}\) are protected rather than transferred to the exposed state. To stabilize the optimization gradient direction, we add the population-based vaccine allocation strategy as the baseline. In practice, this reward function design significantly improves the strategy's effectiveness.
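The reward in Eq. 10 can be rendered directly; the compartment totals below are made up for illustration.

```python
# Direct rendering of Eq. 10: daily compartment changes under the agent's
# allocation, measured against the population-based baseline strategy.
BETA = {"E": 1.0, "I": 5.0, "R": 100.0, "D": 500.0}

def reward(prev, cur, prev_base, cur_base):
    """prev/cur: dicts of E, I, R, D totals for the agent's strategy;
    prev_base/cur_base: the same totals for the baseline strategy."""
    return sum(
        BETA[c] * ((prev[c] - cur[c]) - (prev_base[c] - cur_base[c]))
        for c in ("E", "I", "R", "D"))

# hypothetical one-day totals: the agent reduces E faster than the baseline
prev = {"E": 100.0, "I": 50.0, "R": 20.0, "D": 5.0}
cur = {"E": 90.0, "I": 52.0, "R": 22.0, "D": 5.0}
prev_base = {"E": 100.0, "I": 50.0, "R": 20.0, "D": 5.0}
cur_base = {"E": 95.0, "I": 53.0, "R": 23.0, "D": 6.0}
r = reward(prev, cur, prev_base, cur_base)
```

A strategy that beats the baseline on every compartment yields a positive reward, so the baseline acts as a control variate that removes the shared day-to-day trend from the gradient signal.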
Our setting differs from other work (Beng et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) in three critical aspects. First, we integrate census-block-level mobility dynamics into the conventional SEIR model to build the environment simulator and conduct vaccine allocation over micro-geographic zones, which addresses the heterogeneity of mobility contact risk in disease evolution and provides a more realistic evaluation system for vaccine allocation. Second, we consider more detailed vaccine-related parameters (e.g., age-based vaccine effectiveness) and disease infection parameters (e.g., IFR, symptomatic ratio) based on demographic information, capturing demographic heterogeneity in disease transmission. Third, we do not assume a node can be quarantined from the graph: isolating a high-degree node from the network could impair the transportation network quality and affect network connectivity.
## 4. Trans-vaccine-SEIR model calibration
We use the disease data in Marion county from January to August 2021 (during the COVID-19 period) as a case study. Zones at the census block level ensure the vaccine allocation strategy has a high resolution for the general public. There are 253 census blocks in Marion county with a total population of \(N=957{,}337\). The disease parameters are calibrated on Marion county data from before December 2020 to reflect the pre-vaccine state. We classify the population into four age classes, 0-14, 15-64, 65-74, and 75\({}_{+}\) years old, to capture the age heterogeneity of the disease parameters. A summary of parameter descriptions and sources can be found in the supplementary materials.
**Infection**. As suggested by Thompson et al. (2020), vaccine effectiveness is not 100%. Exposed individuals who are vaccinated (\(E^{v}\)) may move into the infected compartment and start to spread the virus at the end of the incubation time of \(\frac{1}{\sigma}\) days. A clinical fraction \(q\) of exposed individuals (both non-vaccinated and vaccinated) show symptoms after being infected (\(I_{s}^{u},I_{s}^{v}\)), while the others are asymptomatic (\(I_{a}^{u},I_{a}^{v}\)). The asymptomatic rate is hard to obtain directly for multiple reasons (e.g., under-reporting). We follow the assumption (Zones, 2018) that the clinical fraction changes linearly with age, \(q_{i,a}=q_{i,75_{+}}-w(75_{+}-x_{i,a})\). The symptomatic rate for people over 75 years old in zone \(i\) is \(q_{i,75_{+}}\in(70\%,100\%)\); in the simulator we select \(q_{i,75_{+}}=85\%\). \(w\) is the reduction per one-year decrease in age, which we set to 0.7 (Zones, 2018), and \(x_{i,a}\) is the median age of age group \(a\). Let \(n_{i,a}\) be the number of individuals in age group \(a\) and \(n_{i}\) the total population of census block \(i\); the symptomatic rate for census block \(i\) is then \(q_{i}=\frac{\sum_{a}q_{i,a}n_{i,a}}{n_{i}}\). The mean symptomatic rate in Marion county is 59%. The mean symptomatic rates for each age group are 0-14: 37%, 15-64: 60%, 65-74: 81%, and 75\({}_{+}\): 85%. A visualization of the symptomatic rate in each census block zone can be found in the supplementary materials.
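The age-linear clinical fraction and its population-weighted zone average can be sketched as follows; the median ages and age-group counts are hypothetical illustrations, not the paper's data.

```python
# Age-linear symptomatic rate from the text: q_a = q_75 - w * (75 - x_a),
# with w = 0.7 per year of age and q_75 = 85%. Median ages and the zone's
# age counts below are hypothetical.
Q_75, W = 85.0, 0.7
median_age = {"0-14": 7.0, "15-64": 40.0, "65-74": 69.5, "75+": 75.0}

q_by_age = {a: Q_75 - W * (75.0 - x) for a, x in median_age.items()}

# population-weighted symptomatic rate for one census block zone
counts = {"0-14": 2000, "15-64": 6500, "65-74": 1000, "75+": 500}
q_zone = sum(q_by_age[a] * n for a, n in counts.items()) / sum(counts.values())
```

With these illustrative medians, the per-group rates land near the reported 37%, 60%, 81%, and 85%, and the weighted zone average comes out close to the county mean of 59%.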
We use \(f_{a}^{u}\) and \(f_{a}^{v}\) to represent the relative contagiousness of asymptomatic infected individuals (\(I_{a}^{u}\), \(I_{a}^{v}\)) who are non-vaccinated and vaccinated, respectively. We assume average recovery times of \(\frac{1}{\gamma_{a}}\) days for asymptomatic individuals and \(\frac{1}{\gamma_{s}}\) days for symptomatic individuals (Beng et al., 2017). Besides, we assume the virus will not re-infect individuals who have recovered from the disease within eight months.
**Infection fatality rate**. We divide the age-based daily new deaths by the daily new cases in Indiana to estimate the age-dependent infection fatality rate (IFR). The disease and demographic data cover April 2020 to July 2020 in Indiana. The resulting age-based IFRs for Marion county are \(IFR_{0-14}=0,IFR_{15-64}=0.009,IFR_{65-74}=0.102,\) and \(IFR_{75_{+}}=0.263\). The census block \(IFR_{i}\) is adjusted
Figure 2. The proposed controllable disease propagation framework
based on the demographic information within the census block zone: \(IFR_{i}=\frac{\sum_{a}IFR_{a}\,n_{i,a}}{n_{i}}\), where \(IFR_{a}\) is the state-level IFR of age group \(a\). A visualization of the census-block-zone IFR in Marion county can be found in the supplementary materials.
**Transmission rate.** The disease transmission rate is a function of the reproduction number (\(R_{0}\)) and can be calculated by the next-generation matrix approach (Han et al., 2017). According to the Trans-SEIR model (Shi et al., 2017), \(R_{0}\) is upper-bounded by the highest contagion rates over the travel segments and activity locations. We assume the activity transmission rate (\(\beta^{a}\)) and travel transmission rates (\(\beta^{t}_{d}\)) are homogeneous in Marion county, expressed as \(\beta^{a}=\frac{R_{0}(\rho+\rho)(\tau+\rho)}{\sigma},R_{0}\in(R_{min},R_{max})\) and \(\sum_{d}\beta^{t}_{d}=\frac{\beta^{a}}{2},d\in(1,2)\). Here \(d\) is the travel mode, which includes a low-capacity mode (e.g., taxi and ride-sharing vehicles) and a median-capacity mode (e.g., bus and van) in Marion county. \(R_{0}\) is estimated from COVID-19 cases from March 2020 to July 2021 in Marion county; details can be found in the supplementary materials. The transmission rates are then estimated as \(\beta^{a}\in(0.016,0.05)\), \(\beta^{t}_{d=1}\in(0.0006,0.002)\), and \(\beta^{t}_{d=2}\in(0.008,0.025)\).
**Vaccine allocation.** Vaccines are assigned to non-vaccinated susceptible individuals \(S^{u}\), i.e., \(\frac{dS_{i}^{u}}{dt}=-V_{i}\), where \(V_{i}\) is the amount of assigned vaccine. With vaccine effectiveness \(\delta\), the unprotected and protected portions of the newly vaccinated susceptible population evolve as \(\frac{dS_{i}^{v}}{dt}=(1-\delta)V_{i}\) and \(\frac{dP_{i}}{dt}=\delta V_{i}\).
**Mobility dynamics.** We adopt a large-scale mobile phone dataset from March 2020 to capture individual movement within Marion county. An effective trip is defined as a movement with a distance larger than 20 meters. After trip extraction, we obtained 1,812,266 trips from 176,856 users in total. The average representativeness is 9.71% in Marion county. The spatial variation of trips per person across census block zones can be found in the supplementary materials.
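The effective-trip filter can be sketched as below; the coordinates are illustrative, and the haversine helper is a standard great-circle distance rather than the paper's exact pipeline.

```python
# Sketch of the effective-trip filter: keep device displacements longer than
# 20 meters. Coordinates below are hypothetical points near Indianapolis.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def effective_trips(moves, min_dist=20.0):
    """Keep only movements whose displacement exceeds min_dist meters."""
    return [m for m in moves if haversine_m(m[0], m[1], m[2], m[3]) > min_dist]

moves = [
    (39.7684, -86.1581, 39.7684, -86.1580),  # roughly 9 m: GPS jitter, dropped
    (39.7684, -86.1581, 39.7770, -86.1480),  # over 1 km: a real trip, kept
]
kept = effective_trips(moves)
```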
## 5. Evaluation
### Implementation Details
#### 5.1.1. Environment simulator: Trans-vaccine-SEIR model
To improve the interpretability of the disease dynamics, we propose the Trans-vaccine-SEIR model to capture disease propagation from both epidemiological and non-epidemiological aspects. To mimic disease propagation in Marion county from January to August 2021, we divide the study period into eight periods and simulate them continuously. The input data are the initial disease compartments in Marion county, including \(S,E,I,R,D\), and the daily vaccine supply \(V\). We apply a grid search to find the parameters that best fit the disease data in each period. Each period consists of four weeks: the first two weeks are used as the training dataset and the last two weeks as the testing dataset. More details on the fitted parameters can be found in the supplementary materials. Besides, we construct three baseline approaches: population-based, even-based, and random-based. Population-based distribution assigns vaccines to each census block zone in proportion to its share of the total Marion county population. Even-based distribution distributes vaccines uniformly across census block zones. Random-based distribution assigns vaccines to census block zones by random trials. The pseudo-code for the Trans-vaccine-SEIR environment can be found in the supplementary materials. Figure 3 presents the predicted results against the real disease compartments. We define the accuracy metric for compartment \(*\) as \(C_{*}=\frac{pred_{*}-real_{*}}{real_{*}}\). The mean accuracy of the predicted disease propagation in each compartment is \(C_{SE}=0.0006\), \(C_{IR}=0.006\), and \(C_{D}=0.002\).
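The grid-search calibration step can be sketched as follows. `simulate` is a toy stand-in for a Trans-vaccine-SEIR run over the two training weeks; the parameter grid and squared-error objective are illustrative, not the paper's exact procedure.

```python
# Sketch of grid-search calibration: scan a small parameter grid and keep the
# combination whose simulated curve best matches the observed training weeks.
import itertools

def simulate(beta, sigma, days=14):
    """Toy stand-in for the epidemic simulator: cumulative cases over `days`."""
    cases, e = [0.0], 10.0
    for _ in range(days):
        e = e * (1 + beta - sigma * 0.5)   # crude exposed-pool dynamics
        cases.append(cases[-1] + sigma * e)
    return cases[1:]

def fit_error(pred, real):
    """Sum of squared errors between simulated and observed daily cases."""
    return sum((p - r) ** 2 for p, r in zip(pred, real))

# pretend these are the observed first two weeks of a period
real = simulate(0.05, 0.2)
grid = itertools.product([0.03, 0.05, 0.07], [0.1, 0.2, 0.3])
best = min(grid, key=lambda p: fit_error(simulate(*p), real))
```

Because the "observed" series here was generated with (0.05, 0.2), the search recovers exactly that grid point, mirroring how the calibrated parameters reproduce the training weeks before being validated on the held-out weeks.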
#### 5.1.2. RL and GNN Architecture
We use Stable-Baselines3 (Shi et al., 2017) as our RL training framework and train our experiments on an RTX 2080 Ti. The learning rate is 5.0e-5, the batch size is 8, the value function coefficient in the loss is 0.5, the number of steps to run per environment per update is 2, and the number of epochs when optimizing the surrogate loss is 10. Besides, we run two environment copies in parallel. The GNN nnConv layer is implemented with PyTorch Geometric (Krizhevsky et al., 2014).
### Quantitative Evaluation
In addition to the three baseline vaccine allocation strategies, we simulate a non-vaccination scenario as a reference from January to August 2021. The trained optimal vaccine allocation strategy is compared against the baseline strategies. We use the improved case ratio \(IRD_{pro}\) and improved death ratio \(D_{pro}\) as evaluation metrics, defined as follows:
\[\begin{split} IRD_{pro}=\frac{(IRD^{no\_vac}-IRD^{opt})-(IRD^{no\_ vac}-IRD^{*})}{(IRD^{no\_vac}-IRD^{*})}\times 100\%\\ D_{pro}=\frac{(D^{no\_vac}-D^{opt})-(D^{no\_vac}-D^{*})}{(D^{no\_ vac}-D^{*})}\times 100\%\end{split} \tag{11}\]
Where the superscripts \(no\_vac\), \(opt\), and \(*\) represent the non-vaccination scenario, the optimal vaccine allocation strategy from the RL-GNN model, and the \(*\) baseline strategy, respectively. \(IRD\) and \(D\) denote the total infected and death populations. Figure 4 visualizes the evaluation results over the eight continuous periods. The optimal strategy's improved case and death ratios are 4%-18% and 6.7%-22% above the baseline strategies, indicating a significant improvement in reducing the infected and death populations under the proposed RL-GNN framework.
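Eq. 11 can be computed directly; the compartment totals below are made up for illustration.

```python
# Direct rendering of the evaluation metric in Eq. 11: the relative gain of
# the optimal strategy over a baseline, both measured against no vaccination.

def improved_ratio(no_vac, opt, base):
    """Percentage improvement of `opt` over `base`, relative to `no_vac`."""
    return ((no_vac - opt) - (no_vac - base)) / (no_vac - base) * 100.0

# hypothetical total infected populations under each scenario
ird = improved_ratio(no_vac=50000.0, opt=30000.0, base=33000.0)
```

The same function applies unchanged to the death metric \(D_{pro}\) by substituting death totals for infected totals.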
Figure 3. Environment simulator: Trans-vaccine-SEIR model. The suffixes _sim_ and _real_ refer to the predicted and real populations in the disease compartments, respectively. _S_E_ refers to the susceptible and exposed population; _I_R_ refers to the infected and recovered population; \(D\) refers to the death population.
### Ablation Study
We follow the standard practice and the same protocol to assess the quality of the reward function in the RL framework and the GNN architecture in the agent module.
#### 5.3.1. Reward function design
We conduct several experiments to explore the reward function (see Eq. 10). In addition to evaluating each disease compartment, we consider three ways of using a baseline in the reward function: a short-term baseline, a long-term baseline, and no baseline. We use the population-based vaccine allocation strategy as the baseline to improve the gradient descent direction. The short-term baseline is dynamic, applying the baseline at each time step \(t\); the long-term baseline instead applies the baseline only on the first day of the simulation. The improved case and death ratios of the proposed RL-GNN model under different metrics and baseline strategies in the reward function are shown in Table 2.
#### 5.3.2. Agent architecture
We apply two approaches to extract features in the agent module: MLP and GNN. The MLP embeds the features of the graph network in a fully connected way. Unlike the MLP, the GNN captures the properties of the dynamic mobility contact network in the evolving graph. We compare three GNN message-passing schemes for embedding node and edge features: GCNConv (Wang et al., 2017), CGConv (Wang et al., 2018), and nnConv (Wang et al., 2018). GCNConv considers only node feature embedding; CGConv concatenates node features, neighboring node features, and edge features; nnConv applies a neural network to embed the node and edge features. The improved case and death ratios for the different agent architectures are shown in Table 3.
## 6. Discussion
The pandemic has prompted unprecedented research worldwide on prevention policies (Beng et al., 2017), such as movement restrictions, transit usage restrictions, and vaccine prioritization strategies. Unfortunately, the complexity of inherent disease transmission and external factors makes disease propagation hard to predict. In addition to vaccine prioritization in a non-intervention scenario, it is important to evaluate strategies combined with NPIs under diverse mobility patterns. To explore the capacity of the RL-GNN framework to seek optimal vaccine allocation strategies combined with NPIs, we propose four hypotheses related to demographic-based mobility restriction and transit usage restriction:
1. Given the same vaccine allocation strategy, cross-zone mobility restriction is more effective at preventing disease propagation than transit usage restriction.
2. The optimal vaccine allocation under cross-zone mobility restriction for the top 10% oldest zones would reduce the infected population more than the optimal vaccine allocation under cross-zone mobility restriction for the top 10% youngest zones.
3. The optimal vaccine allocation under cross-zone mobility restriction for the top 10% lowest-income zones would reduce the death population more than the optimal vaccine allocation under cross-zone mobility restriction for the top 10% highest-income zones.
4. The effectiveness of the optimal vaccine allocation strategies is time-invariant across diverse NPIs.
To test these hypotheses, we construct mobility patterns for six scenarios based on the mobile phone data: (1) shifting cross-zone travel to within-zone travel for the top 10% oldest and youngest zones, assuming the same number of trips generated per individual, and re-calibrating the travel- and activity-related parameters; (2) shifting cross-zone travel to within-zone travel for the top 10% highest- and lowest-income zones under the same assumptions and re-calibration; (3) restricting median-capacity transit usage (defined as bus and van), and restricting both low-capacity (defined as taxi and ride-sharing vehicle) and median-capacity transit usage.
**The first hypothesis testing.** We follow the standard procedure to continuously simulate the eight periods in the environment simulator. Instead of comparing the optimal vaccine allocation strategy in each NPI scenario, the comparison here is based solely on evaluating the diverse NPIs under the same vaccine allocation strategy. The evaluation metrics are defined as follows:
\[\small IRD_{sim}=\frac{IRD_{no\_rest}^{pop}-IRD_{s}^{pop}}{IRD_{no\_rest}^{ pop}},D_{sim}=\frac{D_{no\_rest}^{pop}-D_{s}^{pop}}{D_{no\_rest}^{pop}} \tag{12}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline with baseline strategy & metric & \(IRD_{pro}\) & \(D_{pro}\) \\ \hline short-term & D & 4.04 & 24.31 \\ short-term & I & 17.45 & 23.09 \\ short-term & R & 17.39 & 10.49 \\ short-term & E & 17.55 & 23.18 \\ short-term & RD & 15.32 & 19.17 \\ short-term & ED & 10.02 & 21.13 \\ short-term & ID & 15.24 & 24.18 \\ short-term & IRD & 17.15 & 24.04 \\ long-term & EIRD & -0.56 & -1.06 \\ no-baseline & EIRD & -0.54 & -1.81 \\ short-term & EIRD & 18.18 & 24.41 \\ \hline \end{tabular}
\end{table}
Table 2. Ablation study on reward function design
Figure 4. The time series evaluation of the optimal vaccine allocation strategy under RL-GNN framework
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{Agent architecture} & \(IRD_{pro}\) & \(D_{pro}\) \\ \hline \multicolumn{2}{|c|}{MLP} & 19.04 & 6.02 \\ \hline & GCNConv & 19.24 & 18.65 \\ GNN & CGConv & 4.99 & 7.69 \\ & nnConv & 19.78 & 23.02 \\ \hline \end{tabular}
\end{table}
Table 3. Ablation study on agent architecture.
Where \(IRD_{sim}\) and \(D_{sim}\) are the NPI-based improved case and death ratios for the \(*\) NPI. \(IRD_{no\_rest}^{pop}\) and \(IRD_{*}^{pop}\) are the infected populations under the population-based vaccine allocation strategy in the no-mobility-restriction scenario and in the \(*\) NPI scenario, respectively.
Figure 5 shows the estimated results from January to August 2021 and indicates that the transit usage restriction is more effective than the top-10% age-based and income-based cross-zone mobility restrictions. Figure 6 shows the NPIs-based improved infected case and death ratios at the end of the eight continuous periods. The NPIs-based improved infected case ratio for the transit usage restriction is more than twice that of the other NPIs, which rejects the first hypothesis.
**The second, third, and last hypotheses testing.** We follow the standard procedure and train the vaccine allocation strategy based on the RL-GNN framework for diverse NPIs. Table 4 shows the evaluation results of the optimal vaccine allocation strategies compared to three baseline strategies. The results show the robustness of the proposed RL-GNN model with diverse mobility patterns.
To test the second and third hypotheses, we cross-compare the age-based and income-based mobility restriction NPIs. In addition, we define the evaluation metrics as follows:
\[IRD_{opt}=\frac{IRD_{no\_rest}^{opt}-IRD_{*}^{opt}}{IRD_{no\_rest}^{opt}},D_{opt }=\frac{D_{no\_rest}^{opt}-D_{*}^{opt}}{D_{no\_rest}^{opt}} \tag{13}\]
where \(IRD_{opt}\) and \(D_{opt}\) are the optimal NPIs-based improved case and death ratios from the optimal vaccine allocation strategy, and \(IRD_{no\_rest}^{opt}\) and \(IRD_{*}^{opt}\) are the infected populations under the optimal vaccine allocation strategy in the no-mobility-restriction scenario and the \(*\) NPI scenario, respectively.
Figure 7 visualizes the optimal NPIs-based improved infected case and death ratios at the end of the eight continuous periods. The optimal NPIs-based improved case ratios when restricting cross-zone mobility for the top 10% oldest and youngest zones are 5.99% and 7.39%, respectively, and the optimal NPIs-based improved death ratios when restricting cross-zone mobility for the top 10% lowest-income and highest-income zones are 2.69% and 4.90%. This indicates that restricting cross-zone mobility for the youngest and highest-income zones is more effective than for the oldest and lowest-income zones, which rejects the second and third hypotheses.
In addition, Figures 8, 9, and 10 visualize the optimal NPIs-based improved case and death ratios for each NPI from January to August 2021. We observe that these ratios vary significantly over time. Therefore, we reject the fourth hypothesis.
The study has several limitations: (_i_) the proposed Trans-vaccine-SEIR model is not appropriate for predicting long-term disease dynamics, since external factors such as NPIs would significantly change the disease propagation state. We address this issue by dividing the study period into eight periods when calibrating the disease parameters and simulating the eight periods continuously. (_ii_) due to the lack of reported data separating the \(S\) and \(E\) compartments, the initial \(S,E,I,R,D\) populations are based on estimates from the literature. (_iii_) the mobility patterns in the diverse NPI scenarios are approximated from mobile phone data, which might deviate from the realistic scenario. However, this would not affect the quality of the robustness examination of the RL-GNN framework, since the optimal
Figure 5. The time series evaluation of NPIs-based scenarios – \(IRD_{sim}\) and \(D_{sim}\): A refers to the cross-zone mobility restrictions by age group. B refers to the cross-zone mobility restrictions by income group. C refers to the transit usage restriction.
Figure 6. The evaluation of NPIs-based scenarios – \(IRD_{sim}\) and \(D_{sim}\)
Figure 7. The evaluation of the optimal NPIs-based scenario – \(IRD_{opt}\) and \(D_{opt}\)
vaccine allocation strategy could be adaptive to the input mobility pattern and adjusted as the environment changes.
## 7. Conclusion
This paper proposes a framework for effective vaccine prioritization at the micro-geographical level to reduce the overall burden of the pandemic when the vaccine supply is limited. Specifically, we propose a Trans-vaccine-SEIR model that improves the complex disease propagation simulation from the epidemiological and non-epidemiological aspects by integrating the effects of the census block-level mobility dynamics. Using the Trans-vaccine-SEIR model as the environment simulator, we present an RL-GNN framework to learn an optimal vaccine allocation strategy in a high-degree spatiotemporal disease evolution environment instead of achieving a sub-optimal vaccine allocation strategy under simplified assumptions. The evaluation examines the simulator accuracy and shows that the optimal vaccine allocation strategy from the RL-GNN framework is significantly more effective than the baseline strategies. The extensive evaluations based on multiple NPIs verify the robustness of the proposed framework with diverse mobility patterns. In particular, we find the optimal vaccine allocation strategy in the transit usage restriction scenario is more effective in reducing infections and deaths than cross-zone mobility restrictions for the top 10% age-based and income-based zones.
2304.01762 | Incorporating Unlabelled Data into Bayesian Neural Networks | Mrinank Sharma, Tom Rainforth, Yee Whye Teh, Vincent Fortuin | 2023-04-04T12:51:35Z | http://arxiv.org/abs/2304.01762v3

# Incorporating Unlabelled Data into Bayesian Neural Networks
###### Abstract
Conventional Bayesian Neural Networks (BNNs) cannot leverage unlabelled data to improve their predictions. To overcome this limitation, we introduce _Self-Supervised Bayesian Neural Networks_, which use unlabelled data to learn improved prior predictive distributions by maximising an evidence lower bound during an unsupervised pre-training step. With a novel methodology developed to better understand prior predictive distributions, we then show that self-supervised prior predictives capture image semantics better than conventional BNN priors. In our empirical evaluations, we see that self-supervised BNNs offer the label efficiency of self-supervised methods and the uncertainty estimates of Bayesian methods, particularly outperforming conventional BNNs in low-to-medium data regimes.
## 1 Introduction
Bayesian Neural Networks (BNNs) are powerful probabilistic models that combine the flexibility of deep neural networks with the theoretical underpinning of Bayesian methods (Mackay, 1992; Neal, 1995). Indeed, as they place priors over their parameters and perform posterior inference, BNN advocates consider them to be a principled approach for uncertainty estimation (Wilson and Izmailov, 2020; Abdar et al., 2021), which can be helpful for label-efficient learning (Gal et al., 2017).
However, despite the prevalence of unlabelled data and the rise of unsupervised learning, conventional BNNs cannot harness unlabelled data for improved uncertainty estimates and label efficiency. Rather, practitioners have focused on improving label efficiency and predictive performance by imparting information with priors over network parameters or predictive functions (e.g., Louizos et al., 2017; Tran et al., 2020; Matsubara et al., 2021; Fortuin et al., 2021). But it stands to reason that the vast store of information contained in unlabelled data should be incorporated into BNNs, and that the potential benefit of doing so likely exceeds the benefit of designing better, but ultimately human-specified, priors over parameters or functions. Unfortunately, as standard BNNs are explicitly only models for supervised prediction, they cannot leverage such unlabelled data by conditioning on it.
To overcome this limitation, **we introduce _Self-Supervised Bayesian Neural Networks_** (§3), which condition on pseudo-labelled tasks generated using unlabelled data. We show this can be seen as **incorporating unlabelled data into the prior predictive distribution**. That is, our method improves the prior predictive by using unlabelled data to _learn_ it, not by using a different but ultimately _human-specified_ prior over parameters or functions. Self-supervised BNNs first train a deterministic encoder to **maximise a lower bound of a log-marginal likelihood** derived from unlabelled data in the unsupervised pre-training step. Although the encoder is deterministic, it induces a prior predictive distribution over the downstream labels such that augmented image pairs likely have the same label, while distinct images likely have different labels (Fig. 1(a)). Following the pre-training step, we then condition a subset of the network parameters on the downstream labelled data to make predictions.
As a further contribution, **we develop a methodology to better understand prior predictive distributions** of functions with high-dimensional inputs (§4). Since it is often easy to reason about the semantic relationship between points, our approach investigates the _joint_ prior predictive for input pairs with known semantic relation. **We then demonstrate that self-supervised BNN prior predictives reflect input pair semantic similarity better than normal BNN priors**. In particular, while normal BNN priors struggle to distinguish same-class input pairs and different-class input pairs (Fig. 1(c)), self-supervised BNN priors do so much better (Fig. 1(b)).
Finally, **we demonstrate that the improved prior predictives of self-supervised BNNs are helpful in practice** (§5). In our experiments, we find that self-supervised BNNs combine the label efficiency of self-supervised approaches with the uncertainty estimates of Bayesian methods. In particular, self-supervised BNNs outperform conventional BNNs at low-to-medium data regimes in semi-supervised and active learning settings, whilst also offering better-calibrated predictions than SimCLR (Chen et al., 2020), which is a particularly popular self-supervised learning algorithm.
## 2 Background: Bayesian Neural Networks
Let \(f_{\theta}(x)\) be a neural network with parameters \(\theta\) and \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{N}\) be an observed dataset where we want to predict \(y\) from \(x\). A Bayesian Neural Network (BNN) specifies a prior over parameters, \(p(\theta)\), and a likelihood, \(p(y|f_{\theta}(x))\), which in turn define the posterior \(p(\theta|\mathcal{D})\propto p(\theta)\prod_{i}p(y_{i}|f_{\theta}(x_{i}))\). To make predictions, we approximate the posterior predictive \(p(y_{\star}|x_{\star},\mathcal{D})=\mathbb{E}_{p(\theta|\mathcal{D})}[p(y_{ \star}|f_{\theta}(x_{\star}))]\).
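As a rough sketch (not from the paper's codebase), the posterior predictive above can be approximated by Monte Carlo: average the likelihood over samples from (an approximation to) \(p(\theta|\mathcal{D})\). A toy linear model stands in for the network \(f_{\theta}\), and the "posterior samples" are placeholders.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def posterior_predictive(theta_samples, x_star):
    """Monte Carlo estimate of p(y*|x*, D) = E_{p(θ|D)}[p(y*|f_θ(x*))],
    with a toy linear 'network' f_θ(x) = xW and samples standing in for p(θ|D)."""
    return np.mean([softmax(x_star @ W) for W in theta_samples], axis=0)

# Toy check: 50 'posterior' samples for a 4-feature, 3-class model.
rng = np.random.default_rng(0)
theta_samples = [rng.normal(size=(4, 3)) for _ in range(50)]
p = posterior_predictive(theta_samples, rng.normal(size=(2, 4)))  # shape (2, 3)
```

In practice the samples would come from an approximate inference scheme (e.g. variational inference, MCMC, or a Laplace approximation) rather than a Gaussian placeholder.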
Improving BNN priors has been a long-standing goal for the BNN community, primarily through improved human-designed priors. One approach is to improve the prior over the network's parameters (Louizos et al., 2017; Nalisnick, 2018). Others place priors directly over predictive functions (Flamm-Shepherd et al., 2017; Sun et al., 2019; Matsubara et al., 2021; Nalisnick et al., 2021; Raj et al., 2023). Both approaches, however, present challenges--the mapping between the network's parameters and predictive functions is complex, while directly specifying our beliefs over predictive functions is itself a highly challenging task. For these reasons, as well as computational convenience, isotropic Gaussian priors over network parameters remain the most common choice (Fortuin, 2022).
Figure 1: **Self-Supervised Bayesian Neural Networks**. (a) Pre-training in self-supervised BNNs corresponds to unsupervised prior learning. We learn a prior predictive distribution such that augmented images likely have the same label and distinct images likely have different labels. (b) Self-supervised BNN priors assign higher probabilities to semantically consistent image pairs having the same label compared to semantically inconsistent image pairs. Here, semantically consistent image pairs have the same ground-truth label, and semantically inconsistent image pairs have different ground-truth labels. The plot shows a kernel density estimate of the log-probability that same-class and different-class image pairs are assigned the same label under the prior. (c) Unlike self-supervised prior predictives, conventional BNN prior predictives assign similar probabilities to semantically consistent and semantically inconsistent image pairs having the same label.
## 3 Self-Supervised Bayesian Neural Networks
Conventional BNNs are unable to harness unlabelled data for improved uncertainty estimation and label efficiency. To overcome this limitation, we introduce _Self-Supervised Bayesian Neural Networks_. At a high level, they use unlabelled data and data augmentation to generate pseudo-labelled datasets, which are then conditioned on in a modified probabilistic model.
**Problem Specification** Suppose \(\mathcal{D}^{u}=\{x_{i}^{u}\}_{i=1}^{N}\) is an unlabelled dataset of examples \(x_{i}^{u}\in\mathbb{R}^{n}\) with \(x_{i}^{u}\sim P_{X}^{u}\), where \(P_{X}^{u}\) is the underlying data distribution. Let \(\mathcal{D}^{t}=\{(x_{i}^{t},y_{i}^{t})\}_{i=1}^{T}\) be a labelled dataset corresponding to a supervised "downstream" task, where \(y_{i}^{t}\) is the target associated with \(x_{i}^{t}\). We have \(x_{i}^{t}\sim P_{X}^{t}\) and \(y_{i}^{t}|x_{i}^{t}\sim P_{Y|X}^{t}\). We want to use \(\mathcal{D}^{u}\) to improve our predictions on the downstream task.
### Incorporating Unlabelled Data into BNNs
The natural way to benefit from \(\mathcal{D}^{u}\) would be to use it to inform our beliefs about the model parameters, \(\theta\), through the posterior \(p(\theta|\mathcal{D}^{u},\mathcal{D}^{t})\propto p(\theta|\mathcal{D}^{u})\,p( \mathcal{D}^{t}|\mathcal{D}^{u},\theta)\). But if we are working with conventional BNNs, which are explicitly models for supervised prediction, \(p(\theta|\mathcal{D}^{u})=p(\theta)\). Further, as predictions depend only on the parameters, \(p(\mathcal{D}^{t}|\mathcal{D}^{u},\theta)=p(\mathcal{D}^{t}|\theta)\), which then means \(p(\theta|\mathcal{D}^{u},\mathcal{D}^{t})=p(\theta|\mathcal{D}^{t})\). Said differently, for a conventional BNN, we cannot incorporate \(\mathcal{D}^{u}\) by conditioning on it. One could proceed in different ways, for instance, by using \(\mathcal{D}^{u}\) to choose/learn the prior \(p(\theta)\), or using a generative model that defines a likelihood for the inputs \(p(x|\theta)\).
In this work, we instead note that if we were able to convert \(\mathcal{D}^{u}\) into labelled data, we could condition a discriminative model on it. If we then introduced a probabilistic link between generated labelled data, derived from \(\mathcal{D}^{u}\), and the predictions over the downstream task labels, we could harness \(\mathcal{D}^{u}\) through probabilistic conditioning. We now introduce each of these elements in turn.
First, to generate labelled data from \(\mathcal{D}^{u}\), i.e., to formulate _self-supervision_, we take inspiration from contrastive learning (Oord et al., 2019; Chen et al., 2020, 2020, 2020; Grill et al., 2020, 2020).1 We intuitively believe that distinct examples are unlikely to be semantically consistent, and thus are unlikely to have the same downstream label. Further, if we can specify data transformations that preserve semantic information, we also believe that augmented versions of the same example likely have the same downstream label. The correspondence between such beliefs and a prior distribution over predictive functions or network parameters is unclear, so we instead use \(\mathcal{D}^{u}\) and data transformations to generate pseudo-labelled data, and include this additional subjective information by conditioning on those datasets rather than a prior over parameters or functions.
Footnote 1: Our framework also encompasses other self-supervised tasks, e.g., next-token prediction and masked-token prediction for language models, though we do not consider them here.
Concretely, suppose we have a set of data augmentations \(\mathcal{A}=\{a:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\}\) that preserve semantic content. We use \(\mathcal{A}\) and \(\mathcal{D}^{u}\) to generate a _contrastive dataset_\(\mathcal{D}^{c}\) that reflects our subjective beliefs by:
1. Drawing \(M\) examples from \(\mathcal{D}^{u}\) at random, \(\{\hat{x}_{i}\}_{i=1}^{M}\); \(i\) indexes the subset, not \(\mathcal{D}^{u}\)
2. For each \(\hat{x}_{i}\), sampling \(a^{A},a^{B}\sim\mathcal{A}\) and augmenting, giving \(\tilde{x}_{i}^{A}=a^{A}(\hat{x}_{i})\) and \(\tilde{x}_{i}^{B}=a^{B}(\hat{x}_{i})\)
3. Forming \(\mathcal{D}^{c}\) by assigning \(\tilde{x}_{i}^{A}\) and \(\tilde{x}_{i}^{B}\) the same class label, which is the subset index \(i\)
We thus have \(\mathcal{D}^{c}=\{(x_{i}^{c},y_{i}^{c})\}_{i=1}^{2M}=\{(\tilde{x}_{i}^{A},i) \}_{i=1}^{M}\cup\{(\tilde{x}_{i}^{B},i)\}_{i=1}^{M}\), where the labels are between \(1\) and \(M\). The task is to predict the subset index corresponding to each augmented example. We can repeat this process \(L\) times and create a set of contrastive task datasets, \(\{\mathcal{D}_{j}^{c}\}_{j=1}^{L}\). Here, we consider the number of generated contrastive datasets \(L\) to be a fixed, finite hyper-parameter, but we discuss the implications of generating a potentially infinite number of datasets in Appendix A.
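The three-step construction of \(\mathcal{D}^{c}\) can be sketched directly. In the toy example below, numbers stand in for images and small shifts for semantic-preserving augmentations; all names are illustrative.

```python
import random

def make_contrastive_dataset(unlabelled, augmentations, M, rng):
    """Steps 1-3: sample M examples, augment each twice, and label both
    views (x̃_i^A, x̃_i^B) with the subset index i ∈ {1, ..., M}."""
    subset = rng.sample(unlabelled, M)
    dataset = []
    for i, x_hat in enumerate(subset, start=1):
        a_A, a_B = rng.choice(augmentations), rng.choice(augmentations)
        dataset.append((a_A(x_hat), i))  # x̃_i^A with label i
        dataset.append((a_B(x_hat), i))  # x̃_i^B with label i
    return dataset

# Toy example: numbers as 'images', tiny shifts as augmentations.
augs = [lambda x: x + 0.01, lambda x: x - 0.01]
d_c = make_contrastive_dataset(list(range(100)), augs, M=8, rng=random.Random(0))
```

Repeating this generator \(L\) times yields the set of contrastive datasets \(\{\mathcal{D}^{c}_{j}\}_{j=1}^{L}\) described above.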
Second, to link the labelled data derived from \(\mathcal{D}^{u}\) and the downstream task predictions, we use parameter sharing (see Fig. 2). We introduce parameters \(\theta_{j}^{c}\) for each \(\mathcal{D}_{j}^{c}\), parameters \(\theta^{t}\) for \(\mathcal{D}^{t}\), and shared-parameters \(\theta^{s}\) that are used for both the downstream and the contrastive tasks. A contrastive dataset \(\mathcal{D}_{j}^{c}\) thus informs downstream predictions through \(\theta^{s}\). For example, \(\theta^{t}\) and \(\theta_{j}^{c}\) could be the parameters of the last layer of a neural network, while \(\theta^{s}\) could be parameters of earlier layers.
We now discuss different options for learning in this conceptual framework. Using the Bayesian approach, one would place priors over \(\theta^{s}\), \(\theta^{t}\), and each \(\theta_{j}^{c}\). This then defines a posterior distribution given the observed data \(\{\mathcal{D}_{j}^{c}\}_{j=1}^{L}\) and \(\mathcal{D}^{t}\). To make predictions on the downstream task, which depend on \(\theta^{s}\) and \(\theta^{t}\) only, we would then use the posterior predictive:
\[p(y^{t}_{\star}|x_{\star},\{\mathcal{D}^{c}_{j}\}_{j=1}^{L},\mathcal{D}^{t})=\mathbb{E}_{p(\theta^{s}|\{\mathcal{D}^{c}_{j}\}_{j=1}^{L},\mathcal{D}^{t})}[\mathbb{E}_{p(\theta^{t}|\theta^{s},\mathcal{D}^{t})}[p(y^{t}_{\star}|x_{\star},\theta^{s},\theta^{t})]], \tag{1}\]
where we have (i) noted that the downstream task parameters \(\theta^{t}\) are independent of \(\{\mathcal{D}^{c}_{j}\}_{j=1}^{L}\) given the shared parameters \(\theta^{s}\) and (ii) integrated over each \(\theta^{c}_{j}\) and \(\theta^{t}\) in the definition of \(p(\theta^{s}|\{\mathcal{D}^{c}_{j}\}_{j=1}^{L},\mathcal{D}^{t})\).
Alternatively, one can learn a point estimate for \(\theta^{s}\), e.g., with MAP estimation, and perform full posterior inference for \(\theta^{t}\) and \(\theta^{c}_{j}\) only. This would be a _partially stochastic_ network, which Sharma et al. (2022) showed often outperform fully stochastic networks while being more practical, and can also be justified as we can generate many contrastive tasks. This corresponds to model learning as used in deep kernels and variational autoencoders (Kingma and Welling, 2013; Wilson et al., 2015; Rezende and Mohamed, 2015; Wilson et al., 2016).
We now show these modifications incorporate information from \(\mathcal{D}^{u}\) and the augmentations \(\mathcal{A}\) into the downstream task prior predictive. Under the Bayesian approach, self-supervised BNNs push forward the posterior over \(\theta^{t}\) and \(p(\theta^{s}|\{\mathcal{D}^{c}_{j}\}_{j=1}^{L},\mathcal{D}^{t})\propto p(\theta^{s}|\{\mathcal{D}^{c}_{j}\}_{j=1}^{L})\,p(\mathcal{D}^{t}|\theta^{s})\) through the network to make predictions. In contrast, a standard BNN uses \(p(\theta|\mathcal{D}^{t})\propto p(\theta)\,p(\mathcal{D}^{t}|\theta)\). We see that self-supervised BNNs use \(\{\mathcal{D}^{c}_{j}\}_{j=1}^{L}\) to update the prior over the shared parameters \(p(\theta^{s})\). Alternatively, if we learnt a point estimate for \(\theta^{s}\) and used the Bayesian approach for \(\theta^{t}\), this would define the prior predictive:
\[p(y^{t}_{\star}|x_{\star},\{\mathcal{D}^{c}_{j}\}_{j=1}^{L})=\mathbb{E}_{p(\theta^{t})}[p(y^{t}_{\star}|x_{\star},\theta^{s}_{\star},\theta^{t})], \tag{2}\]
where \(\theta^{s}_{\star}\) is a point estimate for \(\theta^{s}\) learnt using \(\{\mathcal{D}^{c}_{j}\}_{j=1}^{L}\). Since the contrastive datasets were constructed to reflect our prior beliefs, both conditioning and training on them should be helpful.
### Practical Self-Supervised Bayesian Neural Networks
We now use our framework to propose a practical two-step algorithm for self-supervised BNNs.
**Preliminaries** We focus on image-classification problems. We use an encoder \(f_{\theta^{s}}(\cdot)\) that maps images to representations and is shared across the contrastive tasks and the downstream task. The shared parameters \(\theta^{s}\) thus are the base encoder's parameters. We also normalise the representations produced by this encoder. For the downstream dataset, we use a linear readout layer from the encoder representations, i.e., we have \(\theta^{t}=\{W^{t},b^{t}\}\) and \(y^{t}_{i}\sim\mathrm{softmax}(W^{t}f_{\theta^{s}}(x_{i})+b^{t})\). The rows of \(W^{t}\) are thus class template vectors. For the contrastive tasks, we use a linear layer without biases, i.e., \(\theta^{c}_{j}=W^{c}_{j}\), and \(j\) indexes contrastive tasks. We place Gaussian priors over \(\theta^{s}\), \(\theta^{t}\), and each \(\theta^{c}_{j}\).
**Pre-training \(\theta^{s}\) (Step I)** Here, we learn a point estimate for the base encoder parameters \(\theta^{s}\), which induces a prior predictive distribution over the downstream task labels (see Eq. 2). To learn \(\theta^{s}\), we want to optimise the (potentially penalised) log-likelihood \(\log p(\{\mathcal{D}^{c}_{j}\}_{j=1}^{L},\mathcal{D}^{t}|\theta^{s})\), but this would require integrating over \(\theta^{t}\) and each \(\theta^{c}\). Instead, we use the evidence lower bound (ELBO):
\[\tilde{\mathcal{L}}^{c}_{j}(\theta^{s})=\mathbb{E}_{q(\theta^{c}_{j})}[\log p(\mathcal{D}^{c}_{j}|\theta^{s},\theta^{c}_{j})]-D_{\text{KL}}(q(\theta^{c}_{j})||p(\theta^{c}_{j}))\leq\log p(\mathcal{D}^{c}_{j}|\theta^{s}), \tag{3}\]
where \(q(\theta^{c}_{j})\) is a variational distribution over the contrastive task parameters. Rather than learning a different variational distribution for each contrastive task \(j\), we amortise the inference and exploit the structure of the contrastive task. The contrastive task is to predict the source image index from pairs of augmented images using a linear layer from an encoder that produces normalised representations. We define \(\omega_{i}=0.5(f_{\theta^{s}}(\tilde{x}^{A}_{i})+f_{\theta^{s}}(\tilde{x}^{B}_{ i}))\), i.e., \(\omega_{i}\) is the mean representation for each augmented pair of images, and because the rows of \(W^{c}_{j}\) correspond to class templates, we thus use:
\[q(W^{c}_{j};\tau,\sigma^{2})=\mathcal{N}(\mu^{c}_{j},\sigma^{2}I),\ \text{with}\ \mu^{c}_{j}=\begin{bmatrix}\omega^{T}_{1}\\ \vdots\\ \omega^{T}_{M}\end{bmatrix}/\tau \tag{4}\]
Figure 2: **BNN Probabilistic Models** (a) Probabilistic model for conventional BNNs. (b) Probabilistic model for self-supervised BNNs. We share parameters between different tasks, which allows us to condition on generated self-supervised data when making predictions on a new task. \(j\) indexes self-supervised tasks, \(i\) indexes datapoints.
In words, the mean representation of each augmented image pair is the class template for each source image, which should solve the contrastive task well. The variational parameters \(\tau\) and \(\sigma^{2}\) determine the magnitude of the linear layer and the per-parameter variance, are shared across contrastive tasks \(j\), and are learnt by maximising Eq. (3) with reparameterisation gradients (Kingma and Welling, 2013).
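A minimal sketch of sampling from \(q(W^{c}_{j};\tau,\sigma^{2})\) via the reparameterisation trick, assuming the encoder representations of the two augmented views are already computed (array shapes and names are illustrative):

```python
import numpy as np

def sample_contrastive_head(z_a, z_b, tau, sigma, rng):
    """Reparameterised sample W ~ q(W^c; τ, σ²) = N(μ^c, σ² I) from Eq. (4):
    row i of μ^c is ω_i / τ, with ω_i = ½(f(x̃_i^A) + f(x̃_i^B))."""
    omega = 0.5 * (z_a + z_b)                      # (M, d) mean per augmented pair
    mu = omega / tau                               # class-template rows of μ^c
    return mu + sigma * rng.normal(size=mu.shape)  # W = μ + σ·ε, ε ~ N(0, I)

rng = np.random.default_rng(0)
z_a, z_b = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
W = sample_contrastive_head(z_a, z_b, tau=0.1, sigma=0.05, rng=rng)  # (8, 16)
```

Because the sample is a differentiable function of \(\tau\) and \(\sigma\), gradients of Eq. (3) flow to both variational parameters, as in standard reparameterisation-gradient training.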
Both the contrastive tasks and downstream task provide information about the base encoder parameters \(\theta^{s}\). One option would be to learn the base encoder parameters \(\theta^{s}\) using only data derived from \(\mathcal{D}^{u}\) (Eq. 3), which would correspond to a standard self-supervised learning setup. In this case, the learnt prior would be task-agnostic. An alternative approach is to use both \(\mathcal{D}^{t}\) and \(\mathcal{D}^{u}\) to learn \(\theta^{s}\), which corresponds to a semi-supervised setup. To do so, we can use the ELBO for the downstream data:
\[\tilde{\mathcal{L}}^{t}(\theta^{s})=\mathbb{E}_{q(\theta^{t})}[\log p( \mathcal{D}^{t}|\theta^{t},\theta^{s})]-D_{\text{KL}}[q(\theta^{t})||p(\theta ^{t})]\leq\log p(\mathcal{D}^{t}|\theta^{s}), \tag{5}\]
where \(q(\theta^{t})=\mathcal{N}(\theta^{t};\mu^{t},\Sigma^{t})\) is a variational distribution over the downstream task parameters; \(\Sigma^{t}\) is diagonal. We can then maximise \(\sum_{j}\tilde{\mathcal{L}}^{c}_{j}(\theta^{s})+\alpha\cdot\tilde{\mathcal{L} }^{t}(\theta^{s})\), where \(\alpha\) is a hyper-parameter that controls the weighting between the downstream task and contrastive task datasets. We consider both variants of our approach, using Self-Supervised BNNs to refer to the variant that pre-trains only with \(\{\mathcal{D}^{c}_{j}\}_{j=1}^{L}\) and \(\text{Self-Supervised BNNs}^{*}\) to refer to the variant that uses both \(\{\mathcal{D}^{c}_{j}\}_{j=1}^{L}\) and \(\mathcal{D}^{t}\).
**Downstream Evaluation (Step II)** Having learnt a point estimate for \(\theta^{s}\), we can use any approximate inference algorithm to infer \(\theta^{t}\). Here, we use a post-hoc Laplace approximation (Mackay, 1992; Daxberger et al., 2021).
Algorithm 1 summarises Self-Supervised BNNs, which learn \(\theta^{s}\) with \(\mathcal{D}^{u}\) only. We found tempering with the mean-per-parameter KL divergence, \(\bar{D}_{\text{KL}}\), improved performance, in line with other work (e.g., Krishnan et al., 2022). Moreover, we generate a new \(\mathcal{D}^{c}_{j}\) per gradient step so \(L\) corresponds to the number of gradient steps. We also regularise with \(\log p(\theta^{s})\), which here corresponds to weight decay. Finally, following best-practice for contrastive learning (Chen et al., 2020), we use a non-linear _projection head_\(g_{\psi}(\cdot)\)_only_ for the contrastive tasks. For further details, see Appendix A.
**Understanding the Approximate Posterior** To better understand \(q(W^{c}_{j};\tau,\sigma^{2})\) (Eq. 4), we evaluate the likelihood term of Eq. (3) at the approximate posterior's mean: \(\mu^{c}_{j}\). Define \(\tilde{z}^{A}_{i}=f_{\theta^{s}}(\tilde{x}^{A}_{i})\) and \(\tilde{z}^{B}_{i}=f_{\theta^{s}}(\tilde{x}^{B}_{i})\), and recall we have \(\omega_{i}=0.5(\tilde{z}^{A}_{i}+\tilde{z}^{B}_{i})\). Then:
\[\log p(\mathcal{D}^{c}_{j}|\theta^{s},\mu^{c}_{j})=\sum_{i=1}^{M}\Big{[}\log\frac{\exp\omega_{i}^{T}\tilde{z}^{A}_{i}/\tau}{\sum_{k=1}^{M}\exp\omega_{k}^{T}\tilde{z}^{A}_{i}/\tau}+\log\frac{\exp\omega_{i}^{T}\tilde{z}^{B}_{i}/\tau}{\sum_{k=1}^{M}\exp\omega_{k}^{T}\tilde{z}^{B}_{i}/\tau}\Big{]}, \tag{6}\]
which we see takes the form of a (negative) normalised temperature-scaled cross-entropy loss (NT-Xent), as used in contrastive learning algorithms like SimCLR (Chen et al., 2020). However, there are some differences between these objectives. In terms of learning the base-encoder parameters, our objective: (i) is a principled lower bound for the log-marginal likelihood, \(\log p(\mathcal{D}^{c}_{j}|\theta^{s})\); (ii) injects noise around \(\mu^{c}_{j}\), which may be a helpful regularisation (Srivastava et al., 2014); and (iii) adapts the temperature \(\tau\) throughout training, which may be beneficial (Huang et al., 2022).
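The likelihood term of Eq. (6), evaluated at \(\mu^{c}_{j}\), can be sketched as a symmetric cross-entropy over mean-representation logits. The code below assumes the (ideally \(\ell_{2}\)-normalised) view representations are given; it is an illustration, not the paper's implementation.

```python
import numpy as np

def log_lik_at_mean(z_a, z_b, tau):
    """Eq. (6): log p(D^c_j | θ^s, μ^c_j) with templates ω_i = ½(z_i^A + z_i^B).
    z_a, z_b: (M, d) arrays of representations for the two augmented views."""
    omega = 0.5 * (z_a + z_b)
    total = 0.0
    for z in (z_a, z_b):
        logits = z @ omega.T / tau                   # logits[i, k] = ω_k·z_i / τ
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        total += np.trace(log_p)                     # Σ_i log-softmax at class i
    return total

rng = np.random.default_rng(1)
z = rng.normal(size=(5, 8))
ll = log_lik_at_mean(z, z.copy(), tau=0.5)  # perfectly aligned views
```

Negating this quantity recovers an NT-Xent-style loss; the difference from SimCLR is the noise injection and learnt \(\tau\) discussed above.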
**Pre-training as Prior Learning** Since our objective function is a principled lower bound on the log-marginal likelihood, it is similar to type-II maximum likelihood (ML), which is often used to learn parameters for deep kernels (Wilson et al., 2015) of Gaussian processes (Williams and Rasmussen, 2006), and recently also for BNNs (Immer et al., 2021). As such, similar to type-II ML, our approach can be understood as a form of prior learning. Although we learn only a point-estimate for \(\theta^{s}\), this fixed value induces a prior distribution over predictive functions through the task-specific prior \(p(\theta^{t})\)
However, while normal type-II ML learns this prior using the observed data itself, our approach maximises a marginal likelihood derived from unsupervised data. Because we maximise an ELBO during pre-training, our approach can similarly be understood as variational model learning.
## 4 How Good Are Self-Supervised BNN Prior Predictives?
We showed our approach incorporates unlabelled data into the downstream task prior predictive distribution (Eq. 2). We also argued that as the generated contrastive data reflect our subjective beliefs about the semantic similarity of different image pairs, incorporating the unlabelled data should improve the prior predictive. We now examine whether this is indeed the case.
However, although prior predictive checks are standard in the applied statistics community (Gelman et al., 1995), they are challenging to apply to BNNs due to the high dimensionality of the input space. To proceed, we note, intuitively, a suitable prior should reflect a belief that _the higher the semantic similarity between pairs of inputs, the more likely these inputs are to have the same label_. Therefore, rather than inspecting the prior predictive at single points in input space, we examine the _joint_ prior predictive of _pairs_ of inputs with known semantic relationships. Indeed, it is far easier to reason about the relationship between examples than to reason about distributions over high-dimensional functions.
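One way to operationalise this pairwise check: estimate, for each input pair, the probability under the prior that both inputs receive the same label by marginalising over classes and prior samples. The probabilities below are hypothetical.

```python
import numpy as np

def same_label_prob(probs_x, probs_z):
    """Monte Carlo estimate of ρ(x, z) = E_θ[p(y(x) = y(z) | θ)]:
    probs_x[s, k] = p(y(x) = k | θ_s) for prior samples θ_s, so
    p(y(x) = y(z) | θ_s) = Σ_k probs_x[s, k] · probs_z[s, k]."""
    probs_x, probs_z = np.asarray(probs_x), np.asarray(probs_z)
    return float(np.mean(np.sum(probs_x * probs_z, axis=-1)))

# Two hypothetical prior samples over 3 classes (illustration only).
rho = same_label_prob([[0.8, 0.1, 0.1], [0.6, 0.2, 0.2]],
                      [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])  # 0.415
```

In practice the per-sample class probabilities would come from forward passes of prior-sampled networks on the two images.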
Here, we focus on image classification and consider the following pairs of images: (i) an image from the validation set (the "base image") and an augmented version of the same image; (ii) a base image and another image of the same class; and (iii) a base image and an image of a different class. As these image pair groups have decreasing semantic similarity, we want the first group to be the most likely to have the same label, and the last group to be the least likely.
**Experiment Details** We investigate the different priors on CIFAR10. For the BNN, we follow Izmailov et al. (2021) and use a ResNet-20-FRN with a \(\mathcal{N}(0,1/5)\) prior over the parameters. For the self-supervised BNN, we learn a base encoder of the same architecture with \(\mathcal{D}^{u}\) only and sample from the task prior predictive using Eq. (2). \(\theta^{t}\) are the parameters of the linear readout layer. We also pre-train a base encoder with SimCLR. For further details, see Appendix C.3.
**Graphical Evaluation** First, we visualise the BNN and self-supervised BNN prior predictives (Figs. 1 and 3). The BNN prior predictive reflects a belief that all three image pair groups are similarly likely to have the same label, and thus does not capture semantic information well. In contrast, the self-supervised prior reflects a belief that image pairs with higher semantic similarity are more likely to have the same label. In particular, the self-supervised prior is able to distinguish between image pairs of the same class and of different classes, _even without access to any ground-truth labels_.
Quantitative EvaluationWe now quantify how well different prior predictives reflect data semantics. We define \(\rho(x,z)\) as the probability that inputs \(x,z\) have the same label under the prior predictive, i.e., \(\rho(x,z)=\mathbb{E}_{\theta}[p(y(x)=y(z)|\theta)]\) where \(y(x)\) is the label corresponding to input \(x\). We want a prior where image pairs with higher semantic similarity have higher values of \(\rho\). Therefore, to quantify the suitability of the prior predictive, we evaluate the frequency \(\alpha\) under the data distribution that the
\begin{table}
\begin{tabular}{l c}
**Prior Predictive** & **Prior Score \(\alpha\)** \\ \hline BNN — Gaussian & 0.261 \(\pm\) 0.024 \\ BNN — Laplace & 0.269 \(\pm\) 0.007 \\ SimCLR & **0.670 \(\pm\) 0.015** \\ Self-Supervised BNN & **0.680 \(\pm\) 0.063** \\ \end{tabular}
\end{table}
Table 1: **Prior Evaluation Scores**. Mean and standard deviation across three seeds shown. Self-supervised priors are better than standard BNN priors.
Figure 3: **BNN Prior Predictives. We investigate prior predictives by computing the probability \(\rho\) that particular image pairs have the same label under the prior, and examining the distribution of \(\rho\) across different sets of image pairs. We consider three sets of differing semantic similarity: (i) augmented images; (ii) images of the same class; and (iii) images of different classes. Left: Conventional BNN prior. Right: Self-supervised BNN learnt prior predictive. The self-supervised learnt prior reflects the semantic similarity of the different image pairs better than the BNN prior, which is reflected in the prior evaluation score, \(\alpha\).**
ranking of \(\rho\)s that correspond to the aforementioned three sets of image pairs indeed matches the ranking of semantic similarities. Mathematically, we have \(\alpha=\mathbb{E}_{x,z_{1},z_{2},z_{3}}[\mathbb{I}[\rho(x,z_{1})>\rho(x,z_{2})>\rho(x,z_{3})]]\), where each of the \(\rho\)-terms above corresponds to a different image pair group: \(\rho(x,z_{1})\) corresponds to augmented image pairs, \(\rho(x,z_{2})\) to same-class image pairs, and \(\rho(x,z_{3})\) to different-class image pairs. \(\alpha\) is therefore a generalisation of the AUROC metric.
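As an illustration, both the pairwise probability \(\rho\) and the prior score \(\alpha\) can be estimated from Monte Carlo samples of the prior predictive. The following NumPy sketch is our own (not the paper's code) and assumes class logits for the two inputs are available under shared prior draws:

```python
import numpy as np

def rho(logits_x, logits_z):
    """Estimate rho(x, z) = E_theta[p(y(x) = y(z) | theta)].

    logits_x, logits_z: arrays of shape (S, C) -- class logits for inputs
    x and z under S shared parameter samples drawn from the prior.
    """
    px = np.exp(logits_x - logits_x.max(axis=1, keepdims=True))
    px /= px.sum(axis=1, keepdims=True)
    pz = np.exp(logits_z - logits_z.max(axis=1, keepdims=True))
    pz /= pz.sum(axis=1, keepdims=True)
    # P(same label | theta) = sum_c p(y=c|x,theta) p(y=c|z,theta),
    # averaged over the prior samples of theta.
    return float(np.mean((px * pz).sum(axis=1)))

def prior_score(rho_aug, rho_same, rho_diff):
    """alpha = frequency that the rho-ranking matches the semantic ranking
    (augmented > same-class > different-class) across sampled triples."""
    triples = zip(rho_aug, rho_same, rho_diff)
    return float(np.mean([r1 > r2 > r3 for r1, r2, r3 in triples]))
```

Under a good prior, `rho` is close to 1 for augmented pairs and close to chance for different-class pairs, so `prior_score` approaches 1.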
In Table 1, we see that conventional BNN priors reflect semantic similarity much less than self-supervised BNN priors. Further, SimCLR, a contrastive learning approach, also induces a prior predictive that reflects semantic similarity well if the trained encoder is combined with a stochastic last-layer, which may underpin its success.
## 5 Experiments
We saw that self-supervised BNN prior predictives reflect semantic similarity better than those of conventional BNNs (§4). We now investigate whether this translates to improved predictions in semi-supervised and active learning setups, which mimic scenarios where there is an abundance of unlabelled data, but labelling data points is expensive. We find that our approach combines the label efficiency of self-supervised methods and the uncertainty estimates of Bayesian approaches.
### Semi-Supervised Learning
**Experiment Details** We consider CIFAR10 and CIFAR100, reserving a validation set of 1000 examples from the test set and using the remaining 9000 examples as the test set. We evaluate the performance of different baselines when conditioning on 50, 500, 5000, and 50000 labels. All baselines use a ResNet-18 modified for CIFAR image size. For self-supervised BNNs, we use a non-linear projection head for pre-training, as standard, and use the full train set for pre-training with the data augmentations suggested by Chen et al. (2020). We consider two variants of self-supervised BNNs: Self-Supervised BNNs learn the base-encoder parameters using \(\mathcal{D}^{u}\) only, while Self-Supervised BNNs* also utilise \(\mathcal{D}^{t}\). For evaluation, we use a post-hoc Laplace approximation on the last-layer. For the conventional BNN baselines, we use MAP, SWAG (Maddox et al., 2019), a deep ensemble (Lakshminarayanan et al., 2017), and last-layer Laplace (Daxberger et al., 2021). We include standard data augmentation for the baselines, which were chosen because they support batch normalisation (Ioffe and Szegedy, 2015). We also compare to SimCLR, where we consider both the standard linear-evaluation protocol (i.e., maximum likelihood) as well as a post-hoc Laplace approximation on the last-layer. See Appendix C for further details.
Fig. 4 shows the performance of different BNNs in terms of test accuracy and expected calibration error. Self-supervised BNNs substantially outperform conventional BNNs at all but the largest data set size, a finding in line with our results that self-supervised BNN prior predictives are better than conventional BNN priors (SS4). We also find self-supervised BNNs provide well-calibrated uncertainty estimates at all dataset sizes, and that incorporating labelled data when learning the base encoder improves accuracy. Indeed, at the largest dataset size, Self-Supervised BNNs*, which leverage the labelled examples as well as unlabelled data to learn the base encoder parameters, are competitive in terms of accuracy and calibration with the deep ensemble, which is the strongest baseline but uses five trained networks. These results highlight the benefit of incorporating unlabelled data into BNNs.
Figure 4: **BNN Label Efficiency**. We compute the test accuracy and expected calibration error (ECE) when observing different numbers of labels. Self-Supervised BNNs pre-train the base-encoder parameters using \(\mathcal{D}^{u}\) only, while Self-Supervised BNNs* also use \(\mathcal{D}^{t}\). Mean and standard deviation across 3 seeds shown. Self-supervised BNNs substantially outperform conventional BNNs at most dataset sizes, highlighting the benefit of incorporating unlabelled data into the BNN framework.
Further, we evaluate the out-of-distribution generalisation performance of different BNNs. To do so, we condition BNNs on different numbers of labels of CIFAR10 as before, but evaluate the test accuracy and calibration on the CIFAR-10-C dataset (Hendrycks and Dietterich, 2019), following Ovadia et al. (2019). Again, self-supervised BNNs can benefit from unlabelled training examples, while conventional BNNs are unable to do so.
In Fig. 5, we see that self-supervised BNNs not only outperform conventional BNNs at most dataset sizes in terms of accuracy, but they consistently offer well-calibrated uncertainty estimates. On the largest dataset size, self-supervised BNNs are outperformed by SWAG and deep ensembles, which are strong baselines but use several trained networks. However, deep ensembles are poorly calibrated at small dataset sizes. These results provide further evidence for the benefits of incorporating unlabelled data into BNNs.
We now compare our approach with standard SimCLR on CIFAR10. Recall that, relative to SimCLR, our approach uses a modified pre-training objective that is a lower bound on a marginal likelihood and performs approximate inference over \(\theta^{t}\) when making predictions. We also consider combining SimCLR with approximate inference for the last-layer parameters to disentangle the effects of the pre-training objective and evaluation protocol.
In Fig. 6, Self-Supervised BNNs match SimCLR in terms of accuracy, while offering better-calibrated uncertainty estimates. All approaches have high accuracy at low data regimes, highlighting the benefit of leveraging unlabelled data. We find that incorporating \(\mathcal{D}^{t}\) when learning the base-encoder (Self-Supervised BNNs*) substantially improves accuracy, though surprisingly, it slightly hurts calibration. In contrast, the Laplace evaluation protocol improves calibration, and indeed, SimCLR combined with approximate inference is a strong baseline. This matches our previous observation that SimCLR pre-training and a prior over \(\theta^{t}\) induce a prior predictive that well captures image semantics (§4).
### Active Learning
Experiment DetailsWe consider low-budget active learning, which simulates a scenario where labelling examples is extremely expensive. We use the CIFAR10 training set as the unlabelled pool set from which to label points. We assume an initial train set of 50 labelled points, randomly selected, and a validation set of the same size. We acquire 10 labels per acquisition round up to 500 labels and evaluate using the full test set. We compare self-supervised BNNs, SimCLR, and a deep ensemble. We use BALD (Houlsby et al., 2011) as the acquisition function for the deep ensemble and self-supervised BNN, because they provide epistemic uncertainty estimates, while we use predictive entropy for SimCLR. We also consider uniform selection. See Appendix C for more details.
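For reference, the BALD score used for acquisition here is the mutual information between the predicted label and the model parameters. The following NumPy sketch is our own illustration, assuming a stack of sampled predictive distributions is available:

```python
import numpy as np

def bald_scores(probs):
    """BALD acquisition scores (Houlsby et al., 2011).

    probs: array of shape (S, N, C) -- predictive class probabilities for
    N pool points under S posterior (or ensemble) samples.
    Returns shape (N,): H[E_theta p] - E_theta H[p], i.e. the epistemic
    part of the predictive uncertainty; the highest-scoring points are
    labelled next.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)                                    # (N, C)
    entropy_of_mean = -(mean_p * np.log(mean_p + eps)).sum(-1)     # (N,)
    mean_entropy = -(probs * np.log(probs + eps)).sum(-1).mean(0)  # (N,)
    return entropy_of_mean - mean_entropy
```

Points where the samples confidently disagree get high scores, while points where all samples agree (even if each is individually uncertain) get scores near zero.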
Figure 5: **Out-of-Distribution Generalisation**. We train BNNs on the standard CIFAR10 dataset and evaluate on CIFAR-10-C with corruption intensity 5 (Hendrycks and Dietterich, 2019). Results averaged across corruptions, mean and std. across 3 seeds shown. Self-Supervised BNNs pre-train the base-encoder parameters using \(\mathcal{D}^{u}\) only, while Self-Supervised BNNs* also use \(\mathcal{D}^{t}\). Self-supervised BNNs outperform conventional BNNs at most dataset sizes.
Figure 6: **Comparison to SimCLR** on CIFAR10. We compare self-supervised BNNs with (i) standard SimCLR, and (ii) SimCLR with a post-hoc Laplace approximation over the last-layer. Self-Supervised BNNs pre-train the base-encoder parameters using \(\mathcal{D}^{u}\) only, while Self-Supervised BNNs* also utilise \(\mathcal{D}^{t}\) during pre-training. Approximate inference over the task-specific parameters improves calibration, while pre-training with both \(\mathcal{D}^{t}\) and \(\mathcal{D}^{u}\) improves accuracy.
In Fig. 7, we see that the methods that leverage unlabelled data perform the best. In particular, the self-supervised BNN with BALD selection achieves the highest accuracy across most numbers of labels, and is the only method for which active learning improves performance. Surprisingly, uniform selection is a strong baseline.
## 6 Related Work
Improving BNN PriorsWe demonstrated that BNNs have poor prior predictive distributions (§4), a concern shared by others (e.g., Wenzel et al., 2020; Noci et al., 2021; Izmailov et al., 2021). The most common approaches to remedy this are through designing better priors, typically over network parameters (Louizos et al., 2017; Nalisnick, 2018; Atanov et al., 2019; Fortuin et al., 2021) or over predictive functions directly (Sun et al., 2019; Tran et al., 2020; Matsubara et al., 2021); see Fortuin (2022) for an overview. In contrast, our approach incorporates vast stores of unlabelled data into the prior predictive distribution through variational model learning. Similarly, other work also _learns_ priors, but typically using labelled data, e.g., by using meta-learning (Garnelo et al., 2018; Rothfuss et al., 2021) or type-II maximum likelihood (Wilson et al., 2015; Immer et al., 2021). Finally, Shwartz-Ziv et al. (2022) use generic transfer learning, potentially from an unsupervised task, for better BNN priors. While they consider fully stochastic networks, we use partially stochastic networks with a pre-training objective that is a principled lower bound on a log-marginal likelihood derived from unlabelled data. A further difference is that our work incorporates pre-training within the probabilistic modelling framework itself.
Understanding Contrastive LearningOur work shows that contrastive pre-training induces a prior predictive distribution that captures semantic similarity well (§4), and further offers a Bayesian interpretation of contrastive learning (§3). There has been much other work on understanding contrastive learning (e.g., Wang and Isola, 2020; Wang and Liu, 2021). Some work appeals to the InfoMax principle (Becker and Hinton, 1992), which maximises the mutual information between _representations_ of two views of an input datum, while our framework operates in predictive space. Zimmermann et al. (2022) argue that contrastive learning inverts the data-generating process, while Aitchison (2021) casts InfoNCE as the objective of a self-supervised variational auto-encoder.
Semi-Supervised Deep Generative ModelsDeep generative models (DGMs) are an alternative approach for label-efficient learning (Kingma and Welling, 2013; Kingma et al., 2014; Joy et al., 2020). DGMs generate supervision by reconstructing data through compact representations, i.e., they are _generative_. Our framework, however, is _discriminative_--supervision works through pseudo-labelled tasks. Ganev and Aitchison (2021) formulate several semi-supervised learning objectives as lower bounds of log-likelihoods in a probabilistic model of data curation. Finally, Sansone and Manhaeve (2022) unify self-supervised learning and generative modelling under one framework.
## 7 Conclusion
We introduced _Self-Supervised Bayesian Neural Networks_, which leverage unlabelled data for improved prior predictive distributions (§3). We showed that self-supervised BNNs, as well as other self-supervised methods, learn prior predictives that reflect semantics of the data (§4). In our experiments, self-supervised BNNs combine the label-efficiency of self-supervised methods and the uncertainty estimates of Bayesian approaches (§5), and thus are a principled, practical, and performant alternative to conventional BNNs. We strongly encourage practitioners to incorporate unlabelled data into their Bayesian Neural Networks through unsupervised prior learning.
Figure 7: **Low-Budget Active Learning** on CIFAR10. We compare: (i) a self-supervised BNN; (ii) SimCLR; and (iii) a deep ensemble. For the self-supervised BNN and the ensemble, we acquire points with BALD; predictive entropy for SimCLR. We also use uniform sampling for all methods. Mean and std. shown (3 seeds). The methods that incorporate unlabelled data perform best, and self-supervised BNNs are the only method where active learning outperforms uniform sampling.
## Acknowledgements
MS was supported by the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems (EP/S024050/1), and thanks Rob Burbea for inspiration and support. VF was supported by a Postdoc Mobility Fellowship from the Swiss National Science Foundation, a Research Fellowship from St John's College Cambridge, and a Branco Weiss Fellowship.
| 2307.10459 | A New Computationally Simple Approach for Implementing Neural Networks with Output Hard Constraints | Andrei V. Konstantinov, Lev V. Utkin | 2023-07-19T21:06:43Z | http://arxiv.org/abs/2307.10459v1 |

# A New Computationally Simple Approach for Implementing Neural Networks with Output Hard Constraints
###### Abstract
A new computationally simple method of imposing hard convex constraints on the neural network output values is proposed. The key idea behind the method is to map a vector of hidden parameters of the network to a point that is guaranteed to be inside the feasible set defined by a set of constraints. The mapping is implemented by an additional neural network layer with constraints for output. The proposed method is simply extended to the case when constraints are imposed not only on the output vectors, but also jointly on inputs and outputs. The projection approach to imposing constraints on outputs can simply be implemented in the framework of the proposed method. It is shown how to incorporate different types of constraints into the proposed method, including linear and quadratic constraints, equality constraints, dynamic constraints, and constraints in the form of boundaries. An important feature of the method is its computational simplicity. Complexities of the forward pass of the proposed neural network layer by linear and quadratic constraints are \(O(nm)\) and \(O(n^{2}m)\), respectively, where \(n\) is the number of variables and \(m\) is the number of constraints. Numerical experiments illustrate the method by solving optimization and classification problems. The code implementing the method is publicly available.
_Keywords_: neural network, hard constraints, convex set, projection model, optimization problem, classification
## 1 Introduction
Neural networks can be regarded as an important and effective tool for solving various machine learning tasks. Many tasks require constraining the output of a neural network, i.e. ensuring that the output of the neural network satisfies specified constraints. Examples of tasks which restrict the network output are neural optimization solvers with constraints, models generating images or parts of images in a predefined region, neural networks solving control tasks with control actions in a certain interval, etc.
The most common approach to restrict the network output space is to add some extra penalty terms to the loss function to penalize constraint violations. This approach leads to the so-called _soft_ constraints or soft boundaries. It does not guarantee that the constraints will be satisfied in practice when a new example feeds into the neural network. This is because the output falling outside the constraints is only penalized, but not eliminated [1]. Another approach is to modify the neural network such that it strongly predicts within the constrained output space. In this case, the constraints are _hard_ in the sense that they are satisfied for any input example during training and inference [2].
Although many applications require the hard constraints, there are not many models that actually realize them. Moreover, most available models are based on applying the soft constraints due to their simple implementation by means of penalty terms in loss functions. In particular, Lee et al. [3] present a method for neural networks that enforces deterministic constraints on outputs, which actually cannot be viewed as hard constraints because they are substituted into the loss function.
An approach to solving problems with conical constraints of the form \(Ax\leq 0\) is proposed in [2]. The model generates points in a feasible set using a predefined set of rays. A serious limitation of this method is the need to search for the corresponding rays. If this approach is applied beyond conical constraints, then all vertices of the feasible set need to be found. However, the number of vertices may be extremely large. Moreover, the authors of [2] claim that the most general setting does not allow for efficient incorporation of domain constraints.
A general framework for solving constrained optimization problems called DC3 is described in [4]. It aims to incorporate (potentially non-convex) equality and inequality constraints into the deep learning-based optimization algorithms. The DC3 method is specifically designed for optimization problems with hard constraints. Its performance heavily relies on the training process and the chosen model architecture.
A scalable neural network architecture which constrains the output space is proposed in [5]. It is called ConstraintNet and applies an input-dependent parametrization of the constrained output space in the final layer. Two limitations of the method can be pointed out. First, constraints in ConstraintNet are linear. Second, the approach also uses all vertices of the constrained output space, whose number may be large.
A differentiable block for solving quadratic optimization problems with linear constraints, as an element of a neural network, was proposed in [6]. For a given optimization problem with a convex loss function and a set of linear constraints, the optimization layer allows finding a solution during the forward pass, and finding derivatives with respect to parameters of the loss function and constraints during the backpropagation. A similar approach, which embeds an optimization layer into a neural network avoiding the need to differentiate through optimization steps, is proposed in [7]. In contrast to [6], the method called the Discipline Convex Programming is extended to the case of arbitrary convex loss function including its parameters. According to the Discipline Convex Programming [7], a projection operator can be implemented by using a differentiable optimization layer that guarantees that the output of the neural network satisfies constraints. However, the above approaches require solving convex optimization problems for each forward pass.
Another method for solving optimization problems with linear constraints is presented in [8]. It should be noted that the method may require significant computational resources and time to solve complex optimization problems. Moreover, it handles only linear constraints.
Several approaches for solving the constrained optimization problems have been proposed in [9, 10, 11, 12, 13, 14]. An analysis of the approaches can be found in the survey papers [15, 16].
To the best of our knowledge, at the moment, no approach is known that allows building layers of neural networks, the output of which satisfies linear and quadratic constraints, without solving the optimization problem during the forward pass of the neural network. Therefore, we present a new computationally simple method of the neural approximation which imposes hard linear and quadratic constraints on the neural network output values. The key idea behind the method is to map a vector of hidden parameters to a point that is guaranteed to be inside the feasible set defined by a set of constraints. The mapping is implemented by the additional neural network layer with constraints for output. The proposed method is simply extended to the case when constraints are imposed not only on the output vectors, but also on joint constraints depending on inputs. Another peculiarity of the method is that the projection approach to imposing constraints on outputs can simply be implemented in the framework of the proposed method.
An important feature of the proposed method is its computational simplicity. For example, the computational complexity of the forward pass of the neural network layer implementing the method in the case of linear constraints is \(O(nm)\) and in the case of quadratic constraints is \(O(n^{2}m)\), where \(n\) is the number of variables, \(m\) is the number of constraints.
The proposed method can be applied to various applications. First of all, it can be applied to solving optimization problems with arbitrary differentiable loss functions and with linear and quadratic constraints. The method can be applied to implement generative models with constraints. It can be used when constraints are imposed on predefined points or subsets of points. There are many other applications where the inputs and outputs of neural networks are constrained. The proposed method allows solving the corresponding problems by incorporating the constraints imposed on inputs as well as outputs.
Our contributions can be summarized as follows:
1. A new computationally simple method of the neural approximation which imposes hard linear and quadratic constraints on the neural network output values is proposed.
2. The implementation of the method by different types of constraints, including linear and quadratic constraints, equality constraints, constraints imposed on inputs and outputs are considered.
3. Different modifications of the proposed method are studied, including the model for obtaining solutions at boundaries of a feasible set and the projection models.
4. Numerical experiments illustrating the proposed method are provided. In particular, the method is illustrated by considering various optimization problems and a classification problem.
The corresponding code implementing the proposed method is publicly available at:
[https://github.com/andruekonst/ConstraNet/](https://github.com/andruekonst/ConstraNet/).
The paper is organized as follows. The problem of constructing a neural network imposing hard constraints on the network output value is stated in Section 2. The proposed method solving the stated problem and its modifications is considered in Section 3. Numerical experiments are given in Section 4. Conclusion can be found in Section 5.
## 2 The problem statement
Formally, let \(z\in\mathbb{R}^{d}\) denote the input data for a neural network, and \(x\in\mathbb{R}^{n}\) denote the output (prediction) of the network. The neural network can be regarded as a function \(f_{\theta}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{n}\) such that \(x=f_{\theta}(z)\), where \(\theta\in\Theta\) is a vector of trainable parameters.
Let we have a convex feasible set \(\Omega\subset\mathbb{R}^{n}\) as the intersection of a set of constraints in the form of \(m\) inequalities:
\[\Omega=\left\{x\ |\ h_{i}(x)\leq 0,\ i=1,...,m\right\}, \tag{1}\]
where each constraint \(h_{i}(x)\leq 0\) is convex, i.e., for all \(x^{(1)},x^{(2)}\) such that \(h_{i}(x^{(1)})\leq 0\) and \(h_{i}(x^{(2)})\leq 0\), there holds

\[\forall\alpha\in[0,1]\ (h_{i}(\alpha x^{(1)}+(1-\alpha)x^{(2)})\leq 0). \tag{2}\]
We aim to construct a neural network with constraints for outputs. In other words, we aim to construct a model \(x=f_{\theta}(z):\mathbb{R}^{d}\rightarrow\Omega\) and to impose hard constraints on \(x\) such that \(x\in\Omega\) for all \(z\in\mathbb{R}^{d}\), i.e.
\[\forall z\in\mathbb{R}^{d}\ (f_{\theta}(z)\in\Omega). \tag{3}\]
## 3 The proposed method
To construct the neural network with constrained output vector, two fundamentally different strategies can be applied:
1. The first strategy is to project into the feasible set \(\Omega\). The strategy is to build a projective differentiable layer \(P(y):\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) such that \(\forall y\in\mathbb{R}^{n}\) (\(P(y)\in\Omega\)). A difficulty of the approach can arise with optimizing projected points when they are outside the set \(\Omega\). In this case, the projections will lie on the boundary of the feasible set, but not inside it. This case may complicate the optimization of the projected points.
2. The second strategy is to map the vector of _hidden parameters_ to a point that is guaranteed to be inside the feasible set \(\Omega\). The mapping \(G(\lambda):\mathbb{R}^{k}\rightarrow\mathbb{R}^{n}\) is constructed such that \(\forall\lambda\in\mathbb{R}^{k}\) (\(G(\lambda)\in\Omega\)), where \(\lambda\) is the vector of hidden parameters having the dimensionality \(k\). This strategy does not have the disadvantages of the first strategy.
In spite of the difference between the above strategies, it turns out that the first strategy can simply be implemented by using the second strategy. Therefore, we start with a description of the second strategy.
### The neural network layer with constraints for output
Let a fixed point \(p\) be given inside a convex set \(\Omega\), i.e. \(p\in\Omega\). Then an arbitrary point \(x\) from the set \(\Omega\) can be represented as:
\[x=p+\alpha\cdot r, \tag{4}\]
where \(\alpha\geq 0\) is a scale factor; \(r\in\mathbb{R}^{n}\) is a vector (a ray from the point \(p\)).
On the other hand, for any \(p,r\), there is an upper bound \(\overline{\alpha}_{p,r}\) for the parameter \(\alpha\), which is defined as
\[\overline{\alpha}_{p,r}=\max\left\{\alpha\geq 0\ |\ p+\alpha\cdot r\in\Omega \right\}. \tag{5}\]
At that, the segment \([p;\,p+\overline{\alpha}_{p,r}\cdot r]\) belongs to the set \(\Omega\) because \(\Omega\) is convex. The meaning of the upper bound \(\overline{\alpha}_{p,r}\) is to determine the point of intersection of the ray \(r\) and one of the constraints.
Let us construct a layer of the neural network which maps the ray \(r\) and the scalar parameter \(s\) to a point inside \(\Omega\) as follows:
\[g_{p}(r,s)=p+\alpha_{p,r}(s)\cdot r, \tag{6}\]
where \(\alpha_{p,r}(s)\) is a function of the layer parameter \(s\) and \(\overline{\alpha}_{p,r}\), which is of the form:
\[\alpha_{p,r}(s)=\sigma(s)\cdot\overline{\alpha}_{p,r}, \tag{7}\]
\(\sigma(s):\mathbb{R}\rightarrow[0,1]\) is the sigmoid function, that is, a smooth monotonic function.
Such a layer is guaranteed to fulfill the constraint
\[\forall r\in\mathbb{R}^{n},\ s\in\mathbb{R}\ (g_{p}(r,s)\in\Omega). \tag{8}\]
Consequently, any neural network \(f_{\theta}(z)\) whose final layer is \(g_{p}(r,s)\) is guaranteed to fulfill the constraints:
\[\forall z\in\mathbb{R}^{d}\ (f_{\theta}(z)\in\Omega), \tag{9}\]
because the segment \([p,p+\overline{\alpha}_{p,r}\cdot r]\) belongs to \(\Omega\).
A scheme for mapping the ray \(r\) and the scalar parameter \(s\) to a point inside the set \(\Omega\) is shown in Fig. 1. We first find the point \(p+\overline{\alpha}_{p,r}\cdot r\) where the ray leaving \(p\) in direction \(r\) intersects the boundary of \(\Omega\), and then scale the step by \(\sigma(s)\) to obtain the point \(g\).
Figure 1: A scheme of the map \(g_{p}(r,s)\)
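A minimal sketch of the layer defined by Eqs. (6)-(7) can be written as follows (our own illustration; the upper bound \(\overline{\alpha}_{p,r}\) is assumed to be supplied by a constraint-specific routine):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def g(p, r, s, alpha_upper):
    """Constraint layer of Eqs. (6)-(7): map (r, s) to a point in Omega.

    p           : fixed interior point of the convex set Omega
    r           : ray direction produced by the preceding layers
    s           : unconstrained scalar produced by the preceding layers
    alpha_upper : upper bound of Eq. (5), i.e. the step at which the ray
                  p + alpha * r first leaves Omega

    Since sigmoid(s) lies in (0, 1), the output lies on the segment
    [p, p + alpha_upper * r], which is inside Omega by convexity.
    """
    return p + sigmoid(s) * alpha_upper * r
```

As \(s\to-\infty\) the output approaches \(p\), and as \(s\to+\infty\) it approaches the boundary point \(p+\overline{\alpha}_{p,r}\cdot r\).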
For the entire systems of constraints, it is sufficient to find the upper bound \(\overline{\alpha}_{p,r}\) that satisfies each of the constraints. Let \(\overline{\alpha}_{p,r}^{(i)}\) be the upper bound for the parameter \(\alpha\) corresponding to the \(i\)-th constraint (\(h_{i}(x)\leq 0\)) of the system (1). Then the upper bound for the entire system of constraints is determined to satisfy the condition \([p,p+\overline{\alpha}_{p,r}\cdot r]\subseteq[p,p+\overline{\alpha}_{p,r}^{(i) }\cdot r]\), i.e. there holds
\[\overline{\alpha}_{p,r}=\min\{\overline{\alpha}_{p,r}^{(i)}\}_{i=1}^{m}. \tag{10}\]
A scheme of searching for the upper bound \(\overline{\alpha}_{p,r}\), when linear constraints are used, is depicted in Fig.2.
Thus, the computational complexity of the forward pass of the described neural network layer is directly proportional to the number of constraints and to the computational complexity of the intersection procedure with a single constraint.
**Theorem 1**: _An arbitrary vector \(x\in\Omega\) can be represented by means of the layer \(g_{p}(r,s)\). The output of the layer \(g_{p}(r,s)\) belongs to the set \(\Omega\) for its arbitrary input \((r,s)\)._
**Proof.**
1. An arbitrary output vector \(g_{p}(r,s)\) satisfies the constraints, that is, for all \(r\in\mathbb{R}^{n}\) and \(s\in\mathbb{R}\) there holds \(g_{p}(r,s)\in\Omega\). Indeed, \(\alpha_{p,r}(s)\leq\overline{\alpha}_{p,r}\), and the segment \([p,p+\overline{\alpha}_{p,r}\cdot r]\) belongs to \(\Omega\) by convexity. Consequently, there holds \(g_{p}(r,s)\in[p,p+\overline{\alpha}_{p,r}\cdot r]\subset\Omega\).
2. An arbitrary point \(x\in\Omega\) can be represented by using the layer \(g_{p}(r,s)\). Indeed, let \(r=x-p\). Since \(x\in\Omega\), we have \(\overline{\alpha}_{p,r}\geq 1\), so \(1/\overline{\alpha}_{p,r}\in(0,1]\) and we can choose \(s\) such that \(\sigma(s)=1/\overline{\alpha}_{p,r}\) (in the limit \(s\rightarrow+\infty\) for a boundary point). Then \(g_{p}(x-p,s)=p+1\cdot(x-p)=x\), as was to be proved.
**Corollary 1**: _For rays from the unit sphere \(||r||_{2}=1\), an arbitrary point \(x\in\Omega\), \(x\neq p\) can be uniquely represented by using \(g_{p}(r,s)\)._
In order to obtain the model \(f_{\theta}(z):\mathbb{R}^{d}\rightarrow\Omega\), outputs \(r_{\theta}(z)\) and \(s_{\theta}(z)\) of the neural network layers should be fed as the input to the layer \(g_{p}(r,s)\):
\[f_{\theta}(z)=g_{p}(r_{\theta}(z),s_{\theta}(z)). \tag{11}\]
Such a combined model also forms a neural network that can be trained by the error backpropagation algorithm.
Figure 2: A scheme of searching for the upper bound \(\overline{\alpha}\)
**Corollary 2**: _The output of the neural network \(f_{\theta}(z)\) always satisfies the constraints which define the set \(\Omega\)._
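As an illustration (not part of the paper's code), the forward pass of the layer can be sketched in a few lines of NumPy. The parameterization \(\alpha_{p,r}(s)=\sigma(s)\cdot\overline{\alpha}_{p,r}\) and the box-shaped set \(\Omega\) are assumptions made here for concreteness; the helper `alpha_upper` anticipates the linear-constraint formula of the next subsection.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Unit box [0,1]^2 written as A x <= b; p is an interior point.
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.array([1., 0., 1., 0.])
p = np.array([0.5, 0.5])

def alpha_upper(p, r):
    """Largest alpha with p + alpha*r still inside the box (see next subsection)."""
    num, den = b - A @ p, A @ r
    ok = den > 0          # only constraints the ray moves toward
    return (num[ok] / den[ok]).min()

def layer_g(p, r, s):
    """g_p(r, s) = p + alpha_{p,r}(s) * r, assuming alpha = sigma(s) * alpha_bar."""
    return p + sigmoid(s) * alpha_upper(p, r) * r

x = layer_g(p, np.array([1.0, 0.0]), s=0.0)  # sigma(0)=0.5, alpha_bar=0.5
print(x)                                     # lands at (0.75, 0.5), inside the box
```

Any output of `layer_g` satisfies the constraints by construction, mirroring Corollary 2.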
### Linear constraints
In the case of linear constraints, the upper bound \(\overline{\alpha}_{p,r}\) is determined by the intersection of the ray from the point \(p\) in the direction \(r\) with the boundary of the constraint set.
Let us consider the intersection with one linear constraint of the form:
\[a_{i}^{T}x\leq b_{i}. \tag{12}\]
Then the upper bound for the parameter \(\alpha\) is determined by solving the following system of equations:
\[\begin{cases}x=p+\alpha\cdot r,\\ a_{i}^{T}x=b_{i},\\ \alpha\geq 0.\end{cases} \tag{13}\]
When a solution exists, this implies:
\[a_{i}^{T}p+\alpha\cdot a_{i}^{T}r=b_{i}, \tag{14}\]
\[\overline{\alpha}_{p,r}^{(i)}=\frac{b_{i}-a_{i}^{T}p}{a_{i}^{T}r}. \tag{15}\]
If \(a_{i}^{T}r=0\) or \(\overline{\alpha}_{p,r}^{(i)}<0\), then the system (13) does not have a solution. In this case, \(\overline{\alpha}_{p,r}^{(i)}\) can be taken as \(+\infty\).
Suppose we have the system
\[\begin{cases}a_{1}^{T}x\leq b_{1},\\ \ldots\\ a_{m}^{T}x\leq b_{m},\end{cases} \tag{16}\]
and for each inequality, the upper bound \(\overline{\alpha}_{p,r}^{(i)}\) is available. Then the upper bound for the whole system of inequalities (16) is determined as:
\[\overline{\alpha}_{p,r}=\min\left\{\overline{\alpha}_{p,r}^{(i)}\right\}_{i=1 }^{m}. \tag{17}\]
In the case of linear constraints, the computational complexity of the forward pass of the neural network layer is \(O(nm)\).
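A minimal NumPy sketch of formulas (15)–(17) may help make this concrete (our own illustration, not the authors' implementation; the unit-box example is an assumption):

```python
import numpy as np

def alpha_upper_linear(p, r, A, b):
    """Upper bound alpha_bar_{p,r} for the ray p + alpha*r against A x <= b.

    Per constraint i: alpha^(i) = (b_i - a_i.p) / (a_i.r) when the ray moves
    toward the boundary (a_i.r > 0); otherwise +inf. The system bound is the
    minimum over all constraints, as in (17).
    """
    num = b - A @ p               # b_i - a_i^T p  (>= 0 for an interior p)
    den = A @ r                   # a_i^T r
    alphas = np.full(len(b), np.inf)
    hit = den > 0
    alphas[hit] = num[hit] / den[hit]
    return alphas.min()

# Unit box [0,1]^2 as A x <= b, interior point p = (0.5, 0.5).
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.array([1., 0., 1., 0.])
p = np.array([0.5, 0.5])
print(alpha_upper_linear(p, np.array([1.0, 0.0]), A, b))  # 0.5
```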
### Quadratic constraints
Let the \(i\)-th quadratic constraint be given in the form:
\[\frac{1}{2}x^{T}P^{(i)}x+q_{i}^{T}x\leq b_{i}, \tag{18}\]
where the matrix \(P^{(i)}\) is positive semidefinite. Then the intersection of the ray with the constraint is given by the equation:
\[\frac{1}{2}(p+\alpha\cdot r)^{T}P^{(i)}(p+\alpha\cdot r)+q_{i}^{T}(p+\alpha \cdot r)=b_{i}. \tag{19}\]
It is equivalent to the equation:
\[(r^{T}P^{(i)}r)\alpha^{2}+2(p^{T}P^{(i)}r+q_{i}^{T}r)\alpha+(2q_{i}^{T}p+p^{T} P^{(i)}p-2b_{i})=0. \tag{20}\]
Depending on the coefficient at \(\alpha^{2}\), two cases can be considered:
1. If \(r^{T}P^{(i)}r=0\), then the equation is linear and has the following solution: \[\alpha=-\frac{q_{i}^{T}p+\frac{1}{2}p^{T}P^{(i)}p-b_{i}}{p^{T}P^{(i)}r+q_{i}^{T}r}.\] (21)
2. If \(r^{T}P^{(i)}r>0\), then there exist two solutions. However, we can select only the larger positive solution corresponding to the movement in the direction of the ray. This solution is: \[\alpha=\frac{-(p^{T}P^{(i)}r+q_{i}^{T}r)+\sqrt{D/4}}{r^{T}P^{(i)}r},\] (22) \[\frac{D}{4}=(p^{T}P^{(i)}r+q_{i}^{T}r)^{2}-(r^{T}P^{(i)}r)\cdot(2q_{i}^{T}p+p^ {T}P^{(i)}p-2b_{i}),\] (23) because the denominator is positive.
It should be noted that the case \(r^{T}P^{(i)}r<0\) is not possible because the matrix is positive semidefinite. Otherwise, the constraint would define a non-convex set.
If \(\alpha\geq 0\), then the upper bound is \(\overline{\alpha}_{p,r}^{(i)}=\alpha\). Otherwise, if the ray does not intersect the constraint, then there holds \(\overline{\alpha}_{p,r}^{(i)}=+\infty\).
Similarly to the case of linear constraints, if a system of the following quadratic constraints is given:
\[\begin{cases}\frac{1}{2}x^{T}P^{(1)}x+q_{1}^{T}x\leq b_{1},\\ \ldots\\ \frac{1}{2}x^{T}P^{(m)}x+q_{m}^{T}x\leq b_{m},\end{cases} \tag{24}\]
then the upper bound for the system is
\[\overline{\alpha}_{p,r}=\min\{\overline{\alpha}_{p,r}^{(i)}\}_{i=1}^{m}. \tag{25}\]
In the case of quadratic constraints, the computational complexity of the forward pass of the neural network layer is \(O(n^{2}m)\).
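The two cases (21)–(23) can be sketched in NumPy as follows (our own illustration; the disc example is an assumption made for the demonstration):

```python
import numpy as np

def alpha_upper_quadratic(p, r, P, q, b):
    """alpha_bar^(i) for one constraint 0.5*x'Px + q'x <= b (P PSD),
    along the ray p + alpha*r, following the two cases of the text."""
    A = r @ P @ r                        # coefficient at alpha^2
    B = p @ P @ r + q @ r                # half the coefficient at alpha
    C = 2 * (q @ p) + p @ P @ p - 2 * b  # constant term of (20)
    if np.isclose(A, 0.0):
        if np.isclose(B, 0.0):
            return np.inf                # ray never meets the boundary
        alpha = -C / (2 * B)             # linear case (21)
    else:
        D4 = B * B - A * C               # discriminant / 4, eq. (23)
        alpha = (-B + np.sqrt(D4)) / A   # larger (forward) root, eq. (22)
    return alpha if alpha >= 0 else np.inf

# Disc 0.5*x'(2I)x <= 1, i.e. ||x||^2 <= 1; from the centre alpha_bar = 1.
P = 2 * np.eye(2)
q = np.zeros(2)
print(alpha_upper_quadratic(np.zeros(2), np.array([1.0, 0.0]), P, q, 1.0))  # 1.0
```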
### Equality constraints
Let us consider the case when the feasible set is defined by a system of linear equalities and inequalities of the form:
\[x\in\Omega\iff\begin{cases}Ax\leq b,\\ Qx=p,\end{cases} \tag{26}\]
In this case, the problem can be reduced to the form (1), that is, to a system of inequalities. To implement that, we find and fix a vector \(u\) satisfying the system \(Qu=p\). If the system has no solutions, then the set \(\Omega\) is empty. If there exists only one solution, then the set \(\Omega\) consists of a single point. Otherwise, there exists an infinite number of solutions, and it is sufficient to choose any of them, for example, by solving the least squares problem:
\[||Qu-p||^{2}\rightarrow\min. \tag{27}\]
Then we find a matrix \(R\) whose columns form a basis of the null space (kernel) of \(Q\), that is, \(R\) satisfies the following condition:
\[\forall w\ \left(QRw=0\right). \tag{28}\]
The matrix \(R\) can be obtained by using the SVD decomposition of \(Q\in\mathbb{R}^{\mu\times n}\) as follows:
\[USV^{T}=Q, \tag{29}\]
where \(U\in\mathbb{R}^{\mu\times\mu}\) is an orthogonal matrix, \(S\in\mathbb{R}^{\mu\times n}\) is a rectangular diagonal matrix with the non-negative singular values on its diagonal in descending order, and \(V\in\mathbb{R}^{n\times n}\) is an orthogonal matrix whose columns are the right singular vectors.
Then the matrix \(R\) is defined as
\[R=\left(v_{n-\delta+1},\ldots,v_{n}\right), \tag{30}\]
where \(\delta=n-\mathrm{rank}(Q)\) is the dimension of the null space of \(Q\), and \(v_{n-\delta+1},\ldots,v_{n}\) are the columns of \(V\) corresponding to zero (or absent) singular values.
Hence, there holds
\[\forall w\in\mathbb{R}^{\delta}\ \left(Q(Rw+u)=p\right). \tag{31}\]
A new system of constraints imposed on the vector \(w\) is defined as:
\[A(Rw+u)\leq b, \tag{32}\]
or in the canonical form:
\[Bw\leq t, \tag{33}\]
where \(B=AR\), \(t=b-Au\).
So, \(w\) is the vector of variables for the new system of inequalities (33). For any vector \(w\), the vector \(x\) satisfying the initial system (26) can be reconstructed as \(x=Rw+u\).
In sum, the resulting model will be defined as
\[f_{\theta}(z)=R\widetilde{f}_{\theta}(z)+u, \tag{34}\]
where \(\widetilde{f}_{\theta}(z)\) is the model for constraints (33).
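The elimination procedure (27)–(34) can be sketched with NumPy's SVD; the probability-simplex example below is our own illustration, not taken from the paper:

```python
import numpy as np

def eliminate_equalities(A, b, Q, pvec):
    """Reduce {Ax<=b, Qx=pvec} to {Bw<=t} via x = Rw + u, where the columns
    of R span the null space of Q and u is a least-squares solution of Qu=pvec."""
    u, *_ = np.linalg.lstsq(Q, pvec, rcond=None)   # minimum-norm solution
    _, S, Vt = np.linalg.svd(Q)
    rank = int((S > 1e-10).sum())
    R = Vt[rank:].T                 # right singular vectors of zero singular value
    return R, u, A @ R, b - A @ u   # R, u, B, t

# Simplex in R^3: x_i >= 0 (i.e. -x <= 0), together with 1^T x = 1.
A = -np.eye(3); b = np.zeros(3)
Q = np.ones((1, 3)); pvec = np.array([1.0])
R, u, B, t = eliminate_equalities(A, b, Q, pvec)
x = R @ np.zeros(R.shape[1]) + u    # w = 0 reconstructs x = u
print(x)                            # the uniform distribution (1/3, 1/3, 1/3)
```

Any choice of \(w\) satisfying \(Bw\leq t\) reconstructs a feasible \(x=Rw+u\), and the equality \(Qx=p\) holds automatically.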
Let us consider a more general case when an arbitrary convex set is given as the intersection of the convex inequality constraints (1), together with additional equality constraints, i.e. there holds:
\[x\in\Omega\iff\begin{cases}h_{i}(x)\leq 0,\\ Qx=p.\end{cases} \tag{35}\]
In this case, we can also apply the variable replacement to obtain new (possibly non-linear) constraints of the form:
\[x\in\Omega\iff\begin{cases}h_{i}(Rw+u)\leq 0,\\ x=Rw+u,\end{cases}\iff\begin{cases}\tilde{h}_{i}^{(R,u)}(w)\leq 0,\\ x=Rw+u.\end{cases} \tag{36}\]
In sum, the model can be used for generating solutions \(\widetilde{f}_{\theta}(z)\), satisfying the non-linear constraints \(\tilde{h}_{i}^{(R,u)}(\widetilde{f}_{\theta}(z))\), and then solutions for \(x\) are obtained through (34).
### Constraints imposed on inputs and outputs
In practice, it may be necessary to impose constraints not only on the output vector \(f_{\theta}(z)\), but also joint constraints that depend on some inputs. Suppose a convex set of constraints imposed on the input \(z\) and the output \(f_{\theta}(z)\) is given:
\[\Lambda\subset\mathbb{R}^{k}\times\mathbb{R}^{n}, \tag{37}\]
that is, for any \(z\), the model \(f_{\theta}(z)\) has to satisfy:
\[y=\begin{bmatrix}f_{\theta}(z)\\ z\end{bmatrix}\in\Lambda. \tag{38}\]
Here \(y\) is the concatenation of \(f_{\theta}(z)\) and \(z\). Suppose the feasible set is given as an intersection of convex constraints:
\[\Lambda=\left\{y\ |\ \Gamma_{i}(y)\leq 0,\ i=1,...,m\right\}. \tag{39}\]
Then for a fixed \(z\), a new system of constraints imposed only on the output vector \(f_{\theta}(z)\) can be built by means of the substitution:
\[G(z)=\{x\ |\ \gamma_{i}(x;z)\leq 0,\ i=1,...,m\}, \tag{40}\]
where \(\gamma_{i}(x;z)\) is obtained by substituting \(z\) into \(\Gamma_{i}\).
Here \(\gamma_{i}\) depends on \(z\) as fixed parameters, and only \(x\) is a variable. For example, if \(\Gamma_{i}\) is a linear function, then, after substituting parameters, the constraint \(\gamma_{i}(x;z)\leq 0\) will be a new linear constraint on \(x\), or it will automatically be fulfilled. If \(\Gamma_{i}\) is a quadratic function, then the constraint on \(x\) is either quadratic or linear, or automatically satisfied.
New _dynamic_ constraints imposed on the output and depending on the input \(z\) are
\[f_{\theta}(z)\in G(z), \tag{41}\]
under condition the input \(z\) is from the admissible set
\[z\in\left\{z\ |\ \exists x:\left[\begin{matrix}x\\ z\end{matrix}\right]\in\Lambda\right\}. \tag{42}\]
It can be seen from the above that the dynamic constraints can change when the input \(z\) changes.
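As a toy illustration of the substitution (40) for a single joint linear constraint (the numbers \(c\), \(d\), \(e\) are hypothetical, chosen only for the example):

```python
import numpy as np

# Joint linear constraint on y = [x; z]:  c'x + d'z <= e.
c = np.array([1.0, 1.0]); d = np.array([2.0]); e = 3.0

def dynamic_constraint(z):
    """Substitute the fixed input z: the constraint on x becomes c'x <= e - d'z."""
    return c, e - d @ z      # (a_i, b_i) of a new linear constraint on x alone

a, bnd = dynamic_constraint(np.array([0.5]))
print(a, bnd)                # for this z the constraint reads x1 + x2 <= 2.0
```

The resulting pair \((a,b)\) can be fed directly into the linear-constraint machinery of Section 3, recomputed for every new input \(z\).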
### Projection model
Note that using the proposed neural network layer with constraints imposed on outputs, a _projection_ can be built as a model that maps points to the set \(\Omega\) and has the idempotency property, that is:
\[\forall x\in\mathbb{R}^{n}\ (f_{\theta}(f_{\theta}(x))=f_{\theta}(x)). \tag{43}\]
In other words, the model implements the identity mapping inside the set \(\Omega\) and maps points outside the set \(\Omega\) into \(\Omega\); it can be represented as:
\[\begin{cases}\forall x\in\Omega\ (f_{\theta}(x)=x),\\ \forall x\notin\Omega\ (f_{\theta}(x)\in\Omega).\end{cases} \tag{44}\]
The model can be implemented in two ways:
1. The first way is to train the model \(f_{\theta}(z)=g(r_{\theta}(z),s_{\theta}(z))\) to approximate the identity mapping by minimizing a functional that penalizes the distance between the image and the preimage.
Figure 3: Difficulties of the orthogonal projection onto the intersection of convex constraints
This can construct an approximation of an arbitrary projection. For example, for the orthogonal projection in the \(L_{p}\)-norm we can write the following functional: \[\mathcal{L}=\frac{1}{N}\sum_{i=1}^{N}\left\|f_{\theta}(x_{i})-x_{i}\right\|_{p}.\] (45) As a result, the output of the model always satisfies the constraints, but the idempotency property cannot be guaranteed, since the minimization of the empirical risk does not guarantee a strict equality, or even an equality within an error \(\varepsilon\), on the entire set \(\Omega\). Nevertheless, this approach can be used when it is necessary to build projective models for complex metrics, for example, those defined by neural networks.
2. The central projection can be obtained without optimizing the model by means of specifying the ray \(r_{\theta}(z):=z-p\). In this case, the scale factor must be specified explicitly without the sigmoid as: \(\alpha_{p,r}(s)=\min\{1,\overline{\alpha}_{p,r}\}\). Then we can write \[\begin{split}\tilde{g}_{p}(r(x))&=p+\min\{1, \overline{\alpha}_{p,(x-p)}\}\cdot(x-p)\\ &=\left\{\begin{array}{cc}x,&\overline{\alpha}_{p,(x-p)}\geq 1,\\ (1-\overline{\alpha}_{p,(x-p)})p+\overline{\alpha}_{p,(x-p)}x,&\text{ otherwise.}\end{array}\right.\end{split}\] (46) It should be pointed out that other projections, for example, the orthogonal projection by using the \(L_{2}\)-norm, cannot be obtained in the same way. Two examples illustrating two cases of the relationship between \(\Omega\) and a point \(a\), which has to be projected on \(\Omega\), are given in Fig.3 where the orthogonal projections of the point \(a\) are denoted as \(\Pi_{i}(a)\). It can be seen from Fig.3 that the point \(a\) must be projected to the point \(p\) located at the intersection of constraints. The projection on the nearest constraint as well as successive projections on constraints do not allow mapping the point \(a\) to the nearest point inside the set \(\Omega\).
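A direct NumPy sketch of the parameter-free central projection (46), here for a unit-box \(\Omega\) with interior point \(p\) (our own example, not the authors' code):

```python
import numpy as np

A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])  # unit box [0,1]^2
b = np.array([1., 0., 1., 0.])
p = np.array([0.5, 0.5])                                  # interior point

def central_projection(x):
    """Eq. (46): identity inside Omega, mapping along the ray p -> x onto
    the boundary for points outside Omega."""
    r = x - p
    den = A @ r
    ok = den > 0
    alpha_bar = ((b - A @ p)[ok] / den[ok]).min() if ok.any() else np.inf
    return p + min(1.0, alpha_bar) * r

print(central_projection(np.array([0.7, 0.6])))  # inside -> returned unchanged
print(central_projection(np.array([2.5, 0.5])))  # outside -> (1.0, 0.5) on the boundary
```

No parameters are trained, so the idempotency property holds exactly, in contrast to the learned approximation of case 1.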
The implementation examples of the projection model are shown in Fig.4. The set \(\Omega\) is formed by means of linear constraints. For each of the examples, a vector field (a quiver plot) is depicted as a set of
Figure 4: Illustrative examples of the projection models for linear constraints: (a) the orthogonal approximated projection, (b) the central projection
arrows, where the beginning of each arrow corresponds to the preimage, and the end corresponds to its projection onto the set of five constraints. The left picture (Fig.4(a)) shows results of the approximate orthogonal projection implemented by a neural network consisting of five layers. The network parameters were optimized by minimizing (45) with the learning rate \(10^{-2}\) over \(1000\) iterations. It can be seen from the left picture that there are artifacts in the set \(\Omega\), which correspond to areas with large approximation errors. The right picture (Fig.4(b)) depicts the result of the neural network without trainable parameters, which implements the central projection. It can be seen from Fig.4(b) that there are no errors when the central projection is used.
Similar examples for three quadratic constraints are shown in Fig.5.
### Solutions at boundaries
In addition to the tasks considered above, the developed method can be applied to obtain solutions on the boundary set, denoted as \(\partial\Omega\), which is generally non-convex. Suppose \(\sigma(s)=1\). Then we can write
\[g(r)=p+\overline{\alpha}(p,r)\cdot r\in\partial\Omega. \tag{47}\]
The above implies that \(g(r)\) lies on the boundary for any \(r\neq\mathbf{0}\).
It is noteworthy that this approach allows us to construct a mapping onto a non-convex connected union of convex sets. On the other hand, an arbitrary method based on a convex combination of basis vectors, where weights of the basis are computed using the _softmax_ operation, allows us to build points only inside the feasible set, but not at the boundary.
As an example, consider the problem of projecting points onto the boundary of a convex set:
\[\begin{array}{rl}\min&||z-f_{\theta}(z)||_{p}\\ \mbox{s.t.}&f_{\theta}(z)\in\partial\Omega,\end{array} \tag{48}\]
where \(||\cdot||_{p}\) is the \(p\)-th norm.
Illustrative examples of the projection onto an area defined by a set of linear constraints for the \(L_{1}\)- and \(L_{2}\)-norms are shown in Fig.6, where the left picture (Fig.6(a)) corresponds to the \(L_{1}\)-norm, whereas the right picture (Fig.6(b)) considers projections for the \(L_{2}\)-norm. To solve each of the problems, a neural network consisting of 5 layers of size \(100\) is trained by minimizing (48); its set of output values is \(\partial\Omega\).
Figure 5: Examples of the projection models for quadratic constraints: (a) the orthogonal approximated projection, (b) the central projection
### A general algorithm of the method
For systems of constraints containing linear equality constraints as well as linear and quadratic inequality constraints, the general algorithm consists of the following steps:
1. Eliminate the linear equality constraints.
2. Construct a new system of linear and quadratic constraints in the form of inequalities.
3. Search for an interior point \(p\).
4. Train a neural network \(\tilde{f}_{\theta}^{(\geq)}(z)\) for the inequality constraints.
5. Train the final neural network \(f_{\theta}(z)\), satisfying all constraints.
## 4 Numerical experiments
### Optimization problems
A neural network with constraints imposed on outputs should at least allow finding a solution to the constrained optimization problems. To implement that for each particular optimization problem, the vector of input parameters \(z_{i}\) is optimized so as to minimize the loss function \(l_{i}\left(f_{\theta}(z_{i})\right)\). For testing the optimization method with constraints, 50 sets of optimization problems (objective functions and constraints) with variables of dimensionality 2, 5, and 10 are randomly generated. In each set of problems, we generate different numbers of constraints: 20, 50, 100, and 200. The linear and quadratic constraints are separately generated to construct different sets of optimization problems. The constraints are generated so that the feasible set \(\Omega\) is bounded (that is, it does not contain a ray that does not cross boundaries of \(\Omega\)).
To generate each optimization problem, \(m\) constraints are first generated, then parameters of the loss functions are generated. For systems of linear constraints, the following approach is used: a set of \(m\) vectors \(a_{1},\ldots,a_{m}\sim\mathcal{N}(0,I)\) is generated, which simultaneously specify points belonging to hyperplanes and normals to these hyperplanes. Then the right side of the constraint system is:
\[b_{i}=a_{i}^{T}a_{i}, \tag{49}\]
Figure 6: Illustrative examples of approximation of the projection onto the boundary by using: (a) the \(L_{1}\)-norm, (b) the \(L_{2}\)-norm
and the whole system of linear constraints:
\[\Omega=\left\{x\ |\ a_{i}^{T}x\leq b_{i},\ i=1,...,m\right\}. \tag{50}\]
For systems of quadratic constraints, positive semidefinite matrices \(P^{(i)}\) and vectors \(q_{i}\sim\mathcal{N}(0,I)\), \(i=1,...,m\), are first generated. Then the constraints are shifted in such a way that they are satisfied with some margin \(b_{i}>0\). Hence, we obtain the system of quadratic constraints:
\[\Omega=\left\{x\ |\ x^{T}P^{(i)}x+q_{i}^{T}x\leq b_{i},\ i=1,...,m\right\}. \tag{51}\]
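A sketch of the linear-constraint generator (49)–(50) may clarify the construction; note that with \(b_{i}=a_{i}^{T}a_{i}>0\) the origin satisfies every constraint strictly, so it can serve as the anchor point \(p\) (our own observation and code, not the authors'):

```python
import numpy as np

def generate_linear_system(m, n, rng):
    """Random feasible set: each a_i is both a point on its own hyperplane
    and the normal to it, so b_i = a_i' a_i, as in (49)."""
    A = rng.normal(size=(m, n))
    b = np.einsum('ij,ij->i', A, A)   # b_i = a_i^T a_i > 0
    return A, b

rng = np.random.default_rng(42)
A, b = generate_linear_system(50, 5, rng)
print(np.all(A @ np.zeros(5) <= b))   # the origin is interior: True
```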
The relative error is used for comparison of the models. It is measured by the value of the loss function of the obtained solution \(l_{i}(f_{\theta}(x_{i}))\) with respect to the loss function of the reference solution \(l_{i}(x_{i}^{*})\):
\[RE=\frac{\max\left\{0,l_{i}\left(f_{\theta}(x_{i})\right)-l_{i}(x_{i}^{*}) \right\}}{\left|l_{i}(x_{i}^{*})\right|}\cdot 100\%. \tag{52}\]
The reference solutions are obtained using the _OSQP_ algorithm [17] designed exclusively for solving the linear and quadratic optimization problems.
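The metric (52) in code (a trivial but illustrative translation):

```python
def relative_error(loss_model, loss_ref):
    """Eq. (52): relative excess of the model's loss over the reference, in %.
    A model that beats the reference clips to zero rather than going negative."""
    return max(0.0, loss_model - loss_ref) / abs(loss_ref) * 100.0

print(relative_error(1.5, 1.0))   # 50.0
print(relative_error(0.5, 1.0))   # 0.0
```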
Tables 1 and 2 show relative errors for optimization problems with linear loss functions and linear and quadratic constraints, respectively, where \(m\) is the number of constraints, \(n\) is the number of variables in the optimization problems. The relative errors are shown in tables according to percentiles (25%, 50%, 75%, 100%) of the probability distribution of optimization errors, which are obtained as a result of multiple experiments. Tables 3 and 4 show relative errors for optimization problems with quadratic loss functions and linear and quadratic constraints, respectively.
\begin{table}
\begin{tabular}{c c c c c c} \hline \(m\) & \(n\) & \(25\%\) & \(50\%\) & \(75\%\) & \(100\%\) \\ \hline & 2 & \(5.4\cdot 10^{-6}\) & \(7.1\cdot 10^{-6}\) & \(3.6\cdot 10^{-5}\) & \(4.4\cdot 10^{-4}\) \\
50 & 5 & \(2.5\cdot 10^{-4}\) & \(4.9\cdot 10^{-4}\) & \(1.1\cdot 10^{-3}\) & \(3.7\cdot 10^{-3}\) \\ & 10 & \(2.5\cdot 10^{-3}\) & \(3.5\cdot 10^{-3}\) & \(5.7\cdot 10^{-3}\) & \(1.2\cdot 10^{-2}\) \\ \hline & 2 & \(4.4\cdot 10^{-6}\) & \(6.0\cdot 10^{-6}\) & \(3.5\cdot 10^{-5}\) & \(7.6\cdot 10^{-3}\) \\
100 & 5 & \(4.4\cdot 10^{-4}\) & \(8.9\cdot 10^{-4}\) & \(1.4\cdot 10^{-3}\) & \(3.0\cdot 10^{-3}\) \\ & 10 & \(3.0\cdot 10^{-3}\) & \(4.3\cdot 10^{-3}\) & \(6.1\cdot 10^{-3}\) & \(1.5\cdot 10^{-2}\) \\ \hline & 2 & \(5.7\cdot 10^{-6}\) & \(1.2\cdot 10^{-5}\) & \(3.9\cdot 10^{-5}\) & \(1.2\cdot 10^{-4}\) \\
200 & 5 & \(3.2\cdot 10^{-4}\) & \(5.9\cdot 10^{-4}\) & \(1.2\cdot 10^{-3}\) & \(3.6\cdot 10^{-3}\) \\ & 10 & \(3.4\cdot 10^{-3}\) & \(4.9\cdot 10^{-3}\) & \(7.2\cdot 10^{-3}\) & \(1.2\cdot 10^{-2}\) \\ \hline \end{tabular}
\end{table}
Table 4: Relative errors for problems with quadratic loss functions and quadratic constraints
\begin{table}
\begin{tabular}{c c c c c c} \hline \(m\) & \(n\) & \(25\%\) & \(50\%\) & \(75\%\) & \(100\%\) \\ \hline & 2 & \(5.5\cdot 10^{-6}\) & \(1.3\cdot 10^{-4}\) & \(1.6\cdot 10^{-3}\) & \(1.6\cdot 10^{-2}\) \\
50 & 5 & \(3.4\cdot 10^{-6}\) & \(7.7\cdot 10^{-5}\) & \(9.9\cdot 10^{-4}\) & \(7.5\cdot 10^{-2}\) \\ & 10 & \(3.5\cdot 10^{-14}\) & \(4.3\cdot 10^{-6}\) & \(1.8\cdot 10^{-4}\) & \(3.3\cdot 10^{-3}\) \\ \hline & 2 & \(8.0\cdot 10^{-6}\) & \(1.6\cdot 10^{-4}\) & \(1.8\cdot 10^{-3}\) & \(2.6\cdot 10^{-2}\) \\
100 & 5 & \(7.3\cdot 10^{-5}\) & \(8.1\cdot 10^{-4}\) & \(3.7\cdot 10^{-3}\) & \(6.9\cdot 10^{-2}\) \\ & 10 & \(3.1\cdot 10^{-6}\) & \(1.3\cdot 10^{-5}\) & \(8.2\cdot 10^{-4}\) & \(1.7\cdot 10^{-2}\) \\ \hline & 2 & \(1.1\cdot 10^{-5}\) & \(3.1\cdot 10^{-4}\) & \(1.5\cdot 10^{-3}\) & \(6.0\cdot 10^{-2}\) \\
200 & 5 & \(1.6\cdot 10^{-4}\) & \(1.1\cdot 10^{-3}\) & \(3.8\cdot 10^{-3}\) & \(1.7\cdot 10^{-1}\) \\ & 10 & \(5.3\cdot 10^{-6}\) & \(2.8\cdot 10^{-4}\) & \(1.4\cdot 10^{-3}\) & \(1.2\cdot 10^{-2}\) \\ \hline \end{tabular}
\end{table}
Table 2: Relative errors for problems with linear loss functions and quadratic constraints
\begin{table}
\begin{tabular}{c c c c c c} \hline \(m\) & \(n\) & \(25\%\) & \(50\%\) & \(75\%\) & \(100\%\) \\ \hline & 2 & \(2.0\cdot 10^{-5}\) & \(3.9\cdot 10^{-5}\) & \(8.7\cdot 10^{-5}\) & \(1.5\cdot 10^{-4}\) \\
50 & 5 & \(1.0\cdot 10^{-3}\) & \(1.2\cdot 10^{-3}\) & \(1.8\cdot 10^{-3}\) & \(5.5\cdot 10^{-3}\) \\ & 10 & \(5.0\cdot 10^{-3}\) & \(6.9\cdot 10^{-3}\) & \(8.9\cdot 10^{-3}\) & \(1.5\cdot 10^{-2}\) \\ \hline
100 & 2 & \(1.4\cdot 10^{-5}\) & \(5.4\cdot 10^{-5}\) & \(8.5\cdot 10^{-5}\) & \(1.4\cdot 10^{-4}\) \\ & 5 & \(8.7\cdot 10^{-4}\) & \(1.6\cdot 10^{-3}\) & \(2.5\cdot 10^{-3}\) & \(4.5\cdot 10^{-3}\) \\ & 10 & \(5.6\cdot 10^{-3}\) & \(8.0\cdot 10^{-3}\) & \(1.1\cdot 10^{-2}\) & \(1.8\cdot 10^{-2}\) \\ \hline
200 & 2 & \(1.5\cdot 10^{-5}\) & \(5.0\cdot 10^{-5}\) & \(1.0\cdot 10^{-4}\) & \(2.0\cdot 10^{-4}\) \\ & 5 & \(9.6\cdot 10^{-4}\) & \(1.3\cdot 10^{-3}\) & \(1.9\cdot 10^{-3}\) & \(3.9\cdot 10^{-3}\) \\ & 10 & \(6.3\cdot 10^{-3}\) & \(7.7\cdot 10^{-3}\) & \(1.1\cdot 10^{-2}\) & \(1.6\cdot 10^{-2}\) \\ \hline \end{tabular}
\end{table}
Table 3: Relative errors for problems with quadratic loss functions and linear constraints
obtain the solution. It should be noted that this experiment illustrates the inexpediency of constructing the _constrained neural networks_ by solving the projection optimization problem during the forward pass. Nevertheless, the use of such layers is justified if, for example, it is required to obtain a strictly orthogonal projection.
The proposed structure of the neural network allows us to implement algorithms for solving arbitrary problems of both convex and non-convex optimization. For example, Fig.8 shows optimization trajectories for the Rosenbrock function [18] with quadratic constraints:
\[\mathcal{L}_{Ros}(x)=(1-x_{1})^{2}+100(x_{2}-x_{1}^{2})^{2}, \tag{53}\]
\[\Omega_{Ros}=\left\{x\ |\ x_{1}^{2}+x_{2}^{2}\leq 2\right\}. \tag{54}\]
This function with constraints has a global minimum at the point \((1,1)\). The boundary of the constraint set \(\Omega_{Ros}\) is depicted by the large black circle. For updating the parameters, \(2000\) iterations of the Adam algorithm are used with the learning rate \(0.1\). \(9\) points on a uniform grid from \(-0.75\) to \(0.75\) are chosen as starting points. For each starting point, an optimization trajectory is depicted, with its endpoint indicated by a white asterisk. Three different scenarios are considered:
* _The central projection_ optimizes the input of a layer with the constrained output that performs the central projection. Such a layer implements the identity mapping inside the constraints and maps the outer points to the boundary.
* _The hidden space_\((r,s)\) optimizes the input parameters of the proposed layer with constraints (\(r\) is the ray, \(s\) is the scalar that defines a shift along the ray).
* _The projection neural network_ is a neural network which consists of \(5\) fully connected layers of size \(100\) and the proposed layer with constraints. The input parameters of the entire neural network are optimized.
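As an illustration of the first scenario, projected descent on the constrained Rosenbrock problem (53)–(54) can be sketched as follows. Plain gradient descent with a hand-picked learning rate replaces Adam here, so this is only a sketch of the idea under our own settings, not the paper's setup:

```python
import numpy as np

def rosenbrock(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                     200 * (x[1] - x[0] ** 2)])

p = np.zeros(2)               # interior point of Omega = {||x||^2 <= 2}

def central_projection(x):
    """Identity inside the disc; central mapping onto its boundary outside."""
    n = np.linalg.norm(x - p)
    return x if n <= np.sqrt(2) else p + (np.sqrt(2) / n) * (x - p)

x0 = np.array([-0.5, 0.5])
x = x0.copy()
for _ in range(50000):        # projected gradient descent along the valley
    x = central_projection(x - 2e-4 * grad(x))
# every iterate is feasible, and the loss decreases toward the optimum (1, 1)
print(rosenbrock(x), x @ x <= 2 + 1e-9)
```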
Fig.9 shows the optimization trajectories for the non-convex Bird function from [19]:
\[\mathcal{L}_{Bird}(x)=\sin(x_{2})e^{(1-\cos(x_{1}))^{2}}+\cos(x_{1})e^{(1-\sin(x_{2}))^{2}}+(x_{1}-x_{2})^{2}, \tag{55}\]
\[\Omega_{Bird}=\left\{x\ |\ (x_{1}+5)^{2}+(x_{2}+5)^{2}<25\right\}. \tag{56}\]
This function has four local minima in the region under consideration, two of which lie on the boundary of the set \(\Omega_{Bird}\), which is depicted by the large black circle in Fig.9.
Figure 7: Comparison of the computation time of the proposed neural network and the projection using CVXPYLayers
Figure 8: Optimization trajectories for the Rosenbrock function with quadratic constraints: (a) the central projection, (b) the hidden space \((r,s)\), (c) the projection neural network
Figure 9: Optimization trajectories for the Bird function (55): (a) the central projection, (b) the hidden space \((r,s)\), (c) the projection neural network
### A classification example
In order to illustrate the capabilities of neural networks with output constraints, consider a classification problem using the Olivetti Faces dataset from the "Scikit-Learn" package. The dataset contains 400 images of size \(64\times 64\) divided into 40 classes. We construct a model whose output is a discrete probability distribution, that is
\[f_{\theta}(z)\in\left\{x\ |\ x_{i}\geq 0,\ \mathbf{1}^{T}x=1\right\}. \tag{57}\]
It should be noted that traditionally the _softmax_ operation is used to build a neural network whose output is a probability distribution.
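For intuition, the constraint (57) can be realized by combining the equality elimination of Section 3 with the proposed layer. The sketch below is our own (with the assumed parameterization \(\alpha=\sigma(s)\cdot\overline{\alpha}\)); it maps an arbitrary \((r,s)\) to a valid probability vector without a _softmax_:

```python
import numpy as np

# Simplex {x >= 0, 1'x = 1} in R^3: eliminate the equality via x = Rw + u,
# then keep the remaining inequalities -(Rw + u) <= 0 on w.
Q = np.ones((1, 3)); u = np.full(3, 1/3)   # u: interior point of the simplex
_, S, Vt = np.linalg.svd(Q)
R = Vt[1:].T                               # null-space basis of Q (R'1 = 0)
A, b = -R, u                               # -(Rw + u) <= 0  <=>  -R w <= u

def sigmoid(s):
    return 1 / (1 + np.exp(-s))

def layer(r, s, w0=np.zeros(2)):
    """g_{w0}(r, s) in w-space, mapped back to the simplex by x = Rw + u."""
    den = A @ r
    ok = den > 0
    alpha_bar = ((b - A @ w0)[ok] / den[ok]).min() if ok.any() else np.inf
    w = w0 + sigmoid(s) * alpha_bar * r
    return R @ w + u

x = layer(np.array([0.3, -0.8]), s=2.0)
print(x, x.sum())    # a valid probability vector; sums to 1 (up to fp error)
```

Because the columns of \(R\) are orthogonal to \(\mathbf{1}\), the sum-to-one equality holds exactly for any \(w\), while the ray construction keeps all components non-negative.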
For comparison purposes, Fig.10 shows how the loss functions depend on the epoch number for the training and testing samples. Each neural network model contains 5 layers of size 300 and is trained using _Adam_ on 5000 epochs with the batch size 200 and the learning rate \(10^{-4}\) to minimize the cross entropy. Three types of final layers are considered to satisfy the constraints imposed on the probability distributions (57):
* _Constraints_ means that the proposed layer of the neural network imposes constraints on the input \((r,s)\);
* _Projection_ means that the proposed layer projects the input to the set of constraints;
* _Softmax_ is the traditional _softmax_ layer.
The dotted line in Fig.10 denotes the minimum value of the loss function on the testing set. It can be seen from Fig.10 that all types of layers allow solving the classification problem. However, the proposed layers in this case provide slower convergence than the traditional _softmax_. This can be explained by the logarithm in the cross entropy expression, which is _compensated_ by the exponent in the _softmax_. Nevertheless, the proposed layers can be regarded as a more general solution. In addition, it can be seen from Fig.11 that this hypothesis is confirmed if another loss function is used, namely "Hinge loss". One can see from Fig.11 that the _softmax_ also converges much faster, but to a worse local optimum.
Figure 10: Comparison of cross entropy for different types of the final layer of the classification neural network
In addition to the standard constraints (57), new constraints can be added, for example, upper bounds \(\overline{p}_{i}\) for each probability \(x_{i}\):
\[f_{\theta}(z)\in\left\{x\ |\ 0\leq x_{i}\leq\overline{p}_{i},\ \mathbf{1}^{T}x=1 \right\}. \tag{58}\]
This approach can play a balancing role in training by reducing the influence of already correctly classified points on the loss function. To illustrate how the neural network is trained with these constraints and to simplify the visualization of results, the simple classic classification dataset "Iris" is considered. It contains only 3 classes and 150 examples. Three classes allow us to visualize the results by means of the unit simplex. In this example we set upper bounds \(\overline{p}_{i}=0.75\) for \(i=1,2,3\). Fig.12 shows the three-dimensional simplices and points (small circles) that are outputs of the neural network trained over 100, 500, and 1000 epochs. The neural network consists of 5 layers of size 64. Colors of the small circles indicate the corresponding classes (Setosa, Versicolour, Virginica). It can be seen from Fig.12 that the constraints affect the training not only when the output points are very close to them, but throughout the whole training process.
## 5 Conclusion
The method, which imposes hard constraints on the neural network output values, has been presented. The method is extremely simple from the computational point of view. It is implemented by the additional neural network layer with constraints for output. Applications of the proposed method have been demonstrated by numerical experiments with several optimization and classification problems.
The proposed method can be used in various applications, allowing one to impose linear and quadratic inequality constraints and linear equality constraints on the neural network outputs, as well as to constrain inputs and outputs jointly, to approximate orthogonal projections onto a convex set, or to generate output vectors on a boundary set.
We have considered cases of using the proposed method under the condition of a non-convex objective function, for example, the non-convex Bird function used in the numerical experiments. At the same time, the feasible set formed by the constraints has been convex, because the convexity property has been used in the proposed method when the point of intersection of the ray \(r\) with one of the constraints was determined. However, it is interesting to extend the ideas behind the proposed method to the case of non-convex constraints. This extension of the method can be regarded as a direction for further research.
Figure 11: Comparison of “Hinge loss” for different types of the final layer of the classification neural network
It should be pointed out that an important disadvantage of the proposed method is that it works only with bounded feasible sets. This implies that conic constraints cannot be handled by the method, because the point of intersection of the ray \(r\) with a conic constraint may lie at infinity. We could restrict a conic constraint by some bound in order to find an approximate solution. However, it is interesting and important to modify the proposed method to solve problems with conic constraints. This is another direction for research.
There are not many models that actually realize the hard constraints imposed on inputs, outputs and hidden parameters of neural networks. Therefore, new methods which outperform the presented method are of the highest interest.
Another important direction for extending the proposed method is to consider various machine learning applications, for example, physics-informed neural networks which can be regarded as a basis for solving complex applied problems [20, 21]. Every application requires to adapt and modify the proposed method and can be viewed as a separate important research task for further study.
|
2308.09955 | To prune or not to prune : A chaos-causality approach to principled
pruning of dense neural networks | Reducing the size of a neural network (pruning) by removing weights without
impacting its performance is an important problem for resource-constrained
devices. In the past, pruning was typically accomplished by ranking or
penalizing weights based on criteria like magnitude and removing low-ranked
weights before retraining the remaining ones. Pruning strategies may also
involve removing neurons from the network in order to achieve the desired
reduction in network size. We formulate pruning as an optimization problem with
the objective of minimizing misclassifications by selecting specific weights.
To accomplish this, we have introduced the concept of chaos in learning
(Lyapunov exponents) via weight updates and exploiting causality to identify
the causal weights responsible for misclassification. Such a pruned network
maintains the original performance and retains feature explainability. | Rajan Sahu, Shivam Chadha, Nithin Nagaraj, Archana Mathur, Snehanshu Saha | 2023-08-19T09:17:33Z | http://arxiv.org/abs/2308.09955v1 | # To Prune or not to Prune: :
###### Abstract
Reducing the size of a neural network (pruning) by removing weights without impacting its performance is an important problem for resource constrained devices. In the past, pruning was typically accomplished by ranking or penalizing weights based on criteria like magnitude and removing low-ranked weights before retraining the remaining ones. Pruning strategies may also involve removing neurons from the network in order to achieve the desired reduction in network size. We formulate pruning as an optimization problem with the objective of minimizing misclassifications by selecting specific weights. To accomplish this, we have introduced the concept of chaos in learning (Lyapunov exponents) via weight updates and exploiting causality to identify the causal weights responsible for misclassification. Such a pruned network maintains the original performance and retains feature explainability.
## 1 Introduction
Designing a neural network architecture is a critical aspect of developing neural networks for various AI tasks, particularly in deep learning. One of the fundamental challenges in designing neural networks is finding the right balance between model complexity and sample size, which can have a significant impact on the network's performance. In general, a larger network with more parameters (overparameterized) can potentially learn more complex functions and patterns from the data [21], but it must still learn to generalize well to new, unseen data. In this context, over-parameterized networks [27], [19] have become increasingly popular in the deep learning era due to
their ability to achieve high expressivity and potentially better generalization performance [24]. The idea behind over-parameterization is to increase the number of parameters in the network beyond what is strictly necessary to fit the training data, which often still yields remarkable generalization to test data. However, such networks still require a number of assumptions and hyperparameter tuning to guarantee optimal performance.
Pruning techniques should reduce the number of parameters in a neural network without compromising its accuracy. However, it is important to ponder why we do not simply train a smaller network from scratch to make training more efficient. The reason is that the architectures obtained after pruning are typically more challenging to train from scratch [16], and they often result in lower accuracy compared to the original networks. Therefore, while standard pruning techniques can effectively reduce the size and energy consumption of a network, they do not necessarily lead to a more efficient training process.
### Problem statement and Contributions
We pose a broad _Research Question_ here: Is there a principled approach to pruning overparameterized, dense neural networks to a reasonably good sparse approximation such that performance is not compromised and explainability is retained? It is well known that dense neural network training, and particularly weight updates via SGD, has some element of chaos [26], [9]. We expect that, between the successive weight updates due to SGD and misclassification, there is some observable causality, and that the non-causal weights (parameters) can be pruned, leading to a sparse network - i.e., some weight updates cause a reduction in network (training) loss and some do not! Can we train a dense network for a few epochs to derive a pruned architecture, let the derived network run for the remaining epochs, and produce performance metrics in the \(\epsilon\)-ball of the original, dense network? Does this sparse network also train well, as verified with Shapley [17] and WeightWatcher (_WW_) [18] tests? Specifically, we contribute the following:
* Present a unique and unifying framework on chaos and causality for deep network pruning. The unifying framework uses Lyapunov exponents (LE) [10] and Granger causality (GC) [6] in tandem.
* Propose novel pruning architectures, Lyapunov Exponent Granger Causality driven Fully Trained Network (LEGCNet-FT) and Lyapunov Exponent Granger Causality driven Partially Trained Network (LEGCNet-PT).
* Show that LEGCNet-FT and LEGCNet-PT compare very well in performance with the other baselines, the Random [15] and Magnitude-based [14] pruning techniques.
* Establish feature consistency of LEGCNet-FT and LEGCNet-PT in explainability.
* Verify empirically that the proposed architectures for pruning are not over-trained and obviously not over-parameterized but are still able to generalize well, on diverse data sets while saving Flops (FLOATing-point OPerationS). We accomplish this via the _WW_ test.
In summary, we propose pruning techniques, _LEGCNet-FT and LEGCNet-PT_, which perform at par with the dense, unpruned architecture and the existing pruning baselines. _While maintaining consistent performance, these techniques also reduce the epochs needed to converge and the Flops to compute, while preserving feature consistency with their dense counterparts and ensuring proper training across layers, validated via WW statistics._
## 2 Technical Motivation
Chaos, causality and the manifestation of the Lottery Ticket Hypothesis are the key motivation behind our proposed pruning mechanism.
### Gradient Descent and Low Dimensional Chaos
Is the process of updating weights in backpropagation via Gradient Descent chaotic? Is there an alternative interpretation of the minima in the weight landscape via low-dimensional chaos? The weight update in SGD, written as \(w_{i+1}\gets w_{i}-\eta_{i}\nabla_{w}f(w_{i})\), may be thought of as a discretization of the first-order ODE \(w^{\prime}(t)=-\nabla_{w}f(w(t))\). The minimizer found by SGD is therefore conceived as a stable equilibrium of the ODE. That is, the minimum \(w^{\star}\) can be thought of as a fixed point of the iterates \(w_{i+1}=G(\eta_{i},w_{i})\), i.e., \(w^{\star}=G(\eta_{i},w^{\star})\).
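This fixed-point view can be checked numerically on a toy convex loss (a minimal sketch; the quadratic loss, step size, and iteration count are illustrative choices, not from the paper):

```python
import numpy as np

def grad_f(w):
    # Gradient of the toy convex loss f(w) = 0.5 * w**2, whose minimizer is w* = 0.
    return w

def sgd_iterates(w0, eta=0.1, steps=200):
    """Discretize w'(t) = -grad f(w(t)) with step eta and record the trajectory."""
    ws = [w0]
    for _ in range(steps):
        ws.append(ws[-1] - eta * grad_f(ws[-1]))
    return np.array(ws)

ws = sgd_iterates(w0=1.0)
w_star = ws[-1]
# The limit w* is (numerically) a fixed point of G(eta, w) = w - eta * grad_f(w).
residual = abs(w_star - (w_star - 0.1 * grad_f(w_star)))
```

Here the iterates contract towards \(w^{\star}=0\), and the fixed-point residual is at the level of numerical noise.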
**Empirical evidence of chaos in back propagation:** We performed a series of experiments on different datasets to check for Sensitive Dependence On Initial Conditions (SDIC) with respect to weight initialization, on a single-hidden-layer neural network. The weight initialization matrix \(W_{ij}\) followed a Gaussian distribution \((W_{ij}\sim N(0,\sigma^{2}))\). We recorded two sets of executions - one with the initial weights \(w_{ij},\forall i\in 1..h,\forall j\in 1..n\), where \(n,h\) are the numbers of input and hidden neurons, and another with an infinitesimal perturbation of the first weight (\(w_{11}+\delta\)), keeping all other parameters the same. Each time, the network
was trained via gradient descent and the weight series were recorded. The method was repeated for the second weight connection \(w_{ij},i=1,j=2\), and then for all the weights in the network. Later, the Lyapunov exponents were computed by applying the TISEAN package [8] to the recorded weight series, measuring the divergence of the trajectory due to the initial perturbation \(\delta\). We observed positive Lyapunov exponents, which marked the presence of some chaotic behavior in gradient descent.
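The perturbation experiment can be sketched in plain numpy (a minimal stand-in for the setup above: the toy data, network sizes, and the log-divergence slope used in place of TISEAN's estimator are all illustrative assumptions; the sign of the estimate depends on the training dynamics, and the paper reports positive exponents on its datasets):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))                  # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)     # toy labels

def train_record(w11_offset=0.0, steps=300, lr=0.5):
    """Train a 1-hidden-layer net by full-batch GD; record the w11 series."""
    rng = np.random.default_rng(1)            # identical init in both runs
    W1 = rng.normal(scale=0.5, size=(2, 4))
    W2 = rng.normal(scale=0.5, size=(4, 1))
    W1[0, 0] += w11_offset                    # infinitesimal perturbation delta
    series = []
    for _ in range(steps):
        H = np.tanh(X @ W1)
        p = 1.0 / (1.0 + np.exp(-(H @ W2)))
        dL = (p - y[:, None]) / len(X)        # grad of cross-entropy wrt logits
        gW2 = H.T @ dL
        gW1 = X.T @ ((dL @ W2.T) * (1 - H ** 2))
        W1 -= lr * gW1
        W2 -= lr * gW2
        series.append(W1[0, 0])
    return np.array(series)

delta0 = 1e-8
dw = np.abs(train_record(delta0) - train_record(0.0)) + 1e-30
lam = np.polyfit(np.arange(len(dw)), np.log(dw), 1)[0]  # slope of log-divergence
```

The slope `lam` of the log-divergence between the two trajectories plays the role of the largest Lyapunov exponent estimate.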
### Chaos and Causality
One way to address the issue of explainability in AI/machine learning is to seek causal explanations for choices made in the learning process. Conversely, a learning process that incorporates choices made out of causal considerations is easier to explain and interpret. This is the motivation behind using causality based criteria for the choice of what to prune (or not prune) in this study. To this end, we employ Granger Causality (GC) [7].
The principle of causally-informed pruning is formulated as follows. Those connections (weights) in the learning network that do not causally impact the loss are chosen for pruning. To determine the causal impact of a particular connection on the loss, we perform GC between the windowed Lyapunov exponents (LE) of the weight time series for that connection and the classification accuracy. The rationale behind this is the intuition that the chaotic signature of weight updates informs learning. A biological inspiration for chaotic signatures as a marker of learning is the empirical fact that neurons in the human brain exhibit chaos [4; 11] at all spatio-temporal scales. From single neurons to coupled neurons to networks of neurons to different areas of the brain - chaos has been found to be ubiquitous in the brain [11]. Chaotic systems are known to exhibit a wide range of patterns (periodic, quasi-periodic, and non-periodic behaviors), are very robust to noise, and enable efficient information transmission [20], processing/computation [3; 12], and classification [2]. There is also some evidence to suggest that weak chaos is likely to aid learning [25]. This motivates our choice of testing the causal strength between LE (a value \(>0\) is a marker of chaos) and classification accuracy as a pruning criterion, yielding sparse subnetworks that capture the learning of the task at hand.
### Lottery ticket hypothesis
The "lottery ticket hypothesis" [5] is a concept in neural network pruning that suggests that within a dense and over-parameterized neural network, there exist sparse sub-networks that can be trained to perform just as well as the original dense network. Any fully-connected feed-forward network \(f^{d}(x;\phi)\), with initial parameters \(\phi\) when trained on a training set D, \(f^{d}\) achieves a test accuracy \(a\) and error \(e\) at iteration \(j\). Our work LEGCNet, validates the lottery ticket hypothesis by finding the "winning-ticket", \(m\), to construct the sparse network, \(f^{s}\) such that \(acc^{s}\geq a\) and \(j^{s}<j\) where \(\|\phi\|\gg\|m\|\).
The remainder of the paper is organized to present the key methodologies used to develop the pruning technique (section 3), followed by a detailed experimental setup (section 4) and strong empirical evidence for the proposed technique in contrast to the baselines (sections 5 and 6).
## 3 Tools and Methodology
### Definitions:
* Let \(f^{d}\) be a dense neural network of depth \(l\) and width \(h\) defined as \[f^{d}(x)=W_{i}^{d}\sigma_{i}(W_{i-1}^{d}\sigma_{i}(...W_{1}^{d}(x)))\] (1) where \(W_{i}^{d}\) is the weight matrix for layer \(i\) such that \(i\in 1..l\).
* Let \(f^{s}\) be sparse neural network of the same architecture as \(f^{d}\), with depth \(l\) and width \(h\).
* In order to validate the working of LEGCNet, we divided the method into two discrete approaches. In one approach, the entire training weight series is used for computing Lyapunov exponents, testing Granger causality, and identifying the causal weights. Essentially, the dense network is trained till convergence, and the approach is called LEGCNet-Full Train (LEGCNet-FT). In the second approach, LEGCNet-PT, the network is trained only up to a certain number of epochs (10% of the total iterates), and these few weight updates are captured for identifying the causal weights.
### WeightWatcher (\(Ww\)) as a diagnostic tool
\(WW\) is a powerful open-source diagnostic tool designed for analyzing Deep Neural Networks (DNN). \(WW\) analyzes each layer by plotting the empirical spectral distribution (ESD), which represents the histogram of eigenvalues from the
layer's correlation matrix. Additionally, it fits the tail of the ESD to a (truncated) power law and presents these fitted distributions on separate axes. This visualization approach provides a clear representation of the eigenvalue distribution and highlights the presence of heavy-tailed behavior in the network's layers. In general, the ESDs observed in the best layers of high-performing DNNs can often be effectively modeled using a Power Law (PL) function. The PL exponents, denoted as alpha, tend to be closer to 2.0 in these cases, indicating heavy-tailed behavior in the layer's correlation matrix.
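The quantities such an analysis computes can be sketched in numpy (a hedged illustration only: the Hill estimator below is a standard stand-in for a power-law tail fit, not WW's exact fitting procedure, and the random weight matrix is an illustrative placeholder for a trained layer):

```python
import numpy as np

def esd(W):
    """Empirical spectral distribution: eigenvalues of the layer correlation matrix."""
    corr = W.T @ W / W.shape[0]
    return np.sort(np.linalg.eigvalsh(corr))[::-1]   # descending order

def hill_alpha(eigs, k=20):
    """Hill-type estimate of the power-law tail exponent from the top-k eigenvalues."""
    top = eigs[:k]
    return 1.0 + k / np.sum(np.log(top / top[-1]))

rng = np.random.default_rng(0)
W = rng.normal(size=(784, 50))     # stand-in for a layer's trained weights
eigs = esd(W)
alpha = hill_alpha(eigs)
```

For a genuinely well-trained layer, the fitted exponent would be read off from a heavy-tailed ESD; a random Gaussian matrix, as used here purely for illustration, is not heavy-tailed.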
### SHAP as a diagnostic tool
SHAP (SHapley Additive exPlanations) is a game-theoretic technique utilized to provide explanations for the output of machine learning models. SHAP enables a comprehensive understanding of the contributions made by different features to the model's output, facilitating insightful explanations of its decision-making process. If a network is pruned according to some underlying principles, then consistency in feature explainability is maintained before and after pruning, i.e., the features which explain the outcome before pruning (on the fully connected, dense network) remain consistent on the pruned network.
### Windowed Weight Updates:
Consider a neural network with \(n\) inputs and \(r\) outputs, \(l\) hidden layers of \(m\) neurons each, and an input vector denoted as \(x\in R^{n}\). When trained by SGD, the network generates a sequence of weight updates represented by \(w^{ji}=\left[w_{0}^{ji},w_{1}^{ji},...,w_{k}^{ji},...,w_{K}^{ji}\right]\), where \(w_{k}^{ji}\) is the weight connecting the \(i\)th neuron of the input layer to the \(j\)th neuron of the hidden layer at the \(k\)th iteration. With the weights collected for the initial few epochs, the weight iterates for the hidden layer and output layer are \(W_{h}=\left\{w^{ji}\right\},W_{o}=\left\{w^{kj}\right\}\quad\forall i\in\left\{ 1,..,n\right\},\forall j\in\left\{1,..,m\right\},\forall k\in\left\{1,..,r\right\}\). An infinitesimal perturbation \(\delta_{0}\) is introduced in the initial weight \(w^{11}\), given as \(w^{11\delta_{0}}=w^{11}+\delta_{0}\), keeping the other parameters - weights (initialization), learning rate, optimizer, and loss function - the same. The network is then retrained with the perturbed weight, and the weight updates are recorded again as \(W_{h}^{\delta_{0}}=\left\{w^{ji\delta_{0}}\right\},W_{o}^{\delta_{0}}=\left\{ w^{kj\delta_{0}}\right\}\quad\forall i\in\left\{1,..,n\right\},\forall j\in\left\{1,..,m \right\},\forall k\in\left\{1,..,r\right\}\). The difference series obtained by subtracting the perturbed weights from the initial weights is \(\delta W_{h}=\left\{\delta w^{ji}\right\},\delta W_{o}=\left\{\delta w^{kj} \right\}\quad\forall i\in\left\{1,..,n\right\},\forall j\in\left\{1,..,m \right\},\forall k\in\left\{1,..,r\right\}\). We divide each weight series \(\delta w^{ji}\) into \(K\) windows, \(\delta w^{ji}=\bigcup_{l=1}^{K}\delta w^{ji(l)}\), and compute the Lyapunov exponent of every windowed weight trajectory. The series of Lyapunov exponents \(\lambda\) of the windowed trajectories \(\delta w^{ji(l)}\) is represented using the notation \(\left\{\lambda^{ji(1)},\lambda^{ji(2)},...,\lambda^{ji(K)}\right\}\).
Additionally, we record the accuracy in every window \(l\in 1,\ldots,K\) during training, captured at the weights \(w^{ji(l)}\) and \(w^{kj(l)}\), \(\forall i\in\left\{1,..,n\right\},\forall j\in\left\{1,..,m\right\}, \forall k\in\left\{1,..,r\right\}\).
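The windowing step can be sketched as follows (the slope-of-log-divergence estimate is a simple stand-in for TISEAN's Lyapunov estimator; the window size of 200 matches the experimental set-up described in section 4):

```python
import numpy as np

def windowed_lyapunov(delta_w, window=200):
    """Per-window Lyapunov-style estimate: slope of log|delta_w| in each window."""
    n_windows = len(delta_w) // window
    lams = []
    for l in range(n_windows):
        seg = np.abs(np.asarray(delta_w[l * window:(l + 1) * window])) + 1e-30
        t = np.arange(window)
        lams.append(np.polyfit(t, np.log(seg), 1)[0])
    return np.array(lams)

# Sanity check: a purely exponential divergence exp(0.05 * t) has exponent 0.05
# in every window.
t = np.arange(1000)
lams = windowed_lyapunov(np.exp(0.05 * t), window=200)
```

Each entry of `lams` then pairs with the accuracy recorded in the corresponding window, producing the two aligned series that enter the Granger-causality test.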
### Approximation capability of LEGCNet
The main theorem and lemma in this section establish that the LEGCNet pruned network (\(f^{s}\)) is \(\epsilon\)-close to \(f^{d}\) with probability \(1-\delta\) [22]. Consider \(f^{d}\) to be a dense neural network defined as
\[f^{d}(x)=W_{i}^{d}\sigma_{i}(W_{i-1}^{d}\sigma_{i}(...W_{1}^{d}(x))) \tag{2}\]
Figure 1: Weights pruned via the LEGCNet method for selecting connections in the sparse neural network
where \(W_{i}^{d}\) is the weight matrix for layer \(i\) such that \(i\in 1..l\) and \(h_{i}>h_{0},h_{i}>h_{l},\forall i\in 1...(l-1)\). We assume that, for the network in (2), \(\sigma_{i}\) is \(L_{i}\)-Lipschitz and the weight matrix \(W_{i}^{d}\) is initialized from the uniform distribution \(U[\frac{-K}{\sqrt{max(h_{i},h_{i-1})}},\frac{K}{\sqrt{max(h_{i},h_{i-1})}}]\), for some constant \(K\).
**Theorem 3.1** (_Approximation Capability of the Sparse Network_): _Let \(\epsilon>0\), \(\delta>0\), \(\alpha\in(0,1)\), and suppose that, for some constants \(K_{1},K_{2},K_{3},K_{4},K_{5}\),_
\[h\geqslant max\left\{K_{1}^{\frac{1}{n}},\left(\frac{K_{2}}{\epsilon}\right) ^{\frac{1}{\alpha}},\left(\frac{K_{3}}{\delta}\right)^{\frac{1}{\alpha}},K_{ 4}+K_{5}log\left(\frac{1}{\delta}\right)\right\}\]
_then sparse network \(f^{s}\) obtained from LEGCNet by the mask \(m\), and pruning the weights \(W_{i}^{d}\), \(\forall i\in 1...l\) is \(\epsilon\)-close to \(f^{d}\), with the probability (\(1-\delta\)), i.e._
\[\sup_{x\in B_{d0}}\left\|f^{s}(x)-f^{d}(x)\right\|_{2}\leq\epsilon\]
**Remark:** Lipschitz property of the activation functions [23] is a necessary condition to validate the approximation capability of the proposed sparse network. We have used _sigmoid activation_ in the sparsely trained/pruned network.
**Lemma 3.2**: _Sigmoid activation is Lipschitz._
**Proof:** A differentiable function \(f\) is Lipschitz continuous with constant \(K\) if and only if \(\left\|f^{\prime}(x)\right\|\leq K\); if \(K<1\), \(f\) is a contraction map as well. For the sigmoid, \(\sigma(x)=\frac{1}{1+e^{-x}}\) and \(\left\|\sigma^{\prime}(x)\right\|=\left\|\sigma(x)(1-\sigma(x))\right\|\). Since \(0\leq\sigma(x)\leq 1\), we have \(\left\|\sigma(x)(1-\sigma(x))\right\|\leq\frac{1}{4}<1\). Hence sigmoid is Lipschitz (and, in fact, a contraction map).
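A quick numerical check of the sigmoid's derivative bound (pure numpy; the grid is an illustrative choice):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-50.0, 50.0, 200001)       # dense grid containing x = 0
deriv = sigmoid(x) * (1.0 - sigmoid(x))    # sigma'(x) = sigma(x)(1 - sigma(x))
K = deriv.max()                            # maximum attained at x = 0
```

The maximum of \(\sigma'(x)\) is \(1/4\), attained at \(x=0\), confirming that the Lipschitz constant is below 1.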
### Methodology Overview (refer figure 1)
Our pruning method was developed as follows. Initially, we trained a simple Multi-layer Perceptron (MLP) on a given dataset. Throughout the training process, we recorded the weights at each iteration, resulting in a time series of weight values. These weight time series were subsequently utilized to estimate the Lyapunov exponents, a measure of chaotic behavior, using the TISEAN package in conjunction with MATLAB scripts. This estimation was performed using a sliding window approach, generating a time series of Lyapunov exponents.
Based on our experiments, the time series of Lyapunov exponents consistently exhibited positive values, indicating the presence of chaotic elements in the weight time series. Consequently, our study focused on understanding whether these weights Granger caused the model's accuracy. Any weights that did not demonstrate this causal relationship were pruned before conducting subsequent model runs (code is uploaded in supplementary, zipped file).
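A minimal, self-contained sketch of the causal test at the heart of this step (a two-lag Granger F-test written directly in numpy; in practice one would compare against a proper F-distribution critical value or use a package such as statsmodels, and the synthetic series, lag count, and seed below are illustrative assumptions):

```python
import numpy as np

def granger_f(x, y, lags=2):
    """F-statistic for 'x Granger-causes y': restricted vs. full lag regressions."""
    n = len(y) - lags
    Y = y[lags:]
    own = np.column_stack([y[lags - k:len(y) - k] for k in range(1, lags + 1)])
    cross = np.column_stack([x[lags - k:len(x) - k] for k in range(1, lags + 1)])
    ones = np.ones((n, 1))
    A_r = np.hstack([ones, own])           # restricted model: y's own lags only
    A_f = np.hstack([ones, own, cross])    # full model: adds x's lagged values
    rss = lambda A: np.sum((Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(A_r), rss(A_f)
    return ((rss_r - rss_f) / lags) / (rss_f / (n - A_f.shape[1]))

rng = np.random.default_rng(0)
T = 300
le_causal = rng.normal(size=T)     # stand-in windowed-LE series (drives accuracy)
le_noise = rng.normal(size=T)      # stand-in windowed-LE series (unrelated)
acc = np.zeros(T)                  # stand-in windowed-accuracy series
for t in range(1, T):
    acc[t] = 0.5 * acc[t - 1] + 0.8 * le_causal[t - 1] + 0.1 * rng.normal()

f_causal = granger_f(le_causal, acc)
f_noise = granger_f(le_noise, acc)
```

A connection whose LE series yields a small F-statistic (here, `le_noise`) shows no causal link to accuracy and is the kind of candidate the pruning criterion acts on.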
## 4 Experimental set-up
In our study, we employed Python 3.10 and Matlab R2022a to conduct experiments with a single-hidden-layer neural network on various datasets. Our experiments were conducted on a Ryzen 9 3900XT desktop processor with 32GB RAM and a 1TB HDD. During training, we stored the weight updates for every connection in CSV files. We assumed a window size of 200 iterates and computed the Lyapunov exponent for each weight connection on every window. Further, we calculated the training and test accuracy on every window to capture the misclassification rates. Thus, we obtained a sequence of windowed Lyapunov exponents and windowed accuracies for every connection. We then computed the Granger causality between the windowed Lyapunov exponents and the misclassification rate to identify weight connections that Granger caused misclassification. In the process of pruning the network, we removed the connections for which the Lyapunov exponents were found to Granger cause misclassification. After pruning, we reran the experiment, keeping the initial weights, optimizers, and other hyperparameters the same as in the unpruned network. We extended the work to different datasets and recorded the epochs and accuracies of the pruned network. Interestingly, we observed that the accuracies of the sparse network exceeded those of the dense network.
In our study, we conducted experiments in two parts. In the first part, we trained the neural network until convergence (LEGCNet-FT) and computed windowed Lyapunov exponents for each weight connection as well as windowed accuracies. We then computed the Granger causality and pruned the network by removing connections that were found to cause misclassification. In the second part of the experiment, we trained the network only for a few epochs (LEGCNet-PT) and used these initial weight updates to compute windowed Lyapunov exponents and accuracies, repeating the same procedure as in the first part of the experiment. Finally, we compared the performance of the pruned networks (LEGCNet-FT and LEGCNet-PT) to that of the original network. The results of all these experiments were recorded and are presented in tables. Code is available as a supplementary file.
## 5 Performance comparison - dense network, LEGCNet-FT, and LEGCNet-PT
We ran the experiments on seven tabular datasets - Cancer, Titanic, Banknote, Iris, Iris (3 features), Vowel, and MNIST. The datasets were divided into an 80:20 train-test split, and the code was run 5 times, each time with a different network initialization. The best results from every initialization are reported in tables 1 and 2. We compared the Flops, % pruned (fraction of parameters removed \(\times\) 100), accuracy, F1-scores, and epochs for all methods (dense, LEGCNet-FT, and LEGCNet-PT). Table 1 shows the performance comparison of the dense network and LEGCNet-FT. Remarkably, LEGCNet-FT achieves notable reductions in Flops without compromising accuracy. Furthermore, LEGCNet-FT converges significantly faster, consuming fewer epochs than the dense network. Specifically, for the Titanic, Vowel, and Cancer datasets, LEGCNet-FT achieves convergence in just half the number of epochs required by the dense network. Nonetheless, both networks achieve a similar level of performance - accuracy and F1-score - without significant differences, thus validating the lottery ticket hypothesis. Table 2 demonstrates the performance of LEGCNet-PT. It shows that LEGCNet-PT performs on par with its counterpart in terms of convergence speed, Flops, accuracy, and F1-scores.
### Diagnostics
It is also crucial to examine the impact of our pruning technique on the training process of the model and to determine whether the relevance of feature importance is maintained and whether the pruned network is properly trained. To accomplish this goal, we utilized two diagnostic tools, _namely WW and SHAP_. _WW_ validates the compliance of our architecture, as reflected in table 4 and figure 2, plotted for MNIST. The alpha lies between 2.0 and 6.0 for every layer (table 4). The ESD plots of the three types of training (dense, LEGCNet-PT, LEGCNet-FT) manifest a heavy-tailed distribution of eigenvalues on each layer, indicating that the layers are well-trained (figure 2). A careful observation of figure 2 reveals the following: in the ESD plot of a layer, the orange spike on the far right is the tell-tale clue; it is called a Correlation Trap [13]. A Correlation Trap refers to a situation where the empirical spectral distributions (ESDs) of the actual (green) and random (red) distributions appear remarkably similar, with the exception of a small correlation shelf located just to the right of 0. In the random ESD (red), the largest eigenvalue (orange) is noticeably positioned further to the right
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline Dataset (hidden neurons) & Flops - & Flops - & Non causal & Epochs & Epochs & Accuracy & Accuracy & F1-score & F1-score & \% Pruned \\ & DN & LEGCNet-FT & Weights & DN & LEGCNet-FT & DN & LEGCNet-FT & DN & LEGCNet-FT & LEGCNet-FT \\ \hline Cancer & 60 & 54 & 6 & 0 & 38 & 0.8579 & 0.8866 & 0.8060 & 0.8570 & 10 \\ \hline Titanic (8) & 56 & 52 & 4 & 43 & 20 & 0.7225 & 0.7177 & 0.6918 & 0.6843 & 7.14 \\ \hline Banknote (8) & 40 & 36 & 4 & 9 & 7 & 0.9018 & 0.8899 & 0.9016 & 0.8906 & 10 \\ \hline Iris (6) & 42 & 40 & 2 & 182 & 166 & 0.9 & 0.9 & 0.9124 & 0.5419 & 4.76 \\ \hline Iris 3 features & 36 & 26 & 10 & 139 & 125 & 0.9 & 0.9 & 0.9333 & 0.904 & 27.78 \\ \hline Vowel (4) & 36 & 34 & 2 & 36 & 14 & 0.7462 & 0.7538 & 0.7877 & 0.74 & 5.36 \\ \hline MNIST (50, 30) & 41000 & 40861 & 139 & 27 & 30 & 0.9121 & 0.9165 & 0.8609 & 0.8781 & 0.34 \\ \hline \end{tabular}
\end{table}
Table 1: Comparing the performance of sparse network with dense networks by using LEGCNet across different datasets. The approach used for finding non-causal weights is LEGCNet-Fully Trained. Accuracies used are on the Test set. Dense Network - DN, Sparse Network - SN
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline Data (\(Epochs^{*}\)) & Epochs (SN) & Accuracy (SN) Random & Accuracy (SN) Magnitude & F1-score (SN) Random & F1-score (SN) Magnitude \\ \hline Cancer & 42 & 0.8759 & 0.8055 & 0.8660 & 0.8814 \\ \hline Titanic & 35 & 0.7201 & 0.7531 & 0.8842 & 0.6906 \\ \hline Banknote & 6 & 0.8945 & 0.9618 & 0.8943 & 0.9015 \\ \hline Iris & 179 & 0.9000 & 0.9000 & 0.9124 & 0.9124 \\ \hline Iris 3 features & 160 & 0.9000 & 0.9313 & 0.9310 & 0.9330 \\ \hline Vowel & 28 & 0.7642 & 0.7662 & 0.7229 & 0.7407 \\ \hline MNIST & 31 & 0.9119 & 0.9312 & 0.8627 & 0.8866 \\ \hline \end{tabular}
\end{table}
Table 3: Results for Random and Magnitude-based pruning.
and is separated from the majority of the ESD's bulk. This phenomenon indicates the presence of strong correlations in the layer, which can potentially affect the overall behavior and performance of the network. This is the case of a well-trained layer.
_SHAP:_ The SHAP values computed for the proposed pruning architecture as well as for the baselines - random and magnitude - indicate that feature importance is maintained in LEGCNet-PT and LEGCNet-FT when compared with the dense network (figure 3). However, the baseline pruning methods (random and magnitude pruning) could not maintain feature consistency, as seen in the SHAP plots (supplementary file, section C). Though magnitude pruning shows feature consistency for the Cancer, Banknote, and Titanic datasets, random pruning does not.
## 6 Discussion and Conclusion
Unlike the current baselines, the percentage of pruned weights is significantly lower. This is because only the non-causal weights are pruned - weights that play no role in impacting the loss/accuracy. We argue that the proposed strategy is efficient and accurate, with the additional benefit of passing network-fitting and explainability tests, in addition to satisfying the lottery ticket hypothesis (tables 1 and 2, Experimental section). One of the other salient features of LEGCNet is that, unlike other pruning methods, it does not need the dense network to be trained for the entire cycle of epochs to identify pruning candidates. Rather, such candidates are detected after a few initial epochs so that the retraining can start immediately. This is reflected in reduced Flops without compromising key performance indicators (tables 1, 2). Notably, our architecture is validated for correct training via \(WW\) statistics, as detailed in the diagnostics section above.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{Layer1: 784-50} & \multicolumn{2}{c|}{Layer2: 50-30} & \multicolumn{2}{c|}{Layer3: 30-10} \\ \hline Model & alpha & alpha, w & alpha & alpha, w & alpha & alpha, w \\ \hline Dense & 2.19 & 1.63 &.51 &.28 & 1.94 & 2.76 \\ \hline LEGCNet-FT & 2.24 & 1.64 & 1.51 & 1.83 & 2.29 & 3.20 \\ \hline LEGCNet-PT & 2.19 & 1.71 & 1.51 & 1.85 & 2.73 & 3.87 \\ \hline Random & 2.30 & 1.53 & 1.70 & 1.95 & 2.00 & 2.84 \\ \hline Magnitude & 2.19 & 1.63 & 1.55 & 1.85 & 1.96 & 2.78 \\ \hline \end{tabular}
\end{table}
Table 4: Weight Watcher summary retrieved for all models trained on the same initialization for MNIST
Figure 3: Shap values and feature importance computed on Cancer dataset (Banknote and Titanic plots can be seen in the supplementary file section D) for all the three models - Dense Network, LEGCNet-FT and LEGCNet-PT; the feature importance for the dense network is same as LEGCNet-FT and LEGCNet-PT
Figure 2: \(WW\) **plots for dense and LEGCNet-FT, and LEGCNet-PT networks on MNIST data (layer 1)**. Layers 2 and 3 are kept in the supplementary file (section A). Plots reveal the correct training of the proposed architectures. WW plots of MNIST for random and magnitude pruning are in supplementary file section B.
The experimental results demonstrated that the proposed pruning method exhibits notable advantages in maintaining feature importance compared to the traditional random and magnitude pruning methods. The feature consistency remained relatively stable after employing the proposed pruning technique, which was not the case for the other two methods. In the case of random and magnitude pruning, significant fluctuations in feature importance were evident after pruning. These fluctuations could potentially hinder the interpretability of the underlying model. However, our proposed pruning method demonstrated remarkable resilience in preserving feature importance, with minimal perturbations observed in SHAP plot patterns, enabling a more interpretable and transparent pruned model. For mission-critical tasks on edge devices, such as predicting the power consumption of applications [1] or forecasting blood glucose in real time, feature explainability on pruned networks is critical, as it helps ensure accurate prediction when dimensionality is a curse. It is crucial to emphasize that our primary focus in this investigation was on the effects of pruning, rather than achieving perfect classification performance. We compared three distinct pruning techniques: random pruning, magnitude pruning, and our pruning approach. Overall, the experimental outcomes validate the superiority of the proposed pruning method. The findings hold great promise for further advancements in network optimization and model explainability. Our pruning approach is yet to be tested on baseline architectures (ResNet, DenseNet) and Large Language Models, and the savings in carbon emission need to be computed.
## Acknowledgments
Archana Mathur and Nithin Nagaraj would like to thank SERB-TARE (TAR/2021/000206) for supporting the work. Snehanshu Saha would like to thank the DBT-Builder project, Govt. of India (BT/INF/22/SP42543/2021) and SERB-SURE, DST.
|
2307.15521 | Scalable Imaginary Time Evolution with Neural Network Quantum States | The representation of a quantum wave function as a neural network quantum
state (NQS) provides a powerful variational ansatz for finding the ground
states of many-body quantum systems. Nevertheless, due to the complex
variational landscape, traditional methods often employ the computation of
quantum geometric tensor, consequently complicating optimization techniques.
Contributing to efforts aiming to formulate alternative methods, we introduce
an approach that bypasses the computation of the metric tensor and instead
relies exclusively on first-order gradient descent with Euclidean metric. This
allows for the application of larger neural networks and the use of more
standard optimization methods from other machine learning domains. Our approach
leverages the principle of imaginary time evolution by constructing a target
wave function derived from the Schr\"odinger equation, and then training the
neural network to approximate this target. We make this method adaptive and
stable by determining the optimal time step and keeping the target fixed until
the energy of the NQS decreases. We demonstrate the benefits of our scheme via
numerical experiments with 2D J1-J2 Heisenberg model, which showcase enhanced
stability and energy accuracy in comparison to direct energy loss minimization.
Importantly, our approach displays competitiveness with the well-established
density matrix renormalization group method and NQS optimization with
stochastic reconfiguration. | Eimantas Ledinauskas, Egidijus Anisimovas | 2023-07-28T12:26:43Z | http://arxiv.org/abs/2307.15521v4 | **Scalable Imaginary Time Evolution with Neural Network Quantum States**
## Abstract
**The representation of a quantum wave function as a neural network quantum state (NQS) provides a powerful variational ansatz for finding the ground states of many-body quantum systems. Nevertheless, due to the complex variational landscape, traditional methods often employ the computation of the quantum geometric tensor, consequently complicating optimization techniques. We introduce an approach that bypasses the computation of the metric tensor and instead relies exclusively on first-order gradient descent with Euclidean metric. This allows for the application of larger neural networks and the use of more standard optimization methods from other machine learning domains. Our approach leverages the principle of imaginary time evolution by constructing a target wave function derived from the Schrödinger equation, and then training the neural network to approximate this target. Through iterative optimization, the approximated state converges progressively towards the ground state. We demonstrate the benefits of our method via numerical experiments with the 2D \(J_{1}\)-\(J_{2}\) Heisenberg model, which showcase enhanced stability and energy accuracy in comparison to direct energy loss minimization. Importantly, our approach displays competitiveness with the well-established density matrix renormalization group method and NQS optimization with stochastic reconfiguration.**
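The imaginary-time-evolution principle the abstract describes (building a target state from a first-order imaginary time step and iterating towards the ground state) can be illustrated exactly on a small matrix, with no neural network involved; the Hamiltonian, \(\tau\), and iteration count below are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2                     # small random Hermitian "Hamiltonian"
E0 = np.linalg.eigvalsh(H)[0]         # exact ground-state energy

psi = rng.normal(size=6)
psi /= np.linalg.norm(psi)
E_init = psi @ H @ psi                # energy of the initial random state
tau = 0.05                            # imaginary time step
for _ in range(3000):
    target = psi - tau * (H @ psi)    # first-order step of exp(-tau * H)|psi>
    psi = target / np.linalg.norm(target)

E = psi @ H @ psi                     # approaches the ground-state energy E0
```

Iterating this normalized target construction is the principle that the paper's NQS training approximates: at each step the network is fitted to the target state instead of applying \((1-\tau H)\) exactly.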
###### Contents
* 1 Introduction
* 2 Proposed method for ground state search with NQS
* 2.1 Problem definition and notation
* 2.2 Imaginary time evolution with NQS
* 2.3 Loss function
* 2.4 Adaptive imaginary time step
* 2.5 Relation between ITE loss and E loss
* 2.6 NQS training procedure
* 3 Neural network quantum state
* 3.1 Neural network architecture
* 3.2 Sampling of basis states
* 3.3 Implementation details
* 4 Numerical experiments
## 1 Introduction
The remarkable accomplishments in applying machine learning techniques to a wide range of practical [1, 2, 3, 4, 5, 6] and curiosity-driven [7, 8, 9] tasks have prompted the adoption of innovative concepts and approaches in the field of physical sciences; see e.g. Refs. [10, 11, 12, 13, 14, 15] for recent physics-motivated reviews and tutorials. In particular, in the numerical study of quantum many-body systems [16] one promising line of development rests on the observation that, given a suitable many-body basis, a pure quantum-mechanical state can be represented by a multivariate function that assigns a complex number to a given basis element. For a typical quantum state of interest, such a wave function is an overwhelmingly complex entity, as the amount of information needed to fully encode a state in this manner grows exponentially with the system size. On the other hand, it is fitting to note that many classes of neural network architectures are recognized [17, 18, 19, 20, 21] as efficient function approximators: an arbitrary well-behaved function can be represented with a desired accuracy given a sufficient number of parameters, i.e. a sufficient neural network width or depth. Note, however, that such _universal approximation theorems_[20] only establish that a representation of a given function is possible, without providing an explicit scheme to construct such a representation.
The idea to represent a quantum wave function as a neural network quantum state (NQS) [11, 22, 23, 24, 25, 26, 27, 28] can be viewed as a very powerful variational ansatz that aligns well with the well-established variational Monte-Carlo (VMC) [29, 30] techniques to search for the ground state. In the general application framework, the approach typically includes the following elements: (I) The variational wavefunction \(\psi_{\theta}\) is parameterized in terms of the network weights \(\theta\), and the expectation value of the studied Hamiltonian \(H\) (the variational energy) \(E(\theta)=\langle\psi_{\theta}|H|\psi_{\theta}\rangle\) is identified as the objective function to be minimized, i.e. the _loss_ function. (II) Since a direct computation of \(E(\theta)\) is not feasible due to prohibitively large configuration space, \(E(\theta)\) is estimated as a stochastic average from a set of sampled configurations. Note that sampling can be performed either relying on a Markov chain of configurations [22, 31] or direct sampling from a neural network endowed with the autoregressive property [25, 26, 32]. (III) Since the variational landscapes of NQS states are rugged and difficult to navigate in the search of the ground state (see, e.g., the study reported in Ref. [31]), the stochastic reconfiguration (SR) method [33, 34] is often used instead of some variant of direct gradient-based
descent [35]. According to the geometric interpretation [36, 37], the benefits provided by the SR method stem from the correct determination of the non-Euclidean manifold structure of the space of quantum states. Thus, the minimization direction suggested by the raw gradient, \(-\nabla_{\theta}E(\theta)\), is less meaningful than the 'natural' Riemannian gradient [37, 38] corrected by the inverse of the local metric tensor \(G(\theta)\). Thus, the educated update of parameters is made along the direction of \(-G^{-1}(\theta)\nabla_{\theta}E(\theta)\). However, while using the metric tensor may improve the convergence, it comes with a substantial computational cost since the metric tensor is of the order \(N_{\theta}\times N_{\theta}\), where \(N_{\theta}\) represents the number of NQS variational parameters. Consequently, determining and calculating the (pseudo)inverse of the metric tensor becomes computationally expensive [10] and results in poor scaling with the system size [39]. This issue imposes significant limitations on the practical size of neural networks that can be utilized. We acknowledge that the scaling of SR can be enhanced by employing iterative solvers that avoid forming or inverting the tensor (e.g., see the implementation of SR in NetKet [40, 41]). However, a downside of this approach is the increased numerical instability, which necessitates adding a small diagonal shift to stabilize the method, which, in turn, reduces the precision of the estimated metric.
In the present paper, we propose a ground-state determination method that works with NQS-type wave functions, _does not_ require the computation of the metric tensor, and uses only first-order gradient descent. Our work is inspired by a recent development [28] whose authors considered an approach based on first-order gradient descent to describe real time evolution with NQS. In a similar spirit, we base our approach on the paradigm of _imaginary time_ evolution as a robust way to iteratively project out the ground state. In addition, we employ the idea of splitting the neural network into two copies, 'current' and 'target'. The target is derived from the current state of the neural network by applying an imaginary time evolution step and is then kept fixed for a certain number of optimization steps, while the current network is continually updated. To guide this cycle of updates we rely on a specific loss function that essentially asks the current state and the targeted state to stay parallel. The pertinent loss function is thus dubbed the imaginary time evolution loss (ITE loss) and replaces the usual energy-based loss function (E loss). Our simulations indicate that the ITE loss is able to avoid becoming trapped in the numerous saddle points of the optimization landscape. We validate the performance of the proposed method by applying it to the two-dimensional \(J_{1}\)-\(J_{2}\) Heisenberg model [42, 43], demonstrating that the proposed approach is competitive with direct energy loss minimization using first-order gradients or SR, as well as with the density matrix renormalization group (DMRG) [44, 45].
Our paper is organized as follows: In Section 2, we introduce the proposed method of the ground-state search and describe the training procedure. While the process assumes that the state is represented by a neural network, it is versatile and can be used with diverse network architectures. Thus, in Section 3 we describe the specific architecture used in our work. It is based on a multilayer perceptron [46] coupled with a preprocessing step inspired by vision transformers [3, 6]. The physical model used for testing, i.e. the two-dimensional spin-\(\frac{1}{2}\)\(J_{1}\)-\(J_{2}\) model, and the results of numerical experiments are discussed in Section 4. Finally, we conclude with a brief summarizing Section 5. A number of technical derivations are included in Appendices.
Proposed method for ground state search with NQS
### Problem definition and notation
The focus of our work is the ground-state search for many-particle quantum systems defined on a lattice. More specifically, we study two-dimensional (2D) square lattices, however, the developed methods can be straightforwardly generalized to other lattice geometries and dimensionalities. In such systems, the space of quantum states can be modeled as a Cartesian product of identical single-site state spaces \(\mathcal{H}=\bigotimes_{j=1}^{N}\mathcal{H}_{1}\), where \(N\) is the number of lattice sites and \(\mathcal{H}_{1}\) is the state space characterizing a single site.
We choose to work in the product basis \(|\mathbf{s}\rangle=\bigotimes_{j=1}^{N}|s_{j}\rangle\), where \(|s_{j}\rangle\) corresponds to a basis vector of \(\mathcal{H}_{1}\) on the \(j\)-th site. These basis states can be indexed with \(N\)-tuples \(\mathbf{s}=(s_{1},...,s_{N})\), thus we can introduce the notation \(|\mathbf{s}\rangle=|s_{1},...,s_{N}\rangle\). For example, in the case of spin-\(1/2\) particles occupying lattice sites we can choose \(|s_{j}\rangle\in\{|\uparrow\rangle,|\downarrow\rangle\}\), where \(|\uparrow\rangle\) and \(|\downarrow\rangle\) are the eigenstates of the spin operator \(\hat{S}_{z}\). In this basis, the wave function is represented as a complex-valued vector
\[|\psi\rangle=\sum_{s_{1},...,s_{N}}\psi_{s_{1},...,s_{N}}|s_{1},...,s_{N} \rangle\,. \tag{1}\]
The dimension of the vector space spanned by the basis vectors \(|s_{1},...,s_{N}\rangle\) grows exponentially with increasing system size and rather quickly it becomes impractical to numerically represent and manipulate the wave function directly as an array. To model such directly intractable systems, Ref. [22] proposed using a neural network which maps basis vectors \(|s_{1},...,s_{N}\rangle\) to the corresponding wave function amplitudes \(\psi_{s_{1},...,s_{N}}\); this representation has become known as the NQS. The overarching motivation stems from the observation that in spite of the exponential scaling of the number of basis elements, typical ground states of physically realizable Hamiltonians have a simplified internal structure [45] that should allow for representations in terms of a relatively small number of parameters. On the other hand, neural networks excel precisely at the task of finding efficient representations of intricate data structures.
### Imaginary time evolution with NQS
Imaginary time evolution is a well-known method for extracting the ground state from an arbitrary initial ansatz that is not strictly orthogonal to the sought ground state. In the energy-eigenstate basis the time-dependent Schrodinger equation is solved by:
\[|\psi\rangle=\sum_{j}\Psi_{j}(t)|e_{j}\rangle \tag{2}\]
with
\[\Psi_{j}(t)=\Psi_{j,0}e^{-iE_{j}t} \tag{3}\]
where \(E_{j}\), \(|e_{j}\rangle\) are the \(j\)-th eigenvalue and eigenvector pair of the Hamiltonian \(\hat{H}\) and the amplitudes \(\Psi_{j,0}\) encode an arbitrary initial condition. If we now substitute the imaginary time \(\tau=it\), then with increasing \(\tau\) the solution is expressed as an exponentially decaying (rather than the usual oscillating) superposition of energy eigenstates. The contributions of all the excited states decay exponentially relative to the ground state \(|e_{0}\rangle\):
\[\frac{|\Psi_{j}|}{|\Psi_{0}|}\propto e^{-(E_{j}-E_{0})\tau}\,. \tag{4}\]
Thus, as \(\tau\rightarrow\infty\), any initial state with nonzero overlap with \(|e_{0}\rangle\) converges toward the ground state. It is important to note that with imaginary time the evolution becomes non-unitary and the norm of the wave function increases or decreases exponentially with time.
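The exponential projection in Eq. (4) is easy to verify numerically for a small system. The following sketch applies normalized Euler ITE steps to a random Hermitian matrix standing in for \(\hat{H}\); all choices (matrix size, step, iteration count) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
A = rng.normal(size=(n, n))
H = (A + A.T) / 2                      # random Hermitian stand-in for a Hamiltonian

evals, evecs = np.linalg.eigh(H)
ground = evecs[:, 0]                   # exact ground state, for comparison

psi = rng.normal(size=n)               # arbitrary initial state
psi /= np.linalg.norm(psi)
dtau = 0.1 / np.abs(evals).max()       # small step keeps the Euler update stable
for _ in range(5000):
    psi = psi - dtau * (H @ psi)       # Euler step of imaginary time evolution
    psi /= np.linalg.norm(psi)         # ITE is non-unitary, so renormalize

energy_error = psi @ H @ psi - evals[0]
overlap = abs(psi @ ground)
print(energy_error, overlap)           # error shrinks toward 0, overlap toward 1
```

The renormalization step reflects the non-unitarity noted above; without it the norm decays or grows exponentially.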
In Ref. [28], the authors demonstrate that the real time evolution of a quantum system can be modeled with NQS by minimizing the error between the variational wave function and the target wave function obtained with a discrete ODE solver:

\[\left\|\,|\psi_{m+1}\rangle-\Phi^{\Delta t}|\psi_{m}\rangle\,\right\| \tag{5}\]

where \(\Phi^{\Delta t}\) is the discrete ODE flow operator. Minimizing this error simulates the evolution of the NQS in time and can be done using only first-order gradient methods such as Adam. In the present work, we argue that a similar approach can be successfully implemented for the imaginary time evolution.
More concretely, we use the Euler method to compute the target wave function:
\[\left|\psi_{T}\right\rangle=\left|\psi\right\rangle-\Delta\tau\hat{H}|\psi \rangle\,, \tag{6}\]
that is, the wave function that is reached from the current wave function by a single linearized imaginary time evolution step \(\Delta\tau\). We note that for Hamiltonians consisting of only local terms, the matrix \(H_{\mathbf{ss^{\prime}}}=\langle\mathbf{s}|\hat{H}|\mathbf{s^{\prime}}\rangle\) is sparse, i.e., given some \(\mathbf{s^{\prime}}\), only a small number of elements \(H_{\mathbf{ss^{\prime}}}\) are non-zero. This enables the efficient computation of the target wave function
\[\psi_{T}(\mathbf{s})=\psi(\mathbf{s})-\Delta\tau\sum_{\mathbf{s^{\prime}}}H_{ \mathbf{ss^{\prime}}}\psi(\mathbf{s^{\prime}}), \tag{7}\]
from the current state of the neural network.
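For concreteness, here is a sketch of how Eq. (7) can be evaluated using only the nonzero connections of a local Hamiltonian. We use a 1D periodic spin-1/2 Heisenberg chain purely as an illustrative stand-in; the helper names are ours:

```python
J = 1.0  # nearest-neighbor coupling of an illustrative 1D Heisenberg chain

def connected(s):
    """Yield pairs (s', H_ss') over all basis states s' connected to s.

    s is a tuple of +/-1 spins on a periodic chain; H = J * sum_i S_i . S_{i+1}.
    """
    n = len(s)
    # diagonal part: J * sum_i Sz_i Sz_{i+1} = (J/4) * sum_i s_i s_{i+1}
    yield s, 0.25 * J * sum(s[i] * s[(i + 1) % n] for i in range(n))
    for i in range(n):
        j = (i + 1) % n
        if s[i] != s[j]:               # (S+_i S-_j + S-_i S+_j)/2 flips the pair
            sp = list(s)
            sp[i], sp[j] = sp[j], sp[i]
            yield tuple(sp), 0.5 * J

def target_amplitude(psi, s, dtau):
    """psi_T(s) = psi(s) - dtau * sum_s' H_ss' psi(s'), cf. Eq. (7)."""
    return psi(s) - dtau * sum(h * psi(sp) for sp, h in connected(s))
```

Only a handful of connected states per sample are visited, so the cost per amplitude stays linear in the system size for local Hamiltonians.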
We note that several studies have relied on techniques similar to the one presented in this section within the context of unitary dynamics [28, 47, 48, 49], as well as in non-unitary dynamics or power iteration [50, 51].
### Loss function
Multiple different loss functions can be used to maximize the consistency between the current wave function, \(|\psi\rangle\), and the target wave function, \(|\psi_{T}\rangle\). We experimented with several variants and found that the following loss function based on overlap works well in practice:
\[L=-\log\left[\frac{|\langle\psi|\psi_{T}\rangle|^{2}}{\langle\psi|\psi\rangle \langle\psi_{T}|\psi_{T}\rangle}\right] \tag{8}\]
where \(|\psi\rangle\) denotes the NQS to be optimized and \(|\psi_{T}\rangle\) denotes the target constructed by Eq. (6). This kind of overlap loss function has been employed in previous studies within the context of real-time evolution (e.g. [47, 48, 49]). In the subsequent text, we will refer to this loss together with the target \(|\psi_{T}\rangle\) as the _ITE loss_. The gradient of the ITE loss with respect to the variational NQS parameters \(\theta\) can be estimated by using the following expression (the derivation is presented in Appendix B):
\[\frac{\partial L}{\partial\theta}=2\Re\left\{\left\langle\frac{\partial}{ \partial\theta}\log\psi^{*}(\mathbf{s})\right\rangle_{s}-\frac{1}{\left\langle \frac{\psi_{T}(\mathbf{s})}{\psi(\mathbf{s})}\right\rangle_{s}}\left\langle\frac{ \psi_{T}(\mathbf{s})}{\psi(\mathbf{s})}\frac{\partial}{\partial\theta}\log \psi^{*}(\mathbf{s})\right\rangle_{s}\right\}\,. \tag{9}\]
The averages appearing in this equation can be estimated by sampling states, \(\mathbf{s}\), according to the probability distribution \(|\psi(\mathbf{s})|^{2}\), and then utilizing \(\langle f(\mathbf{s})\rangle_{s}\approx\frac{1}{N}\sum_{j}f(\mathbf{s}_{j})\), where \(\mathbf{s}_{j}\) represents individual states from a finite sample of size \(N\). During optimization, Eq. (9) can be used directly without actually computing the loss itself. The gradients of the NQS, \(\frac{\partial}{\partial\theta}\log\psi^{*}(\mathbf{s})\), can be calculated by utilizing the automatic differentiation provided by deep learning libraries.
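This sampled-average structure is easy to state in code. Below is a minimal numpy sketch of the estimator in Eq. (9), assuming the per-sample log-derivatives and ratios have already been computed; the function and argument names are ours:

```python
import numpy as np

def ite_loss_gradient(O, r):
    """Monte Carlo estimate of the ITE loss gradient, Eq. (9).

    O : complex (N_samples, N_params) array whose rows are
        d/dtheta log psi*(s_j) for samples s_j drawn from |psi(s)|^2.
    r : complex (N_samples,) array of ratios psi_T(s_j) / psi(s_j).
    """
    term1 = O.mean(axis=0)
    term2 = (r[:, None] * O).mean(axis=0) / r.mean()
    return 2.0 * np.real(term1 - term2)
```

As a sanity check, if \(\psi_{T}\propto\psi\) (constant ratio), the two terms cancel and the estimated gradient vanishes, as expected for perfectly parallel states.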
The intuition behind the approach can be explained as follows: The ITE loss pushes for an alignment between the current wave function \(|\psi\rangle\) and the target \(|\psi_{T}\rangle\). On the other hand,
the two states are related by an imaginary time step, hence, in the target \(|\psi_{T}\rangle\) all components spanned by low-energy eigenstates are exponentially amplified, and all components spanned by high-energy eigenstates are exponentially suppressed. Now, since the target is kept fixed for a number of optimization steps, the update of the weights of the neural network encoding the current state \(|\psi\rangle\) will effectively drive towards the minimization of the state energy, i.e., towards the ground state.
### Adaptive imaginary time step
In the case of the Euler's method, Eq. (6) can be used to obtain the following relation between the average energy of the target wave function and imaginary time step, \(\Delta\tau\):
\[\langle E_{T}\rangle=\frac{\langle\psi_{T}|\hat{H}|\psi_{T}\rangle}{\langle \psi_{T}|\psi_{T}\rangle}=\frac{\langle E\rangle-2\Delta\tau\langle E^{2} \rangle+\Delta\tau^{2}\langle E^{3}\rangle}{1-2\Delta\tau\langle E\rangle+ \Delta\tau^{2}\langle E^{2}\rangle} \tag{10}\]
where \(\langle E^{n}\rangle=\langle\psi|\hat{H}^{n}|\psi\rangle/\langle\psi|\psi\rangle\). Setting \(\partial_{\Delta\tau}\langle E_{T}\rangle=0\) one obtains a quadratic equation which is solved by:
\[\Delta\tau=\frac{B\pm\sqrt{B^{2}+4A\sigma^{2}}}{2A} \tag{11}\]
where \(A=\langle E^{2}\rangle^{2}-\langle E\rangle\langle E^{3}\rangle\), \(B=\langle E\rangle\langle E^{2}\rangle-\langle E^{3}\rangle\), and \(\sigma^{2}=\langle E^{2}\rangle-\langle E\rangle^{2}\). We use Eq. (11) to find the optimal time step which minimizes the energy of the target wave function. This significantly accelerates the convergence to the ground state. The energy averages required in Eq. (10), similar to the averages appearing in Eq. (9), can be estimated by sampling basis states \(\mathbf{s}\) according to the probability distribution, \(|\psi(\mathbf{s})|^{2}\), and using the following equations:
\[\langle E\rangle =\langle H_{loc}(\mathbf{s})\rangle_{\mathbf{s}}\, \tag{12}\] \[\langle E^{2}\rangle =\left\langle\left|H_{loc}(\mathbf{s})\right|^{2}\right\rangle_{ \mathbf{s}}\,\] (13) \[\langle E^{3}\rangle =\left\langle H_{loc}^{*}(\mathbf{s})\left(H^{2}\right)_{loc}( \mathbf{s})\right\rangle_{\mathbf{s}} \tag{14}\]
where \(H_{loc}\) is defined by:
\[H_{loc}(\mathbf{s})=\sum_{\mathbf{s}^{\prime}}\frac{\psi(\mathbf{s}^{\prime}) }{\psi(\mathbf{s})}\langle\mathbf{s}|\hat{H}|\mathbf{s}^{\prime}\rangle \tag{15}\]
and \(\left(H^{2}\right)_{loc}\) is defined analogously but using \(\hat{H}^{2}\) instead of \(\hat{H}\). These expressions are widely known but for the sake of completeness, the derivations are presented in Appendix A. The computation of \(H_{loc}\) is efficient because in systems with local interactions (e.g. between nearest neighbors), the Hamiltonians are represented by highly sparse matrices, and only a small fraction of the terms in the sum need to be calculated.
In the case of local interactions the computational complexity of \(H_{loc}\) scales linearly with system size and for \(\left(H^{2}\right)_{loc}\) it scales quadratically because it requires computing nested sums of matrix elements:
\[(H^{2})_{loc}(\mathbf{s}) =\sum_{\mathbf{s}^{\prime}}\frac{\psi(\mathbf{s}^{\prime})}{\psi( \mathbf{s})}\sum_{\mathbf{s}^{\prime\prime}}H_{\mathbf{s}\mathbf{s}^{\prime \prime}}H_{\mathbf{s}^{\prime\prime}\mathbf{s}^{\prime}} \tag{16}\] \[=\frac{1}{\psi(\mathbf{s})}\sum_{\mathbf{s}^{\prime\prime}}H_{ \mathbf{s}\mathbf{s}^{\prime\prime}}\sum_{\mathbf{s}^{\prime}}\psi(\mathbf{s}^ {\prime})H_{\mathbf{s}^{\prime\prime}\mathbf{s}^{\prime}} \tag{17}\]
where \(H_{\mathbf{s}\mathbf{s}^{\prime}}=\langle\mathbf{s}|\hat{H}|\mathbf{s}^{\prime}\rangle\). This means that the computation of \(\langle E^{3}\rangle\) quickly becomes impractical with increasing system size. However, by experimenting we found that the following expression gives sufficient accuracy for the optimal time step calculation even though it is not strictly correct:
\[\langle E^{3}\rangle=\left\langle H_{loc}(\mathbf{s})|H_{loc}(\mathbf{s})|^{2 }\right\rangle_{\mathbf{s}}. \tag{18}\]
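For a system small enough that the energy moments can be evaluated exactly, the closed-form step of Eq. (11) can be checked directly. A sketch (function names are ours) that selects the root of the quadratic minimizing Eq. (10):

```python
import numpy as np

def target_energy(dtau, e1, e2, e3):
    """<E_T> as a function of the step size, Eq. (10)."""
    num = e1 - 2 * dtau * e2 + dtau ** 2 * e3
    den = 1 - 2 * dtau * e1 + dtau ** 2 * e2
    return num / den

def optimal_time_step(e1, e2, e3):
    """Stationary point of Eq. (10) that minimizes <E_T>, Eq. (11)."""
    A = e2 ** 2 - e1 * e3
    B = e1 * e2 - e3
    var = e2 - e1 ** 2
    roots = (B + np.array([1.0, -1.0]) * np.sqrt(B ** 2 + 4 * A * var)) / (2 * A)
    return min(roots, key=lambda t: target_energy(t, e1, e2, e3))
```

Of the two roots, one is a minimum and one a maximum of the rational function (10), so we simply keep the one with the lower target energy.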
### Relation between ITE loss and E loss
In many standard applications of NQS for the ground state search it is performed by directly utilizing the energy as the loss function. In that case, the gradient of the loss with respect to variational parameters is given by:
\[\frac{\partial L_{E}}{\partial\theta}=2\Re\left\{\left\langle H_{loc}(\mathbf{s}) \frac{\partial}{\partial\theta}\log\psi^{*}(\mathbf{s})\right\rangle_{\mathbf{s}}- \left\langle H_{loc}(\mathbf{s})\right\rangle_{\mathbf{s}}\left\langle\frac{\partial} {\partial\theta}\log\psi^{*}(\mathbf{s})\right\rangle_{\mathbf{s}}\right\}\,. \tag{19}\]
In this work, we refer to \(L_{E}\) as \(E\)_loss_. By plugging the expression for \(\ket{\psi_{T}}\) from Eq. (7) into Eq. (8) it can be shown that ITE loss with Euler step target becomes proportional to E loss (for derivation see Appendix C):
\[\frac{\partial L}{\partial\theta}=\frac{\Delta\tau}{1-\Delta\tau\langle E \rangle}\frac{\partial L_{E}}{\partial\theta}\,. \tag{20}\]
It would thus appear that the ITE loss offers no advantage and should perform essentially the same as the E loss. However, this holds only when the target wave function \(\ket{\psi_{T}}\) is updated after every optimizer step. If instead it is held fixed for more than one step, Eq. (20) does not apply.
The relationship presented in Eq. (20) suggests an explanation for the poor performance of first-order gradient descent methods in the context of NQS ground-state search. Using the E loss is equivalent to fitting a constantly shifting target described by Eq. (7). This perpetually changing target can destabilize stochastic gradient descent. A similar problem also arises in deep reinforcement learning: for example, it is well established that using fixed targets significantly improves the performance of deep Q-learning methods [52]. Similarly, in this work we demonstrate in Sec. 4.3 that employing the ITE loss with a fixed target can enhance stability and yield lower energy errors compared to the E loss.
### NQS training procedure
As described in Sec. 2.5, computing \(\ket{\psi_{T}}\) with the latest NQS parameters every optimizer step would make ITE loss equivalent to E loss up to a learning-rate multiplier. There are two simple methods to slow down the shift of the target: 1) using a duplicate neural network with weights given by the moving average of the neural network that is optimized; 2) using a duplicate neural network with fixed weights that are repeatedly updated after a certain number of optimizer steps. Both of these methods are widely used in reinforcement learning literature [53]. We chose to use the second method because in that case it is straightforward to utilize the optimal time step described in Sec. 2.4 and also because it is a more natural fit for evolution with discrete time steps.
We explored various approaches to control the update frequency of the fixed parameters and found that the following approach provides good consistency and efficiency. The target is fixed until the mean energy of the optimized NQS becomes smaller than \(\langle E\rangle-\sigma_{E}\) where \(\langle E\rangle\) is the energy of the fixed NQS and \(\sigma_{E}\) is the standard error of the mean-energy estimate.
In this paragraph, we provide a concise summary of the training procedure that simulates the imaginary time evolution. Training is split into multiple epochs where each epoch corresponds to a single discrete time step \(\Delta\tau\). At the start of each epoch the energy moments \(\langle E\rangle\), \(\langle E^{2}\rangle\), \(\langle E^{3}\rangle\) are estimated by sampling \(N_{E}\) states according to the NQS with the latest parameters and utilizing Eqs. (12), (13), and (18). These estimates are then used to calculate the adaptive time step \(\Delta\tau\) that minimizes the expected energy of \(\ket{\psi_{T}}\), given by Eq. (10). The duplicate NQS that is used to compute the target wave function, \(\ket{\psi_{T}}\), is updated only at the start of the epoch and is kept fixed until the next epoch. Conceptually this can be understood as fixing \(\ket{\psi_{T}}\) until the next epoch. During the epoch, the loss function, given by Eq. (8),
is gradually minimized with stochastic gradient descent so the optimized NQS becomes increasingly similar to \(|\psi_{T}\rangle\) and the energy decreases. The epoch is terminated once the energy becomes smaller than the threshold value described in the previous paragraph. Accurately estimating the energy with a large number of basis state samples can be computationally expensive. To address this, we reuse the samples used in gradient estimation and then apply a running average to the resulting energies, reducing the noise. Please note that we employ this approximate scheme solely for energy estimation during the epoch to compare with the threshold. The threshold itself is computed more accurately with a large number of samples \(N_{E}\), as described in the beginning of this paragraph. A simplified pseudocode of the training algorithm is provided in Alg. 1.
```
NQS ← randomInitialization()
for N_epochs do
    ⟨E⟩, ⟨E²⟩, ⟨E³⟩, σ_E ← energyStatistics(NQS)
    Δτ ← optimalTimeStep(⟨E⟩, ⟨E²⟩, ⟨E³⟩)
    E_thr ← ⟨E⟩ − σ_E
    NQS_fixed ← copy(NQS)
    s_mcmc ← warmupMCMC(NQS)            ▷ initial batch of basis states for MCMC sampling
    while ⟨E⟩ > E_thr do
        s_batch, s_mcmc ← sampleMCMC(NQS, s_mcmc)
        ψ_target ← computeTarget(NQS_fixed, s_batch, Δτ)
        grad ← lossGradient(NQS, ψ_target, s_batch)
        NQS ← updateParameters(NQS, grad)
        ⟨E⟩ ← movingAverageEnergy(NQS, s_batch)
    end while
end for
```
**Algorithm 1** NQS training with ITE loss
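The cycle of Alg. 1 can be exercised end-to-end on a toy problem in which every amplitude is its own variational parameter and all sums are exact, so the behavior of the fixed-target scheme can be seen without sampling noise. For this real, fully parametrized state the ITE loss gradient reduces to \(\nabla_{\theta}L=2\theta/\|\theta\|^{2}-2\psi_{T}/(\theta\cdot\psi_{T})\). The sketch below uses a fixed Euler step instead of the adaptive one and a fixed number of inner steps instead of the energy threshold; all names and constants are ours:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 32
M = rng.normal(size=(n, n))
H = (M + M.T) / 2                      # toy Hamiltonian in place of a lattice model
e0 = np.linalg.eigvalsh(H)[0]          # exact ground energy, for reference only

theta = rng.normal(size=n)             # "NQS": one real parameter per amplitude
dtau = 0.5 / np.abs(np.linalg.eigvalsh(H)).max()   # fixed, stable step size
lr, inner_steps = 0.2, 50

for epoch in range(1000):
    theta = theta / np.linalg.norm(theta)
    psi_T = theta - dtau * (H @ theta)          # fixed Euler target for this epoch
    for _ in range(inner_steps):                # descend the ITE loss, Eq. (8)
        grad = 2 * theta / (theta @ theta) - 2 * psi_T / (theta @ psi_T)
        theta = theta - lr * grad

energy = (theta @ H @ theta) / (theta @ theta)
print(energy - e0)                              # residual energy error
```

Because the target stays fixed during the inner loop, each epoch behaves like one clean step of power iteration with the operator \(1-\Delta\tau\hat{H}\).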
The optimization scheme we presented has similarities with the one described in Ref. [50]. However, we employ a different loss function, utilize different criteria for epoch termination, and incorporate an adaptive time step, which significantly enhances the convergence rate.
## 3 Neural network quantum state
### Neural network architecture
The existing literature on NQS encompasses a wide range of neural network architectures. For example, Ref. [54] used multilayer perceptrons (MLPs), Ref. [55] used convolutional neural networks (CNNs), Ref. [25] used recurrent neural networks (RNNs), and Ref. [27] used transformers. In this work, we chose to employ MLPs since our preliminary experiments indicated that they offer the best balance between the required computational resources, simplicity, and performance. We do not claim that MLPs are the best architecture for NQS and acknowledge that a more detailed analysis is required. However, in this work, our focus is on introducing a novel NQS training method rather than achieving the best possible performance in modeling a specific system. The resulting architecture imposes minimal assumptions on the physical system, enabling its application to cases beyond the scope of this work. This includes diverse lattice patterns, different symmetries, and varying numbers of dimensions.
In the case of 2D spin lattice models, the most straightforward way to encode the input basis states would be to apply the following map on lattice site states: \(|\uparrow\rangle\rightarrow-1\) and \(|\downarrow\rangle\to 1\). The resulting matrix can be flattened and used as input for the first layer of the MLP. However, we discovered that better performance can be obtained by utilizing the encoding based on 2D patches that is used with vision transformers [3]. The matrix described above is divided into patches of size \(d_{p}\times d_{p}\), which are subsequently mapped to vectors of dimension \(d_{enc}\) using a linear transformation with trainable weights. The resulting vectors are concatenated and then used as input for the first layer of the MLP. This kind of encoding has been demonstrated to be beneficial in image processing, not only with transformers but also with other architectures (e.g. [6]).
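A possible implementation of this patch encoding (our names; plain numpy used for clarity) could look as follows:

```python
import numpy as np

def patch_encode(spins, W):
    """Vision-transformer-style patch encoding of a 2D spin configuration.

    spins : (L, L) array of +/-1 values.
    W     : (d_p * d_p, d_enc) trainable linear projection; d_p is inferred.
    Returns the concatenated patch embeddings as one input vector for the MLP.
    """
    L = spins.shape[0]
    d_p = int(round(np.sqrt(W.shape[0])))
    # split the L x L lattice into non-overlapping d_p x d_p patches
    patches = (spins.reshape(L // d_p, d_p, L // d_p, d_p)
                    .transpose(0, 2, 1, 3)
                    .reshape(-1, d_p * d_p))
    return (patches @ W).reshape(-1)
```

In a full implementation \(W\) would be part of the trainable parameter set, optimized together with the MLP weights.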
There are two general approaches for defining the output of the neural network within the context of NQS. The first approach is to simply output a complex number that corresponds to the amplitude of the non-normalized wave function (e.g. [22, 31, 56]). The second approach involves factorizing the wave function into conditional factors, akin to the chain rule of probabilities, and autoregressively generating these conditional wave functions as output (e.g. [25, 26, 32]). In this work, we opted to utilize the first approach. Nevertheless, our method could be applied using the second approach as well.
For the sake of simplicity, we maintain a constant number of neurons across all hidden layers of the MLP. The final layer of the model produces two output values. The first value represents the log-amplitude of the wave function, while the second represents the phase. Because only ratios of wave-function amplitudes carry meaning, the output amplitudes could grow indefinitely during training. To prevent that, we follow Ref. [31] and apply the following nonlinearity to the log-amplitude value: \(f(x)=a\tanh(x/a)\). We set \(a=20\), which allows amplitudes to be expressed within a range of 17 orders of magnitude. For the phase value, we do not apply any nonlinearity and allow values outside the \((0,2\pi)\) range. All of the variational parameters of the neural network are real. The neural network's architecture is illustrated in Fig. 1.
### Sampling of basis states
As described in Sec. 2.3 and 2.4, the estimation of the gradient and energy statistics requires sampling the basis states according to the probability distribution associated with the wave function, \(p(\mathbf{s})=|\psi(\mathbf{s})|^{2}\). Due to the high dimensionality of the problem and the fact that our NQS outputs unnormalized amplitudes, direct sampling is not feasible. Instead, we employ Markov chain Monte Carlo (MCMC) sampling, specifically the Metropolis-Hastings algorithm [57, 58], which has become a standard practice in the field of NQS.
Figure 1: Scheme of the NQS architecture used in this work.
Here we provide a brief overview of the algorithm specifically for a spin-lattice model with fixed magnetization. The starting basis state, \(\mathbf{s}_{0}\), is sampled from the uniform distribution. At the \(m\)-th step a new candidate basis state, \(\mathbf{s}^{\prime}_{m}\), is generated by randomly choosing a pair of neighboring sites in \(\mathbf{s}_{m-1}\) and interchanging their spin states. This candidate is accepted by setting \(\mathbf{s}_{m}=\mathbf{s}^{\prime}_{m}\) if \(u\leq|\psi(\mathbf{s}^{\prime}_{m})|^{2}/|\psi(\mathbf{s}_{m-1})|^{2}\), where \(u\in[0,1]\) is a uniform random number. If the candidate is not accepted, the basis state is kept unchanged: \(\mathbf{s}_{m}=\mathbf{s}_{m-1}\). The algorithm thus performs a random walk in the sample space and generates samples that eventually follow the probability distribution \(|\psi(\mathbf{s})|^{2}\). The initial samples may follow a different distribution. To address this, we incorporate a warm-up period lasting \(N_{\text{warmup}}=10d_{\text{lat}}^{2}\) steps, during which the generated samples are discarded. To reduce sample correlation, we retain samples only at a regular interval of \(N_{\text{skip}}=4\).
To enhance the efficiency of MCMC sampling, we parallelize the algorithm by initializing multiple random basis states and subsequently conducting independent MCMC random walks for each of them. We also perform the warm-up period only at the beginning of the epoch and then reuse the MCMC walker values from the last step as initial values for the current step. These two optimizations increase the training speed by orders of magnitude. In this work, we set the number of MCMC walkers equal to the batch size, \(N_{\text{b}}\). Consequently, we only need to evaluate the NQS \(N_{\text{skip}}\) times to generate a batch of basis states sufficient for a single optimizer step.
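A single-walker version of this sampler can be sketched as follows (our names; a production version would vectorize the walk over many walkers, as described above):

```python
import numpy as np

def mcmc_sample(log_psi, s0, n_steps, rng):
    """Metropolis-Hastings walk with neighbor-swap proposals (fixed magnetization).

    log_psi : maps an (L, L) array of +/-1 spins to log|psi(s)|.
    s0      : initial spin configuration.
    Yields one configuration per step.
    """
    s = s0.copy()
    L = s.shape[0]
    lp = log_psi(s)
    moves = np.array([(0, 1), (1, 0), (0, -1), (-1, 0)])
    for _ in range(n_steps):
        i, j = rng.integers(L, size=2)
        di, dj = moves[rng.integers(4)]
        k, l = (i + di) % L, (j + dj) % L          # periodic-boundary neighbor
        if s[i, j] != s[k, l]:                     # swapping parallel spins is a no-op
            s[i, j], s[k, l] = s[k, l], s[i, j]
            lp_new = log_psi(s)
            # accept with probability min(1, |psi'|^2 / |psi|^2)
            if rng.random() <= np.exp(2.0 * (lp_new - lp)):
                lp = lp_new
            else:
                s[i, j], s[k, l] = s[k, l], s[i, j]  # reject: undo the swap
        yield s.copy()
```

Because proposals only swap spins, the total magnetization is conserved by construction, matching the fixed-magnetization sector used in this work.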
### Implementation details
The code was written in Python [59]. Numerically demanding parts were implemented using the JAX [60] package. Neural network computation and training were implemented using the Flax [61] and Optax [62] packages. In this work, JAX was useful not only because of its automatic differentiation but also because of its just-in-time compilation and automatic vectorization (_vmap_). With these capabilities, we could easily parallelize and optimize the local energy computation and state sampling functions, which led to significant computation time improvements.
For exact diagonalization (ED) and DMRG computations based on QuSpin[63, 64] and TeNPy[65], we utilized the AMD Ryzen Threadripper 2990WX CPU. NQS training was conducted using the Nvidia RTX 4090 GPU.
## 4 Numerical experiments
### \(J_{1}\) - \(J_{2}\) Heisenberg model on a 2D square lattice
Let us now benchmark the performance of the proposed approach by treating a specific numerical example. For this purpose, we choose the two-dimensional \(J_{1}\)-\(J_{2}\) Heisenberg model, defined by the Hamiltonian
\[H=J_{1}\sum_{\langle ij\rangle}\vec{S}_{i}\cdot\vec{S}_{j}+J_{2}\sum_{\langle \langle ij\rangle\rangle}\vec{S}_{i}\cdot\vec{S}_{j}, \tag{21}\]
with \(\vec{S}_{j}\) denoting the spin-\(\frac{1}{2}\) operators defined on the sites (indexed by \(j\)) of 2D \(d_{\text{lat}}\times d_{\text{lat}}\) square lattice with periodic boundary conditions. The model features competing antiferromagnetic interactions: \(J_{1}>0\) act between the nearest-neighbor spin pairs (\(\langle ij\rangle\)), and \(J_{2}\geqslant 0\) couple the next-nearest neighboring pairs situated on the opposite ends of diagonals of the square plaquettes. In the absence of the second term, these nearest-neighbor interactions stabilize the Neel order with two opposite-magnetization sublattices intertwined in a checkerboard
pattern. In the opposite limit of dominant long-range interactions \(J_{2}\gg J_{1}\), the model features a striped antiferromagnetic phase. In the vicinity of the classical boundary \(J_{2}/J_{1}=0.5\) the model is frustrated and the exact phase diagram is subject to an ongoing debate, see e.g. Refs. [42, 43, 66] and references therein.
This particular model exhibits several symmetries: 1) translation along the x-axis and y-axis; 2) rotation by multiples of 90 degrees; 3) reflection about the x-axis, y-axis, and diagonal; 4) spin inversion; 5) SU(2) spin rotation. These symmetries can be leveraged to effectively reduce the dimensionality of the problem, as was done in many previous studies (e.g. [31, 56] and references therein). However, in this work we do not enforce the NQS to be invariant with respect to any of the aforementioned transformations. By adopting this approach, we aim to demonstrate that when utilizing large neural networks, there is no need for specialized architectures tailored to specific systems.
### Hyperparameters
The values of various hyperparameters, defined in the text, are listed in Table 1. Unless explicitly stated otherwise, these values were consistently applied across all our numerical experiments.
In all our numerical experiments, we optimize NQS using ADAM with vanilla gradient descent. The learning rate is adjusted using an exponential decay schedule, with the initial learning rate \(\alpha_{0}\) and the final learning rate \(\alpha_{\text{f}}\).
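For illustration, an exponential schedule interpolating between \(\alpha_{0}=10^{-3}\) and \(\alpha_{\text{f}}=10^{-5}\) (the values from Table 1) can be written in closed form. The exact decay parameterization used in the Optax setup is not specified in the text, so this is one consistent choice, with hypothetical names:

```python
def lr_schedule(step, total_steps, lr_init=1e-3, lr_final=1e-5):
    """Exponential decay interpolating from lr_init at step 0
    to lr_final at step == total_steps."""
    return lr_init * (lr_final / lr_init) ** (step / total_steps)
```

On a log scale this decays linearly, so the midpoint of training sits at the geometric mean of the two rates.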
To identify suitable values for the neural network width (number of neurons per hidden layer) and depth (number of hidden layers), we conducted a small grid search. We explored three width values (128, 256, 512) and varied the depth from 1 to 5. As a performance metric, we utilized the achieved energy error (relative to ED results) for a 6\(\times\)6 lattice with \(J_{2}/J_{1}=0.5\). The results are illustrated in Fig. 2. It is evident that, in general, accuracy tends to improve with an increase in the number of parameters in the network. This is expected, as larger neural networks possess the capacity to represent a broader subspace of all possible wave functions. In fact, the accuracy does not appear to plateau as the network width increases. It is likely that further improvement in accuracy can be achieved by increasing the network width even more. However, the depth does exhibit an optimal value, as the accuracy tends to decline when the depth exceeds 4. This is also expected since training deeper neural networks necessitates additional techniques such as residual connections [67] or specific initialization [68]. Based on these results, we employ a depth of 4 and a width of 512 in the subsequent numerical experiments, as this configuration demonstrated the best performance. This particular network has 893994 variational parameters.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Hyperparameter** & **symbol** & **value** \\ \hline Input patch size & \(d_{\text{p}}\) & 2 \\ Patch encoding size & \(d_{\text{enc}}\) & 8 \\ Training batch size & \(N_{\text{b}}\) & 256 \\ Initial learning rate & \(\alpha_{0}\) & \(10^{-3}\) \\ Final learning rate & \(\alpha_{\text{f}}\) & \(10^{-5}\) \\ Number of samples for energy statistics during training & - & \(10^{5}\) \\ Number of samples for final energy estimation & - & \(10^{6}\) \\ Total number of optimizer steps & - & \(5\cdot 10^{5}\) \\ MCMC warm-up steps & \(N_{\text{warmup}}\) & \(10d_{\text{lat}}^{2}\) \\ MCMC sample skip interval & \(N_{\text{skip}}\) & 4 \\ \hline \end{tabular}
\end{table}
Table 1: Hyperparameters table.
### Comparison with E loss
In this section, we compare the training of NQS using our proposed method with a standard approach that utilizes the energy as a loss function. In both cases, we investigate the performance on \(6\times 6\) lattice, while keeping all shared hyperparameters unchanged. The energy error is calculated by comparing the achieved energy with ED results.
Figure 3(a) shows three energy minimization curves for each method, with each curve representing different random initializations of the NQS variational parameters. It is evident that training NQS with E loss and vanilla gradient is highly unstable since the minimization curves exhibit sudden jumps in energy and converge to significantly different energy values depending on initialization. In contrast, with ITE loss there are no abrupt energy jumps and the final energy value has almost no dependence on the initial variational parameters. Moreover, employing the ITE loss consistently leads to a lower final energy for the NQS compared to the E loss approach. This instability of training with E loss likely arises because it is equivalent to ITE loss with a constantly changing target (see Sec. 2.5).
Figure 3(b) depicts the energy error's dependence on \(J_{2}/J_{1}\). Given that the final energy with E-loss exhibits significant variation from run to run, we conduct 5 runs for each \(J_{2}/J_{1}\) value and only plot the best result. The figure clearly illustrates that our proposed method achieves superior accuracy not only in the vicinity of the maximum frustration point at \(J_{2}/J_{1}=0.5\) but also over a wider parameter range. With both losses, a distinct trend is visible: the energy error reaches its maximum near \(J_{2}/J_{1}=0.5\) and subsequently decreases as it moves further from this point. However, the trend is noisier in the case of E-loss due to the instability of convergence.
In terms of computational time, training with the ITE loss generally takes approximately 30% - 50% longer in practice. The primary reason for this increase is the additional energy estimation performed after each optimizer step to determine when to update the target wave function (see Sec. 2.6). The calculation of the loss function requires a similar amount of computation in both cases, because the energy estimation in the E loss and the target computation in the ITE loss involve computing the same terms, namely \(\sum_{\mathbf{s^{\prime}}}H_{\mathbf{ss^{\prime}}}\psi(\mathbf{s^{\prime}})\), which dominate the computational cost.
Figure 2: Energy error dependence on neural network width (number of neurons per hidden layer) and depth (number of hidden layers) for NQS trained with ITE loss. The lattice consists of \(6\times 6\) sites, and \(J_{2}/J_{1}=0.5\). The variable \(N\) represents the number of lattice sites.
### Benchmarking
In this section, we benchmark NQS trained with our proposed method against the well-established DMRG method. We do this by investigating the relationship between the predicted ground state energy and lattice size, specifically when exceeding the practical modeling limits of ED. Additionally, we compare some of our results with those achieved in other studies.
Figure 4 shows the predicted ground state energy dependence on lattice border length, while keeping \(J_{2}/J_{1}=0.5\). Since the performance of DMRG strongly depends on the number of bond dimensions, \(\chi\), we present the data for two values: 128 and 1024. Regarding the NQS, we observed that extending the training duration significantly can lead to a slight improvement in the predicted energy. As a result, we also present the data for NQS with \(5\cdot 10^{6}\) optimizer steps, which is an order of magnitude higher than our default value. This decrease in final energy becomes increasingly prominent with larger lattice sizes.
From the presented data, it is evident that NQS clearly outperforms DMRG with \(\chi=128\). However, when \(\chi=1024\), the situation becomes more intricate. Both NQS and DMRG achieve practically the same energy as ED for the \(4\times 4\) lattice. For the \(6\times 6\) and \(8\times 8\) lattices, NQS slightly outperforms DMRG, but only when using the extended training time. However, for larger lattice sizes beyond \(8\times 8\), DMRG starts to outperform NQS, and the difference appears to grow with increasing lattice size. In the matrix product state used in DMRG, the number of variational parameters grows proportionally with the lattice size. On the other hand, for NQS, the number of variational parameters remains almost constant as it increases only in the first hidden layer of the network, which represents a small fraction of the total number of parameters. This disparity in the scaling of variational parameters could provide an explanation for why DMRG eventually outperforms NQS with increasing lattice size.
Regarding the computational resources, it is hard to compare since we utilized CPU for DMRG and GPU for NQS. However, the actual time it took to compute the models was on a similar scale for DMRG with \(\chi=1024\) and NQS with \(5\cdot 10^{6}\) optimizer steps (e.g. about 20 hours for \(12\times 12\) lattice).
For \(J_{2}/J_{1}=0.5\) and \(6\times 6\) lattice, Ref. [31] identified an energy error limit that appears to be hard to overcome for NQS. Multiple other works [56, 69, 70], including theirs, have
Figure 3: **(a)** Energy minimization curves for NQS trained with E loss (blue) and ITE loss (green). Each loss function has three curves, representing different random initializations of NQS variational parameters. \(J_{2}/J_{1}=0.5\) for this analysis. **(b)** Energy error dependence on \(J_{2}/J_{1}\) for NQS trained with E loss (blue) and ITE loss (green). Both figures are based on a lattice size of \(6\times 6\). The variable \(N\) represents the number of lattice sites.
achieved energy errors comparable to \(2\cdot 10^{-3}\), despite using different NQS architectures and optimization procedures. In Fig. 3 (b) one can see that our method also achieves a similar value at \(J_{2}/J_{1}=0.5\). After increasing the total number of optimizer steps to \(5\cdot 10^{6}\), our method achieves \(1.4\cdot 10^{-3}\), which is still comparable. Here we used neural networks with a substantially larger number of variational parameters than the mentioned studies and still were unable to significantly surpass the \(2\cdot 10^{-3}\) limit. This adds more evidence that the problem is related to the variational landscape rather than the representation power of neural networks.
For \(J_{2}/J_{1}=0.5\) and \(10\times 10\) lattice, Ref. [56] and Ref. [71] achieved energy per site of -0.4952 and -0.4736, respectively. Both of these studies used translation-invariant CNNs and also incorporated other symmetries of the physical system. As for optimization methods, Ref. [56] used SR, and [71] used the replica-exchange molecular dynamics method. In this study, we achieved energy per site of \(-0.4894\). So we achieved a competitive result, despite using vanilla gradient descent and not explicitly incorporating symmetries into NQS.
## 5 Conclusions
In this study, we presented a novel approach for finding the ground state with NQS, built upon the principle of imaginary time evolution. In this method, we construct the target wave function by combining the current wave function with the discretized flow obtained from the Schrodinger equation in imaginary time. Subsequently, we train the neural network to approximate this target wave function. Through repeated iterations of this process, the state approximated by the neural network converges to the ground state. Unlike the commonly used SR approach, our method uses the vanilla gradient with the Euclidean metric.
In our exploration of the relationship between our proposed loss function and the direct minimization of energy, we uncovered a potential explanation for the unstable convergence often observed when employing energy as a loss function. Using the energy loss is equivalent to our method but with a target that changes after every optimizer step. The instability might arise due to a constantly shifting target. In our approach, this issue is addressed by fixing the
Figure 4: **(a)** Dependence of the variational ground state energy on lattice size for NQS (ITE loss) and DMRG. The blue and green lines represent NQS trained for \(5\cdot 10^{5}\) and \(5\cdot 10^{6}\) optimizer steps, respectively. The orange and red lines correspond to DMRG with 128 and 1024 bond dimensions, respectively. In this context, \(J_{2}/J_{1}=0.5\), \(N\) denotes the number of spins, and ’lattice size’ refers to the border length of a square-shaped 2D lattice. **(b)** Same as (a), but with energy measured relatively to the variational energy of DMRG with 1024 bond dimensions.
target for a specific number of optimizer steps, resulting in a more stable training process.
Our investigation included numerical experiments with the \(J_{1}\)-\(J_{2}\) Heisenberg model on a 2D square lattice, providing compelling evidence that our method offers higher stability and final energy accuracy than the optimization of NQS with energy as a loss function. Moreover, it showcases competitiveness with the well-established DMRG method and NQS optimization with SR.
We emphasize that our numerical experiments were performed without leveraging the symmetries of the physical system, which significantly increases the difficulty of the problem. Yet by utilizing neural networks with a large number of parameters compared to the commonly used values in this research area, we managed to achieve comparable accuracy to the results reported in works that do exploit symmetries and use specialized neural network architectures. This highlights the primary advantage of our proposed method: the capability to utilize large neural networks and apply standard optimization methods from other areas of machine learning.
## Acknowledgements
The authors express their gratitude to Julius Ruseckas and Arturas Acus for discussions that prompted new insights. Furthermore, the authors extend their thanks to Marin Bukov for valuable correspondence regarding QuSpin, and to Giuseppe Carleo for useful comments.
## Appendix A Energy moments
First, we show that the mean value corresponding to any operator, \(\hat{A}\), is equal to the mean of its local value \(A_{loc}(\mathbf{s}):=\sum_{\mathbf{s}^{\prime}}\frac{\psi(\mathbf{s}^{\prime})}{\psi(\mathbf{ s})}A_{ss^{\prime}}\):
\[\begin{split}\langle A\rangle&=\langle\psi|\hat{A }|\psi\rangle=\sum_{\mathbf{s}}\sum_{\mathbf{s}^{\prime}}\psi^{*}(\mathbf{s})\psi(\mathbf{s}^{ \prime})A_{ss^{\prime}}\\ &=\sum_{\mathbf{s}}|\psi(\mathbf{s})|^{2}\sum_{\mathbf{s}^{\prime}}\frac{\psi (\mathbf{s}^{\prime})}{\psi(\mathbf{s})}A_{ss^{\prime}}=\left\langle\sum_{\mathbf{s}^{ \prime}}\frac{\psi(\mathbf{s}^{\prime})}{\psi(\mathbf{s})}A_{ss^{\prime}}\right\rangle _{\mathbf{s}}\\ &=\langle A_{loc}(\mathbf{s})\rangle_{\mathbf{s}}\.\end{split} \tag{22}\]
By substituting \(\hat{A}=\hat{H}\), we obtain Eq. (12).
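Eq. (22) can be checked on a toy two-state system, where averaging over the two basis states is exact because \(|\psi(\mathbf{s})|^{2}\) is uniform. All names and the specific matrix are illustrative assumptions:

```python
import numpy as np

def local_value(psi, s, connections):
    """A_loc(s) = sum_{s'} psi(s') / psi(s) * A_{s s'}."""
    return sum(a_ss * psi(sp) / psi(s) for sp, a_ss in connections(s))

# Toy check: psi = (1, 1)/sqrt(2) over basis states {0, 1}, A = [[1, 2], [2, 3]]
A = np.array([[1.0, 2.0], [2.0, 3.0]])
psi = lambda s: 1.0 / np.sqrt(2.0)
connections = lambda s: [(sp, A[s, sp]) for sp in range(2) if A[s, sp] != 0]

exact = np.array([1.0, 1.0]) @ A @ np.array([1.0, 1.0]) / 2.0  # <psi|A|psi>
estimate = np.mean([local_value(psi, s, connections) for s in (0, 1)])
```

Here `connections(s)` plays the role of enumerating the states \(\mathbf{s}^{\prime}\) connected to \(\mathbf{s}\) by nonzero matrix elements, which is what makes the local-value estimator tractable for sparse Hamiltonians.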
Now we show that the mean value corresponding to the product of any two operators, \(\hat{A}\) and \(\hat{B}\), can also be computed with their local values:
\[\begin{split}\langle\hat{A}\hat{B}\rangle&=\sum_{ \mathbf{s}^{\prime}}\sum_{\mathbf{s}^{\prime\prime}}\psi^{*}(\mathbf{s}^{\prime})\psi(\mathbf{ s}^{\prime\prime})(AB)_{\mathbf{s}^{\prime}\mathbf{s}^{\prime\prime}}\\ &=\sum_{\mathbf{s}^{\prime}}\sum_{\mathbf{s}^{\prime\prime}}\psi^{*}(\bm {s}^{\prime})\psi(\mathbf{s}^{\prime\prime})\sum_{\mathbf{s}}A_{\mathbf{s}^{\prime}\mathbf{s}}B_{\mathbf{s}\mathbf{s}^{\prime\prime}}\\ &=\sum_{\mathbf{s}}|\psi(\mathbf{s})|^{2}\sum_{\mathbf{s}^{\prime}}\frac{\psi^{*}( \mathbf{s}^{\prime})}{\psi^{*}(\mathbf{s})}A_{\mathbf{s}^{\prime}\mathbf{s}}\sum_{\mathbf{s}^{\prime\prime}} \frac{\psi(\mathbf{s}^{\prime\prime})}{\psi(\mathbf{s})}B_{\mathbf{s}\mathbf{s}^{\prime\prime}}\\ &=\sum_{\mathbf{s}}|\psi(\mathbf{s})|^{2}A_{loc}^{*}(\mathbf{s})B_{loc}(\mathbf{s })\\ &=\langle A_{loc}^{*}(\mathbf{s})B_{loc}(\mathbf{s})\rangle_{\mathbf{s}}\,.\end{split} \tag{23}\]
By substituting \(\hat{A}=\hat{B}=\hat{H}\), we obtain Eq. (13). By substituting \(\hat{A}=\hat{H}\) and \(\hat{B}=\hat{H}^{2}\), we obtain Eq. (14).
In both derivations, we made the simplifying assumption of a normalized state, \(|\psi\rangle\). However, it is clear that the same steps can be performed without this assumption, resulting in identical final expressions.
## Appendix B Gradient of ITE loss
First we split the gradient into two terms:
\[\frac{\partial L}{\partial\theta}=-\frac{\partial}{\partial\theta}\log\frac{| \langle\psi|\psi_{T}\rangle|^{2}}{\langle\psi|\psi\rangle\langle\psi_{T}|\psi_ {T}\rangle}=\frac{\partial}{\partial\theta}\log\langle\psi|\psi\rangle-\frac{ \partial}{\partial\theta}\log|\langle\psi|\psi_{T}\rangle|^{2}\,, \tag{24}\]
then calculate each term separately:
\[\begin{split}-\frac{\partial}{\partial\theta}\log|\langle\psi| \psi_{T}\rangle|^{2}&=-2\mathfrak{Re}\left\{\frac{\partial}{ \partial\theta}\log\langle\psi|\psi_{T}\rangle\right\}\\ &=-2\mathfrak{Re}\left\{\frac{1}{\sum_{s^{\prime}}\psi^{*}(s^{ \prime})\psi_{T}(s^{\prime})}\sum_{s}\psi_{T}(s)\frac{\partial}{\partial \theta}\psi^{*}(s)\right\}\\ &=-2\mathfrak{Re}\left\{\frac{1}{\sum_{s^{\prime}}|\psi(s^{ \prime})|^{2}\cdot\frac{\psi_{T}(s^{\prime})}{\psi(s^{\prime})}}\sum_{s}| \psi(s)|^{2}\frac{\psi_{T}(s)}{\psi(s)}\frac{\partial}{\partial\theta}\log \psi^{*}(s)\right\}\\ &=-2\mathfrak{Re}\left\{\frac{1}{\left\langle\frac{\psi_{T}(s)}{\psi(s) }\right\rangle_{s}}\Big{\langle}\frac{\psi_{T}(s)}{\psi(s)}\frac{\partial}{\partial \theta}\log\psi^{*}(s)\Big{\rangle}_{s}\right\}\,,\end{split} \tag{25}\]
\[\begin{split}\frac{\partial}{\partial\theta}\log\langle\psi|\psi \rangle&=\frac{1}{\langle\psi|\psi\rangle}\frac{\partial}{ \partial\theta}\sum_{s}\psi(s)\psi^{*}(s)\\ &=2\mathfrak{Re}\left\{\frac{1}{\langle\psi|\psi\rangle}\sum_{s} \psi(s)\frac{\partial}{\partial\theta}\psi^{*}(s)\right\}\\ &=2\mathfrak{Re}\left\{\frac{1}{\langle\psi|\psi\rangle}\sum_{s} |\psi(s)|^{2}\frac{\partial}{\partial\theta}\log\psi^{*}(s)\right\}\\ &=2\mathfrak{Re}\left\{\left\langle\frac{\partial}{\partial\theta}\log\psi^{*}(s) \right\rangle_{s}\right\}\,.\end{split} \tag{26}\]
Finally, putting these results back into Eq. (24), we obtain Eq. (9).
## Appendix C Relation between gradients of ITE loss and E loss
The ratio between the target wave function [Eq. (7)] and the current wave function is related to the local value of the Hamiltonian:
\[\begin{split}\frac{\psi_{T}(s)}{\psi(s)}&=\frac{ \psi(s)-\Delta\tau\sum_{s^{\prime}}\hat{H}_{ss^{\prime}}\psi(s^{\prime})}{ \psi(s)}\\ &=1-\Delta\tau\sum_{s^{\prime}}\frac{\psi(s^{\prime})}{\psi(s)} \hat{H}_{ss^{\prime}}\\ &=1-\Delta\tau H_{loc}(s)\,.\end{split} \tag{27}\]
By substituting this ratio into Eq. (9) we obtain Eq. (20):
\[\begin{split}\frac{\partial L}{\partial\theta}&=2\Re \left\{\left\langle\frac{\partial}{\partial\theta}\log\psi^{*}(\mathbf{s})\right\rangle _{s}-\frac{1}{\left\langle\frac{\psi_{T}(\mathbf{s})}{\psi(\mathbf{s})}\right\rangle _{s}}\left\langle\frac{\psi_{T}(\mathbf{s})}{\psi(\mathbf{s})}\frac{\partial}{\partial \theta}\log\psi^{*}(\mathbf{s})\right\rangle_{s}\right\}\\ &=2\Re\left\{\left\langle\frac{\partial}{\partial\theta}\log\psi^{* }(\mathbf{s})\right\rangle_{s}-\frac{\left\langle\left(1-\Delta\tau H_{loc}(\mathbf{s}) \right)\frac{\partial}{\partial\theta}\log\psi^{*}(\mathbf{s})\right\rangle _{s}}{\left\langle 1-\Delta\tau H_{loc}(\mathbf{s})\right\rangle_{s}}\right\}\\ &=2\Re\left\{\frac{\Delta\tau\left\langle H_{loc}(\mathbf{s})\frac{ \partial}{\partial\theta}\log\psi^{*}(\mathbf{s})\right\rangle_{s}-\Delta\tau \left\langle E\right\rangle\left\langle\frac{\partial}{\partial\theta}\log \psi^{*}(\mathbf{s})\right\rangle_{s}}{1-\Delta\tau\left\langle E\right\rangle} \right\}\\ &=\frac{\Delta\tau}{1-\Delta\tau\left\langle E\right\rangle} 2\Re\left\{\left\langle H_{loc}(\mathbf{s})\frac{\partial}{\partial\theta}\log \psi^{*}(\mathbf{s})\right\rangle_{s}-\left\langle E\right\rangle\left\langle \frac{\partial}{\partial\theta}\log\psi^{*}(\mathbf{s})\right\rangle_{s}\right\} \\ &=\frac{\Delta\tau}{1-\Delta\tau\left\langle E\right\rangle}\frac{ \partial L_{E}}{\partial\theta}\,.\end{split} \tag{28}\]
Here in the first line we used the result from Sec. B.
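The ratio identity of Eq. (27) can also be checked numerically. The snippet below uses a random Hermitian toy Hamiltonian and a dense state vector, which is a hypothetical setup chosen only to make the check exact:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(6, 6))
H = (H + H.T) / 2.0                 # toy Hermitian Hamiltonian
psi = rng.normal(size=6) + 2.0      # keep amplitudes away from zero
dtau = 0.01

psi_T = psi - dtau * H @ psi        # one Euler step of imaginary time evolution
H_loc = (H @ psi) / psi             # local energies H_loc(s)

# psi_T(s) / psi(s) == 1 - dtau * H_loc(s), elementwise
assert np.allclose(psi_T / psi, 1.0 - dtau * H_loc)
```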
---

arXiv:2307.02049 | Graph Neural Network-based Power Flow Model | Mingjian Tuo, Xingpeng Li, Tianxia Zhao | 2023-07-05 | http://arxiv.org/abs/2307.02049v1

# Graph Neural Network-based Power Flow Model
###### Abstract
Power flow analysis plays a crucial role in examining the electricity flow within a power system network. By performing power flow calculations, the system's steady-state variables, including voltage magnitude, phase angle at each bus, and active/reactive power flow across branches, can be determined. While the widely used DC power flow model offers speed and robustness, it may yield inaccurate line flow results for certain transmission lines. This issue becomes more critical when dealing with renewable energy sources such as wind farms, which are often located far from the main grid. Obtaining precise line flow results for these critical lines is vital for subsequent operations. To address these challenges, data-driven approaches leverage historical grid profiles. In this paper, a graph neural network (GNN) model is trained using historical power system data to predict power flow outcomes. The GNN model enables rapid estimation of line flows. A comprehensive performance analysis is conducted, comparing the proposed GNN-based power flow model with the traditional DC power flow model, as well as a deep neural network (DNN) and a convolutional neural network (CNN). The results on test systems demonstrate that the proposed GNN-based power flow model provides more accurate solutions with high efficiency compared to benchmark models.
DC power flow, Machine learning, Neural network, Power flow, Renewable energy, Transmission network.
## I Introduction
Power flow analysis is necessary to study the steady state of the system: it examines the flow of electrical power within a transmission network and determines various power system operating conditions. AC power flow (ACPF) problems are usually solved with iterative methods such as the Gauss-Seidel (GS) and Newton-Raphson (NR) methods. However, for a large-scale system, it is not practical to use AC power flow equation-based methods for decision making or fast screening of the system [1]. The decarbonization of electricity generation has relied on the integration of converter-based renewable energy sources (RES) over the past decades. RES such as solar and wind power exhibit variability and intermittency due to factors like weather conditions and time of day. This variability introduces fluctuations in generation, which can affect power flow patterns. Applying machine learning (ML) to these challenging situations is critical for the development of clean and green energy and can help achieve a sound statistical evaluation of operational risk. Applications of ML to renewable energy have been widely researched and studied in recent years [1]-[4]. Alternatively, the constraints and data associated with security-constrained unit commitment are studied using an ML model in [5]. ML as an advanced algorithm to predict the generation of renewable sources has been proposed in [6]. In [7], deep reinforcement learning was investigated as a possible control strategy for power systems with multiple renewable energy sources.
Compared to traditional computational approaches, machine learning algorithms have an intrinsic generalization capability with greater computational efficiency and scalability [8]. Since machine learning algorithms can learn complex nonlinear input-output relationships, adapting themselves to the data, they have been used to predict the voltage magnitude and phase angle at each bus [9]. Similarly, ML-based predictions of the initial system variables were used in [10] to reduce the number of iterations and the solution time of the NR-based ACPF model. However, the spatial information embedded in the grid was not exploited, and predictions of the active power flow on each branch were not considered.
A power system is an interconnected network of generators and loads with embedded graphical information. The graph structure of the power system consists of nodes (buses) and edges (branches) [11]. The branches in the power system are undirected; such graphs provide information on buses and their connections. Graph neural networks (GNNs) are a class of artificial neural networks (NNs) with advantages in processing graphical data that have explicit topological correlations embedded in graph structures, such as power systems [12]. They were first introduced by Scarselli et al. [13] and further developed by Li et al. [14]. The key idea of GNNs is to iteratively propagate messages through the edges of the graph structure. Ref. [15] proposes a purely data-driven approach based on graph convolutional networks and reports promising results on real-world power grids such as the Texas and East Coast systems. An encoder/decoder framework with a message-propagation mechanism among neighboring nodes is employed in [16] to solve PF. However, these methods rely on strong physical assumptions and depend heavily on preprocessing [17]. Moreover, a thorough analysis of the NN model's performance versus the non-iterative DC power flow (DCPF) method was not conducted, and predictions of the active power flow on each branch were not considered.
To bridge the aforementioned gaps, a GNN-based power flow prediction model is proposed; detailed studies were performed on model selection and on maximizing the performance of the proposed GNN model. The model was trained and tested on multiple systems to assess how accurately it predicts the outputs. Its performance and effectiveness are evaluated and demonstrated against the DC power flow model.
The remainder of this paper is organized as follows. Section II discusses the power system power flow calculation and overview of GNN methodology. Section III presents the construction of proposed GNN model. Section IV shows the simulation results and evaluates the performance of the proposed GNN method. Section V concludes this paper and presents future work.
## II Preliminaries
### _Power Flow Calculation_
Power flow calculation is a computational method used to determine the steady-state operating conditions of a power system. It involves solving a set of power flow equations to determine voltage magnitudes, phase angles, and power flows throughout the system. The mathematical model of the power system network the buses (nodes), transmission lines (branches), generators, loads, and other components. The network model should capture the electrical characteristics and connectivity of the system accurately. The nodal power balance equations are listed as follows [18],
\[P_{i}-V_{i}\sum_{j\in\{1,\ldots,N\}}V_{j}\big{(}G_{ij}\cos\theta_ {ij}+B_{ij}\sin\theta_{ij}\big{)} =0, \tag{1}\] \[Q_{i}-V_{i}\sum_{j\in\{1,\ldots,N\}}V_{j}\big{(}G_{ij}\sin\theta _{ij}-B_{ij}\cos\theta_{ij}\big{)} =0, \tag{2}\]
where \(P_{i}\) and \(Q_{i}\) are the active and reactive power injections at node \(i\), respectively. The summation terms represent the active and reactive power flowing into or out of the electrical network at a given node \(i\). \(V_{i}\) and \(V_{j}\) are the voltage magnitudes at the two end buses of a transmission line. \(G_{ij}\) and \(B_{ij}\) are the corresponding conductance and susceptance entries of the bus admittance matrix. The phase angle difference \(\theta_{ij}\) is the difference between the voltage phase angles of the two end buses of a branch.
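A vectorized evaluation of these nodal balance residuals might be sketched as follows, assuming the conventional polar form in which the reactive term carries \(G_{ij}\sin\theta_{ij}-B_{ij}\cos\theta_{ij}\); the function and variable names are illustrative:

```python
import numpy as np

def power_mismatch(V, theta, G, B, P, Q):
    """Active/reactive mismatches of the polar-form nodal balance
    at every bus, given voltage magnitudes V and angles theta."""
    dtheta = theta[:, None] - theta[None, :]          # theta_ij matrix
    P_calc = V * ((G * np.cos(dtheta) + B * np.sin(dtheta)) @ V)
    Q_calc = V * ((G * np.sin(dtheta) - B * np.cos(dtheta)) @ V)
    return P - P_calc, Q - Q_calc
```

At a flat start (all angles zero, unit voltages) the computed injections of a lossless symmetric network vanish, so the mismatch simply equals the scheduled injections, which is the usual initialization point for NR iterations.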
An accurate and comprehensive power flow analysis is achieved by employing conventional ACPF techniques such as the NR and GS methods. This approach accounts for the nonlinear properties of power system components, including transformers and transmission lines. However, ACPF solves a set of nonlinear equations, representing the power flow equations derived from Kirchhoff's laws and other system constraints, by iterative calculation. Due to the inherently iterative nature of these algorithms, they may suffer from divergence and may not be suitable for online monitoring applications or for integration into optimization-based scheduling and dispatch models. A non-iterative method called DC power flow can be used for fast power flow solutions [19]. DCPF represents the network as an equivalent DC network, reducing the complexity of the calculations. It simplifies the power flow equations by assuming that voltage magnitudes remain constant and that the phase angle differences are small. With DCPF, the steady-state active power flows can be found very quickly. However, the approximation error introduced by these assumptions may lead to inaccurate results. Thus, it is desirable to develop a new method that can provide fast and accurate power flow solutions.
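A minimal DCPF sketch, under the usual lossless assumptions (flat voltage profile, small angles, negligible resistance), solves the linear system \(B\theta=P\) with the slack angle fixed at zero and recovers each branch flow from the angle difference across its reactance. The interface below is hypothetical:

```python
import numpy as np

def dc_power_flow(lines, x, p_inj, slack=0):
    """DC power flow: build the susceptance matrix from branch
    reactances, solve for bus angles, then compute branch flows."""
    n = len(p_inj)
    B = np.zeros((n, n))
    for (i, j), xk in zip(lines, x):
        b = 1.0 / xk
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    keep = [k for k in range(n) if k != slack]   # remove slack row/column
    theta = np.zeros(n)
    p = np.asarray(p_inj, dtype=float)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], p[keep])
    flows = np.array([(theta[i] - theta[j]) / xk
                      for (i, j), xk in zip(lines, x)])
    return theta, flows
```

On a symmetric 3-bus triangle with 1 p.u. injected at bus 0 and 0.5 p.u. withdrawn at each of buses 1 and 2, the flow splits evenly over the two paths out of bus 0 and the far branch carries nothing.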
### _Machine Learning Overview_
Machine learning refers to computer algorithms that automatically improve, or learn, through experience using historical data. Given a system with \(N\) generators, the model is built from a historical dataset to make predictions,
\[\hat{h}^{PF}(\textbf{s}_{t},\textbf{u}_{t},d_{t},\textbf{r}_{t})=PF, \tag{3}\]
where \(\hat{h}^{PF}\) is the nonlinear power flow prediction model; \(\textbf{s}_{t}\) denotes the system states, and \(\textbf{u}_{t}\) is the generation dispatch at period \(t\). \(d_{t}\) and \(\textbf{r}_{t}\) denote the load profile and renewable forecast, respectively. The overview of the power flow model is shown in Fig. 1.
For a basic power flow model, the training data (injections) are multiplied by a weight vector \(W\). A bias \(b\) is then added, and the result is mapped to an output value through an activation function. The choice of activation function can vary depending on the selected model. Throughout the training process, the weight vector is continuously updated until the error falls below a predetermined threshold or a specified number of epochs is reached [20].
In order to handle the mapping with high nonlinearity, a power flow model typically involves multiple nodes (artificial neurons) with several hidden layers as shown in Fig. 3. The connections between nodes (vector weights) reflect the signal strength between each neuron. A neural network consisting of multiple layers performs distinct transformations on input data. Each layer or node within the network learns specific features from the input. By leveraging an NN-based power flow model, complex input-output relationships can be learned, which may
Fig. 1: Overview of power flow model.
Fig. 2: Example of power flow model training process.
be challenging to grasp or program using conventional algorithms.
## III Graph Neural Network based Power Flow Model
CNNs have inherent limitations when it comes to handling graphical data that contains explicit topological graph correlations [12]. However, recent progress in CNNs has led to the resurgence of GNNs, which are neural networks specialized in processing and extracting knowledge from graph-structured data. A power system can be viewed as a graph comprising nodes (buses) and edges (branches) that denote the connections between nodes. GNNs are specifically designed to capture the intricate dependencies and relationships present in graph data. To achieve this, GNNs have been developed by extending the convolution operation to graphs and, more generally, to non-Euclidean spaces. Previous studies in [21] have shown that GNNs provide state-of-the-art performance in graph analysis tasks.
The input vector \(X\) concatenates information about the electrical power that is being produced and consumed everywhere on the grid. The branches in the power system are undirected; such graphs provide information on buses and their connections. Specifically, each generator \(g\in G\) is defined by an active power infeed \(P_{g}\) (in MW) and a reactive power \(Q_{g}\) (in MVar). Therefore, each generator is described by two-dimensional information. Similarly, a nodal load \(d\in D\) is defined by an active power consumption \(P_{d}\) (in MW) and a reactive power consumption \(Q_{d}\) (in MVar). Thus, the injection vector \(X\) concatenates all these injection characteristics, including the initial voltage magnitude on each bus.
The convolution operator in propagation module is used to aggregate information from neighbors. Considering \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) as an undirected graph representing a power system, where \(\mathcal{V}\in\mathbb{R}^{N}\) denotes its nodes and \(\mathcal{E}\in\mathbb{R}^{K}\) denotes its edges. Let \(A\in\mathbb{R}^{N\times N}\) be the adjacency matrix of \(\mathcal{G}\), we can define a renormalization equation as,
\[V=\widetilde{D}^{-\frac{1}{2}}\tilde{A}\widetilde{D}^{-\frac{1}{2}}, \tag{4}\]
where \(\tilde{A}=A+I_{N}\) represents an adjacency matrix with added self-connections, and \(I_{N}\) is the identity matrix. The adjacency matrix encodes the way injections are connected to edges. Typically, the element at \((i,j)\) of the adjacency matrix \(A\) is defined as follows,
\[A_{ij}=\begin{cases}1;\text{ if }\mathcal{V}_{i},\mathcal{V}_{j}\in\mathcal{ V},(\mathcal{V}_{i},\mathcal{V}_{j})\in\mathcal{E}\\ 0;\text{ if }\mathcal{V}_{i},\mathcal{V}_{j}\in\mathcal{V},(\mathcal{V}_{i}, \mathcal{V}_{j})\notin\mathcal{E}\end{cases}, \tag{5}\]
where \((\mathcal{V}_{i},\mathcal{V}_{j})\) denotes the branches from \(i\) to \(j\). The diagonal degree matrix \(\widetilde{D}\) for \(\mathcal{G}\) is defined as \(\widetilde{D}_{ii}=\sum_{j}\tilde{A}_{ij}\).
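Equations (4) and (5) can be reproduced directly for a toy topology; the 4-bus edge list below is an assumption for illustration.

```python
import numpy as np

# Toy 4-bus topology (assumed); branches are undirected edges.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
N = 4

A = np.zeros((N, N))
for i, j in edges:                      # Eq. (5): symmetric 0/1 adjacency
    A[i, j] = A[j, i] = 1.0

A_tilde = A + np.eye(N)                 # adjacency with self-connections
D_tilde = np.diag(A_tilde.sum(axis=1))  # diagonal degree matrix
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
V = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # Eq. (4): renormalized operator
```

The resulting operator \(V\) is symmetric with spectrum bounded by 1, which keeps repeated aggregation numerically stable.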
GNNs update the representations of nodes in a graph by aggregating information from their neighboring nodes. This process involves iterative message-passing steps, where each node incorporates both local and global information to update its representation. By doing so, GNNs capture both the local structure and the broader context of the graph. The graph convolutional activation is defined as follows,
\[F^{l}(X,A)=\sigma\big{(}VF^{(l-1)}(X,A)W_{k}^{l}+b^{l}\big{)}, \tag{6}\]
where \(F^{l}\) is the convolutional activations, \(W_{k}^{l}\) and \(b^{l}\) are the trainable convolutional weights matrix and bias matrix at the \(l\)-th layer; \(F^{0}=X\) is the input matrix. In this step, we iteratively update the latent state of each of the n power lines by performing latent leaps that depend on the value of their direct neighbors.
Fig. 4 demonstrates the message passing mechanism in forward propagation, a target node (bus 8) receiving information from its neighboring nodes. The output we want to predict is the flows through the lines at every line.
The proposed power flow model, as shown in Fig. 5, has one GNN layer for the embedding step and two hidden layers for the decoding step. The Rectified Linear Unit (ReLU) is chosen as the activation function for forward propagation, which helps mitigate the vanishing gradient problem during the training phase of the model. The decoding step maps the embedded data to the output space, which consists of the steady-state voltage magnitude for each bus and the power flow on each branch.
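A plain-NumPy sketch of this architecture follows; the bus count, feature widths, output size, and the single decoding layer shown are illustrative assumptions rather than the exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def gcn_layer(F_prev, V, W, b):
    """Graph-convolutional activation of Eq. (6): F^l = sigma(V F^(l-1) W^l + b^l)."""
    return relu(V @ F_prev @ W + b)

# Illustrative dimensions (assumptions): 4 buses, 5 input features per bus,
# 16 hidden units, 8 outputs.
N, f_in, f_hid, n_out = 4, 5, 16, 8
V = np.eye(N)                        # stands in for the renormalized operator of Eq. (4)
X = rng.normal(size=(N, f_in))       # injection vector, one feature row per bus

W1 = rng.normal(size=(f_in, f_hid)) * 0.1
b1 = np.zeros(f_hid)
H = gcn_layer(X, V, W1, b1)          # embedding step

W2 = rng.normal(size=(N * f_hid, n_out)) * 0.1
out = relu(H.reshape(-1) @ W2)       # one decoding layer of the sketch
```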
Mean square error (MSE) loss was used to measure the performance of the model during the training process. MSE measures the average squared difference between actual and predicted outputs. The MSE loss function used in this paper is defined in (7),
\[MSE=\tfrac{1}{N}\sum_{i=1}^{N}(y_{i}-\bar{y}_{i})^{2}, \tag{7}\]
Fig. 4: Example of message passing mechanism in GNN of IEEE 14-bus system.
Fig. 5: Illustration of the proposed GNN neural network.
Fig. 3: Neural network with multiple layers.
where \(y_{i}\) denotes the actual output value while \(\bar{y}_{i}\) denotes the GNN model estimated output value, and \(N\) denotes the number of sample points.
## IV Case Studies
The proposed GNN model was trained to predict voltage magnitude and active power flow on systems of different sizes: the IEEE 14-bus and IEEE 24-bus test systems. In the data generation process, each load and generator dispatch setting is randomly perturbed between [85%, 115%] with a uniform distribution, where the load data in the case files is used as the base value; the active power dispatch is adjusted correspondingly to the variation in total system load. Power flow data generation is performed in Python 3.8 using pypower. The GNN-based power flow model is trained using pytorch on NVIDIA RTX 2070 GPUs. The generated dataset was divided into two groups: 80% for training and 20% for validation.
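A minimal sketch of this sampling scheme, using illustrative base values in place of the pypower case-file data:

```python
import numpy as np

rng = np.random.default_rng(0)

base_load = np.array([21.7, 94.2, 47.8, 7.6, 11.2])  # illustrative MW values
base_gen = np.array([100.0, 82.5])

def sample_scenario(base_load, base_gen, rng):
    """Perturb each load uniformly in [85%, 115%] of its base value and
    rescale generation so total dispatch tracks total load (assumption)."""
    load = base_load * rng.uniform(0.85, 1.15, size=base_load.shape)
    gen = base_gen * (load.sum() / base_load.sum())
    return load, gen

load, gen = sample_scenario(base_load, base_gen, rng)
```

Each call produces one training sample; repeating it and running a power flow solver on every scenario yields the labeled dataset.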
The proposed GNN model is compared with DNN and CNN models. The following metrics are used to assess prediction accuracy: (1) maximum error (MAX-E), (2) median absolute error (MED-E), (3) mean absolute percentage error (MAPE), and (4) the R2 score, which is defined as follows.
\[R^{2}(y_{i},\bar{y}_{i})=1-\frac{\sum_{i=1}^{N}(y_{i}-\bar{y}_{i})^{2}}{\sum_ {i=1}^{N}(y_{i}-\bar{y})^{2}} \tag{8}\]
where \(\bar{y}\) is the mean of actual labels. \(R^{2}\) score gives some information about the goodness of model fitting. In regression, the \(R^{2}\) coefficient of determination is a statistical measure of how well the regression predictions approximate the real data points.
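These four metrics can be sketched in NumPy as follows; the MAPE definition assumes nonzero actual values.

```python
import numpy as np

def metrics(y, y_hat):
    """MAX-E, MED-E, MAPE (in %), and the R^2 score of Eq. (8)."""
    err = np.abs(y - y_hat)
    max_e = err.max()
    med_e = np.median(err)
    mape = 100.0 * np.mean(err / np.abs(y))   # assumes nonzero actual values
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return max_e, med_e, mape, r2
```

A perfect predictor yields an R\(^2\) of 1, while always predicting the mean of the labels yields an R\(^2\) of 0.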
Fig. 6 depicts the loss curve of the proposed GNN model on the 14-bus system during training. It can be observed that the proposed GNN model performs well in minimizing the MSE loss. The training loss decreases rapidly and then plateaus after around 800 epochs. Table I shows the prediction accuracy of different neural network models. A deep neural network (DNN) and a convolutional neural network (CNN) are used as benchmarks [1]. The proposed GNN model has the highest prediction accuracy compared to the other models. For the 14-bus system, the prediction accuracy is 96.66% with a 5% tolerance. Further investigation on the 24-bus system revealed that the GNN model maintains the highest prediction accuracy across different error tolerances.
Table II summarizes the statistics for all model architectures. The first column includes the R2 score results of all models on the 14-bus and 24-bus systems. It can be observed that the GNN model has the highest R2 score in both scenarios, close to 1, implying that the predictions of the GNN model approximate the real data points well compared to the other machine-learning-based power flow models. The GNN model also has the lowest MAX-E, MED-E, and MAPE values.
Fig. 6: Training curve of GNN neural network on 14-bus system.
Fig. 7: Absolute error distribution of DNN neural network for 14-Bus System.
Fig. 8: Absolute error distribution of CNN neural network for 14-Bus System.
Fig. 9: Absolute error distribution of GNN neural network for 14-Bus System. |
2304.13372 | Feed-Forward Optimization With Delayed Feedback for Neural Networks | Backpropagation has long been criticized for being biologically implausible,
relying on concepts that are not viable in natural learning processes. This
paper proposes an alternative approach to solve two core issues, i.e., weight
transport and update locking, for biological plausibility and computational
efficiency. We introduce Feed-Forward with delayed Feedback (F$^3$), which
improves upon prior work by utilizing delayed error information as a
sample-wise scaling factor to approximate gradients more accurately. We find
that F$^3$ reduces the gap in predictive performance between biologically
plausible training algorithms and backpropagation by up to 96%. This
demonstrates the applicability of biologically plausible training and opens up
promising new avenues for low-energy training and parallelization. | Katharina Flügel, Daniel Coquelin, Marie Weiel, Charlotte Debus, Achim Streit, Markus Götz | 2023-04-26T08:28:46Z | http://arxiv.org/abs/2304.13372v1 | # Feed-Forward Optimization With Delayed Feedback for Neural Networks
###### Abstract
Backpropagation has long been criticized for being biologically implausible, relying on concepts that are not viable in natural learning processes. This paper proposes an alternative approach to solve two core issues, i.e., weight transport and update locking, for biological plausibility and computational efficiency. We introduce **F**eed-**F**orward with delayed **F**eedback (F\({}^{3}\)), which improves upon prior work by utilizing delayed error information as a sample-wise scaling factor to approximate gradients more accurately. We find that F\({}^{3}\) reduces the gap in predictive performance between biologically plausible training algorithms and backpropagation by up to 96%. This demonstrates the applicability of biologically plausible training and opens up promising new avenues for low-energy training and parallelization.
## 1 Introduction
Today, nearly all artificial neural networks are trained with gradient-descent-based optimization methods [1]. Typically, backpropagation [2] is used to efficiently compute the network's gradients. However, backpropagation relies on multiple biologically implausible factors, making it highly unlikely that the human brain functions in a similar fashion [3]. Besides increasing biological plausibility, solving these issues could significantly improve neural algorithms by increasing generalization performance, requiring fewer computational operations, and consuming less energy [4; 5]. As backpropagation makes up a significant portion of the computational and memory cost of neural network training, replacing it can yield significant time and energy savings. Diminishing the energy requirements is especially important in embedded and edge computing. Neuromorphic devices, for example, are novel brain-inspired hardware architectures [6] designed to accelerate neural networks and reduce the required energy. However, they are often unable to implement backpropagation and are thus in particular demand for biologically plausible alternatives [7].
Several training algorithms have been proposed to increase biological plausibility and reduce computational requirements. One option is removing the backward pass entirely, thus training an artificial neural network using only a forward pass. This can also lower the computational cost and open up new possibilities for parallelization but typically results in reduced predictive performance.
This paper presents _Feed-**F**orward with delayed **F**eedback (F\({}^{3}\))_, our novel, biologically-inspired, backpropagation-free training algorithm for deep neural networks. F\({}^{3}\) yields superior predictive performance on a variety of tasks compared to prior approaches by using delayed error information
from previous epochs as feedback signals. Using fixed feedback weights, F\({}^{3}\) passes these feedback signals directly to each layer without layer-wise propagation. Together, this resolves the dependence on downstream layers, making it possible to update the network parameters during the forward pass. As a result, F\({}^{3}\) eliminates the need for the computationally expensive backward pass. This reduces both the computational cost and the required communication between layers, opening up promising possibilities for parallel and embedded training. F\({}^{3}\) also enables in-the-loop training on neuromorphic and edge devices, making it a promising candidate for application in low-powered computational settings.
Our main contributions are as follows:
* F\({}^{3}\), a novel, biologically-inspired training algorithm for deep neural networks.
* Using delayed error information as feedback signal for improved predictive performance of backpropagation-free training.
* Empirical evaluation of F\({}^{3}\) on various datasets and neural network architectures.
* Theoretical analysis of the computational complexity of F\({}^{3}\) compared to backpropagation.
## 2 Related Work
While biological plausibility is not a necessary requirement for training artificial neural networks, there has been great interest in methods that combine biological plausibility with effective training [11; 5; 3; 12; 13; 14]. Two of the most substantial obstacles in that regard are the weight transport and the update locking problem [5; 10].
_Weight Transport_[15] Backpropagation reuses the forward weights symmetrically in the backward pass to propagate the gradients. However, synapses are unidirectional, and synchronizing separate forward and backward pathways precludes biological plausibility [16; 6]. The weights' non-locality additionally constrains the memory access pattern, which can severely impact the computational efficiency [4].
Figure 1: **F\({}^{3}\) (a)** solves both the weight transport and the update locking problem. In contrast to prior approaches, it uses delayed error information in the updates, improving the predictive performance. The current error signal \(e_{t}\) in epoch \(t\) is stored (green) and used in the forward pass (blue) of the next epoch \(t+1\). This eliminates the backward pass (red) for all hidden layers.
**Previous approaches (b) to (e):** Backpropagation (BP) (b) is not biologically plausible due to the weight transport and update locking problems caused by the backward pass. FA [8] (c) and DFA [9] (d) solve the weight transport problem by replacing the backward paths with random feedback weights \(B_{i}\) independent of the forward weights \(W_{i}\). However, they are still update-locked as they depend on the current error \(e_{t}\). DRTP [10] (e) releases update locking by using the target \(y^{*}\) instead of the error \(e_{t}\), but this comes at the cost of reduced accuracy.
_Update Locking_[17, 18] Artificial neural networks are typically trained with a clear separation between forward and backward passes. Every layer in a neural network must wait for all downstream layers to complete their forward and backward pass before completing its own backward pass. This introduces a considerable delay between a layer's forward pass and the calculation of the gradients used to update its weights. This can significantly increase training times as results of the forward pass need to be buffered or recomputed [19, 10] and restricts parallelization. Moreover, it is not biologically plausible, as a neuron's activation may have already changed during the delay, causing a desynchronization with the error signal [20].
Several approaches have been suggested to address both weight transport and update locking. Figure 1 illustrates some of these approaches compared to our approach \(\mathrm{F}^{3}\).
To solve the weight transport problem, _target propagation_[13] replaces the loss gradients with layer-wise target values computed by auto-encoders. Using an additional linear correction, the so-called difference target propagation, this approach achieves results comparable to backpropagation. Ororbia and Mali [5] take a similar approach to target propagation but employ the current layer's pre-activation and the next layer's post-activation instead of generating the layer-wise targets with auto-encoders. _Feedback alignment (FA)_[8], illustrated in Figure 1 (c), replaces the forward weights with fixed random feedback weights in the backward pass, thereby demonstrating that symmetric feedback weights are not required for effective training. FA solves the weight transport problem; however, the training signals remain non-local, meaning the gradient estimates are propagated backward through the network's hidden layers. To alleviate this, _direct feedback alignment (DFA)_[9] passes the error directly to each hidden layer, see Figure 1 (d).
For solving update locking, Mostafa et al. [19] use auxiliary classifiers to generate _local errors_ from each layer's output. With this strategy, they outperform FA and approach backpropagation-like accuracy. Improving upon this, Nokland and Eidnes [21] demonstrate that combining local classifiers with a so-called local similarity matching loss can close the gap to backpropagation. _Decoupled greedy learning (DGL)_[22] follows a similar approach that is based on greedy objectives, which even scales to large datasets such as ImageNet [23].
Another approach to address update locking are _synthetic gradients_[18, 17], which model a subgraph of the network and predict its future output based on only local information. Replacing backpropagation with these synthetic gradients decouples the layers, resulting in so-called _decoupled neural interfaces (DNIs)_. The same approach can also be used to predict synthetic input signals and thus solve the _forward locking problem_[18]. _Direct random target projection (DRTP)_[10], illustrated in Figure 1 (e), builds upon DFA to solve weight transport and update locking simultaneously. By substituting the error with the targets, DFA's direct backward paths can be replaced with direct forward paths to the hidden layers. While DRTP cannot reach the same accuracy as FA and DFA, it significantly outperforms training only the last layer of the network, demonstrating its ability to train hidden layers.
The _forward gradient_[24] is another approach to gradient approximation without backpropagation. It is based on forward mode automatic differentiation, which requires multiple runs to compute the exact gradient. To reduce this overhead, the gradient is estimated with the forward gradient based on the directional derivative.
The recently proposed _Forward-Forward (FF)_ algorithm [25] replaces the backward pass with another forward pass on so-called negative data. Each layer is trained separately to distinguish the real data in the first forward pass from the negative data in the second forward pass. _PEPITA_[26] takes a similar approach by executing two forward passes on different input data, where the input to the second forward pass is modulated based on the error of the first one.
_Multilayer SoftHebb_[27] combines Hebbian learning with a softmax-based plasticity rule and a corresponding soft anti-Hebbian plasticity to enable training deep networks of a specific architecture. Using greedy layer-wise training, multilayer SoftHebb can train the hidden layers in a single unsupervised epoch, followed by up to 200 epochs of supervised training for the linear classifier head.
_Predictive coding_[28, 29] is another approach to biological plausibility based on the predicting processing model, an influential theory in cognitive neuroscience. It compares predictions of the expected input signal to a layer to the actual observations and aims to minimize this prediction error. In contrast to the more traditional feed-forward model, only the residual, the prediction error, is
passed to the next level [30]. Predictive coding has been successfully applied to artificial neural networks; for example, Whittington and Bogacz [31] show that predictive coding networks can approximate backpropagation on multilayer-perceptrons. This has since been extended to arbitrary computation graphs, including convolutional and recurrent neural networks [32].
## 3 Feed-Forward-Only Training
### Training Neural Networks with Backpropagation
Let us consider a multi-layer fully-connected neural network like the one illustrated in Figure 1. It consists of \(K\) fully-connected layers where each layer \(i\) has weights \(W_{i}\) and biases \(b_{i}\) and passes its output \(h_{i}\) to the next layer. To obtain the network's output \(y=h_{K}\), the forward pass computes the output of layer \(i\) as
\[z_{i} =W_{i}h_{i-1}+b_{i} \tag{1}\] \[h_{i} =f(z_{i}), \tag{2}\]
with \(f\) being a non-linear activation function. A loss function \(L\) measures how close the last layer's output \(y\) is to the target value \(y^{*}\). When training a neural network, we aim to decrease this loss by adjusting the network's parameters \(W_{i}\) and \(b_{i}\). The _credit assignment problem_[33] states how much of the loss can be attributed to each parameter and how they should be changed accordingly. A common solution to the credit assignment problem is updating each parameter proportional to the gradient of the loss with respect to that parameter.
Backpropagation can compute this gradient efficiently by applying the chain rule in a backward pass over the network. For example, it computes the gradient of the loss with respect to \(h_{i}\) as
\[\delta h_{i}=\frac{\partial L}{\partial h_{i}}=W_{i+1}^{T}\delta z_{i+1}, \tag{3}\]
which can also be expressed as
\[\delta h_{i} =W_{i+1}^{T}\left(\delta h_{i+1}\odot f^{\prime}\left(z_{i+1}\right)\right) \tag{4}\] \[=\boxed{\frac{\partial L}{\partial y}}\,\boxed{W_{i+1}^{T}\left(\frac{\partial y}{\partial h_{i+1}}\odot f^{\prime}\left(z_{i+1}\right)\right)}. \tag{5}\]
The gradient \(\delta h_{i}\) thus depends on an error term (green), represented by \(\partial L/\partial y\), and the feedback path (blue). This feedback path relies on both the transpose of the forward weights \(W_{j}^{T}\) and the derivative of the activation \(f\) at the input \(z_{j}\) for all downstream layers \(j\in\{i+1,\dots,K\}\). The gradients of the weights \(W_{i}\) and biases \(b_{i}\) can be derived directly from \(\delta h_{i}\).
### Approximating Gradients without Backpropagation
Not all of the information necessary to compute \(\delta h_{i}\) as described in Equation (5) is available to a biologically plausible training algorithm, as a) the algorithm cannot rely on symmetric feedback weights (weight transport) and thus cannot utilize the forward weights \(W\) or their transpose, and b) the update step cannot wait for the downstream layers \(j>i\) (update locking). Thus, neither the intermediate values \(z_{j}\), the final output \(y\), nor the loss \(L\) is available. This means that biologically plausible training algorithms cannot compute the true gradient \(\delta h_{i}\). Instead, they typically approximate the gradients based on the information that is available.
F\({}^{3}\) is inspired by DRTP [10], which approximates the gradients \(\delta h_{i}\) using the targets \(y^{*}\) instead of the error signal and fixed random feedback weights for the feedback path. However, the targets are not a practically good approximation of the true error signal as they contain no information on the error magnitude and only limited information on its direction, which can severely impact the training. Therefore, we propose _Feed-Forward with delayed Feedback (F\({}^{3}\))_, which uses delayed error information \(e_{t-1}\) to approximate the error signal more precisely while remaining free of update locking.
### Delayed Error Information as Feedback Signal
\(\mathrm{F}^{3}\), illustrated in Figure 1 (a), approximates the gradient \(\delta h_{i}\) as
\[\delta^{\mathrm{F}^{3}}h_{i}=\boxed{B_{i}^{T}}\boxed{e_{t-1}}. \tag{6}\]
We reuse the idea of fixed feedback weights \(B_{i}\) to replace the feedback path (blue) but use delayed error information \(e_{t-1}\) as error signal (green). Computing \(\delta^{\mathrm{F}^{3}}h_{i}\) is thus independent of other layers. The remaining gradients within the layer, e.g., for the weights and biases, rely only on the incoming gradient \(\delta h_{i}\) and information within the layer itself. They can be computed with the chain rule in the same way as with backpropagation using the approximate gradient \(\delta^{\mathrm{F}^{3}}h_{i}\). This makes all gradient approximations independent of downstream layers and allows updating a layer immediately during the forward pass, eliminating the need for a separate backward pass. Since the final layer \(K\) has access to the model's output \(y=h_{K}\), it can use the current error signal and compute the true gradients. As a result, \(\mathrm{F}^{3}\) only affects hidden layers; the output layer is updated as it would be with backpropagation.
Algorithm 1 outlines how \(\mathrm{F}^{3}\) is used to train a fully-connected neural network. The layers are processed in a forward pass which computes the layer's output \(h_{i}\), followed immediately by computing the approximate gradients \(\delta^{\mathrm{F}^{3}}W_{i}\) and \(\delta^{\mathrm{F}^{3}}b_{i}\) and updating the layer's parameters \(W_{i}\) and \(b_{i}\). The approximate gradients can be used to update the network's parameters like the true gradients. For example, using stochastic gradient descent with a learning rate of \(\eta\) would result in the update
\[W_{i} \gets W_{i}-\eta\delta^{\mathrm{F}^{3}}W_{i} \tag{7}\] \[b_{i} \gets b_{i}-\eta\delta^{\mathrm{F}^{3}}b_{i}. \tag{8}\]
\(\mathrm{F}^{3}\) is thus independent of the optimizer and can be used with various gradient-based optimizers from standard SGD to more advanced methods like Adam [34].
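The training step outlined in Algorithm 1 can be sketched in a few lines of NumPy. The layer sizes, learning rate, and the use of the combined sigmoid/BCE output gradient \(y-y^{*}\) as the stored error signal are simplifying assumptions of this sketch, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def f3_train_step(x, y_star, e_prev, Ws, bs, Bs, eta=0.01):
    """One F^3 forward pass with immediate updates (sketch of Algorithm 1).
    Hidden layers use the delayed error e_prev via Eq. (6); as a simplifying
    assumption, the combined sigmoid/BCE gradient y - y* serves as the error."""
    h = x
    for i in range(len(Ws) - 1):              # hidden layers, updated in-flight
        z = Ws[i] @ h + bs[i]
        dh = Bs[i].T @ e_prev                 # Eq. (6): B_i^T e_{t-1}
        dz = dh * (1.0 - np.tanh(z) ** 2)     # chain rule inside the layer
        Ws[i] -= eta * np.outer(dz, h)        # Eq. (7)
        bs[i] -= eta * dz                     # Eq. (8)
        h = np.tanh(z)
    z = Ws[-1] @ h + bs[-1]
    y = 1.0 / (1.0 + np.exp(-z))              # sigmoid output layer
    e_t = y - y_star                          # current error trains the last layer
    Ws[-1] -= eta * np.outer(e_t, h)
    bs[-1] -= eta * e_t
    return y, e_t                             # store e_t for epoch t + 1

# Tiny network: 4 inputs, 8 hidden units, 3 outputs (sizes are illustrative).
Ws = [rng.normal(size=(8, 4)) * 0.1, rng.normal(size=(3, 8)) * 0.1]
bs = [np.zeros(8), np.zeros(3)]
Bs = [rng.normal(size=(3, 8)) * 0.1]          # fixed feedback weights B_i

x = rng.normal(size=4)
y_star = np.array([1.0, 0.0, 0.0])
W0_before = Ws[0].copy()
e_prev = y_star                               # first epoch: targets initialize e
y, e_t = f3_train_step(x, y_star, e_prev, Ws, bs, Bs)
```

Note that each hidden layer is updated before its activation is passed onward, so no backward pass over the network is ever needed.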
\(\mathrm{F}^{3}\) utilizes sample-wise delayed error information
\[e_{t-1}=\left(\frac{\partial L}{\partial y}\right)_{t-1} \tag{9}\]
from the previous epoch \(t-1\) to approximate the current error signal
\[e_{t}=\left(\frac{\partial L}{\partial y}\right)_{t}. \tag{10}\]
The feedback is thus always delayed by one epoch. The targets \(y^{*}\) are used to initialize the error information in the first epoch. Alternatively, the error information can be initialized from an additional inference-only epoch \(t=0\), leaving the model parameters unchanged. On a technical level, F\({}^{3}\) stores each training sample's error information at the end of the forward pass. As the loss is already used to train the last layer, this requires no additional computation.
Delayed error information more closely resembles the actual error signal than, for example, the targets \(y^{*}\) used in prior, similarly plausible approaches [10]. Using delayed error information includes both the magnitude and direction of the error in the update, which helps to differentiate between samples requiring more attention and those not. F\({}^{3}\) thus offers a more accurate gradient approximation.
### Different Variants of Delayed Error Information
The sample-wise gradient of the loss with respect to the model's output \(y_{t-1}\) is the most natural choice for modeling the error signal as it corresponds to the information used in backpropagation delayed by one epoch. However, F\({}^{3}\) can also use other types of delayed error information. For example, it can use the delayed error
\[y^{*}-y_{t-1}. \tag{11}\]
We refer to these two variants as F\({}^{3}\)-Loss and F\({}^{3}\)-Error.
Additional transformations can be applied to the chosen error information. For classification tasks, one could consider only the feedback signal of the target class \(c^{*}\). We refer to this as _one-hot error information_ as it corresponds to multiplying the error information with the one-hot encoded targets. One-hot error information reduces the computational cost of the gradient approximation by reducing the matrix-vector multiplication \(B_{i}^{T}e_{t-1}\) to the multiplication of a single column of \(B_{i}\) with the scalar \(e_{t-1}[c^{*}]\). Another option is applying an additional softmax to the output \(y_{t-1}\) before computing the error information.
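The one-hot cost reduction can be verified numerically. The layout \(B_{i}\in\mathbb{R}^{C\times w}\), so that \(B_{i}^{T}e_{t-1}\) has the hidden dimension, is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
C, w = 10, 500
B = rng.normal(size=(C, w))          # fixed feedback weights B_i (so B_i^T is w x C)
e = rng.normal(size=C)               # delayed error information
c_star = 3                           # index of the target class

one_hot = np.zeros(C)
one_hot[c_star] = 1.0
full = B.T @ (one_hot * e)           # one-hot error: full matrix-vector product
cheap = B[c_star] * e[c_star]        # same result from one row of B_i and a scalar
```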
Instead of sample-wise error information, one could also aggregate the error over all samples in the last epoch. This would reduce the already small memory overhead to just a single value for the whole training set, which could benefit applications with severe hardware constraints. While losing the ability to differentiate between low and high error samples, this still adjusts the update step size as the error decreases throughout the training.
### How F\({}^{3}\) Solves the Weight Transport Problem
Prior work [8; 9; 10] has shown that neural networks can be trained using fixed random feedback paths. We therefore replace the layer-wise propagation of gradients (blue) in Equation (5) with a fixed feedback weight matrix \(B_{i}\) to directly pass the error signal to layer \(i\) in Equation (6). All gradient information other than the error signal, e.g., the forward weights and the derivatives of the activation functions for all downstream layers, is encoded in this feedback weight matrix. The forward weights \(W_{j}\) are thus no longer required to approximate the gradient \(\delta^{\text{F}^{3}}h_{i}\), making F\({}^{3}\) de facto weight transport free.
### How F\({}^{3}\) Solves the Update Locking Problem
As discussed in Section 3.2, avoiding update locking means that the update step in layer \(i\) cannot rely on results or gradients in downstream layers \(j>i\). Using fixed direct feedback paths already eliminates the dependence on downstream layers in the feedback path. The error signal \(\partial L/\partial y\) (green) in Equation (5) is not available either, as the last layer's output \(y\) has not yet been computed for the current epoch \(t\). Since training typically requires multiple epochs, F\({}^{3}\) instead uses delayed error information from the previous epoch \(t-1\) to approximate the current error signal in Equation (6). As discussed in Section 3.3, this makes it independent of downstream layers.
This introduces a dependency between consecutive epochs, as processing a sample requires its delayed error information from the previous epoch. However, this is generally not a problem since training epochs are typically clearly separated. Often, there is even additional auxiliary computation in between epochs, such as intermediate evaluation or check-pointing. One could also relax this requirement at the cost of a potentially higher delay of the error information by using the most
current loss available for a sample, be it from the previous or any earlier epoch. In summary, F\({}^{3}\) can immediately update the weights in every layer without waiting for downstream layers and is thus free of update locking.
### Computational Considerations
By eliminating layer-wise error propagation, F\({}^{3}\) generally requires fewer operations than backpropagation. The exact reduction depends on the input data and selected network architecture. For a fully-connected neural network with \(K\) linear layers of equal width \(w\), input size \(n_{0}\), and output size \(C\), the number of fused multiply-add operations to perform the forward, backward, and update step for a single training sample is
\[3w\left(n_{0}+(K-2)w+C\right)+\underbrace{\left(K-2\right)w^{2}+Cw}_{\text{ computing }\delta y_{i}} \tag{12}\]
with backpropagation and
\[3w\left(n_{0}+(K-2)\,w+C\right)+\underbrace{C(K-1)w}_{\text{computing }\delta^{\mathrm{F}^{3}}y_{i}} \tag{13}\]
with F\({}^{3}\). The detailed derivation of these equations is given in Appendix A. The number of operations saved increases with deeper and wider networks. For sufficiently large widths, F\({}^{3}\) approaches a 100% reduction for computing the \(\delta y_{i}\). This results in a 25% theoretical maximum reduction in time for the whole training step, i.e., forward and backward pass followed by an update step.
As an example, we consider a five-layer network with input size \(n_{0}=100\) and output size \(C=10\). With a hidden layer width \(w=200\), F\({}^{3}\) can save more than 90% of the operations for computing the \(\delta y_{i}\) and more than 20% of the total operations for the whole training step. With a width of \(w=1024\), this increases to more than 98% and 24%, respectively. In experimental evaluations, we find that the practically achievable reduction in training time is closer to 15%.
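The savings quoted above can be reproduced directly from Equations (12) and (13); the snippet below evaluates both operation counts for the five-layer example (\(n_0=100\), \(C=10\)):

```python
def ops_backprop(K, w, n0, C):
    # Equation (12): shared forward/update cost + layer-wise delta propagation.
    return 3 * w * (n0 + (K - 2) * w + C) + (K - 2) * w**2 + C * w

def ops_f3(K, w, n0, C):
    # Equation (13): same shared cost + direct feedback projections only.
    return 3 * w * (n0 + (K - 2) * w + C) + C * (K - 1) * w

for w in (200, 1024):
    delta_bp = (5 - 2) * w**2 + 10 * w    # cost of the deltas, Eq. (12)
    delta_f3 = 10 * (5 - 1) * w           # cost of the deltas, Eq. (13)
    saved_delta = 1 - delta_f3 / delta_bp
    saved_total = 1 - ops_f3(5, w, 100, 10) / ops_backprop(5, w, 100, 10)
    print(w, saved_delta, saved_total)    # roughly 0.93/0.21 and 0.99/0.24
```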
## 4 Evaluation
### Methods
We evaluate F3 on different datasets and learning tasks. For image classification, we use the well-known image classification datasets MNIST [35] and CIFAR-10 [36]. For general classification, we use the Census Income dataset [37] to predict whether a person's income exceeds a threshold based on their census data. For regression, we use the SGEMM dataset [38; 39] to predict the run time of sparse matrix-matrix multiplications on GPU using different kernel parameters. The MNIST, CIFAR-10, and SGEMM datasets have been standardized.
We use F3 to train different fully-connected neural network architectures with \(1\) to \(50\) hidden layers at a width of \(500\) neurons. We use the hyperbolic tangent activation function in hidden layers and a sigmoid in the output layer for classification tasks. For classification tasks, one-hot encoding with binary cross-entropy (BCE) loss is used, while mean squared error (MSE) loss is used for regression tasks. We set a fixed learning rate determined by a grid search, as shown in Tables 2 and 3, and use the Adam optimizer [34] with default parameters.
We compare multiple variants of F3 using the different options for delayed error information described in Section 3.4. Results are shown for both the delayed loss gradient and the delayed error as feedback signal. Unless otherwise mentioned, we use the one-hot error information for classification tasks, i.e., using only the feedback coming from the target class. For regression tasks, we always use the delayed feedback signal directly without further modifications.
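For concreteness, the feedback-signal variants compared here can be written out for a single classification sample as follows. This is a sketch; the output values are made up, and sigmoid outputs with BCE loss are assumed as stated in Section 4.1.

```python
import numpy as np

y_prev = np.array([0.7, 0.2, 0.6])   # previous-epoch sigmoid outputs (3 classes)
target = np.array([1.0, 0.0, 0.0])   # one-hot encoded label

error = y_prev - target                                    # F3-Error
loss_grad = (y_prev - target) / (y_prev * (1 - y_prev))    # F3-Loss (BCE gradient)
one_hot_error = error * target       # keep only the target class's feedback
softmax_error = np.exp(y_prev) / np.exp(y_prev).sum() - target  # softmax variant
```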
We compare F3 to multiple other training algorithms: backpropagation and DRTP [10] were chosen as baselines for traditional and biologically plausible training approaches, respectively. To demonstrate the benefits of biologically plausible approaches, we include the results of training only the last layer while keeping the hidden layers constant at their random initialization. We refer to this as last-layer-only (LLO) training in the following. We additionally include DFA [9], which corresponds to F3 using current instead of delayed error information, thus not solving update locking.
### Experimental Environment
All experiments were conducted on a single compute node with two Intel Xeon 8386 "Ice Lake" processors and four NVIDIA A100-40 tensor core GPUs. Models were implemented in Python 3.8.6 using the PyTorch framework [40] version 1.12.1 with CUDA version 11.6. The source code is publicly available at github.com/Helmholtz-AI-Energy/f3.
### Delayed Error Information
Figure 3 illustrates the performance of F\({}^{3}\) using the different variants of delayed error information introduced in Section 3.4. For classification, error-based updates (F\({}^{3}\)-Error, green) tend to outperform updates based on the gradient of the loss (F\({}^{3}\)-Loss, blue). This result is unexpected since F\({}^{3}\)-Loss is more similar to backpropagation than F\({}^{3}\)-Error from a theoretical point of view. When using MSE loss, e.g., for the regression task SGEMM, the error is equivalent to the loss gradient except for a scalar factor. In this case, the utilized error information affects only the step size but not the general direction of the update step. Both effects can also be observed in Figure 2.
For classification, we test additional transformations of the error information, applying either a softmax to the output or considering only the error information of the target class, which corresponds to multiplication with the one-hot encoded target. In Figure 3, these variants are highlighted with different markers. We find that both the raw error information and the one-hot version are valid options for effective training, whereas applying an additional softmax tends to decrease the resulting quality as measured by an increased test loss.
### Comparison to Other Training Algorithms
Figure 2 illustrates the behavior of different training algorithms for multiple datasets and tasks. The exact results are also given in Tables 4 and 5. As expected, backpropagation outperforms biologically plausible training on most datasets; however, F\({}^{3}\) reduces this gap in performance by a significant fraction. On MNIST, F\({}^{3}\)-Error noticeably outperforms DRTP, increasing the test accuracy from 95.8% to 97.2%, thus reducing the gap to backpropagation by 54.9%. None of the algorithms solve the CIFAR-10 image classification task to an acceptable level, with even backpropagation only reaching a test accuracy of 51.2%. This is likely due to a sub-optimal network architecture. Counterintuitively, F\({}^{3}\) and DRTP slightly outperform DFA on CIFAR-10. Comparing with Frenkel et al. [10]'s results, we suspect this is because DFA requires a significantly lower learning rate for CIFAR-10 than the other biologically plausible approaches and might thus benefit from a more fine-grained hyperparameter optimization that differentiates between the individual algorithms.

Figure 2: Test loss and accuracy using different training approaches for a fully-connected neural network with one hidden layer consisting of \(500\) neurons. We evaluate two variants of F\({}^{3}\), using either the error or the loss gradient as delayed error information, and compare them to backpropagation, last-layer-only (LLO) training, DFA, and DRTP on the classification tasks MNIST, CIFAR-10, Census Income, and the regression task SGEMM. By using delayed error information as feedback signal, F\({}^{3}\) improves upon the performance of the similarly plausible DRTP, thus closing the gap of update locking free training to backpropagation.
All tested algorithms achieve good performance of about 80% on the Census Income dataset within less than ten epochs. On the regression dataset SGEMM, F\({}^{3}\) reduces the test loss from 0.364 to 0.014, thus reducing the gap to backpropagation by 96.9%. F\({}^{3}\) performs significantly better than LLO on all datasets except Census Income, where the difference between all methods is minor. This demonstrates the effectiveness of the biologically plausible update rule for hidden layers, proving that F\({}^{3}\) is more than just a sophisticated scheme to train the output layer.
In summary, F\({}^{3}\) notably improves upon the similarly plausible DRTP for both MNIST and the SGEMM regression task. On CIFAR-10, none of the algorithms shown solved the task to a satisfying level for the tested network architecture. All algorithms solved the Census Income classification problem to a similar degree, indicating that current biologically plausible algorithms are already applicable to some tasks without significant performance declines compared to backpropagation.
### Impact of the Feedback Weight Initialization
We further explore different approaches to initializing the feedback weights \(B_{i}\) and their impact on the training. As a baseline, we use the Kaiming-He uniform distribution [41] with a gain of \(\sqrt{2}\) and the fan-in mode. To test the impact of the feedback weights' magnitude and sign, we consider two discrete uniform distributions: trinomial with the values \(\{-1,0,1\}\) and binomial with the values \(\{0,1\}\). Finally, we test repeating the identity matrix \(I\) along the larger dimension with alternating signs, referred to as \(\pm I\). Figure 4 illustrates these initialization methods. Note that the bounds of the Kaiming-He distribution can become very small for large matrices.
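The four schemes can be sketched as follows. This is our reading; in particular, for the Kaiming bound we take the fan-in to be the feedback-signal dimension \(C\), which is an assumption.

```python
import numpy as np

def init_feedback(shape, scheme, rng):
    """Feedback weight initializations described above (sketch)."""
    C, w = shape
    if scheme == "kaiming":      # Kaiming-He uniform, gain sqrt(2), fan-in mode
        bound = np.sqrt(2.0) * np.sqrt(3.0 / C)   # fan-in = C is an assumption
        return rng.uniform(-bound, bound, size=shape)
    if scheme == "trinomial":    # discrete uniform over {-1, 0, 1}
        return rng.integers(-1, 2, size=shape).astype(float)
    if scheme == "binomial":     # discrete uniform over {0, 1}
        return rng.integers(0, 2, size=shape).astype(float)
    if scheme == "pm_identity":  # repeat +/- I along the larger dimension
        reps = int(np.ceil(max(shape) / min(shape)))
        blocks = [((-1) ** k) * np.eye(min(shape)) for k in range(reps)]
        M = np.concatenate(blocks, axis=0 if C > w else 1)
        return M[:C, :w]
    raise ValueError(scheme)

rng = np.random.default_rng(0)
B = init_feedback((3, 8), "trinomial", rng)
```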
Figure 5 shows the test loss for the different initialization methods for DRTP and F\({}^{3}\)-Error. With both training algorithms, we observe a small but notable improvement using trinomial initialization over Kaiming uniform, indicating that continuous magnitudes are not necessary for effective learning. In contrast to trinomial initialization, binomial initialization results in a massive increase in loss. This is a clear indication that the sign of the values plays a significant role and is necessary for successfully communicating the feedback signals. Repeating the identity matrix is similarly ineffective, suggesting that the arrangement of feedback weights matters even with both positive and negative values.

Figure 3: Test loss for F\({}^{3}\) using different types of delayed error information when training a fully-connected neural network with one hidden layer consisting of 500 neurons each on MNIST. The error tends to give better training results than the gradient of the loss. The one-hot transformed error signal provides similar results to the raw error information, whereas softmax offers no advantage but rather increases the resulting test loss.
These results illustrate that feed-forward training algorithms do not depend on a specific initialization of the feedback weights, but certain characteristics are necessary to promote effective training. Namely, they require a combination of different signs in the rows of the two-dimensional feedback weight matrix. The feedback weights \(B_{i}\) of layer \(i\) are used to approximate the gradient \(\delta h_{i}\) according to Equation (6). Each column in the matrix \(B_{i}\) is used to compute one value in \(\delta^{\mathrm{F}^{3}}h_{i}\) via a dot product with the feedback signal \(e_{t-1}\). The feedback signal consists of one value per model output, e.g., one value per class. For classification, a column in \(B_{i}\) thus prescribes the class mixture of the feedback for each value in \(\delta^{\mathrm{F}^{3}}h_{i}\). Only columns with more than one non-zero value allow multi-class mixtures, explaining the ineffectiveness of \(\pm I\). Furthermore, different signs help separate classes from each other, as shown by the improved performance of trinomial over binomial initialization.
### Scaling with Network Depth
Figure 6 shows the scaling behavior of the different training approaches with increasing network depth. Biologically plausible algorithms appear more robust to problems typically arising with very deep networks. While training smaller networks very effectively, we find that backpropagation fails to train networks with more than 25 layers, offering virtually no improvement over random initialization. In contrast, F\({}^{3}\) can train shallow networks to nearly the same degree of predictive performance as backpropagation while diminishing considerably less sharply with increasing depth. This is a result of the direct feedback pathways [9], which are independent of the network's depth and thus immune to the typical problems arising when training very deep networks with backpropagation [42]. This immunity is highlighted by DFA, which is unaffected by the network depth, as expected from Nokland [9]'s results.

Figure 4: The different feedback weight initialization schemes. Kaiming uses the Kaiming-He uniform distribution with a gain of \(\sqrt{2}\) and the fan-in mode. Trinomial and binomial are discrete uniform distributions over the values \(\{-1,0,1\}\) and \(\{0,1\}\), respectively. \(\pm I\) repeats the identity matrix \(I\) along the larger dimension, alternating the sign.

Figure 5: Best test loss for DRTP and F\({}^{3}\) with error scaling using different feedback weight initialization methods, shown for a fully-connected neural network with one hidden layer of 500 neurons trained on MNIST for 100 epochs. Trinomial initialization offers a slight improvement over the baseline Kaiming initialization, while neither binomial nor \(\pm I\) allow effective training.
The scaling behavior for DRTP is similar to F\({}^{3}\), i.e., the performance decreases much more slowly with network depth than when training with backpropagation. As both DRTP and F\({}^{3}\) are based on direct random feedback pathways, this result was expected. However, DRTP performs much worse for regression problems and cannot train even more shallow networks on the SGEMM task to a similar predictive performance as F\({}^{3}\). In summary, both biologically plausible algorithms can train even deep, 100-layer networks without further remedies, such as specific activation functions, yet F\({}^{3}\) offers better predictive performance throughout all tested network depths.
## 5 Conclusion
Backpropagation relies on concepts infeasible in natural learning, rendering it an effective yet biologically implausible and computationally expensive approach to training artificial neural networks. In this paper, we introduce Feed-Forward with delayed Feedback (F\({}^{3}\)), a novel algorithm for backpropagation-free training of multi-layer neural networks using a feed-forward approximation of intermediate gradients based on delayed error information. F\({}^{3}\) solves the weight transport problem by implementing direct random feedback connections and avoids update locking by using the delayed error information from the previous epoch as an additional sample-wise scaling factor. This makes it more biologically plausible than backpropagation while reducing training time and energy consumption per epoch. F\({}^{3}\) significantly improves the predictive performance compared to previous update locking free algorithms, reducing the gap to backpropagation by over 50% for classification and more than 95% for regression tasks.
By addressing the weight transport and update locking problems, F\({}^{3}\) is more biologically plausible than backpropagation. However, some implausibilities remain, such as the spiking problem, Dale's Law, and the general implausibility of supervised learning [3; 43]. A further limitation is the remaining accuracy gap to backpropagation and F\({}^{3}\) frequently requiring more epochs to reach a similar accuracy. This also limits the applicability to large-scale tasks and often outweighs the benefit of reduced compute requirements per epoch. Furthermore, there is currently no natural way to apply F\({}^{3}\) to convolutional networks while retaining the advantages of weight sharing for the feedback weights. The most promising next steps for more biological plausibility are combining F\({}^{3}\) with approaches like spiking neural networks [44] and semi- or self-supervised learning. To increase the
Figure 6: Test loss on the SGEMM dataset for different network depths and a width of 500 neurons per layer. Feed-forward-only training algorithms like F\({}^{3}\), DFA, and DRTP are more robust to very deep networks, while backpropagation fails to train networks above a depth of 25, achieving no improvement over the random initialization.
predictive performance, improving the feedback pathways and applying F\({}^{3}\) more effectively to other architectures are essential to scale F\({}^{3}\) to larger tasks.
By releasing update locking and thus the inter-layer dependencies when computing the gradients, F\({}^{3}\) can update the network's parameters during the forward pass. As a result, F\({}^{3}\) eliminates the need for the computationally expensive backward pass, thus requiring fewer operations and removing the need to buffer or recompute activations. Beyond that, it opens up promising possibilities for parallel training setups like pipeline parallelism by reducing the amount of communication and synchronization between different layers. Furthermore, F\({}^{3}\) enables on-device training on highly promising neuromorphic devices, which would simplify the training process significantly and has the potential to economize compute resources and energy tremendously.
## Acknowledgments and Disclosure of Funding
This work is supported by the Helmholtz Association Initiative and Networking Fund under the Helmholtz AI platform and the HAICORE@KIT grant.
|
2310.03773 | Functional data learning using convolutional neural networks | In this paper, we show how convolutional neural networks (CNN) can be used in
regression and classification learning problems of noisy and non-noisy
functional data. The main idea is to transform the functional data into a 28 by
28 image. We use a specific but typical architecture of a convolutional neural
network to perform all the regression exercises of parameter estimation and
functional form classification. First, we use some functional case studies of
functional data with and without random noise to showcase the strength of the
new method. In particular, we use it to estimate exponential growth and decay
rates, the bandwidths of sine and cosine functions, and the magnitudes and
widths of curve peaks. We also use it to classify the monotonicity and
curvatures of functional data, algebraic versus exponential growth, and the
number of peaks of functional data. Second, we apply the same convolutional
neural networks to Lyapunov exponent estimation in noisy and non-noisy chaotic
data, in estimating rates of disease transmission from epidemic curves, and in
detecting the similarity of drug dissolution profiles. Finally, we apply the
method to real-life data to detect Parkinson's disease patients in a
classification problem. The method, although simple, shows high accuracy and is
promising for future use in engineering and medical applications. | Jose Galarza, Tamer Oraby | 2023-10-05T04:46:52Z | http://arxiv.org/abs/2310.03773v1 | # Functional data learning using convolutional neural networks
###### Abstract
In this paper, we show how convolutional neural networks (CNN) can be used in regression and classification learning problems of noisy and non-noisy functional data. The main idea is to transform the functional data into a 28 by 28 image. We use a specific but typical architecture of a convolutional neural network to perform all the regression exercises of parameter estimation and functional form classification. First, we use some functional case studies of functional data with and without random noise to showcase the strength of the new method. In particular, we use it to estimate exponential growth and decay rates, the bandwidths of sine and cosine functions, and the magnitudes and widths of curve peaks. We also use it to classify the monotonicity and curvatures of functional data, algebraic versus exponential growth, and the number of peaks of functional data. Second, we apply the same convolutional neural networks to Lyapunov exponent estimation in noisy and non-noisy chaotic data, in estimating rates of disease transmission from epidemic curves, and in detecting the similarity of drug dissolution profiles. Finally, we apply the method to real-life data to detect Parkinson's disease patients in a classification problem. The method, although simple, shows high accuracy and is promising for future use in engineering and medical applications.
* September 2023
_Keywords_: Functional Data Learning, Deep Learning, Convolutional Neural Networks, Regression, Classification
## 1 Introduction
Functional data (FD) are functions observed for each unit over certain intervals; see (Xiaoying et al., 2021; Ramsay and Silverman, 2005_b_). FD appears in various scientific fields, such as engineering, geology, biology, medicine, pharmacology, and chemistry. It involves analyzing data in the form of continuous vector functions or curves, which could be treated as realizations of stochastic processes; see (Xiaoying et al., 2021; Yarger et al., 2022). Functional data analysis (FDA) provides methods for extracting intrinsic information from infinite-dimensional and irregular observation data; see (Xiaoying et al., 2021; Ramsay and Silverman, 2005_b_). FDA combines statistics, spatial analysis,
and multivariate modeling tools to analyze and predict functional data. It provides advantages over traditional pointwise estimation methods by using irregularly sampled data in space, time, and depth to fit space-time functional models (Gorecki et al. 2018).
Earlier stages of FDA were developed by (Ramsay 1991) to study the relationship between the ability of an examinee and his or her probability of correctly selecting an option in a standardized item response test. First, (Ramsay 1991) used kernels to correctly fit the observations to one dimension and later extended the idea in (Ramsay 1995) to fit the data to larger dimensions. Then, (Ramsay 1996) introduced Principal Differential Analysis to find an approximate solution that solves the differential equation \(Lu=0\) in which we can pick the order of the differential operator \(L\) and a basis for the weights, using splines or other adequate functions. This technique was applied to approximate Chinese writing in (Ramsay 2000) using a second-order operator. Later, complexity was added to this type of multilevel model (Ramsay 2002). For more information and applications of FD, we recommend reading the full work on Functional Analysis case studies of Ramsay's work in (Ramsay & Silverman 2002) and finding more mathematical foundation in (Ramsay & Silverman 2005_a_).
Functional data learning (FDL) has also received a lot of attention recently in the domain of machine learning and deep learning. Different approaches were used to establish predictive models of functional data, as in (Zhao 2012), where gradient descent was derived for classical neural networks using Fréchet derivatives and the Riesz representation theorem to establish a deep learning method for FDL. In (Abraham et al. 2014), the support vector machine method was applied to feature vectors obtained by voxel transformations. In (Pfisterer et al. 2019), some FDA R libraries were compared to machine learning techniques such as random forest and were shown to be outperformed by the latter. In (Zhang et al. 2021), a new methodology was used to learn the bases of different functional subspaces to model functional data before applying learning methods. In (Basna et al. 2021), orthogonal B-spline bases for FDA were shown to be more efficient than other Fourier-based methods.
Other deep learning and machine learning approaches include the work in (Yao et al. 2021), in which inputs are fed directly into a layer composed of nodes of nested neural networks. The output of these neural networks is the basis function themselves. Inspired by the square root velocity, the characteristics in (Rafajlowicz & Rafajlowicz 2021) were obtained from functional data using derivatives. It was found to work well along with classification methods such as logistic regression and support vector machine. Other methods of FDL such as manifold learning have recently been introduced, see, e.g. (Hernandez-Roig et al. 2020, Mughal et al. 2020, Hernandez-Roig et al. 2021).
Convolutional neural network (CNN) has been used extensively in image recognition. The architecture of Neural Networks has been evolving since the introduction of very deep CNNs (Simonyan & Zisserman 2014), ResNets (He et al. 2016), and MobileNets (Howard et al. 2017, Sandler et al. 2018). The efficiency of CNN is affected by its architecture and hardware (Polson & Sokolov 2020), as well as training
and cross-validation sets and scaling (Tan & Le 2019). Several other advances have been made by adding other algorithms to CNN, such as the "You Only Look Once" approach proposed by (Redmon et al. 2016, Redmon & Farhadi 2017). To our knowledge, convolutional neural networks have not been used before for functional data learning.
In this paper, we explore how to use CNNs in functional data learning. In Section 2, we introduce our method and the CNN architecture. In most of the problems, we use the same CNN architecture except for a few that will be described in place of their application. In Section 3, we start with some regression and classification problems of curves that represent different functions with and without random noise. In particular, we examine the ability of CNN to estimate parameters of exponential and trigonometric functions. In addition, we employed the same CNN to estimate the magnitude and width of the curve peaks. We also use CNN for the classification problems of increasing versus decreasing function, concave versus convex function, algebraic versus exponential growth, and discerning curves of 0, 1, and 2 peaks. In Section 4, we examine the ability of CNN to estimate parameters of characteristics of dynamical systems. In particular, we use it to estimate Lyapunov exponents of some chaotic curves with and without random noise. We apply the same CNN to estimate transmission rates from epidemic curves. Then, we applied a Siamese CNN to test the similarity between two drug dissolution curves (profiles). In Section 5, we apply the same methods to classify the actual functional data of cases of Parkinson's disease and control.
## 2 Methods
Let \(\{(x_{i},f(x_{i})):i=1,2,\ldots,n\}\) be the data points of a graph with equidistantly sampled points \(x_{i}\), \(i=1,2,\ldots,n\). The data pre-processing step in functional data learning via CNNs creates an input image from the functional data points. We assume that \(f\) is MinMax normalized, so that \(f:\mathbb{R}\rightarrow[0,1]\). A signed distance matrix \(D\) is then defined with elements \(d_{ij}=f(x_{i})-f(x_{j})\). The entries of \(D\) are mapped to grayscale intensity values to produce a 28 by 28 grayscale image; see Figure 1.
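A minimal sketch of this pre-processing step follows. The affine mapping of signed distances to grayscale and the block-averaging down-sampling to 28 by 28 are our assumptions, as the text does not fix them.

```python
import numpy as np

def curve_to_image(f_vals, size=28):
    """Turn sampled functional data into a size-x-size grayscale image
    via the signed distance matrix d_ij = f(x_i) - f(x_j) (sketch)."""
    f = np.asarray(f_vals, dtype=float)
    f = (f - f.min()) / (f.max() - f.min())   # MinMax normalize to [0, 1]
    D = f[:, None] - f[None, :]               # signed distances in [-1, 1]
    G = (D + 1.0) / 2.0                       # map to grayscale in [0, 1]
    # Downsample the n-x-n matrix to size-x-size by block averaging
    # (the resizing method is our assumption, not specified in the text).
    idx = np.array_split(np.arange(len(f)), size)
    return np.array([[G[np.ix_(r, c)].mean() for c in idx] for r in idx])

x = np.linspace(0, 1, 100)
img = curve_to_image(np.exp(-0.27 * x))
```

Note that the signed distances make the image antisymmetric about mid-gray, so increasing and decreasing curves produce visibly different images.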
Next, we describe a typical architecture as presented in MathWorks' documentation (MATLAB 2022), which is based on the LeNet networks of (LeCun et al. 1998). The first multilayer block of the CNN consists of a convolutional layer with batch normalization, a ReLU activation function, and an average pooling layer; its output is \(13\times 13\times 8\). The second block is the same as the first, with an output of \(5\times 5\times 16\). The third block does not contain an average pooling layer and has an output of \(3\times 3\times 32\). The final block has no average pooling layer either; however, it has a dropout layer of 20% with a fully connected layer, followed by a regression/classification layer. All codes and simulations were performed in MATLAB using its Deep Learning Toolbox (MATLAB 2022).
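The stated feature-map sizes are consistent with 3 by 3 convolutions without padding followed by 2 by 2 average pooling of stride 2, which can be verified by a short shape trace (this is our reading of the architecture, not a specification from the text):

```python
def conv3_valid(n):   # 3x3 convolution, no padding, stride 1
    return n - 2

def avgpool2(n):      # 2x2 average pooling, stride 2
    return n // 2

n = 28
sizes = []
for has_pool in (True, True, False):   # the third block omits pooling
    n = conv3_valid(n)
    if has_pool:
        n = avgpool2(n)
    sizes.append(n)
print(sizes)  # [13, 5, 3]
```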
We use the same CNN architecture for all of the regression and classification problems. We tested the procedure on different functional types, and we begin by discussing our findings for seven case studies of functional data.
## 3 Results
In this section, we discuss various regression and classification problems of functional data with and without random noise. We use randomly generated values for the functional parameters from specified ranges to produce training, validation, and testing data sets of sizes 1000, 100, and 100, respectively. We use \(n=100\) data points for each curve. In the case of width estimation, height estimation, and number of peaks classification, we use 10000 curves for training, and we use \(n=1000\) functional data points for each curve. This would ensure that there will be a wide variety of heights and widths and enough peaks for each classification.
### Regression Problems
#### 3.1.1 Exponential Function
Let the exponential curve be \(y=\exp(\omega x)\) with parameter \(\omega\) representing the rate at which \(y\) increases or decreases. The first task is to use our method to estimate \(\omega\). See Figure 2 for the curve of \(y=\exp(-0.27x)\) with and without noise as an example and the corresponding 28 by 28 image. For training, validation, and testing data, the parameter \(\omega\) is sampled from a uniform distribution over the interval \([-1,1]\) for 1000, 100, and 100 times, respectively.
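Generating such a training set can be sketched as follows. The \(x\)-interval \([0,5]\) is an illustrative assumption; the text does not specify it.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_exp_dataset(n_curves, n_points=100, sigma=0.0):
    """Sample curves y = exp(omega * x) + sigma * z with omega ~ U(-1, 1)
    and z standard normal (the x-interval [0, 5] is an assumption)."""
    x = np.linspace(0, 5, n_points)
    omegas = rng.uniform(-1, 1, size=n_curves)
    noise = sigma * rng.standard_normal((n_curves, n_points))
    curves = np.exp(omegas[:, None] * x[None, :]) + noise
    return curves, omegas

train_y, train_w = make_exp_dataset(1000)          # noise-free training curves
val_y, val_w = make_exp_dataset(100, sigma=0.1)    # noisy validation curves
```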
Figure 1: Diagram and procedure proposed for the regression and classification problems.
Figure 2: (a) The curve of \(y=\exp(\omega x)\) when \(\omega=-.27\). (b) The 28 by 28 image that corresponds to the curve in (a). (c) The curve of \(y=\exp(\omega x)+\sigma z\) when \(\omega=-.27\) and \(\sigma=.1\) where \(z\) is a standard normal random variable. (d) The 28 by 28 image that corresponds to the curve in (c). (e) The curve of \(y=\exp(\omega x)+\sigma z\) when \(\omega=-.27\) and \(\sigma=1\) where \(z\) is a standard normal random variable. (f) The 28 by 28 image that corresponds to the curve in (e).
Figure 3 shows the predicted values versus the estimated value of the exponential function parameter that closely follows the diagonal line with no intercept and slope of one. Table 1 shows a strong diagonal linear relationship between the true and predicted values of the rate.
Figures 4 and 5 show example curves of \(y=\sin(\omega x)\) and \(y=\cos(\omega x)\) with and without noise, namely \(y=\sin(1.06x)\) and \(y=\cos(0.96x)\), respectively. For training, validation, and testing data, the parameter \(\omega\) is sampled uniformly from the interval \([0,3]\) for 1000, 100, and 100 times, respectively.
Figure 4: (a) The curve of \(y=\sin(\omega x)\) when \(\omega=1.06\). (b) The 28 by 28 image that corresponds to the curve in (a). (c) The curve of \(y=\sin(\omega x)+\sigma z\) when \(\omega=1.06\) and \(\sigma=.1\) where \(z\) is a standard normal random variable. (d) The 28 by 28 image corresponds to the curve in (c). (e) The curve of \(y=\sin(\omega x)+\sigma z\) when \(\omega=1.06\) and \(\sigma=1\) where \(z\) is a standard normal random variable. (f) The 28 by 28 image that corresponds to the curve in (e).
Figure 5: (a) The curve of \(y=\cos(\omega x)\) when \(\omega=.96\). (b) The 28 by 28 image that corresponds to the curve in (a). (c) The curve of \(y=\cos(\omega x)+\sigma z\) when \(\omega=.96\) and \(\sigma=.1\) where \(z\) is a standard normal random variable. (d) The 28 by 28 image that corresponds to the curve in (c). (e) The curve of \(y=\cos(\omega x)+\sigma z\) when \(\omega=.96\) and \(\sigma=1\) where \(z\) is a standard normal random variable. (f) The 28 by 28 image that corresponds to the curve in (e).
Figure 6 shows the predicted values versus the estimated value of the parameter of the sine and cosine functions that closely follow the diagonal line without an intercept and slope of one. Tables 2 and 3 show a strong diagonal linear relationship between the true and predicted bandwidth values.
#### 3.1.3 Estimation of the Height and Width of Peaks of Functions
To produce a number of peaks with different heights and widths, we used a mixture of Gaussian curves; see Figure 7. The maximum height of a curve is the height of the peak at the global maximum of the curve. The width of a peak is the horizontal distance between the points where the curve crosses the level at half of that peak's prominence. It is important to note that the curves need to be normalized by the highest peak of all the generated curves so that the grayscale values are representative of all of the heights. Otherwise, the highest grayscale value of 1 will be assigned to the peak height of each curve, and the model will thus not train properly. On the other hand, for the width estimation and peak detection (see the peak classification section), we use local normalization, since they do not necessarily depend on the height.

Figure 6: Results of the sine function regression using the test dataset without noise (a), with noise of magnitude \(\sigma=.1\) (c), and with noise of magnitude \(\sigma=1\) (e). Results of the cosine function regression using the test dataset without noise (b), with noise of magnitude \(\sigma=.1\) (d), and with noise of magnitude \(\sigma=1\) (f).
To generate the curves, we use a mixture of Gaussian kernels given by \(y=G(x)=\sum_{k=1}^{2}H_{k}\exp(-(\frac{x-P_{k}}{W_{k}})^{2})\) for \(x\in[0,50]\). Then the heights \(H_{k}\) are randomly and uniformly selected from \([0,2200]\). The shape parameters \(W_{k}\) are randomly generated using \(W_{k}=\lfloor 50\cdot U_{1}+1\rfloor\) where \(U_{1}\) is a uniformly distributed random variable on [0,1]. Similarly, position parameters are randomly generated using \(P_{k}=\lfloor 50\cdot U_{2}+1\rfloor\) where \(U_{2}\) is a uniformly distributed random variable over [0,1]. See Figure 7 for an example of a mixture of Gaussian kernels with and without random noise.
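The sampling scheme above can be sketched in Python (a hypothetical re-implementation of the generation described in the text; the original code was written in MATLAB):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_gaussian_mixture(n_points=500):
    """Sample a two-component Gaussian mixture curve G(x) on [0, 50],
    following the generation scheme in the text."""
    x = np.linspace(0.0, 50.0, n_points)
    y = np.zeros_like(x)
    for _ in range(2):
        H = rng.uniform(0.0, 2200.0)             # peak height H_k
        W = np.floor(50.0 * rng.uniform() + 1)   # shape (width) parameter W_k
        P = np.floor(50.0 * rng.uniform() + 1)   # peak position P_k
        y += H * np.exp(-(((x - P) / W) ** 2))
    return x, y
```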
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Case & Correlation Coefficient (\(r\)) & Intercept (p-value) & Slope (p-value) \\ \hline Without Noise & \(\approx 1\) & 0.0015 (0.8306) & 0.9954 (0.9121) \\ \hline With Noise (\(\sigma=0.1\)) & \(\approx 1\) & 0.0171 (0.0836) & 0.9911 (0.4942) \\ \hline With Noise (\(\sigma=1\)) & \(\approx 1\) & 0.0011 (0.9649) & 1.0108 (0.2765) \\ \hline \end{tabular}
\end{table}
Table 2: Correlation Coefficient, Intercept, and Slope for Sine Data with P-values
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Case & Correlation Coefficient (\(r\)) & Intercept (p-value) & Slope (p-value) \\ \hline Without Noise & \(\approx 1\) & -0.0059 (0.3546) & 0.9979 (0.5825) \\ \hline With Noise (\(\sigma=0.1\)) & \(\approx 1\) & 0.0143 (0.3497) & 1.0045 (0.6084) \\ \hline With Noise (\(\sigma=1\)) & \(\approx 1\) & 0.0639 (0.0305) & 0.9647 (0.0259) \\ \hline \end{tabular}
\end{table}
Table 3: Correlation Coefficient, Intercept, and Slope for Cosine Data with P-values
Figure 8 shows that the normalized width predictions closely follow the actual
Figure 7: (a) The curve is an example of a mixture of Gaussians \(G(x)\). (b) The 28 by 28 image that corresponds to the curve in (a). (c) The curve of \(G(x)+\sigma z\) when \(\sigma=.1\) where \(z\) is a standard normal random variable. (d) The 28 by 28 image that corresponds to the curve in (c). (e) The curve of \(G(x)+\sigma z\) when \(\sigma=1\) where \(z\) is a standard normal random variable. (f) The 28 by 28 image that corresponds to the curve in (e).
values. Table 4 gives a strong indication that the widths of the peaks are predicted accurately. Table 5, supported by Figure 9, however, shows that the CNN overestimates the heights on average (slopes significantly greater than one) unless the data contains more noise.
Figure 8: Results of the maximum width regression using the test dataset without noise (a), with noise of magnitude \(\sigma=.1\) (b), and with noise of magnitude \(\sigma=1\) (c).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Case & Correlation Coefficient (\(r\)) & Intercept (p-value) & Slope (p-value) \\ \hline Without Noise & 0.974 & -0.0017 (0.6247) & 1.0057 (0.4453) \\ \hline With Noise (\(\sigma=0.1\)) & 0.980 & 0.0022 (0.4370) & 0.9964 (0.5736) \\ \hline With Noise (\(\sigma=1\)) & 0.977 & -0.0071 (0.0178) & 1.0138 (0.0463) \\ \hline \end{tabular}
\end{table}
Table 4: Correlation Coefficient, Intercept, and Slope for Width Estimation Data with P-values
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Case & Correlation Coefficient (\(r\)) & Intercept (p-value) & Slope (p-value) \\ \hline Without Noise & 0.951 & -0.0169 (0.0013) & 1.1195 (0.0000) \\ \hline With Noise (\(\sigma=0.1\)) & 0.941 & 0.0084 (0.1244) & 1.1044 (0.0000) \\ \hline With Noise (\(\sigma=1\)) & 0.932 & -0.0057 (0.3473) & 1.0327 (0.0105) \\ \hline \end{tabular}
\end{table}
Table 5: Correlation Coefficient, Intercept, and Slope for Height Estimation Data with P-values
Figure 9: Results of the maximum height regression using the test dataset without noise (a), with noise of magnitude \(\sigma=.1\) (b), and with noise of magnitude \(\sigma=1\) (c).
### Classification Problems
In this subsection, we examine the capabilities of the CNN in functional data classification. For this type of problem, the same CNN architecture is used, except that after the dropout layer, the fully connected layer has a number of hidden nodes equal to the number of classes, followed by a softmax layer and a classification layer.
#### 3.2.1 Increasing versus Decreasing Curves
We use the CNN to classify the monotonicity of curves. Curves \(y=e^{w_{1}(x-w_{2})}\) are used to generate increasing or decreasing exponential curves for the training, validation, and test datasets; see Figure 10. We use the random variables \(w_{1}=\mathrm{sign}(U_{1}-.5)\) and \(w_{2}=2U_{2}+2.5\), where \(U_{1}\) and \(U_{2}\) are uniformly distributed random variables on [0,1]. We found that the classification accuracy on the functional test data is 100%.
Figure 10: (a) The curve of \(y=e^{-(x-2)}\). (b) The 28 by 28 image that corresponds to the curve in (a). (c) The curve of \(e^{-(x-2)}+\sigma z\) when \(\sigma=.1\), where \(z\) is a standard normal random variable. (d) The 28 by 28 image that corresponds to the curve in (c). (e) The curve of \(e^{-(x-2)}+\sigma z\) when \(\sigma=1\), where \(z\) is a standard normal random variable. (f) The 28 by 28 image that corresponds to the curve in (e).
#### 3.2.2 Convex versus Concave Curves
We also examined the ability of the CNN to classify the curvature of curves as convex or concave. Curves \(y=w_{1}(x-w_{2})^{2}\) are used to generate convex and concave curves for the training, validation, and test datasets; see Figure 11. We use the random variables \(w_{1}=\mathrm{sign}(U_{1}-.5)\) and \(w_{2}=2U_{2}+2.5\), where \(U_{1}\) and \(U_{2}\) are uniformly distributed random variables on [0,1]. We found that the classification accuracy on the functional test data is 100%.
Figure 11: (a) The curve of \(y=(x-2)^{2}\). (b) The 28 by 28 image that corresponds to the curve in (a). (c) The curve of \((x-2)^{2}+\sigma z\) when \(\sigma=.1\), where \(z\) is a standard normal random variable. (d) The 28 by 28 image that corresponds to the curve in (c). (e) The curve of \((x-2)^{2}+\sigma z\) when \(\sigma=1\), where \(z\) is a standard normal random variable. (f) The 28 by 28 image that corresponds to the curve in (e).
#### 3.2.3 Exponential versus Algebraic Growth Curves
We examined the capabilities of the CNN in classifying curve growth as exponential (in the form \(y=e^{cx}\) for \(c>0\)) or algebraic (in the form \(y=x^{c}\) for \(c>0\)); see Figure 12. Training, validation, and test curves are generated using a random exponent \(c=3U+1\), where \(U\) is a uniformly distributed random variable on [0,1]. We found that the classification accuracy on the functional test data is 100%.
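The growth-curve generation can be sketched as follows (hypothetical Python; the sampling interval \([0,5]\) for \(x\) is our assumption, as the text does not specify it):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_growth_curve(n_points=200):
    """Sample an exponential (label 1) or algebraic (label 0) growth curve
    with a random exponent c = 3U + 1, as in the text."""
    x = np.linspace(0.0, 5.0, n_points)  # assumed x-range
    c = 3.0 * rng.uniform() + 1.0
    if rng.uniform() < 0.5:
        return np.exp(c * x), 1          # exponential growth y = e^(cx)
    return x ** c, 0                     # algebraic growth y = x^c
```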
#### 3.2.4 Finding the Number of Peaks of Curves
The same CNN architecture that is used to estimate the magnitude of the maximum height and the width of the peaks is used
Figure 12: (a) The curve of algebraic growth represented by \(y=x^{3}\). (b) The 28 by 28 image that corresponds to the curve in (a). (c) The curve of \(y=x^{3}+\sigma z\) when \(\sigma=.1\), where \(z\) is a standard normal random variable. (d) The 28 by 28 image that corresponds to the curve in (c). (e) The curve of \(y=x^{3}+\sigma z\) when \(\sigma=1\), where \(z\) is a standard normal random variable. (f) The 28 by 28 image that corresponds to the curve in (e).
for classifying the number of peaks, as mentioned in Subsection 3.1.3. The curves are generated with the same scheme as in Subsection 3.1.3. In this case, we use local normalization; that is, the highest grayscale value of each particular curve is 1 and the lowest is 0. This helps to identify the peaks: since we are not interested in the height, the grayscale values need not be proportional to it. Zero peaks are possible if the randomly generated positions fall outside the interval [0,50]. We found that the classification accuracy on the functional test data is 98.0% for noise-free data, 97.2% for noisy data with magnitude \(\sigma=.1\), and 96.2% for noisy data with magnitude \(\sigma=1\).
## 4 Applications to Dynamical Systems
In this section, we apply a CNN to dynamical systems. First, we show how a CNN can estimate the Lyapunov exponent from the curve of one of the three variables solving the Lorenz system. Second, we show that a CNN can be used to estimate transmission rates from epidemic curves. Third, we use a CNN to test the similarity and dissimilarity of drug dissolution profiles.
### Estimating Lyapunov Exponent
The study of human motion has been a focus of interest in the medical field to determine which exercises and ranges of motion are stable. Different approaches have been taken to study such stability. While (Stergiou & Decker 2011) pointed out the link between movement variability and stability, (Wilson et al. 2008) emphasized that variability should not be confused with instability, as it can be observed in both healthy and unhealthy subjects. (Dingwell & Cusumano 2000) helped develop the standard procedure for analyzing stability using Lyapunov exponents estimated by the Rosenstein method (Rosenstein et al. 1993). Lyapunov exponents measure the average exponential divergence between nearest neighboring trajectories. The more unstable the system, the higher the value of the Lyapunov exponent. An alternative parameter is the mean of the absolute values of the Floquet multipliers. (Hurmuzlu & Basdogan 1994) were the first to use Floquet multipliers extensively for this purpose. Floquet multipliers are the eigenvalues of the Jacobian matrix that measure the separation between orbits of the system (Dingwell & Kang 2006) and can be calculated using Poincaré maps. If the mean of the magnitudes of the eigenvalues is less than 1, the orbits are considered stable.
To examine our method for estimating the Lyapunov exponent, we use the prototype of chaotic systems, the Lorenz system with parameters \(\alpha,\beta,\rho\), defined by the system of ordinary differential equations:
\[\frac{dx}{dt}=\alpha(y-x),\qquad\frac{dy}{dt}=x(\rho-z)-y,\qquad\frac{dz}{dt}=xy-\beta z\]
Initial values are \(x(0)=1\), \(y(0)=1\), and \(z(0)=1\). Figure 13 shows an example image of the attractor with \(\alpha=2.8029\), \(\beta=1.1114\), and \(\rho=11.9620\).
We randomly simulate the parameter values \(\alpha=10\,U_{1}\), \(\beta=8/3\,U_{2}\), and \(\rho=20\,U_{3}\), where the \(U_{i}\) are independent uniformly distributed random variables on \([0,1]\) for \(i=1,2,3\). We ran the Lorenz system simulations over the time interval \([0,1]\) using the hybrid order 4 and 5 Runge-Kutta numerical method in MATLAB. We use the component \(x\) only for the training, validation, and testing of the CNN; see Figure 14.
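A minimal sketch of this simulation in Python, using a fixed-step fourth-order Runge-Kutta integrator as a stand-in for MATLAB's adaptive order 4/5 solver:

```python
import numpy as np

def lorenz_x_curve(alpha, beta, rho, t_final=1.0, n_steps=2000):
    """Integrate the Lorenz system from (1, 1, 1) with fixed-step RK4
    and return the x component sampled along [0, t_final]."""
    def rhs(s):
        x, y, z = s
        return np.array([alpha * (y - x), x * (rho - z) - y, x * y - beta * z])
    h = t_final / n_steps
    s = np.array([1.0, 1.0, 1.0])
    xs = [s[0]]
    for _ in range(n_steps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * h * k1)
        k3 = rhs(s + 0.5 * h * k2)
        k4 = rhs(s + h * k3)
        s = s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xs.append(s[0])
    return np.linspace(0.0, t_final, n_steps + 1), np.array(xs)
```

The returned \(x(t)\) curve is what would then be converted into a 28 by 28 image for the CNN.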
Figure 13: An example of Lorenz attractor at \(\alpha=2.8029\), \(\beta=1.1114\), and \(\rho=11.9620\) without noise (a), with noise of magnitude \(\sigma=.1\) (b), and with noise of magnitude \(\sigma=1\) (c).
Figure 15 shows the predicted versus true values of the Lyapunov
Figure 14: (a) The curve of component \(x\) of the Lorenz attractor. (b) The 28 by 28 image that corresponds to the curve in (a). (c) The curve of \(x+\sigma w\) when \(\sigma=.1\), where \(w\) is a standard normal random variable. (d) The 28 by 28 image that corresponds to the curve in (c). (e) The curve of \(x+\sigma w\) when \(\sigma=1\), where \(w\) is a standard normal random variable. (f) The 28 by 28 image that corresponds to the curve in (e).
exponents, which closely follow the diagonal line with zero intercept and unit slope. Table 6 shows a strong diagonal linear relationship between the true and predicted values of the Lyapunov exponents.
We also tested the capability of a CNN model trained with noise-free data to estimate the Lyapunov exponent from noisy data. Figure 16 and Table 7 show relatively good results.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Case & Correlation Coefficient (\(r\)) & Intercept (p-value) & Slope (p-value) \\ \hline Without Noise & 0.916 & -0.0146 (0.8550) & 0.9600 (0.3477) \\ \hline With Noise (\(\sigma=0.1\)) & 0.949 & 0.0109 (0.8734) & 1.0908 (0.0152) \\ \hline With Noise (\(\sigma=1\)) & 0.910 & -0.1140 (0.0900) & 0.9369 (0.1476) \\ \hline \end{tabular}
\end{table}
Table 6: Correlation Coefficient, Intercept, and Slope for Noise-Free Testing Data with P-values
Figure 15: Results of Lyapunov exponent estimation using the test dataset without noise (a), with noise of magnitude \(\sigma=.1\) (b), and with noise of magnitude \(\sigma=1\) (c).
A speed test was also performed using 10,000 test curves: estimating the Lyapunov exponents with the CNN was approximately 600 times faster (2.7628 seconds) than with the MATLAB implementation of the Rosenstein method (1692.6 seconds).
### Estimating the Transmission Rates from Epidemic Curves
Estimation of transmission rates and exponential growth rates (see Subsection 3.1.1) is important for emerging epidemics (Boonpatcharan
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Case & Correlation Coefficient (\(r\)) & Intercept (p-value) & Slope (p-value) \\ \hline Without Noise & 0.944 & 0.0567 (0.3275) & 0.8856 (0.0004) \\ \hline With Noise (\(\sigma=0.1\)) & 0.928 & -0.1326 (0.0733) & 0.9871 (0.7472) \\ \hline With Noise (\(\sigma=1\)) & 0.867 & -0.3390 (0.0015) & 1.0815 (0.1970) \\ \hline \end{tabular}
\end{table}
Table 7: Correlation Coefficient, Intercept, and Slope for Lyapunov Testing Data with P-values when CNN is Trained with Noise-Free Data.
Figure 16: Results of Lyapunov exponent estimation when the CNN is trained only using the noise-free data for the test dataset without noise (a), with noise of magnitude \(\sigma=.1\) (b), and with noise of magnitude \(\sigma=1\) (c).
the beginning of COVID-19 (Tuite & Fisman 2020). Some epidemics grow algebraically rather than exponentially (Chowell et al. 2015, Kolebaje et al. 2022), so it is also helpful to discern them through classification; see Subsection 3.2.3. One of the main goals in epidemiology is to estimate the basic reproduction number \(R_{0}\), which is almost always proportional to the transmission rate \(\beta\). If \(R_{0}<1\), then the disease dies out; otherwise, there is a chance that it will become endemic.
We use a susceptible-infected-recovered (SIR) model to produce epidemic curves with different transmission rates \(\beta\) and to estimate that parameter. The disease dynamics of the SIR compartmental model follow the system of differential equations
\[\frac{dS}{dt} = \mu-\beta\,S\,I-\mu\,S\] \[\frac{dI}{dt} = \beta\,S\,I-(\mu+\gamma)\,I\] \[\frac{dR}{dt} = \gamma\,I-\mu\,R\]
where \(S\), \(I\), and \(R\) are the proportion of susceptible, infected, and recovered individuals in the population at time \(t\), such that \(S+I+R=1\). Initial values are \(S(0)=.99\), \(I(0)=.01\) and \(R(0)=0\). The parameter \(\mu\) is the per capita birth/death rate, \(\beta\) is the transmission rate, and \(\gamma\) is the recovery rate. The basic reproduction number in the above SIR model is given by \(R_{0}=\beta/(\mu+\gamma)\).
We assume the values \(\mu=1/(365\times 50)\) days\({}^{-1}\) and \(\gamma=1/28\) days\({}^{-1}\), with \(\beta\) randomly selected from a uniform distribution over \((.01,1)\) to reflect a basic reproduction number in the range \((.28,28)\). The SIR model simulations are run for 50 days using the hybrid order 4 and 5 Runge-Kutta numerical method in MATLAB. See Figure 17 for a simulated epidemic curve \(I\).
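The SIR simulation can be sketched the same way (hypothetical Python; a fixed-step RK4 integrator replaces MATLAB's adaptive solver):

```python
import numpy as np

def sir_infected_curve(beta, mu=1.0 / (365 * 50), gamma=1.0 / 28,
                       t_final=50.0, n_steps=5000):
    """Simulate the SIR model of the text with fixed-step RK4 and
    return the infected fraction I(t) from (S, I, R) = (.99, .01, 0)."""
    def rhs(u):
        S, I, R = u
        return np.array([mu - beta * S * I - mu * S,
                         beta * S * I - (mu + gamma) * I,
                         gamma * I - mu * R])
    h = t_final / n_steps
    u = np.array([0.99, 0.01, 0.0])
    infected = [u[1]]
    for _ in range(n_steps):
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * h * k1)
        k3 = rhs(u + 0.5 * h * k2)
        k4 = rhs(u + h * k3)
        u = u + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        infected.append(u[1])
    return np.linspace(0.0, t_final, n_steps + 1), np.array(infected)
```

Each sampled \(\beta\) produces one epidemic curve \(I(t)\), which is then converted into an image for training.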
Figure 18 and Table 8 show strong prediction of transmission rates.
Figure 17: (a) A simulated curve of \(I\). (b) The 28 by 28 image that corresponds to the curve in (a). (c) The curve of \(I+\sigma z\) when \(\sigma=.1\), where \(z\) is a standard normal random variable. (d) The 28 by 28 image that corresponds to the curve in (c). (e) The curve of \(I+\sigma z\) when \(\sigma=1\), where \(z\) is a standard normal random variable. (f) The 28 by 28 image that corresponds to the curve in (e).
### Detecting Similarity of Drug Dissolution Profiles
The problem of characterizing drug release or dissolution profiles is important for the pharmaceutical industry. Regulatory guidelines seek to ensure consistent drug dissolution characteristics prior to approval; see, e.g., (Vranic et al. 2002, Pourmohamad & Ng 2023). Many statistical approaches have been developed to test the similarity between dissolution curves or profiles, including cluster analysis, decision trees, and linear models; see, e.g., (Costa & Sousa Lobo 2001, Maggio et al. 2008, Enachescu 2010,
Figure 18: Results of SIR transmission-rate regression using the test dataset without noise (a), with noise of magnitude \(\sigma=.1\) (b), and with noise of magnitude \(\sigma=1\) (c).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Case & Correlation Coefficient (\(r\)) & Intercept (p-value) & Slope (p-value) \\ \hline Without Noise & 0.999 & -0.0023 (0.2573) & 1.0048 (0.1642) \\ \hline With Noise (\(\sigma=0.1\)) & 0.997 & -0.0066 (0.1318) & 1.0177 (0.0229) \\ \hline With Noise (\(\sigma=1\)) & 0.979 & 0.0005 (0.9686) & 0.9920 (0.7029) \\ \hline \end{tabular}
\end{table}
Table 8: Correlation Coefficient, Intercept, and Slope for Transmission Rate Testing Data with P-values
Paixao et al. 2017, Abend et al. 2023, Pourmohamad & Ng 2023). That also includes nonparametric measures such as the two measures adopted by the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA),
\[f_{1}=\frac{\sum_{i=1}^{n}|R_{i}-S_{i}|}{\sum_{i=1}^{n}R_{i}}\times 100\%\]
and
\[f_{2}=50\log_{10}\left(\left[1+\frac{1}{n}\sum_{i=1}^{n}(R_{i}-S_{i})^{2} \right]^{-0.5}\times 100\right)\]
which are widely used to detect dissimilarity and similarity, respectively, between two curves \(\{(t_{i},R_{i}):i=1,2,\ldots,n\}\) and \(\{(t_{i},S_{i}):i=1,2,\ldots,n\}\). If \(f_{1}\) is between 0 and 15 and \(f_{2}\) is between 50 and 100, then the two curves are considered similar; see, for example, (Costa & Sousa Lobo 2001, Pourmohamad & Ng 2023) for a complete set of models and measures, as well as FDA & EMA guidelines.
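The two factors are straightforward to compute directly from their definitions; a sketch in Python (the helper name `similar` is ours):

```python
import numpy as np

def f1(R, S):
    """FDA/EMA difference factor between reference R and test S profiles."""
    R, S = np.asarray(R, float), np.asarray(S, float)
    return 100.0 * np.sum(np.abs(R - S)) / np.sum(R)

def f2(R, S):
    """FDA/EMA similarity factor between reference R and test S profiles."""
    R, S = np.asarray(R, float), np.asarray(S, float)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + np.mean((R - S) ** 2)))

def similar(R, S):
    """Profiles are deemed similar when f1 is in [0, 15] and f2 in [50, 100]."""
    return 0.0 <= f1(R, S) <= 15.0 and 50.0 <= f2(R, S) <= 100.0
```

Note that \(f_2\) attains its maximum of 100 for identical profiles and decreases as the profiles diverge.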
There are several mathematical models of drug dissolution; an important one is the logistic curve \(f=\frac{100}{1+\exp(-c(t-6))}\) for some release rate \(c>0\); see (Pourmohamad & Ng 2023). We follow (Pourmohamad & Ng 2023) by sampling the logistic curve at the time points \(t=5,10,15,20,30,60,90\), and \(120\) minutes. To generate a number of similar and dissimilar profiles, we use release rates \(c_{1}=.01+.001z\) and \(c_{2}=.03+.001z\), where \(z\) is generated randomly and independently from the standard normal distribution. The simulated curves are shown in Figure 19 for (a) dissimilar and (b) similar curves. The figure also shows the histograms of \(f_{1}\) and \(f_{2}\) for the dissimilar (c) and similar (d) curves in the training set, and for the dissimilar (e) and similar (f) curves in the test set.
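The generation of similar and dissimilar logistic profiles described above can be sketched as follows (hypothetical Python; the function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.array([5.0, 10, 15, 20, 30, 60, 90, 120])  # sampling times (minutes)

def logistic_profile(c):
    """Logistic dissolution curve evaluated at the sampling times."""
    return 100.0 / (1.0 + np.exp(-c * (t - 6.0)))

def random_pair(make_similar=True):
    """Two profiles drawn from the same (similar) or different
    (dissimilar) release-rate family, as described in the text."""
    c_ref = 0.01 + 0.001 * rng.standard_normal()
    c_test = (0.01 if make_similar else 0.03) + 0.001 * rng.standard_normal()
    return logistic_profile(c_ref), logistic_profile(c_test)
```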
Figure 19: (a) Example of dissimilar curves. (b) Examples of similar curves. (c) Training data histogram of \(f_{1}\) and \(f_{2}\) for the dissimilar curves. (d) Training data histogram of \(f_{1}\) and \(f_{2}\) for the similar curves. (e) Test data histogram of \(f_{1}\) and \(f_{2}\) for the dissimilar curves. (f) Test data histogram of \(f_{1}\) and \(f_{2}\) for the similar curves.
To detect the similarity of any two drug dissolution curves, we use a Siamese CNN, in which the final layers of two parallel CNNs are inputs to a cross-entropy measure over the two input images. Some minor changes are made to the overall CNN architecture: we avoid batch normalization and change the average pooling layer to a max pooling layer. At the end, there is a dense layer with \(28^{2}\) hidden nodes. The weights are initialized by sampling from a normal distribution with mean zero and standard deviation \(0.01\). Following (Koch et al., 2015), we use cross-entropy in the output layer to identify the similarity between the images of the dissolution curves. The test results are shown in the confusion matrix in Figure 20.
It is important to note that the number of hidden nodes in the last layer helps with convergence. We noticed that it is possible to use one hidden node; however, convergence is then not consistently guaranteed. The number of nodes in the hidden layer should therefore be considered an important hyperparameter. Furthermore, below \(28^{2}\) nodes, there was no considerable change in convergence time.
## 5 Real-Life Application: Detecting Parkinson's Disease
Parkinson's disease is a progressive neurodegenerative disorder that results in motor and non-motor symptoms such as tremors, rigidity, and impaired movement control. Detecting Parkinson's disease involves a thorough physical examination to assess motor skills, reflexes, muscle strength, and coordination, and searching for characteristic signs of Parkinson's disease.
Our technique could be successfully applied to detect Parkinson's disease using motor tests. We use a dataset introduced by (Isenkul et al., 2014; Isenkul et al., 2017) in which 62 Parkinson's patients and 15 healthy subjects draw a spiral curve on a tablet. The original test was divided into three parts: a static test, a dynamic test, and a circular motion test. In the static test, subjects draw a certain fixed shape. In the
Figure 20: The confusion matrix shows that all pairs are predicted correctly, with no false positives or false negatives, when comparing 50 similar and 50 dissimilar pairs of curves.
dynamic test, the subjects draw a shape that disappears and appears at certain times, so the subjects need to memorize their location and continue drawing. In the circular motion test, subjects draw a circle at a red reference point on the tablet. Here, we use the data set for the static drawing test of Parkinson's patients by (Isenkul et al., 2017), which contains at each time stamp the positions \(X\), \(Y\), \(Z\) of the drawing, the pressure that we call \(P\), and the grip angle that we call \(A\), see Figure 21.
(Akyol, 2017) used three ML/DL techniques, random forest, logistic regression, and an artificial neural network, to classify subjects into Parkinson's and non-Parkinson's. While the original data set had only 77 subjects, (Akyol, 2017) used tens of thousands of instances. Among the three ML/DL techniques, the artificial neural network showed 100% precision in all cases.
(Kamble et al., 2021) used logistic regression, random forest, support vector machines, and K-Nearest Neighbors using unbalanced data from 25 patients and 15 controls. Logistic regression was the best model using the AUC. With the same idea, (Thakur et al., 2023) used logistic regression, a support vector machine, and a restricted Boltzmann machine followed by a neural network for the same data set. The restricted Boltzmann machine with the neural network model achieved an accuracy of 95%. See also a review on the use of ML/DL techniques to detect Parkinson's disease (Mei et al., 2021).
Due to the small size of the data set, we use augmentation (see (James et al., 2013) for more details on data augmentation) to create larger training and testing data sets. We made tensors in which the third dimension always contains the fixed signals \(X\) and \(Y\), to which we add combinations of \(Z\), the pressure measurement \(P\), and the grip angle \(A\). This gives 8 tensors of size 1024 by 1024 by 2 plus the number of variables in the chosen combination, which we compare to determine which combination gives the best results. Using MATLAB data augmentation functions (see (MATLAB, 2022) for more details), we made
Figure 21: (a) A spiral drawing of a normal subject. (b) A spiral drawing of a Parkinson’s patient.
random translations of \(X\) within \([-25,25]\), random reflections and translations of \(Y\) within \([-50,50]\), and random translations of \(Z\) within \([-.5,.5]\). We also performed random translations of the pressure within \([0,50]\) and of the angle within \([0,25]\). Four times more augmentation was performed for the control class, since there are only 15 control images compared to the 61 case images. With this augmentation, we prepared training data of 300 control images and 305 disease images, split into 80% for training and validation, with the remaining 20% used for testing. See examples of Parkinson's and control patients in Figure 22.
Our method gave 100% validation and testing accuracy for all of the combinations of the features \(X\), \(Y\), \(Z\) coordinates, the pressure \(P\), and the grip
Figure 22: Curves with their respective image transformation. (a) Example of a control subject’s \(x\) spiral component time series. (b) The 28 by 28 image corresponds to the curve in (a). (c) Example of a Parkinson’s patient’s \(x\) spiral component time series. (d) The 28 by 28 image corresponds to the curve in (c).
angle \(A\). Even the simplest model, using \(X\) and \(Y\) only, gives the confusion matrix in Figure 23.
## 6 Discussion
In this paper, we tested our new method of functional data learning using convolutional neural networks (CNNs) on various examples and applications. The CNN performance was very close to perfect, as evidenced by the test curves of the functional cases in both regression and classification problems. More variation starts to appear in practical cases, such as the chaotic Lorenz attractor and the estimation of transmission rates from epidemic curves of the SIR system. We also found that training the CNN with noisy data improves its performance. These results show that the new method is robust to noise and can handle different kinds of functional data. While some of the p-values show slopes and intercepts that are statistically significantly different from one and zero, respectively, the estimates themselves are close to one and zero and the correlation coefficients are close to one, indicating an excellent effect size. This conflict might be the result of using large data sizes when testing the trained CNN.
On the practical side, a pre-trained CNN can be used in several of the applications. A pre-trained CNN could estimate Lyapunov exponents and assess the stability of some systems. For example, it could be used in the medical field to determine the stability of human motion or walking gait. This methodology provides a more practical approach in which, even with moderate noise, the CNN performs well and is roughly 600 times faster. As such, the measured data can be used as input without the need for filtering, since the CNN is robust to noise. A CNN pre-trained on epidemic curves can be used to estimate the transmission rate or directly estimate the basic reproduction number \(R_{0}\), and to discern exponential growth from algebraic growth.
For the classification problems, the new method gave an accuracy of 100% on the binary tasks and above 96% when counting peaks. We tested the new method for the classification of curves according to
Figure 23: The confusion matrix shows that all subjects are classified correctly, with no false positives or false negatives, when using the \(xy\) combination only.
their monotonicity and curvature, as well as their type of growth. Furthermore, we simulated drug dissolution profiles to train a Siamese CNN, which accurately determined their similarity and dissimilarity. Finally, using real-life data, a CNN trained with functional motion data from a few Parkinson's cases and even fewer controls discerned the cases with an accuracy of 100%.
## 7 Conclusion
In this paper, we presented a simple method to convert any curve into an image. Using convolutional neural networks (CNNs), we trained on such images, together with validation sets of images, in various regression and classification problems. The same technique could be used for regression and classification problems in gene analysis and other medical sciences. Other areas to explore are multiple-output problems in which several parameters are estimated, or in which classification and regression are combined to find the type of a curve and estimate its parameters. Extending the method to other types of kernels for producing the images might be a viable extension of the main idea in this paper. However, the presented technique might require large functional data sets, which we could not find at the time of writing this paper; in that case, we had to perform data augmentation. New methods for efficient functional synthetic data generation might be needed to handle small functional data learning problems.
## Acknowledgment
This work was supported by the U.S. Department of Defense Manufacturing Engineering Education Program (MEEP) under Award No. N00014-19-1-2728.
The authors also thank Salman Rahman and Harrison Arrubla for their early discussions and trials on this project.
## Data Availability Statement
This study did not involve the creation, collection, or generation of new data. The study mainly relied on publicly available data previously published.
## Conflict of interest
The authors declare that they have no conflict of interest.
## Code details
The codes used in this study can be accessed at [https://github.com/jesusgl86/FDAP01](https://github.com/jesusgl86/FDAP01). |
2303.13458 | Optimization Dynamics of Equivariant and Augmented Neural Networks | We investigate the optimization of neural networks on symmetric data, and
compare the strategy of constraining the architecture to be equivariant to that
of using data augmentation. Our analysis reveals that that the relative
geometry of the admissible and the equivariant layers, respectively, plays a
key role. Under natural assumptions on the data, network, loss, and group of
symmetries, we show that compatibility of the spaces of admissible layers and
equivariant layers, in the sense that the corresponding orthogonal projections
commute, implies that the sets of equivariant stationary points are identical
for the two strategies. If the linear layers of the network also are given a
unitary parametrization, the set of equivariant layers is even invariant under
the gradient flow for augmented models. Our analysis however also reveals that
even in the latter situation, stationary points may be unstable for augmented
training although they are stable for the manifestly equivariant models. | Oskar Nordenfors, Fredrik Ohlsson, Axel Flinth | 2023-03-23T17:26:12Z | http://arxiv.org/abs/2303.13458v4 | # Optimization Dynamics of Equivariant and Augmented Neural Networks
###### Abstract
We investigate the optimization of multilayer perceptrons on symmetric data. We compare the strategy of constraining the architecture to be equivariant to that of using augmentation. We show that, under natural assumptions on the loss and non-linearities, the sets of equivariant stationary points are identical for the two strategies, and that the set of equivariant layers is invariant under the gradient flow for augmented models. Finally, we show that stationary points may be unstable for augmented training although they are stable for the equivariant models.
## 1 Introduction
In machine learning, the general goal is to find 'hidden' patterns in data. However, there are sometimes symmetries in the data that are known a priori. Incorporating these manually should, heuristically, reduce the complexity of the learning task. In this paper, we are concerned with training networks, more specifically multilayer perceptrons (MLPs) as defined below, on data exhibiting symmetries that can be formulated as equivariance under a group action. A standard example is the case of translation invariance in image classification.
More specifically, we want to theoretically study the connections between two general approaches to incorporating symmetries in MLPs. The first approach is to construct equivariant models by means of _architectural design_. This framework, known as _Geometric Deep Learning_[4; 5], exploits the geometric origin of the group \(G\) of symmetry transformations by choosing the linear MLP layers and nonlinearities to be equivariant (or invariant) with respect to the action of \(G\). In other words, the symmetry transformations commute with each linear (or affine) map in the network, which results in an architecture which manifestly respects the symmetry \(G\) of the problem. One prominent example is the spatial weight sharing of convolutional neural networks (CNNs) which are equivariant under translations. Group equivariant convolution networks (GCNNs) [7; 26; 14] extends this principle to an exact equivariance under more general symmetry groups. The second approach is agnostic to model architecture, and instead attempts to achieve equivariance during training via _data augmentation_, which refers to the process of extending the training to include synthetic samples obtained by subjecting training data to random symmetry transformations.
Both approaches have their benefits and drawbacks. Equivariant models use parameters efficiently through weight sharing along the orbits of the symmetry group, but are difficult to implement and computationally expensive to evaluate in general. Data augmentation, on the other hand, is agnostic to the model structure and easy to adapt to different symmetry groups. However, the augmentation strategy is by no means guaranteed to achieve a model which is exactly invariant: the hope is that the model will 'automatically' infer invariances from the data, but there are few theoretical
guarantees. Also, augmentation generally entails an inefficient use of parameters and an increase in model complexity and training time.
In this paper we use representation theory to compare and analyze the dynamics of gradient descent training of MLPs adhering to either the equivariant or augmented strategy. In particular, we study their equivariant stationary points, i.e., the potential limit points of the training. The equivariant models (of course) have equivariant optima for their gradient flow and so are guaranteed to produce a trained model which respects the symmetry. However, the dynamics of linear networks under augmented training has previously not been exhaustively explored.
Our main contributions are as follows: First, we provide a technical statement (Lemma 4) about the augmented risk: We show that it can be expressed as the nominal risk averaged over the symmetry group acting on the layers of the MLP. Using this formulation of the augmented risk, we apply standard methods from the analysis of dynamical systems to the gradient descent for the equivariant and augmented models to show:
1. The equivariant subspace is invariant under the gradient flow of the augmented model (Corollary 1). In other words, if the augmented model is equivariantly initialized it will remain equivariant during training.
2. The set of equivariant stationary points for the augmented model is identical to that of the equivariant model (Corollary 2). In other words, compared to the equivariant approach, augmentation introduces no new equivariant stationary points, nor does it exclude existing ones.
3. The set of equivariant strict local minima for the augmented model is a subset of the corresponding set for the equivariant model (Proposition 3). In other words, the existence of a stable equivariant minimum is not guaranteed by augmentation.
In addition, we perform experiments on three different learning tasks, with different symmetry groups, and discuss the results in the context of our theoretical developments.
## 2 Related work
The group-theory based model for group augmentation we use here is heavily inspired by the framework developed in [6]. Augmentation and manifest invariance/equivariance have been studied from this perspective in a number of papers [18; 19; 22; 10]. More general models for data-augmentation have also been considered [9]. Previous work has mostly been concerned with so-called kernel and _feature-averaged_ models (see Remark 1 below), and in particular, fully general MLPs as we treat them here have not been considered. These works have furthermore mostly been concerned with proving statistical properties of the models, rather than studying their dynamics during training. An exception is [19], in which it is proven that in linear scenarios, the equivariant models are optimal, but little is known about more involved models.
In [16], the dynamics of training _linear equivariant networks_ (i.e., MLPs without nonlinearities) is studied. Linear networks are a simplified, but nonetheless popular, theoretical model for analysing neural networks [2]. In that paper, the authors analyse the implicit bias of training a linear neural network with one fully connected layer on top of an equivariant backbone using gradient descent. They also provide some numerical results for non-linear models, but no comparison to data augmentation is made.
Empirical comparisons of training equivariant and augmented non-equivariant models are common in the literature. Most often, the augmented models are considered as baselines for the evaluation of the equivariant models. More systematic investigations include [12; 23; 13]. Compared to previous work, our formulation differs in that the parameters of the augmented and equivariant models are defined on the same vector spaces, which allows us to make a stringent mathematical comparison.
## 3 Mathematical framework
Let us begin by setting up the framework. We let \(X\) and \(Y\) be vector spaces and \(\mathcal{D}(x,y)\) be a joint distribution on \(X\times Y\). We are concerned with training an MLP \(\Phi_{A}:X\to Y\) so that \(y\approx\Phi_{A}(x)\) for \((x,y)\sim\mathcal{D}\). The
MLP has the form
\[x_{0}=x,\quad x_{i+1}=\sigma_{i}(A_{i}x_{i}),\quad i\in[L]=\{0,\ldots,L-1\}, \quad\Phi_{A}(x)=x_{L}, \tag{1}\]
where \(A_{i}:X_{i}\to X_{i+1}\) are linear maps (layers) between (hidden) vector spaces \(X_{i}\) with \(X=X_{0}\) and \(Y=X_{L}\), and \(\sigma_{i}:X_{i+1}\to X_{i+1}\) are non-linearities. Note that \(A=(A_{i})_{i\in[L]}\) parameterizes the network since the non-linearities are assumed to be fixed. We denote the vector space of possible linear layers with \(\mathcal{L}=\prod_{i\in[L]}\operatorname{Hom}(X_{i},X_{i+1})\), where \(\prod\) refers to the direct product. To train the MLP, we optimize the _nominal risk_
\[R(A)=\mathbb{E}_{\mathcal{D}}(\ell(\Phi_{A}(x),y)),\]
where \(\ell:Y\times Y\to\mathbb{R}\) is a loss function, using gradient descent. In fact, to simplify the analysis, we will mainly study the _gradient flow_ of the model, i.e., set the learning rate to be infinitesimal.1
Footnote 1: Note that in practice training will probably involve an empirical risk and/or stochastic gradient descent, but the focus of our theoretical development is this idealized model.
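As a concrete illustration, the forward pass (1) can be sketched in a few lines of NumPy. This is a toy sketch only; the layer sizes, identity weight matrices, and ReLU non-linearity below are illustrative choices, not assumptions of the framework:

```python
import numpy as np

def mlp_forward(x, layers, nonlinearities):
    # x_0 = x; x_{i+1} = sigma_i(A_i x_i); Phi_A(x) = x_L, as in (1)
    for A, sigma in zip(layers, nonlinearities):
        x = sigma(A @ x)
    return x

relu = lambda v: np.maximum(v, 0.0)
layers = [np.eye(3), np.eye(3)]            # A = (A_0, A_1); identity maps for illustration
out = mlp_forward(np.array([1.0, -2.0, 3.0]), layers, [relu, relu])
```

The tuple `layers` plays the role of the parameter \(A=(A_i)_{i\in[L]}\in\mathcal{L}\); the non-linearities are fixed and only the layers are trained.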
### Representation theory and equivariance
Throughout the paper, we aim to make the MLP _equivariant_ towards a group of symmetry transformations of the space \(X\times Y\). That is, we consider a group \(G\) acting on the vector spaces \(X\) and \(Y\) through _representations_\(\rho_{X}\) and \(\rho_{Y}\), respectively. A representation \(\rho\) of a group \(G\) on a vector space \(V\) is a map from the group \(G\) to the space of invertible linear maps \(\operatorname{End}(V)\) on \(V\) that respects the group operation, i.e. \(\rho(gh)=\rho(g)\rho(h)\) for all \(g,h\in G\). The representation \(\rho\) is unitary if \(\rho(g)\) are unitary for all \(g\in G\). A function \(f:X\to Y\) is called equivariant with respect to \(G\) if \(f\circ\rho_{X}(g)=\rho_{Y}(g)\circ f\) for all \(g\in G\). We denote the space of equivariant _linear_ maps \(U\to V\) by \(\operatorname{Hom}_{G}(U,V)\).
Let us recall some important examples that will be used throughout the paper.
**Example 1**.: _A simple, but important, representation is the trivial one, \(\rho^{\operatorname{triv}}(g)=\operatorname{id}\) for all \(g\in G\). If we equip \(Y\) with the trivial representation, the equivariant functions \(f:X\to Y\) are the invariant ones._
**Example 2**.: _The canonical action of the permutation group \(S_{N}\) on \(\mathbb{R}^{N}\) is defined through_
\[[\rho^{\operatorname{perm}}(\pi)v]_{i}=v_{\pi^{-1}(i)},\quad i\in[N],\]
_i.e., an element acts via permuting the elements of a vector. This action induces an action on the tensor space \((\mathbb{R}^{N})^{\otimes k}=\mathbb{R}^{N}\otimes\mathbb{R}^{N}\otimes \cdots\otimes\mathbb{R}^{N}\):_
\[[\rho^{\operatorname{perm}}(\pi)T]_{i_{0},\ldots,i_{k-1}}=T_{\pi^{-1}(i_{0}), \ldots,\pi^{-1}(i_{k-1})}.\]
_which is important for graphs. For instance, when applied to \(\mathbb{R}^{N}\otimes\mathbb{R}^{N}\), it encodes the effect a re-ordering of the nodes of a graph has on the adjacency matrix on the graph._
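Both actions can be realized by index manipulation in NumPy. In this sketch, `pi` is an array with `pi[j]` \(=\pi(j)\), and the adjacency-matrix case is the \(k=2\) tensor action:

```python
import numpy as np

def perm_action_vec(pi, v):
    # [rho^perm(pi) v]_i = v_{pi^{-1}(i)}, i.e. position pi[j] receives v[j]
    out = np.empty_like(v)
    out[pi] = v
    return out

def perm_action_adj(pi, T):
    # induced action on R^N (x) R^N: permute both indices;
    # for an adjacency matrix this is a relabeling of the graph's nodes
    out = np.empty_like(T)
    out[np.ix_(pi, pi)] = T
    return out
```

Note that the relabeled graph has the same degree sequence as the original, as a node relabeling must.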
**Example 3**.: _The group \(\mathbb{Z}_{N}^{2}\) acts through translations on images \(x\in\mathbb{R}^{N,N}\):_
\[(\rho^{\operatorname{tr}}(k,\ell)x)_{i,j}=x_{i-k,j-\ell}\]
**Example 4**.: _The group \(C_{4}\cong\mathbb{Z}_{4}\) acts on images \(x\in\mathbb{R}^{N,N}\) through rotations by multiples of \(90^{\circ}\). That is, \(\rho(k)=\rho(1)^{k}\), \(k=0,1,2,3\) where_
\[(\rho^{\operatorname{rot}}(1)x)_{i,j}=x_{-j,i}.\]
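The actions of Examples 3 and 4 translate directly into array indexing (a sketch; indices are taken cyclically, i.e. modulo \(N\), in both cases):

```python
import numpy as np

def translate(x, k, l):
    # (rho^tr(k, l) x)_{i,j} = x_{i-k, j-l}, the Z_N^2 action of Example 3
    return np.roll(x, (k, l), axis=(0, 1))

def rotate(x):
    # (rho^rot(1) x)_{i,j} = x_{-j, i}, the generator of the C_4 action of Example 4
    N = x.shape[0]
    i, j = np.indices((N, N))
    return x[(-j) % N, i]
```

Both implementations satisfy the defining group laws: translations compose additively, and applying `rotate` four times returns the original image.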
### Training equivariant models
In order to obtain a model \(\Phi_{A}\) which respects the symmetries of a group \(G\) acting on \(X\times Y\), we should incorporate them in our model or training strategy. Note that the distribution \(\mathcal{D}(x,y)\) of training data is typically not symmetric in the sense \((x,y)\sim(\rho_{X}(g)x,\rho_{Y}(g)y)\). Instead, in the context we consider, symmetries are usually inferred from, e.g., domain knowledge of \(X\times Y\).
We now proceed to formally describe two (popular) strategies for training MLPs which respect equivariance under \(G\).
Strategy 1: Manifest equivariance. The first method of enforcing equivariance is to constrain the layers to be manifestly equivariant. That is, we assume that \(G\) is acting also on all hidden spaces \(X_{i}\) through representations \(\rho_{i}\), where \(\rho_{0}=\rho_{X}\) and \(\rho_{L}=\rho_{Y}\), and constrain each layer \(A_{i}\) to be equivariant. In other words, we choose the layers \(A\in\mathcal{L}\) in the _equivariant subspace_
\[\mathcal{E}=\prod_{i\in[L]}\operatorname{Hom}_{G}(X_{i},X_{i+1})\]
If we in addition assume that all non-linearities \(\sigma_{i}\) are equivariant, it is straight-forward to show that \(\Phi_{A}\) is _exactly_ equivariant under \(G\) (see also Lemma 3). We will refer to these models as _equivariant_. The set \(\mathcal{E}\) has been extensively studied in the setting of geometric deep learning and explicitly characterized in many important cases [20, 8, 14, 25, 21, 1]. In [11], a general method for determining \(\mathcal{E}\) numerically directly from the \(\rho_{i}\) and the structure of the group \(G\) is described.
Defining \(\Pi_{\mathcal{E}}:\mathcal{L}\to\mathcal{E}\) as the orthogonal projection onto \(\mathcal{E}\), a convenient formulation of the strategy, which is the one we will use, is to optimize the _equivariant risk_
\[R^{\mathrm{eqv}}(A)=R(\Pi_{\mathcal{E}}A). \tag{2}\]
Strategy 2: Data augmentation. The second method we consider is to augment the training data. To this end, we define a new distribution on \(X\times Y\) by drawing samples \((x,y)\) from \(\mathcal{D}\) and subsequently _augmenting_ them by applying the action of a randomly drawn group element \(g\in G\) on both data \(x\) and label \(y\). Training on this augmented distribution can be formulated as optimizing the _augmented risk_
\[R^{\mathrm{aug}}(A)=\int_{G}\mathbb{E}_{\mathcal{D}}(\ell(\Phi_{A}(\rho_{X}( g)x),\rho_{Y}(g)y))\,\mathrm{d}\mu(g)\]
Here, \(\mu\) is the (normalised) _Haar_ measure on the group [15], which is defined through its invariance with respect to the action of \(G\) on itself; if \(h\) is distributed according to \(\mu\) then so is \(gh\) for all \(g\in G\). This property of the Haar measure will be crucial in our analysis. Choosing another measure would cause the augmentation to be biased towards certain group elements, and is not considered here. Note that if the data \(\mathcal{D}\) already is symmetric in the sense that \((x,y)\sim(\rho_{X}(g)x,\rho_{Y}(g)y)\), the augmentation acts trivially.
In our analysis, we want to compare the two strategies when training the same model. We will make three global assumptions.
**Assumption 1**.: _The group \(G\) is acting on all hidden spaces \(X_{i}\) through unitary representations \(\rho_{i}\)._
**Assumption 2**.: _The non-linearities \(\sigma_{i}:X_{i+1}\to X_{i+1}\) are equivariant._
**Assumption 3**.: _The loss \(\ell\) is invariant, i.e. \(\ell(\rho_{Y}(g)y,\rho_{Y}(g)y^{\prime})=\ell(y,y^{\prime})\), \(y,y^{\prime}\in Y\), \(g\in G\)._
Let us briefly comment on these assumptions. The first assumption is needed for the restriction strategy to be well defined. The technical part - the unitarity - is not a true restriction: As long as all \(X_{i}\) are finite-dimensional and \(G\) is compact, we can redefine the inner products on \(X_{i}\) to ensure that all \(\rho_{i}\) become unitary. The second assumption is required for the equivariant strategy to be sound - if the \(\sigma_{i}\) are not equivariant, they will explicitly break equivariance of \(\Phi_{A}\) even if \(A\in\mathcal{E}\). The third assumption guarantees that the loss-landscape is 'unbiased' towards group transformations, which is certainly required to train any model respecting the symmetry group.
We also note that all assumptions are in many settings quite weak - we already commented on Assumption 1. As for Assumption 2, note that e.g. any non-linearity acting pixel-wise on an image will be equivariant to both translations and rotations. In the same way, any loss comparing images pixel by pixel will be invariant, so that Assumption 3 is satisfied. Furthermore, if we are trying to learn an invariant function the final representation \(\rho_{Y}\) is trivial and Assumption 3 is trivially satisfied.
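These observations are easy to check numerically. For instance, for the permutation action of Example 2, an elementwise non-linearity commutes with the action (Assumption 2) and an elementwise loss is invariant under a simultaneous permutation of both arguments (Assumption 3). A minimal sketch, with ReLU and mean squared error as illustrative choices:

```python
import numpy as np

relu = lambda v: np.maximum(v, 0.0)        # acts elementwise, hence permutation-equivariant
mse = lambda a, b: np.mean((a - b) ** 2)   # compares entries pairwise, hence invariant

rng = np.random.default_rng(0)
x, y = rng.normal(size=8), rng.normal(size=8)
pi = rng.permutation(8)                    # a random permutation, applied via fancy indexing
```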
**Remark 1**.: _Before proceeding, let us mention a different strategy to build equivariant models: feature averaging [18]. This strategy refers to altering the model by averaging it over the group:_
\[\Phi_{A}^{\mathrm{FA}}(x):=\int_{G}\rho_{Y}(g)^{-1}\Phi_{A}(\rho_{X}(g)x)\, \mathrm{d}\mu(g).\]
_In words, the value of the feature averaged network at a datapoint \(x\) is obtained by calculating the outputs of the original model \(\Phi_{A}\) on transformed versions of \(x\), and averaging the re-adjusted outputs over \(G\). Note that the modification of an MLP here does not rely on explicitly controlling the weights. It is not hard to prove (see e.g. [19, Prop. 2]) that under the invariance assumption on \(\ell\),_
\[R^{\mathrm{aug}}(A)=\mathbb{E}(\ell(\Phi_{A}^{\mathrm{FA}}(x),y)). \tag{3}\]
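For a finite group, the integral defining \(\Phi_{A}^{\mathrm{FA}}\) becomes a finite average over group elements. The following sketch uses the \(C_4\) rotations of Example 4 with a trivial \(\rho_Y\) (invariant target); the linear `model` is an arbitrary non-invariant stand-in, not a model from the paper:

```python
import numpy as np

def feature_average(model, x, actions):
    # Phi^FA(x): average model outputs over transformed inputs;
    # rho_Y is trivial here, so no re-adjustment of the outputs is needed
    return sum(model(a(x)) for a in actions) / len(actions)

rng = np.random.default_rng(0)
w = rng.normal(size=(6, 6))
model = lambda x: float((w * x).sum())               # not invariant on its own
c4 = [lambda x, k=k: np.rot90(x, k) for k in range(4)]
```

Although `model` itself is not invariant, its feature average is exactly invariant under the group, by the same averaging argument as in the text.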
### Induced representations and their properties
The representations \(\rho_{i}\) naturally induce representations \(\overline{\rho}_{i}\) of \(G\) on \(\operatorname{Hom}(X_{i},X_{i+1})\)
\[\overline{\rho}_{i}(g)A_{i}=\rho_{i+1}(g)A_{i}\rho_{i}(g)^{-1},\]
and from that a representation \(\overline{\rho}\) on \(\mathcal{L}\) according to \((\overline{\rho}(g)A)_{i}=\overline{\rho}_{i}(g)A_{i}\). Since the \(\rho_{i}\) are unitary, with respect to the appropriate canonical inner products, so are \(\overline{\rho}_{i}\) and \(\overline{\rho}\).
Before proceeding we establish some simple, but crucial facts, concerning the induced representation \(\overline{\rho}\) and the way it appears in the general framework. We will need the following two well-known lemmas. Proofs are presented in Appendix A.
**Lemma 1**.: \(A\in\mathcal{E}\) _if and only if \(\overline{\rho}(g)A=A\) for all \(g\in G\)._
**Lemma 2**.: _For any \(A\in\mathcal{L}\) the orthogonal projection \(\Pi_{\mathcal{E}}\) is given by_
\[\Pi_{\mathcal{E}}A=\int_{G}\overline{\rho}(g)A\,\mathrm{d}\mu(g).\]
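For a finite group the integral in Lemma 2 is a finite average, so \(\Pi_{\mathcal{E}}\) can be computed directly. A sketch for a single square layer under the permutation action of Example 2, where \(\overline{\rho}(g)A=P_{g}AP_{g}^{-1}\); summing over all of \(S_N\) is of course only feasible for small \(N\):

```python
import itertools
import numpy as np

def perm_matrix(pi):
    # permutation matrix with P e_j = e_{pi(j)}
    P = np.zeros((len(pi), len(pi)))
    P[pi, np.arange(len(pi))] = 1.0
    return P

def project_equivariant(A):
    # Pi_E A = average of rhobar(g) A = P_g A P_g^{-1} over g in S_N (Lemma 2)
    N = A.shape[0]
    Ps = [perm_matrix(np.array(p)) for p in itertools.permutations(range(N))]
    return sum(P @ A @ P.T for P in Ps) / len(Ps)
```

By Lemma 1, the result commutes with every permutation matrix, and the map is idempotent, as an orthogonal projection must be.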
We now prove a relation between transforming the input and transforming the layers of an MLP.
**Lemma 3**.: _Under Assumption 2, for any \(A\in\mathcal{L}\) and \(g\in G\) we have_
\[\Phi_{A}(\rho_{X}(g)x)=\rho_{Y}(g)\Phi_{\overline{\rho}(g)^{-1}A}(x).\]
_In particular, \(\Phi_{A}\) is equivariant for every \(A\in\mathcal{E}\)._
Proof.: The in particular part follows from \(\overline{\rho}(g)^{-1}A=A\) for \(A\in\mathcal{E}\). To prove the main statement, we use the notation (1): \(x_{i}\) denotes the outputs of each layer of \(\Phi_{A}\) when it acts on the input \(x\in X\). Also, for \(g\in G\), let \(x_{i}^{g}\) denote the outputs of each layer of the network \(\Phi_{\overline{\rho}(g)^{-1}A}\) when acting on the input \(\rho_{X}(g)^{-1}x\). If we show that \(\rho_{i}(g)x_{i}^{g}=x_{i}\) for \(i\in[L+1]\), the claim follows. We do so via induction. The case \(i=0\) is clear: \(\rho_{X}(g)x_{0}^{g}=\rho_{X}(g)\rho_{X}(g)^{-1}x=x=x_{0}\). As for the induction step, we have
\[\rho_{i+1}(g)x_{i+1}^{g} =\rho_{i+1}(g)\sigma_{i}(\overline{\rho}_{i}(g)^{-1}A_{i}x_{i}^{ g})=\rho_{i+1}(g)\sigma_{i}(\rho_{i+1}(g)^{-1}A_{i}\rho_{i}(g)x_{i}^{g})\] \[=\sigma_{i}(\rho_{i+1}(g)\rho_{i+1}(g)^{-1}A_{i}\rho_{i}(g)x_{i}^ {g})=\sigma_{i}(A_{i}\rho_{i}(g)x_{i}^{g})=\sigma_{i}(A_{i}x_{i})=x_{i+1}\,,\]
where in the second step, we have used the definition of \(\overline{\rho}_{i}\), in the third, Assumption (2), and the fifth step follows from the induction assumption.
The above formula has an immediate consequence for the augmented loss.
**Lemma 4**.: _Under Assumptions 2 and 3, the augmented risk can be expressed as_
\[R^{\mathrm{aug}}(A)=\int_{G}R(\overline{\rho}(g)A)\,\mathrm{d}\mu(g). \tag{4}\]
Proof.: From Lemma 3 and Assumption (3), it follows that for any \(g\in G\) we have \(\ell(\Phi_{\overline{\rho}(g)^{-1}A}(x),y)=\ell(\rho_{Y}(g)^{-1}\Phi_{A}( \rho_{X}(g)x),y)=\ell(\Phi_{A}(\rho_{X}(g)x),\rho_{Y}(g)y)\). Taking the expectation with respect to the distribution \(\mathcal{D}\), and then integrating over \(g\in G\) yields
\[R^{\mathrm{aug}}(A)=\int_{G}R(\overline{\rho}(g)^{-1}A)\,\mathrm{d}\mu(g)= \int_{G}R(\overline{\rho}(g^{-1})A)\,\mathrm{d}\mu(g)\,.\]
Using the fact that the Haar measure is invariant under inversion proves the statement.
We note the likeness of (4) to (3): In both equations, we are averaging risks of transformed models over the group. However, in (3), we average over transformations of the _input data_, whereas in (4), we average over transformations of the _layers_. The latter fact is crucial - it will allow us to analyse the dynamics of gradient flow.
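Lemma 4 is easy to verify numerically in the simplest possible setting: a single linear layer \(a\in\mathbb{R}^{2}\), trivial \(\rho_{Y}\), squared-error loss, and \(G=\mathbb{Z}_{2}\) acting on \(X=\mathbb{R}^{2}\) by swapping coordinates. All choices below (data sizes, the seed) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(16, 2)); Y = rng.normal(size=16)
a = rng.normal(size=2)                         # the single layer A : R^2 -> R
P = np.array([[0., 1.], [1., 0.]])             # the swap representation; P = P^{-1}

risk = lambda Z, a: np.mean((Z @ a - Y) ** 2)
# left-hand side: average over transformed *inputs* (the augmented risk)
lhs = 0.5 * (risk(X, a) + risk(X @ P.T, a))
# right-hand side: average over transformed *layers*, rhobar(g) a corresponding to P a
rhs = 0.5 * (risk(X, a) + risk(X, P @ a))
```

The two averages agree, illustrating (4): transforming the data is equivalent to transforming the layers.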
Before moving on, let us introduce one more notion. When considering the dynamics of training we will encounter elements of the vector space \(\mathcal{L}\otimes\mathcal{L}\), which also carries a representation \(\overline{\rho}^{\otimes 2}\) of \(G\) induced by \(\overline{\rho}\) according to
\[\overline{\rho}^{\otimes 2}(g)(A\otimes B)=(\overline{\rho}(g)A)\otimes( \overline{\rho}(g)B)\,.\]
We refer to the vector space of elements invariant under this action as \(\mathcal{E}^{\otimes 2}\), and the orthogonal projection onto it as \(\Pi_{\mathcal{E}^{\otimes 2}}\). As for \(\mathcal{L}\), the induced representation on \(\mathcal{L}\otimes\mathcal{L}\) can be used to express the orthogonal projection.
**Lemma 5**.: _For any \(A,B\in\mathcal{L}\) the orthogonal projection \(\Pi_{\mathcal{E}^{\otimes 2}}\) is given by_
\[\Pi_{\mathcal{E}^{\otimes 2}}(A\otimes B)=\int_{G}\overline{\rho}^{\otimes 2}(g)(A \otimes B)\,\mathrm{d}\mu(g).\]
An important property of the space \(\mathcal{E}^{\otimes 2}\), which we can describe as a subspace of the space of bilinear forms on \(\mathcal{L}\), is that it is diagonal with respect to the orthogonal decomposition \(\mathcal{L}=\mathcal{E}\oplus\mathcal{E}^{\perp}\) induced by \(\Pi_{\mathcal{E}}\). We have
**Lemma 6**.: _For any \(M\in\mathcal{L}\otimes\mathcal{L}\), \(A\in\mathcal{E}\) and \(B\in\mathcal{E}^{\perp}\) we have_
1. \((\Pi_{\mathcal{E}^{\otimes 2}}M)\left[A,A\right]=M[A,A]\) _and (ii)_ \((\Pi_{\mathcal{E}^{\otimes 2}}M)\left[A,B\right]=0\,.\)__
## 4 Dynamics of the gradient flow
We have now established the framework of optimization for symmetric models that we need to investigate the gradient descent for the equivariant and augmented models. In particular, we want to compare the gradient descent dynamics of the two models as it pertains to the equivariance with respect to the symmetry group \(G\) during training. To this end, we consider the gradient flows of the nominal, equivariant and augmented risks
\[\dot{A}=-\nabla R(A)\,,\quad\dot{A}=-\nabla R^{\mathrm{eqv}}(A)\,,\quad\dot{A }=-\nabla R^{\mathrm{aug}}(A)\,,\quad A\in\mathcal{L}\,.\]
### Equivariant stationary points
We first establish the relation between the gradients of the equivariant and augmented models for an equivariant initial condition.
**Proposition 1**.: _For \(A\in\mathcal{E}\) we have \(\nabla R^{\mathrm{aug}}(A)=\Pi_{\mathcal{E}}\nabla R(A)=\nabla R^{\mathrm{ eqv}}(A)\)._
Proof.: Taking the derivative of (4) and applying the chain rule yields
\[\langle\nabla R^{\mathrm{aug}}(A),B\rangle=\int_{G}\langle\overline{\rho}(g)^ {-1}\nabla R(\overline{\rho}(g)A),B\rangle\,\mathrm{d}\mu(g)=\int_{G}\langle \nabla R(\overline{\rho}(g)A),\overline{\rho}(g)B\rangle\,\mathrm{d}\mu(g)\,,\]
where \(B\in\mathcal{L}\) is arbitrary and we have used the unitarity of \(\overline{\rho}\). Using that \(\overline{\rho}(g)A=A\) for every \(A\in\mathcal{E}\) (Lemma 2), we see that the last integral equals
\[\int_{G}\langle\nabla R(A),\overline{\rho}(g)B\rangle\,\mathrm{d}\mu(g)= \langle\nabla R(A),\Pi_{\mathcal{E}}B\rangle=\langle\Pi_{\mathcal{E}}\nabla R (A),B\rangle\,,\]
where we have used orthogonality of \(\Pi_{\mathcal{E}}\) in the final step, which establishes the first equality of the proposition.
The second equality follows immediately from the chain rule applied to (2)
\[\nabla R^{\mathrm{eqv}}(A)=\Pi_{\mathcal{E}}\nabla R(\Pi_{\mathcal{E}}A)=\Pi_ {\mathcal{E}}\nabla R(A)\,,\]
where we have used that \(\Pi_{\mathcal{E}}\) is self-adjoint and \(\Pi_{\mathcal{E}}A=A\) for every \(A\in\mathcal{E}\) in the last step.
A direct consequence of Proposition 1 is that for any \(A\in\mathcal{E}\) we have \(\nabla R^{\mathrm{aug}}(A)\in\mathcal{E}\) for the gradient, which establishes the following important result.
**Corollary 1**.: _The equivariant subspace \(\mathcal{E}\) is invariant under the gradient flow of \(R^{\mathrm{aug}}\)._
A further immediate consequence of Proposition 1 is the fact that if the initialization of the networks is equivariant, the gradient descent dynamics of the augmented and equivariant models will be identical. In particular, we have the following result.
**Corollary 2**.: \(A^{*}\in\mathcal{E}\) _is a stationary point of the gradient flow of \(R^{\mathrm{aug}}\) if and only if it is a stationary point of the gradient flow of \(R^{\mathrm{eqv}}\)._
### Stability of the equivariant stationary points
We now consider the stability of equivariant stationary points, and more generally of the equivariant subspace \(\mathcal{E}\), under the augmented gradient flow of \(R^{\mathrm{aug}}\). Of course, \(\mathcal{E}\) is manifestly stable under the equivariant gradient flow of \(R^{\mathrm{eqv}}\). To make statements about the stability we establish the connection between the Hessians of \(R\), \(R^{\mathrm{eqv}}\) and \(R^{\mathrm{aug}}\), which can be considered as bilinear forms on \(\mathcal{L}\), i.e. as elements of the tensor product space \(\mathcal{L}\otimes\mathcal{L}\).
**Proposition 2**.: _For \(A\in\mathcal{E}\) we have \(\nabla^{2}R^{\mathrm{aug}}(A)=\Pi_{\mathcal{E}^{\otimes 2}}\nabla^{2}R(A)\) and \(\nabla^{2}R^{\mathrm{eqv}}(A)=\Pi_{\mathcal{E}}^{\otimes 2}\nabla^{2}R(A)\)._
Proof.: Taking the second derivative of (4) yields
\[\nabla^{2}R^{\mathrm{aug}}(A)[B,C]=\int_{G}\nabla^{2}R(\overline{\rho}(g)A)[ \overline{\rho}(g)B,\overline{\rho}(g)C]\,\mathrm{d}\mu(g)=\int_{G}\overline{ \rho}^{\otimes 2}(g)\nabla^{2}R(A)[B,C]\,\mathrm{d}\mu(g)\,,\]
where \(B,C\in\mathcal{L}\) are arbitrary and we have used \(\overline{\rho}(g)A=A\) for \(g\in G\) and \(A\in\mathcal{E}\) and the definition of \(\overline{\rho}^{\otimes 2}\). Lemma 5 then yields the first equality.
The second statement again follows directly from the chain rule twice applied to (2)
\[\nabla^{2}R^{\mathrm{eqv}}(A)=\Pi_{\mathcal{E}}^{\otimes 2}\nabla^{2}R(\Pi_{ \mathcal{E}}A)=\Pi_{\mathcal{E}}^{\otimes 2}\nabla^{2}R(A)\,,\]
where we have used \(\Pi_{\mathcal{E}}A=A\) for \(A\in\mathcal{E}\) and the fact that \(\Pi_{\mathcal{E}}\) is self-adjoint.
**Proposition 3**.: _For \(A^{*}\in\mathcal{E}\) the following implications hold:_
* _If_ \(A^{*}\) _is a strict local minimum of_ \(R\)_, it is a strict local minimum of_ \(R^{\mathrm{aug}}\)_._
* _If_ \(A^{*}\) _is a strict local minimum of_ \(R^{\mathrm{aug}}\)_, it is a strict local minimum of_ \(R^{\mathrm{eqv}}\)_._
Proof.: _i)_ Assume \(\nabla R(A^{*})=0\) and \(\nabla^{2}R(A^{*})\) positive definite. From Proposition 1 we then have \(\nabla R^{\mathrm{aug}}(A^{*})=\Pi_{\mathcal{E}}\nabla R(A^{*})=0\). Furthermore, for \(B\neq 0\) Proposition 2 implies
\[\nabla^{2}R^{\mathrm{aug}}(A^{*})[B,B]=\Pi_{\mathcal{E}^{\otimes 2}}\nabla^{2} R(A^{*})[B,B]=\int_{G}\nabla^{2}R(A^{*})[\overline{\rho}(g)B,\overline{ \rho}(g)B]\,\mathrm{d}\mu(g)>0\,,\]
where in the last step we have used the fact that the integrand is positive since \(\overline{\rho}(g)B\neq 0\) for any \(g\in G\) and \(B\neq 0\), and \(\nabla^{2}R(A^{*})[B,B]>0\) for \(B\neq 0\).
_ii)_ Assume \(\nabla R^{\mathrm{aug}}(A^{*})=0\) and \(\nabla^{2}R^{\mathrm{aug}}(A^{*})\) positive definite. From Proposition 1 we have \(\nabla R^{\mathrm{eqv}}(A^{*})=\nabla R^{\mathrm{aug}}(A^{*})=0\). Proposition 2 implies that \(\nabla^{2}R^{\mathrm{eqv}}(A^{*})[B,B]\) equals
\[\nabla^{2}R(A^{*})[\Pi_{\mathcal{E}}B,\Pi_{\mathcal{E}}B]=\Pi_{\mathcal{E}^{ \otimes 2}}\nabla^{2}R(A^{*})[\Pi_{\mathcal{E}}B,\Pi_{\mathcal{E}}B]=\nabla^{2}R^{ \mathrm{aug}}(A^{*})[\Pi_{\mathcal{E}}B,\Pi_{\mathcal{E}}B]\,.\]
Here we used that \(\Pi_{\mathcal{E}}B\in\mathcal{E}\), together with the first part of Lemma 6. Consequently, \(\nabla^{2}R^{\mathrm{eqv}}(A^{*})[B,B]>0\) for \(\Pi_{\mathcal{E}}B\neq 0\), which completes the proof.
The converse of Proposition 3 is not true. A concrete counterexample is provided in Appendix B.
Finally, let us remark an interesting property of the dynamics of the augmented gradient flow of \(R^{\mathrm{aug}}\) near \(\mathcal{E}\). Decomposing \(A\) near \(\mathcal{E}\) as \(A=x+y\), with \(x\in\mathcal{E}\) and \(y\in\mathcal{E}^{\perp}\), with \(y\approx 0\), and linearising in the deviation \(y\) from the equivariant subspace \(\mathcal{E}\) yields
\[\dot{x}+\dot{y}=-\nabla R^{\mathrm{aug}}(x)-y^{*}\nabla^{2}R^{\mathrm{aug}}(x )\,+\mathcal{O}(\|y\|^{2}).\]
From Proposition 1 we have \(\nabla R^{\mathrm{aug}}(x)\in\mathcal{E}\), and Proposition 2 (ii) together with Lemma 6 implies that \(y^{*}\nabla^{2}R^{\mathrm{aug}}(x)\in\mathcal{E}^{\perp}\). Consequently, the dynamics approximately decouple:
\[\begin{cases}\dot{x}&=-\nabla R^{\mathrm{aug}}(x)\qquad+\mathcal{O}(\|y\|^{2} )\\ \dot{y}&=-y^{*}\nabla^{2}R^{\mathrm{aug}}(x)\ +\mathcal{O}(\|y\|^{2})\end{cases}\,.\]
We observe that Proposition 1 now implies that the dynamics restricted to \(\mathcal{E}\) is identical to that of the equivariant gradient flow, and that the stability of \(\mathcal{E}\) is completely determined by the spectrum of \(\nabla^{2}R^{\mathrm{aug}}\) restricted to \(\mathcal{E}^{\perp}\). Furthermore, as long as the parameters are close to \(\mathcal{E}\), the dynamics of the part of \(A\) in \(\mathcal{E}\) for the two models are almost equal.
In terms of training augmented models with equivariant initialization, these observations imply that while the augmented gradient flow restricted to \(\mathcal{E}\) will converge to the local minima of the corresponding equivariant model, it may diverge from the equivariant subspace \(\mathcal{E}\) due to noise and numerical errors as soon as \(\nabla^{2}R^{\mathrm{aug}}(x)\) restricted to \(\mathcal{E}^{\perp}\) has negative eigenvalues. We leave it to future work to analyse this further.
## 5 Experiments
We perform some simple experiments to study the dynamics of the gradient flow _near the equivariant subspace_\(\mathcal{E}\). From our theoretical results, we expect the following.
1. The set \(\mathcal{E}\) is invariant, but not necessarily stable, under the gradient flow of \(R^{\mathrm{aug}}\).
2. The dynamics in \(\mathcal{E}^{\perp}\) for \(R^{\mathrm{aug}}\) should (initially) be much 'slower' than in \(\mathcal{E}\), in particular compared to the nominal gradient flow of \(R\).
We consider three different (toy) learning tasks, with different symmetry groups and data sets:
1. Permutation invariant graph classification (using small synthetically generated graphs)
2. Translation invariant image classification (on a subsampled version of MNIST [17])
3. Rotation equivariant image segmentation (on synthetic images of simple shapes).
The general setup is as follows: We consider a group \(G\) acting on vector spaces \((X_{i})_{i=0}^{L}\). We construct a multilayered perceptron \(\Phi_{A}:X_{0}\to X_{L}\) as above. The non-linearities are always chosen as non-linear functions \(\mathbb{R}\to\mathbb{R}\) applied elementwise, and are therefore equivariant under the actions we consider. Importantly, to mitigate a vanishing gradient problem, we incorporate batch normalization layers into the models. This entails a slight deviation from the setting in the previous sections, but importantly does not break the invariance or equivariance under \(G\) at test time. We build our models with PyTorch [24]. Detailed descriptions of e.g. the choice of intermediate spaces, non-linearities and data, are provided in Appendix C. In the interest of reproducibility, we also provide the code at [https://github.com/usinedepain/eq_aug_dyn](https://github.com/usinedepain/eq_aug_dyn).
We initialize \(\Phi_{A}\) with equivariant layers \(A^{0}\in\mathcal{E}\) by drawing matrices randomly from a standard Gaussian distribution, and then projecting them orthogonally onto \(\mathcal{E}\). We train the network on (finite) datasets \(\mathcal{D}\) using gradient descent in three different ways.
* Nominal: A standard gradient descent, with gradient accumulated over the entire dataset (to emulate the 'non-empirical' risk \(R\) as we have defined it here). That is, the data is fed forward through the MLP in mini-batches as usual, but each gradient is calculated by taking the average over all mini-batches.
* Augmented: As Nominal, but with \(N^{\mathrm{aug}}\) passes over data where each mini-batch is augmented using a randomly sampled group element \(g\sim\mu\). The gradient is again averaged over all passes, to model the augmented risk \(R^{\mathrm{aug}}\) as closely as possible.
* Equivariant: As in Nominal, but the gradient is projected onto \(\mathcal{E}\) before the gradient step is taken. This corresponds to the equivariant risk \(R^{\mathrm{eqv}}\) and produces networks which are manifestly equivariant.
The learning rate \(\tau\) is equal to \(\tau=10^{-5}\) in all three experiments. In the limit \(\tau\to 0\) and \(N^{\mathrm{aug}}\to\infty\), this exactly corresponds to letting the layers evolve according to gradient flow with respect to \(R\), \(R^{\mathrm{aug}}\) and \(R^{\mathrm{eqv}}\), respectively.
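The three modes can be illustrated on a toy problem that is much smaller than our actual experiments: one linear layer \(A\in\mathbb{R}^{1\times 2}\), squared-error risk, and \(G=\mathbb{Z}_2\) swapping the two input coordinates (a hypothetical stand-in, not our experiment code). With an equivariant initialization, the augmented gradient coincides with the projected (equivariant) one, and a gradient step stays in \(\mathcal{E}\), as Proposition 1 and Corollary 1 predict:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2)); Y = X.sum(1, keepdims=True)   # a swap-invariant target
P = np.array([[0., 1.], [1., 0.]])                          # Z2 swap rep; rho_Y trivial
proj = lambda A: 0.5 * (A + A @ P)                          # Pi_E via group averaging (Lemma 2)

def grad(A):
    # nominal gradient of the empirical risk R(A) = mean((X A^T - Y)^2)
    return 2 * (X @ A.T - Y).T @ X / len(X)

def aug_grad(A):
    # Augmented mode via Lemma 4: average the gradient over rhobar(g) A = A P_g
    return 0.5 * (grad(A) + grad(A @ P) @ P)

A0 = proj(rng.normal(size=(1, 2)))          # equivariant initialization, A0 in E
A1 = A0 - 1e-2 * aug_grad(A0)               # one augmented gradient step
```

The Equivariant mode would instead step along `proj(grad(A))`; on \(\mathcal{E}\) the two updates are identical.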
For each task, we train the networks for \(50\) epochs. After each epoch we record \(\|A-A^{0}\|\), i.e. the distance from the starting position \(A^{0}\), and \(\|A_{\mathcal{E}^{\perp}}\|\), i.e. the distance from \(\mathcal{E}\) or equivalently the 'non-equivariance'. Each experiment is repeated \(30\) times, with random initialisations.
### Results
In Figures 0(a), 0(b) and 0(c), we plot the evolution of the values \(\|A_{\mathcal{E}^{\perp}}\|\) against the evolution of \(\|A-A^{0}\|\). The opaque line in each plot is formed by the average values for all thirty runs, whereas the fainter lines are the \(30\) individual runs.
In short, the experiments are consistent with our theoretical results. In particular, we observe that the equivariant submanifold is consistently unstable (i) in our repeated augmented experiments. In Experiments 1 and 2, we also observe the hypothesized 'stabilising effect' (ii) on the equivariant subspace: the Augmented model stays much closer to \(\mathcal{E}\) than the Nominal model - the shift orthogonal to \(\mathcal{E}\) is smaller by several orders of magnitude. For the rotation experiments, the Augmented and
Nominal models are much closer to each other, but note that also here, the actual shifts orthogonal to \(\mathcal{E}\) are very small compared to the total shifts \(\|A-A^{0}\|\) - on the order of \(10^{-7}\) compared to \(10^{-3}\).
The reason for the different behaviour in the rotation experiment can at this point only be speculated on. The hyperparameter \(N^{\mathrm{aug}}\), the specific properties of the \(\Pi_{\mathcal{E}}\) operator, and the sizes and slight architectural differences of the models (e.g., the fact that the group \(G\) acts non-trivially on all spaces in the rotation example) likely all play a part. We leave the closer study of these matters to future work.
## 6 Conclusion
In this paper we investigated the dynamics of gradient descent for augmented and equivariant models, and how they are related. In particular, we showed that the models have the same set of equivariant stationary points, but that the stability of these points may differ. Furthermore, when initialized on the equivariant subspace \(\mathcal{E}\), the dynamics of the augmented model are identical to those of the equivariant one. To first order, the dynamics on \(\mathcal{E}\) and \(\mathcal{E}^{\perp}\) even decouple for the augmented model.
These findings have important practical implications for the two strategies for incorporating symmetries in the learning problem. The fact that their equivariant stationary points agree implies that there are no equivariant configurations that cannot be found using manifest equivariance. Hence, the more efficient parametrisation of the equivariant models neither introduces nor excludes stationary points compared to the less restrictive augmented approach. Conversely, if we can control the potential instability of the non-equivariant subspace \(\mathcal{E}^{\perp}\) in the augmented gradient flow, it will find the same equivariant minima as its equivariant counterpart. One way to accomplish the latter would be to introduce a penalty proportional to \(\|A_{\mathcal{E}^{\perp}}\|^{2}\) in the augmented risk.
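As an illustration of the penalty just mentioned, the following sketch (our own, with a toy parameter space and an illustrative choice of \(\mathcal{E}\) and constants, not the paper's models) shows gradient descent on a term \(c\,\|A_{\mathcal{E}^{\perp}}\|^{2}\) driving the parameters back onto the equivariant subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
# Toy equivariant subspace E: parameter vectors whose two halves coincide.
U = np.vstack([np.eye(d // 2), np.eye(d // 2)]) / np.sqrt(2.0)  # orthonormal basis of E
Pi_E = U @ U.T                                                  # orthogonal projector onto E

def penalty(A, c=10.0):
    """c * ||A_{E^perp}||^2: vanishes on E and grows with the distance from it."""
    A_perp = A - Pi_E @ A
    return c * np.sum(A_perp ** 2)

def penalty_grad(A, c=10.0):
    return 2.0 * c * (A - Pi_E @ A)

A = rng.normal(size=d)
for _ in range(200):                  # gradient descent on the penalty alone
    A = A - 1e-2 * penalty_grad(A)
# The component of A orthogonal to E decays geometrically; the gradient lies
# entirely in E^perp, so the component inside E is untouched.
```

In a real training loop this term would simply be added to the augmented risk, with the gradient of the task loss and of the penalty summed at each step.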
Although we showed that the dynamics _on \(\mathcal{E}\)_ is identical for the augmented and equivariant models, and understand their behaviour _near \(\mathcal{E}\)_, our results say nothing about the dynamics _away from \(\mathcal{E}\)_ for the augmented model. Indeed, there is nothing stopping the augmented gradient flow from leaving \(\mathcal{E}\) - although initialized very near it - or from coming back again. To analyse the global properties of the augmented gradient flow, in particular to calculate the spectra of \(\nabla^{2}R^{\mathrm{aug}}\) in concrete cases of interest, is an important direction for future research.
Figure 1: Results of the experiments. Shown are plots of \(\|A_{\mathcal{E}^{\perp}}\|\) versus \(\|A-A^{0}\|\) for the three models. Opaque lines correspond to average values, transparent lines to individual experiments. For Experiment 1 and 2, the two plots depict the same data with different scales on the \(\|A_{\mathcal{E}^{\perp}}\|\)-axis. |
# Learning characteristic parameters and dynamics of centrifugal pumps under multiphase flow using physics-informed neural networks

Felipe de Castro Teixeira Carvalho, Kamaljyoti Nath, Alberto Luiz Serpa, George Em Karniadakis

2023-10-04 | arXiv:2310.03001v2 | http://arxiv.org/abs/2310.03001v2
###### Abstract
Electrical submersible pumps (ESPs) are the second most used artificial lifting equipment in the oil and gas industry due to their high flow rates and boost pressures. They often have to handle multiphase flows, which usually contain a mixture of hydrocarbons, water, and/or sediments. Under these circumstances, emulsions are commonly formed. An emulsion is a liquid-liquid flow composed of two immiscible fluids whose effective viscosity and density differ from those of the individual phases. In this context, accurate modeling of ESP systems is crucial for optimizing oil production and implementing control strategies. However, real-time and direct measurement of fluid and system characteristics is often impractical due to time and cost constraints. Hence, indirect methods are generally used to estimate the system parameters. In this paper, we formulate a machine learning model based on physics-informed neural networks (PINNs) to estimate crucial system parameters. To study the efficacy of the proposed PINN model, we conduct computational studies using both simulated and experimental data for different water-oil ratios. We evaluate the dynamics of the state variables and the unknown parameters for various combinations when only intake and discharge pressure measurements are available. We also perform structural and practical identifiability analyses based on commonly available pressure measurements. The PINN model could reduce the need for expensive field laboratory tests used to estimate fluid properties.
**Keywords:** Electrical submersible pump · Physics-informed neural networks · Parameter estimation · Identifiability analysis · Multiphase flow · Digital twin
## 1 Introduction
Multistage centrifugal pumps play a critical role in various industries, including oil and gas, water supply, chemical processes, and power generation. In the oil industry, these pumps often handle multiphase flows that contain a mixture of hydrocarbons, water, and/or sediment. Among multistage centrifugal pumps, Electrical Submersible Pumps (ESPs), introduced in the oil and gas industry in 1927, have been the second most extensively used artificial lifting equipment [1]. The ESP is a multistage centrifugal pump driven by a submerged electrical motor. ESPs are commonly used where a relatively high flow rate with high boosting pressure is necessary [2, 3, 4], e.g., in the petroleum industry. It is common in the petroleum industry to have two-phase liquid-liquid flows, which often
involve transporting oil and water together [5; 6]. In liquid-liquid (e.g., oil-water) flows, colloidal dispersions such as emulsions are commonly formed due to the chemical characteristics of the liquids. Emulsions occur when one of the liquids forms droplets (dispersed phase) and is immersed in the other liquid (continuous phase). Emulsions exhibit a non-Newtonian behavior that can cause instability in ESP operation [7].
The liquid phases of an emulsion tend to separate over time due to the immiscibility of oil and water. The separation time may vary from minutes to hours and may extend to days, months, or even years, depending on the stability of the emulsion. The droplet size distribution, chemical properties of the liquid phases, and the presence of surfactants determine the stability and type of emulsion (water-in-oil or oil-in-water) [8]. Furthermore, there may be a transition from one emulsion type to another, commonly called phase inversion [9; 10; 11]. Emulsion formation in the ESP system may be caused by turbulence and shear stress generated by the ESP, the pipe system, and the valve [12].
Furthermore, the effective viscosity of an emulsion may considerably exceed that of either single phase. It is affected by the individual viscosities of the oil and water phases, the water fraction of the mixture, the temperature, and the droplet size and distribution [13]. Therefore, monitoring fluid properties such as effective viscosity and density becomes crucial. These parameters affect the stability of the system and are important for developing control strategies and optimizing oil production. However, direct measurement of the fluid properties and system parameters next to the ESP is time-consuming and expensive. In this study, we considered a machine learning approach to identify these parameters using pressure measurements.
The problem of estimating unknown properties based on indirect measurements is referred to as the inverse problem and is common in various fields [14]. Additionally, multiple parameter sets may provide satisfactory fits to given data [15; 16; 17]. In this context, we conduct identifiability analysis to determine whether a specific system parameter (or a set of parameters) can be estimated uniquely (globally or locally) from the available input and output states. The structural identifiability analysis assumes an ideal scenario, where the observed states are noise-free and the model itself is error-free [18; 17; 19]. This analysis, also known as prior identifiability, can be performed without actual experimental data.
Various methods have been developed to analyze a system's local and global structural identifiability. Chis _et al._[20] and Raue _et al._[21] conducted comparative studies of these methods found in the literature, focusing on biological systems. Castro and Boer [22] introduced a simple scaling method, which exploits the invariance of the equations under parameter scaling transformations. More recently, Dong _et al._[23] proposed a method based on differential elimination. It is important to note that structural identifiability alone does not account for the quantity and quality of available data or the numerical optimization algorithm employed [24; 19].
Several methods have been developed to estimate unknown parameters of ordinary differential equations (ODEs). The Nonlinear Least Squares (NLS) method is widely adopted due to its versatility and applicability to various ODE systems. However, it is computationally intensive, and the inaccuracy in numerical approximations of derivatives may pose challenges, particularly for stiff systems [25; 26; 27]. In addition to NLS, other methods, such as collocation methods [25], Gaussian process-based approaches [28], Bayesian methods [29], and the recently proposed two-stage approach using Neural ODEs [27], are a few alternative techniques for parameter estimation. The physics-informed neural network is one of the recent neural network-based methods for parameter estimation for ODEs/PDEs.
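To make the NLS approach concrete, here is a minimal sketch (our own toy example, not the ESP model) that recovers a single ODE parameter by matching simulated trajectories with SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Toy first-order system dx/dt = -k*x + 1 with x(0) = 0: generate noise-free
# observations with a known k, then recover it by nonlinear least squares.
k_true = 1.5
t_obs = np.linspace(0.0, 4.0, 40)

def simulate(k):
    sol = solve_ivp(lambda t, x: -k * x + 1.0, (0.0, 4.0), [0.0],
                    t_eval=t_obs, rtol=1e-8, atol=1e-10)
    return sol.y[0]

x_obs = simulate(k_true)

def residuals(theta):
    # Residual vector between the candidate trajectory and the observations.
    return simulate(theta[0]) - x_obs

fit = least_squares(residuals, x0=[0.5])
k_hat = fit.x[0]
```

Each residual evaluation re-integrates the ODE, which is exactly why NLS becomes expensive for stiff, high-dimensional systems such as the full ESP model.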
Raissi _et al._[30] proposed physics-informed neural networks (PINNs), a new form of neural network training that is able to solve both forward and data-driven inverse problems. The method takes the physics of the problem into account when formulating the loss function. The physics is introduced into the loss function by minimizing the residual of the differential equations of the problem at a set of collocation points. Generally, the derivatives of the predicted unknowns are calculated using automatic differentiation [31]. PINNs were further developed into different variants such as conservative PINNs (cPINNs) [32], extended PINNs (XPINNs) [33], hp-VPINNs [34], Parareal PINNs (PPINNs) [35], and Separable PINNs [36]. Furthermore, McClenny and Braga-Neto [37] proposed a self-adaptive weights technique that automatically tunes the weights of the different loss terms in a multi-objective PINN loss. In the present study, we consider self-adaptive weights for calculating the total loss. PINNs and their variants have been applied to inverse problems such as unsaturated groundwater flow [38], diesel engines [39], and supersonic flows [40], to name just a few.
Accurate modelling of the ESP system that considers the complexities of liquid-liquid two-phase flow presents significant challenges. To enhance tractability, a common approach involves employing a lumped-element model for the flow within the pump and pipes. However, this lumped model still
involves several unknown parameters, such as the fluid bulk modulus, effective viscosity of the emulsion, and varying equivalent resistance of the pipeline over the well's operational lifespan. Obtaining these parameters under the conditions prevalent in oil field extraction presents a significant challenge. Consequently, there is a need for techniques that can effectively obtain these parameters using limited and noisy data. Developing such techniques becomes crucial as they can optimize oil extraction and facilitate the implementation of control strategies within the ESP pumping system.
In this paper, we explore the possibility of using PINNs to estimate the important characteristic parameters (flow, pump, etc.) and predict the dynamics of state variables based on a limited set of known field state variables. We develop a PINN model for this problem and conduct computational studies with simulated and experimental data. First, we study the identifiability problem using structural and practical identifiability analyses. This helps us to determine whether a particular parameter and a combination of parameters are locally or globally identifiable. Once we obtain the possible set of unknown parameters, we formulate PINNs for the inverse problem. As discussed earlier, structural identifiability alone does not guarantee accurate identification of unknown parameters for noisy and experimental data. We conduct detailed numerical studies using simulated and experimental data with different water-to-oil ratios. The PINN solution could reduce the requirement for expensive field laboratory tests to obtain the flow parameters.
The remainder of this research paper is organized as follows. In Section 2, we discuss the methodology: we provide an overview of the experimental setup (SS2.1), the dynamic model of the ESP system (SS2.2), the structure of the PINN model (SS2.3), and the details regarding data acquisition and generation (SS2.4). We also discuss the structural identifiability analysis within this section (SS2.5). In Section 3, we present the results obtained from the structural local and practical identifiability analyses. Subsequently, in Section 4, we present the results of the PINN model for state and unknown parameter estimation. Finally, in Section 5, we present the conclusions drawn from this study, summarizing the key findings and discussing the results.
## 2 Methodology
This work consists of two parts: experimental and computational. The experimental part (Section 2.1) explains the materials and methods used to acquire data from a laboratory-scale ESP system. The computational part includes the definition of the PINN model in Section 2.3. We consider simulated and experimental data for the inverse problem. The data generation for the simulated study is discussed in Section 2.4. The identifiability analysis is discussed in Section 2.5.
### Experimental study
#### 2.1.1 Experimental setup
The experimental setup was designed to investigate the ESP performance under various flow conditions, including single-phase and water/oil two-phase flows. The main equipment is an eight-stage ESP model P100L 538 series from Baker Hughes(r), located downstream of the oil and water mixture point. A schematic diagram of the experimental setup is shown in Fig. 1, indicating all the components and dimensions of the piping system.
The ESP impellers have a mixed-flow geometry with an outer diameter of \(108\,\mathrm{mm}\) and \(7\) blades. Pressure gauges and temperature transmitters were installed at the inlet and outlet of the ESP; they measure the suction and discharge pressure and temperature, respectively. The pressure transducers are capacitive and manufactured by Emerson Rosemount(tm), series 2088. The temperature transducers are four-wire resistance temperature detectors type PT100 with 1/10 DIN accuracy. The pump shaft torque and rotational speed are measured using the sensor model T21WN manufactured by HBM(r). Each pump in the experimental setup is driven by a three-phase induction motor, each supplied by its own Variable Speed Drive (VSD).
The fluids used in the experiments are water and a blend of mineral oil. The oil viscosity was characterized by a rotational rheometer model HAAKE MARS III (see Appendix A.1 for the viscosity measurements in different temperatures). The water fraction inside the oil phase is measured by the water cut meter model Nemko 05 ATEX 112 manufactured by Roxar(r). The Coriolis meters used to measure the density and the mass flow rates of the oil and water flow lines are model F300S355 manufactured by Micro Motion(r).
As the system operates in a closed loop, it tends to heat up, changing the fluid viscosity. Thus, the heat exchanger controls the fluid's temperature, and hence its viscosity, inside the flow line. The temperature is regulated by monitoring the ESP inlet temperature (\(T_{1}\)). This temperature measurement is also used
for calculating the working fluid viscosity. The heat exchanger (HE) has an independent water flow line comprised of a chiller, a heater, and a water tank, allowing it to cool and heat the fluid. Also, as the oil viscosity strongly depends on the temperature, it is possible to conduct tests with different viscosities with the same oil by adjusting the fluid's temperature.
The operational ranges and uncertainties of the instruments are presented in Table A.14. The specifications of the pumps, motors, variable speed drive, valve, tank, and heat control system are presented in Table A.13.
#### 2.1.2 Experimental procedure
In this section, we detail the experimental procedure utilized to evaluate the dynamic behavior of the ESP system when varying the ESP shaft angular velocity while operating under a two-phase flow of oil and water. For simplicity, when referring to ESP angular velocity, it refers to the ESP shaft angular velocity. The procedure is divided into two stages; the first stage is the start-up stage, and the objective of this stage is to obtain a stable water cut. The second stage is the transient acquisition stage, in which we collect the experiment data.
**Stage 1:**: Start-up:
1. Begin by slowly starting the oil pump and water pump at a low speed (e.g., \(62\,\mathrm{rad}\,\mathrm{s}^{-1}\)) to ensure system stability and prevent sudden pressure surges.
2. Gradually increase the rotation speed of the ESP while monitoring the suction pressure. Maintain the ESP angular velocity below \(105\,\mathrm{rad}\,\mathrm{s}^{-1}\) and ensure the suction pressure remains above \(100\,\mathrm{kPa}\) to prevent cavitation in the ESP first stages.
Figure 1: **Schematic diagram of the experimental setup:** A schematic diagram of the experimental setup indicates all the piping system’s components and dimensions. It has four distinct flow lines, an oil flow line, a water flow line, a two-phase flow line carrying the oil-water mixture, and a closed loop of water used in the heat exchanger (HE). The separation tank separates the oil and water phases from the emulsion by gravity and stores the oil and water phases. The fluids are pumped independently from the separator tank and operate in a closed loop. The twin-screw pump pumps the oil phase from the separator tank. Then, the oil phase flows through a shell and tube heat exchanger, a water cut meter (measures the water fraction inside the oil phase), and a Coriolis meter (measures the mass flow rate and density). A single-stage centrifugal pump pumps the water phase from the separator tank that flows through a Coriolis meter and a remotely controlled valve. Then, the oil and water phases mix in a “T” joint, forming an emulsion. Then, the emulsion is pumped by the ESP, flows through a remotely controlled valve, and returns to the separation tank.
3. Check the measured water cut and adjust the water pump angular velocity if necessary. Repeat this step until the desired water cut is achieved.
4. Stop the water pump.
**Stage 2:**: Transient acquisition:
1. Increase the ESP angular velocity to the desired initial value.
2. Increase the oil pump angular velocity while closing the ESP downstream valve to maintain the suction pressure within the range of \(100\,\mathrm{kPa}\) to \(600\,\mathrm{kPa}\). The upstream pressure of the ESP should not exceed \(600\,\mathrm{kPa}\) for safe operation.
3. Gradually increase the ESP angular velocity to reach the desired final value.
4. Increase the oil pump angular velocity while closing the downstream valve to maintain the suction pressure within the range of \(100\,\mathrm{kPa}\) to \(600\,\mathrm{kPa}\).
5. Return the ESP angular velocity to the desired initial speed.
6. Allow sufficient time for the suction pressure, discharge pressure, and volumetric flow rate to stabilize. This stabilization period ensures that only the dynamics of the ESP angular velocity change are captured.
7. Start data acquisition.
8. Wait \(5\,\mathrm{s}\).
9. Adjust the ESP angular velocity on the VSD to the final desired value.
10. Allow another stabilization period for the suction pressure, discharge pressure, and volumetric flow rate. This period allows the complete dynamic response of the system to the rotation change to be captured.
11. End data acquisition.
12. Return to **Stage 1**.
We considered the system stable when the coefficient of variation of the last \(20\,\mathrm{s}\) of measurements of the suction and discharge pressure and oil volumetric flow rate was smaller than \(0.8\,\%\). These criteria are the same as used and described by Figueiredo _et al._[41]. Additionally, for the temperature, we considered a tolerance of \(\pm 0.25\,^{\circ}\mathrm{C}\).
The signals were sampled at a rate satisfying the Nyquist-Shannon sampling theorem for the maximum frequency specified by the pressure transmitter manufacturer. Then, the signals were downsampled to the same frequency.
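A resampling step of this kind can be sketched as follows (our illustration with assumed sampling rates, not the authors' exact pipeline); `scipy.signal.decimate` applies an anti-aliasing low-pass filter before downsampling, so the reduced-rate signal still respects the Nyquist-Shannon criterion for the retained band:

```python
import numpy as np
from scipy.signal import decimate

fs = 1000.0                        # assumed original sampling rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)  # 2 s of data -> 2000 samples
# A slow 5 Hz component of interest plus a small 400 Hz disturbance.
signal = np.sin(2 * np.pi * 5.0 * t) + 0.01 * np.sin(2 * np.pi * 400.0 * t)

q = 10                             # downsampling factor -> 100 Hz output rate
signal_ds = decimate(signal, q)    # zero-phase IIR anti-alias filter by default
```

The 400 Hz component, which would alias at the new 100 Hz rate, is attenuated by the filter before the samples are discarded.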
### Electrical submersible pump system dynamic model
In this section, we describe the dynamic behavior of the ESP system by using a set of Ordinary Differential Equations (ODEs), derived from a bond graph representation of the system (presented in Fig. B.1 in Appendix B). The model is based on the works of Tanaka _et al._[42] and Higo _et al._[43], which provide bond graph models for centrifugal pumps and pipelines, respectively. In this study, we focused only on the mechanical and hydraulic domains of the system for simplicity. Therefore, the electrical motor was not considered in this model. Instead, we treated the shaft torque as an input to the system, decoupling the mechanical and electrical domains.
The ESP system is described by the state vector
\[\mathbf{X}=\{Q_{p},\omega,Q_{1},Q_{2},P_{1},P_{2}\}, \tag{1}\]
where \(Q_{p}\) is the pump volumetric flow rate, \(\omega\) is the shaft angular velocity, \(Q_{1}\) and \(Q_{2}\) are the volumetric flow rate in the upstream and downstream pipeline respectively, \(P_{1}\) and \(P_{2}\) are the intake and discharge pressure respectively. The governing equations describing the dynamics of the system depend on the input vector \(\gamma(t)\). These are of the form
\[\dot{\mathbf{X}}=F(\mathbf{X},\gamma(t)). \tag{2}\]
The ESP system is governed by the following set of ODEs:
\[\frac{dQ_{p}}{dt}=\frac{\left(P_{1}-P_{2}+k_{3}\,\mu\,Q_{p}\right) A_{p}}{\rho\,L_{p}}+\frac{A_{p}\left(k_{1p}\,\omega\,Q_{p}+k_{2p}\,\omega^{2}+ k_{4p}\,{Q_{p}}^{2}\right)}{L_{p}}, \tag{3a}\] \[\frac{d\omega}{dt}=\frac{\gamma(t)-k_{1s}\,\rho\,{Q_{p}}^{2}-k_{2 s}\,\rho\,\omega\,Q_{p}-k_{3s}\,\mu\,\omega-k_{4s}\,\omega-k_{5s}\,\omega^{2}}{I_ {s}},\] (3b) \[\frac{dQ_{1}}{dt}=\frac{\left(k_{bd}\omega_{t}-Q_{1}\right)k_{bl} \,\mu+P_{in}-P_{1}-f_{f}(Q_{1},\mu,L_{u},d_{u})\,{Q_{1}}^{2}\,A_{u}}{\rho\,L_{ u}}-\frac{k_{u}\,{Q_{1}}^{2}}{2\,L_{u}\,A_{u}},\] (3c) \[\frac{dQ_{2}}{dt}=\frac{\left(P_{2}-P_{out}-f_{f}(Q_{2},\mu,L_{d}, d_{d})\,{Q_{2}}^{2}\right)A_{d}}{\rho\,L_{d}}-\frac{k_{d}\,{Q_{2}}^{2}}{2\,L_{d} \,A_{d}}-\frac{{Q_{2}}^{2}\,A_{d}}{L_{d}\,C_{v}(a)^{2}\,\rho_{0}},\] (3d) \[\frac{dP_{1}}{dt}=\frac{\left(Q_{1}-Q_{p}\right)B}{A_{u}\,L_{u}},\] (3e) \[\frac{dP_{2}}{dt}=\frac{\left(Q_{p}-Q_{2}\right)B}{A_{d}\,L_{d}}, \tag{3f}\]
where the parameters are described in detail in Appendix B, the function \(C_{v}(a)\) represents the valve coefficient as a function of the valve aperture (\(a\)), and \(\rho_{0}\) is the water density at \(15\,^{\circ}\mathrm{C}\). For simplicity, as we are not varying the valve aperture, we considered a single equivalent resistance coefficient obtained experimentally.
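As a minimal illustration of how the pressure states respond to flow imbalances, the following sketch integrates only Equations (3e) and (3f) with the flow rates held constant; all numerical values are assumed for illustration, not the paper's identified parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

B = 1.0e9          # assumed fluid bulk modulus, Pa
A_u = A_d = 5e-3   # assumed pipe cross-section areas, m^2
L_u = L_d = 20.0   # assumed pipe lengths, m
# Prescribed (constant) flow rates: upstream inflow exceeds the pump flow,
# which in turn exceeds the downstream outflow.
Q1, Qp, Q2 = 1.05e-2, 1.00e-2, 0.98e-2   # m^3/s

def rhs(t, P):
    P1, P2 = P
    dP1 = (Q1 - Qp) * B / (A_u * L_u)    # eq. (3e): intake pressure rises
    dP2 = (Qp - Q2) * B / (A_d * L_d)    # eq. (3f): discharge pressure rises
    return [dP1, dP2]

sol = solve_ivp(rhs, (0.0, 1e-3), [2.0e5, 8.0e5], t_eval=[0.0, 1e-3])
```

With constant flows the pressures ramp linearly; in the full model the flow equations (3a)-(3d) feed back on the pressures, closing the loop.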
Considering the high viscosity of the emulsion, we assumed a laminar flow in the emulsion and oil flow lines. Therefore, the friction function \(f_{f}(Q,\mu,L,d)\) is derived using the Darcy-Weisbach equation and the friction factor for laminar flow, resulting in the following equation:
\[f_{f}(Q,\mu,L,d)=\frac{128\,L\,\mu}{\pi\,Q\,d^{4}}. \tag{4}\]
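Equation (4) translates directly into code; the sanity check below confirms that \(f_{f}(Q,\mu,L,d)\,Q^{2}\), as the term appears in the flow equations, reduces to the Hagen-Poiseuille laminar pressure drop (the numerical values are illustrative only):

```python
import math

def f_f(Q, mu, L, d):
    """Laminar friction term of Eq. (4): 128*L*mu / (pi*Q*d**4).

    Q: volumetric flow rate [m^3/s], mu: dynamic viscosity [Pa s],
    L: pipe length [m], d: pipe inner diameter [m].
    """
    return 128.0 * L * mu / (math.pi * Q * d ** 4)

# Multiplied by Q^2, this recovers the Hagen-Poiseuille pressure drop
# 128*mu*L*Q / (pi*d^4) for laminar pipe flow.
dp = f_f(1.0e-3, 0.1, 20.0, 0.05) * (1.0e-3) ** 2
```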
The ESP system model takes a single input variable, \(\gamma(t)\), representing the torque applied to the ESP shaft. In the experimental setup, the intake pressure of the twin-screw pump, \(P_{in}\), and the pressure at the end of the pipeline, \(P_{out}\), refer to the same separation tank, which is open to the atmosphere. Therefore, for the sake of simplicity, these pressures are assumed to be constant and are set to \(0\,\mathrm{Pa}\).
It is important to mention that in both experimental and real-world conditions, various factors, such as turbulence and shear generated by the ESP, valve, and other devices installed in the flow line, contribute to emulsion formation, leading to changes in emulsion viscosity. Thus, the viscosity of the emulsion varies as the fluids traverse the system due to these effects. Additionally, the emulsion formation is influenced by the emulsion stability, chemical properties of the fluid, and the presence or addition of emulsifiers and surfactants. Moreover, in practical scenarios, temperature gradients along the pipeline significantly impact viscosity. In such cases, the assumption of a single viscosity is no longer valid, and appropriate assumptions must be made to account for these complexities.
In our specific case, the ESP operates within a closed-loop system. Consequently, the same emulsion continuously flows through the ESP and valves while the fluid temperature is controlled through a heat exchanger. Given the tank volume, the operating volumetric flow rate, temperature control measures, and the transient experiment duration, we assumed that viscosity remains approximately constant.
Although the Brinkman [44] model is widely used in the oil industry for emulsion effective viscosity, numerous alternative models have been proposed (e.g., Einstein [45], Taylor [46] and Pal and Rhodes [47]). Recently, Bulgarelli _et al._[12] introduced a model specifically for effective emulsion viscosity inside ESPs. However, this model is exclusively applicable to ESPs and does not hold for pipelines. Thus, in this study, we employed the Brinkman [44] model to determine the emulsion viscosity for the entire system; it is expressed as:
\[\mu=\mu_{c}\left(\frac{1}{1-\Omega}\right)^{2.5}\qquad 0<\Omega<1, \tag{5}\]
where \(\Omega\) represents the water cut and \(\mu_{c}\) is the viscosity of the continuous phase.
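Equation (5) can be implemented directly; the numerical values below are illustrative, not measured fluid properties:

```python
def brinkman_viscosity(mu_c, water_cut):
    """Effective emulsion viscosity of Eq. (5): mu_c * (1 / (1 - Omega))**2.5.

    mu_c: continuous-phase viscosity [Pa s]; water_cut: dispersed-phase
    fraction Omega, valid for 0 < Omega < 1.
    """
    if not 0.0 < water_cut < 1.0:
        raise ValueError("water cut must lie in (0, 1)")
    return mu_c * (1.0 / (1.0 - water_cut)) ** 2.5

# e.g. a 100 mPa s continuous oil phase with a 30 % water cut (assumed values):
mu = brinkman_viscosity(0.1, 0.3)
```

Note how the model diverges as \(\Omega\to 1\), reflecting the sharp viscosity increase of concentrated emulsions before phase inversion.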
### Physics-informed neural networks
The physics-informed neural network (PINN) comprises a neural network (NN) that approximates the solution of a given differential equation and a second part that encodes the corresponding differential equation into the optimization of the network parameters. The differential equation is encoded in the loss function, along with the data loss, with the help of automatic differentiation [31]. In this study, we consider a fully connected feed-forward deep neural network (DNN) to approximate the state variables (i.e., \(Q_{p}\), \(\omega\), \(Q_{1}\), \(Q_{2}\), \(P_{1}\), \(P_{2}\)). The DNN architecture comprises one input layer, \(h\) hidden layers, and one linear output layer and can be represented as follows:
\[\mathbf{y}_{0} =t\qquad\text{Input layer} \tag{6a}\] \[\mathbf{y}_{i} =\sigma\left(\mathbf{W}_{i}\mathbf{y}_{i-1}+\mathbf{b}_{i}\right),\quad\forall\ 1\leq i\leq h-1\qquad\text{Hidden layers} \tag{6b}\] \[\mathbf{\hat{y}} =\mathbf{y}_{h}=\mathbf{W}_{h}\mathbf{y}_{h-1}+\mathbf{b}_{h}\qquad\text{Output layer} \tag{6c}\]
where \(\mathbf{W}\in\mathbb{R}^{m\times n}\) and \(\mathbf{b}\in\mathbb{R}^{m}\) are the weights and biases of the neural network, and \(n\) and \(m\) are the sizes of the previous and current layers, respectively. The function \(\sigma(.)\) is the activation function, taken to be the \(\tanh(.)\) function in this study. For the remainder of this work, the weights and biases are referred to collectively as the parameters of the neural network, \(\mathbf{\theta}=\{\mathbf{W},\mathbf{b}\}\).
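The forward map of Equations (6a) to (6c) can be sketched in plain NumPy; the layer sizes below are illustrative (the paper does not prescribe them here):

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [1, 32, 32, 6]   # input t, two tanh hidden layers, six outputs

# One (W, b) pair per layer; W has shape (fan_out, fan_in) as in Eq. (6).
params = [(rng.normal(scale=0.5, size=(m, n)), np.zeros(m))
          for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(t):
    """Map a batch of times to the six state estimates Qp, w, Q1, Q2, P1, P2."""
    y = np.asarray(t, dtype=float).reshape(-1, 1)   # y_0 = t, Eq. (6a)
    for W, b in params[:-1]:
        y = np.tanh(y @ W.T + b)                    # hidden layers, Eq. (6b)
    W, b = params[-1]
    return y @ W.T + b                              # linear output, Eq. (6c)

states = forward([0.0, 0.5, 1.0])   # one row of six state estimates per time
```

In the actual PINN this forward pass would live in an autodiff framework so that \(d\hat{y}/dt\) can be obtained by automatic differentiation.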
The schematic diagram of the PINN considered in this study is presented in Fig. 2. In this work, a single DNN approximates the set of state variables of the ODE system presented in Section 2.2. This approach demonstrated satisfactory results while maintaining a lower computational cost than training a separate DNN for each state variable.
The loss consists of three terms: the physics loss (\(\mathcal{L}^{ode}(\mathbf{\theta},\mathbf{\Lambda},\mathbf{\lambda}_{r})\)), the data loss (\(\mathcal{L}^{data}(\mathbf{\theta},\mathbf{\lambda}_{d})\)), and the initial condition loss (\(\mathcal{L}^{ic}(\mathbf{\theta},\mathbf{\lambda}_{ic})\)). The total loss is given as
\[\mathcal{L}(\mathbf{\theta},\mathbf{\Lambda},\mathbf{\lambda}_{d},\mathbf{\lambda}_{r},\mathbf{ \lambda}_{ic})=\mathcal{L}^{ode}(\mathbf{\theta},\mathbf{\Lambda},\mathbf{\lambda}_{r})+ \mathcal{L}^{data}(\mathbf{\theta},\mathbf{\lambda}_{d})+\mathcal{L}^{ic}(\mathbf{\theta},\mathbf{\lambda}_{ic}). \tag{7}\]
The physics loss is given by the weighted sum (with self-adaptive weights) of the physics losses of the individual equations,
\[\mathcal{L}^{ode}(\mathbf{\theta},\mathbf{\Lambda},\mathbf{\lambda}_{r}) =\sum_{s\in\Phi}m(\lambda_{r}^{s})\,\mathcal{L}^{ode}_{s},\qquad\Phi=\{Q_{p},\ \omega,\ Q_{1},\ Q_{2},\ P_{1},\ P_{2}\}, \tag{8a}\] \[=\sum_{s\in\Phi}m(\lambda_{r}^{s})\left[\frac{1}{{}^{s}N^{ode}}\sum_{i=1}^{{}^{s}N^{ode}}{}^{s}r_{i}{}^{2}\right], \tag{8b}\] \[=\sum_{s\in\Phi}m(\lambda_{r}^{s})\left[\frac{1}{{}^{s}N^{ode}}\sum_{i=1}^{{}^{s}N^{ode}}\left(\frac{d\hat{y}_{s}}{dt}\bigg{|}_{t_{i}}-f_{s}\left(\hat{\mathbf{y}}(t_{i};\mathbf{\theta}),\gamma(t_{i});\mathbf{\Lambda}\right)\right)^{2}\right], \tag{8c}\]
where \(\mathcal{L}^{ode}_{s}\), \(s\in\Phi\), \(\Phi=\{Q_{p},\ \omega,\ Q_{1},\ Q_{2},\ P_{1},\ P_{2}\}\), are the physics-informed losses for the differential equations describing the dynamics of the state variables (Equations (3a) to (3f)). \({}^{s}N^{ode}\) is the number of residual points for each equation, \(\lambda_{r}^{s}\) are the self-adaptive weights associated with each physics loss, and \(m(.)\) is a mask function, taken as the softplus function in the present study. \({}^{s}r_{i}\) is the residual of equation \(s\) at the \(i\)-th point. We discuss the calculation of the residual in Appendix C. In the present study, the residual points are the same for all equations; thus, all \({}^{s}N^{ode}\) are equal.
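The structure of the physics loss in Equation (8) can be sketched on a toy ODE \(dy/dt=-y\). The paper obtains \(d\hat{y}_{s}/dt\) by automatic differentiation; to keep this snippet framework-free, we substitute a closed-form surrogate whose derivative is known:

```python
import numpy as np

t_col = np.linspace(0.0, 1.0, 50)    # collocation (residual) points

# Stand-ins for the network output and its autodiff time derivative:
# y(t) = exp(-t) solves dy/dt = -y exactly, so the residual vanishes.
y_hat = np.exp(-t_col)
dy_hat = -np.exp(-t_col)

def physics_loss(lmbda_r):
    """Softplus-masked mean squared ODE residual, as in Eq. (8)."""
    residual = dy_hat - (-y_hat)             # r_i = d y_hat/dt - f(y_hat)
    mask = np.log1p(np.exp(lmbda_r))         # m(lambda) = softplus(lambda)
    return mask * np.mean(residual ** 2)
```

For an imperfect network output the residuals would be nonzero, and the masked mean square would penalize the equations the network currently violates most.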
Similarly, the data loss is given by the weighted sum (with self-adaptive weights) of the data losses for \(P_{1}\) and \(P_{2}\),
\[\mathcal{L}^{data}(\mathbf{\theta},\mathbf{\lambda}_{d}) =\sum_{s\in\phi}m(\lambda_{d}^{s})\,\mathcal{L}^{data}_{s},\qquad\phi=\{P_{1},\ P_{2}\}, \tag{9a}\] \[=\sum_{s\in\phi}m(\lambda_{d}^{s})\left[\frac{1}{{}^{s}N^{data}}\sum_{i=1}^{{}^{s}N^{data}}\left[y_{s}(t_{i})-\hat{y}_{s}(t_{i};\mathbf{\theta})\right]^{2}\right], \tag{9b}\]
where \({}^{s}N^{data}\) is the number of points at which the measured data are available for each of the known quantities, \(y_{s}(t_{i})\) is the known value of the \(s\) quantity at time \(t_{i}\) and the corresponding approximated value from the neural network is \(\hat{y}_{s}(t_{i};\mathbf{\theta})\). Similar to physics loss, \(\lambda_{d}^{s}\) is the self-adaptive weight associated with each data loss, and \(m(.)\) is a mask function taken as the softplus function in the present study.
Figure 2: **Schematic representation of the Physics-informed neural network for the ESP system:** The deep neural network, shown in the red dashed-dotted rectangle on the left, is considered to approximate the solution of the ODEs system described in Section 2.2 (Equations (3a) to (3f)). The input to the neural network is time, denoted by \(t\), and the output is the ESP system states, highlighted in the green dotted rectangle of the figure. Each differential equation of the ESP system has residues at certain collocation points that must be minimized. We indicate them in the black dotted region on the right. The time derivative of each state (DNN output) is computed using automatic differentiation. The total loss, denoted as \(\mathcal{L}(\mathbf{\theta})\), includes data loss, physics (ODE) loss, and initial condition loss. The data loss (\(\mathcal{L}^{data}(\hat{y}-y)\)) is the loss between the DNN output and measured data, the physics (ODE) loss is the loss of the residue of the equations at the collocation points, and the initial condition loss is a loss between the initial condition and the DNN output at \(t=0\). \(\mathbf{\lambda}_{d}\), \(\mathbf{\lambda}_{r}\), and \(\mathbf{\lambda}_{ic}\) are the weights to the data loss, physics loss, and initial condition loss, respectively, while calculating the total loss. These may be fixed or adaptive. \(\mathbf{\Lambda}\) are the unknown parameters in the ODE system. The activation function of the DNN is represented by \(\sigma\).
The initial condition loss is given by
\[\mathcal{L}^{ic}(\mathbf{\theta},\mathbf{\lambda}_{ic}) =\sum_{s\in\Phi}m(\lambda_{ic}^{s})\mathcal{L}_{s}^{ic},\qquad \Phi=\{Q_{p},\ \omega,\ Q_{1},\ Q_{2},\ P_{1},\ P_{2}\} \tag{10a}\] \[=\sum_{s\in\Phi}m(\lambda_{ic}^{s})\left[\left(y_{s}(t_{0})-\hat{y}_{s}(t_{0};\mathbf{\theta})\right)^{2}\right], \tag{10b}\]
where \(y_{s}(t_{0})\) is the known value of the \(s\) quantity at time zero, and the corresponding approximated value from the neural network is \(\hat{y}_{s}(t_{0};\mathbf{\theta})\). \(m(.)\) is a mask function taken as the softplus function in the present study.
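The three weighted losses above share the same structure: each per-state loss is scaled by a softplus-masked self-adaptive weight before summation. A minimal NumPy sketch of this composition (with hypothetical state keys and function names; the actual implementation uses JAX):

```python
import numpy as np

def softplus(x):
    # mask function m(.) applied to the raw self-adaptive weights
    return np.log1p(np.exp(x))

def total_loss(ode_residuals, data_errors, ic_errors, lam_r, lam_d, lam_ic):
    """Softplus-masked, self-adaptively weighted sum of the three losses.

    ode_residuals: dict state -> residuals at the collocation points
    data_errors:   dict state -> (measured - predicted) at data points
    ic_errors:     dict state -> scalar initial-condition mismatch
    lam_*:         dicts of raw (unmasked) self-adaptive weights per state
    """
    l_ode = sum(softplus(lam_r[s]) * np.mean(r ** 2)
                for s, r in ode_residuals.items())
    l_data = sum(softplus(lam_d[s]) * np.mean(e ** 2)
                 for s, e in data_errors.items())
    l_ic = sum(softplus(lam_ic[s]) * e ** 2
               for s, e in ic_errors.items())
    return l_ode + l_data + l_ic
```

During training, gradient descent is taken on the network parameters while the self-adaptive weights are driven to increase the penalty on poorly fit terms.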
The neural network architecture considered in the present study is \([1,20,20,20,6]\), where the input layer consists of one neuron that takes time \(t\) as input and is scaled between \([-1,1]\). In the output layer, we have six neurons that approximate the six state variables discussed in Section 2.2 (Equation (3a) to Equation (3f)). The neural network outputs pass through a transformation given by:
\[f(x)=\frac{(x+1)(x_{\text{max}}-x_{\text{min}})}{2}+x_{\text{min}} \tag{11}\]
where \(f(x)\) represents the state variable in the physical domain, and \(x\) is the output from the neural network. For known states (i.e., \(P_{1}\) and \(P_{2}\)), \(x_{\text{min}}\) and \(x_{\text{max}}\) are the minimum and maximum measured values, respectively. However, for the unknown states, the \(x_{\text{min}}\) and \(x_{\text{max}}\) are estimated by solving a system of equations at two specific time points \(t_{1}\) and \(t_{2}\). The system of equations, derived from simplifications of Equations (3a) and (3b), is given by:
\[\begin{cases}k_{1p}\,\rho\,\omega(t_{i})\,Q_{p}(t_{i})+k_{2p}\,\rho\,\omega(t_ {i})^{2}+k_{4p}\,\rho Q_{p}(t_{i})^{2}-(P_{2}(t_{i})-P_{1}(t_{i}))=0,\\ k_{1s}\,\rho\,Q_{p}(t_{i})^{2}-k_{2s}\,\rho\,\omega(t_{i})\,Q_{p}(t_{i})+\gamma (t_{i})=0,\qquad t_{i}\in\{t_{1},t_{2}\}\end{cases} \tag{12}\]
where \(k_{1p}\), \(k_{2p}\), \(k_{4p}\), \(k_{1s}\), and \(k_{2s}\) are the ESP system parameters, \(\rho\) represents the fluid density, and \(t_{i}\) denotes the time instant for the state variables (\(\omega\), \(Q_{p}\), \(P_{1}\), \(P_{2}\)) and system input (\(\gamma\)). Furthermore, the time points \(t_{1}\) and \(t_{2}\) are determined based on the pressure difference between \(P_{2}(t)\) and \(P_{1}(t)\) as follows:
\[t_{1}=\arg\min_{t}\left(P_{2}(t)-P_{1}(t)\right),\quad t_{2}=\arg\max_{t}\left( P_{2}(t)-P_{1}(t)\right). \tag{13}\]
Solving the system of equations at the time point \(t_{1}\) provides the estimation for \(x_{\text{min}}\), and at the time point \(t_{2}\) for \(x_{\text{max}}\). We would also like to mention that in order to solve Equation (12), we need to know its parameters (\(k_{1p}\), \(k_{2p}\), \(k_{4p}\), \(k_{1s}\), and \(k_{2s}\)), which are unknown in this case. Thus, we assumed initial values for them. In this study, while solving Equation (12), we considered \(\rho=931.51\,\mathrm{kg\,m^{-3}}\), which represents a water cut of \(50\,\%\), as an initial guess, and for the remaining pump parameters we considered values \(15\%\) above their true values. Additionally, we used the values obtained for \(Q_{p}\) also for the \(Q_{1}\) and \(Q_{2}\) states (pipeline flow rates).
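As an illustration of this bound-estimation step, the sketch below solves Equation (12) at a single time instant with SciPy's `fsolve`. All coefficient values here are hypothetical placeholders (the true ESP coefficients are not reproduced), and the "measurements" are synthesized from an assumed operating point:

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical coefficients; only the structure of Equation (12) is real.
rho = 931.51                          # kg/m^3, 50% water-cut guess
k1p, k2p, k4p = 100.0, 0.5, -1.0e5    # pump pressure-equation coefficients
k1s, k2s = 10.0, 1.0                  # shaft torque-equation coefficients

def residuals(x, dP, gamma):
    """Equation (12) at one time instant: x = (omega, Q_p),
    dP = P2(t_i) - P1(t_i), gamma = measured torque input."""
    omega, Qp = x
    eq_p = k1p * rho * omega * Qp + k2p * rho * omega**2 + k4p * rho * Qp**2 - dP
    eq_t = k1s * rho * Qp**2 - k2s * rho * omega * Qp + gamma
    # scale the two residuals to comparable magnitudes for the solver
    return [eq_p * 1e-6, eq_t * 1e-3]

# Synthetic measurements consistent with omega = 300 rad/s, Qp = 0.01 m^3/s
omega_t, Qp_t = 300.0, 0.01
dP = k1p * rho * omega_t * Qp_t + k2p * rho * omega_t**2 + k4p * rho * Qp_t**2
gamma = -(k1s * rho * Qp_t**2 - k2s * rho * omega_t * Qp_t)

omega_est, Qp_est = fsolve(residuals, x0=[200.0, 0.005], args=(dP, gamma))
```

Evaluating the same solve at \(t_{1}\) and \(t_{2}\) (the argmin/argmax of \(P_{2}-P_{1}\) from Equation (13)) yields the \(x_{\text{min}}\) and \(x_{\text{max}}\) estimates for the unknown states.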
It is also worth noting that the state variables are at different scales (magnitudes). To address this, we employ the transformation described by Equation (11) to bring the neural network's outputs to a similar order across the different state variables. This transformation can take different forms, depending on the specific requirements of the problem. In the present study, the bounds \(x_{\text{min}}\) and \(x_{\text{max}}\) vary across the different experimental investigations as they depend on \(P_{2}(t)\) and \(P_{1}(t)\).
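The output scaling of Equation (11), together with its inverse (useful for normalizing measured data), can be written as a pair of small helpers; `to_physical` and `to_network` are hypothetical names for this sketch:

```python
def to_physical(x, x_min, x_max):
    # Equation (11): map a network output in [-1, 1] to [x_min, x_max]
    return (x + 1.0) * (x_max - x_min) / 2.0 + x_min

def to_network(f, x_min, x_max):
    # inverse map, e.g. for scaling measured data into [-1, 1]
    return 2.0 * (f - x_min) / (x_max - x_min) - 1.0
```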
The trainable parameters (\(\mathbf{\theta}\), \(\mathbf{\Lambda}\), \(\mathbf{\lambda}_{d}\), \(\mathbf{\lambda}_{r}\) and \(\mathbf{\lambda}_{ic}\)) are optimized using the Adam optimizer. The derivatives of the outputs with respect to the input are evaluated via automatic differentiation using the Python library JAX, while the Adam optimization is carried out using the Optax library. As discussed earlier, we considered self-adaptive weights [37]; thus, we used three separate optimizers for the neural network parameters (weights and biases), the unknown parameters, and the self-adaptive weights. We discuss the number of training epochs and the learning rate scheduler for each numerical example in Appendix D. Further, the transformations of the unknown parameters will be discussed in Section 4.
### Data generation
In this study, we considered laboratory data collected from the experimental setup described in Section 2.1. Before applying the proposed PINN model to experimental data, we test the model with simulated data, both with and without noise. The simulated data are generated by solving the ODE system presented in Section 2.2. This section presents the steps for generating the simulated data and defines the parameters and input used in the simulations. We aim to generate simulated data as close as possible to the experimental data by setting the parameters and the input of the system near those of the experimental setup, as discussed hereafter.
For the simulated cases, in this study, we employed the Julia package DifferentialEquations.jl [48] to numerically solve the ESP system set of ODEs presented in Section 2.2. The simulated data were generated with \(\Delta t=$0.0001\,\mathrm{s}$\).
The model input is the torque (\(\gamma(t)\)), which we directly measured during each experimental investigation. After post-processing the torque signal, as described in Section 2.1.4, we observed that the torque signal still contained relatively high-frequency components, which would be unrealistic as an input for the ESP system. Therefore, we applied a Butterworth low-pass filter with a cut-off frequency of \(2\,\mathrm{Hz}\) to the torque signal to obtain a more representative input signal.
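A low-pass step like the one described can be sketched with SciPy's Butterworth design. The filter order (4 here) and the hypothetical torque signal are assumptions; only the 2 Hz cut-off is stated in the text:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10.0     # Hz, sample rate of the post-processed signals (Section 2.1.4)
cutoff = 2.0  # Hz, cut-off used for the torque input
# 4th-order filter is an assumption; normalize the cut-off by the Nyquist rate
b, a = butter(4, cutoff / (fs / 2.0), btype="low")

t = np.arange(0.0, 10.0, 1.0 / fs)
# hypothetical torque: 50 N·m mean plus a 4 Hz component to be removed
torque = 50.0 + 0.5 * np.sin(2.0 * np.pi * 4.0 * t)
torque_filt = filtfilt(b, a, torque)  # zero-phase, avoids lagging the input
```

`filtfilt` applies the filter forward and backward, which avoids introducing a phase lag into the model input.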
For fluid viscosity, we assumed a single value for the entire system. We calculated the average water cut and temperature observed during the corresponding experimental investigation and estimated the emulsion viscosity using the Brinkman [44] model. As for fluid density, we directly measured it using a Coriolis meter. Then, we calculated the average density obtained from the measurements conducted during each experimental investigation.
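The Brinkman estimate of the emulsion viscosity can be sketched as below; the continuous-phase viscosity and water cut are hypothetical, and the model form assumed here is the common Brinkman relation \(\mu_{e}=\mu_{c}(1-\phi)^{-2.5}\):

```python
def brinkman_viscosity(mu_continuous, phi_dispersed):
    """Brinkman effective emulsion viscosity:
    mu_e = mu_c * (1 - phi)^(-2.5), phi = dispersed-phase volume fraction."""
    return mu_continuous * (1.0 - phi_dispersed) ** -2.5

# hypothetical oil-continuous emulsion: mu_oil = 25 mPa·s, 25% water cut
mu_e = brinkman_viscosity(0.025, 0.25)
```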
In the tuning step of a dynamic model, the parameters are adjusted to fit real-world observations. However, despite the tuning step, we expect discrepancies between the model predictions and measured data. They arise due to uncertainties in the data and the model's representation of the actual system, which may not fully capture the system complexities. Additionally, in inverse problems, minor variations or errors in the fixed parameter can significantly affect the desired parameter estimation.
Therefore, to accurately evaluate the capability of PINNs in handling measurement uncertainties while excluding potential influences of missing physics or the tuning steps, we incorporated Gaussian noise into the simulated signals \(P_{1}\) and \(P_{2}\). The noise values used in these signals were determined based on the manufacturer's specifications for the pressure transmitters' uncertainties, as detailed in Table A.14.
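A sketch of this noise-injection step, with a hypothetical pressure level and standard deviation (the actual values follow the transmitter specifications in Table A.14):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def add_transmitter_noise(signal, sigma):
    """Add zero-mean Gaussian noise with standard deviation sigma,
    taken from the transmitter's specified uncertainty."""
    return signal + rng.normal(0.0, sigma, size=signal.shape)

p1_clean = np.full(1000, 2.0e5)  # Pa, hypothetical suction pressure level
p1_noisy = add_transmitter_noise(p1_clean, sigma=500.0)  # hypothetical sigma
```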
### Structural identifiability analysis
In Section 2.2, we introduced the set of ODEs that describes the ESP system dynamics, which has \(26\) parameters and \(6\) state variables. Before trying to estimate the unknown parameters of the system using the PINN, it is important to assess whether or not the parameters can be uniquely determined from a given set of data.
Consider a dynamical system in the following format
\[\dot{\mathbf{x}}(t) =\mathbf{f}(\mathbf{x}(t),\mathbf{u}(t),\mathbf{\Theta}), \tag{14}\] \[\mathbf{y}(t) =\mathbf{h}(\mathbf{x}(t),\mathbf{u}(t),\mathbf{\Theta}), \tag{15}\]
where \(\mathbf{x}(t)\) is an \(m\)-dimensional state vector, \(\mathbf{u}(t)\) is an \(n\)-dimensional input signal, \(\mathbf{y}(t)\) is an \(r\)-dimensional output signal or the measurable output, and \(\mathbf{\Theta}\) is \(k\)-dimensional vector of parameters. A parameter set \(\mathbf{\Theta}\) is said to be structurally globally identifiable if the following property holds:
\[\mathbf{h}(\mathbf{x}(t),\mathbf{u}(t),\mathbf{\Theta})=\mathbf{h}(\mathbf{x}(t),\mathbf{u}(t),\mathbf{\beta })\implies\mathbf{\Theta}=\mathbf{\beta}, \tag{16}\]
where \(\mathbf{\beta}\) is a \(k\)-dimensional vector of parameters. Furthermore, if the property in Equation (16) holds within a neighborhood of \(\mathbf{\Theta}\), it is referred to as structurally locally identifiable.
Thus, the identifiability property serves as a prerequisite for practical parameter estimation. Additionally, as stated by Daneker _et al._[49], if a parameter is locally identifiable, it implies that the search range for that parameter should be limited before attempting its estimation. On the other hand, for globally identifiable parameters, there is no need to define a search range.
## 3 Results and discussions for identifiability analysis
### Local identifiability
We utilized the method of differential elimination for dynamical models via projections proposed by Dong _et al._[23] to evaluate the structural identifiability of the ESP system model. It is important to note that our analysis focused specifically on assessing local identifiability due to out-of-memory issues when attempting to test for structural global identifiability. Additionally, our investigation considered the suction and discharge pressure (\(P_{1}\) and \(P_{2}\)) as available measurements, as they are typically measured in ESP deployments within oil fields.
From Tables B.1 and B.2, we observe that the ESP system model initially contains a total of \(26\) parameters. However, it is worth noting that \(8\) of these parameters are geometrical parameters, such
as pipe diameter, cross-sectional area, and pipeline length. They can be readily obtained, and they were fixed for both the upstream and downstream pipelines. Furthermore, \(3\) parameters refer to the twin-screw pump, which serves as a pressure booster for the flow line. In actual oil field extraction, the pressure is a characteristic of the well. Thus, we reduce the number of unknown parameters to \(15\). Subsequently, we assessed the local identifiability of the ESP system model based on these \(15\) parameters. The corresponding results are displayed in the first row of Table 2.
The ESP system model with \(15\) unknown parameters is found to be structurally locally unidentifiable, and it is impossible to estimate all of the parameters simultaneously. As discussed by Daneker _et al._[49], there are several approaches to address the identifiability issue, such as fixing specific parameters or introducing additional measured variables. For simplicity, when we refer to fixing a parameter, we consider that the parameter is known. In this study, we opted to work with the available measurements from the actual field and consider only the suction and discharge pressures (\(P_{1}\) and \(P_{2}\)). Therefore, we decided to fix the shaft inertia (\(I_{s}\)). Despite this adjustment, the model remains structurally locally unidentifiable, with only \(k_{3s}\) and \(k_{4s}\) being locally unidentifiable. The corresponding results are shown in the second row of Table 2.
Finally, we solved the issue of the model being locally unidentifiable by fixing both \(k_{3s}\) and \(k_{4s}\). The results are shown in the third row of Table 2. We found that fixing only one of them did not solve the problem. In summary, the ESP system model can only be structurally locally identifiable with the suction and discharge pressure measurements if the shaft inertia (\(I_{s}\)), viscous damping coefficient (\(k_{4s}\)), and disk friction coefficient (\(k_{3s}\)) are known.
### Practical identifiability
In section Section 3.1, we discussed the circumstances under which the ESP system is structurally locally identifiable. However, it is worth noting that this analysis was conducted assuming that the measured variables have no noise and that the model is error-free. Hence, we must also assess if we can estimate parameters accurately based on the available data's quantity and quality. This analysis is known as practical or posterior identifiability analysis, and it is possible that a structurally identifiable system may not be practically identifiable.
The practical identifiability can be assessed using either Monte Carlo simulations or sensitivity analysis. The Monte Carlo approach rigorously determines the practical identifiability of the model but comes with a high computational cost due to the requirement of multiple model fits. On the other hand, sensitivity analysis offers a faster computation method and provides information about the correlation structure among the parameters. This correlation structure can guide the fixing of parameters when practical identifiability is not achieved [19, 50].
In this study, we considered only the sensitivity approach to practical identifiability. We used the Fisher information matrix (FIM) to compute the correlation matrix of all parameters and determine their practical identifiability. We estimated the sensitivities of the ESP system model with respect to the parameters with the Julia package DiffEqSensitivity.jl. For the sensitivity analysis, we considered the simulated case and, similarly to Daneker _et al._[49], a measurement noise level of \(1\,\%\).
Then, to obtain the correlation matrix, we first estimate the covariance matrix (\(\mathbf{C}\)) which can be approximately obtained from the FIM by
\[\mathbf{C}=FIM^{-1}. \tag{17}\]
We then estimate the correlation matrix from (\(\mathbf{C}\)) with
\[\begin{cases}r_{ij}=\frac{C_{ij}}{\sqrt{C_{ii}\,C_{jj}}},&\text{if }i\neq j,\\ r_{ij}=1,&\text{if }i=j.\end{cases} \tag{18}\]
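Equations (17) and (18) amount to inverting the FIM and normalizing the resulting covariance, i.e. \(r_{ij}=C_{ij}/\sqrt{C_{ii}\,C_{jj}}\). A NumPy sketch:

```python
import numpy as np

def correlation_from_fim(fim):
    """Correlation matrix of the parameter estimates:
    C = FIM^{-1}, then r_ij = C_ij / sqrt(C_ii * C_jj)."""
    C = np.linalg.inv(fim)
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)
```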
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c} \hline \hline \(k_{1p}\) & \(k_{2p}\) & \(k_{3p}\) & \(k_{4p}\) & \(k_{1s}\) & \(k_{2s}\) & \(k_{3s}\) & \(k_{4s}\) & \(k_{5s}\) & \(I_{s}\) & \(\rho\) & \(B\) & \(\mu\) & \(k_{u}\) & \(k_{d}\) \\ \hline \(\times\) & \(\times\) & ✓ & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & ✓ & ✓ & ✓ & ✓ & ✓ \\ ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & \(\times\) & \(\times\) & ✓ & \(-\) & ✓ & ✓ & ✓ & ✓ & ✓ \\ ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & \(-\) & \(-\) & ✓ & \(-\) & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Local structural identifiability results of the ESP system model, considering that the intake pressure (\(P_{1}\)) and discharge pressure (\(P_{2}\)) are known.** The symbols \(\times\) indicate locally unidentifiable parameters, ✓ indicate locally identifiable parameters, and \(-\) represent fixed parameters.
When \(|r_{ij}|\) is close to \(1\), the parameters \(i\) and \(j\) are strongly correlated and cannot be individually estimated; therefore, the parameters are practically unidentifiable [19, 50]. The absolute values of the correlation matrix of the structurally locally identifiable parameters of the ESP system are shown in Fig. 3(a). For the practical identifiability analysis, we considered the simulated case for the first investigation of the test matrix (Table 1).
The correlation analysis shown in Fig. 3(a) clearly demonstrates a significant correlation among multiple parameters. Notably, the fluid density (\(\rho\)) and the pipeline resistances (\(k_{u}\) and \(k_{d}\)) exhibit a strong correlation. Consequently, although the initial analysis indicated structural local identifiability of these parameters, they are not practically identifiable within the context of the conducted investigation.
As many parameters are strongly correlated, we need to fix some of them to make the system practically identifiable. We consider the main variables of interest to be the fluid parameters and the pipeline resistances, since the fluid parameters (\(B\), \(\mu\), and \(\rho\)) change with the water cut, which changes over the well's life, and the pipeline resistances (\(k_{u}\) and \(k_{d}\)) change with the actuation of valves and wax deposition, for instance. Thus, we preferentially fixed the pump and shaft parameters. We started by fixing the parameter \(k_{1p}\), then \(k_{3s}\), \(k_{2p}\), and \(k_{3p}\) successively. The resulting parameter correlation matrix is presented in Fig. 3(b).
It is noticeable from Fig. 3(b) that, after fixing some parameters, the remaining ones are less correlated. Some parameters are still correlated, such as the viscosity (\(\mu\)) and the pump equivalent resistance (\(k_{4p}\)); however, this correlation is not as strong as that between the density (\(\rho\)) and the pipeline resistance (\(k_{u}\)) observed in Fig. 3(a). In Fig. 3(b), the correlation coefficient between \(\mu\) and \(k_{4p}\) is \(0.9880\), while in Fig. 3(a), the correlation coefficient between \(\rho\) and \(k_{u}\) is \(0.9994\).
## 4 Results and discussions for inverse problem using PINN
From the structural identifiability analysis presented in Section 3, we have defined three cases to evaluate the PINN for parameter and state estimation in the ESP system. Consistent with Section 3, we considered only the suction and discharge pressures as the measured variables. The three cases are as follows:
1. Flow parameters: In this case, we assumed as known all parameters of the ESP system model except for the flow properties, namely, density (\(\rho\)), viscosity (\(\mu\)), and the bulk modulus (\(B\)).
2. Flow, pipeline, and impeller parameters: Based on the local identifiability analysis results presented in the first row of Table 2, we assumed as known the parameters marked as locally unidentifiable and estimate the remaining identifiable ones. The unknown parameters considered are the flow properties (\(B\), \(\mu\), \(\rho\)), the pipeline equivalent resistances (\(k_{u}\) and \(k_{d}\)) and the pump parameters \(k_{3p}\) and \(k_{4p}\).
Figure 3: **Absolute values of the estimated correlation matrices for ESP system model parameters. The matrices are derived from the FIM and indicate whether parameters are identifiable: parameters with absolute correlation values near 1 are practically unidentifiable. Fig. 3(a) represents the twelve structurally locally identifiable parameters, while Fig. 3(b) focuses on eight parameters, keeping the fluid properties and pipeline resistances unknown.**
3. Flow, pipeline, impeller, and shaft parameters: Considering the findings from the practical identifiability analysis outlined in Section 3.2, we estimate the parameters depicted in Fig. 3(b) while assuming the remaining parameters as known. The unknown parameters are the fluid properties (\(B\), \(\mu\), \(\rho\)), the pipeline equivalent resistances (\(k_{u}\) and \(k_{d}\)), the pump parameter \(k_{4p}\), and the shaft parameters \(k_{1s}\) and \(k_{5s}\).
For each case, we will evaluate the performance and effectiveness of PINN in estimating the unknown parameters and non-measured system states in three distinct data scenarios:
1. Simulated data: This scenario represents an ideal condition without any disturbances or modeling errors, allowing us to assess the baseline performance of the proposed method.
2. Simulated data with Gaussian noise: In this scenario, we introduce Gaussian noise to the simulated data to evaluate the sensitivity of the proposed method to noisy data.
3. Experimental data: This scenario is closer to actual oil field conditions, incorporating noisy measurements, modeling errors, and missing information about certain parameters.
Furthermore, for each study, we trained the PINN \(30\) times to evaluate the impact of the neural network weights initialization. The results are presented and discussed in each case for the two experimental conditions defined in Table 1. Fig. 4 presents a schematic representation of the cases and scenarios evaluated.
During the training of the PINN, we observed that the significant difference in magnitude between the states, with pressure in the order of \(10^{5}\) and flow rates in the order of \(10^{-2}\), affected the training and tuning process despite the scaling in the physics loss and the usage of self-adaptive weights on the loss function. Therefore, we converted the units of the states from SI units to \(\mathrm{m}^{3}\,\mathrm{h}^{-1}\) for the volumetric flow rates and to \(\mathrm{MWC}\) (meters of water column) for the pressure. After obtaining the results, we converted them back into SI units.
Furthermore, the transformation of the unknown parameters contributes to the convergence and accuracy of the PINN by restricting the search space to the local neighborhood of the parameter. During development, we first tried to keep the linear scaling for all parameters and cases. However, in Case 3, we could not achieve satisfactory results without restricting the parameter search space using bounded transformations such as \(\mathrm{tanh}\). We also took advantage of the fact that, in our experiments, the density cannot be higher than that of water, and set an upper bound for it accordingly; this helped to improve the results in all cases. The transformation scheme for each parameter is shown in Table 3, and the scaling values are presented in Table 4.
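The bounded transformation \(\Lambda_{\text{sc}}(x)\) used in Case 3 (Table 3) can be sketched as follows; the example values for the true parameter and span are hypothetical:

```python
import numpy as np

def bounded_transform(x, lam_true, alpha):
    """Lambda_sc(x) = lam_true * (tanh(x) * alpha + 1): keeps the
    recovered parameter within +/- alpha * 100% of lam_true for any
    unconstrained network variable x."""
    return lam_true * (np.tanh(x) * alpha + 1.0)
```

Because \(\tanh\) saturates at \(\pm 1\), the optimizer can move \(x\) freely while the physical parameter stays inside its prescribed band.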
Figure 4: **Cases and scenarios evaluated.** As presented in Section 2.1.3, two experimental investigations (blue) are performed, considering different water cuts and initial angular velocities. Each experimental investigation is analyzed under three different data sources (scenarios): simulation data, simulation data with added Gaussian noise, and experimental data collected from the instruments (green). For each scenario, three sets of unknown parameters, denoted as cases, are evaluated (red). Thus, we have \(9\) different results for each experimental investigation.
### State estimation results
We begin by assessing the state estimation capabilities of the PINN when applied to the ESP system model. The PINN will estimate the remaining unknown system states using the suction and discharge pressure signals as known variables (\(P_{1}\) and \(P_{2}\)). In the simulated cases, the unknown states include the pipeline and ESP volumetric flow rates (\(Q_{1}\), \(Q_{2}\), \(Q_{p}\)) and the ESP angular velocity (\(\omega\)). However, in the experimental case, measurements of the downstream pipeline and ESP volumetric flow rates (\(Q_{2}\) and \(Q_{p}\)) are unavailable, and therefore, they were excluded from the accuracy analysis. We would also like to mention that these results are from inverse problems with unknown parameters discussed above, not forward problems. First, we present the state estimation results for all the cases considered. Then, we discuss the prediction of the unknown parameters for all the cases in Section 4.2.
It is worth mentioning that in PINN, it is common practice to use second-order optimization algorithms, such as L-BFGS, after training with the first-order algorithm. This additional step usually aims to fine-tune the PINN's performance. However, we observed that using L-BFGS after Adam adversely affected the parameter estimation in the noisy and experimental case. Therefore, we decided to rely solely on the first-order optimization algorithm, as our focus is on unknown parameter estimation in more realistic scenarios. It is important to note that the simulated cases serve as a baseline performance for assessing the PINN's performance, and we expect that performance will be comparatively reduced under the more challenging conditions posed by experimental data and noise.
#### 4.1.1 State estimation results for simulated data
As a baseline performance assessment, we begin with the simulated scenario. The predicted states from the neural network are shown in Fig. 5, while the Mean Absolute Percentage Error (MAPE) corresponding to the prediction of different states is shown in Table 5. The MAPE is defined as
\[\text{MAPE}=\frac{1}{N^{map}}\sum_{i=1}^{N^{map}}\left|\frac{y_{s}(t_{i})-\bar{y}_{s}(t_{i};\mathbf{\theta})}{y_{s}(t_{i})}\right|\times 100\%, \tag{19}\]
where \(N^{map}\) is the total number of temporal data points. We considered the same sample rate as the experimental data described in Section 2.1.4, \(10\,\mathrm{Hz}\) (\(\Delta t=0.1\,\mathrm{s}\)). The variable \(y_{s}(t_{i})\) represents the true value of the state \(s\) at time \(t_{i}\), and \(\bar{y}_{s}(t_{i};\mathbf{\theta})\) denotes the mean of the predicted value of the state \(s\) from the PINN across \(30\) different realizations, evaluated with different initializations of the network parameters and the unknown parameters. For the sake of brevity, we
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
**Case** & \(B\) & \(\mu\) & \(\rho\) & \(k_{u}\) & \(k_{d}\) & \(k_{3p}\) & \(k_{4p}\) \\ \hline
**Case 1** & \(1\times 10^{9}\) & \(1\) & \(1000\) & & & & \\
**Case 2** & \(1\times 10^{9}\) & \(1\) & \(1000\) & \(3\times 10^{2}\) & \(1\times 10^{1}\) & \(-1.0\times 10^{7}\) & \(-1.0\times 10^{2}\) \\
**Case 3** & & & \(1000\) & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Scaling constants for the unknown parameters. The table presents the scaling constants used in linear parameter transformations and the scaling parameter for the liquid bulk modulus (\(B\)) and density (\(\rho\)) transformations.**
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**Case** & \(B\) & \(\mu\) & \(\rho\) & \(k_{u}\) & \(k_{d}\) & \(k_{3p}\) & \(k_{4p}\) & \(k_{1s}\) & \(k_{5s}\) \\ \hline
**Case 1** & \((S_{p}(x)+0.9)\Lambda\) & \(L(x)\) & \(S_{m}(x)\Lambda\) & & & & & \\
**Case 2** & \((S_{p}(x)+0.9)\Lambda\) & \(L(x)\) & \(S_{m}(x)\Lambda\) & \(L(x)\) & \(L(x)\) & \(L(x)\) & & \\
**Case 3** & \(\Lambda_{\text{sc}}(x)\) & \(\Lambda_{\text{sc}}(x)\) & \(S_{m}(x)\Lambda\) & \(\Lambda_{\text{sc}}(x)\) & \(\Lambda_{\text{sc}}(x)\) & & \(\Lambda_{\text{sc}}(x)\) & \(\Lambda_{\text{sc}}(x)\) & \(\Lambda_{\text{sc}}(x)\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Transformations of the unknown parameters for each case evaluated. The transformations applied to the unknown parameters are detailed. The softplus function \(S_{p}(\cdot)\) and the \(S_{m}(\cdot)\), defined as \(S_{m}(x)=x-\text{softplus}(x)+1\), are utilized along with a scaling constant \(\Lambda\). The \(L(\cdot)\) is a linear transformation given by \(L(x)=x\Lambda\), which considers only a scaling constant \(\Lambda\). For Case 3, the transformation for an unknown parameter is expressed as \(\Lambda_{\text{sc}}(x)=\Lambda_{\text{true}}\left(\tanh(x)\alpha+1\right)\), where \(x\) represents the PINN-estimated value of the unknown parameter, \(\Lambda_{\text{true}}\) is the parameter true value, and \(\alpha\) denotes the span percentage around the true value. For most parameters, we set the span to \(\pm 50\%\) (\(\alpha=0.5\)), except for the \(B\) parameter, which spans \(\pm 15\%\) (\(\alpha=0.15\)).**
restrict our discussion to the state prediction results and the MAPE table for the first experimental investigation. Further results from the second investigation are presented in Appendix E. The mean values and standard deviations of the predicted parameters for all cases are shown in Tables 11 and 12.
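The MAPE of Equation (19), expressed in percent as reported in the tables, can be computed as below (the arrays shown are hypothetical):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# hypothetical true vs. predicted state trajectory
err = mape(np.array([100.0, 200.0]), np.array([99.0, 202.0]))
```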
The comparison between the predicted dynamics of the states by the PINN model and the actual values, as shown in Fig. 5, demonstrates a good overall agreement across all states and cases. Notably, a small error is observed for the \(P_{1}\) and \(P_{2}\) states in all cases. However, a slightly larger error is observed for the volumetric flow rate states (\(Q_{1}\), \(Q_{2}\), and \(Q_{p}\)) and the ESP angular velocity state (\(\omega\)). Among the different cases, Case 1 (red line) exhibits the best agreement across the entire simulation range in both investigations. This is also observed in Table 5, where Case 1 presented the lowest MAPE in all states except \(P_{1}\).
In the first investigation, the flow rates in Case 2 (black line) and Case 3 (green line) show a reasonable agreement with the simulation approximately from the \(6^{th}\) second. However, a slight offset
\begin{table}
\begin{tabular}{l l l l l l l} \hline & \(P_{1}\) & \(P_{2}\) & \(Q_{1}\) & \(Q_{2}\) & \(Q_{p}\) & \(\omega\) \\ \hline Case 1 & \(0.142\,\%\) & \(0.008\,\%\) & \(0.004\,\%\) & \(0.006\,\%\) & \(0.007\,\%\) & \(0.025\,\%\) \\ Case 2 & \(0.147\,\%\) & \(0.009\,\%\) & \(0.039\,\%\) & \(0.038\,\%\) & \(0.039\,\%\) & \(0.872\,\%\) \\ Case 3 & \(0.932\,\%\) & \(0.010\,\%\) & \(0.041\,\%\) & \(0.040\,\%\) & \(0.041\,\%\) & \(0.618\,\%\) \\ \hline \end{tabular}
\end{table}
Table 5: **MAPE for the simulated scenario.** The table presents the results for the simulation data considering three different cases of unknown parameters. The MAPE values represent the error of the simulation results for the states: \(P_{1}\), \(P_{2}\), \(Q_{1}\), \(Q_{2}\), \(Q_{p}\), and \(\omega\). As the number of unknown parameters increased, from Case 1 to Case 3, the MAPE values also increased, indicating a slight performance loss. However, the MAPE values remain relatively small, and visual inspection of Fig. 5 reveals a reasonable agreement with the actual values.
Figure 5: **Comparison of predicted and true states for simulated scenario of the first experimental investigation with unknown parameters.** The blue line indicates the true values, while the \(\times\) markers denote the data points used for training the PINN. The red, black, and green lines represent the mean values of the predicted states calculated at each time instant across the \(30\) trained PINNs for Cases 1 to 3, as defined earlier in this section (Section 4). Across all cases, the training dataset consists of \(30\) data points for the data loss in \(P_{1}\) and \(P_{2}\) and \(100\) collocation points for the physics loss.
from the actual value is observed from the beginning to the \(6^{th}\) second. As for the second investigation (Fig. E.1), a slightly higher offset is observed in the flow rate states and in the ESP angular velocity across the entire simulation time for Case 3, while for Case 2, we no longer observe the offset.
#### 4.1.2 State estimation results for simulated data with noise
To further evaluate the robustness of the PINN model, we extend our analysis to include a simulated scenario with Gaussian noise. This section aims to assess the performance of the PINN model when subjected to noise alone without considering missing physics or errors in the model parameter estimation. For each PINN realization, a different noise realization is also considered. Similarly to the previous section, we will focus on presenting the result corresponding to the first experimental investigation, while the result for the second experimental investigation can be found in Appendix E for reference. The MAPE for the different predicted states are shown in Table 6 and the output of the predicted states from the neural network along with the addition of noise on the pressure signals are shown in Fig. 6.
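As an illustrative sketch of this procedure, the snippet below draws an independent Gaussian noise realization for each training run; the pressure samples, noise level, and seeds are hypothetical placeholders, whereas in our setup the noise magnitude follows the pressure-transducer uncertainty of Table A.14.

```python
import random

def add_gaussian_noise(signal, sigma, seed=None):
    """Return a noisy copy of `signal`: i.i.d. zero-mean Gaussian noise
    with standard deviation `sigma` is added to every sample.  Passing a
    different `seed` per training run yields a fresh noise realization."""
    rng = random.Random(seed)
    return [s + rng.gauss(0.0, sigma) for s in signal]

# Hypothetical intake-pressure samples (bar); each run gets its own noise
p1_clean = [1.20, 1.21, 1.19, 1.22]
run_1 = add_gaussian_noise(p1_clean, sigma=0.01, seed=1)
run_2 = add_gaussian_noise(p1_clean, sigma=0.01, seed=2)
```

Seeding per run keeps each realization reproducible while still varying the noise across the 30 trained PINNs.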
The comparison between the PINN-predicted dynamics of the states and the actual values, depicted in Fig. 6, demonstrates a good overall agreement across all states and investigations for almost all cases. However, in Case 2 (black line), the PINN was unable to accurately estimate the flow rate states (\(Q_{1}\), \(Q_{2}\), and \(Q_{p}\)). These states exhibit relatively higher errors, although the model still predicts the general shape of the time series. Remarkably, these higher errors were observed exclusively in the first experimental investigation (Fig. 6) (higher water cut), whereas in the second investigation (Fig. E.3) (lower water cut), Case 2 demonstrated performance similar to that of Case 1 (red line).
Notably, minor errors are observed in the \(P_{1}\) and \(P_{2}\) states for all cases and investigations. Specifically, in the discharge pressure state (Fig. 6b), we can observe that the neural network did not overfit and successfully captured the signal trend and magnitude. Similarly to the simulated data without noise (Section 4.1.1), Case 1 (red line) exhibits the best agreement across the entire simulation range in both investigations. Additionally, as shown in Table 6, Case 1 had a lower MAPE than the other cases for the unknown states and a higher MAPE for the known state \(P_{2}\).
Despite the addition of noise, the state estimation for Cases 1 and 3 provides good accuracy without significant errors. These results were similar to those obtained in noise-free conditions, with a slight increase in MAPE values. In Case 3, there is a noticeable offset in the volumetric flow rates (\(Q_{1}\), \(Q_{2}\), and \(Q_{p}\)) approximately after the \(8^{\text{th}}\) second, which differs from the observations made in Section 4.1.1, where the offset occurred only at the beginning. Additionally, the angular velocity in this case remains closely aligned with the actual values throughout the simulation, exhibiting no offset. The results for Case 2, in contrast, deviate from the true values, particularly for the flow rates (\(Q_{1}\), \(Q_{2}\), \(Q_{p}\)). It may be that, although Case 2 is structurally identifiable, it is not practically identifiable. However, the angular velocity of the ESP (\(\omega\)) was not as significantly affected as the flow rates.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline & \(P_{1}\) & \(P_{2}\) & \(Q_{1}\) & \(Q_{2}\) & \(Q_{p}\) & \(\omega\) \\ \hline Case 1 & \(0.180\,\%\) & \(0.073\,\%\) & \(0.017\,\%\) & \(0.017\,\%\) & \(0.017\,\%\) & \(0.027\,\%\) \\ Case 2 & \(0.165\,\%\) & \(0.021\,\%\) & \(0.455\,\%\) & \(0.456\,\%\) & \(0.455\,\%\) & \(2.815\,\%\) \\ Case 3 & \(0.182\,\%\) & \(0.034\,\%\) & \(0.048\,\%\) & \(0.045\,\%\) & \(0.048\,\%\) & \(0.159\,\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: **MAPE for the simulated scenario with added noise.** The table presents the MAPE for states prediction in the simulated case with added noise data, considering three cases of unknown parameters. Similarly to Table 5, increasing the number of unknown parameters (Case 1 to Case 3) leads to higher MAPE values for all states, indicating a performance loss. Nevertheless, the MAPE values remain relatively small, and visual inspection of Fig. 6 shows a reasonable agreement with the actual values, except for Case 2.
#### 4.1.3 State estimation results for experimental data
In this section, we assess the performance of the PINN when subjected to experimental data, which is a more representative scenario of real-world oil field conditions. In this case, noise in data and model uncertainties exist, posing a more challenging scenario for the PINN model. Similarly to the previous sections, the results from the first experimental investigation are provided in this section, while results from the second investigation can be found in Appendix E. The MAPE values for these predictions are presented in Table 7. Meanwhile, the predicted outputs of the neural network are shown in Fig. 7.
By comparing the PINN-predicted dynamics with the actual values (Fig. 7), a good overall agreement is evident across all states and cases, even in the presence of noise and model errors. However, similarly to the noisy data scenario, in Case 2 (black line), the PINN was unable to accurately estimate the flow rate (\(Q_{1}\)) and angular velocity (\(\omega\)) states. While the predicted states follow the signal's shape, they exhibit relatively high errors. Remarkably, these higher errors were observed exclusively in the first experimental investigation (Fig. 7) (high water cut), whereas in the second investigation (Fig. E.3) (low water cut), Case 2 demonstrated performance similar to that of Case 3 (green line). These high errors are observed in Table 7, where the Case 2 values for \(Q_{1}\) and \(\omega\) had the highest MAPE values.
Notably, for the \(P_{1}\) state (Fig. 7a), a minor deviation in the shape of the PINN prediction is noticeable when compared to the experimental data. This difference could be attributed to the influence of the
Figure 6: **Comparison of predicted and true states for the simulated case with noise of the first experimental investigation and unknown parameters.** In Fig. 6a and Fig. 6b, the blue line indicates the signal with added Gaussian noise. In the other panels, the blue line represents the noise-free simulated state. This differentiation aids in evaluating the PINN’s accuracy by comparing its predictions with the actual state values, and also presents the signal used for PINN training. Each PINN training used a different random seed, so the Gaussian noise in the signal varied for every PINN training and case. Specifically, the noisy signals for \(P_{1}\) and \(P_{2}\) in Fig. 6a and Fig. 6b correspond to the first training of Case 1 and are used as a reference. The added Gaussian noise considered the uncertainty of the pressure transducers, as specified in Table A.14. For all cases, the training dataset consisted of \(37\) data points for the data loss in \(P_{1}\) and \(P_{2}\) and \(100\) collocation points for the physics loss. The plots show the mean value of the results over \(30\) realizations of PINNs; the noisy data shown are one representative realization.
fluid’s bulk modulus, which affects the system’s settling time. In the parameter estimation section, we discuss the challenges encountered in estimating the bulk modulus for the experimental data, as it was not experimentally measured. Furthermore, in the discharge pressure state (Fig. 7b), it is clear that the neural network did not overfit and successfully captured the signal trend and magnitude, including the signal overshoot between \(6\,\mathrm{s}\) and \(12\,\mathrm{s}\), which was considerably smaller in the simulations.
Additionally, a systematic error is noticeable at the beginning of the \(\omega\) state (Fig. 7d) for Cases 1 and 3. Despite these deviations, the predicted \(\omega\) signal aligns well with the experimental data regarding shape and magnitude. When comparing the simulated and measured angular velocity (\(\omega\)), we also observed a systematic error, indicating a difference between the actual and the obtained model parameters. However, the PINN could reasonably estimate the angular velocity state despite this difference. We believe that the observed discrepancies between the experimental data and model predictions could be attributed to multiple factors, including model uncertainty arising from approximations, simplifications, and assumptions in the model, as well as the influence of noise in the experimental data.

\begin{table}
\begin{tabular}{l l l l l} \hline & \(P_{1}\) & \(P_{2}\) & \(Q_{1}\) & \(\omega\) \\ \hline Case 1 & \(4.517\,\%\) & \(0.245\,\%\) & \(0.137\,\%\) & \(1.065\,\%\) \\ Case 2 & \(3.828\,\%\) & \(0.243\,\%\) & \(1.560\,\%\) & \(8.290\,\%\) \\ Case 3 & \(2.963\,\%\) & \(0.252\,\%\) & \(0.218\,\%\) & \(1.300\,\%\) \\ \hline \end{tabular}
\end{table}
Table 7: **MAPE for the experimental data scenario.** The table presents the MAPE values for state predictions using experimental data across the three cases with varying unknown parameters. Among the cases evaluated, Case 1 shows the best agreement regarding the upstream volumetric flow rate (\(Q_{1}\)) and the ESP angular velocity \(\omega\). However, similarly to the noisy data scenario, Case 2 presented errors for \(\omega\) and \(Q_{1}\) that are considerably higher than those of the other states, as evident in Figs. 7c and 7d. Nonetheless, the MAPE values remain relatively small for the other cases and states, and visual inspection of Fig. 7 shows a reasonable agreement with the actual values.
Figure 7: **Comparison of predicted and true states for the experimental case of the first experimental investigation with unknown parameters.** Due to constraints in our experimental setup, we could only directly measure the states \(Q_{1}\), \(P_{1}\), \(P_{2}\), and \(\omega\). Therefore, the PINN results analysis primarily concentrates on these four states, denoted by the blue lines in the graphs. Moreover, for all cases, the training dataset consists of \(48\) data points for the data loss and \(100\) collocation points for the physics loss. The plots show the mean value of the results over \(30\) realizations of PINNs.
### Parameter estimation results
The results of the parameter estimation for experimental Investigations 1 and 2 across the three cases are presented in the following sections. It should be noted that we did not measure the emulsion bulk modulus during the experiments; as a result, we cannot assess the accuracy of the values determined by the PINN in the experimental data scenario. The viscosity values obtained were compared to the Brinkman [44] model, described in Section 2.2. For density estimation, we relied on measurements from the Coriolis meter. In this section, we discuss the absolute percentage error; additional tables presenting the actual values can be found in Section 4.3.
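For reference, the Brinkman model used as the viscosity baseline has a simple closed form. The sketch below assumes the standard Brinkman (1952) expression \(\mu_{\mathrm{eff}}=\mu_{c}\,(1-\phi)^{-2.5}\), with \(\phi\) the dispersed-phase volume fraction; the oil viscosity and water cut shown are hypothetical.

```python
def brinkman_viscosity(mu_continuous, phi):
    """Effective emulsion viscosity from the Brinkman model:
    mu_eff = mu_c * (1 - phi) ** -2.5,
    where mu_c is the continuous-phase viscosity and phi the
    dispersed-phase volume fraction."""
    return mu_continuous * (1.0 - phi) ** -2.5

mu_oil = 0.046      # hypothetical oil (continuous-phase) viscosity (Pa·s)
water_cut = 0.20    # hypothetical dispersed water fraction
mu_eff = brinkman_viscosity(mu_oil, water_cut)
# mu_eff is roughly 75 % above the oil viscosity at this water cut
```

The rapid growth of \(\mu_{\mathrm{eff}}\) with \(\phi\) is one reason higher water cuts (closer to phase inversion) are harder to model.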
In this section, the mean of the absolute percentage error (MAPE) for the parameter estimation results is defined as
\[\text{MAPE}_{\Lambda}=\frac{1}{N^{init}}\sum_{i=1}^{N^{init}}\left|\frac{ \Lambda-\hat{\Lambda}_{i}}{\Lambda}\right|, \tag{20}\]
where \(N^{init}\) is the number of PINN random initializations, considered \(30\) in this study. The variable \(\Lambda\) represents the true value of the unknown parameter, and \(\hat{\Lambda}_{i}\) is the estimated unknown parameter for the \(i\)-th PINN initialization. For simplicity, we refer to the error in this section as the absolute percentage error.
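For concreteness, Eq. (20) translates directly into a few lines of code; the viscosity values below are hypothetical and serve only to illustrate the computation.

```python
def mape(true_value, estimates):
    """Eq. (20): mean over the PINN initializations of the absolute
    percentage error |Λ - Λ̂_i| / |Λ|, returned in percent."""
    n_init = len(estimates)
    return 100.0 / n_init * sum(
        abs((true_value - est) / true_value) for est in estimates
    )

# Hypothetical viscosity estimates from three initializations (Pa·s)
mu_true = 0.025
mu_hat = [0.0251, 0.0248, 0.0250]
print(round(mape(mu_true, mu_hat), 3))  # → 0.4
```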
#### 4.2.1 Case 1: Three unknown parameters \(B\), \(\mu\) and \(\rho\) for different types of data
In this section, we focus on Case 1, where we considered the bulk modulus (\(B\)), viscosity (\(\mu\)), and density (\(\rho\)) as unknown while considering the remaining parameters as known. In Table 8, we present the MAPE for the two experimental investigations and all data scenarios.
As can be seen in Table 8, the PINN successfully estimated the flow parameters in all cases. In the experimental data scenario, the MAPE values were slightly higher: for Investigation 1 the errors were below \(2\,\%\), while Investigation 2 had errors below \(4.5\,\%\). It is important to mention that, for the experimental cases, the error refers to the difference between the obtained viscosity and the Brinkman [44] model, which may not be entirely accurate. Also, we adopted the single effective viscosity assumption, as described in Section 2.2. Thus, this higher percentage difference can also be related to these assumptions and limitations.
In addition to the mean analysis, it is important to assess the dispersion of the absolute percentage errors of the parameters. The error dispersion for Investigations 1 and 2 is visualized in Fig. 8a and Fig. 8b, respectively.
In Fig. 8a and Fig. 8b, the fluid density (\(\rho\)) and viscosity (\(\mu\)) parameters demonstrate consistent behavior across multiple initializations and data sources, with relatively low dispersion. On the other hand, Investigation 2 presented outliers in all parameters for the simulated case, and in \(\rho\) and \(\mu\) for the simulated-with-noise scenario.
When comparing Investigations 1 and 2 in the simulated and simulated-with-noise scenarios of Table 8, we observe that the performance in Investigation 1 is superior to that in Investigation 2. However, the box plots for both investigations suggest a similar data spread. This discrepancy can be attributed to the influence of outliers on the mean value calculation. As shown in Fig. 8b, the mean value, denoted by a triangle symbol, lies outside the interquartile range, which indicates the impact of outliers. Furthermore, the medians of the errors presented similar values, further indicating the influence of these outliers on the mean.
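The interplay between outliers, the mean, and the median discussed above can be illustrated with a short sketch; the per-initialization errors are hypothetical, and the outlier criterion is the same 1.5 IQR (Tukey) rule used for the box-plot whiskers.

```python
import statistics

def iqr_outliers(errors):
    """Flag points beyond 1.5 * IQR from the quartiles (Tukey's rule,
    as used for the box-plot whiskers)."""
    q1, _, q3 = statistics.quantiles(errors, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [e for e in errors if e < lo or e > hi]

# Hypothetical per-initialization errors (%): one diverged run
errors = [0.30, 0.31, 0.29, 0.32, 0.28, 0.30, 0.31, 0.29, 0.30, 6.0]
print(iqr_outliers(errors))       # the diverged run is flagged
print(statistics.mean(errors))    # dragged upward, outside the IQR
print(statistics.median(errors))  # barely affected by the outlier
```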
\begin{table}
\begin{tabular}{l l r r r} \hline \hline & Data type & \(B\) & \(\mu\) & \(\rho\) \\ \hline Investigation 1 & Sim. & \(0.31\,\%\) & \(0.03\,\%\) & \(0.02\,\%\) \\ & Noisy & \(0.58\,\%\) & \(0.12\,\%\) & \(0.09\,\%\) \\ & Exp. & N/A & \(1.80\,\%\) & \(0.80\,\%\) \\ \hline Investigation 2 & Sim. & \(0.61\,\%\) & \(0.32\,\%\) & \(0.29\,\%\) \\ & Noisy & \(0.48\,\%\) & \(0.21\,\%\) & \(0.21\,\%\) \\ & Exp. & N/A & \(4.30\,\%\) & \(2.40\,\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 8: **Mean of the absolute percentage error for Case 1.**
Notably, the addition of noise negatively affected the results, as the distributions of most parameters presented slightly wider dispersion than in the simulated case. Further, analyzing the experimental data results, it is evident that they exhibit a wider distribution than the simulated and simulated-with-noise data. This is expected, as experimental data are more challenging due to noise and potential model inaccuracies.
#### 4.2.2 Case 2: Seven unknown parameters \(B\), \(\mu\), \(\rho\), \(k_{u}\), \(k_{d}\), \(k_{3p}\) and \(k_{4p}\) for different types of data
In this section, we examine Case 2. We treated the fluid's bulk modulus \(B\), viscosity \(\mu\), and density \(\rho\), along with the pump's viscous flow loss coefficient \(k_{3p}\), the pump equivalent resistance \(k_{4p}\), and the equivalent resistances of the upstream and downstream pipeline \(k_{u}\) and \(k_{d}\) respectively, as unknowns. The remaining parameters were considered known. The MAPE from the two experimental investigations across all data scenarios is provided in Table 9.
The results presented in Table 9 demonstrate the PINN’s successful estimation of flow parameters across most cases, except for the first investigation’s simulated-with-noise and experimental data scenarios. In these cases, the mean errors were significantly higher than in the simulated scenario and in Investigation 2’s simulated-with-noise and experimental data scenarios. Notably, for the experimental data scenario, the viscosity (\(\mu\)) exhibited a difference of \(120.19\,\%\), and most other parameters had errors higher than \(27\,\%\). However, for the second investigation, the mean errors in Table 9 were all below \(15.5\,\%\), indicating better performance in the experimental data scenario. As discussed earlier, the higher
Figure 8: **Box plot of the absolute percentage errors for Case 1. Each subfigure shows a specific experimental investigation and contains subplots for each unknown parameter under analysis. Within these subplots, three distinct box plots are presented: blue for the experimental data, red for the simulated data, and gray for the simulated data with added noise. These box plots include whiskers, outliers, and the mean value, represented by a triangle symbol. Outliers are identified as data points that lie more than 1.5 times the interquartile range (IQR) above the third quartile or below the first quartile. The x-axis represents the absolute percentage error, calculated with respect to the reference value. This graphical structure is consistently employed in subsequent figures related to inverse problems.**
error in the experimental case may be caused by model uncertainty, simplifications in the model, and experimental noise.
However, the results for Investigation 1 in the simulated data scenario require some attention. As shown in Fig. 9a, all unknown parameters presented outliers, and their computed means lie outside the IQR; as previously noted for Case 1 (Section 4.2.1), this implies an influence of these outliers on the mean value. When examining the median values illustrated in Fig. 9, the error metrics for the simulated scenarios in both investigations were predominantly below \(1\,\%\).
The addition of noise considerably affected the results in both experimental investigations, leading to higher mean errors, as shown in Table 9; however, the mean errors in Investigation 2 were smaller than in Investigation 1. Analyzing the box plots in Fig. 9, we can observe that the simulated noisy scenario resulted in greater dispersion for most parameters. This observation indicates that the introduction of noise adversely affected the accuracy of the estimated parameters. Moreover, similar to what was discussed in Section 4.2.1, the experimental data scenario presented the highest mean errors across the initializations for both experimental investigations, except for \(k_{4p}\) in the second experimental investigation.
Figure 9: **Box plot of the absolute percentage errors for Case 2. The box plot illustrates the distribution of absolute percentage errors for each parameter in Case 2. Each subplot represents an unknown parameter for the data scenarios evaluated. The x-axis has been changed to logarithmic for better visibility. This graphical structure is consistently employed in subsequent figures related to inverse problems.**
Despite the high errors in Investigation 1’s experimental data scenario, most estimated parameters exhibited relatively small dispersion across the weight initializations, as evident in Fig. 9a. It is worth noting that the parameter selection for this case did not consider the practical identifiability analysis, which, in this study, assessed the parameter sensitivity to noise. This higher mean error with relatively low dispersion therefore suggests that the system might not be practically identifiable under the flow conditions of experimental Investigation 1, possibly converging to a parameter set other than the true one. It is important to note that as the water cut increases up to the emulsion phase inversion, the fluid behavior becomes more complex, posing a more challenging scenario than lower water cuts. Furthermore, our practical identifiability analysis considered only the sensitivity analysis rather than a Monte Carlo approach in which the optimization algorithm, in this case the PINN, would also be included.
Therefore, to draw more definitive conclusions about how the parameter estimation is sensitive to the water cut, additional investigations need to be conducted with a larger experimental test matrix covering a wider range of water cuts. Additionally, a Monte Carlo approach that includes the PINN in the practical identifiability analysis could better evaluate the effects of the model uncertainties in the experimental data scenario, as the approach described in Section 3.2 considered only simulated data. Further, as the local structural identifiability analysis assesses identifiability only in a neighborhood of the parameter set, it is possible that restricting the search range and imposing stricter bounds on the parameter search could lead to better results. This approach is suggested in [49].
#### 4.2.3 Case 3: Eight unknown parameters \(B\), \(\mu\), \(\rho\), \(k_{u}\), \(k_{d}\), \(k_{4p}\), \(k_{1s}\) and \(k_{5s}\) for different types of data
In this section, we present the results of the inverse problem for Case 3, specifically evaluating the performance of the parameter estimation. The evaluation is based on the MAPE, as shown in Table 10. The considered parameters, in this case, include the bulk modulus (\(B\)), viscosity (\(\mu\)), and density (\(\rho\)) of the fluid, as well as the pump equivalent resistance (\(k_{4p}\)), the first impeller-fluid coupling coefficient (\(k_{1s}\)), the shaft second-order friction coefficient (\(k_{5s}\)), and the equivalent resistance of the pipeline upstream and downstream (\(k_{u}\) and \(k_{d}\)).
The results in Table 10 and Fig. 10 indicate that the PINN successfully estimated the density and viscosity for all scenarios in Case 3, but it encountered challenges in estimating the bulk modulus. The \(15\,\%\) value represents the upper and lower bounds imposed on the output scaling transformation for the bulk modulus. Additionally, when analyzing the dispersion in Fig. 10, it is evident that the distributions of the simulated with added noise case are similar to those of the simulated case, with similar IQR.
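The effect of the bounded output scaling can be sketched as follows. The exact transformation is not reproduced here, so the tanh-based squashing below is an illustrative assumption showing how a \(\pm 15\,\%\) bound around a reference value caps the attainable deviation; the reference bulk modulus is hypothetical.

```python
import math

def bounded_scale(theta, ref, frac=0.15):
    """Map an unconstrained trainable scalar `theta` to the interval
    [ref * (1 - frac), ref * (1 + frac)].  Whatever value the optimizer
    reaches, the estimate deviates at most `frac` (here 15 %) from `ref`,
    so the reported MAPE for this parameter saturates at 15 %."""
    return ref * (1.0 + frac * math.tanh(theta))

B_ref = 1.8e9  # hypothetical reference bulk modulus (Pa)
print(bounded_scale(0.0, B_ref))   # unbiased start: exactly B_ref
print(bounded_scale(50.0, B_ref))  # saturates near the +15 % bound
```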
In Table 10, the distinction between data sources with smaller errors, as observed in Cases 1 and 2 (Section 4.2.1 and Section 4.2.2), is not evident: Investigations 1 and 2 demonstrated errors mostly below \(13\,\%\). Notably, in Investigation 1, the upstream pipeline equivalent resistance (\(k_{u}\)) exhibited the highest errors across all data sources, reaching up to \(34.51\,\%\). However, in Fig. 3b, we can observe that \(k_{u}\) presented a relatively high correlation with other parameters, such as \(k_{d}\). On the other hand, in Investigation 2, we do not observe errors as high as in Investigation 1. This discrepancy can likely be attributed to the distinct flow characteristics of each investigation. As previously discussed in Section 4.2.2, further tests under diverse flow conditions are essential to comprehensively understand the performance difference between Investigations 1 and 2.
It is noteworthy that, for Case 3, besides the fluid bulk modulus and density, the other parameters also have bounded output scaling transformations, which could have contributed to the lower errors. Thus, although Case 3 poses an additional challenge compared to Case 2, since it involves estimating eight unknown parameters instead of seven, Case 3 and Case 2 are not directly comparable.
\begin{table}
\begin{tabular}{l l r r r r r r r r} \hline & & \(B\) & \(\mu\) & \(\rho\) & \(k_{u}\) & \(k_{d}\) & \(k_{4p}\) & \(k_{1s}\) & \(k_{5s}\) \\ \hline Inv. 1 & Sim. & \(14.95\,\%\) & \(11.31\,\%\) & \(3.02\,\%\) & \(26.27\,\%\) & \(3.25\,\%\) & \(1.79\,\%\) & \(6.92\,\%\) & \(12.24\,\%\) \\ & Noisy & \(14.51\,\%\) & \(10.41\,\%\) & \(1.88\,\%\) & \(25.97\,\%\) & \(4.99\,\%\) & \(2.06\,\%\) & \(6.88\,\%\) & \(6.00\,\%\) \\ & Exp. & N/A & \(13.05\,\%\) & \(9.26\,\%\) & \(34.51\,\%\) & \(16.01\,\%\) & \(5.87\,\%\) & \(12.37\,\%\) & \(7.76\,\%\) \\ \hline Inv. 2 & Sim. & \(15.00\,\%\) & \(14.34\,\%\) & \(4.12\,\%\) & \(6.90\,\%\) & \(6.66\,\%\) & \(3.43\,\%\) & \(12.71\,\%\) & \(6.96\,\%\) \\ & Noisy & \(15.00\,\%\) & \(13.66\,\%\) & \(3.77\,\%\) & \(4.92\,\%\) & \(13.63\,\%\) & \(7.04\,\%\) & \(11.86\,\%\) & \(12.22\,\%\) \\ & Exp. & N/A & \(1.60\,\%\) & \(10.16\,\%\) & \(1.96\,\%\) & \(12.05\,\%\) & \(5.28\,\%\) & \(7.01\,\%\) & \(9.25\,\%\) \\ \hline \end{tabular}
\end{table}
Table 10: **Mean absolute percentage error for Case 3.**
### Estimated unknown parameters mean and standard deviation
In this section, we present additional PINN results for parameter estimation in the ESP system model. These results are obtained by averaging the parameter estimations over the 30 PINN training runs and computing the standard deviation. Tables 11 and 12 contain the results of the first and second experimental investigations, respectively. It is important to note that these tables are complementary to the results discussed in Section 4.2 and are not the focus of detailed discussion in this work; they are included for completeness and may be of interest to future research. Any parameter not accounted for in a specific case is denoted by the \(-\) symbol.
For each of the three cases, defined based on varying assumptions regarding the known parameters, the tables present the performance of the PINN in estimating the unknown parameters across the three data scenarios. Each parameter estimate is averaged over the 30 weight initializations, with the standard deviation indicated after the \(\pm\) symbol. Parameters not considered in a given case are denoted by the \(-\) symbol.
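The entries of these tables can be reproduced with a short sketch; the density estimates below are hypothetical stand-ins for the 30 per-run values.

```python
import statistics

def mean_pm_std(estimates):
    """Mean and sample standard deviation over the training runs,
    formatted the way the tables report them (mean ± std)."""
    m = statistics.mean(estimates)
    s = statistics.stdev(estimates)  # sample std: N - 1 in the denominator
    return m, s, f"{m:.4g} ± {s:.2g}"

# Hypothetical density estimates (kg/m³) from four runs
rho_hat = [998.2, 997.9, 998.5, 998.1]
mean, std, cell = mean_pm_std(rho_hat)
print(cell)
```

The sample (N − 1) standard deviation is used here because the 30 runs are a finite sample of the initialization distribution.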
Figure 10: **Box plot of the absolute percentage errors for Case 3. The box plot illustrates the distribution of absolute percentage errors of each parameter in Case 3. The individual subplots correspond to the unknown parameters across the evaluated data scenarios. To enhance visibility, the x-axis has been changed to a logarithmic scale.**
## Summary
In this study, our primary focus was on developing a physics-informed neural network (PINN) model to estimate unknown states and parameters of the electrical submersible pump (ESP) system operating under two-phase flow conditions when only limited state measurements are available; in the present study, only the pump intake and discharge pressure measurements are known. The PINN model may reduce the laboratory testing required to estimate the fluid properties. We first investigated the structural and practical identifiability of the ESP system, defining three distinct sets of unknown parameters. Subsequently, we assessed the performance of the PINN model on these parameter sets across two experimental investigations (differing in water-to-oil ratio), encompassing three data scenarios: simulated, simulated with added noise, and experimental data.
We conducted a local structural identifiability analysis using the differential elimination method for dynamical models via projections, as proposed by Dong _et al._[23]. The results revealed that the system is not locally structurally identifiable when the intake and discharge pressures are the only measured states. To achieve local structural identifiability, we had to consider parameters such as the shaft inertia (\(I_{s}\)), the fluid-impeller disk friction (\(k_{3s}\)), and the shaft viscous damping (\(k_{4s}\)) as known, resulting in \(12\) identifiable unknown parameters. We then derived the parameter correlation matrix using the Fisher information matrix for the practical identifiability analysis. Practical identifiability provides a qualitative analysis of the unknown parameters in the presence of noise. To attain it, more parameters had to be defined as known; we therefore selected the parameters most critical to the ESP’s actual operation, namely the fluid properties and the pipeline equivalent resistances, as unknown and treated the remaining parameters as known.
The proposed PINN model successfully estimated the ESP’s fluid properties (bulk modulus, density, and viscosity) and the dynamics of the states with reasonable accuracy. In the case where fluid properties, pipeline equivalent resistances, and shaft and pump parameters were all considered unknown, the model provided reasonable accuracy on simulated data with low water cut. However, we observed relatively higher errors for higher water cut, and for the simulated-with-noise and experimental data even at lower water cut.
In these cases, we observed outliers in the predicted results; further study may include a more robust PINN method that eliminates the outliers and achieves better performance. We also observed that errors are higher for higher water cuts, and higher for noisy and experimental data than for the simulated cases. Cases with more unknowns may require additional measurements of state variables that are generally not measured in the field. Future studies may include further investigations with a broader range of water fractions to better understand the influence of the water cut, and may also focus on updating the numerical model for higher water cuts if necessary, as assumptions made in the numerical model may not be valid there.
Obtaining the estimated parameters and states for the ESP system in oil production can be challenging, and the PINN offers a promising and cost-efficient alternative for estimating them. Although this approach has its benefits, it also has some limitations. One of the limitations is that the technique is computationally intensive. This means that for every desired estimation of properties, the PINN must undergo training. More efficient PINN algorithms need to be investigated for different operational conditions. Furthermore, the accuracy of the estimated properties using PINN is heavily reliant on the accuracy of the measured states and known parameters. In the event of a faulty reading or instrument failure, the PINN's ability to provide accurate estimations is compromised.
Additionally, a Monte Carlo approach incorporating the PINN into the practical identifiability analysis could better evaluate the effects of model uncertainties in the experimental data scenario. As the experimental data contain noise, further study may also include uncertainty quantification. Furthermore, an ESP system model that treats the multiple stages separately would be relevant for ESPs in actual production, whose number of stages is considerably higher than the one considered in this study. Moreover, the current model does not consider the electrical domain, and its inclusion would be more realistic. Another notable aspect is the single-viscosity assumption for the system, highlighted in Section 2.2, which remains a limitation.
### Acknowledgment
We gratefully acknowledge the support from the Energy Production Innovation Center (EPIC) at the University of Campinas (UNICAMP), financially backed by the São Paulo Research Foundation (FAPESP, grants 2017/15736-3 and 2019/14597-5). We also acknowledge the financial sponsorship from Equinor Brazil and the regulatory support offered by Brazil’s National Oil, Natural Gas, and Biofuels Agency (ANP) through the R&D levy regulation. Additional acknowledgments are extended to the Center for Energy and Petroleum Studies (CEPTRO), the School of Mechanical Engineering (FEM), and the Artificial Lift and Flow Assurance Research Group (ALFA).
The work of K. Nath and G. E. Karniadakis was supported by OSD/AFOSR MURI grant FA9550-20-1-0358.
|
2309.01775 | Gated recurrent neural networks discover attention | Recent architectural developments have enabled recurrent neural networks
(RNNs) to reach and even surpass the performance of Transformers on certain
sequence modeling tasks. These modern RNNs feature a prominent design pattern:
linear recurrent layers interconnected by feedforward paths with multiplicative
gating. Here, we show how RNNs equipped with these two design elements can
exactly implement (linear) self-attention, the main building block of
Transformers. By reverse-engineering a set of trained RNNs, we find that
gradient descent in practice discovers our construction. In particular, we
examine RNNs trained to solve simple in-context learning tasks on which
Transformers are known to excel and find that gradient descent instills in our
RNNs the same attention-based in-context learning algorithm used by
Transformers. Our findings highlight the importance of multiplicative
interactions in neural networks and suggest that certain RNNs might be
unexpectedly implementing attention under the hood. | Nicolas Zucchet, Seijin Kobayashi, Yassir Akram, Johannes von Oswald, Maxime Larcher, Angelika Steger, João Sacramento | 2023-09-04T19:28:54Z | http://arxiv.org/abs/2309.01775v2 | # Gated recurrent neural networks
###### Abstract
Recent architectural developments have enabled recurrent neural networks (RNNs) to reach and even surpass the performance of Transformers on certain sequence modeling tasks. These modern RNNs feature a prominent design pattern: linear recurrent layers interconnected by feedforward paths with multiplicative gating. Here, we show how RNNs equipped with these two design elements can exactly implement (linear) self-attention, the main building block of Transformers. By reverse-engineering a set of trained RNNs, we find that gradient descent in practice discovers our construction. In particular, we examine RNNs trained to solve simple in-context learning tasks on which Transformers are known to excel and find that gradient descent instills in our RNNs the same attention-based in-context learning algorithm used by Transformers. Our findings highlight the importance of multiplicative interactions in neural networks and suggest that certain RNNs might be unexpectedly implementing attention under the hood.
Attention-based neural networks, most notably Transformers (Vaswani et al., 2017), have rapidly become the state-of-the-art deep learning architecture, replacing traditional models such as multi-layer perceptrons, convolutional neural networks, and recurrent neural networks (RNNs). This is particularly true in the realm of sequence modeling, where once-dominating RNNs such as the long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) model and the related gated recurrent unit (GRU; Cho et al., 2014) have been mostly replaced by Transformers.
Nevertheless, RNNs remain actively researched for various reasons, such as their value as models in neuroscience (Dayan & Abbott, 2001), or simply out of genuine interest in their rich properties as a dynamical system and unconventional computer (Jaeger et al., 2023). Perhaps most importantly for applications, RNNs are able to perform inference for arbitrarily long sequences at a constant memory cost, unlike models based on conventional softmax-attention layers (Bahdanau et al., 2015). This ongoing research has led to a wave of recent developments. On the one hand, new deep linear RNN architectures (Gu et al., 2022; Orvieto et al., 2023b) have been shown to significantly outperform Transformers on challenging long-sequence tasks (e.g., Tay et al., 2020). On the other hand, efficient linearized attention models have been developed, whose forward pass can be executed in an RNN-like fashion at a constant inference memory cost (Tsai et al., 2019; Katharopoulos et al., 2020; Choromanski et al., 2021; Schlag et al., 2021).
We present a unifying perspective on these two seemingly unrelated lines of work by providing a set of parameters under which gated RNNs become equivalent to any linearized self-attention, without requiring an infinite number of neurons or invoking a universality argument. Crucially, our construction makes use of gated linear units (GLUs; Dauphin et al., 2017), which are prominently featured in recent deep linear RNN models. Turning to LSTMs and GRUs, which also include multiplicative gating interactions, we find somewhat surprisingly that our results extend only to LSTMs. Moreover, the LSTM construction we provide hints that the inductive bias towards attention-compatible configurations might be weaker for this architecture than for deep gated linear RNNs.
We then demonstrate that GLU-equipped RNNs, but not LSTMs and GRUs, can effectively implement our construction once trained, thus behaving as attention layers. Moreover, we find that such
GLU-equipped RNNs trained to solve linear regression tasks acquire an attention-based in-context learning algorithm. Incidentally, it has been shown that the very same algorithm is typically used by Transformers trained on this problem class (von Oswald et al., 2023; Mahankali et al., 2023; Ahn et al., 2023; Zhang et al., 2023). Our results thus challenge the standard view of RNNs and Transformers as two mutually exclusive model classes and suggest that, through learning, RNNs with multiplicative interactions may end up encoding attention-based algorithms disguised in their weights.
## 1 Background
### Linear self-attention
We study causally-masked linear self-attention layers that process input sequences \((x_{t})_{t}\) with \(x_{t}\in\mathbb{R}^{d}\) as follows:
\[y_{t}=\left(\sum_{t^{\prime}\leq t}(W_{V}x_{t^{\prime}})(W_{K}x_{t^{\prime}})^ {\top}\right)(W_{Q}x_{t}) \tag{1}\]
In the previous equation, \(W_{V}\in\mathbb{R}^{d\times d}\) is the value matrix, \(W_{K}\in\mathbb{R}^{d\times d}\) the key matrix and \(W_{Q}\in\mathbb{R}^{d\times d}\) the query matrix. Note that we use square matrices throughout our paper for simplicity, but our findings hold for rectangular ones. As usually done, we call \(v_{t}:=W_{V}x_{t}\), \(k_{t}:=W_{K}x_{t}\) and \(q_{t}:=W_{Q}x_{t}\) the values, keys and queries. The output vector \(y_{t}\) has the same dimension as the input, that is \(d\). Such linear self-attention layers can be understood as a linearized version of the softmax attention mechanism (Bahdanau et al., 2015) in use within Transformers (Vaswani et al., 2017). Attention layers commonly combine different attention heads; we focus on a single one here for simplicity.
In a linear self-attention layer, information about the past is stored in an effective weight matrix \(W_{t}^{\text{ff}}:=\sum_{t^{\prime}\leq t}v_{t^{\prime}}k_{t^{\prime}}^{\top}\) that will later be used to process the current query \(q_{t}\) through \(y_{t}=W_{t}^{\text{ff}}q_{t}\). At every timestep, \(W_{t}^{\text{ff}}\) is updated through the rule \(W_{t}^{\text{ff}}=W_{t-1}^{\text{ff}}+v_{t}k_{t}^{\top}\), which is reminiscent of Hebbian learning (Schmidhuber, 1992; Schlag et al., 2021) and leads to faster inference time (Katharopoulos et al., 2020; Choromanski et al., 2021; Shen et al., 2021; Peng et al., 2021) than softmax self-attention.
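The equivalence between the quadratic-time direct form of Equation 1 and the constant-memory fast-weight recurrence can be checked numerically. Below is a minimal NumPy sketch (our own code, not from the paper; dimensions and variable names are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 4, 8
WV, WK, WQ = rng.normal(size=(3, d, d))  # value, key, query matrices
xs = rng.normal(size=(T, d))             # input sequence

def attention_direct(xs):
    # y_t = (sum_{t'<=t} v_{t'} k_{t'}^T) q_t, recomputed from scratch each step
    ys = []
    for t in range(len(xs)):
        Wff = sum(np.outer(WV @ xs[s], WK @ xs[s]) for s in range(t + 1))
        ys.append(Wff @ (WQ @ xs[t]))
    return np.stack(ys)

def attention_recurrent(xs):
    # Fast-weight view: W_t = W_{t-1} + v_t k_t^T, then y_t = W_t q_t
    Wff = np.zeros((d, d))
    ys = []
    for x in xs:
        Wff = Wff + np.outer(WV @ x, WK @ x)
        ys.append(Wff @ (WQ @ x))
    return np.stack(ys)

assert np.allclose(attention_direct(xs), attention_recurrent(xs))
```

The recurrent form only ever stores the \(d\times d\) matrix \(W^{\text{ff}}\), which is what makes inference memory constant in the sequence length.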
### Gated recurrent neural networks
In this paper, we focus our analysis on a simplified class of gated diagonal linear recurrent neural networks. They implement linear input and output gating that multiplies a linear transformation \(W_{\text{x}}x_{t}\) of the input with a linear gate \(W_{\text{m}}x_{t}\): \(g(x_{t})=W_{\text{m}}x_{t}\odot W_{\text{x}}x_{t}\). Here, \(\odot\) is the element-wise product. The class of gated networks we consider satisfies
\[h_{t+1}=\lambda\odot h_{t}+g^{\text{in}}(x_{t}),\ \ \ y_{t}=Dg^{\text{out}}(h_{t}). \tag{2}\]
In the previous equation, \(\lambda\) is a real vector, \(x_{t}\) is the input to the recurrent layer, \(h_{t}\) the hidden state, \(g^{\text{in}}\) the input gating, \(g^{\text{out}}\) the output gating, and \(D\) a linear readout. This simplified class makes connecting to attention easier while employing computational mechanisms similar to those of standard gated RNN architectures.
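For concreteness, here is a minimal NumPy sketch (our own code, not from the paper) of the gated RNN class of Equation 2. One indexing assumption on our part: we read the output out right after the state update, so that the output at step \(t\) can depend on \(x_{t}\), as the attention construction requires:

```python
import numpy as np

def gated_rnn(xs, lam, Wm_in, Wx_in, Wm_out, Wx_out, D):
    """Diagonal linear RNN with linear input and output gating (Eq. 2).

    State update: h <- lam * h + g_in(x_t), with g_in(x) = (Wm_in x) * (Wx_in x).
    Readout:      y_t = D [ (Wm_out h) * (Wx_out h) ],
    evaluated just after the update so y_t can depend on x_t.
    """
    h = np.zeros(len(lam))
    ys = []
    for x in xs:
        h = lam * h + (Wm_in @ x) * (Wx_in @ x)       # input gating
        ys.append(D @ ((Wm_out @ h) * (Wx_out @ h)))  # output gating + readout
    return np.stack(ys)
```

With \(\lambda=1\) a unit perfectly accumulates its gated input; with \(\lambda=0\) it simply holds the current gated input, which are exactly the two regimes used in the construction of Section 2.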
This class is tightly linked to recent deep linear RNN architectures and shares most of its computational mechanisms with them. While linear diagonal recurrence might be seen as a very strong inductive bias, many of the recent powerful deep linear RNN models adopt a similar bias (Gupta et al., 2022; Smith et al., 2023), and it has been shown to facilitate gradient-based learning (Orvieto et al., 2023b; Zucchet et al., 2023). Those architectures use complex-valued hidden states in the recurrence; we only use its real part here. Some of those works employ a GLU (Dauphin et al., 2017) after each recurrent layer, \(\operatorname{GLU}(x)=\sigma(W_{\text{m}}x)\odot W_{\text{x}}x\) with \(\sigma\) the sigmoid function. The gating mechanism we consider can thus be interpreted as a linearized GLU. Finally, we can recover (2) by stacking two layers: the GLU in the first layer acts as our input gating, and the one in the second as output gating. We include a more detailed comparison in Appendix A. In the rest of the paper, we will use the LRU layer (Orvieto et al., 2023b) as the representative of the deep linear RNN architectures because of its proximity to (2).
LSTMs can operate in the regime of Equation 2, but this requires more adaptation. First, the recurrent processing of these units is nonlinear and more involved than a simple matrix multiplication followed by a nonlinearity. Second, gating occurs in different parts of the computation and depends on additional variables. We compare this architecture with that of Equation 2 in more detail in Appendix A, showing that LSTMs can implement (2) when stacking two layers on top of each other. We additionally show that GRUs cannot do so.
## 2 Theoretical construction
As highlighted in the previous section, our class of gated RNNs and linear self-attention have very different ways of storing past information and using it to modify the feedforward processing of the current input. The previous state \(h_{t}\) acts through a bias term \(\lambda\odot h_{t}\) that is added to the current input \(g^{\text{in}}(x_{t})\) in gated RNNs, whereas the linear self-attention recurrent state \(W_{t}^{\text{ff}}\) modifies the weights of the feedforward pathway. We reconcile these two mismatched views of neural computation in the following by showing that gated RNNs can implement linear self-attention.
We demonstrate in this section how a gated recurrent layer followed by a linear readout, as in Equation 2, can implement any linear self-attention layer through a constructive proof. In particular, our construction only requires a finite number of neurons, therefore providing a much stronger equivalence result than more general universality theorems for linear recurrent networks (Grigoryeva and Ortega, 2018; Orvieto et al., 2023a).
### Key ideas
Our construction comprises three main components: Firstly, the input gating \(g^{\text{in}}\) is responsible for generating the element-wise products between the keys and values, as well as the queries. Then, recurrent units associated with key-values accumulate their inputs with \(\lambda=1\), whereas those receiving queries as inputs return the current value of the query, hence \(\lambda=0\). Lastly, the output gating \(g^{\text{out}}\) and the final readout layer \(D\) are in charge of multiplying the flattened key-value matrix with the query vector. We visually illustrate our construction and provide a set of weights for which the functional equivalence holds in Figure 1. Crucially, the key-values in a linear self-attention layer are the sum of degree-two polynomials of each previous input. An input gating mechanism and perfect memory units (\(\lambda=1\)) are needed to replicate this behavior within a gated recurrent layer. Similarly, output gating is required to multiply key-values with the queries.

Figure 1: An example of a diagonal linear gated recurrent neural network that implements the same function as a linear self-attention layer with parameters \((W_{V},W_{K},W_{Q})\) and input dimension \(d\). Inputs are processed from the bottom to the top. We do not use biases, so we append 1 to the input vector \(x_{t}\) to be able to send queries to the recurrent neurons. We use \(\operatorname{repeat}(A,n)\) to denote that the matrix \(A\) is repeated \(n\) times on the row axis, and \(W_{V,i}\) is the \(i\)-th row of the \(W_{V}\) matrix. The bars in the matrices highlight the different kinds of inputs/outputs. Digits in matrices denote appropriately sized column vectors. The readout matrix \(D\) appropriately sums the element-wise products between key-values and queries computed after the output gating \(g^{\text{out}}\).
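The three components above can be verified numerically. In the following sketch (our own code, not from the paper), the input and output gatings are written directly as the element-wise products they compute, rather than as the explicit weight matrices of Figure 1:

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 3, 6
WV, WK, WQ = rng.normal(size=(3, d, d))
xs = rng.normal(size=(T, d))

def attention(xs):
    # Reference linear self-attention (fast-weight form)
    Wff, ys = np.zeros((d, d)), []
    for x in xs:
        Wff = Wff + np.outer(WV @ x, WK @ x)
        ys.append(Wff @ (WQ @ x))
    return np.stack(ys)

def rnn_construction(xs):
    h_kv = np.zeros((d, d))  # lambda = 1: perfect-memory neurons (flattened kv)
    ys = []
    for x in xs:
        v, k, q = WV @ x, WK @ x, WQ @ x
        # Input gating: products v_i * k_j feed the memory neurons
        h_kv = h_kv + np.outer(v, k)
        h_q = q  # lambda = 0 neurons hold the current query
        # Output gating multiplies kv entries with repeated queries;
        # the readout D sums over j: y_i = sum_j h_kv[i, j] * q_j
        ys.append((h_kv * h_q[None, :]).sum(axis=1))
    return np.stack(ys)

assert np.allclose(attention(xs), rnn_construction(xs))
```

The memory neurons implement the \(\lambda=1\) recurrence and the query neurons the \(\lambda=0\) one; the final sum is exactly the matrix-vector product \(W_{t}^{\text{ff}}q_{t}\).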
### On the number of neurons required by the construction
The simple construction we illustrate in Figure 1 requires \(d^{2}\) hidden neurons for the key-value elements and \(d\) additional ones to represent the query. It is, however, possible to make the construction more compact by leveraging two additional insights: First, when the key and value matrices are equal, the key-value matrix is symmetric and therefore requires only \(d(d+1)/2\) elements to be represented. Second, any combination of key and query matrices for which \(W_{K}^{\top}W_{Q}\) is fixed leads to the same function in the linear self-attention layer. This implies that, when the value matrix is invertible, we can reduce the number of hidden neurons storing key-values to \(d(d+1)/2\) by making the key matrix equal to \(W_{V}\) and changing the query matrix to \(W_{V}^{-\top}W_{K}^{\top}W_{Q}\).
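This reparametrization can be checked numerically. In the sketch below (our own code, not from the paper), `Qp` plays the role of the modified query matrix \(W_{V}^{-\top}W_{K}^{\top}W_{Q}\), which leaves the layer's function unchanged while making the accumulated key-value matrix symmetric:

```python
import numpy as np

rng = np.random.default_rng(3)
d, T = 4, 7
WV, WK, WQ = rng.normal(size=(3, d, d))  # a Gaussian WV is invertible a.s.
xs = rng.normal(size=(T, d))

def attention(V, K, Q):
    Wff, out = np.zeros((d, d)), []
    for x in xs:
        Wff = Wff + np.outer(V @ x, K @ x)
        out.append(Wff @ (Q @ x))
    return np.stack(out)

# Tie keys to values; the query matrix absorbs the difference
Qp = np.linalg.inv(WV).T @ WK.T @ WQ
assert np.allclose(attention(WV, WK, WQ), attention(WV, WV, Qp))

# With K = V the key-value matrix is symmetric: d(d+1)/2 numbers suffice
Wff = sum(np.outer(WV @ x, WV @ x) for x in xs)
assert np.allclose(Wff, Wff.T)
```

The first assertion uses \(W_{V}^{\top}\,W_{V}^{-\top}=I\), so the product \(W_{K}^{\top}W_{Q}\) seen by the layer is unchanged.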
Overall, the output gating requires \(\mathcal{O}(d^{2})\) input and output entries for the gated RNN to match a linear self-attention layer. The RNN thus requires \(\mathcal{O}(d^{4})\) parameters in total, with a lot of redundancy, significantly more than the \(3d^{2}\) parameters of the linear self-attention layer. It comes as no surprise that numerous equivalent configurations exist within the gated RNN we study. For instance, linear gating is invariant under permutations of rows between its two matrices and under multiplication-division of these two rows by a constant. Left-multiplying \(W_{Q}\) in the input gating by any invertible matrix \(P\), and subsequently reading out the hidden neurons with \(\lambda=0\) through \(\mathrm{repeat}(P^{-1},d)\), also does not alter the network's output. Several other invariances exist, making exact weight retrieval nearly impossible.
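Two of the gating invariances mentioned above are easy to check numerically (our own standalone sketch, not from the paper): permuting the rows of both gating matrices together only permutes the gated outputs, and rescaling a row of one matrix while inversely rescaling the same row of the other leaves the product unchanged.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3
Wm, Wx = rng.normal(size=(2, d, d))
x = rng.normal(size=d)

g = (Wm @ x) * (Wx @ x)  # linear gating g(x) = (Wm x) * (Wx x)

# Invariance 1: permute the rows of both matrices with the same permutation
perm = rng.permutation(d)
g_perm = (Wm[perm] @ x) * (Wx[perm] @ x)
assert np.allclose(g[perm], g_perm)

# Invariance 2: multiply a row of one matrix and divide the same row
# of the other by the same nonzero constant
c = 2.5
Wm2, Wx2 = Wm.copy(), Wx.copy()
Wm2[0] *= c
Wx2[0] /= c
assert np.allclose((Wm2 @ x) * (Wx2 @ x), g)
```

Such symmetries mean many distinct weight configurations implement the same function, which is why exact weight retrieval from a trained network is hopeless even when functional identification succeeds.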
### Implications for other classes of gated RNNs
We conclude this section by commenting on whether similar insights hold for other gated RNN architectures. The LRU architecture is close to (2) but only has output gating. Stacking two LRU layers on top of each other enables the output gating of the first layer to act as the input gating for the second layer and, therefore, to implement linear self-attention. As noted in Section 1.2, LSTMs and GRUs are further away from our simplified gated RNN model. However, a single LSTM layer can implement linear self-attention, whereas stacked GRU layers cannot. We detail these considerations in Appendix A.
## 3 Gated RNNs learn to mimic attention
We now demonstrate that gated RNNs learn to implement linear self-attention and comprehend how they do so. In this section, a student RNN is tasked to reproduce the output of a linear self-attention layer. Appendix B contains detailed descriptions of all experiments performed in this section. Importantly, each sequence is only presented once to the network.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Loss & Score KV & Score Q & Polynomial distance \\ \hline \(4.97\times 10^{-8}\) & \(4.52\times 10^{-8}\) & \(2.06\times 10^{-10}\) & \(3.73\times 10^{-4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Gated RNNs implement attention. The KV and Q scores are equal to one minus the \(R^{2}\) score of the linear regression that predicts key-values and queries from resp. the perfect memory neurons and perfect forget neurons. The polynomial distance is the L2 distance between the coefficients of the degree-4 polynomial that describes the instantaneous processing of the (optimal) linear self-attention layer and the trained RNN.
### Teacher identification
In our first experiment, we train a student RNN (\(|x|=4\), \(|h|=100\) and \(|y|=4\)) to emulate the behavior of a linear self-attention layer with weights sampled from a normal distribution. The low training loss, reported in Table 1, highlights that the student's in-distribution behavior aligns with the teacher's. However, this is insufficient to establish that the student implements the same function as the teacher. The strategy we adopt to show functional equivalence is as follows: First, we observe that only perfect memory neurons (\(\lambda=1\)) and perfect forget neurons (\(\lambda=0\)) influence the network output. Additionally, each of these groups of neurons receives all the information needed to linearly reconstruct resp. the key-values and the queries from the input. Finally, we show that the output gating and the decoder matrix accurately multiply accumulated key-values with current queries.
After the learning process, a significant proportion of the weights in the input and output gating and the readout becomes zeros. Consequently, we can eliminate neurons with input or output weights that are entirely zeros, thereby preserving the network's function. By doing so, we can remove \(86\) out of the \(100\) hidden neurons and \(87\) out of the \(100\) pre-readout neurons. After having permuted rows in the two gating mechanisms and reordered hidden neurons, we plot the resulting weights on Figure 2.A. Consistently with our construction, only recurrent neurons with \(\lambda=0\) or \(\lambda=1\) contribute to the network's output. The key-values neurons receive a polynomial of degree \(2\) without any term of degree \(1\) as the last column of \(W_{\rm m}^{\rm in}\) and \(W_{\rm x}^{\rm in}\) is equal to zero for those units, and the query neurons a polynomial of degree \(1\). The learning process discovers that it can only use \(d(d+1)/2=10\) neurons to store key-values, similar to our compact construction. We show in Table 1 that it is possible to linearly reconstruct the key-values from those \(10\) neurons perfectly, as well as the queries from the \(4\) query neurons. By combining this information with the fact that the \(\lambda\)s are zeros and ones, we deduce that the cumulative key-values \(\sum_{t^{\prime}\leq t}v_{t^{\prime}}k_{t^{\prime}}^{\top}\) can be obtained linearly from the key-values' hidden neurons, and the instantaneous queries \(q_{t}\) from the query neurons.
Additionally, the output gating combined with the linear readout can multiply the key-values with the queries. Since we have already confirmed that the temporal processing correctly accumulates key-values, our focus shifts to proving that the instantaneous processing of the gated RNN matches the one of the attention layer across the entire input domain. Given that both architectures solely employ linear combinations and multiplications, their instantaneous processing can be expressed as a polynomial of their input. The one of linear self-attention, \((W_{V}x)(W_{K}x)^{\top}(W_{Q}x)\), corresponds to a polynomial of degree \(3\), whereas the one of the gated RNN, \(g^{\mathrm{out}}(g^{\mathrm{in}}(x))\), corresponds to one of degree \(4\). By comparing these two polynomials, we can compare their functions beyond the training domain. For every one of the four network outputs, we compute the coefficients of terms of degree \(4\) or lower of their respective polynomials and store this information into a vector. We then calculate the normalized Euclidean distance between these coefficient vectors of the linear self-attention layer and the gated RNN, and report the average over all 4 output units in Table 1. The evidence presented thus far enables us to conclude that the student network has correctly identified the function of the teacher.

Figure 2: Teacher-student experiment (\(d=4\)). **(A)** Weights of the student RNN after post-processing. The predictions yielded by those weights closely match the ones of the linear self-attention teacher. **(B)** Value \(W_{V}\), key \(W_{K}\) and query \(W_{Q}\) weight matrices of the teacher considered in this example; all are invertible. **(C)** For each output coordinate \(i\) of the network, the kernels generated by the last 3 lines (11, 12 and 13) of the output gating, once linearly combined through the decoding matrix \(D\), are all proportional to the same kernel, which can be generated in a way that is coherent with our construction. In all the weight matrices displayed here, zero entries are shown in grey, blue denotes positive entries, and red negative ones.
While the majority of the weights depicted in Figure 2.A conform to the block structure characteristic of our construction, the final three rows within the output gating matrices deviate from this trend. As shown in Figure 2, these three rows can be combined into a single row matching the desired structure. More details about this manipulation can be found in Appendix B.2.
### Identification requires mild overparametrization
The previous experiment shows that only a few neurons in a network of \(100\) hidden neurons are needed to replicate the behavior of a self-attention layer whose input size is \(d\). We therefore wonder if identification remains possible when decreasing the number of hidden and pre-output gating neurons the student has. We observe that mild overparametrization, around twice as many neurons as the actual number of neurons required, is needed to reach identification. We report the results in Figure 3.A.
### Nonlinearity makes identification harder
We now move from our simplified class of gated RNNs and seek to understand how our findings apply to LSTMs, GRUs, and LRUs. We use the following architecture for those three layers: a linear embedding layer projects the input to a latent representation, we then repeat the recurrent layer once or twice, and finally apply a linear readout. While those layers are often combined with layer normalization, dropout, or skip connections in modern deep learning experiments, we do not include any of those here to stay as close as possible to the teacher's specifications. In an LRU layer, the input/output dimension differs from the number of hidden neurons; we here set all those dimensions to the same value for a fair comparison with LSTMs and GRUs. We compare these methods to the performance of our simplified gated RNNs, with both diagonal (as in Equation 2) and dense linear recurrent connectivity.

Figure 3: Gated RNNs learn compressed representations. **(A)** In the teacher-student experiment of Section 3, in which there is no clear structure within the attention mechanism that the RNN can extract, slight overparametrization is needed in order to identify the teacher. **(B)** In the linear regression task of Section 4, the linear attention mechanism needed to solve the task optimally has a sparse structure that the RNN leverages. Identification is thus possible with much smaller networks. The quantity reported in B is the difference between the prediction loss of the RNN and the optimal loss obtained after one step of gradient descent. We use the same input dimension \(d=6\) to make the two plots comparable.
We report the results in Figure 4 for \(d=6\). While diagonal connectivity provides a useful inductive bias for learning to mimic linear self-attention, it is not strictly needed, as the performance of its dense counterpart highlights. It is theoretically possible to identify the teacher with one LSTM layer. However, gradient descent does not find such a solution, and the performance of LSTMs is close to that of GRUs, which cannot implement attention. Motivated by our construction, we slightly change the LRU architecture and add a nonlinear input gating to each layer. We find that this significantly improves its ability to mimic attention. We extensively compare different LRU layers in Appendix B.
## 4 Attention-based in-context learning emerges in trained RNNs
The previous section shows that gated RNNs learn to replicate a given linear self-attention teacher. We now demonstrate that they can find the same solution as linear self-attention when both are learned. To that end, we study an in-context regression task in which the network is shown a few input-output pairs and later has to predict the output value corresponding to an unseen input. Linear self-attention is a particularly beneficial inductive bias for solving this task. When the input-output mapping is linear, von Oswald et al. (2023) have shown that linear self-attention implements one step of gradient descent.
### In-context linear regression
Linear regression consists in estimating the parameters \(W^{*}\in\mathbb{R}^{d_{y}\times d_{x}}\) of a linear model \(y=W^{*}x\) from a set of observations \(\{(x_{t},y_{t})\}_{t=1}^{T}\) that satisfy \(y_{t}=W^{*}x_{t}\). The objective consists in finding a parameter \(\hat{W}\) which minimizes the squared error loss \(L(W)=\frac{1}{2T}\sum_{t=1}^{T}\|y_{t}-Wx_{t}\|^{2}\). Given an initial estimate of the parameter \(W_{0}\), one step of gradient descent on \(L\) with learning rate \(T\eta\) yields the weight change
\[\Delta W_{0}=\eta\sum_{t=1}^{T}(y_{t}-W_{0}x_{t})x_{t}^{\top}. \tag{3}\]
In the in-context version of the task, the observations \((x_{t},y_{t})_{1\leq t\leq T}\) are provided one after the other to the network, and later, at time \(T+1\), the network is queried with \((x_{T+1},0)\) and its output regressed against \(y_{T+1}\). Under this setting, von Oswald et al. (2023) showed that if all bias terms are zero, a linear self-attention layer learns to implement one step of gradient descent starting from \(W_{0}=0\) and predict through
\[\hat{y}_{T+1}=(W_{0}+\Delta W_{0})x_{T+1}=\eta\sum_{t=1}^{T}y_{t}x_{t}^{\top} x_{T+1}. \tag{4}\]
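The equivalence between one gradient descent step from \(W_{0}=0\) and the attention-style prediction of Equation 4 can be verified numerically. The following is our own NumPy sketch (not from the paper; the learning rate value is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
dx, T, eta = 3, 12, 0.05
Wstar = rng.normal(size=(dx, dx))  # ground-truth linear map (here d_y = d_x)
xs = rng.normal(size=(T + 1, dx))  # T observations + 1 query input
ys = xs @ Wstar.T                  # y_t = W* x_t

# One gradient step on L(W) = 1/(2T) sum_t ||y_t - W x_t||^2,
# starting from W_0 = 0, with learning rate T * eta (Eq. 3)
grad = -(1.0 / T) * sum(np.outer(ys[t], xs[t]) for t in range(T))
W1 = -T * eta * grad               # = eta * sum_t y_t x_t^T

# Attention-style prediction of Eq. 4 for the query x_{T+1}
y_hat = eta * sum(ys[t] * (xs[t] @ xs[T]) for t in range(T))

assert np.allclose(W1 @ xs[T], y_hat)
```

The prediction is a sum of past targets weighted by the dot products \(x_{t}^{\top}x_{T+1}\), which is exactly what a linear self-attention layer computes when values carry \(y_{t}\) and keys/queries carry \(x_{t}\).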
Figure 4: Comparison of the validation loss obtained by different gated recurrent networks architectures in **(A)** the teacher-student task of Section 1.2 and **(B)** the in-context linear regression of Section 4. The construction baseline corresponds to the gated RNN of Eq. 2, with diagonal or dense connectivity. We use the default implementation of LSTMs and GRUs, and slightly modify the LRU architecture to reflect our construction better.
In the following, we show that gated RNNs also learn to implement the same algorithm and leverage the sparse structure of the different attention matrices corresponding to gradient descent to learn a more compressed representation than the construction one.
### Gated RNNs learn to implement gradient descent
We now train gated RNNs as in Equation 2 to solve the in-context linear regression task, see Appendix C.1 for more details. We set the number of observations to \(T=12\) and set the input and output dimensions to \(3\) so that \(d=6\). Once learned, the RNN implements one step of gradient descent with the optimal learning rate, which is also the optimal solution one layer of linear self-attention can find (Mahankali et al., 2023). Several pieces of evidence back up this claim: the training loss of the RNN after training (\(0.0945\)) is almost equal to that of an optimal step of gradient descent (\(0.0947\)), and the trained RNN implements the same instantaneous function, as the polynomial analysis of Table 2 reveals.
Linear self-attention weights implementing gradient descent have a very specific sparse structure (von Oswald et al., 2023). In particular, many key-value entries are always 0, so the construction contains many dead neurons. This leads us to wonder whether gated RNNs would pick up this additional structure and learn compressed representations. To test that, we vary the gated RNN size and report in Figure 3.B the difference between the final training loss and the loss obtained after one optimal gradient descent step. We observe a phase transition similar to the one in the teacher-student experiment, this time happening for a much smaller number of neurons than our construction specifies. Gated RNNs thus learn a more compressed representation than the one naively mimicking self-attention. This result provides some hope regarding the poor \(\mathcal{O}(d^{4})\) scaling underlying our construction: in situations that require an attention mechanism with sparse \((W_{V},W_{K},W_{Q})\) matrices, gated RNNs can implement attention with far fewer neurons. A precise understanding of how much compression is possible in practical scenarios requires further investigation.
### Nonlinear gated RNNs are better in-context learners than one step gradient descent
Finally, as a side question, we compare the in-context learning abilities of the nonlinear gated RNN architectures: LSTMs, GRUs, and LRUs. Although not the main focus of our paper, this puts our previous results in perspective. In particular, we are interested in whether similarity with attention correlates with in-context learning performance, as attention has been hypothesized to be a key mechanism for in-context learning (Olsson et al., 2022; Garg et al., 2022; von Oswald et al., 2023). We report our comparison results in Figure 4, measuring the loss on weights \(W^{*}\) drawn from a distribution with double the variance of the one used to train the model.
Overall, we find that nonlinearity greatly helps and enables nonlinear gated RNN architectures to outperform one gradient descent step when given enough parameters, suggesting that they implement some more sophisticated mechanism than vanilla gradient descent. Surprisingly, while the GRU is the architecture that is the furthest away from attention, it performs the best in the task. Within the different LRU layers we compare, we find a high correlation between in-context learning abilities and closeness to attention, c.f. Figure 5 in the appendix. In particular, we observe a massive performance improvement from the vanilla LRU architecture to the ones additionally including input gating to match our construction more closely.

\begin{table}
\begin{tabular}{c c c} \hline \hline Term & RNN & GD \\ \hline \(x_{1}^{2}y_{1}\) & \(6.81\times 10^{-2}\pm 8.52\times 10^{-5}\) & \(6.76\times 10^{-2}\) \\ \(x_{2}^{2}y_{1}\) & \(6.82\times 10^{-2}\pm 6.40\times 10^{-5}\) & \(6.76\times 10^{-2}\) \\ \(x_{3}^{2}y_{1}\) & \(6.82\times 10^{-2}\pm 5.56\times 10^{-5}\) & \(6.76\times 10^{-2}\) \\ residual norm & \(1.35\times 10^{-3}\pm 1.97\times 10^{-4}\) & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The coefficients of the instantaneous polynomial implemented by the first output unit of a trained RNN on the in-context linear regression task match one optimal step of gradient descent, averaged over 4 seeds. The "residual norm" measures the norm of the polynomial coefficients, excluding the ones appearing in the table. Those coefficients are all vanishingly small. The optimal GD learning rate is obtained analytically (\(\eta^{*}=(T+d_{x}-1/5)^{-1}\)), c.f. Appendix C.2.
## 5 Discussion
Our study reveals a closer conceptual relationship between RNNs and Transformers than commonly assumed. We demonstrate that gated RNNs can theoretically and practically implement linear self-attention, bridging the gap between these two architectures. Moreover, while Transformers have been shown to be powerful in-context learners (Brown et al., 2020; Chan et al., 2022), we find that RNNs excel in toy in-context learning tasks and that this performance is partly uncorrelated with the architecture's inductive bias toward attention. This highlights the need for further investigation of the differences between RNNs and Transformers in controlled settings, as also advocated by Garg et al. (2022).
Our results partly serve as a negative result: implementation of attention is possible but requires squaring the number of parameters attention has. We have shown that gated RNNs can leverage possible compression, but understanding whether real-world attention mechanisms lie in this regime remains an open question. Yet, our work is of current practical relevance as it provides a framework that can guide future algorithmic developments. Bridging the gap between Transformers' computational power and RNNs' inference efficiency is a thriving research area (Fournier et al., 2023), and the link we made facilitates interpolation between those two model classes.
Finally, our work carries implications beyond deep learning. Inspired by evidence from neuroscience supporting the existence of synaptic plasticity at different timescales, previous work (Schmidhuber, 1992; Ba et al., 2016; Miconi et al., 2018) added a fast Hebbian learning rule, akin to linear self-attention, to slow synaptic plasticity with RNNs. We show that, to some extent, this mechanism already exists within the neural dynamics, provided that the response of neurons can be multiplicatively amplified or shut-off in an input-dependent manner. Interestingly, several single-neuron and circuit level mechanisms have been experimentally identified which could support this operation in biological neural networks (Silver, 2010). We speculate that such multiplicative mechanisms could be involved in implementing self-attention-like computations in biological circuitry.
### Acknowledgements
The authors thank Asier Mujika and Razvan Pascanu for invaluable discussions. This study was supported by an Ambizione grant (PZ00P3_186027) from the Swiss National Science Foundation and an ETH Research Grant (ETH-23 21-1). |
2302.01407 | Hypothesis Testing and Machine Learning: Interpreting Variable Effects
in Deep Artificial Neural Networks using Cohen's f2 | Deep artificial neural networks show high predictive performance in many
fields, but they do not afford statistical inferences and their black-box
operations are too complicated for humans to comprehend. Because positing that
a relationship exists is often more important than prediction in scientific
experiments and research models, machine learning is far less frequently used
than inferential statistics. Additionally, statistics calls for improving the
test of theory by showing the magnitude of the phenomena being studied. This
article extends current XAI methods and develops a model agnostic hypothesis
testing framework for machine learning. First, Fisher's variable permutation
algorithm is tweaked to compute an effect size measure equivalent to Cohen's f2
for OLS regression models. Second, the Mann-Kendall test of monotonicity and
the Theil-Sen estimator is applied to Apley's accumulated local effect plots to
specify a variable's direction of influence and statistical significance. The
usefulness of this approach is demonstrated on an artificial data set and a
social survey with a Python sandbox implementation. | Wolfgang Messner | 2023-02-02T20:43:37Z | http://arxiv.org/abs/2302.01407v1 | Hypothesis Testing and Machine Learning: Interpreting Variable Effects in Deep Artificial Neural Networks using Cohen's \(f^{2}\)
###### Abstract
Deep artificial neural networks show high predictive performance in many fields, but they do not afford statistical inferences and their black-box operations are too complicated for humans to comprehend. Because positing that a relationship exists is often more important than prediction in scientific experiments and research models, machine learning is far less frequently used than inferential statistics. Additionally, statistics calls for improving the test of theory by showing the magnitude of the phenomena being studied. This article extends current XAI methods and develops a model agnostic hypothesis testing framework for machine learning. First, Fisher's variable permutation algorithm is tweaked to compute an effect size measure equivalent to Cohen's \(f^{2}\) for OLS regression models. Second, the Mann-Kendall test of monotonicity and the Theil-Sen estimator is applied to Apley's accumulated local effect plots to specify a variable's direction of influence and statistical significance. The usefulness of this approach is demonstrated on an artificial data set and a social survey with a Python sandbox implementation.
Artificial Neural Networks (ANN); Deep Learning; Effect size; Explainable Artificial Intelligence (XAI); Statistical significance
Wolfgang Messner is Clinical Professor of International Business at the Darla Moore School of Business, University of South Carolina (US). He received his PhD in economics and social sciences from the University of Kassel (Germany), MBA in financial management from the University of Wales (UK), and MSc and BSc in computing science and business administration after studies at the Technical University Munich (Germany), University of Newcastle upon Tyne (UK), and Universita per Stranieri di Perugia (Italy). He can be reached at [email protected].
## 1 Introduction
Since the English statistician Sir Francis Galton defined correlation and regression as statistical concepts in 1885 and Karl Pearson developed an index to measure correlation in 1895 [1], the majority of research methodologies across disciplines aim to establish additive sufficiency, that is, find variables contributing on average linearly and significantly to an outcome [2]. Using \(p\)-values, these concepts measure the statistical significance of effects, that is, whether they exist at all [3]. In this classic statistical approach of null-hypothesis significance testing, the evaluation is more focused on a data's fitness to a model than on using the variables to predict an outcome. But journal editors are increasingly calling on researchers to evaluate the magnitude of the phenomena being studied, rather than only their statistical significance [4] in order to build "research-based knowledge" that is "more relevant and useful to practitioners" [5]. A very informative measure is Cohen's \(f^{2}\)[3, 6], which allows both an evaluation of the model's global effect size as well as variables' local effect sizes, that is, effect sizes of each variable's substantive significance within the context of the entire model. Yet, the reporting of effect size measures remains inconsistent in extant research literature [7].
While most interesting research questions are about statistical relationships between variables, many complex processes are based on data sets that show highly nonlinear behavior. But ordinary least squares (OLS) regression analysis enforces a limited view of "straight-line relationships among equal-interval scales on which the observations are assumed to be normally distributed" [3]. This shortcoming tempts researchers to try out machine learning methods, such as deep artificial neural networks, which are able to
approximate arbitrarily complex mathematical structures while being more tolerant to noise and fault at the same time [8]. Over the last few years, a variety of machine learning libraries have been developed, which allow even non-experts to apply machine learning to various problems and extract features [9, 10]. However, the uptake of machine learning models in research and real-life decision making remains limited, because those models hide their underlying mechanisms in a black box [11, 12]. It is difficult to assess the connection between independent (input) and dependent (output) variables with respect to strength, direction, and form. For that reason, a growing research on interpretable machine learning, referred to as explainable artificial intelligence (XAI), develops techniques for making systems interpretable, transparent, and comprehensible [13, 14, 15]. This includes attempts to reveal the contribution of input to output variables. Though, "a practical, widely applicable technology for explainable AI has not emerged yet" [16].
Researchers across disciplines are keen to compare the performance of traditional statistical inference models, such as OLS regression, with machine learning approaches in general and neural networks in particular [e.g., 17, 18, 19]. For their OLS regression models, the aforementioned publications report the variables' direction and strength of influence, statistical significance, and effect size. When using machine learning models, Cohen's \(f^{2}\) is unfortunately not readily accessible from commonly used explainers. Researchers therefore apply novel XAI methods, such as the variable permutation algorithm, to distinguish important variables from not so important ones [20, 21]. But these variable importance measures are largely visual and not equivalent to tests of statistical and substantive significance as used in inferential statistics. This lack of statistically appropriate measures unfortunately undermines the acceptance of machine learning and XAI for hypothesis testing in scientific experiments and research models.
Another set of recently proposed XAI methods include partial dependence [22], local dependence, and accumulated local effect plots [both 23]. These graphs visually show the influence of an independent variable on the model's predictions, thereby improving model interpretability. Accumulated local effect plots work better than partial dependence plots in situations where the independent variables are heavily correlated. Additionally, they are computationally more efficient [24]. But all these novel XAI methods are primarily designed to support exploratory analysis through visualization. And while "visualization is one of the most powerful interpretational tools" [22], a comprehensive statistical framework should rely on the trinity of quantification, visualization, and hypothesis testing [25].
In this paper, I aim to connect these contemporary XAI methods with the classic statistical inference measures of direction, significance, and strength. Building on Fisher's variable permutation algorithm, I provide researchers a straight-forward way of estimating Cohen's effect size \(f^{2}\) for the input variables of a deep learning network. Additionally, I outline how a Mann-Kendall test of monotonicity [26, 27] can be applied to Apley's accumulated local effect plots in order to specify an input variable's direction of influence and statistical significance. In doing so, I allow hypothesis testing with machine learning - in the full sense of classic inferential statistics. Using the deep learning open-source software library Keras 2.4.3 and the dalex model-agnostic explainer [24, 28], I provide a Python sandbox implementation for other researchers to familiarize themselves with the approach. I hope to provide a pathway for replacing OLS regression methods with deep learning artificial neural networks for analyzing complex and potentially non-linear regression problems in scientific experiments and research models.
The remainder of this paper is structured as follows. In the following section, I review the differences between _Inferential Statistics and Machine Learning_ and the role XAI plays in making algorithmic models interpretable. Then, I propose a new _Framework for Hypothesis Testing in Machine Learning_ and give practical interpretation guidelines. In the section _Worked Examples_, I report on two applications of the framework to an artificial and a real-world data set. This is followed by a _Discussion_ about the results and suggestions for future work
## 2 Inferential Statistics and Machine Learning
While all statistics starts with data, there are two distinct approaches to reach conclusions from data [29]. The first one, inferential statistics, assumes that a stochastic data model has generated the data. The model's parameters are estimated from the data. The direction and significance levels of these parameters are then used for validating it with hypothesis testing. When multivariate OLS linear, logistics, or Cox proportional hazards regressions are "fit to data to draw quantitative conclusions, the conclusions are about the model's mechanism, and not about nature's mechanism. It follows that if the model is a poor emulation of nature, the conclusions may be wrong" [29].
The second approach is algorithmic modelling, aka machine learning. Artificial neural networks are one machine learning method, which are capable of recognizing patterns without a priori knowledge of the nature of underlying relationships in the data [30]. A training algorithm learns from data fed into the artificial neural network, which is a flexible, dynamic, and nonlinear black-box model [31]. There are no assumptions that the data is drawn from a multivariate distribution. While conventional machine learning is still somewhat limited in its ability to work with raw input data, the new field of deep learning allows for design of neural network models that are composed of multiple processing layers, which learn representations of data with multiple levels of abstraction [32]. The first experimental deep neural networks were successfully trained by computing scientists in 2006, followed by commercially viable applications after 2010 [33, 34, 35, 36]. To summarize, research in machine learning shifts the focus from data models to the properties of algorithms, which are characterized by their convergence (during iterative training cycles) and predictive accuracy (on new and hitherto unseen data). But "despite convincing prediction results, the lack of an explicit model can make machine learning solutions difficult to directly relate to existing knowledge" [37].
Explainable artificial intelligence (XAI) aims to address this downside of machine learning by developing model explanation and interpretation techniques [13, 14, 15]. Comparable to significance and effect-size tests of variables in inferential statistics, model-agnostic explainers attempt to describe how different explanatory variables contribute to an outcome. Early XAI measures have been designed to reveal the influence of the input variables on the output variables [38], such as that the values of the network weights are looked at by the general influence measure [GIM; 39], or the weights of each input variable are sequentially zeroed to gauge the network response [SZW; 40]. But in deep learning with its complex system of weight matrices connecting numerous hidden layers, the simple GIM and SZW interpretation approaches are no longer sufficient. More recently, the XAI literature has developed alternative methods, some of which are discussed in Table 1. Originating from the need to help researchers understand the black box they have created and support them in exploratory analysis, these methods still rely heavily on visualization. In the strictest sense of classical inferential statistics, they are therefore not yet suitable for hypothesis testing.
## 3 Framework for Hypothesis Testing in Machine Learning
I will now attempt to bridge the gap between algorithmic modelling and inferential statistics. Building on the XAI measures listed in Table 1, I start with outlining an algorithm to compute Cohen's effect size measure \(f^{2}\). I continue with the Mann-Kendall monotonicity test and the Theil-Sen estimator to judge a variable's direction of influence and related statistical significance. At the end of this section, I provide guidelines for using this new framework for hypothesis testing in machine learning.
### Effect Size: Cohen's \(f^{2}\) for Machine Learning
For regression problems with both continuous independent as well as dependent variables, Cohen's \(f^{2}\) is appropriate for calculating the model's global effect size [3, 6]:
\[f^{2}=\frac{R^{2}}{1-R^{2}}\]
In this equation, \(R^{2}\) is the proportion of variance accounted for by all independent variables relative to a model without any regressors. Much more interesting, however, is a variation of Cohen's \(f^{2}\) for the effect of an individual variable of interest \(V\)[3, 42, 43]:
\[f_{V}^{2}=\frac{R^{2}-R_{V}^{2}}{1-R^{2}}\]
In this equation, \(R_{V}^{2}\) is the proportion of variance accounted for by all other variables than \(V\), again relative to a model with no regressors. Thus, the numerator \(R^{2}-R_{V}^{2}\) reflects the variance uniquely accounted for by \(V\), over and above that of all other variables. Values of 0.02, 0.15, and 0.35 are defined as thresholds for small, medium, and large values for \(f_{V}^{2}\) in the behavioral sciences; values less than 0.02 point to a trivial effect [6]. Because \(f_{V}^{2}\) is often not readily accessible from commonly used statistical software, it needs to be manually calculated by estimating the variance accounted for in two different regression models, one with all variables (\(R^{2}\)) and one without \(V\), but all other variables (\(R_{V}^{2}\)).
Because of the randomness of data initialization and resource-intensive training cycles, this loco (leave-one-covariate-out) approach is not ideal for machine learning models. Instead, I propose to apply Fisher's variable permutation algorithm [20, 21], which removes the effect of a variable _after_ the model is built. Let \(X\) be the entire data set. Through randomly reshuffling the data \(X_{V}\) associated with \(V\), a modified new data
\begin{table}
\begin{tabular}{p{142.3pt} p{142.3pt} p{142.3pt}} \hline XAI method & Description & Pros and cons \\ \hline Variable-importance measure [20] & Measures how a model’s performance changes if one or a group of input variables are removed. If a variable is important, then, after permuting the variables, the performance of the model will worsen. Outputs mean RMSE loss after a certain number of permutations. & The output of the method is easy to understand and interpret. But the RMSE loss depends on the random nature of permutations and choice of the loss function. There is no quantification of statistical significance available. \\ Partial-dependence (PD) plot [22] & Shows how the expected value of model prediction behaves as a function of an input variable. Uses the average of all ceteris-paribus (CP) profiles for all observations from the data set to plot the PD profile. & Offer a simple way to summarize the effect of a particular independent variable as a visualization. Unstable for correlated input variables [24]. No quantification or statistical significance test available. \\ Local dependence profile and accumulated local effects profile [23] & Average changes in prediction (but not predictions themselves) and accumulate them over a grid. & Fast to compute and unbiased. Across the feature space, estimates have a different accuracy. Interpretation is difficult when input variables are strongly correlated, because the effect of an individual variable cannot be isolated. An interpretation across intervals is also not permissible [41] \\ \hline \end{tabular}
\end{table}
Table 1: Extant data set-level XAI methods
set \(X^{*}_{V}\) is created, on which the prediction \(\hat{y}\) is calculated. If \(V\) is important in the model, then, after permutation, the prediction \(\widehat{y^{*}_{V}}\) should be less precise. More formally:
Footnote 1: The \(\widehat{y^{*}_{V}}\) is the mean-squared error loss function for the original data \(X\);
\begin{tabular}{l l}
1. & Compute \(MSE(\hat{y},X,y)\) as the mean-squared error loss function for the original data \(X\); \\
2. & Create a new \(X^{*}_{V}\) by randomly permuting the \(V\)-th column of \(X\); \\
3. & Compute model predictions \(\widehat{y^{*}_{V}}\) based on the modified data \(X^{*}_{V}\); \\
4. & Compute \(MSE(\widehat{y^{*}_{V}},X^{*}_{V},y)\); and \\
5. & Repeat steps 2 to 4 several times (\(B\)) to contain randomness and return the average \(\frac{1}{B}MSE\big{(}\widehat{y^{*}_{V}},X^{*}_{V},y\big{)}\). \\ \end{tabular}
The above steps can be configured and executed in available explainers, such as dalex. Because the \(R^{2}\) in the effect size formula refers to the population and not to sample values [3], I use all available data for \(X\), that is, both the training and testing data sets combined.
Next, I need to process the output and compute the \(R^{2}\) and \(R^{2}_{V}\) for calculation of \(f^{2}_{V}\). Note that the following formula are only valid because the data set \(X\) is standardized, a common requirement of machine learning algorithms:
\begin{tabular}{l l}
6. & Compute \(R^{2}=1-MSE(\hat{y},X,y)\) and \(R^{2}_{V}=1-MSE\big{(}\widehat{y^{*}_{V}},X^{*}_{V},y\big{)}\); and \\
7. & Compute \(f^{2}_{V}=\frac{R^{2}-R^{2}_{V}}{1-R^{2}}\). \\ \end{tabular}
In comparison to the loco approach, the variable permutation algorithm likely overestimates a variable's effect size. While the former removes an exogeneous explainer from the model and pretends that one is not aware of it, the latter randomly changes the explainer and thereby messes with the model's explanatory capabilities. This exaggerates the mean-squared error calculation. Additionally, different models have different average responses to reshuffling of data so that the above computed effect sizes can neither be compared between models nor against the customary benchmark thresholds of small, medium, and large effects. I therefore suggest an adjustment factor, which makes the effect sizes computed with the variable permutation process comparable to the loco approach.
First, I compute a baseline \(MSE(\widehat{y^{*}},X^{*},y)\), in which not only one variable, but all of the model's variables are randomly reshuffled. This baseline mean-squared error is a good indication of the worst possible loss function value when there is no predictive signal in the data [44]:
\begin{tabular}{l l}
8. & Create a new \(X^{*}\) by randomly permuting the all columns of \(X\); \\
9. & Compute model predictions \(\widehat{y^{*}}\) based on the modified data \(X^{*}\); \\
10. & Compute \(MSE(\widehat{y^{*}},X^{*},y)\); and \\
11. & Repeat steps 8 to 10 several times (\(B\)) to contain randomness and return the average \(\frac{1}{B}MSE(\widehat{y^{*}},X^{*},y)\). \\ \end{tabular}
Similar to permuting a single variable, I proceed to compute the baseline \(R^{2}_{base}\) for calculation of the baseline \(f^{2}_{base}\):
\begin{tabular}{l l}
12. & Compute \(f^{2}_{base}=\frac{R^{2}-R^{2}_{base}}{1-R^{2}}\) with \(R^{2}_{base}=1-MSE(\widehat{y^{*}},X^{*},y)\). \\ \end{tabular}
Next, I check how inflated \(f^{2}_{base}\) is due to the variable permutation method by relating it to the global effect size \(f^{2}=\frac{R^{2}}{1-R^{2}}\) for the entire model. Accordingly, I adjust \(f^{2}_{V}\):
\begin{tabular}{l l}
13. & Compute adjustment factor \(a=\frac{f^{2}}{f^{2}_{base}}\); and \\
14. & Adjust \(f^{2}_{V}\): = \(a\cdot f^{2}_{V}\). \\ \end{tabular}
To be clear, this last step adjusts the \(f^{2}_{V}\) depending on the model's average response to reshuffling of data in comparison to the loco approach, making the \(f^{2}_{V}\) more comparable to Cohen's original \(f^{2}_{V}\). For interpretation, I therefore suggest to continue to use Cohen's original thresholds [6], that is: trivial effect
size for \(f_{V}^{2}<0.02\), small for \(0.02\leq f_{V}^{2}<0.15\), medium for \(0.15\leq f_{V}^{2}<0.35\), and large for \(f_{V}^{2}\geq 0.35\). Note that the \(f_{V}^{2}\) can be larger than one, in contrast to \(R^{2}\).
This entire approach is model agnostic, that is, it can be used for different kind of machine learning models. While the root-mean-squared-error (RMSE) loss function is more commonly used in machine learning applications, by choosing the MSE loss function instead, the resulting \(f_{V}^{2}\) measure is more directly comparable with the \(f_{V}^{2}\) from OLS regression models. The main disadvantage, however, is the approach's dependence on the random nature of permutations. That is, for different runs, different \(f_{V}^{2}\) values will be reported. Though, randomness can be contained and accuracy enhanced through several permutation rounds (that is, repeating steps 2-4 and 8-10 several times, e.g., \(B=50\)) and using the average (see steps 5 and 11 above). Self-evidently, this increases the algorithm's computation time.
### Direction of Influence: Mann-Kendall Monotonicity Test and Theil-Sen Estimator
The effect size measure \(f_{V}^{2}\) described above does not provide any information about the direction of the effect. Yet, it is quite natural to ask whether the dependent variable's values, on average, are going up, down, or staying the same. Further, it is quite natural to inquire if this trend is monotonous, that is, present across the entire range of the values of an independent variable. To answer these questions in a hypothesis-testing process, I suggest to exploit the accumulated local effect plots [23], apply the nonparametric Mann-Kendall test of monotonicity [26, 27], and then quantify the effect with the Theil-Sen estimator [45].
Apley's accumulated local effect plots are visualizations that increase the interpretability and transparency of supervised machine learning models by summarizing the influence of an independent variable on the model's predictions. As opposed to the more popular (and older) partial dependence plots [22], they do not require an unreliable extrapolation with other potentially correlated independent variables [24].
The Mann-Kendall test assesses and defends the connection between independent and dependent variables by analyzing the sign of the difference between dependent variables associated with a higher independent variable and a lower independent variable, resulting in a total of \(\frac{n(n-1)}{2}\) pairs of data, where \(n\) is the resolution of the independent variable (that is, the number of observations available for the range of the independent variable). The Mann-Kendall test statistic is applicable in many situations, because it is "relatively effective and robust" [46], invariant to data transformation, such as a log-transformation, and does not require the data to conform to any particular distribution [47]. However, the test is not robust against serial correlation; this can lead to an over-rejection of the null hypothesis of no trend (type I error). I therefore use the Hamed and Rao modified Mann-Kendall test with a lag of three, which addresses serial correlation by applying a variance correction approach [48, 49]. Because all corrections increase the type II error, the Mann-Kendall test is "best viewed as exploratory [...] [and] most appropriately used to identify stations where changes are significant or of large magnitude and to quantify these findings" [50].
Next, I assess the variable's direction of influence with the Theil-Sen estimator, again based on the predictions in the accumulated local effect plot. This estimator is a robust method for determining the slope of a linear regression equation linking two variables based on the median of the slopes of all pairs of data points. Specifically, the estimator is more robust to outliers, non-normality, and heteroscedasticity than OLS linear regression. The Theil-Sen estimator computes the confidence interval via bootstrapping [45], from which I calculate the corresponding \(p\)-value [following the method outlined by 51].
### Practical and Interpretation Guidelines
Using dalex as a model-agnostic explainer, I have implemented this framework for hypothesis testing as a sandbox with Keras and Python. I make it available in the Online Supplement so that other researchers
can familiarize themselves with it and use it in their own work. After a neural network model has been trained on the data, the dalex explainer object creates a wrapper around the model. This wrapped model is then examined with the two steps detailed above.
For each input variable, the following measures are calculated and listed: Cohen's effect size \(f\hat{y}^{2}\); \(p\)-value for the monotonicity test; slope with \(p\)-value for the direction of influence. Table 2 provides a suggestion on how to analyze and report these measures in a research paper. If the effect is not monotonous, continuing the exploration of the accumulated local effect plot with a change point analysis can provide additional insights [52, 53].
Following established practice for reporting variable-specific effect sizes in a research report when the context is clear, I suggest to drop the \(V\) in the superscript, that is, simply use \(f^{2}\) instead of \(f_{V}^{2}\). In the worked examples in the next section, I will adhere to this convention.
## 4 Worked Examples
I now put the hypothesis-testing framework into action on one artificial and one real data set. The artificial data set is based on an algebraic functional form, the Coulomb equation. The real-world data set is from a social science survey and deals with the influence of selected personal factors on subjective well-being. The difference in the structure of the data sets is in the number of input variables, as well as the
\begin{table}
\begin{tabular}{c c c c c c} \hline Monotonicity & Direction of & & & Effect size & \\ significant (\(p\)-value of Mann-Kendall test)? & influence & \(f_{V}^{2}<0.02\) & \(0.02\leq f_{V}^{2}<0.15\) & \(0.15\leq f_{V}^{2}<0.35\) & \(f_{V}^{2}\geq 0.35\) \\ & & slope and \(p\)-value? & (trivial) & (small) & (medium) & (large) \\ \hline \(p\leq 0.05\) & \(s<0\) A & The independent & The independent & The independent & The independent & The independent \\ & \(p\leq 0.05\) & variable exerts only a trivial effect on the dependent variable, which is monotonic, negative, and statistically significant. & variable exerts a & variable exerts a & variable exerts a & variable effect on the \\ & & The independent & The independent & The independent & medium monotonic & large effect on the \\ & & The independent & The independent & The independent & variable exerts a & variable, which is \\ & & The dependent variable exerts only a trivial effect on the dependent variable, which is monotonic, positive, and statistically significant. & variable, which is & variable, which is \\ & & The independent & The independent & positive and & positive and \\ & & The independent & The independent & statistically & statistically \\ & variable exerts only a trivial effect on the dependent variable, which is positive and statistically significant. & significant. & significant. & significant. \\ \(p>0.05\) & The independent & The independent & The independent & The independent & The independent \\ & variable exerts only a trivial effect on the dependent variable, which is not monotonic. & variable exerts a & medium effect on & large effect on the \\ & & The dependent variable, dependent variable, which is not monotonic. & dependent & dependent \\ & & Which is not monotonic. & variable, which is & variable, which is \\ & & monotonic. & monotonic. & not monotonic. & not monotonic. \\ \hline \end{tabular} This table provides guidelines for using the machine learning framework for hypothesis testing. 
In addition to the write-up, the following statistics should be reported: Mann-Kendall p; Effect size \(f_{V}^{2}\); Theil-Sen slope s, p.
\end{table}
Table 2: Reporting guidelines for hypothesis testing
complexity of the functional form and the correlations between the input variables. Moreover, the real data set includes a high but unknown level of noise.
The Online Supplement contains the data, the Keras/Python code for constructing and training the deep networks, and the Python code for using the hypothesis-testing framework.
### First Example: Coulomb's Law Equation
This first data set depicts the behavior of a rather simple functional form. Coulomb's inverse square law - also known as Coulomb's law equation - describes the electrostatic force \(F\) between two charged objects. The main features of the electrostatic force are the existence of two types of charges \(q_{1}\) and \(q_{2}\), the fact that like charges repel and unlike charges attract, and the decrease of force with separation \(r\) between the two objects. The underlying mathematical formula was proposed by French physicist Charles Coulomb (1736-1806):
\[F=k\ \frac{q_{1}\,q_{2}}{r^{2}}\ \ \text{with}\ \ \ k=\frac{1}{4\ \pi\ \varepsilon\ 8.854\cdot 10^{-12}}\,.\]
In this formula, \(k\) is the electric force (Coulomb's) constant. It is dependent on the relative permittivity \(\varepsilon\) of the medium, for example 1 for vacuum, 9.08 for the organochlorine compound Dichloromethane CH\({}_{2}\)Cl\({}_{2}\), and 80.1 for water H\({}_{2}\)O (at a temperature of \(20^{o}\) C).
I create a random data set of about 125,000 tuples \(\in[0;\ 1]\). I then scale the data so that \(q_{1},q_{2}\in[0;\ 10]\); \(r\in[0;\ 1.5]\); and \(\varepsilon\in[1;\ 80]\). The calculated output variable falls into the range of \(F\in[0;\ 1.931]\). Because the data follows an algebraic functional form, I can immediately write up four hypotheses:
**H1**: An increase of the charge \(q_{1}\) is associated with an increase in the electrostatic force \(F\) between the two charges.
**H2**: An increase of the charge \(q_{2}\) is associated with an increase in the electrostatic force \(F\) between the two charges.
**H3**: More separation \(r\) between the two charges is associated with a decrease in the electrostatic force \(F\) between the two charges.
**H4**: A larger permittivity \(\varepsilon\) is associated with a decrease in the electrostatic force \(F\) between the two charges.
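The data-generation step described above can be sketched as follows. This is a minimal sketch: the random seed and exact sampling scheme are assumptions, and physical units are glossed over (the paper's reported output range \(F\in[0;\ 1.931]\) implies an additional normalization not shown here).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 125_000

# Draw uniform tuples in [0, 1] and scale to the stated ranges
q1 = rng.uniform(0, 1, n) * 10          # charge q1 in [0, 10]
q2 = rng.uniform(0, 1, n) * 10          # charge q2 in [0, 10]
r = rng.uniform(0, 1, n) * 1.5          # separation r in [0, 1.5]
eps = 1 + rng.uniform(0, 1, n) * 79     # relative permittivity in [1, 80]

eps0 = 8.854e-12
k = 1.0 / (4 * np.pi * eps * eps0)      # Coulomb's constant in the medium
F = k * q1 * q2 / r**2                  # electrostatic force
```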
#### 4.1.1 Computational aspects of the deep network
To test these hypotheses, I build a deep artificial neural network with five layers: The first (input) layer has four nodes and the fifth (output) layer one node, matching the data set. I preprocess the variables so that they have a mean value of zero and a standard deviation of one. There are three hidden layers with fifty nodes each. For the input and hidden layers, the rectified linear unit (ReLU) serves as an activation function with a He normal initialization for the weights [54]. I initialize the weights leading to the output layer with the Glorot uniform technique [34]. Because this is ultimately a scalar regression problem, I match the network architecture with a mean squared error loss function [55] and use the efficient ADAM (adaptive moment estimation) optimization algorithm [56]. During training, I randomly remove a subset of nodes from the network with a dropout rate of 0.001 [57]. After dropout, I impose a weight constraint of five on the remaining nodes [58]. With these hyperparameters, fifty epochs are sufficient to train the network.
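The architecture just described can be sketched as a plain-NumPy forward pass (the actual model is built and trained in Keras with an MSE loss and ADAM, per the Online Supplement; biases and the training loop are omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def he_normal(fan_in, fan_out):
    # He normal initialization: std = sqrt(2 / fan_in)
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def glorot_uniform(fan_in, fan_out):
    # Glorot uniform initialization for the output layer
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# 4 inputs -> three hidden layers of 50 ReLU nodes -> 1 linear output
sizes = [4, 50, 50, 50]
weights = [he_normal(a, b) for a, b in zip(sizes[:-1], sizes[1:])]
w_out = glorot_uniform(50, 1)

def forward(X):
    h = X
    for W in weights:
        h = np.maximum(h @ W, 0.0)     # ReLU activation
    return h @ w_out                   # linear output (scalar regression)
```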
#### 4.1.2 Results
On this trained network, I run the hypothesis-testing framework with a subset of 10,000 randomly selected tuples from the data set and \(B=50\) permutations. The results are summarized in Table 3.
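The permutation-based effect size can be sketched as below. This assumes the unadjusted Cohen formula \(f^{2}=(R^{2}_{\text{full}}-R^{2}_{\text{reduced}})/(1-R^{2}_{\text{full}})\), with the mean \(R^{2}\) over \(B\) permutations of variable \(j\) standing in for the reduced model; the paper's additional adjustment steps are not reproduced here.

```python
import numpy as np

def r2_score(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

def permutation_f2(predict, X, y, j, B=50, seed=0):
    """Cohen's f2 for variable j, estimated by permuting column j B times."""
    rng = np.random.default_rng(seed)
    r2_full = r2_score(y, predict(X))
    r2_perm = []
    for _ in range(B):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break the variable's link to y
        r2_perm.append(r2_score(y, predict(Xp)))
    return (r2_full - np.mean(r2_perm)) / (1 - r2_full)
```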
The \(R^{2}\) of the full model is 0.968. Charge \(q_{1}\) exerts a large monotonic effect on the electrostatic force \(F\), which is positive and statistically significant, Mann-Kendall \(p<0.001\); \(f^{2}=8.210\); Theil-Sen \(s=0.007;p<0.001\). As expected, the effect of charge \(q_{2}\) is similar, Mann-Kendall \(p<0.001\); \(f^{2}=8.115\); Theil-Sen \(s=0.008;p<0.001\). This supports H1 and H2. In support of H3, the separation \(r\) exerts a large monotonic effect on \(F\), which is negative and statistically significant, Mann-Kendall \(p<0.001;\ f^{2}=9.629\); Theil-Sen \(s=-0.006;p<0.001\). Backing H4, the permittivity \(\varepsilon\) exerts a large monotonic effect on \(F\), which is negative and statistically significant, Mann-Kendall \(p<0.001;\ f^{2}=23.166\); Theil-Sen \(s=-0.004;p<0.001\).
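The two trend statistics reported above can be computed with a short NumPy sketch: a Mann-Kendall test with the standard normal approximation (no tie correction) and a simplified Theil-Sen estimator (median of pairwise slopes, without its confidence interval):

```python
import math
import numpy as np

def mann_kendall(y):
    """Mann-Kendall trend test (normal approximation, no tie correction)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s = sum(np.sign(y[j] - y[i]) for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return s, p

def theil_sen(x, y):
    """Theil-Sen slope: median of all pairwise slopes."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if x[j] != x[i]]
    return float(np.median(slopes))
```

In the framework, these statistics are evaluated on a variable's accumulated local effect profile, ordered along the variable's grid.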
A visualization of the variables' accumulated local profiles in Figure 1 provides additional confirmation. The profiles of \(q_{1}\) and \(q_{2}\) are practically straight-line, which is a sign of their linear
Figure 1: Accumulated local profiles (Coulomb’s law)
\begin{table}
\begin{tabular}{l l l l l l} \hline Variable & \(f_{V}^{2}\) & Monotonicity & \(p\) & Theil-Sen & \(p\) \\ & & (Mann-Kendall) & & slope & \\ \hline Charge \(q_{1}\) & 8.210 & Increasing & \(<0.001\) & 7.387\(\cdot\)10\({}^{-3}\) & \(<0.001\) \\ Charge \(q_{2}\) & 8.115 & Increasing & \(<0.001\) & 7.772\(\cdot\)10\({}^{-3}\) & \(<0.001\) \\ Separation \(r\) & 9.629 & Decreasing & \(<0.001\) & -6.140\(\cdot\)10\({}^{-3}\) & \(<0.001\) \\ Permittivity \(\varepsilon\) & 23.166 & Decreasing & \(<0.001\) & -4.296\(\cdot\)10\({}^{-3}\) & \(<0.001\) \\ \hline \end{tabular} This table reports the results of the hypothesis testing framework for example 1, Coulomb’s law. The estimation is done on 10,000 randomly selected data records (about 8% of the entire data set) with \(B=50\) permutations; \(R^{2}=0.965\).
\end{table}
Table 3: Variable effects (Coulomb’s law)
relationships in Coulomb's law. The profiles of \(r\) and \(\varepsilon\) are monotonic hyperbolae, which reflects them being denominators in Coulomb's law.
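An accumulated local effect profile of the kind plotted in Figure 1 can be sketched as below. This is a simplified version of the first-order ALE estimator (Apley and Zhu); the bin count and the mean-centering are assumptions, not necessarily the paper's exact settings.

```python
import numpy as np

def ale_profile(predict, X, j, n_bins=20):
    """First-order accumulated local effects of feature j (simple sketch).

    Within each quantile bin of feature j, average the prediction change
    obtained by moving every point in the bin to the bin edges, then
    accumulate these local means across bins.
    """
    z = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))
    effects = []
    for k in range(n_bins):
        mask = (X[:, j] >= z[k]) & (X[:, j] <= z[k + 1])
        if not mask.any():
            effects.append(0.0)
            continue
        lo, hi = X[mask].copy(), X[mask].copy()
        lo[:, j], hi[:, j] = z[k], z[k + 1]
        effects.append(np.mean(predict(hi) - predict(lo)))
    ale = np.cumsum(effects)
    return z[1:], ale - ale.mean()   # center the profile
```

For a purely linear dependence, the resulting profile is a straight line, as seen for \(q_{1}\) and \(q_{2}\) above.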
### Second Example: Subjective Well-Being
The second data set consists of 25,700 responses to a social science survey administered in Germany, the European Social Survey [ESS; 59]. I am concerned with how personal value priorities, opinions, and other individual factors affect personal well-being [60, 61]. Because well-being is an emotional mental state created out of the combination of basic psychological operations and ingredients, which can be mapped to intrinsic networks in the human brain [62], a representation with a deep learning artificial neural network is especially appropriate.
Various forms of well-being are empirically separate [63], and thus a delineation of the concept as I use it here is important. Following past research into well-being [e.g., 64, 65] as well as pertinent applied literature [e.g., 66, 67], I view subjective well-being as including both cognitive and affective components. The cognitive component is someone's overall life satisfaction, and the affective component is the difference in how frequently positive and negative emotions are experienced. Taken together, the cognitive and affective components commonly equate with what is called everyday happiness and satisfaction [65, 68]. The ESS measures cognitive and affective well-being with two items on an 11-point Likert scale. Following guidelines for happiness research [67, 69], I compute the output variable subjective well-being as a simple arithmetic average of these two items.
As input variables, I select 159 variables from the ESS describing the individual (see the detailed variable list in the Online Supplement [70]). For my exemplary hypotheses, I select four classic variables that previous happiness research has shown to be related to subjective well-being [71]. Income and education differences between individuals are usually correlated with reports of well-being, but previous research has shown that such positional differences explain no more than 10% of the variance in subjective well-being [72]. The ESS variable _hincfel_ asks for the respondent's feeling about the household income nowadays (4: very difficult; 3: difficult; 2: coping; and 1: living comfortably on present income), and _eduyrs_ contains the number of years of education completed. Well-being includes positive elements that transcend economic prosperity [73], with health perhaps the most important of all dimensions shaping the quality of life [74]. The common variance explained by variables related to life ability is usually around 30% [72]. In this vein, the variable _health_ reflects a respondent's subjective health in general (5: very bad to 1: very good). Trust is an essential element in any social setting and drives economic effects, health, and well-being [e.g., 75, 76]. The variable _ppltrst_ asks respondents if most people can be trusted, or if one cannot be too careful in dealing with people (0: you can't be too careful; 10: most people can be trusted).
Taken together, this leads to the following hypotheses:
**H1**: Higher income is associated with higher levels of subjective well-being.
**H2**: Higher levels of education are associated with higher levels of subjective well-being.
**H3**: Better general health is associated with higher levels of subjective well-being.
**H4**: More trust in other people is associated with higher levels of subjective well-being.
#### 4.2.1 Computational aspects of the deep network
To test the hypotheses, I build a deep network with 159 nodes in the input layer describing an individual survey respondent, one node in the output layer representing this person's well-being, and four hidden layers with 500 nodes each. I transform categorical variables into binary placeholder variables (see Online Supplement) and rescale all other variables to an interval of [0; 1]. I fill occasional missing values with the k-Nearest Neighbors approach (\(k=5\); points weighted by the inverse of their distance). I preprocess all input and output data to give a mean value of zero and a standard deviation of one. For the input and hidden layers, the ADAM optimization algorithm together with a ReLU activation function with He normal
initialization is used. For the output layer, a linear function with a Glorot uniform weight initialization is deployed. To improve the model's generalization capability for small data sets, I add an L\({}_{2}\) regularizer with a very small weight decay parameter of \(\lambda=0.001\) to each layer and the bias [77]. During training, I randomly remove a subset of nodes from the network with a dropout rate of 0.1; I impose a weight constraint of three on the remaining nodes after the dropout. Because the variables are all derived from answers to Likert-type survey questions, I add zero-centered Gaussian noise (standard deviation of 0.01) as random data augmentation to the input and output layers. This aids generalization and fault tolerance through embodying a smoothness assumption [33, 78]. Ten epochs are sufficient to train the network.
#### 4.2.2 Results
On this trained network, I run the hypothesis-testing framework with a subset of 10,000 randomly selected tuples from the data set and 50 permutations. The results are summarized in Table 4.
The \(R^{2}\) of the full model is 0.527. Variable _hincfel_ exerts a small monotonic effect on _well-being_, which is negative and statistically significant, Mann-Kendall \(p<0.001\); \(f^{2}=0.116\); Theil-Sen \(s=-6.562\cdot 10^{-3}\); \(p<0.001\). Because _hincfel_ is reverse scored, this supports H1, that is, higher income is generally associated with higher levels of subjective well-being. Contrary to H2, longer time spent on education (_eduyrs_) has a negative, monotonic and statistically significant effect on subjective well-being, Mann-Kendall \(p<0.001\); \(f^{2}=0.004\); Theil-Sen \(s=-1.510\cdot 10^{-3}\); \(p<0.001\). Because of the extremely small magnitude of the effect, this relationship is trivial and practically not relevant. "Although not unrelated, the size and statistical significance of effects are logically independent features of data from samples" [3]. The variable _health_ shows a small effect on well-being, which is negative and statistically significant, Mann-Kendall \(p<0.001\); \(f^{2}=0.079\); Theil-Sen \(s=-5.827\cdot 10^{-3}\); \(p<0.001\). Because _health_ is reverse scored, this supports H3. When people trust others (_ppltrst_), this has a monotonic effect on well-being, which is, in support of H4, positive and statistically significant, Mann-Kendall \(p<0.001\); \(f^{2}=0.006\); Theil-Sen \(s=0.136\cdot 10^{-3}\); \(p<0.001\). Similar to the variable _eduyrs_, this effect is trivial and does not have practical significance.
A visualization of the variables' accumulated local profiles in Figure 2 provides additional confirmation. The profile of _health_ clearly shows a linear relationship. The profile of _hincfel_ is slightly bent downwards (note again that _hincfel_ is reverse scored), which shows that people with high income report disproportionately higher well-being.
\begin{table}
\begin{tabular}{l l l l l l} \hline Variable & \(f_{V}^{2}\) & Monotonicity & \(p\) & Theil-Sen & \(p\) \\ & & (Mann-Kendall) & & slope & \\ \hline Income (_hincfel_) & 0.116 & Decreasing & \(<0.001\) & -6.562\(\cdot\)10\({}^{-3}\) & \(<0.001\) \\ Education (_eduyrs_) & 0.004 & Decreasing & \(<0.001\) & -1.510\(\cdot\)10\({}^{-3}\) & \(<0.001\) \\ Health (_health_) & 0.079 & Decreasing & \(<0.001\) & -5.827\(\cdot\)10\({}^{-3}\) & \(<0.001\) \\ Trust (_ppltrst_) & 0.006 & Increasing & \(<0.001\) & 0.136\(\cdot\)10\({}^{-3}\) & \(<0.001\) \\ \hline \end{tabular} This table reports the results of the hypothesis testing framework for example 2, subjective well-being. The estimation is done on 10,000 randomly selected data records (about 39% of the entire data set) with \(B=50\) permutations; \(R^{2}=0.527\).
\end{table}
Table 4: Variable effects (subjective well-being)
## 5 Discussion
Machine learning models are not commonly used by researchers for hypothesis testing. While they would be much better suited to address many interesting research questions about complex statistical relationships than multivariate OLS regression models, they hide their statistical reasoning in a black box. XAI has developed techniques to assess the connection between input and output variables with respect to strength, direction, and form. While these novel XAI methods are mainly geared towards distinguishing important variables from not so important ones using visualization, they do not relate well to classical measures of inferential statistics. In fact, "in machine learning, the concept of interpretability is both important and slippery" [79]. To date, this has unfortunately weakened the acceptance of machine learning models for hypothesis testing in scientific experiments and research models.
In this paper, I have outlined an approach to estimate Cohen's \(f^{2}\) for the input variables of a deep learning network and to show their direction of influence and statistical significance. This now allows researchers to move beyond multivariate OLS regression and analyze regression problems with more complex deep learning networks - using and reporting information they are familiar with from inferential statistics. In the context of machine learning, it has been shown "that explanations fare better when they require less cognitive effort" [80] on behalf of the researcher. To aid my approach, I present Python software, which builds upon Keras, a deep learning open-source software library, and extends dalex, a model-agnostic explainer interface.
While I have tested my hypothesis-testing framework on several data sets and documented two examples in this paper, there is more research to be done:
Figure 2: Accumulated local profiles (subjective well-being)
First, estimating the effect size with a variable permutation algorithm is not necessarily equivalent to the traditional loco approach. As I have explained in the article, the latter is not an option for machine learning. I have suggested steps to adjust the effect size calculated by the former. On the data sets that I have used during my tests, the effect size thresholds proposed by Cohen seem to be reasonable. But, clearly, this calls for further corroboration.
Second, when a machine learning model performs very poorly, the model's \(R^{2}\) can have a negative value [81]. This just indicates that the model performed poorly, but it is impossible to know exactly how badly it performed, because the lower bound for \(R^{2}\) is \(-\infty\). While hypothesis testing of a variable's effect in such a model is obviously pointless, my approach for calculating effect sizes does not explicitly take care of that (rare) situation.
Third, to summarize the influence of an independent variable on the dependent variable, I have used a Theil-Sen estimator on the variables' accumulated local effect profile. I have argued that the accumulated local effect profile is better suited than the local dependence profile or partial dependence plot because it does not contain correlated effects of other independent variables. I have converted the slope's confidence interval into a \(p\)-value for adjudicating on the slope's statistical significance. There is further investigation and experimentation to be done to confirm that approach.
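One common normal-approximation conversion from a confidence interval to a two-sided \(p\)-value is sketched below; whether it matches the exact procedure used in the framework is an assumption.

```python
import math
from statistics import NormalDist

def p_from_ci(estimate, lower, upper, level=0.95):
    """Recover a two-sided p-value from a confidence interval via the
    normal approximation: back out the standard error, form a z-score."""
    z_level = NormalDist().inv_cdf(0.5 + level / 2)   # 1.96 for a 95% CI
    se = (upper - lower) / (2 * z_level)              # implied standard error
    z = estimate / se
    return math.erfc(abs(z) / math.sqrt(2))           # two-sided p
```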
Fourth, in the context of neural networks, an independent variable is rarely important on its own [82]. Notwithstanding, I have not yet considered interactions within a group of independent variables. In multivariate OLS regression models, this frequently leads to an issue of multicollinearity. The redundant architecture of deep artificial neural networks, however, allows the network to learn potential similarities among input variables and to predict even in situations of multicollinearity [83, 84, 85]. Thus, a potential further development of the hypothesis-testing framework should address interactions between independent variables.
## Online Supplement: Code and Data Availability
The Python code for the hypothesis-testing framework, the worked examples, and the data sets are available in the online repository OSF.IO (DOI: 10.17605/OSF.IO/QDJCY).
2309.02532 | Design of Oscillatory Neural Networks by Machine Learning | Tamas Rudner, Wolfgang Porod, Gyorgy Csaba | 2023-09-05T18:58:10Z | http://arxiv.org/abs/2309.02532v1

# Design of Oscillatory Neural Networks by Machine Learning
###### Abstract
We demonstrate the utility of machine learning algorithms for the design of Oscillatory Neural Networks (ONNs). After constructing a circuit model of the oscillators in a machine-learning-enabled simulator and performing Backpropagation through time (BPTT) for determining the coupling resistances between the ring oscillators, we show the design of associative memories and multi-layered ONN classifiers. The machine-learning-designed ONNs show superior performance compared to other design methods (such as Hebbian learning) and they also enable significant simplifications in the circuit topology. We demonstrate the design of multi-layered ONNs that show superior performance compared to single-layer ones. We argue that machine learning can unlock the true computing potential of ONN hardware.
## Introduction
Throughout the history of electronics, analog computing devices were often looked at as the future of computing - only to turn out to be runners-up behind their digital counterparts [1]. Despite this history, research on analog computing is again on the rise, for multiple reasons. With the end of Dennard scaling [2], transistor resources are less plentiful, and analog devices fit sensory preprocessing functions much better than digital ones do. New sensory processing pipelines require less and less digital number crunching and more neuromorphic, edge-AI functions, where analog computing devices may have an edge. For this reason, research is flourishing in analog hardware accelerators [3] (aka neuromorphic hardware devices) that have the potential to boost the energy efficiency of AI processing pipelines by several orders of magnitude.
Among the many flavors of analog computing, Oscillatory Neural Networks (ONNs) received special attention [4]. This is due to the facts that (1) ONNs are realizable by very simple circuits - either by emerging devices or CMOS devices (2) phases and frequencies enable a rich and robust [4] representation of information (3) biological systems seem to use oscillators to process information [5] - likely for a reason.
Despite the significant current research efforts and the large literature, most ONNs seem to rely on some version of a Hebbian rule to define attractor states [6], so that the network converges to a stationary phase pattern, which is the result of the computation. The Hebbian rule is used to calculate the value of physical couplings that define the circuit function. The reliance on the Hebbian rule turns ONNs into a sub-class of classical Hopfield networks - which are not very powerful by today's standards. While there are a few ONN implementations not relying on basic Hebbian rules (notably [7]), it is likely that current ONNs do not fully exploit the potential of the hardware - due to the lack of a more powerful method to design the interconnections.
In this paper we show that a state-of-the-art machine learning method, when applied to a SPICE-like model of the circuit, significantly enhances the computational power of ONNs. Our studied system is an ONN made of resistively coupled ring oscillators [8], [9] - its circuit topology is described in Section 1. Next, in Section 2 we develop the differential equations describing the circuit and show how a machine learning algorithm can be applied to design the circuit parameters. In Section 3, we exemplify the machine-learning framework for the design of an auto-associative memory and compare it to a standard Hebbian-rule based device. Section 3.3 furthers this concept by the design of a multi-layered network, which is a two-layer classifier and achieves superb performance compared to a single-layer device.
Overall our work presents a design methodology that unlocks the true potential of oscillatory neural networks, overcoming the limitations imposed by simple learning rules. Additionally, the presented method allows for designing physically realizable structures: our networks rely on nearest-neighbor interactions, which is amenable to scaling and chip-scale realizations, and they use significantly fewer neurons than fully connected networks.
## 1 Resistively coupled ring oscillators for phase-based neuromorphic computation
It is well established that the synchronization patterns of coupled oscillators may be used for computation. The idea of using phase for Boolean computation goes back to the early days of computer science [10] and is being rediscovered these days [11]. For neuromorphic computing, the original scheme of Izhikevich [12, 13] was studied using various oscillator types and coupling schemes. A number of computing models were explored, ranging from basic convolvers [14] and pattern generators [15] to hardware for handling NP-hard problems [9, 16, 17].
To give a simple example of how ring oscillators compute in phase space, Fig. 1 shows a simple two-oscillator system. Nodes which are interconnected by a resistor will synchronize in-phase. If identical nodes (say \(V_{3}\)) are interconnected, the oscillators will run in phase. However, in a 7-inverter ring oscillator, each node is phase-shifted by an angle of \(2\pi/7\) with respect to its neighbor. So if, say, \(V_{3}\) of one oscillator is connected to \(V_{6}\) of the other oscillator, the oscillators will pull toward an anti-phase configuration. The waveforms of these two cases are illustrated in Fig. 1_b_)-_c_).
A larger network of oscillators with in-phase- or anti-phase-pulling resistors will converge toward an oscillatory ground-state configuration, which in fact maps to the solution of the Ising problem [9]. Simply put, the phase of each oscillator will converge toward a value that optimally agrees with most of the constraints imposed on the oscillator, and the dynamics of the coupled oscillator network will approximate the solution of a computationally hard optimization problem. For an Ising problem, the oscillator-oscillator couplings are part of the problem description; there is no need to calculate them.
While the Ising problem is important and shows the computational power of ONNs, an Ising solver alone is not very useful for solving most real-life, neuromorphic computing tasks. A neuromorphic computing primitive (such as a classification task) does not straightforwardly map to an Ising problem. So if the oscillator network is to be used as neuromorphic hardware, then the oscillator weights must be designed or learned to perform certain computational functions.
Most ONNs are used as auto-associative memories, making them applicable for simple pattern recognition / classification tasks.

Figure 1: Phase-based computing by two ring oscillators. _a_) the network topology; _b_) if \(R^{+}\) dominates in the coupling, the oscillators run in phase, while if \(R^{-}\) dominates (_c_)) anti-phase coupling is realized. _d_) If phases correspond to pixels of a grayscale image, the phase dynamics may be used to converge to preprogrammed patterns [8]

The weights are designed based on the Hebbian learning rule [8], [6] - and this is one of the cases when the Ising model easily maps to a neuromorphic computing model. In fact, the connection between Ising and Hopfield associative models [18], [19], [20] was made by Hopfield early on [21]. ONNs simply use oscillator phases as the state variable of Hopfield neurons.
The Hebbian rule (and even its improved variants [22], [23]) has severe limitations: the rule assumes all-to-all oscillator (neural) connections and it does not support learning on a set of training examples. Also, simple Hopfield models are not very powerful neural networks by today's standards. This is why our goal in this paper is to go beyond these limitations and apply state-of-the-art learning techniques to train ONN weights. This allows us to overcome the limitations of associative (Hopfield) type models and design ONN versions of many other neural network models.
## 2 Machine learning framework for circuit dynamics
Our methodology is to apply Backpropagation through time (BPTT) [24] to an in-silico model of the oscillators. We construct a circuit model of the coupled oscillator system, the resulting ODEs are solved, and the value of the loss function is calculated at the end of the procedure. By backpropagating the error, we can optimize the circuit parameters in such a way that the ONN solves the computational task that is defined by the loss function. Once the circuit parameters are determined via this algorithm, they can be 'hard-wired' into a circuit (ONN hardware) for an effective hardware accelerator tool.
### Computational model of resistively coupled ring oscillators
For the sake of concreteness we assume that our circuit consists of \(n\) oscillators and each oscillator is composed of \(7\) inverters. The circuit has \(k\) input nodes. We construct a simple Ordinary Differential Equation (ODE)-based circuit model based on the equations derived in [25]. Each inverter is described on a behavioral level by a \(\tanh\) nonlinearity connected to an \(RC\) delay element. This way a seven-inverter ring oscillator is modeled by seven first-order nonlinear ODEs.
The mathematical formulation consists of three parts: the internal dynamics of the oscillators (due to the inverters), the dynamics due to external signals (inputs), and the coupling dynamics. We arrive at the following ODE for the collection of voltages at all the nodes, which describes all three parts, if we write a differential equation for every node in the system using Kirchhoff's current law and assuming only resistors as couplings:
\[\frac{dV}{dt}=\frac{1}{RC}\Big{(}f\big{(}\mathbf{P}_{\pi}V\big{)}-V\Big{)}+ \frac{1}{C}\mathbf{B}^{\prime}u+\frac{1}{R_{c}C}\mathbf{C}^{\prime}V,\]
where
\[f(x)=-\tanh(ax),\]
is the simplified characteristic of an inverter with some \(a\in\mathbb{R}\). Furthermore, \(\pi\) is a permutation, such that
\[\pi=\begin{pmatrix}1&2&3&4&5&6&7\\ 7&1&2&3&4&5&6\end{pmatrix}\]
and \(\mathbf{P}\in\mathbb{R}^{(7n)\times(7n)}\) is a block matrix in which every \(7\times 7\) block in the main diagonal is the permutation matrix corresponding to \(\pi\). Basically this orders the voltage nodes in the ring oscillator to calculate the voltage differences arising between the two endpoints of the resistors placed in-between two inverters. \(\mathbf{B}^{\prime}\in\mathbb{R}^{(7n)\times k}\) is the connector matrix for the inputs. The inputs are collected in \(u\in\mathbb{R}^{k}\). \(\mathbf{C}^{\prime}\in\mathbb{R}^{(7n)\times(7n)}\) is the modified coupling matrix, which is constructed from the real, human-readable coupling matrix \(\mathbf{C}\in\mathbb{R}^{n\times n}\). The parameters \(R\), \(C\in\mathbb{R}^{+}\) are fixed for the oscillators, while the \(R_{c}\in\mathbb{R}^{+}\) coupling parameters form one of the two sets of real, to-be-learned parameters of the system, governing the whole coupling dynamics. The other set is the parameters gathered in \(\mathbf{B}^{\prime}\), which directly relate to the amplitude of the input signal (typically a sinusoidal current generator).
The entry \(\mathbf{C}_{i,j}\) is related to the coupling between oscillators \(i\) and \(j\), and the matrix is built the following way:
* All main diagonal entries are \(0\), as no oscillator is coupled to itself
* All entries in the upper triangle of the matrix are corresponding to the positive (in-phase-pulling) couplings
* All entries in the lower triangle of the matrix are corresponding to the negative (anti-phase-pulling) couplings
The construction of \(\mathbf{C}^{\prime}\) from \(\mathbf{C}\) can be done easily, algorithmically. As every positive coupling connects the 3rd nodes of the two oscillators and every negative coupling connects the 3rd node of one oscillator to the 6th node of the other, the \(\mathbf{C}^{\prime}\) matrix is quite sparse. Similarly, because inputs are only fed into the 3rd node of every oscillator, the \(\mathbf{B}^{\prime}\) matrix is sparse as well.
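A possible construction of \(\mathbf{C}^{\prime}\) under the stated 3-3 / 3-6 node convention is sketched below. The exact algorithm is not given in the text, so the sign bookkeeping and the diagonal (self) terms here are assumptions, chosen so that each coupling resistor contributes a standard diffusive current term.

```python
import numpy as np

def build_coupling_matrix(C, n):
    """Expand the n x n coupling matrix C into the 7n x 7n node-level C'.

    Upper-triangle entries of C couple node 3 of both oscillators
    (in-phase-pulling); lower-triangle entries couple node 3 of one
    oscillator to node 6 of the other (anti-phase-pulling).
    Node indices are 0-based here, so nodes 3 and 6 become 2 and 5.
    """
    Cp = np.zeros((7 * n, 7 * n))
    for i in range(n):
        for j in range(n):
            if i == j or C[i, j] == 0:
                continue
            if i < j:   # positive coupling: node 3 <-> node 3
                a, b = 7 * i + 2, 7 * j + 2
            else:       # negative coupling: node 3 <-> node 6
                a, b = 7 * i + 2, 7 * j + 5
            # diffusive current between nodes a and b (and its self terms)
            Cp[a, b] += C[i, j]
            Cp[a, a] -= C[i, j]
            Cp[b, a] += C[i, j]
            Cp[b, b] -= C[i, j]
    return Cp
```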
The ODEs are constructed for the circuit of Figure 2 in the case of a fully connected ONN. The oscillators are driven by sinusoidal current generators - the phase of these signals carries the input. They define the initial states of the oscillators, which are later changed by the couplings between the oscillators.
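A behavioral sketch of a single seven-inverter ring, integrating the inverter ODE above with forward Euler, is shown below. The component values (normalized \(R=C=1\), gain \(a=5\)) and the step size are assumptions for illustration; the paper's implementation uses a differentiable ODE solver instead.

```python
import numpy as np

R, C_cap, a = 1.0, 1.0, 5.0   # per-stage RC and inverter gain (assumed)
dt, steps = 0.01, 20000

rng = np.random.default_rng(1)
V = rng.uniform(-0.1, 0.1, 7)          # small random start to break symmetry
trace = np.empty(steps)

for t in range(steps):
    Vin = np.roll(V, 1)                # node i is driven by node i-1 (matrix P)
    dV = (-np.tanh(a * Vin) - V) / (R * C_cap)
    V = V + dt * dV                    # forward Euler step
    trace[t] = V[0]
```

With an odd number of inverting stages and sufficient gain, the only equilibrium is unstable and the ring settles into a sustained oscillation with amplitude bounded by the \(\tanh\) saturation.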
Each oscillator is connected by two resistors, the values of which have to be learned. The value of the coupling resistors is directly related to the coupling parameters, which are stored in the \(\mathbf{C}\) coupling matrix. In the equations above \(R_{c}\) is a predefined, constant value which is the baseline resistance between two coupled nodes, usually around \(10\,\mathrm{k}\Omega\). The system learns the values in \(\mathbf{C}\), from which the \(\mathbf{C}^{\prime}\) modified coupling matrix is built; the learned values in \(\mathbf{C}\) then act as scalers of this \(R_{c}\) parameter, so the real, physical value of the coupling resistance between oscillators \(i\) and \(j\) is given by \(\frac{R_{c}}{\mathbf{C}_{i,j}}\). In other words, the value of the learned parameter \(\mathbf{C}_{i,j}\) is inversely proportional to the resistance value between oscillators \(i\) and \(j\).
Similarly, the values in \(\mathbf{B}^{\prime}\) are related to the input current generator's amplitude, but they are directly proportional to the real amplitude of input generators.
In the examples of the later sections, the grayscale pixel colors will typically correspond to the input phases of the current generators - a pixel intensity from 0 to 1 is mapped to phases \(\phi\in[0,\pi]\). Similarly, the output pattern is the stable, stationary phase pattern of the oscillators.
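The input encoding just described can be sketched as follows; the generator amplitude and frequency are placeholders, not values from the paper.

```python
import numpy as np

def pixels_to_phases(img):
    """Map grayscale intensities in [0, 1] to input phases in [0, pi]."""
    return np.pi * np.asarray(img, dtype=float).ravel()

def input_currents(phases, t, amp=1.0, freq=1.0):
    """Sinusoidal input currents whose phases encode the image."""
    return amp * np.sin(2 * np.pi * freq * t + phases)
```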
### Backpropagation for ONN circuit design
Backpropagation is the de facto standard algorithm used for the training of neural networks [26]. After each run of the neural network, the gradient of a properly defined loss function is computed with respect to the learnable parameters of the system, in an efficient manner. After the calculation of the gradient, a gradient descent method is applied in order to minimize the loss function.
BPTT (Backpropagation Through Time) is the backpropagation applied to a dynamic system (i.e. an ODE-based description). The time-discretized model of the ODEs is unfolded in time, so that one neural layer corresponds to a temporal snapshot of the system dynamics. The loss function is typically defined on the end state of the ODEs - from this the optimal value of the ODE parameters (and/or the ODE initial conditions) can be determined. We use BPTT to determine the optimal value of circuit parameters in the ring oscillator network.
Figure 2: The circuit diagram of the entire computational layer. Input signal generators provide the sinusoidal signals with a phase that corresponds to an input pattern, such as pixels of an image. These generators are connected to the computing oscillators, whose phase pattern provides the solution of the problem. The 3-6 marks on the oscillators indicate the 3rd and 6th nodes in the ring oscillators’ circuit. The green, red and blue colored circuit elements’ values are learned during the learning process, and the phases, indicated in orange, are the inputs. On the schematic figure, the connections indicate both the positive (red) and negative (purple) couplings. The grayscale pixel value is read from the image, converted to phase information, then the sinusoidal current generators are connected to the oscillators one-by-one. The yellow arrows show that the output is read from the oscillators and an image is formed.
We have written our simulation code in PyTorch [27]; the autograd feature of PyTorch makes the implementation of backpropagation and BPTT straightforward. We also used the _torchdiffeq_ [28] package for implementing backward-differentiable ODE solvers. This external, third-party library is built upon PyTorch and provides various differentiable ODE solvers. A particularly useful feature of _torchdiffeq_ is that it can apply the adjoint method for the backward step [29] and calculate the gradients with a constant memory cost.
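To make the BPTT step concrete, the following is a minimal pure-Python sketch on a two-oscillator phase model (a Kuramoto-style stand-in for the circuit ODEs, with identical natural frequencies; all names and constants here are ours, not the paper's). Forward Euler unrolls the dynamics, the stored states are walked backward to accumulate the adjoint and the parameter gradient, and gradient descent tunes the coupling strength `K` so that the final state is in-phase. In the actual work this bookkeeping is done automatically by PyTorch autograd and _torchdiffeq_.

```python
import math

def simulate(K, phi0=(0.0, 2.0), dt=0.01, steps=200):
    """Forward Euler for two coupled phase oscillators; returns the trajectory."""
    p1, p2 = phi0
    traj = [(p1, p2)]
    for _ in range(steps):
        d = p2 - p1
        p1 += dt * K * math.sin(d)    # dphi1/dt = K*sin(phi2 - phi1)
        p2 += dt * K * math.sin(-d)   # dphi2/dt = K*sin(phi1 - phi2)
        traj.append((p1, p2))
    return traj

def bptt_grad(K, dt=0.01, steps=200):
    """Loss = 0.5*(phi1(T)-phi2(T))^2 (in-phase target); dL/dK via a reverse pass."""
    traj = simulate(K, dt=dt, steps=steps)
    p1, p2 = traj[-1]
    loss = 0.5 * (p1 - p2) ** 2
    l1, l2 = (p1 - p2), -(p1 - p2)        # adjoint (dL/dphi) at the final step
    gK = 0.0
    for p1, p2 in reversed(traj[:-1]):    # walk the stored states backward
        d = p2 - p1
        # this Euler step's contribution: lambda^{n+1} . (dt * df/dK at phi^n)
        gK += dt * (l1 * math.sin(d) + l2 * math.sin(-d))
        # lambda^n = (I + dt*J)^T lambda^{n+1}, J the Jacobian of f wrt phases
        c = K * math.cos(d)
        l1, l2 = l1 * (1 - dt * c) + l2 * (dt * c), l1 * (dt * c) + l2 * (1 - dt * c)
    return loss, gK

# gradient descent on the coupling strength K
K, lr = 0.3, 0.5
loss0, _ = bptt_grad(K)
for _ in range(50):
    _, gK = bptt_grad(K)
    K -= lr * gK
final_loss, _ = bptt_grad(K)
```

Since a stronger coupling always synchronizes this pair faster, the gradient pushes `K` up and the final-state loss drops by orders of magnitude within a few descent steps.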
Figure 3 exemplifies the learning procedure for the two-oscillator system of Fig. 1. We defined the loss function of the system as the dot product of the oscillator waveforms - which should be maximized (minimized) for in-phase (anti-phase) coupling. The machine learning algorithm adjusts the value of the \(C_{i,j}\) parameters (and the coupling resistors) until this desired phase configuration is reached.
This method can be straightforwardly generalized to achieve convergence toward more complex patterns. If the loss function maximizes the dot product of waveforms between like-colored pixels and minimizes it between differently colored ones, then the phase pattern can converge toward any prescribed image. If the phase pattern is made to converge toward different patterns for different inputs, then the ONN acts as an associative memory.
Since the machine learning (ML) technique is designing a physical circuit, safeguards were taken to avoid arriving at unrealizable circuit parameters, such as negative resistances or exceedingly strong couplings that would quench the oscillations.
## 3 ONN-based pattern classification on the MNIST dataset
We have chosen the standard MNIST database for testing the associative capabilities of our system. Since the BPTT algorithm is computationally demanding, we made a few simplifications. We downsampled the initially 28x28-pixel MNIST images to either 14x14 or 7x7 using average pooling. This reduced the input dimension while keeping the necessary information, thanks to the averaging. Also, 14x14 MNIST images are still recognisable to a human, which allowed us to easily judge whether some patterns are easier for the algorithm to distinguish than others.
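The average-pooling downsampling above can be sketched in a few lines; the function name and the toy 4x4 input are ours, and a 28x28 MNIST image would be pooled with `k=2` (to 14x14) or `k=4` (to 7x7).

```python
def avg_pool(img, k):
    """Downsample a square 2D list by averaging non-overlapping k x k blocks."""
    n = len(img)
    assert n % k == 0
    m = n // k
    return [[sum(img[i*k + a][j*k + b] for a in range(k) for b in range(k)) / (k * k)
             for j in range(m)] for i in range(m)]

# a 4x4 "image" pooled with k=2 -> 2x2
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
pooled = avg_pool(img, 2)  # [[0.0, 1.0], [2.0, 3.0]]
```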
### Baseline: ONN-based associative memory with Hebbian learning
The simplest, well-studied ONN-based associative memory can be designed by the Hebbian rule. If we want the phase pattern to converge toward \(\xi\) or \(\eta\) for inputs resembling \(\xi\) or \(\eta\), respectively, then the weights that realize this associative memory are:
\[C_{ij}^{cpl}=\frac{1}{2}\big{(}\xi_{i}\xi_{j}+\eta_{i}\eta_{j}\big{)}, \tag{1}\]
where \(\xi_{i}\), \(\xi_{j}\) is the \(i\)-th, \(j\)-th element of the pattern \(\xi\), and \(\eta_{i}\), \(\eta_{j}\) is the \(i\)-th, \(j\)-th element of the pattern \(\eta\), respectively.
Figure 3: Panel a) shows the simulation result for the positively coupled two-oscillator system, while panel b) shows the same for the negatively coupled one. In both cases the loss decreases from a high value to a low one. The orange curves indicate the learned parameter values contained in **C**, not the real resistor values. Note that in b) the parameter value corresponding to “R-” goes below 0, which would mean a negative resistance because of the mapping from the parameters in **C** to the physical parameters; this is only the mathematical solution. In a given simulation the parameters were clamped to be non-negative and, if one hit zero, the connection was removed.
The rule assumes all-to-all couplings, making a larger-scale network hard to physically realize.
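Eq. (1) can be computed directly for bipolar (\(\pm 1\)) patterns; the sketch below generalizes the prefactor \(\frac{1}{2}\) to \(\frac{1}{P}\) for \(P\) stored patterns (with \(P=2\) it matches Eq. (1) exactly). Function and variable names are ours.

```python
def hebbian_couplings(patterns):
    """C_ij = (1/P) * sum over patterns of xi_i * xi_j (Eq. 1 for P = 2)."""
    n = len(patterns[0])
    P = len(patterns)
    return [[sum(p[i] * p[j] for p in patterns) / P for j in range(n)]
            for i in range(n)]

xi  = [1, -1,  1, -1]   # stored pattern "xi"
eta = [1,  1, -1, -1]   # stored pattern "eta"
C = hebbian_couplings([xi, eta])
```

For these two orthogonal patterns the cross-couplings from \(\xi\) and \(\eta\) partly cancel, e.g. \(C_{01}=\frac{1}{2}((-1)+1)=0\), while \(C_{12}=-1\).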
Hebbian learning is not an iterative learning process: the weights are determined by a single-shot formula. To improve the results, we applied machine learning to optimize the values of the base coupling resistances \(R_{c}\) and the parameters in \(\mathbf{B}^{\prime}\), in other words, the amplitudes of the input current generators.
The inner \(RC\) time constant of the ring oscillators was \(2.0\cdot 10^{-10}\) s, which translates into a 500 MHz oscillation frequency (time period \(T=2\) ns). The total simulation time for the network is 500 ns. The phase pattern is calculated from the last 300 ns window, so convergence is achieved after fewer than 100 oscillation cycles, or 200 ns.
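One way to read the phase pattern out of the final window is to project each oscillator's waveform onto quadrature references at the carrier frequency; the readout below is our construction (the paper does not specify its extraction method), using the 500 MHz carrier and 300 ns window quoted above with a 10 ps sampling step of our choosing.

```python
import math

def phase_of(samples, freq, dt):
    """Recover the phase of a ~sinusoidal waveform by quadrature projection."""
    s = sum(v * math.sin(2 * math.pi * freq * i * dt) for i, v in enumerate(samples))
    c = sum(v * math.cos(2 * math.pi * freq * i * dt) for i, v in enumerate(samples))
    return math.atan2(c, s)   # 0 for sin(wt), pi/2 for cos(wt)

f, dt = 500e6, 1e-11                  # 500 MHz carrier, 10 ps sampling (assumed)
n = round(300e-9 / dt)                # the 300 ns readout window
wave = [math.sin(2 * math.pi * f * i * dt + 1.0) for i in range(n)]
ph = phase_of(wave, f, dt)            # recovers the injected 1.0 rad phase
```

Because the window spans an integer number of carrier periods, the sine and cosine projections are orthogonal and the recovered phase is exact up to floating-point error.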
### ONN-based associative memories designed with machine learning
The same functionality that is realized by Hebbian learning can be achieved by the BPTT method. The loss function we used was:

\[L=\frac{1}{n}\sum_{k=1}^{n}{(O_{k}-T_{k})^{2}}, \tag{2}\]
where \(O_{k}\) is the pattern calculated from the output of the oscillators for the \(k\)-th input in the batch and \(T_{k}\) is the ground truth for the same, which were ideal patterns of '0' and '1'. In the above formula \(n\) is the size of the batch used for learning.
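Eq. (2) in code form, for phase patterns flattened to vectors; the names are ours, and since Eq. (2) leaves open whether \((O_{k}-T_{k})^{2}\) sums or averages over pixels, we average here (an assumption).

```python
def batch_mse(outputs, targets):
    """L = (1/n) * sum_k (O_k - T_k)^2, with each squared term taken as the
    mean squared pixel error between an output pattern and its ground truth."""
    per_item = [sum((o - t) ** 2 for o, t in zip(O, T)) / len(O)
                for O, T in zip(outputs, targets)]
    return sum(per_item) / len(per_item)

O = [[0.0, 1.0], [1.0, 1.0]]   # phase patterns read from the oscillators
T = [[0.0, 1.0], [0.0, 1.0]]   # ideal target patterns
loss = batch_mse(O, T)
```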
Figure 4 compares results from the Hebbian- and BPTT-based designs. It is visually apparent that the BPTT-based design associates to the right pattern even from heavily distorted inputs. For the experiments shown in Figure 4 we downscaled the images from 28x28 to 7x7, which distorted many of the inputs but sped up the computations, as an all-to-all coupled 784-oscillator system would require almost 620000 resistors, which is hard to physically realize.
Most importantly, the BPTT-based design allows sparsely interconnected circuit topologies. We used it to design the \(C_{ij}\) matrix of an associative memory assuming only nearest-neighbor interconnections. The nearest-neighbor interconnected, BPTT-designed network outperforms the fully interconnected Hebbian network, even though the number of learnable parameters in the system (\(\approx 8n\) vs. \(\frac{1}{2}n^{2}\)) is significantly smaller. The qualitative results of this comparison can be seen in Figure 5.
Quantitatively, the results of the different approaches for the whole dataset \(S=\{0,1\}\) are summarized in Table 1.
### Non-associative classifiers with hidden layers
Single layer associative memories are not particularly efficient for classifying all the 10 MNIST classes, as there are strong correlations between the different digits. For this reason we also investigated multi-layered ONNs for this task.
First we addressed the binary classification task, which was easier to solve: a regular, single-hidden-layer setup with one output neuron was sufficient. For multi-class prediction, we created three different architectures:
Figure 4: Panel a) shows the results of the fully connected network trained by ML, while panel b) shows the results of the system trained by the Hebbian-learning scheme. The former clearly performs better qualitatively; quantitatively, the mean-squared error over the set was 0.020 for the proposed network versus 0.068 for the Hebbian one, so our solution was more than 3 times better.
The first has a structure resembling a regular single-hidden-layer, 10-output feed-forward neural network (FFNN), but built entirely from oscillators as neurons and with a fully interconnected hidden layer. The second has a single output oscillator on top of the same hidden layer; we created 10 such networks, each designed to distinguish one digit from the rest, and a winner-take-all step decides which class the input belongs to. The third modifies the second so that the individual outputs are not used for the winner-take-all decision but are instead fed to a small, regular neural network with 15 neurons in the hidden layer and 10 neurons in the output layer, as a multiclass classification requires; see Fig. 6.
The first, FFNN-like approach to multi-class prediction resulted in a predictive accuracy of 70-75%, which was not satisfactory, so we omit its details here and discuss the other two approaches in the following sections.
#### 3.3.1 Binary classifiers with a single output
The two-layer classifier is shown in Fig. 7. The phase of the output oscillator carries the classification result: we compare the output oscillator's phase with a reference oscillator and maximize (minimize) their phase difference for one (or the other) pattern.
Since the optimal oscillator couplings are discovered by the BPTT algorithm, this device does not necessarily work as an associative memory. The phase patterns appearing in the hidden layer are non-intuitive, even if they can vaguely resemble the images being recognized.
That said, even without any clearly visible structure in the hidden layer, the network predicted the two classes at a 98% success rate. Predictions made on some example images are presented in Fig. 7.
#### 3.3.2 10-digit classifier using a winner take all output
Moving on to the winner-take-all architecture, the distributions of the average predictions of the individual, competitive networks are shown in Fig. 8. Each subnetwork is responsible for recognizing one particular digit, although, as seen in Fig. 8, it often predicts a high likelihood for the wrong class. Using the winner-take-all algorithm (i.e., the digit is identified by the ONN subnetwork giving the highest output) we achieve an accuracy of around 70%: far better than random guessing (which would be 10%) but far from a good result.
\begin{table}
\begin{tabular}{c|c c c}
**Method** & **Hebbian** & **Proposed fully connected** & **Proposed NN connected** \\ \hline
**\#Params** & 1176 & 2352 & 312 \\
**MSE** & 0.068 & 0.020 & 0.047 \\ \end{tabular}
\end{table}
Table 1: The MSEs of all the elements from the set and their respective ground truths for the different methods. It is apparent that the fully connected network performed the best, but even the nearest neighbour connected layer is good enough to beat the Hebbian learning in terms of quantitative association.
Figure 5: Panel a) shows the results of the nearest-neighbor connected network trained by ML, while panel b) shows the results of the system trained by the Hebbian-learning scheme. Here the qualitative difference is not as clear, but the ML-designed network is at least on par with the Hebbian-based method and in some cases even outperforms it.
#### 3.3.3 10-digit classifier using a trained second layer
Instead of the winner-take-all decision, we used a simple trained perceptron layer at the end to improve classification accuracy. Only very few multi-layered ONNs exist in the literature [38].
Using the outputs of the competitive networks as inputs to this small neural network, we reached 96.7% predictive accuracy. This was good enough to benchmark against a similarly sized (in terms of parameter count) regular software neural network on the same dataset, which could not reach 96.7%, only 93-95%. It has to be noted that this problem is
Figure 6: Two of the three tested architectures for the time-independent MNIST classification. Both consist of individually trained, nearest-neighbour-connected subnetworks, each designed to distinguish a single class from the rest using a binary cross-entropy loss function. The top block diagram describes the algorithm where the prediction is the maximum of the individual network output probabilities. The more sophisticated version is shown in the bottom block diagram: here the output probabilities of the individual classifiers are fed as inputs to a small, regular FFNN trained as a 10-class classification problem using binary cross-entropy.
solved by fairly trivial networks with better accuracy, but those use hundreds of thousands of parameters, whereas our training scheme used only about 20000.
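The readout head described above (10 subnetwork outputs, 15 hidden neurons, 10 output neurons) adds only a few hundred parameters; a quick count, with a helper function of our own, is consistent with the 16325 entry in Table 2 if we read it as ten 1600-parameter nearest-neighbour subnetworks plus the head (an interpretation on our part).

```python
def mlp_param_count(sizes):
    """Weights + biases of a fully connected net with the given layer sizes."""
    return sum(sizes[i] * sizes[i + 1] + sizes[i + 1] for i in range(len(sizes) - 1))

head = mlp_param_count([10, 15, 10])   # 10*15+15 + 15*10+10 = 325
total = 10 * 1600 + head               # ten ONN subnetworks + readout head
```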
We emphasize that, in terms of computational workload, the heavy lifting in this architecture is done by the ONN preprocessing layer; the output layer contains a small number of parameters and is a very small-scale neural network by any standard. The output layer is there because it is easily trainable, so it can maximize network performance at low training cost. The power consumption of the network is dominated by the ONN, and so the entire architecture benefits from the
Figure 8: The distribution of predicted average probabilities for the individual, competitive networks used in the winner-take-all model.
Figure 7: A simple two-layer classifier showing also the patterns forming in the hidden layer.
energy efficient ONN operation.
### Comparison of ONN classifier architectures
The table below summarizes some key findings of our work. Perhaps most importantly, the ONN-based network outperforms a standard FFNN with the same number of parameters, and it does so with a significantly higher power efficiency than the equivalent digital or software implementation.
## Conclusions and outlook
In this paper we introduced an in-silico method to design ONNs. We built a computational model of the ONN, applied BPTT techniques to this model, and determined the circuit parameters automatically using the BPTT training algorithm. This way we can design ONNs that are not limited by the lack of specific learning rules. The BPTT-based design allows us to explore the limits of ONN hardware without the limitations imposed by the simplicity of the training algorithm.
As one of the main results of this work, we find that a nearest-neighbor connected ONN designed by BPTT can outperform a fully connected Hebbian-trained device. This discovery opens the door to physically realizable ONNs that perform complex processing functions without an unfeasibly high number of interconnections.
Secondly, we developed multi-layered ONN devices, of which very few exist in the literature. In line with expectations, we find that multiple layers significantly enhance the capabilities of the network. When the ONN first layer (preprocessing layer) is followed by a simple perceptron-based FFNN, a classification accuracy of 96.7 % is reached. Most of the network complexity is in the first (preprocessing) layer, so the energy-efficient operation of the ONN hardware is fully exploited.
\begin{table}
\begin{tabular}{c|c c|c c c|c} & \multicolumn{2}{c|}{**Binary classifiers**} & \multicolumn{3}{c|}{**Multiclass classifiers with oscillators**} & \multicolumn{1}{c}{**Benchmark**} \\ \cline{2-7}
**Method** & _Fully_ & _NN_ & _FFNN-like_ & _Winner take all_ & _FFNN 2-layer_ & _Perceptron FFNN_ \\
**\#Params** & 38416 & 1600 & 40180 & 16000 & 16325 & 16363 \\
**Perf. (\%)** & 98 & 98 & 70-75 & 65-70 & 96.7 & 93-95 \\ \end{tabular}
\end{table}
Table 2: The quantitative comparisons of binary and multi-class classifiers with the parameter count indicated.
Figure 9: A network architecture with ONN layers as preprocessors and a traditional neural network postprocessing the results. The easy to train output layer significantly improves classification accuracy.
## Acknowledgements
This work was partially supported by a grant from Intel Corporation, titled HIMON: "Hierarchically Interconnected Oscillator Networks". We are grateful for regular and fruitful discussions with the Intel team, in particular Narayan Srinivasa, Dmitri Nikonov and Amir Khosrowshahi.
## Additional information
**Competing financial interests:** The authors declare no competing financial interests.
---

# Towards Open Temporal Graph Neural Networks

Kaituo Feng, Changsheng Li, Xiaolu Zhang, Jun Zhou
2023-03-27, arXiv:2303.15015v2, http://arxiv.org/abs/2303.15015v2
###### Abstract
Graph neural networks (GNNs) for temporal graphs have recently attracted increasing attentions, where a common assumption is that the class set for nodes is closed. However, in real-world scenarios, it often faces the open set problem with the dynamically increased class set as the time passes by. This will bring two big challenges to the existing temporal GNN methods: (i) How to dynamically propagate appropriate information in an open temporal graph, where new class nodes are often linked to old class nodes. This case will lead to a sharp contradiction. This is because typical GNNs are prone to make the embeddings of connected nodes become similar, while we expect the embeddings of these two interactive nodes to be distinguishable since they belong to different classes. (ii) How to avoid catastrophic knowledge forgetting over old classes when learning new classes occurred in temporal graphs. In this paper, we propose a general and principled learning approach for open temporal graphs, called OTGNet, with the goal of addressing the above two challenges. We assume the knowledge of a node can be disentangled into class-relevant and class-agnostic one, and thus explore a new message passing mechanism by extending the information bottleneck principle to only propagate class-agnostic knowledge between nodes of different classes, avoiding aggregating conflictive information. Moreover, we devise a strategy to select both important and diverse triad sub-graph structures for effective class-incremental learning. Extensive experiments on three real-world datasets of different domains demonstrate the superiority of our method, compared to the baselines.
## 1 Introduction
Temporal graph (Nguyen et al., 2018) represents a sequence of time-stamped events (e.g., addition or deletion of edges or nodes) (Rossi et al., 2020), and is a popular kind of graph structure in a variety of domains such as social networks (Kleinberg, 2007), citation networks (Feng et al., 2022), topic communities (Hamilton et al., 2017), etc. For instance, in topic communities, all posts can be modelled as a graph, where each node represents one post. New posts can be continually added to the community, so the graph is dynamically evolving. To handle this kind of graph structure, many methods have been proposed in the past decade (Wang et al., 2020; Xu et al., 2020; Rossi et al., 2020; Nguyen et al., 2018; Li et al., 2022). The key to the success of these methods is learning an effective node embedding by capturing temporal patterns based on time-stamped events.
A basic assumption among the above methods is that the class set of nodes is always closed, i.e., the class set is fixed as time passes by. However, in many real-world applications, the class set is open. We still take topic communities as an example, all the topics can be regarded as the class set of nodes for a post-to-post graph. When a new topic is created in the community, it means a new class is involved into the graph. This will bring two challenges to previous approaches: The first problem is the heterophily propagation issue. In an open temporal graph, a node belonging to a new class is often
linked to a node of an old class, as shown in Figure 1. In Figure 1, 'class 2' is a new class, and 'class 1' is an old class. There is a link occurring at timestamp \(t_{4}\) connecting two nodes \(v_{4}\) and \(v_{5}\), where \(v_{4}\) and \(v_{5}\) belong to different classes. Such a connection leads to a sharp contradiction. This is because typical GNNs are prone to make the embeddings of connected nodes become similar (Xie et al., 2020; Zhu et al., 2020), while we expect the embeddings of \(v_{4}\) and \(v_{5}\) to be distinguishable since they belong to different classes. We call this dilemma heterophily propagation. One might argue that we could simply drop the links connecting nodes of different classes. However, this would break the graph structure and lose information. Thus, how and what to transfer between connected nodes of different classes remains a challenge for open temporal graphs.
The second problem is the catastrophic forgetting issue. When learning a new class in an open temporal graph, the knowledge of the old classes might be catastrophically forgotten, degrading the overall performance of the model. In the field of computer vision, many incremental learning methods have been proposed (Wu et al., 2019; Tao et al., 2020), which focus on convolutional neural networks (CNNs) for non-graph data like images. If these methods are simply applied to graph-structured data by treating each node individually, the topological structure and the interactions between nodes are ignored. Recently, Wang et al. (2020); Zhou & Cao (2021) proposed to overcome catastrophic forgetting for graph data. However, they focus on static graph snapshots and utilize a static GNN for each snapshot, largely ignoring fine-grained temporal topological information.
In this paper, we put forward the first class-incremental learning approach for open temporal graphs, called OTGNet, with the goal of addressing the above two challenges. To mitigate the issue of heterophily propagation, we assume the information of a node can be disentangled into class-relevant and class-agnostic parts. Based on this assumption, we design a new message passing mechanism by resorting to the information bottleneck (Alemi et al., 2016) to only propagate class-agnostic knowledge between nodes of different classes. In this way, we can avoid transferring conflictive information. To prevent catastrophic forgetting of knowledge over old classes, we propose to select representative sub-graph structures generated from old classes, and incorporate them into the learning process of new classes. Previous works (Zhou et al., 2018; Zignani et al., 2014; Huang et al., 2014) point out that the triad structure (triangle-shaped structure) is a fundamental element of temporal graphs and can capture evolution patterns. Motivated by this, we devise a value function to select not only important but also diverse triad structures, and replay them for continual learning. Due to its combinatorial nature, optimizing the value function is NP-hard. Thus, we develop a simple yet effective algorithm to find an approximate solution, and give a theoretical guarantee on the lower bound of the approximation ratio. It is worth noting that our message passing mechanism and triad structure selection benefit from each other. On the one hand, learning good node embeddings with our message passing mechanism helps select more representative triad structures. On the other hand, selecting representative triads preserves the knowledge of old classes well, and is thus good for propagating information more precisely.
Our contributions can be summarized as follows: 1) Our approach constitutes the first attempt to investigate open temporal graph neural networks; 2) We propose a general framework, OTGNet, which can address the issues of both heterophily propagation and catastrophic forgetting; 3) We perform extensive experiments and analyze the results, demonstrating the effectiveness of our method.
## 2 Related Work
Dynamic GNNs can be generally divided into two groups (Rossi et al., 2020) according to the characteristic of dynamic graph: discrete-time dynamic GNNs (Zhou et al., 2018; Goyal et al., 2018; Wang et al., 2020) and continuous-time dynamic GNNs (a.k.a. temporal GNNs (Nguyen et al., 2018)) (Rossi et al., 2020; Trivedi et al., 2019). Discrete-time approaches focus on discrete-time dynamic graph that is a collection of static graph snapshots taken at intervals in time, and contains
Figure 1: An illustration for an open temporal graph. In the beginning, there is an old class (class 1). As the time passes by, a new class (class 2) occurs. \(t_{4}\) denotes the timestamp the edge is built. The edge occurred at \(t_{4}\) connects \(v_{4}\) and \(v_{5}\) (e.g., the same user comments on both post \(v_{4}\) and post \(v_{5}\) in topic communities).
dynamic information at a very coarse level. Continuous-time approaches study continuous-time dynamic graph that represents a sequence of time-stamped events, and possesses temporal dynamics at finer time granularity. In this paper, we focus on temporal GNNs. We first briefly review related works on temporal GNNs, followed by class-incremental learning.
**Temporal GNNs.** In recent years, many temporal GNNs (Kumar et al., 2019; Wang et al., 2021a; Trivedi et al., 2019) have been proposed. For instance, DyRep (Trivedi et al., 2019) took the advantage of temporal point process to capture fine-grained temporal dynamics. CAW (Wang et al., 2021b) retrieved temporal network motifs to represent the temporal dynamics. TGAT (Xu et al., 2020) proposed a temporal graph attention layer to learn temporal interactions. Moreover, TGN (Rossi et al., 2020) proposed an efficient model that can memorize long term dependencies in the temporal graph. However, all of them concentrate on closed temporal graphs, i.e., the class set is always kept unchanged, neglecting that new classes can be dynamically increased in many real-world applications.
**Class-incremental learning.** Class-incremental learning has been widely studied in the computer vision community (Li and Hoiem, 2017; Wu et al., 2019). For example, EWC (Kirkpatrick et al., 2017) proposed to penalize the update of parameters that are significant to previous tasks. iCaRL (Li and Hoiem, 2017) maintained a memory buffer to store representative samples, memorizing the knowledge of old classes and replaying them when learning new classes. These methods focus on CNNs for non-graph data like images, so it is obviously not suitable to apply them directly to graph data. Recently, a few incremental learning works have been proposed for graph data (Wang et al., 2020; Zhou and Cao, 2021). ContinualGNN (Wang et al., 2020) proposed a method for closed discrete-time dynamic graphs and trained the model based on static snapshots. ER-GAT (Zhou and Cao, 2021) selected representative nodes for old classes and replayed them when learning new tasks. Different from these works studying discrete-time dynamic graphs, we aim to investigate open temporal graphs.
## 3 Proposed Method
### Preliminaries
**Notations.** Let \(\mathcal{G}(t)=\{\mathcal{V}(t),\mathcal{E}(t)\}\) denote a temporal graph at time-stamp \(t\), where \(\mathcal{V}(t)\) is the set of existing nodes at \(t\), and \(\mathcal{E}(t)\) is the set of existing temporal edges at \(t\). Each element \(e_{ij}(t_{k})\in\mathcal{E}(t)\) represents node \(i\) and node \(j\) are linked at time-stamp \(t_{k}(t_{k}\leq t)\). Let \(\mathcal{N}_{i}(t)\) be the neighbor set of node \(i\) at \(t\). We assume \(x_{i}(t)\) denotes the embedding of node \(i\) at \(t\), where \(x_{i}(0)\) is the initial feature of node \(i\). Let \(\mathcal{Y}(t)=\{1,2,\cdots,m(t)\}\) be the class set of all nodes at \(t\), where \(m(t)\) denotes the number of existing classes until time \(t\).
**Problem formulation.** In our open temporal graph setting, as new nodes are continually added into the graph, new classes can occur, i.e., the number \(m(t)\) of classes is increased and thus the class set \(\mathcal{Y}(t)\) is open, rather than a closed one like traditional temporal graph. Thus, we formulate our problem as a sequence of class-incremental tasks \(\mathcal{T}=\{\mathcal{T}_{1},\mathcal{T}_{2},\cdots,\mathcal{T}_{L},\cdots\}\) in chronological order. Each task \(\mathcal{T}_{i}\) contains one or multiple new classes which are never seen in previous tasks \(\{\mathcal{T}_{1},\mathcal{T}_{2},\cdots,\mathcal{T}_{i-1}\}\). In our new problem setting, the goal is to learn an open temporal graph neural network based on current task \(\mathcal{T}_{i}\), expecting our model to not only perform well on current task but also prevent catastrophic forgetting over previous tasks.
### Framework
As aforementioned, there are two key challenges in open temporal graph learning: heterophily propagation and catastrophic forgetting. To address the two challenges, we propose a general framework, OTGNet, as illustrated in Figure 2. Our framework mainly includes two modules : A knowledge preservation module is devised to overcome catastrophic forgetting, which consists of two components: a triad structure selection component is devised to select representative triad structures; a triad structure replay component is designed for replaying the selected triads to avoid catastrophic forgetting. An information bottleneck based message passing module is proposed to propagate class-agnostic knowledge between different class nodes, which can address the heterophily propagation issue. Next, we will elaborate each module of our framework.
Figure 2: An illustration of overall architecture.
### Knowledge Preservation over Old Class
When learning new classes in the current task \(\mathcal{T}_{i}\), the model is likely to catastrophically forget knowledge of old classes from previous tasks. If we combined all the data of old classes with the data of new classes for retraining, the computational cost would increase sharply and become unaffordable. Thus, we propose to select representative structures from old classes to preserve knowledge, and incorporate them into the learning process of new classes for replay.
**Triad Structure Selection.** As previous works (Zhou et al., 2018; Zignani et al., 2014; Huang et al., 2014) point out, the triad structure is a fundamental element of temporal graphs, and the triad closure process demonstrates their evolution patterns. According to Zhou et al. (2018), triads have two types of structure: closed triads and open triads, as shown in Figure 3. A closed triad consists of three vertices connected with each other, while an open triad has two of its three vertices not connected with each other. A closed triad can develop from an open triad, and this triad closure process is able to model the evolution patterns (Zhou et al., 2018). Motivated by this, we propose a new strategy to preserve the knowledge of old classes by selecting representative triad structures from old classes. However, how to measure the 'representativeness' of each triad, and how to select triads to represent the knowledge of old classes, has not been explored so far.
For notational convenience, we omit \(t\) for all symbols in this section. Without loss of generality, we denote a closed triad for class \(k\) as \(g^{c}_{k}=(v_{s},v_{p},v_{q})\), where the three nodes \(v_{s},v_{p},v_{q}\) belong to class \(k\) and are pairwise connected, i.e., \(e_{sp}(t_{i}),e_{sq}(t_{j}),e_{pq}(t_{l})\in\mathcal{E}(t_{m})\) with \(t_{i},t_{j}<t_{l}\), where \(t_{m}\) is the last time-stamp of the graph. We denote an open triad for class \(k\) as \(g^{o}_{k}=(v_{s},v_{p},v_{q})\), with \(v_{p}\) and \(v_{q}\) not linked to each other in the last observation of the graph, i.e., \(e_{sp}(t_{i}),e_{sq}(t_{j})\in\mathcal{E}(t_{m})\) and \(e_{pq}\notin\mathcal{E}(t_{m})\). Let \(S^{c}_{k}=\{g^{c}_{k,1},g^{c}_{k,2},...,g^{c}_{k,M}\}\) and \(S^{o}_{k}=\{g^{o}_{k,1},g^{o}_{k,2},...,g^{o}_{k,M}\}\) be the selected closed and open triad sets for class \(k\), respectively, where \(M\) is the memory budget for each class. Next, we introduce how to measure and select the closed triads \(S^{c}_{k}\); the procedure for the open triads \(S^{o}_{k}\) is analogous.
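The definitions above can be enumerated directly from a time-stamped edge list; the sketch below is our own construction (edge representation, function names, and the toy graph are assumptions, not the paper's code), restricted to the node set of one class. A triple is a closed triad when all three pairs are linked and the closing edge \((v_{p},v_{q})\) appears after both edges through the common node \(v_{s}\); it is an open triad when \((v_{p},v_{q})\) is never linked.

```python
from itertools import combinations

def find_triads(edges, nodes):
    """Split same-class node triples (s, p, q) into closed and open triads.

    edges: {(u, v): t} with u < v, giving the link time between u and v.
    """
    def t(a, b):
        return edges.get((min(a, b), max(a, b)))

    closed, opened = [], []
    for s in nodes:
        for p, q in combinations([n for n in nodes if n != s], 2):
            tsp, tsq, tpq = t(s, p), t(s, q), t(p, q)
            if tsp is None or tsq is None:
                continue                 # s must be linked to both p and q
            if tpq is None:
                opened.append((s, p, q))       # open triad: p-q never linked
            elif tpq > max(tsp, tsq):
                closed.append((s, p, q))       # closed: p-q edge came last
    return closed, opened

edges = {(1, 2): 1.0, (1, 3): 2.0, (2, 3): 5.0, (1, 4): 3.0}
closed, opened = find_triads(edges, [1, 2, 3, 4])
```

On this toy graph, (1, 2, 3) is the only closed triad (the edge 2-3 at \(t=5\) closes it), while node 4 forms open triads through node 1.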
In order to measure the 'representativeness' of each triad, one intuitive and reasonable idea is to see how the performance of the model is affected when this triad is removed from the graph. However, retraining the model every time a triad is removed would be prohibitively expensive. Inspired by the influence function, which estimates the parameter changes of a machine learning model when a training sample is removed (Koh and Liang, 2017), we extend the influence function to directly estimate the 'representativeness' of each triad structure without retraining, and propose the objective function:
\[\mathcal{I}_{loss}(g^{c}_{k},\theta)=\left.\frac{\mathrm{d}\,\mathcal{L}(G_{k},\hat{\theta}_{\varepsilon,g^{c}_{k}})}{\mathrm{d}\,\varepsilon}\right|_{\varepsilon=0}=\left.\nabla_{\theta}\mathcal{L}(G_{k},\theta)^{\top}\frac{\mathrm{d}\,\hat{\theta}_{\varepsilon,g^{c}_{k}}}{\mathrm{d}\,\varepsilon}\right|_{\varepsilon=0}=-\nabla_{\theta}\mathcal{L}(G_{k},\theta)^{\top}H_{\theta}^{-1}\nabla_{\theta}\mathcal{L}(g^{c}_{k},\theta) \tag{1}\]
where \(\mathcal{L}\) represents the loss function, e.g., the cross-entropy used in this paper. \(\theta\) is the parameter of the model, and \(G_{k}\) is the node set of class \(k\). \(\hat{\theta}_{\varepsilon,g^{c}_{k}}\) is the retrained parameter obtained if we upweight the three nodes in \(g^{c}_{k}\) by \(\varepsilon\) (\(\varepsilon\to 0\)) during training, i.e., \(\varepsilon\) is a small weight added to the three nodes of the triad \(g^{c}_{k}\) in the loss function \(\mathcal{L}\). \(H_{\theta}\) is the Hessian matrix. \(\nabla_{\theta}\mathcal{L}(g^{c}_{k},\theta)\) and \(\nabla_{\theta}\mathcal{L}(G_{k},\theta)\) are the gradients of the loss on \(g^{c}_{k}\) and \(G_{k}\), respectively. The full derivation of Eq. (1) is in Appendix A.2.
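To make Eq. (1) concrete: the influence score is a gradient–inverse-Hessian–gradient product. Below is a minimal numpy sketch assuming the two gradients and the Hessian have already been computed; the function name and toy values are ours, and in practice one would use Hessian-vector products rather than materializing \(H_{\theta}\):

```python
import numpy as np

def influence_of_triad(grad_class, hessian, grad_triad):
    """Eq. (1): I_loss = -grad_class^T H^{-1} grad_triad.

    grad_class : gradient of the loss over all nodes of class k, shape (d,)
    hessian    : Hessian of the training loss w.r.t. theta, shape (d, d)
    grad_triad : gradient of the loss on the three triad nodes, shape (d,)
    """
    # Solve H v = grad_triad instead of inverting H explicitly (cheaper, stabler).
    v = np.linalg.solve(hessian, grad_triad)
    return -grad_class @ v

# Toy check with H = I: the influence reduces to -<grad_class, grad_triad>,
# so a triad whose gradient aligns with the class gradient has a negative
# influence, i.e. a positive 'representativeness' R = -I_loss (Eq. (2)).
g_class = np.array([1.0, 2.0])
g_triad = np.array([0.5, 0.5])
H = np.eye(2)
print(influence_of_triad(g_class, H, g_triad))  # → -1.5
```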
In Eq. (1), \(\mathcal{I}_{loss}(g^{c}_{k},\theta)\) estimates the influence of the triad \(g^{c}_{k}\) on the model performance for class \(k\). The more negative \(\mathcal{I}_{loss}(g^{c}_{k},\theta)\) is, the more positive the influence \(g^{c}_{k}\) has on model performance; in other words, the more important \(g^{c}_{k}\) is. Thus, we define the 'representativeness' of a triad structure as:
\[\mathcal{R}(g^{c}_{k})=-\mathcal{I}_{loss}(g^{c}_{k},\theta) \tag{2}\]
In order to preserve the knowledge of old classes well, we expect all \(g^{c}_{k}\) in \(S^{c}_{k}\) to be important, and propose the following objective function to find \(S^{c}_{k}\):
\[S^{c}_{k}=\arg\max_{\{g^{c}_{k,1},\cdots,g^{c}_{k,M}\}}\sum_{i=1}^{M}\mathcal{ R}(g^{c}_{k,i}) \tag{3}\]
During the optimization of (3), we only take triads \(g^{c}_{k,i}\) with positive \(\mathcal{R}(g^{c}_{k,i})\) as candidates, since a \(g^{c}_{k,i}\) with negative \(\mathcal{R}(g^{c}_{k,i})\) can be regarded as harmful to the model performance.

Figure 3: An illustration of closed and open triads.

We note that optimizing (3) alone might lead to the selected \(g^{c}_{k,i}\) having similar functions. Considering this, we want \(S^{c}_{k}\) to be not only important but also diverse. To this end, we first define:
\[\mathcal{C}(g^{c}_{k,i})=\{g^{c}_{k,j}|\left|\left|\bar{x}(g^{c}_{k,j})-\bar{x} (g^{c}_{k,i})\right|\right|_{2}\leq\delta,g^{c}_{k,j}\in N^{c}_{k}\}, \tag{4}\]
where \(\bar{x}(g^{c}_{k,j})\) denotes the average embedding of the three vertices in \(g^{c}_{k,j}\), \(N^{c}_{k}\) denotes the set containing all positive closed triads for class \(k\), and \(\delta\) is a similarity radius. \(\mathcal{C}(g^{c}_{k,i})\) is the set of triads \(g^{c}_{k,j}\) whose average embedding \(\bar{x}(g^{c}_{k,j})\) lies within distance \(\delta\) of \(\bar{x}(g^{c}_{k,i})\). To make the selected triads \(S^{c}_{k}\) diverse, we also anticipate that \(\{\mathcal{C}(g^{c}_{k,1}),\cdots,\mathcal{C}(g^{c}_{k,M})\}\) covers as many different triads as possible:
\[S^{c}_{k}=\arg\max_{\{g^{c}_{k,1},\cdots,g^{c}_{k,M}\}}\frac{|\bigcup_{i=1}^{M }\mathcal{C}(g^{c}_{k,i})|}{|N^{c}_{k}|} \tag{5}\]
Finally, we combine (5) with (3), and present the final objective function for triad selection as:
\[S^{c}_{k}=\arg\max_{\{g^{c}_{k,1},\cdots,g^{c}_{k,M}\}}F(S^{c}_{k})=\arg\max_{ \{g^{c}_{k,1},\cdots,g^{c}_{k,M}\}}\left(\sum_{i=1}^{M}\mathcal{R}(g^{c}_{k,i} )+\gamma\frac{|\bigcup_{i=1}^{M}\mathcal{C}(g^{c}_{k,i})|}{|N^{c}_{k}|}\right) \tag{6}\]
where \(\gamma\) is a hyper-parameter. By (6), we can select not only important but also diverse triads to preserve the knowledge of old classes.
Due to its combinatorial nature, solving (6) is NP-hard. Fortunately, we show that \(F(S^{c}_{k})\) is monotone and submodular; the proof can be found in Appendix A.3. Based on this property, (6) can be solved by a greedy algorithm (Pokutta et al., 2020) with an approximation-ratio guarantee, given by the following Theorem 1 (Krause and Golovin, 2014).
**Theorem 1**.: Assume our value function \(F:2^{N}\rightarrow\mathbb{R}_{+}\) is monotone and submodular. If \(S^{c*}_{k}\) is an optimal triad set and \(\hat{S}^{c}_{k}\) is the triad set selected by the greedy algorithm (Pokutta et al., 2020), then \(F(\hat{S}^{c}_{k})\geq(1-\frac{1}{e})F(S^{c*}_{k})\) holds.
By Theorem 1, we can greedily select closed triads as in Algorithm 1. As aforementioned, the open triad set \(S^{o}_{k}\) can be chosen by the same method. The proof of Theorem 1 can be found in Krause and Golovin (2014).
**An Acceleration Solution.** We first provide the time complexity analysis of triad selection. When counting triads for class \(k\), we first enumerate each edge that connects two nodes \(v_{s}\) and \(v_{d}\) of class \(k\). Then, for each neighbor node of \(v_{s}\) that belongs to class \(k\), we check whether this neighbor node links to \(v_{d}\). If it does and the temporal-order condition is satisfied, these three nodes form a closed triad; otherwise, they form an open triad. Thus, a rough upper bound on the number of closed triads in class \(k\) is \(O(d_{k}|\mathcal{E}_{k}|)\), where \(|\mathcal{E}_{k}|\) is the number of edges between two nodes of class \(k\), and \(d_{k}\) is the maximum degree of nodes of class \(k\). When selecting closed triads, finding a closed triad that maximizes the value function takes \(O(|N^{c}_{k}|^{2})\), where \(|N^{c}_{k}|\) is the number of positive closed triads in class \(k\). Thus, selecting the closed triad set \(S^{c}_{k}\) is of order \(O(M|N^{c}_{k}|^{2})\), where \(M\) is the memory budget for each class. The time complexity for selecting the open triads is the same.
To accelerate the selection process, a natural idea is to reduce \(N^{c}_{k}\) by only selecting closed triads from \(g^{c}_{k}\) with large values of \(\mathcal{R}(g^{c}_{k})\). Specifically, we sort the closed triad \(g^{c}_{k}\) based on \(\mathcal{R}(g^{c}_{k})\), and use the top-\(K\) ones as the candidate set \(N^{c}_{k}\) for selection. The way for selecting open triads is the same.
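The candidate pruning described above amounts to a single sort; a tiny illustrative snippet (the scores are made up):

```python
import numpy as np

# Hypothetical representativeness scores R(g) for five candidate triads.
R = np.array([0.2, 3.1, -0.4, 1.5, 0.9])
K = 3
# Keep only the top-K triads by R(g) as the reduced candidate set N_k^c.
candidates = np.argsort(R)[::-1][:K]
print(candidates.tolist())  # → [1, 3, 4]
```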
```
Input: all triads \(N^{c}_{k}\) for class \(k\), budget \(M\)
Output: representative triad set \(S^{c}_{k}\)
1: Initialize \(S^{c}_{k}=\emptyset\)
2: while \(|S^{c}_{k}|<M\) do
3:   \(u=\arg\max_{u\in N^{c}_{k}\setminus S^{c}_{k}}F(S^{c}_{k}\cup\{u\})\)
4:   \(S^{c}_{k}=S^{c}_{k}\cup\{u\}\)
5: end while
6: return \(S^{c}_{k}\)
```
**Algorithm 1** Representative triad selection
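Algorithm 1, with the value function \(F\) of Eq. (6), can be sketched as follows. This is our own toy implementation: the representativeness scores, mean triad embeddings, and hyper-parameters are illustrative, and the marginal gain drops the constant \(\sum_{g\in S}\mathcal{R}(g)\) term of \(F(S\cup\{u\})\):

```python
import numpy as np

def greedy_triads(R, emb, M, gamma=1.0, delta=0.5):
    """Greedy maximization of Eq. (6) over positive-R candidates (Algorithm 1).

    R   : representativeness score R(g) per candidate triad, shape (n,)
    emb : mean embedding of each triad's three vertices, shape (n, d)
    """
    cand = [i for i in range(len(R)) if R[i] > 0]            # keep positive R only
    covers = {i: {j for j in cand
                  if np.linalg.norm(emb[j] - emb[i]) <= delta}  # C(g), Eq. (4)
              for i in cand}
    n = max(len(cand), 1)
    S, covered = [], set()
    while len(S) < min(M, len(cand)):
        # argmax_u F(S ∪ {u}): R(u) plus the coverage term of Eq. (5)
        gain = lambda i: R[i] + gamma * len(covered | covers[i]) / n
        best = max((i for i in cand if i not in S), key=gain)
        S.append(best)
        covered |= covers[best]
    return S

R = np.array([3.0, 2.9, 0.5, -1.0])             # triad 3 is harmful (R < 0)
emb = np.array([[0.0], [0.05], [5.0], [9.0]])   # triads 0 and 1 are near-duplicates
print(greedy_triads(R, emb, M=2, gamma=10.0, delta=0.1))  # → [0, 2]
```

Despite triad 1's high score, the diversity term makes the algorithm pick the distant triad 2 as the second choice.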
**Triad Structure Replay.** After obtaining representative closed and open triad sets, \(S^{c}_{k}\) and \(S^{o}_{k}\), we will replay these triads from old classes when learning new classes, so as to overcome catastrophic forgetting. First, we hope the model is able to correctly predict the labels of nodes from the selected triad set, and thus use the cross entropy loss \(\mathcal{L}_{ce}\) for each node in the selected triad set.
Moreover, as mentioned above, the triad closure process can capture the evolution pattern of a dynamic graph. Thus, we use the link prediction loss \(\mathcal{L}_{link}\) to predict the probability that two nodes are connected, based on the closed and open triads, so as to further preserve knowledge:
\[\mathcal{L}_{link}=-\frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\log(\sigma(x_{p}^{i}(t)^{\top}x_{q}^{i}(t)))-\frac{1}{N_{o}}\sum_{i=1}^{N_{o}}\log(1-\sigma(\tilde{x}_{p}^{i}(t)^{\top}\tilde{x}_{q}^{i}(t))), \tag{7}\]
where \(N_{c}\) and \(N_{o}\) are the numbers of closed and open triads, respectively, with \(N_{c}=N_{o}=N_{t}*M\), where \(N_{t}\) is the number of old classes. \(\sigma\) is the sigmoid function. \(x_{p}^{i}(t),x_{q}^{i}(t)\) are the embeddings of \(v_{p}\), \(v_{q}\) of the \(i^{th}\) closed triad, and \(\tilde{x}_{p}^{i}(t)\), \(\tilde{x}_{q}^{i}(t)\) are the embeddings of \(v_{\tilde{p}}\), \(v_{\tilde{q}}\) of the \(i^{th}\) open triad. Here the closed triads and open triads serve as positive samples and negative samples, respectively.
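A numpy sketch of Eq. (7), with randomly generated stand-in embeddings (the pairing of \((v_p, v_q)\) per triad is assumed precomputed):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def link_loss(closed_pairs, open_pairs):
    """Eq. (7): closed triads as positive pairs, open triads as negative pairs.

    closed_pairs, open_pairs : arrays of shape (N, 2, d) holding the embeddings
    of (v_p, v_q) for each closed triad and of the open-triad endpoints.
    """
    pos = np.einsum('nd,nd->n', closed_pairs[:, 0], closed_pairs[:, 1])
    neg = np.einsum('nd,nd->n', open_pairs[:, 0], open_pairs[:, 1])
    return -np.mean(np.log(sigmoid(pos))) - np.mean(np.log(1 - sigmoid(neg)))

rng = np.random.default_rng(0)
closed = rng.normal(size=(4, 2, 8))
open_ = rng.normal(size=(4, 2, 8))
loss = link_loss(closed, open_)
print(loss > 0)  # a sum of negative log-likelihoods, hence always positive
```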
### Message Passing via Information Bottleneck
When a new class occurs, it is possible that an edge connects one node of the new class and one node of an old class, as shown in Figure 1. To avoid aggregating conflicting knowledge between nodes of different classes, one intuitive idea is to extract class-agnostic knowledge from each node and transfer only this class-agnostic knowledge between nodes of different classes. To do this, we extend the information bottleneck principle to obtain a class-agnostic representation for each node.
**Class-agnostic Representation.** The traditional information bottleneck aims to learn a representation that preserves the maximum information about the class while having minimal mutual information with the input (Tishby et al., 2000). Differently, we attempt to extract class-agnostic representations from the opposite view: we expect the learned representation to have minimum information about the class, while preserving the maximum information about the input. Thus, we propose the objective function:
\[J_{IB}=\min_{Z(t)}\;I(Z(t),Y)-\beta I(Z(t),X(t)), \tag{8}\]
where \(\beta\) is the Lagrange multiplier. \(I(\cdot,\cdot)\) denotes the mutual information. \(X(t)\), \(Z(t)\) are the random variables of the node embeddings and class-agnostic representations at time-stamp \(t\). \(Y\) is the random variable of node label. In this paper, we adopt a two-layer MLP for mapping \(X(t)\) to \(Z(t)\).
However, directly optimizing (8) is intractable. We therefore utilize CLUB (Cheng et al., 2020) to estimate the upper bound of \(I(Z(t),Y)\) and MINE (Belghazi et al., 2018) to estimate the lower bound of \(I(Z(t),X(t))\). The upper bound of our objective can then be written as:
\[J_{IB}\leq\mathcal{L}_{IB}=\mathbb{E}_{p(Z(t),Y)}[\log q_{\mu}(y|z(t))]-\mathbb{E}_{p(Z(t))}\mathbb{E}_{p(Y)}[\log q_{\mu}(y|z(t))]\\ -\beta(\sup_{\psi}\mathbb{E}_{p(X(t),Z(t))}[T_{\psi}(x(t),z(t))]-\log(\mathbb{E}_{p(X(t))p(Z(t))}[e^{T_{\psi}(x(t),z(t))}])). \tag{9}\]
where \(z(t)\), \(x(t)\), \(y\) are instances of \(Z(t)\), \(X(t)\), \(Y\), respectively. \(T_{\psi}:\mathcal{X}\times\mathcal{Z}\rightarrow\mathbb{R}\) is a neural network parametrized by \(\psi\). Since \(p(y|z(t))\) is unknown, we introduce a variational approximation \(q_{\mu}(y|z(t))\) with parameter \(\mu\). By minimizing this upper bound \(\mathcal{L}_{IB}\), we obtain an approximate solution to Eq. (8). The derivation of formula (9) is in Appendix A.1.
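For intuition, the first two terms of \(\mathcal{L}_{IB}\) (the sampled CLUB upper bound on \(I(Z(t),Y)\)) can be estimated as below; `logq` stands in for the log-probabilities produced by the variational network \(q_{\mu}\), and the example values are our own:

```python
import numpy as np

def club_upper_bound(logq, y):
    """Sampled CLUB estimate of I(Z, Y), i.e. the first two terms of Eq. (9).

    logq : (N, C) array, logq[i, c] = log q_mu(y=c | z_i)
    y    : (N,) integer labels
    """
    n = len(y)
    positive = np.mean(logq[np.arange(n), y])  # E_{p(z,y)}[log q(y|z)]
    negative = np.mean(logq[:, y])             # E_{p(z)} E_{p(y)}[log q(y|z)]
    return positive - negative

# If q(y|z) ignores z (identical rows), the estimate is 0:
# Z carries no information about the label.
logq = np.tile(np.log([0.7, 0.3]), (5, 1))
y = np.array([0, 1, 0, 0, 1])
print(round(club_upper_bound(logq, y), 10))  # → 0.0
```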
It is worth noting that \(z_{i}(t)\) is an intermediate variable serving as the class-agnostic representation of node \(i\). We only use \(z_{i}(t)\) to propagate information to nodes whose classes differ from that of node \(i\). If a node \(j\) has the same class as node \(i\), we still use \(x_{i}(t)\) for the information aggregation of node \(j\), so as to avoid losing information. In this way, the heterophily propagation issue is well addressed.
**Message Propagation.** In order to aggregate temporal and topological information in a temporal graph, many information propagation mechanisms have been proposed (Rossi et al., 2020; Xu et al., 2020). Here, we extend a typical mechanism proposed in TGAT (Xu et al., 2020), and learn the temporal attention coefficient as:
\[a_{ij}(t)=\frac{\exp(([x_{i}(t)||\Phi(t-t_{i})]W_{q})^{\top}([h_{j}(t)||\Phi(t-t_{j})]W_{p}))}{\sum_{l\in\mathcal{N}_{i}(t)}\exp(([x_{i}(t)||\Phi(t-t_{i})]W_{q})^{\top}([h_{l}(t)||\Phi(t-t_{l})]W_{p}))} \tag{10}\]
where \(\Phi\) is a time encoding function proposed in TGAT. \(||\) represents the concatenation operator. \(W_{p}\) and \(W_{q}\) are two learnt parameter matrices. \(t_{i}\) is the time of the last interaction of node \(i\). \(t_{j}\) is the time of the last interaction between node \(i\) and node \(j\). \(t_{l}\) is the time of the last interaction between node \(i\) and node \(l\). Note that we adopt different \(h_{l}(t)\) from that in the original TGAT, defined as:
\[h_{l}(t)=\begin{cases}x_{l}(t),&y_{i}=y_{l}\\ z_{l}(t),&y_{i}\neq y_{l}\end{cases}, \tag{11}\]
where \(h_{l}(t)\) is the message produced by neighbor node \(l\in\mathcal{N}_{i}(t)\). If nodes \(l\) and \(i\) have different classes, we leverage the class-agnostic representation \(z_{l}(t)\) for the information aggregation of node \(i\); otherwise, we directly use the embedding \(x_{l}(t)\). Note that our method supports multiple network layers; we omit the layer index for notational convenience.
Finally, we update the embedding of node \(i\) by aggregating the information from its neighbors:
\[x_{i}(t)=\sum_{j\in\mathcal{N}_{i}(t)}a_{ij}(t)W_{h}h_{j}(t), \tag{12}\]
where \(W_{h}\) is a learnt parameter matrix for message aggregation.
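The propagation step of Eqs. (10)-(12) can be sketched as follows. This is an illustrative single-node, single-layer version with made-up dimensions; `phi` is a stand-in cos/sin time encoding in the spirit of TGAT, not the exact one:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def aggregate(x_i, dt_i, y_i, neigh, Wq, Wp, Wh, phi):
    """Sketch of Eqs. (10)-(12) for a single target node i.

    neigh: list of (x_l, z_l, dt_l, y_l) per neighbor, where dt_l = t - t_l.
    """
    q = np.concatenate([x_i, phi(dt_i)]) @ Wq          # query side of Eq. (10)
    msgs, scores = [], []
    for x_l, z_l, dt_l, y_l in neigh:
        h_l = x_l if y_l == y_i else z_l               # class-agnostic switch, Eq. (11)
        k = np.concatenate([h_l, phi(dt_l)]) @ Wp
        scores.append(q @ k)
        msgs.append(h_l)
    a = softmax(np.array(scores))                      # attention coefficients a_ij(t)
    return sum(a_l * (Wh @ h_l) for a_l, h_l in zip(a, msgs))  # Eq. (12)

rng = np.random.default_rng(1)
d = 2
phi = lambda dt: np.array([np.cos(dt), np.sin(dt)])    # toy time encoding
Wq, Wp = rng.normal(size=(d + 2, 3)), rng.normal(size=(d + 2, 3))
Wh = np.eye(d)
x_i = rng.normal(size=d)
neigh = [(rng.normal(size=d), rng.normal(size=d), 0.3, 0),   # same class as i
         (rng.normal(size=d), rng.normal(size=d), 0.7, 1)]   # different class
out = aggregate(x_i, 0.0, 0, neigh, Wq, Wp, Wh, phi)
print(out.shape)  # → (2,)
```

Only the second neighbor's class-agnostic representation is used, since its class differs from that of the target node.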
### Optimization
During training, we first optimize the information bottleneck loss \(\mathcal{L}_{IB}\). Then, we minimize \(\mathcal{L}=\mathcal{L}_{ce}+\rho\mathcal{L}_{link}\), where \(\rho\) is the hyper-parameter and \(\mathcal{L}_{ce}\) is the node classification loss over both nodes of new classes and that of the selected triads. We alternatively optimize them until convergence. The detailed training procedure and pseudo-code could be found in Appendix A.5.
In testing, since extracting the class-agnostic representation of a node requires its label, we assign each test node a pseudo-label, namely the label that appears most often among its neighbor nodes in the training set, and extract the corresponding embedding. After that, we predict the labels of test nodes based on the extracted embeddings.
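The pseudo-labeling rule at test time is a simple majority vote; for example:

```python
from collections import Counter

def pseudo_label(neighbor_labels):
    """Most frequent label among a test node's training-set neighbors."""
    return Counter(neighbor_labels).most_common(1)[0][0]

print(pseudo_label([2, 0, 2, 1, 2]))  # → 2
```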
## 4 Experiments
### Experiment Setup
**Datasets.** We construct three real-world datasets to evaluate our method: Reddit (Hamilton et al., 2017), Yelp (Sankar et al., 2020), and Taobao (Du et al., 2019). In Reddit, we construct a post-to-post graph: we treat posts as nodes and the subreddit (topic community) a post belongs to as the node label. When a user comments on two posts within a time interval of at most one week, a temporal edge between the two nodes is built. We regard the data in each month as a task, using July to December of 2009. In each month, we sample \(3\) large communities that do not appear in previous months as the new classes. For the Yelp dataset, we construct a business-to-business temporal graph from 2015 to 2019 in the same way as Reddit. For the Taobao dataset, we construct an item-to-item graph, again in the same way as Reddit, over a 6-day promotion season of Taobao. Table 1 summarizes the statistics of these datasets. More information about the datasets can be found in Appendix A.4.
**Experiment Settings.** For each task, we use \(80\%\) of nodes for training, \(10\%\) for validation, and \(10\%\) for testing. We use two widely-used metrics in class-incremental learning to evaluate our method (Chaudhry et al., 2018; Bang et al., 2021): AP and AF. Average Performance (AP) measures the average performance of a model on all previous tasks; here we use accuracy to measure model performance. Average Forgetting (AF) measures how much model performance on previous tasks decreases compared to the best values attained. More implementation details are in Appendix A.6.
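The two metrics can be computed from the task-accuracy matrix; the sketch below follows one common definition (Chaudhry et al., 2018), which may differ in minor details from the paper's exact protocol:

```python
import numpy as np

def ap_af(acc):
    """acc[i, j]: accuracy on task j after training on task i (j <= i).

    AP averages the final row; AF averages, over all but the last task,
    the drop from each task's best accuracy to its final accuracy.
    """
    acc = np.asarray(acc, dtype=float)
    T = acc.shape[0]
    ap = acc[-1].mean()
    af = np.mean([acc[:, j].max() - acc[-1, j] for j in range(T - 1)])
    return ap, af

acc = [[0.9, 0.0, 0.0],
       [0.7, 0.8, 0.0],
       [0.6, 0.7, 0.8]]
ap, af = ap_af(acc)
print(round(ap, 3), round(af, 3))  # → 0.7 0.2
```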
**Baselines.** First, we compare with three incremental learning methods based on static GNNs: ER-GAT (Zhou and Cao, 2021), TWC-GAT (Liu et al., 2021) and ContinualGNN (Wang et al., 2020). For ER-GAT and TWC-GAT, we use the final state of the temporal graph as input in each task. Since ContinualGNN is based on snapshots, we split each task into \(10\) snapshots. In addition, we combine three representative temporal GNNs (TGAT (Xu et al., 2020), TGN (Rossi et al., 2020), TREND (Wen and Fang, 2022)) with three widely-used class-incremental learning methods from computer vision (EWC (Kirkpatrick et al., 2017), iCaRL (Rebuffi et al., 2017), BiC (Wu et al., 2019)) as baselines. For our method, we set \(M\) to \(10\) on all the datasets.
### Results and Analysis
**Overall Comparison.** As shown in Table 2, our method outperforms other methods by a large margin. The reasons are as follows. The first three methods are all based on static GNNs that cannot capture the fine-grained dynamics in temporal graphs. TGN, TGAT and TREND are three dynamic GNNs with a fixed class set. When applying three typical class-incremental learning methods to TGN,
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Reddit & Yelp & Taobao \\ \hline \# Nodes & 10845 & 15617 & 114232 \\ \# Edges & 216397 & 56985 & 455662 \\ \# Total classes & 18 & 15 & 90 \\ \# Timespan & 6 months & 5 years & 6 days \\ \hline \# Tasks & 6 & 5 & 3 \\ \# Classes per task & 3 & 3 & 30 \\ \# Timespan per task & 1 month & 1 year & 2 days \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset Statistics
TGAT and TREND, the phenomenon of catastrophic forgetting is alleviated. However, they still suffer from the issue of heterophily propagation.
**Performance Analysis of Different Task Numbers.** To provide further analysis of our method, we plot the performance changes of different methods as the number of tasks increases. As shown in Figure 4, our method generally achieves better performance than the baselines as the task number increases. Since BiC-based methods achieve better performance according to Table 2, we do not report the results of the other two incremental-learning-based methods. In addition, the curves of OTGNet are smoother than those of other methods, which indicates our method can well address the issue of catastrophic forgetting. Because of space limitations, we provide the curves of AF in Appendix A.7.
**Ablation Study of our proposed propagation mechanism.** We further study the effectiveness of our information bottleneck based message propagation mechanism. OTGNet-w.o.-IB represents our method directly transferring the embeddings of neighbor nodes instead of class-agnostic representations. OTGNet-w.o.-prop denotes our method directly dropping the links between nodes of different classes. We take GBK-GNN (Du et al., 2022) as another baseline, where GBK-GNN originally handles heterophily for static graphs. For a fair comparison, we modify GBK-GNN to the open temporal graph setting: specifically, we create two temporal message propagation modules with separated parameters, corresponding to the two kernel feature transformation matrices in GBK-GNN. We denote
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Reddit} & \multicolumn{2}{c}{Yelp} & \multicolumn{2}{c}{TaoBao} \\ \cline{2-7} & AP(\(\uparrow\)) & AF(\(\downarrow\)) & AP(\(\uparrow\)) & AF(\(\downarrow\)) & AP(\(\uparrow\)) & AF(\(\downarrow\)) \\ \hline ContinualGNN & 52.17 \(\pm\) 2.46 & 25.59 \(\pm\) 5.39 & 49.73 \(\pm\) 0.27 & 28.76 \(\pm\) 1.52 & 58.39 \(\pm\) 0.24 & 47.03 \(\pm\) 0.50 \\ ER-GAT & 52.03 \(\pm\) 2.59 & 22.67 \(\pm\) 3.30 & 62.05 \(\pm\) 0.70 & 18.91 \(\pm\) 1.09 & 70.09 \(\pm\) 0.88 & 23.24 \(\pm\) 0.36 \\ TWC-GAT & 52.88 \(\pm\) 0.53 & 19.60 \(\pm\) 3.64 & 60.90 \(\pm\) 3.74 & 16.92 \(\pm\) 0.63 & 59.91 \(\pm\) 1.71 & 42.78 \(\pm\) 1.39 \\ \hline TGAT & 48.47 \(\pm\) 1.81 & 31.03 \(\pm\) 4.48 & 64.89 \(\pm\) 1.27 & 27.31 \(\pm\) 3.99 & 60.62 \(\pm\) 0.23 & 43.35 \(\pm\) 0.77 \\ TGAT+EWC & 50.16 \(\pm\) 2.45 & 28.27 \(\pm\) 4.00 & 66.58 \(\pm\) 3.11 & 25.48 \(\pm\) 1.75 & 64.03 \(\pm\) 0.62 & 38.26 \(\pm\) 1.20 \\ TGAT+iCaRL & 54.50 \(\pm\) 2.04 & 27.66 \(\pm\) 1.11 & 71.71 \(\pm\) 2.48 & 17.56 \(\pm\) 2.46 & 73.74 \(\pm\) 1.40 & 23.90 \(\pm\) 2.04 \\ TGAT+BiC & 54.61 \(\pm\) 0.89 & 25.42 \(\pm\) 2.72 & 74.73 \(\pm\) 3.54 & 16.42 \(\pm\) 4.41 & 74.05 \(\pm\) 0.48 & 23.27 \(\pm\) 0.65 \\ \hline TGN & 47.49 \(\pm\) 0.48 & 32.06 \(\pm\) 1.91 & 56.24 \(\pm\) 1.65 & 41.27 \(\pm\) 2.30 & 65.89 \(\pm\) 1.20 & 36.15 \(\pm\) 1.55 \\ TGN+EWC & 49.45 \(\pm\) 1.45 & 31.74 \(\pm\) 1.11 & 60.83 \(\pm\) 3.55 & 35.73 \(\pm\) 3.48 & 68.89 \(\pm\) 2.09 & 32.08 \(\pm\) 3.88 \\ TGN+iCaRL & 50.86 \(\pm\) 4.83 & 31.01 \(\pm\) 2.78 & 73.34 \(\pm\) 1.99 & 15.43 \(\pm\) 0.93 & 77.42 \(\pm\) 0.80 & 19.57 \(\pm\) 1.29 \\ TGN+BiC & 53.16 \(\pm\) 1.53 & 26.83 \(\pm\) 0.95 & 73.98 \(\pm\) 2.07 & 16.79 \(\pm\) 2.90 & 77.40 \(\pm\) 0.80 & 18.63 \(\pm\) 1.69 \\ \hline TREND & 49.61 \(\pm\) 2.92 & 28.68 \(\pm\) 4.20 & 57.28 \(\pm\) 2.83 & 37.48 \(\pm\) 3.26 & 61.02 \(\pm\) 0.16 & 42.44 \(\pm\) 0.14 \\ TREND+EWC & 53.12 \(\pm\) 3.30 & 25.70 \(\pm\) 3.08 & 65.45 \(\pm\) 4.79 & 26.80 \(\pm\) 4.98 & 62.72 \(\pm\) 1.18 & 40.00 \(\pm\) 2.09 \\ TREND+iCaRL & 52.53 \(\pm\) 3.67 & 30.63 \(\pm\) 0.18 & 69.93 \(\pm\) 5.55 & 15.81 \(\pm\) 7.48 & 74.49 \(\pm\) 0.05 & 23.27 \(\pm\) 0.25 \\ TREND+BiC & 54.22 \(\pm\) 0.56 & 22.42 \(\pm\) 3.15 & 71.15 \(\pm\) 2.42 & 12.78 \(\pm\) 5.12 & 75.13 \(\pm\) 1.06 & 21.70 \(\pm\) 0.63 \\ \hline OTGNet (Ours) & **73.88**\(\pm\) 4.55 & **19.25**\(\pm\) 5.10 & **83.78**\(\pm\) 1.06 & **4.98**\(\pm\) 0.46 & **79.92**\(\pm\) 0.12 & **12.82**\(\pm\) 0.61 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparisons (%) of our method with baselines. The bold represents the best in each column.
Figure 4: The changes of average performance (AP) (%) on three datasets with the increased tasks.
this baseline as OTGNet-GBK. As shown in Table 3, OTGNet outperforms OTGNet-w.o.-IB and OTGNet-GBK on the three datasets. This illustrates that extracting class-agnostic information is effective for addressing the heterophily propagation issue. OTGNet-w.o.-prop generally performs better than OTGNet-w.o.-IB, which tells us that it is inappropriate to directly transfer information between two nodes of different classes. OTGNet-w.o.-prop is in turn inferior to OTGNet, which means that information is lost if the links between nodes of different classes are directly dropped. An interesting phenomenon is that the AF score becomes much worse without the information bottleneck, indicating that learning better node embeddings with our message passing module is helpful to triad selection.
**Triad Selection Strategy Analysis.** First, we design three variants to study the impact of our triad selection strategy. OTGNet-w.o.-triad means our method does not use any triads (i.e., \(M=0\)). OTGNet-random represents our method selecting triads randomly. OTGNet-w.o.-diversity means our method selects triads without considering diversity. As shown in Table 4, the performance of our method decreases considerably without triads, which shows the effectiveness of using triads to prevent catastrophic forgetting. OTGNet achieves better performance than OTGNet-random and OTGNet-w.o.-diversity, indicating that the proposed triad selection strategy is effective.
**Evolution Pattern Preservation Analysis.** We study the effectiveness of evolution pattern preservation. OTGNet-w.o.-pattern represents our method without evolution pattern preservation (i.e., \(\rho=0\)). As shown in Table 5, OTGNet has superior performance over OTGNet-w.o.-pattern, which illustrates that evolution pattern preservation is beneficial for memorizing the knowledge of old classes.
**Acceleration Performance of Triad Selection.** As stated above, to speed up triad selection, we sort the triads \(g_{k}^{c}\) by \(\mathcal{R}(g_{k}^{c})\) and use the top-\(K\) ones as the candidate set \(N_{k}^{c}\). We perform experiments with different \(K\), fixing \(M=10\). Table 6 shows the results. We notice that with smaller \(K\), the selection time drops quickly while the performance of our model degrades only slightly. This illustrates that our acceleration solution is efficient and effective. The slight performance drop occurs because the total diversity of the triad candidates decreases.
## 5 Conclusion
In this paper, we put forward a general framework, OTGNet, to investigate open temporal graphs. We devise a novel message passing mechanism based on the information bottleneck to extract class-agnostic knowledge for aggregation, which addresses the heterophily propagation issue. To overcome catastrophic forgetting, we propose to select representative triads to memorize the knowledge of old classes, and design a new value function to realize the selection. Experimental results on three real-world datasets demonstrate the effectiveness of our method.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Setting} & \multicolumn{3}{c}{Reddit} & \multicolumn{3}{c}{Yelp} & \multicolumn{3}{c}{Taobao} \\ \cline{2-10} & AP(\(\uparrow\)) & AF(\(\downarrow\)) & Time (h) & AP(\(\uparrow\)) & AF(\(\downarrow\)) & Time (h) & AP(\(\uparrow\)) & AF(\(\downarrow\)) & Time (h) \\ \hline \(K\)=1000 & 73.88 & 19.25 & 1.23 & 83.78 & 4.98 & 0.25 & 79.92 & 12.82 & 1.61 \\ \(K\)=500 & 71.26 & 22.45 & 0.45 & 83.48 & 6.32 & 0.07 & 79.19 & 13.94 & 0.53 \\ \(K\)=200 & 66.83 & 26.88 & 0.07 & 81.86 & 6.87 & 0.02 & 79.14 & 13.73 & 0.10 \\ \(K\)=100 & 66.22 & 28.86 & 0.04 & 78.83 & 10.54 & 0.01 & 78.81 & 14.58 & 0.04 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results of our acceleration solution with different \(K\).
## 6 Acknowledgements
This work was supported by the National Natural Science Foundation of China (NSFC) under Grants 62122013, U2001211.
---

arXiv:2307.01583v1 (submitted 2023-07-04) — https://arxiv.org/abs/2307.01583v1
Authors: Alex Gabel, Victoria Klein, Riccardo Valperga, Jeroen S. W. Lamb, Kevin Webster, Rick Quax, Efstratios Gavves
showcase the effectiveness of the approach in both these settings. | Alex Gabel, Victoria Klein, Riccardo Valperga, Jeroen S. W. Lamb, Kevin Webster, Rick Quax, Efstratios Gavves | 2023-07-04T09:23:24Z | http://arxiv.org/abs/2307.01583v1 | # Learning Lie Group Symmetry Transformations with Neural Networks
###### Abstract
The problem of detecting and quantifying the presence of symmetries in datasets is useful for model selection, generative modeling, and data analysis, amongst others. While existing methods for hard-coding transformations in neural networks require prior knowledge of the symmetries of the task at hand, this work focuses on discovering and characterizing unknown symmetries present in the dataset, namely, Lie group symmetry transformations beyond the traditional ones usually considered in the field (rotation, scaling, and translation). Specifically, we consider a scenario in which a dataset has been transformed by a one-parameter subgroup of transformations with different parameter values for each data point. Our goal is to characterize the transformation group and the distribution of the parameter values. The results showcase the effectiveness of the approach in both these settings.
## 1 Introduction
It has been shown that restricting the hypothesis space of functions that neural networks are able to approximate using known properties of data improves performance in a variety of tasks (Worrall and Welling, 2019; Cohen et al., 2018; Weiler et al., 2018; Zaheer et al., 2017; Cohen and Welling, 2016). The field of Deep Learning has produced a prolific amount of work in this direction, providing practical parameterizations of function spaces with the desired properties that are also universal approximators of the target functions (Yarotsky, 2022). In physics and, more specifically, time-series forecasting of dynamical systems, symmetries are ubiquitous and laws of motion are often symmetric with respect to various transformations such as rotations and translations, while transformations that preserve solutions of equations of motions are in one way or another associated with conserved quantities (Noether, 1918). In computer vision, successful neural network architectures are often invariant with respect to transformations that preserve the perceived object identity as well as all pattern information, such as translation, rotation and scaling. Many of these transformations are smooth and differentiable, and thus belong to the family of Lie groups, which is the class of symmetries we deal with in this work.
Although methods that hard-code transformations are capable of state-of-the-art performance in various tasks, they all require prior knowledge about symmetries in order to restrict the function space of a neural network. A broad class of, a priori unknown, transformations come into play in the context of modelling dynamical systems and in applications to physics. On the other hand, in vision tasks,
Figure 1: The distribution of transformations in a toy dataset that correspond to the Lie groups of rotation and (isotropic) scaling, given in terms of the parameters degree and scaling factor respectively; crucially, these groups are differentiable and can be (locally) decomposed into one-parameter subgroups.
identity-preserving transformations are often known beforehand. Despite this, these transformations are expressed differently by different datasets. As a result, algorithms for not only _discovering_ unknown symmetries but also _quantifying_ the presence of specific transformations in a given dataset, may play a crucial role in informing model selection for scientific discovery or computer vision, by identifying and describing physical systems through their symmetries and selecting models that are invariant or equivariant with respect to only those symmetries that are _actually_ present in the dataset under consideration.
In this work, we address the problem of qualitatively and quantitatively detecting the presence of symmetries with respect to one-parameter subgroups within a given dataset (see Figure 1). In particular, let \(\phi(t)\) be a one parameter subgroup of transformations. We consider the scenario in which a dataset \(\{x_{i}\}_{i=1}^{N}\) has been acted on by \(\phi(t)\), with a _different_ value of the parameter \(t\) for every point \(x_{i}\). Our goal is to characterise the group of transformations \(\phi(t)\), as well as the _distribution_ from which the parameters \(t\) have been sampled. We propose two models: a naive approach that successfully manages to identify the underlying one-parameter subgroup, and an autoencoder model that learns transformations of a one-parameter subgroup in the latent space and is capable of extracting the overall shape of the \(t\)-distributions. The cost of the latter is that the one-parameter subgroup in the latent space is not necessarily identical to that in pixel space. The work is structured as follows: Section 2 introduces some basic tools from Lie group theory; Section 3 outlines the method; Section 5 provides an overview of the existing methods that are related to our own; and lastly, results are shown in Section 4.
## 2 Background
The theoretical underpinnings of symmetries and invariance can be described using group theory (Fulton and Harris, 1991). In particular, we present the necessary theory of one-parameter subgroups (Olver, 1993) on which our method is based, following the exposition of Olver (2010).
### One-parameter subgroups
We focus on learning invariances with respect to one-parameter subgroups of a Lie group \(G\), which offer a natural way to describe continuous symmetries or invariances of functions on vector spaces.
**Definition 2.1**.: A **one-parameter subgroup** of \(G\) is a differentiable homomorphism \(\phi:\mathbb{R}\to G\); that is, \(\phi(t+s)=\phi(t)\phi(s)\) for all \(t,s\in\mathbb{R}\).
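For example (a numerical check added here, not part of the paper), planar rotations form a one-parameter subgroup of \(SO(2)\); the snippet below verifies the homomorphism property and the identity element.

```python
import numpy as np

def phi(t):
    # phi: R -> SO(2), the one-parameter subgroup of planar rotations
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

t, s = 0.7, -1.3
# homomorphism property of Definition 2.1: phi(t + s) = phi(t) phi(s)
assert np.allclose(phi(t + s), phi(t) @ phi(s))
# the identity element is phi(0)
assert np.allclose(phi(0.0), np.eye(2))
```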
Let the action of \(\phi\) on the vector space \(X\subset\mathbb{R}^{n}\) be a transformation \(T:X\times\mathbb{R}\to X\) that is continuous in \(x\in X\) and \(t\in\mathbb{R}\). Because of continuity, for sufficiently small \(t\) and some fixed \(x\in X\), the action is given by
\[T(x,t)\approx x+tA(x)\text{ where }A(x):=\frac{\partial T(x,t)}{\partial t} \Bigg{|}_{t=0}. \tag{1}\]
Note that this is equivalent to taking a first-order Taylor expansion in \(t\) around \(t=0\).
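As a concrete instance (standard textbook material, added here for illustration), take the rotation subgroup acting on \(\mathbb{R}^{2}\):

```latex
T\big((x,y),\,t\big) = \big(x\cos t - y\sin t,\;\; x\sin t + y\cos t\big),
\qquad
A(x,y) = \frac{\partial T\big((x,y),t\big)}{\partial t}\Bigg|_{t=0} = (-y,\; x),
```

so that, for small \(t\), \(T((x,y),t)\approx(x-ty,\;y+tx)\), in agreement with (1).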
### Generators
In general, we can use \(A(x)\) in (1) to construct what is known as the **generator** of a one-parameter subgroup \(\phi\) of a Lie group \(G\), that in turn will characterise an ordinary differential equation, the solution to which coincides with the action \(T\) on \(X\).
Let \(C^{\infty}(X)\) be the space of smooth functions from \(X\) to \(X\). The generator of \(\phi\) is defined as a linear differential operator \(L:C^{\infty}(X)\to C^{\infty}(X)\) such that
\[L=\sum_{i=1}^{n}(A(x))_{i}\frac{\partial}{\partial x_{i}} \tag{2}\]
describing the vector field of the infinitesimal increment \(A(x)t\) in (1), where \(\partial/\partial x_{i}\) are the unit vectors of \(X\) in the coordinate directions for \(i=1,\dots,n\). It can be shown (Olver, 1993) that, for a fixed \(x\in X\), \(T(x,t)\) is the solution to the ordinary differential equation
\[\frac{dT(x,t)}{dt}=LT(x,t)\quad\text{where}\quad T(x,0)=x. \tag{3}\]
The solution to (3) is the exponential \(T(x,t)=e^{tL}x\) where
\[e^{tL}:=\sum_{k=0}^{\infty}\frac{(tL)^{k}}{k!}, \tag{4}\]
where \(L^{k}\) is the operator \(L\) applied \(k\) times iteratively.
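The truncated series in (4) can be checked numerically against the closed form for rotations, whose generator is the matrix with rows \((0,-1)\) and \((1,0)\). This is a small sketch added here; the truncation depth is our own choice.

```python
import numpy as np

# generator of planar rotations, so e^{tA} is rotation by angle t
A = np.array([[0.0, -1.0], [1.0, 0.0]])

def exp_tL(t, L, terms=30):
    # truncated series e^{tL} = sum_k (tL)^k / k!, as in Eq. (4)
    out = np.eye(L.shape[0])
    term = np.eye(L.shape[0])
    for k in range(1, terms):
        term = term @ (t * L) / k
        out = out + term
    return out

t = 0.5
expected = np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])
assert np.allclose(exp_tL(t, A), expected)
```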
For a one-parameter subgroup \(\phi\) of a matrix Lie group \(G\subset GL(n,\mathbb{R})\) and a fixed \(x\in X\), it can be shown (Olver, 1993) that there exists a unique matrix \(A\in\mathbb{R}^{n\times n}\) such that \(A(x)=Ax\). This is a more restrictive approach as groups such as translations cannot be written as a matrix multiplication.
## 3 Method
As in Rao and Ruderman (1998); Sanborn et al. (2022); Dehmamy et al. (2021), the semi-supervised symmetry detection setting that we consider consists of learning the generator \(L\) of a one-parameter subgroup \(\phi\) from pairs of observations of the form \(\{(x_{i},\bar{x}_{i}=T(x_{i},t_{i}))\}_{i=1}^{N}\), where \(N\) is the number of observations and each \(t_{i}\in\mathbb{R}\) is drawn from some unknown distribution \(p(t)\). Not only do we attempt to learn the generator \(L\), but also the unknown distribution \(p(t)\) of the parameters \(\{t_{i}\}_{i=1}^{N}\).
### Parametrisation of the generator
Deciding how to parametrise \(L\) has an effect on the structure of the model and ultimately on what one-parameter subgroups we are able to learn. For simplicity, consider one-parameter subgroups acting on \(X\subset\mathbb{R}^{2}\), although this operator can be defined for higher-dimensional vector spaces. The generator \(L\) of \(\phi\) is given as in Eq. (2) and we parametrise \(A(x,y)\) as a linear operator in the basis \(\{1,x,y\}\) with a coefficient matrix \(A=\alpha\in\mathbb{R}^{2\times 3}\), giving
\[L^{\alpha} :=(\alpha_{11}+\alpha_{12}x+\alpha_{13}y)\frac{\partial}{ \partial x} \tag{5}\] \[+(\alpha_{21}+\alpha_{22}x+\alpha_{23}y)\frac{\partial}{\partial y}.\]
In this particular basis, for different values of \(\alpha\), the generator \(L^{\alpha}\) is able to express one-parameter subgroups of the affine group. This includes the "traditional" symmetries that are usually considered (translation, rotation, and isotropic scaling) and all other affine transformations1. This can be generalized to any functional form of the generator by augmenting the basis accordingly.
Footnote 1: Alternatively, the constant terms can be thought of as the drift terms (i.e. translation) and the four others can be arranged into a diffusion matrix.
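Under the coefficient layout used above (rows give the \(\partial/\partial x\) and \(\partial/\partial y\) components in the basis \(\{1,x,y\}\); the sign convention for rotation follows Figure 4), some familiar subgroups correspond to simple \(\alpha\) matrices. The mapping below is our own illustration, not code from the paper.

```python
import numpy as np

# alpha in R^{2x3}: row 1 multiplies d/dx, row 2 multiplies d/dy,
# columns correspond to the basis {1, x, y}
alphas = {
    "translation_x": np.array([[1.0, 0.0, 0.0],
                               [0.0, 0.0, 0.0]]),
    # rotation with alpha_13 = 1, alpha_22 = -1 (sign convention of Figure 4)
    "rotation":      np.array([[0.0, 0.0, 1.0],
                               [0.0, -1.0, 0.0]]),
    "scaling":       np.array([[0.0, 1.0, 0.0],
                               [0.0, 0.0, 1.0]]),
}

def velocity(alpha, x, y):
    # A(x, y) = alpha @ (1, x, y): the infinitesimal increment of Eq. (1)
    return alpha @ np.array([1.0, x, y])

assert np.allclose(velocity(alphas["rotation"], 1.0, 2.0), [2.0, -1.0])
assert np.allclose(velocity(alphas["translation_x"], 1.0, 2.0), [1.0, 0.0])
assert np.allclose(velocity(alphas["scaling"], 1.0, 2.0), [1.0, 2.0])
```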
### Discretisation and interpolation
The generator \(L^{\alpha}\) is constructed as an operator that acts on a function \(f:\mathbb{R}^{2}\to\mathbb{R}\), given, in practice, by \(I\in\mathbb{R}^{n\times n}\) such that \(I_{ij}=f(i,j)\) are evaluations of \(f\) on a regularly-sampled \(n\times n\) grid \(M\) of points \(M_{ij}=(i,j)\in\mathbb{R}^{2}\). We then vectorise \(I\), obtaining a point in a vector space \(\tilde{I}\in\mathbb{R}^{n^{2}}\) such that \(\tilde{I}_{i\cdot n+j}:=I_{ij}\), and construct the matrix operator \(L^{\alpha}\in\mathbb{R}^{n^{2}\times n^{2}}\) as
\[L^{\alpha} :=(\alpha_{11}+\alpha_{12}X_{x}+\alpha_{13}X_{y})\frac{\partial}{ \partial X_{x}} \tag{6}\] \[+(\alpha_{21}+\alpha_{22}X_{x}+\alpha_{23}X_{y})\frac{\partial}{ \partial X_{y}},\]
acting on \(\tilde{I}\), where \(X_{x}\in\mathbb{R}^{n^{2}\times n^{2}}\) and \(X_{y}\in\mathbb{R}^{n^{2}\times n^{2}}\) are such that \((X_{x})_{ij}:=i\) and \((X_{y})_{ij}:=j\), while \(\partial/\partial X_{x}\) and \(\partial/\partial X_{y}\) are also matrix operators in \(\mathbb{R}^{n^{2}\times n^{2}}\). The exponential in (4) then coincides with the matrix exponential, which gives the action \(T\).
In order to define \(\partial/\partial X_{x}\) and \(\partial/\partial X_{y}\) as operators that transform by infinitesimal amounts at discrete locations, we require an interpolation function. The Shannon-Whittaker theorem (Marks, 2012) states that any square-integrable, piecewise continuous function that is band-limited in the frequency domain can be reconstructed from its discrete samples if they are sufficiently close and equally spaced. For the sake of interpolation, we will also assume that the function is periodic.

Figure 2: Model architecture.
**Interpolation: 1D** In the case where \(M\) is a discrete set of \(n\) points in 1D, we have that \(I(i+n)=I(i)\) for all \(i=1,\ldots,n\) samples. Shannon-Whittaker interpolation reconstructs the signal for all \(x\in\mathbb{R}\) as
\[\begin{split}& I(x)=\sum_{i=0}^{n-1}I(i)Q(x-i),\quad\text{ where}\\ & Q(x)=\frac{1}{n}\left[1+2\sum_{p=1}^{n/2-1}\cos\left(\frac{2 \pi px}{n}\right)\right]\end{split} \tag{7}\]
Differentiating \(Q\) with respect to \(x\) and evaluating it at every \(x_{i}\in M\) gives an analytic expression for a vector field in \(\mathbb{R}^{n}\), describing continuous changes in \(x\) at all \(n\) points (Rao & Ruderman, 1998). This is precisely what \(\partial/\partial x\) or \(\partial/\partial y\) in (5) are.
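To make the 1D construction concrete, the following sketch (our own; the name `Qprime` and the test signal are assumptions, and \(n\) is taken even) builds the derivative operator from \(Q^{\prime}\) and checks that it differentiates a band-limited periodic signal exactly.

```python
import numpy as np

n = 16

def Qprime(x):
    # derivative of the Shannon-Whittaker kernel Q in Eq. (7):
    # Q'(x) = -(4*pi/n^2) * sum_p p * sin(2*pi*p*x/n), p = 1..n/2-1
    p = np.arange(1, n // 2)
    return -(4 * np.pi / n**2) * np.sum(p * np.sin(2 * np.pi * p * x / n))

# derivative operator on the sample grid: (D I)[j] = sum_i I(i) Q'(j - i)
D = np.array([[Qprime(j - i) for i in range(n)] for j in range(n)])

grid = np.arange(n)
I = np.sin(2 * np.pi * grid / n)                      # band-limited, periodic
dI = (2 * np.pi / n) * np.cos(2 * np.pi * grid / n)   # exact derivative
assert np.allclose(D @ I, dI, atol=1e-8)
```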
**Interpolation: 2D** In the case where \(M\) is a grid of \(n\times n\) points in 2D, we construct the \(n\times n\) matrices of the partial derivatives of \(Q\) with respect to \(x\) and \(y\), analogously to the 1D case, stacking them to construct the \(n^{2}\times n^{2}\) block diagonal matrices \(\partial/\partial X_{x}\) and \(\partial/\partial X_{y}\). It is worth noting that alternative interpolation techniques can be used to obtain the operators and the method does not depend on any specific one.
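One simple way to lift the 1D derivative matrix to the \(n^{2}\times n^{2}\) operators is via Kronecker products under row-major vectorisation. This is an implementation choice of ours, not one prescribed by the paper.

```python
import numpy as np

n = 8

def Qprime(x):
    # derivative of the Shannon-Whittaker kernel of Eq. (7)
    p = np.arange(1, n // 2)
    return -(4 * np.pi / n**2) * np.sum(p * np.sin(2 * np.pi * p * x / n))

D = np.array([[Qprime(j - i) for i in range(n)] for j in range(n)])

# row-major vectorisation: flat index i*n + j; x varies along i, y along j
Dx = np.kron(D, np.eye(n))   # d/dX_x acts on the first grid index
Dy = np.kron(np.eye(n), D)   # d/dX_y acts on the second grid index

i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
I = np.sin(2 * np.pi * i / n)    # image varying along x only
v = I.reshape(-1)                # vectorised image in R^{n^2}
dv = (2 * np.pi / n) * np.cos(2 * np.pi * i / n)
assert np.allclose(Dx @ v, dv.reshape(-1), atol=1e-8)
assert np.allclose(Dy @ v, 0.0, atol=1e-8)   # no variation along y
```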
Two different architectures, the naive model and the latent model, are proposed to learn \(L^{\alpha}\) and, in doing so, the action \(T\).
#### 3.2.1 Naive model
The coefficients \(\alpha\) of \(L^{\alpha}\) are approximated by fixed coefficients that are shared across the dataset, while the parameter \(t_{i}\) is approximated by \(\hat{t}_{i}\) that depends on the input pair \((x_{i},\bar{x}_{i})\). We learn
1. the coefficients \(\alpha\in\mathbb{R}^{2\times 3}\) of the generator \(L^{\alpha}\) and
2. the parameters \(\theta\) of an MLP \(f_{\theta}\) that returns \(f_{\theta}(x_{i},\bar{x}_{i})=:\hat{t}_{i}\) as a function of every input pair,
such that the solution to (3) for \(L^{\alpha}\) is approximated by
\[\hat{T}(x_{i},\bar{x}_{i}):=e^{f_{\theta}(x_{i},\bar{x}_{i})L^{\alpha}}\,x_{i}. \tag{8}\]
The model objective is then given by the reconstruction loss
\[\mathcal{L}_{T}(x_{i},\bar{x}_{i})=||\hat{T}(x_{i},\bar{x}_{i})-\bar{x }_{i}||^{2}. \tag{9}\]
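As a stripped-down illustration of this objective (our own toy, in which a grid search stands in for the MLP \(f_{\theta}\) and the generator is fixed to rotations rather than learned):

```python
import numpy as np

def rot(t):
    # closed-form e^{tL} for the rotation generator
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

x = np.array([1.0, 0.0])
t_true = 0.3
x_bar = rot(t_true) @ x          # observed transformed point

# minimise the reconstruction loss (9) over a grid of candidate t
ts = np.linspace(-1.0, 1.0, 2001)
losses = [np.sum((rot(t) @ x - x_bar) ** 2) for t in ts]
t_hat = ts[int(np.argmin(losses))]
assert abs(t_hat - t_true) < 1e-2
```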
#### 3.2.2 Latent model
While the model described above will prove to work sufficiently well for learning the coefficients \(\alpha\) of \(L^{\alpha}\), the matrix exponential function in \(\hat{T}\) in (8) can be costly to compute and difficult to optimise in high dimensions; the cost of the matrix exponential in a single forward pass is roughly cubic in the matrix dimension using the algorithm of Al-Mohy & Higham (2010).
As a result, a different version of the model is proposed that incorporates an autoencoder for reducing dimension. The concept remains the same, but \(x_{i}\) is now mapped to some latent space \(Z\subset\mathbb{R}^{n_{Z}}\) for \(n_{Z}\ll n\), such that the exponential is taken in a significantly lower dimension. This is done by an encoder \(h_{\psi}:X\to Z\) and a decoder \(d_{\psi}:Z\to X\) such that \(z_{i}=h_{\psi}(x_{i})\) and \(x_{i}\approx d_{\psi}(z_{i})\).
We learn
1. the parameters \(\psi\) of an MLP autoencoder,
2. the coefficients \(\tilde{\alpha}\in\mathbb{R}^{2\times 3}\) of the generator \(L^{\tilde{\alpha}}\) for a one-parameter subgroup \(\phi_{Z}\) acting on the latent space \(Z\),
3. the parameters \(\theta\) of an MLP \(f_{\theta}\) that returns \(f_{\theta}(x_{i},\bar{x}_{i})=:\hat{t}_{i}\) as a function of every original input pair \((x_{i},\bar{x}_{i})\),
such that the solution to (3) for \(L^{\alpha}\), the generator in the original space, is approximated by
\[\hat{T}^{Z}(x_{i},\bar{x}_{i})=d_{\psi}(e^{f_{\theta}(x_{i},\bar{x}_{i})L^{ \tilde{\alpha}}}h_{\psi}(x_{i})). \tag{10}\]
It is important to note that enforcing good reconstruction of the autoencoder alone does not enforce the commutativity of the diagram in Figure 3. To make it commutative, we use an objective that is a weighted sum of multiple terms. A simple reconstruction term for the autoencoder on each input example
\[\mathcal{L}_{R}(x_{i}):=||d_{\psi}(h_{\psi}(x_{i}))-x_{i}||^{2}, \tag{11}\]
a transformation-reconstruction term in the original space
\[\mathcal{L}_{T}^{X}(x_{i},\bar{x}_{i}):=||\hat{T}^{Z}(x_{i},\bar{x}_{i}) -\bar{x}_{i}||^{2}, \tag{12}\]
a transformation-reconstruction term in the latent space
\[\mathcal{L}_{T}^{Z}(x_{i},\bar{x}_{i}):=||e^{f_{\theta}(x_{i},\bar{x}_{i})L^{ \tilde{\alpha}}}h_{\psi}(x_{i})-h_{\psi}(\bar{x}_{i})||^{2}, \tag{13}\]
and a regularisation term on the generator coefficients \(\tilde{\alpha}\). The overall loss of the latent model is
\[\begin{split}\mathcal{L}(x_{i},\bar{x}_{i})&=\lambda _{R}(\mathcal{L}_{R}(x_{i})+\mathcal{L}_{R}(\bar{x}_{i}))\\ &+\,\lambda_{X}\mathcal{L}_{T}^{X}(x_{i},\bar{x}_{i})+\lambda_{Z} \mathcal{L}_{T}^{Z}(x_{i},\bar{x}_{i})\\ &+\lambda_{L}||\tilde{\alpha}||^{2},\end{split} \tag{14}\]
where \(\lambda_{R},\lambda_{X},\lambda_{Z},\lambda_{L}\in\mathbb{R}\) are treated as hyperparameters.
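The combination of terms can be sketched in a few lines of numpy. All names here, and the series-based matrix exponential, are our own; the paper uses MLP encoders and decoders rather than the callables passed in below.

```python
import numpy as np

def mat_exp(M, terms=30):
    # truncated series e^M, adequate for small matrices
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def latent_model_loss(x, x_bar, t_hat, L_tilde, enc, dec,
                      lam_R, lam_X, lam_Z, lam_L):
    # weighted sum of Eqs. (11)-(13) plus the generator penalty
    # (squared norm, as written in Eq. 14)
    z, z_bar = enc(x), enc(x_bar)
    z_push = mat_exp(t_hat * L_tilde) @ z
    L_R = np.sum((dec(enc(x)) - x) ** 2) + np.sum((dec(enc(x_bar)) - x_bar) ** 2)
    L_X = np.sum((dec(z_push) - x_bar) ** 2)
    L_Z = np.sum((z_push - z_bar) ** 2)
    return lam_R * L_R + lam_X * L_X + lam_Z * L_Z + lam_L * np.sum(L_tilde ** 2)

# sanity check: identity autoencoder, zero generator, x == x_bar -> zero loss
ident = lambda v: v
x = np.array([0.5, -0.2])
loss = latent_model_loss(x, x, 0.7, np.zeros((2, 2)), ident, ident,
                         1.0, 1.0, 1.0, 1.0)
assert loss == 0.0
```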
**Recovering the group** It is important to note that the one-parameter subgroup corresponding to the generator \(L^{\tilde{\alpha}}\) and that corresponding to \(L^{\alpha}\) are _not_ necessarily the same; \(L^{\alpha}\) is the generator corresponding to an action on \(X\) of a one-parameter subgroup \(\phi\), while \(L^{\tilde{\alpha}}\) is a different generator corresponding to an action on \(Z\) of a different one-parameter subgroup \(\phi_{Z}\).
### Uniqueness
For both the naive model in Section 3.2.1 and the latent model in Section 3.2.2, the approximations \(\hat{t}_{i}\) for the values of the parameters \(t_{i}\) require interpretation. Both models parametrise \(\hat{T}\) or \(\hat{T}^{Z}\) with the products \(\hat{t}_{i}L^{\alpha}\) or \(\hat{t}_{i}L^{\tilde{\alpha}}\) respectively, where \(\hat{t}_{i}=f_{\theta}(x_{i},\bar{x}_{i})\). While both the values of \(\hat{t}_{i}L^{\alpha}\) and \(\hat{t}_{i}L^{\tilde{\alpha}}\) are unique for a given action on \(X\) and \(Z\) respectively, their decomposition is only unique up to a constant. Therefore, \(L^{\alpha}\) or \(L^{\tilde{\alpha}}\) and \(\hat{t}\) approximate the generators and the parameter respectively up to a constant. Consequently, the one-parameter subgroup \(\phi\) can only be deduced from the values of the individual coefficients in \(\alpha\)_relative to one another_, as opposed to in absolute terms; likewise for \(\phi_{Z}\) and \(\tilde{\alpha}\). We therefore recover a scaled approximation of the distribution of \(\hat{t}_{i}\).
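The up-to-a-constant ambiguity can be seen numerically: rescaling the generator by a constant \(c\) while rescaling \(\hat{t}\) by \(1/c\) leaves the product \(\hat{t}L\), and hence the transformation, unchanged. This is a toy illustration added here, not from the paper.

```python
import numpy as np

def mat_exp(M, terms=30):
    # truncated series e^M (sufficient for small matrices)
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

L = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation generator
t, c = 0.4, 2.5
# the decomposition t * L is only unique up to the constant c
assert np.allclose(mat_exp(t * L), mat_exp((t / c) * (c * L)))
```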
### The most general setting
Suppose we are given a labelled dataset \(\mathcal{D}=\{(x_{i},c_{i})\}_{i=1}^{N}\) and a one-parameter subgroup \(\phi\). Then we call \(\mathcal{D}\)_symmetric_ or _invariant with respect to \(\phi\)_ if the action of \(\phi\) preserves the object identity of the data points, where by object identity we mean any property of the data that we might be interested in. For example, in the case of MNIST handwritten digits, rigid transformations preserve their labels 2 and therefore can be considered symmetries of the dataset. Now suppose that every \(x_{i}\) in \(\mathcal{D}\) is acted on by \(\phi\), with parameter \(t_{i}\), to get \(T\mathcal{D}=\left\{(T(x_{i},t_{i}),c_{i})\right\}_{i=1}^{N}\). The most general, fully unsupervised symmetry detection setting consists of learning \(\phi\), and characterising the distribution of the parameter \(t\), from just \(T\mathcal{D}\). The idea is that, under the assumption that points with the same label are sufficiently similar for the subgroup transformation to account for the important difference3, we can use labels to group data points, and compare those data points using methods such as the one presented in this paper. We leave the fully unsupervised symmetry detection setting for future work, although we emphasise that the proposed method can, in principle, be used in such a setting without substantial changes to the architecture.
Footnote 2: With the exception of the number ‘9’ that, if rotated 180 degrees, becomes a ‘6’.
## 4 Experiments
### Experiment setting
In practice, we experiment with a dataset of MNIST digits transformed with either 2D rotations or translations in one direction. To test the method's ability to learn distributions of these transformations, for each one-parameter subgroup \(\phi\in\{SO(2),T(2)\}\) we construct a dataset \(\{(x_{i},T(x_{i},t_{i}))\}_{i=1}^{N}\) by sampling the parameters \(t_{i}\in\mathbb{R}\) from various _multimodal_ distributions.
As in (Rao & Ruderman, 1998), the dataset is composed of signals \(I:M\longrightarrow\mathbb{R}\) regularly-sampled from a discrete grid of \(n^{2}\) points \((x,y)\in\mathbb{R}^{2}\) for \(n=28\). The signals \(I\) are vectorised into points in \(\mathbb{R}^{784}\) as described in Section 3.2. The implementation of the naive model is available here.
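As a toy stand-in for this setup (our own construction; the actual experiments use vectorised MNIST images), pairs can be generated by sampling \(t\) from a bimodal mixture and rotating random points.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# bimodal parameter distribution p(t): a mixture of two Gaussians
modes = rng.choice([-0.6, 0.6], size=N)
t = modes + 0.1 * rng.normal(size=N)

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

x = rng.normal(size=(N, 2))     # stand-in for vectorised images
x_bar = np.stack([rot(ti) @ xi for ti, xi in zip(t, x)])

assert x_bar.shape == (N, 2)
# each pair differs by a rotation, so norms are preserved
assert np.allclose(np.linalg.norm(x, axis=1), np.linalg.norm(x_bar, axis=1))
```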
### Main model experiments
The naive model architecture outlined in 3.2.1 consists of a fully-connected, 3-layer MLP for \(f_{\theta}\) that was trained jointly with the coefficients \(\alpha_{ij}\) using Adam (Kingma & Ba, 2014) with a learning rate of 0.001. Given the disproportionate number of trainable parameters in \(f_{\theta}\) relative to the 6 coefficients in \(\alpha\), updating \(\alpha_{ij}\) roughly 10 times for every update of \(\theta\) in \(f_{\theta}\) was found to be beneficial during training.
**Coefficients** Figure 4 shows the evolution of \(\alpha_{ij}\) during training. It can be seen that after a few hundred steps, the coefficients \(\alpha_{ij}\) that do not correspond to the infinitesimal generator of the symmetry expressed by the dataset drop to zero, while those that do settle to values compatible with those of the ground truth generator \(L\).
### Latent model experiments
The latent model outlined in 3.2.2 consists of a fully-connected, 3-layer MLP \(f_{\theta}\), as in (8), to approximate \(\hat{t}\), and two fully-connected, 3-layer MLPs with decreasing/increasing hidden dimensions for the encoder \(h_{\psi}\) and the decoder \(d_{\psi}\). We set the latent dimension to \(n_{Z}=25\). Similar to the naive model experiment above, \(f_{\theta}\) was trained jointly with the coefficients \(\tilde{\alpha}_{ij}\) using Adam (Kingma & Ba, 2014) with learning rate 0.001.
**Parameters** After every epoch (roughly 500 steps), the outputs of \(\hat{t}=f_{\theta}\) were collected in a histogram to show \(p(\hat{t})\). Figure 5 shows how the distribution of \(\hat{t}\) changes during training and how multimodal distributions are clearly recovered, showing the same number of modes as the ground truth distribution from which the transformations were sampled.
## 5 Related Work
**Symmetries in Neural Networks** Numerous studies have tackled the challenges associated with designing neural network layers and/or models that are equivariant with respect to specific transformations (Finzi et al., 2021).
Figure 4: Training evolution of the coefficients \(\alpha\) defining the generator \(L^{\alpha}\) of the one-parameter subgroup, that are shown to converge to the ground-truth non-zero coefficients \(\alpha\) for rotated (\(-\alpha_{22}=\alpha_{13}=1\) and \(0\) otherwise) and translated (\(\alpha_{11}=1\) and \(0\) otherwise) MNIST.
Figure 5: Training evolution of the distributions \(p(\hat{t})\) of the learned parameters \(\hat{t}\) computed by \(f_{\theta}\) for the validation set. The figure shows that \(p(\hat{t})\) resembles the original multi-modal distributions \(p(t)\) of the transformations expressed by the dataset.
These transformations include continuous symmetries such as scaling (Worrall and Welling, 2019), rotation on spheres (Cohen et al., 2018), local gauge transformations (Cohen et al., 2019) and general E(2) transformations on the Euclidean plane (Weiler and Cesa, 2019), as well as discrete transformations like permutations of sets (Zaheer et al., 2017) and reversing symmetries (Valberga et al., 2022). Another line of research focuses on establishing theoretical principles and practical techniques for constructing general group-equivariant neural networks. Research in these areas shows improved performance on tasks related to symmetries, but nonetheless requires prior knowledge about the symmetries themselves.
**Symmetry Detection** Symmetry detection aims to discover symmetries from observations, a learning task that is of great importance in and of itself. Detecting symmetries in data not only lends itself to more efficient and effective machine learning models but also to discovering fundamental laws that govern data, a long-standing area of interest in the physical sciences. Learned symmetries can then be incorporated after training in equivariant models or used for data augmentation for downstream tasks. In physics and dynamical systems, the task of understanding and discovering symmetries is a crucial one; in classical mechanics and more generally Hamiltonian dynamics, continuous symmetries of the Hamiltonian are of great significance since they are associated, through Noether's theorem (Noether, 1918), with conservation laws such as conservation of angular momentum or conservation of charge.
The first works on learning symmetries of one-parameter subgroups from observations were Rao and Ruderman (1998) and Miao and Rao (2007), which outline MAP-inference methods for learning infinitesimally small transformations. Sohl-Dickstein et al. (2010) propose a transformation-specific smoothing operation of the transformation space to overcome the issue of a highly non-convex reconstruction objective that includes an exponential map. These methods are close to ours in that we also make use of the exponential map to obtain group elements from their Lie algebra. Despite this, Sohl-Dickstein et al. (2010) do not consider the task of characterising the distribution of the parameter of the subgroup, nor do they consider the whole of pixel-space, using small patches instead. Cohen and Welling (2014) focus on disentangling and learning the distributions of multiple compact "toroidal" one-parameter subgroups in the data.
**Neural Symmetry Detection** A completely different approach to symmetry discovery is that of Sanborn et al. (2022), whose model uses a group invariant function known as the bispectrum to learn group-equivariant and group-invariant maps from observations. Benton et al. (2020) consider a task similar to ours, attempting to learn groups with respect to which the data is invariant; however, their objective places constraints directly on the network parameters as well as the distribution of transformation parameters with which the data is augmented. Alternatively, Dehmamy et al. (2021) require knowledge of the specific transformation parameter for each input pair (differing by that transformation), unlike our model, where no knowledge of the one-parameter group is used in order to find the distribution of the transformation parameter.
**Latent Transformations** Learning transformations of a one-parameter subgroup in latent space (whether that subgroup be identical to the one in pixel space or not) has been accomplished by Keurti et al. (2023) and Zhu et al. (2021). Nevertheless, other works either presuppose local structure in the data by using CNNs instead of fully-connected networks or focus on disentangling interpretable features instead of directly learning generators that can be used as an inductive bias for a new model.
In contrast to the other works mentioned above, we propose a promising framework in which we can simultaneously
* perform symmetry detection in pixel-space, without assuming any inductive biases in the data _a priori_,
* parametrize the generator such that non-compact groups (e.g. translation) can be naturally incorporated,
* and learn both the generator and the parameter distributions.
## 6 Discussion
In this work we proposed a framework for learning one-parameter subgroups of Lie group symmetries from observations. Our method uses a neural network to predict the parameter of the transformation that has been applied to each data point, and jointly learns the coefficients of a linear combination of pre-specified generators. We show that our method can learn the correct generators for a variety of transformations as well as characterise the distribution of the parameter that has been used for transforming the dataset.
While the goal of learning both the coefficients of the generator and the distribution of the transformation parameter has not been accomplished by a single model in this work, modifying our existing framework to do so is a priority for future work. In addition, the proposed method lends itself well to being composed to form multiple layers, which can then be applied to datasets that express multiple symmetries. By doing so, ideally, each layer would learn one individual symmetry. We leave this study, and the more general, fully unsupervised setting described in Section 3.4, for future work.
## Acknowledgements
This publication is based on work partially supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1) and the Dorris Chen Award granted by the Department of Mathematics, Imperial College London.
|
2301.11378 | MG-GNN: Multigrid Graph Neural Networks for Learning Multilevel Domain
Decomposition Methods | Domain decomposition methods (DDMs) are popular solvers for discretized
systems of partial differential equations (PDEs), with one-level and multilevel
variants. These solvers rely on several algorithmic and mathematical
parameters, prescribing overlap, subdomain boundary conditions, and other
properties of the DDM. While some work has been done on optimizing these
parameters, it has mostly focused on the one-level setting or special cases
such as structured-grid discretizations with regular subdomain construction. In
this paper, we propose multigrid graph neural networks (MG-GNN), a novel GNN
architecture for learning optimized parameters in two-level DDMs. We train
MG-GNN using a new unsupervised loss function, enabling effective training on
small problems that yields robust performance on unstructured grids that are
orders of magnitude larger than those in the training set. We show that MG-GNN
outperforms popular hierarchical graph network architectures for this
optimization and that our proposed loss function is critical to achieving this
improved performance. | Ali Taghibakhshi, Nicolas Nytko, Tareq Uz Zaman, Scott MacLachlan, Luke Olson, Matthew West | 2023-01-26T19:44:45Z | http://arxiv.org/abs/2301.11378v2 | # MG-GNN: Multigrid Graph Neural Networks for Learning Multilevel Domain Decomposition Methods
###### Abstract
Domain decomposition methods (DDMs) are popular solvers for discretized systems of partial differential equations (PDEs), with one-level and multilevel variants. These solvers rely on several algorithmic and mathematical parameters, prescribing overlap, subdomain boundary conditions, and other properties of the DDM. While some work has been done on optimizing these parameters, it has mostly focused on the one-level setting or special cases such as structured-grid discretizations with regular subdomain construction. In this paper, we propose multigrid graph neural networks (MG-GNN), a novel GNN architecture for learning optimized parameters in two-level DDMs. We train MG-GNN using a new unsupervised loss function, enabling effective training on small problems that yields robust performance on unstructured grids that are orders of magnitude larger than those in the training set. We show that MG-GNN outperforms popular hierarchical graph network architectures for this optimization and that our proposed loss function is critical to achieving this improved performance.
Machine Learning, ICML
## 1 Introduction
Differential equations are at the core of many important scientific and engineering problems (Gholizadeh et al., 2021; Han and Lin, 2011), and often, there is no analytical solution available; hence, researchers utilize numerical solvers (Vos et al., 2002; Gholizadeh et al., 2023). Among numerical methods for solving the systems of equations obtained from discretization of partial differential equations (PDEs), domain decomposition methods (DDMs) are a popular approach (Toselli and Widlund, 2005; Quarteroni and Valli, 1999; Dolean et al., 2015). They have been extensively studied and applied to elliptic boundary value problems, but are also considered for time-dependent problems. Schwarz methods are among the simplest and most popular types of DDM, and map well to MPI-style parallelism, with both one-level and multilevel variants. One-level methods decompose the global problem into multiple subproblems (subdomains), which are obtained either by discretizing the same PDE over a physical subdomain or by projection onto a discrete basis, using subproblem solutions to form a preconditioner for the global problem. Classical Schwarz methods generally consider Dirichlet or Neumann boundary conditions between the subdomains, while Optimized Schwarz methods (OSM) (Gander et al., 2000) consider a combination of Dirichlet and Neumann boundary conditions, known as Robin-type boundary conditions, to improve the convergence of the method. Restricted additive Schwarz (RAS) methods (Cai and Sarkis, 1999) are a common form of Schwarz methods, and optimized versions of one-level RAS have been theoretically studied by St-Cyr et al. (2007). Two-level methods extend one-level approaches by adding a (global) coarse-grid correction step to the preconditioner, generally improving performance but at an added cost.
In recent years, there has been a growing focus on using machine learning (ML) methods to learn optimized parameters for iterative PDE solvers, including DDM and algebraic multigrid (AMG). In Greenfeld et al. (2019), convolutional neural networks (CNNs) are used to learn the interpolation operator in AMG on structured problems, and in a following study (Luz et al., 2020), graph neural networks (GNNs) are used to extend the results to arbitrary unstructured grids. In a different fashion, reinforcement learning methods along with GNNs are used to learn coarse-grid selection in reduction-based AMG in Taghibakhshi et al. (2021). As mentioned in Heinlein et al. (2021), when combining ML methods with DDM, approaches can be categorized into two main families, namely using ML within a classical DDM framework to obtain improved convergence and using deep neural networks as the main solver or discretization module for DDMs. In a recent study (Taghibakhshi et al., 2022), GNNs are used to learn interface conditions
2305.11857 | Computing high-dimensional optimal transport by flow neural networks | Flow-based models are widely used in generative tasks, including normalizing
flow, where a neural network transports from a data distribution $P$ to a
normal distribution. This work develops a flow-based model that transports from
$P$ to an arbitrary $Q$ where both distributions are only accessible via finite
samples. We propose to learn the dynamic optimal transport between $P$ and $Q$
by training a flow neural network. The model is trained to optimally find an
invertible transport map between $P$ and $Q$ by minimizing the transport cost.
The trained optimal transport flow subsequently allows for performing many
downstream tasks, including infinitesimal density ratio estimation (DRE) and
distribution interpolation in the latent space for generative models. The
effectiveness of the proposed model on high-dimensional data is demonstrated by
strong empirical performance on high-dimensional DRE, OT baselines, and
image-to-image translation. | Chen Xu, Xiuyuan Cheng, Yao Xie | 2023-05-19T17:48:21Z | http://arxiv.org/abs/2305.11857v4 | # Optimal transport flow and infinitesimal density ratio estimation
###### Abstract
Continuous normalizing flows are widely used in generative tasks, where a flow network transports from a data distribution \(P\) to a normal distribution. A flow model that transports from \(P\) to an arbitrary \(Q\), where both \(P\) and \(Q\) are accessible via finite samples, is of various application interests, particularly in the recently developed telescoping density ratio estimation (DRE) which calls for the construction of intermediate densities to bridge between the two densities. In this work, we propose such a flow by a neural-ODE model which is trained from empirical samples to transport invertibly from \(P\) to \(Q\) (and vice versa) and optimally by minimizing the transport cost. The trained flow model allows us to perform infinitesimal DRE along the time-parametrized log-density by training an additional continuous-time network using classification loss, whose time integration provides a telescopic DRE. The effectiveness of the proposed model is empirically demonstrated on high-dimensional mutual information estimation and energy-based generative models of image data.
## 1 Introduction
The problem of finding a transport map between two general distributions \(P\) and \(Q\) in high dimension is essential in statistics, optimization, and machine learning. When both distributions are only accessible via finite samples, the transport map needs to be learned from data. In spite of the modeling and computational challenges, this setting has applications in many fields. For example, transfer learning in domain adaption aims to obtain a model on the target domain at a lower cost by making use of an existing pre-trained model on the source domain (Courty et al., 2014, 2017), and this can be achieved by transporting the source domain samples to the target domain using the transport map. The (optimal) transport has also been applied to achieve model fairness (Jiang et al., 2020; Silvia et al., 2020). By transporting distributions corresponding to different sensitive attributes to a common distribution, an unfair model is calibrated to match certain desired fairness criteria. The transport map can also be used to provide intermediate interpolating distributions between \(P\) and \(Q\). In density ratio estimation (DRE), this bridging facilitates the so-called "telescopic" DRE (Rhodes et al., 2020) which has been shown to be more accurate when \(P\) and \(Q\) significantly differ.
This work focuses on a continuous-time formulation of the problem where we are to find an invertible transport map \(T_{t}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) continuously parametrized by time \(t\in[0,1]\) and satisfying
that \(T_{0}=\text{Id}\) and \((T_{1})_{\#}P=Q\). Here we denote by \(T_{\#}P\) the push-forward of distribution \(P\) by a mapping \(T\), such that \((T_{\#}P)(\cdot)=P(T^{-1}(\cdot))\). Suppose \(P\) and \(Q\) have densities \(p\) and \(q\) respectively in \(\mathbb{R}^{d}\) (we also use the push-forward notation \({}_{\#}\) on densities), the transport map \(T_{t}\) defines
\[\rho(x,t):=(T_{t})_{\#}p,\quad\text{ s.t. }\quad\rho(x,0)=p,\quad\rho(x,1)=q.\]
We will adopt the neural Ordinary Differential Equation (ODE) approach (Chen et al., 2018), where we represent \(T_{t}\) as the solution map of an ODE, which is further parametrized by a continuous-time residual network. The resulting map \(T_{t}\) is invertible, and the inversion can be computed by integrating the neural ODE in reverse time. Our model learns the flow from two sets of finite samples from \(P\) and \(Q\). The velocity field in the neural ODE is optimized to minimize the transport cost so as to approximate the optimal velocity in the dynamic optimal transport (OT) formulation, i.e., the Benamou-Brenier equation.
The neural-ODE model has been intensively developed in Continuous Normalizing Flows (CNF) Kobyzev et al. (2020). In CNF, the continuous-time flow model, usually parametrized by a neural ODE, transports from a data distribution \(P\) (accessible via finite samples) to a terminal analytical distribution which is typically the normal one \(\mathcal{N}(0,I_{d})\), per the name "normalizing". The study of normalizing flow dated back to non-deep models with statistical applications Tabak and Vanden-Eijnden (2010), and deep CNFs have recently developed into a popular tool for generative models and likelihood inference of high dimensional data. CNF models rely on the analytical expression of the terminal distribution in training. Since our model is also a flow model that transports from data distribution \(P\) to a general (unknown) data distribution \(Q\), both accessible via empirical samples, we name our model "Q-malizing" flow which is inspired by the CNF literature.
After developing the general approach of the OT Q-malizing flow model, in the second half of the paper, we focus on the application to telescopic DRE. After training a Q-malizing flow model (the "flow net"), we leverage the intermediate densities \(\rho(x,t)\), which are accessible via finite samples obtained by pushing the \(P\)-samples through \(T_{t}\), to train an additional continuous-time classification network (the "ratio net") over time \(t\in[0,1]\). The ratio net estimates the infinitesimal change of the log-density \(\log\rho(x,t)\) over time, and its time-integral from 0 to 1 yields an estimate of the log-ratio \(\log(q/p)\). The efficiency of the proposed OT Q-malizing flow net and the infinitesimal DRE net is experimentally demonstrated on high-dimensional simulated and image data.
In summary, the contributions of the work include:
* We develop a flow-based model _Q-flow_ net to learn a continuous invertible transport map between an arbitrary pair of distributions \(P\) and \(Q\) in \(\mathbb{R}^{d}\) from two sets of samples of the distributions. We propose to train a neural ODE model to minimize the transport cost such that the flow approximates the optimal transport in dynamic OT. The end-to-end training of the model refines an initial flow that may not attain the optimal transport, e.g., one obtained by training two CNFs or other interpolating schemes.
* Leveraging a trained Q-flow net, we propose to train a separate continuous-time network _Q-flow-ratio_ net to perform infinitesimal DRE from \(P\) to \(Q\) given finite samples. The Q-flow-ratio net is trained by minimizing classification loss on neighboring time stamps of a discrete-time grid, which is computationally more efficient than prior models.
* We show the effectiveness of the approach on simulated and real data. The optimal transport flow obtained by Q-flow net leads to improved DRE in high dimension, demonstrated by state-of-the-art performance on high-dimensional mutual information estimation and energy-based generative models of image data.
### Related works
Normalizing flows.When the target distribution \(Q\) is an isotropic Gaussian \(\mathcal{N}(0,I_{d})\), normalizing flow models have demonstrated vast empirical successes in building an invertible transport \(T_{t}\) between \(P\) and \(\mathcal{N}(0,I_{d})\)(Kobyzev et al., 2020). The transport is parametrized by deep neural networks, whose parameters are trained via minimizing the KL-divergence between transported distribution \((T_{1})_{\#}P\) and \(\mathcal{N}(0,I_{d})\). Various continuous (Grathwohl et al., 2019; Finlay et al., 2020) and discrete (Dinh et al., 2016; Behrmann et al., 2019) normalizing flow models have been developed, along with proposed regularization techniques (Onken et al., 2021; Xu et al., 2022a,b) that facilitate the training of such models in practice.
Since our \(Q\)-malizing flow is in essence a transport-regularized flow between \(P\) and \(Q\), we further review related works on building normalizing flow models with transport regularization. (Finlay et al., 2020) trained the flow trajectory with regularization based on the \(\ell_{2}\) transport cost and the Jacobian norm of the network-parametrized velocity field. (Onken et al., 2021) proposed to regularize the flow trajectory by the \(\ell_{2}\) transport cost and the deviation from the HJB equation. These regularizations have been shown to improve over unregularized models at a reduced computational cost. Regularized normalizing flow models have also been used to solve high-dimensional Fokker-Planck equations (Liu et al., 2022) and mean-field games (Huang et al., 2023).
Transport between general distributions.There has been a rich literature on establishing transport between general distributions, especially in the context of optimal transport (OT). The problem of OT dates back to the work by Gaspard Monge (Monge, 1781), and since then many mathematical theories and computational tools have been developed to tackle the question (Villani et al., 2009; Benamou and Brenier, 2000; Peyre et al., 2019). Several works have attempted to make computational OT scalable to high dimensions, including (Lavenant et al., 2018) which applied convex optimization using Riemannian structure of the space of discrete probability distributions, and (Lee et al., 2021) by \(L^{1}\) and \(L^{2}\) versions of the generalized unnormalized OT solved by Nesterov acceleration. Several deep approaches have also been developed recently. (Coeurdoux et al., 2023) leveraged normalizing flow to learn an approximate transport map between two distributions from finite samples, where the flow model has a restricted architecture and the OT constraint is replaced with sliced-Wasserstein distance which may not computationally scale to high dimensional data. (Albergo and Vanden-Eijnden, 2023) proposed to use a stochastic interpolant map between two arbitrary distributions and train a neural network parametrized velocity field to transport the distribution along the interpolated trajectory. Same as in (Lipman et al., 2023; Albergo and Vanden-Eijnden, 2023), our neural-ODE based approach also computes a deterministic probability transport map, in contrast to SDE-based diffusion models (Song et al., 2021). Notably, the interpolant mapping used in (Albergo and Vanden-Eijnden, 2023) was also adopted earlier in Rhodes et al. (2020); Choi et al. (2022) (see Eq. (16) in Appendix B), which is generally not the optimal transport interpolation. In comparison, our proposed Q-malizing flow
optimizes the interpolant mapping parametrized by a neural ODE and approximates the optimal velocity in dynamic OT (see Section 2). Generally, the flow attaining optimal transport can lead to improved model efficiency and generalization performance Huang et al. (2023). In this work, we experimentally show that the optimal transport flow benefits high-dimensional DRE.
Density ratio estimation. Density ratio estimation between distributions \(P\) and \(Q\) is a fundamental problem in statistics and machine learning (Meng and Wong, 1996; Sugiyama et al., 2012). It has direct applications in important fields such as importance sampling (Neal, 2001), change-point detection (Kawahara and Sugiyama, 2009), outlier detection (Kato and Teshima, 2021), mutual information estimation (Belghazi et al., 2018), etc. Various techniques have been developed, including probabilistic classification (Qin, 1998; Bickel et al., 2009), moment matching (Gretton et al., 2009), density matching (Sugiyama et al., 2008), etc. Deep NN models have been leveraged in the classification approach (Moustakides and Basioti, 2019) due to their expressive power. However, as has been pointed out in (Rhodes et al., 2020), the estimation accuracy of a single classifier may degrade when \(P\) and \(Q\) differ significantly.
To overcome this issue, (Rhodes et al., 2020) introduced a telescopic DRE approach by constructing intermediate distributions to bridge between \(P\) and \(Q\). (Choi et al., 2022) further proposed to train an infinitesimal, continuous-time ratio net via the so-called time score matching. Despite their improvement over the prior classification methods, both approaches rely on construction of the intermediate distributions between \(P\) and \(Q\) that is not optimal. In contrast, our proposed Q-malizing flow network leverages the expressiveness of deep networks to construct the intermediate distributions by the continuous-time flow transport, and the flow trajectory is regularized to minimize the transport cost in dynamic OT. The model empirically improves the DRE accuracy (see Section 5). In computation, (Choi et al., 2022) applies score matching to compute the infinitesimal change of log-density. The proposed Q-flow-ratio net is based on classification loss training using a fixed time grid which avoids score matching and is computationally lighter (Section 4.2).
## 2 Preliminaries
Neural ODE and CNF. Neural ODE (Chen et al., 2018) parametrizes an ODE in \(\mathbb{R}^{d}\) by a residual network. Specifically, let \(x(t)\) be the solution of
\[\dot{x}(t)=f(x(t),t;\theta),\quad x(0)\sim p. \tag{1}\]
where \(f(x,t;\theta)\) is a velocity field parametrized by the neural network. Since we impose a distribution \(P\) on the initial value \(x(0)\), the value of \(x(t)\) at any \(t\) also follows a distribution \(p(x,t)\) (though \(x(t)\) is deterministic given \(x(0)\)). In other words, \(p(\cdot,t)=(T_{t})_{\#}p\), where \(T_{t}\) is the solution map of the ODE, namely \(T_{t}(x)=x+\int_{0}^{t}f(x(s),s;\theta)ds\), \(x(0)=x\). In the context of CNF (Kobyzev et al., 2020), the training of the flow network \(f(x,t;\theta)\) is to minimize the KL divergence between the terminal density \(p(x,T)\) at some \(T\) and a target density \(p_{Z}\), which is the normal distribution. The computation of the objective relies on the analytic expression of the normal density and can be carried out on finite samples of \(x(0)\) drawn from \(p\).
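As a concrete illustration of the push-forward \((T_{t})_{\#}p\), the following minimal NumPy sketch integrates the ODE (1) with a fixed-grid four-stage Runge-Kutta method; the velocity field is a hand-picked toy stand-in for the trained network \(f(x,t;\theta)\), chosen so that the pushed-forward density is known analytically.

```python
import numpy as np

def rk4_step(f, x, t, h):
    """One classical four-stage Runge-Kutta step for dx/dt = f(x, t)."""
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(x + h * k3, t + h)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def solution_map(f, x0, t0, t1, n_steps=50):
    """Solution map T_{t0}^{t1} of the ODE; t1 < t0 integrates in reverse time."""
    x, h = x0, (t1 - t0) / n_steps
    for s in range(n_steps):
        x = rk4_step(f, x, t0 + s * h, h)
    return x

# Toy velocity field standing in for f(x, t; theta): v(x, t) = -x contracts
# toward the origin, so P = N(2, 1) is pushed to N(2/e, 1/e^2) at t = 1.
v = lambda x, t: -x

rng = np.random.default_rng(0)
x0 = rng.normal(2.0, 1.0, size=10_000)   # samples from P
x1 = solution_map(v, x0, 0.0, 1.0)       # samples from (T_1)_# P
```

The same `solution_map` called with `t0=1.0, t1=0.0` recovers the initial samples, which is the sense in which the neural-ODE flow is invertible.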
Dynamic OT (Benamou-Brenier).The Benamou-Brenier equation below provides the dynamic formulation of OT Villani et al. (2009), Benamou and Brenier (2000)
\[\begin{split}&\inf_{\rho,v}\mathcal{T}:=\int_{0}^{1}\mathbb{E}_{x(t) \sim\rho(\cdot,t)}\|v(x(t),t)\|^{2}dt\\ & s.t.\quad\partial_{t}\rho+\nabla\cdot(\rho v)=0,\quad\rho(x,0) =p(x),\quad\rho(x,1)=q(x),\end{split} \tag{2}\]
where \(v(x,t)\) is a velocity field and \(\rho(x,t)\) is the probability mass at time \(t\) satisfying the continuity equation with \(v\). The action \(\mathcal{T}\) is the transport cost. Under regularity conditions of \(p\), \(q\), the minimum \(\mathcal{T}\) in (2) equals the squared Wasserstein-2 distance between \(p\) and \(q\), and the minimizer \(v(x,t)\) can be interpreted as the optimal control of the transport problem.
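For two 1-D Gaussians the minimizer of (2) is known in closed form, which gives a quick numerical sanity check of the identity \(\min\mathcal{T}=W_{2}(p,q)^{2}\). The NumPy sketch below (with illustrative parameters of our own choosing) moves each particle at the constant velocity \(T(x)-x\) along the displacement interpolation and evaluates the action by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
m0, s0, m1, s1 = 0.0, 1.0, 3.0, 2.0   # p = N(m0, s0^2), q = N(m1, s1^2)

# Monge map between two 1-D Gaussians (known in closed form).
T = lambda x: m1 + (s1 / s0) * (x - m0)

# The optimizer of (2) moves each particle at constant velocity T(x) - x along
# the displacement interpolation x(t) = (1 - t) x + t T(x), so the action is
# simply E ||T(x) - x||^2, which we evaluate by Monte Carlo.
x = rng.normal(m0, s0, size=100_000)
action = float(np.mean((T(x) - x) ** 2))

w2_squared = (m1 - m0) ** 2 + (s1 - s0) ** 2   # analytic W2^2 for 1-D Gaussians
```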
Telescopic and infinitesimal DRE. To circumvent the difficulty of DRE between distinctly different \(p\) and \(q\), the _telescopic DRE_ (Rhodes et al., 2020) proposes to "bridge" the two densities by a sequence of intermediate densities \(p_{k}\), \(k=0,\cdots,L\), where \(p_{0}=p\) and \(p_{L}=q\). The consecutive pairs \((p_{k},p_{k+1})\) are chosen to be close so that the DRE can be computed more accurately, and then by
\[\log(q(x)/p(x))=\log p_{L}(x)-\log p_{0}(x)=\sum_{k=0}^{L-1}\log p_{k+1}(x)- \log p_{k}(x), \tag{3}\]
the log-density ratio between \(q\) and \(p\) can be computed with improved accuracy compared to a one-step DRE. The _infinitesimal DRE_ (Choi et al., 2022) considers a time-continuous version of (3). Specifically, suppose the time-parametrized density \(p(x,t)\) is differentiable in \(t\in[0,1]\) with \(p(x,0)=p\) and \(p(x,1)=q\), then
\[\log(q(x)/p(x))=\log p(x,1)-\log p(x,0)=\int_{0}^{1}\partial_{t}\log p(x,t)dt. \tag{4}\]
The quantity \(\partial_{t}\log p(x,t)\) was called the "time score" and can be parametrized by a neural network.
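Both identities are easy to verify numerically on an analytic bridge. In the sketch below (our own toy choice, not the paper's bridge) the path is \(p(x,t)=\mathcal{N}(t\mu,1)\), so the time score has a closed form, and both the telescoping sum (3) and the time-score integral (4) recover \(\log(q(x)/p(x))\).

```python
import numpy as np

mu = 3.0
log_p = lambda x, t: -0.5 * (x - t * mu) ** 2   # log N(t*mu, 1), constant dropped
time_score = lambda x, t: (x - t * mu) * mu     # d/dt log p(x, t)

x = 1.2
exact = log_p(x, 1.0) - log_p(x, 0.0)           # log(q(x)/p(x))

# Telescopic DRE (3): sum of L small log-ratios along the bridge.
L = 10
ts = np.linspace(0.0, 1.0, L + 1)
telescopic = sum(log_p(x, ts[k + 1]) - log_p(x, ts[k]) for k in range(L))

# Infinitesimal DRE (4): trapezoid-rule integral of the time score.
tt = np.linspace(0.0, 1.0, 2001)
vals = time_score(x, tt)
infinitesimal = float(np.sum(0.5 * (vals[:-1] + vals[1:]) * np.diff(tt)))
```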
## 3 OT Q-malizing flow network
We introduce the formulation and training objective of the proposed OT Q-malizing flow net in Section 3.1. The training technique consists of the end-to-end training (Section 3.2) and the construction of the initial flow (Section 3.3).
### Formulation and training objective
Given two sets of samples \(\mathbf{X}=\{X_{i}\}_{i=1}^{N}\) and \(\mathbf{\tilde{X}}=\{\tilde{X}_{j}\}_{j=1}^{M}\), where \(X_{i}\sim P\) and \(\tilde{X}_{j}\sim Q\) i.i.d., we train a neural ODE model \(f(x,t;\theta)\) as in (1) to represent the transport map \(T_{t}\). The formulation is symmetric in the two directions, and the loss accordingly has two symmetric parts. We call \(P\to Q\) the forward direction and \(Q\to P\) the reverse direction. Our training objective is based on the dynamic OT (2) on time \([0,1]\), where we represent the velocity field \(v(x,t)\) by \(f(x,t;\theta)\).
The terminal condition \(\rho(\cdot,1)=q\) is relaxed by a KL divergence (see, e.g., [Ruthotto et al., 2020]). The training loss in the forward direction is written as
\[\mathcal{L}^{P\to Q}=\mathcal{L}^{P\to Q}_{\mathrm{KL}}+\gamma\mathcal{L}^{P \to Q}_{T}, \tag{5}\]
where \(\mathcal{L}_{\mathrm{KL}}\) represents the relaxed terminal condition and \(\mathcal{L}_{T}\) is the Wasserstein-2 transport cost to be specified below; \(\gamma>0\) is a weight parameter, and with small \(\gamma\) the terminal condition is enforced.
KL loss.We define the solution mapping of (1) from \(s\) to \(t\) as
\[T^{t}_{s}(x;\theta)=x+\int_{s}^{t}f(x(t^{\prime}),t^{\prime};\theta)dt^{\prime},\quad x(s)=x, \tag{6}\]
which is also parametrized by \(\theta\), and we may omit the dependence below. By the continuity equation in (2), \(\rho(\cdot,t)=(T^{t}_{0})_{\#}p\). The terminal condition \(\rho(\cdot,1)=q\) is relaxed by minimizing
\[\mathrm{KL}(p_{1}||q)=\mathbb{E}_{x\sim p_{1}}\log(p_{1}(x)/q(x)),\quad p_{1} :=(T^{1}_{0})_{\#}p.\]
The expectation \(\mathbb{E}_{x\sim p_{1}}\) is computed by the sample average over \((X_{1})_{i}\), which follow density \(p_{1}\) i.i.d., where \((X_{1})_{i}:=T^{1}_{0}(X_{i})\) is computed by integrating the neural ODE from time \(0\) to \(1\). It remains to have an estimator of \(\log(p_{1}/q)\) to compute \(\mathrm{KL}(p_{1}||q)\), and we propose to train a logistic classification network \(r_{1}(x;\varphi_{r})\) for this purpose. The inner-loop training of \(r_{1}\) is by
\[\min_{\varphi_{r}}\frac{1}{N}\sum_{i=1}^{N}\log(1+e^{r_{1}(T^{1}_{0}(X_{i}; \theta);\varphi_{r})})+\frac{1}{M}\sum_{j=1}^{M}\log(1+e^{-r_{1}(\widehat{X}_{ j};\varphi_{r})}). \tag{7}\]
The functional optimum \(r_{1}^{*}\) of the population version of loss (7) equals \(\log(q/p_{1})\) by direct computation, and as a result, \(\mathrm{KL}(p_{1}||q)=-\mathbb{E}_{x\sim p_{1}}r_{1}^{*}(x)\). The finite-sample KL loss is then written as
\[\mathcal{L}^{P\to Q}_{\mathrm{KL}}(\theta)=-\frac{1}{N}\sum_{i=1}^{N}r_{1}(T^{ 1}_{0}(X_{i};\theta);\hat{\varphi}_{r}), \tag{8}\]
where \(\hat{\varphi}_{r}\) is the computed minimizer of (7) solved by inner loops. In practice, when the density \(p_{1}\) is close to \(q\), the DRE by training the classification net \(r_{1}\) can be efficient and accurate. We will apply the minimization (7) after the flow net is properly initialized, which guarantees the closeness of \(p_{1}=(T^{1}_{0})_{\#}p\) and \(q\) to begin with.
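The claim that the population optimum of the logistic loss (7) is the log density ratio can be checked on a case where the ratio is known. In the NumPy sketch below (our own toy setup) \(p_{1}=\mathcal{N}(0,1)\) and \(q=\mathcal{N}(1,1)\), so \(\log(q/p_{1})(x)=x-1/2\) is linear; a linear classifier trained by gradient descent recovers it, and plugging it into (8) recovers \(\mathrm{KL}(p_{1}||q)=1/2\).

```python
import numpy as np

rng = np.random.default_rng(0)
xp = rng.normal(0.0, 1.0, 20_000)   # samples from p_1
xq = rng.normal(1.0, 1.0, 20_000)   # samples from q; log(q/p_1)(x) = x - 0.5

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
a, b = 0.0, 0.0                      # linear ratio net r_1(x) = a*x + b
for _ in range(2000):
    sp = sigmoid(a * xp + b)         # sigma(r_1) on p_1-samples
    sq = sigmoid(-(a * xq + b))      # sigma(-r_1) on q-samples
    a -= 0.3 * (np.mean(xp * sp) - np.mean(xq * sq))   # grad of loss (7) in a
    b -= 0.3 * (np.mean(sp) - np.mean(sq))             # grad of loss (7) in b

kl_est = -float(np.mean(a * xp + b))  # KL(p_1 || q) estimate, as in (8)
```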
\(W_{2}\) regularization.To compute the transport cost \(\mathcal{T}\) in (2) with velocity field \(f(x,t;\theta)\), we use a time grid on \([0,1]\) as \(0=t_{0}<t_{1}<\ldots<t_{K}=1\). The choice of the time grid is algorithmic (since the flow model is parametrized by \(\theta\) throughout time) and may vary over experiments, see more details in Section 3.2. Define \(h_{k}=t_{k}-t_{k-1}\), and \(X_{i}(t;\theta):=T^{t}_{0}(X_{i};\theta)\), the \(W_{2}\) regularization is written as
\[\mathcal{L}^{P\to Q}_{T}(\theta)=\sum_{k=1}^{K}\frac{1}{h_{k}}\left(\frac{1}{ N}\sum_{i=1}^{N}\|X_{i}(t_{k};\theta)-X_{i}(t_{k-1};\theta)\|^{2}\right). \tag{9}\]
It can be viewed as a time discretization of \(\mathcal{T}\). Meanwhile, since (omitting dependence on \(\theta\)) \(X_{i}(t_{k})-X_{i}(t_{k-1})=T_{t_{k-1}}^{t_{k}}(X_{i}(t_{k-1}))-X_{i}(t_{k-1})\), the population form of (9), \(\sum_{k=1}^{K}\mathbb{E}_{x\sim\rho(\cdot,t_{k-1})}\|T_{t_{k-1}}^{t_{k}}(x;\theta)-x\|^{2}/h_{k}\), when minimized can be interpreted as the discrete-time summed (squared) Wasserstein-2 distance (Xu et al., 2022a)
\[\sum_{k=1}^{K}W_{2}(\rho(\cdot,t_{k-1}),\rho(\cdot,t_{k}))^{2}/h_{k}.\]
The \(W_{2}\) regularization encourages a smooth flow from \(P\) to \(Q\) with small transport cost, which also guarantees the invertibility of the model in practice when the trained neural network flow approximates the optimal flow in (2).
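The discretized cost (9) is minimized by constant-speed straight-line motion: for such a trajectory it equals the mean squared total displacement \(\|X(1)-X(0)\|^{2}\) regardless of the grid, while any detour strictly increases it. A small NumPy check (with a made-up sinusoidal detour of our own):

```python
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.normal(size=(1000, 2))              # samples at t = 0
X1 = X0 + np.array([3.0, 0.0])               # samples at t = 1

def transport_cost(traj, ts):
    """Discretized W2 action (9), averaged over samples."""
    cost = 0.0
    for k in range(1, len(ts)):
        h = ts[k] - ts[k - 1]
        cost += np.mean(np.sum((traj[k] - traj[k - 1]) ** 2, axis=1)) / h
    return cost

ts = np.linspace(0.0, 1.0, 11)
# Constant-speed straight line: cost equals ||X1 - X0||^2 = 9 on any grid.
straight = [X0 + t * (X1 - X0) for t in ts]
# Curved trajectory that reaches the same endpoints: strictly larger cost.
detour = [X0 + t * (X1 - X0) + np.sin(np.pi * t) * np.array([0.0, 1.0]) for t in ts]

cost_straight = transport_cost(straight, ts)
cost_detour = transport_cost(detour, ts)
```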
Both directions. The formulation in the reverse direction is similar, where we transport \(Q\)-samples \(\tilde{X}_{j}\) from \(1\) to \(0\) using the same neural ODE integrated in reverse time. Specifically, \(\mathcal{L}^{Q\to P}=\mathcal{L}^{Q\to P}_{\mathrm{KL}}+\gamma\mathcal{L}^{Q\to P}_{T}\), and \(\mathcal{L}^{Q\to P}_{\mathrm{KL}}(\theta)=-\frac{1}{M}\sum_{j=1}^{M}\tilde{r}_{0}(T_{1}^{0}(\tilde{X}_{j};\theta);\hat{\varphi}_{\tilde{r}})\), where \(\hat{\varphi}_{\tilde{r}}\) is obtained by inner-loop training of another classification net \(\tilde{r}_{0}(x;\varphi_{\tilde{r}})\) via
\[\min_{\varphi_{\tilde{r}}}\frac{1}{M}\sum_{j=1}^{M}\log(1+e^{\tilde{r}_{0}(T_ {1}^{0}(\tilde{X}_{j};\theta);\varphi_{\tilde{r}})})+\frac{1}{N}\sum_{i=1}^{N} \log(1+e^{-\tilde{r}_{0}(X_{i};\varphi_{\tilde{r}})}); \tag{10}\]
Define \(\tilde{X}_{j}(t;\theta):=T_{1}^{t}(\tilde{X}_{j};\theta)\), the reverse-time \(W_{2}\) regularization is
\[\mathcal{L}^{Q\to P}_{T}(\theta)=\sum_{k=1}^{K}\frac{1}{h_{k}}\left(\frac{1}{ M}\sum_{j=1}^{M}\|\tilde{X}_{j}(t_{k-1};\theta)-\tilde{X}_{j}(t_{k};\theta)\|^{2} \right).\]
### End-to-end training algorithm
In the end-to-end training, we assume that the Q-flow net has already been initialized as an approximate solution of the desired Q-malizing flow; see more in Section 3.3. We then minimize \(\mathcal{L}^{P\to Q}\) and \(\mathcal{L}^{Q\to P}\) in an alternating fashion per "flip", and the procedure is given in Algorithm 1. Hyperparameter choices and network architectures are further detailed in Appendix A.
Time integration of flow. In the losses (8) and (9), one needs to compute the transported samples \(X_{i}(t;\theta)\) and \(\tilde{X}_{j}(t;\theta)\) on time grid points \(\{t_{k}\}_{k=0}^{K}\). This calls for integrating the neural ODE on \([0,1]\), which we conduct on a fine time grid \(t_{k,s}\), \(s=0,\dots,S\), that divides each subinterval \([t_{k-1},t_{k}]\) into \(S\) mini-intervals. We compute the time integration of \(f(x,t;\theta)\) using a fixed-grid four-stage Runge-Kutta method on each mini-interval. The fine grid is used to ensure the numerical accuracy of the ODE integration and the numerical invertibility of the Q-flow net, i.e., to control the error of using reverse-time integration as the inverse map (see inversion errors in Table A.1). It is also possible to first train the flow \(f(x,t;\theta)\) on a coarse time grid to warm-start the later training on a refined grid, so as to improve convergence. We also find that the \(W_{2}\) regularization can be computed at the coarser grid \(t_{k}\) (\(S\) is usually 3-5 in our experiments) without losing its effectiveness. Finally, one can adopt an adaptive time grid, e.g., by enforcing equal \(W_{2}\) movement on each subinterval \([t_{k-1},t_{k}]\) (Xu et al., 2022b), so that the representative points are more evenly distributed along the flow trajectory and the learning of the flow model can be further improved.
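The numerical invertibility mentioned above can be quantified directly: integrate forward on a grid, integrate back, and measure the round-trip error. The NumPy sketch below (with a hand-picked nonlinear velocity field in place of the trained net, and \(K=5\) subintervals of our own choosing) shows this error shrinking as each subinterval is split into \(S\) mini-intervals.

```python
import numpy as np

def rk4_step(f, x, t, h):
    """One classical four-stage Runge-Kutta step."""
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(x + h * k3, t + h)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, x, t0, t1, K, S):
    """Fixed-grid RK4 over K*S mini-intervals between t0 and t1."""
    h = (t1 - t0) / (K * S)
    for s in range(K * S):
        x = rk4_step(f, x, t0 + s * h, h)
    return x

v = lambda x, t: np.sin(x) + 0.5 * t            # nonlinear toy velocity field

x0 = np.linspace(-2.0, 2.0, 101)
def roundtrip_error(S):
    x1 = integrate(v, x0, 0.0, 1.0, K=5, S=S)   # forward pass
    xr = integrate(v, x1, 1.0, 0.0, K=5, S=S)   # reverse-time "inverse"
    return float(np.max(np.abs(xr - x0)))

err_coarse, err_fine = roundtrip_error(1), roundtrip_error(5)
```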
```
0: Pre-trained initial flow network \(f(x(t),t;\theta)\); training data \(\mathbf{X}\sim P\) and \(\widetilde{\mathbf{X}}\sim Q\); hyperparameters: \(\{\gamma,\{t_{k}\}_{k=1}^{K},\texttt{Tot},E,E_{0},E_{\text{in}}\}\).
0: Refined flow network \(f(x(t),t;\theta)\)
1:for flip = \(1,\ldots,\texttt{Tot}\)do
2: (If flip = 1) Train \(r_{1}\) by minimizing (7) for \(E_{0}\) epochs.
3:for epoch = \(1,\ldots,E\)do {\(\triangleright\)\(P\to Q\) refinement}
4: Update \(\theta\) of \(f(x(t),t;\theta)\) by minimizing \(\mathcal{L}^{P\to Q}\).
5: Update \(r_{1}\) by minimizing (7) for \(E_{\text{in}}\) epochs.
6:endfor
7: (If flip = 1) Train \(\widetilde{r}_{0}\) by minimizing (10) for \(E_{0}\) epochs.
8:for epoch = \(1,\ldots,E\)do {\(\triangleright\)\(Q\to P\) refinement}
9: Update \(\theta\) of \(f(x(t),t;\theta)\) by minimizing \(\mathcal{L}^{Q\to P}\).
10: Update \(\widetilde{r}_{0}\) by minimizing (10) for \(E_{\text{in}}\) epochs.
11:endfor
12:endfor
```
**Algorithm 1** OT \(Q\)-malizing flow refinement
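To make the alternating structure concrete, here is a deliberately tiny, runnable caricature of the forward half of Algorithm 1 (our own construction, not the paper's neural implementation): the "flow" is a single learnable shift \(T(x)=x+\theta\), the classifier \(r_{1}\) is linear, and \(P=\mathcal{N}(0,1)\), \(Q=\mathcal{N}(2,1)\), so the exact transport is \(\theta=2\); the \(W_{2}\) term is dropped since a pure shift is already the optimal transport here.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, 10_000)    # samples from P
Xq = rng.normal(2.0, 1.0, 10_000)   # samples from Q
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

theta, a, c = 0.0, 0.0, 0.0         # flow shift and linear classifier r1(x) = a*x + c
for flip in range(40):
    # Inner loop: refit r_1 by the logistic classification loss (7)
    # between the pushed-forward samples (T_0^1)_# P and the Q-samples.
    for _ in range(100):
        xp = X + theta
        sp = sigmoid(a * xp + c)
        sq = sigmoid(-(a * Xq + c))
        a -= 0.3 * (np.mean(xp * sp) - np.mean(Xq * sq))
        c -= 0.3 * (np.mean(sp) - np.mean(sq))
    # Flow update: descend the KL surrogate (8), L = -mean r1(T(X)),
    # whose gradient in theta for a linear classifier is simply -a.
    theta += 0.3 * a
```

Here the classifier estimate of the slope tracks \(2-\theta\), so the outer updates contract \(\theta\) toward 2, mirroring how the full algorithm alternates between refitting \(r_{1}\) and updating the flow.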
Inner-loop training of \(r_{1}\) and \(\widetilde{r}_{0}\). Suppose the flow net has been successfully warm-started; then the transported distributions \((T_{0}^{1})_{\#}P\approx Q\) and \((T_{1}^{0})_{\#}Q\approx P\). The two classification nets are first trained for \(E_{0}\) epochs before the loops of training the flow model and then updated for \(E_{\text{in}}\) inner-loop epochs in each outer-loop iteration. We empirically find that the diligent updates of \(r_{1}\) and \(\widetilde{r}_{0}\) in lines 5 and 10 of Algorithm 1 are crucial for successful end-to-end training of the Q-flow net. As we update the flow model \(f(x,t;\theta)\), the push-forwarded distributions \((T_{0}^{1})_{\#}P\) and \((T_{1}^{0})_{\#}Q\) change accordingly, and one then needs to retrain \(r_{1}\) and \(\widetilde{r}_{0}\) promptly to ensure an accurate estimate of the log-density ratio and consequently of the KL loss. Compared with training the flow parameter \(\theta\), the computational cost of the two classification nets is light, which allows potentially a large number of inner-loop iterations if needed.
Computational complexity.We measure the computational complexity by the number of function evaluations of \(f(x(t),t;\theta)\) and of the classification nets \(\{r_{1},\widetilde{r}_{0}\}\). Suppose the total number of epochs in outer loop training is \(O(E)\), the dominating computational cost lies in the neural ODE integration, which takes \(O(8KS\cdot E(M+N))\) function evaluations of \(f(x,t;\theta)\). We remark that the Wasserstein-2 regularization (9) incurs no extra computation, since the samples \(X_{i}(t_{k};\theta)\) and \(\tilde{X}_{j}(t_{k};\theta)\) are available when computing the forward and reverse time integration of \(f(x,t;\theta)\). The training of the two classification nets \(r_{1}\) and \(\widetilde{r}_{0}\) takes \(O(4(E_{0}+EE_{\text{in}})(M+N))\) additional evaluations of the two network functions since the samples \(X_{i}(1;\theta)\) and \(\tilde{X}_{j}(0;\theta)\) are already computed.
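The counts above can be packaged as a small helper (a sketch mirroring the stated big-O expressions; the constant 8 reflects four RK4 evaluations per mini-interval in each of the forward and reverse passes, and the 4 follows the text's count for the classifier updates):

```python
def flow_evals(K, S, E, M, N):
    """Velocity-field evaluations: 4 RK4 evals per mini-interval, K*S
    mini-intervals, forward and reverse passes, E epochs over M + N samples."""
    return 8 * K * S * E * (M + N)

def classifier_evals(E0, E, E_in, M, N):
    """Evaluations of the classification nets r_1 and r~_0, per the O(4(E0 + E*E_in)(M+N)) count."""
    return 4 * (E0 + E * E_in) * (M + N)
```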
### Flow initialization via CNF
We propose to initialize the Q-flow net with a good approximation of a Q-malizing flow, which matches the transported distributions with the target distributions in both directions but does not necessarily minimize the transport cost. Such an initialization significantly accelerates the convergence of the end-to-end training, which can be viewed as a refinement of the initial flow.
The initial flow \(f(x,t;\theta)\) may be specified using prior knowledge of the problem if available. Generally, when only two data sets \(\mathbf{X},\mathbf{\tilde{X}}\) are given, any reasonable interpolating scheme may provide such an initial flow. For example, one can use the linear interpolant mapping in (Rhodes et al., 2020; Choi et al., 2022; Albergo and Vanden-Eijnden, 2023) (see Appendix B), and train the neural network velocity field \(f(x,t;\theta)\) to match the interpolation (Albergo and Vanden-Eijnden, 2023). Here we propose an alternative approach that constructs the initial flow as a concatenation of two CNF models (Figure 1).
Specifically, as explained in Section 2, a CNF model transports from a data distribution (accessible by finite samples) to the normal distribution \(\mathcal{N}(0,I_{d})\). We first train two separate CNFs represented by flow networks \(f(x(t),t;\theta_{P})\) and \(f(x(t),t;\theta_{Q})\), which transport \(P\) to \(\mathcal{N}(0,I_{d})\) on time \([0,1/2]\), and \(Q\) to \(\mathcal{N}(0,I_{d})\) reversely on time \([1/2,1]\), respectively. We then concatenate the two flows into a single flow model on time \([0,1]\) that transports from \(P\) to \(Q\) (through an intermediate distribution which is normal). The network parameter \(\theta=\{\theta_{P},\theta_{Q}\}\) contains the parameters of both CNFs, and in terms of computational cost, flow initialization amounts to training the two CNF models. Any existing neural-ODE CNF model may be adopted for our purpose. In this work, we adopt the JKO-iFlow approach of (Xu et al., 2022b).
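In the simplest possible case the concatenation can be written out exactly: if both \(P\) and \(Q\) are 1-D Gaussians, the two "CNFs" reduce to affine whitening maps, and composing the first with the inverse of the second already transports \(P\) onto \(Q\). The NumPy sketch below uses these closed-form stand-ins (real CNFs are neural ODEs) to check the construction.

```python
import numpy as np

rng = np.random.default_rng(0)
mp_, sp_, mq_, sq_ = -1.0, 0.5, 4.0, 2.0   # P = N(mp_, sp_^2), Q = N(mq_, sq_^2)

# Stand-ins for the two trained CNFs (exact affine whitening maps here):
# flow_P sends P to N(0, 1) on [0, 1/2]; flow_Q sends Q to N(0, 1) on [1/2, 1].
flow_P = lambda x: (x - mp_) / sp_
flow_Q_inv = lambda z: mq_ + sq_ * z

# Concatenated initial flow: run flow_P forward, then flow_Q in reverse time.
T_init = lambda x: flow_Q_inv(flow_P(x))

xP = rng.normal(mp_, sp_, 100_000)          # samples from P
xQ_hat = T_init(xP)                          # should be distributed as Q
```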
## 4 Infinitesimal density ratio estimation
We use a trained Q-malizing flow network \(f(x,t;\theta)\) for infinitesimal DRE as a focused application.
Figure 1: Illustration of building the OT \(Q\)-malizing flow (blue) from the CNF initial flow (black). The flow is initialized by concatenating two CNF models (black arrows). The refined flow (blue two-sided arrow) adjusts the trajectory between \(P\) and \(Q\) to approximate the dynamic OT.
### Training by logistic classification
Let \(p(x,t)=(T_{t})_{\#}p\), where \(T_{t}\) is the transport induced by the trained Q-flow net in Section 3. Using (4), we propose to parametrize the time score \(\partial_{t}\log p(x,t)\) by a neural network \(r(x,t;\theta_{r})\) with parameter \(\theta_{r}\), called the _Q-flow-ratio_ net. The training is by logistic classification applied to the transported data distributions on consecutive time grid points: given a deterministic time grid \(0=t_{0}<t_{1}<\ldots<t_{L}=1\) (which again is an algorithmic choice, see Section 4.2), we expect that the integral
\[R_{k}(x;\theta_{r}):=\int_{t_{k-1}}^{t_{k}}r(x,t;\theta_{r})dt\approx\int_{t_{ k-1}}^{t_{k}}\partial_{t}\log p(x,t)dt=\log(p(x,t_{k})/p(x,t_{k-1})). \tag{11}\]
Since logistic classification recovers the log density ratio (as used in Section 3.1), this suggests the following loss on the interval \([t_{k-1},t_{k}]\), where \(X_{i}(t):=T_{0}^{t}(X_{i})\) and \(T_{0}^{t}\) is computed by integrating the trained Q-flow net,
\[L_{k}^{P\to Q}(\theta_{r})=\frac{1}{N}\sum_{i=1}^{N}\log(1+e^{R_{k}(X_{i}(t_{ k-1});\theta_{r})})+\frac{1}{N}\sum_{i=1}^{N}\log(1+e^{-R_{k}(X_{i}(t_{k}); \theta_{r})}). \tag{12}\]
When \(k=L\), the distribution of \(X_{i}(t_{L})\) may slightly differ from that of \(Q\) due to the error in matching the terminal densities in the Q-flow net. Thus replacing the 2nd term in (12) with an empirical average over the \(Q\)-samples \(\tilde{X}_{j}\) may be beneficial. In the reverse direction, defining \(\tilde{X}_{j}(t):=T_{1}^{t}(\tilde{X}_{j})\), we similarly have \(L_{k}^{Q\to P}(\theta_{r})=\frac{1}{M}\sum_{j=1}^{M}\log(1+e^{R_{k}(\tilde{X}_{j}(t_{k-1});\theta_{r})})+\frac{1}{M}\sum_{j=1}^{M}\log(1+e^{-R_{k}(\tilde{X}_{j}(t_{k});\theta_{r})})\), and when \(k=1\), we replace the 1st term with an empirical average over the \(P\)-samples \(X_{i}\). The training of the Q-flow-ratio net is by
\[\min_{\theta_{r}}\sum_{k=1}^{L}L_{k}^{P\to Q}(\theta_{r})+L_{k}^{Q \to P}(\theta_{r}). \tag{13}\]
When trained successfully, the integral of \(r(x,t;\theta_{r})\) over \(t\in[0,1]\) yields the desired log density ratio \(\log(q/p)\) by (4), and further, the integral \(\int_{s}^{t}r(x,t^{\prime};\theta_{r})dt^{\prime}\) provides an estimate of \(\log(p(x,t)/p(x,s))\) for any \(s<t\) in \([0,1]\).
### Algorithm and computational complexity
The details of minimizing (13) are given in Algorithm 2. We use an evenly spaced time grid \(t_{k}=k/L\) in all experiments. In practice, one can also progressively refine the time grid in training, starting from a coarse grid to train a Q-flow-ratio net \(r(x,t;\theta_{r})\) and using it as a warm start for training the network parameter \(\theta_{r}\) on a refined grid. When the time grid is fixed, the transported samples \(\{X_{i}(t_{k})\}_{i=1}^{N},\{\tilde{X}_{j}(t_{k})\}_{j=1}^{M}\) can be computed on all \(t_{k}\) once before the training loops of the Q-flow-ratio net (lines 1-3). This part takes \(O(8KS(M+N))\) function evaluations of the pre-trained Q-flow net \(f(x,t;\theta)\). Suppose the training loops of lines 4-6 conduct \(E\) epochs in total. Assuming each time integral in \(R_{k}\) (11) is computed by a fixed-grid four-stage Runge-Kutta method, \(O(4LE(M+N))\) function evaluations of \(r(x,t;\theta_{r})\) are needed to compute the overall loss (13).
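A miniature, runnable version of this training scheme (our own 1-D toy, not the paper's implementation): the flow is the analytic shift \(x\mapsto x+3t\) from \(P=\mathcal{N}(0,1)\) to \(Q=\mathcal{N}(3,1)\), and in place of the continuous-time net we fit one linear pair \((a_{k},b_{k})\) per interval, playing the role of \(R_{k}\) in (11), by the logistic loss (12); telescoping the fitted pieces recovers \(\log(q/p)(x)=3x-4.5\).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, 10_000)            # samples from P
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

L = 4
mus = 3.0 * np.arange(L + 1) / L            # bridge means: p(., t_k) = N(mu_k, 1)
coefs = []
for k in range(1, L + 1):
    x0, x1 = X + mus[k - 1], X + mus[k]     # transported samples at t_{k-1}, t_k
    a, b = 0.0, 0.0                          # R_k(x) ~ a*x + b on this interval
    for _ in range(800):
        s0 = sigmoid(a * x0 + b)             # logistic-loss gradients as in (12)
        s1 = sigmoid(-(a * x1 + b))
        a -= 0.3 * (np.mean(x0 * s0) - np.mean(x1 * s1))
        b -= 0.3 * (np.mean(s0) - np.mean(s1))
    coefs.append((a, b))

# Telescoped estimate of log(q/p)(x); exact value is 3*x - 4.5.
log_ratio = lambda x: sum(a * x + b for a, b in coefs)
```

Each per-interval classification problem is between two nearby densities, which is exactly the reason the telescoped estimate is more reliable than a single classifier between \(P\) and \(Q\).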
## 5 Experiments
We examine the proposed Q-flow and Q-flow-ratio nets on several examples. We denote our method as "Ours", and compare against three baselines of DRE in high dimensions. The baseline methods are: 1 ratio (by training a single classification network using samples from \(P\) and \(Q\)), TRE [Rhodes et al., 2020], and DRE-\(\infty\)[Choi et al., 2022]. We denote \(P_{t_{k}}\) with density \(p(\cdot,t_{k})\) as the pushforward distribution of \(P\) by the Q-flow transport over the interval \([0,t_{k}]\). The set of distributions \(\{P_{t_{k}}\}\) for \(k=1,\ldots,L\) builds a bridge between \(P\) and \(Q\). Code can be found at [https://github.com/hamrel-cxu/Q-flow-ratio](https://github.com/hamrel-cxu/Q-flow-ratio).
### 2d Gaussian mixtures
We simulate \(P\) and \(Q\) as two Gaussian mixture models with three and two components, respectively; see additional details in Appendix A.1. We compare the ratio estimates \(\hat{r}(x)\) with the true value \(r(x)\), which can be computed using the analytic expressions of the densities. The results are shown in Figure 2. We see from the top panel that the mean absolute error (MAE) of Ours is evidently smaller than those of the baseline methods, and Ours also incurs a smaller maximum error \(|\hat{r}-r|\) on test samples. This is consistent with the closest resemblance of Ours to the ground truth (first column) in the bottom panel. In comparison, DRE-\(\infty\) tends to over-estimate \(r(x)\) on the support of \(Q\), while TRE and 1 ratio can severely under-estimate \(r(x)\) on the support of \(P\). As both the DRE-\(\infty\) and TRE models use the linear interpolant scheme (16), the result suggests the benefit of training an optimal-transport flow for DRE.
Figure 2: Estimated log density ratio between 2D Gaussian mixture distributions \(P\) (three components) and \(Q\) (two components). **Top:** (a) training samples from \(P\) and \(Q\). (b)-(d) histograms of errors \(\log(|r(x)-\hat{r}(x)|)\) computed at 10K test samples shown in log-scale. The MAE (15) are shown in the captions. **Bottom:** true and estimated \(\log(q/p)\) from different models shown under shared colorbars.
### Non-Gaussian two-dimensional flow
We design two densities in \(\mathbb{R}^{2}\), where \(P\) represents the shape of two moons and \(Q\) represents a checkerboard; see additional details in Appendix A.2. For this more challenging case, the linear interpolation scheme (16) creates a bridge between \(P\) and \(Q\) as shown in Figure A.3. That flow visually differs from the one obtained by the trained Q-flow net, shown in Figure 3(a), as the latter is trained to minimize the transport cost.
The result of Q-flow-ratio net is shown in Figure 3(b). The corresponding density ratio estimates of \(\log p(x,t_{k})-\log p(x,t_{k-1})\) visually reflect the actual differences in the two neighboring densities.
### Mutual Information estimation for high-dimensional data
We evaluate different methods on estimating the mutual information (MI) between two correlated distributions from given samples. In this example, we let \(P\) and \(Q\) be two correlated, high-dimensional Gaussian distributions following the setup in (Rhodes et al., 2020; Choi et al., 2022), where we vary the data dimension \(d\) in the range of \(\{40,80,160,320\}\). Additional details can be found in Appendix A.3. Figure 4 shows the results by different methods, where the baselines are trained under their proposed default settings. We find that the estimated MI by our method almost perfectly aligns with the ground truth MI values, reaching nearly identical performance as DRE-\(\infty\). Meanwhile, Ours outperforms the other two baselines
Figure 4: Estimated MI between two correlated high-dimensional Gaussian distributions.
Figure 3: Flow between arbitrary 2D distributions and corresponding log-ratio estimation. **Top**: the intermediate distributions by Q-flow net. **Bottom**: corresponding log-ratio estimated by Q-flow-ratio net. Bluer color indicates smaller estimates of the difference \(\log(p(x,t_{k})/p(x,t_{k-1}))\) evaluated at the common support of the neighboring densities.
whose estimation errors increase as the dimension \(d\) increases.
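Since the ground-truth MI is available in closed form for this Gaussian setup, a useful sanity check is to compare a Monte-Carlo estimate of \(\mathbb{E}_{P}[\log(p/q)]\) against the analytic value. The sketch below assumes component-wise correlation \(\rho\) between two standard Gaussian blocks, a simplification of the cited setup:

```python
import numpy as np

def mi_gaussian_mc(rho=0.6, n_pairs=20, n_samples=20000, seed=0):
    """Monte-Carlo MI estimate E_P[log p(x,y) - log p(x)p(y)] for
    component-wise correlated standard Gaussians, plus the closed form."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_samples, n_pairs))
    y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=(n_samples, n_pairs))
    # log joint minus log product of marginals, per pair (constants cancel)
    log_joint = (-0.5 * (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
                 - 0.5 * np.log(1 - rho**2))
    log_marg = -0.5 * (x**2 + y**2)
    mi_hat = float(np.mean(np.sum(log_joint - log_marg, axis=1)))
    mi_true = -0.5 * n_pairs * np.log(1 - rho**2)
    return mi_hat, mi_true
```

In the DRE setting, `log_joint - log_marg` is exactly the quantity the density-ratio estimator must recover, so averaging the estimated log-ratio over samples from \(P\) yields the MI estimate plotted in Figure 4.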
### Energy-based modeling of MNIST
We apply our approach to evaluating and improving an energy-based model (EBM) on the MNIST dataset [LeCun and Cortes, 2005]. We follow the prior setup in [Rhodes et al., 2020, Choi et al., 2022], where \(P\) is the empirical distribution of MNIST images, and \(Q\) is the image distribution generated by one of three pre-trained generative models: a Gaussian noise model, a Gaussian copula model, and a Rational Quadratic Neural Spline Flow model (RQ-NSF) [Durkan et al., 2019]. Specifically, the images are in dimension \(d=28^{2}=784\), and each of the pre-trained models provides an invertible mapping \(F:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\), where \(Q=F_{\#}\mathcal{N}(0,I_{d})\). We train a Q-flow net between \((F^{-1})_{\#}P\) and \((F^{-1})_{\#}Q\), the latter of which by construction equals \(\mathcal{N}(0,I_{d})\). Using the trained Q-flow net, we go back to the input space and train the Q-flow-ratio net using the intermediate distributions between \(P\) and \(Q\). Additional details are in Appendix A.4.
The trained Q-flow-ratio \(r(x,s;\hat{\theta}_{r})\) provides an estimate of the data density \(p(x)\) by \(\hat{p}(x)\) defined as \(\log\hat{p}(x)=\log q(x)-\int_{0}^{1}r(x,s;\hat{\theta}_{r})ds\), where \(\log q(x)\) is given by the change-of-variable formula using the pre-trained model \(F\) and the analytic expression of \(\mathcal{N}(0,I_{d})\). As a by-product, since our Q-flow net provides an invertible mapping \(T_{0}^{1}\), we can use it to obtain an improved generative model on top of \(F\). Specifically, the improved distribution \(\tilde{Q}:=(F\circ T_{1}^{0})_{\#}\mathcal{N}(0,I_{d})\), that is, we first use Q-flow to transport \(\mathcal{N}(0,I_{d})\) and then apply \(F\). The performance of the improved generative model can be measured using the "bits per dimension" (BPD) metric:
\[\text{BPD}=\frac{1}{N^{\prime}}\sum_{i=1}^{N^{\prime}}[-\log\hat{p}(X_{i})/(d \log 2)], \tag{14}\]
where \(X_{i}\) are \(N^{\prime}\)=10K test images drawn from \(P\). BPD has been a widely used metric in evaluating the performance of generative models [Theis et al., 2015, Papamakarios et al., 2017]. In our setting, the BPD can also be used to compare the performance of the DRE.
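The two computations above, the quadrature of the time-integrated ratio and the BPD of (14), can be sketched in a few lines of schematic numpy, where `r_fn` stands in for the trained Q-flow-ratio network:

```python
import numpy as np

def log_p_hat(log_q_x, r_fn, x, n_grid=65):
    """log p̂(x) = log q(x) - ∫_0^1 r(x, s) ds, with the time integral
    approximated by a trapezoidal rule on a uniform grid."""
    s_grid = np.linspace(0.0, 1.0, n_grid)
    vals = np.stack([np.asarray(r_fn(x, s), dtype=float) for s in s_grid])
    h = 1.0 / (n_grid - 1)
    integral = h * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
    return np.asarray(log_q_x, dtype=float) - integral

def bits_per_dim(log_p, d):
    """BPD metric of Eq. (14): mean negative log-likelihood in bits per dim."""
    return float(np.mean(-np.asarray(log_p, dtype=float) / (d * np.log(2.0))))
```

For instance, a model assigning \(\log\hat{p}(X_{i})=-d\log 2\) to every test image attains a BPD of exactly 1.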
The results in Table 1 show that Ours reaches state-of-the-art performance: it consistently attains a smaller BPD than the baseline methods across all choices of \(Q\). Meanwhile, we also note computational benefits in training: Ours took approximately 8 hours to converge, while TRE took 24 hours and DRE-\(\infty\) took approximately 2 days. We further show improvement in generated digits using the trained Q-flow, as shown in Figure 5 for RQ-NSF and Figure A.1 for the other two specifications of \(Q\). The trajectory of improved samples from \(Q\) to \(\tilde{Q}\) for RQ-NSF is further shown in Figure A.2.
\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c c} \hline \hline
**Choice of \(Q\)** & \multicolumn{4}{c|}{RQ-NSF} & \multicolumn{4}{c|}{Copula} & \multicolumn{4}{c}{Gaussian} \\ \hline
**Method** & Ours & DRE-\(\infty\) & TRE & 1 ratio & Ours & DRE-\(\infty\) & TRE & 1 ratio & Ours & DRE-\(\infty\) & TRE & 1 ratio \\ \hline
**BPD (\(\downarrow\))** & **1.05** & 1.09 & 1.09 & 1.09 & **1.14** & 1.21 & 1.24 & 1.33 & **1.31** & 1.33 & 1.39 & 1.96 \\ \hline \hline \end{tabular}
\end{table}
Table 1: DRE performance on the energy-based modeling task for MNIST, reported in BPD and lower is better. Results for DRE-\(\infty\) are from [Choi et al., 2022] and results for 1 ratio and TRE are from [Rhodes et al., 2020].
## 6 Discussion
In this work, we develop the Q-malizing flow, a neural-ODE model that smoothly and invertibly transports between a pair of arbitrary distributions \(P\) and \(Q\), is trained to approximate the optimal transport flow, and is learned from finite samples of both distributions. The Q-flow net can be initialized from any interpolant mapping between \(P\) and \(Q\) or by concatenating two CNF models, and is subsequently refined by end-to-end training that minimizes the KL-divergence between the transported and target distributions plus the Wasserstein-2 transport cost. The trained flow model facilitates infinitesimal DRE, where we train another Q-flow-ratio net by logistic classification between transported empirical data distributions on consecutive time grid points. The proposed method shows strong empirical performance on simulated and image data.
For future directions, it would be of interest to investigate alternative initialization schemes that may be more efficient than the current strategy of training CNFs. The algorithms of training the Q-flow net and the Q-flow-ratio net can be further enhanced by advanced adaptive schemes of time discretization. Other applications beyond DRE and on real datasets would also be useful extensions.
## Acknowledgement
The work is supported by NSF DMS-2134037. C.X. and Y.X. are partially supported by an NSF CAREER CCF-1650913, NSF DMS-2134037, CMMI-2015787, CMMI-2112533, DMS-1938106, and DMS-1830210. X.C. is partially supported by NSF CAREER DMS-2237842, Simons Foundation 814643, the Alfred P. Sloan Foundation.
Figure 5: Improvement in generated samples from \(Q\), where \(Q\) is given by RQ-NSF, a pre-trained normalizing flow model \(F\) that yields \(Q=F_{\#}\mathcal{N}(0,I_{d})\). |
# Approximately Equivariant Quantum Neural Network for \(p4m\) Group Symmetries in Images

Su Yeon Chang, Michele Grossi, Bertrand Le Saux, Sofia Vallecorsa

2023-10-03, arXiv: [http://arxiv.org/abs/2310.02323v1](http://arxiv.org/abs/2310.02323v1)
###### Abstract
Quantum Neural Networks (QNNs) are suggested as one of the quantum algorithms which can be efficiently simulated with a low depth on near-term quantum hardware in the presence of noises. However, their performance highly relies on choosing the most suitable architecture of Variational Quantum Algorithms (VQAs), and the problem-agnostic models often suffer issues regarding trainability and generalization power. As a solution, the most recent works explore Geometric Quantum Machine Learning (GQML) using QNNs equivariant with respect to the underlying symmetry of the dataset. GQML adds an inductive bias to the model by incorporating the prior knowledge on the given dataset and leads to enhancing the optimization performance while constraining the search space. This work proposes equivariant Quantum Convolutional Neural Networks (EquivQCNNs) for image classification under planar \(p4m\) symmetry, including reflectional and \(90^{\circ}\) rotational symmetry. We present the results tested in different use cases, such as phase detection of the 2D Ising model and classification of the extended MNIST dataset, and compare them with those obtained with the non-equivariant model, proving that the equivariance fosters better generalization of the model.
Quantum Machine Learning, Geometric Quantum Machine Learning, Equivariance, Image classification, Image processing
## I Introduction
During the last few years, Quantum Machine Learning (QML) has witnessed remarkable progress from diverse research perspectives as a promising application for the practical use of quantum computers [1, 2, 3]. In particular, Quantum Neural Networks (QNNs) are suggested as the most general and fundamental formalism to solve a plethora of problems while the architecture is completely agnostic to the given problem. They are often expected to improve existing machine learning (ML) techniques in terms of training performance [2, 4], convergence rate [5, 6], and generalization power [7, 8, 9].
However, they often suffer from issues due to complex loss landscapes, which are often non-convex and lead to many poor local minima [10, 11, 12]. In an effort to solve these issues, the field of Geometric Quantum Machine Learning (GQML) [13, 14, 15, 16, 17, 18, 19] is currently emerging, inspired by the classical Geometric Deep Learning (GDL) [20, 21, 22].
The main idea of GQML is to add sharp inductive bias [23] into the training model by incorporating prior knowledge on the dataset [18, 19]. In practice, GQML aims to construct a parameterized QNN, which is equivariant under the action of the symmetry group associated with the input dataset, so that the same action is applied to the output of the QNN. Previous studies have heavily explored GQML both in terms of theories and applications, showing that GQML helps mitigate the issues often encountered in QML [19]. However, most studies still focus on the permutation symmetric group, \(S_{n}\)[13, 15, 18], \(\mathbb{Z}_{2}\otimes\mathbb{Z}_{2}\) symmetry applied in small toy applications [14], or a single symmetry element in the case of image classification [24].
We extend the study of GQML in the context of image classification by taking into account the _planar wallpaper symmetry_ group, \(p4m\), which includes reflections and \(90^{\circ}\) rotations. The \(p4m\) symmetry group is the most common symmetry group observed in image datasets and is already treated in classical GDL via Group Equivariant Convolutional Networks (GCNNs) [25, 21]. Although symmetry in images is generally considered to hinder neural network training, there exist applications where the symmetry has considerable importance, such as Earth observation [26, 27, 28, 29], medical imaging [30], symmetry-related physics datasets [31], etc.
In this work, we introduce the \(p4m\)_-Equivariant Quantum Convolutional Neural Network_ (EquivQCNN) for image classification. The results clearly prove that the equivariant neural network has the advantage in terms of generalization power, in particular, while using only a small number of training samples. Moreover, we show that the presence of small noise in the EquivQCNN training helps to classify the symmetric images better. Our study ultimately paves the way for GQML to tackle realistic image classification tasks, improving training performance.
This paper is structured as follows. First of all, we will briefly summarize in Section II the theoretical backgrounds required to understand GQML. Then, in Section III, we introduce the architecture of _Equivariant Quantum Convolutional Neural Network_ (EquivQCNN) for the _planar wallpaper symmetry_ group \(p4m\) in the context of image classification. Section IV present our first result for EquivQCNN applied on reflectional and rotational symmetric images and prove its generalization power compared to the non-equivariant architecture. We finally conclude in Section V and propose a future
research direction.
## II Preliminaries
This section summarizes the theoretical background on group symmetry and equivariance required to construct a GQML architecture for supervised learning. Consider a classical data space \(\mathcal{X}\) and a label space \(\mathcal{Y}\). Each data sample \(\mathbf{x}\in\mathcal{X}\) is associated with a label \(\ell\in\mathcal{Y}\) with the underlying function \(f:\mathcal{X}\rightarrow\mathcal{Y}\). The supervised learning aims to find \(y_{\boldsymbol{\theta}}\), which is as close as possible to the ground truth \(f\) with the trained parameters \(\boldsymbol{\theta}\).
In the case of QML, we construct a quantum feature map \(\psi:\mathcal{X}\rightarrow\mathcal{H}\), which embeds the classical data into a quantum state in the Hilbert space \(\mathcal{H}\). The input quantum state \(\left|\psi(\mathbf{x})\right\rangle\in\mathcal{H}\) is then transformed via QNN, taking the most general form of a Variational Quantum Circuits (VQCs) \(\mathcal{U}(\boldsymbol{\theta})\) which is parameterized by the rotation angles \(\boldsymbol{\theta}\). The final prediction of the QNN for the input feature \(\mathbf{x}\) is given as an expectation value of the observable \(O\) :
\[y_{\boldsymbol{\theta}}(\mathbf{x})=\left\langle\psi(\mathbf{x})\right| \mathcal{U}^{\dagger}(\boldsymbol{\theta})O\mathcal{U}(\boldsymbol{\theta}) \left|\psi(\mathbf{x})\right\rangle. \tag{1}\]
In general, QNN architecture is completely agnostic to the underlying symmetry of \(\mathcal{X}\). GQML aims to incorporate the symmetry inherent to the dataset with the QNN architecture so that the final prediction is _invariant_ after the action of the symmetry group element on the original input feature.
Let us formalize it in a more concrete way. Consider a symmetry group \(\mathfrak{G}\) that acts on the data space \(\mathcal{X}\). We say the training model is _\(\mathfrak{G}\)-invariant_ if :
\[y_{\boldsymbol{\theta}}(g[\mathbf{x}])=y_{\boldsymbol{\theta}}(\mathbf{x}), \forall\mathbf{x}\in\mathcal{X},\forall g\in\mathfrak{G}. \tag{2}\]
In order to construct _\(\mathfrak{G}\)-invariant_ model, we require three components: equivariant data embedding, equivariant QNN and invariant measurement [14].
First of all, we say that the data embedding is _\(\mathfrak{G}\)-equivariant_ if the symmetry group element \(g\in\mathfrak{G}\) applied on the data \(\mathbf{x}\in\mathcal{X}\) induces a unitary quantum action \(V_{s}[g]\) at the level of quantum states :
\[\left|\psi(g[\mathbf{x}])\right\rangle=V_{s}[g]\left|\psi(\mathbf{x})\right\rangle. \tag{3}\]
We call \(V_{s}\) the _induced representation_ of the embedding \(\psi(\mathbf{x})\)[14].
We also need to construct a trainable quantum circuit ansatz, parameterized by angles \(\boldsymbol{\theta}\), which is equivariant with respect to the symmetry group \(\mathfrak{G}\). For simplicity, we consider only the gates generated by a fixed generator \(G\in\mathcal{G}\) :
\[R_{G}(\theta)=\exp(-i\theta G),\ \theta\in\mathbb{R}. \tag{4}\]
where \(\mathcal{G}\) is a fixed gateset. For a symmetry group \(\mathfrak{G}\) and its representation \(V_{s}\), the operator \(R_{G}\) is said to be \(\mathfrak{G}\)-equivariant if and only if [14, 17]:
\[[R_{G}(\theta),V_{s}[g]]=0,\forall g\in\mathfrak{G},\forall\theta\in\mathbb{R} \tag{5}\]
or equivalently,
\[[G,V_{s}[g]]=0,\forall g\in\mathfrak{G}. \tag{6}\]
The definition of _equivariance_ can also be extended to QNNs. We say that a QNN, \(\mathcal{U}_{\theta}\), is _\(\mathfrak{G}\)-equivariant_ if and only if it consists of equivariant quantum operators, i.e. \(\mathcal{U}_{\theta}\) commutes with \(V_{s}[g]\) for all elements \(g\) of the symmetry group \(\mathfrak{G}\).
There exist several methods to construct the equivariant gateset [19], but in this paper, we will focus on the _twirling method_, which is the most practical approach for small symmetry groups [32]. Consider an arbitrary generator \(X\). Then, we define a twirled operator \(\mathcal{T}_{\mathfrak{G}}\) as :
\[\mathcal{T}_{\mathfrak{G}}[X]=\frac{1}{|\mathfrak{G}|}\sum_{g\in\mathfrak{G}} V_{s}[g]^{\dagger}XV_{s}[g]. \tag{7}\]
It corresponds to a projector of the operator \(X\) onto all symmetry group elements, commuting with \(V_{s}[g]\) for all \(g\in\mathfrak{G}\).
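The twirling projector (7) is easy to verify numerically. The sketch below is our own illustration using a toy two-qubit representation \(\{I,X\otimes X\}\) rather than the \(p4m\) representation of Section III; it twirls Pauli generators and checks the commutation condition (6):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron(*ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def twirl(gen, reps):
    """Twirling projector of Eq. (7): average of V† gen V over the group."""
    return sum(V.conj().T @ gen @ V for V in reps) / len(reps)

# Toy representation on two qubits: the bit-flip symmetry {I, X⊗X}.
reps = [np.eye(4, dtype=complex), kron(X, X)]

T = twirl(kron(Z, I2), reps)        # twirl an arbitrary generator
V = kron(X, X)
assert np.allclose(T @ V, V @ T)    # twirled generator commutes with V
# Y⊗Y and Z⊗Z already commute with X⊗X, so twirling leaves them fixed:
assert np.allclose(twirl(kron(Y, Y), reps), kron(Y, Y))
assert np.allclose(twirl(kron(Z, Z), reps), kron(Z, Z))
```

The last two assertions mirror the computation of Eq. (16) below: generators already in the commutant are fixed points of the projector.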
Finally, an observable \(O\) is \(\mathfrak{G}\)-invariant, if :
\[V_{s}[g]^{\dagger}OV_{s}[g]=O,\ \forall g\in\mathfrak{G}, \tag{8}\]
i.e. if \(O\) commutes with \(V_{s}[g]\) for all \(g\in\mathfrak{G}\). Combining all three components, the equivariance of the QNN leads to the invariance of the final prediction:
\[y(g[\mathbf{x}]) =\left\langle\psi(g[\mathbf{x}])\right|\mathcal{U}(\boldsymbol{ \theta})^{\dagger}O\mathcal{U}(\boldsymbol{\theta})\left|\psi(g[\mathbf{x}])\right\rangle\] \[=\left\langle\psi(\mathbf{x})\right|V_{s}^{\dagger}\mathcal{U}( \boldsymbol{\theta})^{\dagger}O\mathcal{U}(\boldsymbol{\theta})V_{s}\left| \psi(\mathbf{x})\right\rangle\] \[=\left\langle\psi(\mathbf{x})\right|\mathcal{U}(\boldsymbol{ \theta})^{\dagger}O\mathcal{U}(\boldsymbol{\theta})\left|\psi(\mathbf{x}) \right\rangle=y(\mathbf{x}) \tag{9}\]
Overall, we have a trade-off between the equivariance and the expressibility of the QNN by constraining the quantum operators in the model based on the geometric prior of the dataset. GQML reduces the search space for training and brings advantages on several fronts, such as trainability [18], convergence rate [33], and generalization power [14, 18].
## III GQML for image classification
In this section, we will introduce Equivariant Quantum Convolutional Neural Networks (EquivQCNNs) for image classification invariant under the \(p4m\) wallpaper symmetry group, \(\mathfrak{G}_{p4m}\), which corresponds to the planar square symmetry group. It consists of 8 components :
* the identity \(e\),
* the rotations \(r\), \(r^{2}\) and \(r^{3}\) by \(90^{\circ},180^{\circ},270^{\circ}\) around the origin,
* the reflections \(t_{x}\) and \(t_{y}\) in the \(x\) and \(y\) axes,
* the reflections in the two diagonals.
In this paper, we will focus on six of these components: the identity, the rotations generated by \(r\), and the reflections in the main axes, \(t_{x}\) and \(t_{y}\).
### _Equivariant Data embedding_
We will start by constructing the data embedding method for the reflectional and rotational symmetry of images. Amplitude encoding is one of the most fundamental methods for mapping classical data into quantum states [34]. In general, each pixel coordinate is associated with a computational basis by visualizing the 2D image as a 1D vector, but this complicates the manipulation of 2D symmetry.
We propose a _coordinate-aware_ amplitude (CAA) embedding method, which facilitates finding the unitary representation of \(p4m\) symmetry group. The main idea of the CAA embedding is that we can explicitly denote the \(x\) and \(y\) coordinates by using the first \(n\) qubits to represent the \(x\)-coordinate and the second \(n\) qubits for the \(y\)-coordinate of the pixel.
Let us consider a training set \(\mathcal{X}\) of 2-dimensional images with \(N\times N\) pixels, denoted as \(\mathbf{x}=\{x_{ij}\}\) with \(i,j=0,...,N-1\), each of which is associated with a hot encoded label \(\ell\in\{0,1\}\in\mathcal{Y}\). The CAA embedding maps the input image \(\mathbf{x}\) into a quantum state \(\ket{\psi(x)}\in\mathcal{H}\) as follows:
\[\ket{\psi(x)}=\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}x_{ij}\ket{i}\ket{j} \tag{10}\]
where \(N=2^{n}\). For simplicity, let us denote \(q_{1:n}\) the first \(n\)-qubits for \(x\)-coordinates and \(q_{n+1:2n}\) the second \(n\)-qubits for \(y\)-coordinates.
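A minimal numpy sketch of the CAA embedding (our illustration, on a toy \(4\times 4\) image with \(n=2\) qubits per coordinate):

```python
import numpy as np

def caa_embed(img):
    """Coordinate-aware amplitude embedding: pixel (i, j) of an N×N image
    (N = 2**n) becomes the amplitude of basis state |i>|j>, with the first
    n qubits carrying the x-coordinate and the last n the y-coordinate."""
    vec = np.asarray(img, dtype=float).flatten()  # basis index = i * N + j
    return vec / np.linalg.norm(vec)

img = np.arange(16, dtype=float).reshape(4, 4)    # toy 4×4 image
psi = caa_embed(img)
```

Note that, as for any amplitude encoding, the image must be normalized so that the state has unit norm.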
From the CAA embedding formulation, it is straightforward to find the induced representation of \(p4m\) group elements. Fig. 1 visualizes CAA embedding of 2D images and the action of the symmetry elements on the computational basis states. Let us denote \(V_{x}\) and \(V_{y}\) the induced representation of the reflections, \(t_{x}\) and \(t_{y}\) respectively, and \(V_{r}\) for rotation of \(90^{\circ}\), \(r\), which are defined as follows :
\[V_{x}=X^{\otimes n}\otimes\mathbb{I}^{\otimes n}=X_{1:n}, \tag{11}\]
\[V_{y}=\mathbb{I}^{\otimes n}\otimes X^{\otimes n}=X_{n+1:2n}, \tag{12}\]
\[V_{r}=(X^{\otimes n}\otimes\mathbb{I}^{\otimes n})\otimes_{i=0}^{n-1}SWAP_{i, i+n}=V_{x}V_{r}^{\prime}, \tag{13}\]
with \(V_{r}^{\prime}=\otimes_{i=0}^{n-1}SWAP_{i,i+n}\). Therefore, the quantum gates that are equivariant with respect to the \(p4m\) symmetry should commute with all the induced representations, \(V_{p4m}=\{V_{x},V_{y},V_{r}\}\) :
\[U \in\texttt{comm}\{V_{x},V_{y},V_{r}\}=\texttt{comm}\{V_{x},V_{y},V_{r}^{\prime}\}\] \[=\texttt{comm}\{X_{1:n},X_{n+1:2n},\otimes_{i=0}^{n-1}SWAP_{i,i+n }\}, \tag{14}\]
where \(\texttt{comm}\) denotes the commutant, i.e. the set of operators commuting with all the listed unitaries.
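Under the CAA embedding, the induced representations (11)-(13) act on the pixel grid as array flips and a transpose, which can be checked numerically. The sketch below is our illustration; the rotation shown is counterclockwise, the convention depending on the chosen coordinate orientation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 8                            # n qubits per coordinate, N = 2**n
img = rng.random((N, N))
psi = img.flatten() / np.linalg.norm(img)

def apply(grid_op, state):
    """Apply a map defined on the (N, N) pixel grid to the amplitude vector."""
    return grid_op(state.reshape(N, N)).flatten()

# V_x = X^{⊗n} ⊗ I^{⊗n} complements the x-register: i -> N-1-i.
Vx = lambda a: a[::-1, :]
# V_y = I^{⊗n} ⊗ X^{⊗n} complements the y-register: j -> N-1-j.
Vy = lambda a: a[:, ::-1]
# V_r' = ⊗ SWAP_{i,i+n} exchanges the two registers: a transpose.
# V_r = V_x V_r' then realizes the 90° rotation of the grid.
Vr = lambda a: (a.T)[::-1, :]

norm = np.linalg.norm(img)
assert np.allclose(apply(Vx, psi).reshape(N, N), img[::-1, :] / norm)
assert np.allclose(apply(Vr, psi).reshape(N, N), np.rot90(img) / norm)
```

The assertions confirm that complementing the \(x\)-register reflects the image and that the composition \(V_{x}V_{r}^{\prime}\) reproduces `np.rot90`.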
### _Equivariant Quantum Convolutional Neural Networks_
First proposed by Cong _et al._[35], Quantum Convolutional Neural Networks (QCNNs) are the quantum analogue of classical Convolutional Neural Networks (CNNs). QCNNs have exhibited success in different tasks, including quantum many-body problems [35], phase detection [8], and image classification [36], while avoiding barren plateaus thanks to their shallow circuit depth [11].
A QCNN consists of two components: _convolutional filters_, which are \(k\)-body local quantum gates for \(k<n\), and pooling layers, which reduce two-qubit states into one-qubit states. In most cases, \(k=2\) for the convolutional filters, but in this paper, we will also introduce the case \(k>2\) to maintain the equivariance. Notably, QCNNs maintain the translational invariance of the input data by sharing identical parameters between the filters inside each layer.
Following the definition of equivariance and the method presented in Section II, we construct the equivariant ansatz of the convolutional filters for \(\mathfrak{G}_{p4m}\). To start with, we can easily see that the QCNN architecture respects the equivariance with respect to \(V_{r}^{\prime}\), since the same gate with the same parameter is repeated on qubits \(i\) and \(i+n\) when \(n\) is even.
The ansatz symmetrization for the other symmetries requires more insights. We will consider the generator gateset, which only consists of Pauli strings up to 2-body local operation :
\[G=\{X,Y,Z,Y_{1}Y_{2},Z_{1}Z_{2}\}. \tag{15}\]
For a single qubit gate, it is trivial to notice that only Pauli \(X\) gates commute with \(V_{x},V_{y}\), while for \(k\)-qubit gates for \(k>1\), we need to explore two cases separately.
1. \(G\) **constrained to \(q_{1:n}\) OR \(q_{n+1:2n}\)** Considering only \(2\)-body quantum gates, finding \(U\in\texttt{comm}(X_{1:n},X_{n+1:2n})\) can be simplified into finding \(U\in\texttt{comm}(X_{1}X_{2})\). We can easily find that both \(Y_{1}Y_{2}\) and \(Z_{1}Z_{2}\) commute with \(X_{1}X_{2}\) applied on two qubits.
Fig. 1: Schematic diagram of the action of \(p4m\) symmetry on 2D images of size \(4\times 4\) encoded using CAA embedding method with \(2\) qubits. The pixel at position \((i,j)\) is associated with a computational basis \(\ket{i}\ket{j}\).
Indeed, using the Twirling method, we have :
\[\mathcal{T}_{X_{1}X_{2}}(Y_{1}Y_{2}) =\frac{1}{2}\Big{(}Y_{1}Y_{2}+(X_{1}X_{2})^{\dagger}(Y_{1}Y_{2})(X _{1}X_{2})\Big{)}\] \[=\frac{1}{2}(Y_{1}Y_{2}+X_{1}Y_{1}X_{1}X_{2}Y_{2}X_{2})\] \[=\frac{1}{2}(Y_{1}Y_{2}+(-Y_{1})(-Y_{2})).\] \[=Y_{1}Y_{2}. \tag{16}\]
Similarly, we can show that \(Z_{1}Z_{2}\) is the equivariant operator with respect to \(X_{1}X_{2}\). Thus, we obtain the equivariant generator gateset :
\[G_{s,1}=\{Y_{1}Y_{2},Z_{1}Z_{2}\}. \tag{17}\]
2. \(G\) **applied on both \(q_{1:n}\) AND \(q_{n+1:2n}\)** Unlike the first case, where the Pauli \(X\) gates in \(V_{x}\) and \(V_{y}\) act equally on the two qubits as \(X\otimes X\), the weight of the Pauli gates is unbalanced in this case. We can easily notice that \(G_{s,1}\) acting on \(q_{n}\) and \(q_{n+1}\) does not commute with \(V_{x}\) and \(V_{y}\), since \(X_{1}\otimes\mathbb{I}_{2}\) anticommutes with \(Y_{1}Y_{2}\) : \[(X_{1}\otimes\mathbb{I}_{2})(Y_{1}Y_{2})=-(Y_{1}Y_{2})(X_{1}\otimes\mathbb{I}_ {2}).\] (18) Indeed, in order to construct an equivariant operator, we need an even number of Pauli \(Y\) or Pauli \(Z\) gates applied on each of \(q_{1:n}\) and \(q_{n+1:2n}\)[19]. Therefore, the smallest equivariant quantum gates are : \[G_{s,2}=\{P_{\sigma}P_{\sigma}P_{\sigma^{\prime}}P_{\sigma^{\prime}}\,|\,P_{ \sigma},P_{\sigma^{\prime}}\in\{X,Y,Z\}\}\] (19)
By exponentiating the equivariant generators found above, we can construct the convolutional filter ansatz that is equivariant with respect to \(V_{p4m}\). Fig. 3 summarizes the equivariant two-qubit convolutional filter ansatz, \(U_{2}\), and the four-qubit ansatz, \(U_{4}\). Using \(U_{2}\) and \(U_{4}\), we propose two QCNN models, Equivariant QCNN (EquivQCNN) and Approximately Equivariant QCNN (Appr-EquivQCNN), as shown in Fig. 2. In both cases, we first apply the two-qubit convolutional filters on \(q_{1:n}\) and \(q_{n+1:2n}\) separately, without connecting them, which can be considered a preliminary scanning phase. Then, in EquivQCNN, the \(U_{4}\) ansatz is used as the convolutional filter connecting \(q_{1:n}\) and \(q_{n+1:2n}\), yielding a fully equivariant model. In Appr-EquivQCNN, on the other hand, the \(U_{2}\) ansatz is repeated in the learning layers acting on \(q_{n}\) and \(q_{n+1}\). This injects limited symmetry-breaking noise into the equivariant model, increasing its expressibility. With this noise, we aim to find a crossing point between expressibility and equivariance, so that the model is expressible enough to learn the training samples but not so expressible that it fails to generalize.
### _Approximately Invariant Measurement_
In this section, we propose an _Approximately Invariant Measurement_, with the detailed process summarized in Fig. 4. For simplicity, we will consider the binary classification case.
Let us call \(q_{i_{m}}\in q_{1:n}\) and \(q_{i_{m}+n}\in q_{n+1:2n}\) with \(i_{m}\in[1,..,n]\) the qubits which are not traced out in EquivQCNN and measured at the end of the circuit. First, we apply an \(R_{z}(\phi)\) and a Hadamard gate on both \(q_{i_{m}}\) and \(q_{i_{m}+n}\), with \(\phi\) also a trained parameter. Then, we measure the probability distribution of state \(|0\rangle\) and \(|1\rangle\) on each of the qubits separately, obtaining \([p_{0},p_{1}]\) and \([p_{0}^{\prime},p_{1}^{\prime}]\) respectively. As the other qubits are traced out, only the two-qubit state is left at the end of
Fig. 3: Parameterized quantum circuits ansatz, \(U_{2}\) (yellow rectangle) and \(U_{4}\) (blue rectangle), used the convolutional filters equivariant with respect to \(p4m\) symmetry group.
Fig. 2: A Schematic diagram of (a) EquivQCNN and (b) Appr-EquivQCNN for an example of 8 qubits to classify image of size \(16\times 16\). They consist of \(U_{2}\) (yellow rectangle) and \(U_{4}\) (blue rectangle) convolutional filters (c.f. Fig. 3), followed by pooling layers (green circle). Both models contain a preliminary _scanning_ phase, where \(U_{2}\) acts on \(q_{1:n}\) and \(q_{n+1:2n}\) separately. EquivQCNN then consists of \(U_{4}\) ansatz, while Appr-EquivQCNN is subject to a small noise by connecting \(q_{1:n}\) and \(q_{n+1:2n}\) with \(U_{2}\) gate.
the quantum circuit. Denoting the final quantum state on the qubits \(q_{i_{m}}\) and \(q_{i_{m}+n}\) as \(\ket{\psi_{f}}=r_{0}e^{i\theta_{0}}\ket{00}+r_{1}e^{i\theta_{1}}\ket{01}+r_{2}e^{ i\theta_{2}}\ket{10}+r_{3}e^{i\theta_{3}}\ket{11}\), the proposed measurement on \(q_{i_{m}}\) returns the probability distribution :
\[p_{0}= \frac{1}{2}\big{[}r_{0}^{2}+r_{1}^{2}+2r_{0}r_{1}\cos(2\phi-\theta _{0}+\theta_{1})\] \[+r_{2}^{2}+r_{3}^{2}+2r_{2}r_{3}\cos(2\phi-\theta_{2}+\theta_{3}) \big{]}, \tag{20}\]
\[p_{1}= \frac{1}{2}\big{[}r_{0}^{2}+r_{1}^{2}-2r_{0}r_{1}\cos(2\phi-\theta _{0}+\theta_{1})\] \[+r_{2}^{2}+r_{3}^{2}-2r_{2}r_{3}\cos(2\phi-\theta_{2}+\theta_{3}) \big{]}. \tag{21}\]
First of all, we can easily notice that the measurement is invariant with respect to \(V_{r}^{\prime}\), as we sum up the final measurements on \(q_{i_{m}}\) and \(q_{i_{m}+n}\). Now, let us prove that it is invariant up to a certain error rate \(\epsilon\). Similarly, let us call \([p_{0}^{x},p_{1}^{x}]\) the final probability distribution for the image reflected with respect to the \(x\)-axis, whose final state \(V_{x}\ket{\psi_{f}}=r_{1}e^{i\theta_{1}}\ket{00}+r_{0}e^{i\theta_{0}}\ket{01} +r_{3}e^{i\theta_{3}}\ket{10}+r_{2}e^{i\theta_{2}}\ket{11}\) carries a bit flip on qubit \(q_{i_{m}}\). By performing the same computation, we can compute the difference between \(p_{0}\) and \(p_{0}^{x}\) :
\[p_{0}-p_{0}^{x} =r_{0}r_{1}\big{[}\cos(2\phi-\theta_{0}+\theta_{1})-\cos(2\phi+ \theta_{0}-\theta_{1})\big{]}\] \[+r_{2}r_{3}\big{[}\cos(2\phi-\theta_{2}+\theta_{3})-\cos(2\phi+ \theta_{2}-\theta_{3})\big{]}\] \[=-2\sin(2\phi)\big{[}r_{0}r_{1}\sin(\theta_{1}-\theta_{0})+r_{2}r_ {3}\sin(\theta_{3}-\theta_{2})\big{]},\] \[|p_{0}-p_{0}^{x}|\leq 2\left(r_{0}r_{1}+r_{2}r_{3}\right)\epsilon\leq\epsilon, \tag{22}\]
with \(\epsilon=|\sin(2\phi)|\). The last inequality uses \(r_{0}r_{1}+r_{2}r_{3}\leq\frac{1}{2}\), which follows from \(r_{0}^{2}+r_{1}^{2}+r_{2}^{2}+r_{3}^{2}=1\). This proves that with \(\phi\approx 0\) or \(\phi\approx\frac{\pi}{2}\), the measurement is _approximately invariant_ with respect to \(V_{x}\), and also \(V_{y}\) by the same argument. The presence of the \(R_{z}\) gate loosens the constraint imposed by the symmetry and adds an extra degree of freedom to the training. By trading off _full_ invariance against expressibility, we allow the model to explore a larger search space for better performance.
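This bound can be checked numerically by sampling random final states and evaluating Eq. (20) directly (our sketch, not part of the paper's code):

```python
import numpy as np

def p0(r, th, phi):
    """Eq. (20): probability of measuring |0> on q_{i_m} after Rz(phi), H."""
    return 0.5 * (r[0]**2 + r[1]**2
                  + 2 * r[0] * r[1] * np.cos(2 * phi - th[0] + th[1])
                  + r[2]**2 + r[3]**2
                  + 2 * r[2] * r[3] * np.cos(2 * phi - th[2] + th[3]))

rng = np.random.default_rng(1)
for phi in [0.0, 0.05, np.pi / 2 - 0.05, 0.8]:
    for _ in range(200):
        r = np.abs(rng.normal(size=4))
        r /= np.linalg.norm(r)                       # normalized amplitudes
        th = rng.uniform(0, 2 * np.pi, size=4)
        # x-reflection flips q_{i_m}: amplitude pairs (0,1) and (2,3) swap
        r_x, th_x = r[[1, 0, 3, 2]], th[[1, 0, 3, 2]]
        gap = abs(p0(r, th, phi) - p0(r_x, th_x, phi))
        assert gap <= abs(np.sin(2 * phi)) + 1e-9    # the ε bound holds
```

At \(\phi=0\) the gap vanishes exactly, recovering full invariance.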
We can generalize this measurement to \(L\)-class classification by measuring \(\log_{2}L\) qubits in each of \(q_{1:n}\) and \(q_{n+1:2n}\) separately and summing the results. This measurement corresponds to the _Softmax_ activation function at the end of a neural network. Thus, we use the cross entropy to calculate the training loss,
\[\mathcal{L}_{\mathbf{\theta}}(\mathbf{x})=-\sum_{i=1}^{L}\ell_{i}\log p_{i}(\mathbf{ \theta};\mathbf{x}), \tag{23}\]
where \(\ell=[\ell_{1},\ell_{2},...,\ell_{L}]\) with \(\ell_{i}\in\{0,1\}\) is the one-hot encoded target label. The state with the highest probability corresponds to the class with which the input image is associated.
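A minimal numpy version of the loss (23), assuming `probs` collects the measured class probabilities described above:

```python
import numpy as np

def cross_entropy(probs, label, eps=1e-12):
    """Cross-entropy of Eq. (23): -sum_i l_i log p_i for a one-hot label.
    Probabilities are clipped away from zero for numerical stability."""
    probs = np.clip(np.asarray(probs, dtype=float), eps, 1.0)
    return float(-np.sum(np.asarray(label, dtype=float) * np.log(probs)))
```

For a perfectly confident correct prediction the loss is 0, and for a uniform two-class prediction it equals \(\log 2\).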
For the following, we will consider two different types of measurements :
1. \(M_{1}\) : \(\phi\) is constrained to zero, \(\phi=0\),
2. \(M_{2}\) : \(\phi\) is updated during the training, \(\phi\neq 0\).
## IV Result
In this section, we present our preliminary results of the EquivQCNN training for binary image classification, applied to two different image datasets shown in Fig. 5. The first dataset contains the spin distribution of the 2D lattice Ising model with \(16\times 16\) interacting spins, simulated using Metropolis Monte Carlo with the Hamiltonian [37]:
\[H=-J\sum_{\langle ij\rangle}s_{i}s_{j}, \tag{24}\]
where \(s_{i}\in\{-1,1\}\) corresponds to the spin on site \(i\), \(J\) is the interaction strength between two spins, and \(\langle ij\rangle\) runs over pairs of nearest neighbours. At low temperature \(T\), the spins are ordered, all pointing in the same direction; as the temperature increases, we reach the critical temperature \(T_{c}\), where we observe the phase transition from an ordered phase to a disordered phase. With periodic boundary conditions, the Ising model dataset is reflection- and rotation-symmetric by construction. We aim to classify the ordered phase from the disordered phase using EquivQCNN.

Fig. 4: Approximately invariant measurement process. We apply \(R_{z}(\phi)\) and \(H\) gates on two qubits, \(q_{i_{m}}\in q_{1:n}\) and \(q_{i_{m}+n}\in q_{n+1:2n}\), and measure the probability distribution on each qubit separately. The final label is computed by summing the two distributions and halving the result.

Fig. 5: Examples of the Ising model and the extended MNIST image samples of size \(16\times 16\) used for binary classification. Each row corresponds to one class.
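The dataset generation described above can be sketched with a plain single-spin Metropolis Monte Carlo sampler for Eq. (24); the temperature, sweep count, and seed below are illustrative, not the paper's exact simulation settings:

```python
import math, random

def metropolis_ising(L=16, T=1.5, J=1.0, sweeps=200, seed=0):
    """Metropolis Monte Carlo for the 2D Ising model H = -J sum_<ij> s_i s_j
    with periodic boundary conditions; returns the L x L spin lattice."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        # energy change of flipping s[i][j]; neighbours wrap around (periodic)
        nb = s[(i + 1) % L][j] + s[(i - 1) % L][j] \
            + s[i][(j + 1) % L] + s[i][(j - 1) % L]
        dE = 2.0 * J * s[i][j] * nb
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]
    return s

lattice = metropolis_ising()
m = abs(sum(sum(row) for row in lattice)) / 16 ** 2
print(round(m, 2))  # |magnetisation| per spin; typically large well below T_c ~ 2.27
```

Sampling lattices at temperatures below and above \(T_{c}\) then yields the two classes (ordered vs. disordered) used for training.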
The second dataset is the extended MNIST dataset, which also includes randomly reflected and rotated handwritten digit images. In this paper, we present the results for the classification of digits \(4\) and \(5\), downsampled to \(16\times 16\) pixels.
We compare the performance of EquivQCNN with a non-equivariant QCNN with a similar number of parameters, using a convolutional filter that generates an arbitrary two-qubit \(SO_{4}\) state [36]. For all models, the initial parameters are sampled randomly from the uniform distribution \([-0.1,0.1]\). The parameters are updated with the Adam optimizer, using a learning rate of \(0.01\), \(\beta_{1}=0.5\) and \(\beta_{2}=0.999\).
To evaluate the generalization power of the EquivQCNN, we train the models for different training set sizes, \(N_{s}=2^{i}\cdot 10\) for \(i=1,...,10\), with batch size \(N_{bs}=2^{i}\) to maintain the same number of updates in each epoch. Tab. I and Fig. 6 summarize the test accuracy obtained at the end of training with the different QCNN architectures. Note that the number of samples in the test set is always the same, regardless of the number of training samples.
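This schedule keeps the number of weight updates per epoch constant across all runs, which can be checked directly:

```python
# Training-set sizes N_s = 2^i * 10 paired with batch sizes N_bs = 2^i
# give a fixed number of weight updates per epoch for every run.
schedule = [(2 ** i * 10, 2 ** i) for i in range(1, 11)]
updates_per_epoch = [n_s // n_bs for n_s, n_bs in schedule]
print(schedule[0], schedule[-1], set(updates_per_epoch))  # (20, 2) (10240, 1024) {10}
```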
We observe that EquivQCNN and Appr-EquivQCNN with the \(M_{2}\) measurement reach higher test accuracy for all \(N_{s}\), especially for small \(N_{s}\). In particular, in the case of the Ising model, Appr-EquivQCNN with \(M_{2}\) outperforms the non-equivariant model while using 256 times fewer training samples. This demonstrates that equivariance helps to improve the generalization power, as expected.
One interesting point is that EquivQCNN gives the best result for MNIST, while Appr-EquivQCNN with \(M_{2}\) measurement outperforms EquivQCNN for the Ising model. This difference might be explained by the fact that the Ising model is subject to a stricter symmetry by its construction, compared to the extended MNIST, where the symmetry is artificially created by random reflection and rotation. Thus, injecting noise into the model with Appr-EquivQCNN helps training for the Ising dataset, which is not the case for MNIST.
## V Conclusion
In this paper, we introduced the Equivariant QCNN for the planar wallpaper symmetry group \(p4m\), including reflection and \(90^{\circ}\) rotation, for image classification. Furthermore, our study suggests the possibility of injecting noise into the GQML model in order to find the best trade-off between expressibility and equivariance. The proposed models were tested on two different datasets, the Ising model and the extended MNIST dataset, and compared with a non-equivariant model. The results clearly show that the EquivQCNN outperforms the non-equivariant one in terms of generalization power, especially with a small training set size. Previous studies on QML have already shown that it has a high generalization power with a small training set size [7]. This work demonstrated that we can further improve the generalization thanks to the inductive bias added by the geometric prior of the dataset.
For our future research, we plan to compare the EquivQCNN with the problem-agnostic model, not only in terms of test accuracy but also in other factors, such as local effective dimension, overparameterization, barren plateaus, etc. Ultimately, we extend the test to a more realistic use case with a larger image size where symmetry is an essential component for the training, such as Earth Observation images, and show the practical advantage of EquivQCNN.
Fig. 6: The test accuracy obtained at the end of QCNN training for the Ising and extended MNIST datasets with different numbers of training samples. The solid line corresponds to the average over five runs, and the dashed line to the best one among them. The test accuracy of EquivQCNN and Appr-EquivQCNN with \(M_{2}\) is always higher than that of the non-equivariant QCNN, demonstrating their generalization power.
\begin{table}
\begin{tabular}{c||c|c||c|c} & \multicolumn{2}{c||}{Ising} & \multicolumn{2}{c}{MNIST} \\ \hline \(n_{samples}\) & \(40\) & \(10240\) & \(40\) & \(10240\) \\ \hline Non-Equiv. & \(77.6\pm 0.1\) & \(83.0\pm 2.0\) & \(66.7\pm 2.1\) & \(72.9\pm 0.5\) \\ Equiv. & \(74.2\pm 0.2\) & \(75.8\pm 0.3\) & \(\mathbf{77.5\pm 1.0}\) & \(74.5\pm 4.7\) \\ Appr-Eq. 1 & \(84.8\pm 2.2\) & \(85.4\pm 2.0\) & \(52.9\pm 0.1\) & \(72.7\pm 2.4\) \\ Appr-Eq. 2 & \(\mathbf{86.4\pm 3.4}\) & \(\mathbf{89.3\pm 2.9}\) & \(69.7\pm 2.9\) & \(\mathbf{76.2\pm 1.8}\) \\ \end{tabular}
\end{table} TABLE I: The test accuracy at the end of the QCNN training for Ising and extended MNIST with \(n_{samples}=40\) and \(10240\) training samples (best result in bold).
## Acknowledgement
This work was carried out as part of the quantum computing for earth observation (QC4EO) initiative of ESA \(\Phi\)-lab, partially funded under contract 4000135723/21/I-DT-lr, in the FutureEO programme. MG and SF are supported by CERN through the CERN QTI.
---

# Prediction of the evolution of the nuclear reactor core parameters using artificial neural network

Krzysztof Palmi, Wojciech Kubinski, Piotr Darnowski

arXiv:2304.10337v2, 20 April 2023. http://arxiv.org/abs/2304.10337v2
###### Abstract
The aim of the research was to design, implement and investigate an Artificial Intelligence (AI) algorithm, namely an Artificial Neural Network (ANN) that can predict the behaviour of selected parameters of a nuclear reactor core.
A nuclear reactor based on the MIT BEAVRS benchmark was used as a typical power-generating Pressurized Water Reactor (PWR). The PARCS v3.2 nodal-diffusion core simulator was used as a full-core reactor physics solver to emulate the operation of a reactor and to generate training and validation data for the ANN.
The ANN was implemented with dedicated Python 3.8 code with Google's TensorFlow 2.0 library. The effort was based to a large extent on the process of appropriate automatic transformation of data generated by PARCS simulator, which was later used in the process of the ANN development.
Various methods that allow obtaining better accuracy of the ANN predicted results were studied, such as trying different ANN architectures to find the optimal number of neurons in the hidden layers of the network. Results were later compared with the architectures proposed in the literature. For the selected best architecture predictions were made for different core parameters and their dependence on core loading patterns.
In this study, a special focus was put on the prediction of the fuel cycle length for a given core loading pattern, as it can be considered one of the targets for plant economic operation. For instance, the length of a single fuel cycle depending on the initial core loading pattern was predicted with very good accuracy (>99%).
This work contributes to the exploration of the usefulness of neural networks in solving nuclear reactor design problems. Thanks to the application of ANN, designers can avoid using an excessive amount of core simulator runs and more rapidly explore the space of possible solutions before performing more detailed design considerations.
Keywords: artificial neural network, nuclear reactor, batch learning, TensorFlow, PARCS, BEAVRS
## 1 Introduction
### Artificial Intelligence in nuclear engineering
Nuclear power is an increasingly significant source of electricity in the world. In 2021, nuclear reactors accounted for about 10% of globally produced electricity and were the second-largest source of low-emission electricity [1]. With the development of this energy sector, the demand for optimizing processes related to the design and management
of nuclear reactors is increasing. Moreover, increasing easily accessible computational resources allow engineers to propose more and more sophisticated methods for process optimization.
Over the past decades, Artificial Intelligence (AI) gained a lot of interest in nuclear engineering due to the possible modernization of software in existing and new nuclear reactor technologies [2]. The need to address such solutions was stated by the International Atomic Energy Agency (IAEA) in 1986 [3], and as a result a wide variety of methods have already been developed in this field: some focus mostly on technical aspects, such as reducing radiation exposure to personnel and enhancing the reliability of equipment, and others mostly on economic aspects, such as optimization of the maintenance schedule or improving plant availability. Algorithms applied to such problems include naive Bayes classifiers in Bayesian-based isotope identification [4], optimization methods like Genetic Algorithms [5][6], common machine learning (ML) algorithms such as decision trees and support vector machines (SVM), used for example in the classification of uranium waste [7], and deep learning models such as artificial neural networks (ANN) and convolutional neural networks (CNN), both of which have contributed to the nuclear engineering field through the identification of accident scenarios in nuclear power plants using ANNs [8] and the approximation of flows in a reactor using CNNs [9] (a more detailed state of the art can be found in [10]).
One solution to the reactor design process optimization, on which this research is focused, is the use of ANNs, which can estimate different parameters of the nuclear reactor core cycle in just seconds, compared to running a full advanced simulation.
### Artificial neural networks
Neural networks, often known as Artificial Neural Networks (ANNs), are computational models inspired by the biological neural networks found in the brain. They were designed as artificial-intelligence models capable of handling nonlinear regression and classification problems.
Such a network consists of units called neurons, connected like the synapses that transmit information between biological neurons. In a simple Multilayer Perceptron (MLP) network, neurons form at least three layers: input, one or more hidden layers, and output. Each connection in a given layer carries a certain real number called the connection weight. In the hidden layers and the output layer, each neuron uses a specific activation function: based on the input values of a given neuron, this function determines its activation, i.e., a certain real value, which is its output. For more complex problems, it is advisable to use a nonlinear activation function, as nonlinearities help to solve less trivial problems with fewer neurons. Networks in which the value is propagated from input to output only are known as feed-forward networks. Given a multi-layer feed-forward network, the learning process can be described as changing the connection weights using the backpropagation algorithm, i.e., propagating the errors back to the preceding layers, as shown in Figure 1.1, with the help of gradient descent. Many optimizers for gradient descent currently exist; one of the most recent, used in this research for being memory-efficient and fast, is the Adam optimizer [11].
When the learning process is done, the ANN can be used to predict output parameters based on given, possibly previously unseen, input values.
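The forward pass and backpropagation described above can be illustrated with a minimal one-hidden-layer network trained by plain stochastic gradient descent on a toy regression task (the Adam optimizer and the paper's actual architecture are omitted for brevity; all sizes, seeds, and rates are illustrative):

```python
import math, random

random.seed(0)
# Tiny MLP: scalar input -> 2 tanh hidden units -> scalar output.
w1 = [random.uniform(-0.5, 0.5) for _ in range(2)]  # input -> hidden weights
b1 = [0.0, 0.0]
w2 = [random.uniform(-0.5, 0.5) for _ in range(2)]  # hidden -> output weights
b2 = 0.0
lr = 0.05  # learning rate

data = [(x / 10.0, 2.0 * x / 10.0) for x in range(-10, 11)]  # target: y = 2x

for epoch in range(2000):
    for x, y in data:
        # forward pass
        h = [math.tanh(w1[i] * x + b1[i]) for i in range(2)]
        y_hat = sum(w2[i] * h[i] for i in range(2)) + b2
        # backward pass: propagate the error d(MSE)/d(y_hat) layer by layer
        d_out = 2.0 * (y_hat - y)
        for i in range(2):
            d_h = d_out * w2[i] * (1.0 - h[i] ** 2)  # tanh'(z) = 1 - tanh(z)^2
            w2[i] -= lr * d_out * h[i]
            w1[i] -= lr * d_h * x
            b1[i] -= lr * d_h
        b2 -= lr * d_out

def predict(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(2)]
    return sum(w2[i] * h[i] for i in range(2)) + b2

print(round(predict(0.5), 3))  # close to the target 2 * 0.5 = 1.0
```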
### Motivation and research goals
The main aim of the research was to investigate how well an ANN can contribute to solving nuclear reactor design problems. This would enable reactor designers to consider using ANNs that provide fast results and solutions before performing more detailed analyses of the reactor design with more resource-consuming computations, such as core simulator runs. The network was then tested for performance in terms of different architectures and other parameters, so that both the accuracy of its predictions and the learning process are optimized. In order for an artificial neural network to learn the appropriate weights of neural connections, suitable training data are necessary, which in this research were generated by the Purdue Advanced Reactor Core Simulator (PARCS) [13].
## 2 Methodology
### General algorithm/procedure
For an easier understanding of the data generation procedure, a general workflow has been created and is presented in Figure 2.1. A more detailed description of each step can be found in the next sections.
### AI Technologies
Due to the growing popularity of the Python programming language in machine learning and broadly understood data science, its easy syntax, and a vast range of available libraries, it was used as the primary programming language for most of the code. Implementation of the ANN was done using TensorFlow 2.0 [14]. The additional power offered by the GPU (Graphics Processing Unit) was used to increase computing power when training the neural network, through the CUDA platform supported by NVIDIA cuDNN (NVIDIA CUDA(r) Deep Neural Network) [15]. With this method, computations are shared between the processor and the graphics unit.
### Core Simulations and PARCS code
In this work, a nuclear reactor core model [16] was simulated with the PARCS v3.2 core neutronics simulator [13]. It is a computer code developed by the University of Michigan for the U.S. Nuclear Regulatory Commission [17][18]. PARCS is a popular tool used for nuclear reactor safety research by universities and governmental agencies. It simulates both steady-state and transient problems. It allows assessment of basic core parameters for the static case, e.g., the effective neutron multiplication factor, control rod worth, reactivity coefficients, and others. It can be used to simulate slow, long-term core changes such as the quasi-static fuel cycle or xenon transients. Moreover, it allows the simulation of more rapid transients with neutron kinetics phenomena, such as control rod ejection. Finally, it simulates neutron kinetics in a coupled mode with thermal-hydraulics system codes (like TRACE) during accidents (e.g., Main Steam Line Break or Anticipated Transient Without SCRAM). PARCS uses the nodal diffusion approach and dedicated numerical methods to solve neutron transport relatively quickly.
Figure 1.1: Backpropagation algorithm described in [12].

In this study, PARCS was used to find the effective neutron multiplication factor (\(k_{eff}\)) of a nuclear system as a function of long-term operation and fuel burnup (isotopic depletion) up to the end of a cycle. The search for \(k_{eff}\) demands a solution of the static criticality problem, which is the eigenvalue problem for a reactor in a steady state (or quasi-steady state), given by the general Equation 2.1,
\[M\phi=\frac{1}{k_{eff}}F\phi \tag{2.1}\]
Where \(\phi\) is neutron flux (eigenvector or eigenfunction), \(k_{eff}\) is an eigenvalue, M is the migration matrix (operator) describing various neutron leaks and losses, F is the fission matrix (operator) responsible for the neutron generation. Forms of M and F matrices are dependent on the formulation of the problem and applied approximations (like diffusion approximation) used to solve the Neutron Transport Equation [19].
For a nuclear system to be in a steady-state, a neutron population has to be time independent (constant). In the Equation 2.1 losses given by the left hand side (LHS) have to be equal to the neutron production given by the right hand side (RHS). In a real physical nuclear system with a self-sustaining fission chain reaction (without external sources), the steady-state condition is only possible when RHS and LHS are equal to each other and \(k_{eff}=1.0\). Otherwise, a system is in a transient state, supercritical when \(k_{eff}>1.0\) and subcritical when \(k_{eff}<1.0\).
In the numerical analysis of nuclear systems, it is useful to solve the static problem instead of the transient problem and calculate the eigenvalue given by Equation 2.1 with \(k_{eff}\neq 1.0\). In order to guarantee the correctness of Equation 2.1, the production term (RHS) has to be re-scaled by the scaling factor, i.e., the eigenvalue (\(k_{eff}\)). In effect, the eigenvalue quantifies the neutron multiplication potential of the system, and in practice it allows engineers to assess whether a considered nuclear system is able to maintain a fission chain reaction. To solve the problem, PARCS uses the Wielandt eigenvalue shift method and a Krylov CMFD solver [17].
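The fission-source iteration underlying such eigenvalue solvers can be sketched on a toy two-group problem. The operators below are illustrative numbers, not PARCS data, and the plain power method stands in for the Wielandt-shifted Krylov solver:

```python
def solve2(A, b):
    """Direct solution of a 2x2 linear system A x = b (Cramer's rule)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def matvec(A, x):
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def keff_power_iteration(M, F, iters=100):
    """Dominant eigenvalue of M phi = (1/k) F phi by power iteration
    on A = M^{-1} F (fission-source iteration)."""
    phi, k = [0.5, 0.5], 1.0
    for _ in range(iters):
        y = solve2(M, matvec(F, phi))        # y = M^{-1} F phi
        k = (y[0] + y[1]) / (phi[0] + phi[1])  # eigenvalue estimate
        phi = [y[0] / k, y[1] / k]             # renormalise the flux shape
    return k, phi

# Illustrative two-group operators: removal/scattering in M, fission in F.
M = [[0.10, 0.00], [-0.02, 0.08]]
F = [[0.005, 0.12], [0.00, 0.00]]
k, phi = keff_power_iteration(M, F)
print(round(k, 4))  # 0.35 for this toy system (subcritical, k_eff < 1)
```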
Figure 2.1: General workflow of the developed computational framework used for preparing data for ANN learning process including running the simulations, calculation of selected parameters, modification and transformation of data.
Furthermore, an essential neutronics-related parameter studied in this work is the so-called reactivity. It is defined as the net relative difference to the critical state (\(\rho=0\) for \(k_{eff}=1.0\)) given by Equation 2.2.
\[\rho=\frac{k_{eff}-1}{k_{eff}} \tag{2.2}\]
In this work, a fuel cycle length (measured in days) is considered as the time from the reactor's start-up with some initial excess reactivity level (\(k_{eff}>1.0\)) until the moment in time when excess reactivity of the core drops below zero (\(k_{eff}\leq 1.0\)) due to fuel burnup and accumulation of neutron poisons. A self-sustaining nuclear fission chain reaction cannot be sustained in a system where the reactivity is below zero and from the practical point of view it means the end of a cycle. In practice, some parameters like power and temperature can be reduced to extend core operation time (stretched-out operation) but this situation is not studied in this work.
### 2.4 BEAVRS Benchmark
The investigated core is a Westinghouse 4-loop Pressurized Water Reactor (PWR) with a thermal power of 3411 MWt. It was defined in the BEAVRS (Benchmark for Evaluation And Validation of Reactor Simulations) benchmark published by the MIT Computational Reactor Physics Group [20]. The BEAVRS first fuel cycle was studied, and its core loading pattern is presented in Figure 2.2. The first core contains nine fuel assembly types with a 17x17 lattice design and three enrichments (1.6, 2.4, and 3.1 wt%), with different numbers of borosilicate Burnable Absorber (BA) rods per assembly: 0, 6, 12, 15, 16, or 20. Details of the fuel assemblies are presented in Table 1.
### 2.5 Data generation
A fuel assembly configuration in a core is called a loading pattern. Selecting a loading pattern is the first step in predicting its behavior in simulations using PARCS. In this work, it was assumed that the reactor core has 1/8th symmetry. For the tested core with its 17x17 assembly arrangement, a vector of 32 elements with values in the range 1-9 was drawn randomly. Ultimately, the number of different core configurations generated was \(N=10000\); an exemplary configuration is plotted in Figure 2.3.
In order to accelerate data generation, PARCS was launched in multiple processes, which allowed parallel simulations of many different configurations. This was achieved using the multiprocessing library for Python.
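A minimal sketch of the pattern-drawing step follows (the PARCS launches themselves are omitted; in the paper each generated pattern was dispatched to a parallel PARCS run via the multiprocessing library). The seed is illustrative:

```python
import random

N_ASSEMBLY_TYPES = 9  # fuel assembly types 1-9 (Table 1)
N_POSITIONS = 32      # assembly positions in the symmetric 1/8th of the core

def random_loading_pattern(rng):
    """One random core loading pattern: 32 assembly-type indices in 1-9."""
    return [rng.randint(1, N_ASSEMBLY_TYPES) for _ in range(N_POSITIONS)]

rng = random.Random(42)  # seed chosen only for reproducibility of this sketch
patterns = [random_loading_pattern(rng) for _ in range(10000)]
print(len(patterns), len(patterns[0]))  # 10000 patterns of 32 entries each
```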
Figure 2.2: BEAVRS Benchmark reactor core. Based on [16].
### 2.6 Data gathering
The data gathering process began with reading selected lines from PARCS output files. The read vectors containing the selected values were transposed to be placed as columns in the target matrix with values ready for the process of neural network learning.
In the data matrix, each row was prepared to contain the information for one separate configuration and its features based on the \(k_{eff}\) values calculated with PARCS. Thus, the first 32 integers, corresponding to the fuel assemblies, represented the configuration of the randomly generated 1/8th of the core; the following values described the evolution of \(k_{eff}\). The last column contained the length of the cycle, calculated by linear interpolation between the two burn-up steps at which the value of \(k_{eff}\) drops below 1.0 (or \(\rho<0\)). This method seems justified, as the behavior of \(k_{eff}\) is close to linear at the end of the cycle.
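The end-of-cycle interpolation described above can be sketched as follows, together with the reactivity definition of Eq. (2.2); the burn-up steps and \(k_{eff}\) values below are illustrative:

```python
def reactivity(k_eff):
    """Eq. (2.2): rho = (k_eff - 1) / k_eff."""
    return (k_eff - 1.0) / k_eff

def cycle_length(days, k_eff):
    """Cycle length: linearly interpolate the time at which k_eff drops
    below 1.0 (rho < 0) between two consecutive burn-up steps."""
    steps = list(zip(days, k_eff))
    for (t0, k0), (t1, k1) in zip(steps, steps[1:]):
        if k0 >= 1.0 > k1:
            return t0 + (k0 - 1.0) / (k0 - k1) * (t1 - t0)
    return days[-1]  # k_eff never dropped below 1.0 within the simulated cycle

days = [0, 50, 100, 150, 200, 250]          # illustrative burn-up steps
k = [1.05, 1.06, 1.04, 1.02, 1.005, 0.995]  # illustrative k_eff evolution
print(round(cycle_length(days, k), 3))  # crosses 1.0 between days 200 and 250 -> 225.0
```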
Before starting the learning process, the input data was modified by converting the fuel assembly numbers (integers 1-9) into the cycle lengths of cores built from only the corresponding assembly type. In this way, the value "1" was changed to the cycle length for a core consisting only of assemblies of type "1", the value "2" to the cycle length for a core made only of assemblies of type "2", and so on. It was believed that this would improve the learning process, as after this conversion the input data could reflect, to some extent, the features of the fuel assemblies used in the core.
\begin{table}
\begin{tabular}{c c c c} \hline Fuel assembly number & Name & Enrichment [\%] & Number of BA Rods \\ \hline
1 & FA1 & 1.6 & 0 \\
2 & FA2 & 2.4 & 0 \\
3 & FA3 & 2.4 & 12 \\
4 & FA4 & 2.4 & 16 \\
5 & FA5 & 3.1 & 0 \\
6 & FA6 & 3.1 & 6 \\
7 & FA7 & 3.1 & 15 \\
8 & FA8 & 3.1 & 16 \\
9 & FA9 & 3.1 & 20 \\
10 & REFLECTOR & & \\ \hline \end{tabular}
\end{table}
Table 1: Details of the BEAVRS first fuel cycle fuel assemblies used in this work.
Figure 2.3: Example of generated configuration of the reactor core (with highlighted 1/8 symmetry).
Also, as \(k_{eff}\) usually does not deviate much from 1.0, reactivity (Equation 2.2) is used instead of \(k_{eff}\) in the rest of the study, for easier assessment of the performance of the ANN and comparison of results (especially relative values).
### Exploratory Data Analysis and data preparation
In order to get acquainted with the data set, some methods known from EDA (Exploratory Data Analysis) were used, such as the calculation of basic statistics and the visualization of the relationship between different parameters shown in Figure 2.4.
At first glance, a visible linear correlation between reactivity at the start of the cycle and maximal reactivity is observed. It can be explained by the occurrence of maximal reactivity in a cycle, which always falls at the second step of the simulation. This is likely due to variations in the level of xenon and other poisons, which can be observed in later stages of the research when predicting the reactivity progression in time. A second, almost linear correlation is visible between cycle length and the reactivity at the end of the cycle. Again, this is explained by how reactivity progresses in time: after reaching its maximum, it shows a decreasing, nearly linear trend.
Data prepared for training the artificial neural network were split in 80/20 proportions and separated into features \(x\) (columns of the data matrix used for learning) and labels \(y\) (output values to be predicted). The outputs of the ANN are values corresponding to the normalized maximum reactivity, the reactivity over the course of the cycle (time dependence), denoted \(rho1,rho2,rho3\) and thereafter every second reactivity value (\(rho4,rho6,rho8,...\)), and the cycle length of the nuclear reactor core. Based on these values, the normalization layer of the neural network was adapted to transform the data \(x\) into a dataset with mean \(\mu=0\) and standard deviation \(\sigma=1\). This standardization layer was implemented using the Keras API Layers [21]. Due to the need to reverse the label normalization, an additional mechanism was designed using the StandardScaler class from the _sklearn.preprocessing_ library [22].
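A minimal stand-in for the standardization step and its inversion (mirroring the behaviour of a normalization layer and of sklearn's StandardScaler, implemented here in plain Python on illustrative cycle-length values):

```python
import math

def fit_standardizer(column):
    """Mean and standard deviation of one feature column."""
    mu = sum(column) / len(column)
    var = sum((v - mu) ** 2 for v in column) / len(column)
    return mu, math.sqrt(var)

def standardize(column, mu, sigma):
    """Transform values so the column has mean 0 and std 1."""
    return [(v - mu) / sigma for v in column]

def inverse_standardize(column, mu, sigma):
    """Undo the normalization to recover physical units for predictions."""
    return [v * sigma + mu for v in column]

cycle_lengths = [380.0, 410.0, 395.0, 425.0]  # illustrative label values [days]
mu, sigma = fit_standardizer(cycle_lengths)
z = standardize(cycle_lengths, mu, sigma)
back = inverse_standardize(z, mu, sigma)
print(round(sum(z) / len(z), 12), back)  # mean ~0; original values recovered
```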
Figure 2.4: Pairplot of the data used for learning the ANN. Labels correspond to reactivity at the start of the cycle (\(rho\_start\)), maximal reactivity (\(rho\_max\)), reactivity close to the end of the cycle (\(rho\_69\)), and cycle length in days (\(cycle\_length\_in\_days\)).
### Study of ANN architecture
One of the essential elements of an ANN is its architecture: the number of layers, the number of neurons in each of them, and the activation functions of each layer. Choosing the number of hidden layers is more straightforward than selecting the optimal number of neurons in each of them, owing to the simple classification of problems in terms of the number of layers. Having only the input and output layers allows solving linear problems, such as the classification of linearly separable objects. Adding a single hidden layer makes it possible for a neural network to approximate virtually any function that continuously maps input values to a finite solution space. A neural network with two hidden layers allows solving complex problems without significant limitations. Based on [23], there is currently no theoretical basis for a neural network to have more than two hidden layers, since two are sufficient for virtually any problem; therefore, it was decided that the studied ANN would have two hidden layers.
With a fixed number of layers in the architecture, it is necessary to decide how many neurons each layer contains and which activation function performs best in predicting the operation parameters of the nuclear reactor core. In addition, between the two hidden layers and between the last hidden layer and the output layer, a so-called dropout layer was added, zeroing the input value of a given neuron at a certain frequency during network training. Randomly zeroing certain connections in a neural network allows better prediction accuracy to be achieved, as presented in [24].
The process of hyperparameter tuning was based on nested loops that ran the neural network training with different combinations of neurons in the layers and dropout parameters. When studying the architecture of a neural network, several processes must also be mentioned, such as the initialization of connection weights and the selection of an appropriate activation function. For this reason, initialization with the uniform Glorot distribution, also called the Xavier distribution, presented in Equation 2.3 was used (\(n_{in}\) denotes the number of neurons in the preceding layer, and \(n_{out}\) the number in the following layer). The use of such a distribution ensures that no layer contributes disproportionately to the learning process, as all layers have a similar contribution [25].
\[W\sim Uniform\bigg{(}-\frac{\sqrt{6}}{\sqrt{n_{in}+n_{out}}},\frac{\sqrt{6}}{ \sqrt{n_{in}+n_{out}}}\bigg{)} \tag{2.3}\]
As an activation function, the Gaussian Error Linear Units (GELU) was chosen for every layer, as in [26] it was suggested that GELU performs better than ReLU and ELU activation functions.
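For the architecture ultimately selected in Section 3.2 (32 input, two hidden layers of 64, and 38 output neurons), the Glorot bounds of Eq. (2.3), the GELU activation, and the trainable parameter count can be checked directly; dropout layers add no parameters:

```python
import math

LAYERS = [32, 64, 64, 38]  # input, two hidden layers, output

total = 0
for n_in, n_out in zip(LAYERS, LAYERS[1:]):
    total += n_in * n_out + n_out  # weight matrix plus biases
    limit = math.sqrt(6.0 / (n_in + n_out))  # Glorot uniform bound, Eq. (2.3)
    print(f"{n_in:>3} -> {n_out:<3}: weights ~ U(-{limit:.3f}, {limit:.3f})")

def gelu(x):
    """GELU activation: x * Phi(x), with Phi the standard normal CDF."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

print("trainable parameters:", total)  # 2112 + 4160 + 2470 = 8742
```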
## 3 Results and discussion
### Analysis of the data gathering process
The process of data generation resulted in 10 000 random pattern calculations with about 70 GB of data (PARCS simulations output data). The data collection process contributed to the creation of a 9.82 MB dataset with modified fuel assembly information (fuel cycle length for core consisting of only one corresponding assembly type). The memory occupied by both datasets was only approximately 0.04% of the simulation output files, representing a considerable reduction in the size of the data that would have to be read from many different directories to create training data for a neural network. It should be taken into account that the data collection process consisted of extracting only part of the information contained in the simulation output files, so it is more about organizing and structuring the training data rather than compression.
### Optimal ANN architecture analysis
Based on the ANN learning process for different hyperparameters, an attempt was made to determine which configuration works best for predicting the operation parameters of the considered PWR reactor core. For this purpose, the dependence of the loss function on subsequent batches was prepared, shown in Figure 3.1a and Figure 3.1b, where each run of a given combination is named in the format:
\[run-\alpha-NH1-NH2-\delta\;(val), \tag{3.1}\]
with the optional note "val" indicating that the loss function is calculated for validation (test) data rather than for training data in a given epoch, where \(\alpha\) is a number denoting the running combination of hyperparameters, \(NH1\) is the number of neurons in the first hidden layer, \(NH2\) is the number of neurons in the second hidden layer, and \(\delta\) indicates the dropout frequency (abandonment of a given connection), defined as the percentage of connections whose weights are set to zero in each update cycle.
Based on Figure 3.1a and Figure 3.1b, it is difficult to determine the best architecture of the ANN. This is largely due to a certain randomness of the learning process, resulting from shuffling the dataset and the chance of zeroing important connections through dropout. For some architectures, one can notice an early problem of so-called overfitting, in which the connection weights are fitted too closely to the learning dataset. As a result, the test data are recognized incorrectly (with lower accuracy). An example of such an architecture is a network consisting of 64 neurons in the first hidden layer and 128 neurons in the second hidden layer with a dropout value of 0.05 (run-64-128-0.05). Again, a certain randomness in network learning must be mentioned: the loss function of this architecture can also be high due to an unfortunate dropout of important connections, resulting from the equal probability of choosing each connection. Because the architecture consisting of 64 neurons in both hidden layers performed well (one of the lowest values of the loss function and only a slight difference between the loss on the learning set and the test set), it was chosen for the analysis of the prediction of nuclear reactor core parameters, at a dropout value of 0.1. The selected neural network is shown in Figure 3.2, where the colors of the connections denote the initialized initial values, and the additional neuron in the two hidden layers represents the bias.
### Prediction of selected core parameters
Having the ANN with 32 neurons in the input layer, 64 in each hidden layer, and 38 neurons in the output layer, the normalized reactivity values could be scaled back to their original range, yielding the reactivity as a function of time in the fuel cycle shown in Figure 3.3. The average relative error of the prediction was 0.43%, and the absolute error was 0.0024. Noticeably, in the initial days of the cycle the deviation between real and predicted data differs from the rest of the cycle, where the prediction coincides very well with the actual data. This is likely due to variations in the levels of xenon and other poisons, which can be highly nonlinear and thus hard to predict. It suggests putting more emphasis on the first half of the cycle in future work: since the processes at the beginning of the cycle are more complex, they may require denser sampling or an additional, more advanced model.
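The relative and absolute errors quoted above are standard mean error metrics, which can be sketched as follows (the data series is illustrative, not the study's actual reactivity values):

```python
def mean_absolute_error(actual, predicted):
    # Average of |a - p| over all samples.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mean_relative_error(actual, predicted):
    # Per-sample relative error |a - p| / |a|, averaged; assumes
    # the actual values are nonzero.
    return sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)

# Toy reactivity-like series: actual values vs. slightly perturbed predictions.
actual = [0.50, 0.48, 0.46, 0.44]
predicted = [0.51, 0.48, 0.45, 0.44]
mae = mean_absolute_error(actual, predicted)
mre = mean_relative_error(actual, predicted)
```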
The comparison of actual and predicted fuel cycle lengths yields a relative error of 0.91% and an absolute error of 3.88 days, which indicates very good prediction accuracy. In addition, for
Figure 3.1(a): Process of learning the ANN, shown as the Mean Squared Error (MSE) as a function of batch number.
Figure 3.3: Core reactivity as a function of the day of the fuel cycle for PARCS-calculated (real) and ANN-predicted data.
Figure 3.2: Visualization of the studied artificial neural network. Created using NN-SVG visualizer [27].
this data, the Pearson correlation coefficient was calculated at \(\rho\approx 0.98\), which confirms a nearly linear relationship between predicted and actual values. Comparing these results with the uncertainties of a simulator that accounts for neutron diffusion and with the uncertainties of nuclear data, the values obtained from the neural network predictions show very similar, and sometimes lower, levels of uncertainty.
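The Pearson coefficient quoted above is the standard sample statistic; a minimal sketch with illustrative data:

```python
import math

def pearson(x, y):
    # Sample Pearson correlation: covariance normalized by the
    # product of the standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear relation gives rho = 1; noisy predictions give rho < 1.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
exact = [2 * v + 1 for v in xs]
noisy = [3.1, 4.9, 7.2, 9.0, 11.1]
```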
In addition, in order to test the accuracy of the predicted fuel cycle length values, a histogram was prepared to compare the distributions of actual and predicted values, shown in Figure 3.5. Normal distribution curves were fitted to the histograms, from which the mean \(\mu\) and standard deviation \(\sigma\) were obtained.
Figure 3.4: Predicted fuel cycle length compared to the real value.
Figure 3.5: Distribution of the real and predicted values of fuel cycle length.
The analysis of the fuel cycle length distributions shows that they have very similar parameters: the mean values are approximately equal, and the standard deviations differ only slightly. This similarity of the distributions further indicates good predictive performance of the ANN.
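Comparing the two distributions through their fitted mean and standard deviation can be sketched as follows (the sample values are toy data, not the study's fuel cycle lengths):

```python
import math

def mean_and_std(samples):
    # Sample mean and (population) standard deviation, the parameters
    # of the fitted normal curve.
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((s - mu) ** 2 for s in samples) / n)
    return mu, sigma

# Toy fuel-cycle lengths in days (illustrative).
real = [410.0, 420.0, 430.0, 440.0]
pred = [412.0, 418.0, 432.0, 438.0]
mu_r, s_r = mean_and_std(real)
mu_p, s_p = mean_and_std(pred)
```

Here the means coincide while the deviations differ only slightly, mirroring the qualitative comparison above.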
## 4 Conclusions and Summary
The aim of this work was to design and implement an artificial neural network that can calculate selected operating parameters of a nuclear reactor core. The study of the network architecture produced similar results for many different architectures, which did not allow the best number of neurons in each layer to be determined unambiguously. For some combinations, the phenomenon of overfitting was observed, i.e., excessive matching to the training data resulting in a worse loss on the test data. Finally, the best neural network architecture for predicting the operating parameters of the nuclear reactor core was found to be a network with 64 neurons in each hidden layer and a dropout value of 0.1 applied between the hidden layers and between the last hidden layer and the output layer.

The research provides insight into the possibility of using artificial neural networks for problems related to nuclear energy. It leaves a wide range of further research directions, such as improving the training dataset, which, with the right amount of information, can increase prediction accuracy, and adding other predicted parameters such as the concentration of poisons in the reactor core or the fuel temperature. This could allow the creation of a core simulator based on an artificial neural network, support the design of core configurations through the proper arrangement of fuel assemblies, or enable optimization of the fuel cycle under given constraints. Moreover, the database created from the PARCS output files contains a large number of parameters that were not used in this study, which in the long run may be exploited to increase prediction accuracy and to compute additional core parameters.
## 5 Acknowledgments
Research was funded by Warsaw University of Technology within the Excellence Initiative: Research University (IDUB) programme.
Source code used for the research may be found at [https://github.com/dazeeeed/neural-physics](https://github.com/dazeeeed/neural-physics) under the GNU GPL v3.0 license.
---

2305.05375 | Jingyue Liu, Pablo Borja, Cosimo Della Santina | 2023-05-09T12:16:08Z | [http://arxiv.org/abs/2305.05375v3](http://arxiv.org/abs/2305.05375v3)

# Physics-informed Neural Networks to Model and Control Robots: a Theoretical and Experimental Investigation
###### Abstract
This work concerns the application of physics-informed neural networks to the modeling and control of complex robotic systems. Achieving this goal required extending Physics Informed Neural Networks to handle non-conservative effects. We propose to combine these learned models with model-based controllers originally developed with first-principle models in mind. By combining standard and new techniques, we can achieve precise control performance while proving theoretical stability bounds. These validations include real-world experiments of motion prediction with a soft robot and of trajectory tracking with a Franka Emika manipulator.
Physics-informed Neural Networks to Model and Control Robots: a Theoretical and Experimental Investigation
Jingyue Liu\({}^{1,*}\) Pablo Borja\({}^{2}\) Cosimo Della Santinal\({}^{1,3}\)
\({}^{1}\)Department of Cognitive Robotics, Delft University of Technology, Delft 2628 CD, The Netherlands {J.Liu-14, C.DellaSantina}@tudelft.nl
\({}^{2}\)School of Engineering, Computing and Mathematics, University of Plymouth, Plymouth PL4 8AA, United Kingdom. [email protected]
\({}^{3}\)Institute of Robotics and Mechatronics German Aerospace Center (DLR) Oberpfaffenhofen 82234, Germany
\({}^{*}\)Corresponding author
Keywords: _Physics-inspired neural networks, Hamiltonian neural networks, Lagrangian neural networks, model-based control, dissipation, Euler-Lagrange equations, port-Hamiltonian systems_
## 1 Introduction
Deep Learning (DL) has made significant strides across various fields, with robotics being a salient example. DL has excelled in tasks such as vision-guided navigation [1], grasp-planning [2], human-robot interaction [3], and even design [4]. Despite this, the application of DL to generate motor intelligence in physical systems remains limited. Deep Reinforcement Learning, in particular, has shown the potential to outperform traditional approaches in simulations [5, 6, 7]. However, its transfer to physical applications has been primarily hampered by the prerequisite of pre-training in a simulated environment [8, 9, 10].
The central drawback of general-purpose DL lies in its sample inefficiency, stemming from the need to distill all aspects of a task from data [11, 12]. In response to these challenges, there is a rising trend in robotics to explicitly incorporate geometric priors into data-driven methods to improve learning efficiency [13, 14, 15]. This approach proves especially advantageous for high-level tasks that need not engage with the system's physics. Physics-inspired neural networks [16, 17, 18], infusing fundamental physics knowledge into their architecture and training, have found success in various fields outside robotics, from earth science to materials science [19, 20, 21, 22]. In robotics, the integration of Lagrangian or Hamiltonian mechanics with deep learning has yielded models like Deep Lagrangian Neural Networks (LNNs) [23] and Hamiltonian Neural Networks (HNNs) [24]. Several extensions have been proposed in the literature, for example, including contact models [25] or proposing graph formulations [26]. The potential of Lagrangian and Hamiltonian Neural Networks in learning the dynamics of basic physical systems has been demonstrated in various studies [27, 28, 29, 18]. However, the exploration of these techniques in modeling intricate robotic structures, especially with real-world data, is still in its early stages. Notably, [30] applied these methods to a position-controlled robot with four degrees of freedom, which represents a relatively less complex system in comparison to contemporary manipulators.
This work deals with the experimental application of PINNs to rigid and soft continuum robots [31]. Such an endeavor required modifying LNNs and HNNs to fix three issues that prevented their application to these systems: (i) the lack of an energy dissipation mechanism, (ii) the assumption that control actions are collocated on the measured configurations, (iii) the need for direct acceleration measurements, which are non-causal and require numerical differentiation. For issue (iii), we borrow a strategy proposed in [32, 33], which relies on forward integrating the dynamics, while for (i) and (ii), we propose innovative solutions.
Furthermore, we exploit a central advantage of LNNs and HNNs compared to other learning techniques; the fact that the learned model has the mathematical structure that is usually assumed in robots and mechanical systems control. By forcing such a representation, we use model-based strategies originally developed for first principle models [34, 35, 36] to obtain provably stable performance with guarantees of robustness.
The use of PINNs in control has only recently started to be explored. Recent investigations [37, 38, 33] focused on combining PINNs with model predictive control (MPC), thus not exploiting the mathematical structure of the learned equations. Indeed, this strategy is part of an increasingly established trend seeking the combination of (non-PI and non-deep) learned models with MPC [39, 40]. Applications to control PDEs are discussed in [41, 42], while an application to robotics is investigated in simulation in [43]. Preliminary investigations in other model-based techniques are provided in [30, 44], where, however, controllers are provided without any guarantee of stability or robustness and formulated for specific cases.
To summarize, in this work, we contribute to the state of the art in PINNs and robotics with the following:
1. An approach to include dissipation and allow for non-collocated control actions in Lagrangian and Hamiltonian neural networks, solving issues (i) and (ii).
2. Controllers for regulation and tracking, grounded in classic nonlinear control that exploit the mathematical structure of the learned models. For the first time, we prove the stability and robustness of these strategies.
3. Simulations and experiments on articulated and soft continuum robotic systems. To the Authors' best knowledge, these are the first validations of PINNs and PINN-based control applied to complex mechanical systems.
## 2 Preliminaries
### Lagrangian and Hamiltonian Dynamics
Robots' dynamics can be represented using Lagrangian or Hamiltonian mechanics. In the former, the state is defined by the generalized coordinates \(q\;\in\;\mathbb{R}^{N}\) and their velocities \(\dot{q}\;\in\;\mathbb{R}^{N}\), where \(N\) represents the configuration space dimension. The Euler-Lagrange equation dictates the system's behavior \(\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L(q,\dot{q})}{\partial\dot{q}}\right)-\frac{\partial L(q,\dot{q})}{\partial q}=F_{\mathtt{ext}}\), where \(L(q,\dot{q})=T(q,\dot{q})-V(q)\), with potential energy \(V\) and kinetic energy \(T(q,\dot{q})=\frac{1}{2}\dot{q}^{\top}M(q)\dot{q}\), where \(M(q)\in\mathbb{R}^{N\times N}\) is the positive definite mass inertia matrix. External forces, denoted as \(F_{\mathtt{ext}}\in\mathbb{R}^{N}\), include control inputs and dissipation forces.
In Hamiltonian mechanics, momenta \(p\in\mathbb{R}^{N}\) replace the velocities, with \(\dot{q}=M^{-1}(q)p\). The Hamiltonian equations are \(\dot{q}=\frac{\partial H(q,p)}{\partial p},\quad\dot{p}=-\frac{\partial H(q,p)}{\partial q}+F_{\mathtt{ext}}\), where \(H(q,p)=T(q,p)+V(q)\) is the total energy. The kinetic energy in this case is defined as \(T(q,p)=\frac{1}{2}p^{\top}M^{-1}(q)p\).
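As a concrete instance of Hamilton's equations, a frictionless pendulum with \(H(q,p)=p^{2}/(2ml^{2})+mgl(1-\cos q)\) can be integrated and checked for energy conservation. The sketch below uses assumed toy parameters and a semi-implicit (symplectic) Euler scheme, which keeps the energy drift bounded:

```python
import math

# Frictionless pendulum: H(q, p) = p**2 / (2*m*l**2) + m*g*l*(1 - cos(q)).
m, l, g = 1.0, 1.0, 9.81

def H(q, p):
    return p**2 / (2 * m * l**2) + m * g * l * (1 - math.cos(q))

def dH_dq(q, p):
    return m * g * l * math.sin(q)

def dH_dp(q, p):
    return p / (m * l**2)

# Hamilton's equations with F_ext = 0: qdot = dH/dp, pdot = -dH/dq,
# integrated with semi-implicit Euler (update p first, then q).
q, p, dt = 0.5, 0.0, 1e-3
E0 = H(q, p)
for _ in range(5000):
    p -= dt * dH_dq(q, p)
    q += dt * dH_dp(q, p)
drift = abs(H(q, p) - E0)  # stays small: the flow conserves H
```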
### LNNs and HNNs
Lagrangian Neural Networks (LNNs) employ the principle of least action to learn a Lagrangian function \(\mathcal{L}(q,\dot{q})\) from trajectory data, with the learned function generating dynamics via the standard Euler-Lagrange machinery [34]. The loss function for the LNN is given by the Mean Squared Error between the actual accelerations \(\ddot{q}\) and the ones that the learned model would predict, \(\ddot{\hat{q}}\),

\[\mathcal{L}_{\mathrm{LNN}}=\mathrm{MSE}(\ddot{q},\ddot{\hat{q}}). \tag{1}\]
HNNs, conversely, are designed to learn the Hamiltonian function \(H(p,q)\). Once learned, this Hamiltonian function provides dynamics through Hamilton's equations. The loss function for HNN is similarly an MSE but between the predicted and actual time derivatives of generalized coordinates and momenta:
\[\mathcal{L}_{\mathrm{HNN}}=\mathrm{MSE}((\dot{q},\dot{p}),(\dot{\hat{q}}, \dot{\hat{p}})). \tag{2}\]
We use fully connected neural networks with multiple layers of neurons with associated weights to learn the Lagrangian or the Hamiltonian, shown in Figure 1.
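A forward pass through such a fully connected network is a chain of affine maps followed by nonlinear activations. The sketch below uses two hidden layers with tanh activations; all sizes and the random initialization are illustrative assumptions:

```python
import math
import random

def dense(x, W, b):
    # y_i = sum_j W[i][j] * x[j] + b[i]
    return [sum(w * v for w, v in zip(row, x)) + bi for row, bi in zip(W, b)]

def mlp_forward(x, layers):
    # layers: list of (W, b, activation) tuples applied in sequence.
    for W, b, act in layers:
        x = [act(v) for v in dense(x, W, b)]
    return x

rng = random.Random(42)

def rand_layer(n_in, n_out, act):
    # Toy initialization with small uniform weights and zero biases.
    W = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
    return (W, [0.0] * n_out, act)

identity = lambda v: v
# Illustrative shape: 4 inputs -> 8 -> 8 -> 2 outputs.
net = [rand_layer(4, 8, math.tanh),
       rand_layer(8, 8, math.tanh),
       rand_layer(8, 2, identity)]
y = mlp_forward([0.1, -0.2, 0.3, 0.0], net)
```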
### Limits of classic LNNs and HNNs
Note that both loss functions rely on measuring derivatives of the state, \(\ddot{q}\) and \(\dot{p}\), which, by definition of state, cannot be directly measured. This issue is easily circumvented in simulation by the use of a non-causal sensor. Yet, this is not a feasible solution in physical experiments. A non-robust alternative is to estimate these values numerically from measurements of positions and velocities. This relates to issue (iii), stated in the introduction.
Moreover, existing LNNs and HNNs assume that \(F_{\mathtt{ext}}\in\mathbb{R}^{N}\) is directly measured. This is a reasonable hypothesis only if the system is conservative, fully actuated, and the actuation is collocated. The first characteristic is never fulfilled by real systems, while the second and third are very restrictive when dealing with innovative robotic solutions such as soft [31] or flexible robots [45]. Note that learning-based control is imposing itself as a central trend in these non-conventional robotic systems [46]. These considerations relate to issues (i) and (ii) stated in the introduction.
## 3 Proposed algorithms
### A learnable model for non-conservative forces
In standard LNNs and HNNs theory, non-conservative forces are assumed to be fully known and to be equal to actuation forces directly acting on the Lagrangian coordinates \(q\). This is very restrictive, as already discussed in the introduction.
In this work, we include external forces given by dissipation and actuation, i.e., \(F_{\mathtt{ext}}\,=\,F_{\mathtt{d}}(q,\dot{q})\,+\,F_{\mathtt{a}}(q).\) We propose the following model for the dissipation forces
\[F_{\mathtt{d}}(q,\dot{q})=-D(q)\dot{q}, \tag{3}\]
where \(D(q)\in\mathbb{R}^{N\times N}\) is the positive semi-definite damping matrix. Besides, we model the actuator force as
\[F_{\mathtt{a}}(q)=A(q)u, \tag{4}\]
where \(u\,\in\mathbb{R}^{M}\) is the control input signal to the system, and \(A(q)\,\in\mathbb{R}^{N\times M}\) is an input transformation matrix. For example, \(A\) could be the transpose Jacobian associated with the point of application of an actuation force on the structure. With this model, we take into account that in complex robotic systems, actuators are, in general, not collocated on the measured configurations \(q\). Note that, even if we accepted to impose an opportune change of coordinates, for some systems, a representation without \(A\) is not even admissible [47]. With (4), we also seamlessly treat underactuated systems.
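For a toy 2-DOF system with a single actuator, the non-conservative force model \(F_{\mathtt{ext}}=-D(q)\dot{q}+A(q)u\) of (3) and (4) can be evaluated directly. All matrices and numbers below are illustrative:

```python
def mat_vec(M, v):
    # Plain matrix-vector product on nested lists.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def external_forces(D, A, qdot, u):
    # F_ext = F_d + F_a = -D(q) qdot + A(q) u
    damping = [-f for f in mat_vec(D, qdot)]
    actuation = mat_vec(A, u)
    return [d + a for d, a in zip(damping, actuation)]

# N = 2 configurations, M = 1 input: an underactuated, non-collocated case.
D = [[0.2, 0.0],
     [0.0, 0.1]]   # positive semi-definite damping matrix
A = [[1.0],
     [0.5]]        # input matrix mapping one actuator onto two coordinates
qdot = [1.0, -2.0]
u = [3.0]
F = external_forces(D, A, qdot, u)
```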
Note that [44] uses a dissipative model, but considers it in a white box fashion.
Hence, we rewrite the Lagrangian dynamics as follows
\[\ddot{q}=\left(\frac{\partial^{2}L(q,\dot{q})}{\partial\dot{q}^{2}}\right)^{- 1}\left(A(q)u-\frac{\partial^{2}L(q,\dot{q})}{\partial q\partial\dot{q}}\dot{ q}+\frac{\partial L(q,\dot{q})}{\partial q}-D(q)\dot{q}\right). \tag{5}\]
Figure 1: Fully connected network.
Similarly, the Hamiltonian dynamics take the form

\[\begin{bmatrix}\dot{q}\\ \dot{p}\end{bmatrix}=\begin{bmatrix}0&I\\ -I&-D(q)\end{bmatrix}\begin{bmatrix}\frac{\partial H(q,p)}{\partial q}\\ \frac{\partial H(q,p)}{\partial p}\end{bmatrix}+\begin{bmatrix}0\\ A(q)\end{bmatrix}u. \tag{6}\]
### Non-conservative non-collocated Lagrangian and Hamiltonian NNs with modified loss
Figure 2 reports the proposed network framework, which builds upon Lagrangian and Hamiltonian NNs discussed in Sec. 2.2. Our work incorporates the damping matrix network, input matrix network, and a modified loss function into the original framework. The damping matrix network is used to account for the dissipation forces in the system via (3), while the input matrix network corresponds to \(A(q)\) in (4). We predict the next state by integrating (5) or (6) with the aid of the Runge-Kutta4 integrator. Clearly, different integration strategies could be used in its place.
The dataset \(\mathfrak{D}\,=\,[\mathcal{D}_{k},\mathcal{T}_{k}|k\,\in\,\{0,...,k_{\texttt{ end}}\}]\) contains information about the state transitions of the mechanical system. With this compact notation, we refer not necessarily to a single evolution, but we include a concatenation of an arbitrary number of evolutions of the system. The input data \(\mathcal{D}_{k}\) is composed of either \([q_{k},\dot{q}_{k},u_{k},\Delta t]\), for Lagrangian dynamics, or \([q_{k},p_{k},u_{k},\Delta t]\) in the case of Hamiltonian dynamics. Similarly, the corresponding label \(\mathcal{T}_{k}\) is either \(\dot{q}_{k+1}\), for the Lagrangian case, or \([q_{k+1},p_{k+1}]\) for Hamiltonian dynamics.
The values of \(M(q)\), \(V(q)\), \(D(q)\), and \(A(q)\) are estimated by four sub-networks, namely, the mass network (M-NN), potential energy network (P-NN), damping network (D-NN), and input matrix network (A-NN), as shown in Figure 2. The kinetic energy can be calculated once the values of \(\dot{q}\) or \(p\) are obtained. Then, the Lagrangian or Hamiltonian functions can be derived from the kinetic and potential energies. The state derivatives \(\ddot{q}\) or \([\dot{q}\;\;\dot{p}]^{\top}\) can be computed using (5) or (6), respectively. The predicted next state \(\dot{\hat{q}}_{k+1}\) or \([\hat{q}_{k+1}\;\;\hat{p}_{k+1}]^{\top}\) can then be obtained using the Runge-Kutta 4 integrator.
Figure 2: The overview of Lagrangian and Hamiltonian neural networks: in red, the data and calculation process required for Lagrangian dynamics, while the green parts represent the corresponding data and calculation associated with the Hamiltonian dynamics.
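The Runge-Kutta 4 rollout used above to predict the next state can be sketched generically. In the sketch below, a damped oscillator stands in for the learned vector field, and all names and parameters are illustrative:

```python
def rk4_step(f, x, u, dt):
    # One fourth-order Runge-Kutta step of xdot = f(x, u) with input held constant.
    k1 = f(x, u)
    k2 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)], u)
    k3 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)], u)
    k4 = f([xi + dt * ki for xi, ki in zip(x, k3)], u)
    return [xi + dt / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# Stand-in dynamics: unit-mass damped oscillator, qddot = u - q - 0.5*qdot.
def f(x, u):
    q, qdot = x
    return [qdot, u[0] - q - 0.5 * qdot]

# Rolling the step forward from x0 = [1, 0] with u = 0 decays toward the origin.
x = [1.0, 0.0]
for _ in range(2000):
    x = rk4_step(f, x, [0.0], 0.01)
```

In training, `f` would be the learned right-hand side (5) or (6), and a single `rk4_step` produces the predicted next state entering the loss.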
We thus employ the following modified losses [32; 33]
\[\mathcal{L}_{\text{LNN}}=\frac{1}{\#\mathcal{D}}\sum_{k\in\mathcal{D}}\parallel \dot{q}_{k+1}-\dot{\hat{q}}_{k+1}\parallel_{2}^{2} \tag{7}\]
for LNNs, where \(\#\mathcal{D}\) is the cardinality of \(\mathcal{D}\), and
\[\mathcal{L}_{\text{HNN}}=\frac{1}{\#\mathcal{D}}\sum_{k\in\mathcal{D}}\left( \parallel q_{k+1}-\hat{q}_{k+1}\parallel_{2}^{2}+\parallel p_{k+1}-\hat{p}_ {k+1}\parallel_{2}^{2}\right) \tag{8}\]
for HNNs. Thus, compared to (1) and (2), we are calculating the MSE of a future prediction of the state, simulated via the learned dynamics, rather than of the current accelerations, which cannot be measured. Note that we also include a measure of the prediction error at the configuration level for \(\mathcal{L}_{\text{HNN}}\) because the information on \(\frac{\partial H(q,p)}{\partial p}\) appears disentangled from \(D\) and \(A\) (which are also learned) in the first \(N\) equations of (6).
### Sub-Network Structures
Constraints based on physical principles can be imposed on the parameters learned by the four sub-networks. Specifically, the mass and damping matrices must be positive definite and positive semi-definite, respectively. To this end, the network structure of the dissipation matrix can follow the prototype established for the mass matrix in [48]. This structure can be decomposed into a lower triangular matrix \(L_{D}\) with non-negative diagonal elements, which is then computed using the Cholesky decomposition [49] as \(D=L_{D}L_{D}{}^{\top}\). The representation of \(D(q)\) is illustrated in Figure 3.
The output of M-NN and D-NN has dimension \((N^{2}+N)/2\), with the first \(N\) values representing the diagonal entries of the lower triangular matrix. To ensure non-negativity, activation functions such as Softplus or ReLU are utilized in the last layer. Furthermore, a small positive shift, denoted by \(+\epsilon\), is introduced to guarantee that the mass matrix is positive definite. The remaining \((N^{2}-N)/2\) values fill the strictly lower triangular part of the matrix.
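The Cholesky-style parameterization, mapping \((N^{2}+N)/2\) raw network outputs to a lower-triangular \(L_{D}\) with softplus-positive diagonal and then forming \(D=L_{D}L_{D}^{\top}\), can be sketched as follows (the raw values here are illustrative stand-ins for network outputs):

```python
import math

def softplus(x):
    return math.log1p(math.exp(x))

def build_psd(raw, n, eps=1e-4):
    # raw has (n**2 + n) // 2 entries: the first n parameterize the diagonal
    # (made positive via softplus + eps), the rest fill the strict lower triangle.
    assert len(raw) == (n * n + n) // 2
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = softplus(raw[i]) + eps
    k = n
    for i in range(1, n):
        for j in range(i):
            L[i][j] = raw[k]
            k += 1
    # D = L @ L.T is symmetric positive definite by construction.
    D = [[sum(L[i][m] * L[j][m] for m in range(n)) for j in range(n)]
         for i in range(n)]
    return D

D = build_psd([0.3, -0.2, 1.5], 2)
```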
The calculation of the potential energy is performed using a simple, fully connected network with a single output, which is represented as \(V(q,\theta_{2})\). Moreover, A-NN, depicted in Figure 4, calculates \(A(q,\theta_{4})\) with dimensions \(\mathbb{R}^{N\times M}\).
### PINN-based controllers
We provide in this section two provably stable controllers by combining the learned dynamics in combination with classic model-based approaches. Before stating these results, it is important to spend a few lines remarking that a globally optimum execution of the learning process described above will result in learning \(M_{\text{L}}\), \(G_{\text{L}}=\frac{\partial V_{\text{L}}}{\partial q}\), \(A_{\text{L}}\), \(D_{\text{L}}\) such that
\[\mathcal{L}(q,\dot{q};M_{\text{L}},G_{\text{L}},A_{\text{L}},D_{\text{L}})= \mathcal{L}(q,\dot{q};M,G,A,D) \tag{9}\]
Figure 3: Diagram of the damping matrix including a feed-forward neural network, a non-negative shift for diagonal entries, and the Cholesky decomposition
for the proposed LNN, or
\[\mathcal{H}(q,\dot{q};M_{\mathsf{L}},G_{\mathsf{L}},A_{\mathsf{L}},D_{\mathsf{L}} )=\mathcal{H}(q,\dot{q};M,G,A,D), \tag{10}\]
for the proposed HNN, where \(M,G,A,D\) are the _real_ reference values. Instead, we highlight the components that have been learned from the ones that are not by adding an L as a subscript. Also, by construction, \(M_{\mathsf{L}},G_{\mathsf{L}},A_{\mathsf{L}},D_{\mathsf{L}}\) will have all the usual properties that we expect from these terms, like \(M_{\mathsf{L}}\) and \(D_{\mathsf{L}}\) being symmetric and positive definite, and \(G_{\mathsf{L}}\) being a potential force.
Yet, this does not imply that \(M=M_{\mathsf{L}}\), \(G=G_{\mathsf{L}}\), and so on. Indeed, there could exist a matrix \(P(q)\) such that \(P(q)M(q)\), \(P(q)G(q)\), \(P(q)A(q)\), \(P(q)D(q)\) have all the properties discussed above while simultaneously fulfilling
\[\mathcal{L}(q,\dot{q};PM,PG,PA,PD)=\mathcal{L}(q,\dot{q};M,G,A,D)\text{ or } \mathcal{H}(q,\dot{q};PM,PG,PA,PD)=\mathcal{H}(q,\dot{q};M,G,A,D). \tag{11}\]
So controllers must be formulated and proofs derived under the assumption of the learned terms being close to the real ones up to a multiplicative factor.
#### 3.4.1 Regulation
The goal of the following controller is to stabilize a given configuration \(q_{\mathsf{ref}}\)
\[u=A_{\mathsf{L}}^{+}(q)G_{\mathsf{L}}(q)+A_{\mathsf{L}}^{\top}(q)(K_{\mathsf{ P}}(q_{\mathsf{ref}}-q)-K_{\mathsf{D}}\dot{q}), \tag{12}\]
where we omit the arguments \(t\) and \(\theta_{i}\) to ease readability, and we distinguish the learned components from the known ones by the subscript L. \(G_{\mathsf{L}}(q)\) is the learned potential force, which can be calculated by taking the partial derivative of the potential energy learned by the LNN; \(K_{\mathsf{P}}\) and \(K_{\mathsf{D}}\) are positive definite control gains.
For the sake of conciseness, we introduce the controller, and we prove its stability for the fully actuated case. However, the controller and the proof can be extended to the generic underactuated case using arguments in [36]. This will be the focus of future work.
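As an illustration of (12) in the fully actuated, perfectly learned case (\(P=I\), \(A_{\mathsf{L}}=A=1\)), consider a 1-DOF pendulum with assumed toy parameters: gravity compensation plus PD feedback drives \(q\) to \(q_{\mathtt{ref}}\).

```python
import math

# Toy 1-DOF pendulum: m*l**2 * qddot = -m*g*l*sin(q) - d*qdot + u, with A = 1.
m, l, g, d = 1.0, 1.0, 9.81, 0.1
kp, kd = 25.0, 10.0
q_ref = 1.0

def control(q, qdot):
    # u = G_L(q) + Kp*(q_ref - q) - Kd*qdot  (perfect learning: G_L = G).
    G = m * g * l * math.sin(q)
    return G + kp * (q_ref - q) - kd * qdot

# Simulate the closed loop with semi-implicit Euler.
q, qdot, dt = 0.0, 0.0, 1e-3
for _ in range(20000):
    u = control(q, qdot)
    qddot = (u - m * g * l * math.sin(q) - d * qdot) / (m * l**2)
    qdot += dt * qddot
    q += dt * qdot
```

The gravity term cancels exactly, so the closed loop reduces to a linear PD-damped system whose unique equilibrium is \(q_{\mathtt{ref}}\), as Corollary 1 states for this case.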
**Proposition 1**.: _Assume that \(M=N\), with \(A\) and \(A_{\mathsf{L}}\) both full rank. Then, given a maximum admitted error \(\delta_{\mathfrak{q}}\), the closed loop of (5) and (12) is such that_
\[\lim_{t\to\infty}q(t)=q_{\mathsf{as}}\text{ with }||q_{\mathsf{as}}-q_{ \mathsf{ref}}||<\delta_{\mathfrak{q}}, \tag{13}\]
_if \(K_{\mathsf{P}},K_{\mathsf{D}}\succ\kappa I\), with \(\kappa\in\mathbb{R}\) high enough, and if it exists a matrix \(P(q)\in\mathbb{R}^{N\times N}\) such that \(||G_{\mathsf{L}}(q)-P(q)G(q)||<\delta_{\mathfrak{q}}\), for some finite and positive \(\delta_{\mathfrak{q}}\). Also, we assume that_
\[A(q)[A_{\mathsf{L}}(q)-P(q)A(q)]^{\top}+A(q)A^{\top}(q)P^{\top}(q)\succ 0, \tag{14}\]
_and that_
\[||A^{-1}(q)P^{-1}(q)[A_{\mathsf{L}}(q)-P(q)A(q)]||<1. \tag{15}\]
Figure 4: Diagram for actuator matrix: The fully connected network output is a vector in \(\mathbb{R}^{NM}\), which is reshaped to a matrix in \(\mathbb{R}^{N\times M}\). A sigmoid activation function can be applied to the matrix elements for value constraint.
**Remark 1**.: _Note that if \(P(q)\succ 0\), then \(A(q)A^{\top}(q)P^{\top}(q)\succ 0\), and (14) translates into another request of \(A_{\mathsf{L}}(q)\) being close enough to \(A(q)\) up to a multiplicative factor \(P(q)\). The positive definiteness of \(P\) is, in turn, a request on the quality of the outcome of the learning process. Indeed, if \(||M_{\mathsf{L}}(q)-P(q)M(q)||\) is small enough, then the positive definiteness of \(M_{\mathsf{L}}\) and \(M\) implies that of \(P\). Similarly, (15) is always verified for small enough \(||A_{\mathsf{L}}(q)-P(q)A(q)||\)._
Proof.: Let us introduce the matrix \(\Delta_{\mathsf{L}}\,\in\,\mathbb{R}^{N\times N}\) such that \(A_{\mathsf{L}}(q)\,=\,P(q)A(q)+\Delta_{\mathsf{L}}(q).\) This matrix is small enough by hypothesis as detailed in Remark 1. We now want to bound the difference between the inverse of \(A(q)\) and \(P(q)A_{\mathsf{L}}(q)\). The goal is to write \(A_{\mathsf{L}}^{-1}(q)=(P(q)A(q))^{-1}+\Delta_{\mathsf{I}}(q)\), with \(||\Delta_{\mathsf{I}}(q)||<\delta_{\mathsf{I}}\).
Under hypothesis (15), we can use the Neumann series [50]
\[A_{\mathsf{L}}^{-1}(q)=(P(q)A(q)+\Delta_{\mathsf{L}}(q))^{-1}=(P(q)A(q))^{-1}- (P(q)A(q))^{-1}\Delta_{\mathsf{L}}(q)(P(q)A(q))^{-1}+\ldots \tag{16}\]
Rearranging terms, we define
\[\Delta_{\mathsf{I}}(q)=A_{\mathsf{L}}^{-1}(q)-(P(q)A(q))^{-1}=-(P(q)A(q))^{-1 }\Delta_{\mathsf{L}}(q)(P(q)A(q))^{-1}+\ldots \tag{17}\]
We can therefore bound the norm of \(\Delta_{\mathsf{I}}(q)\) as follows
\[||\Delta_{\mathsf{I}}(q)||\leq\frac{||(P(q)A(q))^{-1}\Delta_{\mathsf{L}}(q)( P(q)A(q))^{-1}||}{1-||(P(q)A(q))^{-1}\Delta_{\mathsf{L}}(q)||}<\delta_{ \mathsf{I}}. \tag{18}\]
We can therefore rewrite the generalized forces produced by the controller, \(A(q)u\), as
\[\begin{split}& A(q)\left[A_{\mathsf{L}}^{-1}(q)G_{\mathsf{L}}(q)+A_ {\mathsf{L}}^{\top}(q)(K_{\mathsf{P}}(q_{\mathsf{ref}}-q)-K_{\mathsf{D}}\dot {q})\right]\\ &=A(q)(A^{-1}(q)P^{-1}(q)+\Delta_{\mathsf{I}}(q))\left[(P(q)G(q) +\Delta_{\mathsf{G}}(q))\right]+A(q)(P(q)A(q)+\Delta_{\mathsf{L}}(q))^{\top} \left[K_{\mathsf{P}}(q_{\mathsf{ref}}-q)-K_{\mathsf{D}}\dot{q}\right]\\ &=(P^{-1}(q)+A(q)\Delta_{\mathsf{I}}(q))(P(q)G(q)+\Delta_{ \mathsf{G}}(q))+(A(q)A^{\top}(q)P^{\top}(q)+A(q)\Delta_{\mathsf{L}}^{\top}(q) )\left[K_{\mathsf{P}}(q_{\mathsf{ref}}-q)-K_{\mathsf{D}}\dot{q}\right]\\ &=G(q)+\Delta_{\mathtt{all}}(q)+\hat{K}_{\mathsf{P}}(q_{\mathsf{ ref}}-q)-\hat{K}_{\mathsf{D}}\dot{q}\end{split} \tag{19}\]
Where \(\Delta_{\mathtt{all}}(q)\,=\,P^{-1}(q)\Delta_{\mathsf{G}}(q)+A(q)\Delta_{ \mathsf{I}}(q)P(q)G(q)\,+\,A(q)\Delta_{\mathsf{I}}(q)\Delta_{\mathsf{G}}(q)\) is a bounded term, as sum
and product of bounded terms. The gains \(\hat{K}_{\mathsf{P}}\) and \(\hat{K}_{\mathsf{D}}\) are positive definite being product of two positive definite matrices. The closed loop is then
\[M(q)\dot{q}+C(q,\dot{q})\dot{q}=\Delta_{\mathtt{all}}(q)+\hat{K}_{\mathsf{P}}( q_{\mathsf{ref}}-q)-(D(q)+\hat{K}_{\mathsf{D}})\dot{q}. \tag{20}\]
Here, we establish our thesis by adopting the arguments provided in [51], which are, in turn, adapted from the seminal paper [52]. This direct application of an existing theorem is made possible by our rearrangement of the closed loop, which is now identical to the structure delineated in those papers.
Note that even if we provided the proof using a Lagrangian formalism, the Hamiltonian version can be derived following similar steps. Also, note that the bounds on the learned matrices are always verified for any choice of \(\delta_{\mathsf{L}},\delta_{\mathsf{G}}\) at the cost of training the model with a large enough training set. We conclude with a corollary that discusses the perfect learning scenario.
**Corollary 1**.: _Assume that \(M=N\) and \(A\) is full rank. Then, the closed loop of (5) and (12) is such that_
\[\lim_{t\to\infty}q(t)=q_{\mathsf{ref}}, \tag{21}\]
_if \(K_{\mathsf{P}},K_{\mathsf{D}}\succ 0\) and if it exists a matrix \(P(q)\,\in\,\mathbb{R}^{N\times N}\) such that \(M_{\mathsf{L}}(q)\,=\,P(q)M(q)\), \(A_{\mathsf{L}}(q)\,=\,P(q)A(q)\), \(G_{\mathsf{L}}(q)\,=\,P(q)G(q)\)._
Proof.: Let's start from (14), which now becomes \(A(q)A^{\top}(q)P^{\top}(q)\succ\,0\). Furthermore, considering that \(A(q)\) is full rank by hypothesis yields the equivalent condition \(P^{\top}(q)\succ\,0\). As discussed in the remark before, this is implied by the fact that \(M_{\mathsf{L}}(q)=P(q)M(q)\) and both \(M_{\mathsf{L}}\) and \(M(q)\) are positive definite. Thus, (14) is always verified. Similarly, (15) is trivially verified for \(A_{\mathsf{L}}(q)=P(q)A(q)\).
Moreover, note that \(\Delta_{\mathsf{all}}\,=\,0\) as the deltas are now all zero. So, the closed loop (20) is always the equivalent of a mechanical system, without any potential force, controlled by a PD. Note that the gains are positive because we just proved that \(P^{\top}(q)\succ 0\), and because \(K_{\mathsf{P}},K_{\mathsf{D}}\succ 0\) by hypothesis. The proof of stability follows standard Lyapunov arguments (see, for example, [34]) by using the Lyapunov candidate \(V(q,\dot{q})=T(q,\dot{q})+\frac{1}{2}(q-q_{\mathsf{ref}})^{\top}\hat{K}_{\mathsf{P}}(q-q_{\mathsf{ref}})\).
#### 3.4.2 Trajectory tracking
The goal of the following controller is to track a given trajectory in configuration space \(q_{\mathtt{ref}}\,:\,\mathbb{R}\,\to\,\mathbb{R}^{n}\). We assume \(q_{\mathtt{ref}}\) to be bounded with bounded derivatives. We also assume the system to be fully actuated - i.e., \(M\,=\,N\), \(\det(A)\,\neq\,0\), \(\det(A_{\mathsf{L}})\,\neq\,0\). Under these assumptions, we extend (12) with the following controller to follow the desired trajectory
\[\begin{split} u=& A_{\mathsf{L}}^{-1}(q)\left(M_{ \mathsf{L}}(q_{\mathtt{ref}})\ddot{q}_{\mathtt{ref}}+C_{\mathsf{L}}(q_{ \mathtt{ref}},\dot{q}_{\mathtt{ref}})\dot{q}_{\mathtt{ref}}+D_{\mathsf{L}}(q_ {\mathtt{ref}})\dot{q}_{\mathtt{ref}}+G_{\mathsf{L}}(q_{\mathtt{ref}}) \right)\\ +& A_{\mathsf{L}}^{\top}(q)\left(K_{\mathsf{P}}(q_{ \mathtt{ref}}-q)+K_{\mathsf{D}}(\dot{q}_{\mathtt{ref}}-\dot{q})\right))\,, \end{split} \tag{22}\]
where we omit the arguments \(t\) and \(\theta_{i}\) to ease the readability. We highlight the components that have been learned from the ones that are not by adding an L as a subscript. We can obtain the Coriolis matrix \(C_{\mathsf{L}}(q_{\mathtt{ref}},\dot{q}_{\mathtt{ref}})\) from the learned Lagrangian by taking the second partial derivative of the Lagrangian with respect to the desired joint position \(q_{\mathtt{ref}}\) and velocity \(\dot{q}_{\mathtt{ref}}\), i.e., \(\frac{\partial^{2}L(q_{\mathtt{ref}},\dot{q}_{\mathtt{ref}})}{\partial q_{ \mathtt{ref}}\partial\dot{q}_{\mathtt{ref}}}\).
**Corollary 2**.: _The closed loop of (5) and (22) is such that, for some \(\delta_{\mathsf{q}}\geq 0\),_
\[\lim_{t\to\infty}||q(t)-q_{\mathtt{ref}}(t)||<\delta_{\mathsf{q}}, \tag{23}\]
_if \(K_{\mathsf{P}},K_{\mathsf{D}}\,\succ\,\kappa I\), with \(\kappa\,\in\,\mathbb{R}\) high enough, and if there exists a matrix \(P(q)\,\in\,\mathbb{R}^{N\times N}\) such that \(A_{\mathsf{L}}(q)\,=\,P(q)A(q)\), \(M_{\mathsf{L}}(q)\,=\,P(q)M(q)\), \(C_{\mathsf{L}}(q)\,=\,P(q)C(q)\), \(G_{\mathsf{L}}(q)\,=\,P(q)G(q)\), \(D_{\mathsf{L}}(q)\,=\,P(q)D(q)\). We also assume that \(P\) is such that \(||P^{-1}(q)P(q_{\mathtt{ref}})-I||<\delta_{\mathsf{p}}\) for some \(\delta_{\mathsf{p}}>0\)._
Proof.: We can rewrite (22) by substituting the values of the learned elements in terms of \(P\). The result is
\[\begin{split} A(q)u=&\Delta_{\mathsf{all}}+(M(q_{ \mathtt{ref}})\ddot{q}_{\mathtt{ref}}+C(q_{\mathtt{ref}},\dot{q}_{\mathtt{ref} })\dot{q}_{\mathtt{ref}}+D(q_{\mathtt{ref}})\dot{q}_{\mathtt{ref}}+G(q_{ \mathtt{ref}}))\\ +&(A(q)A^{\top}(q)P^{\top}(q))\left(K_{\mathsf{P}}(q _{\mathtt{ref}}-q)+K_{\mathsf{D}}(\dot{q}_{\mathtt{ref}}-\dot{q})\right))\,, \end{split} \tag{24}\]
where
\[\Delta_{\mathsf{all}}=\Delta_{\mathsf{p}}\left(M(q_{\mathtt{ref}})\ddot{q}_{ \mathtt{ref}}+C(q_{\mathtt{ref}},\dot{q}_{\mathtt{ref}})\dot{q}_{\mathtt{ref} }+D(q_{\mathtt{ref}})\dot{q}_{\mathtt{ref}}+G(q_{\mathtt{ref}})\right),\]
with \(\Delta_{\mathsf{p}}=P^{-1}(q)P(q_{\mathtt{ref}})-I\). Thus, \(\Delta_{\mathsf{all}}\) is bounded by hypothesis as a product and sum of bounded terms. Moreover, as discussed in the proof of Corollary 1, \(AA^{\top}P^{\top}\,\succ\,0\). Thus, since the closed loop is equivalent to the one discussed in [53], the same steps discussed there can be followed to complete the proof.
Note that even though we provided the proof using a Lagrangian formalism, the Hamiltonian version can be derived following similar steps. Also, the bound \(\delta_{\mathsf{q}}\) can be made as small as desired at the cost of making the control gains large enough.
Finally, note that we provided here only a proof of stability for the perfectly learned case. Similar hypotheses and arguments to the ones in Proposition 1 would lead to similar results in the tracking case, with \(||P(q)A_{\mathsf{L}}(q)-A(q)||<\delta_{\mathsf{A}}\), \(||P(q)M_{\mathsf{L}}(q)-M(q)||<\delta_{\mathsf{M}}\), \(||P(q)C_{\mathsf{L}}(q)-C(q)||<\delta_{\mathsf{C}}\), \(||P(q)G_{\mathsf{L}}(q)-G(q)||<\delta_{\mathsf{G}}\), \(||P(q)D_{\mathsf{L}}(q)-D(q)||<\delta_{\mathsf{D}}\), for some finite and positive \(\delta_{\mathsf{A}},\delta_{\mathsf{M}},\delta_{\mathsf{C}},\delta_{\mathsf{G}},\delta_{\mathsf{D}}\in\mathbb{R}\).
## 4 Methods: Simulation and experiment design
To evaluate the efficacy of the proposed PINNs and PINN-based control, we apply them in three distinct tasks: (T1) Learning the dynamic model of a one-segment spatial soft manipulator, (T2) Learning the dynamic model of a two-segment spatial soft manipulator, (T3) Learning the dynamic model of the Franka Emika Panda robot. We selected (T1) and (T2) because they have a nontrivial \(A(q)\), and (T3) because it has several degrees of freedom. Furthermore, we employ the learned dynamics to design and test model-based controllers for T2 and T3.
In hardware experiments, LNNs are utilized to learn the dynamic models of the tendon-driven soft manipulator reported in [54] and of the Panda robot. We show for the first time experimental closed-loop control of a robotic system (the Panda robot) with a PINN-based algorithm.
### Data Generation
Training data for T1 and T2 are generated by simulating the dynamics of one-segment and two-segment soft manipulators in MATLAB. For T1, ten different initial states are combined with ten different input signals to generate data using the one-segment manipulator dynamics model. Each combination produces ten seconds of training data with a time step of 0.0002 seconds. For T2, we use a variable step size in Simulink to generate datasets from the mathematical model of a two-segment soft manipulator. With this approach, we create twelve different sixty-second trajectories, which are subsequently resampled at fixed frequencies of 50Hz, 100Hz, and 1000Hz. Concerning T3, the PyBullet simulation environment is used to generate training data corresponding to the Panda robot. Then, different input signals are applied to the joints to create data for 70 different trajectories with a frequency of 1000Hz.
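The fixed-rate datasets for T2 can be obtained from the variable-step Simulink output by simple linear interpolation; a minimal sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def resample_trajectory(t, q, hz):
    """Resample a variable-step trajectory (t, q) onto a fixed-rate time grid,
    as when converting Simulink output to 50/100/1000 Hz datasets.
    t: (T,) sample times; q: (T, n) configurations; hz: target rate."""
    t_fix = np.arange(t[0], t[-1], 1.0 / hz)
    q_fix = np.stack([np.interp(t_fix, t, q[:, j])
                      for j in range(q.shape[1])], axis=1)
    return t_fix, q_fix
```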
Regarding experimental validation, we propose the following experiments. For the tendon-driven continuum robot, we provide sinusoidal inputs with different frequencies and amplitudes to the actuators--four motors--and record the movement of the robot. An IMU records the tip orientation data with a 10Hz sampling frequency. As a result, 122 trajectories are generated, and four more are collected as the test set. For the Panda robot, we provide 70 sets of sinusoidal desired joint angles with different amplitudes and frequencies. We collect the torque, joint angle, and angular velocity data using the integrated sensors, considering a sampling frequency of 500Hz.
### Baseline Model and Model Training
In order to provide a basis for comparison, baseline models are established for all simulations and hardware experiments. These models, which serve as a control, are constructed using a fully connected network and trained on the same datasets as the proposed models, albeit with a larger amount of data and a greater number of training epochs. These baseline models aim to demonstrate the benefits of incorporating physical knowledge into neural networks.
In this project, all the neural networks utilized are constructed using the JAX and dm-Haiku packages in Python. In particular, the JAX Autodiff system is used to calculate partial derivatives and the Hessian within the loss function. The optimization of the model parameters is carried out using AdamW in the Optax package.
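The core use of autodiff described above is turning a learned Lagrangian into forward dynamics. A minimal JAX sketch of this step, with a toy analytic Lagrangian standing in for the network, and damping/actuation omitted for brevity:

```python
import jax
import jax.numpy as jnp

def lagrangian(q, qd):
    # toy stand-in for the learned network: L = T - V
    M = jnp.diag(jnp.array([1.0, 2.0]))
    return 0.5 * qd @ M @ qd - 9.81 * jnp.sum(jnp.cos(q))

def forward_dynamics(q, qd, tau):
    # mass matrix M(q) = d^2 L / d qd^2, via jax.hessian
    M = jax.hessian(lagrangian, argnums=1)(q, qd)
    # mixed second derivative d^2 L / (dq dqd), to be contracted with qd
    C = jax.jacfwd(jax.grad(lagrangian, argnums=1), argnums=0)(q, qd)
    # configuration forces dL/dq (includes the potential term)
    g = jax.grad(lagrangian, argnums=0)(q, qd)
    # Euler-Lagrange: M qdd + C qd - g = tau  =>  solve for qdd
    return jnp.linalg.solve(M, tau + g - C @ qd)
```

In training, this acceleration feeds the RK4 integrator, so only (q, q̇, τ) data are needed.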
## 5 Simulation Results
### One-segment 3D soft manipulator
To define the configuration space of the soft manipulator, we adopt the piecewise constant curvature (PCC) approximation [55] shown in Figure 5. Customarily, this approximation describes the configuration of each segment as \(q_{i}\ =\ [\phi_{i},\theta_{i},\delta\ell_{i}]\), where \(\phi_{i}\) is the plane orientation, \(\theta_{i}\) is the curvature in that plane, and \(\delta\ell_{i}\) is the change of arc length. In this work, the configuration-defined method reported
in [56] is used to avoid the singularity problem of PCC. Hence, the configuration of each segment is given by \([\Delta_{xi},\Delta_{yi},\Delta_{\ell i}]\), where \(\Delta_{xi}\) and \(\Delta_{yi}\) are the differences of arc length.
The detailed information for this task is shown in Table 1. The prediction results of the two learned models are compared in Figure 6. The figure indicates that the model trained by LNNs exhibits a high degree of predictive accuracy, remaining reliable for over 50,000 consecutive prediction steps in this example. While some areas exhibit less precise fits, it is important to note that such errors do not accrue over time. These outcomes suggest that LNN-based models can effectively capture the underlying dynamics of the one-segment soft manipulator. By contrast, the black-box model converges during the training process, but its prediction performance shows that it does not gain insight into the dynamic model. This system is also learned using HNNs by providing momentum data. Hamiltonian-based neural networks yield prediction results of similar quality to Lagrangian-based neural networks, as shown in Figure 7.
The matrices obtained from these two physics-based learning models are shown in Tables 3 and 4, where \(G(q)\) represents the potential forces, i.e., \(\frac{\partial V(q)}{\partial q}\). As Table 4 shows, HNNs can learn the physically meaningful matrices, while LNNs only learn one of the solutions satisfying the Euler-Lagrange equation. Comparing the corresponding matrices in Tables 2 and 3, we find that the matrices and vectors learned by the LNNs are related to the real parameters through a transformation \(P(q)\).
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Black-box model & Lagrangian-based & Hamiltonian-based \\ & & learned model & learned model \\ \hline model (width \(\times\) depth) & 128\(\times\)5 & 32\(\times\)3, 5\(\times\)3, 16\(\times\)2 & 32\(\times\)3, 5\(\times\)3, 16\(\times\)2 \\ sample number & 19188 & 8000 & 8000 \\ training epoch & 15000 & 6000 & 6000 \\ training error & \(6.891e^{-5}\pm 4.63e^{-4}\) & \(8.418e^{-7}\pm 1.77e^{-5}\) & \(5.374e^{-11}\pm 7.74e^{-10}\) \\ prediction error [m] & \(7.647\pm 10.413(5\)s) & \(0.171\pm 0.272(5\)s) & \(0.0220\pm 0.0210\) (5s) \\ \hline \hline \end{tabular}
\end{table}
Table 1: One-segment soft manipulator simulation detailed information
\begin{table}
\begin{tabular}{c|c c c|c c|c c|c c} \hline \hline \(q\) & \multicolumn{2}{c}{\(M(q)\)} & \multicolumn{2}{c}{\(M^{-1}(q)\)} & \multicolumn{2}{c}{\(D(q)\)} & \multicolumn{2}{c}{\(G(q)\)} & \multicolumn{2}{c}{\(A(q)\)} \\ \hline
1.20 & \(1.74e^{-3}\) & \(-3.12e^{-5}\) & \(-1.96e^{-3}\) & 593.09 & 9.35 & 12.47 & & & 1.29 & \(-0.04\) & \(-1.0\) & 0.07 \\ \(-0.20\) & \(-3.12e^{-5}\) & \(1.55e^{-3}\) & \(3.26e^{-4}\) & 9.35 & 647.61 & \(-2.08\) & & & \(-0.22\) & 0.78 & 0.04 & \(-0.01\) \\
0.15 & \(-1.96e^{-3}\) & \(3.26e^{-4}\) & \(9.29e^{-2}\) & 12.47 & \(-2.08\) & 11.04 & & & \(-1.15\) & 0.0 & 0.77 \\
0.80 & \(3.64e^{-3}\) & \(4.52e^{-5}\) & \(-1.94e^{-3}\) & 277.76 & \(-2.84\) & 5.55 & & & \(0.89\) & \(\begin{bmatrix}0.03&-0.99&0.06\\ 0.90&-0.03&0.02\\ -1.09&0.0&0.0&0.89\\ \end{bmatrix}\) & \(\begin{bmatrix}0.03&-0.99&0.06\\ 0.90&-0.03&0.02\\ -1.09&0.0&0.0&0.89\\ \end{bmatrix}\) \\
0.20 & \(-1.94e^{-3}\) & \(-4.84e^{-4}\) & \(9.67e^{-2}\) & 5.55 & 1.39 & 10.46 & & & \(-1.09\) & 0.0 & 0.89 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mathematical model matrices of one-segment soft manipulator
Figure 5: PCC approach illustration: (a) two-segment soft manipulator is shown, where \(S_{i}\) is the end frame, the blue parts are the orientated plane, \(\ell_{i}\) is the original length of each segment; (b) shows the length of the four arcs whose ends connected to the frame \(S_{i}\)
### Two-segment 3D soft manipulator
The two-segment soft manipulator model is simulated in MATLAB, where the configuration space is also defined as in the one-segment case. The training and testing information for this task is shown in Table 5. Figure 8 summarizes the prediction results of the \(50Hz\), \(100Hz\), and \(1000Hz\) learned model. From
\begin{table}
\begin{tabular}{c|c c c|c c|c c|c c|c c|c c} \hline \hline \multicolumn{2}{c}{q} & \multicolumn{2}{c}{\(M(q)\)} & \multicolumn{2}{c}{\(D(q)\)} & \multicolumn{2}{c}{\(G(q)\)} & \multicolumn{2}{c}{\(A(q)\)} & \multicolumn{2}{c}{\(P(q)\)} & \multicolumn{2}{c}{\(P(q)\)} \\ \hline \(1.20\) & \(1.20e^{-3}\) & \(1.20e^{-3}\) & \(-0.03\) & \(0.16\) & \(-0.02\) & \(0.0\) & \(2.44\) & \(0.12\) & \(-1.72\) & \(-0.21\) & \(0.61\) & \(-0.02\) & \(0.03\) \\ \(-0.20\) & \(1.20e^{-3}\) & \(5.99e^{-3}\) & \(-0.02\) & \(-0.02\) & \(0.02\) & \(0.33\) & \(-0.01\) & \(-0.61\) & \(3.05\) & \(-0.19\) & \(-0.13\) & \(-0.02\) & \(0.28\) & \(0.01\) \\ \(0.15\) & \(-0.03\) & \(-0.02\) & \(0.59\) & \(0.0\) & \(-0.01\) & \(0.35\) & \(-5.25\) & \(-0.34\) & \(1.01\) & \(3.40\) & \(0.33\) & \(0.15\) & \(0.25\) \\ \(0.80\) & \(6.93e^{-3}\) & \(1.84e^{-3}\) & \(-0.03\) & \(0.17\) & \(-0.01\) & \(-0.0\) & \(1.62\) & \(0.19\) & \(-1.66\) & \(-0.20\) & \(0.62\) & \(-0.02\) & \(0.03\) \\ \(0.20\) & \(1.84e^{-3}\) & \(0.01\) & \(-0.02\) & \(-0.01\) & \(0.33\) & \(-0.01\) & \(0.81\) & \(2.97\) & \(-0.25\) & \(-0.13\) & \(-0.02\) & \(0.31\) & \(0.01\) \\ \(0.30\) & \(-0.03\) & \(-0.02\) & \(0.50\) & \(-0.0\) & \(-0.01\) & \(0.35\) & \(-4.67\) & \(-0.40\) & \(1.01\) & \(3.43\) & \(0.21\) & \(0.10\) & \(0.26\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Lagrangian-based learning model matrices of one-segment soft manipulator
Figure 6: One-segment soft manipulator leaned model comparison results: (a) depicts the predictions generated by the black-box model (\(\triangle\)), the Lagrangian-based learning model (\(\cdots\)), and the ground-truth (\(-\)) arising from the dynamic mathematical equations; (b) shows the prediction error of these two learned models.
Figure 7: One-segment soft manipulator HNN and LNN comparison: (a) shows the Lagrangian-based learned model prediction results (\(\cdots\)), Hamiltonian-based learned model prediction results (\(\circ\)), and the ground-truth prediction (\(-\)); (b) error of the two models with the ground truth.
the simulations, we conclude that the higher the sampling frequency within a certain range, the more accurate the learned model is.
Based on the learned model trained at 1000Hz, we devise a PINN-based control loop as in (12). To demonstrate the performance of the designed controller, we employ it to control the two-segment soft manipulator in MATLAB. The proportional gains \(K_{\mathsf{P}}\) and derivative gains \(K_{\mathsf{D}}\) are set to 10 and 50, respectively, for all six configurations. The alterations in the states of the two-segment manipulator under control are depicted in Figure 9, whereas the performance of the controller is demonstrated in Figure 10. Results indicate that the controller is capable of tracking a static setpoint within one second while keeping the root mean square error (RMSE) less than 0.23%, and exhibits a stable and minimal overshoot performance. These performances underscore the reliability and efficiency of the designed controller based on the learned model.
\begin{table}
\begin{tabular}{c|c c c|c c c|c c|c c|} \hline \hline \multicolumn{2}{c}{q} & \multicolumn{2}{c}{\(\hat{M}^{-1}(q)\)} & \multicolumn{2}{c}{\(\hat{D}(q)\)} & \multicolumn{2}{c}{\(\hat{G}(q)\)} & \multicolumn{2}{c}{\(\hat{A}(q)\)} \\ \hline
1.20 & 600.32 & 16.90 & 15.67 & \(1.02e^{-1}\) & \(3.44e^{-3}\) & \(8.12e^{-5}\) & 1.33 & \(-0.06\) & \(-0.94\) & 0.05 \\ \(-0.20\) & 16.90 & 622.92 & \(-1.34\) & \(3.44e^{-3}\) & \(1.05e^{-1}\) & \(-4.39e^{-4}\) & \(-0.18\) & 0.83 & 0.02 & \(-0.04\) \\
0.15 & 15.67 & \(-1.34\) & 11.61 & \(8.12e^{-5}\) & \(-4.39e^{-4}\) & \(9.91e^{-2}\) & \(-1.15\) & 0.0 & 0.01 & 0.78 \\
0.80 & 285.01 & 11.08 & 6.65 & \(1.01e^{-1}\) & \(3.44e^{-3}\) & \(6.56e^{-4}\) & 0.93 & 0.03 & \(-0.96\) & 0.05 \\
0.20 & 11.08 & 292.46 & 2.06 & \(3.48e^{-3}\) & \(1.03e^{-1}\) & \(-7.45e^{-5}\) & 0.25 & 0.92 & \(-0.03\) & \(-0.02\) \\
0.30 & 6.65 & 2.06 & 10.59 & \(6.56e^{-4}\) & \(-7.45e^{-5}\) & \(9.87e^{-2}\) & \(-1.10\) & \(-0.01\) & 0.0 & 0.89 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Hamiltonian-based learning model matrices of one-segment soft manipulator
Figure 8: Two-segment soft manipulator prediction performances under different sampling frequencies
### Panda robot
Table 6 presents the training and testing data of the simulated Panda in PyBullet, while Figure 11 displays the prediction results obtained from the learned model. The model exhibits relatively accurate prediction performance within 1 second (i.e. continuous prediction for 1000 steps). Furthermore, the Lagrangian-based models can achieve long-term forecasting by updating the input values of the learned model to the real states at a fixed rate, typically ranging from 50 to 100 Hz.
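The windowed long-term forecasting scheme mentioned above can be sketched as follows (names are ours): the learned one-step model is rolled out freely, but every `window` steps its state is reset to the measured state.

```python
import numpy as np

def windowed_rollout(step, inputs, measured, window=50):
    """Long-horizon prediction with periodic resets: every `window` steps
    the model state is overwritten with the measured (real) state.
    step: one-step predictor x_{k+1} = step(x_k, u_k)."""
    preds = []
    x = measured[0]
    for k, u in enumerate(inputs):
        if k % window == 0:
            x = measured[k]     # reset to the real state at a fixed rate
        x = step(x, u)          # one-step prediction with the learned model
        preds.append(x)
    return np.array(preds)
```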
Based on this learned model, we build the tracking controller discussed in Sec. 3. The results are depicted in Figure 12, where we observe that our controller has a fast response time and can quickly adapt to changes in the reference signal. It can maintain high accuracy and low phase lag, which makes it well-suited for tracking fast-changing signals.
\begin{table}
\begin{tabular}{c c c} \hline \hline & Black-box model & Lagrangian-based learned model \\ \hline model (width \(\times\) depth) & 120\(\times\)4 & 40\(\times\)3,20\(\times\)2 \\ sample number & 550000 & 25000 \\ training epoch & 10000 & 10000 \\ training error & \(1.476e^{-4}\pm 2.69e^{-3}\) & \(1.424e^{-4}\pm 2.90e^{-3}\) \\ prediction error [_rad_] & \(5.132\pm 15.691(\)2s\()\) & \(98.6937\pm 6.411(\)2s\()\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Franka simulation detailed information (1000Hz)
Figure 10: Two-segment soft manipulator model-based controller performance: (a) shows the evolution of the configuration variables and the desired state with dotted lines; (b) shows the error between the desired states and current states; (c) shows control effort.
Figure 9: The sequence of movements at the times 0.0s, 0.1s, 0.3s, 0.6s, and 1.0s executed by the two-segment soft robot as a result of the implementation of the LNN-model-based controller. The red line represents the tip’s position
## 6 Experimental Validation
### One-segment tendon-driven soft manipulator - NECK
We validate the proposed approach in the platform depicted in Figure 13, which is constructed based on [54, 57]. We consider two different data preprocessing methods. (i) Moving average method: This method reduced the noise and outliers in the data, generating a more stable representation of underlying trends. However, it may overlook intricate relationships between variables, resulting in some information loss. (ii) Polynomial fitting: This method captured non-linear patterns in the data. However, it was susceptible to the influence of outliers, resulting in spurious information that may compromise the quality of the trained model.
The training and testing information is shown in Table 7.
The moving average is implemented in MATLAB using the movmean function with a window size of 50 points. The processed data are used for training the LNNs. In Figure 14, we compare the continuous prediction ability of the black-box and Lagrangian-based learning models. The prediction performance in this figure indicates that the Lagrangian-based learning model exhibits superior predictive accuracy in this sample. Furthermore, Figure 14 (c) shows that the learned model can realize long-term predictions with short-term updates.
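For reference, an approximate Python equivalent of the movmean preprocessing (window of 50 points; endpoint handling here uses shrinking windows, which is close to, but not guaranteed identical with, MATLAB's behavior):

```python
import numpy as np

def movmean(x, k=50):
    """Centered moving average over a window of k samples, computed with a
    cumulative sum; windows shrink at the edges of the signal."""
    c = np.cumsum(np.insert(np.asarray(x, dtype=float), 0, 0.0))
    n, half = len(x), k // 2
    out = np.empty(n)
    for i in range(n):
        lo = max(0, i - half)          # window start, clipped at the edges
        hi = min(n, i + (k - half))    # window end, exclusive
        out[i] = (c[hi] - c[lo]) / (hi - lo)
    return out
```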
The polynomial fitting of the data is done in MATLAB using the function polyfit. The prediction results of the model are shown in Figure 15. The learned model exhibits a decent performance when the
Figure 11: Franka Emika Panda learned model prediction results: (a) shows 1500 steps prediction in a row; (b) is the angle errors of the prediction with respect to the ground truth; (c) shows the long prediction results with 50-step window size.
Figure 12: Performance of the model-based controller designed using the model learned by the LNNs. The desired trajectories are plotted with dotted lines.
window size is reduced, as shown in Figure 15(c). In contrast to the previous model, this model exhibits significant prediction errors, as shown in Table 7. This may be caused by the significant sensor noise and by spurious information introduced by the approximation used to fit the data.
### Rigid Robot - Franka Emika Panda
The collected data are processed through a Butterworth filter in MATLAB to reduce noise. Further details are provided in Table 8. In the experiment, we observe small joint acceleration, which results in
\begin{table}
\begin{tabular}{c c c c} \hline \hline & \multicolumn{2}{c}{Black-box model} & \multicolumn{2}{c}{Lagrangian-based learned model} \\ \hline & model & 60\(\times\)3 & 21\(\times\)2, 25\(\times\)2, 10\(\times\)2 \\ & sample number & 69426 & 69426 \\ smoothing & training epoch & 10000 & 3000 \\ & training error & \(1.985e^{-2}\pm 1.85e^{-1}\) & \(2.277e^{-2}\pm 2.39e^{-1}\) \\ & prediction error [\({}^{\circ}\)] & \(13.229\pm 60.762\) (5s) & \(2.429\pm 1.259\) (5s) \\ \hline & model & 60\(\times\)3 & 21\(\times\)2, 25\(\times\)2, 10\(\times\)2 \\ fitting & sample number & 57950 & 48200 \\ & training epoch & 5000 & 5000 \\ & training error & \(4.431e^{-3}\pm 3.07e^{-2}\) & \(2.758e^{-3}\pm 2.84e^{-2}\) \\ & prediction error[\({}^{\circ}\)] & \(8.368\pm 12.575\) (5s) & \(6.426\pm 36.237\) (5s) \\ \hline \hline \end{tabular}
\end{table}
Table 7: The tendon-driven soft robot – NECK training and testing information
Figure 14: The smoothing data black-box model (\(\triangle\)) and physics-based learning model (- -) continuous prediction results: (a) and (b) show prediction 43 prediction steps in a row; (c) depicts the prediction results with 5-step window size.
Figure 13: Experiment platform: One-segment tendon-driven soft manipulator equipped with IMU
minimal velocity change. To prevent the network from focusing solely on learning a large mass matrix and neglecting other important factors, we utilize a scaling sigmoid function. This function ensures that the elements in the mass matrix are scaled within a specific range. For this particular case, we have set the scaling factor to \(3.50\).
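A plausible form of this bounding (our assumption; the paper only states the scaling factor) is a sigmoid scaled to the range \((0, 3.50)\), applied elementwise to the raw network outputs that parameterize the mass matrix:

```python
import numpy as np

def scaled_sigmoid(x, c=3.50):
    # squash raw network outputs into (0, c) so mass-matrix entries
    # cannot grow arbitrarily large during training
    return c / (1.0 + np.exp(-np.asarray(x, dtype=float)))
```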
Figure 16 illustrates the predictive performance of our physics-based model, where Figure 16 (b) depicts the continuous prediction error within 2 seconds or 1000 prediction steps and (c) shows that updating the model's input with real-time state data can help us make a long prediction.
A controller based on the equation presented in (22) is proposed for the actual robot. The proportional gain matrix, \(K_{\mathtt{P}}\), is set to a diagonal matrix with entries 600, 600, 600, 600, 250, 150, and 50, respectively. The derivative gain matrix, \(K_{\mathtt{D}}\), is set to a diagonal matrix with entries 30, 30, 30, 30, 10, 10, and 5, respectively. Figure 17 illustrates a series of photographs depicting the periodic movement used to track a sinusoidal trajectory within a time frame of 10 seconds. The whole tracking performance is shown in Figure 18.
Furthermore, we have presented the trajectory of the end-effector, which is a helical motion shown in Figure 19, and its resultant control effect has been visually demonstrated in Figure 18.
In these figures, we can observe that the designed controller performs satisfactorily, as evidenced by its ability to track the desired trajectory. The tracking error, while present in some joints, remains within acceptable bounds and does not significantly impair the overall performance in practical applications. While generally effective, the controller exhibits some degree of variability across different joints; its overall performance nevertheless remains within acceptable levels and suggests its potential for effective use in real-world applications.
Figure 15: The fitting data black-box model (\(\triangle\)) and physics-based learning model (\(\cdots\)) continuous prediction results: (a) and (b) show 25 prediction steps in a row; (c) shows the prediction results with 5-step window size.
\begin{table}
\begin{tabular}{c c c} \hline & Black-box model & Lagrangian-based learned model \\ \hline model (width \(\times\) depth) & 120\(\times\)5 & 40\(\times\)3, 20\(\times\)2 \\ sample number & 550000 & 25000 \\ training epoch & 10000 & 3000 \\ training error & \(1.371e^{-5}\pm 2.03e^{-5}\) & \(1.68e^{-7}\pm 6.64e^{-6}\) \\ prediction error[\(rad\)] & \(182.495\pm 64.645\) (2s) & \(2.681\pm 1.383\) (2s) \\ \hline \end{tabular}
\end{table}
Table 8: Franka experiment detailed information (500Hz)
## 7 Conclusions
This paper presented an approach to consider damping and the interaction between robots and actuators in PINNs--specifically, LNNs and HNNs--improving the applicability of these neural networks for learning dynamic models. Moreover, we used the fourth-order Runge-Kutta (RK4) method to avoid acceleration measurements, which are often unavailable. The modified PINNs proved suitable for learning the dynamic models of rigid and soft manipulators. For the latter, we considered the PCC approximation to obtain a simplified model of the system.
The modified PINNs approach exploits the knowledge of the underlying physics of the system, which results in largely improved accuracy of the learned models compared with the baseline models, which were trained using a fully connected network. The results show that PINNs exhibit a more instructive and directional learning process because of the prior knowledge embedded into the approach. Notably, physics-based learning models trained with fewer data are more general and robust than traditional black-box ones. Therefore, continuous long-term and variable step-size predictions can be achieved. Furthermore, the learned model enables decent anticipatory control, where a naive PD controller can be integrated to achieve good performance, as illustrated in the experiments performed with the Panda robot.
Figure 16: Panda physics-based learning model prediction results: (a) and (b) show prediction of about 800 steps in a row; (c) depicts the prediction results with 5-step window size.
Figure 17: Photo sequence of one periodic movement resulting from the application of the LNN-model-based controller tracking trajectory
**Acknowledgements**
We wish to acknowledge EMERGE for their financial support, which enabled us to carry out this research. We are also grateful to Bastian Deutschchmann, the inventor of the NECK experimental platform, which greatly facilitated our work. We would also like to express our deepest gratitude to Francesco Stella and Tomas Coleman for their invaluable guidance and help in the experiments. Finally, we extend our appreciation to our colleagues for their insightful feedback and constructive criticism, which helped refine our ideas and methods.
# Safety Performance of Neural Networks in the Presence of Covariate Shift

Chih-Hong Cheng, Harald Ruess, Konstantinos Theodorou

arXiv:2307.12716v1 (2023-07-24), http://arxiv.org/abs/2307.12716v1
###### Abstract
Covariate shift may impact the operational safety performance of neural networks. A re-evaluation of the safety performance, however, requires collecting new operational data and creating corresponding ground truth labels, which often is not possible during operation. We are therefore proposing to reshape the initial test set, as used for the safety performance evaluation prior to deployment, based on an approximation of the operational data. This approximation is obtained by observing and learning the distribution of activation patterns of neurons in the network during operation. The reshaped test set reflects the distribution of neuron activation values as observed during operation, and may therefore be used for re-evaluating safety performance in the presence of covariate shift. First, we derive conservative bounds on the values of neurons by applying finite binning and static dataflow analysis. Second, we formulate a _mixed integer linear programming_ (MILP) constraint for constructing the minimum set of data points to be removed in the test set, such that the difference between the discretized test and operational distributions is bounded. We discuss potential benefits and limitations of this constraint-based approach based on our initial experience with an implemented research prototype.
Keywords:distribution reshaping machine learning MILP performance estimation
## 1 Introduction
_Covariate shift_ in machine-learned systems occurs when the input distribution changes between training and operation stages [14]. This phenomenon is present in most applications of machine learning, as training sets usually do not sufficiently reflect the complexity of real-world operational contexts and their potential changes. This kind of dataset shift is also a major concern in transfer learning when exposing machine-learned systems to solving different tasks [8].
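To fix intuition, here is a toy instance of covariate shift (illustrative code of ours, not from the paper): the input distribution \(p(x)\) shifts between training and operation while the labeling rule \(p(y\mid x)\) stays fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(x)                  # fixed labeling rule p(y|x)

x_train = rng.normal(0.0, 1.0, 1000)     # training-time input distribution
x_op    = rng.normal(2.0, 1.0, 1000)     # shifted operational distribution
y_train, y_op = f(x_train), f(x_op)
# covariate shift: p(x) differs between the two stages, p(y|x) is unchanged
```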
There is a fundamental dichotomy between covariate shift of machine-learned systems during operation and their underlying safety requirements, which require demonstrating given _safety performance indicators_ (SPIs) [7, 1] prior to deployment. The challenge we are tackling therefore is to incorporate the possibility
of operational covariate shift into safety assurance arguments for safety-related neural networks. Hereby, we assume that the initial test data for SPI evaluation is known but the operational data is unknown, since collecting operational data and creating corresponding ground truth labels is usually not possible during operation.
In tackling this challenge, we develop a specialized online monitoring technique for estimating the change of values of SPIs due to covariate shift. For practical purposes, we restrict ourselves to feed-forward deep neural networks (DNN), and we assume that the values of monitored neurons (in the feature space) of this DNN adequately reflect the input data distribution. Now, for each monitored neuron, one constructs a histogram of its value distribution based on binning, and distribution shift is observed by comparing the shapes of two such histograms. Such information abstracts the details of the inputs observed in operation, thereby bypassing the technical and societal constraints that prevent arbitrarily initiating a data-collection regime once the DNN is integrated into an application. One of our measures of similarity, called \(\kappa\)-KL similarity, is inspired by the Kullback-Leibler divergence. The purpose of a second measure, called \(\epsilon\)-portion similarity, is to characterize bounded differences. Key to our approach is that SPI estimations are reduced to constructing a subset of the test data that matches the similarity measure demonstrated in operation, followed by the recomputation of the SPI on this subset. For \(\epsilon\)-portion similarity, when the input is of bounded range we introduce a _mixed integer linear programming_ (MILP) encoding with 0-1 variables, whereby an upper limit on the number of bins can be obtained from static dataflow analysis.
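A minimal sketch of the per-neuron monitoring primitive: binned histograms of activation values, compared with a KL-based score (names are ours; the paper's \(\kappa\)-KL similarity additionally bounds such a score by \(\kappa\)):

```python
import numpy as np

def activation_histogram(values, edges):
    # discretized distribution of one monitored neuron over fixed bin edges
    counts, _ = np.histogram(values, bins=edges)
    return counts / max(counts.sum(), 1)

def kl_divergence(p, q, eps=1e-12):
    # discrete KL divergence D(p || q) over shared bins, clipped to avoid log(0)
    p = np.clip(np.asarray(p, dtype=float), eps, None)
    q = np.clip(np.asarray(q, dtype=float), eps, None)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```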
In Section 2 we review and compare with the most closely related work. Section 3 defines \(\epsilon\)-portion similarity for measuring the similarity of distributions based on neuron activations. Next, Section 4 describes distribution reshaping for \(\epsilon\)-portion similarity via test set reduction, together with its encoding in MILP. This algorithm is evaluated in Section 5 based on standard machine learning benchmarks, and we conclude in Section 6 by discussing the potential benefits and current limitations of our approach.
## 2 Related Work
The divergence between a source and a target distribution, as obtained, say, by dataset shift, is usually measured in terms of _mutual information_ or KL divergence [14]. Unfortunately, measuring mutual information from finite data is a notoriously difficult estimation problem [9, 4], and there are statistical limitations on measuring lower bounds on KL divergence from finite data [12]. Since our techniques are intended to be applied during operation, we therefore measure covariate shift only indirectly, by observing and comparing abstracted distributions of the corresponding neuron activations. This kind of indirect measurement does not require the original detailed operational data (e.g., images) to be available.
Covariate shift in machine-learned systems may be corrected using, say, _weighted empirical risk minimization_ [16], which retrains the system with a loss function calibrated by the ratio of the source and target input distributions. Retraining of safety-related machine-learned components, however, is problematic, and the best we can do is to adequately measure the potential drop of relevant safety performance indicators due to covariate shift. This implies that when data in operation is only made available as abstracted distributions, we need to reshape the test data to create an estimation. \(\epsilon\)-portion similarity, as developed here, avoids the non-linearity that KL divergence would introduce as a similarity measure, thereby enabling a reduction to MILP.
Within the research field of safe autonomous driving, _leading measures_ [3] are proactive indicators that assess prevention efforts and can be observed and evaluated before a crash occurs. Our approach to SPI re-estimation under distribution reshaping yields a leading measure at the level of a machine-learned component. Our developments also go beyond _out-of-distribution_ techniques for detecting outliers with respect to training inputs [11, 15], as we construct an aggregated SPI against all observed data: even when every data point observed in operation is _within-distribution_, this does not imply that the SPI will stay the same.
## 3 Distribution Similarity based on Neuron Values
\(\mathcal{D}_{op}\) denotes the multiset of data points collected during operation, and \(\mathcal{D}_{test}\) is the multiset of data points used in the (safety) performance evaluation. We assume as given a feed-forward _deep neural network_ (DNN) \(F\stackrel{{\text{def}}}{{:=}}f^{L}\circ\ldots\circ f^{1}\), which is composed of layers \(1\) through \(L\). Each _layer_ \(f^{i}\) is a function \(\mathbb{R}^{d_{i-1}}\rightarrow\mathbb{R}^{d_{i}}\), with \(d_{i}\) the dimension of its output vector. Each layer consists of a set of neurons computing a weighted linear sum of the previous layer's output, followed by the application of some monotonically non-decreasing activation function, such as _ReLU_, _Leaky ReLU_, or _tanh_. Without loss of generality, the neurons in \(F\) are fully connected with subsequent layers. The notation \(l_{A}\in\{1,\ldots,L\}\) indicates the layer chosen for analyzing distribution similarity between \(\mathcal{D}_{op}\) and \(\mathcal{D}_{test}\). For a data point \(\mathsf{in}\), the output at the \(l\)-th layer is the vector \(F^{l}(\mathsf{in}):=f^{l}(f^{l-1}(\ldots f^{1}(\mathsf{in})))\) of dimension \(d_{l}\). Now, \(F^{l}_{i}(\mathsf{in})\) projects the \(i\)-th output from \(F^{l}(\mathsf{in})\), and \(f^{l}_{i}\) is the output of the \(i\)-th neuron at the \(l\)-th layer; that is, given an input \(\mathsf{in}\), \(f^{l}_{i}\) takes \(F^{l-1}(\mathsf{in})\) as input and produces \(f^{l}_{i}(F^{l-1}(\mathsf{in}))\), which equals \(F^{l}_{i}(\mathsf{in})\). Finally, all inputs are bounded by an interval \([v_{min},v_{max}]\), where \(v_{min}\), \(v_{max}\) are fixed constants. In other words, \(\mathsf{in}\in[v_{min},v_{max}]^{d_{0}}\) and \(\mathcal{D}_{op},\mathcal{D}_{test}\subseteq[v_{min},v_{max}]^{d_{0}}\). For a multiset \(\mathcal{D}\) of data points and given DNN \(F\), we define
\[V^{l}_{i}(\mathcal{D})\stackrel{{\text{def}}}{{:=}}\langle F^{l}_ {i}(\mathsf{in})\mid\mathsf{in}\in\mathcal{D}\rangle \tag{1}\]
to be the multiset of all values of the \(i\)-th neuron value at layer \(l\) for all inputs in \(\mathcal{D}\).
Definition 1: For a natural number \(N>0\), a positive real number \(\Delta\) and a real number \(c\in\mathbb{R}\), the \((c,\Delta,N)\)**-binning function**\(b_{N}^{c,\Delta}:[c,c+(N+1)\Delta]\to\{0,1,\ldots,N\}\) is defined as follows:
\[b_{N}^{c,\Delta}(x)=\begin{cases}0&\text{if $x\in[c,c+\Delta]$}\\ j&\text{else if $x\in(c+j\Delta,c+(j+1)\Delta]$, for any $j\in\{1,\ldots,N\}$}\end{cases} \tag{2}\]
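For illustration, the \((c,\Delta,N)\)-binning function of Definition 1 can be sketched as follows (a hypothetical helper, not from the paper):

```python
import math

def make_binning(c, delta, N):
    """(c, Δ, N)-binning function of Definition 1: maps [c, c+(N+1)Δ] to {0, ..., N}."""
    def b(x):
        assert c <= x <= c + (N + 1) * delta, "input outside binning domain"
        if x <= c + delta:              # bin 0 covers the closed interval [c, c+Δ]
            return 0
        # bin j covers the half-open interval (c+jΔ, c+(j+1)Δ]
        return min(N, math.ceil((x - c) / delta) - 1)
    return b

b = make_binning(c=0.0, delta=3.0, N=5)
print(b(0.0), b(3.0), b(3.5), b(14.0))   # 0 0 1 4
```

Note that boundary points \(x=c+j\Delta\) fall into bin \(j-1\), matching the half-open intervals of Equation 2.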
Applying a binning function to each element of \(V_{i}^{l_{A}}(\mathcal{D}_{op})\) yields another multiset
\[B_{i}^{l_{A}}(F,\mathcal{D}_{op})\stackrel{{\mathrm{def}}}{{:=}} \langle b_{N}^{c,\Delta}(F_{i}^{l_{A}}(\mathsf{in}))\mid\mathsf{in}\in \mathcal{D}_{op}\rangle. \tag{3}\]
This requires, however, that for all \(\mathsf{in}\in\mathcal{D}_{op}\), \(F_{i}^{l_{A}}(\mathsf{in})\in[c,c+(N+1)\Delta]\). Provided that every input is bounded, i.e., \(\mathsf{in}\in[v_{min},v_{max}]^{d_{0}}\), we have the following property.
Lemma 1: _Let \(\Delta\) be a positive constant. Provided that \(\mathcal{D}_{op},\mathcal{D}_{test}\subseteq[v_{min},v_{max}]^{d_{0}}\) and \(F\) is implemented layer-wise, with each neuron \(f_{i}^{l}\) implemented by (1) computing a weighted linear sum over the previous layer, followed by (2) applying a monotonically non-decreasing activation function, there exist a constant \(c\in\mathbb{R}\) and \(N\in\mathbb{N}\) such that for all \(i\in\{1,\ldots,d_{l}\}\), \(F_{i}^{l_{A}}(\mathsf{in})\in[c,c+N\Delta]\), where \(c\) and \(N\) can be computed in time linear in the number of neurons._
Proof: (Sketch) This is based on known results in neural network verification using abstract interpretation [5, 2, 6]: provided that the input is bounded and \(F\) is implemented with (1) and (2), one can apply computationally efficient interval-bound propagation (boxed abstraction) [2, 6] to derive conservative minimum and maximum bounds \([v_{i,min},v_{i,max}]\) such that \(\{F_{i}^{l_{A}}(\mathsf{in})\mid\mathsf{in}\in[v_{min},v_{max}]^{d_{0}}\}\subseteq[v_{i,min},v_{i,max}]\), where the interval-bound analysis is done in time linear in the total number of neurons (see Example 1 for illustration). With the following value assignment for \(c\) and \(N\), the lemma then holds.
\[c:=\min(v_{1,min},\ldots,v_{d_{l},min}) \tag{4}\]
\[N:=\lceil\frac{\max(v_{1,max},\ldots,v_{d_{l},max})-c}{\Delta}\rceil \tag{5}\]
Example 1: Consider the example in Figure 1, where we wish to perform the analysis at layer \(l_{A}=2\). Assume \(\Delta=3\) and that each input on the left has domain \([-1,1]\), i.e., \(\mathcal{D}_{op},\mathcal{D}_{test}\subseteq[-1,1]^{3}\). Each neuron (\(f_{1}^{1}\), \(f_{2}^{1}\), \(f_{1}^{2}\), \(f_{2}^{2}\)) first computes a weighted sum (the corresponding weight is attached to the edge), followed by the nonlinear activation ReLU (\(ReLU(x)\stackrel{{\mathrm{def}}}{{:=}}\max(0,x)\)). Interval-bound propagation provides the conservative estimates \(v_{1,min}=v_{2,min}=0\), \(v_{1,max}=14\), \(v_{2,max}=5\). Therefore, \(c=\min(0,0)=0\) and \(N=\lceil\frac{\max(14,5)-0}{3}\rceil=5\).
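The interval-bound propagation underlying Lemma 1 can be sketched in a few lines. The two-layer ReLU network and its weights below are hypothetical stand-ins (the actual weights of Example 1 appear only in Figure 1), so the resulting \(c\) and \(N\) differ from the example:

```python
import math

def interval_propagate(layers, lo, hi):
    """Propagate an input box [lo, hi] layer-wise (boxed abstraction)."""
    for W, b, relu in layers:
        new_lo, new_hi = [], []
        for w_row, bias in zip(W, b):
            # a positive weight pulls from the same bound, a negative one from the opposite
            l = bias + sum(w * (lo[k] if w > 0 else hi[k]) for k, w in enumerate(w_row))
            u = bias + sum(w * (hi[k] if w > 0 else lo[k]) for k, w in enumerate(w_row))
            if relu:
                l, u = max(0.0, l), max(0.0, u)
            new_lo.append(l)
            new_hi.append(u)
        lo, hi = new_lo, new_hi
    return lo, hi

# hypothetical 2-layer ReLU network, inputs bounded in [-1, 1]^2
layers = [
    ([[1, 2], [-1, 1]], [0, 0], True),
    ([[2, 1], [1, -1]], [0, 0], True),
]
lo, hi = interval_propagate(layers, [-1, -1], [1, 1])
delta = 3
c = min(lo)                              # c = 0
N = math.ceil((max(hi) - c) / delta)     # bounds [0, 8] and [0, 3] -> N = 3
```

Each layer is visited once, matching the linear-time claim of the lemma.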
By applying each element in \(V_{i}^{l_{A}}(\mathcal{D}_{op})\) with the binning function created using Lemma 1, one derives another multiset \(B_{i}^{l_{A}}(F,\mathcal{D}_{op})\stackrel{{\mathrm{def}}}{{\coloneqq }}\langle b_{N}^{c,\Delta}(F_{i}^{l_{A}}(\mathsf{in}))\mid\mathsf{in}\in \mathcal{D}_{op}\rangle\). Analogously, define \(B_{i}^{l_{A}}(F,\mathcal{D}_{test})\) to abbreviate \(\langle b_{N}^{c,\Delta}(F_{i}^{l_{A}}(\mathsf{in}))\mid\mathsf{in}\in \mathcal{D}_{test}\rangle\). Let \(\mathsf{ct}(j,\mathcal{D})\) be the function that counts the number of elements in multiset \(\mathcal{D}\) having value \(j\), and \(|\mathcal{D}|\) returns the size of the multiset. We now define two types of distribution similarity.
Definition 2: Given a positive constant \(\kappa\), define \(\mathcal{D}_{op}\) and \(\mathcal{D}_{test}\) to be \(\kappa\)-KL similar (subject to DNN \(F\), layer index \(l_{A}\), and binning function \(b_{N}^{c,\Delta}\)), if:
\[\forall i\in\{1,\ldots,d_{l}\}:\;\sum_{j=0}^{N}\frac{\mathsf{ct}(j,B_{i}^{l_{A}}(F,\mathcal{D}_{op}))}{|\mathcal{D}_{op}|}\,\log\!\left(\frac{\mathsf{ct}(j,B_{i}^{l_{A}}(F,\mathcal{D}_{op}))/|\mathcal{D}_{op}|}{\mathsf{ct}(j,B_{i}^{l_{A}}(F,\mathcal{D}_{test}))/|\mathcal{D}_{test}|}\right)\leq\kappa \tag{6}\]
Definition 3: Given a positive constant \(\epsilon\), define \(\mathcal{D}_{op}\) and \(\mathcal{D}_{test}\) to be \(\epsilon\)-portion similar (subject to DNN \(F\), layer index \(l_{A}\), and binning function \(b_{N}^{c,\Delta}\)) if:
\[\forall i\in\{1,\ldots,d_{l}\},\forall j\in\{0,\ldots,N\}:\\ -\epsilon\leq\frac{\mathsf{ct}(j,B_{i}^{l_{A}}(F,\mathcal{D}_{op} ))}{|\mathcal{D}_{op}|}-\frac{\mathsf{ct}(j,B_{i}^{l_{A}}(F,\mathcal{D}_{test} ))}{|\mathcal{D}_{test}|}\leq\epsilon \tag{7}\]
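Definition 3 is directly computable from the binned multisets. A minimal per-neuron sketch (a hypothetical helper, applied to each monitored neuron \(i\) in turn):

```python
from collections import Counter

def epsilon_portion_similar(bins_op, bins_test, N, eps):
    """Single-neuron eps-portion check: compare relative bin frequencies
    of the operational and test multisets B_i over bins 0..N."""
    c_op, c_te = Counter(bins_op), Counter(bins_test)
    return all(
        abs(c_op[j] / len(bins_op) - c_te[j] / len(bins_test)) <= eps
        for j in range(N + 1)
    )

print(epsilon_portion_similar([0, 0, 1, 1], [0, 1, 0, 1], N=1, eps=0.01))  # True
print(epsilon_portion_similar([0, 0, 0, 1], [0, 1, 1, 1], N=1, eps=0.4))   # False
```

Unlike KL divergence, empty bins pose no problem here, since the difference of two relative frequencies is always defined.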
Table 1 summarizes the characteristics of our proposed distribution similarity measures. The definition of \(\kappa\)-KL similarity is based on the well-known definition of KL divergence in a discrete setting. As \(\frac{\mathsf{ct}(j,B_{i}^{l_{A}}(F,\mathcal{D}_{test}))}{|\mathcal{D}_{test}|}\) and \(\frac{\mathsf{ct}(j,B_{i}^{l_{A}}(F,\mathcal{D}_{op}))}{|\mathcal{D}_{op}|}\) are simply the relative frequencies of falling into a particular bin,
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Distribution similarity & Sensitive to \(|\mathcal{D}_{op}|\) & Non-emptiness in bin & Dist. reshaping via MILP \\ \hline \(\kappa\)-KL similar & no & needed & no \\ \hline \(\epsilon\)-portion similar & no & not needed & yes \\ \hline \end{tabular}
\end{table}
Table 1: Comparing two distribution similarity measures
Figure 1: An example of using bound propagation to conservatively estimate \(c\) and \(N\).
it is not sensitive to the size of \(\mathcal{D}_{op}\). Nevertheless, due to the nonlinear logarithm in the definition, the distribution reshaping problem with minimum data-point removal cannot naturally be formulated as a MILP, in contrast to distribution reshaping using \(\epsilon\)-portion similarity. It is also crucial to notice that the standard KL divergence is only defined over non-zero entries; in our setup, this implies that all bins should be non-empty for \(\kappa\)-KL similarity to be well-defined. Here we omit details, but one may employ some heuristics: (1) Omit bin \(j\) when neither \(\mathcal{D}_{test}\) nor \(\mathcal{D}_{op}\) contributes to it. (2) When \(\mathcal{D}_{test}\) contributes to bin \(j\) but \(\mathcal{D}_{op}\) does not, return "undefined" or "\(\infty\)" for the \(\kappa\)-KL similarity.
Finally, provided that \(\epsilon\)-portion similarity holds and \(\kappa\)-KL similarity is well-defined, one can derive a conservative value of \(\kappa\) for which \(\kappa\)-KL similarity is guaranteed to hold. This is accomplished by considering the worst case in which _every_ bin in operation exhibits the largest difference allowed by \(\epsilon\)-portion similarity (which is overly conservative, as the bin ratios in operation will then no longer sum to 1), and feeding that information into the definition of \(\kappa\)-KL similarity to derive a conservative bound on \(\kappa\).
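One way to realize this worst-case computation is the following sketch, which inflates every test-bin ratio by \(\epsilon\) before evaluating the KL sum (our reading of the construction; the paper omits the details):

```python
import math

def conservative_kappa(test_ratios, eps):
    """Overly conservative kappa bound: pretend every operational bin ratio
    exceeds its test counterpart by eps. The inflated ratios no longer sum
    to 1, which is exactly the acknowledged over-approximation."""
    return sum((p + eps) * math.log((p + eps) / p)
               for p in test_ratios if p > 0)

print(conservative_kappa([0.5, 0.5], 0.0))   # 0.0 (no allowed deviation)
```

For example, with two equally filled bins and \(\epsilon=0.1\), the bound evaluates to \(2\cdot 0.6\ln 1.2\approx 0.22\).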
Note that for simplicity, we have prepared our formulation such that the distribution similarity is defined based on considering all neurons in layer \(l_{A}\) and having a unified binning function for all neurons. The constraint can be easily relaxed to allow distribution similarity to be considered only on a subset of neurons as well as neurons on different layers.
Figure 2: Shaping the test set by removing or adding data points requires simultaneously considering the effect of multiple neurons. For example, although removing \(\mathsf{in}_{1}\) can reduce the count in bin \(j=2\) for \(i=1\), it may also undesirably reduce the count in bin \(j=4\) for \(i=2\). The counts in the Y-axis are not on the same scale but only show the tendency.
## 4 MILP Encoding for Distribution Reshaping
Provided that \(\mathcal{D}_{op}\) and \(\mathcal{D}_{test}\) are not similar in distribution, we are interested in finding a subset of \(\mathcal{D}_{test}\) that has a similar distribution to \(\mathcal{D}_{op}\). However, this is not an easy task, as demonstrated in the example in Figure 2: when one only looks at the distribution for \(i=1\) and removes data point \(\mathsf{in}_{1}\) from \(\mathcal{D}_{test}\) to create distribution similarity, this can have a negative impact on \(i=2\), where the reduction may not be desired.
Once such a subset is found, the distribution similarity allows us to estimate the performance; at the same time, keeping the subset as large as possible is desired. This leads to Definition 4, where we additionally introduce \(\mathcal{D}_{can}\) to restrict the set of data points considered as candidates for removal. When \(\mathcal{D}_{can}\) equals \(\mathcal{D}_{test}\), any data point within \(\mathcal{D}_{test}\) can be removed. As demonstrated below, in our MILP encoding scheme the number of 0-1 integer variables equals the size of \(\mathcal{D}_{can}\).
Definition 4 (Distribution Reshaping for \(\epsilon\)-portion similarity via Test Set Reduction): Provided that \(\mathcal{D}_{op}\) and \(\mathcal{D}_{test}\) are not \(\epsilon\)-portion similar, given \(\mathcal{D}_{can}\subseteq\mathcal{D}_{test}\), find \(\mathcal{D}_{can}^{opt}\subseteq\mathcal{D}_{can}\) such that
1. \(\forall i\in\{1,\ldots,d_{l}\}\), \(\mathcal{D}_{op}\) and \(\mathcal{D}_{test}\setminus\mathcal{D}_{can}^{opt}\) are \(\epsilon\)-portion similar.
2. The size of \(\mathcal{D}_{can}^{opt}\) is minimal among all multisets \(\mathcal{D}_{can}^{{}^{\prime}}\subseteq\mathcal{D}_{can}\) that also ensure the first condition.
If we take a data point \(\mathsf{in}\in\mathcal{D}_{can}\), pass it to the DNN to extract \(F^{l}(\mathsf{in})\), and apply the binning function to each computed neuron value \(F_{i}^{l}(\mathsf{in})\), we obtain a vector \(v_{\mathsf{in}}\stackrel{{\text{def}}}{{:=}}(b_{N}^{c,\Delta}(F_{1}^{l}(\mathsf{in})),\ldots,b_{N}^{c,\Delta}(F_{d_{l}}^{l}(\mathsf{in})))\in\{0,\ldots,N\}^{d_{l}}\), where the value in each dimension \(i\in\{1,\ldots,d_{l}\}\) contains the associated binning information for the \(i\)-th output.
For every \(\mathsf{in}\in\mathcal{D}_{can}\), our MILP encoding creates a 0-1 integer variable \(br_{\mathsf{in}}\) that controls the decision of removing data point \(\mathsf{in}\) from \(\mathcal{D}_{test}\).
* If \(br_{\mathsf{in}}=1\), then remove data point \(\mathsf{in}\) from \(\mathcal{D}_{test}\).
* If \(br_{\mathsf{in}}=0\), then keep data point \(\mathsf{in}\).
For neuron \(i\), recall that the number of elements originally in bin \(j\) equals \(\mathsf{ct}(j,B_{i}^{l_{A}}(F,\mathcal{D}_{test}))\). For a data point \(\mathsf{in}\in\mathcal{D}_{can}\) whose value \(F_{i}^{l_{A}}(\mathsf{in})\) falls into bin \(j\), removing the data point leaves \(\mathsf{ct}(j,B_{i}^{l_{A}}(F,\mathcal{D}_{test}))-br_{\mathsf{in}}\) elements in bin \(j\). Therefore, depending on which elements of \(\mathcal{D}_{can}\) are removed, the number of elements for neuron \(i\), bin \(j\) can be encoded as \(\mathsf{ct}(j,B_{i}^{l_{A}}(F,\mathcal{D}_{test}))-\sum_{\mathsf{in}\in\mathcal{D}_{can}\text{ s.t. }b_{N}^{c,\Delta}(F_{i}^{l}(\mathsf{in}))=j}br_{\mathsf{in}}\).
Lemma 2 (MILP encoding): _The problem in Definition 4 can be reduced to the following MILP problem._
_minimize_
\[\sum_{\text{in}\in\mathcal{D}_{can}}\text{br}_{\text{in}}\ \ \text{s.t.}\] \[\forall i\in\{1,\ldots,d_{l}\},\forall j\in\{0,\ldots,N\}:\] \[-\epsilon\leq\frac{\text{ct}(j,B_{i}^{l_{A}}(F,\mathcal{D}_{op})) }{|\mathcal{D}_{op}|}-\frac{\text{ct}(j,B_{i}^{l_{A}}(F,\mathcal{D}_{test}))- \sum_{\text{in}\in\mathcal{D}_{can}\ \text{s.t.}\ b_{N}^{c,\Delta}(F_{i}^{l}(\text{in}))=j}\text{br}_{\text{in}}}{| \mathcal{D}_{test}|-\sum_{\text{in}\in\mathcal{D}_{can}}\text{br}_{\text{in}}} \leq\epsilon \tag{8}\]
Proof: The encoding is straightforward; the only differences with Equation 7 are (1) the update of the denominator from \(|\mathcal{D}_{test}|\) to \(|\mathcal{D}_{test}|-\sum_{\text{in}\in\mathcal{D}_{can}}\text{br}_{\text{in}}\), reflecting the potential decrease in the number of data points, and (2) the update of the numerator by subtracting \(\sum_{\text{in}\in\mathcal{D}_{can}\ \text{s.t.}\ b_{N}^{c,\Delta}(f_{i}^{l}(\text{in}))=j}\text{br}_{\text{in}}\), counting the potential decrease of the number of items in bin \(j\). The remaining task is to ensure that the encoding does not lead to non-linear constraints. This holds: as \(\frac{\text{ct}(j,B_{i}^{l_{A}}(F,\mathcal{D}_{op}))}{|\mathcal{D}_{op}|}\) is a constant, one can rewrite the inequality by multiplying through by \(|\mathcal{D}_{test}|-\sum_{\text{in}\in\mathcal{D}_{can}}\text{br}_{\text{in}}\).
Example 2: Consider \(\mathcal{D}_{can}\stackrel{{\text{def}}}{{\coloneqq}}\langle \text{in}_{1},\text{in}_{2},\text{in}_{3}\rangle\), where for each element in \(\mathcal{D}_{can}\), its binning information is listed in Table 2. Then consider \(i=3\) and \(j=4\) in Equation 8, the inequality part can be rewritten into Equation 9, where \(\frac{\text{ct}(4,B_{3}^{l_{A}}(F,\mathcal{D}_{op}))}{|\mathcal{D}_{op}|}\), \(|\mathcal{D}_{test}|\), and \(\text{ct}(4,B_{3}^{l_{A}}(F,\mathcal{D}_{test}))\) are constants that can be computed before the MILP encoding. As \(\epsilon\) is also a constant, Equation 9 can be rewritten into two linear constraints.
\[-\epsilon\leq\frac{\text{ct}(4,B_{3}^{l_{A}}(F,\mathcal{D}_{op}))}{|\mathcal{D }_{op}|}-\frac{\text{ct}(4,B_{3}^{l_{A}}(F,\mathcal{D}_{test}))-\text{br}_{ \text{in}_{1}}-\text{br}_{\text{in}_{3}}}{|\mathcal{D}_{test}|-\text{br}_{ \text{in}_{1}}-\text{br}_{\text{in}_{2}}-\text{br}_{\text{in}_{3}}}\leq\epsilon \tag{9}\]
Remark 1: Whenever distribution reshaping does not involve multiple neurons, finding a subset of \(\mathcal{D}_{test}\) ensuring \(\epsilon\)-portion similarity can be done efficiently with a greedy algorithm, and therefore no MILP encoding is required. Reconsider the example in Figure 2 where the goal is only to perform reshaping on \(i=1\): one can simply remove \(\mathsf{in}_{1}\), as there is no side effect to consider.
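On tiny instances, the optimization of Definition 4 can be cross-checked by exhaustive search over removal sets — a brute-force stand-in for the MILP of Lemma 2 (in practice one would hand Equation 8 to a MILP solver). The toy data below is hypothetical and mirrors the multi-neuron interaction of Figure 2, where per-neuron greedy removal fails:

```python
from itertools import combinations

def portion_similar(vecs_op, vecs_test, N, eps):
    """eps-portion similarity over all neurons; each data point is
    represented by its bin vector v_in from {0, ..., N}^{d_l}."""
    d = len(vecs_op[0])
    return all(
        abs(sum(v[i] == j for v in vecs_op) / len(vecs_op)
            - sum(v[i] == j for v in vecs_test) / len(vecs_test)) <= eps
        for i in range(d) for j in range(N + 1)
    )

def reshape_test_set(vecs_op, vecs_test, cand, N, eps):
    """Smallest removal set D_can^opt (indices into vecs_test), by brute force."""
    for k in range(len(cand) + 1):            # try smaller removal sets first
        for removed in combinations(cand, k):
            kept = [v for t, v in enumerate(vecs_test) if t not in removed]
            if kept and portion_similar(vecs_op, kept, N, eps):
                return set(removed)
    return None

# removing points 0 and 3 jointly matches both neurons' bin frequencies,
# while no single removal suffices
op = [(0, 1), (1, 0)]
test = [(0, 1), (0, 1), (1, 0), (0, 0)]
print(reshape_test_set(op, test, cand=[0, 1, 2, 3], N=1, eps=0.1))  # {0, 3}
```

The exponential search space of this sketch is exactly why the MILP encoding with 0-1 variables is used for realistic sizes of \(\mathcal{D}_{can}\).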
## 5 Evaluation
We have implemented the concept and performed an initial feasibility study based on the MNIST dataset [10]. We use PyTorch [13] to train the DNN and Google OR-Tools to solve the generated MILP problem. For distribution reshaping, we take 20 neurons in close-to-output layers. To simulate covariate shift, we intentionally created the test dataset with significantly more examples of classes "7", "8" and "9" (around 20% each), leaving classes "1" to "5" with only a small portion. The created operational dataset consists of 5300 image samples, where the portion of "7", "8" and "9" is significantly
reduced, meaning that the assumption on the class-frequency distribution is incorrect. Recall that in our problem definition, one does not have access to the operational data and the associated ground-truth labels. The experimental setting here allows us to estimate whether distribution reshaping on neurons (representing the feature space) positively correlates with the data distribution reflected by the associated labels. For measuring similarity, we use the bin width \(\Delta=1\) and set \(\epsilon\) to 0.01. The size of the set \(\mathcal{D}_{can}\), i.e., the elements that may be removed for distribution reshaping, ranges from 7000 to 20000. This implies that the corresponding MILP problem has a maximum of 20000 binary integer variables. The whole program and the MILP solver are run on an Intel i5-9300H laptop equipped with 32GB RAM. Altogether, the time required to find the smallest set to remove for distribution reshaping is commonly below 15 minutes. Figure 3 shows the distributions for two of the considered neurons.
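The skewed class frequencies used in this setup can be simulated with a small sampling helper (a hypothetical sketch, not the authors' code; the weight values are illustrative):

```python
import random

def skew_by_class(labels, class_weights, size, seed=0):
    """Sample indices with per-class weights, mimicking the deliberately
    skewed class frequencies used to simulate covariate shift."""
    rng = random.Random(seed)
    idx = list(range(len(labels)))
    weights = [class_weights[labels[i]] for i in idx]
    return rng.choices(idx, weights=weights, k=size)

labels = [0] * 500 + [1] * 500
sample = skew_by_class(labels, {0: 9, 1: 1}, size=2000)
frac0 = sum(labels[i] == 0 for i in sample) / len(sample)   # about 0.9
```

Applying two different weight profiles to the same pool yields a "test" and an "operational" split whose label distributions disagree, as in the experiment.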
Figure 4 presents our preliminary results. In Figure 4(a), one can observe that, although we only perform distribution reshaping on neurons reflecting the feature level, the reshaped test data moves closer to the operational data when we consider the distribution of the relative frequencies of each class, suggesting that the SPI estimation on the reshaped test dataset can be more precise. Figure 4(b) shows an aggregated result over all conducted experiments, where the \(x\) coordinate characterizes the sum of the per-class ratio-differences between the operational and the reshaped test data, and the \(y\) coordinate characterizes the sum of the per-class ratio-differences between the operational and the original test data. Most of the points fall within the top-left region, hinting that the reshaped test data demonstrates a positive correlation with the label distribution of the operational data.
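The per-class ratio-difference used on both axes of the aggregated comparison can be computed as follows (a hypothetical helper name):

```python
def label_ratio_gap(labels_a, labels_b, classes):
    """Sum over classes of the absolute difference in relative label
    frequencies between two datasets."""
    return sum(
        abs(labels_a.count(c) / len(labels_a) - labels_b.count(c) / len(labels_b))
        for c in classes
    )

print(label_ratio_gap([0, 0, 1, 1], [0, 1, 1, 1], classes=[0, 1]))  # 0.5
```

A point lies in the favorable top-left region exactly when this gap is smaller for the reshaped test data than for the original test data.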
## 6 Concluding Remarks
We investigated the problem of estimating the safety performance in the presence of covariate shift based on test data and (feature-level) neuron value distributions - but not on operational data, which often is unavailable in real-world situations. Our main contribution is a MILP encoding for reshaping the test data to be similar to the (unknown) distribution of the operational data. This reshaped test data now serves as a proxy for evaluating the safety performance in the presence of covariate shift. With this approach, we may compute the distribution profile (as histograms) locally on the device of the DNN under investigation, thereby addressing possible privacy concerns. Initial experiments demonstrate the feasibility of this constraint-solving approach, but, clearly, more experience for more
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & \(b_{N}^{c,\Delta}(F_{1}^{I_{A}}(\cdot))\) & \(b_{N}^{c,\Delta}(F_{2}^{I_{A}}(\cdot))\) & \(b_{N}^{c,\Delta}(F_{3}^{I_{A}}(\cdot))\) \\ \hline \(\text{in}_{1}\) & 1 & 4 & 4 \\ \hline \(\text{in}_{2}\) & 2 & 1 & 2 \\ \hline \(\text{in}_{3}\) & 0 & 3 & 4 \\ \hline \end{tabular}
\end{table}
Table 2: An example for the MILP encoding
complex scenarios in real-world situations is needed. However, the maximum number of removable samples is restricted, as this number correlates with the number of 0-1 variables in the generated MILP problem. Constraint solving, in particular, needs to be accelerated considerably. Since the generated MILP encodings are highly stylized, specialized variable-branching heuristics or suitable polynomial-time approximation schemes may be developed. Finally, the problem of reshaping test data can be generalized to also adapt individual test data points to covariate shift based on their respective resilience bounds [2].
#### Acknowledgements
This work is supported by the StMWi Bayern as part of the project for the thematic development of the Fraunhofer IKS.
Figure 4: Effect of distribution reshaping, observed on the output label distribution
Figure 3: Qualitative result of distribution reshaping simultaneously on multiple neurons |
2306.05143 | Genomic Interpreter: A Hierarchical Genomic Deep Neural Network with 1D
Shifted Window Transformer | Given the increasing volume and quality of genomics data, extracting new
insights requires interpretable machine-learning models. This work presents
Genomic Interpreter: a novel architecture for genomic assay prediction. This
model outperforms the state-of-the-art models for genomic assay prediction
tasks. Our model can identify hierarchical dependencies in genomic sites. This
is achieved through the integration of 1D-Swin, a novel Transformer-based block
designed by us for modelling long-range hierarchical data. Evaluated on a
dataset containing 38,171 DNA segments of 17K base pairs, Genomic Interpreter
demonstrates superior performance in chromatin accessibility and gene
expression prediction and unmasks the underlying `syntax' of gene regulation. | Zehui Li, Akashaditya Das, William A V Beardall, Yiren Zhao, Guy-Bart Stan | 2023-06-08T12:10:13Z | http://arxiv.org/abs/2306.05143v2 | # Genomic Interpreter: A Hierarchical Genomic Deep Neural Network
###### Abstract
Given the increasing volume and quality of genomics data, extracting new insights requires interpretable machine-learning models. This work presents Genomic Interpreter: a novel architecture for genomic assay prediction. This model outperforms the state-of-the-art models for genomic assay prediction tasks. Our model can identify hierarchical dependencies in genomic sites. This is achieved through the integration of 1D-Swin, a novel Transformer-based block designed by us for modelling long-range hierarchical data. Evaluated on a dataset containing 38,171 DNA segments of 17K base pairs, Genomic Interpreter demonstrates superior performance in chromatin accessibility and gene expression prediction and unmasks the underlying'syntax' of gene regulation.1
Footnote 1: We make our source code for 1D-Swin publicly available at [https://github.com/Zehui127/ld-swin](https://github.com/Zehui127/ld-swin)
## 1 Introduction
Functional genomics uses a variety of assays to explore the roles of genes in a genome. These assays allow researchers to quantify gene expression (de Hoon & Hayashizaki, 2008), test chromatin accessibility, and understand gene regulation (Eraslan et al., 2019). In this paper:
* We introduce Genomic Interpreter, an attention-based model for genomic assay prediction.
* We design a task-agnostic hierarchical Transformer, 1D-Swin, for capturing long-range interaction in 1-D sequences.
* We show that our model performs better than the state-of-the-art.
* We further demonstrate that the hierarchical attention mechanism in Genomic Interpreter provides us with interpretability. This can help biologists to identify and validate relationships between different genomic sequences.
## 2 Related Work
**Deep Learning for genomic assay prediction** Deep Neural Networks (DNNs) have seen success in predicting genome-scale sequencing assays such as CAGE (Takahashi et al., 2012), DNase-seq (He et al., 2014), and ChIP-seq (Zhou et al., 2017). In genomic assay prediction, models are provided with a DNA sequence \(x\!\in\!\mathbb{R}^{n\times 4}\) and are required to forecast sequencing assay outputs \(y\!\in\!\mathbb{R}^{m\times T}\) for various tracks. Here, \(n\) is the DNA sequence length, \(4\) refers to the four nucleotides in DNA, \(m\) is the length of the output sequence, with each output value being an average read over a specific DNA segment, and \(T\) denotes the track count, i.e., the number of different assay outputs being predicted, such as specific sequencing assays performed in particular organisms. In these models, DNA sequences are encoded and transformed into high-dimensional vector representations. These encoded vectors are then processed through a series of linear layers to predict the corresponding real-valued assay readings. Existing genomic models can be divided into two categories: the first uses Convolutional Neural Networks (CNNs) and pooling layers as the encoder (Alipanahi et al., 2015; Kelley et al., 2018; Kelley, 2020); the second consists of CNNs and pooling layers followed by Transformer blocks, first proposed as Enformer by Avsec et al. (2021), which serves as the state-of-the-art model in regulatory DNA studies (Vaishnav et al., 2022).
Current state-of-the-art models predict coarse-grained read values, typically over 100 base pairs per read. This requires DNNs to reduce the spatial dimension and create a condensed representation of the input. This property is also shared by most computer vision tasks such as object detection (Girshick, 2015) and semantic segmentation (He et al., 2017). While CNNs and pooling layers have proven effective in the past, transformer-based architectures are more recently becoming popular and are outperforming CNN-based architectures (Dosovitskiy et al., 2020; Liu et al., 2021). The success of these models motivated us to design attention-based models for genomic assay prediction. Furthermore, by inspecting the Transformers' attention weights we can reveal genomic site dependencies (Ghotra et al., 2021), thus providing better interpretability than CNN-based approaches in assay prediction.
**Swin Transformers** The Shifted Window (Swin) Transformer builds hierarchical feature maps of input images through self-attention operations within a local window and shift operations (Liu et al., 2021). The computed feature maps are merged to higher layers, reducing the spatial size of inputs. Swin Transformer is efficient in processing long input sequences as self-attention is only computed within each window.
While the Swin Transformer and its variants have shown improved performance in computer vision tasks (Liu et al., 2022; Yuan et al., 2021), there is a lack of hierarchical transformer models for 1-dimensional data such as genomic sequences. The local-window approach has been adopted by long-range 1-D Transformer works such as Longformer (Beltagy et al., 2020) and Sparse Transformer (Child et al., 2019) to reduce the computation. However, no work has been done on reducing the spatial dimension in a hierarchical manner, a property crucial for tasks requiring spatial reduction, such as genomic assay prediction.
## 3 Method
### Model Architecture Overview
We propose a novel model architecture, termed Genomic Interpreter, which is designed to predict genomic assays. The Genomic Interpreter is made up of several parts: multiple 1-dimensional Swin (1D-Swin) blocks, a transformer block (Vaswani et al., 2017), and linear heads at the end. The structure is shown in Figure 1(a).
The process begins with an input sequence, which provides the initial tokens, denoted as \(x\!\in\!\mathbb{R}^{n\times d}\), where \(n\) is the token length and \(d\) is the dimension of each token. One pass through a 1D-Swin block transforms this matrix via \(f_{\text{1d Swin}}\!:\mathbb{R}^{n\times d}\!\to\!\mathbb{R}^{\frac{n}{2}\times\frac{2d}{\alpha_{0}}}\), where \(\alpha_{0}\) is the hidden-size scaling factor. When \(\alpha_{0}\) is set to 1, the hidden size of the token is doubled.
The transformed matrix is denoted as \(\mathbf{h}\). The number of 1D-Swin blocks (represented by K) is chosen to match the output length to the initial token length. If an exact match is not possible, a Crop operation is used to adjust to output length by removing certain elements from the ends of \(\mathbf{h}\).
The overall architecture of the Genomic Interpreter can be summarized by three equations:
\[\mathbf{h}_{K}\!=\!f_{\text{1d Swin}}(\mathbf{h}_{K-1}),\quad \mathbf{h}_{0}\!=\!\mathbf{x} \tag{1}\] \[\mathbf{h}_{\mathbf{c}}\!=\!f_{\text{crop}}(\mathbf{h}_{\mathbf{ K}})\] (2) \[\mathbf{y}\!=\!\text{LinearHeads}(\text{TransformerBlock}( \mathbf{h}_{\mathbf{c}})) \tag{3}\]
Here, \(\mathbf{h}_{K}\) represents the output after \(K\) iterations of the 1D-Swin transformation, with dimensions \(\frac{n}{2^{K}}\times\frac{2d}{(\alpha_{0}\alpha_{1}\dots\alpha_{K})}\).
Figure 1: (a) Genomic Interpreter: given an input sequence \(x\!\in\!\mathbb{R}^{16\times 4}\), the target is to predict \(3\!\times\!4\) read value arrays. \(x\) first traverses multiple 1D-Swin blocks. Each forward pass halves the token length; after two passes, the token number is reduced to 4. Then the resulting embedding is fed into Transformer Block and Linear Heads for final prediction. (b)Self-Attention Block Pairs: standard implementation is used for self-attention block
This output is then passed through the Transformer block and linear heads in a feedforward fashion, yielding a per-track prediction denoted as \(\mathbf{y}\). The dimensions of \(\mathbf{y}\) are \(m\times T\). As an example, \(m\!=\!4\) and \(T\!=\!3\) in Figure 1(a).
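To make the shape bookkeeping above concrete, the following is a minimal sketch of how the number of blocks \(K\) and the Crop operation interact; the function names (`one_dswin_shape`, `plan_blocks`) are ours, not from the paper:

```python
def one_dswin_shape(n, d, alpha=1.0):
    # One 1D-Swin block halves the token length and rescales the hidden
    # size by 2 / alpha (alpha = 1 doubles it), as described in the text.
    return n // 2, int(2 * d / alpha)

def plan_blocks(n, d, target_len, alpha=1.0):
    # Stack blocks until another halving would drop below the target
    # length; any remaining excess is removed by the Crop operation.
    k = 0
    while n // 2 >= target_len:
        n, d = one_dswin_shape(n, d, alpha)
        k += 1
    return k, (n, d), n - target_len  # (#blocks, final shape, #tokens cropped)
```

With the toy sizes of Figure 1(a), two blocks take 16 tokens down to 4 with nothing to crop; with the 17,712 bp segments and 80 output bins of the dataset in Section 3.3, seven halvings leave 138 tokens and the crop removes the excess.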
### 1D-Swin Block
The standard Transformer model's quadratic time-space complexity, denoted as \(\Omega(n^{2})\) for input tokens \(x\in\mathbb{R}^{n\times d}\), hinders its efficiency with long inputs like DNA sequences. The 1D-Swin Transformer reduces this complexity2 to \(\Omega(n)\) through two identical Multi-Head Attention (MHA) blocks.
Footnote 2: The time complexity of 1D-Swin depends upon a hyperparameter: the window size. It ranges from \(O(n)\) for a window size of 1 to \(O(n^{2})\) for a window size of \(n\)
The first Multi-Head Attention (MHA) block, as depicted in Figure 1(b), processes each window-sized subset of tokens to capture local dependencies. A rolling operation, together with the second MHA block, is then applied to capture cross-window dependencies. Finally, pair-wise concatenation and a linear transformation form the spatially reduced token set. For a rigorous definition of this process, please refer to Appendix A.
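The three steps above can be sketched in NumPy. This is a deliberately simplified single-head version: the real block uses learned query/key/value projections, multiple heads, and a learned merge matrix, whereas here the attention operates on the raw tokens and the final linear map is an identity stand-in:

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(x, w):
    # Single-head self-attention applied independently inside each
    # window of w tokens; x has shape (n, d) with n divisible by w.
    n, d = x.shape
    xw = x.reshape(n // w, w, d)                       # (n/w, w, d)
    scores = softmax(xw @ xw.transpose(0, 2, 1) / np.sqrt(d))
    return (scores @ xw).reshape(n, d)

def one_dswin_block(x, w=4):
    # (1) local attention within windows; (2) roll by half a window and
    # attend again to capture cross-window dependencies; (3) concatenate
    # neighbouring token pairs, halving the token length (alpha = 1, so
    # the hidden size doubles; the learned projection is omitted here).
    n, d = x.shape
    h = window_attention(x, w)
    h = np.roll(h, w // 2, axis=0)
    h = window_attention(h, w)
    h = np.roll(h, -(w // 2), axis=0)   # undo the roll before merging
    return h.reshape(n // 2, 2 * d)
```

A `(16, 4)` input thus becomes an `(8, 8)` output, matching the \(\mathbb{R}^{n\times d}\to\mathbb{R}^{\frac{n}{2}\times 2d}\) signature given earlier.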
### Multi-resolution genome sites dependency detection
Capturing the interactions between sub-sequences in the genome is crucial for genomics science. Stacked 1D-Swin blocks allow the Genomic Interpreter to build hierarchical representations of such genomic sequences. We can inspect the learned attention patterns to see how tokens interact with each other.
Specifically, as shown at the top of Figure-1(a), the self-attention scores between each token are extracted at each layer, serving as a fine-grained map showing how nucleotide sequences of different lengths interact with each other.
### Data
Understanding the dependencies between genomic sites requires a comprehensive dataset. The dataset provided by Enformer (Avsec et al., 2021) offers a broad spectrum of gene expression data. However, replicating their model training3 demands an unrealistic amount of computational resources for practical usage. To address this issue, we have developed a scaled-down version of genomic datasets named 'BasenjiSmall'.
Footnote 3: On the original dataset, training the full-size Enformer model requires 64\(\times\)3 TPU days
The 'BasenjiSmall' dataset comprises 38,171 data points \((X,Y)\), the same number of data points as the original Enformer dataset. Each data point consists of a DNA segment \(\mathbf{x}\) and assay reads \(\mathbf{y}\). \(\mathbf{x}\in\mathbb{R}^{17,712\times 4}\) represents a DNA segment spanning \(17,712\) base pairs (bp). \(\mathbf{y}\!\in\!\mathbb{R}^{80\times 5313}\) holds the coarse-grained read values over \(80\) bins of \(128\) bp each and 5313 tracks spanning various cell and assay types; these tracks include: (1) 675 DNase/ATAC tracks, (2) 4001 ChIP Histone and ChIP Transcription Factor tracks, and (3) 639 CAGE tracks.
This reduced dataset maintains the richness of gene expression data but is more manageable in terms of computational requirements.
### Training
We utilized the PyTorch Lightning framework with the Distributed Data Parallel (DDP) strategy for parallel training.
All models shown in Section 4 are implemented in PyTorch and trained on 3 NVIDIA A100-PCIE-40GB GPUs, with a training time of \(3\times 10\) GPU hours per model. The batch size is set to 8 per GPU, giving an effective batch size of 24. The Adam optimizer (Kingma & Ba, 2014) is used together with the learning rate scheduler CosineAnnealingLR (Gotmare et al., 2018) and a learning rate of 0.0003 to minimize the Poisson regression loss function for all models.
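For concreteness, the Poisson regression loss used here is the Poisson negative log-likelihood averaged over positions and tracks. A minimal NumPy stand-in (PyTorch's `nn.PoissonNLLLoss` with `log_input=False` computes essentially the same quantity, up to the dropped \(\log y!\) constant):

```python
import numpy as np

def poisson_loss(pred, target, eps=1e-8):
    # Poisson negative log-likelihood with the constant log(y!) term
    # dropped, averaged over all positions and tracks; pred must be > 0.
    pred = np.clip(pred, eps, None)
    return float(np.mean(pred - target * np.log(pred)))
```

The loss is minimized when the predicted rate equals the observed count, which is why it is the standard choice for count-valued assay reads.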
## 4 Results
### Gene Expression Prediction
We compared the performance of 1D-Swin with Enformer (Avsec et al., 2021) and other standard genomic assay prediction models using a hold-out test set from BasenjiSmall. This included models implementing CNNs with MaxPooling (Alipanahi et al., 2015), CNNs with AttentionPooling, and Dilated CNNs with Attention Pooling (Kelley et al., 2018). For a detailed view of the implementation process, refer to Appendix B.
Model performance is evaluated using the Pearson correlation between predictions and true values. Table 1 shows the evaluation results for five models across 1937 DNA segments with 5313 tracks, classified into DNase, ChIP, and CAGE groups. These results show that 1D-Swin outperforms other models overall, particularly in the DNase and ChIP groups.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model Name & DNase & ChIP & CAGE & Overall \\ \hline MaxPoolCNNs & 0.4996 & 0.4320 & 0.2581 & 0.4195 \\ AttPoolCNNs & 0.4970 & 0.4346 & 0.2041 & 0.4146 \\ AttPoolDilate & 0.4778 & 0.4395 & 0.1803 & 0.4130 \\ Enformer & 0.5462 & 0.4575 & **0.3307** & 0.4536 \\
1D-Swin & **0.5583** & **0.4670** & 0.3242 & **0.4614** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Pearson correlation of 5 models for genomic assay prediction on 5313 tracks, tracks are grouped into DNase, CHIP histone/transcription factor, and CAGE.
Figure 2 shows a pairwise comparison of 1D-Swin with other methods across track groups. Points above the reference line indicate superior performance by 1D-Swin. Figure 3 visualizes the gene expression predictions from two models for CD133 stem cells; the visualization makes it evident that the predictions provided by 1D-Swin match the true values more closely.
### Self-Attention and Gene Regulation
While attention does not equate to explanation (Pruthi et al., 2019), it helps to provide insights into token interaction. Genomic Interpreter leverages this capability to extract hierarchical self-attention values at varying resolutions, potentially unmasking the underlying 'syntax' of gene regulation.
In this hierarchical structure, each matrix position represents a single nucleotide at the first layer and \(2^{K-1}\) nucleotides by the Kth layer. Figure 4(a) illustrates this, showing that lower layers favour local attention while higher layers exhibit long-range token interactions.
This pattern likely reflects the reality of gene interaction at different levels. At the single-nucleotide level, interactions primarily occur between proximal tokens. As we increase the nucleotide length, longer nucleotide segments begin to form regulatory units such as enhancers and silencers (Maston et al., 2006) that can interact at increased distances.
We can interpret the models in more detail by looking at the attention between tokens at different heads. Figure 4(b) further reveals that distinct attention heads at layer 8 capture varying patterns. For instance, head1 primarily captures a 'look-back' pattern evident in the lower triangle attention, head5 seems to focus on long-range interactions, and head8 attends to immediate proximal interactions. Appendix C provides supplementary figures to visualize Attention Weights.
## 5 Conclusion and Future Work
This study presents a task-agnostic hierarchical transformer for one-dimensional data, underscoring the importance of efficient, comprehensive hierarchical models. When applied to genomic assay prediction, the Genomic Interpreter outperforms conventional models while providing interpretable insights.
As a transformer-based architecture, Genomic Interpreter can be reinforced with pretraining (Hendrycks et al., 2020) for improving out-of-distribution prediction and attention flow (Abnar and Zuidema, 2020) for mapping the obtained attention weights to the original input sequence. While genomic science has emphasized the importance of understanding hierarchical structures, this concept is also critical in other fields, including Natural Language Processing (NLP) where language understanding relies heavily on hierarchical concept interpretation. As a result, 1D-Swin may have the potential to be applied to these fields for capturing long-range, hierarchical information.
Figure 4: (a) Attention weights from different 1D-Swin layers reveal more off-diagonal dependency at higher levels of the hierarchy. (b) Attention weights of different heads on the 8th layer learn to capture various features.
Figure 3: Given a DNA segment, two models make predictions for the read values of the CAGE-CD133 track.
Figure 2: Pairwise Comparison between 1D-Swin and Competing Models: The Pearson correlation for each track is calculated across all DNA segments within the test set. Mean Pearson correlations are denoted on Y-axis for 1D-Swin and X-axis for the reference models.
## Acknowledgements
This work was performed using the Sulis Tier 2 HPC platform hosted by the Scientific Computing Research Technology Platform at the University of Warwick, and the JADE Tier 2 HPC facility. Sulis is funded by EPSRC Grant EP/T022108/1 and the HPC Midlands+ consortium. JADE is funded by EPSRC Grant EP/T022205/1. Zehui Li acknowledges the funding from the UKRI 21EBTA: EB-AI Consortium for Bioengineered Cells & Systems (AI-4-EB) award, Grant BB/W013770/1. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
|
2305.13271 | MAGDiff: Covariate Data Set Shift Detection via Activation Graphs of
Deep Neural Networks | Despite their successful application to a variety of tasks, neural networks
remain limited, like other machine learning methods, by their sensitivity to
shifts in the data: their performance can be severely impacted by differences
in distribution between the data on which they were trained and that on which
they are deployed. In this article, we propose a new family of representations,
called MAGDiff, that we extract from any given neural network classifier and
that allows for efficient covariate data shift detection without the need to
train a new model dedicated to this task. These representations are computed by
comparing the activation graphs of the neural network for samples belonging to
the training distribution and to the target distribution, and yield powerful
data- and task-adapted statistics for the two-sample tests commonly used for
data set shift detection. We demonstrate this empirically by measuring the
statistical powers of two-sample Kolmogorov-Smirnov (KS) tests on several
different data sets and shift types, and showing that our novel representations
induce significant improvements over a state-of-the-art baseline relying on the
network output. | Charles Arnal, Felix Hensel, Mathieu Carrière, Théo Lacombe, Hiroaki Kurihara, Yuichi Ike, Frédéric Chazal | 2023-05-22T17:34:47Z | http://arxiv.org/abs/2305.13271v2 | # MAGDiff: Covariate Data Set Shift Detection via Activation Graphs of Deep Neural Networks
###### Abstract
Despite their successful application to a variety of tasks, neural networks remain limited, like other machine learning methods, by their sensitivity to shifts in the data: their performance can be severely impacted by differences in distribution between the data on which they were trained and that on which they are deployed. In this article, we propose a new family of representations, called MAGDiff, that we extract from any given neural network classifier and that allows for efficient covariate data shift detection without the need to train a new model dedicated to this task. These representations are computed by comparing the activation graphs of the neural network for samples belonging to the training distribution and to the target distribution, and yield powerful data- and task-adapted statistics for the two-sample tests commonly used for data set shift detection. We demonstrate this empirically by measuring the statistical powers of two-sample Kolmogorov-Smirnov (KS) tests on several different data sets and shift types, and showing that our novel representations induce significant improvements over a state-of-the-art baseline relying on the network output.
## 1 Introduction
During the last decade, neural networks (NN) have become immensely popular, reaching state-of-the-art performances in a wide range of situations. Nonetheless, once deployed in real-life settings, NN can face various challenges such as being subject to adversarial attacks [13], being exposed to out-of-distributions samples (samples that were not presented at training time) [11], or more generally being exposed to a _distribution shift_: when the distribution of inputs differs from the training distribution (_e.g._, input objects are exposed to a corruption due to deterioration of measure instruments such as cameras or sensors). Such distribution shifts are likely to degrade performances of presumably well-trained models [31], and being able to detect such shifts is a key challenge in monitoring NN once deployed in real-life applications. Though shift detection for univariate variables is a well-studied problem, the task gets considerably harder with high-dimensional data, and seemingly reasonable methods often end up performing poorly [23].
In this work, we introduce the Mean Activation Graph Difference (MAGDiff), a new approach that harnesses the powerful dimensionality reduction capacity of deep neural networks in a data- and task-adapted way. The key idea, further detailed in Section 4, is to consider the activation graphs generated by inputs as they are processed by a neural network that has already been trained for a classification task, and to compare such graphs to those associated to samples from the training distribution. The
method can thus be straightforwardly added as a diagnostic tool on top of preexisting classifiers without requiring any further training; it is easy to implement and computationally inexpensive. As the activation graphs depend on the network weights, which in turn have been trained for the data and task at hand, one can also hope for them to capture information that is most relevant to the context. Hence, our method can easily support, and benefit from, any improvements in deep learning.
Our approach is to be compared to _Black box shift detection_ (BBSD), a method introduced in [20; 22] that shares a similar philosophy. BBSD uses the output of a trained classifier to efficiently detect various types of shifts (see also Section 4); in their experiments, BBSD generally beats other methods, the runner-up being a much more complex and computationally costly multivariate two-sample test combining an autoencoder and the Maximum Mean Discrepancy statistic [7].
Our contributions are summarized as follows.
1. Given any neural network classifier, we introduce a new family of representations MAGDiff, that is obtained by comparing the activation graphs of samples to the mean activation graph of each class in the training set.
2. We propose to use MAGDiff as a statistic for data set shift detection. More precisely, we combine our representations with the statistical method that was proposed and applied to the Confidence Vectors (CV) of classifiers in [20], yielding a new method for shift detection.
3. We experimentally show that our shift detection method with MAGDiff outperforms the state-of-the-art BBSD with CV on a variety of datasets, covariate shift types and shift intensities, often by a wide margin. Our code is publicly available1. Footnote 1: [https://github.com/hensel-f/MAGDiff_experiments](https://github.com/hensel-f/MAGDiff_experiments)
## 2 Related Work
Detecting changes or outliers in data can be approached from the angle of anomaly detection, a well-studied problem [5], or out-of-distribution (OOD) sample detection [26]. Among techniques that directly frame the problem as shift detection, kernel-based methods such as Maximum Mean Discrepancy (MMD) [7; 34] and Kernel Mean Matching (KMM) [8; 35] have proved popular, though they scale poorly with the dimensionality of the data [23]. Using classifiers to test whether samples coming from two distributions can be correctly labeled, hence whether the distributions can be distinguished, has also been attempted; see, _e.g._, [16]. The specific cases of covariate shift [14; 29; 22] and label shift [28; 20] have been further investigated, from the point of view of causality and anticausality [25]. Moreover, earlier investigations of similar questions have arisen from the fields of economics [10] and epidemiology [24].
Among the works cited above, [20] and [22] are of particular interest to us. In [20], the authors detect label shifts using shifts in the distribution of the outputs of a well-trained classifier; they call this method Black Box Shift Detection (BBSD). In [22], the authors observe that BBSD tends to generalize very well to covariate shifts, though without the theoretical guarantees it enjoys in the label shift case. Our method is partially related to BBSD. Roughly summarized, we apply similar statistical tests--combined univariate Kolmogorov-Smirnov tests--to different features--Confidence Vectors (CV) in the case of BBSD, distances to mean activation graphs (MAGDiff) in ours. Similar statistical ideas have also been explored in [3] and [4], while neural network activation graph features have been studied in, _e.g._, [18] and [12]. The related issue of the robustness of various algorithms to diverse types of shifts has been recently investigated in [32].
## 3 Background
### Shift Detection with Two-Sample Tests
There can often be a shift between the distribution \(\mathbb{P}_{0}\) of data on which a model has been trained and tested and the distribution \(\mathbb{P}_{1}\) of the data on which it is used after deployment; many factors can cause such a shift, _e.g._, a change in the environment, in the data acquisition process, or the training set being unrepresentative. Detecting shifts is crucial to understanding, and possibly correcting, potential losses in performance; even shifts that do not strongly impact accuracy can be important symptoms of inaccurate assumptions or changes in deployment conditions.
Additional assumptions can sometimes be made on the nature of the shift. In the context of a classification task, where data points are of the shape \((x,y)\) with \(x\) the feature vector and \(y\) the label, a shift that preserves the conditional distribution \(p(x|y)\) (but allows the proportion of each label to vary) is called _label shift_. Conversely, a _covariate shift_ occurs when \(p(y|x)\) is preserved, but the distribution of \(p(x)\) is allowed to change. In this article, we focus on the arguably harder case of covariate shifts. See Section 5 for examples of such shifts in numerical experiments.
Shifts can be detected using _two-sample tests_: that is, a statistical test that aims at deciding between the two hypotheses
\[H_{0}:\mathbb{P}_{0}=\mathbb{P}_{1}\text{ and }H_{1}:\mathbb{P}_{0}\neq\mathbb{ P}_{1},\]
given two random sets of samples, \(X_{0}\) and \(X_{1}\), independently drawn from two distributions \(\mathbb{P}_{0}\) and \(\mathbb{P}_{1}\). To do so, many statistics have been derived, depending on the assumptions made on \(\mathbb{P}_{0}\) and \(\mathbb{P}_{1}\).
In the case of distributions supported on \(\mathbb{R}\), one such test is the _univariate Kolmogorov-Smirnov (KS) test_, of which we make use in this article. Given, as above, two sets of samples \(X_{0},X_{1}\subset\mathbb{R}\), consider the empirical distribution functions \(F_{i}(z)\coloneqq\frac{1}{\text{Card}(X_{i})}\sum_{x\in X_{1}}1_{x\leq z}\) for \(i=0,1\) and \(z\in\mathbb{R}\). Then the statistic associated with the KS test and the samples is \(T\coloneqq\sup_{z\in\mathbb{R}}|F_{0}(z)-F_{1}(z)|\). If \(\mathbb{P}_{0}=\mathbb{P}_{1}\), the distribution of \(T\) is independent of \(\mathbb{P}_{0}\) and converges to a known distribution when the sizes of the samples tend to infinity (under mild assumptions) [27]. Hence approximate \(p\)-values can be derived.
The KS test can also be used to compare multivariate distributions: if \(\mathbb{P}_{0}\) and \(\mathbb{P}_{1}\) are distributions on \(\mathbb{R}^{D}\), a \(p\)-value \(p_{i}\) can be computed from the samples by comparing the \(i\)-th entries of the vectors of \(X_{0},X_{1}\subset\mathbb{R}^{D}\) using the univariate KS test, for \(i=1,\ldots,D\). A standard and conservative way of combining those \(p\)-values is to reject \(H_{0}\) if \(\min(p_{1},\ldots,p_{D})<\alpha/D\), where \(\alpha\) is the significance level of the test. This is known as the _Bonferroni correction_[30]. Other tests tackle the multidimensionality of the problem more directly, such as the _Maximum Mean Discrepancy (MMD) test_, though not necessarily with greater success (see, _e.g._, [23]).
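The coordinate-wise procedure with Bonferroni correction can be sketched as follows, assuming SciPy is available for the univariate test (`scipy.stats.ks_2samp`); the function name `multi_ks_shift_test` is ours:

```python
import numpy as np
from scipy.stats import ks_2samp

def multi_ks_shift_test(X0, X1, alpha=0.05):
    # One univariate two-sample KS test per coordinate of the samples
    # X0, X1 in R^D, with Bonferroni correction: reject H0 (no shift)
    # iff min_i p_i < alpha / D.
    D = X0.shape[1]
    pvals = np.array([ks_2samp(X0[:, i], X1[:, i]).pvalue for i in range(D)])
    return bool(pvals.min() < alpha / D), pvals
```

With a clear mean shift in a single coordinate and a few hundred samples per side, the corrected test rejects \(H_{0}\), while identical samples are (deterministically) not rejected.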
### Neural Networks
We now recall the basics of _neural networks_ (NN), which will be our main object of study. We define a neural network2 as a (finite) sequence of functions called _layers_\(f_{1},\ldots,f_{L}\) of the form \(f_{\ell}\colon\mathbb{R}^{n_{\ell}}\to\mathbb{R}^{n_{\ell+1}},x\mapsto \sigma_{\ell}(W_{\ell}\cdot x+b_{\ell})\), where the parameters \(W_{\ell}\in\mathbb{R}^{n_{\ell+1}\times n_{\ell}}\) and \(b_{\ell}\in\mathbb{R}^{n_{\ell+1}}\) are called the weight matrix and the bias vector respectively, and \(\sigma_{\ell}\) is an (element-wise) activation map (_e.g._, sigmoid or ReLU). The neural network encodes a map \(F\colon\mathbb{R}^{d}\to\mathbb{R}^{D}\) given by \(F=f_{L}\circ\cdots\circ f_{1}\). We sometimes use \(F\) to refer to the neural network as a whole, though it has more structure.
Footnote 2: While our exposition is restricted to sequential neural networks for the sake of concision, our representations are well-defined for other types of neural nets (_e.g._, recurrent neural nets).
When the neural network is used as a classifier, the last activation function \(\sigma_{L}\) is often taken to be the _softmax_ function, so that \(F(x)_{i}\) can be interpreted as the confidence that the network has in \(x\) belonging to the \(i\)-th class, for \(i=1,\ldots,D\). For this reason, we use the terminology _confidence vector_ (CV) for the output \(F(x)\in\mathbb{R}^{D}\). The true class of \(x\) is represented by a label \(y=(0,\ldots,0,1,0,\ldots,0)\in\mathbb{R}^{D}\) that takes value \(1\) at the coordinate indicating the correct class and \(0\) elsewhere. The parameters of each layer \((W_{\ell},b_{\ell})\) are typically learned from a collection of training observations and labels \(\{(x_{n},y_{n})\}_{n=1}^{N}\) by minimizing a cross-entropy loss through gradient descent, in order to make \(F(x_{n})\) as close to \(y_{n}\) as possible on average over the training set. The _prediction_ of the network on a new observation \(x\) is then given by \(\arg\max_{i=1,\ldots,D}F(x)_{i}\), and its (test) _accuracy_ is the proportion of correct predictions on a new set of observations \(\{(x^{\prime}_{n},y^{\prime}_{n})\}_{n=1}^{N^{\prime}}\), that is assumed to have been independently drawn from the same distribution as the training observations.
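The forward pass producing a confidence vector and a prediction can be sketched in a few lines of NumPy; the weights below are random placeholders standing in for trained parameters:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def confidence_vector(x, layers):
    # layers: list of (W, b, activation) triples; the final activation is
    # softmax, so the output entries can be read as class confidences.
    for W, b, act in layers:
        x = act(W @ x + b)
    return x

relu = lambda z: np.maximum(z, 0.0)
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 5)), np.zeros(8), relu),     # f_1: R^5 -> R^8
          (rng.normal(size=(3, 8)), np.zeros(3), softmax)]  # f_2: R^8 -> R^3
cv = confidence_vector(rng.normal(size=5), layers)          # F(x) in R^D, D = 3
prediction = int(np.argmax(cv))                             # argmax_i F(x)_i
```

The BBSD baseline discussed below applies its statistical tests directly to `cv`; our method instead looks one step earlier, at the activation graphs.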
In this work, we consider NN classifiers that have already been trained on some training data and that achieve reasonable accuracies on test data following the same distribution as training data.
### Activation Graphs
Given an instance \(x=x_{0}\in\mathbb{R}^{d}\), a trained neural network \(f_{1},\ldots,f_{L}\) with \(x_{\ell+1}=f_{\ell}(x_{\ell})=\sigma_{\ell}(W_{\ell}\cdot x_{\ell}+b_{\ell})\) and a layer \(f_{\ell}\colon\mathbb{R}^{n_{\ell}}\longrightarrow\mathbb{R}^{n_{\ell+1}}\), we can define a weighted graph, called the _activation graph_\(G_{\ell}(x)\) of \(x\) for the layer \(f_{\ell}\), as follows. We let \(V\coloneqq V_{\ell}\sqcup V_{\ell+1}\) for two sets \(V_{\ell}\) and \(V_{\ell+1}\) of cardinality \(n_{\ell}\) and \(n_{\ell+1}\) respectively. The edges are defined as \(E\coloneqq V_{\ell}\times V_{\ell+1}\). To each edge
\((i,j)\in E\), we associate the weight \(w_{i,j}(x)\coloneqq W_{\ell}(j,i)\cdot x_{\ell}(i)\), where \(x_{\ell}(i)\) (resp. \(W_{\ell}(j,i)\)) denotes the \(i\)-th coordinate of \(x_{\ell}\in\mathbb{R}^{n_{\ell}}\) (resp. entry \((j,i)\) of \(W_{\ell}\in\mathbb{R}^{n_{\ell+1}\times n_{\ell}}\)). The activation graph \(G_{\ell}(x)\) is the weighted graph \((V,E,\{w_{i,j}(x)\})\), which can be conveniently represented as an \(n_{\ell}\times n_{\ell+1}\) matrix whose entry \((i,j)\) is \(w_{i,j}(x)\). Intuitively, these activation graphs--first considered in [6]--represent how the network "reacts" to a given observation \(x\) at the inner level, rather than only considering the network output (_i.e._, the Confidence Vector).
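In matrix form, the activation graph is just an element-wise scaling of the transposed weight matrix by the layer input, e.g.:

```python
import numpy as np

def activation_graph(x_l, W_l):
    # Weight of edge (i, j): w_{i,j}(x) = W_l(j, i) * x_l(i), stored as
    # the n_l x n_{l+1} matrix described in the text.
    return x_l[:, None] * W_l.T

x_l = np.array([1.0, 2.0])            # layer input, n_l = 2
W_l = np.array([[3.0, 4.0],
                [5.0, 6.0],
                [7.0, 8.0]])          # weight matrix mapping R^2 -> R^3
G = activation_graph(x_l, W_l)        # shape (n_l, n_{l+1}) = (2, 3)
```

Note that the bias \(b_{\ell}\) and activation \(\sigma_{\ell}\) play no role in the edge weights; only the pre-activation products \(W_{\ell}(j,i)\cdot x_{\ell}(i)\) are recorded.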
## 4 Two-Sample Statistical Tests using MAGDiff
### The MAGDiff representations
Let \(\mathbb{P}_{0}\) and \(\mathbb{P}_{1}\) be two distributions for which we want to test \(H_{0}:\mathbb{P}_{0}=\mathbb{P}_{1}\). As mentioned above, two-sample statistical tests tend to underperform when used directly on high-dimensional data. It is thus common practice to extract lower-dimensional representations \(\Psi(x)\) from the data \(x\sim\mathbb{P}_{i}\), where \(\Psi\colon\operatorname{supp}\,\mathbb{P}_{0}\cup\operatorname{supp}\, \mathbb{P}_{1}\to\mathbb{R}^{N}\). Given a classification task with classes \(1,\ldots,D\), we define a family of such representations as follows. Let \(T\colon\operatorname{supp}\,\mathbb{P}_{0}\cup\operatorname{supp}\,\mathbb{P }_{1}\to V\) be any map whose codomain \(V\) is a Banach space with norm \(\|\cdot\|_{V}\). For each class \(i\in\{1,\ldots,D\}\), let \(\mathbb{P}_{0,i}\) be the conditional distribution of data points from \(\mathbb{P}_{0}\) in class \(i\). We define
\[\Psi_{i}(x)\coloneqq\|T(x)-\mathbb{E}_{\mathbb{P}_{0,i}}[T(x^{\prime})]\|_{V}\]
for \(x\in\operatorname{supp}\,\mathbb{P}_{0}\cup\operatorname{supp}\,\mathbb{P}_{1}\). Given a fixed finite dataset \(x_{1},\ldots,x_{m}\stackrel{{\text{iid}}}{{\sim}}\mathbb{P}_{0}\), we similarly define the approximation
\[\tilde{\Psi}_{i}(x)\coloneqq\left\|T(x)-\frac{1}{m_{i}}\sum_{j=1}^{m_{i}}T(x _{j}^{i})\right\|_{V}\,,\]
where \(x_{1}^{i},\ldots,x_{m_{i}}^{i}\) are the points of the dataset whose class is \(i\). This defines a map \(\tilde{\Psi}\colon\operatorname{supp}\,\mathbb{P}_{0}\cup\operatorname{supp} \,\mathbb{P}_{1}\to\mathbb{R}^{D}\).
The map \(T\colon\operatorname{supp}\,\mathbb{P}_{0}\cup\operatorname{supp}\,\mathbb{P }_{1}\to V\) could _a priori_ take many shapes. In this article, we assume that we are provided with a neural network \(F\) that has been trained for the classifying task at hand, as well as a training set drawn from \(\mathbb{P}_{0}\). We let \(T\) be the activation graph \(G_{\ell}\) of the layer \(f_{\ell}\) of \(F\) represented as a matrix, so that the expected values \(\mathbb{E}_{\mathbb{P}_{0,i}}[G_{\ell}(x^{\prime})]\) (for \(i=1,\ldots,D\)) are simply mean matrices, and the norm \(\|\cdot\|_{V}\) is the Frobenius norm \(\|\cdot\|_{2}\). We call the resulting \(D\)-dimensional representation _Mean Activation Graph Difference_ (MAGDiff):
\[\texttt{MAGDiff}(x)_{i}\coloneqq\left\|G_{\ell}(x)-\frac{1}{m_{i}}\sum_{j=1}^ {m_{i}}G_{\ell}(x_{j}^{i})\right\|_{2}\,,\]
for \(i=1,\ldots,D\), where \(x_{1}^{i},\ldots,x_{m_{i}}^{i}\) are, as above, samples of the training set whose class is \(i\). Therefore, for a given new observation \(x\), we derive a vector MAGDiff\((x)\in\mathbb{R}^{D}\) whose \(i\)-th coordinate indicates whether \(x\) activates the chosen layer of the network in a similar way "as training observations of the class \(i\)".
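A minimal NumPy sketch of this computation, assuming the layer inputs \(x_{\ell}\) have already been obtained by running each sample through the first \(\ell-1\) layers (the function names are ours):

```python
import numpy as np

def activation_graph(x_l, W_l):
    # Entry (i, j) is w_{i,j}(x) = W_l(j, i) * x_l(i).
    return x_l[:, None] * W_l.T

def magdiff(x_l, train_inputs, train_labels, W_l, num_classes):
    # MAGDiff(x)_i = || G_l(x) - mean over class-i training points of G_l ||_F
    G = activation_graph(x_l, W_l)
    out = np.empty(num_classes)
    for i in range(num_classes):
        class_graphs = [activation_graph(t, W_l)
                        for t in train_inputs[train_labels == i]]
        out[i] = np.linalg.norm(G - np.mean(class_graphs, axis=0))
    return out
```

In practice the per-class mean graphs are computed once from the training set and cached, so evaluating MAGDiff on a new sample costs a single activation graph and \(D\) Frobenius norms.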
Many variations are possible within that framework. One could, _e.g._, consider the activation graph of several consecutive layers, use another matrix norm, or apply Topological Data Analysis techniques to compute a more compact representation of the graphs, such as the _topological uncertainty_[18]. In this work, we focus on MAGDiff for dense layers, though it could be extended to other types.
### Comparison of distributions of features with multiple KS tests
Given as above a (relatively low-dimensional) representation \(\Psi\colon\operatorname{supp}\,\mathbb{P}_{0}\cup\operatorname{supp}\, \mathbb{P}_{1}\to\mathbb{R}^{N}\) and samples \(x_{1},\ldots,x_{n}\stackrel{{\text{iid}}}{{\sim}}\mathbb{P}_{0}\) and \(x_{1}^{\prime},\ldots,x_{m}^{\prime}\stackrel{{\text{iid}}}{{\sim}} \mathbb{P}_{1}\), one can apply multiple univariate (coordinate-wise) KS tests with Bonferroni correction to the sets \(\Psi(x_{1}),\ldots,\Psi(x_{n})\) and \(\Psi(x_{1}^{\prime}),\ldots,\Psi(x_{m}^{\prime})\), as described in Section 3. If \(\Psi\) is well-chosen, a difference between the distributions \(\mathbb{P}_{0}\) and \(\mathbb{P}_{1}\) (hard to test directly due to the dimensionality of the data) will translate to a difference between the distributions \(\Psi(x)\) and \(\Psi(x^{\prime})\) for \(x\sim\mathbb{P}_{0}\) and \(x^{\prime}\sim\mathbb{P}_{1}\) respectively. Detecting such a difference serves as a proxy for testing \(H_{0}:\mathbb{P}_{0}=\mathbb{P}_{1}\). In our experiments, we apply this procedure to the
MAGDiff representations defined above (see Section 5.1 for a step-by-step description). This is a reasonable approach, as it is a simple fact that a generic shift in the distribution of the random variable \(x\sim\mathbb{P}_{0}\) will in turn induce a shift in the distribution of \(\Psi(x)\), as long as \(\Psi\) is not constant3; however, this does not give us any true guarantee, as it does not provide any quantitative result regarding the shift in the distribution of \(\Psi(x)\). Such results are beyond the scope of this paper, in which we focus on the good experimental performance of the MAGDiff statistic.
Footnote 3: See Appendix C for an elementary proof.
### Differences from BBSD and motivations
The BBSD method described in [20] and [22] is defined in a similar manner, except that the representations \(\Psi\) on which the multiple univariate KS tests are applied are simply the Confidence Vectors (CV) \(F(x)\in\mathbb{R}^{D}\) of the neural network \(F\) (or of any other classifier that outputs confidence vectors), rather than our newly proposed MAGDiff representations. In other words, they detect shifts in the distribution of the inputs by testing for shifts in the distribution of the outputs of a given classifier4.
Footnote 4: This corresponds to the best-performing variant of their method, denoted as _BBSDs_ (as opposed to, _e.g._, _BBSDh_) in [22].
Both our method and theirs share advantages: the features are task- and data-driven, as they are derived from a classifier that was trained for the specific task at hand. They do not require the design or the training of an additional model specifically geared towards shift detection, and they have favorable algorithmic complexity, especially compared to some kernel-based methods. In particular, combining the KS tests with the Bonferroni correction spares us from having to calibrate our statistical tests with a permutation test, which can be costly as shown in [22]. A common downside is that the Bonferroni correction can be overly conservative; other tests might offer higher power. The main focus of this article is the relevance of the MAGDiff representations, rather than the statistical tests that we apply to them, and it has been shown in [22] that KS tests yield state-of-the-art performances; as such, we did not investigate alternatives, though additional efforts in that direction might produce even better results.
The construction of the MAGDiff representations is geared towards shift detection, since it directly encodes differences (_i.e._, deviations) from the mean activation graphs (of \(\mathbb{P}_{0}\)). Moreover, they are based on representations from deeper within the NN, which are less compressed than the confidence vectors: passing through each layer leads to a potential loss of information regarding the differences between the training and the test distributions. Hence, we can hope for MAGDiff to encode more information from the input data than the CV representations used in [22], which focus on the class to which a sample belongs, while sharing the same favorable dimensionality-reduction properties. Therefore, we expect MAGDiff to perform particularly well with covariate shifts, where shifts in the distribution of the data do not necessarily translate to strong shifts in the distribution of the CV. Conversely, we do not hope for our representations to bring significant improvements over CV in the specific case of label shifts; all the information relative to labels available to the network is, in a sense, best summarized in the CV, as this is the main task of the NN. These expectations were confirmed in our experiments.
## 5 Experiments
This experimental section is devoted to showcasing the use of the MAGDiff representations and their benefits over the well-established CV baseline when it comes to performing covariate shift detection. As detailed in Section 5.1, we combine coordinate-wise KS tests for both of these representations. Note that in the case of CV, this corresponds exactly to the method termed _BBSDs_ in [22]. Our code is publicly available5.
Footnote 5: [https://github.com/hensel-f/MAGDiff_experiments](https://github.com/hensel-f/MAGDiff_experiments)
### Experimental Settings
To illustrate the versatility of our approach, we ran our experiments on various datasets, network architectures, and types of shifts, with various levels of intensity. A comprehensive presentation of these datasets and parameters can be found in the Appendix.
Datasets.We consider the standard datasets MNIST [19], FashionMNIST (FMNIST) [33], CIFAR-10 [17], SVHN [21], as well as a lighter version of ImageNet (restricted to \(10\) classes) called Imagenette [1].
Architectures.For MNIST and FMNIST, we used a simple CNN architecture consisting of 3 convolutional layers followed by 4 dense layers. For CIFAR-10 and SVHN, we considered (a slight modification, to account for input images of size \(32\times 32\), of) the ResNet18 architecture [9]. For Imagenette, we used a pretrained ResNet18 model provided by Pytorch [2]. With these architectures, we reached a test accuracy of \(98.6\%\) on MNIST, \(91.1\%\) on FMNIST, \(94.1\%\) on SVHN, \(81\%\) on CIFAR-10 and \(99.2\%\) for Imagenette, validating the "well-trained" assumption mentioned in Section 4. Note that we used simple architectures, without requiring the networks to achieve state-of-the-art accuracy.
Shifts.We applied three types of shift to our datasets: Gaussian noise (additive white noise), Gaussian blur (convolution with a Gaussian kernel), and Image shift (random combination of rotation, translation, zoom and shear), at six increasing intensity levels (denoted by I, II,...,VI), and with a fraction of shifted data \(\delta\in\{0.25,0.5,1.0\}\). For each dataset and shift type, we chose the shift intensities such that the shift is almost undetectable by both methods (MAGDiff and CV) at the lowest intensities and small \(\delta\), and very easily detectable at high intensities and large values of \(\delta\). Details (including the impact of the shifts on model accuracy) and illustrations can be found in Appendix B.
Sample size.We ran the shift detection tests with sample sizes6\(\{10,20,50,100,200,500,1000\}\) to assess how many samples a given method requires to reliably detect a distribution shift. A good method should be able to detect a shift with as few samples as possible.
Footnote 6: That is, the number of elements from the clean and shifted sets on which the statistical tests are performed; see the paragraph **Experimental protocol** for more details.
Experimental protocol.In all of the experiments below, we start with a neural network that is pre-trained on the training set of a given dataset. The test set will be referred to as the _clean set_ (CS). We then apply the selected shift (type, intensity, and proportion \(\delta\)) to the clean set and call the resulting set the _shifted set_\(SS\); it represents the target distribution \(\mathbb{P}_{1}\) in the case where \(\mathbb{P}_{1}\neq\mathbb{P}_{0}\).
As explained in Section 4, for each of the classes \(i=1,\ldots,D\) (for all of our datasets, \(D=10\)), we compute the mean activation graph of a chosen dense layer \(f_{\ell}\) of (a random subset of size \(1000\) of all) samples in the training set whose class is \(i\); this yields \(D\) mean activation graphs \(G_{1},\ldots,G_{D}\). We compute for each sample \(x\) in \(CS\) and each sample in \(SS\) the representation MAGDiff\((x)\), where MAGDiff\((x)_{i}=\|G_{\ell}(x)-G_{i}\|_{2}\) for \(i=1,\ldots,D\) and \(G_{\ell}(x)\) is the activation graph of \(x\) for the layer \(f_{\ell}\) (as explained in Section 4). Doing so, we obtain two sets \(\{\texttt{MAGDiff}(x)\mid x\in CS\}\) and \(\{\texttt{MAGDiff}(x^{\prime})\mid x^{\prime}\in SS\}\) of \(D\)-dimensional features; both have the same cardinality as the test set.
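The construction above can be sketched in a few lines of NumPy. This is a hedged illustration, not the authors' code: in particular, it assumes that the activation graph of a dense layer with weight matrix \(W\) and input activations \(a\) is the matrix of edge activations \(G(x)_{ij}=w_{ij}\,a_{j}(x)\), which is one natural reading of Section 4, and all function names are ours.

```python
import numpy as np

def activation_graph(W, a):
    """Edge-activation matrix of a dense layer: entry (i, j) is w_ij * a_j.

    Assumption of this sketch: the 'activation graph' of the layer is the
    weight matrix scaled column-wise by the incoming activations.
    """
    return W * a[np.newaxis, :]

def magdiff_features(W, acts, train_acts, train_labels, n_classes):
    """MAGDiff(x)_i = ||G(x) - G_i||_2, with G_i the mean activation graph of
    training samples of class i. Averaging commutes with the linear encoding,
    so the mean graph equals the graph of the mean activation."""
    mean_graphs = [
        activation_graph(W, train_acts[train_labels == i].mean(axis=0))
        for i in range(n_classes)
    ]
    feats = np.empty((len(acts), n_classes))
    for k, a in enumerate(acts):
        G = activation_graph(W, a)
        for i, G_i in enumerate(mean_graphs):
            feats[k, i] = np.linalg.norm(G - G_i)  # Frobenius norm of the difference
    return feats
```

A sample whose activations coincide with the class-\(i\) mean gets a zero \(i\)-th coordinate, matching the peak near \(0\) visible in Figure 1.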
Now, we estimate the power of the test for a given sample size7 \(m\) and for a type I error of at most \(0.05\); in other words, the probability that the test rejects \(H_{0}\) when \(H_{1}\) is true and when it has access to only \(m\) samples from the respective datasets, under the constraint that it does not falsely reject \(H_{0}\) in more than \(5\%\) of cases. To do so, we randomly sample (with replacement) \(m\) elements \(x^{\prime}_{1},\ldots,x^{\prime}_{m}\) from \(SS\), and consider for each class \(i=1,\ldots,D\) the discrete empirical univariate distribution \(q_{i}\) of the values \(\texttt{MAGDiff}(x^{\prime}_{1})_{i},\ldots,\texttt{MAGDiff}(x^{\prime}_{m})_{i}\). Similarly, by randomly sampling \(m\) elements from \(CS\), we obtain another discrete univariate distribution \(p_{i}\) (see Figure 1 for an illustration). Then, for each \(i=1,\ldots,D\), the KS test is used to compare \(p_{i}\) and \(q_{i}\) and obtain a \(p\)-value \(\lambda_{i}\), and we reject \(H_{0}\) if \(\min(\lambda_{1},\ldots,\lambda_{D})<\alpha/D\), where \(\alpha=0.05\) is the significance level of the univariate KS test (_cf._ Section 3.1). Following standard bootstrapping protocol, we repeat this experiment (independently sampling \(m\) points from \(CS\) and \(SS\), computing \(p\)-values, and possibly rejecting \(H_{0}\)) \(1500\) times; the percentage of rejections of \(H_{0}\) is the estimated _power_ of the statistical test (since \(H_{0}\) is false in this scenario). We use the asymptotic normal distribution of the standard Central Limit Theorem to compute approximate \(95\%\)-confidence intervals on our estimate.
Footnote 7: The same sample size that is mentioned in the **Sample size** paragraph.
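The testing protocol just described can be sketched as follows, assuming SciPy's two-sample KS test is available; the number of bootstrap repetitions would be \(1500\) in the paper's setting, and all function names below are ours.

```python
import numpy as np
from scipy.stats import ks_2samp

def shift_detected(feats_a, feats_b, alpha=0.05):
    """Bonferroni-corrected coordinate-wise KS tests: reject H0 iff the
    smallest of the D p-values falls below alpha / D."""
    D = feats_a.shape[1]
    p_values = [ks_2samp(feats_a[:, i], feats_b[:, i]).pvalue for i in range(D)]
    return min(p_values) < alpha / D

def estimated_power(clean_feats, shifted_feats, m=100, reps=1500, rng=None):
    """Fraction of bootstrap draws of size m in which the shift is detected."""
    rng = np.random.default_rng(rng)
    hits = 0
    for _ in range(reps):
        a = clean_feats[rng.integers(0, len(clean_feats), size=m)]
        b = shifted_feats[rng.integers(0, len(shifted_feats), size=m)]
        hits += shift_detected(a, b)
    return hits / reps
```

Feeding it two independent draws from the clean set instead estimates the type I error, which the Bonferroni correction keeps below the significance level.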
To illustrate that the test is well calibrated, we repeat the same procedure while sampling twice \(m\) elements from \(CS\) (rather than \(m\) elements from \(SS\) and \(m\) elements from \(CS\)), which allows us to estimate the type I error (_i.e._, the percentage of incorrect rejections of \(H_{0}\)) and assert that it remains below the significance level of \(5\%\) (see, _e.g._, Figure 2).
We experimented with a few variants of the MAGDiff representations: we tried reordering the coordinates of each vector \(\texttt{MAGDiff}(x)\in\mathbb{R}^{D}\) in increasing order of the values of the associated confidence vectors. We also tried replacing the matrix norm of the difference to the mean activation graph by its Topological Uncertainty (TU) [18], or variants thereof. Early analysis suggested that these variations did not bring increased performance, despite their increased complexity. Experiments also suggested that MAGDiff representations brought no improvement over CV in the case of label shift. We also tried to combine (_i.e._, concatenate) the CV and MAGDiff representations, but the results were unimpressive, which we attribute to the Bonferroni correction becoming more conservative as the dimension grows. We thus only report results for the standard MAGDiff.
Competitor.We used multiple univariate KS tests applied to CV (the method BBSDs from [22]) as the baseline, denoted by "CV" in the figures and tables, in contrast to our method denoted by "MAGDiff". The similarity in the statistical testing between BBSDs and MAGDiff allows us to easily assess the relevance of the MAGDiff features. We chose BBSDs as our sole competitor, as it has been convincingly shown in [22] that it outperforms on average all other standard methods, including the use of dedicated dimensionality reduction models, such as autoencoders, or of multivariate kernel tests. Many of these methods are also either computationally more costly (to the point where they cannot be practically applied to more than a thousand samples) or harder to implement (as they require an additional neural network) than both BBSDs and MAGDiff.
### Experimental results and influence of parameters.
We now showcase the power of shift detection using our MAGDiff representations in various settings and compare it to the state-of-the-art competitor CV. Since there were a large number of hyper-parameters in our experiments (datasets, shift types, shift intensities, etc.), we started with a standard set of hyper-parameters that yielded representative and informative results according to our observations (MNIST and Gaussian noise, as in [22], \(\delta=0.5\), sample size \(100\), MAGDiff computed with the last layer of the network) and let some of them vary in the successive experiments. We focus on the well-known MNIST dataset to allow for easy comparisons, and refer to Appendix B for additional experimental results that confirm our findings on other datasets.
Sample size.The first experiment consists of estimating the power of the shift detection test as a function of the sample size (a common way of measuring the performance of such a test) using either the MAGDiff or the baseline CV representations. Figure 2 shows the powers of the KS tests using the MAGDiff (red curve) and CV (green curve) representations with respect to the sample size for the MNIST dataset. Here, we showcase the results for Gaussian noise of intensities II, IV and VI with shift proportion \(\delta=0.5\).
Figure 1: Empirical distributions of \(\texttt{MAGDiff}_{1}\) for the \(10,000\) samples of the clean and shifted sets (MNIST, Gaussian noise, \(\delta=0.5\), last dense layer). For the clean set, the distribution of the component MAGDiff1 of \(\texttt{MAGDiff}\) exhibits a peak close to \(0\). This corresponds to those samples whose distance to the mean activation graph of (training) samples belonging to the associated class is very small, _i.e._, these are samples that presumably belong to the same class as well. Note that, for the shifted set, this peak close to \(0\) is substantially diminished, which indicates that the activation graph of samples affected by the shift is no longer as close to the mean activation graph of their true class.
It can clearly be seen that MAGDiff consistently and significantly outperformed the CV representations. While in both cases, the tests achieved a power of \(1.0\) for large sample sizes (\(m\approx 1000\)) and/or high shift intensity (VI), MAGDiff was capable of detecting the shift even with much lower sample sizes. This was particularly striking for the low intensity level II, where the test with CV was completely unable to detect the shift, even with the largest sample size, while MAGDiff was capable of reaching non-trivial power already for a medium sample size of \(100\) and exceptional power for large sample size. Note that the tests were always well-calibrated. That is, the type I error remained below the significance level of \(0.05\), indicated by the horizontal dashed black line in the figures.
To further support our claim that MAGDiff outperforms CV on average in other scenarios, we provide, in Table 1, averaged results over all parameters except the sample size. Though the precise values obtained are not particularly informative (due to the aggregation over very different sets of hyper-parameters), the comparison between the two rows remains relevant. In Appendix B, a more comprehensive experimental report (including, in particular, the CIFAR-10 and Imagenette datasets) further supports our claims.
Shift intensity.The first experiment suggests that MAGDiff representations perform particularly well when the shift is hard to detect. In the second experiment, we further investigate the influence of the shift intensity level and \(\delta\) (which is, in a sense, another measure of shift intensity) on the power of the tests. We chose a fixed sample size of \(100\), which was shown to make for a challenging yet
| Sample size | \(10\) | \(20\) | \(50\) | \(100\) | \(200\) | \(500\) | \(1000\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MAGDiff | \(7.4\) | \(17.1\) | \(27.6\) | \(40.7\) | \(54.7\) | \(71.4\) | \(80.4\) |
| CV | \(4.0\) | \(9.8\) | \(15.6\) | \(24.7\) | \(35.3\) | \(49.7\) | \(59.2\) |

Table 1: Averaged test power (%) of MAGDiff and CV over all hyper-parameters except sample size (dataset, shift type, \(\delta\), shift intensity). A \(95\%\)-confidence interval for the averaged powers has been estimated via bootstrapping and is, in all cases, strictly contained in a \(\pm 0.1\%\) interval.
Figure 3: Power and type I error of the test with MAGDiff (red) and CV (green) features w.r.t. the shift intensity for Gaussian noise on the MNIST dataset with sample size \(100\) and \(\delta=0.25\) (left), \(\delta=0.5\) (middle), \(\delta=1.0\) (right), for the last dense layer. The estimated \(95\%\)-confidence intervals are displayed around the curves.
Figure 2: Power and type I error of the statistical test with MAGDiff (red) and CV (green) representations w.r.t. sample size (on a log-scale) for three different shift intensities (II, IV, VI) and fixed \(\delta=0.5\) for the MNIST dataset, Gaussian noise and last layer of the network, with estimated \(95\%\)-confidence intervals.
doable task. The results in Figure 3 confirm that our representations were much more sensitive to weak shifts than the CV, with differences in power greater than \(80\%\) for some intensities.
Shift type.The previous experiments focused on the case of Gaussian noise; in this experiment, we investigate whether the results hold for other shift types. As detailed in Table 2, the test with MAGDiff representations reacted to the shifts even at the low intensities I, II, and III for all shift types (Gaussian blur being the most difficult case), while the KS test with CV was unable to detect anything. For medium to high intensities III, IV, V and VI, MAGDiff again significantly outperformed the baseline and reached powers close to \(1\) for all shift types. For the Gaussian blur, the shift remained practically undetectable using CV.
MAGDiff with respect to different layers.The NN architecture we used with MNIST and FMNIST has several dense layers before the output. As a variation of our method, we investigate the effect on shift detection of computing our MAGDiff representations with respect to different layers8. More precisely, we consider the last three dense layers, denoted by \(\ell_{-1}\), \(\ell_{-2}\) and \(\ell_{-3}\), ordered from the closest to the network output (\(\ell_{-1}\)) to the third from the end (\(\ell_{-3}\)). The results, averaged over all parameters and noise types, are reported in Table 3.
Footnote 8: Since ResNet18 only has a single dense layer after its convolutional layers, there is no choice to be made in the case of CIFAR-10, SVHN and Imagenette.
In the case of MNIST we only observe a slight increase in power when considering layer \(\ell_{-3}\) further from the output of the NN. In the case of FMNIST, on the other hand, we clearly see a much more pronounced improvement when switching from \(\ell_{-1}\) to \(\ell_{-3}\). This hints at the possibility that features derived from encodings further within the NN can, in some cases, be more pertinent to the task of shift detection than those closer to the output.
## 6 Conclusion
In this article, we introduce novel types of representations, called MAGDiff, which are derived from the activation graphs of a trained NN classifier. We empirically show that using MAGDiff representations for the task of dataset shift detection via coordinate-wise KS tests (with Bonferroni correction) significantly outperforms the baseline given by using the confidence vectors (CV) of the NN established in [22], while remaining equally fast and easy to implement. Thus, our MAGDiff representations are an effective method for encoding data- and task-driven information of the input data that is highly relevant for the critical challenge of dataset shift detection in data science.
**Impact of Shift Type** (power of the test, %)

| Shift | Feat. | I | II | III | IV | V | VI |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GN | MD | \(7.2\pm 1.3\) | \(29.3\pm 2.3\) | \(61.9\pm 2.5\) | \(93.3\pm 1.3\) | \(98.9\pm 0.5\) | \(100.0-0.0\) |
| GN | CV | \(0.0+0.2\) | \(0.1\pm 0.1\) | \(1.5\pm 0.6\) | \(17.6\pm 1.9\) | \(72.3\pm 2.3\) | \(98.9\pm 0.5\) |
| GB | MD | \(3.7\pm 1.0\) | \(4.3\pm 1.0\) | \(27.7\pm 2.3\) | \(63.1\pm 2.4\) | \(85.0\pm 1.8\) | \(92.4\pm 1.3\) |
| GB | CV | \(0.0+0.2\) | \(0.0+0.2\) | \(0.0+0.2\) | \(0.4\pm 0.3\) | \(1.3\pm 0.6\) | \(5.3\pm 1.1\) |
| IS | MD | \(10.3\pm 1.5\) | \(32.7\pm 2.4\) | \(53.5\pm 2.5\) | \(78.5\pm 2.1\) | \(90.6\pm 1.5\) | \(98.9\pm 0.5\) |
| IS | CV | \(0.0+0.2\) | \(0.1\pm 0.2\) | \(2.1\pm 0.7\) | \(11.5\pm 1.6\) | \(37.0\pm 2.4\) | \(86.3\pm 1.7\) |

Table 2: Power of the two methods (our method, denoted as MD, and CV) as a function of the shift intensity for the shift types Gaussian noise (GN), Gaussian blur (GB) and Image shift (IS) on the MNIST dataset with \(\delta=0.5\), sample size \(100\), for the last dense layer. Red indicates that the estimated power is below \(10\%\), blue that it is above \(50\%\). The \(95\%\)-confidence intervals have been estimated as mentioned in Section 5.
**Choice of Layer** (averaged power, %)

| Dataset | CV | \(\ell_{-1}\) | \(\ell_{-2}\) | \(\ell_{-3}\) |
| --- | --- | --- | --- | --- |
| MNIST | \(25.1\) | \(51.9\) | \(53.0\) | \(56.4\) |
| FMNIST | \(46.2\) | \(44.9\) | \(47.6\) | \(53.7\) |

Table 3: Averaged performance of the various layers for MAGDiff over all other parameters (for MNIST and FMNIST), compared to BBSD with CV. A \(95\%\)-confidence interval for the averaged powers has been estimated and is in all cases strictly contained in a \(\pm 0.1\%\) interval.
Our findings open many avenues for future investigations. We focused on classification of image data in this work, but our method is a general one and can be applied to other settings. Moreover, adapting our method to regression tasks as well as to settings where shifts occur gradually is feasible and a starting point for future work. Finally, exploring variants of the MAGDiff representations--considering several layers of the network at once, extending it to other types of layers, extracting finer topological information from the activation graphs, weighting the edges of the graph by backpropagating their contribution to the output, etc.--could also result in increased performance.
Acknowledgments.The authors are grateful to the OPAL infrastructure from Universite Cote d'Azur for providing resources and support. FC thanks the ANR TopAI chair (ANR-19-CHIA-0001) for financial support and the DATAIA Institute.
2304.09100 | Real Time Bearing Fault Diagnosis Based on Convolutional Neural Network and STM32 Microcontroller | Wenhao Liao | 2023-04-14T12:04:20Z | http://arxiv.org/abs/2304.09100v1

# Real Time Bearing Fault Diagnosis Based on Convolutional Neural Network and STM32 Microcontroller
###### Abstract
With the rapid development of big data and edge computing, many researchers focus on improving the accuracy of bearing fault classification using deep learning models and on implementing deep learning classification models on resource-limited platforms such as the STM32. To this end, this paper realizes the identification of bearing fault vibration signals based on a convolutional neural network; the fault identification accuracy of the optimised model reaches 98.9%. In addition, this paper successfully deploys the convolutional neural network model on an STM32H743VI microcontroller, where each diagnosis takes 19 ms. Finally, a complete real-time communication framework between the host computer and the STM32 is designed, which completes the data transmission through the serial port and displays the diagnosis results on a TFT-LCD screen.
Intelligent fault diagnosis, Convolutional neural network, STM32 microcontroller, edge computing
## I Introduction
Modern mechanical equipment has become increasingly complex and systematized. In a large mechanical system, the failure of any small part may cause the collapse of the whole system, resulting in serious economic losses and even casualties, so monitoring the health of machinery during operation is of great significance for cost control and production safety. Bearings are an extremely important class of parts in large equipment, known as the joints of modern machinery; their main function is to support the mechanical rotating body and reduce the coefficient of friction during motion.
Traditional fault diagnosis methods are based on experience: the collected data is processed and displayed in various ways, and faults are identified through the manual experience of engineers, which is becoming increasingly difficult in the face of ever more complex and sophisticated mechanical equipment. Deep learning is a data-driven approach that only requires accurate raw data and a suitable network structure to achieve end-to-end automatic fault diagnosis. With the advent of the big-data era, low-cost and accurate sensors can now collect and monitor data on bearing operation, while data-driven deep learning algorithms are being applied to fault detection thanks to their powerful feature extraction capabilities; their results are more accurate, and their cost per unit lower, than those of traditional methods.
Based on the above discussion, this paper proposes real-time bearing fault diagnosis based on a convolutional neural network and an STM32 microcontroller. This study makes the following main contributions:
1. In this study, a convolutional neural network is used to achieve a state-of-the-art accuracy of up to 98.9% for identifying 10 different types of bearing failure in the CWRU dataset.
2. The lightweight convolutional neural network model proposed in this study has a 3.8% improvement in accuracy and a 36.66% reduction in the total number of network parameters compared to conventional convolutional neural network models, making it more suitable for running in low-power devices such as the STM32.
3. This study successfully identifies bearing faults by embedding convolutional neural networks into the STM32 using the CubeAI tool.
4. This study implements a complete convolutional neural network and STM32 based real-time bearing fault detection process framework, and in a subsequent validation session, demonstrates that it can perfectly achieve the bearing detection objectives.
## II Related Work
Lei et al. [1], by collecting the number of published papers, divided the development of intelligent fault diagnosis into roughly three phases. The first phase, starting around 1980, covers traditional machine-learning-based diagnosis methods, including support vector machines (SVM), the K-nearest neighbour algorithm (KNN), probabilistic graphical models (PGM), and random forests (RF). The second phase, starting around 2010, covers deep-learning-based diagnosis methods, including deep neural networks (DNN), deep belief networks (DBN), convolutional neural networks (CNN), and recurrent neural networks (RNN). The third phase covers the transfer learning methods that have emerged since 2015, including generative adversarial networks (GAN) and joint adaptation networks (JAN). With the rise of reinforcement learning and transfer learning, the future development of intelligent fault diagnosis shows several trends. One is denoising to improve model recognition accuracy, since most practical engineering applications operate in high-noise environments. Another is research on breakthrough theoretical algorithms, including new automatic noise-reduction network structures and signal feature detection structures, such as the deep residual shrinkage network (with both DRSN-CS and DRSN-CW structures) proposed by Zhao Minghang et al. [2] for high-accuracy fault diagnosis in high-noise environments.
## III Model Design
Our aim is to design a complete process framework for bearing fault detection, implement a highly accurate convolutional neural network model deployed on the STM32, and use serial communication between the host computer and the STM32 to transmit bearing vibration data for real-time bearing fault detection functions.
the overall model design is illustrated in Fig. 2, which is divided into five modules. Among them, the data processing module mainly extracts the bearing fault data from the CWRU dataset in mat format and then performs normalised pre-processing; the training module obtains the trained model by constructing a convolutional neural network model and using the processed data for training; the serial communication module uses the designed host computer to transmit the bearing fault data to the STM32 via the serial port in real time and uses a GUI to display the process of data transmission; the result display module uses the results obtained by the prediction module to display the prediction results and the prediction consumption time on a TFT-LCD screen.
### _Data Processing Module_
The CWRU Bearing Dataset [3] is a dataset of failed bearings published by the Case Western Reserve University Data Centre in the USA. Its purpose is to test and verify engine performance and in recent years, due to the growth of fault diagnosis, it has been used mainly as benchmark data for fault signal diagnosis. The experimental platform for the measurement data and the rolling bearing structure used in the experiments are shown in Fig. 1.
For the entire CWRU faulty bearing dataset, the drive-end data at the 12 kHz sampling rate is the most complete and has the least obvious human error. Among the data at different speeds, 1797 rpm is closest to 1800 rpm, at which point 400 sampling points at the 12 kHz sampling rate cover close to a full revolution, so a complete fault signature can be observed in the signal [5]. Therefore, the subsequent training and validation of the network use the drive-end data at 1797 rpm and 12 kHz, yielding the 10 fault classifications shown in Table I.
The required bearing vibration signal data is then extracted as described above for subsequent analysis and processing. Fig. reffig:figure3 illustrates the transformation of a section of signal data into a time series. Further pre-processing of the signal data, such as data denoising, data sampling and data normalisation, is required according to the needs of the experiment.
For the denoising problem, manual denoising is not required because the CWRU defects were produced by manual electrical discharge machining, so the noise level is low [5]. In this paper, random sampling is used to reduce, as much as possible, sampling bias in the training data. To optimise the data distribution across the different fault cases and to improve the accuracy and training speed of the network model,
| Number | Parameter name | Annotation |
| --- | --- | --- |
| 1 | 00-Normal | Normal without fault |
| 2 | 07-Ball | 0.007 inch ball fault |
| 3 | 07-InnerRace | 0.007 inch inner race fault |
| 4 | 07-OuterRace6 | 0.007 inch 6 o'clock race fault |
| 5 | 14-Ball | 0.014 inch ball fault |
| 6 | 14-InnerRace | 0.014 inch inner race fault |
| 7 | 14-OuterRace6 | 0.014 inch 6 o'clock race fault |
| 8 | 21-Ball | 0.021 inch ball fault |
| 9 | 21-InnerRace | 0.021 inch inner race fault |
| 10 | 21-OuterRace6 | 0.021 inch 6 o'clock race fault |

TABLE I: 10 different fault classifications obtained from the CWRU dataset.
Fig. 1: Experimental platform of CWRU
the data features are subjected to dimensionless processing, i.e. normalisation; the normalisation method used here is the min-max method, whose formula is as follows:
\[y=\frac{x-min\left(x\right)}{max\left(x\right)-min\left(x\right)} \tag{1}\]
Bearing fault signals are one-dimensional time-series data that must be transformed into a two-dimensional matrix for training with a CNN.

The transformation method used here is the direct conversion method: a sliding window directly intercepts segments of a fixed length from the acquired 1D data, and the intercepted data is then stacked to obtain a 2D matrix. The simplicity of this method makes it suitable for real-time fault detection systems on low-power devices, as shown in Fig. 4.
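A minimal NumPy sketch of this direct conversion, together with the min-max normalisation of Eq. (1), might look as follows. The window side of 20 matches the 20 * 20 buffer used later on the STM32, while the stride is a hypothetical choice, since the paper does not specify the window overlap:

```python
import numpy as np

def min_max_normalise(x):
    """Eq. (1): scale a signal into [0, 1] (assumes a non-constant signal)."""
    return (x - x.min()) / (x.max() - x.min())

def signal_to_images(signal, side=20, stride=400):
    """Direct conversion: slide a window of side*side points over the 1-D
    signal and reshape each window row-by-row into a side x side matrix."""
    n = side * side
    windows = [signal[s:s + n] for s in range(0, len(signal) - n + 1, stride)]
    return np.stack(windows).reshape(-1, side, side)
```

Each resulting side x side matrix is one training sample for the CNN.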
### _Training Module_
This module trains the CNN model with the 2D matrix data obtained in the data processing module. We selected Tanh as the network activation function and added a ReduceLR adaptive learning-rate adjustment mechanism. After adjusting the parameters and network structure several times and retraining, we obtained a satisfactory network model with a final accuracy of 98.9%, whose network structure and parameters are shown in Table II.
We can then apply the trained model to the bearing fault detection task and use it in a subsequent prediction module on the STM32.
Fig. 4: The conversion of a one-dimensional signal to an image. Adapted from [6].
Fig. 3: Part of the cwru dataset data under time series.
Fig. 2: A overall architecture of the proposed model.
### _Prediction Module_
In this module we use the CubeAI tool to deploy the trained model from the training module to the STM32. CubeAI is ST's official AI extension package for CubeMX; it supports 8-bit quantization of Keras and TensorFlow Lite networks by automatically converting pre-trained neural networks and integrating the resulting optimised libraries into the user's project [7].
When importing the model, there is an option to compress and quantize the weights and parameters, but this would reduce the accuracy of the model. The STM32H743VI microcontroller chosen for this work has sufficient resources, so the network model was not compressed or quantized. After importing the model, the computational complexity of the network is 1,238,380 MACC; MACC (multiply-accumulate operations) describes the complexity of a neural network as the number of multiply-then-add operations performed during inference. The specific resource consumption is shown in Table III.
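For reference, MACC counts can be estimated per layer with the usual closed forms: one multiply-accumulate per kernel element, input channel, output channel, and output position for a convolution, and one per weight for a dense layer. The layer shapes in the comments below are hypothetical examples, not the actual layers of Table II:

```python
def conv2d_macc(c_in, h_out, w_out, c_out, k_h, k_w):
    """MACC count of a 2-D convolution: k_h * k_w multiplies per input
    channel, accumulated for every output channel and output position."""
    return k_h * k_w * c_in * c_out * h_out * w_out

def dense_macc(n_in, n_out):
    """MACC count of a fully-connected layer: one per weight."""
    return n_in * n_out

# e.g. a hypothetical 3x3 conv, 1 input channel, 8 filters, 18x18 output map:
# conv2d_macc(1, 18, 18, 8, 3, 3) -> 23328
```

Summing such per-layer counts over a network reproduces the kind of total that CubeAI reports.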
Although the initialisation and definition of the subsequent input and intermediate variables require system resources, these are not significant compared to the 2 MB Flash and 1 MB RAM of the STM32H743VI microcontroller. The specific RAM consumption per layer is shown in Fig. 5. As Fig. 5 shows, the activation functions in the network model consume most of the RAM, and the RAM consumption of all layers together exceeds 12 KB. However, since a layer's buffers can be released from RAM once that layer has run and before the next layer starts, the peak RAM requirement is simply the maximum consumption of any single layer.
### _Serial Communication Module_
The main purpose of this module is to implement the transfer of bearing fault data via serial communication to the STM32 microcontroller to facilitate subsequent fault detection tasks.
Here we developed host-computer software to facilitate serial communication between the PC and the microcontroller. As serial communication transmits data as ASCII characters rather than floating-point values, the transfer process requires conversion to, and recovery of, floating-point numbers. To facilitate visualisation of the data, we also designed a real-time data display interface, as shown in Fig. 6: on the left is the one-dimensional time-series data, and on the right is the two-dimensional data obtained from it using the direct conversion method.
A 20 × 20 buffer is set up on the STM32 microcontroller to hold the data. The update process after receiving new data from the serial port is shown in Fig. 7; for demonstration purposes, only a 4 × 4 matrix is used.
The data in the upper left corner of the matrix is the input port and the data in the lower right corner is the output port. After receiving the new data, the data in the matrix is first moved from left to right, with the rightmost data in each row moving to the leftmost data in the next row and the rightmost data in the last row being discarded.
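In row-major order, the update described above is a one-position shift of the flattened buffer, with the new sample entering at the top-left input port. A minimal NumPy sketch (function and variable names are illustrative; a C-contiguous buffer is assumed):

```python
import numpy as np

def push_sample(buf, new_value):
    """Shift the 2D buffer one step: every value moves one position to
    the right, the rightmost value of each row wraps to the leftmost
    position of the next row, the bottom-right value is discarded, and
    the new sample enters at the top-left (the input port)."""
    flat = buf.ravel()            # row-major view (buffer is C-contiguous)
    flat[1:] = flat[:-1].copy()   # shift every element one slot forward
    flat[0] = new_value
    return buf
```

On the microcontroller the same effect would be achieved with a memmove-style copy over the flat array.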
The prediction module then performs a new round of prediction on the data in the buffer. Once the network model completes inference, the prediction result is passed to the display module for display and then returned to the host computer via the serial communication module, indicating that the prediction is complete and the next data transfer can begin.
### _Result Display module_
This module displays, on a TFT-LCD screen, the predicted failure type, failure probability and prediction time obtained by the prediction module. An example display is shown in Fig. 8.
The contents of the figure indicate that the diagnosed fault is a 0.021 inch fault on the bearing ball, that the probability of this fault is 0.999856, and that this diagnosis took 18 ticks.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Layer name & Filter size & Filter number & Stride \\ \hline Conv1 & (10, 10) & 4 & 1 \\ Conv2 & (5, 5) & 8 & 1 \\ Maxpool1 & (4, 4) & 8 & 2 \\ Conv3 & (3, 3) & 16 & 1 \\ Conv4 & (3, 3) & 16 & 1 \\ Maxpool2 & (2, 2) & 16 & 2 \\ Conv5 & (3, 3) & 32 & 1 \\ Conv6 & (3, 3) & 64 & 1 \\ Maxpool3 & (1, 1) & 64 & 2 \\ Full connection & 32 & - & - \\ Softmax & 10 & - & - \\ \hline \hline \end{tabular}
\end{table} TABLE II: Our CNN model parameter details.
\begin{table}
\begin{tabular}{c c} \hline \hline Index & Value \\ \hline MACC & 1238380 \\ FLASH & 142.15KB \\ RAM & 11.32KB \\ \hline \hline \end{tabular}
\end{table} TABLE III: Resource consumption of the model deployed on the STM32.
Fig. 7: The update process of buffer in STM32.
## IV Experimental Test
The chip used in this experiment is the STM32H743VI, a high-performance ARM Cortex-M7 MCU with a DSP and double-precision FPU, 2 MB Flash, 1 MB RAM, and a 480 MHz CPU. CubeMX is used to configure the chip, CubeIDE to write the relevant control programmes, and the CubeAI plug-in to deploy the convolutional neural network model on the STM32.
We conducted comparative experiments on the selection of CNN models, where the benchmark model is the CNN model in the official Keras MNIST handwriting recognition example; the Tanh model is based on the benchmark model by changing the ReLU activation function to the Tanh activation function; and the ReduceLR model is based on the benchmark model by adding an adaptive learning rate adjustment mechanism.
The Tanh activation function maintains a non-linear, monotonic relationship between output and input, which is consistent with the gradient computation of BP (back-propagation) networks; it is fault-tolerant and bounded, but computing its derivative takes longer than for the ReLU function. The adaptive learning rate adjustment mechanism starts training with a larger learning rate, so that the network model can be fitted faster, and reduces the
Fig. 5: The specific RAM consumption per layer.
Fig. 6: Real-time data display interface of host computer.
learning rate as the network metrics improve, to avoid network oscillation and delayed fitting.
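The mechanism can be sketched in a few lines of plain Python (in Keras it is provided by the ReduceLROnPlateau callback; the default values below are illustrative, not the settings used in our experiments):

```python
class ReduceLROnPlateau:
    """Minimal sketch of the ReduceLR mechanism: keep the learning rate
    while the monitored validation loss improves, and multiply it by
    `factor` once the loss has stalled for `patience` epochs.
    The class name mirrors the Keras callback; this standalone version
    is for illustration only."""

    def __init__(self, lr=1e-2, factor=0.5, patience=3, min_lr=1e-5):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        if val_loss < self.best:          # metric improved: reset counter
            self.best = val_loss
            self.wait = 0
        else:                             # metric stalled
            self.wait += 1
            if self.wait >= self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr
```

Calling `step(val_loss)` once per epoch yields the learning rate to use for the next epoch.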
The accuracy of the three compared models and of the optimised model proposed in this paper is shown in Fig. 9. From panel (a) we can see that the benchmark model oscillates strongly during training, while the model using Tanh as the activation function is the most stable and accurate: its overall trend is a gradual fit, with the oscillations steadily decreasing.
Therefore, our proposed optimised network uses Tanh as the activation function and adds the ReduceLR adaptive learning rate adjustment mechanism to improve the fitting trend of the network. As can be seen in panel (b), the validation accuracy of our model during training follows the training accuracy and loss well, which means that the network model is neither overfitting nor underfitting, and its accuracy is higher than that of the three previous comparison models.
Some performance comparisons between our optimised model and the three previous comparison models are shown in Table IV. It can be seen that our optimised model has the lowest total number of parameters and the best performance among them in terms of training time, best accuracy and minimum loss value, indicating that our model can save the most resources to achieve optimal results.
## V Conclusion
In this paper, we experimentally obtained a lightweight CNN model that performs bearing fault detection with high accuracy, and successfully deployed it on a terminal device, the STM32. We also implemented and validated a real-time bearing fault detection framework based on a convolutional neural network and the STM32, covering data pre-processing and transmission, model training and deployment, real-time data monitoring and fault detection, and presentation of the detection results.
# From paintbrush to pixel: A review of deep neural networks in AI-generated art

Anne-Sofie Maerten, Derya Soydaner

arXiv:2302.10913 (2023), http://arxiv.org/abs/2302.10913v2
###### Abstract
This paper delves into the fascinating field of AI-generated art and explores the various deep neural network architectures and models that have been utilized to create it. From the classic convolutional networks to the cutting-edge diffusion models, we examine the key players in the field. We explain the general structures and working principles of these neural networks. Then, we showcase examples of milestones, starting with the dreamy landscapes of DeepDream and moving on to the most recent developments, including Stable Diffusion and DALL-E 2, which produce mesmerizing images. A detailed comparison of these models is provided, highlighting their strengths and limitations. Thus, we examine the remarkable progress that deep neural networks have made so far in a short period of time. With a unique blend of technical explanations and insights into the current state of AI-generated art, this paper exemplifies how art and computer science interact.
Keywords: Neural networks, Transformers, Diffusion models, Generative adversarial networks, Deep learning, Image processing
## 1 Introduction
_"AI-generated art is a new and emerging field, and it will likely take some time for it to fully mature and for its place in the art world to be fully understood,"_ replied ChatGPT, the chatbot created by OpenAI, when we asked about the
current state of AI-generated art. It continued by stating _"As with any new field, it will take time for it to be fully understood and for its place in the art world to be established."_
Whether they are recognized as art or not, AI-generated images are widespread today. Regardless of discussions about how creative or artistic they are, their existence in our lives cannot be denied any longer. In 2018, the first AI-generated portrait _"Edmond de Belamy"_ (Figure 1) sold for $432,500 at Christie's art auction. It was created using a generative adversarial network (GAN) [26], and a part of _"La Famille de Belamy"_ series by Obvious Art. The fact that it is signed with the loss function used in GAN makes this case also quite amusing. In 2022, Jason M. Allen's AI-generated artwork, _"Theatre D'opera Spatial"_ (Figure 1), won the art prize in the digital category at the Colorado State Fair's annual art competition. This piece was created with the AI-based tool Midjourney which can generate images by taking text prompts as input.
All of this progress in AI-generated art is made possible by _deep learning_ which is a subfield of machine learning. This subfield includes _deep neural networks_ which have led to significant breakthroughs in various fields such as computer vision in the last decade. In this paper, we focus on deep neural networks used in image processing and recent developments that have been used to produce AI-generated images. In the literature, there are several studies that address AI-generated art from different perspectives. For example, Cetinic _et al._ (2022) [7] touches upon the creativity of AI technologies as well as authorship, copyright and ethical issues. Ragot _et al._ (2020) [54] conducted an experiment where they asked participants to rate the difference between paintings created by humans and AI in terms of liking, perceived beauty, novelty, and meaning. Other important issues today are credit allocation and responsibility for AI-generated art [20]. Recently, a review of generative AI models has been released which touches upon various applications such as texts, images, videos and audios [5].
Figure 1: _(Left)_ “Edmond de Belamy” - The first AI-generated portrait sold at Christie’s art auction in 2018. _(Right)_ “Theatre D’opera Spatial” - The winner of the digital art category at the Colorado State Fair’s annual art competition in 2022.
In this paper, we focus on the main neural networks which have been used to generate realistic images. We explain the building blocks of the related neural networks to provide readers with a better understanding of these models. We describe the general working principles of these neural networks, and introduce the recent trends in AI-generated art such as DALL-E 2 [56]. We examine the rapid development of deep neural networks in AI-generated art, and emphasize the milestones in this progress. This review addresses the topic from a technical perspective and provides comparisons of current state-of-the-art models. However, even for a non-technical audience (e.g., from more traditional areas in the fine arts, aesthetics, and cultural studies), this review could serve the purpose of providing an overview of the different techniques and tools that are currently available.
The rest of the paper is organized as follows. In Section 2, we describe the important neural networks used for the models in AI-generated art. We introduce the progress in generative modeling and recent trends in Section 3. We discuss the current models in Section 4 and we conclude in Section 5.
## 2 Preliminaries
During training, a neural network adjusts its parameters called _weights_. When the training is completed, the weights are optimized for the given task, i.e., the neural network _learns_. A typical neural network is a multilayer perceptron (MLP) which is useful for classification and regression tasks. However, there are many deep neural networks which are particularly effective in image processing. One of them is convolutional neural networks (CNNs). In this section, we start with CNNs which require data labels during training and learn in a supervised learning setting. Then, we explain the autoencoders which can learn without data labels, that is, learn by unsupervised learning. We continue with GANs and the Transformer neural network. Lastly, we explain the diffusion models, which are the latest advancements in deep learning.
### Convolutional neural networks
Convolutional neural networks [37; 38], usually referred to as CNNs, or ConvNets, are deep neural networks utilized mainly for image processing. In a deep CNN, increasingly more abstract features are extracted from the image through a series of hidden layers. As such, a CNN follows a similar hierarchy as the human visual cortex, in that earlier areas extract simple visual features and higher areas extract combinations of features and high level features. In this manner, the complex mapping of the input pixels is divided into a series of nested mappings, each of which is represented by a different layer [27].
There are various CNN architectures in the literature such as LeNet [37; 38], which is capable of recognizing handwritten digits. However, more complex tasks like object recognition require deeper CNNs such as AlexNet [36], VGG [63], ResNet [28], DenseNet [30], EfficientNet [68], Inception and GoogLeNet [66]. A typical CNN includes three kinds of layers: convolutional layers, pooling
layers, and fully-connected layers. The general structure of a typical CNN for classification is illustrated in Figure 2. The input image \(\mathcal{X}\) is presented at the input layer, which is followed by the first convolutional layer. In a convolutional layer, the weights are kept within _kernels_. During learning, a mathematical operation called _convolution_ is performed between the input and the kernel. Basically, convolution is performed by multiplying the elements of the input with each element of the kernel and summing the results of this multiplication. This input can be an input image or the output of another, preceding convolutional layer. The units in a convolutional layer are arranged into planes, each of which is referred to as a _feature map_, defined as 3D tensors. Each unit in a feature map takes input from a small area of the image, and all units detect the same pattern but at different locations in the input image. In a feature map, all units share the same weights, thus a convolution of the image pixel intensities with a _kernel_ is performed [3]. In general, there are multiple feature maps in a convolutional layer, each with its own weights, to detect multiple features [3]. Accordingly, a single convolutional layer includes lots of kernels, each operating at the same time. Moreover, as a deep neural network, a CNN typically includes many convolutional layers which implies millions of parameters.
Each output of convolution operation is usually run through a nonlinear activation function, such as the rectified linear unit (ReLU) [23]. Then, a convolutional layer or a stack of convolutional layers is followed by a _pooling layer_ which reduces the size of feature maps by calculating a summary statistic of the nearby outputs such as the maximum value or the average [27]. In the end, a series of convolutional and pooling layers is followed by _fully-connected (dense)_ layers. The activation function that the output layer applies depends on the task. In most cases, a sigmoid function is preferred for binary classification, softmax nonlinearity for multi-classification, and linear activation for regression tasks. Accordingly, a CNN minimizes the difference between the desired output values \(y\) and the predicted values \(\hat{y}\) in the cross-entropy loss functions given in Eq. 1 and Eq. 2 for binary and multi-class classification, respectively. In the case of a regression task, a CNN may minimize mean squared error given in Eq. 3. In the equations below, \(W\) represents the weights belonging to the hidden layers, and \(V\) represents the output layer weights. Although the
Figure 2: An example CNN structure with two convolutional, two pooling, and three fully-connected layers for classification.
aforementioned are the most frequently used activation and loss functions, in the literature, there are other alternatives available such as Leaky ReLU [41] as an activation function, or mean absolute error as a loss function.
\[L(W,v\mid\mathcal{X})=-\sum_{t}y^{t}log\hat{y}^{t}+(1-y^{t})log(1-\hat{y}^{t}) \tag{1}\]
\[L(W,V\mid\mathcal{X})=-\sum_{t}\sum_{i}y^{t}_{i}log\hat{y}^{t}_{i} \tag{2}\]
\[L(W,V\mid\mathcal{X})=\frac{1}{2}\sum_{t}\sum_{i}(y^{t}_{i}-\hat{y}^{t}_{i})^{2} \tag{3}\]
A fully-connected layer and a convolutional layer significantly differ in that fully-connected layers learn global patterns in their input feature space, whereas convolutional layers learn local patterns [10]. In a CNN, there are less weights than there would be if the network were fully-connected because of the local receptive fields [3]. CNNs are now essential neural networks in deep learning and have yielded major advances for a variety of image processing tasks.
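Eqs. 1–3 translate directly into NumPy, as sketched below (targets \(y\) are one-hot in the multi-class case; function and variable names are illustrative):

```python
import numpy as np

def binary_cross_entropy(y, y_hat):
    """Eq. 1: binary cross-entropy summed over samples."""
    return -np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def categorical_cross_entropy(y, y_hat):
    """Eq. 2: multi-class cross-entropy with one-hot targets y."""
    return -np.sum(y * np.log(y_hat))

def mean_squared_error(y, y_hat):
    """Eq. 3: squared-error loss with the conventional 1/2 factor."""
    return 0.5 * np.sum((y - y_hat) ** 2)
```

Training a CNN amounts to minimising one of these quantities over the weights by gradient descent.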
### Autoencoders
The autoencoder, originally named the _autoassociator_ by Cottrell _et al._ (1987) [12], is a typical example of unsupervised learning. This neural network learns to reconstruct the input data as the output by extracting a (usually lower-dimensional) representation of the data. The autoencoder has been successfully implemented for unsupervised or semi-supervised tasks, or as a preprocessing stage for supervised tasks. The general structure of an autoencoder consists of an _encoder_ and a _decoder_ as shown in Figure 3. In the simplest case, both encoder and decoder are composed of a single fully-connected layer for each, or a series of fully-connected layers. During training, the encoder, in one or more layers, usually transforms the input to a lower-dimensional representation. Then, the decoder that follows, in one or more layers, takes this representation as input and reconstructs the original input back as its output [64]. The aim is to obtain a meaningful representation of data, which makes the autoencoder also an example of _representation learning_.
In the general framework, the encoder takes the input \(x^{t}\), and produces a compressed or _hidden/latent_ representation of input \(h^{t}\). Then, the decoder takes \(h^{t}\) as input and reconstructs the original input as the output \(\hat{x}^{t}\). When the \(h^{t}\) dimension is less than the \(x^{t}\) dimension, the autoencoder is called _undercomplete_. The undercomplete autoencoder can capture the most salient features of the data. On the other hand, when the \(h^{t}\) dimension is equal or greater than the \(x^{t}\) dimension, it is called _overcomplete_. The overcomplete autoencoder just copies the input to the output as it cannot learn anything meaningful about the data distribution. Regularized autoencoders, such as the _sparse autoencoders_, alleviate this issue by applying a regularization term in the loss function [27].
The tasks related to image processing may require both encoder and decoder be composed of convolutional layers instead of fully-connected layers. In this case, it is called a _convolutional autoencoder_. The first layers of the encoder are convolution/pooling layers and correspondingly the last layers of the decoder are deconvolution/unpooling layers. Whether the layers are fully-connected or convolutional, total reconstruction on a training set \(\mathcal{X}=\{x^{t}\}_{t}\) is used as the loss function. The encoder and decoder weights, \(\theta_{E}\) and \(\theta_{D}\) respectively, are learned together to minimize this error:
\[E(\theta_{E},\theta_{D}|\mathcal{X}) = \sum_{t}\|x^{t}-\hat{x}^{t}\|^{2} \tag{4}\] \[= \sum_{t}\|x^{t}-f_{D}(f_{E}(x^{t}|\theta_{E})|\theta_{D})\|^{2}\]
When an autoencoder in which the encoder and decoder are both perceptrons learns a lower-dimensional compressed representation of data, the autoencoder performs similar to principal component analysis (PCA): The encoder weights span the same space spanned by the \(k\) highest eigenvectors of the input covariance matrix [4]. When the encoder and decoder are multi-layer perceptrons, then the autoencoder learns to do nonlinear dimensionality reduction in the encoder [64].
In comparison to the different autoencoder types, we should highlight the _variational autoencoder (VAE)_[35; 57] which turns an autoencoder into a _generative_ model. Similar to the autoencoder architecture mentioned above, a VAE is composed of an encoder and decoder. However, the encoder does not produce a lower-dimensional latent space. Instead, the encoder produces parameters of a predefined distribution in the latent space for input, i.e., mean and variance. Thus, the input data is encoded as a probability distribution. Then, new samples can be generated by sampling from the latent space that the autoencoder learned during training.
Different from the Eq. 4, the loss function of VAE is composed of two main terms. The first one is the reconstruction loss which is the same loss in Eq. 4. The second term is the Kullback-Leibler divergence between the latent space
Figure 3: The general structure of the _(Left)_ Autoencoder, _(Right)_ Variational autoencoder. \(x^{t}\) refers to an input sample, \(h^{t}\) to the latent representation and \(\hat{x}^{t}\) to the reconstructed input. The parameters of encoder (\(\theta_{E}\)) and decoder (\(\theta_{D}\)) are updated during training.
distribution and standard Gaussian distribution. The loss function is the sum of these two terms. Once the VAE is trained, new samples can be generated by using the learned latent space. This property makes the VAE a generative model.
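A minimal sketch of the VAE objective, using the closed-form KL divergence between a diagonal Gaussian \(\mathcal{N}(\mu,\sigma^2)\) and the standard Gaussian; the encoder is assumed to output \(\mu\) and \(\log\sigma^2\), and names are illustrative:

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var):
    """VAE objective (sketch): reconstruction error plus the closed-form
    KL divergence between N(mu, diag(sigma^2)) and N(0, I)."""
    recon = np.sum((x - x_hat) ** 2)
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    return recon + kl
```

The KL term pulls the encoded distributions toward the standard Gaussian, which is what makes sampling from the latent space meaningful after training.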
### Generative adversarial networks
When it comes to the generative models, the _generative adversarial network_ (GAN) [26] is a milestone in deep learning literature. The idea is based on game theory and a minimax game with two players. Instead of human players, in this case, the players are neural networks. One of these neural networks is called a _generator (G)_ while the other one is called a _discriminator (D)_. These two networks are trained end-to-end with backpropagation in an _adversarial_ setting, i.e., the generator and discriminator compete with each other. The generator captures the data distribution while the discriminator estimates the probability that its input comes from the real data or is a fake sample which is created by the generator. The competition between these two neural networks in the game makes them improve their results until the fake data generated by G is indistinguishable from the original data for D. As a result, G learns to model the data distribution during training and can generate _novel_ samples after training is completed.
In the original GAN framework, both the generator and discriminator are MLPs. The generator takes a random noise vector as input and generates samples. The discriminator takes the generated sample and a real sample from the data as inputs, and tries to decide which one is real. Then, based on the feedback coming from the discriminator, the generator updates its weights. As shown in the loss function in Eq. 5, G tries maximizing the probability of D making a mistake. The game ends at a saddle point where D is equal to 1/2 everywhere. The generative networks that use different loss functions, such as the Wasserstein GAN (WGAN) [1], can be used as alternatives to traditional
Figure 4: The general structure of a generative adversarial network (GAN). The generator upscales its input (a noise vector) through a series of layers into an image. The discriminator performs a binary classification task, i.e., deciding whether the input image it receives is real or a generated sample.
GAN training.
\[\min_{G}\max_{D}V(D,G)=\mathbb{E}_{x\sim p_{\text{data}}(x)}[\log D(x)]+\mathbb{ E}_{z\sim p_{z}(z)}[\log(1-D(G(z)))] \tag{5}\]
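As a sketch, Eq. 5 can be evaluated on minibatches of discriminator outputs; at the saddle point where \(D=1/2\) everywhere, the value is \(-\log 4\). Names and shapes below are illustrative.

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Eq. 5 on minibatches: E[log D(x)] + E[log(1 - D(G(z)))].
    d_real, d_fake are discriminator outputs in (0, 1) on real samples
    and on generated samples, respectively."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
```

The discriminator ascends this value while the generator descends it, which is the minimax game described above.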
In addition to its huge impact on deep learning, potential directions have been discussed on how GANs could advance cognitive science [25]. In the literature, there are various different GAN architectures [13]. For example, when the G and D are composed of convolutional layers, this architecture is called a _deep convolutional GAN (DCGAN)_[51]. The most well-known DCGAN is _StyleGAN_[32] which can vary coarse-to-fine visual features separately. Whereas an ordinary GAN receives a noise vector as input, StyleGAN inputs the noise vector to a mapping network to create an intermediate latent space. The intermediate latents are then fed through the generator through adaptive instance normalization [32]. The mapping network ensures that features are disentangled in latent space, allowing StyleGAN to manipulate specific features in images (e.g., remove someone's glasses or make someone look older). Pix2pix is a GAN that takes an image as input (rather than noise) and translates it to a different modality (e.g., BW image to RGB image) [31]. Whereas the training of pix2pix requires image pairs (the same image in RGB and BW), CycleGAN [75] alleviates this need by 'cycling' between two GANs (see Section 3.2). An _adversarial autoencoder_ combines the adversarial setting of GANs with the autoencoder architecture [42]. Lastly, a _self-attention adversarial neural network (SAGAN)_[73] defines a GAN framework with an attention mechanism, which is explained in Section 2.4.
### Transformers
Convolution has become the central component for image processing applications as neural networks progressed throughout time. However, in addition to the computational burden that convolutional layers bring, the convolution operation has been criticized for being far from human vision. Because our human visual system has evolved to be sparse and efficient, we do not process our entire visual field with the same resources. Rather, our eyes perform a fixation point strategy by means of a _visual attention system_ which plays an essential role in human cognition [47; 48]. In this manner, we as humans pay selective attention to specific parts of our visual input.
Inspired by the attention system in human vision, _computational_ attention mechanisms have been developed and integrated into neural networks. The main goal is reducing the computational burden caused by the convolution operation, as well as improving the performance. These attention-based neural networks have been applied to a variety of applications such as image recognition or object tracking; see [65] for a review. In particular, a novel attention-based encoder-decoder neural network architecture was presented for neural machine translation (NMT) in 2015 [2]. The idea behind this approach is illustrated in Figure 5, which shows how the attention mechanisms in neural networks work. In this example, the encoder takes an input sentence in English,
and the decoder outputs its translation in Dutch. Both encoder and decoder include _recurrent neural networks (RNNs)_; see [24] for more information about RNNs. Basically, the encoder outputs hidden states, and the decoder takes all the hidden states as inputs. Before processing them, the decoder applies an attention mechanism that gives each hidden state a score. Then, it multiplies each hidden state by its score to which a softmax function is applied. The weighted scores are summed up, and the result leads to the context vector for the decoder. By obtaining weighted hidden states which are most associated with certain words, the decoder focuses on the relevant parts of the input during decoding.
After that study in 2015, attention mechanisms in neural networks have progressed rapidly, especially for NMT. One of them is the _self-attention_ which is the core building block of the _Transformer_[69]. The Transformer is composed of encoder-decoder stacks which are entirely based on self-attention without any convolutional or recurrent layers. There are six identical layers in each of the encoder-decoder stacks that form the Transformer. To illustrate the model, only one encoder-decoder stack is shown in Figure 6.
Figure 5: A neural machine translation example. The model takes an English sentence as input and translates it into Dutch. The figure shows encoder hidden states, and which words the model focuses more on (indicated by the color intensity) while translating.
Figure 6: The Transformer architecture in detail [65; 69]. _(Left)_ The Transformer with one encoder-decoder stack. _(Center)_ Multi-head attention. _(Right)_ Scaled dot-product attention.
The encoder-decoder stacks in the Transformer are composed of fully-connected layers and _multi-head attention_ which is a kind of self-attention mechanism applying _scaled dot-product attention_ within itself. As seen in Figure 6, these attention mechanisms use three vectors for each word, namely _query (Q)_, _key (K)_ and _value (V)_. These vectors are computed by multiplying the input with weight matrices \(W_{q}\), \(W_{k}\) and \(W_{v}\) which are learned during training. In general, each value is weighted by a function of the query with the corresponding key. The output is computed as a weighted sum of the values. In the _scaled dot-product attention_, the dot products of the query with all keys are computed. As given in Eq. 6, each result is divided by the square root of the dimension of the keys to have more stable gradients. They pass onto the softmax function, thus the weights for the values are obtained. Finally each softmax score is multiplied with the value [69]:
\[Attention(Q,K,V)=softmax(\frac{QK^{T}}{\sqrt{d_{k}}})V \tag{6}\]
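Eq. 6 can be sketched directly in NumPy (2D \(Q\), \(K\), \(V\) matrices are assumed; the max-subtraction is a standard numerical-stability trick, not part of the equation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Eq. 6: softmax(Q K^T / sqrt(d_k)) V, with a row-wise softmax."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key dot products
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # weighted sum of values
```

Each output row is a convex combination of the value vectors, weighted by how well the query matches each key.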
_Multi-head attention_ extends this idea by linearly projecting the inputs (keys, values and queries) \(h\) times with different, learned linear projections (Figure 6). Each projected version of the queries, keys and values is called a _head_, on which the scaled dot-product attention is then performed in parallel. Thus, the self-attention is calculated multiple times using different sets of query, key and value vectors. This enables the model to jointly attend to information at different positions [69]. In the last step, the projections are concatenated. Additionally, the decoder applies _masked multi-head attention_ to ensure that only previous word embeddings (tokens) are used when predicting the next word in the sentence.
In the literature, there are different Transformer architectures for various tasks [34; 40]. After the Transformer has yielded major progress in natural
Figure 7: The Vision Transformer [18]. In order to classify an image, it takes the input as patches, projects linearly, adds position embeddings, and uses a Transformer encoder.
language processing (NLP), it has been adapted to image processing tasks. _Image Transformer_[50] is one of these adaptations in which the Transformer is applied to image processing. Image Transformer applies self-attention in local neighborhoods for each query pixel, and performs well on image generation and image super-resolution. However, the current state-of-the-art model is _Vision Transformer_[18] which splits an input image into patches. Then, the Transformer takes the linear embeddings of these patches in sequence as input (Figure 7). The Vision Transformer performs well on image classification tasks whilst using fewer parameters.
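The patch-embedding step of the Vision Transformer can be illustrated as follows. The sizes are toy values, and the projection matrix and position embeddings are random stand-ins for learned parameters:

```python
import numpy as np

def patchify(image, patch):
    # image: (H, W, C) -> (num_patches, patch*patch*C), patches in row-major order
    H, W, C = image.shape
    patches = []
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            patches.append(image[i:i+patch, j:j+patch, :].reshape(-1))
    return np.stack(patches)

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32, 3))
tokens = patchify(img, patch=8)              # (16, 192): a 4x4 grid of 8x8x3 patches
W_embed = rng.normal(size=(192, 64)) * 0.05  # learned linear projection (random here)
pos = rng.normal(size=(16, 64)) * 0.05       # learned position embeddings (random here)
embeddings = tokens @ W_embed + pos          # the sequence fed to the Transformer encoder
print(embeddings.shape)  # (16, 64)
```

From this point on, the encoder treats the 16 patch embeddings exactly like a sequence of word embeddings.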
### Diffusion models
Today, text-to-image models such as DALL-E 2 [56] or Midjourney have turned AI into a popular tool to produce mesmerizing images. These are _diffusion models_ which have shown great success in generating high-quality images. They have already been proven to outperform GANs at image synthesis [15].
In comparison with GANs, the training of diffusion models does not require an adversarial setting. The original denoising diffusion method was proposed in [16], inspired by non-equilibrium thermodynamics: it systematically destroys structure in a data distribution and then learns to restore it. Based on this method, _denoising diffusion probabilistic models_, or _diffusion models_ in short, were applied to image generation by Ho _et al._ (2020) [29].
Diffusion models require two main steps in the training phase (Figure 8). At first, during _the forward (diffusion) process_, random noise is gradually added to the input image until the original input becomes all noise. This is performed by a _fixed_ Markov chain which adds Gaussian noise for \(T\) successive steps. Secondly, during the _reconstruction_ or _reverse process_, the model reconstructs the original data from the noise obtained in the forward process. The reverse process is defined as a Markov Chain with _learned_ Gaussian transitions. Accordingly, the prediction of probability density at time \(t\) depends only on the state attained at time _(t-1)_. Here, \(x_{1},...,x_{T}\) are latents of the same dimensionality as the data which make the diffusion models _latent variable models_[29].
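The fixed forward process has a convenient closed form: \(x_t\) can be sampled directly from \(x_0\) without simulating every intermediate step, since \(x_t\sim\mathcal{N}(\sqrt{\bar{\alpha}_t}\,x_0,(1-\bar{\alpha}_t)I)\). A sketch using the linear noise schedule from [29], applied to a toy 8×8 "image":

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # noise schedule from Ho et al. (2020)
alphas_bar = np.cumprod(1.0 - betas)     # cumulative product (1 - beta_s) up to step t

def q_sample(x0, t, rng):
    # Closed form of the forward process at step t.
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8))
x_early = q_sample(x0, t=10, rng=rng)    # still close to the original image
x_late = q_sample(x0, t=999, rng=rng)    # essentially pure Gaussian noise
print(alphas_bar[10] > 0.9, alphas_bar[999] < 1e-3)  # True True
```

The cumulative product \(\bar{\alpha}_t\) interpolates between "mostly image" at small \(t\) and "mostly noise" at \(t=T\).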
Figure 8: The working principle of diffusion model in general [29]. Starting from an image, the forward process involves gradually adding noise (following a fixed Markov chain) until the image is all noise. In the reverse process, the original image is reconstructed step by step through a learned Markov Chain.
The general structure of a diffusion model is given in Figure 8. The reverse process requires training a neural network, because estimating the probability density at an earlier time step given the current state of the system is non-trivial. Learning the reverse transitions therefore amounts to learning to remove the noise added at each step: for each step in the Markov chain, a neural network learns to denoise the image. Optionally, the denoising process can be guided by text (see Section 3.3.2). In this case, a Transformer encoder maps a text prompt to tokens, which are then fed to the neural network (Figure 9). Once trained, a diffusion model can generate data by simply passing random noise (and optionally a text prompt) through the learned denoising process.
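In practice, Ho _et al._ (2020) [29] train the denoiser with a simple objective: predict the noise that was added at a randomly chosen step. A toy sketch of one training-loss evaluation, where the "network" is a single scalar parameter standing in for a U-Net:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def training_loss(x0, theta):
    t = rng.integers(T)                    # random diffusion step
    eps = rng.normal(size=x0.shape)        # the noise the model must recover
    x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps
    eps_pred = theta * x_t                 # stand-in for eps_theta(x_t, t)
    return np.mean((eps - eps_pred) ** 2)  # simple MSE objective, as in DDPM

x0 = rng.normal(size=(16,))
print(training_loss(x0, theta=0.0) > 0)  # True: a trivial predictor has positive loss
```

A real implementation would backpropagate this loss through a U-Net conditioned on \(t\) (and optionally on text tokens).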
The diffusion model may bring to mind VAEs, which encode the input data as a probability distribution and then sample from the learned latent space (Section 2.2). However, the forward process makes the diffusion model different from VAEs, as no training is required in this fixed Markov chain. More detailed information about diffusion models and their applications can be found in [70].
## 3 AI-generated art
In this section, we provide an overview of the models that have shaped the field of AI-generated art. This overview includes GANs, Transformer-based models and Diffusion models. In each section, we touch on the models that had a large impact on the field. We elaborate on the model architectures and provide a comprehensive comparison. It should be noted that this review is focused on models which have been detailed in scientific papers and therefore does not include the well-known diffusion model Midjourney.
Figure 9: The illustration of one time step in the learned Markov Chain in the reverse process. A deep neural network (e.g., U-Net [60]) learns to transform a noisy input into a less noisy image with the help of a text prompt that describes the content of the image.
### Opening gambit
**DeepDream.** Once CNNs achieved impressive results in image processing, researchers started developing visualization techniques to better understand how these neural networks see the world and perform classification. Examining each layer of a trained neural network led to the development of _DeepDream_ [44; 45], which produces surprisingly artistic images.
DeepDream generates images based on the representations learned by the neural network. To this end, it takes an input image, runs a trained CNN in reverse, and tries to maximize the activation of entire layers by applying gradient _ascent_[10]. DeepDream can be applied to any layer of a trained CNN. However, applying it to high-level layers is usually preferred because it provides visual patterns such as shapes or textures that are easier to recognize.
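DeepDream's core loop can be caricatured as gradient ascent on the input. In this toy sketch the "layer" is a fixed random linear map followed by ReLU rather than a trained CNN, which is enough to show the mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 20))  # stand-in for a CNN layer's weights

def activation(x):
    # Total activation of the "layer": the quantity DeepDream maximizes.
    return np.maximum(W @ x, 0.0).sum()

x = rng.normal(size=20) * 0.01
a0 = activation(x)
for _ in range(100):
    grad = W.T @ (W @ x > 0).astype(float)  # gradient of the summed ReLU output w.r.t. x
    x = x + 0.1 * grad                      # gradient *ascent*: modify the input, not the weights

print(activation(x) > a0)  # True: the input now excites the layer far more strongly
```

In real DeepDream, the same ascent is applied to image pixels through a trained Inception network, so the patterns that emerge reflect what the network has learned to detect.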
An illustration of an original input image and its DeepDream output is shown in Figure 10. What is striking is that the output image contains lots of animal faces and eyes. This is because the original DeepDream is based on Inception [66], which was trained on the ImageNet database [14]. Since there are so many examples of different dog breeds and bird species in ImageNet, DeepDream is biased towards those. For some people, DeepDream images resemble dream-like psychedelic experiences. In any case, although it was not its initial purpose, DeepDream inspired people to employ AI as a tool for artistic image creation.
**Neural Style Transfer.** A deep learning-based technique that combines the content of one image with the style of another is called _neural style transfer_ [22]. This technique uses a pretrained CNN to transfer the style of one image to another. A typical example is shown in Figure 10, where the style of one image (e.g., _Starry Night_ by Vincent Van Gogh) is applied to a content target image. Neural style transfer can be implemented by redefining
Figure 10: _(Left)_ A DeepDream example. Deepdream receives an input image and outputs a dreamy version in which faces and eyes emerge due to the maximization of the final layer’s activations. _(Right)_ An illustration of neural style transfer. The content target image and style image are provided as input to the model. As output, it retains the content of the target image and applies the style of the other image.
the loss function in a CNN. This loss function is altered to preserve the content of the target image through the high-level activations of the CNN. At the same time, the loss function should capture the style of the other image through the activations in multiple layers. To this end, similar correlations within activations for low-level and high-level layers contribute to the loss function [10]. The result is an image that combines the content of the input image with the style of the second input image.
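The two loss terms can be sketched with Gram matrices, following the general recipe of [22]. Random feature maps stand in for CNN activations here, and the style-weighting factor is arbitrary:

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, positions). Channel-wise correlations capture
    # texture/style independently of spatial layout.
    return features @ features.T / features.shape[1]

def style_loss(F_generated, F_style):
    return np.mean((gram_matrix(F_generated) - gram_matrix(F_style)) ** 2)

def content_loss(F_generated, F_content):
    # High-level activations should match the content image directly.
    return np.mean((F_generated - F_content) ** 2)

rng = np.random.default_rng(0)
F_c = rng.normal(size=(4, 100))  # "content" activations
F_s = rng.normal(size=(4, 100))  # "style" activations
total = content_loss(F_c, F_c) + 1e3 * style_loss(F_c, F_s)
print(content_loss(F_c, F_c) == 0.0)  # True: identical features give zero content loss
```

Minimizing `total` with respect to the generated image's pixels (through the CNN) yields the stylized output.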
### Generative adversarial networks
**ArtGAN.** Tan _et al._ (2017) [67] presented a model called _"ArtGAN"_, showing the results of a GAN trained on paintings. Although the output images looked nothing like the artwork of one of the great masters, they seemed to capture the low-level features of artworks. This work sparked interest in the use of GANs to generate artistic images. Additionally, it challenged people to think of creativity as a concept that could be replicated by artificial intelligence.
**CAN.** Shortly afterwards, Elgammal _et al._ (2017) [19] pushed this idea further in their paper on _creative adversarial networks (CAN)_. Their goal was to train a GAN to generate images that would be considered art by the discriminator but did not fit any existing art style. The resulting images mostly looked like abstract paintings with a unique feel to them. Elgammal _et al._ then validated their work in a perceptual study in which they asked human participants to rate the images on factors such as liking, novelty, ambiguity, surprisingness and complexity. In addition, they asked the participants whether they thought each image originated from a human-made painting or a computer-generated one. There were no differences in the scores on the above-mentioned factors between CAN art and various abstract artworks or more traditional GAN art. However, participants more often thought that the CAN images were made by a human artist as opposed to GAN-generated art.
**pix2pix.** In 2017, Isola _et al._ [31] had the innovative idea to create a conditional GAN [43] that receives an image as input and generates a transformed version of that image. This was achieved by training the GAN on corresponding image pairs. For example, say you have a dataset of RGB images. You can create a BW version of all these images to obtain image pairs (one RGB, one BW). What is not as trivial is turning BW images into colored ones. One could manually color the images, but this is time-consuming. _pix2pix_ allows you to automate this process. The generator receives the BW version of the image pair and generates an RGB version. Next, the discriminator receives both the transformed image and the original RGB image, and has to determine which one is real and which one is fake. After training is completed, the pix2pix GAN can transform any BW image into a colored version. The major advantage of pix2pix is that it can be applied to any dataset of image pairs without the need to adjust the training process or loss function. Thus, the same model can be used to transform sketches into paintings
or BW images into colored images, simply by changing the training dataset. Many artists as well as AI enthusiasts have been inspired by pix2pix to create artistic images using this model (Figure 11).
**CycleGAN.** Although pix2pix was a major breakthrough in generative AI, one shortcoming was that it requires corresponding image pairs for its training, which simply do not exist for all applications. For example, we do not have a corresponding photograph for every painting created by Monet. Therefore, pix2pix would not be able to turn your photograph into a Monet painting. In 2017, the same lab released _CycleGAN_, another major breakthrough in generative AI, since this GAN is able to transform your photograph into a Monet painting [75]. CycleGAN extends pix2pix by combining two conditional GANs and 'cycling' between them. The first GAN's generator might receive an image of a Monet painting and is trained to transform it into a photograph. The output image is then fed to the second generator to be transformed back into a Monet painting. This transformed Monet painting and the original Monet painting are then fed to the first discriminator, whereas the photograph version of the image is compared to an existing (unpaired) photograph by the second discriminator. The same process is repeated for an existing photo by turning it into a Monet painting and back into a photo. The transformed photo is then compared to the original photo, whereas the transformed Monet painting is compared to an existing (unpaired) Monet painting. In the end, the model can transform images into the other modality without having seen pairs in the training set.
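The cycle-consistency idea at the heart of CycleGAN can be illustrated with stand-in generators. Here `G` (photo → painting) and `F` (painting → photo) are simple invertible functions rather than trained networks:

```python
import numpy as np

# Stand-in generators: G maps "photos" to "paintings", F maps back.
G = lambda x: 2.0 * x + 1.0
F = lambda y: (y - 1.0) / 2.0

def cycle_loss(x, y):
    # L1 reconstruction error in both directions, as in Zhu et al. (2017):
    # translating to the other domain and back should recover the original.
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

rng = np.random.default_rng(0)
photo, painting = rng.normal(size=16), rng.normal(size=16)
print(cycle_loss(photo, painting) < 1e-12)  # True: perfect inverses give (near) zero loss
```

During training, this cycle loss is added to the two adversarial losses, which is what lets CycleGAN learn from unpaired image collections.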
**GauGAN.** In 2019, Nvidia researchers released _"GauGAN"_, named after post-impressionist painter Paul Gauguin [49]. Similar to pix2pix, GauGAN
Figure 11: Examples of artistic applications of pix2pix. **A.** Artwork _“Memories of Passersby I”_ by Mario Kingemann. This work continuously generates a male and female looking portrait by manipulating previously generated portraits using a collection of GANs (including pix2pix). **B.** Screenshot from video _“Learning to See: Gloomy Sunday”_ by Memo Alten. The original video shows a side by side of the input and output of pix2pix trained to turn ordinary video (showing household items) into artistic landscapes and flower paintings.
takes an image as input and produces a transformation of that image as output. Their model uses spatially adaptive denormalization, also known as SPADE: a normalization method (similar to Batch Normalization) whose modulation parameters are computed by convolving a segmentation map. As a result, the output image can be steered with a semantic segmentation map. In addition, the input image is encoded using a VAE, which learns a latent distribution capturing the style of the image. Consequently, one can generate new images whose content is controlled by a segmentation map and whose style is controlled by an existing image. Figure 12 shows some example images generated with GauGAN.
Since then Nvidia has released an updated version called _GauGAN2_[62]. As the name suggests, it is still a GAN framework. However, this updated version can additionally perform text-to-image generation, meaning it can generate an image based on a text description as input. Earlier that year, text-to-image models became extremely popular due to the release of DALL-E [55], a Transformer-based model which will be discussed in the next section.
**Lafite.** Zhou _et al._ (2021) [74] proposed a GAN-based framework for language-free text-to-image generation, meaning they train their model solely on images. However, it is still able to generate images from text descriptions after training is completed. They use CLIP's [52] joint semantic embedding space of text and images to generate pseudo text–image pairs. CLIP is another model that is trained to link text descriptions to the
Figure 12: Example images generated with GauGAN. One can provide a segmentation map and optionally a style reference as input. GauGAN then generates a photo-realistic version of the segmentation map in the style of the reference image. When we add a palm tree in the segmentation map, GauGAN adds a palm tree to its output in two different styles.
correct image and vice versa. Then, they adapt StyleGAN 2 [33] to a conditional version where the text embedding is concatenated with StyleSpace, the intermediate and well-disentangled feature space of StyleGAN [32].
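CLIP-style matching reduces to cosine similarity in the joint embedding space: the caption's embedding is compared against every image embedding. A toy sketch with hand-made embedding vectors (invented for illustration, not real CLIP outputs):

```python
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy joint embedding space: in CLIP, matching text and image embeddings are
# trained to have high cosine similarity.
image_embs = {
    "dog photo": np.array([1.0, 0.1, 0.0]),
    "car photo": np.array([0.0, 1.0, 0.2]),
}
text_emb = np.array([0.9, 0.2, 0.1])  # stand-in embedding of the caption "a dog"

best = max(image_embs, key=lambda k: cosine_sim(image_embs[k], text_emb))
print(best)  # dog photo
```

The same similarity score is what later models exploit, whether for retrieval, for CLIP guidance, or as an evaluation metric.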
### Text-to-image models
#### 3.3.1 The Transformer
**DALL-E.** Early in 2021, OpenAI released their groundbreaking model _"DALL-E"_ (named after Pixar's Wall-e and Surrealist painter Salvador Dali) on their blog. Shortly after, they detailed the workings of their model in their paper titled _"Zero-shot Text-to-Image generation"_[55]. DALL-E combines a discrete variational autoencoder (dVAE) [58], which learns to map images to lower dimensional tokens, and a Transformer, which autoregressively models text and image tokens (see Figure 15). In this manner, DALL-E is optimized to jointly model text, accompanying images and their token representations. As a result, given a text description as input, DALL-E can predict the image tokens and decode them into an image during inference. Here, zero-shot refers to the ability to generate images based on text descriptions that are not seen during training. This means that DALL-E can combine concepts that it has learned separately but never seen together in a single generated image. For example, it has seen both robots and illustrations of dragons in the training data but it has not seen a robot in the shape of a dragon. However, when prompted to generate "a robot dragon", the model can produce sensible images (see Figure 13). This remarkable capability of the model has resulted in a hype surrounding DALL-E. Although DALL-E can generate cartoons and art styles quite well, it lacks accuracy when generating photo-realistic images. As a result, OpenAI and other companies have devoted substantial resources to create an improved text-to-image model.
**CogView.** Concurrently with DALL-E, Ding _et al._ (2021) created _"CogView"_[17], a similar text-to-image model that supports Chinese rather than English. Besides the innovative idea to combine VAE and transformers, they include other features such as super-resolution to improve the resolution of the generated images. Despite their super-resolution module, their generated images lack photo-realistic quality.
**Make-A-Scene.** In 2022, Meta AI released their _"Make-A-Scene"_ model [21]. Their Transformer-based text-to-image model allows the user more control over the generated image by working with segmentation maps. During training, the model receives a text prompt, segmentation map and accompanying image as input (similarly as GauGAN2). The model then learns a latent mapping based on tokenized versions of the inputs. During inference, Make-A-Scene is able to generate an image and segmentation map based solely on text input (see Figure 15). Alternatively, one can provide a segmentation map as input to steer the desired output more. Moreover, one can alter the segmentation map that the model produces to steer the image generation.
**Parti.** Later that year, Google released their Transformer-based text-to-image model called _Parti_, which stands for _Pathways Autoregressive Text-to-Image model_ [71]. This was the second text-to-image model Google released that year, a month after releasing Imagen [61] (see Section 3.3.2). Parti is based on a ViT-VQGAN [72], which combines a Transformer encoder and decoder with the adversarial loss of a pretrained GAN to optimize image generation. Parti uses an additional Transformer encoder to handle the text input, which is transformed into text tokens that serve as input to a Transformer decoder alongside the image tokens from the ViT-VQGAN during training. At test time, the Transformer receives only text as input and predicts the image tokens, which are then provided to the ViT-VQGAN to detokenize and turn into an actual image (see Figure 15). Parti outperforms all other text-to-image models with a zero-shot FID score of 7.23 and a fine-tuned FID score of 3.22 on MS-COCO [39] (see Table 1). Figure 14 is a great example of Parti's remarkable capability to extract the essence of what the caption refers to (in this case, the _style_ of _American Gothic_). The generated image is not simply a copy of the original in which the rabbits seem photoshopped.
**Muse.** Earlier this year, Google released _Muse_, another Transformer-based text-to-image model [8]. Muse includes a VQGAN tokenizer to transform images
Figure 13: Example images produced by DALL-E [55], retrieved from the original OpenAI blog [https://openai.com/blog/dall-e/](https://openai.com/blog/dall-e/).
into tokens and vice versa. The text input is turned into tokens using the pre-trained Transformer encoder of the T5-XXL language model [53] (see Figure 15). The advantage of using such a pre-trained encoder is that this model component has been trained on a large corpus of text. A struggle in training text-to-image models is that it is time-consuming to gather a large set of high-quality image–caption pairs, and such pairs likely span only a limited part of the concepts known in a language. Since NLP models are trained on text alone (which is easier to gather), they cover a more encompassing corpus. The authors find that including the pretrained encoder results in higher-quality, photo-realistic images and more accurate rendering of text within the generated images.
Due to its computational efficiency, Muse is the text-to-image model which requires the least time to generate an image when prompted. It achieves a state-of-the-art CLIP score of 0.32, which is a measure of how close the generated images are to the prompted caption. They verified this further in a behavioral study in which human raters indicated that the Muse-generated images are better aligned with the prompts compared to images generated with Stable Diffusion (see Section 3.3.2). In addition to image generation, Muse allows inpainting, outpainting and mask-free editing (which will be explained in more detail in Section 3.3.2).
#### 3.3.2 Diffusion models
**GLIDE.** In 2021, OpenAI published a paper showing that diffusion models outperform GANs on image generation [15]. Less than a year later, OpenAI applied this insight to text-to-image generation and released _GLIDE_, a pipeline consisting of a diffusion model for image synthesis and a Transformer encoder for text input [46] (see Figure 18). This new and improved
Figure 14: _(Left) American Gothic_ by Grant Wood. _(Right)_ An example image generated by Parti [71], retrieved from [https://parti.research.google/](https://parti.research.google/).
model is trained on the same dataset as DALL-E. The quality of their generated images strongly outperforms DALL-E. In a study where they asked human participants to judge the generated images from DALL-E and GLIDE, the raters preferred the GLIDE images over the DALL-E images 87% of the time for photorealism and 69% of the time for caption similarity. In addition, they preferred blurred GLIDE images over reranked or cherry picked DALL-E images.
Besides improving photorealism, GLIDE also offers the additional feature of image inpainting, meaning that you can edit a specific region in an existing or computer-generated image. For example, one can take an image of the Mona Lisa and add a necklace by providing the image and a text description (e.g., "a golden necklace") as input to GLIDE (see Figure 16).
**DALL-E 2.** Even though GLIDE was an impressive improvement upon DALL-E, it did not garner the same attention. When OpenAI released _DALL-E 2_[56], an advancement of GLIDE, this has changed. DALL-E 2 has a similar
Figure 16: An example of inpainting with GLIDE. It receives an image of the Mona Lisa as input as well as a mask and a text description “a golden necklace”. Then, it generates the output as Mona Lisa with a golden necklace. [46].
Figure 15: Comparison of the Transformer-based text-to-image models.
diffusion pipeline as GLIDE. What has been improved upon is the text input in the diffusion pipeline. Whereas GLIDE uses an untrained transformer encoder to format the text, DALL-E 2 uses the CLIP text encoder. Additionally, it uses a prior model to transform the text embedding into a CLIP image embedding before feeding it to the diffusion model (see Figure 18). This is achieved by transforming the text descriptions and images to text and image embeddings (or tokens) respectively using transformer encoders. The model is then trained to link the correct embeddings with one another. This relationship between text description and image is exploited in DALL-E 2 to provide the diffusion model with an embedding that reflects the text input but is more suited for image generation. In addition to improving image quality compared to GLIDE, DALL-E 2 allows the user to extend the background of an existing image or computer-generated one (referred to as outpainting, see Figure 17) and to generate variations of images.
**Imagen.** Shortly after, Google released their first text-to-image model, called _Imagen_ [61]. Its architecture is closer to GLIDE, since it does not rely on CLIP embeddings. Rather, it uses the pretrained encoder of the NLP model T5-XXL (similarly to Muse), whose embeddings are fed to the diffusion model (see Figure 18). As a result, the model is able to generate images that contain text more accurately (something OpenAI's models struggled with). In addition, Imagen feeds the output of the diffusion model to two super-resolution diffusion models to increase the resolution of the images.
**Stable Diffusion.** The biggest revolution in the field is perhaps the fully open-source release of _Stable Diffusion_ by a company called StabilityAI [59] (we elaborate on the concerns regarding open-source models in Section 4). Their main contribution is the computational efficiency of their model compared to the above-mentioned text-to-image models. Rather than operating
Figure 17: Image outpainting examples by DALL-E 2 [56]. _(Left)_ Mona Lisa. _(Right)_ Girl with a Pearl Earring.
in pixel-space, Stable Diffusion operates in (lower-dimensional) latent space and maps the output of the diffusion process back to pixel space using a VAE (see Figure 18). Whereas previous text-to-image models require hundreds of GPU computing days, this latent diffusion model requires significantly smaller computational demands and is therefore more accessible to those with less resources. Besides image generation, Stable Diffusion additionally allows the user to modify existing images through image-to-image translation (e.g., turning a sketch into digital art) or inpainting (removing or adding something in an existing image).
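A back-of-the-envelope comparison shows why operating in latent space is cheaper. The 512×512×3 image and 64×64×4 latent sizes used below follow the commonly cited Stable Diffusion configuration (an 8× spatial downsampling factor):

```python
# Why latent diffusion is cheaper: the denoiser runs on a small latent tensor
# instead of the full-resolution image, at every one of the many denoising steps.
pixel_elems = 512 * 512 * 3   # elements per denoising step in pixel space
latent_elems = 64 * 64 * 4    # elements per denoising step in latent space
print(pixel_elems // latent_elems)  # 48: ~48x fewer elements to process per step
```

Since this saving applies to every step of the iterative denoising process, the overall reduction in compute is substantial, which is what made the model trainable and runnable on modest hardware.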
**InstructPix2Pix.** Although several text-to-image models have this inpainting feature, in practice it is difficult to get the desired output from a text description. The user either needs to describe the entire output image, or create a mask for the area of the image that should be modified and describe the desired modification. _InstructPix2Pix_ is a modification of Stable Diffusion that allows the user to modify images through intuitive text instructions [6]. Rather than describing the desired output image or providing a mask, the user can simply write an instruction for how the input image should be adjusted (mask-free editing). For example, to turn the _Sunflowers_ by Vincent Van Gogh into a painting of roses, you can just write the instruction "Swap sunflowers with roses" (Figure 19).
Figure 19: InstructPix2Pix turns the _Sunflowers_ by Vincent Van Gogh into a painting of roses [6].
Figure 18: Comparison of the diffusion model-based text-to-image models.
## 4 Comparison of deep generative models
Due to the enormous efforts in the field of generative AI, it may be hard to decide which model might be most suited for one's purposes. Table 1 provides a comparison of the above mentioned models in terms of their computational efficiency (dataset size and trainable parameters), the quality of the generated images (FID score), the capabilities of the model and its accessibility (open source vs. not for public use). The state-of-the-art models that generate the highest photo-realistic images belong to Google (Parti, Muse and Imagen). However, none of these models are open source, meaning no one other than Google Research has access to them. Therefore, these results have not been reproduced by other researchers. Additionally, artists cannot use these models to create AI-generated art. On the other hand, OpenAI has released filtered versions of their models GLIDE and DALL-E 2. These versions have been trained on a filtered dataset that excluded images containing recognizable people (e.g., celebrities, political figures), nudity or violence (e.g., weapons, substance use). DALL-E 2 is accessible through a user-friendly interface where the user can type a text prompt and four generated images are shown. GLIDE is accessible through a Google Colab notebook that loads the model and its weights, but does not show the origin code to avoid misuse.
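The FID scores in Table 1 measure the Fréchet distance between Gaussians fitted to Inception features of real and generated images (lower is better). A simplified sketch assuming *diagonal* covariances; real FID uses full covariance matrices and a matrix square root, and random vectors stand in for Inception features here:

```python
import numpy as np

def fid_diagonal(feats1, feats2):
    # Fréchet distance ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}),
    # specialized to diagonal covariances where it simplifies per dimension.
    mu1, mu2 = feats1.mean(axis=0), feats2.mean(axis=0)
    var1, var2 = feats1.var(axis=0), feats2.var(axis=0)
    return np.sum((mu1 - mu2) ** 2) + np.sum(var1 + var2 - 2 * np.sqrt(var1 * var2))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(5000, 8))
close = rng.normal(0.1, 1.0, size=(5000, 8))  # distribution similar to "real"
far = rng.normal(2.0, 3.0, size=(5000, 8))    # very different distribution
print(fid_diagonal(real, close) < fid_diagonal(real, far))  # True: lower FID = closer
```

This is why FID on a common benchmark (MS-COCO in Table 1) allows rough comparisons across models, even though scores depend on evaluation details.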
StabilityAI opted for the transparent approach, releasing both the code of their model and its weights. This allows anyone to use and adjust the model, which has been trained on images containing recognizable figures. Their dataset is filtered for explicit nudity and violence; however, it does contain artistic nudity such as nude paintings or sculptures. Figure 20 illustrates how Stable Diffusion allows the generation of semi-nude
Figure 20: Example images generated by Stable Diffusion which allows the generation of (partial or fully) nude images. _(Left)_ Generated with the prompt “Darth Vader in his black swimshorts sunbathing on the beach.” _(Right)_ Generated with the prompt “Batman as a Greek God wearing a loincloth.”
figures. For illustration purposes, we opted for fictional characters and partially nude images. However, it is possible to create harmful deepfakes using Stable Diffusion. As a result, many have expressed concerns about the release of Stable Diffusion and its possible misuse. Emad Mostaque, the founder of StabilityAI, has stated in several interviews that the technology is not the issue, but rather those who have malicious intentions. Other technologies, such as GANs, have resulted in harmful deepfakes in the past. Our role as a society is to find proper rules of conduct to deal with these misuses, rather than to deny access to groundbreaking new technology. That being said, StabilityAI has installed multiple safety barriers to avoid misuse by stating ethical and legal rules in Stable Diffusion's license (e.g., stating that you cannot spread nude or violent deepfakes) and including an AI-based safety classifier that filters the generated images.
| Model | FID on MS-COCO | Trainable parameters | Dataset size | Open source | Capabilities |
|---|---|---|---|---|---|
| ArtGAN | – | – | 80K | Yes | Image generation |
| CAN | – | – | 80K | Yes | Image generation |
| pix2pix | – | – | Various | Yes | Image manipulation |
| CycleGAN | – | – | Various | Yes | Image manipulation |
| – | – | – | – | – | Image generation, image manipulation, text-to-image |
| LAFITE | 26.94 | 75M | – | – | Text-to-image |
| DALL-E | 27.50 | 12B | 250M | No | Text-to-image |
| CogView | 27.10 | 4B | 30M | – | Text-to-image |
| Make-A-Scene | 11.84 | 4B | 35M | No | Text-to-image, image manipulation |
| Parti | 7.23 | 20B | – | No | Text-to-image |
| Muse | 7.88 | 3B | 460M | No | Text-to-image, inpainting, mask-free editing |
| GLIDE | 12.24 | 3.5B | 250M | Partially | Text-to-image, inpainting |
| DALL-E 2 | 10.39 | 4.5B | 650M | Partially | Text-to-image, inpainting, outpainting |
| Imagen | 7.27 | 2B | 460M | No | Text-to-image |
| Stable Diffusion | 12.63 | <1B | 5B | Yes | Text-to-image, inpainting, image manipulation |
| InstructPix2Pix | – | <1B | 450K | Yes | Mask-free editing |

Table 1: Summary of deep generative models
## 5 Conclusions
Deep learning and its image processing applications are now at a totally different stage than a few years ago. In the beginning of last decade, it was groundbreaking that deep neural networks could classify natural images. Today, these models are capable of generating highly realistic and complex images based on simple text prompts. This allows individuals without programming knowledge to employ these powerful models. It is important to remember that the use of these models should be guided by ethical and responsible considerations. These tools can assist artists to express their creativity and may shape the future of art. As ChatGPT stated, _"Some people believe that AI has the potential to revolutionize the way we create and think about art, while others are more skeptical and believe that true creativity and artistic expression can only come from human beings."_
_"Ultimately, the role of AI in the arts will depend on how it is used and the goals and values of the people who are using it"_ ChatGPT concluded.
**Acknowledgments.** We would like to thank ChatGPT for answering our questions patiently.
# Piecewise linear functions representable with infinite width shallow ReLU neural networks

Sarah McCarty. arXiv:2307.14373v1, 2023-07-25. http://arxiv.org/abs/2307.14373v1
###### Abstract.
This paper analyzes representations of continuous piecewise linear functions with infinite width, finite cost shallow neural networks using the rectified linear unit (ReLU) as an activation function. Through its integral representation, a shallow neural network can be identified by the corresponding signed, finite measure on an appropriate parameter space. We map these measures on the parameter space to measures on the projective \(n\)-sphere cross \(\mathbb{R}\), allowing points in the parameter space to be bijectively mapped to hyperplanes in the domain of the function. We prove a conjecture of Ongie et al. that every continuous piecewise linear function expressible with this kind of infinite width neural network is expressible as a finite width shallow ReLU neural network.
2010 Mathematics Subject Classification: 68T07, 42C40, 41A30
## 1. Introduction
We consider shallow neural networks which use the rectified linear unit (ReLU) as the activation function. It is well known that ReLU has universal approximation properties on compact domains and, in practice, has advantages over sigmoidal activation functions [6, 16]. Finite width shallow neural networks with \(n+1\)-dimensional input take the form
\[f(\boldsymbol{x})=c_{0}+\sum_{i=1}^{k}c_{i}\sigma\left(\boldsymbol{a}_{i} \cdot\boldsymbol{x}-b_{i}\right) \tag{1}\]
where \(\boldsymbol{a}_{i}\in\mathbb{S}^{n}\) (the unit sphere in \(\mathbb{R}^{n+1}\)) and \(b_{i},c_{i}\in\mathbb{R}\) for all \(i\).
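For concreteness, a network of the form in Equation 1 is straightforward to evaluate directly. The sketch below (plain Python, with hypothetical weights chosen only for illustration) realizes \(|x|=\sigma(x)+\sigma(-x)\) as a width-two network with \(\boldsymbol{a}_{1}=1\) and \(\boldsymbol{a}_{2}=-1\) in \(\mathbb{S}^{0}\):

```python
def relu(t):
    return max(0.0, t)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def shallow_net(x, c0, cs, As, bs):
    """Evaluate f(x) = c0 + sum_i c_i * relu(a_i . x - b_i)  (Equation 1)."""
    return c0 + sum(c * relu(dot(a, x) - b) for c, a, b in zip(cs, As, bs))

def abs_net(x):
    """|x| = relu(x) + relu(-x): width two, a_1 = 1 and a_2 = -1 in S^0."""
    return shallow_net([x], 0.0, [1.0, 1.0], [[1.0], [-1.0]], [0.0, 0.0])
```

In one dimension every continuous piecewise linear function with finitely many kinks arises from such a finite sum, which is the simplest picture behind the finite-width side of the conjecture below.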
Generalizing to infinite width neural networks transforms the sum to an integral and the weights \(c_{i}\) to a signed measure \(\mu\) on \(\mathbb{S}^{n}\times\mathbb{R}\) where
\[f(\boldsymbol{x})=\int_{\mathbb{S}^{n}\times\mathbb{R}}\sigma\left( \boldsymbol{a}\cdot\boldsymbol{x}-b\right)\mathrm{d}\mu(\boldsymbol{a},b)+c_{ 0}. \tag{2}\]
Some authors choose instead to have the measure and integral over all of \(\mathbb{R}^{n+1}\times\mathbb{R}\). An important class of functions is the class of those representable with an infinite width neural network with finite representation cost, which corresponds to \(|\mu|\left(\mathbb{S}^{n}\times\mathbb{R}\right)<\infty\)[3]. Similar classes of functions are studied in [7] as Barron spaces and in [2] as \(\mathcal{F}_{1}\).
To ensure the integral in Equation 2 is well-defined, we can require \(\mu\) has a finite first moment where \(\int_{\mathbb{S}^{n}\times\mathbb{R}}|b|\,\mathrm{d}|\mu|(\boldsymbol{a},b)<\infty\). Alternatively, Ongie et al. in [15] writes the integral in the form
\[\int_{\mathbb{S}^{n}\times\mathbb{R}}\sigma(\boldsymbol{a}\cdot\boldsymbol{x} -b)-\sigma(-b)\,\mathrm{d}\mu(\boldsymbol{a},b)+c_{0}, \tag{3}\]
so the integral is well-defined whenever \(\mu\) is a finite measure. Since they differ only by a constant, the class of functions representable with a finite measure in the form of Equation 2 is a subclass of the class of functions representable with a finite measure in the form of Equation 3[15]. Therefore, we choose to consider integral representations in the form of Equation 3.
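The integrands of Equations 2 and 3 differ only by \(\sigma(-b)\), which integrates to a quantity independent of \(\boldsymbol{x}\). A numerical sketch with a hypothetical atomic measure (weights \(c_{i}\) at points \((\boldsymbol{a}_{i},b_{i})\), chosen arbitrarily for illustration) makes the constant offset visible:

```python
def relu(t):
    return max(0.0, t)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Hypothetical atoms: weight c_i at (a_i, b_i) with a_i a unit vector.
atoms = [(0.5, [0.6, 0.8], 1.0), (-1.2, [0.0, 1.0], -2.0), (2.0, [1.0, 0.0], 0.3)]

def f_eq2(x):
    """Discrete analogue of Equation 2: integrand sigma(a . x - b)."""
    return sum(c * relu(dot(a, x) - b) for c, a, b in atoms)

def f_eq3(x):
    """Discrete analogue of Equation 3: integrand sigma(a . x - b) - sigma(-b)."""
    return sum(c * (relu(dot(a, x) - b) - relu(-b)) for c, a, b in atoms)

# The difference is the x-independent constant sum_i c_i * sigma(-b_i).
offset = sum(c * relu(-b) for c, _, b in atoms)
```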
Since \(\sigma(\boldsymbol{a}\cdot\boldsymbol{x}-b)\) is a ridge function, integral representations are also naturally studied as the dual ridgelet transform on distributions [18, 4, 14]. Functions with these representations are then often analyzed with the Radon transform [18, 14, 15]. Savarese in [17] characterized which one-dimensional functions are representable with infinite width, finite cost shallow ReLU neural networks.
Many finitely piecewise linear functions cannot be represented with a finite width shallow ReLU network, including all non-trivial compactly supported piecewise linear functions on \(\mathbb{R}^{n}\), \(n\geq 2\)[15]. Lower and upper bounds on the number of layers needed to represent continuous piecewise linear functions with finite width, deep neural networks have been established [11, 1]. However, the class of infinite width ReLU networks is certainly more expressive than the class of finite width networks in general, as it can express some non-piecewise linear functions [17]. It is not obvious whether the class of infinite width shallow ReLU neural networks can express a finitely piecewise linear function that the class of finite width shallow ReLU networks cannot. By decomposing measures, E and Wojtowytsch in [8] established that the set of points of non-differentiability of a function in a Barron space must be a subset of a countable union of affine subspaces. However, proper subsets are possible. In [15], Ongie et al. proved many compactly supported piecewise linear functions are not representable with finite cost, infinite width shallow ReLU neural networks. This led to the following conjecture.
**Conjecture 1** (Ongie et al., [15]).: _A continuous piecewise linear function \(f\) has finite representation cost if and only if it is exactly representable by a finite width shallow neural network._
A finite representation cost corresponds with the existence of a finite measure \(\mu\) such that \(f\) admits a representation in the form of Equation 3. Our main result is to prove the conjecture.
**Theorem**.: _Let \(f:\mathbb{R}^{n+1}\to\mathbb{R}\) be a continuous finitely piecewise linear function. If there exist \(\mu\in\mathcal{M}(\mathbb{S}^{n}\times\mathbb{R})\) and \(c_{0}\in\mathbb{R}\) such that \(f(\boldsymbol{x})=\int_{\mathbb{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol {a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\mu\left(\boldsymbol{a},b\right)+c_{0}\), then \(f\) is representable as a finite-width network as in Equation 1._
### Notation
For \(m\in\mathbb{N}\), let \([m]\coloneqq\{1,\ldots,m\}\).
The rectified linear unit (ReLU) function from \(\mathbb{R}\) to \(\mathbb{R}\) is denoted \(\sigma(t)\) and defined as \(\sigma(t)\coloneqq\max\{0,t\}\).
The pushforward measure of measure \(\mu\) induced by a mapping \(\varphi\) is denoted \(\mu\circ\varphi^{-1}\).
Regular lower case Latin letter variables generally represent real numbers: \(x,y,z,t\in\mathbb{R}\). Once \(n\) is fixed, bold lower case Latin letter variables indicate elements of \(\mathbb{R}^{n+1}\): \(\boldsymbol{a},\boldsymbol{x}\in\mathbb{R}^{n+1}\). Bold lower case Greek letter variables indicate elements of \(\mathbb{R}^{n}\): \(\boldsymbol{\zeta},\boldsymbol{\xi}\in\mathbb{R}^{n}\).
Let \(\mathfrak{h}_{\boldsymbol{a},b}\coloneqq\{\boldsymbol{x}\in\mathbb{R}^{n+1} \mid\boldsymbol{a}\cdot\boldsymbol{x}=b\}\).
For \(m\in\mathbb{N}\), \(m-1\) dimensional affine subspaces in \(\mathbb{R}^{m}\) are called hyperplanes. The \(m\)-sphere in \(\mathbb{R}^{m+1}\) is denoted \(\mathbb{S}^{m}\). Let \(\boldsymbol{e}_{m+1}\coloneqq(0,\ldots,0,1)\in\mathbb{S}^{m}\).
Let \(\mathcal{S}^{0}\coloneqq\{1\}\subseteq\mathbb{S}^{0}\). For \(m\geq 1\), let \(\mathcal{S}^{m}\) be defined as
\[\mathcal{S}^{m}\coloneqq\{\boldsymbol{x}\in\mathbb{S}^{m}\mid\boldsymbol{e}_{m+1 }\cdot\boldsymbol{x}>0\}\cup\{(x_{1},\ldots,x_{m},0)\mid(x_{1}\ldots,x_{m})\in \mathcal{S}^{m-1}\}\subseteq\mathbb{S}^{m}.\]
Call \(\mathcal{S}^{m}\) a half \(m\)-dimensional hypersphere. Let \(-\mathcal{S}^{m}\) denote the pointwise negation of all the points in \(\mathcal{S}^{m}\). By simple induction, exactly one of \(\boldsymbol{x},-\boldsymbol{x}\in\mathcal{S}^{m}\) for all \(\boldsymbol{x}\in\mathbb{S}^{m}\). Therefore, \(\mathbb{S}^{m}=\mathcal{S}^{m}\sqcup(-\mathcal{S}^{m})\).
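The recursive definition of \(\mathcal{S}^{m}\) translates directly into a membership test. The sketch below (illustrative only, with a numerical tolerance standing in for exact zero tests on the boundary) also exhibits the dichotomy \(\mathbb{S}^{m}=\mathcal{S}^{m}\sqcup\left(-\mathcal{S}^{m}\right)\):

```python
def in_half_sphere(x, tol=1e-12):
    """Membership in the half m-sphere: S^0 = {1}; for m >= 1, the last
    coordinate is positive, or it vanishes and the truncation lies in S^{m-1}."""
    if len(x) == 1:
        return abs(x[0] - 1.0) < tol
    if x[-1] > tol:
        return True
    if abs(x[-1]) <= tol:
        return in_half_sphere(x[:-1], tol)
    return False
```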
Let \(D_{\boldsymbol{d}^{+}}f(\boldsymbol{x})\) denote the one-sided directional derivative of \(f\) in the positive direction of \(\boldsymbol{d}\) for \(\boldsymbol{d}\in\mathbb{S}^{n}\).
For any metric space \(W\), \(\mathcal{B}(W)\) denotes the set of Borel sets and \(\mathcal{M}(W)\) denotes the set of Borel, finite, signed measures on \(W\).
## 2. Preliminaries
We start by formally defining countably piecewise linear.
**Definition 1**.: _A **convex polyhedron**\(C\) is a subset of \(\mathbb{R}^{n}\) such that \(C=\bigcap_{H\in\mathcal{H}}H\) where \(\mathcal{H}\) is a finite set of closed half-spaces. A **supporting hyperplane**\(\mathfrak{h}\) of \(C\) is the boundary of a half-space in \(\mathcal{H}\) such that \(C\cap\mathfrak{h}\neq\varnothing\). A **face** of \(C\) is a set of the form \(C\cap\mathfrak{h}\) where \(\mathfrak{h}\) is a supporting hyperplane._
_Remark_.: The union of the faces of a polyhedron \(C\) form the boundary of \(C\).
**Definition 2**.: _A **continuous countably (finitely) piecewise linear function** is a continuous function such that there is a countable (finite) collection of convex polyhedra that cover the domain where the function is affine when restricted to each polyhedron._
_Remark_.: The requirement of continuity in the definition does not impose any limitations on the results. Every function with a representation of the form \(f(\boldsymbol{x})=\int_{\mathbb{S}^{n}\times\mathbb{R}}\sigma(\boldsymbol{a} \cdot\boldsymbol{x}-b)-\sigma(-b)\,\mathrm{d}\mu(\boldsymbol{a},b)+c_{0}\) is continuous.
The basis of the argument is that while hyperplanes can be induced by many different measures, the creases associated with boundaries in a piecewise linear function can only be created by point masses. To isolate the parts of the measure that induce non-affineness, the measure is associated with a measure on the projective \(n\)-sphere cross \(\mathbb{R}\). Similar to this procedure, Ongie et al. in [15] decomposed measures in \(\mathcal{M}(\mathbb{S}^{n}\times\mathbb{R})\) into even and odd components, where the odd component induced an affine function and the even component was unique. We use \(\mathcal{S}^{n}\) as a representation of the projective \(n\)-sphere.
The following lemmas reduce the problem to measures in \(\mathcal{M}(\mathcal{S}^{n}\times\mathbb{R})\). This reduction is possible for any compactly supported measure in \(\mathcal{M}(\mathbb{R}^{n+1}\times\mathbb{R})\) or any measure in \(\mathcal{M}(\mathbb{S}^{n}\times\mathbb{R})\).
**Lemma 1**.: _Suppose \(\tau\) is a compactly supported measure in \(\mathcal{M}(\mathbb{R}^{n+1}\times\mathbb{R})\). Then, there exists \(\mu\in\mathcal{M}(\mathbb{S}^{n}\times\mathbb{R})\) such that for all \(\boldsymbol{x}\in\mathbb{R}^{n+1}\),_
\[\int_{\mathbb{R}^{n+1}\times\mathbb{R}}\sigma\left(\boldsymbol{a}\cdot \boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\tau(\boldsymbol{a},b)=\int_{ \mathbb{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x} -b\right)-\sigma(-b)\,\mathrm{d}\mu(\boldsymbol{a},b).\]
Proof.: Let \(g:\left(\mathbb{R}^{n+1}\setminus\{\boldsymbol{0}\}\right)\times\mathbb{R} \rightarrow\mathbb{S}^{n}\times\mathbb{R}\) be defined as \(g\left(\boldsymbol{a},b\right)=\left(\frac{\boldsymbol{a}}{\|\boldsymbol{a} \|},\frac{b}{\|\boldsymbol{a}\|}\right)\). Let \(\mu_{1}\) be the Borel measure defined as \(\mu_{1}(E)=\int_{E}\|\boldsymbol{a}\|\,\mathrm{d}\tau\left(\boldsymbol{a},b\right)\). Since \(\tau\) has
compact support and is finite, \(|\mu_{1}|(\mathbb{R}^{n+1}\times\mathbb{R})<\infty\) and \(\mu_{1}\in\mathcal{M}(\mathbb{R}^{n+1}\times\mathbb{R})\). Let \(\mu\coloneqq\mu_{1}\circ g^{-1}\). Then,
\[\int_{\mathbb{R}^{n+1}\times\mathbb{R}}\sigma\left(\boldsymbol{a} \cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\tau\left(\boldsymbol{a},b\right)\] \[=\int_{(\mathbb{R}^{n+1}\setminus\left\{\boldsymbol{0}\right\}) \times\mathbb{R}}\|\boldsymbol{a}\|\left(\sigma\left(\frac{\boldsymbol{a}}{ \|\boldsymbol{a}\|}\cdot x-\frac{b}{\|\boldsymbol{a}\|}\right)-\sigma\left(- \frac{b}{\|\boldsymbol{a}\|}\right)\right)\mathrm{d}\tau\left(\boldsymbol{a},b\right)\] \[\quad+\int_{\left\{\boldsymbol{0}\right\}\times\mathbb{R}}\sigma \left(-b\right)-\sigma(-b)\,\mathrm{d}\tau\left(\boldsymbol{a},b\right)\] \[=\int_{(\mathbb{R}^{n+1}\setminus\left\{\boldsymbol{0}\right\}) \times\mathbb{R}}\sigma\left(\frac{\boldsymbol{a}}{\|\boldsymbol{a}\|}\cdot \boldsymbol{x}-\frac{b}{\|\boldsymbol{a}\|}\right)-\sigma\left(-\frac{b}{\| \boldsymbol{a}\|}\right)\mathrm{d}\mu_{1}\left(\boldsymbol{a},b\right)+0\] \[=\int_{\mathbb{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol{a} \cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\mu\left(\boldsymbol{a},b \right).\]
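The change of variables above rests on the pointwise identity \(\|\boldsymbol{a}\|\left(\sigma\left(\frac{\boldsymbol{a}}{\|\boldsymbol{a}\|}\cdot\boldsymbol{x}-\frac{b}{\|\boldsymbol{a}\|}\right)-\sigma\left(-\frac{b}{\|\boldsymbol{a}\|}\right)\right)=\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)\), a consequence of the positive homogeneity of \(\sigma\). A numerical spot check (illustrative, not part of the proof):

```python
import math
import random

def relu(t):
    return max(0.0, t)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def rescaling_gap(a, b, x):
    """Difference between the two sides of the homogeneity identity; ~0."""
    norm = math.sqrt(dot(a, a))
    lhs = norm * (relu(dot(a, x) / norm - b / norm) - relu(-b / norm))
    rhs = relu(dot(a, x) - b) - relu(-b)
    return lhs - rhs
```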
**Lemma 2**.: _Suppose \(\tau\in\mathcal{M}\left(\mathbb{S}^{n}\times\mathbb{R}\right)\). Then, there exists \(\mu\in\mathcal{M}\left(\mathcal{S}^{n}\times\mathbb{R}\right)\) and \(\boldsymbol{a}_{0}\in\mathbb{R}^{n+1}\) such that for all \(\boldsymbol{x}\in\mathbb{R}^{n+1}\),_
\[\int_{\mathbb{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol{a}\cdot \boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\tau\left(\boldsymbol{a},b \right)=\int_{\mathcal{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol{a}\cdot \boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\mu\left(\boldsymbol{a},b\right) +\boldsymbol{a}_{0}\cdot\boldsymbol{x}.\]
Proof.: Let \(g\left(\boldsymbol{a},b\right)=\left(-\boldsymbol{a},-b\right)\) on \(\mathbb{S}^{n}\times\mathbb{R}\), so that \(\tau\circ g^{-1}\) restricted to \(\mathcal{S}^{n}\times\mathbb{R}\) records the mass \(\tau\) places on \(-\mathcal{S}^{n}\times\mathbb{R}\). Let \(\mu\) be the restriction of \(\tau+\tau\circ g^{-1}\) to \(\mathcal{S}^{n}\times\mathbb{R}\) and \(\boldsymbol{a}_{0}\coloneqq-\int_{\mathcal{S}^{n}\times\mathbb{R}}\boldsymbol{a}\,\mathrm{d}\tau\circ g^{-1}\left(\boldsymbol{a},b\right)\). Note, \(\sigma\left(-x\right)=\sigma\left(x\right)-x\). It follows
\[\begin{split}&\int_{\mathbb{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\tau\left(\boldsymbol{a},b\right)\\
&=\int_{\mathcal{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\tau\left(\boldsymbol{a},b\right)+\int_{-\mathcal{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\tau\left(\boldsymbol{a},b\right)\\
&=\int_{\mathcal{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\tau\left(\boldsymbol{a},b\right)\\
&\quad+\int_{\mathcal{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b\right)-\left(\boldsymbol{a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)-b\,\mathrm{d}\tau\circ g^{-1}\left(\boldsymbol{a},b\right)\\
&=\int_{\mathcal{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\tau\left(\boldsymbol{a},b\right)\\
&\quad+\int_{\mathcal{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\tau\circ g^{-1}\left(\boldsymbol{a},b\right)-\int_{\mathcal{S}^{n}\times\mathbb{R}}\boldsymbol{a}\cdot\boldsymbol{x}\,\mathrm{d}\tau\circ g^{-1}\left(\boldsymbol{a},b\right)\\
&=\int_{\mathcal{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\mu\left(\boldsymbol{a},b\right)+\boldsymbol{a}_{0}\cdot\boldsymbol{x}.\end{split}\]
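The pointwise identity driving this computation is that evaluating the integrand at the reflected point \(\left(-\boldsymbol{a},-b\right)\) costs exactly a linear term: \(\sigma\left(-\boldsymbol{a}\cdot\boldsymbol{x}+b\right)-\sigma(b)=\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)-\boldsymbol{a}\cdot\boldsymbol{x}\), using \(\sigma(-t)=\sigma(t)-t\) twice. A spot check (illustrative only):

```python
import random

def relu(t):
    return max(0.0, t)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def reflection_gap(a, b, x):
    """[sigma(-a.x + b) - sigma(b)] - [sigma(a.x - b) - sigma(-b) - a.x];
    identically zero by sigma(-t) = sigma(t) - t."""
    lhs = relu(-dot(a, x) + b) - relu(b)
    rhs = relu(dot(a, x) - b) - relu(-b) - dot(a, x)
    return lhs - rhs
```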
The main result will be a consequence of the following theorem, which will be proved by induction.
**Theorem 1**.: _Suppose \(\mu\in\mathcal{M}\left(\mathcal{S}^{n}\times\mathbb{R}\right)\) is such that_
1. \(\mu\) _is atomless_
2. \(f(\boldsymbol{x})=\int_{\mathcal{S}^{n}\times\mathbb{R}}\sigma\left( \boldsymbol{a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\mu\left( \boldsymbol{a},b\right)\) _is a continuous countably piecewise linear function from_ \(\mathbb{R}^{n+1}\) _to_ \(\mathbb{R}\)_._
_Then, \(\mu\) is the zero measure._
We will utilize that first order directional derivatives are constant on affine polyhedra. To simplify calculations, we will be particularly interested in directional derivatives in the direction \(\boldsymbol{e}_{n+1}\).
**Lemma 3**.: _Suppose \(f(\boldsymbol{x})=\int_{\mathcal{S}^{n}\times\mathbb{R}}\sigma\left( \boldsymbol{a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\mu\left( \boldsymbol{a},b\right)\) where \(\mu\in\mathcal{M}\left(\mathcal{S}^{n}\times\mathbb{R}\right)\). Then,_
\[D_{\boldsymbol{e}_{n+1}^{+}}f(\boldsymbol{x})=\int_{\{(\boldsymbol{a},b)\in \mathcal{S}^{n}\times\mathbb{R}\mid\boldsymbol{a}\cdot\boldsymbol{x}\geq b\} }\boldsymbol{a}\cdot\boldsymbol{e}_{n+1}\,\mathrm{d}\mu\left(\boldsymbol{a},b \right).\]
Proof.: First,
\[\lim_{h\to 0^{+}}\frac{f(\boldsymbol{x}+h\boldsymbol{e}_{n+1})-f( \boldsymbol{x})}{h}\] \[=\lim_{h\to 0^{+}}\int_{\mathcal{S}^{n}\times\mathbb{R}}\frac{ \sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b+\boldsymbol{a}\cdot(h \boldsymbol{e}_{n+1})\right)-\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b \right)}{h}\,\mathrm{d}\mu\left(\boldsymbol{a},b\right).\]
Whenever \(\boldsymbol{a}\cdot\boldsymbol{x}<b\), for sufficiently small \(h\),
\[\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b+\boldsymbol{a}\cdot(h \boldsymbol{e}_{n+1})\right)=\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b \right)=0.\]
Thus, when \(\boldsymbol{a}\cdot\boldsymbol{x}<b\),
\[\lim_{h\to 0^{+}}\frac{\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b+ \boldsymbol{a}\cdot(h\boldsymbol{e}_{n+1})\right)-\sigma\left(\boldsymbol{a} \cdot\boldsymbol{x}-b\right)}{h}=0.\]
By definition of \(\mathcal{S}^{n}\), \(\boldsymbol{a}\cdot(h\boldsymbol{e}_{n+1})\geq 0\) when \(h\geq 0\) for all \(\boldsymbol{a}\in\mathcal{S}^{n}\). Hence, if \(\boldsymbol{a}\cdot\boldsymbol{x}-b\geq 0\), then \(\boldsymbol{a}\cdot\boldsymbol{x}-b+\boldsymbol{a}\cdot(h\boldsymbol{e}_{n+1})\geq 0\) for all \(\boldsymbol{a}\in\mathcal{S}^{n}\) and \(h\geq 0\). It follows whenever \(\boldsymbol{a}\cdot\boldsymbol{x}\geq b\),
\[\lim_{h\to 0^{+}}\frac{\sigma\left(\boldsymbol{a}\cdot \boldsymbol{x}-b+\boldsymbol{a}\cdot(h\boldsymbol{e}_{n+1})\right)-\sigma \left(\boldsymbol{a}\cdot\boldsymbol{x}-b\right)}{h}\] \[=\lim_{h\to 0^{+}}\frac{\boldsymbol{a}\cdot\boldsymbol{x}-b+ \boldsymbol{a}\cdot(h\boldsymbol{e}_{n+1})-(\boldsymbol{a}\cdot\boldsymbol{x }-b)}{h}=\lim_{h\to 0^{+}}\frac{\boldsymbol{a}\cdot(h\boldsymbol{e}_{n+1})}{h}= \boldsymbol{a}\cdot\boldsymbol{e}_{n+1}.\]
Further, as \(\|\boldsymbol{a}\|=\|\boldsymbol{e}_{n+1}\|=1\), for all \(\boldsymbol{a},\boldsymbol{x},b\) and all \(h\geq 0\),
\[\left|\frac{\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b+\boldsymbol{a} \cdot(h\boldsymbol{e}_{n+1})\right)-\sigma\left(\boldsymbol{a}\cdot \boldsymbol{x}-b\right)}{h}\right|\leq\frac{|\boldsymbol{a}\cdot(h\boldsymbol {e}_{n+1})|}{h}\leq 1. \tag{4}\]
Since \(|\mu|(\mathcal{S}^{n}\times\mathbb{R})<\infty\), by Equation 4, Dominated Convergence Theorem applies. Therefore,
\[\lim_{h\to 0^{+}}\frac{f(\boldsymbol{x}+(h\boldsymbol{e}_{n+1}))-f( \boldsymbol{x})}{h}\] \[=\int_{\mathcal{S}^{n}\times\mathbb{R}}\lim_{h\to 0^{+}}\frac{ \sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b+\boldsymbol{a}\cdot(h \boldsymbol{e}_{n+1})\right)-\sigma\left(\boldsymbol{a}\cdot\boldsymbol{x}-b \right)}{h}\,\mathrm{d}\mu\left(\boldsymbol{a},b\right)\] \[=\int_{\{(\boldsymbol{a},b)\in\mathcal{S}^{n}\times\mathbb{R}\ |\ \boldsymbol{a}\cdot \boldsymbol{x}\geq b\}}\lim_{h\to 0^{+}}\frac{\boldsymbol{a}\cdot(h\boldsymbol{e}_{n+1}) }{h}\,\mathrm{d}\mu\left(\boldsymbol{a},b\right)\] \[=\int_{\{(\boldsymbol{a},b)\in\mathcal{S}^{n}\times\mathbb{R}\ |\ \boldsymbol{a}\cdot \boldsymbol{x}\geq b\}}\boldsymbol{a}\cdot\boldsymbol{e}_{n+1}\,\mathrm{d}\mu \left(\boldsymbol{a},b\right).\qed\]
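For an atomic measure (a finite sum of point masses, used here purely as an illustrative stand-in, since Lemma 3 holds for any finite measure), the formula can be compared against a one-sided finite difference; the atoms below are hypothetical:

```python
def relu(t):
    return max(0.0, t)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Hypothetical atoms (c_i, a_i, b_i); each a_i has positive last coordinate.
atoms = [(1.5, [0.6, 0.8], 0.2), (-0.7, [0.0, 1.0], -1.0), (0.3, [0.28, 0.96], 0.5)]

def f(x):
    return sum(c * (relu(dot(a, x) - b) - relu(-b)) for c, a, b in atoms)

def lemma3_formula(x):
    """Sum of c * (a . e_{n+1}) over atoms in the half-space a . x >= b."""
    return sum(c * a[-1] for c, a, b in atoms if dot(a, x) >= b)

def finite_difference(x, h=1e-7):
    """One-sided difference quotient in the direction e_{n+1}."""
    return (f([x[0], x[1] + h]) - f(x)) / h
```

The point [0.0, -1.0] below sits exactly on the kink of the second atom, where the one-sided nature of the derivative matters.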
We prove the one-dimensional case of Theorem 1 to serve as a base case for induction.
**Lemma 4**.: _Suppose \(\mu\in\mathcal{M}(\mathcal{S}^{0}\times\mathbb{R})\) is such that_
1. \(\mu\) _is atomless_
2. \(f(x)=\int_{\mathcal{S}^{0}\times\mathbb{R}}\sigma\left(ax-b\right)-\sigma(-b) \,\mathrm{d}\mu\left(a,b\right)\) _is a continuous countably piecewise linear function._
_Then, \(\mu\) is the zero measure._
Proof.: Since \(|\mathcal{S}^{0}|=|\{1\}|=1\), \(\mu\) is uniquely determined by its marginal measure on \(\mathbb{R}\), \(\mu_{\mathbb{R}}\). As \(f\) is countably piecewise linear, there are countably many intervals \(\{[q_{i},r_{i}]\}_{i\in\mathbb{N}}\) such that \(f\) is affine when restricted to each interval and \(\mathbb{R}\setminus\left(\bigcup_{i\in\mathbb{N}}(q_{i},r_{i})\right)\) is countable. Further, \(\mu\) is atomless, so \(\mu_{\mathbb{R}}\) is determined by its values on closed intervals that are subsets of intervals in \(\{(q_{i},r_{i})\}_{i\in\mathbb{N}}\)[12]. Thus, it suffices to show \(\mu_{\mathbb{R}}([x_{1},x_{2}])=0\) whenever \(f\) is affine on \((q,r)\) and \([x_{1},x_{2}]\subseteq(q,r)\). Suppose \(f\) is affine on \((q,r)\) and \([x_{1},x_{2}]\subseteq(q,r)\). Note,
\[f^{\prime}(x_{2}) =\int_{\{(a,b)\in\mathcal{S}^{0}\times\mathbb{R}\ |\ x_{2}\geq b\}}a\cdot 1\, \mathrm{d}\mu\left(a,b\right)=\int_{\{(a,b)\in\mathcal{S}^{0}\times\mathbb{R} \ |\ x_{2}\geq b\}}1\cdot 1\,\mathrm{d}\mu\left(a,b\right)\] \[=\int_{\{(a,b)\in\mathcal{S}^{0}\times\mathbb{R}\ |\ x_{2}\geq b\}}1\, \mathrm{d}\mu\left(a,b\right)=\mu\left(\left\{(a,b)\in\mathcal{S}^{0}\times \mathbb{R}\ |\ x_{2}\geq b\right\}\right)\]
Similarly,
\[f^{\prime}(x_{1})=\mu\left(\left\{(a,b)\in\mathcal{S}^{0}\times\mathbb{R}\ |\ x_{1}\geq b\right\}\right).\]
Therefore, as \(f\) is affine in between \(x_{1}\) and \(x_{2}\),
\[0=f^{\prime}(x_{2})-f^{\prime}(x_{1})=\mu\left(\{1\}\times\left(x_{1},x_{2} \right]\right)=\mu\left(\{1\}\times\left[x_{1},x_{2}\right]\right)=\mu_{ \mathbb{R}}([x_{1},x_{2}]).\]
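The derivative computation above is the discrete heart of the lemma: for a hypothetical discrete measure on \(\{1\}\times\mathbb{R}\) (illustrative only, since the lemma itself concerns atomless measures), \(f^{\prime}(x_{2})-f^{\prime}(x_{1})\) recovers exactly the mass placed on kinks in \((x_{1},x_{2}]\):

```python
def relu(t):
    return max(0.0, t)

# Hypothetical discrete measure on S^0 x R = {1} x R: weight c at kink b.
weights = [(2.0, -1.0), (-0.5, 0.3), (1.25, 2.0)]

def f(x):
    return sum(c * (relu(x - b) - relu(-b)) for c, b in weights)

def fprime(x, h=1e-7):
    """Right-sided derivative, numerically."""
    return (f(x + h) - f(x)) / h

def mass(x1, x2):
    """mu_R((x1, x2]) = sum of weights with kink in (x1, x2]."""
    return sum(c for c, b in weights if x1 < b <= x2)
```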
## 3. Constructing a Dense Set of Directions
Any set of countably many points has zero weight with respect to an atomless measure. In the one-dimensional case, this allows us to disregard points in \(\mathcal{S}^{0}\times\mathbb{R}\) associated with boundaries when determining \(\mu\) is the zero measure. However, in higher dimensions, there are more than countably many points associated with boundaries. Nonetheless, a carefully picked subset of points associated with boundaries will have zero weight with respect to \(\mu\) and will be large enough to ultimately conclude \(\mu\) is the zero measure. The first step to constructing this set is finding a large set of non-co-hyperplanar points.
**Proposition 1**.: _For every \(n\in\mathbb{N}\), there exists a set \(S\subseteq\mathbb{R}^{n}\) such that_
1. _For every open ball_ \(B\subseteq\mathbb{R}^{n}\)_,_ \(S\cap B\) _is uncountable_
2. _For every hyperplane_ \(P\subseteq\mathbb{R}^{n}\)_,_ \(|S\cap P|\leq n\)_._
Proof.: By [19], there is a set \(I\subseteq\mathbb{R}\) algebraically independent over \(\mathbb{Q}\) such that \(|I|=|\mathbb{R}|\).
There is a bijective function \(\phi:[n]\times\mathbb{N}\times\mathbb{R}\to I\).
Let \(\{B_{m}\}_{m\in\mathbb{N}}\) be an enumeration of open balls in \(\mathbb{R}^{n}\) centered at rational coordinates with rational radius.
Note, \(0\not\in I\). For every \(m\in\mathbb{N},r\in\mathbb{R}\), there exist \(q_{1,m,r},\ldots,q_{n,m,r}\in\mathbb{Q}\setminus\{0\}\) such that
\[(q_{1,m,r}\phi(1,m,r),\ldots,q_{n,m,r}\phi(n,m,r))\in B_{m}.\]
The set \(\{q_{\ell,m,r}\phi(\ell,m,r)\mid\ell\in[n],m\in\mathbb{N},r\in\mathbb{R}\}\) is also algebraically independent and each element has a unique representation of the form \(q_{\ell,m,r}\phi(\ell,m,r)\). Define
\[S\coloneqq\{(q_{1,m,r}\phi(1,m,r),\ldots,q_{n,m,r}\phi(n,m,r))\mid m\in \mathbb{N},r\in\mathbb{R}\}.\]
Consider an open ball \(B\subseteq\mathbb{R}^{n}\). Since \(\mathbb{Q}\) is dense, there is \(m_{0}\) such that \(B_{m_{0}}\subseteq B\). Further, \(\{(q_{1,m_{0},r}\phi(1,m_{0},r),\ldots,q_{n,m_{0},r}\phi(n,m_{0},r))\mid r\in \mathbb{R}\}\subseteq B_{m_{0}}\subseteq B\). It follows \(S\cap B\) is uncountable.
By way of contradiction, suppose there exist distinct \(\left(z_{0}^{1},\ldots,z_{0}^{n}\right),\ldots,\left(z_{n}^{1},\ldots,z_{n}^{n }\right)\in S\cap P\) for some hyperplane \(P\). It follows any \(n\) vectors between these points are linearly dependent, so
\[\det\begin{bmatrix}z_{0}^{1}-z_{1}^{1}&\ldots&z_{0}^{n}-z_{1}^{n}\\ \vdots&\ddots&\vdots\\ z_{0}^{1}-z_{n}^{1}&\ldots&z_{0}^{n}-z_{n}^{n}\end{bmatrix}=0. \tag{5}\]
The determinant is a polynomial over \(\mathbb{Q}\) in terms of \(z_{i}^{j}\). Since a unique \(z_{i}^{j}\), \(i\geq 1\), is an addend in each entry, the determinant cannot be the trivial polynomial. This contradicts the \(z_{i}^{j}\) being algebraically independent.
It follows for all hyperplanes \(P\), \(|S\cap P|\leq n\).
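The determinant criterion of Equation 5, that \(n+1\) points lie on a common hyperplane exactly when the difference vectors are linearly dependent, can be sketched numerically; the determinant routine below is a simple cofactor expansion, adequate for these small matrices:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cohyperplanar(points):
    """True iff the n+1 points in R^n satisfy Equation 5 (one common hyperplane)."""
    z0, rest = points[0], points[1:]
    M = [[z0[j] - zi[j] for j in range(len(z0))] for zi in rest]
    return abs(det(M)) < 1e-9
```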
**Corollary 1**.: _Suppose \(S\subseteq\mathbb{R}^{n}\) is as in Proposition 1. Let \(\phi:S\rightarrow\mathbb{R}\) and \(S^{\prime}\coloneqq\{(\boldsymbol{\zeta},\phi(\boldsymbol{\zeta}))\mid \boldsymbol{\zeta}\in S\}\subseteq\mathbb{R}^{n+1}\). For every \(n-1\) dimensional affine subspace \(P\subseteq\mathbb{R}^{n+1}\), \(|S^{\prime}\cap P|\leq n\)._
Proof.: Let \(\phi:S\rightarrow\mathbb{R}\). By way of contradiction, let \(P\) be an \(n-1\) dimensional affine subspace and suppose there are distinct points \((z_{0}^{1},\ldots,z_{0}^{n+1}),\ldots,(z_{n}^{1},\ldots,z_{n}^{n+1})\in S^{ \prime}\cap P\). Then,
\[\text{rank}\begin{bmatrix}z_{0}^{1}-z_{1}^{1}&\ldots&z_{0}^{n}-z_{1}^{n}\\ \vdots&\ddots&\vdots\\ z_{0}^{1}-z_{n}^{1}&\ldots&z_{0}^{n}-z_{n}^{n}\end{bmatrix}\leq\text{rank} \begin{bmatrix}z_{0}^{1}-z_{1}^{1}&\ldots&z_{0}^{n+1}-z_{1}^{n+1}\\ \vdots&\ddots&\vdots\\ z_{0}^{1}-z_{n}^{1}&\ldots&z_{0}^{n+1}-z_{n}^{n+1}\end{bmatrix}\leq n-1.\]
Therefore, as in Equation 5 of Proposition 1,
\[\det\begin{bmatrix}z_{0}^{1}-z_{1}^{1}&\ldots&z_{0}^{n}-z_{1}^{n}\\ \vdots&\ddots&\vdots\\ z_{0}^{1}-z_{n}^{1}&\ldots&z_{0}^{n}-z_{n}^{n}\end{bmatrix}=0,\]
a contradiction.
## 4. Proof of Main Theorem
**Lemma 5**.: _Let \(W\) be a metric space and \(\mu\in\mathcal{M}(W)\). Consider a collection of sets \(\mathcal{P}\subseteq\mathcal{B}(W)\) such that there exists a \(c\in\mathbb{N}\) where \(\left|\mu\right|\left(\bigcap_{i\in[c]}P_{i}\right)=0\) for all distinct \(P_{1},\ldots,P_{c}\in\mathcal{P}\). Then, there are only countably many \(P\in\mathcal{P}\) such that \(\left|\mu\right|(P)>0\)._
Proof.: Every uncountable family of sets of positive measure has an infinite subfamily with positive intersection [10]. The lemma follows from the contrapositive.
**Definition 3**.: _Suppose \(\boldsymbol{\zeta}_{0}\in\mathbb{R}^{n},y_{1},y_{2}\in\mathbb{R}\cup\left\{ \pm\infty\right\}\) with \(y_{1}\leq y_{2}\). Define_
\[\overline{L}_{\boldsymbol{\zeta}_{0}}(y_{1})\coloneqq\left\{(\boldsymbol{ \xi},v)\in\mathbb{R}^{n}\times\mathbb{R}\mid v-\boldsymbol{\xi}\cdot \boldsymbol{\zeta}_{0}=y_{1}\right\}\]
_and_
\[L_{\boldsymbol{\zeta}_{0}}(y_{1},y_{2})\coloneqq\left\{(\boldsymbol{\xi},v) \in\mathbb{R}^{n}\times\mathbb{R}\mid v-\boldsymbol{\xi}\cdot\boldsymbol{ \zeta}_{0}\in(y_{1},y_{2}]\right\}.\]
**Theorem 1**.: _Suppose \(\mu\in\mathcal{M}\left(\mathcal{S}^{n}\times\mathbb{R}\right)\) is such that_
1. \(\mu\) _is atomless_
2. \(f(\boldsymbol{x})=\int_{\mathcal{S}^{n}\times\mathbb{R}}\sigma\left( \boldsymbol{a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\mu\left( \boldsymbol{a},b\right)\) _is a continuous countably piecewise linear function from_ \(\mathbb{R}^{n+1}\) _to_ \(\mathbb{R}\)_._
_Then, \(\mu\) is the zero measure._
Proof.: The proof is separated into the following parts:
1. **Definitions and Maps Between Measure Spaces**
2. **Refinement of \(S\)**
3. **Vanishing Integrals over Line Segments**
4. **Vanishing Integrals over Half-Spaces**
5. **Conclusion with Radon-Nikodym**.
**Definitions and Maps Between Measure Spaces**
First, Lemma 4 proves the theorem for the case \(n=0\). For induction, assume the theorem holds for \(n-1\).
Suppose \(\mu\in\mathcal{M}\left(\mathcal{S}^{n}\times\mathbb{R}\right)\) satisfies all the hypotheses.
Let \(\mathcal{C}\) be a countable collection of convex polyhedra that cover the domain of \(f\) such that \(f\) is affine on each. Let \(\mathcal{F}_{1}\) be the set of faces of polyhedra in \(\mathcal{C}\) with normal vector not orthogonal to \(\boldsymbol{e}_{n+1}\).
Define the following sets of hyperplanes in \(\mathbb{R}^{n+1}\)
\[\mathcal{H}_{0}\coloneqq\left\{\mathfrak{h}_{\boldsymbol{a},b}\mid \boldsymbol{a}\in\mathcal{S}^{n},b\in\mathbb{R}\right\}\quad\text{ and }\quad\mathcal{H}_{1}\coloneqq\left\{\mathfrak{h}_{\boldsymbol{a},b}\mid \boldsymbol{a}\in\mathcal{S}^{n},\boldsymbol{a}\cdot\boldsymbol{e}_{n+1}\neq 0,b \in\mathbb{R}\right\}.\]
Define the map \(\gamma:\mathcal{S}^{n}\times\mathbb{R}\to\mathcal{H}_{0}\) as \(\gamma\left(\boldsymbol{a},b\right)=\mathfrak{h}_{\boldsymbol{a},b}\). By construction of \(\mathcal{S}^{n}\), this is bijective.
Define the map \(\psi:\mathcal{H}_{1}\to\mathbb{R}^{n}\times\mathbb{R}\) such that for \(\boldsymbol{a}=(a_{1},\ldots,a_{n+1})\),
\[\psi\left(\mathfrak{h}_{\boldsymbol{a},b}\right)=\left(\frac{a_{1}}{a_{n+1}}, \ldots,\frac{a_{n}}{a_{n+1}},\frac{b}{a_{n+1}}\right). \tag{6}\]
By definition of \(\mathcal{H}_{1}\), it is routine to verify \(\psi\) is well-defined and bijective.
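A sketch of \(\psi\) and its inverse makes the bijectivity concrete: given \((\boldsymbol{\xi},v)\), the unique preimage normal in the open upper half-sphere is \((\boldsymbol{\xi},1)\) rescaled to unit length (illustrative code, not part of the proof):

```python
import math

def psi(a, b):
    """Equation 6: the hyperplane h_{a,b}, with a_{n+1} != 0, maps to (xi, v)."""
    return [ai / a[-1] for ai in a[:-1]], b / a[-1]

def psi_inv(xi, v):
    """The unique preimage with unit normal in the open upper half-sphere."""
    norm = math.sqrt(1.0 + sum(t * t for t in xi))
    return [t / norm for t in xi] + [1.0 / norm], v / norm
```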
The image of \(\psi\) is \(\mathbb{R}^{n}\times\mathbb{R}\), however, elements in the image of \(\psi\) should _not_ be thought of as being in the domain of \(f\). Therefore, identify generic elements in the image of \(\psi\) with \((\boldsymbol{\xi},v)\in\mathbb{R}^{n}\times\mathbb{R}\) and call the space \(\Xi\times V\) where \(\Xi=\mathbb{R}^{n}\), \(V=\mathbb{R}\).
Define \(\varphi:\gamma^{-1}[\mathcal{H}_{1}]\to\Xi\times V\) as \(\varphi\coloneqq\psi\circ\gamma\). Then, \(\mu\circ\varphi^{-1}\) is a measure on \(\Xi\times V\).
Since \(\varphi\) is bijective, \(\mu\circ\varphi^{-1}\) is atomless.
For fixed \(\mathbf{\zeta}_{0}\in\mathbb{R}^{n}\), \(y_{0}\in\mathbb{R}\),
\[\psi\left[\left\{\mathfrak{h}_{\mathbf{a},b}\in\mathcal{H}_{1}\mid\mathbf{a}\cdot(\mathbf{ \zeta}_{0},y_{0})=b\right\}\right]=\left\{(\mathbf{\xi},v)\in\Xi\times V\mid v=y_{0 }+\mathbf{\xi}\cdot\mathbf{\zeta}_{0}\right\}. \tag{7}\]
That is, the image under \(\psi\) of the set of hyperplanes in \(\mathcal{H}_{1}\) which intersect \((\mathbf{\zeta}_{0},y_{0})\in\mathbb{R}^{n}\times\mathbb{R}\) is a hyperplane in \(\Xi\times V\).
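Equation 7 can be spot-checked: for any normal \(\boldsymbol{a}\) with \(\boldsymbol{a}\cdot\boldsymbol{e}_{n+1}\neq 0\), setting \(b=\boldsymbol{a}\cdot(\boldsymbol{\zeta}_{0},y_{0})\) forces \(\psi(\mathfrak{h}_{\boldsymbol{a},b})\) onto the hyperplane \(\{v=y_{0}+\boldsymbol{\xi}\cdot\boldsymbol{\zeta}_{0}\}\) (illustrative sketch):

```python
import random

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def psi(a, b):
    """Equation 6, for a normal a with nonzero last coordinate."""
    return [ai / a[-1] for ai in a[:-1]], b / a[-1]

def lands_on_image_hyperplane(zeta0, y0, a, tol=1e-9):
    """With b := a . (zeta0, y0), the hyperplane h_{a,b} passes through
    (zeta0, y0); Equation 7 predicts psi(h_{a,b}) satisfies v = y0 + xi . zeta0."""
    b = dot(a, list(zeta0) + [y0])
    xi, v = psi(a, b)
    return abs(v - (y0 + dot(xi, zeta0))) < tol
```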
**Refinement of \(S\)**
Let \(S\subseteq\mathbb{R}^{n}\) be the set in Proposition 1.
Suppose \(\mathfrak{h}\in\mathcal{H}_{1}\). Let \(\phi_{\mathfrak{h}}:\mathbb{R}^{n}\to\mathbb{R}\) be the unique function such that \((\mathbf{\zeta},\phi_{\mathfrak{h}}(\mathbf{\zeta}))\in\mathfrak{h}\) for all \(\mathbf{\zeta}\in\mathbb{R}^{n}\).
Let \(P_{\mathbf{\zeta},\mathfrak{h}}=\left\{(\mathbf{\xi},v)\in\Xi\times V\mid v=\phi_{ \mathfrak{h}}(\mathbf{\zeta})+\mathbf{\xi}\cdot\mathbf{\zeta}\right\}\). By Equation 7 and because \(\psi\) is injective, all hyperplanes in \(\psi^{-1}[P_{\mathbf{\zeta},\mathfrak{h}}]\) intersect the point \((\mathbf{\zeta},\phi_{\mathfrak{h}}(\mathbf{\zeta}))\) in the domain.
Let \(\mathcal{P}_{\mathfrak{h}}=\left\{P_{\mathbf{\zeta},\mathfrak{h}}\mid\mathbf{\zeta} \in S\right\}\).
For distinct \(\mathbf{\zeta}_{1},\ldots,\mathbf{\zeta}_{n+1}\in S\), consider \(\bigcap_{i\in[n+1]}P_{\mathbf{\zeta}_{i},\mathfrak{h}}\). It follows that any hyperplane in \(\psi^{-1}\left[\bigcap_{i\in[n+1]}P_{\mathbf{\zeta}_{i},\mathfrak{h}}\right]\) intersects the points \(\left\{(\mathbf{\zeta}_{1},\phi_{\mathfrak{h}}(\mathbf{\zeta}_{1})),\ldots,(\mathbf{ \zeta}_{n+1},\phi_{\mathfrak{h}}(\mathbf{\zeta}_{n+1}))\right\}\) where each \(\mathbf{\zeta}_{i}\in S\). By Corollary 1, these points do not lie on a common \(n-1\) dimensional affine subspace, so \(\mathfrak{h}\) is the only hyperplane in the domain of \(f\) intersecting \(\left\{(\mathbf{\zeta}_{1},\phi_{\mathfrak{h}}(\mathbf{\zeta}_{1})),\ldots,(\mathbf{ \zeta}_{n+1},\phi_{\mathfrak{h}}(\mathbf{\zeta}_{n+1}))\right\}\). It follows \(\bigcap_{i\in[n+1]}P_{\mathbf{\zeta}_{i},\mathfrak{h}}=\left\{\psi(\mathfrak{h})\right\}\). Since \(\mu\circ\varphi^{-1}\) is atomless, \(\left|\mu\circ\varphi^{-1}\right|\left(\bigcap_{i\in[n+1]}P_{\mathbf{\zeta}_{i}, \mathfrak{h}}\right)=0\).
By Lemma 5, there are only countably many \(P_{\mathbf{\zeta},\mathfrak{h}}\in\mathcal{P}_{\mathfrak{h}}\) such that \(\left|\mu\circ\varphi^{-1}\right|(P_{\mathbf{\zeta},\mathfrak{h}})>0\).
Define \(S_{\mathfrak{h}}\coloneqq\left\{\mathbf{\zeta}\in S\mid\left|\mu\circ\varphi^{-1} \right|(P_{\mathbf{\zeta},\mathfrak{h}})=0\right\}\), so \(S\setminus S_{\mathfrak{h}}\) is countable.
Let \(\mathcal{H}_{supp}\) be the set of supporting hyperplanes of polyhedra in \(\mathcal{C}\). Consider
\[S^{\prime}\coloneqq\bigcap_{\mathfrak{h}\in\mathcal{H}_{1}\cap\mathcal{H}_{ supp}}S_{\mathfrak{h}}.\]
Since \(\mathcal{H}_{supp}\) is countable, \(S\setminus S^{\prime}\) is countable. Since \(B\cap S\) is uncountable for all open balls \(B\subseteq\mathbb{R}^{n}\), \(S^{\prime}\) is dense in \(\mathbb{R}^{n}\). Notice, whenever \(\mathbf{\zeta}\in S^{\prime}\) and \((\mathbf{\zeta},y)\) is on a hyperplane in \(\mathcal{H}_{1}\cap\mathcal{H}_{supp}\),
\[\left|\mu\circ\varphi^{-1}\right|(\left\{(\mathbf{\xi},v)\in\Xi\times V\mid v=y+ \mathbf{\xi}\cdot\mathbf{\zeta}\right\})=0. \tag{8}\]
**Vanishing Integrals over Line Segments**
Suppose \(\mathbf{\zeta}_{0}\in\mathbb{R}^{n}\), \(y_{1},y_{2}\in\mathbb{R}\). Suppose \(y_{1}\leq y_{2}\).
By Equation 7, \(L_{\mathbf{\zeta}_{0}}(y_{1},y_{2})\) is the image under \(\psi\) of hyperplanes in \(\mathcal{H}_{1}\) which intersect the line segment between \((\mathbf{\zeta}_{0},y_{1})\) (exclusive) and \((\mathbf{\zeta}_{0},y_{2})\) (inclusive). Therefore,
\[\varphi^{-1}\left[L_{\mathbf{\zeta}_{0}}(y_{1},y_{2})\right]=\left\{(\mathbf{a},b)\in \mathcal{S}^{n}\times\mathbb{R}\mid\exists y^{\prime}\in(y_{1},y_{2}]\;\;\mathbf{a} \cdot(\mathbf{\zeta}_{0},y^{\prime})=b,\;\mathbf{a}\cdot\mathbf{e}_{n+1}\neq 0\right\}.\]
If \(\varphi\left(\mathbf{a},b\right)=(\mathbf{\xi},v)\), then
\[\frac{1}{\sqrt{1+\sum_{i\in[n]}\xi_{i}^{2}}}=\frac{1}{\sqrt{1+\sum_{i\in[n]} \frac{a_{i}^{2}}{a_{n+1}^{2}}}}=\frac{a_{n+1}}{\sqrt{\sum_{i\in[n+1]}a_{i}^{2}} }=a_{n+1}=\mathbf{a}\cdot\mathbf{e}_{n+1}.\]
Therefore, as \((\mathbf{a},b)\) such that \(\mathbf{a}\cdot\mathbf{e}_{n+1}=0\) do not contribute to the integral,
\[\int_{L_{\mathbf{\zeta}_{0}}(y_{1},y_{2})}\frac{1}{\sqrt{1+\|\mathbf{\xi}\|^{2}}}\, \mathrm{d}\mu\circ\varphi^{-1}\left(\mathbf{\xi},v\right)=\int_{\varphi^{-1}\left[ L_{\mathbf{\zeta}_{0}}(y_{1},y_{2})\right]}\mathbf{a}\cdot\mathbf{e}_{n+1}\,\mathrm{d}\mu\left(\mathbf{a},b\right)\] \[=\int_{\left\{(\mathbf{a},b)\in\mathcal{S}^{n}\times\mathbb{R}\mid \exists y^{\prime}\in(y_{1},y_{2}]\;\;\mathbf{a}\cdot(\mathbf{\zeta}_{0},y^{\prime})=b \right\}}\mathbf{a}\cdot\mathbf{e}_{n+1}\,\mathrm{d}\mu\left(\mathbf{a},b\right).\]
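The weight identity used in this change of variables, \(1/\sqrt{1+\|\boldsymbol{\xi}\|^{2}}=\boldsymbol{a}\cdot\boldsymbol{e}_{n+1}\) when \(\varphi(\boldsymbol{a},b)=(\boldsymbol{\xi},v)\) with \(\xi_{i}=a_{i}/a_{n+1}\), can be checked numerically (an illustration of ours, not part of the proof; variable names are invented):

```python
import numpy as np

# Sanity check: if a is a unit vector with positive last coordinate and
# xi_i = a_i / a_{n+1}, then 1/sqrt(1 + ||xi||^2) equals a_{n+1} = a . e_{n+1}.
rng = np.random.default_rng(0)
n = 5
a = rng.normal(size=n + 1)
a[-1] = abs(a[-1]) + 0.1   # ensure a . e_{n+1} > 0
a /= np.linalg.norm(a)     # project onto the unit sphere
xi = a[:-1] / a[-1]
lhs = 1.0 / np.sqrt(1.0 + np.sum(xi**2))
assert np.isclose(lhs, a[-1])
```

The identity holds because \(1+\sum_{i}a_{i}^{2}/a_{n+1}^{2}=\|\boldsymbol{a}\|^{2}/a_{n+1}^{2}=1/a_{n+1}^{2}\) for unit \(\boldsymbol{a}\).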
Recall, for \(y\in\mathbb{R}\),
\[D_{\boldsymbol{e}_{n+1}^{+}}f(\boldsymbol{\zeta}_{0},y)=\int_{\{(\boldsymbol{a}, b)\in\mathcal{S}^{n}\times\mathbb{R}\mid\boldsymbol{a}\cdot(\boldsymbol{\zeta}_{0},y) \geq b\}}\boldsymbol{a}\cdot\boldsymbol{e}_{n+1}\,\mathrm{d}\mu\left( \boldsymbol{a},b\right).\]
By definition of \(\mathcal{S}^{n}\), \(y\mapsto\boldsymbol{a}\cdot(\boldsymbol{\zeta}_{0},y)\) is a non-decreasing, continuous function on \(\mathbb{R}\) for any fixed \(\boldsymbol{a}\in\mathcal{S}^{n}\). Thus, \(\boldsymbol{a}\cdot(\boldsymbol{\zeta}_{0},y_{2})\geq\boldsymbol{a}\cdot( \boldsymbol{\zeta}_{0},y_{1})\) for all \(\boldsymbol{a}\in\mathcal{S}^{n}\). Further, \(\boldsymbol{a}\cdot(\boldsymbol{\zeta}_{0},y_{2})\geq b\) and \(\boldsymbol{a}\cdot(\boldsymbol{\zeta}_{0},y_{1})<b\) if and only if \(\boldsymbol{a}\cdot(\boldsymbol{\zeta}_{0},y^{\prime})=b\) for some \(y^{\prime}\in(y_{1},y_{2}]\). Therefore,
\[D_{\boldsymbol{e}_{n+1}^{+}}f(\boldsymbol{\zeta}_{0},y_{2})-D_{\boldsymbol{e }_{n+1}^{+}}f(\boldsymbol{\zeta}_{0},y_{1})=\int_{L_{\boldsymbol{\zeta}_{0}}( y_{1},y_{2})}\frac{1}{\sqrt{1+\|\boldsymbol{\xi}\|^{2}}}\,\mathrm{d}\mu\circ \varphi^{-1}\left(\boldsymbol{\xi},v\right).\]
It follows whenever \(D_{\boldsymbol{e}_{n+1}^{+}}f(\boldsymbol{\zeta}_{0},y_{1})=D_{\boldsymbol{e }_{n+1}^{+}}f(\boldsymbol{\zeta}_{0},y_{2})\),
\[\int_{L_{\boldsymbol{\zeta}_{0}}(y_{1},y_{2})}\frac{1}{\sqrt{1+\|\boldsymbol{ \xi}\|^{2}}}\,\mathrm{d}\mu\circ\varphi^{-1}\left(\boldsymbol{\xi},v\right)=0. \tag{9}\]
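For intuition, the relation between the jump of the one-sided directional derivative and the integral over \(L_{\boldsymbol{\zeta}_{0}}(y_{1},y_{2})\) can be verified numerically for a purely atomic signed measure with \(n=1\) (a sketch of ours, not the authors' code; the toy measure and all names are invented):

```python
import numpy as np

# Illustration (atomic measure, n = 1): for
#   f(x) = sum_k w_k * (relu(a_k . x - b_k) - relu(-b_k)),
# the jump of the right directional derivative along e_2 between heights
# y1 and y2 equals the weighted sum over atoms whose breakpoint is crossed,
# i.e. those with a_k . (z0, y1) < b_k <= a_k . (z0, y2).
rng = np.random.default_rng(0)
relu = lambda t: np.maximum(t, 0.0)

K = 50
theta = rng.uniform(0.1, np.pi - 0.1, K)         # keep a_k . e_2 > 0
A = np.stack([np.cos(theta), np.sin(theta)], 1)  # unit vectors a_k
b = rng.normal(size=K)
w = rng.normal(size=K)                           # signed weights

f = lambda x: float(np.sum(w * (relu(A @ x - b) - relu(-b))))
# Right directional derivative along e_2 via a one-sided difference.
D_plus = lambda x, h=1e-7: (f(x + np.array([0.0, h])) - f(x)) / h

z0, y1, y2 = 0.3, -1.0, 2.0
lhs = D_plus(np.array([z0, y2])) - D_plus(np.array([z0, y1]))
t1, t2 = A @ np.array([z0, y1]), A @ np.array([z0, y2])
crossed = (t1 < b) & (b <= t2)
rhs = float(np.sum(w[crossed] * A[crossed, 1]))
assert abs(lhs - rhs) < 1e-4
```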
**Vanishing Integrals over Half-Spaces**
Consider \(\boldsymbol{\zeta}_{0}\in S^{\prime}\). Consider an interval \((y_{0},\infty)\subseteq\mathbb{R}\).
For sets \(E\subseteq\mathbb{R}^{n+1}\), let \(\mathrm{ri}_{\boldsymbol{\zeta}_{0}}(E)\) denote the relative interior of \(E\cap(\{\boldsymbol{\zeta}_{0}\}\times(-\infty,\infty))\) with respect to \(\{\boldsymbol{\zeta}_{0}\}\times(-\infty,\infty)\). Then, define
\[J\coloneqq\left\{y\in(y_{0},\infty)\mid(\boldsymbol{\zeta}_{0},y)\in\bigcup _{C\in\mathcal{C}}\mathrm{ri}_{\boldsymbol{\zeta}_{0}}\left(C\right)\right\}.\]
It follows \(J\) is open. Then, there are countably many \(q_{i},r_{i}\in\mathbb{R}\cup\{\pm\infty\}\) such that \(J=\bigcup_{i\in\mathbb{N}}(q_{i},r_{i})\), the intervals pairwise disjoint.
Moreover, \(D_{\boldsymbol{e}_{n+1}^{+}}f(\boldsymbol{\zeta}_{0},y)\) is constant on \(\mathrm{ri}_{\boldsymbol{\zeta}_{0}}\left(C\right)\) for every \(C\in\mathcal{C}\). As locally constant functions are constant on connected components, \(D_{\boldsymbol{e}_{n+1}^{+}}f(\boldsymbol{\zeta}_{0},y)\) is constant on \(\{\boldsymbol{\zeta}_{0}\}\times(q_{i},r_{i})\) for all \(i\in\mathbb{N}\). By Equation 9, for all \(m\in\mathbb{N}\),
\[\int_{L_{\boldsymbol{\zeta}_{0}}\left(q_{i}+\frac{1}{m},r_{i}-\frac{1}{m} \right)}\frac{1}{\sqrt{1+\|\boldsymbol{\xi}\|^{2}}}\,\mathrm{d}\mu\circ\varphi ^{-1}\left(\boldsymbol{\xi},v\right)=0. \tag{10}\]
Thus, define \(E_{m}\coloneqq\bigcup_{i\in\mathbb{N}}L_{\boldsymbol{\zeta}_{0}}\left(q_{i}+ \frac{1}{m},r_{i}-\frac{1}{m}\right)\) for \(m\in\mathbb{N}\). By construction, this is a disjoint union. Therefore, by Equation 10, for all \(m\in\mathbb{N}\),
\[\int_{E_{m}}\frac{1}{\sqrt{1+\|\boldsymbol{\xi}\|^{2}}}\,\mathrm{d}\mu\circ \varphi^{-1}\left(\boldsymbol{\xi},v\right)=0.\]
Consider \(C\in\mathcal{C}\). Suppose \(y_{1}\in\mathbb{R}\) is such that \((\boldsymbol{\zeta}_{0},y_{1})\in C\) and \((\boldsymbol{\zeta}_{0},y_{1})\) is not on a face of \(C\) in \(\mathcal{F}_{1}\). In particular, if \((\boldsymbol{\zeta}_{0},y_{1})\) is not on the interior of \(C\), it lies only on a face of \(C\) with normal vector orthogonal to \(\boldsymbol{e}_{n+1}\). As \(C\) has only finitely many faces, it follows there is \(\delta>0\) such that \((\boldsymbol{\zeta}_{0},y_{1}+\epsilon)\in C\) whenever \(|\epsilon|<\delta\). Therefore, \((\boldsymbol{\zeta}_{0},y_{1})\in\mathrm{ri}_{\boldsymbol{\zeta}_{0}}(C)\).
Thus, \((y_{0},\infty)\setminus J\subseteq\left\{y\in\mathbb{R}\mid(\boldsymbol{\zeta} _{0},y)\in\bigcup_{F\in\mathcal{F}_{1}}F\right\}.\) Further, for every \(F\in\mathcal{F}_{1}\), \(|F\cap(\{\boldsymbol{\zeta}_{0}\}\times(y_{0},\infty))|\leq 1\). Therefore, as \(\mathcal{F}_{1}\) is countable, \((y_{0},\infty)\setminus J\) is countable.
Suppose \(b_{0}\in(y_{0},\infty)\setminus J\). Then, \((\boldsymbol{\zeta}_{0},b_{0})\in\bigcup_{F\in\mathcal{F}_{1}}F\subseteq\bigcup_{\mathfrak{h}\in\mathcal{H}_{1}\cap\mathcal{H}_{supp}}\mathfrak{h}\). As \(\boldsymbol{\zeta}_{0}\in S^{\prime}\), by Equation 8,
\[\left|\mu\circ\varphi^{-1}\right|\left(\overline{L}_{\boldsymbol{\zeta}_{0}}(b_{ 0})\right)=\left|\mu\circ\varphi^{-1}\right|\left(\{(\boldsymbol{\xi},v)\in \Xi\times V\mid v=b_{0}+\boldsymbol{\xi}\cdot\boldsymbol{\zeta}_{0}\}\right)=0. \tag{11}\]
Thus, as \((y_{0},\infty)\setminus J\) is countable, \(\left|\mu\circ\varphi^{-1}\right|\left(\bigcup_{b\in(y_{0},\infty)\setminus J} \overline{L}_{\mathbf{\zeta}_{0}}(b)\right)=0\). It follows for all \(m\in\mathbb{N}\),
\[\int_{E_{m}\cup\bigcup_{b\in(y_{0},\infty)\setminus J}\overline{L}_{\mathbf{\zeta}_ {0}}(b)}\frac{1}{\sqrt{1+\|\mathbf{\xi}\|^{2}}}\,\mathrm{d}\mu\circ\varphi^{-1} \left(\mathbf{\xi},v\right)=0.\]
Further, \(E_{m}\cup\bigcup_{b\in(y_{0},\infty)\setminus J}\overline{L}_{\mathbf{\zeta}_{0}} (b)\to\{(\mathbf{\xi},v)\in\Xi\times V\mid v>y_{0}+\mathbf{\xi}\cdot\mathbf{\zeta}_{0}\}\) as \(m\to\infty\). By Dominated Convergence Theorem, as \(\mu\circ\varphi^{-1}\) is finite,
\[\int_{v>y_{0}+\mathbf{\xi}\cdot\mathbf{\zeta}_{0}}\frac{1}{\sqrt{1+\|\mathbf{\xi}\|^{2}}} \,\mathrm{d}\mu\circ\varphi^{-1}\left(\mathbf{\xi},v\right)=0.\]
Similarly,
\[\int_{v<y_{0}+\mathbf{\xi}\cdot\mathbf{\zeta}_{0}}\frac{1}{\sqrt{1+\|\mathbf{\xi}\|^{2}}} \,\mathrm{d}\mu\circ\varphi^{-1}\left(\mathbf{\xi},v\right)=0.\]
The equation \(v=y_{0}+\mathbf{\xi}\cdot\mathbf{\zeta}_{0}\) is equivalent to \((\mathbf{\zeta}_{0},-1)\cdot(\mathbf{\xi},v)=-y_{0}\). Therefore, for all \(\mathbf{\zeta}_{0}\in S^{\prime}\) and \(y_{0}\in\mathbb{R}\), when considering an open half-space \(H\) in \(\Xi\times V\) with a boundary defined by \((\mathbf{\zeta}_{0},-1)\cdot(\mathbf{\xi},v)=-y_{0}\), \(\int_{H}\frac{1}{\sqrt{1+\|\mathbf{\xi}\|^{2}}}\,\mathrm{d}\mu\circ\varphi^{-1} \left(\mathbf{\xi},v\right)=0\).
**Conclusion with Radon-Nikodym**
On the Borel sets of \(\Xi\times V\), define the measure \(\nu(E)=\int_{E}\frac{1}{\sqrt{1+\|\mathbf{\xi}\|^{2}}}\,\mathrm{d}\mu\circ\varphi^ {-1}\left(\mathbf{\xi},v\right)\).
Given a Hahn decomposition of \(\mu\circ\varphi^{-1}\) with positive set \(P\) and negative set \(N\), \(\nu(E)=\int_{E}(\chi_{P}-\chi_{N})\frac{1}{\sqrt{1+\|\mathbf{\xi}\|^{2}}}\, \mathrm{d}|\mu\circ\varphi^{-1}|\left(\mathbf{\xi},v\right)\).
By the previous part, if \(H\) is an open half-space with normal vector \((\mathbf{\zeta}_{0},-1)\) with \(\mathbf{\zeta}_{0}\in S^{\prime}\), then \(\nu(H)=0\).
Since \(S^{\prime}\) is dense in \(\mathbb{R}^{n}\), \(\nu(H)=0\) whenever the normal direction of the boundary of \(H\) lies in a dense set of directions. By a careful inspection of the proof of the Cramér–Wold theorem, it follows the characteristic function of \(\nu\), \(c_{\nu}(\mathbf{t})\coloneqq\int_{\mathbb{R}^{n+1}}e^{i\mathbf{t}\cdot\mathbf{x}}\,\mathrm{d}\nu(\mathbf{x})\), is zero on a dense subset of \(\mathbb{R}^{n+1}\)[5, Equation 4]. By the Dominated Convergence Theorem, in fact \(c_{\nu}\equiv 0\). Since characteristic functions uniquely determine finite measures, \(\nu\) is the zero measure [13, Theorem 15.9]. However, the Radon-Nikodym derivative of a measure is unique up to almost-everywhere equality. As \(0\) is a Radon-Nikodym derivative for the zero measure and \((\chi_{P}-\chi_{N})\frac{1}{\sqrt{1+\|\mathbf{\xi}\|^{2}}}\) is never \(0\), it follows \(|\mu\circ\varphi^{-1}|\left(\Xi\times V\right)=0\).
Since \(\varphi\) is bijective between \(\gamma^{-1}[\mathcal{H}_{1}]\) and \(\Xi\times V\), the support of \(\mu\) is contained in
\[(\mathcal{S}^{n}\times\mathbb{R})\setminus\gamma^{-1}[\mathcal{H}_{1}]=\{\mathbf{a} \in\mathcal{S}^{n}\mid\mathbf{a}\cdot\mathbf{e}_{n+1}=0\}\times\mathbb{R}.\]
By definition of \(\mathcal{S}^{n}\), the support of \(\mu\) is contained in a copy of \(\mathcal{S}^{n-1}\times\mathbb{R}\) embedded into \(\mathbb{S}^{n}\times\mathbb{R}\). That is, \(f(\mathbf{x})=\int_{\mathcal{S}^{n-1}\times\mathbb{R}}\sigma(\mathbf{a}\cdot\mathbf{x}-b) -\sigma(-b)\,\mathrm{d}\mu(\mathbf{a},b)\). Moreover, \(g:\mathbb{R}^{n}\to\mathbb{R}\) defined as
\[g(\mathbf{\zeta}) \coloneqq\int_{\mathcal{S}^{n-1}\times\mathbb{R}}\sigma\left( \mathbf{\alpha}\cdot\mathbf{\zeta}-b\right)-\sigma(-b)\,\mathrm{d}\mu_{\mathcal{S}^{n- 1}\times\mathbb{R}}\left(\mathbf{\alpha},b\right)\] \[=\int_{\mathcal{S}^{n-1}\times\mathbb{R}}\sigma\left((\mathbf{\alpha},0)\cdot(\mathbf{\zeta},0)-b\right)-\sigma(-b)\,\mathrm{d}\mu_{\mathcal{S}^{n-1} \times\mathbb{R}}\left(\mathbf{\alpha},b\right)=f(\mathbf{\zeta},0)\]
is countably piecewise linear. By the inductive hypothesis, \(\mu\) is the zero measure.
To finish the proof of the main result [3], it is necessary to split the measure into fully atomic and atomless parts and consider them separately. By first establishing point masses always induce non-affineness even when dense, we can deduce the fully
atomic and atomless components of the measure must both give rise to countably piecewise linear functions.
**Lemma 6**.: _Let \(\mu\in\mathcal{M}(\mathcal{S}^{n}\times\mathbb{R})\) and \(f(\boldsymbol{x}):=\int_{\mathcal{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol {a}\cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\mu\left(\boldsymbol{a},b\right)\). Suppose \(\mu\left(\left\{(\boldsymbol{c},d)\right\}\right)\neq 0\). Then, \(f(\boldsymbol{x})\) is not affine on every open ball in the domain of \(f\) intersecting \(\mathfrak{h}_{\boldsymbol{c},d}\)._
Proof.: Rotate the coordinate system of \(f\) such that \(\boldsymbol{c}\cdot\boldsymbol{e}_{n+1}\neq 0\). Define \(\Xi\times V\), \(\psi\), and \(\varphi\) as in Theorem 1. Let \(S\subseteq\mathbb{R}^{n}\) be the set in Proposition 1.
There exists a unique function \(\phi:\mathbb{R}^{n}\to\mathbb{R}\) such that \((\boldsymbol{\zeta},\phi(\boldsymbol{\zeta}))\in\mathfrak{h}_{\boldsymbol{c},d}\) for all \(\boldsymbol{\zeta}\in\mathbb{R}^{n}\).
By way of contradiction, suppose \(f\) is affine on an open ball \(B_{0}\) intersecting \(\mathfrak{h}_{\boldsymbol{c},d}\). Then, there is an uncountable set \(S^{\prime}\subseteq S\) such that \(\left\{(\boldsymbol{\zeta},\phi(\boldsymbol{\zeta}))\,\mid\,\boldsymbol{\zeta} \in S^{\prime}\right\}\subseteq B_{0}\).
For \(\boldsymbol{\zeta}\in S^{\prime}\), let \(P_{\boldsymbol{\zeta}}=\left\{\left(\boldsymbol{\xi},v\right)\in\Xi\times V\mid v=\phi(\boldsymbol{\zeta})+\boldsymbol{\xi}\cdot\boldsymbol{\zeta}\right\}\). Now, let \(\mathcal{P}^{\prime}_{\boldsymbol{c},d}=\left\{P_{\boldsymbol{\zeta}}\setminus\left\{\psi(\mathfrak{h}_{\boldsymbol{c},d})\right\}\,\middle|\,\boldsymbol{\zeta}\in S^{\prime}\right\}\).
For distinct \(\boldsymbol{\zeta}_{1},\ldots,\boldsymbol{\zeta}_{n+1}\in S^{\prime}\), consider \(\bigcap_{i\in[n+1]}P_{\boldsymbol{\zeta}_{i}}\setminus\left\{\psi(\mathfrak{h}_{\boldsymbol{c},d})\right\}\).
It follows any hyperplane in \(\psi^{-1}\left[\bigcap_{i\in[n+1]}P_{\boldsymbol{\zeta}_{i}}\setminus\left\{ \psi(\mathfrak{h}_{\boldsymbol{c},d})\right\}\right]\) intersects the points \(\left\{(\boldsymbol{\zeta}_{1},\phi(\boldsymbol{\zeta}_{1})),\ldots,( \boldsymbol{\zeta}_{n+1},\phi(\boldsymbol{\zeta}_{n+1}))\right\}\) in the domain where each \(\boldsymbol{\zeta}_{i}\in S\). By Corollary 1, these points do not lie on a common \(n-1\) dimensional affine subspace, so \(\mathfrak{h}_{\boldsymbol{c},d}\) is the only hyperplane intersecting \(\left\{(\boldsymbol{\zeta}_{1},\phi(\boldsymbol{\zeta}_{1})),\ldots,( \boldsymbol{\zeta}_{n+1},\phi(\boldsymbol{\zeta}_{n+1}))\right\}\). It follows \(\bigcap_{i\in[n+1]}P_{\boldsymbol{\zeta}_{i}}\setminus\left\{\psi(\mathfrak{ h}_{\boldsymbol{c},d})\right\}=\varnothing\) and \(\left|\mu\circ\varphi^{-1}\right|\left(\bigcap_{i\in[n+1]}P_{\boldsymbol{ \zeta}_{i}}\setminus\left\{\psi(\mathfrak{h}_{\boldsymbol{c},d})\right\} \right)=0\).
Therefore, by Lemma 5, there are only countably many \(P^{\prime}_{\boldsymbol{\zeta}}\in\mathcal{P}^{\prime}_{\boldsymbol{c},d}\) such that \(\left|\mu\circ\varphi^{-1}\right|\left(P^{\prime}_{\boldsymbol{\zeta}}\right)>0\).
Since \(S^{\prime}\) is uncountable, there is \(\boldsymbol{\zeta}_{0}\in S^{\prime}\) such that \(\left|\mu\circ\varphi^{-1}\right|\left(P_{\boldsymbol{\zeta}_{0}}\setminus \left\{\psi(\mathfrak{h}_{\boldsymbol{c},d})\right\}\right)=0\).
As \((\boldsymbol{\zeta}_{0},\phi(\boldsymbol{\zeta}_{0}))\in B_{0}\), there is \(\epsilon>0\) such that for all \(\delta<\epsilon\), \(f\) is affine on the line segment connecting \((\boldsymbol{\zeta}_{0},\phi(\boldsymbol{\zeta}_{0})-\delta)\) and \((\boldsymbol{\zeta}_{0},\phi(\boldsymbol{\zeta}_{0})+\delta)\).
By Equation 9 in Theorem 1, it follows \(\int_{L_{\boldsymbol{\zeta}_{0}}(\phi(\boldsymbol{\zeta}_{0})-\delta,\phi( \boldsymbol{\zeta}_{0})+\delta)}\frac{1}{\sqrt{1+\|\boldsymbol{\xi}\|^{2}}}\, \mathrm{d}\mu\circ\varphi^{-1}\left(\boldsymbol{\xi},v\right)=0\). Since this holds for all \(\delta\in(0,\epsilon)\), by the Dominated Convergence Theorem,
\[\int_{\overline{L}_{\boldsymbol{\zeta}_{0}}(\phi(\boldsymbol{\zeta}_{0}))} \frac{1}{\sqrt{1+\|\boldsymbol{\xi}\|^{2}}}\,\mathrm{d}\mu\circ\varphi^{-1} \left(\boldsymbol{\xi},v\right)=0. \tag{12}\]
Recall, \(\overline{L}_{\boldsymbol{\zeta}_{0}}(\phi(\boldsymbol{\zeta}_{0}))=\left\{( \boldsymbol{\xi},v)\in\Xi\times V\mid v=\phi(\boldsymbol{\zeta}_{0})+ \boldsymbol{\xi}\cdot\boldsymbol{\zeta}_{0}\right\}\). By Equation 12,
\[0 =\int_{\overline{L}_{\boldsymbol{\zeta}_{0}}(\phi(\boldsymbol{ \zeta}_{0}))}\frac{1}{\sqrt{1+\|\boldsymbol{\xi}\|^{2}}}\,\mathrm{d}\mu \circ\varphi^{-1}\left(\boldsymbol{\xi},v\right)\] \[=\int_{P_{\boldsymbol{\zeta}_{0}}\setminus\left\{\psi( \mathfrak{h}_{\boldsymbol{c},d})\right\}}\frac{1}{\sqrt{1+\|\boldsymbol{\xi}\|^{ 2}}}\,\mathrm{d}\mu\circ\varphi^{-1}\left(\boldsymbol{\xi},v\right)+\int_{\left\{ \psi(\mathfrak{h}_{\boldsymbol{c},d})\right\}}\frac{1}{\sqrt{1+\|\boldsymbol{ \xi}\|^{2}}}\,\mathrm{d}\mu\circ\varphi^{-1}\left(\boldsymbol{\xi},v\right)\] \[=0+\int_{\left\{(\boldsymbol{c},d)\right\}}\boldsymbol{a}\cdot \boldsymbol{e}_{n+1}\,\mathrm{d}\mu\left(\boldsymbol{a},b\right)=\mu\left( \left\{(\boldsymbol{c},d)\right\}\right)\cdot\left(\boldsymbol{c}\cdot\boldsymbol {e}_{n+1}\right).\]
This is a contradiction, since \(\mu\left(\left\{(\boldsymbol{c},d)\right\}\right)\neq 0\) and \(\boldsymbol{c}\cdot\boldsymbol{e}_{n+1}\neq 0\).
**Corollary 2**.: _Let \(f:\mathbb{R}^{n+1}\to\mathbb{R}\) be a continuous countably piecewise linear function. Suppose there is a countable collection \(\mathcal{C}\) of convex polyhedra covering \(\mathbb{R}^{n+1}\) such that \(f\) is affine on each polyhedron and each polyhedron has non-empty interior. Suppose there exist \(\mu\in\mathcal{M}(\mathbb{S}^{n}\times\mathbb{R})\) and \(c_{0}\in\mathbb{R}\) such that \(f(\boldsymbol{x})=\int_{\mathbb{S}^{n}\times\mathbb{R}}\sigma\left(\boldsymbol{a} \cdot\boldsymbol{x}-b\right)-\sigma(-b)\,\mathrm{d}\mu\left(\boldsymbol{a},b \right)+c_{0}\). Then, there are \(r_{0},r_{(\boldsymbol{c},d)}\in\mathbb{R}\) and a countable set \(M\subseteq\mathcal{S}^{n}\times\mathbb{R}\) such that \(f(\boldsymbol{x})=r_{0}+\sum_{(\boldsymbol{c},d)\in M}r_{(\boldsymbol{c},d)} \sigma(\boldsymbol{c}\cdot\boldsymbol{x}-d)\)._
Proof.: By Lemma 2, we can assume \(f(\mathbf{x})=\int_{\mathcal{S}^{n}\times\mathbb{R}}\sigma\left(\mathbf{a}\cdot\mathbf{x}-b \right)-\sigma(-b)\,\mathrm{d}\mu\left(\mathbf{a},b\right)+\mathbf{a}_{0}\cdot\mathbf{x}+b_{0}\) for some \(\mu\in\mathcal{M}(\mathcal{S}^{n}\times\mathbb{R})\), \(\mathbf{a}_{0}\in\mathbb{R}^{n+1}\), and \(b_{0}\in\mathbb{R}\). Decompose \(\mu\) such that \(\mu=\mu_{C}+\sum_{(\mathbf{c},d)\in M}r_{(\mathbf{c},d)}\delta_{(\mathbf{c},d)}\) where \(\mu_{C}\) is atomless, \(M\) is a countable subset of \(\mathcal{S}^{n}\times\mathbb{R}\), and \(r_{(\mathbf{c},d)}\in\mathbb{R}\setminus\{0\}\) for all \((\mathbf{c},d)\).
Let \(g:\mathbb{R}^{n+1}\to\mathbb{R}\) be
\[g(\mathbf{x}) :=\int_{\mathcal{S}^{n}\times\mathbb{R}}\sigma\left(\mathbf{a}\cdot \mathbf{x}-b\right)-\sigma(-b)\,\mathrm{d}\left(\sum_{(\mathbf{c},d)\in M}r_{(\mathbf{c},d)}\delta_{(\mathbf{c},d)}\right)(\mathbf{a},b)+\mathbf{a}_{0}\cdot\mathbf{x}+b_{0}\] \[=\sum_{(\mathbf{c},d)\in M}r_{(\mathbf{c},d)}\left(\sigma\left(\mathbf{c} \cdot\mathbf{x}-d\right)-\sigma(-d)\right)+\mathbf{a}_{0}\cdot\mathbf{x}+b_{0}.\]
Then, \(g\) is certainly affine outside of \(\bigcup_{(\mathbf{c},d)\in M}\mathfrak{h}_{\mathbf{c},d}\). By Lemma 6, for every \(C\in\mathcal{C}\) and every \((\mathbf{c},d)\in M\), \((\text{int }C)\cap\mathfrak{h}_{\mathbf{c},d}=\varnothing\). Therefore, for every \(C\in\mathcal{C}\), \(g\) is affine on \(C\), because \(g\) is continuous and \(\overline{\text{int }C}=C\). Thus, the cover \(\mathcal{C}\) shows \(g\) is countably piecewise linear.
It follows \(\int_{\mathcal{S}^{n}\times\mathbb{R}}\sigma\left(\mathbf{a}\cdot\mathbf{x}-b\right)- \sigma(-b)\,\mathrm{d}\mu_{C}(\mathbf{a},b)\) is also countably piecewise linear. By Theorem 1, \(\mu_{C}\) is in fact the zero measure.
Then, \(f(\mathbf{x})=g(\mathbf{x})\). Note, \(\mathbf{a}_{0}\cdot\mathbf{x}=\sigma(\mathbf{a}_{0}\cdot\mathbf{x})-\sigma(-\mathbf{a}_{0}\cdot \mathbf{x})\). Thus,
\[f(\mathbf{x})=\left(b_{0}-\sum_{(\mathbf{c},d)\in M}r_{(\mathbf{c},d)}\sigma(-d)\right)+ \sigma(\mathbf{a}_{0}\cdot\mathbf{x})-\sigma(-\mathbf{a}_{0}\cdot\mathbf{x})+\sum_{(\mathbf{c},d) \in M}r_{(\mathbf{c},d)}\sigma(\mathbf{c}\cdot\mathbf{x}-d).\]
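The absorption of the affine term into two ReLU units rests on the elementary identity \(t=\sigma(t)-\sigma(-t)\); a one-line numerical check (illustration only):

```python
import numpy as np

# The affine term a0 . x is rewritten with two ReLU units via
# t = relu(t) - relu(-t), which holds for every real t.
t = np.linspace(-3.0, 3.0, 101)
assert np.allclose(t, np.maximum(t, 0.0) - np.maximum(-t, 0.0))
```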
**Corollary 3**.: _Let \(f:\mathbb{R}^{n+1}\to\mathbb{R}\) be a continuous finitely piecewise linear function. If there exist \(\mu\in\mathcal{M}(\mathbb{S}^{n}\times\mathbb{R})\) and \(c_{0}\in\mathbb{R}\) such that \(f(\mathbf{x})=\int_{\mathbb{S}^{n}\times\mathbb{R}}\sigma\left(\mathbf{a}\cdot\mathbf{x}-b \right)-\sigma(-b)\,\mathrm{d}\mu\left(\mathbf{a},b\right)+c_{0}\), then \(f\) is representable as a finite-width network as in Equation 1._
Proof.: Since \(f\) is _finitely_ piecewise linear, by [9], there exists a finite collection \(\mathcal{C}\) of convex polyhedra with non-empty interior covering \(\mathbb{R}^{n+1}\) such that \(f\) is affine on each. By Corollary 2, there are \(r_{0}\in\mathbb{R}\), \(r_{(\mathbf{c},d)}\in\mathbb{R}\setminus\{0\}\), and a countable set \(M\subseteq\mathcal{S}^{n}\times\mathbb{R}\), such that \(f(\mathbf{x})=r_{0}+\sum_{(\mathbf{c},d)\in M}r_{(\mathbf{c},d)}\sigma(\mathbf{c}\cdot\mathbf{x}-d)\). Since \(f\) fails to be affine across \(\mathfrak{h}_{\mathbf{c},d}\) for every \((\mathbf{c},d)\in M\), and the finite cover \(\mathcal{C}\) admits only finitely many such boundary hyperplanes, \(M\) is a finite set.
**Corollary 4**.: _Let \(f(\mathbf{x})=\int_{\mathbb{S}^{n}\times\mathbb{R}}\sigma\left(\mathbf{a}\cdot\mathbf{x}-b \right)-\sigma(-b)\,\mathrm{d}\mu\left(\mathbf{a},b\right)+c_{0}\) with \(\mu\in\mathcal{M}(\mathbb{S}^{n}\times\mathbb{R})\) and \(c_{0}\in\mathbb{R}\). If \(f\not\equiv 0\), then \(f\) is not a compactly supported finitely piecewise linear function._
Proof.: Suppose \(f\not\equiv 0\). By Corollary 3, \(f\) is representable with a measure of the form \(\sum_{(\mathbf{c},d)\in M}r_{(\mathbf{c},d)}\delta_{(\mathbf{c},d)}\) such that \(M\) is finite. If \(M\) is empty, \(f(\mathbf{x})=c_{0}\neq 0\). Otherwise, \(f\) will not be affine along \(\mathfrak{h}_{\mathbf{c},d}\) for some \((\mathbf{c},d)\in M\). Since \(n\geq 2\), \(\mathfrak{h}_{\mathbf{c},d}\) will extend infinitely and \(f\) cannot be compactly supported.
## Acknowledgements
The author would like to acknowledge the support of the National Science Foundation (No. 1934884).

---

# Improving neural network representations using human similarity judgments

Lukas Muttenthaler, Lorenz Linhardt, Jonas Dippel, Robert A. Vandermeulen, Katherine Hermann, Andrew K. Lampinen, Simon Kornblith

arXiv:2306.04507v2 (submitted 2023-06-07), http://arxiv.org/abs/2306.04507v2
###### Abstract
Deep neural networks have reached human-level performance on many computer vision tasks. However, the objectives used to train these networks enforce only that similar images are embedded at similar locations in the representation space, and do not directly constrain the global structure of the resulting space. Here, we explore the impact of supervising this global structure by linearly aligning it with human similarity judgments. We find that a naive approach leads to large changes in local representational structure that harm downstream performance. Thus, we propose a novel method that aligns the global structure of representations while preserving their local structure. This global-local transform considerably improves accuracy across a variety of few-shot learning and anomaly detection tasks. Our results indicate that human visual representations are globally organized in a way that facilitates learning from few examples, and incorporating this global structure into neural network representations improves performance on downstream tasks.
## 1 Introduction
Representation learning usually involves pretraining a network on a large, diverse dataset using one of a handful of different types of supervision. In computer vision, early successes came from training networks to predict class labels [47; 85; 28]. More recent work has demonstrated the power of contrastive representation learning [9; 29; 70; 8]. Self-supervised contrastive models learn a space where images lie close to other augmentations of the same image [9], whereas image/text contrastive models learn a space where images lie close to embeddings of corresponding text [70; 39; 68].
Although these representation learning strategies are effective, their pretraining objectives do not directly constrain the global structure of the learned representations. To classify ImageNet images
into 1000 classes, networks' penultimate layers must learn representations that allow examples of a given class to be linearly separated from representations of other classes, but the classes themselves could be arranged in any fashion. The objective itself does not directly encourage images of tabby cats to be closer to images of other breeds of cats than to images of raccoons. Similarly, contrastive objectives force related embeddings to be close, and unrelated embeddings to be far, but if a cluster of data points is sufficiently far from all other data points, then the location of this cluster in the embedding space has minimal impact on the value of the loss [62; 35; 9].
Even without explicit global supervision, however, networks implicitly learn to organize high-level concepts somewhat coherently. For example, ImageNet models' representations roughly cluster according to superclasses [19], and the similarity structure of neural network representation spaces is non-trivially similar to similarity structures inferred from brain data or human judgments [40; 92; 66; 80; 49]. The structure that is learned likely reflects a combination of visual similarity between images from related classes and networks' inductive biases. However, there is little reason to believe that this implicitly-learned global structure should be optimal.
Human and machine vision differ in many ways. Some differences relate to sensitivity to distortions and out-of-distribution generalization [21; 22; 25]. For example, ImageNet models are more biased by texture than humans in their decision process [22; 34]. However, interventions that reduce neural networks' sensitivity to image distortions are not enough to make their representations as robust and transferable as those of humans [23]. Experiments inferring humans' object representations from similarity judgments suggest that humans use a complex combination of semantics, texture, shape, and color-related concepts for performing similarity judgments [31; 60].
Various strategies have been proposed to improve _representational alignment_ between neural nets and humans [e.g., 66; 67; 2; 61], to improve robustness against image distortions [e.g., 34; 21], or for obtaining models that make more human-like errors [e.g., 22; 26]. Muttenthaler et al. [61] previously found that a linear transformation learned to maximize alignment on one dataset of human similarity judgments generalized to different datasets of similarity judgments. However, it remains unclear whether better representational alignment can improve networks' generalization on vision tasks.
Here, we study the impact of aligning representations' global structure with human similarity judgments on downstream tasks. These similarity judgments, collected by asking subjects to choose the _odd-one-out_ in a triplet of images [32], provide explicit supervision for relationships among disparate concepts. Although the number of images we use for alignment is three to six orders of magnitude smaller than the number of images in pretraining, we observe significant improvements. Specifically, our contributions are as follows:
Figure 1: Global-local (gLocal) transforms yield a _best-of-both-worlds_ representation space, which improves overall performance. (a) The original representations capture local structure, such as that different trees are similar, but have poor global structure. The gLocal transform preserves local structure, while integrating global information from human knowledge; e.g., unifying superordinate categories, organizing by “animacy”, or connecting semantically-related categories like “food” and “drink”. (b) The gLocal transforms improve both human alignment and downstream task performance compared to original and naively aligned representations for image/text models. We report mean accuracies on anomaly detection and (5-shot) few-shot learning tasks.
* We introduce the _gLocal transform_, a linear transformation that minimizes a combination of a _global alignment loss_, which aligns the representation with human similarity judgments, and a _local contrastive loss_ that maintains the local structure of the original representation space.
* We show that the gLocal transform preserves the local structure of the original space but captures the same global structure as a _naive transform_ that minimizes only the global alignment loss.
* We demonstrate that the gLocal transform substantially increases performance on a variety of few-shot learning and anomaly detection tasks. By contrast, the naive transform impairs performance.
* We compare the human alignment of gLocal and naively transformed representations on four human similarity judgment datasets, finding that the gLocal transform yields only marginally worse alignment than the naive transform.
## 2 Related work
How can we build models which learn representations that support performance on variable downstream tasks? This question has been a core theme of computer vision research [9; 27; 58], but the impact of pretraining on feature representations is complex, and better performance does not necessarily yield more transferable representations [e.g., 44; 45]. For example, some pretraining leads to shortcut learning [23; 50; 1; 57; 22; 34; 3; 91]. Because the relationship between training methods and representations is complicated, it is useful to study how datasets and training shape representations [33]. Standard training objectives do not explicitly constrain the global structure of representations; nevertheless, these objectives yield representations that capture some aspects of the higher-order category structure [e.g., 40] and neural unit activity [e.g., 92; 80] of human and animal representations of the same images. Some models progressively differentiate hierarchical structure over the course of learning [5] in a similar way to how humans learn semantic features [73; 18; 78; 5; 79]. Even so, learned representations still fail to capture important aspects of the structure that humans learn [6]. Human representations capture both perceptual and semantic features [7; 69], including many levels of semantic hierarchy (e.g. higher-level: "animate" [11], superordinate: "mammal", basic: "dog", subordinate: "dashund"), with a bias towards the basic level [74; 55; 38], as well as cross-cutting semantic features [84; 56].
While models may implicitly learn to represent some of this structure, these implicit representations may have shortcomings. For example, Huh et al. [37] suggest that ImageNet models capture only higher-level categories where the sub-categories are visually similar. Similarly, Peterson et al. [66] show that model representations do not natively fully capture the structure of human similarity judgments, though they can be transformed to align better [66; 2]. In some cases, language provides more accurate estimates of human similarity judgments than image representations [54], and image-text models can have more human-aligned representations [61]. How does human alignment affect downstream performance? Sucholutsky & Griffiths [87] show that models which are more human-aligned (but not specifically optimized for alignment) are more robust on few-shot learning tasks. Other work shows benefits of incorporating higher-level category structure [51; 83]. Here, we ask whether transforming model representations to align with human knowledge can improve transfer.
## 3 Methods
**Data.** For measuring the degree of alignment between human and neural network similarity spaces, we use the things dataset, a large behavioral dataset of \(4.70\) million unique triplet responses crowdsourced from \(12{,}340\) human participants for \(1854\) natural object images [32]. Images used for collecting human responses in the triplet odd-one-out task were taken from the things object concept and image database [30], a collection of natural object images.
**Odd-one-out accuracy.** The triplet odd-one-out task is a commonly used task in the cognitive sciences to measure human notions of object similarity without biasing a participant into a specific direction [20; 72; 31; 60]. To measure the degree of alignment between human and neural network similarity judgments in the things triplet task, we examine the extent to which the odd-one-out can be identified directly from the similarities between images in models' representation spaces. Given representations \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\), and \(\mathbf{x}_{3}\) of the three images in a triplet, we first construct a similarity matrix \(\mathbf{S}\in\mathbb{R}^{3\times 3}\) where \(S_{i,j}\coloneqq\mathbf{x}_{i}^{\top}\mathbf{x}_{j}/(\|\mathbf{x}_{i}\|_{2}\|\mathbf{x}_{j}\|_ {2})\), the cosine similarity between a pair of representations.3 We identify the closest pair of images in the triplet as \(\arg\max_{i,j>i}S_{i,j}\) with the
remaining image being the odd-one-out. We define odd-one-out accuracy as the proportion of triplets where the odd-one-out matches the human odd-one-out choice.
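To make the rule concrete, here is a minimal NumPy sketch of the odd-one-out prediction described above. The function names are ours for illustration, not from the authors' codebase:

```python
import numpy as np

def odd_one_out(x1, x2, x3):
    """Predict the odd-one-out (0, 1, or 2) for a triplet of representations:
    the pair with the highest cosine similarity stays, the third image is odd."""
    X = np.stack([x1, x2, x3]).astype(float)
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-normalise rows
    S = X @ X.T                                     # 3x3 cosine-similarity matrix
    pairs = [(0, 1), (0, 2), (1, 2)]
    a, b = max(pairs, key=lambda p: S[p])           # arg max_{i, j>i} S_ij
    return ({0, 1, 2} - {a, b}).pop()               # remaining index

def odd_one_out_accuracy(triplets, human_choices):
    """Proportion of triplets where the model's odd-one-out matches the human's."""
    hits = [odd_one_out(*t) == c for t, c in zip(triplets, human_choices)]
    return float(np.mean(hits))
```

For instance, for two nearly parallel vectors and one orthogonal vector, the orthogonal one is predicted as the odd-one-out.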
**Alignment loss.** Given an image similarity matrix \(\mathbf{S}\) and a triplet \(\{i,j,k\}\) (here, images are indexed by natural numbers), the likelihood of a particular pair, \(\{a,b\}\subset\{i,j,k\}\), being most similar, and hence the remaining image being the odd-one-out, is modeled by the softmax of the object similarities,
\[p(\{a,b\}|\{i,j,k\},\mathbf{S})\coloneqq\exp(S_{a,b})/\left(\exp(S_{i,j})+\exp(S_{ i,k})+\exp(S_{j,k})\right). \tag{1}\]
For \(n\) triplet responses we use the following negative log-likelihood, precisely defined in [60],
\[\mathcal{L}_{\text{global}}(\mathbf{S})\coloneqq-\frac{1}{n}\sum_{s=1}^{n}\log \underbrace{p\left(\{a_{s},b_{s}\}|\{i_{s},j_{s},k_{s}\},\mathbf{S}\right)}_{ \text{odd-one-out prediction}}. \tag{2}\]
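A small NumPy sketch of this negative log-likelihood. We encode each response as an index tuple \((i, j, k, a, b)\), where \(\{a, b\}\) is the pair humans judged most similar; this encoding is our own assumption, not the authors' data format:

```python
import numpy as np

def alignment_loss(S, responses):
    """L_global: mean negative log-likelihood of the human choices under the
    softmax model of Eq. (1). Each response is (i, j, k, a, b), with {a, b}
    the pair humans judged most similar within the triplet {i, j, k}."""
    nll = 0.0
    for i, j, k, a, b in responses:
        # Similarity of each of the three candidate pairs.
        logits = {frozenset(p): S[p] for p in [(i, j), (i, k), (j, k)]}
        log_z = np.log(sum(np.exp(v) for v in logits.values()))
        nll -= logits[frozenset((a, b))] - log_z
    return nll / len(responses)
```

With uniform similarities the loss is \(\log 3\) (chance over three pairs), and it decreases as the chosen pair's similarity grows.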
Since the triplets in [32] consist of randomly selected images, the concepts that humans use for their similarity judgments in the things triplet odd-one-out task primarily reflect superordinate categories rather than fine-grained object features [31, 60]. The above alignment loss can therefore be viewed as a loss function whose objective is to transform the representations into a globally restructured human similarity space where superordinate categories are emphasized over subordinate ones.
**Naive transform.** We first investigate a linear transformation that naively maximizes alignment between neural network representations and human similarity judgments with \(L_{2}\) regularization. This transformation consists of a square matrix \(\mathbf{W}\) obtained as the solution to
\[\operatorname*{arg\,min}_{\mathbf{W},\mathbf{b}}\ \mathcal{L}_{\text{global}}(\mathbf{S})+ \lambda||\mathbf{W}||_{\text{F}}^{2}, \tag{3}\]
where \(S_{ij}=\left(\mathbf{W}\mathbf{x}_{i}+\mathbf{b}\right)^{\top}\left(\mathbf{W}\mathbf{x}_{j}+\mathbf{b}\right)\). We call this transformation the _naive transform_ because the regularization term helps prevent overfitting to the training set, but does not encourage the transformed representation space to resemble the original space. Muttenthaler et al. [61] previously investigated a similar transformation. We determine \(\lambda\) via grid-search using \(k\)-fold cross-validation (CV). To obtain a minimally biased estimate of the odd-one-out accuracy of the transform, we partition the \(1854\) objects in things into two disjoint sets, following the procedure of Muttenthaler et al. [61].
**Global transform.** The naive transform does not preserve representational structure that is irrelevant to the odd-one-out task. The global transform instead shrinks \(\mathbf{W}\) toward a scaled identity matrix by penalizing \(\min_{\alpha}\left\|\mathbf{W}-\alpha I\right\|_{\text{F}}^{2}\), thus regularizing the transformed representation toward the original. The global transform solves the following minimization problem,
\[\operatorname*{arg\,min}_{\mathbf{W},\mathbf{b}}\mathcal{L}_{\text{global}}(\mathbf{S})+ \lambda\min_{\alpha}\left\|\mathbf{W}-\alpha I\right\|_{\text{F}}^{2}=\operatorname* {arg\,min}_{\mathbf{W},\mathbf{b}}\mathcal{L}_{\text{global}}(\mathbf{S})+\lambda\left\| \mathbf{W}-\left(\sum_{j=1}^{p}\mathbf{W}_{jj}/p\right)I\right\|_{\text{F}}^{2}. \tag{4}\]
We derive the above equality in Appx. F. Again, we select \(\lambda\) via grid-search with \(k\)-fold CV.
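The inner minimization over \(\alpha\) has the closed form \(\alpha^{*}=\operatorname{tr}(\mathbf{W})/p\), since \(\|\mathbf{W}-\alpha I\|_{\text{F}}^{2}\) is quadratic in \(\alpha\) and only the diagonal entries of \(\mathbf{W}\) interact with \(\alpha\). A quick numerical check of this equality (our own sketch, not the authors' code):

```python
import numpy as np

def identity_penalty(W):
    """min_alpha ||W - alpha*I||_F^2, via the closed form alpha* = tr(W)/p
    (set the derivative of the quadratic in alpha to zero)."""
    p = W.shape[0]
    alpha_star = np.trace(W) / p
    return np.linalg.norm(W - alpha_star * np.eye(p), 'fro') ** 2

# Sanity check: the closed-form alpha* is no worse than a grid of alternatives.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 5))
best = identity_penalty(W)
assert all(best <= np.linalg.norm(W - a * np.eye(5), 'fro') ** 2 + 1e-12
           for a in np.linspace(-2.0, 2.0, 81))
```

For a matrix that is already a scaled identity the penalty vanishes, as expected.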
**gLocal transform.** In preliminary experiments, we observed a trade-off between alignment and the transferability of a neural network's human-aligned representation space to downstream tasks. Maximizing alignment appears to slightly worsen downstream task performance, whereas using a large value of \(\lambda\) in Eq. 4 leads to a representation that closely resembles the original (since \(\lim_{\lambda\to\infty}\mathbf{W}=\sigma I\)). Thus, we add an additional regularization term to the objective with the goal of preserving the local structure of the network's original representation space. This loss term can be seen as an additional constraint on the transformation matrix \(\mathbf{W}\).
We call this loss term _local loss_ and the transform that optimizes this full objective the _gLocal transform_, where global and local representations structures are jointly optimized. For this loss function, we embed all images of the ImageNet train and validation sets [17] in a neural network's \(p\)-dimensional penultimate layer space or image encoder space of image/text models. Let \(\mathbf{Y}\in\mathbb{R}^{m\times p}\) be a neural network's feature matrix for all \(m\) images in the ImageNet train set. Let \(\mathbf{S}^{*}\) be the cosine similarity matrix using untransformed representations where \(S^{*}_{ij}=\left(\mathbf{y}_{i}^{\top}\mathbf{y}_{j}\right)/\big{(}||\mathbf{y}_{i}||_{2} ||\mathbf{y}_{j}||_{2}\big{)}\) and let \(\mathbf{S}^{\dagger}\) be the cosine similarity matrix of the transformed representations where
\[S^{\dagger}_{ij}=\left(\left(\mathbf{W}\mathbf{y}_{i}+\mathbf{b}\right)^{\top}\left(\mathbf{W} \mathbf{y}_{j}+\mathbf{b}\right)\right)/\big{(}||\mathbf{W}\mathbf{y}_{i}+\mathbf{b}||_{2}||\mathbf{W} \mathbf{y}_{j}+\mathbf{b}||_{2}\big{)}.\]
Let \(\sigma\) be a softmax function that transforms a similarity matrix into a probability distribution,
\[\sigma(\mathbf{S},\tau)_{ij}\coloneqq\frac{\exp(\mathbf{S}_{ij}/\tau)}{\sum_{k=1}^{m} \mathbbm{1}_{[k\neq i]}\exp(\mathbf{S}_{ik}/\tau)},\]
where \(\tau\) is a temperature and \(\sigma(\mathbf{S},\tau)_{ij}\in(0,1)\). We can then define the local loss as the following contrastive objective between untransformed and transformed neural network similarity spaces,
\[\mathcal{L}_{\text{local}}(\mathbf{W},\mathbf{b},\tau)\coloneqq-\frac{1}{m^{2}-m}\sum_{i }^{m}\sum_{j}^{m}\mathbbm{1}_{[i\neq j]}\sigma(\mathbf{S}^{*},\tau)_{ij}\log\left[ \sigma(\mathbf{S}^{\dagger},\tau)_{ij}\right]. \tag{5}\]
To avoid distributions that excessively emphasize the self-similarity of objects for small \(\tau\), we exclude elements on the diagonal of the similarity matrices. The final _gLocal transform_ is then,
\[\operatorname*{arg\,min}_{\mathbf{W},\mathbf{b}}\ \underbrace{(1-\alpha)\,\mathcal{L}_{ \text{global}}(\mathbf{W},\mathbf{b})}_{\text{alignment}}+\underbrace{\alpha\mathcal{L }_{\text{local}}(\mathbf{W},\mathbf{b},\tau)}_{\text{locality-preserving}}+\lambda \left\|\mathbf{W}-\left(\sum_{j=1}^{p}\mathbf{W}_{jj}/p\right)I\right\|_{\text{F}}^{2}, \tag{6}\]
where \(\alpha\) is a hyperparameter that balances the trade-off between human alignment and preserving the local structure of a neural network's original representation space. We select values of \(\alpha\) and \(\lambda\) that give the lowest alignment loss via grid search; see details in Appx. A.2.
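The locality-preserving term \(\mathcal{L}_{\text{local}}\) of Eq. 5 can be sketched in NumPy as follows. This is our simplified reading (diagonal self-similarities excluded, as described above); function names and the small-matrix setup are ours:

```python
import numpy as np

def cosine_sim(X):
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def softmax_offdiag(S, tau):
    """sigma(S, tau): row-wise softmax over off-diagonal similarities."""
    E = np.exp(S / tau)
    np.fill_diagonal(E, 0.0)                        # exclude self-similarity
    return E / E.sum(axis=1, keepdims=True)

def local_loss(Y, W, b, tau):
    """L_local: cross-entropy between the neighbourhood distributions of the
    untransformed (Y) and transformed (Y W^T + b) representations, Eq. (5)."""
    m = Y.shape[0]
    P = softmax_offdiag(cosine_sim(Y), tau)
    Q = softmax_offdiag(cosine_sim(Y @ W.T + b), tau)
    off = ~np.eye(m, dtype=bool)
    return float(-np.sum(P[off] * np.log(Q[off])) / (m * m - m))
```

Because cosine similarity is scale-invariant, any positive multiple of the identity transform leaves the loss unchanged, and by Gibbs' inequality the loss is minimized (row-wise) when the transformed neighbourhood distribution matches the original one.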
### Downstream tasks
**Few-shot learning.** In general, few-shot learning (FS) is used to measure the transferability of neural network representations to different downstream tasks [86]. Here, we use FS to investigate whether the gLocal transform, as defined in §3, can improve downstream task performance and, hence, a network's transferability, compared to the original representation spaces. Specifically, we perform FS with and without applying the gLocal transforms. For all few-shot experiments, we use multinomial logistic regression, which has previously been shown to achieve near-optimal performance when paired with a good representation [88]. The regularization parameter is selected by \(n_{s}\)-fold cross-validation, with \(n_{s}\) being the number of shots per class (more details in Appx. A.3).
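A hedged sketch of this evaluation protocol using scikit-learn. The function name, the size of the `Cs` regularization grid, and the sampling seed are our assumptions, not the authors' exact setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

def few_shot_accuracy(train_X, train_y, test_X, test_y, n_shots, seed=0):
    """Sample n_shots examples per class, fit multinomial logistic regression
    with the regulariser chosen by n_shots-fold CV, and report test accuracy."""
    rng = np.random.default_rng(seed)
    idx = np.concatenate([
        rng.choice(np.flatnonzero(train_y == c), size=n_shots, replace=False)
        for c in np.unique(train_y)
    ])
    clf = LogisticRegressionCV(Cs=7, cv=n_shots, max_iter=1000)
    clf.fit(train_X[idx], train_y[idx])
    return clf.score(test_X, test_y)
```

In practice `train_X` would hold (transformed) penultimate-layer features; here any feature matrix works.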
**Anomaly detection.** Anomaly detection (AD) is a task where one has a collection of data considered "nominal" and would like to detect if a test sample is different from nominal data. For semantic AD tasks, e.g. nominal images contain a cat, it has been observed that simple AD methods using a pretrained neural network perform best [53; 4; 16]. In this work, we apply \(k\)-nearest neighbor AD to representations from a neural network, a method which has been found to work well [4; 53; 71]. We use the standard one-vs.-rest AD benchmark on classification datasets, where a model is trained using "one" training class as nominal data and performs inference with the full test set with the "rest" classes being anomalous [75].
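A minimal NumPy sketch of cosine-similarity \(k\)-nearest-neighbor anomaly scoring with a rank-based AUROC (our simplification; the paper's exact implementation may differ):

```python
import numpy as np

def knn_anomaly_scores(nominal, test, k=5):
    """Score = mean cosine distance to the k most similar nominal training
    samples; higher means more anomalous."""
    A = nominal / np.linalg.norm(nominal, axis=1, keepdims=True)
    B = test / np.linalg.norm(test, axis=1, keepdims=True)
    sims = B @ A.T                                  # test x train cosine sims
    topk = np.sort(sims, axis=1)[:, -k:]            # k nearest neighbours
    return 1.0 - topk.mean(axis=1)

def auroc(scores, labels):
    """Rank-based AUROC via the Mann-Whitney U statistic; labels: 1 = anomalous.
    Assumes no tied scores."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = np.asarray(labels) == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

In the one-vs.-rest benchmark, `nominal` would be the features of the single training class and `test` the full test set.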
## 4 Experimental results
In this section, we report experimental results for different FS and AD tasks. In addition, we analyze the effect of the gLocal transforms on both local and global similarity structures and report changes in representational alignment of image/text models for four human similarity judgment datasets. We start this section by introducing the different tasks and datasets and continue with the analyses.
### Experimental setup
**CIFAR-100 coarse** The 100 classes in CIFAR-100 can be grouped into 20 semantically meaningful superclasses. Here, we use these superclasses as targets for which there exist 100 test images each.
**CIFAR-100 shift** simulates a distribution shift between the nominal training and test images in CIFAR-100. For each of the 20 superclasses, there are five subclasses. We use the first three subclasses for training and the last two subclasses for testing.
**Entity-13 and Entity-30** are datasets derived from ImageNet [17; 76]. They have been defined as part of the BREEDS dataset for subpopulation shift [77]. In BREEDS, ImageNet classes are grouped into superclasses based on a modified version of the WordNet hierarchy. Specifically, one starts at the _Entity_ node of that hierarchy and traverses the tree in a breadth-first manner until the desired level of granularity (three steps from the root for Entity-13 and four for Entity-30). The classes residing at that level are considered the new coarse class labels and a fixed number of subclasses are sampled from the trees rooted at these superclass nodes -- twenty for Entity-13 and eight for Entity-30. Through this procedure, Entity-13 results in more coarse-grained labels than Entity-30. The subclasses of each superclass are partitioned into source (training) and target (test) classes, introducing a subpopulation shift. For testing, we use all 50 validation images for each subclass.
### Impact of transforms on global and local structure
Our goal is to use human similarity judgments to correct networks' global representational structure without affecting local representational structure. Here, we study the extent to which our method succeeds at this goal. To quantify distortion of local structure, we first find the nearest neighbor of each ImageNet validation image among the remaining 49,999 validation images in the original representation space. We then measure the proportion of images for which the nearest neighbor in the untransformed representation space is among the closest \(k\) images in the transformed representation space. As shown in Fig. 2, both the gLocal and global transforms generally preserve nearest neighbors, although the gLocal transform is slightly more effective, preserving the closest neighbor of 76.3% of images vs. 73.7% for the global transform. By contrast, the naive, unregularized transform preserves the closest neighbor in only 12.2% of images. For further intuition, we show the neighbors of anchor images in the untransformed, gLocal, and naively transformed representation spaces in Appx. C.
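This nearest-neighbor preservation metric can be sketched as follows (our NumPy rendering of the description above; names are ours):

```python
import numpy as np

def nn_preservation(orig, transformed, k):
    """Fraction of items whose nearest neighbour in the original space is
    among the k closest items in the transformed space (cosine similarity)."""
    def sim(X):
        Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
        S = Xn @ Xn.T
        np.fill_diagonal(S, -np.inf)                # never match an item to itself
        return S
    S0, S1 = sim(orig), sim(transformed)
    nn0 = S0.argmax(axis=1)                         # original nearest neighbour
    topk = np.argsort(-S1, axis=1)[:, :k]           # k closest after transform
    return float(np.mean([nn0[i] in topk[i] for i in range(len(nn0))]))
```

The identity transform (or any positive rescaling, since cosine similarity ignores scale) preserves all nearest neighbors.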
Whereas the local structure of gLocal and global representations closely resembles that of the original representation, the global structure instead more closely resembles the naive transformed representation. We quantify global representational similarity using linear centered kernel alignment (LCKA) [43; 14]. LCKA can be thought of as measuring the similarity between principal components (PCs) of two representations weighted by the amount of variance they explain; see further discussion in Appx. G. It thus primarily reflects similarity between the large PCs that define global representational structure. As shown in Fig. 3 (left), LCKA indicates that the gLocal/global representations are more similar to the naive transformed representation than to the untransformed representation, suggesting that the gLocal, global, and naive transforms induce similar changes in global structure. We further measure LCKA between representations obtained by setting all singular values to zero except for those corresponding to either largest 10 PCs or the remaining 758 PCs. LCKA between representations retaining the largest 10 PCs resembles LCKA between the full representations (Fig. 3 middle). However, when retaining only the remaining PCs, the gLocal/global representations are more similar to the untransformed representation than to the naive-transformed representation (Fig. 3 right).
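Linear CKA has a simple closed form for feature matrices with examples as rows [43; 14]; a minimal sketch (function name ours):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two column-centered feature matrices with examples
    as rows: ||Y^T X||_F^2 / (||X^T X||_F ||Y^T Y||_F)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    return np.linalg.norm(Y.T @ X, 'fro') ** 2 / (
        np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro'))
```

The measure lies in \([0, 1]\) and is invariant to rotations and isotropic scaling of either representation, which is why it is dominated by the large, variance-heavy principal components.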
Figure 4: The gLocal transformed representation captures global structure of the naive transformed representation, as shown by PCA, but local structure of the untransformed representation, as shown by t-SNE with perplexity 10. Visualizations reflect embeddings of 10 images from each of the 260 Entity-13 ImageNet subclasses obtained from CLIP ViT-L and are colored by the superclasses. 2D embeddings are rotated to align with the untransformed representation using orthogonal Procrustes. The t-SNE fitting process is initialized using PCA; we pick the embedding with the lowest loss from 10 runs.
Figure 3: The top principal components (PCs) of gLocal and global representations of ViT-L on the ImageNet validation set resemble those of the naive transformed representation, indicating that they share similar global structure. The remaining PCs more closely match the untransformed representation. **Left:** CKA between full representations. **Middle:** CKA after setting singular values of each representation to zero for all but the largest 10 PCs. **Right:** CKA after setting singular values to zero for the largest 10 PCs, but retaining smaller PCs.
Figure 2: gLocal and global but not naive transforms preserve nearest neighbors in CLIP ViT-L representations. y-axis indicates the percentage of ImageNet validation images for which the closest image in the untransformed space is among the \(k\) closest after transformation.
In Fig. 4, we visualize the effects of different transformations using PCA, which preserves global structure, and t-SNE, which preserves local structure. As described by Van der Maaten & Hinton [89], PCA "focus[es] on keeping the low-dimensional representations of dissimilar datapoints far apart," whereas t-SNE tries to "keep the low-dimensional representations of very similar datapoints close together." In line with the results above, the global structure of the gLocal representation revealed by PCA closely resembles that of the naive transformed representation, whereas the local structure of the gLocal representation revealed by t-SNE closely resembles that of the untransformed representation.
To complement these analyses, in Appendix B we explore what the alignment transforms alter about the global category structure of the model representations. Generally, categories within a single superordinate category move more closely together, while different superordinate categories move apart, but with sensible exceptions -- e.g. food and drink move closer together.
### Few-shot learning
In this section, we examine the impact of the gLocal transform on few-shot classification accuracy on downstream datasets. We consider a standard _fine-grained_ few-shot learning setup on CIFAR-100 [48], SUN397 [90], and the Describable Textures Dataset [DTD, 12], as well as a _coarse-grained_ setup on Entity-[13, 30] of BREEDS [77].
In coarse-grained FS, classes are grouped into semantically meaningful superclasses. This is a more challenging setting than the standard fine-grained scenario, for which there does not exist a superordinate grouping. In fine-grained FS, training examples are uniformly drawn from all (sub-)classes. For coarse-grained FS, we classify according to superclasses rather than fine-grained classes, and choose \(k\) training examples uniformly at random for each superclass. This implies that not every subclass is contained in the train set if the number of training samples is smaller than the number of subclasses. On Entity-[13, 30], subclasses between train and test sets are guaranteed to be disjoint due to a subpopulation shift. To achieve high accuracy, models must consider examples from unseen members of a superclass similar to the few examples it has seen. Task performance is dependent on how well the partial information contained in the few examples of a superclass can be exploited by a model to characterize the entire superclass with its subclasses. Hence, global similarity structure is more crucial than local similarity structure to perform well on this task. We calculate classification accuracy across all available test images of all subclasses, using coarse labels.
We find that transforming the representations via gLocal transforms substantially improves performance over the untransformed representations across both coarse-grained and fine-grained FS tasks for all image/text models considered (Tab. 1). For CLIP models trained on LAION, however, we do not observe improvements for CIFAR-100 and DTD, which are the most fine-grained datasets. For ImageNet models, the gLocal transform improves performance on Entity-{13,30}, but has almost no impact on the performance for the remaining datasets (see Appx. D.1).
### Anomaly detection
Here, we evaluate the performance of representations on \(k\)-nearest neighbor AD with and without using the gLocal transform. AD methods typically return an anomaly score for which a detection threshold has to be chosen. Our method is evaluated using each training class as the nominal class and we report the average AUROC. Following §3.1, we compute the representations for each nominal sample of the training set and then evaluate the model with representations from the test set. We set \(k=5\) for our experiments but found that performance is fairly insensitive to the choice of \(k\) (see Appx. D.2). For measuring the distance between representations, we use cosine similarity.
In Tab. 2, we show that the gLocal transform substantially improves AD performance across all datasets considered in our analyses, for almost every image/text model. However, as in the few-shot setting, we observe no improvements over the untransformed representation space for ImageNet models (see Appx. D.2). In Tab. 3, we further investigate performance on distribution shift datasets,
| Model \ Transform | Entity-13 | Entity-30 | CIFAR100-Coarse | CIFAR100 | SUN397 | DTD |
| --- | --- | --- | --- | --- | --- | --- |
| CLIP-RN50 | 63.98 / **67.96** | 57.87 / **59.8** | 44.09 / **47.43** | 38.78 / **39.99** | 57.22 / **58.79** | 56.32 / **56.44** |
| CLIP-ViT-L/14 (WIT) | 65.35 / **71.94** | 66.91 / **69.97** | 66.96 / **66.88** | 72.31 / **73.03** | 69.95 / **71.13** | 64.76 / **64.58** |
| CLIP-ViT-L/14 (LAION-400M) | 65.32 / **69.02** | 62.78 / **65.99** | 68.98 / **69.58** | **73.59** / 72.98 | 70.25 / **71.08** | **68.92** / 67.74 |
| CLIP-ViT-L/14 (LAION-2B) | 67.00 / **71.24** | 66.99 / **67.33** | 72.44 / **73.48**† | **79.05**† / 78.48 | 71.62 / **72.62**† | **70.82** / 69.84 |

Table 1: 5-shot FS results using the original or transformed representations; each cell shows original / gLocal accuracy. † indicates the highest accuracy for each dataset. Results are averaged over 5 runs.
where improvements are particularly striking. Here, global similarity structure appears to be crucial for generalizing between the superclasses. Tab. 2 additionally reports current state-of-the-art (SOTA) results on the standard benchmarks; SOTA results are not available for the distribution shift datasets. SOTA approaches generally use additional data relevant to the AD task, such as outlier exposure data or textual supervision for the normal class [53; 13; 36], whereas we use only human similarity judgments. Our transformation also works well in non-standard AD settings (see Appx. D.2).
### Representational alignment
We have shown above that the gLocal transform provides performance gains on FS and AD tasks. Here, we examine whether these performance gains come at the cost of alignment with human similarity judgments as compared to the naive transform, which does not preserve local structure. Thus, we examine the impact of the gLocal transform on human alignment, using the same human similarity judgment datasets evaluated in Muttenthaler et al. [61] plus an additional dataset.4 Specifically, we perform representational similarity analysis [RSA; 46] between representations of two image/text models -- CLIP RN50 and CLIP ViT-L/14 -- and human behavior for four different human similarity judgment datasets [31; 65; 66; 10; 41]. RSA is a method for comparing neural network representations to representations obtained from human behavior [46]. In RSA, one first obtains representational similarity matrices (RSMs) for the human behavioral judgments and for the neural network representations (more details in Appx. E). These RSMs measure the similarity between pairs of examples according to each source. As in previous work [10; 41; 61], we use the Spearman rank correlation coefficient to quantify the similarity of these RSMs. We find that there is almost no trade-off in representational alignment for the gLocally transformed representation space compared to the naively transformed representations (see Tab. 4). Hence, the gLocal transform is able to improve representational alignment while preserving local similarity structure.
Footnote 4: Human similarity judgments were collected by either letting participants arrange natural object images from different categories on a computer screen [66; 10; 41] or in the form of triplet odd-one-out choices [31].
In Fig. 5, we further demonstrate how the representational changes introduced by the naive and gLocal transforms lead to greater alignment with human similarity judgments by visualizing the RSMs on each dataset. The global similarity structure captured by the RSMs is qualitatively identical between naive and gLocal transforms, and both of these transforms lead to RSMs that closely resemble human RSMs (see Fig. 5). Human similarity judgments for data from Hebart et al. [31] were collected in the form of triplet odd-one-out choices. Therefore, we used VICE [60] -- an approximate Bayesian method for inferring mental representations of object concepts from human behavior -- to obtain an RSM for those judgments. Human RSMs are sorted into five different concept clusters using \(k\)-means for datasets from King et al. [41] and Cichy et al. [10] and using the things concept hierarchy [30] for RSMs from Hebart et al. [31]. A visualization for RSMs obtained from CLIP RN50 representations and a more detailed discussion on RSA can be found in Appx. E.
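A minimal sketch of RSA as used here: build RSMs, then rank-correlate their upper triangles. Ties are ignored for brevity; a full implementation would use a standard Spearman routine with tie handling:

```python
import numpy as np

def rsa_spearman(rsm_a, rsm_b):
    """Spearman rank correlation between the upper triangles of two RSMs
    (no tie handling, for brevity)."""
    iu = np.triu_indices_from(rsm_a, k=1)           # each pair counted once
    def rank(v):
        r = np.empty(len(v))
        r[np.argsort(v)] = np.arange(len(v))
        return r
    return float(np.corrcoef(rank(rsm_a[iu]), rank(rsm_b[iu]))[0, 1])
```

Because only ranks matter, any monotone transformation of one RSM leaves the correlation unchanged, which is why Spearman's coefficient is preferred for comparing similarity structures across sources.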
| Model \ Transform | CIFAR10 | CIFAR100 | CIFAR100-Coarse | ImageNet30 | DTD |
| --- | --- | --- | --- | --- | --- |
| CLIP-RN50 | 89.44 / **91.9** | 90.83 / **92.92** | 86.47 / **89.29** | 98.89 / **90.85** | 90.67 / **92.29** |
| CLIP-ViT-L/14 (WIT) | 95.14 / **98.16** | 91.41 / **97.19** | 88.5 / **95.83** | 98.91 / **97.85** | 92.02 / **94.9** |
| CLIP-ViT-L/14 (LAION-400M) | **98.8** / 98.39 | **96.66** / 98.53 | **97.86** / 97.77 | 99.51 / **99.69** | 95.54 / **96.44** |
| CLIP-ViT-L/14 (LAION-2B) | 98.97 / **99.11** | 98.76 / **98.97** | 98.65 / 98.51 | 99.29 / **99.74** | 94.87 / **96.78**† |
| SOTA | 99.6 [53] | – | 97.34 [13] | 99.9 [53] | 94.6 [53] |

Table 2: One-vs-rest nearest-neighbor-based AD results, with and without transformation; each cell shows original / gLocal AUROC. † indicates the highest accuracy for each dataset.
| Model \ Transform | Entity-13 | Entity-30 | Living-17 | Nonliving-26 | CIFAR100-shift |
| --- | --- | --- | --- | --- | --- |
| CLIP-RN50 | 90.22 / **92.03** | 91.64 / **93.71** | **94.62** / 92.38 | 87.49 / **91.09** | 76.27 / **80.33** |
| CLIP-ViT-L/14 (WIT) | 88.54 / **93.23**† | 91.31 / **95.63** | 97.31 / 97.71 | 87.43 / **92.53** | 73.69 / **87.11** |
| CLIP-ViT-L/14 (LAION-400M) | 90.79 / **92.71** | 92.49 / **95.04** | **96.56** / 96.32 | 90.09 / **93.18** | 91.09 / **92.7** |
| CLIP-ViT-L/14 (LAION-2B) | 90.33 / **93.08** | 92.12 / **95.84**† | 96.96 / **97.37** | 88.82 / **93.54**† | 89.73 / **93.94**† |

Table 3: One-vs-rest AD with a class distribution shift between train and test sets, with and without transformation; each cell shows original / gLocal AUROC. † indicates the highest accuracy for each dataset.
## 5 Discussion
Although neural networks achieve near-human-level performance on a variety of computer vision tasks, they may not optimally capture _global_ object similarities. By contrast, humans represent concepts using rich semantic features -- including superordinate categories and other global constraints -- for performing object similarity judgments [66; 41; 10; 31; 60]. These representations may contribute to humans' strong generalization capabilities [21; 24]. Here, we investigated the impact of aligning neural network representations with human similarity judgments on different FS and AD tasks.
We find that naively aligning neural network representations, without regularizing the learned transformations to preserve structure in the original representations, can impair downstream task performance. However, our _gLocal transform_, which combines an _alignment loss_ that optimizes for representational alignment with a _local loss_ that preserves the nearest neighbor structure from the original representation, can substantially improve downstream task performance while increasing representational alignment. The transformed representation space transfers surprisingly well across different human similarity judgment datasets and achieves almost equally strong alignment as the naive approach [61], indicating that it captures human notions of image similarity. In addition, it substantially improves downstream task performance compared to both original and naively-aligned representations across a variety of FS and AD tasks. The gLocal transform yields state-of-the-art (SOTA) performance on the CIFAR-100 coarse AD task, and approaches SOTA on other AD benchmarks.
Our work has some limitations. First, as we show in Appx. D, the gLocal transform fails to consistently improve downstream task performance on ImageNet. We conjecture that the gLocal transform can succeed only if representations capture the concepts by which human representations are organized, and ImageNet representations may not. Second, human similarity judgments are more expensive
| Dataset \ Transform | CLIP-RN50 | CLIP-ViT-L/14 (WIT) | CLIP-ViT-L/14 (LAION-2B) |
| --- | --- | --- | --- |
| _Human similarity datasets:_ | | | |
| Hebart et al. [31] | 52.78% / 59.92% / 59.89% | 46.71% / 60.13% / 60.05% | 47.50% / 60.47% / 60.38% |
| King et al. [41] | 0.386 / 0.650 / 0.645 | 0.355 / 0.638 / 0.637 | 0.292 / 0.620 / 0.613 |
| Cichy et al. [10] | 0.557 / 0.721 / 0.716 | 0.363 / 0.732 / 0.732 | 0.395 / 0.735 / 0.718 |
| Peterson et al. [65; 66] | 0.364 / 0.701 / 0.705 | 0.260 / 0.688 / 0.688 | 0.314 / 0.689 / 0.660 |
| _Downstream tasks:_ | | | |
| 5-shot FS (avg.) | 53.05% / 41.20% / **55.01%** | 67.71% / 47.62% / **69.91%** | 71.10% / 50.71% / **72.27%** |
| Anomaly detection (avg.) | 89.65% / 90.75% / **91.52%** | 90.16% / 92.79% / **95.19%** | 94.79% / 93.09% / **96.65%** |

Table 4: The gLocal transform yields both a high degree of alignment with datasets of human similarity judgments and good performance on FS/AD tasks. Performance on human similarity datasets is measured as odd-one-out accuracy on a held-out test set for things or Spearman's \(\rho\) for the other three datasets; each cell shows original / naive / gLocal. For 5-shot FS and AD, we report the average performance across all tasks in Tables 1, 2, and 3.
Figure 5: RSMs for human behavior and CLIP ViT-L/14 [WIT; 70] for four different human similarity judgment datasets [31; 66; 10; 41]. We contrast RSMs obtained from the network's original representations (second column), the naively aligned representations [61] (third column), and the gLocally-transformed representations (rightmost column) against RSMs directly constructed from human similarity judgments (leftmost column). Yellower colors indicate greater similarity; bluer colors indicate greater dissimilarity.
to acquire than other forms of supervision, and there may be important human concepts that are captured neither by the 1854 images we use for alignment nor by the pretrained representations.
Our results imply that even with hundreds of millions of image/text pairs, image/text contrastive learning does not learn a representation space with human-like global organization. However, since the gLocal transform successfully aligns contrastive representations' global structure using only a small number of images, these representations do seem to have a pre-existing representation of the concepts by which human representations are globally organized. Why does this happen? One possibility is that image/text pairs do not provide adequate global supervision, and thus contrastive representation learning is (near-)optimal given the data. Alternatively, contrastive learning may not incorporate signals that exist in the data into the learned representation because it imposes only local constraints. Previous work has shown that t-SNE and UMAP visualizations reflect global structure only if they are carefully initialized [42]. Given the similarity between contrastive representation learning and t-SNE/UMAP [15] and the known sensitivity of contrastive representations to initialization [52], it is plausible contrastive representations also inherit their global structure from their initialization. Although our gLocal transform provides a way to perform post-hoc alignment of representations from image/text models using human similarity judgments, there may be alternative initialization strategies or objectives that can provide the same benefits during training, using only image/text pairs.
## Acknowledgements
LM, LL, JD, and RV acknowledge funding from the German Federal Ministry of Education and Research (BMBF) for the grants BIFOLD22B and BIFOLD23B. LM acknowledges support through the Google Research Collabs Programme.
# Finite element interpolated neural networks for solving forward and inverse problems

Santiago Badia, Wei Li, Alberto F. Martín

2023-06-09 · [arXiv:2306.06304v4](http://arxiv.org/abs/2306.06304v4)
###### Abstract.
We propose a general framework for solving forward and inverse problems constrained by partial differential equations, where we interpolate neural networks onto finite element spaces to represent the (partial) unknowns. The framework overcomes the challenges related to the imposition of boundary conditions, the choice of collocation points in physics-informed neural networks, and the integration of variational physics-informed neural networks. A numerical experiment set confirms the framework's capability of handling various forward and inverse problems. In particular, the trained neural network generalises well for smooth problems, beating finite element solutions by some orders of magnitude. We finally propose an effective one-loop solver with an initial data fitting step (to obtain a cheap initialisation) to solve inverse problems.
Key words and phrases:neural networks, PINNs, finite elements, PDE approximation, inverse problems
## 1. Introduction
Many problems in science and engineering are modelled by _low-dimensional_ (e.g., 2 or 3 space dimensions plus time) partial differential equations (PDEs). Since PDEs can rarely be solved analytically, their solution is often approximated using numerical methods, among which the finite element method (FEM) has been proven to be effective and efficient for a broad range of problems. The FEM enjoys a very solid mathematical foundation [1]. For many decades, advanced discretisations have been proposed, e.g., preserving physical structure [2], and optimal (non)linear solvers that can efficiently exploit large-scale supercomputers have been designed [3, 4, 5].
Grid-based numerical discretisations can readily handle forward problems. In a forward problem, all the data required for the PDE model to be well-posed is provided (geometry, boundary conditions and physical parameters), and the goal is to determine the state of the model. In an inverse problem setting, however, the model parameters are not fully known, but one can obtain some observations, typically noisy and/or partial, of the model state. Inverse problem solvers combine the partially known model and the observations to infer the information which is missing to complete the model. Inverse problems can be modelled using PDE-constrained minimisation [6].
Traditional numerical approximations for low-dimensional PDEs, like FEM, are linear. Finite element (FE) spaces are finite-dimensional vector spaces in which one seeks for the best approximation in some specific measure. As a result, the method/grid is not adapted to local features (e.g., sharp gradients or discontinuities) and convergence can be slow for problems that exhibit multiple scales. Although adaptive FE methods can efficiently handle this complexity, they add an additional loop to the simulation workflow (the mark and refine loop) and problem-specific robust error estimates have to be designed [7].
When the FEM is used to solve PDE-constrained inverse problems, the unknown model parameters are usually described using FE-like spaces, even though neural network (NN) representations have recently been proposed [8, 9]. The loss function accounts for the misfit term between the observation and the state, which in turn depends on the unknown model parameters. The gradient of the loss function with respect to the unknown model parameters requires a chain rule that involves the solution of the forward problem. An efficient implementation of this gradient relies on the adjoint method [10]. Inverse problem solvers add an additional loop to the simulation workflow, which involves the solution of the full forward problem and the adjoint of its linearisation at each iteration. The first iterations of the adjoint method usually carry a heavy computational cost, since full forward problems must be solved even when the iterates are still far from the desired solution.
The tremendous success of NNs in data science has motivated many researchers to explore their application in PDE approximation. Physics-informed NNs (PINNs) have been proposed in [11] to solve forward and inverse problems. A NN approximates the PDE solution, while the loss function evaluates the strong PDE residual on a set of randomly selected collocation points. NNs can also be combined with a weak statement of the PDE (see, e.g., Deep Ritz Method [12] or variational PINNs (VPINNs) [13]). NNs have some very interesting properties that make them perfectly suited for the approximation of forward and inverse PDE problems. First, NNs are genuinely _nonlinear approximations_, similar to, e.g., free-knot B-splines [14]. The solution is sought in a nonlinear manifold in the parameter space, which automatically adapts to the specific problem along the training process.1 Unlike FE bases, NNs are also perfectly suited (and originally designed) for data fitting. The NN parameters usually have a _global_ effect on the overall solution. As a result, one can design solvers for PDE-constrained inverse problems in which both the unknown model parameters and state variables are learnt along the same training process [16]. The loss function includes the data misfit and a penalised PDE residual term. State and unknown model parameters are not explicitly linked by the forward problem, and thus no forward problems are involved in each iteration of the optimisation loop. As a result, one can use NNs to design adaptive forward and inverse problems with a one-loop solver.
Footnote 1: It is illustrative to observe how e.g. linear regions in NNs with ReLU activation functions adapt to the solution being approximated [15]. The decomposition of the physical domain into linear regions is a polytopal conforming mesh.
Despite all these efforts, PINNs and related methods have not been able to outperform traditional numerical schemes for low dimensional PDEs; see, e.g., the study in [17]. There are some (intertwined) reasons for this. First, nonlinear approximability comes at the price of non-convex optimisation at the training process. Currently, non-convex optimisation algorithms for NN approximation of PDEs are costly and unreliable, especially when the PDE solutions contain multi-scale features or shocks [18, 19]. As a result, despite the enhanced expressivity of NNs, this improvement is overshadowed by poor and costly training. Second, the integration of the PDE residual terms is not exact, and the error in the integration is either unbounded or not taken into account. Poor integration leads to poor convergence to the desired solution (due to a wrong cost functional) and one can find examples for which the optimal solution is spurious [20]. In [15], the authors propose adaptive quadratures for NNs in low dimensions that are proven to be more accurate than standard Monte Carlo, especially for sharp features. Finally, the usual PDE residual norms being used in the loss function are ill-posed at the continuous level in general. It is well-known that such a variational crime has negative effects in the convergence of iterative solvers for FE discretisations [21], and it will also hinder the non-convex optimisation at the training process. These issues prevent a solid mathematical foundation of these methods, and strong assumptions are required to prove partial error estimates [22, 23].
Additionally, NNs have not been designed to strongly satisfy Dirichlet boundary conditions. Thus, the loss function must include penalty terms that account for the boundary conditions, which adds an additional constraint to the minimisation and has a very negative effect on the training [24]. Such imposition of boundary conditions is not consistent, and Nitsche's method comes with the risk of ending up with an ill-posed formulation.2 Recently, some authors have proposed to multiply the NN with a distance function that vanishes on the Dirichlet boundary [25]. However, this arguably complicates the geometrical discretisation step compared to grid-based methods. The computation of such distance functions is complex in general geometries and has only been used for quite simple cases in 2D. Furthermore, it is unclear how to use this approach for non-homogeneous boundary conditions, which require a lifting of the Dirichlet values inside the domain. In comparison, (unstructured) mesh generation is a mature field and many mesh generators are available [26]. Unfitted FEs have become robust and general schemes that can handle complex geometries on Cartesian meshes [27]. With a mesh, the definition of the lifting is trivial, e.g., one can use a FE offset function.
Footnote 2: The coefficient in Nitsche’s method must be _large enough_ for stability, which can be mathematically quantified in FEM using inverse inequalities. However, NNs by nature do not enjoy inverse inequalities; gradients can be arbitrarily large, and can only be indirectly bounded via regularisation.
In [28], the authors propose to interpolate NNs onto a FE space to overcome the integration issues and design a well-posed PDE-residual loss functional. The imposition of the boundary conditions is handled via distance functions to overcome the training problems related to boundary conditions [29]. The authors compare the solution of the interpolated NN (a FE function) with different standard PINN formulations. Despite the fact that the solution belongs to a fixed FE space (and cannot exploit nonlinear approximation, compared to the other PINN strategies), the results are superior in general. This technique, coined interpolated VPINNs (IVPINNs), has been applied to forward coercive grad-conforming PDEs on rectangular domains.
Unlike other PINNs, a priori error bounds have been obtained [28], even though suboptimal compared to the FEM solution. One can argue what is the benefit of getting sub-optimal FE solutions (measured in the energy norm) using a far more expensive non-convex optimisation solver. However, IVPINNs shed light on the negative impact that integration, residual definition, imposition of boundary conditions, lack of well-posedness, and training have on a straightforward approximation of PDEs using NNs.
In this work, we build upon IVPINNs ideas. However, instead of enforcing boundary conditions at the NN, we propose to strongly impose the boundary conditions at the FE space level. This allows us to readily handle complex geometries without the need to define, e.g., distance functions. To distinguish the two approaches, we coin the proposed method FE interpolated NNs (FEINNs). Besides, we explore the benefits of considering the trained NN (instead of the FE interpolation) as the final solution of the problem, i.e., evaluate how the trained NN _generalises_. We also discuss different PDE residual norms and suggest to use Riesz preconditioning techniques to end up with a well-posed formulation in the continuous limit. We perform a numerical analysis of the method, and prove that the proposed formulation can recover (at least) the _optimal_ FE bounds. Next, we apply these techniques to inverse problems, using a one-loop algorithm, as it is customary in PINNs. We exploit the excellent properties of NNs to fit data. We propose a first step in which we get a state initial guess by data fitting. In a second step, we learn the unknown model parameters by PDE-residual minimisation for a fixed state. The previous steps provide an initialisation for a third fully coupled step with a mixed data-PDE residual cost function.
We carry out a comprehensive set of numerical experiments for forward problems. We check that expressive enough NNs can return FE solutions for different polynomial orders. For smooth problems, the generalisation results for the trained NNs are striking. The non-interpolated FEINN solution can be orders of magnitude more accurate than the FE solution on the same mesh, while IVPINNs do not generalise that well. The definition of the residual norm (and its preconditioned version) can have a tremendous impact in the convergence of the minimisation algorithm. Finally, we test the proposed algorithm for inverse problems. Unlike standard inverse solvers for grid-based methods, we can solve inverse problems with effective and cheap initialisation and one-loop algorithms, even without any kind of regularisation terms.
The outline of the article is the following. Sec. 2 states the model elliptic problem that we tackle, its FE discretisation, the NN architecture, and the proposed loss functions in the FEINN discretisation. Sec. 3 proves that the interpolation of an expressive enough NN recovers the FE solution. In Sec. 4, the proposed discretisation is applied to inverse problems, by defining a suitable loss function that includes data misfit and a multi-step minimisation algorithm. Sec. 5 describes the implementation of the methods and Sec. 6 presents the numerical experiments on several forward and inverse problems. Finally, Sec. 7 draws conclusions and lists potential directions for further research.
## 2. Forward problem discretisation using neural networks
### Continuous problem
In this work, we aim to approximate elliptic PDEs using a weak (variational) setting. As a model problem, we consider a convection-diffusion-reaction equation, even though the proposed methodology can readily be applied to other coercive problems. The problem reads: find \(u\in H^{1}(\Omega)\) such that
\[-\boldsymbol{\nabla}\cdot(\kappa\boldsymbol{\nabla}u)+(\boldsymbol{\beta}\cdot\boldsymbol{\nabla})u+\sigma u=f\quad\text{in }\Omega,\quad u=g\quad\text{on }\Gamma_{D},\quad\kappa\boldsymbol{n}\cdot\boldsymbol{\nabla}u=\eta\quad\text{on }\Gamma_{N}, \tag{1}\]
where \(\Omega\subset\mathbb{R}^{d}\) is a Lipschitz polyhedral domain, \(\Gamma_{D}\) and \(\Gamma_{N}\) are a partition of its boundary such that \(\text{meas}(\Gamma_{D})>0\), \(\kappa\), \(\sigma\in L^{\infty}(\Omega)\), \(\boldsymbol{\beta}\in W^{1,\infty}(\Omega)^{d}\) such that \(\sigma-\boldsymbol{\nabla}\cdot\boldsymbol{\beta}>0\) and \(\boldsymbol{\beta}\cdot\boldsymbol{n}_{|\Gamma_{N}}\geq 0\), \(f\in H^{-1}(\Omega)\), \(g\in H^{1/2}(\Gamma_{D})\), and \(\eta\in H^{-1/2}(\Gamma_{N})\).
Consider the space \(U\doteq H^{1}(\Omega)\), \(\tilde{U}\doteq H^{1}_{0,\Gamma_{D}}(\Omega)\doteq\left\{v\in U\ :\ v_{|\Gamma_{D}}=0\right\}\), a continuous lifting \(\bar{u}\in U\) of the Dirichlet boundary condition (i.e., \(\bar{u}=g\) on \(\Gamma_{D}\)), and the forms
\[a(u,v)=\int_{\Omega}\kappa\boldsymbol{\nabla}u\cdot\boldsymbol{\nabla}v+( \boldsymbol{\beta}\cdot\boldsymbol{\nabla})uv+\sigma uv,\quad\ell(v)=\int_{ \Omega}fv+\int_{\Gamma_{N}}\eta v.\]
(We use the symbol \(\tilde{\cdot}\) to denote trial functions and spaces with zero traces.) The variational form of the problem reads: find \(u=\bar{u}+\tilde{u}\) where

\[\tilde{u}\in\tilde{U}\ :\ a(\tilde{u},v)=\ell(v)-a(\bar{u},v),\quad\forall v\in\tilde{U}. \tag{2}\]
In this setting, the problem with a non-homogeneous Dirichlet boundary condition is transformed into a homogeneous one via the lifting and a modification of the right-hand side (RHS). The well-posedness of the
problem relies on the coercivity and continuity of the forms:
\[a(u,u)\geq\gamma\|u\|_{U}^{2},\quad a(u,v)\leq\xi\|u\|_{U}\|v\|_{U},\quad\ell(v) \leq\chi\|v\|_{U}.\]
Below, we will make use of the PDE residual
\[\mathcal{R}(\tilde{u})\,\doteq\,\ell(\cdot)-a(\bar{u}+\tilde{u},\cdot)\in\tilde{U}^{\prime}. \tag{3}\]
### Finite element approximation
Next, we consider a family of conforming shape-regular partitions \(\{\mathcal{T}_{h}\}_{h>0}\) of \(\Omega\) such that their intersection with \(\Gamma_{N}\) and \(\Gamma_{D}\) is also a partition of these lower-dimensional manifolds; \(h\) represents a characteristic mesh size. On such partitions, we can define a trial FE space \(U_{h}\subset U\) of order \(k_{U}\) and the subspace \(\tilde{U}_{h}\doteq U_{h}\cap H^{1}_{0,\Gamma_{D}}(\Omega)\) of FE functions with zero trace.
We define a FE interpolant \(\pi_{h}:C^{0}\to U_{h}\) obtained by evaluation of the degrees of freedom (DoFs) of \(U_{h}\). In this work, we consider grad-conforming Lagrangian (nodal) spaces (and thus composed of piece-wise continuous polynomials), and DoFs are pointwise evaluations at the Lagrangian nodes. Analogously, we define the interpolant \(\tilde{\pi}_{h}\) onto \(\tilde{U}_{h}\). We can pick a FE lifting \(\bar{u}_{h}\in U_{h}\) such that \(\bar{u}_{h}=\pi_{h}(g)\) on \(\Gamma_{D}\). (The interpolant is restricted to \(\Gamma_{D}\) and could be, e.g., a Scott-Zhang interpolant if \(g\) is non-smooth.) Usually in FEM, \(\bar{u}_{h}\) is extended by zero on the interior.
Using the Galerkin method, the test space is defined as \(V_{h}\doteq\tilde{U}_{h}\), and let \(k_{V}\) be its order. Following [28], we also explore Petrov-Galerkin discretisations. To this end, we consider \(k_{V}\) such that \(s=k_{U}/k_{V}\in\mathbb{N}\), and a family of partitions \(\mathcal{T}_{h/s}\) obtained after \(s\) levels of uniform refinement of \(\mathcal{T}_{h}\). In this case, we choose \(V_{h}\) to be the FE space of order \(k_{V}\) on \(\mathcal{T}_{h/s}\) with zero traces on \(\Gamma_{D}\). We note that the dimension of \(\tilde{U}_{h}\) and \(V_{h}\) are identical. In this work, we only consider \(k_{V}=1\), i.e., a _linearised_ test FE space. The well-posedness of the Petrov-Galerkin discretisation is determined by the discrete inf-sup condition:
\[\inf_{u_{h}\in\tilde{U}_{h}}\sup_{v_{h}\in V_{h}}\frac{a(u_{h},v_{h})}{\|u_{h} \|_{U}\|v_{h}\|_{U}}\geq\beta>0.\]
In both cases, the problem can be stated as: find \(u_{h}=\bar{u}_{h}+\tilde{u}_{h}\) where

\[\tilde{u}_{h}\in\tilde{U}_{h}\ :\ a(\tilde{u}_{h},v_{h})=\ell(v_{h})-a(\bar{u}_{h},v_{h}),\quad\forall v_{h}\in V_{h}. \tag{4}\]
We represent with \(\mathcal{R}_{h}\) the restriction \(\mathcal{R}|_{U_{h}\times V_{h}}\). Given \(u_{h}\in U_{h}\), \(\mathcal{R}(u_{h})\in V_{h}^{\prime}\). \(V_{h}^{\prime}\) is isomorphic to \(\mathbb{R}^{N}\), where \(N\) is the dimension of \(V_{h}\) (and \(\tilde{U}_{h}\)). This representation depends on the basis chosen to span \(V_{h}\).
### Neural networks
We consider a fully-connected, feed-forward NN, obtained by the composition of affine maps and nonlinear activation functions. The network architecture is represented by a tuple \((n_{0},\ldots n_{L})\in\mathbb{N}^{(L+1)}\), where \(L\) is the number of layers and \(n_{k}\) is the number of neurons on layer \(1\leq k\leq L\). We take \(n_{0}=d\) and, for scalar-valued PDEs, we have \(n_{L}=1\). In this work, we use \(n_{1}=n_{2}=...=n_{L-1}=n\), i.e. all the hidden layers have an equal number of neurons \(n\).
At each layer \(1\leq k\leq L\), we represent with \(\mathbf{\Theta}_{k}:\mathbb{R}^{n_{k-1}}\rightarrow\mathbb{R}^{n_{k}}\) the affine map at layer \(k\), defined by \(\mathbf{\Theta}_{k}\mathbf{x}=\mathbf{W}_{k}\mathbf{x}+\mathbf{b}_{k}\) for some weight matrix \(\mathbf{W}_{k}\in\mathbb{R}^{n_{k}\times n_{k-1}}\) and bias vector \(\mathbf{b}_{k}\in\mathbb{R}^{n_{k}}\). The activation function \(\rho:\mathbb{R}\rightarrow\mathbb{R}\) is applied element-wise after every affine map except for the last one. Given these definitions, the network is a parametrizable function \(\mathcal{N}(\mathbf{\theta}):\mathbb{R}^{d}\rightarrow\mathbb{R}\) defined as:
\[\mathcal{N}(\mathbf{\theta})=\mathbf{\Theta}_{L}\circ\rho\circ\mathbf{\Theta}_{L-1}\circ \ldots\circ\rho\circ\mathbf{\Theta}_{1}, \tag{5}\]
where \(\mathbf{\theta}\) stands for the collection of all the trainable parameters \(\mathbf{W}_{k}\) and \(\mathbf{b}_{k}\) of the network. Although the activation functions could be different at each layer or even trainable, we apply the same, fixed activation function everywhere. However, we note that the proposed methodology is not restricted to this specific NN architecture. In this work, we denote the NN architecture with \(\mathcal{N}\) and a realisation of the NN with \(\mathcal{N}(\mathbf{\theta})\).
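As an illustration, the architecture in (5) can be sketched in a few lines of NumPy. This is a minimal sketch, not part of the method: the function names (`init_params`, `forward`) and the Glorot-style initialisation are our own choices.

```python
import numpy as np

def init_params(layers, seed=0):
    # layers = (n0, n1, ..., nL); Glorot-style random weights, zero biases
    rng = np.random.default_rng(seed)
    return [(rng.normal(0.0, np.sqrt(2.0 / (nin + nout)), size=(nout, nin)),
             np.zeros(nout))
            for nin, nout in zip(layers[:-1], layers[1:])]

def forward(params, x, rho=np.tanh):
    # x: (npoints, d); the activation rho is applied after every affine map
    # except the last one, matching Eq. (5)
    a = np.asarray(x, dtype=float).T          # (d, npoints)
    for W, b in params[:-1]:
        a = rho(W @ a + b[:, None])
    W, b = params[-1]
    return (W @ a + b[:, None]).T             # (npoints, n_L)

params = init_params((2, 16, 16, 1))          # (n0, n, n, nL) with d = 2
y = forward(params, [[0.5, 0.5], [0.1, 0.9]])
```

The parameter collection \(\mathbf{\theta}\) corresponds to the `params` list of `(W, b)` pairs.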
### Finite element interpolated neural networks
In this work, we propose the following discretisation of (2), which combines the NN architecture in (5) and the FE problem in (4). Let us consider a norm \(\|\cdot\|_{Y}\) for the discrete residual (choices for this norm are discussed below). We aim to find
\[u_{\mathcal{N}}\in\arg\min_{w_{\mathcal{N}}\in\mathcal{N}}\mathcal{L}(w_{ \mathcal{N}}),\qquad\mathcal{L}(w_{\mathcal{N}})\,\doteq\,\|\mathcal{R}_{h}( \tilde{\pi}_{h}(w_{\mathcal{N}}))\|_{Y}. \tag{6}\]
The computation of \(u_{\mathcal{N}}\) involves a non-convex optimisation problem (due to the nonlinear dependence of \(u_{\mathcal{N}}\) on \(\mathbf{\theta}\)). We prove in the next section that any realisation whose interpolation \(\tilde{\pi}_{h}(u_{\mathcal{N}})\) equals the FE solution is a global minimum of this functional.
In this method, the NN is _free_ on \(\Gamma_{D}\); the imposition of the Dirichlet boundary conditions relies on a FE lifting \(\bar{u}_{h}\) and the interpolation \(\tilde{\pi}_{h}\) onto \(\tilde{U}_{h}\) applied to the NN (thus vanishing on \(\Gamma_{D}\)). Conceptually, the
proposed method trains a NN _pinned_ on the DoFs of the FE space \(\tilde{U}_{h}\), with a loss function that measures the FE residual of the interpolated NN for a given norm. The motivation behind the proposed method is to eliminate the Dirichlet boundary condition penalty term in standard PINNs and related methods [11, 12], while avoiding enforcing the conditions at the NN level (see, e.g., [25] for PINNs and [29] for VPINNs). It also solves the issues related to Monte Carlo integration [20] and avoids the need to use adaptive quadratures [15]. Using standard element-wise integration rules, the integrals in \(\mathcal{R}_{h}\) can be exactly computed (or, at least, its error can be properly quantified for non-polynomial physical parameters and forcing terms). Moreover, in the current setting, we can consider different alternatives for the residual norm and better understand the deficiencies and variational crimes related to standard choices.
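The pinning of the NN on the DoFs of \(\tilde{U}_{h}\) amounts to evaluating the NN at the Lagrangian nodes and zeroing the Dirichlet DoFs. A minimal 1D sketch (the function name `interpolate_pinned` and its arguments are hypothetical, not the paper's API):

```python
import numpy as np

def interpolate_pinned(nn, nodes, dirichlet_mask):
    # \tilde{pi}_h(u_N): evaluate the NN at the Lagrangian nodes of U_h and
    # zero the values at Dirichlet nodes, so the interpolant lies in \tilde{U}_h
    vals = np.array([nn(x) for x in nodes])
    vals[dirichlet_mask] = 0.0
    return vals

nodes = np.linspace(0.0, 1.0, 9)                     # P1 nodes on [0, 1]
mask = np.zeros_like(nodes, dtype=bool)
mask[[0, -1]] = True                                 # Dirichlet boundary nodes
dofs = interpolate_pinned(lambda x: np.sin(np.pi * x) + 0.3, nodes, mask)
```

Note that the NN itself remains unconstrained on \(\Gamma_{D}\); only its interpolant is forced to vanish there.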
### Loss function
As discussed above, the loss function involves the norm of the FE residual. The residual is isomorphic to the vector \([\mathbf{r}_{h}(w_{h})]_{i}=\left\langle\mathcal{R}(w_{h}),\varphi^{i} \right\rangle\doteq\mathcal{R}(w_{h})(\varphi^{i})\), where \(\{\varphi^{i}\}_{i=1}^{N}\) are the FE shape functions that span the test space \(V_{h}\). As a result, we can consider the loss function:
\[\mathcal{L}(u_{\mathcal{N}})=\|\mathbf{r}_{h}(\tilde{\pi}_{h}(u_{\mathcal{N}} ))\|_{\ell^{2}}.\]
This is the standard choice (possibly squared) in the methods proposed so far in the literature that rely on variational formulations [13, 28, 30]. However, as it is well-known in the FE setting, this quantity is ill-posed in the limit \(h\downarrow 0\)[21]. At the continuous level, the norm of \(\mathcal{R}(u)\) is not defined.
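As a concrete toy instance of the residual vector \(\mathbf{r}_{h}\), consider 1D Poisson with P1 elements on a uniform mesh (a hypothetical minimal example, not the implementation used in the paper); the residual vanishes exactly at the FE solution, which is the global minimum targeted by the loss:

```python
import numpy as np

def p1_stiffness_and_load(n, f=1.0):
    # -u'' = f on (0,1), u(0) = u(1) = 0, P1 elements on a uniform mesh of n
    # cells: tridiagonal stiffness matrix A and load vector b (exact for
    # constant f, since int f*phi_i = f*h)
    h = 1.0 / n
    A = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
    b = f * h * np.ones(n - 1)
    return A, b

def residual_vector(u_interior, n):
    # [r_h]_i = <R(u_h), phi^i> = (A u - b)_i; its l2 norm is the "standard
    # choice" loss discussed above
    A, b = p1_stiffness_and_load(n)
    return A @ u_interior - b

n = 16
A, b = p1_stiffness_and_load(n)
u_fe = np.linalg.solve(A, b)                  # FE solution
r_fe = residual_vector(u_fe, n)               # vanishes at the FE solution
r_bad = residual_vector(np.zeros(n - 1), n)   # nonzero elsewhere
```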
If the problem is smooth enough and \(\mathcal{R}\) is well-defined on \(L^{2}(\Omega)\) functions, we can define its \(L^{2}(\Omega)\) projection onto \(V_{h}\) as follows:
\[\mathcal{M}_{h}^{-1}\mathcal{R}_{h}(w_{h})\in V_{h}\ :\ \int_{\Omega} \mathcal{M}_{h}^{-1}\mathcal{R}_{h}(w_{h})v_{h}=\mathcal{R}_{h}(w_{h})(v_{h}), \quad\forall v_{h}\in V_{h}.\]
Next, one can define the cost function
\[\mathcal{L}(u_{\mathcal{N}})=\|\mathcal{M}_{h}^{-1}\mathcal{R}_{h}(\tilde{\pi}_{h}(u_{\mathcal{N}}))\|_{L^{2}(\Omega)},\]

which, for quasi-uniform meshes, is equivalent (up to a constant) to the scaling of the Euclidean norm, i.e., \(h^{d}\|\mathbf{r}_{h}(\tilde{\pi}_{h}(u_{\mathcal{N}}))\|_{\ell^{2}}\). However, for non-smooth solutions, the \(L^{2}\) norm of the residual still does not make sense at the continuous level, and thus, the convergence must deteriorate as \(h\downarrow 0\). One can instead define a discrete Riesz projector \(\mathcal{B}_{h}^{-1}:V_{h}^{\prime}\to V_{h}\) such that
\[\mathcal{B}_{h}^{-1}\mathcal{R}_{h}(w_{h})\in V_{h}\ :\ \left(\mathcal{B}_{h}^{-1} \mathcal{R}_{h}(w_{h}),v_{h}\right)_{U}=\mathcal{R}(w_{h})(v_{h}),\quad\forall v _{h}\in V_{h}.\]
For the model case proposed herein, \(\|\cdot\|_{U}\) is the \(H^{1}\) or \(H^{1}_{0,\Gamma_{D}}\)-norm and \(\mathcal{B}_{h}^{-1}\) is the inverse of the discrete Laplacian. Then, one can consider the cost function:
\[\mathcal{L}(u_{\mathcal{N}})=\|\mathcal{B}_{h}^{-1}\mathcal{R}_{h}(\tilde{\pi }_{h}(u_{\mathcal{N}}))\|_{L^{2}(\Omega)}, \tag{7}\]
or
\[\mathcal{L}(u_{\mathcal{N}})=\|\mathcal{B}_{h}^{-1}\mathcal{R}_{h}(\tilde{\pi }_{h}(u_{\mathcal{N}}))\|_{H^{1}(\Omega)}.\]
These cost functions are well-defined in the limit \(h\downarrow 0\). In practice, one can replace \(\mathcal{B}_{h}^{-1}\) by any spectrally equivalent approximation in order to reduce computational demands. For example, in the numerical experiments section, we consider several cycles of a geometric multigrid (GMG) preconditioner.
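A sketch of the Riesz-preconditioned loss for a 1D discrete Laplacian follows; a direct solve stands in for the spectrally equivalent preconditioner (e.g., GMG cycles) mentioned above, and all names are our own:

```python
import numpy as np

def riesz_loss(r, A_riesz):
    # z = B_h^{-1} r solves the discrete Riesz problem (z, v)_U = R(v) for all
    # v in V_h; here A_riesz plays the role of the discrete Laplacian (the
    # H^1_0 inner product matrix).  The loss is ||z||_U = sqrt(z^T A_riesz z).
    # In practice the solve would be replaced by a spectrally equivalent
    # preconditioner application; a direct solve stands in for it here.
    z = np.linalg.solve(A_riesz, r)
    return np.sqrt(z @ A_riesz @ z)

# 1D discrete Laplacian on a uniform mesh of n cells (toy example)
n = 8
h = 1.0 / n
A = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
r = np.ones(n - 1)   # some residual vector
```

By construction the loss is zero only for a vanishing residual and is positively homogeneous in \(r\).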
## 3. Analysis
In this section, we first show that the proposed loss functions are differentiable. Next, we show that the interpolation of the NN architecture can return any FE function in a given FE space. Combining these two results, we observe that there exists a global minimum of the FEIN problem in (6) such that its interpolation is the solution of the FE problem (4).
**Proposition 3.1**.: _The loss function is differentiable for \(\mathcal{C}^{0}\) activation functions._
Proof.: Using the chain rule, we observe that
\[\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}\boldsymbol{\theta}}=\frac{\mathrm{d} \mathcal{L}}{\mathrm{d}\mathbf{r}_{h}}\frac{\mathrm{d}\mathbf{r}_{h}}{\mathrm{ d}\mathbf{u}_{h}}\frac{\mathrm{d}\mathbf{u}_{h}}{\mathrm{d}\boldsymbol{\theta}},\]
for \(\mathbf{u}_{h}\) being the DoFs of the FE space \(U_{h}\). The first derivative in the RHS simply involves the squared root of a quadratic functional. The second derivative is the standard Jacobian of the FE problem. The third
derivative is the vector of derivatives of the NN at the nodes of \(U_{h}\), which is well-defined for \(C^{0}\) activation functions. As a result, \(\mathcal{L}\) is differentiable.
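The chain rule above can be checked against finite differences on a toy problem: a one-parameter "network" \(u_{\mathcal{N}}(x;\theta)=\theta\sin(\pi x)\) pinned at the interior nodes of a 1D Poisson discretisation (a hypothetical sketch, with our own names):

```python
import numpy as np

def loss_and_grad(theta, n=8):
    # dL/dtheta = (dL/dr)(dr/du)(du/dtheta) for the l2-residual loss of
    # -u'' = 1 on (0,1) with P1 elements, u_N(x; theta) = theta * sin(pi x)
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)[1:-1]               # interior nodes
    A = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
    b = h * np.ones(n - 1)
    du = np.sin(np.pi * x)            # du/dtheta at the nodes
    r = A @ (theta * du) - b          # residual vector r_h
    L = np.linalg.norm(r)             # l2-residual loss
    dL = (r / L) @ (A @ du)           # chain rule: (dL/dr)(dr/du)(du/dtheta)
    return L, dL

L0, g = loss_and_grad(0.3)
eps = 1e-6
Lp, _ = loss_and_grad(0.3 + eps)
Lm, _ = loss_and_grad(0.3 - eps)
fd = (Lp - Lm) / (2 * eps)            # central finite difference
```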
Consequently, one can use gradient-based minimisation techniques. We note that this is not the case when the NN is evaluated without FE interpolation. For instance, refer to [15] for a simple example that shows ReLU activation functions cannot be used for PDE approximation using PINNs and related methods. In PINNs, one must compute \(\boldsymbol{\nabla}_{\boldsymbol{\theta}}\boldsymbol{\nabla}_{\boldsymbol{x} }N\), which poses additional smoothness requirements on the activation function. However, in the proposed methodology (as in [28]), the spatial derivatives are computed by the interpolated function, not the NN, and thus not affected by this constraint. For simplicity, we prove the result for the ReLU activation function.
**Proposition 3.2**.: _Let \(U_{h}\) be a FE space on a mesh \(\mathcal{T}_{h}\) in \(\mathbb{R}^{d}\) with a number of DoFs \(N\simeq h^{-d}\). Let \(\mathcal{N}\) be a neural network architecture with 3 layers, \((3dN,dN,N)\) neurons per layer, and a ReLU activation function. For any \(u_{h}\in U_{h}\), there exists a choice of the NN parameters \(\boldsymbol{\theta}\) such that \(\pi_{h}(u_{\mathcal{N}})=u_{h}\)._
Proof.: At each node \(\boldsymbol{n}\in\mathcal{T}_{h}\), one can define a box \(\mathcal{B}_{\boldsymbol{n}}\) centred at \(\boldsymbol{n}\) that only contains this node of the mesh. Let us consider first the 1D case. For ReLU activation functions, one can readily define a hat function with support in \([0,1]\) as follows. First, we consider
\[f_{1}(x)=2x,\quad f_{2}(x)=4x-2,\quad f_{3}(x)=2x-2.\]
One can check that \(f=\rho(f_{1})-\rho(f_{2})+\rho(f_{3})\) is a hat function with value 1 at \(x=1/2\) and support in \([0,1]\). One can readily consider a scaling and translation to get \(\operatorname{supp}(f)\subset\mathcal{B}_{\boldsymbol{n}}\). This way, assuming one has \(3N\) neurons in the first layer and \(N\) neurons in the second layer, one can emulate the 1D FE basis in the second layer.
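The three-ReLU hat construction can be verified numerically:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def hat(x):
    # f = rho(f1) - rho(f2) + rho(f3) with f1 = 2x, f2 = 4x - 2, f3 = 2x - 2:
    # a hat function with value 1 at x = 1/2 and support [0, 1]
    return relu(2.0 * x) - relu(4.0 * x - 2.0) + relu(2.0 * x - 2.0)

x = np.linspace(-0.5, 1.5, 201)
vals = hat(x)
```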
In 2D, one can create the 1D functions for both \(x\) and \(y\) directions. It requires \(6N\) neurons in the first layer and \(2N\) neurons in the second layer. Thus, for each node, we have two hat functions, namely \(b_{1}\) and \(b_{2}\), that depend on \(x\) and \(y\), respectively. Now, in a third layer with \(N\) neurons, we can compute \(\rho(b_{1}+b_{2}-1)\) at each node. We can generalise this construction to an arbitrary dimension \(d\). We need \(3dN\) neurons in the first layer to create the 1D functions in all directions. The hat functions are created in a second layer with \(dN\) neurons. The final functions are combined as \(\rho(\sum_{i=1}^{d}b_{i}-d+1)\). We note that, by the construction of \(b_{i}\), these functions have value one in the corresponding node and their support is contained in the corresponding box.
In the last layer, we end up with a set of functions \(\psi_{i}\) that are equal to 1 on one node and zero on the rest. Besides, the FE function can also be expressed as \(u_{h}=\sum_{i=1}^{N}u^{i}\boldsymbol{\varphi}^{i}(\boldsymbol{x})\) and \(\pi_{h}(u_{\mathcal{N}})=\sum_{i=1}^{N}u_{\mathcal{N}}(\boldsymbol{n}_{i}) \boldsymbol{\varphi}^{i}(\boldsymbol{x})\). Linearly combining the last layer functions with the DoF values \(\{u^{i}\}_{i=1}^{N}\) we construct a NN realisation that proves the proposition.
**Remark 3.3**.: _For other activation functions like tanh or sigmoid, it is not possible to construct localised functions with compact support as in the proof above. However, one can consider a piecewise polynomial approximation of these activation functions (e.g., using B-splines) with this property [31]. Then, one can use a similar construction as in ReLU._
We note that this construction can be further optimised by exploiting the structure of the underlying FE mesh \(\mathcal{T}_{h}\). For instance, for a structured mesh of a square with \(n\) parts per direction (\(N=n^{d}\)), only \(3n\) neurons are needed. We can exploit the fact that many nodes share the same coordinates in some directions. For the same reason, only \(dn\) neurons are required in the second layer. On the other hand, for more than 3 layers, the computations can be arranged among neurons/layers in different ways. For simplicity, in the proposition, we consider a worst-case scenario situation (no nodes share coordinate components and we only consider the arrangement in the proposition statement).
**Proposition 3.4**.: _Let us assume that the FE problem (4) is well-posed and admits a unique solution \(\tilde{u}_{h}\). The FEIN problem (6) admits a global minimiser \(u_{\mathcal{N}}\) such that \(\tilde{\pi}_{h}(u_{\mathcal{N}})=\tilde{u}_{h}\)._
Proof.: First, we note that the loss function is differentiable (by Prop. 3.1) and positive. Besides, from the statement of the problem and Prop. 3.2, one can readily check that there exists a \(u_{\mathcal{N}}\) such that \(\mathcal{R}_{h}(\tilde{\pi}_{h}(u_{\mathcal{N}}))=\mathcal{R}_{h}(\tilde{u}_{h})=0\) and thus \(\mathcal{L}(u_{\mathcal{N}})=0\), i.e. \(u_{\mathcal{N}}\) is a global minimum of the cost function. 
As a result, the FEINN method can exhibit optimal convergence rates (the ones of FEM), provided the NN is expressive enough compared to the FE space. In Sec. 6.1, we experimentally observe this behaviour. This
analysis is different from the one in [28], which, using a completely different approach, proves sub-optimal results in a different setting. The numerical experiments in [28] and in Sec. 6.1 show that IVPINNs can also recover optimal convergence rates. In fact, the results above can straightforwardly be extended to IVPINNs. The sub-optimality in [28] is related to the choice of the residual norm, the \(\ell^{2}\) norm of the residual vector. Sharper estimates could likely be obtained with the new residual norms suggested in Sec. 2.5.
## 4. Inverse problem discretisation using neural networks
In this section, we consider a PDE-constrained inverse problem that combines observations of the state variable \(u\) and a partially known model (1). Let us represent with \(\mathbf{\Lambda}\) the collection of unknown model parameters. It can include the physical coefficients, forcing terms and Dirichlet and Neumann boundary values. We parametrise \(\mathbf{\Lambda}\) with one or several NNs, e.g., as the ones proposed for the state variable in Sec. 2.4, which will be represented with \(\mathbf{\Lambda}_{N}\). Again, \(n_{0}=d\), while \(n_{L}\) depends on whether the unknown model parameter of the specific problem is a scalar-valued (\(n_{L}=1\)), vector-valued (\(n_{L}=d\)) or tensor-valued (\(n_{L}=d^{2}\)) field.
Let us denote with \(\mathcal{R}(\mathbf{\Lambda},u)\) the PDE residual in (3), where we make explicit its dependence with respect to the unknown model parameters (idem for \(\mathcal{R}_{h}\)). For integration purposes, we consider the interpolation of the model parameters onto FE spaces, which we represent with \(\boldsymbol{\pi}_{h}(\mathbf{\Lambda}_{N})\). The discrete model parameter FE spaces can in general be different to \(U_{h}\) (just as their infinite-dimensional counterpart spaces might be different to \(U\)) and do not require imposition of boundary conditions. Besides, the interpolation can be restricted to different boundary regions for Dirichlet and Neumann values. If we consider a discontinuous nodal FE space with nodes on the quadrature points of the Gaussian quadrature being used for integration (as in the numerical experiments), the interpolated and non-interpolated methods are equivalent. Thus, the interpolant simply accounts for the integration error being committed when integrating the NNs for the unknown model parameters.
Let us consider a measurement operator \(\mathcal{D}:U\to\mathbb{R}^{M}\) and the corresponding vector of observations \(\mathbf{d}\in\mathbb{R}^{M}\). The loss function for the inverse problem must contain the standard data misfit term and a term that accounts for the PDE residual. The method is understood as a (PDE-)constrained minimisation problem. As a result, the PDE residual is weighted by a (dynamically adapted) penalty coefficient. We consider the loss functional:
\[\mathcal{L}(\mathbf{\Lambda},u)\doteq\|\mathbf{d}-\mathcal{D}(u)\|_{\ell^{2} }+\alpha\|\mathcal{R}_{h}(\boldsymbol{\pi}_{h}(\mathbf{\Lambda}),\tilde{\pi} _{h}(u))\|_{Y}, \tag{8}\]
for any of the choices of the residual norm discussed above, where \(\alpha\in\mathbb{R}^{+}\) is a penalty coefficient for the weak imposition of the PDE constraint. The inverse problem reads:
\[u_{\mathcal{N}},\mathbf{\Lambda}_{\mathcal{N}}\in\operatorname*{arg\,min}_{w_{\mathcal{N}},\,\boldsymbol{\Xi}_{\mathcal{N}}}\mathcal{L}(\boldsymbol{\Xi}_{\mathcal{N}},w_{\mathcal{N}}). \tag{9}\]
We refer to [32] for an application of penalty methods to inverse problems. However, their approach is more akin to the adjoint method, where they eliminate the state. We note that our approach is a _one-loop_ minimisation algorithm, i.e., one can minimise for both the state and unknown model parameters at the same time. This differs from adjoint methods, in which the loss function and the minimisation are expressed in terms of \(\mathbf{\Lambda}\) only, but the state \(u(\mathbf{\Lambda})\) is constrained to be the solution of the (discrete) PDE at each iterate of \(\mathbf{\Lambda}\).
To alleviate the challenges associated with the training of the loss function described in (9) and enhance the robustness of our method, we propose the following algorithm. The motivation behind its design is to exploit the excellent properties of NNs for data fitting. First, we train the state NN with the observations. Next, we train the unknown model parameters NNs with the PDE residual, but freeze the state variable to the value obtained in the previous step. These steps are computationally lightweight because they do not involve differential operators in the training processes. These two initial steps are finally used as initialisation for the one-loop minimisation in (9). We summarise the algorithm below:
* Step 1 (Data fitting): Train the state neural network to fit the observed data, using standard NN initialisation: \[u_{\mathcal{N}}^{0}=\operatorname*{arg\,min}_{w_{\mathcal{N}}}\|\mathbf{d}-\mathcal{D}(w_{\mathcal{N}})\|_{\ell^{2}}.\]
* Step 2 (Unknown model parameters initialisation): Train the model parameter NNs with the PDE residual for the fixed state \(u_{\mathcal{N}}^{0}\) computed in Step 1, using standard NN initialisation: \[\mathbf{\Lambda}_{\mathcal{N}}^{0}=\operatorname*{arg\,min}_{\boldsymbol{\Xi}_{\mathcal{N}}}\|\mathcal{R}_{h}(\boldsymbol{\pi}_{h}(\boldsymbol{\Xi}_{\mathcal{N}}),\tilde{\pi}_{h}(u_{\mathcal{N}}^{0}))\|_{Y}.\]
* Step 3 (Fully coupled minimisation): Train both the state and model parameter NNs with the full loss function (8), starting from \(u_{\mathcal{N}}^{0}\) and \(\mathbf{\Lambda}_{\mathcal{N}}^{0}\).
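The three steps can be illustrated on a toy 1D inverse problem: recover \(\lambda\) in \(-\lambda u''=f\) from sparse observations of \(u\). All names below are ours; a global sine ansatz stands in for the state NN (it shares the global support that makes Step 1 work with few observations), and a squared \(\ell^{2}\) residual replaces the norms above for smoothness.

```python
import numpy as np
from scipy.optimize import minimize

n, K, lam_true, alpha = 41, 5, 2.0, 1e-4
x = np.linspace(0.0, 1.0, n); h = x[1] - x[0]
u_true = np.sin(np.pi * x)
f = lam_true * np.pi**2 * np.sin(np.pi * x)          # manufactured forcing

B = np.sin(np.pi * np.outer(x, np.arange(1, K + 1)))  # ansatz, zero on boundary
obs_idx = np.array([5, 13, 20, 27, 35])               # sparse observation points
d = u_true[obs_idx]

def residual(lam, c):                                 # discrete PDE residual
    u = B @ c
    return lam * (-u[:-2] + 2*u[1:-1] - u[2:]) / h**2 - f[1:-1]

# Step 1: data fitting only.
c0 = np.linalg.lstsq(B[obs_idx], d, rcond=None)[0]

# Step 2: parameter initialisation with the state frozen
# (the residual is linear in lam, so this is a 1D least-squares fit).
s = residual(1.0, c0) + f[1:-1]
lam0 = (s @ f[1:-1]) / (s @ s)

# Step 3: fully coupled minimisation of data misfit + penalised residual.
def loss(p):
    lam, c = p[0], p[1:]
    mis = B[obs_idx] @ c - d
    r = residual(lam, c)
    return mis @ mis + alpha * (r @ r)

p = minimize(loss, np.concatenate([[lam0], c0]), method="BFGS").x
lam_rec = p[0]
```

Steps 1 and 2 are cheap (no coupled training) and already place the iterate near the minimiser, which is precisely the motivation stated above.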
It is important to point out that the three-step training process is facilitated by the incorporation of NNs. Our attempts to train a FE function in the data step have not been successful, especially when the number of observations is much smaller than the DoFs of the FE space. This is attributed to the local support of FE functions, which limits the adjustment of the values of the free nodes that are directly influenced by the observations. In contrast, NNs with their global support, allow for parameter tuning across the entire domain in the data step.
## 5. Implementation
We rewrite (8) in the following algebraic form
\[\mathcal{L}(\mathbf{\theta}_{\lambda},\mathbf{\theta}_{u})=\left\|\mathbf{e}(\mathbf{ u}_{h}(\mathbf{\theta}_{u}))\right\|_{\ell^{2}}+\alpha\left\|\mathbf{r}_{h}( \mathbf{u}_{h}(\mathbf{\theta}_{u}),\mathbf{\lambda}_{h}(\mathbf{\theta}_{\lambda}))\right\| _{\ell^{1}}, \tag{10}\]
where \(\mathbf{e}\doteq\mathbf{d}-\mathcal{D}_{h}\mathbf{u}_{h}\) is the data misfit error, \(\mathbf{r}_{h}\) is the variational residual vector, \(\mathbf{u}_{h}\), \(\mathbf{\lambda}_{h}\) are the vectors of DoFs of \(\tilde{\mathbf{\pi}}_{h}(u_{\mathcal{N}}(\mathbf{\theta}_{u}))\) and \(\mathbf{\pi}_{h}(\mathbf{\Lambda}_{\mathcal{N}}(\mathbf{\theta}_{\lambda}))\) of the NN realisations \(u_{\mathcal{N}}(\mathbf{\theta}_{u})\) and \(\mathbf{\Lambda}_{\mathcal{N}}(\mathbf{\theta}_{\lambda})\) for the arrays of parameters \(\mathbf{\theta}_{u}\) and \(\mathbf{\theta}_{\lambda}\), respectively. We have chosen the \(\ell^{1}\) residual norm in (10) because it is the one we have used in the numerical tests for inverse problems in Sec. 6. However, the proposed implementation is general and can be easily adapted to other choices of residual norms proposed above.
We describe below an implementation of FEINNs using Julia packages, even though the proposed implementation is general. In Julia, we rely on the existing packages Flux.jl[33, 34] for the neural network part and Gridap.jl[35, 36] for the FEM part. We employ ChainRules.jl[37] to automatically propagate user-defined rules across the code.
To minimise the loss function (10) with gradient-based training algorithms, these gradients are required:
\[\frac{\partial\mathcal{L}}{\partial\mathbf{\theta}_{u}}=\left(\frac{\partial \mathcal{L}}{\partial\mathbf{r}_{h}}\frac{\partial\mathbf{r}_{h}}{\partial \mathbf{u}_{h}}+\frac{\partial\mathcal{L}}{\partial\mathbf{e}}\frac{\partial \mathbf{e}}{\partial\mathbf{u}_{h}}\right)\frac{\partial\mathbf{u}_{h}}{ \partial\mathbf{\theta}_{u}},\qquad\frac{\partial\mathcal{L}}{\partial\mathbf{\theta}_ {\lambda}}=\frac{\partial\mathcal{L}}{\partial\mathbf{r}_{h}}\frac{\partial \mathbf{r}_{h}}{\partial\mathbf{\lambda}_{h}}\frac{\partial\mathbf{\lambda}_{h}}{ \partial\mathbf{\theta}_{\lambda}}.\]
Existing chain rules in ChainRules.jl can readily handle \(\partial\mathcal{L}/\partial\mathbf{r}_{h}\) and \(\partial\mathcal{L}/\partial\mathbf{e}\). We need to define specific rules for the automatic differentiation of the following tasks:
* The interpolation of a NN onto a FE space in \(\partial\mathbf{u}_{h}/\partial\mathbf{\theta}_{u}\) and \(\partial\mathbf{\lambda}_{h}/\partial\mathbf{\theta}_{\lambda}\);
* The computation of the FE residual in \(\partial\mathbf{r}_{h}/\partial\mathbf{u}_{h}\) and \(\partial\mathbf{r}_{h}/\partial\mathbf{\lambda}_{h}\);
* The measurement operator \(\mathcal{D}\) on the FE state in \(\partial\mathbf{e}/\partial\mathbf{u}_{h}\).
It is important to highlight that we never explicitly construct the global Jacobian matrices in our implementation. To evaluate the gradient \(\partial\mathcal{L}/\partial\mathbf{\theta}_{\lambda}\), we utilise Gridap.jl to compute the Jacobian \(\partial\mathbf{r}_{h}/\partial\mathbf{\lambda}_{h}\) cell-wise (i.e., at each cell of \(\mathcal{T}_{h}\) separately), and restrict the vector \(\partial\mathcal{L}/\partial\mathbf{r}_{h}\) to each cell. By performing the vector Jacobian product (VJP) within each cell for \(\partial\mathcal{L}/\partial\mathbf{r}_{h}\) and \(\partial\mathbf{r}_{h}/\partial\mathbf{\lambda}_{h}\), we obtain the cell-wise vectors that can be assembled to form \(\partial\mathcal{L}/\partial\mathbf{\lambda}_{h}\). With the help of Flux.jl, we can calculate the gradient \(\partial\mathcal{L}/\partial\mathbf{\theta}_{\lambda}\) by performing the VJP for \(\partial\mathcal{L}/\partial\mathbf{\lambda}_{h}\) and \(\partial\mathbf{\lambda}_{h}/\partial\mathbf{\theta}_{\lambda}\), without explicitly constructing the Jacobian \(\partial\mathbf{\lambda}_{h}/\partial\mathbf{\theta}_{\lambda}\). This cell-wise approach recasts most of the floating point operations required to compute the gradients in terms of dense matrix-vector products. This results in a reduction of the computational times and memory requirements.
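The cell-wise VJP and assembly described above can be sketched with small hypothetical shapes (the cell-to-DoF maps and Jacobians below are made up for illustration): the global gradient is accumulated from per-cell dense products, without ever forming the global Jacobian.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_res, n_dofs = 4, 5, 6
res_dofs   = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])      # cell -> test DoFs
trial_dofs = np.array([[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]])
J_cell = rng.normal(size=(n_cells, 2, 3))   # cell Jacobians d r_c / d lam_c
v = rng.normal(size=n_res)                  # the vector dL/d r_h

grad = np.zeros(n_dofs)
for c in range(n_cells):
    g_c = v[res_dofs[c]] @ J_cell[c]        # VJP restricted to the cell
    np.add.at(grad, trial_dofs[c], g_c)     # scatter-add into the global vector
```

The result coincides with \(\partial\mathcal{L}/\partial\mathbf{r}_{h}\) times the assembled global Jacobian, but only small dense matrix-vector products are ever performed.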
The gradient \(\partial\mathcal{L}/\partial\mathbf{\theta}_{u}\) has two contributions, corresponding to the FE residual and data misfit terms. The same process described above is applied to compute the former contribution. The contribution of the data misfit term involves the computation of \(\partial\mathcal{L}/\partial\mathbf{e}\,\partial\mathbf{e}/\partial\mathbf{u}_{h}\), which has not been discussed so far. In our implementation, it also follows an efficient cell-wise approach. In particular, we identify those cells with at least one observation point and, for these cells, we evaluate the cell shape functions at the observation points. This is nothing but the restriction of \(\partial\mathbf{e}/\partial\mathbf{u}_{h}\) to the observation points and DoFs of the cell. We then restrict the vector \(\partial\mathcal{L}/\partial\mathbf{e}\) to these cells, and compute the VJP between these vector and Jacobian restrictions. Finally, we assemble the resulting cell-wise vector contributions to obtain the global data misfit contribution to the vector \(\partial\mathcal{L}/\partial\mathbf{u}_{h}\).
Once all the rules for Jacobian computations are appropriately defined, ChainRules.jl seamlessly combines them, enabling smooth gradient computation during the training process.
Let us finish the section with a discussion about computational cost. In FEINNs and IVPINNs, one computes the spatial derivatives in the residual on FE functions in \(\partial\mathbf{r}_{h}/\partial\mathbf{u}_{h}\) and the derivatives of pointwise evaluations of the NN with respect to parameters in \(\partial\mathbf{u}_{h}/\partial\mathbf{\theta}\) separately. The expression of the polynomial derivatives is straightforward and the parameter differentiation is the one required in standard data fitting (and
thus, highly optimised in machine learning frameworks). On the contrary, in standard PINNs the residual is not evaluated with the projection \(\mathbf{u}_{h}\) but the NN itself. One must compute \(\partial\mathbf{r}_{h}/\partial\boldsymbol{\theta}\) directly. It involves _nested_ derivatives (in terms of parameters and input features) that are more expensive (and less common in data science).
## 6. Numerical experiments
### Forward problems
We use the standard \(L^{2}\) and \(H^{1}\) error norms to evaluate the precision of the approximation \(u^{id}\) for forward problems:
\[e_{L^{2}(\Omega)}(u^{id})=\left\|u-u^{id}\right\|_{L^{2}(\Omega)},\qquad e_{H^ {1}(\Omega)}(u^{id})=\left\|u-u^{id}\right\|_{H^{1}(\Omega)},\]
where \(u\) is the true state, \(\left\|\cdot\right\|_{L^{2}(\Omega)}=\sqrt{\int_{\Omega}\left|\cdot\right|^{2}}\), and \(\left\|\cdot\right\|_{H^{1}(\Omega)}=\sqrt{\int_{\Omega}\left|\cdot\right|^{2}+\left|\boldsymbol{\nabla}(\cdot)\right|^{2}}\). The integrals in these terms are evaluated with a Gauss quadrature rule, using a sufficient number of quadrature points to guarantee accuracy. Note that, in the forward problem experiments, \(u^{id}\) can either be a NN or its interpolation onto a suitable FE space. We will specify which representation is used explicitly when necessary.
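As an illustration of this error evaluation, the following is a 1D composite-Gauss-quadrature sketch (the cell count and rule order are our choices, not the paper's):

```python
import numpy as np

q, w = np.polynomial.legendre.leggauss(5)        # 5-point rule on [-1, 1]

def l2_error(u, u_id, n_cells=64):
    """Composite Gauss approximation of ||u - u_id||_{L2(0,1)}."""
    edges = np.linspace(0.0, 1.0, n_cells + 1)
    err2 = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        xq = 0.5*(b - a)*q + 0.5*(a + b)         # map quadrature points to cell
        err2 += 0.5*(b - a) * np.sum(w * (u(xq) - u_id(xq))**2)
    return np.sqrt(err2)
```

The \(H^1\) seminorm term is evaluated analogously, with gradients in place of values.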
As for the experiments, we first compare FEINNs with IVPINNs by solving the forward convection-diffusion-reaction problem (1). Next, we shift to the Poisson equation, i.e., problem (1) with \(\boldsymbol{\beta}=\boldsymbol{0}\) and \(\sigma=0\), and analyse the impact of preconditioning on accelerating convergence during the training process. Finally, we showcase the effectiveness of FEINNs in complex geometries by solving a Poisson problem in a domain characterised by irregular shapes. It is worth noting that a comprehensive comparison in terms of computational cost and accuracy between IVPINNs, PINNs, and VPINNs has already been conducted in [28]. In these experiments, the accuracy of IVPINNs is similar or better than the other PINNs being analysed for a given number of NN evaluations. The computational cost of IVPINNs is reported to be lower than standard PINN approaches, which is explained by the different cost of differentiation in each case, as explained in Sec. 5. As a result, we restrict ourselves to the comparison between FEINNs and IVPINNs and refer the reader to [28] for the relative merit of FEINNs over PINNs.
In all the experiments in this section, we adopt the NN architecture in [28], namely \(L=5\) layers, \(n=50\) neurons for each hidden layer, and \(\rho=\tanh\) as activation function, so that we can readily compare these results with the ones in [28] for standard PINNs. Besides, this choice strikes a good balance between the finest FE resolution being used and the NN expressivity. Indeed, we have experimentally observed that increasing the expressiveness of the NN (additional number of layers and/or neurons per layer) for the finest FE mesh being used in our experiments does not noticeably improve the results.
In addition, we employ Petrov-Galerkin discretisations, i.e., we use a linearised test space \(V_{h}\) as defined in Sec. 2.2. Unless otherwise specified, we adopt the \(\ell^{2}\) norm in the loss function (6). In all the experiments in this section and Sec. 6.2, we use the Glorot uniform method [38] for NN parameter initialisation and the BFGS optimiser in Optim.jl[39].3
Footnote 3: We have experimentally observed that L-BFGS is not as effective as BFGS for the problems considered in this paper.
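The experiment architecture can be sketched in plain numpy (the paper's code uses Flux.jl; we read "\(L=5\) layers" as five weight layers, which is an assumption about the counting): a tanh MLP with 50 neurons per hidden layer and Glorot-uniform initialisation.

```python
import numpy as np

def glorot(rng, fan_in, fan_out):
    """Glorot (Xavier) uniform initialisation [38]."""
    lim = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-lim, lim, size=(fan_in, fan_out))

def init_mlp(rng, sizes=(2, 50, 50, 50, 50, 1)):
    """One (W, b) pair per weight layer; biases start at zero."""
    return [(glorot(rng, m, n), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

def forward(params, x):
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)   # tanh hidden layers
    W, b = params[-1]
    return x @ W + b             # linear output layer

rng = np.random.default_rng(42)
params = init_mlp(rng)
```

For the forward problems the input dimension is \(d=2\) and the output is scalar, hence the `(2, ..., 1)` sizes.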
#### 6.1.1. Convection-diffusion-reaction equation with a smooth solution
We replicate most of the experiment settings in [28, Convergence test #1], allowing the interested reader to check how other PINNs perform in similar experiments by looking at this reference. Specifically, the problem is defined on a square domain \(\Omega=[0,1]^{2}\), \(\Gamma_{D}\) is spanned by the left and right sides, and \(\Gamma_{N}\) by the top and bottom ones. We choose the following analytical functions for the model parameters:
\[\kappa(x,y)=2+\sin(x+2y),\qquad\boldsymbol{\beta}(x,y)=\left[\sqrt{x-y^{2}+5},\ \sqrt{y-x^{2}+5}\right]^{\mathrm{T}},\qquad\sigma(x,y)=\mathrm{e}^{\frac{x}{3}-\frac{y}{3}}+2,\]
and pick \(f\), \(g\) and \(\eta\) such that the exact solution is:
\[u(x,y)=\sin(3.2x(x-y))\cos(x+4.3y)+\sin(4.6(x+2y))\cos(2.6(y-2x)).\]
We discretise the domain using uniform meshes of quadrilateral elements of equal size.
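This is the method of manufactured solutions, which can be checked numerically. The sketch below (plain numpy, names ours) assumes problem (1) has the standard convection-diffusion-reaction form \(-\nabla\cdot(\kappa\nabla u)+\boldsymbol{\beta}\cdot\nabla u+\sigma u=f\), and that the garbled exponent in \(\sigma\) reads \(x/3-y/3\); \(f\) then follows from the chosen exact \(u\) via finite differences.

```python
import numpy as np

kappa = lambda x, y: 2 + np.sin(x + 2*y)
beta  = lambda x, y: (np.sqrt(x - y**2 + 5), np.sqrt(y - x**2 + 5))
sigma = lambda x, y: np.exp(x/3 - y/3) + 2   # assumed reading of the exponent
u = lambda x, y: (np.sin(3.2*x*(x - y))*np.cos(x + 4.3*y)
                  + np.sin(4.6*(x + 2*y))*np.cos(2.6*(y - 2*x)))

def f(x, y, h=1e-4):
    """f = -div(kappa grad u) + beta . grad u + sigma u, via central
    finite differences of the exact solution."""
    dx = lambda g: (g(x + h, y) - g(x - h, y)) / (2*h)
    dy = lambda g: (g(x, y + h) - g(x, y - h)) / (2*h)
    qx = lambda X, Y: kappa(X, Y) * (u(X + h, Y) - u(X - h, Y)) / (2*h)
    qy = lambda X, Y: kappa(X, Y) * (u(X, Y + h) - u(X, Y - h)) / (2*h)
    bx, by = beta(x, y)
    return -dx(qx) - dy(qy) + bx*dx(u) + by*dy(u) + sigma(x, y)*u(x, y)
```

In the actual experiments \(f\), \(g\) and \(\eta\) are of course derived symbolically; the numerical version above only serves to make the construction concrete.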
It is crucial to emphasize that IVPINNs and FEINNs share a fundamental idea at their core: the interpolation of NNs (or their product with a function for IVPINNs) onto a corresponding FE space. The primary distinction lies in the approach used to impose the Dirichlet boundary condition. FEINNs rely on the trial FE space to enforce the boundary condition, using an interpolation that enforces zero trace on \(\Gamma_{D}\). IVPINNs, however, rely on an offset function \(\bar{u}\) and a distance function \(\Phi\), where \(\bar{u}\) is as smooth as \(u\) and satisfies the Dirichlet boundary condition, and \(\Phi\in\tilde{U}\). The authors propose in [28] to train an auxiliary neural
network or use transfinite interpolation of the data to compute the lifting \(\bar{u}\) of the Dirichlet data \(g\). So, the true state can be expressed as \(\Phi\circ u_{\mathcal{N}}+\bar{u}\). IVPINNs interpolate this expression onto the FE space. The interpolated NN composition now belongs to \(\tilde{U}_{h}\) due to the property of \(\Phi\), and the full expression (approximately) satisfies the Dirichlet boundary condition because of the existence of \(\bar{u}\). The loss function of the method reads:
\[u_{\mathcal{N}}\in\arg\min_{w_{N}}\|\hat{\mathbf{r}}_{h}(\pi_{h}(\Phi\circ w_{ \mathcal{N}}+\bar{u}))\|,\quad[\hat{\mathbf{r}}_{h}]_{i}\doteq\ell(\varphi^{i} )-a(\pi_{h}(\Phi\circ w_{\mathcal{N}}+\bar{u}),\varphi^{i}),\]
where \(\{\varphi^{i}\}_{i=1}^{N}\) are the shape functions that span \(V_{h}\). In our numerical experiments, we have considered the training of an auxiliary NN to approximate \(\bar{u}\), but the results were not satisfactory. (Probably, because we are computing a function in \(\Omega\) with data on \(\Gamma_{D}\) only.) We have considered instead a discrete harmonic extension (i.e., a FE approximation of the Poisson problem with \(g\) on \(\Gamma_{D}\)) to approximate \(\bar{u}\). Since we consider a trivial square domain, the distance function \(\Phi\) can readily be defined as the product of the linear polynomials, i.e., \(\Phi(x,y)=x(1-x)\); note that \(\Gamma_{D}\) only includes the left and right sides of the squared domain.
In addition to evaluating the performance of FEINNs and IVPINNs, we also examine how the NNs generalise. For IVPINNs, we compute the error of the NN composition \(\Phi\circ u_{\mathcal{N}}+\bar{u}\), while, in the case of FEINNs, we compute the error of \(u_{\mathcal{N}}\) directly. We emphasise that this setting aligns with the principles of NN training: we train the NN with data in a set of points (the nodes of the mesh), and if the training is effective, we expect the NN to yield low error on the whole domain \(\bar{\Omega}\).
For the first experiment, we investigate the impact of mesh refinement on the approximation error. Keeping \(k_{U}=6\) fixed, we discretise the domain using a uniform mesh of quadrilaterals with different levels of refinement. To account for the impact of NN initialisation on both FEINNs and IVPINNs, we run 10 experiments with different initialisations for each mesh resolution. Figs. 1(a) and 1(b) illustrate the \(L^{2}\) errors and \(H^{1}\) errors, respectively, for the different methods versus mesh size. The curves labelled as "FEM" refer to the errors associated to the FEM solution, those labelled as "FEINN" and "IVPINN" to the errors of the interpolated NNs resulting from either method, and, finally, the label tag "(NN only)" is used to refer to the (generalisation) error associated to the NN itself (i.e., not to its FE interpolation). Due to the negligible variance in the errors of the interpolated NNs for both FEINNs and IVPINNs, we present the average error among those obtained for the 10 experiments. We also provide the slopes of the FEM convergence curves in Figs. 1(a) and 1(b). The computed slopes closely match the expected theoretical values, validating the FEM solution and the accuracy of the error computation. Based on the observations from Fig. 1, FEINNs not only generalise better than IVPINNs, they also have the potential to outperform FEM. This capability of FEINNs is not coincidental, as all errors associated to the NNs resulting from FEINNs consistently remain below the FEM convergence curve. Additionally, we observe that as the mesh becomes finer, IVPINNs start to struggle. While more training iterations may reduce the errors of IVPINNs, it is worth noting that the number of training iterations reaches the prescribed limit of 30,000 for the three finest mesh resolutions. It is also interesting to compare the distribution of errors among the NNs resulting from IVPINNs and FEINNs.
We observe a high sensitivity of the errors to NN initialisation for IVPINNs, whereas the errors tend to cluster for FEINNs. We also observe that the \(L^{2}\) error of FEM gets closer to that of the non-interpolated NN resulting from FEINN as the mesh is refined. This behaviour is expected, since the FE mesh is being refined while the NN architecture is fixed. There is a point in which the NN is not expressive enough to represent the optimal FE solution and thus, Prop. 3.4 does not hold any more.
Since \(u\in\mathcal{C}^{\infty}(\bar{\Omega})\) in this problem, similar to FEM, we can also explore at which rate the error decays as we increase the polynomial order of the trial space (i.e., the NN interpolation space). We maintain a fixed mesh consisting of \(15\times 15\) quadrilaterals, and increase \(k_{U}\) from 1 up to 6. We perform 10 experiments for each order, with a different NN initialisation for each experiment. Figs. 2(a) and 2(b) depict the \(L^{2}\) and \(H^{1}\) errors, respectively, against \(k_{U}\). Once again, we observe that FEINNs have comparable performance to FEM, and more importantly, the non-interpolated NNs resulting from FEINNs have lower errors than FEM. In some cases, these can outperform FEM by more than two orders of magnitude. On the same mesh, the NN obtained with FEINNs is comparable to the FE solution obtained using between one and two orders more. Overall, IVPINNs demonstrate a comparable level of performance to FEM, with the exception occurring at \(k_{U}=6\): after 30,000 training iterations, they fail to achieve the same performance as FEM. Notably, the non-interpolated NN compositions from IVPINNs yield satisfactory results at lower orders, but as the order increases, they fail to reach the accuracy of FEM. The same comment about the expressivity limit of the NN architecture applies here. As we increase the order, the improvement of FEINN becomes less pronounced, since we are keeping the NN architecture fixed.
In the sequel, we investigate how the computational cost and convergence rates of FEINNs and IVPINNs compare. To this end, we solve the same problem considered so far in this section by training FEINNs and IVPINNs for a fixed number of iterations, and then visualise at which rate the \(L^{2}\) and \(H^{1}\) errors decay with time. This is reported in Fig. 3. We used \(k_{U}=6\), a mesh consisting of \(15\times 15\) quadrilaterals (resulting in a problem with 8,099 DoFs), and a fixed number of 30,000 BFGS training iterations. Besides, we consider two different choices of the offset function to study its impact on the performance of IVPINNs. In particular, the curve labelled "(smooth offset)" in Fig. 3 denotes the same offset function used so far in this section (namely, a discrete harmonic extension, as suggested in [28]), while the one labelled "(standard offset)" denotes the offset function that one naturally uses in FEM (and we also use here with FEINNs). We observe that both FEINNs and IVPINNs have roughly the same computational cost per iteration, around 0.023 seconds per iteration on a GeForce RTX 3090 GPU. Moreover, as shown in Figs. 3(a) and 3(b), IVPINNs converge consistently slower than FEINNs. Fig. 3 also illustrates that the choice of offset function greatly influences the convergence rate of IVPINNs. Indeed, IVPINN with the smooth offset function converges much faster
Figure 1. Convergence of errors with respect to the mesh size of the trial space for the forward convection-diffusion-reaction problem with a smooth solution.
Figure 2. Convergence of errors with respect to the order of trial bases for the forward convection-diffusion-reaction problem with a smooth solution.
than with the standard offset function, while for FEINNs, we readily obtain a faster convergence rate without the need for a special offset function. It is also worth noting that the authors in [28] observe that IVPINNs are less computationally expensive than PINNs and VPINNs, which indicates that, for this problem, FEINNs are also more efficient than the latter two methods.
#### 6.1.2. Convection-diffusion-reaction equation with a singular solution
The second problem we solve is still (1), but with a singular solution. We adopt most of the settings in [28, Convergence test #2]. The domain and boundaries are the same as those in Sec. 6.1.1. The coefficients are \(\kappa=1\), \(\boldsymbol{\beta}=[2,3]^{\mathsf{T}}\), and \(\sigma=4\). We pick \(f\), \(\eta\), and \(g\) such that the true state is, in polar coordinates,
\[u(r,\theta)=r^{\frac{2}{3}}\sin(\frac{2}{3}(\theta+\frac{\pi}{2})).\]
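For evaluation on the square domain, the singular state can be written in Cartesian coordinates (a small helper of ours, with \(\theta\) taken from `arctan2`):

```python
import numpy as np

def u_singular(x, y):
    """r^(2/3) * sin((2/3) * (theta + pi/2)) in Cartesian coordinates."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    return r**(2.0/3.0) * np.sin(2.0/3.0*(theta + np.pi/2.0))
```

The gradient of this function blows up at the origin, which is the corner singularity driving the reduced regularity discussed below.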
Since \(u\in H^{5/3-\epsilon}(\Omega)\) for any \(\epsilon>0\), the expected \(H^{1}\) error decay rate is around \(2/3\). Consequently, increasing \(k_{U}\) is unlikely to effectively reduce the error. Therefore, we keep \(k_{U}=2\), and focus our study on the impact of mesh refinement on error reduction. Fig. 4 depicts how the \(L^{2}\) and \(H^{1}\) errors decay as we refine the mesh. The errors of the non-interpolated NNs are not displayed in the plots, because their performance is relatively poor. This observation is consistent with previous findings in [28], which
Figure 4. Convergence of errors with respect to the mesh size of the trial space for the forward convection-diffusion-reaction problem with a singular solution.
Figure 3. Comparison among FEINNs and IVPINNs in terms of \(L^{2}\) and \(H^{1}\) errors versus computational time. Training was performed in both methods for a fixed number of 30,000 iterations.
highlight the inferior performance of PINNs and VPINNs compared to IVPINNs in this singular solution scenario. Fig. 4 shows how the \(L^{2}\) and \(H^{1}\) errors change as the mesh is refined, and we obtain the expected error decay rate in Fig. 4(b). We conclude that both FEINNs and IVPINNs perform well in addressing this singular problem, and they successfully overcome the limitations of NNs in this particular situation.
#### 6.1.3. The effect of preconditioning on Poisson equation with a singular solution
In this experiment, we investigate whether preconditioning can effectively accelerate the training process, and examine the potential of leveraging widely used GMG preconditioners from FEM to aid in the training of FEINNs.
We only consider the \(L^{2}\)-norm of the preconditioned loss, i.e., (7). At a purely algebraic level, we can rewrite (7) as:
\[\mathcal{L}(\mathbf{\theta}_{u})=\left\|\mathbf{B}^{-1}\mathbf{A}\mathbf{u}_{h}( \mathbf{\theta}_{u})-\mathbf{B}^{-1}\mathbf{f}\right\|_{\ell^{2}}, \tag{11}\]
where \(\mathbf{B}\) is the preconditioner, \(\mathbf{A}\) is the coefficient matrix resulting from discretisation, \(\mathbf{\theta}_{u}\) are the parameters for \(u_{N}\), \(\mathbf{u}_{h}\) is the vector of DoFs of \(\bar{U}_{h}\), and \(\mathbf{f}\) is the RHS vector. (We note that, since the mesh being used is (quasi-)uniform, we can replace the \(L^{2}\)-norm by the Euclidean \(\ell^{2}\)-norm; they differ by a scaling.)
We consider three types of preconditioners. The first one is \(\mathbf{B}_{\text{inv}}=\mathbf{A}\). Plugged into (11), the loss becomes \(\left\|\mathbf{u}_{h}(\mathbf{\theta}_{u})-\mathbf{A}^{-1}\mathbf{f}\right\|_{ \ell^{2}}\). This loss resembles the loss in data fitting tasks, and it should theoretically be easier for NNs to minimise. We also consider another preconditioner \(\mathbf{B}_{\text{inv\_lin}}\), which is defined as the matrix resulting from discretisation with \(V_{h}\) as both trial and test FE spaces. Note that \(V_{h}\) is built out of a mesh resulting from the application of \(k_{U}\) levels of uniform refinement to the mesh associated to \(U_{h}\), since \(k_{V}=1\). The preconditioner \(\mathbf{B}_{\text{inv\_lin}}\) is computationally cheaper to invert than \(\mathbf{B}_{\text{inv}}\), since it is symmetric positive definite (SPD) and involves linear FEM bases only. The last (and cheapest to invert) one is, as mentioned before, a GMG preconditioner \(\mathbf{B}_{\text{GMG}}\) of \(\mathbf{B}_{\text{inv\_lin}}\).
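The algebraic form (11) and the role of \(\mathbf{B}_{\text{inv}}\) can be sketched on a 1D Laplacian (a toy of ours): with \(\mathbf{B}=\mathbf{A}\) the loss collapses to the data-fitting distance \(\|\mathbf{u}_{h}-\mathbf{A}^{-1}\mathbf{f}\|\), the form that is easiest for a NN to minimise.

```python
import numpy as np

n = 20
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD stiffness-like matrix
f = np.random.default_rng(1).normal(size=n)

def preconditioned_loss(u, B):
    """||B^{-1}(A u - f)||: apply B^{-1} by solving, never form the inverse."""
    return np.linalg.norm(np.linalg.solve(B, A @ u - f))
```

In practice one would of course replace the dense solve by a cheap approximate action, which is exactly what \(\mathbf{B}_{\text{GMG}}\) provides.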
We now change to the Poisson equation. The problem is defined on \(\Omega=[0,1]^{2}\), with \(\Gamma_{\text{D}}=\partial\Omega\) and \(\kappa=1\). We choose \(f\) and \(g\) such that the true state is the same as the singular \(u\) in Sec. 6.1.2. We divide the domain uniformly into \(64\times 64\) quadrilaterals, and then employ \(k_{U}=2\) or \(k_{U}=4\) for the NN interpolation space. To evaluate the effectiveness of the aforementioned preconditioners, we perform four experiments for each order. Three of them employ the \(\mathbf{B}_{\text{inv}}\), \(\mathbf{B}_{\text{inv\_lin}}\), and \(\mathbf{B}_{\text{GMG}}\) preconditioners, respectively, while the fourth experiment serves as a baseline without any preconditioning, denoted as \(\mathbf{B}_{\text{none}}\). In all experiments, we use the same initial parameters for the NNs to ensure a fair comparison.
Fig. 5 shows the \(L^{2}\) error history of FEINNs using different preconditioners during training for the first 1,000 iterations. We can extract several findings from the figure. Firstly, as \(k_{U}\) increases, the training for the unpreconditioned loss becomes more challenging. This is evident from the flatter error curve for \(\mathbf{B}_{\text{none}}\) in Fig. 5(b) compared to Fig. 5(a). Then, the preconditioners contribute to faster convergence, as the error curves of the preconditioned FEINNs are much steeper compared to the one without any preconditioner. Next, the cheaper \(\mathbf{B}_{\text{inv\_lin}}\) preconditioner is surprisingly as effective as the \(\mathbf{B}_{\text{inv}}\) preconditioner. Lastly, \(\mathbf{B}_{\text{GMG}}\) leads to a substantial acceleration of \(L^{2}\) convergence for both order 2 and order 4. Specifically, in Fig. 5(a), for \(k_{U}=2\), the standard unpreconditioned loss function requires more than 800 iterations to reduce the \(L^{2}\)-error below \(10^{-3}\), while the \(\mathbf{B}_{\text{inv\_lin}}\) and \(\mathbf{B}_{\text{inv}}\) loss functions attain the same error in around 100 iterations and
Figure 5. \(L^{2}\) error history during training of FEINNs for the forward Poisson problem with a singular solution using different preconditioners.
\(\mathbf{B}_{\text{GMG}}\) requires around 300 iterations. Besides, the GMG-preconditioned FEINN achieves an error level that closely matches the FEINNs preconditioned by the other preconditioners after around 900 iterations. Overall, the difference between the errors of these preconditioned FEINNs and the error of the unpreconditioned FEINN exceeds one order of magnitude after enough iterations, and reaches two orders of magnitude in many cases. Similarly, for \(k_{U}=4\) as shown in Fig. 5(b), the unpreconditioned case requires around 1,000 iterations to reduce the \(L^{2}\)-error below \(10^{-3}\), the \(\mathbf{B}_{\text{GMG}}\)-preconditioned FEINN needs around 300 iterations, and the \(\mathbf{B}_{\text{inv\_lin}}\) and \(\mathbf{B}_{\text{inv}}\) preconditioned FEINNs only require around 100 iterations. In this second case, the difference between preconditioned and unpreconditioned training exceeds two orders of magnitude. Although the GMG preconditioner may not be as effective as the other preconditioners, it still exhibits a remarkable reduction in the \(L^{2}\) error compared to the unpreconditioned FEINN, reaching approximately two orders of magnitude after 400 iterations, while being a very cheap preconditioner.
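The effect of preconditioning the residual can be illustrated with a self-contained sketch (ours, not the paper's Gridap-based implementation): for a linear system \(Au=f\), applying the exact preconditioner \(\mathbf{B}_{\text{inv}}=A^{-1}\) to the residual \(r(u)=Au-f\) gives \(A^{-1}r(u)=u-u^{*}\), i.e. the preconditioned loss measures the solution error directly. Here a 1D Poisson stiffness matrix and a tridiagonal (Thomas) solve stand in for the FE matrix and the preconditioner; all names and the constant load are ours.

```python
# Sketch: preconditioned vs unpreconditioned l1 residual loss for A u = f.
def thomas_solve(sub, main, sup, rhs):
    """Solve a tridiagonal system (Thomas algorithm); stands in for B_inv."""
    n = len(main)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / main[0], rhs[0] / main[0]
    for i in range(1, n):
        m = main[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n, h = 15, 1.0 / 16
sub = [-1.0 / h] * n   # 1D Poisson stiffness matrix with Dirichlet BCs
main = [2.0 / h] * n
sup = [-1.0 / h] * n
f = [1.0] * n          # constant load, for illustration only

def residual(u):
    r = []
    for i in range(n):
        s = main[i] * u[i] - f[i]
        if i > 0:
            s += sub[i] * u[i - 1]
        if i < n - 1:
            s += sup[i] * u[i + 1]
        r.append(s)
    return r

u_star = thomas_solve(sub, main, sup, f)   # exact discrete solution
u = [0.0] * n                              # a poor iterate (e.g. an untrained NN)
r = residual(u)
loss_unprec = sum(abs(v) for v in r)                               # ||r(u)||_1
loss_prec = sum(abs(v) for v in thomas_solve(sub, main, sup, r))   # ||A^{-1} r(u)||_1
```

Since \(u=0\) here, the preconditioned loss equals \(\|u-u^{*}\|_{1}=\|u^{*}\|_{1}\); in the paper, a fixed number of GMG cycles plays the role of an inexact but much cheaper approximation of this action.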
#### 6.1.4. Poisson equation on a complex geometry
In this section, we demonstrate the capabilities of FEINNs in solving forward Poisson problems defined on general domains. We focus on a slightly modified version of [40, Example 4]. As shown in Fig. 7(a), the computational domain \(\Omega\) features a bone-shaped region, which is parameterised by \((x(\theta),y(\theta))\). The parametric equations are defined as \(x(\theta)=0.6\cos(\theta)-0.3\cos(3\theta)\) and \(y(\theta)=0.7\sin(\theta)-0.07\sin(3\theta)+0.2\sin(7\theta)\) with \(\theta\in[0,2\pi]\). Finding appropriate \(\Phi\) and \(\bar{u}\) for IVPINNs is very challenging for this irregular domain, so we only examine the performance of FEINNs in our experiments. We consider the Poisson problem with \(\kappa(x,y)=2+\sin(xy)\), \(\Gamma_{\text{D}}=\partial\Omega\). We choose \(f\) such that the solution is \(u(x,y)=\mathrm{e}^{x}(x^{2}\sin(y)+y^{2})\).
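For reference, the boundary curve can be generated directly from the parametric equations above; this short sketch (ours) only samples the closed curve, not the unstructured triangulation used in the paper.

```python
# Sample the parametric boundary of the bone-shaped domain.
import math

def boundary_point(theta):
    x = 0.6 * math.cos(theta) - 0.3 * math.cos(3.0 * theta)
    y = 0.7 * math.sin(theta) - 0.07 * math.sin(3.0 * theta) + 0.2 * math.sin(7.0 * theta)
    return x, y

# 400 points along theta in [0, 2*pi); the curve closes on itself
pts = [boundary_point(2.0 * math.pi * i / 400) for i in range(400)]
```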
In this study, our focus is on examining the impact of mesh refinement on FEINNs. Consequently, we fix \(k_{U}=2\) and discretise \(\Omega\) using unstructured triangular meshes with an increasing number of cells. In the loss function, we employ the \(\ell^{1}\)-norm for the residual vector. Although the \(\ell^{2}\)-norm is equally effective, we aim to showcase the flexibility in choosing the norm and to provide evidence supporting the suitability of the \(\ell^{1}\) norm for the PDE loss. This is particularly relevant, as we consistently use the \(\ell^{1}\) norm for the PDE part in the subsequent experiments for inverse problems. Similar to the previous sections, we conduct 10 experiments for each mesh resolution, each with distinct initialisations of the NNs.
Fig. 6 illustrates the changes in \(L^{2}\) and \(H^{1}\) errors as the DoFs in the FE interpolation space increase. Overall, FEINNs demonstrate almost identical performance to FEM. Importantly, similar to the findings for the forward convection-diffusion-reaction problem with a smooth solution, the non-interpolated NNs consistently outperform FEM. Notably, when the mesh is "fine enough", there is a remarkable two-order-of-magnitude difference in \(H^{1}\) errors between the NNs and FEM, as illustrated in Fig. 6(b).
To further confirm the superior performance of the NNs in terms of \(H^{1}\) error, we present the point-wise gradient error magnitudes for the FEINN solution and the NN solution in Fig. 7(b) and 7(c), respectively. These
Figure 6. Convergence of errors with respect to DoFs of the trial space for the forward Poisson problem on a bone-shaped geometry.
figures correspond to one of our experiments conducted on the finest mesh. Notably, we observe a significant two-order-of-magnitude reduction in error magnitude for the NN solution compared to the FEINN solution across most regions of the domain. Additionally, the lack of smoothness of the gradient of the interpolated solution in FEINN on a low-order \(\mathcal{C}^{0}\) space is evident in Fig. 7(b). In contrast, one can observe the smoothness of the error of the NN trained with FEINNs in Fig. 7(c).
### Inverse problems
In the experiments for inverse problems, we introduce the following relative \(L^{2}\) and \(H^{1}\) errors to measure the accuracy of an identified solution \(z^{id}\):
\[\varepsilon_{L^{2}(\Omega)}(z^{id})=\frac{\left\|z^{id}-z\right\|_{L^{2}( \Omega)}}{\left\|z\right\|_{L^{2}(\Omega)}},\qquad\varepsilon_{H^{1}(\Omega)} (z^{id})=\frac{\left\|z^{id}-z\right\|_{H^{1}(\Omega)}}{\left\|z\right\|_{H^{1 }(\Omega)}},\]
where \(z\) is the ground truth.
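As a purely numerical illustration (ours; the paper evaluates these norms with FE quadrature), the relative errors above can be approximated on a 1D sample grid with a simple grid sum for the \(L^{2}\) norm and central differences for the gradient. For an identified solution that is a pure scaling \(z^{id}=1.1\,z\), both relative errors are exactly 0.1:

```python
# Approximate relative L2 and H1 errors of z_id against z on a uniform 1D grid.
import math

def rel_errors(z_id, z, h):
    def grad(v):  # central differences at interior nodes
        return [(v[i + 1] - v[i - 1]) / (2.0 * h) for i in range(1, len(v) - 1)]
    def l2(v):    # crude rectangle-rule L2 norm
        return math.sqrt(h * sum(x * x for x in v))
    e = [a - b for a, b in zip(z_id, z)]
    eps_l2 = l2(e) / l2(z)
    eps_h1 = math.sqrt(l2(e) ** 2 + l2(grad(e)) ** 2) / math.sqrt(l2(z) ** 2 + l2(grad(z)) ** 2)
    return eps_l2, eps_h1

h = 1.0 / 200
z = [math.sin(math.pi * i * h) for i in range(201)]
z_id = [1.1 * v for v in z]          # identified solution off by a 10% scaling
eps_l2, eps_h1 = rel_errors(z_id, z, h)
```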
The optimisation involving the penalty term (see (8)) occurs at Step 3, requiring the selection of the norm for \(\mathcal{R}_{h}\) and the corresponding coefficient \(\alpha\). In [41, Ch. 17], the authors provide insights into the distinction between utilising \(\ell^{1}\) and \(\ell^{2}\) norms. According to [41, Theorem 17.1], when employing the \(\ell^{2}\)-norm for \(\mathcal{R}_{h}\), the minimiser of (9) becomes a global solution to the inverse problem as \(\alpha\) approaches infinity. Furthermore, [41, Theorem 17.3] states that there exists an \(\alpha^{*}\) such that the minimiser in (9) for the \(\ell^{1}\)-norm of \(\mathcal{R}_{h}\) is compelled to coincide with the solution of the inverse problem for any \(\alpha\geq\alpha^{*}\). To avoid choosing an arbitrarily large \(\alpha\), we opt to use the \(\ell^{1}\) norm. Moreover, the authors propose [41, Framework 17.2] for adjusting the coefficient \(\alpha\). Following this, we partition Step 3 into several sub-steps. We use a sequence of \(\{\alpha_{k}\}\) for these sub-steps, where \(\alpha_{k}>\alpha_{k-1}\) for \(k>1\).4
Footnote 4: Against common experience in the inverse problem community [32], penalty coefficients for the PDE residual term are usually considered fixed in PINNs and related methods (see [11, 13]). Similarly, for forward problems, the Dirichlet penalty term (which is also a constraint in the minimisation) is usually kept fixed in these formulations.
As mentioned before, we split the training process into three steps. Although we have extensively tested training with (10) only, the three-step strategy consistently yielded superior results. As a result, all the experiments in this section will follow this training process. We introduce the notation \([n_{1},n_{2},k\times n_{3}]\) to represent the number of iterations for each step: \(n_{1}\) iterations for the data fitting step, followed by \(n_{2}\) iterations for the model parameter initialisation step, and \(k\) sub-steps in the coupled step, with each sub-step consisting of \(n_{3}\) iterations. The sub-steps simply represent a new value of the penalty coefficient. We use the notation \(\alpha=[\alpha_{1},\alpha_{2},...,\alpha_{k}]\), where \(\alpha_{1}\), \(\alpha_{2}\),..., and \(\alpha_{k}\) are the penalty coefficients at each sub-step.
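The iteration bookkeeping implied by the \([n_{1},n_{2},k\times n_{3}]\) notation can be sketched as follows (a hypothetical helper of ours, not from the paper's code), with one increasing penalty coefficient \(\alpha_{k}\) per coupled sub-step:

```python
# Expand the three-step notation into a (phase, alpha) pair per iteration.
def training_schedule(n1, n2, substeps, alphas):
    """n1 data-fit iters, n2 param-init iters, then k sub-steps of n3 iters."""
    k, n3 = substeps
    assert len(alphas) == k and all(a < b for a, b in zip(alphas, alphas[1:]))
    plan = [("data_fit", None)] * n1 + [("param_init", None)] * n2
    for a in alphas:
        plan += [("coupled", a)] * n3
    return plan

# e.g. the schedule [400, 400, 3 x 400] with alpha = [0.1, 0.3, 0.9] of Sec. 6.2.1
plan = training_schedule(400, 400, (3, 400), [0.1, 0.3, 0.9])
```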
We employ the softplus activation function for FEINNs in our inverse problem experiments, even though the tanh activation generally performs comparably or even better, as we aim to explore alternative activation functions for NNs in the context of solving PDE-constrained problems using FEINNs. We use a linear FE interpolation space for FEINNs.5 In the remaining experiments, unless otherwise specified, we consider \(z^{id}\) to be the FE interpolation of the NN \(z_{N}\).
Figure 7. True state and gradient error magnitude in FEINN and NN solutions for the forward Poisson problem on a bone-shaped domain.
#### 6.2.1. Poisson equation with partial observations
We begin our inverse problem experiments with a Poisson equation involving partial observations. Following the experiment presented in [9, Sec. 3.1.3], we consider the computational domain \([0,1]^{2}\) with Dirichlet boundary conditions on the left, bottom, and top sides, and a Neumann boundary condition on the right side. The unknown state and diffusion coefficient (Fig. 8(d)) are:
\[u(x,y)=\sin(\pi x)\sin(\pi y),\qquad\kappa(x,y)=1+0.5\sin(2\pi x)\sin(2\pi y).\]
Fig. 8(a) illustrates the true state, and our observations are limited to every DoF inside the white box located at the center of the figure. The objectives of this experiment are to reconstruct the partially known state and to recover the unknown diffusion coefficient.
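Selecting the observed DoFs inside a centred box can be sketched as below (ours; the box half-width 0.25 is an assumption for illustration, since the paper only depicts the white box graphically):

```python
# Build the set of DoF coordinates observed inside a centred box on a 50x50 grid.
n = 50
h = 1.0 / n
dofs = [(i * h, j * h) for i in range(n + 1) for j in range(n + 1)]
half = 0.25  # assumed box half-width (illustrative only)
observed = [(x, y) for (x, y) in dofs if abs(x - 0.5) <= half and abs(y - 0.5) <= half]
```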
We discretise the domain by \(50\times 50\) quadrilaterals. Both NNs, \(u_{\mathcal{N}}\) and \(\kappa_{\mathcal{N}}\), have the same structure with \(L=2\) layers and each hidden layer has \(n=20\) neurons. To ensure the positivity of the diffusion coefficient, we apply a rectification function \(r(x)=|x|+0.01\) as the activation for the output layer of \(\kappa_{\mathcal{N}}\). Although \(r(x)=x^{2}+0.01\) also produces satisfactory results, it is more common to use an output layer with linear features. The training iterations are \([400,400,3\times 400]\), and the penalty coefficients are \(\alpha=[0.1,0.3,0.9]\).
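A minimal sketch (ours) of the output-layer rectification mentioned above: both variants keep the predicted diffusion coefficient bounded below by 0.01. The raw outputs are made-up sample values, not actual network outputs.

```python
# Output rectifications that enforce positivity of the predicted coefficient.
def r_abs(x):
    return abs(x) + 0.01     # the rectification used in the experiments

def r_sq(x):
    return x * x + 0.01      # the alternative mentioned in the text

raw = [-2.3, -0.001, 0.0, 0.7]           # hypothetical raw outputs of kappa_N
kappa_abs = [r_abs(x) for x in raw]
kappa_sq = [r_sq(x) for x in raw]
```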
In Fig. 8, we display the FEINN solutions along with their corresponding errors in comparison to the true solutions. The identified state \(u^{id}\) in Fig. 8(b) closely resembles the true state \(u\) in Fig. 8(a), accompanied by very small point-wise errors in Fig. 8(c). These observations highlight the effectiveness of FEINNs at completing the partial observations. Fig. 8(f) displays the small point-wise error of \(\kappa^{id}\), further confirming the accuracy of our approach in discovering the unknown diffusion coefficient.
In order to also consider the relative merits of FEINNs compared to other approaches proposed in the literature, Fig. 9 reports the relative error history for the state and coefficient throughout the training process for FEINNs and our Julia implementation of the adjoint-based NN method (adjoint NN). While adjoint NN approximates the unknown diffusion coefficient with a neural network, it still approximates the state using a FE space, and uses the adjoint solver to compute the gradient of the data misfit with respect to the
Figure 8. Comparison of the true solutions (first column), the FEINN solutions (second column), and corresponding point-wise errors (third column) for the inverse Poisson problem with partial observations. The presented results are from a specific experiment. The first row depicts the state, while the second row represents the coefficient. The observations of \(u\) are limited to the white box.
NN parameters, resulting in a two-loop optimisation process. Adjoint NN was first introduced in [8], and then further explored in [9]. In the experiment, both methods ran their optimisations for 2,000 iterations with identical \(\kappa_{N}\) structures and initialisations. The gaps in the state error curves in Fig. 9(a) and 9(b) for FEINNs correspond to the second model parameter initialisation step, where \(u_{N}\) is not trained. Similarly, the coefficient error curve in Fig. 9(c) for FEINNs starts at iteration 401 as \(\kappa_{N}\) is not trained in the initial data fitting step. The shapes of the curves for FEINNs align with the motivation behind the three-step training process, where the first and second steps aim to lead \(u_{N}\) and \(\kappa_{N}\) to a good initialisation, while the third step focuses on further improving the accuracy. The experiments were performed on a single core of an AMD Ryzen Threadripper 3960X CPU, and we also report the computational cost for both methods: for FEINNs, the average training time is 0.028 seconds per iteration, whereas adjoint NN requires 0.040 seconds per iteration.6 Notably, benefiting from our three-step training strategy and an additional network for state approximation, Fig. 9 reveals that FEINNs have the potential to yield superior accuracy compared to adjoint NN, as all FEINN curves remain below the error curves of adjoint NN after approximately 800 iterations. Moreover, we also plot the error curves for the non-interpolated NNs in Fig. 9. Similar to the findings in Sec. 6.1.1, the smoothness of the NN contributes to improved \(H^{1}\) accuracy of a smooth \(u\).
Footnote 6: It is important to note that the computational cost of these two methods is not easily comparable, since they have different computational requirements. The adjoint method involves (non)linear solvers per external iteration, while FEINNs must compute the differentiation of the NN with respect to parameters not only for the physical coefficients but also the state variable. Thus, the relative cost of these methods will be influenced by various factors, including the structure of NNs, the implementation of the (non)linear solver, the specific problem being addressed, etc.
Figure 10. Comparison among FEINNs and adjoint NN in terms of relative errors (depicted using box plots) with different initialisations for the inverse Poisson problem with partial observations.
Figure 9. Comparison among FEINNs and adjoint NN in terms of relative errors during training for the inverse Poisson problem with partial observations. The optimisation loop was run for 2,000 iterations in both cases.
In the sequel, we also compare the robustness of FEINNs and adjoint NN with respect to NN initialisation. We solve the inverse problem 100 times with different NN initialisations. The same NN structure and parameter initialisation of \(\kappa_{N}\) were used for both methods in order to have a fair comparison. In Fig. 10, we depict with box plots the relative errors for the state and the diffusion coefficient from these 100 experiments. Whiskers in the box plot represent the minimum and maximum values within 1.5 times the interquartile range.
Let us first comment on the results obtained with FEINNs. Most of the errors for the state \(\varepsilon_{L^{2}(\Omega)}(u^{id})\) and \(\varepsilon_{H^{1}(\Omega)}(u^{id})\) are very small, with the largest \(\varepsilon_{L^{2}(\Omega)}(u^{id})\) below 0.8%. Besides, the majority of the relative coefficient errors \(\varepsilon_{L^{2}(\Omega)}(\kappa^{id})\) are below 1%. Consequently, we conclude that FEINNs are robust with respect to initialisation in solving this inverse problem with partial observations. Again, the label tag "(NN only)" of FEINNs denotes the errors of the NNs themselves. We observe that the \(L^{2}\) errors for \(\kappa_{N}\) are nearly equivalent to their interpolated counterparts. However, consistent with the findings in Fig. 9, since \(u\) is smooth, \(u_{N}\) surpasses their interpolations in \(H^{1}\) accuracy, with potential for improved \(L^{2}\) accuracy.
The results corresponding to adjoint NN are presented in Fig. 10 as box plots labelled "AdjointNN". During training, we observe that a good initialisation for \(\kappa_{N}\) is imperative, otherwise the optimisation quits prematurely as the gradient norm drops below \(10^{-10}\). This occurrence results in considerably adverse outcomes, at times with \(\varepsilon_{L^{2}(\Omega)}(\kappa^{id})\) exceeding \(10^{8}\). To enhance visual clarity, when constructing the box plots, errors surpassing 1 are standardised to 1. As Fig. 10 indicates, we also explored the activation function tanh as proposed in [9] (tagged as "(\(\rho_{\kappa}=\tanh\))"). However, neither of these configurations produce results outperforming those achieved by FEINNs. Therefore, the adjoint NN method clearly shows less robustness than FEINNs in this partial observations situation.
#### 6.2.2. Poisson equation with noisy observations
In this experiment, we explore the effectiveness of FEINNs in solving an inverse Poisson problem with noisy data. Following the settings in [9, Sec. 3.1.2], we consider the true state (Fig. 11(a)) and diffusion coefficient (Fig. 11(d)) as:
\[u(x,y)=\sin(\pi x)\,\sin(\pi y),\qquad\kappa(x,y)=\frac{1}{1+x^{2}+y^{2}+(x-1) ^{2}+(y-1)^{2}}.\]
The domain \(\Omega\), its discretisation and boundary conditions remain the same as in Sec. 6.2.1. The state at each DoF is known but contaminated with Gaussian noise \(\epsilon\sim N(0,0.05^{2})\). The objectives of this experiment are to reconstruct the state from the noisy data and to estimate the unknown diffusion coefficient.
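A pure-Python sketch (ours) of how such noisy observations can be produced: the true state sampled at the DoFs of the \(50\times 50\) grid plus i.i.d. Gaussian noise with standard deviation 0.05.

```python
# Generate noisy observations of the true state at the grid DoFs.
import math, random

rng = random.Random(0)        # fixed seed, only for reproducibility of the sketch
n = 50
grid = [i / n for i in range(n + 1)]
u_true = [math.sin(math.pi * x) * math.sin(math.pi * y) for x in grid for y in grid]
u_obs = [v + rng.gauss(0.0, 0.05) for v in u_true]

# empirical noise level, close to the prescribed 0.05
noise_std = (sum((o - t) ** 2 for o, t in zip(u_obs, u_true)) / len(u_true)) ** 0.5
```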
The structures for \(u_{N}\) and \(\kappa_{N}\) are the same as the ones in Sec. 6.2.1. We again apply \(r(x)=|x|+0.01\) to the output layer of \(\kappa_{N}\) to ensure a positive diffusion coefficient. The training iterations are \([300,100,2\times 300]\), with a total of 1,000, matching the setup in [9, Sec. 3.1.2]. The penalty coefficients are \(\alpha=[1.0,3.0]\).
In Fig. 11, the last two columns display the outcomes from one of our experiments. The identified state \(u^{id}\) in Fig. 11(b) and its low point-wise error in Fig. 11(c) validate the capability of FEINNs to recover the state despite the presence of noise in the data. Fig. 11(e) shows the identified diffusion coefficient \(\kappa^{id}\), which, although visually slightly different from \(\kappa\) in Fig. 11(d), still captures its pattern very well. The point-wise error in Fig. 11(f) further confirms that FEINNs effectively predict the values of the diffusion coefficient.
The error history plots for FEINNs and adjoint NN when applied to the inverse Poisson problem with noisy observations are shown in Fig. 12. The optimisation loop was run in both cases up to 1,000 iterations, and we used the same \(\kappa_{N}\) architecture and parameter initialisation. Consistent with the findings in [9], the loss function in the adjoint method requires explicit regularisation. This is evident as the error curves corresponding to adjoint NN with no regularisation (label tag "(no reg)") start increasing very shortly after the optimisation begins, while the results are much improved by using the regularisation proposed in [9].7 Notably, even _without any regularisation_, the FEINN errors decrease very stably. FEINNs could possibly benefit from effective regularisation, but we have not explored this option to keep the method simple and less tuning-dependent. In terms of computational cost, FEINNs demand 0.025 seconds per iteration, while adjoint NN takes 0.043 seconds per iteration. Additionally, in terms of \(u\), the errors of the (non-interpolated) NNs are frequently below their interpolation counterparts during training. This indicates that \(u_{N}\) possesses the capacity to improve \(u\) accuracy despite the noisy observations.
Footnote 7: We use \(\ell^{1}\) regularisation on \(\kappa_{N}\). After testing various regularisation coefficients, we have concluded that the best results are obtained for \(10^{-3}\).
Let us assess the robustness of FEINNs with respect to NN initialisation. We generate the Gaussian noise with the same random seed and repeat the experiment 100 times with differently initialised NNs. The
resulting box plots are shown in Fig. 13(a), where label "FEINN" is for the interpolated NNs and "FEINN (NN only)" is for the non-interpolated ones. We observe that the smoothness of \(u_{N}\) contributes to enhanced accuracy, as both boxes of \(\varepsilon_{L^{2}(\Omega)}(u^{id})\) and \(\varepsilon_{H^{1}(\Omega)}(u^{id})\) for \(u_{N}\) are positioned lower than their interpolation counterparts. Besides, FEINNs generally produce very good results, with \(\varepsilon_{L^{2}(\Omega)}(u^{id})\) mostly below 0.6%, and \(\varepsilon_{L^{2}(\Omega)}(\kappa^{id})\) mostly under 2%.
In this experiment, to compare the performance of FEINNs against adjoint NN with regularisation (as described above), we provide the results for the latter method in the same figure (labelled as "AdjointNN"). We observe that the boxes of \(\varepsilon_{H^{1}(\Omega)}(u^{id})\) and \(\varepsilon_{L^{2}(\Omega)}(\kappa^{id})\) of adjoint NN are positioned higher than those of FEINNs, suggesting that FEINNs generally achieve better accuracy in terms of these two relative errors.
Figure 11. Comparison of the true solutions (first column), the FEINN solutions (second column), and corresponding point-wise errors (third column) for the inverse Poisson problem with noisy observations. The presented results are from a specific experiment. The first row depicts the state, while the second row represents the coefficient.
Figure 12. Comparison among FEINNs and adjoint NN in terms of relative errors during training for the inverse Poisson problem with noisy observations. The optimisation loop was run for 1,000 iterations in both cases.
Furthermore, the state NN in FEINNs generalises well and is far more accurate than the FE interpolation. In contrast, adjoint NN relies on a FE function for state approximation and lacks this capability.
In this example, we are also interested in exploring how the variability of noise affects FEINNs accuracy. We fix the NN initialisation and the distribution of the Gaussian noise (\(\epsilon\sim N(0,0.05^{2})\)), and repeat the experiment 100 times with different random noise seeds. The resulting box plots are shown in Fig. 13(b) with label "FEINN (var noise)". We observe that the noise randomness impacts the accuracy of FEINNs more than NN initialisation randomness, with broader error boxes. Nonetheless, FEINNs are still robust in this scenario, since most \(\varepsilon_{L^{2}(\Omega)}(u^{id})\) are below \(0.8\%\) and most \(\varepsilon_{L^{2}(\Omega)}(\kappa^{id})\) are less than \(3\%\).
#### 6.2.3. Inverse heat conduction problem
In our final experiment for this paper, we tackle an inverse heat conduction problem (IHCP). In many heat transfer applications, the boundary values are either unavailable or difficult to measure over the entire surface. The goal of IHCPs is to estimate the surface temperature (Dirichlet boundary value), and/or heat flux (Neumann boundary value), based on temperature data measured at certain points within the domain [42]. Our example combines the challenges in [42] and [43], where we consider a two-layered half-tube cross-section as the computational domain \(\Omega\), as shown in Fig. 14(a). The domain \(\Omega\) can be described in polar coordinates as \(\theta\in[0,\pi]\) and \(r\in[0.05,0.11]\). The tube is composed of two layers of media, with a diffusion coefficient of \(\kappa_{1}=1\) for \(r\in[0.05,0.08]\), and \(\kappa_{2}=100\) for \(r\in[0.08,0.11]\). The unknown boundary values are, in polar coordinates,
\[g(\theta,r)=200,\ r=0.05,\qquad\eta(\theta,r)=-100-50\sin(\theta),\ r=0.11.\]
The horizontal section of the tube is also a Neumann boundary, with known \(\eta=0\).
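The problem data above can be written down directly; the following sketch (our notation, not the paper's code) encodes the two-layer diffusion coefficient and the boundary values in polar coordinates.

```python
# IHCP data: two-layer conductivity and the boundary values to be recovered.
import math

def kappa(r):
    """Piecewise-constant conductivity across the two tube layers."""
    return 1.0 if r < 0.08 else 100.0

def g_dirichlet(theta):
    """Unknown inner-wall temperature at r = 0.05 (to be recovered)."""
    return 200.0

def eta_neumann(theta):
    """Unknown outer-wall heat flux at r = 0.11 (to be recovered)."""
    return -100.0 - 50.0 * math.sin(theta)

# the horizontal section of the tube carries a known homogeneous flux, eta = 0
```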
We discretise the domain with \(2\times 50\times 50\) triangles and solve the forward problem using FEM with the aforementioned boundary conditions. Fig. 14(a) shows the FEM solution of the temperature, and we use the temperature at the yellow dots as our observations. Since the temperature has different patterns in the two layers due to the discontinuity in the diffusion coefficient, we use a deeper NN with 6 layers and 20 neurons for each hidden layer (\(L_{u}=6\), \(n_{u}=20\)) as \(u_{N}\). The Dirichlet boundary value \(g\) is just a part of \(u\), so in this problem, \(u_{N}\) is defined over the whole domain, including the Dirichlet boundary. We evaluate \(u^{id}\) on the Dirichlet boundary to obtain \(g^{id}\). We train another NN \(\eta_{N}\) with \(L_{\eta}=3\) and \(n_{\eta}=20\) on the Neumann boundary to predict the Neumann boundary value. We set the number of training iterations to \([700,200,3\times 700]\), and use the penalty coefficients \(\alpha=[0.001,0.003,0.009]\).
Fig. 14(b) shows the relative point-wise errors of the FEINN solutions for the boundary values \(\eta\) and \(g\), obtained from one of our experiments. The errors at most of the Neumann boundary points are below \(1\%\), indicating accurate recovery. Besides, the identified Dirichlet value \(g^{id}\) is even more accurate, with a maximum error of approximately \(3\times 10^{-6}\). Overall, FEINNs excel at accurately reconstructing the boundary values. Fig. 15(a) depicts the history of relative errors during training from the same experiment. We observe
Figure 13. Left: comparison among FEINNs and adjoint NN in terms of relative errors (depicted using box plots) with different NN initialisations for the inverse Poisson problem with noisy observations. Right: box plots depicting the relative errors of FEINNs trained with Gaussian noise generated by different random seeds.
that the data step and model parameter initialisation step reduce corresponding errors as expected, and the errors steadily decrease after a few hundred iterations of adjustment in the coupled step.
To study FEINNs' reliability in solving IHCPs, we repeat the experiment 100 times with different NN initialisations. The resulting errors are presented in Fig. 15(b) as box plots along with the original data points. Even though the number of observations (100) is much smaller than the DoFs (2,601) of the trial space, FEINNs recover the temperature distribution accurately, with most \(\varepsilon_{L^{2}(\Omega)}(u^{id})\) errors below \(10^{-5}\), and only a few outliers with higher errors. The proposed formulation demonstrates robustness despite a significant discontinuity in the diffusion coefficient, a limited number of observations, and no regularisation. Furthermore, the majority of experiments (at least 90%) yield remarkably low errors.
## 7. Conclusions
In this paper, we propose a general framework, called FEINNs, to approximate forward and inverse problems governed by _low-dimensional_ PDEs, by combining NNs and FEs to overcome some of the limitations (numerical integration error, treatment of Dirichlet boundary conditions, lack of solid mathematical foundations) of existing approaches proposed in the literature to approximate PDEs with NNs, such as, e.g., PINNs. For forward problems, we interpolate the NN onto the FE space with zero traces (non-homogeneous Dirichlet boundary conditions are enforced via a standard offset FE function), and evaluate the FE residual for the resulting FE function. The loss function is the norm of the FE residual. We propose different
Figure 14. True temperature distribution and the relative point-wise errors of FEINNs on the Dirichlet and Neumann boundaries for the IHCP. Yellow dots on the temperature figure indicate observation locations.
Figure 15. The error history during training from a specific experiment and the box plots of the relative errors of FEINNs with different initialisations for the IHCP.
norms, and suggest the use of standard FE preconditioners (e.g., a fixed number of GMG cycles) to end up with a well-posed loss function in the limit \(h\downarrow 0\). For inverse problems, the unknown model parameters are parametrised via NNs, which can also be interpolated onto FE spaces. The loss function in this case combines the data misfit term with a penalty term for the PDE residual. We propose a three step algorithm to speed up the training process of the resulting formulation, where we perform two cheap data fitting steps (no differential operators involved) to provide a good initialisation for a fully coupled minimisation step.
We have conducted numerous numerical experiments to assess the computational performance and accuracy of FEINNs. We use forward convection-diffusion-reaction problems to compare FEINNs against IVPINNs, a recently proposed related method which mainly differs in the treatment of Dirichlet boundary conditions and has been proven to be superior to other PINN formulations in certain situations [28]. The computational cost per iteration of IVPINNs and FEINNs is virtually the same. However, IVPINNs struggle to match the convergence of FEINNs (and to reach the FEM error) as we increase the mesh resolution or polynomial order. Additionally, the (non-interpolated) NNs trained with FEINNs exhibit excellent generalisation, with superior performance compared to the FE solution and the non-interpolated NN composition of IVPINNs. For singular solutions, both FEINNs and IVPINNs have comparable performance to FEM. We evaluate the effect of the residual norm and show how preconditioned norms accelerate the training. Moreover, the experiments performed on a non-trivial geometry highlight the capability of FEINNs to handle complex geometries and Dirichlet boundary conditions effortlessly, which is not the case for IVPINNs or standard PINNs.
In the experiments for the inverse problems, we show that FEINNs are capable of estimating an unknown diffusion coefficient from partial or noisy observations of the state and of recovering unknown boundary values from discrete observations. We additionally compare the performance of FEINNs against the adjoint-based NN method [8, 9]. The numerical results demonstrate that FEINNs exhibit greater robustness for partial observations and are comparable for noisy observations. However, adjoint methods require the tuning of the regularisation term to be effective, while FEINNs are robust without regularisation. The conducted experiments also prove that the three-step training process employed by FEINNs is a sound strategy.
This work can be extended in many directions. First, one could consider transient and/or nonlinear PDEs, in which NNs and non-convex optimisation have additional benefits compared to standard linearisation and iterative linear solvers in FEM. Besides, while this work concentrates on problems in \(H^{1}\), the framework can be extended to problems in \(H(\text{curl})\) and \(H(\text{div})\) spaces, combined with compatible FEM [2, 44]. To target large scale problems, one could design domain decomposition [45, 46] and partition of unity methods [47] to end up with suitable algorithms for massively parallel distributed-memory platforms and exploit existing parallel FE frameworks such as GridapDistributed.jl [48]. Lastly, we want to explore in the future the usage of adaptive meshes to exploit the nonlinear approximability of NNs within the same training loop.
## 8. Acknowledgments
This research was partially funded by the Australian Government through the Australian Research Council (project numbers DP210103092 and DP220103160). This work was also supported by computational resources provided by the Australian Government through NCI under the NCMAS and ANU Merit Allocation Schemes. W. Li acknowledges the support from the Laboratory for Turbulence Research in Aerospace and Combustion (LTRAC) at Monash University through the use of their HPC Clusters.
---

# Design and simulation of memristor-based neural networks

Pablo Alex Lázaro, Ignacio Jiménez Gallo, Juan Roldán Aranda, Alberto del Barrio García, Guillermo Botella Juan, Francisco Jiménez Molinos

arXiv:2306.11678v1 (submitted 2023-06-20), http://arxiv.org/abs/2306.11678v1
###### Abstract
In recent times, neural networks have been gaining increasing importance in fields such as pattern recognition and computer vision. However, their usage entails significant energy and hardware costs, limiting the domains in which this technology can be employed.
In this context, the feasibility of utilizing analog circuits based on memristors as efficient alternatives in neural network inference is being considered. Memristors stand out for their configurability and low power consumption.
To study the feasibility of using these circuits, a physical model has been adapted to accurately simulate the behavior of commercial memristors from KNOWM. Using this model, multiple neural networks have been designed and simulated, yielding highly satisfactory results.
Keywords: Memristor · Genetic Algorithm · Neural Network · Variability · Inference
## 1 Introduction
The field of machine learning and its applications in different areas, such as computer vision, natural language processing or robotics, is generating increasing interest in recent times. This has led to the development of advanced techniques and tools for the implementation of artificial neural networks. However, these digital neural networks require large amounts of energy and expensive compute resources, which limits their use in scenarios where energy efficiency is key.
Memristors have recently emerged as a promising alternative in the field of analog computation. Their memristive properties make them theoretically ideal building blocks for analog neural networks. These dedicated analog circuits, where the data and computation are combined forgoing the von Neumann architecture, are orders of magnitude more energy efficient and cheaper to manufacture.
### Motivation
The training process of neural networks is an expensive process, both in terms of energy usage and hardware requirements, but it only needs to be done once. Inference, on the other hand, is performed every time the models are used. Many models in production are being used continuously and by many machines at once, so finding ways to reduce the energy and hardware expenses needed for inference is vital.
In some contexts, such as low power smart devices, the energy efficiency is a necessity. New breakthroughs in this field could bring new capabilities to these types of devices.
There are already various techniques to achieve energy improvements, such as reducing precision and software optimizations, but the hardware constraints of the von Neumann architecture remain.
For many applications in industry, such as video encoding or cryptocurrency mining, single purpose accelerators are designed to speed up calculations and increase energy savings. Similarly, modern mobile chips come with small AI accelerators to perform matrix multiplications faster and consuming less power.
A memristor-based analog neural network circuit could be employed in a similar manner, as a co-processor that would dramatically reduce the costs of running neural networks. The idea of using analog computation to perform inference on neural networks has already been explored in many studies, such as [1].
Memristors, with their unique properties, offer several advantages for analog computation over digital systems. Their non-volatility allows them to retain their state even when power is lost, making them ideal for memory storage. Inherently operating in an analog fashion, memristors allow for continuous data representation, which is more efficient for certain computations like neural networks, compared to the binary data representation in digital systems. Furthermore, memristors can be miniaturized and densely packed, providing more computational power in a smaller space. They also consume less power compared to digital transistors, enhancing energy efficiency. Perhaps most intriguingly, memristors can mimic the behavior of biological synapses, making them ideal for neuromorphic computing, a form of analog computation that emulates the brain's architecture. This paradigm has been the subject of recent studies such as [2].
The main advantage of the memristors when compared to other analog alternatives is that they are both reconfigurable and non-volatile.
In summary, research on the viability of memristors in the field of neural networks is an important and current topic that can have significant implications in the development of computing and electronics.
### Objective
The main objective of this Bachelor's Thesis is to design, simulate and compare inference results of analog neural networks based on memristors with equivalent transistor-based digital neural networks. We will use commercially available memristors to test whether they are a real alternative in their current state.
### Work plan
To achieve the objective, the following steps will be taken:
1. Study the properties and behavior of the device, using commercially available memristors from KNOWM and the Memristor Discovery software.
2. Characterize the device by collecting real data, programming automated scripts and visualizing the results.
3. Adjust a physical model of the memristor to faithfully represent our device.
4. Design circuits to perform inference on analog neural networks, simulate and compare results.
## 2 State of the Art
Next, we will explain the research perspective of the project carried out. The objective is to demonstrate an understanding of the theory necessary for the design of neural networks based on memristors.
### Introduction to Neural Networks
Neural networks are computational models inspired by the human brain, they are widely used in the field of machine learning and can be described as universal function approximators.
These networks consist of interconnected nodes, called neurons, which work together to solve complex problems [3]. The ability of neural networks to adapt makes them powerful tools in areas such as pattern recognition or natural language processing.
The behavior of these networks is based on the communication and processing of information through weighted connections between neurons. Each neuron receives multiple inputs, which are processed through a non-linear activation function to generate an output. The generated outputs are then sent as inputs to other neurons, creating a transmission and transformation chain of information throughout the network. In supervised learning, the weights of the neural networks are adjusted by comparing the outputs obtained with the desired results and using techniques such as gradient descent.
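As a minimal illustration of this weighted-sum-plus-activation scheme, a single fully connected layer can be sketched in Python; the sigmoid activation and all numeric values are illustrative choices of ours, not taken from the networks built later in this work:

```python
import numpy as np

def layer_forward(x, W, b):
    """One fully connected layer: weighted sum of the inputs plus a bias,
    passed through a non-linear activation (a sigmoid here)."""
    z = W @ x + b
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0])          # two inputs
W = np.array([[0.2, 0.8],          # one weight row per neuron
              [-0.5, 0.3],
              [1.0, -1.0]])
b = np.zeros(3)
y = layer_forward(x, W, b)         # three outputs, each in (0, 1)
```

Stacking such layers, feeding each layer's outputs as the next layer's inputs, yields the transmission chain described above.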
This field has received a significant boost due to advances in computer technology and the availability of large volumes of data to train the models. In recent years, new types of neural networks have been developed, such as recurrent neural networks, convolutional neural networks, diffusion models, transformer networks and deep architectures [4, 5].
The importance of neural networks in the field of artificial intelligence lies in their ability to learn and generalize from data, enabling them to tackle complex problems and discover patterns in the data. This represents an advancement in various fields, including solutions to problems that are difficult to obtain using traditional algorithmic approaches.
The development of increasingly larger and more complex neural networks is driving the need for specialized hardware technologies that enable efficient implementations of these models.
In this context, a very promising line of research has emerged in the development of hardware for neural networks using memristors.
### Memristor
Memristors are passive electric circuit components, such as inductors, capacitors and resistors. Their existence was theorized in 1971 by Professor Leon Chua in the article [6], titled: "Memristor-The Missing Circuit Element" and they relate the electric charge and the magnetic flux together. There wasn't a physical implementation of them until very recently, when in 2008, a research team from Hewlett-Packard experimentally demonstrated the existence of these devices [7].
From a practical point of view, the memristor can be understood as a variable resistor whose resistance depends, in a nonlinear way, on the amount of current that has previously flowed through it. Additionally, it is a non-volatile device, meaning it retains its state even when disconnected from the current. These properties explain the name that Chua gave them: \(memristor\), combining \(memory\) and \(resistor\).
This memory of its history is contained in its physical configuration, so it needs to be both:
1. Physically reconfigurable when subjected to a certain voltage level. This reconfiguration must be reversible, going in both directions, from higher to lower resistivity when the voltage is positive and vice versa when it is negative.
2. Stable, not changing its state once the current flow through it stops.
These properties would exist in an ideal device; however, in practice, current implementations have the risk of reaching a physical configuration that becomes irreversible in some cases, such as a state of very high resistance. They also tend
Figure 1: Memristor symbol
to slightly vary their state once disconnected from the current. Information about this internal state of the memristor is captured in a state variable \(\mu\). Its behavior is described by two equations:
1. \(I(t)=G(\mu(t))*V(t)\)
2. \(\frac{\mathrm{d}}{\mathrm{d}t}\mu(t)=F(\mu(t),V(t))\)
The first equation indicates that the current depends on the conductance (the inverse of resistance) that the memristor has in state \(\mu\) at time \(t\) and the voltage \(V(t)\) applied at that moment. The second equation represents the change in the internal state of the device based on the current state \(\mu\) and the voltage \(V(t)\) it is being subjected to. These two equations give the device its non-linear behavior.
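To make the two equations concrete, here is a toy Euler integration using the simple linear-drift form for \(G\) and \(F\); this is our own illustrative choice, not the physical model adjusted later in this work, and the constants do not describe the KNOWM device. Driving it with a sinusoid reproduces the characteristic pinched hysteresis loop:

```python
import numpy as np

# Toy memristor: internal state x in [0, 1] mixes two limiting resistances.
RON, ROFF, K = 100.0, 16e3, 1e4    # illustrative constants

def simulate(v, dt):
    """Euler integration of the two coupled equations:
    I(t) = G(x) * V(t), and dx/dt = F(x, V) taken as K * I (linear drift)."""
    x, out = 0.1, []
    for vt in v:
        r = RON * x + ROFF * (1.0 - x)                 # resistance in state x
        i = vt / r                                     # first equation
        x = float(np.clip(x + K * i * dt, 0.0, 1.0))   # second equation
        out.append(i)
    return np.array(out)

t = np.linspace(0.0, 1.0, 2000)
v = 0.8 * np.sin(2 * np.pi * t)        # one sinusoidal drive cycle
current = simulate(v, dt=t[1] - t[0])
# Plotting current against v traces a pinched loop: at equal voltage, the
# falling branch carries more current than the rising one.
```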
The operation of memristors is based on the phenomenon known as the "memristive effect." This effect occurs due to the properties of certain materials, such as metallic oxides, to change their resistance based on the direction and amount of electric charge passing through them. When a voltage is applied to a memristor, it undergoes a physical reconfiguration of its structure, changing its resistance, depending on the direction of the electric current.
The ability to "remember" its previous resistance even after the applied voltage is removed is a key characteristic of memristors. This allows them to maintain their resistance despite the absence of an external power source, making them ideal candidates for the creation of artificial synapses in neural networks [8, 9].
Significant progress has been made in recent times in the fabrication of these devices, thus opening new possibilities in the field of neuromorphic computing and the design of hardware-efficient neural networks [10].
These devices have attracted great interest and study in artificial neural networks. This is because they offer potential advantages in terms of energy efficiency, storage density, and parallel processing, enabling the implementation of high-performance neural networks [9].
Memristors can emulate the behavior of artificial synapses. Synapses are essential elements in neural networks as they facilitate communication between neurons and information processing. Traditionally, these neural synapses have been implemented using transistors and capacitors, but this emerging technology offers a promising alternative [11, 12].
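The operation this enables is the analog matrix-vector product at the heart of a layer: in a crossbar of memristive synapses, voltages applied to the rows produce, by Ohm's and Kirchhoff's laws, column currents equal to conductance-weighted sums of the inputs. An idealized numerical sketch, ignoring wire resistance and sneak-path effects, with conductance values of our own invention:

```python
import numpy as np

# Conductances programmed into a 3x2 crossbar (siemens): one row per input,
# one column per output neuron.
G = np.array([[1e-4, 2e-4],
              [5e-5, 1e-4],
              [2e-4, 5e-5]])
v = np.array([0.10, 0.20, 0.05])   # read voltages applied to the rows (V)

# Kirchhoff sums the per-device Ohm currents on each column wire, so the
# column currents are exactly a matrix-vector product, computed "for free".
i_cols = G.T @ v

# The explicit sum each column performs physically:
check = [sum(G[r, c] * v[r] for r in range(3)) for c in range(2)]
```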
Most of the simulations performed on the papers that explore this idea, like in [13], use resistors or very rough models to approximate the behaviour of memristors. We will use state of the art physical models in our simulations, accounting for the inherent variability of the device.
Figure 2: Simplified internal configuration of a memristor from [7]
## 3 Memristor Discovery
### Calibration
In this first process, the focus was on getting acquainted with memristor technology. To achieve this, the Memristor Discovery pack from KNOWM was used, which includes a chip with eight pairs of memristors, a board with assigned resistors, and a manual for the software application. To utilize this tool, the logic analyzer of the Analog Discovery 2 from Digilent was employed.
The setup of the Analog Discovery 2 using the Waveforms software was initiated. Firstly, the logic analyzer was connected to a laptop via USB. Once connected, an automatic test of wave generation functions was performed. To do this, the thirty-pin connector was used, connecting waveform generator 1 to oscilloscope 1 and waveform generator 2 to oscilloscope 2. Next, the software was configured to generate a square wave and a sinusoidal wave, as shown in Figure 4.
Next, the logic analyzer was calibrated to ensure accurate measurements. A multimeter was used to measure the voltage generated at each step of the calibration process to correct the generated signals.
With the calibrated logic analyzer and the memristors functioning properly, we can now begin to conduct our own experiments using KNOWM's memristors through the Memristor Discovery application. In order to familiarize
Figure 4: Waveform generators configured for self-test.
Figure 3: Memristor Discovery and Analog Discovery 2
ourselves with this hardware, we will perform various tests, ranging from obtaining hysteresis curves to designing our own simple neural network.
### DC Experiment
First, we started by exploring the DC option of the KNOWM Memristor Discovery software. This functionality allows us to characterize the electrical properties of a memristor easily and efficiently. So, to initiate our DC experiment, we apply a DC voltage to a memristor and then measure the resulting current. The current-voltage (I-V) and conductance-voltage (G-V) graphs are then plotted to show the memristor's characteristic pinched hysteresis behavior.
The following figures show the I-V and G-V graphs for a memristor driven with an input sinusoidal waveform. The I-V graph shows that the memristor has a non-linear resistance, also appreciated in the G-V graph that shows how conductance (G) does not vary proportionally to the applied voltage (V). The hysteresis loop shows that the memristor has a memory effect, meaning that its resistance depends on its previous state.
This DC experiment is a powerful tool for characterizing memristors. It can be used to measure a variety of important parameters, including the resistance, capacitance, switching threshold, and non-linearity. This information can be used to design circuits that take advantage of the memristor's unique properties.
It is worth noting that the yellow-colored graphs represent the data obtained if the resistor were not applied, while the blue-colored graphs represent the behavior of the memristor in conjunction with the resistor.
### AC Experiment
Next, we proceed to investigate the possibilities of the Hysteresis option within the same software. To do this, we will create a new experiment. In it, a memristor is driven by an AC voltage source and the resulting current is measured. The current-voltage (I-V) graphs are then plotted to show the memristor's characteristic pinched hysteresis behavior.
The frequency of the AC voltage source affects the memristor's hysteresis behavior. To test this, we applied ramps at three different frequencies: 10Hz, 100Hz, and 1000Hz. At low frequencies the memristor has a larger hysteresis loop, because it has time to switch between resistance states. At high frequencies the loop is smaller, because the memristor spends less time in either state and more time switching between them. This can be verified by comparing the different
Figure 5: DC Response I-V.
plots in Figures 7, 8, 9, where it can be observed that the hysteresis curve for a frequency of 1000Hz is flatter than the curve for 10Hz.
### Resistance Programming
Continuing with the option of Hysteresis, a new experiment was conducted. In this experiment, a memristor is driven by a DC voltage source and the resulting current is measured. The current is then used to calculate the memristor's resistance. The memristor's resistance can be programmed by changing the DC voltage source.
This Resistance Programming experiment works by applying a DC voltage to the memristor. The DC voltage causes the memristor to switch between two resistance states. The resistance state that the memristor switches to depends on the polarity of the DC voltage.
Figure 6: DC Response G-V.
Figure 7: AC Experiment 10Hz
The Resistance Programming experiment can be used to program the resistance of a memristor in a variety of ways. One way to program the resistance is to apply a DC voltage to the memristor for a specific amount of time. The longer the DC voltage is applied, the larger the change in the memristor's resistance. The HRS and LRS refer to the high resistance state and the low resistance state, respectively. The HRS is the state in which the memristor has a high resistance, and the LRS is the state in which the memristor has a low resistance. The memristor can be programmed to switch between the HRS and LRS by applying a voltage to the memristor.
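A toy update rule capturing this behavior, where polarity sets the direction of the change and pulse duration sets its magnitude, might look as follows; the constants are invented for illustration and do not describe the KNOWM device:

```python
R_ON, R_OFF = 1e3, 1e5   # LRS and HRS resistance bounds (invented values)
ALPHA = 2e5              # resistance change per volt-second (invented)

def program(r, v, t_pulse):
    """Apply one DC pulse of amplitude v (V) and width t_pulse (s).
    Positive polarity drives the device toward the LRS, negative toward
    the HRS, and longer pulses produce a larger change."""
    return min(max(r - ALPHA * v * t_pulse, R_ON), R_OFF)

r = 50e3
r = program(r, +0.5, 10e-3)   # one SET pulse lowers R by about 1 kOhm
```

The clipping to `R_ON` and `R_OFF` mirrors the device saturating in its low and high resistance states.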
### Pulse Response
We continue exploring the Pulse option. For this experiment, a memristor is driven with a series of pulses of varying amplitude and width. The response of the memristor to these pulses is then measured and analyzed.
The graph in Figure 11 shows the relationship between the conductance and resistance of a memristor as a function of the number of read pulses applied to the memristor. The graph shows that the conductance of the memristor increases with each pulse, while the resistance decreases. This is due to the fact that the memristor is a non-volatile memory device, meaning that it can store information even after the power is removed.
Figure 8: AC Experiment 100Hz
Figure 9: AC Experiment 1000Hz
### Synapse Experiment
This experiment was run using the Synapse12 option of the KNOWM software. It consists of a series of elemental kT-RAM instructions that are used to drive the synapses, followed by a series of FLV read instructions that are used to measure the synaptic state and synaptic pair conductances. This experiment is repeated multiple times to observe the continuous response of the synapses.
The significance of this experiment is that it provides a detailed characterization of the functionality of kT-RAM differential-pair memristor synapses. Figure 12 demonstrates that the synapses can be driven with a variety of elemental instructions, and that the synaptic state and synaptic pair conductances can be accurately measured with FLV read instructions. This experiment also demonstrates that the synapses exhibit a continuous response, which is a key requirement for their use in neuromorphic computing applications.
Figure 11: Pulse Response
Figure 10: Resistance Programming
### Classifier Experiment
Finally, we started experimenting with the Classify12 option, a mode that allows users to test the performance of a memristor-based classifier. It uses a 1X16 linear array chip to create an 8-synapse neuron, and lets the user select the forward and reverse pulse voltage amplitudes and pulse widths for each synapse.
This Classifier Experiment is significant because it allows us to test the performance of a memristor-based classifier without having to design our own hardware.
To use the Classifier Experiment, we first need to select the forward and reverse pulse voltage amplitudes and pulse widths for each synapse. Afterwards, we select one of the training data set shown in table 1. The training data set should consist of a set of input and output pairs. The input pairs are the patterns that the classifier will be trained to recognize. The output pairs are the desired outputs for each input pattern.
When the training data set has been selected, we start the training process. The training process takes several iterations; after each iteration, the classifier is updated to improve its performance. The training stops when the desired number of epochs has been reached.
Once the training process is completed, we run the classifier on a new data set without resetting the weights stored in the memristor pairs. The classifier's performance on this new data set shows how well it fits a new target despite having been trained before.
In Figure 13, you can see how the classification is produced using the Ortho8Pattern dataset, where synapses 4, 5, 6, and 7 had to classify as true (positive), while the rest of the synapses had to classify as false (negative). Shortly after
\begin{table}
\begin{tabular}{||c c c||} \hline Dataset & True Patterns & False Patterns \\ \hline \hline Ortho2Pattern & [0,1,2,3] & [4,5,6,7] \\ \hline AntiOrtho2Pattern & [4,5,6,7] & [0,1,2,3] \\ \hline Ortho4Pattern & [4,5],[6,7] & [0,1],[2,3] \\ \hline AntiOrtho4Pattern & [0,1],[2,3] & [4,5],[6,7] \\ \hline Ortho8Pattern & [4],[5],[6],[7] & [0],[1],[2],[3] \\ \hline AntiOrtho8Pattern & [0],[1],[2],[3] & [4],[5],[6],[7] \\ \hline \end{tabular}
\end{table}
Table 1: Classifier Experiment Datasets
Figure 12: Synapse Experiment
starting the execution, you can observe how it reaches 100% accuracy. Without resetting the resistance weights, the dataset is changed to AntiOrtho8Pattern, which should invert the values to be classified, and it can be observed how all synapses modify their weights, obtaining an accuracy close to 90%.
To delve deeper into the functioning of memristors as synapses, we designed a simple classification experiment by creating both a new dataset, shown in table 2, and a new learning method. We implemented these extensions in Java, directly in the program's source code.
The learning method called LearnComboReverse consists of learning only when it must classify as false, or when it fails.
## 4 Device Characterization
Once the experimentation phase was done and an intuitive understanding of the device had been achieved, we set out to experimentally characterize it. This consists of obtaining the numerical data needed to determine some of the devices' most important properties, such as the set and reset voltages and the HRS and LRS currents ratio for various voltage ramps. These results are obtained by means of automated extraction scripts that are executed on the experimental data we collected from the device.
### Data Collection
The DC Experiment environment from Knowm Memristor Discovery program provided us with the perfect means to visualize and obtain the data we needed. We first scanned through a few of the sixteen memristors on the chip until we selected one that showed good and consistent behavior. For that memristor, we applied \(200\) pulses of ramp voltage stress (triangle shaped voltages) with an amplitude of \(0.8V\) through its \(10K\Omega\) resistor; and we did so for three different pulse periods: \(10ms\), \(100ms\) and \(1000ms\), which result in ramps of \(80V/s\), \(8V/s\) and \(0.8V/s\) respectively.
\begin{table}
\begin{tabular}{||c c c||} \hline Dataset & True Patterns & False Patterns \\ \hline \hline myDataSet & [0, 2, 4], [2, 4, 6] & [1, 3, 5], [3, 5, 7] \\ \hline \end{tabular}
\end{table}
Table 2: Custom Dataset Memristor
Figure 13: Classifier Experiment
The program only allowed us to apply ten pulses at a time, so we slightly modified its source code to see if we could obtain the two hundred cycles in a single execution, eliminating the downtime involved in saving the data and rerunning the execution every time. We thought this downtime could affect the results due to the non-ideal nature of our devices, producing noticeable jumps in the measurements caused by the tendency of our memristors to lose some conductance when idle. After examining the source code of the Knowm Memristor Discovery and Waveforms programs, both written fully in Java, we came across a hardware buffer size limitation on the Discovery board itself, where the results were stored before being sent via USB to the computer. We managed to double the number of cycles to 20 but couldn't go any further. The effects we feared are slightly noticeable in some of the results, but for the most part, they didn't present a real issue.
### Data and script adaptation
The first properties we obtained were the set and reset voltages of the device. These are the points at which the device starts to meaningfully transition from the HRS to the LRS for the set voltage and vice-versa for the reset voltage. This information is interesting since it provides us with an interval \((V_{reset},V_{set})\) of voltages we can apply to the device without expecting any meaningful changes in its internal state.
To obtain these two points, we had to find the parameters that would best meet our specific needs, using a MATLAB script implementing the methods from [14]. We required the input data to be separated into the set and reset branches of each cycle and the currents to be in absolute values. For that purpose, we developed a Python script.
Each method provided us with different results for each cycle. We selected the ones that best matched the experimental measurements, which were the third method from the paper for both the set and the reset. For the set, the method selects the point with the maximum separation from the straight line joining the first and last points of each set curve, as shown in Figure 16. For the reset, the method simply selects the point with the highest current.
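One way such a maximum-separation criterion can be implemented (our own sketch, not the MATLAB code from [14]) is to normalize both axes and take the sample farthest from the chord joining the branch's endpoints:

```python
import numpy as np

def set_point(v, i):
    """Return the (V, I) sample of a set branch farthest from the chord
    joining its first and last points. Both axes are normalized first so
    that volts and (micro)amps weigh equally in the distance."""
    v = np.asarray(v, float)
    i = np.asarray(i, float)
    x = (v - v.min()) / (v.max() - v.min())
    y = (i - i.min()) / (i.max() - i.min())
    dx, dy = x[-1] - x[0], y[-1] - y[0]
    # Perpendicular distance from every sample to the chord.
    dist = np.abs(dx * (y - y[0]) - dy * (x - x[0])) / np.hypot(dx, dy)
    k = int(np.argmax(dist))
    return v[k], i[k]

v = np.linspace(0.0, 1.0, 101)
v_set, _ = set_point(v, v ** 4)   # synthetic convex set branch
```

On the synthetic `v ** 4` branch the chord is the diagonal, so the selected point maximizes \(v-v^4\), landing near \(v\approx 0.63\).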
Figure 14: Custom Classification
We were also interested in showing the cycle-to-cycle variability at given voltage points, and more precisely at reading voltage points such as \(0.05V\) or \(0.1V\), in both HRS and LRS. For this purpose, we had to write another Python script to estimate the current at these points using linear interpolation on the real data, since we didn't have measurements at those exact voltages.
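The interpolation step itself is a one-liner with NumPy; the measured samples below are made-up numbers standing in for one branch of the real data:

```python
import numpy as np

# Measured samples of one branch (voltages must be increasing for interp);
# these values are invented for illustration.
v_meas = np.array([0.00, 0.04, 0.08, 0.12])
i_meas = np.array([0.0, 2.1e-6, 4.6e-6, 7.4e-6])

# Currents estimated at the exact read voltages 0.05 V and 0.1 V.
i_read = np.interp([0.05, 0.10], v_meas, i_meas)
```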
Figure 16: Method for selecting the set point based on maximum separation
Figure 15: Measured current for the applied voltage ramps
### Results
The results presented here are soon to be published in a paper titled "Characterization and modeling of variability in commercial self-directed channel memristors" at the 14th Spanish Conference on Electron Devices (CDE 2023) [15].
We chose to use a cumulative distribution function (CDF) to illustrate both the voltage set and reset of the device across all the \(200\) cycles. The results, seen in figure 17, are fairly consistent and in line with the manufacturer's advice to use \(0.1V\) as probing voltage. The resulting interval of voltages in which the device can operate without meaningfully changing its resistance is roughly from \(-0.1V\) to \(0.25V\). This information will be useful when designing the neural networks.
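The curves in figure 17 are empirical distribution functions, which can be computed directly from the 200 extracted voltages; the sample values below are synthetic stand-ins for our measurements:

```python
import numpy as np

def ecdf(samples):
    """Empirical CDF: sorted sample values and cumulative probabilities."""
    x = np.sort(samples)
    p = np.arange(1, len(x) + 1) / len(x)
    return x, p

# 200 synthetic set voltages standing in for the extracted ones.
rng = np.random.default_rng(0)
v_set = rng.normal(0.35, 0.03, 200)
x, p = ecdf(v_set)   # plotting p against x gives the CDF curve
```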
It is also interesting to see in figure 18 how these voltage points vary from cycle to cycle. Although a slight general tendency can be intuited, the variability seems to be consistent across all voltage ramps.
Lastly, the current estimated at \(0.1V\) and \(0.05V\) at HRS and LRS can be seen in figure 19 for the \(8V/s\) measurements. Except for some cycles around cycle \(100\), they appear very consistent. However, when we plot the ratios of the currents in LRS and HRS at \(0.1V\), we can see much more variability (figure 20). There are a few reasons for this: firstly, we have to take into account the significant measurement noise when dealing with such small currents, and the fact that interpolation was performed to obtain these values. We also think that in this case some of the variability can be attributed to the interrupted measurement process that we were forced to use, since some of the jumps appear to coincide exactly with multiples of \(20\).
Figure 17: Cumulative distribution functions of the calculated set (a) and reset (b) voltages
Figure 19: Cycle to cycle variability of the current measured at 0.1V in HRS and LRS
Figure 18: Set and reset voltages versus cycle number, for the three voltage ramp rates
Figure 20: HRS and LRS measured current ratio versus cycle number for 0.1V
## 5 Physical Model
Because only sixteen memristors were available, too few for even minimally complex neural networks such as an MNIST classifier, a physical model has been created in LTspice to emulate the behavior of the memristors.
To do this, we relied on the parameters described in [16]. As seen in the model shown in Figures 21 and 22, we applied a voltage of 1.2V to the memristor; the response obtained with the LTspice simulator, shown in Figure 23, traces a curve very similar to the characteristic memristor hysteresis loop on the I-V(app) axes.
However, the model is not yet complete, as commercial memristors exhibit a certain level of variability owing to their stochastic components. This causes them to behave differently in two different cycles and leads to different memristors functioning differently despite being under the same conditions.
Due to this situation, we have emulated this variability behavior within our model. To do so, we have added the parameters mentioned in the paper [17]. In order to appreciate the variability, unlike the previous model, we execute the
Figure 21: Memristor Physical Model
Figure 22: Physical Model
model a given number of times, in the case of Figure 26, one hundred times. It can be clearly observed that the result is very different from our empirical data. However, this will be addressed by using a genetic algorithm to adjust the initial parameters as to match our devices as accurately as possible.
To perform a simulation with variability, we use the Gaussian function 'gauss' in LTspice. This function generates a random number from a Gaussian distribution with a standard deviation determined by the parameter used. The function is applied to the new parameters that serve to emulate variability.
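The same idea can be mimicked outside LTspice: each variability parameter perturbs its nominal value with a Gaussian draw. The names below follow the model parameters listed in the tables of this section (ion0, ioff0, Vr0 and their sigion, sigioff, sigvr spreads), but treating the spreads as relative factors is a simplifying assumption of this sketch:

```python
import random

# Nominal values and spreads named after the model parameters in the text;
# applying the spreads as relative factors is our simplification.
nominal = {"ion0": 50e-6, "ioff0": 2.5e-6, "Vr0": -0.03}
sigma = {"ion0": 0.05, "ioff0": 0.07, "Vr0": 0.01}

def sample_cycle(seed):
    """Draw one perturbed parameter set, as LTspice's gauss() does on each
    simulation step. A fixed seed makes the draw reproducible; distinct
    seeds per run avoid every run getting identical draws."""
    rng = random.Random(seed)
    return {k: nominal[k] * (1.0 + rng.gauss(0.0, sigma[k]))
            for k in nominal}
```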
The data used as a starting point to create the model are from paper [16], which are shown in Table 3.
Figure 23: LTspice simulation
Figure 24: Memristor Physical Model with variability
To utilize this model created in LTspice, it was decided to integrate it with Python in order to streamline parameter changes and, in the future, facilitate the development of genetic algorithms and neural network creation. For this purpose, the PyLTspice Python library was used, which enables a quick and straightforward connection with the models created in LTspice. Additionally, this library allows running the model with desired parameters directly from Python. Thanks to these features, parameter adjustments were made by creating a genetic algorithm.
### Genetic Algorithm
A genetic algorithm was implemented to find the best parameters for the physical model.
A genetic algorithm is an optimization algorithm that mimics the process of natural selection. It works by iteratively creating a population of solutions, evaluating their fitness, and then using genetic operators to create a new population
Figure 26: LTspice simulation with variability
Figure 25: Physical Model with variability
of solutions. The genetic operators are used to introduce variation into the population, which helps to ensure that the new population can contain solutions that are better than the old population. The process is repeated until a satisfactory solution is found.
The algorithm was implemented in python, and the following steps were taken:
1. In each generation, the learning rate was decreased incrementally.
2. The current in \(Vramp\) and the voltage in \(app\) were obtained during the execution in LTspice with the current population, obtaining the simulated I-V curves.
3. The loss of each individual was calculated by mapping the simulation curve onto the real ones and calculating the square differences between them.
4. The average of losses was updated, and the population was sorted from the lowest to the highest loss.
5. The best individual was updated, and the half with the worst loss was eliminated while the remaining half was mutated.
The algorithm was repeated until a satisfactory solution was found.
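The steps above can be condensed into a minimal, self-contained sketch; the toy quadratic loss stands in for the LTspice curve-fitting loss used in the actual work:

```python
import random

random.seed(0)  # make the run reproducible

def mutate(ind, rate):
    """Multiplicative Gaussian mutation of every gene."""
    return [g * (1.0 + random.gauss(0.0, rate)) for g in ind]

def genetic_search(loss, init, pop_size=20, gens=50, rate0=0.2):
    """Evaluate, sort by loss, keep the best half, refill it by mutation,
    and decrease the mutation rate each generation."""
    pop = [mutate(init, rate0) for _ in range(pop_size)]
    best = min(pop, key=loss)
    for gen in range(gens):
        rate = rate0 * (1.0 - gen / gens)   # decreasing "learning rate"
        pop.sort(key=loss)                  # lowest loss first
        best = min(best, pop[0], key=loss)  # keep the best seen so far
        half = pop[: pop_size // 2]         # drop the worst half...
        pop = half + [mutate(ind, rate) for ind in half]  # ...and mutate
    return best

# Toy stand-in for the curve-fitting loss: squared distance to a target.
target = [1.0, -2.0, 0.5]
loss = lambda ind: sum((a - b) ** 2 for a, b in zip(ind, target))
found = genetic_search(loss, init=[0.5, -1.0, 1.0])
```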
The following image shows the results obtained, being the blue graph the original data and the red one the data obtained with the model after the execution of the genetic algorithm.
Next, this genetic algorithm was modified to incorporate the variability present in commercial memristors such as the one used by KNOWM. Memristors have the ability to modify their resistance based on the electric charge passing through them. Despite the stochastic nature of these devices, there are variations between different cycles or among different devices under the same conditions, thus demonstrating the aforementioned variability.
This new genetic algorithm was applied to all the parameters of our current model, resulting in the values shown in Table 4, with good agreement between the experimental and simulated curves.

The 'steps' within a single LTspice simulation could not be parallelized, because all the parallel simulated steps used the same seed. As a result, the same values were obtained for the sigion, sigioff, sigvr, and sigisb parameters responsible for emulating the aforementioned variability. This meant that, for each individual, all the steps drew the same values from the Gaussian distributions, resulting in no variability.
Figure 27: Genetic Algorithm results vs empirical curves
However, the execution of the different individuals in the population could be parallelized, greatly speeding up the simulations. This was done from Python by launching multiple instances of LTspice, one for each individual. We were able to achieve a speedup of up to \(6.25\times\).
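The per-individual parallelization can be sketched as below. Since each worker only launches and waits on an external LTspice process, a thread pool suffices; `run_individual` is a hypothetical stand-in for writing a netlist, running LTspice (e.g. via PyLTspice) and parsing the loss from the output:

```python
from concurrent.futures import ThreadPoolExecutor

def run_individual(params):
    """Hypothetical worker: a real implementation would write a netlist for
    `params`, launch its own LTspice instance and return the loss parsed
    from the simulation output. A dummy computation stands in here."""
    return sum(v * v for v in params.values())

def evaluate_population(population, workers=8):
    """Evaluate individuals concurrently, one simulator instance per worker."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(run_individual, population))
```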
### Results
After using the genetic algorithm with variability, we obtain the final data for our model. The data shown in Table 4 allow us to perform a simulation very similar to the hysteresis curves obtained during the experimental phase with the commercial KNOWM memristor.
The model, using these parameters, is finally represented as shown in Figure 29.
Using the best parameters in this new model, we obtain the following curves shown in Figure 30 with 200 steps and variability. As can be seen, these curves accurately simulate the behavior of a real memristor. These data are shown in
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline roff = 10\(\Omega\) & ri = 50\(\Omega\) & ron = 10\(\Omega\) & iom0 = 50\(\mu\)A \\ \hline ioff0 = 2.5\(\mu\)A & aon = 1.72 & aoff = 2.7 & H0 = 0 \\ \hline etas = 17 & etar = 70 & Vs = 0.14V & Vr0 = -0.03V \\ \hline Vt = 0.1V & isb = 6.2\(\mu\)A & gam = 0.29 & sigion = 0.05 \\ \hline sigioff = 0.07 & sigvr = 0.01 & sigsb = 2.04e-07 & RPP = 1E10 \\ \hline \end{tabular}
\end{table}
Table 4: Genetic variability params
Figure 28: Genetic Algorithm with variability
Figure 28, directly comparing them with the experimental curves obtained with the KNOWM memristor. The blue curves represent the actual measurements, while the red curves represent the model.
As mentioned before, due to the limited availability of physical memristors, simulating a complex neural network such as one for MNIST requires a physical model that accurately emulates the behavior of a memristor. With the data obtained, we consider the resulting model faithful enough to the commercial memristor to evaluate these devices as a viable alternative for building neural networks.
Figure 30: Memristor Physical Model Hysteresis
Figure 29: Memristor Physical Model with variability
## 6 Neural Network Implementation
We now have a suitable physical model for our memristors and the parameters that best fit our empirical data in simulations. Moving forward, we will no longer be constrained by the limited number of memristors on the board and their configuration.
We can start to design and test circuits by means of hardware simulation in LTSpice, using as many memristors as our processing power will allow and in whichever configuration we desire.
### Inference in Neural Networks
We will design a circuit to perform neural network inference by means of analog computations. Before explaining the circuit design, we will briefly explain how an inference step is calculated from a mathematical point of view.
#### 6.1.1 Fully Connected Neural Network
A neural network is made up of an ordered sequence of layers, which are in turn composed of an array of neurons. Each layer of neurons is activated, one after the other, depending on the outputs of the immediately preceding layer. In a fully connected neural network, each and every one of its layers is fully connected to the next one, meaning all neurons receive as inputs the outputs of every neuron in the preceding layer. Each neuron's activation value is calculated as follows:
\[z=\sum_{j}w_{j}x_{j}+b\]
Where \(x_{j}\) is the value of the output of the \(j^{th}\) neuron in the previous layer, \(w_{j}\) is the neuron's weight associated with the \(j^{th}\) output and \(b\) is the bias parameter of the neuron. If we have \(w\) and \(x\) as vectors, we need simply to calculate their dot product and then add \(b\).
Afterwards, an activation function, such as sigmoid or relu, is applied to \(z\) to produce the final output that will be fed as an input to the next layer.
Figure 31: Neural Network illustration from [18]
The calculations for each layer can be easily made in parallel, since the activation of one neuron doesn't depend in any way on the activation of other neurons within the same layer. This way, instead of calculating dot products for each neuron, we perform a matrix multiplication as follows:
\[z=W_{l}*A_{l-1}+B_{l}\]
Where \(W_{l}\) is the matrix containing the weights of the neurons of layer \(l\). Each row corresponds to a different neuron, and each column to the associated neuron of the previous layer. \(A_{l-1}\) is the vector of outputs from the previous layer, or the inputs if \(l\) is the first layer. Lastly, \(B_{l}\) is the vector of biases of layer \(l\), one per neuron.

The inference process is the calculation of the output for a given input. To do this, we only need to follow these steps for each layer in order, starting with \(l=1\):
1. Calculate \(z=W_{l}*A_{l-1}+B_{l}\).
2. Apply the activation function to \(z\) to get \(A_{l}\).
3. If \(l\) isn't the last layer, do \(l+1\).
#### 6.1.2 Convolutional Neural Network
The other type of neural network that we will implement is the convolutional neural network. These networks usually consist of a set of convolutional layers followed by some fully connected layers. They are widely used for image classification tasks, among many other things, because they retain the ability to generalize while being more efficient to train.
The inference in these last fully connected layers works in the same way as previously described, so we will only focus on the convolutional layers in this section.
Convolutional layers use kernels, which are square matrices of a given size, typically 3x3 or 5x5. The activation value of a neuron is calculated using only a kernel-sized window of neurons from the previous layer. The weights of the kernel are common to all neurons, which simplifies the structure. It is often explained as a sliding window that is multiplied element-wise with the input image; all the terms are then added up, plus a bias.
Usually, more than one kernel is used, generating outputs with more than one channel, i.e., with more than one value for each position in the resulting feature map. This way, we have a third dimension, the channels. There will be as many kernels as \(Channels_{in}*Channels_{out}\), meaning that for every channel of our feature map, there will be one kernel for each channel in the input; their resulting values will be added up.
Unless some type of padding is used, the size of the output feature map will be smaller than that of the input. The reason is that values are not calculated for positions at the edges where the kernel window does not entirely fit.
Figure 32: Convolutional layer illustration from [19]
To further reduce the size of the feature maps, techniques such as pooling or strides can be used. A pooling layer divides the feature map into windows, usually \(2x2\), and reduces each one to a single value, either by taking the maximum value of the window or by averaging them all. Strides allow us to achieve a similar result with a very small hit to quality, less complexity and fewer computations. The idea is that, instead of applying the kernels to every single possible position, we skip some in between.

Taking all of this into account, the resulting feature map will be a tensor made up of as many matrices as output channels, each of them with a size equal to the input size, minus the lost edges, divided by the stride.
A forward pass in a convolutional layer is as follows:
```
1:  k ← kernel_size ÷ 2
2:  size_fmap ← (size_input − 2k) ÷ stride        ▷ Size of the resulting feature maps
3:  for cout in Channels_out do
4:    for i in size_fmap do
5:      for j in size_fmap do
6:        for cin in Channels_in do
7:          for ki in [−k, k] do
8:            for kj in [−k, k] do
9:              H[cout][i][j] ← H[cout][i][j] + input[cin][i·stride+ki+k][j·stride+kj+k] · kernel[cout][cin][ki+k][kj+k]
10:           end for
11:         end for
12:       end for
13:       H[cout][i][j] ← H[cout][i][j] + B1[cout]  ▷ Adding the biases
14:     end for
15:   end for
16: end for
```
**Algorithm 1** Convolutional layer
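A direct NumPy translation of Algorithm 1 can look like this; the shapes are assumptions (`x` is `(C_in, N, N)`, `kernels` is `(C_out, C_in, K, K)`) and the feature-map size formula is the one from the algorithm:

```python
import numpy as np

def conv_forward(x, kernels, biases, stride=2):
    """NumPy equivalent of Algorithm 1.
    x: (C_in, N, N); kernels: (C_out, C_in, K, K); biases: (C_out,)."""
    c_out, _, ksize, _ = kernels.shape
    k = ksize // 2
    size = (x.shape[1] - 2 * k) // stride      # size of the resulting feature map
    h = np.zeros((c_out, size, size))
    for co in range(c_out):
        for i in range(size):
            for j in range(size):
                # kernel-sized window starting at (i*stride, j*stride)
                window = x[:, i * stride:i * stride + ksize,
                              j * stride:j * stride + ksize]
                h[co, i, j] = np.sum(window * kernels[co]) + biases[co]
    return h
```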
### Hardware Design
Once we know what operations we will need to implement in our circuit, we can start with the design. We will use a crossbar structure with synapses of two memristors, similar to the one used in [13] and which can be seen in figure 34.
This structure is very widely used in analog computation, as it allows for very simple matrix multiplication. The inputs are the voltages in nodes \(A\), \(B\) and \(C\). By Ohm's law, the resulting current will be \(V/R\) or, using the conductance: \(I=V*G\). All the currents that carry the value of an input multiplied by the conductance of the corresponding memristor are added up by virtue of being connected to the same cable.
#### 6.2.1 Design of a Fully Connected Neural Network
Going back to the preceding explanation of inference, our inputs and activations of the previous layer are represented as the magnitude of the input voltages, which can be both positive or negative. Our weights are represented by the value of the conductance of the pair of memristors that form each synapse.
Figure 33: Convolutional network illustration from [20]
A neuron is a pair of columns connected to an operational amplifier. Its weights are its synapses, placed in the row corresponding to their associated input. In figure 34 only one neuron is depicted, adding more is as simple as replicating the structure, attaching more pairs of columns to the left.
We use the synapses to be able to represent negative weights: if the conductance of the first memristor \(G1\) is higher than the one of the second \(G2\), it represents a positive weight of value \(G1-G2\), otherwise, the weight will be negative. To achieve this, we need a sub-circuit that receives the currents from both columns and subtracts the value of second from the first one. In turn, this device, known as an operational amplifier, will give us the result in terms of a voltage, ready to be fed to the next layer of neurons or to be read as the final output. The magnitude of this voltage will be the resulting current multiplied by a resistance \(R_{f}\).
The bias term of each neuron can be treated as a weight, meaning its value is determined by the conductance of a synapse. But, instead of receiving a variable input, it is always supplied a constant amount of voltage, making it independent of the input. Its addition to the dot products is carried out by being connected to the same cable.
Putting all this together, the output of a neuron, pending the activation function, will be the following:
\[z=R_{f}*([\sum_{j}V_{j}*G1_{j}]+\beta*GB1-([\sum_{j}V_{j}*G2_{j}]+\beta*GB2))\]
or
\[z=R_{f}*([\sum_{j}V_{j}*(G1_{j}-G2_{j})]+\beta*(GB1-GB2))\]
Where \(\beta\) is a constant, \(GB1\) and \(GB2\) are the conductances of the memristors that make up the synapse that represents the bias term. The same for each pair of \(G1_{j}\) and \(G2_{j}\), which are the weights.
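As a numerical sanity check of the equation above, a single neuron's pre-activation output can be computed as follows; the values of \(R_f\) and \(\beta\) are illustrative, not taken from the design:

```python
import numpy as np

def neuron_output(v, g1, g2, gb1, gb2, rf=1e4, beta=0.1):
    """Pre-activation output voltage of one neuron, per the equation above.
    v: input voltages; g1/g2: conductances of each synapse's two memristors;
    gb1/gb2: the bias synapse conductances."""
    i_plus = np.dot(v, g1) + beta * gb1    # current collected by the + column
    i_minus = np.dot(v, g2) + beta * gb2   # current collected by the - column
    return rf * (i_plus - i_minus)
```

A synapse with `g1 > g2` contributes a positive weight, and swapping the two conductances flips the sign of its contribution.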
We now have a way of calculating \(z=W_{l}*A_{l-1}+B_{l}\). It's worth noting that the cross-bar structure doesn't perfectly map to the matrix \(W_{l}\), since the weights of a neuron were the rows of \(W_{l}\) but are now the columns of the cross-bar. We are only missing a way of applying the activation function. There are electrical circuits that can effectively approximate these functions, but since this isn't necessarily relevant for our purpose, we chose to simulate them behaviorally instead of designing the circuits for them.
To have a complete circuit for a neural network with more than one layer, we simply concatenate another cross-bar structure that takes as inputs the voltage outputs of the last one, as in figure 35.
Figure 34: Memristor-based neuron circuit. A, B, C are the inputs and yj is the output. Figure from [13]
#### 6.2.2 Design of a Convolutional Neural Network
Looking at Algorithm 1, one would be correct in assuming that its circuit design won't be as straightforward as that of the fully connected network; there is hardly an easy way of visualizing it.
Regardless of that, the calculation can be parallelized not very differently than how we did for the fully connected layers. We won't need to use any other sub-circuit or technique than the ones we already have, since we will only need to do addition and multiplication and we can use the same operational amplifier as before.
The difficulty of implementing this circuit lies in its interconnections. However, as we will explain in the following section, we are not designing the circuits manually, but rather using scripts, which will make this task feasible.
Some noteworthy differences lie in the kernels, which contain the synapses representing the weights. Whereas in the fully connected layers each weight could be unique, the kernels and their synapses need to be replicated as many times as there are values in the output feature map. This is not a problem for us, since we are only interested in performing inference, but it is an important consideration for training.
#### 6.2.3 Time and Power Consumption Estimations
We will now estimate the time of inference and the energy usage of a fully connected neural network with a hidden layer of 20 neurons.
For the time we are going to do a first order estimation treating the columns that represent each neuron as if they were in series with respect to the input. In reality they are neither in series nor in parallel, which complicates this analysis. The total time of the circuit will be determined by the result of \(T=R*C\) where \(R\) is the total resistance of the interconnection cables, which we will assume to be \(5\Omega\), and \(C\) is the total capacitance. To calculate the total capacitance, we must add the capacitance of all the individual memristors that are in parallel.
Figure 35: Multi-layer neural network. Figure from [13]
To do this, we have to look back at the circuit design. Since we are treating the neurons as parallel to each other, and they all have the same number of memristors, we only need to take into account the total number of memristors for a single neuron. This is the same as the number of weights the neuron has, which in turn corresponds to the number of inputs it processes. Each weight is composed of two memristors, but they are in series. Therefore, the number of memristors in parallel in a given layer will be equal to the number of inputs it has. For a \(16x16\) image, we have \(256\) memristors in parallel.

Now, the second layer will receive \(20\) inputs from the first layer, which means that it will have \(20\) memristors in parallel. Adding this up, we have a total of \(276\) of them in parallel. If we assume a typical capacitance of \(20pF\) for each memristor, then \(T=5*276*20pF\), which gives us a result of \(27.6ns\), or a frequency of \(36.2MHz\).
For the power consumption, we measure the current in the operational amplifier, which has a value of \(0.000754A\), and multiply it by the supplied voltage of \(5V\), which gives a result of \(3.77mW\) for each neuron. Since in this example we have \(20\) hidden neurons plus \(10\) output neurons, the total power consumption is \(0.1131W\).
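The two estimates can be reproduced directly from the numbers given in the text:

```python
# First-order estimates from the text: T = R * C, with C the sum of all the
# parallel memristor capacitances, and P = I * V per neuron.
R_wire = 5.0                    # assumed interconnect resistance (ohms)
C_mem = 20e-12                  # typical memristor capacitance (farads)
n_parallel = 256 + 20           # 16x16 inputs -> 256, plus 20 hidden-layer inputs

T = R_wire * n_parallel * C_mem     # inference time -> 27.6 ns
f = 1.0 / T                         # -> ~36.2 MHz

I_opamp = 0.000754              # measured op-amp current (amperes)
P_neuron = I_opamp * 5.0        # 5 V supply -> 3.77 mW per neuron
P_total = P_neuron * (20 + 10)  # 20 hidden + 10 output neurons -> 0.1131 W
```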
### LTSpice Implementation
The size and complexity of the circuits for the neural networks make it impractical if not impossible to draw. Instead, we will use Netlists, which allow us to describe a circuit and its interconnections using plain text and can be used to compile circuits and do simulations in LTSpice.
In the Netlist, aside from the connectivity description, we can add simple components such as voltage sources or resistors, but for the rest, we will need to call sub-circuits. We only need two of them: one for the memristors and another one for the operational amplifiers.
We already have the circuit for the memristor, as seen in chapter 5, and we can easily extract a Netlist from it. We chose to use it coupled with the \(10K\Omega\) resistor our real memristors came with, since that is the configuration we have characterized. It receives as a parameter the value of H0 so that we can give it an initial state, i.e., set a given conductance.

For the operational amplifier, we built the circuit shown in figure 36, from which we extracted its Netlist. We also apply the activation function here, which is why we have slight variations of this sub-circuit, differing only in this respect.
As an example of how a Netlist describes a circuit, the following text is equivalent to figure 36:
_.subckt neuron In+ In- Out PARAMs: norm=1 bias=0_

_E1 N001 0 0 In+ 2E5_

_Ra N001 In+ norm_

_E2 z 0 0 In- 2E5_

_Rc z In- norm_

_Rb N001 In- norm_

_Bsigmoid Out 0 V=sigmoid(v(z))_

_Rbias N002 In+ 1_

_V1 N002 0 bias_

_.func sigmoid(x)1/(1+exp(-x))_

_.backanno_

_.ends_

Figure 36: Operational Amplifier
It receives as parameters the value of bias, which we never used since we manage them as described in subsection 6.2.1, and the \(norm\) parameter, which is the value that will multiply the result of the addition; \(R_{f}\) in subsection 6.2.1.
To write the Netlist for the circuit of each neural network, we wrote a script in Python that receives the following information as parameters:
1. The name of the file to write on.
2. The size of the image.
3. The amount of input channels (1 for B&W, 3 for RGB).
4. Number of output neurons.
5. The weights and biases of each layer, adapted to their corresponding H0 equivalents, see section 6.4.
With this information, the script calls functions to write each of the layers, depending on the type of network. We successfully implemented functions to build a linear layer fed from either a preceding convolutional layer or another linear layer, a convolutional layer, and the operational amplifiers for both types of layers.
The functions that create the linear and convolutional layers aren't any more complicated than following the steps and algorithms detailed in section 6.1. Instead of having variables where we add up the products of our inputs and weights, we now simply use the same output cable where the currents add up. For example, a synapse representing a negative weight will be written like this:
_Xwh39+ H627 H739+ memristor PARAMS: Hvalue=0_
_Xwh39- H627 H739- memristor PARAMS: Hvalue=1.959469e-06_
Where \(Xwh39+\) and \(Xwh39-\) are the names of the two memristors that make up a synapse; \(H627\) is the input cable for both. In the output of the first memristor, node \(H739+\) we will get the current that results when multiplying the voltage in the input with the conductance of the memristor with \(H0=0\), which will be very close to \(0\). The same goes for \(H739-\), except this time we will have some current. For a positive synapse, the \(H0\) of the second memristor would be \(0\) and the first one would be greater than \(0\); and if the weight happened to be \(0\), then they would both be \(0\).
For the biases, the input cable is always \(Bias\), which has a fixed voltage.
The operational amplifiers work in a similar way, we only need to connect them to the corresponding inputs and outputs:
_XNeuron3 H5+ H5- OUTPUT0 simple PARAMS: norm=1000 bias=0_
Where \(XNeuron3\) is the name, \(H5+\) and \(H5-\) are the two inputs carrying the currents that will be subtracted; the result will be multiplied by \(norm=1000\) and converted to voltage. The activation function in this case would be the identity, since the type is \(simple\).
With these building blocks, we can very easily implement different architectures that can differ in input channels, size of images, size of kernels, weights and biases, number and type of layers, output neurons and activation functions.
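The synapse-writing step of the Netlist builder can be sketched as follows. The naming follows the examples above; `h_of_g`, which maps a conductance to its \(H0\), stands in for the trained model of section 6.4 (a linear placeholder is used here, so the emitted \(H0\) values are only illustrative):

```python
def write_synapse(f, name, node_in, node_out, weight,
                  h_of_g=lambda g: g / 8e-5):
    """Write the two memristor lines of one synapse to a netlist file.
    A positive weight puts the conductance on the + memristor, a negative
    one on the - memristor; the other stays at H0 = 0."""
    h_pos = h_of_g(weight) if weight > 0 else 0.0
    h_neg = h_of_g(-weight) if weight < 0 else 0.0
    f.write(f"Xw{name}+ {node_in} {node_out}+ memristor PARAMS: Hvalue={h_pos:e}\n")
    f.write(f"Xw{name}- {node_in} {node_out}- memristor PARAMS: Hvalue={h_neg:e}\n")
```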
The Netlists we built for the circuits are, in fact, sub-circuits themselves. In section 6.6 we will explain the reasons, which have to do with the management of the inputs for the simulations.
### Parameter Conversion
We now have the circuit designs and the means to build Netlists for them. We need only to acquire the appropriate weights and biases to be able to start the simulations.
Doing a short recapitulation, our memristor sub-circuits only take as an argument the \(H0\), but we have defined the weights and biases of the memristor synapses as their conductivity, not their \(H0\). Therefore, we need a way of making the conversion from conductivity to \(H0\).
The range of values that \(H0\) can take is between \(0\) and \(1\). To see how they relate to the conductivity, we run an LTSpice simulation on our memristor model that steps through 10,000 values of \(H0\). We measure the conductance of the device at each point, while under a reading voltage of \(0.1V\). The results can be seen in figure 37, where the variability is very noticeable.
We chose to train a small neural network to approximate the value of \(H0\) that should be set when trying to achieve a given conductance. The results are in figure 38. We will use this model to convert the weights we will obtain in the next section to be able to pass them as arguments to our Netlist builders.
From figure 37 we have also obtained the effective range for our weights and biases, which can't be any greater than \(1e-4\). We chose to place the limit at \(8e-5\), since that seemed like a good cut off point that, regardless of the variability, was almost always reachable with \(H0=1\). As we are using two memristor synapses and are able to represent negative values, our real range is \((-8e-5,8e-5)\).
### Digital Training
Now that we know the restrictions for our inputs, weights and biases, we can try and obtain the parameters for the simulations of our circuits. For this purpose, we will use the PyTorch framework.
As previously mentioned, we will use the MNIST data set for our experiments, since it is a widely recognized and computationally light problem. Furthermore, we will be resizing the images from their original \(28x28\) pixel size to \(16x16\) and \(12x12\). The reason behind this decision is that the time it takes to simulate an inference step grows quadratically with the size of the input. We will use the high-quality LANCZOS interpolation method to reduce the quality penalty of the resize as much as possible.
Figure 37: H0 (state variable of the device) versus the conductance
In the MNIST data set from the Torchvision package that we will be using, the images are in a gray scale with values from \(0\) to \(1\). These are the values that we will feed as inputs to our circuits, and, as we saw in section 4.3, our memristors will change state above \(0.25V\), which would alter the weights and impact the performance of the network. To avoid this, we will apply a transformation to ensure that the inputs are contained within the safe interval from \(-0.1V\) to \(0.1V\).
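The text does not state the exact mapping, but one simple transformation that places the gray values in the safe interval is a linear rescale (an assumption, not necessarily the one used):

```python
import numpy as np

def to_safe_voltage(img, v_max=0.1):
    """Linearly map gray values in [0, 1] to input voltages in [-0.1, 0.1] V,
    keeping the reading voltages below the state-change threshold."""
    return (2.0 * np.asarray(img) - 1.0) * v_max
```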
The next step is to ensure the weights and biases are also constrained to the interval established in the previous section, \((-8e-5,8e-5)\), since that is the available range for the conductance of our synapses. There is another important factor that further restricts the permissible interval of the biases: according to our physical design, and as explained in subsection 6.2.1, the synapse that represents the bias is multiplied by a constant input voltage. Since that voltage has to be smaller than \(0.25V\) to avoid introducing state changes, the real range of the biases will be further reduced. We chose \(0.1V\) to simplify calculations, meaning our biases will have to be between \(-8e-6\) and \(8e-6\).
Since neural networks are not linear functions, we cannot simply train the network normally and then divide the value of the weights and biases to make them fit their intervals. This is because the activation functions are non-linear by design. Instead, we will attempt to train the networks directly with the restrictions.
This process required a great deal of trial and error, but we found a way of doing it without any loss in the performance of the network when compared to an equivalent one trained without restrictions. To do it, we had to follow these three steps:
Figure 39: Scaled 16x16 MNIST examples
Figure 38: Results of the model trained to approximate H0 for a given conductance
1. Initialize the weights and biases with a normal distribution with mean 0 and standard deviation of \(2e-5\) for the weights and \(2e-6\) for the biases.
2. Use a weight and bias clipper after each gradient descent step.
3. Avoid the vanishing of the activations layer to layer.
The first point is necessary so that the starting value of the parameters meet our restrictions. The second point is to ensure that the weights and biases stay within the valid range. To achieve this, we define a simple function that goes through all the parameters of the network and, if any of its weights is larger than \(8e-5\), it sets its value to be exactly that and vice versa with weights smaller than \(-8e-5\). It does the same with the biases, but with their maximum and minimum values. We call this function after every step of the gradient descent, i.e., every time the parameters get updated. We can see the results in figure 40.
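The clipper of the second point can be sketched framework-agnostically as below; with PyTorch the same effect is obtained by clamping `p.data` for each entry of `model.parameters()` after every optimizer step:

```python
import numpy as np

W_MAX, B_MAX = 8e-5, 8e-6   # conductance-imposed ranges from the text

def clip_parameters(weights, biases):
    """Clamp every parameter back into its valid range after a gradient step.
    weights/biases: lists of per-layer arrays, clipped in place."""
    for w in weights:
        np.clip(w, -W_MAX, W_MAX, out=w)
    for b in biases:
        np.clip(b, -B_MAX, B_MAX, out=b)
```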
The third point is less straightforward, and it will depend on the activation function we use. Since our weights and biases are so small, every output of a layer, pending the activation function, is magnitudes smaller than its input was. For example, if a neuron in a fully connected layer receives \(100\) inputs of the same value, its output will be, in the best case scenario where all the weights are at the maximum of \(8e-5\):
\[input*100*8e-5=8e-3*input\]
In a more realistic scenario, we would have hundreds of inputs and they would also be both positive and negative, the same as the weights, which would further cancel each other out.
If we are using RELU as the activation function, the values will continue to get smaller and smaller with each layer, which causes a problem similar to the vanishing gradient problem, and the network struggles to learn anything. Sigmoid presents a different set of problems that we will tackle later.
To solve this, we used inspiration from our physical design. In subsection 6.2.1 we explained how the operational amplifier multiplied the resulting current by the value of a resistor \(R_{f}\). We will use the same concept here, multiplying by a fixed value the output of every layer. We will then use the same value for \(R_{f}\) in the Netlist builder to ensure that the voltages don't vanish either, so we will have to make sure that it does not produce voltages outside the interval \((-0.1V,0.25V)\) or else it would change the state of the memristors in the following layer.
Figure 40: Weight distribution of the output layer of a neural network trained with parameter range restrictions
Finding the perfect value for this purpose is complicated, since we can have very different configurations of weights, biases and inputs. We found through experimentation that \(1e4\) seemed to work well in most cases, particularly in fully connected layers.
We were also able to implement sigmoid activation functions, and it's worth explaining the challenges they presented. Firstly, the output of a sigmoid is in the interval \((0,1)\) and \(sigmoid(0)=0.5\), which is well above our allowed values. For this reason, we need to replace the sigmoid with a \(sigmoid/10\), making the output range \((0,0.1)\). Another problem with the sigmoid function is that, if the inputs are very close to \(0\), it behaves like a linear function. To avoid this, we do something similar to what we did before: we multiply the outputs before the activation function by a constant. For sigmoids, \(1e5\) worked the best.
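Putting the two fixes together, the activation actually applied is a pre-amplified, down-scaled sigmoid; a minimal sketch (the gain of \(1e5\) is the experimentally found value from the text):

```python
import math

def scaled_sigmoid(z, gain=1e5):
    """Sigmoid adapted to the analog constraints: pre-amplify by `gain` so the
    tiny pre-activations leave the linear region, then divide by 10 so the
    output fits the safe (0, 0.1 V) range of the next layer's inputs."""
    return 1.0 / (1.0 + math.exp(-gain * z)) / 10.0
```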
Having found a way to successfully train neural networks that meet the restrictions imposed by our physical design, we trained the following configurations to later simulate and compare results:
1. Simple fully connected network, no hidden layers, for input sizes \(12x12\) and \(16x16\).
2. Fully connected network with a \(20\) neuron hidden layer, for input sizes \(12x12\) and \(16x16\). Using RELU as the activation function.
3. Simple convolutional network with a single convolutional layer, with kernel 3, stride 2, and a fully connected output layer, for input sizes \(12x12\) and \(16x16\). Using RELU as the activation function.
4. Convolutional network with two convolutional layers, with kernel 3, stride 2, and a fully connected output layer, for input sizes \(12x12\) and \(16x16\). Using RELU as the activation function.
5. Fully connected network with a \(20\) neuron hidden layer, for input sizes \(12x12\). Using Sigmoid as the activation function.
### Simulation
Once we have the Netlist builders for our circuits, the parameters for the neural networks and a way of converting them to H0 values, we can start the simulations.

Our goal was to do inference on the first \(1000\) images from the test set, both for the digital network and for the simulated circuit. For the same reasons explained in section 5.1, we could not run all the inputs as parallel steps of a single simulation, because that way the variability parameters always take the same values. But we still wanted to parallelize as much as possible, because the simulations take a considerable amount of time.

With \(32GB\) of memory, we can run up to \(10\) simulations at a time of the biggest circuits among the ones tested, which speeds up the total execution time by a factor of around \(5\times\).

To have both the parallelization and the steps through the variability parameters of the physical model, we settled on \(10\) threads, each of which processes \(10\) different inputs using steps. This way, a batch of \(100\) images is divided into sets of \(10\), which are processed by different simulations. We do \(10\) steps instead of \(100\) because, for some reason unknown to us, beyond \(20\) or so steps each step starts to take considerably longer.
This is the reason why the Netlists of our circuits are sub-circuits, they need to receive the input as a parameter that will change with every step up to \(10\) times.
The function that performs the simulation does the following:
1. Calls the appropriate Netlist builder for the circuit, which in turn writes to the file specified the design of the neural network.
2. Includes the sub-circuits needed, i.e, the memristor, the operational amplifiers and the circuit just built.
3. Creates an instance of the neural network, giving it variables as the inputs.
4. Steps through the input variables with the values of the different input images.
An example of 4 would be the following:
_.step param Vx list 1 2 3 4 5 6 7 8 9 10_
_.param paramin000 table(Vx, 1, -0.1, 2, -0.1, 3, -0.1, 4, -0.1, 5, -0.1, 6, -0.1, 7, -0.1, 8, -0.1, 9, -0.1, 10, -0.1)_
In this case, since this parameter corresponds to a pixel in the corner of the image, it will always take the same value, \(-0.1\), which represents black.
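The directive-generation step can be sketched in Python. The helper below is illustrative (the function name and the flattened-image input format are assumptions, not our actual code), but it reproduces the `.step`/`.param` pattern shown above:

```python
def build_step_directives(images, n_steps=10):
    """Build LTspice .step/.param directives that sweep one simulation
    through `n_steps` input images (hypothetical helper, mirroring the
    directives shown above). `images` is a list of flattened pixel lists."""
    assert len(images) == n_steps
    lines = [".step param Vx list " + " ".join(str(i + 1) for i in range(n_steps))]
    n_pixels = len(images[0])
    for p in range(n_pixels):
        # table(Vx, 1, v1, 2, v2, ...) selects pixel p of image k at step k
        pairs = ", ".join(f"{k + 1}, {images[k][p]}" for k in range(n_steps))
        lines.append(f".param paramin{p:03d} table(Vx, {pairs})")
    return lines

# A corner pixel that is always black (-0.1) reproduces the example above:
demo = build_step_directives([[-0.1]] * 10)
```

Each `.param ... table(Vx, ...)` line maps the step index to the pixel value of the corresponding image, so a single simulation sweeps through all ten inputs.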
## 7 Results and Variability
We can now compare how our hardware implementations perform against their digital counterparts on inference precision. We include the accuracy results in table 5, where the column \(Ratio\) is the ratio of the simulation accuracy to the digital accuracy and the \(Type\) column follows these naming conventions:
* FC Simple: a fully connected network without hidden layers.
* FC Double: a fully connected network with a 20 neuron hidden layer. Using RELU.
* CV Simple: one convolutional layer followed by the output layer. Kernel size \(3\), stride \(2\), output channels \(3\). Using RELU.
* CV Double: two convolutional layers followed by the output layer. Kernel size \(3\), stride \(2\), output channels: first layer \(3\), second layer \(6\). Using RELU.
* FC Double Sigmoid: a fully connected network with a 20 neuron hidden layer. Using Sigmoid.
The results are very promising, always within a few percentage points of the digital network and, in some cases, even surpassing the original; an unexpected behavior for which we can offer a reasonable explanation.
The mistakes the simulated analog circuit makes are mostly due to the variability it was purposely built with. This variability will not affect the results for most inputs, i.e., the maximum value will correspond to the same output neuron even though its magnitude may vary. It will make mistakes where the digital network won't if the input makes the two output neurons with the highest activation values have very close values to each other.
On the other hand, when the digital network makes a mistake, the activation of an incorrect output neuron will have been the largest, but it is reasonable to assume that, if it is a high-quality neural network, the value of the output neuron with the correct answer was a close second. Now, given that our physical model has been built with an intrinsic variability by design, if we have a very close activation value on two outputs, the maximum of them might change with the variability. This would make the physical model sometimes able to make the right prediction where its digital counterpart will always fail.
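The effect can be mimicked with a toy model in Python, where a Gaussian perturbation stands in for the circuit variability (the noise model and its magnitude are illustrative choices, not derived from the memristor model):

```python
import random

def predict(logits, sigma=0.0, rng=None):
    """Argmax prediction with an optional Gaussian perturbation that
    stands in for the circuit's variability (illustrative only)."""
    rng = rng or random.Random(0)
    noisy = [v + rng.gauss(0.0, sigma) for v in logits]
    return max(range(len(noisy)), key=noisy.__getitem__)

# Digital network: neuron 5 narrowly beats the correct neuron 3.
logits = [0.0] * 10
logits[3], logits[5] = 0.98, 1.00
digital = predict(logits)  # deterministic: always picks neuron 5 -> wrong
rng = random.Random(42)
analog = [predict(logits, sigma=0.05, rng=rng) for _ in range(20)]
# The winner may flip between neurons 3 and 5 across the 20 "steps".
```

Because the perturbation is small relative to the gap between the two leading neurons and the rest, only the two near-tied outputs can exchange the maximum, which is exactly the behavior seen in the stepped simulation.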
To illustrate this phenomenon, we can select an example where the digital network failed and the analog simulation succeeded, and see how the variability is responsible for this effect. To do this, we step \(20\) times for the same input. The results are in figure 41.
The right answer was \(3\), but the digital network predicted \(5\). We can clearly see that the maximum between the neurons that represent \(3\) and \(5\) is constantly changing with the variability parameters.
Looking at the overall results, it is reasonable to assume that, in most cases, variability will lead to errors rather than the other way around.
Another interesting observation is that the performance of the physical circuits improves relatively as the input size increases. In table 5 we can see that, for every architecture with two input size variants, the larger one has a slightly higher ratio. We would require more data to be certain, but this seems to suggest that the wider the network, the smaller the negative effects of the variability.
The depth of the network does not appear to affect the comparison either way, but again, more tests would be required. This would be an interesting finding, since it would mean we can scale these implementations with no considerable penalty to performance.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
**Type** & **Input size** & **Digital Acc** & **Simulation Acc** & **Ratio** & **Ratio (\%)** \\ \hline FC Simple Relu & 12x12 & 90.2 & 89.3 & 0.990 & 99.00 \\ \hline FC Simple Relu & 16x16 & 91.5 & 91.7 & 1.002 & 100.22 \\ \hline FC Double Relu & 12x12 & 91.9 & 90.3 & 0.982 & 98.25 \\ \hline FC Double Relu & 16x16 & 93.6 & 92.8 & 0.991 & 99.10 \\ \hline CV Simple Relu & 12x12 & 91.3 & 89.2 & 0.976 & 97.60 \\ \hline CV Simple Relu & 16x16 & 94.7 & 93.0 & 0.982 & 98.20 \\ \hline CV Double Relu & 12x12 & 91.5 & 88.0 & 0.961 & 96.10 \\ \hline CV Double Relu & 16x16 & 94.7 & 93.2 & 0.984 & 98.40 \\ \hline FC Double Sigmoid & 12x12 & 86.9 & 90.0 & 1.035 & 103.50 \\ \hline \end{tabular}
\end{table}
Table 5: Accuracy results comparison between the digital network and the analog simulation
## 8 Conclusions and Future Work
We have successfully designed and simulated analog circuits based on memristors to perform inference on neural networks. For the simulations, we have used a state of the art physical model of the memristor that we were able to adjust to very reasonably match the behavior and empirical measurements of our real commercial device. We also managed to identify and overcome the restrictions our circuit design imposed over the parameters of the neural networks.
The results obtained are very satisfactory, as we have demonstrated that these analog circuits can reach levels of inference performance similar to their digital counterparts.
It appears that there is limited literature available on neural networks built with Knowm devices that take into account variability. It's also noted that while some researchers [21] use the same devices but with different models and learning rules, they tend to use Spike-Timing-Dependent Plasticity (STDP) networks rather than convolutional networks. This could be due to the inherent properties of Knowm devices that may be more compatible with the principles of STDP, which is a biological process. However, the application of Knowm devices in convolutional networks could be an interesting area for future research, given the potential of these devices in mimicking synaptic behavior.
The part of this work pertaining to memristor modeling has been published in the 14th Spanish Conference on Electron Devices (CDE 2023) [15].
Figure 41: Values of the output neurons. Colored lines represent the outputs of the analog simulation, dashed lines the outputs of the digital neural network.
## Acknowledgments
We want to express our most sincere gratitude to our TFG advisors, Guillermo and Francisco, for their invaluable help and commitment to this project. Throughout almost a year of work, their guidance and support have been essential to the development and success of our work.
# Spikingformer: Spike-driven Residual Learning for Transformer-based Spiking Neural Network

Chenlin Zhou, Liutao Yu, Zhaokun Zhou, Zhengyu Ma, Han Zhang, Huihui Zhou, Yonghong Tian

arXiv:2304.11954v3 (submitted 2023-04-24)
###### Abstract
Spiking neural networks (SNNs) offer a promising energy-efficient alternative to artificial neural networks, due to their event-driven spiking computation. However, state-of-the-art deep SNNs (including Spikformer and SEW ResNet) suffer from non-spike computations (integer-float multiplications) caused by the structure of their residual connection. These non-spike computations increase SNNs' power consumption and make them unsuitable for deployment on mainstream neuromorphic hardware, which only supports spike operations. In this paper, we propose a hardware-friendly spike-driven residual learning architecture for SNNs to avoid non-spike computations. Based on this residual design, we develop Spikingformer, a pure transformer-based spiking neural network. We evaluate Spikingformer on ImageNet, CIFAR10, CIFAR100, CIFAR10-DVS and DVS128 Gesture datasets, and demonstrate that Spikingformer outperforms the state-of-the-art in directly trained pure SNNs as a novel advanced backbone (75.85\(\%\) top-1 accuracy on ImageNet, + 1.04\(\%\) compared with Spikformer). Furthermore, our experiments verify that Spikingformer effectively avoids non-spike computations and significantly reduces energy consumption by 57.34\(\%\) compared with Spikformer on ImageNet. To our best knowledge, this is the first time that a pure event-driven transformer-based SNN has been developed. Codes will be available at Spikingformer.
## 1 Introduction
Being regarded as the third generation of neural network [1], the brain-inspired Spiking Neural Networks (SNNs) are potential competitors to Artificial Neural Networks (ANNs) due to their high biological plausibility, event-driven property and low power consumption on neuromorphic hardware [2]. In particular, the utilization of binary spike signals allows SNNs to adopt low-power accumulation (AC) instead of the traditional high-power multiply-accumulation (MAC), leading to significant energy efficiency gains and making SNNs increasingly popular [3].
As SNNs get deeper, their performance has been significantly improved [4; 5; 6; 7; 8; 9; 10]. ResNet with skipping connection has been extensively studied to extend the depth of SNNs [5; 8]. Recently, SEW ResNet [5], a representative convolution-based SNN, easily implements identity mapping and overcomes the vanishing/exploding gradient problems of Spiking ResNet [11]. SEW ResNet is the first deep SNN directly trained with more than 100 layers. Spikformer [8], a directly trained representative transformer-based SNN with residual connection, is proposed by leveraging both self-attention capability and biological properties of SNNs. It is the first successful exploration for applying flourishing transformer architecture into SNN design, and shows powerful performance.
However, both Spikformer and SEW ResNet face the challenge of non-spike computations (integer-float multiplications) caused by ADD residual connection. This not only limits their ability to fully leverage the benefits of event-driven processing in energy efficiency, but also makes it difficult to deploy and optimize their performance on neuromorphic hardware [12; 3].
Developing a pure SNN that addresses the challenge of non-spike computation in Spikformer and SEW ResNet while maintaining high performance is extremely critical. In this paper, inspired by the architecture design of Binary Neural Networks (BNNs) [13; 14; 15; 16], we propose a Spike-driven Residual Learning architecture for SNNs to avoid the non-spike computations. Based on this residual design, we develop a pure transformer-based spiking neural network, named Spikingformer. We evaluate the performance of Spikingformer on the static datasets ImageNet [17] and CIFAR [18] (including CIFAR10 and CIFAR100) and on neuromorphic datasets (including CIFAR10-DVS and DVS128 Gesture). The experimental results show that Spikingformer effectively avoids the integer-float multiplications present in Spikformer. In addition, as a novel advanced SNN backbone, Spikingformer outperforms Spikformer on all of the above datasets by a large margin (e.g. + 1.04\(\%\) on ImageNet, + 1.00\(\%\) on CIFAR100).
## 2 Related Work
### Convolution-based Spiking Neural Network
There are two mainstream methods to obtain deep convolution-based SNN models: ANN-to-SNN conversion and direct training through surrogate gradient.
**ANN-to-SNN Conversion.** In ANN-to-SNN conversion [19; 20; 21; 22; 23; 24], a pre-trained ANN is converted to a SNN by replacing the ReLU activation layers with spiking neurons and adding scaling operations like weight normalization and threshold balancing. This conversion process suffers from long converting time steps and constraints on the original ANN design.
**Direct Training through Surrogate Gradient.** In the field of direct training, SNNs are unfolded over simulation time steps and trained with backpropagation through time [25; 26]. Due to the non-differentiability of spiking neurons, surrogate gradient method is employed for backpropagation [27; 28]. SEW ResNet[5] is a representative convolution-based SNN model by direct training, and is the first to increase the number of layers in SNNs to be larger than 100. However, the ADD gate in residual connections of SEW ResNet produces non-spike computations of integer-float multiplications in deep convolution layers. [3] has identified the problem of non-spike computations in SEW ResNet and Spikformer, and attempts to solve it through adding an auxiliary accumulation pathway during training and removing it during inference. This strategy needs tedious extra operations and results in a significant performance degradation compared with the original models.
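As an illustration of the surrogate-gradient idea, the sketch below pairs a Heaviside forward pass with a sigmoid-shaped derivative for the backward pass; the particular surrogate shape and the sharpness \(\alpha\) are illustrative choices, not necessarily those used in [27; 28]:

```python
import math

def spike_forward(v, v_th=1.0):
    """Heaviside step: the neuron fires iff the membrane potential
    reaches the threshold. Its true derivative is zero almost everywhere."""
    return 1.0 if v >= v_th else 0.0

def spike_backward(v, v_th=1.0, alpha=4.0):
    """Surrogate gradient: during backpropagation, replace the Heaviside's
    derivative with the derivative of a sigmoid centered at the threshold."""
    s = 1.0 / (1.0 + math.exp(-alpha * (v - v_th)))
    return alpha * s * (1.0 - s)
```

The surrogate is largest exactly at the threshold and decays away from it, so gradients flow mainly through neurons whose membrane potential is near the firing point.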
### Transformer-based Spiking Neural Network.
Most existing SNNs borrow architectures from convolutional neural networks (CNNs), so their performance is limited by the performance of CNNs. The transformer architecture, originally designed for natural language processing [29], has achieved great success in many computer vision tasks, including image classification [30; 31], object detection [32; 33; 34] and semantic segmentation [35; 36]. The structure of the transformer makes it promising for a novel kind of SNN, with great potential to break through the bottleneck of SNNs' performance. So far, two main related works, Spikformer [8] and SpikFormer [37], have proposed spiking neural networks based on the transformer architecture. Although SpikFormer replaces the activation function used in the feedforward layers with a spiking activation function, many non-spike operations remain, including floating-point multiplication, division and exponential operations. Spikformer proposes a novel Spiking Self Attention (SSA) module by using spike-form Query, Key, and Value without softmax, and achieves state-of-the-art performance on many datasets. However, the structure of Spikformer with residual connection still contains non-spike computation. In our study, we adopt the SSA module from Spikformer, and modify the residual structure to be purely event-driven, which is hardware-friendly and energy efficient while improving performance.
## 3 Methods
### Drawbacks of Spikformer and SEW ResNet
At present, Spikformer [8] is the representative work combining deep SNNs with transformer architecture, while SEW ResNet [5] is the representative work of convolution-based deep SNNs. The residual learning plays an extremely important role in both Spikformer and SEW ResNet, but the ADD residual connections in Spikformer and SEW ResNet lead to non-spike computation (integer-float multiplications), which are not event-driven computation. As shown in Fig.1(a), the residual learning of Spikformer and SEW ResNet could be formulated as follows:
\[O_{l}=\mathrm{SN}_{l}(\mathrm{ConvBN}_{l}(O_{l-1}))+O_{l-1}=S_{l }+O_{l-1}, \tag{1}\] \[O_{l+1}=\mathrm{SN}_{l+1}(\mathrm{ConvBN}_{l+1}(O_{l}))+O_{l}=S _{l+1}+O_{l}. \tag{2}\]
where \(S_{l}\) denotes the residual mapping learned as \(S_{l}=\mathrm{SN}(\mathrm{ConvBN}(O_{l-1}))\). This residual design inevitably brings in non-spike data and thus MAC operations in the next layer / block. In particular, \(S_{l}\) and \(O_{l-1}\) are spike signals, while their sum \(O_{l}\) is a non-spike signal whose range is \(\{0,1,2\}\). Non-spike data destroys event-driven computation in the next convolution layer when computing \(S_{l+1}\) for \(O_{l+1}\). As the depth of the network increases, the range of non-spike data values transmitted to the deeper layers of the network also expands. In our implementation of Spikformer, the range of the non-spike data could increase to \(\{0,1,2,...,16\}\) when testing Spikformer-8-512 on ImageNet 2012. Obviously, the range of non-spike data is approximately proportional to the number of residual blocks in Spikformer and SEW ResNet.
In fact, integer-float multiplications are usually implemented in the same way as floating-point multiplication in hardware. In this case, the network will incur high energy consumption, approaching to the energy consumption of ANNs with the same structure, which is unacceptable for SNNs.
Figure 1: The residual learning in Spikformer, SEW ResNet and Spikingformer (ours). (a) shows the residual learning of Spikformer and SEW ResNet, which contains non-spike computation (integer-float multiplications) in the \(\mathrm{ConvBN}\) layer. (b) shows our proposed spike-driven residual learning in Spikingformer, which effectively avoids floating-point multiplications and integer-float multiplications, following the spike-driven principle. Note that Multistep LIF is the spiking Leaky Integrate-and-Fire (LIF) neuron model [5; 8] with time steps \(T>1\). Same as in Spikformer, \(T\) is an independent dimension for the spike neuron layer; in other layers, it is merged with the batch size. We use \(\mathrm{ConvBN}\) to represent a convolution layer and its subsequent BN layer in this work.
### Spike-driven Residual Learning in Spikingformer
Fig.1(b) shows our proposed spike-driven residual learning in Spikingformer. It could effectively avoid floating-point multiplications and integer-float multiplications, following the spike-driven principle. The spike-driven residual learning could be easily formulated as follows:
\[O_{l}=\mathrm{ConvBN}_{l}(\mathrm{SN}_{l}(O_{l-1}))+O_{l-1}=S_{l}+O_ {l-1}, \tag{3}\] \[O_{l+1}=\mathrm{ConvBN}_{l+1}(\mathrm{SN}_{l+1}(O_{l}))+O_{l}=S_ {l+1}+O_{l}. \tag{4}\]
We propose \(\mathrm{SN}\) - \(\mathrm{ConvBN}\) for residual learning in place of \(\mathrm{ConvBN}\) - \(\mathrm{SN}\) in Spikformer and SEW ResNet. In our structure, \(S_{l}+O_{l-1}\) is a floating-point addition, the same kind of operation as the addition inside the \(\mathrm{SN}\) layer. Floating-point addition is the most essential operation of an SNN. Obviously, the output \(O_{l}\) is also floating point and passes through a \(\mathrm{SN}\) layer before participating in the next \(\mathrm{ConvBN}\) computation. Therefore, a pure spike-form feature is generated after the processing of the \(\mathrm{SN}\) layer, and the computation in the \(\mathrm{ConvBN}\) layer becomes pure floating-point addition, following the spike-driven principle and vastly reducing energy consumption.
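A toy numerical sketch of the two residual orderings makes the difference in data ranges concrete; a stateless threshold neuron and a scalar weighting stand in for \(\mathrm{SN}\) and \(\mathrm{ConvBN}\), both simplifications rather than the actual layers:

```python
def sn(xs, v_th=1.0):
    """Simplified spiking neuron: binarize by threshold (no state kept)."""
    return [1.0 if x >= v_th else 0.0 for x in xs]

def convbn(xs, w):
    """Stand-in for ConvBN: a per-element weighted map (illustrative)."""
    return [w * x for x in xs]

x = [1.0, 0.0, 1.0, 1.0]  # spike input O_{l-1}

# Spikformer/SEW-style block: O_l = SN(ConvBN(O_{l-1})) + O_{l-1}
sew_out = [a + b for a, b in zip(sn(convbn(x, w=1.5)), x)]
# sew_out can contain 2s -> the next ConvBN sees non-spike integers.

# Spikingformer block: O_l = ConvBN(SN(O_{l-1})) + O_{l-1}
spk_out = [a + b for a, b in zip(convbn(sn(x), w=0.7), x)]
next_conv_input = sn(spk_out)  # always binary before the next ConvBN
```

In the first ordering the addition of two spike tensors yields values in \(\{0,1,2\}\) that feed the next convolution; in the second, the spiking neuron placed in front of \(\mathrm{ConvBN}\) guarantees a binary convolution input.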
### Spikingformer Architecture
We propose Spikingformer, which is a novel and pure transformer-based spiking neural network through integrating spike-driven residual blocks. In this section, the details of Spikingformer are discussed. The pipeline of Spikingformer is shown in Fig.2.
Our proposed Spikingformer contains a Spiking Tokenizer (ST), several Spiking Transformer Blocks and a Classification Head. Given a 2D image sequence \(I\in\mathbb{R}^{T\times C\times H\times W}\) (note that \(C\)=\(3\) in static datasets like ImageNet 2012 and \(C\)=\(2\) in neuromorphic datasets like DVS-Gesture), we use the Spiking Tokenizer block for downsampling and patch embedding, where the inputs are projected as spike-form patches \(X\in\mathbb{R}^{T\times N\times D}\). Obviously, the first layer of the Spiking Tokenizer also plays a spike-encoder role when taking static images as input. After the Spiking Tokenizer, the spiking patches \(X_{0}\) pass through the \(L\) Spiking Transformer Blocks. Similar to the standard ViT encoder block, a Spiking Transformer Block contains a Spiking Self Attention (SSA) [8] and a Spiking MLP block. Finally, a fully-connected layer (FC) is used as the Classification Head. Note that we use global average-pooling (GAP) before the fully-connected layer to reduce the parameters of the FC and improve the classification capability of Spikingformer.
\[X=\mathrm{ST}(I),\quad I\in\mathbb{R}^{T\times C\times H\times W}, X\in\mathbb{R}^{T\times N\times D} \tag{5}\] \[X^{\prime}_{l}=\mathrm{SSA}\left(X_{l-1}\right)+X_{l-1},X^{ \prime}_{l}\in\mathbb{R}^{T\times N\times D},l=1\ldots L\] (6) \[X_{l}=\mathrm{SMLP}\left(X^{\prime}_{l}\right)+X^{\prime}_{l},X_ {l}\in\mathbb{R}^{T\times N\times D},l=1\ldots L\] (7) \[Y=\mathrm{FC}\left(\mathrm{GAP}\left(X_{L}\right)\right)\quad \text{or}\quad Y^{\prime}=\mathrm{FC}\left(\mathrm{GAP}(\mathrm{SN}\left(X_{ L}\right))\right) \tag{8}\]
**Spiking Tokenizer.** As shown in Fig.2, Spiking Tokenizer mainly contains two functions: 1) convolutional spiking patch embedding, and 2) downsampling to project the feature map into a
Figure 2: The overview of Spikingformer, which consists of a Spiking Tokenizer, several Spiking Transformer Blocks and a Classification Head.
smaller fixed size. The spiking patch embedding is similar to the convolutional stream in Vision Transformer [38; 39], where the dimension of spike-form feature channels is gradually increased in each convolution layer and finally matches the embedding dimension of patches. In addition, the first layer of the Spiking Tokenizer is utilized as a spike encoder when using static images as input. As shown in Eq. 9 and Eq. 10, the convolution part of ConvBN represents the 2D convolution layer (stride 1, 3\(\times\)3 kernel size). \(\mathrm{MP}\) and \(\mathrm{SN}\) represent maxpooling (stride 2) and the multistep spiking neuron, respectively. Eq. 9 is used for Spiking Patch Embedding without Downsampling (SPE), and Eq. 10 for Spiking Patch Embedding with Downsampling (SPED). We could use multiple SPEs or SPEDs for specific classification tasks with different downsampling requirements. For example, we use 4 SPEDs for ImageNet 2012 classification with input size 224\(\times\)224 (16-times downsampling), and 2 SPEs and 2 SPEDs for CIFAR classification with input size 32\(\times\)32 (4-times downsampling). After the processing of the Spiking Tokenizer block, the input \(I\) is split into an image patch sequence \(X\in\mathbb{R}^{T\times N\times D}\).
\[I_{i} =\mathrm{ConvBN}(\mathrm{SN}(I)) \tag{9}\] \[I_{i} =\mathrm{ConvBN}(\mathrm{MP}(\mathrm{SN}(I))) \tag{10}\]
**Spiking Transformer Block.** A Spiking Transformer Block contains a Spiking Self Attention (SSA) mechanism and a Spiking MLP block. Our Spiking Self Attention part is similar to the SSA in Spikformer [8], which is a pure spike-form self attention. However, we make some modifications: 1) we change the spiking neuron layer position according to our proposed spike-driven residual mechanism, avoiding the multiplication of integers and floating-point weights; 2) we choose ConvBN in place of \(LinearBN\) (linear layer and batch normalization) in Spikformer. Therefore, the SSA in Spikingformer can be formulated as follows:
\[X^{\prime}=\mathrm{SN}(X), \tag{11}\] \[Q=\mathrm{SN}_{Q}(\mathrm{ConvBN}_{Q}(X^{\prime})),K=\mathrm{SN} _{K}(\mathrm{ConvBN}_{K}(X^{\prime})),V=\mathrm{SN}_{V}(\mathrm{ConvBN}_{V}(X^ {\prime}))\] (12) \[\mathrm{SSA}(Q,K,V)=\mathrm{ConvBN}(\mathrm{SN}(QK^{T}V*s)) \tag{13}\]
where \(Q,K,V\in\mathbb{R}^{T\times N\times D}\) are pure spike data (only containing 0 and 1). \(s\) is the scaling factor as in [8], controlling the magnitude of the matrix multiplication result. The Spiking MLP block consists of two SPEs, as formulated in Eq. 9. The Spiking Transformer Block is shown in Fig. 2, and it is the main component of Spikingformer.
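A minimal numerical sketch of the core product in Eq. 13 (one head, one time step, plain Python lists instead of \(T\times N\times D\) tensors) shows that with binary \(Q\), \(K\), \(V\) the two matrix products involve only additions:

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def ssa_core(Q, K, V, s=0.125):
    """Core of SSA for one head/time step: (Q K^T V) * s with no softmax.
    With binary spike matrices every product term is 0 or 1, so the
    matrix products reduce to pure accumulations (AC operations)."""
    Kt = list(map(list, zip(*K)))
    attn = matmul(Q, Kt)  # integer spike-correlation matrix
    return [[v * s for v in row] for row in matmul(attn, V)]

# Tiny binary example: N = 2 tokens, D = 3 features.
Q = [[1, 0, 1], [0, 1, 1]]
K = [[1, 1, 0], [0, 1, 1]]
V = [[1, 0, 0], [0, 1, 1]]
out = ssa_core(Q, K, V)
```

Only the final multiplication by the scaling factor \(s\) touches floating point, which matches the claim that the attention map needs no softmax or division.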
**Classification Head.** We use a fully-connected layer as the classifier behind the last Spiking Transformer Block. In detail, the classifier could be realized in four forms: \(\mathrm{AvgPooling\,\text{-}\,FC}\), \(\mathrm{SN\,\text{-}\,AvgPooling\,\text{-}\,FC}\), \(\mathrm{FC\,\text{-}\,AvgPooling}\) and \(\mathrm{SN\,\text{-}\,FC\,\text{-}\,AvgPooling}\), formulated as follows:
\[Y =\mathrm{FC}(\mathrm{AvgPooling}(X_{L})) \tag{14}\] \[Y =\mathrm{FC}(\mathrm{AvgPooling}(\mathrm{SN}(X_{L})))\] (15) \[Y =\mathrm{AvgPooling}(\mathrm{FC}(X_{L}))\] (16) \[Y =\mathrm{AvgPooling}(\mathrm{FC}(\mathrm{SN}(X_{L}))) \tag{17}\]
\(\mathrm{AvgPooling}\) after \(\mathrm{FC}\) (as in \(\mathrm{FC\,\text{-}\,AvgPooling}\) and \(\mathrm{SN\,\text{-}\,FC\,\text{-}\,AvgPooling}\)) could be considered as computing the average of neuron firing, a post-processing of the network, but in this way \(\mathrm{FC}\) usually requires numerous parameters. \(\mathrm{AvgPooling}\) before \(\mathrm{FC}\) (as in \(\mathrm{AvgPooling\,\text{-}\,FC}\) and \(\mathrm{SN\,\text{-}\,AvgPooling\,\text{-}\,FC}\)) effectively reduces parameters compared with the previous forms. Only \(\mathrm{SN\,\text{-}\,FC\,\text{-}\,AvgPooling}\) avoids floating-point multiplication operations, but it needs more \(\mathrm{FC}\) parameters than \(\mathrm{AvgPooling\,\text{-}\,FC}\) or \(\mathrm{SN\,\text{-}\,AvgPooling\,\text{-}\,FC}\). In addition, it also reduces the classification ability of the network. In this paper, we mainly adopt \(\mathrm{AvgPooling}\) ahead of \(\mathrm{FC}\), and choose \(\mathrm{AvgPooling\,\text{-}\,FC}\) as the classifier of Spikingformer by default. Some experimental analysis on the classification head is discussed in Sec. 5.1.
### Theoretical Synaptic Operation and Energy Consumption Calculation
The homogeneity of convolution allows the following BN and linear scaling transformation to be equivalently fused into the convolutional layer with an added bias at deployment [40; 41; 11; 3]. Therefore, when calculating the theoretical energy consumption, the consumption of BN layers can be ignored. We calculate the number of spike-based synaptic operations before calculating the theoretical energy consumption of Spikingformer.
\[SOP^{l}=fr\times T\times FLOPs^{l} \tag{18}\]
where \(l\) is a block/layer in Spikingformer, \(fr\) is the firing rate of the block/layer and \(T\) is the simulation time step of spike neuron. \(FLOPs^{l}\) refers to floating point operations of block/layer \(l\), which is the number of multiply-and-accumulate (MAC) operations. And \(SOP^{l}\) is the number of spike-based accumulate (AC) operations. We estimate the theoretical energy consumption of Spikingformer according to [42; 7; 12; 43; 44; 45; 46]. We assume that the MAC and AC operations are implemented on the 45nm hardware [12], where \(E_{MAC}=4.6pJ\) and \(E_{AC}=0.9pJ\). The theoretical energy consumption of Spikingformer can be calculated as follows:
\[E_{Spikingformer}^{rgb}=E_{AC}\times\left(\sum_{i=2}^{N}SOP_{\text{Conv}}^{i} +\sum_{j=1}^{M}SOP_{\text{SSA}}^{j}\right)+E_{MAC}\times\left(FLOP_{\text{Conv }}^{1}\right) \tag{19}\]
\[E_{Spikingformer}^{neuro}=E_{AC}\times\left(\sum_{i=1}^{N}SOP_{\text{Conv}}^{i} +\sum_{j=1}^{M}SOP_{\text{SSA}}^{j}\right) \tag{20}\]
Eq. 19 shows the energy consumption of Spikingformer for static datasets with RGB image input. \(FLOP_{Conv}^{1}\) is the FLOPs of the first layer, which encodes static RGB images into spike form. Then the SOPs of the \(N\) SNN Conv layers and \(M\) SSA layers are added together and multiplied by \(E_{AC}\). Eq. 20 shows the energy consumption of Spikingformer for neuromorphic datasets.
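Eqs. 18 and 19 translate directly into a short calculation; the layer counts, firing rates and FLOPs below are toy numbers for illustration, not measurements from Spikingformer:

```python
E_MAC, E_AC = 4.6e-12, 0.9e-12  # joules per operation on 45 nm hardware [12]

def sops(fr, T, flops):
    """Eq. 18: synaptic operations of a spiking layer
    (firing rate x time steps x MAC-equivalent FLOPs)."""
    return fr * T * flops

def energy_static(flops_first_layer, spiking_layer_sops):
    """Eq. 19: the first (encoding) layer runs MACs on the RGB input;
    all later spiking layers run spike-driven ACs."""
    return E_MAC * flops_first_layer + E_AC * sum(spiking_layer_sops)

# Toy model: 8 spiking layers at firing rate 0.2, T = 4, 1 GFLOP each,
# plus a 0.1 GFLOP encoding layer.
layers = [sops(fr=0.2, T=4, flops=1e9) for _ in range(8)]
e = energy_static(flops_first_layer=1e8, spiking_layer_sops=layers)
```

The dominant term is the AC sum; because \(E_{AC}\) is roughly five times smaller than \(E_{MAC}\) and the firing rate is well below 1, the spike-driven layers cost far less than the same layers would in an ANN.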
## 4 Experiments
In this section, we carry out experiments on static dataset ImageNet [17], static dataset CIFAR [18] (including CIFAR10 and CIFAR100) and neuromorphic datasets (including CIFAR10-DVS and DVS128 Gesture [47]) to evaluate the performance of Spikingformer. The models for conducting experiments are implemented based on Pytorch [48], SpikingJelly [49] and Timm [50].
### ImageNet Classification
**ImageNet** contains around \(1.3\) million \(1000\)-class images for training and \(50,000\) images for validation. The input size of our model on ImageNet is set to the default \(224\times 224\). The optimizer
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Methods & Architecture & Param & OPs & Time & Energy Con- & Top-1 Acc \\ & & (M) & (G) & Step & sumption(mJ) & \\ \hline TET[51] & Spiking-ResNet-34 & 21.79 & - & 6 & - & 64.79 \\ & SEW ResNet-34 & 21.79 & - & 4 & - & 68.00 \\ Spiking ResNet[4] & ResNet-34 & 21.79 & 65.28 & 350 & 59.30 & 71.61 \\ & ResNet-50 & 25.56 & 78.29 & 350 & 70.93 & 72.75 \\ STBP-tdBN[6] & Spiking-ResNet-34 & 21.79 & 6.50 & 6 & 6.39 & 63.72 \\ & SEW ResNet-34 & 21.79 & 3.88 & 4 & 4.04 & 67.04 \\ SEW ResNet-50 & SEW ResNet-50 & 25.56 & 4.83 & 4 & 4.89 & 67.78 \\ & SEW ResNet-101 & 44.55 & 9.30 & 4 & 8.91 & 68.76 \\ & SEW ResNet-152 & 60.19 & 13.72 & 4 & 12.89 & 69.26 \\ MS-ResNet[7] & ResNet-104 & 44.55+ & - & 5 & - & 74.21 \\ \hline ANN[8] & Transformer-8-512 & 29.68 & 8.33 & 1 & 38.34 & 80.80 \\ \hline Spikformer[8] & Spikformer-8-384 & 16.81 & 6.82 & 4 & 12.43 & 70.24 \\ & Spikformer-8-512 & 29.68 & 11.09 & 4 & 18.82 & 73.38 \\ & Spikformer-8-768 & 66.34 & 22.09 & 4 & 32.07 & 74.81 \\ \hline \multirow{3}{*}{**Spikingformer**} & Spikingformer-8-384 & 16.81 & 3.88 & 4 & **4.69(-62.27\(\%\))** & **72.45(+2.21)** \\ & Spikingformer-8-512 & 29.68 & 6.52 & 4 & **7.46(-60.36\(\%\))** & **74.79(+1.41)** \\ \cline{1-1} & Spikingformer-8-768 & 66.34 & 12.54 & 4 & **13.68(-57.34\(\%\))** & **75.85(+1.04)** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on ImageNet-1k classification. Power is calculated as the average theoretical energy consumption of an image inference on ImageNet, whose detail is shown in Eq19. Same as Spikformer, our Spikingformer-\(L\)-\(D\) represents a Spikingformer model with \(L\) spiking transformer blocks and \(D\) feature embedding dimensions. OPs refers to SOPs in SNN and FLOPs in ANN. Note that the default input resolution of inference is 224\(\times\)224.
is AdamW and the batch size is set to \(192\) or \(288\) during \(310\) training epochs with a cosine-decay learning rate whose initial value is \(0.0005\). The scaling factor is \(0.125\) when training on ImageNet and CIFAR. The four SPEDs in the Spiking Tokenizer split the image into \(196\)\(16\times 16\) patches.
Same as Spikformer, we try a variety of models with different embedding dimensions and numbers of transformer blocks for ImageNet, as shown in Tab. 1. We also show the comparison of synaptic operations (SOPs) [52] and theoretical energy consumption. On one hand, Spikingformer follows spike-driven computation rules, effectively avoiding floating-point multiplications and integer-float multiplications. The histogram of the input data for each transformer block of Spikingformer and Spikformer is shown in Appendix E of the Supplementary Material; our Spikingformer effectively avoids producing the non-spike data that leads to non-spike computations in Spikformer. On the other hand, Spikingformer-8-512 achieves 74.79\(\%\) top-1 classification accuracy on ImageNet using 4 time steps, significantly outperforming Spikformer-8-512 by 1.41\(\%\), the MS-ResNet model by 0.58\(\%\) and the SEW ResNet-152 model by 5.53\(\%\). Spikingformer-8-512 has a theoretical energy consumption of 7.46 mJ, a reduction of 60.36\(\%\) compared with the 18.82 mJ of Spikformer-8-512. Spikingformer-8-768 achieves 75.85\(\%\) top-1 classification accuracy on ImageNet using 4 time steps, significantly outperforming Spikformer-8-768 by 1.04\(\%\), the MS-ResNet model by 1.64\(\%\) and the SEW ResNet-152 model by 6.59\(\%\). Spikingformer-8-768 has a theoretical energy consumption of 13.68 mJ, a reduction of 57.34\(\%\) compared with the 32.07 mJ of Spikformer-8-768. In addition, we recalculate the energy consumption of Spikformer in Appendix G because the non-spike computation of Spikformer cannot be directly calculated by Sec. 3.4. The main reason Spikingformer significantly reduces energy consumption compared with Spikformer is that it effectively avoids integer-float multiplications; a secondary reason is that our models have a lower firing rate on ImageNet, as shown in Appendix E.
### CIFAR Classification
**CIFAR10/CIFAR100** provides 50,000 training and 10,000 test images with 32 \(\times\) 32 resolution. The difference is that CIFAR10 contains 10 categories for classification, while CIFAR100 contains
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Methods & Architecture & Param & Time & CIFAR10 & CIFAR100 \\ & & (M) & Step & Top-1 Acc & Top-1 Acc \\ \hline Hybrid training[53] & VGG-11 & 9.27 & 125 & 92.22 & 67.87 \\ Diet-SNN[54] & ResNet-20 & 0.27 & 10/5 & 92.54 & 64.07 \\ STBP[55] & CIFARNet & 17.54 & 12 & 89.83 & - \\ STBP NeuNorm[56] & CIFARNet & 17.54 & 12 & 90.53 & - \\ TSSL-BP[57] & CIFARNet & 17.54 & 5 & 91.41 & - \\ STBP-tdBN[6] & ResNet-19 & 12.63 & 4 & 92.92 & 70.86 \\ TET[51] & ResNet-19 & 12.63 & 4 & 94.44 & 74.47 \\ MS-ResNet[7] & ResNet-110 & - & - & 91.72 & 66.83 \\ & ResNet-482 & - & - & 91.90 & - \\ \hline ANN[8] & ResNet-19* & 12.63 & 1 & 94.97 & 75.35 \\ & Transformer-4-384 & 9.32 & 1 & 96.73 & 81.02 \\ \hline & Spikformer-4-256 & 4.15 & 4 & 93.94 & 75.96 \\ Spikformer[8] & Spikformer-2-384 & 5.76 & 4 & 94.80 & 76.95 \\ & Spikformer-4-384 & 9.32 & 4 & 95.19 & 77.86 \\ & Spikformer-4-384-400E & 9.32 & 4 & 95.51 & 78.21 \\ \hline & Spikingformer-4-256 & 4.15 & 4 & **94.77(+0.83)** & **77.43(+1.47)** \\
**Spikingformer** & Spikingformer-2-384 & 5.76 & 4 & **95.22(+0.42)** & **78.34(+1.39)** \\ & Spikingformer-4-384 & 9.32 & 4 & **95.61(+0.42)** & **79.09(+1.23)** \\ & Spikingformer-4-384-400E & 9.32 & 4 & **95.81(+0.30)** & **79.21(+1.00)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on CIFAR10/100 classification. Spikingformer improves network performance in all tasks, compared with Spikformer. Note that Spikingformer-4-384-400E means Spikingformer contains four spiking transformer blocks and 384 feature embedding dimensions, trained with 400 epochs. Other models of Spikingformer are trained with 310 epochs by default, being consistent with Spikformer.
100 categories, providing a stronger test of a classification algorithm's discriminative ability. The batch size of Spikingformer is set to 64. We choose two SPEs and two SPEDs in the Spiking Tokenizer block to split the input image into 64 \(4\times 4\) patches.
The experimental results are shown in Tab. 2. From the results, we find that Spikingformer models surpass all Spikformer models with the same number of parameters. On CIFAR10, our Spikingformer-4-384-400E achieves 95.81\(\%\) classification accuracy, significantly outperforming Spikformer-4-384-400E by 0.30\(\%\) and MS-ResNet-482 by 3.91\(\%\). On CIFAR100, Spikingformer-4-384-400E achieves 79.21\(\%\) classification accuracy, significantly outperforming Spikformer-4-384-400E by 1.00\(\%\) and MS-ResNet-110 by 12.38\(\%\). To the best of our knowledge, Spikingformer achieves the state of the art among directly trained pure spike-driven SNN models on both CIFAR10 and CIFAR100. The ANN-Transformer model is 1.12\(\%\) and 1.93\(\%\) higher than Spikingformer-4-384 on CIFAR10 and CIFAR100, respectively. In this experiment, we also find that on the CIFAR datasets the performance of Spikingformer is positively correlated, within a certain range, with the number of blocks, the embedding dimension and the number of training epochs.
### DVS Classification
**CIFAR10-DVS Classification.** CIFAR10-DVS is a neuromorphic dataset converted from the static image dataset by shifting image samples so that they are captured by a DVS camera; it provides 9,000 training samples and 1,000 test samples. We compare our method with SOTA methods on CIFAR10-DVS. In detail, we adopt four SPEDs in the Spiking Tokenizer block due to the 128\(\times\)128 image size of CIFAR10-DVS, and adopt 2 spiking transformer blocks with a 256 patch embedding dimension. The number of time steps of the spiking neuron is 10 or 16. The number of training epochs is 106, the same as for Spikformer. The learning rate is initialized to 0.1 and decayed with a cosine schedule.
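The cosine decay mentioned above can be sketched as follows, assuming the rate decays from its initial value to zero over the full run (any warmup phase used in practice is omitted here):

```python
import math

# Minimal cosine learning-rate decay: lr(step) follows half a cosine period,
# starting at lr0 and reaching 0 at the final step.
def cosine_lr(step, total_steps, lr0=0.1):
    return 0.5 * lr0 * (1.0 + math.cos(math.pi * step / total_steps))
```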
The results on CIFAR10-DVS are shown in Tab. 3. Spikingformer achieves 81.3\(\%\) top-1 accuracy with 16 time steps and 79.9\(\%\) accuracy with 10 time steps, significantly outperforming Spikformer by 0.7\(\%\) and 1.3\(\%\), respectively. To the best of our knowledge, Spikingformer achieves the state of the art among directly trained pure spike-driven SNN models on CIFAR10-DVS.
**DVS128 Gesture Classification.** DVS128 Gesture is a gesture recognition dataset that contains 11 hand gesture categories from 29 individuals under 3 illumination conditions. The image size of DVS128 Gesture is 128\(\times\)128. The main hyperparameter settings for DVS128 Gesture classification are the same as for CIFAR10-DVS classification; the only difference is that the number of training epochs is set to 200, the same as for Spikformer.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{CIFAR10-DVS} & \multicolumn{2}{c}{DVS128 Gesture} \\ \cline{2-5} & Time Step & Acc & Time Step & Acc \\ \hline LIAF-Net [58]\({}^{\text{TNNLS-2021}}\) & 10 & 70.4 & 60 & 97.6 \\ TA-SNN [59]\({}^{\text{ICCV-2021}}\) & 10 & 72.0 & 60 & 98.6 \\ Rollout [60]\({}^{\text{Front. Neurosci-2020}}\) & 48 & 66.8 & 240 & 97.2 \\ DECOLLE [61]\({}^{\text{Front. Neurosci-2020}}\) & - & - & 500 & 95.5 \\ tdBN [6]\({}^{\text{AAAI-2021}}\) & 10 & 67.8 & 40 & 96.9 \\ PLIF [62]\({}^{\text{ICCV-2021}}\) & 20 & 74.8 & 20 & 97.6 \\ SEW ResNet [5]\({}^{\text{NeurIPS-2021}}\) & 16 & 74.4 & 16 & 97.9 \\ Dspike [63]\({}^{\text{NeurIPS-2021}}\) & 10 & 75.4 & - & - \\ SALT [64]\({}^{\text{Neural Netw-2021}}\) & 20 & 67.1 & - & - \\ DSR [23]\({}^{\text{CVPR-2022}}\) & 10 & 77.3 & - & - \\ MS-ResNet [7] & - & 75.6 & - & - \\ \hline Spikformer[8] (Our Implement) & 10 & 78.6 & 10 & 95.8 \\ & 16 & 80.6 & 16 & 97.9 \\ \hline
**Spikingformer (Ours)** & 10 & **79.9(+1.3)** & 10 & **96.2(+0.4)** \\ & 16 & **81.3(+0.7)** & 16 & **98.3(+0.4)** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results on neuromorphic datasets, CIFAR10-DVS and DVS128 Gesture. Note that the result of Spikformer is based on our implementation according to its open-source code.
We compare our method with SOTA methods on DVS128 Gesture in Tab. 3. Spikingformer achieves 98.3\(\%\) top-1 accuracy with 16 time steps and 96.2\(\%\) accuracy with 10 time steps, outperforming Spikformer by 0.4\(\%\) in both cases.
## 5 Discussion
### Further Analysis for Last Layer
We analyze the last layer of Spikingformer on the CIFAR10 and CIFAR100 datasets to study its impact, with the results shown in Tab. 4. Although it constitutes only a very small component of Spikingformer, the last layer has a significant impact on model performance. Our experiments show that Spikingformer-\(L\)-\(D^{*}\), whose classifier receives spike signals, performs worse than Spikingformer-\(L\)-\(D\) on the whole. This can be attributed to the fact that the \(\mathrm{AvgPooling}\) layer in \(\mathrm{AvgPooling}\) - FC processes floating-point numbers, which carry stronger classification ability than the spike signals in SN - AvgPooling - FC. However, Spikingformer-\(L\)-\(D^{*}\) still outperforms Spikformer-\(L\)-\(D\), which uses an SN - AvgPooling - FC layer as the classifier. On one hand, Spikingformer effectively avoids the integer-float multiplications of Spikformer in residual learning; on the other hand, Spikingformer has better spike feature extraction and classification ability than Spikformer. These results further verify the effectiveness of Spikingformer as a backbone.
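The two classifier heads compared in Tab. 4 can be sketched as follows; the binarization below is a stand-in for the spiking neuron (SN), and the array layout is an illustrative assumption:

```python
import numpy as np

# Contrast of the two heads: AvgPooling - FC (default) vs SN - AvgPooling - FC (*).
def head(x, W, spike_before_pool):
    """x: (batch, dim, tokens) membrane potentials from the last block;
    W: (dim, classes) weights of the final FC layer."""
    if spike_before_pool:            # SN - AvgPooling - FC variant (*)
        x = (x > 0).astype(float)    # spikes discard magnitude information
    pooled = x.mean(axis=2)          # AvgPooling over tokens
    return pooled @ W                # FC classifier
```

Pooling floating-point activations preserves magnitude information that binarized spikes discard, which is consistent with AvgPooling - FC outperforming SN - AvgPooling - FC in Tab. 4.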
### More Discussion for \(Activation\)-\(Conv\)-\(BatchNorm\) Paradigm
\(Activation\)-\(Conv\)-\(BatchNorm\) is a fundamental building block in Binary Neural Networks (BNN) [13; 14; 15; 16]. MS-ResNet [7] inherited \(Activation\)-\(Conv\)-\(BatchNorm\) in directly trained convolution-based SNNs, successfully extending the depth to 482 layers on CIFAR10 without experiencing the degradation problem. MS-ResNet mainly verifies the ability of \(Activation\)-\(Conv\)-\(BatchNorm\) to overcome gradient explosion/vanishing and performance degradation in convolution-based SNN models. In contrast, to the best of our knowledge, Spikingformer is the first transformer-based SNN model that uses the \(Activation\)-\(Conv\)-\(BatchNorm\) paradigm to achieve pure spike-driven computation. This work further validates the effectiveness and fundamentality of \(Activation\)-\(Conv\)-\(BatchNorm\) as a basic module in SNN design. Specifically, Spikingformer achieves state-of-the-art performance on five datasets and outperforms MS-ResNet by a significant margin: on ImageNet, CIFAR10, CIFAR100, CIFAR10-DVS and DVS128 Gesture, MS-ResNet (our Spikingformer) achieves 74.21\(\%\) (75.85\(\%\)), 91.90\(\%\) (95.81\(\%\)), 66.83\(\%\) (79.21\(\%\)), 75.6\(\%\) (81.3\(\%\)), and - (98.3\(\%\)), respectively.
## 6 Conclusion
In this paper, we propose spike-driven residual learning for SNNs to avoid the non-spike computations present in Spikformer and SEW ResNet. Based on this residual design, we develop a purely spike-driven transformer-based spiking neural network, named Spikingformer. We evaluate Spikingformer on the ImageNet, CIFAR10, CIFAR100, CIFAR10-DVS and DVS128 Gesture datasets. The experimental results verify that Spikingformer effectively avoids the integer-float multiplications in Spikformer. In addition, Spikingformer, as a newly advanced SNN backbone, outperforms Spikformer on all of the above datasets by a large margin (e.g., +1.04\(\%\) on ImageNet, +1.00\(\%\) on CIFAR100). To the best of our knowledge, Spikingformer is the first pure spike-driven transformer-based SNN model, and it achieves state-of-the-art performance on the above datasets among directly trained pure SNN models.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Dataset & Models & Time & Top-1 \\ & & Step & Acc \\ \hline \multirow{3}{*}{CIFAR10} & Spikingformer-4-384-400E & 4 & 95.81 \\ & Spikingformer-4-384-400E\({}^{*}\) & 4 & 95.58 \\ & Spikformer-4-384-400E & 4 & 95.51 \\ \hline \multirow{3}{*}{CIFAR100} & Spikingformer-4-384-400E & 4 & 79.21 \\ & Spikingformer-4-384-400E\({}^{*}\) & 4 & 78.39 \\ \cline{1-1} & Spikformer-4-384-400E & 4 & 78.21 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Discussion results on the last layer of Spikingformer. Spikingformer-\(L\)-\(D^{*}\) means Spikingformer with the last layer of \(\mathrm{SN}\) - \(\mathrm{AvgPooling}\) - FC. Spikingformer-\(L\)-\(D\) means Spikingformer with the last layer of \(\mathrm{AvgPooling}\) - FC by default.
## Acknowledgment
This work is supported by grants from the National Natural Science Foundation of China (62236009 and 62206141).
|
2305.18495 | Hardware-aware Training Techniques for Improving Robustness of Ex-Situ
Neural Network Transfer onto Passive TiO2 ReRAM Crossbars | Passive resistive random access memory (ReRAM) crossbar arrays, a promising
emerging technology used for analog matrix-vector multiplications, are far
superior to their active (1T1R) counterparts in terms of the integration
density. However, current transfers of neural network weights into the
conductance state of the memory devices in the crossbar architecture are
accompanied by significant losses in precision due to hardware variabilities
such as sneak path currents, biasing scheme effects and conductance tuning
imprecision. In this work, training approaches that adapt techniques such as
dropout, the reparametrization trick and regularization to TiO2 crossbar
variabilities are proposed in order to generate models that are better adapted
to their hardware transfers. The viability of this approach is demonstrated by
comparing the outputs and precision of the proposed hardware-aware network with
those of a regular fully connected network over a few thousand weight transfers
using the half moons dataset in a simulation based on experimental data. For
the neural network trained using the proposed hardware-aware method, 79.5% of
the test set's data points can be classified with an accuracy of 95% or higher,
while only 18.5% of the test set's data points can be classified with this
accuracy by the regularly trained neural network. | Philippe Drolet, Raphaël Dawant, Victor Yon, Pierre-Antoine Mouny, Matthieu Valdenaire, Javier Arias Zapata, Pierre Gliech, Sean U. N. Wood, Serge Ecoffey, Fabien Alibart, Yann Beilliard, Dominique Drouin | 2023-05-29T13:55:02Z | http://arxiv.org/abs/2305.18495v1 | Hardware-aware Training Techniques for Improving Robustness of Ex-Situ Neural Network Transfer onto Passive TiO\({}_{\mathbf{2}}\) ReRAM Crossbars
###### Abstract
Passive resistive random access memory (ReRAM) crossbar arrays, a promising emerging technology used for analog matrix-vector multiplications, are far superior to their active (1T1R) counterparts in terms of the integration density. However, current transfers of neural network weights into the conductance state of the memory devices in the crossbar architecture are accompanied by significant losses in precision due to hardware variabilities such as sneak path currents, biasing scheme effects and conductance tuning imprecision. In this work, training approaches that adapt techniques such as dropout, the reparametrization trick and regularization to TiO\({}_{2}\) crossbar variabilities are proposed in order to generate models that are better adapted to their hardware transfers. The viability of this approach is demonstrated by comparing the outputs and precision of the proposed hardware-aware network with those of a regular fully connected network over a few thousand weight transfers using the half moons dataset in a simulation based on experimental data. For the neural network trained using the proposed hardware-aware method, 79.5% of the test set's data points can be classified with an accuracy of 95% or higher, while only 18.5% of the test set's data points can be classified with this accuracy by the regularly trained neural network.
+
Footnote †: Corresponding author: [email protected]
## I Introduction
As neural networks' complex problem-solving capabilities increase, so do their energetic and computational demands. These demands have grown so much that the use of graphics processing units (GPUs) and data centers is proving to be essential for practical machine learning. Meanwhile, a plethora of applications at the edge would greatly benefit from the use of neural networks but cannot always use these deep networks due to their high energy demands [1]. As the main limitations of neural network computations stem from the von Neumann bottleneck between the memory and processor [2], in-memory computing with analog, non-volatile emerging resistive memories is among the most promising avenues of innovation that can be used to overcome this problem [3; 4; 5]. Non-volatile memories exploit metal oxide [6; 7; 8; 9], phase-change [10] or ferroelectric materials [11], among other things, to perform computations directly in memory, decreasing the need to fetch data to perform computations. In a crossbar configuration, in-memory computing relies on the fundamental Ohm's and Kirchhoff's laws, which enable an energy-efficient version of the vector matrix multiplications (VMMs) at the core of neural networks. These memory devices are versatile and have proven their potential for many different types of neural networks, such as long short-term memory (LSTM) networks [12; 13], generative adversarial networks (GANs) [14] and Bayesian NNs [15; 16; 17]. Neural networks on crossbars always require either in-situ or ex-situ learning techniques, both of which have been demonstrated on \(TiO_{2}\) crossbars [18]. The former technique trains the network online on the devices by implementing backpropagation through tuning pulses. The latter performs the training offline; thereafter, the network is transferred to the crossbars by converting the obtained trained weights into conductances.
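The crossbar VMM described above reduces to a few lines; this sketch applies Ohm's law per device and Kirchhoff's current law per column, and ignores all of the non-idealities discussed later:

```python
# Analog crossbar VMM: voltages V_i on the word lines, conductances G_ij,
# column current I_j = sum_i G_ij * V_i.
def crossbar_vmm(G, V):
    """G: row-major conductance matrix (rows = word lines), V: input voltages."""
    n_rows, n_cols = len(G), len(G[0])
    return [sum(G[i][j] * V[i] for i in range(n_rows)) for j in range(n_cols)]
```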
The more commonly used ex-situ technique is much simpler but hardly considers hardware variabilities during training. This renders it less robust to all potential hardware non-idealities. The crossbar architectures presented in the literature are often based on the 1T1R integration scheme [19; 20], where a transistor (1T) is cascaded with each memory (1R) to allow independent accesses to desired memories without disturbing adjacent devices. This improved control provides high programming and reading accuracies but leads to a lower integration density, higher wiring complexity and increased fabrication costs. On the other hand, passive (0T1R) crossbars exhibit excellent scalability and lower fabrication costs. In [21], a factor of 20 between the sizes of individual 1T1R (4 \(\mu\)m) and 0T1R (200 nm) memristors was reported. An average relative conductance import accuracy of 1% on a fully passive \(TiO_{2}\) crossbar with a size of 64\(\times\)64 was reported [8]. This crossbar had \(\sim 99\%\) of memristors functioning. As such, passive \(TiO_{2}\) crossbars hold much promise for the future of in-memory computing with ReRAM devices.
However, these passive crossbars present multiple inherent sources of variability [22], which currently hinder their use at large scale. The conductance tuning imprecision that is observed when a device is programmed to
a certain conductive level, unsatisfactory device yields, asymmetrical switching dynamics, the conductance drift due to relaxation effects over time and sneak path currents are a few examples of the issues afflicting this technology. These non-idealities directly affect the results of analog matrix-vector multiplications (VMM). Cumulatively, they often lead to incorrect classifications. While some of these non-idealities can be harnessed to implement efficient stochastic neural network models in hardware [7; 12; 17; 23], they remain very detrimental for standard in-memory computing solutions and make it difficult to scale up this technology. A potential avenue that can be used to mitigate the sneak path current problems is the use of tiled crossbars; the neural network weights can be separated onto sub-crossbars called tiles. Tiles are compatible with control transistors and circuitry. These 8\(\times\)8 crossbar tiles offer an intermediate solution that lies between the improved controllability of 1T1R crossbars and the scalability of 0T1R crossbars.
In this work, we report a novel hardware-aware ex-situ training approach that improves the robustness of neural network models against the non-idealities of \(TiO_{2}\)-based memory crossbars. The main contributions of this study are the introduction of new, fully data-driven and computationally simple characterization and variability modeling methods that are used jointly with a reparametrization trick during training to mitigate precision losses when weights are converted into their equivalent ReRAM conductances on a passive crossbar. This procedure is often referred to as hardware-aware training. The simple variability models do not require computationally prohibitive mathematical and physical models, and they lead to good performance without requiring multiple rounds of tuning; these models require only reproducible characterizations that are all easy to automate. Figure 1 shows
Fig. 1: **Schematic illustration of the main concept investigated in this work.****a.** (Right) A scanning electron microscopy (SEM) picture of our 8\(\times\)8 \(TiO_{2}\) crossbar architecture. The top electrodes (word lines) are visible, while the bottom electrodes (bit lines) fabricated by a damascene process have been represented in blue. The materials stack of the resistive memory devices is displayed in the inset. (Left) The different sources of variability considered during training are shown. **b.** Depiction of the training process of the neural network. The left side of the sub-figure represents the reparametrization trick in relation to the weight values. The right side is a schematic representation of the neural network investigated in the scope of a half moons toy problem. A dotted line indicates a conversion between ranges (weight range to conductance range or vice versa). The weights are initially converted to conductances with differential weights (\(G_{+}-G_{-}\)). A noise parameter \(\epsilon\) is then added to both the positive and negative devices through a reparametrization trick. A percentage of neurons, depicted in the schematic by a half-dashed \(X\) in the hidden layer for weight \(W_{82}\), are, similarly to dropout, deactivated by conversion to a failure state in the low resistance state (LRS) or high resistance state (HRS) (instead of being set to 0 as they would be in the usual dropout method). The gradient on these neurons is then zeroed out during backpropagation. **c.** Physical implementation on a passive crossbar array. The weights are represented by two devices to be able to represent signed values.
the overall hardware-aware procedure. The hardware-aware network provides accurate classification results for the half moons dataset for a much greater percentage of the simulated transfers compared to a regular neural network. This greater accuracy across transfers proves that the hardware-aware procedure is robust to significant biasing-scheme-effect conductance changes (up to 60 \(\mu\)S), conductance tuning imprecision and neural network weights impacted by random substitutions to account for stuck devices. The devices studied in this work are 0T1R devices, and they can be seen in Figure 1a and in Supplementary Figure 7.
Other works, such as [16, 24, 25], employ a similar approach. In [16], an impressive performance was reported but the authors chose not to consider device failures and only consider 1T1R devices, therefore neglecting biasing scheme effects. Meanwhile, [24, 25] deal with 0T1R \(TiO_{2}\) passive crossbars and implement many different variability sources such as temperature, biasing scheme effect and non-linearity in their hardware simulations and training to great effect. However, the relationship between the variability and the crossbar position of the devices is not considered in these works. The focus is instead placed on a conductance tuning technique that mitigates biasing scheme disturbances. Also, the fact that our method depends solely on empirical data means that it is also applicable to different memristive technologies with slight modifications, rendering it highly adaptable. The biasing scheme considered in these other works is the V/2 scheme; the V/3 scheme is considered in this work and the variability modeling approaches are therefore different.
In this paper, we begin by explaining the different sources of variability that are taken into account by the hardware-aware network, along with the characterization process that led to the creation of each of these sources. We continue with a software demonstration that compares the performances of weights trained considering these variabilities and weights trained naively over 10,000 simulated ex-situ weight transfers for a simple binary classification problem. The hardware-aware network significantly outperforms the regular neural network.
## II Results and Discussion
As previously mentioned, passive crossbars are afflicted by many different and often indirectly correlated non-idealities. The device-to-device and crossbar-to-crossbar variabilities are considerable and difficult to predict. Depending on a specific crossbar's properties, a transferred neural network may perform well or very poorly. The proposed approach aims at using neural networks' learning abilities to learn how to classify data points in spite of the effects that non-idealities will have on analog VMMs. These non-idealities were estimated through the extensive electrical characterization of TiO\({}_{2}\)-based resistive memories fabricated in 8\(\times\)8 crossbar and crosspoint configurations. The considered non-idealities were those deemed to have the greatest effect on VMMs, namely the conductance tuning imprecision, biasing scheme effects and high resistance state (HRS) and low resistance state (LRS) stuck devices.
During training, the neural network weights are converted from their software range to the predetermined mean achievable conductance range \(G_{R}\) for every batch of data. The weights are then shifted according to randomized variability samples using a reparametrization trick [26] integrated into a modified version of the PyTorch library [27]. The use of the reparametrization trick allows the Monte Carlo estimate of the expectation (every time we sample new weights) to be differentiable with respect to the weight parameters and enables backpropagation, as depicted in Figure 1b. The process is explained by the flowchart shown in Figure 1 and the mathematical details are given in Supplementary Note 1. All considered non-idealities are numerically added into a matrix \(\phi_{g}\) created from the conversion of the network's parameters \(\phi\) to the conductance range \(G_{R}\) and share the same dimensions. This matrix is converted back into the weight range after variabilities are added. The now-converted matrix is subtracted from the original parameter matrix \(\phi\) to obtain the \(\epsilon(\phi_{g})\) matrix used in the reparametrization trick. \(G_{R}\) was determined to be from 100 \(\mu\)S to 400 \(\mu\)S by electrical characterizations made on our 8\(\times\)8 crossbars [6]. The randomized variability samples depend on the conductance tuning imprecision and the biasing scheme effects recorded in a database. Percentages of neural network weights \(x\) and \(y\) corresponding to LRS and HRS stuck devices are also replaced by equivalent high or low conductance values. This effectively simulates the result of an ex-situ transfer on a crossbar, as depicted in Figure 1c, with randomized programming failures for every batch of training data. It also ensures that the network considers randomized potential sources of hardware-related errors. 
The trained network will converge towards the solution that is most robust to the average of all potential crossbars, considering our TiO\({}_{2}\) memristive technology non-idealities.
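The per-batch variability injection described above can be sketched as follows. This is a simplified NumPy version: the linear weight-to-conductance map and the noise constants (1% relative tuning imprecision, 2% stuck devices) are illustrative placeholders rather than the fitted values, and in a PyTorch version \(\epsilon\) would be treated as a constant (detached) so that gradients flow only through the original weights, per the reparametrization trick.

```python
import numpy as np

G_MIN, G_MAX = 100e-6, 400e-6                      # conductance range G_R (S)

def w_to_g(w, w_max):
    return G_MIN + np.abs(w) / w_max * (G_MAX - G_MIN)

def g_to_w(g, sign, w_max):
    return sign * (g - G_MIN) / (G_MAX - G_MIN) * w_max

def hardware_aware_weights(w, rng, stuck_frac=0.02):
    """Simulate one ex-situ transfer: map weights to conductances, perturb
    them, and map back, yielding reparametrized weights w + eps."""
    w_max = np.max(np.abs(w))
    sign, g = np.sign(w), w_to_g(w, w_max)
    g = rng.normal(g, 0.01 * g)                    # tuning imprecision
    stuck = rng.random(w.shape) < stuck_frac       # HRS/LRS programming failures
    g = np.where(stuck, rng.choice([G_MIN, G_MAX], size=w.shape), g)
    eps = g_to_w(g, sign, w_max) - w               # sampled shift eps(phi_g)
    return w + eps                                 # reparametrized weights
```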
The tests performed to gather the data in the non-ideality database used during training for tuning imprecision, biasing scheme effects and finally HRS or LRS stuck devices are described in the following subsections. In the case of the weight disturbances induced by the biasing scheme effect, fitting normal distributions to the data was unfortunately impossible. The Shapiro-Wilk tests for normality of the distributions fitted to the data usually returned an alpha value of less than 0.05, causing the null hypothesis to be rejected [28]. A sample from this distribution does not approximate a recorded disturbance instance well. The approach taken instead was to sample a recorded conductance change from a portion of the database consisting of only
raw data. This is different from sampling from a fitted distribution, which is done for the conductance tuning imprecision. In all of the tests, the conductance tuning procedure followed the pulse tuning technique presented in [29] with a 1% tuning tolerance.
### Device conductance tuning imprecision
As shown in Figure 1a, the device-level conductance tuning imprecision is considered in the training process. The studies undertaken to characterize it were realized by dividing the achievable resistive ranges of all 29 studied devices into eight different equidistant levels and repeatedly performing a tuning/untuning procedure. Figure 2 illustrates how the statistical database was established, along with an example. Figure 2a shows the experimental data behind one instance of the procedure for a target conductance of 125 \(\mu S\). The distribution of read values after the conductance reached and stabilized around the target conductance is shown in Figure 2b. This tuning/untuning procedure was repeated 40 times per level and per device. In Figure 2c and d, the data points corresponding to the data distribution in Figure 2b are depicted in red and are evaluated as follows:
\[\begin{split}&(x_{c},y_{c})=(g_{target},\sigma_{program})=(125\ \mu S,0.57\%)\,,\\ &(x_{d},y_{d})=\left(g_{target},\frac{\mu_{program}-g_{target}}{g_{target}}\times 100\right)\\ &=(125\ \mu S,-0.424\%)\,.\end{split} \tag{1}\]
Figure 2c shows the standard deviation of the programming precision as a function of the conductance state of the devices tested, where the values are expressed as a percentage of the desired conductance value, while Figure 2d shows the offset between the mean of the read conductance values after the tuning of the devices and their target conductance values. Since the devices are always tuned from a low conductive level to
Figure 2: **Experimental measurements of the conductance tuning variability of the TiO2-based resistive memories.****a.** The conductance is tuned from a low conductance state to the target conductance state using SET (in blue) and RESET (in green) pulses forty times. The last read value is then kept in memory so that **b.** a normal distribution may be fitted to the programmed conductances. A conductance value of 125 \(\mu\)S is targeted, starting from 100 \(\mu\)S in this example. A simple Shapiro-Wilk test is performed to see if the null hypothesis must be rejected. A mean and a standard deviation are extracted from this distribution. These two steps were repeated eight times across the conductive range of each individual device. A total of 29 devices were tested with this protocol. **c.** Depiction of the relationship between all of the standard deviations of the fitted normal distributions presented in panel **b** and the conductive level of the device. The average standard deviation increases as the conductance decreases, as predicted. The fitted linear relationship between the conductance and standard deviation is expressed on the plot. **d.** Depiction of the relationship of the offset with respect to the mean of the normal distributions, similar to the one shown in panel **b**, and their targeted (ideal) conductance with respect to the conductive level of the device. There is no clear correlation between the offset from the target value and the conductive level of the device. A normal distribution is instead fitted to the offsets.
a high conductive level, the mean offset of the achieved conductance from the ideal target conductance is always negative. Therefore, the transfer on a crossbar should be performed after all devices are RESET in order to match this replicable offset, meaning that all devices should start from the off state.
During training, when the network weights are transformed into software conductances \(g_{s+}\) and \(g_{s-}\) for signed weights or \(g_{s}\) for unsigned weights, a weight is sampled from a normal distribution centered around these values with a standard deviation that depends on the conductive level of the transformed weight. Figure 2c shows the function \(f(g)\) defining this standard deviation relationship. This allows the network to take into account the correlation that is often observed between the conductive level and variability [7]. An offset is then sampled from a distribution \(N\left(\mu_{off},\sigma_{off}\right)\), which is depicted in Figure 2d, and added to the sampled conductance value. Because the test is performed by reading ten times before considering a memory to be programmed, the read variability on devices is also taken into account. An experiment was undertaken to isolate the read variability of the characterization equipment with respect to the variability of the measured values; it was found that the reading variability stemming from the instrument is one order of magnitude lower than that of the experimental data (Supplementary Figure 11).
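The sampling step in this paragraph can be sketched as follows; the linear fit \(f(g)\) and the offset distribution parameters below are placeholders, not the fitted values from Fig. 2c and 2d:

```python
import random

# Tuning-imprecision model: std of the programming error (as % of target)
# follows a linear fit f(g), and a replicable offset is drawn from
# N(mu_off, sigma_off). Coefficients are hypothetical.
def f(g_uS):
    """Std in % of target vs conductance; valid within G_R (100-400 uS)."""
    return 1.5 - 0.003 * g_uS

def sample_programmed(g_target_uS, mu_off=-0.4, sigma_off=0.3):
    g = random.gauss(g_target_uS, f(g_target_uS) / 100 * g_target_uS)
    offset_pct = random.gauss(mu_off, sigma_off)   # offset from N(mu_off, sigma_off)
    return g + offset_pct / 100 * g_target_uS
```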
### Conductance disturbances caused by biasing scheme effect
Programming devices on a passive crossbar requires the use of a biasing scheme to reduce sneak-path currents. The V/3 scheme was used in this study, since all devices other than the addressed memory see the same voltage of V/3. This standardizes and simplifies the process of adding variability to the weights. It also makes it possible to use higher-amplitude voltages to tune the devices without significantly affecting neighboring devices. In order to measure the change in the conductance of a device due to the programming of its neighbors, we tuned devices from three different crossbars, for a total of 144 devices (64 + 64 + 16), to random values within the expected conductance ranges \(G_{R}\). The disturbance of a device, recorded after the tuning cycles of the devices programmed after it are complete, is stored in the database. This disturbance is due to the V/3 biasing scheme. A thorough example is displayed in Figure 3a and b. The conductance changes caused by the biasing scheme accumulate throughout the crossbar programming, which can be observed by comparing the disturbance due to the programming of the neighbors of the first-tuned device with that of the last-tuned devices. As such, the database of recorded disturbance measurements is separated into sub-databases for each number of devices tuned after the programming of the current device. Figure 3c shows two such sub-databases, one for the scenario in which one additional device is tuned and one for the scenario in which ten additional devices are tuned. The device that is tuned last is the bottom-rightmost weight of our neural network matrix when it is transferred from top to bottom and left to right; it sees no conductance change due to the biasing scheme, since it is the last-tuned device. Devices stuck in an HRS or LRS, or that could not reach their target conductances, were not added to this database. If they had been included, the database would have been polluted by stuck or ineffective devices, which would have biased it towards no conductance change.

Fig. 3: **Depiction of the procedure for gathering data on device conductance changes caused by the biasing scheme.** **a.** The top graph depicts the pulse train seen by a device during its own programming (in green) and during the programming of its neighbors (in blue). The bottom graph shows the conductance evolution of a device following the pulse trains displayed in the top graph. The device changes its conductance significantly during its own programming cycle and slightly during its neighbors' programming cycles. **b.** Color-coded depiction of a section of the crossbar studied in panel a, with numerical values for the measured conductance change seen by the device after the programming cycle of other memories. **c.** Histogram of the recorded conductance changes, with the bins containing the changes illustrated in panel b highlighted in red. The top graph contains the conductance changes when one additional device is programmed and the bottom graph contains the conductance changes when ten additional devices are programmed. **d.** Depiction of the V/3 and 2V/3 biasing schemes for programming devices.
As can be seen in Figure 3c, the tail of recorded crosstalk effects is more pronounced for negative values. This is an indirect consequence of the asymmetrical device switching mechanism between the SET and RESET pulses: more negative (RESET) pulses are required to reset devices than positive (SET) pulses are to set them. Thus, devices will, on average, see more successive negative pulses than positive pulses. This phenomenon is worsened by the fact that devices are individually affected differently by negative pulses, with more sensitive devices exhibiting greater conductance changes. A conductance change of up to 60 \(\mu\)S was allowed in the sub-databases; this change is six times larger than the lowest possible conductance weight. The cases in which the disturbance is greater than this value usually correspond to an HRS or LRS failure.
During the training of the neural network, a random offset sampled from these sub-databases correlated to the positions of the weights in the crossbar is added to each weight. The use of the V/3 scheme, as shown in Figure 3d, makes it possible to proceed in this way. This ensures that the first weights in the matrices will see more V/3 amplitude pulses and will, on average, have a greater variability than the last devices that are programmed.
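A sketch of how such position-dependent offsets could be sampled during training follows; the sub-database contents are synthetic stand-ins for the measured disturbances, with only the indexing logic (devices programmed first accumulate the most disturbance) taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sub-databases: key = number of devices programmed after this one,
# value = recorded conductance changes (uS). Real entries would come from the
# crossbar disturbance measurements described above.
sub_db = {n_d: rng.normal(-0.1 * n_d, 0.2 * np.sqrt(n_d + 1), size=500)
          for n_d in range(64)}

def biasing_offsets(rows, cols):
    # Devices are programmed top-to-bottom and left-to-right, so the device
    # programmed first has the most neighbors programmed after it.
    order = np.arange(rows * cols).reshape(rows, cols)
    n_after = rows * cols - 1 - order
    return np.array([[rng.choice(sub_db[n]) for n in row] for row in n_after])

offsets = biasing_offsets(8, 8)  # one offset per weight of an 8x8 crossbar
```

During training, one such offset matrix would be redrawn and added to the weights for every batch, so that early-programmed positions see a larger average disturbance.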
### HRS-LRS stuck device substitution
Some devices are stuck in a low or high resistive state due to defects. Regardless of the DC voltage or the pulse amplitudes applied to such devices, their conductance levels can never be tuned. As shown in Supplementary Figure 10, LRS failures include retention failures; such failures are caused by temporal effects and cause the device conductance to drift over time, typically towards a higher conductance state. HRS failures may occur for devices that were not forming-free or devices with a very narrow achievable conductance range lying outside the desired range. To account for these device faults, fractions \(x\) and \(y\) of the positive (\(g_{s+}\)) and negative (\(g_{s-}\)) software weight components, corresponding to HRS-stuck and LRS-stuck devices respectively, are replaced by randomly sampled failed-device conductances. The percentages \(x\) and \(y\) are fixed throughout the training, but the affected weights are randomized for every batch of input data. This implies that certain weights will be doubly affected by this substitution if their positive and negative components are both assigned as failures.
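A minimal sketch of this substitution step, assuming illustrative failure distributions (HRS uniform in [10 \(\mu\)S, 100 \(\mu\)S], an arbitrary normal for LRS) in place of the measured distributions of Figure 4:

```python
import numpy as np

rng = np.random.default_rng(2)

def substitute_stuck(g_pos, g_neg, x=0.005, y=0.005):
    # Replace a fraction x of components with HRS-stuck values and a fraction
    # y with LRS-stuck values, independently for the positive and negative
    # conductance matrices; return a mask of weights that remain trainable.
    outs, masks = [], []
    for g in (g_pos, g_neg):
        g = g.copy()
        r = rng.random(g.shape)
        hrs = r < x
        lrs = (r >= x) & (r < x + y)
        g[hrs] = rng.uniform(10.0, 100.0, hrs.sum())   # illustrative HRS draw
        g[lrs] = rng.normal(900.0, 50.0, lrs.sum())    # illustrative LRS draw
        outs.append(g)
        masks.append(~(hrs | lrs))
    return outs[0], outs[1], masks[0] & masks[1]

g_pos = np.full((64, 64), 300.0)
g_neg = np.full((64, 64), 300.0)
g_pos2, g_neg2, trainable = substitute_stuck(g_pos, g_neg)
```

Gradient descent would be deactivated wherever `trainable` is False, and the substitution would be redrawn for every batch, as described in the text.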
The stuck devices' conductances are sampled from two distributions based on the characterization of LRS-stuck and HRS-stuck devices made on five 8\(\times\)8 crossbars (320 devices). The conductance range of devices stuck in an HRS is much smaller than that of devices stuck in an LRS; HRS devices are defined as devices stuck below our HRS conductance threshold of 100 \(\mu\)S. The distribution of failed HRS-state devices can be seen in Figure 4a, while the distribution of failed LRS devices can be seen in Figure 4b. Since fitting a normal distribution is inappropriate for the HRS distribution, as it fails the Shapiro-Wilk normality test, HRS stuck-device faults are uniformly sampled from the range [10 \(\mu\)S, 100 \(\mu\)S]. The impact of devices stuck below 10 \(\mu\)S will be smaller than the impact of those stuck between 10 \(\mu\)S and 100 \(\mu\)S; indeed, devices stuck in a higher conductance state have a greater impact on the summation of the currents during the analog matrix multiplication. Gradient descent is deactivated for the weights affected by HRS-stuck or LRS-stuck devices. Typically, the fabrication yields of working devices on crossbars for state-of-the-art memristive devices are above 95% [30], with some articles reporting near-perfect crossbars with yields of 99% [8] or even 100% [31]. The percentages of HRS-stuck and LRS-stuck devices should be chosen to reflect this fact. In general, the values of these parameters should be chosen as conservatively as possible, so that the network converges to a solution that is robust to as many device failures as possible. Of course, replacing too many weights with random values at the extremities of the conductance range, and therefore at the extremities of the overall weight range, will prevent the network from converging to a solution, making the training impossible. This makes these percentage values tunable hyperparameters.

Figure 4: **Distribution of faulty ReRAM.** **a.** Recorded conductance levels of dead crosspoints stuck in an HRS. The Shapiro-Wilk p-value is 0.006, and therefore the hypothesis of the normality of the distribution is rejected. **b.** Recorded conductance levels of dead crosspoints stuck in an LRS. A device is considered stuck when it fails to change its conductance state after the application of 75 successive pulses of maximum SET or RESET amplitudes (2.0 V or -2.4 V).
#### Mathematical expression
Overall, the mathematical equation representing the sampling of each of the devices for either positive or negative weight components is equation 2:
\[N\left(\mu_{w\pm},{\sigma_{w\pm}}^{2}\right) \tag{2}\] \[=N\left(g_{s\pm},f(g_{s\pm})^{2}\right)+N\left(\mu_{off},\sigma_{ off}^{2}\right)+g_{adj}(n_{d}),\] \[=N\left(g_{s\pm}+\mu_{off}+g_{adj}(n_{d}),\sigma_{off}^{2}+f(g_{ s\pm})^{2}\right),\]
where \(g_{s}\) is the direct conductance equivalent of the weight after a linear transformation between the two ranges (see Supplementary Note 1). The weight range is reevaluated for every batch during training, since the maximum and minimum weight values are subject to change as the training progresses. \(N\left(\mu_{off},\sigma_{off}\right)\) is the sampled conductance-tuning imprecision offset, and \(g_{adj}(n_{d})\) is the sampled biasing-scheme effect from programming adjacent devices, as a function of the number of additional devices \(n_{d}\) programmed after the current device. The variability associated with a device during training thus depends both on its target conductance level, for the negative and positive weight components, and on the position of the device in its crossbar. Every weight matrix is presumed to be programmed on a tiled crossbar or on a sub-section of a crossbar.
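Since the tuning noise and the offset are independent normal draws, the two lines of equation 2 are equivalent; this can be verified numerically with assumed values for \(f(g_{s})\), \(\mu_{off}\), \(\sigma_{off}\) and \(g_{adj}\) (all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

g_s, mu_off, sigma_off, g_adj = 60.0, 0.5, 1.2, -2.0   # assumed values (uS)
f_gs = 3.0                                             # assumed f(g_s)

n = 200_000
# First line of eq. 2: two independent normal draws plus the sampled
# biasing-scheme offset (a constant once drawn for this weight)
w1 = rng.normal(g_s, f_gs, n) + rng.normal(mu_off, sigma_off, n) + g_adj
# Second line of eq. 2: one draw from the combined normal
w2 = rng.normal(g_s + mu_off + g_adj, np.sqrt(sigma_off**2 + f_gs**2), n)
```

Both populations share the mean \(g_{s}+\mu_{off}+g_{adj}=58.5\) and the variance \(\sigma_{off}^{2}+f(g_{s})^{2}\), so either form can be used to sample a device during training.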
The network learns to expect more variability from the first devices that are programmed because of the biasing scheme effects and from devices at low conductance values and will adjust the importance of each weight accordingly. The substitution of random failed weights acts as a form of aggressive regularization, improving the network's generalization. As can be expected, training a neural network with stochastic jumps in weight values leads to a longer training time compared to a regular neural network, as is the case for Bayesian neural networks. However, this constant sampling also enables the network to be more robust to overfitting [32].
**Comparison with regular neural network**
A simple half-moon problem was considered to compare the performance of our hardware-aware training approach with that of a regular neural network trained without taking into account hardware non-idealities. The two networks have the same learning rate \(\alpha\) of 0.01, and they use an Adam optimizer [33] with a momentum of 0.9. The batch size used was 256. A neural network with one hidden layer of eight neurons is considered, and the sigmoid activation function is used (as depicted in Figure 1b). The cost function used is the binary cross-entropy loss [34]. During training, hyperparameter values of 0.5% were chosen for both the LRS-stuck and HRS-stuck substitutions. For the first layer, a recorded average of 3.1% of neural network weights were affected by random substitutions of LRS and HRS stuck devices during training. A half-moon classification training set of 875 data points and a test set of 200 data points were randomly generated. The neural network (NN) is trained without any consideration of the variability and then evaluated with the stochastic effects added, to compare its performance to that of the hardware-aware NN (HANN). This is repeated \(n\) times to determine what percentage of the test set is classified correctly by at least a given percentage of the \(n\) simulated transferred networks. These results are expressed in Table 1 in Supplementary Note 4. A comparison of the accuracies is also displayed in Figure 5. Interpreting these results, we see that 79.5% of the test set can be properly classified by at least 95% of the simulated transferred hardware-aware networks. On the other hand, only 18.5% of the test set can be properly classified by at least 95% of the simulated transferred regular neural networks. We observe that 87.5% of the test set can be properly classified by at least 90% of the simulated transferred hardware-aware networks, while 71% of the test set can be properly classified by at least 90% of the regular networks. The hardware-aware neural network training techniques contribute significantly to maintaining a high consistency and accuracy between simulated transferred networks. Figure 6 offers a graphical interpretation of the difference in the relative variability by showing the results of the test set classifications over 1,000 simulated transfers for each of the two networks, along with a heatmap representation of the variability with respect to the entire data space for the half-moons dataset.

Fig. 5: **Comparison of the minimal accuracy over 10,000 simulated transfers achievable for the test set.** None of the datapoints are successfully classified by 100% of the simulated neural networks, due to the often aggressive substitution of weights by failing devices, large crosstalk effects and imprecise tuning. We can see that more of the test set is classified with a higher accuracy for the HANN (depicted in blue) compared to the NN (in green). For instance, 80% of the test set is properly classified by 83.5% of the simulated transferred regular networks vs 95% of the simulated transferred hardware-aware networks. This corresponds to an increase of \(\Delta t_{n}=11.5\%\). Reading the graph in the opposite way, we see that 95% or more of the simulated regular networks are capable of properly classifying only 18.5% of the test set, while 95% of the hardware-aware networks are capable of properly classifying 79.5% of the test set. This corresponds to an increase of \(\Delta t_{c}=61\%\).
Note that this is for a single instance of a hardware-aware-trained NN and a regularly trained NN. The variability, depicted by a deeper blue, is greater for the regular neural network and, as expected, is also greater at the frontier between the two classes.
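The robustness curve of Figure 5 — the share of the test set classified correctly by at least a given fraction of the simulated transfers — can be computed from a matrix of per-transfer correctness. A sketch on synthetic stand-in data (the per-point success probabilities are invented; real entries would come from the simulated transfers):

```python
import numpy as np

rng = np.random.default_rng(4)

# correct[k, j] = True if simulated transfer k classified test point j
# correctly. Synthetic stand-in: each point gets its own success probability.
n_transfers, n_points = 10_000, 200
p_success = rng.uniform(0.7, 1.0, n_points)
correct = rng.random((n_transfers, n_points)) < p_success

def fraction_classified_by(correct, q):
    # Share of test points classified correctly by at least a fraction q
    # of the simulated transferred networks
    per_point = correct.mean(axis=0)
    return (per_point >= q).mean()

frac_95 = fraction_classified_by(correct, 0.95)
frac_90 = fraction_classified_by(correct, 0.90)
```

For the results reported above, `frac_95` would evaluate to 0.795 for the HANN and 0.185 for the regular NN.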
## III Conclusion
It was demonstrated in this paper that the use of hardware-aware training techniques can make neural networks significantly more robust to their often difficult transfer onto passive crossbar arrays. This protocol integrates experimentally measured non-idealities such as the conductance tuning imprecision, biasing scheme effects and device failures into the neural network training.
A classification task using the half-moons dataset reveals that, with our hardware-aware training protocol, 79.5% of the test set is properly classified by at least 95% of the simulated transfers, compared to only 18.5% for the regular neural network. The hardware-aware training procedure could enable the use of passive crossbars for inference tasks despite their non-idealities, offering a solution parallel to the usual fabrication-process improvements. It may serve as a middle ground between fabrication and software solutions to accelerate practical, real-life applications of memristive devices. Furthermore, setting up a custom hardware-aware neural network architecture using preexisting libraries proves to be relatively straightforward and inexpensive in terms of hardware.
Fig. 6: **Heatmap of the variability of classifications with respect to the coordinates (x, y) used as input across the data space.** **a.** Variability of the hardware-aware neural network. **b.** Variability of the regular neural network. The inference is repeated 1,000 times and the mean and standard deviation of the classified results (always 0 or 1) are obtained. A mean smaller than or equal to 0.5 corresponds to the blue class and a mean greater than 0.5 corresponds to the green class. The standard deviation is depicted in the heatmap, with a deeper blue corresponding to more variability across classifications. Note that both subfigures share the same color scale to make it possible to visually compare them. Most data points for the regular neural network lie within regions of greater variability than for the HANN. A dashed gray line that separates the two classes with a precision of 98% is provided for reference.

While the devices studied here are TiO\({}_{2}\) passive crossbars, techniques like this are applicable to different memristive technologies, as long as a new database is built using the same procedures presented in this article. A future study should focus on evaluating the scalability of this approach, with respect to the advantages (robustness, accuracy) and disadvantages (training time, variability-induced local minima). The potential of using this technique for more complex datasets should also be evaluated. Faster convergence during training could most likely be achieved by integrating the variability into the gradient descent procedure. This model could be improved by adding additional variability sources and by developing versatile libraries that make hardware non-idealities compatible with neural network requirements. The philosophy behind this work is to realistically render in-memory computing for neural networks compatible with the current, non-negligible hardware non-idealities present in passive crossbar arrays. This demonstration, based on experimental data, is a step forward in making the use of passive crossbars for neural network matrix-vector multiplications viable in spite of their inherent shortcomings.
## Methods
**Tuning algorithm and characterization details**
The pulse tuning algorithm presented in [29] was used. A memory was considered programmed when ten successive read pulses returned a value within a tolerance around the target. The write pulses had a duration of 200 ns, with amplitudes incremented within [0.8 V, 2.0 V] for SET pulses and [-1.1 V, -2.4 V] for RESET pulses. The read pulses had a duration of 10 \(\mu\)s and an amplitude of 0.2 V, while the current measurement range was 100 \(\mu\)A. The devices considered were forming-free. Figure 2a depicts the conductance evolution using the pulse conductance-tuning procedure. Devices that failed before the completion of the testing procedure were kept in the database, as this corresponds to a realistic scenario of a device's behavior before imminent failure, while devices that failed before testing began were kept in the HRS-stuck or LRS-stuck databases. More information regarding the characterization setup can be found in Supplementary Figure 8 and Supplementary Note 2. A graph of hysteresis curves under DC stimulation for 129 of our devices is also available in Supplementary Figure 9.
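A simulation sketch of this tuning loop follows; the toy device response, noise levels and step sizes are invented for illustration, whereas the actual algorithm of [29] drives real hardware with the amplitude ranges quoted above.

```python
import numpy as np

rng = np.random.default_rng(5)

def tune(target, tol=5.0, n_reads=10, g0=20.0, max_pulses=500):
    # Alternate SET/RESET pulses with ramped amplitudes until ten successive
    # reads fall within +/- tol of the target conductance (uS)
    g, set_v, reset_v = g0, 0.8, 1.1
    for _ in range(max_pulses):
        reads = g + rng.normal(0.0, 1.0, n_reads)     # toy read noise
        if np.all(np.abs(reads - target) < tol):
            return g, True                            # programmed
        if g < target:
            g += 1.0 * set_v + rng.normal(0.0, 0.5)   # toy SET response
            set_v = min(set_v + 0.05, 2.0)            # ramp towards 2.0 V
        else:
            g -= 1.0 * reset_v + rng.normal(0.0, 0.5) # toy RESET response
            reset_v = min(reset_v + 0.05, 2.4)        # ramp towards 2.4 V
    return g, False                                   # stuck or unreachable

g_final, ok = tune(60.0)
```

A device that exhausts the pulse budget without ten in-band reads would be treated as stuck, matching how failed devices are routed to the HRS-stuck or LRS-stuck databases.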
**Fabrication of CMOS-compatible TiO2-based memristor crossbar**
The samples used for the measurements were prepared as described in a previous publication [6]. First, a 600-nm-thick SiN layer was deposited on a Si substrate using low-pressure chemical vapor deposition (LPCVD). Electron-beam lithography (EBL) was used to pattern 400-nm-wide bottom electrodes, which were etched 150 nm deep using an inductively coupled plasma etching process with CF\({}_{4}\)/He/H\({}_{2}\) chemistry. The trenches were filled with 600-nm-thick TiN, deposited by reactive sputtering, and polished using chemical-mechanical polishing (CMP) to remove the excess TiN and planarize the samples. This resulted in TiN electrodes embedded within the SiN layer. The active switching layer was composed of 1.5 nm of Al\({}_{2}\)O\({}_{3}\) and 15 nm of TiO\({}_{2}\), deposited using atomic layer deposition and reactive sputtering, respectively. Top electrode lines, 400 nm wide, of Ti (10 nm)/TiN (30 nm)/Al (200 nm) were deposited and patterned using EBL and plasma etching with BCl\({}_{3}\)/Cl\({}_{2}\)/Ar chemistry. The switching layer outside the crossbar region was etched using the same process to suppress line-to-line leakages and open the ends of the bottom electrodes. The crossbars were then encapsulated within a 500-nm-thick PECVD SiO\({}_{2}\) layer, and the contacts on the open ends of the top and bottom electrodes were etched using plasma etching with CF\({}_{4}\)/He/H\({}_{2}\) chemistry. Ti (10 nm)/Al (400 nm) contact pads for wiring were deposited and patterned using EBL and plasma etching with BCl\({}_{3}\)/Cl\({}_{2}\)/Ar chemistry. Finally, the sample was subjected to a rapid thermal annealing process at 350\({}^{\circ}\)C in N\({}_{2}\) gas for an unspecified duration.
**Data availability**
The hardware-aware training library, along with some of the experimental data, can be found in a GitHub repository that has been made publicly available at [35]. The rest of the data that support this work are available from the corresponding author upon reasonable request.
**Acknowledgements**
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by the Fonds de Recherche du Quebec Nature et Technologie (FRQNT). We acknowledge financial support from the EU: ERC-2017-COG project IONOS (# GA 773228). We would like to acknowledge Jonathan Vermette, Vincent-Philippe Rheaume and the LCSM and LNN cleanroom staff for their assistance with the electrical characterization setup and nanofabrication process development. LN2 is a joint International Research Laboratory (IRL 3463) funded and co-operated in Canada by Universite de Sherbrooke (UdeS) and in France by CNRS as well as ECL, INSA Lyon, and Universite Grenoble Alpes (UGA). It is also supported by the Fonds de Recherche du Quebec Nature et Technologie (FRQNT). We would also like to thank the IEMN cleanroom engineers for their help with the device fabrication.
**Author contributions**
P.D. performed most of the experiments, designed the neural networks, analyzed and post-processed the measurements and created the characterization setup. V.Y. helped in designing the neural network architecture. P.A.M. contributed to the experimental setup and along with M.V. helped with the characterization. R.D., J.A.Z, P.G., S.E. and M.V. fabricated the memristor devices. Y.B., F.A., S.E.,
S.W. and D.D supervised the project. P.D. wrote the manuscript with input from all of the authors.
**Competing interests** The authors declare no competing interests.
|
2309.01211 | Physics-inspired Neural Networks for Parameter Learning of Adaptive
Cruise Control Systems | This paper proposes and develops a physics-inspired neural network (PiNN) for
learning the parameters of commercially implemented adaptive cruise control
(ACC) systems in automotive industry. To emulate the core functionality of
stock ACC systems, which have proprietary control logic and undisclosed
parameters, the constant time-headway policy (CTHP) is adopted. Leveraging the
multi-layer artificial neural networks as universal approximators, the
developed PiNN serves as a surrogate model for the longitudinal dynamics of
ACC-engaged vehicles, efficiently learning the unknown parameters of the CTHP.
The PiNNs allow the integration of physical laws directly into the learning
process. The ability of the PiNN to infer the unknown ACC parameters is
meticulously assessed using both synthetic and high-fidelity empirical data of
space-gap and relative velocity involving ACC-engaged vehicles in platoon
formation. The results have demonstrated the superior predictive ability of the
proposed PiNN in learning the unknown design parameters of stock ACC systems
from different car manufacturers. The set of ACC model parameters obtained from
the PiNN revealed that the stock ACC systems of the considered vehicles in
three experimental campaigns are neither $\mathcal{L}_2$ nor
$\mathcal{L}_\infty$ string stable. | Theocharis Apostolakis, Konstantinos Ampountolas | 2023-09-03T16:33:47Z | http://arxiv.org/abs/2309.01211v2 | # Physics-inspired Neural Networks for Parameter Learning of Adaptive Cruise Control Systems
###### Abstract
This paper proposes and develops a _physics-inspired neural network_ (PINN) for learning the parameters of commercially implemented adaptive cruise control (ACC) systems in automotive industry. To emulate the core functionality of stock ACC systems, which have proprietary control logic and undisclosed parameters, the constant time-headway policy (CTHP) is adopted. Leveraging the multi-layer artificial neural networks as _universal approximators_, the developed PINN serves as a surrogate model for the longitudinal dynamics of ACC-engaged vehicles, efficiently learning the unknown parameters of the CTHP. The ability of the PINN to infer the unknown ACC parameters is meticulous evaluated using both synthetic and high-fidelity empirical data of space-gap and relative velocity involving ACC-engaged vehicles in platoon formation. The results have demonstrated the superior predictive ability of the proposed PINN in learning the unknown design parameters of stock ACC systems from different car manufacturers. The set of ACC model parameters obtained from the PINN revealed that the stock ACC systems of the considered vehicles in three experimental campaigns are neither \(\mathcal{L}_{2}\) nor \(\mathcal{L}_{\infty}\) string stable.
Adaptive cruise control, constant time-headway policy, parameter learning, deep learning, physical principles, physics-inspired neural networks, on-board sensing, U-blox.
## I Introduction
The growing acceptance of partially automated vehicles on public roads, up to SAE Level 2 of driving automation [1], has given rise to new traffic conditions. These conditions have raised concerns among the scientific community regarding their impact on traffic flow and capacity, necessitating a deeper understanding of the fundamental principles underlying these vehicles.
Adaptive cruise control (ACC) [2] is an advanced driver-assistance system (ADAS) [1] that automatically adjusts the vehicle's speed to maintain a safe pre-defined distance from the vehicle in front or to reach the user-specified speed by accelerating or decelerating. ADAS has long been a part of automotive equipment, available as optional or standard features, providing additional assistance to the driver by controlling the vehicle's longitudinal movement. It achieves this by monitoring the surrounding environment with several onboard sensors such as radar, lidar, and others.
The design of ACC remains publicly unknown, making ACC systems not yet fully understood. One of the key aspects behind ACC's design is the spacing policy adopted by ACC manufacturers, along with its unknown design parameters. The spacing policy specifies the pre-defined desired (time or space) distance between an ACC-engaged vehicle and the vehicle in front. Among the various spacing control strategies developed in the literature, the constant time headway policy (CTHP) [2] stands out as one of the most remarkable [3, 4]. Despite its simplicity, the CTHP is capable of accurately reproducing the dynamics and driving behavior of ACC-engaged vehicles in platoons, as evidenced by several field trials, see e.g., [5, 6, 7].
This paper introduces physics-inspired neural networks (PiNNs) for learning the parameters of commercially implemented ACC systems using empirical observations of space-gap and relative velocity. The constant time-headway policy (CTHP), speculated to be implemented in the stock ACC systems of various makes, is adopted to emulate the longitudinal motion of ACC-engaged vehicles flocking in homogeneous platoons. The pursued PiNN is a data-driven deep learning approach based on the physical model of the CTHP, derived from first physical principles (double integrator) and control theory (a PID-like control law) [5, 8].
## II Relevant Literature
The spacing policy of ACC systems is one of the main design aspects of these vehicles, with a direct impact on traffic flow characteristics and driver safety. The most commonly used is the constant time headway policy (CTHP) adopted in this paper, which suggests that the predefined distance between two consecutive vehicles is a linear function of the controlled vehicle's (ACC follower) velocity [2]. In addition, the constant spacing policy (CSP) suggests that the desired inter-vehicle spacing is independent of the ACC ego vehicle's longitudinal velocity [3]. Finally, the nonlinear variable time headway policy (VTHP) proposes a time headway which varies with the relative speed between the two adjacent vehicles [4, 9]. A comparison of the aforementioned spacing and time headway policies can be found in [10].
Among others, a major issue concerning the scientific community for decades, is whether ACC systems are string stable inside a platoon [11, 12, 13, 14, 15, 16]. String stability is defined as the elimination of propagating disturbances upstream the platoon. In the literature it has been repeatedly reported that these systems tend to be string unstable under the presence
of disturbances [5, 6, 17, 18, 19, 20, 21]. To better understand such phenomena, a broader study of ACC's principles is required.
Employing these types of control policies on ACC-equipped vehicles would ideally entail string-stable platoons when such vehicles are involved. However, it is well known that constant spacing policies (CSP) lead to string-unstable platoons [22, 23, 24], while constant time headway policies (CTHP) have been shown to be either string stable [2, 11, 12, 25] or unstable [5, 6, 17, 18, 19, 20, 21], based on experimental or simulation studies in the literature. Consequently, string stability is a key property for achieving smooth traffic flows, guaranteeing the absence of potential disturbances propagating upstream a platoon of vehicles and, hence, avoiding the emergence of traffic jams, slinky effects and unsafe gaps.
Embedding these policies into simplified vehicle models, to reproduce or imitate ACC vehicles' dynamics and their realistic effects on traffic flow, requires knowledge of realistic ACC parameter values. Consequently, ACC system parameter identification is an important step towards a better understanding of these systems. Several studies have used either online or offline optimization procedures to estimate the unknown system dynamics and parameters using synthetic or empirical data [20, 26, 27, 5].
A dual unscented Kalman filter (DUKF) was proposed for the nonlinear joint state and CTHP parameter identification of commercially implemented ACC systems, using empirical data from a real-life car-following experimental campaign [28]. Another study developed two online methods to provide real-time system identification of ACC-equipped vehicles, namely (i) recursive least squares (RLS) and (ii) particle filtering (PF), on both synthetic and empirical data [21]. In addition, a Gaussian process-based model was designed to learn personalized driving behavior from both synthetic and naturalistic data, in order to design an ACC system suited to the driver's preferences [29].
However, the parameter identification problem of automated vehicles using empirical observations is challenging, since the underlying optimization problem is non-convex and might be ill-conditioned under equilibrium driving conditions (i.e., where the acceleration and the space-gap error reduce to zero), see [21, 28]. In the latter case, the ACC system parameters cannot be uniquely identified, given input and output observations from the platoon, since the problem lacks both linear and nonlinear observability [28].
A recently developed framework for inverse optimization is the so-called physics-inspired (or physics-informed) neural networks (PiNNs) [30, 31, 32, 33]. PiNNs are semi-supervised artificial neural networks of dynamical systems governed by ordinary or partial differential equations (ODEs or PDEs) and observed data. A PiNN consists of two ingredients: (a) An artificial neural network representing a _physics-uninformed_ surrogate predictor (parameterized by weights and biases); and (b) a residual network representing a _physics-and-data-informed_ set of ODEs or PDEs for regularization. PiNN's implementation expands into solving both forward (_data-driven solution_) and inverse (_data-driven discovery_) non-linear problems, with various applications in engineering [34, 35, 36] and transportation [37, 38].
This research contributes to the state-of-the-art by introducing PiNNs for learning the parameters of commercially implemented ACC systems using empirical observations of space-gap and relative velocity. This problem belongs to the second class of problems (data-driven inverse optimization problems), integrating the knowledge from both empirical observations and the ACC system's dynamics. The proposed PiNN is based on the physical model of the longitudinal motion of ACC-engaged vehicles, derived from first physical principles, and a PID-like control law emulating the CTHP.
_Paper Organization:_ The rest of the paper is structured as follows. Section III reviews the constant time-headway policy (CTHP). It also briefly presents the \(\mathcal{L}_{2}\) and \(\mathcal{L}_{\infty}\) criteria for string stability in platoons. Section IV introduces the proposed PiNN for parameter learning. Section V illustrates the application of the PiNN to both synthetic and empirical observations obtained from three real-life campaigns. It also presents the inferred parameters of stock ACC systems for various makes and discusses their string-stability condition. Section VI discusses the potential of this work.
_Notation:_ The fields of real and complex numbers are denoted by \(\mathbb{R}\) and \(\mathbb{C}\), respectively. The imaginary unit is denoted by \(j\), where \(j:=\sqrt{-1}\). The space of Lebesgue measurable functions \(f:\mathbb{R}\rightarrow\mathbb{R}\) such that \(t\rightarrow|f(t)|^{\mathsf{p}}\) is integrable over \(\mathbb{R}\) is denoted by \(\mathcal{L}_{\mathsf{p}}\), where \(\mathsf{p}=2\) or \(\mathsf{p}=\infty\) is used to discuss string stability. For \(\mathsf{p}=\infty\) no integration is used; instead, the norm on \(\mathcal{L}_{\infty}\) is given by the essential supremum. Given a transfer function \(\Sigma(j\omega)\), \(\omega\in\mathbb{R}\), of a single-input single-output (SISO) system, the \(\mathcal{H}_{\infty}\) norm of the system is defined as \(\|\Sigma(j\omega)\|_{\mathcal{H}_{\infty}}=\sup_{\omega\in\mathbb{R}}|\Sigma(j\omega)|\).
## III Modeling Adaptive Cruise Control Systems
In the sequel, the constant time-headway policy, embedded in a simplified dynamic vehicle model, is considered to imitate the longitudinal motion of ACC-engaged vehicles flocking in homogeneous platoons. The vehicle dynamics of the ACC ego vehicles are described by a double integrator, while acceleration is governed by the CTHP. Despite its simplicity, this model is able to reproduce the dynamics and driving behavior of ACC-engaged vehicles in platoons as shown in field trials [5, 6, 7].
### _Longitudinal Model and Constant Time-Headway Policy_
Consider a platoon of \(M\) homogeneous ACC-engaged ego vehicles, where vehicle \(i\) (follower) follows vehicle \(i-1\) (leader). The leader of the platoon is indexed by \(i=0\) and may correspond to a human-driven vehicle (HDV). The longitudinal dynamics of the ACC-equipped vehicles, with acceleration governed by the CTHP (a PID-like control law), can be described by the following system of ordinary differential equations (ODEs) [2, 12]:
\[\dot{p}_{i}(t) =\Delta v_{i}(t),\quad i=1,2,\ldots,M, \tag{1}\] \[\dot{v}_{i}(t) =\alpha_{i}\big{[}p_{i}(t)-\tau_{i}v_{i}(t)\big{]}\!+\!\beta_{i} \Delta v_{i}(t), \tag{2}\]
where \(p_{i}(t)\) [m] is the space-gap between two vehicles \(i\) and \(i-1\), i.e., the distance between the follower's front bumper and the leader's rear bumper; and \(\Delta v_{i}(t)=v_{i-1}(t)-v_{i}(t)\) is the relative velocity between the vehicle \(i\) and the vehicle \(i-1\)
in a platoon. In control law (2), \(\tau_{i}\) [s] represents the constant time headway the ACC-engaged vehicle \(i\) strives to maintain with its leader \(i-1\), which in turn indicates the desired space-gap, \(\tau_{i}v_{i}(t)\), between the two vehicles. The two non-negative gains, \(\alpha_{i}\) [1/s\({}^{2}\)] and \(\beta_{i}\) [1/s], control the trade-off between the space-gap difference, \(p_{i}(t)-\tau_{i}v_{i}(t)\), and the relative velocity \(\Delta v_{i}\). Finally, the parameter \(\tau_{i}\) [s] can also be interpreted as the time-gap under equilibrium driving conditions:
\[v_{i-1}(t)-v_{i}(t)=0\ \ \text{and}\ \ p_{i}(t)-\tau_{i}v_{i}(t)=0,\ \forall\,i. \tag{3}\]
The CTHP parameters \(\alpha_{i}\), \(\beta_{i}\), and \(\tau_{i}\) that characterize the ACC equipped vehicles are constant but unknown. Thus, the model (1)-(2) can be re-written in compact form as,
\[\dot{\mathbf{\xi}}_{i}(t)=\mathbf{f}[\mathbf{\xi}_{i}(t),\mathbf{\varpi}_{i}],\quad i=1, 2,\ldots,M, \tag{4}\]
where \(\mathbf{\xi}_{i}(t)=[p_{i}(t)\ v_{i}(t)]^{\mathsf{T}}\) is the state vector, \(\mathbf{\varpi}_{i}=[\alpha_{i}\ \beta_{i}\ \tau_{i}]^{\mathsf{T}}\) is the parameter vector (to be learned), and \(\mathbf{f}=[f_{1}\ f_{2}]^{\mathsf{T}}\) is a vector function that reflects the right-hand side of (1)-(2). Note that, despite the homogeneity assumption above for the vehicle characteristics, a different set of parameters \(\mathbf{\varpi}_{i}\) is considered here for each vehicle within the platoon. The assumption of the same set of parameters \(\mathbf{\varpi}_{i}\equiv\mathbf{\varpi}\) for each ego vehicle \(i\) in the platoon is a special case and is also considered in Section V.
Finally, the CTHP model of each ACC-engaged vehicle in the platoon must satisfy the following _rational driving constraints_[39]:
\[\frac{\partial f_{2}}{\partial p_{i}}=\alpha_{i}\geq 0,\frac{\partial f_{2}} {\partial\Delta v_{i}}=\beta_{i}\geq 0,\frac{\partial f_{2}}{\partial v_{i}}=- \alpha_{i}\tau_{i}\leq 0,\,\forall\,i. \tag{5}\]
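As a concrete illustration of the model above, the ODEs (1)-(2) can be integrated numerically for a single follower. The sketch below uses a forward-Euler scheme with placeholder gains and speeds (not identified parameters); with a constant leader speed, the follower settles at the equilibrium (3).

```python
# Forward-Euler integration of the CTHP model (1)-(2) for one ACC follower
# behind a constant-speed leader. Gains are illustrative, not identified.

def simulate_cthp(alpha, beta, tau, v_leader, p0, v0, dt=0.1, t_end=300.0):
    """Integrate dp/dt = v_leader - v,  dv/dt = alpha*(p - tau*v) + beta*(v_leader - v)."""
    p, v = p0, v0
    for _ in range(int(t_end / dt)):
        dv_rel = v_leader - v
        # simultaneous (synchronous) update of both states
        p, v = p + dt * dv_rel, v + dt * (alpha * (p - tau * v) + beta * dv_rel)
    return p, v

if __name__ == "__main__":
    p, v = simulate_cthp(alpha=0.08, beta=0.12, tau=1.5,
                         v_leader=25.0, p0=30.0, v0=20.0)
    # At equilibrium (3): relative velocity and space-gap error both vanish,
    # i.e., v -> v_leader and p -> tau * v.
    print(p, v)
```

Forward Euler suffices here because the closed loop is well damped at this step size; any standard ODE integrator could be substituted.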
### _String Stability and Parameter Identification_
A fundamental aspect of ACC systems in platoon formation is _string stability_ in the presence of unknown but bounded disturbances. String stability characterizes the ability of the CTHP (or another control policy for ACC-equipped vehicles) to mitigate the potential amplification of random perturbations through a platoon of vehicles. String-unstable platoons are problematic since their dynamics can lead to phantom traffic shockwaves and thus to traffic congestion and poor system throughput.
The following definition characterizes the lack of upstream amplification of random perturbations through a platoon of vehicles.
**Definition 1** (Strict String Stability).: Consider a string of \(M\) vehicles in platoon formation. This system is strict string stable if and only if,
\[\|\chi_{i}(t)\|_{\mathcal{L}_{\mathsf{p}}}\leq\|\chi_{i-1}(t)\|_{\mathcal{L} _{\mathsf{p}}},\quad\forall\,t\geq 0,\ i=1,\ldots,M, \tag{6}\]
where \(\chi_{i}(t)\) can be any signal of interest for vehicle \(i\) (e.g., spacing, velocity, or acceleration). The input signal \(\chi_{0}(t)\in\mathcal{L}_{\mathsf{p}}\) is given and corresponds to the lead vehicle of the platoon, while \(\chi_{i}(0)=0\) for \(i=1,\ldots,M\).
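To make Definition 1 concrete, one can simulate a short homogeneous platoon governed by (1)-(2) behind a sinusoidally perturbed leader and compare the perturbation amplitudes along the string. The gains below are the illustrative set \([\alpha\ \beta\ \tau]=[0.08\ 0.12\ 1.5]\), which turns out to violate (6): amplitudes grow downstream.

```python
import math

# Homogeneous platoon of followers behind a sinusoidally perturbed leader,
# integrated with forward Euler. With the illustrative gains below, the
# velocity-perturbation amplitude grows from vehicle to vehicle, i.e.,
# condition (6) of Definition 1 is violated (string instability).

def platoon_amplitudes(alpha=0.08, beta=0.12, tau=1.5, n_veh=3,
                       dt=0.05, t_end=600.0, omega=0.2):
    n = int(t_end / dt)
    v = [20.0] * n_veh                 # follower speeds, at equilibrium
    p = [tau * 20.0] * n_veh           # equilibrium space-gaps (3)
    hist = [[] for _ in range(n_veh)]
    for k in range(n):
        v_lead = 20.0 + math.sin(omega * k * dt)   # perturbed leader speed
        new_p, new_v = [], []
        for i in range(n_veh):
            up = v_lead if i == 0 else v[i - 1]    # upstream vehicle's speed
            dv = up - v[i]
            new_p.append(p[i] + dt * dv)
            new_v.append(v[i] + dt * (alpha * (p[i] - tau * v[i]) + beta * dv))
        p, v = new_p, new_v
        if k * dt > t_end / 2:         # record steady state only
            for i in range(n_veh):
                hist[i].append(v[i])
    return [(max(h) - min(h)) / 2.0 for h in hist]  # perturbation amplitudes

if __name__ == "__main__":
    print(platoon_amplitudes())        # strictly increasing down the string
```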
Considering input-output stability for a platoon of \(M\) vehicles, the transfer function (of two consecutive vehicles) from input velocity \(V_{i-1}(s)\) to output velocity \(V_{i}(s)\) is defined by,
\[V_{i}(s)=\Sigma_{i}(s)V_{i-1}(s),\quad i=1,2,\ldots,M, \tag{7}\]
with \(V(s)\), \(s\in\mathbb{C}\), denoting the Laplace transform of \(v(t)\), \(t\geq 0\). Using (7), the speed-to-speed transfer function of the CTHP model (1)-(2) for vehicle \(i\) is given by,
\[\Sigma_{i}(s)=\frac{\beta_{i}s+\alpha_{i}}{s^{2}+(\alpha_{i}\tau_{i}+\beta_{ i})s+\alpha_{i}},\quad i=1,\ldots,M. \tag{8}\]
Given the above transfer function of the CTHP, the following conditions for string stability in the sense of \(\mathcal{L}_{2}\) and \(\mathcal{L}_{\infty}\) norms are well-established (see e.g., [39, 40]).
#### III-B1 \(\mathcal{L}_{2}\) Strict String Stability
For the \(\mathcal{L}_{2}\) space, the following condition holds [41]:
\[\|\Sigma_{i}(j\omega)\|_{\mathcal{H}_{\infty}}=\max_{v_{i-1}\neq 0}\frac{\|v_{i}(t)\|_{ \mathcal{L}_{2}}}{\|v_{i-1}(t)\|_{\mathcal{L}_{2}}},\quad i=1,\ldots,M, \tag{9}\]
where \(\Sigma_{i}(j\omega)\) is the transfer function evaluated at \(s=j\omega\) over the frequency \(\omega\in\mathbb{R}\) (i.e., along the imaginary axis). According to (9) the \(\mathcal{H}_{\infty}\) norm is induced by the \(\mathcal{L}_{2}\) norms of the input and output energy signals. Combining (9) and Definition 1 for the \(\mathcal{L}_{2}\) norm yields the following condition for strict string stability (see also Notation in Section II):
\[|\Sigma_{i}(j\omega)|=\sqrt{\frac{\alpha_{i}^{2}+\beta_{i}^{2}\omega^{2}}{(\alpha_{i}-\omega^{2})^{2}+\omega^{2}(\alpha_{i}\tau_{i}+\beta_{i})^{2}}}<1, \tag{10}\]
for all \(\omega\geq 0\) and \(i=1,\ldots,M\). This condition imposes the disturbance effect to decrease for long platoons rather than just being bounded (i.e., the disturbance will ultimately vanish).
Condition (10) leads to the following inequality that the CTHP model parameters \(\alpha_{i}\), \(\beta_{i}\) and \(\tau_{i}\) must satisfy,
\[\alpha_{i}^{2}\tau_{i}^{2}+2\alpha_{i}\beta_{i}\tau_{i}-2\alpha_{i}>0,\,\forall \,i=1,\ldots,M. \tag{11}\]
As can be seen, as \(\tau_{i}\) approaches \(\infty\) the system is \(\mathcal{L}_{2}\) strict string stable for all non-negative gains \(\alpha_{i}\) and \(\beta_{i}\) of the CTHP, while as \(\tau_{i}\) approaches zero the system becomes unstable.
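Condition (10) is easy to check numerically by sweeping the magnitude of the transfer function (8) along the imaginary axis; the parameter sets below are illustrative. A small time-headway yields a gain above one at low frequencies (string unstable), while a sufficiently large \(\tau\) keeps the whole curve below one.

```python
# Magnitude of the speed-to-speed transfer function (8) on the imaginary
# axis, |Sigma_i(j omega)|, as used in the L2 condition (10).
# Parameter values are illustrative placeholders.

def gain(alpha, beta, tau, omega):
    s = 1j * omega
    return abs((beta * s + alpha) / (s ** 2 + (alpha * tau + beta) * s + alpha))

if __name__ == "__main__":
    grid = [k * 0.01 for k in range(1, 1001)]
    # Small time-headway: gain exceeds 1 at low frequency -> L2 string unstable.
    print(max(gain(0.08, 0.12, 1.5, w) for w in grid) > 1.0)
    # Large time-headway: gain < 1 for all omega > 0 -> L2 string stable.
    print(all(gain(0.08, 0.12, 10.0, w) < 1.0 for w in grid))
```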
#### III-B2 \(\mathcal{L}_{\infty}\) Strict String Stability
The transfer functions (8) have second-order dynamics and a zero on the real axis of the left half complex plane. For the \(\mathcal{L}_{\infty}\) space, a necessary and sufficient condition for string stability is that (8) has real (non-oscillatory) poles and negative zeros, which leads to:
\[(\alpha_{i}\tau_{i}+\beta_{i})^{2}-4\alpha_{i}>0. \tag{12}\]
Subtracting (11) from (12) yields [40]:
\[\beta_{i}^{2}>2\alpha_{i}\ \ \Rightarrow\ \ \left(\mathcal{L}_{\infty}\ \text{stability}\ \Leftrightarrow\ \mathcal{L}_{2}\ \text{stability}\right), \tag{13}\]
suggesting that \(\mathcal{L}_{2}\) stability is stronger than the \(\mathcal{L}_{\infty}\) stability. This is realistic since even if the \(\mathcal{L}_{2}\) energy of a signal is small, it may occasionally contain large peaks, provided the peaks (i.e., the \(\mathcal{L}_{\infty}\) norm) are not too frequent and do not contain too much energy.
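Both closed-form criteria can be evaluated directly from a candidate gain set; the values below are illustrative placeholders.

```python
# Closed-form strict string-stability checks for one CTHP gain set.

def l2_stable(alpha, beta, tau):
    # Condition (11): alpha^2 tau^2 + 2 alpha beta tau - 2 alpha > 0
    return alpha ** 2 * tau ** 2 + 2 * alpha * beta * tau - 2 * alpha > 0

def linf_stable(alpha, beta, tau):
    # Condition (12): (alpha tau + beta)^2 - 4 alpha > 0
    return (alpha * tau + beta) ** 2 - 4 * alpha > 0

if __name__ == "__main__":
    # Illustrative small-headway set fails both; a larger headway passes both.
    print(l2_stable(0.08, 0.12, 1.5), linf_stable(0.08, 0.12, 1.5))
    print(l2_stable(0.08, 0.12, 10.0), linf_stable(0.08, 0.12, 10.0))
    # Per (13): if beta**2 > 2*alpha, the two criteria coincide.
```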
A proper parameter identification scheme is essential for the assessment of commercially implemented ACC systems in terms of string stability. Section IV presents a physics-constrained and data-informed artificial neural network for the parameter learning problem of stock ACC systems using empirical data of space-gap and relative velocity. This is a deep learning data-driven approach subject to the physical model of the CTHP (1)-(2) derived from first physical principles (double integrator) and control theory (a PID-like control law).
## IV Physics-Inspired and Data-Informed Artificial Neural Networks
### _Architecture of Multi-layer Artificial Neural Networks_
Consider a fully-connected feedforward artificial neural network (ANN), \(\mathcal{N}^{L}(\mathbf{x}):\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), of \(L\) layers (i.e., \(L-1\) hidden layers), with \(N_{\ell}\) denoting the number of artificial neurons at layer \(\ell=1,2,\ldots,L-1\), while for the input and output layers \(N_{0}=n\) and \(N_{L}=m\) hold, respectively. Thus, the input layer has the dimension of the raw training data \(\mathbf{x}\in\mathbb{R}^{n}\), while the dimension of the output layer is defined by the context. In feed-forward ANNs, each neuron at layer \(\ell\) is equipped with a (user-defined) nonlinear activation function, \(\sigma(\mathbf{x}):\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\), to transform the weighted linear sum input of various neurons at layer \(\ell-1\) into an output that is passed (neuron fires) on to the next (hidden or output) layer \(\ell+1\). The most common activation functions include the logistic sigmoid \(\sigma(x)=1/[1+\exp(-x)]\), hyperbolic tangent, \(\sigma(x)=\tanh(x)\), and the rectified linear unit (so-called ReLU), \(\sigma(x)=\max(x,0)\). The weights and bias of the weighted sum input at layer \(\ell=1,2,\ldots,L\) are organized in the matrices \(\mathbf{A}^{\ell}\in\mathbb{R}^{N_{\ell}\times N_{\ell-1}}\) and vectors \(\mathbf{b}^{\ell}\in\mathbb{R}^{N_{\ell}}\), respectively; or in a single parameters vector \(\boldsymbol{\theta}=\{\mathbf{A}^{\ell},\mathbf{b}^{\ell}\}_{1\leq\ell\leq L}\). The architecture of the ANN can then be summarized as the following compositional function:
\[\text{\emph{Input layer:}} \mathcal{N}^{0}(\mathbf{x})=\mathbf{x}\in\mathbb{R}^{n},\] \[\text{\emph{Hidden layers:}} \mathcal{N}^{\ell}(\mathbf{x})=\sigma(\mathbf{A}^{\ell}\mathcal{ N}^{\ell-1}(\mathbf{x})+\mathbf{b}^{\ell})\in\mathbb{R}^{N^{\ell}},\] \[\text{\emph{Output layer:}} \mathcal{N}^{L}(\mathbf{x})=\mathbf{A}^{L}\mathcal{N}^{L-1}( \mathbf{x})+\mathbf{b}^{L}\in\mathbb{R}^{m},\]
where \(\ell=1,2,\ldots,L-1\) for the hidden layers.
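The compositional map above can be written out in a few lines; the sketch below is a minimal NumPy forward pass with \(\tanh\) activations and randomly initialized weights, using the layer sizes \(1\to 60\to 60\to 60\to 3\) of the example network of Section V-B purely for illustration.

```python
import numpy as np

# Forward pass of the fully connected network defined above:
# N^0(x) = x, N^l(x) = sigma(A^l N^{l-1}(x) + b^l) for hidden layers,
# and an affine output layer. Layer sizes follow the L = 4 example
# network with N_0 = 1, N_1 = N_2 = N_3 = 60, N_4 = 3.

def forward(x, weights, biases, sigma=np.tanh):
    h = x
    for A, b in zip(weights[:-1], biases[:-1]):    # hidden layers
        h = sigma(A @ h + b)
    return weights[-1] @ h + biases[-1]            # affine output layer

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sizes = [1, 60, 60, 60, 3]
    weights = [0.1 * rng.standard_normal((m, n))
               for n, m in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros(m) for m in sizes[1:]]
    n_params = sum(A.size + b.size for A, b in zip(weights, biases))
    y = forward(np.array([0.5]), weights, biases)
    print(y.shape, n_params)   # 7623 parameters, matching Section V-B
```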
The ultimate goal is to optimize the weighting matrices \(\mathbf{A}^{\ell}\) and bias vectors \(\mathbf{b}^{\ell}\) connecting the neural network from the \(\ell\)-th to the \((\ell+1)\)-th layer using labeled pairs of input (see _Input layer_) and output (see _Output layer_) data. This procedure, which is known as _training_ in the ANN nomenclature, involves _automatic differentiation_[42] over the compositional function \(\mathcal{N}^{L}(\mathbf{x})\) and nonlinear optimization over a high-dimensional space, and thus is computationally expensive. _Stochastic gradient descent_ (SGD) [43, 44] and _back-propagation_[45] are two important algorithms that can be used for the efficient training of such multi-layer ANNs.
### _Parameter Learning via Physics-inspired Neural Networks_
Consider the parameterized ODEs (1)-(2) that characterize the CTHP of ACC equipped vehicles. Assume that a solution, \(\boldsymbol{\xi}_{i}(t)\) on the time domain \(t\in[t_{0},t_{e}]=\Upsilon\), of the ODEs exists, given some boundary conditions \(\mathcal{B}_{i}(\boldsymbol{\xi}_{i},t)=\mathbf{0}\) on \(\partial\Upsilon\):
\[\mathcal{F}_{i}(\boldsymbol{\xi}_{i},\boldsymbol{\dot{\xi}}_{i},\boldsymbol{ \varpi}_{i},t)=\boldsymbol{\dot{\xi}}_{i}(t)-\mathbf{f}[\boldsymbol{\xi}_{i}(t ),\varpi_{i}]=\mathbf{0}, \tag{14}\]
for all \(i=1,\ldots,M\), where \(\mathcal{F}_{i}\in\mathbb{R}^{2}\) is a nonlinear operator (residual of (4)) parameterized by \(\boldsymbol{\xi}_{i}\), \(\boldsymbol{\dot{\xi}}_{i}\), and \(\boldsymbol{\varpi}_{i}\) (to be learned).
Multi-layer feedforward artificial neural networks are a class of _universal approximators_[46], thus a neural network (see Fig. 1), \(\hat{\boldsymbol{\Xi}}=\{\hat{\boldsymbol{\xi}}_{i}(t;\boldsymbol{\theta})\}_{i=1,\ldots,M}\), can be developed as a surrogate of the solution \(\boldsymbol{\xi}_{i}(t)\), for all \(i=1,2,\ldots,M\). Provided the availability of empirical data of space-gap and relative velocity, \(\mathcal{D}=\{\boldsymbol{\xi}_{i}(t)\}_{t\in\Upsilon}\), the (output of the) neural network \(\hat{\boldsymbol{\Xi}}\) (predictor) can be constrained to satisfy the physical model (14) of the CTHP and its boundary conditions \(\mathcal{B}_{i}(\hat{\boldsymbol{\xi}}_{i},t)=\mathbf{0}\) on \(\partial\Upsilon\), for all \(i=1,2,\ldots,M\). Additional internal conditions, \(\mathcal{I}_{i}(\hat{\boldsymbol{\xi}}_{i},t)=\mathbf{0}\) on some \(t\subset\Upsilon\), can also be incorporated for readily solving the _inverse parameter optimization problem_. In the inverse problem, the vector of CTHP parameters is to be learned using training data, \(\mathcal{D}\), such that (14) and the boundary/internal conditions are satisfied.
Concluding, a physics-inspired and data-informed neural network for each ACC vehicle \(i\) in a platoon consists of (see Fig. 1): (a) the _physics-uninformed_ surrogate predictor \(\hat{\boldsymbol{\xi}}_{i}(t;\boldsymbol{\theta})\); (b) the _physics-informed_ residual constraints \(\mathcal{F}_{i}(\hat{\boldsymbol{\xi}}_{i},\dot{\hat{\boldsymbol{\xi}}}_{i},\boldsymbol{\varpi}_{i},t)\) together with the boundary and internal conditions \(\mathcal{R}_{i}(\hat{\boldsymbol{\xi}}_{i},t)=\{\mathcal{B}_{i}(\hat{\boldsymbol{\xi}}_{i},t)\cup\mathcal{I}_{i}(\hat{\boldsymbol{\xi}}_{i},t)\}\); and (c) a _data-informed_ fitting function that penalizes deviations of the neural network's predictions from the empirical training data. Upon training, the neural network is calibrated to predict the entire solution of the system of ODEs (14), as well as the unknown CTHP parameters that impose the underlying dynamics in a platoon.
### _Training of the Physics-inspired Neural Network_
Training of the neural network requires discretization or sampling of the continuous time domain at a sufficiently small time interval, resulting in the set of pseudo-points (or induced points) \(\mathcal{T}=\{t_{1},t_{2},\ldots,t_{|\mathcal{T}|}\}\in\Upsilon\), where \(|\mathcal{T}|\) is the cardinality of the set \(\mathcal{T}\). The set \(\mathcal{T}=\{\mathcal{T}_{f},\mathcal{T}_{b}\}\) includes all points in the time domain \(\mathcal{T}_{f}\subset\Upsilon\) and its boundary \(\mathcal{T}_{b}\subset\partial\Upsilon\) where evaluation of the residual (14) and the boundary and internal conditions \(\mathcal{R}_{i}\) is necessary, given predictions of the surrogate model \(\hat{\boldsymbol{\xi}}_{i}(t;\boldsymbol{\theta})\) and empirical data of space-gap and relative velocity, \(\mathcal{D}=\{\boldsymbol{\xi}_{i}(t)\}_{t\in\mathcal{T}}\). Thus, the neural network takes as input the set \(\mathcal{T}\) and provides predictions of \(\hat{\boldsymbol{\xi}}_{i}=[\hat{p}_{i}\;\hat{v}_{i}]^{\mathsf{T}}\) (see the input and output layers, respectively, in Fig. 1), for all \(i=1,2,\ldots,M\). The leader's velocity, \(v_{0}(t)\), can also be predicted by the NN, provided that \(v_{0}(t)\) is observed using range sensors or other equipment; see \(\hat{v}_{0}\) in Fig. 1.
To efficiently train the surrogate predictor \(\hat{\boldsymbol{\Xi}}=\{\hat{\boldsymbol{\xi}}_{i}(t;\boldsymbol{\theta})\}_{i=1,\ldots,M}\) and simultaneously satisfy the residual and boundary constraints, the following cost criterion of two terms is considered: (a) a semi-unsupervised cost criterion, \(\Psi_{\rm ODE|U}\), for the residual (14) and its boundary and internal conditions; and (b) a supervised (data-driven) cost criterion, \(\Psi_{\rm DJ|S}\), governed by measurements of \(\boldsymbol{\xi}_{i}\) (empirical data of space-gap and relative velocity) and the predictions \(\hat{\boldsymbol{\xi}}_{i}\), for all \(i=1,\ldots,M\). For the inverse optimization problem the cost criterion reads:
\[\Psi(\boldsymbol{\theta},\boldsymbol{\Omega}) =\Psi_{\rm ODE|U}+\Psi_{\rm DJ|S}\] \[=\underbrace{\Psi_{\mathcal{F}}(\boldsymbol{\theta},\boldsymbol{ \Omega})+\Psi_{\mathcal{R}}(\boldsymbol{\theta},\boldsymbol{\Omega})}_{\Psi_{\rm ODE|U}}+ \underbrace{\Psi_{\mathcal{D}}(\boldsymbol{\theta})}_{\Psi_{\rm DJ|S}}, \tag{15}\]
where,
\[\boldsymbol{\Omega}=\big{[}\varpi_{1}^{\mathsf{T}}\;\cdots\;\varpi_{M}^{ \mathsf{T}}\big{]}^{\mathsf{T}},\]
and,
\[\Psi_{\mathcal{F}}(\mathbf{\theta},\mathbf{\Omega}) =\frac{1}{|\mathcal{T}_{f}|\times M}\sum_{i=1}^{M}\sum_{t\in \mathcal{T}_{f}}\|\mathcal{F}_{i}(\mathbf{\hat{\xi}}_{i},\dot{\mathbf{\xi}}_{i},\mathbf{ \varpi}_{i},t)\|_{\mathbf{Q}}^{2}, \tag{16}\] \[\Psi_{\mathcal{R}}(\mathbf{\theta},\mathbf{\Omega}) =\frac{1}{|\mathcal{T}_{b}|\times M}\sum_{i=1}^{M}\sum_{t\in \mathcal{T}_{b}}\|\mathcal{R}_{i}(\mathbf{\hat{\xi}}_{i},\mathbf{\varpi}_{i},t)\|_{ \mathbf{R}}^{2},\] (17) \[\Psi_{\mathcal{D}}(\mathbf{\theta}) =\frac{1}{|\mathcal{T}|\times M}\sum_{i=1}^{M}\sum_{t\in \mathcal{T}}\|\psi_{i}(\mathbf{\hat{\xi}}_{i},\mathbf{\xi}_{i},t)\|_{\mathbf{S}}^{2}. \tag{18}\]
The function \(\psi\) penalizes deviations of the neural network surrogate approximation from the empirical training data set \(\mathcal{D}=\{\mathbf{\xi}_{i}(t)\}_{t\in\mathcal{T}}\), e.g., \(\psi_{i}(\mathbf{\hat{\xi}}_{i},\mathbf{\xi}_{i},t)=\mathbf{\hat{\xi}}_{i}-\mathbf{\xi}_{i}\) for the mean square error (MSE). The positive-definite weighting matrices \(\mathbf{Q}\) and \(\mathbf{R}\) (unsupervised terms) and \(\mathbf{S}\) (supervised term) act as penalty terms. These matrices can be selected via a trial-and-error procedure to speed up convergence for a particular data set and application.
The optimal vectors of the inverse problem, NN weights \(\mathbf{\theta}\) and CTHP parameters \(\mathbf{\Omega}\) for a platoon of vehicles, can be obtained by solving the following optimization problem during neural network's training:
\[[\mathbf{\theta}^{\star},\mathbf{\Omega}^{\star}]=\arg\min_{\mathbf{\theta},\mathbf{\Omega}} \Psi(\mathbf{\theta},\mathbf{\Omega}). \tag{19}\]
This is a highly nonlinear optimization problem over a high-dimensional space, and thus is computationally expensive; it can, however, be solved efficiently using (batch) SGD variants (e.g., Adam [47]) or quasi-Newton methods that exploit (approximate) Hessian information (e.g., L-BFGS [48]).
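For intuition, with identity weighting matrices and \(\psi\) the MSE deviation, the terms (16) and (18) reduce to plain mean-squared sums. The sketch below writes them for a single follower (\(M=1\)), approximating the surrogate's time derivative by finite differences in place of automatic differentiation; all arrays are illustrative placeholders.

```python
import numpy as np

# Cost terms (16) and (18) for M = 1 with identity weighting matrices.
# xi_hat / xi are (|T|, 2) arrays of [space-gap, velocity]; v_lead is the
# leader's velocity; the surrogate's time derivative is finite-differenced
# here as a stand-in for automatic differentiation.

def data_cost(xi_hat, xi):
    """Psi_D (18): mean squared deviation of predictions from observations."""
    return float(np.mean(np.sum((xi_hat - xi) ** 2, axis=1)))

def residual_cost(p_hat, v_hat, v_lead, alpha, beta, tau, dt):
    """Psi_F (16): mean squared residual of the ODEs (1)-(2) on the prediction."""
    r1 = np.gradient(p_hat, dt) - (v_lead - v_hat)
    r2 = (np.gradient(v_hat, dt)
          - alpha * (p_hat - tau * v_hat) - beta * (v_lead - v_hat))
    return float(np.mean(r1 ** 2 + r2 ** 2))

if __name__ == "__main__":
    # On an exact equilibrium trajectory (3), both costs vanish.
    v = np.full(100, 20.0)
    p = 1.5 * v
    xi = np.stack([p, v], axis=1)
    print(residual_cost(p, v, v, 0.08, 0.12, 1.5, 0.1), data_cost(xi, xi))
```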
## V Application and Results
To demonstrate the effectiveness of the proposed data-driven approach to learn the parameters of the CTHP of ACC-equipped vehicles, three physics-constrained and data-informed neural networks are developed and tested on both synthetic and empirical data of space-gap and relative velocity. The first neural network is trained to predict the CTHP parameters for a car-following scenario with an ACC ego vehicle (follower) and a human-driven vehicle (leader), while two other neural networks are trained to predict the CTHP parameters for two different platoons of vehicles.
### _Empirical Data Description_
The empirical data are taken from three car-following experimental campaigns: two took place on the Autostrada A26 motorway, Italy, on the Ispra-Vicolungo route (fleet of five vehicles in platoon formation) and the Ispra-Casale route (car-following scenario with two vehicles) in 2019 and 2020, respectively; the third took place on the AstaZero test track, Sweden (five premium ACC-equipped vehicles in platoon formation), in mid-2019. In all campaigns, data acquisition was performed using high-accuracy on-board equipment (e.g., U-blox global navigation satellite system (GNSS) receivers) with a sampling frequency of 10 Hz (0.1 s). These data can be freely accessed through the OpenACC database [19]. Table III provides information (make and model) on the vehicles involved in the aforementioned campaigns.
### _Physics-inspired Neural Network Setup_
Fig. 1 depicts the neural network architecture for parameter learning of the unknown design parameters of the CTHP, \(\mathbf{\Omega}\), for a platoon of \(M\) vehicles. Each neuron of the neural network is equipped with a non-linear hyperbolic tangent activation function, \(\sigma(\cdot)=\tanh(\cdot)\).
The first neural network was developed and trained to predict the CTHP parameters for a particular ego vehicle \(i\) in a car-following scenario (i.e., \(M=1\)) in Ispra-Casale,
Fig. 1: Architecture of the proposed physics-inspired and data-informed neural network for CTHP parameter learning. The neural network on the left represents the _physics-uninformed_ surrogate predictor \(\mathbf{\hat{\Xi}}(t;\mathbf{\theta})=\{\mathbf{\hat{\xi}}_{i}(t;\mathbf{\theta})\}_{i=1,\dots,M}\), while the right network depicts the _physics-inspired_ residual and boundary conditions of the CTHP, and the data-informed cost function that penalizes deviation of the surrogate model predictions from the empirical data.
\(\hat{\mathbf{\Xi}}(t;\mathbf{\theta})=\hat{\mathbf{\xi}}_{1}(t;\mathbf{\theta})\), consisting of one (1) input layer with one (1) neuron \(t\) (the discretized time domain), three (3) hidden-layers with sixty (60) neurons each, and one (1) output layer with two (2) neurons, and the predictor \((\hat{v}_{0},\hat{\mathbf{\xi}})\in\mathbb{R}^{3}\), where \(\hat{\mathbf{\xi}}=[\hat{p}_{1}\ \hat{v}_{1}]^{\mathsf{T}}\). Thus (see Section IV-A for notation), \(L=4\), \(N_{0}=1\), \(N_{\ell}=60\) for \(\ell=1,2,3\), and \(N_{4}=3\). The weights vector of the neural network reads \(\mathbf{\theta}=\{\mathbf{\mathrm{A}}^{(1)},\mathbf{\mathrm{A}}^{(2)},\mathbf{\mathrm{A}}^{(3 )},\mathbf{\mathrm{A}}^{(4)},\mathbf{\mathrm{b}}^{(1)},\mathbf{\mathrm{b}}^{(2)},\mathbf{ \mathrm{b}}^{(3)},\mathbf{\mathrm{b}}^{(4)}\}\in\mathbb{R}^{7623}\).
Two additional neural networks with the same architecture (\(L=4\), \(N_{0}=1\), \(N_{\ell}=60\), \(\ell=1,2,3\)) were trained to predict the CTHP parameters of five (5) vehicles in platoon formation for two different experimental campaigns, namely Astazero and Ispra-Vicolungo. For the Astazero data set \(M=4\) (note that the first vehicle in the experiment, the leader, is human driven) and thus the output of the neural network (the predictor \(\hat{\mathbf{\Xi}}=\{\hat{\mathbf{\xi}}_{i}(t;\mathbf{\theta})\}_{i=1,\ldots,4}\)) consists of nine (9) neurons, the \((\hat{p}_{i},\hat{v}_{i}),i=1,\ldots,4\) plus the leader's velocity \(\hat{v}_{0}\), i.e. \(N_{4}=9\). For the Ispra-Vicolungo the first and last vehicle in the platoon are human driven, thus \(M=3\) and the output of the neural network \(\hat{\mathbf{\Xi}}=\{\hat{\mathbf{\xi}}_{i}(t;\mathbf{\theta})\}_{i=1,\ldots,3}\) consists of seven (7) neurons, the \((\hat{p}_{i},\hat{v}_{i}),i=1,\ldots,3\) plus the leader's velocity \(\hat{v}_{0}\), i.e. \(N_{4}=7\). The weights vector of the two neural network read: \(\mathbf{\theta}=\{\mathbf{\mathrm{A}}^{(1)},\mathbf{\mathrm{A}}^{(2)},\mathbf{\mathrm{A}}^{(3 )},\mathbf{\mathrm{A}}^{(4)},\mathbf{\mathrm{b}}^{(1)},\mathbf{\mathrm{b}}^{(2)},\mathbf{ \mathrm{b}}^{(3)},\mathbf{\mathrm{b}}^{(4)}\}\in\mathbb{R}^{7989}\) and \(\mathbf{\theta}=\{\mathbf{\mathrm{A}}^{(1)},\mathbf{\mathrm{A}}^{(2)},\mathbf{\mathrm{A}}^{(3 )},\mathbf{\mathrm{A}}^{(4)},\mathbf{\mathrm{b}}^{(1)},\mathbf{\mathrm{b}}^{(2)},\mathbf{ \mathrm{b}}^{(3)},\mathbf{\mathrm{b}}^{(4)}\}\in\mathbb{R}^{7867}\) for the Astazero and Ispra-Vicolungo campaigns, respectively.
To train the neural networks, the learning rate is set to \(0.001\) and the maximum number of iterations is set to 60,000. In each iteration, the input of the neural network is fed with a set of \(|\mathcal{T}|=|\mathcal{T}_{f}|=|\mathcal{T}_{b}|=3000\) collocation training pseudo-points (see input \(t\) of the input layer in Fig. 1) sampled inside the time domain of \(t=[0,300]\) s at a frequency of 10 Hz (0.1 s), i.e., \(\mathcal{T}=\{t_{1},t_{2},\ldots,t_{|\mathcal{T}|=3000}\}\).
For the inverse optimization problem for a car-following scenario with two vehicles, \(M=1\) (the extension to \(M>1\) is straightforward), the cost criterion (15) is considered with:
\[\Psi_{\mathcal{F}}(\mathbf{\theta},\mathbf{\varpi})=\frac{1}{|\mathcal{T}_{f}|}\sum_{t\in\mathcal{T}_{f}}\left\{\left[\dot{\hat{p}}_{1}(t)-v_{0}(t)+\hat{v}_{1}(t)\right]^{2}\right.\] \[+\left.\left[\dot{\hat{v}}_{1}(t)-\alpha[\hat{p}_{1}(t)-\tau\hat{v}_{1}(t)]-\beta[v_{0}(t)-\hat{v}_{1}(t)]\right]^{2}\right\},\] \[\Psi_{\mathcal{D}}(\mathbf{\theta})=\frac{1}{|\mathcal{T}|}\sum_{t\in\mathcal{T}}\left\{\left[p_{1}(t)-\hat{p}_{1}(t)\right]^{2}+\left[v_{1}(t)-\hat{v}_{1}(t)\right]^{2}\right.\] \[\left.+\left[v_{0}(t)-\hat{v}_{0}(t)\right]^{2}\right\},\]
where \(v_{0}\) and \(v_{1}\) are the velocities of the leader (ACC vehicle or HDV) and the follower (ACC ego vehicle), respectively; and \(p_{1}\) is the space-gap between the two vehicles. Note that the term \(\Psi_{\mathcal{R}}\) is absent above, since \(|\mathcal{T}_{b}|=|\mathcal{T}|\) and thus the internal and boundary conditions \(\mathcal{R}\) are included in \(\Psi_{\mathcal{F}}\) and \(\Psi_{\mathcal{D}}\). For the inverse optimization problem (19), a hybrid strategy of using Adam (for some thousands of iterations at the beginning of the training) and L-BFGS (for the rest of the training) is employed to speed up convergence [32].
The described neural network architecture (among others tested) is found to work well for the CTHP parameter learning problem and, thus, has been employed for both synthetic and empirical data obtained from campaigns in the field (see Section V-A). Training of the neural network takes about 7 CPU-minutes for the Ispra-Casale experiment and 30 CPU-minutes for the two other campaigns with platoons of \(M=4\) and \(M=3\), on a computer with an 11th Generation Intel(R) Core(TM) i7-11700K @ 3.60GHz with 8 cores on Windows 10 Pro 64-bit. Parameter convergence is achieved in around 20,000 iterations for all tests on synthetic and empirical data.
### _CTHP Parameter Learning on Synthetic Data_
Initially, synthetic data with known CTHP parameters, \(\mathbf{\varpi}\), were generated and used to validate the proposed parameter learning approach. To this end, consider a car-following scenario with two vehicles, a leader (HDV) and a follower (ACC ego vehicle), a known set of design parameters \(\mathbf{\varpi}^{*}=[\alpha^{*}\ \beta^{*}\ \tau^{*}]^{\mathsf{T}}=[0.08\ 0.12\ 1.5]^{\mathsf{T}}\), and a pre-defined leader's velocity profile taken from the Ispra-Casale campaign (see the solid orange line in Fig. 2b). Then, synthetic data of \(p_{1}(t)\) and \(v_{1}(t)\), for \(t>0\), were generated from the system of ODEs (1)-(2) (with \(M=1\)), using the a priori known \(\mathbf{\varpi}^{*}\) and the initial condition \(\mathbf{\xi}(0)=[p_{1}(0)\ v_{1}(0)]^{\mathsf{T}}=[20.3\ 21.3]^{\mathsf{T}}\) (in m and m/s, respectively) and \(v_{0}(0)=21.3\) m/s. The data were generated for 300 s at a frequency of 10 Hz (0.1 s).
Figs. 2a-2b display the obtained trajectories of the space-gap \(p_{1}(t)\) (solid grey line) and the velocities \(v_{0}(t)\) (solid orange line) and \(v_{1}(t)\) (solid grey line). As can be seen, both equilibrium (3) and non-equilibrium driving conditions are present in the synthetic data set. It should be noted that the known set of design parameters, \(\mathbf{\varpi}^{*}\), corresponds to a string-unstable system in terms of both the \(\mathcal{L}_{2}\) and \(\mathcal{L}_{\infty}\) norms; check conditions (11) and (12), respectively. Thus the ACC-engaged vehicle will amplify any random perturbations of the human-driven vehicle (leader).
The synthetic data were used to train the first neural network described in Section V-B towards learning the a priori known vector of CTHP parameters, \(\mathbf{\varpi}^{*}\), of the ACC-engaged vehicle. Fig. 3a depicts the obtained results on the learning trajectories of \(\hat{\alpha}\), \(\hat{\beta}\), and \(\hat{\tau}\) converging successfully to \(\hat{\mathbf{\varpi}}=[0.0784\ 0.12\ 1.5]^{\mathsf{T}}\approx\mathbf{\varpi}^{*}\) after 20,000 iterations. The neural network is also calibrated to predict the entire trajectories of \(p_{1}(t)\), \(v_{0}(t)\), and \(v_{1}(t)\) corresponding to the true parameter values. Fig. 2 presents the estimated values of \(\hat{p}_{1}(t)\), \(\hat{v}_{0}(t)\), and \(\hat{v}_{1}(t)\) in each point inside the entire time domain, with mean absolute errors (MAEs) of 0.0939 m, 0.1491 m/s, and 0.1509 m/s, respectively.
This experiment underlines the ability of the developed data-driven framework and physics-constrained neural network to successfully learn the true values of the CTHP parameters and reconstruct the full range of dynamics in the space-gap and velocity profiles.
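As a lightweight, neural-network-free cross-check of this synthetic experiment, note that the acceleration equation (2) is linear in \((\alpha,\,\alpha\tau,\,\beta)\); on noise-free Euler-generated data the parameters can therefore be recovered exactly by linear least squares. The leader profile below is an illustrative sinusoid, not the Ispra-Casale profile used in the text.

```python
import numpy as np

# Sanity check: recover [alpha, beta, tau] from synthetic forward-Euler data
# by linear least squares, exploiting that
#   v_dot = alpha*p - (alpha*tau)*v + beta*(v0 - v)
# is linear in (alpha, alpha*tau, beta). Leader speed is an illustrative sinusoid.

def generate(alpha, beta, tau, dt=0.1, t_end=300.0):
    n = int(t_end / dt)
    t = np.arange(n) * dt
    v0 = 22.0 + 2.0 * np.sin(0.05 * t)     # synthetic leader speed profile
    p = np.empty(n); v = np.empty(n)
    p[0], v[0] = 20.3, 21.3                # initial condition, as in the text
    for k in range(n - 1):
        p[k + 1] = p[k] + dt * (v0[k] - v[k])
        v[k + 1] = v[k] + dt * (alpha * (p[k] - tau * v[k]) + beta * (v0[k] - v[k]))
    return v0, p, v

def identify(v0, p, v, dt=0.1):
    v_dot = (v[1:] - v[:-1]) / dt          # forward difference matches Euler
    X = np.column_stack([p[:-1], -v[:-1], v0[:-1] - v[:-1]])
    (a, g, b), *_ = np.linalg.lstsq(X, v_dot, rcond=None)
    return a, b, g / a                     # tau = (alpha*tau) / alpha

if __name__ == "__main__":
    v0, p, v = generate(0.08, 0.12, 1.5)   # the synthetic set of Section V
    print([round(x, 4) for x in identify(v0, p, v)])
```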
### _CTHP Parameter Learning on Empirical Data_
This section demonstrates the superior predictive ability of the proposed PiNN to learn the unknown design parameters of stock ACC systems of different makes (see Table III), using empirical data of space-gap and relative velocity from three experimental campaigns, namely Ispra-Casale, Ispra-Vicolungo, and AstaZero. It also aims to examine whether the
stock ACC systems of various makes are strict string stable inside platoons using the obtained ACC parameters and the string stability criteria (11) and (12), see Section III.
_Parameter Learning for Ispra-Casale:_ To investigate the sensitivity of parameter learning to different initial values of \(\hat{\mathbf{\varpi}}(0)\), five different training sessions of the neural network were carried out. Fig. 4 displays the empirical data of space-gap and relative velocity (solid lines) used for the training. As a careful inspection of the trajectories reveals, the leader is engaged in various acceleration perturbations, while the follower appears to be string unstable. Table I presents the obtained results of the training for five trials with \(\alpha\), \(\beta\) and \(\tau\) starting from different initial values. As can be seen, \(\alpha\) and \(\beta\) differ only in the third decimal place, while \(\tau\) converges to the same value in each experiment. Using these values in the string stability criteria (11) and (12), it turns out that the involved ACC vehicle is neither \(\mathcal{L}_{2}\) nor \(\mathcal{L}_{\infty}\) strict string stable. Fig. 3b shows the parameters' convergence for the first experiment of Table I, while Fig. 4 presents the estimated trajectories of space-gap and speed of the ACC vehicle, with their MAEs being relatively low (see Table I).
_Parameter Learning for AstaZero and Ispra-Vicolungo:_ The two experimental campaigns comprise data from a real-life campaign that took place on the Autostrada A26 motorway between Ispra and Vicolungo and from the protected environment of the AstaZero test track. Each campaign involved a fleet of five ACC-equipped vehicles in platoon formation, where the followers are examined under real-life traffic conditions and different perturbation events imposed by the leader, as shown in Figs. 5a-5b. The data comprise the speed of each vehicle and the space-gaps between them for 300 s.
In both campaigns, the proposed physics-constrained neural network is trained and applied to learn the CTHP parameters of the followers (vehicles indexed by \(i=1,\ldots,4\)) and to assess the string stability of the platoons. Two different types of platoons are considered: Case I, homogeneous platoons, where all vehicles in a platoon share the same ACC settings, i.e., \(\mathbf{\varpi}_{i}\equiv\mathbf{\varpi}=[\alpha\ \beta\ \tau]^{\mathrm{T}}\) for all \(i=1,\ldots,M\); and Case II, non-homogeneous platoons, where a different set of parameters \(\mathbf{\varpi}_{i}=[\alpha_{i}\ \beta_{i}\ \tau_{i}]^{\mathrm{T}}\), \(i=1,\ldots,M\), is considered for each ACC vehicle within the platoon.
Table II presents the obtained CTHP parameters for both experimental campaigns under Case I. As can be seen, both campaigns result in string unstable platoons, with the MAE between the real and estimated space-gap and velocity trajectories being relatively low. In addition, Table III presents the results for both campaigns obtained by applying one PiNN to each platoon and considering a different set of CTHP parameters per vehicle, i.e., Case II. As can be seen, the ACC design parameters \(\alpha\), \(\beta\), and \(\tau\) of the CTHP have relatively close values across the different ACC controllers (different vehicle makes), with the time-headways \(\tau\) close to the true mean time-gaps; the time-gap is defined as the ratio of the space-gap between two vehicles to the speed of the following vehicle. Comparing with Table II (note the averages over the platoon in Table III), the CTHP parameter values in Case II are consistent with those obtained in Case I, which underlines the ability of the proposed physics-constrained and data-informed neural network to learn the ACC model parameters under different assumptions and architectures for two real-life car-following campaigns. Table III also presents the MAEs between the estimated and real space-gap and velocity trajectories. The obtained MAEs across the different PiNN implementations are consistent with previous works on the parameter identification of commercial ACC systems [6, 21, 28, 17].

Fig. 2: Space-gap and velocity trajectories for the synthetic dataset.

Fig. 3: CTHP parameter learning and convergence.
### _String Stability and Bode Plots_
This section discusses the string stability of the calibrated CTHP models for both experimental campaigns. First, the \(\mathcal{L}_{2}\) and \(\mathcal{L}_{\infty}\) string stability conditions (see (11) and (12)) are calculated and checked. Table III displays the obtained results (second column from the right). As can be seen, the ACC followers are neither \(\mathcal{L}_{2}\) nor \(\mathcal{L}_{\infty}\) strictly string stable, except for the last vehicle in AstaZero, which is \(\mathcal{L}_{\infty}\) string stable. This is possible since \(\mathcal{L}_{2}\) stability is stronger than \(\mathcal{L}_{\infty}\) stability, see condition (13). Also, the last follower (\(i=4\)) in the Ispra-Vicolungo campaign was driving manually, and hence no findings are reported for it. Finally, Fig. 6 depicts the evolution of the \(\mathcal{L}_{2}\) and \(\mathcal{L}_{\infty}\) values of the ACC followers along the platoon. A valley forms in the middle of the platoon, with the \(\mathcal{L}_{2}\) values lower than those of \(\mathcal{L}_{\infty}\) in both campaigns. The observed behavior of the two stability criteria suggests that the leader's (\(i=0\)) perturbation in both trials propagates upstream through the ACC vehicles \(i=1,2\) and is then partly dissipated by the ACC vehicles \(i=3,4\).

Fig. 4: Space-gap and velocity trajectories for Ispra-Casale (Exp. #1).

Fig. 5: Velocity profiles.

Fig. 6: \(\mathcal{L}_{2}\) and \(\mathcal{L}_{\infty}\) string stability evolution over the platoon in each campaign.
To further interpret the obtained string stability results, Fig. 7 presents the Bode plots of the calibrated CTHP models for both campaigns. The Bode plots are developed using the velocity-to-velocity transfer function (8) and the calibrated CTHP model parameters for each vehicle in the platoon (see Table III). Given Definition 1 and string stability condition (10), a positive amplitude in the Bode plot at a given frequency (i.e., disturbance) indicates that the disturbance will propagate through the platoon, while a negative amplitude indicates that it will dissipate. For zero amplitude, the platoon is marginally string stable (or weakly string stable).
As can be seen in both campaigns (see Fig. 7), the ACC-engaged vehicles (vehicles \(i=1,2,3,4\) in AstaZero and \(i=1,2,3\) in Ispra-Vicolungo) could not compensate for instabilities generated by the human-driven conventional vehicle (here vehicle \(i=0\)). However, there is a range of frequencies (in [0.4, 1] rad/s) over which the calibrated ACC models in both campaigns dissipate disturbances. Specifically, disturbances with frequencies below 0.4 rad/s are amplified, while higher-frequency disturbances are dissipated along the platoon. The largest amplitude (\(\approx 1.2\) dB) in both campaigns occurs at \(\omega=0.25\) rad/s. These observations suggest that a stronger condition than \(\mathcal{L}_{2}\) strict string stability would be to force a sharper decrease of the Bode magnitude at low frequencies. Note that this is challenging, since the displayed Bode plots correspond to a second-order system with a zero in the left half of the complex plane (see transfer function (8)).
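The amplitude check described above can be sketched numerically. A minimal example, assuming the standard linear CTHP velocity-to-velocity transfer function \(G(s)=(\beta s+\alpha)/(s^{2}+(\alpha\tau+\beta)s+\alpha)\), which is consistent with the text's description of (8) as a second-order system with a left-half-plane zero; the parameter values below are illustrative, not the calibrated ones from Table III.

```python
import numpy as np

def bode_amplitude_db(alpha, beta, tau, omega):
    """|G(jw)| in dB for the assumed CTHP transfer function
    G(s) = (beta*s + alpha) / (s**2 + (alpha*tau + beta)*s + alpha)."""
    s = 1j * omega
    G = (beta * s + alpha) / (s**2 + (alpha * tau + beta) * s + alpha)
    return 20 * np.log10(np.abs(G))

# Illustrative (not calibrated) CTHP parameters.
alpha, beta, tau = 0.08, 0.12, 1.5
omega = np.logspace(-2, 1, 500)            # frequencies in rad/s
amp = bode_amplitude_db(alpha, beta, tau, omega)

# A positive amplitude at some frequency means that disturbance is amplified.
print("peak amplitude (dB):", amp.max())
print("amplified band exists:", bool((amp > 0).any()))
```

For string-unstable parameter sets the magnitude curve rises above 0 dB at low frequencies and falls below it at higher ones, reproducing the qualitative shape discussed for Fig. 7.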
To illustrate that disturbances at some frequencies remain bounded or are dissipated even with a string unstable ACC system (i.e., CTHP model parameters verified as string unstable by (11) and (12)), Fig. 8 depicts the velocity profiles of eight (8) ACC-equipped vehicles following a lead vehicle performing a sinusoidal perturbation over a 500 s simulation. The lead vehicle (shown in solid red) drives at 20 m/s for 20 s, while all following ACC-engaged vehicles are initialized at the corresponding equilibrium velocity and space-gap (\(\propto\tau v\)). The lead vehicle then performs a sinusoidal perturbation with a magnitude of 1 m/s at \(\omega=0.25\) rad/s and \(\omega=0.3\) rad/s, respectively. The ACC-engaged vehicles adopt the CTHP with calibrated parameters corresponding to the string unstable vehicles \(i=3\) and \(i=4\) from the AstaZero campaign (see Table III and Fig. 7(a)). As can be seen in Figs. 8(a) and 8(c), the ACC vehicles amplify the disturbance along the platoon with the calibrated CTHP parameters of vehicle \(i=3\), while they dissipate the oscillation with the calibrated parameters of vehicle \(i=4\) (see Figs. 8(b) and 8(d)).
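A simulation of this kind can be sketched with a simple Euler integration. This assumes the linear CTHP follower law \(a_i=\alpha(s_i-\tau v_i)+\beta(v_{i-1}-v_i)\) (consistent with the equilibrium space-gap \(\propto\tau v\) stated above); the parameter values are illustrative, not the calibrated AstaZero ones.

```python
import numpy as np

def simulate_platoon(alpha, beta, tau, n_follow=8, v0=20.0,
                     amp=1.0, omega=0.25, T=500.0, dt=0.05):
    """Euler integration of a CTHP platoon behind a sinusoidal leader.
    Assumed follower law: a_i = alpha*(s_i - tau*v_i) + beta*(v_{i-1} - v_i),
    with equilibrium space-gap s = tau*v.  Returns max |v_i - v0| per follower."""
    steps = int(T / dt)
    v = np.full(n_follow, v0)          # follower speeds at equilibrium
    s = np.full(n_follow, tau * v0)    # equilibrium space-gaps
    peaks = np.zeros(n_follow)
    for k in range(steps):
        t = k * dt
        # leader cruises for 20 s, then oscillates with 1 m/s magnitude
        v_lead = v0 + (amp * np.sin(omega * t) if t > 20.0 else 0.0)
        v_prev = np.concatenate(([v_lead], v[:-1]))
        a = alpha * (s - tau * v) + beta * (v_prev - v)
        s += dt * (v_prev - v)
        v += dt * a
        peaks = np.maximum(peaks, np.abs(v - v0))
    return peaks

# Illustrative string-unstable parameters: the 1 m/s wave grows along the platoon.
peaks = simulate_platoon(alpha=0.08, beta=0.12, tau=1.5)
print(np.round(peaks, 2))
```

With a string-stable parameter set, the same loop shows the peak speed deviation shrinking from follower to follower instead.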
## VI Conclusions and Outlook
The parameter learning of commercially implemented ACC systems is challenging, since the core functionality of these systems (the proprietary control logic and its parameters) is not publicly available. This work showed that physics-constrained and data-informed neural networks can be used as surrogate models to capture ACC vehicle longitudinal dynamics and to efficiently infer the unknown parameters of the constant time-headway policy often deployed in stock ACC systems of various makes in the automotive industry.
The findings of this paper demonstrate the ease with which PiNNs learn the unknown ACC design parameters. More specifically, PiNNs can successfully retrieve the true ACC parameters from synthetic data, and can deliver estimates of the unknown design parameters of stock ACC systems from empirical observations of space-gap and relative velocity. This is confirmed by the similar ACC parameter values found among the different ACC controllers (different makes) in the three experimental campaigns. Applying the string stability criteria to the obtained results showed that the stock ACC systems of various makes tend to be string unstable inside the platoon. This result highlights that further research is needed to achieve string stable platoons with a positive effect on traffic flow, capacity, and throughput, which would allow the large-scale deployment of ACC-engaged vehicles on freeways.

Fig. 7: Bode plots for the ACC vehicles involved in the AstaZero and Ispra-Vicolungo campaigns; see Table III. In both Bode plots, a positive amplitude at a given frequency (i.e., perturbation) leads to string unstable platoons. In the frequency range \([0.4,1]\) rad/s, all calibrated ACC models dissipate disturbances (i.e., are string stable). (a) AstaZero: at \(\omega=0.25\) rad/s, vehicle \(i=4\) is _marginally string stable_ while all other vehicles are _string unstable_; at \(\omega=0.3\) rad/s, vehicle \(i=4\) is _string stable_, \(i=2\) remains string unstable, and \(i=1,3\) are _marginally string stable_. (b) Ispra-Vicolungo: the same observations hold for different vehicles.
# A Multi-View Joint Learning Framework for Embedding Clinical Codes and Text Using Graph Neural Networks

Lecheng Kong, Christopher King, Bradley Fritz, Yixin Chen

Published 2023-01-27 | arXiv:2301.11608 | http://arxiv.org/abs/2301.11608
###### Abstract
Learning to represent free text is a core task in many clinical machine learning (ML) applications, as clinical text contains observations and plans not otherwise available for inference. State-of-the-art methods use large language models developed with immense computational resources and training data; however, applying these models is challenging because of the highly varying syntax and vocabulary in clinical free text. Structured information such as International Classification of Disease (ICD) codes often succinctly abstracts the most important facts of a clinical encounter and yields good performance, but is often not as available as clinical text in real-world scenarios. We propose a **multi-view learning framework** that jointly learns from codes and text to combine the availability and forward-looking nature of text with the better performance of ICD codes. The learned text embeddings can be used as inputs to predictive algorithms independent of the ICD codes during inference. Our approach uses a Graph Neural Network (GNN) to process ICD codes, and a Bi-LSTM to process text. We apply Deep Canonical Correlation Analysis (DCCA) to enforce the two views to learn a similar representation of each patient. In experiments using planned surgical procedure text, our model outperforms BERT models fine-tuned to clinical data, and in experiments using diverse text in MIMIC-III, our model is competitive with a fine-tuned BERT at a tiny fraction of its computational effort.
We also find that the multi-view approach is beneficial for stabilizing inferences on codes that were unseen during training, which is a real problem within highly detailed coding systems. We propose a labeling training scheme in which we block part of the training codes during DCCA to improve the generalizability of the GNN to unseen codes. In experiments with unseen codes, the proposed scheme consistently achieves superior performance on code inference tasks.
Washington University in St. Louis
One Brookings Drive
St. Louis, Missouri 63130, USA
{jerry.kong, christopherking, bafritz, ychen25}@wustl.edu
Copyright (c) 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
## 1 Introduction
An electronic health record (EHR) stores a patient's comprehensive information within a healthcare system. It provides rich contexts for evaluating the patient's status and future clinical plans. The information in an EHR can be classified as structured or unstructured. Over the past decade, ML techniques have been widely applied to uncover patterns behind structured information such as lab results [23, 16, 17]. Recently, the surge of deep learning and large-scale pre-trained networks has allowed unstructured data, mainly clinical notes, to be effectively used for learning [14, 15, 16]. However, most methods focus on either structured or unstructured data _only_.
A particularly informative type of structured data is the International Classification of Diseases (ICD) codes. ICD is an expert-identified hierarchical medical concept ontology used to systematically organize medical concepts into categories and encode valuable domain knowledge about a patient's diseases and procedures.
Because ICD codes are highly specific and unambiguous, ML models that use ICD codes to predict procedure outcomes often yield more accurate results than those that do not [16, 17]. However, _the availability of ICD codes is not always guaranteed_. For example, billing ICD codes are generated after the clinical encounter, meaning that we cannot use the ICD codes to predict post-operative outcomes before the surgery. A more subtle but crucial drawback of using ICD codes is that there might be **unseen codes** during inference. When a future procedure is associated with a code outside the trained subset, most existing models using procedure codes cannot accurately represent the case. Shifts in coding practices can also cause the codes seen at inference time to not overlap with the trained set.
On the other hand, unstructured text data are readily and consistently available. Clinical notes are generated as free text and potentially carry a doctor's complete insight about a patient's condition, including possible but not known diagnoses and planned procedures. Unfortunately, the clinical text is a challenging natural language source, containing ambiguous abbreviations, input errors, and words and phrases rarely seen in pre-training sources. It is consequently difficult to train a robust model that predicts surgery outcomes from the large volume of free texts. Most current models rely on large-scale pre-trained models [14, 15]. Such methods require a considerable corpus of relevant texts to fine-tune, which might not be available at a particular facility. Hence, models that only consider clinical texts suffer from poor performance and incur huge computation costs.
To overcome the problems of models using only text or codes, we propose to learn from the ICD codes and clinical text in a **multi-view joint learning framework**. We observe that despite having different formats, the text and code data are complementary and broadly describe the same underlying facts about the patient. This enables each learner (view) to use the other view's representation as a regularization function where less information is present. Under our framework, even when one view is missing, the other view can perform inference _independently_ and maintain the effective data representation learned from the different perspectives, which allows us to train reliable text models without the vast corpus and computation cost required by other text-only models.
Specifically, we make the following contributions in this paper. (1) We propose a multi-view learning framework using Deep Canonical Correlation Analysis (DCCA) for ICD codes and clinical notes. (2) We propose a novel tree-like structure to encode ICD codes by relational graph and apply Relational Graph Convolution Network (RGCN) to embed ICD codes. (3) We use a two-stage Bi-LSTM to encode lengthy clinical texts. (4) To solve the unseen code prediction problem, we propose a labeling training scheme in which we simulate unseen node prediction during training. Combined with the DCCA optimization process, the training scheme teaches the RGCN to discriminate between unseen and seen codes during inference and achieves better performance than plain RGCN.
## 2 Related Works
**Deep learning on clinical notes.** Many works focus on applying deep learning to learn representations of clinical texts for downstream tasks. Early work [1] compared the performance of classic NLP methods including bag-of-words [13], Word2Vec [12], and Long-Short-Term-Memory (LSTM) [1] on clinical prediction tasks. These methods solely learn from the training text, but as the clinical texts are very noisy, they either tend to overfit the data or fail to uncover valuable patterns behind the text. Inspired by large-scale pre-trained language models such as BERT [13], a series of works developed transformer models pre-trained on medical notes, including ClinicalBERT [10], BioBERT [14], and PubBERT [15]. These models fine-tune general language models on a large corpus of clinical texts and achieve superior performance. Despite the general nature of these models, the fine-tuning portion may not translate well to new settings. For example, PubBERT is trained on the clinical texts of a single tertiary hospital, and the colloquial terms used and procedures typically performed may not map to different hospitals. BioBERT is trained on Pubmed abstracts and articles, which also is likely poorly representative of the topics and terms used to, for example, describe a planned surgery.
Some other models propose to use joint learning models to learn from the clinical text, and structured data (e.g., measured blood pressure and procedure codes) [15, 16]. Since the structured data are less noisy, these models can produce better and more stable results. However, most assume the co-existence of text and structured data at the inference time, while procedure codes for a patient are frequently incomplete until much later.
**Machine learning and procedure codes.** Procedure codes are a handy resource for EHR data mining. Most works focus on automatic coding, using machine learning models to predict a patient's diagnostic codes from clinical notes [16, 17]. Some other works directly use the billing code to predict clinical outcomes [13, 15], whereas our work focuses on using the high correlation of codes and text data to augment the performance of each. Most of these works exploit the code hierarchies by human-defined logic based on domain knowledge. In contrast, our proposed framework uses GNN and can encode arbitrary relations between codes.
**Graph neural networks.** A series of works [18, 1] summarize GNN structures in which each node iteratively aggregates neighbor nodes' embedding and summarizes information in a neighborhood. The resulting node embeddings can be used to predict downstream tasks. RGCN [12] generalizes GNN to heterogeneous graphs where nodes and edges can have different types. Our model utilizes such heterogeneous properties on our proposed hierarchy graph encoding. Some works [13, 14] applied GNN to model interaction between EHRs, whereas our model uses GNN on the code hierarchy.
**Privileged information.** Our approach is related to the Learning Under Privileged Information (LUPI) [20] paradigm, where the privileged information is only accessible during training (in this case, billing code data). Many works have applied LUPI to other fields like computer vision [10] and metric learning [11].
## 3 Methods
Admissions with ICD codes and clinical text can be represented as \(D=\{(C_{1},A_{1},y_{1}),...,(C_{n},A_{n},y_{n})\}\), where \(C_{i}\) is a set of ICD codes for admission \(i\), \(A_{i}\) is a set of clinical texts, and \(y_{i}\) is the desired task label (e.g. mortality, re-admission, etc.). The ultimate goal is to minimize task-appropriate losses \(\mathcal{L}\) defined as:
\[\min_{f_{C},g_{C}}\sum_{i}\mathcal{L}(f_{C}(g_{C}(C_{i})),y_{i}) \tag{1}\]
and
\[\min_{f_{A},g_{A}}\sum_{i}\mathcal{L}(f_{A}(g_{A}(A_{i})),y_{i}), \tag{2}\]
where \(g_{C}\) and \(g_{A}\) embed codes and texts to vector representations respectively, and \(f_{C}\) and \(f_{A}\) map representations to the task labels. Note that \((g_{C},f_{C})\) and \((g_{A},f_{A})\) should operate independently during inference, meaning that even when one type of data is missing, we can still make accurate predictions.
In this section, we first propose a novel ICD ontology graph encoding method and describe how we use a Graph Neural Network (GNN) to parameterize \(g_{C}\). We then describe the two-stage Bi-LSTM (\(g_{A}\)) used to embed lengthy clinical texts. Finally, we describe how to use DCCA on the representations from \(g_{C}\) and \(g_{A}\) to generate representations that are less noisy and more informative, so that the downstream models \(f_{C}\) and \(f_{A}\) can make accurate predictions. Figure 1 shows the overall architecture of our multi-view joint learning framework.
### ICD Ontology as Graphs
The ICD ontology has a hierarchical scheme. We can represent it as a tree graph as shown in Figure 2, where each node is a medical concept and a node's children are finer divisions of the concept. All top-level nodes are connected to a root node. In this tree graph, only the leaf nodes correspond to observable codes in the coding system; all other nodes are the hierarchy of the ontology. This representation is widely adopted by many machine learning systems (Zhang et al. 2020; Li et al. 2021) as a refinement of the earlier approach of grouping together all codes at the top level of the hierarchy. A tree graph is ideal for algorithms based on message passing: it allows pooling of information within disjoint groups, and encodes a compact set of neighbors. However, it (1) ignores the granularity of different levels of classification, and (2) cannot encode similarities of nodes that are distant from each other. The latter point arises because a tree system may split on factors that are not the most relevant for a given task, such as the same procedure in an arm versus a leg, or because cross-system concepts are empirically very correlated in medical syndromes, such as kidney failure and certain endocrine disorders.
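A tree of this kind can be built directly from the codes themselves. A minimal sketch, under the simplifying assumption that every proper character prefix of a code is an internal concept node; a real system would instead use the official ontology tables, and the code strings below are only illustrative.

```python
def build_icd_tree(codes, root="ROOT"):
    """Map every node (code prefix) to its parent; leaves are full codes."""
    parent = {}
    for code in codes:
        prev = root
        for depth in range(1, len(code) + 1):
            node = code[:depth]
            parent.setdefault(node, prev)  # keep first parent seen
            prev = node
    return parent

parent = build_icd_tree(["E119", "E118", "N179"])
print(parent["E119"])  # 'E11'
print(parent["E11"])   # 'E1'
print(parent["N"])     # 'ROOT'
```

The resulting child-to-parent map is exactly the tree of Figure 2 (top), with only the full codes appearing as leaves.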
To overcome the aforementioned problems, we propose to augment the tree graph with edge types and jump connections. Unlike conventional tree graphs, where all edges have the same edge type, we use different edge types for connections between different levels in the tree graph as shown in the bottom left of Figure 2. For example, ICD-10 codes have seven characters and hence eight levels in the graph (including the root level). The edges between the root node and its children have edge Type 1, and the edges between the seventh level and the last level (actual code level) have edge Type 7. Different edge types not only encode whether two procedures are related but also encode the level of similarity between codes.
With multiple edge types introduced to the graph, we further extend the graph structure with jump connections. For each leaf node, we add one additional edge between the node and each of its predecessors up to the root node, as shown in the bottom right of Figure 2. The edge type depends on the level at which the predecessor resides. For example, in the ICD-10 tree graph, a leaf node has seven additional connections to its predecessors: its edge to the root node has Type 8 (the first seven types are used for connections between levels), and its edge to the third-level node has Type 10. Jump connections significantly increase the connectivity of the graph while still maintaining the hierarchical information of the original tree graph, because the jump connections are represented by a different set of edge types. Using jump connections helps uncover relationships between codes that are not present in the ontology. For example, the relationship between anemia and renal failure can be learned via jump connections even though these concepts diverge at the root node in ICD-9 and ICD-10. Moreover, GNNs suffer from over-smoothing, where all node representations converge to the same value when the GNN has too many layers [10]. Without jump connections, the maximal distance between two leaf nodes is twice the number of levels in the graph; capturing the connection between such nodes would require a GNN with that many layers, which is computationally expensive and prone to over-smoothing. Jump connections reduce the distance between any two leaf nodes to two, ensuring that the GNN can embed any correlation between two nodes. We discuss this in more detail in Section 3.2.

Figure 1: **Overall multi-view joint learning framework.** Blue boxes/arrows represent the text prediction pipeline, and green represents the code prediction pipeline. Dashed boxes and arrows denote processes that only happen during training. By removing the dashed parts, the text and code pipelines can predict tasks independently.

Figure 2: Top: Conventional encoding of the ICD ontology. Bottom left: ICD ontology encoded with relations; relation types for different levels are denoted by different colors. Bottom right: Jump connections create additional edges to leaf nodes' predecessors, denoted by dashed colored lines.
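The typed-edge construction can be sketched as follows. This is an illustrative simplification that assumes the hierarchy is given as a child-to-parent map and that all leaves sit at the same depth; it mirrors the numbering scheme above (tree edges typed by the parent's level, jump edges typed by an offset plus the predecessor's level).

```python
def typed_edges(parent, root="ROOT"):
    """(src, dst, edge_type) triples: hierarchy edges typed by the parent's
    level, plus jump connections from each leaf to all of its predecessors."""
    def level(n):                       # root is level 1
        d = 1
        while n != root:
            n, d = parent[n], d + 1
        return d

    leaves = set(parent) - set(parent.values())
    depth = max(level(leaf) for leaf in leaves)   # number of levels
    edges = []
    for node in parent:                 # tree edges, types 1 .. depth-1
        edges.append((node, parent[node], level(parent[node])))
    for leaf in leaves:                 # jump edges, types depth .. 2*depth-2
        anc = parent[leaf]
        while True:
            edges.append((leaf, anc, (depth - 1) + level(anc)))
            if anc == root:
                break
            anc = parent[anc]
    return edges

# Toy 5-level hierarchy (hypothetical code strings).
parent = {"E": "ROOT", "E1": "E", "E11": "E1", "E119": "E11", "E118": "E11"}
edges = typed_edges(parent)
print(len(edges))  # 13: 5 tree edges + 4 jump edges per leaf
```

For the 8-level ICD-10 graph the same scheme yields tree-edge types 1-7 and jump-edge types 8-14, matching the example in the text (leaf-to-root Type 8, leaf-to-third-level Type 10).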
### Embedding ICD Codes using GNN
We use GNN to embed medical concepts in the ICD ontology. Let \(G=\{V,E,R\}\) be a graph, where \(V\) is its set of the vertex (medical concepts in the ICD graph), \(E\subseteq\{V\times V\}\) is its set of edges (connects medical concept to its sub-classes), and \(R\) is the set of edge type in the graph (edges in different levels and jump connection). As each ICD code corresponds to one node in the graph, we use code and node interchangeably.
We adopt RGCN [17], which iteratively updates a node's embedding from its neighbor nodes. Specifically, the \(k^{th}\) layer of RGCN on node \(u\in V\) is:
\[h_{u}^{(k+1)}=\sigma\left(\sum_{r\in R}\sum_{v\in\mathcal{N}_{u}^{r}}\frac{1} {c_{u,r}}W_{r}^{(k)}h_{v}^{(k)}+W^{(k)}h_{u}^{(k)}\right) \tag{3}\]
where \(\mathcal{N}_{u}^{r}\) is the set of neighbors of \(u\) connected to \(u\) by relation \(r\), \(h_{v}^{(k)}\) is the embedding of node \(v\) after \(k\) GNN layers, \(h_{v}^{(0)}\) is a randomly initialized trainable embedding, \(W_{r}^{(k)}\) is a linear transformation on the embeddings of nodes in \(\mathcal{N}_{u}^{r}\), \(W^{(k)}\) updates the embedding of \(u\), and \(\sigma\) is a nonlinear activation function. The normalization factor is \(c_{u,r}=|\mathcal{N}_{u}^{r}|\).
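A dense NumPy sketch of one such layer is shown below; production implementations (e.g., the `RGCNConv` layers in graph libraries) use sparse operations and basis decomposition, but the message-passing logic of eq. (3) is the same.

```python
import numpy as np
from collections import Counter

def rgcn_layer(h, edges, W_rel, W_self):
    """One RGCN layer (eq. 3):
    h'_u = relu( sum_r sum_{v in N_u^r} W_r h_v / c_{u,r} + W_self h_u ),
    where edges is a list of (u, v, r) meaning v is an r-neighbor of u."""
    out = h @ W_self.T                               # self-update term
    c = Counter((u, r) for u, v, r in edges)         # c_{u,r} = |N_u^r|
    for u, v, r in edges:
        out[u] += (W_rel[r] @ h[v]) / c[(u, r)]      # normalized message
    return np.maximum(out, 0.0)                      # ReLU as sigma

rng = np.random.default_rng(0)
n, d, n_rel = 5, 8, 3
h = rng.normal(size=(n, d))
W_rel = rng.normal(size=(n_rel, d, d)) / np.sqrt(d)
W_self = rng.normal(size=(d, d)) / np.sqrt(d)
edges = [(0, 1, 0), (0, 2, 0), (1, 0, 1), (3, 4, 2)]
h1 = rgcn_layer(h, edges, W_rel, W_self)
print(h1.shape)  # (5, 8)
```

Note that node 0 has two Type-0 neighbors, so their messages are averaged (\(c_{0,0}=2\)), while a node with no incoming messages keeps only its self-update term.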
After \(T\) iterations, \(h_{u}^{(T)}\) can be used to learn downstream tasks. Since a patient can have a set of codes, \(C_{i}=\{v_{i1},v_{i2},v_{i3},...\}\subseteq V\), we use sum and max pooling to summarize \(C_{i}\) in an embedding function \(g_{C}\):
\[g_{C}(C_{i})=\sum_{v\in C_{i}}h_{v}^{(T)}\oplus\max(\{h_{v}^{(T)}|v\in C_{i} \}), \tag{4}\]
where \(\max\) is the element-wise maximization, and \(\oplus\) represents vector concatenation. Summation more accurately summarizes the codes' information, while maximization provides regularization and stability in DCCA, which we will discuss in Section 3.4.
Training RGCN helps embed the ICD codes into vectors based on the defined ontology. Nodes that are close together in the graph will be assigned similar embeddings because of their similar neighborhood. Moreover, distant nodes that appear together frequently in the health record can also be assigned correlated embeddings because the jump connection keeps the maximal distance between two nodes at two. Consider a set of codes \(C=\{u,v\}\), because of the summation in the code pooling, using a 2-layer RGCN, we will have non-zero gradients of \(h_{u}^{T}\) and \(h_{v}^{T}\) with respect to \(h_{v}^{0}\) and \(h_{u}^{0}\), respectively, which connects the embeddings of \(u\) and \(v\). In contrast, applying RGCN on a graph without jump connections will result in zero gradients when the distance between \(u\) and \(v\) is greater than two.
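The pooling of eq. (4), concatenating a sum pool with an element-wise max pool over an admission's code embeddings, can be sketched as:

```python
import numpy as np

def pool_codes(H):
    """g_C from eq. (4): sum pooling concatenated with element-wise
    max pooling over the final-layer embeddings H (one row per code)."""
    return np.concatenate([H.sum(axis=0), H.max(axis=0)])

# Two codes with (toy) 3-dimensional embeddings.
H = np.array([[1.0, -2.0, 0.5],
              [0.0,  3.0, 0.5]])
print(pool_codes(H))  # [1.  1.  1.  1.  3.  0.5]
```

The output dimension is twice the embedding dimension, with the max half providing the rank-stabilizing nonlinearity discussed later for DCCA.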
### Embedding Clinical Notes using Bi-LSTM
Patients can have different numbers of clinical texts in each encounter. Where applicable, we sort the texts in an encounter in ascending order by time, and have a set of texts \(A_{i}=(a_{i1},a_{i2},...,a_{in})\). In our examples, we concatenate the texts together to a single document \(H_{i}\), and we have \(H_{i}=CAT(A_{i})=\bigoplus_{j=\{1...n\}}a_{ij}\). We leave to future work the possibility of further modeling the collection.
The concatenated text can be very lengthy, with over ten thousand word tokens, and RNNs suffer from diminishing gradients even with LSTM-type modifications. While attention mechanisms are effective for arbitrary long-range dependence, they require large sample sizes and expensive computational resources. Hence, following a previously successful approach [12], we adopt a two-stage model that stacks a low-frequency RNN on a local RNN. Given \(H_{i}\), we first split it into blocks of equal size \(b\), \(H_{i}=\{H_{i1},H_{i2},...,H_{iK}\}\), where the last block \(H_{iK}\) is padded to length \(b\). The two-stage model first generates block-wise text embeddings by
\[l_{H_{ik}}=LSTM(\{w(H_{ik1}),w(H_{ik2}),...,w(H_{ikb})\}), \tag{5}\]
where \(w(\cdot)\) is a Word2Vec [13] trainable embedding function. The representation of \(A_{i}\) is given by
\[g_{A}(A_{i})=LSTM(\{l_{H_{i1}},...,l_{H_{iK}}\}). \tag{6}\]
The two-stage learning scheme minimizes the effect of diminishing gradients while maintaining the temporal order of the text.
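The input preparation for this two-stage scheme, splitting the concatenated token sequence into equal blocks and padding the last one, can be sketched as follows; the `"<pad>"` token and example tokens are illustrative choices, not the paper's.

```python
def split_into_blocks(tokens, b, pad="<pad>"):
    """Split a long token sequence into blocks of size b, padding the last
    block, as in the two-stage Bi-LSTM input preparation (Section 3.3)."""
    blocks = [tokens[i:i + b] for i in range(0, len(tokens), b)]
    if blocks and len(blocks[-1]) < b:
        blocks[-1] = blocks[-1] + [pad] * (b - len(blocks[-1]))
    return blocks

doc = ["pt", "admitted", "with", "acute", "renal", "failure", "plan"]
print(split_into_blocks(doc, 3))
# [['pt', 'admitted', 'with'], ['acute', 'renal', 'failure'],
#  ['plan', '<pad>', '<pad>']]
```

Each block then feeds the local LSTM of eq. (5), and the sequence of block embeddings feeds the outer LSTM of eq. (6).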
### DCCA between Graph and Text Data
As previously mentioned, ICD codes may not be available at the time when models would be most useful, but they are structured and easier to analyze, while clinical text is readily available but very noisy. Despite the different data formats, they usually describe the same information: the main diagnoses and treatments of an encounter. Borrowing ideas from multi-view learning, we use each view to supplement the other. Many existing multi-view learning methods require the presence of both views during inference and cannot accommodate the applications we envision. Specifically, we use DCCA [1, 12] on \(g_{A}(A_{i})\) and \(g_{C}(C_{i})\) to learn a joint representation. DCCA solves the
following optimization problem,
\[\begin{array}{ll}\max\limits_{g_{C},g_{A},U,V}&\frac{1}{N}tr(U^{T}M_{C}^{T}M_{A} V)\\ s.t.&U^{T}(\frac{1}{N}M_{C}^{T}M_{C}+r_{C}I)U=I,\\ &V^{T}(\frac{1}{N}M_{A}^{T}M_{A}+r_{A}I)V=I,\\ &u_{i}^{T}M_{C}^{T}M_{A}v_{j}=0,\quad\forall i\neq j,\quad 1\leq i,j\leq L\\ &M_{C}=stack\{g_{C}(C_{i})|\forall i\},\\ &M_{A}=stack\{g_{A}(A_{i})|\forall i\},\end{array} \tag{7}\]
where \(M_{C}\) and \(M_{A}\) are the matrices stacked from the vector representations of codes and texts, \((r_{C},r_{A})>0\) are regularization parameters, \(U=[u_{1},...,u_{L}]\) and \(V=[v_{1},...,v_{L}]\) map the GNN and Bi-LSTM outputs to maximally correlated embeddings, and \(L\) is a hyper-parameter controlling the number of correlated dimensions. We use \(g_{C}(C_{i})U\) and \(g_{A}(A_{i})V\) as the final embeddings of codes and texts. By maximizing their correlation, we force the weak learner (usually the LSTM) to learn a representation similar to that of the strong learner (usually the GNN) and to filter out inputs unrelated to the structured data. Hence, when a record's codes can yield correct results, its text embedding is highly correlated with that of the codes, and the text should also be likely to produce correct predictions.
During development, we found that a batch of ICD data often contains many repeated codes with the same embedding, so that SUM pooling tends to yield embedding matrices \(M_{C}\) and \(M_{A}\) that are less than full rank, which causes instability in solving the optimization problem. A nonlinear max pooling function helps prevent this.
The above optimization problem suggests full-batch training. However, the computation graph would be too large for the text and code data. Following [20], we use large mini-batches to train the model; experimental results suggest that they sufficiently represent the overall distribution. After training, we stack \(M_{C}\) and \(M_{A}\) again from the output on all data and obtain \(U\) and \(V\) as fixed projection matrices from equation (7).
After obtaining the projection matrices and embedding models, we attach two MLPs (\(f_{A}\) and \(f_{C}\)) to the embedding models as the classifier, and train/fine-tune \(f_{A}\) (\(f_{C}\)) and \(g_{A}\) (\(g_{C}\)) together in an end-to-end fashion with respect to the learning task using the loss functions in (1) and (2).
## 4 Predicting Unseen Codes
In the previous section, we discuss the formulation of the ICD ontology and how we can use DCCA to generate embeddings that share representations across views. In this section, we will demonstrate another use case for DCCA-regularized embeddings. In real-world settings, the set of codes that researchers observe in training is usually a small subset of the entire ICD ontology. In part, this is due to the extreme specificity of some ontologies, with ICD-10-PCS having 87,000 distinct procedures and ICD-10-CM 68,000 diagnostic possibilities, before considering that some codes represent a further modification of another entity. Even in large training samples, some codes will likely appear zero or only a few times. Traditional models using independent code embeddings are expected to function poorly on rare codes and to produce arbitrary output on previously unseen codes, even if similar entities are contained in the training data.
Our proposed model and the graph-embedded hierarchy can naturally address the above challenge. Its two features enable predictions of novel codes at inference:
* **Relational embedding.** By embedding the novel code in the ontology graph, we are able to exploit the representation of its neighbors. For example, a rare diagnostic procedure's embedding is highly influenced by other procedures that are nearby in the ontology.
* **Jump connection.** While other methods also exploit the proximity defined by the hierarchy, as we suggested above, codes can be highly correlated but remain distant in the graph. Jump connections increase the graph connectivity; hence, our model can search the whole hierarchy for potential connections to the missing code. Because the connections across different levels are assigned different relation types, our GNN can also differentiate the likelihood of connections across different levels and distances.
A potential problem during inference, however, is that the model does not automatically differentiate between novel and previously seen codes. Because the model never uses novel codes to generate any \(g_{C}(C_{i})\), the embeddings of seen and novel nodes experience different gradient update processes and hence come from different distributions; at inference, the model would nevertheless treat them as if they were from the same distribution. Such transferability and credibility of novel node embeddings are not guaranteed, and applying them homogeneously may result in untrustworthy predictions.
Hence, we propose a labeling training scheme to teach the model how to handle novel nodes during inference. Let \(G=\{V,E,R\}\) be the ICD graph and \(U\) be the set of unique nodes in the training set, \(U\subseteq V\). We select a random subset \(U_{s}\) from \(U\) to form the seen nodes during training, and \(U_{u}=V\setminus U_{s}\) be treated as unseen nodes. We augment the initial node embeddings with 1-0 labels, formally,
\[\begin{split} h_{u}^{0+}=h_{u}^{0}\oplus 1& \forall u\in U_{s}\\ h_{v}^{0+}=h_{v}^{0}\oplus 0&\forall v\in V\setminus U_{s} \end{split} \tag{8}\]
Note that we still use \(h_{u}^{0}\) as the trainable node embedding, while the input to the RGCN is augmented to \(h_{u}^{0+}\). We further extract data that only contain the seen nodes to form the seen data: \(D_{s}=\{(C_{i},A_{i},y_{i})|c\in U_{s}\forall c\in C_{i}\}\).
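The label augmentation in (8) is a one-line operation on the initial embedding matrix; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def augment_with_seen_flag(h0, seen_ids):
    # h0: (|V|, d) initial node embeddings; seen_ids: indices of nodes in U_s.
    # Appends 1 for nodes treated as seen during DCCA and 0 otherwise (eq. 8).
    flag = np.zeros((h0.shape[0], 1))
    flag[list(seen_ids)] = 1.0
    return np.concatenate([h0, flag], axis=1)  # h^{0+}, the RGCN input
```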
We, again, use DCCA on \(D_{s}\) to maximize the correlation between the text representation and the code representation. After obtaining the projection matrix, we train on the entire dataset \(D\) to minimize the prediction loss. Note that \(D\) contains nodes that do not appear in the DCCA process and are labeled differently from the seen nodes. The different labels allow the RGCN to tell whether a node is unseen during the DCCA process. If unseen nodes hurt the prediction, it will be reflected in the prediction loss. Intuitively, if unseen nodes are less credible, data with more 0-labeled nodes will
have poor prediction results; the GNN can capture this characteristic and reflect it in the prediction by assigning less extreme positive/negative scores to queries with more 0-labeled nodes. The labeling training scheme essentially blocks a part of the training codes during DCCA and thus obtains embeddings for \(U_{s}\) and \(U_{u}\) from different distributions. We then train on the entire training dataset so that the model learns to handle seen and unseen codes heterogeneously. This setup mimics the actual inference scenario. Note that despite being different, the distributions of seen and unseen node embeddings can be similar and overlapping. Thus, the additional 1-0 labeling is necessary to differentiate them.
## 5 Experimental Results
**Datasets.** We use two datasets to evaluate the performance of our framework: **Proprietary dataset.** This dataset contains medical records of 38,551 admissions at the local hospital from 2018 to 2021. Each entry is also associated with a free-text procedural description and a set of ICD-10 _procedure codes_. We aim to use our framework to predict a set of post-operative outcomes, including delirium (DEL), dialysis (DIA), troponin high (TH), and death in 30 days (D30). **MIMIC-III dataset**[12]. This dataset contains medical records of 58,976 unique ICU hospital admissions from 38,597 patients at the Beth Israel Deaconess Medical Center between 2001 and 2012. Each admission record is associated with a set of ICD-9 _diagnosis codes_ and multiple clinical notes from different sources, including case management, consult, ECG, discharge summary, general nursing, etc. We aim to predict two outcomes from the codes and texts: (1) In-hospital mortality (MORT). We use admissions with hospital_expire_flag=1 in the MIMIC-III dataset as the positive data and sample the same number of negative data to form the final dataset. All clinical notes generated on the last day of admission are filtered out to avoid directly mentioning the outcome. We use all clinical notes ordered by time and take the first 2,500 word tokens as the input text. (2) 30-day readmission (R30). Following [12], we label admissions where a patient is readmitted within 30 days as positive and sample an equal number of negative admissions. Newborn and death admissions are filtered out. We only use clinical notes of type Discharge Summary and take the first 2,500 word tokens as the input text. Sample sizes can be found in Table 3.
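The MORT cohort construction described above can be sketched with pandas; the `hospital_expire_flag` column name follows the MIMIC-III ADMISSIONS table, while the helper name and seed are our illustrative choices:

```python
import pandas as pd

def build_mort_cohort(admissions: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    # Positive class: admissions ending in death; negatives: an equal-sized
    # random sample of the remaining admissions, then shuffle the union.
    pos = admissions[admissions["hospital_expire_flag"] == 1]
    neg = admissions[admissions["hospital_expire_flag"] == 0].sample(
        n=len(pos), random_state=seed)
    return (pd.concat([pos, neg])
              .sample(frac=1, random_state=seed)
              .reset_index(drop=True))
```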
**Effectiveness of DCCA training.** We split the dataset with a train/validation/test ratio of 8:1:1 and use 5-fold cross-validation to evaluate our model. GNN and Bi-LSTM are optimized in the DCCA process using the training set. The checkpoint model with the best validation correlation is picked to compute the projection matrix _only from_ the training dataset. Then we attach an MLP head to the target prediction model (either the GNN or the Bi-LSTM) and fine-tune the model in an end-to-end fashion to minimize the prediction loss.
For this task, we compare our framework to popular pre-trained models ClinicalBERT and BERT. We also compare it to the base GNN and Bi-LSTM to show the effectiveness of our proposed framework. We additionally provide experimental results where both text and code embedding
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{Local Data} & \multicolumn{2}{c}{MIMIC-III} \\ \cline{2-7} & DEL & DIA & TH & D30 & MORT & R30 \\ \hline \hline & \(74.6\pm 1.2\) & \(87.3\pm 13.1\) & \(67.4\pm 6.9\) & \(82.8\pm 3.7\) & \(84.5\pm 3.6\) & \(60.4\pm 2.8\) \\ & \(73.2\pm 0.6\) & \(87.4\pm 14.9\) & \(68.5\pm 3.4\) & \(83.8\pm 2.1\) & \(85.7\pm 3.6\) & \(61.3\pm 2.3\) \\ & \(74.9\pm 1.0\) & \(89.1\pm 12.5\) & \(\textbf{70.8}\pm 0.9\) & \(83.5\pm 1.9\) & \(85.1\pm 4.1\) & \(61.7\pm 2.6\) \\ & \(\textbf{75.3}\pm 1.1\) & \(\textbf{95.4}\pm 0.7\) & \(70.6\pm 3.2\) & \(\textbf{84.4}\pm 1.4\) & \(\textbf{86.4}\pm 4.2\) & \(\textbf{63.4}\pm 2.8\) \\ \hline \hline & & & & & & \\ \end{tabular}
\end{table}
Table 2: Ablation Study of the Labeling Training Scheme under Unseen Code Setting in AUROC (%).
are used to make predictions; here we compare our model with a vanilla multi-view model without DCCA. For all baselines, we report the Area Under the Receiver Operating Characteristic curve (AUROC) as the evaluation metric; Average Precision (AP) can be found in Appendix A. For all datasets, we set \(L\), the number of correlated dimensions, to 20 and report the total amount of correlation obtained (Corr).
Table 1 shows the main results. For clinical notes prediction, we can see that the code-augmented model consistently outperforms the base Bi-LSTM, with an average relative performance increase of 2.4% on the proprietary data and 1.6% on the MIMIC-III data. Our proposed method outperforms BERT on most tasks and achieves very competitive performance compared to that of ClinicalBERT. Note that our model trains only on the labeled EHR data, without the unsupervised pre-training on extra data that BERT and ClinicalBERT rely on. ClinicalBERT has previously been trained and fine-tuned on the entire MIMIC dataset, including the discharge summaries, so these results may overestimate its performance.
For ICD code prediction, we see that DCCA brings a 1.5% performance increase on the proprietary data. Since the codes model significantly outperforms the language model on all tasks, the RGCN is a much stronger learner and has less information to learn from the text model. Comparing the results of the proprietary and the MIMIC datasets, we can see that DCCA brings a more significant performance boost to the proprietary dataset, presumably because of the larger amount of correlation obtained in the proprietary dataset (85% versus 58%). Moreover, an important difference in these datasets is the ontology used: MIMIC-III uses ICD-9 and the proprietary dataset uses ICD-10. The ICD-9 ontology tree has a height of four, which is much smaller than that of ICD-10 and is more coarsely classified. This may also explain the smaller performance gains in MIMIC-III.
The combined model with DCCA only brings a slight performance boost compared to the one without because the amount of information for the models to learn is equivalent. Nevertheless, the DCCA model encourages the two views' embeddings to agree and allows independent prediction. In contrast, a vanilla multi-view model does not help the weaker learner learn from the stronger learner.
**Unseen Codes Experiments.** We identify the set of unique codes in the dataset. We split the codes into k folds and run k experiments, one per fold. For each experiment, we pick one fold as the unseen code set. Data that contain at least one unseen code are used as the evaluation set. The evaluation set is split into two halves as the valid and test sets. The rest of the data forms the training set. We pick another fold from the code split as the DCCA unseen code set. Training set data that do not contain any DCCA unseen code form the DCCA training set. Then, the entire training set is used for task fine-tuning. Because the distribution of codes is not uniform, the amount of data in each split is not equal across folds. We use k=10 for the proprietary dataset and k=20 for the MIMIC-III dataset to generate a reasonable data division. We provide average split sizes in Appendix C.
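The split procedure above can be sketched as follows (a simplified version: records are (code-set, label) pairs and we omit the separate DCCA-unseen fold; all names are ours):

```python
import numpy as np

def unseen_code_split(records, all_codes, k=10, fold=0, seed=0):
    # records: list of (code_set, label) pairs; all_codes: unique codes.
    # One fold of codes is held out as "unseen"; any record containing an
    # unseen code goes to evaluation, which is then halved into valid/test.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(sorted(all_codes)), k)
    unseen = set(folds[fold])
    eval_idx = [i for i, (codes, _) in enumerate(records) if codes & unseen]
    train_idx = [i for i in range(len(records)) if i not in set(eval_idx)]
    half = len(eval_idx) // 2
    return train_idx, eval_idx[:half], eval_idx[half:]
```

Because code frequencies are not uniform, the resulting split sizes differ from fold to fold, as noted in the text.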
For this task, we compare our method with the base GNN, base GNN augmented with the same labeling training strategy, and DCCA-optimized GNN to demonstrate the outstanding performance of our framework. Similarly, we report AUROC and include AP in Appendix A.
Table 2 summarizes the results of the unseen codes experiments. Note that all test data contain at least one code that never appears in the training process. In such a more difficult inference scenario, comparing the plain RGCN with the DCCA-augmented RGCN, we see a 2.2% average relative performance increase on the proprietary dataset. With the labeling learning method, we can further improve the performance gain to 4.2%. On the MIMIC-III dataset, the performance boost of our model over the plain RGCN is 3.6%, demonstrating our method's ability to differentiate seen and unseen codes. We also notice that DCCA alone only slightly improves the performance on the MIMIC-III dataset (1.4%). We suspect that while the labeling training scheme helps distinguish seen and unseen codes, the number of data used in the DCCA process is also reduced. As MORT and R30 datasets are smaller and a small DCCA training set may not faithfully represent the actual data distribution, the regularization effect of DCCA diminishes.
## 6 Conclusions
Predicting patient outcomes from EHR data is an essential task in clinical ML. Conventional methods that solely learn from clinical texts suffer from poor performance, and those that learn from codes have limited application in real-world clinical settings. In this paper, we propose a multi-view framework that jointly learns from the clinical notes and ICD codes of EHR data using Bi-LSTM and GNN. We use DCCA to create shared information but maintain each view's independence during inference. This allows accurate prediction using clinical notes when the ICD codes are missing, which is commonly the case in pre-operative analysis. We also propose a label augmentation method for our framework, which allows the GNN model to make effective inferences on codes that are not seen during training, enhancing generalizability. Experiments are conducted on two different datasets. Our methods show consistent effectiveness across tasks. In the future, we plan to incorporate more data types in the EHR and combine them with other multi-view learning methods to make more accurate predictions.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \# Admission & \# Pos. Samples & \# Unique codes \\ \hline DEL & 11,064 & 5,367 & 5,637 \\ DIA & 38,551 & 1,387 & 9,320 \\ TH & 38,551 & 1,235 & 9,320 \\ D30 & 38,551 & 1,444 & 9,320 \\ \hline MORT & 5,926 & 2,963 & 4,448 \\ R30 & 10,998 & 5,499 & 3,645 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Statistics of different datasets and tasks. |
2309.03348 | A General Description of Criticality in Neural Network Models | Recent experimental observations have supported the hypothesis that the
cerebral cortex operates in a dynamical regime near criticality, where the
neuronal network exhibits a mixture of ordered and disordered patterns.
However, A comprehensive study of how criticality emerges and how to reproduce
it is still lacking. In this study, we investigate coupled networks with
conductance-based neurons and illustrate the co-existence of different spiking
patterns, including asynchronous irregular (AI) firing and synchronous regular
(SR) state, along with a scale-invariant neuronal avalanche phenomenon
(criticality). We show that fast-acting synaptic coupling can evoke neuronal
avalanches in the mean-dominated regime but has little effect in the
fluctuation-dominated regime. In a narrow region of parameter space, the
network exhibits avalanche dynamics with power-law avalanche size and duration
distributions. We conclude that three stages may be responsible for
reproducing the synchronized bursting: mean-dominated subthreshold dynamics,
fast-initiating a spike event, and time-delayed inhibitory cancellation.
Remarkably, we illustrate the mechanisms underlying critical avalanches in the
presence of noise, which can be explained as a stochastic crossing state around
the Hopf bifurcation under the mean-dominated regime. Moreover, we apply the
ensemble Kalman filter to determine and track effective connections for the
neuronal network. The method is validated on noisy synthetic BOLD signals and
could exactly reproduce the corresponding critical network activity. Our
results provide a special perspective to understand and model the criticality,
which can be useful for large-scale modeling and computation of brain dynamics. | Longbin Zeng, Fengjian Feng, Wenlian Lu | 2023-08-25T07:15:19Z | http://arxiv.org/abs/2309.03348v1 | # A General Description of Criticality in Neural Network Models
###### Abstract
Recent experimental observations have supported the hypothesis that the cerebral cortex operates in a dynamical regime near criticality, where the neuronal network exhibits a mixture of ordered and disordered patterns. However, a comprehensive study of how criticality emerges and how to reproduce it is still lacking. In this study, we investigate coupled networks with conductance-based neurons and illustrate the co-existence of different spiking patterns, including asynchronous irregular (AI) firing and synchronous regular (SR) state, along with a scale-invariant neuronal avalanche phenomenon (criticality). We show that fast-acting synaptic coupling can evoke neuronal avalanches in the mean-dominated regime but has little effect in the fluctuation-dominated regime. In a narrow region of parameter space, the network exhibits avalanche dynamics with power-law avalanche size and duration distributions. We conclude that three stages may be responsible for reproducing the synchronized bursting: mean-dominated subthreshold dynamics, fast-initiating a spike event, and time-delayed inhibitory cancellation. Remarkably, we illustrate the mechanisms underlying critical avalanches in the presence of noise, which can be explained as a stochastic crossing state around the Hopf bifurcation under the mean-dominated regime. Moreover, we apply the ensemble Kalman filter to determine and track effective connections for the neuronal network. The method is validated on noisy synthetic BOLD signals and could exactly reproduce the corresponding critical network activity. Our results provide a special perspective to understand and model the criticality, which can be useful for large-scale modeling and computation of brain dynamics.
keywords: Criticality, Neuronal avalanches, Bifurcation, Ensemble kalman filter +
Footnote †: journal: Journal of the American Statistical Association
## 1 Introduction
Criticality, usually characterized as neuronal avalanches, has been widely observed in both in vitro and in vivo biophysical experiments [9; 24; 2]. In the critical state, population activities tend to operate at the boundary between order and disorder, occasionally and unpredictably appearing synchronized or irregularly spiking. The term "neuronal avalanches" describes a spiking pattern, in which the observed bursts of suprathreshold activity were interspersed by silence and showed a clear separation of time scales (their duration being much shorter than the inter avalanche intervals). The spatial and temporal distributions of neuronal avalanches have been identified following power-law statistics, implying that this brain state operates near a nonequilibrium critical point. Furthermore, previous studies have also shown that
the whole-brain activity dynamics measured with noninvasive techniques, such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), can also be well described by power-law statistics [37].
Until now, most studies have focused on scale-free behavior, showing a power law distribution of empirically observed variables as evidence of criticality. Previous animal studies both in vitro and in vivo, together with computational modeling, have strongly suggested that the avalanche dynamics in neural systems may arise at the critical state in excitation-inhibition balanced networks and can be regulated by several intrinsic network properties, such as short-term synaptic plasticity and the balance level between excitation and inhibition [2; 17; 20]. With the help of simulating the stochastic dynamical model, we can fully understand the underlying mechanism of biological phenomena [29; 28]. Jingwen's results demonstrate that the coordinated dynamics of criticality and asynchronous dynamics can be generated by the same neural system if excitatory and inhibitory synapses are tuned appropriately [18]. In the excitation-inhibition balanced network, the critical state occurs at the point of Hopf bifurcation from a fixed point to a periodic motion [19]. In particular, neuronal avalanches can be triggered at an intermediate level of input heterogeneity, but heterogeneous output connectivity cannot evoke avalanche dynamics [36]. Notably, this finding is of particular interest because the coemergence of these multiscale cortical activities has been believed to ensure the cost-efficient information capacity of the brain, further emphasizing the functional significance of avalanche dynamics in neuronal information processing [37].
Recent studies have revealed that understanding how neurons communicate with each other requires a fundamental understanding of neurotransmitter receptor structure and function [31]. Under the unified assumption that the cortical neuronal network is sustained in an E-I balanced state, network interactions can exhibit much variability due to the difference between fast- and slow-acting synapses. Theoretically, the variability in the synaptic connection pattern leads to stochasticity at the population level, which may further affect the spatiotemporal patterns of collective firing activity. Such stochastic effects indicate that different synaptic connections may be a potential factor in the regulation of neuronal avalanches. Another essential factor is the background input, which in network modeling may be responsible for reproducing avalanche criticality [25; 11]. If a cortical neuron works as an integrator over a relatively long time scale, it receives a substantial amount of excitatory drive, which can be referred to as a mean-dominated current; the neuron is effectively summing inputs from a large number of sources over an extended period of time. On the other hand, when the mean drive is significantly smaller than the firing threshold, neurons can be activated by large fluctuations in the external current, leading to increased irregularity in their firing patterns. It remains controversial which context and network structure contribute to evoking critical dynamics in the neuronal network.
Our paper aims to provide an intuitive, compact description of the spontaneous and critical state of cortical cells in terms of network coupling at mean-dominated and fluctuation-dominated afferent currents. To address this issue, we begin by simulating a biological neuronal network in which spike communication involves both fast and slow synaptic types. In the mean-dominated regime, we find a clear transition from asynchronous firing to a synchronous state in the parameter space, along with a power-law avalanche criticality. Different from the DP universality class, the model here reveals exponents of \(\tau_{s}=1.6\) and \(\tau_{t}=2.1\) obtained through a maximum-likelihood estimator (MLE). We emphasize that three network stages may be responsible for the avalanche criticality: mean-dominated subthreshold dynamics, synchronous initiation of a spike event, and delayed termination of the burst. Consistent with previous studies, we demonstrate that avalanches of neuronal activity preferentially emerge in a moderately synchronized state of collective firing activity, in which neurons are positioned around a Hopf bifurcation point. In the presence of noise, the spiking activity passes through the bifurcation point, and the network produces occasional oscillations and low stable firing, resulting in a moderately synchronous state. Finally, we implement a data assimilation method to fit the model to its BOLD signal and identify network parameters of criticality. These findings provide a new perspective for understanding neuronal network criticality and enable researchers to model a biologically plausible network that exhibits criticality.
## 2 Model and methods
### Neuronal network
The network is composed of \(N_{E}\) excitatory (80%) and \(N_{I}\) inhibitory (20%) neurons [1]. Each neuron receives \(K\) synaptic contacts from excitatory neurons and inhibitory neurons in a randomly connected manner, causing a 4/1 ratio of E/I synapses in the local network [16]. Each node in the network is modeled as a leaky integrate-and-fire (LIF) neuron, and nodes interact through biophysical synapses [14].
This computational neuronal model is a nonlinear operator from a set of input synaptic spike trains to an output axon spike train, described by three components: the subthreshold equation of membrane potential, which describes the transformation from the synaptic currents of diverse synapses to the membrane potential; the synaptic current equation, which describes the transformation from the input spike trains to the corresponding synaptic currents; and the threshold scheme, which gives the condition for triggering a spike by the membrane potential. This type of neuronal model has been shown to settle into a stationary fixed point, typically characterized by a stable pattern of firing activity [7]. The dynamics of the subthreshold membrane potential \(V(t)\) for excitatory (inhibitory) neuron \(j\) can be described as
\[\begin{cases}C_{j}\dot{V_{j}}=-g_{L,j}(V_{j}-V_{L})-\sum_{u}S_{j,u}(V_{j}-V_{u })g_{j,u}+I_{ext},\\ \dot{S_{j,u}}=-\dfrac{1}{\tau_{u}}S_{j,u}+\sum_{m,k}\omega_{m}\delta(t-t_{m}^{ k}).\end{cases} \tag{1}\]
Herein, the neuronal capacitance of neuron \(j\) is denoted by \(C_{j}\), and an external current \(I_{ext}\) is applied as a background driver. When the membrane potential reaches the firing threshold \(V_{th}\), the neuron emits a spike and its membrane potential is reset to \(V_{reset}\) for a refractory period \(T_{ref}\). \(g_{L}\) and \(g_{u}\) are the leak and synaptic conductance with equilibrium potentials \(V_{L}\) and \(V_{u}\) respectively. The gating variable \(S_{j}\) represents the fractions of open channels of synaptic type \(u\) with a decay time constant \(\tau_{u}\). Postsynaptic currents are mediated by fast-acting (AMPA) and slow-acting (NMDA) excitatory and inhibitory GABAergic synaptic receptor types [5; 7]. These three synap
tic types are abbreviated as \(u=\{f,s,i\}\) for compact presentation. The \(\delta\) function represents the incoming spike train and \(t_{m}^{k}\) denotes the \(k\)th spike from the \(m\)th neuron. To account for network heterogeneity, we sample the synaptic weight \(\omega_{m}\) from a uniform distribution.
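As an illustration of eq. (1), a forward-Euler sketch for a single neuron with one fast excitatory synapse driven by Poisson input (illustrative units ms/mV/nF/μS; the input construction and all names are our choices, not the paper's code):

```python
import numpy as np

def simulate_lif(I_ext, dt=0.1, C=0.5, g_L=0.025, V_L=-70.0, V_th=-50.0,
                 V_reset=-55.0, t_ref=2.0, tau_s=2.0, V_E=0.0, g_s=0.01,
                 rate_in=0.0, w=0.5, seed=0):
    # Forward-Euler integration of eq. (1) for one neuron with a single
    # fast excitatory synapse; rate_in is a Poisson input rate in spikes/ms.
    rng = np.random.default_rng(seed)
    V, S, refractory = V_L, 0.0, 0.0
    spike_times = []
    for t in range(len(I_ext)):
        # Gating variable: exponential decay plus weighted incoming spikes.
        S += (-S / tau_s) * dt + w * rng.poisson(rate_in * dt)
        if refractory > 0.0:               # hold at V_reset during refractoriness
            refractory -= dt
            continue
        dV = (-g_L * (V - V_L) - g_s * S * (V - V_E) + I_ext[t]) * dt / C
        V += dV
        if V >= V_th:                      # threshold scheme
            spike_times.append(t * dt)
            V, refractory = V_reset, t_ref
    return np.array(spike_times)
```

With these illustrative units the rheobase current is \(g_L(V_{th}-V_L)=0.5\): constant drive above it fires the neuron, below it the membrane settles subthreshold.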
To simulate external input in addition to local neuronal network activity, we incorporate an Ornstein-Uhlenbeck (OU)-type current with varying parameters \((\mu_{ext},\sigma_{ext},\tau_{ext})\) into our model,
\[\tau_{ext}\ dI_{ext}=(\mu_{ext}-I_{ext})dt+\sqrt{2\tau_{ext}}\sigma_{ext}dW_{t}. \tag{2}\]
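Eq. (2) can be integrated with the Euler–Maruyama scheme; a minimal sketch with the default parameters of Table 1 (step size, duration, and seed are our choices):

```python
import numpy as np

def simulate_ou(mu=0.45, sigma=0.15, tau=2.5, dt=0.1, n_steps=10000, seed=0):
    # Euler-Maruyama integration of eq. (2): dI = (mu - I) dt/tau
    # + sqrt(2/tau) * sigma * dW; stationary mean mu, stationary std sigma.
    rng = np.random.default_rng(seed)
    I = np.empty(n_steps)
    I[0] = mu
    for t in range(1, n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        I[t] = I[t - 1] + (mu - I[t - 1]) * dt / tau + np.sqrt(2.0 / tau) * sigma * dW
    return I
```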
We will present a systematic discussion of spiking activities in a conductance-based neuronal network under two distinct current-driven scenarios. The default neuronal parameters and network settings used in this paper are shown in the following Table 1.
### Balloon model
The blood-oxygen-level-dependent (BOLD) signal reflects changes in deoxyhemoglobin driven by localized changes in brain blood flow and blood oxygenation, which are coupled to underlying neuronal activity by a process termed neurovascular coupling. In this paper, we adopt the Balloon-Windkessel model [10] for BOLD signals and treat its output as the observation in the data assimilation procedure. The Balloon model is based on a set of ordinary differential equations that describe the hemodynamic response to changes in neuronal activity. Neuronal activity \(z\) drives a flow-inducing signal \(s\), cerebral blood flow \(f\), blood volume \(v\), and deoxyhemoglobin content \(q\), and outputs the BOLD
\begin{table}
\begin{tabular}{l l l} \hline Symbol & Description & Value \\ \hline \(N\) & Total number of neurons & 2000 \\ \(K\) & Average number of input connections & 100 \\ \(C\) & Capacitance & 0.5 nF \\ \(T_{ref}\) & Refractory period & 2 ms \\ \(V_{th}\) & Voltage threshold & \(-\)50 mV \\ \(V_{reset}\) & Reset potential & \(-\)55 mV \\ \(g_{L}\) & Leak conductance & 25 nS \\ \(\tau_{u}\) & Decay time of synaptic receptor & \((2,40,10)\) ms \\ \(V_{u}\) & Reversal potential of different receptors & \((0,0,-70)\) mV \\ \(V_{L}\) & Equilibrium potential of leak & \(-\)70 mV \\ \(\omega_{m}\) & Synaptic weight & Uniform[0, 1] \\ \(\mu_{ext}\) & Mean level of OU current & 0.45 \(\mu\)A \\ \(\sigma_{ext}\) & Fluctuation of OU current & 0.15 \(\mu\)A \\ \(\tau_{ext}\) & Correlation time of OU current & 2.5 ms \\ \hline \end{tabular}
\end{table}
Table 1: **Default parameters and settings used in numerical simulation**
signal \(h\). The model architecture is summarized as follows:
\[\left\{\begin{array}{l}\dot{s}=\varepsilon z-\kappa s-\gamma\left(f-1\right)\\ \dot{f}=s\\ \tau\dot{v}=f-v^{\frac{1}{\alpha}}\\ \tau\dot{q}=\frac{f\left(1-\left(1-\rho\right)^{\frac{1}{f}}\right)}{\rho}-\frac{v^{\frac{1}{\alpha}}q}{v}\\ h=V_{0}\left[k_{1}\left(1-q\right)+k_{2}\left(1-\frac{q}{v}\right)+k_{3}\left(1-v \right)\right]\end{array}\right., \tag{3}\]
where \(\epsilon=200,\kappa=1.25,\tau=1,\gamma=2.5,\alpha=5,\rho=0.8,V_{0}=0.02,k_{1}=5.6,k_{2}=2,k_{3}=1.4\) are constant parameters of the Balloon model and are chosen as in [10; 22].
Furthermore, to mimic the time series of BOLD signals that were measured by the fMRI, we conducted a downsampling process of the recording data. The valid observation is sampled with a periodical duration of 800 ms, which is the same as the frequency in the canonical experiment paradigm.
### Data analysis
Simulations are done using a finite-difference integration scheme based on the first-order Euler algorithm with a time step of \(dt=10^{-1}\) ms. Each network is simulated for 200 s, with the initial 20 s discarded, to record sufficient data. Provided it is sufficiently long, the exact recording duration does not influence the results. To obtain convincing statistical measurements, we carry out 20 trials for each simulation configuration. Networks are simulated on a GPU card running Linux using custom-written code in C++ and Python.
_Irregularity of individual spikes._ The coefficient of variation (CV) is a commonly used measure in the analysis of spike data from neuronal networks.
The CV is a statistical measure of the variability of the interspike intervals (ISIs) and is defined as
\[\text{CV}_{\text{ISI}}=\left\langle\frac{\sqrt{\left\langle T_{j}^{2}\right\rangle- \left\langle T_{j}\right\rangle^{2}}}{\left\langle T_{j}\right\rangle}\right\rangle. \tag{4}\]
Here the symbol \(\left\langle\cdot\right\rangle\) represents the average and \(T_{j}\) is the inter-spike interval of neuron \(j\). A low CV indicates a regular firing pattern, where the ISIs are relatively constant, while a high CV indicates an irregular firing pattern, where the ISIs are more variable.
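A minimal sketch of equation 4, assuming numpy and hypothetical per-neuron spike-time arrays:

```python
import numpy as np

# CV_ISI (equation 4): per-neuron coefficient of variation of the
# inter-spike intervals, averaged over neurons. Inputs are hypothetical.
def cv_isi(spike_times):
    cvs = []
    for ts in spike_times:
        isi = np.diff(np.sort(ts))
        if len(isi) >= 2:                  # need at least two intervals
            cvs.append(isi.std() / isi.mean())
    return float(np.mean(cvs))

regular = [np.arange(0.0, 1.0, 0.01) for _ in range(5)]   # constant ISIs
cv_reg = cv_isi(regular)                                   # ≈ 0 (regular firing)
```

Perfectly periodic trains give CV near 0, while Poisson-like trains give CV near 1.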
_Synchrony index._ The coherence coefficient (CC) is a commonly used measure to assess the synchronization of population activity in neuronal networks [35; 36]. It quantifies the degree of phase locking within a pool of neurons by assessing the macroscopic firing rate. In our model, the instantaneous population firing rate \(z(t)\) is calculated in 0.1 ms bins. We then define the dimensionless measurement as the ratio of the standard deviation to the mean:
\[\text{CC}=\frac{\sigma_{z(t)}}{\mu_{z(t)}}. \tag{5}\]
Obviously, the larger the value of CC, the better the synchronization in the network.
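A minimal sketch of equation 5, using hypothetical spike trains to contrast synchronous and jittered activity:

```python
import numpy as np

# Coherence coefficient (equation 5): bin the pooled spikes at 0.1 ms,
# form the instantaneous population rate z(t), and take std/mean.
def coherence(spike_trains, t_end, bin_ms=0.1):
    edges = np.arange(0.0, t_end + bin_ms, bin_ms)
    counts, _ = np.histogram(np.concatenate(spike_trains), bins=edges)
    rate = counts / (bin_ms * 1e-3)        # instantaneous population rate
    return rate.std() / rate.mean()

# Perfectly synchronous trains give a large CC; jittered trains a smaller one.
sync = [np.arange(10.0, 1000.0, 50.0) for _ in range(50)]              # ms
jittered = [np.random.default_rng(i).uniform(0, 1000.0, 20) for i in range(50)]
cc_sync = coherence(sync, 1000.0)
cc_jit = coherence(jittered, 1000.0)
```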
_Oscillation power._ In our investigation of the network, we study the oscillation power by analyzing the power spectrum of the population firing-rate time series. To obtain the power spectrum, we use the Welch method with a Hamming window of size \(2^{11}\) and perform a fast Fourier transform (FFT) with the same number of points. From the power spectral density curve, we extract both the peak power amplitude of neuronal oscillations and its corresponding peak frequency.
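A hand-rolled, numpy-only sketch of the Welch-style estimate described above (a Hamming window of \(2^{11}\) points with half-overlapping segments); the 20 Hz sinusoid is a hypothetical stand-in for the firing-rate series:

```python
import numpy as np

# Welch-style PSD with a Hamming window of 2^11 points, then the peak
# frequency. The test signal is hypothetical, not the paper's data.
def welch_psd(x, fs, nper=2**11):
    win = np.hamming(nper)
    segs = [x[i:i + nper] for i in range(0, len(x) - nper + 1, nper // 2)]
    psd = np.zeros(nper // 2 + 1)
    for s in segs:
        spec = np.fft.rfft((s - s.mean()) * win)   # demean each segment
        psd += np.abs(spec) ** 2
    psd /= len(segs) * (win ** 2).sum() * fs
    freqs = np.fft.rfftfreq(nper, d=1.0 / fs)
    return freqs, psd

fs = 10_000.0                                  # 0.1 ms bins -> 10 kHz
t = np.arange(0, 2.0, 1.0 / fs)
sig = 5.0 + np.sin(2 * np.pi * 20.0 * t)       # 20 Hz oscillation
freqs, psd = welch_psd(sig, fs)
peak = freqs[np.argmax(psd)]                   # near 20 Hz
```

`scipy.signal.welch` would give an equivalent result with less code.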
_Avalanche dynamics._ In agreement with previous studies on spike-based neuronal avalanches in vivo [3], we define neuronal avalanches using only spikes from the excitatory population. First, we sample the spike data from \(N_{S}\) randomly selected excitatory neurons and bin the population activity into time windows (\(\Delta t=0.5\) ms). The threshold \(\Theta\) that determines the start and end of an avalanche event is then defined as 50% of the median spiking activity of the excitatory population. A neuronal avalanche starts and ends when the summed activity crosses \(\Theta\). For a given avalanche event, we denote by \(S\) its size, the total number of spikes contained in the event, and by \(T\) its duration, the corresponding lifetime of the event. Using all avalanche events recorded in a given experiment, we estimate the probability distributions of avalanche size and duration, \(P(S)\) and \(P(T)\), respectively.
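The extraction procedure above can be sketched on toy binned counts (the counts below are hypothetical):

```python
import numpy as np

# Avalanche extraction: threshold Θ = 50% of the median bin activity;
# an avalanche is a maximal run of supra-threshold bins. Size S is the
# total spike count, duration T the run length times Δt (= 0.5 ms).
def avalanches(binned_counts, dt=0.5):
    theta = 0.5 * np.median(binned_counts)
    above = binned_counts > theta
    sizes, durations, i = [], [], 0
    while i < len(above):
        if above[i]:
            j = i
            while j < len(above) and above[j]:
                j += 1
            sizes.append(int(binned_counts[i:j].sum()))
            durations.append((j - i) * dt)
            i = j
        else:
            i += 1
    return sizes, durations

counts = np.array([0, 3, 5, 2, 0, 0, 1, 4, 0, 0, 2, 0])   # toy example
sizes, durations = avalanches(counts)   # → [10, 5, 2], [1.5, 1.0, 0.5]
```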
To characterize neuronal avalanches, the distribution \(P(S)\) of avalanche sizes is first inspected visually and then quantified by the \(\kappa\) index [30], which computes the difference between an observed (simulated) distribution \(P^{obs}\) and the known theoretical power-law distribution \(P^{th}(s)\) at 10 equally spaced points on a logarithmic axis and adds 1:

\[\kappa=\frac{1}{10}\sum_{i=1}^{10}\left(P^{obs}(s_{i})-P^{th}(s_{i})\right)+1. \tag{6}\]
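A sketch of equation 6; note that equation 6 writes \(P\) generically, while this implementation assumes the cumulative-distribution form of Shew et al.'s \(\kappa\) with the reference power law \(\tau_{s}=3/2\):

```python
import numpy as np

# κ index (equation 6), cumulative-distribution form: compare the observed
# CDF of avalanche sizes with the CDF of a reference power law (τ = 3/2)
# at 10 log-spaced points and add 1.
def kappa_index(sizes, s_min=1.0, s_max=100.0, tau=1.5, n_pts=10):
    sizes = np.asarray(sizes, dtype=float)
    pts = np.logspace(np.log10(s_min), np.log10(s_max), n_pts)
    p_obs = np.array([(sizes <= x).mean() for x in pts])   # empirical CDF
    p_th = (s_min ** (1 - tau) - pts ** (1 - tau)) / (
        s_min ** (1 - tau) - s_max ** (1 - tau))           # power-law CDF
    return 1.0 + float(np.mean(p_obs - p_th))

# Sizes drawn from the reference power law itself should give κ ≈ 1.
rng = np.random.default_rng(0)
u = rng.uniform(size=20_000)
sizes_sim = (1.0 - u * (1.0 - 100.0 ** -0.5)) ** -2.0   # inverse-CDF sampling
k = kappa_index(sizes_sim)
```

Subcritical (exponential) size distributions push \(\kappa\) below 1; heavy-tailed supercritical ones push it above 1.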
Within this definition, \(\kappa<1\) characterizes a subcritical distribution and \(\kappa>1\) a supercritical one, whereas \(\kappa=1\) indicates a critical network. The theoretical avalanche criticality discussed here is the true MF-DP critical point, with scale-invariant avalanche distributions that obey the crackling-noise scaling relation [11]. As mentioned previously, \(\tau_{s}=3/2\) and \(\tau_{t}=2\) are the exponents of MF-DP models at or above the upper critical dimension.
### Data assimilation
An augmented Ensemble Kalman filter (EnKF) is applied here to identify the dynamical connection by incorporating the synaptic parameters into an augmented state vector [21]. For assimilation purposes, the joint input-state estimation is achieved by including the unknown forces in the state vector and estimating this augmented vector with a standard Kalman filter. We rewrite the state-space model with a \(1+5+1\)-dimensional state vector \(\Upsilon_{t}\) and a 1-dimensional observation vector \(h_{t}\) according to
\[\begin{split}\Upsilon_{t}&=\mathcal{F}(\Upsilon_{t -1})+\sigma_{t},\quad\sigma_{t}\sim\mathcal{N}(0,Q),\\ h_{t}&=\mathcal{G}(\Upsilon_{t})+\epsilon_{t}, \quad\epsilon_{t}\sim\mathcal{N}(0,R),\end{split} \tag{7}\]
where \(\sigma_{t}\) and \(\epsilon_{t}\) are Gaussian noise terms with covariance matrices \(Q\) and \(R\), respectively. Herein, the function \(\mathcal{F}\) is composed of the neuronal network, the Balloon model and a random-walk model of the estimated parameter,
\[\Upsilon_{t}=\left(\begin{array}{c}z_{t}\\ y_{t}\\ \theta_{t}\end{array}\right),\quad\mathcal{F}(\Upsilon_{t-1})=\left(\begin{array} []{c}\textit{NN}(\theta_{t-1})\\ \textit{Balloon}(y_{t},z_{t})\\ \theta_{t-1}+\eta\xi_{t-1}\end{array}\right), \tag{8}\]
where the 1-dimensional \(\theta_{t}\) and the 1-dimensional \(z_{t}\) denote the parameter and the population firing rate in the evolution process. The 5-dimensional \(y_{t}\) represents the hidden states of the Balloon model in equation 3, which together produce the BOLD signal. The state vector is related to the force parameter \(\theta\) through a stochastic process. In agreement with
previous studies, the observation operator \(\mathcal{G}\) is simply designated to be a linear map, which projects the state vector onto its single element \(h_{t}\).
\[\mathcal{G}(\Upsilon_{t})=\left(\begin{array}{cccccc}0&\ldots&1&\ldots&0\end{array} \right)\cdot\left(\begin{array}{c}z_{t}\\ y_{t}\\ \theta_{t}\end{array}\right). \tag{9}\]
In the DA framework, we adopt a reversible map that sends \(\theta_{t}\in(a,b)\) to the infinite range \(\Theta_{t}\in(-\infty,\infty)\), so that the random walk cannot exceed the bounds of the parameter range. We set the map function \(\Phi\) to be a logit (log-odds) function, whose corresponding reversible inverse is a \(sigmoid\). Under this condition, the controlling parameter \(\theta\) is fed back into the original model after the prediction and updating steps. The complete algorithm is as follows (Algorithm 2.4):
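A minimal sketch of this bounded-parameter transform (the logit/scaled-sigmoid pair; bounds and step size below are illustrative):

```python
import numpy as np

# Φ maps θ ∈ (a, b) to an unbounded Θ (logit); Φ⁻¹ (a scaled sigmoid)
# maps back, so a random walk in Θ-space can never leave (a, b).
def phi(theta, a, b):
    x = (theta - a) / (b - a)
    return np.log(x / (1.0 - x))                        # logit

def phi_inv(big_theta, a, b):
    return a + (b - a) / (1.0 + np.exp(-big_theta))     # scaled sigmoid

a, b = 0.5, 4.0                    # hypothetical parameter bounds
theta = 2.0
walked = phi(theta, a, b) + 0.3    # one random-walk step in Θ-space
back = phi_inv(walked, a, b)       # still strictly inside (0.5, 4.0)
```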
## 3 Results
### Transcritical state
To investigate the behavior of the system, we conduct a preliminary study using a small network of 2000 neurons under two scenarios characterized by different levels of driving current: one mean-dominated and the other fluctuation-dominated, reflected by variations in the parameter pair \(\mu_{ext}\) and \(\sigma_{ext}\). We ask whether the fast-acting coupling strength contributes to the modulation of network dynamics. In our model, the two free variables, the AMPA- and NMDA-type synaptic conductances, are constrained by a linear equation to better explore spiking patterns in a rate-locked firing regime (Mechanism of criticality). We calculate various neuronal measurements to gain insight into the properties of the network.
**Algorithm** EnKF
**Input** :
BOLD signals \(h_{t},t=1\dots T\);
number of total samples \(N\); total time length \(T\);
init \(N\) networks with different parameters \(\theta\);
observation noise \(R\); random walk step \(\eta\)
**Output** :
estimated BOLD signals;
hidden states and parameters;
1: Draw \(N\) samples \(\theta_{0}^{n}\) from initial distribution;
2: map \(\theta_{t}\) to \(\Theta_{t}\) through function \(\Phi\).
3: **for \(t=1:T\)do**
4: apply random walk as \(\Theta_{t}=\Theta_{t-1}+\eta\xi_{t-1}\);
5: map \(\Theta_{t}\) back to \(\theta_{t}\) through \(\Phi^{-1}\) and pass it to the evolution model.
6: update the hidden states, including the neuronal states and the hemodynamic states.
7: evolve each neuronal network to obtain the BOLD state \(\hat{h_{t}^{n}}\), then concatenate into \(\hat{\Upsilon_{t}^{n}}\).
8: calculate \(\mu_{t}=\frac{1}{N}\sum\hat{\Upsilon_{t}^{n}}\), \(C_{t}=\frac{1}{N-1}\sum(\hat{\Upsilon_{t}^{n}}-\mu_{t})(\hat{\Upsilon_{t}^{n}} -\mu_{t})^{T}\).
9: derive Kalman gain matrix \(S_{t}=HC_{t}H^{T}+R\), \(K_{t}=C_{t}H^{T}S_{t}^{-1}\)
10: **if \(t\%800==0\) then**
11: Draw \(\epsilon_{o}^{n}\) from \(\mathcal{N}(0,R)\), filter by \(\Upsilon_{t}^{n}=\hat{\Upsilon_{t}^{n}}+K_{t}(y_{t}+\epsilon_{o}^{n}-H\hat{ \Upsilon_{t}^{n}})\)
12: **end if**
13: **end for**
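Steps 8-11 of the algorithm (ensemble statistics, Kalman gain, perturbed-observation update) can be sketched as follows; the dimensions and values are illustrative placeholders, not the paper's actual settings:

```python
import numpy as np

# One stochastic (perturbed-observation) EnKF analysis step, mirroring
# algorithm steps 8-11. Dimensions/values are hypothetical.
rng = np.random.default_rng(1)
N, d = 200, 7                        # ensemble size; augmented dim (1+5+1)
obs_idx, R = 5, 0.01                 # index of the BOLD element; obs. noise var

ens = rng.normal(size=(N, d))        # forecast ensemble, rows = members
y = 0.4                              # observed BOLD value h_t
H = np.zeros((1, d)); H[0, obs_idx] = 1.0   # linear observation (equation 9)

mu = ens.mean(axis=0)                # step 8: ensemble mean ...
A = ens - mu
C = A.T @ A / (N - 1)                # ... and sample covariance C_t
S = H @ C @ H.T + R                  # step 9: innovation covariance S_t
K = C @ H.T @ np.linalg.inv(S)       # Kalman gain K_t
perturbed = y + rng.normal(0.0, np.sqrt(R), size=(N, 1))   # step 11
ens_upd = ens + (perturbed - ens[:, [obs_idx]]) @ K.T      # filtered ensemble
```

After the update, the ensemble mean of the observed component moves toward the measurement and its spread shrinks.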
Figure 1: **Spiking activity with respect to the excitatory conductance.** (A): Network activity (firing rate, coefficient of variation and coherence coefficient) with respect to different AMPA conductance, keeping the constraint \(\text{AMPA}+20\times\text{NMDA}=2.5\). There is a sharp transition from an asynchronous to a synchronous state in the one-dimensional sweep, whose boundary is characterized by a moderately synchronized state, as shown in the middle panel of (B). (B): The local recurrent neuronal network consists of Exc. and Inh. spiking neurons with different \(AMPA\) levels. Network activity of three typical points with parameters indicated in the top panel. Middle: raster plot of a subset of 200 neurons (Exc. 160 (black), Inh. 40 (red)); bottom: the average excitatory and inhibitory synaptic currents.
We begin with the fluctuation-dominated regime, in which the external drive, described by the mean of the OU current, \(\mu_{ext}\), and the deviation of each step, \(\sigma_{ext}\), has a low mean and a large deviation. With the linear constraint on the excitatory synaptic conductances, the firing rate of the network sits at approximately 17 Hz. In the fluctuation-dominated case, all CV values are much larger than 1, indicating an irregular firing pattern. Although strengthening the AMPA conductance changes the network structure, it produces almost no additional variability in neuronal firing, as quantitatively confirmed by the CV and the coherence coefficient (Fig 1 A).
In the mean-dominated regime, a strong external drive keeps the majority of the subthreshold voltages close to one another. In this way, the mean drive to each cell is subthreshold, and spikes result from fluctuations, which occur irregularly, leading to a high CV. Increasing the AMPA conductance in the network significantly enhances the regularity of spiking activity (Fig 1 A). Such strong fast-acting synapses integrate presynaptic spikes rapidly and can trigger a subset of neurons to fire at the same time. A cascading firing event occurs when the neuron voltages lie sufficiently close to the firing threshold that a few spikes in the network suffice to induce firing in all other neurons at that same time. Such synchronized behavior may also enhance the effect of collective firing activity; accordingly, the average firing rate increases at high levels of AMPA conductance.
From Fig 1 B, we can see that both synchronous and asynchronous irregular firing patterns are observed in the same network. Moderate synchrony with a moderate correlation (0.8-1.8) appears at intermediate AMPA efficacy. In the moderately synchronized case, neuronal avalanches are organized rhythmically into bursts, large and small, separated by distinct silences. The excitatory current is induced quickly, followed by a long period of inhibitory current. Neurons show sparsely synchronized oscillations, which consist of irregular and sparse individual spikes but synchronized oscillating population activities.
### Rich spiking patterns
In the LIF model, the synaptic input current has two sources: the external current and the interactions within the local network. In a noise-free network, neurons can fire sustainably only when the external stimulus is of the order of the threshold potential, \(I^{ext}\sim V_{th}\). As in [12], we define the parameter \(\Delta=I^{ext}-V_{st}\) as the external suprathreshold current (details in Model and methods). A summary of the phase transitions under the two types of OU current is given in Fig 2 A and D.
Figure 2: **Phase diagram and spiking avalanches of the neuronal network.** (A, B, C): The phase diagram of the network with respect to \(\Delta\) and AMPA conductance in the mean-dominated case. The parameter space suggests a phase transition, as shown in the heatmaps of the \(\kappa\) coefficient (C) and the power of collective oscillations (E). (D, E, F): the same illustration as the top panel but for the fluctuation-dominated current.

In the suprathreshold regime \(\Delta>0\), the limiting stable potential \(V_{st}\) is larger than the threshold potential, leading to synchronous events and regular spiking activity. Note that suprathreshold-like spiking can also exist in a narrow region of \(\Delta<0\) in the fluctuation-dominated regime. In this region, \(V_{st}\) is close to the firing threshold, and a sufficiently large fluctuation of the input current can induce firing in most neurons in the network. Since all neurons spike at the same time, their voltages are all reset at the same time, causing periodic synchronous bursts and a high firing rate. If \(\Delta\ll 0\), neurons are silent, with zero average firing rate \(\rho=0\). Interestingly, in the subthreshold regime \(\Delta<0\), the neurons can undergo a transition from an asynchronous irregular (AI) to a synchronized regular (SR) state as the impact of the fast-acting synapses increases under mean-dominated input. Excitatory and inhibitory neurons in the network receive mean-dominated input that drives their voltage close to the threshold and gives rise to synchronized cascading firing events. The model described in equation 1 assumes that the fast-acting synaptic conductance is much larger than the slow one, allowing rapid integration of presynaptic spikes and the generation of strong, pulse-like currents. A cascading firing event occurs when a large and rapid excitatory current induces a chain reaction of spike firing, resulting in all other neurons firing at the same time. The inhibitory neurons then become active and suppress the firing for a short period, which leads to the emergence of network oscillations. At the boundary line between AI and SR, the network exhibits markedly different spiking activity: synchronous cascading events occur not periodically but at random times, accompanied by large or small silent windows. This moderate synchrony among neurons may emerge when the network operates near a critical point [6]. Nevertheless, a similar transition does not appear in networks with fluctuation-dominated external input. These findings suggest that mean-dominated input is essential for the emergence of these neurodynamics.
Then, in the subthreshold regime, we assess the \(\beta\) power of the firing rate in the two situations. From Fig 2 E, the synaptic interaction does not regulate neuronal behavior in the fluctuation-dominated regime, as mentioned above. In the mean-dominated regime, however, depending on the fast-acting synaptic conductance, the networks exhibit oscillations in the \(\beta\) band (13-30 Hz) along a clear transcritical line. Synchronous excitatory neurons fire an action potential, exciting the inhibitory neurons to fire, which in turn suppresses the excitatory neurons for a period of time. This interaction between excitatory and inhibitory spiking is the fundamental principle underlying pyramidal-interneuronal network gamma (PING) oscillations [4]. The inhibitory synapse plays an important role in regulating the oscillation frequency: the power spectral density (PSD) shifts to the left as the decay time of the inhibitory channel increases (Fig 3). We verify the origin of the oscillations by plotting the typical spiking activity of inhibition and excitation; inhibition modulates the oscillation frequency by cancelling the excitatory current with an induced inhibitory current.
Figure 3: **Inhibition modulates the oscillation frequency.** (A) Oscillations in the model at criticality and the PSD of the synaptic current for the inhibitory time constant \(\tau_{i}=10\) ms. (B) Oscillations with much smaller frequencies, obtained by holding \(\tau_{i}=12\) ms for all neurons.

Furthermore, we investigate the \(\kappa\) index in the subthreshold regime to assess how close the observed avalanche distribution is to the Directed Percolation (DP) universality class [23] with exponents \(\tau_{s}=3/2\) and \(\tau_{t}=2\), respectively. We find that \(\kappa<1\) on the left side of the diagram, corresponding to the subcritical regime. In contrast, the network exhibits supercritical phenomena, with \(\kappa>1\), when the fast-acting excitation is large enough. On the transcritical line the network obeys \(\kappa=1\), corresponding to a DP-critical network. Interestingly, the region of \(\kappa=1\) (Fig 2 C) does not correspond well with the boundary of the \(\gamma\) power distribution (Fig 2 B): additional patches of \(\kappa=1\) emerge in the diagram. These results suggest that DP universality is not the most appropriate class for the transition to collective oscillations. Henceforth, we relax this constraint, taking a more agnostic approach towards the values of the exponents and letting the MLE method determine them (Neuronal avalanches). The transcritical line is not vertical in the subcritical regime, which implies that the critical parameter \(g_{\rm NMDA}\) is not fixed across external current levels.
### Neuronal avalanches
Figure 4: **Critical state with spike avalanches.** (A): Spikes from \(N_{s}\) randomly sampled excitatory neurons are mapped into time bins (\(\Delta t=0.5\) ms). An avalanche event is defined as a sequence of time bins in which the spike count exceeds \(\Theta\), ending with a "silent" time bin. (B, C): Typical distributions of avalanche size and avalanche duration for networks with different synaptic parameters in different regions. At different levels of AMPA conductance, the model may present subcritical (red), critical (black), and supercritical (blue) avalanche dynamics. Inset: at the critical state, the average size \(S\) conditioned on a given duration \(T\) shows a power-law increase, \(S\sim T^{\pi}\). (D, E): The corresponding avalanche distributions for networks in the fluctuation-dominated regime. The same three levels of synaptic conductance are considered, and the model exhibits only subcritical dynamics due to high fluctuations among neurons.

To further re-examine the critical phenomena, we run a much longer simulation (\(T_{sim}=500\) s) and compute the spike-based size and duration of each avalanche event (Fig 4 A). For comparison with real electrophysiological experiments, we collect avalanche events from a small population of randomly chosen excitatory neurons (sampling size \(N_{s}=400\) by default), rather than from all neurons in the network. Fig 4 B and C show the distributions of avalanche size and avalanche duration for the three typical examples of Fig 1. The model exhibits distinct avalanche dynamics at different levels of fast-acting coupling strength. The subcritical dynamics have an exponentially decaying avalanche distribution (Fig 4, blue lines); the supercritical dynamics have a much greater chance of large-size avalanches (Fig 4, red lines); and the critical dynamics show a power-law avalanche distribution (Fig 4, black lines). In the critical state, the distributions of both avalanche size and duration follow linear relationships in log-log coordinates. These two relationships are well characterized by power-law exponents (\(\tau_{s}=1.84\) and \(\tau_{t}=2.10\)), reasonably close to the DP values. Remarkably, plotting the average avalanche size as a function of avalanche duration in log-log coordinates reveals another power-law scaling, with exponent \(\pi=1.2\) (Fig 4 C inset). These exponents satisfy the relation \(\pi=(1+\beta)/(1+\alpha)\) predicted for a critical system by the scaling theory of nonequilibrium critical phenomena [27]. In agreement with previous in vivo and in vitro experiments, these findings together suggest the occurrence of avalanche dynamics in our model.
However, a similar tuning effect is not observed in networks with fluctuation-dominated external input (Fig 4 D, E). The high fluctuation of the input leads to asynchronous irregular neuronal firing, yielding subcritical dynamics with exponential distributions of avalanche size and duration. Theoretically, this is not surprising, because variability in fast-acting synapses has been shown to contribute weakly to collective neuronal firing in the fluctuation-dominated regime.
Furthermore, we investigate whether the sampling size of the neuronal data influences the power-law distribution of neuronal avalanches (Fig 5). Using the same spike data generated at the critical setting above, we recalculate the distributions of both avalanche size and duration at different sampling sizes. We find that the power-law statistics of avalanche events are preserved, illustrating the stability of the neuronal avalanches generated in our model. Recording spike-based events from a limited number of neurons does not prevent observation of the characteristic power law. However, recording a large population markedly increases the chance of large-size avalanches, which may explain the shallower slope of the avalanche distribution. This observation implies that the measured power-law exponents may vary across experimental sampling paradigms.
### Mechanism of criticality
Figure 5: **Critical exponents vary with recording size.** (A): Avalanche assessment at different sampling sizes. (B): Corresponding distributions of avalanche duration at different sampling sizes.

To gain further mathematical insight into the existence and modulation of the criticality, we coarse-grain the LIF model into a firing-rate model for this network, writing equations for the three gating variables: fast excitatory \(S_{f}(t)\), slow excitatory \(S_{s}(t)\) and inhibitory \(S_{i}(t)\). The time-dependent neuronal firing rate \(\rho(t)\) of the network is then derived in terms of these gating variables and fed back into the evolution of the network conductance variables, closing the system.
As a preliminary approximation, we assume that \(V_{j}(t)\) obeys a Gaussian distribution \(p(V,t)\) with a time-dependent mean \(V_{0}(t)\) and variance \(\sigma_{0}^{2}(t)\). In agreement with the previous analysis framework, we suppose that the neuronal potential \(V_{i}\) is restricted to \((-\infty,V_{th})\) and that \(V_{th}\) is an absorbing boundary. Treating the \(N\) neuron voltages as independent between two spike events, the neuronal firing rate of the network can be formulated as the expected proportion of neurons whose membrane potential is above the spiking threshold [19]. Then,
\[\begin{split}\rho(t)=\langle H\left(V(t)-V_{th}\right)\rangle& =\int_{V_{th}}^{\infty}p(V,t)dV\\ &=\frac{1}{2}-\frac{1}{2}\operatorname{erf}\left(\frac{V_{th}-V_{ 0}(t)}{\sqrt{2}\sigma_{0}(t)}\right),\end{split} \tag{10}\]
The pulses arriving from the excitatory and inhibitory populations are assumed to follow a Gaussian distribution at a high firing rate \(\rho(t)\). Thus, the drive term \(S_{u}\) can be described by a constant term plus Gaussian white noise, governed by
\[\tau_{u}\frac{dS_{u}}{dt}=\omega\tau_{u}\rho-S_{u}+\omega\tau_{u}\sqrt{\rho} \xi(t), \tag{11}\]
where the Gaussian white noise \(\xi(t)\) has a mean and auto-correlation function defined by
\[\langle\xi(t)\rangle=0,\quad\langle\xi(t)\xi(t^{\prime})\rangle=\delta(t-t^{ \prime}). \tag{12}\]
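Equation 11, with the white noise of equation 12, can be integrated with an Euler-Maruyama step; the parameter values below are illustrative, not the paper's:

```python
import numpy as np

# Euler-Maruyama sketch of the stochastic drive in equation 11:
# tau_u dS/dt = w tau_u rho - S + w tau_u sqrt(rho) xi(t).
rng = np.random.default_rng(2)
tau_u, w, rho = 2.0, 0.5, 10.0        # decay time (ms), weight, presyn. rate
dt, steps = 0.1, 50_000
S = np.empty(steps); S[0] = 0.0
for k in range(1, steps):
    drift = (w * tau_u * rho - S[k - 1]) / tau_u
    noise = w * np.sqrt(rho) * np.sqrt(dt) * rng.normal()
    S[k] = S[k - 1] + dt * drift + noise
S_mean = S[steps // 2:].mean()        # stationary mean ≈ w * tau_u * rho = 10
```

The stationary mean \(\omega\tau_{u}\rho\) and variance \(\omega^{2}\tau_{u}\rho/2\) follow from standard OU statistics.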
This OU process has been shown to capture the statistics of conductance fluctuations in compartmental model neurons [8]. In fact, the criterion for the validity of the diffusion approximation is that the mean value is much larger than the fluctuation term. Noting that the time constants of the slow excitation and of the inhibition are much larger than that of the fast excitation, we can neglect the fluctuations of these two gating variables and approximate their dynamics by constant-coefficient differential equations.
Using the effective-time-constant diffusion approximation [26], equation 1 reduces to
\[\tau_{0}\frac{dV}{dt}=-(V-V_{0})+\sigma_{0}\sqrt{\tau_{0}}\eta(t), \tag{13}\]
where
\[\begin{cases}g_{0}=g_{L}+\sum_{u}g_{u}S_{u}\\ \tau_{0}=C/g_{0}\\ V_{0}=\frac{1}{g_{0}}\left(g_{L}V_{L}+\sum_{u}g_{u}S_{u}V_{u}\right)+\mu_{ext}\\ \sigma_{0}=\frac{\sigma_{ext}}{g_{0}}\sqrt{\frac{2\tau_{ext}}{\tau_{0}}}.\end{cases} \tag{14}\]
In equation 14, the increased effective leak \(g_{0}\) incorporates the effect of synaptic conductance, and \(\tau_{0}\) denotes the effective time constant. The distribution predicted for the voltage is Gaussian, and the average and variance of the membrane potential are \(V_{0}\) and \(\sigma_{0}^{2}\), respectively.
The second-order OU process governs the trace of the membrane potential in the ideal case, depending on \(\tau_{0}\), \(V_{0}\) and \(\sigma_{0}\) in equation 14. In this paper, to gain insight into spiking patterns independent of the network firing rate, we approximately lock the firing frequency by constraining the two excitatory conductances (AMPA and NMDA) to the linear equation
\[\langle g_{f}S_{f}+g_{s}S_{s}\rangle=g_{f}\omega\tau_{f}\rho+g_{s}\omega\tau_{s }\rho=\textit{const}. \tag{15}\]
An appropriate choice of the constant in equation 15 provides a sufficient range of fast excitation in the experiment while keeping the firing rate approximately fixed (Fig 1).
Figure 6: **Dynamics of the field model.** (A): The evolution of the gating variables and network firing rate at the power-law criticality parameters. (B): The trajectories of \(S_{s}\) and \(S_{i}\) at the supercritical parameters, forming a limit cycle.

We find that including the synaptic shot noise of the fast excitation (equation 11) in the diffusion approximation does not affect the performance of the approximation (equation 13). The network remains stochastic due to the fluctuation term in the external current. Generally, mean-field theory holds only when the system size is infinite. Incorporating noise into the field model smooths out systematic errors, compensates for the finite-size effect and brings the model statistically closer to the true rate dynamics. Thus, for numerical simulation of the field equations we keep the noise terms in equation 11. Finally, we close the system and arrive at the field equations as
\[\begin{cases}\rho_{t}=\dfrac{1}{2}-\dfrac{1}{2}\operatorname{erf}\left(\dfrac{V_{ th}-V_{0}(t)}{\sqrt{2}\sigma_{0}(t)}\right)\\ \tau_{f}\dfrac{dS_{f}}{dt}=\omega\tau_{f}\rho-S_{f}+\omega\tau_{f}\sqrt{\rho} \xi(t)\\ \tau_{s}\dfrac{dS_{s}}{dt}=\omega\tau_{s}\rho-S_{s}\\ \tau_{i}\dfrac{dS_{i}}{dt}=\omega\tau_{i}\rho-S_{i}\end{cases} \tag{16}\]
The dynamics of the closed system (equation 16), at the critical point identified in Fig 1, are shown in Fig 6. The error function in equation 10 is the intrinsic nonlinearity that induces the oscillation transition in the coarse-grained model. The firing rate matches the numerical simulation of the spiking model, in which the network shows occasional bursting events and moderate synchronization. In the supercritical regime, these oscillations are indeed limit cycles in the \(S_{s}\)-\(S_{i}\) plane, as shown in Fig 6 B, which collapse to a focus with decreasing fast-acting conductance. Near the critical parameter AMPA \(=2.0\), the limit cycle disappears and a focus appears. The mechanism is explained by a Hopf bifurcation of the field equations, identified through stability analysis: as AMPA increases, the fixed point loses stability through a supercritical Hopf bifurcation. Power-law criticality occurs at the bifurcation point and is influenced by external noise and network fluctuations. The remaining stable fixed point corresponds to low and stable firing of the network neurons. In the presence of noise, neurons occasionally cross the bifurcation and return, causing neuronal avalanches on small or large time scales.
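The field equations (equation 16) with the erf rate of equation 10 can be Euler-Maruyama-integrated as below; the linear map from gating variables to \(V_{0}\) is a hypothetical stand-in for equation 14, and all coefficients are illustrative:

```python
import numpy as np
from math import erf, sqrt

# Euler-Maruyama sketch of the closed field model (equation 16).
# The V0 coupling below is a hypothetical simplification of equation 14.
rng = np.random.default_rng(3)
w, tau_f, tau_s, tau_i = 1.0, 2.0, 40.0, 10.0     # ms, matching Table 1 taus
Vth, sigma0, dt, steps = -50.0, 3.0, 0.05, 40_000
Sf = Ss = Si = 0.0
rates = np.empty(steps)
for k in range(steps):
    V0 = -55.0 + 2.0 * Sf + 0.2 * Ss - 1.0 * Si   # hypothetical coupling
    rho = 0.5 - 0.5 * erf((Vth - V0) / (sqrt(2.0) * sigma0))  # equation 10
    Sf += dt * (w * rho - Sf / tau_f) + w * sqrt(rho * dt) * rng.normal()
    Ss += dt * (w * rho - Ss / tau_s)
    Si += dt * (w * rho - Si / tau_i)
    rates[k] = rho
mean_rate = rates[steps // 2:].mean()             # low, noisy stationary rate
```

Only the fast gating variable carries the shot-noise term, mirroring the approximation in the text.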
### Identification of the parameter of criticality
Tracking the dynamics of a neuronal network from biophysical recordings is a central question in computational modeling. Critical neuronal activity is especially challenging: the network is extremely sensitive, and its corresponding parameter is difficult to identify. Within the parameter space of rich spiking patterns described above, we aim to fit the network model to its corresponding functional MRI (BOLD) signal by estimating the appropriate synaptic weight. Moving toward Bayesian inference, the mathematical model combines the neuronal network model and the Balloon-Windkessel model, which takes the neural activity, quantified by the spike rate of a pool of neurons, and outputs the BOLD signal [10]. In the following, synthetic data from the Balloon model are taken as the target biological signal to verify the effectiveness of our proposed method (details in Model and methods).
The augmented Kalman filter has proven to be an efficient technique for parameter estimation in a combined deterministic-stochastic setting [21]. The unknown parameters are included in the state vector and estimated jointly with the states using a standard Kalman filter. In our problem, we wish to estimate a fixed parameter while enabling real-time tracking of the nonstationary network. We use the ensemble version of the Kalman filter to implement sequential updating instead of reweighting, avoiding degeneracy problems [15]. The augmented vector, consisting of all latent state variables and synaptic parameters, can be estimated in real time using the Ensemble Kalman filter. After assimilation, we re-simulate the network, manipulating only the synaptic weights with the assimilated parameters, and compute the Pearson correlation between the simulated signal and the target BOLD time series.
Our proposed method is validated on synthetic data in both critical and subcritical networks. As before, we consider the two typical states in Fig 1 B and use them as the target networks. The two BOLD signals clearly differ in amplitude: the critical BOLD fluctuates more strongly than the subcritical BOLD (Fig 7 A and B, top panels). Collective firing in the critical state causes peaks in the firing rate, leading to strong fluctuations in the BOLD signal. In contrast, network firing remains stable around a fixed point in the subcritical state, so the BOLD signal oscillates weakly. With the proposed method, the parameter converges quickly to near the ground truth, while the filtered BOLD signal almost coincides with the synthetic BOLD signal (Fig 7 A, bottom panel). In the subcritical state, the additive parameter constraint appears detrimental, and the parameters converge slowly to a wrong estimate. Using the estimated parameters to simulate the model, we reach correlations of 0.33 and 0.60 in the two situations, respectively (Fig 7 C and D). Interestingly, the simulated subcritical network shows a higher correlation than the critical one, despite much worse estimated parameters. This can be explained by the critical state being more sensitive to the synaptic parameters than the subcritical state.
With our proposed method, the synaptic weights are estimated sequentially and converge to near the ground truth. Re-simulation with the estimated parameters demonstrates real-time tracking of nonstationary networks, including the critical network.
Figure 7: **Tracking the critical and subcritical dynamics in the network.** (A): The assimilation process of the network while tracking critical dynamics. The filtered signal is almost consistent with the ground truth (top panel). The filtered AMPA weights are plotted over the assimilation process (bottom panel). The dashed blue region represents the deviation among ensemble members. Note that the parameters converge rapidly to the ground truth of the critical parameters. (B): The same as (A) but in the subcritical state. (C): The ground truth and the resimulated BOLD signal show a correlation of 0.33 after a transition period. The simulated signal tracks the critical BOLD signal in real time. (D): The simulated network reaches a correlation coefficient of 0.60 with its biological counterpart in the subcritical state.
## 4 Discussion
Brain networks have been shown to exhibit rich dynamical patterns, called spontaneous activity, which do not look random or entirely noise-driven but are structured in spatiotemporal patterns [32; 33]. In the local network of conductance-based neurons, we observe multiscale spiking features, including asynchronous irregular firing, synchronous regular firing, and power-law neuronal avalanches. These findings appear meaningful because a similar co-emergence of rich cortical activities has been observed in both experimental and computational recordings [13]. In particular, we show that fast-acting synapses modulate spiking behaviors over a wide dynamic range and give rise to synchronous firing of neurons only in the mean-dominated regime. In contrast, the fast-acting synapse has little effect on the modulation of neuronal bursting events under a fluctuation-dominated background current. In this scenario, the influence of synaptic coupling appears relatively weak compared to the background current. The network tends to exhibit stationary firing activity and remains near a focus of the dynamical regime. This suggests that the network dynamics are primarily driven by the background current rather than by chemical synaptic interactions [34].
Moreover, we found that an appropriate level of fast-acting synaptic coupling strength can trigger spike-based avalanches and evoke power-law avalanche criticality. In the context of our work, criticality occurs at the boundary in the space of possible dynamical regimes. On the left side of the boundary, network neurons evolve independently of each other and fire occasionally, resulting in asynchronous population dynamics. On the other side, the population tends to fire synchronously and terminate synchronously. At criticality, population dynamics are more diverse, occasionally oscillating and exhibiting moderate synchronous behaviors at large or small time scales. Different from the DP universality class, the model here reveals a power-law avalanche distribution with exponents \(\tau_{s}=1.6\) and \(\tau_{t}=2.1\). The network presents an oscillation with a peak frequency in the \(\beta\) range, which can be regulated by the time constant of the inhibitory synapse. These results highlight the importance of network coupling, including excitation and inhibition, in evoking and modulating rhythmic oscillations.
Through a mean-field description of the neuronal network, we can predict the transition from asynchronous spiking to a sparse synchronous state through a Hopf bifurcation. Neuronal avalanches can be interpreted as stochastic crossing behavior around the bifurcation point. A synchronous firing event refers to a phenomenon in which a chain reaction of neuronal firing is evoked in the network: when a small subset of excitatory neurons spike, their collective excitation is sufficient to drive a majority of the network's neurons past their firing threshold so that they spike as well. In this model, we have presented three stages of neuronal synchronous oscillation: mean-dominated subthreshold dynamics, fast initiation of a spiking event, and time-delayed inhibitory cancellation. In the presence of noise, this population oscillation may degenerate into a critical avalanche phenomenon.
The identification of the parameter range for a critical network can be challenging due to its narrow and elusive nature. A potential approach to address this issue is to employ an Ensemble Kalman Filter (EnKF) in conjunction with the neuronal network and Balloon model, as proposed by Lu et al. [22] in their study of human brain networks. This technique offers a statistically driven estimation of network connectivity, specifically targeting the identification of the fast-acting coupling strength within a network exhibiting moderate nonstationarity, whether in critical or supercritical states. By applying the EnKF methodology, we were able to estimate a parameter that closely approximates the ground truth value, which serves as an accurate representation of the critical state. Applying the EnKF methodology to real data obtained from biological experiments nevertheless poses certain challenges; a key one is ensuring that the observation signal aligns with the frequency and amplitude characteristics of the network behavior being studied. This problem deserves further exploration in future modeling studies. Overall, the ability to estimate the critical parameter accurately contributes to our understanding of the network's behavior and its transitions between dynamical states.
In conclusion, our study provides a comprehensive exploration of coupled networks with conductance-based neurons and sheds light on the regulation of avalanche criticality. Our findings provide valuable insights for understanding and modeling criticality, offering implications for large-scale brain dynamics modeling and computational studies.
## 5 Acknowledgements
This work is jointly supported by the National Key R&D Program of China (No. 2019YFA0709502), the National Key R&D Program of China (No. 2018YFC1312904), the Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01), ZJ Lab, and the Shanghai Center for Brain Science and Brain-Inspired Technology.
|
2305.01852 | $Σ$ Resonances from a Neural Network-based Partial Wave Analysis on
$K^-p$ Scattering | We implement a convolutional neural network to study the $\Sigma$ hyperons
using experimental data of the $K^-p\to\pi^0\Lambda$ reaction. The averaged
accuracy of the NN models in resolving resonances on the test data sets is
${\rm 98.5\%}$, ${\rm 94.8\%}$ and ${\rm 82.5\%}$ for one-, two- and
three-additional-resonance case. We find that the three most significant
resonances are $1/2^+$, $3/2^+$ and $3/2^-$ states with mass being ${\rm
1.62(11)~GeV}$, ${\rm 1.72(6)~GeV}$ and ${\rm 1.61(9)~GeV}$, and probability
being $\rm 100(3)\%$, $\rm 72(24)\%$ and $\rm 98(52)\%$, respectively, where
the errors mostly come from the uncertainties of the experimental data. Our
results support the three-star $\Sigma(1660)1/2^+$, the one-star
$\Sigma(1780)3/2^+$ and the one-star $\Sigma(1580)3/2^-$ in PDG. The ability of
giving quantitative probabilities in resonance resolving and numerical
stability make NN potentially a life-changing tool in baryon partial wave
analysis, and this approach can be easily extended to accommodate other
theoretical models and/or to include more experimental data. | Jun Shi, Long-Cheng Gui, Jian Liang, Guoming Liu | 2023-05-03T01:54:28Z | http://arxiv.org/abs/2305.01852v1 | # \(\Sigma\) Resonances from a Neural Network-based Partial Wave Analysis on \(K^{-}p\) Scattering
###### Abstract
We implement a convolutional neural network to study the \(\Sigma\) hyperons using experimental data of the \(K^{-}p\to\pi^{0}\Lambda\) reaction. The averaged accuracy of the NN models in resolving resonances on the test data sets is 98.5%, 94.8% and 82.5% for one-, two- and three-additional-resonance case. We find that the three most significant resonances are \(1/2^{+}\), \(3/2^{+}\) and \(3/2^{-}\) states with mass being 1.62(11) GeV, 1.72(6) GeV and 1.61(9) GeV, and probability being 100(3)%, 72(24)% and 98(52)%, respectively, where the errors mostly come from the uncertainties of the experimental data. Our results support the three-star \(\Sigma(1660)1/2^{+}\), the one-star \(\Sigma(1780)3/2^{+}\) and the one-star \(\Sigma(1580)3/2^{-}\) in PDG. The ability of giving quantitative probabilities in resonance resolving and numerical stability make NN potentially a life-changing tool in baryon partial wave analysis, and this approach can be easily extended to accommodate other theoretical models and/or to include more experimental data.
_Introduction:_ The study of hadron spectra helps to understand their inner structure and the underlying dynamics. Although hadron spectroscopy has been widely investigated by means of phenomenological models, effective theories, lattice QCD, etc., many uncertainties remain to be clarified, especially in the baryon sector: except for the ground states and the first several low-lying resonances, the properties and even the existence of many baryon resonances are unclear [1]. The focus of this work is on \(\Sigma\) hyperons. There are many 1-star and 2-star \(\Sigma\) resonances listed in PDG, and further clarifying the properties of the lowest \(\Sigma\) resonances has special phenomenological impact, helping to verify different models [2].
Partial wave analysis (PWA) of scattering reactions has been the basic approach in studies of baryon resonances. The general procedure of PWA is to fit experimental data with theoretical formulae containing different combinations of baryon resonances, and to compare the \(\chi^{2}\) of the fits from different attempts to determine the resonances' significance and properties. Many studies have focused on \(\Sigma\) resonances using the conventional \(\chi^{2}\) fitting-based PWA [3; 4; 5; 6; 7; 8; 9; 10], which has taught us much about the \(\Sigma\) spectrum. However, \(\chi^{2}\) fitting-based analysis is unable to give quantitative estimates of the statistical significance of possible resonances and is not stable in determining resonance properties when more than one additional state is included. Regarding this, we propose the application of neural networks (NN) in PWA, since they can give quantitative probabilities in category classification and are potentially more stable in parameter regression.
NN has shown its life-changing potential in many fields of high-energy and particle physics, such as handling experimental data produced by colliders [11; 12], extracting parton distribution functions [13; 14], accelerating lattice QCD simulations [15; 16; 17], and inferring the nature of exotic hadrons [18; 19]; a recent review can be found in the summary report of the Snowmass white papers [20]. In general, NN is a powerful numerical tool for finding hidden relationships and correlations. In this sense, PWA is an ideal arena for utilizing NN, since there are no direct equations relating experimental measurements to resonance properties, somewhat analogous to the famous NN application in image recognition.
\(\overline{K}N\to\pi\Lambda\) scattering is a perfect channel for investigating \(\Sigma\) resonances, for \(\pi\Lambda\) is a pure isospin-1 channel. Currently, the highest-statistics and most precise data for this channel are the \(K^{-}p\to\pi^{0}\Lambda\) data presented by the Crystal Ball collaboration for both differential cross sections and \(\Lambda\) polarization, with the center-of-mass energy from 1569 to 1676 MeV [21]. We choose to analyze this reaction using the Crystal Ball data in this first NN application to PWA on account of the clarity of the channel and the quality of the data, which is pertinent for showing the feasibility of our new approach.
In this article, we construct a convolutional NN to determine, with probabilities, the quantum numbers and properties of the three most significant \(\Sigma\) resonances in \(K^{-}p\) scattering, with sophisticated error estimation. We demonstrate the feasibility and advantages of using NN in PWA and propose wider application of NN in future PWA of baryon resonances.
_Theoretical Background:_ We employ the effective Lagrangian method which is generally used in PWA to compute the differential cross section and \(\Lambda\) polarization of \(K^{-}p\to\pi^{0}\Lambda\). We follow the theoretical framework of
our previous work [3; 4] and the details can be found in the Supplemental Materials [22]. The Feynman diagrams of \(K^{-}p\rightarrow\pi^{0}\Lambda\) is shown in Fig. 1.
The building blocks of the amplitude include the effective vertices, form factors and propagators of the exchanged particles, which contain the cut-off parameter \(\Lambda\) of the form factor, the coupling constant \(f\), and the mass \(M\) and width \(\Gamma\) (for unstable resonances) of the exchanged particle. Formally, the amplitude can be expressed as a function of these parameters
\[\mathcal{M}_{\lambda\lambda^{\prime}}=\sum_{i}\mathcal{M}^{i}_{\lambda\lambda ^{\prime}}(\Lambda_{i},\ f_{i},\ m_{i},\ \Gamma_{i}), \tag{1}\]
where \(i\) labels the exchanged particles, and \(\lambda\) and \(\lambda^{\prime}\) stand for the spin indices of the proton and the \(\Lambda\) hyperon. The differential cross section and the \(\Lambda\) polarization [23] can be expressed as
\[\frac{d\sigma}{d\Omega}=\frac{1}{64\pi^{2}s}\frac{|\mathbf{q}|}{|\mathbf{k}|}\overline{|\mathcal{M}|^{2}}, \tag{2}\]
\[P_{\Lambda}=\frac{2\,\mathrm{Im}\left(\mathcal{M}_{\frac{1}{2}}\mathcal{M}_{-\frac{1}{2}}^{*}\right)}{\overline{|\mathcal{M}|^{2}}}, \tag{3}\]
where \(d\Omega=2\pi\,d\cos\theta\) with \(\theta\) the angle between the outgoing \(\pi\) and the beam direction in the c.m. frame; \(s=(p+k)^{2}\), and \(\mathbf{k}\) and \(\mathbf{q}\) denote the 3-momenta of \(K^{-}\) and \(\pi\) in the c.m. frame, respectively. \(\overline{|\mathcal{M}|^{2}}\) denotes the spin-averaged amplitude squared of the reaction.
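As a numerical illustration of Eqs. (2)-(3), the two observables can be evaluated at a single angle from the spin-non-flip (\(f\)) and spin-flip (\(g\)) amplitudes, a decomposition equivalent to the helicity amplitudes up to normalization conventions; the toy amplitude and kinematic values below are assumptions, not fitted results:

```python
import numpy as np

def observables(f, g, s, k_mag, q_mag):
    # Spin-averaged squared amplitude from the spin-non-flip (f) and
    # spin-flip (g) amplitudes at one scattering angle.
    M2_bar = abs(f) ** 2 + abs(g) ** 2
    dsig = q_mag / (64.0 * np.pi**2 * s * k_mag) * M2_bar   # cf. Eq. (2)
    pol = 2.0 * np.imag(f * np.conj(g)) / M2_bar            # cf. Eq. (3)
    return dsig, pol

# Toy evaluation at one angle (all numbers made up, natural units).
dsig, pol = observables(1.0, 0.5j, s=2.6, k_mag=0.7, q_mag=0.6)
```

Note that the polarization is automatically bounded, \(|P_{\Lambda}|\leq 1\), since \(2|\mathrm{Im}(fg^{*})|\leq|f|^{2}+|g|^{2}\), and it vanishes whenever \(f\) and \(g\) are relatively real.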
The basic procedure of partial wave analysis is to determine the resonance properties by comparison with experimental data. In practical analyses, the \(t\)-channel \(K^{*}\), \(u\)-channel \(p\), and \(s\)-channel \(\Sigma(1189)\frac{1}{2}^{+}\), \(\Sigma(1385)\frac{3}{2}^{+}\), \(\Sigma(1670)\frac{3}{2}^{-}\) and \(\Sigma(1775)\frac{5}{2}^{-}\), all of whose establishments are rated 4-star in PDG, are always included as background contributions. There are 14 tunable parameters for these basic 6 channels: 6 cut-off parameters (one per channel), 2 coupling constants for the \(t\)-channel \(K^{*}\), 3 SU(3) breaking factors for the couplings of the \(u\)-channel \(p\) and the \(s\)-channel \(\Sigma(1189)\frac{1}{2}^{+}\) and \(\Sigma(1385)\frac{3}{2}^{+}\), and the mass, width and coupling constant of \(\Sigma(1670)\frac{3}{2}^{-}\). The remaining parameters are relatively well determined and are fixed to PDG estimates. Extra \(\Sigma\) resonances with \(J^{P}=1/2^{\pm},\ 3/2^{\pm}\) are added at different stages. Each additional resonance introduces 4 parameters: the cut-off parameter, mass, width, and coupling constant of the exchanged particle.
Note that this is a single-channel framework. Although multichannel analyses of the \(\overline{K}N\) interaction [5; 6; 7; 8; 9; 10; 24] are more sophisticated, we stick to a single-channel analysis because the main point of this article is to demonstrate the advantage of applying NN to PWA, which allows a direct comparison with our previous work. Moreover, we will show that the main error comes from the experimental uncertainties, and we find no definite distinction when comparing the results. It is straightforward to accommodate coupled-channel effects in our NN method for further study using future experimental data.
_Partial-Wave Analysis using a Neural Network:_ A NN can be used both for classification (CA), which gives discrete model predictions, and for regression (RE), which is applicable to continuous model outputs. In this study, CA corresponds to identifying the quantum number of the most significant \(\Sigma\) resonance in \(K^{-}p\) scattering at each stage, while RE accounts for determining the parameters of those resonances. We utilize a joint NN with convolutional and pooling layers followed by fully-connected feedforward (FCFF) layers. The total number of tunable parameters is around 0.2 M (\(2\times 10^{5}\)). The loss function is a weighted combination of the cross-entropy loss and the mean squared error loss, such that both CA and RE can be accommodated in one NN model. The setup of the NN is empirical; many other choices, such as adding or removing hidden layers (neurons), using FCFF layers only, or using different loss functions, were also carefully checked, and the results are not significantly changed within errors.
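A minimal sketch of a weighted CA+RE loss of this kind is given below; the class ordering, the weight `alpha`, and the plain-NumPy formulation are our illustrative choices, since the paper does not spell out these details:

```python
import numpy as np

def joint_loss(logits, class_label, param_pred, param_true, alpha=1.0):
    # Classification term: cross entropy over the four J^P candidates,
    # computed with a numerically stable log-softmax.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    ce = -log_probs[class_label]
    # Regression term: mean squared error on the resonance parameters
    # (e.g. mass, width, coupling), weighted by alpha.
    mse = np.mean((param_pred - param_true) ** 2)
    return float(ce + alpha * mse)
```

During training, the same network output would feed both terms, so one gradient step updates the classification head and the parameter-regression head together.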
As in the conventional \(\chi^{2}\) fitting approach, we carry out the NN-based PWA for the one-, two- and three-additional-resonance (1R, 2R and 3R) cases separately. In these sequential analyses, the previously determined probabilities of states are always inherited; thus in each case CA only needs to determine the probability of the newly added resonance possessing a specific quantum number \(J^{P}=1/2^{+},\ 1/2^{-},\ 3/2^{+}\) or \(3/2^{-}\). The training data are generated based on the effective theory formalism discussed above. For each case, we produce 20 M (5 M for each \(J^{P}\) candidate) training data sets with 256 data points in each set. The choice of 256 (128 differential cross-section and 128 \(\Lambda\) polarization points) is made to be consistent with the experimental data [21] used in the final prediction, and this number can be modified with no effort to accommodate other experiments. In addition, we also generate 0.24 M validation data sets and 0.24 M testing data sets for the tuning of hyperparameters and the performance measurement of the trained network, respectively. To keep generality, in the data generation the parameters are chosen randomly, with background parameters in the ranges listed in the Supplemental Materials [22] and with the mass and width of the target resonance(s) in the ranges \(1.44\sim 1.9\) GeV (1.44 GeV is about the threshold) and \(0.01\sim 0.4\) GeV, respectively. Numerically, we find that this number of training data sets is well sufficient for the NN to learn the connection between the data sets and the significance of the resonances.

Figure 1: Feynman diagrams for \(K^{-}p\rightarrow\pi^{0}\Lambda\): (a) \(t\)-channel \(K^{*}\) exchange; (b) \(u\)-channel proton exchange; (c) \(s\)-channel \(\Sigma\) and its resonances exchanges.
It is important to have a careful error estimation. The systematic error has two sources: one is the fluctuation of the models and training processes due to random initial values; the other comes from the NN performance on the test sets. The first systematic error is controlled by independently training 20 models with different initial values. The second systematic error is taken to be one minus the accuracy for CA and the relative uncertainty on the test sets for RE. To estimate the statistical error, 4000 sets of mock data are generated from a normal distribution determined by the central values and uncertainties of the experimental data, and we take the standard deviation of the 4000 model outputs to be the statistical error. The total error is the quadratic sum of all three errors.
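The error combination can be summarized schematically as follows; the function name, the dummy mean-value "model", and the concrete numbers are illustrative assumptions of ours, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)

def total_error(outputs_over_inits, test_accuracy, central, exp_err,
                predict, n_mock=4000):
    # Systematic error 1: spread of models trained from different
    # random initial values.
    sys1 = np.std(outputs_over_inits)
    # Systematic error 2: one minus the accuracy on the test sets (CA case).
    sys2 = 1.0 - test_accuracy
    # Statistical error: std of the prediction over mock data sets resampled
    # from the experimental central values and uncertainties.
    mocks = rng.normal(central, exp_err, size=(n_mock, central.size))
    stat = np.std([predict(m) for m in mocks])
    # Total error: quadratic sum of the three contributions.
    return np.sqrt(sys1**2 + sys2**2 + stat**2)

# Illustrative call: four model outputs, 96.9% test accuracy, 256 data
# points with 0.1 absolute uncertainty, and a dummy mean-value "model".
outs = np.array([0.99, 1.00, 1.00, 0.98])
err = total_error(outs, 0.969, np.zeros(256), 0.1 * np.ones(256),
                  predict=lambda d: d.mean())
```

In this toy call the second systematic term dominates, mirroring how the quoted 100.0(3.1)% probability in the 1R analysis is driven by the \(1-96.9\%\) test-set term.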
All the models are trained using the ADAM algorithm with gradually decreasing learning rates. We insert additional training epochs with artificially expanded data (AED), generated on the fly by distorting the original training data under the assumption that they carry the same errors as the experimental data. This helps the model learn to handle noisy data. With AED, the probability in CA obtained from the central values of the experimental data is consistent with that obtained when the uncertainties of the experimental data are taken into account. Details of the training schemes are given in the Supplemental Materials [22].
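A possible on-the-fly implementation of AED distorts each clean training set with Gaussian noise at the level of the experimental uncertainties; the helper below, including its name and the chosen shapes, is an illustrative sketch rather than the authors' code:

```python
import numpy as np

rng = np.random.default_rng(2)

def expand_batch(clean_sets, exp_err, n_copies=4):
    # Repeat each clean training set n_copies times and distort every copy
    # with Gaussian noise at the level of the experimental uncertainties,
    # so the noise realizations differ from epoch to epoch.
    reps = np.repeat(clean_sets, n_copies, axis=0)
    return reps + rng.normal(0.0, exp_err, size=reps.shape)

# Illustrative shapes: 8 clean sets of 256 points, 0.05 absolute uncertainty.
clean = np.zeros((8, 256))
aug = expand_batch(clean, 0.05 * np.ones(256), n_copies=4)
```

Because the noise is redrawn every epoch, the network never sees the same distorted set twice, which is what makes the expansion effectively unlimited.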
_Results and Discussion:_ In the 1R analysis, the averaged CA accuracy on the test sets is 98.5% (96.9% for \(1/2^{+}\), 99.4% for \(1/2^{-}\), 99.3% for \(3/2^{+}\) and 98.5% for \(3/2^{-}\)); such a high accuracy is quite satisfying. For convenience, we will use 1p, 1m, 3p and 3m to denote these \(J^{P}\) quantum numbers in the following. With the experimental data, 20 models with different initial values give a consistent 100% probability for the first additional resonance to be 1p. The situation is unchanged when the errors of the experimental data are taken into account. The second systematic error is therefore \(1-96.9\%\), and the first systematic error and the statistical error are both zero, resulting in a final prediction for the first state to be 1p with 100.0(3.1)% probability\({}^{1}\). Given the wide parameter ranges used when generating the input data, this accuracy is remarkable and demonstrates the feasibility of our NN approach. The CA result that 1p is the most significant state is consistent with the \(\chi^{2}\) fitting analysis [3; 4] using the same theoretical framework, while we additionally obtain a quantitative description of the significance and confidence. The mass of 1p in the 1R case is determined to be 1.58(6)(1)(2) GeV, where the three errors are the two systematic errors and the statistical error as stated previously. This result is also consistent with the \(\chi^{2}\) fitting result (around 1.63 GeV) within errors and physically provides significant support for the 3-star \(\Sigma(1660)1/2^{+}\) in PDG. The mass from our NN analysis in the 1R case is relatively low. However, this result rests on the assumption that there is only one additional resonance; in principle, the values will be modified when more states are involved, so we do not use the 1R result as the final prediction of the 1p parameters. The other parameters of 1p are listed in the Supplemental Materials [22].
Footnote 1: Note that the CA results we list here and thereafter are the mean values and standard errors. In principle the probabilities would act more like a log-normal distribution whose parameters can be deduced from the mean values and standard error.
In the 2R analysis, the CA result of the 1R case is inherited, so the first state in the 2R case is fixed to be 1p. The averaged CA accuracy on the test sets is 94.8% (96.7% for 1p1p, 98.4% for 1p1m, 98.2% for 1p3p and 86.0% for 1p3m). The lower accuracy is understandable, since adding one more resonance results in a higher-dimensional parameter space and a flatter loss function. The CA result shows that the second resonance can be 1m, 3p or 3m with probability 15(28)%, 72(24)% and 12(30)%, respectively. The total errors are used here and hereafter for convenience. This is consistent with the \(\chi^{2}\) fitting-based PWA [4], where the largest \(\chi^{2}\) improvement comes from 1p3p. The most probable choice is 3p, which has a more than three-sigma statistical significance, while the probabilities of the other two are statistically consistent with zero within errors. So the second most significant resonance is determined to be 3p. The main error on the probabilities comes from the uncertainties of the experimental data, indicating that the current experimental uncertainties provide the main limitation in PWA. In the RE part, the masses of 1p are 1.62(11) GeV for the 1p1m combination, 1.64(11) GeV for 1p3p and 1.61(9) GeV for 1p3m. The three values are consistent, which manifests the stability of the NN analysis. Besides, the 1p mass in the 2R case becomes higher and closer to the mass of the PDG \(\Sigma(1660)1/2^{+}\), indicating that although the contribution of the first 1p is dominant, adding the second resonance is also important. The mass of the second resonance is determined to be 1.75(7) GeV, 1.70(6) GeV and 1.62(8) GeV for 1m, 3p and 3m, respectively. The remaining RE results are
shown in tables of the Supplemental Materials [22]. The errors of the second state are not obviously larger than those of the first state, as they would be in the \(\chi^{2}\) fitting approach, which again shows the strong resolution of our NN. The combined CA and RE results support the existence of the 1-star \(\Sigma(1780)3/2^{+}\) in PDG.
In the 3R analysis, the CA results of the 2R and 1R cases are inherited, so we focus on the combination 1p3pX, where "X" denotes the newly added resonance, i.e. 1p, 1m, 3p or 3m. The averaged CA accuracy in this case is 81.6%. Given the first resonance to be 1p and the second to be 3p, the third state is determined to be 3m with 97(32)% probability, as shown in the left panel of Fig. 2. Accounting for the probabilities of 1p and 3p, the probability of the first three states being 1p3p3m is the product of the three probabilities, namely 70(33)%. In the RE part, the masses of 1p, 3p and 3m are determined to be 1.62(11) GeV, 1.72(6) GeV and 1.62(9) GeV, respectively. The masses of 1p and 3p are consistent with the 2R results, which means that adding the third resonance does not affect them much. On the other hand, our results support the existence of the one-star \(\Sigma(1580)3/2^{-}\) in PDG. The statistical significance of the third resonance is around two sigma; thus we choose not to go beyond three additional resonances.
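The central value of the joint 1p3p3m probability can be reproduced directly as the product of the stage-wise probabilities; the Monte Carlo error propagation below is one illustrative recipe of ours, since the paper does not specify how the quoted 70(33)% uncertainty is propagated:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stage-wise probabilities (mean, error) quoted in the text: 1p in the 1R
# step, 3p given 1p in the 2R step, and 3m given 1p3p in the 3R step.
stages = [(1.00, 0.03), (0.72, 0.24), (0.97, 0.32)]

# Central value of the joint 1p3p3m probability: the product of the stages.
p_central = float(np.prod([m for m, _ in stages]))   # about 0.70

# One way to propagate the quoted errors: sample each stage independently,
# clip to the physical range [0, 1], and take the std of the product.
draws = np.prod([np.clip(rng.normal(m, e, 100_000), 0.0, 1.0)
                 for m, e in stages], axis=0)
p_err = float(draws.std())
```

The clipping enforces that each stage is a probability, which is why the propagated error is somewhat smaller than naive Gaussian propagation would suggest.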
In the above analyses, the resonances are added one by one, and the probability is given separately for the first, second and third most significant resonance. From a different point of view, on the assumption that there are three additional resonances contributing to this reaction, we can look at the probabilities of the four candidates. This can be done by combining the 1R, 2R and 3R probabilities, where in 3R the 1p1mX and 1p3mX combinations are also taken into account. As shown in Fig. 2, the results are 1p with a probability of 100(3)%, 1m with 29(41)%, 3p with 72(24)% and 3m with 98(52)%. These four probabilities sum to three, since we consider three additional resonances, and the values are consistent with the separate probabilities discussed above. We can similarly predict the properties of 1p, 1m, 3p and 3m by a probability-weighted average over the different combinations in 3R, where the related 2R and 1R probabilities are propagated; the results are listed in Table 1. Note that using the central values of our RE results does not guarantee a smaller \(\chi^{2}\) than the \(\chi^{2}\) fitting approach, since the RE loss function compares the parameters directly and knows nothing about \(\chi^{2}\); the best \(\chi^{2}\) fitting result should lie within the uncertainties of our results. We show the comparison of our mass and width results with the three-star \(\Sigma(1660)1/2^{+}\), the three-star \(\Sigma(1750)1/2^{-}\) and the one-star \(\Sigma(1780)3/2^{+}\) of PDG in Fig. 3, which indicates that our results are consistent with the PDG estimates within errors. PDG also lists a one-star \(\Sigma(1580)3/2^{-}\) with only the central value of its mass, and the mass of our 3m state is close to this resonance. Overall, our NN-based PWA supports the existence of the three-star \(\Sigma(1660)1/2^{+}\), the one-star \(\Sigma(1780)3/2^{+}\) and the one-star \(\Sigma(1580)3/2^{-}\) in PDG.
Our results cannot decide the existence of the three-star \(\Sigma(1750)1/2^{-}\) since its probability is consistent with zero.
_Summary and Outlook:_ We present a novel partial wave analysis that takes advantage of neural networks, which can probe the \(J^{P}\) quantum numbers, with probabilities, and the parameters of \(\Sigma\) resonances at the same time. With the implementation of the AED technique, our NN handles experimental data with uncertainties well and gives more stable results. The three most significant resonances are found to be \(1/2^{+}\), \(3/2^{+}\) and \(3/2^{-}\) states with masses 1.62(11) GeV, 1.72(6) GeV and 1.61(9) GeV. The probabilities of the first two states have a more than 3-sigma signal, while the third one has a signal around 2 sigma. The existence of a \(1/2^{-}\) resonance gains no support from our analysis, since its probability is consistent with zero. The probabilities help to provide a quantitative description of the existence of \(\Sigma\) resonances. The main errors come from the uncertainties of the experimental data; therefore, reducing the experimental errors will be the most efficient way to improve the situation. Future experimental data on \(\overline{K}N\) scattering are expected from J-PARC [25], JLab [26] and the forthcoming PANDA experiment [27]. Our NN-based PWA can be easily extended to accommodate other theoretical models and to include new experimental data, and we propose NN-based PWA as an alternative approach to study baryon resonances.

\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline & \(m/\)GeV & \(\Gamma/\)GeV & \(\Gamma_{\overline{K}N}\Gamma_{\pi\Lambda}/\Gamma\) \\ \hline \(\frac{1}{2}^{+}\) & 1.620(105) & 0.173(192) & -0.5(1.6) \\ \hline \(\frac{1}{2}^{-}\) & 1.713(60) & 0.183(123) & -0.60(63) \\ \hline \(\frac{3}{2}^{+}\) & 1.716(60) & 0.245(159) & 0.02(13) \\ \hline \(\frac{3}{2}^{-}\) & 1.607(90) & 0.201(205) & 0.002(7) \\ \hline \hline \end{tabular}
\end{table}
Table 1: RE results for the four candidates.

Figure 2: Probabilities of the most significant resonance in the 1R, 2R and 3R cases (left), and the probabilities of the four candidates assuming there are three additional resonances (right).

Figure 3: Parameters of \(\Sigma\) resonances from the NN results compared with the PDG estimates.
###### Acknowledgements.
We are grateful to Bing-Song Zou and Qian Wang for helpful discussions. This work is partially supported by the Guangdong Major Project of Basic and Applied Basic Research under Grant No. 2020B0301030008 and the Science and Technology Program of Guangzhou under Grant No. 2019050001. JS is supported by the Natural Science Foundation of China under Grant No. 12105108. JL is supported by the Natural Science Foundation of China under Grants No. 12175073 and No. 12222503. LC is supported by the Natural Science Foundation of China under Grant No. 12175036 and the Foundation of the Hunan Provincial Education Department under Grant No. 20A310. The numerical work was done on the supercomputing system in the Southern Nuclear Science Computing Center (SNSC).
|
2306.09792 | GPINN: Physics-informed Neural Network with Graph Embedding | This work proposes a Physics-informed Neural Network framework with Graph
Embedding (GPINN) to perform PINN in graph, i.e. topological space instead of
traditional Euclidean space, for improved problem-solving efficiency. The
method integrates topological data into the neural network's computations,
which significantly boosts the performance of the Physics-Informed Neural
Network (PINN). The graph embedding technique infuses extra dimensions into the
input space to encapsulate the spatial characteristics of a graph while
preserving the properties of the original space. The selection of these extra
dimensions is guided by the Fiedler vector, offering an optimised pathologic
notation of the graph. Two case studies are conducted, which demonstrate
significant improvement in the performance of GPINN in comparison to
traditional PINN, particularly in its superior ability to capture physical
features of the solution. | Yuyang Miao, Haolin Li | 2023-06-16T12:03:39Z | http://arxiv.org/abs/2306.09792v1 | # GPINN: Physics-informed Neural Network with Graph Embedding
###### Abstract
This work proposes a Physics-informed Neural Network framework with Graph Embedding (GPINN) to perform PINN in graph, i.e. topological space instead of traditional Euclidean space, for improved problem-solving efficiency. The method integrates topological data into the neural network's computations, which significantly boosts the performance of the Physics-Informed Neural Network (PINN). The graph embedding technique infuses extra dimensions into the input space to encapsulate the spatial characteristics of a graph while preserving the properties of the original space. The selection of these extra dimensions is guided by the Fiedler vector, offering an optimised pathologic notation of the graph. Two case studies are conducted, which demonstrate significant improvement in the performance of GPINN in comparison to traditional PINN, particularly in its superior ability to capture physical features of the solution.
## Introduction
The Physics-Informed Neural Network (PINN) has demonstrated potential in solving partial differential equations (PDEs), generating solutions embodied within the framework of neural networks [14]. This differs from conventional numerical approaches, such as the Finite Element Method (FEM), which utilize the weak form of PDEs to obtain discrete solutions. The PINN paradigm, however, uniquely leverages the strong form of PDEs, thereby yielding solution expressions that are continuous and differentiable [14]. The robustness of PINNs has been underscored by their capacity to address inverse problems. Such problems often prove unassailable for FEM due to the absence of complete boundary conditions in certain complex scenarios [1].
A remarkable contribution of the Physics-informed Neural Network (PINN) approach is to integrate physical principles into the equation-solving process. This is achieved by calculating the residual of a partial differential equation (PDE) in its strong form, given that the neural network (NN) solution is differentiable throughout the entire effective domain. This residual is subsequently included in the loss function during NN training [14]. The incorporation of this physical information substantially enhances the solution's robustness, mitigating the overfitting issue in forward problems and enabling the inference of global solutions from sparse local information in inverse problems [1, 1].
However, in the traditional PINN model, the training of the NN is informed only by the PDE and not by the space, i.e. the domain information, which is also pivotal in PDE problems. The reason is that the traditional Euclidean input space of PINN does not align consistently with the physical space: the Euclidean distance between two points may not be meaningful due to the bounded nature of the domain. This discrepancy poses significant challenges when employing PINN for problems associated with complex geometries or highly discontinuous solution fields, e.g. crack or fracture problems [15]. Therefore, enriching the PINN model with domain topology information and aligning the input space more closely with the physical properties could significantly enhance PINN's performance.
In response to these challenges, we propose a Physics-informed Neural Network with Graph Embedding (GPINN) method. This novel approach incorporates extra dimensions so that the problem is addressed in a higher-dimensional topological space. These extra dimensions are informed by graph theory, which quantitatively uncovers the influential relationships between different parts of the domain. The topological space acts as a prior, derived purely from the domain topology, and is integrated with the original Euclidean coordinates in the solving process.
The rest of this paper is organized as follows: Section 2 introduces GPINN, covers the basics of graph theory, and explains the method of determining extra dimensions using the Fiedler vector. Section 3 presents two case studies that apply the developed GPINN to a heat propagation problem and a cracking modeling method in solid mechanics, respectively. Finally, Section 4 offers some concluding remarks.
## Methodology
This section presents the methods and fundamental knowledge underpinning our work. We first introduce the proposed Graph-Embedded Physics-Informed Neural Network (GPINN), followed by an overview of graph theory. We then discuss the application of graph theory in determining the
extra dimension of GPINN.
### PINN with Graph Embedding
The Physics-informed Neural Network is dedicated to obtaining data-driven solutions of partial differential equations (PDEs), which generally take the form Raissi, Perdikaris, and Karniadakis (2019):
\[\mathbf{u}_{t}(\mathbf{x},t)+\mathcal{N}(\mathbf{u}(\mathbf{x},t))=0,\mathbf{x}\in\Omega,t\in[0,T] \tag{1}\]
where \(\mathbf{u}(\mathbf{x},t)\) represents a solution field dependent on the spatial (\(\mathbf{x}\)) and time (\(t\)) coordinates. The complexity of a physical system governed by such a partial differential rule usually makes it impossible to find an analytical solution of \(\mathbf{u}(\mathbf{x},t)\). In this case, numerical methods are usually employed to obtain an approximate, typically discrete solution via its weak form Zienkiewicz, Taylor, and Taylor (2000). However, PINN directly solves the strong form presented by Eq.1 by expressing the solution as a neural network mapping that relates the inputs \(x\in\Omega\subset\mathbb{R}^{d}\) and \(t\in[0,T]\) to the output \(u(x,t)\in\mathbb{R}\)Raissi, Perdikaris, and Karniadakis (2019).
\[\mathbf{u}_{NN}(\mathbf{x},t):\Omega\rightarrow\mathbb{R} \tag{2}\]
In PINN, solving a partial differential equation is converted into an optimisation problem, in which the developed neural network is trained with the loss function defined as:
\[\mathcal{L}=\omega_{1}\mathcal{L}_{\mathrm{PDE}}+\omega_{2}\mathcal{L}_{ \mathrm{Data}}+\omega_{3}\mathcal{L}_{\mathrm{IC}}+\omega_{4}\mathcal{L}_{ \mathrm{BC}} \tag{3}\]
where \(\mathcal{L}_{\mathrm{PDE}}\), \(\mathcal{L}_{\mathrm{Data}}\), \(\mathcal{L}_{\mathrm{IC}}\) and \(\mathcal{L}_{\mathrm{BC}}\) denote the residual of the PDE, i.e. \(\mathcal{L}_{\mathrm{PDE}}=u_{t}(x,t)+\mathcal{N}(u(x,t))\), the loss of the sampling points, the loss of the initial condition and the loss of the boundary conditions, respectively, and the \(\omega\)s are their scaling factors.
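As a minimal illustration of how the weighted terms of Eq. (3) combine, the sketch below evaluates a composite loss for a 1D Poisson problem \(u''=f\) on a grid, with finite differences standing in for the automatic differentiation a real PINN would use. All function names and weights here are illustrative, not from the paper.

```python
# Toy illustration of the composite loss in Eq. (3): a weighted sum of a PDE
# residual term, a data term, and a boundary term. A real PINN evaluates the
# residual via automatic differentiation of a neural network; here finite
# differences on a 1D Poisson problem u'' = f stand in for that machinery.

def pde_residual_loss(u, f, h):
    """Mean squared residual of u'' - f on interior grid points."""
    res = [(u[i-1] - 2*u[i] + u[i+1]) / h**2 - f[i] for i in range(1, len(u)-1)]
    return sum(r*r for r in res) / len(res)

def mse(a, b):
    return sum((x - y)**2 for x, y in zip(a, b)) / len(a)

def total_loss(u, f, h, data_idx, data_vals, bc_vals, w=(1.0, 1.0, 1.0)):
    l_pde = pde_residual_loss(u, f, h)                 # L_PDE
    l_data = mse([u[i] for i in data_idx], data_vals)  # L_Data
    l_bc = mse([u[0], u[-1]], bc_vals)                 # L_BC
    return w[0]*l_pde + w[1]*l_data + w[2]*l_bc

# Exact solution of u'' = 2 with u(0) = u(1) = 0 is u(x) = x*(x-1),
# so the composite loss evaluated at the exact solution vanishes.
n, h = 11, 0.1
xs = [i*h for i in range(n)]
u_exact = [x*(x - 1) for x in xs]
f = [2.0]*n
loss = total_loss(u_exact, f, h, [5], [u_exact[5]], [0.0, 0.0])
```

Evaluating the same loss at a wrong candidate (e.g. \(u\equiv 0\)) yields a large value, which is exactly the signal the optimiser descends.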
The data-driven solution offered by PINN is continuous and differentiable across the domain \(\Omega\subset\mathbb{R}^{d}\) and the time interval \([0,T]\). This type of solution has distinct advantages over discrete numerical solutions, but the same property restricts its use in handling discontinuous fields or fields with non-differentiable segments. Furthermore, while the PINN solution provides a mapping between the input and solution spaces, the model itself does not strictly adhere to the underlying physics: the Euclidean distance between two points in the input space does not always correspond to their physical distance in \(\Omega\). An example of this discrepancy is illustrated in Fig.1, using a heat propagation scenario in a two-dimensional 'house' \(\Omega\). The Euclidean distance between points A and B in PINN's input space is depicted in Fig.1(a). In a physical context, however, heat propagates along the path shown in Fig.1(b) or (c), because the prescribed domain \(\Omega\) prevents heat from travelling in a straight line.
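The distance discrepancy of Fig.1 can be made concrete on a toy gridded 'house': with a wall blocking the straight line, the shortest in-domain (geodesic) path found by breadth-first search is three times the Euclidean distance. The grid and wall geometry below are hypothetical stand-ins for the paper's domain.

```python
# Illustration of the Fig. 1 discrepancy: on a gridded domain with an internal
# wall, the shortest in-domain (geodesic) path between two points can be much
# longer than their Euclidean distance.
from collections import deque
from math import hypot

W = H = 7
wall = {(3, y) for y in range(0, 6)}          # vertical wall, gap only at y = 6
free = {(x, y) for x in range(W) for y in range(H)} - wall

def geodesic(a, b):
    """Breadth-first search path length (in grid steps) inside the domain."""
    q, seen = deque([(a, 0)]), {a}
    while q:
        (x, y), d = q.popleft()
        if (x, y) == b:
            return d
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if (nx, ny) in free and (nx, ny) not in seen:
                seen.add((nx, ny))
                q.append(((nx, ny), d + 1))
    return None

A, B = (0, 0), (6, 0)
d_euclid = hypot(B[0] - A[0], B[1] - A[1])    # straight through the wall
d_geo = geodesic(A, B)                        # must detour around the gap
```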
To address this issue, this paper proposes a method that combines Graph Embedding into the Physics-informed Neural Network (PINN) to align the input space more closely with the physical attributes. As demonstrated by Chung et al. (2011), a topological space defined by graph theory more accurately captures physics-consistent characteristics compared to a Euclidean space. To this end, our approach transforms the operating space of PINN from the conventional Euclidean space to an approximated topological space by incorporating extra dimensions into the input space. Consequently, the solution of Eq.1 is modified as follows:
\[\mathbf{u}_{NN}(\mathbf{x},t,\mathbf{z}):\Omega\rightarrow\mathbb{R} \tag{4}\]
in which the introduced extra dimensions are denoted by \(\mathbf{z}\) and \(\mathbf{z}\in\mathbb{R}^{d_{z}}\) where \(d_{z}\) is the number of extra dimensions.
**Remark 1**: _The GPINN methodology transforms the operating space from a Euclidean framework to a topological (graph-based) one through the incorporation of extra dimensions. This ensures a closer alignment between the problem domain and the physical attributes of the system under consideration._
**Remark 2**: _In this method, there is no need for additional coordinate correction in the original space, since the graph information is incorporated by extra dimensions in the PINN model that leaves the original inputs and their partial derivatives unaffected._
**Remark 3**: _The extra dimensions are uniquely defined by the specific geometry being studied, indicating that their identification is topology-specific and independent of the initial or boundary conditions._
The subsequent subsections provide detailed information on determining extra dimensions for a prescribed domain. The architecture of the Physics-informed Neural Network with Graph Embedding is depicted in Fig.2.
Figure 1: Distances of heat propagation (a) in the input space, (b) in a possible propagation path in physics and (c) in the shortest propagation path in physics.
### Graph Theory
A graph, denoted as \(G=(V,E)\), is formed by a set of vertices \(V\) interconnected by a set of edges \(E\). As depicted in Fig. 3, vertices are symbolized as dots, and the lines that join them constitute the edges of the graph. The adjacency matrix \(A\) denotes the connectivity details of a graph, with \(A_{i,j}\neq 0\) indicating an edge between the \(i^{th}\) and \(j^{th}\) vertices. Graphs can be categorized into two main types depending on whether their edges bear directionality: undirected and directed graphs. In the case of undirected graphs, the adjacency matrix is symmetric, thereby satisfying \(A_{i,j}=A_{j,i}\).
This study exclusively considers undirected graphs due to the fact that the physical interactions are mutual. The degree of a vertex, a metric representing the number of nodes it connects to, can be found in the degree matrix \(D\), where \(D_{i,i}\) corresponds to the degree of vertex \(i\). The graph Laplacian matrix is subsequently defined as \(L=D-A\).
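The quantities defined above can be assembled in a few lines; the sketch below builds \(A\), \(D\), and \(L=D-A\) for a small undirected path graph (the helper name is ours, not from the paper).

```python
# Minimal sketch of the graph quantities defined above for an undirected graph:
# adjacency matrix A, degree matrix D, and graph Laplacian L = D - A.
def laplacian(n, edges):
    A = [[0]*n for _ in range(n)]
    for i, j in edges:
        A[i][j] = A[j][i] = 1          # undirected: A is symmetric
    D = [[0]*n for _ in range(n)]
    for i in range(n):
        D[i][i] = sum(A[i])            # degree of vertex i
    L = [[D[i][j] - A[i][j] for j in range(n)] for i in range(n)]
    return A, D, L

# A 4-vertex path graph 0-1-2-3.
A, D, L = laplacian(4, [(0, 1), (1, 2), (2, 3)])
```

Every row of \(L\) sums to zero, which is why the constant vector is always an eigenvector with eigenvalue zero.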
### Complex Geometries defined in Topological Space
A component's mesh can be conceptualized as a graph \(G_{m}(V,E)\), where the set of vertices \(V\) signifies the element points, and \(E\) represents the edges. It's worth noting that a graph merely encodes the connectivity among nodes without preserving their exact positions. Intriguing insights can be gleaned when the mesh is treated strictly as a graph.
When the mesh is interpreted as a graph, it becomes apparent that many components with complex geometry resemble dumbbell-shaped graphs featuring dense clusters interconnected by tubes. An instance of such a connection is portrayed in Fig.4 for the structure presented in Fig.1. Fig.4b illustrates the corresponding graph in the eigenspace. The component visualised in the Cartesian coordinates is depicted in Fig.4a. In both figures, nodes located at crucial positions are color-coded for enhanced comprehension. In light of this perspective, the concept of dimension adaptation can be reconceived within the graph domain as graph labeling, in which the labels should:
* Conform to the pattern of the component's overall shape.
* Maintain smoothness between clusters to simulate a continuous field on a graph.
The Fiedler vector potentially fulfills the above stipulations. It is the eigenvector of the graph Laplacian associated with the smallest non-zero eigenvalue. The subsequent section will establish why the Fiedler vector meets these criteria.
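As a quick sanity check of this choice, the snippet below computes the Fiedler vector of a path graph and confirms that its entries vary monotonically from one end of the path to the other, i.e. the labels follow the component's overall shape (assuming NumPy is available; function names are illustrative).

```python
# Computing the Fiedler vector: the eigenvector of the graph Laplacian paired
# with the smallest non-zero eigenvalue. For a path graph the resulting values
# vary monotonically from one end to the other, i.e. they encode position.
import numpy as np

def fiedler_vector(L):
    w, v = np.linalg.eigh(L)           # eigh: eigenvalues in ascending order
    return v[:, 1]                     # index 0 is the zero mode (connected graph)

n = 6
L = np.zeros((n, n))
for i in range(n - 1):                 # path graph 0-1-...-5
    L[i, i] += 1; L[i+1, i+1] += 1
    L[i, i+1] -= 1; L[i+1, i] -= 1

z = fiedler_vector(L)
diffs = np.diff(z)
```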
### Fiedler Vector: Relation with Heat Equation
Chung _et al._ [10] attempted to show the relationship between the Fiedler vector and the discrete heat transfer equation. Consider a discrete heat transfer equation on a graph:
\[\frac{d\mathbf{f}}{dt}=-\mathbf{L}\mathbf{f},\mathbf{f}(0)=\mathbf{f}_{0} \tag{5}\]
where \(\mathbf{L}\) denotes the Laplacian. The solution to Eq.5 is:
\[\mathbf{f}(t)=\sum_{i=1}^{n}\left(\mathbf{f}_{0},\mathbf{u}_{i}\right)e^{- \lambda_{i}t}\mathbf{u}_{i} \tag{6}\]
Figure 3: An example graph.
Figure 2: Schematic of Physics-informed Neural Network with Graph Embedding. The neural network constructs the relation between the input spatial coordinate \(\mathbf{x}\), time \(t\) and extra dimensions \(\mathbf{z}\) and the output \(\mathbf{u}\).
here \(\mathbf{u}_{i}\) denotes the orthonormal basis of eigenvectors of \(\mathbf{L}\). Since \(\lambda_{1}=0\) and \(\mathbf{u}_{1}=[1,1,...,1,1]/\sqrt{N}\), the solution Eq.6 can be rewritten as:
\[\mathbf{f}(t)\approx\left(\mathbf{f}_{0},\mathbf{u}_{1}\right)\mathbf{u}_{1}+ \left(\mathbf{f}_{0},\mathbf{u}_{2}\right)e^{-\lambda_{2}t}\mathbf{u}_{2}+R \tag{7}\]
where the first term is the final-state average signal and the remainder \(R\) goes to zero faster than the term \(\left(\mathbf{f}_{0},\mathbf{u}_{2}\right)e^{-\lambda_{2}t}\mathbf{u}_{2}\). Therefore, the second eigenvector \(u_{2}\), with a constant bias, can model the transient state. Then recall Rauch's hot spots conjecture: let \(\mathcal{M}\) be an open connected bounded subset and let \(f(\sigma,p)\) be the solution of the heat equation
\[\frac{\partial f}{\partial\sigma}=\Delta f \tag{8}\]
with the initial condition \(f(0,p)=g(p)\) and the Neumann boundary condition \(\frac{\partial f}{\partial n}(\sigma,p)=0\) on the boundary \(\partial\mathcal{M}\). Then, for most initial conditions, if \(p_{\text{hot}}\) is a point at which the function \(f(\sigma,\cdot)\) attains its maximum (hot spot), then the distance from \(p_{\text{hot}}\) to \(\partial\mathcal{M}\) tends to zero as \(\sigma\rightarrow\infty\) (Banuelos and Burdzy, 1999). The minimum point (cold spot) follows a similar rule. Since the second eigenvector models the heat transient stage, we can employ the conjecture (Chung et al., 2011):
**Conjecture 1**: _Given a graph \(G=(V,E)\), if \(v^{*},w^{*}\in V\) satisfy_

\[\left|u_{2}(v)-u_{2}(w)\right|\leqslant\left|u_{2}\left(v^{*}\right)-u_{2} \left(w^{*}\right)\right|\quad\forall(v,w)\in V^{2}\]

_then_

\[d(v,w)\leqslant d\left(v^{*},w^{*}\right)\quad\forall(v,w)\in V^{2}\]
_where \(d(v,w)\) is the geodesic between \(v\) and \(w\), in other words, the level of the extremeness of the Fiedler vector's value at a point reflects its geometric information._
This hypothesis posits that the hot and cold spots present the greatest geodesic distance in comparison to any other pair of nodes. It also suggests that nodes positioned at a significant distance from each other will exhibit distinct values. Specifically, if two nodes are part of two separate dense clusters, their values will diverge due to the considerable geodesic distance across the tube.
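The spectral argument of Eqs. (5)-(7) can be verified numerically: expanding \(\mathbf{f}_{0}\) in Laplacian eigenvectors and damping each mode by \(e^{-\lambda_{i}t}\) drives the state to the average of \(\mathbf{f}_{0}\), with the transient dominated by the Fiedler mode. The sketch below assumes NumPy and uses a small path graph.

```python
# Spectral solution of the graph heat equation (Eq. 6): expand the initial
# state f0 in Laplacian eigenvectors and damp each mode by exp(-lambda_i * t).
# As t grows, f(t) approaches the mean of f0, and the slowest-decaying
# non-constant contribution is the Fiedler mode, as argued above.
import numpy as np

def heat_solution(L, f0, t):
    w, U = np.linalg.eigh(L)
    coeff = U.T @ f0                   # projections (f0, u_i)
    return U @ (coeff * np.exp(-w * t))

# Path graph 0-1-2-3 with all heat initially on vertex 0.
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
f0 = np.array([1.0, 0.0, 0.0, 0.0])
f_late = heat_solution(L, f0, 50.0)
```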
The Fiedler Vector can be interpreted as a 1D embedding of the graph, where the values encapsulate information about the graph's structure. In relation to the requirement of smoothness, we can rephrase the definition of eigenvector and eigenvalue, \(\mathbf{L}\mathbf{x}=\lambda\mathbf{x}\), as follows:
\[\begin{split}\mathbf{u}_{k}^{T}\mathbf{Lu}_{k}&= \sum_{m=0}^{N-1}u_{k}(m)\sum_{n=0}^{N-1}A_{mn}\left(u_{k}(m)-u_{k}(n) \right)\\ &=\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}A_{mn}\left(u_{k}^{2}(m)-u_{k}( m)u_{k}(n)\right)\end{split} \tag{9}\]
And owing to the symmetry of the adjacency matrix A (\(A_{m,n}=A_{n,m}\)):
\[\begin{split}\mathbf{u}_{k}^{T}\mathbf{Lu}_{k}=& \frac{1}{2}\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}A_{mn}\left(u_{k}^{2}(m)-u_{k}(m)u_{ k}(n)\right)\\ &+\frac{1}{2}\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}A_{mn}\left(u_{k}^{2 }(n)-u_{k}(n)u_{k}(m)\right)\\ =&\frac{1}{2}\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}A_{mn} \left(u_{k}(n)-u_{k}(m)\right)^{2}=\lambda\end{split} \tag{10}\]
It is evident that the eigenvalues mirror the variation between nodes and their adjacent counterparts. Hence, the lower the eigenvalue, the smoother the corresponding eigenvector will be. Therefore, the second eigenvector will be the smoothest one, barring the first eigenvector, which is a constant vector.
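Eq. (10) is easy to confirm numerically: for each unit eigenvector of \(L\), the summed neighbour-to-neighbour variation equals the corresponding eigenvalue, so the Fiedler mode is indeed the smoothest non-constant mode. A small sketch (assuming NumPy; the graph below is an arbitrary example):

```python
# Numerical check of Eq. (10): for a unit eigenvector u_k of L,
#   u_k^T L u_k = (1/2) * sum_{m,n} A_mn (u_k(n) - u_k(m))^2 = lambda_k,
# so low-eigenvalue modes vary least between neighbouring vertices.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
w, U = np.linalg.eigh(L)               # eigenvalues ascending

def neighbour_variation(u):
    return 0.5 * sum(A[m, n] * (u[n] - u[m])**2
                     for m in range(len(u)) for n in range(len(u)))

variations = [neighbour_variation(U[:, k]) for k in range(len(w))]
```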
Fig.5a illustrates the Fiedler vector in the form of graph labels, while Fig.5b showcases how it is projected back onto the component. It becomes clear that the Fiedler vector's value can unveil the geometric information of the component, or in other terms, it mirrors the shape of the component. The new input space is then developed as \([\mathbf{x},t,z]\) where \(z\) is defined as the obtained Fiedler vector. In practice, a finite element mesh is used to construct the graph and obtain the Fiedler vector values at the graph nodes; FE extrapolation is then employed to extend these values to the whole field. Example code for calculating the Fiedler vector from an FE mesh is available at [https://github.com/hl4220/Physics-informed-Neural-Network-with-Graph-Embedding.git](https://github.com/hl4220/Physics-informed-Neural-Network-with-Graph-Embedding.git).
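A minimal sketch of assembling the augmented input of Eq. (4): compute the Fiedler value at each mesh node and append it as an extra coordinate \(z\). The tiny hand-made node set below stands in for a finite element mesh, and the FE extrapolation step used in the paper is omitted.

```python
# Sketch of building the augmented GPINN input of Eq. (4): each sample point
# keeps its Euclidean coordinates and gains an extra coordinate z equal to the
# Fiedler-vector value at the corresponding graph node.
import numpy as np

coords = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
edges = [(0, 1), (1, 2), (2, 3)]       # mesh connectivity treated as a graph

n = len(coords)
L = np.zeros((n, n))
for i, j in edges:                     # assemble the graph Laplacian
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

w, U = np.linalg.eigh(L)
z = U[:, 1]                            # Fiedler values at the mesh nodes
gpinn_input = np.column_stack([coords, z])   # shape (n_nodes, 3): [x, y, z]
```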
## Results
This section showcases two case studies: one focuses on a heat conduction model, and the other on crack modelling in
Figure 4: (a) Graph of a complex domain, (b) a possible graph layout in the eigenspace
solid mechanics. Both problems are examined using traditional Physics-informed Neural Network (PINN) and the enhanced PINN with Graph Embedding (GPINN). The process of graph construction for approximating extra dimensions is explained, and the results are illustrated, with high-precision finite element method solutions serving as a reference point for comparison.
### Model of heat propagation
A steady-state temperature field follows Poisson's equation, balancing internal heat sources subject to thermal boundary conditions (Cai et al., 2021; Grossmann et al., 2023):
\[\begin{split}&\Delta u(\mathbf{x})=f(\mathbf{x}),\quad\mathbf{x}\in\Omega,\\ & u(\mathbf{x})=u_{\phi}(\mathbf{x}),\quad\mathbf{x}\in\partial\Omega,\\ &\nabla u(\mathbf{x})\cdot\mathbf{n}=v_{\phi}(\mathbf{x})\quad\mathbf{x}\in \partial\Omega,\end{split} \tag{11}\]
where the second and third equations denote the Dirichlet and Neumann boundary conditions, respectively, and \(\Delta\) denotes the Laplace operator. The loss function employed in PINN is expressed as:
\[\begin{split}&\mathcal{L}=\omega_{1}\mathcal{L}_{\rm PDE}+\omega_{2 }\mathcal{L}_{\rm Data}+\omega_{4}\mathcal{L}_{\rm BC},\\ &\mathcal{L}_{\rm PDE}=\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}\left| \Delta u(\mathbf{x}_{p})-f(\mathbf{x}_{p})\right|^{2},\\ &\mathcal{L}_{\rm data}=\frac{1}{N_{D}}\sum_{i=1}^{N_{D}}\left|u( \mathbf{x}_{D})-u^{\mathbf{s}}(\mathbf{x}_{D})\right|^{2},\\ &\mathcal{L}_{\rm BC}=\frac{1}{N_{dbc}}\sum_{i=1}^{N_{dbc}}\left| u(\mathbf{x}_{dbc})-u_{\phi}(\mathbf{x}_{dbc})\right|^{2}+\\ &\frac{1}{N_{nbc}}\sum_{i=1}^{N_{nbc}}\left|\nabla u(\mathbf{x}_{nbc}) \cdot\mathbf{n}-v_{\phi}(\mathbf{x}_{nbc})\right|^{2},\end{split} \tag{12}\]
In this work, the heat propagation problem is defined as finding the steady temperature field in a 2D 'house' with the domain and boundary conditions demonstrated in Fig.6. The house is a square area with two walls that partly separate it. A circular heat source (highlighted in red in Fig.6) is located at a corner of the house, and the boundary away from the heat source is the house's 'window', which is so thin that its temperature stays identical to the 'outside'.
The exemplified problem is investigated by both PINN and GPINN. The results are compared to the FEM results derived from a very fine mesh (\(256\times 256\)) in Fig.7. Fig.8 presents the relative errors of NN to the FEM results. From Fig.7 and 8, GPINN produces satisfactory outcomes when compared to the reference FEM results, particularly in this problem where the traditional PINN exhibits subpar performance.
Fig.7(b) shows the limitation of traditional PINN in dealing with relatively discontinuous fields. This reveals the drawback of PINN: it employs the Euclidean distance directly in modelling, which does not correspond to the real physical distance. After incorporating an extra dimension that expands the problem from 2D Euclidean space to 3D topological space, the PINN model is greatly enhanced. The extra dimension \(z\) is determined by the Fiedler vector in
Figure 5: (a) Visualised graph in the eigenspace, (b) Visualised graph layout in higher-dimensional topological space.
Figure 6: Schematic of the heat propagation problem. The domain of the 2D ’house’ is defined as \(\Omega\); there is a heat source \(f\) in \(\Omega\); the Dirichlet boundary condition is assigned on the boundary at the bottom of \(\Omega\) that represents a ’window’ whose temperature \(u_{\phi}\) is the same as ’outside’.
graph theory, as introduced in the section 'Fiedler Vector: Relation with Heat Equation'. The input 3D coordinate model in the topological space is shown in Fig.9.
### Model of single-side crack
The second case conducts a linear elastic simulation of a single-side crack model. Traditional PINN usually shows its weakness in dealing with crack problems due to the strong discontinuity near the crack (Zheng et al. 2022; Goswami et al. 2020; Haghighat et al. 2021). The governing equation of the linear elasticity is stated as:
\[\begin{split}\mathbf{\nabla}\cdot\mathbf{\sigma}(\mathbf{x})=\mathbf{0},\mathbf{x} \in\Omega,\\ \mathbf{\sigma}(\mathbf{x})\cdot\mathbf{n}(\mathbf{x})=\mathbf{t}(\mathbf{x}),\mathbf{x}\in \partial\Omega,\\ \mathbf{u}(\mathbf{x})=\mathbf{u}_{bc}(\mathbf{x}),\mathbf{x}\in\partial\Omega,\end{split} \tag{13}\]
where \(\mathbf{u}\) and \(\mathbf{\sigma}\) denotes the displacement and stress vectors, respectively, being effective on the domain \(\Omega\) with the boundary \(\partial\Omega\). The second and third equations in Eq.13 denote the Neumann and Dirichlet boundary conditions, respectively, in which \(\mathbf{t}\) and \(\mathbf{u}_{bc}\) are the applied traction and displacement
Figure 8: Relative errors (\(RE\)) of the NN solutions to the reference FEM solution: \(\mathrm{RE}(u)=\left|u-u^{*}\right|/\max\left(\left|u^{*}\right|\right)\). The subfigure on the left side is the relative error of PINN while the right one represents the GPINN.
Figure 10: Schematic of the single-side crack tensile test. The Dirichlet and Neumman boundary conditions are assigned as indicated.
Figure 7: Reference(FEM) and Sample(NN) solutions of the steady temperature field.
Figure 9: Input spatial model in the topological space. A 2D input space is expanded to 3D space by the incorporated extra dimension. The extra dimension is determined by the Fiedler vector to keep physical consistency of the input model. The colour map represents the value distribution of the extra dimension.
conditions. Note that the output field \(\mathbf{u}\) is a vector field, differing from the scalar temperature field targeted in the previous problem (Arora et al., 2022). The bold symbols used in these equations thus refer to vectors rather than scalars, as shown in Eq.13.
In this work, the energy-based loss function is employed for the PINN model. The energy-based loss function aims at minimising the potential energy of the entire structure, which considers global information and requires a lower differentiation order than the collocation loss function (Bai et al., 2022). It has also been shown to perform better in modelling crack problems (Zheng et al., 2022).

Figure 11: Reference (FEM) and Sample (NN) solutions of the displacement field.

Figure 12: Relative errors (\(RE\)) of the NN solutions to the reference FEM solution: \(\mathrm{RE}(u)=\left|u-u^{*}\right|/\max\left(\left|u^{*}\right|\right)\). The subfigures on the left side are the relative errors of PINN while the right ones represent the GPINN.

The loss function employed in PINN for this problem is stated as [1, 10]:
\[\begin{split}\mathcal{L}&=\omega_{1}\mathcal{L}_{PDE}+ \omega_{2}\mathcal{L}_{\mathrm{Data}}+\omega_{4}\mathcal{L}_{\mathrm{BC}},\\ \mathcal{L}_{PDE}&=\int_{\Omega}\frac{1}{2}\mathbf{ \sigma}(\mathbf{x}_{p})\mathbf{\varepsilon}(\mathbf{x}_{p})\mathrm{d}\Omega-\\ &\int_{\partial\Omega}\mathbf{t}(\mathbf{x}_{nbc})\mathbf{u}(\mathbf{x}_{nbc}) \mathrm{d}\partial\Omega,\\ \mathcal{L}_{\mathrm{Data}}&=\frac{1}{N_{D}}(\sum_{ i=1}^{N_{D}}\left\|\mathbf{u}(\mathbf{x}_{D})-\mathbf{u}^{*}(\mathbf{x}_{D})\right\|^{2}+\\ &\sum_{i=1}^{N_{D}}\left\|\mathbf{\sigma}(\mathbf{x}_{D})-\mathbf{\sigma}^{*} (\mathbf{x}_{D})\right\|^{2}),\\ \mathcal{L}_{\mathrm{BC}}&=\frac{1}{N_{dbc}}\sum_{ i=1}^{N_{dbc}}\left\|\mathbf{u}(\mathbf{x}_{dbc})-\mathbf{u}_{bc}\right\|^{2}+\\ &\frac{1}{N_{nbc}}\sum_{i=1}^{N_{nbc}}\left\|\mathbf{\sigma}(\mathbf{x}_ {nbc})\cdot\mathbf{n}-\mathbf{t}(\mathbf{x}_{nbc})\right\|^{2},\end{split} \tag{14}\]
where \(\mathbf{\sigma}\) and \(\mathbf{\varepsilon}\) denote the stress and strain:
\[\begin{split}\mathbf{\varepsilon}(\mathbf{x})&=\frac{1}{2}\left(\nabla\mathbf{u} (\mathbf{x})+\nabla\mathbf{u}(\mathbf{x})^{T}\right),\\ \mathbf{\sigma}(\mathbf{x})&=\mathbb{C}:\mathbf{\varepsilon}(\mathbf{ x}),\end{split} \tag{15}\]
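For concreteness, the constitutive map \(\mathbf{\sigma}=\mathbb{C}:\mathbf{\varepsilon}\) of Eq. (15) can be written out for an isotropic material under plane strain in Voigt notation; the elastic constants below are illustrative, as the paper does not list the material parameters used.

```python
# Sketch of sigma = C : epsilon (Eq. 15) for an isotropic material under plane
# strain, in Voigt notation [eps_xx, eps_yy, gamma_xy]. E and nu are
# illustrative values, not the paper's material constants.
def plane_strain_stress(eps, E=200e9, nu=0.3):
    c = E / ((1 + nu) * (1 - 2 * nu))
    C = [[c * (1 - nu), c * nu,       0.0],
         [c * nu,       c * (1 - nu), 0.0],
         [0.0,          0.0,          c * (1 - 2 * nu) / 2]]
    return [sum(C[i][j] * eps[j] for j in range(3)) for i in range(3)]

# Uniaxial strain of 0.1 % in x produces normal stresses and no shear.
sigma = plane_strain_stress([1e-3, 0.0, 0.0])
```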
This case study involves a tensile test for a model with a single-sided crack. The problem schematic is depicted in Fig.10. The results of both PINN and GPINN methods are presented in Fig.11, alongside the reference FEM results. The distribution of relative error is displayed in Fig.12. From these illustrations, it is evident that the GPINN method yields promising results in comparison to the reference FEM results. Traditional PINN, however, seems to fall short in accurately capturing the solution features for such a problem. The input model enriched with extra dimensions is presented in Fig.13.
## Conclusion
In this work, we present a novel method to perform PINN in the topological, i.e. graph, space for better capturing the physical characteristics of a structure. This is achieved by incorporating extra dimensions into the input space, creating a graph-based spatial model which offers a better capture of the topological properties of a structure. The extra dimensions are derived from graph theory, utilizing the Fiedler vector, which serves as an approximate optimal solution for the graph space layout. Our results illustrate that the graph embedding significantly enhances PINN, particularly for dealing with problems within complex domains.
This method essentially transforms PINN modelling from a Euclidean space into a graph-consistent space. Its potential in engineering applications is significant, considering its ease of implementation and the substantial improvements it brings to PINN's performance. Moreover, the method proposes a possible solution for addressing fracture and crack problems that involve highly discontinuous fields.
In the work presented here, only one extra dimension, the Fiedler vector, was incorporated into our modelling approach. Future research will focus on integrating more extra dimensions that can provide valuable insights for the modelling and solving process in PINN.
|
2302.09264 | Edge Formation Mechanism of Single-Walled Carbon Nanotube Revealed by
Molecular Dynamics Simulations based on a Neural Network Potential | Despite the high potential for applications utilizing the unique properties
of single-walled carbon nanotubes (SWCNTs), the applications remain challenging
due to the difficulty of synthesizing SWCNTs with a specific chirality. To
elucidate the mechanisms that determine their chirality during growth,
intensive efforts have been devoted to classical molecular dynamics (MD)
simulations. However, the mechanism of chirality determination has not been
fully clarified, which can partly be attributed to the limited accuracy of
empirical interatomic potentials in reproducing the behavior of carbon and
metal atoms. In this work, we develop a neural network potential (NNP) for
carbon-metal system to accurately describe the SWCNT growth, and perform MD
simulations of SWCNT growth using the NNP. The MD simulations demonstrate the
process of defect-free chirality-definable SWCNT growth with dynamic
rearrangement of edge configurations, and the consistency between the
probability of edge configuration appearance and the entropy-driven edge
stability predicted from the nanotube-catalyst interfacial energy. It is also
shown that the edge defect formation is induced by vacancy and suppressed by
vacancy healing through adatom diffusion on the SWCNT edges. These results
provide insights into the edge formation thermodynamics and kinetics of SWCNTs,
an important clue to the chirality-controlled synthesis of SWCNTs. | Ikuma Kohata, Ryo Yoshikawa, Kaoru Hisama, Christophe Bichara, Keigo Otsuka, Shigeo Maruyama | 2023-02-18T09:02:42Z | http://arxiv.org/abs/2302.09264v4 | # Catalyzed Single-Walled Carbon Nanotube Growth
###### Abstract
Classical molecular dynamics (MD) simulations have been performed to elucidate the mechanism of single-walled carbon nanotube (SWCNT) growth. However, discussing the chirality has been challenging due to topological defects in simulated SWCNTs. Recently, the application of neural networks to interatomic potentials has been actively studied; such interatomic potentials are called neural network potentials (NNPs). NNPs have a better function-approximation ability and can predict the energy of complex systems more accurately than conventional interatomic potentials. In this study, we developed an NNP to describe SWCNT growth more accurately. We successfully demonstrated defect-free and chirality-definable growth of SWCNTs on Fe nanoparticles by MD simulation. Furthermore, it was revealed that edge vacancies of SWCNTs caused defect formation and that adatom diffusion contributed to vacancy healing, preventing defect formation.
## 1 Introduction
Single-walled carbon nanotubes (SWCNTs) [1] have great potential as a next-generation material for nanoscale devices such as transistors [2], microprocessors [3], transparent conductive films [4] and solar cells [5] because of their excellent electrical, thermal, and optical properties [6, 7]. One of the most important properties of SWCNTs is that the chirality determines whether they are metallic or semiconducting as well as their bandgap [8, 9, 10]. Therefore, chirality control is important for the realization of high-end SWCNT-based devices. However, the method of chirality-controlled synthesis has not been fully established and therefore further elucidation of the growth mechanism is indispensable.
In the past few decades, a number of classical molecular dynamics (MD) simulations were carried out to reveal the growth mechanism of SWCNTs and demonstrated a series of the growth process from the nucleation to the sidewall elongation [11, 12, 13, 14, 15, 16, 17]. Although these studies provided insights into the basic growth mechanism of SWCNTs, the discussion of the chirality was still difficult because the simulated SWCNTs had a number of defects and the chirality could not be assigned. Instead, partial chirality was defined and discussed in several studies. Neyts et al. performed hybrid atomistic simulations of MD and force biased Monte Carlo based on the reactive force field (ReaxFF). They defined local chirality by the diameter and the angle of the hexagon to the axial direction of SWCNTs, and demonstrated the initial growth process of SWCNTs and the chirality change from zigzag to armchair [18, 19]. However, the continuous growth of SWCNTs with a particular chirality was not observed. Yoshikawa et al. assigned the chirality to SWCNTs by MD simulation with a Tersoff-type interatomic potential [20], but the chiralities of the observed SWCNTs were only zigzag and near-zigzag, which was contrary to the experimental reports of near-armchair preferential growth [21, 22, 23, 24].
Currently, there are two possible reasons why it is difficult to simulate SWCNT growth without any intramolecular chirality change caused by defects, as reported in the experimental study [25]: first, the time scale is too short for defect healing, and second, the energetics of SWCNTs including defects is not accurately described by interatomic potentials. In order to solve these problems and to reproduce defect-free growth, it is necessary to perform MD simulations on a long time scale using an accurate interatomic potential.
Recently, the application of artificial neural networks to interatomic potentials has been actively studied to overcome the limitations of conventional interatomic potentials; such interatomic potentials are called neural network potentials (NNPs). Since the idea of symmetry functions to represent high-dimensional potential energy surfaces (PESs) with translational and rotational invariance was proposed by Behler and Parrinello [26], a number of approaches have been developed to construct NNPs in the field of materials science. Among the NNPs, the deep potential [27] is one of the most popular methods because of its high accuracy and high computational efficiency. In this study, we developed an NNP based on the deep potential to simulate SWCNT growth more realistically, and reproduced defect-free SWCNT growth by MD simulation.
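To make the symmetry-function idea concrete, the sketch below implements one Behler-Parrinello-style radial symmetry function with a smooth cutoff and checks its translational invariance; the parameter values (\(\eta\), \(R_s\), \(r_c\)) and the atom positions are illustrative, not those of the NNP developed here.

```python
# Sketch of a Behler-Parrinello radial symmetry function [26], the kind of
# rotation- and translation-invariant descriptor that NNPs build on.
from math import exp, cos, pi, dist

def f_cut(r, r_c):
    """Smooth cutoff: 1 at r = 0, 0 at r >= r_c."""
    return 0.5 * (cos(pi * r / r_c) + 1.0) if r < r_c else 0.0

def g_radial(i, positions, eta=0.5, r_s=0.0, r_c=6.0):
    """Radial symmetry function of atom i summed over its neighbours."""
    return sum(exp(-eta * (dist(positions[i], positions[j]) - r_s) ** 2)
               * f_cut(dist(positions[i], positions[j]), r_c)
               for j in range(len(positions)) if j != i)

atoms = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 1.5, 0.0)]
g0 = g_radial(0, atoms)

# Rigidly translating every atom leaves the descriptor unchanged, because it
# depends only on interatomic distances.
shifted = [(x + 2.0, y - 1.0, z + 0.5) for x, y, z in atoms]
```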
## 2 Method
### Development of NNP
The growth process of SWCNTs consists of the dynamic formation of carbon rings on the catalyst surface through recombination of carbon-carbon bonds. To reproduce SWCNT growth in MD simulations, the interatomic potential must predict the energies of diverse carbon morphologies, including unstable structures. To collect dataset structures covering a wide range of PESs, we sampled three types of atomic structures: disordered structures, defective graphene and CNT growth structures. The disordered structures were obtained by randomly placing atoms in a computational cell of 8-20 Å per side, heating them to 10000 K by classical MD simulation using a Tersoff-type potential [28], and then quenching them to 1000-5000 K (Figure S1a). The defective graphene structures were collected by preparing graphene in a rhombic supercell of 17.24\(\times\)17.24\(\times\)10.0 Å, randomly displacing carbon atoms by 0-4 Å, and relaxing them by MD simulations using the Tersoff-type potential at 1000 K (Figure S1b). The CNT growth structures were sampled from MD simulations of SWCNT growth on Fe\({}_{60}\), Fe\({}_{80}\), Fe\({}_{100}\) and Fe\({}_{120}\) using the Tersoff-type potential and the NNP under development (Figure S1c). The cell size of the CNT growth structures was set to 15.0\(\times\)15.0\(\times\)15.0 Å so that the vacuum space is \(\sim\)10 Å thick. The numbers of atomic configurations in the disordered, defective graphene and CNT growth datasets are 21744, 16871 and 5776, respectively. Each dataset was randomly separated into three subsets, with 10 % as the validation set, 10 % as the testing set and the remainder as the training set.
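As a concrete illustration, the 80/10/10 split described above can be reproduced in a few lines; the helper name and the seed below are ours, not from the paper's code.

```python
import random

def split_dataset(configs, val_frac=0.10, test_frac=0.10, seed=0):
    """Randomly split a list of atomic configurations into
    training/validation/testing subsets (hypothetical helper;
    the fractions follow the 80/10/10 split in the text)."""
    idx = list(range(len(configs)))
    random.Random(seed).shuffle(idx)
    n_val = int(len(configs) * val_frac)
    n_test = int(len(configs) * test_frac)
    val = [configs[i] for i in idx[:n_val]]
    test = [configs[i] for i in idx[n_val:n_val + n_test]]
    train = [configs[i] for i in idx[n_val + n_test:]]
    return train, val, test

# example with the disordered-structure count quoted above (21744 configurations)
train, val, test = split_dataset(list(range(21744)))
print(len(train), len(val), len(test))  # 17396 2174 2174
```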
We performed spin-polarized density functional theory (DFT) calculations using the projector-augmented wave (PAW) method [29], Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) for exchange-correlation functional [30, 31], with 400 eV plane wave energy cutoff, and the k-point mesh for the Monkhorst-Pack grid Brillouin zone sampling (N\({}_{\text{k}}\)1, N\({}_{\text{k}}\)2, N\({}_{\text{k}}\)3) = \((3,3,3)\) for the disordered structures and \((1,1,1)\) for the defective graphene and CNT growth structures, as implemented in Vienna ab initio simulation package (VASP 5.4.4) [32, 33, 34, 35]. The convergence criterion for the electronic self-consistent loop was set to 10\({}^{-4}\) eV.
Deep potential [27], one of the high-dimensional NNPs, was used for training. The machine learning descriptor was constructed from both angular and radial atomic configurations as implemented in DeePMD-kit [36]. The cutoff radius was 4.0 Å, with the smoothing function applied between 3.5 and 4.0 Å. For the embedding net, the sizes of the hidden layers from the input end to the output end were 25, 50 and 100, respectively. For the fitting net, three hidden layers with 240 neurons each were used. Parameter optimization was performed over 200000 steps. The learning rate was 0.001 at the beginning and decayed exponentially every 5000 steps, reaching 3.51\(\times 10^{-8}\) at the end of training. Comparisons of the energies and forces between DFT and the NNP are shown in Figure S2. The MAEs of the energies are 16.9 meV/atom and 16.1 meV/atom for the training and testing sets, and the MAEs of the forces are 0.268 eV/Å and 0.248 eV/Å, respectively.
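The quoted numbers fully determine the schedule: decaying every 5000 steps from \(10^{-3}\) to \(3.51\times 10^{-8}\) over 200000 steps implies a fixed decay factor of about 0.774 per 5000 steps. A sketch of such a schedule (our own helper, mirroring but not taken from DeePMD-kit):

```python
def exp_decay_lr(step, start_lr=1.0e-3, stop_lr=3.51e-8,
                 decay_steps=5000, total_steps=200000):
    """Exponential schedule matching the quoted numbers: the rate is
    multiplied by a fixed factor every `decay_steps` steps so that it
    reaches `stop_lr` exactly at `total_steps`."""
    decay_rate = (stop_lr / start_lr) ** (decay_steps / total_steps)
    return start_lr * decay_rate ** (step / decay_steps)

print(exp_decay_lr(0))       # 0.001
print(exp_decay_lr(200000))  # ~3.51e-08, the quoted final rate
```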
### SWCNT growth simulation
MD simulations were performed using LAMMPS [37]. The time step for numerical integration was 0.5 fs. The temperature of the Fe nanoparticles was controlled at \(T\) using the Nose-Hoover thermostat [38, 39]. The simulation box was a cubic cell of 20\(\times\)20\(\times\)20 nm, with periodic boundary conditions applied in all three directions. At the beginning of the simulation, we placed an Fe\({}_{60-120}\) nanoparticle at the center of the cell. After relaxation at constant temperature for 2 ns, we supplied C atoms at random positions in the cell, keeping the density of C atoms in the gas phase at 8 atoms/cell, which determines the rate at which carbon is supplied to the catalyst nanoparticle. The initial velocity vectors of C atoms \(\mathbf{v}\) were randomly
determined, assuming that the temperature of the system is \(T\) and the magnitude of the initial vector \(|\mathbf{v}|\) satisfies \(|\mathbf{v}|=\sqrt{3k_{b}T/m}\). To describe the energy barrier of the carbon feedstock decomposition, a Lennard-Jones potential was applied between non-covalent C atoms, as in previous SWCNT growth simulations [20, 28]. The visualization of the atomic structures and common neighbor analysis were performed using OVITO [40].
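The initial-velocity prescription can be sketched as follows; only the fixed magnitude \(|\mathbf{v}|=\sqrt{3k_{b}T/m}\) comes from the text, while the isotropic direction sampling and the carbon mass value are our assumptions.

```python
import math
import random

KB = 1.380649e-23                 # Boltzmann constant, J/K
M_C = 12.011 * 1.66053906660e-27  # mass of a carbon atom, kg (assumed value)

def initial_velocity(T, m=M_C, seed=None):
    """Random velocity vector with fixed magnitude |v| = sqrt(3*kB*T/m):
    deterministic speed (as prescribed in the text), isotropic direction
    (our assumption for the direction sampling)."""
    rng = random.Random(seed)
    speed = math.sqrt(3.0 * KB * T / m)
    z = rng.uniform(-1.0, 1.0)             # uniform direction on the unit sphere
    phi = rng.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (speed * s * math.cos(phi), speed * s * math.sin(phi), speed * z)

v = initial_velocity(1500.0, seed=1)
print(math.sqrt(sum(c * c for c in v)))  # ~1765 m/s for carbon at 1500 K
```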
The developed views were made by expressing the coordinates of the SWCNT in cylindrical coordinates \((r,\theta,z)\) and unrolling them onto a plane via \(x=r\times\theta\), \(y=0\), \(z=z\). The chirality \((n,m)\) of each SWCNT was determined from the positive combination of the basis vectors \(a_{1}\) and \(a_{2}\) of the developed SWCNT, according to the original definition of chirality.
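A minimal sketch of this developed-view transformation, assuming the tube axis lies on the z-axis (the function name is ours):

```python
import math

def developed_view(points):
    """Unroll SWCNT atom coordinates onto a plane: with the tube axis on
    the z-axis, (x, y, z) -> (r*theta, z), i.e. x' = r*theta, y' = 0, z' = z."""
    unrolled = []
    for x, y, z in points:
        r = math.hypot(x, y)
        theta = math.atan2(y, x)  # azimuthal angle in (-pi, pi]
        unrolled.append((r * theta, z))
    return unrolled

# an atom at 90 degrees on a tube of radius 5 Å unrolls to arc length 5*pi/2
print(developed_view([(0.0, 5.0, 1.2)]))
```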
## 3 Validation of NNP for Fe catalyzed SWCNT growth
overestimated the defect formation energy. This indicates that the stability of graphene was overestimated, which may have prevented cap lift-off by pentagon formation during the initial stages of SWCNT growth.
### Edge energy of graphene
The energetics of the graphene edge has been considered an important factor in SWCNT chirality selectivity [43, 44]. We validated the chiral-angle dependence of the graphene edge energy (Figure 2). To calculate
Figure 1: Comparison of defect formation energy of graphene between DFT and interatomic potentials.
the edge energy of graphene with the NNP, we modeled graphene nanoribbons (GNRs) in a cell with one periodic direction and 15 Å of vacuum in the other two non-periodic directions, and relaxed them with the conjugate gradient method. The edge energy \(\gamma\) is defined as
\[\gamma=\frac{E_{total}-N_{C}E_{C}-N_{e}\mu}{2L}, \tag{2}\]
where \(E_{total}\) is the total energy of the supercell, \(N_{C}\) is the total number of carbon atoms in the supercell, \(E_{C}\) is the energy of carbon atom in graphene lattice, \(N_{e}\) is the number of atoms of the terminating element, \(\mu\) is the chemical potential of the terminating element and \(L\) is the periodic length of graphene nanoribbon as defined in a previous theoretical study [43]. We computed the edge energy of (2,1), (3,1), (3,2) and (4,1) GNRs plotted as a function of chiral angle, as shown in Figure 2. For comparison, we also plotted the theoretical curve expressed as
\[\gamma^{\prime}(\chi)=2\gamma^{\prime}_{A}\sin{(\chi)}+2\gamma^{\prime}_{Z}\sin {(30^{\circ}-\chi)}, \tag{3}\]
where \(\chi\) is the chiral angle, and \(\gamma^{\prime}_{A}\) and \(\gamma^{\prime}_{Z}\) are the edge energies of armchair and zigzag edges, respectively. \(\gamma^{\prime}_{A}\) and \(\gamma^{\prime}_{Z}\) were directly computed using the GGA as 1.03 eV/Å and 1.19 eV/Å, respectively.
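Eq. (3) with the quoted GGA values can be evaluated directly; it reduces to the zigzag value 1.19 eV/Å at \(\chi=0^{\circ}\) and the armchair value 1.03 eV/Å at \(\chi=30^{\circ}\):

```python
import math

def edge_energy(chi_deg, gamma_A=1.03, gamma_Z=1.19):
    """Eq. (3): chiral-angle interpolation of the graphene edge energy in
    eV/Å, using the GGA armchair and zigzag values quoted in the text."""
    chi = math.radians(chi_deg)
    return (2.0 * gamma_A * math.sin(chi)
            + 2.0 * gamma_Z * math.sin(math.radians(30.0) - chi))

print(edge_energy(0.0))   # pure zigzag edge   -> ~1.19 eV/Å
print(edge_energy(30.0))  # pure armchair edge -> ~1.03 eV/Å
```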
On pristine graphene edges, the energies computed with the NNP showed higher stability of large chiral angles, in agreement with the GGA. On the other hand, those computed with the Tersoff-type potential indicated higher stability of small chiral angles, which is considered to explain the zigzag-preferential growth found in previous MD simulations [20].
### Thermodynamic properties of Fe bulk and nanoparticles
The influence of the catalyst phase on SWCNT growth has been reported in experimental studies [45, 46]. In order to accurately simulate SWCNT growth, the thermodynamic behavior of catalyst nanoparticles must be well considered. In this section, the thermodynamic behavior of Fe bulk and nanoparticles is discussed. First, we estimated the melting point of bulk bcc Fe by MD simulation of a solid-liquid biphasic system. We obtained the initial structure of bulk bcc Fe with a solid-liquid interface by preparing a supercell of \(28.52\times 28.52\times 74.17\) Å, optimizing the lattice constant, and melting one of the two halves divided along the bcc (100) plane at 5000 K under the NVT ensemble (Figure 3a). Starting from this initial structure, we carried out MD simulations for 200 ps under the NPT ensemble at 1 atm. The final structures are shown in Figure 3b-d, and the potential energies were well converged, as shown in Figure 3e. The melting point, taken as the temperature at which the solid and liquid phases reach equilibrium, was estimated to be 1887.5\(\pm\)12.5 K, in agreement with the experimental value of 1811 K [47]. In addition, the latent heat of melting was estimated as
Figure 2: Edge energy of graphene as a function of chiral angle. The values were directly computed (dots) and obtained from Eq. 3 (lines).
0.15 eV/atom from the difference of the averaged potential energies between solid and liquid, which is also in agreement with the experimental value 0.143 eV/atom [48] (Figure 3f).
Next, we verified the thermodynamic behavior of Fe nanoparticles by monitoring the Lindemann index representing the root-mean-square relative bond-length fluctuation
\[\delta=\frac{2}{N(N-1)}\sum_{i<j}\frac{\sqrt{\langle r_{ij}^{2}\rangle-\langle r _{ij}\rangle^{2}}}{\langle r_{ij}\rangle}, \tag{4}\]
where \(r_{ij}\) is the distance between atoms \(i\) and \(j\), \(N\) is the number of particles, and the brackets denote the ensemble average over an MD simulation at constant temperature \(T\). The melting point was determined as the temperature at which the Lindemann index changes discontinuously. In previous studies, a threshold value in the range of 0.08-0.15 was used as the melting-point criterion [49, 50, 51, 52, 53, 54]. However, for small nanoparticles the solid and liquid phases coexist over a wide temperature range, and a criterion for the melting point has not been well established. In this study, we fitted the Lindemann index with a hyperbolic function expressed as
\[f(x)=\frac{(ax+b)\exp{(\frac{x-v}{s})}+(cx+d)\exp{(-\frac{x-v}{s})}}{\exp{( \frac{x-v}{s})}+\exp{(-\frac{x-v}{s})}}+w, \tag{5}\]
where a, b, c, d, s, v, and w are fitting parameters, and we took the inflection point at \((v,w)\) as the melting-point criterion (Figure S3).
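Eq. (4) translates directly into code; the sketch below (our own pure-Python version, quadratic in the particle count per frame, unlike a production analysis) averages over trajectory frames:

```python
import math
from itertools import combinations

def lindemann_index(frames):
    """Eq. (4): root-mean-square relative bond-length fluctuation over a
    trajectory. `frames` is a list of snapshots, each a list of (x, y, z)."""
    n = len(frames[0])
    total = 0.0
    for i, j in combinations(range(n), 2):
        dists = [math.dist(f[i], f[j]) for f in frames]
        mean = sum(dists) / len(dists)
        mean_sq = sum(d * d for d in dists) / len(dists)
        total += math.sqrt(max(mean_sq - mean * mean, 0.0)) / mean
    return 2.0 * total / (n * (n - 1))

# a perfectly rigid cluster has delta = 0
rigid = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]] * 10
print(lindemann_index(rigid))
```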
For the MD simulations to obtain the Lindemann index, the initial structures were prepared by melting Fe nanoparticles at 2000 K and then annealing them to 0 K at 1 K/ps. The annealed structures of Fe\({}_{200}\), Fe\({}_{600}\) and Fe\({}_{2000}\) are shown in Figure 4a-c. The stable phase of the Fe nanoparticles changed with size. For large nanoparticles (N\(>\)700), the bcc structure was stable, as in bulk Fe. However, for medium-sized nanoparticles (300\(<\)N\(<\)700), the icosahedral fcc (Ih) structure was stable, and for smaller nanoparticles (N\(<\)300) no regular structure was observed. This size-dependent phase transition from bcc to Ih, considered to be due to a surface effect, is consistent with high-resolution transmission electron microscopy (HRTEM) observations of Ih Fe nanoparticles [55]. Starting from the annealed structures, MD simulations were performed for 2 ns at constant temperature \(T\), and the Lindemann index was obtained as the average over the latter 1 ns. According to the Gibbs-Thomson equation, the melting-point depression is inversely proportional
to the diameter. As shown in Figure 4d, the melting points of large bcc Fe nanoparticles followed this relation. However, the melting point changed discontinuously by 300 K between Fe\({}_{600}\) and Fe\({}_{800}\). This discontinuity corresponded to the phase transition from bcc to Ih.
The melting points of Fe\({}_{120}\)C\({}_{n}\) were also calculated, as shown in Figure 4e. The melting point decreased from 814 K to 740 K at \(\sim\)2.5 % carbon content, while it increased with further carbon content. The observed V-shaped diagram is consistent with the phase diagram of bulk Fe-C [56] and previous theoretical studies [49, 57].
Figure 3: Melting point measurement of solid–liquid biphasic system. (a) Initial structure. (b-d) The final structure after MD calculation for 200 ps by NPT ensemble under 1 atm at 1875 K, 1877.5 K and 1900 K. (e) Time evolution of potential energy at each temperature. (f) Potential energy distribution and its average per position of (c). The latent heat of melting \(\Delta\)H was estimated as 0.15 eV/atom from the difference of the potential energies between solid and liquid phase.
Figure 4: Thermodynamic behaviors of Fe nanoparticles. (a-c) Snapshots of cross sectional views of annealed structures of Fe\({}_{200}\), Fe\({}_{600}\) and Fe\({}_{2000}\) nanoparticles. (d) Melting point of Fe nanoparticles as a function of inverse diameter. (e) Melting points of Fe\({}_{120}\)C\({}_{n}\) nanoparticles obtained by adding carbon atoms up to 17.5 %. The blue and red balls denote atoms in BCC and FCC structure, respectively. The white balls denote atoms out of BCC and FCC structures. The structures were categorized by the common neighbor analysis [58].
## 4 MD simulation of SWCNT growth
We performed MD simulations of SWCNT growth on Fe\({}_{55}\), Fe\({}_{80}\), Fe\({}_{100}\) and Fe\({}_{120}\) nanoparticles at \(T\)=1100-1500 K in 100 K increments to investigate how growth depends on temperature (Figure S4). Figure 5 shows typical growth on Fe\({}_{120}\) nanoparticles at 1100 K and 1500 K. At low temperatures (1100-1300 K), carbon atom mobility was low and isolated hexagons nucleated simultaneously at several sites on the surface. Multiple graphitic islands subsequently formed from these hexagons, covered the surface of the nanoparticles, and deactivated them. At high temperatures (1400-1500 K), a single hexagon nucleated, one large island formed by the addition of carbon atoms, and cap formation was observed.
We observed defect-free SWCNT growth at high temperatures, which was considered to be suitable for
Figure 5: Snapshots of the initial stages of SWCNT growth at (a) 1100 K and (b) 1500 K. The blue and green atoms denote iron and carbon, respectively.
SWCNT growth. Figure 6 shows the SWCNT growth process at 1500 K. At this temperature, SWCNTs grew through the typical growth stages; carbon saturation, cap formation and side wall elongation, as reported in computational [20, 28] and experimental [59] studies.
In the saturation stage, supplied carbon atoms in the gas phase were adsorbed on the catalyst surface and preferentially occupied the subsurface sites (Figure 6a). When the subsurface sites were fully occupied, carbon atoms started to form small molecules such as dimer and trimer on the surface (Figure 6b). When a sufficient number of carbon atoms gathered on the surface and the catalyst became supersaturated, the small molecules started bonding with each other and formed short chains. The first hexagon was formed by cyclization of the chain. If the chain acting as a precursor was larger than the hexamer, the excess carbon atoms were detached from the newly formed hexagon (Figure S5).
In the cap formation stage, additional hexagons were formed next to the first hexagon, creating a small graphitic island (Figure 6d). The island was enlarged as carbon atoms were supplied, and lifted off when a pentagon was formed (Figure 6e-f). Formation of the sixth pentagon completed the cap (Figure 6g). The cap pentagons were formed by the addition of a carbon atom at an armchair-like site and were left inside the island by the formation of additional hexagons surrounding them (Figure S6). The formed cap was perfect, having exactly six pentagons and no heptagons.
After the completion of the cap, the side wall of the SWCNT started to elongate. In this stage, pentagon and heptagon defects were formed on the open tube edge alongside hexagons, but healed to hexagons by bond recombination, which realized defect-free growth. Figure 6i,j compares the time evolution of the numbers of hexagons, pentagons and heptagons. The number of hexagons increased at a constant rate after the first formation. We note that for the (8,7) SWCNT, the hexagon formation rate was 0.61 ring/ns and the growth rate in length was \(6\times 10^{5}\)\(\mu\)m/min, which is \(10^{2}\sim 10^{5}\) times higher than experimental SWCNT growth rates. The number of pentagons increased to six during cap formation and then repeatedly increased and decreased but never fell below six during sidewall elongation, indicating that the cap pentagons were maintained. The number of heptagons was almost always zero, and even when it reached one, it immediately returned to zero.
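The quoted conversion can be checked with simple geometry: each hexagon contributes its area, spread over the tube circumference, as axial growth. Assuming a C-C bond length of 1.42 Å (our assumption, not stated in the text), 0.61 ring/ns on an (8,7) tube indeed gives roughly \(6\times 10^{5}\)\(\mu\)m/min:

```python
import math

A_CC = 1.42  # C-C bond length in Å (assumed, not stated in the text)

def growth_rate_um_per_min(rings_per_ns, n, m):
    """Convert a hexagon-formation rate into an axial growth rate:
    each new hexagon adds its area, spread over the tube circumference."""
    diameter = A_CC * math.sqrt(3.0 * (n * n + n * m + m * m)) / math.pi  # Å
    circumference = math.pi * diameter                                    # Å
    hexagon_area = 1.5 * math.sqrt(3.0) * A_CC ** 2                       # Å^2
    length_per_ring = hexagon_area / circumference                        # Å per ring
    return rings_per_ns * length_per_ring * 1.0e-4 * 60.0e9               # Å/ns -> µm/min

print(growth_rate_um_per_min(0.61, 8, 7))  # ~6e5 µm/min, as quoted
```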
Figure 7 shows the detailed mechanism of defect healing on the SWCNT edge. A pentagon was formed on the edge and healed into hexagons by the addition of a monomer and C-C bond reformation (Figure 7a). Heptagons were also formed, but less frequently than pentagons, and quickly healed to hexagons by the rejection of a monomer
Figure 6: (8,7) SWCNT growth on Fe\({}_{120}\) at 1500 K. (a-h) Snapshots of MD simulation. (i) Time evolution of the numbers of hexagons, pentagons, and heptagons. (j) An enlarged view of Figure (i). The red-colored polygons denote the pentagons.
(Figure 7b). Such defect healing resulted in defect-free SWCNT growth. In this study, we observed (8,7), (9,9) and (10,7) defect-free SWCNTs, which contained exactly six pentagons in the cap and no other defects (Figure S7).
The edge shape of the SWCNTs changed dynamically during growth depending on the local structure, such as armchair, zigzag and Klein edges. On armchair edges, pentagons were easily healed to hexagons by monomer addition because such pentagons had dangling bonds and were highly reactive, as shown in Figure 7a. On zigzag edges, however, pentagons without dangling bonds were often formed at edge vacancies (Figure 8b). Such pentagons were less reactive and therefore less susceptible to monomer addition, and were occasionally left unhealed inside the sidewalls. Heptagons were often formed below these pentagons to compensate for the curvature change caused by the pentagon formation, resulting in pairs of adjacent pentagons and heptagons (5-7 defects). 5-7 defects caused chirality changes, as reported in previous experimental studies [60, 61, 25]. An SWCNT with a 5-7-defect-mediated chirality change is shown in Figure 8a. The SWCNT grew to a certain length with the initial chirality (11,7) determined by the cap and then changed to (10,7) by a 5-7 defect. We also observed a chirality change from (10,3) to (8,3) by two 5-7 defects (Figure S8).
Figure 7: Edge healing processes on the developed view of the (8,7) SWCNT. (a) Reformation of a pentagon into a hexagon on the (8,7) SWCNT. (b) Reformation of a heptagon into a hexagon on the (8,7) SWCNT. The red and blue polygons denote the pentagon and heptagon, respectively.
On zigzag edges, adatoms often formed a protruding pentagon, which was thermodynamically unfavorable. Such adatoms frequently jumped to adjacent zigzag sites through repeated bond formation and dissociation, like a 'trapeze', and upon reaching an armchair site they formed hexagons and stabilized (Figure 9). This adatom diffusion healed the vacancies, preventing topological defect formation (Figure 10).
In this study, unhealed vacancies occasionally resulted in topological defects, as shown in Figure 8, because the growth rate in the simulations was much higher than in experiments and vacancy healing could not keep pace with SWCNT growth. In experiments, however, SWCNTs are considered to grow while maintaining smooth edges through adatom diffusion, resulting in a low probability of defect formation.
Figure 8: Snapshots of an SWCNT with 5-7 defect mediated chirality change. (a) Snapshot and developed view of a SWCNT with chirality change from (11,7) to (10,7) after 280 ns of carbon supply. (b) 5-7 defect formation process. The pink and orange areas are (11,7) and (10,7) sidewalls, respectively. The light blue areas can be defined as both (11,7) and (10,7) since there are two possible translation vectors. The chirality of the white areas can not be assigned. The red and blue polygons denote the pentagons and heptagons, respectively.
Figure 10: Edge vacancy healing by adatom diffusion on the (8,7) SWCNT.
Figure 9: Adatom diffusion on the zigzag edge of the (10,7) SWCNT.
## Conclusions
We demonstrated defect-free SWCNT growth by MD simulation based on the developed NNP. During growth, defects were frequently formed on the open edge and healed into hexagons by metal-mediated C-C bond recombination, resulting in perfect SWCNTs with (8,7), (9,9) and (10,7) chiralities. Furthermore, we showed how 5-7 defects formed at edge vacancies, remained unhealed in the sidewall, and caused chirality changes. We also observed that adatom diffusion helped maintain a smooth, vacancy-free edge, which explains why a pre-determined chirality is sustained during SWCNT growth, a key mechanism for chirality selectivity.
|
2302.04998 | Neural Networks vs. Splines: Advances in Numerical Extruder Design | Jaewook Lee, Sebastian Hube, Stefanie Elgeti | 2023-02-10T00:50:27Z | http://arxiv.org/abs/2302.04998v1 |
###### Abstract
We present a novel application of neural networks to design improved mixing elements for single-screw extruders. Specifically, we propose to use neural networks in numerical shape optimization to parameterize geometries. Geometry parameterization is crucial in enabling efficient shape optimization as it allows for optimizing complex shapes using only a few design variables. Recent approaches often utilize computer-aided design (CAD) data in conjunction with spline-based methods where the spline's control points serve as design variables. Consequently, these approaches rely on the same design variables as specified by the human designer. While this choice is convenient, it either restricts the design to small modifications of given, initial design features - effectively prohibiting topological changes - or yields undesirably many design variables. In this work, we step away from CAD and spline-based approaches and construct an artificial, feature-dense yet low-dimensional optimization space using a generative neural network. Using the neural network for the geometry parameterization extends state-of-the-art methods in that the resulting design space is not restricted to user-prescribed modifications of certain basis shapes. Instead, within the same optimization space, we can interpolate between and explore seemingly unrelated designs. To show the performance of this new approach, we integrate the developed shape parameterization into our numerical design framework for dynamic mixing elements in plastics extrusion. Finally, we challenge the novel method in a competitive setting against current free-form deformation-based approaches and demonstrate the method's performance even at this early stage.
keywords: shape optimization, single-screw extruder, neural networks, mixing, filter, geometry parameterization +
Footnote †: journal: Engineering With Computers
## 1 Introduction
Modern numerical design is boosted by high-performance computers and the advent of neural networks. While neural networks are well-established in fields such as image recognition, their power to further polymer processing is yet to be fully discovered. This work attempts to contribute towards this goal. We combine deep neural networks with established shape-optimization methods to enhance mixing in single-screw extruders via a novel numerical design.
In many polymer processing steps, screw-based machines play a crucial role. Screws are, e.g., used as plasticators to prepare polymer melts for injection molding or in extruders in profile extrusion. For simplicity, we will, in the remainder, summarize all such screw-based machines as _extruders_. Single-screw extruders (SSEs) are especially widespread among the many variants of extruders for their economic advantages and simple operation. Economics also drives current attempts to further increase the throughput. This increase is achieved using fast-rotating extruders. However, the current SSE's poor mixing ability has limited the advances and, therefore, improving the mixing ability is a topic of research [1; 2; 3; 4; 5; 6].
Special focus is put on improved mixing elements that alleviate this limitation. Approaches to improve mixing elements have been proposed based on analytical derivations, experimental, and simulation-based works. In the following, we review recent developments in these three areas. Subsequently, we outline relevant developments in the field of neural networks and, finally, motivate the use of neural nets in the numerical design of mixing elements.
Due to the high pressures and temperatures, analyzing the flow inside extruders is a difficult task. Early studies thus focus on analytical models and geometrically simpler screw sections, e.g., the metering section [7]. Experiments complement these theoretical derivations and allow extending the analysis to more complex screw sections. As reported by Gale, typical configurations rely on photomicrographs of the solidified melt [2] that allow either investigating cross sections of the flow channel or the extrudate. One example of such flow channel photomicrographs is Kim and Kwon's pioneering work on barrier screws via cold-screw extrusion [8]. Apart from investigating solidified melt streams, attempts to analyze the melt flow during the actual operation of extruders are occasionally reported, e.g., by Wong _et al._[9]. Despite the great success of such experiments, a standard limitation is their focus on a single operating condition. In contrast, numerical analysis allows studying different designs and operating points at significantly reduced costs and, therefore, proliferates. In the following, we give an overview of such numerical analyses.
One early example is Kim and Kwon's quasi-three-dimensional finite-element (FE) simulation of the striation formation, studying the influence of the barrier flight [10]. Another example is the work by Domingues _et al._, who obtain global mixing indices for dispersive and distributive mixing in both liquid-liquid and solid-liquid systems [11]. Utilizing a two-dimensional simplification, their simulation domain extends from the hopper to the metering section, and their framework even allows for design optimization.
While these early works typically neglect mixing sections, studying the influence of mixers has recently become a vital research topic. Celik _et al._ use three-dimensional flow simulation coupled with a particle-tracking approach to determine the degree of mixing based on a deformation-based index [1]. Another example is Marschik _et al._'s study comparing different Block-Head mixing screws in distributive and dispersive mixing [6]. A comparable study - focused on the mixing capabilities of different pineapple mixers - is reported by Roland _et al._[3]. Both works rely on three-dimensional non-Newtonian flow simulations. Besides such works towards the numerical assessment of _given_ screw designs, numerical _design_ is also reported, however, partially in other fields of polymer processing. For example, Elgeti _et al._ aim for balanced dies and reduced die swell by applying shape optimization [12; 13]. Design by optimization is also reported by Gaspar-Cunha and Covas, who alter the length of the feed and compression zones, the internal screw diameters of the feed and metering zone, the screw pitch, and the flight clearance [14]. Potente and Tobben report another recent study devoted to mixing elements that develops empirical models for shearing sections' pressure-throughput and power consumption for numerical design [15]. Finally, a first approach combining the shape optimization methods inspired by [12] with a mixing-quantifying objective function to design mixing sections is reported in [16].
However, the shape optimizations above share one commonality: They essentially only modify predefined geometry features. This is accepted in many cases like die or mold design, where the final product's shape is close to the initial one (i.e., the shape variation is small). However, topologically flexible shape parameterizations offer far greater optimization gains for mixing element design, because the optimal geometry might differ significantly from the initial shape. The achievable improvements motivate research on geometry parametrization.
Established shape-parameterization approaches include radial basis functions (RBF) [17], surface parameterizations using Bezier surfaces [18], and surface splines [19]. All these methods may be understood as _filters_ that parameterize a geometry by a few variables at the price of a lack of local control. The use of surface splines in shape optimizations can also be found in [12; 13]. A similar concept to surface splines is free-form deformation (FFD) [20] that encapsulates the body-to-deform in a volumetric spline, which allows tailoring the spline further towards an efficient optimization. An alternative approach that does, however, not parameterize the geometry as a filter is given using the computational grid's mesh nodes as shape parameters [21]. Fortunately, with the advent of neural networks, novel means of shape parameterizations offering outstanding flexibility emerged. Finalizing the introduction, we will summarize the most relevant works in this field.
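To make the filter idea concrete, a toy one-dimensional free-form deformation can be written in a few lines; this is only an illustration of the concept (function name and setup are ours), not the FFD machinery used later in the paper:

```python
from math import comb

def ffd_1d(points, control_offsets):
    """Toy free-form deformation in one dimension: points in [0, 1] are
    displaced by a Bézier curve whose control offsets act as the design
    variables, so a handful of numbers deform the whole shape."""
    n = len(control_offsets) - 1
    def bernstein(i, t):
        return comb(n, i) * t ** i * (1.0 - t) ** (n - i)
    return [p + sum(d * bernstein(i, p) for i, d in enumerate(control_offsets))
            for p in points]

pts = [0.0, 0.25, 0.5, 0.75, 1.0]
print(ffd_1d(pts, [0.0, 0.0, 0.0, 0.0]))  # zero offsets leave the shape unchanged
```

With zero control offsets the map is the identity; moving only the two interior control offsets displaces the interior of the shape while the end points stay fixed, which illustrates how few design variables steer a whole geometry.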
Many neural networks are essentially classifiers. These neural networks are non-linear algorithms that are optimized, (i.e., trained), to determine - possibly counterintuitive - similarities and dissimilarities to discriminate between objects. One typical use case is image recognition using red-green-blue (RGB) pixel data. Neural networks can, however, be trained to classify features far beyond RGB-pixel values. One example is style transfer or texture synthesis [22]: Instead of aiming at reproducing _pixel_ data, output images are generated in combination with _perceptual_ data. This allows image transformations, where one image's style is transferred to the motive of another. An extension of these ideas to three-dimensional shapes is first reported by Friedrich _et al._[23]. Comparing different shape representations, the authors find that style transfer is applicable to shapes as well.
Our work is especially inspired by Liu _et al._[24], who utilize a so-called _Variational Shape Learner_, that learns
a voxel representation of three-dimensional shapes. _Learning_ here refers to creating a so-called _latent space_, a low-dimensional, feature-rich embedding space to represent and morph between various shapes. Even beyond simple shape interpolation, it is shown that - using the latent representation - geometry features can be transferred from one to another shape. Successful learning of voxel-based shapes can also be found in [25; 26]. In terms of shape representations, pointcloud-based approaches [27; 28; 29], which utilize coordinates of three-dimensional point sets, as well as polygonal mesh-based approaches with either template meshes [30; 31] or multiple mesh planes [32] are widely adopted.
While previously mentioned representations show that learning an embedding space of three-dimensional shapes is possible, each work lacks at least one of the following properties: water-tight surfaces, flexible output resolution, and smooth and continuous surface details. Recent works satisfy the aforementioned properties by learning shapes represented by continuous implicit functions such as signed-distance functions (SDFs) [33] and binary occupancies [34; 35], from which the shapes are extracted as isosurfaces.
We exploit the feature richness of this latent space as an aid to reduce the optimization space's dimension for the given mixing-element shape optimization. The important novelty compared to recent spline-based filters is that the neural network finds - possibly counterintuitive - ways to commonly parameterize a set of significantly different shapes irrespective of user-defined design features. This abstraction from the human designer yields low-dimensional yet far more flexible shape parameterizations, which sets the motivation for the work presented here.
This paper is structured as follows: We start in Sec. 2 by summarizing numerical shape optimization and splines, which leads to the concept of geometric filters. Based on that, we explain in Sec. 3 how neural networks can be utilized to create suitable geometry parameterizations for shape optimization. In Sec. 4, we review the utilized software components, summarize the proposed framework's building blocks, and detail the specific differences to spline-based shape optimization setups. The results obtained from the new approach are presented in Sec. 5, including comparisons to current spline-based designs. Finally, we discuss the results and outline further developments in Sec. 6.
## 2 Geometric filters as a component of shape optimization frameworks
The following section discusses shape parameterizations as one building block of numerical shape optimization frameworks. Therefore, we first introduce the general shape optimization problem. After that, we recall spline-based shape parameterizations. Based on this general introduction of shape optimization frameworks, we will continue by discussing the specific changes needed to adapt neural nets in Sec. 3.
### Building blocks of numerical shape optimization frameworks
The general optimization problem is formulated as the minimization of a cost function \(J\) that relates the design variables \(\mathbf{\sigma}\) to some output; here, the degree of mixing obtained with a specific mixing element (i.e., a particular design). In shape optimization, this minimization problem is typically solved subject to two sets of constraints: (1) inequality and equality conditions, as well as bound constraints on the design variables, and (2) partial differential equations (PDEs) that need to be fulfilled by each design to qualify as a feasible solution. This results in the following formulation:
\[J:\mathbb{R}^{n_{\sigma}}\mapsto\mathbb{R}, \tag{1a}\] \[\operatorname*{arg\,min}_{\mathbf{\sigma}\in\mathbb{R}^{n_{\sigma}}}J(\mathbf{\sigma}) \tag{1b}\] \[\text{s.t.}\quad\mathbf{F}\left(\mathbf{\sigma}\right)=\mathbf{0}\qquad\text{in}\;\;\Omega\left(\mathbf{\sigma}\right), \tag{1c}\] \[\sigma_{i}\geq\sigma_{min,i},\qquad i=1,\ldots,n_{\sigma}, \tag{1d}\] \[\sigma_{i}\leq\sigma_{max,i},\qquad i=1,\ldots,n_{\sigma}. \tag{1e}\]
Here, (1d) and (1e) describe bound constraints on the optimization variables \(\mathbf{\sigma}\), whereas (1c) denotes the set of governing PDEs. One approach to numerically solve such a _PDE-constrained_ design problem is to alternately compute (1) shape updates and (2) the cost function value. For the studied use case of mixing element design, this results in the computational steps depicted in Fig. 1. First, we update the shape (i.e., the simulation domain covering the mixing element). We use this modified computational domain to compute the flow field from which we afterwards infer the
objective (i.e., the cost function). The design loop is closed by feeding back the cost function value to the optimization algorithm that now computes an updated shape. This loop continues until any termination criterion such as a minimal objective decrease, a maximum number of iterations or another condition is met.
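As a minimal sketch of this design loop, the following Python snippet wires the three steps together. The geometry kernel, flow solver, and objective are replaced by cheap analytic stand-ins, and a naive coordinate search plays the role of the optimization algorithm; all of these are hypothetical placeholders, not the framework's actual components.

```python
import numpy as np

def update_shape(sigma):
    """Placeholder geometry kernel: in the real framework this would rebuild
    the simulation domain around the mixing element."""
    return sigma

def flow_and_objective(domain):
    """Stand-in for the CFD solve plus mixing measure: a cheap analytic cost
    with its minimum at (0.3, 0.3)."""
    return float(np.sum((domain - 0.3) ** 2))

def design_loop(sigma0, step=0.05, max_iter=200, tol=1e-10):
    """Alternate shape updates and objective evaluations until no
    significant objective decrease is achieved."""
    sigma = np.asarray(sigma0, dtype=float)
    j = flow_and_objective(update_shape(sigma))
    for _ in range(max_iter):
        improved = False
        for i in range(sigma.size):
            for d in (+step, -step):
                trial = sigma.copy()
                trial[i] += d
                jt = flow_and_objective(update_shape(trial))
                if jt < j - tol:
                    sigma, j, improved = trial, jt, True
        if not improved:   # termination: no significant objective decrease
            break
    return sigma, j

sigma_opt, j_opt = design_loop([0.0, 1.0])
```

In the actual framework, each `flow_and_objective` call is an expensive finite-element simulation, which is why a low-dimensional, feature-rich parameterization of `sigma` matters.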
### Spline-based shape parameterizations
In classical shape-optimization frameworks, the actual shape parameterization, or geometry filtering, is often achieved using splines. The following paragraph, therefore, first provides a summary of splines illustrating how one achieves the filtering. For a detailed description of B-splines, we refer the reader to the book of Piegl and Tiller [19]. After that, we detail on _boundary splines_ and FFD as two particular use cases of spline parameterizations.
Splines belong to the group of parametric shape representations. Therefore, each coordinate in the parametric space is connected to one point in physical space. This mapping is best understood using a simple _B-spline_ surface that is written as:
\[\mathbf{S}\left(\xi,\eta\right)=\sum_{j=1}^{m}\sum_{i=1}^{n}N_{i,r}\left(\xi\right)N_{j,p}\left(\eta\right)\mathbf{B}_{i,j}, \tag{2}\]
where \(\xi\) and \(\eta\) denote the parametric coordinates (two for the surface), \(N_{i,r}\) denote the interpolation or _basis functions_ of order \(r\) in the first parametric direction, \(N_{j,p}\) denote the basis functions of order \(p\) in the second parametric direction, and finally, \(\mathbf{B}\) denotes the support or _control points_. Figure 2 illustrates the concept and visualizes how single control points affect the geometry.
The control grid (i.e., the polygon spanned by the control point) aligns with the \(\xi\) and \(\eta\) directions, and any parametric coordinate (within the spline's parametric bounds) maps to one point of the blue shape. Consequently, the spline mapping allows controlling an arbitrary number of parametric points by a constant, typically low, number of control points. Being able to control a high number of points with few control points will be the basic idea of filtering using splines.
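Eq. (2) can be evaluated directly with the Cox-de Boor recursion. The following sketch (plain Python/NumPy, not tied to any particular spline library) maps a parametric coordinate to a surface point for a bi-quadratic patch with a 3x3 control grid; the control-point layout is an illustrative assumption.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def surface_point(xi, eta, ctrl, kn_xi, kn_eta, r, p):
    """Tensor-product B-spline surface S(xi, eta) as in Eq. (2)."""
    n, m, _ = ctrl.shape
    s = np.zeros(3)
    for i in range(n):
        for j in range(m):
            s += (bspline_basis(i, r, xi, kn_xi)
                  * bspline_basis(j, p, eta, kn_eta) * ctrl[i, j])
    return s

# Bi-quadratic patch: 3x3 control grid, open knot vectors.
ctrl = np.array([[[i, j, 0.0] for j in range(3)] for i in range(3)], dtype=float)
knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
mid = surface_point(0.5, 0.5, ctrl, knots, knots, 2, 2)   # patch midpoint
```

Moving a single entry of `ctrl` deforms the whole neighborhood of the surface, which is exactly the filtering property described above: few control points steer arbitrarily many surface points.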
Figure 1: Building blocks of a shape optimization framework. The shape is updated by a geometry kernel such as FFD. Subsequently, the flow field is computed using this updated shape and given as input to the objective calculator. Based on the current design variables and the design’s objective value, the optimization algorithm computes optimized shape parameters and restarts the design loop until at least one termination criterion for the design loop is met.
Figure 2: B-spline representation (blue) obtained from control points (red) for a bi-quadratic B-spline. The upper four control points are rotated, illustrating a possible deformation.
One can obtain geometry parameterizations from splines in multiple ways. As shown in Fig. 2, one way uses the B-splines as a boundary representation. Such spline-based boundary representations are common in CAD. Using these CAD representations, their control points (i.e., the red points in Fig. 2) can be directly used as design variables in shape optimization. However, this use of the CAD's geometry parameterization limits the design process because a given spline may not be able to represent shapes substantially different from the initial design. Consequently, if modifications of the spline's parameterization, such as inserting additional control point lines, are to be avoided, this limitation restricts the use of the CAD spline to use cases that deal with small shape updates such as _die_ or _mold design_[12].
An alternative to using boundary B-splines is FFD [20]. In FFD, first, an - often volumetric - spline is constructed around the body to be deformed. Second, this volumetric spline is deformed, and finally, the resulting deformation field is imposed on the enclosed body. Fig. 3 visualizes this process.
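The FFD principle can be illustrated with a trivariate Bernstein lattice in the style of Sederberg and Parry [20]. The sketch below assumes the embedded points are already given in the lattice's local \([0,1]^3\) coordinates; it is a toy stand-in for the volumetric-spline implementation used in this work.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def ffd(points, lattice):
    """Trivariate Bernstein free-form deformation (Sederberg-Parry style toy).
    points: (k, 3) local coordinates inside the lattice, each in [0, 1]^3;
    lattice: (L, M, P, 3) control points of the embedding volume."""
    L, M, P, _ = lattice.shape
    out = np.zeros_like(points, dtype=float)
    for k, (s, t, u) in enumerate(points):
        for i in range(L):
            for j in range(M):
                for l in range(P):
                    w = (bernstein(L - 1, i, s) * bernstein(M - 1, j, t)
                         * bernstein(P - 1, l, u))
                    out[k] += w * lattice[i, j, l]
    return out

# An undeformed 3x3x3 lattice reproduces the embedded points exactly;
# shifting the top control layer shears everything attached to it.
lattice = np.array([[[[i / 2, j / 2, k / 2] for k in range(3)]
                     for j in range(3)] for i in range(3)])
sheared = lattice.copy()
sheared[:, :, 2, 0] += 0.5   # move the top control layer in x
```

The key property visible here is that the lattice is constructed independently of the enclosed shape, so its degree and resolution can be chosen freely.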
The advantage of FFD is that the spline is constructed irrespective of the enclosed shape, which gives complete freedom in choosing degree and resolution. This freedom allows tailoring the spline to the designer's needs (rather than using a given parameterization optimized for CAD usage). Therefore, FFD is widely applied; one example is the recent work by Lassila and Rozza combining FFD and reduced-order modeling [36]. As a baseline for the novel neural-network-based shape parameterization, we use a combination of both methods: FFD serves as a generic interface to modify a given CAD spline, which in turn is used to update the boundary of the simulation domain [16].
## 3 Shape parametrization using neural networks
As explained in Sec. 2, the prime objective of this work is to investigate how neural networks can be used to encode different shapes in a single set of a few continuous variables. In order to train the network, thereby determining such a condensed representation, it has to be provided with suitable data. _Suitable_ here means that the input data (i.e., shapes) are provided in such a way that the network can learn from this data. In addition - using the same data format - we need to be able to produce high-quality computational meshes from the neural network's output.
In the following, we first introduce deep generative models and then describe a shape representation meeting these two requirements. Finally, we discuss the training data generation and utilization of neural networks as shape generators.
### Deep generative models
With the advent of _generative models_, an alternative approach to shape parameterization emerged. In this subsection, we review two of the most common approaches of generative models, explain their basic concepts and use, and detail how they can be employed for geometric filtering.
Generative models are an application of neural networks and, thus, in essence, classification algorithms. _Classification_ here means the ability to determine whether a certain object is in some measure _close_ to a specified input. Conversely to just classifying input, such models can also be used to generate an output that resembles an input. _Resemble_, however, needs to be explained. In most applications, the user is not interested in reproducing a given input exactly. Instead, the output should only be _like_ the input (i.e., the output should feature a slight variation). Generative
Figure 3: Free-Form Deformation using a volumetric spline (light blue) applied to a mixing element (pink). The control points are omitted in this figure. The embedded shape deforms correspondingly to the embedding, simple, volumetric spline.
models attempt to achieve this goal via statistical modeling. An excellent guide to generative models is found in [37], with special focus on the _Variational Autoencoder_ (VAE).
The VAE, like the traditional autoencoder, consists of an _encoder_ and a _decoder_ and aims to reproduce any given data while passing the input through a bottleneck. However, its probabilistic formulation using the so-called "reparametrization trick" provides an exceptional advantage over the traditional autoencoder in practice [38]. The roles of the encoder and the decoder can be interpreted as two separate processes. The encoder learns relations in the given data and encodes them in so-called _latent variables_, \(\mathbf{z}\). Given these latent variables, the decoder, in turn, learns to produce data that is _likely_ to match the input. Once trained, the user can omit the encoder and directly generate new data from sampling the latent space. For details, we refer to [37; 38], and for applications, we refer to [24; 39] and [40].
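The two VAE ingredients mentioned above, the reparametrization trick and the latent regularizer, can be written down in a few lines. The snippet below is a NumPy sketch of these two pieces only, not of the full encoder/decoder networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar, eps=None):
    """The 'reparametrization trick': a sample z ~ N(mu, sigma^2) is written
    as a deterministic function of (mu, logvar) plus external noise eps, so
    gradients can flow through mu and logvar during training."""
    if eps is None:
        eps = rng.standard_normal(np.shape(mu))
    return np.asarray(mu) + np.exp(0.5 * np.asarray(logvar)) * eps

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL(N(mu, sigma^2) || N(0, I)), the VAE's latent regularizer
    that keeps the learned latent space densely sampleable."""
    mu, logvar = np.asarray(mu), np.asarray(logvar)
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
```

Once trained, new data is generated by sampling `z` from the standard normal prior and passing it through the decoder alone, exactly the usage described above.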
The difference between the spline-based approach and generative models is the choice of latent variables. When the human designer creates a spline parameterization that allows modifying geometry in the desired way, the optimization variables are the control points, which are _intuitively_ placed in \(\mathbb{R}^{3}\) by the designer. Generative models, in contrast, _learn_ a latent space and explicitly assume that the single latent variables do not have an intuitive interpretation. As a result, data is compressed from a high-dimensional intuitive design space, in our case \(\chi\subset\mathbb{R}^{3\times n}\), onto a hardly interpretable, feature-dense, low-dimensional latent space \(Z\). In short, generative models use the computational power of neural networks to find a dense classification space that one can sample to produce new data. For the VAE, this process is depicted in Fig. 4a.
A competing concept to VAEs is the _Generative Adversarial Network (GAN)_. Its basic structure is shown in Fig. 4b. GANs, first introduced by Goodfellow _et al._[41], follow a different concept and train two adversarial nets, the _generator_ and the _discriminator_. In GANs, the generator is trained to create data that mimics real-world data, while the discriminator tries to determine whether or not a dataset was artificially created. In a minimax fashion, the generator's learning goal is to maximize the probability of the discriminator making a wrong decision.
GANs have proven to be an excellent tool for shape modeling. Wu _et al._, for example, apply a GAN to 3D shape generation and demonstrate its superior performance compared to three-dimensional VAEs. They even use a GAN to reconstruct three-dimensional models from two-dimensional images, based on a VAE that infers a latent representation for these images [42]. As in [39], Wu _et al._ also demonstrate the ability to apply shape interpolation and shape arithmetic to the learned latent representation. More recently, Ramasinghe _et al._[43] utilize a GAN to model high-resolution three-dimensional shapes using point clouds.
Figure 4: Two main concepts of deep generative networks: Variational Autoencoders and Generative Adversarial Networks.
### Implicit shape representation
The neural network learns a mapping between the low-dimensional latent space and a three-dimensional body. In order to construct such a mapping, we first need to define how to represent our shapes (i.e., define what data the neural network actually has to learn). Before presenting the approach chosen in this work, we review standard methods and their limitation.
Three ways of shape representation are common in machine learning: (1) voxels, (2) point clouds, and (3) meshes [33]. The problem with meshes is that the mesh topology also prescribes the possible shape topologies. Point clouds, in contrast, can represent arbitrary topologies, but prescribe a given resolution. Finally, voxels can represent arbitrary topologies and vary in resolution, but, unfortunately, the memory consumption scales cubically with the resolution. Because of these drawbacks, the network utilized in this work learns SDFs following a network configuration originally proposed by Park _et al._[33].
SDFs provide the distance to the closest point on the to-be-encoded surface for every point in space. Furthermore, encoded in the sign, information on whether the point lies inside or outside the surface is available. Using such continuous SDF data, a shape is then extracted - at an arbitrary resolution suitable for meshing - as its zero-valued isosurface.
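As a concrete example, the SDF of a sphere illustrates the sign convention and the zero-valued isosurface. The analytic form below is a stand-in for the mesh-based SDF queries used in practice.

```python
import numpy as np

def sdf_sphere(p, center, radius):
    """Signed distance to a sphere: negative inside, positive outside,
    and zero exactly on the surface (the isosurface that gets meshed)."""
    return np.linalg.norm(np.asarray(p, dtype=float) - center, axis=-1) - radius

# Sample a coarse grid; the zero level set separates inside (< 0) from
# outside (> 0) and can be extracted at any chosen resolution.
axes = [np.linspace(-1.0, 1.0, 21)] * 3
grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)   # (21, 21, 21, 3)
inside = sdf_sphere(grid, np.zeros(3), 0.5) < 0.0
```

Because the function is continuous in space, the sampling resolution of `grid` is a free choice, which is precisely the flexible-resolution property highlighted above.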
### Training set generation
As mentioned in Sec. 3.1, training a neural network requires a set of source shapes. However, to the authors' knowledge, no shape library exists for mixing elements in single screw extruders. Thus, we explain an approach to building custom training sets.
To generate a suitable training set, we first select the basis shapes that should be considered, pin and pineapple mixers in our case. From this choice, we derive a total of four basis shapes (i.e., triangle, square, hexagon, and cylinder; cf. Fig. 5), clearly too few for successful and meaningful training on their own. The basis shapes are therefore varied using (a combination of) FFDs to gather an appropriate number of training shapes. Examples of applied deformations are given in Fig. 5. In total, 2659 training shapes are generated.
To obtain SDF-training data from these shapes, we follow the approach by Park _et al._[33] and first normalize each shape to fit into a unit sphere. Then, we sample 500,000 spatial points and their SDF value pairs using the trimesh library [44].
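The sampling step can be sketched as follows. Here an analytic sphere SDF replaces the trimesh-based mesh queries, and simple rejection sampling inside the unit sphere stands in for the near-surface-biased sampling of [33]; both substitutions are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_sdf_pairs(sdf, n_points):
    """Sample (point, sdf-value) training pairs inside the unit sphere.
    `sdf` is any callable returning a signed distance."""
    raw = rng.uniform(-1.0, 1.0, size=(3 * n_points, 3))      # oversample a cube
    pts = raw[np.linalg.norm(raw, axis=1) <= 1.0][:n_points]  # keep unit-sphere points
    vals = np.array([sdf(p) for p in pts])
    return pts, vals

# Hypothetical normalized shape: a sphere of radius 0.5 in the unit sphere.
unit_shape = lambda p: np.linalg.norm(p) - 0.5
X, y = sample_sdf_pairs(unit_shape, 1000)
```

Each training shape then contributes a large set of such point-value pairs (500,000 in this work), from which the network learns the continuous SDF.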
### Shape generator
As explained, the shape generator's task is to provide a mixing element given a set of optimization variables. The shape generator - in this work - is thus built around the neural network, which is presented in the following.
The utilized neural network is based on the DeepSDF auto-decoder [33]: a feed-forward network with ten fully connected layers, where each of the eight hidden (i.e., internal) layers has 256 neurons and a ReLU activation function. In contrast to an auto-encoder, the auto-decoder trains only the decoder, simultaneously optimizing the network parameters and the latent codes during training. We investigate four, eight, and sixteen as latent dimensions, \(l\). The input layer consists of these \(l\) neurons concatenated with a three-dimensional query location. The output layer
Figure 5: Examples of basic shapes and applied deformations. In total, a triangle, a square, a cylinder, and a hexagon are used as basis shapes.
has only one neuron with a tanh activation function. For details on the chosen SDF network, we again refer to [33]. To train the network, we use the ADAM optimization algorithm [45]. We follow a step-decay learning-rate schedule with initial rates \(\varepsilon_{0}=5\times 10^{-4}\) for the network parameters \(\mathbf{\theta}\) and \(\varepsilon_{0}=1\times 10^{-3}\) for the latent codes \(z\), with the decay:
\[\varepsilon=\varepsilon_{0}\cdot 0.5^{\,e\,\%\,500}, \tag{3}\]
where \(e\) denotes the current training iteration (i.e., the epoch) and \(\%\) denotes integer division. The network's training can be seen as the parametrization of the shapes.
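A NumPy sketch of the auto-decoder's forward pass and of the schedule in Eq. (3) is given below. The layer sizes follow the description above, while the weight initialization and the exact layer bookkeeping are illustrative assumptions, not the trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(latent_dim, hidden=256, n_hidden=8):
    """Weights for a DeepSDF-style feed-forward net: the input is the latent
    code concatenated with a 3D query point, followed by eight 256-neuron
    ReLU layers and a single tanh output neuron. He-style initialization is
    an illustrative assumption."""
    sizes = [latent_dim + 3] + [hidden] * n_hidden + [1]
    return [(rng.standard_normal((a, b)) * np.sqrt(2.0 / a), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def predict_sdf(params, z, xyz):
    """Forward pass: query the SDF value of one spatial point for code z."""
    h = np.concatenate([z, xyz])
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)    # ReLU hidden layers
    W, b = params[-1]
    return float(np.tanh(h @ W + b)[0])   # tanh-bounded SDF prediction

def learning_rate(e, eps0):
    """Step decay of Eq. (3): the rate halves every 500 epochs."""
    return eps0 * 0.5 ** (e // 500)

params = init_mlp(latent_dim=8)
value = predict_sdf(params, np.zeros(8), np.array([0.1, 0.2, 0.3]))
```

During training, both `params` and the per-shape codes `z` would be updated with ADAM at their respective rates; here only the inference path is shown.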
To extract isosurfaces (i.e., to generate new mixing elements) from the trained network's SDF output, we sample a discrete SDF field and apply the marching cubes algorithm [46] in the implementation of [47]. Finally, we apply automated meshing using TetGen [48] to obtain a simulation domain as depicted in Fig. 6, including the new mixing element.
## 4 The developed shape optimization framework
In general, our framework consists of three building blocks: (1) _shape generator_, (2) _flow solver_, and (3) _optimizer_, which will be described in the following.
Starting with an initial set of optimization variables, \(\mathbf{\sigma}_{0}\), the shape generator creates a new mixing element \(\Omega\left(\mathbf{\sigma}_{0}\right)\). The flow solver then computes the flow field around this mixing element, which the optimizer evaluates to determine the flow's degree of mixing. Based on the obtained mixing value and by comparison to previous iterations, an optimization algorithm determines a new set of optimization variables. This sequence is iteratively re-run until either a maximum number of iterations is reached or any other termination criterion - typically a good objective value or insignificant objective decrease - is met.
### Flow solver and simulation model
The flow solver and simulation model are identical to those introduced in [16] and are therefore only summarized in the following. The flow field induced by the various mixing elements is obtained from solving the steady, incompressible, non-isothermal Navier-Stokes equations using a Carreau model and WLF temperature correction. The governing equations are discretized with linear stabilized finite elements and solved using a Newton linearization and a GMRES iterative solver. Subsequently, we solve a set of advection equations using the identical configuration to mimic particle tracking, which we use as an input to our objective function. All methods are implemented in an in-house flow solver.
We make two simplifications to our simulation model (i.e., the single-screw-extruder flow channel): First, we simulate the flow around only a single mixing element instead of simulating the entire mixing section. Second, we assume barrel rotation in an unwound flow channel section. Both assumptions yield significantly reduced computational costs while allowing a qualitative mixing improvement. To assess mixing, we mimic particle tracking by solving a series of advection equations yielding an inflow-outflow mapping for particles advected by the melt flow. We process this advection information by subdividing a portion of the inflow domain into smaller rectangular subdomains. In each of these rectangles, we select a set of particles such that the particle set's bounding box coincides with the rectangular subdomain. Then, we follow each particle as they are conveyed through the domain, store each particle's position at the outflow domain, and finally construct a convex hull at the outflow around the same sets of points. Averaging the convex hull's length increments between in- and outflow yields a simple yet robust objective function inspired by interfacial area measurements. Using this objective function, we found that such a simulation model provides a good balance between accuracy and computational efficiency [16]. Fig. 6 depicts the chosen simulation domain.
### Optimizer
We utilize the open-source optimization library Dakota [49] to drive the design process. Two different algorithms are selected and described in the following. The first algorithm is the Dividing RECTangle (DIRECT) algorithm, first introduced in [50]. DIRECT belongs to the category of _branch-and-bound_ methods and uses n-dimensional trisection to iteratively partition the design space. To find minima, it follows the approach of Lipschitzian optimization, which identifies the design space partition that should be further sampled by evaluating a lower bound to the objective value in each partition. The partition with the lowest lower bound is chosen and further sampled. DIRECT modifies that
concept and computes multiple lower bounds that weigh the current sampling value (i.e., the objective value in the partition center) against the partition size. This promotes further sampling of partitions with good objective values while still effectively sampling large areas of unexplored design space. Thereby, DIRECT identifies _multiple_ partitions that are _potentially optimal_ and allows for global convergence.
The second algorithm utilized in this work is the single-objective genetic algorithm (SOGA) introduced (as its multi-objective variant) in the JEGA package [51]. As it belongs to the class of _genetic_ algorithms, it solves optimization problems by recreating biological evolution. Therefore, each optimization run consists of numerous samples referred to as the _population_. Members of the population are paired and recombined in such ways that the _fitness_ (i.e., the objective value) is successively improved. Regarding its application in this work, it is especially noteworthy that the recreation of evolution includes a _mutation_ step, which modifies or re-initializes design variables randomly. The added randomness allows the algorithm to escape locally convex regions of the design space. Such evolutionary optimization approaches generally converge more slowly, yielding higher computational costs. However, they are often able to find better results than non-evolutionary algorithms. For both DIRECT and SOGA, we rely on the default convergence criterion and a maximum of 1000 iterations as a termination criterion. The complete computational framework is depicted in Fig. 7.
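To make the genetic mechanics concrete, the following toy single-objective GA implements selection, blend crossover, and the mutation step discussed above. It is a didactic sketch under simplified assumptions, not the JEGA/SOGA implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def genetic_minimize(f, dim, pop_size=30, generations=60,
                     mutation_rate=0.2, bounds=(-1.0, 1.0)):
    """Toy single-objective genetic algorithm: truncation selection, blend
    crossover, and random mutation (the step that can free the search from
    locally convex regions)."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([f(x) for x in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.uniform()
            child = w * a + (1.0 - w) * b                     # blend crossover
            mask = rng.uniform(size=dim) < mutation_rate
            child[mask] = rng.uniform(lo, hi, size=int(mask.sum()))  # mutation
            children.append(child)
        pop = np.vstack([parents, children])
    fitness = np.array([f(x) for x in pop])
    best = int(np.argmin(fitness))
    return pop[best], float(fitness[best])

x_best, f_best = genetic_minimize(lambda x: float(np.sum((x - 0.3) ** 2)), dim=3)
```

Because the fittest half survives unchanged, the best objective value never worsens, while mutation keeps injecting diversity into the offspring.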
Figure 6: Simulation domain with single mixing element resembling the flow around a single mixing element in the unwound screw channel. Flow conditions are shown in blue using a barrel rotation setup. For a detailed description of the objective function and governing equations, we refer the reader to [16].
Figure 7: Pipeline with building blocks of the proposed computational framework. The process is split into two parts: A one-time computationally intensive training part and the actual optimization, including the quick filter evaluation. To create a training set, FFD is applied to a set of basis shapes. Subsequently, we train the network using the ADAM optimizer, which concludes the _offline_ phase. During optimization (i.e., the _online_ phase), first, a new shape is created from the neural net. Then, a new computational mesh is created around this shape, and based on FEM simulations the new design’s mixing is assessed. Depending on the objective value, the optimization loop is re-initiated using altered latent variables. Building blocks that are modified compared to the general, geometry-kernel-based approach (cf. Fig. 1) are highlighted in blue.
## 5 Numerical results
This section presents the results obtained using shape parameterizations from neural networks.
Sec. 5.1 focuses on the results of the offline phase, i.e., the training of the shape-representing neural network. In particular, we discuss how the constructed latent space differs with its dimension, using the widely used data-reduction technique _t-Distributed Stochastic Neighbor Embedding (t-SNE)_ to visualize the learned, n-dimensional shape parameterization. In Sec. 5.2, we then present the mixing shapes obtained using our shape-optimization approach.
### Latent space dimension
One of the most important choices is the target dimension \(l\) of the embedding space. In all established filtering mechanisms, such as radial basis functions, free-form deformation, CAD-based approaches, and even mesh-based methods, the practitioner has to balance improved flexibility against the computational demand. Despite the potentially more compact and dense embedding obtained with neural networks, this trade-off remains relevant and manifests itself in the dimension of the chosen latent space. Previous works utilized only a very small number of optimization variables: Elgeti et al. vary between only one and two parameters [12], while other works by the authors showed that good results are also obtained with six design variables [16]. To obtain a competitively small number of optimization variables, we investigate embedding spaces of dimension four, eight, and sixteen, and compare them against a free-form-deformation approach using nine variables.
Even though the latent space obtained, as discussed in Sec. 3.1, generally lacks an intuitive interpretation, we are still interested in evaluating the quality of the learned embedding. We do so in three different ways, which we present in the following: (1) we use a data-reduction technique that allows us to visually inspect the latent space; (2) we interpolate between the latent representations of two training shapes and compare with the expected result; (3) we apply shape arithmetic, i.e., we isolate a specific modification of a basis shape and impose it onto another basis shape to inspect whether such features are also recognized by the latent space.
(1) For the visualization of the high-dimensional latent space, a dimension reduction technique is required. An intuitive choice might be principal component analysis (PCA), but PCA tries to primarily preserve global structures and thus data points which are far apart in the high-dimensional data will also be drawn far apart in the 2D plot. Conversely, the correlation between similar points is often lost. This loss of correlation in similar data is problematic since we aim to investigate whether - from a human's perspective - similar shapes are represented by similar latent code. The problem of loss in local correlation is, however, alleviated by t-SNE [52]. Using t-SNE, we plot each training shape's obtained latent code and - due to the preservation of local similarities - similar latent code will form clusters in the scatter plot. These clusters can then be sampled to verify that the latent code clusters resemble similar shapes. t-SNE plots for all three latent dimensions - four, eight, and sixteen - are shown in Fig. 8.
Fig. 8 shows how an increased latent dimension leads to increased classification performance of the neural net. Specifically, the four chosen basis shapes are clustered with their respective modifications more and more densely as the latent dimension
Figure 8: t-SNE plots obtained using different latent space dimensions. An increased latent dimension results in increased classification performance of the neural net. Each color corresponds to one base training shape: green corresponds to the triangular base, dark blue to the cube, red to the hexahedron, and light blue to a tessellated version of the cylinder.
increases. This improved classification performance indicates that the neural net was able to learn the similarities between similar shapes properly for the case of eight and sixteen dimensions.
(2) In addition to comparing clusters of similar shapes in physical and latent space, we also investigate how well the latent space is suited to represent shapes that have not been included in the training set. We do so by _interpolation_ between two shapes. Fig. 9 shows the obtained results for all three latent spaces.
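The interpolation itself is plain linear blending of latent codes (cf. the caption of Fig. 9); decoding each intermediate code yields the shown shape sequence. The codes in the example are hypothetical placeholders.

```python
import numpy as np

def interpolate_latent(z_a, z_b, N=20):
    """Linear interpolation between two trained latent codes. The convention
    used here is that n = N + 1 would reproduce z_b exactly."""
    z_a, z_b = np.asarray(z_a, float), np.asarray(z_b, float)
    return [z_a + (z_b - z_a) * n / (N + 1) for n in range(1, N + 1)]

# Hypothetical codes for the regular and the twisted cube.
codes = interpolate_latent(np.zeros(8), np.ones(8))
```

None of the intermediate codes appear in the training set, so a visually smooth decoded sequence indicates that the latent space generalizes beyond the training shapes.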
Consistent with the observed lack of classification ability of the four-dimensional latent space, Fig. 9a shows that interpolation between shapes yields unsatisfactory results. In particular, shape defects are observed. This might result from the twisted cube not being well represented in the latent space, as seen in the rightmost figure. However, both the eight- and the sixteen-dimensional latent spaces show a visually smooth transition between the regular and the twisted cube shape.
(3) The above two analyses investigated the overall classification ability of the neural net and the suitability to represent intermediate shapes. A final test is given by applying _shape arithmetic_. Using arithmetic operations applied to the latent code, we extract an exemplary feature - here a stretching along the center plane - by taking the component-wise difference of a stretched and a regular cube. This difference represents center-plane expansion and can then be applied to any other basis shape - here the undeformed hexahedron. Fig. 10 shows the resulting shapes. Again, the four-dimensional latent space performs significantly worse since the basis shapes are not represented in detail. Contrary to the interpolation case, the sixteen-dimensional latent space now shows better results than the eight-dimensional case.
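Shape arithmetic reduces to component-wise vector operations on the latent codes; the sketch below uses hypothetical codes for the cube, the stretched cube, and the hexahedron.

```python
import numpy as np

def transfer_feature(z_modified, z_base, z_target):
    """Latent shape arithmetic: isolate a feature as the difference between a
    modified and an unmodified basis shape, then impose it on another code."""
    return np.asarray(z_target, float) + (np.asarray(z_modified, float)
                                          - np.asarray(z_base, float))

# Hypothetical codes: the stretching feature is the stretched cube's code
# minus the regular cube's code, applied to the hexahedron's code.
z_cube, z_stretched, z_hex = np.zeros(8), np.full(8, 0.3), np.ones(8)
z_new = transfer_feature(z_stretched, z_cube, z_hex)
```

Decoding `z_new` should then yield a hexahedron with the center-plane expansion, which is the test performed in Fig. 10.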
All three investigations (t-SNE plots, interpolation, and arithmetic) indicate that the four-dimensional latent space fails to produce a suitable latent representation. It should be noted, though, that in view of the doubled number of optimization variables, the attainable gains of using sixteen latent variables compared to eight appear unattractively small.
### Optimization results
To study the effects of the novel shape parameterization technique, we compare configurations that vary in latent space dimensions and optimization algorithms as shown in Tab. 1. Furthermore, we require all generated shapes to have the exact same volume as the undeformed rhombic mixing element utilized in the spline-based optimization (cf. Sec. 4.1). We choose such scaling to avoid convergence towards merely enlarged shapes that yield good objective
Figure 9: Shape interpolation using different latent dimensions. An interpolated shape is obtained using \(z_{interp}=z_{a}+\frac{z_{b}-z_{a}}{N+1}n\), with \(z_{a}\) and \(z_{b}\) denoting the latent codes of shapes \(a\) – here the undeformed cube – and \(b\) – here the twisted cube. With \(N=20\), the shown examples represent \(n\in[1,3,7,13,17,20]\).
values but do not deliver helpful insights. Tab. 1 lists the obtained results, and Tab. 2 gives insights into the corresponding computational effort.
The obtained best shapes are shown in Fig. 11.
Comparing the optimized geometries shows interesting results from a plastics processing point of view. On the one hand, the triangular shape and a mixing element that widens towards the top appear advantageous. One should note, however, that these deformations do not correspond to a general optimum for plastics engineering but are merely the best possible deformations within the range permitted by the training set. Choosing an even more diverse training set is expected to yield even further improved shapes.
More relevant for this study (with a focus on neural nets as shape parameterizations) is the comparison of convergence, the achieved mixing, and the differences and similarities in the results. Tab. 1 shows that for the chosen shape optimization problem, the DIRECT algorithm has no disadvantages compared to SOGA and converges reliably. At the same time, the shape parameterization's dimensionality appears to influence the optimization, because the four- and eight-dimensional neural networks lead to optimized triangles. In contrast, the sixteen-dimensional case renders
\begin{table}
\begin{tabular}{|c||c|c|c||c|} \hline & 4 & 8 & 16 & FFD \\ \hline \hline SOGA & -0.0726 & -0.0710 & -0.0750 & – \\ \hline DIRECT & -0.0645 & -0.0738 & -0.0769 & -0.0422 \\ \hline \end{tabular}
\end{table}
Table 1: Different optimization algorithms and latent space dimensions compared by best objective value and contrasted to a nine-dimensional FFD. Smaller values correspond to better results using the aforementioned objective formulation 1b.
\begin{table}
\begin{tabular}{|c||c|c||c|c||c|c||c|c|} \hline & \multicolumn{2}{c||}{4} & \multicolumn{2}{c||}{8} & \multicolumn{2}{c||}{16} & \multicolumn{2}{c|}{FFD} \\ \hline \# Iteration(s) & Optimal & Total & Optimal & Total & Optimal & Total & Optimal & Total \\ \hline \hline SOGA & 768 & 1000 & 752 & 1000 & 534 & 1000 & – & – \\ \hline DIRECT & 96 & 113 & 129 & 143 & 138 & 149 & 16 & 67 \\ \hline \end{tabular}
\end{table}
Table 2: Different optimization algorithms and latent space dimensions compared by the iteration at which the best objective value was found ("Optimal") and the total number of iterations ("Total"); contrasted to a nine-dimensional FFD.
Figure 10: Shape arithmetics for different latent dimensions. A linear thickening in the center plane is imposed on a hexagonal base body by evaluation of the latent code as \(z_{Et_{thick}}-z_{E4}+z_{E6}\), where \(z_{Et_{thick}},z_{E4}\), and \(z_{E6}\) denote the latent codes of the thickened cube, the regular cube, and the regular hexahedron respectively.
the top-expanded quadrilateral optimal. Common to all results is a skewed and slightly twisted geometry.
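To make the optimization loop concrete, the sketch below drives SciPy's DIRECT implementation over a box-bounded latent space. The objective here is a toy quadratic standing in for the actual decode-mesh-simulate-evaluate pipeline of the paper, and `mixing_objective` is a hypothetical name; a recent SciPy (providing `scipy.optimize.direct`) is assumed.

```python
import numpy as np
from scipy.optimize import direct

def mixing_objective(z):
    """Placeholder for decode-latent -> simulate -> evaluate mixing.

    A real run would decode z into a mixing-element shape, mesh it,
    run the flow simulation, and return the (negated) mixing measure.
    Here a smooth quadratic stands in so the sketch is runnable.
    """
    z = np.asarray(z)
    return float(np.sum((z - 0.3) ** 2))

# One box constraint per latent dimension (eight-dimensional case).
bounds = [(-1.0, 1.0)] * 8
result = direct(mixing_objective, bounds, maxfun=2000)
best_z, best_val = result.x, result.fun
```

Swapping `direct` for an evolutionary routine would reproduce the SOGA side of the comparison with the same objective interface.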
A noticeable difference between the spline-based and neural-net-based shape optimization is that the neural-net-based shape parameterization encodes several shapes, of which multiple may mix the melt equally well. Because of this, from the practitioner's point of view, it does make sense to not only look at the best result but rather compare numerous equally optimal designs and derive design rules from that comparison. Fig. 12 shows such a comparison and reveals one advantage of evolutionary algorithms. While the DIRECT algorithm converges locally and, therefore, the ten best designs are geometrically similar, the generative nature of SOGA allows the practitioner to identify possibly equally well-working designs (cf. Figs. 12f and g) amongst which the most economical option may be chosen. Such a choice allows one to account for further restrictions regarding screw cleaning, manufacturability, and others.
Figure 11: Optimization results obtained for all different latent codes and optimization algorithms compared to an existing FFD-based shape optimization.
## 6 Discussion and outlook
In this work, we studied the applicability of generative models as shape parameterizations. We choose numerical shape optimization of dynamic mixing elements as a use case. The developed shape parameterization's fundamental principle is to exploit neural nets' ability to construct a dimension reduction onto a feature-dense, low-dimensional latent space.
First, the nature of this low dimensional space is studied by t-SNE-plots. These plots give visual evidence that the generative models create _smooth_ shape parameterizations that enable one to use classical, heuristic optimization algorithms. Comparing genetic to such heuristic algorithms, Tab. 2 reveals that the SOGA algorithm required significantly more iterations (i.e., simulations). Additionally, Tab. 1 shows that in the studied examples, this additional computational effort is not reflected proportionally by improved mixing. One may expect that the SOGA algorithm's random nature may be better suited to explore the hardly interpretable latent space. However, the results suggest a smoothness of the learned parameterization that renders deterministic methods like DIRECT equally well suited for optimization in the latent space.
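Such a t-SNE projection of latent codes can be produced with scikit-learn. The sketch below uses synthetic two-cluster latent codes as a stand-in for real encoder outputs; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Toy latent codes for two shape classes (e.g., twisted vs. regular),
# standing in for encoder outputs of the training set.
z_class_a = rng.normal(loc=0.0, size=(40, 8))
z_class_b = rng.normal(loc=3.0, size=(40, 8))
latent_codes = np.vstack([z_class_a, z_class_b])

# Project to 2D for visual inspection of latent-space smoothness.
embedding = TSNE(n_components=2, perplexity=15.0,
                 init="pca", random_state=0).fit_transform(latent_codes)
```

Plotting `embedding` colored by shape class gives the kind of visual evidence of smoothness discussed above.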
In addition to the general applicability of generative models, we study the influence of different latent dimensions. While the actual optimization results appear pleasing, Figs. 9 and 10 suggest that very compressed (i.e., four-dimensional) latent spaces are unsuitable for optimization purposes. However, no clear preference between the eight- and sixteen-dimensional variants can be drawn from the optimization results alone. Fig. 10 further indicates that higher-dimensional latent spaces yield more precise shape encoding, which seems generally preferable. Since the overall number of iterations until convergence of the optimization problem is comparable, the 16-dimensional parameterization might be chosen over the eight-dimensional variant.
As intended, a fundamental improvement over established low-dimensional shape parameterizations is that the new approach covers a much broader design area in a single optimization. Since its fundamental concept is to encode diverse shapes, optimizations lead to numerous, nearly equally optimal shapes. Consequently, this novel approach extends on existing methods in that it allows the practitioner to _derive_ design features that enhance mixing most and for a wide range of basis shapes. Therefore, rather than creating complex shape parameterizations, the crucial step towards optimal design reduces to the creative definition of a training set.
Finally, a significant challenge in using neural-net-based shape parameterization is proper size control of the output shapes. This work implements a volume constraint to avoid simple size maximization of the mixing elements. However, a reformulated objective, such as penalizing pressure loss, may circumvent such adverse designs. Alternatively, a scale factor may be added as an additional optimization variable. Both size control and efficient training set generation may be topics of further studies.
Figure 12: Ten best shapes obtained from 16D SOGA optimization. Except for the 6th-best shape (h), all shapes feature an expanded top, similar orientation, and appear widened in \(y\) direction (i.e., perpendicular to the main flow direction).
Given the presented results, utilizing the feature-rich latent representations and their immense generalization power has a significant potential to improve established industrial designs.
## 7 Acknowledgements
The German Research Foundation (DFG) funding under the DFG grant "Automated design and optimization of dynamic mixing and shear elements for single-screw extruders" and priority program 2231 "Efficient cooling, lubrication and transportation -- coupled mechanical and fluid-dynamical simulation methods for efficient production processes (FLUSIMPRO)" - project number 439919057 - is gratefully acknowledged. Implementation was done on the HPC cluster provided by the IT Center at RWTH Aachen. Simulations were performed with computing resources granted by RWTH Aachen University under projects jara0185 and thes0735.
|
2310.19500 | Coarse-grained crystal graph neural networks for reticular materials
design | Reticular materials, including metal-organic frameworks and covalent organic
frameworks, combine relative ease of synthesis and an impressive range of
applications in various fields, from gas storage to biomedicine. Diverse
properties arise from the variation of building units – metal
centers and organic linkers – in almost infinite chemical space.
Such variation substantially complicates experimental design and promotes the
use of computational methods. In particular, the most successful artificial
intelligence algorithms for predicting properties of reticular materials are
atomic-level graph neural networks, which optionally incorporate domain
knowledge. Nonetheless, the data-driven inverse design involving these models
suffers from incorporation of irrelevant and redundant features such as full
atomistic graph and network topology. In this study, we propose a new way of
representing materials, aiming to overcome the limitations of existing methods;
the message passing is performed on a coarse-grained crystal graph that
comprises molecular building units. To highlight the merits of our approach, we
assessed predictive performance and energy efficiency of neural networks built
on different materials representations, including composition-based and
crystal-structure-aware models. Coarse-grained crystal graph neural networks
showed decent accuracy at low computational costs, making them a valuable
alternative to omnipresent atomic-level algorithms. Moreover, the presented
models can be successfully integrated into an inverse materials design pipeline
as estimators of the objective function. Overall, the coarse-grained crystal
graph framework is aimed at challenging the prevailing atom-centric perspective
on reticular materials design. | Vadim Korolev, Artem Mitrofanov | 2023-10-30T12:48:45Z | http://arxiv.org/abs/2310.19500v2 | # Coarse-grained crystal graph neural networks for reticular materials design
###### Abstract
Reticular materials, including metal-organic frameworks and covalent organic frameworks, combine relative ease of synthesis and impressive range of applications in various fields, from gas storage to biomedicine. Diverse properties arise from the variation of building units, metal centers and organic linkers, in an almost infinite chemical space. Such a variability substantially complicates experimental design and promotes the use of computational methods. In particular, the most successful artificial intelligence algorithms for predicting properties of reticular materials are atomic-level graph neural networks with optional domain knowledge. Nonetheless, the data-driven inverse design utilizing such models suffers from incorporating irrelevant and redundant features such as full atomistic graph and network topology. In this study, we propose a new way of representing materials, aiming to overcome the limitations of existing methods; the message passing is performed on the coarse-grained crystal graph that comprises molecular building units. We assess the predictive performance and energy efficiency of neural networks built on different materials representations, including composition-based and crystal-structure-aware models, to highlight the merits of our approach. Coarse-grained crystal graph neural networks show decent accuracy at low computational costs, making them a valuable alternative to omnipresent atomic-level algorithms. Moreover, the presented models can be successfully integrated into the inverse materials design pipeline as estimators of the objective function. Overall, the coarse-grained crystal graph framework aims to challenge the prevailing atomic-centric perspective on reticular materials design.
_Keywords_ reticular design metal-organic frameworks covalent organic frameworks coarse graining graph neural networks
## 1 Introduction
Artificial intelligence (AI) is becoming the next game changer in materials science[1, 2, 3]. Nowadays, supervised learning algorithms represent a cutting-edge tool for resolving complex structure-property relationships that determine material functionality. AI methods have been utilized to examine a plethora of physicochemical parameters, including thermodynamic[4, 5, 6], electronic[7, 8, 9], mechanical[10, 11, 12], adsorption[13, 14, 15], and catalytic[16, 17, 18] properties. Deep learning models[19] provide state-of-the-art predictive performance by extracting crucial features from input data, e.g., chemical composition and crystal structure. Most representation schemes related to deep learning models in materials informatics require researchers to perceive materials from an atomistic viewpoint. In particular, crystal graphs (where nodes and edges correspond to atoms and chemical bonds, respectively) enhance AI toolset of materials scientists with a diverse set of graph neural networks[20, 21, 22, 23, 24, 25, 26, 27, 28, 29]. Structure-agnostic neural networks[30, 31, 32]
handle chemical composition through assigning vector representations (so-called embeddings) to the chemical elements present in the material.
Atomic-level models are highly effective in representing materials if atoms are considered the most natural choice for the elementary structural unit. In the inorganic domain, chemical subspaces can be formed through element substitution in the prototype crystal structure; the resulting material sets have been processed with predictive models in a high-throughput manner[33, 34, 35]. In contrast, materials representation at a scale of molecular building units is preferable for crystalline extended structures governed by reticular chemistry principles[36, 37]. The most prominent examples of such materials are metal-organic frameworks[38] (MOFs) and covalent organic frameworks[39] (COFs). Specifically, MOFs are formed by linking organic molecules and metal-containing entities through coordination bonds, whereas COFs are made by stitching organic molecules through covalent bonds. Some of the synthesized compounds show exceptional adsorption[40, 41] and catalytic[42, 43] properties, making them promising candidates for energy-related applications. Moreover, the modular structure of reticular materials provides a great opportunity for further tuning of relevant properties. There have been recent efforts to develop specialized featurization schemes for reticular design[44, 45, 46, 47, 48]. Global descriptors (e.g., topology, volumetric attributes, and energy grids) incorporated into neural network architecture improve predictive performance and leave room for interpretability analysis. On the other hand, the aforementioned attributes constrain the scenarios where the corresponding models can be applied. The most intriguing data-driven strategy for developing functional materials--inverse design (i.e., "from property to structure")--requires synchrony between discriminative and generative models. In particular, the input data modalities (i.e., materials representation _in toto_) used in the former models should be validly reproduced by the latter ones.
Given this context, the incorporation of energy-grid embeddings in predictive models (e.g., MOFTransformer[47]) presents a challenge for inverse materials design owing to the limited reconstruction ability of existing algorithms. Out of one million structures, only a few zeolite shapes created by generative adversarial network (ZeoGAN[49]) successfully passed all cleanup operations. Other predictive models[48] for reticular materials directly consider framework topology, which is unknown _a priori_ for specified building blocks, e.g., organic linkers and metal-containing unit in MOFs. Consequently, the unpredictable synthetic accessibility of frameworks (in the form of the linkers-nodes-topology triad) created by generative models has become a cornerstone for practical data-driven design of reticular materials. Recent findings have confirmed concerns that most relevant studies have overlooked: only 136 frameworks out of one thousand hypothetical structures with the highest hydrogen working capacity have been identified as highly synthesizable[50].
By viewing experimental synthesis and characterization of _in silico_ generated frameworks as the ultimate goal of AI-assisted materials discovery, we can formulate the main challenge in the field as follows: there exists a fundamental disparity in how synthetic chemists and AI-practitioners perceive reticular materials. As a result, atomic-level and topology-aware models leverage materials modalities that are not relevant or accessible in reticular design. To address this issue, we introduce the coarse-grained crystal graph framework. The performance of neural networks that utilize sets of molecular building units as input is analyzed in terms of accuracy, energy efficiency, and transferability. In our comparative analysis, we also take into account widely used architectures, which are the dominant paradigm in materials representation learning; composition-based and crystal-structure-aware models are considered. In addition to material screening applications, the feasibility of incorporating coarse-grained crystal graph neural networks into the inverse reticular design pipeline is explored as well.
## 2 Results
### Coarse-grained crystal graph framework
The modern landscape of predictive models in materials informatics[51] is mostly shaped by neural networks built on crystal graph \((\mathcal{V},\mathcal{E})\), which is a set of vertices (atoms) \(v\in\mathcal{V}\) and edges (bonds) \((u\to v)\in\mathcal{E}\). Despite an impressive diversity of neural network architectures, most of them have a common foundation in the message-passing paradigm[52]. The vector representation \(h_{v}\) of a node \(v\) is generated by propagating messages \(m_{u\to v}\) from source nodes \(u\) to a destination node \(v\); the messages from all the nodes forming a neighborhood \(\mathcal{N}(v)\) of node \(v\) are taken into account. The message-passing procedure may be performed multiple times, so a node representation \(h_{v}^{t+1}\) at step \(t+1\) is computed as follows:
\[m_{u\to v}^{t+1}=\phi(h_{u}^{t},h_{v}^{t},e_{u\to v}^{t}) \tag{1}\]
\[m_{v}^{t+1}=\rho(\big{\{}m_{u\to v}^{t+1},\forall u\in\mathcal{N}(v)\big{\}}) \tag{2}\]
\[h_{v}^{t+1}=\psi(h_{v}^{t},m_{v}^{t+1}) \tag{3}\]
where \(\phi\), \(\rho\), and \(\psi\) are (learnable) message, reduce, and update functions, respectively, and \(e_{u\to v}^{t}\) is a vector representation of edge \(u\to v\) at step \(t\); the calculation of node features can optionally incorporate edge features.
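A minimal instance of one such step (a linear message function \(\phi\), sum aggregation \(\rho\), and a tanh-wrapped linear update \(\psi\)) can be sketched as follows; the toy graph, weights, and function names are illustrative, not the paper's implementation.

```python
import numpy as np

def message_passing_step(H, E, neighbors, Wm, Wu):
    """One message-passing update (Eqs. 1-3) with linear phi/psi, sum rho.

    H:         (n_nodes, d) node features h_v^t
    E:         dict mapping (u, v) -> edge feature vector e_{u->v}^t
    neighbors: dict mapping v -> list of source nodes u in N(v)
    """
    n, d = H.shape
    H_next = np.empty_like(H)
    for v in range(n):
        # Eqs. 1-2: build and sum messages from all neighbors of v.
        m_v = sum(Wm @ np.concatenate([H[u], H[v], E[(u, v)]])
                  for u in neighbors[v])
        # Eq. 3: update the node state from h_v^t and the aggregate.
        H_next[v] = np.tanh(Wu @ np.concatenate([H[v], m_v]))
    return H_next

# Toy graph: 3 nodes, node feature dim 4, edge feature dim 2.
rng = np.random.default_rng(0)
d, de = 4, 2
H = rng.normal(size=(3, d))
neighbors = {0: [1, 2], 1: [0], 2: [0]}
E = {(u, v): rng.normal(size=de)
     for v, srcs in neighbors.items() for u in srcs}
Wm = rng.normal(size=(d, 2 * d + de)) * 0.1
Wu = rng.normal(size=(d, 2 * d)) * 0.1
H1 = message_passing_step(H, E, neighbors, Wm, Wu)
```

In a trained network \(W_m\) and \(W_u\) would be learned, and the step would be applied repeatedly, as described above.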
Our approach involves applying the message-passing method to learn representations of materials, which consist of molecular building units rather than distinct atoms. The proposed framework heavily relies on subgraph neural networks[53, 54] and includes steps inherent to their construction. First, the strategy for sampling/selecting subgraphs needs to be explicitly stated. Following chemical intuition, we decompose MOF structures (Figure 1a) into their inorganic ("metallic") and organic components (Figure 1b); a similar approach known as the "standard simplification" algorithm[55] has been previously employed to differentiate MOF structures. Trivial subgraphs of metal atoms (\(k\in\mathcal{K}\)) (Figure S1) are comprised of single nodes in the crystal graph; intermetallic bonds are ignored. Another set of subgraphs (\(l\in\mathcal{L}\)) is formed by the connected components of nonmetal atoms. Domain heuristics enable the development of alternative partition schemes that deconstruct MOFs into organic linkers and inorganic substructures, commonly known as second-building units (SBUs). Our preliminary experiments show that those methods are not universally applicable and cannot extract molecular building units for a considerable fraction of synthesized MOFs. Thus, 94.5% and 71.8% of entities from the Computation-Ready, Experimental Metal-Organic Framework (CoRE MOF) database[56, 57] are correctly processed by the MOFid[58] and moffragmentor[59] packages, respectively. On the other hand, the implemented scheme achieves identification of metal centers and organic linkers in 99.0% of structures. The decomposition rates of other databases are similar (Figure S2). In the case of COFs, we apply a similar methodology, with the only variation being the composition of inorganic nodes in terms of chemical elements; here boron and silicon are also taken into account (Figure S1). 
Consequently, the current stage of development does not support the adequate processing of a diverse family of metal-free COF structures within the proposed framework.
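The described partition (trivial metal subgraphs plus connected components of nonmetal atoms) can be sketched in pure Python. The element list, bond list, and the small metal set below are illustrative only, not the production implementation.

```python
from collections import deque

METALS = {"Cu", "Zn", "Zr", "Fe", "Al"}  # illustrative subset only

def decompose(elements, bonds):
    """Split a crystal graph into metal nodes and nonmetal components.

    elements: list of element symbols, one per atom index
    bonds:    iterable of (i, j) atom-index pairs
    Each metal atom becomes a trivial single-node subgraph
    (intermetallic bonds ignored); organic units are the connected
    components over nonmetal atoms only.
    """
    metal = {i for i, el in enumerate(elements) if el in METALS}
    adj = {i: set() for i in range(len(elements)) if i not in metal}
    for i, j in bonds:
        if i not in metal and j not in metal:
            adj[i].add(j)
            adj[j].add(i)
    components, seen = [], set()
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            comp.add(node)
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        components.append(sorted(comp))
    return sorted(metal), components

# Toy fragment: Cu-O-C-C-O-Cu (paddlewheel-like linker stub).
elements = ["Cu", "O", "C", "C", "O", "Cu"]
bonds = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
metal_atoms, organic_units = decompose(elements, bonds)
```

For the toy fragment, the two copper atoms become trivial metal subgraphs, and the O-C-C-O chain forms a single organic component.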
The next step is to define subgraph representations. Crystal graph neural networks appear to be the most relevant reference in this context; during training, learnable atomic embeddings are commonly generated in the input layers of a model. By contrast, we rely on predefined vectors to characterize both the inorganic \(\mathcal{K}\) and organic \(\mathcal{L}\) subsets. Specifically, the mol2vec model[60] is implemented to represent molecular fragments, and mat2vec embeddings[61] serve as features for single-node metal subgraphs (Figure 1c).
The considered sets of entities (\(\mathcal{K}\) and \(\mathcal{L}\)) form a bipartite graph (\(\mathcal{K},\mathcal{L},\mathcal{E}\)) by design; each edge in the graph connects species of different types, i.e., organic linker to metal center and _vice versa_. Unfortunately, reticular design does not provide prior knowledge of the specific bonds that will form in an experimental crystal structure. In order to eliminate arbitrariness in selecting initial connectivity, we postulate every potential edge, forming a complete bipartite graph (\(\mathcal{K},\mathcal{L},\tilde{\mathcal{E}}\)) called the coarse-grained crystal graph. The term refers to the coarse-graining modeling approach, which involves the simplification of complex atomic systems, e.g., reducing the number of degrees of freedom by representing groups of atoms as a single pseudo-atom. Similar techniques have been applied to quantify MOF diversity[62, 63] (in an unsupervised manner) and establish structure-property relationships in a specific class of hybrid materials[64] (zeolitic imidazolate frameworks). To the best of our knowledge, this study is the first to use the message-passing paradigm to model reticular materials in the coarse-grained regime.
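Constructing the complete bipartite coarse-grained graph then reduces to deduplicating building units and enumerating all metal-linker pairs. A minimal sketch with toy HKUST-1-like labels follows; the unit identifiers are placeholders for the actual building-unit representations.

```python
from itertools import product

def coarse_grained_graph(metal_centers, linkers):
    """Build the complete bipartite coarse-grained crystal graph.

    Duplicate building units are dropped (the ratio of units in the
    unit cell is intentionally disregarded), and every metal-linker
    pair is connected in both directions, since the actual bonding
    is unknown at design time.
    """
    K = sorted(set(metal_centers))
    L = sorted(set(linkers))
    edges = [(k, l) for k, l in product(K, L)]
    edges += [(l, k) for k, l in product(K, L)]
    return K, L, edges

# HKUST-1-like toy input: Cu metal center plus BTC linker, with
# duplicates as they would appear in the unit cell.
K, L, edges = coarse_grained_graph(["Cu", "Cu"], ["BTC", "BTC", "BTC"])
```

The deduplication step is what produces the drastic node-count reduction discussed in the next subsection.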
Coarse-grained crystal graphs can be easily integrated into graph neural networks designed for heterogeneous graphs. To handle structure-property relationships in reticular materials, we implement a model with simple architecture (Figure 1d), hereafter designated as Coarse-Grained Crystal Graph Neural Network (CG\({}^{2}\)-NN). Predefined embeddings initiate the sequential update of node representations through the message-passing procedure, occurring in three interaction blocks; each block includes a convolutional layer, layer normalization[65], and nonlinearity, Exponential Linear Unit[66] (ELU). Owing to the universal nature of the proposed structure representation, a wide range of convolutional operations can be integrated into the interaction block. In this study, we perform experiments using four message-passing techniques: Graph Convolutional Network layer[67] (CG\({}^{2}\)-GCN), SAmple and aggreGatE layer[68] (CG\({}^{2}\)-SAGE), Graph ATtention layer[69] (CG\({}^{2}\)-GAT), and Graph ATtention layer with a universal approximator attention function[70] (CG\({}^{2}\)-GATv2). It should be also emphasized that the aggregation phase of message passing is obviously influenced by the specific topology of coarse-grained crystal graph (Equation 2). Since each linker has a neighborhood comprised of all metal centers and each metal center has a neighborhood comprised of all linkers, this phase looks as follows:
\[m_{k}^{t+1}=\rho(\{m_{l\to k}^{t+1},\forall l\in\mathcal{L}\}) \tag{4}\]
\[m_{l}^{t+1}=\rho(\{m_{k\to l}^{t+1},\forall k\in\mathcal{K}\}) \tag{5}\]
The calculation of graph-level representation is based on the following readout function:
\[h_{g}=\frac{1}{|\mathcal{K}|}\sum_{k\in\mathcal{K}}h_{k}+\frac{1}{|\mathcal{L}|} \sum_{l\in\mathcal{L}}h_{l} \tag{6}\]
Finally, the output of CG\({}^{2}\)-NN is obtained by passing the graph-level embedding \(h_{g}\) through a dense layer.
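Because every metal center neighbors every linker, the aggregation in each block collapses into a mean over the opposite node set. The NumPy sketch below loosely mirrors the described forward pass (three interaction blocks of convolution, layer normalization, and ELU, followed by the readout of Eq. 6 and a dense layer); note that the real model uses learnable GCN/SAGE/GAT convolutions rather than this shared linear map, so this is only a structural illustration.

```python
import numpy as np

def elu(x):
    return np.where(x > 0, x, np.exp(np.minimum(x, 0)) - 1.0)

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def interaction_block(HK, HL, W):
    """Convolution + LayerNorm + ELU on the complete bipartite graph.

    Per Eqs. 4-5, each node aggregates over all nodes of the other
    type, which here is a mean over the opposite node set.
    """
    HK_new = elu(layer_norm(HL.mean(axis=0, keepdims=True) @ W + HK))
    HL_new = elu(layer_norm(HK.mean(axis=0, keepdims=True) @ W + HL))
    return HK_new, HL_new

def cg2_nn_forward(HK, HL, weights, w_out):
    for W in weights:                        # three interaction blocks
        HK, HL = interaction_block(HK, HL, W)
    h_g = HK.mean(axis=0) + HL.mean(axis=0)  # readout, Eq. 6
    return float(h_g @ w_out)                # dense output layer

rng = np.random.default_rng(0)
d = 16
HK = rng.normal(size=(1, d))   # mat2vec-style metal embedding
HL = rng.normal(size=(2, d))   # mol2vec-style linker embeddings
weights = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
w_out = rng.normal(size=d) * 0.1
prediction = cg2_nn_forward(HK, HL, weights, w_out)
```

The complete-bipartite structure is what makes this forward pass cheap: its cost scales with the handful of building units, not with the thousands of atoms in the unit cell.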
Figure 1: Coarse-grained crystal graph framework. Overview of data processing and scalability. (a) Original crystal structures of notable reticular materials, metal–organic framework HKUST-1 and covalent organic framework COF-5. Atoms are colored based on their chemical element. (b) The crystal graphs of HKUST-1 and COF-5 are colored based on their substructure type. The metal and nonmetal atoms are colored by violet and yellow, respectively. (c) Scheme of the coarse-grained crystal graph construction. (d) Schematic diagram of the coarse-grained crystal graph neural network architecture implemented in this study. (e) Number of nodes in the coarse-grained crystal graph as a function of number of atoms in the corresponding unit cell. The top panel contains the corresponding distribution of number of atoms.
In terms of message-passing scalability, the coarse-grained crystal graph framework is expected to outperform the full crystal graph. The Quantum MOF (QMOF) database[71, 72] contains structures with hundreds of atoms in the primitive cell (Figure 1e). It should be noted that the initial set of candidate materials was formed considering the limitations of high-throughput density functional theory (DFT) calculations. The structures in other experimental MOF subsets, such as the Cambridge Structural Database (CSD) MOF Collection[73] and CoRE MOF database, can contain as many as ten thousand atoms (Figure S3). In contrast, the coarse-grained crystal graphs have a maximum of 9, 17, or 35 nodes in the three databases mentioned above; five or fewer vertices are common in most coarse-grained crystal graphs (98.1%, 91.1%, and 92.8%). Probably the most illustrative quantity--the average ratio of nodes in the crystal graph to nodes in the coarse-grained crystal graph--equals 35.3, 125.6, and 101.8 in the QMOF database, CSD MOF Collection, and CoRE MOF database, respectively. However, the impressive scalability is only partially attributed to coarse-graining molecular subgraphs. Another factor is the occurrence of multiple identical subgraphs in the primitive cell; the coarse-grained crystal graph is intentionally free of duplicates, which means that a specific ratio of building units in the original reticular structure is disregarded. As a result, CG\({}^{2}\)-NNs are likely to have lower predictive performance compared to crystal graph neural networks. The following sections primarily address the efficiency-vs-accuracy dilemma of the proposed computational framework.
### Predictive performance of CG\({}^{2}\)-NNs
To evaluate the predictive performance of CG\({}^{2}\)-NNs, we consider a diverse set of practically important MOF properties, including band gap, thermal decomposition temperature, heat capacity, and Henry coefficients of eight gases (N\({}_{2}\), O\({}_{2}\), Kr, Xe, CH\({}_{4}\), CO\({}_{2}\), H\({}_{2}\)O, H\({}_{2}\)S); the examined tasks are all in a regression setting. In addition to the proposed reticular-specific architecture, several other neural networks that are composition-based and (crystal-)structure-aware are benchmarked as well. The algorithm designated as Representation Learning from Stoichiometry[31] (Roost) and model under the framework of Wyckoff Representation regression[74] (Wren) form the first group. It is interesting to note that Wren incorporates Wyckoff representations in addition to stoichiometry, introducing the concept of coordinate-free coarse graining into materials discovery. We categorize this approach as composition-based because the model does not directly incorporate the full crystal graph. Another group of methods includes four crystal graph neural networks: Crystal Graph Convolutional Neural Network[20] (CGCNN), MatErials Graph Network[21] (MEGNet), Global ATtention-based Graph Neural Network with differentiable group normalization and residual connection[28] (DeeperGATGNN), and Atomistic Line Graph Neural Network[27] (ALIGNN). In addition to "general purpose" structure-aware architectures, a few recent studies have introduced neural networks that integrate domain knowledge related to reticular chemistry; MOFNet[46], MOFTransformer[47], and MOFormer[48] are worth mentioning. MOF-related models are not part of our benchmark study; the original studies provide an overall picture of accuracy by comparing with crystal graph neural networks, e.g., CGCNN.
Despite acknowledging the criticism of using the coefficient of determination (\(R^{2}\)) as a primary predictive performance measure[75], we still find it useful for identifying general trends (Figure 2a). To ensure completeness, we have provided the corresponding mean absolute error (MAE) and root mean squared error (RMSE) values for all tasks (Figures S4 and S5). Based on the metric values, we can conclude that CG\({}^{2}\)-NNs are surprisingly effective in predicting DFT band gap. The coefficient of determination of the best model (CG\({}^{2}\)-SAGE) is 0.86, 0.85, and 0.76 at three different levels of theory (see details in the Methods section). In comparison, the best crystal graph neural network (ALIGNN) reaches a coefficient of determination of 0.90, 0.92, and 0.84 for the same tasks. The accuracy of composition-based models is substantially lower, with a coefficient of determination of 0.72, 0.75, and 0.54, respectively. The prediction of thermal decomposition temperature presents a challenge for all implemented models. In particular, CG\({}^{2}\)-SAGE and ALIGNN reach a coefficient of determination of 0.38 and 0.44, respectively. We can speculate that the low accuracy is mainly associated with a relatively high uncertainty in determining the target value from the thermogravimetric analysis (TGA) data. A typical rounding step of 25 \({}^{\circ}\)C (applied to decomposition temperature values[76]) is comparable to the MAE of the predictive models: 50 \({}^{\circ}\)C for CG\({}^{2}\)-SAGE and 47 \({}^{\circ}\)C for ALIGNN. Therefore, the semi-quantitative agreement between experimental and predicted values can hardly be enhanced without expanding the set of target values and improving the resolution limit of TGA data. Nevertheless, CG\({}^{2}\)-NNs and structure-aware models have similar coefficients of determination, 0.32-0.38 vs. 0.30-0.44. The next endpoint, heat capacity, poses a different challenge for predictive models.
Unexpected model rankings and ineffective training are caused by a shortage of data points in the dataset (214). Composition-based Roost and structure-aware ALIGNN have comparable performance, with an average coefficient of determination of 0.68 for four temperature values. On the other hand, CGCNN and MEGNet, both incorporating crystal graph data, demonstrate nearly zero predictive performance. CG\({}^{2}\)-NNs exhibit moderate accuracy; CG\({}^{2}\)-GCN achieves the highest average coefficient of determination of 0.52 among these models. Finally, Henry coefficients are considered in the
analysis (Figure 2a); the following metric values were calculated by averaging eight endpoint-wise quantities. ALIGNN, with a coefficient of determination of 0.67, significantly outperforms all the other models; the second-best model, MEGNet, has a coefficient of determination of 0.54. CG\({}^{2}\)-SAGE achieves a coefficient of determination of 0.48, surpassing structure-aware CGCNN (0.46). Both composition-based models, Roost and Wren, show limited predictive ability with a coefficient of determination of 0.29 and 0.30, respectively. The difference in predictive performance between ALIGNN and other models may be due to the significance of the specific geometry of adsorption sites in MOFs; the explicit inclusion of three-body interaction terms in ALIGNN enables precise handling of atomic environments. To sum up, CG\({}^{2}\)-NNs demonstrate accuracy comparable to simplistic structure-aware models, e.g., CGCNN, and generally outperform composition-based algorithms.
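For reference, the three reported metrics can be computed directly. The sketch below uses toy band-gap values rather than actual model outputs.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return the three metrics reported in the text: R^2, MAE, RMSE."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residual = y_true - y_pred
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {"R2": 1.0 - ss_res / ss_tot,
            "MAE": np.mean(np.abs(residual)),
            "RMSE": np.sqrt(np.mean(residual ** 2))}

# Toy band-gap values (eV) standing in for a model's test-set output.
m = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

Reporting all three together, as done here, guards against the known shortcomings of \(R^{2}\) as a lone performance measure.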
Figure 2: Predictive performance of three model classes. (a) Coefficients of determination (\(R^{2}\)) of composition-based, coarse-grained crystal graph, and crystal-structure-aware neural networks are colored by green, blue, and violet, respectively. (b–d) Mean absolute error (MAE) in the band gap prediction as a function of training dataset size.
Further, we analyze the scalability of the proposed computational framework with respect to the size of the training dataset (Figure 2b-d), focusing on the PBE band gap; models from each of the classes mentioned above (composition-based, coarse-grained, and structure-aware) are examined. We approximate how MAE depends on the training-dataset size \(N\) using a linear fit on a logarithmic scale for three chosen models: Wren, CG\({}^{2}\)-SAGE, and ALIGNN. As follows from the fitted coefficients \(a\) and \(b\) in the equation \(\lg MAE=a+b\lg N\) (Figure 2b-d), CG\({}^{2}\)-SAGE and ALIGNN show similar behavior, whereas an increase in training-dataset size leads to a much smaller boost in performance for Wren. At the same time, most structure-aware models are outperformed by Wren and another composition-based model (Roost) in the small-data regime (200 points in the training dataset); the tendency observed for heat capacity prediction is reproduced here.
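The scaling-law fit above amounts to ordinary least squares in log-log coordinates. A minimal sketch follows; the learning curve is synthetic (MAE halving per tenfold data increase), so the fitted coefficients here are not those reported in Figure 2b-d.

```python
import math

def fit_loglog(ns, maes):
    """Least-squares fit of lg(MAE) = a + b * lg(N); returns (a, b)."""
    xs = [math.log10(n) for n in ns]
    ys = [math.log10(m) for m in maes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Synthetic learning curve: MAE halves each time N grows 10x, so b = -lg(2).
ns = [100, 1000, 10000]
maes = [0.8, 0.4, 0.2]
a, b = fit_loglog(ns, maes)
```

A more negative slope \(b\) means the model benefits more from additional training data, which is the sense in which CG\({}^{2}\)-SAGE and ALIGNN "show similar behavior" while Wren saturates.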
The lack of data for the endpoint of interest can be addressed by leveraging advanced techniques, including transfer learning[77, 78, 79] and self-supervised learning[80, 81]. To explore the potential of applying CG\({}^{2}\)-NNs in conjunction with one such algorithm, we consider the task of band gap prediction across different classes of reticular materials: the models pretrained on the MOF band gap are used to evaluate the same property in COFs (Figure 3a). The models trained from scratch, i.e., without pretraining, serve as a baseline. CG\({}^{2}\)-GAT and ALIGNN show the lowest MAE of 0.31 eV; most of the models perform at a similar level. In contrast, two structure-aware models, CGCNN and MEGNet, fail to quantitatively reproduce the target value, as observed in the thermal capacity prediction of MOFs (Figure 2a). On the other hand, the transfer-learning technique results in the largest increase in accuracy (i.e., decrease in MAE) for these models: 0.16 eV and 0.34 eV, respectively. The lowest MAE of 0.26 eV is achieved by the fine-tuned ALIGNN model; DeeperGATGNN with an MAE of 0.29 eV is the second-best model owing to the moderate performance improvement (0.05 eV) associated with fine-tuning. Pretraining with MOF data has minimal effect on the accuracy of all CG\({}^{2}\)-NNs. The minor benefits from transfer learning may stem from incorporating pretrained embeddings into the proposed neural network architecture; at the concept level, this approach is essentially equivalent to freezing the weights of the input layer. The mol2vec (mat2vec) embeddings have been learned by exploring the vast chemical
Figure 3: Cross-domain (from metal–organic frameworks to covalent organic frameworks) transferability of coarse-grained crystal graph neural networks. (a) Mean absolute error (MAE) in the band gap prediction. MAE values for models pretrained on MOF data are shown in color, while gray represents models exclusively trained on COF data. (b) Scatter plot of calculated and predicted values of band gap. (c) Scatter plot of calculated and predicted values of band gap; linear scaling to minimize MAE is applied. (d) The two-dimensional projection of linker chemical space is produced within the Uniform Manifold Approximation and Projection (UMAP) algorithm from mol2vec embeddings; the linkers from the Quantum MOF database and from the subset of CURATED COFs are colored by gray and blue, respectively.
space, which extends beyond the organic linkers (metal centers) in reticular materials. In other words, the generalizability of low-level representations of molecular building units allows for the description of both MOFs and COFs from scratch. The band gap of COFs can be semi-quantitatively reproduced by CG\({}^{2}\)-GAT trained on MOF data, without additional fine-tuning; only a linear scaling of the model outputs is needed (Figure 3b,c). From another perspective, a common chemical space is formed by the two-dimensional projections of organic linkers in MOFs and COFs, generated within the Uniform Manifold Approximation and Projection[82] (UMAP) approach from mol2vec embeddings (Figure 3d).
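One simple way to probe whether MOF and COF linkers occupy a shared representation space is to compare their embedding vectors directly, e.g., by cosine similarity. The sketch below uses hypothetical 4-dimensional vectors as stand-ins for the 300-dimensional mol2vec embeddings.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 4-d stand-ins for 300-d mol2vec linker embeddings;
# chemically similar linkers are expected to have similarity close to 1.
mof_linker = [0.2, 0.5, 0.1, 0.7]
cof_linker = [0.25, 0.45, 0.15, 0.65]
sim = cosine(mof_linker, cof_linker)
```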
Figure 4: Interplay between predictive performance and energy efficiency of composition-based, coarse-grained crystal graph, and crystal-structure-aware models. The following endpoints are considered: (a) PBE band gap, (b) HLE17 band gap, (c) HSE06 band gap, (d) thermal decomposition temperature, (e) heat capacity at 250 K, (f) heat capacity at 300 K, (g) heat capacity at 350 K, (h) heat capacity at 400 K, (i) N\({}_{2}\) Henry coefficient, (j) O\({}_{2}\) Henry coefficient, (k) Kr Henry coefficient, (l) Xe Henry coefficient, (m) CH\({}_{4}\) Henry coefficient, (n) CO\({}_{2}\) Henry coefficient, (o) H\({}_{2}\)O Henry coefficient, and (p) H\({}_{2}\)S Henry coefficient. For illustrative purposes, the areas of the MAE vs. GHG emissions space accessible by different models are painted in two colors: yellow-colored regions correspond to the set of composition-based and structure-aware models, whereas red-colored fields reflect how the Pareto front is reshaped by introducing CG\({}^{2}\)-NNs.
### Carbon footprint of training CG\({}^{2}\)-NNs
The main focus of the materials informatics community has been on enhancing the predictive performance of developed neural network architectures[83], paying little attention to other critical aspects such as computational efficiency, explainability, transferability, and scalability. As shown in our recent study[84], an excessive focus on model accuracy has led to an exponential growth in trainable parameters and greenhouse gas (GHG) emissions from model training. Taking into account the decent predictive ability and high scalability (in terms of input data size) of CG\({}^{2}\)-NNs, we also estimate the carbon footprint of our models in the hope that the presented framework can tackle the accuracy-efficiency dilemma in materials property prediction. In Figure 4, the interplay between GHG emissions (measured in kilograms of carbon dioxide equivalents, kgCO\({}_{2}\)eq) and predictive performance (expressed in terms of MAE) is analyzed for MOF-related models from the previous section. It is worth noting that the carbon footprint of the model lifecycle is solely determined by the electrical energy consumption of the hardware in use. Accordingly, GHG emissions can provide a description of energy efficiency, in addition to quantifying environmental impacts. In contrast to the issue of ranking models by the accuracy objective, here we deal with a set of so-called non-dominated solutions, i.e., models that provide a tradeoff between target quantities. Thus, ALIGNN is positioned on the low-error section of the Pareto front in all cases except thermal-capacity prediction, the only endpoint where the composition-based Roost model takes its place owing to its high predictive performance in the small-data regime. Roost and CG\({}^{2}\)-SAGE dominate the low-emission region of the Pareto front, confirming the impressive computational efficiency of the presented framework.
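Because the footprint is proportional to electrical energy use, the conversion itself is a one-liner. The sketch below applies the 240.56 kgCO\({}_{2}\)e/MWh intensity coefficient quoted in the Methods section; the 5 kWh training-run energy is purely illustrative.

```python
def ghg_emissions_kg(energy_kwh, intensity_kg_per_mwh=240.56):
    """kgCO2-eq emitted for a given electrical energy use.

    The default intensity is the Moscow-grid coefficient used in the
    Methods section; the energy figure must come from power monitoring
    (e.g., an Eco2AI-style tracker)."""
    return energy_kwh * intensity_kg_per_mwh / 1000.0

# A hypothetical 5 kWh training run:
emissions = ghg_emissions_kg(5.0)
```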
As we can see, CG\({}^{2}\)-NNs offer nearly state-of-the-art performance at a fraction of the computational cost of previous algorithms. For instance, the PBE band gap MAE of CG\({}^{2}\)-SAGE can be reduced by 20% (22%) with MEGNet (ALIGNN), causing GHG emissions to increase 13 (89) times; similar estimates apply to the other endpoints as well. Therefore, the coarse-grained crystal graph concept offers a strong alternative to the currently dominant methods for representing reticular materials, if predictive performance and energy efficiency are equally important.
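The notion of non-dominated solutions used above can be made concrete with a short sketch: a model is dominated if some other model is no worse on both objectives (MAE and emissions) and strictly better on at least one. The numbers below are illustrative only, not taken from Figure 4.

```python
def pareto_front(models):
    """Return names of non-dominated models; each entry is (name, mae, emissions_kg).

    A model is dominated if another model is no worse on both objectives
    and strictly better on at least one."""
    front = []
    for name, mae, em in models:
        dominated = any(
            (m2 <= mae and e2 <= em) and (m2 < mae or e2 < em)
            for n2, m2, e2 in models if n2 != name
        )
        if not dominated:
            front.append(name)
    return front

# Illustrative numbers only (not from the paper's figures):
models = [
    ("ALIGNN",   0.20, 9.0),   # most accurate, highest emissions
    ("CG2-SAGE", 0.25, 0.1),   # near-best accuracy at tiny cost
    ("CGCNN",    0.30, 1.0),   # dominated by CG2-SAGE on both axes
]
front = pareto_front(models)
```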
### Inverse materials design accompanied with CG\({}^{2}\)-NNs
CG\({}^{2}\)-NNs are readily accessible as discriminative algorithms for _in silico_ high-throughput screening of reticular materials. However, the presented computational framework can be integrated into inverse design pipelines as well. As a proof-of-concept study, we explore maximizing hydrogen storage capacity in MOFs; the optimization problem is given as follows:
\[x^{*}=\operatorname*{argmax}_{x\in\mathcal{X}}f(x) \tag{7}\]
where \(\mathcal{X}\) is a design space represented in the coarse-grained crystal graph framework, \(f(x)\) is an objective function (hydrogen storage capacity) approximated by CG\({}^{2}\)-NN, and \(x^{*}\) is an optimal set of metal centers \(\mathcal{K}\) and organic linkers \(\mathcal{L}\) encoded as a complete bipartite graph (\(\mathcal{K},\mathcal{L},\tilde{\mathcal{E}}\)). Using IRMOF-20[85] as a prototype, we apply an iterative evolutionary procedure to modify the linker (thieno[3,2-b]thiophene-2,5-dicarboxylic acid) and maximize the target property; the metal center (zinc) and another linker (oxygen atom) are maintained unchanged. In other words, we move from a global optimization problem to a local maximization one; our intention is to confine the design space to the region surrounding the original structure. At each optimization step (Figure 5a), SELF-referencIng Embedded Strings[86] (SELFIES) representation of molecular building block undergoes one of three operations: a single character addition, deletion or replacement. SELFIES strings are 100% chemically robust by design, satisfying valence-bond rules under any mutation, but other specific requirements should also be fulfilled by a potential linker. In particular, the "linker-likeness" and synthetic-accessibility filters are implemented into the pipeline (see details in the Methods section). The linkers that meet the above criteria are passed through CG\({}^{2}\)-SAGE as a part of the coarse-grained crystal graph for predicting hydrogen storage capacity. Further, we evaluate the uncertainty in predictions using the deep-ensemble method[87]. The variance of model outputs calculated for a set of neural networks may serve as a reliable measure of epistemic uncertainty, which indicates the model's limitations in reproducing structure-property relationships outside its domain of applicability. 
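The mutation step of the evolutionary procedure can be sketched as follows. The toy token alphabet below is a hypothetical stand-in for the full SELFIES grammar; no valence checking is shown because SELFIES strings remain chemically valid under any such edit by construction.

```python
import random

# Toy subset standing in for the full SELFIES token alphabet:
ALPHABET = ["[C]", "[O]", "[N]", "[S]", "[Branch1]", "[Ring1]"]

def mutate(tokens, rng):
    """Apply one of three edits to a SELFIES token list:
    single-token addition, deletion, or replacement."""
    op = rng.choice(["add", "delete", "replace"])
    tokens = list(tokens)  # copy; the parent is left untouched
    i = rng.randrange(len(tokens))
    if op == "add":
        tokens.insert(i, rng.choice(ALPHABET))
    elif op == "delete" and len(tokens) > 1:
        tokens.pop(i)
    else:
        # "replace", or "delete" on a single-token string
        tokens[i] = rng.choice(ALPHABET)
    return tokens

rng = random.Random(0)
parent = ["[C]", "[C]", "[O]"]
child = mutate(parent, rng)
```

In the actual pipeline each optimization step chains one to twenty such mutations before the candidate passes the linker-likeness, synthetic-accessibility, and uncertainty filters.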
As we see in Figure 5b, narrower prediction intervals in terms of percentiles (defined by means of quantile regression[88]) are associated with a lower standard deviation of model ensemble outputs. Accordingly, in-domain MAE decreases with lowering the upper bound of standard deviation (Figure 5c); the structure is considered inside the domain of applicability if the uncertainty measure is below the predefined threshold. In the context of optimization problems, the ranking capability of model appears to be a key indicator; the implied threshold value (2.5 g/l) enables reaching a Pearson correlation coefficient of 0.86 (Figure 5c). Finally, the hydrogen storage capacity of the generated linker must exceed the lowest one in the previous population.
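The deep-ensemble uncertainty check amounts to comparing the standard deviation of the ensemble outputs against the chosen threshold. A minimal sketch, with hypothetical predictions for a single candidate linker:

```python
import math

def ensemble_predict(preds, threshold=2.5):
    """Mean and (population) std over ensemble outputs, in g/l.
    The structure counts as in-domain if the std is below the threshold."""
    n = len(preds)
    mean = sum(preds) / n
    std = math.sqrt(sum((p - mean) ** 2 for p in preds) / n)
    return mean, std, std < threshold

# Five hypothetical model outputs for one candidate linker:
mean, std, in_domain = ensemble_predict([48.0, 49.0, 50.0, 51.0, 52.0])
```

With the 2.5 g/l threshold from the text, this tightly clustered ensemble would be accepted; a wider spread would flag the candidate as outside the domain of applicability.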
Additionally, the top-performing linkers undergo filtering based on their 3D structure. Rigid molecules with antiparallel binding sites, i.e., carboxylic groups, are considered as potential precursors. The set consists of molecules that possess both the original framework (linker **4**) and a new one (linkers **1**, **2** and **3**); they are all located in the vicinity of the initial compound (thieno[3,2-b]thiophene-2,5-dicarboxylic acid) in the chemical space (Figure 5d). The parent MOF structure (IRMOF-20) has nearly record-breaking performance (experimental volumetric capacity of 51 g/l[89]), so the generated linkers only slightly enhance the predicted target property value (by up to 3.8 g/l). Expanding the design space of the optimization problem, such as by loosening the uncertainty-quantification criterion, may potentially improve the quality of the found solutions; simultaneous optimization of the metal center and organic linker achieves the same goal. Nonetheless, the molecules that have been identified show potential as building units for MOFs with a high hydrogen working capacity. The presented inverse-design pipeline provides outputs that can be readily utilized to predict MOF synthesis parameters[90, 91].
## 3 Discussion
As shown in the Results section, CG\({}^{2}\)-NNs are able to compete in predictive performance with models incorporating atomic connectivity information. This suggests that the underlying topology of a reticular material can be learned during model training. We intentionally consider datasets containing experimentally resolved MOFs and COFs; the relationship between a set of molecular building blocks and their self-assembled
Figure 5: Inverse reticular design driven by coarse-grained crystal graph neural networks. (a) Schematic diagram of the linker optimization workflow. SELFIES is an abbreviation of SELF-referencIng Embedded Strings; UQ is an abbreviation of uncertainty quantification. (b) Predictive error as a function of model ensemble standard deviation. The colored areas correspond to the predictive intervals estimated by quantile regression: within one standard deviation (green), within two standard deviations (orange), and over two standard deviations (red). (c) In-domain mean absolute error (MAE) and Pearson correlation coefficient as a function of model ensemble standard deviation. The threshold value of standard deviation (2.5 g/l) is indicated by a vertical line. (d) The initial linker (taken from IRMOF-20) and four generated molecules with the highest hydrogen storage capacity. The two-dimensional projection of linker chemical space is produced within the Uniform Manifold Approximation and Projection (UMAP) algorithm from mol2vec embeddings; the linkers from MOFs used for training objective function predictors are shown in gray.
structure is usually straightforward. On the contrary, widely used databases of hypothetical structures expand the reticular materials genome by providing multiple structures of various topologies for a specific organic linker and SBU. Obviously, CG\({}^{2}\)-NNs will fail to distinguish such structures, although most _in silico_ generated MOFs are thermodynamically inaccessible and, as a result, not of practical interest. Synthesized MOFs that exhibit polymorphism[92, 93] will cause the same issue. Incorporating global state attributes, e.g., temperature and pressure, into the CG\({}^{2}\)-NN architecture can potentially overcome the challenge of polymorphic phase transitions.
By eliminating topology from explicit consideration, we address the key shortcoming of current approaches to generating reticular materials. The entire design space is defined by the linkers-nodes-topology triad, but the optimization task is constrained by reticular chemistry principles: topology is determined by molecular building units and is not an independent variable. The issue has been mostly ignored in previous studies[44, 94, 95], whereas the energy ranking of polymorphs demands considerable computing resources[50, 96]. We anticipate that the coarse-grained crystal graph framework will shift community focus from navigating inaccessible regions of the reticular materials genome to improving methods for predicting synthesis conditions.
## 4 Conclusion
The introduction of the coarse-grained crystal graph framework aims to make data-driven design of reticular materials more accessible for synthetic chemists. The efficiency of the presented approach is achieved by integrating relevant domain knowledge into the neural network architecture. Specifically, pretrained embeddings are used to represent organic linkers and metal centers, demonstrating the impact of previous community efforts on AI tools for materials design. We hope that this research will not be the final link in the chain, but rather bolster the further enhancement of predictive models.
## 5 Methods
### Datasets
The QMOF database[71, 72] served as a source for MOF crystal structures and their corresponding band gap values obtained at three levels of theory: generalized gradient approximation (GGA), meta-GGA, and screened-exchange hybrid GGA; the density functionals in use were Perdew-Burke-Ernzerhof[97] (PBE), High Local Exchange 2017[98] (HLE17), and Heyd-Scuseria-Ernzerhof[99] (HSE06), respectively. The datasets consisted of 20237 (PBE endpoint), 10664 (HLE17), and 10718 (HSE06) entities. Another collection of neural networks was trained using a subset of structures (3038 points) from the CoRE MOF 2019 database and their corresponding decomposition temperatures extracted from the TGA data by Nandy et al.[100] For the heat capacity prediction, we utilized the values obtained within the harmonic approximation at the GGA level of theory[101]; 214 MOFs from the CoRE MOF 2019 and QMOF databases were taken into account. The Henry coefficients of eight gases (N\({}_{2}\), O\({}_{2}\), Kr, Xe, CH\({}_{4}\), CO\({}_{2}\), H\({}_{2}\)O, H\({}_{2}\)S) computed using grand canonical Monte Carlo (GCMC) simulations and the corresponding structures from the QMOF database (1431, 1552, 1297, 1205, 1268, 1538, 1482, and 1352 compounds, respectively) were taken from the dataset presented by Jablonka et al.[59] The PBE band gap values[102] of 61 compounds from the CURATED COFs database[103] containing boron or silicon atoms were used to estimate the transferability of the neural networks of interest. The usable volumetric hydrogen capacity of MOFs under temperature-pressure swing conditions (77 K/100 bar and 160 K/5 bar) was implemented as the objective in the optimization task; 4146 structures from the CoRE MOF 2019 database and the GCMC values were taken from the study[104] to train an ensemble of CG\({}^{2}\)-SAGE models.
### Coarse-grained crystal graph processing
The procedure outlined below was applied to construct the coarse-grained crystal graph and assign features to its nodes. Metals (in MOFs and COFs) and metalloids (in COFs) were completely removed from the initial crystal structure (the full list of atom types is provided in Figure S1). The removed atoms are considered as metal centers; each of them was featurized using 200-dimensional mat2vec embeddings[61]. The crystal structure containing only nonmetal atoms was processed within the OpenBabel routines[105] to produce Simplified Molecular Input Line Entry System[106, 107] (SMILES) strings from connected components in the reduced crystal graph. SMILES strings were then converted into 300-dimensional mol2vec embeddings[60]. Metallic and organic units were built into the complete bipartite graph, where each node of one type is
connected to a node of another type. In this form, the coarse-grained crystal graph was passed through a graph neural network.
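The final assembly step is simple to state: every metal-center node is connected to every linker node. A minimal sketch of building that complete bipartite edge list (the node labels below are hypothetical; in the actual pipeline each node carries its mat2vec or mol2vec embedding as a feature vector):

```python
def complete_bipartite_edges(metal_nodes, linker_nodes):
    """Edge list of the complete bipartite graph between metal centers
    and organic linkers; every metal is connected to every linker."""
    return [(m, l) for m in metal_nodes for l in linker_nodes]

# IRMOF-style toy example: one metal type, two organic units.
edges = complete_bipartite_edges(["Zn"], ["linker_A", "linker_B"])
```

This edge list (plus the per-node embeddings) is exactly what a graph-neural-network library such as DGL consumes as input.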
### Model training
CG\({}^{2}\)-NNs were built with PyTorch[108] and Deep Graph Library[109] (DGL). The choice of deep learning framework has a substantial effect on the energy efficiency of model training[110]; therefore, PyTorch-based implementations were utilized for all other models as well. In the predictive performance (transfer learning) study, we used the Adam optimizer[111] with a learning rate of 10\({}^{-3}\) (10\({}^{-4}\)) and a batch size of 64 (16); the maximum number of epochs and early stopping criterion were set to 500 and 50, respectively. Five-fold cross-validation was performed for model evaluation; one-eighth of the training data was used for early stopping. To evaluate the objective in the optimization task (hydrogen storage capacity), an ensemble of 20 CG\({}^{2}\)-SAGE models was trained by means of the Adam optimizer with a learning rate of 10\({}^{-3}\) and a batch size of 64. The holdout cross-validation technique with a training-validation-test ratio of 80:10:10 was applied. The mean and standard deviation of model ensemble outputs were used to estimate the target property and the corresponding uncertainty[87]. The Eco2AI library[112] was utilized to estimate GHG emissions of model training; the emission intensity coefficient was set to 240.56 kgCO\({}_{2}\)e/MWh (Moscow). All experiments in the study were performed on a workstation equipped with two Intel(r) Xeon(r) CPUs E5-2695 v4 @ 2.10GHz, 144 GB RAM, and NVIDIA GeForce RTX 3090 Ti.
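The early-stopping criterion described above (halt once the validation loss has not improved for a fixed number of epochs) can be sketched framework-independently. The toy validation curve and the shortened patience below are for demonstration only; the study itself uses a patience of 50.

```python
def train_with_early_stopping(val_losses, patience=50):
    """Scan a per-epoch validation-loss sequence and return
    (stop_epoch, best_epoch): stop once `patience` epochs pass
    without improvement over the best loss seen so far."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch, best_epoch
    return len(val_losses) - 1, best_epoch

# Toy curve: improvement stalls after epoch 3 (patience shortened to 2 for the demo).
stop, best = train_with_early_stopping([1.0, 0.8, 0.7, 0.65, 0.66, 0.67, 0.68], patience=2)
```

In practice the model weights from `best_epoch` (here 3) are the ones kept for evaluation.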
### Inverse reticular design
The linker optimization algorithm was heavily inspired by previously developed methods[113, 114, 115]. Three symbolic operations (addition, deletion, and replacement) were applied to modify SELFIES strings[86]; each optimization step included from one to twenty randomly selected mutations and several filtering procedures described below. The initial population was produced by generating one hundred thousand mutants from the SELFIES string corresponding to the parent structure (thieno[3,2-b]thiophene-2,5-dicarboxylic acid); one hundred strings with the highest hydrogen storage capacity value were taken for further processing. Then, one thousand mutation steps were carried out. At each iteration, the population was updated by including a newly generated molecule if its objective was higher than the lowest value in the current generation; the number of molecules in the population was maintained at one hundred. Three filters were implemented to exclude irrelevant molecules. First, we assumed exactly two binding sites (carboxylic groups) in a molecule by applying the "linker-likeness" filter. Second, molecules with low synthetic accessibility (SAscore[116] higher than 5.0) were also ignored. Third, 20 CG\({}^{2}\)-SAGE models were used to assess uncertainty in predictions of the target property by means of deep-ensemble learning[87]. If the standard deviation of model ensemble outputs exceeded 2.5 g/l, the corresponding molecule was discarded as well. One hundred launches of the optimization procedure were performed in total.
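The population-update rule (admit a new molecule only if its objective beats the lowest value in the current generation, keeping the population size fixed) can be sketched as follows; the linker names and capacity values are hypothetical.

```python
def update_population(population, candidate):
    """population: list of (linker, capacity) kept at fixed size.
    A new molecule enters only if its objective beats the lowest
    value in the current generation; the worst member is replaced."""
    i_worst = min(range(len(population)), key=lambda i: population[i][1])
    if candidate[1] > population[i_worst][1]:
        population = population[:i_worst] + population[i_worst + 1:] + [candidate]
    return population

# Hypothetical (linker, capacity in g/l) pairs:
pop = [("A", 45.0), ("B", 47.0), ("C", 44.0)]
pop = update_population(pop, ("D", 46.0))  # admitted: replaces the worst ("C")
pop = update_population(pop, ("E", 40.0))  # rejected: below the current worst
```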
The SELFIES strings included in the last generations were converted into 3D atomic coordinates. Specifically, the Experimental-Torsion basic Knowledge Distance Geometry[117] (ETKDG) approach implemented in the RDKit library was employed to generate 30 conformers for each of the 30 molecules with the highest hydrogen storage capacity. Then, local geometry optimization was performed using the ANI-2x neural-network potential[118] and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm with a force convergence criterion of 10\({}^{-3}\) eV/Å. Two quantities were calculated for each ensemble of conformers: 1) the mean angle between the two vectors oriented from the carbon atom to the midpoint between the oxygen atoms in the carboxyl groups, and 2) the standard deviation of the pairwise distance between the carbon atoms of the carboxyl groups. A molecule was included in the final set if the former value was higher than 150\({}^{\circ}\) and the latter was lower than 0.5 Å; the linker optimization was completed on four structures.
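The antiparallel-binding-site criterion reduces to computing the angle between the two carboxyl-orientation vectors and checking it against the 150-degree threshold. A minimal sketch, with hypothetical direction vectors:

```python
import math

def angle_deg(u, v):
    """Angle between two 3-d vectors, in degrees (clamped for safety)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

# Nearly antiparallel carboxyl orientations pass the >150-degree filter:
u = (1.0, 0.0, 0.0)
v = (-1.0, 0.1, 0.0)
ok = angle_deg(u, v) > 150.0
```

The same routine, averaged over the 30 conformers of a molecule, yields the first filtering quantity; the second is just the standard deviation of the carbon-carbon distance across the ensemble.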
## 6 Acknowledgements
V.K. was supported by the Fellowship from Non-commercial Foundation for the Advancement of Science and Education INTELLECT.
## 7 Data and code availability
The source code and data accompanying this study will be published soon.
2303.01263 | Unnoticeable Backdoor Attacks on Graph Neural Networks | Graph Neural Networks (GNNs) have achieved promising results in various tasks such as node classification and graph classification. Recent studies find that GNNs are vulnerable to adversarial attacks. However, effective backdoor attacks on graphs are still an open problem. In particular, backdoor attack poisons the graph by attaching triggers and the target class label to a set of nodes in the training graph. The backdoored GNNs trained on the poisoned graph will then be misled to predict test nodes to target class once attached with triggers. Though there are some initial efforts in graph backdoor attacks, our empirical analysis shows that they may require a large attack budget for effective backdoor attacks and the injected triggers can be easily detected and pruned. Therefore, in this paper, we study a novel problem of unnoticeable graph backdoor attacks with limited attack budget. To fully utilize the attack budget, we propose to deliberately select the nodes to inject triggers and target class labels in the poisoning phase. An adaptive trigger generator is deployed to obtain effective triggers that are difficult to be noticed. Extensive experiments on real-world datasets against various defense strategies demonstrate the effectiveness of our proposed method in conducting effective unnoticeable backdoor attacks. | Enyan Dai, Minhua Lin, Xiang Zhang, Suhang Wang | 2023-02-11T01:50:58Z | http://arxiv.org/abs/2303.01263v1

# Unnoticeable Backdoor Attacks on Graph Neural Networks
###### Abstract.
Graph Neural Networks (GNNs) have achieved promising results in various tasks such as node classification and graph classification. Recent studies find that GNNs are vulnerable to adversarial attacks. However, effective backdoor attacks on graphs are still an open problem. In particular, backdoor attack poisons the graph by attaching triggers and the target class label to a set of nodes in the training graph. The backdoored GNNs trained on the poisoned graph will then be misled to predict test nodes to target class once attached with triggers. Though there are some initial efforts in graph backdoor attacks, our empirical analysis shows that they may require a large attack budget for effective backdoor attacks and the injected triggers can be easily detected and pruned. Therefore, in this paper, we study a novel problem of unnoticeable graph backdoor attacks with limited attack budget. To fully utilize the attack budget, we propose to deliberately select the nodes to inject triggers and target class labels in the poisoning phase. An adaptive trigger generator is deployed to obtain effective triggers that are difficult to be noticed. Extensive experiments on real-world datasets against various defense strategies demonstrate the effectiveness of our proposed method in conducting effective unnoticeable backdoor attacks.
Backdoor Attack, Graph Neural Networks
process as graph manipulation attacks. This especially benefits targeted attacks on inductive node classification, which widely exists in real-world scenarios. For example, the TikTok graph often incorporates new users and predicts their labels with a trained model. _Thirdly_, compared with revising the links between existing users, it is relatively easy to inject triggers and malicious labels in backdoor attacks. Take malicious user detection on social networks as an example: many labels are collected from user reports, so malicious labels can easily be assigned by attackers. As for trigger attachment, it can be achieved by linking a set of fake accounts to the users.
Recently, Zhang et al. (Zhang et al., 2019) first investigated a graph backdoor attack that uses randomly generated graphs as triggers. A trigger generator is adopted in (Zhang et al., 2019) to obtain more powerful sample-specific triggers. However, these methods have unnoticeability issues in two aspects. _Firstly_, our empirical analysis in Sec. 3.3.1 shows that existing methods need a large budget to conduct effective backdoor attacks on large-scale graphs, i.e., they need to attach backdoor triggers to a large number of nodes in the training graph so that a model trained on the graph is fooled into assigning the target label to nodes attached with the backdoor trigger. This largely increases the risk of being detected. _Secondly_, the triggers generated by these methods can be easily identified and destroyed. Specifically, real-world graphs such as social networks generally follow the homophily assumption, i.e., similar nodes are more likely to be connected, whereas in existing graph backdoor attacks, neither the edges linking triggers to poisoned nodes nor the edges inside the triggers are guaranteed to connect nodes with high similarity scores. Thus, the triggers and assigned malicious labels can be eliminated by pruning edges linking dissimilar nodes and discarding the labels of involved nodes, which is verified in Sec. 3.3.2. Therefore, developing an effective unnoticeable graph backdoor attack with a limited attack budget is important. However, graph backdoor attacks are still in their early stage, and there is no existing work on unnoticeable graph backdoor attacks with a limited attack budget.
Therefore, in this paper, we study a novel and important problem of developing an effective unnoticeable graph backdoor attack with a limited attack budget in terms of the number of poisoned nodes. In essence, we face two challenges: (i) how to fully utilize the limited budget of poisoned samples for graph backdoor attacks; (ii) how to obtain triggers that are powerful yet difficult to detect. To address these challenges, we propose a novel framework, Unnoticeable Graph Backdoor Attack (UGBA)1. To better utilize the attack budget, UGBA attaches triggers to crucial representative nodes selected by a novel poisoned node selection algorithm. An adaptive trigger generator is deployed in UGBA to obtain powerful unnoticeable triggers that exhibit high similarity with each target node while maintaining a high attack success rate. In summary, our main contributions are:
Footnote 1: [https://github.com/ventr1c/UGBA](https://github.com/ventr1c/UGBA)
* We study a novel problem of promoting the unnoticeability of graph backdoor attacks in terms of both generated triggers and attack budget;
* We empirically verify that a simple strategy of edge pruning and label discarding can largely degrade existing backdoor attacks;
* We design a framework, UGBA, that deliberately selects poisoned samples and learns effective unnoticeable triggers to achieve unnoticeable graph backdoor attacks under a limited budget; and
* Extensive experiments on large-scale graph datasets demonstrate the effectiveness of our proposed method in unnoticeably backdooring different GNN models with limited attack budget.
## 2. Related Works
### Graph Neural Networks
Graph Neural Networks (GNNs) (Grover et al., 2017; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019) have shown remarkable ability in modeling graph-structured data, which benefits various applications such as recommendation systems (Zhang et al., 2019), drug discovery (Grover et al., 2017) and traffic analysis (Zhang et al., 2019). Generally, the success of GNNs relies on the message-passing strategy, which updates a node's representation by recursively aggregating and combining features from neighboring nodes. For instance, in each layer of GCN (Grover et al., 2017), the representations of the neighbors and the center node are averaged, followed by a non-linear transformation such as ReLU. Recently, many GNN models have been proposed to further improve the performance of GNNs (Grover et al., 2017; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019). For example, self-supervised GNNs (Grover et al., 2017; Zhang et al., 2019; Zhang et al., 2019) have been investigated to reduce the need for labeled nodes. Works that improve the fairness (Zhang et al., 2019), robustness (Zhang et al., 2019; Zhang et al., 2019) and explainability (Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019) of GNNs have been explored. GNN models for heterophilic graphs have also been designed (Grover et al., 2017; Zhang et al., 2019).
### Attacks on Graph Neural Networks
According to the stage at which the attack occurs, adversarial attacks on GNNs can be divided into poisoning attacks (Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019) and evasion attacks (Grover et al., 2017; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019). In poisoning attacks, the attackers aim to perturb the training graph before GNNs are trained such that a GNN model trained on the poisoned dataset will have low prediction accuracy on test samples. For example, Nettack (Nettack, 2018) employs a tractable surrogate model to conduct a targeted poisoning attack by learning perturbations against the surrogate model. Evasion attacks add perturbations in the test stage, where the GNN model has been well trained and cannot be modified by attackers. Optimizing the perturbation of graph structures by gradient descent (Zhang et al., 2019) and reinforcement learning (Grover et al., 2017; Zhang et al., 2019) has been explored. Evasion attacks through graph injection (Zhang et al., 2019; Zhang et al., 2019) have also been investigated.
Backdoor attacks are still rarely explored on GNNs (Zhang et al., 2019; Zhang et al., 2019). Backdoor attacks generally attach backdoor triggers to the training data and assign the target label to samples with the trigger. A model trained on the poisoned data will then be misled if its backdoor is activated by trigger-embedded test samples. Zhang et al. (Zhang et al., 2019) propose a subgraph-based backdoor attack on GNNs by injecting randomly generated universal triggers into some training samples.
Figure 1. General framework of graph backdoor attack.
Xi et al. (2017) adopt a trigger generator to learn to generate adaptive triggers for different samples. Sheng et al. (2019) propose to select nodes with high degree and closeness centrality. Xu and Picek (2019) improve unnoticeability by assigning triggers without changing the labels of poisoned samples. Our proposed method is inherently different from these methods as (i) we can generate unnoticeable adaptive triggers that simultaneously maintain effectiveness and bypass potential trigger detection defenses based on the feature similarity of linked nodes; and (ii) we design a novel clustering-based node selection algorithm to further reduce the required attack budget.
## 3. Preliminary Analysis
In this section, we present preliminaries of backdoor attacks on graphs and show the unnoticeability issues of existing backdoor attacks.
### Notations
We use \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})\) to denote an attributed graph, where \(\mathcal{V}=\{v_{1},\ldots,v_{N}\}\) is the set of \(N\) nodes, \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the set of edges, and \(\mathbf{X}=\{\mathbf{x}_{1},...,\mathbf{x}_{N}\}\) is the set of node attributes with \(\mathbf{x}_{i}\) being the node attribute of \(v_{i}\). \(\mathbf{A}\in\mathbb{R}^{N\times N}\) is the adjacency matrix of the graph \(\mathcal{G}\), where \(\mathbf{A}_{ij}=1\) if nodes \(v_{i}\) and \(v_{j}\) are connected; otherwise \(\mathbf{A}_{ij}=0\). In this paper, we focus on the semi-supervised node classification task in the inductive setting, which widely exists in real-world applications. For instance, GNNs trained on social networks often need to conduct predictions on newly enrolled users to provide service. Specifically, in inductive node classification, a small set of nodes \(\mathcal{V}_{L}\subseteq\mathcal{V}\) in the training graph are provided with labels \(\mathcal{Y}_{L}=\{y_{1},\ldots,y_{N_{L}}\}\). The test nodes \(\mathcal{V}_{T}\) are not covered in the training graph \(\mathcal{G}\), i.e., \(\mathcal{V}_{T}\cap\mathcal{V}=\emptyset\).
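As a tiny concrete illustration of this notation (a hypothetical 4-node graph invented for exposition, not data from the paper), the adjacency matrix \(\mathbf{A}\) and attribute matrix \(\mathbf{X}\) can be built as:

```python
import numpy as np

# Hypothetical toy graph: N = 4 nodes, undirected edge set E.
N = 4
edges = [(0, 1), (1, 2), (2, 3)]

# Adjacency matrix A: A[i, j] = 1 iff v_i and v_j are connected.
A = np.zeros((N, N), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1

# Node attribute matrix X: one d-dimensional feature vector per node.
d = 5
X = np.random.rand(N, d)
```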
### Preliminaries of Graph Backdoor Attacks
#### 3.2.1. Threat Model
In this section, we introduce the threat model. **Attacker's Goal**: The goal of the adversary is to mislead the GNN model into classifying target nodes attached with triggers as the target class. Simultaneously, the attacked GNN model should behave normally on clean nodes without triggers attached.
**Attacker's Knowledge and Capability**: As in the setting of most poisoning attacks, the training data of the target model is available to attackers. Information about the target GNN model, including its architecture, is unknown to the attacker. Attackers are capable of attaching triggers and labels to nodes within a budget before the training of the target model to poison the graph. During the inference phase, attackers can attach triggers to the target test node.
#### 3.2.2. General Framework of Graph Backdoor Attacks
The key idea of backdoor attacks is to associate the trigger with the target class in the training data to mislead target models. As Fig. 1 shows, during the poisoning phase, the attacker attaches a trigger \(g\) to a set of poisoned nodes \(\mathcal{V}_{P}\subseteq\mathcal{V}\) and assigns \(\mathcal{V}_{P}\) the target class label \(y_{t}\), resulting in a backdoored dataset. Generally, the poisoned node set \(\mathcal{V}_{P}\) is randomly selected. A GNN trained on the backdoored dataset will be optimized to predict the poisoned nodes \(\mathcal{V}_{P}\) attached with the trigger \(g\) as the target class \(y_{t}\), which forces the target GNN to correlate the existence of the trigger \(g\) among a node's neighbors with the target class. In the test phase, the attacker can attach the trigger \(g\) to a test node \(v\) to make \(v\) classified as the target class by the backdoored GNN. Some initial efforts (Zhu et al., 2017; Wang et al., 2018) have been made toward graph backdoor attacks. Specifically, SBA (Wang et al., 2018) directly injects designed sub-graphs as triggers, and GTA (Zhu et al., 2017) adopts a trigger generator to learn optimal sample-specific triggers.
### Unnoticeability of Graph Backdoor Attacks
In this subsection, we analyze the unnoticeability of existing graph backdoor attacks in terms of the required number of poisoned samples and the difficulty of trigger detection.
#### 3.3.1. Size of Poisoned Nodes
In backdoor attacks, a set of poisoned nodes \(\mathcal{V}_{P}\) is attached with triggers and target class labels to conduct attacks. However, as large-scale graphs can provide abundant information for training GNNs, the attacker may need to inject a large number of triggers and malicious labels to mislead the target GNN into correlating the trigger with the target class, which puts the backdoor attack at risk of being noticed. To verify this, we analyze how the size of the poisoned node set affects the attack success rate of state-of-the-art graph backdoor attacks, i.e., SBA-Gen, SBA-Samp (Wang et al., 2018), and GTA (Zhu et al., 2017), on a large node classification dataset, OGB-arxiv (2018). Detailed descriptions of these methods can be found in Sec. 6.1.2. We vary \(|\mathcal{V}_{P}|\) over \(\{80,240,400,800,2400\}\). The trigger size is limited to three nodes. The architecture of the target model is GraphSage (Gan et al., 2018). The attack success rate (ASR) results are presented in Table 1. From the table, we observe that all methods, especially SBA-Gen and SBA-Samp, achieve poor attack results with a limited budget such as 80 or 240 poisoned nodes. This is because (i) SBA-Gen and SBA-Samp utilize handcrafted triggers, which are not effective; and (ii) though GTA uses learned sample-specific triggers, its selection of poisoned nodes, like that of SBA-Gen and SBA-Samp, is random, so the budget is not well utilized. Thus, it is necessary to develop graph backdoor attack methods that can generate effective triggers and fully exploit the attack budget.
#### 3.3.2. Detection of Triggers
Real-world graphs such as social networks generally show the homophily property, i.e., nodes with similar attributes are connected by edges. For existing backdoor attacks, the attributes of triggers may differ greatly from those of the attached poisoned nodes. The connections within the trigger may also violate the homophily property. Therefore, the negative effects of injected triggers and target labels might be reduced by eliminating edges linking dissimilar nodes and discarding the labels of involved nodes. To verify this, we evaluate two strategies to defend against backdoor attacks:
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \(|\mathcal{V}_{P}|\) & 80 & 240 & 400 & 800 & 2400 \\ \hline SBA-Samp & 0.06 & 1.7 & 10.8 & 34.5 & 75.5 \\ SBA-Gen & 0.08 & 18.1 & 32.1 & 54.3 & 85.9 \\ GTA & 37.4 & 62.4 & 72.4 & 82.7 & 94.8 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Impact of \(|\mathcal{V}_{P}|\) on the ASR (%) of backdoor attacks.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Defense & Clean & \multicolumn{2}{c}{SBA-Samp} & \multicolumn{2}{c}{SBA-Gen} & \multicolumn{2}{c}{GTA} \\ & Acc & ASR & Acc & ASR & Acc & ASR & Acc \\ \hline None & 65.5 & 61.0 & 65.1 & 70.8 & 65.2 & 94.8 & 65.6 \\ Prune & 62.2 & 8.9 & 64.0 & 31.2 & 64.0 & 1.4 & 64.5 \\ Prune+LD & 62.6 & 3.2 & 64.0 & 15.3 & 63.8 & 0.04 & 64.1 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Results of backdoor defense (Attack Success Rate (%) / Clean Accuracy (%)) on the Ogb-arxiv dataset.
* **Prune**: We prune edges linking nodes with low cosine similarity. As edges created by the backdoor attacker may link dissimilar nodes, the trigger structure and attachment edge can be destroyed.
* **Prune+LD**: To reduce the influence of dirty labels of poisoned nodes, besides pruning, we also discard the labels of the nodes linked by dissimilar edges.
Experimental results on Ogb-arxiv with \(|\mathcal{V}_{P}|\) set to 2400 are presented in Table 2. Other settings are the same as in Sec. 3.3.1. For Prune and Prune+LD, the threshold is set to filter out the 10% of edges with the lowest cosine similarity scores. More results on other datasets can be found in Table 4. The accuracy of the backdoored GNN on the clean test set is also reported in Table 2 to show how the defense strategies affect prediction performance. Accuracy on a clean graph without any attacks is reported as a reference. All results are averages over 5 runs. We can observe from Table 2 that (i) the ASR drops dramatically with the two proposed strategies, Prune and Prune+LD; and (ii) the impact of the proposed strategies on prediction accuracy is negligible. This demonstrates that the triggers used by existing backdoor attacks can be easily mitigated.
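A minimal sketch of the Prune and Prune+LD defenses described above, assuming a simple edge-list representation; the function name and the percentile-based cutoff are illustrative choices, not the exact implementation evaluated in the paper:

```python
import numpy as np

def prune_defense(edges, X, pct=10.0):
    """Prune the pct% of edges whose endpoints are least similar.

    edges: list of (i, j) node-index pairs; X: (N, d) node feature matrix.
    Returns the kept edges and, for the Prune+LD variant, the set of nodes
    whose labels should be discarded (endpoints of pruned edges).
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)   # row-normalize
    sims = np.array([Xn[i] @ Xn[j] for i, j in edges])  # cosine similarity
    thresh = np.percentile(sims, pct)                   # bottom-pct% cutoff
    kept = [e for e, s in zip(edges, sims) if s >= thresh]
    dropped = {n for e, s in zip(edges, sims) if s < thresh for n in e}
    return kept, dropped
```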
## 4. Problem Formulation
Our preliminary analysis verifies that existing backdoor attacks (i) require a large attack budget on large datasets; and (ii) inject triggers that can be easily detected. To alleviate these two issues, we propose to investigate a novel unnoticeable graph backdoor attack problem, aiming to unnoticeably backdoor various target GNNs with a limited attack budget. Specifically, we enhance the general graph backdoor attack model in the following two aspects.
**Selection of Poisoned Nodes \(\mathcal{V}_{P}\)**: In the attack model of current graph backdoor attacks, the poisoned node set \(\mathcal{V}_{P}\) is randomly selected. However, in this way, the budget is likely wasted on uninformative poisoned nodes. For example, the attacker may repeatedly poison nodes from the same cluster that have very similar patterns, which is unnecessary. To fully utilize the attack budget, we instead deliberately select the most useful poisoned nodes \(\mathcal{V}_{P}\subseteq\mathcal{V}\) in the unnoticeable backdoor attack.
**Unnoticeable Constraint on Triggers**: As the preliminary analysis shows, dissimilarity between trigger nodes and poisoned nodes makes the attack easy to detect. Hence, it is necessary to obtain adaptive triggers that are similar to the poisoned nodes or target nodes. In addition, edges within triggers should also be enforced to link similar nodes to avoid being destroyed by the pruning strategy. Such adaptive triggers can be given by an adaptive generator. Let \(\mathcal{E}^{i}_{B}\) denote the edge set that contains the edges inside trigger \(g_{i}\) and the edge attaching trigger \(g_{i}\) to node \(v_{i}\). The unnoticeable constraint on the generated adaptive triggers can be formally written as:
\[\min_{(u,v)\in\mathcal{E}^{i}_{B}}sim(u,v)\geq T, \tag{1}\]
where \(sim\) denotes the cosine similarity between node features and \(T\) is a relatively high threshold of the cosine similarity which can be tuned based on datasets.
In node classification with GNNs, the prediction is given based on the computation graph of the node. Thus, the clean prediction on node \(v_{i}\) can be written as \(f_{\theta}(\mathcal{G}^{i}_{C})\), where \(\mathcal{G}^{i}_{C}\) denotes the clean computation graph of node \(v_{i}\). For a node \(v_{i}\) attached with the adaptive trigger \(g_{i}\), the predicted label will be given by \(f_{\theta}(a(\mathcal{G}^{i}_{C},g_{i}))\), where \(a(\cdot)\) denotes the operation of trigger attachment. Then, with the above descriptions and the notations in Sec. 3.1, we can formulate the unnoticeable graph backdoor attack as:
**Problem 1**.: _Given a clean attributed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})\) with a set of nodes \(\mathcal{V}_{L}\) provided with labels \(\mathcal{Y}_{L}\), we aim to learn an adaptive trigger generator \(f_{g}:v_{i}\to g_{i}\) and effectively select a set of nodes \(\mathcal{V}_{P}\) within budget to attach triggers and labels so that a GNN\(f\) trained on the poisoned graph will classify the test node attached with the trigger to the target class \(y_{t}\) by solving:_
\[\begin{split}&\min_{\mathcal{V}_{P},f_{g}}\sum_{u_{i}\in \mathcal{V}_{U}}l(f_{\theta}(a(\mathcal{G}^{i}_{C},g_{i})),y_{t})\\ & s.t.~{}\theta^{*}=\operatorname*{arg\,min}_{\theta}\sum_{u_{i}\in \mathcal{V}_{L}}l(f_{\theta}(\mathcal{G}^{i}_{C}),y_{i})+\sum_{u_{i}\in \mathcal{V}_{P}}l(f_{\theta}(a(\mathcal{G}^{i}_{C},g_{i})),y_{t}),\\ &\quad\forall v_{i}\in\mathcal{V}_{P}\cup\mathcal{V}_{T},~{}g_{ i}\text{ meets Eq.(1) and }|g_{i}|<\Delta_{g}\\ &\quad|\mathcal{V}_{P}|\leq\Delta_{P}\end{split} \tag{2}\]
_where \(l(\cdot)\) represents the cross entropy loss and \(\theta_{g}\) denotes the parameters of the adaptive trigger generator \(f_{g}\). In the constraints, the node size of trigger \(|g_{i}|\) is limited by \(\Delta_{g}\), and the size of the poisoned node set is limited by \(\Delta_{P}\). The architecture of the target GNN \(f\) is unavailable, and the target may adopt various defense methods._
In the transductive setting, \(\mathcal{V}_{U}\) would be the target nodes. However, we focus on the inductive setting, where \(\mathcal{V}_{T}\) is not available for the optimization. Hence, \(\mathcal{V}_{U}\) is set to \(\mathcal{V}\backslash\mathcal{V}_{L}\) to ensure the attack is effective for various types of target nodes.
## 5. Methodology
In this section, we present the details of UGBA, which aims to optimize Eq.(2) to conduct effective and unnoticeable graph backdoor attacks. Since it is challenging and computationally expensive to jointly optimize the selection of poisoned nodes \(\mathcal{V}_{P}\) and the trigger generator, UGBA splits the optimization process into two steps: poisoned node selection and adaptive trigger generator learning. Two challenges remain to be addressed: (i) how to select the poisoned nodes that are most useful for backdoor attacks; and (ii) how to learn the adaptive trigger generator so that its triggers meet the unnoticeable constraint while maintaining a high success rate in backdoor attacks. To address these challenges, we propose the novel framework UGBA, illustrated in Fig. 2. UGBA is composed of a poisoned node selector \(f_{P}\), an adaptive trigger generator \(f_{g}\), and a surrogate GCN model \(f_{s}\). Specifically, the poisoned node selector takes the graph \(\mathcal{G}\) as input and applies a novel metric to select nodes with representative patterns in features and local structures as poisoned nodes. The adaptive trigger generator \(f_{g}\) is trained with a differentiable unnoticeable constraint to give unnoticeable triggers for the selected poisoned nodes \(\mathcal{V}_{P}\) that fool \(f_{s}\). To guarantee the effectiveness of the generated adaptive triggers on various test nodes, a bi-level optimization with the surrogate GCN model is applied.
### Poisoned Node Selection
In this subsection, we give the details of the node selection algorithm. Intuitively, if nodes with representative features and local
structures are predicted as the target class \(y_{t}\) after being attached with triggers, other nodes are also very likely to be successfully backdoored. Therefore, we propose to select diverse and representative nodes in the graph as poisoned nodes, which enforces the target GNN to predict the representative nodes attached with triggers as the target class \(y_{t}\).
One straightforward way to obtain representative nodes is to conduct clustering on the node features. However, this fails to consider the graph topology, which is crucial for graph-structured data. Therefore, we propose to train a GCN encoder with the node labels to obtain representations that capture both attribute and structure information. Then, for each class, we can select representative nodes by running a clustering algorithm on the learned representations. Specifically, the node representations and predicted labels can be obtained as:
\[\mathbf{H}=GCN(\mathbf{A},\mathbf{X}),\quad\hat{\mathbf{Y}}=\text{softmax}( \mathbf{W}\cdot\mathbf{H}), \tag{3}\]
where \(\mathbf{W}\) denotes the learnable weight matrix for classification. The training process of the GCN encoder can be written as:
\[\min_{\theta_{E},\mathbf{W}}\sum_{v_{i}\in\mathcal{V}_{L}}l(\hat{y}_{i},y_{i}) \tag{4}\]
where \(\theta_{E}\) denotes the parameters of GCN encoder, \(l(\cdot)\) is the cross entropy loss, and \(y_{i}\) is the label of node \(v_{i}\). \(\hat{y}_{i}\) is the prediction of \(v_{i}\).
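For concreteness, one propagation step of such a GCN encoder can be sketched in numpy; the symmetric normalization \(D^{-1/2}(\mathbf{A}+\mathbf{I})D^{-1/2}\) and the ReLU are common GCN choices assumed here, since Eq. (3) leaves the encoder architecture unspecified:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)
```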
With the GCN encoder trained in Eq. (4), we can obtain the node representations and conduct clustering to obtain representative nodes for each class. Here, to guarantee the diversity of the obtained representative nodes, we separately apply K-Means to cluster \(\{\mathbf{h}_{i}:\hat{y}_{i}=l\}\) for each class \(l\) other than the target class \(y_{t}\), where \(\mathbf{h}_{i}\) denotes the representation of node \(v_{i}\in\mathcal{V}\backslash\mathcal{V}_{L}\). Nodes nearer to the centroid of each cluster are more representative. However, the node nearest to the centroid may have a high degree. Injecting a malicious label into a high-degree node may lead to a significant decrease in prediction performance, as the negative effect will be propagated to its neighbors, which may make the attack noticeable. Hence, we propose a metric that balances representativeness against the negative effect on prediction performance. Let \(\mathbf{h}_{c}^{k}\) denote the center of the \(k\)-th cluster. Then for a node \(v_{i}^{k}\) belonging to the \(k\)-th cluster, the metric score can be computed by:
\[m(v_{i})=||\mathbf{h}_{i}^{k}-\mathbf{h}_{c}^{k}||_{2}+\lambda\cdot deg(v_{i} ^{k}) \tag{5}\]
where \(\lambda\) controls the contribution of the degree in node selection. After computing each node's score, we select the nodes with the top-\(n\) highest scores in each cluster to satisfy the budget, where \(n=\frac{\Delta_{P}}{(C-1)K}\).
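The selection step can be sketched as follows, taking cluster assignments as given (e.g., from the per-class K-Means described above); the helper name and argument layout are assumptions of this sketch, and the top-score rule follows the text:

```python
import numpy as np

def select_poisoned(H, deg, cluster_ids, lam, n_per_cluster):
    """Score nodes by distance to their cluster centroid plus lam * degree
    (Eq. (5)) and keep the top-n scoring nodes of each cluster."""
    selected = []
    for k in np.unique(cluster_ids):
        idx = np.where(cluster_ids == k)[0]
        centroid = H[idx].mean(axis=0)  # cluster center from member reps
        scores = np.linalg.norm(H[idx] - centroid, axis=1) + lam * deg[idx]
        top = idx[np.argsort(scores)[::-1][:n_per_cluster]]
        selected.extend(top.tolist())
    return selected
```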
### Adaptive Trigger Generator
Once the poisoned node set \(\mathcal{V}_{P}\) is determined, the next step is to generate adaptive triggers with \(f_{g}\) to poison the dataset. To guarantee the unnoticeability of the generated triggers, we propose a differentiable unnoticeable loss. We apply a bi-level optimization between the adaptive generator \(f_{g}\) and the surrogate model \(f_{s}\) to ensure a high success rate on various test samples. Next, we give the details of the trigger generator \(f_{g}\), the differentiable unnoticeable loss, and the bi-level optimization with \(f_{s}\).
**Design of Adaptive Trigger Generator.** To generate adaptive triggers that are similar to the attached nodes, the adaptive trigger generator \(f_{g}\) takes the node features of the target node as input. Specifically, we adopt an MLP to simultaneously generate node features and structure of the trigger for node \(v_{i}\) by:
\[\mathbf{h}_{i}^{m}=\text{MLP}(\mathbf{x}_{i}),\quad\mathbf{X}_{i}^{g}=\mathbf{ W}_{f}\cdot\mathbf{h}_{i}^{m},\quad\mathbf{A}_{i}^{g}=\mathbf{W}_{a}\cdot \mathbf{h}_{i}^{m}, \tag{6}\]
where \(\mathbf{x}_{i}\) denotes the node features of \(v_{i}\). \(\mathbf{W}_{f}\) and \(\mathbf{W}_{a}\) are learnable parameters for feature and structure generation, respectively. \(\mathbf{X}_{i}^{g}\in\mathbb{R}^{s\times d}\) contains the synthetic features of the trigger nodes, where \(s\) and \(d\) represent the size of the generated trigger and the dimension of the features, respectively. \(\mathbf{A}_{i}^{g}\in\mathbb{R}^{s\times s}\) is the adjacency matrix of the generated trigger. As real-world graphs are generally discrete, following binary neural networks (He et al., 2017), we binarize the continuous adjacency matrix \(\mathbf{A}_{i}^{g}\) in the forward computation, while the continuous value is used in backward propagation. With the generated trigger \(g_{i}=(\mathbf{X}_{i}^{g},\mathbf{A}_{i}^{g})\), we link it to node \(v_{i}\in\mathcal{V}_{P}\) and assign the target class label \(y_{t}\) to build the backdoored dataset. In the inference phase, the trigger generated by \(f_{g}\) is attached to the test node \(v_{i}\in\mathcal{V}_{T}\) to lead the backdoored GNN to predict it as the target class \(y_{t}\).
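A minimal forward-pass sketch of Eq. (6), assuming a one-layer ReLU MLP and a hard zero-threshold binarization of the symmetrized \(\mathbf{A}_i^g\); in training, the straight-through trick mentioned above would pass gradients through this binarization:

```python
import numpy as np

def generate_trigger(x_i, W_mlp, W_f, W_a, s, d):
    """Generate trigger features X_g (s x d) and structure A_g (s x s)
    from the target node's features x_i, following Eq. (6)."""
    h = np.maximum(W_mlp @ x_i, 0.0)          # h_i^m = MLP(x_i), ReLU
    Xg = (W_f @ h).reshape(s, d)              # X_i^g = W_f . h_i^m
    Ag = (W_a @ h).reshape(s, s)              # A_i^g = W_a . h_i^m
    Ag = ((Ag + Ag.T) / 2 > 0).astype(int)    # symmetrize, then binarize
    np.fill_diagonal(Ag, 0)                   # no self-loops
    return Xg, Ag
```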
**Differentiable Unnoticeable Loss.** The adaptive trigger generator \(f_{g}\) aims to produce triggers that meet Eq.(1) for unnoticeable trigger injection. The key idea is to ensure the poisoned node or test node \(v_{i}\) is connected to a trigger node with high cosine similarity to avoid trigger elimination. Within the generated trigger \(g_{i}\), connected trigger nodes should also exhibit high similarity. Thus, we design a differentiable unnoticeable loss to help optimize the adaptive trigger generator \(f_{g}\). Let \(\mathcal{E}_{B}^{i}\) denote the edge set that contains the edges inside trigger \(g_{i}\) and the edge attaching trigger \(g_{i}\) to node \(v_{i}\); the unnoticeable loss can be written as:
\[\min_{\theta_{g}}\mathcal{L}_{c}=\sum_{v_{i}\in\mathcal{V}}\sum_{(v_{j},v_{k}) \in\mathcal{E}_{B}^{i}}\max(0,T-sim(v_{j},v_{k})), \tag{7}\]
where \(T\) denotes the similarity threshold, and \(\theta_{g}\) represents the parameters of \(f_{g}\). The unnoticeable loss is applied over all nodes \(\mathcal{V}\) to ensure that the generated triggers meet the unnoticeable constraint for various kinds of nodes.
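Eq. (7) is a hinge penalty on edge-wise cosine similarity and can be computed directly; this numpy sketch (function name illustrative) evaluates the inner sum for one node's edge set \(\mathcal{E}_B^i\):

```python
import numpy as np

def unnoticeable_loss(edge_pairs, feats, T):
    """Sum of max(0, T - cos_sim(v_j, v_k)) over the given edges (Eq. (7))."""
    loss = 0.0
    for j, k in edge_pairs:
        u, v = feats[j], feats[k]
        cs = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        loss += max(0.0, T - cs)
    return loss
```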
**Bi-level Optimization.** To guarantee the effectiveness of the generated triggers, we optimize the adaptive trigger generator to successfully attack the surrogate GCN model \(f_{s}\) with a bi-level optimization. Specifically, the surrogate GCN \(f_{s}\) will be trained on the backdoored dataset, which can be formulated as:
\[\min_{\theta_{s}}\mathcal{L}_{s}(\theta_{s},\theta_{g})=\sum_{v_{i}\in\mathcal{V}_{L}}l(f_{s}(\mathcal{G}_{C}^{i}),y_{i})+\sum_{v_{i}\in\mathcal{V}_{P}}l(f_{s}(a(\mathcal{G}_{C}^{i},g_{i})),y_{t}), \tag{8}\]
where \(\theta_{s}\) represents the parameters of the surrogate GCN \(f_{s}\), \(\mathcal{G}_{C}^{i}\) indicates the clean computation graph of node \(v_{i}\), and \(a(\cdot)\) denotes
Figure 2. An overview of proposed UGBA.
the attachment operation. \(y_{i}\) is the label of labeled node \(v_{i}\in\mathcal{V}_{L}\) and \(y_{t}\) is the target class label. The adaptive triggers will be optimized to effectively mislead the surrogate model \(f_{s}\) into predicting various nodes from \(\mathcal{V}\) as \(y_{t}\) once they are injected with adaptive triggers, which can be written as:
\[\mathcal{L}_{g}(\theta_{s},\theta_{g})=\sum_{v_{i}\in\mathcal{V}}l(f_{s}(a(\mathcal{G}_{C}^{i},g_{i})),y_{t}). \tag{9}\]
Combining the unnoticeable loss Eq.(7), the following bi-level optimization problem can be formulated:
\[\begin{split}\min_{\theta_{g}}&\mathcal{L}_{g}( \theta_{s}^{*}(\theta_{g}),\theta_{g})+\beta\mathcal{L}_{c}(\theta_{g})\\ s.t.&\theta_{s}^{*}=\arg\min_{\theta_{s}}\mathcal{L}_ {s}(\theta_{s},\theta_{g}),\end{split} \tag{10}\]
where \(\beta\) is used to control the contribution of unnoticeable loss.
### Optimization Algorithm
We propose an alternating optimization schema to solve the bi-level optimization problem of Eq.(10) with a small computation cost.
**Updating Lower Level Surrogate Model.** Computing \(\theta_{s}^{*}\) for each outer iteration is expensive. We instead update the surrogate model parameters \(\theta_{s}\) with \(N\) inner iterations, keeping \(\theta_{g}\) fixed, to approximate \(\theta_{s}^{*}\), as (Zhou et al., 2017) does:
\[\theta_{s}^{t+1}=\theta_{s}^{t}-\alpha_{s}\nabla_{\theta_{s}}\mathcal{L}_{s} (\theta_{s},\theta_{g}) \tag{11}\]
where \(\theta_{s}^{t}\) denotes model parameters after \(t\) iterations. \(\alpha_{s}\) is the learning rate for training the surrogate model.
**Updating Upper Level Trigger Generator.** In the outer iteration, the updated surrogate model parameters \(\theta_{s}^{T}\) are used to approximate \(\theta_{s}^{*}\). Moreover, we apply a first-order approximation (Kang et al., 2017) in computing the gradients of \(\theta_{g}\) to further reduce the computation cost:
\[\begin{split}\theta_{g}^{k+1}=\theta_{g}^{k}-\alpha_{g}\nabla_{ \theta_{g}}(\mathcal{L}_{g}(\tilde{\theta}_{s},\theta_{g}^{k})+\beta\mathcal{ L}_{c}(\theta_{g}^{k})),\end{split} \tag{12}\]
where \(\tilde{\theta}_{s}\) indicates that gradient propagation through the surrogate parameters is stopped, and \(\alpha_{g}\) is the learning rate for training the adaptive generator. See Algorithm 1 for more details; the time complexity analysis can be found in Appendix F.
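To make the alternating schema concrete, the sketch below runs the updates of Eq. (11)-(12) on a scalar toy problem (the quadratic stand-in objectives are our own illustration, not the paper's losses): the lower level drives \(\theta_{s}\) toward its minimizer with \(\theta_{g}\) fixed, and the upper level then updates \(\theta_{g}\) with \(\theta_{s}\) held constant, mimicking the first-order approximation.

```python
def alternating_bilevel(n_outer=100, n_inner=20,
                        alpha_s=0.1, alpha_g=0.1, beta=0.1, T=0.5):
    """Toy scalar instance of the alternating optimization of Eq. (10)-(12).

    Lower level:  L_s(ts, tg) = (ts - tg)**2      -> minimizer ts* = tg
    Upper level:  L_g(ts, tg) = (ts * tg - 1)**2
    Constraint :  L_c(tg)     = max(0, T - tg)     (hinge, as in Eq. (7))
    """
    ts, tg = 0.5, 0.5
    for _ in range(n_outer):
        # Eq. (11): N inner gradient steps on the surrogate, theta_g fixed
        for _ in range(n_inner):
            ts -= alpha_s * 2.0 * (ts - tg)
        # Eq. (12): first-order generator update; ts is treated as a
        # constant, i.e., gradient propagation through it is stopped
        grad_g = 2.0 * (ts * tg - 1.0) * ts
        grad_c = -1.0 if tg < T else 0.0
        tg -= alpha_g * (grad_g + beta * grad_c)
    return ts, tg
```

For this toy objective the iterates converge to the joint fixed point \(\theta_{s}=\theta_{g}=1\); the same loop structure carries over when the scalars are replaced by the surrogate GCN and the MLP trigger generator.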
## 6. Experiments
In this section, we evaluate the proposed method on various large-scale datasets to answer the following research questions:
* **RQ1**: Can our proposed method conduct effective backdoor attacks on GNNs and simultaneously ensure unnoticeability?
* **RQ2**: How does the number of poisoned nodes affect the performance of backdoor attacks?
* **RQ3**: How do the adaptive constraint and the poisoned node selection module affect the attack performance?
### Experimental Settings
#### 6.1.1. Datasets
To demonstrate the effectiveness of our UGBA, we conduct experiments on four public real-world datasets, i.e., Cora, Pubmed (Zhou et al., 2017), Flickr (Yi et al., 2017), and OGB-arxiv (Fischer et al., 2017), that are widely used for inductive semi-supervised node classification. Cora and Pubmed are small citation networks. Flickr is a large-scale graph that links image captions sharing the same properties. OGB-arxiv is a large-scale citation network. The statistics of the datasets are summarized in Tab. 3.
#### 6.1.2. Compared Methods
We compare UGBA with representative and state-of-the-art graph backdoor attack methods, including **GTA**(Zhou et al., 2017), **SBA-Samp**(Zhou et al., 2017) and its variant **SBA-Gen**. We also compare **GBAST**(Zhou et al., 2017) on Pubmed, which is shown in the Appendix C.
As UGBA conducts attacks by injecting triggers into target nodes, we also compare UGBA with two state-of-the-art graph injection evasion attacks designed for large-scale attacks, i.e., **TDGIA**(Zhou et al., 2017) and **AGIA**(Zhou et al., 2017). More details of these compared methods can be found in Appendix D. For a fair comparison, the hyperparameters of all attack methods are tuned based on the performance on the validation set.
**Competing with Defense Methods.** We applied the backdoor defense strategies introduced in Sec. 3.3.2 (i.e., Prune and Prune+LD) to help evaluate the unnoticeability of backdoor attacks. Moreover, two representative robust GNNs, i.e., **RobustGCN**(Zhou et al., 2017) and **GNNGuard**(Zhou et al., 2017), are also selected to verify that UGBA can also effectively attack general robust GNNs.
#### 6.1.3. Evaluation Protocol
In this paper, we conduct experiments on the inductive node classification task, _where the attackers cannot access test nodes when they poison the graph_. Hence, we randomly mask out 20% of the nodes from the original dataset. Half of the masked nodes are used as target nodes for attack performance evaluation; the other half is used as clean test nodes to evaluate the prediction accuracy of backdoored models on normal samples. The graph containing the remaining 80% of nodes is used as the training graph \(\mathcal{G}\), where the labeled node set and the validation set each contain 10% of the nodes. The average success rate (ASR) on the target node set and the clean accuracy on clean test nodes are used to evaluate the backdoor attacks. A two-layer GCN is used as the surrogate model for all attack methods. To demonstrate the transferability of the backdoor attacks, we attack target GNNs with different architectures, i.e., **GCN**, **GraphSage**, and **GAT**. Experiments on each target GNN architecture are conducted 5 times. We report the average ASR and clean accuracy over the total 15 runs (Tab. 4, Fig. 4, and Fig. 3). For all experiments, class 0 is the target class. The attack budget \(\Delta_{P}\) on the size of the poisoned node set \(\mathcal{V}_{P}\) is set to 10, 40, 80 and 160 for Cora, Pubmed, Flickr and OGB-arxiv, respectively. The number of nodes in each trigger is limited to 3 for all experiments. For experiments varying the budget on the trigger size, please refer to Appendix E.
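The two evaluation metrics reduce to simple averages; a minimal sketch of our own (`attack_metrics` is a hypothetical helper, not code from the paper):

```python
import numpy as np

def attack_metrics(pred_target, pred_clean, y_clean, target_class=0):
    """ASR: fraction of trigger-attached target nodes predicted as the
    target class.  Clean accuracy: accuracy on untouched clean test nodes."""
    asr = float(np.mean(np.asarray(pred_target) == target_class))
    clean_acc = float(np.mean(np.asarray(pred_clean) == np.asarray(y_clean)))
    return asr, clean_acc
```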
Our UGBA deploys a 2-layer GCN as the surrogate model. A 2-layer MLP is used as the adaptive trigger. More details of the hyperparameter setting can be found in Appendix B.
### Attack Results
To answer **RQ1**, we compare UGBA with baselines on four real-world graphs under various defense settings in terms of attack performance and unnoticeability.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Datasets & \#Nodes & \#Edges & \#Feature & \#Classes \\ \hline Cora & 2,708 & 5,429 & 1,433 & 7 \\ Pubmed & 19,717 & 44,338 & 500 & 3 \\ Flickr & 89,250 & 899,756 & 500 & 7 \\ OGB-arxiv & 169,343 & 1,166,243 & 128 & 40 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Dataset Statistics
#### 6.2.1. Comparisons with baseline backdoor attacks
We conduct experiments on four real-world graphs under three backdoor defense strategy settings (i.e., No defense, Prune and Prune+LD). As described by the evaluation protocol in Sec. 6.1.3, we report the average results in backdooring three target GNN architectures in Tab. 4. The details of the backdoor attack results are presented in Tab. 8-10 in Appendix. From the table, we can make the following observations:
* When no backdoor defense strategy is applied, our UGBA outperforms the baseline methods, especially on large-scale datasets. This indicates the effectiveness of the poisoned node selection algorithm in fully utilizing the attack budget.
* All the baselines give poor performance when the trigger detection based defense methods, i.e., Prune and Prune+LD, are adopted. By contrast, our UGBA can achieve over 90% ASR with the defense strategies and maintain high clean accuracy. This demonstrates that our UGBA can generate effective and unnoticeable triggers for backdoor attacks.
* As the ASRs are average results of backdooring three different GNN architectures, the high ASR scores of UGBA prove its transferability in backdooring various types of GNN models.
#### 6.2.2. Comparisons with baseline node injection attacks
We also compare UGBA with two state-of-the-art node injection evasion attacks. Experiments are conducted on Flickr and OGB-arxiv. Three defense models (GCN-Prune, RobustGCN and GNNGuard) are selected to defend against the compared attacks. The ASR over 5 runs is reported in Tab. 5. From this table, we observe:
* UGBA can effectively attack the robust GNNs, which shows that UGBA can also bypass the general defense methods with the unnoticeable constraint.
* Compared with node injection attacks, UGBA only requires a very small additional cost of injecting triggers and labels (e.g., 160 poisoned nodes out of 169K nodes in OGB-arxiv), yet it outperforms node injection attacks by 30%. This implies the superiority of UGBA in attacking large amounts of target nodes.
### Impacts of the Sizes of Poisoned Nodes
To answer **RQ2**, we conduct experiments to explore the attack performance of UGBA given different budgets on the size of the poisoned node set. Specifically, we vary the number of poisoned samples over \(\{80,160,240,320,400,480\}\). The other settings are the same as in the evaluation protocol in Sec. 6.1.3. Hyperparameters are selected with the same process as described in Appendix B. Fig. 3 shows the results on OGB-arxiv. We have similar observations on other datasets. From Fig. 3, we can observe that:
* The attack success rate of all compared methods in all settings increases as the number of poisoned samples grows, which meets our expectation. Our method consistently outperforms the baselines as the number of poisoned samples increases, which shows the effectiveness of the proposed framework. In particular, the gaps between our method and the baselines become larger when the budget is smaller, which demonstrates the effectiveness of the poisoned node selection in efficiently utilizing the attack budget.
* When the Prune+LD defense is applied to the backdoor attacks, our method still achieves promising performance, while all the baselines obtain nearly 0% ASR in all settings, which is as expected. That is because our method can generate trigger nodes similar to the attached nodes due to the unnoticeable constraint, which helps bypass the pruning defense.
\begin{table}
\begin{tabular}{l l c c c c|c c|c} \hline \hline Datasets & Defense & Clean Graph & SBA-Samp & SBA-Gen & \multicolumn{2}{c}{GTA} & \multicolumn{2}{c}{Ours} \\ \hline \multirow{3}{*}{Cora} & None & 83.09 & 34.94 & 84.09 & 42.54 & 84.81 & 90.25 & 82.88 & **96.95** & **83.90** \\ & Prune & 79.68 & 16.70 & 82.98 & 19.56 & 83.19 & 17.63 & 83.06 & **98.89** & **82.66** \\ & Prune+LD & 79.68 & 15.87 & 79.63 & 17.49 & 80.61 & 18.35 & 80.17 & **95.30** & **79.90** \\ \hline \multirow{3}{*}{Pubmed} & None & 84.86 & 30.43 & 84.93 & 31.96 & 84.93 & 86.64 & 85.07 & **92.27** & **85.06** \\ & Prune & 85.09 & 22.10 & 84.90 & 22.13 & 84.86 & 28.10 & 85.05 & **92.87** & **85.09** \\ & Prune+LD & 85.12 & 21.56 & 84.63 & 22.06 & 83.71 & 22.00 & 83.76 & **93.06** & **83.75** \\ \hline \multirow{3}{*}{Flickr} & None & 46.40 & 0.00 & 47.36 & 0.00 & 47.07 & 88.64 & 45.67 & **97.43** & **46.09** \\ & Prune & 43.02 & 0.00 & 44.01 & 0.00 & 43.78 & 0.00 & 42.71 & **90.34** & **42.99** \\ & Prune+LD & 43.02 & 0.00 & 45.03 & 0.00 & 45.32 & 0.00 & 44.99 & **96.81** & **42.14** \\ \hline \multirow{3}{*}{OGB-arxiv} & None & 65.50 & 0.65 & 65.53 & 11.26 & 65.43 & 75.01 & 65.54 & **96.59** & **64.10** \\ & Prune & 62.16 & 0.03 & 63.88 & 0.01 & 64.10 & 0.01 & 63.97 & **93.07** & **62.58** \\ \cline{1-1} & Prune+LD & 62.16 & 0.16 & 64.15 & 0.02 & 63.89 & 0.03 & 64.30 & **90.95** & **63.19** \\ \hline \hline \end{tabular}
\end{table}
Table 4. Backdoor attack results (ASR (%) | Clean Accuracy (%)). Only clean accuracy is reported for clean graphs.
Figure 3. Impacts of sizes of poisoned nodes on OGB-arxiv.
\begin{table}
\begin{tabular}{l l c c} \hline \hline Datasets & Defense & TDGIA & AGIA & Ours \\ \hline \multirow{3}{*}{Flickr} & GCN-Prune & 77.01 & 77.22 & **99.91** \\ & RobustGCN & 78.61 & 78.61 & **99.23** \\ & GNNGuard & 55.68 & 56.01 & **99.91** \\ \hline \multirow{3}{*}{OGB-arxiv} & GCN-Prune & 66.17 & 66.33 & **94.05** \\ & RobustGCN & 73.87 & 74.00 & **95.39** \\ & GNNGuard & 42.27 & 42.58 & **96.88** \\ \hline \hline \end{tabular}
\end{table}
Table 5. Comparisons of ASR (%) with node inject attacks.
### Ablation Studies
To answer **RQ3**, we conduct ablation studies to explore the effects of the unnoticeable constraint and the poisoned node selection module. To demonstrate the effectiveness of the unnoticeable constraint module, we set \(\beta\) to 0 when training the trigger generator and obtain a variant named UGBA\(\backslash\)C. To show the benefits brought by our poisoned node selection module, we train a variant UGBA\(\backslash\)S, which randomly selects poisoned nodes to attach triggers to and assigns target nodes. We also implement a variant of our model by removing both the unnoticeable constraint and poisoned node selection, which is named UGBA\(\backslash\)CS. The average results and standard deviations on Pubmed and OGB-arxiv are shown in Fig. 4. All evaluation settings follow the description in Sec. 6.1.3, and the hyperparameters of the variants are also tuned on the validation set for a fair comparison. From Fig. 4, we observe that:
* Compared with UGBA\(\backslash\)S, UGBA achieves better attack results under various defense settings. The variance of the ASR of UGBA is significantly lower than that of UGBA\(\backslash\)S. This is because our poisoned node selection algorithm consistently selects diverse and representative nodes that are useful for backdoor attacks.
* When the backdoor defense strategy Prune+LD is applied, UGBA outperforms UGBA\(\backslash\)C and UGBA\(\backslash\)CS by a large margin. This implies that the proposed unnoticeable loss manages to guide the trigger generator to produce unnoticeable triggers for various test nodes, which can effectively bypass the pruning defenses.
### Similarity Analysis
In this section, we conduct a case study to further explore the similarity of the trigger nodes. We conduct backdoor attacks using both GTA and our method on OGB-arxiv and then calculate the edge similarities of trigger edges (i.e., the edges associated with trigger nodes) and clean edges (i.e., the edges not connected to trigger nodes). The histograms of the edge similarity scores are plotted in Fig. 5. From the figure, we observe that the trigger edges generated by GTA have low similarities, which implies a high risk of trigger elimination under our proposed backdoor defense strategies. In contrast, the edges created by our method present cosine similarity scores that disguise them well as clean edges, which verifies the unnoticeability of our method.
### Parameter Sensitivity Analysis
In this subsection, we further investigate how the hyperparameters \(\beta\) and \(T\) affect the performance of UGBA, where \(\beta\) controls the weight of the unnoticeable loss in training the trigger generator and \(T\) is the similarity threshold used in the unnoticeable loss. To explore the effects of \(\beta\) and \(T\), we vary the values of \(\beta\) as \(\{0,50,100,150,200\}\), and \(T\) is varied over \(\{0,0.2,0.4,0.6,0.8,1\}\) and \(\{0.6,0.7,0.8,0.9,1\}\) for Pubmed and OGB-arxiv, respectively. Since \(\beta\) and \(T\) only affect the unnoticeability of triggers, we report the attack success rate (ASR) against the Prune+LD defense strategy in Fig. 6. The test model is fixed as GCN. We observe that (i) in Pubmed, the similarity threshold \(T\) needs to be larger than 0.2, while \(T\) is required to be higher than 0.8 in OGB-arxiv. This is because edges in OGB-arxiv show higher similarity scores compared with Pubmed; hence, to avoid being detected, a higher similarity threshold \(T\) is necessary. In practice, \(T\) can be set according to the average edge similarity scores of the dataset. (ii) When \(T\) is set to a proper value, high ASR can generally be achieved when \(\beta\leq 1\), which eases the hyperparameter tuning.
## 7. Conclusion and Future Work
In this paper, we empirically verify that existing backdoor attacks require large attack budgets and can easily be defended against with edge pruning strategies. To address these problems, we study a novel problem of conducting unnoticeable graph backdoor attacks with limited attack budgets. Specifically, a novel poisoned node selection algorithm is adopted to select representative and diverse nodes as poisoned nodes to fully utilize the attack budget, and an adaptive generator is optimized with an unnoticeable constraint loss to ensure the unnoticeability of the generated triggers. The effectiveness of the generated triggers is further guaranteed by bi-level optimization with the surrogate GCN model. Extensive experiments on large-scale datasets demonstrate that our proposed method can effectively backdoor various target GNN models even when defense strategies are adopted. There are two directions that need further investigation. First, in this paper we only focus on node classification; we will extend the proposed attack to other tasks such as recommendation and graph classification. Second, it is also interesting to investigate how to defend against unnoticeable graph backdoor attacks.
Figure 4. Ablation studies on Pubmed and OGB-arxiv.
Figure 5. Edge similarity distributions on OGB-arxiv.
Figure 6. Hyperparameter Sensitivity Analysis
## Acknowledgments
This material is based upon work supported by, or in part by, the National Science Foundation (NSF) under grant numbers IIS-1707548 and IIS-1909702, the Army Research Office (ARO) under grant number W911NF21-1-0198, and the Department of Homeland Security (DHS) CINA under grant number E205949D. The findings in this paper do not necessarily reflect the view of the funding agencies.
2303.14108 | Neural Network Quantum States analysis of the Shastry-Sutherland model | Matěj Mezera, Jana Menšíková, Pavel Baláž, Martin Žonda | 2023-03-24T16:18:16Z | http://arxiv.org/abs/2303.14108v3

**Neural Network Quantum States analysis of the Shastry-Sutherland model**
## Abstract
**We utilize neural-network quantum states (NQSs) to investigate the ground-state properties of the Heisenberg model on a Shastry-Sutherland lattice via the variational Monte Carlo method. We show that already relatively simple NQSs can be used to approximate the ground state of this model in its different phases and regimes. We first compare several types of NQSs with each other on small lattices and benchmark their variational energies against the exact diagonalization results. We argue that when precision, generality and computational costs are taken into account, a good choice for addressing larger systems is a shallow restricted Boltzmann machine NQS. We then show that such an NQS can describe the main phases of the model in zero magnetic field. Moreover, an NQS based on a restricted Boltzmann machine correctly describes the intriguing plateaus forming in the magnetization of the model as a function of increasing magnetic field.**
###### Contents
* 1 Introduction
* 2 Shastry-Sutherland model
* 2.1 Basic properties of the ground state
* 3 Methods
* 3.1 Variational Monte Carlo and machine learning
* 3.2 Neural network quantum states
* 4 Results
* 4.1 Comparison of different NQSs architectures
* 4.2 Investigation of the ground-state phase diagrams
* 4.2.1 Ground-state orderings
* 4.2.2 Zero magnetic field
* 4.2.3 Magnetization plateaus
* 5 Conclusion
* A Lattice tiles
* B Visible biases in sRBM and pRBM
* C Symmetries
## 1 Introduction
The neural network quantum states (NQSs) have recently emerged as a promising alternative to common trial states in variational Monte Carlo (VMC) studies of many-body lattice problems [1, 2, 3, 4, 5, 6, 7, 8, 9]. This research is driven by the fact that neural networks (NNs) are universal function approximators [10], as well as by the astonishing progress in the field of machine learning (ML) in general. These advancements have already led to a number of effective ML applications suitable for basic research on quantum systems and technologies [11, 12, 13, 14]. For example, even simple NQSs, such as the restricted Boltzmann machine (RBM), allow us to investigate the ground-state properties of various quantum spin models. It has already been shown that an RBM can outperform standard trial states in the variational search for the ground-state energies of the antiferromagnetic Heisenberg model [1]. Very promising results have also been obtained for frustrated spin systems such as the \(J_{1}-J_{2}\) model [15, 16, 17, 18]. Here NQSs can be trained to capture the nontrivial sign structure of the ground state and in some cases have even achieved state-of-the-art accuracy [19], delivering cutting-edge results. Nevertheless, two-dimensional frustrated quantum spin models continue to be a challenge for NQSs as well as for other methods [20]. For example, it is not yet clear how to choose an optimal neural network architecture for a particular frustrated system, how important the role of the trial state symmetries in the learning process is, or whether an NQS with favorable variational energy also encodes a physically correct state.
Not all of these issues are specific to NQSs. The results of any VMC calculation are to a large extent dictated by the properties and limitations of the used trial states. An inappropriately chosen variational state, i.e., one with small overlap with the ground state, can still give a good estimate of the ground-state energy [21]. If some additional information is known about the ground state, e.g., its symmetries, one can pick a more restrictive variational state function. However, this is often not an optimal strategy if the goal is to search for new phases or to locate a phase boundary. In principle, NQSs could be a remedy for such problems. It is reasonable to expect that a single, but expressive enough, NQS can be used to approximate distinct phases. This assumption is supported by the results of Sharir et al. [22], who showed that NQSs can have even higher expressive power than matrix product states [23] and projected entangled pair states [24], as these can be efficiently
mapped to a subset of NQSs. In other words, NQSs can be effectively applied to a larger class of quantum states than these powerful formalisms, which are known primarily from their usage in the Density Matrix Renormalization Group (DMRG) but are also utilized as variational states in VMC [1, 21, 25].
In practice, it is not yet clear how to achieve this in the general case. Despite tremendous progress, the research on frustrated quantum spin magnets is still in the stage of testing and developing NQS architectures for simple models, often focusing primarily on reaching the best variational energy in particular regimes [6, 15, 16, 18]. In the present work we aim at a different target. We want to demonstrate that even shallow NQSs can be sufficient for the investigation of qualitatively different ground-state orderings, including states forming only in a finite magnetic field. To this end we focus on the ground state of the antiferromagnetic Heisenberg Hamiltonian on the Shastry-Sutherland lattice, known as the Shastry-Sutherland model (SSM), which we introduce in more detail in Sec. 2. To our knowledge, this model of a frustrated quantum spin system has not been previously addressed within the NQS context, yet it seems to be an ideal testbed for our purposes.
SSM has already been investigated by a number of methods, including exact diagonalization (ED) techniques [26, 27, 28, 29], quantum Monte Carlo [30], various versions of DMRG [31, 32, 33, 34, 35], perturbation theory [36] and even quantum annealing [37]. These studies showed that SSM has a rich ground-state phase diagram. In zero magnetic field its phases include the singlet spin dimer phase, the antiferromagnetic Neel state, the spin plaquette singlet phase and probably others. Introducing a finite magnetic field further complicates the picture. Consequently, it is challenging to find a single variational function that can correctly approximate the whole ground-state phase diagram.
In addition, there are still open questions related to the ground-state phase diagram in zero as well as in finite magnetic field, even in some experimentally relevant regimes of the model. This is important because several magnetic materials have a structure topologically equivalent to SSM. The most notable examples are \(\text{SrCu}_{2}(\text{BO}_{3})_{2}\), \(\text{BaNd}_{2}\text{ZnO}_{5}\) and the rare-earth tetraborides \(\text{RB}_{4}\) (R=Dy, Er, Tm, Tb, Ho) [38, 39, 40, 41, 42]. All exhibit an intriguing step-like dependence of the overall magnetization on the external magnetic field, which has been found to be inherent to SSM [43, 44]. Here each plateau reflects a stable nontrivial spin ordering. The magnetic behavior of these materials is still not completely understood. This, together with other open problems, e.g., the prospect of a narrow spin liquid phase in zero magnetic field, further motivates the investigation of SSM and its generalizations [45, 46, 36, 47].
Therefore, SSM presents a model system which has the right combination of properties that are well understood and can be used to benchmark various NQSs, and of open problems that can be potentially illuminated by these variational techniques. This includes a possibility to address a rather complex behavior of a system in relation to a changing magnetic field.
The present work consists of two main parts. In the first one we explore SSM by employing a number of NQS architectures, and we test them against ED results for small lattices in zero magnetic field. Here the primary goal is to find one or a few networks that are able to capture the main, well understood ground-state orderings of SSM. Simultaneously, we require that these NQSs have a high chance of describing the magnetization plateaus as well. This means that the ideal network has to give a solid approximation of the ground-state orderings even when no conditions on the total magnetization are imposed. Consequently, we do not focus on getting the best possible variational energy for a particular set of parameters. Rather, we require a good approximation of the energy in distinct regimes of the model, a correct description of the particular orderings, and a reasonable computational complexity that allows the usage of the NQS on larger lattices. We argue that when precision, generality and computational costs are taken into account, a shallow RBM
with complex parameters is still a good choice.
In the second part we introduce a refined learning protocol for the RBM NQS and test it for a broad range of model parameters and different network sizes. We then utilize it in the study of larger systems. We first investigate the zero-magnetic-field scenario and demonstrate that the RBM is expressive enough to capture all main phases of the system. We then move to the model in a finite magnetic field and show that, with the right learning strategy, the RBM is able to capture the magnetization plateaus crucial for the description of real materials. This opens the possibility that NQSs could be used to investigate several open problems, such as the existence of the still opaque spin liquid phase and other orderings predicted but not yet confirmed in SSM.
## 2 Shastry-Sutherland model
SSM is described by the Hamiltonian
\[\hat{H}=J\sum_{\langle i,j\rangle}\hat{\mathbf{S}}_{i}\cdot\hat{\mathbf{S}}_{j}+J^{\prime}\sum_{\langle i,j\rangle^{\prime}}\hat{\mathbf{S}}_{i}\cdot\hat{\mathbf{S}}_{j}-h\sum_{i}\hat{S}_{i}^{z}, \tag{1}\]
where \(\hat{\mathbf{S}}_{i}=\frac{1}{2}\hat{\mathbf{\sigma}}_{i}\) is the spin-\(1/2\) operator at the \(i\)-th site, with \(\hat{\mathbf{\sigma}}\) being the vector of Pauli matrices. The first term represents the exchange coupling between nearest neighbors on a square lattice (solid lines in Fig. 1). The second term is a sum over specific diagonal bonds arranged in a checkerboard pattern (dashed lines in Fig. 1). Note that each bond is counted only once in these sums, i.e., there is no double-counting.
Both coupling constants are antiferromagnetic (\(J,J^{\prime}>0\)), and we set \(J^{\prime}\) as the unit of energy throughout the paper. The last term describes the influence of the external magnetic field \(h\) pointing in the \(z\)-direction.
### Basic properties of the ground state
The basic structure of the SSM ground-state phase diagram is well understood. As illustrated in Fig. 2, the SSM at \(h=0\) has at least three distinct ground-state orderings. These are the _dimer singlet_ (DS) state for \(J^{\prime}\gg J\), the _Neel antiferromagnetic_ (AF) ordering for \(J^{\prime}\ll J\), and the _plaquette singlet_ (PS) state in between. The phase transition from the DS to the PS state is of first order [28]. The nature of the transition from PS to AF is still under debate. The ED study of Nakano and Sakai [29] suggests that the supposed PS phase actually consists of at least two distinct phases. In addition, some
Figure 1: The Shastry-Sutherland lattice. Bonds with coupling strength \(J\) are represented by solid lines, while bonds with \(J^{\prime}\) are represented by dashed ones.
recent studies argue that there is a so-called _deconfined quantum critical point_ (DQCP), separating a line of first-order transitions and, potentially, a narrow gapless _spin liquid_ (SL) phase [34, 35, 48].
Nevertheless, even without focusing on the possible DQCP and SL phase, the three main orderings, namely DS, PS, and AF, already pose a sufficient challenge for a single variational state because of their distinctive character and symmetries.
The _DS phase_ is formed by an exactly (analytically) accessible state [49]. It was verified by numerous analytical and numerical methods that it remains the ground state up to \(J/J^{\prime}\approx 0.675\)[28, 29, 34]. In the limiting case of \(J\ll J^{\prime}\), the system is equivalent to an ensemble of independent spin dimers, each forming a singlet ground state. The DS ground state is thus a direct product of dimer singlet states
\[\ket{\psi}_{\text{DS}}=\bigotimes_{\langle i,j\rangle^{\prime}}\frac{1}{\sqrt{2}}\left(\ket{\uparrow\downarrow}_{i,j}-\ket{\downarrow\uparrow}_{i,j}\right)\,. \tag{2}\]
As such, it is antisymmetric with respect to the exchange of two intradimer spins and symmetric with respect to transformations rearranging only the spin pairs without swapping the intradimer spins. The energy of the dimer ground state is
\[E_{\text{DS}}=-\frac{3}{8}J^{\prime}N\,, \tag{3}\]
where \(N\) is the number of lattice sites (twice the number of dimers).
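Equations (1)-(3) can be cross-checked by brute-force diagonalization of a small cluster. The sketch below (our own illustration; the helper `heisenberg_h` and the toy bond list are not from the paper) builds the Hamiltonian from Kronecker products and confirms that for \(J=0\), i.e., for decoupled dimers, the ground-state energy equals \(-\tfrac{3}{8}J^{\prime}N\):

```python
import numpy as np

def heisenberg_h(n_sites, bonds, couplings, h_field=0.0):
    """Dense H = sum_b J_b S_i.S_j - h sum_i S_i^z, built from Kronecker
    products; fine for small clusters (matrix dimension 2**n_sites)."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
    sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
    eye = np.eye(2, dtype=complex)

    def site_op(op, i):  # op acting on site i, identity elsewhere
        out = np.array([[1.0 + 0j]])
        for k in range(n_sites):
            out = np.kron(out, op if k == i else eye)
        return out

    H = np.zeros((2 ** n_sites, 2 ** n_sites), dtype=complex)
    for (i, j), J in zip(bonds, couplings):
        for s in (sx, sy, sz):
            H += J * site_op(s, i) @ site_op(s, j)
    for i in range(n_sites):
        H -= h_field * site_op(sz, i)
    return H

# J = 0 limit of the SSM: two decoupled J' dimers on N = 4 sites,
# each dimer contributing the singlet energy -(3/4) J'
Jp = 1.0
H = heisenberg_h(4, bonds=[(0, 1), (2, 3)], couplings=[Jp, Jp])
e0 = np.linalg.eigvalsh(H).min()   # -1.5 = -(3/8) * J' * N
```

The same routine can be fed the square-lattice and diagonal bond lists of Fig. 1 to obtain small-cluster ED reference energies.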
The _PS phase_ can be understood as weakly coupled plaquette singlet states illustrated in Fig. 2. The plaquette singlet is a ground state of an isolated 4-spin Heisenberg cluster with four bonds arranged in a cycle [28]. The pattern of the plaquette singlets in Fig. 2 indicates that the PS state is two-fold degenerate.
Figure 2: Illustration of the phase diagram of the SSM for small \(h\), based on the results from Ref. [35]. There is a first-order transition at \(J/J^{\prime}\approx 0.675\) between the DS and PS phases. The gray squares in the PS phase depict the plaquette singlets. The nature of the transition between the PS and AF phases remains unresolved. It is not clear whether there is a narrow spin liquid phase, a DQCP, or just a second-order transition in the region labeled with a question mark.
It is important to stress again that the relevant range of \(J/J^{\prime}\) discussed here (\(0.675\lesssim J/J^{\prime}\lesssim 0.82\)) might be much more complex. As we already stated, it has been argued that at \(J/J^{\prime}\approx 0.70\) the PS phase splits into two distinct regions with quantitatively different behaviors [29, 34, 35, 48]. For the sake of simplicity we omit this possibility in most of our discussion. Nevertheless, it might be important for future, more detailed studies.
The _AF phase_ stabilizes when \(J/J^{\prime}\gtrsim 0.82\). When \(J^{\prime}\) becomes negligible, the ground state of SSM is approaching the ground state of the antiferromagnetic Heisenberg model with only nearest neighbor bonds on a square lattice. Although this state is not analytically accessible, it can be explored by Monte Carlo (MC) simulations [26]. Using the first-order correction to these quantum MC results, the energy of the SSM in the AF phase was estimated [26] to be
\[E_{\text{AF}}=(0.102J^{\prime}-0.669J)N \tag{4}\]
where \(N\) is assumed to be large.
A more detailed discussion of the symmetries of these three states is postponed to Appendix C. Note that the three main phases DS, PS and AF are reasonably well understood and at the same time differ qualitatively from one another. This is one of several qualities that make the SSM a suitable testbed for NQSs.
So far we have discussed the \(h=0\) case. When we introduce a finite magnetic field to the DS phase (Eq. (2)), some dimers can morph into triplet states. These triplets form repeating patterns, e.g., checkerboards, stripes, or more complex configurations (for an illustration see Fig. 3), giving rise to stable plateaus of constant magnetization in an increasing magnetic field.
Because each plateau signals a distinct stable ordering, it also presents a challenge for the NQSs, particularly because a finite magnetic field does not allow for a simple restriction of the Hilbert space to its zero-magnetization part. This restriction was heavily utilized in previous NQS investigations of quantum spin models. Note that it is mostly these plateaus that make the SSM interesting experimentally. Good examples are SrCu\({}_{2}\)(BO\({}_{3}\))\({}_{2}\), BaNd\({}_{2}\)ZnO\({}_{5}\), CaCo\({}_{2}\)Al\({}_{8}\) and the rare-earth tetraborides RB\({}_{4}\) (R = Dy, Er, Tm, Tb, Ho) [38, 39, 40, 41, 42], which all exhibit the intriguing step-like dependence of the overall magnetization on the external magnetic field or show magnetic frustration, and can be modeled by the SSM or its generalizations.
Figure 3: A simplified illustration of the magnetization as a function of the external magnetic field \(h\) and the coupling constant \(J\), inspired by Ref. [50]. A more detailed illustration would contain additional steps (e.g., a supersolid phase); however, their actual positions and widths are not yet clear. The arrangements of singlets and triplets are displayed for some of the plateaus (namely the \(m^{z}=1,1/2,1/3\) and \(1/4\) plateaus are shown).
## 3 Methods
### Variational Monte Carlo and machine learning
VMC is a standard method that allows us to stochastically evaluate the expectation values of quantum operators without the need to probe the full Hilbert space. In short, suppose we have a Hamiltonian operator \(\hat{H}\) and a trial wave function \(|\psi_{\mathbf{\theta}}\rangle\) depending continuously on a vector of parameters \(\mathbf{\theta}\). We are seeking a ground state of \(\hat{H}\) or its approximation in a variational way. The goal is to minimize the variational energy
\[E_{\mathbf{\theta}}=\langle\hat{H}\rangle_{\mathbf{\theta}}:=\frac{\langle\psi_{\mathbf{ \theta}}|\hat{H}|\psi_{\mathbf{\theta}}\rangle}{\langle\psi_{\mathbf{\theta}}|\psi_{ \mathbf{\theta}}\rangle}\geq E_{0} \tag{5}\]
with respect to the vector of parameters \(\mathbf{\theta}\), where \(E_{0}\) is the true ground-state energy. We utilize a fixed orthonormal basis \(\{|\mathbf{\sigma}^{z}\rangle\}\) of the \(z\)-projected \(\frac{1}{2}\)-spins and use the notation
\[|\psi_{\mathbf{\theta}}\rangle=\sum_{\mathbf{\sigma}^{z}}\psi_{\mathbf{\theta}}(\mathbf{ \sigma}^{z})\,|\mathbf{\sigma}^{z}\rangle\,,\text{ where }\,\langle\mathbf{\sigma}^{z}|\psi_{\mathbf{ \theta}}\rangle\equiv\psi_{\mathbf{\theta}}(\mathbf{\sigma}^{z}) \tag{6}\]
as typical in NQS studies [51]. The variational energy in Eq. (5) is, in the jargon of ML, a _loss function_. Using this loss function the parameters \(\mathbf{\theta}\) are optimized to obtain the lowest-energy state that the chosen variational function can represent. In practice, we use the VMC implementation from the NQS toolbox NetKet [9, 51].
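The stochastic evaluation behind Eq. (5) rests on the local energy \(E_{\text{loc}}(\mathbf{\sigma})=\sum_{\mathbf{\sigma}^{\prime}}H_{\mathbf{\sigma}\mathbf{\sigma}^{\prime}}\,\psi_{\mathbf{\theta}}(\mathbf{\sigma}^{\prime})/\psi_{\mathbf{\theta}}(\mathbf{\sigma})\), averaged over configurations drawn from \(|\psi_{\mathbf{\theta}}|^{2}\). The following toy sketch (our own illustration, not NetKet code; for simplicity it samples a tiny basis exactly instead of running a Metropolis chain) demonstrates the estimator on a single Heisenberg dimer:

```python
import numpy as np

rng = np.random.default_rng(0)

def vmc_energy(H, psi, n_samples=20000):
    """Stochastic estimate of Eq. (5): <H> = E_{s ~ |psi|^2}[E_loc(s)],
    with the local energy E_loc(s) = sum_s' H[s, s'] psi(s') / psi(s)."""
    p = np.abs(psi) ** 2
    p /= p.sum()
    # Toy shortcut: the basis is tiny, so we sample it exactly instead of
    # running a Metropolis random walk as in a real VMC code.
    idx = rng.choice(len(psi), size=n_samples, p=p)
    e_loc = (H @ psi) / psi                 # vector of local energies
    return float(np.real(np.mean(e_loc[idx])))

# Single Heisenberg dimer in the basis (uu, ud, du, dd) and a trial state
# that is deliberately not an eigenstate.
H = np.diag([0.25, -0.25, -0.25, 0.25]).astype(complex)
H[1, 2] = H[2, 1] = 0.5
psi = np.array([0.1, 1.0, -0.9, 0.1], dtype=complex)

E_est = vmc_energy(H, psi)
E_exact = float(np.real(psi.conj() @ H @ psi) / np.real(psi.conj() @ psi))
# E_est converges to the variational energy E_exact >= E_0 = -3/4.
```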
In general, the form of the trial wave function \(\psi_{\mathbf{\theta}}(\mathbf{\sigma}^{z})\) restricts the optimization process to a subset of the Hilbert space. An improper choice of the ansatz can bias the approximation towards a wrong phase, or can even make reaching the correct state impossible. Clearly, this is where one can expect NQSs to outperform standard variational states due to their high expressiveness.
### Neural network quantum states
In the first part of our work we explore several NQS architectures [6, 9]. We chose these particular networks due to their successful application in previous studies of other Heisenberg models.
_Restricted Boltzmann machine (RBM)_ is a generative artificial NN composed of a visible layer with \(N\) nodes (one for each lattice site) fully connected to a single hidden layer with \(M=\alpha N\) nodes (hidden degrees of freedom), where \(\alpha\) is the _hidden layer density_[1]. It can be used to define a NQS
\[\log\psi_{\mathbf{\theta}}(\mathbf{\sigma}^{z})=\sum_{i}\sigma_{i}^{z}a_{i}+\sum_{j} \log\left[2\cosh(\sum_{i}W_{ij}\sigma_{i}^{z}+b_{j})\right], \tag{7}\]
where the vector \(\mathbf{\theta}\) contains the variational network parameters \(\mathbf{\theta}=\{\mathbf{a},\mathbf{b},\mathbf{W}\}\). This NQS can be interpreted as a one-layer fully-connected neural network with a \(\log\cosh\) activation function, followed by a summation of the outputs and an additional summation of the visible biases [1]. Note that complex-valued parameters are necessary in order to represent generally complex-valued wave-function outputs.
The size of the visible layer \(N\) is fixed by the size of the investigated spin system. The expressive power of RBM can be, however, modified by changing \(\alpha\). According to the universal approximation theorem [52], the RBM is theoretically able to express any wave function to any desired degree of
accuracy when \(\alpha\) can be arbitrarily large. In practice, we aim for a reasonably small \(\alpha\) to restrict the total number of variational parameters of RBM, which is \(MN+M+N=\mathcal{O}(\alpha N^{2})\).
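A direct transcription of Eq. (7) can look as follows (an illustrative sketch; the function name, parameter scales and sizes are our own choices):

```python
import numpy as np

def log_psi_rbm(sigma, a, b, W):
    """Complex RBM log-amplitude, Eq. (7):
    log psi = sum_i sigma_i a_i + sum_j log[2 cosh(sum_i W_ij sigma_i + b_j)]."""
    theta = sigma @ W + b                  # hidden-unit activations
    return sigma @ a + np.sum(np.log(2.0 * np.cosh(theta)))

# Toy network: N = 4 visible spins, alpha = 2, hence M = 8 hidden units
# and N*M + N + M = 44 complex parameters.
rng = np.random.default_rng(1)
N, M = 4, 8
a = rng.normal(scale=0.1, size=N) + 1j * rng.normal(scale=0.1, size=N)
b = rng.normal(scale=0.1, size=M) + 1j * rng.normal(scale=0.1, size=M)
W = rng.normal(scale=0.1, size=(N, M)) + 1j * rng.normal(scale=0.1, size=(N, M))

sigma = np.array([1, -1, 1, -1])           # one basis configuration
amp = np.exp(log_psi_rbm(sigma, a, b, W))  # generally complex amplitude
```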
_Modulus-phase split real-valued RBM (rRBM):_ One can avoid complex parameters, which in general make the learning process harder, by introducing two independent real-valued NNs [17, 53] to represent the modulus \(A(\mathbf{\sigma}^{z})\) and the phase \(\Phi(\mathbf{\sigma}^{z})\) of the wave function separately
\[\log\psi_{\mathbf{\theta}}(\mathbf{\sigma}^{z})=A(\mathbf{\sigma}^{z})+\mathrm{i}\Phi(\bm {\sigma}^{z}). \tag{8}\]
Unlike in Ref. [53], where the rRBM architecture proved advantageous in the investigation of the transverse-field Ising model, we have found that for the SSM the rRBM performs worse than the complex-valued RBM. This is in accord with a recent study of another frustrated system, namely the \(J_{1}-J_{2}\) model [18]. Consequently, we discuss the results of this network only briefly in Section 4.1 and focus predominantly on complex-valued architectures.
_Symmetric variant of RBM (sRBM):_ Carleo and Troyer [1] used translational symmetries to reduce the number of variational parameters in RBM. They replaced the fully-connected layer with a convolutional layer and set the visible biases to the constant value \(a^{f}\) across each convolutional filter \(f\). The resulting expression for its output is
\[\log\psi_{\mathbf{\theta}}(\mathbf{\sigma}^{z})=\sum_{f=1}^{F}\sum_{g\in G}\left\{a^{f}\sum_{i=1}^{N}T_{g}(\mathbf{\sigma}^{z})_{i}+\log\left[2\cosh\left(b^{f}+\sum_{i=1}^{N}W_{i}^{f}\,T_{g}(\mathbf{\sigma}^{z})_{i}\right)\right]\right\}\,, \tag{9}\]
where, for lattice symmetries, \(\sum_{i}T_{g}(\mathbf{\sigma}^{z})_{i}=\sum_{i}\sigma_{i}^{z}=m^{z}\) independently of \(g\).
Here \(\mathbf{T}_{g}\) denotes a symmetry transformation of a spin configuration according to an element \(g\) from the symmetry group \(G\) of order \(|G|\). Index \(f\) denotes different feature filters. The number of these filters \(F\) determines the network size \(M=F|G|\). The resulting sRBM has fewer variational parameters than the RBM by a factor of \(|G|\). We can view this approach as binding the values of some of the \(\mathcal{O}(\alpha N^{2})\) parameters, making the total asymptotic number of parameters \(\mathcal{O}(\alpha N)\). Carleo and Troyer [1] also showed that this approach significantly improves the convergence and accuracy of the ground states of the antiferromagnetic Heisenberg model on a square lattice. However, this approach suffers from two crucial disadvantages in more general circumstances. The first drawback is that the visible biases are inherently constant within each filter \(f\), which significantly lowers the expressiveness of the network, as discussed later in this section. As we show in Appendix B, the sRBM architecture cannot be modified to ease this condition while preserving the symmetries. The second drawback is that the sRBM is not applicable if the ground state does not transform under the trivial irreducible representation (irrep) of the given symmetry group.
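The defining property of Eq. (9), invariance of the output under every symmetry operation in \(G\), can be demonstrated numerically on a ring with the translation group (a sketch with hypothetical names, using a naive loop over the group rather than an optimized convolution):

```python
import numpy as np

def log_psi_srbm(sigma, a_f, b_f, W_f):
    """sRBM on a ring (Eq. (9)): each filter f is shared across the whole
    translation group G, so the output is invariant under any translation."""
    N = len(sigma)
    out = 0.0 + 0.0j
    for f in range(len(a_f)):
        for g in range(N):                        # translations T_g
            ts = np.roll(sigma, g)                # T_g(sigma)
            out += a_f[f] * ts.sum()              # visible-bias term, = a^f m^z
            out += np.log(2 * np.cosh(b_f[f] + W_f[f] @ ts))
    return out

rng = np.random.default_rng(2)
N, F = 6, 2                                       # 6 sites, 2 filters
a_f = rng.normal(scale=0.1, size=F).astype(complex)
b_f = rng.normal(scale=0.1, size=F).astype(complex)
W_f = rng.normal(scale=0.1, size=(F, N)).astype(complex)

sigma = np.array([1, 1, -1, 1, -1, -1])
v1 = log_psi_srbm(sigma, a_f, b_f, W_f)
v2 = log_psi_srbm(np.roll(sigma, 2), a_f, b_f, W_f)   # translated input
```

Both evaluations agree, because summing over all translations of the input covers the same set of configurations in either case.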
To illustrate the problem let us consider a single spin dimer (i.e., a single bond of the SSM with \(J=0\), \(J^{\prime}=1\) and \(h=0\)). Its ground state is a singlet \(|\psi_{0}\rangle=(|\uparrow\downarrow\rangle-|\downarrow\uparrow\rangle)/\sqrt{2}\). The symmetry group of the single-dimer Hamiltonian contains just two operations, an identity and a swap of both spins, \(G=\{g_{12},g_{21}\}\). If we apply the swap operation to the ground state, we obtain \(\hat{\mathbf{T}}_{g_{21}}\ket{\psi_{0}}=(|\downarrow\uparrow\rangle-|\uparrow\downarrow\rangle)/\sqrt{2}=-|\psi_{0}\rangle\). Although this state is a multiple of the ground state, we see that it does not transform under the trivial irrep because one of its characters is \(\chi_{g_{21}}=-1\). Because the sRBM represents only states where \(\hat{\mathbf{T}}_{g}\ket{\psi}=|\psi\rangle\) for all \(g\in G\), this symmetry should not be used in the sRBM. Note that we do not strictly follow this rule and sometimes use all available lattice symmetries. The reason is that this leads to an NQS with a small number of parameters that is easy to optimize. The resulting variational energy can then be compared with the energy obtained with an RBM with the same \(\alpha\) to check how well the full network is optimized, i.e., whether it leads to a lower energy than the sRBM. If not, this signals that the variational energy of the RBM can be lowered by better training.
_Projected RBM (pRBM)_: Recently, Nomura [54] introduced an alternative way to symmetrize the RBM (or any other NN) by using quantum-number projection (also called _incomplete symmetrization operator_)
\[\psi_{\mathbf{\theta}}^{G}(\mathbf{\sigma}^{z})=\sum_{g\in G}\chi_{g^{-1}} \psi_{\theta}(\mathbf{T}_{g}(\mathbf{\sigma}^{z}))\,, \tag{10}\]
where \(g\) is an element of the given symmetry group \(G\) and \(\chi_{g}\) is its character from the irrep in question. The wave function on the right-hand side may be arbitrary, and the function on the left-hand side satisfies the desired transformation property \(\psi_{\mathbf{\theta}}^{G}(\mathbf{T}_{g}(\mathbf{\sigma}^{z}))=\chi_{g}\psi_{\mathbf{\theta}}^{G}(\mathbf{\sigma}^{z})\) in the case of a one-dimensional representation, or \(\psi_{\mathbf{\theta},a}^{G}(\mathbf{T}_{g}(\mathbf{\sigma}^{z}))=\sum_{b=1}^{d}D(g)_{b}^{a}\psi_{\mathbf{\theta},b}^{G}(\mathbf{\sigma}^{z})\) for higher-dimensional irreps, where the functions \(\psi_{\mathbf{\theta},a}^{G}\) form a basis of the \(d\)-dimensional irrep \(D(g)_{b}^{a}\). Unfortunately, the pRBM makes the learning process of the NN much more expensive than the sRBM. The computational time increases by a factor of \(|G|\), producing a computational cost \(\mathcal{O}(\alpha N^{2}|G|)\). On the other hand, the pRBM implementation does not suffer from the problems mentioned for the sRBM, and it can be generalized by allowing mutually independent visible biases (see Appendix B).
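The projection of Eq. (10) can be made concrete for the two-site example discussed above (a sketch with hypothetical names; the two-element group contains the identity and the spin swap, and for this abelian group \(\chi_{g^{-1}}=\chi_{g}\)):

```python
import numpy as np

def project(psi_fn, sigma, group, chars):
    """Quantum-number projection, Eq. (10):
    psi^G(sigma) = sum_g chi_{g^-1} psi(T_g(sigma))."""
    return sum(ch * psi_fn(T(sigma)) for T, ch in zip(group, chars))

# Two-spin "lattice"; the symmetry group is {identity, spin swap}.
group = [lambda s: s, lambda s: s[::-1]]

w = np.array([0.5, -0.3])
psi = lambda s: np.exp(w @ np.asarray(s))   # arbitrary non-symmetric ansatz

s_ud, s_du = (1, -1), (-1, 1)
# chi = (1, -1) (sign irrep): the projected amplitudes are antisymmetric,
# as required for the dimer singlet discussed above.
anti_ud = project(psi, s_ud, group, (1, -1))
anti_du = project(psi, s_du, group, (1, -1))
# chi = (1, 1) (trivial irrep): symmetric combination instead.
sym_ud = project(psi, s_ud, group, (1, 1))
sym_du = project(psi, s_du, group, (1, 1))
```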
_Group-convolutional NN (GCNN)_: The group equivariant convolutional NNs represent a promising class of NNs built inherently on symmetries. They were proposed by Cohen and Welling [55] as a natural extension of the well-known convolutional neural networks. While convolutional networks are equivariant only under translations, GCNNs are equivariant under the action of an arbitrary group \(G\) (which may contain a subgroup of translations). Roth and MacDonald [56] further improved GCNNs so that they can transform under an arbitrary irreducible representation of \(G\), which is better suited to NQSs for the SSM. A GCNN can be composed of any number of hidden layers. The first and subsequent layers are given by
\[\mathbf{f}_{g}^{1}=\mathbf{f}\left(\sum_{i=1}^{N}W_{g^{-1}i}^{0}\sigma_{ i}^{z}+\mathbf{b}^{0}\right),\ \ \mathbf{f}_{g}^{k+1}=\mathbf{f}\left(\sum_{h\in G}W_{g^{-1}h}^{k}\mathbf{f}_{h}^{k}+\mathbf{b}^ {k}\right), \tag{11}\]
where \(\mathbf{f}\) is a nonlinear activation function (the output is typically a vector since a GCNN can have multiple parallel feature filters) and \(\mathbf{f}_{g}^{1}\) is a first-layer feature vector corresponding to group element \(g\). The result of the last layer, \(\mathbf{f}_{g}^{K}=\mathbf{f}_{g}^{(j)K}\), where \((j)\) denotes the individual features of the layer, is then projected in a similar fashion as in the pRBM
\[\psi(\mathbf{\sigma}^{z})=\sum_{g\in G}\sum_{j}\chi_{g^{-1}}\exp\Bigl{(} \mathbf{f}_{g}^{(j)K}\Bigr{)}\,. \tag{12}\]
The main advantage over symmetrizing an arbitrary deep network by formula (10) is that we do not need to evaluate the forward pass of the non-symmetric wavefunction \(|G|\) times. This is achieved thanks to the fact that each layer of the GCNN fulfills _equivariance_. GCNN with \(K\) layers and a typical number of feature filters \(F\) in each layer has \(\mathcal{O}(FN+KF^{2}|G|)\) parameters.
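The equivariance that makes this possible can be checked numerically for a single first layer of Eq. (11) with a cyclic translation group (an illustrative sketch; names, sizes and the \(\tanh\) activation are our own choices):

```python
import numpy as np

def gcnn_layer1(sigma, W, b):
    """First GCNN layer (Eq. (11)) for the cyclic translation group on a ring:
    f_g = tanh(sum_i W_{g^{-1} i} sigma_i + b), one output per group element g.
    For translations, W_{g^{-1} i} corresponds to W[(i - g) mod N]."""
    N = len(sigma)
    return np.array([np.tanh(sum(W[(i - g) % N] * sigma[i] for i in range(N)) + b)
                     for g in range(N)])

rng = np.random.default_rng(4)
N = 5
W = rng.normal(size=N)
b = 0.3
sigma = rng.choice([-1, 1], size=N)

f = gcnn_layer1(sigma, W, b)
# Equivariance: translating the input merely permutes the group index,
# f_g(T_h sigma) = f_{g-h}(sigma), so the layer needs no |G| forward passes.
h = 2
f_shifted = gcnn_layer1(np.roll(sigma, h), W, b)   # equals np.roll(f, h)
```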
_Jastrow network:_ As a baseline, we also use a Jastrow network based on the standard Jastrow ansatz [57, 58]
\[\psi_{\mathbf{\theta}}=\exp\left(\sum_{i,j}\mathbf{\sigma}_{i}^{z}W_{i,j} \mathbf{\sigma}_{j}^{z}\right)\,, \tag{13}\]
where the variational parameters \(\mathbf{\theta}=\left\{W_{i,j}\right\}\) form a matrix of size \(N\times N\). The Jastrow ansatz is physically motivated by two-body interactions and assigns trainable parameters \(W_{i,j}\) to pairwise spin correlations. The number of its parameters scales as \(\mathcal{O}(N^{2})\).
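A direct transcription of Eq. (13) (illustrative sketch; names and parameter scales are our own):

```python
import numpy as np

def psi_jastrow(sigma, W):
    """Jastrow amplitude, Eq. (13): psi = exp(sum_ij sigma_i W_ij sigma_j)."""
    return np.exp(sigma @ W @ sigma)

rng = np.random.default_rng(5)
N = 4
W = rng.normal(scale=0.1, size=(N, N))     # N^2 = 16 trainable parameters
sigma = np.array([1, -1, -1, 1])

amp = psi_jastrow(sigma, W)
# The quadratic form makes the amplitude invariant under a global spin flip.
flipped = psi_jastrow(-sigma, W)
```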
The complicated sign structure of the complex phases of the basis coefficients that form the ground-state wavefunction presents a major challenge when optimizing the parameters of a variational function for a frustrated spin system. In the case of the Heisenberg model on a bipartite lattice consisting of sublattices A and B (i.e., the SSM with \(J^{\prime}=0\)) this can be solved using the Marshall sign rule (MSR) [59]. The MSR states that the sign of \(\psi(\mathbf{\sigma}^{z})\) is given by \((-1)^{N_{A}^{\uparrow}(\mathbf{\sigma}^{z})}\), where \(N_{A}^{\uparrow}(\mathbf{\sigma}^{z})\) is the total number of up-spins on sublattice A. Because this sign alternates with each spin-flip, it can be difficult for a NN to learn the correct signs. However, it is possible to circumvent this problem in two analogous ways.
If the sign structure is dictated by the MSR, the Hamiltonian can be gauge-transformed by changing the signs of some terms so that all wave-function coefficients become positive in the transformed basis. In particular, we change \(\sigma^{x}\rightarrow-\sigma^{x}\) and \(\sigma^{y}\rightarrow-\sigma^{y}\) for all sites in sublattice A. The same result can also be obtained by setting the visible biases to \(a_{i}=\mathrm{i}\pi/2\) for \(i\in A\) and \(a_{i}=0\) for \(i\in B\), as this exactly reconstructs the Marshall sign factor (up to an overall constant factor). In other words, the biases can be set to play the role of the Marshall basis. What is important here is that in the general case the simple Marshall sign rule is not always applicable; especially problematic are systems with strong frustration [17, 18, 60]. The advantage of using the visible biases instead is that their setting does not have to be known in advance, as it can, despite possible technical difficulties, be learned. It is therefore beneficial to include the visible biases whenever the architecture allows it. An additional bonus is that free visible biases also make it possible to overcome an improper initialization of the weights.
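The equivalence between the Marshall sign factor and the imaginary visible biases can be verified by brute force on a small bipartite chain (a sketch under our own conventions, with \(\sigma_{i}=\pm 1\); the two factors coincide up to the constant \(\mathrm{i}^{-|A|}\)):

```python
import numpy as np
from itertools import product

# Sublattice A = even sites of a small bipartite chain (our own toy choice).
N = 4
A = [i for i in range(N) if i % 2 == 0]

def bias_phase(sigma):
    """Visible-bias factor exp(sum_{i in A} a_i sigma_i) with a_i = i*pi/2."""
    return np.exp(1j * np.pi / 2 * sum(sigma[i] for i in A))

def marshall_sign(sigma):
    """Marshall sign factor (-1)^{N_A^up}."""
    n_up_A = sum(1 for i in A if sigma[i] == 1)
    return (-1) ** n_up_A

# The two factors agree for every configuration up to a single
# configuration-independent constant (here i^{-|A|} = -1).
ratios = [bias_phase(s) / marshall_sign(s) for s in product([1, -1], repeat=N)]
```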
## 4 Results
### Comparison of different NQSs architectures
It is not feasible to apply all NQSs introduced above to investigate the ground-state phase diagram of the SSM on large lattices. Therefore, in the first part of our investigation, we benchmark these NQSs against exact results obtained by the Lanczos ED method. The aim is to identify a network that is both expressive enough to cover the various phases and computationally tractable even for large lattices. We focus on a regular lattice with \(N=4\times 4=16\) sites and an irregular lattice with \(N=20\) (see Appendix A). Throughout this paper, we apply periodic boundary conditions to all lattices used. The irregular \(N=20\) lattice is considered because the \(N=16\) lattice has some undesirable properties, e.g., extra symmetries with trivial irreps that favor symmetric networks; it also suffers from stronger finite-size effects and does not exhibit the PS phase. On the other hand, it is regular and easy to calculate.
We initially focus on just two cases, represented by \(J/J^{\prime}=0.2\) (DS phase) and \(J/J^{\prime}=0.9\) (AF phase), and investigate the model with and without the MSR. Because the goal here is to compare different networks, we estimate the accuracy of each architecture by comparing the average energy of the last 50 learning iterations, \(E_{50}\), with the exact result \(E_{\text{ex}}\). The same computational protocol is used for each architecture. In particular, we use 2000 MC samples and 1000 training iterations for three values of fixed learning rates \((0.2,0.05,0.01)\). Each particular combination of architecture and basis (MSR or direct) was computed four times for each learning rate (yielding 12 independent
runs for each case of interest). This is to eliminate occasional events when the NN gets stuck in a local energy minimum too far from the ground state. Zero magnetization was not imposed (i.e., we use local single-spin-flip Metropolis updates in VMC). We summarize our results in Table 1, where the listed values are \(\min\limits_{i}\frac{\left|E_{50}^{i}-E_{\mathrm{ex}}\right|}{E_{\mathrm{ex}}}\), with \(i\) enumerating the twelve independent runs.
There are several results in Table 1 which were important for our decision on which network should be used in the detailed study of the phase diagram on larger lattices. Starting with the RBM, one can see that networks with \(\alpha=2\) (560 parameters for \(N=16\) and 860 for \(N=20\)), \(\alpha=4\) (3380 parameters for \(N=20\)) and \(\alpha=16\) (4368 parameters for \(N=16\)) show similar precision, where the significantly larger networks are notably better (approximately three times) only in the AF phase. For the general case this favors the computationally less demanding network with \(\alpha=2\). Also interesting is the comparison with the Jastrow network. Both architectures have comparable precision in the DS phase for \(N=16\); however, the RBM in the AF phase, as well as in the DS phase for \(N=20\)
\begin{table}
\begin{tabular}{l r|r r|r r} \hline \multicolumn{2}{c|}{\(N=4\times 4\)} & \multicolumn{2}{c|}{\(J/J^{\prime}=0.2\) (DS)} & \multicolumn{2}{c}{\(J/J^{\prime}=0.9\) (AF)} \\ architecture & params & direct & MSR & direct & MSR \\ \hline Jastrow & 256 & 5.8\(\times 10^{-5}\) & 1.1\(\times 10^{-5}\) & 6.2\(\times 10^{-2}\) & 2.1\(\times 10^{-2}\) \\ RBM (\(\alpha=2\)) & 560 & 1.9\(\times 10^{-5}\) & 1.6\(\times 10^{-5}\) & 4.7\(\times 10^{-3}\) & 5.1\(\times 10^{-3}\) \\ RBM (\(\alpha=16\)) & 4368 & 2.8\(\times 10^{-5}\) & 1.4\(\times 10^{-5}\) & 1.6\(\times 10^{-3}\) & 1.1\(\times 10^{-3}\) \\ rRBM (\(\alpha=2\)) & 1088 & 2.3\(\times 10^{-4}\) & 2.1\(\times 10^{-4}\) & 8.3\(\times 10^{-3}\) & 7.9\(\times 10^{-3}\) \\ rRBM (\(\alpha=8\)) & 4352 & 2.3\(\times 10^{-4}\) & 2.2\(\times 10^{-4}\) & 9.3\(\times 10^{-3}\) & 7.8\(\times 10^{-3}\) \\ sRBM (\(\alpha=4\)) & 18 & 0.0 & 0.0 & 4.8\(\times 10^{-3}\) & 2.5\(\times 10^{-3}\) \\ sRBM (\(\alpha=16\)) & 69 & 0.0 & 0.0 & 8.9\(\times 10^{-4}\) & 3.8\(\times 10^{-4}\) \\ sRBM (\(\alpha=128\)) & 545 & 0.0 & 0.0 & 1.3\(\times 10^{-3}\) & 8.5\(\times 10^{-5}\) \\ pRBM (\(\alpha=0.5\)) & 136 & 7.9\(\times 10^{-5}\) & 1.2\(\times 10^{-4}\) & 1.5\(\times 10^{-3}\) & 8.5\(\times 10^{-4}\) \\ pRBM (\(\alpha=2\)) & 544 & 9.1\(\times 10^{-6}\) & 2.2\(\times 10^{-5}\) & 7.1\(\times 10^{-5}\) & 2.0\(\times 10^{-5}\) \\ GCNN & 2188 & 6.2\(\times 10^{-6}\) & 3.6\(\times 10^{-6}\) & 3.2\(\times 10^{-5}\) & 3.6\(\times 10^{-5}\) \\ GCNNt & 268 & 4.2\(\times 10^{-7}\) & 5.1\(\times 10^{-7}\) & 4.9\(\times 10^{-3}\) & 4.7\(\times 10^{-3}\) \\ \hline \multicolumn{2}{c|}{\(N=20\)} & \multicolumn{2}{c|}{\(J/J^{\prime}=0.2\) (DS)} & \multicolumn{2}{c}{\(J/J^{\prime}=0.9\) (AF)} \\ \hline Jastrow & 400 & 1.1\(\times 10^{-5}\) & 1.0\(\times 10^{-3}\) & 1.4\(\times 10^{-1}\) & 3.0\(\times 10^{-2}\) \\ RBM (\(\alpha=2\)) & 860 & 2.2\(\times 10^{-5}\) & 1.4\(\times 10^{-5}\) & 6.6\(\times 10^{-3}\) & 6.2\(\times 10^{-3}\) \\ RBM (\(\alpha=4\)) & 3380 
& 1.2\(\times 10^{-5}\) & 1.7\(\times 10^{-5}\) & 2.2\(\times 10^{-3}\) & 2.1\(\times 10^{-3}\) \\ sRBM (\(\alpha=4\)) & 85 & 1.2\(\times 10^{-1}\) & 1.5\(\times 10^{-1}\) & 5.0\(\times 10^{-2}\) & 1.4\(\times 10^{-3}\) \\ pRBM (for AF phase) & 336 & 2.3\(\times 10^{-1}\) & 2.3\(\times 10^{-1}\) & 3.5\(\times 10^{-3}\) & 3.0\(\times 10^{-3}\) \\ pRBM (for DS phase) & 336 & 7.1\(\times 10^{-4}\) & 6.8\(\times 10^{-5}\) & 4.4\(\times 10^{-2}\) & 4.7\(\times 10^{-2}\) \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of the precision of NQS variational results on lattices \(N=16\) and \(N=20\). The listed values were calculated as \(\min\limits_{i}\frac{\left|E_{50}^{i}-E_{\mathrm{ex}}\right|}{E_{\mathrm{ex}}}\), where \(E_{50}^{i}\) is the average energy of the last 50 iterations of the \(i\)-th run. The number of variational parameters is also shown for each architecture. The difference between GCNN and GCNNt is that for GCNN we used the correct characters for the expected ground state, while GCNNt utilized only the translation symmetry. An error of 0.0 here means a relative error of less than \(10^{-7}\), which we consider “numerical precision” due to the standard MC errors, which are typically larger even for \(N=16\).
with MSR is by one or even two orders of magnitude more precise than the Jastrow ansatz.
For \(N=16\), the sRBM architecture seems even better. In its implementation we have utilized the full automorphism group of the finite lattice. Despite the resulting small number of variational parameters, it shows excellent precision. Actually, significantly increasing \(\alpha\) is not that beneficial (compare the \(\alpha=4\) and \(\alpha=128\) cases). In the DS phase the usage of the symmetries allowed the sRBM with \(\alpha=4\) to find the ground-state energies within the numerical precision (hence the zero error). Because the sRBM is just a restricted RBM, this already suggests that the learning protocol for the RBM can be improved, which we demonstrate in the next section. However, it is important to stress that these excellent results are a consequence of the special symmetries of the \(N=16\) lattice. Both DS and AF states transform under the trivial irreducible representation, and the automorphism group is therefore applicable. This is not true for the DS ground state on different tiles, including regular ones such as \(N=6\times 6\) (for a more detailed discussion of the symmetries see Appendix C). This is illustrated in the second part of Table 1, where the sRBM with \(\alpha=4\) gives very poor results in the DS phase of \(N=20\) due to the improper symmetries. In short, using inappropriate symmetries in the sRBM for states that do not transform under the trivial irrep can make the variational energy significantly worse than for a simple RBM. For \(N=20\) the sRBM also fails in the AF phase, but only when the direct basis is used. This implies that the sRBM has trouble learning the correct sign structure of the state for larger lattices, which can be attributed to the fixed visible biases.
Both remaining architectures, namely pRBM and GCNN, show excellent accuracy for \(N=16\). They clearly outperform all other networks in the AF phase. However, the results at \(N=20\) are less convincing, especially when one takes into account that these networks are more computationally demanding than the RBM, even in cases where the RBM contains more parameters. In addition, the reached precision required the usage of the correct symmetries of the expected state, i.e., the proper line from Table 2 in Appendix C. If one uses an improper one, i.e., if a different state is expected, as illustrated by the last two lines in Table 1, the precision can drop by several orders of magnitude. Similarly, the precision decreases significantly for both GCNN and pRBM when we use only the group of translations instead of the full symmetry group, as illustrated by GCNNt in Table 1. Note that in this case the precision in the AF phase drops to the level of a simple RBM with \(\alpha=2\). The network is much better in the DS phase, but as we will discuss in the next section, already the RBM with \(\alpha=2\) and a modified learning protocol can reach the numerical precision in this phase as well. Although we cannot exclude that much better results could be obtained for the symmetrized pRBM and GCNN networks with a different learning protocol, considering their much higher computational demands and the necessity to identify a priori the correct irrep symmetries for each lattice type to make the learning efficient, the presented results favor the RBM for the investigation of larger clusters.
The last question that must be addressed here is whether it is worthwhile to use the MSR. Table 1 shows several cases where the MSR is favorable in the AF phase (e.g., for the sRBM and \(N=16\)), but this is not a general rule. In addition, its usage comes at a price as well. We have noticed that the MSR basis seems to strongly favor the AF ordering even for values of \(J/J^{\prime}\) where the PS is already the ground state in exact results. We will discuss this briefly when addressing larger lattices.
To wrap it up: in general, the usage of the MSR basis does not lead to significantly better results. With some exceptions, the networks presented here are able to approximate the ground-state energy quite well even without the MSR. Therefore, we will mostly omit the MSR from further discussion. Furthermore, if the symmetry of the ground state is known, it is worth using this information when building the NN. If not, then the usage of just translations does not lead to a significant improvement of the precision. Fortunately, the complex-valued RBM with visible biases can give a very good approximation of the ground-state energy without any restrictions. Its clear advantage is that no preliminary information about the ground-state properties is needed. As such, it is suitable for problems where the character of the ground state or the position of a phase boundary is unknown. In addition, the precision of the RBM for the SSM can be significantly improved using a different learning strategy, discussed in the following section.
### Investigation of the ground-state phase diagrams
Focusing only on the RBM allowed us to test several learning strategies and to use more precise MC calculations. What follows is a description of the best learning protocol that we have found, which we used to produce all results discussed below. It proved beneficial to use more precise MC calculations already during the training: we typically generate 4000 - 12 000 MC samples at every sampling step. It was also more advantageous to run 10 - 30 independent learning runs (with random initial variational parameters) with shorter learning times than to use a few runs with many learning iterations. We used approximately 2000 convergence iterations in each run. During the learning we lowered the learning rate \(\eta\) in several discrete steps. Typically we started with \(\eta=0.08\) (\(\approx\)200 iterations), then changed it to \(\eta=0.04\) (\(\approx\)1600 iterations), followed by \(\eta=0.02\) (\(\approx\)100 iterations), \(\eta=0.01\) (\(\approx\)100 iterations) and \(\eta=0.001\) (\(\approx\)50 iterations). The trained RBM was then used to calculate the expectation values of the energy and the order parameters introduced in the next section, for which we used 12 - 60 thousand samples. Consequently, the Monte Carlo error bars in all presented figures are negligible for small lattices; the relevant absolute error comes from the learning process or from limitations of the used NQS. The state with the lowest energy (evaluated more precisely after training) from all independent runs was kept as the final result in the following discussion. Due to stochastic fluctuations in the learned parameters, it was in some cases advantageous to refine the results by rerunning the final state multiple times with a high number of MC samples but a small number (5 - 10) of iterations and a small learning rate (\(\eta\leq 0.001\)), again keeping the result with the lowest energy. Also, as discussed below, we have utilized transfer learning in some problematic regimes.
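The stepped schedule described above can be summarized as a simple lookup (a sketch; the step boundaries are the approximate cumulative iteration counts quoted in the text, and exact values varied in practice):

```python
def learning_rate(iteration):
    """Stepped learning-rate schedule from the text: eta = 0.08 for ~200
    iterations, 0.04 for ~1600, 0.02 for ~100, 0.01 for ~100, then 0.001."""
    schedule = [(200, 0.08), (1800, 0.04), (1900, 0.02), (2000, 0.01)]
    for end, eta in schedule:
        if iteration < end:
            return eta
    return 0.001     # final ~50 iterations and beyond
```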
#### 4.2.1 Ground-state orderings
As already discussed, a good agreement of the variational energy with the exact one does not guarantee that the variational state correctly captures the character of the exact ground state, i.e., that it reflects the correct phase. To examine this, and with the aim to see whether the RBM NQS can correctly capture the transitions between the phases, we calculate the order parameters for the three main expected orderings. They are constructed to be large (close to one) whenever the state is in the respective phase and small in the other domains.
In particular, we define the order parameter for the DS phase as
\[\mathcal{P}_{\text{DS}}=-\frac{4}{3N}\sum_{\langle i,j\rangle^{\prime}}\left\langle\hat{\mathbf{s}}_{i}\cdot\hat{\mathbf{s}}_{j}\right\rangle, \tag{14}\]
which reflects the fact that the operator \(\hat{\mathbf{s}}_{1}\cdot\hat{\mathbf{s}}_{2}\) has, for an isolated dimer in the singlet state, the expectation value \(-\frac{3}{4}\hbar^{2}\). Therefore, \(\mathcal{P}_{\text{DS}}\) is one in the DS phase and strictly lower in the other phases.
For the PS order parameter we use a definition based on order parameter from Ref. [35]
\[\mathcal{P}_{\text{PS}}=\frac{4}{N}\left|\left\langle\sum_{\mathbf{r}\in\text{singlet}}\hat{Q}_{\mathbf{r}}-\sum_{\mathbf{r}\in\text{empty}}\hat{Q}_{\mathbf{r}}\right\rangle\right|, \tag{15}\]
where \(\hat{Q}_{\mathbf{r}}=\frac{1}{2}\left(\hat{P}_{\mathbf{r}}+\hat{P}_{\mathbf{r}}^{-1}\right)\), with \(\hat{P}_{\mathbf{r}}\) being the permutation operator. This operator performs a cyclic permutation of the four spins on a plaquette (a square on the lattice without the diagonal \(J^{\prime}\) bond) at position \(\mathbf{r}\). The first sum in Eq. (15) runs over all singlet squares (see Fig. 0(b)) and the second sum runs over all empty squares. The meaning of this construction can be understood by looking at Fig. 0(b): the operator \(\hat{Q}_{\mathbf{r}}\) gives a large mean value in a plaquette singlet (gray square) and a value close to zero in an empty square in between four plaquette singlets. In practice, we do not know which set of squares will become singlets, as the state is degenerate; therefore, we use the absolute value.
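The behavior of \(\hat{Q}_{\mathbf{r}}\) on a plaquette singlet can be checked exactly for an idealized isolated plaquette, i.e., a four-spin Heisenberg ring (a sketch with \(\hbar=1\); the unique ring ground state has energy \(-2J\) and, being an eigenstate of the cyclic permutation, gives \(|\langle\hat{Q}\rangle|=1\)):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, i, n=4):
    """Embed a single-spin operator at site i of an n-site chain."""
    mats = [op if j == i else I2 for j in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Isolated plaquette: 4-spin Heisenberg ring, H = sum_i s_i . s_{i+1}.
H = sum(site_op(s, i) @ site_op(s, (i + 1) % 4)
        for i in range(4) for s in (sx, sy, sz))

# Cyclic permutation operator P (spin at site i moves to site i+1).
P = np.zeros((16, 16))
for basis in range(16):
    bits = [(basis >> i) & 1 for i in range(4)]
    shifted = [bits[(i - 1) % 4] for i in range(4)]
    P[sum(bit << i for i, bit in enumerate(shifted)), basis] = 1.0

Q = (P + P.T) / 2                       # P^{-1} = P^T for a permutation matrix
evals, evecs = np.linalg.eigh(H)
gs = evecs[:, 0]                        # plaquette singlet, E_0 = -2
q_val = float(np.real(gs.conj() @ Q @ gs))
```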
For the AF phase we employ the standard structure factor
\[\mathcal{P}_{\text{AF}}=\frac{1}{N^{2}}\sum_{ij}\text{e}^{\text{i}\mathbf{q}\cdot \mathbf{r}_{ij}}\left\langle\hat{\mathbf{S}}_{i}\cdot\hat{\mathbf{S}}_{j}\right\rangle, \tag{16}\]
where \(\mathbf{r}_{ij}\) denotes the difference in discrete coordinates of spin \(i\) and \(j\), and we take \(\mathbf{q}=(\pi,\pi)\) which measures the antiferromagnetic checkerboard ordering. Finally, in the case of finite magnetic field we use the normalized magnetization in the \(z\)-direction
\[\mathcal{M}=\frac{2}{N}\sum_{i}\left\langle\hat{S}_{i}^{z}\right\rangle, \tag{17}\]
to identify the expected plateaus in the magnetization. These expectation values are calculated using VMC for the trained RBM NQS.
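As a sanity check of Eqs. 16 and 17, the sketch below evaluates \(\mathcal{P}_{\text{AF}}\) at \(\mathbf{q}=(\pi,\pi)\) for a classical Néel product state on a \(2\times 2\) cluster (a toy illustration, not our VMC estimator). The exact value is \(1/4+1/(2N)\), because the \(i=j\) terms contribute \(S(S+1)=3/4\) each; the magnetization of the fully polarized state is also verified:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

sites = [(0, 0), (1, 0), (0, 1), (1, 1)]     # 2 x 2 cluster, N = 4
N = len(sites)

def op(single, site):
    mats = [np.eye(2)] * N
    mats[site] = single
    return reduce(np.kron, mats)

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
neel = reduce(np.kron, [up if (x + y) % 2 == 0 else down for x, y in sites])

q = np.array([np.pi, np.pi])
P_AF = 0.0
for i, ri in enumerate(sites):
    for j, rj in enumerate(sites):
        SiSj = sum(op(s, i) @ op(s, j) for s in (sx, sy, sz))
        phase = np.exp(1j * q @ (np.array(ri) - np.array(rj)))
        P_AF += np.real(phase * np.vdot(neel, SiSj @ neel))
P_AF /= N**2
print(P_AF)                                   # 1/4 + 1/(2N) = 0.375 here

Mz = sum(op(sz, i) for i in range(N))
polarized = reduce(np.kron, [up] * N)
m_pol = (2 / N) * np.real(np.vdot(polarized, Mz @ polarized))
print(m_pol)                                  # Eq. 17: fully polarized state
```
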
#### 4.2.2 Zero magnetic field
We first investigate the phases of the SSM in zero magnetic field. Unlike in the procedure used to compare different network architectures, here we restrict the Hilbert space by the condition \(\mathcal{M}=0\). Before moving to larger lattices, we test the RBM for \(N=20\) in a wide range of \(J/J^{\prime}\). We use the irregular lattice \(N=20\) because it shows an onset of the PS ordering (see the black dashed line in Fig. 4(a)) not present for smaller regular lattices. We also readdress the role of the parameter \(\alpha\) within the new learning protocol, but start our discussion with the \(\alpha=2\) case.
As is clear from the comparison of the ground-state energies in panels (b) and (c) of Fig. 4, the RBM variational energy is in very good agreement with ED. The updated learning protocol ensures that the relative error in the \(J/J^{\prime}<0.68\) region, i.e., in the DS phase, is on the order of the numerical precision already for \(\alpha=2\), despite not using any symmetries except the condition \(\mathcal{M}=0\). The largest error is in the vicinity of the expected first-order phase transition from the DS to the PS phase, but only on the side of the expected PS phase. Nevertheless, even here the largest observed relative error in energy was approximately 1% for \(\alpha=2\).
Considering the focus of our study, even more important than the error in energy is the nature of the optimized variational states. Panel (a) in Fig. 4 shows that already a shallow network, i.e., an RBM with complex parameters and \(\alpha=2\), is expressive enough to correctly capture the formation of the distinct DS (blue diamonds) and AF orderings (red crosses), as well as the onset of the PS phase (black circles). The agreement is far from perfect, though. In agreement with the results for the energy, the largest differences in order-parameter values between RBM and ED lie just above the expected phase transition. Here an error of 1% or less in the estimation of the ground-state energy translates into an error of tens of percent in the order parameters. Still, even here the RBM gives a correct qualitative picture. The position of the abrupt change of phase matches the exact result and there is a clear onset of the PS ordering. With increasing \(J/J^{\prime}\) the RBM results again align with the exact ones.
This benchmark shows that the RBM with \(\alpha=2\) can easily capture the correct state in the DS phase, but it gives worse results above the critical \(J/J^{\prime}\approx 0.68\). What is not clear is whether the relative errors in panel (c) represent some inherent limitation of the RBM with small \(\alpha\), e.g., a difficulty in setting the correct sign structure of the frustrated state, or are related to the learning process. Gradually increasing \(\alpha\) from 2 (blue circles) to 4 (red pluses), 8 (green crosses) and 16 (black diamonds with yellow cores) in the problematic region lowers the relative error in energy. However, this
Figure 4: Comparison of exact (lines) and various RBM variational results (symbols) on the irregular lattice \(N=20\). (a) Evolution of the order parameters. Here blue solid line (ED), pure blue diamonds (RBM with \(\alpha=2\)) and blue diamonds with red edge (RBM with \(\alpha=16\)) show the DS order parameter; black dashed line (ED), black circles (RBM with \(\alpha=2\)) and black circles with yellow edge (RBM with \(\alpha=16\)) show the PS order parameter; and red dot-dashed line (ED), red crosses (RBM with \(\alpha=2\)) and red crosses with blue edge (RBM with \(\alpha=16\)) show the AF order parameter. Results of symmetric variants of RBM are not shown, as they were comparable to the presented results for \(J/J^{\prime}>0.68\) and way off the exact results for \(J/J^{\prime}\leq 0.68\). (b) The exact (red line) and RBM \(\alpha=2,16\) ground-state energies. (c) Relative error in the ground-state energy of the RBM with \(\alpha=2\) (blue circles), 4 (red pluses), 8 (green crosses) and 16 (black-yellow diamonds). Note that the relative error in the DS phase for RBM \(\alpha=2\) is at the level of numerical precision.
significant improvement in energy leads only to a small improvement for the order parameters near the critical point. This is shown in panel (a) where results calculated with RBM with \(\alpha=16\) are marked by the same symbols as for \(\alpha=2\) but highlighted via differently colored edges.
Using a symmetric NQS did not significantly improve the results. We have tested the sRBM architecture with \(\alpha=4\) in the direct as well as the MSR basis, using the same protocol as for the RBM. The sRBM results were comparable to the RBM for \(J/J^{\prime}>0.68\) and much worse than the RBM results below this critical value. This suggests that the issue is not entirely due to insufficient learning. On the other hand, the learning was most difficult in the vicinity of the observed discontinuity. A significant fraction (often more than a half) of the independent runs for \(0.69\leq J/J^{\prime}\leq 0.72\) ended either in the wrong (DS) phase or even in a state with an energy much higher than the real ground state. This was not true for the rest of the \(J/J^{\prime}\) interval, where most of the independent runs with the same \(\alpha\) showed very similar variational energies. In addition, the relative errors for all investigated RBM variants (including ones not presented here) follow the same pattern. They are maximal just above the critical point and then, if we neglect some noise, monotonically decrease with increasing \(J/J^{\prime}\). Yet, increasing \(\alpha\) significantly lowers the variational energy even for \(J/J^{\prime}>0.74\). This again suggests that the problem is indeed the small \(\alpha\). Ultimately, both statements seem to be correct. A significantly larger \(\alpha\) than \(\alpha=16\) is needed to capture the critical region, together with high-precision learning, i.e., many independent runs.
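The many-independent-runs protocol can be sketched generically. In the toy below, a rugged one-dimensional "energy" with a metastable minimum is a hypothetical stand-in for a full VMC optimization; only the keep-the-best-of-many-restarts logic is the point:

```python
import random

# Hypothetical objective standing in for the variational energy: a tilted
# double well, so some runs end in the metastable (wrong-phase) minimum.
def energy(x):
    return (x * x - 1) ** 2 + 0.3 * x

def grad(x):
    return 4 * x * (x * x - 1) + 0.3

def one_run(x0, lr=0.01, steps=500):
    # one "short independent optimization" = plain gradient descent
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return energy(x), x

random.seed(0)
runs = [one_run(random.uniform(-2.0, 2.0)) for _ in range(20)]
best_e, best_x = min(runs)   # keep the lowest-energy run; others may be stuck
print(best_e)                # close to the global minimum near x = -1
```
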
After testing the RBM on small lattices and understanding its strengths and limitations, we can now approach larger ones. We focus on \(\alpha=2\), as the increased precision of the variational energy obtained with larger \(\alpha\)'s does not significantly improve the estimates of the order parameters. Although we cannot easily compare the VMC results with exact diagonalization for larger lattices, we can use the exact asymptotic results for the energy in the DS (Eq. 3) and AF (Eq. 4) phases to guide us.
Fig. 5 shows the evolution of the order parameters and energy for \(N=20\), \(36\), \(64\), as well as several points calculated for \(N=100\). The results for \(N=20\), \(36\) and \(64\) are in very good agreement with the exact result in the assumed DS phase and lie between the \(N=20\) exact energy and the large-\(N\) asymptotic in the supposed AF phase (up to one point discussed later). An advantage of the SSM is that we can see that the \(N=100\) results are not converged enough even in the DS phase, which is much easier to access, with a relative error of \(\approx 2\%\). Note that this does not signal problems with the chosen NQS, as it mostly reflects our limited computational resources. Because of the size of the lattice, we used fewer iterations in the learning process (\(\approx\)800) and only several independent runs, which proved to be enough only for a qualitative study confirming the general picture discussed below.
Leaving \(N=100\) aside, Fig. 5 illustrates the usability of the RBM for larger clusters. Although a much more thorough finite-size analysis would be necessary to assess the phase boundaries, the presented results confirm the overall picture of DS and AF phases separated by a narrow PS phase, or at least an indication of it. There is, however, at least one issue. The point of the discontinuous phase transition from the DS to the PS phase should be \(J/J^{\prime}\simeq 0.675\), but our results at larger lattices push it to \(J/J^{\prime}\simeq 0.7\). Besides finite-size effects, this could also be related to two technical problems. The first is the already discussed difficulty of training the NQS in the vicinity of the discontinuous phase transition. The second is the tendency of the direct basis to prefer DS over AF ordering. Both of these issues can be seen in panel (e) (inset of panel (d)) with the detail of the \(N=64\) results. Here green diamonds show the RBM data, red empty diamonds are sRBM data with the MSR basis and blue stars are the sRBM data for the direct basis, all with \(\alpha=2\). Clearly, all these networks show (different) problems around the expected point of the phase transition. For \(J/J^{\prime}=0.7\) and \(0.72\), the sRBM with MSR gives lower energy than the RBM, and even lower than the energy of the DS ordering. The sharp
transition must therefore be placed below \(J/J^{\prime}=0.7\). On the other hand, the sRBM with MSR cannot correctly capture the onset of the DS ordering. The sRBM network with the direct basis illustrates the opposite problem: it overestimates the stability of the DS ordering.
The investigation of the sRBM showed that the RBM results at \(J/J^{\prime}=0.7\) are not fully converged yet. Because we have not been able to solve this problem using the direct approach, we utilized transfer learning. We used the RBM parameters trained for \(J/J^{\prime}=0.74\) as a starting point to train the network at \(J/J^{\prime}=0.72\), then used these results as a starting point for \(J/J^{\prime}=0.70\), and finally these for \(0.69\). That way we obtained smaller variational energies for \(J/J^{\prime}=0.72\) and \(0.70\) than in the direct approach or the sRBM results, and the \(J/J^{\prime}=0.72\) result dropped even below the DS energy. Interestingly, this also led to an observable change in the order parameters (black crosses in all panels). In contrast to the \(N=20\) case, especially sensitive is the PS ordering, as seen from the comparison of black crosses and green diamonds in panel (d). Even if it is suggested by the order parameters, the transfer-learning technique has not actually pushed the point of the expected phase transition below \(J/J^{\prime}=0.69\). The reason is that the obtained energy at this point is larger than the DS energy already reproduced by the direct approach. This shows that, although useful, transfer learning has to be used with care.
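The need for care can be illustrated with a toy model of a first-order-like transition: warm-starting an optimization along a scan of a tilt parameter (a hypothetical stand-in for \(J/J^{\prime}\)) can leave the solution stuck on the metastable branch past the crossing, which is why warm-started energies must always be compared with the competing branch:

```python
# Hypothetical energy landscape: a double well tilted by c, standing in for
# the J/J'-dependent variational energy with two competing orderings.
def energy(x, c):
    return (x * x - 1) ** 2 + c * x

def descend(x, c, lr=0.01, steps=400):
    for _ in range(steps):
        x -= lr * (4 * x * (x * x - 1) + c)
    return x

cs = [0.3, 0.2, 0.1, 0.0, -0.1, -0.2, -0.3]   # scan direction of the tilt
x = descend(-1.0, cs[0])                       # ground-state branch at c = 0.3
warm = []
for c in cs:
    x = descend(x, c)                          # warm start from the previous c
    warm.append(energy(x, c))

cold = energy(descend(1.0, cs[-1]), cs[-1])    # true minimum at c = -0.3
print(warm[-1], cold)   # warm-started chain is stuck on the metastable branch
```

The global minimum switches wells at \(c=0\), yet the warm-started chain tracks the left well throughout, ending with a higher energy than the true minimum.
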
Figure 5: Evolution of order parameters for DS (a), PS (b), AF (c) and variational energy (d) as a function of \(J/J^{\prime}\) for \(h=0\) and various lattice sizes. All results in panels (a)-(d) have been obtained using RBM NQS with \(\alpha=2\) and VMC with exchange updates (simultaneous flip of two opposite spins in the basis state) for the Hilbert subspace restricted to \(\mathcal{M}=0\). The black dashed lines in panels (d) and (e) show the asymptotic energies for DS (horizontal) and AF phase (tilted). The black crosses represent \(N=64\) results for which we have utilized transfer learning. The inset (e) shows the detail of the variational energy for \(N=64\) in the vicinity of the phase transition calculated using RBM (green diamonds), sRBM in direct base (blue stars), sRBM with MSR (red empty diamonds) and three points calculated with RBM utilizing transfer learning (black crosses).
Taking into account these difficulties, as well as the fact that our RBM results underestimate the PS order parameter even for \(N=20\) and large \(\alpha\), the investigation of the possible SL phase and the related DQCP, predicted in this problematic region, is currently beyond our reach. Nevertheless, the expressiveness of a simple RBM with \(\alpha=2\) demonstrated here suggests that the problem can indeed be attacked by more expressive or specialized networks. A good candidate might be a composed GCNN that would combine networks for different characters of the symmetry group for a particular lattice size and boundary conditions.
#### 4.2.3 Magnetization plateaus
Historically, the most intriguing property of the SSM is its ability to describe fractional plateaus in the magnetization as a function of the external magnetic field, which are also observed in real materials. To address this problem via VMC, one has to drop the restriction of fixed \(\mathcal{M}=0\). In addition to significantly enlarging the Hilbert space, this also makes the optimization (learning) process a harder task. Moreover, each plateau represents a different ordering and, therefore, a challenge to the NQS. Yet, as we demonstrate here, already a simple RBM NQS with \(\alpha=2\) is sufficiently expressive to capture the main plateaus.
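The mechanism behind such plateaus, ground-state level crossings in a field, can be seen already in a single Heisenberg dimer, where \(\mathcal{M}\) jumps from 0 to 1 exactly at \(h=J\) (the singlet and polarized-triplet energies cross at \(-3J/4=J/4-h\)):

```python
import numpy as np

sz = np.array([[0.5, 0.0], [0.0, -0.5]])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])    # S^+
sm = sp.T                                   # S^-
I2 = np.eye(2)

# Heisenberg dimer in a field (J = 1): H = S1.S2 - h (S1^z + S2^z)
S1S2 = np.kron(sz, sz) + 0.5 * (np.kron(sp, sm) + np.kron(sm, sp))
Mz = np.kron(sz, I2) + np.kron(I2, sz)
N = 2

def magnetization(h):
    gs = np.linalg.eigh(S1S2 - h * Mz)[1][:, 0]   # ground state
    return (2 / N) * (gs @ Mz @ gs)               # Eq. 17 for N = 2

print(magnetization(0.5), magnetization(1.5))     # M = 0 plateau, then M = 1
```
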
We focus on the case \(J/J^{\prime}=0.45\), which lies inside the DS phase (at \(h=0\)), where several broad plateaus are expected to form. The most stable ones, if allowed by the lattice size, should be the \(\mathcal{M}=1/2\) and \(1/3\) plateaus [27, 36]. We start the discussion by benchmarking the RBM NQS results (blue filled diamonds in all panels of Fig. 6) against the ED results for the \(N=20\) lattice (blue solid lines). Clearly, the variational energy in panel (d) is in very good agreement with the exact one. The relative error plotted in panel (c) is well below 1% in the whole range of \(h\). In addition, it shows a structure which can be understood by comparing the profile of the relative
Figure 6: Comparison of ED (blue solid lines) and RBM with \(\alpha=2\) (symbols) results for \(N=20\) and \(J/J^{\prime}=0.45\). Panels (a) and (b) show the magnetization and the dimer-state order parameter as functions of the magnetic field. Panel (c) presents the relative error of the variational energy with respect to the ED result, where blue dotted lines are just a guide to the eye. Panel (d) shows the dependence of the normalized energy on the external magnetic field. Blue filled diamonds represent the direct approach, empty red diamonds were obtained by utilizing the transfer learning discussed in the main text, and the empty red squares by fixing \(\mathcal{M}N\) to integer values from the vicinity of the direct approach.
error dependence on \(h\) with the normalized magnetization plotted in panel (a) and the DS order parameter in panel (b). Panel (a) shows that the RBM NQS with \(\alpha=2\) is able to capture all the main steps in the magnetization observed in the ED curve. The most stable are \(\mathcal{M}=0\), \(1/2\) and \(1\), followed by the plateaus \(1/5\) and \(3/10\) forming in the range \(0.7\lesssim h/J^{\prime}\lesssim 1.2\).
The stability of these plateaus is also reflected in the relative error. Despite not using any restriction on \(\mathcal{M}\), the relative error for \(h/J^{\prime}<0.7\), where \(\mathcal{M}=0\), is negligible. In this region the system stays in the DS ordering, as revealed by panel (b). A similar situation occurs for \(h/J^{\prime}\geq 2.1\). Here the state is fully polarized (\(\mathcal{M}=1\)) and, therefore, easy to reproduce with variational techniques. Other regions with very small errors in the variational energy are the central parts of the stable plateaus discussed above, as best illustrated by the \(1/2\) one. Here the RBM NQS gives a relative error below \(0.1\%\). Consequently, the regions with the largest errors are related to the transitions between the stable plateaus. Here we also observe the largest deviations of the NQS magnetization (and \(\mathcal{P}_{\text{DS}}\)) from the ED results. These problematic regions can be divided into two types. To the first one belong the step edges, i.e., the abrupt changes of the magnetization for \(\mathcal{M}\leq 1/2\). The related convergence problems are similar to the difficulties in correctly capturing the precise position of the discontinuous phase transition discussed for \(h=0\) and \(J/J^{\prime}\approx 0.69\). As such, they can also be treated by transfer learning. The red hollow diamonds in Fig. 6 were obtained by approaching the step edges from the left and right, using the RBM parameters learned in the centers of the neighboring plateaus as the initial input. Transfer learning clearly suppresses the errors and gives the correct value of \(\mathcal{M}\) even very close to the discontinuities.
The second problematic region is at large magnetic field, where the \(\mathcal{M}=1/2\) plateau transits into \(\mathcal{M}=1\). Here, only one additional sudden step up of \(\mathcal{M}\) from \(1/2\) is expected in the thermodynamic limit, followed by a continuous increase of \(\mathcal{M}\) to \(1\) as \(h\) rises. The finite \(N=20\) lattice shows in this region a number of very narrow transient steps. This makes the region unsuited for transfer learning, unless a much more refined grid of \(h\)'s is applied. On the other hand, the small lattice allowed us to test the actual expressiveness of the RBM by fixing \(\mathcal{M}N\) to integer values taken from the vicinity of the direct RBM results for \(\mathcal{M}N\). The results with the lowest energies are depicted by the red empty squares, and they reproduce both \(\mathcal{M}\) and \(\mathcal{P}_{\text{DS}}\) of the exact study. This proves that, with a correct learning strategy, the RBM with \(\alpha=2\) is sufficient for capturing this rather complex evolution of the SSM ground state in an increasing magnetic field.
The stability of the magnetization plateaus must be confirmed on large lattices, because the magnetization could always be discrete on finite clusters yet continuous in the thermodynamic limit. Moreover, the lattice \(N=20\) is not divisible by three and thus cannot hold the important \(1/3\) plateau. To show that the RBM NQS can really capture these features, we address larger clusters. Fig. 7 presents, in addition to the exact (solid red line) and RBM (red diamonds) results for \(N=20\), RBM results for \(N=36\) (blue squares) and \(N=64\) (yellow triangles). We stress that these results were obtained with the direct approach. We did not use transfer learning or fixed \(\mathcal{M}\), to avoid the possibility of thereby introducing a bias towards seemingly stable plateaus. Still, the results for \(N=36\) show stable flat steps in the magnetization, which hold both the \(1/2\) and the \(1/3\) plateaus. Although the results for \(N=64\) are less stable, they confirm the \(1/2\) plateau and clearly signal the formation of two additional plateaus for \(h/J^{\prime}<1.2\). These are very encouraging results, as they again show that already a simple RBM with a small number of parameters is expressive enough to correctly capture the complicated magnetization dependence reflecting the underlying complex ordering of the quantum spins.
## 5 Conclusion
We have investigated the ground-state properties of the Shastry-Sutherland model via variational Monte Carlo with NQS variational functions. Our main goal was to show that a single and relatively simple NQS architecture can be used to approximate a broad range of regimes of this model. We have first tested and benchmarked several NQS architectures known from the literature to be suitable for different variations of the Heisenberg model. We discussed the role, advantages and disadvantages of the NQSs incorporating lattice symmetries and biases on the visible layer. We conclude that when precision, generality and computational costs are taken into account, a good choice for addressing larger SSM lattices without as well as with external magnetic field is a restricted Boltzmann machine NQS with complex parameters.
Focusing on the RBM NQS allowed us to refine the learning strategy, where we realized that if more precise MC sampling is used, it is advantageous to run several (tens of) short independent optimizations instead of a few long ones. Using this strategy, we have shown that already an RBM NQS with \(\alpha=2\) can well approximate the DS and AF phases and shows the onset of the PS ordering. It also correctly identifies the point of the discontinuous change from the DS regime to PS/AF. However, in its vicinity the largest deviations from the exact results are observed. Here, the variational energy can be significantly lowered by increasing \(\alpha\), but this leads only to a small improvement in the estimation of the order parameters. Consequently, we used the RBM NQS with \(\alpha=2\) to address larger lattices and confirmed the ability of the RBM NQS to describe the main orderings of the SSM in zero magnetic field.
A gradual increase of the magnetic field in the SSM leads to the formation of stable plateaus in the magnetization, each reflecting a different ground-state ordering. We have shown that the RBM with \(\alpha=2\) can capture the relevant plateaus which form for the lattice sizes studied here. Transfer learning can then be utilized to refine the results.
Figure 7: Comparison of the exact (red solid line) magnetization (a) and variational energy (b) results with VMC calculations utilizing RBM NQS with \(\alpha=2\) for lattices \(N=20,36\) and \(N=64\) as functions of external magnetic field.
To wrap it up, we have demonstrated that the SSM is a good system for benchmarking NQSs and that a simple RBM NQS can be used to address its ground state in a broad range of regimes. This opens the possibility that NQSs could be used to address some unresolved questions related to the SSM, e.g., the existence of the spin-liquid phase and the DQCP, or to precisely capture the position, size and character of additional steps in the magnetization for larger lattices. We, however, leave this for future, more focused studies.
## Acknowledgements
We thank Artur Slobodeniuk for the helpful discussions and Alberto Marmodoro for helping us to access additional computational resources.
Funding information: This research was supported by the project e-INFRA CZ (ID:90140) of the Czech Ministry of Education, Youth and Sports (M.M., J.M.). B.B. acknowledges support from the Czech Science Foundation via Project No. 19-28594X, and M.Z. acknowledges support from the Czech Science Foundation via Project No. 23-05263K.
## Appendix A Lattice tiles
To benchmark various architectures we utilize ED. We use the Lanczos algorithm implemented in the SciPy library [61]. The only square (regular) systems tractable by this implementation, without extensive utilization of expected state symmetries, are of size \(2\times 2\) and \(4\times 4\). Therefore, we also constructed so-called tilted regular-square clusters. They are depicted in Fig. 8, and each of them can be thought of as a single repeating building block of the infinite lattice. Clusters of sizes \(N=4,8,16,20\) are accessible via ED and used to benchmark our NQS implementations. However, in the text we discuss only results for \(N\geq 16\). We always use periodic boundary conditions.
Figure 8: Shapes of tilted tiles of sizes \(N=4,8,16,20\) used with periodic boundary conditions.
## Appendix B Visible biases in sRBM and pRBM
Here we show that allowing uneven visible biases for the sRBM is equivalent to constant biases when we enforce enough symmetries. Let us suppose the visible biases are kept non-constant \(\left(a^{f}\to a_{i}^{f}\right)\) in expression 9. We further suppose the condition that \(\forall i,j\ \exists g:g\sigma_{i}=\sigma_{j}\). This condition holds for every tile of the SSM.
It follows that \(\sum\limits_{g\in G}T_{g}(\mathbf{\sigma}^{z})_{i}=C\sum\limits_{j=1}^{N}\sigma_{j}^{z}=Cm^{z}\), where \(C\) is the number of unique \(g\) fulfilling the condition. The first term in (9), after the generalization \(a^{f}\to a_{i}^{f}\), can be rewritten as
\[\sum\limits_{f=1}^{F}\sum\limits_{g\in G}\sum\limits_{i=1}^{N}a_{i}^{f}\,T_{g} (\mathbf{\sigma}^{z})_{i}=\sum\limits_{f=1}^{F}\sum\limits_{i=1}^{N}a_{i}^{f}\, \sum\limits_{g\in G}T_{g}(\mathbf{\sigma}^{z})_{i}=Cm^{z}\sum\limits_{f=1}^{F}\sum \limits_{i=1}^{N}a_{i}^{f}=Cm^{z}\sum\limits_{f=1}^{F}a^{f}. \tag{18}\]
Thus non-constant biases can be replaced by a constant value without loss of generality. Therefore, visible biases cannot be built into sRBM as independent variational parameters.
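The reduction in Eq. 18 can be verified numerically. In the sketch below, \(G\) is the cyclic group of translations on a ring, for which the required condition holds with \(C=1\); site-dependent biases then enter only through their sum:

```python
import random

# Cyclic translations on a ring of N sites: for every pair (i, j) there is
# exactly one g with g(i) = j, so the constant C in the text equals 1 here.
N, F = 6, 3
random.seed(1)
sigma = [random.choice((-1, 1)) for _ in range(N)]
a = [[random.randint(-5, 5) for _ in range(N)] for _ in range(F)]  # a_i^f

def T(g, s):                      # translated configuration T_g(sigma)
    return s[g:] + s[:g]

lhs = sum(a[f][i] * T(g, sigma)[i]
          for f in range(F) for g in range(N) for i in range(N))

m = sum(sigma)                    # C * m^z with C = 1
rhs = m * sum(a[f][i] for f in range(F) for i in range(N))
print(lhs == rhs)                 # True: only the summed bias survives
```
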
On the other hand, the pRBM is not limited in this way. This can be clearly seen after rewriting both ansätze into similar forms. First the sRBM
\[\log\psi_{\mathbf{\theta}}(\mathbf{\sigma}^{z})=\log\prod\limits_{g\in G}\exp\sum \limits_{f=1}^{F}\left\{a^{f}\,\sum\limits_{i=1}^{N}T_{g}(\mathbf{\sigma}^{z})_{i} +\log\left[2\cosh\left(\sum\limits_{i=1}^{N}w_{i}^{f}\,T_{g}(\mathbf{\sigma}^{z})_ {i}+b^{f}\right)\right]\right\},\]
and then pRBM
\[\log\psi_{\mathbf{\theta}}^{G}(\mathbf{\sigma}^{z})=\log\sum\limits_{g\in G}\chi_{g^{- 1}}\exp\left\{\sum\limits_{i=1}^{N}a_{i}\,T_{g}(\mathbf{\sigma}^{z})_{i}+\sum \limits_{j=1}^{M}\log\left[2\cosh\left(\sum\limits_{i=1}^{N}W_{ij}\,T_{g}(\bm {\sigma}^{z})_{i}+b_{j}\right)\right]\right\}.\]
The sum (rather than product) of exponentials makes it impossible to use a reduction of the visible biases analogous to the one discussed above for the sRBM. Note that the usage of visible biases typically does not lead to a significant increase in the number of parameters (\(+N\)). Yet, they usually improve the convergence of the learning process for frustrated systems, because they help to set the correct sign structure of the approximated state. Therefore, it is beneficial to include the visible biases among the NQS parameters whenever possible.
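A minimal numerical illustration of the sRBM form above (plain NumPy, cyclic translations as \(G\), complex parameters): summing the shifted inputs over the group makes the log-amplitude exactly translation invariant:

```python
import numpy as np

rng = np.random.default_rng(0)
N, F = 6, 2                       # ring of N spins, F symmetric features
w = rng.normal(size=(F, N)) + 1j * rng.normal(size=(F, N))  # filters w^f
a = rng.normal(size=F) + 1j * rng.normal(size=F)            # constant a^f
b = rng.normal(size=F) + 1j * rng.normal(size=F)            # hidden b^f

def log_psi_srbm(sigma):
    # sum over the cyclic group G of translations, as in the sRBM form above
    total = 0.0
    for g in range(N):
        shifted = np.roll(sigma, g)
        total += sum(a[f] * shifted.sum()
                     + np.log(2 * np.cosh(w[f] @ shifted + b[f]))
                     for f in range(F))
    return total

sigma = rng.choice([-1, 1], size=N)
vals = [log_psi_srbm(np.roll(sigma, g)) for g in range(N)]
print(np.allclose(vals, vals[0]))   # True: amplitude is translation invariant
```
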
## Appendix C Symmetries
An infinite Shastry-Sutherland lattice has a \(p4g\) wallpaper-group symmetry whose point group is \(C_{4v}\) [62]. The character table of \(C_{4v}\) is shown in Tab. 2. Each eigenstate of the SSM on the infinite lattice must transform according to one of the rows of the character table, which, however, disregards translations and glide reflections.
For finite lattices investigated in the paper the table and the number of additional translations depends on the system size and shape (note that we are using also irregular lattices). Different small clusters can have different character tables with varying number of irreducible representations (irreps) [63, 64]. A detailed analysis of each lattice goes beyond the scope of our paper. In practical implementations, we used the automorphisms of the graph via routines implemented in
NetKet [9, 51] and a particular line of the character table (Tab. 2). For illustrative purposes, it is still useful to discuss the irreps of individual phases of the SSM on the infinite lattice.
_DS_, described by Eq. 2, changes sign when we swap the spins in a dimer. More generally, the parity of the permutation determines the sign change. Consider a \(L\times L\) square lattice, where \(L\) is even, and a reflection symmetry along its diagonal axis (\(\sigma_{v}\)) within the squares containing the \(J^{\prime}\)-bonds. This axis cuts perpendicularly through \(L/2\)\(J^{\prime}\)-bonds. For each of these bonds a sign change occurs during the reflection while the sign of other dimers does not change. A similar argument can be constructed for the \(C_{4}\) rotation. This has an important implication even for finite lattices. Namely, for regular lattices the DS ground state transforms under the trivial irrep \(A_{1}\) if \(L\) is divisible by \(4\), and under the antisymmetric irrep (corresponding to \(B_{2}\)) otherwise. This has some important consequences for the use of symmetries of some finite lattices, as discussed in the main text.
_PS_ is twofold degenerate. Leaving out the translations this means that it transforms under irrep E, which is the only irrep of dimension \(2\).
Similar analysis of _AF_ state for finite lattices is rather complicated [63, 64]. If needed, we have assumed that AF transforms under trivial irrep \(A_{1}\) (with as well as without the application of MSR).
---

arXiv:2302.13228 — Paul C. Kainen, A. Vogt — 2023-02-26 — http://arxiv.org/abs/2302.13228v1

> A Bochner integral formula is derived that represents a function in terms of weights and a parametrized family of functions. Comparison is made to pointwise formulations, norm inequalities relating pointwise and Bochner integrals are established, variation-spaces and tensor products are studied, and examples are presented. The paper develops a functional analytic theory of neural networks and shows that variation spaces are Banach spaces.

# Bochner integrals and neural networks
###### Abstract
A Bochner integral formula \(f=\mathcal{B}-\int_{Y}w(y)\Phi(y)\,d\mu(y)\) is derived that presents a function \(f\) in terms of weights \(w\) and a parametrized family of functions \(\Phi(y)\), \(y\) in \(Y\). Comparison is made to pointwise formulations, norm inequalities relating pointwise and Bochner integrals are established, \(G\)-variation and tensor products are studied, and examples are presented.
**Keywords**: Variational norm, essentially bounded, strongly measurable, Bochner integration, tensor product, \(L^{p}\) spaces, integral formula.
## 1 Introduction
A neural network utilizes data to find a function consistent with the data and with further "conceptual" data such as desired smoothness, boundedness, or integrability. The weights for a neural net and the functions embodied in the hidden units can be thought of as determining a finite sum that approximates some function. This finite sum is a kind of quadrature for an integral formula that would represent the function exactly.
This chapter uses abstract analysis to investigate neural networks. Our approach is one of _enrichment_: not only is summation replaced by integration, but also numbers are replaced by real-valued functions on an input set \(\Omega\), the functions lying in a function space \(\mathcal{X}\). The functions, in turn, are replaced by \(\mathcal{X}\)-valued measurable functions \(\Phi\) on a measure space \(Y\) of parameters. The goal is to understand approximation of functions by neural networks so that one can make effective choices of the parameters to produce a good approximation.
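In this notation the neural-network connection can be made explicit: discretizing the integral formula with illustrative quadrature nodes \(y_{k}\in Y\) and coefficients \(c_{k}\) (both hypothetical here) yields precisely a one-hidden-layer network,

\[f=\mathcal{B}\text{--}\!\!\int_{Y}w(y)\,\Phi(y)\,d\mu(y),\qquad f(x)=\int_{Y}w(y)\,\phi(x,y)\,d\mu(y)\;\approx\;\sum_{k=1}^{n}c_{k}\,\phi(x,y_{k}),\]

so the finite sum is a network with hidden units \(\phi(\cdot,y_{k})\) and output weights \(c_{k}\).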
To achieve this, we utilize Bochner integration. The idea of applying this tool to neural nets is in Girosi and Anzellotti [14] and we developed it further in Kainen and Kurkova [23]. Bochner integrals are now being used in the theory of support vector machines and reproducing kernel Hilbert spaces; see the recent book by Steinwart and Christmann [42], which has an appendix of more than 80 pages of material on operator theory and Banach-space-valued integrals. Bochner integrals are also widely used in probability theory in connection with stochastic processes of martingale-type; see, e.g., [8, 39]. The corresponding functional analytic theory may help to bridge the gap between probabilistic questions and deterministic ones, and may be well-suited for issues that arise in approximation via neural nets.
Training to replicate given numerical data does not give a useful neural network for the same reason that parrots make poor conversationalists. The phenomenon of overfitting shows that achieving fidelity to data at all costs is not desirable; see, e.g., the discussion on interpolation in our other chapter in this book (Kainen, Kurkova, and Sanguineti [45]). In approximation, we try to find a function close to the data that achieves desired criteria such as sufficient smoothness, decay at
infinity, etc. Thus, a method of integration which produces functions _in toto_ rather than numbers could be quite useful.
Enrichment has lately been utilized by applied mathematicians to perform image analysis and even to deduce global properties of sensor networks from local information. For instance, the Euler characteristic, ordinarily thought of as a discrete invariant, can be made into a variable of integration [7]. In the case of sensor networks, such an analysis can lead to effective computations in which theory determines a minimal set of sensors [40].
By modifying the traditional neural net focus on training sets of data so that we get to families of functions in a natural way, we aim to achieve methodological insight. Such a framework may lead to artificial neural networks capable of performing more sophisticated tasks.
The main result of this chapter is Theorem 5.1, which characterizes functions to be approximated in terms of pointwise integrals and Bochner integrals, and provides inequalities that relate corresponding norms. The relationship between integral formulas and neural networks has long been noted; e.g., [20, 6, 37, 13, 34, 29]. We examine integral formulas in depth and extend their significance to a broader context.
An earlier version of the Main Theorem, including the bounds on variational norm by the \(L^{1}\)-norm of the weight function in a corresponding integral formula, was given in [23] and it also utilized _functional_ (i.e., Bochner) integration. However, the version here is more general and further shows that if \(\phi\) is a real-valued function on \(\Omega\times Y\) (the cartesian product of input and parameter spaces), then the associated map \(\Phi\) which maps the measure space to the Banach space defined by \(\Phi(y)(x)=\phi(x,y)\) is measurable; cf. [42, Lemma 4.25, p. 125] where \(\Phi\) is the "feature map."
Other proof techniques are available for parts of the Main Theorem. In particular, Kurkova [28] gave a different argument for part (iv) of the theorem, using a characterization of variation via peak functionals [31] as well as the theorem of Mazur (Theorem 13.1) used in the proof of Lemma 3.4. But the Bochner integral approach reveals some unexpected aspects of functional approximation which may be relevant for neural network applications.
Furthermore, the treatment of analysis and topology utilizes a number of basic theorems from the literature and provides an introduction to functional analysis motivated by its applicability. This is a case where neural nets provide a fresh perspective on classical mathematics. Indeed, theoretical results proved here were obtained in an attempt to better understand neural networks.
An outline of the paper is as follows: In section 2 we discuss variational norms; sections 3 and 4 present needed material on Bochner integrals. The Main Theorem (Theorem 5.1) on integral formulas is given in Section 5. In section 6 we show how to apply the Main Theorem to an integral formula for the Bessel potential function in terms of Gaussians. In section 7 we show how this leads to an inequality involving Gamma functions and provide an alternative proof by classical means. Section 8 interprets and extends the Main Theorem in the language of tensor products. Using tensor products, we replace individual \(\mathcal{X}\)-valued \(\Phi\)'s by families \(\{\Phi_{j}:j\in J\}\) of such functions. This allows more nuanced representation of the function to be approximated. In section 9 we give a detailed example of concepts related to \(G\)-variation, while section 10 considers the relationship between pointwise integrals and evaluation of the corresponding Bochner integrals. Remarks on future directions are in section 11, and the chapter concludes with two appendices and references.
## 2 Variational norms and completeness
We assume that the reader has a reasonable acquaintance with functional analysis but have attempted to keep this chapter self-contained. Notations and basic definitions are given in Appendix I, while Appendix II has the precise statement of several important theorems from the literature which will be needed in our development.
Throughout this chapter, all linear spaces are over the reals \(\mathbb{R}\). For \(A\) any subset of a linear space \(X\), \(b\in X\), and \(r\in\mathbb{R}\),
\[b+rA:=\{b+ra\,|\,a\in A\}=\{y\in X:y=b+ra,a\in A\}.\]
Also, we sometimes use the abbreviated notation
\[\|\cdot\|_{1}=\|\cdot\|_{L^{1}(Y,\mu)}\;\;\mbox{and}\;\;\|\cdot\|_{\infty}=\| \cdot\|_{L^{\infty}(Y,\mu;{\cal X})}; \tag{1}\]
the standard notations on the right are explained in sections 12 and 4, resp. The symbol "\(\ni\)" stands for "such that."
A set \(G\) in a normed linear space \({\cal X}\) is _fundamental_ (with respect to \({\cal X}\)) if \({\rm cl}_{{\cal X}}\) (span \(G\)) = \({\cal X}\), where closure depends only on the topology induced by the norm. We call \(G\)_bounded_ with respect to \({\cal X}\) if
\[s_{G,{\cal X}}:=\sup_{g\in G}\|g\|_{{\cal X}}<\infty.\]
We now review \(G\)-_variation_ norms. These norms, which arise in connection with approximation of functions, were first considered by Barron [5], [6]. He treated a case where \(G\) is a family of characteristic functions of sets satisfying a special condition. The general concept, formulated by Kurkova [24], has been developed in such papers as [30, 26, 27, 15, 16, 17].
Consider the set
\[B_{G,{\cal X}}:={\rm cl}_{{\cal X}}\;(\mbox{conv}(\;\pm G)),\mbox{where}\;\pm G :=G\cup-G. \tag{2}\]
This is a symmetric, closed, convex subset of \({\cal X}\), with Minkowski functional
\[\|f\|_{G,{\cal X}}:=\inf\{\lambda>0:f/\lambda\in B_{G,{\cal X}}\}.\]
The subset \({\cal X}_{G}\) of \({\cal X}\) on which this functional is finite is given by
\[{\cal X}_{G}:=\{f\in{\cal X}:\exists\lambda>0\,\ni\,f/\lambda\in B_{G,{\cal X}}\}.\]
If \(G\) is bounded, then \(\|\cdot\|_{G,{\cal X}}\) is a norm on \({\cal X}_{G}\). In general \({\cal X}_{G}\) may be a proper subset of \({\cal X}\) even if \(G\) is bounded and fundamental w.r.t. \({\cal X}\). See the example at the end of this section. The inclusion \(\iota:{\cal X}_{G}\subseteq{\cal X}\) is linear and for every \(f\in{\cal X}_{G}\)
\[\|f\|_{{\cal X}}\leq\|f\|_{G,{\cal X}}\,s_{G,{\cal X}} \tag{3}\]
Indeed, if \(f/\lambda\in B_{G,{\cal X}}\), then \(f/\lambda\) is a convex combination of elements of \({\cal X}\)-norm at most \(s_{G,{\cal X}}\), so \(\|f\|_{{\cal X}}\leq\lambda\,s_{G,{\cal X}}\) establishing (3) by definition of variational norm. Hence, if \(G\) is bounded in \({\cal X}\), the operator \(\iota\) is bounded with operator norm not exceeding \(s_{G,{\cal X}}\).
**Proposition 2.1**: _Let \(G\) be a nonempty subset of a normed linear space \({\cal X}\). Then_
_(i)_ \(\mbox{\rm span}\,G\subseteq{\cal X}_{G}\subseteq{\rm cl}_{{\cal X}}\,\mbox{ \rm span}\,G\)_;_
_(ii)_ \(G\) _is fundamental if and only if_ \({\cal X}_{G}\) _is dense in_ \({\cal X}\)_;_
_(iii) For_ \(G\) _bounded and_ \({\cal X}\) _complete,_ \(({\cal X}_{G},\|\cdot\|_{G,{\cal X}})\) _is a Banach space._
**Proof.** (i) Let \(f\in\mbox{\rm span}\,G\), then \(f=\sum_{i=1}^{n}a_{i}g_{i}\), for real numbers \(a_{i}\) and \(g_{i}\in G\). We assume the \(a_{i}\) are not all zero since \(0\) is in \({\cal X}_{G}\). Then \(f=\lambda\sum_{i=1}^{n}|a_{i}|/\lambda(\pm g_{i})\), where \(\lambda=\sum_{i=1}^{n}|a_{i}|\). Thus, \(f\) is in \(\lambda\mbox{\rm conv}(\pm G)\subseteq\lambda B_{G,{\cal X}}\). So \(\|f\|_{G,{\cal X}}\leq\lambda\) and \(f\) is in \({\cal X}_{G}\).
Likewise if \(f\) is in \({\cal X}_{G}\), then for some \(\lambda>0\), \(f/\lambda\) is in
\[B_{G,{\cal X}}={\rm cl}_{{\cal X}}(\mbox{\rm conv}(\pm G))\subseteq{\rm cl}_{{ \cal X}}(\mbox{\rm span}(G)),\]
so \(f\) is in \({\rm cl}_{{\cal X}}(\mbox{\rm span}(G))\).
(ii) Suppose \(G\) is fundamental. Then \({\cal X}={\rm cl}_{{\cal X}}(\mbox{\rm span}\,G)={\rm cl}_{{\cal X}}({\cal X} _{G})\) by part (i). Conversely, if \({\cal X}_{G}\) is dense in \({\cal X}\), then \({\cal X}={\rm cl}_{{\cal X}}({\cal X}_{G})\subseteq{\rm cl}_{{\cal X}}(\mbox{ \rm span}\,G)\subseteq{\cal X}\), and \(G\) is fundamental.
(iii) Let \(\{f_{n}\}\) be a Cauchy sequence in \(\mathcal{X}_{G}\). By (3), \(\{f_{n}\}\) is also a Cauchy sequence in \(\mathcal{X}\) and hence has a limit \(f\) in \(\mathcal{X}\). The sequence \(\{\|f_{n}\|_{G,\mathcal{X}}\}\) is bounded; that is, there is a positive number \(M\) such that \(f_{n}/M\in B_{G,\mathcal{X}}\) for all \(n\). Since \(B_{G,\mathcal{X}}\) is closed in \(\mathcal{X}\), \(f/M\) is also in \(B_{G,\mathcal{X}}\). Hence \(\|f\|_{G,\mathcal{X}}\leq M\) and \(f\) is in \(\mathcal{X}_{G}\). Now given \(\epsilon>0\), choose a positive integer \(N\) such that \(\|f_{n}-f_{k}\|_{G,\mathcal{X}}<\epsilon\) for \(n,k\geq N\). Fix \(n\geq N\) and consider a variable integer \(k\geq N\). Then \(\|f_{k}-f_{n}\|_{G,\mathcal{X}}<\epsilon\), so \((f_{k}-f_{n})/\epsilon\in B_{G,\mathcal{X}}\), and \(f_{k}\in f_{n}+\epsilon B_{G,\mathcal{X}}\) for all \(k\geq N\). But \(f_{n}+\epsilon B_{G,\mathcal{X}}\) is closed in \(\mathcal{X}\). Hence \(f\in f_{n}+\epsilon B_{G,\mathcal{X}}\), and \(\|f-f_{n}\|_{G,\mathcal{X}}\leq\epsilon\). So the sequence converges to \(f\) in \(\mathcal{X}_{G}\). \(\Box\)
The following example illustrates several of the above concepts. Take \(\mathcal{X}\) to be a real separable Hilbert space with orthonormal basis \(\{e_{n}:n=1,2,...\}\). Let \(G=\{e_{n}:n=1,2,...\}\). Then
\[B_{G,\mathcal{X}}=\left\{\sum_{n\geq 1}c_{n}e_{n}-\sum_{n\geq 1}d_{n}e_{n}\,: \forall n,\,\,c_{n}\geq 0,d_{n}\geq 0,\sum_{n\geq 1}(c_{n}+d_{n})=1\right\}.\]
Now \(f\in\mathcal{X}\) is of the form \(\sum_{n\geq 1}a_{n}e_{n}\) where \(\|f\|_{\mathcal{X}}=\sqrt{\sum_{n\geq 1}a_{n}^{2}}\), and if \(f\in\mathcal{X}_{G}\), then \(a_{n}=\lambda(c_{n}-d_{n})\) for all \(n\) and suitable \(c_{n},d_{n}\). The minimal \(\lambda\) can be obtained by taking \(a_{n}=\lambda c_{n}\) when \(a_{n}\geq 0\), and \(a_{n}=-\lambda d_{n}\) when \(a_{n}<0\). It then follows that \(\|f\|_{G,\mathcal{X}}=\sum_{n\geq 1}|a_{n}|\). Hence when \(\mathcal{X}\) is isomorphic to \(\ell_{2}\), \(\mathcal{X}_{G}\) is isomorphic to \(\ell_{1}\). As \(G\) is fundamental, by part (ii) above, the closure of \(\ell_{1}\) in \(\ell_{2}\) is \(\ell_{2}\). This provides an example where \(\mathcal{X}_{G}\) is not a closed subspace of \(\mathcal{X}\) and so, while it is a Banach space w.r.t. the variational norm, it is not complete in the ambient-space norm.
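The proper inclusion just described is easy to see numerically. The sketch below (our illustration, not part of the text) truncates \(f=\sum_{n}(1/n)e_{n}\): the \(\ell_{2}\)-norms of the truncations stay bounded while their variation (\(\ell_{1}\)) norms diverge, so the limit lies in \(\mathcal{X}\) but not in \(\mathcal{X}_{G}\).

```python
import math

# X = l2 with G the standard basis vectors; by the computation above,
# ||f||_{G,X} is the l1-norm of the coefficients and ||f||_X is the l2-norm.
def l2_norm(a):
    return math.sqrt(sum(x * x for x in a))

def variation_norm(a):  # = l1-norm in this example
    return sum(abs(x) for x in a)

# Truncations of f = sum_n (1/n) e_n: the l2-norms approach pi/sqrt(6),
# while the l1-norms grow like log N, so f is in X but not in X_G.
for N in (10, 100, 1000):
    a = [1.0 / n for n in range(1, N + 1)]
    print(N, round(l2_norm(a), 4), round(variation_norm(a), 4))
```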
## 3 Bochner integrals
The Bochner integral replaces numbers with functions and is a broad-ranging extension of the Lebesgue integral, from real-valued functions to functions with values in an arbitrary Banach space. Key definitions and theorems are summarized here for convenience, following the treatment in [44] (cf. [33]). Bochner integrals are used here (as in [23]) in order to prove a bound on variational norm.
Let \((Y,\mu)\) be a measure space. Let \(\mathcal{X}\) be a Banach space with norm \(\|\cdot\|_{\mathcal{X}}\). A function \(s:Y\rightarrow\mathcal{X}\) is _simple_ if it has a finite set of nonzero values \(f_{j}\in\mathcal{X}\), each on a measurable subset \(P_{j}\) of \(Y\) with \(\mu(P_{j})<\infty\), \(1\leq j\leq m\), and the \(P_{j}\) are pairwise-disjoint. Equivalently, a function \(s\) is simple if it can be written in the following form:
\[s=\sum_{j=1}^{m}\kappa(f_{j})\chi_{P_{j}}, \tag{4}\]
where \(\kappa(f_{j}):Y\rightarrow\mathcal{X}\) denotes the constant function with value \(f_{j}\) and \(\chi_{P}\) denotes the characteristic function of a subset \(P\) of \(Y\). This decomposition is nonunique and we identify two functions if they agree \(\mu\)-almost everywhere - i.e., the subset of \(Y\) on which they disagree has \(\mu\)-measure zero.
Define an \(\mathcal{X}\)-valued function \(I\) on the simple functions by setting for \(s\) of form (4)
\[I(s,\mu):=\sum_{j=1}^{m}\mu(P_{j})f_{j}\in\mathcal{X}.\]
This is independent of the decomposition of \(s\)[44, pp.130-132]. A function \(h:Y\rightarrow\mathcal{X}\) is _strongly measurable_ (w.r.t. \(\mu\)) if there exists a sequence \(\{s_{k}\}\) of simple functions such that for \(\mu\)-a.e. \(y\in Y\)
\[\lim_{k\rightarrow\infty}\|s_{k}(y)-h(y)\|_{\mathcal{X}}=0.\]
A function \(h:Y\to\mathcal{X}\) is _Bochner integrable_ (with respect to \(\mu\)) if it is strongly measurable and there exists a sequence \(\{s_{k}\}\) of simple functions \(s_{k}:Y\to\mathcal{X}\) such that
\[\lim_{k\to\infty}\int_{Y}\|s_{k}(y)-h(y)\|_{\mathcal{X}}d\mu(y)=0. \tag{5}\]
If \(h\) is strongly measurable and (5) holds, then the sequence \(\{I(s_{k},\mu)\}\) is Cauchy and by completeness converges to an element in \(\mathcal{X}\). This element, which is independent of the sequence of simple functions satisfying (5), is called the _Bochner integral of \(h\)_ (w.r.t. \(\mu\)) and denoted
\[I(h,\mu)\ \ \mbox{or}\ \ \mathcal{B}-\int_{Y}h(y)d\mu(y).\]
Let \(\mathcal{L}^{1}(Y,\mu;\mathcal{X})\) denote the linear space of all strongly measurable functions from \(Y\) to \(\mathcal{X}\) which are Bochner integrable w.r.t. \(\mu\); let \(L^{1}(Y,\mu;\mathcal{X})\) be the corresponding set of equivalence classes (modulo \(\mu\)-a.e. equality). It is easily shown that equivalent functions have the same Bochner integral. Then the following elegant characterization holds.
**Theorem 3.1** (Bochner): _Let \((\mathcal{X},\|\cdot\|_{\mathcal{X}})\) be a Banach space and \((Y,\mu)\) a measure space. Let \(h:Y\to\mathcal{X}\) be strongly measurable. Then_
\[h\in\mathcal{L}^{1}(Y,\mu;\mathcal{X})\mbox{ if and only if }\int_{Y}\|h(y)\|_{ \mathcal{X}}d\mu(y)<\infty.\]
A consequence of this theorem is that \(I:L^{1}(Y,\mu;\mathcal{X})\to\mathcal{X}\) is a continuous linear operator and
\[\|I(h,\mu)\|_{\mathcal{X}}=\left\|\mathcal{B}-\int_{Y}h(y)\,d\mu(y)\right\|_{ \mathcal{X}}\leq\|h\|_{L^{1}(Y,\mu;\mathcal{X})}:=\int_{Y}\|h(y)\|_{\mathcal{ X}}d\mu(y). \tag{6}\]
In particular, for a simple function \(s\) of the form (4), the Bochner norm \(\|s\|_{L^{1}(Y,\mu;\mathcal{X})}\) is \(\sum_{j}\mu(P_{j})\|f_{j}\|_{\mathcal{X}}\).
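For a concrete finite illustration of these definitions (ours, taking \(\mathcal{X}=\mathbb{R}^{3}\) with the Euclidean norm as the Banach space), both \(I(s,\mu)\) and the Bochner norm of a simple function can be computed directly:

```python
import math

# A simple function s = sum_j f_j * chi_{P_j}, with values f_j in X = R^3
# on pairwise-disjoint sets P_j of measure mu(P_j). Its Bochner integral is
# I(s, mu) = sum_j mu(P_j) f_j, and its Bochner norm is
# sum_j mu(P_j) ||f_j||_X.
def bochner_integral_simple(values, measures):
    dim = len(values[0])
    return [sum(m * f[i] for m, f in zip(measures, values)) for i in range(dim)]

def bochner_norm_simple(values, measures):
    return sum(m * math.sqrt(sum(x * x for x in f))
               for m, f in zip(measures, values))

f_vals = [[1.0, 0.0, 2.0], [0.0, 3.0, 1.0]]
mu_vals = [0.5, 2.0]
print(bochner_integral_simple(f_vals, mu_vals))  # 0.5*f_1 + 2.0*f_2
print(bochner_norm_simple(f_vals, mu_vals))
```

The printed values also respect inequality (6): the \(\mathcal{X}\)-norm of the integral is at most the Bochner norm.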
For \(Y\) a measure space and \(\mathcal{X}\) a Banach space, \(h:Y\to\mathcal{X}\) is _weakly measurable_ if for every continuous linear functional \(F\) on \(X\) the composite real-valued function \(F\circ h\) is measurable [43, pp. 130-134]. If \(h\) is measurable, then it is weakly measurable since measurable followed by continuous is measurable: for \(U\) open in \(\mathbb{R}\), \((F\circ h)^{-1}(U)=h^{-1}(F^{-1}(U))\).
Recall that a topological space is _separable_ if it has a countable dense subset. Let \(\lambda\) denote Lebesgue measure on \(\mathbb{R}^{d}\) and let \(\Omega\subseteq\mathbb{R}^{d}\) be \(\lambda\)-measurable, \(d\geq 1\). Then \(L^{q}(\Omega,\lambda)\) is separable when \(1\leq q<\infty\); e.g., [36, pp. 208]. A function \(h:Y\to\mathcal{X}\) is \(\mu\)_-almost separably valued_ (\(\mu\)-a.s.v.) if there exists a \(\mu\)-measurable subset \(Y_{0}\subset Y\) with \(\mu(Y_{0})=0\) and \(h(Y\setminus Y_{0})\) is a separable subset of \(\mathcal{X}\).
**Theorem 3.2** (Pettis): _Let \((\mathcal{X},\|\cdot\|_{\mathcal{X}})\) be a Banach space and \((Y,\mu)\) a measure space. Suppose \(h:Y\to\mathcal{X}\). Then \(h\) is strongly measurable if and only if \(h\) is weakly measurable and \(\mu\)-a.s.v._
The following basic result (see, e.g., [9]) was later extended by Hille to the more general class of closed operators. But we only need the result for bounded linear functionals, in which case the Bochner integral coincides with ordinary integration.
**Theorem 3.3**: _Let \((Y,\nu)\) be a measure space, let \(\mathcal{X}\), \(\mathcal{X}^{\prime}\) be Banach spaces, and let \(h\in\mathcal{L}^{1}(Y,\nu;\mathcal{X})\). If \(T:\mathcal{X}\to\mathcal{X}^{\prime}\) is a bounded linear operator, then \(T\circ h\in\mathcal{L}^{1}(Y,\nu;\mathcal{X}^{\prime})\) and_
\[T\left(\mathcal{B}-\int_{Y}h(y)\,d\nu(y)\right)=\mathcal{B}-\int_{Y}(T\circ h) (y)\,d\nu(y).\]
There is a mean-value theorem for Bochner integrals (Diestel and Uhl [12, Lemma 8, p. 48]). We give their argument with a slightly clarified reference to the Hahn-Banach theorem.
**Lemma 3.4**: _Let \((Y,\nu)\) be a finite measure space, let \(X\) be a Banach space, and let \(h:Y\to{\cal X}\) be Bochner integrable w.r.t. \(\nu\). Then_
\[{\cal B}-\int_{Y}h(y)\,d\nu(y)\;\in\;\nu(Y)\;{\rm cl}_{X}({\rm conv}(\{\pm h(y) :y\in Y\}).\]
**Proof.** Without loss of generality, \(\nu(Y)=1\). Suppose \(f:=I(h,\nu)\notin{\rm cl}_{X}({\rm conv}(\{\pm h(y):y\in Y\}))\). By a consequence of the Hahn-Banach theorem (given as Theorem 13.1 in Appendix II below), there is a continuous linear functional \(F\) on \(X\) such that \(F(f)>\sup_{y\in Y}F(h(y))\). Hence, by Theorem 3.3,
\[\sup_{y\in Y}F(h(y))\geq\int_{Y}F(h(y))d\nu(y)=F(f)>\sup_{y\in Y}F(h(y)),\]
which is absurd. \(\Box\)
## 4 Spaces of Bochner integrable functions
In this section, we derive a few consequences of the results from the previous section which we shall need below.
A measurable function \(h\) from a measure space \((Y,\nu)\) to a normed linear space \({\cal X}\) is called _essentially bounded_ (w.r.t. \(\nu\)) if there exists a \(\nu\)-null set \(N\) for which
\[\sup_{y\in Y\setminus N}\|h(y)\|_{\cal X}<\infty.\]
Let \({\cal L}^{\infty}(Y,\nu;{\cal X})\) denote the linear space of all strongly measurable, essentially bounded functions from \((Y,\nu)\) to \({\cal X}\). Let \(L^{\infty}(Y,\nu;{\cal X})\) be its quotient space mod the relation of equality \(\nu\)-a.e. This is a Banach space with norm
\[\|h\|_{L^{\infty}(Y,\nu;X)}:=\inf\{B\geq 0:\exists\;\nu\mbox{-null }N\subset Y\;\ni\;\|h(y)\|_{X}\leq B,\;\forall y\in Y \setminus N\}.\]
To simplify notation, we sometimes write \(\|h\|_{\infty}\) for \(\|h\|_{L^{\infty}(Y,\nu;X)}\). Note that if \(\|h\|_{\infty}=c\), then \(\|h(y)\|_{\cal X}\leq c\) for \(\nu\)-a.e. \(y\). Indeed, for positive integers \(k\), \(\|h(y)\|_{\cal X}\leq c+(1/k)\) for \(y\) not in a set \(N_{k}\) of measure zero, so \(\|h(y)\|_{\cal X}\leq c\) for \(y\) not in the union \(\bigcup_{k\geq 1}N_{k}\), which is also a set of measure zero.
We also have a useful fact whose proof is immediate.
**Lemma 4.1**: _For every measure space \((Y,\mu)\) and Banach space \({\cal X}\), the natural map \(\kappa_{\cal X}:{\cal X}\to L^{\infty}(Y,\mu;{\cal X})\) associating to each element \(g\in{\cal X}\) the constant function from \(Y\) to \({\cal X}\) given by \((\kappa_{\cal X}(g))(y)\equiv g\) for all \(y\) in \(Y\) is an isometric linear embedding._
**Lemma 4.2**: _Let \({\cal X}\) be a separable Banach space, let \((Y,\mu)\) be a measure space, and let \(w:Y\to{\mathbb{R}}\) and \(\Psi:Y\to{\cal X}\) be \(\mu\)-measurable functions. Then \(w\Psi\) is strongly measurable._
**Proof.** By definition, \(w\Psi\) is the function from \(Y\) to \({\cal X}\) defined by
\[w\Psi:\;y\mapsto w(y)\Psi(y),\]
where the multiplication is that of a Banach space element by a real number. Then \(w\Psi\) is measurable because it is obtained from a pair of measurable functions by applying scalar multiplication which
is continuous. Hence, by separability, Pettis' Theorem 3.2, and the fact that measurable implies weakly measurable, we have strong measurability for \(w\Psi\) (cf. [33, Lemma 10.3]). \(\Box\)
If \((Y,\nu)\) is a finite measure space, \(\mathcal{X}\) is a Banach space, and \(h:Y\to\mathcal{X}\) is strongly measurable and essentially bounded, then \(h\) is Bochner integrable by Theorem 3.1. The following lemma, which follows from Lemma 4.2, allows us to weaken the hypothesis on the function by further constraining the space \(\mathcal{X}\).
**Lemma 4.3**: _Let \((Y,\nu)\) be a finite measure space, \(\mathcal{X}\) a separable Banach space, and \(h:Y\to\mathcal{X}\) be \(\nu\)-measurable and essentially bounded w.r.t. \(\nu\). Then \(h\in\mathcal{L}^{1}(Y,\nu;\mathcal{X})\) and_
\[\int_{Y}\|h(y)\|_{\mathcal{X}}d\nu(y)\leq\nu(Y)\|h\|_{L^{\infty}(Y,\nu; \mathcal{X})}.\]
Let \(w\in\mathcal{L}^{1}(Y,\mu)\), and let \(\mu_{w}\) be defined for \(\mu\)-measurable \(S\subseteq Y\) by \(\mu_{w}(S):=\int_{S}|w(y)|d\mu(y)\). For \(t\neq 0\), \(\operatorname{sgn}(t):=t/|t|\).
**Theorem 4.4**: _Let \((Y,\mu)\) be a measure space, \(\mathcal{X}\) a separable Banach space; let \(w\in\mathcal{L}^{1}(Y,\mu)\) be nonzero \(\mu\)-a.e., let \(\mu_{w}\) be the measure defined above, and let \(\Phi:Y\to\mathcal{X}\) be \(\mu\)-measurable. If one of the Bochner integrals_
\[\mathcal{B}-\int_{Y}w(y)\Phi(y)d\mu(y),\ \ \mathcal{B}-\int_{Y}\operatorname{sgn} (w(y))\Phi(y)d\mu_{w}(y)\]
_exists, then both exist and are equal._
**Proof.** By Lemma 4.2, both \(w\Phi\) and \((\operatorname{sgn}\circ w)\Phi\) are strongly measurable. Hence, by Theorem 3.1, the respective Bochner integrals exist if and only if the \(\mathcal{X}\)-norms of the respective integrands have finite ordinary integral. But
\[\int_{Y}\|[(\operatorname{sgn}\circ w)\Phi](y)\|_{\mathcal{X}}d\mu_{w}(y)= \int_{Y}\|w(y)\Phi(y)\|_{\mathcal{X}}d\mu(y), \tag{7}\]
so the Bochner integral \(I((\operatorname{sgn}\circ w)\Phi,\mu_{w})\) exists exactly when \(I(w\Phi,\mu)\) does. Further, the respective Bochner integrals are equal since for any continuous linear functional \(F\) in \(\mathcal{X}^{*}\), by Theorem 3.3
\[F\left(\mathcal{B}-\int_{Y}w(y)\Phi(y)d\mu(y)\right)=\int_{Y}F(w(y)\Phi(y))d \mu(y)\]
\[=\int_{Y}w(y)F(\Phi(y))d\mu(y)=\int_{Y}sgn(w(y))|w(y)|F(\Phi(y))d\mu(y)\]
\[=\int_{Y}sgn(w(y))F(\Phi(y))d\mu_{w}(y)=\int_{Y}F(sgn(w(y))\Phi(y))d\mu_{w}(y)\]
\[=F\left(\mathcal{B}-\int_{Y}sgn(w(y))\Phi(y)d\mu_{w}(y)\right).\]
\(\Box\)
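In a discrete toy setting (counting measure on a finite \(Y\), with \(\mathcal{X}=\mathbb{R}^{2}\); our illustration, not part of the text), the equality in Theorem 4.4 amounts to regrouping signs:

```python
# Discrete toy check of Theorem 4.4: for counting measure mu on a finite Y,
# sum_y w(y) Phi(y) equals sum_y sgn(w(y)) Phi(y) |w(y)|, i.e. the integral
# w.r.t. the reweighted measure mu_w. Phi takes values in X = R^2 here.
def sgn(t):
    return t / abs(t)  # assumes t != 0, as in the theorem's hypothesis

w = {1: 0.5, 2: -2.0, 3: 1.5}
Phi = {1: (1.0, 0.0), 2: (2.0, 1.0), 3: (0.0, 4.0)}

lhs = [sum(w[y] * Phi[y][i] for y in w) for i in (0, 1)]
rhs = [sum(sgn(w[y]) * Phi[y][i] * abs(w[y]) for y in w) for i in (0, 1)]
print(lhs, rhs)
```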
**Corollary 4.5**: _Let \((Y,\mu)\) be a \(\sigma\)-finite measure space, \(\mathcal{X}\) a separable Banach space, \(w:Y\to\mathbb{R}\) be in \(\mathcal{L}^{1}(Y,\mu)\) and \(\Phi:Y\to\mathcal{X}\) be in \(\mathcal{L}^{\infty}(Y,\mu;\mathcal{X})\). Then \(w\Phi\) is Bochner integrable w.r.t. \(\mu\)._
**Proof.** By Lemma 4.2, \((\operatorname{sgn}\circ w)\Phi\) is strongly measurable, and Lemma 4.3 then implies that the Bochner integral \(I((\operatorname{sgn}\circ w)\Phi,\mu_{w})\) exists since \(\mu_{w}(Y)=\|w\|_{L^{1}(Y,\mu)}<\infty\). So \(w\Phi\) is Bochner integrable by Theorem 4.4.
## 5 Main theorem
In the next result, we show that certain types of integrands yield integral formulas for functions \(f\) in a Banach space of \(L^{p}\)-type both pointwise and at the level of Bochner integrals. Furthermore, the variational norm of \(f\) is shown to be bounded by the \(L^{1}\)-norm of the weight function from the integral formula. Equations (9) and (10) and part (iv) of this theorem were derived in a similar fashion by one of us with Kurkova in [23] under more stringent hypotheses; see also [13, eq. (12)].
**Theorem 5.1**: _Let \((\Omega,\rho)\), \((Y,\mu)\) be \(\sigma\)-finite measure spaces, let \(w\) be in \({\cal L}^{1}(Y,\mu)\), let \({\cal X}=L^{q}(\Omega,\rho)\), \(q\in[1,\infty)\), be separable, let \(\phi:\Omega\times Y\to\mathbb{R}\) be \(\rho\times\mu\)-measurable, let \(\Phi:Y\to{\cal X}\) be defined for each \(y\) in \(Y\) by \(\Phi(y)(x):=\phi(x,y)\) for \(\rho\)-a.e. \(x\in\Omega\) and suppose that for some \(M<\infty\), \(\|\Phi(y)\|_{\cal X}\leq M\) for \(\mu\)-a.e. \(y\). Then the following hold:_
_(i) For \(\rho\)-a.e. \(x\in\Omega\), the integral \(\int_{Y}w(y)\phi(x,y)d\mu(y)\) exists and is finite._
_(ii) The function \(f\) defined by_
\[f(x)=\int_{Y}w(y)\phi(x,y)d\mu(y) \tag{8}\]
_is in \({\cal L}^{q}(\Omega,\rho)\) and its equivalence class, also denoted by \(f\), is in \(L^{q}(\Omega,\rho)={\cal X}\) and satisfies_
\[\|f\|_{\cal X}\leq\|w\|_{L^{1}(Y,\mu)}\;M. \tag{9}\]
_(iii) The function \(\Phi\) is measurable and hence in \({\cal L}^{\infty}(Y,\mu;{\cal X})\), and \(f\) is the Bochner integral of \(w\Phi\) w.r.t. \(\mu\), i.e.,_
\[f={\cal B}-\int_{Y}(w\Phi)(y)d\mu(y). \tag{10}\]
_(iv) For \(G=\{\Phi(y):\|\Phi(y)\|_{\cal X}\leq\|\Phi\|_{L^{\infty}(Y,\mu;{\cal X})}\}\), \(f\) is in \({\cal X}_{G}\), and_
\[\|f\|_{G,{\cal X}}\leq\|w\|_{L^{1}(Y,\mu)} \tag{11}\]
_and as in (1)_
\[\|f\|_{\cal X}\leq\|f\|_{G,{\cal X}}s_{G,{\cal X}}\leq\|w\|_{1}\|\Phi\|_{ \infty}. \tag{12}\]
**Proof.** (i) Consider the function \((x,y)\longmapsto|w(y)||\phi(x,y)|^{q}\). This is a well-defined \(\rho\times\mu\)-measurable function on \(\Omega\times Y\). Furthermore its repeated integral
\[\int_{Y}\int_{\Omega}\;|w(y)||\phi(x,y)|^{q}d\rho(x)d\mu(y)\]
exists and is bounded by \(\|w\|_{1}M^{q}\), since \(\Phi(y)\in L^{q}(\Omega,\rho)\) with \(\|\Phi(y)\|_{q}^{q}\leq M^{q}\) for \(\mu\)-a.e. \(y\), and \(w\in L^{1}(Y,\mu)\). By Fubini's Theorem 13.2, the function \(y\longmapsto|w(y)||\phi(x,y)|^{q}\) is in \(L^{1}(Y,\mu)\) for \(\rho\)-a.e. \(x\). But the inequality
\[|w(y)||\phi(x,y)|\leq\max\{|w(y)||\phi(x,y)|^{q},|w(y)|\}\leq(|w(y)||\phi(x,y)| ^{q}+|w(y)|)\]
shows that the function \(y\longmapsto|w(y)||\phi(x,y)|\) is dominated by the sum of two integrable functions. Hence the integrand in the definition of \(f(x)\) is integrable for \(\rho\)-a.e. \(x\), and \(f\) is well-defined almost everywhere.
(ii) The function \(G(u)=u^{q}\) is a convex function for \(u\geq 0\). Accordingly by Jensen's inequality (Theorem 13.3 below),
\[G\left(\int_{Y}|\phi(x,y)|d\sigma(y)\right)\leq\;\int_{Y}G(|\phi(x,y)|)d\sigma (y)\]
provided both integrals exist and \(\sigma\) is a probability measure on the measurable space \(Y\). We take \(\sigma\) to be defined by the familiar formula:
\[\sigma(A)=\frac{\int_{A}|w(y)|d\mu(y)}{\int_{Y}|w(y)|d\mu(y)}\]
for \(\mu\)-measurable sets \(A\) in \(Y\), so that integration with respect to \(\sigma\) reduces to a scale factor times integration of \(|w(y)|d\mu(y)\). Since we have established that both \(|w(y)||\phi(x,y)|\) and \(|w(y)||\phi(x,y)|^{q}\) are integrable with respect to \(\mu\) for a.e. \(x\), we obtain:
\[|f(x)|^{q}\leq\|w\|_{1}^{q}G(\int_{Y}|\phi(x,y)|d\sigma(y))\leq\|w\|_{1}^{q} \int_{Y}G(|\phi(x,y)|)d\sigma(y)\]
\[=\|w\|_{1}^{q-1}\int_{Y}|w(y)||\phi(x,y)|^{q}d\mu(y)\]
for a.e. \(x\). But we can now integrate both sides with respect to \(d\rho(x)\) over \(\Omega\), because of the integrability noted above in connection with Fubini's Theorem. Thus \(f\in\mathcal{X}=L^{q}(\Omega,\rho)\) and \(\|f\|_{\mathcal{X}}^{q}\leq\|w\|_{1}^{q}\,M^{q}\), again interchanging the order of integration.
(iii) First we show that \(\Phi^{-1}\) of the open ball centered at \(g\) of radius \(\varepsilon\), \(B(g,\varepsilon):=\{y:\|\Phi(y)-g\|_{\mathcal{X}}<\varepsilon\}\), is a \(\mu\)-measurable subset of \(Y\) for each \(g\) in \(\mathcal{X}\) and \(\varepsilon>0\). Note that
\[\|\Phi(y)-g\|_{\mathcal{X}}^{q}=\int_{\Omega}|\phi(x,y)-g(x)|^{q}d\rho(x)\]
for all \(y\) in \(Y\), where \(x\mapsto\phi(x,y)\) and \(x\mapsto g(x)\) are \(\rho\)-measurable functions representing the elements \(\Phi(y)\) and \(g\) of \(\mathcal{X}=L^{q}(\Omega,\rho)\). Since \((Y,\mu)\) is \(\sigma\)-finite, we can find a strictly positive function \(w_{0}\) in \(\mathcal{L}^{1}(Y,\mu)\). (For example, let \(w_{0}=\sum_{n\geq 1}(1/n^{2})\chi_{Y_{n}}\), where \(\{Y_{n}:n\geq 1\}\) is a countable disjoint partition of \(Y\) into \(\mu\)-measurable sets of finite measure.) Then \(w_{0}(y)|\phi(x,y)-g(x)|^{q}\) is a \(\rho\times\mu\)-measurable function on \(\Omega\times Y\), and
\[\int_{Y}\int_{\Omega}w_{0}(y)|\phi(x,y)-g(x)|^{q}d\rho(x)d\mu(y)\leq\|w_{0}\|_{L^{1}(Y,\mu)}\,(M+\|g\|_{\mathcal{X}})^{q}<\infty,\]
since the inner integral equals \(\|\Phi(y)-g\|_{\mathcal{X}}^{q}\leq(\|\Phi(y)\|_{\mathcal{X}}+\|g\|_{\mathcal{X}})^{q}\leq(M+\|g\|_{\mathcal{X}})^{q}\) for \(\mu\)-a.e. \(y\).
By Fubini's Theorem 13.2, \(y\mapsto w_{0}(y)\|\Phi(y)-g\|_{\mathcal{X}}^{q}\) is \(\mu\)-measurable. Since \(w_{0}\) is \(\mu\)-measurable and strictly positive, \(y\mapsto\|\Phi(y)-g\|_{\mathcal{X}}^{q}\) is also \(\mu\)-measurable and so \(B(g,\varepsilon)\) is measurable. Hence, \(\Phi:Y\to\mathcal{X}\) is measurable. Thus, \(\Phi\) is essentially bounded, with essential sup \(\|\Phi\|_{L^{\infty}(Y,\mu;\mathcal{X})}\leq M\). (In (9), \(M\) can be replaced by this essential sup.)
By Corollary 4.5, \(w\Phi\) is Bochner integrable. To prove that \(f\) is the Bochner integral, using Theorem 4.4, we show that for each bounded linear functional \(F\in\mathcal{X}^{*}\), \(F(I(\operatorname{sgn}\circ w\Phi,\mu_{w}))=F(f)\). By the Riesz representation theorem [35, p. 316], for any such \(F\) there exists a (unique) \(g_{F}\in\mathcal{L}^{p}(\Omega,\rho)\), \(p=1/(1-q^{-1})\), such that for all \(g\in\mathcal{L}^{q}(\Omega,\rho)\), \(F(g)=\int_{\Omega}g_{F}(x)g(x)d\rho(x)\). By Theorem 3.3,
\[F(I((\operatorname{sgn}\circ w)\Phi,\mu_{w}))=\int_{Y}F\left(\operatorname{sgn }(w(y))\Phi(y)\right)d\mu_{w}(y).\]
But for \(y\in Y\), \(F(\operatorname{sgn}(w(y))\Phi(y))=\operatorname{sgn}(w(y))F(\Phi(y))\), so
\[F(I((\operatorname{sgn}\circ w)\Phi,\mu_{w}))=\int_{Y}\int_{\Omega}w(y)g_{F}(x )\phi(x,y)d\rho(x)d\mu(y).\]
Also, using (8),
\[F(f)=\int_{\Omega}g_{F}(x)f(x)d\rho(x)=\int_{\Omega}\int_{Y}w(y)g_{F}(x)\phi(x, y)d\mu(y)d\rho(x).\]
The integrand of the iterated integrals is measurable with respect to the product measure \(\rho\times\mu\), so by Fubini's Theorem the iterated integrals are equal provided that one of the corresponding absolute integrals is finite. Indeed,
\[\int_{Y}\int_{\Omega}|w(y)g_{F}(x)\phi(x,y)|d\rho(x)d\mu(y)=\int_{Y}\|g_{F}\Phi(y )\|_{L^{1}(\Omega,\rho)}d\mu_{w}(y). \tag{13}\]
By Holder's inequality, for every \(y\),
\[\|g_{F}\Phi(y)\|_{L^{1}(\Omega,\rho)}\leq\|g_{F}\|_{L^{p}(\Omega,\rho)}\|\Phi( y)\|_{L^{q}(\Omega,\rho)},\]
using the fact that \(\mathcal{X}=L^{q}(\Omega,\rho)\). Therefore, by the essential boundedness of \(\Phi\) w.r.t. \(\mu\), the integrals in (13) are at most
\[\|g_{F}\|_{L^{p}(\Omega,\rho)}\|\Phi\|_{\mathcal{L}^{\infty}(Y,\mu;\mathcal{X})}\|w\|_{L^{1}(Y,\mu)}<\infty.\]
Hence, \(f\) is the Bochner integral of \(w\Phi\) w.r.t. \(\mu\).
(iv) We again use Lemma 3.4. Let \(Y_{0}\) be a measurable subset of \(Y\) with \(\mu(Y_{0})=0\) such that, for \(Y^{\prime}=Y\setminus Y_{0}\), \(\Phi(Y^{\prime})\subseteq G\); see the remark following the definition of essential supremum. Restricting \(\mathrm{sgn}\circ w\) and \(\Phi\) to \(Y^{\prime}\), one has
\[f=\mathcal{B}-\int_{Y^{\prime}}\mathrm{sgn}(w(y))\Phi(y)d\mu_{w}(y);\]
hence, \(f\in\mu_{w}(Y)\,\mathrm{cl}_{\mathcal{X}}\mathrm{conv}(\pm G)\). Thus, \(\|w\|_{L^{1}(Y,\mu)}=\mu_{w}(Y)\geq\|f\|_{G,\mathcal{X}}\). \(\Box\)
## 6 An example involving the Bessel potential
Here we review an example related to the Bessel functions which was considered in [21] for \(q=2\). In the following section, this Bessel-potential example is used to find an inequality related to the Gamma function.
Let \(\mathcal{F}\) denote the Fourier transform, given for \(f\in L^{1}(\mathbb{R}^{d},\lambda)\) and \(s\in\mathbb{R}^{d}\) by
\[\hat{f}(s)=\mathcal{F}(f)(s)=(2\pi)^{-d/2}\int_{\mathbb{R}^{d}}f(x)\exp(-is \cdot x)\;dx,\]
where \(\lambda\) is Lebesgue measure and \(dx\) means \(d\lambda(x)\). For \(r>0\), let
\[\hat{\beta_{r}}(s)=(1+\|s\|^{2})^{-r/2}\,.\]
Since the Fourier transform is an isometry of \(\mathcal{L}^{2}\) onto itself (Parseval's identity), and \(\hat{\beta_{r}}\) is in \(\mathcal{L}^{2}(\mathbb{R}^{d})\) for \(r>d/2\) (which we now assume), there is a unique function \(\beta_{r}\), called the _Bessel potential_ of order \(r\), having \(\hat{\beta_{r}}\) as its Fourier transform. See, e.g., [2, p. 252]. If \(1\leq q<\infty\) and \(r>d/q\), then \(\hat{\beta_{r}}\in\mathcal{L}^{q}(\mathbb{R}^{d})\) and
\[\|\hat{\beta_{r}}\|_{\mathcal{L}^{q}}=\pi^{d/2q}\left(\frac{\Gamma(qr/2-d/2)}{ \Gamma(qr/2)}\right)^{1/q}. \tag{14}\]
Indeed, by radial symmetry, \((\|\hat{\beta_{r}}\|_{\mathcal{L}^{q}})^{q}=\int_{\mathbb{R}^{d}}(1+\|x\|^{2 })^{-qr/2}dx=\omega_{d}I\), where \(I=\int_{0}^{\infty}(1+\rho^{2})^{-qr/2}\rho^{d-1}d\rho\) and \(\omega_{d}:=2\pi^{d/2}/\Gamma(d/2)\) is the area of the unit sphere in \(\mathbb{R}^{d}\)[11, p. 303]. Substituting \(\sigma=\rho^{2}\) and \(d\rho=(1/2)\sigma^{-1/2}d\sigma\), and using [10, p. 60], we find that
\[I=(1/2)\int_{0}^{\infty}\frac{\sigma^{d/2-1}}{(1+\sigma)^{qr/2}}d\sigma=\frac {\Gamma(d/2)\Gamma(qr/2-d/2)}{2\Gamma(qr/2)},\]
establishing (14).
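As a numerical sanity check (ours, not part of the formal development), the radial integral \(I\) can be compared with the Gamma-function expression; with \(d=3\), \(q=2\), \(r=5/2\) the exact value is \(\Gamma(3/2)\Gamma(1)/(2\Gamma(5/2))=1/3\).

```python
import math

# Trapezoid-rule check of I = int_0^inf (1 + rho^2)^(-qr/2) rho^(d-1) d rho
# against Gamma(d/2) Gamma(qr/2 - d/2) / (2 Gamma(qr/2)), valid for r > d/q.
def radial_integral(d, q, r, R=200.0, n=200000):
    h = R / n
    total = 0.0
    for i in range(n + 1):
        rho = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * (1 + rho * rho) ** (-q * r / 2) * rho ** (d - 1)
    return total * h

d, q, r = 3, 2.0, 2.5
I_num = radial_integral(d, q, r)
I_exact = math.gamma(d / 2) * math.gamma(q * r / 2 - d / 2) / (2 * math.gamma(q * r / 2))
print(I_num, I_exact)  # both close to 1/3
```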
For \(b>0\), let \(\gamma_{b}:\mathbb{R}^{d}\to\mathbb{R}\) denote the scaled Gaussian \(\gamma_{b}(x)=e^{-b\|x\|^{2}}\). A simple calculation gives the \(L^{q}\)-norm of \(\gamma_{b}\):
\[\|\gamma_{b}\|_{\mathcal{L}^{q}}=(\pi/qb)^{d/2q}. \tag{15}\]
Indeed, using \(\int_{-\infty}^{\infty}\exp(-t^{2})dt=\pi^{1/2}\), we obtain:
\[\|\gamma_{b}\|_{\mathcal{L}^{q}}^{q}=\int_{\mathbb{R}^{d}}\exp(-b\|x\|^{2})^{q }dx=\left(\int_{\mathbb{R}}\exp(-qb\,t^{2})dt\right)^{d}=(\pi/qb)^{d/2}.\]
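In \(d=1\), (15) reduces to \(\int_{\mathbb{R}}e^{-qbt^{2}}dt=\sqrt{\pi/qb}\); a quick numerical check (ours):

```python
import math

# Check (15) for d = 1: ||gamma_b||_{L^q}^q = int exp(-q b t^2) dt
# = sqrt(pi/(q b)), so ||gamma_b||_{L^q} = (pi/(q b))^(1/(2q)).
def gaussian_lq_norm_1d(b, q, T=20.0, n=100000):
    h = 2 * T / n
    s = 0.0
    for i in range(n + 1):
        t = -T + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.exp(-q * b * t * t)
    return (s * h) ** (1.0 / q)

b, q = 0.7, 3.0
print(gaussian_lq_norm_1d(b, q), (math.pi / (q * b)) ** (1.0 / (2 * q)))
```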
We now express the Bessel potential as an integral combination of Gaussians. The Gaussians are normalized in \(L^{q}\) and the corresponding weight function \(w\) is explicitly given. The integral formula is similar to one in Stein [41]. By our main theorem, this is an example of (8) and can be interpreted either as a pointwise integral or as a Bochner integral.
**Proposition 6.1**: _For \(d\) a positive integer, \(q\in[1,\infty)\), \(r>d/q\), and \(s\in\mathbb{R}^{d}\)_
\[\hat{\beta}_{r}(s)=\int_{0}^{\infty}w_{r}(t)\gamma_{t}^{o}(s)\,dt\,,\]
_where_
\[\gamma_{t}^{o}(s)=\gamma_{t}(s)/\|\gamma_{t}\|_{\mathcal{L}^{q}}\]
_and_
\[w_{r}(t)=(\pi/qt)^{d/2q}\,t^{r/2-1}\,e^{-t}/\Gamma(r/2).\]
**Proof.** Let
\[I=\int_{0}^{\infty}t^{r/2-1}\,e^{-t}\,e^{-t\|s\|^{2}}\,dt.\]
Putting \(u=t(1+\|s\|^{2})\) and \(dt=du(1+\|s\|^{2})^{-1}\), we obtain
\[I=(1+\|s\|^{2})^{-r/2}\int_{0}^{\infty}u^{r/2-1}\,e^{-u}\,du=\hat{\beta}_{r}(s )\Gamma(r/2).\]
Using the norm of the Gaussian (15), we arrive at
\[\hat{\beta}_{r}(s)=I/\Gamma(r/2)=\left(\int_{0}^{\infty}(\pi/qt)^{d/2q}\,t^{r /2-1}\,e^{-t}\,\gamma_{t}^{o}(s)dt\right)/\,\Gamma(r/2),\]
which is the result desired. \(\Box\)
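Numerically, the identity of Proposition 6.1 is easy to test: since \(\gamma_{t}^{o}=\gamma_{t}/\|\gamma_{t}\|_{\mathcal{L}^{q}}\), the normalizing factors in \(w_{r}\) and \(\gamma_{t}^{o}\) cancel, and the integrand reduces to \(t^{r/2-1}e^{-t(1+\|s\|^{2})}/\Gamma(r/2)\). The sketch below (pure Python, \(d=1\), with the illustrative value \(r=4\)) compares the quadrature with \((1+s^{2})^{-r/2}\) at a few points:

```python
# Numerical check of Proposition 6.1 in dimension d = 1 with the illustrative
# value r = 4.  The L^q-normalizations in w_r and gamma_t^o cancel, so the
# right-hand side reduces to int_0^inf t^{r/2-1} e^{-t(1+s^2)} dt / Gamma(r/2).
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

r = 4.0
def rhs(s):
    """The integral of Proposition 6.1 evaluated by quadrature."""
    g = math.gamma(r / 2)
    return simpson(lambda t: t ** (r / 2 - 1) * math.exp(-t * (1 + s * s)) / g,
                   0.0, 80.0)

errors = [abs(rhs(s) - (1 + s * s) ** (-r / 2)) for s in (0.0, 0.5, 2.0)]
```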
Now we apply Theorem 5.1 with \(Y=(0,\infty)\) and \(\phi(s,t)=\gamma_{t}^{o}(s)=\gamma_{t}(s)/\|\gamma_{t}\|_{L^{q}(\mathbb{R}^{d})}\) to bound the variational norm of \(\hat{\beta}_{r}\) by the \(L^{1}\)-norm of the weight function.
**Proposition 6.2**: _For \(d\) a positive integer, \(q\in[1,\infty)\), and \(r>d/q\),_
\[\|\hat{\beta}_{r}\|_{G,\mathcal{X}}\leq(\pi/q)^{d/2q}\frac{\Gamma(r/2-d/2q)}{ \Gamma(r/2)},\]
_where \(G=\{\gamma_{t}^{o}:0<t<\infty\}\) and \(\mathcal{X}=\mathcal{L}^{q}(\mathbb{R}^{d})\)._
**Proof.** By (11) and Proposition 6.1, we have
\[\|\hat{\beta}_{r}\|_{G,\mathcal{X}}\leq\|w_{r}\|_{\mathcal{L}^{1}(Y)}=k\int_{0}^{\infty}e^{-t}t^{r/2-d/2q-1}dt,\]
where \(k=(\pi/q)^{d/2q}/\Gamma(r/2)\), and by definition, the integral is \(\Gamma(r/2-d/2q)\). \(\Box\)
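The value of \(\|w_{r}\|_{\mathcal{L}^{1}(Y)}\) obtained in the proof can also be checked by quadrature; in the sketch below (pure Python) the parameters \(d=1\), \(q=2\), \(r=4\) are illustrative choices:

```python
# Numerical check of the L^1-norm computation in the proof of Proposition 6.2,
# with illustrative parameters d = 1, q = 2, r = 4 (any r > d/q would do).
import math

def simpson(f, a, b, n=40000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

d, q, r = 1, 2, 4
def w(t):
    # the weight function w_r of Proposition 6.1
    return (math.pi / (q * t)) ** (d / (2 * q)) * t ** (r / 2 - 1) \
        * math.exp(-t) / math.gamma(r / 2)

l1 = simpson(w, 1e-12, 60.0)     # ||w_r||_{L^1(0, infinity)}, truncated
bound = (math.pi / q) ** (d / (2 * q)) \
    * math.gamma(r / 2 - d / (2 * q)) / math.gamma(r / 2)
```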
## 7 Application: A Gamma function inequality
The inequalities among the variational norm \(\|\cdot\|_{G,\mathcal{X}}\), the Banach space norm \(\|\cdot\|_{\mathcal{X}}\), and the \(L^{1}\)-norm of the weight function, established in the Main Theorem, allow us to derive other inequalities. The Bessel potential \(\beta_{r}\) of order \(r\) considered above provides an example.
Let \(d\) be a positive integer, \(q\in[1,\infty)\), and \(r>d/q\). By Proposition 6.2 and (14) of the last section, and by (12) of the Main Theorem, we have
\[\pi^{d/2q}\left(\frac{\Gamma(qr/2-d/2)}{\Gamma(qr/2)}\right)^{1/q}\leq(\pi/q)^ {d/2q}\ \frac{\Gamma(r/2-d/2q)}{\Gamma(r/2)}. \tag{16}\]
Hence, with \(a=r/2-d/2q\,\) and \(s=r/2\), this becomes
\[q^{d/2q}\ \left(\frac{\Gamma(qa)}{\Gamma(qs)}\right)^{1/q}\leq\frac{\Gamma(a)}{ \Gamma(s)}. \tag{17}\]
In fact, (17) holds if \(s,a,d,q\) satisfy (i) \(s>a>0\) and (ii) \(s-a=d/2q\) for some \(d\in\mathbb{Z}^{+}\) and \(q\in[1,\infty)\). Since \(a>0\), we have \(r>d/q\). If \(T=\{t>0:t=d/2q\ \,\mbox{for some}\ \,d\in\mathbb{Z}^{+},q\in[1,\infty)\}\), then \(T=(0,\frac{1}{2}]\cup(0,1]\cup(0,\frac{3}{2}]\cup\ldots=(0,\infty)\), so there always exist \(d\), \(q\) satisfying (ii); the smallest such \(d\) is \(\lceil 2(s-a)\rceil\).
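Inequality (17) is easy to probe numerically with the log-Gamma function; the sketch below (pure Python) evaluates \(\log(\mathrm{LHS})-\log(\mathrm{RHS})\) in the extremal case \(d=2q(s-a)\) over an illustrative grid of parameters:

```python
# Numerical spot-check of inequality (17) via log-Gamma: with d = 2q(s - a)
# (the extremal case), log(LHS) - log(RHS) should be <= 0, with equality at q = 1.
import math

def log_lhs_minus_log_rhs(a, s, q):
    d = 2 * q * (s - a)
    log_lhs = (d / (2 * q)) * math.log(q) \
        + (math.lgamma(q * a) - math.lgamma(q * s)) / q
    log_rhs = math.lgamma(a) - math.lgamma(s)
    return log_lhs - log_rhs

# an arbitrary illustrative grid of parameters with s > a > 0 and q >= 1
checks = [log_lhs_minus_log_rhs(a, a + gap, q)
          for a in (0.3, 1.0, 2.5)
          for gap in (0.25, 1.0, 3.0)
          for q in (1.0, 1.5, 2.0, 5.0)]
```

At \(q=1\) the difference vanishes identically; for \(q>1\) it is strictly negative, as the monotonicity argument below predicts.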
The inequality (17) suggests that the Main Theorem can be used to establish other inequalities of interest among classical functions. We now give a direct argument for the inequality. Its independent proof confirms our function-theoretic methods and provides additional generalization.
We begin by noting that in (17) it suffices to take \(d=2q(s-a)\). If the inequality is true in that case, it is true for all real numbers \(d\leq 2q(s-a)\). Thus, we wish to establish that
\[s\longmapsto\frac{\Gamma(qs)}{\Gamma(s)^{q}q^{sq}}\]
is a strictly increasing function of \(s\) for \(q>1\) and \(s>0\). (For \(q=1\) this function is constant.)
Equivalently, we show that
\[H_{q}(s):=\log\Gamma(qs)-q\log\Gamma(s)-sq\log q\]
is a strictly increasing function of \(s\) for \(q>1\) and \(s>0\).
Differentiating with respect to \(s\), we obtain:
\[\frac{dH_{q}(s)}{ds} = q\frac{\Gamma^{\prime}(qs)}{\Gamma(qs)}-q\frac{\Gamma^{\prime}( s)}{\Gamma(s)}-q\log q\] \[= q(\psi(qs)-\psi(s)-\log q)\] \[=: qA_{s}(q)\]
where \(\psi\) is the digamma function. It suffices to establish that \(A_{s}(q)>0\) for \(q>1\), \(s>0\). Note that \(A_{s}(1)=0\). Now consider
\[\frac{dA_{s}(q)}{dq}=s\psi^{\prime}(qs)-\frac{1}{q}.\]
This derivative is positive if and only if \(\psi^{\prime}(qs)>\frac{1}{qs}\) for \(q>1\), \(s>0\).
It remains to show that \(\psi^{\prime}(x)>\frac{1}{x}\) for \(x>0\). Using the power series for \(\psi^{\prime}\)[1, 6.4.10], we have for \(x>0\),
\[\psi^{\prime}(x) = \sum_{n=0}^{\infty}\frac{1}{(x+n)^{2}}=\frac{1}{x^{2}}+\frac{1}{(x +1)^{2}}+\frac{1}{(x+2)^{2}}+\ldots\] \[> \frac{1}{x(x+1)}+\frac{1}{(x+1)(x+2)}+\frac{1}{(x+2)(x+3)}+\ldots\] \[= \frac{1}{x}-\frac{1}{x+1}+\frac{1}{x+1}-\frac{1}{x+2}+\ldots\;= \;\frac{1}{x}.\]
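The inequality \(\psi^{\prime}(x)>1/x\) established by this telescoping argument can also be observed numerically: a truncated series for \(\psi^{\prime}(x)\), together with an integral lower bound for its tail, already exceeds \(1/x\). A minimal sketch (pure Python; the cutoff is an arbitrary illustrative choice):

```python
# Numerical illustration that psi'(x) > 1/x: a truncated series for psi'(x)
# plus an integral lower bound for its tail already exceeds 1/x.
def trigamma_lower(x, terms=10000):
    """A lower bound for psi'(x) = sum_{n >= 0} 1/(x + n)^2."""
    partial = sum(1.0 / (x + n) ** 2 for n in range(terms))
    # the tail sum_{n >= terms} dominates the integral from `terms` to infinity,
    # which equals 1/(x + terms)
    return partial + 1.0 / (x + terms)

gaps = [trigamma_lower(x) - 1.0 / x for x in (0.1, 0.5, 1.0, 3.0, 10.0)]
```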
## 8 Tensor-product interpretation
The basic paradigm of feedforward neural nets is to select a single type of computational unit and then build a network based on this single type through a choice of controlling internal and external parameters so that the resulting network function approximates the target function; see [45]. However, a single type of hidden unit may not be as effective as one based on a _plurality_ of hidden-unit types. Here we explore a tensor-product interpretation which may facilitate such a change in perspective.
Long ago Hille and Phillips [19, p. 86] observed that the Banach space of Bochner integrable functions from a measure space \((Y,\mu)\) into a Banach space \(\mathcal{X}\) has a fundamental set consisting of two-valued functions, achieving a single non-zero value on a measurable set of finite measure. Indeed, every Bochner integrable function is a limit of simple functions, and each simple function (with a finite set of values achieved on disjoint parts \(P_{j}\) of the partition) can be written as a sum of characteristic functions, weighted by members of the Banach space. If \(s\) is such a simple function, then
\[s=\sum_{j=1}^{n}\chi_{j}g_{j},\]
where the \(\chi_{j}\) are the characteristic functions of the \(P_{j}\) and the \(g_{j}\) are in \(\mathcal{X}\). (If, for example, \(Y\) is embedded in a finite-dimensional Euclidean space, the partition could consist of generalized rectangles.)
Hence, if \(f=\mathcal{B}-\int_{Y}h(y)d\mu(y)\) is the Bochner integral of \(h\) with respect to some measure \(\mu\), then \(f\) can be approximated as closely as desired by elements in \(\mathcal{X}\) of the form
\[\sum_{i=1}^{n}\mu(P_{i})g_{i},\]
where \(Y=\bigcup_{i=1}^{n}P_{i}\) is a \(\mu\)-measurable partition of \(Y\).
Note that given a \(\sigma\)-finite measure space \((Y,\mu)\) and a separable Banach space \(\mathcal{X}\), every element \(f\) in \(\mathcal{X}\) is (trivially) the Bochner integral of any integrand \(w\cdot\kappa(f)\), where \(w\) is a nonnegative function on \(Y\) with \(\|w\|_{L^{1}(Y,\mu)}=1\) (see part (iii) of Theorem 5.1) and \(\kappa(f)\) denotes the constant function on \(Y\) with value \(f\). In effect, \(f\) is in \(\mathcal{X}_{G}\) when \(G=\{f\}\). When \(\Phi\) is chosen first (or more precisely \(\phi\) as in our Main Theorem), then \(f\) may or may not be in \(\mathcal{X}_{G}\). According to the Main Theorem, \(f\) is in \(\mathcal{X}_{G}\) when it is given by an integral formula involving \(\Phi\) and some \(L^{1}\) weight function. In this case, \(G=\Phi(Y)\cap B\) where \(B\) is the ball in \(\mathcal{X}\) of radius \(\|\Phi\|_{L^{\infty}(Y,\mu;\mathcal{X})}\).
In general, the elements \(\Phi(y),\;y\in Y\) of the Banach space involved in some particular approximation for \(f\) will be distinct functions of some general type obtained by varying the parameter \(y\). For instance, kernels, radial basis functions, perceptrons, or various other classes of computational units can be used, and when these computational-unit-classes determine fundamental sets, by Proposition 2.1, it is possible to obtain arbitrarily good approximations. However, Theorem 8.1 below suggests that having a finite set of distinct types \(\Phi_{i}:Y\rightarrow\mathcal{X}\) may allow a smaller "cost" for approximation,
if we regard
\[\sum_{i=1}^{n}\|w_{i}\|_{1}\|\Phi_{i}\|_{\infty}\]
as the cost of the approximation
\[f=\mathcal{B}-\int_{Y}\left(\sum_{i=1}^{n}w_{i}\Phi_{i}\right)(y)d\mu(y).\]
We give a brief sketch of the ideas, following Light and Cheney [33].
Let \(\mathcal{X}\) and \(\mathcal{Z}\) be Banach spaces. Let \(\mathcal{X}\otimes\mathcal{Z}\) denote the linear space of equivalence classes of formal expressions
\[\sum_{i=1}^{n}f_{i}\otimes h_{i},\ f_{i}\in\mathcal{X},\ h_{i}\in\mathcal{Z},\ n\in\mathbb{N},\]
where two such expressions \(\sum_{i=1}^{n}f_{i}\otimes h_{i}\) and \(\sum_{i=1}^{m}f_{i}^{\prime}\otimes h_{i}^{\prime}\) are equivalent if for every \(F\in\mathcal{X}^{*}\)
\[\sum_{i=1}^{n}F(f_{i})h_{i}=\sum_{i=1}^{m}F(f_{i}^{\prime})h_{i}^{\prime},\]
that is, if the associated operators from \(\mathcal{X}^{*}\to\mathcal{Z}\) are identical, where \(\mathcal{X}^{*}\) is the algebraic dual of \(\mathcal{X}\). The resulting linear space \(\mathcal{X}\otimes\mathcal{Z}\) is called the _algebraic tensor product_ of \(\mathcal{X}\) and \(\mathcal{Z}\). We can extend \(\mathcal{X}\otimes\mathcal{Z}\) to a Banach space by completing it with respect to a suitable norm. Consider the norm defined for \(t\in\mathcal{X}\otimes\mathcal{Z}\),
\[\gamma(t)=\inf\left\{\sum_{i=1}^{n}\|f_{i}\|_{\mathcal{X}}\|h_{i}\|_{\mathcal{Z}}\ :\ t=\sum_{i=1}^{n}f_{i}\otimes h_{i}\right\}, \tag{18}\]
and complete the algebraic tensor product with respect to this norm; the result is denoted \(\mathcal{X}\otimes_{\gamma}\mathcal{Z}\).
In [33, Thm. 1.15, p. 11], Light and Cheney showed that for any measure space \((Y,\mu)\) and any Banach space \(\mathcal{X}\) the linear map
\[\Lambda_{\mathcal{X}}:L^{1}(Y,\mu)\otimes\mathcal{X}\to L^{1}(Y,\mu;\mathcal{X})\]
given by
\[\sum_{i=1}^{r}w_{i}\otimes g_{i}\mapsto\sum_{i=1}^{r}w_{i}g_{i}\]
is well-defined and extends to a map
\[\Lambda_{\mathcal{X}}^{\gamma}:L^{1}(Y,\mu)\otimes_{\gamma}\mathcal{X}\to L^{ 1}(Y,\mu;\mathcal{X}),\]
which is an isometric isomorphism of the completed tensor product onto the space \(L^{1}(Y,\mu;\mathcal{X})\) of Bochner-integrable functions.
The following theorem extends the function \(\Lambda_{\mathcal{X}}\) via the natural embedding \(\kappa_{\mathcal{X}}\) of \(\mathcal{X}\) into the space of essentially bounded \(\mathcal{X}\)-valued functions defined in section 4.
**Theorem 8.1**: _Let \(\mathcal{X}\) be a separable Banach space and let \((Y,\mu)\) be a \(\sigma\)-finite measure space. Then there exists a continuous linear surjection_
\[e=\Lambda_{\mathcal{X}}^{\infty,\gamma}:L^{1}(Y,\mu)\otimes_{\gamma}L^{\infty} (Y,\mu;\mathcal{X})\to L^{1}(Y,\mu;\mathcal{X}).\]
_Furthermore, \(e\) makes the following diagram commutative:_
\[\begin{array}{ccccc}L^{1}(Y,\mu)\otimes_{\gamma}{\cal X}&\stackrel{{a}}{{\longrightarrow}}&L^{1}(Y,\mu;{\cal X})\\ \downarrow_{b}&\nearrow_{e}&\downarrow_{c}\\ L^{1}(Y,\mu)\otimes_{\gamma}L^{\infty}(Y,\mu;{\cal X})&\stackrel{{d}}{{\longrightarrow}}&L^{1}(Y,\mu;L^{\infty}(Y,\mu;{\cal X}))\end{array} \tag{19}\]
_where the two horizontal arrows \(a\) and \(d\) are the isometric isomorphisms \(\Lambda^{\gamma}_{\cal X}\) and \(\Lambda^{\gamma}_{L^{\infty}(Y,\mu;{\cal X})}\); the left-hand vertical arrow \(b\) is induced by \(1\otimes\kappa_{\cal X}\), while the right-hand vertical arrow \(c\) is induced by post-composition with \(\kappa_{\cal X}\), i.e., for any \(h\) in \(L^{1}(Y,\mu;{\cal X})\),_
\[c(h)=\kappa_{\cal X}\circ h:Y\to L^{\infty}(Y,\mu;{\cal X}).\]
**Proof.** The map
\[e^{\prime}:\sum_{i=1}^{n}w_{i}\otimes\Phi_{i}\mapsto\sum_{i=1}^{n}w_{i}\Phi_{i}\]
defines a linear function \(L^{1}(Y,\mu)\otimes L^{\infty}(Y,\mu;{\cal X})\to L^{1}(Y,\mu;{\cal X})\); indeed, it takes values in the Bochner integrable functions since, by our Main Theorem, each summand is in that class.
To see that \(e^{\prime}\) extends to \(e\) on the \(\gamma\)-completion, note that for \(t=\sum_{i}w_{i}\otimes\Phi_{i}\),
\[\|e^{\prime}(t)\|_{L^{1}(Y,\mu;{\cal X})}=\int_{Y}\Big\|\sum_{i}w_{i}(y)\Phi_{i}(y)\Big\|_{\cal X}\,d\mu(y)\leq\int_{Y}\sum_{i}|w_{i}(y)|\,\|\Phi_{i}(y)\|_{\cal X}\,d\mu(y)\leq\sum_{i}\|w_{i}\|_{1}\|\Phi_{i}\|_{\infty}.\]
Hence, taking the infimum over representations of \(t\), \(\|e(t)\|_{L^{1}(Y,\mu;{\cal X})}\leq\gamma(t)\), so the map \(e\) is continuous. \(\Box\)
## 9 An example involving bounded variation on an interval
The following example, more elaborate than the one following Proposition 2.1, is treated in part by Barron [6] and Kurkova [25].
Let \({\cal X}\) be the set of equivalence classes of (essentially) bounded Lebesgue-measurable functions on \([a,b]\), \(a,b<\infty\), i.e., \({\cal X}=L^{\infty}([a,b])\), with norm \(\|f\|_{\cal X}:=\inf\{M:|f(x)|\leq M\) for almost every \(x\in[a,b]\}\). Let \(G\) be the set of equivalence classes of all characteristic functions of closed intervals of the forms \([a,b]\), or \([a,c]\) or \([c,b]\) with \(a<c<b\). These functions are the restrictions of characteristic functions of closed half-lines to \([a,b]\). The equivalence relation is \(f\sim g\) if and only if \(f(x)=g(x)\) for almost every \(x\) in \([a,b]\) (with respect to Lebesgue measure).
Let \(BV([a,b])\) be the set of all equivalence classes of functions on \([a,b]\) with bounded variation; that is, each equivalence class contains a function \(f\) whose _total variation_ \(V(f,[a,b])\) is finite. The total variation is the largest possible total movement of a point that makes a finite number of stops as \(x\) varies from \(a\) to \(b\), maximized over all ways of choosing a finite list of intermediate points; that is,
\[V(f,[a,b]) := \sup\{\sum_{i=1}^{n-1}|f(x_{i+1})-f(x_{i})|:n\geq 1,a\leq x_{1}<x_{ 2}<\cdots<x_{n}\leq b\}.\]
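The definition can be exercised numerically; the sketch below (pure Python, with the illustrative choice \(f=\sin\) on \([0,2\pi]\), whose total variation is \(4\)) evaluates the partition sums over successively finer partitions, each refining the last:

```python
# Total variation computed straight from the definition over uniform partitions
# that refine one another; f(x) = sin(x) on [0, 2*pi] (an illustrative choice)
# has V(f, [0, 2*pi]) = 4.
import math

def total_variation(f, a, b, n):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(abs(f(xs[i + 1]) - f(xs[i])) for i in range(n))

# each partition below refines the previous one (5 | 50 | 5000), so the sums
# are nondecreasing and approach the supremum 4
tv = [total_variation(math.sin, 0.0, 2 * math.pi, n) for n in (5, 50, 5000)]
```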
In fact, each equivalence class \([f]\) contains exactly one function \(f^{*}\) of bounded variation that satisfies the _continuity conditions_:
(i) \(f^{*}\) is right-continuous at \(c\) for \(c\in[a,b)\), and
(ii) \(f^{*}\) is left-continuous at \(b\).
Moreover, \(V(f^{*},[a,b])\leq V(f,[a,b])\) for all \(f\sim f^{*}\).
To see this, recall that every function \(f\) of bounded variation is the difference of two nondecreasing functions \(f=f_{1}-f_{2}\), and \(f_{1},f_{2}\) are necessarily right-continuous except at a countable set. We can take \(f_{1}(x):=V(f,[a,x])+K\), where \(K\) is an arbitrary constant, and \(f_{2}(x):=V(f,[a,x])+K-f(x)\) for \(x\in[a,b]\). Now redefine both \(f_{1}\) and \(f_{2}\) at countable sets to form \(f_{1}^{*}\) and \(f_{2}^{*}\) which satisfy the continuity conditions and are still nondecreasing on \([a,b]\). Then \(f^{*}:=f_{1}^{*}-f_{2}^{*}\) also satisfies the continuity conditions. It is easily shown that \(V(f^{*},[a,b])\leq V(f,[a,b]).\) Since any equivalence class in \({\cal X}\) can contain at most one function satisfying (i) and (ii) above, it follows that \(f^{*}\) is unique and that \(V(f^{*},[a,b])\) minimizes the total variation for all functions in the equivalence class. Recall that \(\chi_{[a,b]}=\chi([a,b])\) denotes the characteristic function of the interval \([a,b]\), etc.
**Proposition 9.1**: _Let \({\cal X}=L^{\infty}([a,b])\) and let \(G\) be the subset of characteristic functions \(\chi([a,b]),\chi([a,c]),\chi([c,b]),a<c<b\) (up to sets of Lebesgue-measure zero). Then \({\cal X}_{G}=BV([a,b])\), and_
\[\|[f]\|_{{\cal X}}\leq\|[f]\|_{G,{\cal X}}\leq 2V(f^{*},[a,b])+|f^{*}(a)|,\]
_where \(f^{*}\) is the member of \([f]\) satisfying the continuity conditions (i) and (ii)._
**Proof.** Let \(C_{G,{\cal X}}\) be the set of equivalence classes of functions of the form
\[(q-r)\chi_{[a,b]}+\sum_{n=1}^{k}(s_{n}-t_{n})\chi_{[a,c_{n}]}+\sum_{n=1}^{k}( u_{n}-v_{n})\chi_{[c_{n},b]}, \tag{20}\]
where \(k\) is a positive integer, \(q,r\geq 0\), for \(1\leq n\leq k\), \(s_{n},t_{n},u_{n},v_{n}\geq 0,\) and
\[(q+r)+\sum_{n=1}^{k}(s_{n}+t_{n}+u_{n}+v_{n})=1.\]
All of the functions so exhibited have bounded variation \(\leq 1\) and hence \(C_{G,{\cal X}}\subseteq BV([a,b])\).
We will prove that the \(\mathcal{X}\)-norm limit of any Cauchy sequence in \(C_{G,{\cal X}}\) belongs to \(BV([a,b])\); this will establish that \(B_{G,{\cal X}}\) is a subset of \(BV([a,b])\) and hence that \({\cal X}_{G}\) is a subset of \(BV([a,b])\).
Let \(\{[f_{k}]\}\) be a sequence in \(C_{G,{\cal X}}\) that is Cauchy in the \({\cal X}\)-norm. Without loss of generality, we pass to the sequence \(\{f_{k}^{*}\}\), which is Cauchy in the sup-norm since \(x\mapsto|f_{k}^{*}(x)-f_{j}^{*}(x)|\) satisfies the continuity conditions (i) and (ii). Thus, \(\{f_{k}^{*}\}\) converges uniformly, and hence in the sup-norm, to a function \(f\) on \([a,b]\) also satisfying (i) and (ii), with finite sup-norm and whose equivalence class has finite \({\cal X}\)-norm.
Let \(\{x_{1},\ldots,x_{n}\}\) satisfy \(a\leq x_{1}<x_{2}<\cdots<x_{n}\leq b\). Then
\[\sum_{i=1}^{n-1}|f_{k}^{*}(x_{i+1})-f_{k}^{*}(x_{i})|\leq V(f_{k}^{*},[a,b]) \leq V(f_{k},[a,b])\leq 1\]
for every \(k\), where _par abus de notation_ \(f_{k}\) denotes the member of \([f_{k}]\) satisfying (20). Letting \(k\) tend to infinity and then varying \(n\) and \(x_{1},\ldots,x_{n}\), we obtain \(V(f,[a,b])\leq 1\) and so \([f]\in BV([a,b])\).
It remains to show that everything in \(BV([a,b])\) is actually in \(\mathcal{X}_{G}\). Let \(g\) be a nonnegative nondecreasing function on \([a,b]\) satisfying the continuity conditions (i) and (ii) above. Given a positive integer \(n\), there exists a positive integer \(m\geq 2\) and \(a=a_{1}<a_{2}<\cdots<a_{m}=b\) such that \(g(a_{i+1}^{-})-g(a_{i})\leq 1/n\) for \(i=1,\ldots,m-1\). Indeed, for \(2\leq i\leq m-1\), let \(a_{i}:=\min\{x|g(a)+\frac{(i-1)}{n}\leq g(x)\}\). (Moreover, it follows that the set of \(a_{i}\)'s includes all points of left-discontinuity of \(g\) such that the jump \(g(a_{i})-g(a_{i}^{-})\) is greater than \(1/n\).) Let \(g_{n}:[a,b]\to\mathbb{R}\) be defined as follows:
\[g_{n}:=g(a_{1})\chi_{[a_{1},a_{2})}+g(a_{2})\chi_{[a_{2},a_{3})}+\cdots+g(a_{m -1})\chi_{[a_{m-1},a_{m}]}\]
\[=g(a_{1})(\chi_{[a_{1},b]}-\chi_{[a_{2},b]})+g(a_{2})(\chi_{[a_{2},b]}-\chi_{ [a_{3},b]})+\cdots+g(a_{m-1})(\chi_{[a_{m-1},b]})\]
\[=g(a_{1})\chi_{[a_{1},b]}+(g(a_{2})-g(a_{1}))\chi_{[a_{2},b]}+\cdots+(g(a_{m -1})-g(a_{m-2}))\chi_{[a_{m-1},b]}.\]
Then \([g_{n}]\) belongs to \(g(a_{m-1})C_{G,\mathcal{X}}\), and _a fortiori_ to \(g(b)C_{G,\mathcal{X}}\), hence to \(\mathcal{X}_{G}\), and \(\|[g_{n}]\|_{G,\mathcal{X}}\leq g(b)\). Moreover, \(\|[g_{n}]-[g]\|_{\mathcal{X}}\leq 1/n\). Therefore, since \(B_{G,\mathcal{X}}=\mathrm{cl}_{\mathcal{X}}(C_{G,\mathcal{X}})\), \([g]\) is in \(g(b)B_{G,\mathcal{X}}\) and accordingly \([g]\) is in \(\mathcal{X}_{G}\) and
\[\|\,[g]\,\|_{G,\mathcal{X}}\leq g(b).\]
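The staircase construction above can be run concretely. In the sketch below (pure Python), the nondecreasing function \(g(x)=x^{2}\) on \([0,1]\) and \(n=10\) are illustrative choices; the breakpoints are computed from the explicit inverse \(\sqrt{\cdot}\):

```python
# A concrete run of the staircase construction in the proof, with the
# illustrative choice g(x) = x^2 on [0, 1] and n = 10: the staircase g_n stays
# within 1/n of g in sup norm, and its telescoped coefficients are nonnegative
# with sum g(a_{m-1}) <= g(b).
import bisect

n = 10
def g(x):
    return x * x

# a_i = min{x : g(a) + (i-1)/n <= g(x)} = sqrt((i-1)/n), plus the endpoints
bps = [0.0] + [((i - 1) / n) ** 0.5 for i in range(2, n + 1)] + [1.0]

def g_n(x):
    """Staircase: value g(a_j) on [a_j, a_{j+1}); the last piece covers [a_{m-1}, b]."""
    j = min(bisect.bisect_right(bps, x) - 1, len(bps) - 2)
    return g(bps[j])

# sup-norm error of the staircase, sampled on a fine grid
sup_err = max(g(x) - g_n(x) for x in (j / 10000 for j in range(10001)))
# telescoped coefficients g(a_1), g(a_2)-g(a_1), ..., g(a_{m-1})-g(a_{m-2})
coeffs = [g(bps[0])] + [g(bps[j]) - g(bps[j - 1]) for j in range(1, len(bps) - 1)]
```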
Let \([f]\) be in \(BV([a,b])\) and let \(f^{*}=f_{1}^{*}-f_{2}^{*}\), as defined above; for this purpose we take \(K=|f^{*}(a)|\). This guarantees that both \(f_{1}^{*}\) and \(f_{2}^{*}\) are nonnegative. Accordingly, \([f]=[f_{1}^{*}]-[f_{2}^{*}]\) is in \(\mathcal{X}_{G}\). Furthermore, \(\|[f]\|_{G,\mathcal{X}}\leq\|[f_{1}^{*}]\|_{G,\mathcal{X}}+\|[f_{2}^{*}]\|_{G,\mathcal{X}}\leq f_{1}^{*}(b)+f_{2}^{*}(b)=V(f^{*},[a,b])+|f^{*}(a)|+V(f^{*},[a,b])+|f^{*}(a)|-f^{*}(b)\leq 2V(f^{*},[a,b])+|f^{*}(a)|\). The last inequality follows from the fact that \(V(f^{*},[a,b])+|f^{*}(a)|-f^{*}(b)\geq|f^{*}(b)-f^{*}(a)|+|f^{*}(a)|-f^{*}(b)\geq 0\).
An argument similar to the above shows that \(BV([a,b])\) is a Banach space under the norm \(2V(f^{*},[a,b])+|f^{*}(a)|\) (with or without the \(2\)). The identity map from \(BV([a,b])\) (with this norm) to \((\mathcal{X}_{G},\|\cdot\|_{G,\mathcal{X}})\) is continuous (by Proposition 9.1) and it is also onto. Accordingly, by the Open Mapping Theorem (e.g., Yosida [43, p. 75]) the map is open, hence a homeomorphism, so the norms are equivalent. Thus, in this example, \(\mathcal{X}_{G}\) is a Banach space under these two equivalent norms.
Note, however, that the \(\mathcal{X}\)-norm restricted to \(\mathcal{X}_{G}\) does not give a Banach space structure; i.e., \(\mathcal{X}_{G}\) is not complete in the \(\mathcal{X}\)-norm. Indeed, take \(\mathcal{X}=L^{\infty}([0,1])\) and let \(f_{n}\) be \(1/n\) times the characteristic function of a disjoint union of \(n^{2}\) closed intervals contained in the unit interval. Then \(\|[f_{n}]\|_{\mathcal{X}}=1/n\), while \(\|[f_{n}]\|_{G,\mathcal{X}}\geq Cn\) for some \(C>0\), since \(\|\cdot\|_{G,\mathcal{X}}\) is equivalent to the total-variation norm. Thus \(\{f_{n}\}\) converges to zero in one norm while it blows up in the other; if \(\mathcal{X}_{G}\) were a Banach space under \(\|\cdot\|_{\mathcal{X}}\), the Open Mapping Theorem would again force the two norms to be equivalent, a contradiction.
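As a concrete instance of this example (with an illustrative placement of the \(n^{2}\) intervals, here the odd cells of a uniform partition of \([0,1]\)), the two norms can be evaluated directly from their definitions:

```python
# The sequence f_n from the text, concretely for n = 4: 1/n times the indicator
# of n^2 disjoint closed intervals (here the odd cells of a uniform partition
# of [0, 1] into 2n^2 + 1 cells, an illustrative placement).  The sup norm is
# 1/n while the total variation, computed on a fine grid, is 2n.
n = 4
m = 2 * n * n + 1        # 33 cells; the 16 odd-numbered cells carry f
height = 1.0 / n

def f(x):
    k = min(int(x * m), m - 1)
    return height if k % 2 == 1 else 0.0

xs = [j / 100000 for j in range(100001)]
sup_norm = max(abs(f(x)) for x in xs)
variation = sum(abs(f(xs[j + 1]) - f(xs[j])) for j in range(100000))
```

As \(n\) grows, `sup_norm` \(=1/n\) shrinks while `variation` \(=2n\) blows up, which is exactly the two-norm divergence described above.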
## 10 Pointwise-integrals vs. Bochner integrals
### Evaluation of Bochner integrals
A natural conjecture is that the Bochner integral, evaluated pointwise, is the pointwise integral; that is, if \(h\in\mathcal{L}^{1}(Y,\mu,\mathcal{X})\), where \(\mathcal{X}\) is any Banach space of functions defined on a measure space \(\Omega\), then
\[\left(\mathcal{B}-\int_{Y}h(y)\,d\mu(y)\right)(x)=\int_{Y}h(y)(x)\,d\mu(y) \tag{21}\]
for all \(x\in\Omega\). Usually, however, one is dealing with equivalence classes of functions and thus can expect the equation (21) to hold only for almost every \(x\) in \(\Omega\). Furthermore, to specify \(h(y)(x)\), it is necessary to take a particular function representing \(h(y)\in\mathcal{X}\)
The Main Theorem implies that (21) holds for \(\rho\)-a.e. \(x\in\Omega\) when \(\mathcal{X}=L^{q}(\Omega,\rho)\), for \(1\leq q<\infty\), is separable provided that \(h=w\Phi\), where \(w:Y\to\mathbb{R}\) is a weight function with finite \(L^{1}\)-norm and \(\Phi:Y\to\mathcal{X}\) is essentially bounded, where for each \(y\in Y\)\(\Phi(y)(x)=\phi(x,y)\) for \(\rho\)-a.e. \(x\in\Omega\) and \(\phi:\Omega\times Y\to\mathbb{R}\) is \(\rho\times\mu\)-measurable. More generally, we can show the following.
**Theorem 10.1**: _Let \((\Omega,\rho)\), \((Y,\mu)\) be \(\sigma\)-finite measure spaces, let \({\cal X}=L^{q}(\Omega,\rho)\), \(q\in[1,\infty]\), and let \(h\in{\cal L}^{1}(Y,\mu;{\cal X})\) so that for each \(y\) in \(Y\), \(h(y)(x)=H(x,y)\) for \(\rho\)-a.e. \(x\), where \(H\) is a \(\rho\times\mu\)-measurable real-valued function on \(\Omega\times Y\). Then (i) \(y\mapsto H(x,y)\) is integrable for \(\rho\)-a.e. \(x\in\Omega\), (ii) the equivalence class of \(x\mapsto\int_{Y}H(x,y)\,d\mu(y)\) is in \({\cal X}\), and (iii) for \(\rho\)-a.e. \(x\in\Omega\)_
\[\left({\cal B}-\int_{Y}h(y)\,d\mu(y)\right)(x)=\int_{Y}H(x,y)\,d\mu(y).\]
**Proof.** We first consider the case \(1\leq q<\infty\). Let \(g\) be in \({\cal L}^{p}(\Omega,\rho)\), where \(1/p+1/q=1\). Then
\[\int_{Y}\int_{\Omega}|g(x)H(x,y)|d\rho(x)d\mu(y)\leq\int_{Y}\|g\|_ {p}\|h(y)\|_{q}\,d\mu(y) \tag{22}\] \[=\|g\|_{p}\int_{Y}\|h(y)\|_{{\cal X}}\,d\mu(y)<\infty. \tag{23}\]
Here we have used Hölder's inequality and Bochner's theorem. By Fubini's theorem, (i) follows. In addition, the map \(g\mapsto\int_{\Omega}g(x)\left(\int_{Y}H(x,y)\,d\mu(y)\right)d\rho(x)\) is a continuous linear functional \(F\) on \(L^{p}\) with \(\|F\|_{{\cal X}^{*}}\leq\int_{Y}\|h(y)\|_{{\cal X}}\,d\mu(y)\). Since \((L^{p})^{*}\) is \(L^{q}\) for \(1<q<\infty\), the function \(x\mapsto\int_{Y}H(x,y)\,d\mu(y)\) is in \({\cal X}=L^{q}\) and has norm \(\leq\int_{Y}\|h(y)\|_{{\cal X}}\,d\mu(y)\). The case \(q=1\) is covered by taking \(g\equiv 1\), a member of \(L^{\infty}\), and noting that \(\|\int_{Y}H(x,y)\,d\mu(y)\|_{L^{1}}=\int_{\Omega}|\int_{Y}H(x,y)\,d\mu(y)|d\rho(x)\leq\int_{\Omega}\int_{Y}|H(x,y)|\,d\mu(y)d\rho(x)=\int_{Y}\|h(y)\|_{{\cal X}}\,d\mu(y)\). Thus, (ii) holds for \(1\leq q<\infty\).
Also by Fubini's theorem and Theorem 3.3, for all \(g\in{\cal X}^{*}\),
\[\int_{\Omega}g(x)\left({\cal B}-\int_{Y}h(y)\,d\mu(y)\right)(x)d\rho(x)=\int_{ Y}\left(\int_{\Omega}g(x)H(x,y)d\rho(x)\right)d\mu(y)\]
\[=\int_{\Omega}g(x)\left(\int_{Y}H(x,y)d\mu(y)\right)d\rho(x).\]
Hence (iii) holds for all \(q<\infty\), including \(q=1\).
Now consider the case \(q=\infty\). For \(g\in L^{1}(\Omega,\rho)=(L^{\infty}(\Omega,\rho))^{*}\), the inequality (23) holds, and by [18, pp. 348-9], (i) and (ii) hold and \(\|\int_{Y}H(x,y)\,d\mu(y)\|_{\infty}\leq\int_{Y}\|h(y)\|_{\infty}\,d\mu(y)<\infty\). For \(g\in{\cal L}^{1}(\Omega,\rho)\),
\[\int_{\Omega}g(x)\left({\cal B}-\int_{Y}h(y)\,d\mu(y)\right)(x)d\rho(x)=\int_{ Y}\left(\int_{\Omega}g(x)h(y)(x)d\rho(x)\right)d\mu(y)\]
\[=\int_{Y}\left(\int_{\Omega}g(x)H(x,y)d\rho(x)\right)d\mu(y)=\int_{\Omega}g(x) \left(\int_{Y}H(x,y)\,d\mu(y)\right)d\rho(x).\]
The two functions integrated against \(g\) are in \(L^{\infty}(\Omega,\rho)\) and agree, so the functions must be the same \(\rho\)-a.e. \(\square\)
There are cases where \({\cal X}\) consists of pointwise-defined functions and (21) can be taken literally.
If \({\cal X}\) is a separable Banach space of pointwise-defined functions from \(\Omega\) to \({\mathbb{R}}\) in which the evaluation functionals are bounded (and so in particular if \({\cal X}\) is a reproducing kernel Hilbert space [4]), then (21) holds for all \(x\) (not just \(\rho\)-a.e.). Indeed, for each \(x\in\Omega\), the evaluation functional \(E_{x}:f\mapsto f(x)\) is bounded and linear, so by Theorem 3.3, \(E_{x}\) commutes with the Bochner integral operator. As non-separable reproducing kernel Hilbert spaces exist [3, p.26], one still needs the hypothesis of separability.
In a special case involving Bochner integrals with values in Marcinkiewicz spaces, Nelson [38] showed that (21) holds. His result involves going from equivalence classes to functions, and uses
a "measurable selection." Reproducing kernel Hilbert spaces were studied by Le Page in [32] who showed that (21) holds when \(\mu\) is a probability measure on \(Y\) under a Gaussian distribution assumption on variables in the dual space. Another special case of (21) is derived in Hille and Phillips [19, Theorem 3.3.4, p. 66], where the parameter space is an interval of the real line and the Banach space is a space of bounded linear transformations (i.e., the Bochner integrals are operator-valued).
### Essential boundedness is needed for the Main Theorem
The following is an example of a function \(h:Y\to\mathcal{X}\) which is not Bochner integrable. Let \(Y=(0,1)=\Omega\) with \(\rho=\mu=\) Lebesgue measure and \(q=1=d\) so \(\mathcal{X}=L^{1}((0,1))\). Put \(h(y)(x)=y^{-x}\). Then for all \(y\in(0,1)\)
\[\|h(y)\|_{\mathcal{X}}=\int_{0}^{1}y^{-x}dx=\frac{1-\frac{1}{y}}{\log y}.\]
By l'Hospital's rule,
\[\lim_{y\to 0^{+}}\|h(y)\|_{\mathcal{X}}=+\infty.\]
Thus, the function \(y\mapsto\|h(y)\|_{\mathcal{X}}\) is not essentially bounded on \((0,1)\) and Theorem 5.1 does not apply. Furthermore, for \(y\leq 1/2\),
\[\|h(y)\|_{\mathcal{X}}\geq\frac{1}{-2y\log y}\]
and
\[\int_{0}^{1}\|h(y)\|_{\mathcal{X}}\,dy\geq\int_{0}^{1/2}\frac{1}{-2y\log y}\,dy=-(1/2)\log\left(-\log y\right)\Big|_{0}^{1/2}=\infty.\]
Hence, by Theorem 3.1, \(h\) is not Bochner integrable. Note however that
\[f(x)=\int_{Y}h(y)(x)d\mu(y)=\int_{0}^{1}y^{-x}dy=\frac{1}{1-x}\]
for every \(x\in\Omega\). Thus \(h(y)(x)\) has a pointwise integral \(f(x)\) for all \(x\in(0,1)\), but \(f\) is not in \(\mathcal{X}=L^{1}((0,1))\).
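The divergence can be watched numerically. Substituting \(y=e^{-u}\) turns \(\int_{\epsilon}^{1}\|h(y)\|_{\mathcal{X}}\,dy\) into \(\int_{0}^{\log(1/\epsilon)}(1-e^{-u})/u\,du\), whose integrand decays only like \(1/u\); the sketch below (pure Python) shows the truncated integrals growing without bound as \(\epsilon\to 0\), even though the pointwise integral \(\int_{0}^{1}y^{-x}\,dy=1/(1-x)\) is finite for each fixed \(x\in(0,1)\):

```python
# The divergence made concrete: after the substitution y = e^{-u}, the integral
# of ||h(y)|| over (eps, 1) becomes the integral of (1 - e^{-u})/u over
# (0, log(1/eps)), which grows roughly like log log(1/eps) as eps -> 0.
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

integrand = lambda u: (1 - math.exp(-u)) / u
partials = [simpson(integrand, 1e-9, math.log(1 / eps))
            for eps in (1e-3, 1e-9, 1e-27, 1e-81)]
```

Each tenfold-cubed shrinking of \(\epsilon\) adds roughly \(\log 3\) to the truncated integral, matching the \(\log(-\log\epsilon)\) rate computed above.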
### Connection with sup norm
In [22], we take \(\mathcal{X}\) to be the space of bounded measurable functions on \(\mathbb{R}^{d}\), \(Y\) equal to the product \(S^{d-1}\times\mathbb{R}\) with measure \(\nu\) which is the (completion of the) product measure determined by the standard (unnormalized) measure \(d(e)\) on the sphere and ordinary Lebesgue measure on \(\mathbb{R}\). We take \(\phi(x,y):=\phi(x,e,b)=\vartheta(e\cdot x+b)\), so \(x\mapsto\vartheta(e\cdot x+b)\) is the characteristic function of the closed half-space \(\{x:e\cdot x+b\geq 0\}\).
We showed that if a function \(f\) on \(\mathbb{R}^{d}\) decays, along with its partials of order \(\leq d\), at a sufficient rate, then there is an integral formula expressing \(f(x)\) as an integral combination of the characteristic functions of closed half-spaces, weighted by iterated Laplacians integrated over half-spaces. The characteristic functions all have sup-norm \(1\), and the weight-function is in \(L^{1}\) of \((Y,\nu)\).
For example, when \(d\) is odd,
\[f(x)=\int_{S^{d-1}\times\mathbb{R}}w_{f}(e,b)\vartheta(e\cdot x+b)d\nu(e,b),\]
where
\[w_{f}(e,b):=a_{d}\int_{H_{e,b}}D_{e}^{(d)}f(y)d_{H}(y),\]
with \(a_{d}\) a scalar exponentially decreasing with \(d\). The integral is of the iterated directional derivative over the hyperplane with normal vector \(e\) and offset \(b\),
\[H_{e,b}:=\{y\in\mathbb{R}^{d}:e\cdot y+b=0\}.\]
For \(\mathcal{X}=\mathcal{M}(\mathbb{R}^{d})\), the space of bounded Lebesgue-measurable functions on \(\mathbb{R}^{d}\), which is a Banach space w.r.t. sup-norm, and \(G\) the family \(H_{d}\) consisting of the set of all characteristic functions for closed half-spaces in \(\mathbb{R}^{d}\), it follows from Theorem 5.1 that \(f\in\mathcal{X}_{G}\).
Hence, from the Main Theorem,
\[f=\mathcal{B}-\int_{S^{d-1}\times\mathbb{R}}w_{f}(e,b)\Theta(e,b)\,d\nu(e,b)\]
is a Bochner integral, where \(\Theta(e,b)\in\mathcal{M}(\mathbb{R}^{d})\) is given by
\[\Theta(e,b)(x):=\vartheta(e\cdot x+b).\]
Application of the Main Theorem requires only that \(w_{f}\) be in \(L^{1}\), but [22] gives explicit formulas for \(w_{f}\) (in both even and odd dimensions) provided that \(f\) satisfies the decay conditions described above and in our paper; see also the other chapter in this book referenced earlier.
## 11 Some concluding remarks
Neural networks express a function \(f\) in terms of a combination of members of a given family \(G\) of functions. It is reasonable to expect that a function \(f\) can be so represented if \(f\) is in \(\mathcal{X}_{G}\). The choice of \(G\) thus dictates the \(f\)'s that can be represented (if we leave aside what combinations are permissible). Here we have focused on the case \(G=\{\Phi(y):y\in Y\}\). The form \(\Phi(y)\) is usually associated with a specific family such as Gaussians or Heavisides. The tensor-product interpretation suggests the possibility of using multiple families \(\{\Phi_{j}:j\in J\}\) or multiple \(G\)'s to represent a larger class of \(f\)'s. Alternatively, one may replace \(Y\) by \(Y\times J\) with a suitable extension of the measure.
The Bochner integral approach also permits \(\mathcal{X}\) to be an arbitrary Banach space (not necessarily an \(L^{p}\)-space). For example, if \(\mathcal{X}\) is a space of bounded linear transformations and \(\Phi(Y)\) is a family of such transformations, we can approximate other members \(f\) of this Banach space \(\mathcal{X}\) in a neural-network-like manner. Even more abstractly, we can approximate an _evolving_ function \(f_{t}\), where \(t\) is time, using weights that evolve over time and/or a family \(\Phi_{t}(y)\) whose members evolve in a prescribed fashion. Such an approach would require some axiomatics about permissible evolutions of \(f_{t}\), perhaps similar to methods used in time-series analysis and stochastic calculus. See, e.g., [8].
Many of the restrictions we have imposed in earlier sections are not truly essential. For example, the separability constraints can be weakened. Moreover, \(\sigma\)-finiteness of \(Y\) need not be required since an integrable function \(w\) on \(Y\) must vanish outside a \(\sigma\)-finite subset. More drastically, the integrable function \(w\) can be replaced by a distribution or a measure. Indeed, we believe that both finite combinations and integrals can be subsumed in generalized combinations derived from Choquet's theorem. The abstract transformations of the concept of neural network discussed here provide an "enrichment" that may have practical consequences.
## Appendix I: Some Banach space background
The following is a brief account of the machinery of functional analysis used in this chapter. See, e.g., [43]. For \(G\subseteq{\cal X}\), with \({\cal X}\) any linear space, let
\[{\rm span}_{n}(G):=\left\{x\in{\cal X}:\exists w_{i}\in\mathbb{R},g_{i}\in G,\;1 \leq i\leq n,\;\ni\;x=\sum_{i=1}^{n}w_{i}g_{i}\;\right\}\]
denote the set of all \(n\)-fold linear combinations from \(G\). If the \(w_{i}\) are non-negative with sum \(1\), then the combination is called a _convex_ combination; \({\rm conv}_{n}(G)\) denotes the set of all \(n\)-fold convex combinations from \(G\). Let
\[{\rm span}(G):=\bigcup_{n=1}^{\infty}\,{\rm span}_{n}(G)\;\;{\rm and}\;\;{ \rm conv}(G):=\bigcup_{n=1}^{\infty}\,{\rm conv}_{n}(G).\]
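As a concrete illustration (ours, not part of the original text), the following pure-Python sketch forms an \(n\)-fold convex combination from a small set \(G\subset\mathbb{R}^{2}\) and checks the defining constraints on the weights:

```python
# Sketch: an n-fold convex combination sum_i w_i * g_i with w_i >= 0 and
# sum_i w_i = 1.  The points and weights below are illustrative choices.

def convex_combination(weights, points):
    """Return sum_i w_i * g_i for points g_i in R^d, given as tuples."""
    assert all(w >= 0 for w in weights), "convex weights must be non-negative"
    assert abs(sum(weights) - 1.0) < 1e-12, "convex weights must sum to 1"
    dim = len(points[0])
    return tuple(sum(w * p[j] for w, p in zip(weights, points)) for j in range(dim))

# Three points in R^2; the combination lands inside their convex hull.
G = [(0.0, 0.0), (4.0, 0.0), (0.0, 2.0)]
point = convex_combination([0.25, 0.25, 0.5], G)   # -> (1.0, 1.0)
```

Dropping the two assertions on the weights yields a general element of \({\rm span}_{n}(G)\) instead of \({\rm conv}_{n}(G)\).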
A _norm_ on a linear space \({\cal X}\) is a function which associates to each element \(f\) of \({\cal X}\) a real number \(\|f\|\geq 0\) such that
\[(1)\;\|f\|=0\;\Longleftrightarrow\;f=0;\]
\[(2)\;\|rf\|=|r|\|f\|\;{\rm for\;all}\;r\in\mathbb{R};\;{\rm and}\]
\[(3)\;{\rm the\;triangle\;inequality\;holds:}\;\|f+g\|\leq\|f\|+\|g\|,\;\forall f,g\in{ \cal X}.\]
A metric \(d(x,y):=\|x-y\|\) is defined by the norm, and both addition and scalar multiplication become continuous functions with respect to the topology induced by the norm-metric. A metric space is _complete_ if every sequence in the space that satisfies the Cauchy criterion is convergent. In particular, if a normed linear space is complete in the metric induced by its norm, then it is called a _Banach_ space.
Let \((Y,\mu)\) be a measure space; it is called \(\sigma\)-finite provided that there exists a countable family \(Y_{1},Y_{2},\ldots\) of pairwise-disjoint measurable subsets of \(Y\), each with finite \(\mu\)-measure, such that \(Y=\bigcup_{i}Y_{i}\). The condition of \(\sigma\)-finiteness is required for Fubini's theorem. A set \(N\) is called a \(\mu\)-null set if it is measurable with \(\mu(N)=0\). A function from a measure space to another measure space is called _measurable_ if the pre-image of each measurable subset is measurable. When the range space is merely a topological space, a function is measurable if the pre-image of each open set is measurable.
Let \((\Omega,\rho)\) be a measure space. If \(q\in[1,\infty)\), we write \(L^{q}(\Omega,\rho)\) for the Banach space consisting of all equivalence classes of the set \({\cal L}^{q}(\Omega,\rho)\) of all \(\rho\)-measurable functions from \(\Omega\) to \(\mathbb{R}\) with absolutely integrable \(q\)-th powers; here \(f\) and \(g\) are equivalent if they agree \(\rho\)-almost everywhere (\(\rho\)-a.e.), that is, if the set of points where \(f\) and \(g\) disagree has \(\rho\)-measure zero. The norm is \(\|f\|_{L^{q}(\Omega,\rho)}:=(\int_{\Omega}|f(x)|^{q}d\rho(x))^{1/q}\), written \(\|f\|_{q}\) for short.
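As a quick numerical illustration (ours, not the text's), the \(L^{q}\) norm of \(f(x)=x\) on \([0,1]\) with Lebesgue measure is \((q+1)^{-1/q}\); a midpoint Riemann sum recovers it:

```python
import math

def lq_norm(f, a, b, q, n=100_000):
    """Midpoint Riemann-sum approximation of (integral_a^b |f(x)|^q dx)^(1/q)."""
    h = (b - a) / n
    total = sum(abs(f(a + (i + 0.5) * h)) ** q for i in range(n)) * h
    return total ** (1.0 / q)

# ||x||_{L^2([0,1])} = (int_0^1 x^2 dx)^{1/2} = 1 / sqrt(3)
approx = lq_norm(lambda x: x, 0.0, 1.0, 2)
exact = 1.0 / math.sqrt(3.0)
```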
## 13 Appendix II: Some key theorems
We include, for the reader's convenience, the statements of some crucial theorems cited in the text.
The following consequence of the Hahn-Banach Theorem, due to Mazur, is given by Yosida [43, Theorem 3', p. 109]. The hypotheses on \({\cal X}\) are satisfied by any Banach space, but the theorem holds much more generally. See [43] for examples where \({\cal X}\) is not a Banach space.
**Theorem 13.1**: _Let \(X\) be a real locally convex linear topological space, \(M\) a closed convex subset, and \(x_{0}\in X\setminus M\). Then \(\exists\) continuous linear functional_
\[F:X\to\mathbb{R}\;\ni\;F(x_{0})>1,\;F(x)\leq 1\;\;\forall x\in M.\]
Fubini's Theorem relates iterated integrals to product integrals. Let \(Y,Z\) be sets and \(\mathcal{M}\) be a \(\sigma\)-algebra of subsets of \(Y\) and \(\mathcal{N}\) a \(\sigma\)-algebra of subsets of \(Z\). If \(M\in\mathcal{M}\) and \(N\in\mathcal{N}\), then \(M\times N\subseteq Y\times Z\) is called a _measurable rectangle_. We denote the smallest \(\sigma\)-algebra on \(Y\times Z\) which contains all the measurable rectangles by \(\mathcal{M}\times\mathcal{N}\). Now let \((Y,\mathcal{M},\mu)\) and \((Z,\mathcal{N},\nu)\) be \(\sigma\)-finite measure spaces, and for \(E\in\mathcal{M}\times\mathcal{N}\), define
\[(\mu\times\nu)(E):=\int_{Y}\nu(E_{y})d\mu(y)=\int_{Z}\mu(E^{z})d\nu(z),\]
where \(E_{y}:=\{z\in Z:(y,z)\in E\}\) and \(E^{z}:=\{y\in Y:(y,z)\in E\}\). Then \(\mu\times\nu\) is a \(\sigma\)-finite measure on \(Y\times Z\) with \(\mathcal{M}\times\mathcal{N}\) as the family of measurable sets. For the following, see Hewitt and Stromberg [18, p. 386].
**Theorem 13.2**: _Let \((Y,\mathcal{M},\mu)\) and \((Z,\mathcal{N},\nu)\) be \(\sigma\)-finite measure spaces. Let \(f\) be a complex-valued \(\mathcal{M}\times\mathcal{N}\)-measurable function on \(Y\times Z\), and suppose that at least one of the following three absolute integrals is finite: \(\int_{Y\times Z}|f(y,z)|d(\mu\times\nu)(y,z)\), \(\int_{Z}\int_{Y}|f(y,z)|d\mu(y)d\nu(z)\), \(\int_{Y}\int_{Z}|f(y,z)|d\nu(z)d\mu(y)\). Then the following statements hold: (i) \(y\mapsto f(y,z)\) is in \(\mathcal{L}^{1}(Y,\mathcal{M},\mu)\) for \(\nu\)-a.e. \(z\in Z\); (ii) \(z\mapsto f(y,z)\) is in \(\mathcal{L}^{1}(Z,\mathcal{N},\nu)\) for \(\mu\)-a.e. \(y\in Y\); (iii) \(z\mapsto\int_{Y}f(y,z)d\mu(y)\) is in \(\mathcal{L}^{1}(Z,\mathcal{N},\nu)\); (iv) \(y\mapsto\int_{Z}f(y,z)d\nu(z)\) is in \(\mathcal{L}^{1}(Y,\mathcal{M},\mu)\); (v) all three of the following integrals are equal:_
\[\int_{Y\times Z}f(y,z)d(\mu\times\nu)(y,z)=\]
\[\int_{Z}\int_{Y}f(y,z)d\mu(y)d\nu(z)=\]
\[\int_{Y}\int_{Z}f(y,z)d\nu(z)d\mu(y).\]
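A small numerical sanity check (illustrative only) of the conclusion of Theorem 13.2: for \(f(y,z)=yz^{2}\) on \([0,1]^{2}\), both iterated midpoint Riemann sums approximate the common value \(1/6\):

```python
def iterated(f, n=400):
    """Midpoint Riemann approximation of the iterated integral over [0,1]^2,
    integrating the first argument in the inner sum."""
    h = 1.0 / n
    pts = [(i + 0.5) * h for i in range(n)]
    return sum(sum(f(y, z) for y in pts) * h for z in pts) * h

f = lambda y, z: y * z ** 2
I_dy_dz = iterated(f)                      # integrate in y first, then z
I_dz_dy = iterated(lambda y, z: f(z, y))   # order of integration swapped
```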
A function \(G:I\to\mathbb{R}\), \(I\) any subinterval of \(\mathbb{R}\), is called _convex_ if
\[\forall x_{1},x_{2}\in I,0\leq t\leq 1,\ \ G(tx_{1}+(1-t)x_{2})\leq tG(x_{1})+( 1-t)G(x_{2}).\]
The following formulation is from Hewitt and Stromberg [18, p. 202].
**Theorem 13.3** (Jensen's inequality): _Let \((Y,\sigma)\) be a probability measure space. Let \(G\) be a convex function from an interval \(I\) into \(\mathbb{R}\) and let \(f\) be in \(\mathcal{L}^{1}(Y,\sigma)\) with \(f(Y)\subseteq I\) such that \(G\circ f\) is also in \(\mathcal{L}^{1}(Y,\sigma)\). Then \(\int_{Y}f(y)d\sigma(y)\) is in \(I\) and_
\[G\left(\int_{Y}f(y)d\sigma(y)\right)\leq\int_{Y}(G\circ f)(y)d\sigma(y).\]
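On a finite probability space the inequality can be verified directly; the values below are our illustrative choices, with the convex \(G(x)=x^{2}\):

```python
# Jensen's inequality on a three-point probability space: G(E[f]) <= E[G(f)].
sigma = [0.2, 0.3, 0.5]      # probability weights, summing to 1
f_vals = [1.0, 2.0, 4.0]     # values of f at the three points

G = lambda x: x * x          # a convex function
lhs = G(sum(s * v for s, v in zip(sigma, f_vals)))    # G of the mean
rhs = sum(s * G(v) for s, v in zip(sigma, f_vals))    # mean of G(f)
```

Here the mean of \(f\) is \(2.8\), so the left side is \(7.84\), while the right side is \(9.4\), consistent with the theorem.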
## Acknowledgements
We thank Victor Bogdan for helpful comments on earlier versions. |
2303.15570 | Online Non-Destructive Moisture Content Estimation of Filter Media
During Drying Using Artificial Neural Networks | Moisture content (MC) estimation is important in the manufacturing process of
drying bulky filter media products as it is the prerequisite for drying
optimization. In this study, a dataset collected by performing 161 drying
industrial experiments is described and a methodology for MC estimation in a
non-destructive and online manner during industrial drying is presented. An
artificial neural network (ANN) based method is compared to state-of-the-art MC
estimation methods reported in the literature. Results of model fitting and
training show that a three-layer Perceptron achieves the lowest error.
Experimental results show that ANNs combined with oven settings data, drying
time and product temperature can be used to reliably estimate the MC of bulky
filter media products. | Christian Remi Wewer, Alexandros Iosifidis | 2023-03-27T19:37:53Z | http://arxiv.org/abs/2303.15570v1 | Online Non-Destructive Moisture Content Estimation of Filter Media During Drying Using Artificial Neural Networks
###### Abstract
Moisture content (MC) estimation is important in the manufacturing process of drying bulky filter media products as it is the prerequisite for drying optimization. In this study, a dataset collected by performing 161 drying industrial experiments is described and a methodology for MC estimation in a non-destructive and online manner during industrial drying is presented. An artificial neural network (ANN) based method is compared to state-of-the-art MC estimation methods reported in the literature. Results of model fitting and training show that a three-layer Perceptron achieves the lowest error. Experimental results show that ANNs combined with oven settings data, drying time and product temperature can be used to reliably estimate the MC of bulky filter media products.
Drying, Artificial Neural Networks, Moisture Content, Moisture Content Prediction, Moisture Content Estimation, Filter Media.
## I Introduction
Drying is a widely used manufacturing process across many different fields of manufacturing. For the manufacturing of filter media, the drying process is the most energy intensive and time consuming process.
The drying of filter media products is a process highly dependent on both the upstream manufacturing steps and the state of the ambient air, due to the hygroscopic nature of the filter media. These dependencies introduce variance in the MC of the filter media before the drying process and thus also introduce variance in the required drying times to reach a desired MC threshold for each filter media product. Therefore, control of the drying process of filter media products is based on a time threshold ensuring that all filters reach a desired MC threshold. This process comes at the cost of energy and time which is spent over-drying filter media products.
However, with the knowledge of MC of filter media products, it is possible to reduce both drying time and energy expenditure. This is because it is possible to stop the individual drying processes based on the condition of the filter media, instead of a predetermined time threshold. Direct measurement of the filter media MC during the drying process is infeasible, introducing the need of alternative methods.
One approach is to model the drying of wet materials. This is a complex, highly nonlinear, dynamic and multivariable thermal process whose underlying mechanisms are not yet fully understood [1]. It is a highly coupled multivariate problem considering the coupled heat, momentum, and mass transfer which, when modelled, can lead to insights into the underlying process and quality parameters.
When only estimates of the MC are desired, a variety of soft-computing methods have proven effective for MC estimation during drying of different materials. A comparison of MC estimation performance using k-nearest neighbour regression, Support Vector Regression (SVR), Random Forest Regression (RFR), Artificial Neural Networks (ANN), and Gaussian processes identified RFR as the most successful algorithm in estimation of drying characteristics, and RFR and SVR as the most successful algorithms for MC estimation [2].
MC estimation performance of basil seed mucilage using genetic algorithm-based ANNs and adaptive neuro-fuzzy inference systems (ANFIS) was studied by [3]. Their results indicate that, while the ANFIS model gave the best total fit of MC, both the ANNs and ANFIS can give good estimations of MC during infra-red drying.
Drying of marrow slices was investigated in [4]. It was found that amongst the thin layer drying models, the Logarithmic and the Henderson and Pabis models were the best. However, it was also found that a multi-layer perceptron (MLP) network using Backpropagation-based training was able to estimate the MC with an insignificant error. ANNs were also found successful in MC estimation of edible rose [5], quince slices [1], green tea leaves [6], absinthium leaves [7], pistachios [8], watermelon rind pomace [9], and discarded yellow onions [10].
A mini-review by [11] shows that ANNs, in general, are well suited for MC estimation applications utilising microwave drying, and [12] shows that ANNs are well suited in general for foodstuff drying applications. ANNs have also been shown to be a good tool for other estimation applications, such as estimating the State-of-Charge of batteries for electric vehicles [13, 14], remaining useful lifetime of batteries [15], braking pressure [16], solenoid valve remaining useful lifetime [17] and nitrogen in wheat leaves [18].
The above literature review shows that there exists a plethora of effective MC estimation techniques, which have been widely applied and researched in the field of foodstuff, especially for thin products (thickness magnitude approximately \(10^{-4}\) meter to \(10^{-2}\) meter). The filter media investigated in this work have a thickness magnitude of approximately \(10^{-1}\) meter. It is therefore not a given that the results extend to this category of products. Furthermore, we have been
unable to identify any studies of online MC estimation of filter media or similar products.
The objectives of this work are as follows:
1. To present and share a dataset of industrial production drying data of bulky filter media drying.
2. To devise a method that can estimate the MC of filter media during the drying process to a degree that is useful for manufacturing.
3. To compare, quantify and evaluate said method with the state-of-the-art MC estimation models found in the literature.
The rest of this article is organised as follows: Section II describes the model selection process for determining the proposed ANN architecture for MC estimation. Section III describes the experimental setup and data collection, including the dataset, software, and competing estimation models used in the study. Section IV contains the results and discussions. Finally, Section V concludes the article.
## II Development of the artificial neural network
We formulate MC estimation as a regression problem. Given the measured feature vector \(\mathbf{x}\) as input, we use an ANN to map this multi-dimensional feature vector to the MC value at the output of the network. That is, the ANN acts as a parametric function \(f(\mathbf{x},\mathbf{w})\), the parameters \(\mathbf{w}\) of which are determined so as to minimize the loss function (1) of the ANN's response w.r.t. the real MC values measured on training data, as described by
\[\mathcal{L}=\frac{1}{N}\sum_{k=1}^{N}\left(MC_{k}-\widehat{MC}_{k}\right)^{2}, \tag{1}\]
where \(N\) is the total number of samples, \(MC_{k}\) is the k'th experimentally measured moisture content, and \(\widehat{MC}_{k}\) is the k'th estimate of the actual moisture content, i.e., it is the response of \(f(\cdot,\mathbf{w})\) when the k'th sample in the training set is introduced to it.
The complexity of the function, indicated by the structure of the ANN, affects the effectiveness of the method. We determine a good structure for the ANN by following the model selection process described next.
### _Model Selection Process_
The estimation performance of an ANN depends on the amount of training data, the quality of training data, the chosen architecture of the neural network, the choice of hyper parameters such as activation functions, optimization algorithm, use of dropout and batch normalisation, choice of learning rate, and mini-batch size. Choosing the best architecture of a neural network is a research field in and of itself called Neural Architecture Search and multiple methods have been developed for this task [19, 20, 21].
The architecture of the neural network was chosen to be a feedforward multilayer perceptron (MLP), which is used for its simplicity and success in estimating MC in other drying experiments [1, 22]. The Rectified Linear Unit (ReLU) [23] was used as the activation function of the hidden neurons. The linear activation function was used for the output neuron, enabling regression to all values on the real number line. The architecture of the neural network, the learning rate and the mini-batch size were determined by performing model selection using the values shown in Table I. This was done by using the ASHA algorithm [24] combined with the cross-validation process, and it was orchestrated using the _Tune_ framework [25]. Table II shows the selected hyper parameters. For the rest of the hyper-parameters, such as the optimizer, weight decays, and loss function, we use values based on empirically established heuristics from previous works in the literature. For exact values see Section II-B.
The expected error is highly dependent on the randomly chosen validation set, especially as the work in this article is based on a relatively sparse dataset with 322 sets of observations. To combat this issue, we combine cross-validation with the ASHA algorithm. However, instead of using the ASHA algorithm for early stopping of the training procedure, we test each set of chosen hyper parameters in a 10 fold cross validation loop. For each of the \(k\) iterations, nine folds are used as training and validation (split 80/20) and the last fold is used for testing. The procedure is then repeated 10 times. The ASHA algorithm is then applied to the mean of the mean squared error (MSE) losses of the test sets, with a grace period of three, a reduction factor of three, using one bracket. For each choice of number of hidden layers, 500 trials were performed; the cross validation mean squared error was then plotted for each layer depth with its optimal parameters, as can be seen in Fig. 1, resulting in an optimal ANN architecture of which a schematic overview can be seen in Fig. 2.
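To illustrate the promotion logic behind ASHA, here is a sketch of the underlying successive-halving rule on a synthetic objective (ours, not the study's actual Ray Tune setup): each rung keeps the best third of the configurations and triples their evaluation budget.

```python
import random

random.seed(0)

def successive_halving(configs, evaluate, reduction_factor=3):
    """Toy successive halving: at each rung, keep the best
    1/reduction_factor of the configurations and enlarge the budget."""
    budget = 1
    while len(configs) > 1:
        ranked = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = ranked[:max(1, len(configs) // reduction_factor)]
        budget *= reduction_factor
    return configs[0]

# Synthetic loss minimized near lr = 0.03, noisier at small budgets
# (an illustrative stand-in for the cross-validation MSE).
def toy_loss(lr, budget):
    return (lr - 0.03) ** 2 + random.gauss(0.0, 0.001 / budget)

candidates = [10 ** random.uniform(-4, -1) for _ in range(27)]
best_lr = successive_halving(candidates, toy_loss)
```

With 27 candidates and a reduction factor of three, the rungs evaluate 27, 9, and 3 configurations before a single survivor remains.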
TABLE I: Selection options for ANN hyperparameters

| Hyperparameter | Type | Values |
| --- | --- | --- |
| Number of neurons | continuous | [1, 500] |
| Number of hidden layers | choice | {1, 2, 3, 4, 5, 6, 7} |
| Learning rate | continuous | log [1e-4, 1e-1] |
| Batch size | choice | {2, 4, 8, 16, 32, 64} |
Fig. 1: Average cross validation performance of the optimal hyper parameters found for each size of the neural network, based on 10 fold cross validation.
### _Training Strategy_
Along with hyper parameter optimization, training of the parameters of the ANN is important in improving estimation quality. In order to find a converging point in the training process of the deep neural network, the Adam optimizer [26] was used with default decay rates of \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). The loss function used is the MSE of the MC estimates as defined by (2).
Batch normalisation was used before each layer to reduce the internal covariate shift of the activation functions [27]. Dropout [28] was applied in the training phase after each hidden layer, with a probability of 50% of dropping each node. Dropout can be seen as a form of data augmentation [29], thus improving the generalisation of the trained models. In order to optimize training time, an early stopping scheme with a patience of 200 epochs was used, stopping the training when no improvement had been made on the validation set for 200 epochs. The best performing model was then saved and used for inference.
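The early stopping scheme can be sketched as follows (a minimal illustration; the study uses a patience of 200 epochs, shortened here to keep the example small):

```python
class EarlyStopping:
    """Stop training when the validation loss has not improved for
    `patience` consecutive epochs; remembers the best epoch seen."""

    def __init__(self, patience):
        self.patience = patience
        self.best = float("inf")
        self.best_epoch = -1
        self.bad_epochs = 0

    def step(self, epoch, val_loss):
        """Return True when training should stop."""
        if val_loss < self.best:
            self.best, self.best_epoch, self.bad_epochs = val_loss, epoch, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopping(patience=3)
losses = [1.0, 0.8, 0.7, 0.72, 0.71, 0.73, 0.74]   # plateaus after epoch 2
stopped_at = next(e for e, L in enumerate(losses) if stopper.step(e, L))
```

After stopping, the model saved at `stopper.best_epoch` (here epoch 2) would be restored for inference.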
### _Performance evaluation metrics_
The performance of the model is tested using four different measures: the MSE
\[MSE=\frac{1}{N}\sum_{k=1}^{N}\left(MC_{k}-\widehat{MC_{k}}\right)^{2}, \tag{2}\]
the mean absolute error (MAE)
\[MAE=\frac{1}{N}\sum_{k=1}^{N}\left(\left|MC_{k}-\widehat{MC_{k}}\right|\right), \tag{3}\]
which is less sensitive to outliers, as well as the standard deviation (STD)
\[STD=\sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(\left|MC_{k}-\widehat{MC_{k}}\right|-MAE\right)^{2}}, \tag{4}\]
and the coefficient of determination \(R^{2}\) between the estimates and the experimentally measured values
\[R^{2}=1-\frac{\sum_{k=1}^{N}\left(MC_{k}-\widehat{MC_{k}}\right)^{2}}{\sum_{k=1}^{N}\left(MC_{k}-\overline{MC}\right)^{2}}. \tag{5}\]

where \(\overline{MC}\) is the average experimentally measured moisture content.
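The four measures can be computed directly; in the sketch below (illustrative values, not the study's data) STD is taken as the standard deviation of the absolute estimation error, which is our reading of the intended definition:

```python
import math

def metrics(mc_true, mc_pred):
    """MSE, MAE, standard deviation of the absolute error, and R^2
    between measured and estimated moisture contents."""
    n = len(mc_true)
    errs = [t - p for t, p in zip(mc_true, mc_pred)]
    mse = sum(e * e for e in errs) / n
    abs_errs = [abs(e) for e in errs]
    mae = sum(abs_errs) / n
    std = math.sqrt(sum((a - mae) ** 2 for a in abs_errs) / n)
    mean_t = sum(mc_true) / n
    r2 = 1.0 - sum(e * e for e in errs) / sum((t - mean_t) ** 2 for t in mc_true)
    return mse, mae, std, r2

# Illustrative measured vs. estimated MC values (percent):
mse, mae, std, r2 = metrics([10.0, 20.0, 30.0], [12.0, 19.0, 30.0])
```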
## III Experimental setup and data collection
All drying experiments were conducted in a test oven concurrently drying four different filter media, replicating industrial usage. The positions in the oven are weakly coupled and each drying process can be approximated as an independent process.
### _Drying Procedure and Moisture Content_
The experimental MC was measured using the gravimetric method. The drying is split into two phases, namely Drying Phase 1 (DP1) and Drying Phase 2 (DP2). DP1 replicates the real-world industrial drying. However, in order to map out the entirety of the drying curve, a variation in drying time is induced by extracting the filter media after a predetermined amount of time. DP2 lasts 48 hours with an oven temperature of \(120^{\circ}C\). The purpose of DP2 is to evaporate all MC from the filter media, thus enabling the measurement of the solid mass \(m_{solid}\), which is used to calculate the experimental MC in the filter media, as seen in (6) and (7).
The mass of each filter media is measured three times during the experiment. The initial (wet) mass \(m_{initial}\) of each filter media is measured before DP1, \(m_{after}\) is measured after DP1, and \(m_{solid}\) is measured after DP2.
With these measured masses we can now calculate the initial MC as:
\[MC_{initial}=\frac{m_{initial}-m_{solid}}{m_{solid}}100\%, \tag{6}\]
and the MC after DP1 as:
\[MC=\frac{m_{after}-m_{solid}}{m_{solid}}100\%. \tag{7}\]
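Equations (6) and (7) amount to one dry-basis formula applied with different mass readings; the masses below are illustrative, not from the dataset:

```python
def moisture_content(mass, solid_mass):
    """Gravimetric moisture content (dry basis) in percent, as in Eqs. (6)-(7)."""
    return (mass - solid_mass) / solid_mass * 100.0

# Hypothetical filter: 15 kg entering the oven, 11 kg after DP1,
# and 10 kg of solid mass remaining after DP2.
m_initial, m_after, m_solid = 15.0, 11.0, 10.0
mc_initial = moisture_content(m_initial, m_solid)   # initial MC, Eq. (6)
mc_after = moisture_content(m_after, m_solid)       # MC after DP1, Eq. (7)
```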
### _Dataset_
Automated data collection is used to collect the seven predictor variables that constitute the dataset. The drying time \(t_{drying}\), the estimated filter media temperature \(\widehat{T}_{filter}\), the oven chamber position \(OCP\), the overall mean of the oven input temperature during the drying process of the particular filter media \(\overline{T}_{in}\), the overall mean of the differential pressure across the oven \(\overline{\Delta p}\), the oven temperature at the time of filter extraction \(T_{cur}\), and the initial mass of the filter media before drying \(m_{i}\) are collected for each experiment.
Each measurement of the dataset was normalized and scaled such that all values lie in the range of \([0,100]\). The normalized values of a sample \(\mathbf{x}\) were calculated using:
\[\mathbf{z}_{k}=\frac{\mathbf{x}_{k}-min(\mathbf{x}_{k}^{train})}{max(\mathbf{x }_{k}^{train})-min(\mathbf{x}_{k}^{train})}\cdot 100, \tag{8}\]
TABLE II: Chosen configuration of hyper parameters.

| Hyperparameter | Selected Parameters |
| --- | --- |
| Number of neurons | \(l_{1}=231\), \(l_{2}=421\), \(l_{3}=392\) |
| Number of hidden layers | 3 |
| Learning rate | 0.029924 |
| Batch size | 16 |
Fig. 2: Schematic view of the found optimal neural network
where \(\mathbf{z}_{k}\) is the vector of normalized values of feature \(k\), \(\mathbf{x}_{k}\) is the vector of all values of feature \(k\), \(\mathbf{x}_{k}^{train}\) is the vector of values of feature \(k\) belonging to the training set.
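A sketch of Eq. (8) with made-up values: the min and max are taken from the training set only, so unseen values may be scaled outside \([0,100]\):

```python
def fit_minmax(train_col):
    """Record a feature's min and max from the training set only."""
    return min(train_col), max(train_col)

def scale(col, lo, hi):
    """Scale values into [0, 100] using training-set statistics, per Eq. (8)."""
    return [(x - lo) / (hi - lo) * 100.0 for x in col]

# Illustrative feature column (not real oven data):
train = [20.0, 30.0, 60.0]
lo, hi = fit_minmax(train)
z_train = scale(train, lo, hi)     # -> [0.0, 25.0, 100.0]
z_new = scale([70.0], lo, hi)      # test values can fall outside [0, 100]
```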
A total of 161 experiments were performed resulting in 322 sets of predictor- and response variable vectors. 161 sets of observations measuring the _initial condition data_ (ICD) and 161 sets of predictor- and response variable observations with different drying times. The dataset consists of two classes of datapoints, ICD and _end condition data_ (ECD), where the ICD are the sets of observation sampled upon insertion of a filter media into the drying oven, i.e. a drying time of zero minutes. The ICD are information poor, as an equilibrium has not been reached yet and, as an effect, the sensors are sensing the features of the oven and not those of the filter media. The ECD are the sets of observations upon extraction of the filter media from the drying oven, i.e. after the designated drying time for the specific filter media. The ECD are relatively information rich, and regression or estimation can be utilized. The dataset is published in the IEEE DataPort repository and can be found here: [https://dx.doi.org/10.21227/hwa2-tp66](https://dx.doi.org/10.21227/hwa2-tp66) [30].
The features of the dataset can furthermore be classified into two feature types, i.e., status features and oven setting features.
#### III-B1 Oven setting features
The oven setting features describe the physical environment in which the filter is dried. They are the position of the filter media in the oven, the mean oven temperature during the drying time of each specific filter media, the mean oven differential pressure during the specific drying time of the filter media, the current oven temperature, and the initial mass of the filter media before drying begins.

Fig. 3: (a) Distribution of normalized dimensionless mean differential pressure of each filter media during the drying process. (b) Distribution of the normalized dimensionless mean oven temperature of each drying experiment. (c) Distribution of the normalized dimensionless initial mass of filter media. (d) Normalized dimensionless estimated filter media temperature at extraction time as a function of MC. A clear relationship between estimated filter media temperature and MC can be seen. Cluster in upper left corner corresponds to the ICD. (e) MC as a function of normalized dimensionless drying time. Large variance in both drying time and MC can be seen. Cluster in upper left corner corresponds to the ICD.
Fig. 3(a) shows the distribution of the differential pressure over the fan pushing the air into the oven. The differential pressure is correlated with air speed, and thus the mass of air circulating in the oven. As can be seen, one set of 20 drying experiments was done under different circumstances from the rest of the filter media, and a trained model will need to be able to encompass this deviation in oven setting parameters as well. The outlier data has been included as it will serve to challenge the performance of the produced models.
Fig. 3(b) shows the distribution of the mean oven temperature during the drying experiments of each filter media. Here, a bi-normal distribution can be seen. This is due to the unfortunate deconstruction and reconstruction of the test oven during the multi-month data acquisition period. If the estimation models are able to encompass these different oven setting features, it bodes well for the generalizability of the model.
Fig. 3(c) shows the distribution of the initial mass, which follows a skewed Gaussian distribution.
#### III-B2 Status features
The status features are the features correlated with the current drying status of the filter, i.e., the drying time and the estimated filter media temperature.
Fig. 3(d) shows the MC as a function of the dimensionless normalised estimated filter media temperature. A clear dependence of the MC on the estimated filter media temperature can be identified: the lower the temperature, the larger the variation in MC, as is expected from the behaviour of a typical drying curve. The estimated filter media temperature holds much of the information that the proposed models will be able to utilize in order to make good estimates.
Fig. 3(e) shows a relationship between drying time and MC. There is a large variance along the MC axis, especially for lower drying times. This variance is where the possible gains of utilizing MC estimation can be seen. All low-drying-time or low-MC datapoints represent the possible optimization gains, as early stopping of the drying process can be done if identification of the MC is possible.
### _Competing Estimation Models_
The proposed ANN-based approach is compared with data-driven models reported as state of the art for different MC estimation applications in the literature. To establish a baseline performance we use semi-empirical thin layer drying models, see Table III. The thin layer drying models are all fitted using nonlinear least squares in the Matlab curve fitting toolbox [31].
Furthermore, we compare the ANN approach to SVR and RFR as reported by [2], ANFIS as reported by [3], and partial least squares (PLS), which acts as a baseline for the machine learning models. All competing models estimate the MC as output. The input for the thin layer drying models is solely the drying time. The inputs for the machine learning models are the same as those for the ANN.
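The thin layer models were fitted with nonlinear least squares in Matlab; purely as an illustration, the one-parameter Lewis model \(MC=\exp(-kt)\) can be fitted in closed form by least squares on \(\ln MC\) (a simplification: this minimizes log-domain rather than untransformed residuals):

```python
import math

def fit_lewis(times, mc_ratios):
    """Least-squares estimate of k in MC = exp(-k t), obtained by
    regressing ln(MC) = -k t through the origin:
    k = -sum(t * ln MC) / sum(t^2)."""
    num = sum(t * math.log(m) for t, m in zip(times, mc_ratios))
    den = sum(t * t for t in times)
    return -num / den

# Synthetic, noise-free drying curve generated with k = 0.05 (illustrative):
ts = [10.0, 20.0, 40.0, 80.0]
ms = [math.exp(-0.05 * t) for t in ts]
k_hat = fit_lewis(ts, ms)   # recovers k = 0.05
```

The multi-parameter models in Table III have no such closed form and require an iterative nonlinear solver, as used in the study.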
All models come in two variations. One is trained on the entirety of the data, referred to as With Initial Conditions (WIC), and one is trained only on the ECD, referred to as No Initial Conditions (NIC). As postulated earlier, the ICD is relatively information poor, and thus might hamper the estimation performance in the range of interest, the ECD. For practical applications, the quality of the estimates on the ICD can be ignored, as it is a trivial case.
### _Model Performance Validation_
All models are validated using repeated 10 fold cross validation as described by [38]. Regular 10 fold cross validation was performed by splitting the data into 10 folds, training on all but one fold, and then using the left-out fold for validation. This process was then repeated across all 10 folds, resulting in averages of the estimation error measures as described in (2), (3), (4), and (5). The data was then shuffled, and the above process was repeated five times. Therefore, all results reported in this section are based on validation data and not training data. Furthermore, all results reported are averages of the five times repeated 10 fold cross validation trials.
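The split generation for the five times repeated 10 fold cross validation can be sketched as follows (our illustration; the study's fold-assignment details may differ):

```python
import random

def repeated_kfold(n_samples, k=10, repeats=5, seed=0):
    """Yield (train_indices, test_indices) for `repeats` rounds of k-fold
    cross validation, reshuffling the data between rounds."""
    rng = random.Random(seed)
    idx = list(range(n_samples))
    for _ in range(repeats):
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        for i in range(k):
            test = folds[i]
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            yield train, test

# 322 observations, 10 folds, 5 repeats -> 50 train/test splits.
splits = list(repeated_kfold(n_samples=322, k=10, repeats=5))
```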
## IV Results and discussion
### _Model Estimation performance_
#### IV-A1 Thin Layer Drying models
The MC estimates for the baseline thin layer drying models are shown in Fig. 4 and 5. None of the thin layer drying models is able to satisfactorily estimate the MC along the entire drying range. The models trained on the entire data range, the WIC models, are able to correctly estimate the mean of the ICD; however, the models exhibit bias, as can be seen from the coefficient of determination in Table IV and Fig. 4. The WIC models are biased towards higher MC estimates for low MC measurements, and lower MC estimates for high experimental MC.
The Midilli et al. models, as seen in Fig. 4(f) and 5(f), and the Henderson NIC model, Fig. 5(d), are able to correctly estimate the mean of the MC of the ECD. However, they are unable to deal with the large variance of the underlying data, as can be seen from the high MSE and STD measurements in Table IV, rendering these estimation methods unsuitable for practical applications.
The thin layer drying models fitted only on the ECD are generally unable to estimate the ICD. The MC estimation performance of the Lewis and Page models both improve significantly by fitting only to the ECD, whereas the more complex Two Term, Henderson, Logarithmic and Midilli et al. models perform significantly better when trained including the ICD. This is as expected, as these models were designed to improve MC estimation along the entire drying range, in contrast to the older Lewis and Page models, which were designed for the falling rate period.
TABLE III: Thin-layer drying models

| Model | Equation | Reference |
| --- | --- | --- |
| Lewis | \(MC=\exp(-kt)\) | [32] |
| Page | \(MC=\exp(-kt^{n})\) | [33] |
| Two term | \(MC=a\exp(-k_{1}t)+b\exp(-k_{2}t)\) | [34] |
| Henderson | \(MC=a\exp(-kt)\) | [35] |
| Logarithmic | \(MC=a\exp(-kt)+c\) | [36] |
| Midilli et al. | \(MC=a\exp(-kt^{n})+bt\) | [37] |
Fig. 4: Test fold results of model MC estimates for semi-empirical thin-layer drying models fitted using both ECD and ICD. Blue line indicates perfect estimation. Error bars represent the standard error of the mean, for the five times repeated cross-validation trials.
Fig. 5: Test fold results of model MC estimates for semi-empirical thin-layer drying models fitted using only ECD. Teal line indicates perfect estimation. Blue markers indicate the mean estimate and error bars represent the standard error of the mean, for the five times repeated cross-validation trials. Error bars are best seen on an electronic device with a zoom function. Fig. 5(c), 5(d), 5(e), and 5(f) estimate ICD above 100 and results are thus not shown.
Fig. 7: Test fold results of model MC estimates for machine learning models trained only on ECD. Teal line indicates perfect estimation. Blue markers indicate the mean estimate and error bars represent the standard error of the mean, for the five times repeated cross-validation trials. Error bars are best seen on an electronic device with a zoom function.
#### IV-A2 Machine learning models
Generally, the machine learning models, as seen in Figs. 6 and 7, outperform the thin-layer drying models. The ANN WIC model, as seen in Fig. 6a, is able to do meaningful MC estimation on both the ICD and ECD with low variance and low bias. Furthermore, the small standard error indicates that the model performs consistently well, independently of the selection of the cross-validation folds.
The ANN NIC model, as seen in Fig. 7a, does not generalize well outside the range of the data it has seen during training. However, when evaluated using only the ECD, i.e., the range of data it has been trained on, it outperforms every other model tested here on all performance metrics except for STD, where it differs only slightly from the ANN WIC model. Furthermore, the ANN WIC and ANN NIC models are the only models to achieve a coefficient of determination close to one, indicating a low bias in the estimation results. The RFR WIC model, as seen in Fig. 6b, is generally able to perform the MC estimation task successfully. However, it suffers from outliers, resulting in a high MSE, as seen in Table IV, which reduces its applicability in a real-world scenario. Both the SVR WIC and PLS WIC models, Figs. 6c and 6d, generally overestimate the MC. This is preferable to underestimating the MC in a real-world scenario, as it will tend to over-dry filter media rather than under-dry them, which is worse for product quality. However, they are generally unable to do satisfactory MC estimation for bulky filter media products.
The performance of the ANFIS WIC model, Fig. 6e, can be separated into three periods. For MC \(<30\), the ANFIS WIC model performs well, and its performance is independent of the folds chosen during the 10-fold cross-validation. However, for \(30<\text{MC}<70\), performance depends on which data were chosen for training, as can be seen from the standard errors indicated by the error bars. This estimation variance originates from the tuning of the ANFIS membership function parameters when the entire data range has not been seen in the training folds. The ANFIS NIC model suffers badly from this problem of tuning its membership function parameters on data outside its training folds: Fig. 7e shows that the ANFIS NIC model is generally unable to perform any useful estimation. Except for the ANN NIC model, the remaining machine learning NIC models suffer from a bias towards lower estimates at higher experimental MC.
The expected estimation performance is summarized in Table IV across two different ranges: one containing both the ICD and ECD, and one containing only the ECD. When evaluated on both the ICD and ECD, the ANN WIC model outperforms all other models on all performance criteria. When considering only the ECD, the ANN NIC model slightly outperforms the ANN WIC model on MAE, MSE and \(R^{2}\), while the ANN WIC model performs best on the \(STD\) metric; however, the differences are insignificant.
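For reference, the four performance criteria quoted here can be computed as follows. The exact definitions used in the paper's Table IV are not shown in this excerpt, so the standard formulations are assumed:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MSE, coefficient of determination R^2, and the standard
    deviation of the residuals (assumed standard definitions; the
    paper's Table IV is not reproduced in this excerpt)."""
    residuals = y_true - y_pred
    mae = np.mean(np.abs(residuals))
    mse = np.mean(residuals ** 2)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    std = np.std(residuals)
    return {"MAE": mae, "MSE": mse, "R2": r2, "STD": std}
```

A perfect estimator gives MAE and MSE of zero and an \(R^{2}\) of one, which is why a coefficient of determination close to one is read above as low estimation bias.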
Taking a closer look at the four best competing models, Fig. 8 shows their performance as a function of drying time. The ANN WIC model significantly outperforms the other models at dimensionless normalized drying times close to 0. The ANN WIC model seems able to encapsulate the drying phenomena along the entirety of the drying curve, whereas the RFR models are unable to correctly estimate the MC at both low and medium drying times. The ANN NIC model is unable to generalize outside the range of the training data it has seen. However, inside the training data range, it performs similarly to or better than the ANN WIC model. All models are able to correctly estimate the MC for long drying times; however, this feature is neither surprising nor interesting in an optimization setup where the goal is to decrease drying times.
We see that the estimation quality also depends on the size of the estimates. Fig. 9 shows that smaller estimates result in lower average estimation errors and standard errors. Both the average error and the standard error decrease for all models for estimates below approximately 10% normalized MC, which is the range that is especially important for industrial applications. The high variance and mean MAE of the RFR and ANN NIC models are caused by those models' inability to correctly estimate the MC of the ICD. Only the ANN WIC model can correctly estimate the MC across the entire drying range.
### _Practical Considerations_
In a manufacturing setting, not all estimates are made equal. The most important range of dimensionless MC estimates is below 10%. Fig. 9 shows that in this region all models, RFR as well as ANN, perform well enough on average to be used for optimizing and controlling the manufacturing process. However, the best-performing model is the ANN NIC model, as can also be seen in Table IV.
Fig. 8: Moving average of MC estimate MAE as a function of dimensionless normalized drying time, with a window size of \(\pm 8\). Colored areas indicate one standard deviation of the absolute residuals inside the rolling window. Areas without data indicate no measurements of the underlying data within the rolling window.
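The rolling statistic behind Fig. 8 can be sketched as below. Whether the \(\pm 8\) window is counted in samples or in time units is an assumption of this sketch, not something stated in the excerpt:

```python
import numpy as np

def rolling_mae(times, abs_residuals, window=8):
    """Moving average and standard deviation of absolute residuals,
    evaluated at each sample over all points whose (time-sorted) index
    lies within +/- `window` of it. Illustrative reimplementation of
    the Fig. 8 statistic; the paper's exact windowing convention is
    assumed, not known."""
    order = np.argsort(times)
    r = np.asarray(abs_residuals)[order]
    means, stds = [], []
    for i in range(len(r)):
        lo, hi = max(0, i - window), min(len(r), i + window + 1)
        means.append(r[lo:hi].mean())
        stds.append(r[lo:hi].std())
    return np.array(means), np.array(stds)
```

The mean trace corresponds to the curves in Fig. 8 and the standard deviation trace to the shaded bands around them.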
## V Conclusion
In this study, a dataset consisting of 322 observations from 161 individual industrial drying experiments on bulky filter media products was collected and presented. A total of 21 competing MC estimation models were developed, trained, fitted, tested and compared using this data. The models were tested and compared using a five times repeated 10-fold cross-validation scheme. A three-layer MLP ANN was found to be the most successful algorithm for estimating MC in bulky filter media products. The results show that including the ICD in the training set of the ANN hampers MC estimation performance in the region of interest. This is in contrast to the other investigated machine learning approaches, such as ANFIS, PLS, and SVR, where the performance either increases or is unchanged when the ICD is included in the training set. For manufacturing purposes, the most interesting region is below 10% normalised MC, as this is where one might consider stopping the drying process. The average model estimation error decreases for all models in this region; however, the ANN NIC model performs the best. Overall, the results show that the developed ANN MC estimation approach is suitable for industrial usage.
In general, we conclude that while thin-layer drying models have been reported to perform well for MC estimation in the fields of drying foodstuffs and agricultural products, they are unable to encapsulate the underlying variance of the bulky filter media data. The present findings furthermore show that ANNs, combined with measurements of drying settings (oven temperature, differential pressure (fan speed), etc.) and only two status features, the drying time and the product temperature, can be successfully used as a non-destructive MC estimation technique for bulky filter media products. Furthermore, this study establishes a baseline MC estimation performance and presents a dataset that can be used for model development and testing. As such, this study constitutes a significant contribution to both academic researchers and industrial drying designers. Further research into improving the quality of MC estimation by looking at temporal data, such as the change of the input data over time, is recommended. It would also be of great interest to predict the evolution of the MC, i.e., the drying curve of a specific filter medium, in order to estimate the remaining required drying time for scheduling and, possibly, for optimization of the drying oven parameters, such as oven temperature and air flow.
## VI Acknowledgement
The authors would like to thank Cuu Van Nguyen for support, heavy lifting, and dedicated assistance during the arduous data collection phase.
---

# Prototype of a Cardiac MRI Simulator for the Training of Supervised Neural Networks

Marta Varela, Anil A. Bharath. arXiv:2305.15826v1, 2023-05-25. http://arxiv.org/abs/2305.15826v1
###### Abstract
Supervised deep learning methods typically rely on large datasets for training. Ethical and practical considerations usually make it difficult to access large amounts of healthcare data, such as medical images, with known task-specific ground truth. This hampers the development of adequate, unbiased and robust deep learning methods for clinical tasks.
Magnetic Resonance Images (MRI) are the result of several complex physical and engineering processes and the generation of synthetic MR images provides a formidable challenge. Here, we present the first results of ongoing work to create a generator for large synthetic cardiac MR image datasets. As an application for the simulator, we show how the synthetic images can be used to help train a supervised neural network that estimates the volume of the left ventricular myocardium directly from cardiac MR images.
Despite its current limitations, our generator may in the future help address the current shortage of labelled cardiac MRI needed for the development of supervised deep learning tools. It is likely to also find applications in the development of image reconstruction methods and tools to improve robustness, verification and interpretability of deep networks in this setting.
Keywords:Cardiac MRI MRI Simulator Synthetic Cardiac Images Training of Supervised Neural Networks Cardiac Volume Estimation.
## 1 Introduction
Recent developments in machine learning (ML) have improved rapid reconstruction techniques, enabled automatic image analysis and enhanced the interpretation of medical images in general and cardiac Magnetic Resonance Imaging (MRI) in particular [7]. Progress in this area has nevertheless been restricted by shortages of large anonymised curated datasets, and by difficulties in obtaining high-quality ground truth annotations for supervised learning tasks [13]. These problems are compounded by the under-representation of patients from minority backgrounds and of those with infrequent anatomical variations or rare diseases.
Moreover, deploying ML models to novel imaging sequences is often delayed until a large number of similarly parameterised images has been accrued.
The creation of large datasets of synthetic images whose properties follow prescribed statistical distributions could help alleviate some of the current constraints in the deployment of supervised neural networks (NNs) for medical imaging tasks. The physical processes underlying the nuclear magnetic resonance (NMR) of water protons in biological tissue are complex, as are the interactions of the protons' magnetization vectors with the MRI magnets and other engineering equipment. As such, simulating MRI acquisition necessarily involves simplifications and trade-offs between accuracy and speed. Several simulators with varying degrees of complexity and focusing on different anatomical regions have been proposed [1, 11, 6, 14, 2]. So far none of these has allowed the automatic generation of large sets of MR images with controlled variation in anatomical or imaging parameters suitable for the training of NNs.
#### 1.0.1 Aims
We present a generator of synthetic cardiac MR images, designed to allow the creation of large imaging datasets with controlled parameter variations. As an application, we show how the synthetic data can be used to train a network that estimates left ventricular myocardial volume (LVMV).
## 2 Methods
### MRI Simulator
We created a Python 3.8-based modular simulator of cardiac MRI. The simulator takes the following information as independent inputs:
1. Cardiac phantom (described below);
2. Scanner characteristics. We assumed simulated image acquisition at \(B_{0}=1.5\) T, with a spatially uniform \(B_{1}\) field and perfect gradient coils with an infinite slew rate. We also assumed a single receive coil with a spatially uniform sensitivity.
3. List of MR sequence parameters similar to the ones input by radiographers at the time of scanning. We used an axial 3D gradient echo sequence with: an echo time of 40-60 ms, a repetition time of 3000 ms, an excitation flip angle of \(10-20^{\circ}\) and an 80 MHz bandwidth. We used a Cartesian sampling scheme with a linear phase-encode order, RF spoiling and no parallel imaging capabilities.
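For intuition about how these sequence parameters shape contrast, the steady-state spoiled gradient-echo signal can be written in closed form. This is the textbook Ernst formula, shown purely as an illustration; the simulator itself integrates the Bloch equations numerically, and the tissue values below are rough assumptions rather than the paper's:

```python
import numpy as np

def spgr_signal(m0, t1, t2_star, tr, te, flip_deg):
    """Steady-state spoiled gradient-echo (SPGR) signal magnitude:
    S = M0 sin(a) (1 - E1) / (1 - E1 cos(a)) exp(-TE/T2*),
    with E1 = exp(-TR/T1). Times in ms, flip angle in degrees."""
    alpha = np.deg2rad(flip_deg)
    e1 = np.exp(-tr / t1)
    return (m0 * np.sin(alpha) * (1.0 - e1) / (1.0 - e1 * np.cos(alpha))
            * np.exp(-te / t2_star))

# Myocardium-like vs fat-like tissue at the paper's nominal settings
# (TR 3000 ms, TE 50 ms, flip 15 deg); T1/T2* values are illustrative.
s_myo = spgr_signal(1.0, t1=1000.0, t2_star=40.0, tr=3000.0, te=50.0, flip_deg=15.0)
s_fat = spgr_signal(1.0, t1=300.0, t2_star=60.0, tr=3000.0, te=50.0, flip_deg=15.0)
```

With TR much longer than T1, the \(T_{1}\) weighting nearly vanishes and the long echo time makes the contrast predominantly \(T_{2}^{*}\)-weighted, which is consistent with the \(T_{2}\)-weighted images the paper generates.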
We solved the Bloch equations with the CPU-based forward Euler single-compartment solver initially proposed by Liu _et al._[8] and a fast Fourier transform for image reconstruction. Spatial motion of the excited protons during the image acquisition process (caused by blood flow, cardiac and respiratory cycles, diffusion or patient motion) was not simulated.
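A minimal sketch of such a forward-Euler, single-compartment Bloch solver follows. It is illustrative only: the authors' implementation follows Liu et al. [8] and is not reproduced here.

```python
import numpy as np

def bloch_forward_euler(m, b_eff, t1, t2, m0, dt, n_steps):
    """Forward-Euler integration of the Bloch equations for a single
    magnetization vector m = (mx, my, mz) in the rotating frame.

    b_eff is the effective field expressed in rad/s (gamma*B already
    applied); t1, t2, dt are in seconds."""
    for _ in range(n_steps):
        dm = np.cross(m, b_eff)                      # precession term
        dm -= np.array([m[0] / t2, m[1] / t2, 0.0])  # transverse relaxation
        dm[2] += (m0 - m[2]) / t1                    # longitudinal recovery
        m = m + dt * dm
    return m

# Free relaxation (no applied field) over 1 s: the transverse component
# decays with T2 while the longitudinal component recovers towards m0.
m = bloch_forward_euler(
    m=np.array([1.0, 0.0, 0.0]),        # magnetization tipped into x
    b_eff=np.zeros(3), t1=1.0, t2=0.1,  # seconds
    m0=1.0, dt=1e-4, n_steps=10000,
)
```

In the simulator, one such integration is performed per voxel, with the voxel's own \(T_{1}\)/\(T_{2}\) values and the sequence's RF and gradient waveforms folded into `b_eff`.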
### Cardiac MRI Phantom
Digital cardiac phantoms are a necessary component of cardiac medical image simulators. There are several atlases and mesh-based models of the human heart [16], but realistic cardiac MR images rely on accurate representations of the entire thoracic anatomy. Digital models that rely on non-uniform rational B-splines (NURBS) are particularly flexible and efficient, and have been employed in computed tomography and nuclear medicine image simulators in the past. The different approaches employed to generate computational human phantoms are carefully reviewed elsewhere [5].
We used the XCAT phantom [9; 10], which is a detailed NURBS-based representation of human anatomy originally based on the segmentations of the high-resolution Visible Human Male and Female images. We took the thoracic region of the XCAT phantom, adjusted to \(50^{th}\)-percentile organ volume values [10], as our baseline anatomical representation. Taking advantage of the flexibility of NURBS in representing structures with varying morphology and volume, we independently varied the following anatomical parameters: heart dimensions (varied independently in the FH, LR and AP directions); LV radius; apico-basal length; LV thickness; and LV thickness close to the apex. All these parameters followed independent uniform distributions, so that their values varied between 80% and 120% of the original values in the reference XCAT phantom. For each instance, we randomly chose between the male and female phantom representations. Clinical experts visually inspected the generated phantom anatomy instances to ensure that they were anatomically plausible (i.e., that they could be segmentations of realistic torso anatomies). We also confirmed that the LVMV range in the generated phantoms was similar to the LVMV of the imaged patients (see below). This was the only quantitative test of anatomical plausibility we performed on the simulated images.
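The sampling scheme described above can be sketched as follows. The parameter names and reference values are hypothetical placeholders introduced for this sketch, not XCAT's actual interface:

```python
import numpy as np

# Reference values are placeholders (normalised to 1.0); in the real
# pipeline they would be the 50th-percentile XCAT values.
REFERENCE = {
    "heart_scale_fh": 1.0, "heart_scale_lr": 1.0, "heart_scale_ap": 1.0,
    "lv_radius": 1.0, "apico_basal_length": 1.0,
    "lv_thickness": 1.0, "lv_apex_thickness": 1.0,
}

def sample_phantom(rng):
    """Draw one phantom instance: each anatomical parameter is sampled
    independently and uniformly from 80%-120% of its reference value,
    and the phantom sex is chosen at random."""
    params = {k: v * rng.uniform(0.8, 1.2) for k, v in REFERENCE.items()}
    params["sex"] = rng.choice(["male", "female"])
    return params

# One independent draw per simulated image, e.g. for a 500-image dataset.
phantoms = [sample_phantom(np.random.default_rng(i)) for i in range(500)]
```

Because every draw is independent, datasets of arbitrary size with a controlled anatomical distribution can be generated in parallel.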
Figure 1: Cross section of the \(T_{1}\) (A) and \(T_{2}\) (B) maps for the XCAT phantom. The values assigned to each voxel follow a normal distribution around the literature values of the respective tissue. The standard deviation is 30% for blood and 10% for all other tissues. Lumen and cortical bones were treated like air and assigned proton densities close to zero. A similar approach was followed when assigning \(T_{2}^{*}\) values.
We then voxelised the phantom representations into images with dimensions \(256\times 256\times 15\) with a \(1.0\times 1.0\times 6.0\ mm^{3}\)-resolution. Before voxelisation, we randomly introduced translations of \(<2\ cm\) and rotations of \(<6^{\circ}\) along any axis to the phantom in order to model the variability of body position within the scanner. For each of the 17 segmented tissues in the phantom (see Supplementary Table 1), we created Gaussian distributions of water proton NMR relaxation time constants (\(T_{1}\), \(T_{2}\), \(T_{2}^{*}\)). The distributions were centred on literature values at \(1.5\ T\) for each NMR parameter and their standard deviation was set to 10% of the corresponding literature value. An exception was made for arterial and venous blood, whose standard deviation was set to 30% to simplistically model the greater variability in signal due to blood flow. We neglected variations in magnetic susceptibility across the body. Each voxel in the simulated image had NMR parameters randomly chosen from the multivariate Gaussian distribution corresponding to its tissue type.
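A sketch of this per-voxel parameter assignment, assuming a labelled tissue map, is given below. The tissue values are illustrative stand-ins for the paper's full 17-tissue table (Supplementary Table 1):

```python
import numpy as np

# label: (name, T1 ms, T2 ms, T2* ms, relative sd) -- illustrative
# values only; the paper uses literature values at 1.5 T for 17 tissues,
# with a 10% sd (30% for blood, to mimic flow-related variability).
TISSUES = {
    1: ("myocardium", 1000.0, 50.0, 40.0, 0.10),
    2: ("blood", 1400.0, 250.0, 200.0, 0.30),
    3: ("fat", 300.0, 80.0, 60.0, 0.10),
}

def assign_nmr_maps(label_map, rng):
    """Return per-voxel T1/T2/T2* maps: each voxel draws its relaxation
    constants from a Gaussian centred on its tissue's literature value."""
    t1 = np.zeros(label_map.shape)
    t2 = np.zeros(label_map.shape)
    t2s = np.zeros(label_map.shape)
    for label, (_, m_t1, m_t2, m_t2s, sd) in TISSUES.items():
        mask = label_map == label
        n = int(mask.sum())
        t1[mask] = rng.normal(m_t1, sd * m_t1, n)
        t2[mask] = rng.normal(m_t2, sd * m_t2, n)
        t2s[mask] = rng.normal(m_t2s, sd * m_t2s, n)
    return t1, t2, t2s
```

This intra-tissue variation is what gives the simulated images texture instead of the flat per-tissue intensities a purely deterministic assignment would produce.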
### Left Ventricular Myocardial Volume (LVMV) Estimation
To demonstrate a potential application of the cardiac MRI simulator as a generative model for deep learning, we created an _in silico_ dataset of 500 \(T_{2}\)-weighted axial stacks of cardiac images, with the variations in phantom anatomy and MR sequence parameters detailed above. We trained a regression NN to automatically estimate the left ventricular myocardial volume (LVMV) in three different experimental setups; see Table 1. The patient data were part of an ethically approved retrospective study. They were acquired in a 1.5 T Siemens scanner, using a \(T_{2}\)-prepared multi-slice gradient echo sequence (TE/TR: 1.4/357 ms, FA: \(80^{\circ}\), resolution: \(1.4\times 1.4\times 6\ mm^{3}\)). Ground-truth LVMVs were calculated using an existing segmentation CNN [4] applied to the patient images.
We trained the CNN shown in Fig 2 for the regression task using a mean square error loss function. We trained for 150 epochs, using Adam optimisation with a \(10^{-4}\) learning rate, \(10^{-4}\) weight decay, batch size of 2 and 0.1 dropout.
## 3 Results
#### 3.0.1 MRI Simulator
The simulator was able to generate realistic sets of cardiac MR images according to the visual assessment of experts. All represented anatomical
| Experiment | # Training Images | # Test Images |
| --- | --- | --- |
| A | 400 (simulated) | 100 (simulated) |
| B | 393 (patient) | 99 (patient) |
| C | 393 (patient) + 500 (simulated) | 99 (patient) |

Table 1: Number of simulated and patient images used in each of the experiments, as part of the training and test sets.
structures preserved their smooth, non-overlapping boundaries. A representative example is shown in Fig 3A. The simulation of each 3D cardiac image took approximately 2 h on a single CPU. By distributing the image generation process across 64 CPUs, we were able to simulate sets of 500 cardiac MR images in less than 16 h. Despite differences in contrast and anatomy, the simulated images compare well with the cropped patient images depicted in Fig 3B, showing similar cardiac structures in an equivalent anatomical context.
### Left Ventricular Myocardial Volume (LVMV) Estimation
With the introduced variations in cardiac anatomy, we were able to simulate images with varying, realistic cardiac sizes and morphologies. The LVMV in the simulated images was \(102\pm 19\ cm^{3}\) (range: \(55-163\ cm^{3}\)), compared to \(123\pm 60\ cm^{3}\) (range: \(43-242\ cm^{3}\)) from the segmentations of the patient data.
The regression CNN was able to estimate the LVMV, as shown in Fig 4. LVMV estimates on simulated data were accurate and precise (Fig 4a) when compared to LVMV estimates performed on patient images (Figs 4b and c). Enhancing the training dataset with simulated data (Fig 4c) led to a small decrease in the dispersion of the estimates (root mean square error (RMSE) of \(39.0\ cm^{3}\) instead of \(45.1\ cm^{3}\)), whilst introducing a small drop in accuracy (best-fit slope of \(0.94\pm 0.03\) vs \(0.97\pm 0.03\)). The comparisons are performed against the data shown in Fig 4b, where no simulated images were used for training.
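The two summary statistics quoted for these scatter plots can be computed as below. Which variable is regressed on which through the origin is an assumption of this sketch, not stated in the paper:

```python
import numpy as np

def fit_through_origin(y_true, y_pred):
    """Least-squares slope of a line through the origin (modelling
    y_pred = slope * y_true) and the RMSE of the estimates.
    Illustrative reimplementation, not the authors' analysis code."""
    slope = np.sum(y_true * y_pred) / np.sum(y_true ** 2)
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    return slope, rmse

# A perfectly calibrated estimator gives slope 1 and RMSE 0.
slope, rmse = fit_through_origin(np.array([50.0, 100.0, 150.0]),
                                 np.array([50.0, 100.0, 150.0]))
```

A slope below one, as reported for experiment C, indicates a systematic underestimation of larger volumes, while the RMSE captures the overall dispersion around the identity line.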
## 4 Conclusions
We present a detailed cardiac MRI simulator, capable of reproducing the main imaging features of cardiac MRI with high anatomical detail. The proposed
Figure 2: Schematic diagram of the regression CNN used. The network was implemented in PyTorch using the ’Regressor’ module from the MONAI library [3]. The architecture consists of 5 residual units which downsample the inputs by a factor of 2 via strided convolutions. Each residual unit consists of three convolutional layers (kernel size 3\(\times\)3) followed by instance normalisation and a PReLU activation function. Skip connections are employed to bypass the convolutions. The network ends with a fully connected layer resizing the output from the residual blocks to a single value to which a linear activation function is applied.
simulator is designed to allow the rapid and efficient generation of batches of cardiac MR images, with controlled variation in some parameters (anatomical and/or related to MR imaging). This makes the simulator ideal for the development and testing of machine learning applications. Suitable tasks include the training of supervised NNs which rely on large amounts of training data, such
Figure 3: Examples of stacks of axial cardiac images. A: Images simulated using the proposed framework. B: Patient images acquired in a 1.5T scanner.
as NNs for image analysis tasks, as the LVMV estimation task presented here exemplifies. It is also well suited to train NNs for image reconstruction or image manipulation tasks, for which ground truth information is often not readily available. Finally, it offers a platform to study the robustness of existing NNs to controlled training distribution shifts or adversarial perturbations, and to address potential imbalances in existing datasets.
Others have also seen the potential of simulated MR images for machine learning. For example, Xanthis _et al._[15] have generated synthetic MR images to train a neural network for a cardiac segmentation task. Their approach, however, is not specifically catered towards allowing the rapid parallel generation of MR images whose parameters follow a specific statistical distribution.
Despite its potential, the current implementation of the simulator is still a work in progress and suffers from some limitations that restrict the realism of the images it produces. For example, the simulator currently does not include: cardiac and breathing motion; the effects of water diffusion or blood flow; interactions between different proton pools; partial volume effects; inhomogeneities in the \(B_{0}\) or \(B_{1}\) fields; variations in magnetic susceptibility; or realistic transmit and receive coil sensitivities. These limitations explain the relative lack of success we achieved when enhancing the training of our LVMV regression network with simulated data (see Fig 4). The LVMV regression network performed well on the synthetic data, where tissue intensities are relatively flat and well defined, but poorly on real data, with its more complex intensity distribution. In this instance, adding simulated data to the training pool did not help the network's performance, but this is likely to change when more realistic receive coil sensitivities are included in the simulation process.
Future work will test the usefulness of the proposed cardiac MRI simulator in other datasets and tasks. We will test the proposed LVMV estimation CNN on open access cardiac MRI databases with LV segmentations delineated by
Figure 4: Scatter plots on test data of LVMV estimation. From left to right, we depict the outcomes of experiments A, B and C respectively (see Table 1 for further details). In each case, the identity y=x line and the best fit line are also shown. We also indicate the values of the slope of the best fit line (going through the origin) and the root mean square error of the volume estimates in each experiment.
experts, such as the short-axis images available from the 2011 LV Segmentation Challenge [12]. The proposed simulator is also well suited for training networks for other tasks, such as the identification of anatomical structures in different MRI slices. It is currently being tested for this purpose.
The XCAT phantom setup we used allows for several variations in anatomy and properties, from variations in heart dimensions to alterations in LV thickness, which mimic both physiological variability and disease-induced remodelling. It currently does not allow changes in the anatomy of the other thoracic and abdominal organs present in the images or in the orientation of the heart relative to its surroundings, although patient data shows a large degree of variation in this latter parameter. The current version of XCAT phantom also does not permit the straightforward inclusion of focal pathology, such as myocardial scarring.
Despite its current shortcomings, we believe that the presented cardiac MRI simulator is a useful and attractive platform for the generation of large datasets of synthetic cardiac MR images in a controlled and simple manner. We expect that in the future these synthetic datasets will be used to train NNs and improve their robustness and explainability.
## 5 Acknowledgments
This work was supported by the British Heart Foundation Centre of Research Excellence at Imperial College London (RE/18/4/34215). The authors would like to thank Abhishek Roy, Tommy Chen and Krithika Balaji for their contributions.
|