{"query": "You are a highly experienced, conscientious, and fair academic reviewer; please help me review this paper. The review should be organized into nine sections: \n1. Summary: A summary of the paper in 100-150 words.\n2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of the paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.\n3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, match each score to the corresponding description from the list below, and provide the result. The possible scores and their descriptions are: \n 1 poor\n 2 fair\n 3 good\n 4 excellent\n4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below, and provide the result. The possible Ratings and their descriptions are: \n 1 strong reject\n 2 reject, significant issues present\n 3 reject, not good enough\n 4 possibly reject, but has redeeming facets\n 5 marginally below the acceptance threshold\n 6 marginally above the acceptance threshold\n 7 accept, but needs minor improvements \n 8 accept, good paper\n 9 strong accept, excellent work\n 10 strong accept, should be highlighted at the conference \n5. 
Paper Decision: It must include the Decision itself (Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.\n\nHere is the template for a review format; you must follow this format to output your review result:\n**Summary:**\nSummary content\n\n**Strengths:**\n- Strength 1\n- Strength 2\n- ...\n\n**Weaknesses:**\n- Weakness 1\n- Weakness 2\n- ...\n\n**Questions:**\n- Question 1\n- Question 2\n- ...\n\n**Soundness:**\nSoundness result\n\n**Presentation:**\nPresentation result\n\n**Contribution:**\nContribution result\n\n**Rating:**\nRating result\n\n**Paper Decision:**\n- Decision: Accept/Reject\n- Reasons: reasons content\n\n\nPlease ensure your feedback is objective and constructive. The paper is as follows:\n\n# Explaining V1 Properties with a Biologically Constrained Deep Learning Architecture\n\n Galen Pogoncheff\n\nDepartment of Computer Science\n\nUniversity of California, Santa Barbara\n\nSanta Barbara, CA 93106\n\ngalenpogoncheff@ucsb.edu\n\nJacob Granley\n\nDepartment of Computer Science\n\nUniversity of California, Santa Barbara\n\nSanta Barbara, CA 93106\n\njgranley@ucsb.edu\n\nMichael Beyeler\n\nDepartment of Computer Science\n\nDepartment of Psychological & Brain Sciences\n\nUniversity of California, Santa Barbara\n\nSanta Barbara, CA 93106\n\nmbeyeler@ucsb.edu\n\n###### Abstract\n\nConvolutional neural networks (CNNs) have recently emerged as promising models of the ventral visual stream, despite their lack of biological specificity. While current state-of-the-art models of the primary visual cortex (V1) have surfaced from training with adversarial examples and extensively augmented data, these models are still unable to explain key neural properties observed in V1 that arise from biological circuitry. 
To address this gap, we systematically incorporated neuroscience-derived architectural components into CNNs to identify a set of mechanisms and architectures that more comprehensively explain V1 activity. Upon enhancing task-driven CNNs with architectural components that simulate center-surround antagonism, local receptive fields, tuned normalization, and cortical magnification, we uncover models with latent representations that yield state-of-the-art explanation of V1 neural activity and tuning properties. Moreover, analyses of the learned parameters of these components and stimuli that maximally activate neurons of the evaluated networks provide support for their role in explaining neural properties of V1. Our results highlight an important advancement in the field of NeuroAI, as we systematically establish a set of architectural components that contribute to unprecedented explanation of V1. The neuroscience insights that could be gleaned from increasingly accurate in-silico models of the brain have the potential to greatly advance the fields of both neuroscience and artificial intelligence.\n\n## 1 Introduction\n\nMany influential deep learning architectures and mechanisms that are widely used today, such as convolutional neural networks (CNNs) [1] and mechanisms of attention [2, 3, 4, 5], draw inspiration from biological intelligence. Despite decades of research into computational models of the visual system, our understanding of its complexities remains far from complete. Existing neuroscientific models of the visual system (e.g., generalized linear-nonlinear models [6, 7, 8, 9]) are often founded upon empirical observations from relatively small datasets, and are therefore unlikely to capture the true complexity of the visual system. 
While these models have successfully explained many properties of neural responses to simple stimuli, their simplicity does not generalize to complex image stimuli [10].\n\nFollowing their astounding success in computer vision, task-driven CNNs have recently been proposed as candidate models of the ventral stream in primate visual cortex [11, 12, 13, 14, 15], offering a path towards models that can explain hidden complexities of the visual system and generalize to complex visual stimuli. Through task-driven training alone (and in some cases, training a linear read-out layer [12, 13, 14, 15, 16, 17]), representations that resemble neural activity at multiple levels of the visual hierarchy have been observed in these models [16]. With the emergence of such properties, CNNs are already being used to enhance our knowledge of processing in the ventral stream [18].\n\nDespite these advancements, CNNs that achieve state-of-the-art brain alignment are still unable to explain many properties of the visual system. Most traditional CNNs omit many well-known architectural and processing hallmarks of the primate ventral stream that are likely key to the development of artificial neural networks (ANNs) that help us decipher the neural code. The development of these mechanisms remains an open challenge. A comprehensive understanding of neural processing in the brain (for instance, in the ventral stream) could in turn contribute to significant leaps in artificial intelligence (AI), an established goal of NeuroAI research [19, 20].\n\nIn this work, we take a systematic approach to analyzing the hallmarks of the primate ventral stream that improve model-brain similarity of CNNs. We formulate architectural components that simulate these processing hallmarks within CNNs and analyze the population-level and neuron-level response properties of these networks, as compared to empirical data recorded in primates. 
Specifically:\n\n* We enrich the classic ResNet50 architecture with architectural components based on neuroscience foundations that simulate cortical magnification, center-surround antagonism, local filtering, and tuned divisive normalization, and show that the resulting network achieves the top V1 Overall score on the integrative Brain-Score benchmark suite [16].\n* Although some of these components have been studied before in isolation, here we demonstrate their synergistic nature through a series of ablation studies that reveal the importance of each component and the benefits of combining them into a single neuro-constrained CNN.\n* We analyze the network parameters and stimuli that activate neurons to provide insights into how these architectural components contribute to explaining primary visual cortex (V1) activity in non-human primates.\n\n## 2 Background and Related Work\n\n**Model-Brain Alignment.** One central challenge in the field of NeuroAI is the development of computational models that can effectively explain the neural code. To achieve this goal, artificial neural networks must be capable of accurately predicting the behavior of individual neurons and neural populations in the brain. The primary visual cortex (V1) is one of the most well-studied areas of the visual system, with modeling efforts dating back to at least 1962 [21]; yet many deep learning models still fall short in explaining its neural activity.\n\nThe Brain-Score integrative benchmark [16] has recently emerged as a valuable tool for assessing the capabilities of deep learning models to explain neural activity in the visual system. 
This suite of benchmarks integrates neural recording and behavioral data from a collection of previous studies and provides standardized metrics for evaluating model explainability of visual areas V1, V2, V4, and IT, as well as additional behavioral and engineering benchmarks.\n\nAlthough CNNs draw high-level inspiration from neuroscience, current architectures (e.g., ResNet [22] and EfficientNet [23]) bear little resemblance to neural circuits in the visual system. While such differences may not necessarily hinder object recognition performance, these networks still fall short in mimicking many properties of highly capable visual systems. Although there may be many paths towards next-generation AI, foundational studies that have successfully merged principles of neuroscience and AI have shown promising improvements to traditional ANNs [24, 25, 26].\n\n**Center-Surround Antagonism.** As early as the retina, lateral inhibitory connections establish a center-surround antagonism in the receptive field (RF) of many retinal cell types, which is preserved by neurons in the lateral geniculate nucleus and the visual cortex. In the primate visual system, this center-surround antagonism is thought to facilitate edge detection, figure-ground segregation, depth perception, and cue-invariant object perception [27, 28, 29, 30], and is therefore a fundamental property of visual processing.\n\nCenter-surround RFs are a common component of classical neuroscience models [31, 32, 33], where they are typically implemented using a Difference of Gaussian (DoG) that produces an excitatory peak at the RF center with an inhibitory surround (Fig. 1A). 
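As a concrete illustration, a fixed-weight kernel of this form can be sketched with NumPy (a minimal sketch for illustration only, not code from any of the cited works; the kernel size and Gaussian widths are arbitrary):

```python
import numpy as np

def dog_kernel(size, sigma_center, sigma_surround, alpha=1.0):
    """Difference-of-Gaussians kernel: for alpha > 0, an excitatory
    center with an inhibitory surround; (x, y) = (0, 0) at the kernel center."""
    assert sigma_center < sigma_surround
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_center ** 2)) / (2 * np.pi * sigma_center ** 2)
    surround = np.exp(-r2 / (2 * sigma_surround ** 2)) / (2 * np.pi * sigma_surround ** 2)
    return alpha * (center - surround)

k = dog_kernel(9, sigma_center=1.0, sigma_surround=2.5)
```

A learnable variant would simply treat the widths and scaling factor as trainable parameters rather than constants.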
Although deep CNNs have the capacity to learn center-surround antagonism, supplementing traditional convolutional kernels with fixed-weight DoG kernels has been demonstrated to improve object recognition in the context of varied lighting, occlusion, and noise [34, 35].\n\n**Local Receptive Fields.** The composition of convolutional operations in CNNs enables hierarchical processing and translation equivariance, both of which are fundamental to core object recognition in the primate ventral visual stream. However, the underlying mechanism through which this is achieved is biologically implausible, as kernel weights are shared among downstream neurons. Though locally connected neural network layers can theoretically learn the same operation, traditional convolutions are typically favored in practice for their computational efficiency and performance benefits. However, local connectivity is a ubiquitous pattern in the ventral stream (Fig. 1B), and visual processing phenomena (e.g., orientation preference maps [36]) have been attributed to this circuitry pattern. In artificial neural systems, Lee _et al._ [37] observed the emergence of topographic hallmarks in the inferior temporal cortex when encouraging local connectivity in CNNs. Pogodin _et al._ [38] considered the biological implausibility of CNNs and demonstrated a neuro-inspired approach to reducing the performance gap between traditional CNNs and locally-connected networks, while achieving better alignment with neural activity in primates.\n\n**Divisive Normalization.** Divisive normalization is widespread across neural systems and species [39]. 
In early visual cortex, it is theorized to give rise to well-documented physiological phenomena, such as response saturation, sublinear summation of stimulus responses, and cross-orientation suppression [40].\n\nIn 2021, Burg and colleagues [41] introduced an image-computable divisive normalization model in which each artificial neuron was normalized by weighted responses of neurons with the same receptive field. In comparison to a simple 3-layer CNN trained to predict the same stimulus responses, their analyses revealed that cross-orientation suppression was more prevalent in the divisive normalization model than in the CNN, suggesting that divisive normalization may not be inherently learned by task-driven CNNs. In a separate study, Cirincione _et al._ [42] showed that simulating divisive normalization within a CNN can improve object recognition robustness to image corruptions and enhance alignment with certain tuning properties of primate V1.\n\n**Tuned Normalization/Cross-Channel Inhibition.** While it is not entirely clear whether divisive normalization should be performed across space and/or across channels in computational models (implementations vary widely), Rust _et al._ [43] demonstrated that many response properties of motion-selective cells in the middle temporal area, such as motion-opponent suppression and response normalization, emerge from a mechanism they termed \"tuned normalization\". In this scheme, a given neuron is normalized by a pool of neurons that share the same receptive field but occupy a different region in feature space. We adopt this idea in the present work (Fig. 1C), hypothesizing that enforcing feature-specific weights in the pooling signal might enable a deep net to learn \"opponent suppression\" signals, much like cross-orientation signals found in biological V1 [44, 45].\n\nFigure 1: Design patterns of neuro-constrained architectural components. A) Difference of Gaussian implements a center-surround receptive field. 
B) Local receptive fields of two neurons without weight sharing. C) Tuned divisive normalization inhibits each feature map by a Gaussian-weighted average of competing features. D) Log-polar transform simulating cortical magnification.\n\n**Cortical Magnification.** In many sensory systems, a disproportionately large area of the cortex is dedicated to processing the most important information. This phenomenon, known as cortical magnification, reflects the degree to which the brain dedicates resources to processing sensory information accompanying a specific sense. In the primary visual cortex, a larger proportion of cortical area processes visual stimuli presented at the center of the visual field as compared to stimuli at greater spatial eccentricities [46]. The relationship between locations in the visual field and corresponding processing regions in the visual cortex has commonly been modeled with a log-polar mapping (Fig. 1D) or derivations thereof [47, 48, 49, 50].\n\nLayers of artificial neurons in traditional CNNs have uniform receptive field sizes and do not exhibit any sort of cortical magnification, failing to capture these distinctive properties of neuronal organization in the primary visual cortex. Recent works have demonstrated that introducing log-polar sampling into CNNs can give rise to improved invariance and equivariance to spatial transformations [51, 52] and adversarial robustness [53].\n\n## 3 Methods\n\n### Neuro-Constrained CNN Architecture\n\nGiven the previous state-of-the-art V1 alignment scores achieved with ResNet50 [25], we adopted this architecture as our baseline and test platform. However, the architectural components that we considered in this work are modular and can be integrated into general CNN architectures. The remainder of this subsection details the implementation and integration of each architectural component within a neuro-constrained ResNet. 
In all experiments, we treated the output units from ResNet50 layer 1 as \"artificial V1\" neurons (refer to Section 3.2 for layer selection criteria). Fig. 2 depicts ResNet50 layer 1 after enhancement with neuroscience-based architectural components. Code and materials required to reproduce the presented work are available at github.com/bionicvisionlab/2023-Pogoncheff-Explaining-V1-Properties.\n\n**Center-Surround Antagonism.** Center-surround ANN layers are composed of DoG kernels of shape \((c_{i}\times c_{o}\times k\times k)\), where \(c_{i}\) and \(c_{o}\) denote the number of input and output channels, respectively, and \(k\) reflects the height and width of each kernel. These DoG kernels (Fig. 1A) are convolved with the pre-activation output of a standard convolution. Each DoG kernel, \(\mathrm{DoG}_{i}\), is of the form\n\n\[\mathrm{DoG}_{i}(x,y)=\frac{\alpha}{2\pi\sigma_{i,\mathrm{center}}^{2}}\exp\Big(-\frac{x^{2}+y^{2}}{2\sigma_{i,\mathrm{center}}^{2}}\Big)-\frac{\alpha}{2\pi\sigma_{i,\mathrm{surround}}^{2}}\exp\Big(-\frac{x^{2}+y^{2}}{2\sigma_{i,\mathrm{surround}}^{2}}\Big), \tag{1}\]\n\nwhere \(\sigma_{i,\mathrm{center}}\) and \(\sigma_{i,\mathrm{surround}}\) are the Gaussian widths of the center and surround, respectively (\(\sigma_{i,\mathrm{center}}<\sigma_{i,\mathrm{surround}}\)), \(\alpha\) is a scaling factor, and \((x,y):=(0,0)\) at the kernel center. For \(\alpha>0\) the kernel has an excitatory center and an inhibitory surround, while \(\alpha<0\) results in a kernel with an inhibitory center and an excitatory surround. Novel to this implementation, each DoG kernel has learnable parameters, better accommodating the diverse tuning properties of neurons within the network. As in [34, 35], these DoG convolutions were only applied to a fraction of the input feature map. Specifically, we applied this center-surround convolution to one quarter of all \(3\times 3\) convolutions in layer 1 of our neuro-constrained ResNet50.\n\nFigure 2: ResNet50 layer 1, supplemented with neuro-constrained architectural components. Throughout the modified layer 1, primary visual cortex (V1) activity is modeled with cortical magnification, center-surround convolutions, tuned normalization, and local receptive field layers. Layer 1 output units are treated as artificial V1 neurons.\n\n**Local Receptive Fields.** To untangle the effects of local connectivity on brain alignment, we modified the artificial V1 layer by substituting the final \(3\times 3\) convolution of ResNet50 layer 1 with a \(3\times 3\) locally connected layer in isolation. This substitution assigns each downstream neuron its own filter while preserving its connection to upstream neurons (Fig. 1B), following the pattern in [38].\n\n**Divisive Normalization.** We consider the divisive normalization block proposed in [42], which performs normalization both spatially and across feature maps using learned normalization pools. Following our experimental design principle of selectively modifying the network in the vicinity of the artificial V1 neurons, we added this divisive normalization block after the non-linear activation of each residual block in ResNet50 layer 1.\n\n**Tuned Normalization.** We devised a novel implementation of tuned normalization inspired by models of opponent suppression [31, 43, 44]. In this scheme, a given neuron is normalized by a pool of neurons that share the same receptive field but occupy a different region in feature space (Fig. 1C), as in [41, 42]. Unlike the learned, weighted normalization proposed in [41], tuned inhibition was encouraged in our implementation by enforcing that each neuron was maximally suppressed by a neuron in a different region of feature space, and that no other neuron is maximally inhibited by activity in this feature space. 
Letting \\(x_{i,j}^{c}\\) denote the activity of the neuron at spatial location \\((i,j)\\) and channel \\(c\\in[1,C]\\) after application of a non-linear activation function. The post-normalization state of this neuron, \\(x_{i,j}^{rc}\\), is given by:\n\n\\[x_{i,j}^{rc}=\\frac{x_{i,j}^{c}}{1+\\sum_{k}p_{k}x_{i,j}^{c_{k}}}, \\tag{2}\\]\n\nwhere \\(p_{c,1},\\dots,p_{c,C}\\) defines a Gaussian distribution with variance \\(\\sigma_{c}^{2}\\) centered at channel \\((c+\\frac{C}{2})\\) mod \\(C\\). By defining \\(\\sigma_{c}^{2}\\) as a trainable parameter, task-driven training would optimize whether each neuron should be normalized acutely or broadly across the feature space.\n\nAs this mechanism preserves the dimension of the input feature map, it can follow any non-linear activation function of the core network without further modification to the architecture. Similar to the divisive normalization block, tuned normalization was added after the non-linear activation of each residual block in ResNet50 layer 1 in our experiments.\n\nCortical MagnificationCortical magnification and non-uniform receptive field sampling was simulated in CNNs using a differentiable polar sampling module (Fig. 1D). In this module, the spatial dimension of an input feature map are divided into polar regions defined by discrete radial and angular divisions of polar space. In particular, we defined a discrete polar coordinate system partitioned in the first dimension by radial partitions \\(r_{0},r_{1},...,r_{m}\\) and along the second dimension by angular partitions \\(\\theta_{0},\\theta_{1},...,\\theta_{n}\\). Pixels of the input feature map that are located within the same polar region (i.e., are within the same radial bounds \\([r_{i},r_{i+1})\\) and angular bounds \\([\\theta_{j},\\theta_{j+1})\\)) are pooled and mapped to coordinate \\((i,j)\\) of the original pixel space (Fig. 1D) [54]. 
Pixels in the output feature map with no associated polar region were replaced with interpolated pixel values from the same radial bin. By defining the spacing between each concentric radial bin to be monotonically increasing (i.e., for all \(i\in[1,m-1]\), \((r_{i}-r_{i-1})\leq(r_{i+1}-r_{i})\)), visual information at lower spatial eccentricities with respect to the center of the input feature map consumes a larger proportion of the transformed feature map than information at greater eccentricities (Fig. F.1).\n\nA notable result of this transformation is that any standard 2D convolution, with a kernel of size \(k\times k\), that is applied to the transformed feature space is equivalent to performing a convolution in which the kernel covers a \(k\times k\) contiguous region of polar space and strides along the angular and radial axes. Furthermore, downstream artificial neurons that process information at greater spatial eccentricities obtain larger receptive fields. Treating the CNN as a model of the ventral visual stream, this polar transformation immediately preceded ResNet50 layer 1 (replacing the first max-pooling layer), where V1 representations were assumed to be learned.\n\n### Training and Evaluation\n\n**Training Procedure.** V1 alignment was evaluated for ImageNet-trained models [55]. For all models, training and validation images were downsampled to a resolution of \(64\times 64\) in consideration of computational constraints. Each model of this evaluation was randomly initialized and trained for 100 epochs with an initial learning rate of \(0.1\) (reduced by a factor of \(10\) at epochs \(60\) and \(80\), where validation set performance was typically observed to plateau) and a batch size of \(128\).\n\nWe additionally benchmarked each neuro-constrained model on the Tiny-ImageNet-C dataset to study the effect of V1 alignment on object recognition robustness [56] (evaluation details provided in Appendix H). 
Tiny-ImageNet-C was used as an alternative to ImageNet-C given that the models trained here expected \(64\times 64\) input images and downsampling the corrupted images of ImageNet-C would have biased our evaluations. ImageNet pre-trained models were fine-tuned on Tiny-ImageNet prior to this evaluation. Because a given model will learn alternative representations when trained on different datasets (thereby resulting in V1 alignment differences), we froze all parameters of each ImageNet-trained model, with the exception of the classification head, prior to 40 epochs of fine-tuning with a learning rate of \(0.01\) and a batch size of \(128\).\n\nValidation loss and accuracy were monitored during both training procedures. The model state that enabled the greatest validation accuracy during training was restored for evaluations that followed. Training data augmentations were limited to horizontal flipping (ImageNet and Tiny-ImageNet) and random cropping (ImageNet).\n\nTraining was performed using single NVIDIA 3090 and A100 GPUs. Each model took approximately 12 hours to train on ImageNet and less than 30 minutes to fine-tune on Tiny-ImageNet.\n\n**Evaluating V1 Alignment.** We evaluated the similarity between neuro-constrained models of V1 and the primate primary visual cortex using the Brain-Score V1 benchmark [16]. The V1 benchmark score is an average of two sub-metrics: 'V1 FreemanZiemba2013' and 'V1 Marques2020', which we refer to as V1 Predictivity and V1 Property scores in what follows. For each metric, the activity of artificial neurons in a given neural network layer is computed using in-silico neurophysiology experiments. The V1 Predictivity score reflects the degree to which the model can explain the variance in stimulus-driven responses of V1 neurons, as determined by partial least squares regression mapping. 
The V1 Property score measures how closely the distribution of \(22\) different neural properties, from \(7\) neural tuning categories (orientation, spatial frequency, response selectivity, receptive field size, surround modulation, texture modulation, and response magnitude), matches between the model's artificial neural responses and empirical data from macaque V1. Together, these two scores provide a comprehensive view of stimulus response similarity between artificial and primate V1 neurons.\n\nBrain-Score evaluations assume a defined mapping between units of an ANN layer and a given brain region. In all analyses of V1 alignment that follow, we systematically fixed the output neurons of ResNet50 layer 1 as the artificial V1 neurons. Note that this is a stricter criterion than is applied to most models submitted to the Brain-Score leaderboard, where researchers are able to choose which layer in the deep net should correspond to the V1 readout. In baseline analyses, among multiple evaluated layers, we observed the highest V1 alignment between artificial units and primate V1 activity from layer 1, establishing it as a strong baseline. Alternative layer V1 scores are presented in Appendix B.\n\n## 4 Results\n\n### Architectural Components in Isolation\n\nPatterns of neural activity observed in the brain can be attributed to the interplay of multiple specialized processes. Through an isolated analysis, our initial investigations revealed the contribution of each specialized mechanism to explaining patterns of neural activity in V1. Tables 1 and 2 present the results of this analysis, including ImageNet validation accuracy, V1 Overall, V1 Predictivity, and V1 Property scores.\n\nAmong the five modules evaluated in this analysis, cortical magnification emerged as the most influential factor in enhancing V1 alignment. 
This mechanism substantially improved the ResNet's ability to explain the variance in stimulus responses, and the artificial neurons exhibited tuning properties that were more closely aligned with those of biological neurons, particularly in terms of orientation tuning, spatial frequency tuning, response selectivity, and most of all, stimulus response magnitude. However, the artificial neuronal responses of the cortical magnification network showed lower resemblance to those observed in primate V1 with regard to surround modulation, as compared to the baseline network.\n\nSimulating neural normalization within the ResNet resulted in artificial neurons that displayed improved alignment with primate V1 in terms of response properties. Noteworthy enhancements were observed in the spatial frequency, receptive field size, surround modulation, and response magnitude properties of neurons within the modified network, leading to improvements in the V1 Property score. These results applied to both tuned and untuned forms of normalization.\n\nIn contrast, the introduction of center-surround convolutions yielded minimal improvements in neural predictivity and slight reductions in overall neuron property similarity. Contrary to our expectations, the surround modulation properties of the artificial neurons decreased compared to the baseline model.\n\nFinally, replacing the final \(3\times 3\) convolution preceding the artificial V1 readout with a locally connected layer resulted in modest changes in V1 alignment. This was one of the two mechanisms that led to improvements in the surround modulation response property score (tuned normalization being the other).\n\nThese findings collectively provide valuable insights into the individual contributions of each specialized mechanism. 
Although mechanisms simulating center-surround antagonism (i.e., DoG convolution) and local connectivity provide little benefit to overall predictivity and property scores in isolation, we observed that they reduce the property dissimilarity gap among tuning properties that are nonetheless important and complement alignment scores where divisive normalization and cortical magnification do not.\n\n### Complementary Components Explain V1 Activity\n\nConstraining a general-purpose deep learning model with a single architectural component is likely insufficient to explain primate V1 activity given our knowledge that a composition of known circuits play pivotal roles in visual processing. This was empirically observed in Section 4.1, wherein cortical\n\n\\begin{table}\n\\begin{tabular}{l r r r r r} & \\multicolumn{1}{c}{ImageNet Acc} & \\multicolumn{1}{c}{V1 Overall} & \\multicolumn{1}{c}{V1 Predictivity} & \\multicolumn{1}{c}{V1 Property} \\\\ \\hline Center-surround antagonism & \\(.610\\pm.001\\) & \\(.545\\pm.002\\) & \\(.304\\pm.016\\) & \\(.786\\pm.018\\) \\\\ Local receptive fields & \\(.609\\pm.001\\) & \\(.550\\pm.006\\) & \\(.300\\pm.002\\) & \\(.799\\pm.012\\) \\\\ Divisive normalization & \\(.606\\pm.001\\) & \\(.543\\pm.003\\) & \\(.271\\pm.014\\) & \\(.815\\pm.011\\) \\\\ Tuned normalization & \\(.608\\pm.002\\) & \\(.547\\pm.004\\) & \\(.274\\pm.004\\) & \\(.820\\pm.009\\) \\\\ Cortical magnification & \\(.548\\pm.008\\) & \\(.587\\pm.014\\) & \\(.370\\pm.008\\) & \\(.805\\pm.021\\) \\\\ ResNet50 (Baseline) & \\(.613\\pm.002\\) & \\(.550\\pm.004\\) & \\(.295\\pm.003\\) & \\(.805\\pm.011\\) \\\\ \\end{tabular}\n\\end{table}\nTable 1: ImageNet object recognition classification performance (\\(64\\times 64\\) images) and primary visual cortex (V1) alignment scores of ResNet50 augmented with each architectural component. Mean and standard deviations are reported across three runs (random initialization, training, and evaluating) of each architecture. 
Scores higher than baseline are presented in green and those lower are presented in red (the more saturated the color is, the greater the difference from baseline).\n\n\\begin{table}\n\\begin{tabular}{l r r r r r r} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} \\\\ & \\multicolumn{1}{c}{Orientation} & \\multicolumn{1}{c}{frequency} & \\multicolumn{1}{c}{selectivity} & \\multicolumn{1}{c}{RF size} & \\multicolumn{1}{c}{modulation} & \\multicolumn{1}{c}{modulation} & \\multicolumn{1}{c}{magnitude} \\\\ \\hline Center-surround & \\(.876\\pm.027\\) & \\(.831\\pm.030\\) & \\(.632\\pm.012\\) & \\(.853\\pm.046\\) & \\(.743\\pm.027\\) & \\(.757\\pm.025\\) & \\(.783\\pm.024\\) \\\\ Local receptive fields & \\(.904\\pm.021\\) & \\(.817\\pm.016\\) & \\(.648\\pm.008\\) & \\(.852\\pm.054\\) & \\(.847\\pm.083\\) & \\(.743\\pm.036\\) & \\(.780\\pm.022\\) \\\\ Divisive normalization & \\(.908\\pm.017\\) & \\(.840\\pm.014\\) & \\(.689\\pm.007\\) & \\(.858\\pm.046\\) & \\(.860\\pm.070\\) & \\(.746\\pm.030\\) & \\(.86\\pm.019\\) \\\\ Tuned normalization & \\(.907\\pm.035\\) & \\(.841\\pm.013\\) & \\(.689\\pm.023\\) & \\(.865\\pm.031\\) & \\(.852\\pm.020\\) & \\(.742\\pm.029\\) & \\(.844\\pm.015\\) \\\\ Cortical magnification & \\(.907\\pm.037\\) & \\(.848\\pm.039\\) & \\(.708\\pm.011\\) & \\(.808\\pm.044\\) & \\(.858\\pm.020\\) & \\(.789\\pm.058\\) & \\(.915\\pm.041\\) \\\\ ResNet50 (Baseline) & \\(.803\\pm.023\\) & \\(.826\\pm.048\\) & \\(.684\\pm.059\\) & \\(.832\\pm.080\\) & \\(.820\\pm.009\\) & \\(.786\\pm.058\\) & \\(.790\\pm.042\\) \\\\ \\end{tabular}\n\\end{table}\nTable 2: Model alignment across the seven primary visual cortex (V1) tuning properties that constitute the V1 Property score. 
Mean and standard deviation of scores observed across three trials of model training and evaluation are reported.

magnification was the only architectural component found to improve the overall V1 alignment score. Taking inspiration from this design principle, we supplemented a ResNet50 with all of the architectural components and, in an ablation study, discerned which components are necessary to achieve optimal V1 alignment. We omitted the architectural component implementing divisive normalization, however, as it cannot be integrated simultaneously with tuned normalization, which yielded slightly higher V1 Predictivity and Property scores in the isolated-component evaluation. Starting with a ResNet that featured all of these architectural components, we employed a greedy approach reminiscent of backward-elimination feature selection to deduce the critical components without having to evaluate every permutation of component subsets. In each round of this iterative procedure, we removed whichever architectural component's exclusion yielded the highest overall V1 alignment (i.e., the component contributing the least), until only one component remained. This analysis allowed us to identify the subset of components that collectively yielded the most significant improvements in V1 alignment, and unraveled the intricate relationship between these specialized features and their combined explanation of V1.

The results of the ablation study are presented in Table 3. With the exception of center-surround antagonism, removing any neural mechanism from the modified residual network reduced overall V1 alignment, suggesting that (1) each architectural component contributed to V1 alignment (the utility of center-surround antagonism is detailed in Section 4.5) and (2) nontrivial interactions between these mechanisms explain V1 better than is possible with any single mechanism.
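The greedy elimination procedure described above can be sketched in a few lines. The sketch below is illustrative only: `score_fn` stands in for the full train-and-evaluate pipeline that produces an overall V1 alignment score for a given component subset, and all names are hypothetical.

```python
def backward_eliminate(components, score_fn):
    """Greedy backward elimination: starting from the full set of
    architectural components, repeatedly drop the component whose
    removal yields the highest overall alignment score."""
    current = list(components)
    history = [(tuple(current), score_fn(current))]
    while len(current) > 1:
        # Score every subset that omits exactly one remaining component.
        candidates = [[c for c in current if c != removed] for removed in current]
        current = max(candidates, key=score_fn)
        history.append((tuple(current), score_fn(current)))
    return history
```

This evaluates on the order of \(n^2\) subsets rather than all \(2^n\) combinations, at the cost of possibly missing component interactions that only appear along non-greedy elimination paths.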
Seven of the eight models evaluated in this ablation study substantially outperformed all existing models on the Brain-Score platform in modeling V1 tuning property distributions. Furthermore, four models were observed to achieve state-of-the-art V1 Overall scores, explaining both V1 stimulus response activity and neural response properties with high fidelity.\n\nWhether or not feed-forward, ImageNet-trained ANNs can fully approximate activity in primate V1 has stood as an open question. Previous studies have argued that no current model is capable of explaining all behavioral properties using neurons from a single readout layer [17]. The top performing models of the current evaluation stand out as the first examples of CNNs with neural representations that accurately approximate all evaluated V1 tuning properties (Appendix C), offering positive evidence for the efficacy of explaining primate V1 with neuro-inspired deep learning architectures.\nRegarding local receptive fields, we hypothesized that removing weight sharing would enable a greater diversity of response selectivity patterns to be learned. While multi-component ablation studies revealed a reduction in response selectivity property scores when local filtering was omitted, the same finding was surprisingly not observed in the single-component analyses of Section 4.1.\n\nGiven the inter-neuron competition enforced by tuned normalization, one would expect networks with this component to learn more diverse artificial V1 representations. Analyses of the visual stimuli that maximally activated artificial neurons of these networks (i.e., optimized visual inputs that maximally excite neurons of each channel of the artificial V1 layer, computed via stochastic gradient ascent) provide evidence for this. 
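The stimulus-diversity comparison described here reduces to a mean pairwise dissimilarity over the set of maximally activating stimuli. A minimal sketch is given below; the `mse` stand-in metric and the function names are our own illustration, whereas the analysis in the text uses LPIPS [57] as the pairwise metric.

```python
import numpy as np
from itertools import combinations

def mean_pairwise_distance(stimuli, dist):
    """Mean dissimilarity over all unordered pairs of stimuli;
    higher values indicate a more diverse set of learned features."""
    scores = [dist(stimuli[i], stimuli[j])
              for i, j in combinations(range(len(stimuli)), 2)]
    return float(np.mean(scores))

# Per-pixel mean squared error as a simple stand-in metric.
def mse(a, b):
    return float(np.mean((a - b) ** 2))
```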
Quantitatively, we found that the mean Learned Perceptual Image Patch Similarity (LPIPS) [57] between pairs of maximally activating stimuli of the tuned normalization network was less than that of the baseline ResNet50 network (\(p<0.01\), one-way ANOVA [58]). We suggest that this learned feature diversity contributed to the improvements in spatial frequency, response selectivity, receptive field size, surround modulation, and response magnitude tuning property scores when tuned normalization was present.

Finally, given the retinotopic organization of V1, we hypothesized that cortical magnification would give rise to better-aligned response selectivity and receptive field size tuning distributions, while also improving V1 neuron predictivity. In each trial of our ablation studies in which cortical magnification was removed, these respective scores dropped, supporting this hypothesis.

### Object Recognition Robustness to Corrupted Images

In contrast with the human visual system, typical CNNs generalize poorly to out-of-distribution data. Small perturbations to an image can cause a model to output drastically different predictions than it would on the intact image. Recent studies have demonstrated a positive correlation between model-brain similarity and robustness to image corruptions [24; 25; 26; 42; 59]. After fine-tuning each model's classification head on Tiny-ImageNet (see Section 3.2), we evaluated the object recognition accuracy of each model from Section 4.1 and the top two overall models from Section 4.2 on the Tiny-ImageNet-C dataset. The results of these evaluations for each category of corruption and corruption strength are provided in Appendix H.

Among the evaluated components, only tuned normalization was observed to yield improved corrupt-image classification accuracy over the entire test set, albeit slightly, beating the baseline accuracy (\(0.278\)) by \(0.005\) (i.e., an absolute improvement of \(0.5\%\)).
More substantial improvements were observed on the 'brightness', 'defocus_blur', 'elastic_transform', and 'pixelate' corruptions (improvements over the baseline of .00986, .00989, .0105, and .0133, respectively).

### Adversarially Training Neuro-Constrained ResNets

Adversarial training has previously been shown to enhance the brain-similarity of artificial neural representations without any modification to the underlying network [60; 25]. Curious as to whether adversarial training would further align the neuro-constrained ResNet50s with V1 activity, we selectively trained the two networks most aligned with V1 (one model with all architectural components and the other with all components except center-surround convolution) from Section 4.2 using "Free" adversarial training [61] (Appendix G). The results are shown in Table 4. Despite the drop in object recognition accuracy, the artificial neural representations that emerged in each network were drastically better predictors of stimulus response variance. Tuning property alignment dropped in the process, but nonetheless remained above the previous state of the art. Interestingly, we found that the main difference in V1 scores between these two models can be traced to surround modulation tuning alignment. Center-surround convolutions indeed contributed to improved surround

\begin{table}
\begin{tabular}{c c c c c r r r r} Center-Surround & Local RF & Tuned Normalization & Cortical Magnification & Adversarial Training & ImageNet Acc & V1 Overall & V1 Predictivity & V1 Property \\ \hline ✓ & ✓ & ✓ & ✓ & ✓ & & .629 & .430 & .829 \\ & ✓ & ✓ & ✓ & ✓ & & .625 & .430 & .819 \\ ✓ & ✓ & ✓ & ✓ & & .555 & .581 & .352 & .809 \\ \end{tabular}
\end{table}
Table 4: Adversarial training was performed on the two models that tied for the top V1 Overall Score.
Checkmarks denote whether the architectural component was included in the model.

modulation tuning learned while training on adversarially perturbed images, contrasting with its apparent lack of contribution to the overall network suggested by the ablation study.

In sum, both networks achieved Rank-1 V1 Overall, Predictivity, and Property scores by large margins, setting a new standard in this breed of brain-aligned CNNs. At the time of writing, the previous Rank-1 V1 Overall, Predictivity, and Property scores were .594, .409, and .816, respectively, each achieved by a separate model.

## 5 Discussion

Throughout this work we presented a systematic evaluation of four architectural components derived from neuroscience principles and their influence on model-V1 similarity. Specifically, we studied task-driven CNNs enhanced with ANN layers that simulate principal processing mechanisms of the primate visual system, including center-surround antagonism, local receptive fields, tuned normalization, and cortical magnification. Through an ablation study and isolated-component analyses, we found that each component contributed to the production of latent ANN representations that better resemble those of primate V1, as compared to a traditional baseline CNN. When these four components were assembled together within a neuro-constrained ResNet50, V1 tuning properties were explained better than by any previous deep learning model that we are aware of. Furthermore, this neuro-constrained model exhibited state-of-the-art explanation of V1 neural activity, and is the first of its kind to do so, by a large margin nonetheless, highlighting a promising direction in biologically constrained ANNs.
Training this model with "free" adversarial training greatly improved its ability to predict primate neural responses to image stimuli at a minor sacrifice to tuning property similarity, establishing an even larger gap over the previous state of the art.

Among all architectural components examined in this work, cortical magnification was the most influential in improving V1 alignment. On its own, however, this mechanism could not explain neural activity as comprehensively as the top models of this study. Our implementation of tuned normalization provided substantial improvement to V1 tuning property alignment, and was the only component that contributed to model robustness. The importance of center-surround antagonism seemed to be training-data dependent. In our ablation study, for which all models were trained on ImageNet, center-surround convolutional layers did not contribute to overall V1 scores. This did not surprise us, as deep CNNs have the capacity to learn similar representations without these specialized layers. When training on adversarially perturbed data, however, the center-surround antagonism provided by this layer appeared to improve the surround modulation tuning properties of artificial V1 neurons. While previous attempts at improving model-brain similarity have been highly dataset dependent, our results highlight the importance of artificial network design.

A notable limitation of our work is the reduction in ImageNet classification performance that was observed upon the introduction of cortical magnification. While maintaining baseline model accuracy was not a motivation of this work, we can imagine situations in which object recognition performance needs to be preserved alongside these improvements in brain-model alignment. The implementation of cortical magnification in this work assumed that the model's gaze was focused at the center of each image.
Consequently, visual stimuli at greater eccentricities from the image center were under-sampled during the polar transformation (Fig. F.1), making images in which the object of interest was not located at the image center (common in ImageNet) more challenging to classify. One avenue of future work involves implementing saliency- or attention-driven polar transformations that dynamically focus the center of the polar map on, or near, an object of interest, as opposed to fixing it at the image center. We demonstrate the efficacy of this strategy with a naive proof of concept in which classification is performed by averaging predictions from five static crops of each image [62]. This simple strategy improved the validation accuracy of the network with all components from 55.1% to 59.9% without affecting V1 scores. A more sophisticated, dynamic strategy could further reduce this accuracy drop. We additionally plan to extend this work to model architectures other than ResNet to validate the broader applicability of each of these neuro-constrained components.

This work highlights an important advancement in the field of NeuroAI, as we systematically establish a set of neuro-constrained architectural components that contribute to state-of-the-art V1 alignment. We argue that our architecture-driven approach can be further generalized to additional areas of the brain. The neuroscience insights that could be gleaned from increasingly accurate in-silico models of the brain have the potential to transform the fields of both neuroscience and AI.

## References

* [1] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. _Nature_, 521(7553):436-444, May 2015.
* [2] Laurent Itti, Christof Koch, and Ernst Niebur. A model of saliency-based visual attention for rapid scene analysis. _IEEE Transactions on pattern analysis and machine intelligence_, 20(11):1254-1259, 1998.
* [3] Hugo Larochelle and Geoffrey E Hinton.
Learning to combine foveal glimpses with a third-order Boltzmann machine. _Advances in neural information processing systems_, 23, 2010.
* [4] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In _International conference on machine learning_, pages 2048-2057. PMLR, 2015.
* [5] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in neural information processing systems_, 30, 2017.
* [6] David J Heeger, Eero P Simoncelli, and J Anthony Movshon. Computational models of cortical visual processing. _Proceedings of the National Academy of Sciences_, 93(2):623-627, 1996.
* [7] Matteo Carandini, David J Heeger, and J Anthony Movshon. Linearity and normalization in simple cells of the macaque primary visual cortex. _Journal of Neuroscience_, 17(21):8621-8644, 1997.
* [8] Nicole C Rust, Odelia Schwartz, J Anthony Movshon, and Eero P Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. _Neuron_, 46(6):945-956, 2005.
* [9] Brett Vintch, J Anthony Movshon, and Eero P Simoncelli. A convolutional subunit model for neuronal responses in macaque V1. _Journal of Neuroscience_, 35(44):14829-14841, 2015.
* [10] Matteo Carandini, Jonathan B. Demb, Valerio Mante, David J. Tolhurst, Yang Dan, Bruno A. Olshausen, Jack L. Gallant, and Nicole C. Rust. Do We Know What the Early Visual System Does? _Journal of Neuroscience_, 25(46):10577-10597, November 2005.
* [11] Daniel L Yamins, Ha Hong, Charles Cadieu, and James J DiCarlo. Hierarchical Modular Optimization of Convolutional Networks Achieves Representations Similar to Macaque IT and Human Ventral Stream. In _Advances in Neural Information Processing Systems_, volume 26. Curran Associates, Inc., 2013.
* [12] Daniel L. K. Yamins, Ha Hong, Charles F.
Cadieu, Ethan A. Solomon, Darren Seibert, and James J. DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. _Proceedings of the National Academy of Sciences_, 111(23):8619-8624, June 2014.
* [13] Daniel L. K. Yamins and James J. DiCarlo. Using goal-driven deep learning models to understand sensory cortex. _Nature Neuroscience_, 19(3):356-365, March 2016. Number: 3 Publisher: Nature Publishing Group.
* [14] Najib J. Majaj, Ha Hong, Ethan A. Solomon, and James J. DiCarlo. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance. _Journal of Neuroscience_, 35(39):13402-13418, September 2015. Publisher: Society for Neuroscience Section: Articles.
* [15] Santiago A. Cadena, George H. Denfield, Edgar Y. Walker, Leon A. Gatys, Andreas S. Tolias, Matthias Bethge, and Alexander S. Ecker. Deep convolutional models improve predictions of macaque V1 responses to natural images. _PLOS Computational Biology_, 15(4):e1006897, April 2019. Publisher: Public Library of Science.
* [16] Martin Schrimpf, Jonas Kubilius, Ha Hong, Najib J. Majaj, Rishi Rajalingham, Elias B. Issa, Kohitij Kar, Pouya Bashivan, Jonathan Prescott-Roy, Franziska Geiger, Kailyn Schmidt, Daniel L. K. Yamins, and James J. DiCarlo. Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?, January 2020. Pages: 407007 Section: New Results.
* [17] Tiago Marques, Martin Schrimpf, and James J. DiCarlo. Multi-scale hierarchical neural network models that bridge from single neurons in the primate primary visual cortex to object recognition behavior, August 2021. Pages: 2021.03.01.433495 Section: New Results.
* [18] Pouya Bashivan, Kohitij Kar, and James J. DiCarlo. Neural population control via deep image synthesis. _Science_, 364(6439):eaav9436, May 2019. Publisher: American Association for the Advancement of Science.
* [19] Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvinick. Neuroscience-inspired artificial intelligence. _Neuron_, 95(2):245-258, 2017.
* [20] Anthony Zador, Sean Escola, Blake Richards, Bence Olveczky, Yoshua Bengio, Kwabena Boahen, Matthew Botvinick, Dmitri Chklovskii, Anne Churchland, Claudia Clopath, James DiCarlo, Surya Ganguli, Jeff Hawkins, Konrad Kording, Alexei Koulakov, Yann LeCun, Timothy Lillicrap, Adam Marblestone, Bruno Olshausen, Alexandre Pouget, Cristina Savin, Terrence Sejnowski, Eero Simoncelli, Sara Solla, David Sussillo, Andreas S. Tolias, and Doris Tsao. Catalyzing next-generation artificial intelligence through neuroai. _Nature Communications_, 14(1):1597, Mar 2023.
* [21] David H Hubel and Torsten N Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. _The Journal of physiology_, 160(1):106, 1962. Publisher: Wiley-Blackwell.
* [22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [23] Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In _International conference on machine learning_, pages 6105-6114. PMLR, 2019.
* [24] Zhe Li, Wieland Brendel, Edgar Walker, Erick Cobos, Taliah Muhammad, Jacob Reimer, Matthias Bethge, Fabian Sinz, Zachary Pitkow, and Andreas Tolias. Learning from brains how to regularize machines. In _Advances in Neural Information Processing Systems_, volume 32. Curran Associates, Inc., 2019.
* [25] Joel Dapello, Tiago Marques, Martin Schrimpf, Franziska Geiger, David Cox, and James J DiCarlo. Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations.
In _Advances in Neural Information Processing Systems_, volume 33, pages 13073-13087. Curran Associates, Inc., 2020.
* [26] Manish V. Reddy, Andrzej Banburski, Nishka Pant, and Tomaso Poggio. Biologically Inspired Mechanisms for Adversarial Robustness, June 2020. arXiv:2006.16427 [cs, stat].
* [27] J. Allman, F. Miezin, and E. McGuinness. Stimulus specific responses from beyond the classical receptive field: neurophysiological mechanisms for local-global comparisons in visual neurons. _Annual Review of Neuroscience_, 8:407-430, 1985.
* [28] J. J. Knierim and D. C. van Essen. Neuronal responses to static texture patterns in area V1 of the alert macaque monkey. _Journal of Neurophysiology_, 67(4):961-980, April 1992. Publisher: American Physiological Society.
* [29] Gary A. Walker, Izumi Ohzawa, and Ralph D. Freeman. Asymmetric Suppression Outside the Classical Receptive Field of the Visual Cortex. _Journal of Neuroscience_, 19(23):10536-10553, December 1999. Publisher: Society for Neuroscience.
* [30] Zhi-Ming Shen, Wei-Feng Xu, and Chao-Yi Li. Cue-invariant detection of centre-surround discontinuity by V1 neurons in awake macaque monkey. _The Journal of Physiology_, 583(Pt 2):581-592, September 2007.
* [31] Gregory C DeAngelis, Ralph D Freeman, and Izumi Ohzawa. Length and width tuning of neurons in the cat's primary visual cortex. _Journal of neurophysiology_, 71(1):347-374, 1994.
* [32] Michael P Sceniak, Dario L Ringach, Michael J Hawken, and Robert Shapley. Contrast's effect on spatial summation by macaque V1 neurons. _Nature neuroscience_, 2(8):733-739, 1999. Publisher: Nature Publishing Group.
* [33] Michael P Sceniak, Michael J Hawken, and Robert Shapley. Visual spatial characterization of macaque V1 neurons. _Journal of neurophysiology_, 85(5):1873-1887, 2001. Publisher: American Physiological Society Bethesda, MD.
* [34] Hosein Hasani, Mahdieh Soleymani, and Hamid Aghajan. Surround Modulation: A Bio-inspired Connectivity Structure for Convolutional Neural Networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alche-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 32. Curran Associates, Inc., 2019.
* [35] Zahra Babaiee, Ramin Hasani, Mathias Lechner, Daniela Rus, and Radu Grosu. On-off center-surround receptive fields for accurate and robust image classification. In _International Conference on Machine Learning_, pages 478-489. PMLR, 2021.
* [36] Alexei A Koulakov and Dmitri B Chklovskii. Orientation preference patterns in mammalian visual cortex: a wire length minimization approach. _Neuron_, 29(2):519-527, 2001. Publisher: Elsevier.
* [37] Hyodong Lee, Eshed Margalit, Kamila M. Jozwik, Michael A. Cohen, Nancy Kanwisher, Daniel L. K. Yamins, and James J. DiCarlo. Topographic deep artificial neural networks reproduce the hallmarks of the primate inferior temporal cortex face processing network. preprint, Neuroscience, July 2020.
* [38] Roman Pogodin, Yash Mehta, Timothy Lillicrap, and Peter E Latham. Towards Biologically Plausible Convolutional Networks. In _Advances in Neural Information Processing Systems_, volume 34, pages 13924-13936. Curran Associates, Inc., 2021.
* [39] Matteo Carandini and David J. Heeger. Normalization as a canonical neural computation. _Nature Reviews Neuroscience_, 13(1):51-62, January 2012. Number: 1 Publisher: Nature Publishing Group.
* [40] David J. Heeger and Klavdia O. Zemlianova. A recurrent circuit implements normalization, simulating the dynamics of V1 activity. _Proceedings of the National Academy of Sciences_, 117(36):22494-22505, September 2020.
Publisher: Proceedings of the National Academy of Sciences.\n* [41] Max F Burg, Santiago A Cadena, George H Denfield, Edgar Y Walker, Andreas S Tolias, Matthias Bethge, and Alexander S Ecker. Learning divisive normalization in primary visual cortex. _PLOS Computational Biology_, 17(6):e1009028, 2021. Publisher: Public Library of Science San Francisco, CA USA.\n* [42] Andrew Cirincione, Reginald Verrier, Ariom Bic, Stephanie Olaiya, James J DiCarlo, Lawrence Udeigwe, and Tiago Marques. Implementing Divisive Normalization in CNNs Improves Robustness to Common Image Corruptions.\n* [43] Nicole C. Rust, Valerio Mante, Eero P. Simoncelli, and J. Anthony Movshon. How MT cells analyze the motion of visual patterns. _Nature Neuroscience_, 9(11):1421-1431, November 2006. Number: 11 Publisher: Nature Publishing Group.\n* [44] M Concetta Morrone, DC Burr, and Lamberto Maffei. Functional implications of cross-orientation inhibition of cortical visual cells. I. Neurophysiological evidence. _Proceedings of the Royal Society of London. Series B. Biological Sciences_, 216(1204):335-354, 1982. Publisher: The Royal Society London.\n* [45] GC DeAngelis, JG Robson, I Ohzawa, and RD Freeman. Organization of suppression in receptive fields of neurons in cat visual cortex. _Journal of Neurophysiology_, 68(1):144-163, 1992.\n* [46] PM Daniel and D Whitteridge. The representation of the visual field on the cerebral cortex in monkeys. _The Journal of physiology_, 159(2):203, 1961. Publisher: Wiley-Blackwell.\n* [47] Eric L Schwartz. Spatial mapping in the primate sensory projection: analytic structure and relevance to perception. _Biological cybernetics_, 25(4):181-194, 1977. Publisher: Springer.\n* [48] Eric L Schwartz. Computational anatomy and functional architecture of striate cortex: a spatial mapping approach to perceptual coding. _Vision research_, 20(8):645-669, 1980. Publisher: Elsevier.\n* [49] Eric L Schwartz. 
Computational studies of the spatial architecture of primate visual cortex: columns, maps, and protomaps. _Primary visual cortex in primates_, pages 359-411, 1994. Publisher: Springer.\n* [50] Jonathan R Polimeni, Mukund Balasubramanian, and Eric L Schwartz. Multi-area visuotopic map complexes in macaque striate and extra-striate cortex. _Vision research_, 46(20):3336-3359, 2006. Publisher: Elsevier.\n* [51] Carlos Esteves, Christine Allen-Blanchette, Xiaowei Zhou, and Kostas Daniilidis. Polar Transformer Networks. _International Conference on Learning Representations_, 2018.\n* [52] Joao F. Henriques and Andrea Vedaldi. Warped Convolutions: Efficient Invariance to Spatial Transformations, 2021. _eprint: 1609.04382.\n* [53] Taro Kiritani and Koji Ono. Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks, 2020. _eprint: 2002.05388.\n* [54] M.R. Blackburn. A Simple Computational Model of Center-Surround Receptive Fields in the Retina. Technical report. Section: Technical Reports.\n* [55] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In _2009 IEEE Conference on Computer Vision and Pattern Recognition_, pages 248-255, June 2009. ISSN: 1063-6919.\n* [56] Dan Hendrycks and Thomas Dietterich. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, March 2019. arXiv:1903.12261 [cs, stat].\n\n* [57] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 586-595, 2018.\n* [58] Amanda Ross and Victor L. Willson. _One-Way Anova_, pages 21-24. SensePublishers, Rotterdam, 2017.\n* [59] Shahd Safarani, Arne Nix, Konstantin Willeke, Santiago Cadena, Kelli Restivo, George Denfield, Andreas Tolias, and Fabian Sinz. 
Towards robust vision by multi-task learning on monkey visual cortex. In _Advances in Neural Information Processing Systems_, volume 34, pages 739-751. Curran Associates, Inc., 2021.
* [60] Alexander Riedel. Bag of Tricks for Training Brain-Like Deep Neural Networks. March 2022.
* [61] Ali Shafahi, Mahyar Najibi, Mohammad Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. Adversarial training for free! In _Advances in Neural Information Processing Systems_, volume 32. Curran Associates, Inc., 2019.
* [62] Manish Reddy Vuyyuru, Andrzej Banburski, Nishka Pant, and Tomaso Poggio. Biologically inspired mechanisms for adversarial robustness. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 2135-2146. Curran Associates, Inc., 2020.

## Appendix A Supplemental Model Diagrams

Fig. A.1 depicts the modifications made to ResNet50 residual layer 1 in the isolated component analyses of Section 4.1. All multi-component (composite) models analyzed in Section 4.2 relied on combinations of these modifications (as exemplified in Fig. 2).

Figure A.1: ResNet50 residual layer 1, supplemented with individual neuro-constrained architectural components, as in Section 4.1. (A) No modification (baseline ResNet50 layer 1), (B) with center-surround antagonism, (C) with local receptive field (RF), (D) with divisive normalization, (E) with tuned normalization, (F) with cortical magnification.

## Appendix B V1 Scores of Alternate Layers of Baseline Network

When evaluating a model on Brain-Score, users are permitted to commit a mapping between model layers and areas of the ventral stream. Model-brain alignment is computed for each mapped pair in the Brain-Score evaluation.
To promote a fair evaluation, we sought to find the layer of the baseline ResNet50 model that yielded optimal V1 alignment and fix this layer as the artificial V1 readout layer in all of our tested models. It is worth noting that after supplementing the base ResNet50 with neuro-constrained components, this layer may no longer offer optimal V1 alignment in the augmented network. In spite of this, we maintained this layer as our artificial V1 readout layer for fair evaluation.

To find the ResNet50 layer with the best V1 Overall, Predictivity, and Property scores, we compared a total of 20 different hidden layers (Fig. B.1). Sixteen of these layers corresponded to the post-activation hidden states of the network. The remaining four were downsampling layers of the first bottleneck block of each residual layer in the network, as these have previously demonstrated good V1 alignment [25]. Aside from these downsampling layers, hidden layers that did not follow a ReLU activation were omitted from this evaluation, as the activities of these states can take on negative values and are therefore less interpretable as neural activities. Among all evaluated layers, the final output of ResNet50 residual layer 1 (i.e., the output of the third residual block of ResNet50) offered the highest V1 Overall score, and was therefore selected as the artificial V1 readout layer in all of our experiments.

Figure B.1: V1 alignment Brain-Scores for 20 different hidden layers of ResNet50. In the plot above, readout location ‘X.Y’ denotes that artificial V1 activity was evaluated from residual block ‘Y’ of residual layer ‘X’. Readout locations suffixed with ‘.d’ correspond to downsampling layers of the associated residual bottleneck. The highest V1 Overall score came from block 3 of residual layer 1.

## Appendix C Expanded Model Tuning Properties

Primary visual cortex (V1) tuning property alignments for each composite model evaluated in Section 4.2 are presented in Table 5.
Tuning property similarities are computed from the Kolmogorov-Smirnov distance between artificial neural response distributions from the model and empirical distributions recorded in primates [16; 17].\n\n## Appendix D V1 Brain-Scores of Untrained Models\n\nTable 6 presents the V1 alignment scores of untrained variants of each model.\n\n\\begin{table}\n\\begin{tabular}{l r r r} & V1 Overall & V1 Predictivity & V1 Property \\\\ \\hline Center-surround antagonism & \\(.298\\) & \\(.245\\) & \\(.551\\) \\\\ Local receptive fields & \\(.477\\) & \\(.210\\) & \\(.743\\) \\\\ Divisive normalization & \\(.499\\) & \\(.207\\) & \\(.792\\) \\\\ Tuned normalization & \\(.471\\) & \\(.218\\) & \\(.724\\) \\\\ Cortical magnification & \\(.497\\) & \\(.276\\) & \\(.718\\) \\\\ All Components & \\(.483\\) & \\(.225\\) & \\(.741\\) \\\\ ResNet50 & \\(.466\\) & \\(.223\\) & \\(.710\\) \\\\ \\end{tabular}\n\\end{table}\nTable 6: Primary visual cortex (V1) alignment scores of untrained ResNet50 model variants.\n\n## Appendix E V2, V4, and IT Brain-Scores of Top Model\n\nTable 7 shows the Brain-Scores of our top performing V1 model (the adversarially trained ResNet50 with all architectural components) for brain areas V2, V4, and IT. Network layers were mapped to visual areas V2, V4, and IT by finding the layers that achieve the best scores on these visual area benchmarks, as evaluated on Brain-Score's publicly available evaluation set.\n\n\\begin{table}\n\\begin{tabular}{l r r r} & V2 & V4 & IT \\\\ \\hline Visual Area Brain-Score & \\(.298\\) & \\(.245\\) & \\(.551\\) \\\\ \\end{tabular}\n\\end{table}\nTable 7: V2, V4, and IT Brain-Scores of adversarially trained ResNet50 with all architectural components.\n\n## Appendix F Supplemental Visualizations\n\n## Appendix G Adversarial Training\n\nThe neuro-constrained ResNets discussed in Section 4.5 were trained using the \"Free\" adversarial training method proposed by Shafahi _et al._ [61]. In Projected Gradient Descent (PGD)-based adversarial training (a typical approach to adversarially training robust classifiers), a network is trained on adversarial samples that are generated on the fly during training. Specifically, in PGD-based adversarial training, a batch of adversarial images is first generated through a series of iterative perturbations to an original image batch, at which point the parameters of the network are finally updated according to the network's loss, as evaluated on the adversarial examples. \"Free\" adversarial training generates adversarial training images with a similar approach, but the parameters of the network are simultaneously updated with every iteration of image perturbation, significantly reducing training time. The authors refer to these mini-batch updates as \"replays\", and refer to the number of replays of each mini-batch with the parameter \\(m\\).\n\nThe adversarially trained models of Section 4.5 were trained with \\(m=4\\) replays and perturbation clipping of \\(\\epsilon=\\frac{2}{255}\\). 
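The replay scheme just described can be sketched on a toy problem. Below is an illustrative NumPy sketch of "Free" adversarial training on a simple logistic model with labels in {-1, +1} (the model, data, and attack step size are assumptions made for illustration, not the paper's ResNet setup): the perturbation persists across the m replays of a batch, and every replay updates both the weights (descent) and the perturbation (ascent).

```python
import numpy as np

def free_adversarial_train(X, y, epochs=50, m=4, eps=2/255, lr=0.1):
    """Sketch of "Free" adversarial training (Shafahi et al.) on a toy
    logistic model. Each batch (here: the full dataset) is replayed m
    times; every replay updates BOTH the weights (descent on the loss)
    and the shared perturbation delta (sign-ascent on the input
    gradient), with delta clipped to [-eps, eps]."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    delta = np.zeros_like(X)                    # persists across replays
    for _ in range(epochs):
        for _ in range(m):                      # m "replays" of the batch
            X_adv = X + delta
            p = 1.0 / (1.0 + np.exp(-y * (X_adv @ w)))   # P(correct label)
            g = -(1.0 - p) * y                  # d(logistic loss)/d(margin)
            w -= lr * (g @ X_adv) / len(y)      # model update (descent)
            delta += eps * np.sign(g[:, None] * w[None, :])  # FGSM-style ascent
            delta = np.clip(delta, -eps, eps)   # perturbation clipping
    return w
```

Each replay reuses a single gradient computation for both updates, which is what makes the method "free" relative to PGD-based training.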
These models were trained for \\(120\\) epochs using a stochastic gradient descent optimizer with an initial learning rate of \\(0.1\\) (reduced by a factor of \\(10\\) every \\(40\\) epochs), momentum of \\(0.9\\), and weight decay of \\(1\\times 10^{-5}\\). Each model was initialized with the weights that were learned during traditional ImageNet training for the analyses in Section 4.2. \"Free\" adversarial training was performed using code provided by the authors of this method ([https://github.com/mahyarnajibi/FreeAdversarialTraining](https://github.com/mahyarnajibi/FreeAdversarialTraining)).\n\n## Appendix H Robustness to Common Image Corruptions\n\n### Dataset Description\n\nWe evaluated image classification robustness to common image corruptions using the Tiny-ImageNet-C dataset [56]. Recall that Tiny-ImageNet-C was used instead of ImageNet-C because our models were trained on \\(64\\times 64\\) input images. Downscaling ImageNet-C images would have potentially altered the intended corruptions and biased our evaluations.\n\nTiny-ImageNet-C is among a collection of corrupted datasets (e.g., ImageNet-C, CIFAR-10-C, CIFAR-100-C) that feature a diverse set of corruptions to typical benchmark datasets. Hendrycks and Dietterich [56] suggest that, given the diversity of corruptions featured in these datasets, performance on these datasets can be seen as a general indicator of model robustness. The Tiny-ImageNet-C evaluation dataset consists of images from the Tiny-ImageNet validation dataset that have been corrupted according to \\(15\\) types of image corruption, each of which is categorized as a 'noise', 'blur', 'weather', or 'digital' corruption. The \\(15\\) corruption types include: Gaussian noise, shot noise, impulse noise, defocus blur, frosted glass blur, motion blur, zoom blur, snow, frost, fog, brightness, contrast, elastic transformation, pixelation, and JPEG compression. Each corruption is depicted in Fig. H.1. 
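As a schematic illustration of how a 'noise'-category corruption scales with severity, additive Gaussian noise can be sketched as follows (the severity-to-sigma ladder below is an assumption for illustration only; the exact parameters are fixed by the corruption code released by Hendrycks and Dietterich [56]):

```python
import numpy as np

def gaussian_noise(image, severity):
    """Schematic Gaussian-noise corruption for an image with pixel
    values in [0, 1]: higher severity -> larger noise standard
    deviation. SIGMAS is an illustrative ladder, NOT the exact
    values used to build Tiny-ImageNet-C."""
    SIGMAS = [0.04, 0.08, 0.12, 0.18, 0.26]   # severity levels 1-5
    rng = np.random.default_rng(0)
    noisy = image + rng.normal(0.0, SIGMAS[severity - 1], image.shape)
    return np.clip(noisy, 0.0, 1.0)           # keep a valid pixel range
```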
Every image of this evaluation dataset is also corrupted at five levels of severity (the higher the corruption severity, the more the original image has been corrupted). Corruption severities for Gaussian noise are exemplified in Fig. H.2.\n\nFigure H.1: \\(15\\) corruptions of the Tiny-ImageNet-C dataset, applied to a sample image from Tiny-ImageNet-C. First row: noise corruptions, second row: blur corruptions, third row: weather corruptions, bottom row: digital corruptions. All corruptions shown at severity level 3.\n\nFigure H.2: Gaussian noise corruption, shown at corruption severity levels 1-5.\n\n### Corrupted Image Robustness\n\nA detailed breakdown of Tiny-ImageNet-C image classification accuracy for each single-component, neuro-constrained ResNet-50 and the composite models that achieved top V1 Overall score without adversarial training is provided in Tables 8, 9, and 10.\n\n## Appendix I Code Availability\n\nCode and materials required to reproduce the work presented in this paper are available at github.com/bionicvisionlab/2023-Pogoncheff-Explaining-V1-Properties.\n\n\\begin{table}\n\\begin{tabular}{l r r r r r} \\multicolumn{5}{c}{Noise Corruptions} \\\\ & Gaussian Noise & Impulse Noise & Shot Noise & Avg. 
\\\\ \\hline ResNet50 (Baseline) & \\(\\mathbf{.197}\\pm.011\\) & \\(.191\\pm.010\\) & \\(\\mathbf{.232}\\pm.013\\) & \\(\\mathbf{.207}\\pm.011\\) \\\\ Center-surround antagonism & \\(.195\\pm.010\\) & \\(.186\\pm.009\\) & \\(\\mathbf{.232}\\pm.012\\) & \\(.204\\pm.010\\) \\\\ Local Receptive Fields & \\(.185\\pm.006\\) & \\(.184\\pm.009\\) & \\(.219\\pm.010\\) & \\(.196\\pm.008\\) \\\\ Tuned Normalization & \\(.195\\pm.008\\) & \\(\\mathbf{.192}\\pm.004\\) & \\(.228\\pm.007\\) & \\(.205\\pm.006\\) \\\\ Cortical Magnification & \\(.150\\pm.008\\) & \\(.157\\pm.007\\) & \\(.180\\pm.011\\) & \\(.162\\pm.008\\) \\\\ Composite Model A & \\(.151\\) & \\(.156\\) & \\(.184\\) & \\(.164\\) \\\\ Composite Model B & \\(.144\\) & \\(.149\\) & \\(.177\\) & \\(.157\\) \\\\ & & & & \\\\ \\multicolumn{5}{c}{Blur Corruptions} \\\\ & Defocus Blur & Glass Blur & Motion Blur & Zoom Blur & Avg. \\\\ \\hline ResNet50 (Baseline) & \\(.224\\pm.003\\) & \\(.182\\pm.001\\) & \\(.272\\pm.003\\) & \\(.241\\pm.004\\) & \\(.230\\pm.002\\) \\\\ Center-surround antagonism & \\(.223\\pm.009\\) & \\(.184\\pm.004\\) & \\(.274\\pm.012\\) & \\(.243\\pm.011\\) & \\(.231\\pm.009\\) \\\\ Local Receptive Fields & \\(.228\\pm.006\\) & \\(.183\\pm.004\\) & \\(.273\\pm.005\\) & \\(.243\\pm.008\\) & \\(.232\\pm.005\\) \\\\ Tuned Normalization & \\(\\mathbf{.234}\\pm.009\\) & \\(\\mathbf{.188}\\pm.002\\) & \\(\\mathbf{.277}\\pm.009\\) & \\(\\mathbf{.248}\\pm.010\\) & \\(\\mathbf{.237}\\pm.007\\) \\\\ Cortical Magnification & \\(.174\\pm.010\\) & \\(.162\\pm.008\\) & \\(.222\\pm.007\\) & \\(.190\\pm.006\\) & \\(.187\\pm.008\\) \\\\ Composite Model A & \\(.186\\) & \\(.167\\) & \\(.236\\) & \\(.200\\) & \\(.197\\) \\\\ Composite Model B & \\(.196\\) & \\(.174\\) & \\(.249\\) & \\(.222\\) & \\(.210\\) \\\\ & & & & & \\\\ \\multicolumn{5}{c}{Weather Corruptions} \\\\ & Brightness & Fog & Frost & Snow & Avg. 
\\\\ \\hline ResNet50 (Baseline) & \\(.401\\pm.005\\) & \\(\\mathbf{.282}\\pm.003\\) & \\(.360\\pm.006\\) & \\(.310\\pm.004\\) & \\(.338\\pm.004\\) \\\\ Center-surround antagonism & \\(.399\\pm.008\\) & \\(.270\\pm.008\\) & \\(.357\\pm.012\\) & \\(.302\\pm.003\\) & \\(.332\\pm.007\\) \\\\ Local Receptive Fields & \\(.398\\pm.008\\) & \\(.275\\pm.005\\) & \\(.351\\pm.006\\) & \\(.298\\pm.004\\) & \\(.331\\pm.003\\) \\\\ Tuned Normalization & \\(\\mathbf{.410}\\pm.008\\) & \\(\\mathbf{.282}\\pm.011\\) & \\(\\mathbf{.361}\\pm.006\\) & \\(\\mathbf{.311}\\pm.010\\) & \\(\\mathbf{.341}\\pm.008\\) \\\\ Cortical Magnification & \\(.327\\pm.011\\) & \\(.211\\pm.013\\) & \\(.283\\pm.014\\) & \\(.248\\pm.010\\) & \\(.267\\pm.011\\) \\\\ Composite Model A & \\(.338\\) & \\(.220\\) & \\(.286\\) & \\(.258\\) & \\(.275\\) \\\\ Composite Model B & \\(.327\\) & \\(.225\\) & \\(.284\\) & \\(.255\\) & \\(.273\\) \\\\ & & & & & \\\\ \\multicolumn{5}{c}{Digital Corruptions} \\\\ & Contrast & Elastic & JPEG & Pixelate & Avg. 
\\\\ \\hline ResNet50 (Baseline) & \\(.125\\pm.001\\) & \\(.331\\pm.007\\) & \\(.454\\pm.007\\) & \\(.374\\pm.003\\) & \\(.321\\pm.003\\) \\\\ Center-surround antagonism & \\(.122\\pm.002\\) & \\(.331\\pm.014\\) & \\(.455\\pm.007\\) & \\(.374\\pm.004\\) & \\(.321\\pm.006\\) \\\\ Local Receptive Fields & \\(.120\\pm.004\\) & \\(.329\\pm.003\\) & \\(.457\\pm.005\\) & \\(.375\\pm.002\\) & \\(.320\\pm.001\\) \\\\ Tuned Normalization & \\(\\mathbf{.128}\\pm.008\\) & \\(\\mathbf{.342}\\pm.010\\) & \\(\\mathbf{.463}\\pm.006\\) & \\(\\mathbf{.387}\\pm.006\\) & \\(\\mathbf{.330}\\pm.007\\) \\\\ Cortical Magnification & \\(.082\\pm.005\\) & \\(.287\\pm.007\\) & \\(.374\\pm.013\\) & \\(.287\\pm.014\\) & \\(.257\\pm.010\\) \\\\ Composite Model A & \\(.081\\) & \\(.305\\) & \\(.397\\) & \\(.303\\) & \\(.272\\) \\\\ Composite Model B & \\(.086\\) & \\(.314\\) & \\(.383\\) & \\(.293\\) & \\(.269\\) \\\\ \\end{tabular}\n\\end{table}\nTable 10: Corrupted image classification accuracy by corruption type. Composite Model A includes all 4 neuro-constrained architectural components (center-surround antagonism, local receptive fields, tuned normalization, and cortical magnification). Composite Model B contained all architectural components, with the exception of center-surround antagonism. For baseline and single-component models, mean accuracies (\\(\\pm\\) one standard deviation) are reported, where each trial was associated with a distinct base model from the repeated trials of section 4.1.", "response": "## Review\n\n### Summary\n\nThis paper proposes the incorporation of biologically inspired components, including center-surround receptive fields, local receptive fields, tuned divisive normalization, and cortical magnification, into deep convolutional networks (DCNs) to enhance alignment with V1 neural responses. 
The authors conduct a systematic ablation analysis to evaluate the contributions of these components, ultimately demonstrating that their approach achieves significant improvements in Brain-Score V1 alignment, particularly with the addition of adversarial training. However, the study also notes a reduction in task performance on ImageNet, which raises questions about the trade-offs involved in enhancing V1 alignment. Overall, the work fits well within the current neuroscience and AI intersection, addressing important questions about model design and performance.\n\n### Strengths\n\n- The paper provides a thorough discussion about biologically inspired computations in V1, integrating them into deep neural networks as modules with learnable components.\n- The systematic evaluation of individual contributions of architectural components is well done, enhancing understanding of their roles in capturing V1 properties.\n- The approach of combining multiple V1 components in one model is novel and addresses a high-interest topic for both the CS and neuroscience communities.\n- The paper presents interesting findings about cortical magnification's role in improving alignment, adding value to the current literature.\n\n### Weaknesses\n\n- The novelty of biologically inspired components in neural networks is somewhat diminished as similar efforts have been made previously, notably in works like VOneNet; a clearer distinction from prior art is needed.\n- Clarity issues exist regarding the organization of background and methods, making it difficult to follow the integration of previous work.\n- The paper lacks detailed discussion on why specific components yield varying effects on alignment, which is crucial for understanding their contributions.\n- The significant drop in ImageNet classification performance raises concerns; the authors should explore the reasons behind this trade-off more comprehensively.\n- Some results show only small improvements in Brain-Score, leading to 
questions about the significance of claims made regarding performance enhancements.\n\n### Questions\n\n- What sets this work apart from previous attempts at integrating biological components into neural networks, especially in relation to VOneNet?\n- Could the authors clarify the greedy backwards elimination process used in their analysis?\n- Is there a theoretical framework explaining why certain architectural features enhance V1 alignment while others do not?\n- Can the authors include results for other brain areas (V2, V4, IT) to facilitate comparisons with existing models?\n- What are the implications of downsampling ImageNet images to 64x64, and could training on larger images improve accuracy?\n\n### Soundness\n\n**Score:** 3\n\n**Description:** 3 = good: The paper presents a technically sound approach with systematic evaluations, though it lacks some critical discussions and clarity on results.\n\n### Presentation\n\n**Score:** 3\n\n**Description:** 3 = good: The overall presentation is clear, but there are areas needing restructuring for better clarity and flow.\n\n### Contribution\n\n**Score:** 3\n\n**Description:** 3 = good: The paper makes a meaningful contribution to the field by exploring the integration of biological components in DCNs, despite some limitations in novelty.\n\n### Rating\n\n**Score:** 6\n\n**Description:** 6 = Weak Accept: The paper is technically solid and presents moderate-to-high impact findings, but it requires further exploration of certain weaknesses.\n\n### Paper Decision\n\n**Decision:** Accept (poster)\n\n**Reasons:** The paper presents original research within a timely and significant area, demonstrating soundness and capturing the interest of the neuroscience and AI communities. 
Despite some clarity issues and a notable drop in classification performance, the strengths and contributions outweigh the weaknesses, justifying an acceptance with minor revisions.\n"} {"query": "You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections: \n1. Summary: A summary of the paper in 100-150 words.\n2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.\n3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are: \n 1 poor\n 2 fair\n 3 good\n 4 excellent\n4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are: \n 1 strong reject\n 2 reject, significant issues present\n 3 reject, not good enough\n 4 possibly reject, but has redeeming facets\n 5 marginally below the acceptance threshold\n 6 marginally above the acceptance threshold\n 7 accept, but needs minor improvements \n 8 accept, good paper\n 9 strong accept, excellent work\n 10 strong accept, should be highlighted at the conference \n5. 
Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.\n\nHere is the template for a review format, you must follow this format to output your review result:\n**Summary:**\nSummary content\n\n**Strengths:**\n- Strength 1\n- Strength 2\n- ...\n\n**Weaknesses:**\n- Weakness 1\n- Weakness 2\n- ...\n\n**Questions:**\n- Question 1\n- Question 2\n- ...\n\n**Soundness:**\nSoundness result\n\n**Presentation:**\nPresentation result\n\n**Contribution:**\nContribution result\n\n**Rating:**\nRating result\n\n**Paper Decision:**\n- Decision: Accept/Reject\n- Reasons: reasons content\n\n\nPlease ensure your feedback is objective and constructive. The paper is as follows:\n\n# Boosting with Tempered Exponential Measures\n\n Richard Nock\n\nGoogle Research\n\nrichardnock@google.com &Ehsan Amid\n\nGoogle DeepMind\n\neamid@google.com &Manfred K. Warmuth\n\nGoogle Research\n\nmanfred@google.com\n\n###### Abstract\n\nOne of the most popular ML algorithms, AdaBoost, can be derived from the dual of a relative entropy minimization problem subject to the fact that the positive weights on the examples sum to one. Essentially, harder examples receive higher probabilities. We generalize this setup to the recently introduced _tempered exponential measures_ (tems) where normalization is enforced on a specific power of the measure and not the measure itself. tems are indexed by a parameter \\(t\\) and generalize exponential families (\\(t=1\\)). Our algorithm, \\(t\\)-AdaBoost, recovers AdaBoost as a special case (\\(t=1\\)). We show that \\(t\\)-AdaBoost retains AdaBoost's celebrated exponential convergence rate on margins when \\(t\\in[0,1)\\) while allowing a slight improvement of the rate's hidden constant compared to \\(t=1\\). 
\\(t\\)-AdaBoost partially computes on a generalization of classical arithmetic over the reals and brings notable properties like guaranteed bounded leveraging coefficients for \\(t\\in[0,1)\\). From the loss that \\(t\\)-AdaBoost minimizes (a generalization of the exponential loss), we show how to derive a new family of _tempered_ losses for the induction of domain-partitioning classifiers like decision trees. Crucially, strict properness is ensured for all while their boosting rates span the full known spectrum of boosting rates. Experiments using \\(t\\)-AdaBoost+trees display that significant leverage can be achieved by tuning \\(t\\).\n\n## 1 Introduction\n\nAdaBoost is one of the most popular ML algorithms [8, 30]. It efficiently aggregates weak hypotheses into a highly accurate linear combination [10]. The common motivations of boosting algorithms focus on choosing good linear weights (the leveraging coefficients) for combining the weak hypotheses. A dual view of boosting highlights the dual parameters, which are the weights on the examples. These weights define a distribution, and AdaBoost can be viewed as minimizing a relative entropy to the last distribution subject to a linear constraint introduced by the current hypothesis [12]. For this reason (more in SS 2), AdaBoost's weights define an exponential family.\n\n**In this paper**, we go beyond weighing the examples with a discrete exponential family distribution, relaxing the constraint that the total mass be unit but instead requiring it for the measure's \\(1/(2-t)\\)'th power, where \\(t\\) is a temperature parameter. Such measures, called _tempered exponential measures_ (tems), have been recently introduced [4]. Here we apply the discrete version of these tems for deriving a novel boosting algorithm called \\(t\\)-AdaBoost. Again the measures are solutions to a relative entropy minimization problem, but the relative entropy is built from Tsallis entropy and \"tempered\" by a parameter \\(t\\). 
As \\(t\\to 1\\), tems become standard exponential family distributions and our new algorithm merges into AdaBoost. As much as AdaBoost minimizes the exponential loss, \\(t\\)-AdaBoost minimizes a generalization of this loss we denote as the _tempered exponential loss_.\n\ntems were introduced in the context of clustering, where they were shown to improve the robustness to outliers of clustering's population minimizers [4]. They have also been shown to bring low-level sparsity features to optimal transport [3]. Boosting is high-precision machinery: AdaBoost is known to achieve near-optimal boosting rates under the weak learning assumption [1], but it has long been known that numerical issues can derail it, in particular because of the unbounded weight update rule [14]. So the question of what the tem setting can bring to boosting is of central importance. As we show, \\(t\\)-AdaBoost need suffer no rate setback, as boosting's exponential rate of convergence on _margins_ can be preserved for all \\(t\\in[0,1)\\). Several interesting features emerge: the weight update becomes bounded, margin optimization can be _tuned_ with \\(t\\) to focus on examples with very low margin, and, besides linear separators, the algorithm can also learn _progressively_ clipped models1. Finally, the weight update gives rise to a new regime whereby weights can \"switch off and on\": an example's weight can become zero if it is too well classified by the current linear separator, and later revert to non-zero if it is badly classified by a subsequent iterate. \\(t\\)-AdaBoost makes use of a generalization of classical arithmetic over the reals introduced decades ago [18].\n\nFootnote 1: Traditionally, clipping a sum is done after it has been fully computed. 
In our case, it is clipped after each new summand is added.\n\nBoosting algorithms for linear models like AdaBoost bring more than just learning good linear separators: it is known that (ada)boosting linear models can be used to emulate the training of _decision trees_ (DT) [16], which are models known to lead to some of the best off-the-shelf classifiers when linearly combined [9]. Unsurprisingly, the algorithm obtained emulates the classical top-down induction of a tree found in major packages like CART [6] and C4.5 [23]. The _loss_ equivalently minimized, which is, _e.g._, Matusita's loss for AdaBoost [30, Section 4.1], is far more consequential. Contrary to losses for real-valued classification, losses to train DTs rely on the estimates of the posterior learned by the model; they are usually called _losses for Class Probability Estimation_ (CPE [25]). The CPE loss is crucial to elicit because (i) it is possible to check whether it is \"good\" from the standpoint of properness (Bayes rule is optimal for the loss [28]), and (ii) it conditions boosting rates, only a handful of them being known, for the most popular CPE losses [11; 22; 31].\n\n**In this paper**, we show that this emulation scheme on \\(t\\)-AdaBoost provides a new family of CPE losses with remarkable constancy with respect to properness: losses are _strictly_ proper (Bayes rule is the _sole_ optimum) for any \\(t\\in(-\\infty,2)\\) and proper for \\(t=-\\infty\\). Furthermore, over the range \\(t\\in[-\\infty,1]\\), the range of boosting rates spans the full spectrum of known boosting rates [11].\n\nWe provide experiments displaying the boosting ability of \\(t\\)-AdaBoost over a range of \\(t\\) encompassing potentially more than the set of values covered by our theory, and highlight the potential of using \\(t\\) as a parameter for efficiently tuning the loss [25, Section 8]. Due to a lack of space, proofs are relegated to the appendix (APP). 
A primer on tems is also given in APP., Section I.\n\n## 2 Related work\n\nBoosting refers to the ability of an algorithm to combine the outputs of moderately accurate, \"weak\" hypotheses into a highly accurate, \"strong\" ensemble. Originally, boosting was introduced in the context of Valiant's PAC learning model as a way to circumvent the then-existing amount of related negative results [10; 34]. After the first formal proof that boosting is indeed achievable [29], AdaBoost became the first practical and proof-checked boosting algorithm [8; 30]. Boosting was thus born in a machine learning context, but later on, it also emerged in statistics as a way to learn from class residuals computed using the gradient of the loss [9; 21], resulting this time in a flurry of computationally efficient algorithms, still called boosting algorithms, but for which the connection with the original weak/strong learning framework is in general not known.\n\nOur paper draws its boosting connections with AdaBoost's formal lineage. AdaBoost has spurred a long line of work alongside different directions, including statistical consistency [5], noise handling [15; 16], low-resource optimization [22], _etc_. The starting point of our work is a fascinating result in convex optimization establishing a duality between the algorithm and its memory of past iteration's performances, a probability distribution of so-called _weights_ over examples [12]. From this standpoint, AdaBoost solves the dual of the optimization of a Bregman divergence (constructed from the negative Shannon entropy as the generator) between weights subject to zero correlation with the last weak classifier's performance. As a consequence, weights define an exponential family. Indeed, whenever a relative entropy is minimized subject to linear constraints, then the solution is a member of an exponential family of distributions (see _e.g._[2, Section 2.8.1] for an axiomatization of exponential families). 
AdaBoost's distribution on the examples is a member of a discrete exponential family where the training examples are the finite support of the distribution, sufficient statistics are defined from the weak learners, and the leveraging coefficients are the natural parameters. In summary, there is an intimate relationship between boosting à la AdaBoost, exponential families, and Bregman divergences [7; 12; 20], and our work \"elevates\" these methods above exponential families.\n\n## 3 Definitions\n\nWe define the \\(t\\)-logarithm and \\(t\\)-exponential [17, Chapter 7],\n\n\\[\\log_{t}(z)\\doteq\\frac{1}{1-t}\\cdot\\left(z^{1-t}-1\\right)\\quad,\\quad\\exp_{t}(z)\\doteq[1+(1-t)z]_{+}^{1/(1-t)}\\quad([z]_{+}\\doteq\\max\\{0,z\\}), \\tag{1}\\]\n\nwhere the case \\(t=1\\) is understood as the extension by continuity to the \\(\\log\\) and \\(\\exp\\) functions, respectively. To preserve the concavity of \\(\\log_{t}\\) and the convexity of \\(\\exp_{t}\\), we need \\(t\\geq 0\\). In the general case, we also note the asymmetry of the composition: while \\(\\exp_{t}\\log_{t}(z)=z,\\forall t\\in\\mathbb{R}\\), we have \\(\\log_{t}\\exp_{t}(z)=z\\) for \\(t=1\\) (\\(\\forall z\\in\\mathbb{R}\\)), but\n\n\\[\\log_{t}\\exp_{t}(z)=\\max\\left\\{-\\frac{1}{1-t},z\\right\\}\\quad(t<1)\\quad\\mathrm{and}\\quad\\log_{t}\\exp_{t}(z)=\\min\\left\\{\\frac{1}{t-1},z\\right\\}\\quad(t>1).\\]\n\nComparisons between vectors and real-valued functions written on vectors are assumed component-wise. We assume \\(t\\neq 2\\) and define the notation \\(t^{*}\\doteq 1/(2-t)\\). 
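As a quick numerical illustration of (1) and of the asymmetric composition above, the two maps can be sketched in a few lines (illustrative code, not from the paper): exp_t inverts log_t for every t, while log_t(exp_t(z)) saturates at -1/(1-t) when t < 1.

```python
import math

def log_t(z, t):
    """Tempered logarithm: log_t(z) = (z**(1-t) - 1)/(1-t); t = 1 recovers log."""
    if t == 1.0:
        return math.log(z)
    return (z ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(z, t):
    """Tempered exponential: exp_t(z) = max(0, 1 + (1-t)*z) ** (1/(1-t)); t = 1 recovers exp."""
    if t == 1.0:
        return math.exp(z)
    return max(0.0, 1.0 + (1.0 - t) * z) ** (1.0 / (1.0 - t))

# exp_t(log_t(z)) = z for any t, but log_t(exp_t(z)) = max{-1/(1-t), z} for t < 1:
assert abs(exp_t(log_t(2.5, 0.5), 0.5) - 2.5) < 1e-9
assert abs(log_t(exp_t(-5.0, 0.5), 0.5) - (-2.0)) < 1e-9   # saturates at -1/(1-t) = -2
```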
We now define the key set in which we model our weights (boldfaces denote vector notation).\n\n**Definition 3.1**.: _The co-simplex of \\(\\mathbb{R}^{m}\\), \\(\\tilde{\\Delta}_{m}\\), is defined as \\(\\tilde{\\Delta}_{m}\\doteq\\{\\mathbf{q}\\in\\mathbb{R}^{m}:\\mathbf{q}\\geq\\mathbf{0}\\wedge\\mathbf{1}^{\\top}\\mathbf{q}^{1/t\\text{*}}=1\\}\\)._\n\nThe letter \\(\\mathbf{q}\\) will be used to denote tems in \\(\\tilde{\\Delta}_{m}\\), while \\(\\mathbf{p}\\) denotes the co-density \\(\\mathbf{q}^{\\frac{1}{t\\text{*}}}\\) or any element of the probability simplex. We define the general tempered relative entropy as\n\n\\[D_{t}(\\mathbf{q}^{\\prime}\\|\\mathbf{q}) = \\sum_{i\\in[m]}q^{\\prime}_{i}\\cdot\\left(\\log_{t}q^{\\prime}_{i}-\\log_{t}q_{i}\\right)-\\log_{t-1}q^{\\prime}_{i}+\\log_{t-1}q_{i}, \\tag{2}\\]\n\nwhere \\([m]\\doteq\\{1,...,m\\}\\). The tempered relative entropy is a Bregman divergence with convex generator \\(\\varphi_{t}(z)\\doteq z\\log_{t}z-\\log_{t-1}(z)\\) (for \\(t\\in\\mathbb{R}\\)) and \\(\\varphi_{t}^{\\prime}(z)=\\log_{t}(z)\\). As \\(t\\to 1\\), \\(D_{t}(\\mathbf{q}^{\\prime}\\|\\mathbf{q})\\) becomes the relative entropy with generator \\(\\varphi_{1}(z)=z\\log(z)-z\\).\n\n## 4 Tempered boosting as tempered entropy projection\n\nWe start with a fixed sample \\(\\mathcal{S}=\\{(\\mathbf{x}_{i},y_{i}):i\\in[m]\\}\\) where observations \\(\\mathbf{x}_{i}\\) lie in some domain \\(\\mathcal{X}\\) and labels \\(y_{i}\\) are \\(\\pm 1\\). AdaBoost maintains a distribution \\(\\mathbf{p}\\) over the sample. 
At the current iteration, this distribution is updated based on a current _weak hypothesis_ \\(h\\in\\mathbb{R}^{\\mathcal{X}}\\) using an exponential update:\n\n\\[p^{\\prime}_{i}=\\frac{p_{i}\\cdot\\exp(-\\mu u_{i})}{\\sum_{k}p_{k}\\cdot\\exp(-\\mu u_{k})},\\;\\text{where}\\;u_{i}\\doteq y_{i}h(\\mathbf{x}_{i}),\\mu\\in\\mathbb{R}.\\]\n\nIn [12] this update is motivated as minimizing a relative entropy subject to the constraint that \\(\\mathbf{p}^{\\prime}\\) is a distribution summing to 1 and \\(\\mathbf{p}^{\\prime\\top}\\mathbf{u}=0\\). Following this blueprint, we create a boosting algorithm maintaining a discrete tem over the sample, which is motivated as a constrained minimization of the tempered relative entropy, with a normalization constraint on the co-simplex of \\(\\mathbb{R}^{m}\\):\n\n\\[\\mathbf{q}^{\\prime}\\doteq\\arg\\min_{\\mathbf{\\tilde{q}}\\in\\tilde{\\Delta}_{m}\\,:\\,\\mathbf{\\tilde{q}}^{\\top}\\mathbf{u}=0}D_{t}(\\mathbf{\\tilde{q}}\\|\\mathbf{q}),\\quad\\text{with }\\mathbf{u}\\in\\mathbb{R}^{m}. \\tag{3}\\]\n\nWe now show that the solution \\(\\mathbf{q}^{\\prime}\\) is a tempered generalization of AdaBoost's exponential update.\n\n**Theorem 1**.: _For all \\(t\\in\\mathbb{R}\\backslash\\{2\\}\\), all solutions to (3) have the form_\n\n\\[q^{\\prime}_{i}=\\frac{\\exp_{t}(\\log_{t}q_{i}-\\mu u_{i})}{Z_{t}}\\quad\\left(=\\frac{q_{i}\\otimes_{t}\\exp_{t}(-\\mu u_{i})}{Z_{t}},\\;\\text{with}\\;a\\otimes_{t}b\\doteq[a^{1-t}+b^{1-t}-1]_{+}^{\\frac{1}{1-t}}\\right), \\tag{4}\\]\n\n_where \\(Z_{t}\\) ensures co-simplex normalization of the co-density. Furthermore, the unknown \\(\\mu\\) satisfies_\n\n\\[\\mu\\in\\arg\\max-\\log_{t}(Z_{t}(\\mu))\\quad(=\\arg\\min Z_{t}(\\mu)), \\tag{5}\\]\n\n_or equivalently is a solution to the nonlinear equation_\n\n\\[\\mathbf{q}^{\\prime}(\\mu)^{\\top}\\mathbf{u} = 0. 
\\tag{6}\\]\n\n_Finally, if either (i) \\(t\\in\\mathbb{R}_{>0}\\backslash\\{2\\}\\) or (ii) \\(t=0\\) and \\(\\mathbf{q}\\) is not collinear to \\(\\mathbf{u}\\), then \\(Z_{t}(\\mu)\\) is strictly convex: the solution to (3) is thus unique, and can be found from expression (4) by finding the unique minimizer of (5) or (equivalently) the unique solution to (6)._\n\n(Proof in APP., Section II.1) The \\(t\\)-product \\(\\otimes_{t}\\), which satisfies \\(\\exp_{t}(a+b)=\\exp_{t}(a)\\otimes_{t}\\exp_{t}(b)\\), was introduced in [18]. Collinearity never happens in our ML setting because \\(\\mathbf{u}\\) contains the edges of a weak classifier: \\(\\mathbf{q}>0\\), and collinearity would imply that \\(\\pm\\) the weak classifier performs perfect classification, and thus defeats the purpose of training an ensemble. \\(\\forall t\\in\\mathbb{R}\\backslash\\{2\\}\\), we have the simplified expression for the normalization coefficient of the tem and the co-density \\(\\mathbf{p}^{\\prime}\\) of \\(\\mathbf{q}^{\\prime}\\):\n\n\\[Z_{t}=\\left\\|\\exp_{t}\\left(\\log_{t}\\mathbf{q}-\\mu\\cdot\\mathbf{u}\\right)\\right\\|_{1/t\\ast}\\ ;\\ \\ p^{\\prime}_{i}=\\frac{p_{i}\\otimes_{t\\ast}\\exp_{t\\ast}\\left(-\\frac{\\mu u_{i}}{t^{\\ast}}\\right)}{Z^{\\prime}_{t}}\\ \\ \\Big{(}\\ \\text{with}\\ Z^{\\prime}_{t}\\doteq Z^{1/t\\ast}_{t}\\Big{)}. \\tag{7}\\]\n\n## 5 Tempered boosting for linear classifiers and clipped linear classifiers\n\n**Models** A model (or classifier) is an element of \\(\\mathbb{R}^{\\mathcal{X}}\\). For any model \\(H\\), its empirical risk over \\(\\mathcal{S}\\) is \\(F_{\\nicefrac{{0}}{{1}}}(H,\\mathcal{S})\\doteq(1/m)\\cdot\\sum_{i}[y_{i}\\neq\\mathrm{sign}(H(\\mathbf{x}_{i}))]\\), where \\([.]\\), Iverson's bracket [13], is the Boolean value of the inner predicate. We learn linear separators and _clipped_ linear separators. Let \\((v_{j})_{j\\geq 1}\\) be the terms of a series and \\(\\delta\\geq 0\\). 
The clipped sum of the series is:\n\n\\[\\stackrel{{(\\delta)}}{{(-\\delta)}}\\!\\!\\sum_{j\\in[J]}v_{j}=\\min\\left\\{\\delta,\\max\\left\\{-\\delta,\\,v_{J}+\\stackrel{{(\\delta)}}{{(-\\delta)}}\\!\\!\\sum_{j\\in[J-1]}v_{j}\\right\\}\\right\\}\\quad(\\in[-\\delta,\\delta]),\\ \\text{for}\\ J>1,\\]\n\nand we define the base case \\(J=1\\) by replacing the inner clipped sum with 0. Note that clipped summation is non-commutative, and so is different from clipping the whole sum itself in \\([-\\delta,\\delta]\\)2. Given a set of so-called weak hypotheses \\(h_{j}\\in\\mathbb{R}^{\\mathcal{X}}\\) and leveraging coefficients \\(\\alpha_{j}\\in\\mathbb{R}\\) (for \\(j\\in[J]\\)), the corresponding linear separators and clipped linear separators are\n\nFootnote 2: Fix for example \\(a=-1,b=3,\\delta=2\\). For \\(v_{1}=a,v_{2}=b\\), the clipped sum is \\(2=-1+3\\), but for \\(v_{1}=b,v_{2}=a\\), the clipped sum becomes \\(1=\\mathbf{2}-1\\).\n\n\\[H_{J}(\\mathbf{x})\\doteq\\sum_{j\\in[J]}\\alpha_{j}h_{j}(\\mathbf{x})\\quad;\\quad H^{(\\delta)}_{J}(\\mathbf{x})\\doteq\\stackrel{{(\\delta)}}{{(-\\delta)}}\\!\\!\\sum_{j\\in[J]}\\alpha_{j}h_{j}(\\mathbf{x}). \\tag{9}\\]\n\n**Tempered boosting and its general convergence** Our algorithm, \\(t\\)-AdaBoost, is presented in Algorithm 1, using presentation conventions from [30]. Before analyzing its convergence, several properties are to be noted for \\(t\\)-AdaBoost: first, it keeps the appealing property, introduced by AdaBoost, that examples receiving the wrong class from the current weak classifier are reweighted higher (if \\(\\mu_{j}>0\\)). Second, the leveraging coefficients for weak classifiers in the final classifier (\\(\\alpha_{j}\\)s) are not the same as the ones used to update the weights (\\(\\mu_{j}\\)s), unless \\(t=1\\). 
Third and last, because of the definition of \\(\\exp_{t}\\) (1), if \\(t<1\\), tempered weights can switch off and on, _i.e._, become 0 if an example is "too well classified" and then revert back to being \\(>0\\) if the example becomes wrongly classified by the current weak classifier (if \\(\\mu_{j}>0\\)). To take into account those zeroing weights, we denote \\([m]_{j}^{\\dagger}\\doteq\\{i:q_{ji}=0\\}\\) and \\(m_{j}^{\\dagger}\\doteq\\mathrm{Card}([m]_{j}^{\\dagger})\\) (\\(\\forall j\\in[J]\\), \\(\\mathrm{Card}\\) denoting cardinality). Let \\(R_{j}\\doteq\\max_{i\\notin[m]_{j}^{\\dagger}}|y_{i}h_{j}(\\boldsymbol{x}_{i})|/q_{ji}^{1-t}\\) and \\(q_{j}^{\\dagger}\\doteq\\max_{i\\in[m]_{j}^{\\dagger}}|y_{i}h_{j}(\\boldsymbol{x}_{i})|^{1/(1-t)}/R_{j}^{1/(1-t)}\\). It is worth noting that \\(q_{j}^{\\dagger}\\) is homogeneous to a tempered weight.\n\n**Theorem 2**.: _At iteration \\(j\\), define the weight function \\(q_{ji}^{\\prime}=q_{ji}\\) if \\(i\\notin[m]_{j}^{\\dagger}\\) and \\(q_{j}^{\\dagger}\\) otherwise; set_\n\n\\[\\rho_{j} \\doteq \\frac{1}{(1+{m_{j}^{\\dagger}}{q_{j}^{\\dagger}}^{2-t})R_{j}}\\cdot \\sum_{i\\in[m]}q_{ji}^{\\prime}y_{i}h_{j}(\\boldsymbol{x}_{i})\\quad(\\in[-1,1]). \\tag{10}\\]\n\n_In algorithm \\(t\\)-AdaBoost, consider the choices (with the convention \\(\\prod_{k=1}^{0}v_{k}\\doteq 1\\))_\n\n\\[\\mu_{j}\\doteq-\\frac{1}{R_{j}}\\cdot\\log_{t}\\left(\\frac{1-\\rho_{j}}{M_{1-t}(1- \\rho_{j},1+\\rho_{j})}\\right)\\quad,\\quad\\alpha_{j}\\doteq m^{1-t^{*}}\\cdot\\left( \\prod_{k=1}^{j-1}Z_{k}\\right)^{1-t}\\cdot\\mu_{j}, \\tag{11}\\]\n\n_where \\(M_{q}(a,b)\\doteq((a^{q}+b^{q})/2)^{1/q}\\) is the \\(q\\)-power mean.
Then for any \\(H\\in\\{H_{J},H_{J}^{(\\nicefrac{{1}}{{1-t}})}\\}\\), its empirical risk is upper-bounded as:_\n\n\\[F_{\\nicefrac{{0}}{{1}}}(H,\\mathcal{S})\\leqslant\\prod_{j=1}^{J}Z_{tj}^{2-t} \\leqslant\\prod_{j=1}^{J}\\left(1+{m_{j}^{\\dagger}}{q_{j}^{\\dagger}}^{2-t} \\right)\\cdot K_{t}(\\rho_{j})\\quad\\left(K_{t}(z)\\doteq\\frac{1-z^{2}}{M_{1-t}(1-z,1+z)}\\right). \\tag{12}\\]\n\n(Proof in APP., Section II.2) We comment on \\(t\\)-AdaBoost and Theorem 2 jointly, in two parts.\n\n**Case \\(t\\to 1^{-}\\):** \\(t\\)-AdaBoost converges to AdaBoost as presented in [30, Figure 1], and Theorem 2 to its convergence analysis: the tempered simplex becomes the probability simplex, \\(\\otimes_{t}\\) converges to regular multiplication, weight update (8) becomes AdaBoost's, \\(\\alpha_{j}\\rightarrow\\mu_{j}\\) in (11), and the expression of \\(\\mu_{j}\\) converges to AdaBoost's leveraging coefficient in [30] (\\(\\lim_{t\\to 1}M_{1-t}(a,b)=\\sqrt{ab}\\)). Even guarantee (12) converges to AdaBoost's popular guarantee of [30, Corollary 1] (\\(\\lim_{t\\to 1}K_{t}(z)=\\sqrt{1-z^{2}}\\), \\(m_{j}^{\\dagger}=0\\)). Also, in this case, we learn only the unclipped classifier since \\(\\lim_{t\\to 1^{-}}H_{J}^{(\\nicefrac{{1}}{{1-t}})}=H_{J}\\).\n\n**Case \\(t<1\\):** Let us first comment on the convergence rate. The proof of Theorem 2 shows that \\(K_{t}(z)\\leqslant\\exp(-z^{2}/(2t^{*}))\\). Suppose there is no weight switching, so \\(m_{j}^{\\dagger}=0,\\forall j\\) (see Section 7) and, as in the boosting model, suppose there exists \\(\\gamma>0\\) such that \\(|\\rho_{j}|\\geqslant\\gamma,\\forall j\\). Then \\(t\\)-AdaBoost is guaranteed to attain empirical risk below some \\(\\varepsilon>0\\) after a number of iterations equal to \\(J=(2t^{*}/\\gamma^{2})\\cdot\\log(1/\\varepsilon)\\).
\\(t^{*}\\) being an increasing function of \\(t\\in[0,1]\\), we see that \\(t\\)-AdaBoost is able to slightly improve upon AdaBoost's celebrated rate [32]. However, \\(t^{*}=1/2\\) for \\(t=0\\) so the improvement is just on the hidden constant. This analysis is suited for small values of \\(|\\rho_{j}|\\) and does not reveal an interesting phenomenon for better weak hypotheses. Figure 1 compares \\(K_{t}(z)\\) curves (\\(K_{1}(z)\\doteq\\lim_{t\\to 1}K_{t}(z)=\\sqrt{1-z^{2}}\\) for AdaBoost, see [30, Corollary 1]), showing the case \\(t<1\\) can be substantially better, especially when weak hypotheses are not \"too weak\". If \\(m_{j}^{\\dagger}>0\\), switching weights can impede our convergence _analysis_, though exponential convergence is always possible if \\({m_{j}^{\\dagger}}{q_{j}^{\\dagger}}^{2-t}\\) is small enough; also, when it is not, we may in fact have converged to a good model (see APP., Remark 1). A good criterion to train weak hypotheses is then the optimization of the edge \\(\\rho_{j}\\), thus using \\(\\boldsymbol{q}_{j}^{\\prime}\\) normalized in the simplex. Other key features of \\(t\\)-AdaBoost are as follows. First, the weight update and leveraging coefficients of weak classifiers are bounded because \\(|\\mu_{j}|<1/(R_{j}(1-t))\\) (APP., Lemma H) (this is not the case for \\(t\\to 1^{-}\\)). This guarantees that new weights are bounded before normalization (unlike for \\(t\\to 1^{-}\\)). Second, we remark that \\(\\mu_{j}\\neq\\alpha_{j}\\) if \\(t\\neq 1\\). Factor \\(m^{1-t^{*}}\\) is added for convergence analysis purposes; we can discard it to train the unclipped classifier: it does not change its empirical risk. 
This is, however, different for factor \\(\\prod_{k=1}^{j-1}Z_{k}\\): from (12), we conclude that it indicates how well the past ensemble performs. When the past ensemble performs well (small \\(\\prod_{k}Z_{k}\\)), the leveraging coefficient of the new weak classifier is scaled down accordingly, a phenomenon that does not occur in boosting, where an excellent weak hypothesis on the current weights can have a leveraging coefficient so large that it wipes out the classification of the past ones. This can be useful to control numerical instabilities.\n\n**Extension to margins.** A key property of boosting algorithms like AdaBoost is to be able to boost not just the empirical risk but more generally _margins_ [19, 30], where a margin integrates both the accuracy of label prediction and the confidence in prediction (say \\(|H|\\)). We generalize the margin notion of [19] to the tempered arithmetic and let \\(\\nu_{t}((\\boldsymbol{x},y),H)\\doteq\\tanh_{t}(yH(\\boldsymbol{x})/2)\\) denote the margin of \\(H\\) on example \\((\\boldsymbol{x},y)\\), where \\(\\tanh_{t}(z)\\doteq(1-\\exp_{t}(-2z))/(1+\\exp_{t}(-2z))(\\in[-1,1])\\) is the tempered hyperbolic tangent. The objective of minimizing the empirical risk is generalized to minimizing the margin risk, \\(F_{t,\\theta}(H,\\mathcal{S})\\doteq(1/m)\\cdot\\sum_{i}[\\nu_{t}((\\boldsymbol{x}_{i},y_{i}),H)\\leq\\theta]\\), where \\(\\theta\\in(-1,1)\\). Guarantees on the empirical risk are guarantees on the margin risk for \\(\\theta=0\\) only. In just a few steps, we can generalize Theorem 2 to _all_ \\(\\theta\\in(-1,1)\\). For space reasons, we state the core part of the generalization, from which a full generalization of Theorem 2 is simple to obtain.\n\n**Theorem 3**.: _For any \\(\\theta\\in(-1,1)\\) and \\(t\\in[0,1]\\), the guarantee of algorithm \\(t\\)-AdaBoost in Theorem 2 extends to the margin risk, with notations from Theorem 2, via:_\n\n\\[F_{t,\\theta}(H,\\mathcal{S}) \\leq \\left(\\frac{1+\\theta}{1-\\theta}\\right)^{2-t}\\prod_{j=1}^{J}Z_{tj}^{2-t}.
\\tag{13}\\]\n\n(Proof in APP., Section II.3) At a high level, \\(t\\)-AdaBoost brings margin maximization properties similar to AdaBoost's. Digging into (13) reveals an interesting phenomenon for \\(t\\neq 1\\) on how margins are optimized compared to \\(t=1\\). Pick \\(\\theta<0\\), so we focus on those examples for which the classifier \\(H\\) has high confidence in its _wrong_ classification. In this case, factor \\(((1+\\theta)/(1-\\theta))^{2-t}\\) is increasing as a function of \\(t\\in[0,1]\\) (and this pattern is reversed for \\(\\theta>0\\)). In words, the smaller we pick \\(t\\in[0,1]\\), the better the bound in (13), suggesting an increased "focus" of \\(t\\)-AdaBoost on increasing the margins of examples _with low negative margin_ (_e.g._, the most difficult ones) compared to the case \\(t=1\\).\n\n**The tempered exponential loss.** In the same way as AdaBoost introduced the now famous exponential loss, (12) recommends minimizing the normalization coefficient, following (7),\n\n\\[Z_{tj}^{2-t}(\\mu) = \\left\\|\\exp_{t}\\left(\\log_{t}\\boldsymbol{q}_{j}-\\mu\\cdot \\boldsymbol{u}_{j}\\right)\\right\\|_{1/t^{*}}^{1/t^{*}}\\quad\\left( \\text{with }u_{ji}\\doteq y_{i}h_{j}(\\boldsymbol{x}_{i})\\right). \\tag{14}\\]\n\nWe cannot easily unravel the normalization coefficient to reveal an equivalent generalization of the exponential loss, unless we make several assumptions, one being that \\(\\max_{i}|h_{j}(\\boldsymbol{x}_{i})|\\) is small enough for any \\(j\\in[J]\\).
In this case, we end up with an equivalent criterion to minimize, which looks like\n\n\\[F_{t}(H,\\mathcal{S}) = \\frac{1}{m}\\cdot\\sum_{i}\\exp_{t}^{2-t}\\left(-y_{i}H(\\boldsymbol{x}_{i})\\right), \\tag{15}\\]\n\nwhere we have absorbed in \\(H\\) the factor \\(m^{1-t^{*}}\\) appearing in the \\(\\exp_{t}\\) (scaling \\(H\\) by a positive value does not change its empirical risk). This defines a generalization of the exponential loss which we call the _tempered exponential loss_. Notice that one can choose to minimize \\(F_{t}(H,\\mathcal{S})\\) disregarding any constraint on \\(|H|\\).\n\nFigure 1: Plot of \\(K_{t}(z)\\) in (12), \\(t\\in[0,1]\\) (the smaller, the better for convergence).\n\n## 6 A broad family of boosting-compliant proper losses for decision trees\n\n**Losses for class probability estimation.** When it comes to tabular data, it has long been known that some of the best models to linearly combine with boosting are decision trees (DT, [9]). Decision trees, like other domain-partitioning classifiers, are not trained by minimizing a _surrogate loss_ defined over real-valued predictions, but one defined over _class probability estimation_ (CPE, [26]), those estimators being posterior estimates computed at the leaves. Let us introduce a few definitions. A CPE loss \\(\\ell:\\{-1,1\\}\\times[0,1]\\rightarrow\\mathbb{R}\\) is\n\n\\[\\ell(y,u) \\doteq \\llbracket y=1\\rrbracket\\cdot\\ell_{1}(u)+\\llbracket y=-1 \\rrbracket\\cdot\\ell_{-1}(u). \\tag{16}\\]\n\nFunctions \\(\\ell_{1},\\ell_{-1}\\) are called _partial_ losses. The pointwise conditional risk of local guess \\(u\\in[0,1]\\) with respect to a ground truth \\(v\\in[0,1]\\) is:\n\n\\[\\mathsf{L}\\left(u,v\\right) \\doteq v\\cdot\\ell_{1}(u)+(1-v)\\cdot\\ell_{-1}(u). \\tag{17}\\]\n\nA loss is _proper_ iff for any ground truth \\(v\\in[0,1]\\), \\(\\mathsf{L}\\left(v,v\\right)=\\inf_{u}\\mathsf{L}\\left(u,v\\right)\\), and strictly proper iff \\(u=v\\) is the sole minimizer [26].
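As a quick illustration of properness (a minimal sketch of ours, not code from the paper), one can check numerically that the pointwise conditional risk (17) of the square loss is minimized exactly at \\(u=v\\), and that its Bayes risk is \\(\\underline{L}(v)=v(1-v)\\):

```python
def pointwise_risk(u, v, l1, lm1):
    # L(u, v) = v * l1(u) + (1 - v) * l_{-1}(u), as in (17)
    return v * l1(u) + (1 - v) * lm1(u)

# Partial losses of the square loss, a classical strictly proper CPE loss
sq_l1 = lambda u: (1.0 - u) ** 2
sq_lm1 = lambda u: u ** 2

grid = [i / 10000 for i in range(10001)]
for v in (0.1, 0.3, 0.5, 0.9):
    u_star = min(grid, key=lambda u: pointwise_risk(u, v, sq_l1, sq_lm1))
    assert abs(u_star - v) < 1e-3        # the sole minimizer is u = v
    # Bayes risk of the square loss: L(v, v) = v (1 - v)
    assert abs(pointwise_risk(v, v, sq_l1, sq_lm1) - v * (1.0 - v)) < 1e-12
```

A plain grid search suffices here because \\(\\mathsf{L}(\\cdot,v)\\) is a quadratic in \\(u\\) for the square loss.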
The (pointwise) _Bayes_ risk is \\(\\underline{L}(v)\\doteq\\inf_{u}\\mathsf{L}\\left(u,v\\right)\\). The log/cross-entropy loss, square loss, and Matusita loss are examples of CPE losses. One then trains a DT by minimizing the expectation of this loss over leaves' posteriors, \\(\\mathbb{E}_{\\lambda}[\\underline{L}(p_{\\lambda})]\\), \\(p_{\\lambda}\\) being the local proportion of positive examples at leaf \\(\\lambda\\), or equivalently, the local posterior.\n\n**Deriving CPE losses from (ada)boosting.** Recently, it was shown how to derive, in a general way, a CPE loss to train a DT from the minimization of a surrogate loss with a boosting algorithm [16]. In our case, the surrogate would be \\(\\tilde{Z}_{tj}\\) (14) and the boosting algorithm, \\(t\\)-AdaBoost. The principle is simple and fits in four steps: (i) show that a DT can equivalently perform simple linear classifications, (ii) use a weak learner that designs splits and the boosting algorithm to fit the leveraging coefficients, computing those in closed form, (iii) simplify the expression of the loss using those, and (iv) show that the simplified expression is, in fact, a CPE loss. To get (i), we remark that a DT contains a tree (graph). One can associate a real value to each node. To classify an observation, we sum all reals from the root to a leaf and decide on the class based on the sign of the prediction, just like for any real-valued predictor. Suppose we are at a leaf. What kind of weak hypotheses can create splits "in disguise"? Those can be of the form\n\n\\[h_{j}(\\mathbf{x}) \\doteq \\llbracket x_{k}\\geq a_{j}\\rrbracket\\cdot b_{j},\\quad a_{j},b_{j}\\in\\mathbb{R},\\]\n\nwhere the observation variable \\(x_{k}\\) is assumed real valued for simplicity and the test \\(\\llbracket x_{k}\\geq a_{j}\\rrbracket\\) splits the leaf's domain in two non-empty subsets. This creates half of the split.
\\(\\overline{h}_{j}(\\mathbf{x})\\doteq\\llbracket x_{k}<a_{j}\\rrbracket\\cdot\\overline{b}_{j}\\), with \\(\\overline{b}_{j}\\in\\mathbb{R}\\), creates the other half.\n\n## 7 Experiments\n\nIf \\(t>1\\), some extra care is to be put into computations because the weight update becomes unbounded, in a way that is worse than for AdaBoost. Indeed, as can be seen from (8), if \\(\\mu_{j}y_{i}h_{j}(\\mathbf{x}_{i})\\leq-1/(t-1)\\) (the example is badly classified by the current weak hypothesis, assuming wlog \\(\\mu_{j}>0\\)), the weight becomes infinite before renormalization. In our experiments, picking a value of \\(t\\) close to \\(2\\) clearly shows this problem, so to still be able to explore whether \\(t>1\\) can be useful, we picked a value close to \\(1\\), namely \\(t=1.1\\), and checked that in our experiments this produced no such numerical issue. We also considered training clipped and unclipped models.\n\nAll boosting models were trained as ensembles of \\(J=20\\) decision trees (the appendix provides experiments on training bigger sets). Each decision tree is induced using the tempered loss with the corresponding value of \\(t\\) (see Theorem 4), following the classical top-down template, which consists of growing the current heaviest leaf in the tree and picking the best split for the chosen leaf. We implemented \\(t\\)-AdaBoost exactly as in Section 5, including computing leveraging coefficients as suggested. Thus, we do not scale models. More details are provided in the appendix. We also included experiments on a phenomenon highlighted more than a decade ago [15] and fine-tuned more recently [16]: the fact that a convex booster's model is the weakest link when it has to deal with noise in the training data. This is an important question because, while the tempered exponential loss is convex, it does not fit into the blueprint loss of [15, Definition 1], because it is not \\(C^{1}\\) if \\(t\\neq 1\\). One might thus wonder how \\(t\\)-AdaBoost behaves when training data is affected by noise.
Letting \\(\\eta\\) denote the proportion of noisy data in the training sample, we tried \\(\\eta\\in\\{0.0,0.1\\}\\) (the appendix provides experiments on more noise levels). We follow the noise model of [15] and thus independently flip the true label of each example with probability \\(\\eta\\).\n\nFor each run, we recorded the average test error and the average maximum and minimum co-density weight. Table 1 presents a subset of the results obtained on three domains. Table 2 presents a more synthetic view in terms of the statistical significance of the results for \\(t\\neq 1\\) vs. \\(t=1\\) (AdaBoost). The table reports only results for \\(t\\geq 0.6\\) for synthesis purposes. Values \\(t<0.6\\) performed on average slightly worse than the others, _but_ on some domains, as the example of abalone suggests in Table 1 (the plots include all values of \\(t\\) tested in \\([0,1.1]\\)), we clearly got above-par results for such small values of \\(t\\), both in terms of final test error and of fast early convergence to low test error. This comment can be generalized to all values of \\(t\\).\n\nThe weights reveal interesting patterns as well. First, perhaps surprisingly, we never encountered the case where weights switch off, regardless of the value of \\(t\\). The average minimum weight curves of Table 1 generalize to all our tests (see the appendix). This does not rule out the possibility that boosting for a much longer number of iterations might lead to weights switching off/on, but the fact that this does not happen, at least early during boosting, probably comes from the fact that the leveraging coefficients for weights (the \\(\\mu_{j}\\)s) are bounded. Furthermore, their maximal absolute value decreases as \\(t\\) decreases to 0.
Second, there is a pattern that also repeats on the maximum weights, not on all domains but on a large majority of them, and can be seen in abalone and adult in Table 1: the maximum weight of AdaBoost tends to increase much more rapidly compared to \\(t\\)-AdaBoost with \\(t<1\\). In the latter case, we almost systematically observe that the maximum weight tends to be upper-bounded, which is not the case for AdaBoost (the growth of the maximal weight looks almost linear). Having bounded weights could be of help to handle numerical issues of (ada)boosting [14].\n\nOur experiments confirm the boosting nature of \\(t\\)-AdaBoost: more often than not, its convergence is in fact comparable to that of AdaBoost. While this applies broadly for \\(t\\geq 0.6\\), we observed examples where much smaller values (even \\(t=0.0\\)) could yield such fast convergence. Importantly, this applies to clipped models as well, which is worth noticing because it means that attaining a low "boosted" error does not come at the price of learning models with a large range. This is an interesting property: for \\(t=0.0\\), we would be guaranteed that the computation of the clipped prediction always lies in \\([-1,1]\\). Generalizing our comment on small values of \\(t\\) above, we observed that an efficient tuning algorithm for \\(t\\) could obtain very substantial leverage over AdaBoost. Table 2 was crafted for a standard limit \\(p\\)-val of 0.1 and "blurs" the best results that can be obtained. On several domains (winered, abalone, eeg, creditcard, adult), applicable \\(p\\)-values for which we would conclude that some \\(t\\neq 1\\) performs better than \\(t=1\\) drop to between \\(7\\mathrm{E}{-4}\\) and \\(0.05\\). Unsurprisingly, AdaBoost also manages to significantly beat alternative values of \\(t\\) in several cases. Our experiments with training noise (\\(\\eta=0.1\\)) go in the same direction.
Looking at Table 1, one could be tempted to conclude that \\(t\\) slightly smaller than 1.0 may be a better choice than adaboosting (\\(t=1\\)), as suggested by our results for \\(t=0.9\\), but we do not think this yields a general "rule of thumb". There is also no apparent "noise-dependent" pattern that would obviously separate the cases \\(t<1\\) from \\(t=1\\), even where the tempered exponential loss does not fit [15]'s theory. Finally, looking at the results for \\(t>1\\) also yields the same basic conclusions, which suggests that boosting can be attainable outside the range covered by our theory (in particular Theorem 2).\n\nAll this brings us to the experimental conclusion that the question does not reside in opposing the case \\(t\\neq 1\\) to the case \\(t=1\\). Rather, our experiments suggest, pretty much like our theory does, that the actual question resides in how to efficiently _learn_ \\(t\\) on a domain-dependent basis. Our experiments indeed demonstrate that substantial gains could be obtained, to handle overfitting or noise.\n\n## 8 Discussion, limitations and conclusion\n\nAdaBoost is one of the original and simplest boosting algorithms. In this paper we generalized AdaBoost to maintaining a tempered measure over the examples, obtained by minimizing a tempered relative entropy. We kept the setup as simple as possible and therefore focused on generalizing AdaBoost. However, more advanced boosting algorithms have been designed based on relative entropy minimization subject to linear constraints. There are versions that constrain the edges of all past hypotheses to be zero [36]. Also, when the maximum margin of the game is larger than zero, AdaBoost cycles over non-optimal solutions [27]. Later boosting algorithms provably optimize the margin of the solution by adjusting the constraint value on the dual edge away from zero (see e.g. [24]). Finally, the ELRP-Boost algorithm optimizes a trade-off between relative entropy and the edge [35].
We conjecture that all of these orthogonal directions admit generalizations to the tempered case as well and are worth exploring.\n\nThese are theoretical directions that, if successful, would contribute more tools to the design of rigorous boosting algorithms. This is important because boosting suffers from several impediments, not all of which we have mentioned: for example, to get statistical consistency for AdaBoost, it is known that early stopping is mandatory [5]. More generally, non-Lipschitz losses like the exponential loss seem to be harder to handle compared to Lipschitz losses [33] (but they yield in general better convergence rates). The validity of the weak learning assumption of boosting can also be discussed, in particular regarding the negative result of [15], which advocates, beyond just better (ada)boosting, for boosting for _more_ classes of models / architectures [16]. Alongside this direction, we feel that our experiments on noise handling give a preliminary account of the fact that there is no "one \\(t\\) fits all" case, but a much more in-depth analysis is required to elicit / tune a "good" \\(t\\).
This is a crucial issue for noise handling [16], but as we explain in Section 7, this could bring benefits in much wider contexts as well.\n\n\\begin{table}\n\\begin{tabular}{r|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \\hline \\(\\eta\\) & \\multicolumn{8}{c|}{\\(0.0\\)} & \\multicolumn{8}{c}{\\(0.1\\)} \\\\ \\(t\\) & \\multicolumn{2}{c|}{\\(0.6\\)} & \\multicolumn{2}{c|}{\\(0.8\\)} & \\multicolumn{2}{c|}{\\(0.9\\)} & \\multicolumn{2}{c|}{\\(1.1\\)} & \\multicolumn{2}{c|}{\\(0.6\\)} & \\multicolumn{2}{c|}{\\(0.8\\)} & \\multicolumn{2}{c|}{\\(0.9\\)} & \\multicolumn{2}{c}{\\(1.1\\)} \\\\\n[clipped] & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\\\ \\hline \\hline \\(\\#\\)better & 2 & 3 & 1 & 2 & 1 & 3 & & & 1 & 1 & 1 & 2 & 2 & 1 & & \\\\ \\hline \\(\\#\\)equivalent & 5 & 5 & 6 & 6 & 7 & 7 & 6 & 7 & 4 & 8 & 8 & 7 & 8 & 9 & 8 & 10 \\\\ \\hline \\(\\#\\)worse & 3 & 2 & 3 & 2 & 2 & & 4 & 3 & 5 & 1 & 1 & 1 & & & 2 & \\\\ \\hline \\hline \\end{tabular}\n\\end{table}\nTable 2: Outcomes of Student paired \\(t\\)-tests over 10 UCI domains, with training noise \\(\\eta\\in\\{0.0,0.1\\}\\), for \\(t\\in\\{0.6,0.8,0.9,1.0,1.1\\}\\) and with / without clipped models. For each triple (\\(\\eta\\), \\(t\\), [clipped]), we give the number of domains for which the corresponding setting of \\(t\\)-AdaBoost is statistically better than AdaBoost (\\(\\#\\)better), the number for which it is statistically worse (\\(\\#\\)worse), and the number for which we cannot reject the assumption of identical performances (\\(\\#\\)equivalent). Threshold \\(p\\)-val = 0.1.\n\n## Acknowledgments\n\nThe authors thank the reviewers for numerous comments that helped improve the paper's content.\n\n## References\n\n* [1] N. Alon, A. Gonen, E. Hazan, and S. Moran. Boosting simple learners. In _STOC'21_, 2021.\n* [2] S.-I. Amari. _Information Geometry and Its Applications_. Springer-Verlag, Berlin, 2016.\n* [3] E. Amid, F. Nielsen, R. Nock, and M.-K. Warmuth. Optimal transport with tempered exponential measures. _CoRR_, abs/2309.04015, 2023.\n* [4] E. Amid, R. Nock, and M.-K. Warmuth.
Clustering above exponential families with tempered exponential measures. In _26\\({}^{th}\\) AISTATS_, 2023.\n* [5] P. Bartlett and M. Traskin. Adaboost is consistent. In _NIPS*19_, 2006.\n* [6] L. Breiman, J. H. Freidman, R. A. Olshen, and C. J. Stone. _Classification and regression trees_. Wadsworth, 1984.\n* [7] M. Collins, R. Schapire, and Y. Singer. Logistic regression, adaboost and Bregman distances. In _Proc. of the 13 \\({}^{th}\\) International Conference on Computational Learning Theory_, pages 158-169, 2000.\n* [8] Y. Freund and R. E. Schapire. A Decision-Theoretic generalization of on-line learning and an application to Boosting. _J. Comp. Syst. Sc._, 55:119-139, 1997.\n* [9] J. Friedman, T. Hastie, and R. Tibshirani. Additive Logistic Regression : a Statistical View of Boosting. _Ann. of Stat._, 28:337-374, 2000.\n* [10] M.J. Kearns. Thoughts on hypothesis boosting, 1988. ML class project.\n* [11] M.J. Kearns and Y. Mansour. On the boosting ability of top-down decision tree learning algorithms. In _Proc. of the 28 \\({}^{th}\\) ACM STOC_, pages 459-468, 1996.\n* [12] J. Kivinen and M.-K. Warmuth. Boosting as entropy projection. In _COLT'99_, pages 134-144, 1999.\n* [13] D.-E. Knuth. Two notes on notation. _The American Mathematical Monthly_, 99(5):403-422, 1992.\n* [14] R. Kohavi. Improving accuracy by voting classification algorithms: Boosting, bagging, and variants. In _Workshop on Computation-Intensive Machine Learning Techniques_, 1998.\n* [15] P.-M. Long and R.-A. Servedio. Random classification noise defeats all convex potential boosters. _MLJ_, 78(3):287-304, 2010.\n* [16] Y. Mansour, R. Nock, and R.-C. Williamson. Random classification noise does not defeat all convex potential boosters irrespective of model choice. In _40\\({}^{th}\\) ICML_, 2023.\n* [17] J. Naudts. _Generalized thermostatistics_. Springer, 2011.\n* [18] L. Nivanen, A. Le Mehaute, and Q.-A. Wang. Generalized algebra within a nonextensive statistics. 
_Reports on Mathematical Physics_, 52:437-444, 2003.\n* [19] R. Nock and F. Nielsen. A Real Generalization of discrete AdaBoost. _Artificial Intelligence_, 171:25-41, 2007.\n* [20] R. Nock and F. Nielsen. On the efficient minimization of classification-calibrated surrogates. In _NIPS*21_, pages 1201-1208, 2008.\n* [21] R. Nock and F. Nielsen. The phylogenetic tree of Boosting has a bushy carriage but a single trunk. _PNAS_, 117:8692-8693, 2020.\n* [22] R. Nock and R.-C. Williamson. Lossless or quantized boosting with integer arithmetic. In _36\\({}^{th}\\) ICML_, pages 4829-4838, 2019.\n* [23] J. R. Quinlan. _C4.5 : programs for machine learning_. Morgan Kaufmann, 1993.\n* [24] G. Ratsch and M.-K. Warmuth. Efficient margin maximizing with boosting. _JMLR_, pages 2131-2152, december 2005.\n* [25] M.-D. Reid and R.-C. Williamson. Composite binary losses. _JMLR_, 11:2387-2422, 2010.\n\n* [26] M.-D. Reid and R.-C. Williamson. Information, divergence and risk for binary experiments. _JMLR_, 12:731-817, 2011.\n* [27] C. Rudin, I. Daubechies, and R.-E. Schapire. Dynamics of adaboost: cyclic behavior and convergence of margins. _JMLR_, pages 1557-1595, December 2004.\n* [28] L.-J. Savage. Elicitation of personal probabilities and expectations. _J. of the Am. Stat. Assoc._, pages 783-801, 1971.\n* [29] R. E. Schapire. The strength of weak learnability. _MLJ_, pages 197-227, 1990.\n* [30] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. _MLJ_, 37:297-336, 1999.\n* [31] T. Sypherd, R. Nock, and L. Sankar. Being properly improper. In _39\\({}^{th}\\) ICML_, 2022.\n* [32] M. Telgarsky. A primal-dual convergence analysis of boosting. _JMLR_, 13:561-606, 2012.\n* [33] M. Telgarsky. Boosting with the logistic loss is consistent. In _26 \\({}^{th}\\) COLT_, pages 911-965, 2013.\n* [34] L. G. Valiant. A theory of the learnable. _Communications of the ACM_, 27:1134-1142, 1984.\n* [35] M.-K. Warmuth, K.-A. Glocer, and S.-V.-N. 
Vishwanathan. Entropy regularized LPBoost. In _Algorithmic Learning Theory_, pages 256-271. Springer Berlin Heidelberg, 2008.\n* [36] M.-K. Warmuth, J. Liao, and G. Ratsch. Totally corrective boosting algorithms that maximize the margin. In _ICML'06: Proceedings of the 23rd International Conference on Machine Learning_, pages 1001-1008, 2006.\n\n**Appendix**\n\n**Abstract**\n\nThis is the Appendix to the paper "Boosting with Tempered Exponential Measures". To differentiate from the numbering in the main file, the numbering of Theorems, Lemmata, and Definitions is letter-based (A, B, ...).\n\n## Table of contents\n\n**A short primer on Tempered Exponential Measures**\n\n**Supplementary material on proofs**\n\n\\(\\leftrightarrow\\) Proof of Theorem 1\n\n\\(\\leftrightarrow\\) Proof of Theorem 2\n\n\\(\\leftrightarrow\\) Proof of Theorem 3\n\n\\(\\leftrightarrow\\) Proof of Theorem 4\n\n**Supplementary material on experiments**\n\n## A short primer on Tempered Exponential Measures\n\nWe describe here the minimal amount of material necessary to understand how our approach to boosting connects to these measures. We refer to [4] for more details. With a slight abuse of notation, we define the perspective transforms \\((\\log_{t})^{*}(z)\\doteq t^{*}\\cdot\\log_{t^{*}}(z/t^{*})\\) and \\((\\exp_{t})^{*}(z)=t^{*}\\cdot\\exp_{t^{*}}(z/t^{*})\\).
Recall that \\(t^{*}\\doteq 1/(2-t)\\).\n\n**Definition A**.: _[_4_]_ _A tempered exponential measure (tem) family is a set of unnormalized densities in which each element admits the following canonical expression:_\n\n\\[q_{t|\\boldsymbol{\\theta}}(\\boldsymbol{x})\\doteq\\frac{\\exp_{t}( \\boldsymbol{\\theta}^{\\top}\\boldsymbol{\\varphi}(\\boldsymbol{x}))}{\\exp_{t}(G_{t }(\\boldsymbol{\\theta}))}=\\exp_{t}(\\boldsymbol{\\theta}^{\\top}\\boldsymbol{ \\varphi}(\\boldsymbol{x})\\ominus_{t}G_{t}(\\boldsymbol{\\theta}))\\quad\\left(a \\ominus_{t}b\\doteq\\frac{a-b}{1+(1-t)b}\\right), \\tag{21}\\]\n\n_where \\(\\boldsymbol{\\theta}\\) is the element's natural parameter, \\(\\boldsymbol{\\varphi}(\\boldsymbol{x})\\) is the sufficient statistics and_\n\n\\[G_{t}(\\boldsymbol{\\theta}) = (\\log_{t})^{*}\\int(\\exp_{t})^{*}(\\boldsymbol{\\theta}^{\\top} \\boldsymbol{\\varphi}(\\boldsymbol{x}))\\mathrm{d}\\xi\\]\n\n_is the (convex) cumulant, \\(\\xi\\) being a base measure (implicit)._\n\nExcept for \\(t=1\\) (which reduces a tem family to a classical exponential family), the total mass of a tem is not 1 (but it has an elegant closed form expression [4]). However, the exponentiated \\(q_{t|\\boldsymbol{\\theta}}^{1/t^{*}}\\) does sum to 1. In the discrete case, this justifies extending the classical simplex to what we denote as the co-simplex.\n\n**Definition B**.: _The co-simplex of \\(\\mathbb{R}^{m}\\), \\(\\tilde{\\Delta}_{m}\\), is defined as \\(\\tilde{\\Delta}_{m}\\doteq\\{\\boldsymbol{q}\\in\\mathbb{R}^{m}:\\boldsymbol{q}\\geq \\boldsymbol{0}\\wedge\\boldsymbol{1}^{\\top}\\boldsymbol{q}^{1/t^{*}}=1\\}\\)._\n\nThe connection between \\(t\\)-AdaBoost's update and a tem's is immediate from the update equation ((4) in the main file).
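To make Definition B concrete, here is a minimal numerical sketch (ours, not the paper's code; the closed form \\(\\exp_{t}(z)=[1+(1-t)z]_{+}^{1/(1-t)}\\) for \\(t\\neq 1\\) is the standard one from the tempered-calculus literature and is assumed here). It builds a co-simplex element from an ordinary probability vector and checks \\(\\boldsymbol{1}^{\\top}\\boldsymbol{q}^{1/t^{*}}=1\\):

```python
def exp_t(z, t):
    # Assumed closed form: exp_t(z) = [1 + (1 - t) z]_+^{1/(1-t)} for t != 1
    return max(0.0, 1.0 + (1.0 - t) * z) ** (1.0 / (1.0 - t))

def log_t(z, t):
    # Inverse of exp_t on z > 0: log_t(z) = (z^{1-t} - 1) / (1 - t)
    return (z ** (1.0 - t) - 1.0) / (1.0 - t)

t = 0.5
t_star = 1.0 / (2.0 - t)

w = [0.2, 1.0, 3.0, 0.7]
p = [wi / sum(w) for wi in w]      # co-density: an ordinary probability vector
q = [pi ** t_star for pi in p]    # corresponding co-simplex element (Definition B)

# q itself does not sum to 1, but its (1/t*)-exponentiation does
assert abs(sum(qi ** (1.0 / t_star) for qi in q) - 1.0) < 1e-12
# exp_t and log_t are mutual inverses on positive reals
assert all(abs(exp_t(log_t(qi, t), t) - qi) < 1e-12 for qi in q)
```

Raising \\(p_{i}\\) to the power \\(t^{*}\\) is exactly the simplex-to-co-simplex map implied by Definition B.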
We can show that \\(\\tilde{\\Delta}_{m}\\) can also be represented via tems.\n\n**Lemma A**.: \\(\\tilde{\\Delta}_{m}\\) _is a (discrete) family of tempered exponential measures._\n\nProof.: We proceed as in [2, Section 2.2.2] for exponential families: let \\(\\boldsymbol{q}\\in\\tilde{\\Delta}_{m}\\), which we write\n\n\\[q(n) \\doteq \\sum_{i\\in[m]}q_{i}\\cdot\\llbracket i=n\\rrbracket,n\\in[m]. \\tag{22}\\]\n\n\\(\\llbracket\\pi\\rrbracket\\), the Iverson bracket [13], takes value 1 if Boolean predicate \\(\\pi\\) is true (and 0 otherwise). We create \\(m-1\\) natural parameters and the cumulant,\n\n\\[\\theta_{i}\\doteq\\log_{t}\\frac{q_{i}}{q_{m}},i\\in[m-1]\\quad;\\quad G_{t}( \\boldsymbol{\\theta})\\doteq\\log_{t}\\frac{1}{q_{m}},\\]\n\nand end up with (22) also matching the atom mass function\n\n\\[q(n) = \\frac{\\exp_{t}\\left(\\sum_{i\\in[m-1]}\\theta_{i}\\cdot\\llbracket i=n \\rrbracket\\right)}{\\exp_{t}G_{t}(\\boldsymbol{\\theta})},\\]\n\nwhich clearly defines a tempered exponential measure over \\([m]\\). This ends the proof of Lemma A.\n\n## Supplementary material on proofs\n\n### Proof of Theorem 1\n\nTo improve readability, we drop the dependence on \\(t\\) in the normalization coefficient \\(Z\\). We use notations from [4, proof of Theorem 3.2] and denote the Lagrangian\n\n\\[\\mathcal{L} = \\Delta(\\tilde{\\mathbf{q}}\\|\\mathbf{q})+\\lambda\\left(\\sum_{i}\\tilde{q}_{i}^{1/t^{*}}-1\\right)-\\sum_{i}\\nu_{i}\\tilde{q}_{i}+\\mu\\sum_{i}\\tilde{q}_{i}u_{i}, \\tag{23}\\]\n\nwhich yields \\(\\partial\\mathcal{L}/\\partial\\tilde{q}_{i}=\\log_{t}\\tilde{q}_{i}-\\log_{t}q_{i} +\\lambda\\tilde{q}_{i}^{1-t}-\\nu_{i}+\\mu u_{i}\\) (\\(\\lambda\\) absorbs factor \\(2-t\\)), and, rearranging (absorbing factor \\(1-t\\) in \\(\\nu_{i}\\)),\n\n\\[(1+(1-t)\\lambda)\\tilde{q}_{i}^{1-t} = \\nu_{i}+1+(1-t)(\\log_{t}q_{i}-\\mu u_{i}),\\forall i\\in[m].
\\tag{24}\\]\n\nWe see that \\(\\lambda\\neq-1/(1-t)\\), since otherwise the Lagrangian would lose its dependence on the unknown. In fact, the solution necessarily has \\(1+(1-t)\\lambda>0\\). To see this, we distinguish two cases: (i) if some \\(u_{k}=0\\), then since \\(\\log_{t}q_{k}\\geq-1/(1-t)\\) there would be no solution to (24) if \\(1+(1-t)\\lambda<0\\) because of the KKT conditions \\(\\nu_{i}\\geq 0,\\forall i\\in[m]\\); (ii) otherwise, if all \\(u_{k}\\neq 0,\\forall k\\in[m]\\), then there must be two coordinates of different signs, as otherwise there is no solution to our problem (3) (main file; indeed, we must have \\(\\tilde{\\mathbf{q}}\\geq 0\\) because of the co-simplex constraint). Thus, there exists at least one coordinate \\(k\\in[m]\\) for which \\(-(1-t)\\mu u_{k}>0\\), and since \\(\\log_{t}q_{k}\\geq-1/(1-t)\\) (definition of \\(\\log_{t}\\)) and \\(\\nu_{k}\\geq 0\\) (KKT conditions), the RHS of (24) for \\(i=k\\) is \\(>0\\), preventing \\(1+(1-t)\\lambda<0\\) in the LHS.\n\nWe thus have \\(1+(1-t)\\lambda>0\\). The KKT conditions \\((\\nu_{i}\\geq 0,\\nu_{i}\\tilde{q}_{i}=0,\\forall i\\in[m])\\) yield the following: \\(1+(1-t)(\\log_{t}q_{i}-\\mu u_{i})>0\\) implies \\(\\nu_{i}=0\\), and \\(1+(1-t)(\\log_{t}q_{i}-\\mu u_{i})\\leq 0\\) implies \\(\\tilde{q}_{i}^{1-t}=0\\), so we get the necessary form for the optimum:\n\n\\[\\tilde{q}_{i} = \\frac{\\exp_{t}\\left(\\log_{t}q_{i}-\\mu u_{i}\\right)}{\\exp_{t}\\lambda} \\tag{25}\\] \\[= \\frac{q_{i}\\otimes_{t}\\exp_{t}(-\\mu u_{i})}{Z_{t}},\\]\n\nwhere \\(\\lambda\\) or \\(Z_{t}\\doteq\\exp_{t}\\lambda\\) ensures normalisation for the co-density. Note that we have a simplified expression for the co-density:\n\n\\[\\tilde{p}_{i} = \\frac{p_{i}\\otimes_{t^{*}}\\exp_{t^{*}}(-\\mu u_{i}/t^{*})}{Z_{t}^{*}}, \\tag{26}\\]\n\nwith \\(Z_{t}^{*}\\doteq Z_{t}^{1/t^{*}}=\\sum_{i}p_{i}\\otimes_{t^{*}}\\exp_{t^{*}}(-\\mu u_{i}/t^{*})\\). 
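As a quick sanity check (our own sketch, with made-up values), the update (25) keeps the new weights on the co-simplex once \\(Z_{t}\\) is chosen as the normalization of the co-density:

```python
import numpy as np

def exp_t(z, t):
    return np.maximum(1.0 + (1.0 - t) * z, 0.0) ** (1.0 / (1.0 - t))

def log_t(q, t):
    return (q ** (1.0 - t) - 1.0) / (1.0 - t)

t = 0.6
t_star = 1.0 / (2.0 - t)
m = 4
q = np.full(m, (1.0 / m) ** t_star)      # uniform co-simplex point
u = np.array([0.8, -0.4, 0.3, -0.7])     # e.g. u_i = y_i h(x_i); illustrative values
mu = 0.25

num = exp_t(log_t(q, t) - mu * u, t)              # numerator of (25)
Z = np.sum(num ** (1.0 / t_star)) ** t_star       # Z_t = (sum_i num_i^{2-t})^{1/(2-t)}
q_new = num / Z

print(np.isclose(np.sum(q_new ** (1.0 / t_star)), 1.0))  # True: on the co-simplex again
```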
For the analytic form in (25), we can simplify the Lagrangian to a dual form that depends on \\(\\mu\\) solely:\n\n\\[\\mathcal{D}(\\mu) = \\Delta(\\tilde{\\mathbf{q}}(\\mu)\\|\\mathbf{q})+\\mu\\sum_{i}\\tilde{q}_{i}(\\mu )u_{i}. \\tag{27}\\]\n\nThe proof of (5) (main file) is based on a key Lemma.\n\n**Lemma B**.: _For any \\(\\tilde{\\mathbf{q}}\\) having form (25) such that \\(\\tilde{\\mathbf{q}}^{\\top}\\mathbf{u}=0\\), \\(\\mathcal{D}(\\mu)=-\\log_{t}Z_{t}(\\mu)\\)._\n\nProof.: For any \\(\\tilde{\\mathbf{q}}\\) having form (25), denote\n\n\\[[m]_{\\ast} \\doteq \\{i:\\tilde{q}_{i}\\neq 0\\}. \\tag{28}\\]We first compute (still using \\(\\lambda\\doteq\\log_{t}Z_{t}(\\mu)\\) for short):\n\n\\[A \\doteq \\sum_{i}\\tilde{q}_{i}\\cdot\\log_{t}\\tilde{q}_{i} \\tag{29}\\] \\[= \\sum_{i\\in[m]_{\\bullet}}\\tilde{q}_{i}\\cdot\\log_{t}\\left(\\frac{ \\exp_{t}\\left(\\log_{t}q_{i}-\\mu u_{i}\\right)}{\\exp_{t}\\lambda}\\right)\\] \\[= \\sum_{i\\in[m]_{\\bullet}}\\tilde{q}_{i}\\cdot\\left(\\frac{1}{1-t} \\cdot\\left[\\frac{1+(1-t)(\\log_{t}q_{i}-\\mu u_{i})}{1+(1-t)\\lambda}-1\\right]\\right)\\] \\[= \\frac{1}{1-t}\\cdot\\sum_{i\\in[m]_{\\bullet}}\\tilde{q}_{i}\\cdot \\left(\\frac{q_{i}^{1-t}-(1-t)\\mu u_{i}}{1+(1-t)\\lambda}\\right)-\\frac{1}{1-t} \\cdot\\sum_{i\\in[m]_{\\bullet}}\\tilde{q}_{i}\\] \\[= -\\frac{\\mu}{1+(1-t)\\lambda}\\cdot\\sum_{i\\in[m]_{\\bullet}}\\tilde{q }_{i}u_{i}+\\frac{1}{(1-t)(1+(1-t)\\lambda)}\\cdot\\sum_{i\\in[m]_{\\bullet}}\\tilde {q}_{i}q_{i}^{1-t}-\\frac{1}{1-t}\\cdot\\sum_{i\\in[m]_{\\bullet}}\\tilde{q}_{i}\\] \\[= \\underbrace{-\\frac{\\mu}{1+(1-t)\\lambda}\\cdot\\tilde{\\boldsymbol{q }}^{\\top}\\boldsymbol{u}}_{\\doteq B}+\\underbrace{\\frac{1}{(1-t)(1+(1-t)\\lambda )}\\cdot\\sum_{i\\in[m]}\\tilde{q}_{i}q_{i}^{1-t}}_{\\doteq C}-\\underbrace{\\frac{1} {1-t}\\cdot\\sum_{i\\in[m]}\\tilde{q}_{i}}_{\\doteq D}.\\]\n\nRemark that in the last identity, we have put back summations over the complete set \\([m]\\) of indices. 
We note that \\(B=0\\) because \\(\\tilde{\\boldsymbol{q}}^{\\top}\\boldsymbol{u}=0\\). We then remark that without replacing the expression of \\(\\tilde{\\boldsymbol{q}}\\), we have in general for any \\(\\tilde{\\boldsymbol{q}}\\in\\bar{\\Delta}_{m}\\):\n\n\\[E \\doteq \\sum_{i\\in[m]}\\tilde{q}_{i}\\cdot\\left(\\log_{t}\\tilde{q}_{i}-\\log _{t}q_{i}\\right)\\] \\[= \\sum_{i\\in[m]}\\tilde{q}_{i}\\cdot\\left(\\frac{1}{1-t}\\cdot\\left( \\tilde{q}_{i}^{1-t}-1\\right)-\\frac{1}{1-t}\\cdot\\left(q_{i}^{1-t}-1\\right)\\right)\\] \\[= \\frac{1}{1-t}\\cdot\\sum_{i\\in[m]}\\tilde{q}_{i}^{2-t}-\\frac{1}{1-t }\\cdot\\sum_{i\\in[m]}\\tilde{q}_{i}q_{i}^{1-t}\\] \\[= \\frac{1}{1-t}\\cdot\\left(1-\\sum_{i\\in[m]}\\tilde{q}_{i}q_{i}^{1-t} \\right)\\!,\\]\n\nand we can check that for any \\(\\tilde{\\boldsymbol{q}},\\boldsymbol{q}\\in\\bar{\\Delta}_{m}\\), \\(E=\\Delta(\\tilde{\\boldsymbol{q}}\\|\\boldsymbol{q})\\). We then develop \\(\\Delta(\\tilde{\\boldsymbol{q}}\\|\\boldsymbol{q})\\) with a partial replacement of \\(\\tilde{\\boldsymbol{q}}\\) by its expression:\n\n\\[\\Delta(\\tilde{\\boldsymbol{q}}\\|\\boldsymbol{q}) = A-\\sum_{i}\\tilde{q}_{i}\\log_{t}q_{i}\\] \\[= A-\\frac{1}{1-t}\\cdot\\sum_{i}\\tilde{q}_{i}q_{i}^{1-t}+\\frac{1}{1 -t}\\cdot\\sum_{i}\\tilde{q}_{i}\\] \\[= C-\\frac{1}{1-t}\\cdot\\sum_{i}\\tilde{q}_{i}q_{i}^{1-t}\\] \\[= \\frac{1}{1-t}\\cdot\\left(\\frac{1}{1+(1-t)\\lambda}-1\\right)\\cdot \\sum_{i}\\tilde{q}_{i}q_{i}^{1-t}\\] \\[= -\\frac{\\lambda}{1+(1-t)\\lambda}\\cdot\\sum_{i}\\tilde{q}_{i}q_{i}^{1 -t}\\] \\[= -\\frac{\\lambda}{1+(1-t)\\lambda}\\cdot\\left(1-(1-t)\\cdot\\Delta( \\tilde{\\boldsymbol{q}}\\|\\boldsymbol{q})\\right).\\]\n\nRearranging gives that for any \\(\\tilde{\\boldsymbol{q}},\\boldsymbol{q}\\in\\bar{\\Delta}_{m}\\) such that (i) \\(\\tilde{\\boldsymbol{q}}\\) has the form (25) for some \\(\\mu\\in\\mathbb{R}\\) and (ii) 
\\(\\tilde{\\boldsymbol{q}}^{\\top}\\boldsymbol{u}=0\\),\n\n\\[\\Delta(\\tilde{\\boldsymbol{q}}\\|\\boldsymbol{q}) = -\\lambda\\] \\[= -\\log_{t}(Z_{t}),\\]\n\nas claimed. This ends the proof of Lemma B. \n\nWe thus get from the definition of the dual that \\(\\mu=\\arg\\max-\\log_{t}Z_{t}(\\mu)=\\arg\\min Z_{t}(\\mu)\\). We have the explicit form for \\(Z_{t}\\):\n\n\\[Z_{t}(\\mu) = \\left(\\sum_{i}\\exp_{t}^{2-t}\\left(\\log_{t}q_{i}-\\mu u_{i}\\right)\\right)^{\\frac{1}{2-t}}\\] \\[= \\left(\\sum_{i\\in[m]_{\\bullet}}\\exp_{t}^{2-t}\\left(\\log_{t}q_{i}-\\mu u_{i}\\right)\\right)^{\\frac{1}{2-t}},\\]\n\nwhere \\([m]_{\\bullet}\\) is defined in (28). We remark that the last expression is differentiable in \\(\\mu\\), and get\n\n\\[Z_{t}^{\\prime}(\\mu) = \\frac{1}{2-t}\\cdot\\left(\\sum_{i\\in[m]_{\\bullet}}\\exp_{t}^{2-t}\\left(\\log_{t}q_{i}-\\mu u_{i}\\right)\\right)^{-\\frac{1-t}{2-t}} \\tag{30}\\] \\[\\cdot(2-t)\\sum_{i\\in[m]_{\\bullet}}\\exp_{t}^{1-t}\\left(\\log_{t}q_{i}-\\mu u_{i}\\right)\\cdot\\exp_{t}^{t}\\left(\\log_{t}q_{i}-\\mu u_{i}\\right)\\cdot(-u_{i})\\] \\[= -Z_{t}^{t-1}\\cdot\\sum_{i\\in[m]_{\\bullet}}\\exp_{t}\\left(\\log_{t}q_{i}-\\mu u_{i}\\right)\\cdot u_{i}\\] \\[= -Z_{t}^{t}\\cdot\\sum_{i\\in[m]_{\\bullet}}\\tilde{q}_{i}u_{i}\\] \\[= -Z_{t}^{t}\\cdot\\tilde{\\boldsymbol{q}}^{\\top}\\boldsymbol{u},\\]\n\nso\n\n\\[\\frac{\\partial-\\log_{t}(Z_{t})}{\\partial\\mu} = -Z_{t}^{-t}Z_{t}^{\\prime}\\] \\[= \\tilde{\\boldsymbol{q}}(\\mu)^{\\top}\\boldsymbol{u},\\]\n\nand we get that any critical point of \\(Z_{t}(\\mu)\\) satisfies \\(\\tilde{\\boldsymbol{q}}(\\mu)^{\\top}\\boldsymbol{u}=0\\). A sufficient condition for there to be just one critical point, which is then the sought minimum, is the strict convexity of \\(Z_{t}(\\mu)\\). 
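Numerically, the characterization above is easy to observe: \\(Z_{t}(\\mu)\\) has a single minimizer \\(\\mu^{\\star}\\), at which the updated weights are orthogonal to \\(\\boldsymbol{u}\\). Below is our own sketch (illustrative values), locating the critical point by bisection on \\(\\tilde{\\boldsymbol{q}}(\\mu)^{\\top}\\boldsymbol{u}\\):

```python
import numpy as np

def exp_t(z, t):
    return np.maximum(1.0 + (1.0 - t) * z, 0.0) ** (1.0 / (1.0 - t))

def log_t(q, t):
    return (q ** (1.0 - t) - 1.0) / (1.0 - t)

t = 0.5
m = 5
q = np.full(m, (1.0 / m) ** (1.0 / (2.0 - t)))   # uniform initial weights
u = np.array([0.9, -0.6, 0.4, -0.2, 0.1])        # mixed signs, so a solution exists

def g(mu):
    # g(mu) = qtilde(mu)^T u, i.e. the derivative of -log_t Z_t at mu
    num = exp_t(log_t(q, t) - mu * u, t)
    qt = num / np.sum(num ** (2.0 - t)) ** (1.0 / (2.0 - t))
    return float(np.dot(qt, u))

# g changes sign on [-3, 3] for these values; bisect to find the critical point
lo, hi = -3.0, 3.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
mu_star = 0.5 * (lo + hi)
print(abs(g(mu_star)) < 1e-6)  # True: qtilde(mu*)^T u = 0 at the minimizer of Z_t
```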
The next Lemma proves that this is the case for all \\(t>0\\).\n\n**Lemma C**.: \\(Z_{t}^{\\prime\\prime}(\\mu)\\geqslant t\\cdot Z_{t}(\\mu)^{2t-1}(\\tilde{\\boldsymbol{q}}(\\mu)^{\\top}\\boldsymbol{u})^{2}\\)_._\n\nProof.: After simplifications, we have\n\n\\[Z_{t}^{3-2t}\\cdot Z_{t}^{\\prime\\prime} = (t-1)\\cdot\\left(\\sum_{i\\in[m]}\\exp_{t}\\left(\\log_{t}q_{i}-\\mu u_{i}\\right)\\cdot u_{i}\\right)^{2}\\] \\[+\\left(\\sum_{i\\in[m]}\\exp_{t}^{2-t}\\left(\\log_{t}q_{i}-\\mu u_{i}\\right)\\right)\\cdot\\left(\\sum_{i\\in[m]}\\exp_{t}^{t}\\left(\\log_{t}q_{i}-\\mu u_{i}\\right)\\cdot u_{i}^{2}\\right)\\] \\[= (t-1)\\cdot\\sum_{i,k\\in[m]}Q_{i}Q_{k}u_{i}u_{k}+\\sum_{i,k\\in[m]}Q_{i}^{2-t}Q_{k}^{t}u_{k}^{2}, \\tag{33}\\]\n\nwhere we have let \\(Q_{i}\\doteq\\exp_{t}\\left(\\log_{t}q_{i}-\\mu u_{i}\\right)\\geqslant 0\\). Since \\(a^{2}+b^{2}\\geqslant 2ab\\), we note that for any \\(i\\neq k\\),\n\n\\[Q_{i}^{2-t}Q_{k}^{t}u_{k}^{2}+Q_{k}^{2-t}Q_{i}^{t}u_{i}^{2} \\geqslant 2\\sqrt{Q_{i}^{2-t}Q_{k}^{t}Q_{k}^{2-t}Q_{i}^{t}}u_{i}u_{k} \\tag{34}\\] \\[=2Q_{i}Q_{k}u_{i}u_{k},\\]\n\nso we split (33) into diagonal (\\(i=k\\)) and off-diagonal (\\(i<k\\)) terms and get\n\n\\[Z_{t}^{3-2t}\\cdot Z_{t}^{\\prime\\prime} = (t-1)\\cdot\\sum_{i\\in[m]}Q_{i}^{2}u_{i}^{2}+\\sum_{i\\in[m]}Q_{i}^{2-t}Q_{i}^{t}u_{i}^{2} \\tag{35}\\] \\[+\\sum_{i,k\\in[m],i<k}\\left(2(t-1)\\cdot Q_{i}Q_{k}u_{i}u_{k}+Q_{i}^{2-t}Q_{k}^{t}u_{k}^{2}+Q_{k}^{2-t}Q_{i}^{t}u_{i}^{2}\\right)\\] \\[\\geqslant t\\cdot\\sum_{i\\in[m]}Q_{i}^{2}u_{i}^{2}+2t\\cdot\\sum_{i,k\\in[m],i<k}Q_{i}Q_{k}u_{i}u_{k}\\quad\\text{(using (34))}\\] \\[= t\\cdot\\left(\\sum_{i\\in[m]}Q_{i}u_{i}\\right)^{2}. \\tag{36}\\]\n\nSince \\(Q_{i}=Z_{t}\\tilde{q}_{i}\\) and \\(t>0\\), we get the statement of Lemma C after reorganising (36). \n\nLemma C shows the strict convexity of \\(Z_{t}(\\mu)\\) for any \\(t>0\\). 
The case \\(t=0\\) follows by direct differentiation: we get after simplification\n\n\\[Z_{t}^{\\prime\\prime}(\\mu) = \\frac{\\left(\\sum_{i\\in[m]}u_{i}^{2}\\right)\\cdot\\left(\\sum_{i\\in[m]}(q_{i}-\\mu u_{i})^{2}\\right)-\\left(\\sum_{i\\in[m]}(q_{i}-\\mu u_{i})u_{i}\\right)^{2}}{\\left(\\sum_{i\\in[m]}(q_{i}-\\mu u_{i})^{2}\\right)^{\\frac{3}{2}}}.\\]\n\nThe Cauchy-Schwarz inequality allows us to conclude that \\(Z_{t}^{\\prime\\prime}(\\mu)\\geqslant 0\\), and that it is in fact \\(>0\\) _unless_ \\(\\tilde{\\mathbf{q}}\\) is collinear to \\(\\mathbf{u}\\). This completes the proof of Theorem 1.\n\n### Proof of Theorem 2\n\nThe proof involves several arguments, organized into several subsections. Some are intentionally more general than what is strictly needed for the proof of the Theorem.\n\n#### ii.ii.2.1 Clipped summations\n\nFor any \\(\\delta\\geqslant 0\\), we define clipped summations of the sequence of ordered elements \\(v_{1},v_{2},...,v_{J}\\): if \\(J>1\\),\n\n\\[{}^{(\\delta)}\\!\\sum_{j=1}^{J}v_{j}\\doteq\\min\\left\\{v_{J}+{}^{(\\delta)}\\!\\sum_{j=1}^{J-1}v_{j},\\delta\\right\\}\\quad,\\quad{}_{(-\\delta)}\\!\\sum_{j=1}^{J}v_{j}\\doteq\\max\\left\\{v_{J}+{}_{(-\\delta)}\\!\\sum_{j=1}^{J-1}v_{j},-\\delta\\right\\}, \\tag{37}\\]\n\nand the base case (\\(J=1\\)) is obtained by replacing the inner sum by 0. We also define the doubly clipped summation:\n\n\\[{}^{(\\delta)}_{(-\\delta)}\\!\\sum_{j=1}^{J}v_{j}\\doteq\\max\\left\\{\\min\\left\\{v_{J}+{}^{(\\delta)}_{(-\\delta)}\\!\\sum_{j=1}^{J-1}v_{j},\\delta\\right\\},-\\delta\\right\\},\\]\n\nwith the same convention for the base case. We prove a series of simple but useful properties of the clipped summation.\n\n**Lemma D**.: _The following properties hold true for clipped summation:_\n\n1. _(doubly) clipped summations are noncommutative;_2. 
_(doubly) clipped summations are ordinary summation in the limit: for any_ \\(J\\geqslant 1\\) _and any sequence_ \\(v_{1},v_{2},...,v_{J}\\)_,_ \\[\\lim_{\\delta\\rightarrow+\\infty}{}^{(\\delta)}\\!\\sum_{j=1}^{J}v_{j}=\\lim_{\\delta\\rightarrow+\\infty}{}_{(-\\delta)}\\!\\sum_{j=1}^{J}v_{j}=\\lim_{\\delta\\rightarrow+\\infty}{}^{(\\delta)}_{(-\\delta)}\\!\\sum_{j=1}^{J}v_{j}=\\sum_{j=1}^{J}v_{j}\\]\n3. _clipped summations sandwich ordinary summation and the doubly clipped summation: for any_ \\(\\delta\\geqslant 0\\)_, any_ \\(J\\geqslant 1\\) _and any sequence_ \\(v_{1},v_{2},...,v_{J}\\)_,_ \\[{}^{(\\delta)}\\!\\sum_{j=1}^{J}v_{j}\\leqslant\\sum_{j=1}^{J}v_{j}\\leqslant{}_{(-\\delta)}\\!\\sum_{j=1}^{J}v_{j}\\quad;\\quad{}^{(\\delta)}\\!\\sum_{j=1}^{J}v_{j}\\leqslant{}^{(\\delta)}_{(-\\delta)}\\!\\sum_{j=1}^{J}v_{j}\\leqslant{}_{(-\\delta)}\\!\\sum_{j=1}^{J}v_{j}\\]\n\nProof.: Noncommutativity follows from simple counterexamples: for example, for \\(v\\doteq-1\\) and \\(w\\doteq 2\\), if we fix \\(v_{1}\\doteq v,v_{2}\\doteq w\\), then \\({}^{(0)}\\!\\sum_{j=1}^{2}v_{j}=0\\) while \\({}^{(0)}\\!\\sum_{j=1}^{2}v_{3-j}=-1\\). Property [2.] is trivial. The set of leftmost inequalities of property [3.] 
can be shown by induction, noting the base case is trivial and otherwise, using the induction hypothesis in the leftmost inequality,\n\n\\[{}^{(\\delta)}\\!\\sum_{j=1}^{J+1}v_{j}\\doteq\\min\\left\\{v_{J+1}+{}^{(\\delta)}\\!\\sum_{j=1}^{J}v_{j},\\delta\\right\\}\\leqslant\\min\\left\\{v_{J+1}+\\sum_{j=1}^{J}v_{j},\\delta\\right\\}\\leqslant v_{J+1}+\\sum_{j=1}^{J}v_{j}=\\sum_{j=1}^{J+1}v_{j},\\]\n\nand similarly\n\n\\[{}_{(-\\delta)}\\!\\sum_{j=1}^{J+1}v_{j} \\doteq \\max\\left\\{v_{J+1}+{}_{(-\\delta)}\\!\\sum_{j=1}^{J}v_{j},-\\delta\\right\\}\\] \\[\\geqslant \\max\\left\\{v_{J+1}+\\sum_{j=1}^{J}v_{j},-\\delta\\right\\}\\geqslant v_{J+1}+\\sum_{j=1}^{J}v_{j}=\\sum_{j=1}^{J+1}v_{j}.\\]\n\nA similar argument holds for the set of rightmost inequalities: for example, the induction's general case holds\n\n\\[{}^{(\\delta)}\\!\\sum_{j=1}^{J+1}v_{j} \\doteq \\min\\left\\{v_{J+1}+{}^{(\\delta)}\\!\\sum_{j=1}^{J}v_{j},\\delta\\right\\}\\] \\[\\leqslant \\min\\left\\{v_{J+1}+{}^{(\\delta)}_{(-\\delta)}\\!\\sum_{j=1}^{J}v_{j},\\delta\\right\\}\\] \\[\\leqslant \\max\\left\\{\\min\\left\\{v_{J+1}+{}^{(\\delta)}_{(-\\delta)}\\!\\sum_{j=1}^{J}v_{j},\\delta\\right\\},-\\delta\\right\\}={}^{(\\delta)}_{(-\\delta)}\\!\\sum_{j=1}^{J+1}v_{j}\\]\n\nfor the leftmost inequality. This ends the proof of Lemma D. \n\n#### ii.ii.2.2 Unravelling weights\n\n**Lemma E**.: _Define_\n\n\\[v_{j} \\doteq m^{1-t^{*}}\\cdot\\left(\\prod_{k=1}^{j-1}Z_{tk}\\right)^{1-t}\\cdot\\mu_{j}\\quad\\left(\\text{convention}:\\prod_{k=1}^{0}Z_{tk}\\doteq 1\\right). 
\\tag{38}\\]\n\n_Then \\(\\forall J\\geqslant 1\\), weights unravel as:_\n\n\\[q_{(J+1)i} = \\left\\{\\begin{array}{cc}\\frac{1}{m^{t^{*}}\\prod_{j=1}^{J}Z_{tj}}\\cdot\\exp_{t}\\left(-{}^{(\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J}v_{j}u_{ji}\\right)&\\text{ if }\\quad t<1\\\\ \\frac{1}{m^{t^{*}}\\prod_{j=1}^{J}Z_{tj}}\\cdot\\exp_{t}\\left(-{}_{(-\\nicefrac{1}{t-1})}\\!\\sum_{j=1}^{J}v_{j}u_{ji}\\right)&\\text{ if }\\quad t>1\\end{array}\\right..\\]\n\nProof.: We start with the case \\(t<1\\). We proceed by induction, noting first that the normalization constraint for the initial weights imposes \\(q_{1i}=1/m^{1/(2-t)}=1/m^{t^{*}}\\) and so (using \\((1-t)t^{*}=1-t^{*}\\))\n\n\\[q_{2i} = \\frac{\\exp_{t}(\\log_{t}q_{1i}-\\mu_{1}u_{1i})}{Z_{t1}}\\] \\[= \\frac{1}{Z_{t1}}\\cdot\\left[1+(1-t)\\cdot\\left(\\frac{1}{1-t}\\cdot\\left(\\frac{1}{m^{\\frac{1-t}{2-t}}}-1\\right)-\\mu_{1}u_{1i}\\right)\\right]_{+}^{\\frac{1}{1-t}}\\] \\[= \\frac{1}{Z_{t1}}\\cdot\\left[\\frac{1}{m^{1-t^{*}}}-(1-t)\\mu_{1}u_{1i}\\right]_{+}^{\\frac{1}{1-t}}\\] \\[= \\frac{1}{m^{t^{*}}Z_{t1}}\\cdot\\left[1-(1-t)m^{1-t^{*}}\\mu_{1}u_{1i}\\right]_{+}^{\\frac{1}{1-t}}\\] \\[= \\frac{1}{m^{t^{*}}Z_{t1}}\\cdot\\exp_{t}\\left(-{}^{(\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{1}v_{j}u_{ji}\\right),\\]\n\ncompleting the base case (for the last identity, note that \\(\\exp_{t}(-z)=\\exp_{t}(-\\min\\{z,1/(1-t)\\})\\), since \\(\\exp_{t}\\) vanishes below \\(-1/(1-t)\\)). 
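As an aside, the clipped summations of (37), which drive the unravelling, are easy to play with numerically; the following is our own illustration of properties [1.]–[3.] of Lemma D:

```python
import numpy as np

def upper_clipped_sum(v, delta):
    # (37), left: every partial sum is capped at delta
    s = 0.0
    for vj in v:
        s = min(vj + s, delta)
    return s

def lower_clipped_sum(v, delta):
    # (37), right: every partial sum is floored at -delta
    s = 0.0
    for vj in v:
        s = max(vj + s, -delta)
    return s

# Noncommutativity (Lemma D, [1.]): order matters once clipping kicks in
print(upper_clipped_sum([-1.0, 2.0], 0.0), upper_clipped_sum([2.0, -1.0], 0.0))  # 0.0 -1.0

# Sandwich property (Lemma D, [3.]) on random data
rng = np.random.default_rng(0)
v = rng.normal(size=20)
assert upper_clipped_sum(v, 1.0) <= v.sum() <= lower_clipped_sum(v, 1.0)

# Clipping disappears in the limit (Lemma D, [2.])
print(np.isclose(upper_clipped_sum(v, 1e9), v.sum()))  # True
```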
Using the induction hypothesis, we unravel at iteration \\(J+1\\):\n\n\\[q_{(J+1)i} = \\frac{\\exp_{t}(\\log_{t}q_{Ji}-\\mu_{J}u_{Ji})}{Z_{tJ}}\\] \\[= \\frac{\\exp_{t}\\left(\\log_{t}\\left(\\frac{1}{m^{t^{*}}\\prod_{j=1}^{J-1}Z_{tj}}\\cdot\\exp_{t}\\left(-{}^{(\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J-1}v_{j}u_{ji}\\right)\\right)-\\mu_{J}u_{Ji}\\right)}{Z_{tJ}}\\] \\[= \\frac{1}{Z_{tJ}}\\cdot\\exp_{t}\\left(\\frac{\\max\\left\\{-\\frac{1}{1-t},-{}^{(\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J-1}v_{j}u_{ji}\\right\\}-\\log_{t}\\left(m^{t^{*}}\\prod_{j=1}^{J-1}Z_{tj}\\right)}{1+(1-t)\\log_{t}\\left(m^{t^{*}}\\prod_{j=1}^{J-1}Z_{tj}\\right)}-\\mu_{J}u_{Ji}\\right)\\] \\[= \\frac{1}{Z_{tJ}}\\cdot\\left[1+\\frac{(1-t)\\cdot\\max\\left\\{-\\frac{1}{1-t},-{}^{(\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J-1}v_{j}u_{ji}\\right\\}-\\left(\\left(m^{t^{*}}\\prod_{j=1}^{J-1}Z_{tj}\\right)^{1-t}-1\\right)}{\\left(m^{t^{*}}\\prod_{j=1}^{J-1}Z_{tj}\\right)^{1-t}}-(1-t)\\mu_{J}u_{Ji}\\right]_{+}^{\\frac{1}{1-t}},\\]\n\nwhich simplifies into (using \\((1-t)t^{*}=1-t^{*}\\))\n\n\\[q_{(J+1)i} = \\frac{1}{m^{t^{*}}\\prod_{j=1}^{J}Z_{tj}}\\cdot\\exp_{t}\\left(-S_{J}\\right),\\]\n\nwith\n\n\\[S_{J} \\doteq \\min\\left\\{-\\max\\left\\{-\\frac{1}{1-t},-{}^{(\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J-1}v_{j}u_{ji}\\right\\}+v_{J}u_{Ji},\\frac{1}{1-t}\\right\\}\\] \\[= \\min\\left\\{v_{J}u_{Ji}+\\min\\left\\{\\frac{1}{1-t},{}^{(\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J-1}v_{j}u_{ji}\\right\\},\\frac{1}{1-t}\\right\\}\\] \\[= \\min\\left\\{v_{J}u_{Ji}+{}^{(\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J-1}v_{j}u_{ji},\\frac{1}{1-t}\\right\\}\\] \\[\\doteq {}^{(\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J}v_{j}u_{ji}\\]\n\n(we used twice the definition of clipped summation), which completes the proof of Lemma E for \\(t<1\\).\n\nWe now treat the case \\(t>1\\). The base induction is equivalent, while unravelling gives, instead of (39):\n\n\\[q_{(J+1)i} = \\frac{1}{m^{t^{*}}\\prod_{j=1}^{J}Z_{tj}}\\cdot\\left[1+(1-t)\\cdot\\left(\\min\\left\\{-\\frac{1}{1-t},-{}_{(-\\nicefrac{1}{t-1})}\\!\\sum_{j=1}^{J-1}v_{j}u_{ji}\\right\\}-v_{J}u_{Ji}\\right)\\right]_{+}^{\\frac{1}{1-t}}\\] \\[= \\frac{1}{m^{t^{*}}\\prod_{j=1}^{J}Z_{tj}}\\cdot\\exp_{t}\\left(-S_{J}\\right),\\]\n\nand, this time,\n\n\\[S_{J} \\doteq \\max\\left\\{-\\min\\left\\{-\\frac{1}{1-t},-{}_{(-\\nicefrac{1}{t-1})}\\!\\sum_{j=1}^{J-1}v_{j}u_{ji}\\right\\}+v_{J}u_{Ji},-\\frac{1}{t-1}\\right\\} \\tag{40}\\] \\[= \\max\\left\\{v_{J}u_{Ji}+\\max\\left\\{-\\frac{1}{t-1},\\ {}_{(-\\nicefrac{1}{t-1})}\\!\\sum_{j=1}^{J-1}v_{j}u_{ji}\\right\\},-\\frac{1}{t-1}\\right\\}\\] (41) \\[= \\max\\left\\{v_{J}u_{Ji}+{}_{(-\\nicefrac{1}{t-1})}\\!\\sum_{j=1}^{J-1}v_{j}u_{ji},-\\frac{1}{t-1}\\right\\}\\] (42) \\[\\doteq {}_{(-\\nicefrac{1}{t-1})}\\!\\sum_{j=1}^{J}v_{j}u_{ji}, \\tag{43}\\]\n\nwhich completes the proof of Lemma E. \n\n#### ii.ii.2.3 Introducing classifiers\n\n**Ordinary linear separators** Suppose we have a classifier\n\n\\[H_{J}(\\boldsymbol{x}) \\doteq \\sum_{j=1}^{J}\\beta_{j}^{1-t}\\mu_{j}\\cdot h_{j}(\\boldsymbol{x}),\\quad\\beta_{j}\\doteq m^{t^{*}}\\prod_{k=1}^{j-1}Z_{tk},\\]\n\nwhere \\(\\mu_{j}\\in\\mathbb{R},\\forall j\\in[J]\\). We remark that \\(\\llbracket z\\neq r\\rrbracket\\leq\\exp_{t}^{2-t}(-zr)\\) for any \\(t\\leq 2\\) and \\(z,r\\in\\mathbb{R}\\), and \\(z\\mapsto\\exp_{t}^{2-t}(-z)\\) is decreasing for any \\(t\\leq 2\\), so using [3.] 
in Lemma D, we get for our training sample \\(\\mathcal{S}\\doteq\\{(\\boldsymbol{x}_{i},y_{i}),i\\in[m]\\}\\) and any \\(t<1\\) (from Lemma E),\n\n\\[\\frac{1}{m}\\cdot\\sum_{i\\in[m]}\\llbracket\\mathrm{sign}(H_{J}(\\boldsymbol{x}_{i}))\\neq y_{i}\\rrbracket \\leq \\sum_{i\\in[m]}\\frac{\\exp_{t}^{2-t}\\left(-\\sum_{j=1}^{J}m^{1-t^{*}}\\left(\\prod_{k=1}^{j-1}Z_{tk}\\right)^{1-t}\\mu_{j}\\cdot y_{i}h_{j}(\\boldsymbol{x}_{i})\\right)}{m} \\tag{44}\\] \\[\\leq \\sum_{i\\in[m]}\\frac{\\exp_{t}^{2-t}\\left(-{}^{(\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J}v_{j}u_{ji}\\right)}{m} \\tag{45}\\] \\[= \\prod_{j=1}^{J}Z_{tj}^{2-t}, \\tag{46}\\]\n\nwhere \\(u_{ji}\\doteq y_{i}h_{j}(\\boldsymbol{x}_{i})\\); the second inequality uses [3.] in Lemma D and the fact that \\(z\\mapsto\\exp_{t}^{2-t}(-z)\\) is decreasing, and the last identity unravels the weights with Lemma E, using \\(\\sum_{i\\in[m]}q_{(J+1)i}^{2-t}=1\\). We thus obtain the following Lemma.\n\n**Lemma F**.: _For any \\(t<1\\) and any linear separator \\(H_{J}\\) as above,_\n\n\\[\\frac{1}{m}\\cdot\\sum_{i\\in[m]}\\llbracket\\mathrm{sign}(H_{J}(\\boldsymbol{x}_{i}))\\neq y_{i}\\rrbracket \\leq \\prod_{j=1}^{J}Z_{tj}^{2-t}. \\tag{47}\\]\n\n**Clipped linear separators** We now consider clipped linear separators, in which the ordinary summation defining \\(H_{J}\\) is replaced by the doubly clipped summation:\n\n\\[H^{(\\nicefrac{1}{1-t})}_{J}(\\boldsymbol{x}) \\doteq {}^{(\\nicefrac{1}{1-t})}_{(-\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J}\\beta_{j}^{1-t}\\mu_{j}\\cdot h_{j}(\\boldsymbol{x}).\\]\n\nWe can now replace (44) by\n\n\\[\\frac{1}{m}\\cdot\\sum_{i\\in[m]}\\llbracket\\mathrm{sign}(H^{(\\nicefrac{1}{1-t})}_{J}(\\boldsymbol{x}_{i}))\\neq y_{i}\\rrbracket\\] \\[\\leq \\sum_{i\\in[m]}\\frac{\\exp_{t}^{2-t}\\left(-y_{i}\\cdot{}^{(\\nicefrac{1}{1-t})}_{(-\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J}m^{1-t^{*}}\\left(\\prod_{k=1}^{j-1}Z_{tk}\\right)^{1-t}\\mu_{j}\\cdot h_{j}(\\boldsymbol{x}_{i})\\right)}{m}\\] \\[= \\sum_{i\\in[m]}\\frac{\\exp_{t}^{2-t}\\left(-{}^{(\\nicefrac{1}{1-t})}_{(-\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J}m^{1-t^{*}}\\left(\\prod_{k=1}^{j-1}Z_{tk}\\right)^{1-t}\\mu_{j}\\cdot y_{i}h_{j}(\\boldsymbol{x}_{i})\\right)}{m}\\] \\[\\leq \\sum_{i\\in[m]}\\frac{\\exp_{t}^{2-t}\\left(-{}^{(\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J}m^{1-t^{*}}\\left(\\prod_{k=1}^{j-1}Z_{tk}\\right)^{1-t}\\mu_{j}\\cdot y_{i}h_{j}(\\boldsymbol{x}_{i})\\right)}{m}\\] \\[= \\sum_{i\\in[m]}\\frac{\\exp_{t}^{2-t}\\left(-{}^{(\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J}v_{j}u_{ji}\\right)}{m}.\\]\n\nThe first identity has used the fact that \\(y_{i}\\in\\{-1,1\\}\\), so it can be folded in the doubly clipped summation without changing its value, and the second inequality used [3.] in Lemma D. 
This directly leads us to the following Lemma.\n\n**Lemma G**.: _For any \\(t<1\\) and any clipped linear separator_\n\n\\[H^{(\\nicefrac{1}{1-t})}_{J}(\\boldsymbol{x}) \\doteq {}^{(\\nicefrac{1}{1-t})}_{(-\\nicefrac{1}{1-t})}\\!\\sum_{j=1}^{J}\\beta_{j}^{1-t}\\mu_{j}\\cdot h_{j}(\\boldsymbol{x}),\\quad\\left(\\beta_{j}=m^{t^{*}}\\prod_{k=1}^{j-1}Z_{tk},\\mu_{j}\\in\\mathbb{R},h_{j}\\in\\mathbb{R}^{\\mathcal{X}},\\forall j\\in[J]\\right),\\]\n\n_where \\(Z_{tk}\\) is the normalization coefficient of \\(\\boldsymbol{q}\\) in (25) with \\(u_{ji}\\doteq y_{i}h_{j}(\\boldsymbol{x}_{i})\\), we have_\n\n\\[\\frac{1}{m}\\cdot\\sum_{i\\in[m]}\\llbracket\\mathrm{sign}(H^{(\\nicefrac{1}{1-t})}_{J}(\\boldsymbol{x}_{i}))\\neq y_{i}\\rrbracket \\leq \\prod_{j=1}^{J}Z_{tj}^{2-t}. \\tag{49}\\]\n\n#### ii.ii.2.4 Geometric convergence of the empirical risk\n\nTo get the right-hand side of (47) and (49) as small as possible, we can independently compute each \\(\\mu_{j}\\) so as to minimize\n\n\\[Z_{tj}^{2-t}(\\mu) \\doteq \\sum_{i\\in[m]}\\exp_{t}^{2-t}\\left(\\log_{t}q_{ji}-\\mu u_{ji}\\right). \\tag{50}\\]\n\nWe proceed in two steps, first computing a convenient upper bound for (50), and then finding the \\(\\mu\\) that minimizes this upper bound.\n\n**Step 1**: We distinguish two cases depending on weight \\(q_{ji}\\). Let \\([m]_{j}^{+}\\doteq\\{i:q_{ji}>0\\}\\) and \\([m]_{j}^{\\dagger}\\doteq\\{i:q_{ji}=0\\}\\):\n\n**Case 1**: \\(i\\in[m]_{j}^{+}\\). Let \\(r_{ji}=u_{ji}/q_{ji}^{1-t}\\) and suppose \\(R_{j}>0\\) is a real that satisfies\n\n\\[|r_{ji}| \\leq R_{j},\\forall i\\in[m]_{j}^{+}. \\tag{51}\\]\n\nFor any convex function \\(f\\) defined on \\([-1,1]\\), we have \\(f(z)\\leq((1+z)/2)\\cdot f(1)+((1-z)/2)\\cdot f(-1),\\forall z\\in[-1,1]\\) (the straight line is the chord crossing \\(f\\) at \\(z=-1,1\\)). 
Because \\(z\\mapsto[1-z]_{+}^{\\frac{2-t}{1-t}}\\) is convex for \\(t\\leq 2\\), we have, for any \\(i\\in[m]_{j}^{+}\\),\n\n\\[\\exp_{t}^{2-t}\\left(\\log_{t}q_{ji}-\\mu u_{ji}\\right)\\] \\[= \\left[q_{ji}^{1-t}-(1-t)\\mu u_{ji}\\right]_{+}^{\\frac{2-t}{1-t}}\\] \\[= q_{ji}^{2-t}\\cdot\\left[1-(1-t)\\mu R_{j}\\cdot\\frac{r_{ji}}{R_{j}}\\right]_{+}^{\\frac{2-t}{1-t}}\\] \\[\\leq q_{ji}^{2-t}\\cdot\\frac{R_{j}+r_{ji}}{2R_{j}}\\left[1-(1-t)\\mu R_{j}\\right]_{+}^{\\frac{2-t}{1-t}}+q_{ji}^{2-t}\\cdot\\frac{R_{j}-r_{ji}}{2R_{j}}\\left[1+(1-t)\\mu R_{j}\\right]_{+}^{\\frac{2-t}{1-t}}\\] \\[=\\frac{q_{ji}^{2-t}R_{j}+q_{ji}u_{ji}}{2R_{j}}\\left[1-(1-t)\\mu R_{j}\\right]_{+}^{\\frac{2-t}{1-t}}+\\frac{q_{ji}^{2-t}R_{j}-q_{ji}u_{ji}}{2R_{j}}\\left[1+(1-t)\\mu R_{j}\\right]_{+}^{\\frac{2-t}{1-t}}.\\]\n\n**Case 2**: \\(i\\in[m]_{j}^{\\dagger}\\). Let \\(q_{j}^{\\dagger}>0\\) be a real that satisfies\n\n\\[\\frac{|u_{ji}|}{q_{j}^{\\dagger\\,1-t}} \\leq R_{j},\\forall i\\in[m]_{j}^{\\dagger},\\]\n\nlet \\(m_{j}^{\\dagger}\\doteq\\mathrm{Card}([m]_{j}^{\\dagger})\\), and define the modified weights \\(q^{\\prime}_{ji}\\doteq q_{ji}\\) for \\(i\\in[m]_{j}^{+}\\), \\(q^{\\prime}_{ji}\\doteq q_{j}^{\\dagger}\\) for \\(i\\in[m]_{j}^{\\dagger}\\), together with\n\n\\[\\rho_{j} \\doteq \\frac{1}{(1+m_{j}^{\\dagger}q_{j}^{\\dagger\\,2-t})R_{j}}\\cdot\\sum_{i\\in[m]}q^{\\prime}_{ji}u_{ji}.\\]\n\n**Lemma H**.: _The following holds: (i) \\(\\rho_{j}\\in[-1,1]\\); (ii) if there is at least one index with \\(u_{ji}>0\\) and one index with \\(u_{ji}<0\\), then for any \\(\\mu\\neq 0\\), \\(Z_{tj}^{2-t}(\\mu)>0\\) in (50) (in words, the new weight vector \\(\\boldsymbol{q}_{j+1}\\) cannot be the null vector before normalization)._\n\nProof.: To show (i) for \\(\\rho_{j}\\leqslant 1\\), we write (using \\(u_{ji}\\doteq y_{i}h_{j}(\\boldsymbol{x}_{i}),\\forall i\\in[m]\\) for short),\n\n\\[(1+m_{j}^{\\dagger}q_{j}^{\\dagger\\,2-t})R_{j}\\cdot\\rho_{j} = \\sum_{i\\in[m]}q^{\\prime}_{ji}u_{ji}\\] \\[\\leqslant \\sum_{i\\in[m]}q^{\\prime}_{ji}|u_{ji}|\\] \\[=\\sum_{i\\in[m]_{j}^{+}}q_{ji}^{2-t}\\cdot\\frac{|u_{ji}|}{q_{ji}^{1-t}}+q_{j}^{\\dagger\\,2-t}\\cdot\\frac{\\sum_{i\\in[m]_{j}^{\\dagger}}|u_{ji}|}{q_{j}^{\\dagger\\,1-t}}\\] \\[\\leqslant R_{j}\\cdot\\underbrace{\\sum_{i\\in[m]_{j}^{+}}q_{ji}^{2-t}}_{=1}+q_{j}^{\\dagger\\,2-t}\\cdot\\frac{R_{j}\\sum_{i\\in[m]_{j}^{\\dagger}}|u_{ji}|}{\\max_{i\\in[m]_{j}^{\\dagger}}|u_{ji}|}\\] \\[\\leqslant R_{j}+q_{j}^{\\dagger\\,2-t}m_{j}^{\\dagger}R_{j}=(1+m_{j}^{\\dagger}q_{j}^{\\dagger\\,2-t})R_{j},\\]\n\nshowing 
\\(\\rho_{j}\\leqslant 1\\). Showing \\(\\rho_{j}\\geqslant-1\\) proceeds in the same way. Property (ii) is trivial. \n\n**Lemma J**.: \\[K_{t}(z) \\leqslant \\exp\\left(-\\left(1-\\frac{t}{2}\\right)\\cdot z^{2}\\right).\\]\n\nProof.: We remark that for \\(t\\in[0,1),z\\geqslant 0\\), \\(K_{t}^{\\prime}(z)\\) is concave and \\(K_{t}^{\\prime\\prime}(0)=-(2-t)\\), so \\(K_{t}^{\\prime}(z)\\leqslant-(2-t)z,\\forall z\\geqslant 0\\), from which it follows by integration\n\n\\[K_{t}(z) \\leqslant 1-\\left(1-\\frac{t}{2}\\right)\\cdot z^{2},\\]\n\nand since \\(1-z\\leqslant\\exp(-z)\\), we get the statement of the Lemma. \n\n**Remark 1**.: _The interpretation of Theorem 2 for \\(t<1\\) is simplified in the case where there is no weight switching, i.e. \\(m_{j}^{\\dagger}=0,\\forall j\\). While we have never observed weight switching in our experiments - perhaps because we never boosted for a very large number of iterations, or just because our weak classifiers, decision trees, were in fact not so weak -, it is interesting, from a theoretical standpoint, to comment on convergence when this happens. Let \\(Q_{j}=1+m_{j}^{\\dagger}(q_{j}^{\\dagger})^{2-t}\\) and \\(\\tilde{\\rho}_{j}=Q_{j}\\rho_{j}\\) (notations from Theorem 2). We note that \\(\\tilde{\\rho}_{j}\\approx\\beta\\cdot\\mathbb{E}_{\\boldsymbol{p}_{j}}[y_{i}h_{j}(\\boldsymbol{x}_{i})]\\), where \\(\\boldsymbol{p}_{j}\\) lives on the simplex and \\(|yh|\\leqslant 1,\\beta\\leqslant 1\\). Using Lemma J and (12) (main file), to keep geometric convergence, it is roughly sufficient that \\(Q_{j}\\log Q_{j}\\leqslant(\\tilde{\\rho}_{j})^{2}/(2t^{*})\\). 
Since \\(q_{j}^{\\dagger}\\) is homogeneous to a tempered weight, one would expect in general \\(m_{j}^{\\dagger}(q_{j}^{\\dagger})^{2-t}\\leqslant 1\\), so using the Taylor approximation \\(Q_{j}\\log Q_{j}\\approx-1+Q_{j}\\), one gets the refined sufficient condition for geometric convergence_\n\n\\[m_{j}^{\\dagger}(q_{j}^{\\dagger})^{2-t} \\leqslant (\\tilde{\\rho}_{j})^{2}/(2t^{*})=O((\\tilde{\\rho}_{j})^{2}).\\]\n\n_What does that imply? We have two cases:_\n\n* _If this holds, then we have geometric convergence;_\n* _if it does not hold, then for a "large" number of training examples, we must have_ \\(q_{ji}=0\\)_, which, because of the formula (8) for_ \\(\\boldsymbol{q}\\)_, implies that all these examples receive the right class with a sufficiently large margin. Breaking geometric convergence in this case is not an issue: we already have a good ensemble._\n\n### Proof of Theorem 3\n\nStarting from the proof of Theorem 2, we indicate the additional steps needed to get to the proof of Theorem 3. 
The key is to remark that our margin formulation has the following logical convenience:\n\n\\[\\llbracket\\nu_{t}((\\boldsymbol{x}_{i},y_{i}),H)\\leq\\theta\\rrbracket = \\llbracket-yH(\\boldsymbol{x})+\\log_{t}\\left(\\frac{1+\\theta}{1-\\theta}\\right)-(1-t)yH(\\boldsymbol{x})\\log_{t}\\left(\\frac{1+\\theta}{1-\\theta}\\right)\\geq 0\\rrbracket\\] \\[= \\llbracket(-yH(\\boldsymbol{x}))\\oplus_{t}\\log_{t}\\left(\\frac{1+\\theta}{1-\\theta}\\right)\\geq 0\\rrbracket.\\]\n\nWe then remark that since \\(\\llbracket z\\geq 0\\rrbracket\\leq\\exp_{t}^{2-t}(z)\\), we get\n\n\\[\\llbracket\\nu_{t}((\\boldsymbol{x}_{i},y_{i}),H)\\leq\\theta\\rrbracket \\leq \\exp_{t}^{2-t}\\left(\\left(-yH(\\boldsymbol{x})\\right)\\oplus_{t}\\log_{t}\\left(\\frac{1+\\theta}{1-\\theta}\\right)\\right)\\] \\[=\\exp_{t}^{2-t}\\left(\\log_{t}\\left(\\frac{1+\\theta}{1-\\theta}\\right)\\right)\\cdot\\exp_{t}^{2-t}(-yH(\\boldsymbol{x}))\\] \\[= \\left(\\frac{1+\\theta}{1-\\theta}\\right)^{2-t}\\cdot\\exp_{t}^{2-t}(-yH(\\boldsymbol{x})).\\]\n\nWe then just have to branch to (44), replacing the \\(\\llbracket\\mathrm{sign}(H_{J}(\\boldsymbol{x}_{i}))\\neq y_{i}\\rrbracket\\)s by \\(\\llbracket\\nu_{t}((\\boldsymbol{x}_{i},y_{i}),H)\\leq\\theta\\rrbracket\\), which yields in lieu of (46) the sought inequality:\n\n\\[F_{t,\\theta}(H,\\mathcal{S}) \\leq \\left(\\frac{1+\\theta}{1-\\theta}\\right)^{2-t}\\prod_{j=1}^{J}Z_{tj}^{2-t}. \\tag{58}\\]\n\n### Proof of Theorem 4\n\nThe proof proceeds in three parts. Part **(A)** briefly recalls how to encode linear classifiers with decision trees. Part **(B)** solves (6) in mf, _i.e._ finds boosting's leveraging coefficients as the solution of\n\n\\[\\boldsymbol{q}(\\mu)^{\\top}\\boldsymbol{u} = 0. \\tag{59}\\]\n\nWe then simplify the loss obtained and elicit the conditional Bayes risk of the tempered loss, _i.e._ (20) in mf. 
Part **(C)** elicits the partial losses and shows properness and related properties.\n\nPart **(A): encoding linear models with a tree architecture**We use the reduction trick of [16] to design a decision tree (DT) boosting procedure, and find the (concave) loss it equivalently minimizes, just as in classical top-down DT induction algorithms [6]. The trick is simple: a DT can be thought of as a set of constant linear classifiers. The prediction is the sum of predictions put at all nodes. Boosting fits those predictions at the nodes, and percolating them to the leaves yields a standard DT with real predictions at the leaves. Figure 2 provides a detailed description of the procedure. Let \\(\\lambda\\) denote a leaf node of the current tree \\(H\\), with \\(H_{\\lambda}\\in\\mathbb{R}\\) the function it implements for leaf \\(\\lambda\\). If \\(\\mathrm{parent}(\\lambda)\\) denotes its parent node (assuming wlog it is not the root node), we have\n\n\\[H_{\\lambda} \\doteq H_{\\mathrm{parent}(\\lambda)}+\\mu_{\\lambda}h_{\\lambda}, \\tag{60}\\]\n\nPart **(B): eliciting the Bayes risk of the tempered loss**With our simple classifiers at hand, the tempered exponential loss \\(Z_{tj}^{2-t}\\) in (14) (mf) can be simplified to the loss\n\n\\[L(H) \\doteq \\sum_{i}\\exp_{t}^{2-t}\\left(\\log_{t}q_{1i}-y_{i}H_{\\lambda( \\boldsymbol{x}_{i})}\\right) \\tag{61}\\] \\[= \\sum_{\\lambda\\in\\Lambda(H)}m_{\\lambda}^{+}\\exp_{t}^{2-t}\\left( \\log_{t}q_{1i}-H_{\\lambda}\\right)+m_{\\lambda}^{-}\\exp_{t}^{2-t}\\left(\\log_{t} q_{1i}+H_{\\lambda}\\right),\\]where \\(\\lambda(\\mathbf{x})\\) is the leaf reached by observation \\(\\mathbf{x}\\) and \\(\\Lambda(H)\\) the set of leaf nodes of \\(H\\), and \\(H_{\\lambda}\\) sums all relevant values in (60). 
Also, \\(m_{\\lambda}^{+},m_{\\lambda}^{-}\\) denote the cardinal of positive and negative examples at \\(\\lambda\\) and \\(p_{\\lambda}\\doteq m_{\\lambda}^{+}/(m_{\\lambda}^{+}+m_{\\lambda}^{-})\\) the local proportion of positive examples at \\(\\lambda\\), and finally \\(r_{\\lambda}\\doteq(m_{\\lambda}^{+}+m_{\\lambda}^{-})/m\\) the total proportion of examples reaching \\(\\lambda\\).\n\n**Theorem A**.: _If we compute \\(\\mu_{\\lambda}\\) the solution of (59), we end up with the prediction \\(H_{\\lambda}\\):_\n\n\\[H_{\\lambda} = \\frac{q_{1i}^{1-t}}{1-t}\\cdot\\frac{\\left(\\frac{m_{\\lambda}^{+}}{ m_{\\lambda}^{-}}\\right)^{1-t}-1}{\\left(\\frac{m_{\\lambda}^{+}}{m_{\\lambda}^{-}} \\right)^{1-t}+1} \\tag{62}\\] \\[= \\frac{q_{1i}^{1-t}}{1-t}\\cdot\\frac{p_{\\lambda}^{1-t}-(1-p_{ \\lambda})^{1-t}}{p_{\\lambda}^{1-t}+(1-p_{\\lambda})^{1-t}}, \\tag{63}\\]\n\n_and the loss of the decision tree equals:_\n\n\\[L(H) = \\sum_{\\lambda\\in\\Lambda(H)}r_{\\lambda}\\cdot\\frac{2p_{\\lambda}(1-p _{\\lambda})}{M_{1-t}(p_{\\lambda},1-p_{\\lambda})}, \\tag{64}\\] \\[= \\mathbb{E}_{\\lambda}[\\underline{L}^{(t)}(p_{\\lambda})]. \\tag{65}\\]\n\nProof.: To compute \\(\\mu_{\\lambda}\\), (6) is reduced to the examples reaching \\(\\lambda\\), that is, it simplifies to\n\n\\[m_{\\lambda}^{+}\\exp_{t}\\left(\\log_{t}q_{1i}-H_{\\mathrm{parent}( \\lambda)}-R_{\\lambda}\\mu_{\\lambda}h_{\\lambda}\\right) = m_{\\lambda}^{-}\\exp_{t}\\left(\\log_{t}q_{1i}+H_{\\mathrm{parent}( \\lambda)}+R_{\\lambda}\\mu_{\\lambda}h_{\\lambda}\\right), \\tag{66}\\]\n\nthat we solve for \\(\\mu_{\\lambda}\\). 
Equivalently,\n\n\\[\\frac{\\exp_{t}\\left(\\log_{t}q_{1i}+H_{\\mathrm{parent}(\\lambda)}+ R_{\\lambda}\\mu_{\\lambda}h_{\\lambda}\\right)}{\\exp_{t}\\left(\\log_{t}q_{1i}-H_{ \\mathrm{parent}(\\lambda)}-R_{\\lambda}\\mu_{\\lambda}h_{\\lambda}\\right)} = \\frac{m_{\\lambda}^{+}}{m_{\\lambda}^{-}},\\]\n\nor, using \\(\\exp_{t}(u)/\\exp_{t}(v)=\\exp_{t}(u\\ominus_{t}v)\\),\n\n\\[\\frac{2H_{\\mathrm{parent}(\\lambda)}+2R_{\\lambda}\\mu_{\\lambda}h_ {\\lambda}}{1+(1-t)(\\log_{t}q_{1i}-H_{\\mathrm{parent}(\\lambda)}-R_{\\lambda}\\mu_ {\\lambda}h_{\\lambda})} = \\log_{t}\\left(\\frac{m_{\\lambda}^{+}}{m_{\\lambda}^{-}}\\right),\\]\n\nafter reorganizing:\n\n\\[R_{\\lambda}\\mu_{\\lambda}h_{\\lambda} = \\frac{(1+(1-t)(\\log_{t}q_{1i}-H_{\\mathrm{parent}(\\lambda)}))\\cdot \\log_{t}\\left(\\frac{m_{\\lambda}^{+}}{m_{\\lambda}^{-}}\\right)-2H_{\\mathrm{ parent}(\\lambda)}}{2+(1-t)\\log_{t}\\left(\\frac{m_{\\lambda}^{+}}{m_{\\lambda}^{-}} \\right)},\\]\n\nFigure 2: The weak learner provides weak hypotheses of the form \\([\\![x_{k}\\geq a_{j}]\\!]\\cdot b_{j}\\). From the boosting standpoint, this weak hypothesis is \"as good as\" the weak hypothesis \\(\\overline{h}_{j}(\\mathbf{x})\\doteq[x_{k}0\\) but \\(t\\neq-\\infty\\), \\(u=v\\) is a strict minimum of the pointwise conditional risk, completing the proof for strict properness. Strict properness is sufficient to show by a simple computation that \\(\\underline{L}^{(t)}\\) is (65). For \\(t=-\\infty\\), we pass to the limit and use the fact that we can also write\n\n\\[\\ell_{1}^{(t)}(u) = \\frac{1}{M_{1-t^{\\mathfrak{s}}}\\left(1,\\left(\\frac{u}{1-u}\\right) ^{\\frac{1}{t^{\\mathfrak{s}}}}\\right)}\\quad(\\mbox{we recall }t^{\\mathfrak{s}} \\doteq 1/(2-t)) \\tag{88}\\]\n\n\\(t\\rightarrow-\\infty\\) is equivalent to \\(t^{\\mathfrak{s}}\\to 0^{+}\\). 
If \\(u<1/2\\), \\(u/(1-u)<1\\) and so we see that\n\n\\[\\lim_{t^{\\mathfrak{s}}\\to 0^{+}}M_{1-t^{\\mathfrak{s}}}\\left(1, \\left(\\frac{u}{1-u}\\right)^{\\frac{1}{t^{\\mathfrak{s}}}}\\right) = \\frac{1}{2},\\]\n\nbecause \\(M_{1}\\) is the arithmetic mean. When \\(u>1/2\\), \\(u/(1-u)>1\\) and so this time\n\n\\[\\lim_{t^{\\mathfrak{s}}\\to 0^{+}}M_{1-t^{\\mathfrak{s}}}\\left(1, \\left(\\frac{u}{1-u}\\right)^{\\frac{1}{t^{\\mathfrak{s}}}}\\right) = +\\infty.\\]\n\nHence,\n\n\\[\\ell_{1}^{(-\\infty)}(u) = 2\\cdot[\\![u\\leqslant 1/2]\\!], \\tag{89}\\]\n\nwhich is (twice) the partial loss of the 0/1 loss [26]. \n\nThis ends the proof of Theorem 4.\n\n## III Supplementary material on experiments\n\n### Domains\n\nTable A3 presents the 10 domains we used for our experiments.\n\n### Implementation details and full set of experiments on linear combinations of decision trees\n\nSummary. This section presents the full set of experiments summarized in Table 2 (mf), from Table A4 to Table A15. Tables are ordered in increasing size of the domain (Table A3). In all cases, up to \\(J=20\\) trees have been trained, of size 15 (total number of nodes, except the two biggest domains, for which the size is 5). For all datasets, except creditcard and adult, we have tested \\(t\\) in the complete range, \\(t\\in\\{0.0,0.2,0.4,0.6,0.8,0.9,1.0,1.1\\}\\) (the mf only reports results for \\(t\\geqslant 0.6\\)), and in all cases, models both clipped and not clipped. For each dataset, we have set a 10-fold stratified cross-validation experiment, and report the averages for readability (Table 2 in mf gives the results of a Student paired \\(t\\)-test on error averages for comparison, limit \\(p\\)-val = 0.1). We also provide two examples of training error averages for domains hillnoise and hillnonoise (Tables A10 and A12).\n\nImplementation details of \\(t\\)-AdaBoost. First, regarding file format, we only input a .csv file to \\(t\\)-AdaBoost. 
We do not specify a file with feature types as in ARFF files. \\(t\\)-AdaBoost recognizes the type of each feature from its column content and distinguishes two main types of features: numerical and categorical. The distinction is important for designing the splits during decision tree induction: for numerical features, splits are midpoints between two successive observed values; for categorical features, splits are partitions of the feature values into two non-empty subsets. Our implementation of \\(t\\)-AdaBoost (programmed in Java) makes it possible to choose \\(t\\) not just in the range of values for which we have shown that boosting-compliant convergence is possible (\\(t\\in[0,1]\\)), but also \\(t>1\\). Because our implementation covers AdaBoost (\\(t=1\\)) as well as \\(t>1\\), for which weights can fairly easily become infinite, we have implemented a safe-check during training that counts the number of times the weights become infinite or zero (the latter is really a problem only for AdaBoost, because in theory it should never happen unless a weak classifier achieves perfect, or perfectly wrong, classification), and that makes sure the leveraging coefficients of the classifiers do not become infinite for AdaBoost, a situation that can happen because of numerical approximations in encoding. In our experiments, we observed that none of these problematic cases occurred (this might no longer hold if we were to boost for a large number of iterations). We have implemented algorithm \\(t\\)-AdaBoost exactly as specified in mf. The weak learner trains a decision tree in which the stopping criterion is the size of the tree reaching a user-fixed number of nodes. There is thus no pruning. 
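For concreteness, the tempered arithmetic that these safe-checks guard can be sketched with the standard definitions of the tempered logarithm and exponential (the NumPy code and function names below are our illustration, not the paper's Java implementation):

```python
import numpy as np

def log_t(u, t):
    # Tempered logarithm: log_t(u) = (u^(1-t) - 1) / (1 - t); recovers log(u) as t -> 1.
    if t == 1.0:
        return np.log(u)
    return (u ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(u, t):
    # Tempered exponential, the inverse of log_t on its range.
    # For t < 1 it is clamped at 0, so weights can become exactly zero;
    # for t > 1 it has a pole, which is how weights can blow up to infinity,
    # motivating the safe-check described above.
    if t == 1.0:
        return np.exp(u)
    return np.maximum(0.0, 1.0 + (1.0 - t) * u) ** (1.0 / (1.0 - t))
```

For instance, `exp_t(log_t(w, t), t)` recovers a positive weight `w` for any `t` in `[0, 1]`, while `exp_t(-100.0, 0.5)` is clamped to exactly `0.0`.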
Also, the top-down induction algorithm proceeds by iteratively picking the heaviest leaf in the tree and then choosing the split that minimizes the expected Bayes risk of the tempered loss, computed using the same \\(t\\) values as for \\(t\\)-AdaBoost, and with the constraint of not getting pure leaves (otherwise, the real prediction at the leaves, which relies on the link of the loss, would be infinite for AdaBoost). In our implementation of decision-tree induction, when the number of possible splits exceeds a fixed number \\(S\\) (currently, 2 000), we pick the best split within a subset of \\(S\\) splits chosen at random.\n\nResults. First, one may notice in several plots that the average test error increases with the number of trees. This turns out to be a sign of overfitting, as exemplified for domains hillnonoise and hillnoise, for which we provide the training curves. If we align the training curves at \\(T=1\\) (the value is different because the splitting criterion for training the tree is different), we notice that the experimental convergence on training is similar for all values of \\(t\\) (Tables A10 and A12). The other key experimental result, already visible from Table 2 (mf), is that pretty much all tested values of \\(t\\) are necessary to get the best results. One could be tempted to conclude from Table 2 (mf) that \\(t\\) slightly smaller than \\(1.0\\) is a good fit, but the curves show that this is more a consequence of the Table being computed for \\(J=20\\) trees. The case of eeg illustrates this phenomenon best: while small \\(t\\)-values are clearly the best when there is no noise, the picture is completely reversed when there is training noise. Notice that this ordering is almost reversed on creditcard and adult: when there is noise, small values of \\(t\\) tend to give better results. 
Hence, in addition to getting (i) a pruning mechanism that works for all instances of the tempered loss and (ii) a way to guess the right number of models in the ensemble, a good problem to investigate is in fact appropriately tuning \\(t\\) in a domain-dependent way. Looking at all plots reveals that substantial gains could be obtained with an accurate procedure (over the strategy that would be to always pick a fixed \\(t\\), _e.g._, \\(t=1\\)). Two further observations stand out:\n\n* clamped models can be very useful to handle overfitting (sonar for \\(\\eta=0.4\\), qsar for \\(\\eta\\geq 0.2\\)); this provides another justification to learn clamped models;\n* the overall diversity of curves as a function of \\(t\\) supports the idea that good strategies could in fact tune \\(t\\) at training time and change its value with iterations.\n\nTable A7: Experiments on \\(t\\)-AdaBoost comparing with AdaBoost (\\(t=1\\), bullets) on domain qsar. Conventions follow Table A4.\n\nTable A10: Experiments on \\(t\\)-AdaBoost comparing with AdaBoost (\\(t=1\\), bullets) on domain hillnonoise: training errors displayed for all algorithms using conventions from Table A4. 
See text for details.\n\nTable A11: Experiments on \\(t\\)-AdaBoost comparing with AdaBoost (\\(t=1\\), bullets) on domain hillnoise. 
Conventions follow Table A4.\n\nTable A13: Experiments on \\(t\\)-AdaBoost comparing with AdaBoost (\\(t=1\\), bullets) on domain eeg. Conventions follow Table A4.", "response": "## Review\n\n### Summary\n\nThe paper introduces a novel generalization of the AdaBoost algorithm, termed t-AdaBoost, which incorporates a tempering parameter, t, to enhance its performance by utilizing tempered exponential measures. This extension modifies the weight optimization process while retaining the foundational principles of AdaBoost. The authors provide both theoretical insights and experimental validations, indicating that different values of t can yield varying results across datasets. 
Their findings suggest that tuning the parameter t can lead to improved performance in specific scenarios, thereby highlighting the practical implications of their work in machine learning applications.\n\n### Strengths\n\n- The paper presents a well-structured and original extension of the AdaBoost algorithm.\n- Theoretical results demonstrate sound convergence and empirical performance, showing significant improvements over standard AdaBoost in some datasets.\n- The integration of tempered exponential measures provides a robust solution to numerical instabilities commonly associated with AdaBoost.\n- The paper is well-written and logically organized, making complex concepts accessible.\n- The comprehensive analysis includes both theoretical grounding and experimental evidence, enhancing the credibility of the findings.\n\n### Weaknesses\n\n- Certain sections, particularly Algorithms and notations, are confusing and may hinder reader comprehension.\n- The choice of notations is inconsistent, leading to ambiguity in understanding key elements of the algorithm.\n- There is a lack of clarity regarding the generalization capabilities of the proposed algorithm compared to existing boosting methods.\n- Performance sensitivity to the choice of t may pose challenges for practical implementation without established tuning mechanisms.\n- Some experimental results lack adequate evaluation of the impact of new loss functions on performance.\n\n### Questions\n\n- Could the authors explore the extension of t-AdaBoost to multiclass settings, similar to original AdaBoost?\n- What insights can be drawn regarding the impact of different t values on exponential decay and overfitting, particularly when t < 1?\n- Could the authors clarify the significance of certain notations and equations, especially regarding their definitions and implications?\n- What methods could be implemented to effectively tune the parameter t in practice?\n- Is there a potential relationship between dataset 
characteristics (e.g., number of examples and features) and the optimal t value?\n\n### Soundness\n\n**Score:** 3\n\n**Description:** 3 = good. The theoretical and empirical analyses are largely sound, although there are some areas of confusion in notation and clarity.\n\n### Presentation\n\n**Score:** 3\n\n**Description:** 3 = good. The overall structure is coherent, but notational inconsistencies and some unclear explanations detract from readability.\n\n### Contribution\n\n**Score:** 3\n\n**Description:** 3 = good. The paper presents a valuable extension to an established algorithm, though the practical implications and generalization capabilities could be better articulated.\n\n### Rating\n\n**Score:** 6\n\n**Description:** 6 = Weak Accept: The paper is technically solid and has moderate-to-high impact potential, with some areas requiring refinement.\n\n### Paper Decision\n\n**Decision:** Accept (poster)\n\n**Reasons:** The paper presents a novel and relevant contribution to the field of machine learning by extending the AdaBoost algorithm through the introduction of a tempering parameter. While there are notable weaknesses in presentation and clarity, the sound theoretical foundation and empirical evidence support the significance of the findings. The work's potential to influence practical applications in boosting algorithms justifies an acceptance, albeit with recommendations for clearer exposition and further exploration of tuning methodologies.\n"} {"query": "You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections: \n1. Summary: A summary of the paper in 100-150 words.\n2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.\n3. 
Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are: \n 1 poor\n 2 fair\n 3 good\n 4 excellent\n4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are: \n 1 strong reject\n 2 reject, significant issues present\n 3 reject, not good enough\n 4 possibly reject, but has redeeming facets\n 5 marginally below the acceptance threshold\n 6 marginally above the acceptance threshold\n 7 accept, but needs minor improvements \n 8 accept, good paper\n 9 strong accept, excellent work\n 10 strong accept, should be highlighted at the conference \n5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.\n\nHere is the template for a review format, you must follow this format to output your review result:\n**Summary:**\nSummary content\n\n**Strengths:**\n- Strength 1\n- Strength 2\n- ...\n\n**Weaknesses:**\n- Weakness 1\n- Weakness 2\n- ...\n\n**Questions:**\n- Question 1\n- Question 2\n- ...\n\n**Soundness:**\nSoundness result\n\n**Presentation:**\nPresentation result\n\n**Contribution:**\nContribution result\n\n**Rating:**\nRating result\n\n**Paper Decision:**\n- Decision: Accept/Reject\n- Reasons: reasons content\n\n\nPlease ensure your feedback is objective and constructive. 
The paper is as follows:\n\n# Unexpected Improvements to Expected Improvement\n\nfor Bayesian Optimization\n\n Sebastian Ament\n\nMeta\n\nament@meta.com\n\nSamuel Daulton\n\nMeta\n\nsdaulton@meta.com\n\nDavid Eriksson\n\nMeta\n\nderiksson@meta.com\n\nMaximilian Balandat\n\nMeta\n\nbalandat@meta.com\n\nEytan Bakshy\n\nMeta\n\nebakshy@meta.com\n\n###### Abstract\n\nExpected Improvement (EI) is arguably the most popular acquisition function in Bayesian optimization and has found countless successful applications, but its performance is often exceeded by that of more recent methods. Notably, EI and its variants, including for the parallel and multi-objective settings, are challenging to optimize because their acquisition values vanish numerically in many regions. This difficulty generally increases as the number of observations, dimensionality of the search space, or the number of constraints grow, resulting in performance that is inconsistent across the literature and most often sub-optimal. Herein, we propose LogEI, a new family of acquisition functions whose members either have identical or approximately equal optima as their canonical counterparts, but are substantially easier to optimize numerically. We demonstrate that numerical pathologies manifest themselves in \"classic\" analytic EI, Expected Hypervolume Improvement (EHVI), as well as their constrained, noisy, and parallel variants, and propose corresponding reformulations that remedy these pathologies. 
Our empirical results show that members of the LogEI family of acquisition functions substantially improve on the optimization performance of their canonical counterparts and surprisingly, are on par with or exceed the performance of recent state-of-the-art acquisition functions, highlighting the understated role of numerical optimization in the literature.\n\n## 1 Introduction\n\nBayesian Optimization (BO) is a widely used and effective approach for sample-efficient optimization of expensive-to-evaluate black-box functions [25, 28], with applications ranging widely between aerospace engineering [48], biology and medicine [49], materials science [3], civil engineering [4], and machine learning hyperparameter optimization [66, 72]. BO leverages a probabilistic _surrogate model_ in conjunction with an _acquisition function_ to determine where to query the underlying objective function. Improvement-based acquisition functions, such as Expected Improvement (EI) and Probability of Improvement (PI), are among the earliest and most widely used acquisition functions for efficient global optimization of non-convex functions [42, 58]. EI has been extended to the constrained [27, 29], noisy [52], and multi-objective [20] setting, as well as their respective batch variants [6, 13, 77], and is a standard baseline in the BO literature [25, 66]. While much of the literature has focused on developing new sophisticated acquisition functions, subtle yet critical implementation details of foundational BO methods are often overlooked. Importantly, the performance of EI and its variants is inconsistent even for _mathematically identical_ formulations and, as we show in this work, most often sub-optimal.\n\nAlthough the problem of optimizing EI effectively has been discussed in various works, e.g. 
[25; 31; 77], prior focus has been on optimization algorithms and initialization strategies, rather than the fundamental issue of computing EI.\n\nIn this work, we identify pathologies in the computation of improvement-based acquisition functions that give rise to numerically vanishing values and gradients, which - to our knowledge - are present in _all existing implementations of EI_, and propose reformulations that lead to increases in the associated optimization performance which often match or exceed that of recent methods.\n\n#### Contributions\n\n1. We introduce LogEI, a new family of acquisition functions whose members either have identical or approximately equal optima as their canonical counterparts, but are substantially easier to optimize numerically. Notably, the analytic variant of LogEI, which _mathematically_ results in the same BO policy as EI, empirically shows significantly improved optimization performance.\n2. We extend the ideas behind analytical LogEI to other members of the EI family, including constrained EI (CEI), Expected Hypervolume Improvement (EHVI), as well as their respective batch variants for parallel BO, qEI and qEHVI, using smooth approximations of the acquisition utilities to obtain non-vanishing gradients. All of our methods are available as part of BoTorch [6].\n3. We demonstrate that our newly proposed acquisition functions substantially outperform their respective analogues on a broad range of benchmarks without incurring meaningful additional computational cost, and often match or exceed the performance of recent methods.\n\n#### Motivation\n\nMaximizing acquisition functions for BO is a challenging problem, which is generally non-convex and often contains numerous local maxima, see the lower right panel of Figure 1. 
While zeroth-order methods are sometimes used, gradient-based methods tend to be far more effective at optimizing acquisition functions on continuous domains, especially in higher dimensions.\n\nIn addition to the challenges stemming from non-convexity that are shared across acquisition functions, the values and gradients of improvement-based acquisition functions are frequently minuscule in large swaths of the domain. Although EI is never _mathematically_ zero under a Gaussian posterior distribution,1 it often vanishes, even becoming _exactly_ zero in floating point precision. The same\n\nFigure 1: **Left:** Fraction of points sampled from the domain for which the magnitude of the gradient of EI vanishes to \\(<\\!10^{-10}\\) as a function of the number of randomly generated data points \\(n\\) for different dimensions \\(d\\) on the Ackley function. As \\(n\\) increases, EI and its gradients become numerically zero across most of the domain, see App. D.2 for details. **Right:** Values of EI and LogEI on a quadratic objective. EI takes on extremely small values on points for which the likelihood of improving over the incumbent is small and is numerically _exactly_ zero in double precision for a large part of the domain (\\(\\approx[5,13.5]\\)). The left plot shows that this tends to worsen as the dimensionality of the problem and the number of data points grow, rendering gradient-based optimization of EI futile.\n\napplies to its gradient, making EI (and PI, see Appendix A) exceptionally difficult to optimize via gradient-based methods. The right panels of Figure 1 illustrate this behavior on a simple one-dimensional quadratic function.\n\nTo increase the chance of finding the global optimum of non-convex functions, gradient-based optimization is typically performed from multiple starting points, which can help avoid getting stuck in local optima [70]. 
For improvement-based acquisition functions however, optimization becomes increasingly challenging as more data is collected and the likelihood of improving over the incumbent diminishes, see our theoretical results in Section 3 and the empirical illustration in Figure 1 and Appendix D.2. As a result, gradient-based optimization with multiple random starting points will eventually degenerate into random search when the gradients at the starting points are numerically zero. This problem is particularly acute in high dimensions and for objectives with a large range.\n\nVarious initialization heuristics have been proposed to address this behavior by modifying the random-restart strategy. Rather than starting from random candidates, an alternative naive approach would be to use initial conditions close to the best previously observed inputs. However, doing that alone inherently limits the acquisition optimization to a type of local search, which cannot have global guarantees. To attain such guarantees, it is necessary to use an asymptotically space-filling heuristic; even if not random, this will entail evaluating the acquisition function in regions where no prior observation lies. Ideally, these regions should permit gradient-based optimization of the objective for efficient acquisition function optimization, which necessitates the gradients to be non-zero. In this work, we show that this can be achieved for a large number of improvement-based acquisition functions, and demonstrate empirically how this leads to substantially improved BO performance.\n\n## 2 Background\n\nWe consider the problem of maximizing an expensive-to-evaluate black-box function \\(\\mathbf{f}_{\\mathrm{true}}:\\mathbb{X}\\mapsto\\mathbb{R}^{M}\\) over some feasible set \\(\\mathbb{X}\\subseteq\\mathbb{R}^{d}\\). 
Suppose we have collected data \(\mathcal{D}_{n}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{n}\), where \(\mathbf{x}_{i}\in\mathbb{X}\), \(\mathbf{y}_{i}=\mathbf{f}_{\mathrm{true}}(\mathbf{x}_{i})+\mathbf{v}_{i}(\mathbf{x}_{i})\), and \(\mathbf{v}_{i}\) is observation noise corrupting the true function value \(\mathbf{f}_{\mathrm{true}}(\mathbf{x}_{i})\). The response \(\mathbf{f}_{\mathrm{true}}\) may be multi-output as is the case for multiple objectives or black-box constraints, in which case \(\mathbf{y}_{i},\mathbf{v}_{i}\in\mathbb{R}^{M}\). We use Bayesian optimization (BO), which relies on a surrogate model \(\mathbf{f}\) that for any _batch_ \(\mathbf{X}:=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{q}\}\) of candidate points provides a probability distribution over the outputs \(f(\mathbf{X}):=(f(\mathbf{x}_{1}),\ldots,f(\mathbf{x}_{q}))\). The acquisition function \(\alpha\) then utilizes this posterior prediction to assign an acquisition value to \(\mathbf{X}\) that quantifies the value of evaluating the points in \(\mathbf{X}\), trading off exploration and exploitation.

### Gaussian Processes

Gaussian Processes (GPs) [65] are the most widely used surrogates in BO, due to their high data efficiency and good uncertainty quantification. For our purposes, it suffices to consider a GP as a mapping that provides a multivariate Normal distribution over the outputs \(f(\mathbf{x})\) for any \(\mathbf{x}\):

\[f(\mathbf{x})\sim\mathcal{N}(\mu(\mathbf{x}),\mathbf{\Sigma}(\mathbf{x})),\qquad \mathbf{\mu}:\mathbb{X}^{q}\to\mathbb{R}^{qM},\quad\mathbf{\Sigma}:\mathbb{X}^{q}\to \mathcal{S}_{+}^{qM}. \tag{1}\]

In the single-outcome (\(M=1\)) setting, \(f(\mathbf{x})\sim\mathcal{N}(\mu(\mathbf{x}),\Sigma(\mathbf{x}))\) with \(\mu:\mathbb{X}^{q}\to\mathbb{R}^{q}\) and \(\Sigma:\mathbb{X}^{q}\to\mathcal{S}_{+}^{q}\).
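The posterior in Eq. (1) can be computed directly from kernel matrices; a minimal single-outcome NumPy sketch (the RBF kernel, lengthscale, and noise level are illustrative choices, not settings used in the paper):

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """RBF kernel matrix between row-stacked inputs A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls**2))

def gp_posterior(X, y, Xq, noise=1e-6, ls=0.5):
    """Posterior mean and standard deviation of a zero-mean GP at query points Xq."""
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    Kq = rbf(Xq, X, ls)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    mu = Kq @ alpha
    v = np.linalg.solve(L, Kq.T)
    var = rbf(Xq, Xq, ls).diagonal() - (v**2).sum(0)
    return mu, np.sqrt(np.clip(var, 0.0, None))
```

At observed inputs the posterior mean interpolates the data and the standard deviation collapses toward the noise level, while far from the data it reverts to the prior.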
In the sequential (\\(q=1\\)) case, this further reduces to a univariate Normal distribution: \\(f(\\mathbf{x})\\sim\\mathcal{N}(\\mu(\\mathbf{x}),\\sigma^{2}(\\mathbf{x}))\\) with \\(\\mu:\\mathbb{X}\\to\\mathbb{R}\\) and \\(\\sigma:\\mathbb{X}\\to\\mathbb{R}_{+}\\).\n\n### Improvement-based Acquisition Functions\n\nExpected ImprovementFor the fully-sequential (\\(q=1\\)), single-outcome (\\(M=1\\)) setting, \"classic\" EI [59] is defined as\n\n\\[\\text{EI}_{y^{*}}(\\mathbf{x})=\\mathbb{E}_{f(\\mathbf{x})}\\big{[}[f(\\mathbf{x}) -y^{*}]_{+}\\big{]}=\\sigma(\\mathbf{x})\\;h\\left(\\frac{\\mu(\\mathbf{x})-y^{*}}{ \\sigma(\\mathbf{x})}\\right), \\tag{2}\\]\n\nwhere \\([\\cdot]_{+}\\) denotes the \\(\\max(0,\\cdot)\\) operation, \\(y^{*}=\\max_{i}y_{i}\\) is the best function value observed so far, also referred to as the _incumbent_, \\(h(z)=\\phi(z)+z\\Phi(z)\\), and \\(\\phi,\\Phi\\) are the standard Normal density and distribution functions, respectively. This formulation is arguably the most widely used acquisition function in BO, and the default in many popular software packages.\n\nConstrained Expected Improvement_Constrained BO_ involves one or more black-box constraints and is typically formulated as finding \\(\\max_{\\mathbf{x}\\in\\mathbb{X}}f_{\\text{true},1}(\\mathbf{x})\\) such that \\(f_{\\text{true},i}(\\mathbf{x})\\leq 0\\) for \\(i\\in\\{2,\\ldots,M\\}\\). Feasibility-weighting the improvement [27; 29] is a natural approach for this class of problems:\n\n\\[\\text{CEI}_{y^{*}}(\\mathbf{x})=\\mathbb{E}_{\\mathbf{f}(\\mathbf{x})}\\left[[f_{1}( \\mathbf{x})-y^{*}]_{+}\\ \\prod_{i=2}^{M}\\mathbb{1}_{f_{i}(\\mathbf{x})\\leq 0}\\right], \\tag{3}\\]\n\nwhere \\(1\\) is the indicator function. 
If the constraints \\(\\{f_{i}\\}_{i\\geq 2}\\) are modeled as conditionally independent of the objective \\(f_{1}\\) this can be simplified as the product of EI and the probability of feasibility.\n\nParallel Expected ImprovementIn many settings, one may evaluate \\(f_{\\text{true}}\\) on \\(q>1\\) candidates in parallel to increase throughput. The associated parallel or batch analogue of EI [30; 75] is given by\n\n\\[\\text{qEI}_{y^{*}}(\\mathbf{X})=\\mathbb{E}_{f(\\mathbf{X})}\\left[\\max_{j=1, \\ldots,q}\\bigl{\\{}[f(\\mathbf{x}_{j})-y^{*}]_{+}\\bigr{\\}}\\right]. \\tag{4}\\]\n\nUnlike EI, qEI does not admit a closed-form expression and is thus typically computed via Monte Carlo sampling, which also extends to non-Gaussian posterior distributions [6; 75]:\n\n\\[\\text{qEI}_{y^{*}}(\\mathbf{X})\\approx\\sum_{i=1}^{N}\\max_{j=1,\\ldots,q}\\bigl{\\{} [\\xi^{i}(\\mathbf{x}_{j})-y^{*}]_{+}\\bigr{\\}}, \\tag{5}\\]\n\nwhere \\(\\xi^{i}(\\mathbf{x})\\sim f(\\mathbf{x})\\) are random samples drawn from the joint model posterior at \\(\\mathbf{x}\\).\n\nExpected Hypervolume ImprovementIn multi-objective optimization (MOO), there generally is no single best solution; instead the goal is to explore the Pareto Frontier between multiple competing objectives, the set of mutually-optimal objective vectors. A common measure of the quality of a finitely approximated Pareto Frontier \\(\\mathcal{P}\\) between \\(M\\) objectives with respect to a specified reference point \\(\\mathbf{r}\\in\\mathbb{R}^{M}\\) is its _hypervolume_\\(\\text{HV}(\\mathcal{P},\\mathbf{r}):=\\lambda\\bigl{(}\\bigcup_{\\mathbf{y}_{j}\\in \\mathcal{P}}[\\mathbf{r},\\mathbf{y}_{i}]\\bigr{)}\\), where \\([\\mathbf{r},\\mathbf{y}_{i}]\\) denotes the hyperrectangle bounded by vertices \\(\\mathbf{r}\\) and \\(\\mathbf{y}_{i}\\), and \\(\\lambda\\) is the Lebesgue measure. 
An apt acquisition function for multi-objective optimization problems is therefore the expected hypervolume improvement

\[\text{EHVI}(\mathbf{X})=\mathbb{E}_{\mathbf{f}(\mathbf{X})}\left[[\text{HV}( \mathcal{P}\cup\mathbf{f}(\mathbf{X}),\mathbf{r})-\text{HV}(\mathcal{P}, \mathbf{r})]_{+}\right], \tag{6}\]

due to observing a batch \(\mathbf{f}(\mathbf{X}):=[\mathbf{f}(\mathbf{x}_{1}),\cdots,\mathbf{f}(\mathbf{ x}_{q})]\) of \(q\) new observations. EHVI can be expressed in closed form if \(q=1\) and the objectives are modeled with independent GPs [80], but Monte Carlo approximations are required for the general case (qEHVI) [13].

### Optimizing Acquisition Functions

Optimizing an acquisition function (AF) is a challenging task that amounts to solving a non-convex optimization problem, to which multiple approaches and heuristics have been applied. These include gradient-free methods such as divided rectangles [41], evolutionary methods such as CMA-ES [32], first-order methods such as stochastic gradient ascent (see, e.g., Daulton et al. [15] and Wang et al. [75]), and (quasi-)second-order methods [25] such as L-BFGS-B [10]. Multi-start optimization is commonly employed with gradient-based methods to mitigate the risk of getting stuck in local optima. Initial points for optimization are selected via various heuristics with different levels of complexity, ranging from simple uniform random selection to BoTorch's initialization heuristic, which selects initial points by performing Boltzmann sampling on a set of random points according to their acquisition function value [6]. See Appendix B for a more complete account of initialization strategies and optimization procedures used by popular implementations.
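The multi-start pattern described above can be sketched in a few lines with SciPy (uniform random restarts for simplicity; BoTorch's Boltzmann-sampling initialization heuristic is more elaborate):

```python
import numpy as np
from scipy.optimize import minimize

def multistart_maximize(acq, bounds, n_starts=16, seed=0):
    """Multi-start L-BFGS-B maximization of an acquisition function.

    `acq` maps a (d,) array to a scalar; we minimize its negation from
    several uniformly random starting points and keep the best result."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best_x, best_val = None, -np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(lambda x: -acq(x), x0, method="L-BFGS-B",
                       bounds=list(zip(lo, hi)))
        if -res.fun > best_val:
            best_x, best_val = res.x, -res.fun
    return best_x, best_val
```

When the acquisition function's gradients are numerically zero at most starting points, every restart stalls immediately and this procedure degenerates into random search, which is precisely the failure mode analyzed in Section 3.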
We focus on gradient-based optimization, since leveraging gradients often results in faster and more effective optimization [13].

Optimizing AFs for parallel BO that quantify the value of a batch of \(q>1\) points is more challenging than optimizing their sequential counterparts due to the higher dimensionality of the optimization problem (\(qd\) instead of \(d\)) and the more challenging optimization surface. A common approach to simplify the problem is to use a _sequential greedy_ strategy that solves a sequence of single-point selection problems. For \(i=1,\ldots,q\), candidate \(\mathbf{x}_{i}\) is selected by optimizing the AF for \(q=1\), conditional on the previously selected designs \(\{\mathbf{x}_{1},...,\mathbf{x}_{i-1}\}\) and their unknown observations, e.g., by fantasizing the values at those designs [77]. For submodular AFs, including EI, PI, and EHVI, a sequential greedy strategy will attain a regret within a factor of \(1/e\) compared to the joint optimum, and previous works have found that sequential greedy optimization yields _improved_ BO performance compared to joint optimization [13; 77]. Herein, we find that our reformulations enable joint batch optimization to be competitive with the sequential greedy strategy, especially for larger batches.

### Related Work

While there is a substantial body of work introducing a large variety of different AFs, much less attention has been devoted to how to effectively implement and optimize these AFs. Zhan and Xing [81] provide a comprehensive review of a large number of different variants of the EI family, but do not discuss any numerical or optimization challenges. Zhao et al. [82] propose combining a variety of different initialization strategies to select initial conditions for optimization of acquisition functions and show empirically that this improves optimization performance.
However, they do not address any potential issues or degeneracies with the acquisition functions themselves. Recent works have considered effective gradient-based approaches for acquisition optimization. Wilson et al. [77] demonstrate how stochastic first-order methods can be leveraged for optimizing Monte Carlo acquisition functions. Balandat et al. [6] build on this work and put forth sample average approximations for MC acquisition functions that admit gradient-based optimization using deterministic higher-order optimizers such as L-BFGS-B.

Another line of work proposes to switch from BO to local optimization based on some stopping criterion to achieve faster local convergence, using either zeroth-order [60] or gradient-based [57] optimization. While McLeod et al. [57] are also concerned with numerical issues, we emphasize that those issues arise due to ill-conditioned covariance matrices and are orthogonal to the numerical pathologies of improvement-based acquisition functions.

## 3 Theoretical Analysis of Expected Improvement's Vanishing Gradients

In this section, we shed light on the conditions on the objective function and surrogate model that give rise to the numerically vanishing gradients in EI, as seen in Figure 1. In particular, we show that as a BO algorithm closes the optimality gap \(f^{*}-y^{*}\), where \(f^{*}\) is the global maximum of the function \(f_{\text{true}}\), and the associated GP surrogate's uncertainty decreases, EI is exceedingly likely to exhibit numerically vanishing gradients.

Let \(P_{\mathbf{x}}\) be a distribution over the inputs \(\mathbf{x}\), and \(f\sim P_{f}\) be an objective drawn from a Gaussian process. Then with high probability over the particular instantiation \(f\) of the objective, the probability that an input \(\mathbf{x}\sim P_{\mathbf{x}}\) gives rise to an argument \((\mu(\mathbf{x})-y^{*})/\sigma(\mathbf{x})\) to \(h\) in Eq.
(2) that is smaller than a threshold \(B\) exceeds \(P_{\mathbf{x}}(f(\mathbf{x})<c)\) for a constant \(c\) that depends on \(B\), the optimality gap, and the posterior uncertainty. Note that the numerical support \(\mathcal{S}_{\eta}(h)=\{z\in\mathbb{R}:h(z)>\eta\}\) of a naive implementation of \(h\) in (2) is limited by a lower bound \(B(\eta)\) that depends on the floating point precision \(\eta\). Formally, \(\mathcal{S}_{\eta}(h)\subset[B(\eta),\infty)\) even though \(\mathcal{S}_{0}(h)=\mathbb{R}\) mathematically. As a consequence, the following result can be seen as a bound on the probability of encountering numerically vanishing values and gradients in EI using samples from the distribution \(P_{\mathbf{x}}\) to initialize the optimization of the acquisition function.

**Theorem 1**.: _Suppose \(f\) is drawn from a Gaussian process prior \(P_{f}\), \(y^{*}\leq f^{*}\), \(\mu_{n},\sigma_{n}\) are the mean and standard deviation of the posterior \(P_{f}(f|\mathcal{D}_{n})\) and \(B\in\mathbb{R}\). Then with probability \(1-\delta\),_

\[P_{\mathbf{x}}\left(\frac{\mu_{n}(\mathbf{x})-y^{*}}{\sigma_{n}(\mathbf{x})}<B\right)\geq P_{\mathbf{x}}\left(f(\mathbf{x})<y^{*}-\sigma_{n}(\mathbf{x})\left(\sqrt{2\log(1/\delta)}-B\right)\right).\]

## 4 Unexpected Improvements

To remedy these pathologies, we compute improvement-based acquisition functions in log-space. The analytic variant \(\text{LogEI}_{y^{*}}(\mathbf{x})=\log(\text{EI}_{y^{*}}(\mathbf{x}))\) shares the maximizers of EI, but requires a numerically stable implementation \(\texttt{log\_h}\) of \(\log\circ h\):

\[\texttt{log\_h}(z)=\begin{cases}\log(\phi(z)+z\Phi(z))&z>-1\\ -z^{2}/2-c_{1}+\texttt{log1mexp}\left(\log(\texttt{erfcx}(-z/\sqrt{2})|z|)+c_{2}\right)&-1/\sqrt{\epsilon}<z\leq-1\\ -z^{2}/2-c_{1}-2\log(|z|)&z\leq-1/\sqrt{\epsilon},\end{cases}\]

where \(c_{1}=\log(2\pi)/2\), \(c_{2}=\log(\pi/2)/2\), \(\epsilon\) is the floating point precision, and \(\texttt{erfcx}\) and \(\texttt{log1mexp}\) are numerically stable implementations of the scaled complementary error function and of \(x\mapsto\log(1-e^{x})\), respectively. For the Monte Carlo variant qLogEI, the discrete maximum over the \(q\) candidates in Eq. (4) and the \([\cdot]_{+}\) operator are additionally replaced by smooth approximations with temperature parameters \(\tau_{\max}\) and \(\tau_{0}\), respectively, and all computations are carried out in log-space.

**Theorem 2**.: _For \(\tau_{\max},\tau_{0}>0\), the approximation error of qLogEI to qEI is bounded by_

\[\left|e^{\text{qLogEI}(\mathbf{X})}-\text{qEI}(\mathbf{X})\right|\leq(q^{\tau_{\max}}-1)\;\text{qEI}(\mathbf{X})+\log(2)\tau_{0}q^{\tau_{\max}}. \tag{11}\]

In Appendix D.10, we show the importance of setting the temperatures sufficiently low for qLogEI to achieve good optimization characteristics, something that only becomes possible by transforming all involved computations to log-space. Otherwise, the smooth approximation to the acquisition utility would exhibit vanishing gradients numerically, as the discrete \(\max\) operator does mathematically.

### Constrained EI

Both analytic and Monte Carlo variants of LogEI can be extended for optimization problems with black-box constraints.
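A numerically stable log-space implementation of EI's \(h\), built from \(\texttt{erfcx}\) and \(\texttt{log1mexp}\) as above, can be sketched as follows (a scalar sketch assuming SciPy, not BoTorch's exact implementation):

```python
import numpy as np
from scipy.special import erfcx, ndtr

C1 = np.log(2 * np.pi) / 2.0    # log(2*pi)/2
C2 = np.log(np.pi / 2.0) / 2.0  # log(pi/2)/2
EPS = np.finfo(np.float64).eps

def log1mexp(x):
    """Numerically stable log(1 - exp(x)) for x < 0."""
    if x > -np.log(2.0):
        return np.log(-np.expm1(x))
    return np.log1p(-np.exp(x))

def log_h(z):
    """Stable log(phi(z) + z * Phi(z)), i.e. the log of h in Eq. (2)."""
    if z > -1.0:  # direct evaluation is safe in this regime
        return np.log(np.exp(-z * z / 2.0) / np.sqrt(2.0 * np.pi) + z * ndtr(z))
    if z > -1.0 / np.sqrt(EPS):  # rewrite via erfcx to avoid catastrophic cancellation
        return -z * z / 2.0 - C1 + log1mexp(
            np.log(erfcx(-z / np.sqrt(2.0)) * abs(z)) + C2
        )
    # asymptotic regime: h(z) ~ phi(z) / z^2 as z -> -inf
    return -z * z / 2.0 - C1 - 2.0 * np.log(abs(z))

print(log_h(-40.0))  # finite (about -808), while log of naive EI is -inf
```

Exponentiating `log_h` recovers \(h\) wherever it is representable, while the log value itself stays finite and retains non-vanishing gradients arbitrarily far below the incumbent.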
For analytic CEI with independent constraints of the form \(f_{i}(\mathbf{x})\leq 0\), the constrained formulation in Eq. (3) simplifies to \(\text{LogCEI}(\mathbf{x})=\text{LogEI}(\mathbf{x})+\sum_{i}\log(P(f_{i}(\mathbf{x })\leq 0))\), which can be readily and stably computed using LogEI in Eq. (8) and, if \(f_{i}\) is modeled by a GP, a stable implementation of the Gaussian log cumulative distribution function. For the Monte Carlo variant, we apply a strategy similar to that of Eq. (10) to the constraint indicators in Eq. (3): 1) a smooth approximation and 2) an accurate and stable implementation of its log value; see Appendix A.

### Monte Carlo Parallel LogEHVI

The numerical difficulties of qEHVI in (6) are similar to those of qEI, and the basic ingredients of smoothing and log-transformations still apply, but the details are significantly more complex since qEHVI uses many operations that have mathematically zero gradients with respect to some of the inputs. Our implementation is based on the differentiable inclusion-exclusion formulation of the hypervolume improvement [13]. As a by-product, the implementation also readily allows for the differentiable computation of the expected log hypervolume, instead of the log expected hypervolume (note the order), which can be preferable in certain applications of multi-objective optimization [26].

## 5 Empirical Results

We compare the standard versions of analytic EI (EI), constrained EI (CEI), Monte Carlo parallel EI (qEI), and Monte Carlo EHVI (qEHVI) with their LogEI counterparts, as well as with other state-of-the-art baselines like lower-bound Max-Value Entropy Search (GIBBON) [61] and single- and multi-objective Joint Entropy Search (JES) [36, 71]. All experiments are implemented using BoTorch [6] and utilize multi-start optimization of the AF with scipy's L-BFGS-B optimizer.
In order to avoid conflating the effects of BoTorch's default initialization strategy with those of our contributions, we use 16 initial points chosen uniformly at random from which to start the L-BFGS-B optimization. For a comparison with other initialization strategies, see Appendix D. We run multiple replicates and report mean and error bars of \(\pm 2\) standard errors of the mean. Appendix D.1 contains additional details.

**Single-objective sequential BO.** We compare EI and LogEI on the 10-dimensional convex Sum-of-Squares (SoS) function \(f(\mathbf{x})=\sum_{i=1}^{10}{(x_{i}-0.5)^{2}}\), using 20 restarts seeded from 1024 pseudo-random samples through BoTorch's default initialization heuristic. Figure 2 shows that due to vanishing gradients, EI is unable to make progress even on this trivial problem.

In Figure 3, we compare performance on the Ackley and Michalewicz test functions [67]. Notably, LogEI substantially outperforms EI on Ackley as the dimensionality increases. Ackley is a challenging multimodal function for which it is critical to trade off local exploitation with global exploration, a task made exceedingly difficult by the numerically vanishing gradients of EI in a large fraction of the search space. We see a similar albeit less pronounced behavior on Michalewicz, which reflects the fact that Michalewicz is a somewhat less challenging problem than Ackley.

Figure 2: Regret and EI acquisition value for the candidates selected by maximizing EI and LogEI on the convex Sum-of-Squares problem. Optimization stalls out for EI after about 75 observations due to vanishing gradients (indicated by the jagged behavior of the acquisition value), while LogEI continues to make steady progress.

**BO with Black Box Constraints.** Figure 4 shows results on four engineering design problems with black box constraints that were also considered in [22].
We apply the same bilog transform as the trust region-based SCBO method [22] to all constraints to make them easier to model with a GP. We see that LogCEI outperforms the naive CEI implementation and converges faster than SCBO. Similar to the unconstrained problems, the performance gains of LogCEI over CEI grow with increasing problem dimensionality and the number of constraints. Notably, we found that for some problems, LogCEI in fact _improved upon some of the best results quoted in the original literature_, while using three orders of magnitude fewer function evaluations; see Appendix D.7 for details.

**Parallel Expected Improvement with qLogEI.** Figure 5 reports the optimization performance of parallel BO on the 16-dimensional Ackley function for both sequential greedy and joint batch optimization using the fat-tailed non-linearities of App. A.4. In addition to the apparent advantages of qLogEI over qEI, a key finding is that jointly optimizing the candidates of batch acquisition functions can yield highly competitive optimization performance; see App. D.3 for extended results.

**High-dimensional BO with qLogEI.** Figure 6 shows the performance of LogEI on three high-dimensional problems: the \(6\)-dimensional Hartmann function embedded in a \(100\)-dimensional space, a \(100\)-dimensional rover trajectory planning problem, and a \(103\)-dimensional SVM hyperparameter tuning problem.
We use a \\(103\\)-dimensional version of the \\(388\\)-dimensional SVM problem considered by Eriksson and Jankowiak [21], where the \\(100\\) most important features were selected using Xgboost.\n\nFigure 4: Best feasible objective value as a function of number of function evaluations (iterations) on four engineering design problems with black-box constraints after an initial \\(2d\\) pseudo-random evaluations.\n\nFigure 3: Best objective value as a function of iterations on the moderately and severely non-convex Michalewicz and Ackley problems for varying numbers of input dimensions. LogEI substantially outperforms both EI and GIBBON, and this gap widens as the problem dimensionality increases. JES performs slightly better than LogEI on Ackley, but for some reason fails on Michalewicz. Notably, JES is almost two orders of magnitude slower than the other acquisition functions (see Appendix D).\n\nFigure 6 shows that the optimization exhibits varying degrees of improvement from the inclusion of qLogEI, both when combined with SAASBO [21] and a standard GP. In particular, qLogEI leads to significant improvements on the embedded Hartmann problem, even leading BO with the canonical GP to ultimately catch up with the SAAS-prior-equipped model. On the other hand, the differences on the SVM and Rover problems are not significant, see Section 6 for a discussion.\n\nMulti-Objective optimization with qLogEHVIFigure 7 compares qLogEHVI and qEHVI on two multi-objective test problems with varying batch sizes, including the real-world-inspired cell network design for optimizing coverage and capacity [19]. The results are consistent with our findings in the single-objective and constrained cases: qLogEHVI consistently outperforms qEHVI and even JES [71] for all batch sizes. 
Curiously, for the largest batch size and DTLZ2, qLogEHVI's improvement over the reference point (HV \(>0\)) occurs around three batches after the other methods, but dominates their performance in later batches. See Appendix D.5 for results on additional synthetic and real-world-inspired multi-objective problems such as the laser plasma acceleration optimization [38], and vehicle design optimization [54, 68].

## 6 Discussion

To recap, EI exhibits vanishing gradients 1) when high objective values are highly concentrated in the search space, and 2) as the optimization progresses. In this section, we highlight that these conditions are not met for all BO applications, and that LogEI's performance depends on the surrogate's quality.

**On problem dimensionality.** While our experimental results show that advantages of LogEI generally grow larger as the dimensionality of the problem grows, we stress that this is fundamentally due to the concentration of high objective values in the search space, not the dimensionality itself. Indeed, we have observed problems with high ambient dimensionality but low intrinsic dimensionality, where LogEI does not lead to significant improvements over EI, e.g., the SVM problem in Figure 6.

Figure 5: Best objective value for parallel BO as a function of the number of evaluations for single-objective optimization on the 16-dimensional Ackley function with varying batch sizes \(q\). Notably, joint optimization of the batch outperforms sequential greedy optimization.

Figure 6: Best objective value as a function of number of function evaluations (iterations) on three high-dimensional problems, including Eriksson and Jankowiak [21]’s SAAS prior.

**On asymptotic improvements.** While members of the LogEI family can generally be optimized better, leading to higher acquisition values, improvements in optimization performance might be small in magnitude, e.g. the log-objective results on the convex 10D sum of squares in Fig.
2, or only begin to materialize in later iterations, like for \(q=16\) on DTLZ2 in Figure 7.

**On model quality.** Even if good objective values are concentrated in a small volume of the search space and many iterations are run, LogEI might still not outperform EI if the surrogate's predictions are poor, or its uncertainties are not indicative of the surrogate's mismatch to the objective; see Rover in Fig. 6. In these cases, better acquisition values do not necessarily lead to better BO performance.

**Replacing EI.** Despite these limitations, we strongly suggest replacing variants of EI with their LogEI counterparts. If LogEI were dominated by EI on some problem, it would be an indication that the EI family itself is sub-optimal, and improvements in performance can be attributed to the exploratory quality of randomly distributed candidates, which could be incorporated explicitly.

## 7 Conclusion

Our results demonstrate that the problem of vanishing gradients is a major source of the difficulty of optimizing improvement-based acquisition functions and that we can mitigate this issue through careful reformulations and implementations. As a result, we see substantially improved optimization performance across a variety of modified EI variants across a broad range of problems. In particular, we demonstrate that joint batch optimization for parallel BO can be competitive with, and at times exceed, the sequential greedy approach typically used in practice, which also benefits from our modifications. Besides the convincing performance improvements, one of the key advantages of our modified acquisition functions is that they are much less dependent on heuristic and potentially brittle initialization strategies.
Moreover, our proposed modifications do not meaningfully increase the computational complexity of the respective original acquisition function.\n\nWhile our contributions may not apply verbatim to other classes of acquisition functions, our key insights and strategies do translate and could help with e.g. improving information-based [34; 76], cost-aware [51; 66], and other types of acquisition functions that are prone to similar numerical challenges. Further, combining the proposed methods with gradient-aware first-order BO methods [5; 16; 23] could lead to particularly effective high-dimensional applications of BO, since the advantages of both methods tend to increase with the dimensionality of the search space. Overall, we hope that our findings will increase awareness in the community for the importance of optimizing acquisition functions well, and in particular, for the care that the involved numerics demand.\n\nFigure 7: Batch optimization performance on two multi-objective problems, as measured by the hypervolume of the Pareto frontier across observed points. This plot includes JES [71]. Similar to the single-objective case, the LogEI variant qLogEHVI significantly outperforms the baselines.\n\n## Acknowledgments and Disclosure of Funding\n\nThe authors thank Frank Hutter for valuable references about prior work on numerically stable computations of analytic EI, David Bindel for insightful conversations about the difficulty of optimizing EI, as well as the anonymous reviewers for their knowledgeable feedback.\n\n## References\n\n* Abadi et al. [2015] Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. 
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL [https://www.tensorflow.org/](https://www.tensorflow.org/).\n* Ament and O'Neil [2018] Sebastian Ament and Michael O'Neil. Accurate and efficient numerical calculation of stable densities via optimized quadrature and asymptotics. _Statistics and Computing_, 28:171-185, 2018.\n* Ament et al. [2021] Sebastian Ament, Maximilian Amsler, Duncan R. Sutherland, Ming-Chiang Chang, Dan Guevarra, Aine B. Connolly, John M. Gregoire, Michael O. Thompson, Carla P. Gomes, and R. Bruce van Dover. Autonomous materials synthesis via hierarchical active learning of nonequilibrium phase diagrams. _Science Advances_, 7(51):eabg4930, 2021. doi: 10.1126/sciadv.abg4930. URL [https://www.science.org/doi/abs/10.1126/sciadv.abg4930](https://www.science.org/doi/abs/10.1126/sciadv.abg4930).\n* Ament et al. [2023] Sebastian Ament, Andrew Witte, Nishant Garg, and Julius Kusuma. Sustainable concrete via bayesian optimization, 2023. URL [https://arxiv.org/abs/2310.18288](https://arxiv.org/abs/2310.18288). NeurIPS 2023 Workshop on Adaptive Experimentation in the Real World.\n* Ament and Gomes [2022] Sebastian E Ament and Carla P Gomes. Scalable first-order Bayesian optimization via structured automatic differentiation. 
In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_, volume 162 of _Proceedings of Machine Learning Research_, pages 500-516. PMLR, 17-23 Jul 2022. URL [https://proceedings.mlr.press/v162/ament22a.html](https://proceedings.mlr.press/v162/ament22a.html).\n* Balandat et al. [2020] Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, and Eytan Bakshy. BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization. In _Advances in Neural Information Processing Systems 33_, 2020.\n* Baptista and Poloczek [2018] Ricardo Baptista and Matthias Poloczek. Bayesian optimization of combinatorial structures, 2018.\n* Belakaria et al. [2020] Syrine Belakaria, Aryan Deshwal, and Janardhan Rao Doppa. Max-value entropy search for multi-objective bayesian optimization with constraints, 2020.\n* Bradbury et al. [2018] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL [http://github.com/google/jax](http://github.com/google/jax).\n* Byrd et al. [1995] Richard H Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm for bound constrained optimization. _SIAM Journal on Scientific Computing_, 16(5):1190-1208, 1995.\n* Coello and Montes [2002] Carlos A Coello Coello and Efren Mezura Montes. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. _Advanced Engineering Informatics_, 16(3):193-203, 2002.\n\n* Cowen-Rivers et al. [2022] Alexander I. Cowen-Rivers, Wenlong Lyu, Rasul Tutunov, Zhi Wang, Antoine Grosnit, Ryan Rhys Griffiths, Alexandre Max Maraval, Hao Jianye, Jun Wang, Jan Peters, and Haitham Bou Ammar. 
Hebo pushing the limits of sample-efficient hyperparameter optimisation, 2022.\n* Daulton et al. [2020] Samuel Daulton, Maximilian Balandat, and Eytan Bakshy. Differentiable expected hypervolume improvement for parallel multi-objective bayesian optimization. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 9851-9864. Curran Associates, Inc., 2020. URL [https://proceedings.neurips.cc/paper/2020/file/6fec24eac8f18ed793f5eaad3dd7977c-Paper.pdf](https://proceedings.neurips.cc/paper/2020/file/6fec24eac8f18ed793f5eaad3dd7977c-Paper.pdf).\n* Daulton et al. [2022] Samuel Daulton, Sait Cakmak, Maximilian Balandat, Michael A. Osborne, Enlu Zhou, and Eytan Bakshy. Robust multi-objective Bayesian optimization under input noise. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_, volume 162 of _Proceedings of Machine Learning Research_, pages 4831-4866. PMLR, 17-23 Jul 2022. URL [https://proceedings.mlr.press/v162/daulton22a.html](https://proceedings.mlr.press/v162/daulton22a.html).\n* Daulton et al. [2022] Samuel Daulton, Xingchen Wan, David Eriksson, Maximilian Balandat, Michael A. Osborne, and Eytan Bakshy. Bayesian optimization over discrete and mixed spaces via probabilistic reparameterization. In _Advances in Neural Information Processing Systems 35_, 2022.\n* De Roos et al. [2021] Filip De Roos, Alexandra Gessner, and Philipp Hennig. High-dimensional gaussian process inference with derivatives. In _International Conference on Machine Learning_, pages 2535-2545. PMLR, 2021.\n* Deb et al. [2002] Kalyan Deb, L. Thiele, Marco Laumanns, and Eckart Zitzler. Scalable multi-objective optimization test problems. volume 1, pages 825-830, 06 2002. ISBN 0-7803-7282-4. doi: 10.1109/CEC.2002.1007032.\n* Deshwal et al. 
[2023] Aryan Deshwal, Sebastian Ament, Maximilian Balandat, Eytan Bakshy, Janardhan Rao Doppa, and David Eriksson. Bayesian optimization over high-dimensional combinatorial spaces via dictionary-based embeddings. In Francisco Ruiz, Jennifer Dy, and Jan-Willem van de Meent, editors, _Proceedings of The 26th International Conference on Artificial Intelligence and Statistics_, volume 206 of _Proceedings of Machine Learning Research_, pages 7021-7039. PMLR, 25-27 Apr 2023.\n* Dreifuerst et al. [2021] Ryan M. Dreifuerst, Samuel Daulton, Yuchen Qian, Paul Varkey, Maximilian Balandat, Sanjay Kasturia, Anoop Tomar, Ali Yazdan, Vish Ponnampalam, and Robert W. Heath. Optimizing coverage and capacity in cellular networks using machine learning, 2021.\n* Emmerich et al. [2006] M. T. M. Emmerich, K. C. Giannakoglou, and B. Naujoks. Single- and multiobjective evolutionary optimization assisted by gaussian random field metamodels. _IEEE Transactions on Evolutionary Computation_, 10(4):421-439, 2006.\n* Eriksson and Jankowiak [2021] David Eriksson and Martin Jankowiak. High-dimensional Bayesian optimization with sparse axis-aligned subspaces. In _Uncertainty in Artificial Intelligence_. PMLR, 2021.\n* Eriksson and Poloczek [2021] David Eriksson and Matthias Poloczek. Scalable constrained Bayesian optimization. In _International Conference on Artificial Intelligence and Statistics_. PMLR, 2021.\n* Eriksson et al. [2018] David Eriksson, Kun Dong, Eric Lee, David Bindel, and Andrew G Wilson. Scaling gaussian process regression with derivatives. _Advances in neural information processing systems_, 31, 2018.\n* Eriksson et al. [2019] David Eriksson, Michael Pearce, Jacob Gardner, Ryan D Turner, and Matthias Poloczek. Scalable global optimization via local Bayesian optimization. In _Advances in Neural Information Processing Systems 32_, NeurIPS, 2019.\n* Frazier [2018] Peter I Frazier. A tutorial on bayesian optimization. _arXiv preprint arXiv:1807.02811_, 2018.\n\n* Friedrich et al. 
[2011] Tobias Friedrich, Karl Bringmann, Thomas Voss, and Christian Igel. The logarithmic hypervolume indicator. In _Proceedings of the 11th workshop proceedings on Foundations of genetic algorithms_, pages 81-92, 2011.\n* Gardner et al. [2014] Jacob Gardner, Matt Kusner, Zhixiang, Kilian Weinberger, and John Cunningham. Bayesian optimization with inequality constraints. In _Proceedings of the 31st International Conference on Machine Learning_, volume 32 of _Proceedings of Machine Learning Research_, pages 937-945, Beijing, China, 22-24 Jun 2014. PMLR.\n* Garnett [2023] Roman Garnett. _Bayesian Optimization_. Cambridge University Press, 2023. to appear.\n* Gelbart et al. [2014] Michael A. Gelbart, Jasper Snoek, and Ryan P. Adams. Bayesian optimization with unknown constraints. In _Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence_, UAI, 2014.\n* Ginsbourger et al. [2008] David Ginsbourger, Rodolphe Le Riche, and Laurent Carraro. A Multi-points Criterion for Deterministic Parallel Global Optimization based on Gaussian Processes. Technical report, March 2008. URL [https://hal.science/hal-00260579](https://hal.science/hal-00260579).\n* Gramacy et al. [2022] Robert B Gramacy, Annie Sauer, and Nathan Wycoff. Triangulation candidates for bayesian optimization. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, _Advances in Neural Information Processing Systems_, volume 35, pages 35933-35945. Curran Associates, Inc., 2022.\n* Hansen et al. [2003] Nikolaus Hansen, Sibylle D. Muller, and Petros Koumoutsakos. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (cma-es). _Evolutionary Computation_, 11(1):1-18, 2003. doi: 10.1162/106365603321828970.\n* Head et al. [2021] Tim Head, Manoj Kumar, Holger Nahrstaedt, Gilles Louppe, and Iaroslav Shcherbatyi. scikit-optimize/scikit-optimize, October 2021. 
URL [https://doi.org/10.5281/zenodo.5565057](https://doi.org/10.5281/zenodo.5565057).\n* Volume 1_, NIPS'14, pages 918-926, Cambridge, MA, USA, 2014. MIT Press.\n* Hutter et al. [2011] Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In _Learning and Intelligent Optimization: 5th International Conference, LION 5, Rome, Italy, January 17-21, 2011. Selected Papers 5_, pages 507-523. Springer, 2011.\n* Hvarfner et al. [2022] Carl Hvarfner, Frank Hutter, and Luigi Nardi. Joint entropy search for maximally-informed bayesian optimization. In _Advances in Neural Information Processing Systems 35_, 2022.\n* Research [2023] Wolfram Research, Inc. Wolfram alpha, 2023. URL [https://www.wolframalpha.com/](https://www.wolframalpha.com/).\n* Irshad et al. [2023] F. Irshad, S. Karsch, and A. Dopp. Multi-objective and multi-fidelity bayesian optimization of laser-plasma acceleration. _Phys. Rev. Res._, 5:013063, Jan 2023. doi: 10.1103/PhysRevResearch.5.013063. URL [https://link.aps.org/doi/10.1103/PhysRevResearch.5.013063](https://link.aps.org/doi/10.1103/PhysRevResearch.5.013063).\n* Irshad et al. [2023] Faran Irshad, Stefan Karsch, and Andreas Doepp. Reference dataset of multi-objective and multi- fidelity optimization in laser-plasma acceleration, January 2023. URL [https://doi.org/10.5281/zenodo.7565882](https://doi.org/10.5281/zenodo.7565882).\n* Jiang et al. [2020] Shali Jiang, Daniel Jiang, Maximilian Balandat, Brian Karrer, Jacob Gardner, and Roman Garnett. Efficient nonmyopic bayesian optimization via one-shot multi-step trees. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 18039-18049. Curran Associates, Inc., 2020.\n* Jones et al. [1993] Donald Jones, C. Perttunen, and B. Stuckman. Lipschitzian optimisation without the lipschitz constant. 
_Journal of Optimization Theory and Applications_, 79:157-181, 01 1993. doi: 10.1007/BF00941892.\n\n* Jones et al. [1998] Donald R. Jones, Matthias Schonlau, and William J. Welch. Efficient global optimization of expensive black-box functions. _Journal of Global Optimization_, 13:455-492, 1998.\n* Kandasamy et al. [2020] Kirthevasan Kandasamy, Karun Raju Vysyaraju, Willie Neiswanger, Biswajit Paria, Christopher R. Collins, Jeff Schneider, Barnabas Poczos, and Eric P. Xing. Tuning hyperparameters without grad students: Scalable and robust bayesian optimisation with dragonfly. _J. Mach. Learn. Res._, 21(1), jan 2020.\n* Kim et al. [2022] Jungtaek Kim, Seungjin Choi, and Minsu Cho. Combinatorial bayesian optimization with random mapping functions to convex polytopes. In James Cussens and Kun Zhang, editors, _Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence_, volume 180 of _Proceedings of Machine Learning Research_, pages 1001-1011. PMLR, 01-05 Aug 2022.\n* Kingma and Welling [2013] Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. _arXiv e-prints_, page arXiv:1312.6114, Dec 2013.\n* Klein et al. [2017] A. Klein, S. Falkner, N. Mansur, and F. Hutter. Robo: A flexible and robust bayesian optimization framework in python. In _NIPS 2017 Bayesian Optimization Workshop_, December 2017.\n* Kraft [1988] Dieter Kraft. A software package for sequential quadratic programming. _Forschungsberich-Deutsche Forschungs- und Versuchsanstalt fur Luft- und Raumfahrt_, 1988.\n* Lam et al. [2018] Remi Lam, Matthias Poloczek, Peter Frazier, and Karen E Willcox. Advances in bayesian optimization with applications in aerospace engineering. In _2018 AIAA Non-Deterministic Approaches Conference_, page 1656, 2018.\n* Langer and Tirrell [2004] Robert Langer and David Tirrell. Designing materials for biology and medicine. _Nature_, 428, 04 2004.\n* Lederer et al. [2019] Armin Lederer, Jonas Umlauft, and Sandra Hirche. 
Posterior variance analysis of gaussian processes with application to average learning curves. _arXiv preprint arXiv:1906.01404_, 2019.
* Lee et al. [2020] Eric Hans Lee, Valerio Perrone, Cedric Archambeau, and Matthias Seeger. Cost-aware Bayesian optimization. _arXiv e-prints_, page arXiv:2003.10870, March 2020.
* Letham et al. [2019] Benjamin Letham, Brian Karrer, Guilherme Ottoni, and Eytan Bakshy. Constrained bayesian optimization with noisy experiments. _Bayesian Analysis_, 14(2):495-519, 06 2019. doi: 10.1214/18-BA1110.
* Liang et al. [2021] Qiaohao Liang, Aldair E. Gongora, Zekun Ren, Armi Tiihonen, Zhe Liu, Shijing Sun, James R. Deneault, Daniil Bash, Flore Mekki-Berrada, Saif A. Khan, Kedar Hippalgaonkar, Benji Maruyama, Keith A. Brown, John Fisher III, and Tonio Buonassisi. Benchmarking the performance of bayesian optimization across multiple experimental materials science domains. _npj Computational Materials_, 7(1):188, 2021.
* Liao et al. [2008] Xingtao Liao, Qing Li, Xujing Yang, Weigang Zhang, and Wei Li. Multiobjective optimization for crash safety design of vehicles using stepwise regression model. _Structural and Multidisciplinary Optimization_, 35:561-569, 06 2008. doi: 10.1007/s00158-007-0163-x.
* Lyu et al. [2018] Wenlong Lyu, Fan Yang, Changhao Yan, Dian Zhou, and Xuan Zeng. Batch Bayesian optimization via multi-objective acquisition ensemble for automated analog circuit design. In Jennifer Dy and Andreas Krause, editors, _Proceedings of the 35th International Conference on Machine Learning_, volume 80 of _Proceedings of Machine Learning Research_, pages 3306-3314. PMLR, 10-15 Jul 2018. URL [https://proceedings.mlr.press/v80/lyu18a.html](https://proceedings.mlr.press/v80/lyu18a.html).
* Machler [2012] Martin Machler. Accurately computing log(1 - exp(-|a|)) assessed by the Rmpfr package. Technical report, 2012.
* McLeod et al. [2018] Mark McLeod, Stephen Roberts, and Michael A. Osborne.
Optimization, fast and slow: optimally switching between local and Bayesian optimization. In Jennifer Dy and Andreas Krause, editors, _Proceedings of the 35th International Conference on Machine Learning_, volume 80 of _Proceedings of Machine Learning Research_, pages 3443-3452. PMLR, 10-15 Jul 2018.

* Mockus [1975] Jonas Mockus. On bayesian methods for seeking the extremum. In _Optimization Techniques IFIP Technical Conference: Novosibirsk, July 1-7, 1974_, pages 400-404. Springer, 1975.
* Mockus [1978] Jonas Mockus. The application of bayesian methods for seeking the extremum. _Towards global optimization_, 2:117-129, 1978.
* Mohammadi et al. [2015] Hossein Mohammadi, Rodolphe Le Riche, and Eric Touboul. Making ego and cma-es complementary for global optimization. In Clarisse Dhaenens, Laetitia Jourdan, and Marie-Eleonore Marmion, editors, _Learning and Intelligent Optimization_, pages 287-292, Cham, 2015. Springer International Publishing.
* Moss et al. [2021] Henry B. Moss, David S. Leslie, Javier Gonzalez, and Paul Rayson. Gibbon: General-purpose information-based bayesian optimisation. _J. Mach. Learn. Res._, 22(1), jan 2021.
* Oh et al. [2019] Changyong Oh, Jakub Tomczak, Efstratios Gavves, and Max Welling. Combinatorial bayesian optimization using the graph cartesian product. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alche-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems 32_, pages 2914-2924. Curran Associates, Inc., 2019.
* Paszke et al. [2019] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alche-Buc, E. Fox, and R.
Garnett, editors, _Advances in Neural Information Processing Systems_, volume 32. Curran Associates, Inc., 2019.\n* Picheny et al. [2023] Victor Picheny, Joel Berkeley, Henry B. Moss, Hrvoje Stojic, Uri Grants, Sebastian W. Ober, Artem Artemev, Khurram Ghani, Alexander Goodall, Andrei Paleyes, Sattar Vakili, Sergio Pascual-Diaz, Stratis Markou, Jixiang Qing, Nasrulloh R. B. S Loka, and Ivo Couckuyt. Trieste: Efficiently exploring the depths of black-box functions with tensorflow, 2023. URL [https://arxiv.org/abs/2302.08436](https://arxiv.org/abs/2302.08436).\n* Rasmussen [2004] Carl Edward Rasmussen. _Gaussian Processes in Machine Learning_, pages 63-71. Springer Berlin Heidelberg, Berlin, Heidelberg, 2004.\n* Snoek et al. [2012] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. In _Advances in neural information processing systems_, pages 2951-2959, 2012.\n* Surjanovic and Bingham [2023] S. Surjanovic and D. Bingham. Virtual library of simulation experiments: Test functions and datasets. Retrieved May 14, 2023, from [http://www.sfu.ca/~ssurjano](http://www.sfu.ca/~ssurjano).\n* Tanabe and Ishibuchi [2020] Ryoji Tanabe and Hisao Ishibuchi. An easy-to-use real-world multi-objective optimization problem suite. _Applied Soft Computing_, 89:106078, 2020. ISSN 1568-4946.\n* [69] The GPyOpt authors. GPyOpt: A bayesian optimization framework in python. [http://github.com/SheffieldML/GPyOpt](http://github.com/SheffieldML/GPyOpt), 2016.\n* Torn and Zilinskas [1989] Aimo Torn and Antanas Zilinskas. _Global optimization_, volume 350. Springer, 1989.\n* Tu et al. [2022] Ben Tu, Axel Gandy, Nikolas Kantas, and Behrang Shafei. Joint entropy search for multi-objective bayesian optimization. In _Advances in Neural Information Processing Systems 35_, 2022.\n* Turner et al. [2021] Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, and Isabelle Guyon. 
Bayesian optimization is superior to random search for machine learning hyperparameter tuning: Analysis of the black-box optimization challenge 2020. In _NeurIPS 2020 Competition and Demonstration Track_, 2021.
* Wachter and Biegler [2006] Andreas Wachter and Lorenz T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. _Mathematical Programming_, 106(1):25-57, 2006.
* Wan et al. [2021] Xingchen Wan, Vu Nguyen, Huong Ha, Binxin Ru, Cong Lu, and Michael A. Osborne. Think global and act local: Bayesian optimisation over high-dimensional categorical and mixed search spaces. In Marina Meila and Tong Zhang, editors, _Proceedings of the 38th International Conference on Machine Learning_, volume 139, pages 10663-10674. PMLR, 18-24 Jul 2021.
* Wang et al. [2016] Jialei Wang, Scott C. Clark, Eric Liu, and Peter I. Frazier. Parallel bayesian global optimization of expensive functions, 2016.
* Wang and Jegelka [2017] Zi Wang and Stefanie Jegelka. Max-value Entropy Search for Efficient Bayesian Optimization. _ArXiv e-prints_, page arXiv:1703.01968, March 2017.
* Wilson et al. [2018] James Wilson, Frank Hutter, and Marc Deisenroth. Maximizing acquisition functions for bayesian optimization. In _Advances in Neural Information Processing Systems 31_, pages 9905-9916. 2018.
* Wu and Frazier [2016] Jian Wu and Peter I. Frazier. The parallel knowledge gradient method for batch bayesian optimization. In _Proceedings of the 30th International Conference on Neural Information Processing Systems_, NIPS'16, page 3134-3142. Curran Associates Inc., 2016.
* Wu et al. [2017] Jian Wu, Matthias Poloczek, Andrew Gordon Wilson, and Peter I Frazier. Bayesian optimization with gradients. In _Advances in Neural Information Processing Systems_, pages 5267-5278, 2017.
* Zhan and Xing [2020] Dawei Zhan and Huanlai Xing. Expected improvement for expensive optimization: a review.
_Journal of Global Optimization_, 78(3):507-544, 2020.
* Zhao et al. [2023] Jiayu Zhao, Renyu Yang, Shenghao Qiu, and Zheng Wang. Enhancing high-dimensional bayesian optimization by optimizing the acquisition function maximizer initialization. _arXiv preprint arXiv:2302.08298_, 2023.
* Zitzler et al. [2000] Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele. Comparison of multiobjective evolutionary algorithms: Empirical results. _Evol. Comput._, 8(2):173-195, jun 2000. ISSN 1063-6560. doi: 10.1162/106365600568202. URL [https://doi.org/10.1162/106365600568202](https://doi.org/10.1162/106365600568202).

## Acquisition Function Details

### Analytic Expected Improvement

Recall that the main challenge with computing analytic LogEI is to accurately compute \(\log h\), where \(h(z)=\phi(z)+z\Phi(z)\), with \(\phi(z)=\exp(-z^{2}/2)/\sqrt{2\pi}\) and \(\Phi(z)=\int_{-\infty}^{z}\phi(u)du\). To express \(\log h\) in a numerically stable form as \(z\) becomes increasingly negative, we first take the log and multiply \(\phi\) out of the argument to the logarithm:

\[\log h(z)=-z^{2}/2-\log(2\pi)/2+\log\left(1+z\frac{\Phi(z)}{\phi(z)}\right). \tag{12}\]

Fortunately, this form exposes the quadratic factor explicitly; the ratio \(\Phi(z)/\phi(z)\) can be computed via standard implementations of the scaled complementary error function erfcx, and the last term of Eq. (12), \(\log\left(1+z\Phi(z)/\phi(z)\right)\), can be computed stably with the log1mexp implementation proposed in [56]:

\[\texttt{log1mexp}(x)=\begin{cases}\texttt{log}(-\texttt{expm1}(x))&-\log 2<x\leq 0,\\ \texttt{log1p}(-\texttt{exp}(x))&x\leq-\log 2.\end{cases} \tag{13}\]

Combining these components yields the numerically stable computation

\[\texttt{log\_h}(z)=\begin{cases}\log\left(\phi(z)+z\Phi(z)\right)&z>-1,\\ -z^{2}/2-c_{1}+\texttt{log1mexp}\left(\log(\texttt{erfcx}(-z/\sqrt{2})|z|)+c_{2}\right)&-1/\sqrt{\epsilon}<z\leq-1,\\ -z^{2}/2-c_{1}-2\log(|z|)&z\leq-1/\sqrt{\epsilon},\end{cases} \tag{14}\]

where \(c_{1}=\log(2\pi)/2\), \(c_{2}=\log(\pi/2)/2\), and \(\epsilon\) is the machine precision of the floating point type used; the third branch is based on the asymptotic expansion derived below.

**Lemma 3**.: _If \(\max_{\mathbf{x}\in\mathbb{X}}\text{EI}(\mathbf{x})>0\), then \(\arg\max_{\mathbf{x}\in\mathbb{X}}\text{EI}(\mathbf{x})=\arg\max_{\mathbf{x}\in\mathbb{X},\text{EI}(\mathbf{x})>0}\text{LogEI}(\mathbf{x})\)._

Proof.: Suppose \(\max_{\mathbf{x}\in\mathbb{X}}\text{EI}(\mathbf{x})>0\).
Then \\(\\arg\\max_{\\mathbf{x}\\in\\mathbb{X}}\\text{EI}(\\mathbf{x})=\\arg\\max_{\\mathbf{x} \\in\\mathbb{X},\\text{EI}(\\mathbf{x})>0}\\text{EI}(\\mathbf{x})\\). For all \\(\\mathbf{x}\\in\\mathbb{X}\\) such that \\(\\text{EI}(\\mathbf{x})>0\\), \\(\\text{LogEI}(\\mathbf{x})=\\log(\\text{EI}(\\mathbf{x}))\\). Since \\(\\log\\) is monotonic, we have that \\(\\arg\\max_{z\\in\\mathbb{R}_{>0}}z=\\arg\\max_{z\\in\\mathbb{R}_{>0}}\\log(z)\\). Hence, \\(\\arg\\max_{\\mathbf{x}\\in\\mathbb{X},\\text{EI}(\\mathbf{x})>0}\\text{EI}(\\mathbf{x })=\\arg\\max_{\\mathbf{x}\\in\\mathbb{X},\\text{EI}(\\mathbf{x})>0}\\text{LogEI}( \\mathbf{x})\\). \n\nFigure 8: Plot of the \\(\\log h\\), computed via log \\(\\circ\\)\\(h\\) and log_h in Eq. (14). Crucially, the naive implementation fails as \\(z=(\\mu(\\mathbf{x})-f^{*})/\\sigma(\\mathbf{x})\\) becomes increasingly negative, due to being exactly numerically zero, while our proposed implementation exhibits quadratic asymptotic behavior.\n\n### Analytical LogEI's Asymptotics\n\nAs \\(z\\) grows negative and large, even the more robust second branch in Eq. (14), as well as the implementation of Hutter et al. [35] and Klein et al. [46] can suffer from numerical instabilities (Fig. 10, left). In our case, the computation of the last term of Eq. (12) is problematic for large negative \\(z\\). For this reason, we propose an approximate asymptotic computation based on a Laurent expansion at \\(-\\infty\\). As a result of the full analysis in the following, we also attain a particularly simple formula with inverse quadratic convergence in \\(z\\), which is the basis of the third branch of Eq. (14):\n\n\\[\\log\\left(1+\\frac{z\\Phi(z)}{\\phi(z)}\\right)=-2\\log(|z|)+\\mathcal{O}(|z^{-2}|). 
\\tag{15}\\]\n\nIn full generality, the asymptotic behavior of the last term can be characterized by the following result.\n\n**Lemma 4** (Asymptotic Expansion).: _Let \\(z<-1\\) and \\(K\\in\\mathbb{N}\\), then_\n\n\\[\\log\\left(1+\\frac{z\\Phi(z)}{\\phi(z)}\\right)=\\log\\left(\\sum_{k=1}^{K}(-1)^{k+1 }\\left[\\prod_{j=0}^{k-1}(2j+1)\\right]z^{-2k}\\right)+\\mathcal{O}(|z^{-2(K-1)}|). \\tag{16}\\]\n\nProof.: We first derived a Laurent expansion of the non-log-transformed \\(z\\Phi(z)/\\phi(z)\\), a key quantity in the last term of Eq. (12), with the help of Wolfram Alpha [37]:\n\n\\[\\frac{z\\Phi(z)}{\\phi(z)}=-1-\\sum_{k=1}^{K}(-1)^{k}\\left[\\prod_{j=0}^{k-1}(2j+ 1)\\right]z^{-2k}+\\mathcal{O}(|z|^{-2K}). \\tag{17}\\]\n\nIt remains to derive the asymptotic error bound through the log-transformation of the above expansion. Letting \\(L(z,K)=\\sum_{k=1}^{K}(-1)^{k+1}\\left[\\prod_{j=0}^{k-1}(2j+1)\\right]z^{-2k}\\), we get\n\n\\[\\log\\left(1+\\frac{z\\Phi(z)}{\\phi(z)}\\right) =\\log\\left(L(z,K)+\\mathcal{O}(|z|^{-2K})\\right) \\tag{18}\\] \\[=\\log L(z,K)+\\log(1+\\mathcal{O}(|z|^{-2K})/L(z,K))\\] \\[=\\log L(z,K)+\\mathcal{O}(\\mathcal{O}(|z|^{-2K})/L(z,K))\\] \\[=\\log L(z,K)+\\mathcal{O}(|z|^{-2(K-1)}).\\]\n\nThe penultimate equality is due to \\(\\log(1+x)=x+\\mathcal{O}(x^{2})\\), the last due to \\(L(z,K)=\\Theta(|z|^{-2})\\).\n\nFigure 9: Convergence behavior of the asymptotic Laurent expansion Eq. (16) of different orders.\n\nSMAC 1.0 and RoBO's Analytic LogEITo our knowledge, SMAC 1.0's implementation of the logarithm of analytic EI due to Hutter et al. [35], later translated to RoBO [46], was the first to improve the numerical stability of analytic EI through careful numerics. The associated implementation is mathematically identical to \\(\\log\\circ\\) EI, and greatly improves the numerical stability of the computation. 
For large negative \(z\), however, the SMAC/RoBO implementation still exhibits instabilities that give rise to floating point infinities through which useful gradients cannot be propagated (Fig. 10, left). The implementation proposed herein remedies this problem by switching to the asymptotic approximation of Eq. (15) once it is accurate to machine precision \(\epsilon\). This is similar to the use of asymptotic expansions for the computation of \(\alpha\)-stable densities proposed by Ament and O'Neil [2].

**HEBO's Approximate Analytic LogEI.** HEBO [12] contains an approximation to the logarithm of analytical EI as part of its implementation of the MACE acquisition function [55], which - at the time of writing - is missing the log normalization constant of the Gaussian density, leading to a large discontinuity at the chosen cut-off point of \(z=-6\), below which the approximation takes effect. Notably, HEBO does not implement an _exact_ stable computation of LogEI like those of Hutter et al. [35] and Klein et al. [46], or the non-asymptotic branches of the current work. Instead, it applies the approximation for all \(z<-6\), where the implementation exhibits a maximum error of \(>1.02\), or, if the implementation's normalization constant were corrected, a maximum error of \(>0.1\). By comparison, the implementation put forth herein is mathematically exact in the non-asymptotic regime \(z>-1/\sqrt{\epsilon}\) and accurate to numerical precision in the asymptotic regime due to the design of the threshold value.

### Monte-Carlo Expected Improvement

For Monte-Carlo acquisition functions, we cannot directly apply the same numerical improvements as in the analytical case, because the sample-level utility values - the integrand of Eq. (4) - are likely to be _mathematically_ zero.
For this reason, we first smoothly approximate the acquisition utility and subsequently apply log transformations to the approximate acquisition function.

To this end, a natural choice is \(\mathrm{softplus}_{\tau_{0}}(x)=\tau_{0}\log(1+\exp(x/\tau_{0}))\) for smoothing \(\max(0,x)\), where \(\tau_{0}\) is a temperature parameter governing the approximation error. Further, we approximate the \(\max_{i}\) over the \(q\) candidates by the norm \(\|\cdot\|_{1/\tau_{\mathrm{max}}}\), and note that the approximation error introduced by both smooth approximations can be bounded tightly as a function of the two "temperature" parameters \(\tau_{0}\) and \(\tau_{\mathrm{max}}\); see Lemma 2.

Figure 10: Left: Comparison of LogEI values in single-precision floating point arithmetic, as a function of \(z=(\mu(x)-f^{*})/\sigma(x)\), to RoBO's LogEI implementation [46]. Notably, RoBO's LogEI improves greatly on the naive implementation (Fig. 8), but still exhibits failure points (red) well above floating point underflow. The implementation of Eq. (14) continues to be stable in this regime. Right: Comparison of LogEI values to HEBO's approximate LogEI implementation [12]. At the time of writing, HEBO's implementation exhibits a discontinuity and error of \(>1.02\) at the threshold \(z=-6\), below which the approximation takes effect. The discontinuity could be ameliorated, though not removed, by correcting the log normalization constant (turquoise). The figure also shows that the naive implementation used by HEBO for \(z>-6\) starts to become unstable well before \(z=-6\).

Importantly, the smoothing alone only solves the problem of having mathematically zero gradients, not that of having numerically vanishing gradients, as we have shown for the analytical case above.
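A scalar toy computation illustrates both points. The helper names below are ours and the real implementation is vectorized, but the arithmetic is the same:

```python
import math

def softplus(x, tau):
    """tau * log(1 + exp(x / tau)): smooth approximation to max(0, x)."""
    return tau * math.log1p(math.exp(x / tau))  # adequate for moderate x / tau

def smooth_batch_utility(xs, tau0=0.01, tau_max=0.01):
    """|| softplus_tau0(x) ||_{1/tau_max}: smooth surrogate for max(0, max_i x_i)."""
    return sum(softplus(x, tau0) ** (1.0 / tau_max) for x in xs) ** tau_max

# Close to the hard utility when some improvement is positive ...
print(smooth_batch_utility([0.5, -1.0, 0.3]))  # ~0.5
# ... but evaluated naively (outside log space), it underflows to an exact
# zero -- with an exactly zero gradient -- once all improvements are negative.
print(smooth_batch_utility([-2.0, -1.0]))  # 0.0 in double precision
```

This exact numerical zero, despite a mathematically positive utility, is precisely the failure mode that motivates carrying out all subsequent computations in log space.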
For this reason, we transform all smoothed computations to log space and thus need the following special implementation of \(\log\circ\operatorname{softplus}\) that can be evaluated stably for a very large range of inputs:

\[\texttt{logsoftplus}_{\tau}(x)=\begin{cases}[\texttt{log}\circ\texttt{softplus}_{\tau}](x)&x/\tau>l,\\ x/\tau+\texttt{log}(\tau)&x/\tau\leq l,\end{cases}\]

where \(\tau\) is a temperature parameter and \(l\) depends on the floating point precision used, around \(-35\) for double precision in our implementation.

Note that the lower branch of logsoftplus is approximate. Using the Taylor expansion \(\log(1+u)=u+\mathcal{O}(u^{2})\) around \(u=0\) with \(u=\exp(x)\), we see that \(\log(\log(1+\exp(x)))=x+\mathcal{O}(\exp(x))\), so the lower branch converges to the exact value exponentially quickly as \(x\to-\infty\). In our implementation, \(l\) is chosen so that no significant digit is lost in dropping the exponentially small correction from the lower branch.

Having defined logsoftplus, we further note that

\[\log\|\mathbf{x}\|_{1/\tau_{\max}} =\log\left[\left(\sum_{i}x_{i}^{1/\tau_{\max}}\right)^{\tau_{\max}}\right]\] \[=\tau_{\max}\log\left(\sum_{i}\exp(\log(x_{i})/\tau_{\max})\right)\] \[=\tau_{\max}\texttt{logsumexp}_{i}\left(\log(x_{i})/\tau_{\max}\right).\]

Therefore, we express the logarithm of the smoothed acquisition utility for \(q\) candidates as

\[\tau_{\max}\,\texttt{logsumexp}_{j=1}^{q}\left(\texttt{logsoftplus}_{\tau_{0}}(\xi^{i}(\mathbf{x}_{j})-y^{*})/\tau_{\max}\right).\]

Applying another logsumexp to compute the logarithm of the mean of acquisition utilities over a set of Monte Carlo samples \(\{\xi_{i}\}_{i}\) gives rise to the expression in Eq.
(10).

In particular, for large batches (large \(q\)), this expression can still give rise to vanishing gradients for some candidates, which is due to the large dynamic range of the outputs of the logsoftplus when \(x\ll 0\). To solve this problem, we propose a new class of smooth approximations to the "hard" non-linearities that decay as \(\mathcal{O}(1/x^{2})\) as \(x\to-\infty\) in the next section.

### A Class of Smooth Approximations with Fat Tails for Larger Batches

A regular \(\operatorname{softplus}(x)=\log(1+\exp(x))\) function smoothly approximates the ReLU non-linearity and - in conjunction with the log transformations - is sufficient to achieve good numerical behavior for small batches of the Monte Carlo acquisition functions. However, as more candidates are added, \(\log\operatorname{softplus}(x)=\log(\log(1+\exp(x)))\) is increasingly likely to have a high dynamic range, since for \(x\ll 0\), \(\log\operatorname{softplus}_{\tau}(x)\sim x/\tau\). If \(\tau>0\) is chosen to be small, \(|x|/\tau\) can vary over orders of magnitude within a single batch. This becomes problematic when we approximate the maximum utility over the batch of candidates, since logsumexp only propagates numerically non-zero gradients to inputs that are no smaller than approximately \((\max_{j}x_{j}-700)\) in double precision, another source of vanishing gradients.
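The two-branch logsoftplus above is straightforward to implement. Here is a minimal scalar sketch (ours, not the library code); the guard for large positive inputs is an addition of ours to avoid overflow of exp and is not part of the displayed definition:

```python
import math

def logsoftplus(x, tau=1.0, l=-35.0):
    """Stable log(softplus_tau(x)) = log(tau * log(1 + exp(x / tau)))."""
    z = x / tau
    if z <= l:
        return z + math.log(tau)  # lower branch: exact to machine precision here
    if z >= 35.0:
        return math.log(tau) + math.log(z)  # extra guard (ours): softplus(z) ~ z
    return math.log(tau) + math.log(math.log1p(math.exp(z)))
```

Even with this stable formulation, the output for strongly negative inputs is approximately \(x/\tau\), i.e., linear in \(x\) - exactly the large dynamic range just discussed, which the fat-tailed approximations of this section are designed to compress.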
The high-level idea is to use \((1+x^{2})^{-1}\), which is proportional to the Cauchy density function (and is also known as a Lorentzian), in ways that maintain key properties of the existing smooth approximations - convexity, positivity, etc. - while changing the asymptotic behavior of the functions from exponential to \(\mathcal{O}(1/x^{2})\) as \(x\to-\infty\), also known as a "fat tail". Further, we will show that the proposed smooth approximations satisfy similar maximum error bounds as their exponentially decaying counterparts, thereby permitting an approximation guarantee similar to Lemma 2 with minor adjustments to the involved constants. While the derivations herein are based on the Cauchy density with its inverse quadratic decay, it is possible to generalize the derivations to e.g. \(\alpha\)-stable distributions, whose symmetric variants permit accurate and efficient numerical computation [2].

**Fat Softplus.** We define

\[\varphi_{+}(x)=\alpha(1+x^{2})^{-1}+\log(1+\exp(x)), \tag{19}\]

for a positive scalar \(\alpha\). The following result shows that we can ensure the monotonicity and convexity of \(\varphi_{+}\) - both important properties of the ReLU that we would like to maintain in our approximation - by carefully choosing \(\alpha\).

**Lemma 5** (Monotonicity and Convexity).: \(\varphi_{+}(x)\) _is positive, monotonically increasing, and strictly convex for \(\alpha\) satisfying_

\[0\leq\alpha<\frac{e^{1/\sqrt{3}}}{2\left(1+e^{1/\sqrt{3}}\right)^{2}}.\]

Proof.: Positivity follows since the first summand is nonnegative for \(\alpha\geq 0\) and the second summand is strictly positive.
Monotonicity and convexity can be shown via canonical differential calculus and bounding relevant quantities.\n\nIn particular, regarding monotonicity, we want to select \\(\\alpha\\) so that the first derivative is bounded below by zero:\n\n\\[\\partial_{x}\\varphi_{+}(x)=\\frac{e^{x}}{1+e^{x}}-\\alpha\\frac{2x}{(1+x^{2})^{2}}\\]\n\nFirst, we note that \\(\\partial_{x}\\varphi_{+}(x)\\) is positive for \\(x<0\\) and any \\(\\alpha\\), since both terms are positive in this regime. For \\(x\\geq 0\\), \\(\\frac{e^{x}}{1+e^{x}}=(1+e^{-x})^{-1}\\geq 1/2\\), and \\(-1/(1+x^{2})^{2}\\geq-1/(1+x^{2})\\), so that\n\n\\[\\partial_{x}\\varphi_{+}(x)\\geq\\frac{1}{2}-\\alpha\\frac{2x}{(1+x^{2})}\\]\n\nForcing \\(\\frac{1}{2}-\\alpha\\frac{2x}{(1+x^{2})}>0\\), and multiplying by \\((1+x^{2})\\) gives rise to a quadratic equation whose roots are \\(x=2\\alpha\\pm\\sqrt{4\\alpha^{2}-1}\\). Thus, there are no real roots for \\(\\alpha<1/2\\). Since the derivative is certainly positive for the negative reals and the guaranteed non-existence of roots implies that the derivative cannot cross zero elsewhere, \\(0\\leq\\alpha<1/2\\) is a sufficient condition for monotonicity of \\(\\varphi_{+}\\).\n\nRegarding convexity, our goal is to prove a similar condition on \\(\\alpha\\) that guarantees the positivity of the second derivative:\n\n\\[\\partial_{x}^{2}\\varphi_{+}(x)=\\alpha\\frac{6x^{2}-2}{(1+x^{2})^{3}}+\\frac{e^{- x}}{(1+e^{-x})^{2}}\\]\n\nNote that \\(\\frac{6x^{2}-2}{(1+x^{2})^{3}}\\) is symmetric around \\(0\\), is negative in \\((-\\sqrt{1/3},\\sqrt{1/3})\\) and has a minimum of \\(-2\\) at \\(0\\). \\(\\frac{e^{-x}}{(1+e^{-x})^{2}}\\) is symmetric around zero and decreasing away from zero. Since the rational polynomial is only negative in \\((-\\sqrt{1/3},\\sqrt{1/3})\\), we can lower bound \\(\\frac{e^{-x}}{(1+e^{-x})^{2}}>\\frac{e^{-\\sqrt{1/3}}}{(1+e^{-\\sqrt{1/3}})^{2}}\\) in \\((-\\sqrt{1/3},\\sqrt{1/3})\\). 
Therefore,

\[\partial_{x}^{2}\varphi_{+}(x)\geq\frac{e^{-x}}{(1+e^{-x})^{2}}-2\alpha.\]

Forcing \(\frac{e^{-\sqrt{1/3}}}{(1+e^{-\sqrt{1/3}})^{2}}-2\alpha>0\) and rearranging yields the result. Since \(\frac{e^{-\sqrt{1/3}}}{(1+e^{-\sqrt{1/3}})^{2}}/2\approx 0.115135\), the convexity condition is stronger than the monotonicity condition (\(\alpha<1/2\)) and therefore subsumes it. 

Importantly, \(\varphi_{+}\) decays only polynomially for increasingly negative inputs, and therefore \(\log\varphi_{+}\) varies only logarithmically, which keeps the range of \(\varphi_{+}\) constrained to values that are more manageable numerically. Similar to Lemma 7, one can show that

\[|\tau\varphi_{+}(x/\tau)-\mathtt{ReLU}(x)|\leq\left(\alpha+\log(2)\right)\tau. \tag{20}\]

There are a large number of approximations or variants of the ReLU that have been proposed as activation functions of artificial neural networks, but to our knowledge, none satisfy the properties that we seek here: (1) smoothness, (2) positivity, (3) monotonicity, (4) convexity, and (5) polynomial decay. For example, the leaky ReLU does not satisfy (1) and (2), and the ELU does not satisfy (5).

**Fat Maximum.** The canonical logsumexp approximation to \(\max_{i}x_{i}\) suffers from numerically vanishing gradients if \(\max_{i}x_{i}-\min_{j}x_{j}\) is larger than a moderate threshold, around 760 in double precision, depending on the floating point implementation. In particular, while elements close to the maximum receive numerically non-zero gradients, elements far away are increasingly likely to have a numerically zero gradient. To fix this behavior for the smooth maximum approximation, we propose

\[\varphi_{\max}(\mathbf{x})=\max_{j}x_{j}+\tau\log\sum_{i}\left[1+\left(\frac{x_{i}-\max_{j}x_{j}}{\tau}\right)^{2}\right]^{-1}.
\\tag{21}\\]\n\nThis approximation to the maximum has the same error bound to the true maximum as the logsumexp approximation:\n\n**Lemma 6**.: _Given \\(\\tau>0\\)_\n\n\\[\\max_{i}x_{i}\\leq\\tau\\;\\phi_{\\max}(x/\\tau)\\leq\\max_{i}x_{i}+\\tau\\log(d). \\tag{22}\\]\n\nProof.: Regarding the lower bound, let \\(i=\\arg\\max_{j}x_{j}\\). For this index, the associated summand in (21) is \\(1\\). Since all summands are positive, the entire sum is lower bounded by \\(1\\), hence\n\n\\[\\tau\\log\\sum_{i}\\left[1+\\left(\\frac{x_{i}-\\max_{j}x_{j}}{\\tau}\\right)^{2} \\right]^{-1}>\\tau\\log(1)=0\\]\n\nAdding \\(\\max_{j}x_{j}\\) to the inequality finishes the proof for the lower bound.\n\nRegarding the upper bound, (21) can be maximized when \\(x_{i}=\\max_{j}x_{j}\\) for all \\(i\\), in which case each \\((x_{i}-\\max_{j}x_{j})^{2}\\) is minimized, and hence each summand is maximized. In this case,\n\n\\[\\tau\\log\\sum_{i}\\left[1+\\left(\\frac{x_{i}-\\max_{j}x_{j}}{\\tau}\\right)^{2} \\right]^{-1}\\leq\\tau\\log\\left(\\sum_{i}1\\right)=\\tau\\log(d).\\]\n\nAdding \\(\\max_{j}x_{j}\\) to the inequality finishes the proof for the upper bound. \n\nFat SigmoidNotably, we encountered a similar problem using regular (log)-sigmoids to smooth the constraint indicators for EI with black-box constraints. In principle the Cauchy cummulative distribution function would satisfy these conditions, but requires the computation of \\(\\arctan\\), a special function that requires more floating point operations to compute numerically than the following function. Here, we want the smooth approximation \\(\\iota\\) to satisfy 1) positivity, 2) monotonicity, 3) polynomial decay, and 4) \\(\\iota(x)=1/2-\\iota(-x)\\). 
Let \(\gamma=\sqrt{1/3}\), then we define

\[\iota(x)=\begin{cases}\frac{2}{3}\left(1+(x-\gamma)^{2}\right)^{-1}&x<0,\\ 1-\frac{2}{3}\left(1+(x+\gamma)^{2}\right)^{-1}&x\geq 0.\end{cases}\]

\(\iota\) is monotonically increasing, satisfies \(\iota(x)\to 1\) as \(x\to\infty\), \(\iota(0)=1/2\), and \(\iota(x)=\mathcal{O}(1/x^{2})\) as \(x\to-\infty\). Further, we note that the asymptotics are what primarily matters here, but that the approximation can also be made tighter by introducing a temperature parameter \(\tau\) and letting \(\iota_{\tau}(x)=\iota(x/\tau)\). The approximation of \(\iota_{\tau}\) to the Heaviside step function becomes tighter point-wise as \(\tau\to 0^{+}\), except at the origin, where \(\iota_{\tau}(0)=1/2\), similar to the canonical sigmoid.

### Constrained Expected Improvement

For the analytical case, many computational frameworks already provide a numerically stable implementation of the logarithm of the Gaussian cumulative distribution function, in the case of PyTorch, torch.special.log_ndtr, which can be readily used in conjunction with our implementation of LogEI, as described in Sec. 4.3.

For the case of Monte-Carlo parallel EI, we implemented the fat-tailed \(\iota\) function from Sec. A.4 to approximate the constraint indicator and compute the per-candidate, per-sample acquisition utility using

\[\mathtt{logsoftplus}_{\tau_{0}}(\xi_{i}(\mathbf{x}_{j})-y^{*})+\sum_{k}\log\iota\left(-\frac{\xi_{i}^{(k)}(\mathbf{x}_{j})}{\tau_{\text{cons}}}\right),\]

where \(\xi_{i}^{(k)}\) is the \(i\)th sample of the \(k\)th constraint model, and \(\tau_{\text{cons}}\) is the temperature parameter controlling the approximation to the constraint indicator. 
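For concreteness, the fat maximum of Eq. (21) and the fat sigmoid \(\iota\) defined above can be sketched in a few lines of standalone Python (function names are ours; actual implementations operate on batched tensors):

```python
import math

def fat_max(x, tau=1.0):
    """Fat maximum of Eq. (21): max_j x_j plus a log-sum of Cauchy-like
    terms whose gradients decay only polynomially with distance to the max."""
    m = max(x)
    s = sum(1.0 / (1.0 + ((xi - m) / tau) ** 2) for xi in x)
    return m + tau * math.log(s)

def fat_sigmoid(x):
    """Fat sigmoid iota: positive, monotone, iota(0) = 1/2, with O(1/x^2)
    decay as x -> -inf (the piecewise definition above)."""
    g = math.sqrt(1.0 / 3.0)
    if x < 0:
        return (2.0 / 3.0) / (1.0 + (x - g) ** 2)
    return 1.0 - (2.0 / 3.0) / (1.0 + (x + g) ** 2)
```

Both functions stay finite and retain numerically non-zero gradients far from the maximum or the constraint boundary, which is the point of the fat-tailed constructions.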
While this functionality is in our implementation, our benchmark results use the analytical version.

### Parallel Expected Hypervolume Improvement

The hypervolume improvement can be computed via the inclusion-exclusion principle, see [13] for details; here we focus on the numerical issues concerning qEHVI. To this end, we define

\[z_{k,i_{1},\ldots,i_{j}}^{(m)}=\min\left[\mathbf{u}_{k},\mathbf{f}(\mathbf{x}_{i_{1}}),\ldots,\mathbf{f}(\mathbf{x}_{i_{j}})\right],\]

where \(\mathbf{f}\) is the vector-valued objective function, and \(\mathbf{u}_{k}\) is the vector of upper bounds of one of \(K\) hyper-rectangles that partition the non-Pareto-dominated space, see [13] for details on the partitioning. Letting \(\mathbf{l}_{k}\) be the corresponding lower bounds of the hyper-rectangles, the hypervolume improvement can then be computed as

\[\text{HVI}(\{\mathbf{f}(\mathbf{x}_{i})\}_{i=1}^{q})=\sum_{k=1}^{K}\sum_{j=1}^{q}\sum_{X_{j}\in\mathcal{X}_{j}}(-1)^{j+1}\prod_{m=1}^{M}[z_{k,X_{j}}^{(m)}-l_{k}^{(m)}]_{+}, \tag{23}\]

where \(\mathcal{X}_{j}=\{X_{j}\subset\mathcal{X}_{\text{cand}}:|X_{j}|=j\}\) is the set of all subsets of \(\mathcal{X}_{\text{cand}}\) of size \(j\) and \(z_{k,X_{j}}^{(m)}=z_{k,i_{1},\ldots,i_{j}}^{(m)}\) for \(X_{j}=\{\mathbf{x}_{i_{1}},\ldots,\mathbf{x}_{i_{j}}\}\).

To find a numerically stable formulation of the logarithm of this expression, we first re-purpose the \(\varphi_{\max}\) function to compute the minimum in the expression of \(z^{(m)}_{k,i_{1},\ldots,i_{j}}\), via \(\varphi_{\min}(x)=-\varphi_{\max}(-x)\). Further, we use the \(\varphi_{+}\) function of Sec. A.4, as in the single-objective case, to approximate \([z^{(m)}_{k,X_{j}}-l^{(m)}_{k}]_{+}\). 
We then have

\[\log\prod_{m=1}^{M}\varphi_{+}[z^{(m)}_{k,X_{j}}-l^{(m)}_{k}]=\sum_{m=1}^{M}\log\varphi_{+}[z^{(m)}_{k,X_{j}}-l^{(m)}_{k}]. \tag{24}\]

Since we can only transform positive quantities to log space, we split the sum in Eq. (23) into positive and negative components, depending on the sign of \((-1)^{j+1}\), and compute the result using a numerically stable implementation of \(\log(\exp(\log\text{ of positive terms})-\exp(\log\text{ of negative terms}))\). The remaining sums over \(k\) and \(j\) can be carried out by applying logsumexp to the resulting quantity. Finally, applying logsumexp to reduce over an additional Monte-Carlo sample dimension yields the formulation of qLogEHVI that we use in our multi-objective benchmarks.

### Probability of Improvement

Numerical improvements for the probability of improvement acquisition function, which is defined as \(\alpha(x)=\Phi\left(\frac{\mu(x)-y^{*}}{\sigma(x)}\right)\), where \(\Phi\) is the standard Normal CDF, can be obtained simply by taking the logarithm using a numerically stable implementation of \(\log(\Phi(z))=\texttt{logerfc}\Big(-\frac{1}{\sqrt{2}}z\Big)-\log(2)\), where logerfc is computed as

\[\texttt{logerfc}(x)=\begin{cases}\texttt{log}(\texttt{erfc}(x))&x\leq 0\\ \texttt{log}(\texttt{erfcx}(x))-x^{2}&x>0.\end{cases}\]

### q-Noisy Expected Improvement

The same numerical improvements used by qLogEI to improve Monte-Carlo expected improvement (qEI) in Appendix A.3 can be applied to improve the fully Monte Carlo Noisy Expected Improvement [6, 52] acquisition function. 
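The "numerically stable implementation of \(\log(\exp(\cdot)-\exp(\cdot))\)" used for the positive/negative split of Eq. (23) above boils down to the standard log1p trick; a minimal sketch (the helper name is ours):

```python
import math

def logdiffexp(log_a, log_b):
    """Stable log(exp(log_a) - exp(log_b)) for log_a > log_b, as needed
    when recombining the positive and negative terms of Eq. (23)."""
    if log_b >= log_a:
        raise ValueError("requires log_a > log_b")
    # log(e^a - e^b) = a + log(1 - e^(b - a)); log1p avoids cancellation
    return log_a + math.log1p(-math.exp(log_b - log_a))
```

For log_a = 1000 and log_b = 999 the naive expression overflows in double precision, while logdiffexp returns a finite value near 999.54.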
As in qLogEI, we can (i) approximate \(\max(0,\mathbf{x})\) using a softplus to smooth the sample-level improvements and ensure that they are mathematically positive, (ii) approximate the maximum over the \(q\) candidate designs by the norm \(\|\cdot\|_{1/\tau_{\max}}\), and (iii) take the logarithm of the resulting smoothed value to mitigate vanishing gradients. To further mitigate vanishing gradients, we can again leverage the Fat Softplus and Fat Maximum approximations. The only notable difference between the \(q\)EI and \(q\)NEI acquisition functions is the choice of incumbent, and similarly only a change of incumbent is required to obtain qLogNEI from qLogEI. Specifically, when the scalar \(y^{*}\) in Equation (10) is replaced with a vector containing the new incumbent under each sample, we obtain the qLogNEI acquisition value. The \(i^{\text{th}}\) element of the incumbent vector for qLogNEI is \(\max_{j^{\prime}=q+1}^{n+q}\xi^{i}(\mathbf{x}_{j^{\prime}})\), where \(\mathbf{x}_{q+1},\ldots,\mathbf{x}_{n+q}\) are the previously evaluated designs and \(\xi^{i}(\mathbf{x}_{j^{\prime}})\) is the value of the \(j^{\prime\,\text{th}}\) point under the \(i^{\text{th}}\) sample from the joint posterior over \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n+q}\). 
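The per-sample incumbent just described can be sketched with plain lists (a toy version; actual implementations operate on posterior sample tensors):

```python
def noisy_incumbents(samples, q):
    """Per-sample incumbents for qLogNEI. samples[i][j] is the i-th joint
    posterior sample at point j; the first q columns are the new candidates,
    the remaining columns the previously evaluated designs. A hard max is
    used, since no gradients w.r.t. past designs are required."""
    return [max(row[q:]) for row in samples]
```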
We note that we use a hard maximum to compute the incumbent for each sample because we do not need to compute gradients with respect to the previously evaluated designs \(\mathbf{x}_{q+1},\ldots,\mathbf{x}_{n+q}\).

We further obtain computational speed-ups by (i) pruning the set of previously evaluated points that are considered for being the best incumbent to include only those designs with non-zero probability of being the best design, and (ii) caching the Cholesky decomposition of the posterior covariance over the resulting pruned set of previously evaluated designs and using low-rank updates for efficient sampling [14].

For experimental results of \(q\)LogNEI see Section D.4.

## Appendix B Strategies for Optimizing Acquisition Functions

As discussed in Section 2.3, a variety of different approaches and heuristics have been applied to the problem of optimizing acquisition functions. For the purpose of this work, we only consider continuous domains \(\mathbb{X}\). While discrete and/or mixed domains are also relevant in practice and have received substantial attention in recent years - see e.g. Baptista and Poloczek [7], Daulton et al. [15], Deshwal et al. [18], Kim et al. [44], Oh et al. [62], Wan et al. [74] - our work here on improving acquisition functions is largely orthogonal to this (though the largest gains should be expected when using gradient-based optimizers, as is done in mixed-variable BO when conditioning on discrete variables, or when performing discrete or mixed BO using continuous relaxations, probabilistic reparameterization, or straight-through estimators [15]).

Arguably the simplest approach to optimizing acquisition functions is grid search or random search. While variants of this combined with local descent can make sense in the context of optimizing over discrete or mixed spaces and when acquisition functions can be evaluated efficiently in batch (e.g. 
on GPUs), this clearly does not scale to higher-dimensional continuous domains due to the exponential growth of the space to cover.

Another relatively straightforward approach is to use zeroth-order methods such as DIRECT[41] (used e.g. by Dragonfly[43]) or the popular CMA-ES[32]. These approaches are easy to implement as they avoid the need to compute gradients of acquisition functions. However, not relying on gradients is also what renders their optimization performance inferior to gradient-based methods, especially for higher-dimensional problems and/or joint batch optimization in parallel Bayesian optimization.

The most common approach to optimizing acquisition functions on continuous domains is using gradient descent-type algorithms. Gradients are either computed from analytically derived closed-form expressions, or via the auto-differentiation capabilities of modern ML systems such as PyTorch[63], Tensorflow[1], or JAX[9].

For analytic acquisition functions, a common choice of optimizer is L-BFGS-B[10], a quasi-second order method that uses gradient information to approximate the Hessian and supports box constraints. If other, more general constraints are imposed on the domain, other general-purpose nonlinear optimizers such as SLSQP[47] or IPOPT[73] are used (e.g. by BoTorch). For Monte Carlo (MC) acquisition functions, Wilson et al. [77] propose using stochastic gradient ascent (SGA) based on stochastic gradient estimates obtained via the reparameterization trick[45]. Stochastic first-order algorithms are also used by others, including e.g. Wang et al. [75] and Daulton et al. [15]. Balandat et al. [6] build on the work by Wilson et al. [77] and show how sample average approximation (SAA) can be employed to obtain deterministic gradient estimates for MC acquisition functions, which has the advantage of being able to leverage the improved convergence rates of optimization algorithms designed for deterministic functions such as L-BFGS-B. 
This general approach has since been used for a variety of other acquisition functions, including e.g. Daulton et al. [13] and Jiang et al. [40].

Very few implementations of Bayesian Optimization actually use higher-order derivative information, as this either requires complex derivations of analytical expressions and their custom implementation, or computation of second-order derivatives via automated differentiation, which is less well supported and computationally much more costly than computing only first-order derivatives. One notable exception is Cornell-MOE[78, 79], which supports Newton's method (though this is limited to the acquisition functions implemented in C++ within the library and not easily extensible to other acquisition functions).

### Common initialization heuristics for multi-start gradient descent

One of the key issues with gradient-based optimization of acquisition functions is the optimizer getting stuck in local optima due to the generally highly non-convex objective. This is typically addressed by means of restarting the optimizer from a number of different initial conditions distributed across the domain.

A variety of different heuristics have been proposed for this. The most basic one is to restart from random points uniformly sampled from the domain (for instance, scikit-optimize[33] uses this strategy). However, as we have argued in this paper, acquisition functions can be (numerically) zero in large parts of the domain, and so purely random restarts can become ineffective, especially in higher dimensions and with more data points. A common strategy is therefore to either augment or bias the restart point selection to include initial conditions that are closer to "promising points". GPyOpt[69] augments random restarts with the best points observed so far, or alternatively with points generated via Thompson sampling. 
Spearmint[66] initializes starting points based on Gaussian perturbations of the current best point. BoTorch[6] selects initial points by performing Boltzmann sampling on a set of random points according to their acquisition function value; the goal of this strategy is to achieve a biased random sampling across the domain that is likely to generate more points around regions with high acquisition value, but remains asymptotically space-filling. The initialization strategy used by Trieste[64] works similarly to the one in BoTorch, but instead of using soft randomization via Boltzmann sampling, it simply selects the top-\(k\) points. Most recently, Gramacy et al. [31] proposed distributing initial conditions using a Delaunay triangulation of previously observed data points. This is an interesting approach that generalizes the idea of initializing "in between" observed points from the single-dimensional case. However, this approach does not scale well with the problem dimension and the number of observed data points due to the complexity of computing the triangulation (with wall time empirically found to be exponential in the dimension, see [31, Fig. 3], and worst-case quadratic in the number of observed points).

However, while these initialization strategies can help substantially with better optimizing acquisition functions, they ultimately cannot resolve foundational issues with the acquisition functions themselves. Ensuring that an acquisition function provides enough gradient information (not just mathematically but also numerically) is therefore key to being able to optimize it effectively, especially in higher dimensions and with more observed data points.

## Appendix C Proofs

See 1

Proof.: We begin by expanding the argument to the \(h\) function in Eq. 
(2) as a sum of (1) the standardized error of the posterior mean \(\mu_{n}\) to the true objective \(f\) and (2) the standardized difference of the value of the true objective \(f\) at \(x\) to the best previously observed value \(y^{*}=\max_{i}^{n}y_{i}\):

\[\frac{\mu_{n}(\mathbf{x})-y^{*}}{\sigma_{n}(\mathbf{x})}=\frac{\mu_{n}(\mathbf{x})-f(\mathbf{x})}{\sigma_{n}(\mathbf{x})}+\frac{f(\mathbf{x})-y^{*}}{\sigma_{n}(\mathbf{x})} \tag{25}\]

We proceed by bounding the first term on the right hand side. Note that by assumption, \(f(\mathbf{x})\sim\mathcal{N}(\mu_{n}(\mathbf{x}),\sigma_{n}(\mathbf{x})^{2})\) and thus \((\mu_{n}(\mathbf{x})-f(\mathbf{x}))/\sigma_{n}(\mathbf{x})\sim\mathcal{N}(0,1)\). For \(C>0\), we use a standard bound on the Gaussian tail probability to obtain

\[P\left(\frac{\mu_{n}(\mathbf{x})-f(\mathbf{x})}{\sigma_{n}(\mathbf{x})}>C\right)\leq e^{-C^{2}/2}/2. \tag{26}\]

Therefore, \((\mu_{n}(\mathbf{x})-f(\mathbf{x}))/\sigma_{n}(\mathbf{x})<C\) holds with probability at least \(1-e^{-C^{2}/2}/2\). [...]

_Given \(\tau_{0},\tau_{\max}>0\), the approximation error of \(\operatorname{qLogEI}\) to \(\operatorname{qEI}\) is bounded by_

\[\left|e^{\operatorname{qLogEI}(\mathbf{X})}-\operatorname{qEI}(\mathbf{X})\right|\leq(q^{\tau_{\max}}-1)\;\operatorname{qEI}(\mathbf{X})+\log(2)\tau_{0}q^{\tau_{\max}}. \tag{11}\]

Proof.: Let \(z_{iq}=\xi_{i}(\mathbf{x}_{q})-y^{*}\), where \(i\in\{1,\ldots,n\}\), and, for brevity of notation, let \(\mathtt{lse}\), \(\mathtt{lsp}\) refer to the \(\mathtt{logsumexp}\) and \(\mathtt{logsoftplus}\) functions, respectively, and \(\mathtt{ReLU}(x)=\max(x,0)\). 
We then bound \(n|e^{\operatorname{qLogEI}(\mathbf{X})}-\operatorname{qEI}(\mathbf{X})|\) by

\[\left|\exp(\mathtt{lse}_{i}(\tau_{\max}\mathtt{lse}_{q}(\mathtt{lsp}_{\tau_{0}}(z_{iq})/\tau_{\max})))-\sum_{i}\max_{q}\mathtt{ReLU}(z_{iq})\right|\] \[\leq\sum_{i}\left|\exp(\tau_{\max}\mathtt{lse}_{q}(\mathtt{lsp}_{\tau_{0}}(z_{iq})/\tau_{\max}))-\max_{q}\mathtt{ReLU}(z_{iq})\right|\] \[=\sum_{i}\left|\|\mathtt{softplus}_{\tau_{0}}(z_{i\cdot})\|_{1/\tau_{\max}}-\max_{q}\mathtt{ReLU}(z_{iq})\right| \tag{30}\] \[\leq\sum_{i}\left|\|\mathtt{softplus}_{\tau_{0}}(z_{i\cdot})\|_{1/\tau_{\max}}-\max_{q}\mathtt{softplus}_{\tau_{0}}(z_{iq})\right|\] \[\qquad+\left|\max_{q}\mathtt{softplus}_{\tau_{0}}(z_{iq})-\max_{q}\mathtt{ReLU}(z_{iq})\right|\]

The first and second inequalities are due to the triangle inequality, where for the second we used \(|a-c|\leq|a-b|+|b-c|\) with \(b=\max_{q}\mathtt{softplus}_{\tau_{0}}(z_{iq})\).

To bound the first term in the sum, note that \(\|\mathbf{x}\|_{\infty}\leq\|\mathbf{x}\|_{q}\leq\|\mathbf{x}\|_{\infty}d^{1/q}\), thus \(0\leq\|\mathbf{x}\|_{q}-\|\mathbf{x}\|_{\infty}\leq(d^{1/q}-1)\|\mathbf{x}\|_{\infty}\), and therefore

\[\left|\|\mathtt{softplus}_{\tau_{0}}(z_{i\cdot})\|_{1/\tau_{\max}}-\max_{q}\mathtt{softplus}_{\tau_{0}}(z_{iq})\right| \leq(q^{\tau_{\max}}-1)\max_{q}\mathtt{softplus}_{\tau_{0}}(z_{iq})\] \[\leq(q^{\tau_{\max}}-1)(\max_{q}\mathtt{ReLU}(z_{iq})+\log(2)\tau_{0})\]

The second term in the sum can be bounded since \(|\mathtt{softplus}_{\tau_{0}}(x)-\mathtt{ReLU}(x)|\leq\log(2)\tau_{0}\) (see Lemma 7 below), and therefore

\[\left|\max_{q}\mathtt{softplus}_{\tau_{0}}(z_{iq})-\max_{q}\mathtt{ReLU}(z_{iq})\right|\leq\log(2)\tau_{0}.\]

Dividing Eq. 
(30) by \(n\) to compute the sample mean finishes the proof for the Monte-Carlo approximations to the acquisition value. Taking \(n\to\infty\) further proves the result for the mathematical definitions of the parallel acquisition values, i.e. Eq. (4). 

Approximating the \(\mathtt{ReLU}\) using the \(\mathtt{softplus}_{\tau}(x)=\tau\log(1+\exp(x/\tau))\) function leads to an approximation error of at most \(\log(2)\tau\) in the infinity norm, i.e. \(\|\mathtt{softplus}_{\tau}-\mathtt{ReLU}\|_{\infty}=\log(2)\tau\). The following lemma formally proves this.

**Lemma 7**.: _Given \(\tau>0\), we have for all \(x\in\mathbb{R}\),_

\[|\mathtt{softplus}_{\tau}(x)-\mathtt{ReLU}(x)|\leq\log(2)\;\tau. \tag{31}\]

Proof.: Taking the (sub-)derivative of \(\mathtt{softplus}_{\tau}-\mathtt{ReLU}\), we get

\[\partial_{x}\left[\mathtt{softplus}_{\tau}(x)-\mathtt{ReLU}(x)\right]=(1+e^{-x/\tau})^{-1}-\begin{cases}1&x>0\\ 0&x\leq 0\end{cases}\]

which is positive for all \(x<0\) and negative for all \(x>0\), hence the extremum must be at \(x=0\), at which point \(\mathtt{softplus}_{\tau}(0)-\mathtt{ReLU}(0)=\log(2)\tau\). Analyzing the asymptotic behavior, \(\lim_{x\to\pm\infty}(\mathtt{softplus}_{\tau}(x)-\mathtt{ReLU}(x))=0\), and therefore \(\mathtt{softplus}_{\tau}(x)>\mathtt{ReLU}(x)\) for all \(x\in\mathbb{R}\). 

Approximation guarantees for the fat-tailed non-linearities of App. A.4 can be derived similarly.

## Appendix D Additional Empirical Details and Results

### Experimental details

All algorithms are implemented in BoTorch. The analytic EI, qEI, and cEI baselines utilize the standard BoTorch implementations. We utilize the original authors' implementations of single-objective JES [36], GIBBON [61], and multi-objective JES [71], which are all available in the main BoTorch repository. All simulations are run with 32 replicates and error bars represent \(\pm 2\) times the standard error of the mean. 
We use a Matern-5/2 kernel with automatic relevance determination (ARD), i.e. separate length-scales for each input dimension, and a top-hat prior on the length-scales in \([0.01,100]\). The input spaces are normalized to the unit hyper-cube and the objective values are standardized during each optimization iteration.

### Additional Empirical Results on Vanishing Values and Gradients

The left plot of Figure 1 in the main text shows that for a large fraction of points across the domain the gradients of EI are numerically essentially zero. In this section we provide additional detail on these simulations as well as intuition for the results.

The data generating process (DGP) for the training data used for the left plot of Figure 1 is the following: 80% of training points are sampled uniformly at random from the domain, while 20% are sampled according to a multivariate Gaussian centered at the function maximum with a standard deviation of 25% of the length of the domain. The idea behind this DGP is to mimic the kind of data one would see during a Bayesian Optimization loop (without having to run thousands of BO loops to generate Figure 1). Under this DGP with the chosen test problem, the incumbent (best observed point) is typically better than the values at the random test locations, and this becomes increasingly the case as the dimensionality of the problem increases and the number of training points grows. This is exactly the situation that is typical when conducting Bayesian Optimization.

For a particular replicate, Figure 15 shows the model fits in-sample (black), out-of-sample (blue), and the best point identified so far (red) with 60 training points and a random subset of 50 (out of 2000) test points. One can see that the model produces decent mean predictions for out-of-sample data, and that the uncertainty estimates appear reasonably well-calibrated (e.g., the credible intervals typically cover the true value). 
A practitioner would consider this a good model for the purposes of Bayesian Optimization. However, while there is ample uncertainty in the predictions of the model away from the training points, for the vast majority of points the mean prediction is many standard deviations away from the incumbent value (the error bars are \(\pm 2\) standard deviations). This is the key reason for EI taking on zero (or vanishingly small) values and having vanishing gradients.

Figure 16: Histogram of \(z(x)\), the argument to \(h\) in (2), corresponding to Figure 15. Vertical lines are the thresholds corresponding to values \(z\) below which \(h(z)\) is less than the respective threshold. The majority of the test points fall below these threshold values.

Figure 15: Model fits for a typical replicate used in generating Fig. 1 (left). While there is ample uncertainty in the test point predictions (blue, chosen uniformly at random), the mean prediction for the majority of points is many standard deviations away from the incumbent value (red).

To illustrate this, Figure 16 shows the histogram of \(z(x)\) values, the argument to the function \(h\) in (2). It also contains the thresholds corresponding to the values \(z\) below which \(h(z)\) is less than the respective threshold. Since \(\sigma(x)\) is close to 1 for most test points (mean: 0.87, std: 0.07), this is more or less the same as saying that EI(\(z(x)\)) is less than the threshold. It is evident from the histogram that the majority of the test points fall below these threshold values (especially for larger thresholds), showing that the associated acquisition function values (and similarly the gradients) are numerically almost zero, causing issues during acquisition function optimization.

### Parallel Expected Improvement

Figure 17 reports the optimization performance of parallel BO on the 16-dimensional Ackley and Levy functions for both sequential greedy and joint batch optimization. 
Besides the apparent substantial advantages of qLogEI over qEI on Ackley, a key observation here is that jointly optimizing the candidates of batch acquisition functions can yield highly competitive optimization performance, especially as the batch size increases. Notably, joint optimization of the batch with qLogEI starts out performing worse in terms of BO performance than sequential greedy on the Levy function, but outperforms all sequential methods as the batch size increases. See also Figure 18 for the scaling of each method with respect to the batch size \(q\).

Figure 17: Parallel optimization performance on the Ackley and Levy functions in 16 dimensions. qLogEI outperforms all baselines on Ackley, where joint optimization of the batch also improves on the sequential greedy. On Levy, joint optimization of the batch with qLogEI starts out performing worse in terms of BO performance than sequential, but wins out over all sequential methods as the batch size increases.

Figure 18: Breakdown of the parallel optimization performance of Figure 17 per method, rather than per batch size. On Levy, qLogEI exhibits a much smaller deterioration in BO performance due to increases in parallelism than the methods relying on sequentially optimized batches of candidates.

### Noisy Expected Improvement

Figure 19 benchmarks the "noisy" variant, qLogNEI. Similar to the noiseless case, the advantage of the LogEI versions over the canonical counterparts grows with the dimensionality of the problem, and the noisy version improves on the canonical versions for larger noise levels.

### Multi-Objective optimization with qLogEHVI

Figure 20 compares qLogEHVI and qEHVI on 6 different test problems with 2 or 3 objectives, ranging from 2 to 30 dimensions. This includes 3 real-world inspired problems: cell network design for optimizing coverage and capacity [19], laser plasma acceleration optimization [38], and vehicle design optimization [54, 68]. 
The results are consistent with our findings in the single-objective and constrained cases: qLogEHVI consistently outperforms qEHVI, and the gap is larger on higher-dimensional problems.

### Combining LogEI with TuRBO for High-Dimensional Bayesian Optimization

In the main text, we show how LogEI performs particularly well relative to other baselines in high-dimensional spaces. Here, we show how LogEI can work synergistically with trust region based methods for high-dimensional BO, such as TuRBO [24].

Fig. 21 compares the performance of LogEI, TuRBO-1 + LogEI, TuRBO-1 + EI, as well as the original Thompson-sampling based implementation on the 50d Ackley test problem. Combining TuRBO-1 with LogEI results in substantially better performance than the baselines when using a small number of function evaluations. Thompson sampling (TS) ultimately performs better after \(10,000\) evaluations, but this experiment shows the promise of combining TuRBO and LogEI in settings where we cannot do thousands of function evaluations. Since we optimize batches of \(q=50\) candidates jointly, we also increase the number of Monte-Carlo samples from the Gaussian process from \(128\), the BoTorch default, to \(512\), and use the fat-tailed smooth approximations of Sec. A.4 to ensure a strong gradient signal to all candidates of the batch.

Figure 19: Optimization performance with noisy observations on Hartmann 6d (top), Ackley 8d (mid), and Ackley 16d (bottom) for varying noise levels and \(q=1\). We set the noise level as a proportion of the maximum range of the function, which is \(\approx 3.3\) for Hartmann and \(\approx 20\) for Ackley. That is, the \(1\%\) noise level corresponds to a standard deviation of \(0.2\) for Ackley. qLogNEI outperforms both canonical EI counterparts and GIBBON significantly in most cases, especially in higher dimensions.

Regarding the ultimate out-performance of TS over qLogEI, we think this is due to a model misspecification, since the smoothness of a GP with the Matern-5/2 kernel cannot express the non-differentiability of Ackley at the optimum.

### Constrained Problems

While running the benchmarks using CEI in Section 5, we found that we in fact improved upon a best known result from the literature. We compare with the results in Coello and Montes [11], which are generated using 30 runs of **80,000 function evaluations** each.

Figure 21: Combining LogEI with TuRBO on the high-dimensional 50d Ackley problem yields significant improvement in objective value for a small number of function evaluations. Unlike for qEI, no random restarts are necessary to achieve good performance when performing joint optimization of the batch with qLogEI (\(q=50\)).

Figure 20: Sequential (\(q=1\)) optimization performance on multi-objective problems, as measured by the hypervolume of the Pareto frontier across observed points. This plot includes JES [71]. Similar to the single-objective case, qLogEHVI significantly outperforms all baselines on all test problems.

* For the pressure vessel design problem, Coello and Montes [11] quote a best-case feasible objective of \(6059.946341\). 
Out of just 16 different runs, LogEI achieves a worst-case feasible objective of \(5659.1108\) **after only 110 evaluations**, and a best case of \(5651.8862\), a notable reduction in objective value using almost three orders of magnitude fewer function evaluations.
* For the welded beam problem, Coello and Montes [11] quote \(1.728226\), whereas LogEI found a best case of \(1.7496\) after 110 evaluations, which is slightly worse, but we stress that this is using three orders of magnitude fewer evaluations.
* For the tension-compression problem, LogEI found a feasible solution with value \(0.0129\) after 110 evaluations, compared to the \(0.012681\) reported in [11].

We emphasize that genetic algorithms and BO are generally concerned with distinct problem classes: BO focuses heavily on sample efficiency and the small-data regime, while genetic algorithms often utilize a substantially larger number of function evaluations. The results here show that in this case BO is competitive with and can even outperform a genetic algorithm using only a tiny fraction of the sample budget, see App. D.7 for details. Sample efficiency is particularly relevant for physical simulators whose evaluation takes significant computational effort, often rendering several tens of thousands of evaluations infeasible.

### Parallel Bayesian Optimization with cross-batch constraints

In some parallel Bayesian optimization settings, batch optimization is subject to non-trivial constraints across the batch elements. A natural example of this are budget constraints. For instance, in the context of experimental material science, consider the case where each manufactured compound requires a certain amount of different materials (as described by its parameters), but there is only a fixed total amount of material available (e.g., because the stock is limited due to cost and/or storage capacity). 
In such a situation, batch generation will be subject to a budget constraint that is not separable across the elements of the batch. Importantly, in that case sequential greedy batch generation is not an option, since it is not able to incorporate the budget constraint. Therefore, joint batch optimization is required.

Here we give one such example in the context of Bayesian Optimization for sequential experimental design. We consider the five-dimensional silver nanoparticle flow synthesis problem from Liang et al. [53]. In this problem, the goal is to optimize the absorbance spectrum score of the synthesized nanoparticles over five parameters: four flow rate ratios of different components (silver, silver nitrate, trisodium citrate, polyvinyl alcohol) and a total flow rate \(Q_{tot}\).

The original problem was optimized over a discrete set of parameterizations. For our purposes we created a continuous surrogate model based on the experimental dataset (available from [https://github.com/PV-Lab/Benchmarking](https://github.com/PV-Lab/Benchmarking)) by fitting an RBF interpolator (smoothing factor of 0.01) in scipy on the (negative) loss. We use the same search space as Liang et al. [53], but in addition to the box bounds on the parameters we also impose a constraint on the total flow rate across the batch: \(\sum_{i=1}^{q}Q_{tot}^{i}\leq Q_{tot}^{max}\) with \(Q_{tot}^{max}=2000\) mL/min (the maximum flow rate per syringe / batch element is 1000 mL/min). This constraint expresses the maximum throughput limit of the microfluidic experimentation setup. As a result of this constraint, not all elements of a batch of experiments (in this case automated syringe pumps) can operate in the high-flow regime at the same time.

In our experiment, we use a batch size of \(q=3\) and start the optimization from 5 randomly sampled points from the domain. 
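Assuming the constants of the setup above, the cross-batch budget can be expressed and, for illustration, sampled by simple rejection; the hit-and-run sampler used for the random baseline is more efficient but longer to sketch:

```python
import random

Q_MAX_PER_PUMP = 1000.0  # mL/min, per syringe / batch element
Q_MAX_TOTAL = 2000.0     # mL/min, cross-batch throughput limit

def sample_flow_rates(q=3, rng=random):
    """Uniformly sample per-element total flow rates subject to the
    cross-batch budget sum_i Q_tot^i <= Q_MAX_TOTAL. Rejection sampling
    from the box is uniform on the constraint polytope."""
    while True:
        rates = [rng.uniform(0.0, Q_MAX_PER_PUMP) for _ in range(q)]
        if sum(rates) <= Q_MAX_TOTAL:
            return rates
```

Because the constraint couples all \(q\) elements, a sample like this can seed joint batch optimization, but no sequential greedy scheme can respect it.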
We run 75 replicates with random initial conditions (shared across the different methods); error bars show \\(\\pm\\) two times the standard error of the mean. Our baseline is uniform random sampling from the domain (we use a hit-and-run sampler to sample uniformly from the constraint polytope \\(\\sum_{i=1}^{q}Q_{tot}^{i}\\leq Q_{tot}^{max}\\)). We compare qEI vs. qLogEI, and for each of the two we evaluate (i) the version with the batch constraint imposed explicitly in the optimizer (the optimization in this case uses scipy's SLSQP solver), and (ii) a heuristic that first samples the total flow rates \\(\\{Q_{tot}^{i}\\}_{i=1}^{q}\\) uniformly from the constraint set, and then optimizes the acquisition function with the flow rates fixed to the sampled values.\n\nThe results in Figure 22 show that while both the heuristic (\"random \\(Q_{tot}\\)\") and the proper constrained optimization (\"batch-constrained \\(Q_{tot}\\)\") substantially outperform the purely random baseline, it requires using both LogEI _and_ proper constraints to achieve additional performance gains over the other three combinations. Importantly, this approach is only possible by performing joint optimization of the batch, which underlines the importance of qLogEI and its siblings being able to achieve superior joint batch optimization in settings like this.\n\n### Details on Multi-Objective Problems\n\nWe consider a variety of multi-objective benchmark problems. We evaluate performance on three synthetic biobjective problems: Branin-Currin (\\(d=2\\)) [8], ZDT1 (\\(d=6\\)) [83], and DTLZ2 (\\(d=6\\)) [17]. As described in Section 5, we also evaluated performance on three real-world-inspired problems. For the laser plasma acceleration problem, we used the public data available from Irshad et al. [39] to fit an independent GP surrogate model to each objective.
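The batch-constrained variant above imposes the cross-batch flow-rate constraint explicitly via scipy's SLSQP solver. A minimal sketch, using a toy concave objective as a stand-in for the actual acquisition function (the objective, starting point, and tolerances here are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a batch acquisition value (NOT the paper's qLogEI):
# it prefers high flow rates, so the cross-batch budget constraint binds.
def neg_acquisition(Q):
    return -np.sum(np.sqrt(Q))

q, q_max, total_max = 3, 1000.0, 2000.0
res = minimize(
    neg_acquisition,
    x0=np.full(q, 500.0),                      # feasible interior start
    method="SLSQP",
    bounds=[(1e-6, q_max)] * q,                # per-syringe flow limit
    constraints=[{"type": "ineq",              # total_max - sum(Q) >= 0
                  "fun": lambda Q: total_max - Q.sum()}],
)
```

In the real setting one would flatten all \\(q\\) five-dimensional candidates into a single decision vector and apply the linear constraint only to the flow-rate coordinates; for brevity the sketch optimizes the flow rates alone. For the symmetric objective above, the optimizer splits the budget evenly across the batch.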
We only queried the surrogate at the highest fidelity to create a single-fidelity benchmark.\n\n### Effect of Temperature Parameter\n\nIn Figure 23, we examine the effect of fixed \\(\\tau\\) for the softplus operator on optimization performance. We find that smaller values typically work better.\n\n### Effect of the initialization strategy\n\nPackages and frameworks commonly utilize smart initialization heuristics to improve acquisition function optimization performance. In Figure 24, we compare simple random restart optimization, where initial points are selected uniformly at random, with BoTorch's default initialization strategy, which evaluates the acquisition function on a large number of points selected from a scrambled Sobol sequence and selects \\(n\\) points at random via Boltzmann sampling (i.e., sampling using probabilities computed by taking a softmax over the acquisition values) [6]. Here we consider 1024 initial candidates. We find that the BoTorch initialization strategy improves regret in all cases, and that qLogEI, followed by UCB, shows less sensitivity to the choice of initialization strategy. Figure 25 examines the sensitivity of qEI to the number of initial starting points when performing standard random restart optimization and jointly optimizing the \\(q\\) points in the batch. We find that, consistent with our empirical and theoretical results in the main text, qEI often gets stuck in local minima for the Ackley test function, and additional random restarts often improve results but do not compensate for the fundamental optimality gap. The performance of qLogEI also improves as the number of starting points increases.\n\nFigure 22: Optimization results on the nanomaterial synthesis material science problem with cross-batch constraints.
While qLogEI outperforms qEI under the properly constrained (“batch-constrained \\(Q_{tot}\\)”) optimization, this is not the case for the heuristic (“random \\(Q_{tot}\\)”), demonstrating the value of both joint batch optimization with constraints and LogEI.\n\n\\begin{table}\n\\begin{tabular}{l c c c c c c} \\hline \\hline & Cell Network & Branin-Currin & DTLZ2 & Laser Plasma & ZDT1 & Vehicle Safety \\\\ \\hline JES & 21.6 (+/- 1.1) & 89.6 (+/- 3.3) & 33.6 (+/- 1.0) & 57.3 (+/- 0.7) & 72.7 (+/- 1.0) & 47.0 (+/- 1.6) \\\\ qEHVI & 0.6 (+/- 0.0) & 0.7 (+/- 0.0) & 1.0 (+/- 0.0) & 3.0 (+/- 0.1) & 0.6 (+/- 0.0) & 0.6 (+/- 0.0) \\\\ qLogEHVI & 9.2 (+/- 0.8) & 10.0 (+/- 0.4) & 5.8 (+/- 0.2) & 31.6 (+/- 1.7) & 7.2 (+/- 0.7) & 2.1 (+/- 0.1) \\\\ Rand & 0.2 (+/- 0.0) & 0.2 (+/- 0.0) & 0.2 (+/- 0.0) & 0.3 (+/- 0.0) & 0.3 (+/- 0.0) & 0.3 (+/- 0.0) \\\\ \\hline \\hline \\end{tabular}\n\\end{table}\nTable 1: Multi-objective acquisition function optimization wall time in seconds on CPU (2x Intel Xeon E5-2680 v4 @ 2.40GHz). We report the mean \\(\\pm\\) 2 standard errors.\n\nFigure 23: Ablation study on the convergence characteristics of LogEI on Ackley and sum of squares (SOS) problems in 2 and 16 dimensions. The study shows that it is important to choose a small \\(\\tau_{0}\\) for the best convergence properties, which results in a very tight approximation to the original ReLU non-linearity in the integrand. Critically, setting \\(\\tau_{0}\\) as low as \\(10^{-6}\\) is only possible due to the transformation of all computations into log-space. Otherwise, the smoothed acquisition utility would exhibit similarly numerically vanishing gradients as the original ReLU non-linearity.\n\nFigure 24: Sensitivity to the initialization strategy.
Random selects random restart points from the design space uniformly at random, whereas Boltzmann initialization is the default BoTorch initialization strategy which selects points with higher acquisition function values with a higher probability via Boltzmann sampling.\n\nFigure 25: Sensitivity to number of starting points with multi-start optimization for the 16D Ackley and Levy test problems. Note: We plot negative regret, so higher is better.", "response": "## Review\n\n### Summary\n\nThis paper addresses a critical issue in Bayesian optimization related to the Expected Improvement (EI) acquisition function by identifying and proposing solutions to numerical pathologies such as vanishing gradients. The authors introduce LogEI, a reformulated acquisition function that not only mitigates these issues but also maintains performance comparable to state-of-the-art methods. Extensive theoretical analysis and empirical benchmarks demonstrate LogEI's superior performance and robustness across various settings, including constrained, parallel, and multi-objective optimization. 
Overall, the work presents a significant advancement in making acquisition functions more reliable for practical applications in fields like hyperparameter tuning and materials science.\n\n### Strengths\n\n- Well-written and organized manuscript.\n- Addresses a previously neglected aspect of Bayesian optimization, focusing on the optimization of acquisition functions.\n- The proposed LogEI and its variants show significant improvements over the canonical EI function with minimal computational overhead.\n- Extensive numerical experiments and ablation studies support all claims made in the paper.\n- The paper emphasizes the interaction between acquisition function design and optimization algorithms, highlighting its practical implications.\n\n### Weaknesses\n\n- The proposed solution primarily addresses acquisition functions with vanishing gradient issues, limiting its applicability.\n- The similarity between LogEI and EI is not thoroughly supported by evidence, warranting further clarification.\n- The paper lacks discussion on existing LogEI implementations and related work, which could enhance its novelty.\n- The justification for LogEI's importance in high-dimensional problems is not convincingly presented.\n- Minor concerns include a lack of attention to noisy tasks and conventional benchmarks.\n\n### Questions\n\n- What scenarios exist where LogEI cannot replace EI or may perform worse?\n- Have the authors tested LogEI in non-continuous search spaces, and what were the outcomes?\n- Can the authors elaborate on the expected performance of batch acquisition optimization under strong locality constraints?\n\n### Soundness\n\n**Score:** 4\n\n**Description:** 4 = excellent; the theoretical foundations and empirical results are robust and well-supported.\n\n### Presentation\n\n**Score:** 4\n\n**Description:** 4 = excellent; the manuscript is well-structured, clear, and effectively communicates complex ideas.\n\n### Contribution\n\n**Score:** 4\n\n**Description:** 4 = 
excellent; the paper presents novel insights into acquisition function optimization, with significant potential impact on the field.\n\n### Rating\n\n**Score:** 7\n\n**Description:** 7 = accept, but needs minor improvements; the paper is technically sound with high impact potential but could benefit from additional clarity and context.\n\n### Paper Decision\n\n**Decision:** Accept\n\n**Reasons:** The paper is original and provides significant contributions to the field of Bayesian optimization by addressing critical numerical issues associated with the EI acquisition function. Its soundness, presentation, and contribution scores reflect a strong manuscript that, while needing some minor clarifications, is likely to have a substantial impact on both academic research and practical applications.\n"} {"query": "You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections: \n1. Summary: A summary of the paper in 100-150 words.\n2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.\n3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are: \n 1 poor\n 2 fair\n 3 good\n 4 excellent\n4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. 
The possible Ratings and their descriptions are: \n 1 strong reject\n 2 reject, significant issues present\n 3 reject, not good enough\n 4 possibly reject, but has redeeming facets\n 5 marginally below the acceptance threshold\n 6 marginally above the acceptance threshold\n 7 accept, but needs minor improvements \n 8 accept, good paper\n 9 strong accept, excellent work\n 10 strong accept, should be highlighted at the conference \n5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.\n\nHere is the template for a review format, you must follow this format to output your review result:\n**Summary:**\nSummary content\n\n**Strengths:**\n- Strength 1\n- Strength 2\n- ...\n\n**Weaknesses:**\n- Weakness 1\n- Weakness 2\n- ...\n\n**Questions:**\n- Question 1\n- Question 2\n- ...\n\n**Soundness:**\nSoundness result\n\n**Presentation:**\nPresentation result\n\n**Contribution:**\nContribution result\n\n**Rating:**\nRating result\n\n**Paper Decision:**\n- Decision: Accept/Reject\n- Reasons: reasons content\n\n\nPlease ensure your feedback is objective and constructive. The paper is as follows:\n\n# Rethinking Bias Mitigation: Fairer Architectures\n\nMake for Fairer Face Recognition\n\n Samuel Dooley\\({}^{*}\\)\n\nUniversity of Maryland, Abacus.AI\n\nsamuel@abacus.ai &Rhea Sanjay Sukthanker\\({}^{*}\\)\n\nUniversity of Freiburg\n\nsukthank@cs.uni-freiburg.de &John P. 
Dickerson\n\nUniversity of Maryland, Arthur AI\n\njohnd@umd.edu &Colin White\n\nCaltech, Abacus.AI\n\ncrwhite@caltech.edu &Frank Hutter\n\nUniversity of Freiburg\n\nfh@cs.uni-freiburg.de &Micah Goldblum\n\nNew York University\n\ngoldblum@nyu.edu\n\n# indicates equal contribution\n\n###### Abstract\n\nFace recognition systems are widely deployed in safety-critical applications, including law enforcement, yet they exhibit bias across a range of socio-demographic dimensions, such as gender and race. Conventional wisdom dictates that model biases arise from biased training data. As a consequence, previous works on bias mitigation largely focused on pre-processing the training data, adding penalties to prevent bias from affecting the model during training, or post-processing predictions to debias them, yet these approaches have shown limited success on hard problems such as face recognition. In our work, we discover that biases are actually inherent to neural network architectures themselves. Following this reframing, we conduct the first neural architecture search for fairness, jointly with a search for hyperparameters. Our search outputs a suite of models which Pareto-dominate all other high-performance architectures and existing bias mitigation methods in terms of accuracy and fairness, often by large margins, on the two most widely used datasets for face identification, CelebA and VGGFace2. Furthermore, these models generalize to other datasets and sensitive attributes. We release our code, models, and raw data files at [https://github.com/dooleys/FR-NAS](https://github.com/dooleys/FR-NAS).\n\n## 1 Introduction\n\nMachine learning is applied to a wide variety of socially-consequential domains, e.g., credit scoring, fraud detection, hiring decisions, criminal recidivism, loan repayment, and face recognition [78, 81, 61, 3], with many of these applications significantly impacting people's lives, often in discriminatory ways [5, 55, 114].
Dozens of formal definitions of fairness have been proposed [80], and many algorithmic techniques have been developed for debiasing according to these definitions [106]. Existing debiasing algorithms broadly fit into three (or arguably four [96]) categories: pre-processing [e.g., 32, 93, 89, 110], in-processing [e.g., 123, 124, 25, 35, 83, 110, 73, 79, 24, 59], or post-processing [e.g., 44, 114].\n\nConventional wisdom is that in order to effectively mitigate bias, we should start by selecting a model architecture and set of hyperparameters which are optimal in terms of accuracy and then apply a mitigation strategy to reduce bias. This strategy has yielded little success in hard problems such as face recognition [14]. Moreover, even randomly initialized face recognition models exhibit bias in the same ways and to the same extents as trained models, indicating that these biases are baked into the architectures already [13]. While existing methods for debiasing machine learning systems use a fixed neural architecture and hyperparameter setting, we instead ask a fundamental question which has received little attention: _Does model bias arise from the architecture and hyperparameters?_ Following an affirmative answer to this question, we exploit advances in neural architecture search (NAS) [30] and hyperparameter optimization (HPO) [33] to search for inherently fair models.\n\nWe demonstrate our results on face identification systems where pre-, post-, and in-processing techniques have fallen short of debiasing face recognition systems. Training fair models in this setting demands addressing several technical challenges [14]. Face identification is a type of face recognition deployed worldwide by government agencies for tasks including surveillance, employment, and housing decisions. Face recognition systems exhibit disparity in accuracy based on race and gender [37, 92, 91, 61].
For example, some face recognition models are 10 to 100 times more likely to give false positives for Black or Asian people, compared to white people [2]. This bias has already led to multiple false arrests and jail time for innocent Black men in the USA [48].\n\nIn this work, we begin by conducting the first large-scale analysis of the impact of architectures and hyperparameters on bias. We train a diverse set of 29 architectures, ranging from ResNets [47] to vision transformers [28, 68] to Gluon Inception V3 [103] to MobileNetV3 [50] on the two most widely used datasets in face identification that have socio-demographic labels: CelebA [69] and VGGFace2 [8]. In doing so, we discover that architectures and hyperparameters have a significant impact on fairness, across fairness definitions.\n\nMotivated by this discovery, we design architectures that are simultaneously fair and accurate. To this end, we initiate the study of NAS for fairness by conducting the first use of NAS+HPO to jointly optimize fairness and accuracy. We construct a search space informed by the highest-performing architecture from our large-scale analysis, and we adapt the existing Sequential Model-based Algorithm Configuration method (SMAC) [66] for multi-objective architecture and hyperparameter search. We discover a Pareto frontier of face recognition models that outperform existing state-of-the-art models on both test accuracy and multiple fairness metrics, often by large margins. An outline of our methodology can be found in Figure 1.\n\nWe summarize our primary contributions below:\n\n* By conducting an exhaustive evaluation of architectures and hyperparameters, we uncover their strong influence on fairness. Bias is inherent to a model's inductive bias, leading to a substantial difference in fairness across different architectures. 
We conclude that the implicit convention of choosing standard architectures designed for high accuracy is a losing strategy for fairness.\n* Inspired by these findings, we propose a new way to mitigate biases. We build an architecture and hyperparameter search space, and we apply existing tools from NAS and HPO to automatically design a fair face recognition system.\n* Our approach finds architectures which are Pareto-optimal on a variety of fairness metrics on both CelebA and VGGFace2. Moreover, our approach is Pareto-optimal compared to previous bias mitigation techniques, finding the fairest model.\n\nFigure 1: Overview of our methodology.\n\n* The architectures we synthesize via NAS and HPO generalize to other datasets and sensitive attributes. Notably, these architectures also reduce the linear separability of protected attributes, indicating their effectiveness in mitigating bias across different contexts.\n\nWe release our code and raw results at [https://github.com/dooleys/FR-NAS](https://github.com/dooleys/FR-NAS), so that users can easily adapt our approach to any bias metric or dataset.\n\n## 2 Background and Related Work\n\nFace Identification. Face recognition tasks can be broadly divided into two distinct categories: _verification_ and _identification_. Our specific focus lies in face _identification_ tasks which ask whether a given person in a source image appears within a gallery composed of many target identities and their associated images; this is a one-to-many comparison. Novel techniques in face recognition tasks, such as ArcFace [108], CosFace [23], and MagFace [75], use deep networks (often called the _backbone_) to extract feature representations of faces and then compare those to match individuals (with mechanisms called the _head_). Generally, _backbones_ take the form of image feature extractors and _heads_ resemble MLPs with specialized loss functions.
Often, the term \"head\" refers to both the last layer of the network and the loss function. Our analysis primarily centers around the face identification task, and we focus our evaluation on examining how close images of similar identities are in the feature space of trained models, since the technology relies on this feature representation to differentiate individuals. An overview of these topics can be found in Wang and Deng [109].\n\nBias Mitigation in Face Recognition. The existence of differential performance of face recognition on population groups and subgroups has been explored in a variety of settings. Earlier work [e.g., 57, 82] focuses on single-demographic effects (specifically, race and gender) in pre-deep-learning face detection and recognition. Buolamwini and Gebru [5] uncover unequal performance at the phenotypic subgroup level in, specifically, a gender classification task powered by commercial systems. Raji and Buolamwini [90] provide a follow-up analysis - exploring the impact of the public disclosures of Buolamwini and Gebru [5] - where they discovered that named companies (IBM, Microsoft, and Megvii) updated their APIs within a year to address some concerns that had surfaced. Further research continues to show that commercial face recognition systems still have socio-demographic disparities in many complex and pernicious ways [29, 27, 54, 26].\n\nFacial recognition is a large and complex space with many different individual technologies, some with bias mitigation strategies designed just for them [63, 118]. The main bias mitigation strategies for facial identification are described in Section 4.2.\n\nNeural Architecture Search (NAS) and Hyperparameter Optimization (HPO). Deep learning derives its success from automatically learned feature extractors, which automate the feature engineering process. Neural Architecture Search (NAS) [30, 116], on the other hand, aims at automating the very design of network architectures for a task at hand.
NAS can be seen as a subset of HPO [33], which refers to the automated search for optimal hyperparameters, such as learning rate, batch size, dropout, loss function, optimizer, and architectural choices. NAS for image classification and object detection has recently seen rapid and extensive research [67, 125, 121, 88, 6]. Deploying NAS techniques in face recognition systems has also seen growing interest [129, 113]. For example, reinforcement learning-based NAS strategies [121] and one-shot NAS methods [113] have been deployed to search for an efficient architecture for face recognition with low _error_. However, in a majority of these methods, the training hyperparameters for the architectures are _fixed_. We observe that this practice should be reconsidered in order to obtain the fairest possible face recognition systems. Moreover, one-shot NAS methods have also been applied for multi-objective optimization [39, 7], e.g., optimizing accuracy and parameter size. However, none of these methods can be applied for a joint architecture and hyperparameter search, and none of them have been used to optimize _fairness_.\n\nFor the case of tabular datasets, a few works have applied hyperparameter optimization to mitigate bias in models. Perrone et al. [87] introduced a Bayesian optimization framework to optimize the accuracy of models while satisfying a bias constraint. Schmucker et al. [97] and Cruz et al. [17] extended Hyperband [64] to the multi-objective setting and showed its applications to fairness. Lin et al. [65] proposed de-biasing face recognition models through model pruning. However, they only considered two architectures and just one set of fixed hyperparameters.
To the best of our knowledge, no prior work uses any AutoML technique (NAS, HPO, or joint NAS and HPO) to design fair face recognition models, and no prior work uses NAS to design fair models for any application.\n\n## 3 Are Architectures and Hyperparameters Important for Fairness?\n\nIn this section, we study the question _\"Are architectures and hyperparameters important for fairness?\"_ and report an extensive exploration of the effect of model architectures and hyperparameters.\n\nExperimental Setup. We train and evaluate each model configuration on a gender-balanced subset of the two most popular face identification datasets: CelebA and VGGFace2. CelebA [69] is a large-scale face attributes dataset with more than 200K celebrity images and a total of 10 177 gender-labeled identities. VGGFace2 [8] is a much larger dataset designed specifically for face identification and comprises over 3.1 million images and a total of 9 131 gender-labeled identities. While this work analyzes phenotypic metadata (perceived gender), the reader should not interpret our findings absent a social lens of what these demographic groups mean inside society. We guide the reader to Hamidi et al. [40] and Keyes [56] for a look at these concepts for gender.\n\nTo study the importance of architectures and hyperparameters for fairness, we use the following training pipeline - ultimately conducting 355 training runs with different combinations of 29 architectures from the PyTorch Image Models (timm) database [117] and hyperparameters. For each model, we use the default learning rate and optimizer that were published with that model. We then train the model with these hyperparameters for each of three heads, ArcFace [108], CosFace [23], and MagFace [75]. Next, we use the model's default learning rate with both AdamW [70] and SGD optimizers (again with each head choice).
Finally, we also train with AdamW and SGD with unified learning rates (SGD with learning_rate=0.1 and AdamW with learning_rate=0.001). In total, we thus evaluate a single architecture between 9 and 13 times (9 times if the default optimizer and learning rates are the same as the standardized ones, and 13 times otherwise). All other hyperparameters are held constant for training of the model.\n\nEvaluation procedure. As is commonplace in face identification tasks [12, 13], we evaluate the performance of the learned representations. Recall that face recognition models usually learn representations with an image backbone and then learn a mapping from those representations onto identities of individuals with the head of the model. We pass each test image through a trained model and save the learned representation. To compute the representation error (which we will henceforth simply refer to as _Error_), we merely ask, for a given probe image/identity, whether the closest image in feature space is _not_ of the same person based on \\(l_{2}\\) distance. We split each dataset into train, validation, and test sets. We conduct our search for novel architectures using the train and validation splits, and then show the improvement of our model on the test set.\n\nThe most widely used fairness metric in face identification is _rank disparity_, which is explored in the NIST FRVT [38]. To compute the rank of a given image/identity, we ask how many images of a different identity are closer to it in feature space than the closest image of the same identity. We define this count as the rank of the given image under consideration. Thus, \\(\\text{Rank(image)}=0\\) if and only if \\(\\text{Error(image)}=0\\); \\(\\text{Rank(image)}>0\\) if and only if \\(\\text{Error(image)}=1\\).
We examine the **rank disparity**: the absolute difference of the average ranks for each perceived gender in a dataset \\(\\mathcal{D}\\):\n\n\\[\\bigg{|}\\frac{1}{|\\mathcal{D}_{\\text{male}}|}\\sum_{x\\in\\mathcal{D}_{\\text{ male}}}\\text{Rank }(x)-\\frac{1}{|\\mathcal{D}_{\\text{female}}|}\\sum_{x\\in\\mathcal{D}_{\\text{ female}}}\\text{Rank}(x)\\bigg{|}. \\tag{1}\\]\n\nWe focus on rank disparity throughout the main body of this paper as it is the most widely used in face identification, but we explore other forms of fairness metrics in face recognition in Appendix C.4.\n\nResults and Discussion.By plotting the performance of each training run on the validation set with the error on the \\(x\\)-axis and rank disparity on the \\(y\\)-axis in Figure 2, we can easily conclude two main points. First, optimizing for error does not always optimize for fairness, and second, different architectures have different fairness properties. We also find the DPN architecture has the lowest error and is Pareto-optimal on both datasets; hence, we use that architecture to design our search space in Section 4.\n\nWe note that in general there is a low correlation between error and rank disparity (e.g., for models with error < 0.3, \\(\\rho=.113\\) for CelebA and \\(\\rho=.291\\) for VGGFace2). However, there are differences between the two datasets at the most extreme low errors. First, for VGGFace2, the baseline models already have very low error, with there being 10 models with error < 0.05; CelebA only has three such models. Additionally, models with low error also have low rank disparity on VGGFace2 but this is not the case for CelebA. This can be seen by looking at the Pareto curves in Figure 2.\n\nThe Pareto-optimal models also differ across datasets: on CelebA, they are versions of DPN, TNT, ReXNet, VovNet, and ResNets, whereas on VGGFace2 they are DPN and ReXNet. Finally, we note that different architectures exhibit different optimal hyperparameters. 
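A minimal numpy sketch of this rank and rank-disparity computation (Eq. 1); the function and variable names are illustrative, and it assumes every identity has at least two images:

```python
import numpy as np

def rank_disparity(feats, ids, genders):
    """Rank disparity (Eq. 1): |mean rank over one group - mean rank over the other|.

    feats: (n, d) feature embeddings; ids: (n,) identity labels;
    genders: (n,) perceived-gender labels ("m"/"f").
    Assumes every identity has at least two images.
    """
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)  # pairwise squared l2
    np.fill_diagonal(d2, np.inf)  # a probe never matches itself
    ranks = np.empty(len(ids))
    for i in range(len(ids)):
        same = ids[np.argsort(d2[i])] == ids[i]
        # rank = number of different-identity images closer to the probe than
        # its nearest same-identity image (0 iff the probe has Error = 0)
        ranks[i] = np.argmax(same)
    return abs(ranks[genders == "m"].mean() - ranks[genders == "f"].mean())
```

A rank of 0 for every probe in both groups yields a disparity of 0; disparity grows when misrankings concentrate in one group.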
For example, on CelebA, the Xception65 architecture finds the combinations (SGD, ArcFace) and (AdamW, ArcFace) Pareto-optimal, whereas the Inception-ResNet architecture finds the combinations (SGD, MagFace) and (SGD, CosFace) Pareto-optimal.\n\n## 4 Neural Architecture Search for Bias Mitigation\n\nInspired by our findings on the importance of architecture and hyperparameters for fairness in Section 3, we now initiate the first joint study of NAS for fairness in face recognition, also simultaneously optimizing hyperparameters. We start by describing our search space and search strategy. We then compare the results of our NAS+HPO-based bias mitigation strategy against other popular face recognition bias mitigation strategies. We conclude that our strategy indeed discovers simultaneously accurate and fair architectures.\n\n### Search Space Design and Search Strategy\n\nWe design our search space based on our analysis in Section 3, specifically around the Dual Path Networks [10] architecture which has the lowest error and is Pareto-optimal on both datasets, yielding the best trade-off between rank disparity and accuracy as seen in Figure 2.\n\nHyperparameter Search Space Design. We optimize two categorical hyperparameters (the architecture head/loss and the optimizer) and one continuous one (the learning rate). The learning rate's range is conditional on the choice of optimizer; the exact ranges are listed in Table 6 in the appendix.\n\nArchitecture Search Space Design. Dual Path Networks [10] for image classification share common features (like ResNets [46]) while possessing the flexibility to explore new features [52] through a dual path architecture. We replace the repeating 1x1_conv-3x3_conv-1x1_conv block with a simple recurring searchable block. Furthermore, we stack multiple such searched blocks to closely follow the architecture of Dual Path Networks.
We have nine possible choices for each of the three operations in the DPN block, each of which we give a number 0 through 8. The choices include a vanilla convolution, a convolution with pre-normalization, and a convolution with post-normalization, each of them paired with kernel sizes 1\\(\\times\\)1, 3\\(\\times\\)3, or 5\\(\\times\\)5 (see Appendix C.2 for full details). We thus have 729 possible architectures (in addition to an infinite number of hyperparameter configurations). We denote each of these architectures by XYZ where \\(X,Y,Z\\in\\{0,\\dots,8\\}\\); e.g., architecture 180 represents the architecture which has operation 1, followed by operation 8, followed by operation 0.\n\nFigure 2: (Left) CelebA (Right) VGGFace2. Error-Rank Disparity Pareto front of the architectures with lowest error (< 0.3). Models in the lower left corner are better. The Pareto front is denoted with a dashed line. Other points are architecture and hyperparameter combinations which are not Pareto-optimal.\n\nSearch strategy. To navigate this search space we have the following desiderata:\n\n* **Joint NAS+HPO.** Since there are interaction effects between architectures and hyperparameters, we require an approach that can jointly optimize both of these.\n* **Multi-objective optimization.** We want to explore the trade-off between the accuracy of the face recognition system and the fairness objective of choice, so our joint NAS+HPO algorithm needs to support multi-objective optimization [84; 21; 71].\n* **Efficiency.** A single function evaluation for our problem corresponds to training a deep neural network on a given dataset. As this can be quite expensive on large datasets, we would like to use cheaper approximations with multi-fidelity optimization techniques [98; 64; 31].\n\nTo satisfy these desiderata, we employ the multi-fidelity Bayesian optimization method SMAC3 [66] (using the SMAC4MF facade), casting architectural choices as additional hyperparameters.
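The architecture numbering scheme above can be sketched as follows; the concrete ordering of the nine operations in the table below is an illustrative assumption (the paper's exact mapping is given in its Appendix C.2):

```python
from itertools import product

# Nine candidate ops per block position: {vanilla, pre-norm, post-norm}
# convolutions with kernel sizes {1, 3, 5}. This ordering is an assumption
# for illustration, not the paper's exact op table.
NORMS = ["vanilla", "pre_norm", "post_norm"]
KERNELS = [1, 3, 5]
OPS = [f"{n}_conv{k}x{k}" for n, k in product(NORMS, KERNELS)]

def decode(arch_id: str):
    """Map an architecture ID like '180' to its three block operations."""
    return [OPS[int(c)] for c in arch_id]

# Every three-digit string over 0..8 names one of the 729 architectures.
search_space = ["".join(t) for t in product("012345678", repeat=3)]
```

For instance, `decode("180")` returns the op for digit 1, then digit 8, then digit 0, mirroring the paper's XYZ naming convention.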
We choose Hyperband [64] for cheaper approximations with the initial and maximum fidelities set to 25 and 100 epochs, respectively, and \\(\\eta=2\\). Every architecture-hyperparameter configuration evaluation is trained using the same training pipeline as in Section 3. For multi-objective optimization, we use the ParEGO [21] algorithm with \\(\\rho\\) set to 0.05.\n\n### Empirical Evaluation\n\nWe now report the results of our NAS+HPO-based bias mitigation strategy. First, we discuss the models found with our approach, and then we compare their performance to other mitigation baselines.\n\nSetup. We conducted one NAS+HPO search for each dataset by searching on the train and validation sets. After running these searches, we identified three new candidate architectures for CelebA (SMAC_000, SMAC_010, and SMAC_680), and one candidate for VGGFace2 (SMAC_301), where the naming convention follows that described in Section 4.1. We then retrained each of these models and the high-performing models from Section 3 for three seeds to study the robustness of error and disparity for these models; we evaluated their performance on the validation and test sets for each dataset, where we follow the evaluation scheme of Section 3.\n\nComparison against timm models. On CelebA (Figure 3), our models Pareto-dominate all of the timm models with nontrivial accuracy on the validation set. On the test set, our models still Pareto-dominate all highly competitive models (with Error<0.1), but one of the original configurations (DPN with MagFace) also becomes Pareto-optimal. However, the error of this architecture is 0.13, which is significantly higher than that of our models (0.03-0.04). Also, some models (e.g., VoVNet and DenseNet) show very large standard errors across seeds. Hence, it becomes important to also study\n\nFigure 3: Pareto front of the models discovered by SMAC and the rank-1 models from timm for the _(a)_ validation and _(b)_ test sets on CelebA.
Each point corresponds to the mean and standard error of an architecture after training for 3 seeds. The SMAC models Pareto-dominate the top performing timm models (\\(Error<0.1\\)).\n\nthe robustness of models across seeds along with the accuracy and disparity Pareto front. Finally, on VGGFace2 (Figure 4), our models are also Pareto-optimal for both the validation and test sets.\n\nNovel Architectures Outperform the State of the Art. Comparing the results of our automatically-found models to the current state-of-the-art baseline ArcFace [23] in terms of error demonstrates that our strategy clearly establishes a new state of the art. While ArcFace [23] achieves an error of 4.35% with our training pipeline on CelebA, our best-performing novel architecture achieves a much lower error of 3.10%. Similarly, the current VGGFace2 state-of-the-art baseline [112] achieves an error of 4.5%, whereas our best-performing novel architecture achieves a much lower error of 3.66%.\n\nNovel Architectures Pareto-Dominate other Bias Mitigation Strategies. There are three common pre-, post-, and in-processing bias mitigation strategies in face identification. First, Chang et al. [9] demonstrated that randomly flipping labels in the training data of the subgroup with superior accuracy can yield fairer systems; we call this technique Flipped. Next, Wang and Deng [110] use different angular margins during training and thereby promote better feature discrimination for the minority class; we call this technique Angular. Finally, Morales et al. [76] introduced SensitiveNets, a sensitive information removal network trained on top of a pre-trained feature extractor with an adversarial sensitive regularizer. While other bias mitigation techniques exist in face recognition, these three are the most used and the most pertinent to _face identification_. See Cherepanova et al. [14] for an overview of the technical challenges of bias mitigation in face recognition.
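The simplest of these baselines can be illustrated with a short dependency-free sketch of the Flipped idea; the data layout, group names, and flip rate below are assumptions for illustration, not details from Chang et al. [9]:

```python
import random

def flip_labels(samples, advantaged_group, flip_rate, seed=0):
    """Sketch of the 'Flipped' pre-processing mitigation: randomly
    reassign identity labels for training samples from the subgroup
    with superior accuracy.  `samples` are (features, label, group)
    triples; the layout is an illustrative assumption."""
    rng = random.Random(seed)
    labels = sorted({label for _, label, _ in samples})
    flipped = []
    for features, label, group in samples:
        if group == advantaged_group and rng.random() < flip_rate:
            # Replace the true label with a different, randomly chosen one.
            label = rng.choice([other for other in labels if other != label])
        flipped.append((features, label, group))
    return flipped
```

Injecting label noise only into the advantaged subgroup degrades the model's fit on that subgroup and narrows the accuracy gap, typically at some cost in overall error (visible in Table 1, where the Flipped error exceeds the Baseline error).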
We take the top-performing, Pareto-optimal timm models from the previous section and apply the three bias mitigation techniques (Flipped, Angular, and SensitiveNets). We also apply these same techniques to the novel architectures that we found. The results in Table 1 show that the novel architectures from our NAS+HPO-based mitigation strategy Pareto-dominate the bias-mitigated models. On VGGFace2, the SMAC_301 model achieves the best performance, both in terms of error and fairness, compared to the bias-mitigated models. On CelebA, the same is true for the SMAC_680 model.\n\nNAS+HPO-Based Bias Mitigation can be Combined with other Bias Mitigation Strategies. Additionally, we combined the three other bias mitigation methods with the SMAC models that resulted from our NAS+HPO-based bias mitigation strategy. More precisely, we first conducted our NAS+HPO approach and then applied the Flipped, Angular, and SensitiveNets approaches afterwards. On both datasets, the resulting models continue to Pareto-dominate the other bias mitigation strategies used by themselves and ultimately yield the model with the lowest rank disparity of all the models (0.18 on VGGFace2 and 0.03 on CelebA). In particular, the bias improvement of the SMAC_000+Flipped model is notable, achieving a score of 0.03, whereas the lowest rank disparity of any model from Figure 3 is 2.63, a 98.9% improvement. In Appendix C.6, we demonstrate that this result is robust to the fairness metric -- specifically, our bias mitigation strategy Pareto-dominates the other approaches on all five fairness metrics.\n\nFigure 4: Pareto front of the models discovered by SMAC and the rank-1 models from timm for the _(a)_ validation and _(b)_ test sets on VGGFace2. Each point corresponds to the mean and standard error of an architecture after training for 3 seeds.
The SMAC models are Pareto-optimal among the top performing timm models (Error<0.1).\n\nNovel Architectures Generalize to Other Datasets. We observed that when transferring our novel architectures to other facial recognition datasets that focus on fairness-related aspects, our architectures consistently outperform other existing architectures by a significant margin. We take the state-of-the-art models from our experiments and test the weights from training on CelebA and VGGFace2 on different datasets which the models did not see during training. Specifically, we transfer the evaluation of the trained model weights from CelebA and VGGFace2 onto the following datasets: LFW [53], CFP_FF [100], CFP_FP [100], AgeDB [77], CALFW [128], CPLFW [127]. Table 2 demonstrates that our approach consistently achieves the highest performance among various architectures when transferred to other datasets. This finding indicates that our approach exhibits exceptional generalizability compared to state-of-the-art face recognition models in terms of transfer learning to diverse datasets.\n\nNovel Architectures Generalize to Other Sensitive Attributes. The superiority of our novel architectures even goes beyond accuracy-related metrics when transferring to other datasets -- our novel architectures have superior fairness properties compared to the existing architectures _even on datasets which have completely different protected attributes than were used in the architecture search_. Specifically, to inspect the generalizability of our approach to other protected attributes, we transferred our models pre-trained on CelebA and VGGFace2 (which have a gender presentation category) to the RFW dataset [111], which includes a protected attribute for race, and the AgeDB dataset [77], which includes a protected attribute for age.
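For intuition on the kind of metric being transferred across attributes, a rank-disparity-style measure can be sketched as follows. This is an illustrative simplification, not the paper's formal definition, and the subgroup names and rank values are hypothetical:

```python
def rank_disparity(ranks_by_group):
    """Illustrative rank-disparity-style fairness metric: the absolute
    difference between two subgroups in the mean rank at which the true
    identity is retrieved (rank 1 = correct top match).  This mirrors
    the spirit of the rank disparity values reported in the tables,
    not their exact definition."""
    means = [sum(ranks) / len(ranks) for ranks in ranks_by_group.values()]
    return abs(means[0] - means[1])

# Hypothetical true-identity ranks for queries from two subgroups.
ranks = {"group_a": [1, 1, 2, 1], "group_b": [1, 4, 2, 3]}
disparity = rank_disparity(ranks)  # |1.25 - 2.5| = 1.25
```

A perfectly fair identifier under this measure retrieves the correct identity at the same average rank for both subgroups, giving a disparity of zero.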
The results detailed in Appendix C.7 show that our novel architectures always outperform the existing architectures, across all five fairness metrics studied in this work on both datasets.\n\nNovel Architectures Have Less Linear-Separability of Protected Attributes. Our comprehensive evaluation of multiple face recognition benchmarks establishes the importance of architectures for fairness in face recognition. However, it is natural to wonder: _"What makes the discovered architectures fair in the first place?"_ To answer this question, we use linear probing to dissect the\n\n\\begin{table}\n\\begin{tabular}{l c c c c c c c c} \\hline \\hline & \\multicolumn{4}{c}{**Trained on VGGFace2**} & \\multicolumn{4}{c}{**Trained on CelebA**} \\\\\n**Model** & **Baseline** & **Flipped** & **Angular** & **SensitiveNets** & **Model** & **Baseline** & **Flipped** & **Angular** & **SensitiveNets** \\\\ \\hline SMAC\\_301 & **(3.66:0.23)** & **(4.95:0.18)** & (4.14:0.25) & (6.20:0.41) & SMAC\\_000 & (3.25:2.18) & **(5.20:0.03)** & (3.45:2.28) & (3.45:2.18) \\\\ DPN & (3.56:0.27) & (5.87:0.32) & (6.06:0.36) & (4.76:0.34) & SMAC\\_010 & (4.44:2.27) & (12.72:5.46) & (45.4:2.50) & (3.99:2.12) \\\\ REXNet & (4.69:0.27) & (5.73:0.45) & (5.47:0.26) & (4.75:0.25) & SMAC\\_680 & **(3.21:9.16)** & (12.42:4.50) & (3.80:1.16) & (3.29:2.09) \\\\ Swin & (5.47:0.38) & (5.75:0.44) & (5.23:0.25) & (5.03:0.30) & ArcFace & (11.30:4.6) & (13.56:2.70) & (9.90:5.60) & (9.10:3.00) \\\\ \\hline \\hline \\end{tabular}\n\\end{table}\nTable 1: Comparison of bias mitigation techniques, where the SMAC models were found with our NAS+HPO bias mitigation technique and the other three techniques are standard in facial recognition: Flipped [9], Angular [110], and SensitiveNets [76]. Items in bold are Pareto-optimal. The values show (Error:Rank Disparity).
Other metrics are reported in Appendix C.6 and Table 8.\n\n\\begin{table}\n\\begin{tabular}{l c c c c c c} \\hline \\hline\n**Architecture (trained on VGGFace2)** & **LFW** & **CFP\\_FF** & **CFP\\_FP** & **AgeDB** & **CALFW** & **CPLFW** \\\\ \\hline \\hline & 82.60 & 80.91 & 65.51 & 59.18 & 68.23 & 62.15 \\\\ DPN\\_SGD & 93.0 & 91.81 & 78.96 & 71.87 & 78.27 & 72.97 \\\\ DPN\\_AdamW & 78.66 & 77.17 & 64.35 & 61.32 & 64.78 & 60.30 \\\\ SMAC\\_301 & **96.63** & **95.10** & **86.63** & **79.97** & **86.07** & **81.43** \\\\ \\hline \\hline\n**Architecture (trained on CelebA)** & **LFW** & **CFP\\_FF** & **CFP\\_FP** & **AgeDB** & **CALFW** & **CPLFW** \\\\ \\hline DPN\\_CosFace & 87.78 & 90.73 & 69.97 & 65.55 & 75.50 & 62.77 \\\\ DPN\\_MagFace & 91.13 & 92.16 & 70.58 & 68.17 & 76.98 & 60.80 \\\\ SMAC\\_000 & **94.98** & 95.60 & **74.24** & 80.23 & 84.73 & 64.22 \\\\ SMAC\\_010 & 94.30 & 94.63 & 73.83 & **80.37** & 84.73 & **65.48** \\\\ SMAC\\_680 & 94.16 & **95.68** & 72.67 & 79.88 & **84.78** & 63.96 \\\\ \\hline \\hline \\end{tabular}\n\\end{table}\nTable 2: We transfer the evaluation of top-performing models on VGGFace2 and CelebA onto six other common face recognition datasets: LFW [53], CFP_FF [100], CFP_FP [100], AgeDB [77], CALFW [128], CPLFW [127]. The novel architectures found with our bias mitigation strategy significantly outperform other models in terms of accuracy. Refer to Table 9 for the complete results.\n\nintermediate features of our searched architectures and DPNs, which our search space is based upon. Intuitively, given that our networks are trained only on the task of face recognition, we do not want the intermediate feature representations to implicitly exploit knowledge about protected attributes (e.g., gender). To this end, we insert linear probes [1] at the last two layers of different Pareto-optimal DPNs and the model obtained by our NAS+HPO-based bias mitigation.
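The probing idea can be illustrated with a dependency-free sketch: a single linear layer trained with the logistic loss stands in for the MLP probe used in the paper, and the toy features in the test are hypothetical stand-ins for real extracted representations:

```python
import math

def train_linear_probe(features, labels, epochs=200, lr=0.5):
    """Train a logistic-regression probe on frozen feature vectors to
    predict a binary protected attribute.  This is a simplified sketch
    of linear probing [1]; the paper trains an MLP probe on the
    representations extracted from pre-trained face models."""
    dim = len(features[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = max(-30.0, min(30.0,  # clamp to avoid overflow in exp
                    sum(wi * xi for wi, xi in zip(w, x)) + b))
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            g = p - y                       # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def probe_accuracy(w, b, features, labels):
    """Fraction of held-out samples whose attribute the probe decodes;
    lower accuracy means the protected attribute is less linearly
    separable in the representation."""
    hits = 0
    for x, y in zip(features, labels):
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        hits += pred == y
    return hits / len(labels)
```

Under this reading, Table 3 reports exactly this held-out probe accuracy, and the searched architecture is preferable because its features leave the probe closer to chance.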
Specifically, we train an MLP on the feature representations extracted from the pre-trained models, using the protected attributes as labels, and compute the gender-classification accuracy on a held-out set. We consider only the last two layers, so \\(k\\) assumes the values of \\(N\\) and \\(N-1\\), with \\(N\\) being the number of layers in the DPNs (and the searched models). We represent the classification probabilities for the genders by \\(gp_{k}=\\mathrm{softmax}(W_{k}f_{k}+b)\\), where \\(f_{k}\\) is the feature representation at layer \\(k\\), \\(W_{k}\\) is the weight matrix of the probe at layer \\(k\\), and \\(b\\) is a bias. We provide the classification accuracies for the different pre-trained models on VGGFace2 in Table 3. This demonstrates that, as desired, our searched architectures maintain a lower classification accuracy for the protected attribute. In line with this observation, in the t-SNE plots in Figure 18 in the appendix, the DPN displays a higher degree of separability of features.\n\nComparison between different NAS+HPO techniques. We also perform an ablation across different multi-objective NAS+HPO techniques. Specifically, we compare the architecture derived by SMAC with architectures derived by the evolutionary multi-objective optimization algorithm NSGA-II [22] and multi-objective asynchronous successive halving (MO-ASHA) [98]. We observe that the architecture derived by SMAC Pareto-dominates those found by the other NAS methods in terms of accuracy and diverse fairness metrics (Table 4). We use the implementations of NSGA-II and MO-ASHA from the syne-tune library [95] to perform this ablation across different baselines.\n\n## 5 Conclusion, Future Work and Limitations\n\nConclusion. Our approach studies a novel direction for bias mitigation by altering network topology instead of loss functions or model parameters. We conduct the first large-scale analysis of the relationship among hyperparameters, architectural properties, and accuracy, bias, and disparity in predictions across large-scale datasets like CelebA and VGGFace2.
Our bias mitigation technique, centered around Neural Architecture Search and Hyperparameter Optimization, is highly competitive with other common bias mitigation techniques in facial recognition.\n\nOur findings present a paradigm shift by challenging conventional practices and suggesting that seeking a fairer architecture through search is more advantageous than attempting to rectify an unfair one through adjustments. The architectures obtained by our joint NAS and HPO generalize across different face recognition benchmarks and different protected attributes, and exhibit lower linear-separability of protected attributes.\n\nFuture Work. Since our work lays the foundation for studying NAS+HPO for fairness, it opens up a plethora of opportunities for future work. We expect the future work in this direction to focus on\n\n\\begin{table}\n\\begin{tabular}{l c c c c c c} \\hline \\hline\n**NAS Method** & **Accuracy \\(\\uparrow\\)** & **Rank Disparity\\(\\downarrow\\)** & **Disparity\\(\\downarrow\\)** & **Ratio\\(\\downarrow\\)** & **Rank Ratio \\(\\downarrow\\)** & **Error Ratio\\(\\downarrow\\)** \\\\ \\hline MO-ASHA\\_108 & 95.212 & 0.408 & 0.038 & 0.041 & 0.470 & 0.572 \\\\ \\hline NSGA-II\\_728 & 86.811 & 0.599 & 0.086 & 0.104 & 0.490 & **0.491** \\\\ \\hline SMAC\\_301 & **96.337** & **0.230** & **0.030** & **0.032** & **0.367** & 0.582 \\\\ \\hline \\hline \\end{tabular}\n\\end{table}\nTable 4: Comparison between architectures derived by SMAC and other NAS baselines.\n\n\\begin{table}\n\\begin{tabular}{l c c} \\hline \\hline\n**Architecture (trained on VGGFace2)** & **Accuracy on Layer N \\(\\downarrow\\)** & **Accuracy on Layer N-1 \\(\\downarrow\\)** \\\\ \\hline DPN\\_MagFace\\_SGD & 86.042\\% & 95.461\\% \\\\ DPN\\_CosFace\\_SGD & 90.719\\% & 93.787\\% \\\\ DPN\\_CosFace\\_AdamW & 87.385\\% & 94.444\\% \\\\ SMAC\\_301 & **69.980\\%** & **68.240\\%** \\\\ \\hline \\hline \\end{tabular}\n\\end{table}\nTable 3: Linear Probes on architectures.
Lower gender classification accuracy is better.\n\nstudying different multi-objective algorithms [34, 60] and NAS techniques [67, 125, 115] to search for inherently fairer models. Further, it would be interesting to study how the properties of the discovered architectures translate across different demographics and populations. Another potential avenue for future work is incorporating priors and beliefs about fairness in society from experts to further improve and aid NAS+HPO methods for fairness. Given the societal importance, it would be interesting to study how our findings translate to real-life face recognition systems under deployment. Finally, it would also be interesting to study the degree to which NAS+HPO can serve as a general bias mitigation strategy beyond the case of facial recognition.\n\nLimitations. While our work is a step forward in both studying the relationship among architectures, hyperparameters, and bias, and in using NAS techniques to mitigate bias in face recognition models, there are important limitations to keep in mind. First, since we only studied a few datasets, our results may not generalize to other datasets and fairness metrics. Second, since face recognition applications span government surveillance [49], target identification from drones [72], and identification in personal photo repositories [36], our findings need to be studied thoroughly across different demographics before they could be deployed in real-life face recognition systems. Furthermore, it is important to consider how the mathematical notions of fairness used in research translate to those actually impacted [94], which is a broad concept without a concise definition.
Before deploying a particular system that is meant to improve fairness in a real-life application, we should always critically ask ourselves whether doing so would indeed prove beneficial to those impacted by the given sociotechnical system under consideration or whether it falls into one of the traps described by Selbst et al. [99]. Additionally, work in bias mitigation, writ large and including our work, can certainly encourage techno-solutionism, which views the reduction of statistical bias from algorithms as a justification for their deployment, use, and proliferation. This can, of course, have benefits, but being able to reduce the bias in a technical system is a _different question_ from whether a technical solution _should_ be used on a given problem. We caution that our work should not be interpreted through a normative lens on the appropriateness of using facial recognition technology.\n\nIn contrast to some other works, we do, however, feel that our work helps to overcome the portability trap [99], since it empowers domain experts to optimize for the right fairness metric, in connection with public policy experts, for the problem at hand rather than only narrowly optimizing one specific metric.
Additionally, the bias mitigation strategy we propose here can be used in other domains and applied to applications with more widespread and socially accepted algorithmic use [19].\n\n#### Acknowledgments\n\nThis research was partially supported by the following sources: NSF CAREER Award IIS-1846237, NSF D-ISN Award #2039862, NSF Award CCF-1852352, NIH R01 Award NLM-013039-01, NIST MSE Award #20126334, DARPA GARD #HR00112020007, DoD WHS Award #HQ003420F0035, ARPA-E Award #4334192; TAILOR, a project funded by EU Horizon 2020 research and innovation programme under GA No 952215; the German Federal Ministry of Education and Research (BMBF, grant RenormalizedFlows 01IS19077C); the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant number 417962828; the European Research Council (ERC) Consolidator Grant "Deep Learning 2.0" (grant no. 101045765). Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the ERC. Neither the European Union nor the ERC can be held responsible for them.\n\n## References\n\n* [1] Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes. _arXiv preprint arXiv:1610.01644_, 2016.\n* [2] Bobby Allyn. 'The computer got it wrong': How facial recognition led to false arrest of black man. _NPR_, June 24, 2020.\n\n* Barocas et al. [2017] Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness in machine learning. _NIPS Tutorial_, 2017.\n* Bello et al. [2021] Irwan Bello, William Fedus, Xianzhi Du, Ekin Dogus Cubuk, Aravind Srinivas, Tsung-Yi Lin, Jonathon Shlens, and Barret Zoph. Revisiting resnets: Improved training and scaling strategies. _Advances in Neural Information Processing Systems_, 34:22614-22627, 2021.\n* Buolamwini and Gebru [2018] Joy Buolamwini and Timnit Gebru.
Gender shades: Intersectional accuracy disparities in commercial gender classification. In _Proceedings of the 1st Conference on Fairness, Accountability and Transparency_, volume 81, pages 77-91, 2018. URL [http://proceedings.mlr.press/v81/buolamwini18a.html](http://proceedings.mlr.press/v81/buolamwini18a.html).\n* Cai et al. [2018] Han Cai, Ligeng Zhu, and Song Han. Proxylessnas: Direct neural architecture search on target task and hardware. _arXiv preprint arXiv:1812.00332_, 2018.\n* Cai et al. [2019] Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train one network and specialize it for efficient deployment. _arXiv preprint arXiv:1908.09791_, 2019.\n* Cao et al. [2018] Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman. Vggface2: A dataset for recognising faces across pose and age. In _2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018)_, pages 67-74. IEEE, 2018.\n* Chang et al. [2020] Hongyan Chang, Ta Duy Nguyen, Sasi Kumar Murakonda, Ehsan Kazemi, and Reza Shokri. On adversarial bias and the robustness of fair machine learning. _arXiv preprint arXiv:2006.08669_, 2020.\n* Chen et al. [2017] Yunpeng Chen, Jianan Li, Huaxin Xiao, Xiaojie Jin, Shuicheng Yan, and Jiashi Feng. Dual path networks. _Advances in neural information processing systems_, 30, 2017.\n* Chen et al. [2021] Zhengsu Chen, Lingxi Xie, Jianwei Niu, Xuefeng Liu, Longhui Wei, and Qi Tian. Visformer: The vision-friendly transformer. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 589-598, 2021.\n* Cherepanova et al. [2021] Valeriia Cherepanova, Micah Goldblum, Harrison Foley, Shiyuan Duan, John P. Dickerson, Gavin Taylor, and Tom Goldstein. Lowkey: leveraging adversarial attacks to protect social media users from facial recognition. In _International Conference on Learning Representations (ICLR)_, 2021.\n* Cherepanova et al. 
[2022] Valeriia Cherepanova, Steven Reich, Samuel Dooley, Hossein Souri, Micah Goldblum, and Tom Goldstein. A deep dive into dataset imbalance and bias in face identification. _arXiv preprint arXiv:2203.08235_, 2022.\n* Cherepanova et al. [2023] Valeriia Cherepanova, Vedant Nanda, Micah Goldblum, John P Dickerson, and Tom Goldstein. Technical challenges for training fair neural networks. _6th AAAI/ACM Conference on AI, Ethics, and Society_, 2023.\n* Chollet [2017] Francois Chollet. Xception: Deep learning with depthwise separable convolutions. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 1251-1258, 2017.\n* Chu et al. [2021] Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers. _Advances in Neural Information Processing Systems_, 34:9355-9366, 2021.\n* Cruz et al. [2020] Andre F Cruz, Pedro Saleiro, Catarina Belem, Carlos Soares, and Pedro Bizarro. A bandit-based algorithm for fairness-aware hyperparameter optimization. _arXiv preprint arXiv:2010.03665_, 2020.\n* Dai et al. [2021] Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Bichen Wu, Zijian He, Zhen Wei, Kan Chen, Yuandong Tian, Matthew Yu, Peter Vajda, et al. Fbnetv3: Joint architecture-recipe search using predictor pretraining. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 16276-16285, 2021.\n\n* [19] Richeek Das and Samuel Dooley. Fairer and more accurate tabular models through nas. _arXiv preprint arXiv:2310.12145_, 2023.\n* [20] Stephane d'Ascoli, Hugo Touvron, Matthew Leavitt, Ari Morcos, Giulio Biroli, and Levent Sagun. Convit: Improving vision transformers with soft convolutional inductive biases. _arXiv preprint arXiv:2103.10697_, 2021.\n* [21] Joan Davins-Valldaura, Said Moussaoui, Guillermo Pita-Gil, and Franck Plestan. 
Parego extensions for multi-objective optimization of expensive evaluation functions. _Journal of Global Optimization_, 67(1):79-96, 2017.\n* [22] Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and TAMT Meyarivan. A fast and elitist multiobjective genetic algorithm: Nsga-ii. _IEEE transactions on evolutionary computation_, 6(2):182-197, 2002.\n* [23] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 4690-4699, 2019.\n* [24] Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, and Aaron Roth. Convergent algorithms for (relaxed) minimax fairness. _arXiv preprint arXiv:2011.03108_, 2020.\n* [25] Michele Donini, Luca Oneto, Shai Ben-David, John S Shawe-Taylor, and Massimiliano Pontil. Empirical risk minimization under fairness constraints. In _Advances in Neural Information Processing Systems_, pages 2791-2801, 2018.\n* [26] Samuel Dooley, Ryan Downing, George Wei, Nathan Shankar, Bradon Thymes, Gudrun Thorkelsdottir, Tiye Kurtz-Miott, Rachel Mattson, Olufemi Obiwumi, Valeria Cherepanova, et al. Comparing human and machine bias in face recognition. _arXiv preprint arXiv:2110.08396_, 2021.\n* [27] Samuel Dooley, George Z Wei, Tom Goldstein, and John Dickerson. Robustness disparities in face detection. _Advances in Neural Information Processing Systems_, 35:38245-38259, 2022.\n* [28] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_, 2020.\n* [29] Pawel Drozdowski, Christian Rathgeb, Antitza Dantcheva, Naser Damer, and Christoph Busch. Demographic bias in biometrics: A survey on an emerging challenge. 
_IEEE Transactions on Technology and Society_, 1(2):89-103, 2020.\n* [30] Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. _The Journal of Machine Learning Research_, 20(1):1997-2017, 2019.\n* [31] Stefan Falkner, Aaron Klein, and Frank Hutter. Bohb: Robust and efficient hyperparameter optimization at scale. In _International Conference on Machine Learning_, pages 1437-1446. PMLR, 2018.\n* [32] Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In _Proceedings of the Annual Conference on Knowledge Discovery and Data Mining (KDD)_, pages 259-268, 2015.\n* [33] Matthias Feurer and Frank Hutter. Hyperparameter optimization. In _Automated machine learning_, pages 3-33. Springer, Cham, 2019.\n* [34] HC Fu and P Liu. A multi-objective optimization model based on non-dominated sorting genetic algorithm. _International Journal of Simulation Modelling_, 18(3):510-520, 2019.\n* [35] Naman Goel, Mohammad Yaghini, and Boi Faltings. Non-discriminatory machine learning through convex fairness criteria. _Proceedings of the AAAI Conference on Artificial Intelligence_, 32(1), 2018. URL [https://ojs.aaai.org/index.php/AAAI/article/view/11662](https://ojs.aaai.org/index.php/AAAI/article/view/11662).\n\n* Google [2021] Google. How google uses pattern recognition to make sense of images, 2021. URL [https://policies.google.com/technologies/pattern-recognition?hl=en-US](https://policies.google.com/technologies/pattern-recognition?hl=en-US).\n* Grother et al. [2019] Patrick Grother, Mei Ngan, and Kayee Hanaoka. _Face Recognition Vendor Test (FRVT): Part 3, Demographic Effects_. National Institute of Standards and Technology, 2019.\n* Grother et al. [2010] Patrick J. Grother, George W. Quinn, and P J. Phillips. Report on the evaluation of 2d still-image face recognition algorithms. _NIST Interagency/Internal Report (NISTIR)_, 2010.
URL [https://doi.org/10.6028/NIST.IR.7709](https://doi.org/10.6028/NIST.IR.7709).\n* Guo et al. [2020] Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In _European conference on computer vision_, pages 544-560. Springer, 2020.\n* Hamidi et al. [2018] Foad Hamidi, Morgan Klaus Scheuerman, and Stacy M Branham. Gender recognition or gender reductionism? the social implications of embedded gender recognition systems. In _Proceedings of the 2018 chi conference on human factors in computing systems_, pages 1-13, 2018.\n* Han et al. [2020] Dongyoon Han, Sangdoo Yun, Byeongho Heo, and YoungJoon Yoo. Rexnet: Diminishing representational bottleneck on convolutional neural network. _arXiv preprint arXiv:2007.00992_, 6, 2020.\n* Han et al. [2020] Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, and Chang Xu. Ghostnet: More features from cheap operations. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 1580-1589, 2020.\n* Han et al. [2021] Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, and Yunhe Wang. Transformer in transformer. _Advances in Neural Information Processing Systems_, 34:15908-15919, 2021.\n* Hardt et al. [2016] Moritz Hardt, Eric Price, Nati Srebro, et al. Equality of opportunity in supervised learning. In _Advances in neural information processing systems_, pages 3315-3323, 2016.\n* Hazirbas et al. [2021] Caner Hazirbas, Joanna Bitton, Brian Dolhansky, Jacqueline Pan, Albert Gordo, and Cristian Canton Ferrer. Towards measuring fairness in ai: the casual conversations dataset. _arXiv preprint arXiv:2104.02821_, 2021.\n* He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.\n* Hill [2020] Kashmir Hill. Another arrest, and jail time, due to a bad facial recognition match. _The New York Times_, 29, 2020.\n* Hill [2020] Kashmir Hill. The secretive company that might end privacy as we know it. In _Ethics of Data and Analytics_, pages 170-177. Auerbach Publications, 2020.\n* Howard et al. [2019] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 1314-1324, 2019.\n* Hu et al. [2018] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 7132-7141, 2018.\n* Huang et al. [2017] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 2261-2269, 2017. doi: 10.1109/CVPR.2017.243.\n\n* Huang et al. [2008] Gary B Huang, Marwan Mattar, Tamara Berg, and Eric Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In _Workshop on faces in 'Real-Life' Images: detection, alignment, and recognition_, 2008.\n* Jaiswal et al. [2022] Siddharth Jaiswal, Karthikeya Duggirala, Abhisek Dash, and Animesh Mukherjee. Two-face: Adversarial audit of commercial face recognition systems. In _Proceedings of the International AAAI Conference on Web and Social Media_, volume 16, pages 381-392, 2022.\n* Joo and Karkkainen [2020] Jungseock Joo and Kimmo Karkkainen. Gender slopes: Counterfactual fairness for computer vision models by attribute manipulation.
_arXiv preprint arXiv:2005.10430_, 2020.\n* Keyes [2018] Os Keyes. The misgendering machines: Trans/hci implications of automatic gender recognition. _Proceedings of the ACM on human-computer interaction_, 2(CSCW):1-22, 2018.\n* Klare et al. [2012] Brendan F Klare, Mark J Burge, Joshua C Klontz, Richard W Vorder Bruegge, and Anil K Jain. Face recognition performance: Role of demographic information. _IEEE Transactions on Information Forensics and Security_, 7(6):1789-1801, 2012.\n* Lacoste et al. [2019] Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. Quantifying the carbon emissions of machine learning. _arXiv preprint arXiv:1910.09700_, 2019.\n* Lahoti et al. [2020] Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, and Ed H. Chi. Fairness without demographics through adversarially reweighted learning. _arXiv preprint arXiv:2006.13114_, 2020.\n* Laumanns and Ocenasek [2002] Marco Laumanns and Jiri Ocenasek. Bayesian optimization algorithms for multi-objective optimization. In _International Conference on Parallel Problem Solving from Nature_, pages 298-307. Springer, 2002.\n* Learned-Miller et al. [2020] Erik Learned-Miller, Vicente Ordonez, Jamie Morgenstern, and Joy Buolamwini. Facial recognition technologies in the wild, 2020.\n* Lee and Park [2020] Youngwan Lee and Jongyoul Park. Centermask: Real-time anchor-free instance segmentation. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 13906-13915, 2020.\n* Leslie [2020] David Leslie. Understanding bias in facial recognition technologies. _arXiv preprint arXiv:2010.07023_, 2020.\n* Li et al. [2017] Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. _The Journal of Machine Learning Research_, 18(1):6765-6816, 2017.\n* Lin et al. [2022] Xiaofeng Lin, Seungbae Kim, and Jungseock Joo. 
Fairgrape: Fairness-aware gradient pruning method for face attribute classification. _arXiv preprint arXiv:2207.10888_, 2022.\n* Lindauer et al. [2022] Marius Lindauer, Katharina Eggensperger, Matthias Feurer, Andre Biedenkapp, Difan Deng, Carolin Benjamins, Tim Ruhkopf, Rene Sass, and Frank Hutter. Smac3: A versatile bayesian optimization package for hyperparameter optimization. _Journal of Machine Learning Research_, 23(54):1-9, 2022. URL [http://jmlr.org/papers/v23/21-0888.html](http://jmlr.org/papers/v23/21-0888.html).\n* Liu et al. [2018] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. _arXiv preprint arXiv:1806.09055_, 2018.\n* Liu et al. [2021] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 10012-10022, 2021.\n* Liu et al. [2015] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In _Proceedings of the IEEE international conference on computer vision_, pages 3730-3738, 2015.\n* Loshchilov and Hutter [2019] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In _International Conference on Learning Representations_, 2019. URL [https://openreview.net/forum?id=Bkg6RiCqY7](https://openreview.net/forum?id=Bkg6RiCqY7).\n\n* [71] Gong Mao-Guo, Jiao Li-Cheng, Yang Dong-Dong, and Ma Wen-Ping. Evolutionary multi-objective optimization algorithms. _Journal of Software_, 20(2), 2009.\n* [72] James Marson and Brett Forrest. Armed low-cost drones, made by turkey, reshape battlefields and geopolitics. _The Wall Street Journal, Jun_, 2021.\n* [73] Natalia Martinez, Martin Bertran, and Guillermo Sapiro. Minimax pareto fairness: A multi objective perspective. In _Proceedings of the 37th International Conference on Machine Learning_, volume 119, pages 6755-6764, 2020. 
URL [http://proceedings.mlr.press/v119/martinez20a.html](http://proceedings.mlr.press/v119/martinez20a.html).\n* [74] Dushyant Mehta, Oleksandr Sotnychenko, Franziska Mueller, Weipeng Xu, Mohamed Elgharib, Pascal Fua, Hans-Peter Seidel, Helge Rhodin, Gerard Pons-Moll, and Christian Theobalt. Xnect: Real-time multi-person 3d human pose estimation with a single rgb camera. _arXiv preprint arXiv:1907.00837_, 2019.\n* [75] Qiang Meng, Shichao Zhao, Zhida Huang, and Feng Zhou. Magface: A universal representation for face recognition and quality assessment. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 14225-14234, 2021.\n* [76] Aythami Morales, Julian Fierrez, Ruben Vera-Rodriguez, and Ruben Tolosana. Sensitivenets: Learning agnostic representations with application to face images. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2020.\n* [77] Stylianos Moschoglou, Athanasios Papaioannou, Christos Sagonas, Jiankang Deng, Irene Kotsia, and Stefanos Zafeiriou. Agedb: the first manually collected, in-the-wild age database. In _Proceedings of the IEEE conference on computer vision and pattern recognition workshops_, pages 51-59, 2017.\n* [78] Amitabha Mukerjee, Rita Biswas, Kalyanmoy Deb, and Amrit P Mathur. Multi-objective evolutionary algorithms for the risk-return trade-off in bank loan management. _International Transactions in operational research_, 2002.\n* [79] Vedant Nanda, Samuel Dooley, Sahil Singla, Soheil Feizi, and John P Dickerson. Fairness through robustness: Investigating robustness disparity in deep learning. In _Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency_, pages 466-477, 2021.\n* [80] Arvind Narayanan. Translation tutorial: 21 fairness definitions and their politics. In _Proc. Conf. Fairness Accountability Transp., New York, USA_, 2018.\n* [81] Eric WT Ngai, Yong Hu, Yiu Hing Wong, Yijun Chen, and Xin Sun.
The application of data mining techniques in financial fraud detection: A classification framework and an academic review of literature. _Decision support systems_, 50(3):559-569, 2011.\n* [82] Alice J O'Toole, P Jonathon Phillips, Xiaobo An, and Joseph Dunlop. Demographic effects on estimates of automatic face recognition performance. _Image and Vision Computing_, 30(3):169-176, 2012.\n* [83] Manisha Padala and Sujit Gujar. Fnnc: Achieving fairness through neural networks. In _Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20_, pages 2277-2283. International Joint Conferences on Artificial Intelligence Organization, 7 2020. doi: 10.24963/ijcai.2020/315. URL [https://doi.org/10.24963/ijcai.2020/315](https://doi.org/10.24963/ijcai.2020/315).\n* [84] Biswajit Paria, Kirthevasan Kandasamy, and Barnabas Poczos. A flexible framework for multi-objective bayesian optimization using random scalarizations. In _Uncertainty in Artificial Intelligence_, pages 766-776. PMLR, 2020.\n* [85] Amandalynne Paullada, Inioluwa Deborah Raji, Emily M Bender, Emily Denton, and Alex Hanna. Data and its (dis)contents: A survey of dataset development and use in machine learning research. _Patterns_, 2(11):100336, 2021.\n* [86] Kenny Peng, Arunesh Mathur, and Arvind Narayanan. Mitigating dataset harms requires stewardship: Lessons from 1000 papers. _arXiv preprint arXiv:2108.02922_, 2021.\n* Perrone et al. [2021] Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, and Cedric Archambeau. Fair bayesian optimization. In _Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society_, pages 854-863, 2021.\n* Pham et al. [2018] Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In _International conference on machine learning_, pages 4095-4104. PMLR, 2018.\n* Quadrianto et al.
[2019] Novi Quadrianto, Viktoriia Sharmanska, and Oliver Thomas. Discovering fair representations in the data domain. In _IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019_, pages 8227-8236. Computer Vision Foundation / IEEE, 2019. doi: 10.1109/CVPR.2019.00842. URL [http://openaccess.thecvf.com/content_CVPR.2019/html/Quadrianto_Discovering_Fair_Representations_in_the_Data_Domain_CVPR.2019_paper.html](http://openaccess.thecvf.com/content_CVPR.2019/html/Quadrianto_Discovering_Fair_Representations_in_the_Data_Domain_CVPR.2019_paper.html).\n* Raji and Buolamwini [2019] Inioluwa Deborah Raji and Joy Buolamwini. Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial ai products. In _Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society_, pages 429-435, 2019.\n* Raji and Fried [2021] Inioluwa Deborah Raji and Genevieve Fried. About face: A survey of facial recognition evaluation. _arXiv preprint arXiv:2102.00813_, 2021.\n* Raji et al. [2020] Inioluwa Deborah Raji, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok Lee, and Emily Denton. Saving face: Investigating the ethical concerns of facial recognition auditing. In _Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society_, pages 145-151, 2020.\n* Ryu et al. [2018] Hee Jung Ryu, Hartwig Adam, and Margaret Mitchell. Inclusivefacenet: Improving face attribute detection with race and gender diversity. _arXiv preprint arXiv:1712.00193_, 2018.\n* Saha et al. [2020] Debjani Saha, Candice Schumann, Duncan C. McElfresh, John P. Dickerson, Michelle L Mazurek, and Michael Carl Tschantz. Measuring non-expert comprehension of machine learning fairness metrics. In _Proceedings of the International Conference on Machine Learning (ICML)_, 2020.\n* Salinas et al. [2022] David Salinas, Matthias Seeger, Aaron Klein, Valerio Perrone, Martin Wistuba, and Cedric Archambeau. 
Syne tune: A library for large scale hyperparameter tuning and reproducible research. In _International Conference on Automated Machine Learning_, pages 16-1. PMLR, 2022.\n* Savani et al. [2020] Yash Savani, Colin White, and Naveen Sundar Govindarajulu. Intra-processing methods for debiasing neural networks. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2020.\n* Schmucker et al. [2020] Robin Schmucker, Michele Donini, Valerio Perrone, Muhammad Bilal Zafar, and Cedric Archambeau. Multi-objective multi-fidelity hyperparameter optimization with application to fairness. In _NeurIPS Workshop on Meta-Learning_, volume 2, 2020.\n* Schmucker et al. [2021] Robin Schmucker, Michele Donini, Muhammad Bilal Zafar, David Salinas, and Cedric Archambeau. Multi-objective asynchronous successive halving. _arXiv preprint arXiv:2106.12639_, 2021.\n* Selbst et al. [2019] Andrew D. Selbst, Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. Fairness and abstraction in sociotechnical systems. In _Proceedings of the Conference on Fairness, Accountability, and Transparency_, FAT* '19, page 59-68, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450361255. doi: 10.1145/3287560.3287598. URL [https://doi.org/10.1145/3287560.3287598](https://doi.org/10.1145/3287560.3287598).\n* Sengupta et al. [2016] Soumyadip Sengupta, Jun-Cheng Chen, Carlos Castillo, Vishal M Patel, Rama Chellappa, and David W Jacobs. Frontal to profile face verification in the wild. In _2016 IEEE winter conference on applications of computer vision (WACV)_, pages 1-9. IEEE, 2016.\n* Simonyan and Zisserman [2014] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. _arXiv preprint arXiv:1409.1556_, 2014.\n\n* Sun et al. [2019] Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for human pose estimation. In _CVPR_, 2019.\n* Szegedy et al. 
[2016] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2818-2826, 2016.\n* Szegedy et al. [2017] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In _Thirty-first AAAI conference on artificial intelligence_, 2017.\n* Tan and Le [2019] Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In _International conference on machine learning_, pages 6105-6114. PMLR, 2019.\n* Verma and Rubin [2018] Sahil Verma and Julia Rubin. Fairness definitions explained. In _2018 IEEE/ACM International Workshop on Software Fairness (FairWare)_, pages 1-7. IEEE, 2018.\n* Wang et al. [2020] Chien-Yao Wang, Hong-Yuan Mark Liao, Yueh-Hua Wu, Ping-Yang Chen, Jun-Wei Hsieh, and I-Hau Yeh. Cspnet: A new backbone that can enhance learning capability of cnn. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops_, pages 390-391, 2020.\n* Wang et al. [2018] Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. Cosface: Large margin cosine loss for deep face recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 5265-5274, 2018.\n* Wang and Deng [2018] Mei Wang and Weihong Deng. Deep face recognition: A survey. _arXiv preprint arXiv:1804.06655_, 2018.\n* Wang and Deng [2020] Mei Wang and Weihong Deng. Mitigating bias in face recognition using skewness-aware reinforcement learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 9322-9331, 2020.\n* Wang et al. [2019] Mei Wang, Weihong Deng, Jiani Hu, Xunqiang Tao, and Yaohai Huang. 
Racial faces in the wild: Reducing racial bias by information maximization adaptation network. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 692-702, 2019.\n* Wang et al. [2021] Qingzhong Wang, Pengfei Zhang, Haoyi Xiong, and Jian Zhao. Face.evolve: A high-performance face recognition library. _arXiv preprint arXiv:2107.08621_, 2021.\n* Wang [2021] Xiaobo Wang. Teacher guided neural architecture search for face recognition. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, pages 2817-2825, 2021.\n* Wang et al. [2020] Zeyu Wang, Klint Qinami, Ioannis Christos Karakozis, Kyle Genova, Prem Nair, Kenji Hata, and Olga Russakovsky. Towards fairness in visual recognition: Effective strategies for bias mitigation, 2020.\n* White et al. [2021] Colin White, Willie Neiswanger, and Yash Savani. Bananas: Bayesian optimization with neural architectures for neural architecture search. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, pages 10293-10301, 2021.\n* White et al. [2023] Colin White, Mahmoud Safari, Rhea Sukthanker, Binxin Ru, Thomas Elsken, Arber Zela, Debadeepta Dey, and Frank Hutter. Neural architecture search: Insights from 1000 papers. _arXiv preprint arXiv:2301.08727_, 2023.\n* Wightman [2019] Ross Wightman. Pytorch image models. [https://github.com/rwightman/pytorch-image-models](https://github.com/rwightman/pytorch-image-models), 2019.\n* Wu et al. [2020] Wenying Wu, Pavlos Protopapas, Zheng Yang, and Panagiotis Michalatos. Gender classification and bias mitigation in facial images. In _12th ACM conference on web science_, pages 106-114, 2020.\n* Xie et al. [2016] Saining Xie, Ross B. Girshick, Piotr Dollar, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. _CoRR_, abs/1611.05431, 2016. URL [http://arxiv.org/abs/1611.05431](http://arxiv.org/abs/1611.05431).\n* Xu et al. [2021] Weijian Xu, Yifan Xu, Tyler Chang, and Zhuowen Tu.
Co-scale conv-attentional image transformers. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 9981-9990, 2021.\n* Xu et al. [2019] Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Qi Tian, and Hongkai Xiong. Pc-darts: Partial channel connections for memory-efficient architecture search. _arXiv preprint arXiv:1907.05737_, 2019.\n* Yu et al. [2018] Fisher Yu, Dequan Wang, Evan Shelhamer, and Trevor Darrell. Deep layer aggregation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2403-2412, 2018.\n* Zafar et al. [2017] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, and Krishna P. Gummadi. Fairness constraints: Mechanisms for fair classification. In _Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA_, volume 54 of _Proceedings of Machine Learning Research_, pages 962-970. PMLR, 2017. URL [http://proceedings.mlr.press/v54/zafar17a.html](http://proceedings.mlr.press/v54/zafar17a.html).\n* Zafar et al. [2019] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, and Krishna P. Gummadi. Fairness constraints: A flexible approach for fair classification. _Journal of Machine Learning Research_, 20(75):1-42, 2019. URL [http://jmlr.org/papers/v20/18-262.html](http://jmlr.org/papers/v20/18-262.html).\n* Zela et al. [2019] Arber Zela, Thomas Elsken, Tonmoy Saikia, Yassine Marrakchi, Thomas Brox, and Frank Hutter. Understanding and robustifying differentiable architecture search. _arXiv preprint arXiv:1909.09656_, 2019.\n* Zhang et al. [2021] Zizhao Zhang, Han Zhang, Long Zhao, Ting Chen, and Tomas Pfister. Aggregating nested transformers. _arXiv preprint arXiv:2105.12723_, 2021.\n* Zheng and Deng [2018] Tianyue Zheng and Weihong Deng. Cross-pose lfw: A database for studying cross-pose face recognition in unconstrained environments. 
_Beijing University of Posts and Telecommunications, Tech. Rep_, 5(7), 2018.\n* Zheng et al. [2017] Tianyue Zheng, Weihong Deng, and Jiani Hu. Cross-age lfw: A database for studying cross-age face recognition in unconstrained environments. _arXiv preprint arXiv:1708.08197_, 2017.\n* Zhu [2019] Ning Zhu. Neural architecture search for deep face recognition. _arXiv preprint arXiv:1904.09523_, 2019.\n\n## Appendix A Ethics Statement\n\nFace recognition systems are being used in more and more parts of daily life, from government surveillance [49], to target identification from drones [72], to identification in personal photo repositories [36]. It is also increasingly evident that many of these models are biased based on race and gender [37, 92, 91]. If left unchecked, these technologies, which make biased decisions about life-changing events, will only deepen existing societal harms. Our work seeks to better understand and mitigate the negative effects that biased face recognition models have on society. By conducting the first large-scale study of the effect of architectures and hyperparameters on bias, and by developing and open-sourcing face recognition models that are fairer than all other competitive models, we provide a resource for practitioners to understand inequalities inherent in face recognition systems and ultimately advance fundamental understanding of the harms and technological ills of these systems.\n\nThat said, we would like to address potential ethical challenges of our work. We believe that the main ethical challenge of this work centers on our use of certain datasets. We acknowledge that the common academic datasets which we used to evaluate our research questions -- CelebA [69] and VGGFace2 [8] -- are datasets of images scraped from the web without the informed consent of those who are depicted. This also includes the datasets we transfer to, including LFW, CFP_FF, CFP_FP, AgeDB, CALFW, CPLFW, and RFW.
This ethical challenge is one that has plagued the research and computer vision community for the last decade [86, 85], and we are excited to see datasets being released which have the fully informed consent of their subjects, such as the Casual Conversations Dataset [45]. Unfortunately, this dataset in particular has a rather restrictive license, much more restrictive than similar datasets, which prohibited its use in our study. Additionally, these datasets all exhibit representational bias, where the categories of people included in the datasets are not equally represented. This can cause many problems; see [14] for a detailed look at representational bias's impact in facial recognition specifically. At least during training, we did address representational bias by balancing the training data between gender presentations appropriately.\n\nWe also acknowledge that while our study is intended to be constructive in performing the first neural architecture search experiments with fairness considerations, the specific ethical challenge we highlight is that of unequal or unfair treatment by these technologies. We note that our work could be taken as a litmus test which could lead to the further proliferation of facial recognition technology and thereby cause other harms. If a system demonstrates that it is less biased than other systems, this could be used as a reason for the further deployment of facial recognition technologies, which could further impinge upon unwitting individuals' freedoms and perpetuate other technological harms. We explicitly caution against this form of techno-solutionism and do not want our work to contribute to a conversation about whether facial recognition technologies _should_ be used by individuals.\n\nExperiments were conducted using a private infrastructure, which has a carbon efficiency of 0.373 kgCO\({}_{2}\)eq/kWh. A cumulative 88,493 hours of computation was performed on hardware of type RTX 2080 Ti (TDP of 250W).
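Under these stated assumptions (250 W TDP, 0.373 kgCO\({}_{2}\)eq/kWh), the reported emissions total can be reproduced with a few lines; this sketch is ours, for illustration, and follows the methodology of the calculator in [58]:

```python
# Back-of-the-envelope check of the emissions estimate. Inputs are
# taken directly from the text; this is illustrative, not the
# ML Impact calculator itself.
gpu_hours = 88_493        # cumulative compute on RTX 2080 Ti
tdp_kw = 0.250            # thermal design power, in kW
kg_co2_per_kwh = 0.373    # carbon efficiency of the private cluster

energy_kwh = gpu_hours * tdp_kw
emissions_kg = energy_kwh * kg_co2_per_kwh
print(round(emissions_kg, 2))  # → 8251.97
```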
Total emissions are estimated to be 8,251.97 kgCO\({}_{2}\)eq, of which 0% was directly offset. Estimations were conducted using the Machine Learning Impact calculator presented in [58]. By releasing all of our raw results, code, and models, we hope that our results will be widely beneficial to researchers and practitioners with respect to designing fair face recognition systems.\n\n## Appendix B Reproducibility Statement\n\nWe ensure that all of our experiments are reproducible by releasing our code and raw data files at [https://github.com/dooleys/FR-NAS](https://github.com/dooleys/FR-NAS). We also release instructions to reproduce our results with the code, as well as the configuration files for all of the models trained. Our experimental setup is described in Section 3 and Appendix C.1. We provide clear documentation on the installation and system requirements needed to reproduce our work, including information about the computing environment, package requirements, dataset download procedures, and license information.
We have independently verified that the experimental framework is reproducible, which should make our work, results, and experiments easily accessible to future researchers and the community.\n\n## Appendix C Further Details on Experimental Design and Results\n\n### Experimental Setup\n\nThe models we study from timm are: coat_lite_small [120], convit_base [20], cspdarknet53 [107], dla102x2 [122], dpn107 [10], ese_vovnet39b [62], fbnetv3_g [18], ghostnet_100 [42], gluon_inception_v3 [103], gluon_xception65 [15], hrnet_w64 [102], ig_resnext101_32x8d [119], inception_resnet_v2 [104], inception_v4 [104], jx_nest_base [126], legacy_senet154 [51], mobilenetv3_large_100 [50], resnetrs101 [4], rexnet_200 [41], selecsls60b [74], swin_base_patch4_window7_224 [68], tf_efficientnet_b7_ns [105], tnt_s_patch16_224 [43], twins_svt_large [16], vgg19 [101], vgg19_bn [101], visformer_small [11], and xception and xception65 [15].\n\nWe study at most 13 configurations per model, i.e., 1 default configuration corresponding to the original model hyperparameters with CosFace as the head, plus at most 12 configurations consisting of the 3 heads (CosFace, ArcFace, MagFace) \(\times\) 2 learning rates (0.1, 0.001) \(\times\) 2 optimizers (SGD, AdamW). All other hyperparameters are held constant when training the models. All model configurations are trained with a total batch size of 64 on 8 RTX 2080 GPUs for 100 epochs each.\n\nWe study these models across five important fairness metrics in face identification: Rank Disparity, Disparity, Ratio, Rank Ratio, and Error Ratio. Each of these metrics is defined in Table 5.\n\n### Additional details on NAS+HPO Search Space\n\nWe replace the repeating 1x1_conv-3x3_conv-1x1_conv block in Dual Path Networks with a simple recurring searchable block, depicted in Figure 6. Furthermore, we stack multiple such searched blocks to closely follow the architecture of Dual Path Networks.
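The size of the resulting architecture space can be sized with a quick back-of-the-envelope sketch (our own illustration; the candidate operation names follow Table 7):

```python
# The searchable block has three operation slots, each filled with one
# of the nine candidate operations listed in Table 7, giving 9**3
# candidate blocks before hyperparameters are even considered.
candidate_ops = [
    "BnConv1x1", "Conv1x1Bn", "Conv1x1",
    "BnConv3x3", "Conv3x3Bn", "Conv3x3",
    "BnConv5x5", "Conv5x5Bn", "Conv5x5",
]
num_slots = 3
print(len(candidate_ops) ** num_slots)  # → 729
```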
We have nine possible choices for each of the three operations in the DPN block, each of which we give a number 0 through 8, depicted in Figure 6. The choices include a vanilla convolution, a convolution with pre-normalization, and a convolution with post-normalization (Table 7). To ensure that all the architectures are tractable in terms of memory consumption during search, we keep the final projection layer (to 1000 dimensions) in timm.\n\n### Obtained architectures and hyperparameter configurations from Black-Box-Optimization\n\nIn Figure 5 we present the architectures and hyperparameters discovered by SMAC. In particular, we observe that a 3x3 convolution followed by batch normalization is a preferred operation and that CosFace is the preferred head/loss choice.\n\n### Analysis of the Pareto front of different Fairness Metrics\n\nIn this section, we include additional plots that support and expand on the main paper. Primarily, we provide further context for the figures in the main body in two ways. First, we provide replication plots of the figures in the main body, but for all models. Recall that the plots in the main body only show models with Error < 0.3, since high-performing models are of the most interest to the community.\n\n\begin{table}\n\begin{tabular}{l l} \multicolumn{1}{c}{**Fairness Metric**} & \multicolumn{1}{c}{**Equation**} \\ \hline Rank Disparity & \(|\text{Rank}(male)-\text{Rank}(female)|\) \\ Disparity & \(|\text{Accuracy}(male)-\text{Accuracy}(female)|\) \\ Ratio & \(|1-\frac{\text{Accuracy}(male)}{\text{Accuracy}(female)}|\) \\ Rank Ratio & \(|1-\frac{\text{Rank}(male)}{\text{Rank}(female)}|\) \\ Error Ratio & \(|1-\frac{\text{Error}(male)}{\text{Error}(female)}|\) \\ \end{tabular}\n\end{table}\nTable 5: The fairness metrics explored in this paper.
Rank Disparity is explored in the main paper and the other metrics are reported in Appendix C.4.\n\n\begin{table}\n\begin{tabular}{c c}\n**Hyperparameter** & **Choices** \\ \hline Architecture Head/Loss & MagFace, ArcFace, CosFace \\ Optimizer Type & Adam, AdamW, SGD \\ Learning rate (conditional) & Adam/AdamW \(\rightarrow[1e-4,1e-2]\), \\ SGD \(\rightarrow[0.09,0.8]\) \\ \hline \end{tabular}\n\end{table}\nTable 6: Searchable hyperparameter choices.\n\n\begin{table}\n\begin{tabular}{c c c}\n**Operation Index** & **Operation** & **Definition** \\ \hline\n0 & BnConv1x1 & Batch Normalization \(\rightarrow\) Convolution with 1x1 kernel \\\n1 & Conv1x1Bn & Convolution with 1x1 kernel \(\rightarrow\) Batch Normalization \\\n2 & Conv1x1 & Convolution with 1x1 kernel \\ \hline\n3 & BnConv3x3 & Batch Normalization \(\rightarrow\) Convolution with 3x3 kernel \\\n4 & Conv3x3Bn & Convolution with 3x3 kernel \(\rightarrow\) Batch Normalization \\\n5 & Conv3x3 & Convolution with 3x3 kernel \\ \hline\n6 & BnConv5x5 & Batch Normalization \(\rightarrow\) Convolution with 5x5 kernel \\\n7 & Conv5x5Bn & Convolution with 5x5 kernel \(\rightarrow\) Batch Normalization \\\n8 & Conv5x5 & Convolution with 5x5 kernel \\ \hline \end{tabular}\n\end{table}\nTable 7: Operation choices and definitions.\n\nFigure 5: SMAC discovers the above building blocks, with (a) corresponding to an architecture with CosFace, SGD optimizer, and a learning rate of 0.2813 as hyperparameters; (b) corresponding to CosFace, with SGD as optimizer and a learning rate of 0.32348; and (c) corresponding to CosFace, with AdamW as optimizer and a learning rate of 0.0006.\n\nFigure 6: DPN block (left) vs. our searchable block (right).\n\nSecond, we also show figures which depict other fairness metrics used in facial identification.
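As a minimal illustration (our own sketch, with hypothetical group statistics), the metrics of Table 5 reduce to a few absolute differences and ratios of per-group statistics:

```python
# Fairness metrics from Table 5, written as functions of per-group
# statistics. Group labels and example values below are hypothetical.
def rank_disparity(rank_a, rank_b):
    return abs(rank_a - rank_b)

def disparity(acc_a, acc_b):
    return abs(acc_a - acc_b)

def ratio(acc_a, acc_b):
    return abs(1 - acc_a / acc_b)

def rank_ratio(rank_a, rank_b):
    return abs(1 - rank_a / rank_b)

def error_ratio(err_a, err_b):
    return abs(1 - err_a / err_b)

# Example: 96% vs. 92% accuracy between two groups
print(round(disparity(0.96, 0.92), 2))  # → 0.04
print(round(ratio(0.96, 0.92), 3))      # → 0.043
```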
The formulas for these additional fairness metrics can be found in Table 5.\n\nWe replicate Figure 2 in Figure 7 and Figure 8. We add additional metrics for CelebA in Figure 9-Figure 11 and for VGGFace2 in Figure 12-Figure 15.\n\nWe find no significant correlations between parameter sizes and different fairness metrics (Figure 16). This observation supports the claim that increases in accuracy and decreases in disparity are very closely tied to the architectures and feature representations of the model, irrespective of the parameter size of the model. Hence, we do not constrain model parameter sizes, which lets our NAS+HPO approach search in a much richer space.\n\n### Comparison to other Bias Mitigation Techniques on all Fairness Metrics\n\nWe have shown that our bias mitigation approach Pareto-dominates the existing bias mitigation techniques in face identification on the Rank Disparity metric. Here, we perform the same experiments but evaluate on the four other metrics discussed in the face identification literature: Disparity, Rank Ratio, Ratio, and Error Ratio.\n\nFigure 10: Replication of Figure 7 on the CelebA validation dataset with the Disparity in accuracy metric.\n\nFigure 9: Replication of Figure 7 on the CelebA validation dataset with Ratio of Ranks (left) and Ratio of Errors (right) metrics.\n\nRecall that we take the top-performing, Pareto-optimal models from Section 4 and apply the three bias mitigation techniques: Flipped, Angular, and SensitiveNets. We also apply these same techniques to the novel architectures that we found.
We report results in Table 1.\n\nIn Table 8, we see that on every metric the SMAC_301 architecture is Pareto-dominant, demonstrating the robustness of our approach.\n\n### Transferability to other Sensitive Attributes\n\nThe superiority of our novel architectures goes beyond accuracy when transferring to other datasets -- our novel architectures have superior fairness properties compared to the existing architectures **even on datasets which have completely different protected attributes than were used in the architecture search**. To inspect the generalizability of our approach to other protected attributes, we transferred our models pre-trained on CelebA and VGGFace2 (which have a gender presentation category) to the RFW dataset [111], which includes a protected attribute for race. We see that our novel architectures always outperform the existing architectures across all five fairness metrics studied in this work. See Table 10 for more details on each metric. We show example Pareto fronts for these transfers in Figure 17. They are always on the Pareto front for all fairness metrics considered, and mostly Pareto-dominate all other architectures on this task. In this setting, since the race label in RFW is not binary, the Rank Disparity metric considered in Table 10 and Figure 17 is computed as the maximum rank disparity between pairs of race labels.\n\nFigure 11: Replication of Figure 7 on the CelebA validation dataset with the Ratio in accuracy metric.\n\nFigure 12: Replication of Figure 8 on the VGGFace2 validation dataset with Ratio of Ranks metric.\n\nFurthermore, we also evaluate the transfer of the fair properties of our models across different age groups on the AgeDB dataset [77]. In this case we use age as the protected attribute. We group the faces into 4 age groups: 1-25, 26-50, 51-75, and 76-110. Then we compute the max disparity across age groups.
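A sketch of this maximum-pairwise-disparity computation for non-binary protected attributes (the per-group accuracies below are hypothetical, for illustration only):

```python
from itertools import combinations

# Max pairwise accuracy disparity across protected groups, as used for
# the age (AgeDB) and race (RFW) transfers. Values are made up.
def max_disparity(acc_by_group):
    return max(abs(a - b) for a, b in combinations(acc_by_group.values(), 2))

age_groups = {"1-25": 0.81, "26-50": 0.84, "51-75": 0.79, "76-110": 0.70}
print(round(max_disparity(age_groups), 2))  # → 0.14
```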
As observed in Table 11, the models discovered by NAS and HPO Pareto-dominate other competitive hand-crafted models. This further emphasizes the generalizability of the fair features learned by these models.\n\nFigure 14: Replication of Figure 8 on the VGGFace2 validation dataset with the Disparity in accuracy metric.\n\nFigure 13: Replication of Figure 8 on the VGGFace2 validation dataset with Ratio of Errors metric.\n\nFigure 16: Correlation map between different fairness metrics and architecture statistics. We find no significant correlation between these objectives, e.g. between fairness metrics and parameter count.\n\n\begin{table}\n\begin{tabular}{l c c} \hline \hline\n**Architecture (trained on VGGFace2)** & **Overall Accuracy \(\uparrow\)** & **Disparity \(\downarrow\)** \\ \hline \hline & 59.18 & 28.9150 \\ DPN\_SGD & 71.87 & 22.4633 \\ DPN\_AdamW & 61.32 & 21.1437 \\ SMAC\_301 & **79.97** & **18.827** \\ \hline \hline\n**Architecture (trained on CelebA)** & **Accuracy** & **Disparity** \\ DPN\_CosFace & 65.55 & 27.2434 \\ DPN\_MagFace & 68.17 & 31.2903 \\ SMAC\_000 & 80.23 & **19.6481** \\ SMAC\_010 & **80.37** & 26.5103 \\ SMAC\_680 & 79.88 & 20.0586 \\ \hline \hline \end{tabular}\n\end{table}\nTable 11: Taking the highest-performing models from the Pareto front of both VGGFace2 and CelebA, we transfer their evaluation onto a dataset with a different protected attribute – age – on the AgeDB dataset [77].
The novel architectures which we found with our bias mitigation strategy are Pareto-dominant with respect to the Accuracy and Disparity metrics.\n\nFigure 15: Replication of Figure 8 on the VGGFace2 validation dataset with the Ratio in accuracy metric.\n\n\begin{table}\n\begin{tabular}{l c c c c c c} \hline \hline \multicolumn{1}{c}{Architecture (trained on VGGFace2)} & LFW & CFP\_FF & CFP\_FP & AgeDB & CALFW & CPLFW \\ \hline \hline & 82.60 & 80.91 & 65.51 & 59.18 & 68.23 & 62.15 \\ DPN\_SGD & 93.00 & 91.81 & 78.96 & 71.87 & 78.27 & 72.97 \\ DPN\_AdamW & 78.66 & 77.17 & 64.35 & 61.32 & 64.78 & 60.30 \\ SMAC\_301 & **96.63** & **95.10** & **86.63** & **79.97** & **86.07** & **81.43** \\ \hline \hline & & & & & & \\ \hline \hline Architecture (trained on CelebA) & LFW & CFP\_FF & CFP\_FP & AgeDB & CALFW & CPLFW \\ \hline & & & & & & \\ \hline & & & & & & \\ \hline & & & & & & \\ \hline & & & & & & \\ \end{tabular}\n\end{table}\nTable 10: Taking the highest-performing models from the Pareto front of both VGGFace2 and CelebA, we transfer their evaluation onto a dataset with a different protected attribute – race – on the RFW dataset [111].
The novel architectures which we found with our bias mitigation strategy are always on the Pareto front, and mostly Pareto-dominate the traditional architectures.\n\n\\begin{table}\n\\begin{tabular}{l c c c c c c c c} \\hline \\hline & \\multicolumn{4}{c}{Rank Disparity} & \\multicolumn{4}{c}{Disparity} \\\\ Model & Baseline & Flipped & Angular & SensitiveNets & Baseline & Flipped & Angular & SensitiveNets \\\\ \\hline SMAC\\_301 & **(3.66;0.23)** & **(4.95;0.18)** & (4.14;0.25) & (6.20;0.41) & **(3.66;0.03)** & **(4.95;0.02)** & (4.14;0.04) & (6.14;0.04) \\\\ DPN & \\((3.56;0.27)\\) & (5.87;0.32) & (6.06;0.36) & (4.76;0.34) & (3.98;0.04) & (5.87;0.05) & (6.06;0.05) & (4.78;0.05) \\\\ ReXNet & \\((4.09;0.27)\\) & (5.73;0.45) & (5.47;0.26) & (4.75;0.25) & (4.09;0.03) & (5.73;0.05) & (5.47;0.05) & (4.75;0.04) \\\\ Swin & \\((5.47;0.38)\\) & (5.75;0.44) & (5.23;0.25) & (5.03;0.30) & (5.47;0.05) & (5.75;0.05) & (5.23;0.04) & (5.03;0.04) \\\\ \\hline & \\multicolumn{4}{c}{Rank Ratio} & \\multicolumn{4}{c}{Ratio} \\\\ Model & Baseline & Flipped & Angular & SensitiveNets & Baseline & Flipped & Angular & SensitiveNets \\\\ \\hline SMAC\\_301 & **(3.66;0.37)** & **(4.95;0.21)** & (4.14.0;3.09) & (6.14;0.41) & **(3.66;0.03)** & **(4.95;0.02)** & (4.14;0.04) & (6.14;0.05) \\\\ DPN & \\((3.98;0.49)\\) & (5.87;0.49) & (6.06;0.54) & (4.78;0.49) & (3.98;0.04) & (5.87;0.06) & (6.06;0.06) & (4.78;0.05) \\\\ ReXNet & \\((4.09;0.41)\\) & (5.73;0.53) & (5.47;0.38) & (4.75;0.34) & (4.09;0.04) & (5.73;0.05) & (5.47;0.05) & (4.75;0.04) \\\\ Swin & \\((5.47;0.47)\\) & (5.75;0.47) & (5.23;0.42) & (5.03;0.43) & (5.47;0.05) & (5.75;0.05) & (5.23;0.05) & (5.03;0.05) \\\\ \\hline & \\multicolumn{4}{c}{Error Ratio} & \\multicolumn{4}{c}{Error Ratio} \\\\ Model & Baseline & Flipped & Angular & SensitiveNets & & & \\\\ \\hline SMAC\\_301 & **(3.66;0.58)** & **(4.95;0.29)** & (4.14;0.60) & (6.14;0.52) & & & \\\\ DPN & \\((3.98;0.65)\\) & (5.87;0.62) & (6.06;0.62) & (4.78;0.69) & & & \\\\ ReXNet & 
\\((4.09;0.60)\\) & (5.73;0.57) & (5.47;0.59) & (4.75;0.58) & & & \\\\ Swin & \\((5.47;0.60)\\) & (5.75;0.56) & (5.23;0.60) & (5.03;0.60) & & & \\\\ \\hline \\hline & & & & & & \\\\ \\end{tabular}\n\\end{table}\nTable 8: Comparison of bias mitigation techniques, where the SMAC models were found on VGGFace2 with the NAS bias mitigation technique and the other three techniques are standard in facial recognition: Flipped [9], Angular [76], and Discriminator [110]. Items in bold are Pareto-optimal. The values show (Error, _metric_).\n\n\\begin{table}\n\\begin{tabular}{l c c c c c c} \\hline \\hline \\multicolumn{1}{c}{Architecture (trained on VGGFace2)} & LFW & CFP\\_FF & CFP\\_FP & AgeDB & CALFW & CPLFW \\\\ \\hline & 82.60 & 80.91 & 65.51 & 59.18 & 68.23 & 62.15 \\\\ DPN\\_SGD & 93.00 & 91.81 & 78.96 & 71.87 & 78.27 & 72.97 \\\\ DPN\\_AdamW & 78.66 & 77.17 & 64.35 & 61.32 & 64.78 & 60.30 \\\\ SMAC\\_301 & **96.63** & **95.10** & **86.63** & **79.97** & **86.07** & **81.43** \\\\ \\hline \\hline & & & & & \\\\ \\end{tabular}\n\\end{table}\nTable 9: Taking the highest-performing models from the Pareto fronts of both VGGFace2 and CelebA, we transfer their evaluation onto six other common face recognition datasets: LFW [53], CFP\\(\\_\\)FF [100], CFP\\(\\_\\)FP [100], AgeDB [77], CALFW [128], CPLFW [127]. The novel architectures which we found with our bias mitigation strategy significantly outperform all other models.\n\nFigure 17: Models trained on CelebA (left) and VGGFace2 (right) evaluated on a dataset with a different protected attribute, specifically on RFW with the racial attribute, and with the Rank Disparity metric. The novel architectures outperform the existing architectures in both settings.\n\nFigure 18: TSNE plots for models pretrained on VGGFace2 on the test set: _(a)_ SMAC model, last layer; _(b)_ DPN MagFace, last layer; _(c)_ SMAC model, second-to-last layer; _(d)_ DPN MagFace, second-to-last layer. 
Note the better linear separability for DPN MagFace in comparison with the SMAC model", "response": "## Review\n\n### Summary\n\nThis paper presents a novel framework combining Neural Architecture Search (NAS) and Hyperparameter Optimization (HPO) to mitigate biases in face recognition models. It challenges traditional bias mitigation approaches by demonstrating that identifying fairer architectures can yield better outcomes than merely enhancing existing high-performing models. The authors conduct extensive experiments on popular datasets, CelebA and VGGFace2, revealing that specific architectures and hyperparameter configurations significantly improve fairness without sacrificing performance. The paper emphasizes the importance of architecture choice in achieving robust performance across various identities, making a valuable contribution to the fairness and bias community.\n\n### Strengths\n\n- The paper introduces a new paradigm for bias mitigation in face recognition by utilizing NAS and HPO.\n- Extensive empirical analysis shows that specific architectures and hyperparameters significantly improve fairness.\n- The authors provide their source code, enhancing reproducibility.\n- The results demonstrate the generalizability of the proposed method across different datasets.\n- The paper is well-motivated and addresses an important issue in machine learning.\n- The presentation is clear and easy to follow.\n\n### Weaknesses\n\n- Lack of discussion on the impact of pretraining on model performance and generalization.\n- The theoretical analysis of why NAS models outperform traditional methods is insufficient.\n- Some experimental design choices need further justification, such as the selection of the SMAC3 optimization method.\n- Visualization of results could be improved for better insights into error analysis.\n- The reported performance in large-scale datasets remains unverified.\n\n### Questions\n\n- Have you considered alternatives to SMAC3 or ParEGO for 
optimization?\n- What patterns in neural architecture contribute to Pareto-optimal performance?\n- Can the authors clarify the impact of training on gender-balanced versus gender-imbalanced datasets?\n\n### Soundness\n\n**Score:** 3\n\n**Description:** 3 = good: The methodology is solid, but some theoretical aspects and justifications could be strengthened.\n\n### Presentation\n\n**Score:** 3\n\n**Description:** 3 = good: The paper is well-organized and clear, though some figures and tables need improvement in resolution and clarity.\n\n### Contribution\n\n**Score:** 4\n\n**Description:** 4 = excellent: The paper makes a significant contribution to the field, presenting novel methods and practical insights into bias mitigation.\n\n### Rating\n\n**Score:** 7\n\n**Description:** 7 = accept, but needs minor improvements: The paper is technically solid with high impact and good evaluation, but some weaknesses need addressing.\n\n### Paper Decision\n\n**Decision:** Accept (oral)\n\n**Reasons:** The paper is original and tackles an important issue in machine learning regarding bias mitigation. It presents a novel approach that shows soundness in methodology and contributes significantly to the field. While there are some weaknesses and areas for improvement, particularly in theoretical explanations and validations on larger datasets, the overall quality and relevance of the work justify acceptance.\n"} {"query": "You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections: \n1. Summary: A summary of the paper in 100-150 words.\n2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.\n3. 
Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are: \n 1 poor\n 2 fair\n 3 good\n 4 excellent\n4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are: \n 1 strong reject\n 2 reject, significant issues present\n 3 reject, not good enough\n 4 possibly reject, but has redeeming facets\n 5 marginally below the acceptance threshold\n 6 marginally above the acceptance threshold\n 7 accept, but needs minor improvements \n 8 accept, good paper\n 9 strong accept, excellent work\n 10 strong accept, should be highlighted at the conference \n5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.\n\nHere is the template for a review format, you must follow this format to output your review result:\n**Summary:**\nSummary content\n\n**Strengths:**\n- Strength 1\n- Strength 2\n- ...\n\n**Weaknesses:**\n- Weakness 1\n- Weakness 2\n- ...\n\n**Questions:**\n- Question 1\n- Question 2\n- ...\n\n**Soundness:**\nSoundness result\n\n**Presentation:**\nPresentation result\n\n**Contribution:**\nContribution result\n\n**Rating:**\nRating result\n\n**Paper Decision:**\n- Decision: Accept/Reject\n- Reasons: reasons content\n\n\nPlease ensure your feedback is objective and constructive. The paper is as follows:\n\n# Hierarchical VAEs provide a normative account of motion processing in the primate brain\n\n Hadi Vafaii\\({}^{1}\\)\n\nvafaii@umd.edu\n\n&Jacob L. Yates\\({}^{2}\\)\n\nyates@berkeley.edu\n\n&Daniel A. 
Butts\\({}^{1}\\)\n\ndab@umd.edu\n\n\\({}^{1}\\)University of Maryland, College Park\n\n\\({}^{2}\\)UC Berkeley\n\n###### Abstract\n\nThe relationship between perception and inference, as postulated by Helmholtz in the 19th century, is paralleled in modern machine learning by generative models like Variational Autoencoders (VAEs) and their hierarchical variants. Here, we evaluate the role of hierarchical inference and its alignment with brain function in the domain of motion perception. We first introduce a novel synthetic data framework, Retinal Optic Flow Learning (ROFL), which enables control over motion statistics and their causes. We then present a new hierarchical VAE and test it against alternative models on two downstream tasks: (i) predicting ground truth causes of retinal optic flow (e.g., self-motion); and (ii) predicting the responses of neurons in the motion processing pathway of primates. We manipulate the model architectures (hierarchical versus non-hierarchical), loss functions, and the causal structure of the motion stimuli. We find that hierarchical latent structure in the model leads to several improvements. First, it improves the linear decodability of ground truth factors and does so in a sparse and disentangled manner. Second, our hierarchical VAE outperforms previous state-of-the-art models in predicting neuronal responses and exhibits sparse latent-to-neuron relationships. These results depend on the causal structure of the world, indicating that alignment between brains and artificial neural networks depends not only on architecture but also on matching ecologically relevant stimulus statistics. Taken together, our results suggest that hierarchical Bayesian inference underlies the brain's understanding of the world, and hierarchical VAEs can effectively model this understanding.\n\n## 1 Introduction\n\nIntelligent interactions with the world require representation of its underlying composition. 
This inferential process has long been postulated to underlie human perception [1, 2, 3, 4, 5, 6, 7, 8, 9], and is paralleled in modern machine learning by generative models [10, 11, 12, 13, 14, 15, 16, 17], which learn latent representations of their sensory inputs. The question of what constitutes a \"good\" representation has no clear answer [18, 19], but several desirable features have been proposed. In the field of neuroscience, studies focused on object recognition have suggested that effective representations \"_untangle_\" the various factors of variation in the input, rendering them linearly decodable [20, 21]. This intuitive notion of linear decodability has emerged in the machine learning community under different names such as \"_informativeness_\" [22] or \"_explicitness_\" [23]. Additionally, it has been suggested that \"_disentangled_\" representations are desirable, wherein distinct, informative factors of variations in the data are separated [24, 25, 26, 27, 28, 29]. Artificial neural networks (ANNs) are also increasingly evaluated based on their alignment with biological neural processing [30, 31, 32, 33, 34, 35, 36, 37, 38], because of the shared goals of ANNs and the brain's sensory processing [25, 39, 40]. Such alignment also provides the possibility of gaining insights into the brain by understanding the operations within an ANN [41, 42, 43, 44, 45, 46, 47].\n\nIn this work, we investigate how the combination of (i) model architecture, (ii) loss function, and (iii) training dataset, affects learned representations, and whether this is related to the brain-alignment of the ANN [41; 44]. We focus specifically on understanding the representation of motion because large sections of the visual cortex are devoted to processing motion [34], and the causes of retinal motion (moving objects and self-motion [48]) can be manipulated systematically. 
Crucially, motion in an image can be described irrespective of the identity and specific visual features that are moving, just as the identity of objects is invariant to how they are moving. This separation of motion and object processing mirrors the division of primate visual processing into dorsal (motion) and ventral (object) streams [49; 50; 51].\n\nWe designed a _naturalistic_ motion simulation based on distributions of ground truth factors corresponding to the location and depth of objects, motion of these objects, motion of the observer, and observer's direction of gaze (i.e., the fixation point; Fig. 1a). We then trained and evaluated an ensemble of autoencoder-based models using our simulated retinal flow data. We based our evaluation on (1) whether the models untangle and disentangle the ground truth factors in our simulation; and (2) the degree to which their latent spaces could be directly related to neural data recorded in the dorsal stream of primates (area MT).\n\nWe introduce a new hierarchical variational autoencoder, the \"compressed\" Nouveau VAE (cNVAE) [52]. The cNVAE exhibited superior performance compared to other models across our multiple evaluation metrics. First, it discovered latent factors that accurately captured the ground truth factors in the simulation in a more disentangled manner than other models. 
Second, it achieved significant improvements in predicting neural responses compared to the previous state-of-the-art model [34], doubling the performance, with sparse mapping from its latent space to neural responses.\n\nTaken together, these observations demonstrate the power of the synthetic data framework and show that a single inductive bias--hierarchical latent structure--leads to many desirable features of representations, including brain alignment.\n\n## 2 Background & Related Work\n\nNeuroscience and VAEs.It has long been argued that perception reflects unconscious inference of the structure of the world constructed from sensory inputs. The concept of \"perception as unconscious inference\" has existed since at least the 19th century [1; 2], and more recently inspired Mumford [3] to conjecture that brains engage in hierarchical Bayesian inference to comprehend the world [3; 4]. These ideas led to the development of Predictive Coding [5; 9; 53; 54; 55; 56; 57; 58], Bayesian Brain Hypothesis [59; 60; 61; 62; 63; 64; 65], and Analysis-by-Synthesis [7], collectively suggesting that brains contain an internal generative model of the world [7; 8; 66; 67]. A similar idea underlies modern generative models [68; 69; 70; 15; 16; 17], especially hierarchical variants of VAEs [71; 72; 73; 52].\n\nThe Nouveau VAE (NVAE) [52] and very deep VAE (vdvae) [71] demonstrated that deep hierarchical VAEs can generate realistic high-resolution images, overcoming the limitations of their non-hierarchical predecessors. However, neither work evaluated how the hierarchical latent structure changed the quality of learned representations. Additionally, both NVAE and vdvae have an undesirable property: their convolutional latents result in a latent space that is several orders of magnitude larger than the input space, defeating a main purpose of autoencoders: compression. Indeed, Hazami et al. 
[74] showed that a tiny subset (around \\(3\\%\\)) of the vdvae latent space is sufficient for comparable input reconstruction. Here, we demonstrate that it is possible to compress hierarchical VAEs and focus on investigating their latent representations with applications to neuroscience data.\n\nEvaluating ANNs on predicting biological neurons.Several studies have focused on evaluating ANNs on their performance in predicting brain responses, but almost entirely on describing static (\"ventral stream\") image processing [30; 33; 36]. In contrast, motion processing (corresponding to the dorsal stream) has only been considered thus far in Mineault et al. [34], who used a 3D ResNet (\"DorsalNet\") to extract ground truth factors about self-motion from drone footage (\"AirSim\", [75]) in a supervised manner. DorsalNet learned representations with receptive fields that matched known features of the primate dorsal stream and achieved state-of-the-art on predicting neural responses on the dataset that we consider here. In addition to our model architecture and training set, a fundamental difference between our approach and Mineault et al. [34] is that they trained their models using direct supervision. As such, their models have access to the ground truth factors at all times.\n\nHere, we demonstrate that it is possible to obtain ground truth factors \"for free\", in a completely unsupervised manner, while achieving better performance in predicting biological neuronal responses.\n\nUsing synthetic data to train ANNs.A core component of a reductionist approach to studying the brain is to characterize neurons based on their selectivity to a particular subset of pre-identified visual \"features\", usually by presenting sets of \"feature-isolating\" stimuli [76]. In the extreme, stimuli are designed that remove all other features except the one under investigation [77]. 
While these approaches can inform how pre-selected feature sets are represented by neural networks, it is often difficult to generalize this understanding to more natural stimuli, which are not necessarily well-described by any one feature set. As a result, here we generate synthetic data representing a _naturalistic_ distribution of natural motion stimuli. Such synthetic datasets allow us to manipulate the causal structure of the world, in order to make hypotheses about what aspects of the world matter for the representations learned by brains and ANNs [78]. Like previous work on synthesized textures [15], here we specifically manipulate the data generative structure to contain factors of variation due to known ground truth factors.\n\n## 3 Approach: Data & Models\n\nRetinal Optic Flow Learning (ROFL).Our synthetic dataset framework, ROFL, generates the resulting optic flow from different world structures, self-motion trajectories, and object motion (Fig. 1a, see also [79]).\n\nROFL can be used to generate _naturalistic_ flow fields that share key elements with those experienced in navigation through 3-D environments. Specifically, each frame contains global patterns that are due to self-motion, including rotation that can arise due to eye or head movement [80, 81]. In addition, local motion patterns can be present due to objects that move independently of the observer [48]. The overall flow pattern is also affected by the observer's direction of gaze (fixation point [82], Fig. 1a).\n\nROFL generates flow vectors that are instantaneous in time, representing the velocity across the visual field resulting from the spatial configuration of the scene and motion vectors of self and object. 
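The instantaneous velocity fields described here follow the classic motion-field geometry for a pinhole observer (Longuet-Higgins & Prazdny). Below is a minimal illustrative sketch of how self-motion translation and rotation map to retinal flow at normalized image coordinates; the function name and sign conventions are our assumptions, not the actual ROFL implementation:

```python
import numpy as np

def motion_field(tvec, omega, depth, xs, ys):
    """Instantaneous image velocity (u, v) at normalized image coordinates
    (xs, ys) for an observer translating with tvec = (Tx, Ty, Tz) and
    rotating with omega = (wx, wy, wz), given scene depth at each point.
    Standard pinhole motion-field equations; signs may differ from ROFL."""
    Tx, Ty, Tz = tvec
    wx, wy, wz = omega
    # Translational component scales with inverse depth; rotational does not.
    u = (-Tx + xs * Tz) / depth + (xs * ys * wx - (1 + xs**2) * wy + ys * wz)
    v = (-Ty + ys * Tz) / depth + ((1 + ys**2) * wx - xs * ys * wy - xs * wz)
    return u, v
```

For pure forward translation, this produces the familiar radial expansion pattern centered on the heading direction, with zero flow at the focus of expansion.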
Ignoring the time-evolution of a given scene (which can arguably be considered separately [83]) dramatically reduces the input space from \\([3\\times H\\times W\\times T]\\) to \\([2\\times H\\times W]\\), and allows a broader sampling of configurations without introducing changes in luminance and texture. As a result, we can explore the role of different causal structures in representation learning in ANNs.\n\nThe retinal flow patterns generated by a moving object depend on both the observer's self-motion and the rotation of their eyes as they maintain fixation in the world, in addition to the motion of the object itself. For example, Fig. 1c demonstrates a situation where the observer is moving forward, and the object is moving to the right, with different object positions: an object on the left side will have its flow patterns distorted, while an object on the right will have its flow patterns largely unaffected because its flow vectors are parallel with those of the self-motion. In summary, ROFL allows us to simulate retinal optic flow with a known ground truth structure driven by object and self-motion.\n\nThe compressed NVAE (cNVAE).The latent space of the NVAE is partitioned into groups, \\(\\mathbf{z}=\\{\\mathbf{z}_{1},\\mathbf{z}_{2},\\ldots,\\mathbf{z}_{L}\\}\\), where \\(L\\) is the number of groups. The latent groups are serially dependent, meaning that the distribution of a given latent group depends on the value of the preceding latents,\n\n\\begin{table}\n\\begin{tabular}{l c c} \\hline \\hline Category & Description & Dimensionality \\\\ \\hline \\multirow{2}{*}{fixate-1} & A moving observer maintains fixation on a background point. & \\multirow{2}{*}{\\(11=2+3+6\\)} \\\\ & In addition, the scene contains one independently moving object. & \\\\ \\hline \\multirow{2}{*}{fixate-0} & Same as fixate-1 but without the object. & \\multirow{2}{*}{\\(5=2+3\\)} \\\\ & A single moving object, stationary observer. 
& \\\\ \\hline \\hline \\end{tabular}\n\\end{table}\nTable 1: ROFL categories used in this paper. Ground truth factors include fixation point (\\(+2\\)); velocity of the observer when self-motion is present (\\(+3\\)); and object position & velocity (\\(+6\\)). Figure 1b showcases a few example frames for each category. The stimuli can be rendered at any given spatial scale \\(N\\), yielding an input shape of \\(2\\times N\\times N\\). Here we work with \\(N=17\\).\n\nsuch that the prior is given by \\(p(\\mathbf{z})=p(\\mathbf{z}_{1})\\cdot\\prod_{\\ell=2}^{L}p(\\mathbf{z}_{\\ell}|\\mathbf{z}_{<\\ell})\\), and the approximate posterior is given by \\(q(\\mathbf{z}|\\mathbf{x})=\\prod_{\\ell=1}^{L}q(\\mathbf{z}_{\\ell}|\\mathbf{z}_{<\\ell},\\mathbf{x})\\) (more details in section 9.1). Additionally, different latent groups in the NVAE operate at different spatial scales (Fig. 2, left), with multiple groups per scale. Crucially, such scale-dependent grouping is absent from non-hierarchical VAEs (Fig. 2, right).\n\nThe cNVAE closely follows the NVAE [52], with one important difference: the original NVAE latent space is convolutional, and ours is not. We modified the _sampler_ layers (grey trapezoids, Fig. 2) such that their receptive field sizes match the spatial scale they operate on. Thus, sampler layers integrate over spatial information before sampling from the approximate posterior. The spatial patterns of each latent dimension are then determined by _expand_ modules (yellow trapezoids, Fig. 2), based on a deconvolution step. Further details about the processing of the sampler and expand layers are provided in Supplementary section 9.2.\n\nOur modification of the NVAE serves two purposes. First, it decouples spatial information from the functionality of latent variables, allowing them to capture abstract features that are invariant to particular spatial locations. Second, it has the effect of compressing the input space into a lower-dimensional latent code. 
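The serially dependent prior \(p(\mathbf{z})=p(\mathbf{z}_{1})\cdot\prod_{\ell=2}^{L}p(\mathbf{z}_{\ell}|\mathbf{z}_{<\ell})\) amounts to top-down ancestral sampling. The sketch below is a toy stand-in: the fixed random affine maps play the role of the cNVAE's learned top-down network, which is not specified at this level of detail here:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hierarchical_prior(num_groups=3, dim=4):
    """Toy ancestral sampling from p(z) = p(z_1) * prod_l p(z_l | z_<l).
    The top group is standard normal; each later group's prior mean is an
    (arbitrary, fixed) affine function of all previously sampled groups."""
    groups = [rng.standard_normal(dim)]           # z_1 ~ N(0, I)
    for _ in range(1, num_groups):
        context = np.concatenate(groups)          # z_<l
        W = rng.standard_normal((dim, context.size)) * 0.1
        mu = W @ context                          # prior mean depends on z_<l
        groups.append(mu + rng.standard_normal(dim))
    return groups
```

In the real model the conditioning maps are learned jointly with the encoder, and the groups live at different spatial scales.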
We explain this in more detail in Supplementary section 9.3.\n\nOur model has the following structure: 3 latent groups operating at the scale of \\(2\\times 2\\); 6 groups at the scale of \\(4\\times 4\\); and 12 groups at the scale of \\(8\\times 8\\) (Table 4, Fig. 2). Therefore, the model has \\(3+6+12=21\\) hierarchical latent groups in total. Each latent group has \\(20\\) latent variables, which results in an overall latent dimensionality of \\(21\\times 20=420\\). See Table 4 and Supplementary section 9.3 for more details.\n\nAlternative models.We evaluated a range of unsupervised models alongside cNVAE, including standard (non-hierarchical) VAEs [11; 12], a hierarchical autoencoder with identical architecture as\n\nFigure 1: Retinal Optic Flow Learning (ROFL): a simulation platform for synthesizing naturalistic optic flow patterns. **(a)** The general setup includes a moving or stationary observer and a solid background, with optional moving object(s) in the scene. More details are provided in the appendix (section 13). **(b)** Example frames showcasing different categories (see Table 1 for definitions). **(c, d)** Demonstrating the causal effects of varying a single ground truth factor while keeping all others fixed: **(c)**\\(X_{obj}\\), the \\(x\\) component of object position (measured in retinal coordinates, orange), and **(d)**\\(F_{x}\\), the \\(X\\) component of the fixation point (measured in fixed coordinates, gray).\n\nthe cNVAE but trained only with reconstruction loss (cNAE), and an autoencoder (AE) counterpart for the VAE (Table 2). All models had the same latent dimensionality (Table 4), and approximately the same number of parameters and convolutional layers. We used endpoint error as our measure of reconstruction loss, which is the Euclidean norm of the difference between actual and reconstructed flow vectors. 
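As a minimal sketch of this endpoint error (assuming flow fields stored as \((2, H, W)\) arrays; the function name is ours):

```python
import numpy as np

def endpoint_error(flow_true, flow_pred):
    """Average endpoint error (EPE): the Euclidean norm of the difference
    between true and reconstructed flow vectors, averaged over pixels.
    Both inputs have shape (2, H, W): the two flow components per pixel."""
    return np.linalg.norm(flow_true - flow_pred, axis=0).mean()
```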
This metric works well with optical flow data [84].\n\nModel representations.We define a model's internal representation to be either the mean of each Gaussian for variational models (i.e., samples drawn from \\(q\\left(\\mathbf{z}|\\mathbf{x}\\right)\\) at zero temperature), or the bottleneck activations for autoencoders. For hierarchical models (cNVAE, cNAE), we concatenate representations across all levels (Table 4).\n\nTraining details.Models were trained for \\(160,000\\) steps at an input scale of \\(17\\times 17\\), requiring slightly over a day on Quadro RTX 5000 GPUs. Please refer to Supplementary section 9.4 for additional details.\n\nDisentanglement and \\(\\beta\\)-VAEs.A critical decision when optimizing VAEs involves determining the weight assigned to the KL term in the loss function compared to the reconstruction loss. Prior research has demonstrated that modifying a single parameter, denoted as \\(\\beta\\), which scales the KL term, can lead to the emergence of disentangled representations [85, 86]. Most studies employing VAEs for image reconstruction typically optimize the standard evidence lower bound (ELBO) loss, where \\(\\beta\\) is fixed at a value of 1 [11, 52, 71]. However, it should be noted that due to the dependence of the reconstruction loss on the input size, any changes in the dimensionality of the input will inevitably alter the relative contribution of the KL term, and thus the \"effective\" \\(\\beta\\)[85].\n\nFurthermore, Higgins et al. [16] recently established a strong correspondence between the generative factors discovered by \\(\\beta\\)-VAEs and the factors encoded by inferotemporal (IT) neurons in the primate ventral stream. The alignment between these factors and IT neurons exhibited a linear relationship with the value of \\(\\beta\\). 
In light of these findings, we explicitly manipulate the parameter \\(\\beta\\) within a range spanning from \\(0.01\\) to \\(10\\) to investigate the extent to which our results depend on its value.\n\n\\begin{table}\n\\begin{tabular}{l l c c} \\hline \\hline Model & Architecture & Loss & Kullback–Leibler term (KL) \\\\ \\hline \\multirow{2}{*}{cNVAE} & \\multirow{2}{*}{Hierarchical} & \\multirow{2}{*}{EPE \\(+\\beta*\\mathrm{KL}\\)} & \\(\\mathrm{KL}=\\sum_{\\ell=1}^{L}\\mathbb{E}_{q\\left(\\mathbf{z}_{<\\ell}|\\mathbf{x}\\right)} \\left[\\mathrm{KL}_{\\ell}\\right],\\) where \\\\ & & & \\(\\mathrm{KL}_{\\ell}\\coloneqq\\mathcal{D}_{\\mathrm{KL}}\\left[q\\left(\\mathbf{z}_{\\ell}| \\mathbf{x},\\mathbf{z}_{<\\ell}\\right)\\|\\,p\\left(\\mathbf{z}_{\\ell}|\\mathbf{z}_{<\\ell}\\right)\\right]\\) \\\\ \\hline VAE & Non-hierarchical & \\(\\mathrm{EPE}+\\beta*\\mathrm{KL}\\) & \\(\\mathrm{KL}=\\mathcal{D}_{\\mathrm{KL}}\\left[q\\left(\\mathbf{z}|\\mathbf{x}\\right)\\|\\,p \\left(\\mathbf{z}\\right)\\right]\\) \\\\ \\hline cNAE & Hierarchical & EPE & - \\\\ \\hline AE & Non-hierarchical & EPE & - \\\\ \\hline \\hline \\end{tabular}\n\\end{table}\nTable 2: Model details. Here, _hierarchical_ means that there are parallel pathways for information to flow from the encoder to the decoder (Fig. 2), which is slightly different from the conventional notion. For variational models, this implies hierarchical dependencies between latents in a statistical sense [71]. This hierarchical dependence is reflected in the KL term for the cNVAE, where \\(L\\) is the number of hierarchical latent groups. See Supplementary section 9.3 for more details and section 9.1 for a derivation. All models have an equal # of latent dimensions (\\(420\\), see Table 4), approximately the same # of convolutional layers, and # of parameters (\\(\\sim 24\\)\\(M\\)). EPE, endpoint error.\n\nFigure 2: Architecture comparison. Left, compressed NVAE (cNVAE); right, non-hierarchical VAE. 
We modified the NVAE _sampler_ layer (grey trapezoids) and introduced a deconvolution _expand_ layer (yellow trapezoids). The encoder (inference) and decoder (generation) pathways are depicted in red and blue, respectively. \\(r\\), residual block; \\(h\\), trainable parameter; \\(+\\), feature combination.\n\n## 4 Results\n\nOur approach is based on the premise that the visual world contains a hierarchical structure. We use a simulation containing a hierarchical structure (ROFL, described above) and a hierarchical VAE (the cNVAE, above) to investigate how these choices affect the learned latent representations. While we are using a relatively simple simulation generated from a small number of ground truth factors, \\(\\mathbf{g}\\), we do not specify how \\(\\mathbf{g}\\) should be represented in our model or include \\(\\mathbf{g}\\) in the loss. Rather, we allow the model to develop its own latent representation in a purely unsupervised manner. See Supplementary section 9.6 for more details on our approach.\n\nWe first consider hierarchical and non-hierarchical VAEs trained on the fixate-1 condition (see Table 1; throughout this work, fixate-1 is used unless stated otherwise). We extracted latent representations from each model and estimated the mutual information (MI) between the representations and ground truth factors such as self-motion, etc. For fixate-1, each data sample is uniquely determined using 11 ground truth factors (Table 1), and the models have latent dimensionality of \\(420\\) (Table 4). Thus, the resulting MI matrix has shape \\(11\\times 420\\), where each entry shows how much information is contained in that latent variable about a given ground truth factor.\n\nFunctional specialization emerges in the cNVAE.Figure 3 shows the MI matrix for the latent space of cNVAE (top) and VAE (bottom). While both models achieved a good reconstruction of validation data (Fig. 
14), the MI matrix for the cNVAE exhibits clusters corresponding to distinct ground truth factors at different levels of the hierarchy. Specifically, object-related factors of variation are largely captured at the top \\(2\\times 2\\) scale, while information about the fixation point can be found across the hierarchy, and self-motion is largely captured by \\(8\\times 8\\) latent groups. In contrast, the non-hierarchical VAE has no such structure, suggesting that the inductive bias of hierarchy enhances the quality of latent spaces, which we quantify next.\n\nEvaluating the latent code.To demonstrate the relationship between ground truth factors and the latent representations discovered by the cNVAE (visible in Fig. 3), we apply metrics referred to as \"untangling\" and \"disentangling\". Additionally, in a separate set of experiments, we also evaluate model representations by relating them to MT neuron responses, which we call \"brain-alignment\". We discuss each of these in detail in the following sections.\n\nUntangling: the cNVAE untangles factors of variation.One desirable feature of a latent representation is whether it makes information about ground truth factors easily (linearly) decodable [20; 21; 87]. This concept has been introduced in the context of core object recognition as \"_untangling_\". Information about object identity that is \"tangled\" in the retinal input is untangled through successive nonlinear transforms, thus making it linearly available for higher brain regions to extract [20]. This concept is closely related to the \"_informativeness_\" metric of Eastwood and Williams [22] and the \"_explicitness_\" metric of Ridgeway and Mozer [23].\n\nTo assess the performance of our models, we evaluated the linear decodability of the ground truth factors, \\(\\mathbf{g}\\), from model latent codes, \\(\\mathbf{z}\\). Based on the \\(R^{2}\\) scores obtained by predicting \\(\\mathbf{g}\\) from \\(\\mathbf{z}\\) using linear regression (Fig. 
4), the cNVAE greatly outperforms competing models, faithfully capturing all ground truth factors. In contrast, the non-hierarchical VAE fails to capture object-related variables. Notably, the cNVAE can recover the fixation point location (\(F_{X}\), \(F_{Y}\)) in physical space almost perfectly. The fixation location has a highly nontrivial effect on the flow patterns, and varying it causes both global and local changes in the flow patterns (Fig. 1d).

Figure 3: Mutual information between latent variables (x-axis) and ground truth factors (y-axis) is shown for the cNVAE (top) and VAE (bottom). Dashed lines indicate \(21\) hierarchical latent groups of \(20\) latents each, comprising a \(420\)-dimensional latent space. These groups operate at three different spatial scales, as indicated. In contrast, the VAE latent space lacks such grouping and operates solely at the spatial scale of \(2\times 2\) (see Fig. 2 and Table 4 for details on model latent configurations).

Furthermore, the cNVAE is the only model that reliably captures object position and velocity: note especially \(V_{obj,z}\) (last column in Fig. 4). Inferring object motion from complex optic flow patterns involves two key components. First, the model must extract self-motion from flow patterns. Second, the model must understand how self-motion influences flow patterns globally. Only then can the model subtract self-motion from the global flow vectors to obtain object motion. In vision science, this is known as the _"flow-parsing hypothesis"_ [88, 89, 90, 91]. Such flow-parsing is achieved by the cNVAE but by none of the other models. See Supplementary section 11 for further discussion of this result and its implications.

**Disentanglement: the cNVAE produces more disentangled representations.** The pursuit of disentanglement in neural representations has garnered considerable attention [23, 85, 92, 93, 94, 95, 96, 97, 98, 99, 100]. In particular, Locatello et al.
[19] established that learning fully disentangled representations is fundamentally impossible without inductive biases. Prior efforts such as \(\beta\)-VAE [85] demonstrated that increasing the weight of the KL loss (indicated by \(\beta\)) promotes disentanglement in VAEs. More recently, Whittington et al. [92] demonstrated that simple biologically inspired constraints such as non-negativity and energy efficiency encourage disentanglement. Here, we demonstrate that another biological inductive bias, hierarchy in the latent space, promotes disentanglement of the latent representations learned by VAEs.

To evaluate the role of hierarchy, we adopted the DCI framework [22], which offers a well-rounded evaluation of latent representations. The approach involves training a simple decoder (e.g., lasso regression) that predicts the data generative factors \(\mathbf{g}\) from a latent code \(\mathbf{z}\), followed by computing a matrix of relative importances (e.g., based on lasso weights), which is then used to evaluate different aspects of code quality: _Informativeness_ measures whether \(\mathbf{z}\) contains easily accessible information about \(\mathbf{g}\) (similar to untangling from above). _Disentanglement_ measures whether individual latents correspond to individual generative factors. _Completeness_ measures how many \(z_{i}\) are required to capture any single \(g_{j}\): if a single latent contributes to \(g_{j}\)'s prediction, the score is 1 (complete); if all latent variables contribute equally to \(g_{j}\)'s prediction, the score is 0 (maximally overcomplete). Note that "_completeness_" is also referred to as "_compactness_" [23]. See Fig. 9 and Supplementary section 9.7.1 for more details, ref. [101] for a review, and ref. [102] for a recent extension of the DCI framework.

Figure 4: Hierarchical VAE untangles underlying factors of variation in data. The linear decodability of ground truth factors (x-axis) from different latent codes is shown. Untangling scores averaged across all ground truth factors are \(\text{cNVAE}=0.898\), \(\text{NVAE}=0.639\), \(\text{VAE}=0.548\), \(\text{cNAE}=0.456\), \(\text{AE}=0.477\), \(\text{PCA}=0.236\), and \(\text{Raw}=0.235\). For variational models, the best-performing \(\beta\) values were selected: cNVAE, \(\beta=0.15\); VAE, \(\beta=1.5\) (see Supplementary section 9.5 for more details).

Figure 5: Evaluating the learned latent codes using the DCI framework [22]. Larger values are better for all metrics. Note that _informativeness_ is closely related to _untangling_ [20, 21]. See also Fig. 9.

We follow the methods outlined by Eastwood and Williams [22] with two modifications: (1) we replaced lasso with linear regression to avoid the strong dependence on the lasso coefficient that we observed, and (2) we estimate the matrix of relative importances using a feature permutation-based algorithm (sklearn.inspection.permutation_importance), which measures the relative performance drop that results from shuffling a given latent.

We found that the cNVAE outperforms competing models across all metrics for a broad range of \(\beta\) values (Fig. 5). The observed inverted-U pattern is consistent with previous work [85], which suggests that there is an optimal \(\beta\) that can be determined empirically. In this case, the cNVAE with \(\beta=0.5\) achieved the best average DCI score. Further, we found that VAEs lacking hierarchical structure learn highly overcomplete codes, such that many latents contribute to predicting a single ground truth factor.
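As a concrete illustration, the modified DCI evaluation above (linear regression plus sklearn.inspection.permutation_importance) can be sketched as follows. This is a toy example on synthetic data with stand-in sizes (\(K=20\) latents, \(J=4\) factors), not the actual evaluation code used for the \(420\)-dimensional latents:

```python
# Toy sketch of the modified DCI evaluation: linear regression + permutation
# importances (stand-in sizes; the paper uses K = 420 latents, J = 11 factors).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
K, J, n = 20, 4, 2000                       # latents, factors, samples (toy)
z = rng.normal(size=(n, K))                 # stand-in latent codes
W = rng.normal(size=(K, J)) * (rng.random((K, J)) < 0.25)  # sparse mixing
g = z @ W + 0.1 * rng.normal(size=(n, J))   # stand-in generative factors

# R[k, j]: performance drop in predicting factor j when latent k is shuffled
R = np.zeros((K, J))
for j in range(J):
    reg = LinearRegression().fit(z, g[:, j])
    imp = permutation_importance(reg, z, g[:, j], n_repeats=5, random_state=0)
    R[:, j] = np.maximum(imp.importances_mean, 0.0)

# Completeness of factor j: 1 + sum_k p_kj log_K p_kj (entropy complement)
P = R / (R.sum(axis=0, keepdims=True) + 1e-12)
completeness = 1.0 + np.sum(P * np.log(P + 1e-12), axis=0) / np.log(K)
print(completeness.round(3))                # one score per factor, in [0, 1]
```

Averaging such per-factor scores yields a single summary value per model; the disentanglement score is computed analogously from the rows rather than the columns of the importance matrix.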
In conclusion, the simple inductive bias of hierarchy in the latent space led to a substantial improvement in VAE performance across all components of the DCI metric.

**Brain-alignment: the cNVAE aligns more closely with MT neurons.** To evaluate model performance in predicting neuronal activity in response to motion stimuli, we used an existing dataset of \(N=141\) MT neurons recorded during the presentation of random dot kinematograms representing smoothly changing combinations of optic flow velocity fields [103, 104]. A subset of these neurons (\(N=84\)) is publicly available on crcns.org and was recently used in Mineault et al. [34], to which we compare our results.

To measure neuronal alignment, we first determined the mapping from each model's latent representation to MT neuron responses (binned spike counts, Fig. 6a). Here, the latent representation is defined as the mean of the predicted Gaussian distributions for VAEs, and the bottleneck activations for AEs. We learn this linear latent-to-neuron mapping using ridge regression. Figure 6b shows the average firing rate of an example neuron along with model predictions. Because sensory neurons have a nonzero response latency, we determined each neuron's optimal response latency as the one that maximized cross-validated performance. The resulting distribution of best-selected latencies (Fig. 6c) peaked around \(100~\text{ms}\), consistent with known MT latencies [103]. We also empirically optimized ridge coefficients to ensure each neuron has its best fit. Figure 6d shows that the models capture the receptive field properties of MT neurons as measured by the spike-triggered average stimulus. To evaluate performance, we follow the methods established by Mineault et al. [34]: whenever repeated trials were available, we report Pearson's \(R\) on that held-out data, normalized by the maximum explainable variance [105].
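The latent-to-neuron mapping just described can be sketched as follows; this is a toy illustration with synthetic stand-in data (not the MT recordings), and the penalty grid passed to `RidgeCV` is an assumption for the example:

```python
# Toy sketch of the linear latent-to-neuron mapping: ridge regression from
# latent means to binned spike counts, with the ridge penalty chosen per
# neuron by cross-validation. Data are synthetic stand-ins, not MT recordings.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, K = 1000, 50                              # time bins, latent dims (toy)
latents = rng.normal(size=(n, K))            # stand-in for model latent means
w_true = np.zeros(K)
w_true[:3] = [1.5, -1.0, 0.5]                # a "sparse" toy neuron
spikes = latents @ w_true + rng.normal(scale=0.5, size=n)

# RidgeCV selects the penalty (alpha) by built-in cross-validation, per neuron
model = RidgeCV(alphas=np.logspace(-2, 3, 20)).fit(latents, spikes)

# Held-out predictive performance (R^2 here, analogous to the cross-validated
# Pearson R reported in the paper)
r2 = cross_val_score(model, latents, spikes, cv=5).mean()
print(round(r2, 3))                          # high for this easy toy problem
```

In the actual pipeline, this fit would be repeated across candidate response latencies per neuron, keeping the latency with the best cross-validated score.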
When repeats were not available, we performed 5-fold cross-validation and report the held-out performance using Pearson's \(R\) between model predictions and spike trains.

**Evaluating brain alignment.** We use two measures of brain alignment: the success at predicting the neural response (Pearson's \(R\), Fig. 7, Table 3), and the "_alignment_" between neurons and individual model latents (Fig. 8, [16]). These mirror the untangling and completeness metrics described above (more details are provided below).

Figure 6: **(a)** Experimental setup from [103, 104]. **(b)** Both models explain MT neural variability well. **(c)** Distribution of best estimated latencies. **(d)** Spike-triggered averages (STA) are shown.

**All models predict MT neuron responses well.** After training a large ensemble of unsupervised models on fixate-1 and learning the neural mapping, we found that both hierarchical (cNVAE & cNAE) and non-hierarchical (VAE & AE) variants had a similar ability to predict neural responses (Fig. 7). Performance did depend on the loss function itself, with the variational loss outperforming the simple autoencoder reconstruction loss (Table 3).

**Hierarchical VAEs are more aligned with MT neurons.** We next tested how these factors affect neural alignment, i.e., how closely neurons are related to individual latents in the model. Figure 8a demonstrates what we mean by "alignment": a sparser latent-to-neuron relationship indicates greater alignment, indicative of a similar representational "form" [16]. See Fig. 10 for an illustration of this idea. To formalize this notion, we use feature permutation importance (described above), applied to the ridge regression models. This yields a \(420\)-dimensional vector per neuron. Each dimension of this vector captures the importance of a given latent variable in predicting the responses of that neuron. We normalize these vectors and interpret them as probabilities of importance.
We then define the alignment score \(a_{i}\) of neuron \(i\) as \(a_{i}=1+\sum_{k=1}^{K}p_{ik}\log_{K}p_{ik}\), where \(p_{ik}\) is interpreted as the importance of the \(k\)-th latent variable in predicting neuron \(i\) (Fig. 8a). This concept is closely related to the "_completeness_" score from the DCI framework discussed above.

Figure 8: Hierarchical models (cNVAE, cNAE) are more aligned with MT neurons because they enable sparse latent-to-neuron relationships. **(a)** The alignment score measures the sparsity of permutation feature importances: \(a_{i}=0\) when all latents are equally important in predicting neuron \(i\), and \(a_{i}=1\) when a single latent predicts the neuron. **(b)** Feature importances are plotted for an example neuron (same as in Fig. 6b). The cNVAE (\(\beta=0.01\)) predicts this neuron's response in a much sparser manner than the non-hierarchical VAE (\(\beta=5\)). Supplementary section 9.5 contains a discussion of our rationale in choosing these \(\beta\) values. **(c)** Alignment across \(\beta\) values, and autoencoders (ae).

Figure 7: All models (pretrained on fixate-1) perform comparably in predicting MT neuron responses. The dashed line corresponds to the previous state-of-the-art on this data [106].

For almost all \(\beta\) values, the cNVAE exhibited greater brain alignment than the non-hierarchical VAE (Fig. 8c; cNVAE > VAE, paired \(t\)-test; see Fig. 16 and Table 5). Similarly, for the autoencoders, we found that the hierarchical variant outperformed the non-hierarchical one (cNAE > AE). Based on these observations, we conclude that higher brain alignment is primarily due to hierarchical latent structure.
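The alignment score is simply one minus the normalized entropy of a neuron's importance vector. A minimal sketch, using made-up importance vectors rather than the fitted permutation importances (which have \(K=420\) entries in the paper):

```python
# Minimal sketch of the alignment score a_i = 1 + sum_k p_ik * log_K(p_ik),
# i.e., one minus the normalized entropy of the importance distribution.
# The importance vectors below are made up for illustration (paper: K = 420).
import numpy as np

def alignment_score(importances):
    """1 = a single latent predicts the neuron; 0 = all latents contribute equally."""
    p = np.asarray(importances, dtype=float)
    p = p / p.sum()                      # normalize to a probability vector
    nz = p[p > 0]                        # 0 * log 0 -> 0 by convention
    return 1.0 + np.sum(nz * np.log(nz)) / np.log(p.size)

print(alignment_score([1.0, 0.0, 0.0, 0.0]))      # maximally sparse -> 1.0
print(alignment_score([0.25, 0.25, 0.25, 0.25]))  # uniform -> ~0.0
```

Using base-\(K\) logarithms keeps the score in \([0, 1]\) regardless of the latent dimensionality, so models with different numbers of latents can be compared directly.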
However, note that hierarchy in the traditional sense did not matter: all these models had approximately the same number of convolutional layers and parameters.

**Factors leading to brain-alignment.** To test the effect of the training dataset (i.e., the category of ROFL) on model performance, we trained cNVAE models using the fixate-0, fixate-1, and obj-1 categories (Table 1), while also exploring a variety of \(\beta\) values. We found that fixate-1 clearly outperformed the other two ROFL categories (Table 3), suggesting that both global (e.g., self-motion) and local (e.g., object motion) sources of variation are necessary for learning MT-like representations. The effect of the loss function was also visible: some \(\beta\) values led to more alignment. But this effect was small compared to the effect of the hierarchical architecture (Fig. 8c).

## 5 Discussion

We introduced a new framework for understanding and evaluating the representation of visual motion learned by artificial and biological neural networks. This framework provides a way to manipulate causes in the world and to evaluate whether learned representations untangle and disentangle those causes. In particular, our framework makes it possible to test the influence of architecture (Fig. 2), loss function (Table 2), and training set (Table 1) on the learned representations, encompassing 3 of the 4 core components of a recently proposed neuroconnectionist research programme [41]. Our framework brings hypothesis testing to the study of biological neural processing of vision and provides an interpretive framework for neurophysiological data.

The goal of the present work was to establish our framework and demonstrate its potential. To this end, we made several simplifying choices, such as training on individual flow frames rather than time-evolving videos. We provide a detailed discussion of study limitations in Supplementary section 8.
Future work will address these by rendering images in simulations and using image-computable models, incorporating real eye-tracking and scene data in ROFL [83, 109], testing our approach on more data from other brain areas such as MST [110, 111], and using more sophisticated methods to measure representational alignment between ANNs and brains [112, 113, 114, 115].

**Conclusion.** We used synthetic data to test how causal structure in the world affects the representations learned by autoencoder-based models and evaluated the learned representations based on how they represent ground truth factors and how well they align with biological brains. We found that a single inductive bias, hierarchical latent structure, leads to desirable representations and increased brain alignment.

\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Pretraining dataset} & \multicolumn{4}{c}{Performance, \(R\) \((\mu\pm se;\ N=141)\)} \\ \cline{3-6} & & \(\beta=0.5\) & \(\beta=0.8\) & \(\beta=1\) & \(\beta=5\) \\ \hline \multirow{3}{*}{cNVAE} & fixate-1 & \(\mathbf{.506\pm.018}\) & \(\mathbf{.517\pm.017}\) & \(\mathbf{.494\pm.018}\) & \(\mathbf{.486\pm.016}\) \\ & fixate-0 & \(\mathbf{.428\pm.018}\) & \(\mathbf{.450\pm.019}\) & \(\mathbf{.442\pm.019}\) & \(\mathbf{.469\pm.018}\) \\ & obj-1 & \(\mathbf{.471\pm.018}\) & \(\mathbf{.465\pm.018}\) & \(\mathbf{.477\pm.017}\) & \(\mathbf{.468\pm.018}\) \\ \hline VAE & fixate-1 & \(\mathbf{.508\pm.019}\) & \(\mathbf{.481\pm.018}\) & \(\mathbf{.494\pm.018}\) & \(\mathbf{.509\pm.018}\) \\ \hline cNAE & fixate-1 & \multicolumn{4}{c}{\(\mathbf{.476\pm.018}\)} \\ \hline AE & fixate-1 & \multicolumn{4}{c}{\(\mathbf{.495\pm.019}\)} \\ \hline CPC [108] & AirSim [75] & \multicolumn{4}{c}{\(\mathbf{.250\pm.020}\) (Mineault et al.
[34])} \\ \hline DorsalNet & AirSim [75] & \multicolumn{4}{c}{\(\mathbf{.251\pm.019}\) (Mineault et al. [34])} \\ \hline \hline \end{tabular}
\end{table}
Table 3: Both the cNVAE and VAE perform well in predicting MT neuron responses, surpassing previous state-of-the-art models by more than twofold. Moreover, the clear gap between fixate-1 and the other categories highlights the importance of pretraining data [107].

## 6 Code & Data

Our code and model checkpoints are available here: [https://github.com/hadivafaii/ROFL-cNVAE](https://github.com/hadivafaii/ROFL-cNVAE).

## 7 Acknowledgments

This work was supported by NSF IIS-2113197 (HV and DAB), NSF DGE-1632976 (HV), and NIH R00EY032179 (JLY). We thank our anonymous reviewers for their helpful comments, and the developers of the software packages used in this project, including PyTorch [116], NumPy [117], SciPy [118], scikit-learn [119], pandas [120], matplotlib [121], and seaborn [122].

## References

* [1] Hermann von Helmholtz. _Handbuch der physiologischen Optik_. Vol. 9. Voss, 1867.
* [2] Ibn al-Haytham. _Book of optics (Kitab Al-Manazir)_. 1011-1021 AD.
* [3] David Mumford. "On the computational architecture of the neocortex: II The role of corticocortical loops". In: _Biological Cybernetics_ 66.3 (1992), pp. 241-251. doi: 10.1007/BF00198477.
* [4] Tai Sing Lee and David Mumford. "Hierarchical Bayesian inference in the visual cortex". In: _JOSA A_ 20.7 (2003), pp. 1434-1448. doi: 10.1364/JOSAA.20.001434.
* [5] Rajesh PN Rao and Dana H Ballard. "Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects". In: _Nature Neuroscience_ 2.1 (1999), pp. 79-87. doi: 10.1038/4580.
* [6] David C Knill and Alexandre Pouget. "The Bayesian brain: the role of uncertainty in neural coding and computation". In: _Trends in Neurosciences_ 27.12 (2004), pp. 712-719.
doi: 10.1016/j.tins.2004.10.007.\n* [7] Alan Yuille and Daniel Kersten. \"Vision as Bayesian inference: analysis by synthesis?\" In: _Trends in Cognitive Sciences_ 10.7 (2006), pp. 301-308. doi: 10.1016/j.tics.2006.05.002.\n* [8] Karl Friston. \"A theory of cortical responses\". In: _Philosophical transactions of the Royal Society B: Biological Sciences_ 360.1456 (2005), pp. 815-836. doi: 10.1098/rstb.2005.1622.\n* [9] Andy Clark. \"Whatever next? Predictive brains, situated agents, and the future of cognitive science\". In: _Behavioral and brain sciences_ 36.3 (2013), pp. 181-204. doi: 10.1017 / S0140525X12000477.\n* [10] Peter Dayan et al. \"The Helmholtz machine\". In: _Neural Computation_ 7.5 (1995), pp. 889-904. doi: 10.1162/neco.1995.7.5.889.\n* [11] Diederik P Kingma and Max Welling. \"Auto-encoding variational bayes\". In: (2014). arXiv: 1312.6114v11 [stat.ML].\n* [12] Danilo Jimenez Rezende et al. \"Stochastic backpropagation and approximate inference in deep generative models\". In: _International Conference on Machine Learning_. PMLR. 2014, pp. 1278-1286. url: [https://proceedings.mlr.press/v32/rezende14.html](https://proceedings.mlr.press/v32/rezende14.html).\n* [13] Lukas Schott et al. \"Towards the first adversarially robust neural network model on MNIST\". In: _International Conference on Learning Representations_. 2019. url: [https://openreview.net/forum?id=S1EHOsC9tX](https://openreview.net/forum?id=S1EHOsC9tX).\n* [14] Ilker Yildirim et al. \"Efficient inverse graphics in biological face processing\". In: _Science Advances_ 6.10 (2020), eaax5979. doi: 10.1126/sciadv.aax5979.\n* [15] Katherine R Storrs et al. \"Unsupervised learning predicts human perception and misperception of gloss\". In: _Nature Human Behaviour_ 5.10 (2021), pp. 1402-1417. doi: 10.1038/s41562-021-01097-6.\n* [16] Irina Higgins et al. \"Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons\". 
In: _Nature Communications_ 12.1 (2021), p. 6456. doi: 10.1038/s41467-021-26751-5.\n* [17] Joseph Marino. \"Predictive coding, variational autoencoders, and biological connections\". In: _Neural Computation_ 34.1 (2022), pp. 1-44. doi: 10.1162/neco_a_01458.\n\n* [18] Irina Higgins et al. \"Towards a definition of disentangled representations\". In: (2018). arXiv: 1812.02230 [cs.LG].\n* [19] Francesco Locatello et al. \"Challenging common assumptions in the unsupervised learning of disentangled representations\". In: _international conference on machine learning_. PMLR. 2019, pp. 4114-4124. url: [https://proceedings.mlr.press/v97/locatello19a.html](https://proceedings.mlr.press/v97/locatello19a.html).\n* [20] James J DiCarlo and David D Cox. \"Untangling invariant object recognition\". In: _Trends in Cognitive Sciences_ 11.8 (2007), pp. 333-341. doi: 10.1016/j.tics.2007.06.010.\n* [21] James J DiCarlo et al. \"How does the brain solve visual object recognition?\" In: _Neuron_ 73.3 (2012), pp. 415-434. doi: 10.1016/j.neuron.2012.01.010.\n* [22] Cian Eastwood and Christopher K. I. Williams. \"A framework for the quantitative evaluation of disentangled representations\". In: _International Conference on Learning Representations_. 2018. url: [https://openreview.net/forum?id=By-7dz-AZ](https://openreview.net/forum?id=By-7dz-AZ).\n* [23] Karl Ridgeway and Michael C Mozer. \"Learning Deep Disentangled Embeddings With the F-Statistic Loss\". In: _Advances in Neural Information Processing Systems_. Vol. 31. Curran Associates, Inc., 2018. url: [https://papers.nips.cc/paper_files/paper/2018/hash/2b24d495052a8ce66358eb576b8912c8-Abstract.html](https://papers.nips.cc/paper_files/paper/2018/hash/2b24d495052a8ce66358eb576b8912c8-Abstract.html).\n* [24] Yoshua Bengio et al. \"Representation learning: A review and new perspectives\". In: _IEEE Transactions on Pattern Analysis and Machine Intelligence_ 35.8 (2013), pp. 1798-1828. 
doi: 10.1109/TPAMI.2013.50.\n* [25] Brenden M Lake et al. \"Building machines that learn and think like people\". In: _Behavioral and Brain Sciences_ 40 (2017), e253. doi: 10.1017/S0140525X16001837.\n* [26] Jonas Peters et al. _Elements of causal inference: foundations and learning algorithms_. The MIT Press, 2017. url: [https://mitpress.mit.edu/9780262037310/elements-of-causal-inference](https://mitpress.mit.edu/9780262037310/elements-of-causal-inference).\n* [27] Yann LeCun et al. \"Deep learning\". In: _Nature_ 521.7553 (2015), pp. 436-444. doi: 10.1038/nature14539.\n* [28] Jurgen Schmidhuber. \"Learning factorial codes by predictability minimization\". In: _Neural Computation_ 4.6 (1992), pp. 863-879. doi: 10.1162/neco.1992.4.6.863.\n* [29] Michael Tschannen et al. \"Recent advances in autoencoder-based representation learning\". In: (2018). arXiv: 1812.05069 [cs.LG].\n* [30] Martin Schrimpf et al. \"Brain-score: Which artificial neural network for object recognition is most brain-like?\" In: _BioRxiv_ (2018), p. 407007. doi: 10.1101/407007.\n* [31] Daniel LK Yamins et al. \"Performance-optimized hierarchical models predict neural responses in higher visual cortex\". In: _Proceedings of the National Academy of Sciences_ 111.23 (2014), pp. 8619-8624. doi: 10.1073/pnas.1403112111.\n* [32] Seyed-Mahdi Khaligh-Razavi and Nikolaus Kriegeskorte. \"Deep supervised, but not unsupervised, models may explain IT cortical representation\". In: _PLoS Computational Biology_ 10.11 (2014), e1003915. doi: 10.1371/journal.pcbi.1003915.\n* [33] Daniel LK Yamins and James J DiCarlo. \"Using goal-driven deep learning models to understand sensory cortex\". In: _Nature Neuroscience_ 19.3 (2016), pp. 356-365. doi: 10.1038/nn.4244.\n* [34] Patrick Mineault et al. \"Your head is there to move you around: Goal-driven models of the primate dorsal pathway\". In: _Advances in Neural Information Processing Systems_. Ed. by M. Ranzato et al. Vol. 34. Curran Associates, Inc., 2021, pp. 
28757-28771. url: [https://papers.nips.cc/paper/2021/hash/f1676935f9304b97d59b0738289d2e22-Abstract.html](https://papers.nips.cc/paper/2021/hash/f1676935f9304b97d59b0738289d2e22-Abstract.html).
* [35] Eric Elmoznino and Michael F Bonner. "High-performing neural network models of visual cortex benefit from high latent dimensionality". In: _bioRxiv_ (2022), pp. 2022-07. doi: 10.1101/2022.07.13.499969.
* [36] Colin Conwell et al. "What can 1.8 billion regressions tell us about the pressures shaping high-level visual representation in brains and machines?" In: _bioRxiv_ (2023). doi: 10.1101/2022.03.28.485868.
* [37] Nicholas J Sexton and Bradley C Love. "Reassessing hierarchical correspondences between brain and deep networks through direct interface". In: _Science Advances_ 8.28 (2022), eabm2219. doi: 10.1126/sciadv.abm2219.
* [38] Greta Tuckute et al. "Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions". In: _bioRxiv_ (2023). doi: 10.1101/2022.09.06.506680.
* [39] Blake Richards et al. "The application of artificial intelligence to biology and neuroscience". In: _Cell_ 185.15 (2022), pp. 2640-2643. doi: 10.1016/j.cell.2022.06.047.
* [40] Anthony Zador et al. "Catalyzing next-generation Artificial Intelligence through NeuroAI". In: _Nature Communications_ 14.1 (2023), p. 1597. doi: 10.1038/s41467-023-37180-x.
* [41] Adrien Doerig et al. "The neuroconnectionist research programme". In: _Nature Reviews Neuroscience_ (2023), pp. 1-20. doi: 10.1038/s41583-023-00705-w.
* [42] Nancy Kanwisher et al. "Using artificial neural networks to ask 'why' questions of minds and brains". In: _Trends in Neurosciences_ (2023). doi: 10.1016/j.tins.2022.12.008.
* [43] Rosa Cao and Daniel Yamins. "Explanatory models in neuroscience: Part 1-taking mechanistic abstraction seriously". In: (2021). arXiv: 2104.01490v2 [q-bio.NC].
* [44] Blake A Richards et al.
"A deep learning framework for neuroscience". In: _Nature Neuroscience_ 22.11 (2019), pp. 1761-1770. doi: 10.1038/s41593-019-0520-2.
* [45] David GT Barrett et al. "Analyzing biological and artificial neural networks: challenges with opportunities for synergy?" In: _Current Opinion in Neurobiology_ 55 (2019), pp. 55-64. doi: 10.1016/j.conb.2019.01.007.
* [46] Thomas Serre. "Deep learning: the good, the bad, and the ugly". In: _Annual Review of Vision Science_ 5 (2019), pp. 399-426. doi: 10.1146/annurev-vision-091718-014951.
* [47] Nikolaus Kriegeskorte. "Deep neural networks: a new framework for modeling biological vision and brain information processing". In: _Annual Review of Vision Science_ 1 (2015), pp. 417-446. doi: 10.1101/029876.
* [48] James J Gibson. "The visual perception of objective motion and subjective movement". In: _Psychological Review_ 61.5 (1954), p. 304. doi: 10.1037/h0061885.
* [49] Leslie Ungerleider and Mortimer Mishkin. "Two cortical visual systems". In: _Analysis of Visual Behavior_ (1982), pp. 549-586. url: [https://www.cns.nyu.edu/~tony/vns/readings/ungerleider-mishkin-1982.pdf](https://www.cns.nyu.edu/~tony/vns/readings/ungerleider-mishkin-1982.pdf).
* [50] Melvyn A Goodale and A David Milner. "Separate visual pathways for perception and action". In: _Trends in Neurosciences_ 15.1 (1992), pp. 20-25. doi: 10.1016/0166-2236(92)90344-8.
* [51] L. G. Ungerleider and L. Pessoa. "What and where pathways". In: _Scholarpedia_ 3.11 (2008). revision #91940, p. 5342. doi: 10.4249/scholarpedia.5342.
* [52] Arash Vahdat and Jan Kautz. "NVAE: A Deep Hierarchical Variational Autoencoder". In: _Advances in Neural Information Processing Systems_. Vol. 33. Curran Associates, Inc., 2020, pp. 19667-19679.
url: [https://papers.nips.cc/paper_files/paper/2020/hash/e3b21256183cf7c2c7a66be163579d37-Abstract.html](https://papers.nips.cc/paper_files/paper/2020/hash/e3b21256183cf7c2c7a66be163579d37-Abstract.html).
* [53] Mandyam V Srinivasan et al. "Predictive coding: a fresh view of inhibition in the retina". In: _Proceedings of the Royal Society of London. Series B. Biological Sciences_ 216.1205 (1982), pp. 427-459. doi: 10.1098/rspb.1982.0085.
* [54] Andre M Bastos et al. "Canonical microcircuits for predictive coding". In: _Neuron_ 76.4 (2012), pp. 695-711. doi: 10.1016/j.neuron.2012.10.038.
* [55] Dawei W Dong and Joseph J Atick. "Temporal decorrelation: a theory of lagged and non-lagged responses in the lateral geniculate nucleus". In: _Network: Computation in Neural Systems_ 6.2 (1995), p. 159. doi: 10.1088/0954-898X_6_2_003.
* [56] Wolf Singer. "Recurrent dynamics in the cerebral cortex: Integration of sensory evidence with stored knowledge". In: _Proceedings of the National Academy of Sciences_ 118 (2021). doi: 10.1073/pnas.2101043118.
* [57] Fabian A Mikulasch et al. "Where is the error? Hierarchical predictive coding through dendritic error computation". In: _Trends in Neurosciences_ 46.1 (2023), pp. 45-59. doi: 10.1016/j.tins.2022.09.007.
* [58] Beren Millidge et al. "Predictive Coding: Towards a Future of Deep Learning beyond Backpropagation?" In: _International Joint Conference on Artificial Intelligence_. 2022. doi: 10.24963/ijcai.2022/774.
* [59] David C Knill and Whitman Richards. _Perception as Bayesian inference_. Cambridge University Press, 1996. doi: 10.1017/CBO9780511984037.
* [60] Yair Weiss et al. "Motion illusions as optimal percepts". In: _Nature Neuroscience_ 5.6 (2002), pp. 598-604.
* [61] Wilson S Geisler and Daniel Kersten. "Illusions, perception and Bayes". In: _Nature Neuroscience_ 5.6 (2002), pp. 508-510. doi: 10.1038/nn0602-508.
* [62] Iris Vilares and Konrad Kording.
\"Bayesian models: the structure of the world, uncertainty, behavior, and the brain\". In: _Annals of the New York Academy of Sciences_ 1224.1 (2011), pp. 22-39. doi: 10.1111/j.1749-6632.2011.05965.x.\n* [63] Richard Langton Gregory. \"Perceptions as hypotheses\". In: _Philosophical Transactions of the Royal Society of London. B, Biological Sciences_ 290.1038 (1980), pp. 181-197. doi: 10.1098/RSTB.1980.0090.\n* [64] Timm Lochman and Sophie Deneve. \"Neural processing as causal inference\". In: _Current Opinion in Neurobiology_ 21.5 (2011), pp. 774-781. doi: 10.1016/j.conb.2011.05.018.\n* [65] Sabyasachi Shivkumar et al. \"A probabilistic population code based on neural samples\". In: _Advances in Neural Information Processing Systems_. Ed. by S. Bengio et al. Vol. 31. Curran Associates, Inc., 2018. URL: [https://papers.nips.cc/paper_files/paper/2018/hash/5401acfe633e6817b508b84d23686743-Abstract.html](https://papers.nips.cc/paper_files/paper/2018/hash/5401acfe633e6817b508b84d23686743-Abstract.html).\n* [66] Jozsef Fiser et al. \"Statistically optimal perception and learning: from behavior to neural representations\". In: _Trends in cognitive sciences_ 14.3 (2010), pp. 119-130. doi: 10.1016/j.tics.2010.01.003.\n* [67] Bruno A. Olshausen. \"Perception as an Inference Problem\". In: _The Cognitive Neurosciences (5th edition)_ (2014). Ed. by Michael Gazzaniga and George R. Mangun. doi: 10.7551/mitpress/9504.003.0037. url: [http://rctn.org/bruno/papers/perception-as-inference.pdf](http://rctn.org/bruno/papers/perception-as-inference.pdf).\n* [68] Ferenc Csikor et al. \"Top-down effects in an early visual cortex inspired hierarchical Variational Autoencoder\". In: _SVRHM 2022 Workshop @ NeurIPS_. 2022. url: [https://openreview.net/forum?id=8dfbo0QfYt3](https://openreview.net/forum?id=8dfbo0QfYt3).\n* [69] Eleni Miliotou et al. \"Generative Decoding of Visual Stimuli\". In: _Proceedings of the 40th International Conference on Machine Learning_. Ed. 
by Andreas Krause et al. Vol. 202. Proceedings of Machine Learning Research. PMLR, July 2023, pp. 24775-24784. url: [https://proceedings.mlr.press/v202/miliotou23a.html](https://proceedings.mlr.press/v202/miliotou23a.html).\n* [70] Yujia Huang et al. \"Neural Networks with Recurrent Generative Feedback\". In: _Advances in Neural Information Processing Systems_. Vol. 33. Curran Associates, Inc., 2020. url: [https://papers.nips.cc/paper_files/paper/2020/hash/0660895c22f8a14eb039bfb9eb0778f-Abstract.html](https://papers.nips.cc/paper_files/paper/2020/hash/0660895c22f8a14eb039bfb9eb0778f-Abstract.html).\n* [71] Rewon Child. \"Very Deep [VAE]s Generalize Autoregressive Models and Can Outperform Them on Images\". In: _International Conference on Learning Representations_. 2021. url: [https://openreview.net/forum?id=RLRXCV6DbeJ](https://openreview.net/forum?id=RLRXCV6DbeJ).\n* [72] Casper Kaae Sonderby et al. \"Ladder Variational Autoencoders\". In: _Advances in Neural Information Processing Systems_. Vol. 29. Curran Associates, Inc., 2016. url: [https://papers.nips.cc/paper_files/paper/2016/hash/6ae07dcb33ec3b7c814df797cbda0f87-Abstract.html](https://papers.nips.cc/paper_files/paper/2016/hash/6ae07dcb33ec3b7c814df797cbda0f87-Abstract.html).\n* [73] Lars Maaloe et al. \"BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling\". In: _Advances in Neural Information Processing Systems_. Vol. 32. Curran Associates, Inc., 2019. url: [https://papers.nips.cc/paper_files/paper/2019/hash/9bd8b1faffa4b3d41779bb495d79fb9-Abstract.html](https://papers.nips.cc/paper_files/paper/2019/hash/9bd8b1faffa4b3d41779bb495d79fb9-Abstract.html).\n* [74] Louay Hazami et al. \"Efficientvdvae: Less is more\". In: (2022). arXiv: 2203.13751v2 [cs.LG].\n* [75] Shital Shah et al. \"Airsim: High-fidelity visual and physical simulation for autonomous vehicles\". In: _Field and Service Robotics: Results of the 11th International Conference_. Springer. 2018, pp. 621-635. 
doi: 10.1007/978-3-319-67361-5_40.\n* [76] Nicole C Rust and J Anthony Movshon. \"In praise of artifice\". In: _Nature Neuroscience_ 8.12 (2005), pp. 1647-1650. doi: 10.1038/nn1606.\n* [77] Bela Julesz. \"Foundations of cyclopean perception\". In: (1971). url: [https://books.google.com/books/about/Foundations_of_Cyclopean_Perception.html?id=K_NFQgAACAAJ](https://books.google.com/books/about/Foundations_of_Cyclopean_Perception.html?id=K_NFQgAACAAJ).\n\n* [78] Tal Golan et al. \"Controversial stimuli: Pitting neural networks against each other as models of human cognition\". In: _Proceedings of the National Academy of Sciences_ 117.47 (2020), pp. 29330-29337. doi: 10.1073/pnas.1912334117.\n* [79] Michael Beyeler et al. \"3D visual response properties of MSTd emerge from an efficient, sparse population code\". In: _Journal of Neuroscience_ 36.32 (2016), pp. 8399-8415. doi: 10.1523/JNEUROSCI.0396-16.2016.\n* [80] James J Gibson. \"The perception of the visual world\". In: (1950). url: [https://psycnet.apa.org/record/1951-04286-000](https://psycnet.apa.org/record/1951-04286-000).\n* [81] William H Warren Jr and Daniel J Hannon. \"Direction of self-motion is perceived from optical flow\". In: _Nature_ 336.6195 (1988), pp. 162-163. doi: 10.1038/336162A0.\n* [82] J. Inigo Thomas et al. _Spherical retinal flow for a fixating observer_. Tech. rep. 1994. url: [https://repository.upenn.edu/entities/publication/f9b44866-54cd-483d-8a17-a51fb732958a](https://repository.upenn.edu/entities/publication/f9b44866-54cd-483d-8a17-a51fb732958a).\n* [83] Jonathan Samir Matthis et al. \"Retinal optic flow during natural locomotion\". In: _PLOS Computational Biology_ 18.2 (2022), e1009575. doi: 10.1371/journal.pcbi.1009575.\n* [84] Eddy Ilg et al. \"Flownet 2.0: Evolution of optical flow estimation with deep networks\". In: _Proceedings of the IEEE conference on computer vision and pattern recognition_. 2017, pp. 2462-2470. doi: 10.1109/CVPR.2017.179.\n* [85] Irina Higgins et al. 
\"beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework\". In: _International Conference on Learning Representations_. 2017. url: [https://openreview.net/forum?id=Sy2fzUggl](https://openreview.net/forum?id=Sy2fzUggl).\n* [86] Christopher P Burgess et al. \"Understanding disentangling in \\(\\beta\\)-VAE\". In: (2018). arXiv: 1804.03599 [stat.ML].\n* [87] Nikolaus Kriegeskorte and Jorn Diedrichsen. \"Peeling the onion of brain representations\". In: _Annual Review of Neuroscience_ 42 (2019), pp. 407-432. doi: 10.1146/annurev-neuro-080317-061906.\n* [88] Simon K Rushton and Paul A Warren. \"Moving observers, relative retinal motion and the detection of object movement\". In: _Current Biology_ 15.14 (2005), R542-R543. doi: 10.1016/j.cub.2005.07.020.\n* [89] Paul A Warren and Simon K Rushton. \"Optic flow processing for the assessment of object movement during ego movement\". In: _Current Biology_ 19.18 (2009), pp. 1555-1560. doi: 10.1016/j.cub.2009.07.057.\n* [90] Paul A Warren and Simon K Rushton. \"Perception of object trajectory: Parsing retinal motion into self and object movement components\". In: _Journal of Vision_ 7.11 (2007), pp. 2-2. doi: 10.1167/7.11.2.\n* [91] Nicole E Peltier et al. \"Optic flow parsing in the macaque monkey\". In: _Journal of Vision_ 20.10 (2020), pp. 8-8. doi: 10.1167/jov.20.10.8.\n* [92] James C. R. Whittington et al. \"Disentanglement with Biological Constraints: A Theory of Functional Cell Types\". In: _The Eleventh International Conference on Learning Representations_. 2023. url: [https://openreview.net/forum?id=9Z_Gfh2nGH](https://openreview.net/forum?id=9Z_Gfh2nGH).\n* [93] Sebastien Lachapelle et al. \"Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning\". In: _Proceedings of the 40th International Conference on Machine Learning_. Ed. by Andreas Krause et al. Vol. 202. Proceedings of Machine Learning Research. PMLR, July 2023, pp. 18171-18206. 
url: [https://proceedings.mlr.press/v202/lachapelle23a.html](https://proceedings.mlr.press/v202/lachapelle23a.html).\n* [94] Abhishek Kumar et al. \"Variational Inference of Disentangled Latent Concepts from Unlabeled Observations\". In: _International Conference on Learning Representations_. 2018. url: [https://openreview.net/forum?id=H1kG7GZAW](https://openreview.net/forum?id=H1kG7GZAW).\n* [95] Hyunjik Kim and Andriy Mnih. \"Disentangling by factorising\". In: _International Conference on Machine Learning_. PMLR. 2018, pp. 2649-2658. url: [http://proceedings.mlr.press/v80/kim18b.html](http://proceedings.mlr.press/v80/kim18b.html).\n* [96] Ricky T. Q. Chen et al. \"Isolating Sources of Disentanglement in Variational Autoencoders\". In: _Advances in Neural Information Processing Systems_. Ed. by S. Bengio et al. Vol. 31. Curran Associates, Inc., 2018. url: [https://papers.nips.cc/paper_files/paper/2018/hash/1ee3dfcd8a0645a25a35977997223d22-Abstract.html](https://papers.nips.cc/paper_files/paper/2018/hash/1ee3dfcd8a0645a25a35977997223d22-Abstract.html).\n\n* [97] Michal Rolinek et al. \"Variational autoencoders pursue PCA directions (by accident)\". In: _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 2019, pp. 12406-12415. url: [https://openaccess.thecvf.com/content_CVPR_2019/html/Rolinek_Variational_Autoencoders_Pursue_PCA_Directions_by_Accident_CVPR_2019_paper.html](https://openaccess.thecvf.com/content_CVPR_2019/html/Rolinek_Variational_Autoencoders_Pursue_PCA_Directions_by_Accident_CVPR_2019_paper.html).\n* [98] Sjoerd van Steenkiste et al. \"Are Disentangled Representations Helpful for Abstract Visual Reasoning?\" In: _Advances in Neural Information Processing Systems_. Vol. 32. Curran Associates, Inc., 2019. 
url: [https://papers.nips.cc/paper_files/paper/2019/hash/bc3c4a6331a8a9950945a1aa8c95ab8a-Abstract.html](https://papers.nips.cc/paper_files/paper/2019/hash/bc3c4a6331a8a9950945a1aa8c95ab8a-Abstract.html).\n* [99] Andrea Dittadi et al. \"On the Transfer of Disentangled Representations in Realistic Settings\". In: _International Conference on Learning Representations_. 2021. url: [https://openreview.net/forum?id=8VXvj1QNR11](https://openreview.net/forum?id=8VXvj1QNR11).\n* [100] W Jeffrey Johnston and Stefano Fusi. \"Abstract representations emerge naturally in neural networks trained to perform multiple tasks\". In: _Nature Communications_ 14.1 (2023), p. 1040. doi: 10.1038/s41467-023-36583-0.\n* [101] Marc-Andre Carbonneau et al. \"Measuring disentanglement: A review of metrics\". In: _IEEE Transactions on Neural Networks and Learning Systems_ (2022). doi: 10.1109/TNNLS.2022.3218982.\n* [102] Cian Eastwood et al. \"DCI-ES: An Extended Disentanglement Framework with Connections to Identifiability\". In: _The Eleventh International Conference on Learning Representations_. 2023. url: [https://openreview.net/forum?id=462z-glgSht](https://openreview.net/forum?id=462z-glgSht).\n* [103] Yuwei Cui et al. \"Diverse suppressive influences in area MT and selectivity to complex motion features\". In: _Journal of Neuroscience_ 33.42 (2013), pp. 16715-16728. doi: 10.1523/JNEUROSCI.0203-13.2013.\n* [104] Yuwei Cui et al. \"Inferring cortical variability from local field potentials\". In: _Journal of Neuroscience_ 36.14 (2016), pp. 4121-4135. doi: 10.1523/JNEUROSCI.2502-15.2016.\n* [105] Maneesh Sahani and Jennifer Linden. \"How Linear are Auditory Cortical Responses?\" In: _Advances in Neural Information Processing Systems_. Vol. 15. MIT Press, 2002. url: [https://papers.nips.cc/paper_files/paper/2002/hash/Tb4773c039d539af17c883eb9283dd14-Abstract.html](https://papers.nips.cc/paper_files/paper/2002/hash/Tb4773c039d539af17c883eb9283dd14-Abstract.html).\n* [106] Patrick J Mineault et al.
\"Hierarchical processing of complex motion along the primate dorsal visual pathway\". In: _Proceedings of the National Academy of Sciences_ 109.16 (2012), E972-E980. doi: 10.1073/pnas.1115685109.\n* [107] Eero P. Simoncelli and Bruno A. Olshausen. \"Natural image statistics and neural representation.\" In: _Annual Review of Neuroscience_ 24 (2001), pp. 1193-1216. doi: 10.1146/annurev.neuro.24.1.1193.\n* [108] Aaron van den Oord et al. \"Representation Learning with Contrastive Predictive Coding\". In: (2019). arXiv: 1807.03748 [cs.LG].\n* [109] Karl S Muller et al. \"Retinal motion statistics during natural locomotion\". In: _eLife_ 12 (2023), e82410. doi: 10.7554/eLife.82410.\n* [110] Benedict Wild and Stefan Treue. \"Primate extrastriate cortical area MST: a gateway between sensation and cognition\". In: _Journal of Neurophysiology_ 125.5 (2021), pp. 1851-1882. doi: 10.1152/jn.00384.2020.\n* [111] Benedict Wild et al. \"Electrophysiological dataset from macaque visual cortical area MST in response to a novel motion stimulus\". In: _Scientific Data_ 9 (2022). doi: 10.1038/s41597-022-01239-z.\n* [112] Alex H Williams et al. \"Generalized Shape Metrics on Neural Representations\". In: _Advances in Neural Information Processing Systems_. Vol. 34. Curran Associates, Inc., 2021. url: [https://papers.nips.cc/paper_files/paper/2021/hash/252a3dbaeb32e7690242ad3b556e626b-Abstract.html](https://papers.nips.cc/paper_files/paper/2021/hash/252a3dbaeb32e7690242ad3b556e626b-Abstract.html).\n* [113] Lyndon Duong et al. \"Representational Dissimilarity Metric Spaces for Stochastic Neural Networks\". In: _The Eleventh International Conference on Learning Representations_. 2023. url: [https://openreview.net/forum?id=xjb563TH-GH](https://openreview.net/forum?id=xjb563TH-GH).\n* [114] Max Klabunde et al. \"Similarity of Neural Network Models: A Survey of Functional and Representational Measures\". In: (2023). arXiv: 2305.06329 [cs.LG].\n\n* [115] Abdulkadir Canatar et al. _A Spectral Theory of Neural Prediction and Alignment_. 2023.
arXiv: 2309.12821 [q-bio.NC].\n* [116] Adam Paszke et al. \"PyTorch: An Imperative Style, High-Performance Deep Learning Library\". In: _Advances in Neural Information Processing Systems_. Vol. 32. Curran Associates, Inc., 2019. url: [https://papers.nips.cc/paper_files/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html](https://papers.nips.cc/paper_files/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html).\n* [117] Charles R. Harris et al. \"Array programming with NumPy\". In: _Nature_ 585.7825 (Sept. 2020), pp. 357-362. doi: 10.1038/s41586-020-2649-2.\n* [118] Pauli Virtanen et al. \"Scipy 1.0: Fundamental Algorithms for Scientific Computing in Python\". In: _Nature Methods_ 17 (2020), pp. 261-272. doi: 10.1038/s41592-019-0686-2.\n* [119] Fabian Pedregosa et al. \"Scikit-learn: Machine learning in Python\". In: _the Journal of Machine Learning Research_ 12 (2011), pp. 2825-2830. doi: 10.5555/1953048.2078195.\n* [120] The pandas development team. _pandas-dev/pandas: Pandas_. Version latest. Feb. 2020. doi: 10.5281/zenodo.3509134.\n* [121] John D Hunter. \"Matplotlib: A 2D graphics environment\". In: _Computing in Science & Engineering_ 9.03 (2007), pp. 90-95. doi: 10.1109/MCSE.2007.55.\n* [122] Michael L Waskom. \"Seaborn: statistical data visualization\". In: _Journal of Open Source Software_ 6.60 (2021), p. 3021. doi: 10.21105/joss.03021.\n* [123] Edward H Adelson and James R Bergen. \"Spatiotemporal energy models for the perception of motion\". In: _Josa a_ 2.2 (1985), pp. 284-299. doi: 10.1364/JOSAA.2.000284.\n* [124] Shinji Nishimoto and Jack L Gallant. \"A three-dimensional spatiotemporal receptive field model explains responses of area MT neurons to naturalistic movies\". In: _Journal of Neuroscience_ 31.41 (2011), pp. 14551-14564. doi: 10.1523/JNEUROSCI.6801-10.2011.\n* [125] Yena Han et al. \"System identification of neural systems: If we got it right, would we know?\" In: _International Conference on Machine Learning_. PMLR. 
2023, pp. 12430-12444. url: [https://proceedings.mlr.press/v202/han23d.html](https://proceedings.mlr.press/v202/han23d.html).\n* [126] Jie Hu et al. \"Squeeze-and-excitation networks\". In: _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 2018, pp. 7132-7141. doi: 10.1109/CVPR.2018.00745.\n* [127] Sergey Ioffe and Christian Szegedy. \"Batch normalization: Accelerating deep network training by reducing internal covariate shift\". In: _International Conference on Machine Learning_. pmlr. 2015, pp. 448-456. url: [https://proceedings.mlr.press/v37/ioffe15.html](https://proceedings.mlr.press/v37/ioffe15.html).\n* [128] Tim Salimans and Durk P Kingma. \"Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks\". In: _Advances in Neural Information Processing Systems_. Ed. by D. Lee et al. Vol. 29. Curran Associates, Inc., 2016. url: [https://papers.nips.cc/paper_files/paper/2016/hash/ed265bc903a5a097f61d3ec064d96d2e-Abstract.html](https://papers.nips.cc/paper_files/paper/2016/hash/ed265bc903a5a097f61d3ec064d96d2e-Abstract.html).\n* [129] Prajit Ramachandran et al. \"Searching for Activation Functions\". In: _International Conference on Learning Representations_. 2018. url: [https://openreview.net/forum?id=SkBYTY2RZ](https://openreview.net/forum?id=SkBYTY2RZ).\n* [130] Stefan Elfwing et al. \"Sigmoid-weighted linear units for neural network function approximation in reinforcement learning\". In: _Neural Networks_ 107 (2018), pp. 3-11. doi: 10.1016/j.neunet.2017.12.012.\n* [131] Yuichi Yoshida and Takeru Miyato. \"Spectral Norm Regularization for Improving the Generalizability of Deep Learning\". In: (2017). arXiv: 1705.10941 [stat.ML].\n* [132] Samuel R. Bowman et al. \"Generating Sentences from a Continuous Space\". In: _Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning_. Berlin, Germany: Association for Computational Linguistics, Aug. 2016, pp. 10-21. 
doi: 10.18653/v1/K16-1002.\n* [133] Hao Fu et al. \"Cyclical Annealing Schedule: A Simple Approach to Mitigating KL Vanishing\". In: _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. Minneapolis, Minnesota: Association for Computational Linguistics, June 2019, pp. 240-250. doi: 10.18653/v1/N19-1021.\n\n* [134] Arash Vahdat et al. \"DVAE++: Discrete Variational Autoencoders with Overlapping Transformations\". In: _Proceedings of the 35th International Conference on Machine Learning_. Ed. by Jennifer Dy and Andreas Krause. Vol. 80. Proceedings of Machine Learning Research. PMLR, July 2018, pp. 5035-5044. url: [https://proceedings.mlr.press/v80/vahdat18a.html](https://proceedings.mlr.press/v80/vahdat18a.html).\n* [135] Xi Chen et al. \"Variational Lossy Autoencoder\". In: _International Conference on Learning Representations_. 2017. url: [https://openreview.net/forum?id=BysvGP5ee](https://openreview.net/forum?id=BysvGP5ee).\n* [136] Diederik P Kingma and Jimmy Ba. \"Adam: A method for stochastic optimization\". In: (2014). arXiv: 1412.6980 [cs.LG].\n* [137] Ilya Loshchilov and Frank Hutter. \"SGDR: Stochastic Gradient Descent with Warm Restarts\". In: _International Conference on Learning Representations_. 2017. url: [https://openreview.net/forum?id=Sku89Sccxx](https://openreview.net/forum?id=Sku89Sccxx).\n* [138] Yoav Benjamini and Yosef Hochberg. \"Controlling the false discovery rate: a practical and powerful approach to multiple testing\". In: _Journal of the Royal Statistical Society: series B (Methodological)_ 57.1 (1995), pp. 289-300. doi: 10.1111/J.2517-6161.1995.TB02031.X.\n* [139] Jacob Cohen. _Statistical power analysis for the behavioral sciences_. Academic press, 1988. doi: 10.2307/2529115.\n* [140] Gregory C. DeAngelis and Dora E. Angelaki. \"Visual-Vestibular Integration for Self-Motion Perception\". 
In: _The Neural Bases of Multisensory Processes_ (2012), pp. 629-644. doi: 10.1201/9781439812174.\n* [141] Eduard Von Holst. \"Relations between the central nervous system and the peripheral organs\". In: _British Journal of Animal Behaviour_ (1954). doi: 10.1016/S0950-5601(54)80044-X.\n* [142] Paul R MacNeilage et al. \"Vestibular facilitation of optic flow parsing\". In: _PLoS One 7.7_ (2012), e40264. doi: 10.1371/journal.pone.0040264.\n* [143] Kathleen E Cullen and Omid A Zobeiri. \"Proprioception and the predictive sensing of active self-motion\". In: _Current Opinion in Physiology_ 20 (2021), pp. 29-38. doi: 10.1016/j.cophys.2020.12.001.\n* [144] Constance S Royden and Ellen C Hildreth. \"Human heading judgments in the presence of moving objects\". In: _Perception & Psychophysics_ 58 (1996), pp. 836-856. doi: 10.3758/BF03205487.\n* [145] William H Warren Jr and Jeffrey A Saunders. \"Perceiving heading in the presence of moving objects\". In: _Perception 24.3_ (1995), pp. 315-331. doi: 10.1068/p240315.\n* [146] Edward AB Horrocks et al. \"Walking humans and running mice: perception and neural encoding of optic flow during self-motion\". In: _Philosophical Transactions of the Royal Society B_ 378.1869 (2023), p. 20210450. doi: 10.1098/rstb.2021.0450.\n* [147] Jean-Paul Noel et al. \"Causal inference during closed-loop navigation: parsing of self-and object-motion\". In: _Philosophical Transactions of the Royal Society B_ 378.1886 (2023), p. 20220344. doi: 10.1098/rstb.2022.0344.\n* [148] Denis N Lee. \"The optic flow field: The foundation of vision\". In: _Philosophical Transactions of the Royal Society of London. B, Biological Sciences_ 290.1038 (1980), pp. 169-179. doi: 10.1098/rstb.1980.0089.\n* [149] Markus Lappe et al. \"Perception of self-motion from visual flow\". In: _Trends in Cognitive Sciences_ 3.9 (1999), pp. 329-336. doi: 10.1016/S1364-6613(99)01364-9.\n* [150] Irina Higgins et al. 
\"Symmetry-based representations for artificial and biological general intelligence\". In: _Frontiers in Computational Neuroscience_ 16 (2022), p. 836498. doi: 10.3389/fncom.2022.836498.\n* [151] Fabio Anselmi et al. \"On invariance and selectivity in representation learning\". In: _Information and Inference: A Journal of the IMA_ 5.2 (2016), pp. 134-158. doi: 10.1093/imaiai/iaw009.\n* [152] Michael M Bronstein et al. \"Geometric deep learning: Grids, groups, graphs, geodesics, and gauges\". In: (2021). arXiv: 2104.13478 [cs.LG].\n* [153] Ishaan Gulrajani et al. \"PixelVAE: A Latent Variable Model for Natural Images\". In: _International Conference on Learning Representations_. 2017. url: [https://openreview.net/forum?id=BJKYvt5lg](https://openreview.net/forum?id=BJKYvt5lg).\n\nSupplementary material for:\n\nHierarchical VAEs provide a normative account of motion processing in the primate brain\n\n## 8 Study Limitations & Considerations\n\nWe established a synthetic data framework that allows hypothesis generation and testing for neural processing of motion. While our paper opens up many interesting avenues for future exploration, it is necessarily simplified, as it focuses on establishing our framework and demonstrating its potential. As such, it currently has several limitations:\n\nFirst, our simulation generates velocity fields, rather than full spatiotemporal movies, which allows our model to avoid mimicking the complexities of motion extraction in the early visual pathway [123, 124]. This strategy also allows for direct comparison with neural data recorded in MT and MST using random dot kinematograms [103, 104, 106]. However, a more complex model would be necessary to explain responses of neurons earlier in the visual pathway, such as V1, which would require a pixel-computable model as in previous work (e.g., DorsalNet [34]).
Likewise, our \(>2\times\) improvement over DorsalNet in explaining MT neural data is likely due in part to the fact that their network was trained to extract motion from spatiotemporal movies, which were not necessarily equivalent to the random dot velocity fields used in the experiments. Thus, it is an open question whether a hierarchical VAE trained and tested on video stimuli would align better with neural data than the model from Mineault et al. [34]. Future work in this space will involve rendering images in simulations and using image-computable models for a fair comparison.\n\nSecond, we chose relatively simple environments and simple fixation rules to generate our optic flow fields, thereby avoiding the true complexity of 3-D natural environments and their interaction with eye movements and self-motion, as has recently been measured [83, 109]. Even in this simplified form, our simulation demonstrates the importance of including such elements in understanding neural representations, and it provides a framework for incorporating real eye-tracking and scene data [83, 109] into future work with ROFL.\n\nFinally, we only tested neural alignment on one experimental paradigm using neurons in area MT, which leaves open the question of whether this is a general principle of brain computation. Addressing this requires testing our approach on more data from other brain areas, such as MST. Based on previous work [106, 110], we expect hierarchical computation to be even more necessary for MST, an expectation that remains to be tested in future work.\n\nInterpreting brain-alignment. We measured the alignment between ANN models and MT neurons using both linear predictive power (Fig. 7) and an alternative measure of alignment that is sensitive to the sparsity of latent-to-neuron relationships (\"alignment-score\", Fig. 8a).
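To make these two kinds of measure concrete, the following is a minimal numpy sketch on simulated latents and a simulated neuron, not the actual model or recordings. The specific choices are ours for illustration: closed-form ridge regression for linear predictive power, and an entropy-based concentration of absolute regression weights as a stand-in for a sparsity-sensitive score (the feature-importance measure used in the paper differs in detail).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: model latents Z (n_stimuli x n_latents) and
# responses y of one simulated neuron driven by a single latent.
n, d = 200, 10
Z = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[3] = 2.0
y = Z @ w_true + 0.1 * rng.standard_normal(n)

# (1) Linear predictive power: closed-form ridge regression, then R^2.
lam = 1.0
w = np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ y)
y_hat = Z @ w
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# (2) Sparsity-sensitive score: how concentrated is the importance
# (here |w|) across latents? One minus normalized entropy gives 1 when
# a single latent explains the neuron and 0 when importance is uniform.
p = np.abs(w) / np.sum(np.abs(w))
entropy = -np.sum(p * np.log(p + 1e-12))
alignment = 1.0 - entropy / np.log(d)

print(round(r2, 3), round(alignment, 3))
```

In this toy example the neuron reads out a single latent, so the concentration score is high; a neuron mixing many latents equally would score near zero even when the ridge fit predicts its responses well, which is the distinction between the two measures.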
Linear regression has been most commonly used to measure similarity, or alignment, between pairs of representations [30, 31, 34, 36], but often results in degenerate [30, 36, 38] and unreliable [125] measures of representational alignment. Consistent with this, we found that linear regression was not effective at differentiating between models: although the cNVAE produced the single best model in terms of neuronal prediction, hierarchical and non-hierarchical VAEs performed similarly in predicting MT neuron responses (Fig. 7).\n\nIn contrast, the alignment score (Fig. 8a) was much more consistent in distinguishing between models (see Fig. 16), and revealed that hierarchical models (both cNVAE and cNAE) had significantly sparser latent-to-neuron relationships. The alignment score measures whether a model has learned a representational \"form\" similar to that of the brain, which is what would enable sparse latent-to-neuron relationships. This concept is closely related to the \"_completeness_\" score from the DCI framework [22]. The alignment score shown in Fig. 8a was also used by Higgins et al. [16], although they used the magnitude of nonzero coefficients (under lasso regression) as their feature importance. However, this alignment score also has limitations; for example, it is not a proper metric in the mathematical sense [112]. Future work will consider more sophisticated metrics for brain alignment [112, 113, 114, 115].\n\n## 9 Additional Methods\n\n### 9.1 VAE loss derivation\n\nSuppose some observed data \(\mathbf{x}\) are sampled from a generative process as follows:\n\n\[p(\mathbf{x})=\int p(\mathbf{x},\mathbf{z})\,d\mathbf{z}=\int p(\mathbf{x}|\mathbf{z})p(\mathbf{z})\,d\mathbf{z}, \tag{1}\]\n\nwhere \(\mathbf{z}\) are latent (or unobserved) variables. In this setting, it is natural to ask which latents \(\mathbf{z}\) are likely, given an observation \(\mathbf{x}\).
In other words, we are interested in posterior inference\n\n\\[p(\\mathbf{z}|\\mathbf{x})\\propto p(\\mathbf{x}|\\mathbf{z})p(\\mathbf{z}). \\tag{2}\\]\n\nThe goal of VAEs is to approximate the (unknown) true posterior \\(p(\\mathbf{z}|\\mathbf{x})\\) with a distribution \\(q(\\mathbf{z}|\\mathbf{x};\\theta)\\), where \\(\\theta\\) are some free parameters to be learned. This goal is achieved by minimizing the Kullback-Leibler divergence between the approximate posterior \\(q\\) and the true posterior \\(p\\):\n\n\\[\\text{Goal:}\\quad\\text{minimize}\\quad\\mathcal{D}_{\\text{KL}}\\Big{[}q(\\mathbf{z}| \\mathbf{x};\\theta)\\,\\big{\\|}\\,p(\\mathbf{z}|\\mathbf{x})\\Big{]}. \\tag{3}\\]\n\nThis objective is intractable, but rearranging the terms leads to the following loss that is also the (negative) variational lower bound on \\(\\log p(\\mathbf{x})\\):\n\n\\[\\mathcal{L}_{\\text{VAE}}=-\\mathbb{E}_{q}\\Big{[}\\log p(\\mathbf{x}|\\mathbf{z};\\theta_{ dec})\\Big{]}+\\mathcal{D}_{\\text{KL}}\\Big{[}q(\\mathbf{z}|\\mathbf{x};\\mathbf{\\theta}_{enc}) \\,\\big{\\|}\\,p(\\mathbf{z}|\\theta_{dec})\\Big{]}, \\tag{4}\\]\n\nwhere \\(\\mathbf{\\theta}_{enc}\\) and \\(\\theta_{dec}\\) are the parameters for the encoder and decoder neural networks (see Fig. 2). The first term in Equation 4 is the reconstruction loss, which we chose to be the Euclidean norm of the differences between predicted and reconstructed velocity vectors (i.e., the endpoint error [84]). We will now focus on the second term in Equation 4, the KL term.\n\n#### 9.1.1 Prior, approximate posterior, and the KL term in vanilla VAE\n\nIn standard non-hierarchical VAEs, the prior is not parameterized. Instead, it is chosen to be a simple distribution such as a Gaussian with zero mean and unit covariance:\n\n\\[p(\\mathbf{z}|\\theta_{dec})\\,\\rightarrow\\,p(\\mathbf{z})=\\mathcal{N}(\\mathbf{0},\\mathbf{I}). 
\\tag{5}\\]\n\nThe approximate posterior is also a Gaussian with mean \\(\\mathbf{\\mu}(\\mathbf{x};\\mathbf{\\theta}_{enc})\\) and variance \\(\\mathbf{\\sigma}^{2}(\\mathbf{x};\\mathbf{\\theta}_{enc})\\):\n\n\\[q(\\mathbf{z}|\\mathbf{x};\\mathbf{\\theta}_{enc})=\\mathcal{N}(\\mathbf{\\mu}(\\mathbf{x};\\mathbf{\\theta}_{ enc}),\\mathbf{\\sigma}^{2}(\\mathbf{x};\\mathbf{\\theta}_{enc})) \\tag{6}\\]\n\nAs a result, we see that the KL term for a vanilla VAE only depends on encoder parameters \\(\\mathbf{\\theta}_{enc}\\).\n\n#### 9.1.2 Prior, approximate posterior, and the KL term in the cNVAE\n\nSimilar to the NVAE [52], the cNVAE latent space is organized hierarchically such that latent groups are sampled sequentially, starting from \"top\" latents all the way down to the \"bottom\" ones (i.e., \\(\\mathbf{z}_{1}\\) to \\(\\mathbf{z}_{3}\\) in Fig. 2). In addition to its hierarchical structure, another important difference between cNVAE and vanilla VAE is that the cNVAE prior is learned from data. Note the \"mid\" and \"bottom\" latent groups in Fig. 2, indicated as \\(\\mathbf{z}_{2}\\) and \\(\\mathbf{z}_{3}\\) respectively: the cNVAE is designed in a way that changing the parameters along the decoder pathway will impact the prior distributions on \\(\\mathbf{z}_{2}\\) and \\(\\mathbf{z}_{3}\\) (but not \\(\\mathbf{z}_{1}\\)). Note the \"\\(h\\)\" in Fig. 2, which is also a learnable parameter. In summary, the \"top\" cNVAE latents (e.g., \\(\\mathbf{z}_{1}\\) in Fig. 2) have a fixed prior distribution similar to vanilla VAEs; whereas, the prior distribution for every other cNVAE latent group is parametrized and learned from data.\n\nMore formally, the cNVAE latents are partitioned into disjoint groups, \\(\\mathbf{z}=\\{\\mathbf{z}_{1},\\mathbf{z}_{2},\\ldots,\\mathbf{z}_{L}\\}\\), where \\(L\\) is the number of groups. 
Then, the prior is represented by:\n\n\[p(\mathbf{z}|\theta_{dec})=p(\mathbf{z}_{1})\cdot\prod_{\ell=2}^{L}p(\mathbf{z}_{\ell}|\mathbf{z}_{<\ell};\theta_{dec}), \tag{7}\]\n\nwhere the conditionals are factorial Normal distributions. For the first latent group we have \(p(\mathbf{z}_{1})=\mathcal{N}(\mathbf{0},\mathbf{I})\), as in vanilla VAEs. For every other latent group, we have:\n\n\[p(\mathbf{z}_{\ell}|\mathbf{z}_{<\ell};\theta_{dec})=\mathcal{N}(\mathbf{\mu}(\mathbf{z}_{<\ell};\theta_{dec}),\mathbf{\sigma}^{2}(\mathbf{z}_{<\ell};\theta_{dec})), \tag{8}\]\n\nwhere \(\mathbf{\mu}(\mathbf{z}_{<\ell};\theta_{dec})\) and \(\mathbf{\sigma}^{2}(\mathbf{z}_{<\ell};\theta_{dec})\) are output by the decoder _sampler_ layers. Similarly, the approximate posterior in the cNVAE is represented by:\n\n\[q(\mathbf{z}|\mathbf{x};\mathbf{\theta}_{enc})=\prod_{\ell=1}^{L}q(\mathbf{z}_{\ell}|\mathbf{z}_{<\ell},\mathbf{x};\mathbf{\theta}_{enc}). \tag{9}\]\n\nWe adopt a Gaussian parameterization for each conditional in the approximate posterior:\n\n\[q(\mathbf{z}_{\ell}|\mathbf{z}_{<\ell},\mathbf{x};\mathbf{\theta}_{enc})=\mathcal{N}(\mathbf{\mu}(\mathbf{z}_{<\ell},\mathbf{x};\mathbf{\theta}_{enc}),\mathbf{\sigma}^{2}(\mathbf{z}_{<\ell},\mathbf{x};\mathbf{\theta}_{enc})), \tag{10}\]\n\nwhere \(\mathbf{\mu}(\mathbf{z}_{<\ell},\mathbf{x};\mathbf{\theta}_{enc})\) and \(\mathbf{\sigma}^{2}(\mathbf{z}_{<\ell},\mathbf{x};\mathbf{\theta}_{enc})\) are output by the encoder _sampler_ layers (Fig. 2; grey trapezoids).
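The generative structure of Equations 7-10 amounts to ancestral sampling: each group's prior parameters are computed from the groups sampled above it. Below is a minimal numpy sketch of this process; the small linear maps standing in for the decoder sampler layers, and all sizes, are illustrative placeholders rather than the actual cNVAE architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

L, dim = 3, 4  # number of latent groups and (toy) group size

# Placeholder "sampler layer" weights for groups 2..L; group ell
# conditions on the concatenation of all previous groups (size ell*dim
# for the map feeding group ell+1 in this indexing).
W_mu = [0.1 * rng.standard_normal((dim, ell * dim)) for ell in range(1, L)]
W_logsig = [0.1 * rng.standard_normal((dim, ell * dim)) for ell in range(1, L)]

z_groups = [rng.standard_normal(dim)]  # z_1 ~ p(z_1) = N(0, I), fixed prior

for ell in range(2, L + 1):
    ctx = np.concatenate(z_groups)              # z_<ell
    mu = W_mu[ell - 2] @ ctx                    # mu(z_<ell; theta_dec)
    sigma = np.exp(W_logsig[ell - 2] @ ctx)     # sigma(z_<ell; theta_dec) > 0
    z_groups.append(mu + sigma * rng.standard_normal(dim))

z = np.concatenate(z_groups)                    # full latent z = {z_1, ..., z_L}
print(z.shape)
```

The same loop structure describes the approximate posterior of Equations 9-10, except that the mean and variance of each group would additionally condition on the input \(\mathbf{x}\) through the encoder.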
We are now in a position to explicitly write down the KL term from Equation 4 for the cNVAE:\n\n\[\text{KL term }=\mathcal{D}_{\text{KL}}\Big{[}q(\mathbf{z}_{1}|\mathbf{x};\mathbf{\theta}_{enc})\,\big{\|}\,p(\mathbf{z}_{1})\Big{]}+\sum_{\ell=2}^{L}\mathbb{E}_{q(\mathbf{z}_{<\ell}|\mathbf{x};\mathbf{\theta}_{enc})}\Big{[}\text{KL}_{\ell}(\mathbf{\theta}_{enc},\theta_{dec})\Big{]}, \tag{11}\]\n\nwhere \(\text{KL}_{\ell}\) refers to the local KL term for group \(\ell\) and is given by:\n\n\[\text{KL}_{\ell}(\mathbf{\theta}_{enc},\theta_{dec})\coloneqq\mathcal{D}_{\text{KL}}\Big{[}q\left(\mathbf{z}_{\ell}|\mathbf{z}_{<\ell},\mathbf{x};\mathbf{\theta}_{enc}\right)\,\big{\|}\,p\left(\mathbf{z}_{\ell}|\mathbf{z}_{<\ell},\theta_{dec}\right)\Big{]}, \tag{12}\]\n\nand the approximate posterior up to the \((\ell-1)^{\text{th}}\) group is defined as:\n\n\[q(\mathbf{z}_{<\ell}|\mathbf{x};\mathbf{\theta}_{enc})\coloneqq\prod_{i=1}^{\ell-1}q(\mathbf{z}_{i}|\mathbf{z}_{<i},\mathbf{x};\mathbf{\theta}_{enc}). \tag{13}\]
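As a worked illustration of Equations 11 and 12, each local KL term is a divergence between two factorial Gaussians, which has a closed form. The sketch below uses hypothetical per-group means and standard deviations; in the cNVAE these would come from the encoder and decoder sampler layers, and the terms for groups \(\ell\geq 2\) would additionally be averaged over posterior samples of \(\mathbf{z}_{<\ell}\) as in Equation 11.

```python
import numpy as np

def gauss_kl(mu_q, sig_q, mu_p, sig_p):
    # Closed-form KL[N(mu_q, sig_q^2) || N(mu_p, sig_p^2)] for factorial
    # Gaussians, summed over latent dimensions.
    return np.sum(
        np.log(sig_p / sig_q)
        + (sig_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sig_p ** 2)
        - 0.5
    )

# Hypothetical (mean, std) pairs for L = 3 latent groups of 2 dims each.
post = [(np.array([0.5, -0.2]), np.array([0.8, 1.1])),
        (np.array([0.0, 0.3]), np.array([1.0, 0.9])),
        (np.array([-0.4, 0.1]), np.array([1.2, 1.0]))]
prior = [(np.zeros(2), np.ones(2)),                       # p(z_1) = N(0, I)
         (np.array([0.1, 0.2]), np.array([1.0, 1.0])),    # learned, from decoder
         (np.array([-0.3, 0.0]), np.array([1.1, 0.9]))]

# Equation 11: total KL is the sum of the per-group (local) KL terms.
kl_total = sum(gauss_kl(mq, sq, mp, sp)
               for (mq, sq), (mp, sp) in zip(post, prior))
print(kl_total > 0)
```

Note that only the first group's prior is fixed at \(\mathcal{N}(\mathbf{0},\mathbf{I})\); the others depend on decoder parameters, which is why the cNVAE KL term involves both \(\mathbf{\theta}_{enc}\) and \(\theta_{dec}\) while the vanilla VAE KL depends on the encoder alone.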