[ { "text": "Planning with partial observation is a central challenge in embodied AI. A\nmajority of prior works have tackled this challenge by developing agents that\nphysically explore their environment to update their beliefs about the world\nstate.In contrast, humans can $\\textit{imagine}$ unseen parts of the world\nthrough a mental exploration and $\\textit{revise}$ their beliefs with imagined\nobservations. Such updated beliefs can allow them to make more informed\ndecisions, without necessitating the physical exploration of the world at all\ntimes. To achieve this human-like ability, we introduce the $\\textit{Generative\nWorld Explorer (Genex)}$, an egocentric world exploration framework that allows\nan agent to mentally explore a large-scale 3D world (e.g., urban scenes) and\nacquire imagined observations to update its belief. This updated belief will\nthen help the agent to make a more informed decision at the current step. To\ntrain $\\textit{Genex}$, we create a synthetic urban scene dataset, Genex-DB.\nOur experimental results demonstrate that (1) $\\textit{Genex}$ can generate\nhigh-quality and consistent observations during long-horizon exploration of a\nlarge virtual physical world and (2) the beliefs updated with the generated\nobservations can inform an existing decision-making model (e.g., an LLM agent)\nto make better plans.", "split": "arXiv" }, { "text": "I model a rational agent who spends resources between the current time and\nsome fixed future deadline. Opportunities to spend resources arise randomly\naccording to a Poisson process, and the quality of each opportunity follows a\nuniform distribution. The agent values their current resource stock at exactly\nthe sum of expected utility from all future spending opportunities. Unlike in\ntraditional discounted expected utility models, the agent exhibits correlation\naversion, static (but not dynamic) preference reversals, and monotonicity with\nrespect to payment timing. Connecting the agent's risk and time preference is\nintuitive, and doing so leads to a new model of procrastination where the agent\nmisperceives their general attitude toward spending resources.", "split": "arXiv" }, { "text": "A full set of optimized observables is measured in an angular analysis of the\ndecay B$^0$ $\\to$ K$^*$(892)$^0\\mu^+\\mu^-$ using a sample of proton-proton\ncollisions at $\\sqrt{s}$ = 13 TeV, collected with the CMS detector at the LHC,\ncorresponding to an integrated luminosity of 140 fb$^{-1}$. The analysis is\nperformed in six bins of the squared invariant mass of the dimuon system,\n$q^2$, over the range 1.1 $\\lt$ $q^2$ $\\lt$ 16 GeV$^2$. The results are among\nthe most precise experimental measurements of the angular observables for this\ndecay and are compared to a variety of predictions based on the standard model.", "split": "arXiv" }, { "text": "Differential equations are derived which show how generalized Euler vector\nrepresentations of the Euler rotation axis and angle for a rigid body evolve in\ntime; the Euler vector is also known as a rotation vector or axis-angle vector.\nThe solutions can exhibit interesting rotational features in this non-abstract,\nvisualizable setting, including spinor-like behavior and quasiperiodicity. The\nequations are well-behaved at zero, reducing to the simple infinitesimal case\nthere. 
One of them is equivalent to a known quaternion differential equation.\nThe simple geometric derivation does not depend on Euler's rotation theorem,\nand yields a proof of Euler's theorem using only infinitesimal motions. With\nmild regularity conditions on the angular velocity function, there is a\ncontinuous evolution of the normalized axis and angle for all time. Dynamical\nsystems properties are discussed, and numerical solutions are used to\ninvestigate them when the angular velocity is itself rotating, and the Euler\nvector trajectory traces out a torus-like shape, with a strobe plot that\ndensely fills in a closed curve.", "split": "arXiv" }, { "text": "This paper describes two C++/Open Motion Planning Library implementations of\nthe recently developed motion planning algorithms HyRRT arXiv:2210.15082v1\n[cs.RO] and HySST arXiv:2305.18649v1 [cs.RO]. Specifically, cHyRRT, an\nimplementation of the HyRRT algorithm, is capable of generating a solution to a\nmotion planning problem for hybrid systems with probabilistic completeness,\nwhile cHySST, an implementation of the asymptotically near-optimal HySST\nalgorithm, is capable of computing a trajectory to solve the optimal motion\nplanning problem for hybrid systems. cHyRRT is suitable for motion planning\nproblems where an optimal solution is not required, whereas cHySST is suitable\nfor problems where an optimal solution, among all feasible ones, is preferred.\nThe structure, components, and usage of the two tools are described. Examples\nare included to illustrate the main capabilities of the toolbox.", "split": "arXiv" }, { "text": "Federated learning (FL) is an emerging paradigm for training machine learning\nmodels across distributed clients. Traditionally, in FL settings, a central\nserver assigns training efforts (or strategies) to clients. However, from a\nmarket-oriented perspective, clients may independently choose their training\nefforts based on rational self-interest. To explore this, we propose a\npotential game framework where each client's payoff is determined by their\nindividual efforts and the rewards provided by the server. The rewards are\ninfluenced by the collective efforts of all clients and can be modulated\nthrough a reward factor. Our study begins by establishing the existence of Nash\nequilibria (NEs), followed by an investigation of uniqueness in homogeneous\nsettings. We demonstrate a significant improvement in clients' training efforts\nat a critical reward factor, identifying it as the optimal choice for the\nserver. Furthermore, we prove the convergence of the best-response algorithm to\ncompute NEs for our FL game. Finally, we apply the training efforts derived\nfrom specific NEs to a real-world FL scenario, validating the effectiveness of\nthe identified optimal reward factor.", "split": "arXiv" }, { "text": "High penetration of volatile renewable energy resources in the grid and the\nvarying nature of loads raise the need for frequent line switching to ensure\nthe efficient operation of electrical distribution networks. Operators must\nensure maximum load delivery, reduced losses, and operation within voltage\nlimits. However, computations to decide the optimal feeder configuration are\noften computationally expensive and intractable, making it unfavorable for\nreal-time operations. This is mainly due to the existence of binary variables\nin the network reconfiguration optimization problem.
To tackle this issue, we\nhave devised an approach that leverages machine learning techniques to reshape\ndistribution networks featuring multiple substations. This involves predicting\nthe substation responsible for serving each part of the network. Hence, it\nleaves simpler and more tractable Optimal Power Flow problems to be solved. This\nmethod can produce accurate results significantly faster, as\ndemonstrated using the IEEE 37-bus distribution feeder. Compared to the\ntraditional optimization-based approaches, a feasible solution is achieved\napproximately ten times faster for all the tested scenarios.", "split": "arXiv" }, { "text": "Tuning the density of resident electrons or holes in semiconductors provides\ncrucial insight into the composition of excitonic complexes that are observed\nas absorption or photoluminescence resonances in optical studies. Moreover, we\ncan change the way these resonances shift and broaden in energy by controlling\nthe quantum numbers of the resident carriers with magnetic fields and doping\nlevels, and by selecting the quantum numbers of the photoexcited or recombining\nelectron-hole (e-h) pair through optical polarization. We discuss the roles of\ndistinguishability and optimality of excitonic complexes, showing them to be\nkey ingredients that determine the energy shifts and broadening of optical\nresonances in charge-tunable semiconductors. A distinguishable e-h pair means\nthat the electron and hole undergoing photoexcitation or recombination have\nquantum numbers that are not shared by any of the resident carriers. An optimal\nexcitonic complex refers to a complex whose particles come with all available\nquantum numbers of the resident carriers. All optical resonances may be\nclassified as either distinct or indistinct depending on the distinguishability\nof the e-h pair, and the underlying excitonic complex can be classified as\neither optimal or suboptimal. The universality of these classifications,\ninherited from the fundamental Pauli exclusion principle, allows us to\nunderstand how optical resonances shift in energy and whether they should\nbroaden as doping is increased. This understanding is supported by conclusive\nevidence that the decay of optical resonances cannot be simply attributed to\nenhanced screening when resident carriers are added to a semiconductor.\nFinally, applying the classification scheme in either monolayer or moire\nheterobilayer systems, we relate the energy shift and amplitude of the neutral\nexciton resonance to the compressibility of the resident carrier gas.", "split": "arXiv" }, { "text": "Blockchain networks are facing increasingly heterogeneous computational\ndemands, and in response, protocol designers have started building specialized\ninfrastructure to supply that demand. This paper introduces Resonance: a new\nkind of transaction fee mechanism that operates in a general two-sided\nmarketplace setting with extreme preference heterogeneity on both sides of the\nmarket. We allow users submitting transactions to have arbitrary valuations for\ninclusion, nodes responsible for executing transactions to incur arbitrary net\ncosts for executing any bundle, and further allow for arbitrary constraints in\nallocation validity. These constraints, for example, may range from\nrepresenting an individual node's specialized hardware constraints to denoting\nthe fact that transactions may not be executed in parallel across different\nnodes if they utilize the same part of the network's state.
Transactions may\neven require multiple nodes for execution.\n Resonance's design utilizes competition among sophisticated brokers to find\nidiosyncratic prices. We show that at pure Nash equilibria, Resonance finds an\nefficient outcome and minimizes the need for strategization by users and nodes.\nIt is also budget-balanced, individually rational for all parties, and\ncomputationally tractable.", "split": "arXiv" }, { "text": "In this paper we prove that Schr\\\"{o}dinger's equation with a Hamiltonian of\nthe form $H=-\\Delta+i(A \\nabla + \\nabla A) + V$, which includes a magnetic\npotential $A$, has the same dispersive and solution decay properties as the\nfree Schr\\\"{o}dinger equation. In particular, we prove $L^1 \\to L^\\infty$ decay\nand some related estimates for the wave equation.\n The potentials $A$ and $V$ are short-range and $A$ has four derivatives, but\nthey can be arbitrarily large. All results hold in three space dimensions.", "split": "arXiv" }, { "text": "A generative adversarial network (GAN) has been a representative backbone\nmodel in generative artificial intelligence (AI) because of its powerful\nperformance in capturing intricate data-generating processes. However, the GAN\ntraining is well-known for its notorious training instability, usually\ncharacterized by the occurrence of mode collapse. Through the lens of\ngradients' variance, this work particularly analyzes the training instability\nand inefficiency in the presence of mode collapse by linking it to\nmultimodality in the target distribution. To ease the raised training issues\nfrom severe multimodality, we introduce a novel GAN training framework that\nleverages a series of tempered distributions produced via convex interpolation.\nWith our newly developed GAN objective function, the generator can learn all\nthe tempered distributions simultaneously, conceptually resonating with the\nparallel tempering in Statistics. Our simulation studies demonstrate the\nsuperiority of our approach over existing popular training strategies in both\nimage and tabular data synthesis. We theoretically analyze that such\nsignificant improvement can arise from reducing the variance of gradient\nestimates by using the tempered distributions. Finally, we further develop a\nvariant of the proposed framework aimed at generating fair synthetic data which\nis one of the growing interests in the field of trustworthy AI.", "split": "arXiv" }, { "text": "In 1975, Erd\\H{o}s and Sauer asked to estimate, for any constant $r$, the\nmaximum number of edges an $n$-vertex graph can have without containing an\n$r$-regular subgraph. In a recent breakthrough, Janzer and Sudakov proved that\nany $n$-vertex graph with no $r$-regular subgraph has at most $C_r n \\log \\log\nn$ edges, matching an earlier lower bound by Pyber, R\\\"odl and Szemer\\'edi and\nthereby resolving the Erd\\H{o}s-Sauer problem up to a constant depending on\n$r$. We prove that every $n$-vertex graph without an $r$-regular subgraph has\nat most $Cr^2 n \\log \\log n$ edges. This bound is tight up to the value of $C$\nfor $n\\geq n_0(r)$ and hence resolves the Erd\\H{o}s-Sauer problem up to an\nabsolute constant.\n Moreover, we obtain similarly tight results for the whole range of possible\nvalues of $r$ (i.e., not just when $r$ is a constant), apart from a small error\nterm at a transition point near $r\\approx \\log n$, where, perhaps surprisingly,\nthe answer changes. 
More specifically, we show that every $n$-vertex graph with\naverage degree at least $\\min(Cr\\log(n/r),Cr^2 \\log\\log n)$ contains an\n$r$-regular subgraph. The bound $Cr\\log(n/r)$ is tight for $r\\geq \\log n$,\nwhile the bound $Cr^2 \\log \\log n$ is tight for $r<(\\log n)^{1-\\Omega(1)}$.\nThese results resolve a problem of R\\\"odl and Wysocka from 1997 for almost all\nvalues of $r$.\n Among other tools, we develop a novel random process that efficiently finds a\nvery nearly regular subgraph in any almost-regular graph. A key step in our\nproof uses this novel random process to show that every $K$-almost-regular\ngraph with average degree $d$ contains an $r$-regular subgraph for some\n$r=\\Omega_K(d)$, which is of independent interest.", "split": "arXiv" }, { "text": "Quantum computing architectures based on neutral atoms offer large scales and\nhigh-fidelity operations. They can be heterogeneous, with different zones for\nstorage, entangling operations, and readout. Zoned architectures improve\ncomputation fidelity by shielding idling qubits in storage from side-effect\nnoise, unlike monolithic architectures where all operations occur in a single\nzone. However, supporting these flexible architectures with efficient\ncompilation remains challenging. In this paper, we propose ZAC, a scalable\ncompiler for zoned architectures. ZAC minimizes data movement overhead between\nzones with qubit reuse, i.e., keeping them in the entanglement zone if an\nimmediate entangling operation is pending. Other innovations include novel data\nplacement and instruction scheduling strategies in ZAC, a flexible\nspecification of zoned architectures, and an intermediate representation for\nzoned architectures, ZAIR. Our evaluation shows that zoned architectures\nequipped with ZAC achieve a 22x improvement in fidelity compared to monolithic\narchitectures. Moreover, ZAC is shown to have a 10% fidelity gap on average\ncompared to the ideal solution. This significant performance enhancement\nenables more efficient and reliable quantum circuit execution, facilitating\nadvancements in quantum algorithms and applications. ZAC is open source at\nhttps://github.com/UCLA-VAST/ZAC", "split": "arXiv" }, { "text": "Motivated by knot theory, it is natural to define the orientation-reversal of\na quandle orbit by inverting all the translations given by elements of that\norbit. In this short note we observe that this natural notion is unsuited to\nmedial quandles.", "split": "arXiv" }, { "text": "We consider the dynamics of a continuously monitored qubit in the limit of\nstrong measurement rate where the quantum trajectory is described by a\nstochastic master equation with Poisson noise. Such limits are expected to give\nrise to quantum jumps between the pointer states associated with the\nnon-demolition measurement. A surprising discovery in earlier work [Tilloy et\nal., Phys. Rev. A 92, 052111 (2015)] on quantum trajectories with Brownian\nnoise was the phenomenon of spikes observed in between the quantum jumps. Here,\nwe show that spikes are observed also for Poisson noise. We consider three\ncases where the non-demolition property is broken by adding, to the basic strong\nmeasurement dynamics, either unitary evolution or thermal noise or additional\nmeasurements. We present a complete analysis of the spike and jump statistics\nfor all three cases using the fact that the dynamics effectively corresponds to\nthat of stochastic resetting.
We provide numerical results to support our\nanalytic results.", "split": "arXiv" }, { "text": "Coherent perfect absorption (CPA) has been a topic of considerable\ncontemporary research interest. However, its implementation in practical\napplications has been limited, since it has been demonstrated only for plane\nwaves until now. The issue for beams with finite confinement -- characterized by\na collection of plane waves -- is that complete destructive interference is not\nfeasible for all the plane waves simultaneously. In this paper, we study the\nabsorption characteristics of two counter-propagating structured beams, e.g.,\nGaussian and Laguerre-Gaussian (LG) beams with and without orbital angular\nmomentum, respectively, incident normally on a composite slab from both sides by\nfulfilling the CPA condition exclusively for the central plane waves. We show\nthat though perfect absorption is not achievable, there can be a substantial\nreduction of the scattered light. We also consider CPA for oblique incidence\nand discuss the difficulties.", "split": "arXiv" }, { "text": "We investigate prophet inequalities with competitive ratios approaching $1$,\nseeking to generalize $k$-uniform matroids. We first show that large girth does\nnot suffice: for all $k$, there exists a matroid of girth $\\geq k$ and a\nprophet inequality instance on that matroid whose optimal competitive ratio is\n$\\frac{1}{2}$. Next, we show $k$-fold matroid unions do suffice: we provide a\nprophet inequality with competitive ratio $1-O(\\sqrt{\\frac{\\log k}{k}})$ for\nany $k$-fold matroid union. Our prophet inequality follows from an online\ncontention resolution scheme.\n The key technical ingredient in our online contention resolution scheme is a\nnovel bicriterion concentration inequality for arbitrary monotone $1$-Lipschitz\nfunctions over independent items which may be of independent interest. Applied\nto our particular setting, our bicriterion concentration inequality yields\n\"Chernoff-strength\" concentration for a $1$-Lipschitz function that is not\n(approximately) self-bounding.", "split": "arXiv" }, { "text": "We prove upper bounds, which are independent of the dimension of the ambient\nspace, on the number of realizable zero-nonzero patterns as well as sign\nconditions (when the field of coefficients is ordered) of a finite set of\npolynomials $\\mathcal{P}$ restricted to some algebraic subset $V$. Our bounds\n(which are tight) depend on the number and the degrees of the polynomials in\n$\\mathcal{P}$, as well as the degree (of the embedding) of $V$ and the\ndimension of $V$, but are independent of the dimension of the space in which\n$V$ is embedded. This last feature of our bounds is useful in situations where\nthe ambient dimension could be much larger than $\\dim V$.\n We give several applications of our results. We generalize existing results\non bounding the speeds of algebraically defined classes of graphs, as well as\nlower bounds in terms of the number of connected components for testing\nmembership in semi-algebraic sets using algebraic computation trees. Motivated\nby quantum complexity theory, we introduce a notion of relative rank (additive\nas well as multiplicative) in finite dimensional vector spaces and algebras\nrelative to a fixed algebraic subset in the vector space or algebra -- which\ngeneralizes the classical definition of ranks of tensors.
We prove a very\ngeneral lower bound on the maximum relative rank of finite subsets relative to\nalgebraic subsets of bounded degree and dimension, which is independent of the\ndimension of the vector space or algebra. We show how our lower bound implies a\nquantum analog of a classical lower bound result of Shannon for Boolean\ncircuits -- that almost all Boolean functions require (classical) circuits of\nsize at least $\\Omega(2^n/n)$.", "split": "arXiv" }, { "text": "Aligning diffusion models with downstream objectives is essential for their\npractical applications. However, standard alignment methods often struggle with\nstep generalization when directly applied to few-step diffusion models, leading\nto inconsistent performance across different denoising step scenarios. To\naddress this, we introduce Stepwise Diffusion Policy Optimization (SDPO), a\nnovel alignment method tailored for few-step diffusion models. Unlike prior\napproaches that rely on a single sparse reward from only the final step of each\ndenoising trajectory for trajectory-level optimization, SDPO incorporates dense\nreward feedback at every intermediate step. By learning the differences in\ndense rewards between paired samples, SDPO facilitates stepwise optimization of\nfew-step diffusion models, ensuring consistent alignment across all denoising\nsteps. To promote stable and efficient training, SDPO introduces an online\nreinforcement learning framework featuring several novel strategies designed to\neffectively exploit the stepwise granularity of dense rewards. Experimental\nresults demonstrate that SDPO consistently outperforms prior methods in\nreward-based alignment across diverse step configurations, underscoring its\nrobust step generalization capabilities. Code is available at\nhttps://github.com/ZiyiZhang27/sdpo.", "split": "arXiv" }, { "text": "The Vol-Det Conjecture, formulated by Champanerkar, Kofman and Purcell,\nstates that there exists a specific inequality connecting the hyperbolic volume\nof an alternating link and its determinant. Among the classes of links for\nwhich this conjecture holds are all knots with at most 16 crossings, 2-bridge\nlinks, and links that are closures of 3-strand braids. In the present paper,\nBurton's bound on the number of crossings for which the Vol-Det Conjecture\nholds is improved for links with more than eight twists. In addition,\nStoimenow's inequalities between hyperbolic volumes and determinants are\nimproved for alternating and alternating arborescent links with more than eight\ntwists.", "split": "arXiv" }, { "text": "We prove that for every compact, convex subset $K\\subset\\mathbb{R}^2$ the\noperator system $A(K)$, consisting of all continuous affine functions on $K$,\nis hyperrigid in the C*-algebra $C(\\mathrm{ex}(K))$. In particular, this result\nimplies that the weak and strong operator topologies coincide on the set $$\n \\{ T\\in\\mathcal{B}(H);\\ T\\ \\mathrm{normal}\\ \\mathrm{and}\\ \\sigma(T)\\subset\n\\mathrm{ex}(K) \\}.\n $$ Our approach relies on geometric properties of $K$ and generalizes\nprevious results by Brown.", "split": "arXiv" }, { "text": "This paper explores quadratic forms over finite fields with associated\nArtin-Schreier curves. Specifically, we investigate quadratic forms of $\\mathbb\nF_{q^n}/\\mathbb F_q$ represented by polynomials over $\\mathbb F_{q^n}$ with $q$\nodd, characterizing them using certain matrices defined by coefficients of the\npolynomials.
In particular, a comprehensive treatment will be given for those\npolynomials whose coefficients all lie in $\\mathbb F_q$. Afterwards, the\nresults on quadratic forms will be applied to get maximal and minimal\nArtin-Schreier curves explicitly.", "split": "arXiv" }, { "text": "We are interested in the nonlinear damped Klein-Gordon equation \\[\n\\partial_t^2 u+2\\alpha \\partial_t u-\\Delta u+u-|u|^{p-1}u=0 \\] on\n$\\mathbb{R}^d$ for $2\\le d\\le 5$ and energy sub-critical exponents $2 < p <\n\\frac{d+2}{d-2}$.\n We construct multi-solitons, that is, solutions which behave for large times\nas a sum of decoupled solitons, in various configurations with symmetry: this\nincludes multi-solitons whose soliton centers lie at the vertices of an\nexpanding regular polygon (with or without a center), of a regular polyhedron\n(with a center), or of a higher dimensional regular polytope. We give a precise\ndescription of these multi-solitons: in particular, the interaction between\nnearest neighbour solitons is asymptotic to $\\ln (t)$ as $t \\to +\\infty$.\n We also prove that in any multi-soliton, the solitons cannot all share the\nsame sign.\n Both statements generalize and make precise results from \\cite{F98}, \\cite{Nak}\nand are based on the analysis developed in \\cite{CMYZ,CMY}.", "split": "arXiv" }, { "text": "The personalization techniques of diffusion models succeed in generating\nspecific concepts but also pose threats to copyright protection and enable illegal\nuse. Model Watermarking is an effective method to prevent the unauthorized use\nof subject-driven or style-driven image generation, safeguarding concept\ncopyrights. However, under the goal of concept-oriented protection, current\nwatermarking schemes typically add watermarks to all images rather than\napplying them in a refined manner targeted at specific concepts. Additionally,\nthe personalization techniques of diffusion models can easily remove\nwatermarks. Existing watermarking methods struggle to achieve fine-grained\nwatermark embedding with a few images of a specific concept and to prevent removal\nof watermarks through personalized fine-tuning. Therefore, we introduce a novel\nconcept-oriented watermarking framework that seamlessly embeds imperceptible\nwatermarks into the concept of diffusion models. We conduct extensive\nexperiments and ablation studies to verify our framework. Our code is available\nat https://anonymous.4open.science/r/Conceptwm-4EB3/.", "split": "arXiv" }, { "text": "We critically review the evidence for time-varying dark energy from recent\nBaryon Acoustic Oscillations (BAO) and Supernova (SN) observations. First, we\nshow that such evidence is present at the 3$\\sigma$ level, even without the new\nBAO data from the Dark Energy Spectroscopic Instrument (DESI), by instead using\nBAO data from the Dark Energy Survey (DES), combined with the DES5Y supernovae\nand Planck CMB data. Next, we examine the role of the DES5Y supernova dataset,\nshowing that the preference for time-varying dark energy is driven by the low-\nredshift supernovae common to both the DES5Y and Pantheon+ compilations. We\nfind that combining Pantheon+ and DES5Y supernovae by removing the common\nsupernovae leads to two different results, depending on whether they are\nremoved from the DES5Y or the Pantheon+ catalog, leading to stronger or weaker\nexclusion of $\\Lambda$CDM, at the 3.8$\\sigma$ and 2.5$\\sigma$ level,\nrespectively.
These common supernovae have smaller error bars in DES5Y compared\nto Pantheon+, and, as recently pointed out, there is an offset in magnitude in\nDES5Y between supernovae at ($z > 0.1$), where almost all the measurements\ntaken during the full five years of DES are, and the low-redshift ones ($z <\n0.1$), where all the historical set of nearby supernovae lies. We show that\nmarginalizing over such an offset in DES5Y would lead to significantly weaker\nevidence for evolving dark energy.", "split": "arXiv" }, { "text": "The abundant chemical compositions in ternary hydrides bring many more\npossibilities to explore high-temperature superconductors under lower pressure.\nHere we constructed 115 ternary hydrides on the basis of element\nsubstitution, using 16 metal elements within 5 reported prototype structures. We\nconducted a three-step approach to screen and study these candidate structures\nin terms of dynamical stability, formation energy and relative enthalpy,\nrespectively. Based on this approach, we found three meta-stable compounds with\nhydrogen clathrate cages in the space group of P-3m1, including Y2CdH18,\nY2InH18 and Ca2SnH18. All of the structures are superconductive under high\npressure with Tc above 110 K, which is above the boiling point of liquid\nnitrogen. Our study enriches the database of novel\nternary hydrides under high pressure, and provides insight for future\ntheoretical and experimental research.", "split": "arXiv" }, { "text": "The variation of the physical conditions across the three dimensions of our\nGalaxy is a major source of complexity for the modelling of the foreground\nsignal facing the cosmic microwave background (CMB). In the present work, we\ndemonstrate that the spin-moment expansion formalism provides a powerful\nframework to model and understand this complexity, with a special focus on that\narising from variations of the physical conditions along each line-of-sight on\nthe sky. We perform the first application of the moment expansion to reproduce\na thermal dust model largely used by the CMB community, demonstrating its power\nas a minimal tool to compress, understand and model the information contained\nwithin any foreground model. Furthermore, we use this framework to produce new\nmodels of thermal dust emission containing the maximal amount of complexity\nallowed by the current data, remaining compatible with the angular\npower spectra observed by the $Planck$ mission. By assessing the impact of these models\non the performance of component separation methodologies, we conclude that the\nadditional complexity contained within the third dimension could represent a\nsignificant challenge for future CMB experiments and that different component\nseparation approaches are sensitive to different properties of the moments.", "split": "arXiv" }, { "text": "The Ashkin-Teller model is a two-layer lattice model where spins in each layer\ninteract ferromagnetically with strength $J$, and the spin-dipoles (products of\nspins) interact with neighbors with strength $\\lambda$. The model exhibits\nsimultaneous magnetic and electric transitions along a self-dual line on the\n$\\lambda$-$J$ plane with continuously varying critical exponents. In this\narticle, we investigate the percolation of geometric clusters of spins and\nspin-dipoles, denoted respectively as magnetic and electric clusters.
We find\nthat the largest cluster in both cases becomes macroscopic in size and spans\nthe lattice when the interaction exceeds a critical threshold given by the same\nself-dual line where magnetic and electric transitions occur. The fractal\ndimension of the critical spanning clusters is related to the order parameter\nexponent $\\beta_{m,e}$ as $D_{m,e}=d-\\frac{5}{12}\\frac{\\beta_{m,e}}{\\nu},$ where\n$d=2$ is the spatial dimension and $\\nu$ is the correlation length exponent.\nThis relation determines all other percolation exponents and their variation\nwith respect to $\\lambda$. We show that for magnetic percolation, the Binder cumulant, as a\nfunction of $\\xi_2/L$ with $\\xi_2$ being the second-moment correlation length,\nremains invariant all along the critical line and matches that of the\nspin-percolation in the usual Ising model. The function also remains invariant\nfor the electric percolation, forming a new superuniversality class of\npercolation transition.", "split": "arXiv" }, { "text": "We study the dynamics of synthetic molecules whose architectures are\ngenerated by space transformations from a point group acting on seed\nresonators. We show that the dynamical matrix of any such molecule can be\nreproduced as the left regular representation of a self-adjoint element from\nthe stabilized group's algebra. Furthermore, we use elements of representation\ntheory and K-theory to rationalize the dynamical features supported by such\nsynthetic molecules up to topological equivalences. These tools enable us to\nidentify a set of fundamental models which generate by superposition all\npossible dynamical matrices up to homotopy equivalences. Interpolations between\nthese fundamental models give rise to topological spectral flows.", "split": "arXiv" }, { "text": "We prove a version of Jordan's classification theorem for finite subgroups of\n$\\mathrm{GL}_{n}(K)$ that is at the same time quantitatively explicit,\nCFSG-free, and valid for arbitrary $K$. This is the first proof to satisfy all\nthree properties at once. Our overall strategy follows Larsen and Pink [24],\nwith explicit computations based on techniques developed by the authors and\nHelfgott [2, 3], particularly in relation to dimensional estimates.", "split": "arXiv" }, { "text": "We consider two supersymmetric M5 brane probe solutions in $\\textrm{AdS}_7\n\\times S^4$ and one in $\\textrm{AdS}_4 \\times S^7$ that all have the\n$\\textrm{AdS}_3 \\times S^3$ world-volume geometry. The values of the classical\naction of the first two M5 probes (with $S^3$ in $\\textrm{AdS}_7$ or in $S^4$)\nare related to the leading $N^2$ parts in the anomaly b-coefficient in the\n(2,0) theory corresponding to a spherical surface defect in symmetric or\nantisymmetric $SU(N)$ representations. We present a detailed computation of the\ncorresponding one-loop M5 brane partition functions, finding that they vanish\n(in a particular regularization). This implies the vanishing of the order $N^0$\npart in the b-anomaly coefficients, in agreement with earlier predictions for\ntheir exact values. It remains, however, a puzzle of how to reproduce the\nnon-vanishing order $N$ terms in these coefficients within the semiclassical\nM5-brane probe setup.", "split": "arXiv" }, { "text": "Determining potential probability distributions with a given causal graph is\nvital for causality studies.
To bypass the difficulty in characterizing latent\nvariables in a Bayesian network, the nested Markov model provides an elegant\nalgebraic approach by listing exactly all the equality constraints on the\nobserved variables. However, this algebraically motivated causal model\ncomprises distributions outside Bayesian networks, and its physical\ninterpretation remains vague. In this work, we inspect the nested Markov model\nthrough the lens of generalized probabilistic theory, an axiomatic framework to\ndescribe general physical theories. We prove that all the equality constraints\ndefining the nested Markov model remain valid theory-independently. Yet, we show\nthis model generally contains distributions not implementable even within such\nrelaxed physical theories subject to merely the relativity principles and\nmild probabilistic rules. To interpret the origin of such a gap, we establish a\nnew causal model that defines valid distributions as projected from a\nhigh-dimensional Bell-type causal structure. The new model unveils inequality\nconstraints induced by relativity principles, or equivalently high-dimensional\nconditional independences, which are absent in the nested Markov model.\nNevertheless, we also notice that the restrictions on states and measurements\nintroduced by the generalized probabilistic theory framework can pose\nadditional inequality constraints beyond the new causal model. As a by-product,\nwe discover a new causal structure exhibiting strict gaps between the\ndistribution sets of a Bayesian network, generalized probabilistic theories,\nand the nested Markov model. We anticipate our results will inspire further\nexplorations of the unification of algebraic and physical perspectives of\ncausality.", "split": "arXiv" }, { "text": "In this paper, we construct new t-server Private Information Retrieval (PIR)\nschemes with communication complexity subpolynomial in the previously best\nknown, for all but finitely many t. Our results are based on combining\nderivatives (in the spirit of Woodruff-Yekhanin) with the Matching Vector based\nPIRs of Yekhanin and Efremenko. Previously such a combination was achieved in\nan ingenious way by Dvir and Gopi, using polynomials and derivatives over\ncertain exotic rings, en route to their fundamental result giving the first\n2-server PIR with subpolynomial communication.\n Our improved PIRs are based on two ingredients:\n - We develop a new and direct approach to combine derivatives with Matching\nVector based PIRs. This approach is much simpler than that of Dvir-Gopi: it\nworks over the same field as the original PIRs, and only uses elementary\nproperties of polynomials and derivatives.\n - A key subproblem that arises in the above approach is a higher-order\npolynomial interpolation problem. We show how \"sparse S-decoding polynomials\",\na powerful tool from the original constructions of Matching Vector PIRs, can be\nused to solve this higher-order polynomial interpolation problem using\nsurprisingly few higher-order evaluations.\n Using the known sparse S-decoding polynomials in combination with our ideas\nleads to our improved PIRs.
Notably, we get a 3-server PIR scheme with\ncommunication $2^{O^{\\sim}( (\\log n)^{1/3}) }$, improving upon the previously\nbest known communication of $2^{O^{\\sim}( \\sqrt{\\log n})}$ due to Efremenko.", "split": "arXiv" }, { "text": "Large double-logarithmic corrections are induced by soft gluon emissions near\nthreshold in the semi-inclusive $e^+e^-$ annihilation (SIA) distributions, and\nmust be resummed to all orders in perturbation theory for reliable theoretical\npredictions. Building on the strategy developed for threshold resummation for the DIS\nstructure function in momentum space using soft-collinear effective theory\n(SCET), we present the explicit formalism for the SIA cross section. We then\nperform the resummation directly in momentum space for $\\gamma^* \\to q \\bar q$,\n$H \\to gg$ and $H \\to b\\bar b$ to N$^4$LL accuracy and demonstrate good\nconvergence. We anticipate that these results will benefit the extraction of\nthe light-quark, the heavy-quark as well as the gluon fragmentation functions.", "split": "arXiv" }, { "text": "This paper studies the distributed bandit convex optimization problem with\ntime-varying inequality constraints, where the goal is to minimize network\nregret and cumulative constraint violation. To calculate network cumulative\nconstraint violation, existing distributed bandit online algorithms solving\nthis problem directly use the clipped constraint function to replace its\noriginal constraint function. However, the use of the clipping operation\nrenders the Slater condition (i.e., there exists a point that strictly satisfies the\ninequality constraints at all iterations) ineffective in achieving reduced\nnetwork cumulative constraint violation. To tackle this challenge, we propose a\nnew distributed bandit online primal-dual algorithm. If local loss functions\nare convex, we show that the proposed algorithm establishes sublinear network\nregret and cumulative constraint violation bounds. When the Slater condition holds,\nthe network cumulative constraint violation bound is reduced. In addition, if\nlocal loss functions are strongly convex, for the case where strongly convex\nparameters are unknown, the network regret bound is reduced. For the case where\nstrongly convex parameters are known, the network regret and cumulative\nconstraint violation bounds are further reduced. To the best of our knowledge,\nthis paper is among the first to establish reduced (network) cumulative\nconstraint violation bounds for (distributed) bandit convex optimization with\ntime-varying constraints under the Slater condition. Finally, a numerical example\nis provided to verify the theoretical results.", "split": "arXiv" }, { "text": "This paper studies observability inequalities for heat equations on both\nbounded domains and the whole space $\\mathbb{R}^d$. The observation sets are\nmeasured by log-type Hausdorff contents, which are induced by certain log-type\ngauge functions closely related to the heat kernel. On a bounded domain, we\nderive the observability inequality for observation sets of positive log-type\nHausdorff content. Notably, the aforementioned inequality holds not only for\nall sets with Hausdorff dimension $s$ for any $s\\in (d-1,d]$, but also for\ncertain sets of Hausdorff dimension $d-1$. On the whole space $\\mathbb{R}^d$,\nwe establish the observability inequality for observation sets that are thick\nat the scale of the log-type Hausdorff content.
Furthermore, we prove that for\nthe 1-dimensional heat equation on an interval, the Hausdorff content we have\nchosen is an optimal scale for the observability inequality.\n To obtain these observability inequalities, we use the adapted Lebeau-Robbiano\nstrategy from \\cite{Duyckaerts2012resolvent}. For this purpose, we prove the\nfollowing results at the scale of the log-type Hausdorff content, the former being\nderived from the latter: We establish a spectral inequality/a Logvinenko-Sereda\nuncertainty principle; we set up a quantitative propagation of smallness of\nanalytic functions; we build up a Remez inequality; and more fundamentally, we\nprovide an upper bound for the log-type Hausdorff content of a set where a\nmonic polynomial is small, based on an estimate in Lubinsky\n\\cite{Lubinsky1997small}, which is ultimately traced back to the classical\nCartan Lemma. In addition, we set up a capacity-based slicing lemma (related to\nthe log-type gauge functions) and establish a quantitative relationship between\nHausdorff contents and capacities. These tools are crucial in the study of\nthe aforementioned propagation of smallness in high-dimensional situations.", "split": "arXiv" }, { "text": "Bonne and Censor-Hillel (ICALP 2019) initiated the study of distributed\nsubgraph finding in dynamic networks of limited bandwidth. For the case where\nthe target subgraph is a clique, they determined the tight bandwidth complexity\nbounds in nearly all settings. However, several open questions remain, and very\nlittle is known about finding subgraphs beyond cliques. In this work, we\nconsider these questions and explore subgraphs beyond cliques.\n For finding cliques, we establish an $\\Omega(\\log \\log n)$ bandwidth lower\nbound for one-round membership-detection under edge insertions only and an\n$\\Omega(\\log \\log \\log n)$ bandwidth lower bound for one-round detection under\nboth edge insertions and node insertions. Moreover, we demonstrate new\nalgorithms to show that our lower bounds are tight in bounded-degree networks\nwhen the target subgraph is a triangle. Prior to our work, no lower bounds were\nknown for these problems.\n For finding subgraphs beyond cliques, we present a complete characterization\nof the bandwidth complexity of the membership-listing problem for every target\nsubgraph, every number of rounds, and every type of topological change: node\ninsertions, node deletions, edge insertions, and edge deletions. We also show\npartial characterizations for one-round membership-detection and listing.", "split": "arXiv" }, { "text": "In this paper, we study the componentwise linearity of symbolic powers of\nedge ideals. We propose the conjecture that all symbolic powers of the edge\nideal of a cochordal graph are componentwise linear. This conjecture is\nverified for some families of cochordal graphs, including complements of block\ngraphs and complements of proper interval graphs. As a corollary, Minh's\nconjecture is established for such families. Moreover, we show that\n$I(G)^{(2)}$ is componentwise linear, for any cochordal graph $G$.", "split": "arXiv" }, { "text": "The Schrieffer-Wolff transformation (SWT) is an important perturbative method\nin quantum mechanics used to simplify Hamiltonians by decoupling low- and\nhigh-energy subspaces. Existing methods for implementing the SWT often lack\ngeneral applicability to arbitrary perturbative systems or fail to provide a\nclosed-form solution for the SWT generator.
In this article, we present a\nsystematic and unified framework for the SWT that addresses these shortcomings.\nSpecifically, we derive a closed-form solution for the SWT generator that is\nuniversally applicable to any system that satisfies the conditions required to\nbe perturbatively treated. Furthermore, we extend this solution to\ntime-dependent systems with periodic perturbations, covering all frequency\nregimes. The effectiveness of this approach is then demonstrated by applying it\nto analyze the dispersive shift of an anharmonic resonator coupled to a\ntwo-level system with time-dependent coupling.", "split": "arXiv" }, { "text": "The James Webb Space Telescope has uncovered a puzzling population of\nUV-faint broad-line active galactic nuclei (AGN), nicknamed ``Little Red Dots''\n(LRD) owing to their compact morphology and red rest-frame optical colours.\nInterpreted as dust attenuated AGN, their inferred intrinsic luminosities and\nsupermassive black hole (SMBH) masses rival those of UV-luminous quasars,\nalthough they are $>100$ times more abundant. If LRDs and quasars are members\nof the same underlying population, they should inhabit comparable mass dark\nmatter halos, traced by similar overdensities of galaxies. Otherwise, they\nrepresent distinct populations with different physical properties and formation\nhistories. Characterizing LRD environments thus provides a critical test of\ntheir nature. Here, we report the discovery of a LRD at $z=7.3$, attenuated by\nmoderate amounts of dust, $A_V = {3.26}\\,\\rm{mag}$, with an intrinsic\nbolometric luminosity of $10^{46.7}\\,\\rm{erg}\\,\\rm{s}^{-1}$ and a SMBH mass of\n$7\\times10^8\\,\\rm{M}_\\odot$. Most notably, this object is embedded in an\noverdensity of eight nearby galaxies, allowing us to calculate the first\nspectroscopic estimate of the clustering of galaxies around LRDs. We find a\nLRD-galaxy cross-correlation length of $r_0\\!=\\!9\\pm2\\,\\rm{h}^{-1}\\,\\rm{cMpc}$,\ncomparable to that of $z\\!\\sim\\!6$ UV-luminous quasars. The resulting estimate\nof their minimum dark matter halo mass of $\\log_{10}(M_{\\rm{halo,\nmin}}/\\rm{M}_{\\odot})= 12.3_{-0.8}^{+0.7}$ indicates that nearly all halos\nabove this mass must host actively accreting SMBHs at $z\\approx7$, in strong\ncontrast with the far smaller duty cycle of luminous quasars ($<1\\%$). Our\nresults, taken at face value, motivate a picture in which LRDs are the obscured\ncounterparts of UV-luminous quasars, which provides a natural explanation for\nthe short UV-luminous lifetimes inferred from both quasar clustering and quasar\nproximity zones.", "split": "arXiv" }, { "text": "The rapid development of large language models (LLMs) with advanced\nprogramming capabilities has paved the way for innovative approaches in\nsoftware testing. Fuzz testing, a cornerstone for improving software\nreliability and detecting vulnerabilities, often relies on manually written\nfuzz drivers, limiting scalability and efficiency. To address this challenge,\nwe propose CodeGraphGPT, a novel system that integrates code knowledge graphs\nwith an LLM-powered intelligent agent to automate the fuzz driver generation\nprocess. By framing fuzz driver creation as a code generation task,\nCodeGraphGPT leverages program analysis to construct a knowledge graph of code\nrepositories, where nodes represent code entities, such as functions or files,\nand edges capture their relationships. 
This enables the system to generate\ntailored fuzz drivers and input seeds, resolve compilation errors, and analyze\ncrash reports, all while adapting to specific API usage scenarios.\nAdditionally, querying the knowledge graph helps identify precise testing\ntargets and contextualize the purpose of each fuzz driver within the fuzzing\nloop. We evaluated CodeGraphGPT on eight open-source software projects,\nachieving an average improvement of 8.73\\% in code coverage compared to\nstate-of-the-art methods. Moreover, it reduced the manual workload in crash\ncase analysis by 84.4\\% and identified 11 real-world bugs, including nine\npreviously unreported ones. This work highlights how integrating LLMs with code\nknowledge graphs enhances fuzz driver generation, offering an efficient\nsolution for vulnerability detection and software quality improvement.", "split": "arXiv" }, { "text": "In this paper we present an approach to reduce hallucinations in Large\nLanguage Models (LLMs) by incorporating Knowledge Graphs (KGs) as an additional\nmodality. Our method involves transforming input text into a set of KG\nembeddings and using an adapter to integrate these embeddings into the language\nmodel space, without relying on external retrieval processes.\n To facilitate this, we created WikiEntities, a dataset containing over 3\nmillion Wikipedia texts annotated with entities from Wikidata and their\ncorresponding embeddings from PyTorch-BigGraph. This dataset serves as a\nvaluable resource for training Entity Linking models and adapting the described\nmethod to various LLMs using specialized adapters.\n Our method does not require fine-tuning of the language models themselves;\ninstead, we only train the adapter. This ensures that the model's performance\non other tasks is not affected. We trained an adapter for the Mistral 7B, LLaMA\n2-7B (chat), and LLaMA 3-8B (instruct) models using this dataset and\ndemonstrated that our approach improves performance on the HaluEval and True-False\nbenchmarks and the FEVER dataset. The results indicate that incorporating KGs as a\nnew modality can effectively reduce hallucinations and improve the factual\naccuracy of language models, all without the need for external retrieval.", "split": "arXiv" }, { "text": "Massive Open Online Courses (MOOCs) have greatly contributed to making\neducation more accessible. However, many MOOCs maintain a rigid,\none-size-fits-all structure that fails to address the diverse needs and\nbackgrounds of individual learners. Learning path personalization aims to\naddress this limitation by tailoring sequences of educational content to\noptimize individual student learning outcomes. Existing approaches, however,\noften require either massive student interaction data or extensive expert\nannotation, limiting their broad application. In this study, we introduce a\nnovel data-efficient framework for learning path personalization that operates\nwithout expert annotation. Our method employs a flexible recommender system\npre-trained with reinforcement learning on a dataset of raw course\nmaterials. Through experiments on semi-synthetic data, we show that this\npre-training stage substantially improves data-efficiency in a range of\nadaptive learning scenarios featuring new educational materials. This opens up\nnew perspectives for the design of foundation models for adaptive learning.", "split": "arXiv" }, { "text": "E-commerce app users exhibit behaviors that are inherently logically\nconsistent.
A series of multi-scenario user behaviors interconnect to form the\nscene-level all-domain user moveline, which ultimately reveals the user's true\nintention. Traditional CTR prediction methods typically focus on the item-level\ninteraction between the target item and the historically interacted items.\nHowever, the scene-level interaction between the target item and the user\nmoveline remains underexplored. There are two challenges when modeling the\ninteraction with preceding all-domain user moveline: (i) Heterogeneity between\nitems and scenes: Unlike traditional user behavior sequences that utilize items\nas carriers, the user moveline utilizes scenes as carriers. The heterogeneity\nbetween items and scenes complicates the process of aligning interactions\nwithin a unified representation space. (ii) Temporal misalignment of linked\nscene-level and item-level behaviors: In the preceding user moveline with a\nfixed sampling length, certain critical scene-level behaviors are closely\nlinked to subsequent item-level behaviors. However, it is impossible to\nestablish a complete temporal alignment that clearly identifies which specific\nscene-level behaviors correspond to which item-level behaviors. To address\nthese challenges and pioneer modeling user intent from the perspective of the\nall-domain moveline, we propose All-domain Moveline Evolution Network (AMEN).\nAMEN not only transfers interactions between items and scenes to homogeneous\nrepresentation spaces, but also introduces a Temporal Sequential Pairwise (TSP)\nmechanism to understand the nuanced associations between scene-level and\nitem-level behaviors, ensuring that the all-domain user moveline differentially\ninfluences CTR predictions for user's favored and unfavored items. Online A/B\ntesting demonstrates that our method achieves a +11.6% increase in CTCVR.", "split": "arXiv" }, { "text": "While AI models have demonstrated remarkable capabilities in constrained\ndomains like game strategy, their potential for genuine creativity in\nopen-ended domains like art remains debated. We explore this question by\nexamining how AI can transcend human cognitive limitations in visual art\ncreation. Our research hypothesizes that visual art contains a vast unexplored\nspace of conceptual combinations, constrained not by inherent incompatibility,\nbut by cognitive limitations imposed by artists' cultural, temporal,\ngeographical and social contexts.\n To test this hypothesis, we present the Alien Recombination method, a novel\napproach utilizing fine-tuned large language models to identify and generate\nconcept combinations that lie beyond human cognitive availability. The system\nmodels and deliberately counteracts human availability bias, the tendency to\nrely on immediately accessible examples, to discover novel artistic\ncombinations.\n This system not only produces combinations that have never been attempted\nbefore within our dataset but also identifies and generates combinations that\nare cognitively unavailable to all artists in the domain. Furthermore, we\ntranslate these combinations into visual representations, enabling the\nexploration of subjective perceptions of novelty. Our findings suggest that\ncognitive unavailability is a promising metric for optimizing artistic novelty,\noutperforming merely temperature scaling without additional evaluation\ncriteria. 
This approach uses generative models to connect previously\nunconnected ideas, providing new insight into the potential of framing\nAI-driven creativity as a combinatorial problem.", "split": "arXiv" }, { "text": "We establish the profound equivalence between measures of genuine\nmultipartite entanglement (GME) and their corresponding coherence measures.\nInitially, we construct two distinct classes of measures for genuine\nmultipartite entanglement utilizing real symmetric concave functions and the\nconvex roof technique. We then demonstrate that all coherence measures for any\nqudit states, defined through the convex roof approach, are identical to our\ntwo classes of GME measures of the states combined with an incoherent ancilla\nunder a unitary incoherent operation. This relationship implies that genuine\nmultipartite entanglement can be generated from the coherence inherent in an\ninitial state through the unitary incoherent operations. Furthermore, we\nexplore the interplay between coherence and other forms of genuine quantum\ncorrelations, specifically genuine multipartite steering and genuine\nmultipartite nonlocality. In the instance of special three-qubit X-states (the only\nnonzero elements of an X-state are diagonal or antidiagonal when written in an\northonormal basis), we find that genuine multipartite steering and nonlocality\nare present if and only if the coherence exists in the corresponding qubit\nstates.", "split": "arXiv" }, { "text": "Uncertainty persists over how and why some countries become democratic and\nothers do not, or why some countries remain democratic and others 'backslide'\ntoward autocracy. Furthermore, while scholars generally agree on the nature of\n'democracy' and 'autocracy', the nature of regimes in-between, and changes\nbetween them, are much less clear. By applying the spectral\ndimensionality-reduction technique Diffusion Map to political-science data from\nthe V-Dem project for the period 1900 to 2021, we identify a low-dimensional\nnon-linear manifold on which all electoral regimes move. Using the diffusion\nequation from statistical physics, we measure the time scale on which countries\nchange their degree of electoral quality, freedom of association, and freedom\nof expression depending on their position on the manifold. By quantifying the\ncoefficients of the diffusion equation for each country and over time, we show\nthat democracies behave like sub-diffusive (i.e. slow spreading) particles and\nthat autocracies on the verge of collapse behave like super-diffusive (i.e.\nfast spreading) particles. We show that regimes in-between exhibit diffusion\ndynamics distinct from autocracies and democracies, and an overall higher\ninstability. Furthermore, we show that a country's position on the manifold and\nits dynamics are linked to its propensity for civil conflict.
Our study\npioneers the use of statistical physics in the analysis of political regimes.\nOur results provide a quantitative foundation for developing theories about\nwhat changes during democratization and democratic backsliding, as well as a\nnew framework for regime-transformation and risk-of-conflict assessment.", "split": "arXiv" }, { "text": "Given a closed subset $K$ in $\\mathbb{R}$, the rational $K$-truncated moment\nproblem ($K$-RTMP) asks to characterize the existence of a positive Borel\nmeasure $\\mu$, supported on $K$, such that a linear functional $\\mathcal{L}$,\ndefined on all rational functions of the form $\\frac{f}{q}$, where $q$ is a\nfixed polynomial with all real zeros of even order and $f$ is any real\npolynomial of degree at most $2k$, is an integration with respect to $\\mu$. The\ncase of a compact set $K$ was solved by Chandler in 1994, but there is no\nargument that ensures that $\\mu$ vanishes on all real zeros of $q$. An obvious\nnecessary condition for the solvability of the $K$-RTMP is that $\\mathcal{L}$\nis nonnegative on every $f$ satisfying $f|_{K}\\geq 0$. If $\\mathcal{L}$ is\nstrictly positive on every $0\\neq f|_{K}\\geq 0$, we add the missing argument\nfrom Chandler's solution and also bound the number of atoms in a minimal\nrepresenting measure. We show by an example that nonnegativity of $\\mathcal{L}$\nis not sufficient and add the missing conditions to the solution. We also solve\nthe $K$-RTMP for unbounded $K$ and derive the solutions to the strong truncated\nHamburger moment problem and the truncated moment problem on the unit circle as\nspecial cases.", "split": "arXiv" }, { "text": "This work investigates the effects of tangent polar activity on the\nconformational and dynamic properties of entangled polymer melts through\nLangevin molecular dynamics simulations. We examine systems composed of all\nself-propelled, monodisperse linear chains, so that constraint release is\nconsidered. The range of activities explored here includes values where the\nactive reptation theory is applicable, as well as higher activities that\nchallenge the validity of the theory. Chain conformations exhibit a moderate\nincrease in coil size increase, which becomes more pronounced at higher\nactivity levels. Under these conditions, a local bond alignment along the chain\ncontour appears together with a non-homogeneous segmental stretching, and\norientation and stretching of the tube. Dynamically, polar activity induces a\nmolecular-weight-independent diffusion coefficient, a transient superdiffusive\nbehavior, and an end-to-end relaxation time inversely proportional to the\nmolecular weight. Finally, our results are summarized in a diagram that\nclassifies the various regimes of behavior observed in the simulations.\nOverall, these findings provide valuable insights into the complex interplay\nbetween activity and entanglements, advancing our understanding of active\npolymer systems and their potential applications across various fields.", "split": "arXiv" }, { "text": "Depth estimation is an essential task toward full scene understanding since\nit allows the projection of rich semantic information captured by cameras into\n3D space. While the field has gained much attention recently, datasets for\ndepth estimation lack scene diversity or sensor modalities. This work presents\nthe ADUULM-360 dataset, a novel multi-modal dataset for depth estimation. The\nADUULM-360 dataset covers all established autonomous driving sensor modalities,\ncameras, lidars, and radars. 
It covers a frontal-facing stereo setup, six\nsurround cameras covering the full 360-degree, two high-resolution long-range\nlidar sensors, and five long-range radar sensors. It is also the first depth\nestimation dataset that contains diverse scenes in good and adverse weather\nconditions. We conduct extensive experiments using state-of-the-art\nself-supervised depth estimation methods under different training tasks, such\nas monocular training, stereo training, and full surround training. Discussing\nthese results, we demonstrate common limitations of state-of-the-art methods,\nespecially in adverse weather conditions, which hopefully will inspire future\nresearch in this area. Our dataset, development kit, and trained baselines are\navailable at https://github.com/uulm-mrm/aduulm_360_dataset.", "split": "arXiv" }, { "text": "Migration is a key ingredient for the formation of close-in super-Earth and\nmini-Neptune systems, as it sets in which resonances planets can be trapped.\nSlower migration rates result in wider resonance configurations compared to\nhigher migration rates. We investigate the influence of different migration\nrates, set by the disc's viscosity, on the structure of multi-planet systems\ngrowing by pebble accretion via N-body simulations. Planets in low viscosity\nenvironments migrate slower due to partial gap opening. Thus systems formed in\nlow viscosity environments tend to have planets trapped in wider resonant\nconfigurations (typically 4:3, 3:2 and 2:1), compared to their high viscosity\ncounterparts (mostly 7:6, 5:4 and 4:3 resonances). After gas disc dissipation,\nthe damping forces cease and the systems can undergo instabilities, rearranging\ntheir configurations and breaking the resonance chains. The low viscosity discs\nnaturally account for the resonant chains like Trappist-1, TOI-178 and\nKepler-223, unlike high viscosity simulations which produce relatively more\ncompact chains. About 95% of our low viscosity resonant chains became unstable,\nexperiencing giant impacts. Dynamical instabilities in our low viscosity\nsimulations are more violent than those of high viscosity simulations due to\nthe effects of leftover external perturbers (P>200 days). About 50% of our\nfinal system ended with no planets within 200 days, while all our systems have\nremaining outer planets. We speculate that this process could be qualitatively\nconsistent with the lack of inner planets in a large fraction of Sun-like\nstars. Systems produced in low viscosity simulations alone do not match the\noverall period ratio distribution of observations, but give a better match to\nthe period distributions of chains, which may suggest that systems of\nsuper-Earths and mini-Neptunes form in natal discs with a diversity of\nviscosities.", "split": "arXiv" }, { "text": "Recently, the Large High-Altitude Air Shower Observatory (LHAASO)\ncollaboration has obtained a measurement of the gamma-ray diffuse emission in\nthe ultra-high energy range, $10-10^3$ TeV after masking the contribution of\nknown sources. The measurement appears to be 2-3 times higher than the\ngamma-ray signal expected from the hadronic interactions of diffuse cosmic rays\nwith the interstellar medium, potentially suggesting a contribution from\nunresolved sources. However, estimates of the diffuse emission are affected by\nlarge uncertainties that must be accounted for. 
In this work, we calculate the\nhadronic gamma-ray diffuse emission including uncertainties in the gas content\nof the Galactic disk, in the energy and spatial distribution of cosmic rays as\nwell as in the hadronic interaction cross-section. We show that the LHAASO data\nabove $\\sim 30$ TeV are consistent with the gamma-ray diffuse emission model\nwhen all these uncertainties are taken into account. This implies that, with\nthe current data in this energy range, there is no need to invoke a cosmic ray\nspectral variation toward the Galactic center, nor a dominant contribution from\nunresolved sources.", "split": "arXiv" }, { "text": "A large host of scientific journals and conferences solicit peer reviews from\nmultiple reviewers for the same submission, aiming to gather a broader range of\nperspectives and mitigate individual biases. In this work, we reflect on the\nrole of diversity in the slate of reviewers assigned to evaluate a submitted\npaper as a factor in diversifying perspectives and improving the utility of the\npeer-review process. We propose two measures for assessing review utility:\nreview coverage -- reviews should cover most contents of the paper -- and\nreview redundancy -- reviews should add information not already present in\nother reviews. We hypothesize that reviews from diverse reviewers will exhibit\nhigh coverage and low redundancy. We conduct a causal study of different\nmeasures of reviewer diversity on review coverage and redundancy using\nobservational data from a peer-reviewed conference with approximately 5,000\nsubmitted papers. Our study reveals disparate effects of different diversity\nmeasures on review coverage and redundancy. Our study finds that assigning a\ngroup of reviewers that are topically diverse, have different seniority levels,\nor have distinct publication networks leads to broader coverage of the paper or\nreview criteria, but we find no evidence of an increase in coverage for\nreviewer slates with reviewers from diverse organizations or geographical\nlocations. Reviewers from different organizations, seniority levels, topics, or\npublications networks (all except geographical diversity) lead to a decrease in\nredundancy in reviews. Furthermore, publication network-based diversity alone\nalso helps bring in varying perspectives (that is, low redundancy), even within\nspecific review criteria. Our study adopts a group decision-making perspective\nfor reviewer assignments in peer review and suggests dimensions of diversity\nthat can help guide the reviewer assignment process.", "split": "arXiv" }, { "text": "Graph augmentation is a fundamental and well-studied problem that arises in\nnetwork optimization. We consider a new variant of this model motivated by\nreconfigurable communication networks. In this variant, we consider a given\nphysical network and the measured communication demands between the nodes. Our\ngoal is to augment the given physical network with a matching, so that the\nshortest path lengths in the augmented network, weighted with the demands, are\nminimal.We prove that this problem is NP-hard, even if the physical network is\na cycle. We then use results from demand-aware network design to provide a\nconstant-factor approximation algorithm for adding a matching in case that only\na few nodes in the network cause almost all the communication. For general\nreal-world communication patterns, we design and evaluate a series of\nheuristics that can deal with arbitrary graphs as the underlying network\nstructure. 
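As a rough illustration of the objective behind this augmentation problem, the sketch below scores a candidate matching by the demand-weighted sum of shortest-path lengths in the augmented graph and greedily picks matching edges on a small cycle; the example graph, demand matrix, and greedy selection are placeholder choices, not the heuristics or the approximation algorithm described above.

```python
# Illustrative sketch: score a matching added to a physical network by the
# demand-weighted shortest-path cost (not the paper's heuristics themselves).
import itertools
import networkx as nx

def demand_weighted_cost(G, demands):
    """Sum of demand * shortest-path length over all communicating pairs."""
    return sum(d * nx.shortest_path_length(G, u, v)
               for (u, v), d in demands.items())

# Physical network: a cycle on 8 nodes (the hard case discussed above).
G = nx.cycle_graph(8)
demands = {(0, 4): 5.0, (1, 5): 3.0, (2, 6): 1.0}

# Greedy toy heuristic: repeatedly add the matching edge that reduces the
# cost most, while keeping the set of added edges a matching.
matched, added = set(), []
for _ in range(2):
    best = None
    for u, v in itertools.combinations(G.nodes, 2):
        if G.has_edge(u, v) or u in matched or v in matched:
            continue
        H = G.copy()
        H.add_edge(u, v)
        cost = demand_weighted_cost(H, demands)
        if best is None or cost < best[0]:
            best = (cost, u, v)
    if best:
        _, u, v = best
        G.add_edge(u, v)
        matched.update({u, v})
        added.append((u, v))

print("added matching edges:", added,
      "cost:", demand_weighted_cost(G, demands))
```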
Our algorithms are validated experimentally using real-world data center\ntraces (e.g., from Facebook).", "split": "arXiv" }, { "text": "Recent advances in Large Language Models (LLMs) have enabled them to overcome\ntheir context window limitations, and demonstrate exceptional retrieval and\nreasoning capacities on longer contexts. Question-answering systems augmented\nwith Long-Context Language Models (LCLMs) can automatically search massive\nexternal data and incorporate it into their contexts, enabling faithful\npredictions and reducing issues such as hallucinations and knowledge staleness.\nExisting studies targeting LCLMs mainly concentrate on addressing the so-called\nlost-in-the-middle problem or improving inference efficiency, leaving their\nprivacy risks largely unexplored. In this paper, we aim to bridge this gap and\nargue that integrating all information into the long context makes it a\nrepository of sensitive information, which often contains private data such as\nmedical records or personal identities. We further investigate the membership\nprivacy within LCLMs' external context, with the aim of determining whether a\ngiven document or sequence is included in the LCLMs' context. Our basic idea is\nthat if a document lies in the context, it will exhibit a low generation loss\nor a high degree of semantic similarity to the contents generated by LCLMs. We\npropose, for the first time, six membership inference attack (MIA) strategies\ntailored for LCLMs and conduct extensive experiments on various popular models.\nEmpirical results demonstrate that our attacks can accurately infer membership\nstatus in most cases, e.g., a 90.66% attack F1-score on multi-document QA\ndatasets with LongChat-7b-v1.5-32k, highlighting significant risks of\nmembership leakage within LCLMs' input contexts. Furthermore, we examine the\nunderlying reasons why LCLMs are susceptible to revealing such membership\ninformation.", "split": "arXiv" }, { "text": "We elaborate on and further develop an approach to determining the\nteleparallel analogue of spacetimes in General Relativity (GR) by studying the\nTeleparallel analogue of pp-Wave (TppW) spacetimes. This relies on using the\nfact that these solutions belong to the Vanishing Scalar Invariant (VSI)\nsubclass, for which the explicit forms of the frame and spin-connection are\nknown. By identifying the pp-wave (ppW) metric within this class, we are able\nto use frame-based symmetry methods and the Cartan-Karlhede (CK) algorithm to\ndetermine the necessary form for the frame. Through this analysis we find two\noverlooked solutions that are permitted in teleparallel gravity (TPG) and in\nGR.", "split": "arXiv" }, { "text": "Let $k$ be an uncountable algebraically closed field of positive\ncharacteristic and let $S_0$ be a connected smooth projective surface over $k$.\nWe extend the theorem on the Gysin kernel from [28, Theorem 5.1] to also be\ntrue over $k$, where it was proved over $\\mathbb{C}$. This is done by showing\nthat almost all results still hold true over $k$ via the same argument or by\nusing \\'{e}tale base arguments and then using a lift with the Comparison theorem\n[22, Theorems 21.1 & 20.5] as needed.", "split": "arXiv" }, { "text": "In this study, we introduced a simple yet innovative method to trigger\nturbulence in a channel flow to achieve statistically stationary flow\nconditions.
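Returning to the generation-loss membership signal described in the long-context privacy study above, a minimal hedged sketch of one such scoring rule follows; the model name, prompt layout, and example strings are placeholders rather than any of the six attack strategies proposed in that work.

```python
# Hedged sketch of a generation-loss membership signal for a long-context model.
# The model name, prompt layout and example texts below are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"   # stand-in; a long-context model would be used in practice
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def doc_loss_given_context(context: str, candidate: str) -> float:
    """Average token loss of `candidate` when `context` is prepended.

    A low loss suggests the candidate document may be part of the context
    (the membership hypothesis described above)."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    cand_ids = tok(candidate, return_tensors="pt").input_ids
    ids = torch.cat([ctx_ids, cand_ids], dim=1)
    labels = ids.clone()
    labels[:, : ctx_ids.shape[1]] = -100      # score only the candidate tokens
    with torch.no_grad():
        out = model(ids, labels=labels)
    return out.loss.item()

context = "Record A: hypothetical private note about a patient visit."
member = "Record A: hypothetical private note about a patient visit."
non_member = "An unrelated paragraph about weather patterns."
print(doc_loss_given_context(context, member),
      doc_loss_given_context(context, non_member))
```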
We compare this new method based on synthetically generated\nthree-dimensional turbulence with two other well-established methods, namely,\nlinear profile superposed with random noise and descending counter-rotating\nvortices and log-law profile superposed with random noise and descending\ncounter-rotating vortices. We found that synthetically generated\nthree-dimensional turbulence provides a computationally cheap and effective way\nto reduce simulation spin-up to achieve statistically stationary flow\nconditions when a precursor turbulent initial condition is not available. At a\none-time cost of less than 1 CPU hour to generate the synthetic turbulent\ninitial condition, the flow becomes statistically stationary within 3 eddy\nturnovers for all the parameters of interest in wall-bounded pressure-driven\nchannel flow simulations when compared to other alternatives that can take more\nthan 10 eddy turnovers resulting in substantial savings in the computational\ncost.", "split": "arXiv" }, { "text": "In this paper, we prove several new infinite families of Ramanujan--like\ncongruences satisfied by the coefficients of the generating function $U_t(a,q)$\nwhich is an extension of MacMahon's generalized sum-of-divisors function. As a\nby-product, we also show that, for all $n\\geq 0$, $\\overline{B}_3(15n+7)\\equiv\n0 \\pmod{5}$ where $\\overline{B}_3(n)$ is the number of almost $3$-regular\noverpartitions of $n$.", "split": "arXiv" }, { "text": "Bell scenarios are multipartite scenarios that exclude any communication\nbetween parties. This constraint leads to a strict hierarchy of correlation\nsets in such scenarios, namely, classical, quantum, and nonsignaling. However,\nwithout any constraints on communication between the parties, they can realize\narbitrary correlations by exchanging only classical systems. Here we consider a\nmultipartite scenario where the parties can engage in at most a single round of\ncommunication, i.e., each party is allowed to receive a system once, implement\nany local intervention on it, and send out the resulting system once. While no\nglobal assumption about causal relations between parties is assumed in this\nscenario, we do make a causal assumption local to each party, i.e., the input\nreceived by it causally precedes the output it sends out. We then introduce\nantinomicity, a notion of nonclassicality for correlations in such scenarios,\nand prove the existence of a strict hierarchy of correlation sets classified by\ntheir antinomicity. Antinomicity serves as a generalization of Bell\nnonlocality: when all the parties discard their output systems (i.e., in a\nnonsignaling scenario), it is mathematically equivalent to Bell nonlocality.\nLike Bell nonlocality, it can be understood as an instance of fine-tuning, one\nthat is necessary in any classical model of cyclic causation that avoids\ntime-travel antinomies but allows antinomic correlations. Furthermore,\nantinomicity resolves a long-standing puzzle, i.e., the failure of causal\ninequality violations as device-independent witnesses of nonclassicality.\nAntinomicity implies causal inequality violations, but not conversely.", "split": "arXiv" }, { "text": "The rapid advancement of face forgery techniques has introduced a growing\nvariety of forgeries. Incremental Face Forgery Detection (IFFD), involving\ngradually adding new forgery data to fine-tune the previously trained model,\nhas been introduced as a promising strategy to deal with evolving forgery\nmethods. 
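As a generic illustration of the synthetic-turbulence initialization compared above (and only that: the study's actual generator is not specified here), one simple way to obtain an approximately divergence-free random velocity perturbation is to take the curl of a smoothed random vector potential:

```python
# Generic sketch: an approximately divergence-free random velocity field built
# as the curl of a smoothed random vector potential (illustrative only; not the
# specific synthetic-turbulence generator used in the study above).
import numpy as np
from scipy.ndimage import gaussian_filter

n = 64
rng = np.random.default_rng(1)
# Random vector potential, smoothed to impose a large-scale correlation length.
A = [gaussian_filter(rng.normal(size=(n, n, n)), sigma=4) for _ in range(3)]

dx = 1.0 / n
dAz_dy, dAy_dz = np.gradient(A[2], dx, axis=1), np.gradient(A[1], dx, axis=2)
dAx_dz, dAz_dx = np.gradient(A[0], dx, axis=2), np.gradient(A[2], dx, axis=0)
dAy_dx, dAx_dy = np.gradient(A[1], dx, axis=0), np.gradient(A[0], dx, axis=1)

u = dAz_dy - dAy_dz          # curl components: u = dAz/dy - dAy/dz, etc.
v = dAx_dz - dAz_dx
w = dAy_dx - dAx_dy

# The perturbation would then be rescaled and superposed on a mean profile.
div = (np.gradient(u, dx, axis=0) + np.gradient(v, dx, axis=1)
       + np.gradient(w, dx, axis=2))
print("rms velocity:", u.std(), "max |div| (finite differences):", np.abs(div).max())
```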
However, a naively trained IFFD model is prone to catastrophic\nforgetting when new forgeries are integrated, as treating all forgeries as a\nsingle ``Fake'' class in the Real/Fake classification can cause different\nforgery types to override one another, thereby resulting in the forgetting of\nunique characteristics from earlier tasks and limiting the model's\neffectiveness in learning forgery specificity and generality. In this paper, we\npropose to stack the latent feature distributions of previous and new tasks\nbrick by brick, $\\textit{i.e.}$, achieving $\\textbf{aligned feature\nisolation}$. In this manner, we aim to preserve learned forgery information and\naccumulate new knowledge by minimizing distribution overriding, thereby\nmitigating catastrophic forgetting. To achieve this, we first introduce Sparse\nUniform Replay (SUR) to obtain representative subsets that can be treated\nas uniformly sparse versions of the previous global distributions. We then\npropose a Latent-space Incremental Detector (LID) that leverages SUR data to\nisolate and align distributions. For evaluation, we construct a more advanced\nand comprehensive benchmark tailored for IFFD. The leading experimental results\nvalidate the superiority of our method.", "split": "arXiv" }, { "text": "From an architectural perspective, with the main goal of reducing the\neffective traffic load in the network and thus gaining more operational\nefficiency, optical networks have essentially remained the same over the past\ntwo decades, since the early 2000s, with the success and then dominance of the\noptical-bypass mode. In the optical-bypass-enabled network, the add/drop and\ncross-connect functions constitute the fundamental operations in handling the\ntraffic at the optical layer, whose underlying principle lies in the fact\nthat, when cross-connecting in-transit lightpaths over an intermediate node, such\nlightpaths must be guarded from each other in a certain dimension, be it the\ntime, frequency or spatial domain, to avoid interference, which is treated as\ndestructive. In view of the rapid progress in the realm of optical computing,\nenabling precisely controlled interference between optical channels for\nvarious computing capabilities, we envision a different perspective that turns the\nlong-established wisdom of the optical-bypass network around by putting optical\nchannel interference to good use, resulting in the so-called\noptical-computing-enabled network. This paper presents two illustrative\nexamples based on the optical aggregation and optical XOR operations, which have\nbeen progressively maturing and thus could be feasibly integrated into the\ncurrent legacy infrastructure with minimal disruption. We then\npropose a detailed case study in formulating and solving network\ncoding-enabled optical network design, demonstrating the efficacy of the\noptical-computing-enabled network and highlighting the unique challenges, tied\nto the greater complexity of the network design problems, compared to the\noptical-bypass counterpart.", "split": "arXiv" }, { "text": "Recent advancements in medical image analysis have predominantly relied on\nConvolutional Neural Networks (CNNs), achieving impressive performance in chest\nX-ray classification tasks, such as the 92% AUC reported by AutoThorax-Net and\nthe 88% AUC achieved by ChexNet. However, in the medical\nfield, even small improvements in accuracy can have significant clinical\nimplications.
This study explores the application of Vision Transformers (ViT),\na state-of-the-art architecture in machine learning, to chest X-ray analysis,\naiming to push the boundaries of diagnostic accuracy. I present a comparative\nanalysis of two ViT-based approaches: one utilizing full chest X-ray images and\nanother focusing on segmented lung regions. Experiments demonstrate that both\nmethods surpass the performance of traditional CNN-based models, with the\nfull-image ViT achieving up to 97.83% accuracy and the lung-segmented ViT\nreaching 96.58% accuracy in three-label disease classification, and an AUC of\n94.54% when the number of labels is increased to eight. Notably, the full-image\napproach showed superior performance across all metrics, including precision,\nrecall, F1 score, and AUC-ROC. These findings suggest that Vision Transformers\ncan effectively capture relevant features from chest X-rays without the need\nfor explicit lung segmentation, potentially simplifying the preprocessing\npipeline while maintaining high accuracy. This research contributes to the\ngrowing body of evidence supporting the efficacy of transformer-based\narchitectures in medical image analysis and highlights their potential to\nenhance diagnostic precision in clinical settings.", "split": "arXiv" }, { "text": "Nuclear magnetic resonance instruments are becoming available to the\ndo-it-yourself community. The challenges encountered in the endeavor to build a\nmagnetic resonance imaging instrument from scratch were confronted in a\nfour-day hackathon at Singapore University of Technology and Design in spring\n2024. One day was devoted to educational lectures and three days to system\nconstruction and testing. Seventy young researchers from all parts of the world\nformed six teams focusing on magnet, gradient coil, RF coil, console, system\nintegration, and design, which together produced a working MRI instrument in\nthree days. The different steps, encountered challenges, and their solutions\nare reported.", "split": "arXiv" }, { "text": "We describe an algorithm to compute the stable multiplicity of a family of\nirreducible representations in the cohomology of the ordered configuration space\nof the plane. Using this algorithm, we compute the stable multiplicities of all\nfamilies of irreducibles given by Young diagrams with $23$ boxes or less up to\ncohomological degree $50$. In particular, this determines the stable cohomology\nin cohomological degrees $0 \\leq i \\leq 11$. We prove related qualitative\nresults and formulate some conjectures.", "split": "arXiv" }, { "text": "Amongst the issues plaguing the Standard Model (SM) are questions pertaining\nto neutrino masses and mixings, the anomalous magnetic moments of the electron\nand muon, and the problem of a suitable dark matter (DM) candidate. All\nthree issues can be addressed at once by extending the SM with two generations\nof vector-like fermions and an inert scalar doublet, all odd under a Z2\nsymmetry. The light neutrino masses and mixings are generated radiatively while\nmaintaining consistency with bounds on lepton flavor violation. Loop diagrams\nwith the very same fields also serve to explain the anomalous magnetic moments.\nSimilarly, the correct dark matter relic abundance is reproduced without coming\ninto conflict with direct detection constraints, or those from big bang\nnucleosynthesis or cosmic microwave background observations.
Finally, prospective\nsignatures at the LHC are discussed.", "split": "arXiv" }, { "text": "Machine learning is an important tool for analyzing high-dimension\nhyperspectral data; however, existing software solutions are either\nclosed-source or inextensible research products. In this paper, we present\ncuvis.ai, an open-source and low-code software ecosystem for data acquisition,\npreprocessing, and model training. The package is written in Python and\nprovides wrappers around common machine learning libraries, allowing both\nclassical and deep learning models to be trained on hyperspectral data. The\ncodebase abstracts processing interconnections and data dependencies between\noperations to minimize code complexity for users. This software package\ninstantiates nodes in a directed acyclic graph to handle all stages of a\nmachine learning ecosystem, from data acquisition, including live or static\ndata sources, to final class assignment or property prediction. User-created\nmodels contain convenient serialization methods to ensure portability and\nincrease sharing within the research community. All code and data are available\nonline: https://github.com/cubert-hyperspectral/cuvis.ai", "split": "arXiv" }, { "text": "In this paper, we propose a conceptual framework for personalized\nbrain-computer interface (BCI) applications, which can offer an enhanced user\nexperience by customizing services to individual preferences and needs, based\non endogenous electroencephalography (EEG) paradigms including motor imagery\n(MI), speech imagery (SI), and visual imagery. The framework includes two\nessential components: user identification and intention classification, which\nenable personalized services by identifying individual users and recognizing\ntheir intended actions through EEG signals. We validate the feasibility of our\nframework using a private EEG dataset collected from eight subjects, employing\nthe ShallowConvNet architecture to decode EEG features. The experimental\nresults demonstrate that user identification achieved an average classification\naccuracy of 0.995, while intention classification achieved 0.47 accuracy across\nall paradigms, with MI demonstrating the best performance. These findings\nindicate that EEG signals can effectively support personalized BCI\napplications, offering robust identification and reliable intention decoding,\nespecially for MI and SI.", "split": "arXiv" }, { "text": "This paper presents an accelerated spherical K-means clustering algorithm for\nlarge-scale and high-dimensional sparse document data sets. We design an\nalgorithm working in an architecture-friendly manner (AFM), which is a\nprocedure of suppressing performance-degradation factors such as the numbers of\ninstructions, branch mispredictions, and cache misses in CPUs of a modern\ncomputer system. For the AFM operation, we leverage unique universal\ncharacteristics (UCs) of a data-object and a cluster's mean set, which are\nskewed distributions on data relationships such as Zipf's law and a\nfeature-value concentration phenomenon. The UCs indicate that the most part of\nthe number of multiplications for similarity calculations is executed regarding\nterms with high document frequencies (df) and the most part of a similarity\nbetween an object- and a mean-feature vector is obtained by the multiplications\nregarding a few high mean-feature values. 
Our proposed algorithm applies an\ninverted-index data structure to a mean set, extracts the specific region with\nhigh-df terms and high mean-feature values in the mean-inverted index by newly\nintroduced two structural parameters, and exploits the index divided into three\nparts for efficient pruning. The algorithm determines the two structural\nparameters by minimizing the approximate number of multiplications related to\nthat of instructions, reduces the branch mispredictions by sharing the index\nstructure including the two parameters with all the objects, and suppressing\nthe cache misses by keeping in the caches the frequently used data in the\nforegoing specific region, resulting in working in the AFM. We experimentally\ndemonstrate that our algorithm efficiently achieves superior speed performance\nin large-scale documents compared with algorithms using the state-of-the-art\ntechniques.", "split": "arXiv" }, { "text": "Quantum secure direct communication (QSDC) enables the message sender to\ndirectly send secure messages to the receiver through the quantum channel\nwithout keys. Device-independent (DI) and measurement-device-independent (MDI)\nQSDC protocols can enhance QSDC's practical security in theory. DI QSDC\nrequires extremely high global detection efficiency and has quite low secure\ncommunication distance. DI and MDI QSDC both require high-quality entanglement.\nCurrent entanglement sources prepare entangled photon pairs with low\nefficiency, largely reducing their practical communication efficiency. In the\npaper, we propose a single-photon-based receiver-device-independent (RDI) QSDC\nprotocol. It only relies on the trusted single-photon source, which is nearly\non-demand under current technology, and treats all the receiving devices in\nboth communication parties as ``black-boxes''. The parties ensure the message\nsecurity only from the observed statistics. We develop a numerical method to\nsimulate its performance in practical noisy communication situation. RDI QSDC\nprovides the same security level as MDI QSDC. Compared with DI and MDI QSDC,\nRDI QSDC has some advantages. First, it uses the single-photon source and\nsingle-photon measurement, which makes it obtain the practical communication\nefficiency about 3415 times of that in DI QSDC and easy to implement. The whole\nprotocol is feasible with current technology. Second, it has higher photon loss\nrobustness and noise tolerance than DI QSDC, which enables it to have a secure\ncommunication distance about 26 times of that in DI QSDC. Based on above\nfeatures, the RDI QSDC protocol makes it possible to achieve highly-secure and\nhigh-efficient QSDC in the near future.", "split": "arXiv" }, { "text": "It is proved that the Chebyshev's method applied to an entire function $f$ is\na rational map if and only if $f(z) = p(z) e^{q(z)}$, for some polynomials $p$\nand $q$. These are referred to as rational Chebyshev maps, and their fixed\npoints are discussed in this article. It is seen that $\\infty$ is a parabolic\nfixed point with multiplicity one bigger than the degree of $q$. Considering\n$q(z)=p(z)^n+c$, where $p$ is a linear polynomial, $n \\in \\mathbb{N}$ and $c$\nis a non-zero constant, we show that the Chebyshev's method applied to $pe^q$\nis affine conjugate to that applied to $z e^{z^n}$. We denote this by $C_n$.\nAll the finite extraneous fixed points of $C_n$ are shown to be repelling. The\nJulia set $\\mathcal{J}(C_n)$ of $C_n$ is found to be preserved under rotations\nof order $n$ about the origin. 
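To illustrate the inverted-index idea at the heart of the architecture-friendly spherical K-means algorithm above, the toy sketch below performs the assignment step by accumulating similarities only over terms shared between a document and the mean-inverted index; the tiny example vectors are hypothetical, and the sketch omits the structural-parameter tuning and index partitioning that the full method relies on.

```python
# Illustrative sketch of the inverted-index assignment step for spherical
# k-means on sparse documents (not the full architecture-friendly algorithm
# described above, which also partitions the index and tunes two parameters).
import numpy as np
from collections import defaultdict

def build_inverted_index(means):
    """means: list of {term: weight} dicts (L2-normalised mean vectors)."""
    index = defaultdict(list)                 # term -> [(cluster_id, weight)]
    for j, mean in enumerate(means):
        for term, w in mean.items():
            index[term].append((j, w))
    return index

def assign(doc, index, k):
    """doc: {term: weight}; returns the most similar cluster id."""
    sims = np.zeros(k)
    for term, x in doc.items():               # multiplications only on shared terms
        for j, w in index.get(term, ()):
            sims[j] += x * w
    return int(sims.argmax())

means = [{"apple": 0.8, "fruit": 0.6}, {"car": 0.7, "road": 0.71}]
index = build_inverted_index(means)
doc = {"apple": 0.9, "pie": 0.44}
print("assigned cluster:", assign(doc, index, k=len(means)))
```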
For each $n$, the immediate basin of $0$ is\nproved to be simply connected. For all $n \\leq 16$, we prove that\n$\\mathcal{J}(C_n)$ is connected. Newton's method applied to $ze^{z^n}$ is\nfound to be conjugate to a polynomial, and its dynamics is also completely\ndetermined.", "split": "arXiv" }, { "text": "Currently, deep learning-based instance segmentation for various applications\n(e.g., agriculture) is predominantly performed using a labor-intensive process\ninvolving extensive field data collection using sophisticated sensors, followed\nby careful manual annotation of images, presenting significant logistical and\nfinancial challenges to researchers and organizations. This also slows down\nmodel development and training. In this study, we present a\nnovel method for deep learning-based instance segmentation of apples in\ncommercial orchards that eliminates the need for labor-intensive field data\ncollection and manual annotation. Utilizing a Large Language Model (LLM), we\nsynthetically generated orchard images and automatically annotated them using\nthe Segment Anything Model (SAM) integrated with a YOLO11 base model. This\nmethod significantly reduces reliance on physical sensors and manual data\nprocessing, presenting a major advancement in \"Agricultural AI\". The synthetic,\nauto-annotated dataset was used to train the YOLO11 model for apple instance\nsegmentation, which was then validated on real orchard images. The results\nshowed that the automatically generated annotations achieved a Dice Coefficient\nof 0.9513 and an IoU of 0.9303, validating the accuracy and overlap of the mask\nannotations. All YOLO11 configurations, trained solely on these synthetic\ndatasets with automated annotations, accurately recognized and delineated\napples, highlighting the method's efficacy. Specifically, the YOLO11m-seg\nconfiguration achieved a mask precision of 0.902 and a mask mAP@50 of 0.833 on\ntest images collected from a commercial orchard. Additionally, the YOLO11l-seg\nconfiguration outperformed other models in validation on 40 LLM-generated\nimages, achieving the highest mask precision and mAP@50 metrics.\n Keywords: YOLO, SAM, SAMv2, YOLO11, YOLOv11, Segment Anything, YOLO-SAM", "split": "arXiv" }, { "text": "A \\emph{conforming partition} of a rectilinear $ n $-gon $ P $ is a partition of\n$ P $ into rectangles without using Steiner points (i.e., all corners of all\nrectangles must lie on the boundary of $ P $). The stabbing\nnumber of such a partition is the maximum number of rectangles intersected by\nan axis-aligned segment lying in the interior of $ P $. In this paper, we\nexamine the problem of computing conforming partitions with low stabbing\nnumber. We show that computing a conforming partition with stabbing number at\nmost~$ 4 $ is $ NP $-hard, which strengthens a previously known hardness result\n[Durocher \\& Mehrabi, Theor. Comput. Sci. 689: 157-168 (2017)] and eliminates\nthe possibility of fixed-parameter-tractable algorithms parameterized by the\nstabbing number unless $ P = NP $.
In contrast, we give (i) an $ O ( n \\log n ) $-time algorithm\nto decide whether a conforming partition with stabbing number~$ 2 $ exists, (ii) a\nfixed-parameter-tractable algorithm parameterized by both the stabbing number\nand the treewidth of the pixelation of the polygon, and (iii) a\nfixed-parameter-tractable algorithm parameterized by the stabbing number for\nsimple polygons in general position.", "split": "arXiv" }, { "text": "We consider estimating the shared mean of a sequence of heavy-tailed random\nvariables taking values in a Banach space. In particular, we revisit and extend\na simple truncation-based mean estimator by Catoni and Giulini. While existing\ntruncation-based approaches require a bound on the raw (non-central) second\nmoment of observations, our results hold under a bound on either the central or\nnon-central $p$th moment for some $p > 1$. In particular, our results hold for\ndistributions with infinite variance. The main contributions of the paper\nfollow from exploiting connections between truncation-based mean estimation and\nthe concentration of martingales in 2-smooth Banach spaces. We prove two types\nof time-uniform bounds on the distance between the estimator and the unknown mean:\nline-crossing inequalities, which can be optimized for a fixed sample size $n$,\nand non-asymptotic law of the iterated logarithm type inequalities, which match\nthe tightness of line-crossing inequalities at all points in time up to a\ndoubly logarithmic factor in $n$. Our results do not depend on the dimension of\nthe Banach space, hold under martingale dependence, and all constants in the\ninequalities are known and small.", "split": "arXiv" }, { "text": "In Radio Super Novae (RSNe) a magnetic field of $(B \\, \\times \\, r) \\, = \\,\n10^{16.0 \\pm 0.12} \\, {\\rm Gauss \\, \\times \\, cm}$ is observed; these are the\nsame numbers for Blue Super Giant (BSG) star explosions as for Red Super Giant\n(RSG) star explosions, despite their very different wind properties. The EHT\ndata for M87, as well as for low-power radio galaxies, all show consistency with\njust this value of the quantity $(B \\, \\times \\, r )$, key for angular momentum\nand energy transport, which can be derived from the radio jet data. We interpret\nthis as a property of the near surroundings of a black hole (BH) at near\nmaximal rotation, independent of BH mass. In the commonly used green onion\nmodel, in which a $2 \\, \\pi$ flow changes over to a jet flow, we interpret this\nas a wind emanating from the BH/accretion disk system and its surroundings.\nNear the BH, collisions in the wind can produce a large fraction of\nanti-protons. In this scenario, the cosmic ray (CR) population from the wind/jet\nis proposed to be visible as protons and anti-protons in the CR data up to EeV\nenergy, with an $E^{-7/3}$ spectrum. This can be connected to a concept of inner\nand outer Penrose zones in the ergo-region. The observed numbers for the\nmagnetic field imply the Planck time as the governing time scale: a BH rotating\nnear maximum can accept a proton per log bin of energy in an extended spectrum,\nwith the associated pions, every Planck time.", "split": "arXiv" }, { "text": "Aggregators of distributed energy resources are increasingly encouraged to\nparticipate in wholesale market bidding. However, the delivery of the power\nthey are awarded can result in over-voltage or congestion issues within the\ndistribution network (DN).
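For the heavy-tailed mean estimation work above, a hedged sketch of one common form of such a truncation (shrinkage) estimator is given below; the threshold choice and the synthetic heavy-tailed sample are illustrative assumptions, not the tuning analyzed in that paper.

```python
# Hedged sketch of a truncation (shrinkage) mean estimator of the kind discussed
# above for heavy-tailed vectors: each observation is shrunk toward the origin
# when its norm exceeds a threshold. The threshold choice here is illustrative.
import numpy as np

def truncated_mean(X, lam):
    """X: (n, d) array; lam: truncation level. Each row x is replaced by
    x * min(1, lam / ||x||) before averaging."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.minimum(1.0, lam / np.maximum(norms, 1e-12))
    return (X * scale).mean(axis=0)

rng = np.random.default_rng(0)
n, d = 10_000, 3
# Heavy-tailed sample: Student-t with 1.5 degrees of freedom
# (finite mean but infinite variance).
X = rng.standard_t(df=1.5, size=(n, d))
lam = np.quantile(np.linalg.norm(X, axis=1), 0.95)   # illustrative threshold
print("plain mean:", X.mean(axis=0))
print("truncated mean:", truncated_mean(X, lam))
```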
The opportunity to lease energy storage from the\nutility that manages the DN provides the aggregator with a means to mitigate\nthese issues, while also benefiting the utility in terms of additional lease\nrevenue. Nevertheless, this leasing opportunity considerably complicates the\naggregator's offer-making process, as it requires the consideration of market\nuncertainties, uncertain power injection at DN buses, and the strategic\ninteractions between the aggregator and the utility. This paper presents a\nstochastic Stackelberg game model that effectively captures the interactions\nbetween the aggregator and the utility, ensuring DN security across all\npotential uncertainty scenarios. Furthermore, in light of the privacy concerns\nof both the aggregator and the utility, two distributed solution methods are\nproposed. The first method follows a traditional predict-then-optimize\nframework and has been validated to achieve the game equilibrium. The second\nmethod employs an end-to-end framework, which has been empirically shown to\nyield superior economic results. Case studies conducted on 69 and 533-bus DNs\nillustrate the efficacy of the proposed methods.", "split": "arXiv" }, { "text": "Convection in planets and stars is predicted to occur in the \"ultimate\nregime'' of diffusivity-free, rapidly rotating turbulence, in which flows are\ncharacteristically unaffected by viscous and thermal diffusion. Boundary layer\ndiffusion, however, has historically hindered experimental study of this\nregime. Here, we utilize the boundary-independent oscillatory thermal-inertial\nmode of rotating convection to realize the diffusivity-free scaling in liquid\nmetal laboratory experiments. This oscillatory style of convection arises in\nrotating liquid metals (low Prandtl number fluids) and is driven by the\ntemperature gradient in the fluid bulk, thus remaining independent of diffusive\nboundary dynamics. We triply verify the existence of the diffusivity-free\nregime via measurements of heat transfer efficiency $Nu$, dimensionless flow\nvelocities $Re$, and internal temperature anomalies $\\theta$, all of which are\nin quantitative agreement with planar asymptotically-reduced models. Achieving\nthe theoretical diffusivity-free scalings in desktop-sized laboratory\nexperiments provides the validation necessary to extrapolate and predict the\nconvective flows in remote geophysical and astrophysical systems.", "split": "arXiv" }, { "text": "Pre-trained vision-language models provide a robust foundation for efficient\ntransfer learning across various downstream tasks. In the field of video action\nrecognition, mainstream approaches often introduce additional parameter modules\nto capture temporal information. While the increased model capacity brought by\nthese additional parameters helps better fit the video-specific inductive\nbiases, existing methods require learning a large number of parameters and are\nprone to catastrophic forgetting of the original generalizable knowledge. In\nthis paper, we propose a simple yet effective Multi-modal Spatio-Temporal\nAdapter (MSTA) to improve the alignment between representations in the text and\nvision branches, achieving a balance between general knowledge and\ntask-specific knowledge. Furthermore, to mitigate over-fitting and enhance\ngeneralizability, we introduce a spatio-temporal description-guided consistency\nconstraint. 
This constraint involves feeding template inputs (i.e., ``a video\nof $\\{\\textbf{cls}\\}$'') into the trainable language branch, while\nLLM-generated spatio-temporal descriptions are input into the pre-trained\nlanguage branch, enforcing consistency between the outputs of the two branches.\nThis mechanism prevents over-fitting to downstream tasks and improves the\ndistinguishability of the trainable branch within the spatio-temporal semantic\nspace. We evaluate the effectiveness of our approach across four tasks:\nzero-shot transfer, few-shot learning, base-to-novel generalization, and\nfully-supervised learning. Compared to many state-of-the-art methods, our MSTA\nachieves outstanding performance across all evaluations, while using only 2-7\\%\nof the trainable parameters in the original model. Code will be available at\nhttps://github.com/chenhaoxing/ETL4Video.", "split": "arXiv" }, { "text": "Most robotics applications are typically accompanied by safety restrictions\nthat need to be satisfied with a high degree of confidence, even in environments\nunder uncertainty. Controlling the state distribution of a system and enforcing\nsuch specifications as distribution constraints is a promising approach for\nmeeting such requirements. In this direction, covariance steering (CS) is an\nincreasingly popular stochastic optimal control (SOC) framework for designing\nsafe controllers via explicit constraints on the system covariance.\nNevertheless, a major challenge in applying CS methods to systems with the\nnonlinear dynamics and chance constraints common in robotics is that the\napproximations needed are conservative and highly sensitive to the point of\napproximation. This can cause sequential convex programming methods to converge\nto poor local minima or incorrectly report problems as infeasible due to\nshifting constraints. This paper presents a novel algorithm for solving\nchance-constrained nonlinear CS problems that directly addresses this\nchallenge. Specifically, we propose an operator-splitting approach that\ntemporarily separates the main problem into subproblems that can be solved in\nparallel. The benefit of this relaxation lies in the fact that it does not\nrequire all iterates to satisfy all constraints simultaneously prior to\nconvergence, thus enhancing the exploration capabilities of the algorithm for\nfinding better solutions. Simulation results verify the ability of the proposed\nmethod to find higher quality solutions under stricter safety constraints than\nstandard methods on a variety of robotic systems. Finally, the applicability of\nthe algorithm on real systems is confirmed through hardware demonstrations.", "split": "arXiv" }, { "text": "In this article, we study the FitzHugh-Nagumo $(1,1)$--fast-slow system where\nthe vector fields associated with the slow/fast equations come from the reduction\nof the Hodgkin-Huxley model for the nerve impulse. After deriving dynamical\nproperties of the singular and regular cases, we perform a bifurcation analysis\nand we investigate how the parameters (of the affine slow equation) impact the\ndynamics of the system. The study of codimension one bifurcations and the\nnumerical locus of canards concludes this case study. All theoretical results\nare numerically illustrated.", "split": "arXiv" }, { "text": "Let $p$ be an odd prime and $k$ be an algebraically closed field with\ncharacteristic $p$.
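Looking back at the description-guided consistency constraint of MSTA above, the following generic sketch pulls the output of a trainable text branch (fed a class template) toward that of a frozen copy (fed an LLM-generated description); the tiny encoder, mock token ids, and loss weight are placeholders, not the actual MSTA architecture.

```python
# Generic sketch of a two-branch consistency constraint in the spirit of the
# description-guided mechanism above. The tiny encoder, mock token ids and
# loss weight are placeholders, not the actual MSTA model.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Embedding(1000, 64), nn.Flatten(), nn.Linear(64 * 8, 128))
frozen = copy.deepcopy(encoder).eval()
for p in frozen.parameters():
    p.requires_grad_(False)

def encode(module, token_ids):
    return F.normalize(module(token_ids), dim=-1)

template_ids = torch.randint(0, 1000, (4, 8))     # "a video of {cls}" tokens (mock)
description_ids = torch.randint(0, 1000, (4, 8))  # LLM-generated description (mock)

z_train = encode(encoder, template_ids)
with torch.no_grad():
    z_frozen = encode(frozen, description_ids)

consistency_loss = (1.0 - F.cosine_similarity(z_train, z_frozen, dim=-1)).mean()
total_loss = 0.5 * consistency_loss               # added to the task loss in practice
total_loss.backward()
print(float(consistency_loss))
```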
Booher and Cais showed that the $a$-number of a $\\mathbb\nZ/p \\mathbb Z$-Galois cover of curves $\\phi: Y \\to X$ must be greater than a\nlower bound determined by the ramification of $\\phi$. In this paper, we provide\nevidence that the lower bound is optimal by finding examples of Artin-Schreier\ncurves that have $a$-number equal to its lower bound for all $p$. Furthermore\nwe use formal patching to generate infinite families of Artin-Schreier curves\nwith $a$-number equal to the lower bound in any characteristic.", "split": "arXiv" }, { "text": "Because of the rapid development and increasing public availability of\nGenerative Artificial Intelligence (GenAI) models and tools, educational\ninstitutions and educators must immediately reckon with the impact of students\nusing GenAI. There is limited prior research on computing students' use and\nperceptions of GenAI. In anticipation of future advances and evolutions of\nGenAI, we capture a snapshot of student attitudes towards and uses of yet\nemerging GenAI, in a period of time before university policies had reacted to\nthese technologies. We surveyed all computer science majors in a small\nengineering-focused R1 university in order to: (1) capture a baseline\nassessment of how GenAI has been immediately adopted by aspiring computer\nscientists; (2) describe computing students' GenAI-related needs and concerns\nfor their education and careers; and (3) discuss GenAI influences on CS\npedagogy, curriculum, culture, and policy. We present an exploratory\nqualitative analysis of this data and discuss the impact of our findings on the\nemerging conversation around GenAI and education.", "split": "arXiv" }, { "text": "In a system of two-dimensional electrons, a combination of broken symmetry,\ninteractions, and nontrivial topology can conspire to give rise to a nonlinear\ntransport regime, where electric current density scales as the square of\nelectric field. This regime has become a venue for exciting discoveries such as\nthe nonlinear Hall effect and diode-like nonreciprocal transport. However,\ninterpretation of experimental data is challenging in the nonlinear regime as\nDC transport is described by a rank-3 conductivity tensor with 6 free\nparameters. Here, we resolve this challenge by analytically solving for the\nnonlinear potential distribution across the disk sample for an arbitrary linear\nand nonlinear conductivity tensors. This allows us to unambiguously extract all\ncomponents of the nonlinear tensor from experimental measurement. Using this\nnovel tool, we identify giant nonlinear Hall effect in Bernal bilayer graphene.\nOur methodology provides the first systematic framework for interpreting\nnonlinear transport and uncovers a new route towards understanding quasi-2D\nmaterials.", "split": "arXiv" }, { "text": "This work investigates the self-organization of multi-agent systems into\nclosed trajectories, a common requirement in unmanned aerial vehicle (UAV)\nsurveillance tasks. In such scenarios, smooth, unbiased control signals save\nenergy and mitigate mechanical strain. We propose a decentralized control\nsystem architecture that produces a globally stable emergent structure from\nlocal observations only; there is no requirement for agents to share a global\nplan or follow prescribed trajectories. Central to our approach is the\nformulation of an injective virtual embedding induced by rotations from the\nactual agent positions. 
This embedding serves as a structure-preserving map\naround which all agent stabilize their relative positions and permits the use\nof well-established linear control techniques. We construct the embedding such\nthat it is topologically equivalent to the desired trajectory (i.e., a\nhomeomorphism), thereby preserving the stability characteristics. We\ndemonstrate the versatility of this approach through implementation on a swarm\nof Quanser QDrone quadcopters. Results demonstrate the quadcopters\nself-organize into the desired trajectory while maintaining even separation.", "split": "arXiv" }, { "text": "For a positive integer $k \\ge 1$, a $k$-star ($k^+$-star, $k^-$-star,\nrespectively) is a connected graph containing a degree-$\\ell$ vertex and $\\ell$\ndegree-$1$ vertices, where $\\ell = k$ ($\\ell \\ge k$, $1 \\le \\ell \\le k$,\nrespectively). The $k^+$-star packing problem is to cover as many vertices of\nan input graph $G$ as possible using vertex-disjoint $k^+$-stars in $G$; and\ngiven $k > t \\ge 1$, the $k^-/t$-star packing problem is to cover as many\nvertices of $G$ as possible using vertex-disjoint $k^-$-stars but no $t$-stars\nin $G$. Both problems are NP-hard for any fixed $k \\ge 2$. We present a $(1 +\n\\frac {k^2}{2k+1})$- and a $\\frac 32$-approximation algorithms for the\n$k^+$-star packing problem when $k \\ge 3$ and $k = 2$, respectively, and a $(1\n+ \\frac 1{t + 1 + 1/k})$-approximation algorithm for the $k^-/t$-star packing\nproblem when $k > t \\ge 2$. They are all local search algorithms and they\nimprove the best known approximation algorithms for the problems, respectively.", "split": "arXiv" }, { "text": "We consider the problem of allocating heterogeneous and indivisible goods\namong strategic agents, with preferences over subsets of goods, when there is\nno medium of exchange. This model captures the well studied problem of fair\nallocation of indivisible goods. Serial-quota mechanisms are allocation\nmechanisms where there is a predefined order over agents, and each agent in her\nturn picks a predefined number of goods from the remaining goods. These\nmechanisms are clearly strategy-proof, non-bossy, and neutral. Are there other\nmechanisms with these properties? We show that for important classes of strict\nordinal preferences (as lexicographic preferences, and as the class of all\nstrict preferences), these are the only mechanisms with these properties.\nImportantly, unlike previous work, we can prove the claim even for mechanisms\nthat are not Pareto-efficient. Moreover, we generalize these results to\npreferences that are cardinal, including any valuation class that contains\nadditive valuations. We then derive strong negative implications of this result\non truthful mechanisms for fair allocation of indivisible goods to agents with\nadditive valuations.", "split": "arXiv" }, { "text": "We participated in track 2 of the VoiceMOS Challenge 2024, which aimed to\npredict the mean opinion score (MOS) of singing samples. Our submission secured\nthe first place among all participating teams, excluding the official baseline.\nIn this paper, we further improve our submission and propose a novel\nPitch-and-Spectrum-aware Singing Quality Assessment (PS-SQA) method. The PS-SQA\nis designed based on the self-supervised-learning (SSL) MOS predictor,\nincorporating singing pitch and spectral information, which are extracted using\npitch histogram and non-quantized neural codec, respectively. 
Additionally, the\nPS-SQA introduces a bias correction strategy to address prediction biases\ncaused by low-resource training samples, and employs model fusion technology to\nfurther enhance prediction accuracy. Experimental results confirm that our\nproposed PS-SQA significantly outperforms all competing systems across all\nsystem-level metrics, demonstrating its strong singing quality assessment\ncapabilities.", "split": "arXiv" }, { "text": "We investigate the degrees of freedom of New General Relativity. This theory\nis a three-parameter theory and is classified into nine irreducible types\naccording to the rotation symmetry of $SO(3)$ on each leaf of the ADM foliation. In\nthe previous work~[{\\it 2410.15056[gr-qc]}], we investigated the degrees of\nfreedom in the types of NGR that are of interest for describing gravity: Type 2, Type\n3, Type 5, and Type 8. In this work, we focus on unveiling those numbers in all\nother types to complete the analysis of NGR. After providing the Hamiltonian\nformulation of NGR, we perform the analysis on Type 4, Type 7, and Type 9\naccording to the method provided in the previous work~[{\\it\n2410.15056[gr-qc]}]. We then find that the degrees of freedom of Type 4, Type\n7, and Type 9 are five, null, and three, respectively. Type 4 and Type 9 have\nsecond-class constraint densities only. Type 7 has first-class constraint\ndensities only, but it is over-constrained. In every type, no bifurcation\noccurs, unlike Type 8 in the previous work~[2410.15056[gr-qc]]. Finally, we\nsummarize this work and give a concluding remark for this series of works.", "split": "arXiv" }, { "text": "Abundant geomorphological and geochemical evidence of liquid water on the\nsurface of early Mars during the late Noachian and early Hesperian periods\nneeds to be reconciled with a fainter young Sun. While a dense CO2 atmosphere\nand related warming mechanisms are potential solutions to the early Mars\nclimate problem, further investigation is warranted. Here, we complete a\ncomprehensive survey of the warming potential of all known greenhouse gases and\nperform detailed calculations for 15 different minor gas species under early\nMartian conditions. We find that of these 15 species, H2O2, HNO3, NH3, SO2, and\nC2H4 cause significant greenhouse warming at concentrations of ~0.1 ppmv or\ngreater. However, the most highly effective greenhouse gas species also tend to\nbe more condensable, soluble, and vulnerable to photolytic destruction. To\nprovide a reference for future atmospheric evolution and photochemical studies,\nwe have made our warming potential database freely available online.", "split": "arXiv" }, { "text": "The accurate segmentation of retinal blood vessels plays a crucial role in\nthe early diagnosis and treatment of various ophthalmic diseases. Designing a\nnetwork model for this task requires meticulous tuning and extensive\nexperimentation to handle the tiny and intertwined morphology of retinal blood\nvessels. To tackle this challenge, Neural Architecture Search (NAS) methods have\nbeen developed to fully explore the space of potential network architectures and go\nafter the most powerful one.
Inspired by neuronal diversity which is the\nbiological foundation of all kinds of intelligent behaviors in our brain, this\npaper introduces a novel and foundational approach to neural network design,\ntermed ``neuron programming'', to automatically search neuronal types into a\nnetwork to enhance a network's representation ability at the neuronal level,\nwhich is complementary to architecture-level enhancement done by NAS.\nAdditionally, to mitigate the time and computational intensity of neuron\nprogramming, we develop a hypernetwork that leverages the search-derived\narchitectural information to predict optimal neuronal configurations.\nComprehensive experiments validate that neuron programming can achieve\ncompetitive performance in retinal blood segmentation, demonstrating the strong\npotential of neuronal diversity in medical image analysis.", "split": "arXiv" }, { "text": "In a setting where segmentation models have to be built for multiple\ndatasets, each with its own corresponding label set, a straightforward way is\nto learn one model for every dataset and its labels. Alternatively, multi-task\narchitectures with shared encoders and multiple segmentation heads or shared\nweights with compound labels can also be made use of. This work proposes a\nnovel label sharing framework where a shared common label space is constructed\nand each of the individual label sets are systematically mapped to the common\nlabels. This transforms multiple datasets with disparate label sets into a\nsingle large dataset with shared labels, and therefore all the segmentation\ntasks can be addressed by learning a single model. This eliminates the need for\ntask specific adaptations in network architectures and also results in\nparameter and data efficient models. Furthermore, label sharing framework is\nnaturally amenable for incremental learning where segmentations for new\ndatasets can be easily learnt. We experimentally validate our method on various\nmedical image segmentation datasets, each involving multi-label segmentation.\nFurthermore, we demonstrate the efficacy of the proposed method in terms of\nperformance and incremental learning ability vis-a-vis alternative methods.", "split": "arXiv" }, { "text": "The coefficient algebra of a finite-dimensional Lie algebra with respect to a\nfaithful representation is defined as the subalgebra generated by all\ncoefficients of the corresponding characteristic polynomial. We establish a\nconnection between classical invariant theory and the coefficient algebras of\nfinite-dimensional complex Lie algebras. Specifically, we prove that with\nrespect to any symmetric power of the standard representation: (1) the\ncoefficient algebra of the upper triangular solvable complex Lie algebra is\nisomorphic to the algebra of symmetric polynomials; (2) the coefficient algebra\nof the general linear complex Lie algebra is the invariant ring of the general\nlinear group with the conjugacy action on the full space of matrices; and (3)\nthe coefficient algebra of the special linear complex Lie algebra can be\ngenerated by classical trace functions. As an application, we exactly exhibit\nthe characteristic polynomial of the special linear complex Lie algebra.", "split": "arXiv" }, { "text": "Let $M_n$ be the algebra of $n \\times n$ complex matrices and $\\mathcal{T}_n\n\\subseteq M_n$ the corresponding upper-triangular subalgebra. 
In their\ninfluential work, Petek and \\v{S}emrl characterize Jordan automorphisms of\n$M_n$ and $\\mathcal{T}_n$, when $n \\geq 3$, as (injective in the case of\n$\\mathcal{T}_n$) continuous commutativity and spectrum preserving maps $\\phi :\nM_n \\to M_n$ and $\\phi : \\mathcal{T}_n \\to \\mathcal{T}_n$. Recently, in a joint\nwork with Petek, the authors extended this characterization to the maps $\\phi :\n\\mathcal{A} \\to M_n$, where $\\mathcal{A}$ is an arbitrary subalgebra of $M_n$\nthat contains $\\mathcal{T}_n$. In particular, any such map $\\phi$ is a Jordan\nembedding and hence of the form $\\phi(X)=TXT^{-1}$ or $\\phi(X)=TX^tT^{-1}$, for\nsome invertible matrix $T\\in M_n$. In this paper we further extend the\naforementioned results in the context of structural matrix algebras (SMAs),\ni.e. subalgebras $\\mathcal{A}$ of $M_n$ that contain all diagonal matrices.\nMore precisely, we provide both a necessary and sufficient condition for an SMA\n$\\mathcal{A}\\subseteq M_n$ such that any injective continuous commutativity and\nspectrum preserving map $\\phi: \\mathcal{A} \\to M_n$ is necessarily a Jordan\nembedding. In contrast to the previous cases, such maps $\\phi$ no longer need\nto be multiplicative/antimultiplicative, nor rank-one preservers.", "split": "arXiv" }, { "text": "Expanding reinforcement learning (RL) to offline domains generates promising\nprospects, particularly in sectors where data collection poses substantial\nchallenges or risks. Pivotal to the success of transferring RL offline is\nmitigating overestimation bias in value estimates for state-action pairs absent\nfrom data. Whilst numerous approaches have been proposed in recent years, these\ntend to focus primarily on continuous or small-scale discrete action spaces.\nFactorised discrete action spaces, on the other hand, have received relatively\nlittle attention, despite many real-world problems naturally having\nfactorisable actions. In this work, we undertake a formative investigation into\noffline reinforcement learning in factorisable action spaces. Using\nvalue-decomposition as formulated in DecQN as a foundation, we present the case\nfor a factorised approach and conduct an extensive empirical evaluation of\nseveral offline techniques adapted to the factorised setting. In the absence of\nestablished benchmarks, we introduce a suite of our own comprising datasets of\nvarying quality and task complexity. Advocating for reproducible research and\ninnovation, we make all datasets available for public use alongside our code\nbase.", "split": "arXiv" }, { "text": "The prevalence of artificial intelligence-of-things calls for more\nenergy-efficient edge computing paradigms, such as neuromorphic agents\nleveraging brain-inspired spiking neural network (SNN) models based on\nspatiotemporally sparse binary activations. However, the lack of efficient and\nhigh-accuracy deep SNN learning algorithms prevents them from practical edge\ndeployments with a strictly bounded cost. In this paper, we propose a\nspatiotemporal orthogonal propagation (STOP) algorithm to tackle this challenge.\nOur algorithm enables fully synergistic learning of synaptic weights as well as\nfiring thresholds and leakage factors in spiking neurons to improve SNN\naccuracy, while under a unified temporally-forward trace-based framework to\nmitigate the huge memory requirement for storing neural states of all\ntime-steps in the forward pass.
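As an aside for readers unfamiliar with trace-based forward learning, the following minimal Python sketch (not the authors' code) shows the gist for a single leaky integrate-and-fire neuron: a running eligibility trace replaces the stored history of membrane states, so memory stays constant over time. The surrogate spike derivative, the learning rule, and all constants are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a temporally-forward eligibility trace for one LIF neuron.
# The running trace e_w stands in for the full history of membrane states, so
# memory does not grow with the number of time steps.

def surrogate_grad(v, v_th, beta=2.0):
    # smooth stand-in for the derivative of the non-differentiable spike function
    return 1.0 / (1.0 + beta * abs(v - v_th)) ** 2

def train_neuron(inputs, targets, w=0.5, v_th=1.0, leak=0.9, lr=1e-2):
    v, e_w = 0.0, 0.0                      # membrane potential and eligibility trace
    for x, y in zip(inputs, targets):
        v = leak * v + w * x               # leaky integration
        s = 1.0 if v >= v_th else 0.0      # spike
        g = surrogate_grad(v, v_th)
        if s:
            v -= v_th                      # soft reset after a spike
        e_w = leak * e_w + x               # forward-in-time accumulation of dv/dw
        w -= lr * (s - y) * g * e_w        # update uses only the current trace
    return w

rng = np.random.default_rng(0)
x = rng.random(200)
y = (x > 0.7).astype(float)                # toy spike targets
print("trained weight:", train_neuron(x, y))
```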
Characteristically, the spatially-backward\nneuronal errors and temporally-forward traces propagate orthogonally to and\nindependently of each other, substantially reducing computational overhead. Our\nSTOP algorithm obtained high recognition accuracies of 99.53%, 94.84%, 74.92%,\n98.26% and 77.10% on the MNIST, CIFAR-10, CIFAR-100, DVS-Gesture and\nDVS-CIFAR10 datasets with adequate SNNs of intermediate scales from LeNet-5 to\nResNet-18. Compared with other deep SNN training works, our method is more\nplausible for edge intelligent scenarios where resources are limited but\nhigh-accuracy in-situ learning is desired.", "split": "arXiv" }, { "text": "High annotation costs from hiring or crowdsourcing complicate the creation of\nlarge, high-quality datasets needed for training reliable text classifiers.\nRecent research suggests using Large Language Models (LLMs) to automate the\nannotation process, reducing these costs while maintaining data quality. LLMs\nhave shown promising results in annotating downstream tasks like hate speech\ndetection and political framing. Building on the success in these areas, this\nstudy investigates whether LLMs are viable for annotating the complex task of\nmedia bias detection and whether a downstream media bias classifier can be\ntrained on such data. We create annolexical, the first large-scale dataset for\nmedia bias classification with over 48000 synthetically annotated examples. Our\nclassifier, fine-tuned on this dataset, surpasses all of the annotator LLMs by\n5-9 percent in Matthews Correlation Coefficient (MCC) and performs close to or\noutperforms the model trained on human-labeled data when evaluated on two media\nbias benchmark datasets (BABE and BASIL). This study demonstrates how our\napproach significantly reduces the cost of dataset creation in the media bias\ndomain and, by extension, the development of classifiers, while our subsequent\nbehavioral stress-testing reveals some of its current limitations and\ntrade-offs.", "split": "arXiv" }, { "text": "The demand for deploying deep convolutional neural networks (DCNNs) on\nresource-constrained devices for real-time applications remains substantial.\nHowever, existing state-of-the-art structured pruning methods often involve\nintricate implementations, require modifications to the original network\narchitectures, and necessitate an extensive fine-tuning phase. To overcome\nthese challenges, we propose a novel method that, for the first time,\nincorporates the concepts of charge and electrostatic force from physics into\nthe training process of DCNNs. The magnitude of this force is directly\nproportional to the product of the charges of the convolution filter and the\nsource filter, and inversely proportional to the square of the distance between\nthem. We applied this electrostatic-like force to the convolution filters,\neither attracting filters with opposite charges toward non-zero weights or\nrepelling filters with like charges toward zero weights. Consequently, filters\nsubject to repulsive forces have their weights reduced to zero, enabling their\nremoval, while the attractive forces preserve filters with significant weights\nthat retain information. Unlike conventional methods, our approach is\nstraightforward to implement, does not require any architectural modifications,\nand simultaneously optimizes weights and ranks filter importance, all without\nthe need for extensive fine-tuning. 
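The force described above can be made concrete with a small sketch; the snippet below computes a Coulomb-like interaction whose magnitude is the product of two filters' charges divided by the squared distance between them. The charge assignment, the step size, and the way the force is folded into training are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative electrostatic-like force between flattened convolution filters:
# magnitude = q_i * q_j / distance^2, like charges repel, opposite charges attract.

def electrostatic_forces(filters, charges, eps=1e-6):
    """filters: (N, D) flattened conv filters; charges: (N,) signed scalars."""
    n = len(filters)
    forces = np.zeros_like(filters)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = filters[i] - filters[j]                  # points away from filter j
            dist = np.sqrt(np.sum(diff ** 2)) + eps
            magnitude = charges[i] * charges[j] / dist**2   # Coulomb-like magnitude
            forces[i] += magnitude * diff / dist            # >0 repels, <0 attracts
    return forces

rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 27)) * 0.1          # eight 3x3x3 filters, flattened
charges = np.tile([1.0, -1.0], 4)                     # assumed alternating charges
filters += 0.01 * electrostatic_forces(filters, charges)   # one "force" step
print(np.linalg.norm(filters, axis=1))                # filter magnitudes after the nudge
```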
We validated the efficacy of our method on\nmodern DCNN architectures using the MNIST, CIFAR, and ImageNet datasets,\nachieving competitive performance compared to existing structured pruning\napproaches.", "split": "arXiv" }, { "text": "We report the detection of an extreme stellar prominence eruption on the M\ndwarf LAMOST J044431.62+235627.9, observed through time-domain H$\\alpha$\nspectroscopy with the Large Sky Area Multi-Object Fiber Spectroscopic Telescope\n(LAMOST). This prominence eruption was accompanied by a superflare lasting over\n160.4 minutes. The H$\\alpha$ line profile exhibits significant blue-wing\nenhancement during the impulsive phase and near the flare peak, with a\nprojected bulk blueshift velocity of $-228\\pm11$~km~s$^{-1}$ and a maximum\nblueshift velocity reaching $-605\\pm15$~km~s$^{-1}$. Velocity analysis of the\neruptive prominence at various heights above the stellar surface indicates that\nsome of the projected ejection velocities along the line of sight exceed the\ncorresponding escape velocities, suggesting a potential coronal mass ejection\n(CME). The equivalent width (EW) of the H$\\alpha$ blue-wing enhancement in this\neruption appears to be the largest observed to date and is comparable to the EW\nof the H$\\alpha$ line profile during the quiescent phase of the host star. We\nperformed a two-cloud modeling for the prominence and the associated flare,\nwhich suggests that the eruptive prominence has a mass ranging from $1.6 \\times\n10^{19}~\\text{g}$ to $7.2 \\times 10^{19}~\\text{g}$. More importantly, the mass\nratio of the erupting prominence to its host star is the largest among all\nreported stellar prominence eruptions/CMEs.", "split": "arXiv" }, { "text": "Subspace clustering seeks to identify subspaces that segment a set of n data\npoints into k (k<0$. We provide both lower and upper bounds: for\n$\\varepsilon>n^{-1/2}$ we show an exponential query-complexity lower bound. In\ncontrast, when $\\varepsilon< {1}/{k}$ or under a stronger bounded curvature\nassumption, we give constant approximation algorithms.", "split": "arXiv" }, { "text": "In recent years, with the rapid development of augmented reality (AR)\ntechnology, there is an increasing demand for multi-user collaborative\nexperiences. Unlike for single-user experiences, ensuring the spatial\nlocalization of every user and maintaining synchronization and consistency of\npositioning and orientation across multiple users is a significant challenge.\nIn this paper, we propose a multi-user localization system based on ORB-SLAM2\nusing monocular RGB images as a development platform based on the Unity 3D game\nengine. This system not only performs user localization but also places a\ncommon virtual object on a planar surface (such as table) in the environment so\nthat every user holds a proper perspective view of the object. These generated\nvirtual objects serve as reference points for multi-user position\nsynchronization. The positioning information is passed among every user's AR\ndevices via a central server, based on which the relative position and movement\nof other users in the space of a specific user are presented via virtual\navatars all with respect to these virtual objects. 
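The shared-virtual-object synchronization idea lends itself to a short sketch: each device knows its own pose relative to the common anchor, so a remote user's pose reported in the anchor frame can be re-expressed locally by chaining rigid transforms. The frame names and numbers below are made up for illustration.

```python
import numpy as np

# Anchor-based pose synchronization sketch: compose 4x4 homogeneous transforms
# to express a remote user's pose in the local device's frame.

def pose(R, t):
    """Homogeneous 4x4 transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def remote_pose_in_local_frame(T_anchor_from_local, T_anchor_from_remote):
    # (local <- anchor) composed with (anchor <- remote) gives (local <- remote)
    return np.linalg.inv(T_anchor_from_local) @ T_anchor_from_remote

# toy example: remote user 2 m in front of the anchor, local user 1 m to its side
T_anchor_from_local = pose(np.eye(3), np.array([1.0, 0.0, 0.0]))
T_anchor_from_remote = pose(np.eye(3), np.array([0.0, 0.0, 2.0]))
avatar = remote_pose_in_local_frame(T_anchor_from_local, T_anchor_from_remote)
print(avatar[:3, 3])    # where to render the remote user's avatar locally
```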
In addition, we use deep\nlearning techniques to estimate the depth map of an image from a single RGB\nimage to solve occlusion problems in AR applications, making virtual objects\nappear more natural in AR scenes.", "split": "arXiv" }, { "text": "We present a novel digital humanities method for representing our Twitch\nchatters as user embeddings created by a large language model (LLM). We cluster\nthese embeddings automatically using affinity propagation and further narrow\nthis clustering down through manual analysis. We analyze the chat of one stream\nby each Twitch streamer: SmallAnt, DougDoug and PointCrow. Our findings suggest\nthat each streamer has their own type of chatters, however two categories\nemerge for all of the streamers: supportive viewers and emoji and reaction\nsenders. Repetitive message spammers is a shared chatter category for two of\nthe streamers.", "split": "arXiv" }, { "text": "Non-gravitational forces play surprising and, sometimes, centrally important\nroles in shaping the motions and properties of small planetary bodies. In the\nsolar system, the morphologies of comets, the delivery of meteorites and the\nshapes and dynamics of asteroids are all affected by non-gravitational forces.\nAround other stars, non-gravitational forces affect the lifetimes of particles\nand their rates of radial transport within circumstellar disks. Unlike the\ngravitational force, which is a simple function of the well known separations\nand masses of bodies, the non-gravitational forces are frequently functions of\npoorly known or even unmeasurable physical properties. Here, we present\norder-of-magnitude descriptions of non-gravitational forces, with examples of\ntheir application.", "split": "arXiv" }, { "text": "Understanding the performance of electrochemical energy storage systems\nrequires probing the electrochemical properties at each layer and interface\nduring cell operation. While traditional onboard and operando methods can\nmeasure impedance, voltage, or capacity, they lack spatial resolution to\npinpoint the properties to specific layers and interfaces. In this work, we\ndescribe an approach of using thermal waves to measure entropy change,\ntransport resistance, and charge-transfer resistance with depth resolution of a\nfew microns within an electrochemical cell. We achieve this by relating heat\ngeneration at multiple harmonics of an AC current to electrochemical processes\nand leveraging frequency dependence of thermal penetration depth for spatial\nresolution. We name this frequency domain spectroscopy of the thermal\nsignatures of the electrochemical processes measured at multiple harmonics of\nthe alternating current as Multi-harmonic ElectroThermal Spectroscopy (METS).\nThis technique enables isolation and measurement of solvation entropy at\nindividual electrode-electrolyte interfaces from the first harmonic (1{\\omega})\nthermal signature and resolution of the overall interfacial impedance into\ncharge-transfer and interface transport resistance components from the second\nharmonic (2{\\omega}) thermal signature. From this, we also demonstrate an\noperando measurement of the growth of the solid-electrolyte interphase (SEI)\nlayer at the lithium-electrolyte interface and show that two chemically similar\nelectrodes can have significantly different interfacial transport resistance\nbased on the preparation of the electrodes. 
Additionally, the method is not\nspecific to lithium-ion chemistry and can therefore be generalized for all\nelectrochemical systems of interest.", "split": "arXiv" }, { "text": "We establish a fundamental theorem of orders (FTO) which allows us to express\nall orders uniquely as an intersection of `irreducible orders' along which the\nindex and the conductor distribute multiplicatively.\n We define a subclass of Irreducible orders named Pseudo maximal orders. We\nthen consider orders (called Sudo maximal orders) whose decomposition under FTO\ncontains only Pseudo maximal orders. These rings can be seen as being ``close''\nto being maximal (ring of integers) and thus there is a limited number of them\nwith bounded index (by X). We give an upper bound for this quantity. We then\nshow that all polynomials which can be sieved using only the Ekedahl sieve\ncorrespond to Sudo Maximal Orders. We use this understanding to get a weighted\ncount for the number of number fields with fixed degree and bounded\ndiscriminant using the concept of weakly divisible rings.", "split": "arXiv" }, { "text": "Alignment of large language models (LLMs) to societal values should account\nfor pluralistic values from diverse groups. One technique uses in-context\nlearning for inference-time alignment, but only considers similarity when\ndrawing few-shot examples, not accounting for cross-group differences in value\nprioritization. We propose SPICA, a framework for pluralistic alignment that\naccounts for group-level differences during in-context example retrieval. SPICA\nintroduces three designs to facilitate pluralistic alignment: scenario banks,\ngroup-informed metrics, and in-context alignment prompts. From an evaluation of\nSPICA on an alignment task collecting inputs from four demographic groups ($n =\n544$), our metrics retrieve in-context examples that more closely match\nobserved preferences, with the best prompt configuration using multiple\ncontrastive responses to demonstrate examples. In an end-to-end evaluation ($n\n= 80$), we observe that SPICA-aligned models are higher rated than a baseline\nsimilarity-only retrieval approach, with groups seeing up to a +0.16 point\nimprovement on a 5 point scale. Additionally, gains from SPICA were more\nuniform, with all groups benefiting from alignment rather than only some.\nFinally, we find that while a group-agnostic approach can effectively align to\naggregated values, it is not best suited for aligning to divergent groups.", "split": "arXiv" }, { "text": "A fundamental problem in network experiments is selecting an appropriate\nexperimental design in order to precisely estimate a given causal effect of\ninterest. In fact, optimal rates of estimation remain unknown for essentially\nall causal effects in network experiments. In this work, we propose a general\napproach for constructing experiment designs under network interference with\nthe goal of precisely estimating a pre-specified causal effect. A central\naspect of our approach is the notion of a conflict graph, which captures the\nfundamental unobservability associated with the causal effect and the\nunderlying network. We refer to our experimental design as the Conflict Graph\nDesign. In order to estimate effects, we propose a modified Horvitz--Thompson\nestimator. We show that its variance under the Conflict Graph Design is bounded\nas $O(\\lambda(H) / n )$, where $\\lambda(H)$ is the largest eigenvalue of the\nadjacency matrix of the conflict graph.
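To give a feel for the quoted $O(\lambda(H)/n)$ bound, the toy computation below contrasts the largest adjacency eigenvalue of a sparse and a dense conflict graph; the graphs are invented solely to illustrate how the bound scales.

```python
import numpy as np

# lambda(H) is the largest eigenvalue of the conflict graph's adjacency matrix,
# so sparser conflict structure (more "local" effects) gives a smaller bound.

def spectral_radius(adj):
    return float(np.max(np.linalg.eigvalsh(adj)))

n = 100
path = np.zeros((n, n))                    # sparse, chain-like conflicts
for i in range(n - 1):
    path[i, i + 1] = path[i + 1, i] = 1.0
complete = np.ones((n, n)) - np.eye(n)     # every pair of units conflicts

for name, H in [("path-like", path), ("complete", complete)]:
    print(name, "lambda(H)/n =", spectral_radius(H) / n)
```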
These rates depend on both the\nunderlying network and the particular causal effect under investigation. Not\nonly does this yield the best known rates of estimation for several\nwell-studied causal effects (e.g. the global and direct effects) but it also\nprovides new methods for effects which have received less attention from the\nperspective of experiment design (e.g. spill-over effects). Our results\ncorroborate two implicitly understood points in the literature: (1) that in\norder to increase precision, experiment designs should be tailored to specific\ncausal effects of interest and (2) that \"more local\" effects are easier to\nestimate than \"more global\" effects. In addition to point estimation, we\nconstruct conservative variance estimators which facilitate the construction of\nasymptotically valid confidence intervals for the causal effect of interest.", "split": "arXiv" }, { "text": "Distributed phased arrays have recently garnered interest in applications\nsuch as satellite communications and high-resolution remote sensing.\nHigh-performance coherent distributed operations such as distributed\nbeamforming are dependent on the ability to synchronize the spatio-electrical\nstates of the elements in the array to the order of the operational wavelength,\nso that coherent signal summation can be achieved at any arbitrary target\ndestination. In this paper, we address the fundamental challenge of precise\ndistributed array element localization to enable coherent operation, even in\ncomplex environments where the array may not be capable of directly estimating\nall nodal link distances. We employ a two-way time transfer technique to\nsynchronize the nodes of the array and perform internode ranging. We implement\nthe classical multidimensional scaling algorithm to recover a decentralized\narray geometry from a set of range estimates. We also establish the incomplete\nset of range estimates as a multivariable non-convex optimization problem, and\ndefine the differential evolution algorithm which searches the solution space\nto complete the set of ranges. We experimentally demonstrate wireless\nlocalization using a spectrally-sparse pulsed two-tone waveform with 40 MHz\ntone separation in a laboratory environment, achieving a mean localization\nerror vector magnitude of 0.82 mm in an environment with an average link SNR of\n34 dB, theoretically supporting distributed beamforming operation up to 24.3\nGHz.", "split": "arXiv" }, { "text": "Recently, the galaxy matter density 4-point correlation function has been\nused to investigate parity violation in large scale structure surveys. The\n4-point correlation function is the lowest order statistic which is sensitive\nto parity violation, since a tetrahedron is the simplest shape that cannot be\nsuperimposed on its mirror image by a rotation. If the parity violation is\nintrinsic in nature, this could give us a window into inflationary physics.\nHowever, we need to exhaust all other contaminations before we consider the\nsignal to be intrinsic. Even though the standard Newtonian redshift-space\ndistortions are parity symmetric, the full relativistic picture is not.\nTherefore, we expect a parity-odd trispectrum when observing in redshift space.\nWe calculate the trispectrum with the leading-order relativistic effects and\ninvestigate in detail the parameter space of the trispectrum and the effects of\nthese relativistic corrections for different parameter values and configurations.
We\nalso look at different surveys and how the evolution and magnification biases\ncan be affected by different parameter choices.", "split": "arXiv" }, { "text": "While state-of-the-art models for breast cancer detection leverage multi-view\nmammograms for enhanced diagnostic accuracy, they often focus solely on visual\nmammography data. However, radiologists document valuable lesion descriptors\nthat contain additional information that can enhance mammography-based breast\ncancer screening. A key question is whether deep learning models can benefit\nfrom these expert-derived features. To address this question, we introduce a\nnovel multi-modal approach that combines textual BI-RADS lesion descriptors\nwith visual mammogram content. Our method employs iterative attention layers to\neffectively fuse these different modalities, significantly improving\nclassification performance over image-only models. Experiments on the CBIS-DDSM\ndataset demonstrate substantial improvements across all metrics, highlighting\nthe contribution of handcrafted features to end-to-end models.", "split": "arXiv" }, { "text": "We study the classic single-choice prophet secretary problem through a\nresource augmentation lens. Our goal is to bound the $(1-\\epsilon)$-competition\ncomplexity for different classes of online algorithms. This metric asks for the\nsmallest $k$ such that the expected value of the online algorithm on $k$ copies\nof the original instance is at least a $(1 - \\epsilon)$-approximation to the\nexpected offline optimum on the original instance (without added copies).\n We consider four natural classes of online algorithms: single-threshold,\ntime-based threshold, activation-based, and general algorithms. We show that\nfor single-threshold algorithms the $(1-\\epsilon)$-competition complexity is\n$\\Theta(\\ln(\\frac{1}{\\epsilon}))$ (as in the i.i.d. case). Additionally, we\ndemonstrate that time-based threshold and activation-based algorithms (which\ncover all previous approaches for obtaining competitive ratios for the classic\nprophet secretary problem) yield a sub-optimal $(1-\\epsilon)$-competition\ncomplexity of\n$\\Theta\\left(\\frac{\\ln(\\frac{1}{\\epsilon})}{\\ln\\ln(\\frac{1}{\\epsilon})}\\right)$,\nwhich is strictly better than the class of single-threshold algorithms.\nFinally, we find that the $(1-\\epsilon)$-competition complexity of general\nadaptive algorithms is $\\Theta(\\sqrt{\\ln(\\frac{1}{\\epsilon})})$, which is in\nsharp contrast to $\\Theta(\\ln\\ln(\\frac{1}{\\epsilon}))$ in the i.i.d. case.", "split": "arXiv" }, { "text": "Scientific Workflow Systems (SWSs) are advanced software frameworks that\ndrive modern research by orchestrating complex computational tasks and managing\nextensive data pipelines. These systems offer a range of essential features,\nincluding modularity, abstraction, interoperability, workflow composition\ntools, resource management, error handling, and comprehensive documentation.\nUtilizing these frameworks accelerates the development of scientific computing,\nresulting in more efficient and reproducible research outcomes. However,\ndeveloping a user-friendly, efficient, and adaptable SWS poses several\nchallenges. This study explores these challenges through an in-depth analysis\nof interactions on Stack Overflow (SO) and GitHub, key platforms where\ndevelopers and researchers discuss and resolve issues. In particular, we\nleverage topic modeling (BERTopic) to understand the topics SWS developers\ndiscuss on these platforms.
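For readers who want to see what the BERTopic step looks like in practice, here is a minimal sketch using the library's standard API on a synthetic stand-in corpus; the templates, tool names, and model settings are assumptions and do not reproduce the study's pipeline.

```python
from bertopic import BERTopic

# Synthetic stand-in for SO/GitHub posts about workflow systems; the first run
# downloads a sentence-transformer embedding model.
templates = [
    "How do I schedule dependent tasks in {tool}?",
    "{tool} workflow execution fails when a step runs out of memory",
    "What data structure should I use to pass tables between {tool} steps?",
    "Error resolving {tool} dependencies when installing on a cluster",
]
tools = ["Nextflow", "Snakemake", "Galaxy", "Airflow", "CWL"]
docs = [
    templates[i % len(templates)].format(tool=tools[i % len(tools)]) + f" (post {i})"
    for i in range(300)
]

topic_model = BERTopic()                        # default embedding + UMAP + HDBSCAN
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info())             # one row per discovered topic, with keywords
```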
We identified 10 topics developers discuss on SO\n(e.g., Workflow Creation and Scheduling, Data Structures and Operations,\nWorkflow Execution) and found that workflow execution is the most challenging.\nBy analyzing GitHub issues, we identified 13 topics (e.g., Errors and Bug\nFixing, Documentation, Dependencies) and discovered that data structures and\noperations is the most difficult. We also found common topics between SO and\nGitHub, such as data structures and operations, task management, and workflow\nscheduling. Additionally, we categorized each topic by type (How, Why, What,\nand Others). We observed that the How type consistently dominates across all\ntopics, indicating a need for procedural guidance among developers. The\ndominance of the How type is also evident in domains like Chatbots and Mobile\ndevelopment. Our study will guide future research in proposing tools and\ntechniques to help the community overcome the challenges developers face when\ndeveloping SWSs.", "split": "arXiv" }, { "text": "We study the contact geometry of the connected components of the energy\nhypersurface, in the symmetric restricted 3-body problem on $\\mathbb{S}^2$, for\na specific type of motion of the primaries. In particular, we show that these\ncomponents are of contact type for all energies below the first critical value\nand slightly above it. We prove that these components, suitably compactified\nusing a Moser-type regularization are contactomorphic to $R\\mathbb{P}^3$ with\nits unique tight contact structure or to the connected sum of two copies of it,\ndepending on the value of the energy. We exploit Taubes' solution of the\nWeinstein conjecture in dimension three, to infer the existence of periodic\norbits in all these cases.", "split": "arXiv" }, { "text": "Operating Systems enforce logical isolation using abstractions such as\nprocesses, containers, and isolation technologies to protect a system from\nmalicious or buggy code. In this paper, we show new types of side channels\nthrough the file system that break this logical isolation. The file system\nplays a critical role in the operating system, managing all I/O activities\nbetween the application layer and the physical storage device. We observe that\nthe file system implementation is shared, leading to timing leakage when using\ncommon I/O system calls. Specifically, we found that modern operating systems\ntake advantage of any flush operation (which saves cached blocks in memory to\nthe SSD or disk) to flush all of the I/O buffers, even those used by other\nisolation domains. Thus, by measuring the delay of syncfs, the attacker can\ninfer the I/O behavior of victim programs. We then demonstrate a syncfs covert\nchannel attack on multiple file systems, including both Linux native file\nsystems and the Windows file system, achieving a maximum bandwidth of 5 Kbps\nwith an error rate of 0.15% on Linux and 7.6 Kbps with an error rate of 1.9% on\nWindows. In addition, we construct three side-channel attacks targeting both\nLinux and Android devices. On Linux devices, we implement a website\nfingerprinting attack and a video fingerprinting attack by tracking the write\npatterns of temporary buffering files. On Android devices, we design an\napplication fingerprinting attack that leaks application write patterns during\nboot-up. The attacks achieve over 90% F1 score, precision, and recall. 
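The timing primitive underlying the syncfs channel can be sketched in a few lines: repeatedly time a syncfs() call and treat unusually long flushes as evidence that another isolation domain has dirtied buffers. The probe below (Linux with glibc assumed) only illustrates the measurement; thresholds and bit framing are omitted.

```python
import ctypes
import os
import time

# Time syncfs() on a filesystem of interest; long delays indicate that the
# shared flush path had to write back buffers dirtied by other domains.
libc = ctypes.CDLL("libc.so.6", use_errno=True)
fd = os.open("/tmp", os.O_RDONLY)          # any fd on the monitored filesystem

def probe(samples=100, interval=0.01):
    delays = []
    for _ in range(samples):
        t0 = time.perf_counter_ns()
        libc.syncfs(fd)                    # flush all dirty buffers of that filesystem
        delays.append(time.perf_counter_ns() - t0)
        time.sleep(interval)
    return delays

d = probe()
print("median syncfs latency (ns):", sorted(d)[len(d) // 2])
```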
Finally,\nwe demonstrate that these attacks can be exploited across containers\nimplementing a container detection technique and a cross-container covert\nchannel attack.", "split": "arXiv" }, { "text": "Giant radio galaxies (GRGs), a minority among the extended-jetted population,\nform in a wide range of jet and environmental configurations, complicating the\nidentification of the growth factors that facilitate their attainment of\nmegaparsec scales. This study aims to numerically investigate the hypothesized\nformation mechanisms of GRGs extending $\\gtrsim 1$ Mpc to assess their general\napplicability. We employ triaxial ambient medium settings to generate varying\nlevels of jet frustration and simulate jets with low and high power from\ndifferent locations in the environment, formulating five representations. The\nemergence of distinct giant phases in all five simulated scenarios suggests\nthat GRGs may be more common than previously believed, a prediction to be\nverified with contemporary radio telescopes. We find that different\ncombinations of jet morphology, power, and the evolutionary age of the formed\nstructure hold the potential to elucidate different formation scenarios. The\nsimulated lobes are overpressured, prompting further investigation into\npressure profiles when jet activity ceases, potentially distinguishing between\nrelic and active GRGs. We observed a potential phase transition in giant radio\ngalaxies, marked by differences in lobe expansion speed and pressure variations\ncompared to their smaller evolutionary phases. This suggests the need for\nfurther investigation across a broader parameter space to determine if GRGs\nfundamentally differ from smaller RGs. Axial ratio analysis reveals\nself-similar expansion in rapidly propagating jets, with notable deviations\nwhen the jet forms wider lobes. Overall, this study emphasizes that multiple\ngrowth factors at work can better elucidate the current-day population of GRGs,\nincluding scenarios e.g., growth of GRGs in dense environments, GRGs of several\nmegaparsecs, GRG development in low-powered jets, and the formation of X-shaped\nGRGs.", "split": "arXiv" }, { "text": "The generation of complex, large-scale code projects using generative AI\nmodels presents challenges due to token limitations, dependency management, and\niterative refinement requirements. This paper introduces the See-Saw generative\nmechanism, a novel methodology for dynamic and recursive code generation. The\nproposed approach alternates between main code updates and dependency\ngeneration to ensure alignment and functionality. By dynamically optimizing\ntoken usage and incorporating key elements of the main code into the generation\nof dependencies, the method enables efficient and scalable code generation for\nprojects requiring hundreds of interdependent files. The mechanism ensures that\nall code components are synchronized and functional, enabling scalable and\nefficient project generation. Experimental validation demonstrates the method's\ncapability to manage dependencies effectively while maintaining coherence and\nminimizing computational overhead.", "split": "arXiv" }, { "text": "Many computational problems can be modelled as the class of all finite\nrelational structures $\\mathbb A$ that satisfy a fixed first-order sentence\n$\\phi$ hereditarily, i.e., we require that every substructure of $\\mathbb A$\nsatisfies $\\phi$. In this case, we say that the class is in HerFO. 
The problems\nin HerFO are always in coNP, and sometimes coNP-complete. HerFO also contains\nmany interesting computational problems in P, including many constraint\nsatisfaction problems (CSPs). We show that HerFO captures the class of\ncomplements of CSPs for reducts of finitely bounded structures, i.e., every\nsuch CSP is polynomial-time equivalent to the complement of a problem in HerFO.\nHowever, we also prove that HerFO does not have the full computational power of\ncoNP: there are problems in coNP that are not polynomial-time equivalent to a\nproblem in HerFO, unless E=NE. Another main result is a description of the\nquantifier-prefixes for $\\phi$ such that hereditarily checking $\\phi$ is in P;\nwe show that for every other quantifier-prefix there exists a formula $\\phi$\nwith this prefix such that hereditarily checking $\\phi$ is coNP-complete.", "split": "arXiv" }, { "text": "Data contamination presents a critical barrier preventing widespread\nindustrial adoption of advanced software engineering techniques that leverage\ncode language models (CLMs). This phenomenon occurs when evaluation data\ninadvertently overlaps with the public code repositories used to train CLMs,\nseverely undermining the credibility of performance evaluations. For software\ncompanies considering the integration of CLM-based techniques into their\ndevelopment pipeline, this uncertainty about true performance metrics poses an\nunacceptable business risk. Code refactoring, which comprises code\nrestructuring and variable renaming, has emerged as a promising measure to\nmitigate data contamination. It provides a practical alternative to the\nresource-intensive process of building contamination-free evaluation datasets,\nwhich would require companies to collect, clean, and label code created after\nthe CLMs' training cutoff dates. However, the lack of automated code\nrefactoring tools and scientifically validated refactoring techniques has\nhampered widespread industrial implementation. To bridge the gap, this paper\npresents the first systematic study to examine the efficacy of code refactoring\noperators at multiple scales (method-level, class-level, and cross-class level)\nand in different programming languages. In particular, we develop an\nopen-sourced toolkit, CODECLEANER, which includes 11 operators for Python, with\nnine method-level, one class-level, and one cross-class-level operator. A drop\nof 65% overlap ratio is found when applying all operators in CODECLEANER,\ndemonstrating their effectiveness in addressing data contamination.\nAdditionally, we migrate four operators to Java, showing their generalizability\nto another language. We make CODECLEANER online available to facilitate further\nstudies on mitigating CLM data contamination.", "split": "arXiv" }, { "text": "We employ Maximum Likelihood Estimators to examine the Pantheon+ catalogue of\nType Ia supernovae for large scale anisotropies in the expansion rate of the\nUniverse. The analyses are carried out in the heliocentric frame, the CMB\nframe, as well as the Local Group frame. In all frames, the Hubble expansion\nrate in the redshift range 0.023 < z < 0.15 is found to have a statistically\nsignificant dipolar variation exceeding 1.5 km/s/Mpc, i.e. bigger than the\nclaimed 1% uncertainty in the SH0ES measurement of the Hubble parameter H_0.\nThe deceleration parameter too has a redshift-dependent dipolar modulation at\n>5 sigma significance, consistent with previous findings using the SDSSII/SNLS3\nJoint Lightcurve Analysis catalogue. 
The inferred cosmic acceleration cannot\ntherefore be due to a Cosmological Constant, but is probably an apparent\n(general relativistic) effect due to the anomalous bulk flow in our local\nUniverse.", "split": "arXiv" }, { "text": "We present a unified controllable video generation approach AnimateAnything\nthat facilitates precise and consistent video manipulation across various\nconditions, including camera trajectories, text prompts, and user motion\nannotations. Specifically, we carefully design a multi-scale control feature\nfusion network to construct a common motion representation for different\nconditions. It explicitly converts all control information into frame-by-frame\noptical flows. Then we incorporate the optical flows as motion priors to guide\nfinal video generation. In addition, to reduce the flickering issues caused by\nlarge-scale motion, we propose a frequency-based stabilization module. It can\nenhance temporal coherence by ensuring the video's frequency domain\nconsistency. Experiments demonstrate that our method outperforms the\nstate-of-the-art approaches. For more details and videos, please refer to the\nwebpage: https://yu-shaonian.github.io/Animate_Anything/.", "split": "arXiv" }, { "text": "Given the inherent non-stationarity prevalent in real-world applications,\ncontinual Reinforcement Learning (RL) aims to equip the agent with the\ncapability to address a series of sequentially presented decision-making tasks.\nWithin this problem setting, a pivotal challenge revolves around the\n\\textit{catastrophic forgetting} issue, wherein the agent is prone to\neffortlessly erode the decisional knowledge associated with past encountered\ntasks when learning the new one. In recent progress, \\textit{generative\nreplay} methods have showcased substantial potential by employing generative\nmodels to replay the data distribution of past tasks. Compared to storing the data\nfrom past tasks directly, this category of methods circumvents the growing\nstorage overhead and possible data privacy concerns. However, constrained by\nthe expressive capacity of generative models, existing \\textit{generative\nreplay} methods face challenges in faithfully reconstructing the data\ndistribution of past tasks, particularly in scenarios with a myriad of tasks or\nhigh-dimensional data. Inspired by the success of diffusion models in various\ngenerative tasks, this paper introduces a novel continual RL algorithm DISTR\n(Diffusion-based Trajectory Replay) that employs a diffusion model to memorize\nthe high-return trajectory distribution of each encountered task and wakes up\nthese distributions during policy learning on new tasks. In addition,\nconsidering the impracticality of replaying all past data each time, a\nprioritization mechanism is proposed to prioritize the trajectory replay of\npivotal tasks in our method. Empirical experiments on the popular continual RL\nbenchmark \\texttt{Continual World} demonstrate that our proposed method obtains\na favorable balance between \\textit{stability} and \\textit{plasticity},\nsurpassing various existing continual RL baselines in average success rate.", "split": "arXiv" }, { "text": "Taking inspiration from [1, 21, 24], we develop a general framework to deal\nwith the model theory of open incidence structures. In this first paper we\nfocus on the study of systems of points and lines (rank $2$).
This has a number\nof applications; in particular, we show that for any of the following classes\nall the non-degenerate free structures are elementarily equivalent, and their\ncommon theory is decidable, strictly stable, and has no prime model: $(k,\nn)$-Steiner systems (for $2 \\leq k < n$); generalised $n$-gons (for $n \\geq\n3$); $k$-nets (for $k \\geq 3$); affine planes; projective M\\\"obius, Laguerre\nand Minkowski planes.", "split": "arXiv" }, { "text": "Consider a trade market with one seller and multiple buyers. The seller aims\nto sell an indivisible item and maximize their revenue. This paper focuses on a\nsimple and popular mechanism--the fixed-price mechanism. Unlike the standard\nsetting, we assume there is information asymmetry between buyers and the\nseller. Specifically, we allow the seller to design information before setting\nthe fixed price, which implies that we study the mechanism design problem in a\nbroader space. We call this mechanism space the fixed-price signaling\nmechanism.\n We assume that buyers' valuation of the item depends on the quality of the\nitem. The seller can privately observe the item's quality, whereas buyers only\nsee its distribution. In this case, the seller can influence buyers' valuations\nby strategically disclosing information about the item's quality, thereby\nadjusting the fixed price. We consider two types of buyers with different\nlevels of rationality: ex-post individual rational (IR) and ex-interim\nindividual rational. We show that when the market has only one buyer, the\noptimal revenue generated by the fixed-price signaling mechanism is identical\nto that of the fixed-price mechanism, regardless of the level of rationality.\nFurthermore, when there are multiple buyers in the market and all of them are\nex-post IR, we show that there is no fixed-price mechanism that is obedient for\nall buyers. However, if all buyers are ex-interim IR, we show that the optimal\nfixed-price signaling mechanism will generate more revenue for the seller than\nthe fixed-price mechanism.", "split": "arXiv" }, { "text": "We conduct a systematic investigation of the role of Hubbard U corrections in\nelectronic structure calculations of two-dimensional (2D) materials containing\n3d transition metals. Specifically, we use density functional theory (DFT) with\nthe PBE and PBE+U approximations to calculate the crystal structure, band gaps,\nand magnetic parameters of 638 monolayers. Based on a comprehensive comparison\nto experiments we first establish that inclusion of the U correction worsens\nthe accuracy for the lattice constant. Consequently, PBE structures are used\nfor subsequent property evaluations. The band gaps show significant dependence\non the U-parameter. In particular, for 134 (21%) of the materials the U\nparameter leads to a metal-insulator transition. For the magnetic materials we\ncalculate the magnetic moment, magnetic exchange coupling, and magnetic\nanisotropy parameters. In contrast to the band gaps, the size of the magnetic\nmoments shows only weak dependence on U. Both the exchange energies and\nmagnetic anisotropy parameters are systematically reduced by the U correction.\nOn this basis we conclude that the Hubbard U correction will lead to lower\npredicted Curie temperatures in 2D materials.
All the calculated properties are\navailable in the Computational 2D Materials Database (C2DB).", "split": "arXiv" }, { "text": "Cardiovascular magnetic resonance (CMR) imaging is the gold standard for\ndiagnosing several heart diseases due to its non-invasive nature and proper\ncontrast. MR imaging is time-consuming because of signal acquisition and image\nformation issues. Prolonging the imaging process can result in the appearance\nof artefacts in the final image, which can affect the diagnosis. It is possible\nto speed up CMR imaging using image reconstruction based on deep learning. For\nthis purpose, the high-quality clinical interpretable images can be\nreconstructed by acquiring highly undersampled k-space data, that is only\npartially filled, and using a deep learning model. In this study, we proposed a\nstepwise reconstruction approach based on the Patch-GAN structure for highly\nundersampled k-space data compatible with the multi-contrast nature, various\nanatomical views and trajectories of CMR imaging. The proposed approach was\nvalidated using the CMRxRecon2024 challenge dataset and outperformed previous\nstudies. The structural similarity index measure (SSIM) values for the first\nand second tasks of the challenge are 99.07 and 97.99, respectively. This\napproach can accelerate CMR imaging to obtain high-quality images, more\naccurate diagnosis and a pleasant patient experience.", "split": "arXiv" }, { "text": "We study the following one-dimensional cubic nonlinear Schr\\\"{o}dinger\nsystem: \\[ u_i''+2\\Big(\\sum_{k=1}^Nu_k^2\\Big)u_i=-\\mu_iu_i \\ \\,\\ \\mbox{in}\\, \\\n\\mathbb{R} , \\ \\ i=1, 2, \\cdots, N, \\] where\n$\\mu_1\\leq\\mu_2\\leq\\cdots\\leq\\mu_N<0$ and $N\\ge 2$. In this paper, we mainly\nfocus on the case $N=3$ and prove the following results: (i). The solutions of\nthe system can be completely classified; (ii). Depending on the explicit values\nof $\\mu_1\\leq\\mu_2\\leq\\mu_3<0$, there exist two different classes of normalized\nsolutions $u=(u_1, u_2, u_3)$ satisfying $\\int _{R}u_i^2dx=1$ for all $i=1, 2,\n3$, which are completely different from the case $N=2$; (iii). The linearized\noperator at any nontrivial solution of the system is non-degenerate. The\nconjectures on the explicit classification and nondegeneracy of solutions for\nthe system are also given for the case $N>3$. These address the questions of\n[R. Frank, D. Gontier and M. Lewin, CMP, 2021], where the complete\nclassification and uniqueness results for the system were already proved for\nthe case $N=2$.", "split": "arXiv" }, { "text": "Chest X-rays (CXRs) often display various diseases with disparate class\nfrequencies, leading to a long-tailed, multi-label data distribution. In\nresponse to this challenge, we explore the Pruned MIMIC-CXR-LT dataset, a\ncurated collection derived from the MIMIC-CXR dataset, specifically designed to\nrepresent a long-tailed and multi-label data scenario. We introduce LTCXNet, a\nnovel framework that integrates the ConvNeXt model, ML-Decoder, and strategic\ndata augmentation, further enhanced by an ensemble approach. We demonstrate\nthat LTCXNet improves the performance of CXR interpretation across all classes,\nespecially enhancing detection in rarer classes like `Pneumoperitoneum' and\n`Pneumomediastinum' by 79\\% and 48\\%, respectively. 
Beyond performance metrics,\nour research extends into evaluating fairness, highlighting that some methods,\nwhile improving model accuracy, could inadvertently affect fairness across\ndifferent demographic groups negatively. This work contributes to advancing the\nunderstanding and management of long-tailed, multi-label data distributions in\nmedical imaging, paving the way for more equitable and effective diagnostic\ntools.", "split": "arXiv" }, { "text": "A popular poster from Myanmar lists food pairings that should be avoided,\nsometimes at all costs. Coconut and honey taken together, for example, are\nbelieved to cause nausea, while pork and curdled milk will induce diarrhea.\nWorst of all, according to the poster, many seemingly innocuous combinations\nthat include jelly and coffee, beef and star fruit, or pigeon and pumpkin, are\nlikely to kill the unwary consumer. But why are these innocuous combinations\nconsidered dangerous, even fatal? The answer is relevant, not just to food\nbeliefs, but to social beliefs of many kinds. Here we describe the prevalence\nof food combination superstitions, and an opinion formation model simulating\ntheir emergence and fixation. We find that such food norms are influenced, not\njust by actual risks, but also by strong forces of cultural learning that can\ndrive and lock in arbitrary rules, even in the face of contrary evidence.", "split": "arXiv" }, { "text": "Various linear complexity models, such as Linear Transformer (LinFormer),\nState Space Model (SSM), and Linear RNN (LinRNN), have been proposed to replace\nthe conventional softmax attention in Transformer structures. However, the\noptimal design of these linear models is still an open question. In this work,\nwe attempt to answer this question by finding the best linear approximation to\nsoftmax attention from a theoretical perspective. We start by unifying existing\nlinear complexity models as the linear attention form and then identify three\nconditions for the optimal linear attention design: 1) Dynamic memory ability;\n2) Static approximation ability; 3) Least parameter approximation. We find that\nnone of the current linear models meet all three conditions, resulting in\nsuboptimal performance. Instead, we propose Meta Linear Attention (MetaLA) as a\nsolution that satisfies these conditions. Our experiments on Multi-Query\nAssociative Recall (MQAR) task, language modeling, image classification, and\nLong-Range Arena (LRA) benchmark demonstrate that MetaLA is more effective than\nthe existing linear models.", "split": "arXiv" }, { "text": "We developed a shoe-mounted gait monitoring system capable of tracking up to\n17 gait parameters, including gait length, step time, stride velocity, and\nothers. The system employs a stereo camera mounted on one shoe to track a\nmarker placed on the opposite shoe, enabling the estimation of spatial gait\nparameters. Additionally, a Force Sensitive Resistor (FSR) affixed to the heel\nof the shoe, combined with a custom-designed algorithm, is utilized to measure\ntemporal gait parameters. Through testing on multiple participants and\ncomparison with the gait mat, the proposed gait monitoring system exhibited\nnotable performance, with the accuracy of all measured gait parameters\nexceeding 93.61%. The system also demonstrated a low drift of 4.89% during\nlong-distance walking. 
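A minimal sketch of how a temporal gait parameter can be derived from the heel-mounted FSR: detect heel strikes as upward threshold crossings and take the interval between consecutive strikes as the step time. The threshold, sampling rate, and synthetic signal are assumptions, not the authors' algorithm.

```python
import numpy as np

# Heel strikes as rising threshold crossings of the FSR signal; the time
# between consecutive strikes gives a step-time estimate.

def step_times(fsr, fs=100.0, threshold=0.5):
    above = fsr > threshold
    strikes = np.flatnonzero(~above[:-1] & above[1:]) + 1   # rising-edge samples
    return np.diff(strikes) / fs                            # seconds per step

fs = 100.0
t = np.arange(0, 10, 1 / fs)
fsr = (np.sin(2 * np.pi * 0.9 * t) > 0.3).astype(float)     # fake ~0.9 Hz gait signal
print("mean step time (s):", step_times(fsr, fs).mean())
```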
A gait identification task conducted on participants\nusing a trained Transformer model achieved 95.7% accuracy on the dataset\ncollected by the proposed system, demonstrating that our hardware has the\npotential to collect long-sequence gait data suitable for integration with\ncurrent Large Language Models (LLMs). The system is cost-effective,\nuser-friendly, and well-suited for real-life measurements.", "split": "arXiv" }, { "text": "Finding ground state solutions of diagonal Hamiltonians is relevant for both\ntheoretical as well as practical problems of interest in many domains such as\nfinance, physics and computer science. These problems are typically very hard\nto tackle by classical computing and quantum computing could help in speeding\nup computations and efficiently tackling larger problems. Here we use imaginary\ntime evolution through a new block encoding scheme to obtain the ground state\nof such problems and apply our method to MaxCut as an illustration. Our method,\nwhich for simplicity we call ITE-BE, requires no variational parameter\noptimization as all the parameters in the procedure are expressed as analytical\nfunctions of the couplings of the Hamiltonian. We demonstrate that our method\ncan be successfully combined with other quantum algorithms such as quantum\napproximate optimization algorithm (QAOA). We find that the QAOA ansatz\nincreases the post-selection success of ITE-BE, and shallow QAOA circuits, when\nboosted with ITE-BE, achieve better performance than deeper QAOA circuits. For\nthe special case of the transverse initial state, we adapt our block encoding\nscheme to allow for a deterministic application of the first layer of the\ncircuit.", "split": "arXiv" }, { "text": "Gamma-ray bursts (GRBs) are intense pulses of high-energy emission associated\nwith massive stars' death or compact objects' coalescence. Their\nmulti-wavelength observations help verify the reliability of the standard\nfireball model. We analyze 14 GRBs observed contemporaneously in gamma-rays by\nthe \\textit{Fermi} Large Area Telescope (LAT), in X-rays by the \\textit{Swift}\nTelescope, and in the optical bands by \\textit{Swift} and many ground-based\ntelescopes. We study the correlation between the spectral and temporal indices\nusing closure relations according to the synchrotron forward-shock model in the\nstratified medium ($n \\propto r^{-k}$) with $k$ ranging from 0 to 2.5. We find\nthat the model without energy injection is preferred over the one with energy\ninjection in all the investigated wavelengths. In gamma-rays, we only explored\nthe $\\nu > $ max\\{$\\nu_c,\\nu_m$\\} (SC/FC) cooling condition (where $\\nu_c$ and\n$\\nu_m$ are the cooling and characteristic frequencies, namely the frequencies\nat the spectral break). In the X-ray and optical bands, we explored all the\ncooling conditions, including $\\nu_m < \\nu < \\nu_c$ (SC), $\\nu_c < \\nu < \\nu_m$\n(FC), and SC/FC, and found a clear preference for SC for X-rays and SC/FC for\noptical. Within these cooling conditions, X-rays exhibit the highest rate of\noccurrence for the density profile with $k = 0$, while the optical band has the\nhighest occurrence for $k$ = 2.5 when considering no energy injection. Although\nwe can pinpoint a definite environment for some GRBs, we find degeneracies in\nother GRBs.", "split": "arXiv" }, { "text": "We explore Mahler numbers originating from functions $f(z)$ that satisfy the\nfunctional equation $f(z) = (A(z)f(z^d) + C(z))/B(z)$. 
A procedure to compute\nthe irrationality exponents of such numbers is developed using continued\nfractions for formal Laurent series, and the form of all such irrationality\nexponents is investigated. This serves to extend Dmitry Badziahin's paper, On\nthe Spectrum of Irrationality Exponents of Mahler Numbers, where he does the\nsame under the condition that $C(z) = 0$. Furthermore, we cover the required\nbackground of continued fractions in detail for unfamiliar readers. This essay\nwas submitted as a thesis in the Pure Mathematics Honours program at the\nUniversity of Sydney.", "split": "arXiv" }, { "text": "The management of type 1 diabetes has been revolutionized by the artificial\npancreas system (APS), which automates insulin delivery based on continuous\nglucose monitor (CGM). While conventional closed-loop systems rely on CGM data,\nwhich leads to higher energy consumption at the sensors and increased data\nredundancy in the underlying communication network. In contrast, this paper\nproposes a self-triggered control mechanism that can potentially achieve lower\nlatency and energy efficiency. The model for the APS consists of a state and\ninput-constrained dynamical system affected by exogenous meal disturbances. Our\nself-triggered mechanism relies on restricting the state evolution within the\nrobust control invariant of such a system at all times. To that end, using\ntools from reachability, we associate a safe time interval with such invariant\nsets, which denotes the maximum time for which the invariant set remains\ninvariant, even without transmission of CGM data at all times.", "split": "arXiv" }, { "text": "Image restoration models often face the simultaneous interaction of multiple\ndegradations in real-world scenarios. Existing approaches typically handle\nsingle or composite degradations based on scene descriptors derived from text\nor image embeddings. However, due to the varying proportions of different\ndegradations within an image, these scene descriptors may not accurately\ndifferentiate between degradations, leading to suboptimal restoration in\npractical applications. To address this issue, we propose a novel\nTransformer-based restoration framework, AllRestorer. In AllRestorer, we enable\nthe model to adaptively consider all image impairments, thereby avoiding errors\nfrom scene descriptor misdirection. Specifically, we introduce an All-in-One\nTransformer Block (AiOTB), which adaptively removes all degradations present in\na given image by modeling the relationships between all degradations and the\nimage embedding in latent space. To accurately address different variations\npotentially present within the same type of degradation and minimize ambiguity,\nAiOTB utilizes a composite scene descriptor consisting of both image and text\nembeddings to define the degradation. Furthermore, AiOTB includes an adaptive\nweight for each degradation, allowing for precise control of the restoration\nintensity. By leveraging AiOTB, AllRestorer avoids misdirection caused by\ninaccurate scene descriptors, achieving a 5.00 dB increase in PSNR compared to\nthe baseline on the CDD-11 dataset.", "split": "arXiv" }, { "text": "The Gutzwiller trace formula establishes a profound connection between the\nquantum spectrum and classical periodic orbits. However, its application is\nlimited by its reliance on the semiclassical saddle point approximation. 
In\nthis work, we explore the full quantum version of the trace formula using the\nLefschetz thimble method by incorporating complexified periodic orbits. Upon\ncomplexification, classical real periodic orbits are transformed into cycles on\ncompact Riemann surfaces. Our key innovation lies in the simultaneous\ncomplexification of the periods of cycles, resulting in a fully quantum trace\nformula that accounts for all contributions classified by the homology classes\nof the associated Riemann surfaces. This formulation connects the quantum\nspectrum to contributions across all complex time directions, encompassing all\nrelevant homology classes. Our approach naturally unifies and extends two\nestablished methodologies: periodic orbits in real time, as in Gutzwiller's\noriginal work, and quantum tunneling in imaginary time, as in the instanton\nmethod.", "split": "arXiv" }, { "text": "In this paper, we analyze the feature-based knowledge distillation for\nrecommendation from the frequency perspective. By defining knowledge as\ndifferent frequency components of the features, we theoretically demonstrate\nthat regular feature-based knowledge distillation is equivalent to equally\nminimizing losses on all knowledge and further analyze how this equal loss\nweight allocation method leads to important knowledge being overlooked. In\nlight of this, we propose to emphasize important knowledge by redistributing\nknowledge weights. Furthermore, we propose FreqD, a lightweight knowledge\nreweighting method, to avoid the computational cost of calculating losses on\neach knowledge. Extensive experiments demonstrate that FreqD consistently and\nsignificantly outperforms state-of-the-art knowledge distillation methods for\nrecommender systems. Our code is available at\n\\url{https://anonymous.4open.science/r/FreqKD/}", "split": "arXiv" }, { "text": "Federated learning (FL) is vulnerable to model poisoning attacks due to its\ndistributed nature. The current defenses start from all user gradients (model\nupdates) in each communication round and solve for the optimal aggregation\ngradients (horizontal solution). This horizontal solution will completely fail\nwhen facing large-scale (>50%) model poisoning attacks. In this work, based on\nthe key insight that the convergence process of the model is a highly\npredictable process, we break away from the traditional horizontal solution of\ndefense and innovatively transform the problem of solving the optimal\naggregation gradients into a vertical solution problem. We propose VERT, which\nuses global communication rounds as the vertical axis, trains a predictor using\nhistorical gradients information to predict user gradients, and compares the\nsimilarity with actual user gradients to precisely and efficiently select the\noptimal aggregation gradients. In order to reduce the computational complexity\nof VERT, we design a low dimensional vector projector to project the user\ngradients to a computationally acceptable length, and then perform subsequent\npredictor training and prediction tasks. Exhaustive experiments show that VERT\nis efficient and scalable, exhibiting excellent large-scale (>=80%) model\npoisoning defense effects under different FL scenarios. 
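The vertical-defense idea can be sketched compactly: project user updates to a low dimension, predict the expected update from previous rounds, and keep only updates similar to the prediction. In the sketch below the random projector, the exponential-moving-average predictor, and the similarity threshold are simplifying assumptions; the paper trains a learned predictor instead.

```python
import numpy as np

rng = np.random.default_rng(0)
proj = rng.standard_normal((10_000, 64)) / np.sqrt(64)   # random low-dimensional projector

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class VerticalFilter:
    def __init__(self, alpha=0.5, tau=0.2):
        self.pred, self.alpha, self.tau = None, alpha, tau

    def select(self, user_grads):
        z = user_grads @ proj                       # project updates to 64 dims
        if self.pred is None:                       # first round: accept everyone
            keep = np.arange(len(z))
        else:                                       # keep updates similar to prediction
            sims = np.array([cosine(zi, self.pred) for zi in z])
            keep = np.flatnonzero(sims > self.tau)
        agg = z[keep].mean(axis=0) if len(keep) else z.mean(axis=0)
        # EMA over aggregated history stands in for the trained gradient predictor
        self.pred = agg if self.pred is None else self.alpha * self.pred + (1 - self.alpha) * agg
        return keep

benign = rng.standard_normal((20, 10_000)) * 0.01 + 0.05   # similar benign updates
f = VerticalFilter()
f.select(benign)                       # clean round bootstraps the predictor
mixed = benign.copy()
mixed[:5] -= 1.0                       # five crude "poisoned" updates in the next round
print("kept indices:", f.select(mixed))
```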
In addition, we can\ndesign projectors with different structures for different model structures to\nadapt to aggregation servers with different computing power.", "split": "arXiv" }, { "text": "As research on Multimodal Large Language Models (MLLMs) becomes popular,\nan advanced MLLM is typically required to handle various textual and\nvisual tasks (e.g., VQA, Detection, OCR, and ChartQA) simultaneously for\nreal-world applications. However, due to the significant differences in\nrepresentation and distribution among data from various tasks, simply mixing\ndata of all tasks together leads to the well-known ``multi-task conflict\" issue,\nresulting in performance degradation across various tasks. To address this\nissue, we propose Awaker2.5-VL, a Mixture of Experts~(MoE) architecture\nsuitable for MLLMs, which acquires multi-task capabilities through multiple\nsparsely activated experts. To speed up the training and inference of\nAwaker2.5-VL, each expert in our model is devised as a low-rank adaptation\n(LoRA) structure. Extensive experiments on multiple recent benchmarks\ndemonstrate the effectiveness of Awaker2.5-VL. The code and model weights are\nreleased on our Project Page: https://github.com/MetabrainAGI/Awaker.", "split": "arXiv" }, { "text": "The discovery of the Dead Sea Scrolls over 60 years ago is widely regarded as\none of the greatest archaeological breakthroughs in modern history. Recent\nstudy of the scrolls presents ongoing computational challenges, including\ndetermining the provenance of fragments, clustering fragments based on their\ndegree of similarity, and pairing fragments that originate from the same\nmanuscript -- all tasks that require focusing on individual letter and fragment\nshapes. This paper presents a computational method for segmenting ink and\nparchment regions in multispectral images of Dead Sea Scroll fragments. Using\nthe newly developed Qumran Segmentation Dataset (QSD) consisting of 20\nfragments, we apply multispectral thresholding to isolate ink and parchment\nregions based on their unique spectral signatures. To refine segmentation\naccuracy, we introduce an energy minimization technique that leverages ink\ncontours, which are more distinguishable from the background and less noisy\nthan inner ink regions. Experimental results demonstrate that this\nMultispectral Thresholding and Energy Minimization (MTEM) method achieves\nsignificant improvements over traditional binarization approaches like Otsu and\nSauvola in parchment segmentation and is successful at delineating ink borders,\nin distinction from holes and background regions.", "split": "arXiv" }, { "text": "Large Language Models (LLMs) have revolutionized natural language processing\nby unifying tasks into text generation, yet their large parameter sizes and\nautoregressive nature limit inference speed. SAM-Decoding addresses this by\nintroducing a novel retrieval-based speculative decoding method that uses a\nsuffix automaton for efficient and accurate draft generation. Unlike the n-gram\nmatching used by existing methods, SAM-Decoding finds the longest suffix\nmatch in the generated text and the text corpus, achieving an average time complexity\nof $O(1)$ per generation step. SAM-Decoding constructs static and dynamic\nsuffix automata for the text corpus and input prompts, respectively, enabling\nfast and precise draft generation. 
Meanwhile, it is designed as an approach\nthat can be combined with existing methods, allowing SAM-Decoding to adaptively\nselect a draft generation strategy based on the matching length, thus\nincreasing the inference speed of the LLM. When combined with Token Recycling,\nevaluations show SAM-Decoding outperforms existing model-free methods,\nachieving a speedup of $2.27\\times$ over autoregressive decoding on Spec-Bench.\nWhen combined with EAGLE2, it reaches a speedup of $2.49\\times$, surpassing all\ncurrent approaches. Our code is available at\nhttps://github.com/hyx1999/SAM-Decoding.", "split": "arXiv" }, { "text": "Latency is a major concern for web rendering engines like those in Chrome,\nSafari, and Firefox. These engines reduce latency by using an incremental\nlayout algorithm to redraw the page when the user interacts with it. In such an\nalgorithm, elements that change frame-to-frame are marked dirty; only the dirty\nelements need be processed to draw the next frame, dramatically reducing\nlatency. However, the standard incremental layout algorithm must search the\npage for dirty elements, accessing a number of auxiliary elements in the\nprocess. These auxiliary elements add cache misses and stalled cycles, and are\nresponsible for a sizable fraction of all layout latency. We introduce a new,\nfaster incremental layout algorithm called Spineless Traversal. Spineless\nTraversal uses a more computationally demanding priority queue algorithm to\navoid the need to access auxiliary nodes and thus reduces cache traffic and\nstalls. This leads to dramatic speedups on the most latency-critical\ninteractions such as hovering, typing, or animations. Moreover, thanks to\nnumerous low-level optimizations, we are able to make Spineless Traversal\ncompetitive across the whole spectrum of incremental layout workloads. As a\nresult, across 2216 benchmarks, Spineless Traversal is faster on 78.2% of the\nbenchmark, with a mean speedup of 3.23x concentrated in the most\nlatency-critical interactions such as hovering, typing, and animations.", "split": "arXiv" }, { "text": "As technology advances, conceptualizations of effective strategies for\nteaching and learning shift. Due in part to their facilitation of unique\naffordances for learning, mobile devices, augmented reality, and games are all\nbecoming more prominent elements in learning environments. In this work, we\nexamine mobile augmented reality serious games (MARSGs) as the intersection of\nthese technology-based experiences and to what effect their combination can\nyield even greater learning outcomes. We present a PRISMA review of 23 papers\n(from 610) spanning the entire literature timeline from 2002 to 2023. Among\nthese works, there is wide variability in the realized application of game\nelements and pedagogical theories underpinning the game experience. For an\neducational tool to be effective, it must be designed to facilitate learning\nwhile anchored by pedagogical theory. Given that most MARSG developers are not\npedagogical experts, this review further provides design considerations\nregarding which game elements might proffer the best of three major pedagogical\ntheories for modern learning (cognitive constructivism, social constructivism,\nand behaviorism) based on existing applications. 
We will also briefly touch on\nradical constructivism and the instructional elements embedded within MARSGs.\nLastly, this work offers a synthesis of current MARSG findings and extended\nfuture directions for MARSG development.", "split": "arXiv" }, { "text": "Generalizations of plain strings have been proposed as a compact way to\nrepresent a collection of nearly identical sequences or to express uncertainty\nat specific text positions by enumerating all possibilities. While a plain\nstring stores a character at each of its positions, generalizations consider a\nset of characters (indeterminate strings), a set of strings of equal length\n(generalized degenerate strings, or shortly GD strings), or a set of strings of\narbitrary lengths (elastic-degenerate strings, or shortly ED strings). These\ngeneralizations are of importance to compactly represent such type of data, and\nfind applications in bioinformatics for representing and maintaining a set of\ngenetic sequences of the same taxonomy or a multiple sequence alignment. To be\nof use, attention has been drawn to answering various query types such as\npattern matching or measuring similarity of ED strings by generalizing\ntechniques known to plain strings. However, for some types of queries, it has\nbeen shown that a generalization of a polynomial-time solvable query on classic\nstrings becomes NP-hard on ED strings, e.g. [Russo et al.,2022]. In that light,\nwe wonder about other types of queries, which are of particular interest to\nbioinformatics: the search for the longest repeating factor, unique substrings,\nabsent words, anti-powers, and longest previous factors. While we obtain a\npolynomial time algorithm for the first problem on ED strings, we show that all\nothers are NP-hard to compute, some of them even under the restriction that the\ninput can be modelled as an indeterminate or GD string.", "split": "arXiv" }, { "text": "The practical applications of Wasserstein distances (WDs) are constrained by\ntheir sample and computational complexities. Sliced-Wasserstein distances\n(SWDs) provide a workaround by projecting distributions onto one-dimensional\nsubspaces, leveraging the more efficient, closed-form WDs for one-dimensional\ndistributions. However, in high dimensions, most random projections become\nuninformative due to the concentration of measure phenomenon. Although several\nSWD variants have been proposed to focus on \\textit{informative} slices, they\noften introduce additional complexity, numerical instability, and compromise\ndesirable theoretical (metric) properties of SWD. Amidst the growing literature\nthat focuses on directly modifying the slicing distribution, which often face\nchallenges, we revisit the classical Sliced-Wasserstein and propose instead to\nrescale the 1D Wasserstein to make all slices equally informative. Importantly,\nwe show that with an appropriate data assumption and notion of \\textit{slice\ninformativeness}, rescaling for all individual slices simplifies to \\textbf{a\nsingle global scaling factor} on the SWD. This, in turn, translates to the\nstandard learning rate search for gradient-based learning in common machine\nlearning workflows. We perform extensive experiments across various machine\nlearning tasks showing that the classical SWD, when properly configured, can\noften match or surpass the performance of more complex variants. 
We then answer\nthe following question: \"Is Sliced-Wasserstein all you need for common learning\ntasks?\"", "split": "arXiv" }, { "text": "In wireless communications, efficient image transmission must balance\nreliability, throughput, and latency, especially under dynamic channel\nconditions. This paper presents an adaptive and progressive pipeline for\nlearned image compression (LIC)-based architectures tailored to such\nenvironments. We investigate two state-of-the-art learning-based models: the\nhyperprior model and Vector Quantized Generative Adversarial Network (VQGAN).\nThe hyperprior model achieves superior compression performance through lossless\ncompression in the bottleneck but is susceptible to bit errors, necessitating\nthe use of error correction or retransmission mechanisms. In contrast, the\nVQGAN decoder demonstrates robust image reconstruction capabilities even in the\nabsence of channel coding, enhancing reliability in challenging transmission\nscenarios. We propose progressive versions of both models, enabling partial\nimage transmission and decoding under imperfect channel conditions. This\nprogressive approach not only maintains image integrity under poor channel\nconditions but also significantly reduces latency by allowing immediate partial\nimage availability. We evaluate our pipeline using the Kodak high-resolution\nimage dataset under a Rayleigh fading wireless channel model simulating dynamic\nconditions. The results indicate that the progressive transmission framework\nenhances reliability and reduces latency while maintaining or improving throughput\ncompared to non-progressive counterparts across various Signal-to-Noise Ratio\n(SNR) levels. Specifically, the progressive-hyperprior model consistently\noutperforms others in latency metrics, particularly in the 99.9th percentile\nwaiting time (a measure indicating the maximum waiting time experienced by 99.9%\nof transmission instances) across all SNRs, and achieves higher throughput in\nlow SNR scenarios, where Adaptive WebP fails.", "split": "arXiv" }, { "text": "Iterative methods such as iterative closest point (ICP) for point cloud\nregistration often suffer from bad local optimality (e.g. saddle points), due\nto the nature of nonconvex optimization. To address this fundamental challenge,\nin this paper we propose learning to form the loss landscape of a deep\niterative method w.r.t. predictions at test time into a convex-like shape\nlocally around each ground truth given data, namely Deep Loss Convexification\n(DLC), thanks to the overparametrization in neural networks. To this end, we\nformulate our learning objective based on adversarial training by manipulating\nthe ground-truth predictions, rather than input data. In particular, we propose\nusing star-convexity, a family of structured nonconvex functions that are\nunimodal on all lines that pass through a global minimizer, as our geometric\nconstraint for reshaping loss landscapes, leading to (1) extra novel hinge\nlosses appended to the original loss and (2) near-optimal predictions. 
We\ndemonstrate the state-of-the-art performance using DLC with existing network\narchitectures for the tasks of training recurrent neural networks (RNNs), 3D\npoint cloud registration, and multimodel image alignment.", "split": "arXiv" }, { "text": "The pseudogap state of high-$T_{\\rm c}$ cuprates, known for its partial\ngapping of the Fermi surface above the superconducting transition temperature\n$T_{\\rm c}$, is believed to hold the key to understanding the origin of\nPlanckian relaxation and quantum criticality. However, the nature of the Fermi\nsurface in the pseudogap state has remained a fundamental open question. Here,\nwe report the observation of the Yamaji effect above $T_{\\rm c}$ in the single\nlayer cuprate HgBa$_2$CuO$_{4+\\delta}$. This observation is direct evidence of\nclosed Fermi surface pockets in the normal state of the pseudogap phase. The\nsmall size of the pockets determined from the Yamaji effect (occupying\napproximately $1.3\\%$ of the Brillouin zone area) is all the more surprising\ngiven the absence of evidence for long-range broken translational symmetry that\ncan reconstruct the Fermi-surface.", "split": "arXiv" }, { "text": "Efforts are needed to identify and measure both communities' exposure to\nclimate hazards and the social vulnerabilities that interact with these\nhazards, but the science of validating hazard vulnerability indicators is still\nin its infancy. Progress is needed to improve: 1) the selection of variables\nthat are used as proxies to represent hazard vulnerability; 2) the\napplicability and scale for which these indicators are intended, including\ntheir transnational applicability. We administered an international urban\nsurvey in Buenos Aires, Argentina; Johannesburg, South Africa; London, United\nKingdom; New York City, United States; and Seoul, South Korea in order to\ncollect data on exposure to various types of extreme weather events,\nsocioeconomic characteristics commonly used as proxies for vulnerability (i.e.,\nincome, education level, gender, and age), and additional characteristics not\noften included in existing composite indices (i.e., queer identity, disability\nidentity, non-dominant primary language, and self-perceptions of both\ndiscrimination and vulnerability to flood risk). We then use feature importance\nanalysis with gradient-boosted decision trees to measure the importance that\nthese variables have in predicting exposure to various types of extreme weather\nevents. Our results show that non-traditional variables were more relevant to\nself-reported exposure to extreme weather events than traditionally employed\nvariables such as income or age. Furthermore, differences in variable relevance\nacross different types of hazards and across urban contexts suggest that\nvulnerability indicators need to be fit to context and should not be used in a\none-size-fits-all fashion.", "split": "arXiv" }, { "text": "Pressure injury (PI) detection is challenging, especially in dark skin tones,\ndue to the unreliability of visual inspection. Thermography has been suggested\nas a viable alternative as temperature differences in the skin can indicate\nimpending tissue damage. Although deep learning models have demonstrated\nconsiderable promise toward reliably detecting PI, the existing work fails to\nevaluate the performance on darker skin tones and varying data collection\nprotocols. 
In this paper, we introduce a new thermal and optical imaging\ndataset of 35 participants focused on darker skin tones, where temperature\ndifferences are induced through cooling and cupping protocols. We vary the\nimage collection process to include different cameras, lighting, patient pose,\nand camera distance. We compare the performance of a small convolutional neural\nnetwork (CNN) trained on either the thermal or the optical images on all skin\ntones. Our preliminary results suggest that the thermography-based CNN is robust to\ndata collection protocols for all skin tones.", "split": "arXiv" }, { "text": "We address the issue of the exploding computational requirements of recent\nstate-of-the-art (SOTA) open set multimodel 3D mapping (dense 3D mapping)\nalgorithms and present Voxel-Aggregated Feature Synthesis (VAFS), a novel\napproach to dense 3D mapping in simulation. Dense 3D mapping involves\nsegmenting and embedding sequential RGBD frames, which are then fused into 3D.\nThis leads to redundant computation as the differences between frames are small\nbut all are individually segmented and embedded. This makes dense 3D mapping\nimpractical for research involving embodied agents in which the environment,\nand thus the mapping, must be modified with regularity. VAFS drastically\nreduces this computation by using the segmented point cloud computed by a\nsimulator's physics engine and synthesizing views of each region. This reduces\nthe number of features to embed from the number of captured RGBD frames to the\nnumber of objects in the scene, effectively allowing a \"ground truth\" semantic\nmap to be computed an order of magnitude faster than traditional methods. We\ntest the resulting representation by assessing the IoU scores of semantic\nqueries for different objects in the simulated scene, and find that VAFS\nexceeds the accuracy and speed of prior dense 3D mapping techniques.", "split": "arXiv" }, { "text": "Resilient divertor features connected to open chaotic edge structures in the\nHelically Symmetric Experiment (HSX) are investigated. For the first time, an\nexpanded vessel wall was considered that would give space for implementation of\na physical divertor target structure. The analysis was done for four different\nmagnetic configurations with very different chaotic plasma edges. A resilient\nplasma wall interaction pattern was identified across all configurations. This\nmanifests as qualitatively very similar footprint behavior across the different\nplasma equilibria. Overall, the resilient field lines of interest with high\nconnection length $L_C$ lie within a helical band along the wall for all\nconfigurations. This resiliency can be used to identify the best location of a\ndivertor. The details of the magnetic footprint's resilient helical band are\nsubject to specific field line structures, which are linked to the penetration\ndepth of field lines into the plasma and directly influence the heat and\nparticle flux patterns. The differences arising from these details are\ncharacterized by introducing a new metric, the minimum radial connection\n$\\text{min}(\\delta_N)$ of a field line from the last closed flux surface. The\nrelationship, namely the deviation from a scaling law, between\n$\\text{min}(\\delta_N)$ and $L_C$ of the field lines in the plasma edge\nsuggests that the field lines are associated with structures such\nas resonant islands, cantori, and turnstiles. 
This helps determine the relevant\nmagnetic flux channels based on the radial location of these chaotic edge\nstructures and the divertor target footprint. These details will need to be\ntaken into account for resilient divertor design.", "split": "arXiv" }, { "text": "In this work we establish under certain hypotheses the $N \\to +\\infty$\nasymptotic expansion of integrals of the form $$\\mathcal{Z}_{N,\\Gamma}[V] \\, =\n\\, \\int_{\\Gamma^N} \\prod_{ a < b}^{N}(z_a - z_b)^\\beta \\, \\prod_{k=1}^{N}\n\\mathrm{e}^{ - N \\beta V(z_k) } \\, \\mathrm{d}\\mathbf{z}$$ where $V \\in\n\\mathbb{C}[X]$, $\\beta \\in 2 \\mathbb{N}^*$ is an even integer and $\\Gamma\n\\subset \\mathbb{C}$ is an unbounded contour such that the integral converges.\nFor even degree, real valued $V$s and when $\\Gamma = \\mathbb{R}$, it is well\nknown that the large-$N$ expansion is characterised by an equilibrium measure\ncorresponding to the minimiser of an appropriate energy functional. This method\nbears a structural resemblance with the Laplace method. By contrast, in the\ncomplex valued setting we are considering, the analysis structurally resembles\nthe classical steepest-descent method, and involves finding a critical point\n\\textit{and} a steepest descent curve, the latter being a deformation of the\noriginal integration contour. More precisely, one minimises a curve-dependent\nenergy functional with respect to measures on the curve and then maximises the\nenergy over an appropriate space of curves. Our analysis deals with the one-cut\nregime of the associated equilibrium measure. We establish the existence of an\nall order asymptotic expansion for $\\ln \\mathcal{Z}_{N,\\Gamma}[V]$ and\nexplicitly identify the first few terms.", "split": "arXiv" }, { "text": "We investigate the dynamic behavior of spin reversal events in the dilute\nIsing model, focusing on the influence of static disorder introduced by pinned\nspins. Our Monte Carlo simulations reveal that in a homogeneous, defect-free\nsystem, the inter-event time (IET) between local spin flips follows an\nexponential distribution, characteristic of Poissonian processes. However, in\nheterogeneous systems where defects are present, we observe a significant\ndeparture from this behavior. At high temperatures, the IET exhibits a\npower-law distribution resulting from the interplay of spins located in varying\npotential environments, where defect density influences reversal probabilities.\nAt low temperatures, all site classes converge to a unique power-law\ndistribution, regardless of their potential, leading to distinct critical\nexponents for the high- and low-temperature regimes. This transition from\nexponential to power-law behavior underscores the critical response features of\nmagnetic systems with defects, suggesting analogies to glassy dynamics. Our\nfindings highlight the complex mechanisms governing spin dynamics in disordered\nsystems, with implications for understanding the universal aspects of\nrelaxation in glassy materials.", "split": "arXiv" }, { "text": "Federated Learning (FL) enables collaborative, personalized model training\nacross multiple devices without sharing raw data, making it ideal for pervasive\ncomputing applications that optimize user-centric performances in diverse\nenvironments. However, data heterogeneity among clients poses a significant\nchallenge, leading to inconsistencies among trained client models and reduced\nperformance. 
To address this, we introduce the Alignment with Prototypes (ALP)\nlayers, which align incoming embeddings closer to learnable prototypes through\nan optimal transport plan. During local training, the ALP layer updates local\nprototypes and aligns embeddings toward global prototypes aggregated from all\nclients using our novel FL framework, Federated Alignment (FedAli). For model\ninference, embeddings are guided toward local prototypes to better reflect the\nclient's local data distribution. We evaluate FedAli on heterogeneous\nsensor-based human activity recognition and vision benchmark datasets,\ndemonstrating that it outperforms existing FL strategies. We publicly release\nour source code to facilitate reproducibility and further research.", "split": "arXiv" }, { "text": "Price forecasting for used construction equipment is a challenging task due\nto spatial and temporal price fluctuations. It is thus of high interest to\nautomate the forecasting process based on current market data. Even though\napplying machine learning (ML) to these data represents a promising approach to\npredict the residual value of certain tools, it is hard to implement for small\nand medium-sized enterprises due to their insufficient ML expertise. To this\nend, we demonstrate the possibility of substituting manually created ML\npipelines with automated machine learning (AutoML) solutions, which\nautomatically generate the underlying pipelines. We combine AutoML methods with\nthe domain knowledge of the companies. Based on the CRISP-DM process, we split\nthe manual ML pipeline into a machine learning and non-machine learning part.\nTo take all complex industrial requirements into account and to demonstrate the\napplicability of our new approach, we designed a novel metric named method\nevaluation score, which incorporates the most important technical and\nnon-technical metrics for quality and usability. Based on this metric, we show,\nin a case study for the industrial use case of price forecasting, that domain\nknowledge combined with AutoML can weaken the dependence on ML experts for\ninnovative small and medium-sized enterprises that are interested in\nadopting such solutions.", "split": "arXiv" }, { "text": "Rocky planets in our Solar System, namely Mercury, Venus, Earth, Mars, and\nthe Moon, which is generally added to this group due to its geological\ncomplexity, possess a solid surface and share a common structure divided into\nmajor layers, namely a silicate crust, a silicate mantle, and an iron-rich\ncore. However, while all terrestrial planets share a common structure, the\nthickness of their interior layers, their bulk chemical composition, and\nsurface expressions of geological processes are often unique to each of them.\nIn this chapter we provide an overview of the surfaces and interiors of rocky\nplanets in the Solar System. We list some of the major discoveries in planetary\nexploration and discuss how they have helped to answer fundamental questions\nabout planetary evolution while at the same time opening new avenues. For each\nof the major planetary layers, i.e., the surface, the crust and lithosphere,\nthe mantle, and the core, we review key geological and geophysical processes\nthat have shaped the planets that we observe today. 
Understanding the\nsimilarities and differences between the terrestrial planets in the Solar\nSystem will teach us about the diversity of evolutionary paths a planet could\nfollow, helping us to better understand our own home, the Earth.", "split": "arXiv" }, { "text": "We prove that for $1\\le p,q\\le\\infty$ the mixed-norm spaces $L_q(L_p)$ are\nmutually non-isomorphic, with the only exception that $L_q(L_2)$ is isomorphic\nto $L_q(L_q)$ for all $1 5$\ngalaxies. We find no clear evidence of an AGN suggesting the emission lines are\nstar formation driven. EELG1002 is chemically unevolved (direct $T_e$;\n$12+\\log_{10} (\\textrm{O/H}) \\sim 7.5$ consistent with $z > 5$ galaxies at\nfixed stellar mass) and may be undergoing a first intense, bursty star\nformation phase analogous to conditions expected of galaxies in the early\nUniverse. We find evidence for a highly energetic ISM ([OIII]/[OII] $\\sim 11$)\nand hard ionizing radiation field (elevated [NeIII]/[OII] at fixed\n[OIII]/[OII]). Coupled with its compact, metal-poor, and actively star-forming\nnature, EELG1002 is found to efficiently produce ionizing photons with\n$\\xi_{ion} \\sim 10^{25.70 - 25.75}$ erg$^{-1}$ Hz and may have $\\sim 10 - 20\\%$\nLyC escape fraction suggesting such sources may be important reionization-era\nanalogs. We find dynamical mass of $\\sim 10^9$ M$_\\odot$ suggesting copious\namounts of gas to support intense star-formation activity as also suggested by\nanalogs identified in Illustris-TNG. EELG1002 may be an ideal low-$z$\nlaboratory of galaxies in the early Universe and demonstrates how archival\ndatasets can support high-$z$ science and next-generation surveys planned with\n\\textit{Euclid} and \\textit{Roman}.", "split": "arXiv" }, { "text": "Nearly all cool, evolved stars are solar-like oscillators, and fundamental\nstellar properties can be inferred from these oscillations with\nasteroseismology. Scaling relations are commonly used to relate global\nasteroseismic properties, the frequency of maximum power $\\nu_{max}$ and the\nlarge frequency separation $\\Delta \\nu$, to stellar properties. Mass, radius,\nand age can then be inferred with the addition of stellar spectroscopy. There\nis excellent agreement between seismic radii and fundamental data on the lower\nred giant branch and red clump. However, the scaling relations appear to\nbreakdown in luminous red giant stars. We attempt to constrain the\ncontributions of the asteroseismic parameters to the observed breakdown. We\ntest the $\\nu_{max}$ and $\\Delta \\nu$ scaling relations separately, by using\nstars of known mass and radius in star clusters and the Milky Way's\nhigh-$\\alpha$ sequence. We find evidence that the $\\Delta \\nu$-scaling relation\ncontributes to the observed breakdown in luminous giants more than the\n$\\nu_{max}$ relation. We test different methods of mapping the observed $\\Delta\n\\nu$ to the mean density via a correction factor, $F_{\\Delta \\nu}$ and find a\n$\\approx 1 - 3\\%$ difference in the radii in the luminous giant regime\ndepending on the technique used to measure $F_{\\Delta \\nu}$. The differences\nbetween the radii inferred by these two techniques are too small on the\nluminous giant branch to account for the inflated seismic radii observed in\nevolved giant stars. 
Finally, we find that the $F_{\\Delta \\nu}$ correction is\ninsensitive to the adopted mixing length, chosen by calibrating the models to\nobservations of $T_{eff}$.", "split": "arXiv" }, { "text": "We explore the possibility that exotic forms of dark matter could expose\nhumans on Earth or on prolonged space travel to a significant radiation dose.\nThe radiation exposure from dark matter interacting with nuclei in the human\nbody is generally assumed to be negligible compared to other sources of\nbackground radiation. However, as we discuss here, current data allow for dark\nmatter models where this is not necessarily true. In particular, if dark matter\nis heavier and more strongly interacting than weakly interacting massive\nparticle dark matter, it could act as ionizing radiation and deposit a\nsignificant amount of radiation energy in all or part of the human population,\nsimilar to or even exceeding the known radiation exposure from other background\nsources. Conversely, the non-observation of such an exposure can be used to\nconstrain this type of heavier and more strongly interacting dark matter. We\nfirst consider the case where dark matter scatters elastically and identify the\nrelevant parameter space in a model-independent way. We also discuss how\nprevious bounds from cosmological probes, as well as atmospheric and\nspace-based detectors, might be avoided, and how a re-analysis of existing\nradiation data, along with a simple experiment monitoring ionizing radiation in\nspace with a lower detection threshold, could help constrain part of this\nparameter space. We finally propose a hypothetical dark matter candidate that\nscatters inelastically and argue that, in principle, one per mille of the\nEarth's population could attain a significant radiation dose from such a dark\nmatter exposure in their lifetime.", "split": "arXiv" }, { "text": "Quenching of star-formation plays a fundamental role in galaxy evolution.\nThis process occurs due to the removal of the cold interstellar medium (ISM) or\nstabilization against collapse, so that gas cannot be used in the formation of\nnew stars. In this paper, we study the effect of different mechanisms of ISM\nremoval. In particular, we revised the well-known Baldwin-Philips-Terlevich\n(BPT) and $\\mathrm{EW_{H\\alpha}}$ vs. $\\mathrm{[NII]/H\\alpha}$ (WHAN)\nemission-line ratio diagnostics, so that we could classify all galaxies, even\nthose not detected at some emission lines, introducing several new spectral\nclasses. We use spectroscopic data and several physical parameters of 2409\ndusty early-type galaxies in order to find out the dominant ionization source\n[active galactic nuclei (AGNs), young massive stars, hot low-mass evolved stars\n(HOLMES)] and its effect on the ISM. We find that strong AGNs can play a\nsignificant role in the ISM removal process only for galaxies with ages lower\nthan $10^{9.4}$ yr, but we cannot rule out the influence of weak AGNs at any\nage. For older galaxies, HOLMES/planetary nebulae contribute significantly to\nthe ISM removal process. Additionally, we provide the BPT and WHAN\nclassifications not only for the selected sample but also for all 300000\ngalaxies in the GAMA fields.", "split": "arXiv" }, { "text": "The paper is concerned with a scalar conservation law with discontinuous\ngradient-dependent flux. Namely, the flux is described by two different\nfunctions $f(u)$ or $g(u)$, when the gradient $u_x$ of the solution is positive\nor negative, respectively. 
We study here the stable case where $f(u)100\\;\\rm{cm}^{-3}$) gas, we find steady-state magnetic field\nstrengths of 10--40 $\\mu$G, comparable to those observed in molecular clouds.\nFinally, we demonstrate that our simulation framework is consistent with the\nOstriker & Kim (2022) Pressure Regulated Feedback Modulated Theory of star\nformation and stellar Feedback.", "split": "arXiv" }, { "text": "We study vertical resonant trapping and resonant heating of orbits. These two\nprocesses both lead to the growth of a boxy/peanut-shaped bulge in a typical\n$N$-body model. For the first time, we study this by means of the action\nvariables and resonant angles of the actual orbits that compose the model\nitself. We used the resonant angle instead of the frequency ratio, which\nallowed us to clearly distinguish between these two processes in numerical\nsimulations. We show that trapping and heating occur simultaneously, at least\nat the stage of a mature bar, that is, some orbits quickly pass through\nvertical resonance while at the same time, a substantial number of orbits\nremains trapped into this stage for a long time. Half of all bar orbits spend\nmore than 2.5 Gyr in vertical resonance over an interval of 4 Gyr. Half of the\norbits trapped into the bar over the last 3 Gyr of simulation remain captured\nin vertical resonance for more than 2 Gyr. We conclude that in the later stages\nof the bar evolution, the process of vertical trapping dominates in the ongoing\nprocess that causes the boxy/peanut shape of a bar in a typical $N$-body model.\nThis contradicts the results of several recent works.", "split": "arXiv" }, { "text": "Internal crack detection has been a subject of focus in structural health\nmonitoring. By focusing on crack detection in structural datasets, it is\ndemonstrated that deep learning (DL) methods can effectively analyze seismic\nwave fields interacting with micro-scale cracks, which are beyond the\nresolution of conventional visual inspection. This work explores a novel\napplication of DL-based key point detection technique, where cracks are\nlocalized by predicting the coordinates of four key points that define a\nbounding region of the crack. The study not only opens new research directions\nfor non-visual applications but also effectively mitigates the impact of\nimbalanced data which poses a challenge for previous DL models, as it can be\nbiased toward predicting the majority class (non-crack regions). Popular DL\ntechniques, such as the Inception blocks, are used and investigated. The model\nshows an overall reduction in loss when applied to micro-scale crack detection\nand is reflected in the lower average deviation between the location of actual\nand predicted cracks, with an average Intersection over Union (IoU) being 0.511\nfor all micro cracks (greater than 0.00 micrometers) and 0.631 for larger micro\ncracks (greater than 4 micrometers).", "split": "arXiv" }, { "text": "Studies investigating the causal effects of spatially varying exposures on\nhealth$\\unicode{x2013}$such as air pollution, green space, or\ncrime$\\unicode{x2013}$often rely on observational and spatially indexed data. A\nprevalent challenge is unmeasured spatial confounding, where an unobserved\nspatially varying variable affects both exposure and outcome, leading to biased\ncausal estimates and invalid confidence intervals. 
In this paper, we introduce\na general framework based on instrumental variables (IV) that encompasses and\nunites most of the existing methods designed to account for an unmeasured\nspatial confounder. We show that a common feature of all existing methods is\ntheir reliance on small-scale variation in exposure, which functions as an IV.\nIn this framework, we outline the underlying assumptions and the estimation\nstrategy of each method. Furthermore, we demonstrate that the IV can be used to\nidentify and estimate the exposure-response curve under more relaxed\nassumptions. We conclude by estimating the exposure-response curve between\nlong-term exposure to fine particulate matter and all-cause mortality among\n33,454 zip codes in the United States while adjusting for unmeasured spatial\nconfounding.", "split": "arXiv" }, { "text": "Different methods can be employed to render virtual reverberation, often\nrequiring substantial information about the room's geometry and the acoustic\ncharacteristics of the surfaces. However, fully comprehensive approaches that\naccount for all aspects of a given environment may be computationally costly\nand redundant from a perceptual standpoint. For these methods, achieving a\ntrade-off between perceptual authenticity and the model's complexity becomes a\nrelevant challenge.\n This study investigates this compromise through the use of geometrical\nacoustics to render Ambisonics-based binaural reverberation. Its precision is\ndetermined, among other factors, by its fidelity to the room's geometry and to\nthe acoustic properties of its materials.\n The purpose of this study is to investigate the impact of simplifying the\nroom geometry and the frequency resolution of absorption coefficients on the\nperception of reverberation within a virtual sound scene. Several decimated\nmodels based on a single room were perceptually evaluated using a\nmulti-stimulus comparison method. Additionally, these differences were\nnumerically assessed through the calculation of acoustic parameters of the\nreverberation.\n According to numerical and perceptual evaluations, lowering the frequency\nresolution of absorption coefficients can have a significant impact on the\nperception of reverberation, while a less notable impact was observed when\ndecimating the geometry of the model.", "split": "arXiv" }, { "text": "Simulations and observations suggest that galaxy interactions may enhance the\nstar formation rate (SFR) in merging galaxies. One proposed mechanism is the\ntorque exerted on the gas and stars in the larger galaxy by the smaller galaxy.\nWe analyze the interaction torques and star formation activity on six galaxies\nfrom the FIRE-2 simulation suite with masses comparable to the Milky Way galaxy\nat redshift $z=0$. We trace the halos from $z = 3.6$ to $z=0$, calculating the\ntorque exerted by the nearby galaxies on the gas in the central galaxy. We\ncalculate the correlation between the torque and the SFR across the simulations\nfor various mass ratios. For near-equal-stellar-mass-ratio interactions in the\ngalaxy sample, occurring between $z=1.2-3.6$, there is a positive and\nstatistically significant correlation between the torque from nearby galaxies\non the gas of the central galaxies and the SFR. 
For all other samples, no\nstatistically significant correlation is found between the torque and the SFR.\nOur analysis shows that some, but not all, major interactions cause starbursts\nin the simulated Milky Way-mass galaxies, and that most starbursts are not\ncaused by galaxy interactions. The transition from a `bursty' star-formation state at high redshift\n($z\\gtrsim1$) to a `steady' state at later times is independent of\nthe interaction history of the galaxies, and most of the interactions do not\nleave significant imprints on the overall trend of the star formation history\nof the galaxies.", "split": "arXiv" }, { "text": "False data injection attacks (FDIAs) on smart inverters are a growing concern\nlinked to increased renewable energy production. While data-based FDIA\ndetection methods are also actively developed, we show that they remain\nvulnerable to impactful and stealthy adversarial examples that can be crafted\nusing Reinforcement Learning (RL). We propose to include such adversarial\nexamples in the data-based detection training procedure via a continual adversarial\nRL (CARL) approach. This way, one can pinpoint the deficiencies of data-based\ndetection, thereby offering explainability during their incremental\nimprovement. We show that a continual learning implementation is subject to\ncatastrophic forgetting, and additionally show that forgetting can be addressed\nby employing a joint training strategy on all generated FDIA scenarios.", "split": "arXiv" }, { "text": "The current vision-based aphid counting methods in water traps suffer from\nundercounts caused by occlusions and low visibility arising from dense\naggregation of insects and other objects. To address this problem, we propose a\nnovel aphid counting method through interactive stirring actions. We use\ninteractive stirring to alter the distribution of aphids in the yellow water\ntrap and capture a sequence of images, which are then used for aphid detection\nand counting through an optimized small object detection network based on\nYolov5. We also propose a counting confidence evaluation system to evaluate the\nconfidence of counting results. The final counting result is a weighted sum of\nthe counting results from all sequence images based on the counting confidence.\nExperimental results show that our proposed aphid detection network\nsignificantly outperforms the original Yolov5, with improvements of 33.9% in\nAP@0.5 and 26.9% in AP@[0.5:0.95] on the aphid test set. In addition, the aphid\ncounting test results using our proposed counting confidence evaluation system\nshow significant improvements over the static counting method, closely aligning\nwith manual counting results.", "split": "arXiv" }, { "text": "We analyze the recent MIT lattice data for the gravitational form factors\n(GFFs) of the pion which extend up to $Q^2= 2~{\\rm GeV}^2$ for $m_\\pi=170$~MeV.\nWe show that simple monopole fits comply with the old idea of meson dominance.\nWe use Chiral Perturbation theory ($\\chi$PT) to next-to-leading order (NLO) to\ntransform the MIT data to the physical world with $m_\\pi=140~$MeV and find that\nthe spin-0 GFF is effectively saturated with the $f_0(600)$ and the spin-2 with\nthe $f_2(1270)$, with monopole masses $m_\\sigma= 630(60)$~MeV and $m_{f_2}=\n1270(40)$~MeV. 
We determine in passing the chiral low energy constants (LECs)\nfrom the MIT lattice data alone\n $$\n 10^3 \\cdot L_{11} (m_\\rho^2)=1.06(15) \\, , \\qquad 10^3 \\cdot L_{12}\n(m_\\rho^2)= -2.2(1) \\, ,\n \\qquad 10^3 \\cdot L_{13} (m_\\rho^2) = -0.7(1.1).\n $$ which agree in sign and order of magnitude with the original estimates by\nDonoghue and Leutwyler. We also analyze the sum rules based on perturbative QCD\n(pQCD) that imply that the corresponding spectral functions are not positive\ndefinite. We show that these sum rules are strongly violated in a variety of\n$\\pi\\pi-K \\bar K$ coupled channel Omn\\`es-Muskhelishvili calculations. This is\nnot mended by the inclusion of the pQCD tail, suggesting the need for an extra\nnegative spectral strength. Using a simple model implementing all sum rules, we\nfind the expected onset of pQCD at very high momenta.", "split": "arXiv" }, { "text": "Large language models (LLMs) have significantly advanced the field of\nautomated code generation. However, a notable research gap exists in the\nevaluation of social biases that may be present in the code produced by LLMs.\nTo solve this issue, we propose a novel fairness framework, i.e., Solar, to\nassess and mitigate the social biases of LLM-generated code. Specifically,\nSolar can automatically generate test cases for quantitatively uncovering\nsocial biases of the auto-generated code by LLMs. To quantify the severity of\nsocial biases in generated code, we develop a dataset that covers a diverse set\nof social problems. We applied Solar and the crafted dataset to four\nstate-of-the-art LLMs for code generation. Our evaluation reveals severe bias\nin the LLM-generated code from all the subject LLMs. Furthermore, we explore\nseveral strategies for bias mitigation, including Chain-of-Thought (CoT)\nprompting, combining positive role-playing with CoT prompting and iterative\nprompting. Our experiments show that iterative prompting can effectively reduce\nsocial bias in LLM-generated code by up to 90%. Solar is highly extensible to\nevaluate new social problems.", "split": "arXiv" }, { "text": "In machine learning (ML), the inference phase is the process of applying\npre-trained models to new, unseen data with the objective of making\npredictions. During the inference phase, end-users interact with ML services to\ngain insights, recommendations, or actions based on the input data. For this\nreason, serving strategies are nowadays crucial for deploying and managing\nmodels in production environments effectively. These strategies ensure that\nmodels are available, scalable, reliable, and performant for real-world\napplications, such as time series forecasting, image classification, natural\nlanguage processing, and so on. In this paper, we evaluate the performances of\nfive widely-used model serving frameworks (TensorFlow Serving, TorchServe,\nMLServer, MLflow, and BentoML) under four different scenarios (malware\ndetection, cryptocoin prices forecasting, image classification, and sentiment\nanalysis). We demonstrate that TensorFlow Serving is able to outperform all the\nother frameworks in serving deep learning (DL) models. Moreover, we show that\nDL-specific frameworks (TensorFlow Serving and TorchServe) display\nsignificantly lower latencies than the three general-purpose ML frameworks\n(BentoML, MLFlow, and MLServer).", "split": "arXiv" }, { "text": "We present Y-MAP-Net, a Y-shaped neural network architecture designed for\nreal-time multi-task learning on RGB images. 
Y-MAP-Net simultaneously predicts\ndepth, surface normals, human pose, and semantic segmentation, and generates\nmulti-label captions, all from a single network evaluation. To achieve this, we\nadopt a multi-teacher, single-student training paradigm, where task-specific\nfoundation models supervise the network's learning, enabling it to distill\ntheir capabilities into a lightweight architecture suitable for real-time\napplications. Y-MAP-Net exhibits strong generalization, simplicity, and\ncomputational efficiency, making it ideal for robotics and other practical\nscenarios. To support future research, we will release our code publicly.", "split": "arXiv" }, { "text": "Bitcoin, launched in 2008 by Satoshi Nakamoto, established a new digital\neconomy where value can be stored and transferred in a fully decentralized\nmanner, alleviating the need for a central authority. This paper introduces a\nlarge-scale dataset in the form of a transaction graph representing\ntransactions between Bitcoin users along with a set of tasks and baselines. The\ngraph includes 252 million nodes and 785 million edges, covering a time span of\nnearly 13 years and 670 million transactions. Each node and edge is\ntimestamped. As for supervised tasks, we provide two labeled sets: (i) 33,000\nnodes labeled by entity type and (ii) nearly 100,000 Bitcoin addresses labeled\nwith an entity name and an entity type. This is the largest publicly available\ndataset of Bitcoin transactions designed to facilitate advanced research and\nexploration in this domain, overcoming the limitations of existing datasets.\nVarious graph neural network models are trained to predict node labels,\nestablishing a baseline for future research. In addition, several use cases are\npresented to demonstrate the dataset's applicability beyond Bitcoin analysis.\nFinally, all data and source code are made publicly available to enable\nreproducibility of the results.", "split": "arXiv" }, { "text": "The recently released model, Claude 3.5 Computer Use, stands out as the first\nfrontier AI model to offer computer use in public beta as a graphical user\ninterface (GUI) agent. As an early beta, its capability in real-world\ncomplex environments remains unknown. In this case study to explore Claude 3.5\nComputer Use, we curate and organize a collection of carefully designed tasks\nspanning a variety of domains and software. Observations from these cases\ndemonstrate Claude 3.5 Computer Use's unprecedented ability in end-to-end\nlanguage-to-desktop actions. Along with this study, we provide an\nout-of-the-box agent framework for deploying API-based GUI automation models\nwith easy implementation. Our case studies aim to showcase a groundwork of\ncapabilities and limitations of Claude 3.5 Computer Use with detailed analyses\nand bring to the fore questions about planning, action, and critic, which must\nbe considered for future improvement. We hope this preliminary exploration will\ninspire future research in the GUI agent community. All the test cases in the\npaper can be tried through the project:\nhttps://github.com/showlab/computer_use_ootb.", "split": "arXiv" }, { "text": "Let $\\mathcal{G}$ be the set of all the planar embeddings of a (not\nnecessarily connected) $n$-vertex graph $G$. We present a bijection $\\Phi$ from\n$\\mathcal{G}$ to the natural numbers in the interval $[0 \\dots |\\mathcal{G}| -\n1]$. 
Given a planar embedding $\\mathcal{E}$ of $G$, we show that\n$\\Phi(\\mathcal{E})$ can be decomposed into a sequence of $O(n)$ natural numbers,\neach describing a specific feature of $\\mathcal{E}$. The function $\\Phi$, which\nis a ranking function for $\\mathcal{G}$, can be computed in $O(n)$ time, while\nits inverse unranking function $\\Phi^{-1}$ can be computed in $O(n \\alpha(n))$\ntime. The results of this paper can be of practical use for generating the planar embeddings of a graph $G$ uniformly at random or for enumerating such\nembeddings with amortized constant delay. Also, they can be used to count,\nenumerate, or generate uniformly at random constrained planar embeddings of\n$G$.", "split": "arXiv" }, { "text": "Autonomous vehicles require road information for their operation, usually in\nthe form of HD maps. Since offline maps eventually become outdated or may only be\npartially available, online HD map construction methods have been proposed to\ninfer map information from live sensor data. A key issue remains how to exploit\nsuch partial or outdated map information as a prior. We introduce M3TR\n(Multi-Masking Map Transformer), a generalist approach for HD map construction\nboth with and without map priors. We address shortcomings in ground truth\ngeneration for Argoverse 2 and nuScenes and propose the first realistic\nscenarios with semantically diverse map priors. Examining various query\ndesigns, we use an improved method for integrating prior map elements into an HD\nmap construction model, increasing performance by +4.3 mAP. Finally, we show\nthat training across all prior scenarios yields a single Generalist model,\nwhose performance is on par with previous Expert models that can handle only\none specific type of map prior. M3TR is thus the first model capable of\nleveraging variable map priors, making it suitable for real-world deployment.\nCode is available at https://github.com/immel-f/m3tr", "split": "arXiv" }, { "text": "In this work, we propose novel offline and online Inverse Differential Game\n(IDG) methods for nonlinear Differential Games (DG), which identify the cost\nfunctions of all players from control and state trajectories constituting a\nfeedback Nash equilibrium. The offline approach computes the sets of all\nequivalent cost function parameters that yield the observed trajectories. Our\nonline method is guaranteed to converge to cost function parameters of the\noffline calculated sets. For both methods, we additionally analyze the case\nwhere the cost and value functions are not given by known parameterized\nstructures and approximation structures, like polynomial basis functions, need\nto be chosen. Here, we found that for guaranteeing a bounded error between the\ntrajectories resulting from the offline and online IDG solutions and the\nobserved trajectories, an appropriate selection of the cost function structures\nis required. They must be aligned with the assumed value function structures such\nthat the coupled Hamilton-Jacobi-Bellman equations can be fulfilled. Finally,\nthe theoretical results and the effectiveness of our new methods are\nillustrated with a numerical example.", "split": "arXiv" }, { "text": "In this paper, we investigate the parking process on a uniform random rooted\nbinary tree with $n$ vertices. Viewing each vertex as a single parking space, a\nrandom number of cars independently arrive at and attempt to park on each\nvertex one at a time. 
If a car attempts to park on an occupied vertex, it\ntraverses the unique path on the tree towards the root, parking at the first\nempty vertex it encounters. If this is not possible, the car exits the tree at\nthe root.\n We shall investigate the limit of the probability of the event that all cars\ncan park when $\\lfloor \\alpha n \\rfloor$ cars arrive, with $\\alpha > 0$. We\nfind that there is a phase transition at $\\alpha_c = 2 - \\sqrt{2}$, with this\nevent having positive limiting probability when $\\alpha < \\alpha_c$, and the\nprobability tending to 0 as $n \\rightarrow \\infty$ for $\\alpha > \\alpha_c$.\n This is analogous to the work done by Goldschmidt and Przykucki\n(arXiv:1610.08786) and Goldschmidt and Chen (arXiv:1911.03816), while agreeing\nwith the general result proven by Curien and H\\'enard (arXiv:2205.15932).", "split": "arXiv" }, { "text": "We study the problem of clock synchronization in a networked system with\narbitrary starts for all nodes. We consider a synchronous network of $n$ nodes,\nwhere each node has a local clock that is an integer counter. Eventually,\nclocks must be all equal and increase by one in each round modulo some period\n$P$. The purpose of this paper is to study whether clock synchronization can be\nachieved with bounded memory, that is every node maintains a number of states\nthat does not depend on the network size. In particular, we are interested in\nclock synchronization algorithms which work in dynamic networks, i.e., tolerate\nthat communication links continuously fail and come-up.\n We first focus on self-stabilizing solutions for clock synchronization, and\nprove that there is no such algorithm that is bounded memory, even in the case\nof static networks. More precisely, we show a lower bound of $n+1$ states at\neach node required to achieve clock synchronization in static strongly\nconnected networks with at most $n$ nodes, and derive a lower bound of $n-2$\nrounds on synchronization time, in the worst case. We then prove that, when the\nself-stabilizing requirement is removed, the impossibility of clock\nsynchronization with bounded memory still holds in the dynamic setting: every\nsolution for the clock synchronization problem in dynamic networks with at most\n$n$ nodes requires each node to have $\\Omega(\\log n)$ states.", "split": "arXiv" }, { "text": "We consider the Hospital/Residents (HR) problem in the presence of ties in\npreference lists. Among the three notions of stability, viz. weak, strong, and\nsuper stability, we focus on the notion of strong stability. Strong stability\nhas many desirable properties both theoretically and practically; however, its\nexistence is not guaranteed.\n In this paper, our objective is to optimally increase the quotas of hospitals\nto ensure that a strongly stable matching exists in the modified instance.\nFirst, we show that if ties are allowed in residents' preference lists, it may\nnot be possible to augment the hospital quotas to obtain an instance that\nadmits a strongly stable matching. When residents' preference lists are strict,\nwe explore two natural optimization criteria: (i) minimizing the maximum\ncapacity increase for any hospital (MINMAX), and (ii) minimizing the total\ncapacity increase across all hospitals (MINSUM). We show that the MINMAX\nproblem is NP-hard in general. 
When hospital preference lists can have ties of\nlength at most $\\ell+1$, we give a polynomial-time algorithm that increases\neach hospital's quota by at most $\\ell$, ensuring the resulting instance admits\na strongly stable matching.\n We show that the MINSUM problem admits a polynomial-time algorithm. However,\nwhen each hospital incurs a cost for each capacity increase, the problem\nbecomes NP-hard, even if the costs are 0 or 1. This also implies that the\nproblem cannot be approximated within any multiplicative factor. We also consider a\nrelated problem under the MINSUM objective. Given an HR instance and a forced\npair $(r^*,h^*)$, the goal is to decide if it is possible to increase hospital\nquotas (if necessary) to obtain a strongly stable matching that matches the\npair $(r^*,h^*)$. We give a polynomial-time algorithm for this problem.", "split": "arXiv" }, { "text": "We produce twisted derived equivalences between torsors under abelian\nvarieties and their moduli spaces of simple semi-homogeneous sheaves. We also\nestablish the natural converse to this result and show that a large class of\ntwisted derived equivalences, including all derived equivalences, between\ntorsors arises in this way. As corollaries, we obtain partial extensions of the\nusual derived equivalence criterion for abelian varieties established by Orlov\nand Polishchuk.", "split": "arXiv" }, { "text": "We consider $\\mathbb{Z}_q$-valued clock models on a regular tree, for general\nclasses of ferromagnetic nearest neighbor interactions which have a discrete\nrotational symmetry. It has been proved recently that, at strong enough\ncoupling, families of homogeneous Markov chain Gibbs states $\\mu_A$ coexist\nwhose single-site marginals concentrate on $A\\subset \\mathbb{Z}_q$, and which\nare not convex combinations of each other [AbHeKuMa24]. In this note, we aim at\na description of the extremal decomposition of $\\mu_A$ for $|A|\\geq 2$ into all\nextremal Gibbs measures, which may be spatially inhomogeneous. First, we show\nthat in regimes of very strong coupling, $\\mu_A$ is not extremal. Moreover,\n$\\mu_A$ possesses a single-site reconstruction property which holds for spin\nvalues sent from the origin to infinity, when these initial values are chosen\nfrom $A$. As our main result, we show that $\\mu_A$ decomposes into uncountably\nmany extremal inhomogeneous states. The proof is based on multi-site\nreconstruction, which allows us to derive concentration properties of branch\noverlaps. Our method is based on a new good site/bad site decomposition adapted\nto the $A$-localization property, together with a coarse-graining argument in\nlocal state space.", "split": "arXiv" }, { "text": "Many magnetic white dwarfs exhibit a polarised spectrum that periodically\nvaries as the star rotates because the magnetic field is not symmetric about\nthe rotation axis. In this work, we report the discovery that while weakly\nmagnetic white dwarfs of all ages with M < 1 M$_\\odot$ show polarimetric variability\nwith a period between hours and several days, the large majority of magnetic\nwhite dwarfs in the same mass range with cooling ages older than 2 Gyr and\nfield strengths > 10 MG show little or no polarimetric variability. This could\nbe interpreted as extremely slow rotation, but the lack of known white dwarfs\nwith measured periods longer than two weeks means that we do not see white\ndwarfs slowing their rotation.
We therefore suggest a different interpretation:\nold strongly magnetic white dwarfs do not vary because their fields are roughly\nsymmetric about the rotation axes. Symmetry may either be a consequence of\nfield evolution or a physical characteristic intrinsic to the way strong fields\nare generated in older stars. Specifically, a strong magnetic field could\ndistort the shape of a star, forcing the principal axis of maximum inertia away\nfrom the spin axis. Eventually, as a result of energy dissipation, the magnetic\naxis will align with the angular momentum axis. We also find that the\nhigher-mass strongly magnetised white dwarfs, which are likely the products of\nthe merging of two white dwarfs, may appear as either polarimetrically variable\nor constant. This may be the symptom of two different formation channels or the\nconsequence of the fact that a dynamo operating during a merger may produce\ndiverse magnetic configurations. Alternatively, the massive white dwarfs with\nconstant polarisation may be rotating with periods much shorter than the\ntypical exposure times of the observations.", "split": "arXiv" }, { "text": "Super $L_\\infty$-algebras unify extended super-symmetry with rational\nclassifying spaces for higher flux densities: The super-invariant super-fluxes\nwhich control super $p$-branes and their supergravity target super-spaces are,\ntogether with their (non-linear) Bianchi identities, neatly encoded in\n(non-abelian) super-$L_\\infty$ cocycles. These are the rational shadows of\nflux-quantization laws (in ordinary cohomology, K-theory, Cohomotopy, iterated\nK-theory, etc).\n We first review, in streamlined form while filling some previous gaps,\ndouble-dimensional reduction/oxidation and 10D superspace T-duality along\nhigher-dimensional super-tori. We do so tangent super-space wise, by viewing it\nas an instance of adjunctions (dualities) between super-$L_\\infty$-extensions\nand -cyclifications, applied to the avatar super-flux densities of 10D\nsupergravity. In particular, this yields a derivation, at the rational level,\nof the traditional laws of \"topological T-duality\" from the super-$L_\\infty$\nstructure of type II superspace. At this level, we also discuss a higher\ncategorical analog of T-duality involving M-branes.\n Then, by considering super-space T-duality along all 1+9 spacetime dimensions\nwhile retaining the 11th dimension as in F-theory, we find the M-algebra\nappearing as the complete brane-charge extension of the fully\nT-doubled/correspondence super-spacetime. On this backdrop, we recognize the\n\"decomposed\" M-theory 3-form on the \"hidden M-algebra\" as an M-theoretic lift\nof the Poincar\\'e super 2-form that controls superspace T-duality as the\nintegral kernel of the super Fourier-Mukai transform. This provides the\nsuper-space structure of an M-theory lift of the doubled/correspondence space\ngeometry, which controls T-duality.", "split": "arXiv" }, { "text": "A well-known result of Shalom says that lattices in SO$(n,1)$ are $L^p$\nmeasure equivalent for all $p