- Uncovering a Massive z~7.65 Galaxy Hosting a Heavily Obscured Radio-Loud QSO Candidate in COSMOS-Web. In this letter, we report the discovery of the highest-redshift, heavily obscured, radio-loud QSO candidate selected using JWST NIRCam/MIRI, mid-IR, sub-mm, and radio imaging in the COSMOS-Web field. Using multi-frequency radio observations and mid-IR photometry, we identify a powerful, radio-loud (RL), growing supermassive black hole (SMBH) with significant spectral steepening of the radio SED (f_{1.32 GHz} ~ 2 mJy, q_{24μm} = -1.1, α_{1.32-3 GHz} = -1.2, Δα = -0.4). In conjunction with ALMA, deep ground-based observations, ancillary space-based data, and the unprecedented resolution and sensitivity of JWST, we find no evidence of QSO contribution to the UV/optical/NIR data and thus infer heavy obscuration (N_H > 10^{23} cm^{-2}). Using the wealth of deep UV to sub-mm photometric data, we report a single-solution photometric redshift of z_phot = 7.65^{+0.4}_{-0.3} and estimate an extremely massive host galaxy (log(M_⋆/M_⊙) = 11.92 ± 0.06). This source is the most distant known obscured RL QSO candidate, and its level of obscuration aligns with the most representative but observationally scarce population of QSOs at these epochs. 45 authors · Aug 24, 2023
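  As a quick, hedged illustration of the radio diagnostics quoted in this abstract, the sketch below computes a two-point spectral index (assuming a power-law SED, S_ν ∝ ν^α) and the commonly used 24 μm-to-radio flux ratio q_24 = log10(S_24μm / S_radio). The flux values are hypothetical, chosen only to land near numbers of the quoted order; they are not measurements from the paper, and the exact q_24 convention the authors adopt is an assumption here.

  ```python
  import numpy as np

  # Two-point radio spectral index, assuming a power-law SED S_nu ∝ nu^alpha:
  #   alpha = log(S1 / S2) / log(nu1 / nu2)
  def spectral_index(s1_mjy, nu1_ghz, s2_mjy, nu2_ghz):
      return np.log(s1_mjy / s2_mjy) / np.log(nu1_ghz / nu2_ghz)

  # Mid-IR-to-radio flux ratio, commonly defined as q_24 = log10(S_24um / S_radio);
  # strongly negative values indicate a radio excess (a radio-loud AGN).
  def q24(s_24um_mjy, s_radio_mjy):
      return np.log10(s_24um_mjy / s_radio_mjy)

  # Hypothetical fluxes (not the paper's data): ~2 mJy at 1.32 GHz and ~0.75 mJy at
  # 3 GHz give alpha ≈ -1.2; a ~0.16 mJy 24 um flux against ~2 mJy in the radio
  # gives q_24 ≈ -1.1, i.e. a strong radio excess.
  print(f"alpha(1.32-3 GHz) ≈ {spectral_index(2.0, 1.32, 0.75, 3.0):+.2f}")
  print(f"q_24 ≈ {q24(0.16, 2.0):+.2f}")
  ```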
- Language models scale reliably with over-training and on downstream tasks. Scaling laws are useful guides for developing language models, but there are still gaps between current scaling studies and how language models are ultimately trained and evaluated. For instance, scaling is usually studied in the compute-optimal training regime (i.e., the "Chinchilla optimal" regime); however, in practice, models are often over-trained to reduce inference costs. Moreover, scaling laws mostly predict loss on next-token prediction, but ultimately models are compared based on downstream task performance. In this paper, we address both shortcomings. To do so, we create a testbed of 104 models with 0.011B to 6.9B parameters trained with various numbers of tokens on three data distributions. First, we investigate scaling in the over-trained regime. We fit scaling laws that extrapolate in both the number of model parameters and the ratio of training tokens to parameters. This enables us to predict the validation loss of a 1.4B-parameter, 900B-token run (i.e., 32× over-trained) and a 6.9B-parameter, 138B-token run, each from experiments that take 300× less compute. Second, we relate the perplexity of a language model to its downstream task performance via a power law. We use this law to predict top-1 error averaged over downstream tasks for the two aforementioned models using experiments that take 20× less compute. Our experiments are available at https://github.com/mlfoundations/scaling. 23 authors · Mar 13, 2024
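  As a minimal sketch of the second step described above (relating perplexity to averaged downstream top-1 error), the snippet below fits a generic power law with an error floor using scipy. The functional form, the data points, and the fitted constants are all illustrative assumptions; the authors' actual parameterization and data are in the paper and the linked repository.

  ```python
  import numpy as np
  from scipy.optimize import curve_fit

  # Assumed (illustrative) power-law relation between perplexity and averaged
  # downstream top-1 error:
  #   err(ppl) ≈ a * ppl**b + c   (a, b, c fitted; c acts as an error floor)
  def err_from_ppl(ppl, a, b, c):
      return a * np.power(ppl, b) + c

  # Hypothetical (perplexity, top-1 error) pairs from small-compute runs; NOT real data.
  ppl = np.array([18.0, 14.0, 11.0, 9.0, 7.5, 6.5])
  err = np.array([0.70, 0.63, 0.57, 0.53, 0.50, 0.48])

  params, _ = curve_fit(err_from_ppl, ppl, err, p0=[0.05, 0.8, 0.3], maxfev=10000)
  a, b, c = params
  print(f"fit: err ≈ {a:.3f} * ppl^{b:.3f} + {c:.3f}")

  # Extrapolate to a perplexity one might predict (e.g., via a scaling-law fit)
  # for a larger, over-trained run.
  print(f"predicted top-1 error at ppl = 5.5: {err_from_ppl(5.5, *params):.3f}")
  ```

  In practice, such a relation would be fit on low-compute runs and then evaluated at the loss predicted by the scaling-law extrapolation for the larger target models, mirroring the two-step procedure the abstract describes.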