pploner eric-moreno committed on
Commit cd5e5a6 · verified · 1 Parent(s): 114dd2a

Upload COLLIDE-1m_streaming_classifier_tutorial.ipynb (#3)


- Upload COLLIDE-1m_streaming_classifier_tutorial.ipynb (db106de435c4ad0720add28db284f52497e00268)


Co-authored-by: Eric Moreno <[email protected]>

COLLIDE-1m_streaming_classifier_tutorial.ipynb ADDED
@@ -0,0 +1,640 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "d28f887e",
+ "metadata": {},
+ "source": [
+ "# COLLIDE-2V — Six‑Class Classifier\n",
+ "\n",
+ "This notebook streams the COLLIDE-1m dataset (a 1-million-event subset of COLLIDE-2V) from the Hugging Face Hub and builds a **fixed, physics‑aware feature vector** per event from **FullReco** variables only:\n",
+ "\n",
+ "**Per-event features**\n",
+ "- **Particles (PUPPI, top‑20 by pT):** for each particle we keep *(pT, η, φ, charge, mass, PID, PuppiW)* → 7 × 20 = **140**\n",
+ "- **Jets (AK4, top‑4 by pT):** *(pT, η, φ, mass, btag, charge)* → 6 × 4 = **24**\n",
+ "- **Leading leptons/photons:** \n",
+ " - Electron: *(pT, η, φ, EhadOverEem, IsoRhoCorr)* → **5** \n",
+ " - MuonTight: *(pT, η, φ, IsoRhoCorr)* → **4** \n",
+ " - PhotonTight: *(pT, η, φ)* → **3**\n",
+ "- **MET:** *(PUPPIMET_MET, PUPPIMET_φ, MET_MET, MET_φ)* → **4**\n",
+ "- **Primary Vertex:** *(Z, SumPT2 of best PV)* → **2**\n",
+ "- **Counts:** *(N_PUPPIPart, N_JetAK4)* → **2**\n",
+ "\n",
+ "Total vector length = **184**; the next code cell double-checks this arithmetic.\n",
+ "\n",
+ "We then train a tiny MLP classifier on **six classes** (one per family):\n",
+ "- DY: `DY to ll`\n",
+ "- QCD: `QCD inclusive`\n",
+ "- SingleHiggs: `VBFHtautau`\n",
+ "- top: `tt all-lept`\n",
+ "- diboson: `WZ (semi-leptonic)`\n",
+ "- diHiggs: `HH bbtautau`\n",
+ "\n"
+ ]
+ },
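+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b7e1c0aa",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sanity check of the per-block feature counts listed above (illustrative only;\n",
+ "# the block sizes simply mirror the bullet list in the previous markdown cell).\n",
+ "blocks = {\n",
+ " 'particles (7 x 20)': 7 * 20,\n",
+ " 'jets (6 x 4)': 6 * 4,\n",
+ " 'electron': 5,\n",
+ " 'muon': 4,\n",
+ " 'photon': 3,\n",
+ " 'MET': 4,\n",
+ " 'primary vertex': 2,\n",
+ " 'counts': 2,\n",
+ "}\n",
+ "assert sum(blocks.values()) == 184\n",
+ "print('total feature-vector length:', sum(blocks.values()))\n"
+ ]
+ },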
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "56d3eb16",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# If needed (Colab/Kaggle/etc.). Comment out if your env already has these.\n",
+ "%pip -q install datasets==2.21.0 huggingface_hub==0.24.6 fsspec==2024.6.1 pyarrow==16.1.0 torch --extra-index-url https://download.pytorch.org/whl/cpu\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e0301175",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Using device: cpu\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "<torch._C.Generator at 0x7f36c08b7af0>"
+ ]
+ },
+ "execution_count": 1,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from typing import List, Dict, Any, Tuple\n",
+ "import random\n",
+ "\n",
+ "import torch\n",
+ "from torch import nn\n",
+ "from torch.utils.data import DataLoader, IterableDataset as TorchIterable\n",
+ "\n",
+ "import pyarrow as pa\n",
+ "import pyarrow.parquet as pq\n",
+ "\n",
+ "from datasets import IterableDataset, interleave_datasets, Features, Sequence, Value, ClassLabel\n",
+ "from huggingface_hub import HfApi, HfFileSystem\n",
+ "\n",
+ "# ====== USER CONFIG ======\n",
+ "HF_REPO = \"fastmachinelearning/collide-1m\"\n",
+ "\n",
+ "SELECTED_6 = {\n",
+ " \"DY\": \"DY to ll\",\n",
+ " \"QCD\": \"QCD inclusive\",\n",
+ " \"SingleHiggs\": \"VBFHtautau\",\n",
+ " \"top\": \"tt all-lept\",\n",
+ " \"diboson\": \"WZ (semi-leptonic)\",\n",
+ " \"diHiggs\": \"HH bbtautau\",\n",
+ "}\n",
+ "\n",
+ "# Feature packing hyperparams\n",
+ "K_PART = 20 # keep the top-K PUPPI particles by pT\n",
+ "K_JET = 4 # keep the top-K AK4 jets by pT\n",
+ "\n",
+ "# Training config\n",
+ "TRAIN_PER_CLASS = 512\n",
+ "VAL_PER_CLASS = 100\n",
+ "BATCH_SIZE = 256\n",
+ "EPOCHS = 10\n",
+ "LR = 2e-3\n",
+ "SEED = 42\n",
+ "DEVICE = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
+ "print(f\"Using device: {DEVICE}\")\n",
+ "random.seed(SEED)\n",
+ "torch.manual_seed(SEED)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "3292efac",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Classes: ['DY', 'QCD', 'SingleHiggs', 'top', 'diboson', 'diHiggs']\n",
+ "Pretty per class: {'DY': 'DY to ll', 'QCD': 'QCD inclusive', 'SingleHiggs': 'VBFHtautau', 'top': 'tt all-lept', 'diboson': 'WZ (semi-leptonic)', 'diHiggs': 'HH bbtautau'}\n",
+ "Folder per class: {'DY': 'DYJetsToLL_13TeV-madgraphMLM-pythia8', 'QCD': 'QCD_HT50toInf', 'SingleHiggs': 'VBFHtautau', 'top': 'tt0123j_5f_ckm_LO_MLM_leptonic', 'diboson': 'WZ_semileptonic', 'diHiggs': 'HH_bbtautau'}\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Pretty process name -> dataset subfolder on the Hub\n",
+ "PROCESS_TO_FOLDER = {\n",
+ " # DY / Z / W\n",
+ " \"DY to ll\": \"DYJetsToLL_13TeV-madgraphMLM-pythia8\",\n",
+ " \"Z -> vv + jet\": \"ZJetsTovv_13TeV-madgraphMLM-pythia8\",\n",
+ " \"Z -> qq (uds)\": \"ZJetsToQQ_13TeV-madgraphMLM-pythia8\",\n",
+ " \"Z -> bb\": \"ZJetsTobb_13TeV-madgraphMLM-pythia8\",\n",
+ " \"Z -> cc\": \"ZJetsTocc_13TeV-madgraphMLM-pythia8\",\n",
+ " \"W -> lv\": \"WJetsToLNu_13TeV-madgraphMLM-pythia8\",\n",
+ " \"W -> qq\": \"WJetsToQQ_13TeV-madgraphMLM-pythia8\",\n",
+ " \"gamma\": \"gamma\",\n",
+ " \"gamma + V\": \"gamma_V\",\n",
+ " \"tri-gamma\": \"tri_gamma\",\n",
+ "\n",
+ " # QCD\n",
+ " \"QCD inclusive\": \"QCD_HT50toInf\",\n",
+ " \"QCD bb\": \"QCD_HT50tobb\",\n",
+ " \"Minbias / Soft QCD\": \"minbias\",\n",
+ "\n",
+ " # top\n",
+ " \"tt all-hadr\": \"tt0123j_5f_ckm_LO_MLM_hadronic\",\n",
+ " \"tt semi-lept\": \"tt0123j_5f_ckm_LO_MLM_semiLeptonic\",\n",
+ " \"tt all-lept\": \"tt0123j_5f_ckm_LO_MLM_leptonic\",\n",
+ " \"ttH incl\": \"ttH_incl\",\n",
+ " \"tttt\": \"tttt_incl\",\n",
+ " \"ttW incl\": \"ttW_incl\",\n",
+ " \"ttZ incl\": \"ttZ_incl\",\n",
+ "\n",
+ " # dibosons\n",
+ " \"WW (all-leptonic)\": \"WW_leptonic\",\n",
+ " \"WW (all-hadronic)\": \"WW_hadronic\",\n",
+ " \"WW (semi-leptonic)\": \"WW_semileptonic\",\n",
+ " \"WZ (all-leptonic)\": \"WZ_leptonic\",\n",
+ " \"WZ (all-hadronic)\": \"WZ_hadronic\",\n",
+ " \"WZ (semi-leptonic)\": \"WZ_semileptonic\",\n",
+ " \"ZZ (all-leptonic)\": \"ZZ_leptonic\",\n",
+ " \"ZZ (all-hadronic)\": \"ZZ_hadronic\",\n",
+ " \"ZZ (semi-leptonic)\": \"ZZ_semileptonic\",\n",
+ " \"VVV\": \"VVV_incl\",\n",
+ " \"VH incl\": \"VH_incl\",\n",
+ "\n",
+ " # single-Higgs\n",
+ " \"ggHbb\": \"ggHbb\",\n",
+ " \"ggHcc\": \"ggHcc\",\n",
+ " \"ggHgammagamma\": \"ggHgammagamma\",\n",
+ " \"ggHgluglu\": \"ggHgluglu\",\n",
+ " \"ggHtautau\": \"ggHtautau\",\n",
+ " \"ggHWW\": \"ggHWW\",\n",
+ " \"ggHZZ\": \"ggHZZ\",\n",
+ " \"VBFHbb\": \"VBFHbb\",\n",
+ " \"VBFHcc\": \"VBFHcc\",\n",
+ " \"VBFHgammagamma\": \"VBFHgammagamma\",\n",
+ " \"VBFHgluglu\": \"VBFHgluglu\",\n",
+ " \"VBFHtautau\": \"VBFHtautau\",\n",
+ " \"VBFHWW\": \"VBFHWW\",\n",
+ " \"VBFHZZ\": \"VBFHZZ\",\n",
+ "\n",
+ " # di-Higgs\n",
+ " \"HH 4b\": \"HH_4b\",\n",
+ " \"HH bbtautau\": \"HH_bbtautau\",\n",
+ " \"HH bbWW\": \"HH_bbWW\",\n",
+ " \"HH bbZZ\": \"HH_bbZZ\",\n",
+ " \"HH bbgammagamma\": \"HH_bbgammagamma\",\n",
+ "}\n",
+ "\n",
+ "CLASS_NAMES = list(SELECTED_6.keys())\n",
+ "PRETTY = {c: SELECTED_6[c] for c in CLASS_NAMES}\n",
+ "FOLDER = {c: PROCESS_TO_FOLDER[PRETTY[c]] for c in CLASS_NAMES}\n",
+ "LABELS = {c: i for i, c in enumerate(CLASS_NAMES)}\n",
+ "print(\"Classes:\", CLASS_NAMES)\n",
+ "print(\"Pretty per class:\", PRETTY)\n",
+ "print(\"Folder per class:\", FOLDER)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "2f3d1a00",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Columns to read (FullReco only)\n",
+ "PUPPI_PART_COLS = [\n",
+ " 'FullReco_PUPPIPart_PT','FullReco_PUPPIPart_Eta','FullReco_PUPPIPart_Phi',\n",
+ " 'FullReco_PUPPIPart_Charge','FullReco_PUPPIPart_Mass','FullReco_PUPPIPart_PID',\n",
+ " 'FullReco_PUPPIPart_PuppiW'\n",
+ "]\n",
+ "\n",
+ "JET_AK4_COLS = [\n",
+ " 'FullReco_JetAK4_PT','FullReco_JetAK4_Eta','FullReco_JetAK4_Phi',\n",
+ " 'FullReco_JetAK4_Mass','FullReco_JetAK4_BTag','FullReco_JetAK4_Charge'\n",
+ "]\n",
+ "\n",
+ "ELEC_COLS = [\n",
+ " 'FullReco_Electron_PT','FullReco_Electron_Eta','FullReco_Electron_Phi',\n",
+ " 'FullReco_Electron_EhadOverEem','FullReco_Electron_IsolationVarRhoCorr'\n",
+ "]\n",
+ "\n",
+ "MUON_COLS = [\n",
+ " 'FullReco_MuonTight_PT','FullReco_MuonTight_Eta','FullReco_MuonTight_Phi',\n",
+ " 'FullReco_MuonTight_IsolationVarRhoCorr'\n",
+ "]\n",
+ "\n",
+ "PHOT_COLS = [\n",
+ " 'FullReco_PhotonTight_PT','FullReco_PhotonTight_Eta','FullReco_PhotonTight_Phi'\n",
+ "]\n",
+ "\n",
+ "MET_COLS = [\n",
+ " 'FullReco_PUPPIMET_MET','FullReco_PUPPIMET_Phi',\n",
+ " 'FullReco_MET_MET','FullReco_MET_Phi'\n",
+ "]\n",
+ "\n",
+ "PV_COLS = [\n",
+ " 'FullReco_PrimaryVertex_Z','FullReco_PrimaryVertex_SumPT2'\n",
+ "]\n",
+ "\n",
+ "ALL_COLS = PUPPI_PART_COLS + JET_AK4_COLS + ELEC_COLS + MUON_COLS + PHOT_COLS + MET_COLS + PV_COLS\n",
+ "\n",
+ "# Fixed vector length: 7*K_PART + 6*K_JET + 5 + 4 + 3 + 4 + 2 + 2 = 184\n",
+ "VLEN = 7*K_PART + 6*K_JET + 5 + 4 + 3 + 4 + 2 + 2\n",
+ "assert VLEN == 184\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "7afc8005",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "api = HfApi()\n",
+ "fs = HfFileSystem()\n",
+ "\n",
+ "def list_repo_parquet_files(repo_id: str, subfolder: str) -> List[str]:\n",
+ " files = api.list_repo_files(repo_id=repo_id, repo_type='dataset')\n",
+ " prefix = f\"{subfolder.strip('/')}/\"\n",
+ " return [f for f in files if f.startswith(prefix) and f.endswith('.parquet')]\n",
+ "\n",
+ "def _safe_list(x):\n",
+ " if x is None:\n",
+ " return []\n",
+ " if isinstance(x, list):\n",
+ " return x\n",
+ " return [x]\n",
+ "\n",
+ "def _pack_topk_by_pt(pt, *others, k: int, fill: List[float]):\n",
+ " # Sort indices by pT (descending), keep the top k slots, zero-pad missing ones.\n",
+ " idx = sorted(range(len(pt)), key=lambda i: pt[i] if pt[i] is not None else -1.0, reverse=True)\n",
+ " out = []\n",
+ " for j in range(k):\n",
+ " if j < len(idx):\n",
+ " i = idx[j]\n",
+ " vals = [pt[i]] + [arr[i] if i < len(arr) else 0.0 for arr in others]\n",
+ " else:\n",
+ " vals = fill\n",
+ " out.extend([float(v if v is not None else 0.0) for v in vals])\n",
+ " return out\n",
+ "\n",
+ "def _pack_leading(vals: List[List[float]], fill: List[float]) -> List[float]:\n",
+ " # vals[0] must be the pT list; return the features of the highest-pT object.\n",
+ " if not vals or not vals[0]:\n",
+ " return fill\n",
+ " pt = vals[0]\n",
+ " i = max(range(len(pt)), key=lambda j: pt[j] if pt[j] is not None else -1.0)\n",
+ " chosen = [arr[i] if i < len(arr) else 0.0 for arr in vals]\n",
+ " return [float(v if v is not None else 0.0) for v in chosen]\n",
+ "\n",
+ "def _best_pv(z_list, sumpt2_list) -> Tuple[float, float]:\n",
+ " # Pick the vertex with the largest SumPT2.\n",
+ " if not sumpt2_list:\n",
+ " z = z_list[0] if z_list else 0.0\n",
+ " return float(z if z is not None else 0.0), 0.0\n",
+ " j = max(range(len(sumpt2_list)), key=lambda i: sumpt2_list[i] if sumpt2_list[i] is not None else -1.0)\n",
+ " z = z_list[j] if j < len(z_list) else 0.0\n",
+ " s = sumpt2_list[j]\n",
+ " return float(z if z is not None else 0.0), float(s if s is not None else 0.0)\n",
+ "\n",
+ "def build_vector(ev: Dict[str, Any]) -> List[float]:\n",
+ " # PUPPI particles\n",
+ " p_pt = _safe_list(ev.get('FullReco_PUPPIPart_PT'))\n",
+ " p_eta = _safe_list(ev.get('FullReco_PUPPIPart_Eta'))\n",
+ " p_phi = _safe_list(ev.get('FullReco_PUPPIPart_Phi'))\n",
+ " p_ch = _safe_list(ev.get('FullReco_PUPPIPart_Charge'))\n",
+ " p_m = _safe_list(ev.get('FullReco_PUPPIPart_Mass'))\n",
+ " p_pid = _safe_list(ev.get('FullReco_PUPPIPart_PID'))\n",
+ " p_w = _safe_list(ev.get('FullReco_PUPPIPart_PuppiW'))\n",
+ " part = _pack_topk_by_pt(p_pt, p_eta, p_phi, p_ch, p_m, p_pid, p_w, k=K_PART, fill=[0.0]*7)\n",
+ "\n",
+ " # AK4 jets\n",
+ " j_pt = _safe_list(ev.get('FullReco_JetAK4_PT'))\n",
+ " j_eta = _safe_list(ev.get('FullReco_JetAK4_Eta'))\n",
+ " j_phi = _safe_list(ev.get('FullReco_JetAK4_Phi'))\n",
+ " j_m = _safe_list(ev.get('FullReco_JetAK4_Mass'))\n",
+ " j_bt = _safe_list(ev.get('FullReco_JetAK4_BTag'))\n",
+ " j_ch = _safe_list(ev.get('FullReco_JetAK4_Charge'))\n",
+ " jets = _pack_topk_by_pt(j_pt, j_eta, j_phi, j_m, j_bt, j_ch, k=K_JET, fill=[0.0]*6)\n",
+ "\n",
+ " # Leading leptons/photons\n",
+ " e_pt = _safe_list(ev.get('FullReco_Electron_PT'))\n",
+ " e_eta = _safe_list(ev.get('FullReco_Electron_Eta'))\n",
+ " e_phi = _safe_list(ev.get('FullReco_Electron_Phi'))\n",
+ " e_hoe = _safe_list(ev.get('FullReco_Electron_EhadOverEem'))\n",
+ " e_iso = _safe_list(ev.get('FullReco_Electron_IsolationVarRhoCorr'))\n",
+ " elec = _pack_leading([e_pt, e_eta, e_phi, e_hoe, e_iso], fill=[0.0]*5)\n",
+ "\n",
+ " m_pt = _safe_list(ev.get('FullReco_MuonTight_PT'))\n",
+ " m_eta = _safe_list(ev.get('FullReco_MuonTight_Eta'))\n",
+ " m_phi = _safe_list(ev.get('FullReco_MuonTight_Phi'))\n",
+ " m_iso = _safe_list(ev.get('FullReco_MuonTight_IsolationVarRhoCorr'))\n",
+ " muon = _pack_leading([m_pt, m_eta, m_phi, m_iso], fill=[0.0]*4)\n",
+ "\n",
+ " g_pt = _safe_list(ev.get('FullReco_PhotonTight_PT'))\n",
+ " g_eta = _safe_list(ev.get('FullReco_PhotonTight_Eta'))\n",
+ " g_phi = _safe_list(ev.get('FullReco_PhotonTight_Phi'))\n",
+ " phot = _pack_leading([g_pt, g_eta, g_phi], fill=[0.0]*3)\n",
+ "\n",
+ " # MET\n",
+ " pmet = float(_safe_list(ev.get('FullReco_PUPPIMET_MET'))[0]) if _safe_list(ev.get('FullReco_PUPPIMET_MET')) else 0.0\n",
+ " pphi = float(_safe_list(ev.get('FullReco_PUPPIMET_Phi'))[0]) if _safe_list(ev.get('FullReco_PUPPIMET_Phi')) else 0.0\n",
+ " met = float(_safe_list(ev.get('FullReco_MET_MET'))[0]) if _safe_list(ev.get('FullReco_MET_MET')) else 0.0\n",
+ " mphi = float(_safe_list(ev.get('FullReco_MET_Phi'))[0]) if _safe_list(ev.get('FullReco_MET_Phi')) else 0.0\n",
+ "\n",
+ " # Primary vertex\n",
+ " pvz_list = _safe_list(ev.get('FullReco_PrimaryVertex_Z'))\n",
+ " pvsp2_list = _safe_list(ev.get('FullReco_PrimaryVertex_SumPT2'))\n",
+ " pvz, pvsp2 = _best_pv(pvz_list, pvsp2_list)\n",
+ "\n",
+ " # Counts\n",
+ " n_part = float(len(p_pt))\n",
+ " n_jet = float(len(j_pt))\n",
+ "\n",
+ " vec = part + jets + elec + muon + phot + [pmet, pphi, met, mphi] + [pvz, pvsp2] + [n_part, n_jet]\n",
+ " if len(vec) != VLEN:\n",
+ " if len(vec) < VLEN:\n",
+ " vec = vec + [0.0]*(VLEN-len(vec))\n",
+ " else:\n",
+ " vec = vec[:VLEN]\n",
+ " return vec\n",
+ "\n",
+ "def generate_examples(repo_id: str, process_folder: str, label_id: int,\n",
+ " per_class_limit: int, seed: int = 42):\n",
+ " # Stream parquet shards for one process, yielding fixed-length vectors with a label.\n",
+ " files = list_repo_parquet_files(repo_id, process_folder)\n",
+ " if not files:\n",
+ " raise RuntimeError(f\"No parquet under '{process_folder}' in {repo_id}\")\n",
+ " rng = random.Random(seed)\n",
+ " rng.shuffle(files)\n",
+ " emitted = 0\n",
+ " for rel in files:\n",
+ " path = f\"hf://datasets/{repo_id}/{rel}\"\n",
+ " with fs.open(path, 'rb') as fh:\n",
+ " pqf = pq.ParquetFile(fh)\n",
+ " for batch in pqf.iter_batches(columns=ALL_COLS):\n",
+ " tbl = pa.Table.from_batches([batch])\n",
+ " pyd = tbl.to_pydict()\n",
+ " n = tbl.num_rows\n",
+ " cols = list(pyd.keys())\n",
+ " for i in range(n):\n",
+ " ev = {k: pyd[k][i] for k in cols}\n",
+ " x = build_vector(ev)\n",
+ " yield {\"x\": x, \"label\": label_id}\n",
+ " emitted += 1\n",
+ " if emitted >= per_class_limit:\n",
+ " return\n"
+ ]
+ },
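+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0f1e2d3c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Offline smoke test of build_vector on a hand-made event (illustrative;\n",
+ "# the values below are synthetic and not drawn from the dataset).\n",
+ "_toy = {c: [] for c in ALL_COLS}\n",
+ "_toy.update({\n",
+ " 'FullReco_PUPPIPart_PT': [10.0, 50.0], 'FullReco_PUPPIPart_Eta': [0.1, -0.2],\n",
+ " 'FullReco_PUPPIPart_Phi': [1.0, 2.0], 'FullReco_PUPPIPart_Charge': [1.0, -1.0],\n",
+ " 'FullReco_PUPPIPart_Mass': [0.14, 0.14], 'FullReco_PUPPIPart_PID': [211.0, -211.0],\n",
+ " 'FullReco_PUPPIPart_PuppiW': [1.0, 0.9],\n",
+ " 'FullReco_PUPPIMET_MET': [42.0], 'FullReco_PUPPIMET_Phi': [0.3],\n",
+ "})\n",
+ "_v = build_vector(_toy)\n",
+ "assert len(_v) == VLEN\n",
+ "print('first particle slot (sorted by pT):', _v[:7])\n",
+ "print('vector length OK:', len(_v))\n"
+ ]
+ },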
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "8089bf8b",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "IterableDataset({\n",
+ " features: ['x', 'label'],\n",
+ " n_shards: 1\n",
+ "})\n",
+ "IterableDataset({\n",
+ " features: ['x', 'label'],\n",
+ " n_shards: 1\n",
+ "})\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Build datasets with a FIXED schema\n",
+ "features = Features({\n",
+ " 'x': Sequence(Value('float32'), length=VLEN),\n",
+ " 'label': ClassLabel(names=CLASS_NAMES),\n",
+ "})\n",
+ "\n",
+ "def make_split(repo_id: str, per_class: int, seed: int) -> IterableDataset:\n",
+ " parts = []\n",
+ " for cname in CLASS_NAMES:\n",
+ " ds = IterableDataset.from_generator(\n",
+ " generate_examples,\n",
+ " gen_kwargs=dict(\n",
+ " repo_id=repo_id,\n",
+ " process_folder=FOLDER[cname],\n",
+ " label_id=LABELS[cname],\n",
+ " per_class_limit=per_class,\n",
+ " seed=seed + LABELS[cname],\n",
+ " ),\n",
+ " features=features,\n",
+ " )\n",
+ " parts.append(ds)\n",
+ " # Note: interleave_datasets defaults to stopping_strategy='first_exhausted', so the\n",
+ " # mixed stream ends when the shortest per-class stream runs out; a split can\n",
+ " # therefore hold slightly fewer than 6*per_class events.\n",
+ " return interleave_datasets(parts, seed=seed)\n",
+ "\n",
+ "train_stream = make_split(HF_REPO, TRAIN_PER_CLASS, SEED)\n",
+ "val_stream = make_split(HF_REPO, VAL_PER_CLASS, SEED+1000)\n",
+ "print(train_stream)\n",
+ "print(val_stream)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "e283afd8",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Estimated mean/std for 184 features.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Compute per-feature mean/std for scaling (Welford's online algorithm)\n",
+ "def estimate_mean_std(hf_stream: IterableDataset, max_samples: int = 5000):\n",
+ " count = 0\n",
+ " mean = torch.zeros(VLEN)\n",
+ " M2 = torch.zeros(VLEN)\n",
+ " for ex in hf_stream.take(max_samples):\n",
+ " x = torch.tensor(ex['x'], dtype=torch.float32)\n",
+ " count += 1\n",
+ " delta = x - mean\n",
+ " mean += delta / max(count, 1)\n",
+ " delta2 = x - mean\n",
+ " M2 += delta * delta2\n",
+ " var = (M2 / max(count - 1, 1))\n",
+ " std = torch.sqrt(var + 1e-6)\n",
+ " return mean, std\n",
+ "\n",
+ "stats_stream = make_split(HF_REPO, per_class=min(256, TRAIN_PER_CLASS), seed=SEED+222)\n",
+ "MEAN, STD = estimate_mean_std(stats_stream, max_samples=512)\n",
+ "print(\"Estimated mean/std for\", len(MEAN), \"features.\")\n"
+ ]
+ },
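+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c4d5e6f7",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Offline cross-check of the Welford estimator against torch.mean/torch.std\n",
+ "# (illustrative; uses a synthetic in-memory stream, no Hub access needed).\n",
+ "_Xs = torch.randn(500, VLEN)\n",
+ "def _toy_rows():\n",
+ " for row in _Xs.tolist():\n",
+ " yield {'x': row, 'label': 0}\n",
+ "_toy_stream = IterableDataset.from_generator(_toy_rows, features=features)\n",
+ "_m, _s = estimate_mean_std(_toy_stream, max_samples=500)\n",
+ "print('max |mean diff|:', float((_m - _Xs.mean(0)).abs().max()))\n",
+ "print('max |std diff| :', float((_s - _Xs.std(0)).abs().max()))\n"
+ ]
+ },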
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "f31332e0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Thin wrappers: expose the HF stream to torch and standardize features per batch.\n",
+ "class HFToTorch(TorchIterable):\n",
+ " def __init__(self, hf_stream: IterableDataset):\n",
+ " self.hf_stream = hf_stream\n",
+ " def __iter__(self):\n",
+ " return iter(self.hf_stream)\n",
+ "\n",
+ "class CollateCLF:\n",
+ " # Applies (x - mean) / std using the streaming estimates from the previous cell.\n",
+ " def __init__(self, mean: torch.Tensor, std: torch.Tensor):\n",
+ " self.mean = mean\n",
+ " self.std = std\n",
+ " def __call__(self, batch: List[Dict[str, Any]]):\n",
+ " xs, ys = [], []\n",
+ " for ex in batch:\n",
+ " x = torch.tensor(ex['x'], dtype=torch.float32)\n",
+ " x = (x - self.mean) / self.std\n",
+ " y = int(ex['label'])\n",
+ " xs.append(x); ys.append(y)\n",
+ " return {\n",
+ " 'x': torch.stack(xs, dim=0),\n",
+ " 'y': torch.tensor(ys, dtype=torch.long),\n",
+ " }\n",
+ "\n",
+ "train_loader = DataLoader(HFToTorch(train_stream), batch_size=BATCH_SIZE, collate_fn=CollateCLF(MEAN, STD))\n"
+ ]
+ },
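+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d8e9f0a1",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Quick offline check of the collate function on two synthetic examples\n",
+ "# (illustrative; the labels and feature values here are made up).\n",
+ "_demo = CollateCLF(MEAN, STD)([\n",
+ " {'x': [0.0]*VLEN, 'label': 0},\n",
+ " {'x': [1.0]*VLEN, 'label': 3},\n",
+ "])\n",
+ "print(_demo['x'].shape, _demo['y'])\n"
+ ]
+ },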
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "26cacf8b",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Val set: torch.Size([595, 184]) torch.Size([595])\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Materialize validation once\n",
+ "Xv, Yv = [], []\n",
+ "for ex in val_stream:\n",
+ " Xv.append(((torch.tensor(ex['x']) - MEAN) / STD).unsqueeze(0))\n",
+ " Yv.append(int(ex['label']))\n",
+ "X_val = torch.cat(Xv, dim=0)\n",
+ "y_val = torch.tensor(Yv, dtype=torch.long)\n",
+ "print(\"Val set:\", X_val.shape, y_val.shape)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "ea94521c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "class TinyMLP(nn.Module):\n",
+ " def __init__(self, d=184, h=256, num_classes=6):\n",
+ " super().__init__()\n",
+ " self.net = nn.Sequential(\n",
+ " nn.Linear(d, h), nn.ReLU(),\n",
+ " nn.Linear(h, h//2), nn.ReLU(),\n",
+ " nn.Linear(h//2, num_classes),\n",
+ " )\n",
+ " def forward(self, x):\n",
+ " return self.net(x)\n",
+ "\n",
+ "model = TinyMLP(d=VLEN, h=256, num_classes=len(CLASS_NAMES)).to(DEVICE)\n",
+ "opt = torch.optim.AdamW(model.parameters(), lr=LR)\n",
+ "loss_fn = nn.CrossEntropyLoss()\n"
+ ]
+ },
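+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f2a3b4c5",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Parameter count of the MLP (illustrative sanity check: roughly 0.08M weights,\n",
+ "# i.e. 184*256 + 256*128 + 128*6 plus biases).\n",
+ "n_params = sum(p.numel() for p in model.parameters())\n",
+ "print(f'TinyMLP parameters: {n_params:,}')\n"
+ ]
+ },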
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "fec8c721",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[epoch 1] val acc: 49.24% | classes: ['DY', 'QCD', 'SingleHiggs', 'top', 'diboson', 'diHiggs']\n",
+ "epoch 2 step 20 | loss 0.4491\n",
+ "[epoch 2] val acc: 52.77% | classes: ['DY', 'QCD', 'SingleHiggs', 'top', 'diboson', 'diHiggs']\n",
+ "[epoch 3] val acc: 55.13% | classes: ['DY', 'QCD', 'SingleHiggs', 'top', 'diboson', 'diHiggs']\n"
+ ]
+ }
+ ],
+ "source": [
+ "def evaluate(model, X, y, batch=2048):\n",
+ " model.eval()\n",
+ " correct = 0\n",
+ " total = 0\n",
+ " with torch.no_grad():\n",
+ " for i in range(0, len(X), batch):\n",
+ " xb = X[i:i+batch].to(DEVICE)\n",
+ " yb = y[i:i+batch].to(DEVICE)\n",
+ " logits = model(xb)\n",
+ " pred = logits.argmax(dim=1)\n",
+ " correct += int((pred == yb).sum().item())\n",
+ " total += int(len(yb))\n",
+ " model.train()\n",
+ " return correct / max(total, 1)\n",
+ "\n",
+ "steps = 0\n",
+ "for epoch in range(1, EPOCHS+1):\n",
+ " running = 0.0\n",
+ " for batch in train_loader:\n",
+ " x = batch['x'].to(DEVICE, non_blocking=True)\n",
+ " y = batch['y'].to(DEVICE, non_blocking=True)\n",
+ "\n",
+ " logits = model(x)\n",
+ " loss = loss_fn(logits, y)\n",
+ " opt.zero_grad(set_to_none=True)\n",
+ " loss.backward()\n",
+ " opt.step()\n",
+ "\n",
+ " running += float(loss.item())\n",
+ " steps += 1\n",
+ " if steps % 20 == 0:\n",
+ " print(f\"epoch {epoch} step {steps} | loss {running/20:.4f}\")\n",
+ " running = 0.0\n",
+ "\n",
+ " acc = evaluate(model, X_val, y_val)\n",
+ " print(f\"[epoch {epoch}] val acc: {acc*100:.2f}% | classes: {CLASS_NAMES}\")\n",
+ "print(\"Training done.\")\n"
+ ]
+ },
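+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a9b8c7d6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Per-class validation accuracy (illustrative follow-up; reuses X_val/y_val\n",
+ "# from above rather than streaming fresh events).\n",
+ "model.eval()\n",
+ "with torch.no_grad():\n",
+ " preds = model(X_val.to(DEVICE)).argmax(dim=1).cpu()\n",
+ "for ci, cname in enumerate(CLASS_NAMES):\n",
+ " mask = (y_val == ci)\n",
+ " if int(mask.sum()) == 0:\n",
+ " continue\n",
+ " acc = float((preds[mask] == ci).float().mean())\n",
+ " print(f'{cname:12s} acc: {acc*100:5.1f}% (n={int(mask.sum())})')\n"
+ ]
+ }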
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "collide2v",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.18"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+ }