[2025-01-15 18:18:59,631 I 544799 544799] (raylet) main.cc:180: Setting cluster ID to: a4d3b5fc7160307393534b8952c426e80423b036f7a37aa3929b322e
[2025-01-15 18:18:59,640 I 544799 544799] (raylet) main.cc:289: Raylet is not set to kill unknown children.
[2025-01-15 18:18:59,640 I 544799 544799] (raylet) io_service_pool.cc:35: IOServicePool is running with 1 io_service.
[2025-01-15 18:18:59,641 I 544799 544799] (raylet) main.cc:419: Setting node ID node_id=58d80ea2d267ecee5ffb7fe19b48e69ad93cbf27de2f37efe3848792
[2025-01-15 18:18:59,641 I 544799 544799] (raylet) store_runner.cc:32: Allowing the Plasma store to use up to 2.14748GB of memory.
[2025-01-15 18:18:59,641 I 544799 544799] (raylet) store_runner.cc:48: Starting object store with directory /dev/shm, fallback /tmp/ray, and huge page support disabled
[2025-01-15 18:18:59,642 I 544799 544828] (raylet) dlmalloc.cc:154: create_and_mmap_buffer(2147483656, /dev/shm/plasmaXXXXXX)
[2025-01-15 18:18:59,643 I 544799 544828] (raylet) store.cc:564: Plasma store debug dump: 
Current usage: 0 / 2.14748 GB
- num bytes created total: 0
0 pending objects of total size 0MB
- objects spillable: 0
- bytes spillable: 0
- objects unsealed: 0
- bytes unsealed: 0
- objects in use: 0
- bytes in use: 0
- objects evictable: 0
- bytes evictable: 0

- objects created by worker: 0
- bytes created by worker: 0
- objects restored: 0
- bytes restored: 0
- objects received: 0
- bytes received: 0
- objects errored: 0
- bytes errored: 0

[2025-01-15 18:19:00,646 I 544799 544799] (raylet) grpc_server.cc:134: ObjectManager server started, listening on port 36739.
[2025-01-15 18:19:00,648 I 544799 544799] (raylet) worker_killing_policy.cc:101: Running GroupByOwner policy.
[2025-01-15 18:19:00,648 I 544799 544799] (raylet) memory_monitor.cc:47: MemoryMonitor initialized with usage threshold at 94999994368 bytes (0.95 system memory), total system memory bytes: 99999997952
[2025-01-15 18:19:00,648 I 544799 544799] (raylet) node_manager.cc:287: Initializing NodeManager node_id=58d80ea2d267ecee5ffb7fe19b48e69ad93cbf27de2f37efe3848792
[2025-01-15 18:19:00,649 I 544799 544799] (raylet) grpc_server.cc:134: NodeManager server started, listening on port 44719.
[2025-01-15 18:19:00,656 I 544799 544893] (raylet) agent_manager.cc:77: Monitor agent process with name dashboard_agent/424238335
[2025-01-15 18:19:00,657 I 544799 544895] (raylet) agent_manager.cc:77: Monitor agent process with name runtime_env_agent
[2025-01-15 18:19:00,657 I 544799 544799] (raylet) event.cc:493: Ray Event initialized for RAYLET
[2025-01-15 18:19:00,657 I 544799 544799] (raylet) event.cc:324: Set ray event level to warning
[2025-01-15 18:19:00,660 I 544799 544799] (raylet) raylet.cc:134: Raylet of id, 58d80ea2d267ecee5ffb7fe19b48e69ad93cbf27de2f37efe3848792 started. Raylet consists of node_manager and object_manager. node_manager address: 192.168.0.2:44719 object_manager address: 192.168.0.2:36739 hostname: 0cd925b1f73b
[2025-01-15 18:19:00,663 I 544799 544799] (raylet) node_manager.cc:525: [state-dump] NodeManager:
[state-dump] Node ID: 58d80ea2d267ecee5ffb7fe19b48e69ad93cbf27de2f37efe3848792
[state-dump] Node name: 192.168.0.2
[state-dump] InitialConfigResources: {memory: 853957464070000, CPU: 200000, GPU: 20000, accelerator_type:A40: 10000, node:192.168.0.2: 10000, object_store_memory: 21474836480000, node:__internal_head__: 10000}
[state-dump] ClusterTaskManager:
[state-dump] ========== Node: 58d80ea2d267ecee5ffb7fe19b48e69ad93cbf27de2f37efe3848792 =================
[state-dump] Infeasible queue length: 0
[state-dump] Schedule queue length: 0
[state-dump] Dispatch queue length: 0
[state-dump] num_waiting_for_resource: 0
[state-dump] num_waiting_for_plasma_memory: 0
[state-dump] num_waiting_for_remote_node_resources: 0
[state-dump] num_worker_not_started_by_job_config_not_exist: 0
[state-dump] num_worker_not_started_by_registration_timeout: 0
[state-dump] num_tasks_waiting_for_workers: 0
[state-dump] num_cancelled_tasks: 0
[state-dump] cluster_resource_scheduler state: 
[state-dump] Local id: 4176580051252218132 Local resources: {"total":{GPU: [10000, 10000], node:__internal_head__: [10000], memory: [853957464070000], object_store_memory: [21474836480000], node:192.168.0.2: [10000], accelerator_type:A40: [10000], CPU: [200000]}}, "available": {GPU: [10000, 10000], node:__internal_head__: [10000], memory: [853957464070000], object_store_memory: [21474836480000], node:192.168.0.2: [10000], accelerator_type:A40: [10000], CPU: [200000]}}, "labels":{"ray.io/node_id":"58d80ea2d267ecee5ffb7fe19b48e69ad93cbf27de2f37efe3848792",} is_draining: 0 is_idle: 1 Cluster resources: node id: 4176580051252218132{"total":{accelerator_type:A40: 10000, GPU: 20000, object_store_memory: 21474836480000, CPU: 200000, node:__internal_head__: 10000, memory: 853957464070000, node:192.168.0.2: 10000}}, "available": {accelerator_type:A40: 10000, GPU: 20000, object_store_memory: 21474836480000, CPU: 200000, node:__internal_head__: 10000, memory: 853957464070000, node:192.168.0.2: 10000}}, "labels":{"ray.io/node_id":"58d80ea2d267ecee5ffb7fe19b48e69ad93cbf27de2f37efe3848792",}, "is_draining": 0, "draining_deadline_timestamp_ms": -1} { "placment group locations": [], "node to bundles": []}
[state-dump] Waiting tasks size: 0
[state-dump] Number of executing tasks: 0
[state-dump] Number of pinned task arguments: 0
[state-dump] Number of total spilled tasks: 0
[state-dump] Number of spilled waiting tasks: 0
[state-dump] Number of spilled unschedulable tasks: 0
[state-dump] Resource usage {
[state-dump] }
[state-dump] Backlog Size per scheduling descriptor :{workerId: num backlogs}:
[state-dump] 
[state-dump] Running tasks by scheduling class:
[state-dump] ==================================================
[state-dump] 
[state-dump] ClusterResources:
[state-dump] LocalObjectManager:
[state-dump] - num pinned objects: 0
[state-dump] - pinned objects size: 0
[state-dump] - num objects pending restore: 0
[state-dump] - num objects pending spill: 0
[state-dump] - num bytes pending spill: 0
[state-dump] - num bytes currently spilled: 0
[state-dump] - cumulative spill requests: 0
[state-dump] - cumulative restore requests: 0
[state-dump] - spilled objects pending delete: 0
[state-dump] 
[state-dump] ObjectManager:
[state-dump] - num local objects: 0
[state-dump] - num unfulfilled push requests: 0
[state-dump] - num object pull requests: 0
[state-dump] - num chunks received total: 0
[state-dump] - num chunks received failed (all): 0
[state-dump] - num chunks received failed / cancelled: 0
[state-dump] - num chunks received failed / plasma error: 0
[state-dump] Event stats:
[state-dump] Global stats: 0 total (0 active)
[state-dump] Queueing time: mean = -nan s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] Execution time:  mean = -nan s, total = 0.000 s
[state-dump] Event stats:
[state-dump] PushManager:
[state-dump] - num pushes in flight: 0
[state-dump] - num chunks in flight: 0
[state-dump] - num chunks remaining: 0
[state-dump] - max chunks allowed: 409
[state-dump] OwnershipBasedObjectDirectory:
[state-dump] - num listeners: 0
[state-dump] - cumulative location updates: 0
[state-dump] - num location updates per second: 69915709415132000.000
[state-dump] - num location lookups per second: 69915709415120000.000
[state-dump] - num locations added per second: 0.000
[state-dump] - num locations removed per second: 0.000
[state-dump] BufferPool:
[state-dump] - create buffer state map size: 0
[state-dump] PullManager:
[state-dump] - num bytes available for pulled objects: 2147483648
[state-dump] - num bytes being pulled (all): 0
[state-dump] - num bytes being pulled / pinned: 0
[state-dump] - get request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
[state-dump] - wait request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
[state-dump] - task request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
[state-dump] - first get request bundle: N/A
[state-dump] - first wait request bundle: N/A
[state-dump] - first task request bundle: N/A
[state-dump] - num objects queued: 0
[state-dump] - num objects actively pulled (all): 0
[state-dump] - num objects actively pulled / pinned: 0
[state-dump] - num bundles being pulled: 0
[state-dump] - num pull retries: 0
[state-dump] - max timeout seconds: 0
[state-dump] - max timeout request is already processed. No entry.
[state-dump] 
[state-dump] WorkerPool:
[state-dump] - registered jobs: 0
[state-dump] - process_failed_job_config_missing: 0
[state-dump] - process_failed_rate_limited: 0
[state-dump] - process_failed_pending_registration: 0
[state-dump] - process_failed_runtime_env_setup_failed: 0
[state-dump] - num PYTHON workers: 0
[state-dump] - num PYTHON drivers: 0
[state-dump] - num PYTHON pending start requests: 0
[state-dump] - num PYTHON pending registration requests: 0
[state-dump] - num object spill callbacks queued: 0
[state-dump] - num object restore queued: 0
[state-dump] - num util functions queued: 0
[state-dump] - num idle workers: 0
[state-dump] TaskDependencyManager:
[state-dump] - task deps map size: 0
[state-dump] - get req map size: 0
[state-dump] - wait req map size: 0
[state-dump] - local objects map size: 0
[state-dump] WaitManager:
[state-dump] - num active wait requests: 0
[state-dump] Subscriber:
[state-dump] Channel WORKER_OBJECT_EVICTION
[state-dump] - cumulative subscribe requests: 0
[state-dump] - cumulative unsubscribe requests: 0
[state-dump] - active subscribed publishers: 0
[state-dump] - cumulative published messages: 0
[state-dump] - cumulative processed messages: 0
[state-dump] Channel WORKER_REF_REMOVED_CHANNEL
[state-dump] - cumulative subscribe requests: 0
[state-dump] - cumulative unsubscribe requests: 0
[state-dump] - active subscribed publishers: 0
[state-dump] - cumulative published messages: 0
[state-dump] - cumulative processed messages: 0
[state-dump] Channel WORKER_OBJECT_LOCATIONS_CHANNEL
[state-dump] - cumulative subscribe requests: 0
[state-dump] - cumulative unsubscribe requests: 0
[state-dump] - active subscribed publishers: 0
[state-dump] - cumulative published messages: 0
[state-dump] - cumulative processed messages: 0
[state-dump] num async plasma notifications: 0
[state-dump] Remote node managers: 
[state-dump] Event stats:
[state-dump] Global stats: 28 total (13 active)
[state-dump] Queueing time: mean = 1.342 ms, max = 9.572 ms, min = 28.030 us, total = 37.578 ms
[state-dump] Execution time:  mean = 36.656 ms, total = 1.026 s
[state-dump] Event stats:
[state-dump] 	PeriodicalRunner.RunFnPeriodically - 11 total (2 active, 1 running), Execution time: mean = 227.555 us, total = 2.503 ms, Queueing time: mean = 3.400 ms, max = 9.572 ms, min = 41.309 us, total = 37.398 ms
[state-dump] 	ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode.OnReplyReceived - 1 total (0 active), Execution time: mean = 296.305 us, total = 296.305 us, Queueing time: mean = 34.539 us, max = 34.539 us, min = 34.539 us, total = 34.539 us
[state-dump] 	ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	RayletWorkerPool.deadline_timer.kill_idle_workers - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (0 active), Execution time: mean = 1.477 ms, total = 1.477 ms, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	ObjectManager.UpdateAvailableMemory - 1 total (0 active), Execution time: mean = 3.894 us, total = 3.894 us, Queueing time: mean = 28.030 us, max = 28.030 us, min = 28.030 us, total = 28.030 us
[state-dump] 	ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch.OnReplyReceived - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	NodeManager.deadline_timer.flush_free_objects - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	MemoryMonitor.CheckIsMemoryUsageAboveThreshold - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	NodeManager.deadline_timer.spill_objects_when_over_threshold - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	NodeManager.GCTaskFailureReason - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	NodeManager.deadline_timer.record_metrics - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig - 1 total (0 active), Execution time: mean = 1.938 ms, total = 1.938 ms, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig.OnReplyReceived - 1 total (0 active), Execution time: mean = 1.018 s, total = 1.018 s, Queueing time: mean = 118.007 us, max = 118.007 us, min = 118.007 us, total = 118.007 us
[state-dump] 	ClusterResourceManager.ResetRemoteNodeView - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	NodeManager.deadline_timer.debug_state_dump - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode - 1 total (0 active), Execution time: mean = 2.489 ms, total = 2.489 ms, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] 	NodeManager.ScheduleAndDispatchTasks - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
[state-dump] DebugString() time ms: 0
[state-dump] 
[state-dump] 
[2025-01-15 18:19:00,665 I 544799 544799] (raylet) accessor.cc:762: Received notification for node, IsAlive = 1 node_id=58d80ea2d267ecee5ffb7fe19b48e69ad93cbf27de2f37efe3848792
[2025-01-15 18:19:00,732 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544932, the token is 0
[2025-01-15 18:19:00,736 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544933, the token is 1
[2025-01-15 18:19:00,738 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544934, the token is 2
[2025-01-15 18:19:00,740 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544935, the token is 3
[2025-01-15 18:19:00,743 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544936, the token is 4
[2025-01-15 18:19:00,746 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544937, the token is 5
[2025-01-15 18:19:00,748 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544938, the token is 6
[2025-01-15 18:19:00,750 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544939, the token is 7
[2025-01-15 18:19:00,752 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544940, the token is 8
[2025-01-15 18:19:00,755 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544941, the token is 9
[2025-01-15 18:19:00,757 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544942, the token is 10
[2025-01-15 18:19:00,759 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544943, the token is 11
[2025-01-15 18:19:00,762 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544944, the token is 12
[2025-01-15 18:19:00,764 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544945, the token is 13
[2025-01-15 18:19:00,767 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544946, the token is 14
[2025-01-15 18:19:00,770 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544947, the token is 15
[2025-01-15 18:19:00,772 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544948, the token is 16
[2025-01-15 18:19:00,775 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544949, the token is 17
[2025-01-15 18:19:00,777 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544950, the token is 18
[2025-01-15 18:19:00,781 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 544951, the token is 19
[2025-01-15 18:19:01,457 I 544799 544828] (raylet) object_store.cc:35: Object store current usage 8e-09 / 2.14748 GB.
[2025-01-15 18:19:01,596 I 544799 544799] (raylet) worker_pool.cc:692: Job 01000000 already started in worker pool.
[2025-01-15 18:19:02,405 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=3, has creation task exception = false
[2025-01-15 18:19:02,696 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,696 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,697 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,697 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,697 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,697 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,698 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,698 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,698 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,705 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,706 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,707 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,707 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,707 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,708 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,708 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,708 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:02,709 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=1, has creation task exception = false
[2025-01-15 18:19:03,093 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=3, has creation task exception = false
[2025-01-15 18:19:03,102 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 546631, the token is 20
[2025-01-15 18:19:04,370 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=3, has creation task exception = false
[2025-01-15 18:19:04,380 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 546732, the token is 21
[2025-01-15 18:19:05,597 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=3, has creation task exception = false
[2025-01-15 18:19:05,607 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 546833, the token is 22
[2025-01-15 18:19:06,780 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=3, has creation task exception = false
[2025-01-15 18:19:06,789 I 544799 544799] (raylet) worker_pool.cc:501: Started worker process with pid 546934, the token is 23
[2025-01-15 18:19:07,960 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=3, has creation task exception = false
[2025-01-15 18:19:07,985 I 544799 544799] (raylet) node_manager.cc:1481: NodeManager::DisconnectClient, disconnect_type=3, has creation task exception = false
[2025-01-15 18:19:07,986 I 544799 544799] (raylet) node_manager.cc:1586: Driver (pid=544535) is disconnected. worker_id=01000000ffffffffffffffffffffffffffffffffffffffffffffffff job_id=01000000
[2025-01-15 18:19:07,988 I 544799 544799] (raylet) worker_pool.cc:692: Job 01000000 already started in worker pool.
[2025-01-15 18:19:08,136 I 544799 544799] (raylet) main.cc:454: received SIGTERM. Existing local drain request = None
[2025-01-15 18:19:08,136 I 544799 544799] (raylet) main.cc:255: Raylet graceful shutdown triggered, reason = EXPECTED_TERMINATION, reason message = received SIGTERM
[2025-01-15 18:19:08,136 I 544799 544799] (raylet) main.cc:258: Shutting down...
[2025-01-15 18:19:08,136 I 544799 544799] (raylet) accessor.cc:510: Unregistering node node_id=58d80ea2d267ecee5ffb7fe19b48e69ad93cbf27de2f37efe3848792
[2025-01-15 18:19:08,138 I 544799 544799] (raylet) accessor.cc:523: Finished unregistering node info, status = OK node_id=58d80ea2d267ecee5ffb7fe19b48e69ad93cbf27de2f37efe3848792
[2025-01-15 18:19:08,141 I 544799 544799] (raylet) agent_manager.cc:112: Killing agent dashboard_agent/424238335, pid 544892.
[2025-01-15 18:19:08,151 I 544799 544893] (raylet) agent_manager.cc:79: Agent process with name dashboard_agent/424238335 exited, exit code 0.
[2025-01-15 18:19:08,151 I 544799 544799] (raylet) agent_manager.cc:112: Killing agent runtime_env_agent, pid 544894.
[2025-01-15 18:19:08,158 I 544799 544895] (raylet) agent_manager.cc:79: Agent process with name runtime_env_agent exited, exit code 0.
[2025-01-15 18:19:08,158 I 544799 544799] (raylet) io_service_pool.cc:47: IOServicePool is stopped.
[2025-01-15 18:19:08,351 I 544799 544799] (raylet) stats.h:120: Stats module has shutdown.