NodeManager:
Node ID: 49709ded25b009838cca283b77f0f8a63a6d0f1300f65be831971360
Node name: 192.168.0.2
InitialConfigResources: {node:192.168.0.2: 10000, memory: 856446742530000, node:__internal_head__: 10000, accelerator_type:A40: 10000, GPU: 20000, object_store_memory: 21474836480000, CPU: 200000}
ClusterTaskManager:
========== Node: 49709ded25b009838cca283b77f0f8a63a6d0f1300f65be831971360 =================
Infeasible queue length: 0
Schedule queue length: 0
Dispatch queue length: 0
num_waiting_for_resource: 0
num_waiting_for_plasma_memory: 0
num_waiting_for_remote_node_resources: 0
num_worker_not_started_by_job_config_not_exist: 0
num_worker_not_started_by_registration_timeout: 0
num_tasks_waiting_for_workers: 0
num_cancelled_tasks: 0
cluster_resource_scheduler state:
Local id: -8491188467377325818 Local resources: {"total":{node:__internal_head__: [10000], accelerator_type:A40: [10000], CPU: [200000], memory: [856446742530000], GPU: [10000, 10000], object_store_memory: [21474836480000], node:192.168.0.2: [10000]}}, "available": {node:__internal_head__: [10000], accelerator_type:A40: [10000], CPU: [200000], memory: [856446742530000], GPU: [10000, 10000], object_store_memory: [21474836480000], node:192.168.0.2: [10000]}}, "labels":{"ray.io/node_id":"49709ded25b009838cca283b77f0f8a63a6d0f1300f65be831971360",} is_draining: 0 is_idle: 1
Cluster resources: node id: -8491188467377325818{"total":{GPU: 20000, memory: 856446742530000, node:__internal_head__: 10000, accelerator_type:A40: 10000, node:192.168.0.2: 10000, object_store_memory: 21474836480000, CPU: 200000}}, "available": {GPU: 20000, memory: 856446742530000, node:__internal_head__: 10000, accelerator_type:A40: 10000, node:192.168.0.2: 10000, object_store_memory: 21474836480000, CPU: 200000}}, "labels":{"ray.io/node_id":"49709ded25b009838cca283b77f0f8a63a6d0f1300f65be831971360",}, "is_draining": 0, "draining_deadline_timestamp_ms": -1}
{ "placment group locations": [], "node to bundles": []}
Waiting tasks size: 0
Number of executing tasks: 0
Number of pinned task arguments: 0
Number of total spilled tasks: 0
Number of spilled waiting tasks: 0
Number of spilled unschedulable tasks: 0
Resource usage {
}
Backlog Size per scheduling descriptor :{workerId: num backlogs}:

Running tasks by scheduling class:
==================================================

ClusterResources:
LocalObjectManager:
- num pinned objects: 0
- pinned objects size: 0
- num objects pending restore: 0
- num objects pending spill: 0
- num bytes pending spill: 0
- num bytes currently spilled: 0
- cumulative spill requests: 0
- cumulative restore requests: 0
- spilled objects pending delete: 0

ObjectManager:
- num local objects: 0
- num unfulfilled push requests: 0
- num object pull requests: 0
- num chunks received total: 0
- num chunks received failed (all): 0
- num chunks received failed / cancelled: 0
- num chunks received failed / plasma error: 0
Event stats:
Global stats: 0 total (0 active)
Queueing time: mean = -nan s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
Execution time: mean = -nan s, total = 0.000 s
Event stats:
PushManager:
- num pushes in flight: 0
- num chunks in flight: 0
- num chunks remaining: 0
- max chunks allowed: 409
OwnershipBasedObjectDirectory:
- num listeners: 0
- cumulative location updates: 0
- num location updates per second: 0.000
- num location lookups per second: 0.000
- num locations added per second: 0.000
- num locations removed per second: 0.000
BufferPool:
- create buffer state map size: 0
PullManager:
- num bytes available for pulled objects: 2147483648
- num bytes being pulled (all): 0
- num bytes being pulled / pinned: 0
- get request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
- wait request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
- task request bundles: BundlePullRequestQueue{0 total, 0 active, 0 inactive, 0 unpullable}
- first get request bundle: N/A
- first wait request bundle: N/A
- first task request bundle: N/A
- num objects queued: 0
- num objects actively pulled (all): 0
- num objects actively pulled / pinned: 0
- num bundles being pulled: 0
- num pull retries: 0
- max timeout seconds: 0
- max timeout request is already processed. No entry.

WorkerPool:
- registered jobs: 0
- process_failed_job_config_missing: 0
- process_failed_rate_limited: 0
- process_failed_pending_registration: 0
- process_failed_runtime_env_setup_failed: 0
- num PYTHON workers: 0
- num PYTHON drivers: 0
- num PYTHON pending start requests: 0
- num PYTHON pending registration requests: 0
- num object spill callbacks queued: 0
- num object restore queued: 0
- num util functions queued: 0
- num idle workers: 0
TaskDependencyManager:
- task deps map size: 0
- get req map size: 0
- wait req map size: 0
- local objects map size: 0
WaitManager:
- num active wait requests: 0
Subscriber:
Channel WORKER_OBJECT_LOCATIONS_CHANNEL
- cumulative subscribe requests: 0
- cumulative unsubscribe requests: 0
- active subscribed publishers: 0
- cumulative published messages: 0
- cumulative processed messages: 0
Channel WORKER_REF_REMOVED_CHANNEL
- cumulative subscribe requests: 0
- cumulative unsubscribe requests: 0
- active subscribed publishers: 0
- cumulative published messages: 0
- cumulative processed messages: 0
Channel WORKER_OBJECT_EVICTION
- cumulative subscribe requests: 0
- cumulative unsubscribe requests: 0
- active subscribed publishers: 0
- cumulative published messages: 0
- cumulative processed messages: 0
num async plasma notifications: 0

Remote node managers:

Event stats:
Global stats: 23 total (13 active)
Queueing time: mean = 1.309 ms, max = 9.296 ms, min = 86.835 us, total = 30.105 ms
Execution time: mean = 44.570 ms, total = 1.025 s
Event stats:
PeriodicalRunner.RunFnPeriodically - 11 total (6 active, 1 running), Execution time: mean = 17.117 us, total = 188.290 us, Queueing time: mean = 2.706 ms, max = 9.296 ms, min = 1.247 ms, total = 29.770 ms
NodeManager.ScheduleAndDispatchTasks - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
NodeManager.GCTaskFailureReason - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
MemoryMonitor.CheckIsMemoryUsageAboveThreshold - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
ObjectManager.UpdateAvailableMemory - 1 total (0 active), Execution time: mean = 2.516 us, total = 2.516 us, Queueing time: mean = 146.311 us, max = 146.311 us, min = 146.311 us, total = 146.311 us
ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode.OnReplyReceived - 1 total (0 active), Execution time: mean = 314.492 us, total = 314.492 us, Queueing time: mean = 102.054 us, max = 102.054 us, min = 102.054 us, total = 102.054 us
ray::rpc::NodeInfoGcsService.grpc_client.RegisterNode - 1 total (0 active), Execution time: mean = 2.553 ms, total = 2.553 ms, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
RayletWorkerPool.deadline_timer.kill_idle_workers - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig.OnReplyReceived - 1 total (0 active), Execution time: mean = 1.020 s, total = 1.020 s, Queueing time: mean = 86.835 us, max = 86.835 us, min = 86.835 us, total = 86.835 us
ClusterResourceManager.ResetRemoteNodeView - 1 total (1 active), Execution time: mean = 0.000 s, total = 0.000 s, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
ray::rpc::InternalKVGcsService.grpc_client.GetInternalConfig - 1 total (0 active), Execution time: mean = 1.792 ms, total = 1.792 ms, Queueing time: mean = 0.000 s, max = -0.000 s, min = 9223372036.855 s, total = 0.000 s
DebugString() time ms: 1