randomath committed on
Commit 0f8bc42 · verified · 1 Parent(s): 5a8b5f4

Add files using upload-large-folder tool

dashboard.log ADDED
@@ -0,0 +1,89 @@
+ 2026-02-27 00:30:27,260 INFO utils.py:307 -- Get all modules by type: DashboardHeadModule
+ 2026-02-27 00:30:27,960 INFO utils.py:340 -- Available modules: [<class 'ray.dashboard.modules.usage_stats.usage_stats_head.UsageStatsHead'>]
+ 2026-02-27 00:30:27,961 INFO head.py:235 -- DashboardHeadModules to load: None.
+ 2026-02-27 00:30:27,961 INFO head.py:238 -- Loading DashboardHeadModule: <class 'ray.dashboard.modules.usage_stats.usage_stats_head.UsageStatsHead'>.
+ 2026-02-27 00:30:27,961 INFO head.py:242 -- Loaded 1 dashboard head modules: [<ray.dashboard.modules.usage_stats.usage_stats_head.UsageStatsHead object at 0x79f6ffa88fe0>].
+ 2026-02-27 00:30:27,961 INFO utils.py:307 -- Get all modules by type: SubprocessModule
+ 2026-02-27 00:30:27,964 INFO utils.py:340 -- Available modules: [<class 'ray.dashboard.modules.metrics.metrics_head.MetricsHead'>, <class 'ray.dashboard.modules.data.data_head.DataHead'>, <class 'ray.dashboard.modules.event.event_head.EventHead'>, <class 'ray.dashboard.modules.job.job_head.JobHead'>, <class 'ray.dashboard.modules.node.node_head.NodeHead'>, <class 'ray.dashboard.modules.reporter.reporter_head.ReportHead'>, <class 'ray.dashboard.modules.serve.serve_head.ServeHead'>, <class 'ray.dashboard.modules.state.state_head.StateHead'>, <class 'ray.dashboard.modules.train.train_head.TrainHead'>]
+ 2026-02-27 00:30:27,964 INFO head.py:292 -- Loading SubprocessModule: <class 'ray.dashboard.modules.metrics.metrics_head.MetricsHead'>.
+ 2026-02-27 00:30:27,965 INFO head.py:292 -- Loading SubprocessModule: <class 'ray.dashboard.modules.data.data_head.DataHead'>.
+ 2026-02-27 00:30:27,965 INFO head.py:292 -- Loading SubprocessModule: <class 'ray.dashboard.modules.event.event_head.EventHead'>.
+ 2026-02-27 00:30:27,965 INFO head.py:292 -- Loading SubprocessModule: <class 'ray.dashboard.modules.job.job_head.JobHead'>.
+ 2026-02-27 00:30:27,965 INFO head.py:292 -- Loading SubprocessModule: <class 'ray.dashboard.modules.node.node_head.NodeHead'>.
+ 2026-02-27 00:30:27,965 INFO head.py:292 -- Loading SubprocessModule: <class 'ray.dashboard.modules.reporter.reporter_head.ReportHead'>.
+ 2026-02-27 00:30:27,965 INFO head.py:292 -- Loading SubprocessModule: <class 'ray.dashboard.modules.serve.serve_head.ServeHead'>.
+ 2026-02-27 00:30:27,965 INFO head.py:292 -- Loading SubprocessModule: <class 'ray.dashboard.modules.state.state_head.StateHead'>.
+ 2026-02-27 00:30:27,965 INFO head.py:292 -- Loading SubprocessModule: <class 'ray.dashboard.modules.train.train_head.TrainHead'>.
+ 2026-02-27 00:30:27,965 INFO head.py:296 -- Loaded 9 subprocess modules: [<ray.dashboard.subprocesses.handle.SubprocessModuleHandle object at 0x79f6f99b4680>, <ray.dashboard.subprocesses.handle.SubprocessModuleHandle object at 0x79f6f99b4740>, <ray.dashboard.subprocesses.handle.SubprocessModuleHandle object at 0x79f6f99b46e0>, <ray.dashboard.subprocesses.handle.SubprocessModuleHandle object at 0x79f6f9909160>, <ray.dashboard.subprocesses.handle.SubprocessModuleHandle object at 0x79f6f99096d0>, <ray.dashboard.subprocesses.handle.SubprocessModuleHandle object at 0x79f6f9909460>, <ray.dashboard.subprocesses.handle.SubprocessModuleHandle object at 0x79f6f99095e0>, <ray.dashboard.subprocesses.handle.SubprocessModuleHandle object at 0x79f6f6d44590>, <ray.dashboard.subprocesses.handle.SubprocessModuleHandle object at 0x79f6f6d44b30>].
+ 2026-02-27 00:30:30,183 INFO head.py:311 -- Starting dashboard metrics server on port 44227
+ 2026-02-27 00:30:30,188 INFO head.py:435 -- Initialize the http server.
+ 2026-02-27 00:30:30,190 INFO http_server_head.py:111 -- Setup static dir for dashboard: /usr/local/lib/python3.12/dist-packages/ray/dashboard/client/build
+ 2026-02-27 00:30:30,194 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+ 2026-02-27 00:30:30,223 INFO http_server_head.py:440 -- Dashboard head http address: 127.0.0.1:8265
+ 2026-02-27 00:30:30,223 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /usage_stats_enabled> -> <function UsageStatsHead.get_usage_stats_enabled at 0x79f6f27893a0>
+ 2026-02-27 00:30:30,223 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /cluster_id> -> <function UsageStatsHead.get_cluster_id at 0x79f6f27894e0>
+ 2026-02-27 00:30:30,223 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /> -> <function HttpServerDashboardHead.get_index at 0x79f6f27c8a40>
+ 2026-02-27 00:30:30,223 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /favicon.ico> -> <function HttpServerDashboardHead.get_favicon at 0x79f6f27c8b80>
+ 2026-02-27 00:30:30,223 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /timezone> -> <function HttpServerDashboardHead.get_timezone at 0x79f6f27c8cc0>
+ 2026-02-27 00:30:30,223 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/authentication_mode> -> <function HttpServerDashboardHead.get_authentication_mode at 0x79f6f27c8e00>
+ 2026-02-27 00:30:30,223 INFO http_server_head.py:447 -- <ResourceRoute [POST] <PlainResource /api/authenticate> -> <function HttpServerDashboardHead.authenticate at 0x79f6f27c8f40>
+ 2026-02-27 00:30:30,223 INFO http_server_head.py:447 -- <ResourceRoute [GET] <StaticResource /static -> PosixPath('/usr/local/lib/python3.12/dist-packages/ray/dashboard/client/build/static')> -> <bound method StaticResource._handle of <StaticResource /static -> PosixPath('/usr/local/lib/python3.12/dist-packages/ray/dashboard/client/build/static')>>
+ 2026-02-27 00:30:30,223 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/grafana_health> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f6df2fc0>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/prometheus_health> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f6df3100>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [GET] <DynamicResource /api/data/datasets/{job_id}> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f6df34c0>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [POST] <PlainResource /report_events> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f3fc8400>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /events> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f3fc85e0>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/cluster_events> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f3fc8860>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/version> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f3feec00>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [GET] <DynamicResource /api/packages/{protocol}/{package_name}> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f3feede0>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [PUT] <DynamicResource /api/packages/{protocol}/{package_name}> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f3fef060>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [POST] <PlainResource /api/jobs/> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f3fef240>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [POST] <DynamicResource /api/jobs/{job_or_submission_id}/stop> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f3fef380>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [DELETE] <DynamicResource /api/jobs/{job_or_submission_id}> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f3fef4c0>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [GET] <DynamicResource /api/jobs/{job_or_submission_id}> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f3fef600>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/jobs/> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f3fef740>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [GET] <DynamicResource /api/jobs/{job_or_submission_id}/logs> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f3fef880>
+ 2026-02-27 00:30:30,224 INFO http_server_head.py:447 -- <ResourceRoute [GET] <DynamicResource /api/jobs/{job_or_submission_id}/logs/tail> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f3fef9c0>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/component_activities> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f3fefba0>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /nodes> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f31bca40>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <DynamicResource /nodes/{node_id}> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f31bcc20>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /logical/actors> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f31bd300>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <DynamicResource /logical/actors/{actor_id}> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f31bd4e0>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /test/dump> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f31bd6c0>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/cluster_metadata> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2753880>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/cluster_status> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f27539c0>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /task/traceback> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2753c40>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /task/cpu_profile> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2753d80>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /worker/traceback> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2753ec0>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /worker/cpu_profile> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2778040>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /worker/gpu_profile> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2778180>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /memory_profile> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f27782c0>
+ 2026-02-27 00:30:30,225 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/gcs_healthz> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2778400>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/actors/kill> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2778540>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/prometheus/sd> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2778680>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/ray/version> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2779260>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/serve/applications/> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2779300>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [DELETE] <PlainResource /api/serve/applications/> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2779580>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [PUT] <PlainResource /api/serve/applications/> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2779760>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [POST] <DynamicResource /api/v1/applications/{application_name}/deployments/{deployment_name}/scale> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2779a80>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/actors> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277a3e0>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/jobs> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277a5c0>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/nodes> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277a7a0>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/placement_groups> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277a980>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/workers> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277ab60>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/tasks> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277ad40>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/objects> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277af20>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/runtime_envs> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277b100>
+ 2026-02-27 00:30:30,226 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/logs> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277b2e0>
+ 2026-02-27 00:30:30,227 INFO http_server_head.py:447 -- <ResourceRoute [GET] <DynamicResource /api/v0/logs/{media_type}> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277b4c0>
+ 2026-02-27 00:30:30,227 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/tasks/summarize> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277b6a0>
+ 2026-02-27 00:30:30,227 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/actors/summarize> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277b880>
+ 2026-02-27 00:30:30,227 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/objects/summarize> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277ba60>
+ 2026-02-27 00:30:30,227 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/v0/tasks/timeline> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277bc40>
+ 2026-02-27 00:30:30,227 INFO http_server_head.py:447 -- <ResourceRoute [GET] <DynamicResource /api/v0/delay/{delay_s}> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f277bd80>
+ 2026-02-27 00:30:30,227 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/train/v2/runs/v1> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f27885e0>
+ 2026-02-27 00:30:30,227 INFO http_server_head.py:447 -- <ResourceRoute [GET] <PlainResource /api/train/v2/runs> -> <function SubprocessRouteTable._register_route.<locals>._wrapper.<locals>.parent_side_handler at 0x79f6f2788a40>
+ 2026-02-27 00:30:30,227 INFO http_server_head.py:448 -- Registered 63 routes.
+ 2026-02-27 00:30:30,227 INFO head.py:440 -- http server initialized at 127.0.0.1:8265
+ 2026-02-27 00:30:30,236 INFO usage_stats_head.py:200 -- Usage reporting is disabled.
+ 2026-02-27 00:32:13,993 WARNING dashboard.py:285 -- Exiting with SIGTERM immediately...
dashboard_EventHead.out ADDED
File without changes
dashboard_ReportHead.log ADDED
@@ -0,0 +1,5 @@
+ 2026-02-27 00:30:30,167 INFO module.py:210 -- Starting module ReportHead with incarnation 0 and config SubprocessModuleConfig(cluster_id_hex='1055e483f4dc49122b1241989fd976e87d4b63a2cfd37b9f5e0a28de', gcs_address='10.128.0.163:54299', session_name='session_2026-02-27_00-30-26_175126_10593', temp_dir='/tmp/ray', session_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593', logging_level=20, logging_format='%(asctime)s\t%(levelname)s %(filename)s:%(lineno)s -- %(message)s', log_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs', logging_filename='dashboard.log', logging_rotate_bytes=536870912, logging_rotate_backup_count=5, socket_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593/sockets')
+ 2026-02-27 00:30:30,171 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+ 2026-02-27 00:30:30,174 INFO module.py:142 -- Started aiohttp server over /tmp/ray/session_2026-02-27_00-30-26_175126_10593/sockets/dash_ReportHead.
+ 2026-02-27 00:30:30,178 INFO module.py:225 -- Module ReportHead initialized, receiving messages...
+ 2026-02-27 00:32:14,326 WARNING module.py:82 -- Parent process 10931 died. Exiting...
dashboard_ServeHead.log ADDED
@@ -0,0 +1,5 @@
+ 2026-02-27 00:30:29,899 INFO module.py:210 -- Starting module ServeHead with incarnation 0 and config SubprocessModuleConfig(cluster_id_hex='1055e483f4dc49122b1241989fd976e87d4b63a2cfd37b9f5e0a28de', gcs_address='10.128.0.163:54299', session_name='session_2026-02-27_00-30-26_175126_10593', temp_dir='/tmp/ray', session_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593', logging_level=20, logging_format='%(asctime)s\t%(levelname)s %(filename)s:%(lineno)s -- %(message)s', log_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs', logging_filename='dashboard.log', logging_rotate_bytes=536870912, logging_rotate_backup_count=5, socket_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593/sockets')
+ 2026-02-27 00:30:29,903 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+ 2026-02-27 00:30:29,908 INFO module.py:142 -- Started aiohttp server over /tmp/ray/session_2026-02-27_00-30-26_175126_10593/sockets/dash_ServeHead.
+ 2026-02-27 00:30:29,908 INFO module.py:225 -- Module ServeHead initialized, receiving messages...
+ 2026-02-27 00:32:14,034 WARNING module.py:82 -- Parent process 10931 died. Exiting...
dashboard_StateHead.err ADDED
File without changes
dashboard_StateHead.log ADDED
@@ -0,0 +1,5 @@
+ 2026-02-27 00:30:30,084 INFO module.py:210 -- Starting module StateHead with incarnation 0 and config SubprocessModuleConfig(cluster_id_hex='1055e483f4dc49122b1241989fd976e87d4b63a2cfd37b9f5e0a28de', gcs_address='10.128.0.163:54299', session_name='session_2026-02-27_00-30-26_175126_10593', temp_dir='/tmp/ray', session_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593', logging_level=20, logging_format='%(asctime)s\t%(levelname)s %(filename)s:%(lineno)s -- %(message)s', log_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593/logs', logging_filename='dashboard.log', logging_rotate_bytes=536870912, logging_rotate_backup_count=5, socket_dir='/tmp/ray/session_2026-02-27_00-30-26_175126_10593/sockets')
+ 2026-02-27 00:30:30,088 WARNING __init__.py:161 -- DeprecationWarning: `ray.ray_constants.DASHBOARD_CLIENT_MAX_SIZE` is a private attribute and access will be removed in a future Ray version.
+ 2026-02-27 00:30:30,093 INFO module.py:142 -- Started aiohttp server over /tmp/ray/session_2026-02-27_00-30-26_175126_10593/sockets/dash_StateHead.
+ 2026-02-27 00:30:30,093 INFO module.py:225 -- Module StateHead initialized, receiving messages...
+ 2026-02-27 00:32:14,227 WARNING module.py:82 -- Parent process 10931 died. Exiting...
dashboard_TrainHead.err ADDED
File without changes
log_monitor.log ADDED
@@ -0,0 +1,44 @@
+ 2026-02-27 00:30:30,866 INFO log_monitor.py:169 -- Starting log monitor with [max open files=200], [is_autoscaler_v2=False]
+ 2026-02-27 00:30:30,867 INFO log_monitor.py:291 -- Beginning to track file raylet.err
+ 2026-02-27 00:30:30,867 INFO log_monitor.py:291 -- Beginning to track file monitor.log
+ 2026-02-27 00:30:30,867 INFO log_monitor.py:291 -- Beginning to track file gcs_server.err
+ 2026-02-27 00:30:32,394 INFO log_monitor.py:291 -- Beginning to track file worker-2cda7ffb1fdfeaaf98e6be62760ae2627c565d43cb10409f83c0a748-ffffffff-11372.out
+ 2026-02-27 00:30:32,394 INFO log_monitor.py:291 -- Beginning to track file worker-2cda7ffb1fdfeaaf98e6be62760ae2627c565d43cb10409f83c0a748-ffffffff-11372.err
+ 2026-02-27 00:30:32,598 INFO log_monitor.py:291 -- Beginning to track file worker-5ad3b871f9c47a0419d1c26aa73c88d3ae2d40ede3aeceeef3079ef2-ffffffff-11379.out
+ 2026-02-27 00:30:32,602 INFO log_monitor.py:291 -- Beginning to track file worker-bb0f50c5405699ae07f957ec3f7c03f2bdf40be03f6e39b39232dc16-ffffffff-11378.out
+ 2026-02-27 00:30:32,602 INFO log_monitor.py:291 -- Beginning to track file worker-658f00c930b44d143152233262d8e94af875b52448898614a4b579ba-ffffffff-11377.err
+ 2026-02-27 00:30:32,603 INFO log_monitor.py:291 -- Beginning to track file worker-1535028fd440028216a02042c55e0b58baec34df171b54a8306f4bc8-ffffffff-11376.out
+ 2026-02-27 00:30:32,603 INFO log_monitor.py:291 -- Beginning to track file worker-bb0f50c5405699ae07f957ec3f7c03f2bdf40be03f6e39b39232dc16-ffffffff-11378.err
+ 2026-02-27 00:30:32,603 INFO log_monitor.py:291 -- Beginning to track file worker-658f00c930b44d143152233262d8e94af875b52448898614a4b579ba-ffffffff-11377.out
+ 2026-02-27 00:30:32,603 INFO log_monitor.py:291 -- Beginning to track file worker-1535028fd440028216a02042c55e0b58baec34df171b54a8306f4bc8-ffffffff-11376.err
+ 2026-02-27 00:30:32,603 INFO log_monitor.py:291 -- Beginning to track file worker-5ad3b871f9c47a0419d1c26aa73c88d3ae2d40ede3aeceeef3079ef2-ffffffff-11379.err
+ 2026-02-27 00:30:32,705 INFO log_monitor.py:291 -- Beginning to track file worker-d94856809cc941df63bf786bee51e2d1396c50fb3b25d48a4be64edf-ffffffff-11373.out
+ 2026-02-27 00:30:32,709 INFO log_monitor.py:291 -- Beginning to track file worker-100e3eeb4a57ce034285a311628f834885904c1e1ea9caa911a3c4da-ffffffff-11375.out
+ 2026-02-27 00:30:32,709 INFO log_monitor.py:291 -- Beginning to track file worker-100e3eeb4a57ce034285a311628f834885904c1e1ea9caa911a3c4da-ffffffff-11375.err
+ 2026-02-27 00:30:32,709 INFO log_monitor.py:291 -- Beginning to track file worker-d94856809cc941df63bf786bee51e2d1396c50fb3b25d48a4be64edf-ffffffff-11373.err
+ 2026-02-27 00:30:32,811 INFO log_monitor.py:291 -- Beginning to track file worker-fc14e0d4e4b6acb4ecead813c2d960587eefa7859aac6d8e19aeec98-ffffffff-11374.err
+ 2026-02-27 00:30:32,811 INFO log_monitor.py:291 -- Beginning to track file worker-fc14e0d4e4b6acb4ecead813c2d960587eefa7859aac6d8e19aeec98-ffffffff-11374.out
+ 2026-02-27 00:30:38,484 INFO log_monitor.py:291 -- Beginning to track file worker-8d27d27b6a5c820150d6a54cd27fe296fd0409567d2b4685b9a84fc8-01000000-11896.err
+ 2026-02-27 00:30:38,484 INFO log_monitor.py:291 -- Beginning to track file worker-8d27d27b6a5c820150d6a54cd27fe296fd0409567d2b4685b9a84fc8-01000000-11896.out
+ 2026-02-27 00:30:59,181 INFO log_monitor.py:291 -- Beginning to track file worker-389d4ca43c5eadc5290ba2907f911210cffe11839a5cfe9496d636c1-01000000-12110.out
+ 2026-02-27 00:30:59,181 INFO log_monitor.py:291 -- Beginning to track file worker-389d4ca43c5eadc5290ba2907f911210cffe11839a5cfe9496d636c1-01000000-12110.err
+ 2026-02-27 00:31:02,118 INFO log_monitor.py:291 -- Beginning to track file worker-a711ab381f1202e338fc2083afa6dd5133aebf91969e1e83b35a9610-01000000-12223.err
+ 2026-02-27 00:31:02,118 INFO log_monitor.py:291 -- Beginning to track file worker-a711ab381f1202e338fc2083afa6dd5133aebf91969e1e83b35a9610-01000000-12223.out
+ 2026-02-27 00:31:33,088 INFO log_monitor.py:291 -- Beginning to track file worker-9224fcd6abcfd04deeca6990e3ac522c58f6eec637ba09c0e927aaef-01000000-12481.err
+ 2026-02-27 00:31:33,089 INFO log_monitor.py:291 -- Beginning to track file worker-9224fcd6abcfd04deeca6990e3ac522c58f6eec637ba09c0e927aaef-01000000-12481.out
+ 2026-02-27 00:31:33,193 INFO log_monitor.py:291 -- Beginning to track file worker-b809b75ac50a13f3d02e083041fe7ba32c1445fa33db5801d3c6cfe5-01000000-12477.err
+ 2026-02-27 00:31:33,193 INFO log_monitor.py:291 -- Beginning to track file worker-b809b75ac50a13f3d02e083041fe7ba32c1445fa33db5801d3c6cfe5-01000000-12477.out
+ 2026-02-27 00:31:33,297 INFO log_monitor.py:291 -- Beginning to track file worker-e98598eeddae739fb0211beef22a201ab9028016b2b64fe185d8c813-01000000-12584.err
+ 2026-02-27 00:31:33,297 INFO log_monitor.py:291 -- Beginning to track file worker-e98598eeddae739fb0211beef22a201ab9028016b2b64fe185d8c813-01000000-12584.out
+ 2026-02-27 00:31:33,402 INFO log_monitor.py:291 -- Beginning to track file worker-15c410d5d6a75625cb50c80927d18090e899b8edc49402fe08e50ee6-01000000-12567.out
+ 2026-02-27 00:31:33,402 INFO log_monitor.py:291 -- Beginning to track file worker-af6e4d2eae80c226c783dd6717832e015ec8fc0144d801649c12abfe-01000000-12563.out
+ 2026-02-27 00:31:33,402 INFO log_monitor.py:291 -- Beginning to track file worker-f46103e29121f0b748164b47d1653310da7f304c2c8c8df73871f0e5-01000000-12507.out
+ 2026-02-27 00:31:33,403 INFO log_monitor.py:291 -- Beginning to track file worker-15c410d5d6a75625cb50c80927d18090e899b8edc49402fe08e50ee6-01000000-12567.err
+ 2026-02-27 00:31:33,403 INFO log_monitor.py:291 -- Beginning to track file worker-af6e4d2eae80c226c783dd6717832e015ec8fc0144d801649c12abfe-01000000-12563.err
+ 2026-02-27 00:31:33,403 INFO log_monitor.py:291 -- Beginning to track file worker-f46103e29121f0b748164b47d1653310da7f304c2c8c8df73871f0e5-01000000-12507.err
+ 2026-02-27 00:31:33,505 INFO log_monitor.py:291 -- Beginning to track file worker-772052e39bd349442253d65d82fc94825e9e58c75098b1b473bedce2-01000000-12586.out
+ 2026-02-27 00:31:33,505 INFO log_monitor.py:291 -- Beginning to track file worker-772052e39bd349442253d65d82fc94825e9e58c75098b1b473bedce2-01000000-12586.err
+ 2026-02-27 00:31:33,611 INFO log_monitor.py:291 -- Beginning to track file worker-33b9d0a21a51ca22dda2aa2142cb264d1ee4f9d53a55dc567b49496c-01000000-12500.out
+ 2026-02-27 00:31:33,613 INFO log_monitor.py:291 -- Beginning to track file worker-33b9d0a21a51ca22dda2aa2142cb264d1ee4f9d53a55dc567b49496c-01000000-12500.err
+ 2026-02-27 00:31:44,384 INFO log_monitor.py:291 -- Beginning to track file worker-a99abd04b70eed71bbc2b85849964e1f45cdec8a7b96f35e101ab940-01000000-13106.out
+ 2026-02-27 00:31:44,385 INFO log_monitor.py:291 -- Beginning to track file worker-a99abd04b70eed71bbc2b85849964e1f45cdec8a7b96f35e101ab940-01000000-13106.err
monitor.err ADDED
File without changes
python-core-worker-100e3eeb4a57ce034285a311628f834885904c1e1ea9caa911a3c4da_11375.log ADDED
@@ -0,0 +1,85 @@
+ [2026-02-27 00:30:32,688 I 11375 11375] core_worker_process.cc:773: Constructing CoreWorkerProcess. pid: 11375
+ [2026-02-27 00:30:32,690 I 11375 11375] event.cc:499: Ray Event initialized for CORE_WORKER
+ [2026-02-27 00:30:32,690 I 11375 11375] event.cc:499: Ray Event initialized for EXPORT_TASK
+ [2026-02-27 00:30:32,690 I 11375 11375] event.cc:332: Set ray event level to warning
+ [2026-02-27 00:30:32,690 I 11375 11375] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 55678
+ [2026-02-27 00:30:32,692 I 11375 11375] grpc_server.cc:143: worker server started, listening on port 50277.
+ [2026-02-27 00:30:32,707 I 11375 11375] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50277 worker_id=100e3eeb4a57ce034285a311628f834885904c1e1ea9caa911a3c4da node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
+ [2026-02-27 00:30:32,708 I 11375 11375] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms.
+ [2026-02-27 00:30:32,709 I 11375 11375] core_worker.cc:515: Adjusted worker niceness to 15
+ [2026-02-27 00:30:32,709 I 11375 11375] metrics_agent_client.cc:42: Initializing exporter ...
+ [2026-02-27 00:30:32,710 I 11375 11693] core_worker.cc:455: Event stats:
+
+
+ Global stats: 13 total (11 active)
+ Queueing time: mean = 0.03ms, max = 0.32ms, min = 0.04ms, total = 0.36ms
+ Execution time: mean = 0.00ms, total = 0.04ms
+ Event stats:
+ PeriodicalRunner.RunFnPeriodically - 7 total (5 active, 1 running), Execution time: mean = 0.01ms, total = 0.04ms, Queueing time: mean = 0.05ms, max = 0.32ms, min = 0.04ms, total = 0.36ms
+ ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ReporterService.grpc_client.HealthCheck - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+
+ -----------------
+ Task execution event stats:
+
+ Global stats: 0 total (0 active)
+ Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ Execution time: mean = -nanms, total = 0.00ms
+ Event stats:
+
+ -----------------
+ Task Event stats:
+
+ IO Service Stats:
+
+ Global stats: 3 total (2 active)
+ Queueing time: mean = 0.02ms, max = 0.06ms, min = 0.06ms, total = 0.06ms
+ Execution time: mean = 0.28ms, total = 0.84ms
+ Event stats:
+ CoreWorker.deadline_timer.flush_task_events - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.84ms, total = 0.84ms, Queueing time: mean = 0.06ms, max = 0.06ms, min = 0.06ms, total = 0.06ms
+ Other Stats:
+ gcs_grpc_in_progress:1
+ event_aggregator_grpc_in_progress:0
+ current number of task status events in buffer: 0
+ current number of profile events in buffer: 0
+ current number of dropped task attempts tracked: 0
+ total task events sent: 0 MiB
+ total number of task attempts sent: 0
+ total number of task attempts dropped reported: 0
+ total number of sent failure: 0
+ num status task events dropped: 0
+ num profile task events dropped: 0
+ num ray task events reported to aggregator: 0
+ num ray task events failed to report to aggregator: 0
+ num of task attempts dropped reported to aggregator: 0
+ num of failed requests to aggregator: 0
+
+ [2026-02-27 00:30:32,712 I 11375 11693] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
+ [2026-02-27 00:30:32,712 I 11375 11693] normal_task_submitter.cc:824: Number of alive nodes:1
+ [2026-02-27 00:30:34,172 I 11375 11693] metrics_agent_client.cc:54: Exporter initialized.
+ [2026-02-27 00:30:57,500 I 11375 11693] core_worker_shutdown_executor.cc:184: Executing handle exit: INTENDED_SYSTEM_EXIT - Worker exited because it was idle for a long time (timeout: -1ms)
+ [2026-02-27 00:30:57,500 I 11375 11693] core_worker_shutdown_executor.cc:94: Executing worker exit: INTENDED_SYSTEM_EXIT - Worker exited because it was idle for a long time (timeout: 10000ms)
+ [2026-02-27 00:30:57,500 I 11375 11375] core_worker_shutdown_executor.cc:128: Wait for currently executing tasks in the underlying thread pools to finish.
+ [2026-02-27 00:30:57,500 I 11375 11375] core_worker_shutdown_executor.cc:162: Releasing local references, then draining reference counter.
+ [2026-02-27 00:30:57,503 I 11375 11375] core_worker_shutdown_executor.cc:217: Try killing all child processes of this worker as it exits. Child process pids:
+ [2026-02-27 00:30:57,503 I 11375 11375] core_worker_shutdown_executor.cc:262: Sending disconnect message to the local raylet.
+ [2026-02-27 00:30:57,504 I 11375 11375] raylet_ipc_client.cc:135: RayletIpcClient::Disconnect, exit_type=INTENDED_SYSTEM_EXIT, exit_detail=Worker exited because it was idle for a long time, has creation_task_exception_pb_bytes=0
+ [2026-02-27 00:30:57,505 I 11375 11375] core_worker_shutdown_executor.cc:279: Disconnected from the local raylet.
+ [2026-02-27 00:30:57,505 I 11375 11375] task_event_buffer.cc:491: Shutting down TaskEventBuffer.
+ [2026-02-27 00:30:57,505 I 11375 11719] task_event_buffer.cc:459: Task event buffer io service stopped.
+ [2026-02-27 00:30:57,506 I 11375 11375] core_worker_shutdown_executor.cc:54: Waiting for joining a core worker io thread. If it hangs here, there might be deadlock or a high load in the core worker io service.
+ [2026-02-27 00:30:57,506 I 11375 11693] core_worker_process.cc:194: Core worker main io service stopped.
+ [2026-02-27 00:30:57,510 I 11375 11375] core_worker_shutdown_executor.cc:72: Disconnecting a GCS client.
+ [2026-02-27 00:30:57,510 I 11375 11375] core_worker_shutdown_executor.cc:79: Core worker ready to be deallocated.
+ [2026-02-27 00:30:57,510 I 11375 11375] core_worker_process.cc:950: Task execution loop terminated. Removing the global worker.
+ [2026-02-27 00:30:57,510 I 11375 11375] core_worker.cc:539: Core worker is destructed
+ [2026-02-27 00:30:57,510 I 11375 11375] task_event_buffer.cc:491: Shutting down TaskEventBuffer.
+ [2026-02-27 00:30:57,512 I 11375 11375] core_worker_process.cc:846: Destructing CoreWorkerProcessImpl. pid: 11375
+ [2026-02-27 00:30:57,514 I 11375 11375] stats.h:149: Stats module has shutdown.
+ [2026-02-27 00:30:57,545 W 11375 11375] core_worker_process.cc:860: The core worker process is not initialized yet or already shutdown.
python-core-worker-658f00c930b44d143152233262d8e94af875b52448898614a4b579ba_11377.log ADDED
@@ -0,0 +1,142 @@
+ [2026-02-27 00:30:32,509 I 11377 11377] core_worker_process.cc:773: Constructing CoreWorkerProcess. pid: 11377
+ [2026-02-27 00:30:32,527 I 11377 11377] event.cc:499: Ray Event initialized for CORE_WORKER
+ [2026-02-27 00:30:32,527 I 11377 11377] event.cc:499: Ray Event initialized for EXPORT_TASK
+ [2026-02-27 00:30:32,527 I 11377 11377] event.cc:332: Set ray event level to warning
+ [2026-02-27 00:30:32,528 I 11377 11377] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 55678
+ [2026-02-27 00:30:32,530 I 11377 11377] grpc_server.cc:143: worker server started, listening on port 50373.
+ [2026-02-27 00:30:32,564 I 11377 11377] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50373 worker_id=658f00c930b44d143152233262d8e94af875b52448898614a4b579ba node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
+ [2026-02-27 00:30:32,568 I 11377 11377] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms.
+ [2026-02-27 00:30:32,574 I 11377 11524] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
+ [2026-02-27 00:30:32,574 I 11377 11524] normal_task_submitter.cc:824: Number of alive nodes:1
+ [2026-02-27 00:30:32,574 I 11377 11377] core_worker.cc:515: Adjusted worker niceness to 15
+ [2026-02-27 00:30:32,574 I 11377 11524] core_worker.cc:455: Event stats:
+
+
+ Global stats: 16 total (8 active)
+ Queueing time: mean = 0.01ms, max = 0.08ms, min = 0.01ms, total = 0.15ms
+ Execution time: mean = 0.19ms, total = 3.04ms
+ Event stats:
+ PeriodicalRunner.RunFnPeriodically - 7 total (5 active, 1 running), Execution time: mean = 0.00ms, total = 0.03ms, Queueing time: mean = 0.01ms, max = 0.08ms, min = 0.01ms, total = 0.09ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::NodeInfoGcsService.grpc_client.GetAllNodeAddressAndLiveness.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.30ms, total = 0.30ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms
+ ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (0 active), Execution time: mean = 0.59ms, total = 0.59ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (0 active), Execution time: mean = 0.86ms, total = 0.86ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.02ms, total = 0.02ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms
+ CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::NodeInfoGcsService.grpc_client.GetAllNodeAddressAndLiveness - 1 total (0 active), Execution time: mean = 0.57ms, total = 0.57ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.67ms, total = 0.67ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms
+
+ -----------------
+ Task execution event stats:
+
+ Global stats: 0 total (0 active)
+ Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ Execution time: mean = -nanms, total = 0.00ms
+ Event stats:
+
+ -----------------
+ Task Event stats:
+
+ IO Service Stats:
+
+ Global stats: 4 total (1 active)
+ Queueing time: mean = 0.09ms, max = 0.33ms, min = 0.03ms, total = 0.35ms
+ Execution time: mean = 0.26ms, total = 1.05ms
+ Event stats:
+ ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.03ms, total = 0.03ms, Queueing time: mean = 0.03ms, max = 0.03ms, min = 0.03ms, total = 0.03ms
+ ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (0 active), Execution time: mean = 0.77ms, total = 0.77ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.25ms, total = 0.25ms, Queueing time: mean = 0.33ms, max = 0.33ms, min = 0.33ms, total = 0.33ms
+ CoreWorker.deadline_timer.flush_task_events - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ Other Stats:
+ gcs_grpc_in_progress:0
+ event_aggregator_grpc_in_progress:0
+ current number of task status events in buffer: 0
+ current number of profile events in buffer: 0
+ current number of dropped task attempts tracked: 0
+ total task events sent: 0 MiB
+ total number of task attempts sent: 0
+ total number of task attempts dropped reported: 0
+ total number of sent failure: 0
+ num status task events dropped: 0
+ num profile task events dropped: 0
+ num ray task events reported to aggregator: 0
+ num ray task events failed to report to aggregator: 0
+ num of task attempts dropped reported to aggregator: 0
+ num of failed requests to aggregator: 0
+
+ [2026-02-27 00:30:32,574 I 11377 11377] metrics_agent_client.cc:42: Initializing exporter ...
+ [2026-02-27 00:30:33,585 I 11377 11524] metrics_agent_client.cc:54: Exporter initialized.
+ [2026-02-27 00:31:32,575 I 11377 11524] core_worker.cc:455: Event stats:
+
+
+ Global stats: 887 total (8 active)
+ Queueing time: mean = 1.19ms, max = 1000.05ms, min = 0.01ms, total = 1057.44ms
+ Execution time: mean = 0.08ms, total = 73.84ms
+ Event stats:
+ CoreWorker.RecoverObjects - 600 total (1 active), Execution time: mean = 0.01ms, total = 4.19ms, Queueing time: mean = 0.06ms, max = 5.00ms, min = 0.01ms, total = 33.80ms
+ CoreWorker.InternalHeartbeat - 60 total (1 active), Execution time: mean = 0.13ms, total = 7.81ms, Queueing time: mean = 0.04ms, max = 0.07ms, min = 0.02ms, total = 2.24ms
+ CoreWorker.ExitIfParentRayletDies - 60 total (1 active), Execution time: mean = 0.01ms, total = 0.65ms, Queueing time: mean = 0.05ms, max = 0.15ms, min = 0.02ms, total = 2.73ms
+ NodeManagerService.grpc_client.ReportWorkerBacklog - 60 total (0 active), Execution time: mean = 0.71ms, total = 42.86ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ NodeManagerService.grpc_client.ReportWorkerBacklog.OnReplyReceived - 60 total (0 active), Execution time: mean = 0.02ms, total = 1.20ms, Queueing time: mean = 0.03ms, max = 0.11ms, min = 0.01ms, total = 1.76ms
+ CoreWorker.RecordMetrics - 12 total (1 active), Execution time: mean = 0.04ms, total = 0.45ms, Queueing time: mean = 0.03ms, max = 0.05ms, min = 0.02ms, total = 0.39ms
+ PeriodicalRunner.RunFnPeriodically - 7 total (0 active), Execution time: mean = 1.06ms, total = 7.43ms, Queueing time: mean = 2.28ms, max = 7.41ms, min = 0.01ms, total = 15.95ms
+ CoreWorker.TryDelPendingObjectRefStreams - 6 total (1 active), Execution time: mean = 0.00ms, total = 0.02ms, Queueing time: mean = 0.04ms, max = 0.06ms, min = 0.03ms, total = 0.23ms
+ CoreWorkerService.grpc_server.GetCoreWorkerStats - 4 total (0 active), Execution time: mean = 0.41ms, total = 1.63ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ CoreWorkerService.grpc_server.GetCoreWorkerStats.HandleRequestImpl - 4 total (0 active), Execution time: mean = 0.08ms, total = 0.33ms, Queueing time: mean = 0.05ms, max = 0.07ms, min = 0.02ms, total = 0.18ms
+ ReporterService.grpc_client.HealthCheck - 2 total (0 active), Execution time: mean = 1.76ms, total = 3.52ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ReporterService.grpc_client.HealthCheck.OnReplyReceived - 2 total (0 active), Execution time: mean = 0.32ms, total = 0.63ms, Queueing time: mean = 0.02ms, max = 0.03ms, min = 0.02ms, total = 0.05ms
+ ray::rpc::NodeInfoGcsService.grpc_client.GetAllNodeAddressAndLiveness.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.30ms, total = 0.30ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ MetricsAgentClient.WaitForServerReadyWithRetry - 1 total (0 active), Execution time: mean = 0.12ms, total = 0.12ms, Queueing time: mean = 1000.05ms, max = 1000.05ms, min = 1000.05ms, total = 1000.05ms
+ ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.02ms, total = 0.02ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms
+ ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (0 active), Execution time: mean = 0.59ms, total = 0.59ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (0 active), Execution time: mean = 0.86ms, total = 0.86ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ CoreWorker.PrintEventStats - 1 total (1 active, 1 running), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.67ms, total = 0.67ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms
+ ray::rpc::NodeInfoGcsService.grpc_client.GetAllNodeAddressAndLiveness - 1 total (0 active), Execution time: mean = 0.57ms, total = 0.57ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+
+ -----------------
+ Task execution event stats:
+
+ Global stats: 5940 total (1 active)
+ Queueing time: mean = 0.09ms, max = 31.90ms, min = 0.00ms, total = 556.21ms
+ Execution time: mean = 0.01ms, total = 61.29ms
+ Event stats:
+ CoreWorker.CheckSignal - 5939 total (1 active), Execution time: mean = 0.01ms, total = 61.28ms, Queueing time: mean = 0.09ms, max = 31.90ms, min = 0.01ms, total = 556.20ms
+ PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.01ms, total = 0.01ms, Queueing time: mean = 0.00ms, max = 0.00ms, min = 0.00ms, total = 0.00ms
+
+ -----------------
+ Task Event stats:
+
+ IO Service Stats:
+
+ Global stats: 181 total (1 active)
+ Queueing time: mean = 0.05ms, max = 2.21ms, min = 0.02ms, total = 9.26ms
+ Execution time: mean = 0.29ms, total = 52.30ms
+ Event stats:
+ ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData.OnReplyReceived - 60 total (0 active), Execution time: mean = 0.03ms, total = 1.92ms, Queueing time: mean = 0.03ms, max = 0.11ms, min = 0.02ms, total = 1.67ms
+ ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 60 total (0 active), Execution time: mean = 0.65ms, total = 39.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ CoreWorker.deadline_timer.flush_task_events - 60 total (1 active), Execution time: mean = 0.19ms, total = 11.13ms, Queueing time: mean = 0.12ms, max = 2.21ms, min = 0.03ms, total = 7.27ms
+ PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.25ms, total = 0.25ms, Queueing time: mean = 0.33ms, max = 0.33ms, min = 0.33ms, total = 0.33ms
+ Other Stats:
+ gcs_grpc_in_progress:0
+ event_aggregator_grpc_in_progress:0
+ current number of task status events in buffer: 0
+ current number of profile events in buffer: 0
+ current number of dropped task attempts tracked: 0
+ total task events sent: 0 MiB
+ total number of task attempts sent: 0
+ total number of task attempts dropped reported: 0
+ total number of sent failure: 0
+ num status task events dropped: 0
+ num profile task events dropped: 0
+ num ray task events reported to aggregator: 0
+ num ray task events failed to report to aggregator: 0
+ num of task attempts dropped reported to aggregator: 0
+ num of failed requests to aggregator: 0
+
+ [2026-02-27 00:32:12,517 I 11377 11524] accessor.cc:540: Received address and liveness notification for node, IsAlive = 0 node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
+ [2026-02-27 00:32:12,517 I 11377 11524] core_worker.cc:740: Node failure. All objects pinned on that node will be lost if object reconstruction is not enabled. node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
+ [2026-02-27 00:32:12,517 I 11377 11524] normal_task_submitter.cc:824: Number of alive nodes:0
python-core-worker-a99abd04b70eed71bbc2b85849964e1f45cdec8a7b96f35e101ab940_13106.log ADDED
@@ -0,0 +1,77 @@
+ [2026-02-27 00:31:44,365 I 13106 13106] core_worker_process.cc:773: Constructing CoreWorkerProcess. pid: 13106
+ [2026-02-27 00:31:44,371 I 13106 13106] event.cc:499: Ray Event initialized for CORE_WORKER
+ [2026-02-27 00:31:44,372 I 13106 13106] event.cc:499: Ray Event initialized for EXPORT_TASK
+ [2026-02-27 00:31:44,372 I 13106 13106] event.cc:332: Set ray event level to warning
+ [2026-02-27 00:31:44,372 I 13106 13106] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 55678
+ [2026-02-27 00:31:44,374 I 13106 13106] grpc_server.cc:143: worker server started, listening on port 50267.
+ [2026-02-27 00:31:44,389 I 13106 13106] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50267 worker_id=a99abd04b70eed71bbc2b85849964e1f45cdec8a7b96f35e101ab940 node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
+ [2026-02-27 00:31:44,393 I 13106 13106] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms.
+ [2026-02-27 00:31:44,395 I 13106 13106] core_worker.cc:515: Adjusted worker niceness to 15
+ [2026-02-27 00:31:44,395 I 13106 13106] metrics_agent_client.cc:42: Initializing exporter ...
+ [2026-02-27 00:31:44,395 I 13106 13175] core_worker.cc:455: Event stats:
+
+
+ Global stats: 12 total (10 active)
+ Queueing time: mean = 0.01ms, max = 0.04ms, min = 0.04ms, total = 0.08ms
+ Execution time: mean = 0.00ms, total = 0.04ms
+ Event stats:
+ PeriodicalRunner.RunFnPeriodically - 7 total (5 active, 1 running), Execution time: mean = 0.01ms, total = 0.04ms, Queueing time: mean = 0.01ms, max = 0.04ms, min = 0.04ms, total = 0.08ms
+ ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+
+ -----------------
+ Task execution event stats:
+
+ Global stats: 0 total (0 active)
+ Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ Execution time: mean = -nanms, total = 0.00ms
+ Event stats:
+
+ -----------------
+ Task Event stats:
+
+ IO Service Stats:
+
+ Global stats: 1 total (1 active)
+ Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ Execution time: mean = 0.00ms, total = 0.00ms
+ Event stats:
+ PeriodicalRunner.RunFnPeriodically - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ Other Stats:
+ gcs_grpc_in_progress:0
+ event_aggregator_grpc_in_progress:0
+ current number of task status events in buffer: 0
+ current number of profile events in buffer: 0
+ current number of dropped task attempts tracked: 0
+ total task events sent: 0 MiB
+ total number of task attempts sent: 0
+ total number of task attempts dropped reported: 0
+ total number of sent failure: 0
+ num status task events dropped: 0
+ num profile task events dropped: 0
+ num ray task events reported to aggregator: 0
+ num ray task events failed to report to aggregator: 0
+ num of task attempts dropped reported to aggregator: 0
+ num of failed requests to aggregator: 0
+
+ [2026-02-27 00:31:44,399 I 13106 13175] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
+ [2026-02-27 00:31:44,399 I 13106 13175] normal_task_submitter.cc:824: Number of alive nodes:1
+ [2026-02-27 00:31:44,402 I 13106 13175] metrics_agent_client.cc:54: Exporter initialized.
+ [2026-02-27 00:31:44,402 I 13106 13106] actor_task_submitter.cc:74: Set actor max pending calls to -1 actor_id=8b6cdab5e264a7c65d1ca0d701000000
+ [2026-02-27 00:31:44,402 I 13106 13106] core_worker.cc:2903: Creating actor actor_id=8b6cdab5e264a7c65d1ca0d701000000
+ [2026-02-27 00:31:55,667 I 13106 13259] actor_task_submitter.cc:74: Set actor max pending calls to -1 actor_id=ddbc05b6418ab5a7f12dad2101000000
+ [2026-02-27 00:32:01,975 I 13106 13106] task_receiver.cc:142: Actor creation task finished, task_id: ffffffffffffffff8b6cdab5e264a7c65d1ca0d701000000, actor_id: 8b6cdab5e264a7c65d1ca0d701000000, actor_repr_name:
+ [2026-02-27 00:32:01,978 I 13106 13106] out_of_order_actor_scheduling_queue.cc:51: Setting actor as asyncio with max_concurrency=1000, and defined concurrency groups are:
+
+ [2026-02-27 00:32:12,455 I 13106 13175] core_worker.cc:4303: Worker is not idle: reference counter: ReferenceTable{size: 1 sample: ffffffffffffffffddbc05b6418ab5a7f12dad210100000001000000:Reference{borrowers: 0 local_ref_count: 1 submitted_count: 0 contained_on_owned: 0 contained_in_borrowed: 0 contains: 0 stored_in: 0 lineage_ref_count: 0}} # pins in flight: 0 # pending tasks: 0
+ [2026-02-27 00:32:12,455 I 13106 13175] core_worker.cc:4308: Force exiting worker that's not idle. reference counter: ReferenceTable{size: 1 sample: ffffffffffffffffddbc05b6418ab5a7f12dad210100000001000000:Reference{borrowers: 0 local_ref_count: 1 submitted_count: 0 contained_on_owned: 0 contained_in_borrowed: 0 contains: 0 stored_in: 0 lineage_ref_count: 0}} # Pins in flight: 0 # pending tasks: 0
+ [2026-02-27 00:32:12,484 I 13106 13175] core_worker_shutdown_executor.cc:217: Try killing all child processes of this worker as it exits. Child process pids: 13282,13283
+ [2026-02-27 00:32:12,484 I 13106 13175] core_worker_shutdown_executor.cc:226: Kill result for child pid 13282: Success, bool 0
+ [2026-02-27 00:32:12,485 I 13106 13175] core_worker_shutdown_executor.cc:226: Kill result for child pid 13283: Success, bool 0
+ [2026-02-27 00:32:12,487 I 13106 13175] core_worker_shutdown_executor.cc:262: Sending disconnect message to the local raylet.
+ [2026-02-27 00:32:12,487 I 13106 13175] raylet_ipc_client.cc:135: RayletIpcClient::Disconnect, exit_type=INTENDED_USER_EXIT, exit_detail=Worker force exited because its job has finished, has creation_task_exception_pb_bytes=0
+ [2026-02-27 00:32:12,500 I 13106 13175] core_worker_shutdown_executor.cc:279: Disconnected from the local raylet.
+ [2026-02-27 00:32:12,500 W 13106 13175] core_worker_shutdown_executor.cc:288: Quick exit - terminating process immediately
python-core-worker-d94856809cc941df63bf786bee51e2d1396c50fb3b25d48a4be64edf_11373.log ADDED
@@ -0,0 +1,85 @@
+ [2026-02-27 00:30:32,655 I 11373 11373] core_worker_process.cc:773: Constructing CoreWorkerProcess. pid: 11373
+ [2026-02-27 00:30:32,659 I 11373 11373] event.cc:499: Ray Event initialized for CORE_WORKER
+ [2026-02-27 00:30:32,659 I 11373 11373] event.cc:499: Ray Event initialized for EXPORT_TASK
+ [2026-02-27 00:30:32,659 I 11373 11373] event.cc:332: Set ray event level to warning
+ [2026-02-27 00:30:32,659 I 11373 11373] event_aggregator_client.h:50: Initiating the local event aggregator client with port: 55678
+ [2026-02-27 00:30:32,660 I 11373 11373] grpc_server.cc:143: worker server started, listening on port 50163.
+ [2026-02-27 00:30:32,671 I 11373 11373] core_worker_process.cc:261: Initializing worker at address: 10.128.0.163:50163 worker_id=d94856809cc941df63bf786bee51e2d1396c50fb3b25d48a4be64edf node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
+ [2026-02-27 00:30:32,672 I 11373 11373] task_event_buffer.cc:480: Reporting task events to GCS every 1000ms.
+ [2026-02-27 00:30:32,673 I 11373 11373] core_worker.cc:515: Adjusted worker niceness to 15
+ [2026-02-27 00:30:32,673 I 11373 11626] core_worker.cc:455: Event stats:
+
+
+ Global stats: 12 total (10 active)
+ Queueing time: mean = 0.01ms, max = 0.11ms, min = 0.03ms, total = 0.14ms
+ Execution time: mean = 0.00ms, total = 0.04ms
+ Event stats:
+ PeriodicalRunner.RunFnPeriodically - 7 total (5 active, 1 running), Execution time: mean = 0.01ms, total = 0.04ms, Queueing time: mean = 0.02ms, max = 0.11ms, min = 0.03ms, total = 0.14ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberPoll - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::WorkerInfoGcsService.grpc_client.AddWorkerInfo - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ Publisher.CheckDeadSubscribers - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ CoreWorker.ExitIfParentRayletDies - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ ray::rpc::InternalPubSubGcsService.grpc_client.GcsSubscriberCommandBatch - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+
+ -----------------
+ Task execution event stats:
+
+ Global stats: 0 total (0 active)
+ Queueing time: mean = -nanms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
+ Execution time: mean = -nanms, total = 0.00ms
+ Event stats:
+
+ -----------------
+ Task Event stats:
+
+ IO Service Stats:
+
+ Global stats: 4 total (1 active)
+ Queueing time: mean = 0.04ms, max = 0.13ms, min = 0.02ms, total = 0.15ms
39
+ Execution time: mean = 0.26ms, total = 1.02ms
40
+ Event stats:
41
+ CoreWorker.deadline_timer.flush_task_events - 1 total (1 active), Execution time: mean = 0.00ms, total = 0.00ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
42
+ ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData.OnReplyReceived - 1 total (0 active), Execution time: mean = 0.03ms, total = 0.03ms, Queueing time: mean = 0.02ms, max = 0.02ms, min = 0.02ms, total = 0.02ms
43
+ PeriodicalRunner.RunFnPeriodically - 1 total (0 active), Execution time: mean = 0.24ms, total = 0.24ms, Queueing time: mean = 0.13ms, max = 0.13ms, min = 0.13ms, total = 0.13ms
44
+ ray::rpc::TaskInfoGcsService.grpc_client.AddTaskEventData - 1 total (0 active), Execution time: mean = 0.75ms, total = 0.75ms, Queueing time: mean = 0.00ms, max = -0.00ms, min = 9223372036854.78ms, total = 0.00ms
45
+ Other Stats:
46
+ gcs_grpc_in_progress:0
47
+ event_aggregator_grpc_in_progress:0
48
+ current number of task status events in buffer: 0
49
+ current number of profile events in buffer: 0
50
+ current number of dropped task attempts tracked: 0
51
+ total task events sent: 0 MiB
52
+ total number of task attempts sent: 0
53
+ total number of task attempts dropped reported: 0
54
+ total number of sent failure: 0
55
+ num status task events dropped: 0
56
+ num profile task events dropped: 0
57
+ num ray task events reported to aggregator: 0
58
+ num ray task events failed to report to aggregator: 0
59
+ num of task attempts dropped reported to aggregator: 0
60
+ num of failed requests to aggregator: 0
61
+
62
+ [2026-02-27 00:30:32,674 I 11373 11373] metrics_agent_client.cc:42: Initializing exporter ...
63
+ [2026-02-27 00:30:32,675 I 11373 11626] accessor.cc:540: Received address and liveness notification for node, IsAlive = 1 node_id=d9160dc27b026e787f1f81b4465a36f139cf00f32ff2820e42f20120
64
+ [2026-02-27 00:30:32,675 I 11373 11626] normal_task_submitter.cc:824: Number of alive nodes:1
65
+ [2026-02-27 00:30:34,677 I 11373 11626] metrics_agent_client.cc:54: Exporter initialized.
66
+ [2026-02-27 00:30:57,500 I 11373 11626] core_worker_shutdown_executor.cc:184: Executing handle exit: INTENDED_SYSTEM_EXIT - Worker exited because it was idle for a long time (timeout: -1ms)
67
+ [2026-02-27 00:30:57,500 I 11373 11626] core_worker_shutdown_executor.cc:94: Executing worker exit: INTENDED_SYSTEM_EXIT - Worker exited because it was idle for a long time (timeout: 10000ms)
68
+ [2026-02-27 00:30:57,500 I 11373 11373] core_worker_shutdown_executor.cc:128: Wait for currently executing tasks in the underlying thread pools to finish.
69
+ [2026-02-27 00:30:57,500 I 11373 11373] core_worker_shutdown_executor.cc:162: Releasing local references, then draining reference counter.
70
+ [2026-02-27 00:30:57,505 I 11373 11373] core_worker_shutdown_executor.cc:217: Try killing all child processes of this worker as it exits. Child process pids:
71
+ [2026-02-27 00:30:57,505 I 11373 11373] core_worker_shutdown_executor.cc:262: Sending disconnect message to the local raylet.
72
+ [2026-02-27 00:30:57,506 I 11373 11373] raylet_ipc_client.cc:135: RayletIpcClient::Disconnect, exit_type=INTENDED_SYSTEM_EXIT, exit_detail=Worker exited because it was idle for a long time, has creation_task_exception_pb_bytes=0
73
+ [2026-02-27 00:30:57,507 I 11373 11373] core_worker_shutdown_executor.cc:279: Disconnected from the local raylet.
74
+ [2026-02-27 00:30:57,508 I 11373 11373] task_event_buffer.cc:491: Shutting down TaskEventBuffer.
75
+ [2026-02-27 00:30:57,508 I 11373 11673] task_event_buffer.cc:459: Task event buffer io service stopped.
76
+ [2026-02-27 00:30:57,508 I 11373 11626] core_worker_process.cc:194: Core worker main io service stopped.
77
+ [2026-02-27 00:30:57,508 I 11373 11373] core_worker_shutdown_executor.cc:54: Waiting for joining a core worker io thread. If it hangs here, there might be deadlock or a high load in the core worker io service.
78
+ [2026-02-27 00:30:57,511 I 11373 11373] core_worker_shutdown_executor.cc:72: Disconnecting a GCS client.
79
+ [2026-02-27 00:30:57,511 I 11373 11373] core_worker_shutdown_executor.cc:79: Core worker ready to be deallocated.
80
+ [2026-02-27 00:30:57,511 I 11373 11373] core_worker_process.cc:950: Task execution loop terminated. Removing the global worker.
81
+ [2026-02-27 00:30:57,511 I 11373 11373] core_worker.cc:539: Core worker is destructed
82
+ [2026-02-27 00:30:57,511 I 11373 11373] task_event_buffer.cc:491: Shutting down TaskEventBuffer.
83
+ [2026-02-27 00:30:57,512 I 11373 11373] core_worker_process.cc:846: Destructing CoreWorkerProcessImpl. pid: 11373
84
+ [2026-02-27 00:30:57,514 I 11373 11373] stats.h:149: Stats module has shutdown.
85
+ [2026-02-27 00:30:57,545 W 11373 11373] core_worker_process.cc:860: The core worker process is not initialized yet or already shutdown.
worker-1535028fd440028216a02042c55e0b58baec34df171b54a8306f4bc8-ffffffff-11376.err ADDED
File without changes
worker-8d27d27b6a5c820150d6a54cd27fe296fd0409567d2b4685b9a84fc8-01000000-11896.out ADDED
@@ -0,0 +1,571 @@
+ :job_id:01000000
+ :actor_name:TaskRunner
+ TaskRunner hostname: cs-01kje4289qf3k6pv20jzcef9t8, PID: 11896
+ {'actor_rollout_ref': {'actor': {'_target_': 'verl.workers.config.FSDPActorConfig',
+ 'calculate_entropy': False,
+ 'calculate_sum_pi_squared': False,
+ 'checkpoint': {'_target_': 'verl.trainer.config.CheckpointConfig',
+ 'async_save': False,
+ 'load_contents': ['model',
+ 'optimizer',
+ 'extra'],
+ 'mbridge_config': {},
+ 'save_contents': ['model',
+ 'optimizer',
+ 'extra']},
+ 'clip_ratio': 0.2,
+ 'clip_ratio_c': 3.0,
+ 'clip_ratio_high': 0.2,
+ 'clip_ratio_low': 0.2,
+ 'data_loader_seed': 42,
+ 'entropy_checkpointing': False,
+ 'entropy_coeff': 0,
+ 'entropy_from_logits_with_chunking': False,
+ 'freeze_vision_tower': False,
+ 'fsdp_config': {'_target_': 'verl.workers.config.FSDPEngineConfig',
+ 'dtype': 'bfloat16',
+ 'entropy_checkpointing': False,
+ 'entropy_from_logits_with_chunking': False,
+ 'forward_only': False,
+ 'forward_prefetch': False,
+ 'fsdp_size': -1,
+ 'full_determinism': False,
+ 'model_dtype': 'fp32',
+ 'offload_policy': False,
+ 'optimizer_offload': False,
+ 'param_offload': False,
+ 'reshard_after_forward': True,
+ 'seed': 42,
+ 'strategy': 'fsdp',
+ 'ulysses_sequence_parallel_size': 1,
+ 'use_orig_params': False,
+ 'use_torch_compile': True,
+ 'wrap_policy': {'min_num_params': 0}},
+ 'grad_clip': 1.0,
+ 'kl_loss_coef': 0.001,
+ 'kl_loss_type': 'low_var_kl',
+ 'loss_agg_mode': 'token-mean',
+ 'loss_scale_factor': None,
+ 'optim': {'_target_': 'verl.workers.config.FSDPOptimizerConfig',
+ 'betas': [0.9, 0.999],
+ 'clip_grad': 1.0,
+ 'lr': 1e-06,
+ 'lr_scheduler_type': 'constant',
+ 'lr_warmup_steps': -1,
+ 'lr_warmup_steps_ratio': 0.0,
+ 'min_lr_ratio': 0.0,
+ 'num_cycles': 0.5,
+ 'optimizer': 'AdamW',
+ 'optimizer_impl': 'torch.optim',
+ 'override_optimizer_config': None,
+ 'total_training_steps': -1,
+ 'warmup_style': None,
+ 'weight_decay': 0.01,
+ 'zero_indexed_step': True},
+ 'policy_loss': {'_target_': 'verl.workers.config.PolicyLossConfig',
+ 'clip_cov_lb': 1.0,
+ 'clip_cov_ratio': 0.0002,
+ 'clip_cov_ub': 5.0,
+ 'kl_cov_ratio': 0.0002,
+ 'loss_mode': 'vanilla',
+ 'ppo_kl_coef': 0.1},
+ 'ppo_epochs': 1,
+ 'ppo_max_token_len_per_gpu': 16384,
+ 'ppo_micro_batch_size': None,
+ 'ppo_micro_batch_size_per_gpu': 4,
+ 'ppo_mini_batch_size': 64,
+ 'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig',
+ 'all_ranks': True,
+ 'enable': True,
+ 'ranks': [],
+ 'save_path': 'outputs/profile',
+ 'tool': 'nsys',
+ 'tool_config': {'npu': {'_target_': 'verl.utils.profiler.config.NPUToolConfig',
+ 'analysis': True,
+ 'contents': [],
+ 'discrete': False,
+ 'level': 'level0'},
+ 'nsys': {'_target_': 'verl.utils.profiler.config.NsightToolConfig',
+ 'discrete': False},
+ 'torch': {'_target_': 'verl.utils.profiler.config.TorchProfilerToolConfig',
+ 'contents': [],
+ 'discrete': False},
+ 'torch_memory': {'_target_': 'verl.utils.profiler.config.TorchMemoryToolConfig',
+ 'stack_depth': 32,
+ 'trace_alloc_max_entries': 100000}}},
+ 'qat': {'activation_observer': 'static_minmax',
+ 'enable': False,
+ 'group_size': 16,
+ 'ignore_patterns': ['lm_head',
+ 'embed_tokens',
+ 're:.*mlp.gate$'],
+ 'mode': 'w4a16',
+ 'quantization_config_path': None},
+ 'rollout_n': 1,
+ 'router_replay': {'_target_': 'verl.workers.config.RouterReplayConfig',
+ 'mode': 'disabled',
+ 'record_file': None,
+ 'replay_file': None},
+ 'shuffle': False,
+ 'strategy': 'fsdp',
+ 'sum_pi_squared_checkpointing': False,
+ 'tau_neg': 1.05,
+ 'tau_pos': 1.0,
+ 'ulysses_sequence_parallel_size': 1,
+ 'use_dynamic_bsz': False,
+ 'use_fused_kernels': False,
+ 'use_kl_loss': False,
+ 'use_prefix_grouper': False,
+ 'use_remove_padding': True,
+ 'use_torch_compile': True},
+ 'hybrid_engine': True,
+ 'model': {'_target_': 'verl.workers.config.HFModelConfig',
+ 'custom_chat_template': None,
+ 'enable_activation_offload': False,
+ 'enable_gradient_checkpointing': True,
+ 'exclude_modules': None,
+ 'external_lib': None,
+ 'fused_kernel_options': {'impl_backend': 'torch'},
+ 'hf_config_path': None,
+ 'lora_adapter_path': None,
+ 'lora_alpha': 16,
+ 'lora_rank': 0,
+ 'mtp': {'_target_': 'verl.workers.config.MtpConfig',
+ 'detach_encoder': False,
+ 'enable': False,
+ 'enable_rollout': False,
+ 'enable_train': False,
+ 'method': 'mtp',
+ 'mtp_loss_scaling_factor': 0.1,
+ 'num_speculative_tokens': 1,
+ 'speculative_algorithm': 'EAGLE',
+ 'speculative_eagle_topk': 1,
+ 'speculative_num_draft_tokens': 4,
+ 'speculative_num_steps': 3},
+ 'override_config': {},
+ 'path': 'Qwen/Qwen2.5-0.5B-Instruct',
+ 'target_modules': 'all-linear',
+ 'tiled_mlp': {'enabled': False,
+ 'num_shards': 4},
+ 'tokenizer_path': None,
+ 'trust_remote_code': False,
+ 'use_fused_kernels': False,
+ 'use_liger': False,
+ 'use_remove_padding': True,
+ 'use_shm': False},
+ 'nccl_timeout': 600,
+ 'ref': {'_target_': 'verl.workers.config.FSDPActorConfig',
+ 'entropy_checkpointing': False,
+ 'entropy_from_logits_with_chunking': False,
+ 'fsdp_config': {'_target_': 'verl.workers.config.FSDPEngineConfig',
+ 'dtype': 'bfloat16',
+ 'entropy_checkpointing': False,
+ 'entropy_from_logits_with_chunking': False,
+ 'forward_only': True,
+ 'forward_prefetch': False,
+ 'fsdp_size': -1,
+ 'full_determinism': False,
+ 'model_dtype': 'fp32',
+ 'offload_policy': False,
+ 'optimizer_offload': False,
+ 'param_offload': False,
+ 'reshard_after_forward': True,
+ 'seed': 42,
+ 'strategy': 'fsdp',
+ 'ulysses_sequence_parallel_size': 1,
+ 'use_orig_params': False,
+ 'use_torch_compile': True,
+ 'wrap_policy': {'min_num_params': 0}},
+ 'log_prob_max_token_len_per_gpu': 16384,
+ 'log_prob_micro_batch_size': None,
+ 'log_prob_micro_batch_size_per_gpu': 4,
+ 'log_prob_use_dynamic_bsz': False,
+ 'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig',
+ 'all_ranks': False,
+ 'enable': False,
+ 'ranks': [],
+ 'save_path': 'outputs/profile',
+ 'tool': 'nsys',
+ 'tool_config': {'npu': {'_target_': 'verl.utils.profiler.config.NPUToolConfig',
+ 'analysis': True,
+ 'contents': [],
+ 'discrete': False,
+ 'level': 'level0'},
+ 'nsys': {'_target_': 'verl.utils.profiler.config.NsightToolConfig',
+ 'discrete': False},
+ 'torch': {'_target_': 'verl.utils.profiler.config.TorchProfilerToolConfig',
+ 'contents': [],
+ 'discrete': False},
+ 'torch_memory': {'_target_': 'verl.utils.profiler.config.TorchMemoryToolConfig',
+ 'stack_depth': 32,
+ 'trace_alloc_max_entries': 100000}}},
+ 'rollout_n': 1,
+ 'router_replay': {'_target_': 'verl.workers.config.RouterReplayConfig',
+ 'mode': 'disabled',
+ 'record_file': None,
+ 'replay_file': None},
+ 'strategy': 'fsdp',
+ 'ulysses_sequence_parallel_size': 1,
+ 'use_torch_compile': True},
+ 'rollout': {'_target_': 'verl.workers.config.RolloutConfig',
+ 'agent': {'_target_': 'verl.workers.config.AgentLoopConfig',
+ 'agent_loop_config_path': None,
+ 'custom_async_server': {'_target_': 'verl.workers.config.CustomAsyncServerConfig',
+ 'name': None,
+ 'path': None},
+ 'default_agent_loop': 'single_turn_agent',
+ 'num_workers': 8},
+ 'calculate_log_probs': False,
+ 'checkpoint_engine': {'_target_': 'verl.workers.config.CheckpointEngineConfig',
+ 'backend': 'naive',
+ 'engine_kwargs': {},
+ 'update_weights_bucket_megabytes': 2048},
+ 'cudagraph_capture_sizes': None,
+ 'data_parallel_size': 1,
+ 'disable_log_stats': True,
+ 'do_sample': True,
+ 'dtype': 'bfloat16',
+ 'enable_chunked_prefill': True,
+ 'enable_prefix_caching': True,
+ 'enable_rollout_routing_replay': False,
+ 'enforce_eager': False,
+ 'engine_kwargs': {'sglang': {},
+ 'trtllm': {},
+ 'vllm': {}},
+ 'expert_parallel_size': 1,
+ 'free_cache_engine': True,
+ 'gpu_memory_utilization': 0.4,
+ 'ignore_eos': False,
+ 'layered_summon': False,
+ 'load_format': 'dummy',
+ 'log_prob_max_token_len_per_gpu': 16384,
+ 'log_prob_micro_batch_size': None,
+ 'log_prob_micro_batch_size_per_gpu': 8,
+ 'log_prob_use_dynamic_bsz': False,
+ 'logprobs_mode': 'processed_logprobs',
+ 'max_model_len': None,
+ 'max_num_batched_tokens': 8192,
+ 'max_num_seqs': 1024,
+ 'mode': 'async',
+ 'mtp': {'_target_': 'verl.workers.config.MtpConfig',
+ 'detach_encoder': False,
+ 'enable': False,
+ 'enable_rollout': False,
+ 'enable_train': False,
+ 'method': 'mtp',
+ 'mtp_loss_scaling_factor': 0.1,
+ 'num_speculative_tokens': 1,
+ 'speculative_algorithm': 'EAGLE',
+ 'speculative_eagle_topk': 1,
+ 'speculative_num_draft_tokens': 4,
+ 'speculative_num_steps': 3},
+ 'multi_stage_wake_up': False,
+ 'multi_turn': {'_target_': 'verl.workers.config.MultiTurnConfig',
+ 'enable': False,
+ 'format': 'hermes',
+ 'interaction_config_path': None,
+ 'max_assistant_turns': None,
+ 'max_parallel_calls': 1,
+ 'max_tool_response_length': 256,
+ 'max_user_turns': None,
+ 'num_repeat_rollouts': None,
+ 'tokenization_sanity_check_mode': 'strict',
+ 'tool_config_path': None,
+ 'tool_response_truncate_side': 'middle',
+ 'use_inference_chat_template': False},
+ 'n': 1,
+ 'name': 'vllm',
+ 'over_sample_rate': 0,
+ 'pipeline_model_parallel_size': 1,
+ 'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig',
+ 'all_ranks': True,
+ 'enable': True,
+ 'ranks': [],
+ 'save_path': 'outputs/profile',
+ 'tool': 'nsys',
+ 'tool_config': {'npu': {'_target_': 'verl.utils.profiler.config.NPUToolConfig',
+ 'analysis': True,
+ 'contents': [],
+ 'discrete': False,
+ 'level': 'level0'},
+ 'torch': {'_target_': 'verl.utils.profiler.config.TorchProfilerToolConfig',
+ 'contents': [],
+ 'discrete': False}}},
+ 'prometheus': {'_target_': 'verl.workers.config.PrometheusConfig',
+ 'enable': False,
+ 'file': '/tmp/ray/session_latest/metrics/prometheus/prometheus.yml',
+ 'port': 9090,
+ 'served_model_name': 'Qwen/Qwen2.5-0.5B-Instruct'},
+ 'prompt_length': 512,
+ 'qat': {'activation_observer': 'static_minmax',
+ 'enable': False,
+ 'group_size': 16,
+ 'ignore_patterns': ['lm_head',
+ 'embed_tokens',
+ 're:.*mlp.gate$'],
+ 'mode': 'w4a16',
+ 'quantization_config_path': None},
+ 'quantization': None,
+ 'quantization_config_file': None,
+ 'response_length': 512,
+ 'scheduling_policy': 'fcfs',
+ 'skip_dump_dir': '/tmp/rollout_dump',
+ 'skip_rollout': False,
+ 'skip_tokenizer_init': True,
+ 'temperature': 1.0,
+ 'tensor_model_parallel_size': 1,
+ 'top_k': -1,
+ 'top_p': 1,
+ 'trace': {'_target_': 'verl.workers.config.TraceConfig',
+ 'backend': None,
+ 'max_samples_per_step_per_worker': None,
+ 'token2text': False},
+ 'val_kwargs': {'_target_': 'verl.workers.config.SamplingConfig',
+ 'do_sample': False,
+ 'n': 1,
+ 'temperature': 0,
+ 'top_k': -1,
+ 'top_p': 1.0}}},
+ 'algorithm': {'_target_': 'verl.trainer.config.AlgoConfig',
+ 'adv_estimator': 'gae',
+ 'gamma': 1.0,
+ 'kl_ctrl': {'_target_': 'verl.trainer.config.KLControlConfig',
+ 'horizon': 10000,
+ 'kl_coef': 0.001,
+ 'target_kl': 0.1,
+ 'type': 'fixed'},
+ 'kl_penalty': 'kl',
+ 'lam': 1.0,
+ 'norm_adv_by_std_in_grpo': True,
+ 'pf_ppo': {'reweight_method': 'pow', 'weight_pow': 2.0},
+ 'rollout_correction': {'bypass_mode': False,
+ 'loss_type': 'ppo_clip',
+ 'rollout_is': None,
+ 'rollout_is_batch_normalize': False,
+ 'rollout_is_threshold': 2.0,
+ 'rollout_rs': None,
+ 'rollout_rs_threshold': None},
+ 'use_kl_in_reward': False,
+ 'use_pf_ppo': False},
+ 'critic': {'_target_': 'verl.workers.config.FSDPCriticConfig',
+ 'checkpoint': {'_target_': 'verl.trainer.config.CheckpointConfig',
+ 'async_save': False,
+ 'load_contents': ['model', 'optimizer', 'extra'],
+ 'mbridge_config': {},
+ 'save_contents': ['model', 'optimizer', 'extra']},
+ 'cliprange_value': 0.5,
+ 'data_loader_seed': 42,
+ 'enable': None,
+ 'forward_max_token_len_per_gpu': 32768,
+ 'forward_micro_batch_size': None,
+ 'forward_micro_batch_size_per_gpu': 8,
+ 'grad_clip': 1.0,
+ 'loss_agg_mode': 'token-mean',
+ 'model': {'_target_': 'verl.workers.config.FSDPCriticModelCfg',
+ 'enable_activation_offload': False,
+ 'enable_gradient_checkpointing': True,
+ 'external_lib': None,
+ 'fsdp_config': {'_target_': 'verl.workers.config.FSDPEngineConfig',
+ 'dtype': 'bfloat16',
+ 'entropy_checkpointing': False,
+ 'entropy_from_logits_with_chunking': False,
+ 'forward_only': False,
+ 'forward_prefetch': False,
+ 'fsdp_size': -1,
+ 'full_determinism': False,
+ 'model_dtype': 'fp32',
+ 'offload_policy': False,
+ 'optimizer_offload': False,
+ 'param_offload': False,
+ 'reshard_after_forward': True,
+ 'seed': 42,
+ 'strategy': 'fsdp',
+ 'ulysses_sequence_parallel_size': 1,
+ 'use_orig_params': False,
+ 'use_torch_compile': True,
+ 'wrap_policy': {'min_num_params': 0}},
+ 'lora_alpha': 16,
+ 'lora_rank': 0,
+ 'override_config': {},
+ 'path': 'Qwen/Qwen2.5-0.5B-Instruct',
+ 'target_modules': 'all-linear',
+ 'tiled_mlp': {'enabled': False, 'num_shards': 4},
+ 'tokenizer_path': 'Qwen/Qwen2.5-0.5B-Instruct',
+ 'trust_remote_code': False,
+ 'use_remove_padding': False,
+ 'use_shm': False},
+ 'optim': {'_target_': 'verl.workers.config.FSDPOptimizerConfig',
+ 'betas': [0.9, 0.999],
+ 'clip_grad': 1.0,
+ 'lr': 1e-05,
+ 'lr_scheduler_type': 'constant',
+ 'lr_warmup_steps': -1,
+ 'lr_warmup_steps_ratio': 0.0,
+ 'min_lr_ratio': 0.0,
+ 'num_cycles': 0.5,
+ 'optimizer': 'AdamW',
+ 'optimizer_impl': 'torch.optim',
+ 'override_optimizer_config': None,
+ 'total_training_steps': -1,
+ 'warmup_style': None,
+ 'weight_decay': 0.01,
+ 'zero_indexed_step': True},
+ 'ppo_epochs': 1,
+ 'ppo_max_token_len_per_gpu': 32768,
+ 'ppo_micro_batch_size': None,
+ 'ppo_micro_batch_size_per_gpu': 8,
+ 'ppo_mini_batch_size': 64,
+ 'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig',
+ 'all_ranks': True,
+ 'enable': True,
+ 'ranks': [],
+ 'save_path': 'outputs/profile',
+ 'tool': 'nsys',
+ 'tool_config': {'npu': {'_target_': 'verl.utils.profiler.config.NPUToolConfig',
+ 'analysis': True,
+ 'contents': [],
+ 'discrete': False,
+ 'level': 'level0'},
+ 'nsys': {'_target_': 'verl.utils.profiler.config.NsightToolConfig',
+ 'discrete': False},
+ 'torch': {'_target_': 'verl.utils.profiler.config.TorchProfilerToolConfig',
+ 'contents': [],
+ 'discrete': False},
+ 'torch_memory': {'_target_': 'verl.utils.profiler.config.TorchMemoryToolConfig',
+ 'stack_depth': 32,
+ 'trace_alloc_max_entries': 100000}}},
+ 'rollout_n': 1,
+ 'shuffle': False,
+ 'strategy': 'fsdp',
+ 'ulysses_sequence_parallel_size': 1,
+ 'use_dynamic_bsz': False},
+ 'data': {'apply_chat_template_kwargs': {},
+ 'custom_cls': {'name': None, 'path': None},
+ 'datagen': {'name': None, 'path': None},
+ 'dataloader_num_workers': 8,
+ 'filter_overlong_prompts': False,
+ 'filter_overlong_prompts_workers': 1,
+ 'image_key': 'images',
+ 'image_patch_size': 14,
+ 'max_prompt_length': 512,
+ 'max_response_length': 512,
+ 'prompt_key': 'prompt',
+ 'return_full_prompt': False,
+ 'return_multi_modal_inputs': True,
+ 'return_raw_chat': True,
+ 'return_raw_input_ids': False,
+ 'reward_fn_key': 'data_source',
+ 'sampler': {'class_name': None, 'class_path': None},
+ 'seed': None,
+ 'shuffle': True,
+ 'tokenizer': None,
+ 'tool_config_path': None,
+ 'train_batch_size': 256,
+ 'train_files': '/root/data/gsm8k/train.parquet',
+ 'train_max_samples': -1,
+ 'truncation': 'error',
+ 'trust_remote_code': False,
+ 'use_shm': False,
+ 'val_batch_size': None,
+ 'val_files': '/root/data/gsm8k/test.parquet',
+ 'val_max_samples': -1,
+ 'validation_shuffle': False,
+ 'video_key': 'videos'},
+ 'global_profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig',
+ 'discrete': False,
+ 'global_tool_config': {'nsys': {'_target_': 'verl.utils.profiler.config.NsightToolConfig',
+ 'controller_nsight_options': {'cuda-graph-trace': 'graph',
+ 'cuda-memory-usage': 'true',
+ 'trace': 'cuda,nvtx,cublas,ucx'},
+ 'discrete': False,
+ 'worker_nsight_options': {'capture-range': 'cudaProfilerApi',
+ 'capture-range-end': None,
+ 'cuda-graph-trace': 'graph',
+ 'cuda-memory-usage': 'true',
+ 'kill': 'none',
+ 'trace': 'cuda,nvtx,osrt'}},
+ 'torch_memory': {'context': 'all',
+ 'kw_args': {},
+ 'stack_depth': 32,
+ 'stacks': 'all',
+ 'trace_alloc_max_entries': 100000}},
+ 'profile_continuous_steps': False,
+ 'save_path': 'outputs/profile',
+ 'steps': [1, 2, 5, 10],
+ 'tool': 'nsys'},
+ 'model_engine': 'dp',
+ 'ray_kwargs': {'ray_init': {'num_cpus': None}, 'timeline_json_file': None},
+ 'reward': {'custom_reward_function': {'name': 'compute_score', 'path': None},
+ 'num_workers': 8,
+ 'reward_manager': {'_target_': 'verl.workers.config.reward_model.RewardManagerConfig',
+ 'module': {'_target_': 'verl.trainer.config.config.ModuleConfig',
+ 'name': 'custom_reward_manager',
+ 'path': None},
+ 'name': 'naive',
+ 'source': 'register'},
+ 'reward_model': {'enable': False,
+ 'enable_resource_pool': False,
+ 'model_path': None,
+ 'n_gpus_per_node': 8,
+ 'nnodes': 0,
+ 'rollout': {'_target_': 'verl.workers.config.RolloutConfig',
+ 'cudagraph_capture_sizes': None,
+ 'data_parallel_size': 1,
+ 'disable_log_stats': True,
+ 'dtype': 'bfloat16',
+ 'enable_chunked_prefill': True,
+ 'enable_prefix_caching': True,
+ 'enforce_eager': True,
+ 'engine_kwargs': {},
+ 'expert_parallel_size': 1,
+ 'free_cache_engine': True,
+ 'gpu_memory_utilization': 0.5,
+ 'limit_images': None,
+ 'load_format': 'auto',
+ 'max_model_len': None,
+ 'max_num_batched_tokens': 8192,
+ 'max_num_seqs': 1024,
+ 'name': '???',
+ 'prompt_length': 2048,
+ 'response_length': 2048,
+ 'skip_tokenizer_init': False,
+ 'tensor_model_parallel_size': 2}},
+ 'sandbox_fusion': {'max_concurrent': 64,
+ 'memory_limit_mb': 1024,
+ 'url': None}},
+ 'trainer': {'balance_batch': True,
+ 'critic_warmup': 0,
+ 'default_hdfs_dir': None,
+ 'default_local_dir': 'checkpoints/verl_examples/rl_ppo_Qwen2.5-0.5B-Instruct_ep1_mb8',
+ 'del_local_ckpt_after_load': False,
+ 'device': 'cuda',
+ 'esi_redundant_time': 0,
+ 'experiment_name': 'rl_ppo_Qwen2.5-0.5B-Instruct_ep1_mb8',
+ 'log_val_generations': 0,
+ 'logger': ['console', 'wandb'],
+ 'max_actor_ckpt_to_keep': None,
+ 'max_critic_ckpt_to_keep': None,
+ 'n_gpus_per_node': 1,
+ 'nnodes': 1,
+ 'project_name': 'verl_examples',
+ 'ray_wait_register_center_timeout': 300,
+ 'resume_from_path': None,
+ 'resume_mode': 'auto',
+ 'rollout_data_dir': None,
+ 'save_freq': 10,
+ 'test_freq': 10,
+ 'total_epochs': 1,
+ 'total_training_steps': 10,
+ 'use_legacy_worker_impl': 'auto',
+ 'val_before_train': False,
+ 'val_only': False,
+ 'validation_data_dir': None},
+ 'transfer_queue': {'enable': False}}
+ [validate_config] All configuration checks passed successfully!
+ Using dataset class: RLHFDataset
+ dataset len: 7473
+ Using dataset class: RLHFDataset
+ dataset len: 1319
+ Size of train dataloader: 29, Size of val dataloader: 1
+ Total training steps: 10
+ colocated worker base class <class 'verl.single_controller.base.worker.Worker'>
worker-a711ab381f1202e338fc2083afa6dd5133aebf91969e1e83b35a9610-01000000-12223.err ADDED
@@ -0,0 +1,21 @@
+ :job_id:01000000
+ WARNING:2026-02-27 00:31:13,385:fused_indices_to_multihot has reached end of life. Please migrate to a non-experimental function.
+ :actor_name:WorkerDict
+ /workspace/verl/verl/utils/tokenizer.py:109: UserWarning: Failed to create processor: Unsupported processor type: Qwen2TokenizerFast. This may affect multimodal processing
+ warnings.warn(f"Failed to create processor: {e}. This may affect multimodal processing", stacklevel=1)
+ `torch_dtype` is deprecated! Use `dtype` instead!
+ Flash Attention 2 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in Qwen2ForTokenClassification is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", dtype=torch.float16)`
+ Flash Attention 2 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in Qwen2Model is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", dtype=torch.float16)`
+ Some weights of Qwen2ForTokenClassification were not initialized from the model checkpoint at Qwen/Qwen2.5-0.5B-Instruct and are newly initialized: ['score.bias', 'score.weight']
+ You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
+ /usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_init_utils.py:430: UserWarning: FSDP is switching to use `NO_SHARD` instead of ShardingStrategy.FULL_SHARD since the world size is 1.
+ warnings.warn(
+ /workspace/verl/verl/utils/tokenizer.py:109: UserWarning: Failed to create processor: Unsupported processor type: Qwen2TokenizerFast. This may affect multimodal processing
+ warnings.warn(f"Failed to create processor: {e}. This may affect multimodal processing", stacklevel=1)
+ Flash Attention 2 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in Qwen2ForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", dtype=torch.float16)`
+ /usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/_init_utils.py:430: UserWarning: FSDP is switching to use `NO_SHARD` instead of ShardingStrategy.FULL_SHARD since the world size is 1.
+ warnings.warn(
+ /workspace/verl/verl/utils/tokenizer.py:109: UserWarning: Failed to create processor: Unsupported processor type: Qwen2TokenizerFast. This may affect multimodal processing
+ warnings.warn(f"Failed to create processor: {e}. This may affect multimodal processing", stacklevel=1)
+ /usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:675: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .
+ warnings.warn(
worker-a711ab381f1202e338fc2083afa6dd5133aebf91969e1e83b35a9610-01000000-12223.out ADDED
@@ -0,0 +1,79 @@
+ :job_id:01000000
+ :actor_name:WorkerDict
+ [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
+ Critic overriding config {'bos_token_id': None, 'eos_token_id': 151645, 'pad_token_id': 151643}
+ Monkey patch state_dict in AutoModelForCausalLMWithValueHead.
+ Skipping monkey patch for Qwen2ForTokenClassification as use_fused_kernels is False or fused_kernels_backend is None
+ Qwen2ForTokenClassification contains 494.03M parameters
+ Before critic FSDP, memory allocated (GB): 0.00, memory reserved (GB): 0.00, device memory used/total (GB): 0.21/22.28
+ NCCL version 2.27.5+cuda12.9
+ After critic FSDP, memory allocated (GB): 1.84, memory reserved (GB): 2.45, device memory used/total (GB): 3.26/22.28
+ Total steps: 10, num_warmup_steps: 0
+ Critic use_remove_padding=False
+ Model config after override: Qwen2Config {
+ "architectures": [
+ "Qwen2ForCausalLM"
+ ],
+ "attention_dropout": 0.0,
+ "dtype": "bfloat16",
+ "eos_token_id": 151645,
+ "hidden_act": "silu",
+ "hidden_size": 896,
+ "initializer_range": 0.02,
+ "intermediate_size": 4864,
+ "layer_types": [
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention",
+ "full_attention"
+ ],
+ "max_position_embeddings": 32768,
+ "max_window_layers": 21,
+ "model_type": "qwen2",
+ "num_attention_heads": 14,
+ "num_hidden_layers": 24,
+ "num_key_value_heads": 2,
+ "pad_token_id": 151643,
+ "rms_norm_eps": 1e-06,
+ "rope_scaling": null,
+ "rope_theta": 1000000.0,
+ "sliding_window": null,
+ "tie_word_embeddings": true,
+ "transformers_version": "4.57.3",
+ "use_cache": true,
+ "use_sliding_window": false,
+ "vocab_size": 151936
+ }
+
+ Monkey patch state_dict in AutoModelForCausalLMWithValueHead.
+ Monkey patch _flash_attention_forward in transformers.integrations.flash_attention
+ Skipping monkey patch for Qwen2ForCausalLM as use_fused_kernels is False or fused_kernels_backend is torch
+ Qwen2ForCausalLM contains 494.03M parameters
+ wrap_policy: functools.partial(<function _or_policy at 0x7ad8cd487920>, policies=[functools.partial(<function transformer_auto_wrap_policy at 0x7ad8cd4877e0>, transformer_layer_cls={<class 'transformers.models.qwen2.modeling_qwen2.Qwen2DecoderLayer'>})])
+ Total steps: 10, num_warmup_steps: 0
+ Actor use_remove_padding=True
+ Actor use_fused_kernels=False
+ Actor use_prefix_grouper=False
+ [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
+ [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
+ [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
worker-a99abd04b70eed71bbc2b85849964e1f45cdec8a7b96f35e101ab940-01000000-13106.err ADDED
@@ -0,0 +1,13 @@
+ :job_id:01000000
+ :actor_name:vLLMHttpServer
+ WARNING:2026-02-27 00:31:57,432:fused_indices_to_multihot has reached end of life. Please migrate to a non-experimental function.
+ /workspace/verl/verl/utils/tokenizer.py:109: UserWarning: Failed to create processor: Unsupported processor type: Qwen2TokenizerFast. This may affect multimodal processing
+ warnings.warn(f"Failed to create processor: {e}. This may affect multimodal processing", stacklevel=1)
+ WARNING:2026-02-27 00:32:01,974:agent loop only support torch and npu profiler, got nsys
+ INFO:2026-02-27 00:32:01,974:vLLMHttpServer, replica_rank: 0, node_rank: 0, CUDA_VISIBLE_DEVICES: 0, master_address: 10.128.0.163, master_port: 50077, data_parallel_rpc_port: 50281, data_parallel_master_port: 50363
+ INFO:2026-02-27 00:32:01,981:override_generation_config: {'temperature': 1.0, 'top_k': -1, 'top_p': 1, 'repetition_penalty': 1.0, 'max_new_tokens': 512}
+ INFO:2026-02-27 00:32:01,981:enable_sleep_mode: True
+ (Worker pid=13325) /usr/lib/python3.12/multiprocessing/resource_tracker.py:147: UserWarning: resource_tracker: process died unexpectedly, relaunching. Some resources might leak.
+ (Worker pid=13325) warnings.warn('resource_tracker: process died unexpectedly, '
+ !!!!!!! Segfault encountered !!!!!!!
+
worker-d94856809cc941df63bf786bee51e2d1396c50fb3b25d48a4be64edf-ffffffff-11373.err ADDED
File without changes
worker-e98598eeddae739fb0211beef22a201ab9028016b2b64fe185d8c813-01000000-12584.out ADDED
@@ -0,0 +1,2 @@
+ :job_id:01000000
+ :actor_name:RewardLoopWorker