Hi,
we have problems scaling up a specific workflow which itself has dynamically parallel workflow nodes. When we increase the parallelisation, some workflow nodes are marked as failed.
We saw the following messages in the workflow controller:

E0309 07:01:58.457340 1 event.go:468] "Unable to record event: too many queued events, dropped event" event="&Event{ObjectMeta:{test-b5g7g.189b1a3adf4d4921 workflows 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[workflows.argoproj.io/node-id:test-b5g7g-331028202 workflows.argoproj.io/node-name:test-b5g7g.test(13:file-name:chunk-13.parquet).select-artifact-for-test workflows.argoproj.io/node-start-time:1773038844000000000 workflows.argoproj.io/node-type:Pod] [] [] []},InvolvedObject:ObjectReference{Kind:Workflow,Namespace:workflows,Name:test-b5g7g,UID:fe0bc401-0ccf-4167-b0a6-e7d7a4bb2d8c,APIVersion:argoproj.io/v1alpha1,ResourceVersion:150546526,FieldPath:,},Reason:WorkflowNodeRunning,Message:Running node test-b5g7g.test(13:file-name:chunk-13.parquet).select-artifact-for-test,Source:EventSource{Component:workflow-controller,Host:,},FirstTimestamp:2026-03-09 07:01:58.457231649 +0000 UTC m=+310240.163275487,LastTimestamp:2026-03-09 07:01:58.457231649 +0000 UTC m=+310240.163275487,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:workflow-controller,ReportingInstance:,}"

It looks like this message comes from the k8s client library that handles k8s events:
https://github.com/kubernetes/kubernetes/blob/d0bd636b3e3c1ea7e7d732cb2c20616b0052829c/staging/src/k8s.io/client-go/tools/record/event.go#L468
Are those messages merely informational, or might they indicate a problem with our setup?
Is there a way to increase the event queue size?