perf: respect multinode consolidation timeout in all cases #2025
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: rschalo
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Pull Request Test Coverage Report for Build 13532586135
💛 - Coveralls
for min <= max {
	if m.clock.Now().After(timeout) {
		// Check for timeout using select
		select {
Can we just check for if ctx.Err() != nil here, and does that make this a bit cleaner? (no select, effectively, and a smaller diff)
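For reference, a minimal sketch of the two equivalent checks being discussed, assuming only a plain context.Context; the function names are illustrative, not from the PR:

```go
package consolidation

import "context"

// timedOutSelect mirrors the approach in the diff: a non-blocking select on ctx.Done().
func timedOutSelect(ctx context.Context) bool {
	select {
	case <-ctx.Done():
		return true
	default:
		return false
	}
}

// timedOut is the reviewer's suggestion: ctx.Err() is non-nil once the context
// has been cancelled or its deadline has passed, so a plain error check suffices.
func timedOut(ctx context.Context) bool {
	return ctx.Err() != nil
}
```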
@@ -106,6 +107,10 @@ func SimulateScheduling(ctx context.Context, kubeClient client.Client, cluster *
if _, ok := deletingNodePodKeys[client.ObjectKeyFromObject(p)]; !ok {
	results.PodErrors[p] = NewUninitializedNodeError(n)
}
// check for a pod that was not scheduled due to ctx timeout
I'm probably missing something, but what is this for? Shouldn't we normally fail scheduling if there is a pod error?
This is probably unnecessary. I was using it to error on line 142 after calling SimulateScheduling in computeConsolidation, but we do check whether pods failed to schedule on line 146. We'd then return empty results, no command, and a nil error, and continue the loop.
The loop then either breaks because min > max and returns the last saved items, or the timeout bails out of consolidation and returns the last saved items plus an error if the context deadline was exceeded.
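To make that control flow concrete, here is a rough, self-contained sketch of the binary-search loop described above. All identifiers (Candidate, Command, simulate, firstNConsolidationOption) are illustrative stand-ins, not the actual karpenter code:

```go
package consolidation

import (
	"context"
	"errors"
)

// Hypothetical stand-ins for the real scheduling types, kept minimal for illustration.
type Candidate struct{}
type Command struct{ candidates []Candidate }

// simulate stands in for the expensive SimulateScheduling call; here it always succeeds.
func simulate(_ context.Context, cds []Candidate) (Command, bool) {
	return Command{candidates: cds}, true
}

// firstNConsolidationOption binary-searches for the largest number of candidates
// that can be consolidated at once, bailing out when the context times out.
func firstNConsolidationOption(ctx context.Context, candidates []Candidate) (Command, error) {
	min, max := 1, len(candidates)
	var lastSavedCommand Command
	for min <= max {
		// Timeout path: stop searching and return whatever was found so far,
		// surfacing an error only when the context deadline was exceeded.
		if ctx.Err() != nil {
			if errors.Is(ctx.Err(), context.DeadlineExceeded) {
				return lastSavedCommand, ctx.Err()
			}
			return lastSavedCommand, nil
		}
		mid := (min + max) / 2
		if cmd, ok := simulate(ctx, candidates[:mid]); ok {
			lastSavedCommand = cmd
			min = mid + 1
		} else {
			max = mid - 1
		}
	}
	// Normal path: the search converged (min > max); return the best command found.
	return lastSavedCommand, nil
}
```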
Fixes #N/A
Description
Scheduling for 50 nodes in multinode consolidation can take a long time, especially in large clusters where a scheduling decision for a node can take 20 seconds or longer. This can cause multinode consolidation to block drift, emptiness, and single node consolidation for longer than intended.
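As a rough illustration of the intent, not the actual implementation, the whole pass can be bounded by a single context deadline so every code path observes the same cutoff; the 5-second value and the evaluate helper below are assumptions for the sketch:

```go
package consolidation

import (
	"context"
	"time"
)

// consolidateWithTimeout caps the entire multinode pass with one deadline so a
// slow scheduling simulation cannot block other disruption methods indefinitely.
func consolidateWithTimeout(parent context.Context, candidates []string) []string {
	ctx, cancel := context.WithTimeout(parent, 5*time.Second)
	defer cancel()

	var consolidated []string
	for _, c := range candidates {
		// Every iteration checks the same deadline, so the timeout is
		// respected regardless of which path the evaluation takes.
		if ctx.Err() != nil {
			break
		}
		consolidated = append(consolidated, evaluate(ctx, c))
	}
	return consolidated
}

// evaluate stands in for the per-candidate scheduling simulation.
func evaluate(_ context.Context, c string) string { return c }
```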
How was this change tested?
Deployed with a 5 second timeout and saw multinode consolidation bail before exhausting the list of candidates.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.