
perf: respect multinode consolidation timeout in all cases #2025

Open
wants to merge 14 commits into base: main
Conversation

rschalo
Contributor

@rschalo rschalo commented Feb 21, 2025

Fixes #N/A

Description
Scheduling for 50 nodes in multinode consolidation can take a long time, especially in large clusters where a scheduling decision for a node can take 20 seconds or longer. This can cause multinode consolidation to block drift, emptiness, and single node consolidation for longer than intended.

How was this change tested?
Deployed with a 5 second timeout and saw multinode consolidation bail before exhausting the list of candidates.

disruption/multinodeconsolidation.go:83: "failed to find a multi-node consolidation after timeout, last considered batch had 8 candidates"

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@k8s-ci-robot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by: rschalo
Once this PR has been reviewed and has the lgtm label, please assign tzneal for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Feb 21, 2025
@coveralls

coveralls commented Feb 21, 2025

Pull Request Test Coverage Report for Build 13532586135

Details

  • 12 of 21 (57.14%) changed or added relevant lines in 3 files are covered.
  • 1 unchanged line in 1 file lost coverage.
  • Overall coverage increased (+0.08%) to 81.615%

Changes Missing Coverage                              Covered Lines  Changed/Added Lines  %
pkg/controllers/provisioning/provisioner.go           1              3                    33.33%
pkg/controllers/disruption/multinodeconsolidation.go  8              11                   72.73%
pkg/controllers/provisioning/scheduling/scheduler.go  3              7                    42.86%

Files with Coverage Reduction                         New Missed Lines  %
pkg/controllers/disruption/multinodeconsolidation.go  1                 86.86%

Totals
Change from base Build 13509538416: +0.08%
Covered Lines: 9296
Relevant Lines: 11390

💛 - Coveralls

@rschalo rschalo marked this pull request as draft February 24, 2025 18:50
@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 24, 2025
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 24, 2025
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 24, 2025
@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Feb 25, 2025
@rschalo rschalo marked this pull request as ready for review February 25, 2025 21:12
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 25, 2025
```go
for min <= max {
	if m.clock.Now().After(timeout) {
		// Check for timeout using select
		select {
```
Contributor


Can we just check for if ctx.Err() != nil here, and does that make this a bit cleaner? (no select effectively and a smaller diff)

```diff
@@ -106,6 +107,10 @@ func SimulateScheduling(ctx context.Context, kubeClient client.Client, cluster *
if _, ok := deletingNodePodKeys[client.ObjectKeyFromObject(p)]; !ok {
	results.PodErrors[p] = NewUninitializedNodeError(n)
}
// check for a pod that was not scheduled due to ctx timeout
```
Contributor


I'm probably missing something, but what is this for? Shouldn't we normally fail scheduling if there is a pod error?

Contributor Author


This is probably unnecessary. It was used to raise an error on line 142 after calling SimulateScheduling in computeConsolidation, but we already check whether pods failed to schedule on line 146. In that case we return empty results (no command, a nil error) and continue the loop.

The loop then either breaks because min > max and returns the last saved items, or the timeout bails out of consolidation and returns the last saved items plus an error if the context deadline was exceeded.

@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Feb 25, 2025