Add per-client backpressure #1181
Changes from all commits: 4b48762, 597b950, 1190fa3, 80b6f73
Is there any way we can see this from the outside? It would be interesting to be able to track this and watch how it changes, but I'm not sure how to bubble it out to where we could grab it with the rest of collect_stats()
.
I added this to the existing up-status DTrace probe in 597b950. (The string is now 768 bytes long with all zeros, so still well below the strsize=1k limit.)
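For context, here is a minimal sketch of how the per-client delays might ride along in the status string handed to that probe. The type and field names are illustrative, not Crucible's actual definitions; only the probe name and the strsize constraint come from the comment above (assumes the serde and serde_json crates):

```rust
use serde::Serialize;

/// Hypothetical status snapshot that gets serialized to JSON and
/// emitted as the string argument of the `up-status` USDT probe.
#[derive(Serialize)]
struct UpStatus {
    // ... existing per-client counters elided ...
    /// New field: current backpressure delay applied to each of the
    /// three Downstairs clients, in microseconds.
    ds_delay_us: [u64; 3],
}

fn main() {
    let status = UpStatus {
        ds_delay_us: [0, 0, 250],
    };
    // The serialized string must stay under the DTrace consumer's
    // strsize (1 KiB here); the worst case quoted above is 768 bytes.
    let s = serde_json::to_string(&status).unwrap();
    assert!(s.len() < 1024);
    // In Crucible this string would be handed to the probe rather
    // than printed.
    println!("{s}");
}
```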
If we're not setting the per-client time, should we clear it? What if one Downstairs is Faulted, or in replay? The previously set delays will still apply.
I don't think so. If we did that, the first two replies to a job would clear the delay, and the third reply would set it (before it's cleared again by the first reply to the next job).
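To illustrate the ordering problem, here is a small hypothetical sketch (none of these names are Crucible's real API) of what "clear the delay on every reply that doesn't set it" would do:

```rust
/// Hypothetical per-client state; Crucible's real types differ.
#[derive(Default)]
struct Client {
    delay_us: u64,
}

/// Called once per client reply to a job. `is_slowest` marks the final
/// reply, the only one that computes a delay. `clear_when_unset`
/// models the behavior being argued against here.
fn on_reply(client: &mut Client, is_slowest: bool, delay_us: u64, clear_when_unset: bool) {
    if is_slowest {
        client.delay_us = delay_us;
    } else if clear_when_unset {
        client.delay_us = 0;
    }
}

fn main() {
    let mut c = Client::default();
    // Job N: two fast replies arrive first, then the slow one sets a delay.
    on_reply(&mut c, false, 500, true);
    on_reply(&mut c, false, 500, true);
    on_reply(&mut c, true, 500, true);
    assert_eq!(c.delay_us, 500);
    // Job N+1: its first reply immediately clears the delay again, so
    // the delay never stays in effect.
    on_reply(&mut c, false, 500, true);
    assert_eq!(c.delay_us, 0);
}
```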
I added logic to clear the per-client delay in Client::reinitialize (1190fa3).

Maybe we should clear it if any of the clients restart? Otherwise, I could imagine a situation where …
👍 That sounds like a good idea, yeah.
Okay, done in 80b6f73
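For readers following along, a hedged sketch of the "clear on any restart" behavior settled on above; the types and the method name are illustrative, not Crucible's actual code:

```rust
/// Hypothetical upstairs holding the three Downstairs clients.
struct Upstairs {
    clients: [ClientState; 3],
}

struct ClientState {
    delay_us: u64,
}

impl Upstairs {
    /// Invoked when client `i` reconnects (conceptually, from the
    /// reinitialize path). Any existing delays were computed against
    /// the old set of Downstairs, so drop them for every client, not
    /// just the one that restarted.
    fn on_client_restart(&mut self, _i: usize) {
        for c in &mut self.clients {
            c.delay_us = 0;
        }
    }
}

fn main() {
    let mut up = Upstairs {
        clients: [
            ClientState { delay_us: 250 },
            ClientState { delay_us: 0 },
            ClientState { delay_us: 125 },
        ],
    };
    up.on_client_restart(1);
    assert!(up.clients.iter().all(|c| c.delay_us == 0));
}
```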