ci: Use bare-metal OTEL runners for benchmark CI for pushes to main #2832
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##             main   #2832   +/-   ##
=====================================
  Coverage    80.4%   80.4%
=====================================
  Files         124     124
  Lines       23390   23390
=====================================
+ Hits        18824   18828      +4
+ Misses       4566    4562      -4
Force-pushed from e51e003 to e783389.
# If we're running on a PR, use ubuntu-latest - a shared runner. We can't use the self-hosted
# runners on arbitrary PRs, and we don't want to unleash that load on the pool anyway.
# If we're running on main, use the OTEL self-hosted runner pool.
runs-on: ${{ github.event_name == 'pull_request' && 'ubuntu-latest' || 'self-hosted' }}
Switch between ubuntu-latest (labelled PR) and the self-hosted pool (pushes to main)
# If we're running on main, use the OTEL self-hosted runner pool.
runs-on: ${{ github.event_name == 'pull_request' && 'ubuntu-latest' || 'self-hosted' }}
if: |
  ${{
This maintains our existing filter: the performance label is still required for the workflow to fire on a PR.
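The `if:` condition is truncated in the quoted diff. A hedged sketch of what a filter like this typically looks like (a reconstruction from the comment's description, not the PR's verbatim workflow):

```yaml
# Hypothetical sketch: always run on pushes (to main), but only run
# on PRs that carry the 'performance' label.
if: |
  ${{
    github.event_name == 'push' ||
    contains(github.event.pull_request.labels.*.name, 'performance')
  }}
```

`contains()` over `labels.*.name` is the standard GitHub Actions idiom for checking whether a PR carries a given label.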
env:
  # For PRs, compare against the base branch - e.g., 'main'.
  # For pushes to main, compare against the previous commit
  BRANCH_NAME: ${{ github.event_name == 'pull_request' && github.base_ref || github.event.before }}
This switches to comparing against the previous commit if we are building for a push to main.
For PRs, we compare against the current state of main.
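A hypothetical sketch of how a later step might consume `BRANCH_NAME` to produce the comparison (step name and commands are illustrative; Criterion's `--save-baseline`/`--baseline` flags are real, but the PR's actual step may differ):

```yaml
# Hypothetical sketch: benchmark the comparison point first, then the
# current checkout, and let Criterion diff the two baselines.
- name: Benchmark against comparison point
  run: |
    git fetch origin "${BRANCH_NAME}"
    git checkout "${BRANCH_NAME}"
    cargo bench -- --save-baseline before
    git checkout -
    cargo bench -- --baseline before
```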
@@ -0,0 +1,58 @@
# This workflow runs a Criterion benchmark on a PR and compares the results against the base branch.
Extra review facts:
- This is actually a rename-and-change of pr_criterion, but git has got confused.
- For the branches that behave differently on push-to-main, I've tested that they work by inverting the logic and pushing. I am confident the "template magic" is good.
Thanks for the detailed comments!
I'm nothing if not wordy 😆
I'll need to get the build rule updated on the runners on the issue, now, before this will work on main. On it.
Supersedes #2813. This PR triggers our benchmark suite on pushes to main, regression-testing performance against the previous commit on main.
Changes
Following on from open-telemetry/community#2616, this PR moves our criterion suite to use the self-hosted OTEL runners for pushes to main, whilst continuing to use the ubuntu shared runners for PRs, on request, when a PR is labelled with performance. I've also renamed the workflow to benchmark, as it's no longer tied only to the PR workflow.
Why?
Opting in to PR regression runs against main gives us an easy way to check the performance of a PR we're interested in against main - e.g., if it looks performance-impacting one way or another. This avoids the ad-hoc, "works on my machine" re-running of the benchmark suite.
Running performance regressions on the dedicated runners for commits to main will give us a stable, long-term record of the performance impact of changes.
What else?
This looks more intense than it is. Really there's just a bit of logic added to switch 1/ the worker pool and 2/ the "point of comparison" if we're running on a push to main. I've tested these bits work by inverting the logic and pushing.
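The two switches described above can be sketched together as one job header (a hedged reconstruction assembled from the snippets quoted in the review, not the verbatim workflow file):

```yaml
jobs:
  benchmark:
    # 1/ worker pool: shared runner for PRs, OTEL self-hosted pool for main
    runs-on: ${{ github.event_name == 'pull_request' && 'ubuntu-latest' || 'self-hosted' }}
    env:
      # 2/ point of comparison: base branch for PRs, previous commit for pushes
      BRANCH_NAME: ${{ github.event_name == 'pull_request' && github.base_ref || github.event.before }}
```

Both switches rely on the same `condition && a || b` ternary idiom in GitHub Actions expressions, which works here because neither `'ubuntu-latest'` nor `github.base_ref` is falsy when selected.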
Merge requirement checklist
- CHANGELOG.md files updated for non-trivial, user-facing changes