Update check and release workflow to release only tested charms #161
base: main
Conversation
LGTM, thanks!
Force-pushed from eeb4a23 to 1a764c8
I like this approach, but I had a concern about caching in the charm pack action.
- uses: canonical/craft-actions/charmcraft/pack@main
  id: build
  with:
    verbosity: verbose
    channel: "${ charmcraft_channel }"
Can we check the details of how this caches things here? I have some doubts:
- If there are multiple jobs, they will likely be running in parallel, so they won't be able to benefit from each other's caches.
- We only run multiple jobs when we need to build on different architectures, and I'm worried that cache artefacts from one architecture won't be valid for another.
Maybe the cache is a good idea; I haven't looked far into the details. But my personal view is that I'd rather ensure the charm is built in a truly clean, reproducible environment, because this is the production charm we're publishing. If the build takes a bit longer, it's still worth it, imo.
It's based on the following: https://github.com/canonical/craft-actions/blob/main/charmcraft/pack/action.yaml#L58
I did not think about the multi-arch scenario. I was thinking about, e.g., rebuilding the charm during a PR, where you might be constantly pushing different commits without changing the dependencies.
Ah, of course, I wasn't sure how long the caches lived; that could definitely be helpful if it works across test runs. Maybe we need to look into setting a custom cache key if we're going to use the cache, to ensure it depends on the architecture (if that applies) and on any requirements files, etc. that might not be picked up by the default cache key. A rough sketch of what I mean is below.
Although even then, if we want to be extra careful: dependencies can change even if our requirements.txt doesn't, because we're not using lock files. It feels a little risky to me if we're using this for the production charm uploads. 😅
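For illustration only, a custom key with actions/cache might look something like this; the cache path and key format here are my assumptions, not what the pack action actually does internally:

# Hypothetical sketch: key the cache on OS, architecture, and a hash of the
# requirements files, so a runner never restores a cache built for a
# different arch or dependency set. The cache path is an assumption.
- uses: actions/cache@v4
  with:
    path: ~/snap/charmcraft/common/cache
    key: charmcraft-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('requirements*.txt') }}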
Or maybe something like only using the cache for code running in PRs, or for PRs in draft mode when commits may be pushed frequently? That might help save time there while still ensuring a clean environment for the production build. See the sketch below.
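As a rough sketch of that idea (assuming we manage the cache step ourselves rather than relying on the pack action's built-in caching):

# Sketch: only restore/save the cache on pull_request events, so the
# production publish path always builds from a clean environment.
- uses: actions/cache@v4
  if: github.event_name == 'pull_request'
  with:
    path: ~/snap/charmcraft/common/cache  # assumed location
    key: charmcraft-${{ runner.arch }}-${{ hashFiles('requirements*.txt') }}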
Yeah, thanks for that; maybe we should just roll back what we previously did, for now. There's still at least one point that's unclear to me based on your question: does this work properly in the multi-arch case? (Note that caching the build is one part of what speeds up testing; the other big part is that each matrix item no longer needs to build multiple charms individually.)
BTW, the charmcraft team recommends using this: https://canonical-charmcraft.readthedocs-hosted.com/en/stable/howto/build-guides/shared-cache/#on-github
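If I read that how-to correctly, it boils down to something like the sketch below (paraphrased, not copied; the environment variable name and paths should be checked against the doc):

# Paraphrased sketch: point the craft tooling at a shared cache directory
# and persist that directory between runs with actions/cache.
env:
  CRAFT_SHARED_CACHE: /home/runner/craft-shared-cache
steps:
  - uses: actions/cache@v4
    with:
      path: /home/runner/craft-shared-cache
      key: craft-shared-cache-${{ runner.arch }}
  - run: |
      mkdir -p "$CRAFT_SHARED_CACHE"
      charmcraft pack -v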
The extra runs-on is no longer needed now that we release only tested charms.
Force-pushed from 4936372 to d800067
@@ -18,8 +18,8 @@ templates = {
    vars = {
        # Skip ARM64 check because the functional test runs on lxd VM which is not working
I guess now we can remove this comment because we are adding arm64?
Changes in this PR need to be applied manually for these projects, since they only have check.yaml or release.yaml.

Manage the checks.yaml for hardware-observer again; see example: canonical/hardware-observer-operator#401

Also closes: #118