Update check and release workflow to release only tested charms #161

Open
wants to merge 12 commits into main

Conversation

@chanchiwai-ray (Contributor) commented Mar 3, 2025

The changes in this PR need to be applied manually to these projects, since they only have check.yaml or release.yaml:

  1. charm-prometheus-juju-exporter
  2. charm-simple-streams

Manage the check.yaml for hardware-observer again; see canonical/hardware-observer-operator#401 for an example.

Also closes: #118

samuelallan72 previously approved these changes Mar 3, 2025

@samuelallan72 (Contributor) left a comment

LGTM, thanks!

Pjack previously approved these changes Mar 4, 2025

@samuelallan72 (Contributor) left a comment

I like this approach, but I had a concern about caching in the charm pack action.

Comment on lines 91 to 93

    - uses: canonical/craft-actions/charmcraft/pack@main
      id: build
      with:
        verbosity: verbose
        channel: "${ charmcraft_channel }"
Contributor

Can we check the details of how this caches things here? I have some doubts:

  • if there are multiple jobs, they will likely be running in parallel, so they won't be able to benefit from each other's caches
  • we're only going to be running multiple jobs if we need to build on different architectures. I'm worried that cache artefacts from one architecture will be different from those of another architecture.

Maybe the cache is a good idea; I haven't looked far into the details. But my personal thought is that I'd rather ensure the charm is built from a truly clean, reproducible environment, because this will be the production charm we're publishing. If the build takes a bit longer, it's still worth it imo.


Contributor Author

I did not think about the multi-arch scenario. I was thinking about e.g. rebuilding the charm during a PR, where you can be constantly pushing different commits without changing the dependencies.

Contributor

Ah of course, I wasn't sure how long the caches lived; that could definitely be helpful if it works across test runs. Maybe we need to look into setting a custom cache key if we're going to use the cache, to ensure that it depends on the architecture (if that applies) and on any requirements files, etc. that might not be picked up by the default cache key.

Although even then, if we want to be extra careful, dependencies can change even if our requirements.txt doesn't change, because we're not using lock files. It feels a little risky to me if we're using this for the production charm uploads. 😅
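
(Illustrative sketch, not part of this PR: one way to make the cache key depend on the architecture and the requirements files, assuming a plain actions/cache step were added alongside the pack action; the cache path and key prefix here are hypothetical.)

    # Hypothetical cache step: the key changes whenever the runner
    # architecture or any requirements*.txt file changes, so jobs on
    # different architectures never share cache entries.
    - uses: actions/cache@v4
      with:
        path: ~/.cache/charmcraft   # assumed cache location, not verified
        key: charmcraft-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('requirements*.txt') }}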

Contributor

Or maybe something like only using the cache for code running in PRs, or for PRs in draft mode where commits may be pushed frequently? That might help with saving time there while still ensuring a clean environment for the production build?
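
(Again just an illustrative sketch, not code from this PR: gating the cache step on the event type so that only pull request runs restore it, while release builds always start clean. The condition, path, and key names are assumptions.)

    # Hypothetical: restore the cache only for pull_request events, so the
    # production release build never reuses cached artefacts.
    - uses: actions/cache@v4
      if: ${{ github.event_name == 'pull_request' }}
      with:
        path: ~/.cache/charmcraft   # assumed cache location, not verified
        key: charmcraft-pr-${{ runner.arch }}-${{ hashFiles('requirements*.txt') }}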

@chanchiwai-ray (Contributor, Author) Mar 5, 2025

Yeah, thanks for that; maybe we should just roll back what we previously did for now. There's still at least one point that is unclear to me based on your question: does this work properly in the multi-arch case? (Note that caching the build is one part of what speeds up the testing; the other big part is that each matrix item does not need to build multiple charms individually.)
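
(For context, a rough sketch of the "build once, test in every matrix item" pattern being referred to; it is not taken from this PR's diff, and the job, artifact, and matrix names are illustrative.)

    # Hypothetical layout: one job packs the charm and uploads it as an
    # artifact; each matrix test job downloads that same .charm file
    # instead of rebuilding it.
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: canonical/craft-actions/charmcraft/pack@main
            id: build
          - uses: actions/upload-artifact@v4
            with:
              name: packed-charm
              path: "*.charm"
      test:
        needs: build
        runs-on: ubuntu-latest
        strategy:
          matrix:
            base: ["ubuntu@22.04", "ubuntu@24.04"]
        steps:
          - uses: actions/download-artifact@v4
            with:
              name: packed-charm
          # run the functional tests against the downloaded charm here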


Pjack previously approved these changes Mar 5, 2025
@@ -18,8 +18,8 @@ templates = {
vars = {
# Skip ARM64 check because the functional test runs on lxd VM which is not working
Member

I guess now we can remove this comment because we are adding arm64?

Development

Successfully merging this pull request may close these issues.

Port hardware-observer check.yaml changes to automation
4 participants