chore(Jenkinsfile) introduce Maven client side caching #4669

Draft · wants to merge 2 commits into base: master

Conversation

@dduportal (Contributor) commented Mar 10, 2025

This PR is a first "real life" test of the BOM builds using the S3 PVC-based Maven client-side caching.

It uses the ~6 GB archive (the partial cache created in #4667) as a first attempt.

The goal is to verify that this cache does not slow down the BOM builds: we want it to act at least as a protection layer so that BOM builds do not break when the ACP starts receiving HTTP/500 responses from Artifactory.

Ref. jenkins-infra/helpdesk#4525
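
For illustration only, here is a minimal sketch of the kind of cache restoration being tested; the PVC name, mount path, archive filename, and stage layout are assumptions, not the actual values from this PR or from #4667:

```groovy
/*
 * Hypothetical sketch: restore a pre-seeded Maven repository cache from an
 * S3-backed PVC mount before running the build. Names and paths are assumed.
 */
podTemplate(volumes: [
    // PVC served through the S3 CSI driver, mounted read-only in the agent pod (assumed name)
    persistentVolumeClaim(claimName: 'maven-cache-s3', mountPath: '/s3-cache', readOnly: true)
]) {
    node(POD_LABEL) {
        stage('Restore Maven cache') {
            // Uncompress the ~6 GB archive into the local Maven repository
            sh '''
              mkdir -p "$HOME/.m2/repository"
              tar -xzf /s3-cache/maven-repo-cache.tar.gz -C "$HOME/.m2/repository"
            '''
        }
        stage('Build') {
            // Maven resolves from the warm local repository first, falling back to ACP/Artifactory
            sh 'mvn -B verify'
        }
    }
}
```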

Testing done

Submitter checklist

  • Make sure you are opening from a topic/feature/bugfix branch (right side) and not your main branch!
  • Ensure that the pull request title represents the desired changelog entry
  • Please describe what you did
  • Link to relevant issues in GitHub or Jira
  • Link to relevant pull requests, esp. upstream and downstream changes
  • Ensure you have provided tests that demonstrate the feature works or the issue is fixed

Signed-off-by: Damien Duportal <damien.duportal@gmail.com>
@dduportal (Contributor, Author) commented:

First attempt: restoring the archive takes between 5 and 13 minutes during the parallel stages (versus ~1 minute during the prep stage). It does not scale well in the current setup.

Trying a 2-step process instead of reading the tar directly from the S3 mount: 1. copy the archive from the S3 share to a local empty dir, 2. uncompress the archive from that local copy into another local empty dir. The goal is to identify what is not scaling: the local node filesystem (step 2 would then be slow) or the S3 accesses (step 1 would then be slow). A rough sketch of this 2-step restoration is shown below.
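
A hypothetical sketch of that 2-step restoration, assuming the same mount path and archive name as above (both assumptions), with each step timed separately to locate the bottleneck:

```groovy
/*
 * Hypothetical sketch: split the restore into a plain copy and a local
 * uncompress so the slow step identifies the bottleneck. Paths are assumed.
 */
stage('Restore Maven cache (2 steps)') {
    sh '''
      set -eux
      mkdir -p /tmp/cache "$HOME/.m2/repository"

      # Step 1: plain copy from the S3 CSI mountpoint to a local emptyDir.
      # If this step is slow, the S3 access path is the bottleneck.
      time cp /s3-cache/maven-repo-cache.tar.gz /tmp/cache/

      # Step 2: uncompress from the local copy into the local Maven repository.
      # If this step is slow, the local node filesystem is the bottleneck.
      time tar -xzf /tmp/cache/maven-repo-cache.tar.gz -C "$HOME/.m2/repository"
    '''
}
```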

Signed-off-by: Damien Duportal <damien.duportal@gmail.com>
@dduportal (Contributor, Author) commented Mar 10, 2025

Second attempt: copying from S3 to the local filesystem is unsustainable. Copying from the mountpoint (S3 CSI driver) to the local filesystem (an emptyDir on a local NVMe) takes more than 10 minutes across all the parallel pct stages, so the build was cancelled. Next step is to try the `aws s3` command instead, to rule the S3 mount driver in or out.
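
A hypothetical sketch of that next step, fetching the archive with the AWS CLI instead of reading through the CSI mountpoint; the bucket name, object key, and credentials wiring are assumptions, not values from this PR:

```groovy
/*
 * Hypothetical sketch: download the cache archive directly with the AWS CLI,
 * bypassing the S3 CSI mount driver, to compare transfer times. Bucket and key
 * are placeholders.
 */
stage('Restore Maven cache (aws s3 cp)') {
    sh '''
      set -eux
      mkdir -p /tmp/cache "$HOME/.m2/repository"
      # Direct S3 download, no CSI mountpoint involved
      time aws s3 cp s3://example-maven-cache-bucket/maven-repo-cache.tar.gz /tmp/cache/
      time tar -xzf /tmp/cache/maven-repo-cache.tar.gz -C "$HOME/.m2/repository"
    '''
}
```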
