ci(docker): remove artifacts from base CUDA image #5830
Conversation
Signed-off-by: Amadeusz Szymko <amadeusz.szymko.2@tier4.jp>
Thank you for contributing to the Autoware project! 🚧 If your pull request is in progress, switch it to draft mode. Please ensure:
Wow, I didn't even know they were a part of the images :o
Not only do the learning models disappear from the development containers, but also from the runtime containers. Are we sure that's okay?
@xmfcx https://github.com/orgs/autowarefoundation/discussions/5007#discussioncomment-10086717
@youtalk-san, I remember that discussion, but I thought they had already been removed since then 😕
I'm OK with adding them back to the runtime containers, either in this PR or a new PR. I'm not sure if this would fix it, but currently the docker-build workflow is not working, so we need a solution soon:
We discussed it in [TIER IV INTERNAL LINK] and it seems there is no need to deploy OSS images with artifacts. I would like to keep the artifacts for unit test purposes, but AFAIK we don't plan to use CI runners with CUDA runtime support.
Then LGTM
We need to update
I hope https://github.com/autowarefoundation/autoware/actions/runs/13671765416/job/38280231227?pr=5830 will succeed.
@amadeuszsz Did you check where the artifacts were installed in the old image? I couldn't find any.
@mitsudome-r
To easily check what occupies the most space, you can run the following inside the Docker container:
sudo apt update && sudo apt install ncdu -y
cd /
ncdu
You can navigate with the arrow, Enter, and Backspace keys. It sorts all folders from largest to smallest, which should make it easier to find the big blobs.
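(Side note, not from this PR: if an interactive tool isn't convenient, a rough non-interactive alternative using standard coreutils is the sketch below, which prints the 20 largest directories up to two levels deep on the root filesystem.)
du -xh -d 2 / 2>/dev/null | sort -rh | head -n 20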
So there have been a lot of misunderstandings on my part and some miscommunication. Until yesterday, I thought: But in reality: So this PR doesn't change anything from the autoware core/universe CI perspective. It only affects the docker image building workflow's size requirements.
So this could be reverted. But I don't mind either way.
@xmfcx
The thing is, the image that you've modified, universe-base-cuda, which influences universe-cuda, was not meant for CI in the first place; it was a runtime image, just for users to pull and run. Universe CI uses universe-devel-cuda, which doesn't have the artifacts anyway.
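(For context, individual targets of a multi-stage Dockerfile are typically built separately, roughly as sketched below; the stage names mirror this discussion, but the repository's actual Dockerfile and build tooling may differ.)
docker build --target universe-devel-cuda -t autoware:universe-devel-cuda .   # image used by Universe CI; never contained the artifacts
docker build --target universe-cuda -t autoware:universe-cuda .               # runtime image built on universe-base-cuda, for users to pull and run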
I see. I also considered that each artifact update forces
we can revert this PR.
We have about 6-7 GB to spare for cache. I've marked a summary here on the dockerfile.svg: There are multiple solutions we could consider:
This could also be discussed in the OpenADKit Working Group meeting next week.
We already mount the autoware_map folder.
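(As an illustration of what is meant by mounting here, i.e. a bind mount at container run time instead of baking the files into the image, a minimal sketch with a hypothetical host path and image tag:)
docker run -it --rm \
  -v "$HOME/autoware_map:/autoware_map" \
  ghcr.io/autowarefoundation/autoware:universe-cuda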
The difference is that the autoware_map folder is a simple manual download process. Also, you can only download the latest artifacts; if an artifact is removed in a new version, old images cannot be used anymore.
Good point, that kind of link between software and artifacts has now disappeared from our image.
If we go with option 1, it would be beneficial to keep the artifacts in the Docker images.
Description
Frees up about 1.7 GB of space.
How was this PR tested?
Notes for reviewers
Due to the type of CI runner, there are no unit tests that can use the ML artifacts.
Effects on system behavior
None.