Update dockerfiles to do staged builds #19952
base: master
Conversation
Force-pushed from 5287e97 to 0115b4d
{% else %}
FROM {{ prefix }}{{DOCKER_BASE_ARCH}}/debian:bookworm
ARG BASE={{ prefix }}{{DOCKER_BASE_ARCH}}/debian:bookworm
@saiarcot895
You can use Debian slim images to reduce the final image size, as was suggested in #19008.
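For reference, a minimal sketch of what that suggestion would look like in this template; the only change is the -slim tag, and whether the slim variant carries everything these containers need would have to be verified:

```dockerfile
{% else %}
# bookworm-slim omits docs and some common packages, shrinking the base image
FROM {{ prefix }}{{DOCKER_BASE_ARCH}}/debian:bookworm-slim
ARG BASE={{ prefix }}{{DOCKER_BASE_ARCH}}/debian:bookworm-slim
```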
Agreed. I want to keep the focus of this PR on unblocking Docker upgrades, but I had to bring in some space optimization work (see the COPY at the end of this file) to get things to work.
On newer versions of Docker, only the buildkit builder is supported, and it cannot be disabled by setting DOCKER_BUILDKIT to 0. The side effect of this is that the behavior of `--squash` is different (see moby/moby#38903), which will result in the container sizes being significantly higher.

To work around this, make all of our builds two-stage builds, with the `--squash` flag entirely removed. The way this works is that in the first stage, whatever new files/packages need to be added are added (along with files/packages that need to be removed). Then, in the second stage, all of the files from the final state of the first stage are copied to the second stage.

As part of this, also consolidate the container cleanup code into `post_run_cleanup`, and remove it from the individual containers (for consistency). Also experiment a bit with not explicitly installing library dependencies, and let apt install them as necessary. This will help during upgrades in the case of ABI changes for packages.

Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
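For illustration, a minimal sketch of the two-stage pattern this commit message describes; the image name `docker-base-bookworm` and the installed package are placeholders, not taken from this PR:

```dockerfile
# Stage 1: apply all additions/removals on top of the existing base image
FROM docker-base-bookworm AS base
RUN apt-get update && \
    apt-get install -y --no-install-recommends some-package && \
    rm -rf /var/lib/apt/lists/*

# Stage 2: copy the final state of stage 1 over in a single layer,
# standing in for what --squash used to achieve
FROM docker-base-bookworm
COPY --from=base / /
```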
This shouldn't be committed.

Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
The docker root cleanup removes the contents of the docker root directory we create from within a container. However, this container isn't using the container registry variable, which means it may fail depending on the network environment. Fix this by prefixing the container registry variable.

The docker root directory creation is missing the `shell` at the beginning, which means the directory doesn't actually get created. While the docker command later will still create the directory automatically, fix this and make sure it gets created here.

Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
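A hedged sketch of the two Makefile fixes described above; every variable and target name here is a hypothetical stand-in, not the actual Makefile contents:

```make
# 1. Wrap the mkdir in $(shell ...) so it actually runs when the Makefile is
#    parsed; without the wrapper the line is just an inert expansion.
$(shell mkdir -p $(DOCKER_ROOT_DIR))

# 2. Prefix the cleanup container's image with the registry variable so the
#    pull also works in restricted network environments.
docker-root-clean:
	docker run --rm -v $(DOCKER_ROOT_DIR):/docker-root \
	    $(CONTAINER_REGISTRY)debian:bookworm rm -rf /docker-root/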
It seems that on the Bullseye slave container (not sure about Buster), the nofile ulimit is set to 1048576:1048576 (that is, 1048576 for both the soft and hard limit). However, the Docker startup script in version 25 and newer sets the hard limit to 524288 (because of moby/moby@c8930105b), which fails because the soft limit would then be higher than the hard limit, which doesn't make sense. On a Bookworm slave container, however, the nofile ulimit is set to 1024:1048576, and the startup script's ulimit command goes through.

A simple workaround would be to explicitly set the nofile ulimit to 1024:1048576 for all slave containers. However, sonic-swss's tests need more than 1024 open file descriptors, because the test code doesn't clean up file descriptors at the end of each test case/test suite. This results in FD leaks. Therefore, set the ulimit to 524288:1048576, so that Docker's startup script can lower the hard limit to 524288 and swss can still open enough file descriptors.

Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
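A quick way to reproduce the limits check described above (the image tag is illustrative; any slave container would do):

```sh
# Print the soft and hard nofile limits the container starts with
docker run --rm debian:bullseye sh -c 'ulimit -Sn; ulimit -Hn'

# The workaround: start with soft=524288 so dockerd's drop of the hard
# limit to 524288 no longer leaves soft > hard
docker run --rm --ulimit nofile=524288:1048576 debian:bullseye \
    sh -c 'ulimit -Sn; ulimit -Hn'
```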
With the new approach of building the images (where the entire final rootfs is copied into the second stage), if the system building the containers is using the overlay2 storage driver (which is the default) and is able to use native diffs (which could be true if CONFIG_OVERLAY_FS_REDIRECT_DIR isn't enabled in the kernel), then the final result of the image will be different than if naive diffs (where Docker compares the metadata of each file and, if needed, the contents to find out if something has changed) were used. Specifically, with native diffs, each container would be much larger, since technically speaking, the whole rootfs is being written to, even if the content ends up the same. This appears to be a known issue (in some form), and workarounds are being discussed in moby/moby#35280.

As a workaround, install rsync into the base container, copy the entirety of that into an empty base image, and use rsync to copy only the changed files into the layer in one shot. This does mean that rsync will remain installed in the final built containers, but hopefully this is fine.

Signed-off-by: Saikrishna Arcot <sarcot@microsoft.com>
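A minimal sketch of the rsync-based second stage this commit describes; the stage name and the RUN --mount line mirror the diff shown later in this conversation, while the base image name and package install are placeholders:

```dockerfile
# Stage 1: make the changes on top of the shared base image
FROM docker-base-bookworm AS base
RUN apt-get update && \
    apt-get install -y --no-install-recommends some-package && \
    rm -rf /var/lib/apt/lists/*

# Stage 2: start from the unmodified base again and let rsync (already
# present in docker-base-bookworm) write only the files that actually
# changed, sidestepping the native-diff whole-rootfs problem
FROM docker-base-bookworm
RUN --mount=type=bind,from=base,target=/changes-to-image \
    rsync -axAX --no-D --exclude=/sys --exclude=resolv.conf /changes-to-image/ /
```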
Force-pushed from 7968803 to 0b85785
/azp run Azure.sonic-buildimage
Azure Pipelines successfully started running 1 pipeline(s).
/azp run Azure.sonic-buildimage
Azure Pipelines successfully started running 1 pipeline(s).
/azp run Azure.sonic-buildimage
Azure Pipelines successfully started running 1 pipeline(s).
/azp run Azure.sonic-buildimage
Azure Pipelines successfully started running 1 pipeline(s).
/azp run Azure.sonic-buildimage
Azure Pipelines successfully started running 1 pipeline(s).
/azp run Azure.sonic-buildimage
Azure Pipelines successfully started running 1 pipeline(s).
@@ -310,6 +312,7 @@ DOCKER_RUN := docker run --rm=true --privileged --init \
    -e "https_proxy=$(https_proxy)" \
    -e "no_proxy=$(no_proxy)" \
    -i$(shell { if [ -t 0 ]; then echo t; fi }) \
    --ulimit nofile=524288:524288 \
Why does bullseye need this option but bookworm doesn't?
This was needed with an older version of these changes that included a Docker daemon version upgrade, but isn't needed anymore. It may be needed in the future when that upgrade is done.

There was previously a commit to upgrade Docker to version 25, for the purpose of using the containerd image store. With that upgrade, docker-in-docker in the Bullseye slave container did not start up because of a ulimit error. In the Bullseye slave container, the ulimit for the max open files was set to 1048576:1048576 (that is, 1048576 for both the soft and hard limit). However, Docker 25 and newer lowered just the hard limit to 524288 in moby/moby@c8930105b. This caused the soft limit to be higher than the hard limit, which is an issue, and so starting up Docker failed. Based on local testing, the Bookworm slave container appeared not to be affected because its ulimit was 1024:1048576, so lowering just the hard limit to 524288 was fine.

Rechecking my testing now, when starting up a slave container locally, I see that both Bullseye and Bookworm have 1048576:1048576; I'm not sure what changed to also make Bookworm potentially affected.

This issue affects us only after we upgrade to Docker 25 or newer; we're still on Docker 24 for now. I can either keep this change or take it out of the PR.
FROM $BASE

RUN --mount=type=bind,from=base,target=/changes-to-image rsync -axAX --no-D --exclude=/sys --exclude=resolv.conf /changes-to-image/ /
Do you have test results that show how much disk space this PR saves?
Taking a sonic-broadcom.bin image built by the pipeline from around January 15th: the sonic-broadcom.bin file with this PR is about 80MB smaller than the one from the January 15th official build (921MB vs 1002MB). The docker directory, after extraction, is about 170MB smaller (1592MB vs 1766MB).

I need to resolve new merge conflicts, so I can get updated numbers after doing that.
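For context, one plausible way to take these measurements (the extraction path here is a hypothetical stand-in for wherever the image gets unpacked):

```sh
# Compare the flat image sizes
du -h sonic-broadcom.bin

# After extracting the image, compare the size of the docker directory
du -sh extracted-rootfs/var/lib/docker
```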
/azp run Azure.sonic-buildimage
Azure Pipelines successfully started running 1 pipeline(s).
FROM scratch

COPY --from=base / /
Why does docker-base use COPY while the other dockers use RUN rsync?
rsync is run from the base/source layer, and the new contents of the new image are mounted in a directory. In the case of docker-base-bookworm, the base layer is debian:bookworm, which doesn't have rsync installed, so rsync can't be run from there. Since rsync is installed in docker-base-bookworm, and everything else is built on top of this image, rsync can be used everywhere else.

While it might be technically possible to run rsync from that mounted directory, it's far easier and more reliable to just copy the final results of docker-base-bookworm into an empty layer. This results in two things:

- When this container gets built, docker-base-bookworm will be the top-level image for any container that gets built from this container, instead of debian:bookworm.
- For files that get removed in this container, those files will not be present in the final container build at all. Currently, when any file/directory is removed, a "whiteout" file gets added into the container indicating that the file/directory referenced in the base layer no longer exists, but the space taken up by that file/directory is still used. Now, that file/directory will not exist at all, and thus not take up disk space.
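A minimal sketch of the docker-base flattening just described; the FROM scratch and COPY lines mirror this PR's diff, while the package install step is illustrative:

```dockerfile
# Stage 1: build up docker-base on top of stock Debian, including rsync
FROM debian:bookworm AS base
RUN apt-get update && \
    apt-get install -y --no-install-recommends rsync && \
    rm -rf /var/lib/apt/lists/*  # deletions here leave no whiteouts behind

# Stage 2: flatten into an empty image; only files that still exist get
# copied, so removed files take up no space in the final layer
FROM scratch
COPY --from=base / /
```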
Why I did it
On newer versions of Docker, only the buildkit builder is supported, and it cannot be disabled by setting DOCKER_BUILDKIT to 0. The side effect of this is that the behavior of --squash is different (see moby/moby#38903), which will result in the container sizes being significantly higher.
Work item tracking
How I did it
To work around this, make all of our builds two-stage builds, with the --squash flag entirely removed. The way this works is that in the first stage, whatever new files/packages need to be added are added (along with files/packages that need to be removed). Then, in the second stage, use rsync to copy over the changed files as a single command/layer. In the case of the base layer for each Debian version, the final result of the first stage will be copied into an empty base layer.
As part of this, also consolidate the container cleanup code into post_run_cleanup, and remove it from the individual containers (for consistency). Also experiment a bit with not explicitly installing library dependencies, and let apt install them as necessary. This will help during upgrades in the case of ABI changes for packages.
Also, remove the SONIC_USE_DOCKER_BUILDKIT option, and don't set the DOCKER_BUILDKIT option; this option will eventually have no impact. This also means that builds will now use buildkit, as that is now the default.
How to verify it
Which release branch to backport (provide reason below if selected)
Tested branch (Please provide the tested image version)
Description for the changelog
Link to config_db schema for YANG module changes
A picture of a cute animal (not mandatory but encouraged)