- Kaniko works differently from Docker. It runs inside a Docker container and detects and extracts new layers to build Docker images. Since Kaniko manipulates the filesystem (layers) inside the Docker container, it can have unexpected side effects if not used carefully. For this reason, the development team suggests that users run Kaniko only via the official Docker images. The official Docker image `gcr.io/kaniko-project/executor` limits itself to running the command `/kaniko/executor` only, so that users won't accidentally screw things up. The official Docker image `gcr.io/kaniko-project/executor:debug` adds a shell (`/busybox/sh`) in addition to the command `/kaniko/executor`. However, `/busybox/sh` in `gcr.io/kaniko-project/executor:debug` is minimal and limited too. For example, the `cd` command is not provided, and users can work in the directory `/workspace` only. Even though some users have customized their own Kaniko Docker images and reported successful experiences, it is NOT guaranteed that customized Kaniko Docker images will always work. Any additional tool (besides `/kaniko/executor` and `/busybox/sh`) might fail to work due to filesystems (layers) changing during the building of a Docker image, and even worse, might interfere with the building of Docker images.
- Even if you use an official Kaniko Docker image, that does not mean it will always work, even when your Docker image can be built successfully using Docker. Various issues can make the image build fail:
    - If your Docker image interferes with critical Kaniko filesystems/layers, the build might fail. For example, `/workspace` is the working directory of Kaniko. If your Docker image creates a directory `/workspace` or makes it a symbolic link to another directory, Kaniko might fail to build your Docker image.
    - Network issues.
- It might be helpful to keep a Kaniko pod running for testing, debugging, and possibly for building multiple Docker images (even though the latter is not a good idea). You cannot do this with the Docker image `gcr.io/kaniko-project/executor`; however, it is doable with the Docker image `gcr.io/kaniko-project/executor:debug`. Basically, you have to define a Kubernetes command (entrypoint in Docker) that runs forever for the container. A simple shell command that runs forever is `tail -f /dev/null`, so you can define the command as `["tail", "-f", "/dev/null"]`, which runs BusyBox's `tail` applet directly. If you'd like to invoke `/busybox/sh` explicitly, make sure to use `["/busybox/sh", "-c", "tail -f /dev/null"]` instead of `["/busybox/sh", "-c", "tail", "-f", "/dev/null"]`, as the latter won't work with Kubernetes even though `docker run --entrypoint /busybox/sh gcr.io/kaniko-project/executor:debug -c tail -f /dev/null` runs OK locally. A minimal pod spec is sketched below.
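  For instance, here is a minimal sketch of such a pod; the pod and container names are placeholders, not from any official example:

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: kaniko-debug  # hypothetical pod name
  spec:
    containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:debug
        # Run forever so the pod stays up for testing/debugging.
        command: ["tail", "-f", "/dev/null"]
    restartPolicy: Never
  ```

  You can then open a shell inside the running container with `kubectl exec -it kaniko-debug -- /busybox/sh`.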
- The credential file `config.json` for authentication should be mounted at `/kaniko/.docker/config.json` when running an official Kaniko Docker image. Some users customize their own Kaniko Docker images, in which case the credential file `config.json` might need to be mounted/placed at `$HOME/.docker/config.json`, where `$HOME` is the home directory of the user that runs `/kaniko/executor`. In most situations, customized Kaniko Docker images use the root user, whose home directory is `/root`. However, be aware that the directory `/root` might not survive the building of Docker images and can thus cause authentication to fail. A sketch of mounting the credential file follows.
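  For illustration, assuming the registry credentials are stored in a Kubernetes secret of type `kubernetes.io/dockerconfigjson` named `regcred` (a hypothetical name), they can be mounted into an official Kaniko image roughly like this:

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: kaniko  # hypothetical pod name
  spec:
    containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:debug
        command: ["tail", "-f", "/dev/null"]
        volumeMounts:
          # Mount the registry credentials where Kaniko expects them.
          - name: docker-config
            mountPath: /kaniko/.docker
    volumes:
      - name: docker-config
        secret:
          secretName: regcred  # assumed pre-existing secret
          items:
            # Rename the secret key to the file name Kaniko looks for,
            # yielding /kaniko/.docker/config.json inside the container.
            - key: .dockerconfigjson
              path: config.json
    restartPolicy: Never
  ```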
- It is suggested that you always pass the option `--log-timestamp` when building Docker images using `/kaniko/executor`. It adds timestamps to logs, which is useful for debugging and for measuring the performance of pulling, pushing, and building.
- The option `--cleanup` (of `/kaniko/executor`) cleans up the filesystem/layers after building a Docker image so that you can use the same Kaniko Docker container to build multiple Docker images. However, Kaniko won't recover the filesystem/layers if it fails to build or push a Docker image, even when the option `--cleanup` is specified. It is suggested that you build only one image in a Kaniko pod/container. When you need to build another Docker image, start a new Kaniko pod/container.
- Always specify `--push-retry` (e.g., `--push-retry=2`), and also `--image-fs-extract-retry` once a Kaniko version > 1.6.0 is released. A sketch combining the suggested options is shown below.
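  Putting the suggested options together, a one-shot build pod might look like the following sketch; the build context, destination image, and secret name are placeholders, not values from the original article:

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: kaniko-build  # hypothetical pod name
  spec:
    containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:latest
        # The official image's entrypoint is /kaniko/executor,
        # so only args need to be supplied.
        args:
          - "--context=dir:///workspace"                     # placeholder build context
          - "--destination=registry.example.com/app:latest"  # placeholder image
          - "--log-timestamp"
          - "--push-retry=2"
          - "--image-fs-extract-retry=2"  # only in Kaniko versions > 1.6.0
        volumeMounts:
          - name: docker-config
            mountPath: /kaniko/.docker  # credentials mounted as shown earlier
    volumes:
      - name: docker-config
        secret:
          secretName: regcred  # assumed pre-existing secret
          items:
            - key: .dockerconfigjson
              path: config.json
    restartPolicy: Never
  ```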
## Kaniko Build Contexts
- Kaniko supports various build context locations. In particular, Git repositories are supported as context locations. For more details, please refer to [Kaniko Build Contexts](https://github.com/GoogleContainerTools/kaniko#kaniko-build-contexts). However, private Git repositories might not work at this time (Sep 2021). Please refer to the issue [build context in private gitlab repository #719](https://github.com/GoogleContainerTools/kaniko/issues/719) for more discussion. A fragment illustrating a Git build context follows.
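  For example, a public Git repository can be passed as the build context via executor args, as in this fragment of the container spec from the earlier sketches (the repository URL, branch, and destination are placeholders):

  ```yaml
  args:
    - "--context=git://github.com/user/repo.git#refs/heads/main"  # placeholder repository
    - "--destination=registry.example.com/app:latest"             # placeholder image
  ```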
## Connection Reset by Peer
The following options of `/kaniko/executor` seem to help:

- `--image-fs-extract-retry=2` (needs a Kaniko version > 1.6.0)
- `--push-retry=2`
What also seems to help is the following annotations for the registry Nginx Ingress (taken from the GitLab Helm chart):

```yaml
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "900"
nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
nginx.ingress.kubernetes.io/proxy-buffering: "off"
```
Related issues:

- [error building image: error building stage connection reset by peer #1377](https://github.com/GoogleContainerTools/kaniko/issues/1377)
- [Improve retry logic for downloading base image - Implement a --pull-retry flag #1627](https://github.com/GoogleContainerTools/kaniko/issues/1627)
- [failed to get filesystem from image: connection reset by peer #1717](https://github.com/GoogleContainerTools/kaniko/issues/1717)