What's the fastest configuration for docker? #301

Open
jesseduffield opened this issue May 17, 2025 · 1 comment

@jesseduffield

I'm not clear on which configuration is fastest for CI that is heavily Docker-oriented.

I would love to speed up my CI jobs any way I can: each of my workflows both builds a Docker image and pulls one from my ECR registry. I depend on image layers being reused across builds, and I use the s3 cache for that.
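
For reference, here is roughly what that build step looks like today (a sketch; the bucket, region, and image names are placeholders, not my real values):

```yaml
- uses: docker/setup-buildx-action@v3

- uses: docker/build-push-action@v6
  with:
    push: true
    tags: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest # placeholder ECR repo
    # BuildKit's S3 cache backend, so layers are reused across builds
    cache-from: type=s3,region=us-east-1,bucket=my-ci-cache,name=myapp
    cache-to: type=s3,region=us-east-1,bucket=my-ci-cache,name=myapp,mode=max
```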

Given the release of https://github.com/runs-on/runs-on/releases/tag/v2.8.2, I'm wondering: what's the fastest configuration for Docker? There seem to be a few options available:

  • s3/gha cache
  • EFS
  • Ephemeral registry

In the ephemeral registry doc (https://runs-on.com/caching/ephemeral-registry/) you say it's slightly faster than the s3 cache approach, but I'm not sure how much faster it is, or how these things could be used together.

Thanks

@crohr
Contributor
crohr commented May 18, 2025

Hi @jesseduffield, there are indeed many choices. Here is how I see things:

  • if compatibility with GitHub Actions runners (i.e. being able to switch back quickly to the official runners) is a priority, then use gha. Compared to S3 or ECR it will be slightly slower for large layers, because the magic cache proxy adds an intermediate step.
  • otherwise, use s3 or the ephemeral registry, whichever is faster in your tests; see the sketches after this list. Note that the ephemeral registry is the better choice if you also need the image to be reused in dependent jobs, but it can become expensive unless you enable the ECR VPC endpoint, which, unlike the S3 endpoint, is not free.
  • EFS can quickly become expensive; it's more of a "you should know this is possible" option, and I would not recommend it, especially since the other options are nice.
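
To make the first two concrete, here are minimal sketches of the relevant build steps (the registry host and image names are placeholders to adapt):

```yaml
# Option A: gha cache, maximum compatibility with official runners
- uses: docker/build-push-action@v6
  with:
    push: true
    tags: myorg/myapp:latest # placeholder
    cache-from: type=gha
    cache-to: type=gha,mode=max

# Option B: ephemeral registry, also lets dependent jobs reuse the image
- uses: docker/build-push-action@v6
  with:
    push: true
    tags: myorg/myapp:latest # placeholder
    cache-from: type=registry,ref=registry.example.com/myapp:cache # placeholder host
    cache-to: type=registry,ref=registry.example.com/myapp:cache,mode=max
```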

Also note that the compression method and level have a large impact on how long BuildKit takes to export and compress layers. I found that using compression=zstd and compression-level=0 sped up that phase (at the cost of a bit more storage).
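
As a sketch (the image name is a placeholder), those options can be passed to BuildKit's image exporter via the outputs parameter:

```yaml
- uses: docker/build-push-action@v6
  with:
    tags: myorg/myapp:latest # placeholder
    # zstd at level 0: faster layer export/compression than the default gzip,
    # at the cost of slightly larger layers
    outputs: type=image,push=true,compression=zstd,compression-level=0
```

I believe the same compression and compression-level options are accepted by the registry and s3 cache exporters as well, so the cached layers can get the cheaper compression too.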

And finally, stay tuned for the final boss: block-level snapshot/restore capabilities for /var/lib/docker, which should be the final answer for the specific case of Docker builds. Performance should be similar to what you get with remote builders (e.g. Depot) for a fraction of the cost.
