docs: additions from editorial review

- editorial review
- address review comments, rework param sections
- added a common section for parameters
- remove liquid tags for notes

Signed-off-by: David Karlsson <david.karlsson@docker.com>
Signed-off-by: Justin Chadwell <me@jedevc.com>
pull/1316/head
David Karlsson 3 years ago committed by Justin Chadwell
parent 04b56c7331
commit 91f0ed3fc3

@ -2,61 +2,57 @@
> **Warning**
>
> This cache backend is unreleased. You can use it today, by using the
> `moby/buildkit:master` image in your Buildx driver.

The `azblob` cache store uploads your resulting build cache to
[Azure's blob storage service](https://azure.microsoft.com/en-us/services/storage/blobs/).

> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](../drivers/index.md). To create a new driver (which can act as a simple
> drop-in replacement):
>
> ```console
> docker buildx create --use --driver=docker-container
> ```

## Synopsis

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=azblob,name=<cache-image>[,parameters...] \
  --cache-from type=azblob,name=<cache-image>[,parameters...]
```

Common parameters:

- `name`: the name of the cache image.

Parameters for `--cache-to`:

- `account_url`: the base address of the blob storage account, for example:
  `https://myaccount.blob.core.windows.net`. See
  [authentication](#authentication).
- `secret_access_key`: specifies the
  [Azure Blob Storage account key](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage),
  see [authentication](#authentication).
- `mode`: specify cache layers to export (default: `min`), see
  [cache mode](./index.md#cache-mode)

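
For example, a full invocation might look like the following, where the
account URL and the cache image name `my_image` are placeholder values carried
over from an earlier revision of this guide:

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=azblob,account_url=https://myaccount.blob.core.windows.net,name=my_image \
  --cache-from type=azblob,account_url=https://myaccount.blob.core.windows.net,name=my_image
```
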
## Authentication

The `secret_access_key`, if left unspecified, is read from environment variables
on the BuildKit server following the scheme for the
[Azure Go SDK](https://docs.microsoft.com/en-us/azure/developer/go/azure-sdk-authentication).
The environment variables are read from the server, not the Buildx client.
## Further reading

For an introduction to caching see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).

For more information on the `azblob` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#azure-blob-storage-cache-experimental).

@ -2,108 +2,109 @@
> **Warning**
>
> The GitHub Actions cache is a beta feature. You can use it today, in current
> releases of Buildx and BuildKit. However, the interface and behavior are
> unstable and may change in future releases.

The GitHub Actions cache utilizes the
[GitHub-provided Action's cache](https://github.com/actions/cache) available
from within your CI execution environment. This is the recommended cache to use
inside your GitHub action pipelines, as long as your use case falls within the
[size and usage limits set by GitHub](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#usage-limits-and-eviction-policy).

> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](../drivers/index.md). To create a new driver (which can act as a simple
> drop-in replacement):
>
> ```console
> docker buildx create --use --driver=docker-container
> ```
>
> If you're using the official [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action),
> then this step will be automatically run for you.
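>
> For example, a minimal setup step might look like this (the action version
> shown is illustrative):
>
> ```yaml
> - name: Set up Docker Buildx
>   uses: docker/setup-buildx-action@v2
> ```
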
## Synopsis

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=gha[,parameters...] \
  --cache-from type=gha[,parameters...]
```

Common parameters:

- `url`: cache server URL (default `$ACTIONS_CACHE_URL`), see
  [authentication](#authentication)
- `token`: access token (default `$ACTIONS_RUNTIME_TOKEN`), see
  [authentication](#authentication)
- `scope`: cache scope (defaults to the name of the current Git branch)

Parameters for `--cache-to`:

- `mode`: specify cache layers to export (default: `min`), see
  [cache mode](./index.md#cache-mode)

## Authentication
If the `url` or `token` parameters are left unspecified, the `gha` cache backend
will fall back to using environment variables. If you invoke the `docker buildx`
command manually from an inline step, then the variables must be manually
exposed (using
[`crazy-max/ghaction-github-runtime`](https://github.com/crazy-max/ghaction-github-runtime),
for example).
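
For example, an inline build step might first expose the runtime variables
using that action. The step names and the action version here are illustrative:

```yaml
- name: Expose GitHub runtime
  uses: crazy-max/ghaction-github-runtime@v2

- name: Build
  run: |
    docker buildx build . \
      --cache-to type=gha \
      --cache-from type=gha
```
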
## Scope
By default, cache is scoped per Git branch. This ensures a separate cache
environment for the main branch and each feature branch. If you build multiple
images on the same branch, each build will overwrite the cache of the previous,
leaving only the final cache.
To preserve the cache for multiple builds on the same branch, you can manually
specify a cache scope name using the `scope` parameter. In the following
example, the scope is set to a combination of the branch name and the image
name, to ensure each branch and image gets its own cache:

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image \
  --cache-from type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image
$ docker buildx build . --push -t <registry>/<image2> \
  --cache-to type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image2 \
  --cache-from type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image2
```

GitHub's
[cache access restrictions](https://docs.github.com/en/actions/advanced-guides/caching-dependencies-to-speed-up-workflows#restrictions-for-accessing-a-cache)
still apply. Only the cache for the current branch, the base branch and the
default branch is accessible by a workflow.

### Using `docker/build-push-action`

When using the
[`docker/build-push-action`](https://github.com/docker/build-push-action), the
`url` and `token` parameters are automatically populated. You don't need to
manually specify them, or include any additional workarounds.

For example:

```yaml
- name: Build and push
  uses: docker/build-push-action@v3
  with:
    context: .
    push: true
    tags: "<registry>/<image>:latest"
    cache-from: type=gha
    cache-to: type=gha,mode=max
```

<!-- FIXME: cross-link to ci docs once docs.docker.com has them -->

## Further reading

For an introduction to caching see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).

For more information on the `gha` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#github-actions-cache-experimental).

@ -1,87 +1,182 @@
# Cache storage backends

To ensure fast builds, BuildKit automatically caches the build result in its own
internal cache. Additionally, BuildKit also supports exporting build cache to an
external location, making it possible to import it in future builds.

An external cache becomes almost essential in CI/CD build environments. Such
environments usually have little-to-no persistence between runs, but it's still
important to keep the runtime of image builds as low as possible.

> **Warning**
>
> If you use secrets or credentials inside your build process, ensure you
> manipulate them using the dedicated
> [--secret](../../reference/buildx_build.md#secret) functionality instead of
> using manually `COPY`d files or build `ARG`s. Using manually managed secrets
> like this with exported cache could lead to an information leak.

## Backends

Buildx supports the following cache storage backends:

- [Inline cache](./inline.md) that embeds the build cache into the image.
  The inline cache gets pushed to the same location as the main output result.
  Note that this only works for the `image` exporter.
- [Registry cache](./registry.md) that embeds the build cache into a separate
  image, and pushes to a dedicated location separate from the main output.
- [Local directory cache](./local.md) that writes the build cache to a local
  directory on the filesystem.
- [GitHub Actions cache](./gha.md) that uploads the build cache to
  [GitHub](https://docs.github.com/en/rest/actions/cache) (beta).
- [Amazon S3 cache](./s3.md) that uploads the build cache to an
  [AWS S3 bucket](https://aws.amazon.com/s3/) (unreleased).
- [Azure Blob Storage cache](./azblob.md) that uploads the build cache to
  [Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/)
  (unreleased).

## Command syntax

To use any of the cache backends, you first need to specify it on build with the
[`--cache-to`](../../reference/buildx_build.md#cache-to) option to export the
cache to your storage backend of choice. Then, use the
[`--cache-from`](../../reference/buildx_build.md#cache-from) option to import
the cache from the storage backend into the current build. Unlike the local
BuildKit cache (which is always enabled), all of the cache storage backends must
be explicitly exported to, and explicitly imported from. All cache exporters
except for the `inline` cache require that you
[select an alternative Buildx driver](../drivers/index.md).

Example `buildx` command using the `registry` backend to import and export
cache:

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>[,parameters...] \
  --cache-from type=registry,ref=<registry>/<cache-image>[,parameters...]
```

> **Warning**
>
> As a general rule, each cache writes to some location. No location can be
> written to twice, without overwriting the previously cached data. If you want
> to maintain multiple scoped caches (for example, a cache per Git branch), then
> ensure that you use different locations for exported cache.

## Multiple caches

BuildKit currently only supports
[a single cache exporter](https://github.com/moby/buildkit/pull/3024), but you
can import from as many remote caches as you like. For example, a common pattern
is to use the cache of both the current branch and the main branch. The
following example shows importing cache from multiple locations using the
registry cache backend:

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>:<branch> \
  --cache-from type=registry,ref=<registry>/<cache-image>:<branch> \
  --cache-from type=registry,ref=<registry>/<cache-image>:main
```

## Configuration options
<!-- FIXME: link to image exporter guide when it's written -->
This section describes some of the configuration options available when
generating cache exports. The options described here are common to two or more
backend types. Additionally, the different backend types support specific
parameters as well. See the detailed page about each backend type for more
information about which configuration parameters apply.
The common parameters described here are:
- Cache mode
- Cache compression
- OCI media type
### Cache mode
When generating a cache output, the `--cache-to` argument accepts a `mode`
option for defining which layers to include in the exported cache.
Mode can be set to either of two options: `mode=min` or `mode=max`. For example,
to build the cache with `mode=max` with the registry backend:
```console
$ docker buildx build . --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>,mode=max \
--cache-from type=registry,ref=<registry>/<cache-image>
```
This option is only set when exporting a cache, using `--cache-to`. When
importing a cache (`--cache-from`) the relevant parameters are automatically
detected.
In `min` cache mode (the default), only layers that are exported into the
resulting image are cached, while in `max` cache mode, all layers are cached,
even those of intermediate steps.
While `min` cache is typically smaller (which speeds up import/export times, and
reduces storage costs), `max` cache is more likely to get more cache hits.
Depending on the complexity and location of your build, you should experiment
with both parameters to find the results that work best for you.
### Cache compression
Since the `registry` cache image is a separate export artifact from the main
build result, you can specify separate compression parameters for it. These
parameters are similar to the options provided by the `image` exporter. While
the default values provide a good out-of-the-box experience, you may wish to
tweak the parameters to optimize for storage vs compute costs.
To select the compression algorithm, you can use the
`compression=<uncompressed|gzip|estargz|zstd>` option. For example, to build the
cache with `compression=zstd`:
```console
$ docker buildx build . --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>,compression=zstd \
--cache-from type=registry,ref=<registry>/<cache-image>
```
Use the `compression-level=<value>` option alongside the `compression` parameter
to choose a compression level for the algorithms which support it:
- 0-9 for `gzip` and `estargz`
- 0-22 for `zstd`
As a general rule, the higher the number, the smaller the resulting file will
be, and the longer the compression will take to run.
Use the `force-compression=true` option to force re-compressing layers imported
from a previous cache, if the requested compression algorithm is different from
the previous compression algorithm.
> **Note**
>
> The `gzip` and `estargz` compression methods use the
> [`compress/gzip` package](https://pkg.go.dev/compress/gzip), while `zstd` uses
> the
> [`github.com/klauspost/compress/zstd` package](https://github.com/klauspost/compress/tree/master/zstd).
### OCI media types
Like the `image` exporter, the `registry` cache exporter supports creating
images with Docker media types or with OCI media types. To export OCI media type
cache, use the `oci-mediatypes` property:
```console
$ docker buildx build . --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>,oci-mediatypes=true \
--cache-from type=registry,ref=<registry>/<cache-image>
```
This property is only meaningful with the `--cache-to` flag. When fetching
cache, BuildKit will auto-detect the correct media types to use.

@ -1,45 +1,48 @@
# Inline cache storage

The `inline` cache storage backend is the simplest way to get an external cache
and is easy to get started using if you're already building and pushing an
image. However, it doesn't scale to multi-stage builds as well as the other
drivers do. It also doesn't offer separation between your output artifacts and
your cache output. This means that if you're using a particularly complex build
flow, or not exporting your images directly to a registry, then you may want to
consider the [registry](./registry.md) cache.

To export cache using `inline` storage, pass `type=inline` to the `--cache-to`
option:

```console
$ docker buildx build . --push -t <registry>/<image> --cache-to type=inline
```

Alternatively, you can also export inline cache by setting the build argument
`BUILDKIT_INLINE_CACHE=1`, instead of using the `--cache-to` flag:

```console
$ docker buildx build . --push -t <registry>/<image> --build-arg BUILDKIT_INLINE_CACHE=1
```

To import the resulting cache on a future build, pass `type=registry` to
`--cache-from` which lets you extract the cache from inside a Docker image in
the specified registry:

```console
$ docker buildx build . --push -t <registry>/<image> --cache-from type=registry,ref=<registry>/<image>
```

Most of the time, you'll want to have each build both import and export cache
from the cache store. To do this, specify both `--cache-to` and `--cache-from`:

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=inline \
  --cache-from type=registry,ref=<registry>/<image>
```

## Further reading

For an introduction to caching see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).

For more information on the `inline` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#inline-push-image-and-cache-together).

@ -1,40 +1,63 @@
# Local cache storage

The `local` cache store is a simple cache option that stores your cache as files
in a directory on your filesystem, using an
[OCI image layout](https://github.com/opencontainers/image-spec/blob/main/image-layout.md)
for the underlying directory structure. Local cache is a good choice if you're
just testing, or if you want the flexibility to self-manage a shared storage
solution.

> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](../drivers/index.md). To create a new driver (which can act as a simple
> drop-in replacement):
>
> ```console
> docker buildx create --use --driver=docker-container
> ```

## Synopsis

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=local,dest=path/to/local/dir[,parameters...] \
  --cache-from type=local,src=path/to/local/dir
```

Parameters for `--cache-to`:

- `dest`: absolute or relative path to the local directory where you want to
export the cache to.
- `mode`: specify cache layers to export (default: `min`), see
[cache mode](./index.md#cache-mode)
- `oci-mediatypes`: whether to use OCI media types in exported manifests
(default `true`, since BuildKit `v0.8`), see
[OCI media types](./index.md#oci-media-types)
- `compression`: compression type for layers newly created and cached (default:
`gzip`), see [cache compression](./index.md#cache-compression)
- `compression-level`: compression level for `gzip`, `estargz` (0-9) and `zstd`
(0-22)
- `force-compression`: forcibly apply `compression` option to all layers
Parameters for `--cache-from`:
- `src`: absolute or relative path to the local directory where you want to
import cache from.
- `digest`: specify explicit digest of the manifest list to import, see
[cache versioning](#cache-versioning)
If the `src` cache doesn't exist, then the cache import step will fail, but
the build will continue.
## Cache versioning

This section describes how versioning works for caches on a local filesystem,
and how you can use the `digest` parameter to use older versions of cache.

If you inspect the cache directory manually, you can see the resulting OCI image
layout:

```console
$ ls cache
@ -55,40 +78,26 @@ $ cat cache/index.json | jq
}
```

Like other cache types, local cache gets replaced on export, by replacing the
contents of the `index.json` file. However, previous caches will still be
available in the `blobs` directory. These old caches are addressable by digest,
and kept indefinitely. Therefore, the size of the local cache will continue to
grow (see [`moby/buildkit#1896`](https://github.com/moby/buildkit/issues/1896)
for more information).
When importing cache using `--cache-to`, you can additionally specify the When importing cache using `--cache-to`, you can specify the `digest` parameter
`digest` parameter to force loading an older version of the cache, for example: to force loading an older version of the cache, for example:
```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=local,dest=path/to/local/dir \
  --cache-from type=local,src=path/to/local/dir,digest=sha256:6982c70595cb91769f61cd1e064cf5f41d5357387bab6b18c0164c5f98c1f707
```
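The digests of older cache images available in a local cache directory can be
read out of `index.json` with standard tools. A minimal sketch, assuming `jq`
is installed and the cache was previously exported to `path/to/local/dir`:

```console
$ jq -r '.manifests[].digest' path/to/local/dir/index.json
```

Each printed digest can then be passed to the `digest` parameter of
`--cache-from`.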
## Further reading

For an introduction to caching, see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).

For more information on the `local` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#local-directory-1).

# Registry cache storage
The `registry` cache storage can be thought of as an extension to the `inline`
cache. Unlike the `inline` cache, the `registry` cache is entirely separate from
the image, which allows for more flexible usage - `registry`-backed cache can do
everything that the inline cache can do, and more:
- It allows for separating the cache and resulting image artifacts, so that you
  can distribute your final image without the cache inside.
- It can efficiently cache multi-stage builds in `max` mode, instead of only the
  final stage.
- It works with other exporters for more flexibility, instead of only the
  `image` exporter.
> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](../drivers/index.md). To create a new driver (which can act as a simple
> drop-in replacement):
>
> ```console
> docker buildx create --use --driver=docker-container
> ```
## Synopsis
Unlike the simpler `inline` cache, the `registry` cache supports several
configuration parameters:
```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>[,parameters...] \
  --cache-from type=registry,ref=<registry>/<cache-image>
```
Common parameters:

- `ref`: full address and name of the cache image that you want to import or
  export.
Parameters for `--cache-to`:

- `mode`: specify cache layers to export (default: `min`), see
  [cache mode](./index.md#cache-mode)
- `oci-mediatypes`: whether to use OCI media types in exported manifests
  (default `true`, since BuildKit `v0.8`), see
  [OCI media types](./index.md#oci-media-types)
- `compression`: compression type for layers newly created and cached (default:
  `gzip`), see [cache compression](./index.md#cache-compression)
- `compression-level`: compression level for `gzip`, `estargz` (0-9) and `zstd`
  (0-22)
- `force-compression`: forcibly apply `compression` option to all layers
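For example, to export the registry cache using `zstd` compression instead of
the default `gzip` (a sketch based on the parameters above; the bracketed names
are placeholders):

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>,compression=zstd \
  --cache-from type=registry,ref=<registry>/<cache-image>
```

Compression settings only apply to `--cache-to`; on import, BuildKit reads the
layers in whatever compression they were stored with.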
You can choose any valid value for `ref`, as long as it's not the same as the
target location that you push your image to. You might choose different tags
(e.g. `foo/bar:latest` and `foo/bar:build-cache`), separate image names (e.g.
`foo/bar` and `foo/bar-cache`), or even different repositories (e.g.
`docker.io/foo/bar` and `ghcr.io/foo/bar`). It's up to you to decide the
strategy that you want to use for separating your image from your cache images.

If the `--cache-from` target doesn't exist, then the cache import step will
fail, but the build will continue.
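As a concrete sketch of the tag-separation strategy described above (the
`foo/bar` names are placeholders for your own image and cache references):

```console
$ docker buildx build . --push -t foo/bar:latest \
  --cache-to type=registry,ref=foo/bar:build-cache,mode=max \
  --cache-from type=registry,ref=foo/bar:build-cache
```

Here the final image is pushed to `foo/bar:latest`, while the cache lives in
the same repository under the `build-cache` tag.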
## Further reading

For an introduction to caching, see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).

For more information on the `registry` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#registry-push-image-and-cache-separately).

# Amazon S3 cache storage
> **Warning**
>
> This cache backend is unreleased. You can use it today by using the
> `moby/buildkit:master` image in your Buildx driver.
The `s3` cache storage uploads your resulting build cache to the
[Amazon S3 object storage service](https://aws.amazon.com/s3/), into a
specified bucket.
> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](../drivers/index.md). To create a new driver (which can act as a simple
> drop-in replacement):
>
> ```console
> docker buildx create --use --driver=docker-container
> ```
## Synopsis
```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image>[,parameters...] \
  --cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image>
```
Common parameters:

- `region`: geographic location of the S3 bucket
- `bucket`: name of the S3 bucket used for caching
- `name`: name of the cache image
- `access_key_id`: access key ID, see [authentication](#authentication)
- `secret_access_key`: secret access key, see [authentication](#authentication)
- `session_token`: session token, see [authentication](#authentication)

Parameters for `--cache-to`:

- `mode`: specify cache layers to export (default: `min`), see
  [cache mode](./index.md#cache-mode)
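For example, exporting a `max`-mode cache to an S3 bucket might look like this
(a sketch; the bracketed values are placeholders for your own region, bucket,
and cache name):

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image>,mode=max \
  --cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image>
```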
## Authentication

`access_key_id`, `secret_access_key`, and `session_token`, if left unspecified,
are read from environment variables on the BuildKit server, following the
scheme for the
[AWS Go SDK](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html).
The environment variables are read on the BuildKit server, not the Buildx
client.

<!-- FIXME: update once https://github.com/docker/buildx/pull/1294 is released -->
## Further reading

For an introduction to caching, see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).

For more information on the `s3` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#s3-cache-experimental).
