docs: additions from editorial review

- editorial review
- address review comments, rework param sections
- added a common section for parameters
- remove liquid tags for notes

Signed-off-by: David Karlsson <david.karlsson@docker.com>
Signed-off-by: Justin Chadwell <me@jedevc.com>

# Cache storage backends

To ensure fast builds, BuildKit automatically caches the build result in its
own internal cache. Additionally, BuildKit supports exporting build cache to an
external location, making it possible to import it in future builds.

An external cache becomes almost essential in CI/CD build environments. Such
environments usually have little-to-no persistence between runs, but it's still
important to keep the runtime of image builds as low as possible.

> **Warning**
>
> If you use secrets or credentials inside your build process, ensure you
> manipulate them using the dedicated
> [--secret](../../reference/buildx_build.md#secret) functionality instead of
> using manually `COPY`d files or build `ARG`s. Using manually managed secrets
> like this with exported cache could lead to an information leak.
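
For example, a minimal sketch of mounting a credentials file with `--secret`
rather than a build argument (the `aws` secret ID and the credentials path are
only illustrative):

```console
$ docker buildx build --push -t <registry>/<image> \
  --secret id=aws,src=$HOME/.aws/credentials .
```

The secret is only available to `RUN` instructions that mount it explicitly
with `--mount=type=secret,id=aws`, so it doesn't end up in image layers or in
the exported cache.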

## Backends

Buildx supports the following cache storage backends:

- [Inline cache](./inline.md) that embeds the build cache into the image.

  The inline cache gets pushed to the same location as the main output result.
  Note that this only works for the `image` exporter.

- [Registry cache](./registry.md) that embeds the build cache into a separate
  image, and pushes to a dedicated location separate from the main output.

- [Local directory cache](./local.md) that writes the build cache to a local
  directory on the filesystem.

- [GitHub Actions cache](./gha.md) that uploads the build cache to
  [GitHub](https://docs.github.com/en/rest/actions/cache) (beta).

- [Amazon S3 cache](./s3.md) that uploads the build cache to an
  [AWS S3 bucket](https://aws.amazon.com/s3/) (unreleased).

- [Azure Blob Storage cache](./azblob.md) that uploads the build cache to
  [Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/)
  (unreleased).

## Command syntax

To use any of the above backends, you first need to specify it on build with
the [`--cache-to`](../../reference/buildx_build.md#cache-to) option to export
the cache to your storage backend of choice, then use the
[`--cache-from`](../../reference/buildx_build.md#cache-from) option to import
the cache from the storage backend into the current build. Unlike the local
BuildKit cache (which is always enabled), **all** of the cache storage backends
have to be explicitly exported to and then explicitly imported from. Note that
all cache exporters except for the `inline` cache require
[selecting an alternative Buildx driver](../drivers/index.md).

Example `buildx` command using the `registry` backend, with both cache import
and export:

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>[,parameters...] \
  --cache-from type=registry,ref=<registry>/<cache-image>[,parameters...]
```

> **Warning**
>
> As a general rule, each cache writes to some location. No location can be
> written to twice, without overwriting the previously cached data. If you want
> to maintain multiple scoped caches (for example, a cache per Git branch), then
> ensure that you use different locations for exported cache.

## Multiple caches

BuildKit currently only supports
[a single cache exporter](https://github.com/moby/buildkit/pull/3024). But you
can import from as many remote caches as you like. For example, a common
pattern is to use the cache of both the current branch and the main branch. The
following example shows importing cache from multiple locations using the
registry cache backend:

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>:<branch> \
  --cache-from type=registry,ref=<registry>/<cache-image>:<branch> \
  --cache-from type=registry,ref=<registry>/<cache-image>:main
```

## Configuration options

<!-- FIXME: link to image exporter guide when it's written -->

This section describes some of the configuration options available when
generating cache exports. The options described here are common to two or more
backend types. The different backend types also support their own specific
parameters. See the detailed page about each backend type for more information
about which configuration parameters apply.

The common parameters described here are:

- Cache mode
- Cache compression
- OCI media types

### Cache mode

When generating a cache output, the `--cache-to` argument accepts a `mode`
option for defining which layers to include in the exported cache.

Mode can be set to either of two options: `mode=min` or `mode=max`. For example,
to build the cache with `mode=max` with the registry backend:

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>,mode=max \
  --cache-from type=registry,ref=<registry>/<cache-image>
```

This option is only set when exporting a cache, using `--cache-to`. When
importing a cache (`--cache-from`) the relevant parameters are automatically
detected.

In `min` cache mode (the default), only layers that are exported into the
resulting image are cached, while in `max` cache mode, all layers are cached,
even those of intermediate steps.

While `min` cache is typically smaller (which speeds up import/export times,
and reduces storage costs), `max` cache is more likely to get more cache hits.
Depending on the complexity and location of your build, you should experiment
with both parameters to find the results that work best for you.

### Cache compression

Since the `registry` cache image is a separate export artifact from the main
build result, you can specify separate compression parameters for it. These
parameters are similar to the options provided by the `image` exporter. While
the default values provide a good out-of-the-box experience, you may wish to
tweak the parameters to optimize for storage vs compute costs.

To select the compression algorithm, you can use the
`compression=<uncompressed|gzip|estargz|zstd>` option. For example, to build
the cache with `compression=zstd`:

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>,compression=zstd \
  --cache-from type=registry,ref=<registry>/<cache-image>
```

Use the `compression-level=<value>` option alongside the `compression`
parameter to choose a compression level for the algorithms which support it:

- 0-9 for `gzip` and `estargz`
- 0-22 for `zstd`

As a general rule, the higher the number, the smaller the resulting file will
be, and the longer the compression will take to run.

Use the `force-compression=true` option to force re-compressing layers imported
from a previous cache, if the requested compression algorithm is different from
the previous compression algorithm.
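
For example, a sketch combining these options to re-compress all cached layers
with `zstd` at a higher compression level (the registry references are
placeholders):

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>,compression=zstd,compression-level=19,force-compression=true \
  --cache-from type=registry,ref=<registry>/<cache-image>
```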

> **Note**
>
> The `gzip` and `estargz` compression methods use the
> [`compress/gzip` package](https://pkg.go.dev/compress/gzip), while `zstd` uses
> the
> [`github.com/klauspost/compress/zstd` package](https://github.com/klauspost/compress/tree/master/zstd).

### OCI media types

Like the `image` exporter, the `registry` cache exporter supports creating
images with Docker media types or with OCI media types. To export OCI media
type cache, use the `oci-mediatypes` property:

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>,oci-mediatypes=true \
  --cache-from type=registry,ref=<registry>/<cache-image>
```

This property is only meaningful with the `--cache-to` flag. When fetching
cache, BuildKit will auto-detect the correct media types to use.

# Inline cache storage

The `inline` cache storage backend is the simplest way to get an external cache
and is easy to get started using if you're already building and pushing an
image. However, it doesn't scale as well to multi-stage builds as the other
drivers do. It also doesn't offer separation between your output artifacts and
your cache output. This means that if you're using a particularly complex build
flow, or not exporting your images directly to a registry, then you may want to
consider the [registry](./registry.md) cache.

To export cache using `inline` storage, pass `type=inline` to the `--cache-to`
option:

```console
$ docker buildx build . --push -t <registry>/<image> --cache-to type=inline
```

Alternatively, you can also export inline cache by setting the build argument
`BUILDKIT_INLINE_CACHE=1`, instead of using the `--cache-to` flag:

```console
$ docker buildx build . --push -t <registry>/<image> --build-arg BUILDKIT_INLINE_CACHE=1
```

To import the resulting cache on a future build, pass `type=registry` to
`--cache-from` which lets you extract the cache from inside a Docker image in
the specified registry:

```console
$ docker buildx build . --push -t <registry>/<image> --cache-from type=registry,ref=<registry>/<image>
```

Most of the time, you'll want to have each build both import and export cache
from the cache store. To do this, specify both `--cache-to` and `--cache-from`:

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=inline \
  --cache-from type=registry,ref=<registry>/<image>
```

## Further reading

For an introduction to caching see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).

For more information on the `inline` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#inline-push-image-and-cache-together).

# Registry cache storage

The `registry` cache storage can be thought of as an extension to the `inline`
cache. Unlike the `inline` cache, the `registry` cache is entirely separate from
the image, which allows for more flexible usage - `registry`-backed cache can do
everything that the inline cache can do, and more:

- It allows for separating the cache and resulting image artifacts so that you
  can distribute your final image without the cache inside.
- It can efficiently cache multi-stage builds in `max` mode, instead of only
  the final stage.
- It works with other exporters for more flexibility, instead of only the
  `image` exporter.

> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](../drivers/index.md). To create a new driver (which can act as a simple
> drop-in replacement):
>
> ```console
> docker buildx create --use --driver=docker-container
> ```

## Synopsis

Unlike the simpler `inline` cache, the `registry` cache supports several
configuration parameters:

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>[,parameters...] \
  --cache-from type=registry,ref=<registry>/<cache-image>
```

Common parameters:

- `ref`: full address and name of the cache image that you want to import or
  export.

Parameters for `--cache-to`:

- `mode`: specify cache layers to export (default: `min`), see
  [cache mode](./index.md#cache-mode)
- `oci-mediatypes`: whether to use OCI media types in exported manifests
  (default `true`, since BuildKit `v0.8`), see
  [OCI media types](./index.md#oci-media-types)
- `compression`: compression type for layers newly created and cached (default:
  `gzip`), see [cache compression](./index.md#cache-compression)
- `compression-level`: compression level for `gzip`, `estargz` (0-9) and `zstd`
  (0-22)
- `force-compression`: forcibly apply `compression` option to all layers
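
For example, a sketch of an export combining several of these parameters (the
registry references and values are placeholders):

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>,mode=max,compression=zstd,oci-mediatypes=true \
  --cache-from type=registry,ref=<registry>/<cache-image>
```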

You can choose any valid value for `ref`, as long as it's not the same as the
target location that you push your image to. You might choose different tags
(e.g. `foo/bar:latest` and `foo/bar:build-cache`), separate image names (e.g.
`foo/bar` and `foo/bar-cache`), or even different repositories (e.g.
`docker.io/foo/bar` and `ghcr.io/foo/bar`). It's up to you to decide the
strategy that you want to use for separating your image from your cache images.
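
For instance, a sketch of the dedicated-tag strategy, where the image and the
cache live in the same repository under different tags (the `foo/bar` names are
only illustrative):

```console
$ docker buildx build . --push -t docker.io/foo/bar:latest \
  --cache-to type=registry,ref=docker.io/foo/bar:build-cache \
  --cache-from type=registry,ref=docker.io/foo/bar:build-cache
```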

If the `--cache-from` target doesn't exist, then the cache import step will
fail, but the build will continue.

## Further reading

For an introduction to caching see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).

For more information on the `registry` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#registry-push-image-and-cache-separately).

# Amazon S3 cache storage

> **Warning**
>
> This cache backend is unreleased. You can use it today by using the
> `moby/buildkit:master` image in your Buildx driver.

The `s3` cache storage uploads your resulting build cache to the
[Amazon S3 file storage service](https://aws.amazon.com/s3/), into a specified
bucket.

> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](../drivers/index.md). To create a new driver (which can act as a simple
> drop-in replacement):
>
> ```console
> docker buildx create --use --driver=docker-container
> ```

## Synopsis

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image>[,parameters...] \
  --cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image>
```

Common parameters:

- `region`: geographic location
- `bucket`: name of the S3 bucket used for caching
- `name`: name of the cache image
- `access_key_id`: access key ID, see [authentication](#authentication)
- `secret_access_key`: secret access key, see [authentication](#authentication)
- `session_token`: session token, see [authentication](#authentication)

Parameters for `--cache-to`:

- `mode`: specify cache layers to export (default: `min`), see
  [cache mode](./index.md#cache-mode)

## Authentication

`access_key_id`, `secret_access_key`, and `session_token`, if left unspecified,
are read from environment variables on the BuildKit server following the scheme
for the
[AWS Go SDK](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html).
The environment variables are read from the server, not the Buildx client.
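
Alternatively, a sketch of supplying the credentials explicitly as cache
parameters instead of relying on the server environment (all values are
placeholders):

```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image>,access_key_id=<access-key>,secret_access_key=<secret-key> \
  --cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image>
```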

<!-- FIXME: update once https://github.com/docker/buildx/pull/1294 is released -->

## Further reading

For an introduction to caching see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).

For more information on the `s3` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#s3-cache-experimental).