Compare commits


No commits in common. 'i8-stream-4.0' and 'c9' have entirely different histories.

@ -1,4 +0,0 @@
92a2bbb50398d4326834e37c17d7ad0c2ed68e5f SOURCES/aardvark-dns-v1.0.3-a92337b.tar.gz
c43b8b3548ebc251461f5667d01142e4491691a3 SOURCES/aardvark-dns-v1.0.3-vendor.tar.gz
41313c08ae196941064c6b9c8a090be2381cce51 SOURCES/netavark-v1.0.3-ec7efb8.tar.gz
26c57cd7077cd8088720082889e3f3c9d776e75c SOURCES/netavark-v1.0.3-vendor.tar.gz

.gitignore (vendored)

@ -1,4 +0,0 @@
SOURCES/aardvark-dns-v1.0.3-a92337b.tar.gz
SOURCES/aardvark-dns-v1.0.3-vendor.tar.gz
SOURCES/netavark-v1.0.3-ec7efb8.tar.gz
SOURCES/netavark-v1.0.3-vendor.tar.gz

File diff suppressed because it is too large.

@ -0,0 +1,10 @@
[aliases]
"skopeo" = "registry.access.redhat.com/ubi8/skopeo"
"ubi8/skopeo" = "registry.access.redhat.com/ubi8/skopeo"
"rhel9/skopeo" = "registry.redhat.io/rhel9/skopeo"
"buildah" = "registry.access.redhat.com/ubi8/buildah"
"ubi8/buildah" = "registry.access.redhat.com/ubi8/buildah"
"rhel9/buildah" = "registry.redhat.io/rhel9/buildah"
"podman" = "registry.access.redhat.com/ubi8/podman"
"ubi8/podman" = "registry.access.redhat.com/ubi8/podman"
"rhel9/podman" = "registry.redhat.io/rhel9/podman"

@ -9,7 +9,7 @@ Containerfile(Dockerfile) - automate the steps of creating a container image
The **Containerfile** is a configuration file that automates the steps of creating a container image. It is similar to a Makefile. Container engines (Podman, Buildah, Docker) read instructions from the **Containerfile** to automate the steps otherwise performed manually to create an image. To build an image, create a file called **Containerfile**.
The **Containerfile** describes the steps taken to assemble the image. When the
**Containerfile** has been created, call the `buildah build`, `podman build`, `docker build` command,
using the path of the context directory that contains **Containerfile** as the argument. Podman and Buildah default to **Containerfile** and will fall back to **Dockerfile**. Docker will only search for **Dockerfile** in the context directory.
@ -31,7 +31,7 @@ A Containerfile is similar to a Makefile.
# USAGE
```
buildah build .
podman build .
```
@ -40,7 +40,7 @@ A Containerfile is similar to a Makefile.
build.
```
buildah build -t repository/tag .
podman build -t repository/tag .
```
@ -49,19 +49,19 @@ A Containerfile is similar to a Makefile.
to a new image if necessary, before finally outputting the ID of the new
image.
Container engines reuse intermediate images whenever possible. This significantly
accelerates the *build* process.
# FORMAT
`FROM image [AS <name>]`
`FROM image:tag [AS <name>]`
`FROM image@digest [AS <name>]`
-- The **FROM** instruction sets the base image for subsequent instructions. A
valid Containerfile must have either **ARG** or **FROM** as its first instruction.
If **FROM** is not the first instruction in the file, it may only be preceded by
one or more ARG instructions, which declare arguments that are used in the next FROM line in the Containerfile.
The image can be any valid image. It is easy to start by pulling an image from the public
@ -82,6 +82,9 @@ A Containerfile is similar to a Makefile.
-- If no digest is given to the **FROM** instruction, container engines apply the
`latest` tag. If the used tag does not exist, an error is returned.
-- A name can be assigned to a build stage by adding **AS name** to the instruction.
The name can be referenced later in the Containerfile using the **FROM** or **COPY --from=<name>** instructions.
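A minimal multi-stage sketch using a named stage (the image names and paths here are illustrative, not prescribed by this page):
```
FROM registry.access.redhat.com/ubi8/ubi AS builder
RUN echo "built artifact" > /artifact

FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY --from=builder /artifact /artifact
```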
**MAINTAINER**
-- **MAINTAINER** sets the Author field for the generated images.
Useful for providing users with an email or url for support.
@ -106,7 +109,7 @@ Current supported mount TYPES are bind, cache, secret and tmpfs.
e.g.
mount=type=bind,source=/path/on/host,destination=/path/in/container,relabel=shared
mount=type=tmpfs,tmpfs-size=512M,destination=/path/in/container
@ -128,6 +131,18 @@ Current supported mount TYPES are bind, cache, secret and tmpfs.
· from: stage or image name for the root of the source. Defaults to the build context.
· relabel=shared, z: Relabels src content with a shared label.
· relabel=private, Z: Relabels src content with a private label.
Labeling systems like SELinux require proper labels on the bind-mounted content mounted into a container. Without a label, the security system might prevent the processes running inside the container from using the content. By default, container engines do not change the labels set by the OS. The relabel flag tells the engine to relabel file objects on the shared mounts.
The relabel=shared and z options tell the engine that two or more containers will share the mount content. The engine labels the content with a shared content label.
The relabel=private and Z options tell the engine to label the content with a private unshared label. Only the current container can use a private mount.
Relabeling walks the file system under the mount and changes the label on each file; if the mount has thousands of inodes, this process takes a long time, delaying the start of the container.
· rw, read-write: allows writes on the mount.
Options specific to tmpfs:
@ -154,6 +169,47 @@ Current supported mount TYPES are bind, cache, secret and tmpfs.
· rw, read-write: allows writes on the mount.
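As an illustration of the mount options above, a single RUN instruction can combine a bind mount and a tmpfs mount (the paths are placeholders):
```
RUN --mount=type=bind,source=.,destination=/src,relabel=shared \
    --mount=type=tmpfs,tmpfs-size=512M,destination=/scratch \
    ls /src /scratch
```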
**RUN --network**
`RUN --network` allows control over which networking environment the command
is run in.
Syntax: `--network=<TYPE>`
**Network types**
| Type | Description |
|----------------------------------------------|----------------------------------------|
| [`default`](#run---networkdefault) (default) | Run in the default network. |
| [`none`](#run---networknone) | Run with no network access. |
| [`host`](#run---networkhost) | Run in the host's network environment. |
##### RUN --network=default
Equivalent to not supplying a flag at all, the command is run in the default
network for the build.
##### RUN --network=none
The command is run with no network access (`lo` is still available, but is
isolated to this process).
##### Example: isolating external effects
```dockerfile
FROM python:3.6
ADD mypackage.tgz wheels/
RUN --network=none pip install --find-links wheels mypackage
```
`pip` will only be able to install the packages provided in the tarfile, which
can be controlled by an earlier build stage.
##### RUN --network=host
The command is run in the host's network environment (similar to
`buildah build --network=host`, but on a per-instruction basis).
**RUN Secrets**
@ -163,7 +219,7 @@ Container engines pass the secret file into the build using the `--secret
**--mount**=*type=secret,TYPE-SPECIFIC-OPTION[,...]*
- `id` is the identifier for the secret passed into the `buildah build --secret` or `podman build --secret`. This identifier is associated with the RUN --mount identifier to use in the Containerfile.
- `dst`|`target`|`destination` rename the secret file to a specific file in the Containerfile RUN command to use.
@ -180,7 +236,7 @@ RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar
The secret needs to be passed to the build using the --secret flag. The final image built does not contain the secret file:
```
buildah build --no-cache --secret id=mysecret,src=mysecret.txt .
```
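The same build can be driven with podman, which accepts the identical `--secret` syntax described above:
```
podman build --no-cache --secret id=mysecret,src=mysecret.txt .
```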
-- The **RUN** instruction executes any commands in a new layer on top of the current
@ -321,10 +377,10 @@ The secret needs to be passed to the build using the --secret flag. The final im
-- **COPY** has two forms:
```
COPY [--chown=<user>:<group>] [--chmod=<mode>] <src> <dest>
# Required for paths with whitespace
COPY [--chown=<user>:<group>] [--chmod=<mode>] ["<src>",... "<dest>"]
```
The **COPY** instruction copies new files from `<src>` and
@ -337,6 +393,16 @@ The secret needs to be passed to the build using the --secret flag. The final im
attempt to unpack it. All new files and directories are created with mode **0755**
and with the uid and gid of **0**.
`--chown=<user>:<group>` changes the ownership of new files and directories.
Supports names, if defined in the container's `/etc/passwd` and `/etc/group` files, or
uid and gid integers. The build will fail if a user or group name can't be mapped in the container.
Numeric IDs are set without looking them up in the container.
`--chmod=<mode>` changes the mode of new files and directories.
The optional flag `--from=name` can be used to copy files from a named previous build stage. It
changes the context of `<src>` from the build context to the named build stage.
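A short sketch combining these flags (the stage name, file names, and IDs are illustrative):
```
COPY --chown=1001:0 --chmod=755 entrypoint.sh /usr/local/bin/entrypoint.sh
COPY --from=builder /build/output /opt/app/
```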
**ENTRYPOINT**
-- **ENTRYPOINT** has two forms:
@ -409,7 +475,7 @@ The secret needs to be passed to the build using the --secret flag. The final im
In the above example, the output of the **pwd** command is **a/b/c**.
**ARG**
-- `ARG <name>[=<default value>]`
The `ARG` instruction defines a variable that users can pass at build-time to
the builder with the `podman build` and `buildah build` commands using the
@ -540,6 +606,56 @@ The secret needs to be passed to the build using the --secret flag. The final im
$ podman build --build-arg HTTPS_PROXY=https://my-proxy.example.com .
```
**Platform/OS/Arch ARG**
-- `ARG <name>`
When building multi-arch manifest-lists or images for a foreign-architecture,
it's often helpful to have access to platform details within the `Containerfile`.
For example, when using a `RUN curl ...` command to install an OS/Arch-specific
binary into the image, or when certain `RUN` operations are known to be incompatible
or non-performant when emulating a specific architecture.
There are several named `ARG` variables available. The purpose of each should be
self-evident by its name. _However_, in all cases these ARG values are **not**
automatically populated. You must always declare them within each `FROM` section
of the `Containerfile`.
The `ARG <name>` variables are available with two prefixes:
* `TARGET...` variable names represent details about the currently running build
context (i.e. "inside" the container). These are often the most useful:
* `TARGETOS`: For example `linux`
* `TARGETARCH`: For example `amd64`
* `TARGETPLATFORM`: For example `linux/amd64`
* `TARGETVARIANT`: Uncommonly used, specific to `TARGETARCH`
* `BUILD...` variable names signify details about the _host_ performing the build
(i.e. "outside" the container):
* `BUILDOS`: OS of host performing the build
* `BUILDARCH`: Arch of host performing the build
* `BUILDPLATFORM`: Combined OS/Arch of host performing the build
* `BUILDVARIANT`: Uncommonly used, specific to `BUILDARCH`
An example `Containerfile` that uses `TARGETARCH` to fetch an arch-specific binary could be:
```
FROM busybox
ARG TARGETARCH
RUN curl -sSf -O https://example.com/downloads/bin-${TARGETARCH}.zip
```
Assuming the host platform is `linux/amd64` and foreign-architecture emulation
is enabled (e.g. via `qemu-user-static`), then running the command:
```
$ podman build --platform linux/s390x .
```
Would end up running `curl` on `https://example.com/downloads/bin-s390x.zip` and producing
a container image suited for the `linux/s390x` platform. **Note:** Emulation isn't
strictly required; these special build-args will also function when building using
`podman farm build`.
**ONBUILD**
-- `ONBUILD [INSTRUCTION]`
The **ONBUILD** instruction adds a trigger instruction to an image. The

@ -0,0 +1,29 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.2.6 (GNU/Linux)
mQINBEmkAzABEAC2/c7bP1lHQ3XScxbIk0LQWe1YOiibQBRLwf8Si5PktgtuPibT
kKpZjw8p4D+fM7jD1WUzUE0X7tXg2l/eUlMM4dw6XJAQ1AmEOtlwSg7rrMtTvM0A
BEtI7Km6fC6sU6RtBMdcqD1cH/6dbsfh8muznVA7UlX+PRBHVzdWzj6y8h84dBjo
gzcbYu9Hezqgj/lLzicqsSZPz9UdXiRTRAIhp8V30BD8uRaaa0KDDnD6IzJv3D9P
xQWbFM4Z12GN9LyeZqmD7bpKzZmXG/3drvfXVisXaXp3M07t3NlBa3Dt8NFIKZ0D
FRXBz5bvzxRVmdH6DtkDWXDPOt+Wdm1rZrCOrySFpBZQRpHw12eo1M1lirANIov7
Z+V1Qh/aBxj5EUu32u9ZpjAPPNtQF6F/KjaoHHHmEQAuj4DLex4LY646Hv1rcv2i
QFuCdvLKQGSiFBrfZH0j/IX3/0JXQlZzb3MuMFPxLXGAoAV9UP/Sw/WTmAuTzFVm
G13UYFeMwrToOiqcX2VcK0aC1FCcTP2z4JW3PsWvU8rUDRUYfoXovc7eg4Vn5wHt
0NBYsNhYiAAf320AUIHzQZYi38JgVwuJfFu43tJZE4Vig++RQq6tsEx9Ftz3EwRR
fJ9z9mEvEiieZm+vbOvMvIuimFVPSCmLH+bI649K8eZlVRWsx3EXCVb0nQARAQAB
tDBSZWQgSGF0LCBJbmMuIChiZXRhIGtleSAyKSA8c2VjdXJpdHlAcmVkaGF0LmNv
bT6JAjYEEwECACAFAkpSM+cCGwMGCwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRCT
ioDK8hVB6/9tEAC0+KmzeKceXQ/GTUoU6jy9vtkFCFrmv+c7ol4XpdTt0QhqBOwy
6m2mKWwmm8KfYfy0cADQ4y/EcoXl7FtFBwYmkCuEQGXhTDn9DvVjhooIq59LEMBQ
OW879RwwzRIZ8ebbjMUjDPF5MfPQqP2LBu9N4KvXlZp4voykwuuaJ+cbsKZR6pZ6
0RQKPHKP+NgUFC0fff7XY9cuOZZWFAeKRhLN2K7bnRHKxp+kELWb6R9ZfrYwZjWc
MIPbTd1khE53L4NTfpWfAnJRtkPSDOKEGVlVLtLq4HEAxQt07kbslqISRWyXER3u
QOJj64D1ZiIMz6t6uZ424VE4ry9rBR0Jz55cMMx5O/ni9x3xzFUgH8Su2yM0r3jE
Rf24+tbOaPf7tebyx4OKe+JW95hNVstWUDyGbs6K9qGfI/pICuO1nMMFTo6GqzQ6
DwLZvJ9QdXo7ujEtySZnfu42aycaQ9ZLC2DOCQCUBY350Hx6FLW3O546TAvpTfk0
B6x+DV7mJQH7MGmRXQsE7TLBJKjq28Cn4tVp04PmybQyTxZdGA/8zY6pPl6xyVMH
V68hSBKEVT/rlouOHuxfdmZva1DhVvUC6Xj7+iTMTVJUAq/4Uyn31P1OJmA2a0PT
CAqWkbJSgKFccsjPoTbLyxhuMSNkEZFHvlZrSK9vnPzmfiRH0Orx3wYpMQ==
=21pb
-----END PGP PUBLIC KEY BLOCK-----

@ -0,0 +1,66 @@
The following public key can be used to verify RPM packages built and
signed by Red Hat, Inc. This key is used for packages in Red Hat
products shipped after November 2009, and for all updates to those
products.
Questions about this key should be sent to security@redhat.com.
pub 4096R/FD431D51 2009-10-22 Red Hat, Inc. (release key 2) <security@redhat.com>
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBErgSTsBEACh2A4b0O9t+vzC9VrVtL1AKvUWi9OPCjkvR7Xd8DtJxeeMZ5eF
0HtzIG58qDRybwUe89FZprB1ffuUKzdE+HcL3FbNWSSOXVjZIersdXyH3NvnLLLF
0DNRB2ix3bXG9Rh/RXpFsNxDp2CEMdUvbYCzE79K1EnUTVh1L0Of023FtPSZXX0c
u7Pb5DI5lX5YeoXO6RoodrIGYJsVBQWnrWw4xNTconUfNPk0EGZtEnzvH2zyPoJh
XGF+Ncu9XwbalnYde10OCvSWAZ5zTCpoLMTvQjWpbCdWXJzCm6G+/hx9upke546H
5IjtYm4dTIVTnc3wvDiODgBKRzOl9rEOCIgOuGtDxRxcQkjrC+xvg5Vkqn7vBUyW
9pHedOU+PoF3DGOM+dqv+eNKBvh9YF9ugFAQBkcG7viZgvGEMGGUpzNgN7XnS1gj
/DPo9mZESOYnKceve2tIC87p2hqjrxOHuI7fkZYeNIcAoa83rBltFXaBDYhWAKS1
PcXS1/7JzP0ky7d0L6Xbu/If5kqWQpKwUInXtySRkuraVfuK3Bpa+X1XecWi24JY
HVtlNX025xx1ewVzGNCTlWn1skQN2OOoQTV4C8/qFpTW6DTWYurd4+fE0OJFJZQF
buhfXYwmRlVOgN5i77NTIJZJQfYFj38c/Iv5vZBPokO6mffrOTv3MHWVgQARAQAB
tDNSZWQgSGF0LCBJbmMuIChyZWxlYXNlIGtleSAyKSA8c2VjdXJpdHlAcmVkaGF0
LmNvbT6JAjYEEwECACAFAkrgSTsCGwMGCwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAK
CRAZni+R/UMdUWzpD/9s5SFR/ZF3yjY5VLUFLMXIKUztNN3oc45fyLdTI3+UClKC
2tEruzYjqNHhqAEXa2sN1fMrsuKec61Ll2NfvJjkLKDvgVIh7kM7aslNYVOP6BTf
C/JJ7/ufz3UZmyViH/WDl+AYdgk3JqCIO5w5ryrC9IyBzYv2m0HqYbWfphY3uHw5
un3ndLJcu8+BGP5F+ONQEGl+DRH58Il9Jp3HwbRa7dvkPgEhfFR+1hI+Btta2C7E
0/2NKzCxZw7Lx3PBRcU92YKyaEihfy/aQKZCAuyfKiMvsmzs+4poIX7I9NQCJpyE
IGfINoZ7VxqHwRn/d5mw2MZTJjbzSf+Um9YJyA0iEEyD6qjriWQRbuxpQXmlAJbh
8okZ4gbVFv1F8MzK+4R8VvWJ0XxgtikSo72fHjwha7MAjqFnOq6eo6fEC/75g3NL
Ght5VdpGuHk0vbdENHMC8wS99e5qXGNDued3hlTavDMlEAHl34q2H9nakTGRF5Ki
JUfNh3DVRGhg8cMIti21njiRh7gyFI2OccATY7bBSr79JhuNwelHuxLrCFpY7V25
OFktl15jZJaMxuQBqYdBgSay2G0U6D1+7VsWufpzd/Abx1/c3oi9ZaJvW22kAggq
dzdA27UUYjWvx42w9menJwh/0jeQcTecIUd0d0rFcw/c1pvgMMl/Q73yzKgKYw==
=zbHE
-----END PGP PUBLIC KEY BLOCK-----
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGIpIp4BEAC/o5e1WzLIsS6/JOQCs4XYATYTcf6B6ALzcP05G0W3uRpUQSrL
FRKNrU8ZCelm/B+XSh2ljJNeklp2WLxYENDOsftDXGoyLr2hEkI5OyK267IHhFNJ
g+BN+T5Cjh4ZiiWij6o9F7x2ZpxISE9M4iI80rwSv1KOnGSw5j2zD2EwoMjTVyVE
/t3s5XJxnDclB7ZqL+cgjv0mWUY/4+b/OoRTkhq7b8QILuZp75Y64pkrndgakm1T
8mAGXV02mEzpNj9DyAJdUqa11PIhMJMxxHOGHJ8CcHZ2NJL2e7yJf4orTj+cMhP5
LzJcVlaXnQYu8Zkqa0V6J1Qdj8ZXL72QsmyicRYXAtK9Jm5pvBHuYU2m6Ja7dBEB
Vkhe7lTKhAjkZC5ErPmANNS9kPdtXCOpwN1lOnmD2m04hks3kpH9OTX7RkTFUSws
eARAfRID6RLfi59B9lmAbekecnsMIFMx7qR7ZKyQb3GOuZwNYOaYFevuxusSwCHv
4FtLDIhk+Fge+EbPdEva+VLJeMOb02gC4V/cX/oFoPkxM1A5LHjkuAM+aFLAiIRd
Np/tAPWk1k6yc+FqkcDqOttbP4ciiXb9JPtmzTCbJD8lgH0rGp8ufyMXC9x7/dqX
TjsiGzyvlMnrkKB4GL4DqRFl8LAR02A3846DD8CAcaxoXggL2bJCU2rgUQARAQAB
tDVSZWQgSGF0LCBJbmMuIChhdXhpbGlhcnkga2V5IDMpIDxzZWN1cml0eUByZWRo
YXQuY29tPokCUgQTAQgAPBYhBH5GJCWMQGU11W1vE1BU5KRaY0CzBQJiKSKeAhsD
BQsJCAcCAyICAQYVCgkICwIEFgIDAQIeBwIXgAAKCRBQVOSkWmNAsyBfEACuTN/X
YR+QyzeRw0pXcTvMqzNE4DKKr97hSQEwZH1/v1PEPs5O3psuVUm2iam7bqYwG+ry
EskAgMHi8AJmY0lioQD5/LTSLTrM8UyQnU3g17DHau1NHIFTGyaW4a7xviU4C2+k
c6X0u1CPHI1U4Q8prpNcfLsldaNYlsVZtUtYSHKPAUcswXWliW7QYjZ5tMSbu8jR
OMOc3mZuf0fcVFNu8+XSpN7qLhRNcPv+FCNmk/wkaQfH4Pv+jVsOgHqkV3aLqJeN
kNUnpyEKYkNqo7mNfNVWOcl+Z1KKKwSkIi3vg8maC7rODsy6IX+Y96M93sqYDQom
aaWue2gvw6thEoH4SaCrCL78mj2YFpeg1Oew4QwVcBnt68KOPfL9YyoOicNs4Vuu
fb/vjU2ONPZAeepIKA8QxCETiryCcP43daqThvIgdbUIiWne3gae6eSj0EuUPoYe
H5g2Lw0qdwbHIOxqp2kvN96Ii7s1DK3VyhMt/GSPCxRnDRJ8oQKJ2W/I1IT5VtiU
zMjjq5JcYzRPzHDxfVzT9CLeU/0XQ+2OOUAiZKZ0dzSyyVn8xbpviT7iadvjlQX3
CINaPB+d2Kxa6uFWh+ZYOLLAgZ9B8NKutUHpXN66YSfe79xFBSFWKkJ8cSIMk13/
Ifs7ApKlKCCRDpwoDqx/sjIaj1cpOfLHYjnefg==
=UZd/
-----END PGP PUBLIC KEY BLOCK-----

@ -5,30 +5,33 @@ containers-auth.json - syntax for the registry authentication file
# DESCRIPTION
A file in JSON format controlling authentication against container image registries.
The primary (read/write) file is stored at `${XDG_RUNTIME_DIR}/containers/auth.json` on Linux;
on Windows and macOS, at `$HOME/.config/containers/auth.json`.
When searching for the credential for a registry, the following files will be read in sequence until the valid credential is found:
first reading the primary (read/write) file, or the explicit override using an option of the calling application.
If credentials are not present there,
the search continues in `${XDG_CONFIG_HOME}/containers/auth.json` (usually `~/.config/containers/auth.json`), `$HOME/.docker/config.json`, `$HOME/.dockercfg`.
Except for the primary (read/write) file, other files are read-only unless the user, using an option of the calling application, explicitly points at it as an override.
## FORMAT
The auth.json file stores, or references, credentials that allow the user to authenticate
to container image registries.
It is primarily managed by a `login` command from a container tool such as `podman login`,
`buildah login`, or `skopeo login`.
Each entry contains a single hostname (e.g., `docker.io`) or a namespace (e.g., `quay.io/user/image`) as a key,
and credentials in the form of a base64-encoded string as value of `auth`. The
base64-encoded string contains a concatenation of the username, a colon, and the
password.
When checking for available credentials, the relevant repository is matched
against available keys in its hierarchical order, going from most-specific to least-specific.
For example, an image pull for `my-registry.local/namespace/user/image:latest` will
result in a lookup in `auth.json` in the following order:
- `my-registry.local/namespace/user/image`
@ -77,10 +80,8 @@ preserving a fallback for `my-registry.local`:
An entry can be removed by using a `logout` command from a container
tool such as `podman logout` or `buildah logout`.
In addition, credential helpers can be configured for specific registries, and the credentials-helper
software can be used to manage the credentials more securely than storing only base64-encoded credentials in `auth.json`.
When the credential helper is in use on a Linux platform, the auth.json file would contain keys that specify the registry domain, and values that specify the suffix of the program to use (i.e. everything after docker-credential-). For example:
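One plausible shape for such a file, with an illustrative registry name and helper suffix (`docker-credential-secretservice` would be the binary actually invoked):
```
{
  "auths": {},
  "credHelpers": {
    "registry.example.com": "secretservice"
  }
}
```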

@ -30,7 +30,9 @@ Policy requirements can be defined for:
Usually, a scope can be defined to match a single image, and various prefixes of
such a most specific scope define namespaces of matching images.
- A default policy for a single transport, expressed using an empty string as a scope
- A global default policy.
If multiple policy requirements match a given image, only the requirements from the most specific match apply,
@ -59,18 +61,41 @@ The global `default` set of policy requirements is mandatory; all of the other f
<!-- NOTE: Keep this in sync with transports/transports.go! -->
## Supported transports and their scopes
See containers-transports(5) for general documentation about the transports and their reference syntax.
### `atomic:`
The deprecated `atomic:` transport refers to images in an Atomic Registry.
Supported scopes use the form _hostname_[`:`_port_][`/`_namespace_[`/`_imagestream_ [`:`_tag_]]],
i.e. either specifying a complete name of a tagged image, or prefix denoting
a host/namespace/image stream, or a wildcarded expression starting with `*.` for matching all
subdomains. For wildcarded subdomain matching, `*.example.com` is a valid case, but `example*.*.com` is not.
*Note:* The _hostname_ and _port_ refer to the container registry host and port (the one used
e.g. for `docker pull`), _not_ to the OpenShift API host and port.
### `containers-storage:`
Supported scopes have the form `[`_storage-specifier_`]`_image-scope_.
`[`_storage-specifier_`]` is usually `[`_graph-driver-name_`@`_graph-root_`]`, e.g. `[overlay@/var/lib/containers/storage]`.
_image-scope_ matching the individual image is
- a named Docker reference *in the fully expanded form*, either using a tag or digest. For example, `docker.io/library/busybox:latest` (*not* `busybox:latest`)
- and/or (depending on which one the user's input provides) `@`_image-id_
More general scopes are prefixes of individual-image scopes, and specify a less-precisely-specified image, or a repository
(by omitting first the image ID, if any; then the digest, if any; and finally a tag, if any),
a repository namespace, or a registry host (by only specifying the host name and possibly a port number).
Finally, two full-store specifiers matching all images in the store are valid scopes:
- `[`_graph-driver-name_`@`_graph-root_`]` and
- `[`_graph-root_`]`
Note that some tools like Podman and Buildah hard-code overrides of the signature verification policy for “push” operations,
allowing these operations regardless of configuration in `policy.json`.
### `dir:`
The `dir:` transport refers to images stored in local directories.
@ -78,9 +103,9 @@ The `dir:` transport refers to images stored in local directories.
Supported scopes are paths of directories (either containing a single image or
subdirectories possibly containing images).
*Note:*
- The paths must be absolute and contain no symlinks. Paths violating these requirements may be silently ignored.
- The top-level scope `"/"` is forbidden; use the transport default scope `""`,
for consistency with other transports.
### `docker:`
@ -91,24 +116,73 @@ Scopes matching individual images are named Docker references *in the fully expa
using a tag or digest. For example, `docker.io/library/busybox:latest` (*not* `busybox:latest`).
More general scopes are prefixes of individual-image scopes, and specify a repository (by omitting the tag or digest),
a repository namespace, or a registry host (by only specifying the host name and possibly a port number)
or a wildcarded expression starting with `*.`, for matching all subdomains (not including a port number). For wildcarded subdomain
matching, `*.example.com` is a valid case, but `example*.*.com` is not.
### `docker-archive:`
Only the default `""` scope is supported.
### `docker-daemon:`
For references using the _algo:digest_ format (referring to an image ID), only the default `""` scope is used.
For images using a named reference, scopes matching individual images are *in the fully expanded form*, either
using a tag or digest. For example, `docker.io/library/busybox:latest` (*not* `busybox:latest`).
More general named scopes are prefixes of individual-image scopes, and specify a repository (by omitting the tag or digest),
a repository namespace, or a registry host (by only specifying the host name and possibly a port number)
or a wildcarded expression starting with `*.`, for matching all subdomains (not including a port number). For wildcarded subdomain
matching, `*.example.com` is a valid case, but `example*.*.com` is not.
### `oci:`
The `oci:` transport refers to images in directories compliant with "Open Container Image Layout Specification".
Supported scopes are paths to directories
(either containing an OCI layout, or subdirectories possibly containing OCI layout directories).
The _reference_ annotation value, if any, is not used.
*Note:*
- The paths must be absolute and contain no symlinks. Paths violating these requirements may be silently ignored.
- The top-level scope `"/"` is forbidden; use the transport default scope `""`,
for consistency with other transports.
### `oci-archive:`
Supported scopes are paths to OCI archives, and their parent directories
(either containing a single archive, or subdirectories possibly containing archives).
The _reference_ annotation value, if any, is not used.
*Note:*
- The paths must be absolute and contain no symlinks. Paths violating these requirements may be silently ignored.
- The top-level scope `"/"` is forbidden; use the transport default scope `""`,
for consistency with other transports.
### `ostree`:
Supported scopes have the form _repo-path_`:`_image-scope_; _repo_path_ is the path to the OSTree repository.
_image-scope_ is the _docker_reference_ part of the reference, with a `:latest` tag implied if no tag is present,
and parent namespaces of the _docker_reference_ value (by omitting the tag, or a prefix specifying a higher-level namespace).
*Note:*
- The _repo_path_ must be absolute and contain no symlinks. Paths violating these requirements may be silently ignored.
### `sif:`
Supported scopes are paths to Singularity images, and their parent directories
(either containing images, or subdirectories possibly containing images).
*Note:*
- The paths must be absolute and contain no symlinks. Paths violating these requirements may be silently ignored.
- The top-level scope `"/"` is forbidden; use the transport default scope `""`,
for consistency with other transports.
### `tarball:`
The `tarball:` transport is an implementation detail of some import workflows. Only the default `""` scope is supported.
## Policy Requirements
@ -149,20 +223,21 @@ This requirement rejects every image, and every signature.
### `signedBy`
This requirement requires an image to be signed using “simple signing” with an expected identity, or accepts a signature if it is using an expected identity and key.
```js
{
"type": "signedBy",
"keyType": "GPGKeys", /* The only currently supported value */
"keyPath": "/path/to/local/keyring/file",
"keyPaths": ["/path/to/local/keyring/file1","/path/to/local/keyring/file2"…],
"keyData": "base64-encoded-keyring-data",
"signedIdentity": identity_requirement
}
```
<!-- Later: other keyType values -->
Exactly one of `keyPath`, `keyPaths` and `keyData` must be present, containing a GPG keyring of one or more public keys. Only signatures made by these keys are accepted.
The `signedIdentity` field, a JSON object, specifies what image identity the signature claims about the image.
One of the following alternatives are supported:
@ -236,6 +311,58 @@ used with `exactReference` or `exactRepository`.
<!-- ### `signedBaseLayer` -->
### `sigstoreSigned`
This requirement requires an image to be signed using a sigstore signature with an expected identity and key.
```js
{
"type": "sigstoreSigned",
"keyPath": "/path/to/local/public/key/file",
"keyPaths": ["/path/to/first/public/key/one", "/path/to/first/public/key/two"],
"keyData": "base64-encoded-public-key-data",
"keyDatas": ["base64-encoded-public-key-one-data", "base64-encoded-public-key-two-data"],
"fulcio": {
"caPath": "/path/to/local/CA/file",
"caData": "base64-encoded-CA-data",
"oidcIssuer": "https://expected.OIDC.issuer/",
"subjectEmail": "expected-signing-user@example.com",
},
"rekorPublicKeyPath": "/path/to/local/public/key/file",
"rekorPublicKeyPaths": ["/path/to/local/public/key/one","/path/to/local/public/key/two"],
"rekorPublicKeyData": "base64-encoded-public-key-data",
"rekorPublicKeyDatas": ["base64-encoded-public-key-one-data","base64-encoded-public-key-two-data"],
"signedIdentity": identity_requirement
}
```
Exactly one of `keyPath`, `keyPaths`, `keyData`, `keyDatas` and `fulcio` must be present.
If `keyPath` or `keyData` is present, it contains a sigstore public key.
Only signatures made by this key are accepted.
If `keyPaths` or `keyDatas` is present, it contains sigstore public keys.
Only signatures made by any key in the list are accepted.
If `fulcio` is present, the signature must be based on a Fulcio-issued certificate.
One of `caPath` and `caData` must be specified, containing the public key of the Fulcio instance.
Both `oidcIssuer` and `subjectEmail` are mandatory,
exactly specifying the expected identity provider,
and the identity of the user obtaining the Fulcio certificate.
At most one of `rekorPublicKeyPath`, `rekorPublicKeyPaths`, `rekorPublicKeyData` and `rekorPublicKeyDatas` can be present;
it is mandatory if `fulcio` is specified.
If a Rekor public key is specified,
the signature must have been uploaded to a Rekor server
and the signature must contain an (offline-verifiable) “signed entry timestamp”
proving the existence of the Rekor log record,
signed by one of the provided public keys.
The `signedIdentity` field has the same semantics as in the `signedBy` requirement described above.
Note that `cosign`-created signatures only contain a repository, so only `matchRepository` and `exactRepository` can be used to accept them (and that does not protect against substitution of a signed image with an unexpected tag).
To use this with images hosted on image registries, the `use-sigstore-attachments` option needs to be enabled for the relevant registry or repository in the client's containers-registries.d(5).
## Examples
It is *strongly* recommended to set the `default` policy to `reject`, and then
@ -255,9 +382,56 @@ selectively allow individual transports and scopes as desired.
"docker.io/openshift": [{"type": "insecureAcceptAnything"}],
/* Similarly, allow installing the “official” busybox images. Note how the fully expanded
form, with the explicit /library/, must be used. */
"docker.io/library/busybox": [{"type": "insecureAcceptAnything"}],
/* Allow installing images from all subdomains */
"*.temporary-project.example.com": [{"type": "insecureAcceptAnything"}],
/* A sigstore-signed repository */
"hostname:5000/myns/sigstore-signed-with-full-references": [
{
"type": "sigstoreSigned",
"keyPath": "/path/to/sigstore-pubkey.pub"
}
],
/* A sigstore-signed repository using the community Fulcio+Rekor servers.
The community servers' public keys can be obtained from
https://github.com/sigstore/sigstore/tree/main/pkg/tuf/repository/targets . */
"hostname:5000/myns/sigstore-signed-fulcio-rekor": [
{
"type": "sigstoreSigned",
"fulcio": {
"caPath": "/path/to/fulcio_v1.crt.pem",
"oidcIssuer": "https://github.com/login/oauth",
"subjectEmail": "test-user@example.com"
},
"rekorPublicKeyPath": "/path/to/rekor.pub",
}
],
/* A sigstore-signed repository, accepts signatures by /usr/bin/cosign */
"hostname:5000/myns/sigstore-signed-allows-malicious-tag-substitution": [
{
"type": "sigstoreSigned",
"keyPath": "/path/to/sigstore-pubkey.pub",
"signedIdentity": {"type": "matchRepository"}
}
],
/* A sigstore-signed repository using the community Fulcio+Rekor servers,
accepts signatures by /usr/bin/cosign.
The community servers' public keys can be obtained from
https://github.com/sigstore/sigstore/tree/main/pkg/tuf/repository/targets . */
"hostname:5000/myns/sigstore-signed-fulcio-rekor-allows-malicious-tag-substitution": [
{
"type": "sigstoreSigned",
"fulcio": {
"caPath": "/path/to/fulcio_v1.crt.pem",
"oidcIssuer": "https://github.com/login/oauth",
"subjectEmail": "test-user@example.com"
},
"rekorPublicKeyPath": "/path/to/rekor.pub",
"signedIdentity": { "type": "matchRepository" }
}
]
/* Other docker: images use the global default policy and are rejected */
},
"dir": {
@ -301,7 +475,7 @@ selectively allow individual transports and scopes as desired.
"signedIdentity": {
"type": "remapIdentity",
"prefix": "private-mirror:5000/vendor-mirror",
"signedPrefix": "vendor.example.com"
}
}
]

@ -19,6 +19,12 @@ Container engines will use the `$HOME/.config/containers/registries.conf` if it
`credential-helpers`
: An array of default credential helpers used as external credential stores. Note that "containers-auth.json" is a reserved value to use auth files as specified in containers-auth.json(5). The credential helpers are set to `["containers-auth.json"]` if none are specified.
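A hedged sketch of the corresponding TOML line (the helper name is illustrative; `secretservice` would refer to a `docker-credential-secretservice` binary on `$PATH`):
```
credential-helpers = ["containers-auth.json", "secretservice"]
```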
`additional-layer-store-auth-helper`
: A string containing the helper binary name. This enables passing registry credentials to an
Additional Layer Store every time an image is read using the `docker://`
transport so that it can access private registries. See the 'Enabling Additional Layer Store to access to private registries' section below for
more details.
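A sketch of the corresponding setting (the helper name is a placeholder for whatever binary the Additional Layer Store provides):
```
additional-layer-store-auth-helper = "stargz-store-helper"
```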
### NAMESPACED `[[registry]]` SETTINGS
The bulk of the configuration is represented as an array of `[[registry]]`
@ -43,6 +49,8 @@ also include wildcarded subdomains in the format `*.example.com`.
The wildcard should only be present at the beginning as shown in the formats
above. Other cases will not work. For example, `*.example.com` is valid but
`example.*.com`, `*.example.com/foo` and `*.example.com:5000/foo/bar:baz` are not.
Note that `*` matches an arbitrary number of subdomains. `*.example.com` will hence
match `bar.example.com`, `foo.bar.example.com` and so on.
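A sketch of a wildcarded entry (hostnames are illustrative):
```
[[registry]]
prefix = "*.example.com"
location = "internal-registry.example.net"
```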
As a special case, the `prefix` field can be missing; if so, it defaults to the value
of the `location` field (described below).
@ -71,16 +79,16 @@ internet without having to change `Dockerfile`s, or to add redundancy).
: Accepts the same format as the `prefix` field, and specifies the physical location
of the `prefix`-rooted namespace.
By default, this is equal to `prefix` (in which case `prefix` can be omitted and the
`[[registry]]` TOML table can only specify `location`).
Example: Given
```
prefix = "example.com/foo"
location = "internal-registry-for-example.com/bar"
```
requests for the image `example.com/foo/myimage:latest` will actually work with the
`internal-registry-for-example.com/bar/myimage:latest` image.
With a `prefix` containing a wildcard in the format: "*.example.com" for subdomain matching,
the location can be empty. In such a case,
@ -97,30 +105,37 @@ as-is. But other settings like insecure/blocked/mirrors will be applied to match
`mirror`
: An array of TOML tables specifying (possibly-partial) mirrors for the
`prefix`-rooted namespace (i.e., the current `[[registry]]` TOML table).
The mirrors are attempted in the specified order; the first one that can be
contacted and contains the image will be used (and if none of the mirrors contains the image,
the primary location specified by the `registry.location` field, or using the unmodified
user-specified reference, is tried last).
Each TOML table in the `mirror` array can contain the following fields:
- `location`: same semantics as specified in the `[[registry]]` TOML table
- `insecure`: same semantics as specified in the `[[registry]]` TOML table
- `pull-from-mirror`: `all`, `digest-only` or `tag-only`. If `digest-only`, mirrors will only be used for digest pulls. Pulling images by tag can potentially yield different images, depending on which endpoint we pull from. Restricting mirrors to pulls by digest avoids that issue. If `tag-only`, mirrors will only be used for tag pulls. A more up-to-date but expensive mirror that is less likely to be out of sync if tags move should not be used unnecessarily for digest references. The default is `all` (or left empty): mirrors will be used for both digest pulls and tag pulls unless `mirror-by-digest-only` is set for the primary registry.
Note that this per-mirror setting is allowed only when `mirror-by-digest-only` is not configured for the primary registry.
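A hedged sketch of a digest-only mirror entry (hostnames are placeholders):
```
[[registry]]
location = "registry.example.com"

[[registry.mirror]]
location = "mirror.example.net"
pull-from-mirror = "digest-only"
```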
`mirror-by-digest-only`
: `true` or `false`.
If `true`, mirrors will only be used during pulling if the image reference includes a digest.
Note that if all mirrors are configured to be digest-only, images referenced by a tag will only use the primary
registry.
If all mirrors are configured to be tag-only, images referenced by a digest will only use the primary
registry.
Referencing an image by digest ensures that the same image is always used
(whereas referencing an image by a tag may cause different registries to return
different images if the tag mapping is out of sync).
*Note*: Redirection and mirrors are currently processed only when reading a single image,
not when pushing to a registry nor when doing any other kind of lookup/search on a registry.
This may change in the future.
#### Short-Name Aliasing
The use of unqualified-search registries entails an ambiguity as it is
@ -228,14 +243,47 @@ location = "example-mirror-0.local/mirror-for-foo"
[[registry.mirror]]
location = "example-mirror-1.local/mirrors/foo"
insecure = true
[[registry]]
location = "registry.com"
[[registry.mirror]]
location = "mirror.registry.com"
```
Given the above, a pull of `example.com/foo/image:latest` will try:
1. `example-mirror-0.local/mirror-for-foo/image:latest`
2. `example-mirror-1.local/mirrors/foo/image:latest`
3. `internal-registry-for-example.com/bar/image:latest`
in order, and use the first one that exists.
Note that a mirror is associated only with the current `[[registry]]` TOML table. If using the example above, pulling the image `registry.com/image:latest` will hence only reach out to `mirror.registry.com`, and the mirrors associated with `example.com/foo` will not be considered.
### Enabling Additional Layer Store to access to private registries
The `additional-layer-store-auth-helper` option enables passing registry
credentials to an Additional Layer Store so that it can access private registries.
When accessing a private registry via an Additional Layer Store, a helper binary needs to be provided. This helper binary is
registered via the `additional-layer-store-auth-helper` option. Every time an image
is read using the `docker://` transport, the specified helper binary is executed
and receives registry credentials from stdin in the following format.
```json
{
"$image_reference": {
"username": "$username",
"password": "$password",
"identityToken": "$identityToken"
}
}
```
The format of `$image_reference` is `$repo{:$tag|@$digest}`.
Additional Layer Stores can use this helper binary to access the private registry.
## VERSION 1 FORMAT - DEPRECATED
VERSION 1 format is still supported but it does not support
using registry mirrors, longest-prefix matches, or location rewriting.

@ -63,25 +63,31 @@ more general scopes is ignored. For example, if _any_ configuration exists for
### Built-in Defaults
If no `docker` section can be found for the container image, and no `default-docker` section is configured:
- The default directory, `/var/lib/containers/sigstore` for root and `$HOME/.local/share/containers/sigstore` for unprivileged user, will be used for reading and writing signatures.
- Sigstore attachments will not be read/written.
## Individual Configuration Sections
A single configuration section is selected for a container image using the process
described above. The configuration section is a YAML mapping, with the following keys:
<!-- `sigstore` and `sigstore-staging` are deprecated and intentionally not documented here. -->
- `lookaside-staging` defines an URL of the signature storage, used for editing it (adding or deleting signatures).
This key is optional; if it is missing, `lookaside` below is used.
- `lookaside` defines an URL of the signature storage.
This URL is used for reading existing signatures,
and if `lookaside-staging` does not exist, also for adding or removing them.
This key is optional; if it is missing, no signature storage is defined (no signatures
are downloaded along with images; adding new signatures is possible only if `lookaside-staging` is defined).
- `use-sigstore-attachments` specifies whether sigstore image attachments (signatures, attestations and the like) are going to be read/written along with the image.
If disabled, the images are treated as if no attachments exist; attempts to write attachments fail.
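For instance, a minimal registries.d entry enabling attachments for one registry might look like this (the hostname is illustrative):
```yaml
docker:
  registry.example.com:
    use-sigstore-attachments: true
```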
## Examples
@ -92,11 +98,11 @@ The following demonstrates how to consume and run images from various registr
```yaml
docker:
  registry.database-supplier.com:
    lookaside: https://lookaside.database-supplier.com
  distribution.great-middleware.org:
    lookaside: https://security-team.great-middleware.org/lookaside
  docker.io/web-framework:
    lookaside: https://lookaside.web-framework.io:8080
```
### Developing and Signing Containers, Staging Signatures
@ -110,13 +116,13 @@ For developers in `example.com`:
```yaml
docker:
  registry.example.com:
    lookaside: https://registry-lookaside.example.com
  registry.example.com/mydepartment:
    lookaside: https://lookaside.mydepartment.example.com
    lookaside-staging: file:///mnt/mydepartment/lookaside-staging
  registry.example.com/mydepartment/myproject:mybranch:
    lookaside: http://localhost:4242/lookaside
    lookaside-staging: file:///home/useraccount/webroot/lookaside
```
### A Global Default
@ -126,7 +132,7 @@ without listing each domain individually. This is expected to rarely happen, usu
```yaml
default-docker:
  lookaside-staging: file:///mnt/company/common-lookaside-staging
```
# AUTHORS

@ -68,7 +68,9 @@ the consumer MUST verify at least the following aspects of the signature
(like the `github.com/containers/image/signature` package does):
- The blob MUST be a “Signed Message” as defined RFC 4880 section 11.3.
(e.g. it MUST NOT be an unsigned “Literal Message”,
a “Cleartext Signature” as defined in RFC 4880 section 7,
or any other non-signature format).
- The signature MUST have been made by an expected key trusted for the purpose (and the specific container image).
- The signature MUST be correctly formed and pass the cryptographic validation.
- The signature MUST correctly authenticate the included JSON payload
@ -210,7 +212,8 @@ Consumers still SHOULD reject any signature where a member of an `optional` obje
### `optional.creator`

If present, this MUST be a JSON string, identifying the name and version of the software which has created the signature
(identifying the low-level software implementation; not the top-level caller).

The contents of this string are not defined in detail; however each implementation creating container signatures:

@ -27,8 +27,7 @@ No bare options are used. The format of TOML can be simplified to:
The `storage` table supports the following options:
**driver**="" **driver**=""
container storage driver Copy On Write (COW) container storage driver. Valid drivers are "overlay", "vfs", "aufs", "btrfs", and "zfs". Some drivers (for example, "zfs", "btrfs", and "aufs") may not work if your kernel lacks support for the filesystem.
Default Copy On Write (COW) container storage driver. Valid drivers are "overlay", "vfs", "devmapper", "aufs", "btrfs", and "zfs". Some drivers (for example, "zfs", "btrfs", and "aufs") may not work if your kernel lacks support for the filesystem.
This field is required to guarantee proper operation.
Valid rootless drivers are "btrfs", "overlay", and "vfs".
Rootless users default to the driver defined in the system configuration when possible.
@ -37,35 +36,46 @@ The `storage` table supports the following options:
**graphroot**="" **graphroot**=""
container storage graph dir (default: "/var/lib/containers/storage") container storage graph dir (default: "/var/lib/containers/storage")
Default directory to store all writable content created by container storage programs. Default directory to store all writable content created by container storage programs.
The rootless graphroot path supports environment variable substitutions (ie. `$HOME/containers/storage`) The rootless graphroot path supports environment variable substitutions (ie. `$HOME/containers/storage`).
When changing the graphroot location on an SELINUX system, ensure When changing the graphroot location on an SELINUX system, ensure the labeling matches the default locations labels with the following commands:
the labeling matches the default locations labels with the
following commands:
```
# semanage fcontext -a -e /var/lib/containers/storage /NEWSTORAGEPATH
# restorecon -R -v /NEWSTORAGEPATH
```
In rootless mode you would set
```
# semanage fcontext -a -e $HOME/.local/share/containers /NEWSTORAGEPATH
$ restorecon -R -v /NEWSTORAGEPATH
```
**rootless_storage_path**="$HOME/.local/share/containers/storage"
Storage path for rootless users. By default the graphroot for rootless users is set to `$XDG_DATA_HOME/containers/storage`, if XDG_DATA_HOME is set. Otherwise `$HOME/.local/share/containers/storage` is used. This field can be used if administrators need to change the storage location for all users. The rootless storage path supports environment variable substitutions (i.e. `$HOME/containers/storage`).
A common use case for this field is to provide a local storage directory when user home directories are NFS-mounted (podman does not support container storage over NFS).
**imagestore**=""
Path of the imagestore, which is different from `graphroot`. By default, images in the storage library are stored in the `graphroot`. If `imagestore` is provided, newly pulled images will be stored in the `imagestore` location. All other storage continues to be stored in the `graphroot`. When using the `overlay` driver, images previously stored in the `graphroot` remain accessible. Internally, the storage library mounts `graphroot` as an `additionalImageStore` to allow this behavior.
A common use case for the `imagestore` field is users who need to split filesystems in different partitions. The `imagestore` partition stores images and the `graphroot` partition stores container content created from the images.
Imagestore, if set, must be different from `graphroot`.
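As a sketch, a split-partition layout might look like the following (the paths are illustrative, not defaults):

```toml
[storage]
driver = "overlay"
# Container layers and all other writable state stay on the root partition.
graphroot = "/var/lib/containers/storage"
# Newly pulled images land on a separate, larger partition (hypothetical mount point).
imagestore = "/mnt/images/containers/storage"
```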
**runroot**="" **runroot**=""
container storage run dir (default: "/run/containers/storage") container storage run dir (default: "/run/containers/storage")
Default directory to store all temporary writable content created by container storage programs. Default directory to store all temporary writable content created by container storage programs. The rootless runroot path supports environment variable substitutions (ie. `$HOME/containers/storage`)
The rootless runroot path supports environment variable substitutions (ie. `$HOME/containers/storage`)
**driver_priority**=[]
Priority list for the storage drivers that will be tested one after the other to pick the storage driver if it is not defined. The first storage driver in this list that can be used will be picked as the new one and all subsequent ones will not be tried. If none of the drivers in this list are viable, then **all** known drivers will be tried and the first working one will be picked.
By default, the storage driver is set via the `driver` option. If it is not defined, then the best driver will be picked according to the current platform. This option allows you to override this internal priority list with a custom one to prefer certain drivers.
Setting this option only has an effect if the local storage has not been initialized yet and the driver name is not set.
**transient_store** = "false" | "true"
Transient store mode makes all container metadata be saved in temporary storage
(i.e. runroot above). This is faster, but doesn't persist across reboots.
Additional garbage collection must also be performed at boot-time, so this option should remain disabled in most configurations. (default: false)
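A sketch of how these keys might be combined in the `storage` table (the values are illustrative, not defaults):

```toml
[storage]
# Try overlay first and fall back to vfs if overlay cannot be used.
driver_priority = ["overlay", "vfs"]
# Keep container metadata in runroot only; faster, but lost on reboot.
transient_store = true
```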
### STORAGE OPTIONS TABLE
@ -74,23 +84,32 @@ The `storage.options` table supports the following options:
**additionalimagestores**=[]
Paths to additional container image stores. Usually these are read-only and stored on remote network shares.
**remap-uids=**"" **pull_options** = {enable_partial_images = "true", use_hard_links = "false", ostree_repos=""}
**remap-gids=**""
Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of a container, to the UIDs/GIDs outside of the container, and the length of the range of UIDs/GIDs. Additional mapped sets can be listed and will be heeded by libraries, but there are limits to the number of mappings which the kernel will allow when you later attempt to run a container. Allows specification of how storage is populated when pulling images. This
option can speed the pulling process of images compressed with format zstd:chunked. Containers/storage looks
Example for files within images that are being pulled from a container registry that
remap-uids = 0:1668442479:65536 were previously pulled to the host. It can copy or create
remap-gids = 0:1668442479:65536 a hard link to the existing file when it finds them, eliminating the need to pull them from the
container registry. These options can deduplicate pulling of content, disk
These mappings tell the container engines to map UID 0 inside of the container to UID 1668442479 outside. UID 1 will be mapped to 1668442480. UID 2 will be mapped to 1668442481, etc, for the next 65533 UIDs in succession. storage of content and can allow the kernel to use less memory when running
containers.
**remap-user**=""
**remap-group**="" containers/storage supports four keys
Remap-User/Group is a user name which can be used to look up one or more UID/GID ranges in the /etc/subuid or /etc/subgid file. Mappings are set up starting with an in-container ID of 0 and then a host-level ID taken from the lowest range that matches the specified name, and using the length of that range. Additional ranges are then assigned, using the ranges which specify the lowest host-level IDs first, to the lowest not-yet-mapped in-container ID, until all of the entries have been used for maps. * enable_partial_images="true" | "false"
Tells containers/storage to look for files previously pulled in storage
Example rather then always pulling them from the container registry.
remap-user = "containers" * use_hard_links = "false" | "true"
remap-group = "containers" Tells containers/storage to use hard links rather then create new files in
the image, if an identical file already existed in storage.
* ostree_repos = ""
Tells containers/storage where an ostree repository exists that might have
previously pulled content which can be used when attempting to avoid
pulling content from the container registry
* convert_images = "false" | "true"
If set to true, containers/storage will convert images to a format compatible with
partial pulls in order to take advantage of local deduplication and hardlinking. It is an
expensive operation so it is not enabled by default.
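An illustrative `[storage.options]` snippet combining a shared read-only image store with the pull options described above (the store path is an example, not a default):

```toml
[storage.options]
# Additional read-only image store, e.g. on a network share (example path).
additionalimagestores = [ "/mnt/shared/containers/storage" ]

# Reuse previously pulled zstd:chunked content and hard-link identical files.
pull_options = {enable_partial_images = "true", use_hard_links = "true", ostree_repos = ""}
```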
**root-auto-userns-user**="" **root-auto-userns-user**=""
Root-auto-userns-user is a user name which can be used to look up one or more UID/GID ranges in the /etc/subuid and /etc/subgid file. These ranges will be partitioned to containers configured to create automatically a user namespace. Containers configured to automatically create a user namespace can still overlap with containers having an explicit mapping set. This setting is ignored when running as rootless. Root-auto-userns-user is a user name which can be used to look up one or more UID/GID ranges in the /etc/subuid and /etc/subgid file. These ranges will be partitioned to containers configured to create automatically a user namespace. Containers configured to automatically create a user namespace can still overlap with containers having an explicit mapping set. This setting is ignored when running as rootless.
@ -121,66 +140,6 @@ The `storage.options.btrfs` table supports the following options:
**size**="" **size**=""
Maximum size of a container image. This flag can be used to set quota on the size of container images. (format: <number>[<unit>], where unit = b (bytes), k (kilobytes), m (megabytes), or g (gigabytes)) Maximum size of a container image. This flag can be used to set quota on the size of container images. (format: <number>[<unit>], where unit = b (bytes), k (kilobytes), m (megabytes), or g (gigabytes))
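For example, an illustrative quota under the btrfs driver (the value is an example, not a default):

```toml
[storage.options.btrfs]
# Cap each container image at roughly 20 gigabytes.
size = "20g"
```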
### STORAGE OPTIONS FOR THINPOOL (devicemapper) TABLE
The `storage.options.thinpool` table supports the following options for the `devicemapper` driver:
**autoextend_percent**=""
Tells the thinpool driver the amount by which the thinpool needs to be grown. This is specified in terms of % of pool size. So a value of 20 means that when threshold is hit, pool will be grown by 20% of existing pool size. (default: 20%)
**autoextend_threshold**=""
Tells the driver the thinpool extension threshold in terms of percentage of pool size. For example, if threshold is 60, that means when pool is 60% full, threshold has been hit. (default: 80%)
**basesize**=""
Specifies the size to use when creating the base device, which limits the size of images and containers. (default: 10g)
**blocksize**=""
Specifies a custom blocksize to use for the thin pool. (default: 64k)
**directlvm_device**=""
Specifies a custom block storage device to use for the thin pool. Required for using graphdriver `devicemapper`.
**directlvm_device_force**=""
Tells driver to wipe device (directlvm_device) even if device already has a filesystem. (default: false)
**fs**="xfs"
Specifies the filesystem type to use for the base device. (default: xfs)
**log_level**=""
Sets the log level of devicemapper.
0: LogLevelSuppress 0 (default)
2: LogLevelFatal
3: LogLevelErr
4: LogLevelWarn
5: LogLevelNotice
6: LogLevelInfo
7: LogLevelDebug
**metadata_size**=""
metadata_size is used to set the `pvcreate --metadatasize` options when creating thin devices. (Default 128k)
**min_free_space**=""
Specifies the min free space percent in a thin pool required for new device creation to succeed. Valid values are from 0% - 99%. Value 0% disables. (default: 10%)
**mkfsarg**=""
Specifies extra mkfs arguments to be used when creating the base device.
**mountopt**=""
Comma separated list of default options to be used to mount container images. Suggested value "nodev". Mount options are documented in the mount(8) man page.
**size**=""
Maximum size of a container image. This flag can be used to set quota on the size of container images. (format: <number>[<unit>], where unit = b (bytes), k (kilobytes), m (megabytes), or g (gigabytes))
**use_deferred_deletion**=""
Marks thinpool device for deferred deletion. If the thinpool is in use when the driver attempts to delete it, the driver will attempt to delete device every 30 seconds until successful, or when it restarts. Deferred deletion permanently deletes the device and all data stored in the device will be lost. (default: true).
**use_deferred_removal**=""
Marks devicemapper block device for deferred removal. If the device is in use when its driver attempts to remove it, the driver tells the kernel to remove the device as soon as possible. Note this does not free up the disk space, use deferred deletion to fully remove the thinpool. (default: true).
**xfs_nospace_max_retries**=""
Specifies the maximum number of retries XFS should attempt to complete IO when ENOSPC (no space) error is returned by underlying storage device. (default: 0, which means to try continuously.)
### STORAGE OPTIONS FOR OVERLAY TABLE

The `storage.options.overlay` table supports the following options:
@ -193,20 +152,19 @@ The `storage.options.overlay` table supports the following options:
**force_mask** = "0000|shared|private" **force_mask** = "0000|shared|private"
ForceMask specifies the permissions mask that is used for new files and ForceMask specifies the permissions mask that is used for new files and
directories. directories. The values "shared" and "private" are accepted. (default: ""). Octal permission
The values "shared" and "private" are accepted. (default: ""). Octal permission
masks are also accepted. masks are also accepted.
``: Not set - ``: Not set
All files/directories, get set with the permissions identified within the All files/directories, get set with the permissions identified within the
image. image.
`private`: it is equivalent to 0700. - `private`: it is equivalent to 0700.
All files/directories get set with 0700 permissions. The owner has rwx All files/directories get set with 0700 permissions. The owner has rwx
access to the files. No other users on the system can access the files. access to the files. No other users on the system can access the files.
This setting could be used with networked based home directories. This setting could be used with networked based home directories.
`shared`: it is equivalent to 0755. - `shared`: it is equivalent to 0755.
The owner has rwx access to the files and everyone else can read, access The owner has rwx access to the files and everyone else can read, access
and execute them. This setting is useful for sharing containers storage and execute them. This setting is useful for sharing containers storage
with other users. For instance, a storage owned by root could be shared with other users. For instance, a storage owned by root could be shared
@ -221,7 +179,7 @@ Note: The force_mask Flag is an experimental feature, it could change in the
future. When "force_mask" is set the original permission mask is stored in the future. When "force_mask" is set the original permission mask is stored in the
"user.containers.override_stat" xattr and the "mount_program" option must be "user.containers.override_stat" xattr and the "mount_program" option must be
specified. Mount programs like "/usr/bin/fuse-overlayfs" present the extended specified. Mount programs like "/usr/bin/fuse-overlayfs" present the extended
attribute permissions to processes within containers rather then the attribute permissions to processes within containers rather than the
"force_mask" permissions. "force_mask" permissions.
**mount_program**="" **mount_program**=""
@ -236,9 +194,15 @@ based file systems.
**mountopt**="" **mountopt**=""
Comma separated list of default options to be used to mount container images. Suggested value "nodev". Mount options are documented in the mount(8) man page. Comma separated list of default options to be used to mount container images. Suggested value "nodev". Mount options are documented in the mount(8) man page.
**skip_mount_home=""**
Tell storage drivers to not create a PRIVATE bind mount on their home directory.
**size**="" **size**=""
Maximum size of a read/write layer. This flag can be used to set quota on the size of a read/write layer of a container. (format: <number>[<unit>], where unit = b (bytes), k (kilobytes), m (megabytes), or g (gigabytes)) Maximum size of a read/write layer. This flag can be used to set quota on the size of a read/write layer of a container. (format: <number>[<unit>], where unit = b (bytes), k (kilobytes), m (megabytes), or g (gigabytes))
**use_composefs** = "false"
Use ComposeFS to mount the data layers image. ComposeFS support is experimental and not recommended for production use. (default: false)
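An illustrative `[storage.options.overlay]` section combining several of the keys above (the values are examples, not defaults):

```toml
[storage.options.overlay]
# Required when force_mask is set; fuse-overlayfs presents the original modes.
mount_program = "/usr/bin/fuse-overlayfs"
force_mask = "shared"
mountopt = "nodev"
# Limit each read/write layer to 10 gigabytes.
size = "10g"
```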
### STORAGE OPTIONS FOR VFS TABLE

The `storage.options.vfs` table supports the following options:
@ -256,9 +220,6 @@ The `storage.options.zfs` table supports the following options:
**mountopt**="" **mountopt**=""
Comma separated list of default options to be used to mount container images. Suggested value "nodev". Mount options are documented in the mount(8) man page. Comma separated list of default options to be used to mount container images. Suggested value "nodev". Mount options are documented in the mount(8) man page.
**skip_mount_home=""**
Tell storage drivers to not create a PRIVATE bind mount on their home directory.
**size**="" **size**=""
Maximum size of a container image. This flag can be used to set quota on the size of container images. (format: <number>[<unit>], where unit = b (bytes), k (kilobytes), m (megabytes), or g (gigabytes)) Maximum size of a container image. This flag can be used to set quota on the size of container images. (format: <number>[<unit>], where unit = b (bytes), k (kilobytes), m (megabytes), or g (gigabytes))
@ -317,7 +278,7 @@ This is a way to prevent xfs_quota management from conflicting with containers/s
Distributions often provide a `/usr/share/containers/storage.conf` file to define default storage configuration. Administrators can override this file by creating `/etc/containers/storage.conf` to specify their own configuration. Likewise rootless users can create a storage.conf file to override the system storage.conf files. Files should be stored in the `$XDG_CONFIG_HOME/containers/storage.conf` file. If `$XDG_CONFIG_HOME` is not set then the file `$HOME/.config/containers/storage.conf` is used.

Note: The storage.conf file overrides all other storage.conf files. Container
engines run by users with a storage.conf file in their home directory do not
use options in the system storage.conf files.

@ -9,16 +9,23 @@ containers-transports - description of supported transports for copying and stor
## DESCRIPTION

Tools which use the containers/image library, including skopeo(1), buildah(1), podman(1), all share a common syntax for referring to container images in various locations.
The general form of the syntax is _transport_`:`_details_, where details are dependent on the specified transport, which are documented below.
The semantics of the image names ultimately depend on the environment where
they are evaluated. For example: if evaluated on a remote server, image names
might refer to paths on that server; relative paths are relative to the current
directory of the image consumer.

<!-- atomic: is deprecated and not documented here. -->

### **containers-storage:**[**[**_storage-specifier_**]**]{_image-id_|_docker-reference_[**@**_image-id_]}

An image located in a local containers storage.
The format of _docker-reference_ is described in detail in the **docker** transport.

The _storage-specifier_ allows for referencing storage locations on the file system and has the format `[`[_driver_`@`]_root_[`+`_run-root_][`:`_options_]`]` where the optional _driver_ refers to the storage driver (e.g., `overlay` or `btrfs`) and where _root_ is an absolute path to the storage's root directory.
The optional _run-root_ can be used to specify the run directory of the storage where all temporary writable content is stored.
The optional _options_ are a comma-separated list of driver-specific options.
Please refer to containers-storage.conf(5) for further information on the drivers and supported options.
### **dir:**_path_
@ -33,43 +40,68 @@ By default, uses the authorization state in `$XDG_RUNTIME_DIR/containers/auth.js
If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using docker-login(1).
The containers-registries.conf(5) further allows for configuring various settings of a registry.
Note that a _docker-reference_ has the following format: _name_[`:`_tag_ | `@`_digest_].
While the docker transport does not support both a tag and a digest at the same time, some formats like containers-storage do.
Digests can also be used in an image destination as long as the manifest matches the provided digest.
The docker transport supports pushing images without a tag or digest to a registry when the image name is suffixed with `@@unknown-digest@@`. The _name_`@@unknown-digest@@` reference format cannot be used with a reference that has a tag or digest.
The digest of images can be explored with skopeo-inspect(1).

If _name_ does not contain a slash, it is treated as `docker.io/library/`_name_.
Otherwise, the component before the first slash is checked if it is recognized as a _hostname_[`:`_port_] (i.e., it contains either a `.` or a `:`, or the component is exactly `localhost`).
If the first component of name is not recognized as a _hostname_[`:`_port_], _name_ is treated as `docker.io/`_name_.
### **docker-archive:**_path_[`:`{_docker-reference_|`@`_source-index_}]
An image is stored in the docker-save(1) formatted file.
Unless a tool explicitly documents otherwise,
a write to a **docker-archive:** destination completely overwrites _path_, replacing it with the single provided image.
The _path_ can refer to a stream, e.g. `docker-archive:/dev/stdin`.
_docker-reference_ must not contain a digest.
Alternatively, for reading archives, `@`_source-index_ is a zero-based index in archive manifest
(to access untagged images).
If neither _docker-reference_ nor `@`_source-index_ is specified when reading an archive, the archive must contain exactly one image.
It is further possible to copy data to stdin by specifying `docker-archive:/dev/stdin` but note that the used file must be seekable.
### **docker-daemon:**_docker-reference_|_algo_`:`_digest_

An image stored in the docker daemon's internal storage.
The image must be specified as a _docker-reference_ or in an alternative _algo_`:`_digest_ format when being used as an image source.
The _algo_`:`_digest_ refers to the image ID reported by docker-inspect(1).
### **oci:**_path_[`:`_reference_]

An image in a directory structure compliant with the "Open Container Image Layout Specification" at _path_.

Using a _reference_ is optional and allows for storing multiple images at the same _path_.

The _path_ value terminates at the first `:` character; any further `:` characters are not separators, but a part of _reference_.
The _reference_ is used to set, or match, the `org.opencontainers.image.ref.name` annotation in the top-level index.
If _reference_ is not specified when reading an image, the directory must contain exactly one image.

### **oci-archive:**_path_[`:`_reference_]

An image in a tar(1) archive with contents compliant with the "Open Container Image Layout Specification" at _path_.
Unless a tool explicitly documents otherwise,
a write to an **oci-archive:** destination completely overwrites _path_, replacing it with the single provided image.
The _path_ value terminates at the first `:` character; any further `:` characters are not separators, but a part of _reference_.
The _reference_ is used to set, or match, the `org.opencontainers.image.ref.name` annotation in the top-level index.
If _reference_ is not specified when reading an archive, the archive must contain exactly one image.
### **ostree:**_docker-reference_[`@`_/absolute/repo/path_]
An image in the local ostree(1) repository.
_/absolute/repo/path_ defaults to `/ostree/repo`.
### **sif:**_path_
An image using the Singularity image format at _path_.
Only reading images is supported, and not all scripts can be represented in the OCI format.
<!-- tarball: can only usefully be used from Go callers who call tarballReference.ConfigUpdate, and is not documented here. -->
## Examples

@ -10,7 +10,8 @@
# locations in the following order:
# 1. /usr/share/containers/containers.conf
# 2. /etc/containers/containers.conf
# 3. $XDG_CONFIG_HOME/containers/containers.conf or
# $HOME/.config/containers/containers.conf if $XDG_CONFIG_HOME is not set
# Items specified in the latter containers.conf, if they exist, override the
# previous containers.conf settings, or the default settings.
@ -26,6 +27,18 @@
#
#apparmor_profile = "container-default"
# The hosts entries from the base hosts file are added to the containers hosts
# file. This must be either an absolute path or one of the special values "image",
# which uses the hosts file from the container image, or "none", which means
# no base hosts file is used. The default is "" which will use /etc/hosts.
#
#base_hosts_file = ""
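# For example, to build the container's hosts file on top of the image's own
# hosts file instead of the host's /etc/hosts (illustrative, not the default):
#
# base_hosts_file = "image"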
# List of cgroup_conf entries specifying a list of cgroup files to write to and
# their values. For example `memory.high=1073741824` sets the
# memory.high limit to 1GB.
# cgroup_conf = []
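# For example, an illustrative limit applied to every container
# (uncomment to use; the value is an example, not a default):
#
# cgroup_conf = ["memory.high=1073741824"]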
# Default way to create a cgroup namespace for the container
# Options are:
# `private` Create private Cgroup Namespace for the container.
@ -45,20 +58,19 @@
# List of default capabilities for containers. If it is empty or commented out,
# the default capabilities defined in the container engine will be added.
#
#default_capabilities = [
# "CHOWN",
# "DAC_OVERRIDE",
# "FOWNER",
# "FSETID",
# "KILL",
# "NET_BIND_SERVICE",
# "SETFCAP",
# "SETGID",
# "SETPCAP",
# "SETUID",
# "SYS_CHROOT",
#]
# A list of sysctls to be set in containers by default,
# specified as "name=value",
@ -108,13 +120,22 @@ default_sysctls = [
#
#env = [
# "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
# "TERM=xterm",
#]
# Pass all host environment variables into the container.
#
#env_host = false
# Set the ip for the host.containers.internal entry in the containers /etc/hosts
# file. This can be set to "none" to disable adding this entry. By default it
# will automatically choose the host ip.
#
# NOTE: When using podman machine this entry will never be added to the container's
# hosts file; instead the gvproxy dns resolver will resolve this hostname. Therefore
# it is not possible to disable the entry in this case.
#
#host_containers_internal_ip = ""
# Default proxy environment variables passed into the container.
# The environment variables passed in include:
# http_proxy, https_proxy, ftp_proxy, no_proxy, and the upper case versions of
@ -129,15 +150,27 @@ default_sysctls = [
#init = false

# Container init binary, if init=true, this is the init binary to be used for containers.
# If this option is not set catatonit is searched in the directories listed under
# the helper_binaries_dir option. It is recommended to just install catatonit
# there instead of configuring this option here.
#
#init_path = "/usr/libexec/podman/catatonit"
# Default way to create an IPC namespace (POSIX SysV IPC) for the container
# Options are:
# "host"      Share host IPC Namespace with the container.
# "none"      Create shareable IPC Namespace for the container without a private /dev/shm.
# "private"   Create private IPC Namespace for the container, other containers are not allowed to share it.
# "shareable" Create shareable IPC Namespace for the container.
#
#ipcns = "shareable"
# Default way to set an interface name inside container. Defaults to legacy
# pattern of ethX, where X is an integer, when left undefined.
# Options are:
# "device" Uses the network_interface name from the network config as interface name.
# Falls back to the ethX pattern if the network_interface is not set.
#interface_name = ""
# keyring tells the container engine whether to create
# a kernel keyring for use within the container.
@ -150,10 +183,15 @@ default_sysctls = [
#
#label = true
# label_users indicates whether to enforce confined users in containers on
# SELinux systems. This option causes containers to maintain the current user
# and role field of the calling process. By default SELinux containers run with
# the user system_u, and the role system_r.
#label_users = false
# Logging driver for the container. Available options: k8s-file and journald.
#
#log_driver = "k8s-file"
log_driver = "k8s-file"
# Maximum size allowed for the container log file. Negative numbers indicate
# that no size limit is imposed. If positive, it must be >= 8192 to match or
@ -168,6 +206,13 @@ log_driver = "k8s-file"
#
#log_tag = ""
# List of mounts. Specified as
# "type=TYPE,source=<directory-on-host>,destination=<directory-in-container>,<options>", for example:
# "type=bind,source=/var/lib/foobar,destination=/var/lib/foobar,ro".
# If it is empty or commented out, no mounts will be added
#
#mounts = []
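# For example, reusing the bind mount shown above on every container
# (the host path is illustrative):
#
# mounts = ["type=bind,source=/var/lib/foobar,destination=/var/lib/foobar,ro"]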
# Default way to create a Network namespace for the container
# Options are:
# `private` Create private Network Namespace for the container.
@ -181,6 +226,10 @@ log_driver = "k8s-file"
#
#no_hosts = false
# Tune the host's OOM preferences for containers
# (accepts values from -1000 to 1000).
#oom_score_adj = 0
# Default way to create a PID namespace for the container
# Options are:
# `private` Create private PID Namespace for the container.
@ -199,6 +248,22 @@ log_driver = "k8s-file"
#
#prepare_volume_on_create = false
# Give extended privileges to all containers. A privileged container turns off
# the security features that isolate the container from the host. Dropped
# Capabilities, limited devices, read-only mount points, Apparmor/SELinux
# separation, and Seccomp filters are all disabled. Due to the disabled
# security features the privileged field should almost never be set as
# containers can easily break out of confinement.
#
# Containers running in a user namespace (e.g., rootless containers) cannot
# have more privileges than the user that launched them.
#
#privileged = false
# Run all containers with root file system mounted read-only
#
# read_only = false
# Path to the seccomp.json profile which is used as the default seccomp profile
# for the runtime.
#
@ -227,12 +292,6 @@ log_driver = "k8s-file"
#
#userns = "host"
# Number of UIDs to allocate for the automatic container creation.
# UIDs are allocated from the "container" UIDs listed in
# /etc/subuid & /etc/subgid
#
#userns_size = 65536
# Default way to create a UTS namespace for the container
# Options are:
# `private` Create private UTS Namespace for the container.
@ -247,6 +306,11 @@ log_driver = "k8s-file"
#
#volumes = []
#[engine.platform_to_oci_runtime]
#"wasi/wasm" = ["crun-wasm"]
#"wasi/wasm32" = ["crun-wasm"]
#"wasi/wasm64" = ["crun-wasm"]
[secrets]
#driver = "file"
@ -264,7 +328,6 @@ log_driver = "k8s-file"
# iptables rules and network interfaces might leak on the host. A reboot will fix this.
#
#network_backend = ""
network_backend = "cni"
# Path to directory where CNI plugin binaries are located.
#
@ -276,6 +339,23 @@ network_backend = "cni"
# "/opt/cni/bin", # "/opt/cni/bin",
#] #]
# List of directories that will be searched for netavark plugins.
#
#netavark_plugin_dirs = [
# "/usr/local/libexec/netavark",
# "/usr/libexec/netavark",
# "/usr/local/lib/netavark",
# "/usr/lib/netavark",
#]
# The firewall driver to be used by netavark.
# The default is empty which means netavark will pick one accordingly. Current supported
# drivers are "iptables", "nftables", "none" (no firewall rules will be created) and "firewalld" (firewalld is
# experimental at the moment and not recommended outside of testing).
#
#firewall_driver = ""
# The network name of the default network to attach pods to.
#
#default_network = "podman"
@ -287,6 +367,27 @@ network_backend = "cni"
#
#default_subnet = "10.88.0.0/16"
# DefaultSubnetPools is a list of subnets and size which are used to
# allocate subnets automatically for podman network create.
# It will iterate through the list and will pick the first free subnet
# with the given size. This is only used for ipv4 subnets, ipv6 subnets
# are always assigned randomly.
#
#default_subnet_pools = [
# {"base" = "10.89.0.0/16", "size" = 24},
# {"base" = "10.90.0.0/15", "size" = 24},
# {"base" = "10.92.0.0/14", "size" = 24},
# {"base" = "10.96.0.0/11", "size" = 24},
# {"base" = "10.128.0.0/9", "size" = 24},
#]
# Configure which rootless network program to use by default. Valid options are
# `slirp4netns` and `pasta` (default).
#
#default_rootless_network_cmd = "pasta"
# Path to the directory where network configuration files are located.
# For the CNI backend the default is "/etc/cni/net.d" as root
# and "$HOME/.config/cni/net.d" as rootless.
@ -295,16 +396,57 @@ network_backend = "cni"
#
#network_config_dir = "/etc/cni/net.d/"
# Port to use for dns forwarding daemon with netavark in rootful bridge
# mode and dns enabled.
# Using an alternate port might be useful if other dns services should
# run on the machine.
#
#dns_bind_port = 53
# A list of default pasta options that should be used running pasta.
# It accepts the pasta cli options, see pasta(1) for the full list of options.
#
#pasta_options = []
[engine]

# Index to the active service
#
#active_service = "production"
# List of compression algorithms. If set, makes sure that the requested compression variant
# for each platform is added to the manifest list, keeping the original instance intact in
# the same manifest list on every `manifest push`. Supported values are (`gzip`, `zstd` and `zstd:chunked`).
#
#add_compression = ["gzip", "zstd", "zstd:chunked"]
# Enforces using docker.io for completing short names in Podman's compatibility
# REST API. Note that this will ignore unqualified-search-registries and
# short-name aliases defined in containers-registries.conf(5).
#compat_api_enforce_docker_hub = true
# Specify one or more external providers for the compose command. The first
# found provider is used for execution. Can be an absolute or relative path
# or a (file) name.
#compose_providers=[]
# Emit logs on each invocation of the compose command indicating that an
# external compose provider is being executed.
#compose_warning_logs = true
# The compression format to use when pushing an image.
# Valid options are: `gzip`, `zstd` and `zstd:chunked`.
# This field is ignored when pushing images to the docker-daemon and
# docker-archive formats. It is also ignored when the manifest format is set
# to v2s2.
#
#compression_format = "gzip"
# The compression level to use when pushing an image.
# Valid options depend on the compression format used.
# For gzip, valid options are 1-9, with a default of 5.
# For zstd, valid options are 1-20, with a default of 3.
#
#compression_level = 5
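# For example, an illustrative combination trading push speed for smaller
# layers (not the defaults):
#
# compression_format = "zstd"
# compression_level = 19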
# Cgroup management implementation used for the runtime.
# Valid options are "systemd" or "cgroupfs"
@ -334,11 +476,20 @@ network_backend = "cni"
# short-name aliases defined in containers-registries.conf(5).
#compat_api_enforce_docker_hub = true
# The database backend of Podman. Supported values are "" (default), "boltdb"
# and "sqlite". An empty value means it will check whenever a boltdb already
# exists and use it when it does, otherwise it will use sqlite as default
# (e.g. new installs). This allows for backwards compatibility with older versions.
# Please run `podman-system-reset` prior to changing the database
# backend of an existing deployment, to make sure Podman can operate correctly.
#
#database_backend = ""
# Specify the keys sequence used to detach a container.
# Format is a single character [a-Z] or a comma separated sequence of
# `ctrl-<value>`, where `<value>` is one of:
# `a-z`, `@`, `^`, `[`, `\`, `]`, `^` or `_`
# Specifying "" disables this feature.
#
#detach_keys = "ctrl-p,ctrl-q"
# Determines whether engine will reserve ports on the host when they are
@ -360,11 +511,32 @@ network_backend = "cni"
# Define where event logs will be stored, when events_logger is "file".
#events_logfile_path=""
# Sets the maximum size for events_logfile_path.
# The size can be b (bytes), k (kilobytes), m (megabytes), or g (gigabytes).
# The format for the size is `<number><unit>`, e.g., `1b` or `3g`.
# If no unit is included then the size will be read in bytes.
# When the limit is exceeded, the logfile will be rotated and the old one will be deleted.
# If the maximum size is set to 0, then no limit will be applied,
# and the logfile will not be rotated.
#events_logfile_max_size = "1m"
# Selects which logging mechanism to use for container engine events.
# Valid values are `journald`, `file` and `none`.
#
#events_logger = "journald"
events_logger = "file"
# Creates a more verbose container-create event which includes a JSON payload
# with detailed information about the container.
#events_container_create_inspect_data = false
# Whether Podman should log healthcheck events.
# With many healthchecks running on a short interval, Podman will spam the event
# log as it generates an event for each single healthcheck run. Because
# this event is optional and only useful to external consumers that may want
# to know when a healthcheck is run or failed, users can turn it off by
# setting it to false. The default is true.
#
#healthcheck_events = true
# A list of directories which are used to search for helper binaries.
#
@ -381,6 +553,12 @@ events_logger = "file"
# "/usr/share/containers/oci/hooks.d", # "/usr/share/containers/oci/hooks.d",
#] #]
# Directories to scan for CDI Spec files.
#
#cdi_spec_dirs = [
# "/etc/cdi",
#]
# Manifest Type (oci, v2s2, or v2s1) to use when pulling, pushing, building
# container images. By default images pulled and pushed match the format of the
# source image. Building/committing defaults to OCI.
@ -396,38 +574,46 @@ events_logger = "file"
#
#image_parallel_copies = 0
# Tells container engines how to handle the built-in image volumes.
# * anonymous: An anonymous named volume will be created and mounted
# into the container.
# * tmpfs: The volume is mounted onto the container as a tmpfs,
# which allows users to create content that disappears when
# the container is stopped.
# * ignore: All volumes are just ignored and no action is taken.
#
#image_volume_mode = ""
# Default command to run the infra container
#
#infra_command = "/pause"
# Infra (pause) container image name for pod infra containers. When running a
# pod, we start a `pause` process in a container to hold open the namespaces
# associated with the pod. This container does nothing other than sleep,
# reserving the pod's resources for the lifetime of the pod. By default container
# engines run a built-in container using the pause executable. If you want to override,
# specify an image to pull.
#
#infra_image = ""
# Default Kubernetes kind/specification of the kubernetes yaml generated with the `podman kube generate` command.
# The possible options are `pod` and `deployment`.
#kube_generate_type = "pod"
# Specify the locking mechanism to use; valid values are "shm" and "file".
# Change the default only if you are sure of what you are doing, in general
# "file" is useful only on platforms where cgo is not available for using the
# faster "shm" lock type. You may need to run "podman system renumber" after
# you change the lock type.
#
#lock_type = "shm"
# Indicates if Podman is running inside a VM via Podman Machine.
# Podman uses this value to do extra setup around networking from the
# container inside the VM to the host.
#
#machine_enabled = false
# MultiImageArchive - if true, the container engine allows for storing archives
# (e.g., of the docker-archive transport) with multiple images. By default,
# Podman creates single-image archives.
#
#multi_image_archive = false
# Default engine namespace
# If engine is joined to a namespace, it will see only containers and pods
@ -443,20 +629,41 @@ events_logger = "file"
#network_cmd_path = ""
# Default options to pass to the slirp4netns binary.
# Valid option values are:
#
# - allow_host_loopback=true|false: Allow the slirp4netns to reach the host loopback IP (`10.0.2.2`).
# Default is false.
# - mtu=MTU: Specify the MTU to use for this network. (Default is `65520`).
# - cidr=CIDR: Specify ip range to use for this network. (Default is `10.0.2.0/24`).
# - enable_ipv6=true|false: Enable IPv6. Default is true. (Required for `outbound_addr6`).
# - outbound_addr=INTERFACE: Specify the outbound interface slirp should bind to (ipv4 traffic only).
# - outbound_addr=IPv4: Specify the outbound ipv4 address slirp should bind to.
# - outbound_addr6=INTERFACE: Specify the outbound interface slirp should bind to (ipv6 traffic only).
# - outbound_addr6=IPv6: Specify the outbound ipv6 address slirp should bind to.
# - port_handler=rootlesskit: Use rootlesskit for port forwarding. Default.
# Note: Rootlesskit changes the source IP address of incoming packets to an IP address in the container
# network namespace, usually `10.0.2.100`. If your application requires the real source IP address,
# e.g. web server logs, use the slirp4netns port handler. The rootlesskit port handler is also used for
# rootless containers when connected to user-defined networks.
# - port_handler=slirp4netns: Use the slirp4netns port forwarding, it is slower than rootlesskit but
# preserves the correct source IP address. This port handler cannot be used for user-defined networks.
#
#network_cmd_options = []
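# An illustrative override using the options documented above
# (the values are examples, not defaults):
#
# network_cmd_options = ["allow_host_loopback=true", "mtu=1500"]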
# Whether to use chroot instead of pivot_root in the runtime
#
#no_pivot_root = false
# Number of locks available for containers, pods, and volumes. Each container,
# pod, and volume consumes 1 lock for as long as it exists.
# If this is changed, a lock renumber must be performed (e.g. with the
# 'podman system renumber' command).
#
#num_locks = 2048
# Set the exit policy of the pod when the last container exits.
#pod_exit_policy = "continue"
# Whether to pull a new image before running a container
#
#pull_policy = "missing"
@ -467,15 +674,25 @@ events_logger = "file"
#
#remote = false
# Number of times to retry pulling/pushing images in case of failure
#
#retry = 3
# Delay between retries in case pulling/pushing image fails.
# If set, container engines will retry at the set interval,
# otherwise they delay 2 seconds and then exponentially back off.
#
#retry_delay = "2s"
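# For example, an illustrative policy of five attempts with a fixed
# ten-second pause between them:
#
# retry = 5
# retry_delay = "10s"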
# Default OCI runtime
#
#runtime = "crun"
runtime = "crun"
# List of the OCI runtimes that support --format=json. When json is supported
# engine will use it for reporting nicer errors.
#
#runtime_supports_json = ["crun", "runc", "kata", "runsc", "youki", "krun"]
# List of the OCI runtimes that support running containers with KVM Separation.
#
@ -506,16 +723,21 @@ runtime = "runc"
#
#stop_timeout = 10
# Number of seconds to wait before the exit command is given to the API process.
# This mimics Docker's exec cleanup behaviour, where the default is 5 minutes (value is in seconds).
#
#exit_command_delay = 300
# map of service destinations
#
# [engine.service_destinations]
# [engine.service_destinations.production]
# URI to access the Podman service
# Examples:
# rootless "unix:///run/user/$UID/podman/podman.sock" (Default)
# rootful "unix:///run/podman/podman.sock" (Default)
# remote rootless ssh://engineering.lab.company.com/run/user/1000/podman/podman.sock
# remote rootful ssh://root@10.10.1.136:22/run/podman/podman.sock
#
# uri = "ssh://user@production.example.com/run/user/1001/podman/podman.sock"
# Path to file containing ssh identity key
@ -532,6 +754,12 @@ runtime = "runc"
# #
#volume_path = "/var/lib/containers/storage/volumes" #volume_path = "/var/lib/containers/storage/volumes"
# Default timeout (in seconds) for volume plugin operations.
# Plugins are external programs accessed via a REST API; this sets a timeout
# for requests to that API.
# A value of 0 is treated as no timeout.
#volume_plugin_timeout = 5
# Paths to look for a valid OCI runtime (crun, runc, kata, runsc, krun, etc) # Paths to look for a valid OCI runtime (crun, runc, kata, runsc, krun, etc)
[engine.runtimes] [engine.runtimes]
#crun = [ #crun = [
@ -544,6 +772,15 @@ runtime = "runc"
# "/run/current-system/sw/bin/crun", # "/run/current-system/sw/bin/crun",
#] #]
#crun-vm = [
# "/usr/bin/crun-vm",
# "/usr/local/bin/crun-vm",
# "/usr/local/sbin/crun-vm",
# "/sbin/crun-vm",
# "/bin/crun-vm",
# "/run/current-system/sw/bin/crun-vm",
#]
#kata = [ #kata = [
# "/usr/bin/kata-runtime", # "/usr/bin/kata-runtime",
# "/usr/sbin/kata-runtime", # "/usr/sbin/kata-runtime",
@ -575,6 +812,13 @@ runtime = "runc"
# "/run/current-system/sw/bin/runsc", # "/run/current-system/sw/bin/runsc",
#] #]
#youki = [
# "/usr/local/bin/youki",
# "/usr/bin/youki",
# "/bin/youki",
# "/run/current-system/sw/bin/youki",
#]
#krun = [ #krun = [
# "/usr/bin/krun", # "/usr/bin/krun",
# "/usr/local/bin/krun", # "/usr/local/bin/krun",
@ -592,9 +836,15 @@ runtime = "runc"
# #
#disk_size=10 #disk_size=10
# The image used when creating a podman-machine VM. # Default Image used when creating a new VM using `podman machine init`.
# Can be specified as registry with a bootable OCI artifact, download URL, or a local path.
# Registry target must be in the form of `docker://registry/repo/image:version`.
# Container engines translate URIs $OS and $ARCH to the native OS and ARCH.
# URI "https://example.com/$OS/$ARCH/foobar.ami" would become
# "https://example.com/linux/amd64/foobar.ami" on a Linux AMD machine.
# If unspecified, the default Podman machine image will be used.
# #
#image = "testing" #image = ""
# Memory in MB a machine is created with. # Memory in MB a machine is created with.
# #
@ -605,8 +855,46 @@ runtime = "runc"
# #
#user = "core" #user = "core"
# Host directories to be mounted as volumes into the VM by default.
# Environment variables like $HOME as well as complete paths are supported for
# the source and destination. An optional third field `:ro` can be used to
# tell the container engines to mount the volume readonly.
#
#volumes = [
# "$HOME:$HOME",
#]
# Virtualization provider used to run Podman machine.
# If it is empty or commented out, the default provider will be used.
#
#provider = ""
# Rosetta supports running x86_64 Linux binaries on a Podman machine on Apple silicon.
# The default value is `true`. Supported on AppleHV(arm64) machines only.
#
#rosetta=true
# The [machine] table MUST be the last entry in this file. # The [machine] table MUST be the last entry in this file.
# (Unless another table is added) # (Unless another table is added)
# TOML does not provide a way to end a table other than a further table being # TOML does not provide a way to end a table other than a further table being
# defined, so every key hereafter will be part of [machine] and not the # defined, so every key hereafter will be part of [machine] and not the
# main config. # main config.
[farms]
#
# the default farm to use when farming out builds
# default = ""
#
# map of existing farms
#[farms.list]
[podmansh]
# Shell to spawn in container. Default: /bin/sh.
#shell = "/bin/sh"
#
# Name of the container the podmansh user should join.
#container = "podmansh"
#
# Default timeout in seconds for podmansh logins.
# Favored over the deprecated "podmansh_timeout" field.
#timeout = 30

@ -9,11 +9,11 @@ Container engines like Podman & Buildah read containers.conf file, if it exists
and modify the defaults for running containers on the host. containers.conf uses and modify the defaults for running containers on the host. containers.conf uses
a TOML format that can be easily modified and versioned. a TOML format that can be easily modified and versioned.
Container engines read the /usr/share/containers/containers.conf and Container engines read the __/usr/share/containers/containers.conf__,
/etc/containers/containers.conf, and /etc/containers/containers.conf.d/*.conf files __/etc/containers/containers.conf__, and __/etc/containers/containers.conf.d/\*.conf__
if they exist. When running in rootless mode, they also read for global configuration that affects all users.
$HOME/.config/containers/containers.conf and For user specific configuration it reads __\$XDG_CONFIG_HOME/containers/containers.conf__ and
$HOME/.config/containers/containers.conf.d/*.conf files. __\$XDG_CONFIG_HOME/containers/containers.conf.d/\*.conf__ files. When `$XDG_CONFIG_HOME` is not set it falls back to using `$HOME/.config` instead.
Fields specified in containers conf override the default options, as well as Fields specified in containers conf override the default options, as well as
options in previously read containers.conf files. options in previously read containers.conf files.
@ -22,13 +22,47 @@ Config files in the `.d` directories, are added in alpha numeric sorted order an
Not all options are supported in all container engines. Not all options are supported in all container engines.
Note container engines also use other configuration files for configuring the environment. Note, container engines also use other configuration files for configuring the environment.
* `storage.conf` for configuration of container and images storage. * `storage.conf` for configuration of container and images storage.
* `registries.conf` for definition of container registires to search while pulling. * `registries.conf` for definition of container registries to search while pulling
container images. container images.
* `policy.conf` for controlling which images can be pulled to the system. * `policy.conf` for controlling which images can be pulled to the system.
## ENVIRONMENT VARIABLES
If the `CONTAINERS_CONF` environment variable is set, all system and user
config files are ignored and only the specified config file will be loaded.
If the `CONTAINERS_CONF_OVERRIDE` path environment variable is set, the config
file will be loaded last even when `CONTAINERS_CONF` is set.
The values of both environment variables may be absolute or relative paths, for
instance, `CONTAINERS_CONF=/tmp/my_containers.conf`.
## MODULES
A module is a containers.conf file located directly in or a sub-directory of the following three directories:
- __\$XDG_CONFIG_HOME/containers/containers.conf.modules__ or __\$HOME/.config/containers/containers.conf.modules__ if `$XDG_CONFIG_HOME` is not set.
- __/etc/containers/containers.conf.modules__
- __/usr/share/containers/containers.conf.modules__
Files in those locations are not loaded by default but only on-demand. They are loaded after all system and user configuration files but before `CONTAINERS_CONF_OVERRIDE` hence allowing for overriding system and user configs.
Modules are currently supported by podman(1). The `podman --module` flag allows for loading a module and can be specified multiple times. If the specified value is an absolute path, the config file will be loaded directly. Relative paths are resolved relative to the three module directories mentioned above and in the specified order such that modules in `$XDG_CONFIG_HOME/$HOME` allow for overriding those in `/etc` and `/usr/share`.
## APPENDING TO STRING ARRAYS
The default behavior during the loading sequence of multiple containers.conf files is to override previous data. To change the behavior from overriding to appending, you can set the `append` attribute as follows: `array=["item-1", "item=2", ..., {append=true}]`. Setting the append attribute instructs to append to this specific string array for the current and also subsequent loading steps. To change back to overriding, set `{append=false}`.
Consider the following example:
```
modules1.conf: env=["1=true"]
modules2.conf: env=["2=true"]
modules3.conf: env=["3=true", {append=true}]
modules4.conf: env=["4=true"]
```
After loading the files in the given order, the final contents are `env=["2=true", "3=true", "4=true"]`. If modules4.conf would set `{append=false}`, the final contents would be `env=["4=true"]`.
# FORMAT # FORMAT
The [TOML format][toml] is used as the encoding of the configuration file. The [TOML format][toml] is used as the encoding of the configuration file.
Every option is nested under its table. No bare options are used. The format of Every option is nested under its table. No bare options are used. The format of
@ -50,6 +84,7 @@ TOML can be simplified to:
The containers table contains settings to configure and manage the OCI runtime. The containers table contains settings to configure and manage the OCI runtime.
**annotations** = [] **annotations** = []
List of annotations. Specified as "key=value" pairs to be added to all containers. List of annotations. Specified as "key=value" pairs to be added to all containers.
Example: "run.oci.keep_original_groups=1" Example: "run.oci.keep_original_groups=1"
@ -59,6 +94,19 @@ Example: "run.oci.keep_original_groups=1"
Used to change the name of the default AppArmor profile of container engines. Used to change the name of the default AppArmor profile of container engines.
The default profile name is "container-default". The default profile name is "container-default".
**base_hosts_file**=""
The hosts entries from the base hosts file are added to the container's hosts
file. This must be either an absolute path, or one of the special values
"image" (use the hosts file from the container image) or "none" (no base
hosts file is used). The default is "", which uses /etc/hosts.
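For example, to use the hosts file shipped in the container image instead of the host's /etc/hosts, a containers.conf could set:
```
base_hosts_file = "image"
```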
**cgroup_conf**=[]
List of cgroup_conf entries specifying a list of cgroup files to write to and
their values. For example `memory.high=1073741824` sets the
memory.high limit to 1GB.
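As an illustrative snippet, the limit from the example above expressed in containers.conf syntax:
```
cgroup_conf = [
  "memory.high=1073741824",
]
```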
**cgroups**="enabled" **cgroups**="enabled"
Determines whether the container will create CGroups. Determines whether the container will create CGroups.
@ -69,7 +117,7 @@ Options are:
**cgroupns**="private" **cgroupns**="private"
Default way to to create a cgroup namespace for the container. Default way to create a cgroup namespace for the container.
Options are: Options are:
`private` Create private Cgroup Namespace for the container. `private` Create private Cgroup Namespace for the container.
`host` Share host Cgroup Namespace with the container. `host` Share host Cgroup Namespace with the container.
@ -81,15 +129,13 @@ List of default capabilities for containers.
The default list is: The default list is:
``` ```
default_capabilities = [ default_capabilities = [
"AUDIT_WRITE",
"CHOWN", "CHOWN",
"DAC_OVERRIDE", "DAC_OVERRIDE",
"FOWNER", "FOWNER",
"FSETID", "FSETID",
"KILL", "KILL",
"MKNOD",
"NET_BIND_SERVICE", "NET_BIND_SERVICE",
"NET_RAW", "SETFCAP",
"SETGID", "SETGID",
"SETPCAP", "SETPCAP",
"SETUID", "SETUID",
@ -97,6 +143,10 @@ default_capabilities = [
] ]
``` ```
Note that, by default, container engines using containers.conf run with fewer
capabilities than Docker. Docker additionally runs with "AUDIT_WRITE", "MKNOD" and "NET_RAW". If you need to add one of these capabilities for a
particular container, you can use the --cap-add option or edit your system's containers.conf.
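For instance, a drop-in file (hypothetical name __/etc/containers/containers.conf.d/50-docker-caps.conf__) could restore the Docker-compatible set by appending to the default list, using the append semantics described above:
```
[containers]
default_capabilities = [
  "AUDIT_WRITE",
  "MKNOD",
  "NET_RAW",
  {append=true},
]
```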
**default_sysctls**=[] **default_sysctls**=[]
A list of sysctls to be set in containers by default, A list of sysctls to be set in containers by default,
@ -134,7 +184,7 @@ A list of dns servers to override the DNS configuration passed to the
container. The special value “none” can be specified to disable creation of container. The special value “none” can be specified to disable creation of
/etc/resolv.conf in the container. /etc/resolv.conf in the container.
**env**=["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "TERM=xterm"] **env**=["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
Environment variable list for the container process, used for passing Environment variable list for the container process, used for passing
environment variables to the container. environment variables to the container.
@ -143,6 +193,16 @@ environment variables to the container.
Pass all host environment variables into the container. Pass all host environment variables into the container.
**host_containers_internal_ip**=""
Set the IP for the host.containers.internal entry in the container's /etc/hosts
file. This can be set to "none" to disable adding this entry. By default it
will automatically choose the host IP.
NOTE: When using podman machine this entry will never be added to the container's
hosts file; instead, the gvproxy DNS resolver resolves this hostname. Therefore
it is not possible to disable the entry in this case.
**http_proxy**=true **http_proxy**=true
Default proxy environment variables will be passed into the container. Default proxy environment variables will be passed into the container.
@ -158,16 +218,29 @@ Run an init inside the container that forwards signals and reaps processes.
**init_path**="/usr/libexec/podman/catatonit" **init_path**="/usr/libexec/podman/catatonit"
If this option is not set, catatonit is searched for in the directories listed under
the **helper_binaries_dir** option. It is recommended to just install catatonit
there instead of configuring this option here.
Path to the container-init binary, which forwards signals and reaps processes Path to the container-init binary, which forwards signals and reaps processes
within containers. Note that the container-init binary will only be used when within containers. Note that the container-init binary will only be used when
the `--init` for podman-create and podman-run is set. the `--init` for podman-create and podman-run is set.
**ipcns**="private" **interface_name**=""
Default way to to create a IPC namespace for the container. Default way to set interface names inside containers. Defaults to legacy pattern
of ethX, where X is an integer, when left undefined.
Options are:
`device` Uses the network_interface name from the network config as interface name. Falls back to the ethX pattern if the network_interface is not set.
**ipcns**="shareable"
Default way to create a IPC namespace for the container.
Options are: Options are:
`private` Create private IPC Namespace for the container.
`host` Share host IPC Namespace with the container. `host` Share host IPC Namespace with the container.
`none` Create shareable IPC Namespace for the container without a private /dev/shm.
`private` Create private IPC Namespace for the container, other containers are not allowed to share it.
`shareable` Create shareable IPC Namespace for the container.
**keyring**=true **keyring**=true
@ -178,9 +251,16 @@ the container.
Indicates whether the container engine uses MAC(SELinux) container separation via labeling. This option is ignored on disabled systems. Indicates whether the container engine uses MAC(SELinux) container separation via labeling. This option is ignored on disabled systems.
**log_driver**="k8s-file" **label_users**=false
label_users indicates whether to enforce confined users in containers on
SELinux systems. This option causes containers to maintain the current user
and role field of the calling process. By default SELinux containers run with
the user system_u, and the role system_r.
Logging driver for the container. Available options: `k8s-file` and `journald`. **log_driver**=""
Logging driver for the container. Currently available options are k8s-file, journald, none and passthrough, with json-file aliased to k8s-file for scripting compatibility. The journald driver is used by default if the systemd journal is readable and writable. Otherwise, the k8s-file driver is used.
**log_size_max**=-1 **log_size_max**=-1
@ -193,9 +273,16 @@ limit is never exceeded.
Default format tag for container log messages. This is useful for creating a specific tag for container log messages. Container log messages default to using the truncated container ID as a tag. Default format tag for container log messages. This is useful for creating a specific tag for container log messages. Container log messages default to using the truncated container ID as a tag.
**mounts**=[]
List of mounts.
Specified as "type=TYPE,source=<directory-on-host>,destination=<directory-in-container>,<options>"
Example: [ "type=bind,source=/var/lib/foobar,destination=/var/lib/foobar,ro", ]
**netns**="private" **netns**="private"
Default way to to create a NET namespace for the container. Default way to create a NET namespace for the container.
Options are: Options are:
`private` Create private NET Namespace for the container. `private` Create private NET Namespace for the container.
`host` Share host NET Namespace with the container. `host` Share host NET Namespace with the container.
@ -206,9 +293,13 @@ Options are:
Create /etc/hosts for the container. By default, container engines manage Create /etc/hosts for the container. By default, container engines manage
/etc/hosts, automatically adding the container's own IP address. /etc/hosts, automatically adding the container's own IP address.
**oom_score_adj**=0
Tune the host's OOM preferences for containers (accepts values from -1000 to 1000).
**pidns**="private" **pidns**="private"
Default way to to create a PID namespace for the container. Default way to create a PID namespace for the container.
Options are: Options are:
`private` Create private PID Namespace for the container. `private` Create private PID Namespace for the container.
`host` Share host PID Namespace with the container. `host` Share host PID Namespace with the container.
@ -222,6 +313,16 @@ is imposed.
Copy the content from the underlying image into the newly created volume when the container is created instead of when it is started. If `false`, the container engine will not copy the content until the container is started. Setting it to `true` may have negative performance implications. Copy the content from the underlying image into the newly created volume when the container is created instead of when it is started. If `false`, the container engine will not copy the content until the container is started. Setting it to `true` may have negative performance implications.
**privileged**=false
Give extended privileges to all containers. A privileged container turns off the security features that isolate the container from the host. Dropped Capabilities, limited devices, read-only mount points, Apparmor/SELinux separation, and Seccomp filters are all disabled. Due to the disabled security features, the privileged field should almost never be set as containers can easily break out of confinement.
Containers running in a user namespace (e.g., rootless containers) cannot have more privileges than the user that launched them.
**read_only**=true|false
Run all containers with root file system mounted read-only. Set to false by default.
**seccomp_profile**="/usr/share/containers/seccomp.json" **seccomp_profile**="/usr/share/containers/seccomp.json"
Path to the seccomp.json profile which is used as the default seccomp profile Path to the seccomp.json profile which is used as the default seccomp profile
@ -251,23 +352,24 @@ Sets umask inside the container.
**userns**="host" **userns**="host"
Default way to to create a USER namespace for the container. Default way to create a USER namespace for the container.
Options are: Options are:
`private` Create private USER Namespace for the container. `private` Create private USER Namespace for the container.
`host` Share host USER Namespace with the container. `host` Share host USER Namespace with the container.
**userns_size**=65536
Number of UIDs to allocate for the automatic container creation. UIDs are
allocated from the “container” UIDs listed in /etc/subuid & /etc/subgid.
**utsns**="private" **utsns**="private"
Default way to to create a UTS namespace for the container. Default way to create a UTS namespace for the container.
Options are: Options are:
`private` Create private UTS Namespace for the container. `private` Create private UTS Namespace for the container.
`host` Share host UTS Namespace with the container. `host` Share host UTS Namespace with the container.
**volumes**=[]
List of volumes.
Specified as "directory-on-host:directory-in-container:options".
Example: "/db:/var/lib/db:ro".
## NETWORK TABLE ## NETWORK TABLE
The `network` table contains settings pertaining to the management of CNI The `network` table contains settings pertaining to the management of CNI
@ -298,6 +400,20 @@ cni_plugin_dirs = [
] ]
``` ```
**netavark_plugin_dirs**=[]
List of directories that will be searched for netavark plugins.
The default list is:
```
netavark_plugin_dirs = [
"/usr/local/libexec/netavark",
"/usr/libexec/netavark",
"/usr/local/lib/netavark",
"/usr/lib/netavark",
]
```
**default_network**="podman" **default_network**="podman"
The network name of the default network to attach pods to. The network name of the default network to attach pods to.
@ -307,20 +423,56 @@ The network name of the default network to attach pods to.
The subnet to use for the default network (named above in **default_network**). The subnet to use for the default network (named above in **default_network**).
If the default network does not exist, it will be automatically created the first time a tool is run using this subnet. If the default network does not exist, it will be automatically created the first time a tool is run using this subnet.
**default_subnet_pools**=[]
DefaultSubnetPools is a list of subnets and sizes which are used to
allocate subnets automatically for podman network create.
It will iterate through the list and pick the first free subnet
with the given size. This is only used for IPv4 subnets; IPv6 subnets
are always assigned randomly.
The default list is (10.89.0.0-10.255.255.0/24):
```
default_subnet_pools = [
{"base" = "10.89.0.0/16", "size" = 24},
{"base" = "10.90.0.0/15", "size" = 24},
{"base" = "10.92.0.0/14", "size" = 24},
{"base" = "10.96.0.0/11", "size" = 24},
{"base" = "10.128.0.0/9", "size" = 24},
]
```
**default_rootless_network_cmd**="pasta"
Configure which rootless network program to use by default. Valid options are
`slirp4netns` and `pasta` (default).
**network_config_dir**="/etc/cni/net.d/" **network_config_dir**="/etc/cni/net.d/"
Path to the directory where network configuration files are located. Path to the directory where network configuration files are located.
For the CNI backend the default is "/etc/cni/net.d" as root For the CNI backend the default is __/etc/cni/net.d__ as root
and "$HOME/.config/cni/net.d" as rootless. and __$HOME/.config/cni/net.d__ as rootless.
For the netavark backend "/etc/containers/networks" is used as root For the netavark backend "/etc/containers/networks" is used as root
and "$graphroot/networks" as rootless. and "$graphroot/networks" as rootless.
**volumes**=[] **firewall_driver**=""
List of volumes. The firewall driver to be used by netavark.
Specified as "directory-on-host:directory-in-container:options". The default is empty which means netavark will pick one accordingly. Current supported
drivers are "iptables", "nftables", "none" (no firewall rules will be created) and "firewalld" (firewalld is
experimental at the moment and not recommended outside of testing).
Example: "/db:/var/lib/db:ro". **dns_bind_port**=53
Port to use for the DNS forwarding daemon with netavark in rootful bridge
mode with DNS enabled.
Using an alternate port might be useful if other DNS services should
run on the machine.
**pasta_options** = []
A list of default pasta options that should be used when running pasta.
It accepts the pasta cli options, see pasta(1) for the full list of options.
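A minimal sketch; the options shown are taken from pasta(1) and are only an assumption about what a site might want, not a recommended default:
```
pasta_options = ["-t", "auto", "-u", "auto"]
```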
## ENGINE TABLE ## ENGINE TABLE
The `engine` table contains configuration options used to set up container engines such as Podman and Buildah. The `engine` table contains configuration options used to set up container engines such as Podman and Buildah.
@ -329,11 +481,39 @@ The `engine` table contains configuration options used to set up container engin
Name of destination for accessing the Podman service. See SERVICE DESTINATION TABLE below. Name of destination for accessing the Podman service. See SERVICE DESTINATION TABLE below.
**add_compression**=[]
List of compression algorithms. If set, ensures that the requested compression variant
for each platform is added to the manifest list, keeping the original instance intact in
the same manifest list, on every `manifest push`. Supported values are `gzip`, `zstd` and `zstd:chunked`.
Note: This is different from `compression_format` which allows users to select a default
compression format for `push` and `manifest push`, while `add_compression` is limited to
`manifest push` and allows users to append new instances to manifest list with specified compression
algorithms in `add_compression` for each platform.
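For example, to add zstd-compressed variants next to the original instances on every `manifest push` (illustrative values from the supported list):
```
add_compression = ["zstd", "zstd:chunked"]
```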
**cgroup_manager**="systemd" **cgroup_manager**="systemd"
The cgroup management implementation used for the runtime. Supports `cgroupfs` The cgroup management implementation used for the runtime. Supports `cgroupfs`
and `systemd`. and `systemd`.
**compat_api_enforce_docker_hub**=true
Enforce using docker.io for completing short names in Podman's compatibility
REST API. Note that this will ignore unqualified-search-registries and
short-name aliases defined in containers-registries.conf(5).
**compose_providers**=[]
Specify one or more external providers for the compose command. The first
found provider is used for execution. Can be an absolute or relative path, or
a (file) name.
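An illustrative configuration; the provider names below are assumptions and must exist on the host for the compose command to work:
```
compose_providers = ["podman-compose", "docker-compose"]
```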
**compose_warning_logs**=true
Emit logs on each invocation of the compose command indicating that an external
compose provider is being executed.
**conmon_env_vars**=[] **conmon_env_vars**=[]
Environment variables to pass into Conmon. Environment variables to pass into Conmon.
@ -358,6 +538,15 @@ conmon_path=[
] ]
``` ```
**database_backend**=""
The database backend of Podman. Supported values are "" (default), "boltdb"
and "sqlite". An empty value means it will check whenever a boltdb already
exists and use it when it does, otherwise it will use sqlite as default
(e.g. new installs). This allows for backwards compatibility with older versions.
Please run `podman-system-reset` prior to changing the database
backend of an existing deployment, to make sure Podman can operate correctly.
**detach_keys**="ctrl-p,ctrl-q" **detach_keys**="ctrl-p,ctrl-q"
Keys sequence used for detaching a container. Keys sequence used for detaching a container.
@ -365,6 +554,7 @@ Specify the keys sequence used to detach a container.
Format is a single character `[a-Z]` or a comma separated sequence of Format is a single character `[a-Z]` or a comma separated sequence of
`ctrl-<value>`, where `<value>` is one of: `ctrl-<value>`, where `<value>` is one of:
`a-z`, `@`, `^`, `[`, `\`, `]`, `^` or `_` `a-z`, `@`, `^`, `[`, `\`, `]`, `^` or `_`
Specifying "" disables this feature.
**enable_port_reservation**=true **enable_port_reservation**=true
@ -385,22 +575,68 @@ if you want to set environment variables for the container.
Define where event logs will be stored, when events_logger is "file". Define where event logs will be stored, when events_logger is "file".
**events_logfile_max_size**="1m"
Sets the maximum size for events_logfile_path.
The unit can be b (bytes), k (kilobytes), m (megabytes) or g (gigabytes).
The format for the size is `<number><unit>`, e.g., `1b` or `3g`.
If no unit is included then the size will be in bytes.
When the limit is exceeded, the logfile will be rotated and the old one will be deleted.
If the maximum size is set to 0, then no limit will be applied,
and the logfile will not be rotated.
**events_logger**="journald" **events_logger**="journald"
Default method to use when logging events. The default method to use when logging events.
Valid values: `file`, `journald`, and `none`.
The default method is different based on the platform that
Podman is being run upon. To determine the current value,
use this command:
`podman info --format {{.Host.EventLogger}}`
Valid values are: `file`, `journald`, and `none`.
**events_container_create_inspect_data**=true|false
Creates a more verbose container-create event which includes a JSON payload
with detailed information about the container. Set to false by default.
**healthcheck_events**=true|false
Whether Podman should log healthcheck events.
With many healthchecks running at short intervals, Podman will flood the event
log, as it generates an event for each individual healthcheck run. Because
this event is optional and only useful to external consumers that may want
to know when a healthcheck was run or failed, it can be turned off by
setting this field to false.
Default is true.
**helper_binaries_dir**=["/usr/libexec/podman", ...] **helper_binaries_dir**=["/usr/libexec/podman", ...]
A list of directories which are used to search for helper binaries. A list of directories which are used to search for helper binaries.
The following binaries are searched in these directories:
- aardvark-dns
- catatonit
- netavark
- pasta
- slirp4netns
Podman machine uses it for these binaries:
- gvproxy
- qemu
- vfkit
The default paths on Linux are: The default paths on Linux are:
- `/usr/local/libexec/podman` - `/usr/local/libexec/podman`
- `/usr/local/lib/podman` - `/usr/local/lib/podman`
- `/usr/libexec/podman` - `/usr/libexec/podman`
- `/usr/lib/podman` - `/usr/lib/podman`
The default paths on macOS are: The default paths on macOS are:
- `/usr/local/opt/podman/libexec` - `/usr/local/opt/podman/libexec`
- `/opt/homebrew/bin` - `/opt/homebrew/bin`
- `/opt/homebrew/opt/podman/libexec` - `/opt/homebrew/opt/podman/libexec`
@ -411,12 +647,17 @@ The default paths on macOS are:
- `/usr/lib/podman` - `/usr/lib/podman`
The default path on Windows is: The default path on Windows is:
- `C:\Program Files\RedHat\Podman` - `C:\Program Files\RedHat\Podman`
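A minimal override sketch; the first directory is hypothetical, the second comes from the default list above:
```
helper_binaries_dir = [
  "/opt/podman/helpers",
  "/usr/libexec/podman",
]
```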
**hooks_dir**=["/etc/containers/oci/hooks.d", ...] **hooks_dir**=["/etc/containers/oci/hooks.d", ...]
Path to the OCI hooks directories for automatically executed hooks. Path to the OCI hooks directories for automatically executed hooks.
**cdi_spec_dirs**=["/etc/cdi", ...]
Directories to scan for CDI Spec files.
**image_default_format**="oci"|"v2s2"|"v2s1" **image_default_format**="oci"|"v2s2"|"v2s1"
Manifest Type (oci, v2s2, or v2s1) to use when pulling, pushing, building Manifest Type (oci, v2s2, or v2s1) to use when pulling, pushing, building
@ -433,22 +674,34 @@ Default transport method for pulling and pushing images.
Maximum number of image layers to be copied (pulled/pushed) simultaneously. Maximum number of image layers to be copied (pulled/pushed) simultaneously.
Not setting this field will fall back to containers/image defaults. (6) Not setting this field will fall back to containers/image defaults. (6)
**image_volume_mode**="bind"
Tells container engines how to handle the built-in image volumes.
* bind: An anonymous named volume will be created and mounted into the container.
* tmpfs: The volume is mounted onto the container as a tmpfs, which allows the users to create content that disappears when the container is stopped.
* ignore: All volumes are just ignored and no action is taken.
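For example, to make image volumes ephemeral tmpfs mounts:
```
image_volume_mode = "tmpfs"
```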
**infra_command**="/pause" **infra_command**="/pause"
Infra (pause) container image command for pod infra containers. When running a Infra (pause) container image command for pod infra containers. When running a
pod, we start a `/pause` process in a container to hold open the namespaces pod, we start a `/pause` process in a container to hold open the namespaces
associated with the pod. This container does nothing other then sleep, associated with the pod. This container does nothing other than sleep,
reserving the pods resources for the lifetime of the pod. reserving the pod's resources for the lifetime of the pod.
**infra_image**="" **infra_image**=""
Infra (pause) container image for pod infra containers. When running a Infra (pause) container image for pod infra containers. When running a
pod, we start a `pause` process in a container to hold open the namespaces pod, we start a `pause` process in a container to hold open the namespaces
associated with the pod. This container does nothing other then sleep, associated with the pod. This container does nothing other than sleep,
reserving the pods resources for the lifetime of the pod. By default container reserving the pod's resources for the lifetime of the pod. By default container
engines run a builtin container using the pause executable. If you want override engines run a built-in container using the pause executable. If you want to override this,
specify an image to pull. specify an image to pull.
**kube_generate_type**="pod"
Default Kubernetes kind/specification of the kubernetes yaml generated with the `podman kube generate` command. The possible options are `pod` and `deployment`.
**lock_type**="shm" **lock_type**="shm"
Specify the locking mechanism to use; valid values are "shm" and "file". Specify the locking mechanism to use; valid values are "shm" and "file".
@ -457,12 +710,6 @@ Change the default only if you are sure of what you are doing, in general
faster "shm" lock type. You may need to run "podman system renumber" after you faster "shm" lock type. You may need to run "podman system renumber" after you
change the lock type. change the lock type.
**machine_enabled**=false
Indicates if Podman is running inside a VM via Podman Machine.
Podman uses this value to do extra setup around networking from the
container inside the VM to to host.
**multi_image_archive**=false **multi_image_archive**=false
Allows for creating archives (e.g., tarballs) with more than one image. Some container engines, such as Podman, interpret additional arguments as tags for one image and hence do not store more than one image. The default behavior can be altered with this option. Allows for creating archives (e.g., tarballs) with more than one image. Some container engines, such as Podman, interpret additional arguments as tags for one image and hence do not store more than one image. The default behavior can be altered with this option.
@ -479,16 +726,16 @@ and pods are visible.
Path to the slirp4netns binary. Path to the slirp4netns binary.
**network_cmd_options**=["enable_ipv6=true",] **network_cmd_options**=[]
Default options to pass to the slirp4netns binary. Default options to pass to the slirp4netns binary.
Valid options values are: Valid options values are:
- **allow_host_loopback=true|false**: Allow the slirp4netns to reach the host loopback IP (`10.0.2.2`, which is added to `/etc/hosts` as `host.containers.internal` for your convenience). Default is false. - **allow_host_loopback=true|false**: Allow the slirp4netns to reach the host loopback IP (`10.0.2.2`). Default is false.
- **mtu=MTU**: Specify the MTU to use for this network. (Default is `65520`). - **mtu=MTU**: Specify the MTU to use for this network. (Default is `65520`).
- **cidr=CIDR**: Specify ip range to use for this network. (Default is `10.0.2.0/24`). - **cidr=CIDR**: Specify ip range to use for this network. (Default is `10.0.2.0/24`).
- **enable_ipv6=true|false**: Enable IPv6. Default is false. (Required for `outbound_addr6`). - **enable_ipv6=true|false**: Enable IPv6. Default is true. (Required for `outbound_addr6`).
- **outbound_addr=INTERFACE**: Specify the outbound interface slirp should bind to (ipv4 traffic only). - **outbound_addr=INTERFACE**: Specify the outbound interface slirp should bind to (ipv4 traffic only).
- **outbound_addr=IPv4**: Specify the outbound ipv4 address slirp should bind to. - **outbound_addr=IPv4**: Specify the outbound ipv4 address slirp should bind to.
- **outbound_addr6=INTERFACE**: Specify the outbound interface slirp should bind to (ipv6 traffic only). - **outbound_addr6=INTERFACE**: Specify the outbound interface slirp should bind to (ipv6 traffic only).
@ -503,10 +750,20 @@ Whether to use chroot instead of pivot_root in the runtime.
**num_locks**=2048 **num_locks**=2048
Number of locks available for containers and pods. Each created container or Number of locks available for containers, pods, and volumes.
pod consumes one lock. The default number available is 2048. If this is Each created container, pod, or volume consumes one lock.
changed, a lock renumbering must be performed, using the Locks are recycled and can be reused after the associated container, pod, or volume is removed.
`podman system renumber` command. The default number available is 2048.
If this is changed, a lock renumbering must be performed, using the `podman system renumber` command.
**pod_exit_policy**="continue"
Set the exit policy of the pod when the last container exits. Supported policies are:
| Exit Policy | Description |
| ------------------ | --------------------------------------------------------------------------- |
| *continue* | The pod continues running when the last container exits. Used by default. |
| *stop* | The pod is stopped when the last container exits. Used in `play kube`. |
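For instance, to have pods stop once their last container exits:
```
pod_exit_policy = "stop"
```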
**pull_policy**="always"|"missing"|"never" **pull_policy**="always"|"missing"|"never"
@ -517,16 +774,25 @@ Pull image before running or creating a container. The default is **missing**.
- **never**: do not pull the image from the registry, use only the local version. Raise an error if the image is not present locally. - **never**: do not pull the image from the registry, use only the local version. Raise an error if the image is not present locally.
**remote** = false **remote** = false
Indicates whether the application should be running in remote mode. This flag modifies the Indicates whether the application should be running in remote mode. This flag modifies the
--remote option on container engines. Setting the flag to true will default `podman --remote=true` for access to the remote Podman service. --remote option on container engines. Setting the flag to true will default `podman --remote=true` for access to the remote Podman service.
**retry** = 3
Number of times to retry pulling/pushing images in case of failure.
**retry_delay** = ""
Delay between retries in case pulling/pushing image fails. If set, container engines will retry at the set interval, otherwise they delay 2 seconds and then exponentially back off.
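An illustrative combination (the values are arbitrary examples, not defaults):
```
retry = 5
retry_delay = "10s"
```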
**runtime**="" **runtime**=""
Default OCI specific runtime in runtimes that will be used by default. Must Default OCI specific runtime in runtimes that will be used by default. Must
refer to a member of the runtimes table. Default runtime will be searched for refer to a member of the runtimes table. Default runtime will be searched for
on the system using the priority: "crun", "runc", "kata". on the system using the priority: "crun", "runc", "runj", "kata", "runsc", "ocijail"
**runtime_supports_json**=["crun", "runc", "kata", "runsc", "krun"] **runtime_supports_json**=["crun", "crun-vm", "runc", "kata", "runsc", "youki", "krun"]
The list of the OCI runtimes that support `--format=json`. The list of the OCI runtimes that support `--format=json`.
@ -534,7 +800,7 @@ The list of the OCI runtimes that support `--format=json`.
The list of OCI runtimes that support running containers with KVM separation. The list of OCI runtimes that support running containers with KVM separation.
**runtime_supports_nocgroups**=["crun", "krun"] **runtime_supports_nocgroups**=["crun", "crun-vm", "krun"]
The list of OCI runtimes that support running containers without CGroups. The list of OCI runtimes that support running containers without CGroups.
@ -561,6 +827,10 @@ stores containers.
Number of seconds to wait for container to exit before sending kill signal. Number of seconds to wait for container to exit before sending kill signal.
**exit_command_delay**=300
Number of seconds to wait for the API process of an exec call before sending the exit command, mimicking the Docker behavior of 5 minutes (value in seconds).
**tmp_dir**="/run/libpod" **tmp_dir**="/run/libpod"
The path to a temporary directory to store per-boot container. The path to a temporary directory to store per-boot container.
@ -579,23 +849,33 @@ not be by other drivers.
Determines whether a file copied into a container will have its ownership changed to Determines whether a file copied into a container will have its ownership changed to
the primary uid/gid of the container. the primary uid/gid of the container.
**compression_format**="" **compression_format**="gzip"
Specifies the compression format to use when pushing an image. Supported values
are: `gzip`, `zstd` and `zstd:chunked`. This field is ignored when pushing
images to the docker-daemon and docker-archive formats. It is also ignored
when the manifest format is set to v2s2.
Specifies the compression format to use when pushing an image. Supported values are: `gzip`, `zstd` and `zstd:chunked`. **compression_level**="5"
The compression level to use when pushing an image. Valid options
depend on the compression format used. For gzip, valid options are
1-9, with a default of 5. For zstd, valid options are 1-20, with a
default of 3.
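A sketch combining both fields; the level shown is just an example within the zstd range:
```
compression_format = "zstd"
compression_level = 19
```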
## SERVICE DESTINATION TABLE ## SERVICE DESTINATION TABLE
The `service_destinations` table contains configuration options used to set up remote connections to the podman service for the podman API. The `engine.service_destinations` table contains configuration options used to set up remote connections to the podman service for the podman API.
**[service_destinations.{name}]** **[engine.service_destinations.{name}]**
URI to access the Podman service URI to access the Podman service
**uri="ssh://user@production.example.com/run/user/1001/podman/podman.sock"** **uri="ssh://user@production.example.com/run/user/1001/podman/podman.sock"**
Example URIs: Example URIs:
- **rootless local** - unix://run/user/1000/podman/podman.sock - **rootless local** - unix:///run/user/1000/podman/podman.sock
- **rootless remote** - ssh://user@engineering.lab.company.com/run/user/1000/podman/podman.sock - **rootless remote** - ssh://user@engineering.lab.company.com/run/user/1000/podman/podman.sock
- **rootfull local** - unix://run/podman/podman.sock - **rootful local** - unix:///run/podman/podman.sock
- **rootfull remote** - ssh://root@10.10.1.136:22/run/podman/podman.sock - **rootful remote** - ssh://root@10.10.1.136:22/run/podman/podman.sock
**identity="~/.ssh/id_rsa** **identity="~/.ssh/id_rsa**
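Putting the pieces together, a destination entry in containers.conf could look like this (host name and identity path are illustrative):
```
[engine.service_destinations.production]
  uri = "ssh://user@production.example.com/run/user/1001/podman/podman.sock"
  identity = "~/.ssh/id_rsa"
```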
@ -608,6 +888,10 @@ used as the backend for Podman named volumes. Individual plugins are specified
below, as a map of the plugin name (what the plugin will be called) to its path below, as a map of the plugin name (what the plugin will be called) to its path
(filepath of the plugin's unix socket). (filepath of the plugin's unix socket).
**[engine.platform_to_oci_runtime]**
Allows end users to switch the OCI runtime on the basis of the container image's platform string.
The following config field contains a map of `platform/string = oci_runtime`.
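A hypothetical mapping; both the platform string and the runtime name are assumptions, not shipped defaults:
```
[engine.platform_to_oci_runtime]
  "wasi/wasm" = "crun-wasm"
```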
## SECRET TABLE ## SECRET TABLE
The `secret` table contains settings for the configuration of the secret subsystem. The `secret` table contains settings for the configuration of the secret subsystem.
@ -635,11 +919,13 @@ The size of the disk in GB created when init-ing a podman-machine VM
**image**="" **image**=""
Default image used when creating a new VM using `podman machine init`. Image used when creating a new VM using `podman machine init`.
Options: On Linux/Mac, `testing`, `stable`, `next`. On Windows, the major Can be specified as a registry with a bootable OCI artifact, download URL, or a local path.
version of the OS (e.g `35`). For all platforms you can alternatively specify Registry target must be in the form of `docker://registry/repo/image:version`.
a custom path or download URL to an image. The default is `testing` on Container engines translate URIs $OS and $ARCH to the native OS and ARCH.
Linux/Mac, and `35` on Windows. URI "https://example.com/$OS/$ARCH/foobar.ami" would become
"https://example.com/linux/amd64/foobar.ami" on a Linux AMD machine.
If unspecified, the default Podman machine image will be used.
**memory**=2048 **memory**=2048
@ -650,27 +936,78 @@ Memory in MB a machine is created with.
Username to use and create on the podman machine OS for rootless container Username to use and create on the podman machine OS for rootless container
access. The default value is `user`. On Linux/Mac the default is `core`. access. The default value is `user`. On Linux/Mac the default is `core`.
**volumes**=["$HOME:$HOME"]
Host directories to be mounted as volumes into the VM by default.
Environment variables like $HOME as well as complete paths are supported for
the source and destination. An optional third field `:ro` can be used to
tell the container engines to mount the volume readonly.
On Mac, the default volumes are:
[ "/Users:/Users", "/private:/private", "/var/folders:/var/folders" ]
**provider**=""
Virtualization provider to be used for running a podman-machine VM. Empty value
is interpreted as the default provider for the current host OS. On Linux/Mac
default is `QEMU` and on Windows it is `WSL`.
**rosetta**="true"
Rosetta supports running x86_64 Linux binaries on a Podman machine on Apple silicon.
The default value is `true`. Supported on AppleHV(arm64) machines only.
## FARMS TABLE
The `farms` table contains configuration options used to group remote connections into farms that will be used when sending out builds to different machines in a farm via `podman buildfarm`.
**default**=""
The default farm to use when farming out builds.
**[farms.list]**
Map of farms created where the key is the farm name and the value is the list of system connections.
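An illustrative farm definition; the farm and connection names are placeholders:
```
[farms]
  default = "farm1"

  [farms.list]
    farm1 = ["connection1", "connection2"]
```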
## PODMANSH TABLE
The `podmansh` table contains configuration options used by podmansh.
**shell**="/bin/sh"
The shell to spawn in the container.
The default value is `/bin/sh`.
**container**="podmansh"
Name of the container that podmansh joins.
The default value is `podmansh`.
**timeout**=0
Number of seconds to wait for podmansh logins. If set, this value is favoured over the deprecated field `engine.podmansh_timeout`.
The default value is 30.
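Putting the options together (the values shown are examples, not defaults):
```
[podmansh]
  shell = "/bin/bash"
  container = "podmansh-dev"
  timeout = 60
```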
# FILES # FILES
**containers.conf** **containers.conf**
Distributions often provide a `/usr/share/containers/containers.conf` file to Distributions often provide a __/usr/share/containers/containers.conf__ file to
define default container configuration. Administrators can override fields in provide a default configuration. Administrators can override fields in this
this file by creating `/etc/containers/containers.conf` to specify their own file by creating __/etc/containers/containers.conf__ to specify their own
configuration. Rootless users can further override fields in the config by configuration. They may also drop `.conf` files in
creating a config file stored in the `$HOME/.config/containers/containers.conf` file. __/etc/containers/containers.conf.d__ which will be loaded in alphanumeric order.
For user specific configuration it reads __\$XDG_CONFIG_HOME/containers/containers.conf__ and
If the `CONTAINERS_CONF` path environment variable is set, just __\$XDG_CONFIG_HOME/containers/containers.conf.d/\*.conf__ files. When `$XDG_CONFIG_HOME` is not set it falls back to using `$HOME/.config` instead.
this path will be used. This is primarily used for testing.
Fields specified in the containers.conf file override the default options, as Fields specified in a containers.conf file override the default options, as
well as options in previously read containers.conf files. well as options in previously loaded containers.conf files.
**storage.conf** **storage.conf**
The `/etc/containers/storage.conf` file is the default storage configuration file. The `/etc/containers/storage.conf` file is the default storage configuration file.
Rootless users can override fields in the storage config by creating Rootless users can override fields in the storage config by creating
`$HOME/.config/containers/storage.conf`. __$HOME/.config/containers/storage.conf__.
If the `CONTAINERS_STORAGE_CONF` path environment variable is set, this path If the `CONTAINERS_STORAGE_CONF` path environment variable is set, this path
is used for the storage.conf file rather than the default. is used for the storage.conf file rather than the default.

@ -6,9 +6,18 @@
], ],
"transports": { "transports": {
"docker": { "docker": {
"": [ "registry.access.redhat.com": [
{ {
"type": "insecureAcceptAnything" "type": "signedBy",
"keyType": "GPGKeys",
"keyPaths": ["/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release", "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta"]
}
],
"registry.redhat.io": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPaths": ["/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release", "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta"]
} }
] ]
}, },

@ -1,55 +1,47 @@
#!/bin/bash #!/bin/bash
#set -e set -e
rm -f /tmp/pyxis*.json rm -f /tmp/pyxis*.json
TOTAL=`curl -s --negotiate -u: -H 'Content-Type: application/json' -H 'Accept: application/json' -X GET "https://pyxis.engineering.redhat.com/v1/repositories?page_size=1" | jq .total` TOTAL=`curl -s --negotiate -u: -H 'Content-Type: application/json' -H 'Accept: application/json' -X GET "https://pyxis.engineering.redhat.com/v1/repositories?page_size=1" | jq .total`
if [ "$TOTAL" == "null" ]; then if [ "$TOTAL" == "null" ]; then
echo "Error comunicating with Pyxis API." echo "Error comunicating with Pyxis API."
exit 1 exit 1
fi fi
PAGES=$(($TOTAL/500)) PAGES=$(($TOTAL/250))
for P in `seq 0 $PAGES`; do for P in `seq 0 $PAGES`; do
curl -s --negotiate -u: -H 'Content-Type: application/json' -H 'Accept: application/json' -X GET "https://pyxis.engineering.redhat.com/v1/repositories?page_size=500&page=$P" > /tmp/pyxis$P.json curl -s --negotiate -u: -H 'Content-Type: application/json' -H 'Accept: application/json' -X GET "https://pyxis.engineering.redhat.com/v1/repositories?page_size=500&page=$P" > /tmp/pyxis$P.json
done done
cat /tmp/pyxis*.json > /tmp/pyx.json cat /tmp/pyxis*.json > /tmp/pyx.json
rm -f /tmp/pyx_debug
rm -f /tmp/rhel-shortnames.conf rm -f /tmp/rhel-shortnames.conf
while read -r LINE; do jq '.data[]|.published,.requires_terms,.repository,.registry,.release_categories[0]' < /tmp/pyx.json >/tmp/pyx
if [[ "$LINE" == *\"_id\":* ]] || [[ "$LINE" == *\"total\":* ]]; then readarray -t lines < /tmp/pyx
if [ -z $REGISTRY ] || IDX=0
[ -z $PUBLISHED ] || while [ $IDX -lt ${#lines[@]} ]; do
[ -z $REPOSITORY ] || PUBLISHED=${lines[$IDX]}
[ $REPOSITORY == \"\" ] || REQ_TERMS=${lines[$IDX+1]}
[ "$AVAILABLE" != "Generally Available" ] || REPOSITORY=`echo ${lines[$IDX+2]} | tr -d '"'`
[[ $REPOSITORY == *[@:]* ]] || REGISTRY=`echo ${lines[$IDX+3]} | tr -d '"'`
[[ $REPOSITORY == *[* ]] || RELEASE=`echo ${lines[$IDX+4]} | tr -d '"'`
[[ "$REGISTRY" == *non_registry* ]] || if [ "$PUBLISHED" == "true" ] &&
[[ $REGISTRY != *.* ]] [ "$RELEASE" == "Generally Available" ] &&
then [ ! -z "$REPOSITORY" ] &&
continue [ "$REPOSITORY" != \"\" ] &&
fi [[ $REPOSITORY != *[@:]* ]] &&
REGISTRY="" [[ $REPOSITORY != *[* ]] &&
PUBLISHED="" [[ $REGISTRY == *.* ]] &&
AVAILABLE="" [ "$REGISTRY" != "non_registry" ]; then
REPOSITORY="" if [[ $REGISTRY == *quay.io* ]] ||
REQUIRES_TERMS="" [[ $REGISTRY == *redhat.com* ]]; then
continue if [ "$REQ_TERMS" == "true" ]; then
fi REGISTRY=registry.redhat.io
if [[ "$LINE" == *\"published\":\ true,* ]]; then fi
PUBLISHED=1 fi
fi echo "\"$REPOSITORY\" = \"$REGISTRY/$REPOSITORY\""
if [[ "$LINE" == *\"requires_terms\":\ true,* ]]; then echo $PUBLISHED,$REQ_TERMS,$REPOSITORY,$REGISTRY,$RELEASE >> /tmp/pyx_debug
REQUIRES_TERMS=1 echo "\"$REPOSITORY\" = \"$REGISTRY/$REPOSITORY\"" >> /tmp/rhel-shortnames.conf
fi fi
if [[ "$LINE" == *\"repository\":\ * ]]; then IDX=$(($IDX+5))
REPOSITORY=`echo $LINE | sed 's,^.* ",,' | sed 's;",$;;'` done
fi
if [[ "$LINE" == *\"registry\":\ * ]]; then
REGISTRY=`echo $LINE | sed -e 's,^.*:\ ",,' -e 's,".*,,'`
fi
if [[ "$LINE" == *\"release_categories\":\ * ]]; then
read -r LINE
AVAILABLE=`echo $LINE | sed 's,",,g'`
fi
done < /tmp/pyx.json
cp /tmp/rhel-shortnames.conf /tmp/r.conf cp /tmp/rhel-shortnames.conf /tmp/r.conf
for D in `cut -d\ -f1 /tmp/r.conf | sort | uniq -d`; do for D in `cut -d\ -f1 /tmp/r.conf | sort | uniq -d`; do

@ -19,7 +19,7 @@
# #
# # An array of host[:port] registries to try when pulling an unqualified image, in order. # # An array of host[:port] registries to try when pulling an unqualified image, in order.
unqualified-search-registries = ["docker.io"] unqualified-search-registries = ["registry.access.redhat.com", "registry.redhat.io", "docker.io"]
# [[registry]] # [[registry]]
# # The "prefix" field is used to choose the relevant [[registry]] TOML table; # # The "prefix" field is used to choose the relevant [[registry]] TOML table;
@ -76,4 +76,4 @@ unqualified-search-registries = ["docker.io"]
# # 2. example-mirror-1.local/mirrors/foo/image:latest # # 2. example-mirror-1.local/mirrors/foo/image:latest
# # 3. internal-registry-for-example.net/bar/image:latest # # 3. internal-registry-for-example.net/bar/image:latest
# # in order, and use the first one that exists. # # in order, and use the first one that exists.
short-name-mode = "permissive" short-name-mode = "enforcing"

@ -0,0 +1,3 @@
docker:
registry.access.redhat.com:
sigstore: https://access.redhat.com/webassets/docker/content/sigstore

@ -0,0 +1,3 @@
docker:
registry.redhat.io:
sigstore: https://registry.redhat.io/containers/sigstore

@ -55,9 +55,16 @@
{ {
"names": [ "names": [
"bdflush", "bdflush",
"cachestat",
"futex_requeue",
"futex_wait",
"futex_waitv",
"futex_wake",
"io_pgetevents", "io_pgetevents",
"io_pgetevents_time64",
"kexec_file_load", "kexec_file_load",
"kexec_load", "kexec_load",
"map_shadow_stack",
"migrate_pages", "migrate_pages",
"move_pages", "move_pages",
"nfsservctl", "nfsservctl",
@ -72,9 +79,9 @@
"pciconfig_write", "pciconfig_write",
"sgetmask", "sgetmask",
"ssetmask", "ssetmask",
"swapcontext",
"swapoff", "swapoff",
"swapon", "swapon",
"syscall",
"sysfs", "sysfs",
"uselib", "uselib",
"userfaultfd", "userfaultfd",
@ -149,6 +156,7 @@
"fchdir", "fchdir",
"fchmod", "fchmod",
"fchmodat", "fchmodat",
"fchmodat2",
"fchown", "fchown",
"fchown32", "fchown32",
"fchownat", "fchownat",
@ -176,6 +184,7 @@
"futex", "futex",
"futex_time64", "futex_time64",
"futimesat", "futimesat",
"get_mempolicy",
"get_robust_list", "get_robust_list",
"get_thread_area", "get_thread_area",
"getcpu", "getcpu",
@ -191,7 +200,6 @@
"getgroups", "getgroups",
"getgroups32", "getgroups32",
"getitimer", "getitimer",
"get_mempolicy",
"getpeername", "getpeername",
"getpgid", "getpgid",
"getpgrp", "getpgrp",
@ -228,6 +236,9 @@
"ipc", "ipc",
"keyctl", "keyctl",
"kill", "kill",
"landlock_add_rule",
"landlock_create_ruleset",
"landlock_restrict_self",
"lchown", "lchown",
"lchown32", "lchown32",
"lgetxattr", "lgetxattr",
@ -243,6 +254,7 @@
"lstat64", "lstat64",
"madvise", "madvise",
"mbind", "mbind",
"membarrier",
"memfd_create", "memfd_create",
"memfd_secret", "memfd_secret",
"mincore", "mincore",
@ -256,6 +268,7 @@
"mmap", "mmap",
"mmap2", "mmap2",
"mount", "mount",
"mount_setattr",
"move_mount", "move_mount",
"mprotect", "mprotect",
"mq_getsetattr", "mq_getsetattr",
@ -279,9 +292,9 @@
"nanosleep", "nanosleep",
"newfstatat", "newfstatat",
"open", "open",
"open_tree",
"openat", "openat",
"openat2", "openat2",
"open_tree",
"pause", "pause",
"pidfd_getfd", "pidfd_getfd",
"pidfd_open", "pidfd_open",
@ -300,14 +313,17 @@
"preadv", "preadv",
"preadv2", "preadv2",
"prlimit64", "prlimit64",
"process_mrelease",
"process_vm_readv",
"process_vm_writev",
"pselect6", "pselect6",
"pselect6_time64", "pselect6_time64",
"ptrace",
"pwrite64", "pwrite64",
"pwritev", "pwritev",
"pwritev2", "pwritev2",
"read", "read",
"readahead", "readahead",
"readdir",
"readlink", "readlink",
"readlinkat", "readlinkat",
"readv", "readv",
@ -360,7 +376,6 @@
"sendmmsg", "sendmmsg",
"sendmsg", "sendmsg",
"sendto", "sendto",
"setns",
"set_mempolicy", "set_mempolicy",
"set_robust_list", "set_robust_list",
"set_thread_area", "set_thread_area",
@ -374,6 +389,7 @@
"setgroups", "setgroups",
"setgroups32", "setgroups32",
"setitimer", "setitimer",
"setns",
"setpgid", "setpgid",
"setpriority", "setpriority",
"setregid", "setregid",
@ -396,8 +412,10 @@
"shmget", "shmget",
"shutdown", "shutdown",
"sigaltstack", "sigaltstack",
"signal",
"signalfd", "signalfd",
"signalfd4", "signalfd4",
"sigprocmask",
"sigreturn", "sigreturn",
"socket", "socket",
"socketcall", "socketcall",
@ -546,7 +564,8 @@
}, },
{ {
"names": [ "names": [
"sync_file_range2" "sync_file_range2",
"swapcontext"
], ],
"action": "SCMP_ACT_ALLOW", "action": "SCMP_ACT_ALLOW",
"args": [], "args": [],
@ -562,10 +581,10 @@
"names": [ "names": [
"arm_fadvise64_64", "arm_fadvise64_64",
"arm_sync_file_range", "arm_sync_file_range",
"sync_file_range2",
"breakpoint", "breakpoint",
"cacheflush", "cacheflush",
"set_tls" "set_tls",
"sync_file_range2"
], ],
"action": "SCMP_ACT_ALLOW", "action": "SCMP_ACT_ALLOW",
"args": [], "args": [],
@ -626,6 +645,20 @@
}, },
"excludes": {} "excludes": {}
}, },
{
"names": [
"riscv_flush_icache"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"arches": [
"riscv64"
]
},
"excludes": {}
},
{ {
"names": [ "names": [
"open_by_handle_at" "open_by_handle_at"
@ -661,8 +694,8 @@
"bpf", "bpf",
"fanotify_init", "fanotify_init",
"lookup_dcookie", "lookup_dcookie",
"perf_event_open",
"quotactl", "quotactl",
"quotactl_fd",
"setdomainname", "setdomainname",
"sethostname", "sethostname",
"setns" "setns"
@ -679,11 +712,11 @@
}, },
{ {
"names": [ "names": [
"bpf",
"fanotify_init", "fanotify_init",
"lookup_dcookie", "lookup_dcookie",
"perf_event_open", "perf_event_open",
"quotactl", "quotactl",
"quotactl_fd",
"setdomainname", "setdomainname",
"sethostname", "sethostname",
"setns" "setns"
@ -733,8 +766,8 @@
{ {
"names": [ "names": [
"delete_module", "delete_module",
"init_module",
"finit_module", "finit_module",
"init_module",
"query_module" "query_module"
], ],
"action": "SCMP_ACT_ALLOW", "action": "SCMP_ACT_ALLOW",
@ -750,8 +783,8 @@
{ {
"names": [ "names": [
"delete_module", "delete_module",
"init_module",
"finit_module", "finit_module",
"init_module",
"query_module" "query_module"
], ],
"action": "SCMP_ACT_ERRNO", "action": "SCMP_ACT_ERRNO",
@ -799,10 +832,7 @@
{ {
"names": [ "names": [
"kcmp", "kcmp",
"process_madvise", "process_madvise"
"process_vm_readv",
"process_vm_writev",
"ptrace"
], ],
"action": "SCMP_ACT_ALLOW", "action": "SCMP_ACT_ALLOW",
"args": [], "args": [],
@ -817,10 +847,7 @@
{ {
"names": [ "names": [
"kcmp", "kcmp",
"process_madvise", "process_madvise"
"process_vm_readv",
"process_vm_writev",
"ptrace"
], ],
"action": "SCMP_ACT_ERRNO", "action": "SCMP_ACT_ERRNO",
"args": [], "args": [],
@ -836,8 +863,8 @@
}, },
{ {
"names": [ "names": [
"iopl", "ioperm",
"ioperm" "iopl"
], ],
"action": "SCMP_ACT_ALLOW", "action": "SCMP_ACT_ALLOW",
"args": [], "args": [],
@ -851,8 +878,8 @@
}, },
{ {
"names": [ "names": [
"iopl", "ioperm",
"ioperm" "iopl"
], ],
"action": "SCMP_ACT_ERRNO", "action": "SCMP_ACT_ERRNO",
"args": [], "args": [],
@ -868,10 +895,10 @@
}, },
{ {
"names": [ "names": [
"settimeofday",
"stime",
"clock_settime", "clock_settime",
"clock_settime64" "clock_settime64",
"settimeofday",
"stime"
], ],
"action": "SCMP_ACT_ALLOW", "action": "SCMP_ACT_ALLOW",
"args": [], "args": [],
@ -885,10 +912,10 @@
}, },
{ {
"names": [ "names": [
"settimeofday",
"stime",
"clock_settime", "clock_settime",
"clock_settime64" "clock_settime64",
"settimeofday",
"stime"
], ],
"action": "SCMP_ACT_ERRNO", "action": "SCMP_ACT_ERRNO",
"args": [], "args": [],
@ -1037,6 +1064,68 @@
] ]
}, },
"excludes": {} "excludes": {}
},
{
"names": [
"bpf"
],
"action": "SCMP_ACT_ERRNO",
"args": [],
"comment": "",
"includes": {},
"excludes": {
"caps": [
"CAP_SYS_ADMIN",
"CAP_BPF"
]
},
"errnoRet": 1,
"errno": "EPERM"
},
{
"names": [
"bpf"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"caps": [
"CAP_BPF"
]
},
"excludes": {}
},
{
"names": [
"perf_event_open"
],
"action": "SCMP_ACT_ERRNO",
"args": [],
"comment": "",
"includes": {},
"excludes": {
"caps": [
"CAP_SYS_ADMIN",
"CAP_BPF"
]
},
"errnoRet": 1,
"errno": "EPERM"
},
{
"names": [
"perf_event_open"
],
"action": "SCMP_ACT_ALLOW",
"args": [],
"comment": "",
"includes": {
"caps": [
"CAP_PERFMON"
]
},
"excludes": {}
} }
] ]
} }

@ -2,6 +2,8 @@
# almalinux # almalinux
"almalinux" = "docker.io/library/almalinux" "almalinux" = "docker.io/library/almalinux"
"almalinux-minimal" = "docker.io/library/almalinux-minimal" "almalinux-minimal" = "docker.io/library/almalinux-minimal"
# Amazon Linux
"amazonlinux" = "public.ecr.aws/amazonlinux/amazonlinux"
# Arch Linux # Arch Linux
"archlinux" = "docker.io/library/archlinux" "archlinux" = "docker.io/library/archlinux"
# centos # centos
@ -18,10 +20,11 @@
"registry" = "docker.io/library/registry" "registry" = "docker.io/library/registry"
"swarm" = "docker.io/library/swarm" "swarm" = "docker.io/library/swarm"
# Fedora # Fedora
"fedora-bootc" = "registry.fedoraproject.org/fedora-bootc"
"fedora-minimal" = "registry.fedoraproject.org/fedora-minimal" "fedora-minimal" = "registry.fedoraproject.org/fedora-minimal"
"fedora" = "registry.fedoraproject.org/fedora" "fedora" = "registry.fedoraproject.org/fedora"
# MSVSphere # Gentoo
"msvsphere" = "docker.io/inferit/msvsphere" "gentoo" = "docker.io/gentoo/stage3"
# openSUSE # openSUSE
"opensuse/tumbleweed" = "registry.opensuse.org/opensuse/tumbleweed" "opensuse/tumbleweed" = "registry.opensuse.org/opensuse/tumbleweed"
"opensuse/tumbleweed-dnf" = "registry.opensuse.org/opensuse/tumbleweed-dnf" "opensuse/tumbleweed-dnf" = "registry.opensuse.org/opensuse/tumbleweed-dnf"
@ -54,10 +57,11 @@
"rhel7" = "registry.access.redhat.com/rhel7" "rhel7" = "registry.access.redhat.com/rhel7"
"rhel7.9" = "registry.access.redhat.com/rhel7.9" "rhel7.9" = "registry.access.redhat.com/rhel7.9"
"rhel-atomic" = "registry.access.redhat.com/rhel-atomic" "rhel-atomic" = "registry.access.redhat.com/rhel-atomic"
"rhel-minimal" = "registry.access.redhat.com/rhel-minimum" "rhel9-bootc" = "registry.redhat.io/rhel9/rhel-bootc"
"rhel-minimal" = "registry.access.redhat.com/rhel-minimal"
"rhel-init" = "registry.access.redhat.com/rhel-init" "rhel-init" = "registry.access.redhat.com/rhel-init"
"rhel7-atomic" = "registry.access.redhat.com/rhel7-atomic" "rhel7-atomic" = "registry.access.redhat.com/rhel7-atomic"
"rhel7-minimal" = "registry.access.redhat.com/rhel7-minimum" "rhel7-minimal" = "registry.access.redhat.com/rhel7-minimal"
"rhel7-init" = "registry.access.redhat.com/rhel7-init" "rhel7-init" = "registry.access.redhat.com/rhel7-init"
"rhel7/rhel" = "registry.access.redhat.com/rhel7/rhel" "rhel7/rhel" = "registry.access.redhat.com/rhel7/rhel"
"rhel7/rhel-atomic" = "registry.access.redhat.com/rhel7/rhel7/rhel-atomic" "rhel7/rhel-atomic" = "registry.access.redhat.com/rhel7/rhel7/rhel-atomic"
@ -98,7 +102,7 @@
"ubi9/buildah" = "registry.access.redhat.com/ubi9/buildah" "ubi9/buildah" = "registry.access.redhat.com/ubi9/buildah"
"ubi9/skopeo" = "registry.access.redhat.com/ubi9/skopeo" "ubi9/skopeo" = "registry.access.redhat.com/ubi9/skopeo"
# Rocky Linux # Rocky Linux
"rockylinux" = "docker.io/library/rockylinux" "rockylinux" = "quay.io/rockylinux/rockylinux"
# Debian # Debian
"debian" = "docker.io/library/debian" "debian" = "docker.io/library/debian"
# Kali Linux # Kali Linux
@ -121,3 +125,12 @@
"rust" = "docker.io/library/rust" "rust" = "docker.io/library/rust"
# node # node
"node" = "docker.io/library/node" "node" = "docker.io/library/node"
# Grafana Labs
"grafana/agent" = "docker.io/grafana/agent"
"grafana/grafana" = "docker.io/grafana/grafana"
"grafana/k6" = "docker.io/grafana/k6"
"grafana/loki" = "docker.io/grafana/loki"
"grafana/mimir" = "docker.io/grafana/mimir"
"grafana/oncall" = "docker.io/grafana/oncall"
"grafana/pyroscope" = "docker.io/grafana/pyroscope"
"grafana/tempo" = "docker.io/grafana/tempo"

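A minimal sketch of how the alias table above is consumed once installed (the package ships it under /etc/containers/registries.conf.d/): a listed short name resolves directly to its fully qualified reference, with no registry search involved.

```
podman pull rockylinux
# resolves via the alias above to quay.io/rockylinux/rockylinux
podman pull grafana/grafana
# resolves via the alias above to docker.io/grafana/grafana
```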
@ -1,4 +1,4 @@
# This file is is the configuration file for all tools # This file is the configuration file for all tools
# that use the containers/storage library. The storage.conf file # that use the containers/storage library. The storage.conf file
# overrides all other storage.conf files. Container engines using the # overrides all other storage.conf files. Container engines using the
# container/storage library do not inherit fields from other storage.conf # container/storage library do not inherit fields from other storage.conf
@ -19,6 +19,10 @@ driver = "overlay"
# Temporary storage location # Temporary storage location
runroot = "/run/containers/storage" runroot = "/run/containers/storage"
# Priority list for the storage drivers that will be tested one
# after the other to pick the storage driver if it is not defined.
# driver_priority = ["overlay", "btrfs"]
# Primary Read/Write location of container storage # Primary Read/Write location of container storage
# When changing the graphroot location on an SELINUX system, you must # When changing the graphroot location on an SELINUX system, you must
# ensure the labeling matches the default locations labels with the # ensure the labeling matches the default locations labels with the
@ -27,11 +31,21 @@ runroot = "/run/containers/storage"
# restorecon -R -v /NEWSTORAGEPATH # restorecon -R -v /NEWSTORAGEPATH
graphroot = "/var/lib/containers/storage" graphroot = "/var/lib/containers/storage"
# Optional alternate location of image store if a location separate from the
# container store is required. If set, it must be different than graphroot.
# imagestore = ""
# Storage path for rootless users # Storage path for rootless users
# #
# rootless_storage_path = "$HOME/.local/share/containers/storage" # rootless_storage_path = "$HOME/.local/share/containers/storage"
# Transient store mode makes all container metadata be saved in temporary storage
# (i.e. runroot above). This is faster, but doesn't persist across reboots.
# Additional garbage collection must also be performed at boot-time, so this
# option should remain disabled in most configurations.
# transient_store = true
[storage.options] [storage.options]
# Storage options to be passed to underlying storage drivers # Storage options to be passed to underlying storage drivers
@ -40,26 +54,32 @@ graphroot = "/var/lib/containers/storage"
additionalimagestores = [ additionalimagestores = [
] ]
# Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of # Allows specification of how storage is populated when pulling images. This
# a container, to the UIDs/GIDs as they should appear outside of the container, # option can speed the pulling process of images compressed with format
# and the length of the range of UIDs/GIDs. Additional mapped sets can be # zstd:chunked. Containers/storage looks for files within images that are being
# listed and will be heeded by libraries, but there are limits to the number of # pulled from a container registry that were previously pulled to the host. It
# mappings which the kernel will allow when you later attempt to run a # can copy or create a hard link to the existing file when it finds them,
# container. # eliminating the need to pull them from the container registry. These options
# # can deduplicate pulling of content, disk storage of content and can allow the
# remap-uids = 0:1668442479:65536 # kernel to use less memory when running containers.
# remap-gids = 0:1668442479:65536
# containers/storage supports four keys
# Remap-User/Group is a user name which can be used to look up one or more UID/GID # * enable_partial_images="true" | "false"
# ranges in the /etc/subuid or /etc/subgid file. Mappings are set up starting # Tells containers/storage to look for files previously pulled in storage
# with an in-container ID of 0 and then a host-level ID taken from the lowest # rather then always pulling them from the container registry.
# range that matches the specified name, and using the length of that range. # * use_hard_links = "false" | "true"
# Additional ranges are then assigned, using the ranges which specify the # Tells containers/storage to use hard links rather then create new files in
# lowest host-level IDs first, to the lowest not-yet-mapped in-container ID, # the image, if an identical file already existed in storage.
# until all of the entries have been used for maps. # * ostree_repos = ""
# # Tells containers/storage where an ostree repository exists that might have
# remap-user = "containers" # previously pulled content which can be used when attempting to avoid
# remap-group = "containers" # pulling content from the container registry
# * convert_images = "false" | "true"
# If set to true, containers/storage will convert images to a
# format compatible with partial pulls in order to take advantage
# of local deduplication and hard linking. It is an expensive
# operation so it is not enabled by default.
pull_options = {enable_partial_images = "true", use_hard_links = "false", ostree_repos=""}
# Root-auto-userns-user is a user name which can be used to look up one or more UID/GID # Root-auto-userns-user is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid and /etc/subgid file. These ranges will be partitioned # ranges in the /etc/subuid and /etc/subgid file. These ranges will be partitioned
@ -72,7 +92,7 @@ additionalimagestores = [
# Auto-userns-min-size is the minimum size for a user namespace created automatically. # Auto-userns-min-size is the minimum size for a user namespace created automatically.
# auto-userns-min-size=1024 # auto-userns-min-size=1024
# #
# Auto-userns-max-size is the minimum size for a user namespace created automatically. # Auto-userns-max-size is the maximum size for a user namespace created automatically.
# auto-userns-max-size=65536 # auto-userns-max-size=65536
[storage.options.overlay] [storage.options.overlay]
@ -97,6 +117,9 @@ mountopt = "nodev,metacopy=on"
# Set to skip a PRIVATE bind mount on the storage home directory. # Set to skip a PRIVATE bind mount on the storage home directory.
# skip_mount_home = "false" # skip_mount_home = "false"
# Set to use composefs to mount data layers with overlay.
# use_composefs = "false"
# Size is used to set a maximum size of the container image. # Size is used to set a maximum size of the container image.
# size = "" # size = ""
@ -128,83 +151,7 @@ mountopt = "nodev,metacopy=on"
# future. When "force_mask" is set the original permission mask is stored in # future. When "force_mask" is set the original permission mask is stored in
# the "user.containers.override_stat" xattr and the "mount_program" option must # the "user.containers.override_stat" xattr and the "mount_program" option must
# be specified. Mount programs like "/usr/bin/fuse-overlayfs" present the # be specified. Mount programs like "/usr/bin/fuse-overlayfs" present the
# extended attribute permissions to processes within containers rather then the # extended attribute permissions to processes within containers rather than the
# "force_mask" permissions. # "force_mask" permissions.
# #
# force_mask = "" # force_mask = ""
[storage.options.thinpool]
# Storage Options for thinpool
# autoextend_percent determines the amount by which pool needs to be
# grown. This is specified in terms of % of pool size. So a value of 20 means
# that when threshold is hit, pool will be grown by 20% of existing
# pool size.
# autoextend_percent = "20"
# autoextend_threshold determines the pool extension threshold in terms
# of percentage of pool size. For example, if threshold is 60, that means when
# pool is 60% full, threshold has been hit.
# autoextend_threshold = "80"
# basesize specifies the size to use when creating the base device, which
# limits the size of images and containers.
# basesize = "10G"
# blocksize specifies a custom blocksize to use for the thin pool.
# blocksize="64k"
# directlvm_device specifies a custom block storage device to use for the
# thin pool. Required if you setup devicemapper.
# directlvm_device = ""
# directlvm_device_force wipes device even if device already has a filesystem.
# directlvm_device_force = "True"
# fs specifies the filesystem type to use for the base device.
# fs="xfs"
# log_level sets the log level of devicemapper.
# 0: LogLevelSuppress 0 (Default)
# 2: LogLevelFatal
# 3: LogLevelErr
# 4: LogLevelWarn
# 5: LogLevelNotice
# 6: LogLevelInfo
# 7: LogLevelDebug
# log_level = "7"
# min_free_space specifies the min free space percent in a thin pool require for
# new device creation to succeed. Valid values are from 0% - 99%.
# Value 0% disables
# min_free_space = "10%"
# mkfsarg specifies extra mkfs arguments to be used when creating the base
# device.
# mkfsarg = ""
# metadata_size is used to set the `pvcreate --metadatasize` options when
# creating thin devices. Default is 128k
# metadata_size = ""
# Size is used to set a maximum size of the container image.
# size = ""
# use_deferred_removal marks devicemapper block device for deferred removal.
# If the thinpool is in use when the driver attempts to remove it, the driver
# tells the kernel to remove it as soon as possible. Note this does not free
# up the disk space, use deferred deletion to fully remove the thinpool.
# use_deferred_removal = "True"
# use_deferred_deletion marks thinpool device for deferred deletion.
# If the device is busy when the driver attempts to delete it, the driver
# will attempt to delete device every 30 seconds until successful.
# If the program using the driver exits, the driver will continue attempting
# to cleanup the next time the driver is used. Deferred deletion permanently
# deletes the device and all data stored in device will be lost.
# use_deferred_deletion = "True"
# xfs_nospace_max_retries specifies the maximum number of retries XFS should
# attempt to complete IO when ENOSPC (no space) error is returned by
# underlying storage device.
# xfs_nospace_max_retries = "0"

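A quick, hedged way to confirm the pull_options default described in the storage.conf hunk above on an installed system (path assumed to be the stock /etc/containers/storage.conf):

```
grep '^pull_options' /etc/containers/storage.conf
# expected with this build:
# pull_options = {enable_partial_images = "true", use_hard_links = "false", ostree_repos=""}
```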
@ -7,23 +7,27 @@ CENTOS=""
pwd | grep /tmp/centos > /dev/null pwd | grep /tmp/centos > /dev/null
if [ $? == 0 ]; then if [ $? == 0 ]; then
CENTOS=1 CENTOS=1
PKG=centpkg
else
PKG=rhpkg
fi fi
set -e set -e
for P in podman skopeo buildah; do for P in podman skopeo buildah; do
BRN=`pwd | sed 's,^.*/,,'` BRN=`pwd | sed 's,^.*/,,'`
rm -rf $P rm -rf $P
pkg clone $P $PKG clone $P
cd $P cd $P
[ -z "$CENTOS" ] && pkg switch-branch $BRN $PKG switch-branch $BRN
if [ $BRN != stream-container-tools-rhel8 ]; then if [ $BRN != stream-container-tools-rhel8 ]; then
pkg prep $PKG prep
else else
pkg --release rhel-8 prep $PKG --release rhel-8 prep
fi fi
DIR=`ls -d -- */ | grep -v ^tests | head -n1` rm -rf *SPECPARTS
grep github.com/containers/image $DIR/go.mod | grep -v - | cut -d\ -f2 >> /tmp/ver_image DIR=`ls -d -- */ | grep "$P"`
grep github.com/containers/common $DIR/go.mod | grep -v - | cut -d\ -f2 >> /tmp/ver_common grep github.com/containers/image $DIR/go.mod | cut -d\ -f2 | sed 's,-.*,,'>> /tmp/ver_image
grep github.com/containers/storage $DIR/go.mod | grep -v - | cut -d\ -f2 >> /tmp/ver_storage grep github.com/containers/common $DIR/go.mod | cut -d\ -f2 | sed 's,-.*,,' >> /tmp/ver_common
grep github.com/containers/storage $DIR/go.mod | cut -d\ -f2 | sed 's,-.*,,' >> /tmp/ver_storage
cd - cd -
done done
IMAGE_VER=`sort -n /tmp/ver_image | head -n1` IMAGE_VER=`sort -n /tmp/ver_image | head -n1`

@ -21,27 +21,47 @@ $2 = $3" $1
#./pyxis.sh #./pyxis.sh
#./update-vendored.sh #./update-vendored.sh
spectool -f -g containers-common.spec spectool -f -g containers-common.spec
for FILE in *; do
[ -s "$FILE" ]
if [ $? == 1 ] && [ "$FILE" != "sources" ]; then
echo "empty file: $FILE"
exit 1
fi
done
ensure storage.conf driver \"overlay\" ensure storage.conf driver \"overlay\"
ensure storage.conf mountopt \"nodev,metacopy=on\" ensure storage.conf mountopt \"nodev,metacopy=on\"
if pwd | grep rhel-8 > /dev/null if pwd | grep rhel-8 > /dev/null
then then
ensure registries.conf unqualified-search-registries [\"docker.io\"] awk -i inplace '/#default_capabilities/,/#\]/{gsub("#","",$0)}1' containers.conf
ensure registries.conf unqualified-search-registries [\"registry.access.redhat.com\",\ \"registry.redhat.io\",\ \"docker.io\"]
ensure registries.conf short-name-mode \"permissive\" ensure registries.conf short-name-mode \"permissive\"
ensure containers.conf runtime \"runc\" ensure containers.conf runtime \"runc\"
ensure containers.conf events_logger \"file\" ensure containers.conf events_logger \"file\"
ensure containers.conf log_driver \"k8s-file\" ensure containers.conf log_driver \"k8s-file\"
ensure containers.conf network_backend \"cni\" ensure containers.conf network_backend \"cni\"
if ! grep \"NET_RAW\" containers.conf > /dev/null
then
sed -i '/^default_capabilities/a \
"NET_RAW",' containers.conf
fi
if ! grep \"SYS_CHROOT\" containers.conf > /dev/null
then
sed -i '/^default_capabilities/a \
"SYS_CHROOT",' containers.conf
fi
else else
ensure registries.conf unqualified-search-registries [\"docker.io\"] ensure registries.conf unqualified-search-registries [\"registry.access.redhat.com\",\ \"registry.redhat.io\",\ \"docker.io\"]
ensure registries.conf short-name-mode \"enforcing\" ensure registries.conf short-name-mode \"enforcing\"
ensure containers.conf runtime \"crun\" ensure containers.conf runtime \"crun\"
fi fi
[ `grep "keyctl" seccomp.json | wc -l` == 0 ] && sed -i '/\"kill\",/i \ [ `grep \"keyctl\", seccomp.json | wc -l` == 0 ] && sed -i '/\"kill\",/i \
"keyctl",' seccomp.json "keyctl",' seccomp.json
sed -i '/\"socketcall\",/i \ [ `grep \"socket\", seccomp.json | wc -l` == 0 ] && sed -i '/\"socketcall\",/i \
"socket",' seccomp.json "socket",' seccomp.json
if ! grep \"NET_RAW\" containers.conf > /dev/null rhpkg clone redhat-release
then cd redhat-release
sed -i '/^default_capabilities/a \ rhpkg switch-branch rhel-9.4.0
"NET_RAW",' containers.conf rhpkg prep
fi cp -f redhat-release-*/RPM-GPG* ../
cd -
rm -rf redhat-release

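The grep-plus-sed one-liners in the script above amount to a guarded in-place insertion into seccomp.json; a standalone sketch of the same idiom (file name assumed to be in the current directory):

```
# add a "keyctl" entry immediately before the "kill" entry,
# but only if seccomp.json does not already contain one
if [ "$(grep -c '"keyctl",' seccomp.json)" -eq 0 ]; then
    sed -i '/"kill",/i \
        "keyctl",' seccomp.json
fi
```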
@ -4,17 +4,18 @@
# pick the oldest version on c/image, c/common, c/storage vendored in # pick the oldest version on c/image, c/common, c/storage vendored in
# podman/skopeo/podman. # podman/skopeo/podman.
%global skopeo_branch main %global skopeo_branch main
%global image_branch v5.19.1 %global image_branch v5.32.2
%global common_branch v0.47.4 %global common_branch v0.60.2
%global storage_branch v1.38.2 %global storage_branch v1.55.0
%global shortnames_branch main %global shortnames_branch main
Epoch: 2 Epoch: 2
Name: containers-common Name: containers-common
Version: 1 Version: 1
Release: 38%{?dist}.inferit Release: 93%{?dist}
Summary: Common configuration and documentation for containers Summary: Common configuration and documentation for containers
License: ASL 2.0 License: ASL 2.0
ExclusiveArch: %{go_arches}
BuildRequires: /usr/bin/go-md2man BuildRequires: /usr/bin/go-md2man
Provides: skopeo-containers = %{epoch}:%{version}-%{release} Provides: skopeo-containers = %{epoch}:%{version}-%{release}
Conflicts: %{name} <= 2:1-22 Conflicts: %{name} <= 2:1-22
@ -48,40 +49,24 @@ Source14: https://raw.githubusercontent.com/containers/common/%{common_branch}/d
Source15: https://raw.githubusercontent.com/containers/image/%{image_branch}/docs/containers-auth.json.5.md Source15: https://raw.githubusercontent.com/containers/image/%{image_branch}/docs/containers-auth.json.5.md
Source16: https://raw.githubusercontent.com/containers/image/%{image_branch}/docs/containers-registries.conf.d.5.md Source16: https://raw.githubusercontent.com/containers/image/%{image_branch}/docs/containers-registries.conf.d.5.md
Source17: https://raw.githubusercontent.com/containers/shortnames/%{shortnames_branch}/shortnames.conf Source17: https://raw.githubusercontent.com/containers/shortnames/%{shortnames_branch}/shortnames.conf
Source19: 001-rhel-shortnames-pyxis.conf
Source20: 002-rhel-shortnames-overrides.conf
Source21: RPM-GPG-KEY-redhat-release
Source22: registry.access.redhat.com.yaml
Source23: registry.redhat.io.yaml
#Source24: https://raw.githubusercontent.com/containers/skopeo/%%{skopeo_branch}/default-policy.json #Source24: https://raw.githubusercontent.com/containers/skopeo/%%{skopeo_branch}/default-policy.json
Source24: default-policy.json Source24: default-policy.json
Source25: https://raw.githubusercontent.com/containers/skopeo/%{skopeo_branch}/default.yaml Source25: https://raw.githubusercontent.com/containers/skopeo/%{skopeo_branch}/default.yaml
# FIXME: fix the branch once these are available via regular c/common branch # FIXME: fix the branch once these are available via regular c/common branch
Source26: https://raw.githubusercontent.com/containers/common/main/docs/Containerfile.5.md Source26: https://raw.githubusercontent.com/containers/common/main/docs/Containerfile.5.md
Source27: https://raw.githubusercontent.com/containers/common/main/docs/containerignore.5.md Source27: https://raw.githubusercontent.com/containers/common/main/docs/containerignore.5.md
Source28: RPM-GPG-KEY-redhat-beta
# scripts used for synchronization with upstream and shortname generation # scripts used for synchronization with upstream and shortname generation
Source100: update.sh Source100: update.sh
Source101: update-vendored.sh Source101: update-vendored.sh
Source102: pyxis.sh Source102: pyxis.sh
%global aardvark_dns_version v1.0.3
#%%global aardvark_dns_branch v1.0.1-rhel
%global aardvark_dns_commit0 a92337b08fbd88c9eb10c1a5ebce2bf61aa59a7b
%global aardvark_dns_shortcommit0 %(c=%{aardvark_dns_commit0}; echo ${c:0:7})
%if 0%{?aardvark_dns_branch:1}
Source200: https://github.com/containers/aardvark-dns/tarball/%{aardvark_dns_commit0}/%{aardvark_dns_branch}-%{aardvark_dns_shortcommit0}.tar.gz
%else
Source200: https://github.com/containers/aardvark-dns/archive/%{aardvark_dns_commit0}/aardvark-dns-%{aardvark_dns_version}-%{aardvark_dns_shortcommit0}.tar.gz
%endif
Source201: https://github.com/containers/aardvark-dns/releases/download/%{aardvark_dns_version}/aardvark-dns-%{aardvark_dns_version}-vendor.tar.gz
%global netavark_version v1.0.3
#%%global netavark_branch v1.0.1-rhel
%global netavark_commit0 ec7efb85ef90db4a14c07cb003b65491f7eb4edf
%global netavark_shortcommit0 %(c=%{netavark_commit0}; echo ${c:0:7})
%if 0%{?netavark_branch:1}
Source300: https://github.com/containers/netavark/tarball/%{netavark_commit0}/%{netavark_branch}-%{netavark_shortcommit0}.tar.gz
%else
Source300: https://github.com/containers/netavark/archive/%{netavark_commit0}/netavark-%{netavark_version}-%{netavark_shortcommit0}.tar.gz
%endif
Source301: https://github.com/containers/netavark/releases/download/%{netavark_version}/netavark-%{netavark_version}-vendor.tar.gz
%description %description
This package contains common configuration files and documentation for container This package contains common configuration files and documentation for container
tools ecosystem, such as Podman, Buildah and Skopeo. tools ecosystem, such as Podman, Buildah and Skopeo.
@ -90,127 +75,28 @@ It is required because the most of configuration files and docs come from projec
which are vendored into Podman, Buildah, Skopeo, etc. but they are not packaged which are vendored into Podman, Buildah, Skopeo, etc. but they are not packaged
separately. separately.
%package -n aardvark-dns
Version: 1.0.1
Release: 38%{?dist}.inferit
URL: https://github.com/containers/aardvark-dns
Summary: Authoritative DNS server for A/AAAA container records
License: ASL 2.0 and BSD and MIT
BuildRequires: cargo
BuildRequires: git-core
BuildRequires: make
BuildRequires: rust-srpm-macros
BuildRequires: rust-toolset
#ExclusiveArch: %%{rust_arches}
ExclusiveArch: aarch64 ppc64le s390x x86_64
%description -n aardvark-dns
%{summary}
Forwards other request to configured resolvers.
Read more about configuration in `src/backend/mod.rs`.
%package -n netavark
Version: 1.0.1
Release: 38%{?dist}.inferit
URL: https://github.com/containers/netavark
Summary: OCI network stack
License: ASL 2.0 and BSD and MIT
BuildRequires: cargo
BuildRequires: make
BuildRequires: rust-srpm-macros
BuildRequires: git-core
BuildRequires: /usr/bin/go-md2man
Recommends: aardvark-dns
Provides: container-network-stack = 2
BuildRequires: rust-toolset
#ExclusiveArch: #%%{rust_arches}
ExclusiveArch: aarch64 ppc64le s390x x86_64
%description -n netavark
%{summary}
Netavark is a rust based network stack for containers. It is being
designed to work with Podman but is also applicable for other OCI
container management applications.
Netavark is a tool for configuring networking for Linux containers.
Its features include:
* Configuration of container networks via JSON configuration file
* Creation and management of required network interfaces,
including MACVLAN networks
* All required firewall configuration to perform NAT and port
forwarding as required for containers
* Support for iptables and firewalld at present, with support
for nftables planned in a future release
* Support for rootless containers
* Support for IPv4 and IPv6
* Support for container DNS resolution via aardvark-dns.
%prep %prep
tar fx %{SOURCE200}
pushd aardvark-dns-%{aardvark_dns_commit0}
tar fx %{SOURCE201}
mkdir -p .cargo
cat >.cargo/config << EOF
[source.crates-io]
replace-with = "vendored-sources"
[source.vendored-sources]
directory = "vendor"
EOF
popd
tar fx %{SOURCE300}
pushd netavark-%{netavark_commit0}
tar fx %{SOURCE301}
mkdir -p .cargo
cat >.cargo/config << EOF
[source.crates-io]
replace-with = "vendored-sources"
[source.vendored-sources]
directory = "vendor"
EOF
popd
%build %build
%if 0%{?build_rustflags:1}
export RUSTFLAGS="%{build_rustflags}"
%endif
pushd aardvark-dns-%{aardvark_dns_commit0}
%__scm_setup_git -q
%make_build build
popd
pushd netavark-%{netavark_commit0}
%__scm_setup_git -q
%make_build build
pushd docs
go-md2man -in netavark.1.md -out netavark.1
popd
%{__make} DESTDIR=%{buildroot} PREFIX=%{_prefix} install
popd
%install %install
pushd aardvark-dns-%{aardvark_dns_commit0} install -dp %{buildroot}%{_sysconfdir}/containers/{certs.d,oci/hooks.d,systemd,registries.d,registries.conf.d}
%{__make} DESTDIR=%{buildroot} PREFIX=%{_prefix} install install -dp %{buildroot}%{_datadir}/containers/systemd
popd
pushd netavark-%{netavark_commit0}
%{__make} DESTDIR=%{buildroot} PREFIX=%{_prefix} install
popd
install -dp %{buildroot}%{_sysconfdir}/containers/{certs.d,oci/hooks.d,registries.d,registries.conf.d}
install -m0644 %{SOURCE1} %{buildroot}%{_sysconfdir}/containers/storage.conf install -m0644 %{SOURCE1} %{buildroot}%{_sysconfdir}/containers/storage.conf
install -m0644 %{SOURCE5} %{buildroot}%{_sysconfdir}/containers/registries.conf install -m0644 %{SOURCE5} %{buildroot}%{_sysconfdir}/containers/registries.conf
install -m0644 %{SOURCE17} %{buildroot}%{_sysconfdir}/containers/registries.conf.d/000-shortnames.conf install -m0644 %{SOURCE17} %{buildroot}%{_sysconfdir}/containers/registries.conf.d/000-shortnames.conf
install -m0644 %{SOURCE19} %{buildroot}%{_sysconfdir}/containers/registries.conf.d/001-rhel-shortnames.conf
install -m0644 %{SOURCE20} %{buildroot}%{_sysconfdir}/containers/registries.conf.d/002-rhel-shortnames-overrides.conf
# for signature verification # for signature verification
%if !0%{?rhel} || 0%{?centos} %if !0%{?rhel} || 0%{?centos}
install -dp %{buildroot}%{_sysconfdir}/pki/rpm-gpg install -dp %{buildroot}%{_sysconfdir}/pki/rpm-gpg
install -m0644 %{SOURCE21} %{buildroot}%{_sysconfdir}/pki/rpm-gpg
install -m0644 %{SOURCE28} %{buildroot}%{_sysconfdir}/pki/rpm-gpg
%endif %endif
install -dp %{buildroot}%{_sysconfdir}/containers/registries.d install -dp %{buildroot}%{_sysconfdir}/containers/registries.d
install -m0644 %{SOURCE22} %{buildroot}%{_sysconfdir}/containers/registries.d
install -m0644 %{SOURCE23} %{buildroot}%{_sysconfdir}/containers/registries.d
install -m0644 %{SOURCE24} %{buildroot}%{_sysconfdir}/containers/policy.json install -m0644 %{SOURCE24} %{buildroot}%{_sysconfdir}/containers/policy.json
install -dp %{buildroot}%{_sharedstatedir}/containers/sigstore install -dp %{buildroot}%{_sharedstatedir}/containers/sigstore
install -m0644 %{SOURCE25} %{buildroot}%{_sysconfdir}/containers/registries.d/default.yaml install -m0644 %{SOURCE25} %{buildroot}%{_sysconfdir}/containers/registries.d/default.yaml
@ -243,6 +129,19 @@ ln -s %{_sysconfdir}/pki/entitlement %{buildroot}%{_datadir}/rhel/secrets/etc-pk
ln -s %{_sysconfdir}/rhsm %{buildroot}%{_datadir}/rhel/secrets/rhsm ln -s %{_sysconfdir}/rhsm %{buildroot}%{_datadir}/rhel/secrets/rhsm
ln -s %{_sysconfdir}/yum.repos.d/redhat.repo %{buildroot}%{_datadir}/rhel/secrets/redhat.repo ln -s %{_sysconfdir}/yum.repos.d/redhat.repo %{buildroot}%{_datadir}/rhel/secrets/redhat.repo
# ship preconfigured /etc/containers/registries.d/ files with containers-common - #1903813
cat <<EOF > %{buildroot}%{_sysconfdir}/containers/registries.d/registry.access.redhat.com.yaml
docker:
registry.access.redhat.com:
sigstore: https://access.redhat.com/webassets/docker/content/sigstore
EOF
cat <<EOF > %{buildroot}%{_sysconfdir}/containers/registries.d/registry.redhat.io.yaml
docker:
registry.redhat.io:
sigstore: https://registry.redhat.io/containers/sigstore
EOF
%files %files
%dir %{_sysconfdir}/containers %dir %{_sysconfdir}/containers
%dir %{_sysconfdir}/containers/certs.d %dir %{_sysconfdir}/containers/certs.d
@ -250,12 +149,19 @@ ln -s %{_sysconfdir}/yum.repos.d/redhat.repo %{buildroot}%{_datadir}/rhel/secret
%dir %{_sysconfdir}/containers/oci %dir %{_sysconfdir}/containers/oci
%dir %{_sysconfdir}/containers/oci/hooks.d %dir %{_sysconfdir}/containers/oci/hooks.d
%dir %{_sysconfdir}/containers/registries.conf.d %dir %{_sysconfdir}/containers/registries.conf.d
%dir %{_sysconfdir}/containers/systemd
%dir %{_datadir}/containers/systemd
%if !0%{?rhel} || 0%{?centos}
%{_sysconfdir}/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
%{_sysconfdir}/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta
%endif
%config(noreplace) %{_sysconfdir}/containers/policy.json %config(noreplace) %{_sysconfdir}/containers/policy.json
%config(noreplace) %{_sysconfdir}/containers/registries.d/default.yaml
%config(noreplace) %{_sysconfdir}/containers/storage.conf %config(noreplace) %{_sysconfdir}/containers/storage.conf
%config(noreplace) %{_sysconfdir}/containers/registries.conf %config(noreplace) %{_sysconfdir}/containers/registries.conf
%config(noreplace) %{_sysconfdir}/containers/registries.conf.d/*.conf %config(noreplace) %{_sysconfdir}/containers/registries.conf.d/*.conf
%config(noreplace) %{_sysconfdir}/containers/registries.d/*.yaml %config(noreplace) %{_sysconfdir}/containers/registries.d/default.yaml
%config(noreplace) %{_sysconfdir}/containers/registries.d/registry.redhat.io.yaml
%config(noreplace) %{_sysconfdir}/containers/registries.d/registry.access.redhat.com.yaml
%ghost %{_sysconfdir}/containers/containers.conf %ghost %{_sysconfdir}/containers/containers.conf
%dir %{_sharedstatedir}/containers/sigstore %dir %{_sharedstatedir}/containers/sigstore
%{_mandir}/man5/* %{_mandir}/man5/*
@ -266,134 +172,277 @@ ln -s %{_sysconfdir}/yum.repos.d/redhat.repo %{buildroot}%{_datadir}/rhel/secret
%dir %{_datadir}/rhel/secrets %dir %{_datadir}/rhel/secrets
%{_datadir}/rhel/secrets/* %{_datadir}/rhel/secrets/*
%files -n aardvark-dns %changelog
%license aardvark-dns-%{aardvark_dns_commit0}/LICENSE * Thu Oct 17 2024 Jindrich Novy <jnovy@redhat.com> - 2:1-93
%dir %{_libexecdir}/podman - rebuild
%{_libexecdir}/podman/aardvark-dns - Resolves: RHEL-62937
%files -n netavark * Tue Aug 27 2024 Jindrich Novy <jnovy@redhat.com> - 2:1-92
%license netavark-%{netavark_commit0}/LICENSE - update vendored components
%dir %{_libexecdir}/podman - Related: RHEL-27608
%{_libexecdir}/podman/netavark
%{_mandir}/man1/netavark.1*
%changelog * Wed Aug 07 2024 Jindrich Novy <jnovy@redhat.com> - 2:1-91
* Thu Dec 21 2023 Sergey Cherevko <s.cherevko@msvsphere-os.ru> - 2:1-38.inferit - Update shortnames and vendored components
- MSVSphere debranding - Related: RHEL-27608
- Rebuilt for MSVSphere 8.9
* Tue Mar 14 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-38 * Fri Apr 05 2024 Lokesh Mandvekar <lsm5@redhat.com> - 2:1-90
- update vendored components and configuration files - Bump release to way higher than rhel 8.10 to preserve upgrade path
- Related: #2176055 - Related: Jira:RHEL-31950
* Tue Oct 18 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-36 * Wed Feb 14 2024 Jindrich Novy <jnovy@redhat.com> - 2:1-62
- update vendored components and configuration files - regenerate shortnames from Pyxis and update vendored components
- Related: #2129766 - Related: Jira:RHEL-2112
* Wed Jul 13 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-35 * Thu Feb 08 2024 Jindrich Novy <jnovy@redhat.com> - 2:1-61
- update vendored components and configuration files - update vendored components
- Related: #2061390 - Related: Jira:RHEL-2112
* Mon Jun 27 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-34 * Tue Jan 02 2024 Jindrich Novy <jnovy@redhat.com> - 2:1-60
- update shortnames and be sure to remove rhel-els - Update vendored components
- Related: #2061390 - Related: Jira:RHEL-2112
* Thu Jun 09 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-33 * Wed Oct 11 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-59
- additional fix for unqualified registries - fix shortnames
- Related: #2061390 - Related: Jira:RHEL-2112
* Wed May 11 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-26 * Thu Sep 14 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-58
- update vendored components and configuration files - implement GPG auto updating mechanism from redhat-release
- Related: #2061390 - Resolves: #RHEL-3164
* Wed Sep 13 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-57
- update GPG keys to the current content of redhat-release
- Resolves: #RHEL-3164
* Fri Aug 25 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-56
- update vendored components and shortnames
- Related: #2176063
* Wed Jul 19 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-55
- fix vendoring script
- Related: #2176063
* Mon Jul 10 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-54
- update vendored components
- Related: #2176063
* Tue Jun 20 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-53
- rebuild
- Resolves: #2178263
* Fri Apr 21 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-52
- update vendored components
- Related: #2176063
* Fri Apr 01 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-25 * Fri Mar 24 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-51
- regenerate shortnames, vendored components + fix pyxis script
- Related: #2176063
* Wed Feb 22 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-50
- improve shortnames generation
- Related: #2124478
* Tue Jan 31 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-49
- add missing systemd directories
- Related: #2124478
* Mon Jan 30 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-48
- update vendored components and configuration files - update vendored components and configuration files
- Related: #2061390 - Related: #2124478
* Thu Jan 05 2023 Jindrich Novy <jnovy@redhat.com> - 2:1-47
- update vendored components, regenerate pyxis
- Related: #2124478
* Thu Nov 10 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-46
- The NET_RAW capability was required in RHEL8 but no longer required in RHEL9
- Resolves: #2141531
* Mon Feb 28 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-23 * Fri Oct 21 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-45
- add beta GPG key
- Related: #2124478
* Tue Aug 23 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-44
- exclude non-go arches because of go-md2man
- Related: #2061316
* Tue Aug 23 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-43
- add beta keys to default-policy.json
- Related: #2061316
* Mon Aug 08 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-42
- update shortnames
- Related: #2061316
* Wed Aug 03 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-41
- drop aardvark-dns and netavark - packaged separately
- update vendored components
- Related: #2061316
* Mon Jun 27 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-40
- remove rhel-els and update shortnames
- Related: #2061316
* Tue Jun 14 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-39
- update shortnames
- Related: #2061316
* Thu Jun 09 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-38
- fix unqualified registries in registries.conf generation code
- Related: #2088139
* Mon May 23 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-37
- update unqualified registries list
- Related: #2088139
* Mon May 09 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-36
- update aardvark-dns and netavark to 1.0.3
- update vendored components
- Related: #2061316
* Wed Apr 20 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-35
- add missing man pages from Fedora
- Related: #2061316
* Wed Apr 06 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-34
- update to netavark and aardvark-dns 1.0.2
- update vendored components
- Related: #2061316
* Mon Mar 21 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-33
- allow consuming aardvark-dns and netavark from upstream branches
- Related: #2061316
* Mon Feb 28 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-32
- build rust packages with RUSTFLAGS set to make ExecShield happy (Lokesh Mandvekar)
- Related: #2000051
* Mon Feb 28 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-31
- update to netavark and aardvark-dns 1.0.1 - update to netavark and aardvark-dns 1.0.1
- Related: #2001445 - Related: #2000051
* Wed Feb 23 2022 Lokesh Mandvekar <lsm5@redhat.com> - 2:1-30
- archful package should conflict with older noarch package
- Related: #2000051
* Tue Feb 22 2022 Lokesh Mandvekar <lsm5@redhat.com> - 2:1-29
- consistent release tags for all packages
- Related: #2000051
* Wed Feb 23 2022 Lokesh Mandvekar <lsm5@redhat.com> - 2:1-22 * Tue Feb 22 2022 Lokesh Mandvekar <lsm5@redhat.com> - 2:1-28
- build rust packages with RUSTFLAGS set to make ExecShield happy - main package should obsolete noarch versions upto 2:1-22
- bump release tag by 3 for easier cherry-picking from rhel8 stream - Related: #2000051
- Related: #2001445
* Mon Feb 21 2022 Lokesh Mandvekar <lsm5@redhat.com> - 2:1-19 * Mon Feb 21 2022 Lokesh Mandvekar <lsm5@redhat.com> - 2:1-27
- do not specify infra_image in containers.conf - do not specify infra_image in containers.conf
- needed to resolve gating test failures - needed to resolve gating test failures
- Related: #2001445 - Related: #2000051
* Fri Feb 18 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-18 * Sat Feb 19 2022 Lokesh Mandvekar <lsm5@redhat.com> - 2:1-26
- aardvark-dns built for same arches as netavark
- Related: #2000051
* Sat Feb 19 2022 Lokesh Mandvekar <lsm5@redhat.com> - 2:1-25
- build netavark only for podman's arches
- i686 can't find go-md2man which causes the build to fail otherwise
- Related: #2000051
* Fri Feb 18 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-24
- update to netavark-1.0.0 and aardvark-dns-1.0.0 - update to netavark-1.0.0 and aardvark-dns-1.0.0
- Related: #2001445 - Related: #2000051
* Thu Feb 10 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-17 * Thu Feb 17 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-23
- update vendored components and configuration files - package aarvark-dns and netavark as part of the containers-common
- Related: #2001445 - Related: #2000051
* Thu Feb 10 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-16 * Thu Feb 17 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-22
- sync vendored components - update shortnames and vendored components
- Related: #2001445 - Related: #2000051
* Thu Feb 10 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-15 * Wed Feb 16 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-21
- update vendored components and configuration files - containers.conf should contain network_backend = "cni" in RHEL8.6
- Related: #2001445 - Related: #2000051
* Wed Feb 09 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-20
- update shortname aliases from upstream
- Related: #2000051
* Fri Feb 04 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-14 * Fri Feb 04 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-19
- sync vendored components - sync vendored components
- Related: #2001445 - Related: #2000051
* Fri Feb 04 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-13 * Fri Feb 04 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-18
- sync vendored components - sync vendored components
- Related: #2001445 - Related: #2000051
* Mon Jan 17 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-17
- sync shortname aliases via Pyxis
- Related: #2000051
* Fri Jan 21 2022 Jindrich Novy <jnovy@redhat.com> - 2:1-12 * Fri Dec 10 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-16
- update shortnames from Pyxis - do not hardcode log_driver = "journald" and events_logger = "journald"
- Related: #2001445 for RHEL9 and leave the rootful/rootless behaviour change based on
internal logic
- Related: #2000051
* Fri Dec 10 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-11 * Thu Dec 09 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-15
- do not allow broken content from Pyxis to land in shortnames.conf - do not allow broken content from Pyxis to land in shortnames.conf
- Related: #2001445 - Related: #2000051
* Wed Dec 08 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-10 * Wed Dec 08 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-14
- sync vendored components - update vendored component versions
- update shortnames from Pyxis - sync shortname aliases via Pyxis
- Related: #2001445 - Related: #2000051
* Wed Dec 01 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-9 * Tue Nov 30 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-13
- use log_driver = "journald" and events_logger = "journald" for RHEL9 - use log_driver = "journald" and events_logger = "journald" for RHEL9
- Related: #2001445 - Related: #2000051
* Tue Nov 16 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-8 * Tue Nov 16 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-12
- consume seccomp.json from the oldest vendored version of c/common, - consume seccomp.json from the oldest vendored version of c/common,
not main branch not main branch
- Related: #2001445 - Related: #2000051
* Fri Nov 12 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-11
- use ubi8/pause as ubi9/pause is not available yet
- Related: #2000051
* Mon Nov 15 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-7 * Wed Nov 10 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-10
- update vendored components - update vendored components
- Related: #2001445 - Related: #2000051
* Wed Oct 13 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-6 * Tue Nov 02 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-9
- sync vendored components - make log_driver = "k8s-file" default in containers.conf
- Related: #2001445 - Related: #2000051
* Fri Oct 01 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-8
- perform only sanity/installability tests for now
- Related: #2000051
* Wed Sep 29 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-5 * Wed Sep 29 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-7
- update to the new vendored components - update to the new vendored components
- Related: #2001445 - Related: #2000051
* Fri Sep 24 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-4 * Wed Sep 29 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-6
- add gating.yaml
- Related: #2000051
* Fri Sep 24 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-5
- update to the new vendored components - update to the new vendored components
- Related: #2001445 - Related: #2000051
* Fri Sep 10 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-4
- fix updating scripts
- Related: #2000051
* Fri Sep 10 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-3 * Thu Sep 09 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-3
- update to the new vendored components - update to the new vendored components
- Related: #2001445 - Related: #2000051
* Wed Aug 11 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-2 * Fri Aug 20 2021 Lokesh Mandvekar <lsm5@fedoraproject.org> - 2:1-2
- synchronize config files for RHEL-8.5 - bump configs to latest versions
- Related: #1934415 - replace ubi9 references with ubi8
- Related: #1970747
* Wed Aug 11 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-1 * Wed Aug 11 2021 Jindrich Novy <jnovy@redhat.com> - 2:1-1
- initial import - initial import
- Related: #1934415 - Related: #1970747
