Deploying and Using the Podman Run Mechanism

Note

Support for Podman as a run mechanism within Jacamar CI was introduced in release v0.19.0. If you encounter any issues or have suggestions, we would appreciate the feedback.

Jacamar CI now supports using Podman to run jobs within containers in the user’s namespace, while preserving support for existing executor types (e.g., shell and flux). To understand how this functions, it is first important to note that every job Jacamar CI runs is based upon a script/environment/arguments provided by the GitLab Custom Executor. The user application jacamar (traditionally run after jacamar-auth has authorized the job and dropped permissions) combines the runner-generated script with commands derived from the executor type and passes them to a clean Bash login shell. The new Podman run_mechanism modifies this process to optionally leverage a podman run ... command when the user provides an image in their job:

../../_images/run_mechanism_structure.svg
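The decision pictured above can be sketched roughly as follows. This is a simplified shell illustration only, not Jacamar's actual implementation (which lives in the Go binary and also handles volumes, credentials, and entrypoints, as shown in the full command below):

```shell
# Simplified sketch only -- not Jacamar's real code. It illustrates the
# branch: with a user-supplied image, wrap the runner-generated script in
# `podman run`; without one, fall back to a clean Bash login shell.
build_command() {
    script="$1"
    image="$2"
    if [ -n "$image" ]; then
        echo "podman run --rm $image $script"
    else
        echo "/bin/bash -l $script"
    fi
}

build_command '/tmp/build_script.bash'
# -> /bin/bash -l /tmp/build_script.bash
build_command '/tmp/build_script.bash' 'docker.io/library/debian:latest'
# -> podman run --rm docker.io/library/debian:latest /tmp/build_script.bash
```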

With this new workflow we can ensure that runner-generated scripts are always executed within the defined container while still potentially submitting them to the desired scheduling system. The mechanism also handles the distinct runner-generated stages (see the official docs) by ensuring an administratively defined runner image is used where appropriate, mounting all required volumes to preserve stateful information, and automating registry credential management with the CI_JOB_TOKEN. The resulting Podman command will look something like:

/usr/bin/podman run \
    --volume '<data-dir>/builds/runner/000:<data-dir>/builds/runner/000' \
    --volume '<data-dir>/cache/group/project:<data-dir>/cache/group/project' \
    --volume '<data-dir>/scripts/runner/000/group/project/<job-id>:<data-dir>/scripts/runner/000/group/project/<job-id>' \
    --authfile '<data-dir>/scripts/runner/000/group/project/<job-id>/auth.json' \
    --entrypoint '["/bin/bash","-l","-c"]' \
    --rm \
    --tty \
    docker.io/library/debian:latest \
    '<data-dir>/scripts/runner/000/group/project/<job-id>/build_script.bash'

Examining the above command, we can note several key elements:

  • All key directories (builds, cache, and scripts) within the data_dir are mounted.

  • The authfile is created on a per-job basis and provides access to the GitLab server registry using the CI_JOB_TOKEN.

    • The only time this isn’t used is when the user has supplied their own via the REGISTRY_AUTH_FILE variable. It should be a path to a valid authorization file; it is advisable to use file-type CI/CD variables for this purpose.

  • The image, in this example docker.io/library/debian:latest, is provided in the CI job using the image key.

  • The build_script.bash, as with all other job scripts, is generated by the GitLab-Runner.
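Should you need to craft your own authorization file (for example, to supply via REGISTRY_AUTH_FILE), Podman expects the standard containers-auth.json format. A minimal hand-written example, using a placeholder registry and credentials (the auth field is the Base64 encoding of user:token):

```json
{
  "auths": {
    "registry.example.com": {
      "auth": "dXNlcjp0b2tlbg=="
    }
  }
}
```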

Note

We currently do not recommend relying solely on the Podman run_mechanism to enforce isolation between jobs. We strongly advise that you utilize the standard authorization and downscoping features found in the jacamar-auth application in addition to the container runtime in userspace.

Configuration

Note

The Podman run_mechanism is only observed by the jacamar application and is not utilized during any authorization steps. It relies on a user-namespace-ready Podman installation already being deployed, with all supporting configuration established for your specific environment.

[general] - Table

run_mechanism
    Defines the mechanism used to execute all runner-generated scripts rather than relying solely on the user’s existing Bash shell. Whether it is applied is dictated by the individual mechanism; for example, podman is used when the user has supplied an image.

force_mechanism
    Requires that the defined mechanism be used for all jobs, ignoring the user settings/behaviors that normally trigger its usage.

[general]
  run_mechanism = "podman"
  force_mechanism = false
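If you instead want every job containerized, even when no image is supplied, force_mechanism can be combined with a fallback default_image. A sketch (image value illustrative):

```toml
[general]
  run_mechanism = "podman"
  # Ignore the usual trigger (a user-supplied image) and containerize every job.
  force_mechanism = true

[general.podman]
  # Jobs that do not declare an image will fall back to this one.
  default_image = "registry.access.redhat.com/ubi9/ubi-minimal:latest"
```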

[general.podman] - Table

application_path
    Full path to the Podman application, used in constructing all commands. When not provided, the application found in the user’s PATH will be used.

runner_image
    Helper image that will be used for standard runner-managed actions (i.e., Git, artifacts, and caching). If not provided, the host system will be used for these steps instead.

runner_options
    Additional Podman-Run options that are used with the runner_image.

runner_entry_point
    Override the default ENTRYPOINT used with the runner_image.

runner_pull_policy
    Override the default image pull-policy specifically for the runner_image.

default_image
    This image is used when no user-provided image is found (only observed when force_mechanism has been enabled).

custom_options
    Additional Podman-Run options that are used in the user’s job steps.

user_entry_point
    Override the default ENTRYPOINT in the user’s job steps.

step_script_only
    Limits the use of containers to the step_script (the user-defined script combining the before_script + script of a CI job). This is useful in cases where a deployment may limit the usage of container runtimes to the compute environment.

disable_container_removal
    Prevents the default behavior of using the --rm option in all generated Podman-Run commands.

image_allowlist
    When defined, only images that match this list of regular expressions will be allowed.

user_volume_variable
    Defines the prefix for a CI variable users can leverage to mount custom volumes at runtime.

prep_script
    Allows for an admin-defined script that is run during the prepare_exec stage. The aim is to allow direct influence over a user’s configuration/directories before any Podman-related commands are run.

archive_format
    How user-defined images will be archived after being pulled. If set to none, these steps will be skipped. See the official Podman documentation for details covering the available formats. By default Jacamar will attempt to use the best option for your deployment; for example, when a batch executor is encountered the docker-archive format will be used, while for basic shell interactions the none option will be preferred.

volume_labels
    When enabled, volumes automatically added to commands will use the appropriate SELinux label (:z, :Z).

disable_user_args
    Ignores any user-defined arguments found in the JACAMAR_CI_PODMAN_ARGS variable.

[general.podman]
  application_path = "/bin/podman"
  runner_image = "registry.gitlab.com/ecp-ci/jacamar-ci/ubi9-runner:16.6.2"
  runner_entry_point = ["/bin/bash", "-l", "-c"]
  default_image = "registry.access.redhat.com/ubi9/ubi-minimal:latest"
  custom_options = []
  user_entry_point = []
  step_script_only = false
  disable_container_removal = false
  runner_pull_policy = "always"
  image_allowlist = ['^registry\.example\.com/group/.*$']
  user_volume_variable = "PODMAN_VOLUMES"
  archive_format = "none"
  volume_labels = false
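Since image_allowlist entries are regular expressions, it is worth sanity-checking a pattern from the shell before deploying it. For example (registry/group names here are illustrative):

```shell
# Exercise an extended regular expression against candidate image
# references using grep -E, mirroring an allowlist entry.
pattern='^registry\.example\.com/group/.*$'

echo 'registry.example.com/group/app:latest' | grep -Eq "$pattern" \
    && echo 'allowed'
echo 'docker.io/library/debian:latest' | grep -Eq "$pattern" \
    || echo 'denied'
```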

Example

If you already have a functional Podman deployment available (with all subuid/subgid mappings defined and any user configurations identified), you can add the following to your existing Jacamar CI test deployment:

[general]
run_mechanism = "podman"

[general.podman]
# If not defined, related scripts will be run in the user's shell.
runner_image = "registry.gitlab.com/ecp-ci/jacamar-ci/ubi9-runner:16.6.2"

# Allowing users to define their own mounted volumes is recommended but optional.
user_volume_variable = "PODMAN_VOLUMES"

User-dictated scripts (e.g., those from the before_script, script, and after_script) will all be run using the image defined at the job level. All other aspects of the job (e.g., Git, artifacts, and caching) will use the supplied runner_image.

To observe a simple example you can use the following job:

stages:
  - host
  - container

after_script:
  - whoami

hello-host:
  stage: host
  script:
    # This job runs on the host system because no image is provided.
    - cat /etc/os-release
    - date >> date.txt
  artifacts:
    paths:
      - date.txt

hello-container:
  stage: container
  image: registry.access.redhat.com/ubi9/ubi-minimal:latest
  # variables:
    # Adding a variable like this will cause the string to be added to the
    # --volume argument in the generated run command.
    # PODMAN_VOLUMES_0: ???:/opt/scratch:z
  script:
    # Because we have defined an image our job will instead run using Podman.
    - cat /etc/os-release
    - cat date.txt

Examining the second job (which uses the Podman mechanism), we can identify several important aspects:

../../_images/prep_runner_image.png
  • We are still using a shell executor, meaning the containers will run in the user’s environment on the machine hosting the runner.

  • In our test runner we still rely on setuid to drop permissions, and all aspects of the job observe this as you would expect. No Podman-related commands are run until after authorization and downscoping have taken place.

  • Unbeknownst to the user, every runner-generated script is executed within a container. This is useful in cases where you want to avoid having any aspect of the job run directly on the host system, but it requires additional setup and increases the maintenance burden.

  • Build directories are still created in the defined data_dir and mounted into the container at runtime, following the same folder paths.

  • During the prepare_exec stage the user-defined image will be pulled and potentially archived, based upon the archive_format configuration.

Finally, we can clearly see that the user’s script is running in our UBI9 image and, due to the correct mappings, features like artifacts/caching are available with no additional effort required.

../../_images/job_user_image.png

Runner Image

Important

Use of the runner_image is optional, and there is no planned image release from the Jacamar CI repository. Please either use an officially supported GitLab container release or follow the example image documented below to generate your own.

There are a number of standard actions in every GitLab CI job (i.e., using Git, managing artifacts and caches), and when the Podman run_mechanism is used we can run all of these within a distinct container (defined via runner_image in the configuration). As an administrator, you have to define what this image is and ensure it has the following applications installed:

  • Bash

  • Git

  • Git-LFS

  • GitLab-Runner

  • Hostname

The easiest option is to use an image from the official registry (e.g., registry.gitlab.com/gitlab-org/gitlab-runner:ubi-fips-v16.6.1); however, be warned that this image is rather large, as it contains all the helper images/files found in the official runner RPM/DEB packages. If you require a different base image or software, or wish to have a smaller image, then you will need to build your own. Here is an example Containerfile with the minimum requirements:

FROM registry.access.redhat.com/ubi9/ubi-minimal:latest as runner

WORKDIR /

# https://docs.gitlab.com/runner/install/linux-repository.html#gpg-signatures-for-package-installation
COPY gitlab-runner.pub.gpg gitlab-runner.pub.gpg

ARG RUNNER_VER=16.6.2

RUN mkdir binaries \
    && curl -LO https://gitlab.com/gitlab-org/gitlab-runner/-/releases/v${RUNNER_VER}/downloads/release.sha256.asc \
    && curl -LO https://gitlab.com/gitlab-org/gitlab-runner/-/releases/v${RUNNER_VER}/downloads/release.sha256 \
    && curl -L -o binaries/gitlab-runner-linux-amd64 https://gitlab.com/gitlab-org/gitlab-runner/-/releases/v${RUNNER_VER}/downloads/binaries/gitlab-runner-linux-amd64 \
    && gpg --import gitlab-runner.pub.gpg \
    && gpg --verify release.sha256.asc release.sha256 \
    && grep -E "gitlab-runner-linux-amd64$" release.sha256 | sha256sum --check

FROM registry.access.redhat.com/ubi9/ubi-minimal:latest

WORKDIR /
COPY --from=runner /binaries/gitlab-runner-linux-amd64 /usr/bin/gitlab-runner

RUN microdnf update  -y \
    && microdnf install -y \
        git \
        git-lfs \
        hostname \
    && microdnf clean all \
    && chmod +x /usr/bin/gitlab-runner

Define Hostname

Added in version 0.19.2, it is possible to define the container hostname using the JACAMAR_CI_HOSTNAME variable in your CI pipeline:

jobA:
  variables:
    JACAMAR_CI_HOSTNAME: example
  script:
    # This will simply be 'example'.
    - hostname

jobB:
  variables:
    # Without using two $, GitLab will resolve the variable.
    JACAMAR_CI_HOSTNAME: $$HOSTNAME
  script:
    # $HOSTNAME is a special case that is resolved at runtime.
    - hostname

Custom Arguments

Unless disabled by configuration, you can provide custom arguments to the Podman-Run command:

job:
  variables:
    JACAMAR_CI_PODMAN_ARGS: "--workdir /example -u 1000"
  script:
   - make test

Note that it is up to you to ensure that these arguments do not conflict with any defaults required to realize a successful CI/CD job. To help with this, the full podman run ... command is always printed to the job log. Refer to the official Podman documentation for details on all options.