Brik, the portable pipeline standard.
Write once. Run everywhere.
Official Docker images for Brik CI/CD runners.
Pre-built images with all Brik prerequisites (bash 5+, yq, jq, git) and stack-specific tools, eliminating the ~30-40s bootstrap overhead from every CI job.
All images are multi-arch: linux/amd64 and linux/arm64.
- Images are scanned with Grype on every build (blocks on critical CVEs with available fixes)
- Scan results are uploaded to the Security tab for full visibility
- SBOMs are generated with Syft in CycloneDX format
- Images are signed with cosign (keyless, OIDC)
- Weekly rebuilds pick up base image security patches
- Renovate auto-merges digest updates
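Consumers can check the keyless signatures themselves with the cosign CLI before trusting an image. A minimal sketch; the OIDC issuer and certificate identity below are assumptions (they depend on the signing workflow), not values confirmed by this repository:

```shell
# Sketch: verify a keyless cosign signature on a runner image.
# The issuer and identity regexp are assumptions -- check the repository's
# signing workflow for the authoritative values.
image="ghcr.io/getbrik/brik-runner-node:22"
if command -v cosign >/dev/null 2>&1; then
  cosign verify "$image" \
    --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
    --certificate-identity-regexp "github.com/getbrik/"
else
  echo "cosign not installed; skipping verification of $image"
fi
```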
These images bundle the latest available versions of their respective base images and tools (yq, jq, git). Some upstream base images (e.g. node:22-slim, python:3.13-slim) may contain known vulnerabilities that have not yet been patched by their maintainers.
What we control: yq, jq, and git versions are pinned to the latest releases and updated regularly. The build fails on any critical CVE with an available fix.
What we don't control: CVEs in the upstream base images (Alpine, Debian, Ubuntu). These are resolved when the upstream maintainers publish updated images. Weekly rebuilds automatically pick up new patches.
Check the Security tab for the current scan results of every image.
Each image is published with multiple tags:
```
ghcr.io/getbrik/brik-runner-node:22            # stack version (mutable)
ghcr.io/getbrik/brik-runner-node:latest        # latest LTS (mutable)
ghcr.io/getbrik/brik-runner-node:sha-a1b2c3d   # immutable git SHA
ghcr.io/getbrik/brik-runner-node:22@sha256:... # digest pin (most secure)
```
For production pipelines, pin images by digest (`@sha256:...`) to guarantee reproducible builds. Mutable tags like `:22` or `:latest` can change on rebuilds. Use `docker inspect --format='{{index .RepoDigests 0}}' <image>` to retrieve the current digest.
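Once you have a repo-digest reference, the digest can be split off with plain shell string expansion. A small sketch (the digest shown is a made-up placeholder, not a real image digest):

```shell
# A RepoDigests entry as returned by
#   docker inspect --format='{{index .RepoDigests 0}}' ghcr.io/getbrik/brik-runner-node:22
# (placeholder digest for illustration)
ref='ghcr.io/getbrik/brik-runner-node@sha256:1111111111111111111111111111111111111111111111111111111111111111'

repo="${ref%%@*}"   # image repository
digest="${ref##*@}" # content digest

# Compose the tag-plus-digest form: the tag is informational,
# the digest is what Docker actually resolves.
echo "${repo}:22@${digest}"
```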
Every image contains:
- bash (5.x)
- yq (v4.52.5) - YAML processor
- jq (1.8.1) - JSON processor
- git - version control
- curl - HTTP client
Stack images additionally include their respective toolchain (node/npm, python/pip, java/maven, etc.).
The scanning tooling is split into two images based on their runtime requirements:
- analysis -- Python/Ruby runtime, for deep SAST analysis, license compliance, and IaC scanning (semgrep, checkov, scancode, license_finder)
- scanner -- static Go binaries only, fast to pull, for vulnerability scanning, secret detection, Dockerfile linting, and container scanning
The brik-runner-analysis image (~1.7 GB) bundles Python/Ruby-based analysis tools via a multi-stage build:
| Tool | Purpose |
|---|---|
| semgrep | Static analysis (SAST) |
| checkov | Infrastructure-as-Code scanning |
| scancode-toolkit | License and origin detection |
| license_finder | License compliance |
The brik-runner-scanner image (~500 MB) bundles static Go binary tools -- no Python or Ruby runtime:
| Tool | Purpose |
|---|---|
| grype | Vulnerability scanning (SCA) |
| syft | SBOM generation |
| osv-scanner | Open-source vulnerability scanning |
| hadolint | Dockerfile linting |
| gitleaks | Secret/credential leak detection |
| trufflehog | Secret scanning (entropy + patterns) |
| dockle | Docker image best-practice linting |
Pinned versions for all tools are in versions.json.
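The bundled scanners can also be run ad hoc outside CI. A sketch assuming Docker is available locally; the `:1` tag is inferred from the `scanner-1` build target, and the grype flags mirror the CVE-gating policy described above:

```shell
# Sketch: scan the working tree with the scanner image (requires Docker;
# the image tag is an assumption based on the scanner-1 build target).
image="ghcr.io/getbrik/brik-runner-scanner:1"
if command -v docker >/dev/null 2>&1; then
  docker run --rm -v "$(pwd):/src" -w /src "$image" \
    grype dir:. --fail-on critical --only-fixed
else
  echo "docker not available; skipping scan with $image"
fi
```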
Note: The brik runtime is NOT pre-installed. It is cloned at CI time by the shared library's before_script. This decouples image releases from brik releases.
Cloning the runtime at CI time keeps image releases decoupled from brik development, which is the right trade-off during active development.
Once brik reaches a stable release cadence, the runtime will be pre-installed in the images. This will unlock:
- Zero-config local usage -- `docker run ghcr.io/getbrik/brik-runner-node:22 brik run stage build` with no setup, no clone, no CI platform required.
- Fully offline pipelines -- images become self-contained, with no network dependency at runtime.
- Freemium / Enterprise tiers -- community images ship with brik core; enterprise images could include additional modules, caching layers, or premium integrations.
**GitLab CI**

```yaml
# .gitlab-ci.yml
variables:
  # Pin by digest for reproducible builds: ghcr.io/getbrik/brik-runner-node:22@sha256:...
  BRIK_CI_IMAGE: "ghcr.io/getbrik/brik-runner-node:22"

include:
  - project: 'brik/gitlab-templates'
    ref: v1
    file: '/templates/pipeline.yml'
```

Or override per-job:

```yaml
build:
  image: ghcr.io/getbrik/brik-runner-node:22  # or :22@sha256:... for digest pin
  script:
    - brik run stage build
```

**Jenkins**

```groovy
pipeline {
    agent {
        docker {
            // Pin by digest for reproducible builds:
            // image 'ghcr.io/getbrik/brik-runner-java:21@sha256:...'
            image 'ghcr.io/getbrik/brik-runner-java:21'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'brik run stage build'
            }
        }
    }
}
```

**GitHub Actions**

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      # Pin by digest for reproducible builds:
      # image: ghcr.io/getbrik/brik-runner-node:22@sha256:...
      image: ghcr.io/getbrik/brik-runner-node:22
    steps:
      - uses: actions/checkout@v4
      - run: brik run stage build
```

**Local**

```bash
docker run --rm -v "$(pwd):/workspace" -w /workspace \
  ghcr.io/getbrik/brik-runner-node:22 \
  brik run stage build
```

**Building locally**

```bash
# Build all images (multi-arch, no push)
./scripts/build-local.sh

# Build and load into local Docker (native arch only)
./scripts/build-local.sh --load

# Build specific stacks (expands to all versions)
./scripts/build-local.sh --load node python

# Build specific targets
./scripts/build-local.sh --load analysis-1 scanner-1
```

| Option | Description |
|---|---|
| (no args) | Build all images (multi-arch) |
| `<stack>` | Build all versions of a stack (e.g. `node` builds `node-22` + `node-24`) |
| `<target>` | Build a specific target (e.g. `node-22`, `quality-1`) |
| `--load` | Load images into local Docker (forces native arch) |
| `--platform PLAT` | Override platforms (e.g. `linux/amd64`) |
| `--no-cache` | Disable Docker build cache |
| `--regenerate` | Regenerate `docker-bake.hcl` before building |
| `--push` | Push images to registry (requires authentication) |
| `--list` | List all available targets and stacks |
| `--dry-run` | Show the command without executing it |
```bash
# List available targets
./scripts/build-local.sh --list

# Rebuild analysis image from scratch, single arch
./scripts/build-local.sh --load --no-cache analysis-1

# Build for a specific platform
./scripts/build-local.sh --platform linux/amd64 scanner-1

# Regenerate bake file and build everything
./scripts/build-local.sh --regenerate --load

# Preview the command without running it
./scripts/build-local.sh --dry-run node java
```

**Development**

```bash
# Generate the bake file from the version matrix
./scripts/generate-bake.sh

# Run smoke tests on built images
./scripts/smoke-test.sh

# Lint Dockerfiles
hadolint images/*/Dockerfile
```

All tool and stack versions are defined in versions.json (single source of truth). To add or update a version:
- Edit `versions.json`
- Run `./scripts/generate-bake.sh` (or use `--regenerate` with `build-local.sh`)
- Commit and push -- CI handles the rest
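A version bump is just a JSON edit. A hypothetical sketch with jq; the real schema of versions.json is not shown in this README, so the flat `{"tools": {...}}` layout below is purely illustrative:

```shell
# Hypothetical sketch of a version bump. The actual versions.json schema may
# differ; a flat {"tools": {...}} layout is assumed for illustration only.
out=$(echo '{"tools":{"jq":"1.8.1","yq":"4.52.5"}}' \
  | jq -c '.tools.jq = "1.8.2"' 2>/dev/null || echo '{}')
echo "$out"
```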
MIT
