Using Caching Volumes
Caching volumes provide persistent storage for your GitHub Actions runners, allowing you to cache Docker images, APT packages, Python virtual environments, build artifacts, and other data between workflow runs. This can significantly speed up your CI/CD pipelines by reducing download times and build times, as well as improve pipeline stability by minimizing dependency on repeatedly accessing external resources during each workflow run.
| ⚠️ Warning: | Volumes are not deleted automatically and can accumulate over time. Depending on your specific use case, the number of created volumes can be very large. Read this section carefully to understand how to properly configure and use caching volumes. Use the Hetzner console and the `volumes list` and `volumes delete` commands to manage them and control costs. |
|---|---|
- Overview
- Facts
- Naming conventions
- Volume lifecycle
- Volume Assignment
- Design Patterns
- Adding volume
- Volume mount points
- Multiple volumes
- The /etc/hetzner-volumes
- Volumes starting with the cache prefix
- Caching APT packages
- Caching Python modules
- Caching Docker
- Monitoring
- Volume resizing
- Volume deactivation and activation
- Listing volumes
- Deleting volumes
- Security considerations
- Estimating costs
- Reminder: monitor costs and cleanup
Caching volumes are Hetzner Cloud volumes that are automatically created and attached to your runners based
on your specified runner labels. Each runner can have from 1 up to 16 caching volumes,
with each volume supporting up to 10TB of storage. The volumes persist between workflow runs.
The volume label you specify in your runner configuration references a group of actual Hetzner volumes. When a new runner starts, the service automatically:
- Attaches an existing matching volume if one is available
- Creates a new volume if no matching volume exists or is available
This automatic volume management ensures that your runners always have the necessary storage available while maintaining persistence of your cached data between workflow runs. Multiple physical Hetzner volumes can share the same cache volume name, forming a distributed cache system. This system provides eventual consistency: when multiple runners use the same cache volume name, they will eventually store the same cached data, assuming the workloads are deterministic (same inputs produce same outputs).
The cache volumes are cumulative between runs unless explicitly cleaned up. You can perform cleanup either by:

- Implementing an automatic cleanup procedure in your workflow (see the sketch below)
- Manually deleting the volumes using the `volumes delete` command
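For example, a minimal sketch of an automatic cleanup step; it assumes a caching volume mounted at `/mnt/cache` and a 14-day retention period, both of which are illustrative and should be adjusted to your setup:

```yaml
# Hypothetical cleanup step: remove stale files from the caching volume
- name: Clean up stale cache files
  if: always()
  shell: bash
  run: |
    if [ -d "/mnt/cache" ]; then
      # Delete files not modified within the last 14 days, then prune empty directories
      sudo find /mnt/cache -type f -mtime +14 -delete
      sudo find /mnt/cache -mindepth 1 -type d -empty -delete
    fi
```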
Because volumes are assigned to jobs using runner labels, you have very flexible control over which volumes will be used by each job. This flexibility allows you to:
- Share volumes between related jobs for efficient caching
- Isolate volumes for sensitive or untrusted workflows
- Control volume sizes per job requirements
- Manage volume lifecycle independently for different job types
- You can add from `1` to `16` volumes.
- Volume size ranges from `10GB` to `10240GB` (10TB). If the volume size is not specified, the default is `10GB` (the minimum).
- Volumes are added using a runner label in the format: `volume-<name_without_dashes>[-<size>GB]` (size is optional).
- If the cache volume name starts with the `cache` prefix (e.g. `volume-cache`, `volume-cache_artifacts`), it is automatically used as the default caching volume.
- Volumes can be resized:
  - Automatically, by updating a volume label (e.g. changing `-10GB` to `-100GB`)
  - Manually, using the `volumes resize` command
- Volume size is not considered during the attach process:
  - Smaller volumes can be attached to jobs requesting larger sizes and are automatically resized
  - Larger volumes can be attached to jobs requesting smaller sizes
- New volumes are created automatically when no existing volume is available for the given label.
- Volumes are attached only when a new server is created. Recyclable servers keep their volumes attached between jobs.
- The volume name defines a group of physical Hetzner volumes.
- Volumes are location-, architecture-, OS flavor-, and OS version-dependent, based on the attributes of the server image. For example, a volume created for servers in `in-nbg1` cannot be used by servers in `in-fsn1`.
- Volumes are never deleted automatically. You must manually delete unused volumes using the `volumes delete` command.
- The maximum number of volumes per volume-name group depends on the maximum number of concurrent jobs requesting the same volume label.
- All volumes are mounted to the `/mnt/<volume_name>` directory, with default ownership set to `ubuntu:ubuntu`.
- The file `/etc/hetzner-volumes` contains metadata about all volumes mounted on the runner.
When defining caching volumes, follow these naming rules:

- **No dashes in volume names**: Volume names must not contain dashes (`-`) except in the required `volume-` prefix. For example, this is valid:
  - ✅ `volume-cache`

  But the following is not allowed:
  - ❌ `volume-my-cache`

  Instead, use underscores (`_`) to separate words:
  - ✅ `volume-cache_artifacts`

- **Prefix must be `volume-`**: All volume labels must begin with the `volume-` prefix. This prefix identifies the label as a volume declaration and is required.

- **Optional size suffix**: You can optionally specify a volume size in gigabytes by appending `-<size>GB` to the label:
  - ✅ `volume-cache_apt-100GB`
  - ✅ `volume-python_wheels` (defaults to 10GB)

- **Special handling for cache-prefixed names**: The cache volume whose name starts with `cache` is treated as a special default caching volume. It is automatically used for caching. See: Volumes starting with the cache prefix.
Caching volumes follow a defined lifecycle tied to job execution and runner provisioning:
| Creation: | New volumes are created automatically when no existing volume with the requested label is available. The creation process is triggered when a new runner starts and no matching volume is found. |
|---|---|
| Attachment: | Volumes are attached only when a new runner server is created. If a server is recyclable, it retains its attached volumes between jobs. The attachment process happens after volume creation or when an existing volume is found. |
| Detachment: | Volumes are detached automatically when a server is deleted. After detachment, they become available to be automatically attached to a new server if they were not deactivated using the `volumes deactivate` command. |
| Deactivation and activation: | Volumes can be deactivated to prevent them from being attached to new servers, and reactivated when needed. This is useful for manual volume management. Use the `volumes deactivate` and `volumes activate` commands. |
| Resizing: | Volumes can be resized in two ways: automatically, by updating the volume label, or manually, using the `volumes resize` command. Note that you can only increase volume size, not decrease it. See Volume resizing for more details. |
| Reuse: | Volumes are reused across jobs and runners, grouped by volume name, location, architecture, OS flavor, and OS version. This grouping ensures compatibility and optimal performance. |
| Deletion: | Volumes are never deleted automatically. To remove a volume, use the `volumes delete` command. |
The volume assignment system provides granular control over how caching volumes are used in your workflows:
- Job-Level Control: Each job can specify exactly which volumes it needs through runner labels
- Size Management: Volume sizes can be tailored to specific job requirements
- Isolation: Different jobs can use different volumes to prevent cache contamination or thrashing (when jobs overwrite each other's cache)
- Sharing: Related jobs can share volumes to maximize cache efficiency
- Lifecycle: Volumes can be managed independently for different job types
For example, you might have:
- A build job using a large volume for Docker images
- A test job using a smaller volume for test artifacts
- A deployment job using a separate volume for deployment artifacts
- All build, test, and deployment jobs using the same volume
This flexibility ensures you can optimize your caching strategy for each specific use case.
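As a sketch of what such a setup could look like in a workflow, using the label format described later in Adding volume (the volume names and sizes here are illustrative assumptions):

```yaml
# Hypothetical example: per-job volume assignment via runner labels
jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-docker_images-100GB]
  test:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-test_artifacts-20GB]
  deploy:
    runs-on: [self-hosted, type-cpx31, volume-deploy_artifacts]
```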
When designing your volume assignment strategy, consider these common patterns (a sketch follows the list):

- **Shared Volume Pattern**
  - Use the same volume label between jobs
  - Cache accumulates data for all jobs
  - Best for related jobs that benefit from shared dependencies
  - Example: Multiple build jobs sharing Docker images

- **Isolated Volume Pattern**
  - Use different volume names for different jobs
  - Makes volumes more specific and easier to manage
  - Best for jobs with unique caching needs, but will result in more physical volumes
  - Example: Separate volumes for build and test artifacts

- **Hierarchical Cache Pattern**
  - Use the same volume with structured subdirectories
  - Organize cache by version, PR, or job name
  - Example: `/mnt/cache/<version>/<PR>` or `/mnt/cache/<job_name>`
  - Best for complex workflows needing both isolation and sharing
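A minimal sketch of the Hierarchical Cache Pattern, assuming the default `cache` volume mounted at `/mnt/cache` and using the job name to pick a subdirectory (the directory layout is an assumption, not something the service enforces):

```yaml
# Hypothetical hierarchical cache layout inside a shared volume
- name: Prepare per-job cache directory
  shell: bash
  run: |
    if [ -d "/mnt/cache" ]; then
      # One subdirectory per job keeps caches separated while sharing a single volume
      JOB_CACHE_DIR="/mnt/cache/${GITHUB_JOB}"
      sudo mkdir -p "$JOB_CACHE_DIR"
      sudo chown ubuntu:ubuntu "$JOB_CACHE_DIR"
      echo "JOB_CACHE_DIR=$JOB_CACHE_DIR" >> "$GITHUB_ENV"
    fi
```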
| ℹ️ Note: | The volume assignment strategy will be highly use case specific. Choose the pattern that best fits your project's needs. |
|---|---|
To use caching volumes in your workflow job, add the volume label to your runner specification.
The label format is `volume-{name_without_dashes}[-{size}GB]`, where:

- `{name}` is the name of your cache volume (defines the volume group). No dashes are allowed; use underscores instead.
  - ✅ `volume-this_is_my_custom_volume_name-20GB` (correct)
  - ❌ `volume-this-is-my-custom-volume-name-20GB` (incorrect)
- `{size}GB` (optional) is the size in GB (e.g., `20GB`); if not specified, the default is `10GB`
# Example of a job using a caching volume
jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache-20GB]

| ℹ️ Note: | Jobs that use the same volume name will share the same volume group irrespective of the volume size. See Volume lifecycle for more details. |
|---|---|
| ℹ️ Note: | If volume size is not specified, the default size is 10GB. See Volume resizing for more information about volume sizes. |
|---|---|
For the example below, both the build and archive jobs will use the same volume-name group `cache`.
If build and archive jobs run in parallel, and there are enough runners available, then different physical volumes will be used for each job.
# Example of jobs sharing the same volume group
jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache]
  archive:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache-20GB]

All volumes are mounted to the `/mnt/<volume_name>` directory.
By default, the ownership is set to `ubuntu:ubuntu`.
For example:

- `volume-cache` → `/mnt/cache`
- `volume-artifacts` → `/mnt/artifacts`
- `volume-cache_builds` → `/mnt/cache_builds`
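If you want to confirm the mount inside a job, a minimal check could look like this (the `/mnt/cache` path assumes a `volume-cache` label):

```yaml
# Hypothetical step verifying a caching volume mount
- name: Check caching volume mount
  shell: bash
  run: |
    if mountpoint -q /mnt/cache; then
      ls -ld /mnt/cache   # should show ubuntu:ubuntu ownership by default
      df -h /mnt/cache    # shows size and current usage
    else
      echo "/mnt/cache is not mounted, proceeding without caching"
    fi
```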
You can specify multiple caching volumes for a single job runner by adding multiple volume labels:
jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache-10GB, volume-artifacts-20GB]

The `/etc/hetzner-volumes` file is automatically created and lists all volumes mounted on the runner.
You can `cat` this file in your workflow to record this information.
For example, if a volume ends up containing invalid data, you can use it to find the volume name and id needed to delete it.
# Example of checking mounted volumes
run: |
  if [ -f "/etc/hetzner-volumes" ]; then
    echo "Hetzner volumes"
    cat /etc/hetzner-volumes
  fi

# Example output showing volume details
name,id,size,mount,device,used,free,usage
cache-x86-ubuntu-unknown-1747486837257882,102587536,20GB,/mnt/cache,/dev/disk/by-id/scsi-0HC_Volume_102587536,16G,2.9G,85%

The first volume in the list (sorted using Python's sorted() function) whose name starts with the `cache` prefix is used for default caching.
Default caching will cache all resources needed for startup and setup.
| ℹ️ Note: | If you don't want any caching to be enabled by default, avoid using volume names starting with the `cache` prefix. |
|---|---|
| ⚠️ Warning: | To avoid confusion, it is recommended to have only one volume with the `cache` prefix per job. |
For example, each of the following jobs uses a volume whose name starts with the `cache` prefix, so it is used for default caching:

jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache]

jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache_custom_name]
-
The /var/cache/apt-archives and /var/lib/apt/lists to cache all APT packages
f"sudo mkdir -p /mnt/{volume_name}/apt-archives /mnt/{volume_name}/apt-lists /var/cache/apt/archives /var/lib/apt/lists" f"sudo mount --bind /mnt/{volume_name}/apt-archives /var/cache/apt/archives" f"sudo mount --bind /mnt/{volume_name}/apt-lists /var/lib/apt/lists"
-
GitHub Actions runner binary
See the startup script for x64: scripts/startup-x64.sh See the startup script for arm64: scripts/startup-arm64.sh -
Docker GPG, repository list, and APT packages.
See dockersetup script:scripts/docker.sh
To speed up package installation, you can cache downloaded .deb files and APT metadata by bind-mounting directories from a cache volume. See Volume mount points for information about where volumes are mounted.
Caching of APT packages is done automatically during runner setup if one of the volumes starts with the cache prefix.
| ⚠️ Warning: | If you use a caching volume that starts with the `cache` prefix, this setup is done automatically and you do not need to perform these steps manually. See Volumes starting with the cache prefix for more information. |
|---|---|
Here is an example of how you can do it manually if you don't have any cache prefix volumes:
# Example of setting up APT package caching
- name: Setup APT cache
  shell: bash
  run: |
    if [ -d "/mnt/cache" ]; then
      # Define cache directories
      APT_ARCHIVES="/mnt/cache/apt-archives"
      APT_LISTS="/mnt/cache/apt-lists"
      # Create local and volume cache directories
      sudo mkdir -p "$APT_ARCHIVES" "$APT_LISTS" \
        /var/cache/apt/archives /var/lib/apt/lists
      # Bind mount volume-backed directories
      sudo mount --bind "$APT_ARCHIVES" /var/cache/apt/archives
      sudo mount --bind "$APT_LISTS" /var/lib/apt/lists
      echo "APT cache directories mounted from volume:"
      echo " - $APT_ARCHIVES → /var/cache/apt/archives"
      echo " - $APT_LISTS → /var/lib/apt/lists"
    else
      echo "No APT cache volume available, proceeding without caching"
    fi

The easiest way to cache Python modules is by using a virtual environment (venv) and
binding a cache volume folder to the venv directory. See Volume mount points
for information about where volumes are mounted.
For example:
# Example of setting up Python virtual environment caching
- name: Setup Python cache
  shell: bash
  run: |
    if [ -d "/mnt/cache" ]; then
      # Define Python cache directory
      PYTHON_CACHE_DIR="/mnt/cache/python3.12-venv"
      mkdir -p "$PYTHON_CACHE_DIR" "$PWD/venv"
      sudo mount --bind "$PYTHON_CACHE_DIR" "$PWD/venv"
      echo "Using cached Python venv directory: $PYTHON_CACHE_DIR"
    else
      echo "No Python venv cache directory available, proceeding without caching"
    fi

To create and activate the virtual environment:
- name: Create and activate Python virtual environment
  shell: bash
  run: |
    sudo apt-get install -y python3.12-venv
    echo "Creating and activating Python virtual environment..."
    if [ ! -f venv/bin/activate ]; then
      python3 -m venv venv
    fi
    source venv/bin/activate
    echo "PATH=$PATH" >> "$GITHUB_ENV"

Caching Docker images and build layers can be more or less tricky, depending on your use case. See Volume mount points for information about where volumes are mounted.
Here are some caching techniques that do not work:
- Creating symlinks from cache volume directories to `/var/lib/docker` directories.
- Using overlayfs for `/var/lib/docker` itself does not work, as Docker relies on its own use of overlayfs.
- Using the plain `cp` command without the `-a` argument can break Docker.
- Not synchronizing the `/var/lib/docker/image` and `/var/lib/docker/overlay2` folders, as the `image` folder contains metadata referencing contents in `overlay2` and expects the `overlay2` folder contents to be correct.
Here are some caching techniques that work but are not optimal:

- Using `cp -a` for setup and `rsync -aH --delete` for syncing back; the initial `cp -a` copy can be slow depending on the cache size.
The simple use case is when you can get away with just bind-mounting a directory located on the caching
volume directly onto /var/lib/docker.
This works under the following conditions:

- During runtime, you don't create very large volumes that live in `/var/lib/docker/volumes`
- During runtime, you don't run containers that write excessively to the container filesystem, which Docker implements as a writable layer stored in the `/var/lib/docker/overlay2` folder

If one of the above conditions is not met, the size of your caching volume will have to accommodate the peak image, volume, and filesystem size during job runtime. For heavy test regression jobs, this could mean very large caching volumes (e.g. 100GB or more), which could be very expensive, especially with high concurrent job counts.
Here is an example of how to set up Docker caching for the simple case:
- name: Setup Docker cache
  shell: bash
  run: |
    if ! systemctl is-active --quiet docker; then
      echo "Docker is not running, skipping Docker cache setup"
      exit 0
    fi
    if [ -d "/mnt/cache" ]; then
      DOCKER_CACHE_DIR="/mnt/cache/docker"
      echo "Using docker cache directory: $DOCKER_CACHE_DIR"
      echo "Stopping Docker to prepare cache"
      sudo systemctl stop docker
      sudo sync
      # Create cache directory if it doesn't exist
      sudo mkdir -p "$DOCKER_CACHE_DIR"
      # Mount the cache directory to /var/lib/docker
      sudo mount --bind "$DOCKER_CACHE_DIR" /var/lib/docker
      sudo sync
      sudo systemctl start docker
    else
      echo "No docker cache directory available, proceeding without caching"
    fi

And here is an example of a syncing Docker cache job:
- name: Sync Docker cache
  shell: bash
  run: |
    if ! command -v docker >/dev/null; then
      echo "Docker is not installed, skipping cache sync"
      exit 0
    fi
    if [ -d "/mnt/cache" ]; then
      echo "Stopping containers and cleaning up..."
      sudo docker stop $(sudo docker ps -q) || true
      sudo docker rm -fv $(sudo docker ps -a -q) || true
      echo "Removing all Docker volumes..."
      sudo docker volume rm $(sudo docker volume ls -q) || true
      echo "Stopping Docker daemon"
      sudo systemctl stop docker
      sudo sync
      # Since we're using a direct bind mount, no sync is needed
      # The cache is automatically updated as Docker writes to /var/lib/docker
    else
      echo "/mnt/cache not available - skipping Docker cache sync"
    fi

Advanced Docker caching is required when the runtime size is too large to accommodate the job's requirements simply by adjusting the caching volume size.
This means you have jobs that hit one of the following conditions, which prevent you from simply mounting the cache directory onto `/var/lib/docker`:
- During runtime, you create very large volumes that live in `/var/lib/docker/volumes`
- During runtime, you run containers that write excessively to the container filesystem, which Docker implements as a writable layer stored in the `/var/lib/docker/overlay2` folder
Therefore, you must cache the contents of `/var/lib/docker` selectively.
To achieve image and build layer caching, the following must be cached:
- `/var/lib/docker/image` - contains layer metadata (cached fully)
- `/var/lib/docker/buildkit` - contains BuildKit data (cached fully)
- `/var/lib/docker/overlay2` - contains image and container layers (cached partially)
It is the requirement to partially cache the contents of `/var/lib/docker/overlay2` that is tricky.
However, it is possible to achieve efficient caching using the following technique:

- The `image` and `buildkit` folders can be copied with `cp -a` from the cache and `rsync`ed back to the cache directly, as these do not consume much space
- Before caching the contents of the `overlay2` folder, all containers must be stopped and removed. Then we need to selectively `rsync` only the new folders while skipping the folders that we bind-mounted during setup. This has to be done at the `overlay2` subfolder level.
Here is an example of how to set up Docker caching:
- name: Setup Docker cache
  shell: bash
  run: |
    if ! systemctl is-active --quiet docker; then
      echo "Docker is not running, skipping Docker cache setup"
      exit 0
    fi
    if [ -f "/etc/hetzner-volumes" ]; then
      echo "Hetzner volumes"
      cat /etc/hetzner-volumes
    fi
    if [ -d "/mnt/cache" ]; then
      DOCKER_CACHE_DIR="/mnt/cache/docker"
      echo "Using docker cache directory: $DOCKER_CACHE_DIR"
      echo "Stopping Docker to prepare cache"
      sudo systemctl stop docker
      sudo sync
      if [ -d "$DOCKER_CACHE_DIR/overlay2" ]; then
        echo "Restoring overlay2 from cache"
        sudo rm -rf "/var/lib/docker/overlay2"
        targets=$(sudo find "$DOCKER_CACHE_DIR/overlay2" -mindepth 1 -maxdepth 1)
        if [ -z "$targets" ]; then
          echo "⚠️ No entries found in $DOCKER_CACHE_DIR/overlay2 - skipping"
        else
          for target in $targets; do
            id=$(basename "$target")
            echo "Mounting $target to /var/lib/docker/overlay2/$id"
            sudo mkdir -p "/var/lib/docker/overlay2/$id"
            sudo mount --bind "$target" "/var/lib/docker/overlay2/$id"
            echo "/var/lib/docker/overlay2/$id" | sudo tee -a /etc/docker-cache-mounts > /dev/null
          done
        fi
      fi
      for DIR in image buildkit; do
        if [ -d "$DOCKER_CACHE_DIR/$DIR" ]; then
          echo "Restoring $DIR from cache"
          sudo rm -rf "/var/lib/docker/$DIR"
          sudo cp -a "$DOCKER_CACHE_DIR/$DIR" "/var/lib/docker/$DIR"
        fi
      done
      sudo sync
      sudo systemctl start docker
    else
      echo "No docker cache directory available, proceeding without caching"
    fi

Here is an example of syncing the Docker cache:
- name: Sync Docker cache
  shell: bash
  run: |
    if ! command -v docker >/dev/null; then
      echo "Docker is not installed, skipping cache sync"
      exit 0
    fi
    if [ -d "/mnt/cache" ]; then
      echo "Stopping containers and cleaning up..."
      sudo docker stop $(sudo docker ps -q) || true
      sudo docker rm -fv $(sudo docker ps -a -q) || true
      echo "Stopping Docker daemon"
      sudo systemctl stop docker
      sudo sync
      echo "Syncing docker folders to cache"
      sudo mkdir -p /mnt/cache/docker
      if sudo test -d "/var/lib/docker/overlay2"; then
        sudo mkdir -p /mnt/cache/docker/overlay2
        targets=$(sudo find "/var/lib/docker/overlay2" -mindepth 1 -maxdepth 1)
        if [ -z "$targets" ]; then
          echo "⚠️ No entries found in /var/lib/docker/overlay2 - skipping"
        else
          for target in $targets; do
            id=$(basename "$target")
            if [ ! -f /etc/docker-cache-mounts ] || ! grep -Fxq "$target" /etc/docker-cache-mounts; then
              sudo rsync -aH --delete "$target/" /mnt/cache/docker/overlay2/$id/
            fi
          done
        fi
      fi
      for DIR in image buildkit; do
        sudo rsync -aH --delete /var/lib/docker/$DIR/ /mnt/cache/docker/$DIR/
      done
      sudo sync
    else
      echo "/mnt/cache not available - skipping Docker cache sync"
    fi

To monitor your cache volumes inside your workflow, here are some useful commands. See Volume mount points for information about where volumes are mounted.
# Example of monitoring Docker cache usage
- name: Show Docker disk usage
  if: always()
  run: docker system df

- name: Show cache directory size
  if: always()
  run: du -sh /mnt/cache/docker

- name: Show Docker images
  if: always()
  run: docker images

- name: Clean up unused images
  if: always()
  run: docker image prune -a -f

- name: Show Hetzner volumes
  if: always()
  run: |
    if [ -f "/etc/hetzner-volumes" ]; then
      echo "Hetzner volumes"
      cat /etc/hetzner-volumes
    fi

- name: Clean up containers and volumes
  if: always()
  run: |
    # Stop and remove all containers
    echo "Stopping containers and cleaning up..."
    docker stop $(docker ps -q) || true
    docker rm -fv $(docker ps -a -q) || true
    # Remove all Docker volumes
    echo "Removing all Docker volumes..."
    docker volume rm $(docker volume ls -q) || true

| ⚠️ Warning: | Regularly monitor your cache size and implement cleanup strategies to prevent the cache from growing too large. Always check the Hetzner console periodically. |
|---|---|
To monitor your existing cache volumes:
github-hetzner-runners list

All volumes can be resized, either by updating the volume label or by using the `volumes resize` command.
See Volume lifecycle for more information about volume management.
| ℹ️ Note: | You can only increase volume size. Decreasing volume size is not supported. |
|---|---|
| ⚠️ Warning: | If you want to decrease the volume size, you will have to delete the volume and create a new one. |
For example,
jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache]

To resize the volume `cache` from the default 10GB to 20GB, update the label from `volume-cache` to `volume-cache-20GB`
and rerun your workflow.
jobs:
  build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache-20GB]

By default, all caching volumes are selected using the `github-hetzner-runner-volume=active` label, which
is added to each volume during creation. See Volume lifecycle for more information.
You can use the `volumes deactivate` command to change the volume label to `github-hetzner-runner-volume=inactive`,
which will prevent the volume from being selected for attachment to a new server.
| ℹ️ Note: | Deactivating a volume does not detach it from any currently bound server. |
|---|---|
The recommended procedure to deactivate a volume (see the example below):

- Run the `volumes deactivate` command.
- If the volume is currently bound (check using the `volumes list` command), delete the server to which the volume is bound to detach it.
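For example, a sketch of this flow for a volume group named `cache` (substitute your own volume name; the `--name` option is documented in the usage below):

```bash
# Check whether any volumes in the group are currently bound to a server
github-hetzner-runners volumes list --name cache

# Deactivate all volumes matching the name so they are not attached to new servers
github-hetzner-runners volumes deactivate --name cache
```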
usage: github-hetzner-runners volumes deactivate [-h] [-n name] [-v name] [--id id] [--all]
Deactivate volumes. This will prevent the volume to be attached to a new server.
options:
-h, --help show this help message and exit
-n name, --name name deactivate all volumes matching name
-v name, --volume name
deactivate by volume name
--id id deactivate by volume id
--all deactivate all volumes

You can use the `volumes activate` command to change the volume label back to `github-hetzner-runner-volume=active`
so that it will be available to be attached during the next scale-up cycle after it was deactivated using
the `volumes deactivate` command. See Volume deactivation and
Volume lifecycle for more information.
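For example, to reactivate the same `cache` volume group from the sketch above:

```bash
# Allow all volumes matching the name to be attached again
github-hetzner-runners volumes activate --name cache
```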
usage: github-hetzner-runners volumes activate [-h] [-n name] [-v name] [--id id] [--all]
Activate volumes. This will allow the volume to be attached to a new server.
options:
-h, --help show this help message and exit
-n name, --name name activate all volumes matching name
-v name, --volume name
activate by volume name
--id id activate by volume id
--all activate all volumes

Use the `volumes list` command to list current volumes.
usage: github-hetzner-runners volumes list [-h] [-n name] [-v name] [--id id] [--all]
List volumes.
options:
-h, --help show this help message and exit
-n name, --name name list all volumes matching name
-v name, --volume name
list by volume name
--id id list by volume id
--all list all volumes (default if no other options are provided)

Example:

github-hetzner-runners volumes list

20:34:13 Logging in to Hetzner Cloud
20:34:13 Getting a list of volumes
status state, name, actual name, id, size, location, server, created, format
🟢 available active, cache_pull_100_docker_images, cache_pull_100_docker_images-x86-ubuntu-22.04-1747506174049519, 102588112, 200GB, nbg1, none, 2025-05-17 18:22:57, ext4
Use the `volumes delete` command to delete volumes.
usage: github-hetzner-runners volumes delete [-h] [-n name] [-v name] [--id id] [--all] [-f]
Delete volumes.
options:
-h, --help show this help message and exit
-n name, --name name delete all volumes matching name
-v name, --volume name
delete by volume name
--id id delete by volume id
--all delete all volumes
-f, --force force delete volumes even if they are attached to a server

When using caching volumes with self-hosted runners, it's important to understand the security implications:
- **Volumes are shared between runners and jobs**

  Caching volumes are persistent and may be reused by different runners across multiple jobs and workflows. As such, cached data is not isolated: one job may read from or write to a cache created by another.

- **Avoid using shared volumes with untrusted pull requests**

  If your repository accepts contributions from external contributors, do not use shared caching volumes when running jobs triggered by untrusted pull requests. These jobs could potentially read or overwrite cached files, leading to information disclosure or cache poisoning.

For guidance, refer to GitHub's official documentation: Security hardening for GitHub Actions - Self-hosted runners.
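As an illustration of the isolation advice above, a hedged sketch where jobs triggered by pull requests use a separate volume group from trusted jobs (the job and label names are illustrative assumptions):

```yaml
# Hypothetical example: isolating pull request jobs on a separate volume group
jobs:
  trusted_build:
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache]
  pr_build:
    # A different volume name puts untrusted PR jobs on their own volume group;
    # the name still starts with cache, so it still acts as a default caching volume
    runs-on: [self-hosted, type-cpx31, setup-docker, volume-cache_pr]
```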
| Best practices: | Use isolated volume names for untrusted or sensitive workflows, do not attach shared caching volumes to jobs triggered by untrusted pull requests, and regularly review and delete volumes that may contain stale or sensitive data. |
|---|---|
To estimate the monthly costs for maximum volume usage (in GB) for your caching setup, use the following formula:
max_volume_usage =
  peak_concurrent_jobs *
  total_volume_size_per_job_gb *
  num_locations *
  num_architectures *
  num_os_flavors

Where:
- peak_concurrent_jobs: the highest number of jobs that may run at the same time
- total_volume_size_per_job_gb: sum of all volume sizes (in GB) that a single job may require
- num_locations: number of Hetzner regions used (e.g., fsn1, nbg1)
- num_architectures: number of CPU architectures (e.g., x86, arm)
- num_os_flavors: number of OS flavor + version combinations (e.g., Ubuntu 22.04, Debian 12)
For example, if you expect:
- peak_concurrent_jobs = 10
- total_volume_size_per_job_gb = 50
- num_locations = 2
- num_architectures = 2
- num_os_flavors = 1
Then,
max_volume_usage = 10 x 50 x 2 x 2 x 1 = 2000 (GB).
This means the service may provision up to 2000 GB of volume storage to cover the worst case.
With the current cost of €4.40 per month per 100 GB:
2000 / 100 x 4.40 = €88.00 per month
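If you want to script this estimate, a small sketch using the example values above could look like this (the €4.40 per 100 GB rate is the example rate; check the pricing page for current numbers):

```bash
# Hypothetical cost estimate using the example values above
peak_concurrent_jobs=10
total_volume_size_per_job_gb=50
num_locations=2
num_architectures=2
num_os_flavors=1

max_volume_usage_gb=$(( peak_concurrent_jobs * total_volume_size_per_job_gb * num_locations * num_architectures * num_os_flavors ))

# Example rate: EUR 4.40 per 100 GB per month (see https://www.hetzner.com/cloud/)
awk -v gb="$max_volume_usage_gb" 'BEGIN { printf "max volume usage: %d GB, estimated cost: EUR %.2f/month\n", gb, gb / 100 * 4.40 }'
```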
| ℹ️ Note: | Refer to the official pricing page for the most up-to-date storage rates: https://www.hetzner.com/cloud/. |
|---|---|
While caching volumes greatly improve performance and pipeline stability, they also consume persistent storage that accumulates over time and directly contributes to your Hetzner Cloud costs.
To avoid unnecessary expenses:
- Regularly clean up unused or outdated volumes using the `volumes delete` command
- Design workflows with optional cache cleanup steps (e.g. deleting large temporary files)
- Monitor usage and volume count with `volumes list` and through the Hetzner Console
- Set appropriate volume sizes instead of relying on the default (`10GB`)
You are responsible for volume lifecycle management. Unused volumes will remain active until manually removed.
Developed and maintained by the TestFlows team.