Volume-backed Build Cache on Morph Cloud

tl;dr Use Morph Volumes as persistent S3-compatible storage for build caches and release artifacts. Pull a cache down at job start, build, then sync the warmed cache and final artifacts back into the same volume.

Why this pattern works

  • Warm expensive caches across devboxes, CI, and agent runs
  • Promote release bundles without rebuilding from scratch
  • Share one durable storage surface across multiple teams or workflows
  • Optionally allow anonymous reads of caches and artifacts from public_read volumes

Prerequisites

  • Morph Cloud CLI, Morph Cloud Python SDK, AWS CLI v2, or another S3-compatible client
  • A Morph API key
  • A volume name you can create or reuse, for example team-build-cache

Export your API key and point your S3-compatible client at the Volumes endpoint; the API key serves as both the access key and the secret key:
export MORPH_API_KEY="<your-api-key>"
export MORPH_VOLUMES_BASE_URL=${MORPH_VOLUMES_BASE_URL:-"https://volumes.svc.cloud.morph.so"}
export VOLUME_NAME="team-build-cache"

export AWS_ACCESS_KEY_ID="$MORPH_API_KEY"
export AWS_SECRET_ACCESS_KEY="$MORPH_API_KEY"
export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION:-"us-any-1"}
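
Before the first sync, it can help to fail fast if any of these variables are missing rather than debug a confusing aws error later. A minimal sketch (the `:?` expansions abort with the given message when a variable is empty or unset; the placeholder defaults mirror the exports above):

```shell
set -euo pipefail

# Same variables as the exports above; placeholder defaults for illustration.
export MORPH_API_KEY=${MORPH_API_KEY:-"<your-api-key>"}
export MORPH_VOLUMES_BASE_URL=${MORPH_VOLUMES_BASE_URL:-"https://volumes.svc.cloud.morph.so"}
export VOLUME_NAME=${VOLUME_NAME:-"team-build-cache"}

# Abort early with a clear message if anything is empty.
: "${MORPH_API_KEY:?MORPH_API_KEY must be set}"
: "${MORPH_VOLUMES_BASE_URL:?MORPH_VOLUMES_BASE_URL must be set}"
: "${VOLUME_NAME:?VOLUME_NAME must be set}"

echo "targeting $VOLUME_NAME via $MORPH_VOLUMES_BASE_URL"
```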

What the workflow demonstrates

  • Restore a cached build directory from a Morph Volume
  • Run a build locally or in CI with a warm cache
  • Sync the updated cache back to the same volume
  • Publish final release artifacts alongside the cache

Getting started

set -euo pipefail

mkdir -p .cache/build dist

# 1) Restore the most recent cache (best-effort)
aws --endpoint-url "$MORPH_VOLUMES_BASE_URL" \
  s3 sync "s3://$VOLUME_NAME/build-cache/" .cache/build/ || true

# 2) Run your build using the restored cache
docker buildx build \
  --cache-from "type=local,src=.cache/build" \
  --cache-to "type=local,dest=.cache/build-next" \
  --output "type=local,dest=dist" \
  .

# 3) Promote the refreshed cache
rsync -a --delete .cache/build-next/ .cache/build/
aws --endpoint-url "$MORPH_VOLUMES_BASE_URL" \
  s3 sync .cache/build/ "s3://$VOLUME_NAME/build-cache/"

# 4) Publish release artifacts
GIT_SHA=$(git rev-parse --short HEAD)
aws --endpoint-url "$MORPH_VOLUMES_BASE_URL" \
  s3 cp dist/app.tar.gz "s3://$VOLUME_NAME/releases/$GIT_SHA/app.tar.gz"

Adapt the cached directory to whatever you need to keep warm: ccache, sccache, uv, pip, pnpm, cargo, lake, or your own compiled build tree.
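
When several workflows share one volume, a useful refinement (a sketch, not part of the workflow above: the key scheme and lockfile name are assumptions) is to namespace the cache prefix by a hash of the relevant lockfile, so jobs with different dependency sets restore from and publish to separate prefixes instead of overwriting each other:

```shell
set -euo pipefail

MORPH_VOLUMES_BASE_URL=${MORPH_VOLUMES_BASE_URL:-"https://volumes.svc.cloud.morph.so"}
VOLUME_NAME=${VOLUME_NAME:-"team-build-cache"}
LOCKFILE=${LOCKFILE:-"Cargo.lock"}   # swap in pnpm-lock.yaml, uv.lock, etc.

# Derive a short, stable key from the lockfile contents; fall back to a
# shared prefix when no lockfile is present.
if [ -f "$LOCKFILE" ]; then
  CACHE_KEY=$(sha256sum "$LOCKFILE" | cut -c1-16)
else
  CACHE_KEY="default"
fi
CACHE_PREFIX="s3://$VOLUME_NAME/build-cache/$CACHE_KEY/"
echo "restoring from $CACHE_PREFIX"

# Best-effort restore, exactly as in the workflow above.
mkdir -p .cache/build
aws --endpoint-url "$MORPH_VOLUMES_BASE_URL" \
  s3 sync "$CACHE_PREFIX" .cache/build/ || true
```

The sync-back step uses the same `$CACHE_PREFIX`, so a changed lockfile naturally starts a fresh cache rather than polluting the old one.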

Same flow with morphcloud volumes

If you prefer the first-party CLI, the same workflow maps cleanly onto the new volumes command group:

set -euo pipefail

mkdir -p .cache/build dist

# 1) Restore the most recent cache (best-effort)
morphcloud volumes cp -r "s3://$VOLUME_NAME/build-cache/" .cache/build/ || true

# 2) Run your build using the restored cache
docker buildx build \
  --cache-from "type=local,src=.cache/build" \
  --cache-to "type=local,dest=.cache/build-next" \
  --output "type=local,dest=dist" \
  .

# 3) Promote the refreshed cache
rsync -a --delete .cache/build-next/ .cache/build/
morphcloud volumes cp -r .cache/build/ "s3://$VOLUME_NAME/build-cache/"

# 4) Publish release artifacts
GIT_SHA=$(git rev-parse --short HEAD)
morphcloud volumes cp dist/app.tar.gz "s3://$VOLUME_NAME/releases/$GIT_SHA/app.tar.gz"

Notes

  • Bucket names are globally unique, which makes shared cache URLs predictable.
  • The Volumes web guide covers the required endpoint, path-style addressing, and auth model.
  • The full connection guide lives at Volumes getting started.
  • If your release pipeline is already Python-based, the same bucket and object lifecycle is available through MorphCloudClient().volumes with the same MORPH_API_KEY.
  • If your volume is marked public_read, published artifacts can be fetched anonymously with stable URLs like:
https://volumes.svc.cloud.morph.so/team-build-cache/releases/<git-sha>/app.tar.gz
  • Billing is based on stored GiB-hours, not MCU compute consumption, so this pattern is a good fit for outputs that are expensive to rebuild but cheap to store.
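
The anonymous fetch mentioned above can be sketched as follows (volume name and git SHA are placeholders; substitute your own):

```shell
set -euo pipefail

VOLUME_NAME=${VOLUME_NAME:-"team-build-cache"}
GIT_SHA=${GIT_SHA:-"abc1234"}   # placeholder; use the SHA you published under

# Stable, unauthenticated URL for an artifact in a public_read volume.
URL="https://volumes.svc.cloud.morph.so/$VOLUME_NAME/releases/$GIT_SHA/app.tar.gz"
echo "$URL"

# Uncomment to download; -f turns HTTP errors into a non-zero exit code.
# curl -fSLO "$URL"
```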