xahaud/.github/actions/xahau-ga-cache-save/action.yml
Nicholas Dudfield 8b39d0915f feat: replace github actions cache with s3
Replaces actions/cache with custom S3-based caching to avoid upcoming
GitHub Actions cache eviction policy changes.

Changes:
- New S3 cache actions (xahau-ga-cache-restore/save)
- zstd compression (ccache: clang=323/gcc=609 MB, Conan: clang=1.1/gcc=1.9 GB)
- Immutability (first-write-wins)
- Bootstrap mode (creates empty dir if no cache)
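The restore counterpart (xahau-ga-cache-restore) is not included in this excerpt; a minimal sketch of its bootstrap fallback, with illustrative function and bucket names, might look like:

```shell
# Hypothetical sketch of the restore action's bootstrap fallback:
# unpack the cache tarball if it exists in S3, otherwise fall back to
# an empty directory so the build can populate it from scratch.
restore_or_bootstrap() {
  local s3_key="$1" target="$2"
  mkdir -p "$target"
  if aws s3 ls "$s3_key" >/dev/null 2>&1; then
    # Stream the tarball straight from S3 and unpack it into the target
    aws s3 cp "$s3_key" - | zstd -d -T0 | tar -xf - -C "$target"
    echo "restored"
  else
    # Bootstrap mode: leave the (empty) directory for the build to fill
    echo "bootstrap"
  fi
}
```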

Cache clearing tag:
- [ci-ga-clear-cache] - Clear all caches for this job
- [ci-ga-clear-cache:ccache] - Clear only ccache
- [ci-ga-clear-cache:conan] - Clear only conan
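These tags are presumably matched against the head commit message by the restore action; a hypothetical sketch of that check (the function name is illustrative, not from this commit):

```shell
# Hypothetical sketch: decide whether a cache scope should be cleared
# based on [ci-ga-clear-cache] tags found in the commit message.
should_clear_cache() {
  local msg="$1" scope="$2"   # scope: "ccache" or "conan"
  case "$msg" in
    *"[ci-ga-clear-cache]"*)          echo "clear" ;;  # clear all caches
    *"[ci-ga-clear-cache:${scope}]"*) echo "clear" ;;  # clear this scope only
    *)                                echo "keep"  ;;
  esac
}
```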

Configuration ordering fixes:
- ccache config applied AFTER cache restore (prevents stale cached config)
- Conan profile created AFTER cache restore (prevents stale cached profile)

ccache improvements:
- Single cache directory (~/.ccache)
- Wrapper toolchain (enables ccache without affecting Conan builds)
- Verbose build output (-v flag)
- Fixes #620
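The "wrapper toolchain" itself is not spelled out in this excerpt; one common way to get this effect, sketched here with assumed paths, is to generate tiny wrapper scripts and put their directory on PATH for the main build only, so Conan's dependency builds (which invoke the real compilers directly) are unaffected:

```shell
# Hypothetical sketch: create wrapper scripts that prefix each compiler
# invocation with ccache. Adding "$1" to PATH then routes the main
# build through ccache without touching absolute compiler paths.
make_ccache_wrappers() {
  local dir="$1"
  mkdir -p "$dir"
  for tool in gcc g++ clang clang++; do
    printf '#!/bin/sh\nexec ccache %s "$@"\n' "$tool" > "$dir/$tool"
    chmod +x "$dir/$tool"
  done
}
```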

Conan improvements:
- Removed branch comparison logic for cache saves
- Cache keys don't include branch names, so the comparison was ineffective
- Fixes #618

Breaking changes:
- Workflows must pass AWS credentials (aws-access-key-id, aws-secret-access-key)

S3 setup:
- Bucket: xahaud-github-actions-cache-niq (us-east-1)
- Credentials already configured in GitHub secrets
2025-11-03 14:03:54 +07:00


name: 'Xahau Cache Save (S3)'
description: 'Drop-in replacement for actions/cache/save using S3 storage'

inputs:
  path:
    description: 'A list of files, directories, and wildcard patterns to cache (currently only a single path is supported)'
    required: true
  key:
    description: 'An explicit key for saving the cache'
    required: true
  s3-bucket:
    description: 'S3 bucket name for cache storage'
    required: false
    default: 'xahaud-github-actions-cache-niq'
  s3-region:
    description: 'S3 region'
    required: false
    default: 'us-east-1'
  # Note: composite actions can't access secrets.* directly - they must be passed in from the workflow
  aws-access-key-id:
    description: 'AWS Access Key ID for S3 access'
    required: true
  aws-secret-access-key:
    description: 'AWS Secret Access Key for S3 access'
    required: true

runs:
  using: 'composite'
  steps:
    - name: Save cache to S3
      shell: bash
      env:
        AWS_ACCESS_KEY_ID: ${{ inputs.aws-access-key-id }}
        AWS_SECRET_ACCESS_KEY: ${{ inputs.aws-secret-access-key }}
        S3_BUCKET: ${{ inputs.s3-bucket }}
        S3_REGION: ${{ inputs.s3-region }}
        CACHE_KEY: ${{ inputs.key }}
        TARGET_PATH: ${{ inputs.path }}
      run: |
        set -euo pipefail

        echo "=========================================="
        echo "Xahau Cache Save (S3)"
        echo "=========================================="
        echo "Target path: ${TARGET_PATH}"
        echo "Cache key: ${CACHE_KEY}"
        echo "S3 bucket: s3://${S3_BUCKET}"
        echo ""

        # Normalize target path (expand a leading tilde to ${HOME})
        if [[ "${TARGET_PATH}" == ~* ]]; then
          TARGET_PATH="${HOME}${TARGET_PATH:1}"
        fi
        echo "Normalized target path: ${TARGET_PATH}"
        echo ""

        # Check that the target directory exists
        if [ ! -d "${TARGET_PATH}" ]; then
          echo "⚠️ Target directory does not exist: ${TARGET_PATH}"
          echo "Skipping cache save."
          exit 0
        fi

        # Use a static base name (one base object per key, immutable)
        S3_BASE_KEY="s3://${S3_BUCKET}/${CACHE_KEY}-base.tar.zst"

        # Check whether the base already exists (immutability - first write wins)
        if aws s3 ls "${S3_BASE_KEY}" --region "${S3_REGION}" >/dev/null 2>&1; then
          echo "⚠️ Cache already exists: ${S3_BASE_KEY}"
          echo "Skipping upload (immutability - first write wins, like GitHub Actions)"
          echo ""
          echo "=========================================="
          echo "Cache save completed (already exists)"
          echo "=========================================="
          exit 0
        fi

        # Create the tarball
        BASE_TARBALL="/tmp/xahau-cache-base-$$.tar.zst"
        echo "Creating cache tarball..."
        tar -cf - -C "${TARGET_PATH}" . | zstd -3 -T0 -q -o "${BASE_TARBALL}"
        BASE_SIZE=$(du -h "${BASE_TARBALL}" | cut -f1)
        echo "✓ Cache tarball created: ${BASE_SIZE}"
        echo ""

        # Upload to S3
        echo "Uploading cache to S3..."
        echo "  Key: ${CACHE_KEY}-base.tar.zst"
        aws s3api put-object \
          --bucket "${S3_BUCKET}" \
          --key "${CACHE_KEY}-base.tar.zst" \
          --body "${BASE_TARBALL}" \
          --tagging 'type=base' \
          --region "${S3_REGION}" \
          >/dev/null 2>&1
        echo "✓ Uploaded: ${S3_BASE_KEY}"

        # Cleanup
        rm -f "${BASE_TARBALL}"
        echo ""
        echo "=========================================="
        echo "Cache save completed successfully"
        echo "Cache size: ${BASE_SIZE}"
        echo "Cache key: ${CACHE_KEY}"
        echo "=========================================="