xahaud/.github/actions/xahau-actions-cache-save/action.yml
Nicholas Dudfield be6fad9692 fix: revert to PID-based temp files (was working before)
Reverts the unnecessary mktemp change from 638cb0afe that broke cache saving.

What happened:
- Original delta code used $$ (PID) for temp files: DELTA_TARBALL="/tmp/...-$$.tar.zst"
- This creates a STRING, not a file - zstd creates the file when writing
- When removing deltas (638cb0afe), I unnecessarily changed to mktemp for "better practice"
- mktemp CREATES an empty file - zstd refuses to overwrite it
- Result: "already exists; not overwritten" error

Why the mktemp change seemed to work:
- Immutability check skipped save for existing caches
- Upload code path never executed during testing
- Bug only appeared when actually trying to create new cache

The fix:
- Revert to PID-based naming ($$) that was working
- Don't fix what isn't broken

Applies to both save and restore actions for consistency.
2025-10-31 12:10:13 +07:00
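
The failure described in the commit message can be reproduced in isolation. This is an illustrative sketch, not part of the action itself (the `/tmp/cache-demo-*` names are made up for the demo):

```shell
# mktemp CREATES an empty file on disk...
TMP_MKTEMP=$(mktemp /tmp/cache-demo-XXXXXX.tar.zst)
ls -l "$TMP_MKTEMP"                      # the file already exists

# ...so zstd (without -f) refuses to write to it:
echo data | zstd -q -o "$TMP_MKTEMP" \
  || echo "zstd refused: output file already exists"

# $$ expands to the shell's PID - this is just a string, no file yet...
TMP_PID="/tmp/cache-demo-$$.tar.zst"

# ...so zstd creates the output file itself:
echo data | zstd -q -o "$TMP_PID"
ls -l "$TMP_PID"

rm -f "$TMP_MKTEMP" "$TMP_PID"
```

Passing `-f` to zstd would also have worked, but reverting to the PID-based name restores the previously verified behavior with the smallest diff.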

111 lines · 3.6 KiB · YAML

name: 'Xahau Cache Save (S3)'
description: 'Drop-in replacement for actions/cache/save using S3 storage'

inputs:
  path:
    description: 'A list of files, directories, and wildcard patterns to cache (currently only single path supported)'
    required: true
  key:
    description: 'An explicit key for saving the cache'
    required: true
  s3-bucket:
    description: 'S3 bucket name for cache storage'
    required: false
    default: 'xahaud-github-actions-cache-niq'
  s3-region:
    description: 'S3 region'
    required: false
    default: 'us-east-1'
  # Note: Composite actions can't access secrets.* directly - must be passed from workflow
  aws-access-key-id:
    description: 'AWS Access Key ID for S3 access'
    required: true
  aws-secret-access-key:
    description: 'AWS Secret Access Key for S3 access'
    required: true

runs:
  using: 'composite'
  steps:
    - name: Save cache to S3
      shell: bash
      env:
        AWS_ACCESS_KEY_ID: ${{ inputs.aws-access-key-id }}
        AWS_SECRET_ACCESS_KEY: ${{ inputs.aws-secret-access-key }}
        S3_BUCKET: ${{ inputs.s3-bucket }}
        S3_REGION: ${{ inputs.s3-region }}
        CACHE_KEY: ${{ inputs.key }}
        TARGET_PATH: ${{ inputs.path }}
      run: |
        set -euo pipefail

        echo "=========================================="
        echo "Xahau Cache Save (S3)"
        echo "=========================================="
        echo "Target path: ${TARGET_PATH}"
        echo "Cache key: ${CACHE_KEY}"
        echo "S3 bucket: s3://${S3_BUCKET}"
        echo ""

        # Normalize target path (expand tilde and resolve to absolute path)
        if [[ "${TARGET_PATH}" == ~* ]]; then
          TARGET_PATH="${HOME}${TARGET_PATH:1}"
        fi
        echo "Normalized target path: ${TARGET_PATH}"
        echo ""

        # Check if target directory exists
        if [ ! -d "${TARGET_PATH}" ]; then
          echo "⚠️ Target directory does not exist: ${TARGET_PATH}"
          echo "Skipping cache save."
          exit 0
        fi

        # Use static base name (one base per key, immutable)
        S3_BASE_KEY="s3://${S3_BUCKET}/${CACHE_KEY}-base.tar.zst"

        # Check if base already exists (immutability - first write wins)
        if aws s3 ls "${S3_BASE_KEY}" --region "${S3_REGION}" >/dev/null 2>&1; then
          echo "⚠️ Cache already exists: ${S3_BASE_KEY}"
          echo "Skipping upload (immutability - first write wins, like GitHub Actions)"
          echo ""
          echo "=========================================="
          echo "Cache save completed (already exists)"
          echo "=========================================="
          exit 0
        fi

        # Create tarball (PID-based name is a plain string; zstd creates the file)
        BASE_TARBALL="/tmp/xahau-cache-base-$$.tar.zst"
        echo "Creating cache tarball..."
        tar -cf - -C "${TARGET_PATH}" . | zstd -3 -T0 -q -o "${BASE_TARBALL}"
        BASE_SIZE=$(du -h "${BASE_TARBALL}" | cut -f1)
        echo "✓ Cache tarball created: ${BASE_SIZE}"
        echo ""

        # Upload to S3
        echo "Uploading cache to S3..."
        echo "  Key: ${CACHE_KEY}-base.tar.zst"
        aws s3api put-object \
          --bucket "${S3_BUCKET}" \
          --key "${CACHE_KEY}-base.tar.zst" \
          --body "${BASE_TARBALL}" \
          --tagging 'type=base' \
          --region "${S3_REGION}" \
          >/dev/null
        echo "✓ Uploaded: ${S3_BASE_KEY}"

        # Cleanup
        rm -f "${BASE_TARBALL}"
        echo ""
        echo "=========================================="
        echo "Cache save completed successfully"
        echo "Cache size: ${BASE_SIZE}"
        echo "Cache key: ${CACHE_KEY}"
        echo "=========================================="
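
For reference, a calling workflow might use the action like this. A hypothetical sketch: the job name, cached path, key, and secret names are assumptions; only the input names come from the action above. Note that `secrets.*` must be passed in explicitly, since composite actions cannot read them directly:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... build steps that populate ~/.conan ...
      - name: Save Conan cache
        uses: ./.github/actions/xahau-actions-cache-save
        with:
          path: ~/.conan
          key: conan-${{ runner.os }}-${{ hashFiles('**/conanfile.txt') }}
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```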