chore: remove migration scripts and endpoint after successful migration
All checks were successful
CI / update (push) Successful in 1m8s
Migration completed successfully. Removing one-time migration files:
- Migration endpoint (api/admin/migrate-image-hashes)
- Migration shell script
- Migration documentation

Core image hashing functionality remains in place for all future uploads.
@@ -1,288 +0,0 @@
# Image Hash Migration Guide

This guide explains how to migrate existing images to use content-based hashing for cache invalidation.

## Overview

The new system stores images with content-based hashes for proper cache invalidation (see the sketch after this list):

- **Database**: `images[0].mediapath = "maccaroni.a1b2c3d4.webp"`
- **Disk (hashed)**: `maccaroni.a1b2c3d4.webp` - cached forever (immutable)
- **Disk (unhashed)**: `maccaroni.webp` - fallback for graceful degradation
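
For illustration only, the convention can be captured by helpers like the ones below. These are hypothetical and not part of the code in this commit; the hash value and recipe name are the example values used throughout this guide.

```typescript
// Hypothetical helpers illustrating the naming convention above (not project code).

// A hashed filename carries an 8-character lowercase-hex hash before the .webp extension.
const HASHED_PATTERN = /\.[a-f0-9]{8}\.webp$/;

/** True for "maccaroni.a1b2c3d4.webp", false for the unhashed "maccaroni.webp". */
function isHashed(filename: string): boolean {
    return HASHED_PATTERN.test(filename);
}

/** Frontend-style resolution: use the stored mediapath when present, else the unhashed fallback. */
function resolveImageName(mediapath: string | undefined, shortName: string): string {
    return mediapath ?? `${shortName}.webp`;
}

console.log(isHashed('maccaroni.a1b2c3d4.webp')); // true
console.log(resolveImageName(undefined, 'maccaroni')); // "maccaroni.webp"
```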

## What This Does

The migration will:

1. **Find all recipes** in the database
2. **Check each recipe's images**:
   - If `images[0].mediapath` already has a hash → skip
   - If the image file doesn't exist on disk → skip
   - Otherwise → generate a hash and create a hashed copy
3. **Generate a content hash** from the full-size image (8-char SHA-256)
4. **Copy files** (keeps originals!) in all three directories:
   - `/var/www/static/rezepte/full/`
   - `/var/www/static/rezepte/thumb/`
   - `/var/www/static/rezepte/placeholder/`
5. **Update the database** with the new hashed filename in `images[0].mediapath`

## Prerequisites

- **Authentication**: Either be logged in as admin OR have `ADMIN_SECRET_TOKEN` set
- Only runs in production (when `IMAGE_DIR=/var/www/static`)
- Requires a confirmation token to prevent accidental runs
- Back up your database before running (recommended)

### Setting Up Admin Token (Production Server)

Add `ADMIN_SECRET_TOKEN` to your production `.env` file:

```bash
# Generate a secure random token
openssl rand -hex 32

# Add to .env (production only!)
echo "ADMIN_SECRET_TOKEN=your-generated-token-here" >> .env
```

**Important**: Keep this token secret and only set it on the production server. Do NOT commit it to git.

## Step 1: Deploy Code Changes

Deploy the updated codebase to production. The changes include:
- Image upload endpoint now saves both hashed and unhashed versions
- Frontend components use `images[0].mediapath` for image URLs
- New migration endpoint at `/api/admin/migrate-image-hashes`

## Step 2: Update Nginx Configuration

Add this to your nginx site configuration for `bocken.org`:

```nginx
location /static/rezepte/ {
    root /var/www;

    # Cache hashed files forever (they have a content hash in the filename)
    location ~ /static/rezepte/(thumb|placeholder|full)/[^/]+\.[a-f0-9]{8}\.webp$ {
        add_header Cache-Control "public, max-age=31536000, immutable";
    }

    # Cache unhashed files with revalidation (fallback for manual uploads)
    location ~ /static/rezepte/(thumb|placeholder|full)/[^/]+\.webp$ {
        add_header Cache-Control "public, max-age=3600, must-revalidate";
    }
}
```

Reload nginx:

```bash
sudo nginx -t && sudo nginx -s reload
```

## Step 3: Run Migration

### Option 1: Using Shell Script (Recommended for Server)

SSH into your production server and run:

```bash
cd /path/to/homepage

# Source your .env to get ADMIN_SECRET_TOKEN
source .env

# Make the script executable (first time only)
chmod +x scripts/migrate-image-hashes.sh

# Run the migration
./scripts/migrate-image-hashes.sh
```

The script will:
- Check that `ADMIN_SECRET_TOKEN` is set
- Ask for confirmation
- Call the API endpoint with the admin token
- Pretty-print the results

### Option 2: Using curl with Admin Token

```bash
# On the production server with .env sourced
source .env

curl -X POST https://bocken.org/api/admin/migrate-image-hashes \
  -H "Content-Type: application/json" \
  -d "{\"confirm\": \"MIGRATE_IMAGES\", \"adminToken\": \"$ADMIN_SECRET_TOKEN\"}"
```

### Option 3: Using curl with Session Cookie (Browser)

```bash
# Get your session cookie from the browser DevTools
# In Chrome/Firefox: F12 → Network tab → click any request → Headers → copy the Cookie value

curl -X POST https://bocken.org/api/admin/migrate-image-hashes \
  -H "Content-Type: application/json" \
  -H "Cookie: YOUR_SESSION_COOKIE_HERE" \
  -d '{"confirm": "MIGRATE_IMAGES"}'
```

### Option 4: Using an API Client (Postman, Insomnia, etc.)

1. Make sure you're logged in to bocken.org in your browser
2. Send a POST request to: `https://bocken.org/api/admin/migrate-image-hashes`
3. Headers:
   ```
   Content-Type: application/json
   ```
4. Body (JSON):
   ```json
   {
     "confirm": "MIGRATE_IMAGES"
   }
   ```

## Response Format

Success response:

```json
{
  "success": true,
  "message": "Migration complete. Migrated 42 of 100 recipes.",
  "total": 100,
  "migrated": 42,
  "skipped": 58,
  "errors": [],
  "details": [
    {
      "shortName": "maccaroni",
      "status": "migrated",
      "unhashedFilename": "maccaroni.webp",
      "hashedFilename": "maccaroni.a1b2c3d4.webp",
      "hash": "a1b2c3d4",
      "filesCopied": 3,
      "note": "Both hashed and unhashed versions saved for graceful degradation"
    },
    {
      "shortName": "pizza",
      "status": "skipped",
      "reason": "already hashed",
      "filename": "pizza.e5f6a7b8.webp"
    }
  ]
}
```

## What Gets Skipped

The migration will skip recipes where:
- `images[0].mediapath` already contains a hash pattern (`.[a-f0-9]{8}.webp`)
- The image file doesn't exist on disk
- The recipe has no images array

## After Migration

### Verification

1. Check a few recipe pages - images should load correctly
2. Check the browser DevTools → Network tab (or use the script after this list):
   - Hashed images should have `Cache-Control: max-age=31536000, immutable`
   - Unhashed images should have `Cache-Control: max-age=3600, must-revalidate`
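
If you prefer to check the headers from the command line, a minimal script along these lines should work. It is a sketch, not project code: the recipe name and hash are placeholders, and it assumes Node 18+ for the built-in `fetch` (run with e.g. `npx tsx verify-cache-headers.ts`).

```typescript
// verify-cache-headers.ts - print the Cache-Control headers nginx serves for both variants.

const urls = [
    // Hashed variant: expect "public, max-age=31536000, immutable"
    'https://bocken.org/static/rezepte/thumb/maccaroni.a1b2c3d4.webp',
    // Unhashed fallback: expect "public, max-age=3600, must-revalidate"
    'https://bocken.org/static/rezepte/thumb/maccaroni.webp'
];

async function main(): Promise<void> {
    for (const url of urls) {
        const res = await fetch(url, { method: 'HEAD' });
        console.log(`${url}`);
        console.log(`  status: ${res.status}`);
        console.log(`  cache-control: ${res.headers.get('cache-control')}`);
    }
}

main().catch((err) => {
    console.error(err);
    process.exit(1);
});
```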

### New Uploads

All new image uploads will automatically:
- Generate a content hash
- Save both hashed and unhashed versions
- Store the hashed filename in the database
- Work immediately with proper cache invalidation (see the sketch after this list)
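
As a rough sketch of that upload path (assumed shape only; the real endpoint, `IMAGE_DIR` layout, and hash utilities are not shown in this commit), saving both variants might look like this:

```typescript
// Hypothetical sketch of an upload handler persisting both variants; not the project's actual code.
import { createHash } from 'node:crypto';
import { writeFileSync } from 'node:fs';
import path from 'node:path';

const IMAGE_DIR = process.env.IMAGE_DIR ?? '/var/www/static';

function saveRecipeImage(
    shortName: string,
    buffers: { full: Buffer; thumb: Buffer; placeholder: Buffer }
): string {
    // Assumed to follow the migration's convention: hash the full-size image.
    const hash = createHash('sha256').update(buffers.full).digest('hex').slice(0, 8);
    const hashedName = `${shortName}.${hash}.webp`;

    for (const folder of ['full', 'thumb', 'placeholder'] as const) {
        const dir = path.join(IMAGE_DIR, 'rezepte', folder);
        // Unhashed copy: fallback for graceful degradation
        writeFileSync(path.join(dir, `${shortName}.webp`), buffers[folder]);
        // Hashed copy: immutable, cached forever by nginx
        writeFileSync(path.join(dir, hashedName), buffers[folder]);
    }

    // The hashed filename is what gets stored in images[0].mediapath
    return hashedName;
}
```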

### Manual Uploads

If you manually upload an image:
1. Drop `recipe-name.webp` in all three folders (full, thumb, placeholder)
2. It will work immediately (graceful degradation)
3. The next time the recipe is edited and the image re-uploaded, it will get a hash

## Rollback (If Needed)

If something goes wrong:

1. **Database rollback**: Restore from the backup taken before migration
2. **Files**: The original unhashed files are still on disk - no data loss
3. **Remove hashed files** (optional):
   ```bash
   cd /var/www/static/rezepte
   find . -name '*.[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9].webp' -delete
   ```

## Safety Features

1. ✅ **Production-only**: Won't run unless `IMAGE_DIR=/var/www/static`
2. ✅ **Confirmation token**: Requires `{"confirm": "MIGRATE_IMAGES"}` in the request body
3. ✅ **Authentication**: Requires either a logged-in user OR a valid `ADMIN_SECRET_TOKEN`
4. ✅ **Non-destructive**: Copies files (keeps originals)
5. ✅ **Skips already migrated**: Won't re-process files that already have hashes
6. ✅ **Detailed logging**: Returns a detailed report of what was changed

## Technical Details

### Hash Generation

- Uses SHA-256 of the image file content (see the sketch after this list)
- First 8 hex characters used (4 billion combinations)
- Same image = same hash (deterministic)
- Different image = different hash (cache invalidation)
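
The hashing utilities themselves are not part of this diff; based on the description above, `generateImageHash` and `getHashedFilename` (imported by the endpoint from `$utils/imageHash`) likely look roughly like this sketch:

```typescript
// Sketch of the hashing utilities described above; the real implementations in $utils/imageHash
// are not included in this commit, so treat this as an assumption.
import { createHash } from 'node:crypto';
import { readFileSync } from 'node:fs';

/** SHA-256 over the file's bytes, truncated to the first 8 hex characters. */
export function generateImageHash(filePath: string): string {
    const fileBuffer = readFileSync(filePath);
    return createHash('sha256').update(fileBuffer).digest('hex').slice(0, 8);
}

/** "maccaroni" + "a1b2c3d4" -> "maccaroni.a1b2c3d4.webp" */
export function getHashedFilename(shortName: string, hash: string): string {
    return `${shortName}.${hash}.webp`;
}
```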

### File Structure

```
/var/www/static/rezepte/
├── full/
│   ├── maccaroni.webp           ← Unhashed (fallback)
│   ├── maccaroni.a1b2c3d4.webp  ← Hashed (cache busting)
│   └── ...
├── thumb/
│   ├── maccaroni.webp
│   ├── maccaroni.a1b2c3d4.webp
│   └── ...
└── placeholder/
    ├── maccaroni.webp
    ├── maccaroni.a1b2c3d4.webp
    └── ...
```

### Database Schema

```typescript
images: [{
  mediapath: "maccaroni.a1b2c3d4.webp", // Full filename with hash
  alt: "Maccaroni and cheese",
  caption: "Delicious comfort food"
}]
```

### Frontend Usage

```svelte
<!-- Card.svelte -->
<script>
  // Uses images[0].mediapath (with hash)
  // Falls back to short_name.webp if missing
  const img_name = $derived(
    recipe.images?.[0]?.mediapath ||
    `${recipe.short_name}.webp`
  );
</script>

<img src="https://bocken.org/static/rezepte/thumb/{img_name}" />
```

## Questions?

If you encounter issues:
1. Check the nginx error logs: `sudo tail -f /var/log/nginx/error.log`
2. Check the application logs for the migration endpoint
3. Verify file permissions on `/var/www/static/rezepte/`
4. Ensure the database connection is working

The migration is designed to be safe and non-destructive. Original files are never deleted, only copied.

@@ -1,62 +0,0 @@
#!/bin/bash

# Image Hash Migration Script
# This script triggers the image hash migration endpoint in production.
# It will:
# 1. Find all images without hashes (shortname.webp)
# 2. Generate content hashes for them
# 3. Copy them to shortname.{hash}.webp (keeps originals)
# 4. Update the database

set -e

echo "======================================"
echo "Image Hash Migration Script"
echo "======================================"
echo ""
echo "This will migrate all existing images to use content-based hashes."
echo "Images will be copied from 'shortname.webp' to 'shortname.{hash}.webp'"
echo "Original unhashed files will be kept for graceful degradation."
echo ""
echo "⚠️ WARNING: This operation will modify the database and create new files!"
echo ""

# Check for admin token
if [ -z "$ADMIN_SECRET_TOKEN" ]; then
    echo "Error: ADMIN_SECRET_TOKEN environment variable not set."
    echo ""
    echo "Please set it first:"
    echo "  export ADMIN_SECRET_TOKEN='your-secret-token'"
    echo ""
    echo "Or source your .env file:"
    echo "  source .env"
    exit 1
fi

read -p "Are you sure you want to continue? (yes/no): " confirm

if [ "$confirm" != "yes" ]; then
    echo "Migration cancelled."
    exit 0
fi

echo ""
echo "Starting migration..."
echo ""

# Get the production URL (modify this to your production URL)
PROD_URL="${PROD_URL:-https://bocken.org}"

# Make the API call with the admin token
response=$(curl -s -X POST \
    -H "Content-Type: application/json" \
    -d "{\"confirm\": \"MIGRATE_IMAGES\", \"adminToken\": \"$ADMIN_SECRET_TOKEN\"}" \
    "${PROD_URL}/api/admin/migrate-image-hashes")

# Pretty-print the response (falls back to raw output if jq is not installed)
echo "$response" | jq '.' 2>/dev/null || echo "$response"

echo ""
echo "======================================"
echo "Migration complete!"
echo "======================================"

@@ -1,155 +0,0 @@
import type { RequestHandler } from '@sveltejs/kit';
import { error } from '@sveltejs/kit';
import { IMAGE_DIR } from '$env/static/private';
import { env } from '$env/dynamic/private';
import { Recipe } from '$models/Recipe';
import { dbConnect } from '$utils/db';
import { generateImageHash, getHashedFilename } from '$utils/imageHash';
import path from 'path';
import fs from 'fs';
import { rename } from 'node:fs/promises';

export const POST = (async ({ locals, request }) => {
    // Only allow in production (check if IMAGE_DIR contains production path)
    const isProd = IMAGE_DIR.includes('/var/www/static');

    // Require confirmation token to prevent accidental runs
    const data = await request.json();
    const confirmToken = data?.confirm;
    const adminToken = data?.adminToken;

    if (!isProd) {
        throw error(403, 'This endpoint only runs in production (IMAGE_DIR must be /var/www/static)');
    }

    if (confirmToken !== 'MIGRATE_IMAGES') {
        throw error(400, 'Missing or invalid confirmation token. Send {"confirm": "MIGRATE_IMAGES"}');
    }

    // Check authentication: either valid session OR admin token from env
    const auth = await locals.auth();
    const isAdminToken = adminToken && env.ADMIN_SECRET_TOKEN && adminToken === env.ADMIN_SECRET_TOKEN;

    if (!auth && !isAdminToken) {
        throw error(401, 'Need to be logged in or provide valid admin token');
    }

    await dbConnect();

    const results = {
        total: 0,
        migrated: 0,
        skipped: 0,
        errors: [] as string[],
        details: [] as any[]
    };

    try {
        // Get all recipes
        const recipes = await Recipe.find({});
        results.total = recipes.length;

        for (const recipe of recipes) {
            const shortName = recipe.short_name;

            try {
                // Check if already has hashed filename
                const currentMediaPath = recipe.images?.[0]?.mediapath;

                // If mediapath exists and has hash pattern, skip
                if (currentMediaPath && /\.[a-f0-9]{8}\.webp$/.test(currentMediaPath)) {
                    results.skipped++;
                    results.details.push({
                        shortName,
                        status: 'skipped',
                        reason: 'already hashed',
                        filename: currentMediaPath
                    });
                    continue;
                }

                // Check if image file exists on disk (try full size first)
                const unhashed_filename = `${shortName}.webp`;
                const fullPath = path.join(IMAGE_DIR, 'rezepte', 'full', unhashed_filename);

                if (!fs.existsSync(fullPath)) {
                    results.skipped++;
                    results.details.push({
                        shortName,
                        status: 'skipped',
                        reason: 'file not found',
                        path: fullPath
                    });
                    continue;
                }

                // Generate hash from the full-size image
                const imageHash = generateImageHash(fullPath);
                const hashedFilename = getHashedFilename(shortName, imageHash);

                // Create hashed versions and keep unhashed copies (for graceful degradation)
                const folders = ['full', 'thumb', 'placeholder'];
                let copiedCount = 0;

                for (const folder of folders) {
                    const unhashedPath = path.join(IMAGE_DIR, 'rezepte', folder, unhashed_filename);
                    const hashedPath = path.join(IMAGE_DIR, 'rezepte', folder, hashedFilename);

                    if (fs.existsSync(unhashedPath)) {
                        // Copy to hashed filename (keep original unhashed file)
                        fs.copyFileSync(unhashedPath, hashedPath);
                        copiedCount++;
                    }
                }

                // Update database with hashed filename
                if (!recipe.images || recipe.images.length === 0) {
                    // Create images array if it doesn't exist
                    recipe.images = [{
                        mediapath: hashedFilename,
                        alt: recipe.name || '',
                        caption: ''
                    }];
                } else {
                    // Update existing mediapath
                    recipe.images[0].mediapath = hashedFilename;
                }

                await recipe.save();

                results.migrated++;
                results.details.push({
                    shortName,
                    status: 'migrated',
                    unhashedFilename: unhashed_filename,
                    hashedFilename: hashedFilename,
                    hash: imageHash,
                    filesCopied: copiedCount,
                    note: 'Both hashed and unhashed versions saved for graceful degradation'
                });

            } catch (err) {
                results.errors.push(`${shortName}: ${err instanceof Error ? err.message : String(err)}`);
                results.details.push({
                    shortName,
                    status: 'error',
                    error: err instanceof Error ? err.message : String(err)
                });
            }
        }

        return new Response(JSON.stringify({
            success: true,
            message: `Migration complete. Migrated ${results.migrated} of ${results.total} recipes.`,
            ...results
        }), {
            status: 200,
            headers: {
                'Content-Type': 'application/json'
            }
        });

    } catch (err) {
        throw error(500, `Migration failed: ${err instanceof Error ? err.message : String(err)}`);
    }
}) satisfies RequestHandler;