
Self-Host Deployer — Skill Tool

v1.0.0

Deploy self-hosted applications to any VPS with Docker Compose. Catalog of 18 apps with production-ready configs, Nginx reverse proxy, SSL via Certbot, autom...

by @llcsamih (Samih Mansour) · MIT-0
License: MIT-0
Last updated: 2026/4/8
Security scan:
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
The skill's instructions broadly match its stated purpose (deploying apps to a VPS), but there are important mismatches and sensitive operations (asking for SSH root/sudo credentials, shelling out to git/npm/openssl/certbot, writing to /opt, etc.) that the metadata does not declare — the combination warrants caution before use.
Recommendation
This skill does what it says (deploy apps) but requires giving SSH access and running many remote commands — only proceed if you trust the skill's author and you understand the commands it will run. Before using: (1) ask the publisher for a homepage/source and verify GitHub repo URLs are official; (2) prefer creating an unprivileged/deploy user or use an ephemeral SSH key rather than providing root; (3) request the generated docker-compose.yml and .env files for manual review before applying the...
Detailed Analysis
Purpose and Capabilities
The skill's name and description (deploy self-hosted apps with Docker Compose, Nginx, Certbot, backups) align with the runtime instructions. However the registry metadata claims no required binaries or credentials while the SKILL.md clearly expects many host tools (git, docker/docker-compose, openssl, npm/node, certbot, curl, ssh) and root/sudo SSH access. That mismatch between declared requirements and actual steps is inconsistent.
Instruction Scope
The instructions explicitly ask the user for VPS IP and SSH credentials (root or sudo), instruct cloning repos, editing and writing .env files under /opt, generating secrets, installing npm packages and running commands remotely (e.g., running docker-compose, certbot). These are within the stated deployment purpose but are high-risk operations because they give the agent the ability to run arbitrary shell commands on your server and modify system-wide state.
Installation Mechanism
There is no install spec (instruction-only), which minimizes what is written to the local agent. However, the skill assumes many binaries and remote installs on the target VPS. The metadata omits required binaries while the instructions rely on external tools and clones from GitHub — this inconsistency is a red flag because it hides the true operational requirements.
Credential Requirements
The skill requests SSH credentials and a domain/email for Certbot — these are legitimately needed to deploy to a VPS, but they are sensitive. The skill does not request unrelated cloud/API keys, which is good. Still, asking for root-level SSH access gives broad control over the target machine and should only be granted with caution and explicit user understanding.
Persistence and Permissions
The skill does not set always:true and is user-invocable only. It will instruct changes on the VPS (writing stacks under /opt, configuring proxy, SSL, backups) which are normal for deployment tools, but these are system-wide changes requiring privileged access — verify commands before running. Autonomous invocation plus ability to collect credentials increases risk if you enable the agent to act without supervision.
Security comes in layers; review the code before running it.

License

MIT-0

Free to use, modify, and redistribute, with no attribution required.

Runtime Dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/4/8

Version 1.0.0
- Initial release of "self-host-deployer" skill.
- Guides users to deploy 18+ production-ready self-hosted applications on any VPS using Docker Compose, with automated setup for Nginx, SSL (Certbot), backups, resource limits, and health checks.
- Includes a detailed application catalog with minimum requirements and key app details.
- Interactive workflow: helps users select an app, gathers necessary VPS and domain info, and generates secure, app-specific Docker Compose configurations.
- Supports privacy-friendly, open-source tools as alternatives to mainstream SaaS offerings.

● Suspicious

Install command

Official: npx clawhub@latest install self-host-deployer
Mirror: npx clawhub@latest install self-host-deployer --registry https://cn.clawhub-mirror.com

Skill Documentation

Deploy production-ready self-hosted applications to any VPS with Docker Compose, Nginx, SSL, backups, and health checks.

When to Use

  • User wants to self-host an open-source application
  • User says "self-host", "deploy X", "host my own X"
  • User wants a privacy-respecting alternative to a SaaS product

When NOT to Use

  • Deploying custom application code (use /vps-deploy)
  • Deploying to managed platforms like Vercel/Netlify
  • User wants the hosted/cloud version of a service

Prerequisites

  • SSH access to a VPS (Ubuntu/Debian with Docker + Docker Compose installed)
  • A domain or subdomain pointed to the VPS IP (for SSL)
  • Minimum 1GB RAM (some apps need more — see catalog)
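
The prerequisites above can be checked up front with a short, read-only preflight script. A hedged sketch (nothing here modifies the host; the 1GB threshold mirrors the minimum stated above):

```shell
# Hedged preflight sketch for the prerequisites above; read-only checks.
check() { command -v "$1" >/dev/null 2>&1 && echo "ok: $1" || echo "MISSING: $1"; }
check docker
check curl

# Compose v2 plugin or the standalone binary
{ docker compose version >/dev/null 2>&1 || command -v docker-compose >/dev/null 2>&1; } \
  && echo "ok: compose" || echo "MISSING: compose"

# At least 1GB RAM (Supabase, Immich, etc. need 4GB -- see catalog)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null || echo 0)
[ "$mem_kb" -ge 1000000 ] && echo "ok: ram (${mem_kb} kB)" || echo "WARN: under 1GB RAM"

# DNS: uncomment once DOMAIN is set; it should resolve to this VPS before SSL
# getent hosts "$DOMAIN"
```

Run it on the VPS before Phase 1 and resolve any MISSING lines first.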

Phase 1: App Selection

Present the catalog and ask the user which app to deploy. If the user already named an app, skip to Phase 2.

App Catalog

| # | App | Category | Description | Min RAM | Ports | Database |
|---|-----|----------|-------------|---------|-------|----------|
| 1 | Supabase | Backend/BaaS | Open-source Firebase alternative (Postgres, Auth, REST, Realtime, Storage) | 4GB | 3000, 8000 | Postgres (built-in) |
| 2 | Plausible | Analytics | Privacy-friendly web analytics, no cookies | 4GB | 8000 | Postgres + ClickHouse |
| 3 | Umami | Analytics | Lightweight privacy-focused analytics (~2KB script) | 512MB | 3000 | Postgres |
| 4 | Uptime Kuma | Monitoring | Self-hosted uptime monitoring (like Uptime Robot) | 256MB | 3001 | SQLite (built-in) |
| 5 | n8n | Automation | Workflow automation platform (like Zapier) | 1GB | 5678 | Postgres |
| 6 | Gitea | Dev Tools | Lightweight Git server with CI (like GitHub) | 512MB | 3000, 22 | Postgres |
| 7 | Vaultwarden | Security | Bitwarden-compatible password manager (Rust) | 128MB | 80 | SQLite (built-in) |
| 8 | Ghostfolio | Finance | Open-source wealth management dashboard | 1GB | 3333 | Postgres + Redis |
| 9 | Langfuse | AI/LLM | LLM observability and tracing platform | 4GB | 3000 | Postgres + ClickHouse + Redis |
| 10 | Ghost | CMS | Professional publishing platform with ActivityPub | 1GB | 2368 | MySQL |
| 11 | MinIO | Storage | S3-compatible object storage (NOTE: archived Feb 2026 — consider Garage or SeaweedFS) | 1GB | 9000, 9001 | None |
| 12 | Immich | Photos | Self-hosted Google Photos alternative with AI | 4GB | 2283 | Postgres + Redis |
| 13 | Paperless-ngx | Documents | Document management with OCR and auto-tagging | 2GB | 8000 | Postgres + Redis |
| 14 | Coolify | PaaS | Open-source Heroku/Netlify alternative (280+ one-click apps) | 2GB | 8000 | Built-in |
| 15 | Stirling PDF | Documents | All-in-one PDF tool (merge, split, OCR, convert) | 512MB | 8080 | None |
| 16 | Nginx Proxy Manager | Infrastructure | Visual reverse proxy manager with Let's Encrypt | 256MB | 80, 443, 81 | SQLite |
| 17 | Portainer | Infrastructure | Docker management GUI | 256MB | 9000, 9443 | Built-in |
| 18 | Dockge | Infrastructure | Docker Compose stack manager (by Uptime Kuma creator) | 256MB | 5001 | Built-in |

Phase 2: Gather Information

Ask the user for:

  • VPS IP address and SSH credentials (root or sudo user)
  • Domain/subdomain for the app (e.g., analytics.example.com)
  • Email address for SSL certificate registration (Certbot)
  • Any app-specific settings (see gotchas per app below)
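
If the user would rather not hand over root, the skill can be pointed at a dedicated deploy user with a throwaway key instead. A sketch under those assumptions; the user name deploy and the key path are illustrative, and the printed commands are meant to be reviewed and run on the VPS manually:

```shell
# Generate an ephemeral key locally (nothing touches the VPS yet)
KEYDIR=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -C "ephemeral-deploy" -f "$KEYDIR/deploy-key"

# Print the one-time server-side setup for manual review (run as root on the VPS)
cat <<EOF
adduser --disabled-password --gecos "" deploy
usermod -aG docker deploy
install -d -m 700 -o deploy -g deploy /home/deploy/.ssh
echo '$(cat "$KEYDIR/deploy-key.pub")' >> /home/deploy/.ssh/authorized_keys
chmod 600 /home/deploy/.ssh/authorized_keys && chown deploy:deploy /home/deploy/.ssh/authorized_keys
EOF
# Then hand the skill: ssh -i "$KEYDIR/deploy-key" deploy@<vps-ip>
# Remove the key from authorized_keys when the deployment is done.
```

Note that docker group membership is itself root-equivalent on the host, so this limits exposure of the login credential rather than granting true least privilege.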

Phase 3: Generate Docker Compose

Based on the selected app, generate a docker-compose.yml with:

  • Proper service definitions and dependencies
  • Named volumes for all persistent data
  • A shared Docker network (web for proxy, internal for inter-service)
  • Resource limits via deploy.resources.limits
  • Health checks on all services
  • Automatic restart policies (unless-stopped)
  • Secure randomly-generated passwords for all secrets
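
The last bullet can be made concrete with openssl, which Phase 7 also assumes is present on the VPS:

```shell
# Sketch of secret generation, assuming openssl on the host.
# tr strips characters that tend to break .env parsing and URLs.
DB_PASSWORD=$(openssl rand -base64 24 | tr -d '/+=')
SECRET_KEY=$(openssl rand -hex 32)                    # 64 hex chars
ADMIN_TOKEN=$(openssl rand -base64 48 | tr -d '/+=')  # e.g. for Vaultwarden
echo "DB_PASSWORD=${DB_PASSWORD}"
```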

Docker Compose Templates

1. Supabase

Gotchas: Supabase has 11+ services (Postgres, GoTrue, PostgREST, Realtime, Storage, Studio, Kong, Meta, Edge Functions, Analytics/Logflare, Imgproxy). Do NOT write a compose from scratch. Clone the official repo and customize .env.

# Clone official Supabase Docker setup
git clone --depth 1 https://github.com/supabase/supabase /opt/supabase
cd /opt/supabase/docker

# Copy and configure environment
cp .env.example .env

Critical .env changes:

POSTGRES_PASSWORD=
JWT_SECRET=
ANON_KEY=
SERVICE_ROLE_KEY=
DASHBOARD_USERNAME=admin
DASHBOARD_PASSWORD=
SITE_URL=https://
API_EXTERNAL_URL=https://

Generate JWT keys:

# Generate JWT_SECRET
openssl rand -base64 32

# Generate ANON_KEY and SERVICE_ROLE_KEY using the JWT_SECRET
# Use https://supabase.com/docs/guides/self-hosting#api-keys or:
# npm install -g jsonwebtoken && node -e "const jwt=require('jsonwebtoken'); console.log(jwt.sign({role:'anon',iss:'supabase',iat:Math.floor(Date.now()/1000),exp:Math.floor(Date.now()/1000)+315360000},process.env.JWT_SECRET))"

Health check: curl -f http://localhost:3000 (Studio) and curl -f http://localhost:8000/rest/v1/ (API via Kong)


2. Plausible

Gotchas: Requires ClickHouse for event storage. The CE version is released twice per year. CPU must support SSE 4.2 (check with grep -q sse4_2 /proc/cpuinfo).

services:
  plausible:
    image: ghcr.io/plausible/community-edition:v2-latest
    container_name: plausible
    restart: unless-stopped
    command: sh -c "sleep 10 && /entrypoint.sh db createdb && /entrypoint.sh db migrate && /entrypoint.sh run"
    ports:
      - "127.0.0.1:8000:8000"
    environment:
      - BASE_URL=https://${DOMAIN}
      - SECRET_KEY_BASE=${SECRET_KEY_BASE}
      - DATABASE_URL=postgres://plausible:${DB_PASSWORD}@plausible-db:5432/plausible
      - CLICKHOUSE_DATABASE_URL=http://plausible-events-db:8123/plausible_events
    depends_on:
      plausible-db:
        condition: service_healthy
      plausible-events-db:
        condition: service_healthy
    networks:
      - internal
      - web
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: "2.0"

  plausible-db:
    image: postgres:16-alpine
    container_name: plausible-db
    restart: unless-stopped
    volumes:
      - plausible-db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=plausible
      - POSTGRES_USER=plausible
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U plausible"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 512M

  plausible-events-db:
    image: clickhouse/clickhouse-server:24-alpine
    container_name: plausible-events-db
    restart: unless-stopped
    volumes:
      - plausible-events-data:/var/lib/clickhouse
      - ./clickhouse/clickhouse-config.xml:/etc/clickhouse-server/config.d/logging.xml:ro
      - ./clickhouse/clickhouse-user-config.xml:/etc/clickhouse-server/users.d/logging.xml:ro
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:8123/ping || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 1G

volumes:
  plausible-db-data:
  plausible-events-data:

networks:
  internal:
  web:
    external: true

Create ClickHouse config files:

mkdir -p clickhouse
cat > clickhouse/clickhouse-config.xml << 'XMLEOF'
<clickhouse>
  <logger>
    <level>warning</level>
    <console>true</console>
  </logger>
  <!-- Disable unnecessary internal logging tables -->
  <query_thread_log remove="remove"/>
  <query_log remove="remove"/>
  <text_log remove="remove"/>
  <trace_log remove="remove"/>
  <metric_log remove="remove"/>
  <asynchronous_metric_log remove="remove"/>
  <session_log remove="remove"/>
  <part_log remove="remove"/>
</clickhouse>
XMLEOF

cat > clickhouse/clickhouse-user-config.xml << 'XMLEOF'
<clickhouse>
  <profiles>
    <default>
      <log_queries>0</log_queries>
      <log_query_threads>0</log_query_threads>
    </default>
  </profiles>
</clickhouse>
XMLEOF

Health check: curl -f http://localhost:8000/api/health


3. Umami

services:
  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest
    container_name: umami
    restart: unless-stopped
    ports:
      - "127.0.0.1:3000:3000"
    environment:
      DATABASE_URL: postgres://umami:${DB_PASSWORD}@umami-db:5432/umami
      APP_SECRET: ${APP_SECRET}
    depends_on:
      umami-db:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:3000/api/heartbeat || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - internal
      - web
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "1.0"

  umami-db:
    image: postgres:16-alpine
    container_name: umami-db
    restart: unless-stopped
    volumes:
      - umami-db-data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: umami
      POSTGRES_USER: umami
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U umami"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 256M

volumes:
  umami-db-data:

networks:
  internal:
  web:
    external: true

Health check: curl -f http://localhost:3000/api/heartbeat
Default login: admin / umami (change immediately)


4. Uptime Kuma

services:
  uptime-kuma:
    image: louislam/uptime-kuma:2
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "127.0.0.1:3001:3001"
    volumes:
      - uptime-kuma-data:/app/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
    healthcheck:
      test: ["CMD-SHELL", "extra/healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - web
    deploy:
      resources:
        limits:
          memory: 256M
          cpus: "0.5"

volumes:
  uptime-kuma-data:

networks:
  web:
    external: true

Gotchas: Mounting the Docker socket is optional but enables container monitoring. First visit creates the admin account.
Health check: curl -f http://localhost:3001/api/status-page/heartbeat


5. n8n

Gotchas: N8N_ENCRYPTION_KEY encrypts credentials at rest. Set before first run and NEVER lose it. Use Postgres for production, not SQLite.
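
The encryption-key warning is worth encoding as an idempotent step: generate the key only if the stack's .env does not already have one, so a re-deploy can never rotate it silently (rotation orphans every stored credential). A sketch, run from the stack directory (e.g. /opt/n8n):

```shell
# Generate N8N_ENCRYPTION_KEY exactly once; later runs reuse the stored value.
ENV_FILE=.env
touch "$ENV_FILE"
if ! grep -q '^N8N_ENCRYPTION_KEY=' "$ENV_FILE"; then
  echo "N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)" >> "$ENV_FILE"
fi
# Back up this file alongside the n8n-data volume.
```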

services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "127.0.0.1:5678:5678"
    environment:
      - N8N_HOST=${DOMAIN}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${DOMAIN}/
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=n8n-db
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
      - GENERIC_TIMEZONE=${TIMEZONE:-America/New_York}
    volumes:
      - n8n-data:/home/node/.n8n
    depends_on:
      n8n-db:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:5678/healthz || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - internal
      - web
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: "2.0"

  n8n-db:
    image: postgres:16-alpine
    container_name: n8n-db
    restart: unless-stopped
    volumes:
      - n8n-db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 512M

volumes:
  n8n-data:
  n8n-db-data:

networks:
  internal:
  web:
    external: true

Health check: curl -f http://localhost:5678/healthz


6. Gitea

Gotchas: Uses port 22 for SSH — change the host SSH port first (e.g., to 2222) or map Gitea SSH to another port. Forgejo is a community fork worth considering.

services:
  gitea:
    image: gitea/gitea:latest
    container_name: gitea
    restart: unless-stopped
    ports:
      - "127.0.0.1:3000:3000"
      - "2222:22"
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=gitea-db:5432
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD=${DB_PASSWORD}
      - GITEA__server__ROOT_URL=https://${DOMAIN}/
      - GITEA__server__SSH_DOMAIN=${DOMAIN}
      - GITEA__server__SSH_PORT=2222
    volumes:
      - gitea-data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      gitea-db:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:3000/api/healthz || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - internal
      - web
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "1.0"

  gitea-db:
    image: postgres:16-alpine
    container_name: gitea-db
    restart: unless-stopped
    volumes:
      - gitea-db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=gitea
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U gitea"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 256M

volumes:
  gitea-data:
  gitea-db-data:

networks:
  internal:
  web:
    external: true

Health check: curl -f http://localhost:3000/api/healthz


7. Vaultwarden

Gotchas: MUST be served over HTTPS or it won't work from clients. Disable signups after initial setup. Enable admin panel only temporarily.

services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    ports:
      - "127.0.0.1:8080:80"
    environment:
      - DOMAIN=https://${DOMAIN}
      - SIGNUPS_ALLOWED=true  # Set to false after creating your account
      - ADMIN_TOKEN=${ADMIN_TOKEN}  # Generate with: openssl rand -base64 48
      - WEBSOCKET_ENABLED=true
      - LOG_LEVEL=warn
    volumes:
      - vaultwarden-data:/data
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:80/alive || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - web
    deploy:
      resources:
        limits:
          memory: 256M
          cpus: "0.5"

volumes:
  vaultwarden-data:

networks:
  web:
    external: true

Health check: curl -f http://localhost:8080/alive
Post-deploy: Create your account, then set SIGNUPS_ALLOWED=false and remove ADMIN_TOKEN


8. Ghostfolio

services:
  ghostfolio:
    image: ghostfolio/ghostfolio:latest
    container_name: ghostfolio
    restart: unless-stopped
    ports:
      - "127.0.0.1:3333:3333"
    environment:
      - NODE_ENV=production
      - ACCESS_TOKEN_SALT=${ACCESS_TOKEN_SALT}
      - DATABASE_URL=postgres://ghostfolio:${DB_PASSWORD}@ghostfolio-db:5432/ghostfolio
      - JWT_SECRET_KEY=${JWT_SECRET}
      - REDIS_HOST=ghostfolio-redis
      - REDIS_PORT=6379
    depends_on:
      ghostfolio-db:
        condition: service_healthy
      ghostfolio-redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:3333/api/v1/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - internal
      - web
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: "1.0"

  ghostfolio-db:
    image: postgres:16-alpine
    container_name: ghostfolio-db
    restart: unless-stopped
    volumes:
      - ghostfolio-db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=ghostfolio
      - POSTGRES_USER=ghostfolio
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ghostfolio"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 256M

  ghostfolio-redis:
    image: redis:7-alpine
    container_name: ghostfolio-redis
    restart: unless-stopped
    volumes:
      - ghostfolio-redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 128M

volumes:
  ghostfolio-db-data:
  ghostfolio-redis-data:

networks:
  internal:
  web:
    external: true

Health check: curl -f http://localhost:3333/api/v1/health


9. Langfuse

Gotchas: Langfuse v3 splits the app into two containers (web + worker) and needs ClickHouse, Redis, and S3-compatible blob storage alongside Postgres; for v3, start from the official docker-compose.yml rather than writing one by hand. The template below is the simpler v2 single-container layout (Postgres only). Docker Compose is for low-scale/testing; use k8s for HA. All # CHANGEME secrets must be replaced.

services:
  langfuse-web:
    image: langfuse/langfuse:2
    container_name: langfuse-web
    restart: unless-stopped
    ports:
      - "127.0.0.1:3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgres://langfuse:${DB_PASSWORD}@langfuse-db:5432/langfuse
      - NEXTAUTH_URL=https://${DOMAIN}
      - NEXTAUTH_SECRET=${NEXTAUTH_SECRET}
      - SALT=${SALT}
      - ENCRYPTION_KEY=${ENCRYPTION_KEY}
      - TELEMETRY_ENABLED=false
    depends_on:
      langfuse-db:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:3000/api/public/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 60s
    networks:
      - internal
      - web
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: "2.0"

  langfuse-db:
    image: postgres:16-alpine
    container_name: langfuse-db
    restart: unless-stopped
    volumes:
      - langfuse-db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=langfuse
      - POSTGRES_USER=langfuse
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U langfuse"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 512M

volumes:
  langfuse-db-data:

networks:
  internal:
  web:
    external: true

Health check: curl -f http://localhost:3000/api/public/health


10. Ghost

Gotchas: Ghost 6 makes Docker the primary install method; the template below pins ghost:5-alpine, so bump the tag if you want 6. Requires MySQL 8. Email must be configured or login will fail (it sends a verification link). For ActivityPub support, use the official Docker tooling.

services:
  ghost:
    image: ghost:5-alpine
    container_name: ghost
    restart: unless-stopped
    ports:
      - "127.0.0.1:2368:2368"
    environment:
      url: https://${DOMAIN}
      database__client: mysql
      database__connection__host: ghost-db
      database__connection__user: ghost
      database__connection__password: ${DB_PASSWORD}
      database__connection__database: ghost
      mail__transport: SMTP
      mail__options__host: ${SMTP_HOST:-smtp.mailgun.org}
      mail__options__port: ${SMTP_PORT:-587}
      mail__options__auth__user: ${SMTP_USER}
      mail__options__auth__pass: ${SMTP_PASSWORD}
    volumes:
      - ghost-content:/var/lib/ghost/content
    depends_on:
      ghost-db:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:2368/ghost/api/v4/admin/site/ || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - internal
      - web
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: "1.0"

  ghost-db:
    image: mysql:8.0
    container_name: ghost-db
    restart: unless-stopped
    volumes:
      - ghost-db-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      MYSQL_DATABASE: ghost
      MYSQL_USER: ghost
      MYSQL_PASSWORD: ${DB_PASSWORD}
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 512M

volumes:
  ghost-content:
  ghost-db-data:

networks:
  internal:
  web:
    external: true

Health check: curl -f http://localhost:2368/ghost/api/v4/admin/site/
Setup: Visit https://${DOMAIN}/ghost to create the admin account


11. MinIO

WARNING: MinIO was archived in February 2026. The community edition lost its GUI in May 2025 and entered maintenance mode in December 2025. Consider Garage or SeaweedFS as alternatives. Included here only for legacy/existing deployments.

services:
  minio:
    image: minio/minio:latest
    container_name: minio
    restart: unless-stopped
    command: server /data --console-address ":9001"
    ports:
      - "127.0.0.1:9000:9000"
      - "127.0.0.1:9001:9001"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER:-admin}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    volumes:
      - minio-data:/data
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - web
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: "1.0"

volumes:
  minio-data:

networks:
  web:
    external: true

Health check: curl -f http://localhost:9000/minio/health/live


12. Immich

Gotchas: Heavy app — ML models need 2GB+ RAM. Use the official docker-compose.yml and .env from the Immich repo. Do NOT write compose from scratch.

# Use official Immich setup
mkdir -p /opt/immich && cd /opt/immich
wget -O docker-compose.yml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env

Critical .env changes:

UPLOAD_LOCATION=/opt/immich/upload
DB_PASSWORD=
IMMICH_MACHINE_LEARNING_URL=http://immich-machine-learning:3003

Health check: curl -f http://localhost:2283/api/server/ping


13. Paperless-ngx

services:
  paperless:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    container_name: paperless
    restart: unless-stopped
    ports:
      - "127.0.0.1:8000:8000"
    environment:
      PAPERLESS_DBHOST: paperless-db
      PAPERLESS_DBNAME: paperless
      PAPERLESS_DBUSER: paperless
      PAPERLESS_DBPASS: ${DB_PASSWORD}
      PAPERLESS_REDIS: redis://paperless-redis:6379
      PAPERLESS_URL: https://${DOMAIN}
      PAPERLESS_SECRET_KEY: ${SECRET_KEY}
      PAPERLESS_ADMIN_USER: ${ADMIN_USER:-admin}
      PAPERLESS_ADMIN_PASSWORD: ${ADMIN_PASSWORD}
      PAPERLESS_OCR_LANGUAGE: eng
      PAPERLESS_TIME_ZONE: ${TIMEZONE:-America/New_York}
    volumes:
      - paperless-data:/usr/src/paperless/data
      - paperless-media:/usr/src/paperless/media
      - paperless-export:/usr/src/paperless/export
      - paperless-consume:/usr/src/paperless/consume
    depends_on:
      paperless-db:
        condition: service_healthy
      paperless-redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8000/api/ || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - internal
      - web
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: "2.0"

  paperless-db:
    image: postgres:16-alpine
    container_name: paperless-db
    restart: unless-stopped
    volumes:
      - paperless-db-data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: paperless
      POSTGRES_USER: paperless
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U paperless"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 256M

  paperless-redis:
    image: redis:7-alpine
    container_name: paperless-redis
    restart: unless-stopped
    volumes:
      - paperless-redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 128M

volumes:
  paperless-data:
  paperless-media:
  paperless-export:
  paperless-consume:
  paperless-db-data:
  paperless-redis-data:

networks:
  internal:
  web:
    external: true

Health check: curl -f http://localhost:8000/api/


14. Coolify

Gotchas: Coolify manages its own Docker setup. Use the official install script instead of manual compose.

curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash

Coolify will be available at http://<server-ip>:8000. It handles its own reverse proxy, SSL, and database deployment.

Health check: curl -f http://localhost:8000/api/health


15. Stirling PDF

services:
  stirling-pdf:
    image: frooodle/s-pdf:latest
    container_name: stirling-pdf
    restart: unless-stopped
    ports:
      - "127.0.0.1:8080:8080"
    environment:
      - DOCKER_ENABLE_SECURITY=false
      - LANGS=en_US
    volumes:
      - stirling-data:/usr/share/tessdata
      - stirling-configs:/configs
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8080/api/v1/info/status || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - web
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "1.0"

volumes:
  stirling-data:
  stirling-configs:

networks:
  web:
    external: true

Health check: curl -f http://localhost:8080/api/v1/info/status


16. Nginx Proxy Manager

services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy-manager
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "81:81"
    volumes:
      - npm-data:/data
      - npm-letsencrypt:/etc/letsencrypt
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:81/api/ || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - web
    deploy:
      resources:
        limits:
          memory: 256M
          cpus: "0.5"

volumes:
  npm-data:
  npm-letsencrypt:

networks:
  web:
    external: true

Default login: admin@example.com / changeme
Health check: curl -f http://localhost:81/api/


17. Portainer

services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    ports:
      - "127.0.0.1:9443:9443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer-data:/data
    healthcheck:
      test: ["CMD-SHELL", "curl -fk https://localhost:9443/api/system/status || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - web
    deploy:
      resources:
        limits:
          memory: 256M
          cpus: "0.5"

volumes:
  portainer-data:

networks:
  web:
    external: true

Health check: curl -fk https://localhost:9443/api/system/status


18. Dockge

services:
  dockge:
    image: louislam/dockge:1
    container_name: dockge
    restart: unless-stopped
    ports:
      - "127.0.0.1:5001:5001"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - dockge-data:/app/data
      - /opt/stacks:/opt/stacks
    environment:
      - DOCKGE_STACKS_DIR=/opt/stacks
    networks:
      - web
    deploy:
      resources:
        limits:
          memory: 256M
          cpus: "0.5"

volumes:
  dockge-data:

networks:
  web:
    external: true

Health check: curl -f http://localhost:5001


Phase 4: Nginx Reverse Proxy

Generate an Nginx config for the selected app. All app ports bind to 127.0.0.1 so they're only accessible through the proxy.

# Create Nginx site config. The heredoc is unquoted so ${APP_NAME}, ${DOMAIN},
# and ${APP_PORT} expand now; nginx's own runtime variables are escaped with \$.
cat > /etc/nginx/sites-available/${APP_NAME} << NGINXEOF
server {
    listen 80;
    server_name ${DOMAIN};

    location / {
        return 301 https://\$host\$request_uri;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    listen 443 ssl http2;
    server_name ${DOMAIN};

    ssl_certificate /etc/letsencrypt/live/${DOMAIN}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/${DOMAIN}/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Proxy settings
    location / {
        proxy_pass http://127.0.0.1:${APP_PORT};
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto \$scheme;

        # WebSocket support (needed for Supabase Realtime, Uptime Kuma, n8n, Vaultwarden)
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade;
        proxy_set_header Connection "upgrade";

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Large uploads (for Ghost, Immich, Paperless, MinIO)
        client_max_body_size 100M;
    }
}
NGINXEOF

# Enable the site
ln -sf /etc/nginx/sites-available/${APP_NAME} /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx

App-specific Nginx adjustments:

  • Immich: Set client_max_body_size 50G; for photo uploads
  • MinIO: Add separate location / blocks for API (port 9000) and console (port 9001)
  • Supabase: Proxy to Kong on port 8000, Studio on port 3000 needs a separate subdomain or path
  • Gitea: Add a stream block for SSH passthrough if using port 2222
  • Vaultwarden: Add WebSocket location: location /notifications/hub { proxy_pass ...; }
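
The Vaultwarden bullet, spelled out: the /notifications/hub location needs the same upgrade headers as the main proxy block. A sketch written to a local file for review (the filename is illustrative; paste the location into the 443 server block, and note that recent Vaultwarden versions also serve WebSockets on the main port, so it may not be needed at all):

```shell
cat > vaultwarden-ws.conf << 'EOF'
# Paste inside the HTTPS server block; 8080 matches the Vaultwarden template
location /notifications/hub {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
EOF
```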

Phase 5: SSL via Certbot

# Install Certbot if not present
apt-get update && apt-get install -y certbot python3-certbot-nginx

# Obtain SSL certificate
certbot --nginx -d ${DOMAIN} --non-interactive --agree-tos -m ${EMAIL}

# Verify auto-renewal is set up
certbot renew --dry-run

# Check the systemd timer
systemctl status certbot.timer


Phase 6: Backup Configuration

Create a backup script for the app's persistent data. Customize based on which databases the app uses.

cat > /opt/backups/backup-${APP_NAME}.sh << BACKUPEOF
#!/bin/bash
set -euo pipefail

# Deploy-time values (APP_NAME, DB_USER) are substituted when this heredoc is
# written; runtime expansions are escaped with \$ so they happen at backup time.
BACKUP_DIR="/opt/backups/${APP_NAME}"
TIMESTAMP=\$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30

mkdir -p "\$BACKUP_DIR"

# === Postgres Backup (if applicable) ===
docker exec ${APP_NAME}-db pg_dumpall -U ${DB_USER} | gzip > "\$BACKUP_DIR/db_\$TIMESTAMP.sql.gz"

# === MySQL Backup (Ghost only) ===
# docker exec ghost-db mysqldump -u ghost -p${DB_PASSWORD} ghost | gzip > "\$BACKUP_DIR/db_\$TIMESTAMP.sql.gz"

# === Volume Backup ===
# Stop the app briefly for a consistent backup (optional; skip for near-zero downtime)
# docker compose -f /opt/${APP_NAME}/docker-compose.yml stop ${APP_NAME}
(cd /var/lib/docker/volumes && tar czf "\$BACKUP_DIR/volumes_\$TIMESTAMP.tar.gz" ${APP_NAME}*)
# docker compose -f /opt/${APP_NAME}/docker-compose.yml start ${APP_NAME}

# === SQLite Backup (Vaultwarden, Uptime Kuma) ===
# docker exec ${APP_NAME} sqlite3 /data/db.sqlite3 ".backup '/data/backup.sqlite3'"
# docker cp ${APP_NAME}:/data/backup.sqlite3 "\$BACKUP_DIR/db_\$TIMESTAMP.sqlite3"

# === Cleanup old backups ===
find "\$BACKUP_DIR" -type f -mtime +\$RETENTION_DAYS -delete

echo "[\$(date)] Backup complete: \$BACKUP_DIR (\$TIMESTAMP)"
BACKUPEOF

chmod +x /opt/backups/backup-${APP_NAME}.sh

# Add to crontab: daily at 3 AM (note: re-running appends a duplicate entry; check with crontab -l)
(crontab -l 2>/dev/null; echo "0 3 * * * /opt/backups/backup-${APP_NAME}.sh >> /var/log/backup-${APP_NAME}.log 2>&1") | crontab -
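The retention rule in the backup script (`find ... -mtime +$RETENTION_DAYS -delete`) can be verified locally before trusting it with real backups. This sketch uses a temp directory and GNU `touch -d` to simulate an old backup:

```shell
# Demonstrate the RETENTION_DAYS cleanup in an isolated temp dir
BACKUP_DIR=$(mktemp -d)
RETENTION_DAYS=30

touch -d "40 days ago" "$BACKUP_DIR/db_old.sql.gz"  # past retention, should be deleted
touch "$BACKUP_DIR/db_new.sql.gz"                   # fresh backup, should survive

find "$BACKUP_DIR" -type f -mtime +$RETENTION_DAYS -delete
ls "$BACKUP_DIR"
```

Only `db_new.sql.gz` should remain: `-mtime +30` matches files whose modification time is more than 30 whole days in the past.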


Phase 7: Deploy and Verify

Execute these steps in order:

# 1. Create the Docker network if it doesn't exist
docker network create web 2>/dev/null || true

# 2. Create app directory and write compose file
mkdir -p /opt/${APP_NAME}
# Write docker-compose.yml and .env to /opt/${APP_NAME}/

# 3. Generate secrets
cat > /opt/${APP_NAME}/.env << ENVEOF
DOMAIN=${DOMAIN}
DB_PASSWORD=$(openssl rand -base64 24 | tr -d '/+=')
SECRET_KEY=$(openssl rand -hex 32)
# ... app-specific secrets
ENVEOF
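The openssl generators in step 3 can be sanity-checked on any machine before a deploy: a 32-byte hex key is always 64 characters, while the base64 password varies slightly in length after stripping `/`, `+`, and `=`:

```shell
# Same generators as the .env step; verify their shape before trusting them
DB_PASSWORD=$(openssl rand -base64 24 | tr -d '/+=')
SECRET_KEY=$(openssl rand -hex 32)

echo "DB_PASSWORD: ${#DB_PASSWORD} chars"  # ~32, minus any stripped /+= characters
echo "SECRET_KEY:  ${#SECRET_KEY} chars"   # always 64
```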

# 4. Pull and start
cd /opt/${APP_NAME}
docker compose pull
docker compose up -d

# 5. Wait for services to be healthy
echo "Waiting for services to start..."
sleep 15

# 6. Run health check
curl -f http://localhost:${APP_PORT}/${HEALTH_ENDPOINT} && echo "HEALTHY" || echo "UNHEALTHY — check logs with: docker compose logs"

# 7. Set up Nginx and SSL (from Phases 4-5)

# 8. Final verification via HTTPS
curl -f https://${DOMAIN}/${HEALTH_ENDPOINT} && echo "DEPLOYMENT COMPLETE" || echo "SSL/PROXY ISSUE — check nginx and certbot"
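The fixed `sleep 15` in step 5 is a guess; slow apps (Immich, Supabase) can take longer to come up. A bounded poll is more reliable. This is a sketch; `APP_PORT` and `HEALTH_ENDPOINT` are assumed to be set as in the steps above:

```shell
# Poll a check command until it succeeds or attempts run out
wait_healthy() {
  cmd="$1"; tries="${2:-30}"; delay="${3:-2}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if sh -c "$cmd" > /dev/null 2>&1; then
      echo "HEALTHY"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "UNHEALTHY"
  return 1
}

# During a deploy you would call, e.g.:
# wait_healthy "curl -f http://localhost:${APP_PORT}/${HEALTH_ENDPOINT}" 30 2
```

With 30 attempts and a 2-second delay this waits up to a minute, but returns as soon as the endpoint answers.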


Phase 8: Post-Deploy Checklist

Present this checklist to the user after deployment:

  • [ ] App is accessible at https://${DOMAIN}
  • [ ] Admin account created (first visit for most apps)
  • [ ] Default passwords changed
  • [ ] Signups disabled (if applicable — Vaultwarden, Gitea)
  • [ ] Email/SMTP configured (if applicable — Ghost, n8n)
  • [ ] Backup cron is running (crontab -l)
  • [ ] Firewall only exposes ports 80, 443, and SSH (ufw status)
  • [ ] Docker auto-updates considered (Watchtower or manual update schedule)
  • [ ] Monitoring set up (deploy Uptime Kuma if not already running)

Quick Reference: All Health Check URLs

App               Health Check URL
Supabase          http://localhost:3000 + http://localhost:8000/rest/v1/
Plausible         http://localhost:8000/api/health
Umami             http://localhost:3000/api/heartbeat
Uptime Kuma       http://localhost:3001
n8n               http://localhost:5678/healthz
Gitea             http://localhost:3000/api/healthz
Vaultwarden       http://localhost:8080/alive
Ghostfolio        http://localhost:3333/api/v1/health
Langfuse          http://localhost:3000/api/public/health
Ghost             http://localhost:2368/ghost/api/v4/admin/site/
MinIO             http://localhost:9000/minio/health/live
Immich            http://localhost:2283/api/server/ping
Paperless-ngx     http://localhost:8000/api/
Coolify           http://localhost:8000/api/health
Stirling PDF      http://localhost:8080/api/v1/info/status
Nginx Proxy Mgr   http://localhost:81/api/
Portainer         https://localhost:9443/api/system/status
Dockge            http://localhost:5001
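When several apps share a VPS, the URLs in the table above can be swept in one pass. A minimal sketch (extend the `check` calls with whichever apps you deployed):

```shell
# Report OK/FAIL for each health endpoint, with a 5-second timeout per check
check() {
  if curl -fsS -m 5 "$2" > /dev/null 2>&1; then
    echo "OK   $1"
  else
    echo "FAIL $1"
  fi
}

check "n8n"         "http://localhost:5678/healthz"
check "Vaultwarden" "http://localhost:8080/alive"
check "Gitea"       "http://localhost:3000/api/healthz"
```

Run it from the VPS itself, since the table's URLs are bound to localhost; a FAIL line means the container is down or listening on a different port.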