
workflow-migrate — Skill Tool

v1.0.0

Migrate N8N/Zapier/Make workflows to production-grade Python or Node.js scripts. Given a workflow description or paste, rewrites automation logic with...

License
MIT-0
Last Updated
2026/4/9
Security Scan
VirusTotal
Suspicious
View Report
OpenClaw
Suspicious
medium confidence
The skill's purpose (migrating automations to scripts) is plausible, but the runtime instructions reference environment secrets, file I/O, and tooling scope that are not declared in the metadata, creating mismatches you should understand before installing.
Assessment Recommendations
This skill appears to do what it says (generate production-style scripts) but has some inconsistencies you should address before installing: 1) The SKILL.md templates expect API keys and a .env file, but the skill metadata doesn't declare any required credentials — confirm with the author which secrets you'll need and how they should be provided. 2) The allowed-tools include filesystem and shell access; avoid granting the agent access to workspace files that contain unrelated secrets (home dir, ...
Detailed Analysis
Purpose and Capabilities
The skill's name and description match the SKILL.md: it parses workflow descriptions and outputs runnable Python/Node scripts. However, the templates it generates assume use of environment variables (API_KEY, WEBHOOK_URL, etc.), .env files, and filesystem logging — none of which are declared in the skill metadata. That mismatch is noteworthy but may be an omission rather than malicious intent.
Instruction Scope
SKILL.md instructs the agent to parse user-provided workflow JSON/pastes, ask clarifying questions, and then generate, write, and edit full scripts (including config loading and log file creation). The declared allowed-tools (Read, Write, Edit, Bash, Glob, Grep, WebSearch) give the agent filesystem and shell capability; the instructions don't explicitly tell the agent to read arbitrary host files, but generated templates call load_dotenv() / require('dotenv') which will read a .env file if present. That combination gives the agent potential to read environment files in the workspace—this is within scope for code-generation but increases the risk of accidental exposure of unrelated secrets if the agent is granted broad file access.
Installation Mechanism
Instruction-only skill with no install spec and no code files — lowest install risk. No downloads, package installs, or external installers are specified in the metadata.
Credential Requirements
The skill metadata declares no required environment variables or credentials, yet the code templates expect secrets via environment variables (.env) such as API_KEY and WEBHOOK_URL. Requiring user API keys to reach third‑party APIs is reasonable, but the omission in metadata is a mismatch: the skill does not explicitly request those secrets up front, and the allowed-tools set could let the agent access .env or other local credential files. This creates an unclear credential surface and potential for accidental exposure or misuse of unrelated secrets.
Persistence and Permissions
always is false and the skill is user-invocable (normal). The skill writes generated scripts and log files per its instructions, which is expected behavior for a code-generation/migration tool; there is no indication it tries to modify other skills or system-wide agent settings.
Pre-install Considerations
  1. The SKILL.md templates expect API keys and a .env file, but the skill metadata doesn't declare any required credentials — confirm with the author which secrets you'll need and how they should be provided.
  2. The allowed-tools include filesystem and shell access; avoid granting the agent access to workspace files that contain unrelated secrets (home dir, CI/CD credentials, SSH keys, etc.).
  3. Inspect any generated scripts for hardcoded endpoints, excessive logging of sensitive payloads, and proper error handling before running them in production.
  4. Run generated code in a sandbox or isolated environment, and rotate any secrets used for initial testing. If you need higher assurance, ask the publisher for an explicit list of required env vars and a minimal reproduction that shows exactly which files will be read/written.
Security comes in layers; review the code before running it.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime Dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/3/4

workflow-migrate 1.0.0
  • Initial release of workflow-migrate.
  • Converts N8N, Zapier, or Make workflows into production-ready Python or Node.js scripts.
  • Adds retry, exponential backoff, logging, and self-healing (alerts, idempotency, dead-letter queue, heartbeats) to migrated scripts.
  • Guides users interactively when workflow input is vague.
  • Generated scripts are runnable, testable, and billable as deliverables.

● Suspicious

Install Command

Official: npx clawhub@latest install workflow-migrate
Mirror: npx clawhub@latest install workflow-migrate --registry https://cn.clawhub-mirror.com

Skill Documentation

# Workflow Migrate — Automation Migration Tool

Why This Exists

N8N/Zapier/Make workflows break silently, can't be version-controlled, and cost $50-500/month in SaaS fees. This skill rewrites them as standalone scripts that run forever with zero subscription cost. Each migration is a $500-5000 billable deliverable.

Trigger

Use when: "migrate this workflow", "convert N8N to Python", "rewrite my Zapier", "turn this automation into a script", "get off N8N"
Invoked as: /workflow-migrate [workflow description or paste]

Process

Step 1: Parse the Workflow Input

From $ARGUMENTS:
  • If it's a description ("when a form is submitted, send email and update Airtable"): parse directly
  • If it's N8N JSON: read and extract nodes/connections
  • If it's Zapier steps: parse the trigger + action chain
  • If it's Make (formerly Integromat) scenario: parse modules
Extract and document:
Triggers:
  • Webhook (POST endpoint)
  • Cron/schedule (every X minutes/hours)
  • Form submission
  • Email received
  • File drop (S3, Google Drive, local folder)
  • DB row created/updated
Actions:
  • HTTP/API calls (list each endpoint, method, payload)
  • Database reads/writes
  • Email sends
  • File operations
  • Conditional logic (if/else branches)
  • Loops over arrays
  • Data transformations
Data flow:
  • Input payload fields used
  • Intermediate computed values
  • Output destinations
If the workflow description is too vague, ask 2-3 targeted questions before proceeding:
  • "What triggers it — webhook, schedule, or something else?"
  • "Which API endpoints does it call and with what data?"
  • "What counts as success? What should happen on failure?"
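The parsing in Step 1 can be sketched for N8N exports. The top-level `nodes` and `connections` keys match N8N's standard workflow export JSON; the helper name and the one-node example are illustrative:

```python
import json

def parse_n8n_export(raw: str) -> dict:
    """Summarize nodes and connections from an N8N workflow export (JSON)."""
    wf = json.loads(raw)
    nodes = [{"name": n.get("name"), "type": n.get("type")} for n in wf.get("nodes", [])]
    return {"nodes": nodes, "connections": wf.get("connections", {})}

# Hypothetical one-node export with a webhook trigger
raw = '{"nodes": [{"name": "Webhook", "type": "n8n-nodes-base.webhook"}], "connections": {}}'
summary = parse_n8n_export(raw)
print(summary["nodes"][0]["type"])  # n8n-nodes-base.webhook
```

Zapier and Make inputs are usually prose or screenshots rather than JSON, so those branches fall back to plain-text parsing.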

Step 2: Choose Language

Default to Python unless:
  • The workflow is heavily async/event-driven (prefer Node.js)
  • Existing codebase is Node.js
  • Kevin explicitly requests Node
Python stack: requests, schedule, logging, tenacity (retry), python-dotenv
Node.js stack: axios, node-cron, winston, async-retry, dotenv

Step 3: Write the Script

Generate a complete, runnable script. Required elements:

Python Template Structure:

```python
#!/usr/bin/env python3
"""
[Workflow Name] — Migrated from [N8N/Zapier/Make]
Original: [brief description of what the workflow did]
Migrated: [date]

Usage:
    python workflow_[name].py             # run once
    python workflow_[name].py --schedule  # run on schedule
    python workflow_[name].py --dry-run   # test without side effects
"""
import os
import sys
import logging
import time
import argparse
from datetime import datetime
from typing import Optional, Dict, Any

import requests
from tenacity import retry, stop_after_attempt, wait_exponential, retry_if_exception_type
from dotenv import load_dotenv

load_dotenv()

# ─── Logging ───────────────────────────────────────────────────────────────────
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    handlers=[
        logging.StreamHandler(sys.stdout),
        logging.FileHandler(f"logs/workflow_{datetime.now().strftime('%Y%m')}.log"),
    ],
)
log = logging.getLogger(__name__)

# ─── Config ────────────────────────────────────────────────────────────────────
API_KEY = os.getenv("API_KEY")          # from .env
WEBHOOK_URL = os.getenv("WEBHOOK_URL")  # destination
DRY_RUN = False

# ─── Retry Decorator ───────────────────────────────────────────────────────────
@retry(
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, min=2, max=30),
    retry=retry_if_exception_type((requests.RequestException, ConnectionError)),
    before_sleep=lambda rs: log.warning(f"Retrying (attempt {rs.attempt_number})..."),
    reraise=True,
)
def api_call(method: str, url: str, **kwargs) -> Dict[str, Any]:
    """Make an HTTP call with automatic retry and exponential backoff."""
    if DRY_RUN:
        log.info(f"[DRY RUN] {method.upper()} {url} payload={kwargs.get('json', {})}")
        return {"dry_run": True}
    resp = requests.request(method, url, timeout=30, **kwargs)
    resp.raise_for_status()
    return resp.json()
```

Node.js Template Structure:

```javascript
#!/usr/bin/env node
/*
[Workflow Name] — Migrated from [N8N/Zapier/Make]
Original: [brief description]
Migrated: [date]

Usage:
    node workflow_[name].js             # run once
    node workflow_[name].js --schedule  # run on schedule
    node workflow_[name].js --dry-run   # test without side effects
*/
require('dotenv').config();
const axios = require('axios');
const retry = require('async-retry');
const cron = require('node-cron');
const winston = require('winston');
const fs = require('fs');

// ─── Logger ────────────────────────────────────────────────────────────────────
const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
  transports: [
    new winston.transports.Console({ format: winston.format.simple() }),
    new winston.transports.File({ filename: `logs/workflow_${new Date().toISOString().slice(0, 7)}.log` }),
  ],
});

// ─── Config ────────────────────────────────────────────────────────────────────
const API_KEY = process.env.API_KEY;
const DRY_RUN = process.argv.includes('--dry-run');

// ─── Retry Wrapper ─────────────────────────────────────────────────────────────
async function apiCall(method, url, data = {}, headers = {}) {
  return retry(async (bail, attempt) => {
    if (DRY_RUN) {
      logger.info(`[DRY RUN] ${method.toUpperCase()} ${url}`, { data });
      return { dry_run: true };
    }
    try {
      const resp = await axios({ method, url, data, headers, timeout: 30000 });
      return resp.data;
    } catch (err) {
      if (err.response && err.response.status < 500) bail(err); // 4xx = don't retry
      logger.warn(`Retrying attempt ${attempt}...`, { error: err.message });
      throw err;
    }
  }, { retries: 3, minTimeout: 2000, maxTimeout: 30000, factor: 2 });
}
```

For each action in the workflow, write a dedicated function:
  • One function per logical action (e.g., fetch_leads(), send_notification(), update_database())
  • Functions are composable and testable in isolation
  • Every function logs what it's doing: start, result, any errors
Main orchestrator:

```python
def run(dry_run: bool = False):
    global DRY_RUN
    DRY_RUN = dry_run
    log.info("=== Workflow started ===")
    try:
        # Step 1: [action name]
        data = fetch_source_data()
        log.info(f"Fetched {len(data)} records")
        # Step 2: [action name]
        for item in data:
            result = process_item(item)
            if result:
                send_to_destination(result)
        log.info("=== Workflow completed successfully ===")
    except Exception as e:
        log.error(f"Workflow failed: {e}", exc_info=True)
        send_alert(f"Workflow [name] failed: {e}")  # self-healing alert
        raise
```

Self-healing patterns to include:
  • Alert on failure (Telegram message via Kevin's bot token if available, else log to file)
  • Idempotency: skip already-processed records (use a state file or DB flag)
  • Dead-letter queue: failed items saved to failed_[date].json for manual review
  • Heartbeat: log "still alive" every N runs for scheduled workflows
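The idempotency and dead-letter patterns above can be sketched as small helpers; the file layout (`state/processed_ids.json`, `failed_[date].json`) follows the skill's own conventions, while the function names are illustrative:

```python
import json
import tempfile
from pathlib import Path

def load_processed(state_file: Path) -> set:
    """Return IDs already handled in a previous run (empty on first run)."""
    return set(json.loads(state_file.read_text())) if state_file.exists() else set()

def mark_processed(item_id: str, state_file: Path) -> None:
    """Record an ID so reruns skip it (idempotency)."""
    state_file.parent.mkdir(parents=True, exist_ok=True)
    ids = load_processed(state_file)
    ids.add(item_id)
    state_file.write_text(json.dumps(sorted(ids)))

def dead_letter(item: dict, date: str, out_dir: Path) -> Path:
    """Append a failed item to failed_[date].json for manual review."""
    path = out_dir / f"failed_{date}.json"
    failed = json.loads(path.read_text()) if path.exists() else []
    failed.append(item)
    path.write_text(json.dumps(failed, indent=2))
    return path

# Demo in a temp dir so nothing is written to the workspace
work = Path(tempfile.mkdtemp())
state = work / "state" / "processed_ids.json"
mark_processed("rec_001", state)
dlq = dead_letter({"id": "rec_002", "error": "timeout"}, "2026-01", work)
```

In a real migration the state file would live next to the generated script, as described in the Notes section.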

Step 4: Generate .env.example

```bash
# [Workflow Name] — Environment Variables
# Copy to .env and fill in values
API_KEY=your_api_key_here
WEBHOOK_URL=https://...
DATABASE_URL=...

# Optional: Telegram alerts on failure
TELEGRAM_BOT_TOKEN=
TELEGRAM_CHAT_ID=8062428674
```

Step 5: Generate requirements.txt or package.json

Python:
```
requests>=2.31.0
tenacity>=8.2.0
python-dotenv>=1.0.0
schedule>=1.2.0   # only if cron-triggered
```

Node.js:

```json
{
  "dependencies": {
    "axios": "^1.6.0",
    "async-retry": "^1.3.3",
    "node-cron": "^3.0.3",
    "winston": "^3.11.0",
    "dotenv": "^16.3.1"
  }
}
```

Step 6: If the Workflow is Recurring — Generate a SKILL.md

If the workflow runs on a schedule or will be reused:
  • Generate a SKILL.md in the same output directory
  • Name it after the workflow function
  • Description: "Run [workflow name] — [what it does in 10 words]"
  • Instructions: path to script, how to run it, what env vars are needed
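A generated SKILL.md for a recurring workflow might look like the fragment below. The frontmatter layout and all names (`lead-sync`, paths, env vars) are illustrative assumptions, not output the skill is guaranteed to produce:

```markdown
---
name: lead-sync
description: Run lead-sync — fetch new form leads and push them to Airtable
---

Run the migrated workflow:

    cd workflow_lead_sync/
    python workflow_lead_sync.py

Required env vars (see .env.example): API_KEY, WEBHOOK_URL
```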

Step 7: Save Outputs and Print Migration Summary

Output location: ask Kevin where to save; default to ./workflow_[name]/
Create:
  • workflow_[name].py (or .js)
  • .env.example
  • requirements.txt (or package.json)
  • SKILL.md (if recurring)
  • README.md (quick usage guide — 20 lines max)
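The save step above amounts to scaffolding one directory per workflow. A minimal sketch, with a hypothetical "lead_sync" workflow name and a temp dir standing in for the chosen output location:

```python
import tempfile
from pathlib import Path

def save_outputs(name: str, files: dict, base: str) -> Path:
    """Create workflow_[name]/ under base and write each generated artifact."""
    out = Path(base) / f"workflow_{name}"
    out.mkdir(parents=True, exist_ok=True)
    for filename, content in files.items():
        (out / filename).write_text(content)
    return out

# Hypothetical artifacts for a "lead_sync" migration
out_dir = save_outputs("lead_sync", {
    "workflow_lead_sync.py": "# generated script\n",
    ".env.example": "API_KEY=your_api_key_here\n",
    "requirements.txt": "requests>=2.31.0\n",
}, tempfile.mkdtemp())
print(out_dir.name)  # workflow_lead_sync
```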
Print migration summary:

```
Migration complete.

Original: [N8N/Zapier/Make] workflow — [X] nodes/steps
Output:   ./workflow_[name]/workflow_[name].py

What changed:
- [X] N8N nodes → [Y] lines of Python
- Added: retry with exponential backoff (3 attempts, 2s-30s)
- Added: rotating log file (monthly)
- Added: dry-run mode (--dry-run flag)
- Added: failure alerts via [Telegram/log]
- Removed: $XX/month SaaS subscription

To run:
  cd workflow_[name]/
  cp .env.example .env                 # fill in your keys
  pip install -r requirements.txt
  python workflow_[name].py --dry-run  # test first
  python workflow_[name].py            # run for real
```

Error Handling

  • Ambiguous workflow: ask 2-3 targeted questions, don't guess at API endpoints
  • Proprietary N8N nodes (e.g., OpenAI node): rewrite using the raw API — include API docs link in comments
  • Very complex workflow (>15 nodes): break into multiple scripts with a coordinator, document the dependency order
  • No credentials visible: generate .env.example with clear placeholders, add comments explaining where each key comes from
  • Webhook trigger: generate a Flask/Express endpoint stub that receives the webhook and calls the workflow function

Notes

  • Always test with --dry-run before running for real
  • State file for idempotency: ./state/processed_ids.json — create state/ dir if it doesn't exist
  • For scheduled runs, prefer cron / `schedule` over external task runners (fewer dependencies)
Data source: ClawHub ↗ · Chinese localization: Lobster Skill Library