Overview
Run ComfyUI workflows on the local server (default 127.0.0.1:8188) using API-format JSON and return output images.
Editing the workflow before running
The run script only takes --workflow <path>. You must
inspect and edit the workflow JSON before running, using your best knowledge of the ComfyUI API format. Do not assume fixed node IDs,
class_type names, or
_meta.title values — the user may have updated the default workflow or supplied a custom one.
For every run (including the default workflow):
- Read the workflow JSON (default:
skills/comfyui/assets/default-workflow.json, or the path/file the user gave).
- Identify prompt-related nodes by inspecting the graph: look for nodes that hold the main text prompt — e.g.
PrimitiveStringMultiline, CLIPTextEncode (positive text), or any node with _meta.title or class_type suggesting "Prompt" / "positive" / "text". Update the corresponding input (e.g. inputs.value, or the text input to the encoder) to the image prompt you derived from the user (subject, style, lighting, quality). If the user didn’t ask for a custom image, you can leave the existing prompt or tweak only if needed.
- Optionally identify style/prefix nodes — e.g.
StringConcatenate, or a second string input that acts as style. Set them if the user asked for a specific style or to clear a default prefix.
- Optionally set a new seed — find sampler-like nodes (e.g.
KSampler, BasicGuider, or any node with a seed input) and set seed to a new random integer so each run can differ.
- Write the modified workflow to a temp file (e.g.
skills/comfyui/assets/tmp-workflow.json). Use ~/ComfyUI/venv/bin/python for any inline Python; do not use bare python.
- Run:
comfyui_run.py --workflow <path>.
If the workflow structure is unclear or you can’t find prompt/sampler nodes, run the file as-is and only change what you can reliably identify. Same approach for arbitrary user-supplied JSON: inspect first, edit at your best knowledge, then run.
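The editing steps above can be sketched in Python. This is a minimal sketch, not the only correct approach; the node types, titles, and IDs below are illustrative examples, and you should match whatever the actual graph contains:

```python
import random

def edit_workflow(wf, prompt):
    """Update prompt text and randomize seeds in an API-format workflow dict."""
    for node in wf.values():
        ct = node.get("class_type", "")
        title = node.get("_meta", {}).get("title", "").lower()
        inputs = node.setdefault("inputs", {})
        # Prompt-holding nodes: update whichever text input they expose.
        if ct == "CLIPTextEncode" and ("positive" in title or "prompt" in title):
            inputs["text"] = prompt
        elif ct == "PrimitiveStringMultiline":
            inputs["value"] = prompt
        # Sampler-like nodes: a fresh random seed so each run can differ.
        if "seed" in inputs:
            inputs["seed"] = random.randint(0, 2**32 - 1)
    return wf

# Toy two-node graph standing in for a real workflow JSON.
demo = {
    "6": {"class_type": "CLIPTextEncode", "_meta": {"title": "Positive Prompt"},
          "inputs": {"text": "old prompt"}},
    "3": {"class_type": "KSampler", "inputs": {"seed": 0, "steps": 20}},
}
edited = edit_workflow(demo, "a rainy Tokyo street at night")
```

In practice you would json.load the workflow file, apply edits like these, and json.dump the result to the temp path before invoking the run script.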
Run script (single responsibility)
~/ComfyUI/venv/bin/python skills/comfyui/scripts/comfyui_run.py \
  --workflow <path>
The script only queues the workflow and polls until done. It prints JSON with prompt_id and output images. All prompt/style/seed changes are done by you in the JSON beforehand.
If the server isn’t reachable
If the run script fails with a connection error (e.g. connection refused or timeout to 127.0.0.1:8188), ComfyUI may not be installed or not running.
Check: Does ~/ComfyUI exist and contain main.py?
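A quick liveness probe before retrying (a sketch; /system_stats is assumed here as a lightweight ComfyUI endpoint, but any GET against the base URL serves the same purpose):

```python
import urllib.request, urllib.error

def comfyui_up(base="http://127.0.0.1:8188", timeout=3):
    """Return True if the ComfyUI API answers, False on refused/timeout."""
    try:
        with urllib.request.urlopen(base + "/system_stats", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False
```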
- If not installed: Install ComfyUI (e.g. clone the repo, create a venv, install dependencies, then start the server). Example:
git clone https://github.com/comfyanonymous/ComfyUI.git ~/ComfyUI
cd ~/ComfyUI
python3 -m venv venv
~/ComfyUI/venv/bin/pip install -r requirements.txt
Then start the server (see below). Tell the user they may need to install model weights into
~/ComfyUI/models/ depending on the workflow.
- If installed but not running: Start the ComfyUI server so the API is available on port 8188. Example:
~/ComfyUI/venv/bin/python ~/ComfyUI/main.py --listen 127.0.0.1
Run in the background or in a separate terminal so it keeps running. Then retry the workflow run.
Use ~ (or the user’s home) for paths so it works on their machine.
Model weights from URLs
When the user pastes or sends a
list of model weight URLs (one per line, or comma-separated), download those files into the ComfyUI installation so the workflow can use them later.
- Normalize the list — one URL per line; strip empty lines and comments (lines starting with
#).
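Normalization can be as simple as the sketch below (it assumes lines using the optional "url subfolder" format contain no commas):

```python
def normalize_urls(raw):
    """Split on newlines/commas; drop blank lines and '#' comments."""
    lines = (part.strip() for part in raw.replace(",", "\n").splitlines())
    return [ln for ln in lines if ln and not ln.startswith("#")]

text = "# checkpoints\nhttps://example.com/a.safetensors, https://example.com/b.safetensors"
urls = normalize_urls(text)  # two clean URLs, comment dropped
```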
- Run the download script with the ComfyUI base path (default
~/ComfyUI). The script uses pget for parallel downloads when available; if pget is not in PATH, it installs it to ~/.local/bin automatically (no sudo). If pget cannot be installed (e.g. unsupported OS/arch), it falls back to a built-in download. Use the ComfyUI venv Python so the script runs correctly:
~/ComfyUI/venv/bin/python skills/comfyui/scripts/download_weights.py --base ~/ComfyUI
Pass URLs as arguments, or pipe a file/list on stdin:
echo "https://example.com/model.safetensors" | ~/ComfyUI/venv/bin/python skills/comfyui/scripts/download_weights.py --base ~/ComfyUI
Or save the user’s list to a temp file and run:
~/ComfyUI/venv/bin/python skills/comfyui/scripts/download_weights.py --base ~/ComfyUI < /tmp/weight_urls.txt
To force the built-in download (no pget): add
--no-pget.
- Subfolder: The script infers the ComfyUI models subfolder from the URL/filename (e.g.
vae, clip, loras, checkpoints, text_encoders, controlnet, upscale_models). The user can optionally specify a subfolder per line as url subfolder (e.g. https://.../model.safetensors vae). You can also pass a default with --subfolder loras so all URLs in that run go to models/loras/.
- Existing files: By default the script skips URLs that already exist on disk; use
--overwrite to replace.
- Paths: Files are written under
~/ComfyUI/models/<subfolder>/. Tell the user where each file was saved and that they can run the workflow once the ComfyUI server is (re)started if needed.
Supported subfolders (under ComfyUI/models/): checkpoints, clip, clip_vision, controlnet, diffusion_models, embeddings, loras, text_encoders, unet, vae, vae_approx, upscale_models, and others. Use --subfolder <name> when the auto-inference is wrong.
After run
Outputs are saved under
ComfyUI/output/. Use the
images list from the script output to locate the files (filename + subfolder).
⚠️ Always send the output to the user
After a successful ComfyUI run,
you must deliver the generated image(s) to the user. Do not reply with only the filename in text or with NO_REPLY.
- Parse the script output JSON for
images (each has filename, subfolder, type).
- Build the full path:
ComfyUI/output/ + subfolder + filename (e.g. ComfyUI/output/z-image_00007_.png).
- Send the image to the user via the channel they're on (e.g. use the message/send tool with the image
path so the user receives the file). Include a short caption if helpful (e.g. "Here you go." or "Tokyo street scene.").
Every successful run must result in the user receiving the image. Never leave them with only a filename or no delivery.
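Putting the delivery steps together (a sketch; the JSON values below are illustrative, with field names matching the script output described above):

```python
import json, os

# Illustrative shape of comfyui_run.py's stdout.
raw = ('{"prompt_id": "abc123", "images": '
       '[{"filename": "z-image_00007_.png", "subfolder": "", "type": "output"}]}')
out = json.loads(raw)

output_dir = os.path.expanduser("~/ComfyUI/output")
paths = [os.path.join(output_dir, img["subfolder"], img["filename"])
         for img in out["images"]]
# Each path is then handed to the message/send tool so the user receives the file.
```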
Resources
scripts/
comfyui_run.py: Queue a workflow, poll until completion, print prompt_id and images. Single arg --workflow <path>; you edit the JSON before running.
download_weights.py: Download model weight URLs into ~/ComfyUI/models/<subfolder>/. Uses pget when available (installs to ~/.local/bin if missing); falls back to built-in download. Input: URLs as args or one per line on stdin. Options: --base, --subfolder, --overwrite, --no-pget. Infers subfolder from URL/filename when not given.
assets/
default-workflow.json: Default workflow. Copy and edit (prompt, style, seed) then run with the edited path; or run as-is for a generic run.