⏭️ Video Extend — Pro Pack on RunComfy
v0.1.0 — Extend or continue an existing video clip on RunComfy via the `runcomfy` CLI. Routes to Google Veo 3-1's `extend-video` and `fast/extend-video` endpoints — pick the source video plus a prompt describing what should happen next, and the model produces a clip that continues the original with consistent motion, lighting, and subject identity. Use when the user has a short Veo clip and wants it longer, or wants a chained narrative built shot-by-shot from a single seed clip. Triggers on "extend video", "continue video", "longer video", "video extend", "make this clip longer", "Veo extend", "chain video shots", "video continuation", or any explicit ask to take an existing video and add more frames after it.
⏭️ Video Extend — Pro Pack on RunComfy
Continue an existing video clip past its per-call duration cap, or chain a narrative shot-by-shot from a single seed. This skill routes to Google Veo 3-1's extend-video endpoints and ships the documented prompting patterns plus the exact `runcomfy run` invocation.
runcomfy.com · Veo 3-1 extend-video · CLI docs
Powered by the RunComfy CLI

```shell
# 1. Install (see the runcomfy-cli skill for details)
npm i -g @runcomfy/cli   # or: npx -y @runcomfy/cli --version

# 2. Sign in
runcomfy login           # or in CI: export RUNCOMFY_TOKEN=<token>

# 3. Extend
runcomfy run google-deepmind/veo-3-1/extend-video \
  --input '{"video_url": "https://...", "prompt": "..."}' \
  --output-dir ./out
```
CLI deep dive: the runcomfy-cli skill.
Pick the right endpoint
Listed newest first. Both endpoints are Google Veo 3-1; pick by quality/latency trade-off.
Veo 3-1 Extend — google-deepmind/veo-3-1/extend-video (default)
Continues an existing Veo clip with consistent motion, lighting, identity, and physics. Pick for: hero-quality extends, final-delivery cuts, chained narrative shots that need to look like one continuous take. Avoid for: cost-sensitive iteration — drop to Veo 3-1 Fast Extend.
Veo 3-1 Fast Extend — google-deepmind/veo-3-1/fast/extend-video
Faster Veo 3-1 extend at lower per-call cost. Pick for: iteration on extend compositions, multi-shot drafts. Avoid for: final delivery — use full Veo 3-1 Extend.
The agent picks one and supplies the source video URL plus a continuation prompt.
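The draft-vs-final choice above can be sketched as a tiny helper. This `pick_endpoint` function is hypothetical — only the two endpoint IDs come from this doc:

```shell
# Hypothetical helper: map a "draft"/"final" intent to the two documented
# endpoint IDs. Anything other than "final" falls back to the cheaper Fast tier.
pick_endpoint() {
  if [ "${1:-draft}" = "final" ]; then
    echo "google-deepmind/veo-3-1/extend-video"        # hero-quality extends
  else
    echo "google-deepmind/veo-3-1/fast/extend-video"   # cheap iteration
  fi
}

# Usage sketch:
# runcomfy run "$(pick_endpoint draft)" --input '{...}' --output-dir ./out
```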
Route: Veo 3-1 Extend
Model: google-deepmind/veo-3-1/extend-video (or /fast/extend-video)
Catalog: Veo 3-1 extend · Veo 3-1 fast extend · veo-3 collection
Invoke

```shell
runcomfy run google-deepmind/veo-3-1/extend-video \
  --input '{
    "video_url": "https://your-cdn.example/source-clip.mp4",
    "prompt": "The camera continues pushing in slowly. The character looks down at the object, then turns toward the window. Soft daylight, no other motion in the background."
  }' \
  --output-dir ./out
```
Prompting tips

- The source video provides identity, lighting, framing, and physics. Your prompt describes only what happens next — don't re-describe the scene.
- Anchor the camera explicitly: "camera continues pushing in", "camera stays static", "slow dolly out". Without an anchor the camera tends to drift.
- One main beat per extend. "Character turns and walks toward camera" is one beat. "Character turns, walks toward camera, then sits down" is three beats — split into separate extend calls.
- Chain consecutive extends by feeding the output of one extend call as the input to the next. Identity drift accumulates per generation, so keep individual extends short (3–5 s) for long chains.

Common patterns

Single clip → 16 s feature: start with an 8 s Veo 3-1 i2v or t2v clip, run extend-video once → 16 s total. Same prompt rhythm for the second 8 s.

Story beats (shot by shot):
Beat 1: t2v generates the establishing shot.
Beat 2: feed the output to extend-video with the prompt "camera cuts to medium close-up; character speaks line".
Beat 3: extend again with "character reaches for object on table".
Each extend call is one beat. Identity holds across cuts for ~3–4 chained extends; beyond that, prepare to re-anchor with an i2v.

Cost-controlled iteration: use Fast Extend for the first 2–3 drafts. Lock the final beat sequence on full Extend.

What this skill doesn't do (and what does)

- Image-to-video from scratch: use image-to-video or AI-video-generation.
- Stylized restyle of an existing video: use video-edit.
- Talking-head extend with audio sync: use AI-avatar-video, then chain with extend-video on the avatar output.

Browse the full catalog

- Veo 3-1 collection — all Veo endpoints (t2v, i2v, extend, fast variants)
- All video models — every video endpoint with its API schema tab
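The chaining pattern above (the output of one extend feeds the next) can be sketched as a loop. Beyond the endpoint ID and the `--input`/`--output-dir` flags, everything here is an assumption: the `clip.mp4` filename and the `upload_clip` helper are hypothetical stand-ins for however your pipeline turns a downloaded clip back into a URL the next call can consume.

```shell
# Sketch: one extend call per story beat, feeding each result into the next
# call. Assumes each run leaves clip.mp4 in its --output-dir and that a
# hypothetical upload_clip helper returns a URL for the next video_url.
chain_extends() {
  local video_url="$1"; shift
  local i=1 prompt out_dir input
  for prompt in "$@"; do
    out_dir="./out/beat-$i"
    input=$(printf '{"video_url": "%s", "prompt": "%s"}' "$video_url" "$prompt")
    runcomfy run google-deepmind/veo-3-1/extend-video \
      --input "$input" \
      --output-dir "$out_dir"
    video_url="$(upload_clip "$out_dir/clip.mp4")"   # hypothetical helper
    i=$((i + 1))
  done
  echo "$video_url"   # URL of the final beat's clip
}

# chain_extends "https://your-cdn.example/seed-clip.mp4" \
#   "Camera continues pushing in; character looks down at the object." \
#   "Camera stays static; character turns toward the window."
```

Keeping each beat in its own output directory makes it easy to re-run a single beat without regenerating the whole chain.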
Today only Veo exposes a CLI-reachable extend-video endpoint. Other vendors' "video continuation" (Wan, Kling, Seedance) is reached via their main t2v/i2v endpoint with the previous output's final frame as the i2v reference — see image-to-video for that pattern.
Exit codes

code — meaning
0 — success
64 — bad CLI args
65 — bad input JSON / schema mismatch
69 — upstream 5xx
75 — retryable: timeout / 429
77 — not signed in or token rejected
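Since only code 75 is marked retryable, a wrapper can retry on exactly that code and surface everything else immediately. A minimal sketch — the attempt count and pause are arbitrary choices; only the meaning of exit code 75 comes from the table above:

```shell
# Retry a command while it exits 75 (retryable: timeout / 429), up to
# max_attempts; any other exit code is returned to the caller unchanged.
run_with_retry() {
  local max_attempts=3 attempt=1 rc
  while true; do
    "$@"; rc=$?
    if [ "$rc" -ne 75 ] || [ "$attempt" -ge "$max_attempts" ]; then
      return "$rc"
    fi
    attempt=$((attempt + 1))
    sleep 1   # brief pause; a real script might back off exponentially
  done
}

# run_with_retry runcomfy run google-deepmind/veo-3-1/extend-video \
#   --input '{"video_url": "https://...", "prompt": "..."}' --output-dir ./out
```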
Full reference: docs.runcomfy.com/cli/troubleshooting.
How it works
The skill picks Veo 3-1 Extend or Fast Extend based on quality-vs-cost intent, and invokes `runcomfy run` with the source video URL plus the continuation prompt. The CLI POSTs to the RunComfy model API, polls request status, and downloads the resulting clip into --output-dir. Ctrl-C cancels the remote request before exit.
Security & privacy

Install via a verified package manager only. Use npm i -g @runcomfy/cli or npx -y @runcomfy/cli. Agents must not pipe an arbitrary remote install script into a shell on the user's behalf.

Token storage: runcomfy login writes the API token to ~/.config/runcomfy/token.json with mode 0600. Set the RUNCOMFY_TOKEN env var in CI / containers. Never echo it into prompts or logs.

Input boundary (shell inj