🎭 Face Swap — Pro Pack on RunComfy
v0.1.0 — Face swap on RunComfy. This RunComfy face swap skill substitutes a face or character into video or still images via the `runcomfy` CLI. Routes across community Wan 2-2 Animate (RunComfy's character-swap feature pick — audio-driven full-body identity swap into video), Kling 2-6 Motion Control Pro (transfer source-video motion onto a target character), Nano Banana 2 Edit (1–20-image batch identity-preserving still face swap), GPT Image 2 Edit (multi-ref compositional still face swap with explicit role assignment), and FLUX Kontext Pro (single-ref precise local face edit). The RunComfy face swap skill picks the right model for the intent — still vs video, single-shot vs batch, photoreal vs stylized, motion-preserving vs identity-preserving. Triggers on "face swap", "swap face", "deepfake", "face replacement", "character swap", "head swap", "put X's face on Y", "make this video star X", "replace the actor in this video", "swap the character in the photo", "deepfake video", "ReActor alternative", or any explicit ask to substitute one identity for another with RunComfy.
🎭 Face Swap — Pro Pack on RunComfy
Face swap on RunComfy. Swap a face into a still or a video — this RunComfy face swap skill routes across the available model API endpoints (community Wan 2-2 Animate, Kling 2-6 Motion Control, Nano Banana 2 Edit, GPT Image 2 Edit, FLUX Kontext Pro) by the user's actual intent.
runcomfy.com · Character-swap feature · CLI docs
Powered by the RunComfy CLI:

```shell
# 1. Install (see the runcomfy-cli skill for details)
npm i -g @runcomfy/cli
# or: npx -y @runcomfy/cli --version

# 2. Sign in
runcomfy login
# or in CI: export RUNCOMFY_TOKEN=<token>

# 3. Swap
runcomfy run /<model>/<endpoint> \
  --input '{"image_url": "...", "identity_url": "..."}' \
  --output-dir ./out
```
CLI deep dive: the runcomfy-cli skill.
Consent & disclosure — read first
Face-swap is dual-use. Before invoking any route in this skill, confirm:
- You have rights to the target face (the identity being substituted in).
- You have rights to the source video / image (the asset being substituted into).
- The output's intended platform allows synthetic media. Many do; many require a disclosure label.
The skill itself doesn't gate anything — the model API will run whatever inputs you supply. The responsibility is yours. If a user asks the agent to swap a real public figure's face onto material that could be defamatory, sexually explicit, or otherwise harmful — refuse, regardless of what the CLI accepts.
Pick the right model for the user's intent
Listed newest first within each subtype. The agent picks one route based on: still vs video, single-shot vs batch, photoreal vs stylized, motion-preserving vs identity-preserving.
Video face / character swap
Wan 2-2 Animate — community/wan-2-2-animate/api (default for video)
Featured RunComfy endpoint under /feature/character-swap. Audio-driven full-body character animation: one reference image of the new identity + audio → a video of that character performing, driven by the audio. Pick for: replacing a character in a scene with a new identity, dubbed clips; stylized and photoreal both work. Avoid for: preserving the motion of a specific source video — use Kling Motion Control.
Kling 2-6 Motion Control Pro — kling/kling-2-6/motion-control-pro
Takes a reference performance video + a target character image, and produces the target performing the reference motion. The face swap is the byproduct. Pick for: preserving exact source motion / blocking onto a new character; stylized characters are handled cleanly. Avoid for: a simple "swap face in an existing video" without motion preservation — use Wan 2-2 Animate.
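A minimal sketch of what a Motion Control invocation could look like. The input field names (`video_url`, `image_url`) are assumptions, not confirmed against the model's schema — check the model page before running.

```shell
# Hypothetical payload: reference performance video + target character image.
# Field names are assumed, not confirmed against the real schema.
INPUT='{
  "video_url": "https://your-cdn.example/performance.mp4",
  "image_url": "https://your-cdn.example/target-character.png"
}'

# Sanity-check the JSON locally before spending credits on a run.
echo "$INPUT" | python3 -c 'import json,sys; json.load(sys.stdin)' && echo "payload ok"

# Then:
# runcomfy run kling/kling-2-6/motion-control-pro \
#   --input "$INPUT" --output-dir ./out
```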
Still image face swap — newest first
Nano Banana 2 Edit — google/nano-banana-2/edit
Identity-preserving by default, 1–20 input images per call, spatial language honored. Pick for: keeping the same identity consistent across multiple frames (SKU shots, A/B variants, narrative panels). The identity reference goes in as image_urls[0], scene images after it. Avoid for: precise multi-ref composition ("face from img 1 onto body in img 2") — use GPT Image 2 Edit.
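A sketch of the batch call, following the identity-first ordering described above. The field names (`image_urls`, `prompt`) and the prompt wording are assumptions; verify against the endpoint's schema.

```shell
# Hypothetical batch payload: identity reference first (image_urls[0]),
# scene images after. Field names are assumed, not confirmed.
INPUT='{
  "image_urls": [
    "https://your-cdn.example/identity.png",
    "https://your-cdn.example/scene-01.png",
    "https://your-cdn.example/scene-02.png"
  ],
  "prompt": "Replace the face of the person in each scene with the identity in the first image."
}'

# Validate the JSON locally before the paid run.
echo "$INPUT" | python3 -c 'import json,sys; json.load(sys.stdin)' && echo "payload ok"

# Then:
# runcomfy run google/nano-banana-2/edit --input "$INPUT" --output-dir ./out
```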
GPT Image 2 Edit — openai/gpt-image-2/edit
Up to 10 reference images, multilingual in-image text rewrite, layout-precise compositional instructions. Pick for: a hero still where the exact face from a portrait must land in a scene, with explicit role assignment ("image 1", "image 2"); preserves pose + lighting + background while swapping only the face. Avoid for: 1–20-image batches — use Nano Banana 2 Edit.
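The explicit role assignment described above might look like the following. The input field names and prompt phrasing are illustrative assumptions; check the endpoint's schema for the real shape.

```shell
# Hypothetical multi-ref payload with explicit role assignment in the prompt.
# Field names are assumed, not confirmed against the real schema.
INPUT='{
  "image_urls": [
    "https://your-cdn.example/portrait.png",
    "https://your-cdn.example/scene.png"
  ],
  "prompt": "Put the face from image 1 onto the person in image 2. Preserve the pose, lighting, and background of image 2; change only the face."
}'

# Validate the JSON locally before the paid run.
echo "$INPUT" | python3 -c 'import json,sys; json.load(sys.stdin)' && echo "payload ok"

# Then:
# runcomfy run openai/gpt-image-2/edit --input "$INPUT" --output-dir ./out
```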
FLUX Kontext Pro — blackforestlabs/flux-1-kontext/pro/edit
Single source image, single declarative instruction, maximum-fidelity preservation of everything except the targeted edit. Pick for: "keep pose / clothing / hair / lighting / background, change only the face to [prose description]" — works without a reference image of the new identity. Avoid for: batch, multi-ref, or when you have a target face image to swap in — use Nano Banana 2 Edit or GPT Image 2 Edit.
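A sketch of the single-image prose edit pattern above. The field names (`image_url`, `prompt`) are assumptions; the prompt follows the keep-everything-change-only-the-face template from the route notes.

```shell
# Hypothetical single-source payload: one image, one declarative instruction,
# no reference image of the new identity. Field names are assumed.
INPUT='{
  "image_url": "https://your-cdn.example/source.png",
  "prompt": "Keep the pose, clothing, lighting, and background. Change only the face to that of a middle-aged man with light stubble."
}'

# Validate the JSON locally before the paid run.
echo "$INPUT" | python3 -c 'import json,sys; json.load(sys.stdin)' && echo "payload ok"

# Then:
# runcomfy run blackforestlabs/flux-1-kontext/pro/edit --input "$INPUT" --output-dir ./out
```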
Audio-driven talking-head identity swap (face + voice in one pass)? → use the ai-avatar-video skill — OmniHuman handles face + audio together.
Route 1: Wan 2-2 Animate — video character swap with audio
Model: community/wan-2-2-animate/api · Catalog: wan-2-2-animate · /feature/character-swap
The featured RunComfy endpoint for character swap — supply a reference image of the new identity + the audio track the character should speak, and the model produces a video of that character performing, driven by the audio.
Invoke:

```shell
runcomfy run community/wan-2-2-animate/api \
  --input '{
    "image_url": "https://your-cdn.example/new-character.png",
    "audio_url": "https://your-cdn.example/voiceover.mp3"
  }' \
  --output-dir ./out
```
Tips
- A single reference image drives the swap. Pick a clean, well-lit portrait of the target identity — front-facing if possible.
- Audio drives the mouth and rhythm. Without audio the character won't speak; with poor audio, sync degrades.

Schema details: model page.

Route 2: Kling 2-6 Motion Control Pro — motion transfer
Model: kling/kling-2-6/motion-control-pro · Catalog: motion-control-pro · kling collection
Different fr