🦴 ControlNet & Pose — Pro Pack on RunComfy
Pose-conditioned generation on RunComfy via the `runcomfy` CLI. Routes across Kling 2-6 Motion Control Pro / Standard (transfer the motion / blocking of a reference video onto a target character), community Wan 2-2 Animate (audio-driven character animation with pose conditioning), and Z-Image Turbo ControlNet LoRA (pose-conditioned image generation from an OpenPose / DWPose / canny / depth control image). Picks the right route based on video vs still and stylized vs photoreal. Triggers on "controlnet", "control net", "pose control", "openpose", "DWPose", "transfer pose", "motion control", "pose driven", "character pose", "depth control", "canny edge", "use this pose", or any explicit ask to condition generation on a pose / skeleton / motion / depth / canny reference.
Condition image or video generation on a pose, skeleton, or motion reference. This skill routes across the pose-driven model API endpoints reachable today and points the agent at ComfyUI workflows for richer ControlNet rigs.
runcomfy.com · Kling motion control · CLI docs
Powered by the RunComfy CLI

# 1. Install (see the runcomfy-cli skill for details)
npm i -g @runcomfy/cli
# or: npx -y @runcomfy/cli --version

# 2. Sign in
runcomfy login
# or in CI: export RUNCOMFY_TOKEN=<token>

# 3. Pose-conditioned generation
runcomfy run /<model> \
  --input '{"reference_video_url": "...", "character_image_url": "..."}' \
  --output-dir ./out
CLI deep dive: the runcomfy-cli skill.
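The `--input` payload is raw JSON, which is easy to mangle with shell quoting. A minimal sketch that builds the payload in a variable first (the URLs below are placeholders, not real assets):

```shell
# Build the --input JSON in a variable so shell quoting can't corrupt the URLs.
# Placeholder URLs — substitute your own hosted assets.
REF_VIDEO="https://your-cdn.example/source-performance.mp4"
CHAR_IMG="https://your-cdn.example/target-character.png"
INPUT=$(printf '{"reference_video_url": "%s", "character_image_url": "%s"}' \
  "$REF_VIDEO" "$CHAR_IMG")
echo "$INPUT"
# runcomfy run kling/kling-2-6/motion-control-pro --input "$INPUT" --output-dir ./out
```

Passing `"$INPUT"` quoted keeps the embedded double quotes intact no matter what characters the URLs contain.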
Pick the right model
Routes split by video pose-transfer vs image pose-conditioned generation.
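That split can be sketched as a tiny routing helper; the mapping simply restates the catalog in this section (the function name and the photoreal/stylized flags are illustrative, not part of the CLI):

```shell
# Illustrative routing sketch: pick a model slug from reference type and style.
# Slugs are the ones documented in this pack; the helper itself is an assumption.
pick_model() {
  ref_type="$1"   # video | image
  style="$2"      # photoreal | stylized
  case "$ref_type:$style" in
    video:photoreal) echo "kling/kling-2-6/motion-control-pro" ;;
    video:stylized)  echo "community/wan-2-2-animate/video-to-video" ;;
    image:*)         echo "tongyi-mai/z-image/turbo/controlnet/lora" ;;
  esac
}
pick_model video photoreal
```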
Video — motion / pose transfer
Kling 2-6 Motion Control Pro — kling/kling-2-6/motion-control-pro (default for video pose transfer)
Takes a reference performance video + a target character image, produces video of the target performing the reference motion / pose. Pick for: transferring a source video's motion / blocking onto a new character; dance choreography re-shot; sports motion onto a stylized character. Avoid for: still-image pose conditioning — use Z-Image ControlNet LoRA.
Kling 2-6 Motion Control Standard — kling/kling-2-6/motion-control-standard
Cheaper Kling Motion Control tier. Pick for: drafts, iteration on motion-control compositions. Avoid for: final delivery — use Pro.
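Since Standard and Pro share the input schema and differ only in the model path (as far as this pack documents), a draft-to-final handoff can be a one-variable change. A sketch, assuming the same `$INPUT` payload for both tiers:

```shell
# Draft on Standard, then re-run the identical payload on Pro for final delivery.
# Assumption: the two tiers differ only in slug, not in input schema.
tier="standard"                               # flip to "pro" for the final render
model="kling/kling-2-6/motion-control-${tier}"
echo "$model"
# runcomfy run "$model" --input "$INPUT" --output-dir "./out-${tier}"
```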
Wan 2-2 Animate (video-to-video) — community/wan-2-2-animate/video-to-video
Community-published variant on Wan 2-2. Audio-driven character animation that also accepts pose-style conditioning. Pick for: stylized character animation, mascot work. Avoid for: photoreal subjects — use Kling Motion Control.
Image — pose-conditioned generation
Z-Image Turbo ControlNet LoRA — tongyi-mai/z-image/turbo/controlnet/lora
Z-Image Turbo with a ControlNet LoRA — feed a control image (pose skeleton, depth map, canny) and a prompt, get a generation conditioned on that control. Pick for: pose-locked image generation, character in a specific stance, depth-locked composition. Avoid for: complex multi-condition stacks (e.g. pose + depth + reference) — those need a ComfyUI workflow.
Route 1: Kling Motion Control — video pose transfer
Model: kling/kling-2-6/motion-control-pro (or /motion-control-standard)
Catalog: motion-control-pro · kling collection
Invoke

runcomfy run kling/kling-2-6/motion-control-pro \
  --input '{
    "reference_video_url": "https://your-cdn.example/source-performance.mp4",
    "character_image_url": "https://your-cdn.example/target-character.png"
  }' \
  --output-dir ./out
Tips

Reference video provides the motion / blocking / camera; character image provides the identity / appearance. A clean, well-framed reference works best — a single subject performing one continuous action, no scene cuts. Stylized characters (illustration, anime) are handled cleanly; photoreal target faces may need an additional face-swap pass for identity-tight delivery.

Route 2: Z-Image ControlNet LoRA — image pose-conditioned generation
Model: tongyi-mai/z-image/turbo/controlnet/lora
Catalog: Z-Image controlnet LoRA
Invoke

runcomfy run tongyi-mai/z-image/turbo/controlnet/lora \
  --input '{
    "prompt": "A samurai in battle stance, traditional armor, cherry-blossom forest background, cinematic 35mm",
    "control_image_url": "https://your-cdn.example/openpose-skeleton.png"
  }' \
  --output-dir ./out
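A single invoke renders against one control image; trying the same prompt across several control inputs is just a loop over URLs. A sketch, with placeholder file names (the actual `runcomfy run` call is commented out since it needs real hosted assets):

```shell
# One pose-locked render per control image; URLs are placeholder assumptions.
prompt="A samurai in battle stance, traditional armor, cinematic 35mm"
for ctrl in openpose-skeleton dwpose-skeleton depth-map; do
  url="https://your-cdn.example/${ctrl}.png"
  echo "$url"
  # runcomfy run tongyi-mai/z-image/turbo/controlnet/lora \
  #   --input "{\"prompt\": \"${prompt}\", \"control_image_url\": \"${url}\"}" \
  #   --output-dir "./out/${ctrl}"
done
```

Writing each run to its own `--output-dir` keeps the variants separable afterwards.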
Tips

The control image type matters: OpenPose skeleton, DWPose, canny edge, depth map — make sure the LoRA matches the control type you're feeding. Schema details are on the model page. Generate the control image upstream: pose skeletons typically come from a pose-estimation pass on a reference photo. Tools like DWPose / OpenPose preprocessors are not part of this CLI — generate the control image separately, host it, and pass the URL.

Multi-condition ControlNet stacks
The routes above cover single-condition pose / motion / depth / canny. For multi-condition stacks (e.g. pose + depth + reference image), RunComfy hosts dedicated ComfyUI workflows on runcomfy.com/comfyui-workflows:
Need — Workflow
FLUX + multi-condition ControlNet (depth + canny + pose) — comfyui-flux-controlnet-depth-and-canny, flux-dev-controlnet-union-pro-multi-condition
Pose-driven motion video with VACE — wan-2-2-vace-in-comfyui-pose-driven-motion-video-workflow
Pose-control lipsync (pose + audio together) — pose-control-lipsync-with-wan2-2-s2v-in-comfyui-audio2video
Wan 2-2 Animate v2 with pose driving — wan-2-2-animate-v2-in-comfyui-pose-driven-animation-workflow
OpenPose motion alignment — one-to-all-animation-in-comfyui-openpose-motion-alignment
Pose-based character animation (Scail) — scail-model-in-comfyui-pose-based-character-animation-workflow
These are GUI workflows, not CLI endpoints. The CLI can't reach them — open them in the RunComfy ComfyUI cloud.