
debate-research — Skill

v1.0.0

Multi-perspective structured debate for complex topics. Spawn parallel subagents with opposing stances, cross-inject arguments for rebuttal, then synthesize a neutral report.

by @caius-kong·MIT-0
License: MIT-0
Last updated: 2026/4/1
Security scan: harmless (VirusTotal)
OpenClaw security rating: safe (high confidence)
The skill's declared purpose (running adversarial subagents, cross-rebuttal, and a neutral synthesis) matches its instructions and required capabilities; it is an instruction-only orchestrator and requests no unrelated credentials or installs.
Evaluation notes
This skill is internally consistent: it orchestrates multiple subagents, uses web_search during initial evidence collection, then disables web_search for rebuttals and judgment to prevent information drift. Before using it: (1) ensure you trust the configured LLM provider(s), because the skill will spawn sessions on those models; (2) be aware that Phase 1 can fetch external web content via the platform's web_search tool; avoid feeding it sensitive or private topics if you don't want external queries.
Detailed analysis
Purpose and capabilities
Name/description match the runtime instructions: the SKILL.md describes spawning parallel subagents, cross-injecting outputs, and synthesizing a report. Required capabilities (ability to spawn sessions, yield results, and a web_search tool for Phase 1) are consistent with that purpose.
Instruction scope
Instructions stay within the stated debate/research scope. They explicitly enable web_search for Phase 1 and disable it for later phases (intentional to prevent information drift). The skill passes subagent outputs between roles and can optionally write the assembled report to an output_path; both behaviors are expected but mean the skill will fetch external web content (when web_search is available) and may write to filesystem if you provide a path. Confirm-plan default is true, which helps limit unexpected runs.
Install mechanism
No install spec or code is included (instruction-only). No downloads, packages, or binaries are required by the skill itself — lowest install risk.
Credential requirements
The skill requests no environment variables, credentials, or config paths. It does rely on configured LLM providers and a web_search tool (mentioned in README and SKILL.md) but the registry metadata does not explicitly list 'web_search' as a required tool — minor metadata omission but not a security inconsistency with the skill's purpose.
Persistence and permissions
No elevated persistence requested (always: false). The skill does not modify other skills or system-wide settings. Autonomous invocation is allowed (platform default) but this is expected for a skill that spawns subagents.
Security is layered; review the code before running.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/4/1

- Initial release of the debate-research skill for structured, multi-perspective debate on complex topics.
- Supports automatic spawning of subagents with opposing stances and cross-injection of arguments for rebuttal, culminating in a neutral judge synthesis.
- Customizable input parameters: topic, roles, goal, audience, decision type, evidence round, and more.
- Multi-phase execution pipeline: pre-flight checks, stance investigation, cross rebuttal, (optional) evidence audit, neutral judgment, and report assembly.
- Output is a comprehensive Markdown report including core arguments, rebuttals, evidence audit, neutral assessment, recommendations, open questions, and scenario matrix.
- Error handling for model availability, agent timeouts, and degraded outputs.


Install command

Official: `npx clawhub@latest install debate-research`
Mirror (CN): `npx clawhub@latest install debate-research --registry https://cn.clawhub-mirror.com`

Skill documentation

Input Parameters

Collect from user before starting. Only topic is required; all others have defaults.

| Param | Required | Default | Description |
| --- | --- | --- | --- |
| topic | yes | (none) | Debate subject |
| roles | no | Proponent + Opponent | 2-4 role objects: `{name, stance, model?}`. Default: Proponent (argue for) and Opponent (argue against). Model inherits from global. |
| goal | no | inferred | What question to answer |
| audience | no | "self" | Who reads the report: self / team / public |
| decision_type | no | "personal-choice" | personal-choice / team-standardization / market-analysis |
| evidence_round | no | "auto" | false / true / auto (enable when topic is fact-dense) |
| confirm_plan | no | true | Show plan and wait for user OK before execution |
| model | no | inherit | Global subagent model; role-level override takes priority |
| output_path | no | null | File path for report; null = return in conversation |
Implicit parameter: language — inferred from the user's topic/conversation language. All subagent prompts output in this language.
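The skill is instruction-only, so there is no reference implementation; as a rough sketch, the defaulting rules in the table above could be expressed like this (all function and variable names are hypothetical):

```python
# Hypothetical parameter-resolution sketch; not part of the skill itself.
DEFAULTS = {
    "roles": [
        {"name": "Proponent", "stance": "argue for"},
        {"name": "Opponent", "stance": "argue against"},
    ],
    "goal": None,              # inferred from topic when absent
    "audience": "self",
    "decision_type": "personal-choice",
    "evidence_round": "auto",
    "confirm_plan": True,
    "model": None,             # inherit the session default model
    "output_path": None,       # None = return report in conversation
}

def resolve_params(user_params: dict) -> dict:
    """Merge user-supplied parameters over the documented defaults."""
    if "topic" not in user_params:
        raise ValueError("topic is the only required parameter")
    resolved = {**DEFAULTS, **user_params}
    if not 2 <= len(resolved["roles"]) <= 4:
        raise ValueError("roles must contain 2-4 role objects")
    return resolved
```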

Example User Prompt

  • Claude Code vs OpenCode (gpt-5.4, claude-4.6-sonnet)

Execution Pipeline

Phase 0 — Pre-flight

Step 0a: Model reachability check

Collect all unique models (global + per-role + judge). For each unique model, probe via sessions_spawn with a minimal one-sentence task (e.g. "Reply OK") and `model` set to that model. Do NOT use curl or external HTTP; all models route through OpenClaw's provider config.

If any probe fails:

  • If user explicitly specified the failed model → abort, report failure, suggest alternatives
  • If model was default-assigned → warn user, fall back to session default model, continue
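The abort-vs-fallback rule above can be sketched as follows; `probe` stands in for a sessions_spawn call with a one-sentence task, and everything else is illustrative:

```python
def check_models(models, probe, session_default, user_specified):
    """Probe each unique model; abort if a user-specified model fails,
    otherwise fall back to the session default model and continue."""
    resolved = {}
    for m in set(models):
        if probe(m):                      # probe succeeded
            resolved[m] = m
        elif m in user_specified:         # user asked for it explicitly
            raise RuntimeError(f"model {m} unreachable; aborting")
        else:                             # default-assigned: warn + fall back
            resolved[m] = session_default
    return resolved
```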

Step 0b: Plan presentation (if confirm_plan: true)

Present to user:

  • Topic
  • Role × model assignment table
  • Evidence round: on/off/auto (with rationale if auto)
  • Estimated subagent call count
  • Goal / audience / decision_type interpretation

[STOP — wait for user confirmation]

If confirm_plan: false, skip directly to Phase 1.

Phase 1 — Stance Investigation (parallel)

Spawn one subagent per role, all in parallel.

Each agent receives a prompt built from:

  • Role name + stance
  • Topic
  • web_search: enabled

Required output format per agent:

Core arguments (3-5):
  - [argument] | confidence: 0.0-1.0 | source: [official-docs/community-feedback/personal-blog/academic-paper]
Opponent weaknesses (2-3)
Predicted counter-attacks (1-2)

Use sessions_spawn + sessions_yield to wait for all completions.

Error handling:

  • Agent timeout → mark output [INCOMPLETE], continue pipeline
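The per-argument line format above is machine-parseable; a hypothetical orchestrator could extract it like this (the regex and names are illustrative, not part of the skill):

```python
import re

# Matches lines like: "- Fast startup | confidence: 0.8 | source: official-docs"
ARG_LINE = re.compile(
    r"-\s*(?P<argument>.+?)\s*\|\s*confidence:\s*(?P<conf>[01](?:\.\d+)?)"
    r"\s*\|\s*source:\s*\[?(?P<source>[\w-]+)\]?"
)

def parse_core_arguments(text: str) -> list[dict]:
    """Extract {argument, confidence, source} dicts from a Phase 1 block."""
    return [
        {
            "argument": m.group("argument"),
            "confidence": float(m.group("conf")),
            "source": m.group("source"),
        }
        for m in ARG_LINE.finditer(text)
    ]
```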

Phase 2 — Cross Rebuttal (parallel)

Spawn one subagent per role, all in parallel.

Each agent receives:

  • Its original stance
  • All other roles' Phase 1 output (cross-injected)
  • web_search: disabled

Required output format per agent:

Rebuttals (one per opponent argument):
  - [rebuttal] | confidence: 0.0-1.0
Weakest premise attack:
  - Identify opponent's single weakest assumption and challenge it  ← Socratic element
New attacks (2):
  - [attack]

Word limit: 300 × number_of_opponents words per agent.

Error handling:

  • Agent timeout → mark [INCOMPLETE], continue

Phase 2.5 — Evidence Audit (optional)

Triggered when evidence_round: true, or when auto and topic involves measurable claims. Auto-enable heuristic: topic contains performance benchmarks, cost comparisons, security assessments, market data, or quantitative metrics. When in doubt with auto, skip (false positive costs more than false negative).
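One way to picture the trigger logic is a simple keyword heuristic; the keyword list below is an invented example of what "fact-dense" detection might look like, not the skill's actual rule:

```python
# Illustrative keyword list; the skill only describes the heuristic in prose.
FACT_DENSE_KEYWORDS = (
    "benchmark", "performance", "cost", "price", "security",
    "market", "latency", "throughput", "percent", "%",
)

def should_run_evidence_audit(evidence_round, topic: str) -> bool:
    """true/false are honored directly; 'auto' enables the audit only for
    measurable-claim topics, and skips when in doubt."""
    if evidence_round is True:
        return True
    if evidence_round is False:
        return False
    topic_lower = topic.lower()
    return any(kw in topic_lower for kw in FACT_DENSE_KEYWORDS)
```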

Spawn 1 subagent as "evidence auditor":

  • Input: all Phase 1 + Phase 2 output
  • web_search: disabled
  • Task: extract every factual claim, tag each as:
[official-docs] [community-feedback] [personal-blog] [no-source] [exaggerated]
  • Output: concise fact checklist

Phase 3 — Neutral Judgment

Spawn 1 subagent as neutral judge:

  • Input: Phase 1 + Phase 2 + Phase 2.5 (if available)
  • web_search: disabled
  • Weigh arguments by confidence scores AND source quality tags

Required output structure:

  • Strong arguments per side
  • Exaggerated claims per side
  • Shared limitations (problems neither option solves)
  • Core disagreements (value-level, not just factual)
  • Consensus points
  • Recommendation — explicit directional advice, adapted to decision_type
  • Open Questions — unresolved unknowns that could change the conclusion
  • Scenario selection matrix (table: scenario × recommendation × rationale)
  • One-sentence summary
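"Weigh arguments by confidence scores AND source quality tags" could be realized as a simple weighted sum; the numeric weights below are invented for illustration (the skill specifies no exact numbers):

```python
# Hypothetical source-quality weights; values are illustrative only.
SOURCE_WEIGHT = {
    "official-docs": 1.0,
    "academic-paper": 1.0,
    "community-feedback": 0.7,
    "personal-blog": 0.5,
    "no-source": 0.2,
}

def weighted_score(arguments: list[dict]) -> float:
    """Sum confidence x source weight over one side's arguments;
    unknown source tags get the lowest weight."""
    return sum(
        a["confidence"] * SOURCE_WEIGHT.get(a["source"], 0.2)
        for a in arguments
    )
```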

Phase 4 — Report Assembly

Orchestrator (main conversation) assembles all outputs into Markdown:

# [topic]: Debate Research Report

Date: YYYY-MM-DD
Method: Multi-agent structured debate (debate-research skill)
Roles: [role1 (model)] | [role2 (model)] | ...
Audience: [audience] | Decision type: [decision_type]
Completion: [success | degraded-success | aborted]

Core Arguments by Side

[Phase 1 output, organized by role]

Cross Rebuttals

[Phase 2 output, organized by role]

Evidence Audit

[Phase 2.5 output, or "Not requested"]

Neutral Judgment

[Phase 3 sections 1-5]

Recommendation

[Phase 3 section 6]

Open Questions

[Phase 3 section 7]

Scenario Matrix

[Phase 3 section 8]

One-line summary: [Phase 3 section 9]

If output_path specified → write file. Otherwise → return in conversation.

Completion States

| State | Condition | Behavior |
| --- | --- | --- |
| success | All phases completed normally | Full report |
| degraded-success | 1+ agents timed out or returned [INCOMPLETE] | Report with degradation note |
| aborted | Model pre-check failed / user cancelled plan | No report; return error summary |
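The three states above map cleanly onto pipeline outcomes; a hypothetical classifier (names illustrative) could be:

```python
def completion_state(precheck_ok: bool, user_confirmed: bool,
                     agent_statuses: list[str]) -> str:
    """Map pipeline outcomes to success / degraded-success / aborted."""
    if not precheck_ok or not user_confirmed:
        return "aborted"           # pre-check failed or plan cancelled
    if any(s == "INCOMPLETE" for s in agent_statuses):
        return "degraded-success"  # at least one agent timed out
    return "success"
```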

Prompt Templates

See references/prompts.md for the exact prompt templates used in each phase. Orchestrator builds prompts dynamically from parameters + these templates.

Data source: ClawHub · Chinese localization: 龙虾技能库