
Confidant — Skill Tool

v1.5.3

[Auto-translated] Secure secret handoff and credential setup wizard for AI agents. Use when you need sensitive information from the user (API keys, passwords, tokens) o...

1 · 2,289 · 5 current · 5 cumulative
by @ericsantos (Eric Santos)·MIT-0
License
MIT-0
Last updated
2026/4/11
Security scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
The skill's purpose (secure secret handoff) matches the provided scripts, but there are important gaps and reliance on external npm packages and public tunnels that merit caution before installing or running it.
Assessment recommendation
This skill appears to implement what it claims, but exercise caution before installing or running it:

  • The scripts delegate the real server/secret-storage logic to an external npm package (@aiconnect/confidant) which will be downloaded/installed by npx or npm. Inspect that package's source (npm page and linked GitHub repo) before running any install commands.
  • setup.sh performs global npm installs. Prefer running tools in an isolated environment (container, VM) or avoid global install; consid...
Detailed analysis
Purpose and capabilities
The name/description (secure secret handoff) aligns with what the scripts do (create a temporary web form, poll for submission, save secrets). However the skill delegates core behavior to an external CLI package (@aiconnect/confidant) that is not included in the repo; that external dependency is necessary but not shipped, so the skill's claimed capability depends on third-party code.
Instruction scope
SKILL.md explicitly instructs agents to produce and share a human-facing URL and to run long-lived processes (tmux, server, polling). That scope is appropriate for the stated goal, but it requires the agent/human to share the URL in chat (which can create leakage if chat logs are stored) and to run long-running processes outside normal agent timeouts. The scripts themselves do not read unrelated secrets, but they rely on an external CLI to perform the actual polling and saving — that external CLI's behavior is not visible here.
Installation mechanism
There is no packaged install spec in the registry, but the scripts use npx --yes and setup.sh uses npm install -g to fetch @aiconnect/confidant and localtunnel from the public npm registry. This performs network downloads and global installs on the host (writes system-wide npm packages). The skill's own files do not include the server/CLI implementation, so runtime behavior depends on code downloaded from npm — increasing trust requirements.
Credential requirements
The registry metadata declares only curl, jq, and npm as required binaries, but the scripts also invoke utilities like lsof, fuser, pgrep, tmux, and optionally the lt binary — these are not listed. The skill does not request arbitrary environment credentials itself, but it will save received secrets to disk (e.g., ~/.config/<service>/api_key) or set env vars if asked. That storage behavior is central to the skill but is ultimately implemented by the external CLI.
Persistence and permissions
The skill does not request always: true and is user-invocable. setup.sh installs npm packages globally (system-wide), which requires privileges and modifies the host environment. The skill starts background servers and can start a public tunnel (localtunnel), exposing a local port to the internet — expected for its function but higher-privilege and higher-risk than a pure in-memory helper.
Security comes in layers; review the code before running.

License

MIT-0

Free to use, modify, and redistribute, with no attribution required.

Runtime dependencies

No special dependencies

Versions

latest · v1.5.3 · 2026/2/1

Add tmux guidance for long-running polling process — prevents SIGKILL from agent exec timeouts


Install command

Official: npx clawhub@latest install confidant
Mirror (accelerated): npx clawhub@latest install confidant --registry https://cn.clawhub-mirror.com

Skill documentation

Receive secrets from humans securely — no chat exposure, no copy-paste, no history leaks.

🚨 CRITICAL FLOW — Read This First

This is a human-in-the-loop process. You CANNOT retrieve the secret yourself.

  • Run the script → you get a secure URL
  • SEND the URL to the user in chat ← THIS IS MANDATORY
  • WAIT for the user to open the URL in their browser and submit the secret
  • The script handles the rest (receives, saves to disk, confirms)
❌ DO NOT curl/fetch the secret URL yourself — it's a web form for humans
❌ DO NOT skip sharing the URL — the user MUST receive it in chat
❌ DO NOT poll the API to check if the secret arrived — the script does this
❌ DO NOT proceed without confirming the secret was received
✅ Share URL → Wait → Confirm success → Use the secret silently

🔧 Setup (once per environment)

Run this once to install the CLI globally (avoids slow npx calls):

bash {skill}/scripts/setup.sh
{skill} is the absolute path to the directory containing this SKILL.md file. Agents can resolve it at runtime:

SKILL_DIR=$(find "$HOME" -name "SKILL.md" -path "*/confidant/skill*" -exec dirname {} \; 2>/dev/null | head -1)
# Then use: bash "$SKILL_DIR/scripts/setup.sh"
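If the find matches nothing, SKILL_DIR ends up empty. A small guard (a sketch, not part of the shipped scripts) avoids running setup against a bogus path:

```shell
# Verify the resolved directory actually contains SKILL.md before using it (illustrative guard)
SKILL_DIR="${SKILL_DIR:-}"          # may be empty if the find above matched nothing
if [ -n "$SKILL_DIR" ] && [ -f "$SKILL_DIR/SKILL.md" ]; then
  echo "resolved: $SKILL_DIR"
else
  echo "skill directory not found, re-run the installer"
fi > /tmp/skilldir_check
cat /tmp/skilldir_check
```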

⚡ Quick Start

You need an API key from the user? One command:

bash {skill}/scripts/request-secret.sh --label "OpenAI API Key" --service openai

The script handles everything:

  • ✅ Starts server if not running (or reuses existing one)
  • ✅ Creates a secure request with web form
  • ✅ Detects existing tunnels (ngrok or localtunnel)
  • ✅ Returns the URL to share with the user
  • ✅ Polls until the secret is submitted
  • ✅ Saves to ~/.config/openai/api_key (chmod 600) and exits
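The save convention in the last step can be sketched in plain shell. This is an illustration of the resulting layout and permissions, not the script's actual code:

```shell
# Sketch of the ~/.config/<service>/api_key convention (illustrative only)
service=openai
dir="$HOME/.config/$service"
mkdir -p "$dir"
umask 077                          # new files get no group/world permissions
printf '%s' "sk-demo-key" > "$dir/api_key"
chmod 600 "$dir/api_key"           # owner read/write only, as the script promises
ls -l "$dir/api_key" | cut -c1-10  # -rw-------
```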

If the user is remote (not on the same network), add --tunnel:

bash {skill}/scripts/request-secret.sh --label "OpenAI API Key" --service openai --tunnel

This starts a localtunnel automatically (no account needed) and returns a public URL.

Output example:

🔐 Secure link created!

URL: https://gentle-pig-42.loca.lt/requests/abc123 (tunnel: localtunnel | local: http://localhost:3000/requests/abc123)
Save to: ~/.config/openai/api_key

Share the URL above with the user. Secret expires after submission or 24h.

Share the URL → user opens it → submits the secret → script saves to disk → done.

Without --service or --save, the script still polls and prints the secret to stdout (useful for piping or manual inspection).
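For instance, the printed secret can be captured into a variable so it never lands in chat output; request_secret below is a stand-in for the real script:

```shell
# Capture a secret from stdout without echoing it (request_secret simulates request-secret.sh)
request_secret() { printf 'sk-test-123\n'; }  # the real script prints the submitted secret
SECRET=$(request_secret)                      # command substitution strips the trailing newline
umask 077
printf '%s' "$SECRET" > /tmp/demo_token       # write to a protected file instead of printing it
```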

Scripts

request-secret.sh — Request, receive, and save a secret (recommended)

# Save to ~/.config/<service>/api_key (convention)
bash {skill}/scripts/request-secret.sh --label "SerpAPI Key" --service serpapi

# Save to explicit path
bash {skill}/scripts/request-secret.sh --label "Token" --save ~/.credentials/token.txt

# Save + set env var
bash {skill}/scripts/request-secret.sh --label "API Key" --service openai --env OPENAI_API_KEY

# Just receive (no auto-save)
bash {skill}/scripts/request-secret.sh --label "Password"

# Remote user — start tunnel automatically
bash {skill}/scripts/request-secret.sh --label "Key" --service myapp --tunnel

# JSON output (for automation)
bash {skill}/scripts/request-secret.sh --label "Key" --service myapp --json

| Flag | Description |
|------|-------------|
| --label | Description shown on the web form (required) |
| --service | Auto-save to ~/.config/<service>/api_key |
| --save | Auto-save to explicit file path |
| --env | Set env var (requires --service or --save) |
| --tunnel | Start localtunnel if no tunnel detected (for remote users) |
| --port | Server port (default: 3000) |
| --timeout | Max wait for startup (default: 30) |
| --json | Output JSON instead of human-readable text |

check-server.sh — Server diagnostics (no side effects)

bash {skill}/scripts/check-server.sh
bash {skill}/scripts/check-server.sh --json

Reports server status, port, PID, and tunnel state (ngrok or localtunnel).
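A minimal port probe in the same spirit (an illustration, not check-server.sh itself; it degrades gracefully when lsof is missing):

```shell
# Check whether anything is listening on the Confidant port (sketch)
port=3000
if command -v lsof >/dev/null 2>&1 && lsof -iTCP:"$port" -sTCP:LISTEN -t >/dev/null 2>&1; then
  msg="something is listening on port $port"
else
  msg="port $port looks free"
fi
echo "$msg" > /tmp/port_check_demo
cat /tmp/port_check_demo
```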

⏱ Long-Running Process — Use tmux

The request-secret.sh script blocks until the secret is submitted (it polls continuously). Most agent runtimes (including OpenClaw's exec tool) impose execution timeouts that will kill the process before the user has time to submit.

Always run Confidant inside a tmux session:

# 1. Start server in tmux
tmux new-session -d -s confidant
tmux send-keys -t confidant "confidant serve --port 3000" Enter

# 2. Create request in a second tmux window
tmux new-window -t confidant -n request
tmux send-keys -t confidant:request "confidant request --label 'API Key' --service openai" Enter

# 3. Share the URL with the user (read from tmux output)
tmux capture-pane -p -t confidant:request -S -30

# 4. After user submits, check the result
tmux capture-pane -p -t confidant:request -S -10

Why not exec? Agent runtimes typically kill processes after 30-60s. Since the script waits for human input (which can take minutes), it gets SIGKILL before completion. tmux keeps the process alive independently.

If your agent platform supports long-running background processes without timeouts, exec with request-secret.sh works fine. But when in doubt, use tmux.
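One way to wait on steps 3 and 4 without reading the pane in a tight loop is to poll the captured text for a completion marker. The "saved to" marker below is an assumption about the script's final output line; adjust it to whatever your version actually prints:

```shell
# pane_done succeeds once captured pane text contains the (assumed) success marker
pane_done() { printf '%s\n' "$1" | grep -qi 'saved to'; }
# Real usage: until pane_done "$(tmux capture-pane -p -t confidant:request)"; do sleep 5; done
if pane_done "🔐 Secure link created!"; then echo found; else echo waiting; fi > /tmp/pane_before
if pane_done "✅ Received and saved to ~/.config/openai/api_key"; then echo found; else echo waiting; fi > /tmp/pane_after
```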

Rules for Agents

  • NEVER ask users to paste secrets in chat — always use this skill
  • NEVER reveal received secrets in chat — not even partially
  • NEVER curl the Confidant API directly — use the scripts
  • NEVER kill an existing server to start a new one
  • NEVER try to expose the port directly (public IP, firewall rules, etc.) — use --tunnel instead
  • ALWAYS share the URL with the user in chat — this is the entire point of the tool
  • ALWAYS wait for the script to finish — it polls automatically and saves/outputs the secret; do not try to retrieve it yourself
  • Use --tunnel when the user is remote (not on the same machine/network)
  • Prefer --service for API keys — cleanest convention
  • After receiving: confirm success, use the secret silently

Exit Codes (Scripts)

Agents can branch on exit codes for programmatic error handling:

| Code | Constant | Meaning |
|------|----------|---------|
| 0 | (none) | Success — secret received (saved to disk or printed to stdout) |
| 1 | MISSING_LABEL | --label flag not provided |
| 2 | MISSING_DEPENDENCY | curl, jq, npm, or confidant not installed |
| 3 | SERVER_TIMEOUT / SERVER_CRASH | Server failed to start or died during startup |
| 4 | REQUEST_FAILED | API returned empty URL — request not created |
| ≠0 | (from CLI) | confidant request --poll failed (expired, not found, etc.) |
With --json, all errors include a "code" field for programmatic branching:

{ "error": "...", "code": "MISSING_DEPENDENCY", "hint": "..." }
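Putting both signals together, here is a hedged sketch in which fake_request simulates a failed run of the script (the payload and handler actions are illustrative):

```shell
# Branch on the exit code, then pull the JSON "code" field for the error message
fake_request() {
  echo '{"error":"confidant CLI not found","code":"MISSING_DEPENDENCY","hint":"run setup.sh"}'
  return 2
}
out=$(fake_request) && rc=0 || rc=$?
code=$(printf '%s' "$out" | sed -n 's/.*"code":"\([^"]*\)".*/\1/p')  # jq -r '.code' works too
case "$rc" in
  0) action="use the secret" ;;
  2) action="install missing dependency ($code)" ;;
  3) action="inspect server logs" ;;
  *) action="report failure" ;;
esac
echo "$action" > /tmp/exit_code_demo
cat /tmp/exit_code_demo   # install missing dependency (MISSING_DEPENDENCY)
```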

Example Agent Conversation

This is what the interaction should look like:

User: Can you set up my OpenAI key?
Agent: I'll create a secure link for you to submit your API key safely.
       [runs: request-secret.sh --label "OpenAI API Key" --service openai --tunnel]
Agent: Here's your secure link — open it in your browser and paste your key:
       🔐 https://gentle-pig-42.loca.lt/requests/abc123
       The link expires after you submit or after 24h.
User: Done, I submitted it.
Agent: ✅ Received and saved to ~/.config/openai/api_key. You're all set!

⚠️ Notice: the agent SENDS the URL and WAITS. It does NOT try to access the URL itself.

How It Works

  • Script starts a Confidant server (or reuses existing one on port 3000)
  • Creates a request via the API with a unique ID and secure web form
  • Optionally starts a localtunnel for public access (or detects existing ngrok/localtunnel)
  • Prints the URL — agent shares it with the user in chat
  • Delegates polling to confidant request --poll which blocks until the secret is submitted
  • With --service or --save: secret is saved to disk (chmod 600), then destroyed on server
  • Without --service/--save: secret is printed to stdout, then destroyed on server

Tunnel Options

| Provider | Account needed | How |
|----------|----------------|-----|
| localtunnel (default) | No | --tunnel flag or npx localtunnel --port 3000 |
| ngrok | Yes (free tier) | Auto-detected if running on same port |

The script auto-detects both. If neither is running and --tunnel is passed, it starts localtunnel.
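Detecting an existing ngrok tunnel presumably relies on ngrok's local inspection API (127.0.0.1:4040 is ngrok's documented default). A minimal probe along those lines:

```shell
# Probe ngrok's local API; a silent failure simply means no ngrok agent is running
if curl -fsS --max-time 2 http://127.0.0.1:4040/api/tunnels 2>/dev/null | grep -q '"public_url"'; then
  echo "ngrok tunnel detected" > /tmp/tunnel_demo
else
  echo "no ngrok tunnel running" > /tmp/tunnel_demo
fi
cat /tmp/tunnel_demo
```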

Advanced: Direct CLI Usage

For edge cases not covered by the scripts:

# Start server only
confidant serve --port 3000 &

# Start server + create request + poll (single command)
confidant serve-request --label "Key" --service myapp

# Create request on running server
confidant request --label "Key" --service myapp

# Submit a secret (agent-to-agent)
confidant fill "<request-id>" --secret "<value>"

# Check status of a specific request
confidant get-request <request-id>

# Retrieve a delivered secret (by secret ID, not request ID)
confidant get <secret-id>

If confidant is not installed globally, run bash {skill}/scripts/setup.sh first, or prefix with npx @aiconnect/confidant.

⚠️ Only use direct CLI if the scripts don't cover your case.
