Universal Agent — Skill Tool
v1.0.0 · This skill should be used when the user needs to execute tasks through a complete automated workflow: understand natural-language intent, dynamically generate commands or scripts, execute them, and self-recover from errors.
Security Scan
OpenClaw
Suspicious
Medium confidence. The skill's functionality (generating and running arbitrary commands/scripts) matches its description, but the metadata omits required runtime credentials, and the instructions and code allow powerful file, network, and command execution with a few undocumented environment inputs; the gaps are inconsistent and warrant caution.
Assessment Recommendations
This skill genuinely executes arbitrary shell commands and generated Python code, and thus has high potential impact. Specific points to consider before installing or running:
- Metadata mismatch: the registry claims no required env vars, but the script uses an LLM API key (config.json or LLM_API_KEY) for standalone mode and expects UA_* env vars in bridge mode. Ask the publisher to correct the metadata.
- Prefer Bridge mode with a trusted external 'brain' (the external agent provides the UA_* inputs) rather than supplying your own LLM API key in standalone mode.
ℹ Purpose and Capabilities
The skill's declared purpose is to generate and execute commands/scripts end to end; the included Python implementation and SKILL.md are consistent with that capability. However, the registry metadata declares no required environment variables or credentials, while the code and docs show modes that require an LLM API key (config.json or LLM_API_KEY) for standalone operation and expect bridge-specific env vars (UA_THINK, UA_GENERATE_SCRIPT, UA_DEBUG_AND_FIX, UA_SUMMARIZE). That mismatch between declared requirements and actual code is a coherence issue.
⚠ Instruction Scope
SKILL.md and the script explicitly instruct the agent to auto-generate and execute arbitrary shell commands and Python scripts, access/modify files (memory, temp scripts, config.json), and call arbitrary APIs or control hardware. While this is consistent with a 'universal agent' purpose, the runtime instructions also rely on environment-based bridge communication (UA_* variables) and permit self-repair loops that can execute repaired code — broad discretion that can be misused and is not constrained by the registry metadata.
✓ Installation Mechanism
There is no install spec (instruction-only skill with bundled script), so nothing is downloaded or extracted at install time. This minimizes install-time risk; however, the skill includes a large standalone Python script that will be written to disk when installed and can execute arbitrary commands at runtime.
⚠ Credential Requirements
Registry says 'no required env vars' but the code and docs expect an LLM API key for standalone mode (config.json or LLM_API_KEY) and use UA_* environment variables as the bridge protocol. The skill also persists memory and temp scripts to disk. The absence of declared credential requirements in metadata is inconsistent and could lead to users unknowingly supplying sensitive keys to a powerful executor.
ℹ Persistence and Permissions
always:false (not forced). The skill persists execution history/memory to a file (universal_agent_memory.json) and writes temporary script files when executing tasks. It does not declare modifying other skills or system configs, but its ability to run arbitrary commands/scripts implies it can alter system state — so limit scope and run under least privilege.
⚠ scripts/universal_agent.py:943
Dynamic code execution detected.
Safety checks are tiered; review the code before running.
Runtime Dependencies
No special dependencies
Version
latest · v1.0.0 · 2026/4/6
● Suspicious
Install Command
Official: npx clawhub@latest install universal-agent
CN mirror: npx clawhub@latest install universal-agent --registry https://cn.clawhub-mirror.com
Skill Documentation
# Universal Agent Skill

A minimal universal AI agent that automates end-to-end task execution: understand user intent in natural language, generate commands or scripts, execute them, analyze results, and self-recover from errors.
## Architecture

```
Natural Language Input
↓
┌─────────────┐
│ LLM (Brain) │ Understand intent, generate command/Python script
└──────┬──────┘
│ Auto-generate code/command
↓
┌─────────────────┐
│ Command Executor │ Execute any command, control software & hardware
│ (Limbs) │
└───────┬─────────┘
│ Actual execution
↓
Task Complete ✅
```

## File Structure

```
universal-agent/
├── SKILL.md # This file (skill definition)
├── scripts/
│ ├── universal_agent.py # Main program (complete standalone implementation)
│ └── config.json # Configuration file (fill in API key for standalone mode)
└── references/
└── README.md # Detailed usage documentation
```

## When to Use
Use this skill when:
- User describes a task in natural language that requires automated execution
- Task needs dynamic code generation (Python script) and immediate execution
- Task involves file operations, data processing, system administration, CLI tools, API calls
- User wants end-to-end automation without manual intervention
- Keywords: "万能agent" (universal agent), "universal agent", "自动执行" (auto-execute), "动态生成代码" (dynamically generate code), "生成并执行" (generate and execute), "帮我做XX" (do XX for me)
## How It Works

### Automated Workflow (4 Steps)

1. **Think**: the LLM understands the task, judges its complexity, and decides whether to generate a shell command or a Python script
2. **Execute**: auto-write the file, run the command/script, capture the output
3. **Fix**: on error, the LLM analyzes the failure, auto-fixes the code, and retries (up to 2 times)
4. **Summarize**: translate technical output into human-friendly language
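As an illustration, the four-step loop could be sketched as follows. This is a minimal sketch: the function names, retry limit, and control flow are assumptions based on this description, not the script's actual implementation.

```python
import os
import subprocess
import sys
import tempfile

MAX_RETRIES = 2  # the docs state up to 2 fix attempts


def run_task(think, debug_and_fix, summarize):
    """Drive a Think -> Execute -> Fix -> Summarize loop.

    `think`, `debug_and_fix`, and `summarize` stand in for LLM calls;
    a real agent would back them with an LLM or a bridge protocol.
    """
    decision = think()  # {"type": "command" | "script", "content": "..."}
    for _attempt in range(MAX_RETRIES + 1):
        if decision["type"] == "script":
            # Write the generated Python to a temp file and run it
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(decision["content"])
                path = f.name
            result = subprocess.run([sys.executable, path],
                                    capture_output=True, text=True)
            os.unlink(path)
        else:
            result = subprocess.run(decision["content"], shell=True,
                                    capture_output=True, text=True)
        if result.returncode == 0:
            return summarize(result.stdout)
        # Fix step: ask the brain for repaired code, then retry
        decision = debug_and_fix(decision, result.stderr)
    return summarize("failed: " + result.stderr)
```

A trivial usage example: `run_task(lambda: {"type": "command", "content": "echo hi"}, lambda d, e: d, str.strip)` executes the command once and returns the summarized output.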
### Why It's "Universal"
| Capability | Description |
|------------|-------------|
| Shell Commands | File ops, process management, system admin |
| Python Scripts | Data processing, web scraping, ML, image processing |
| CLI Tools | git, docker, ffmpeg, aws, any CLI |
| Hardware Control | Serial/GPIO/network-controlled physical devices |
| API Calls | Any HTTP API |
Command executor can run Python → Python can do anything → Agent can do anything
## Usage Modes
This skill supports three distinct usage modes, each suited to different scenarios:
### Mode 1: Standalone
Run the bundled script directly as an independent program. The script handles everything internally — LLM calls, command execution, safety checks, retries, memory.
```bash
# Single task mode (needs API key)
python scripts/universal_agent.py --run "task description"
# Interactive mode
python scripts/universal_agent.py
# With environment variables
set LLM_API_KEY=sk-xxx && python scripts/universal_agent.py --run "task"
```
What works: Safety ✅ | Auto-retry ✅ | Memory persistence ✅ | Needs API Key.
### Mode 2: Bridge Execution (Recommended)

Execute the script with `--backend bridge`. The script's brain is provided by the external Agent that loaded this Skill, while the script itself handles execution, safety, retry, and memory. Any Agent with an LLM and command execution can use this.
```bash
# Basic bridge execution
python scripts/universal_agent.py --backend bridge --run "task description"
# View full protocol spec
python scripts/universal_agent.py --bridge-info
```
How it works — the Agent drives the script through environment variables:
```
Bridge Mode Flow
────────────────

① User: "List the files in the current directory"
   ↓
② External Agent LLM → generates decision
   set UA_THINK={"type":"command","content":"dir /b"}
   ↓
③ Agent executes:
   python ... --backend bridge --run "List the files in the current directory"
   ↓
④ Script reads UA_THINK → runs "dir /b"
   → safety check passes → captures output
   ↓ (if error)
⑤ Script requests fix via UA_DEBUG_AND_FIX env var
   ↓
⑥ External Agent provides fixed code
   set UA_DEBUG_AND_FIX="fixed_command_or_script"
   ↓
⑦ Script re-executes → success
   ↓
⑧ Script reads UA_SUMMARIZE for final output
   → returns structured JSON result
```
Environment Variable Protocol:
| Variable | When Used | Format |
|----------|----------|--------|
| UA_THINK | Step 1 — decision | JSON: {"type":"command\|script","content":"...","explanation":"..."} |
| UA_GENERATE_SCRIPT | If type=script and code needed | Complete Python source code |
| UA_SUMMARIZE | Final step — result summary | Natural language summary text |
| UA_DEBUG_AND_FIX | On error retry — fixed code | Fixed Python/shell code |
What works: Safety ✅ | Auto-retry ✅ | Memory persistence ✅ | No API Key needed (Agent provides LLM).
Who can use this: WorkBuddy, Cursor, Continue.dev, Aider, Cline, any AI IDE/tool with LLM + shell access.
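A script consuming this protocol could parse the decision variable roughly as follows. This is a hypothetical sketch: only the `UA_THINK` name and JSON shape come from the table above; the helper name and fallback behavior are assumptions.

```python
import json
import os


def read_bridge_decision(env=os.environ):
    """Parse the UA_THINK decision an external agent placed in the environment.

    Returns None when the variable is missing or malformed, so the caller
    can fall back to requesting a fresh decision from the agent.
    """
    raw = env.get("UA_THINK")
    if not raw:
        return None
    try:
        decision = json.loads(raw)
    except json.JSONDecodeError:
        return None
    # The protocol allows only two decision types
    if decision.get("type") not in ("command", "script"):
        return None
    return decision


# Example: the external agent sets the variable before invoking the script
env = {"UA_THINK": '{"type":"command","content":"dir /b","explanation":"list files"}'}
print(read_bridge_decision(env)["content"])  # dir /b
```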
### Mode 3: Inline Simulation

The loaded Agent reads this SKILL.md, learns the architecture pattern, and simulates the workflow using its own native capabilities, without executing the script at all. The script serves only as a reference/teaching example.
- The Agent uses its own LLM instead of `LLMBrain`
- The Agent uses its own `execute_command` instead of `UniversalExecutor`
- The Agent does its own summarization
What works: Fastest ⚡ | No setup | Safety ❌ Retry ❌ Memory ❌ (script features unused).
## Core Components

### `scripts/universal_agent.py`: Main Program
Four core classes implementing the full agent:
| Class | Role | Key Methods |
|-------|------|-------------|
| LLMBrain | Brain — HTTP LLM interface (Mode 1) | think(), generate_script(), summarize(), debug_and_fix() |
| AgentBridge | Brain — External Agent bridge (Mode 2) | think(), generate_script(), summarize(), debug_and_fix(), set_response() |
| UniversalExecutor | Limbs (command execution) | execute(), _execute_command(), _execute_script(), _check_danger() |
| ContextManager | Memory (state management) | add_task_record(), get_context_string(), save()/load() |
| UniversalAgent | Main orchestrator | run(), chat(), batch_run() |
See references/README.md for full API documentation and examples.
## Safety Mechanisms
The executor includes built-in danger detection:
| Level | Examples | Handling |
|-------|----------|----------|
| 🔴 High | rm -rf /, format C: | Forced confirmation required |
| 🟡 Medium | pip uninstall, sudo | Warning prompt |
| 🟢 Low | ls, cat, python script.py | Direct execution |
Danger patterns are defined in HIGH_DANGER_PATTERNS and MEDIUM_DANGER_PATTERNS within the script.
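The tiered check might look like the following. This is an illustrative sketch with a few example patterns drawn from the table above; the authoritative lists are `HIGH_DANGER_PATTERNS` and `MEDIUM_DANGER_PATTERNS` inside the script, and the function name here is an assumption.

```python
import re

# Illustrative patterns only; the real lists live in the script
HIGH_DANGER_PATTERNS = [r"rm\s+-rf\s+/", r"format\s+c:"]
MEDIUM_DANGER_PATTERNS = [r"pip\s+uninstall", r"\bsudo\b"]


def check_danger(command):
    """Classify a command as 'high', 'medium', or 'low' risk."""
    lowered = command.lower()
    for pattern in HIGH_DANGER_PATTERNS:
        if re.search(pattern, lowered):
            return "high"    # forced confirmation required
    for pattern in MEDIUM_DANGER_PATTERNS:
        if re.search(pattern, lowered):
            return "medium"  # warning prompt
    return "low"             # direct execution


print(check_danger("rm -rf /"))         # high
print(check_danger("sudo apt update"))  # medium
print(check_danger("ls -la"))           # low
```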
## Configuration

### Mode 1 (Standalone): Needs an API Key

Option A (config file): edit `scripts/config.json` and fill in your API key.

Option B (environment variables):
```bash
set LLM_API_KEY=your-key-here
set LLM_MODEL=gpt-4o
set LLM_BASE_URL=https://api.openai.com/v1
```

Option C (local Ollama, free):

```bash
ollama run llama3
# Then select ollama_llama3 preset when starting the script
```
Configuration priority: Environment variables > config.json > Interactive input.
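The priority chain could be implemented along these lines. This is a sketch under the stated priority only; the key names and defaults are assumptions, and the interactive-input fallback is omitted.

```python
import json
import os


def resolve_config(env=os.environ, config_path="scripts/config.json"):
    """Resolve LLM settings: environment variables > config.json > defaults."""
    settings = {
        "api_key": None,
        "model": "gpt-4o",
        "base_url": "https://api.openai.com/v1",
    }
    # Lower priority: config file, if present and valid
    try:
        with open(config_path, encoding="utf-8") as f:
            loaded = json.load(f)
        settings.update({k: v for k, v in loaded.items() if k in settings})
    except (FileNotFoundError, json.JSONDecodeError):
        pass
    # Highest priority: environment variables override the file
    for env_var, key in [("LLM_API_KEY", "api_key"),
                         ("LLM_MODEL", "model"),
                         ("LLM_BASE_URL", "base_url")]:
        if env.get(env_var):
            settings[key] = env[env_var]
    return settings
```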
### Mode 2 (Bridge): No API Key Needed
The external Agent provides all LLM capabilities. Configure only optional settings:
```bash
# Optional: change input source from env to file
set UA_INPUT_SOURCE=file
# Optional: skip safety confirmations (not recommended)
# Use --dangerous flag instead
```
### Mode 3 (Simulation): No Configuration Needed
Agent uses its own native capabilities. Nothing to configure.
## Supported LLM Providers
| Provider | Models | base_url |
|----------|--------|----------|
| OpenAI | gpt-4o, gpt-4o-mini | https://api.openai.com/v1 |
| DeepSeek | deepseek-chat, deepseek-reasoner | https://api.deepseek.com |
| Qwen | qwen-max, qwen-turbo | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| Zhipu GLM | glm-4-plus | https://open.bigmodel.cn/api/paas/v4 |
| Local Ollama | llama3, qwen2, any model | http://localhost:11434/v1 |
| Groq | llama-3.1-70b-versatile | https://api.groq.com/openai/v1 |
| Any OpenAI-compatible API | any | your-url |
## Platform Support
✅ Cross-platform — Windows, macOS, Linux:
| OS | Shell Backend |
|----|--------------|
| Windows | cmd.exe /c (with CREATE_NO_WINDOW) |
| macOS | bash (shell=True) |
| Linux | bash (shell=True) |
All file I/O uses UTF-8 encoding. Python script execution uses sys.executable for platform-agnostic invocation.
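A cross-platform executor matching the table above might look like this. This is a sketch; the function names are assumptions, and the bundled script's actual implementation may differ.

```python
import subprocess
import sys


def execute_command(command):
    """Run a shell command with the platform's native shell.

    Mirrors the table above: cmd.exe on Windows (with CREATE_NO_WINDOW
    to keep the spawned console hidden), the default shell elsewhere.
    """
    kwargs = dict(capture_output=True, text=True, encoding="utf-8")
    if sys.platform == "win32":
        kwargs["creationflags"] = subprocess.CREATE_NO_WINDOW
        return subprocess.run(["cmd.exe", "/c", command], **kwargs)
    return subprocess.run(command, shell=True, **kwargs)


def execute_script(path):
    # sys.executable guarantees the same interpreter on every platform
    return subprocess.run([sys.executable, path],
                          capture_output=True, text=True)
```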
## Dependencies

✅ Zero external dependencies: Python standard library only.

- `os`, `sys`: system operations
- `subprocess`: command execution
- `json`, `re`: JSON parsing and regex
- `time`, `datetime`: time handling
- `urllib`: HTTP requests (fallback)

Optional:

- `requests` library for better HTTP support (`pip install requests`)
## Free Options
- Ollama + local model (completely free, unlimited, private)
- DeepSeek (~¥1/million tokens, excellent cost-performance)
- Groq Cloud (free tier available, ultra-fast inference)
Data source: ClawHub ↗ · Chinese localization: 龙虾技能库 (Lobster Skill Library)