
Fetch — Skill Tool

v1.0.0

Public web retrieval and clean extraction engine. Use whenever the user wants to fetch, download, inspect, clean, or save content from a public URL. Supports...

by @agistack (AGIstack)·MIT-0
License: MIT-0
Last updated: 2026/3/12
Security scan: VirusTotal — harmless
OpenClaw: safe (high confidence)
The skill's code, instructions, and resource requirements are consistent with its stated purpose (fetching public URLs and storing cleaned + raw results locally); nothing indicates stealthy exfiltration or unrelated privileges.
Assessment Advice
This skill appears coherent and self-contained. Before installing: (1) review and, if desired, run the included scripts locally to confirm behavior; (2) be cautious when fetching untrusted URLs — large responses are not size-limited and raw HTML may contain sensitive data you wouldn't want stored locally; (3) confirm you are comfortable with files being created under ~/.openclaw/workspace/memory/fetch. No credentials or external uploads are requested by the skill.
Detailed Analysis
Purpose and Capabilities
Name/description (public fetch + clean + local save) align with the provided scripts: fetch_url.py performs an HTTP(S) GET, extract.py cleans/extracts title/links, and storage writes files under ~/.openclaw/workspace/memory/fetch. There are no requests for unrelated credentials or services.
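To make the described pipeline concrete, here is a minimal stdlib-only sketch of a fetch-and-extract step of the kind fetch_url.py and extract.py perform. The class and function names are hypothetical, not the skill's actual code:

```python
import urllib.request
from html.parser import HTMLParser

class TitleLinkParser(HTMLParser):
    """Collect the <title> text and all <a href> values from an HTML page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.links = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def fetch_and_extract(url: str) -> dict:
    # Plain HTTP(S) GET -- no cookies, logins, or browser automation.
    with urllib.request.urlopen(url, timeout=30) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        raw = resp.read().decode(charset, errors="replace")
    parser = TitleLinkParser()
    parser.feed(raw)
    return {"title": parser.title.strip(), "links": parser.links, "raw": raw}
```

Since only the standard library is used, this matches the skill's "no external packages" claim.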
Instruction Scope
SKILL.md instructions match the scripts' behavior: they require python3, operate on public URLs, store data locally, and offer list/show/save workflows. The scripts do not read other system files, contact endpoints beyond the target URL, or perform browser automation. Minor note: extracted links are returned as-is (may include non-http schemes) and large downloads are not size-limited.
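The two minor caveats above (unbounded response size, links returned with arbitrary schemes) could be mitigated with a thin wrapper. A hedged sketch under stated assumptions, not part of the skill itself:

```python
from urllib.parse import urlparse

MAX_BYTES = 5 * 1024 * 1024  # example cap: 5 MiB (the skill applies no such limit)

def read_capped(resp, max_bytes: int = MAX_BYTES) -> bytes:
    """Read at most max_bytes from a response-like object; raise if the body is larger."""
    data = resp.read(max_bytes + 1)
    if len(data) > max_bytes:
        raise ValueError(f"response exceeds {max_bytes} bytes")
    return data

def http_links_only(links):
    """Keep only http/https links, dropping mailto:, javascript:, data:, etc."""
    return [l for l in links if urlparse(l).scheme in ("http", "https")]
```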
Installation Mechanism
No install spec and no external package downloads — the skill is delivered as scripts and uses only Python stdlib. This is the lowest-risk install model.
Credential Requirements
The skill requires no environment variables, credentials, or config paths beyond writing to its own ~/.openclaw workspace. Declared runtime constraints (no cookies, no logins) match the code.
Persistence and Permissions
always is false; the skill does not request permanent/global agent privileges or modify other skills. It writes only to its own workspace directory and job file.
Security is layered; review the code before running.

License

MIT-0

Free to use, modify, and redistribute, with no attribution required.

Runtime Dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/3/12

Fetch@1.0.0: Public web retrieval and clean extraction engine. Fetch public URLs, extract readable text, save raw and cleaned output locally, and keep a simple local job history.

● Harmless

Install Command

Official: npx clawhub@latest install fetch
Mirror (CN): npx clawhub@latest install fetch --registry https://cn.clawhub-mirror.com

Skill Documentation

Turn public URLs into usable local content.

Core Philosophy

  • Fetch only public web content.
  • Prefer clean extracted text over noisy raw HTML.
  • Save both the raw response and structured extraction locally.
  • Keep a simple local job history so previous fetches are easy to inspect.

Runtime Requirements

  • Python 3 must be available as python3
  • No external packages required

Safety Boundaries

  • Public URLs only
  • No login flows
  • No cookies or browser automation
  • No API keys or credentials
  • No external uploads or cloud sync
  • All fetched data is stored locally only

Local Storage

All data is stored under:
  • ~/.openclaw/workspace/memory/fetch/jobs.json
  • ~/.openclaw/workspace/memory/fetch/pages/
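A minimal initializer consistent with this layout might look like the following; the exact schema of jobs.json is an assumption, not taken from the skill's source:

```python
import json
from pathlib import Path

FETCH_DIR = Path.home() / ".openclaw" / "workspace" / "memory" / "fetch"

def init_storage(base: Path = FETCH_DIR) -> Path:
    """Create the pages/ directory and an empty jobs.json if they don't exist."""
    (base / "pages").mkdir(parents=True, exist_ok=True)
    jobs = base / "jobs.json"
    if not jobs.exists():
        jobs.write_text(json.dumps({"jobs": []}, indent=2))
    return base
```

Everything stays under the skill's own workspace directory, matching the "local only" boundary above.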

Key Workflows

  • Fetch URL: fetch_url.py --url "https://example.com"
  • Save cleaned output: save_output.py --url "https://example.com" --title "Example"
  • List history: list_jobs.py
  • Show job details: show_job.py --id JOB-XXXX

Scripts

Script           Purpose
init_storage.py  Initialize local storage
fetch_url.py     Fetch a public URL and extract content
save_output.py   Save cleaned output with a custom title
list_jobs.py     List previous fetch jobs
show_job.py      Show one saved fetch job
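The history scripts presumably read the same jobs.json described under Local Storage. A stdlib sketch of what list_jobs.py might do; the record fields (id, url, title) are assumptions:

```python
import json
from pathlib import Path

def list_jobs(jobs_file: Path) -> list:
    """Print a one-line summary per saved fetch job and return the records."""
    if not jobs_file.exists():
        return []
    jobs = json.loads(jobs_file.read_text()).get("jobs", [])
    for job in jobs:
        print(f'{job["id"]}  {job["url"]}  {job.get("title", "")}')
    return jobs
```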
Data source: ClawHub · Chinese localization: 龙虾技能库