
Scrapling Web Scraping — Skill Tool

v1.0.0

[Auto-translated] Zero-bot-detection web scraping for OpenClaw. Bypass Cloudflare, handle JavaScript-heavy sites, and adapt to website changes automatically. Use when y...

2 · 2,800 · 15 current · 16 cumulative
by @zhengxinjipai · MIT-0
License
MIT-0
Last updated
2026/3/6
Security scan
VirusTotal
Harmless
OpenClaw
Suspicious
medium confidence
The skill's behavior matches its description (a scraper that can bypass protections) but it relies on installing an external Python package and downloading browser binaries from the network with no provenance or install spec, and offers dual‑use anti-bot features that warrant caution.
Assessment recommendation
This skill is functionally consistent with its description: it delegates work to a third‑party 'scrapling' package and provides a small wrapper script. The main risks come from installing that external package and running 'scrapling install' (which downloads browser binaries and may execute code) — the skill bundle doesn't include an install spec, release host, or checksums. Before installing, verify the upstream project (PyPI/GitHub) and its maintainers, review the actual 'scrapling' package so...
Detailed analysis
Purpose and capabilities
Name, description, SKILL.md, and included wrapper script are coherent: the code simply calls a third‑party 'scrapling' package to perform basic/stealth/dynamic scraping. There are no unrelated environment variables, binaries, or claims that contradict the code.
Instruction scope
The runtime instructions direct the user/agent to run 'pip install "scrapling[all]"' and 'scrapling install' (which downloads browsers). The skill does not instruct reading unrelated system files or exfiltrating secrets, but it explicitly advocates stealth modes that bypass anti‑bot protections — a sensitive, dual‑use capability. It also references writing custom scripts into /root/.openclaw/skills/ which is expected but worth noting.
Install mechanism
No formal install spec in the skill bundle; instead the SKILL.md instructs installing a PyPI package and running 'scrapling install' to download browsers. That implicitly pulls and executes third‑party code and remote binaries from the network (source/provenance not verified in the skill). This is higher risk because the skill itself does not declare or pin where those artifacts come from or provide checksums.
Credential requirements
The skill requests no environment variables, credentials, or config paths in its metadata. That is proportionate to the wrapper's stated functionality. Note: some stealth/captcha‑solving features could require external solver services or keys in practice (none are declared).
Persistence and permissions
The skill is not always-enabled and does not request elevated platform privileges. It does suggest creating custom scripts in the skill directory (normal). There is no code that modifies other skills or global agent settings.
Security is layered; review the code before running.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/3/6

Initial release of Scrapling Web Scraping skill.
  • Supports fast, stealth, and dynamic web scraping modes to handle protected and JavaScript-heavy sites.
  • Bypasses anti-bot measures, including Cloudflare, with automatic website adaptation.
  • Allows data extraction using CSS selectors and provides JSON output.
  • Includes both command-line and Python integration examples.
  • Enables custom scripting for advanced scraping workflows.

● Harmless

Install command

Official: npx clawhub@latest install scrapling-web-scraper
Mirror (CN): npx clawhub@latest install scrapling-web-scraper --registry https://cn.clawhub-mirror.com

Skill Documentation

Zero-bot-detection web scraping for OpenClaw. Bypass Cloudflare, handle JavaScript-heavy sites, and adapt to website changes automatically.

Quick Start

# Install Scrapling
pip install "scrapling[all]"
scrapling install

# Basic usage
python3 /root/.openclaw/skills/scrapling-web-scraping/scrapling_tool.py https://example.com

# Bypass Cloudflare
python3 /root/.openclaw/skills/scrapling-web-scraping/scrapling_tool.py https://protected-site.com --mode stealth --cloudflare

# Extract specific data
python3 /root/.openclaw/skills/scrapling-web-scraping/scrapling_tool.py https://example.com --selector ".product-title"

# JavaScript-heavy sites
python3 /root/.openclaw/skills/scrapling-web-scraping/scrapling_tool.py https://spa-app.com --mode dynamic --wait ".content-loaded"

Usage with OpenClaw

Natural Language Commands

Basic scraping:

"Use Scrapling to scrape the title and all links from https://example.com"

Bypass protection:

"Scrape https://protected-site.com in stealth mode, bypassing Cloudflare"

Extract data:

"Scrape product names and prices from https://shop.com; the CSS selector is .product"

Dynamic content:

"Scrape https://spa-app.com and wait for the .data-loaded element to finish loading"

Python Code

# Basic scraping
from scrapling.fetchers import Fetcher
page = Fetcher.get('https://example.com')
title = page.css('title::text').get()

# Bypass Cloudflare
from scrapling.fetchers import StealthyFetcher
page = StealthyFetcher.fetch('https://protected.com', headless=True, solve_cloudflare=True)

# JavaScript sites
from scrapling.fetchers import DynamicFetcher
page = DynamicFetcher.fetch('https://spa-app.com', headless=True, network_idle=True)

Features

Feature          Command                Description
Basic Scrape     --mode basic           Fast HTTP requests
Stealth Mode     --mode stealth         Bypass Cloudflare/anti-bot
Dynamic Mode     --mode dynamic         Handle JavaScript sites
CSS Selectors    --selector ".class"    Extract specific elements
JSON Output      --json                 Machine-readable output
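The three modes above map onto three fetcher classes in the scrapling package. A minimal dispatch sketch (the class names match the examples in this document; the actual scrapling_tool.py may resolve modes differently):

```python
# Map the --mode values from the table above to scrapling fetcher class names.
MODE_TO_FETCHER = {
    "basic": "Fetcher",            # plain HTTP requests
    "stealth": "StealthyFetcher",  # anti-bot evasion, can solve Cloudflare
    "dynamic": "DynamicFetcher",   # real browser, waits for JS-rendered content
}

def fetcher_for(mode: str) -> str:
    """Return the scrapling fetcher class name for a CLI mode string."""
    try:
        return MODE_TO_FETCHER[mode]
    except KeyError:
        raise ValueError(
            f"unknown mode {mode!r}; expected one of {sorted(MODE_TO_FETCHER)}"
        ) from None
```

Keeping the mapping as plain strings (rather than importing the classes at module load) lets a wrapper validate the mode before scrapling itself is installed.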

Examples

1. Scrape with CSS Selector

python3 scrapling_tool.py https://quotes.toscrape.com --selector ".quote .text" --json

2. Bypass Cloudflare

python3 scrapling_tool.py https://nopecha.com/demo/cloudflare --mode stealth --cloudflare

3. Wait for Dynamic Content

python3 scrapling_tool.py https://spa-app.com --mode dynamic --wait ".loaded" --json
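The exact shape of the --json output is not documented here. A hypothetical schema, using only the standard library, that a wrapper like this might emit (check scrapling_tool.py for the real field names):

```python
import json

def to_json(url: str, matches: list) -> str:
    # Hypothetical output schema for the --json flag; the real
    # scrapling_tool.py may use different fields entirely.
    payload = {"url": url, "count": len(matches), "results": matches}
    return json.dumps(payload, ensure_ascii=False, indent=2)
```

For example, to_json("https://quotes.toscrape.com", ["A quote."]) yields an indented JSON document with url, count, and results keys.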

CLI Reference

python3 scrapling_tool.py URL [options]

Options:
  --mode {basic,stealth,dynamic}   Scraping mode (default: basic)
  --selector, -s CSS_SELECTOR      Extract specific elements
  --cloudflare                     Solve Cloudflare (stealth mode only)
  --wait SELECTOR                  Wait for element (dynamic mode only)
  --json, -j                       Output as JSON
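The interface above can be reproduced with a minimal argparse skeleton. This is a sketch of what scrapling_tool.py's CLI surface looks like based on the documented options, not its actual source:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the documented options of scrapling_tool.py (assumed, not extracted).
    parser = argparse.ArgumentParser(prog="scrapling_tool.py")
    parser.add_argument("url", help="Page to scrape")
    parser.add_argument("--mode", choices=["basic", "stealth", "dynamic"],
                        default="basic", help="Scraping mode (default: basic)")
    parser.add_argument("--selector", "-s", metavar="CSS_SELECTOR",
                        help="Extract specific elements")
    parser.add_argument("--cloudflare", action="store_true",
                        help="Solve Cloudflare (stealth mode only)")
    parser.add_argument("--wait", metavar="SELECTOR",
                        help="Wait for element (dynamic mode only)")
    parser.add_argument("--json", "-j", action="store_true",
                        help="Output as JSON")
    return parser
```

Parsing ["https://example.com", "--mode", "stealth", "--cloudflare"] then gives mode="stealth" with cloudflare=True and json defaulting to False.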

Advanced: Custom Scripts

Create custom scraping scripts in /root/.openclaw/skills/scrapling-web-scraping/:

from scrapling.fetchers import StealthyFetcher

# Your custom scraper
def scrape_products(url):
    page = StealthyFetcher.fetch(url, headless=True)
    products = []
    for item in page.css('.product'):
        products.append({
            'name': item.css('.name::text').get(),
            'price': item.css('.price::text').get(),
            'link': item.css('a::attr(href)').get()
        })
    return products

Notes

  • Requires Python 3.10+
  • First run: scrapling install to download browsers
  • Respect website Terms of Service
  • Use responsibly
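"Use responsibly" can start with a robots.txt check before any fetch. A standard-library sketch; parse() is fed rule lines directly so the example runs offline, whereas in practice you would fetch the site's /robots.txt first:

```python
from urllib.robotparser import RobotFileParser

def allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Check a URL against robots.txt rules before scraping it."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)

# Example rules: everything under /private/ is off-limits to all agents.
rules = """\
User-agent: *
Disallow: /private/
"""
```

With these rules, allowed(rules, "MyScraper", "https://example.com/private/data") is False while ordinary pages remain fetchable.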

Created: 2026-03-05 by 老二
Source: https://github.com/D4Vinci/Scrapling

Data source: ClawHub · Chinese localization: 龙虾技能库