
z — Anti-Crawler Defense System

v1.0.2

An anti-skill-crawler defense system. Detects and blocks unauthorized crawling, scraping, and bulk extraction of skill definitions, prompt content, and instruction sets.

by @wscats (enoyao) · MIT-0
License
MIT-0
Last updated
2026/4/6
Security scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
The detection behavior this skill documents is reasonable for a read-only monitor, but there are several inconsistencies and omissions (particularly around alert delivery and invocation policy) that should be clarified before the skill is trusted.
Assessment Recommendations
The README reads like a plausible read-only crawler detector, but a few points need clarifying before installation: 1) Confirm alert delivery — if webhook or email channels will be enabled, they require explicit configuration (webhook URL, SMTP/API keys); make sure these are declared and stored securely, and prefer the 'log' channel by default. 2) Verify platform permissions: confirm that the platform-granted 'request_metadata_read' capability really excludes IPs and personal information as claimed, and establish who can view alerts. 3) Pin down the invocation policy: SKILL.md says operator-only, but the registry flags allow autonomous invocation — if read-only/manual operation is intended, make sure model invocation is disabled. 4) Ask the author to state explicitly which request-metadata fields are accessed (user-agent, timestamps, request IDs, but no IPs or personal identifiers). 5) Because this skill is instructions-only (no code), verify its behavior in a controlled test environment before enabling any alert channel other than local logging. If the author cannot provide concrete configuration instructions and permission descriptions, treat the skill as untrusted. ...
Detailed Analysis
Purpose & Capabilities
SKILL.md describes a passive, read-only "skill crawling" detector that works from request metadata — a purpose consistent with the detection rules and examples it describes. However, the documentation advertises alert channels (webhook, email) and multi-platform support, while the registry metadata lists no environment variables or configuration needed to deliver alerts or integrate across platforms. The skill therefore claims webhook/email capabilities that are not backed by any declared configuration.
Instruction Scope
The runtime instructions are largely limited to read-only analysis of request metadata, consistent with the claimed scope. But SKILL.md includes: (a) alert delivery channels (webhook/email) without specifying how endpoints or credentials are provided, (b) session-fingerprint/user-agent analysis that may rely on identifiers not explicitly listed as permitted, and (c) a claim that autonomous invocation is disabled while the registry flags allow model invocation. These gaps create ambiguity about what data might be transmitted, and when.
Installation Mechanism
An instructions-only skill with no install spec and no code files — nothing is written to disk or downloaded. This is the lowest-risk installation model.
Credential Requirements
The skill declares no required environment variables or configuration paths, yet its configuration section allows alert channels including 'webhook' and 'email', which typically require URLs or credentials (webhook URL, SMTP server/API keys). The absence of declared env vars/config means either that alerts are expected to be log-only (safe), or that the skill will later be configured with sensitive endpoints/credentials — the latter is undocumented and out of proportion to the claimed read-only detection purpose.
Persistence & Permissions
The skill requests no persistent/resident presence and documents no active countermeasures or response modification. That is appropriate. However, SKILL.md explicitly states "Autonomous: disabled — operator must explicitly invoke", while registry-level flags suggest model invocation may be allowed; this mismatch affects the permission surface and should be resolved.
Security is layered — review the code before you run it.

License

MIT-0

Free to use, modify, and redistribute, with no attribution required.

Runtime Dependencies

No special dependencies

Versions

latest · v1.0.2 · 2026/4/6

No user-visible changes compared with the previous version. No file changes were detected in this release.

● Benign

Install Command

Official: npx clawhub@latest install z
Mirror (CN): npx clawhub@latest install z --registry https://cn.clawhub-mirror.com

Skill Documentation

Detect and defend against unauthorized crawling, scraping, and bulk extraction of skill definitions and prompt instructions.

📋 Overview

| Property | Value |
| --- | --- |
| Name | z |
| Type | Passive Defense |
| Trigger | Anomalous skill-access patterns detected |
| Action | Detect → Alert operator → Log event |
| Scope | Read-only pattern analysis on request metadata |
| Autonomous | Disabled — operator must explicitly invoke |

🎯 What Is Skill Crawling?

Skill crawling refers to automated or semi-automated attempts to:

  • Bulk-extract skill definitions, SKILL.md files, and prompt instructions
  • Systematically enumerate available skills and their internal logic
  • Replay or mirror skill content into unauthorized environments
  • Reverse-engineer skill behavior through high-volume probing

z monitors for these patterns and alerts the operator when suspicious activity is detected.


🔍 Detection Engine

z uses passive, read-only analysis of request metadata to identify crawling behavior:

Detection Rules:
├── 📊 Rapid sequential skill-file access detection
├── 📊 Systematic enumeration pattern recognition
├── 📊 Abnormal skill-read frequency analysis
├── 📊 Repetitive prompt-extraction attempt detection
├── 📊 User-agent / session fingerprint anomaly detection
└── 📊 Bulk download timing correlation

Detection Logic

```python
class SkillCrawlerDetector:
    """
    Passive detector that analyzes request patterns to identify potential
    skill-crawling or prompt-scraping attempts.

    Required permissions:
    - request_metadata_read: Read-only access to request pattern data
    - alert_send: Permission to notify the operator
    """

    # Indicators of crawling behavior
    INDICATORS = [
        "rapid_sequential_skill_access",
        "systematic_enumeration",
        "high_frequency_skill_reads",
        "repetitive_prompt_extraction",
        "session_fingerprint_anomaly",
        "bulk_download_timing",
    ]

    def analyze(self, request_metadata: RequestMetadata) -> DetectionResult:
        """
        Analyze request metadata for skill-crawling indicators.
        This method is strictly read-only — no responses are modified.
        """
        triggered = []

        if self._is_rapid_sequential_access(request_metadata):
            triggered.append("rapid_sequential_skill_access")
        if self._is_systematic_enumeration(request_metadata):
            triggered.append("systematic_enumeration")
        if self._is_high_frequency_reads(request_metadata):
            triggered.append("high_frequency_skill_reads")
        if self._is_repetitive_extraction(request_metadata):
            triggered.append("repetitive_prompt_extraction")
        if self._is_fingerprint_anomaly(request_metadata):
            triggered.append("session_fingerprint_anomaly")
        # Without this sixth check, confidence could never reach 1.0,
        # since it is normalized by all six INDICATORS below.
        if self._is_bulk_download_timing(request_metadata):
            triggered.append("bulk_download_timing")

        confidence = len(triggered) / len(self.INDICATORS)

        return DetectionResult(
            detected=confidence >= self.threshold,
            confidence=confidence,
            indicators=triggered,
            recommendation="Review access logs and take manual action if needed.",
        )

    def on_detection(self, result: DetectionResult) -> None:
        """Alert the operator. No automated countermeasures are taken."""
        if result.detected:
            self._send_alert(result)
            self._log_event(result)
```
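The helper predicates such as `_is_rapid_sequential_access` are not defined in SKILL.md. As a purely hypothetical illustration of what one could look like, here is a sliding-window counter over request timestamps (class and parameter names are assumptions, not part of the skill):

```python
from collections import deque


class RapidAccessRule:
    """Hypothetical sliding-window check: flags a session that performs
    more than `max_reads` skill-file reads within `window_seconds`."""

    def __init__(self, max_reads: int = 50, window_seconds: float = 60.0):
        self.max_reads = max_reads
        self.window_seconds = window_seconds
        self.timestamps = deque()

    def observe(self, ts: float) -> bool:
        """Record one skill-file read at time `ts` (seconds) and return
        True if the window now exceeds the threshold."""
        self.timestamps.append(ts)
        # Evict reads that fell out of the time window.
        while self.timestamps and ts - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_reads
```

Because the rule only consumes timestamps, it stays within the read-only, no-IP scope the skill claims.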


📊 Alert Report Format

When suspicious crawling activity is detected, the operator receives:

{
  "alert_type": "skill_crawling_detected",
  "skill": "z",
  "timestamp": "2026-04-06T09:50:00Z",
  "confidence": 0.83,
  "indicators": [
    "rapid_sequential_skill_access",
    "systematic_enumeration",
    "high_frequency_skill_reads"
  ],
  "request_count": 320,
  "time_window_minutes": 5,
  "recommendation": "Review access logs. Consider rate-limiting or blocking the source.",
  "automated_action_taken": "none"
}
All countermeasures (rate-limiting, blocking, etc.) are left to the operator. z only detects and reports.
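Since no automated action is taken, an operator-side consumer only needs to parse the alert and surface it. A minimal sketch (field names follow the report format above; the function itself is hypothetical):

```python
import json


def summarize_alert(raw: str) -> str:
    """Render a one-line operator-log summary of a z alert report."""
    alert = json.loads(raw)
    indicators = ", ".join(alert["indicators"])
    return (
        f"[{alert['timestamp']}] {alert['alert_type']} "
        f"(confidence {alert['confidence']:.2f}): {indicators} — "
        f"{alert['request_count']} requests in "
        f"{alert['time_window_minutes']} min; action taken: "
        f"{alert['automated_action_taken']}"
    )
```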

🔒 Permissions & Data Access

| Permission | Scope | Purpose |
| --- | --- | --- |
| request_metadata_read | Read-only | Analyze skill-access frequency, timing, and patterns |
| alert_send | Write (alerts only) | Send detection alerts to the operator |

Data NOT Accessed

  • ❌ Caller IP addresses or personal identity
  • ❌ Response content (responses are never read or modified)
  • ❌ Network telemetry or routing data
  • ❌ Model internals, weights, or logits
  • ❌ External APIs or third-party services

⚙️ Configuration

```yaml
z:
  # Detection sensitivity (0.0 - 1.0)
  detection_threshold: 0.5

  # Time window for pattern analysis (minutes)
  analysis_window: 10

  # Minimum requests before analysis triggers
  min_request_count: 50

  # Alert configuration
  alerts:
    enabled: true
    channels: ["log"]      # Options: log, webhook, email
    cooldown_minutes: 15   # Cooldown between repeated alerts

  # Safety: these features are permanently disabled
  response_modification: false
  active_countermeasures: false
  caller_tracing: false
```
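With six indicators and `detection_threshold: 0.5`, an alert fires once at least three indicators trigger (3/6 = 0.5). A sketch of resolving operator overrides against these defaults and of that threshold arithmetic (the helper names are assumptions, not part of the skill):

```python
import math

# Defaults mirroring the configuration block above.
DEFAULTS = {
    "detection_threshold": 0.5,
    "analysis_window": 10,
    "min_request_count": 50,
    "alerts": {"enabled": True, "channels": ["log"], "cooldown_minutes": 15},
}


def effective_config(user: dict) -> dict:
    """Shallow-merge operator overrides onto the defaults above."""
    merged = {**DEFAULTS, **user}
    merged["alerts"] = {**DEFAULTS["alerts"], **user.get("alerts", {})}
    return merged


def indicators_needed(threshold: float, total_indicators: int = 6) -> int:
    """Minimum triggered indicators for detected=True, given
    confidence = triggered / total and detection at confidence >= threshold."""
    return math.ceil(threshold * total_indicators)
```

Note that any unspecified `alerts` keys fall back to the log-only defaults, matching the assessment's recommendation to prefer the 'log' channel.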


✅ Capabilities

✅ Passive skill-access pattern monitoring
✅ Crawling / scraping anomaly detection
✅ Configurable detection thresholds
✅ Structured alert reports to operator
✅ Audit logging of detection events

❌ Response modification (permanently disabled)
❌ Active countermeasures (permanently disabled)
❌ Caller identification / tracing (permanently disabled)
❌ Data poisoning (permanently disabled)
❌ Watermark or fingerprint embedding (permanently disabled)


📜 Operating Principles

  • Passive Only — z observes and reports. It never modifies responses or takes active measures.
  • Operator Control — All decisions about countermeasures are made by the human operator.
  • Minimal Permissions — Only the permissions strictly necessary for detection and alerting are requested.
  • Transparency — All detection logic and thresholds are documented and configurable.
  • No Deception — z never produces false, misleading, or contradictory outputs.
  • Compliance — Designed to comply with platform policies and applicable laws.

🎮 Usage Example

[Request Pattern]: 320 skill-file reads in 5 minutes from a single session

[z Detection Engine]: 📊 Analyzing access metadata...

[z Detection Engine]: ⚠️ Confidence 0.83 — Potential skill crawling detected

[Alert System]: 📧 Alert sent to operator

[Alert Report]:
  - Indicators: rapid_sequential_skill_access, systematic_enumeration, high_frequency_skill_reads
  - Recommendation: Review access logs and take manual action if needed

[Operator]: Reviews alert → applies rate-limiting (manual action)

z detects and reports. The operator decides and acts.

⚠️ Disclaimer

z is a passive monitoring tool designed to help operators detect potential skill-crawling and prompt-scraping attempts. It does not take any automated defensive or offensive actions. All countermeasures are at the operator's discretion and should comply with applicable laws, regulations, and platform policies.

Data source: ClawHub ↗ · Chinese localization: 龙虾技能库