
Wordpress AEO Autoblogger — Skill Tool

v1.0.0

Autonomous AEO and SEO content generation and optimization engine for scaling business operations. Use when Codex needs to run end-to-end programmatic SEO wo...

by @how2rank (James Jernigan)·MIT-0
License
MIT-0
Last updated
2026/4/14
Security scan
VirusTotal: harmless
OpenClaw: suspicious (medium confidence)
The skill's code and runtime instructions largely match an autonomous WordPress autoblogging purpose, but the package metadata omits the many credentials and heavy dependencies the code actually requires and the code injects a promotional CTA by default — these mismatches and unexpected behaviors warrant caution.
Evaluation Advice
This skill implements an autonomous WordPress autoblogging pipeline that scrapes competitors, calls LLMs, builds embeddings, and directly updates posts. Before installing, consider: - Metadata mismatch: the package metadata declares no required environment variables or install steps, but the code and SKILL.md require many sensitive keys (WP credentials, multiple LLM API keys, scraper and proxy credentials). Treat the skill as requiring several secrets and heavy dependencies. - Live publishing...
Detailed Analysis
Purpose and Capabilities
The name/description (WordPress AEO Autoblogger) align with the code: it generates SEO content, scrapes competitors, builds schema, stores embeddings, and publishes directly to WordPress. The heavy use of LLM providers, search/scraper tiers, ChromaDB, and WP REST API is coherent with the stated purpose.
Instruction Scope
SKILL.md instructs the agent to verify a .env with WP_URL, LLM provider keys, and scraper keys and to run setup and the worker scripts. The scripts perform network I/O (scraping multiple tiers, Jina, provider APIs), write to WordPress, and update local DB/vector store. However, the registry metadata declares no required env vars or binaries while the SKILL.md and code expect many secrets and dependencies — an explicit mismatch. The runtime instructions and code also contain an automatic publishing path (direct WP PUT) and analytics operations that will update live posts, which is high-impact and should be highlighted to users.
Installation Mechanism
There is no install spec in the registry (instruction-only), but the repository includes requirements.txt listing heavyweight packages (playwright, chromadb, google-generativeai, anthropic, openai, filelock, etc.). Playwright also requires browser runtime components. The lack of an install specification combined with these heavy runtime requirements is a deployment friction / surprise risk but not inherently malicious.
Credential Requirements
The code expects many sensitive environment values (e.g., GEMINI_API_KEY, OPENAI_API_KEY, ANTHROPIC_API_KEY, GSC_SERVICE_ACCOUNT, SCRAPER_TIER2_KEY, SCRAPER_TIER3_KEY, JINA_API_KEY, PROXY credentials, WP_URL, WP_USERNAME, WP_APP_PASSWORD). Those credentials are proportionate to the declared purpose (publishing + scraping + embedding) — except the registry declares no required env vars, which is an inconsistency. Additionally, config.py hardcodes a CTA_LINK (https://oneclickvids.com) and CTA_TEXT that the pipeline will inject into generated content; that behavior is not called out in the skill description and could be an unwanted promotional/backdoor insertion.
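Given the hardcoded CTA noted above, a reviewer can check for it before running anything. A minimal audit sketch (the helper name and regex are mine, not part of the skill):

```python
import re
from pathlib import Path

# Hypothetical audit helper: scan a skill's config.py for hardcoded
# promotional values (CTA_LINK / CTA_TEXT) before trusting its output.
def find_hardcoded_ctas(config_path):
    pattern = re.compile(r'^(CTA_LINK|CTA_TEXT)\s*=\s*(.+)$', re.MULTILINE)
    text = Path(config_path).read_text(encoding="utf-8")
    return {name: value.strip() for name, value in pattern.findall(text)}
```

Any non-empty result is worth inspecting: a CTA pointing at a domain you don't control means the pipeline will inject third-party promotion into every generated post by default.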
Persistence and Permissions
The skill does not request 'always: true' or other elevated platform privileges. It performs filesystem writes (SQLite DB openclaw.db and ChromaDB under ./chroma_db) and modifies remote WordPress posts via REST API, which are expected for its purpose. No evidence it modifies other skills' configs or requests permanent platform-level presence.
Security is layered; review the code before running.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime Dependencies

No special dependencies declared

Versions

latest · v1.0.0 · 2026/4/14

- Initial release of execute-openclaw-pipeline for autonomous AEO and SEO content generation.
- Supports semantic keyword generation, multi-tiered competitor scraping, and dynamic JSON-LD schema building.
- Enables direct publishing to WordPress and analytics-based CTR decay detection and repair.
- Structured workflows for both daily content generation and ongoing post optimization.
- Enforces concurrency control on vector database writes and automatic scraper fallback logic.


Installation Commands

Official: npx clawhub@latest install wordpress-aeo-autoblogger
Mirror (CN): npx clawhub@latest install wordpress-aeo-autoblogger --registry https://cn.clawhub-mirror.com

Skill Documentation

# OpenClaw Pipeline Execution

Initial Setup and Configuration

Before running the pipeline, ensure the environment is correctly configured:
  • Verify .env contains necessary credentials (WP_URL, LLM provider keys, Scraper keys).
  • Run scripts/setup.py to initialize the SQLite database (openclaw.db) and ChromaDB vector storage.
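The pre-flight check above can be sketched as a small guard. The exact variable list is an assumption drawn from the security analysis, not a published contract:

```python
import os

# Assumed-required variables (from the analysis, not a declared spec);
# extend with scraper/proxy keys as your tiers require.
REQUIRED = ["WP_URL", "WP_USERNAME", "WP_APP_PASSWORD", "GEMINI_API_KEY"]

def missing_env(required=REQUIRED):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]
```

Running this before scripts/setup.py surfaces missing secrets up front instead of mid-pipeline.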

Executing the Daily Worker (Content Generation)

To generate and publish new content for scaling operations:
  • Execute scripts/daily_worker.py.
  • The pipeline handles:
    - Semantic query generation based on TARGET_NICHE.
    - Competitor scraping via the waterfall method (Playwright, Firecrawl, Jina).
    - Content generation using the designated LLM.
    - Semantic internal link injection.
    - Direct publication to WordPress.
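The waterfall scraping described above reduces to try-in-order-with-fallthrough. A hypothetical sketch (the tier functions are stand-ins, not the skill's actual scrapers):

```python
# Sketch of the "waterfall" fallback pattern: try each scraper tier in
# order and fall through on failure rather than halting the pipeline.
def scrape_with_fallback(url, tiers):
    for name, scraper in tiers:
        try:
            result = scraper(url)
            if result:
                return name, result
        except Exception:
            continue  # fall through to the next tier
    # Final fallback mirrors the skill's Tier 6 LLM-grounded synthesis.
    return "tier6-llm-synthesis", None
```

The key property is that a tier raising an exception never aborts the run; only the ordered list of tiers changes between deployments.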

Executing the Analytics Worker (Content Optimization)

To optimize existing content experiencing CTR decay:
  • Execute scripts/analytics_worker.py.
  • The worker evaluates Google Search Console data against established age gates.
  • Eligible posts are updated via the WordPress REST API, and ChromaDB vector embeddings are re-synced.
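The direct-update path can be illustrated against the standard WordPress REST API. This is a hedged sketch with placeholder credentials, not the skill's analytics_worker.py code:

```python
import base64
import json
import urllib.request

# The /wp-json/wp/v2/posts/<id> endpoint and Application Password
# Basic auth are standard WordPress; all values here are placeholders.
def build_update_request(wp_url, user, app_password, post_id, fields):
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    return urllib.request.Request(
        f"{wp_url.rstrip('/')}/wp-json/wp/v2/posts/{post_id}",
        data=json.dumps(fields).encode("utf-8"),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="POST",  # WordPress accepts POST (or PUT) for updates
    )

# Sending is one call — urllib.request.urlopen(req) — omitted here so
# the sketch stays side-effect free.
```

Because this endpoint edits live posts with no confirmation step, it is the high-impact path the security analysis flags; gate it behind a dry-run flag when evaluating the skill.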

Critical Architectural Constraints

  • Concurrency: ChromaDB writes are serialized via filelock. Do not attempt to write to ChromaDB concurrently without acquiring get_chroma_lock() from setup.py.
  • Scraping Fallbacks: If Tier 1-5 scrapers fail, the pipeline falls back gracefully to LLM grounded search synthesis (Tier 6). Do not halt execution if competitor scraping fails.
  • Schema Generation: JSON-LD schema is dynamically constructed via schema_engine.py based on the parsed Pydantic content outline.
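The filelock-based serialization of ChromaDB writes can be approximated with a stdlib-only stand-in. This illustrates the pattern, not the skill's actual get_chroma_lock() implementation:

```python
import os
import time
from contextlib import contextmanager

# Cross-process mutual exclusion via atomic exclusive file creation;
# the skill itself uses the filelock package for the same purpose.
@contextmanager
def chroma_write_lock(path="chroma_db.lock", timeout=30.0):
    deadline = time.monotonic() + timeout
    while True:
        try:
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break  # lock acquired
        except FileExistsError:
            if time.monotonic() > deadline:
                raise TimeoutError("could not acquire ChromaDB write lock")
            time.sleep(0.1)
    try:
        yield
    finally:
        os.close(fd)
        os.unlink(path)
```

Usage mirrors the constraint above: wrap every vector-store write in `with chroma_write_lock(): ...` so concurrent workers serialize instead of corrupting the store.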
Data source: ClawHub · Chinese localization: 龙虾技能库