📦 ai-newsletter-chn

v1.0.0

Generate a daily AI news newsletter for a Chinese audience from fresh web sources, summarizing current AI/ML articles into Markdown and JSON with Simplified...

by @j3ffyang (Jeff Yang)
Security scan
VirusTotal
Harmless
View report
OpenClaw
Safe
high confidence
The skill's requested API keys and runtime instructions are consistent with producing a daily Chinese AI news digest; nothing in the SKILL.md asks for unrelated secrets, binaries, or installs.
Review recommendations
This skill appears coherent and instruction-only, but consider the following before installing: (1) You must supply BRAVE_API_KEY and FIRECRAWL_API_KEY — only provide keys you trust and understand (check provider terms, billing, and rate limits). (2) The skill will fetch arbitrary web pages via the web_fetch tool: ensure you trust how that tool performs requests (it may expose your agent's network metadata or consume your API quota). (3) Verify whether translation and summarization result quality meets your needs and whether any fetched cont...
Detailed analysis
Purpose and capabilities
The skill's description (a daily AI newsletter for a Chinese audience) matches the runtime instructions: searching the web, fetching pages, filtering, summarizing, and translating output into Simplified Chinese. The required environment variables (BRAVE_API_KEY and FIRECRAWL_API_KEY) are coherent with the declared required tools (web_search, web_fetch) and are plausible for performing search and crawl operations. Minor inconsistency: the registry name/slug is ai-newsletter-chn while SKILL.md uses ai-newsletter-daily, and the metadata version is 1.2.1 vs the registry's 1.0.0 — this is likely a bookkeeping/versioning mismatch but does not affect capability alignment.
Instruction scope
SKILL.md contains a detailed, deterministic workflow that confines actions to web search, fetch, filtering, ranking, summarization, and translation. It does not instruct the agent to read local files or other env vars, or to exfiltrate data to unrelated endpoints. The instructions rely on the platform-provided web_search and web_fetch tools; the safety of actual network interactions depends on those tool implementations (not on the skill text itself).
Installation mechanism
No install spec and no code files are present — this is instruction-only. Nothing will be written to disk or executed beyond the agent following the prose workflow, which minimizes install-time risk.
Credential requirements
The skill requires two API keys: BRAVE_API_KEY and FIRECRAWL_API_KEY. Both are justifiable given the need to perform web searches and fetch pages. There are no other unexpected secrets or config paths requested. Users should understand that providing these keys grants the skill the ability to perform network searches/fetches via those services.
Persistence and permissions
The skill is not always-enabled and is user-invocable; disable-model-invocation is false (normal). It does not request system-wide config changes or persistent privileges. Autonomous invocation is allowed by default but is not combined with other red flags here.
Security comes in layers; review the code before running it.

Runtime dependencies

No special dependencies

Install commands

Official: npx clawhub@latest install ai-newsletter-chn
CN mirror: npx clawhub@latest install ai-newsletter-chn --registry https://cn.longxiaskill.com

Skill documentation

AI Newsletter Daily

Generate a concise daily AI newsletter for a Chinese audience from fresh web sources.

Use this skill only when the request is about current AI/ML news, releases, research, funding, product launches, model updates, regulation, benchmarks, or practitioner-relevant developments.

Do not use this skill for:

Evergreen explainers. Non-AI topics. Long-form research that is not intended to become a curated newsletter.

Inputs

Expected inputs, with defaults if missing:

target_news_count = 20
search_query = "latest AI news today"
search_time_window_days = 2
max_search_results = 60
min_articles_required = 10
include_domains = []
exclude_domains = ["youtube.com", "reddit.com", "facebook.com", "x.com", "twitter.com"]
summary_model = "host-default"
max_scrape_retries = 2

Rules:

Clamp target_news_count to 1..50.
Clamp search_time_window_days to 1..14.
Clamp max_search_results to 20..120.
Clamp min_articles_required to 1..50.
Clamp max_scrape_retries to 0..5.
If min_articles_required > target_news_count, set min_articles_required = target_news_count.

Batch policy

Use a two-stage batch limit:

Search batch: collect up to max_search_results candidates from search.
Scrape batch: keep the top target_news_count × 2 ranked candidates for fetch and summary attempts.
Final batch: return only the top target_news_count verified items.

Do not summarize every search result. Over-collect, filter, verify, then reduce to the final batch.
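As a rough illustration, the input defaults, clamping rules, and batch sizing above can be sketched in Python. The `resolve_inputs` helper and its dict shape are illustrative assumptions, not part of the skill itself:

```python
def clamp(value, lo, hi):
    """Clamp a numeric value into the inclusive range [lo, hi]."""
    return max(lo, min(hi, value))

def resolve_inputs(overrides=None):
    """Apply the skill's documented defaults, bounds, and batch sizes."""
    params = {
        "target_news_count": 20,
        "search_query": "latest AI news today",
        "search_time_window_days": 2,
        "max_search_results": 60,
        "min_articles_required": 10,
        "max_scrape_retries": 2,
    }
    params.update(overrides or {})

    params["target_news_count"] = clamp(params["target_news_count"], 1, 50)
    params["search_time_window_days"] = clamp(params["search_time_window_days"], 1, 14)
    params["max_search_results"] = clamp(params["max_search_results"], 20, 120)
    params["min_articles_required"] = clamp(params["min_articles_required"], 1, 50)
    params["max_scrape_retries"] = clamp(params["max_scrape_retries"], 0, 5)

    # min_articles_required can never exceed the final batch size.
    params["min_articles_required"] = min(
        params["min_articles_required"], params["target_news_count"]
    )
    # The scrape batch keeps target_news_count * 2 ranked candidates.
    params["scrape_batch_size"] = params["target_news_count"] * 2
    return params
```

For example, `resolve_inputs({"target_news_count": 100})` clamps the count to 50 and yields a scrape batch of 100.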

Required outputs

Return all of the following:

newsletter_items as a list of objects.
markdown_newsletter as a string.
json_newsletter as an object.

Each newsletter item must include:

title, url, domain, published_at, summary, relevance_score, source_query

Use "unknown" for published_at when no date is available.
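The required item fields map naturally onto a small record type. This `NewsletterItem` dataclass is a hypothetical sketch of that shape, not something the skill defines:

```python
from dataclasses import dataclass, asdict

@dataclass
class NewsletterItem:
    """One verified article, mirroring the required fields above."""
    title: str
    url: str
    domain: str
    published_at: str  # ISO date string, or "unknown" when no date is available
    summary: str
    relevance_score: float
    source_query: str

# Hypothetical example item; the values are made up for illustration.
item = NewsletterItem(
    title="Example model release",
    url="https://example.com/ai-release",
    domain="example.com",
    published_at="unknown",
    summary="Short plain-text summary of why this matters.",
    relevance_score=72.0,
    source_query="latest AI news today",
)
```

`asdict(item)` then gives exactly the object shape expected in newsletter_items and json_newsletter.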

Deterministic workflow

Resolve inputs.

Apply defaults and bounds. Initialize warnings = []. Initialize seen_canonical_urls = set(). Initialize processed_urls = set().

Search.

Run web_search with search_query. If there are no usable results, retry once with: "{search_query} generative AI LLM model open source enterprise". If there are still no usable results, fail with a clear message.

Normalize and filter.

Keep only results with a non-empty title and URL. Canonicalize URLs by lowercasing the host, removing tracking parameters when possible, and normalizing safe trailing slashes. Drop duplicates by canonical URL. Apply include_domains and exclude_domains. Prefer results likely within search_time_window_days. Keep unknown dates, but score them lower.
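A minimal sketch of the canonicalization step, using only the standard library. The exact set of tracking parameters is an assumption — the skill does not enumerate them:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Assumed tracking parameters to strip; not specified by the skill.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "utm_term",
                   "utm_content", "fbclid", "gclid", "ref"}

def canonicalize(url):
    """Lowercase scheme/host, drop tracking params, trim a trailing slash."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if k.lower() not in TRACKING_PARAMS]
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       path, urlencode(query), ""))
```

Deduplication then reduces to checking `canonicalize(url)` membership in seen_canonical_urls.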

Rank.

Score each candidate from 0 to 100: AI-topic relevance 0..50, freshness 0..30, title/snippet clarity 0..20. Sort by relevance_score descending, then published_at descending (unknown last), then url ascending. Keep the top target_news_count × 2 candidates.
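The multi-key ordering above can be implemented with stable sorts applied in reverse priority order (last-applied key wins; ties preserve the earlier passes). This sketch assumes scoring has already happened upstream and ISO-8601 date strings, which compare correctly lexicographically:

```python
def rank(candidates, limit=None):
    """Order candidates: relevance_score desc, then published_at desc with
    "unknown" dates last, then url asc. Optionally truncate to `limit`."""
    items = sorted(candidates, key=lambda c: c["url"])  # lowest priority key
    # published_at descending; (False, "unknown") sorts after known dates
    # under reverse=True, pushing unknowns to the end.
    items.sort(key=lambda c: (c["published_at"] != "unknown", c["published_at"]),
               reverse=True)
    items.sort(key=lambda c: c["relevance_score"], reverse=True)
    return items if limit is None else items[:limit]
```

Calling `rank(candidates, limit=target_news_count * 2)` yields the scrape batch.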

Verify and summarize.

Process candidates in ranked order until target_news_count verified items are collected. Skip candidates whose canonical URL is already in processed_urls. Attempt web_fetch up to max_scrape_retries + 1 times. If a fetch fails, add a warning with the URL and reason, then continue. Cross-check the search result against the fetched page using title similarity, domain consistency, topic alignment, and published date when available. If the page appears materially inconsistent, skip it and warn. Summarize each accepted article in one short plain-text paragraph, at most about 80 words, focused on why it matters to AI practitioners.
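The collection loop above can be sketched as follows. Here `fetch`, `verify`, and `summarize` are hypothetical stand-ins for the platform's web_fetch tool and the skill's cross-check and summary steps:

```python
def collect_verified(ranked, target_count, max_scrape_retries,
                     fetch, verify, summarize):
    """Walk ranked candidates until target_count items pass verification."""
    items, warnings, processed = [], [], set()
    for cand in ranked:
        if len(items) >= target_count:
            break
        if cand["url"] in processed:
            continue  # skip already-processed canonical URLs
        processed.add(cand["url"])
        page, reason = None, "fetch failed"
        for _ in range(max_scrape_retries + 1):
            try:
                page = fetch(cand["url"])
                break
            except Exception as exc:
                reason = str(exc)
        if page is None:
            warnings.append((cand["url"], reason))  # fetch never succeeded
            continue
        if not verify(cand, page):
            warnings.append((cand["url"], "inconsistent with search result"))
            continue
        items.append({**cand, "summary": summarize(page)})
    return items, warnings
```

A failed fetch consumes the candidate but not the budget of final items; the loop simply moves on down the ranking.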

Minimum quality gate.

If collected items are fewer than min_articles_required, run one fallback search with: "AI news today machine learning model release funding research". Process only new candidates not already seen or processed. Repeat filtering, ranking, verification, and summarization.
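A sketch of the single-fallback gate, where `run_search` and `run_pipeline` are hypothetical stand-ins for the search step and the filter/rank/verify/summarize steps described above; the boolean return supports the warning record mentioned later:

```python
FALLBACK_QUERY = "AI news today machine learning model release funding research"

def apply_quality_gate(items, min_articles_required, run_search,
                       run_pipeline, seen_urls):
    """Run at most one fallback search when too few items survived.

    Returns the (possibly extended) items and whether fallback was used.
    """
    if len(items) >= min_articles_required:
        return items, False
    # Only candidates not already seen or processed go through the pipeline.
    extra = [c for c in run_search(FALLBACK_QUERY) if c["url"] not in seen_urls]
    return items + run_pipeline(extra), True
```

Because the gate runs once, the workflow stays deterministic and bounded even when sources are thin.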

Final integrity check.

Ensure every final item has a non-empty title, url, domain, summary, and source_query, and a numeric relevance_score. Ensure each URL appears only once. Ensure markdown_newsletter and json_newsletter match in item count. Remove and warn on any invalid item.
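The integrity pass above amounts to a single filtering loop; this is a minimal sketch, with the warning strings being illustrative assumptions:

```python
def integrity_check(items):
    """Drop items missing required fields, lacking a numeric score, or
    repeating a URL; return kept items plus warnings."""
    required = ("title", "url", "domain", "summary", "source_query")
    kept, warnings, seen = [], [], set()
    for item in items:
        if any(not item.get(field) for field in required):
            warnings.append(("invalid item", item.get("url", "")))
            continue
        if not isinstance(item.get("relevance_score"), (int, float)):
            warnings.append(("non-numeric relevance_score", item["url"]))
            continue
        if item["url"] in seen:
            warnings.append(("duplicate url", item["url"]))
            continue
        seen.add(item["url"])
        kept.append(item)
    return kept, warnings
```

The matching item count between markdown_newsletter and json_newsletter then follows automatically if both are rendered from the returned `kept` list.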

Finalize.

Sort by relevance_score descending, then published_at descending. Truncate to target_news_count. Render markdown_newsletter. Assemble json_newsletter. Apply the language output rule below. Return all outputs.

Language output

Translate the final markdown_newsletter body and each article summary in newsletter_items into Simplified Chinese. Keep title, url, domain, published_at, relevance_score, and source_query unchanged. If a source title is already in Chinese, preserve it as-is. Do not add extra commentary outside the newsletter content.

Verification

Accept items only if:

The URL is valid and canonicalized. The search result and fetched page broadly match. The topic is actually AI/news relevant. The published date is present or safely unknown. The fetched content is not malformed or off-topic.

Record warnings for failed URLs, short reasons, and whether the fallback search was used.

Output formatting

markdown_newsletter:

H1 title with date. One H2 per article. One short summary paragraph per article. One so

Data source: ClawHub ↗ · Chinese localization: 龙虾技能库