AI Newsletter Daily
Generate a concise daily AI newsletter for a Chinese audience from fresh web sources.
Use this skill only when the request is about current AI/ML news, releases, research, funding, product launches, model updates, regulation, benchmarks, or practitioner-relevant developments.
Do not use this skill for:
- Evergreen explainers.
- Non-AI topics.
- Long-form research that is not intended to become a curated newsletter.

Inputs
Expected inputs, with defaults if missing:
- target_news_count = 20
- search_query = "latest AI news today"
- search_time_window_days = 2
- max_search_results = 60
- min_articles_required = 10
- include_domains = []
- exclude_domains = ["youtube.com", "reddit.com", "facebook.com", "x.com", "twitter.com"]
- summary_model = "host-default"
- max_scrape_retries = 2
Rules:
- Clamp target_news_count to 1..50.
- Clamp search_time_window_days to 1..14.
- Clamp max_search_results to 20..120.
- Clamp min_articles_required to 1..50.
- Clamp max_scrape_retries to 0..5.
- If min_articles_required > target_news_count, set min_articles_required = target_news_count.

Batch policy
Use a two-stage batch limit:
- Search batch: collect up to max_search_results candidates from search.
- Scrape batch: keep the top target_news_count * 2 ranked candidates for fetch and summary attempts.
- Final batch: return only the top target_news_count verified items.

Do not summarize every search result. Over-collect, filter, verify, then reduce to the final batch.
Required outputs
Return all of the following:
- newsletter_items as a list of objects.
- markdown_newsletter as a string.
- json_newsletter as an object.
Each newsletter item must include:
title, url, domain, published_at, summary, relevance_score, source_query
Use "unknown" for published_at when no date is available.
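For illustration, one such item might look like the sketch below (all values hypothetical):

```python
# Example newsletter item (all values hypothetical, for illustration only).
item = {
    "title": "Example Lab releases an open-weight reasoning model",
    "url": "https://example.com/news/model-release",
    "domain": "example.com",
    "published_at": "2025-01-15",  # use "unknown" when no date is available
    "summary": "One short plain-text paragraph on why this matters.",
    "relevance_score": 87,
    "source_query": "latest AI news today",
}

# Every required field must be present.
REQUIRED_FIELDS = {"title", "url", "domain", "published_at",
                   "summary", "relevance_score", "source_query"}
assert REQUIRED_FIELDS.issubset(item)
```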
Deterministic workflow
Resolve inputs.
Apply defaults and bounds. Initialize warnings = []. Initialize seen_canonical_urls = set(). Initialize processed_urls = set().
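The bounds and initialization above can be sketched in Python (variable names follow the spec; the literal values are the defaults listed under Inputs):

```python
def clamp(value, lo, hi):
    """Clamp a numeric input into its allowed range."""
    return max(lo, min(hi, value))

# Apply defaults and bounds (a minimal sketch of the rules above).
target_news_count = clamp(20, 1, 50)
search_time_window_days = clamp(2, 1, 14)
max_search_results = clamp(60, 20, 120)
min_articles_required = clamp(10, 1, 50)
max_scrape_retries = clamp(2, 0, 5)
if min_articles_required > target_news_count:
    min_articles_required = target_news_count

# Initialize bookkeeping state.
warnings = []
seen_canonical_urls = set()
processed_urls = set()
```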
Search.
Run web_search with search_query. If there are no usable results, retry once with: "{search_query} generative AI LLM model open source enterprise". If there are still no usable results, fail with a clear message.
Normalize and filter.
Keep only results with a non-empty title and URL. Canonicalize URLs by lowercasing the host, removing tracking parameters when possible, and normalizing safe trailing slashes. Drop duplicates by canonical URL. Apply include_domains and exclude_domains. Prefer results likely within search_time_window_days. Keep unknown dates, but score them lower.
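A minimal canonicalization sketch; the set of tracking parameters is illustrative, since the spec does not enumerate them:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Common tracking parameters to strip (illustrative, not exhaustive).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "utm_term",
                   "utm_content", "gclid", "fbclid", "ref"}

def canonicalize_url(url: str) -> str:
    """Lowercase scheme and host, drop tracking params, trim a trailing slash."""
    parts = urlsplit(url.strip())
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if k.lower() not in TRACKING_PARAMS])
    # Keep a bare "/" root path; otherwise strip the trailing slash.
    path = parts.path if parts.path in ("", "/") else parts.path.rstrip("/")
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       path, query, ""))
```

Deduplication then reduces to membership tests against seen_canonical_urls.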
Rank.
Score each candidate from 0 to 100: AI-topic relevance 0..50, freshness 0..30, title/snippet clarity 0..20. Sort by relevance_score descending, then published_at descending with unknown dates last, then url ascending. Keep the top target_news_count * 2 candidates.
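One way to implement this sort order is with stable sort passes, least significant key first (a sketch; field names follow the item schema above):

```python
def rank_candidates(candidates, target_news_count):
    """Rank candidates and keep the top target_news_count * 2.

    Three stable passes: url ascending, then published_at descending
    with "unknown" last, then relevance_score descending.
    """
    ranked = sorted(candidates, key=lambda c: c["url"])
    # (has_date, date) with reverse=True puts newest first and "unknown" last.
    ranked.sort(key=lambda c: (c["published_at"] != "unknown", c["published_at"]),
                reverse=True)
    ranked.sort(key=lambda c: c["relevance_score"], reverse=True)
    return ranked[: target_news_count * 2]
```

Python's sort is stable even with reverse=True, so earlier passes break ties for later ones.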
Verify and summarize.
Process candidates in ranked order until target_news_count verified items are collected. Skip candidates whose canonical URL is already in processed_urls. Attempt web_fetch up to max_scrape_retries + 1 times. If a fetch fails, add a warning with the URL and reason, then continue. Cross-check the search result against the fetched page using title similarity, domain consistency, topic alignment, and the published date when available. If the page appears materially inconsistent, skip it and warn. Summarize each accepted article in one short plain-text paragraph, at most about 80 words, focused on why it matters to AI practitioners.
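The retry loop can be sketched as follows; fetch_fn stands in for the host's web_fetch tool, whose real signature is not specified here:

```python
def fetch_with_retries(url, max_scrape_retries, fetch_fn, warnings):
    """Attempt fetch_fn(url) up to max_scrape_retries + 1 times.

    fetch_fn is a stand-in for the host's web_fetch tool. Returns page
    content on success, or None after recording one warning on total failure.
    """
    last_error = None
    for _ in range(max_scrape_retries + 1):
        try:
            return fetch_fn(url)
        except Exception as exc:  # hypothetical: real tool errors may differ
            last_error = exc
    warnings.append(f"fetch failed: {url} ({last_error})")
    return None
```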
Minimum quality gate.
If collected items are fewer than min_articles_required, run one fallback search with: "AI news today machine learning model release funding research". Process only new candidates not already seen or processed. Repeat filtering, ranking, verification, and summarization.
Final integrity check.
Ensure every final item has a non-empty title, url, domain, summary, source_query, and a numeric relevance_score. Ensure each URL appears only once. Ensure markdown_newsletter and json_newsletter match in item count. Remove and warn on any invalid item.
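A sketch of this integrity gate (the warning messages are illustrative):

```python
NONEMPTY_FIELDS = ("title", "url", "domain", "summary", "source_query")

def check_integrity(items, warnings):
    """Drop invalid or duplicate items, recording a warning for each removal."""
    valid, seen = [], set()
    for item in items:
        if not all(item.get(f) for f in NONEMPTY_FIELDS):
            warnings.append(f"dropped item with missing fields: {item.get('url')}")
            continue
        if not isinstance(item.get("relevance_score"), (int, float)):
            warnings.append(f"dropped item with non-numeric score: {item['url']}")
            continue
        if item["url"] in seen:
            warnings.append(f"dropped duplicate url: {item['url']}")
            continue
        seen.add(item["url"])
        valid.append(item)
    return valid
```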
Finalize.
Sort by relevance_score descending, then published_at descending. Truncate to target_news_count. Render markdown_newsletter. Assemble json_newsletter. Apply the language output rule below. Return all outputs.

Language output

Translate the final markdown_newsletter body and each article summary in newsletter_items into Simplified Chinese. Keep title, url, domain, published_at, relevance_score, and source_query unchanged. If a source title is already in Chinese, preserve it as-is. Do not add extra commentary outside the newsletter content.

Verification
Accept items only if:
- The URL is valid and canonicalized.
- The search result and the fetched page broadly match.
- The topic is actually AI-news relevant.
- The published date is present or safely "unknown".
- The fetched content is not malformed or off-topic.

Record warnings for failed URLs, short reasons, and whether the fallback search was used.
Output formatting
markdown_newsletter:
- An H1 title with the date.
- One H2 per article.
- One short summary paragraph per article.
- One so