AI Newsletter Daily

When to Use
Use for current AI/ML news, releases, research, funding, product launches, model updates, regulation, benchmarks, or practitioner-relevant developments.
Do not use for evergreen explainers, non-AI topics, or long-form research that is not meant to become a curated newsletter.
Procedure
Resolve inputs.
Defaults: target_news_count=20, search_query="latest AI news today", search_time_window_days=2, max_search_results=60, min_articles_required=10, include_domains=[], exclude_domains=["youtube.com","reddit.com","facebook.com","x.com","twitter.com"], summary_model="host-default", max_scrape_retries=2. Clamp: target_news_count 1..50, search_time_window_days 1..14, max_search_results 20..120, min_articles_required 1..50, max_scrape_retries 0..5. If min_articles_required > target_news_count, set it to target_news_count.
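The defaults and clamping rules above can be sketched as follows; the parameter names come from the skill's inputs, while the helper names are illustrative:

```python
def clamp(value, lo, hi):
    """Pin a numeric input to its documented range."""
    return max(lo, min(hi, value))

def resolve_inputs(overrides=None):
    """Merge caller overrides with defaults, then clamp to the allowed ranges."""
    params = {
        "target_news_count": 20,
        "search_query": "latest AI news today",
        "search_time_window_days": 2,
        "max_search_results": 60,
        "min_articles_required": 10,
        "include_domains": [],
        "exclude_domains": ["youtube.com", "reddit.com", "facebook.com",
                            "x.com", "twitter.com"],
        "summary_model": "host-default",
        "max_scrape_retries": 2,
    }
    params.update(overrides or {})
    params["target_news_count"] = clamp(params["target_news_count"], 1, 50)
    params["search_time_window_days"] = clamp(params["search_time_window_days"], 1, 14)
    params["max_search_results"] = clamp(params["max_search_results"], 20, 120)
    params["min_articles_required"] = clamp(params["min_articles_required"], 1, 50)
    params["max_scrape_retries"] = clamp(params["max_scrape_retries"], 0, 5)
    # min_articles_required may never exceed target_news_count.
    params["min_articles_required"] = min(params["min_articles_required"],
                                          params["target_news_count"])
    return params
```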
Search and filter.
Run web_search with search_query. If no usable results come back, retry once with "{search_query} generative AI LLM model open source enterprise". Keep only results with a non-empty title and URL. Canonicalize URLs, drop duplicates, apply the domain filters, and prefer fresh results.
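The canonicalize/dedupe/filter step might look like this minimal sketch; the canonicalization rules (https, lowercase host, no www, no query/fragment) are assumptions, not a spec:

```python
from urllib.parse import urlparse, urlunparse

def canonicalize(url):
    """Normalize scheme and host, drop query string and fragment."""
    p = urlparse(url)
    host = p.netloc.lower().removeprefix("www.")
    return urlunparse(("https", host, p.path.rstrip("/"), "", "", ""))

def filter_results(results, include_domains, exclude_domains):
    """Keep titled, deduplicated results that pass the domain filters."""
    seen, kept = set(), []
    for r in results:
        if not r.get("title") or not r.get("url"):
            continue  # non-empty title and URL required
        url = canonicalize(r["url"])
        if url in seen:
            continue  # duplicate canonical URL
        domain = urlparse(url).netloc
        if any(domain == d or domain.endswith("." + d) for d in exclude_domains):
            continue
        if include_domains and not any(domain == d or domain.endswith("." + d)
                                       for d in include_domains):
            continue
        seen.add(url)
        kept.append({**r, "url": url, "domain": domain})
    return kept
```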
Rank.
Score each result 0..100 from AI-topic relevance, freshness, and title/snippet quality. Sort by score descending, then published date descending, then URL ascending. Keep the top target_news_count * 2 candidates.
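Scoring itself is heuristic, but the three-key sort with mixed directions has a clean Python form: because Python's sort is stable, apply the keys from least to most significant. Field names (`relevance_score`, `published_at`) are assumed to match the finalize step:

```python
def rank(candidates, target_news_count):
    """Order by score desc, published date desc, URL asc; keep 2x target."""
    # Stable sorts compose: least-significant key first.
    ordered = sorted(candidates, key=lambda c: c["url"])
    ordered.sort(key=lambda c: c.get("published_at") or "", reverse=True)
    ordered.sort(key=lambda c: c["relevance_score"], reverse=True)
    return ordered[: target_news_count * 2]
```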
Fetch, verify, summarize.
Process candidates in order until target_news_count verified items are collected. Skip canonical URLs that have already been processed. Fetch each candidate up to max_scrape_retries + 1 times with web_fetch. Verify the title, domain, topic, and date against the search result. Skip inconsistent pages and record a warning. Summarize each accepted article in one plain-text paragraph of at most ~80 words, focused on why it matters to AI practitioners.
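The retry budget of max_scrape_retries + 1 attempts can be sketched as below; `fetch` stands in for the host's web_fetch tool, whose exact interface is assumed, so it is passed in as a callable:

```python
def fetch_with_retries(fetch, url, max_scrape_retries):
    """Try fetching up to max_scrape_retries + 1 times; None if all fail."""
    for _attempt in range(max_scrape_retries + 1):
        try:
            page = fetch(url)
            if page:
                return page
        except Exception:
            pass  # transient failure: fall through to the next attempt
    return None
```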
Fallback.
If the collected items number fewer than min_articles_required, run one fallback search with "AI news today machine learning model release funding research". Process only new candidates and repeat the same filter/rank/fetch/verify/summarize flow.
Finalize.
Keep only valid items with a non-empty title, url, domain, summary, source_query, and numeric relevance_score. Remove duplicates by canonical URL. Sort by score descending, then published date descending. Truncate to target_news_count. Return newsletter_items, markdown_newsletter, and json_newsletter.

Verification
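The validation, dedupe, sort, and truncate sequence above can be sketched as one pass; the field names follow the item schema described in this document:

```python
REQUIRED_FIELDS = ("title", "url", "domain", "summary", "source_query")

def finalize(items, target_news_count):
    """Validate, dedupe by canonical URL, sort, and truncate."""
    seen, valid = set(), []
    for item in items:
        if not all(item.get(field) for field in REQUIRED_FIELDS):
            continue  # every required field must be non-empty
        if not isinstance(item.get("relevance_score"), (int, float)):
            continue  # score must be numeric
        if item["url"] in seen:
            continue  # duplicate canonical URL
        seen.add(item["url"])
        valid.append(item)
    # Stable two-pass sort: published date desc, then score desc.
    valid.sort(key=lambda i: i.get("published_at") or "", reverse=True)
    valid.sort(key=lambda i: i["relevance_score"], reverse=True)
    return valid[:target_news_count]
```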
Accept items only if:
The URL is valid and canonicalized. The search result and the fetched page broadly match. The topic is actually AI/news relevant. The published date is present or safely unknown. The fetched content is not malformed or off-topic.
Record warnings for failed URLs, short reasons, and whether the fallback search was used.
Output Formatting
markdown_newsletter:
H1 title with date. One H2 per article. One short summary paragraph per article. One source link per article.
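A minimal renderer for that layout might look like this; the H1 title wording is an assumption, the structure (H1 with date, one H2 plus summary plus source link per article) follows the format above:

```python
def render_markdown(items, date):
    """Render markdown_newsletter: H1 with date, then per article
    an H2 title, one summary paragraph, and one source link."""
    lines = [f"# AI Newsletter Daily ({date})"]
    for item in items:
        lines += ["",
                  f"## {item['title']}",
                  "",
                  item["summary"],
                  "",
                  f"[{item['domain']}]({item['url']})"]
    return "\n".join(lines)
```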
json_newsletter:
Top-level fields: date, query, count, articles, warnings.

Language Output
Return the newsletter body and all article summaries in Simplified Chinese. Preserve all source metadata unchanged (title, url, domain, published_at, relevance_score, source_query).