Awesome AI Sources
v1.0.0

Fetch curated AI news, social signals, blogs, papers, events, and skills from the Agentic Brew public RSS feeds (https://www.agenticbrew.ai/feed/*.xml) and return a compact, agent-friendly list. Use when the user wants "today's AI news", "what's trending in AI", "AI papers this week", "AI events", "AI blogs", "trending AI repos", "trending AI on Reddit / YouTube / Product Hunt", "trending AI skills", "AI news radar", "agentic brew feed", "AI signal radar", "latest AI digest", or any request to pull curated AI items from Agentic Brew without writing a scraper.
Runtime dependencies: none beyond the Python stdlib.
Install command:
openclaw skills install ai-news-fetcher
Skill documentation
Agentic Brew Feed Fetcher
Pulls items from the Agentic Brew public RSS endpoints and returns them as a clean list. No auth, no scraping: just an HTTP GET against the latest run_log's published feed.
Available feeds

| Feed | URL | Contents | Item resolves to |
| --- | --- | --- | --- |
| news | https://www.agenticbrew.ai/feed/news.xml | Synthesized news clusters: title + overview | Agentic Brew news-analysis card page (https://www.agenticbrew.ai/news#cluster=) |
| twitter | https://www.agenticbrew.ai/feed/twitter.xml | Trending X / Twitter topics: title + hottest tweets with likes / RTs / replies / views | The top tweet of the topic on x.com |
| github | https://www.agenticbrew.ai/feed/github.xml | Trending GitHub AI repos: title + detail (stars, language, daily delta) | Original GitHub repo |
| reddit | https://www.agenticbrew.ai/feed/reddit.xml | Trending Reddit AI threads: title + detail (subreddit, upvotes, comments, excerpt) | Original Reddit thread |
| youtube | https://www.agenticbrew.ai/feed/youtube.xml | Curated AI videos: title + summary | Original YouTube video |
| product_hunt | https://www.agenticbrew.ai/feed/product_hunt.xml | Trending AI launches: title + topics + tagline | Original Product Hunt launch page |
| skill | https://www.agenticbrew.ai/feed/skill.xml | Top Claude Code skills from skills.sh + ClawHub: title + installs/stars + summary | Original skill page on skills.sh / ClawHub |
| blog | https://www.agenticbrew.ai/feed/blog.xml | Curated AI blog articles: title + AI-generated summary | Original blog article |
| paper | https://www.agenticbrew.ai/feed/paper.xml | Research papers: title + AI summary + institutions + source (HF/AlphaXiv/X) + votes | Original paper page (arXiv / Hugging Face / x.com) |
| event | https://www.agenticbrew.ai/feed/event.xml | Upcoming AI events: title + start time + summary | Original event page (e.g., lu.ma) |
| all | https://www.agenticbrew.ai/feed/all.xml | Union of all of the above | Per-item: same as the feed above |
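Every feed follows the same URL pattern, so resolving a feed name to its URL can be a one-line mapping. A sketch; the variable names here are illustrative, not part of the skill:

```python
# All Agentic Brew feeds share one URL pattern; map each valid name to its URL.
FEED_NAMES = ["news", "twitter", "github", "reddit", "youtube",
              "product_hunt", "skill", "blog", "paper", "event", "all"]
FEED_URLS = {name: f"https://www.agenticbrew.ai/feed/{name}.xml" for name in FEED_NAMES}
```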
Usage

/ai-news-radar [feed] [--limit N] [--query KEYWORD] [--json]

- feed (optional, default news): one of news, twitter, github, reddit, youtube, product_hunt, skill, blog, paper, event, all
- --limit N (optional, default 20): max items to return
- --query KEYWORD (optional): case-insensitive substring filter over title + description
- --json (optional): emit JSON instead of markdown
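A few illustrative invocations, using only the flags documented above:

```
/ai-news-radar                            # no args: run the interactive flow below
/ai-news-radar paper --limit 10           # ten most recent research papers
/ai-news-radar all --query agent --json   # keyword-filtered union of all feeds, as JSON
```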
Default interactive flow (no args, or vague request)

This skill covers a lot of ground: 11 feeds spanning news, social, papers, events, and more. If the user invokes it without specifying a feed (e.g., "show me what's new on Agentic Brew", "give me today's AI digest"), do NOT silently default to one feed. Instead, before fetching anything:
Ask the user which categories they want. Use the host agent's question UI (in Claude Code: AskUserQuestion with multiSelect: true; a payload sketch follows the three questions below) so the user can pick any subset of:
news, twitter, github, reddit, youtube, product_hunt, skill, blog, paper, event
Plus an all shortcut. Show a one-line description of each so the user knows what they're picking. If the user says "everything" or "all", treat it as all.
Ask the delivery frequency. Single-select:
- once: fetch immediately and return the result.
- daily: fetch now AND propose setting up a recurring task. In Claude Code, suggest the /schedule skill (cron) or /loop (interval). For other host agents, surface their equivalent or tell the user how to re-invoke.
- weekly: same idea, weekly cadence.
Ask how much detail to include per item. Single-select:
- headlines: title only. Compact list, just "what happened."
- summary: title + the AI-generated summary / overview / engagement stats (whichever the feed provides) + the source link. The default Agentic Brew item shape.
- detailed: title + full description (no truncation) + source link + any content:encoded inner content (e.g., the tweet list for twitter, overview bullets for news) + the tags.
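A sketch of the three questions as one AskUserQuestion payload. Claude Code's tool does accept per-question options and a multiSelect flag, but treat the exact field names and option shapes below as assumptions and adapt them to the host agent:

```python
# Hypothetical AskUserQuestion payload; field names and shapes are assumptions.
questions = [
    {
        "header": "Feeds",
        "question": "Which Agentic Brew categories do you want?",
        "multiSelect": True,  # user may pick any subset of the categories
        "options": [
            {"label": "news", "description": "Synthesized news clusters"},
            {"label": "paper", "description": "Research papers with AI summaries"},
            # ...one option per remaining feed, plus the shortcut:
            {"label": "all", "description": "Union of every feed"},
        ],
    },
    {
        "header": "Frequency",
        "question": "Fetch once, or on a schedule?",
        "multiSelect": False,
        "options": [
            {"label": "once", "description": "Fetch immediately"},
            {"label": "daily", "description": "Fetch now + propose a daily task"},
            {"label": "weekly", "description": "Fetch now + propose a weekly task"},
        ],
    },
    {
        "header": "Detail",
        "question": "How much detail per item?",
        "multiSelect": False,
        "options": [
            {"label": "headlines", "description": "Title only"},
            {"label": "summary", "description": "Title + summary + source link"},
            {"label": "detailed", "description": "Full description, inner content, tags"},
        ],
    },
]
```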
To apply the choice: fetch with --json internally, then format the items at the chosen detail level. Do NOT pass --limit so low that you lose information the user asked for; --limit controls only how many items come back, not how deep each one goes.
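A minimal rendering sketch under that contract, where item is one entry from the parsed JSON array and level is the user's choice. The helper name render_item is illustrative, not part of the skill:

```python
# Illustrative helper: render one parsed feed item at the chosen detail level.
def render_item(item: dict, level: str) -> str:
    title, link = item["title"], item["link"]
    if level == "headlines":
        return f"- {title}"
    if level == "summary":
        return f"- {title} — {item.get('description', '')} ({link})"
    # "detailed": full description plus tags, no truncation.
    lines = [f"- {title}", f"  {item.get('description', '')}", f"  {link}"]
    tags = ", ".join(item.get("categories", []))
    if tags:
        lines.append(f"  tags: {tags}")
    return "\n".join(lines)
```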
Once the user has answered all three, fetch the selected feeds in parallel and present a single combined report grouped by category, formatted at the chosen detail level. If they chose daily/weekly, ALSO offer to set up the recurring schedule before exiting; don't silently leave it as a one-shot.
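One stdlib way to fetch several feeds concurrently. Here fetch_feed stands in for the fetch + parse one-liner further down; the function names are illustrative:

```python
# Fetch multiple feeds in parallel, keyed by category for the combined report.
from concurrent.futures import ThreadPoolExecutor

def fetch_all(feeds: list[str], fetch_feed, limit: int = 20) -> dict:
    # fetch_feed(name, limit) -> list of item dicts (see the one-liner below).
    with ThreadPoolExecutor(max_workers=len(feeds) or 1) as pool:
        return dict(pool.map(lambda name: (name, fetch_feed(name, limit)), feeds))
```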
If the user provides explicit args (e.g., /ai-news-radar news --limit 5), skip the questions entirely and execute directly per the Usage section.
Steps (direct invocation)

1. Resolve the feed URL from the chosen feed name. If the argument is invalid, abort and tell the user the valid options.
2. Run the fetch + parse one-liner below. It uses the Python stdlib only (urllib, xml.etree); no extra installs.
3. Print the result. Default output is a markdown list (`- title — pubDate · description`). With --json, print a JSON array of {title, link, description, pub_date, categories}.

Fetch + parse one-liner
Substitute FEED, LIMIT, QUERY, an
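A minimal stdlib sketch of such a fetch + parse script, expanded over multiple lines for readability, with FEED, LIMIT, and QUERY left as the placeholders named above. The exact one-liner shipped with the skill may differ:

```python
# Sketch: fetch an Agentic Brew RSS feed and emit markdown or JSON (stdlib only).
import json
import urllib.request
import xml.etree.ElementTree as ET

FEED, LIMIT, QUERY, AS_JSON = "news", 20, "", False  # substitute per invocation

url = f"https://www.agenticbrew.ai/feed/{FEED}.xml"
with urllib.request.urlopen(url, timeout=30) as resp:
    root = ET.fromstring(resp.read())

items = []
for it in root.iter("item"):
    item = {
        "title": it.findtext("title", "").strip(),
        "link": it.findtext("link", "").strip(),
        "description": it.findtext("description", "").strip(),
        "pub_date": it.findtext("pubDate", "").strip(),
        "categories": [c.text or "" for c in it.findall("category")],
    }
    # --query: case-insensitive substring filter over title + description.
    haystack = (item["title"] + " " + item["description"]).lower()
    if QUERY and QUERY.lower() not in haystack:
        continue
    items.append(item)
    if len(items) >= LIMIT:
        break

if AS_JSON:
    print(json.dumps(items, ensure_ascii=False, indent=2))
else:
    for i in items:
        print(f"- {i['title']} — {i['pub_date']} · {i['description']}")
```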