📦 🔍 惠迈 Smart Search
v1.1.0 Web search and content extraction using the Tavily Search/Extract/Research APIs (Bearer auth). Use when you need web results (general/news/finance), date/topic/d...
Tavily (Web Search / Extract / Research)

Prereqs

Ensure TAVILY_API_KEY is set in the Hermes environment (commonly ~/.hermes/.env). Do not hardcode or paste API keys into chat logs. See references/bp-api-key-management.md.

Security Notes

The bundled CLI (scripts/tavily.py) reads only TAVILY_API_KEY from the environment and only sends requests to https://api.tavily.com. Prefer search-then-extract over include_raw_content on search to keep outputs small and reduce accidental data exposure.

Quick Reference
Use the terminal tool to run the bundled CLI script (prints JSON). SKILL_DIR is the directory containing this SKILL.md file.
# Search (general)
python3 SKILL_DIR/scripts/tavily.py search --query "latest OpenAI API changes" --max-results 5

# Search (news) with recency filter
python3 SKILL_DIR/scripts/tavily.py search --query "latest OpenAI API changes" --topic news --time-range week --max-results 5

# High-precision search (more cost/latency)
python3 SKILL_DIR/scripts/tavily.py search --query "OpenAI API rate limits March 2026" --search-depth advanced --chunks-per-source 3 --max-results 5

# Search + answer (still cite URLs from results)
python3 SKILL_DIR/scripts/tavily.py search --query "What is X?" --include-answer basic --max-results 5

# Extract (targeted chunks; prefer this over include_raw_content on search)
python3 SKILL_DIR/scripts/tavily.py extract --url "https://example.com" --query "pricing" --chunks-per-source 3 --format markdown

# Research (creates a task + polls until complete)
python3 SKILL_DIR/scripts/tavily.py research --input "Summarize the EU AI Act enforcement timeline. Provide numbered citations." --model auto --citation-format numbered --max-wait-seconds 180
Use the returned results[].url fields as citations/sources in your final answer.
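The citation step above can be sketched in Python: filter results[] by score and number the surviving URLs. The sample response literal and the score threshold are illustrative only, not actual CLI output; `numbered_citations` is a hypothetical helper, not part of the bundled script.

```python
import json

# Illustrative sample shaped like a Tavily search response (results[].url);
# real output comes from the CLI, this literal is only for demonstration.
sample = json.loads("""
{"results": [
  {"title": "OpenAI API changelog", "url": "https://example.com/a", "score": 0.91},
  {"title": "Rate limit guide", "url": "https://example.com/b", "score": 0.42}
]}
""")

def numbered_citations(response, min_score=0.5):
    """Keep results at or above min_score and format '[n] Title - URL' lines."""
    kept = [r for r in response.get("results", []) if r.get("score", 0) >= min_score]
    return [f"[{i}] {r['title']} - {r['url']}" for i, r in enumerate(kept, 1)]

print("\n".join(numbered_citations(sample)))
```

Filtering by score before citing keeps low-confidence hits out of the final answer, in line with the score + domain-trust filtering recommended in the Procedure.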
No-Script Option (curl)
Use Tavily directly via curl (same endpoints, no bundled script):
curl -s "https://api.tavily.com/search" \
  -H "Authorization: Bearer $TAVILY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query":"latest OpenAI API changes","topic":"news","time_range":"week","max_results":5}'
curl -s "https://api.tavily.com/extract" \
  -H "Authorization: Bearer $TAVILY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"urls":"https://example.com","query":"pricing","chunks_per_source":3,"extract_depth":"basic","format":"markdown"}'
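The same calls can be made from Python with only the standard library. This is a minimal sketch mirroring the curl examples (Bearer auth, JSON body); it assumes TAVILY_API_KEY is set in the environment, and `build_request` is a hypothetical helper, not part of the bundled CLI.

```python
import json
import os
import urllib.request

API_BASE = "https://api.tavily.com"  # same endpoints as the curl examples

def build_request(endpoint, payload):
    """Build the POST request the curl examples send: Bearer auth + JSON body."""
    return urllib.request.Request(
        f"{API_BASE}/{endpoint}",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['TAVILY_API_KEY']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request (commented out to avoid a live call):
# req = build_request("search", {"query": "latest OpenAI API changes", "max_results": 5})
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Reading the key from the environment at call time keeps it out of source files and chat logs, per the API-key guidance above.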
Procedure

1. Turn the user request into a focused search query (keep it short, ideally under ~400 chars). Split multi-part questions into 2-4 sub-queries.
2. Choose topic:
   - general for most searches
   - news for current events (prefer also setting time_range or a date range)
   - finance for market/finance content
3. Choose search_depth: start with basic (1 credit) unless you need higher precision. Use advanced (2 credits) for high-precision queries; use chunks_per_source to control snippet volume.
4. Keep max_results small (default 5) and filter by score + domain trust.
5. For primary text, run extract on 1-3 top URLs: provide extract --query ... --chunks-per-source N to avoid dumping full pages into context.
6. For synthesis across multiple subtopics with citations, run research and poll until status=completed.

Pitfalls

- include_raw_content on search can explode output size; prefer the two-step flow: search, then extract.
- auto_parameters can silently pick search_depth=advanced (2 credits). Set --search-depth explicitly when you care about cost.
- exact_match is restrictive; wrap the phrase in quotes inside --query and expect fewer results.
- country boosting is only available for topic=general.
- On failures, keep the request_id from responses for support/debugging.

Verification

Check credits/limits:

python3 SKILL_DIR/scripts/tavily.py usage

Add --include-usage on search/extract if you want per-request usage info.

References

references/search.md
references/extract.md
references/research.md
references/research-get.md
references/bp-search.md
references/bp-extract.md
references/bp-research.md
references/bp-api-key-management.md
references/usage.md
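The poll-until-completed step of the research flow can be sketched as a generic loop with a deadline. `get_status` below is a hypothetical callable standing in for whatever fetches the research task's status field; the "failed" state and the default intervals are illustrative assumptions, not documented CLI behavior.

```python
import time

def poll_until_complete(get_status, timeout_s=180, interval_s=2.0):
    """Call get_status() until it returns 'completed' or the deadline passes.

    get_status is a placeholder for the actual status fetch; this sketch only
    shows the polling shape (bounded wait, early exit on terminal states).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status == "completed":
            return True
        if status == "failed":  # assumed terminal state, bail out early
            raise RuntimeError("research task failed")
        time.sleep(interval_s)
    raise TimeoutError(f"task not completed within {timeout_s}s")
```

Using time.monotonic() for the deadline makes the loop immune to wall-clock adjustments, and the timeout mirrors the --max-wait-seconds flag on the research command.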