Connect to enterprise SaaS tools through the Venn platform REST API.
Setup
This skill is gated on VENN_API_KEY — it won't appear until the key is set.
- Add it to the OpenClaw .env file:
echo 'VENN_API_KEY=your-api-key-here' >> ~/.openclaw/.env
- Restart the gateway (picks up the new env on start):
openclaw gateway restart
Or, for zero-downtime reload without restart:
openclaw secrets reload
Alternatively, use the interactive secrets helper:
openclaw secrets configure --skip-provider-setup
Sandboxed agents: The .env file injects into the host process only. For sandboxed (Docker) sessions, also add VENN_API_KEY to agents.defaults.sandbox.docker.env in openclaw.json, or bake it into your custom sandbox image.
Configuration
VENN_API_KEY (required) — your Venn API key
VENN_API_URL (optional) — defaults to https://app.venn.ai/api/tooliq
Request Format
All requests use POST with JSON. Examples below use this shorthand:
# Full form (shown once):
VENN_URL="${VENN_API_URL:-https://app.venn.ai/api/tooliq}"
curl -s -X POST "${VENN_URL}/tools/search" \
-H "Authorization: Bearer ${VENN_API_KEY}" \
-H "Content-Type: application/json" \
-d '{"query": "..."}'

# Shorthand (used throughout):
# POST /tools/search {"query": "..."}
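The full form above can be wrapped once and reused. A minimal Python sketch (the `venn_request` helper is hypothetical; it only builds the request triple, so pair it with any HTTP client such as `requests`):

```python
import os

def venn_request(path, payload):
    """Build the (url, headers, body) triple for a Venn API POST,
    mirroring the curl full form above. Hypothetical helper."""
    base = os.environ.get("VENN_API_URL", "https://app.venn.ai/api/tooliq")
    url = f"{base.rstrip('/')}/{path.lstrip('/')}"
    headers = {
        "Authorization": f"Bearer {os.environ['VENN_API_KEY']}",
        "Content-Type": "application/json",
    }
    return url, headers, payload

# The shorthand "POST /tools/search {...}" then expands to:
# url, headers, body = venn_request("/tools/search", {"query": "jira search issues"})
# requests.post(url, headers=headers, json=body)
```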
1. Discovery
List connected servers
# POST /tools/help
{"action": "list_servers"}
Returns result.servers[] with server_id, name, and connection_status.
Other help actions:
getting_started — onboarding guidance
connector_help — info on connectors (pass server_id for a specific one)
auth_helper — OAuth re-auth URL for a disconnected server (requires server_id)
Search for tools
# POST /tools/search
{"query": "jira search issues", "limit": 10}
Returns result.candidates[] with server_id, tool_name, short_description, and (for top results) full inputSchema.
Additional parameters: offset, min_score (0–1, default 0.3), min_results (default 5), include_skills (default true).
Search strategy — broad first, narrow if needed:
- Start with the full task description in natural language (skills match better):
- "create a linear ticket and set it to in progress"
- "sync salesforce contacts to a google sheet"
- If no skill matches, decompose into one search per platform + action:
- "query salesforce contacts" + "create google sheets row"
- For simple single-platform tasks, search directly: "create salesforce lead"
Splitting rules:
- 1 search = 1 platform + 1 action (no single tool handles compound actions)
- Always include the app name in each query
- "recap"/"summarize" → search the platform, then present
- "cross-reference"/"compare" → search each platform, combine results
- "sync X to Y" → search the source, then the destination
If no results, try alternate names:
- "jira" → "atlassian"
- "google docs" → "google-drive" or "googledocs"
- "github" → "github-cloud"
Choosing from results:
- Read operations → prefer broad query tools over get-by-ID
- Create/update → look for specific create/update endpoints
- If a skill appears (type="skill"), prefer it over assembling tools
inputSchema is the source of truth for parameter names — NEVER guess
For platform-specific query syntax (JQL, SOQL, Gmail search), see references/query-syntax.md.
Describe a tool
# POST /tools/describe
{"tools": [{"server_id": "SERVER_ID", "tool_name": "TOOL_NAME"}]}
Supports batch requests. Returns result.results[] with inputSchema, description, and write_operation type.
2. Execution
Schema adherence (most common source of errors)
- Copy parameter names verbatim from inputSchema — casing matters
- Schema says maxResults → use maxResults, NOT max_results
- Match data types exactly:
  - "type": "string" → "10", NOT 10
  - "type": "integer" → 10, NOT "10"
  - "type": "array" → ["value"], NOT "value"
  - "type": "object" → {"key": "value"}, NOT "key=value"
- Include all required fields. Do not add fields not in the schema.
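The type rules above can be checked mechanically before every execute call. A sketch, assuming a standard JSON-Schema-shaped inputSchema with `properties` and `required` (the `check_args` helper name is hypothetical):

```python
# Map JSON-Schema type names to the Python types they accept.
JSON_TYPES = {
    "string": str,
    "integer": int,
    "number": (int, float),
    "boolean": bool,
    "array": list,
    "object": dict,
}

def check_args(tool_args, input_schema):
    """Return a list of violations: wrong type, missing required
    field, or field not present in the schema."""
    props = input_schema.get("properties", {})
    errors = []
    for name in input_schema.get("required", []):
        if name not in tool_args:
            errors.append(f"missing required field: {name}")
    for name, value in tool_args.items():
        if name not in props:
            errors.append(f"field not in schema: {name}")
            continue
        declared = props[name].get("type")
        expected = JSON_TYPES.get(declared)
        if expected is None:
            continue
        # bool is a subclass of int in Python; reject True/False for integer
        if declared == "integer" and isinstance(value, bool):
            errors.append(f"{name}: expected integer")
        elif not isinstance(value, expected):
            errors.append(f"{name}: expected {declared}")
    return errors
```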
Execute a single tool
# POST /tools/execute
{"server_id": "SERVER_ID", "tool_name": "TOOL_NAME", "tool_args": {...}}
Translating user intent into values (infer rather than ask):
- "recent tickets" → a reasonable date range (e.g., last 7 days)
- "my emails" → userId: "me"
- "the main channel" → search for it by name first
- "current sprint" → the active sprint
This applies to values only — parameter names and types must come from inputSchema.
Data integrity: NEVER fabricate data. Only present what appears in actual responses.
Handling links: For create/edit operations, surface clickable URLs from fields like url, link, href, web_url, permalink, html_url. Present as Resource Name.
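The link-surfacing rule can be sketched as a small lookup over the common field names (the `extract_link` helper is hypothetical; real responses may nest these fields deeper):

```python
# Common link field names, checked in priority order.
LINK_FIELDS = ("url", "link", "href", "web_url", "permalink", "html_url")

def extract_link(resource):
    """Return the first clickable URL found in a response dict,
    or None if no link field is present."""
    for field in LINK_FIELDS:
        value = resource.get(field)
        if isinstance(value, str) and value.startswith("http"):
            return value
    return None
```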
Execute a workflow (multi-step)
Chain multiple tool calls in a Python sandbox:
# POST /tools/execute-workflow
{
"code": "results = call_tool(\"atlassian\", \"searchByJQL\", jql=\"assignee = currentUser() AND status != Done\")\nreturn [{\"key\": i[\"key\"], \"summary\": i[\"fields\"][\"summary\"]} for i in results.get(\"issues\", [])]",
"timeout": 180
}
When to use workflows:
- Multiple tool calls in sequence
- Parallel execution across services
- Data processing, iteration, or transformation
Code rules:
- Follow the schema adherence rules above
- Write flat, inline code — no helper functions
- Code must return a value; extract only the needed fields
- Check for errors: if isinstance(result, dict) and "error" in result: ...
- For pagination, loop until no more nextPageToken/cursor
Available in sandbox:
call_tool(server_id, tool_name, kwargs) — sequential
async_call_tool(server_id, tool_name, kwargs) — for asyncio.gather()
call_skill(skill_id, inputs_dict) / async_call_skill(...) — call skills
- Modules: asyncio, json, datetime, math, re, collections, itertools, functools, operator, decimal, uuid, base64, hashlib
- No network, filesystem, or subprocess access
- No augmented assignment on subscripts: use d[k] = d[k] + 1, NOT d[k] += 1
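The pagination rule above can be sketched as flat, workflow-style code. The stub and the `listItems`/`pageToken` names are hypothetical; inside the real sandbox, `call_tool` is provided for you:

```python
# Stub standing in for the sandbox-provided call_tool(); returns two
# pages of fake results so the loop below can be exercised locally.
_pages = [
    {"items": [1, 2], "nextPageToken": "p2"},
    {"items": [3], "nextPageToken": None},
]
def call_tool(server_id, tool_name, **kwargs):
    return _pages[1] if kwargs.get("pageToken") == "p2" else _pages[0]

# Flat pagination loop, as it would appear inside a workflow body:
items = []
token = None
while True:
    result = call_tool("example", "listItems", pageToken=token)
    if isinstance(result, dict) and "error" in result:
        break  # per the error-check rule above
    items.extend(result.get("items", []))
    token = result.get("nextPageToken")
    if not token:
        break
# items == [1, 2, 3]
```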
3. Write Operation Confirmation
Write/delete operations return an audit response instead of executing. To proceed:
- Show the operation summary to the user and wait for explicit approval
- Get a confirmation token (expires in 60s):
# POST /tools/confirm
{"server_id": "SERVER_ID", "tool_name": "TOOL_NAME"}
# POST /tools/execute
{"server_id": "...", "tool_name": "...", "tool_args": {...}, "confirmed": true, "confirmation_token": "TOKEN"}
Never call confirm without the user's typed approval ("yes", "confirm", "proceed").
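The confirm-then-execute sequence can be sketched as below. The `post` transport is caller-supplied, and the assumption that the confirm response carries a top-level `confirmation_token` field is hypothetical; call this only after the user's typed approval:

```python
def confirm_and_execute(post, server_id, tool_name, tool_args):
    """Two-step write flow: fetch a confirmation token, then re-issue
    the execute call with confirmed=True and the token attached.
    `post(path, payload)` is any JSON-POST transport."""
    token = post("/tools/confirm", {
        "server_id": server_id,
        "tool_name": tool_name,
    })["confirmation_token"]  # assumed response field name
    return post("/tools/execute", {
        "server_id": server_id,
        "tool_name": tool_name,
        "tool_args": tool_args,
        "confirmed": True,
        "confirmation_token": token,
    })
```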
4. Skills
Skills are pre-built workflow patterns in search results with type: "skill". Prefer skills over assembling individual tools.
Executable skills
Marked executable: true. Run step-by-step:
# POST /tools/execute
{"tool_name": "SKILL_ID", "tool_args": {"step_id": "FIRST_STEP", "inputs": {...}}}
Each step returns outputs and next. If next is not null, read next.reasoning, fill placeholders, make the next call.
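The step loop can be sketched as below. The document specifies `outputs`, `next`, and `next.reasoning`; the `next.step_id`/`next.inputs` field names and the pass-through placeholder filling are assumptions:

```python
def run_skill(execute, skill_id, first_step, inputs):
    """Drive an executable skill step-by-step until next is null.
    `execute(payload)` posts the payload to /tools/execute (stubbed
    in tests); returns the outputs collected from every step."""
    payload = {"tool_name": skill_id,
               "tool_args": {"step_id": first_step, "inputs": inputs}}
    outputs = []
    while True:
        result = execute(payload)
        outputs.append(result["outputs"])
        nxt = result.get("next")
        if not nxt:
            return outputs
        # A real agent reads nxt["reasoning"] and fills placeholders;
        # this sketch passes the suggested inputs through unchanged.
        payload = {"tool_name": skill_id,
                   "tool_args": {"step_id": nxt["step_id"],
                                 "inputs": nxt.get("inputs", {})}}
```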
Guidance skills
For skills without executable: true, describe to get the pattern:
# POST /tools/describe
{"tools": [{"tool_name": "SKILL_NAME"}]}
Returns tools_involved, all_servers_connected, disconnected_servers, and step-by-step content. If all_servers_connected is false, use help(action="auth_helper") first.
5. Error Recovery
If a tool call fails, debug and retry — do not report failure immediately.
| Error | Action |
|---|---|
| Schema/parameter error | Re-read inputSchema, fix names and types, retry |
| 404 / "not found" | Wrong ID or tool; search for correct ID |
| Server not connected / 401 | Call help(action="auth_helper", server_id="...") |
| Empty results | Try fuzzy variations, broader date ranges |
| Same error twice | Try different approach (different tool/parameters) |
| Workflow fails twice | Fall back to sequential execute calls |
Only report failure after at least three different approaches have been tried.
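The three-approach rule can be sketched as a small harness (the `try_approaches` helper is hypothetical; each approach is a zero-argument callable wrapping one tool call or variation):

```python
def try_approaches(approaches):
    """Run alternative approaches in order and return the first
    non-error result; report failure only after all have been tried.
    An approach fails by raising or by returning a dict with "error"."""
    errors = []
    for attempt in approaches:
        try:
            result = attempt()
        except Exception as exc:
            errors.append(str(exc))
            continue
        if isinstance(result, dict) and "error" in result:
            errors.append(result["error"])
            continue
        return result
    raise RuntimeError(f"all {len(approaches)} approaches failed: {errors}")
```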
Guardrails
- Start with list_servers or search to discover what's connected
- Always have inputSchema before executing (from search or describe)
- Match parameter names and types exactly
- Never fabricate data — only present actual responses
- Never execute writes without explicit user approval
- Prefer workflows for multi-step operations
- Prefer skills over assembling individual tools
- Pass session_id and user_intent on calls for tracing (the API generates one if omitted)