
Super Personalised Search — Skill Tool

v1.0.0

[Auto-translated] Build, debug, and extend the Connectify founder network platform (React/Vite frontend + Express backend + Redis cache + OpenAI ranking + Apify ingesti...

0 · 108 · 0 current · 0 cumulative
by @deonmenezes · MIT-0
Download skill package
License
MIT-0
Last updated
2026/3/26
Security scan
VirusTotal
Harmless
View report
OpenClaw
Suspicious
medium confidence
The skill is plausibly what it claims (a local dev kit for a Connectify app) but there are metadata inconsistencies and secret/third-party access expectations that the registry listing does not declare — review before installing or running with real data.
Assessment recommendations
This repo looks like a legitimate local development project for the Connectify app, but be cautious before running it with real secrets or production data. Specific points to check before you install/run: (1) Confirm the publisher/source (registry metadata says unknown but package.json points to a GitHub repo) and only use code from a trusted origin. (2) The SKILL.md expects OPENAI_API_KEY, REDIS_URL, and APIFY_TOKEN — provide these only in a local/isolated environment and never commit them. (3)...
Detailed analysis
Purpose and capabilities
The code and SKILL.md align with the described purpose: a React + Express app that uses Redis, OpenAI, and (optionally) Apify. However the registry metadata claims no required environment variables or homepage/source while SKILL.md and package.json clearly require OPENAI_API_KEY, REDIS_URL, and reference an APIFY_TOKEN and a GitHub repo. The missing metadata declarations are an inconsistency (not necessarily malicious) that reduces transparency.
Instruction scope
SKILL.md instructions are narrowly scoped to local development of the repo: npm install, set .env, start Redis, run dev/build commands, and where to edit scoring/ingestion code. The instructions do not ask the agent to read unrelated system files or exfiltrate data to unexpected endpoints. They do instruct creating an .env containing secrets (standard for this project).
Installation mechanism
There is no install spec (instruction-only), so nothing will be automatically downloaded by the platform installer. Running npm install locally will pull many dependencies (openai, apify, crawlee, redis, etc.) which is expected for this stack but means arbitrary third-party packages will be executed when you run the project — review dependencies before running in production.
Credential requirements
The SKILL.md and code require sensitive credentials (OPENAI_API_KEY, REDIS_URL, APIFY_TOKEN). Those are proportionate to the app's functionality (scoring via OpenAI, storing/querying via Redis, optional Apify ingestion) — but the registry metadata did not declare any required env vars, which is a transparency gap. Also note that connection records (personal data) are sent to OpenAI for scoring/action-generation and that Redis access allows reading/writing all connection keys. APIFY_TOKEN is presently unused (apify.js is a stub) but the dependency and instructions suggest future crawling capabilities; treat that token carefully.
Persistence and permissions
The skill is not force-included (always: false) and does not request elevated platform privileges. It does persist data to Redis (saves connection and query-context keys) which is expected for its function. No code attempts to modify other skills or global agent config.
Pre-installation checklist
  1. Confirm the publisher/source (registry metadata says unknown but package.json points to a GitHub repo) and only use code from a trusted origin.
  2. The SKILL.md expects OPENAI_API_KEY, REDIS_URL, and APIFY_TOKEN — provide these only in a local/isolated environment and never commit them.
  3. Be aware that the app sends connection records to the OpenAI API for scoring/action suggestions; if those records contain sensitive PII, consider anonymizing them or using a policy that permits such transmission.
  4. Redis is used to store/read all connection:* keys — restrict access and avoid running this against a production Redis instance with other data.
  5. If you enable real Apify/crawling, audit the ingestion code and required tokens first. If anything is unclear, ask the skill author for an explicit manifest listing required env vars and the canonical source/repo before proceeding.
Security comes in layers; review the code before running it.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/3/26

Initial release for Connectify developer skill. - Guides setup, running, and modifying of the Connectify network platform (React/Vite frontend, Express backend, Redis cache, OpenAI ranking, Apify ingestion). - Details how to work with API endpoints (`/api/query`), chat UX, scoring logic, and connection ingestion. - Provides backend response contracts to preserve when making changes. - Documents environment variables and local development practices. - Includes tips for safely updating AI logic and ingesting real data. - Lists common mistakes to avoid during development.

● Harmless

Install command

Official: npx clawhub@latest install super-personal-search
Mirror (CN): npx clawhub@latest install super-personal-search --registry https://cn.clawhub-mirror.com

Skill documentation

Set up the project

  • Install dependencies:
   npm install
   
  • Create .env from .env.example and set:
  - OPENAI_API_KEY
  - REDIS_URL
  - APIFY_TOKEN
  - optional: OPENAI_MODEL, PORT
  • Start Redis before running the backend.
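Putting those variables together, a local `.env` might look like the sketch below. All values are placeholders; in particular, the `OPENAI_MODEL` value shown is only an example default, not one the repo is known to use.

```shell
# Required
OPENAI_API_KEY=sk-your-key-here
REDIS_URL=redis://localhost:6379
APIFY_TOKEN=apify_api_your-token-here

# Optional (example values; check .env.example for the repo's defaults)
OPENAI_MODEL=gpt-4o-mini
PORT=3001
```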

Run the app

Prefer single-service mode when validating full user flows (dashboard + chat + API):

npm run build
npm start

Open http://localhost:3001.

Use split mode only when focusing on one side:

  • Frontend only: npm run dev
  • Backend only: npm run dev:server

Use the file map

  • server.js: Express API, Redis seeding, /api/query, static hosting of dist/.
  • agent.js: OpenAI relevance scoring and follow-up action generation.
  • redis.js: Redis connection lifecycle, connection storage, query-context cache (30 min TTL).
  • apify.js: Connection ingestion adapter (currently placeholder dataset).
  • src/components/AIChatPanel.jsx: chat UX and /api/query client call.
  • src/data/placeholders.js: dashboard placeholder cards/lists/map seed data.
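The query-context cache that redis.js implements can be sketched roughly as follows. The key prefix and function names here are illustrative, and `client` is assumed to be a connected node-redis v4 instance passed in by the caller.

```javascript
// Illustrative sketch of a query-context cache with a 30-minute TTL,
// as the file map describes for redis.js. Not the repo's actual code.
const QUERY_TTL_SECONDS = 30 * 60; // 30-minute TTL

// Hypothetical key scheme; the real prefix may differ
function queryContextKey(sessionId) {
  return `query-context:${sessionId}`;
}

async function cacheQueryContext(client, sessionId, context) {
  // EX sets expiry in seconds, so stale contexts age out automatically
  await client.set(queryContextKey(sessionId), JSON.stringify(context), {
    EX: QUERY_TTL_SECONDS,
  });
}

async function getQueryContext(client, sessionId) {
  const raw = await client.get(queryContextKey(sessionId));
  return raw ? JSON.parse(raw) : null;
}
```

A cache hit here is what makes repeated `/api/query` requests return quickly, as the smoke-test section below expects.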

Preserve the backend response contract

Return this shape from /api/query:

{
  "results": [
    {
      "name": "string",
      "role": "string",
      "company": "string",
      "platforms": ["string"],
      "relevanceScore": 0,
      "reason": "string",
      "suggestedActions": ["string", "string"]
    }
  ]
}

If changing fields, update both server.js and src/components/AIChatPanel.jsx together.
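As a rough illustration of that contract, a handler could be shaped like this. `scoreConnections` is a stand-in stub for the real agent.js scoring, not the repo's actual code; only the response shape is taken from the docs above.

```javascript
// Stand-in scorer: the real implementation ranks Redis-stored
// connections via OpenAI in agent.js. This stub returns one
// correctly-shaped result so the contract is visible.
async function scoreConnections(query, sessionId) {
  return [{
    name: 'Ada Example',
    role: 'Founder',
    company: 'ExampleAI',
    platforms: ['linkedin'],
    relevanceScore: 87,
    reason: 'Matches query keywords',
    suggestedActions: ['Send intro message', 'Share a relevant article'],
  }];
}

// Plain async handler, mountable as app.post('/api/query', handleQuery)
async function handleQuery(req, res) {
  const { query, sessionId } = req.body ?? {};
  const results = await scoreConnections(query, sessionId);
  // If you add/remove fields here, update AIChatPanel.jsx in the same change
  res.json({ results });
}
```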

Implement real Apify ingestion

When replacing the stub in apify.js:

  • Keep output normalized to this connection schema:
- id, name, role, company, location, platforms, tags, lastInteraction, notes
  • Keep IDs stable and unique to prevent duplicate Redis records.
  • Return an array compatible with saveConnection(connection.id, connection).
  • Keep actor/network logic isolated in apify.js; avoid spreading Apify-specific code through server.js.
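A hypothetical normalizer following those rules might look like this. The raw field names (`fullName`, `jobTitle`, `profileUrl`) are assumptions about a typical actor payload, not the real actor's schema; only the output schema comes from the list above.

```javascript
// Map one raw Apify item onto the connection schema the docs require.
// Input field names are assumptions; adjust to your actor's output.
function normalizeConnection(raw) {
  return {
    // Stable, unique ID prevents duplicate Redis records on re-ingestion
    id: raw.profileUrl ?? `conn-${raw.fullName}-${raw.company}`,
    name: raw.fullName ?? '',
    role: raw.jobTitle ?? '',
    company: raw.company ?? '',
    location: raw.location ?? '',
    platforms: raw.platforms ?? ['linkedin'],
    tags: raw.tags ?? [],
    lastInteraction: raw.lastInteraction ?? null,
    notes: raw.notes ?? '',
  };
}

function normalizeDataset(items) {
  // Returns an array compatible with saveConnection(connection.id, connection)
  return items.map(normalizeConnection);
}
```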

Tune AI behavior safely

When editing agent.js:

  • Keep response_format: { type: 'json_object' }.
  • Keep strict parsing and fallback handling (safeJsonParse, bounded score 0-100).
  • Keep deterministic-ish scoring temperature low and action generation temperature moderate.
  • Preserve fallback actions in server.js if action generation fails.
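The parsing and bounding guards mentioned above might be sketched like this. The names mirror the docs (`safeJsonParse`, a score bounded to 0-100), but the bodies are illustrative rather than the repo's actual implementation.

```javascript
// Tolerant JSON parsing: malformed model output never crashes the request
function safeJsonParse(text, fallback = null) {
  try {
    return JSON.parse(text);
  } catch {
    return fallback;
  }
}

// Clamp a model-produced score into the documented 0-100 range
function boundScore(value) {
  const n = Number(value);
  if (Number.isNaN(n)) return 0; // non-numeric model output scores 0
  return Math.min(100, Math.max(0, Math.round(n)));
}
```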

Validate changes quickly

  • Build frontend:
   npm run build
   
  • Start server:
   npm start
   
  • Smoke test query endpoint:
   curl -X POST http://localhost:3001/api/query \
     -H "Content-Type: application/json" \
     -d "{\"query\":\"Who in my network works in AI and is based in SF?\",\"sessionId\":\"local-test-session\"}"
   
  • Confirm the response includes ranked results and cached repeat requests return quickly.

Watch for common pitfalls

  • npm run dev serves only the frontend; /api/query will not work there unless a proxy/backend is also configured.
  • server.js CORS currently allows http://localhost:3000; adjust if using different local origins.
  • redis.js uses keys('connection:*'); avoid very large production datasets without pagination/scans.
  • Do not commit secrets from .env or hardcode API tokens.
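For the keys('connection:*') pitfall, an incremental SCAN is the usual fix. The sketch below assumes a node-redis v4 client, where `scanIterator` accepts MATCH/COUNT options; adjust if the repo uses a different Redis client.

```javascript
// Replace the blocking KEYS call with an incremental SCAN so large
// datasets do not stall Redis. Assumes a node-redis v4 client.
async function listConnectionKeys(client) {
  const keys = [];
  // scanIterator walks the keyspace in COUNT-sized batches
  for await (const key of client.scanIterator({ MATCH: 'connection:*', COUNT: 100 })) {
    keys.push(key);
  }
  return keys;
}
```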
Data source: ClawHub · Chinese localization: 龙虾技能库