Job Search Tailor
v1: Daily job search + resume archetype matching skill. Searches LinkedIn for jobs matching your target roles and locations, deduplicates against previously seen listings, and automatically matches each job to the best-fit tailored resume archetype (or creates a new one on the fly). On first run, bootstraps config by asking for your resume, target roles, locations, and delivery preferences, then clusters your resume into 3–5 archetypes. Trigger phrases: "run job search", "find me jobs", "search for ML roles", "set up job search", "tailor my resume for jobs", "find jobs and match my resume", "job search".
job-search-tailor

You are a job search assistant. You help users find relevant job postings and match each posting to the best tailored resume archetype from their collection.

Refer to references/config-guide.md for config field documentation and references/archetypes-guide.md for archetype scoring details.
Step 0 — Detect mode

Run:

python3 ~/.OpenClaw/workspace/skills/job-search-tailor/scripts/load_config.py
If the exit code is non-zero, or the output contains "error": "config_not_found" → Flow A (First-run setup)
If the archetypes array is empty or missing → Flow A
Otherwise → Flow B (Ongoing search)

Flow A — First-run setup

A1. Gather user inputs
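A minimal sketch of the Step 0 routing logic, assuming load_config.py prints its config as JSON on stdout. The function name and signature are illustrative, not part of the skill's scripts:

```python
import json

def detect_mode(exit_code: int, stdout: str) -> str:
    """Route to Flow A (first-run setup) or Flow B (ongoing search)."""
    if exit_code != 0:
        return "A"
    try:
        config = json.loads(stdout)
    except json.JSONDecodeError:
        return "A"
    if config.get("error") == "config_not_found":
        return "A"
    if not config.get("archetypes"):  # empty or missing archetype list
        return "A"
    return "B"
```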
Ask the user (one message, list all questions):

Paste your resume text, or provide the file path to your resume
What job titles are you targeting? (e.g. Data Scientist, ML Engineer)
What locations? (e.g. "remote US", "New York, NY")
Delivery preference: a Telegram chat ID (format: telegram:CHAT_ID), or just print results here?
Enable Google Docs integration for resume hosting? (default: no — v1 uses local files)

Wait for the user's answers before proceeding.
A2. Bootstrap search

For each combination of (role × location), run ONE web_search:

Query format: site:linkedin.com/jobs "{role}" "{location}" job posting
Collect the top 5–8 result URLs.

For each result URL, web_fetch the full page to extract:

Job title, company, location, salary (if shown), full job description

A3. Create archetypes
Analyze the user's resume text alongside 3–5 of the fetched job descriptions. Identify 3–5 natural clusters of role types that appear in the JDs and align with the user's background. Common clusters for technical candidates:

mle — ML Engineer / MLOps / model deployment
ds — Data Scientist / analytics / experimentation
applied-sci — Applied Scientist / research engineering
ai-eng — AI Engineer / LLM / generative AI
swe — Software Engineer / backend / platform
For each archetype cluster:
Write a tailored resume markdown file to ~/.job-search/archetypes/<name>.md
Use the user's actual resume content, reordered and reworded for that archetype
Lead with the most relevant skills and experience for that role type
Keep formatting clean: # Name, ## Summary, ## Experience, ## Skills, ## Education
Call save_archetype.py to register it:

python3 ~/.OpenClaw/workspace/skills/job-search-tailor/scripts/save_archetype.py \
  --name "<name>" \
  --keywords "<comma-separated keywords>" \
  --resume-path "~/.job-search/archetypes/<name>.md"
A4. Write config.json
Create ~/.job-search/config.json with these fields (fill in from user answers):

{
  "target_roles": ["<role1>", "<role2>"],
  "locations": ["<location1>", "<location2>"],
  "job_boards": ["linkedin"],
  "dedup_window_days": 30,
  "max_per_company": 2,
  "target_count": 8,
  "tracking_file": "~/.job-search/memory/shared_jobs.json",
  "archetypes_dir": "~/.job-search/archetypes/",
  "archetype_match_threshold": 0.5,
  "google_docs_enabled": false,
  "delivery_channel": "",
  "archetypes": []
}

Create the tracking file if missing: ~/.job-search/memory/shared_jobs.json → []
A5. Deliver initial digest
Proceed directly to Flow B Step B3 using the URLs already fetched in A2.
Flow B — Ongoing search

B1. Load config

python3 ~/.OpenClaw/workspace/skills/job-search-tailor/scripts/load_config.py

Parse the JSON output. Extract: target_roles, locations, archetypes, tracking_file, dedup_window_days, target_count, archetype_match_threshold.

B2. Search for jobs

For each (role × location) pair, run:

web_search: site:linkedin.com/jobs "{role}" "{location}" job posting

Collect all result URLs. Aim for target_count total unique URLs.
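The B2 query construction (same format as the A2 bootstrap search) is a simple cross-product of roles and locations; the `build_queries` helper below is illustrative, not part of the skill's scripts:

```python
def build_queries(roles: list[str], locations: list[str]) -> list[str]:
    """One site-restricted LinkedIn query per (role x location) pair."""
    return [
        f'site:linkedin.com/jobs "{role}" "{location}" job posting'
        for role in roles
        for location in locations
    ]
```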
B3. Deduplicate
Join all collected URLs into a comma-separated string. Call:
python3 ~/.OpenClaw/workspace/skills/job-search-tailor/scripts/update_tracking.py \
  --urls "<comma-separated URLs>" \
  --tracking-file <tracking_file> \
  --window-days <dedup_window_days>

Parse stdout as a JSON array — these are the new URLs only.

If the array is empty, report "No new jobs found since last search." and stop.
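A minimal sketch of the dedup-window filtering that update_tracking.py performs, assuming the tracking file is a JSON list of {"url", "first_seen"} records (the real script's storage format may differ):

```python
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

def filter_new_urls(urls, tracking_path, window_days=30):
    """Return only URLs not seen within the window; record them in the tracking file."""
    path = Path(tracking_path).expanduser()
    seen = json.loads(path.read_text()) if path.exists() else []
    now = datetime.now(timezone.utc)
    cutoff = now - timedelta(days=window_days)
    # Drop records that have aged out of the dedup window.
    recent = [r for r in seen if datetime.fromisoformat(r["first_seen"]) >= cutoff]
    recent_urls = {r["url"] for r in recent}
    new = [u for u in urls if u not in recent_urls]
    recent.extend({"url": u, "first_seen": now.isoformat()} for u in new)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(recent, indent=2))
    return new
```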
B4. Fetch and score each new job
For each new URL:
web_fetch the page — extract job title, company, location, salary, description
Score against each archetype using keyword overlap:
Lowercase the job title + the first 200 chars of the description
For each archetype: count how many of its keywords appear in that text
Score = 1.0 if ANY keyword from that archetype appears in the text, 0.0 if none
Pick the archetype with the highest score
If the best score ≥ archetype_match_threshold: attach that archetype's resume_path (and resume_url if set)
If the best score < threshold (no good match), create a new archetype on the fly:
a. Name it after the dominant role type in the title (slugify: lowercase, hyphens)
b. Write tailored resume markdown to ~/.job-search/archetypes/<name>.md
c. Extract 4–6 keywords from the job title and description
d. Call: python3 ~/.OpenClaw/workspace/skills/job-search-tailor/scripts/save_archetype.py \
  --name "
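The B4 keyword-overlap scoring can be sketched as follows. The archetype shape ({"name", "keywords"}) is an assumption; since v1 scoring is binary, ties go to the first-listed archetype here:

```python
def score_job(title, description, archetypes, threshold=0.5):
    """Return (best_archetype, score); score is binary in v1."""
    text = (title + " " + description[:200]).lower()
    best, best_score = None, 0.0
    for archetype in archetypes:
        hits = sum(1 for kw in archetype["keywords"] if kw.lower() in text)
        score = 1.0 if hits > 0 else 0.0  # v1: any keyword match counts fully
        if score > best_score:
            best, best_score = archetype, score
    if best is not None and best_score >= threshold:
        return best, best_score
    return None, best_score  # no good match -> create a new archetype on the fly
```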