
Faya Session Memory — Skill Tool

v1.0.0

Persistent session memory system that prevents knowledge loss after context compaction. Converts session transcripts to searchable Markdown, builds an...

License
MIT-0
Last updated
2026/2/26
Security scan
VirusTotal: Harmless
OpenClaw: Safe (high confidence)
The skill's code and instructions are consistent with a local session-to-markdown and glossary builder — it reads and indexes local OpenClaw session files and writes searchable Markdown, and does not request credentials or contact external endpoints.
Review recommendations
This skill appears to do what it says: convert local OpenClaw session logs into Markdown, build a searchable glossary, and suggest cron-based updates. Before installing/running it, consider the following: - Review the scripts locally — they read and write files under your home directory (~/.openclaw/...). If your session logs contain very sensitive data (passwords, private keys, personal PII), decide whether you want those written into new Markdown transcripts or included in vector search index...
Detailed analysis
Purpose and capabilities
Name/description promise (persistent session memory, searchable Markdown, glossary, cron-based updating) matches the included scripts: session-to-memory.py converts session JSONL to Markdown, build-glossary.py builds a glossary index, and cron-optimizer.py suggests cron prompt improvements. All required operations (reading session logs, writing memory/*.md, generating reports) are present and appropriate for the stated purpose. Minor discrepancy: SKILL.md claims broader session-scanning semantics (e.g., scanning ~/.openclaw/agents/*/sessions/ and supporting --agent) while session-to-memory.py uses a fixed default (~/.openclaw/agents/main/sessions) and does not implement an --agent flag; this is a documentation/instruction mismatch rather than malicious functionality.
Instruction scope
Runtime instructions tell the user/agent to run the shipped scripts and to create cron jobs to run them periodically. The instructions reference the right files and behavior, but include some inaccurate CLI/docs details: SKILL.md documents an --agent option and scanning wildcard paths that the converter script does not implement. The scripts read and write only local files under ~/.openclaw (sessions, workspace, cron JSON) and do not attempt to read other unrelated system paths or send data externally.
Installation mechanism
No install spec is provided (instruction-only skill with shipped scripts). Nothing is downloaded or executed from an external URL; scripts are plain Python files intended to be run locally. This is the lowest-risk install posture.
Credential requirements
The skill declares no required environment variables or credentials. The code optionally respects a WORKSPACE env var for the glossary builder; otherwise it uses user-home ~/.openclaw paths. The scripts operate on local session and cron JSON files only — there are no requests for unrelated secrets or cloud credentials.
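The workspace fallback described above can be sketched as follows. This is a minimal illustration, not the shipped code; the env var name WORKSPACE comes from the analysis, and the default path mirrors the documented ~/.openclaw layout:

```python
import os
from pathlib import Path

def resolve_workspace() -> Path:
    """Use the WORKSPACE env var if set; fall back to ~/.openclaw/workspace."""
    default = Path.home() / ".openclaw" / "workspace"
    return Path(os.environ.get("WORKSPACE", default))
```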
Persistence and permissions
always is false and the skill does not attempt to enable itself, modify other skills, or write to global system configuration. It writes files under the user's ~/.openclaw/workspace/memory/ and ~/.openclaw/cron report locations, which is expected for a local memory/indexing tool. Cron jobs are suggested but not auto-installed.
Security comes in layers; review the code before running.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/2/24

Initial release: transcript converter, glossary builder, cron memory optimizer. Three-layer memory architecture for OpenClaw agents.

● Harmless

Install command

Official: npx clawhub@latest install faya-session-memory
Mirror:   npx clawhub@latest install faya-session-memory --registry https://cn.clawhub-mirror.com

Skill documentation

Solve the #1 problem with long-running AI agents: knowledge loss after context compaction.

The Problem

When sessions compact (summarize old messages to free context), specific details are lost: names, decisions, file paths, reasoning. The agent retains a summary but loses the ability to recall "What exactly did Annika say?" or "When did we decide to use v6 format?"

The Solution: Three-Layer Memory Architecture

Layer 1: MEMORY.md          — Curated long-term memory (human-edited)
Layer 2: SESSION-GLOSSAR.md — Auto-generated structured index (people/projects/decisions/timeline)
Layer 3: memory/sessions/   — Full session transcripts as searchable Markdown

All three layers live under memory/ and are automatically vectorized by OpenClaw's memory search, creating a navigational hierarchy: glossary finds the right session, session provides the details.

Setup (run once)

Step 1: Convert existing sessions to Markdown

python3 scripts/session-to-memory.py

This scans all JSONL session logs in ~/.openclaw/agents/*/sessions/ and converts them to memory/sessions/session-YYYY-MM-DD-HHMM-<session-id>.md. It truncates long assistant responses to 2KB, skips system messages, and tracks state to avoid re-processing.

Options:

  • --new — Only convert sessions not yet processed (for incremental runs)
  • --agent main — Specify agent ID (default: main)
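The conversion step above can be sketched as follows. This is a hedged, minimal illustration, not the shipped session-to-memory.py: it assumes each JSONL line is an object with role and content fields (the real script's schema and output format may differ), and applies the documented 2KB truncation and system-message skipping:

```python
import json

TRUNCATE_BYTES = 2048  # assistant replies are capped at 2KB per the docs

def session_to_markdown(jsonl_text: str) -> str:
    """Convert one JSONL session log to a Markdown transcript.

    Skips system messages and truncates long assistant responses,
    mirroring the behavior described above.
    """
    sections = []
    for raw in jsonl_text.splitlines():
        if not raw.strip():
            continue
        msg = json.loads(raw)
        role = msg.get("role")
        if role == "system":
            continue  # system messages are skipped
        content = msg.get("content", "")
        if role == "assistant" and len(content.encode()) > TRUNCATE_BYTES:
            content = content.encode()[:TRUNCATE_BYTES].decode(errors="ignore") + " [truncated]"
        sections.append(f"## {role}\n\n{content}\n")
    return "\n".join(sections)
```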

Step 2: Build the glossary

python3 scripts/build-glossary.py

Scans all session transcripts and builds memory/SESSION-GLOSSAR.md with:

  • People — Who was mentioned, in how many sessions, date ranges
  • Projects — Which projects discussed, with relevant topic tags
  • Topics — Categorized themes (Email Drafts, Website Build, Security, etc.)
  • Timeline — Per-day summary (session count, people, topics)
  • Decisions — Extracted decision-like statements with dates

Options:

  • --incremental — Only process new sessions (uses cached scan state)
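The People pass above (who was mentioned, in how many sessions, over what date range) can be sketched like this. It is an illustrative aggregation under assumed inputs, not the shipped build-glossary.py: transcripts are taken as (ISO-date, text) pairs, and known_people maps a lowercase alias to a display name as in the KNOWN_PEOPLE customization:

```python
from collections import defaultdict

def build_people_index(transcripts, known_people):
    """Per person: count sessions that mention them and track the date range.

    transcripts: iterable of (date_str, text) pairs (ISO dates sort lexically).
    known_people: {lowercase_alias: display_name}, as in KNOWN_PEOPLE.
    """
    index = defaultdict(lambda: {"sessions": 0, "first": None, "last": None})
    for date, text in transcripts:
        lowered = text.lower()
        for alias, display in known_people.items():
            if alias in lowered:
                entry = index[display]
                entry["sessions"] += 1
                entry["first"] = min(entry["first"] or date, date)
                entry["last"] = max(entry["last"] or date, date)
    return dict(index)
```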

Step 3: Set up cron jobs for auto-updates

Create two cron jobs (use a cheap model like Gemini Flash):

Job 1: Session sync + glossary rebuild (every 4-6 hours)

Task: Run python3 scripts/session-to-memory.py --new then
      python3 scripts/build-glossary.py --incremental.
      Report how many new sessions were converted and indexed.

Optional Job 2: Pre-compaction memory flush check. This is already built into AGENTS.md by default; just ensure the agent writes to memory/YYYY-MM-DD.md before each compaction.

Customizing Entity Detection

Edit scripts/build-glossary.py to add your own known people and projects:

KNOWN_PEOPLE = {
    "alice": "Alice Smith — Project Manager",
    "bob": "Bob Jones — CTO",
}

KNOWN_PROJECTS = {
    "website-redesign": "Website Redesign — Q1 Initiative",
    "api-migration": "API Migration — v2 to v3",
}

The glossary also detects topics via regex patterns. Add new patterns in the topic_patterns dict for your domain.
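A topic_patterns entry might look like the following. These keys and regexes are illustrative placeholders, not the set the shipped script defines:

```python
import re

# Illustrative patterns only; build-glossary.py ships its own set.
topic_patterns = {
    "Email Drafts": re.compile(r"\b(email|draft|newsletter)\b", re.I),
    "Security": re.compile(r"\b(vulnerab\w+|CVE-\d{4}-\d+|auth(entication)?)\b", re.I),
    "Infra": re.compile(r"\b(deploy(ment)?|kubernetes|terraform)\b", re.I),
}

def detect_topics(text: str) -> list[str]:
    """Return the topic labels whose pattern matches the transcript text."""
    return [topic for topic, pat in topic_patterns.items() if pat.search(text)]
```

Adding a domain-specific key/regex pair is enough for new transcripts to pick up the topic on the next full rebuild.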

How It Works With memory_search

Once set up, memory_search("Alice project decision") will find:

  • The glossary entry for Alice (which sessions she appears in)
  • The actual session transcript where the decision was discussed
  • Any MEMORY.md entry about Alice

This gives the agent a navigation layer (glossary) plus detail access (transcripts) — much better than either alone.

File Structure After Setup

memory/
├── MEMORY.md                    — Curated (you maintain this)
├── SESSION-GLOSSAR.md           — Auto-generated index
├── YYYY-MM-DD.md                — Daily notes
├── .glossary-state.json         — Glossary builder state
├── .glossary-scans.json         — Cached scan results
└── sessions/
    ├── .state.json              — Converter state
    ├── session-2026-01-15-0830-abc123.md
    ├── session-2026-01-15-1200-def456.md
    └── ...

Cron Memory Optimizer

Cron jobs run in isolated sessions with zero memory context. The optimizer analyzes your cron jobs and suggests memory-enhanced versions:

python3 scripts/cron-optimizer.py

This scans ~/.openclaw/cron/jobs.json, identifies jobs that would benefit from memory context, and generates memory/cron-optimization-report.md with before/after prompts and implementation guidance.

Example optimization:

Original: "Run daily research scout..."
Enhanced: "Before starting: Use memory_search to find recent context about research activities. Check memory/SESSION-GLOSSAR.md for relevant people, projects, and recent decisions. Then proceed with the original task using this context.

Run daily research scout..."

The script is conservative (suggests only, never auto-modifies) and skips monitoring jobs that don't need context.
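The suggest-only behavior can be sketched as below. This is an assumption-laden illustration, not the shipped cron-optimizer.py: it assumes jobs.json entries carry name and prompt fields, and the skip keywords are hypothetical stand-ins for however the real script identifies monitoring jobs:

```python
PREAMBLE = (
    "Before starting: Use memory_search to find recent context, and check "
    "memory/SESSION-GLOSSAR.md for relevant people, projects, and decisions.\n\n"
)

# Hypothetical heuristic for jobs that only watch/alert and need no context.
SKIP_KEYWORDS = ("monitor", "healthcheck", "ping")

def suggest_enhancements(jobs: list[dict]) -> list[dict]:
    """Return before/after prompt pairs; never modifies the jobs themselves."""
    suggestions = []
    for job in jobs:
        name = job.get("name", "")
        if any(k in name.lower() for k in SKIP_KEYWORDS):
            continue  # conservative: skip monitoring-style jobs entirely
        suggestions.append({
            "name": name,
            "before": job["prompt"],
            "after": PREAMBLE + job["prompt"],
        })
    return suggestions
```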

Tips

  • Run the full rebuild (python3 scripts/build-glossary.py without --incremental) occasionally to pick up improvements to entity detection
  • The glossary is most useful when KNOWN_PEOPLE and KNOWN_PROJECTS are populated — spend 5 minutes adding your key contacts and projects
  • For agents that run 24/7, the cron job keeps everything current automatically
  • Session transcripts can get large (our 297 sessions = 24MB) — this is fine; OpenClaw's vector search handles it efficiently
  • Use the cron optimizer after setting up memory to enhance existing automation
Data source: ClawHub · Chinese localization: 龙虾技能库