
The Spatiotemporal Rendering Engine — Utilities

v1.0.0

Uses scheduled keyframes to orchestrate smart home elements across the 6-Element Spatial Matrix, based on natural language i...

by @spacesq (MilesXiang) · MIT-0
License: MIT-0
Last updated: 2026/3/21
Security scan: VirusTotal · Harmless
OpenClaw: Safe (high confidence)
The skill's code and instructions are internally consistent: it reads a local mounts file, calls a local LLM service to generate timeline JSON, and writes scheduled tracks to a local timeline DB; it does not request unrelated credentials or perform network calls to external endpoints.
Assessment recommendations
This skill appears to do what it claims: read an active_hardware_mounts.json, call a local LLM on localhost:1234 to generate timeline JSON, and save tracks to s2_timeline_data/rendered_tracks.json. Before installing, ensure: (1) the local LLM at localhost:1234 is trusted — untrusted LLMs can produce unexpected or malformed JSON; (2) other S2 connectors (e.g., s2-nlp-connector) are the only modules that provide microphone/mmWave/sensor data — the orchestrator itself does not access hardware; (3) ...
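Point (1) above can be made concrete: before saving timeline JSON produced by an untrusted local LLM, validate it. Here is a minimal sketch; the field names `track_id`, `keyframes`, and `t` are hypothetical illustrations, not the skill's published schema:

```python
import json

def validate_track(raw: str) -> dict:
    """Parse and sanity-check LLM-generated timeline JSON before saving it."""
    track = json.loads(raw)  # raises ValueError on malformed JSON
    if not isinstance(track, dict):
        raise ValueError("track must be a JSON object")
    keyframes = track.get("keyframes")
    if not isinstance(keyframes, list) or not keyframes:
        raise ValueError("track needs a non-empty 'keyframes' list")
    for kf in keyframes:
        # each keyframe must carry a non-negative scheduled time
        if not isinstance(kf, dict) or not isinstance(kf.get("t"), (int, float)) or kf["t"] < 0:
            raise ValueError("each keyframe needs a non-negative time 't'")
    return track
```

Rejecting malformed output at this boundary keeps a misbehaving model from corrupting the local timeline DB.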
Detailed analysis
Purpose and capabilities
The manifest, SKILL.md, and skill.py align: the orchestrator consumes an Active Mounts JSON, uses a local LLM to generate timeline keyframes, and injects the resulting track into a local rendered_tracks.json. There are no unexpected external credentials, unrelated binaries, or config paths requested.
Instruction scope
SKILL.md describes features like microphone monitoring, mmWave sensing, swarm pings and booking actions; the code itself does not access microphones, radar sensors, or external booking APIs — it only reads active_hardware_mounts.json and writes rendered_tracks.json. This is coherent if other S2 modules (e.g., s2-nlp-connector) supply sensor data; confirm those connectors are what provide sensitive inputs rather than this skill directly.
Installation mechanism
No install spec or external downloads are present; this is an instruction+code skill that runs from included skill.py. No external packages or remote archives are fetched by the skill itself.
Credential requirements
The skill requests no environment variables or credentials. The only network call is to http://localhost:1234 (a local LLM endpoint) which is consistent with the declared behavior. No unrelated secrets or external service credentials are requested.
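That single network call can be sketched as follows. The path /v1/chat/completions assumes an OpenAI-compatible local server (port 1234 is, for example, LM Studio's default), which the listing does not confirm:

```python
import json
import urllib.request

# Assumed OpenAI-style local endpoint; the listing only confirms host and port.
LLM_URL = "http://localhost:1234/v1/chat/completions"

def ask_local_llm(prompt: str, timeout: float = 30.0) -> str:
    """POST a prompt to the local LLM and return the raw text reply."""
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
    req = urllib.request.Request(
        LLM_URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:  # localhost only
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]
```

Because the endpoint is localhost-only, no credentials are needed and no traffic leaves the machine.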
Persistence and permissions
The manifest's always flag is false, and the skill does not attempt to modify other skills or system-wide agent settings. It writes its own timeline DB under the current working directory (s2_timeline_data/rendered_tracks.json), which is scoped and expected persistence behavior.
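A sketch of that scoped persistence, assuming the tracks file holds a JSON list (the exact schema is not published):

```python
import json
from pathlib import Path

# Timeline DB lives under the current working directory, as the analysis states.
DB = Path("s2_timeline_data") / "rendered_tracks.json"

def save_track(track: dict) -> int:
    """Append a rendered track to the local timeline DB; return the new count."""
    DB.parent.mkdir(exist_ok=True)
    tracks = json.loads(DB.read_text()) if DB.exists() else []
    tracks.append(track)
    DB.write_text(json.dumps(tracks, indent=2))
    return len(tracks)
```

Keeping all writes under one relative directory makes the skill's footprint easy to audit and to delete.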
Security has layers: review the code before running it.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/3/20

Initial release introducing spatiotemporal orchestration across the 6-Element Matrix.
- Predictive timeline rendering: converts natural language intents into 4D timeline tracks with scheduled keyframes.
- Real-time context awareness: reads active hardware mounts to tailor rendering to available devices.
- Bilingual documentation (English/Chinese) with detailed scenarios for smart home automation, emotional sensing, and multi-room orchestration.
- Supports simulated or real devices (recommended: s2-nlp-connector).
- Example use cases include post-workout routines, birthday events, emotional health monitoring, pet diagnostics, and elderly care.
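As an illustration of what a "4D timeline track with scheduled keyframes" might look like, here is a hypothetical track for a post-workout routine; every field name and device name below is invented for illustration and is not taken from the skill:

```python
# Hypothetical rendered track: times in seconds, one keyframe per scheduled state change.
track = {
    "track_id": "post_workout_cooldown",
    "intent": "dim the lights and start the fan after my workout",
    "keyframes": [
        {"t": 0,   "device": "living_room_light", "state": {"brightness": 40}},
        {"t": 5,   "device": "ceiling_fan",       "state": {"speed": "low"}},
        {"t": 600, "device": "ceiling_fan",       "state": {"speed": "off"}},
    ],
}

# An orchestrator would apply keyframes in time order.
schedule = sorted(track["keyframes"], key=lambda kf: kf["t"])
```

The "4D" framing here is three spatial dimensions (which device, where) plus the scheduled time axis carried by each keyframe's t value.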


Install command

Official: npx clawhub@latest install s2-timeline-orchestrator
Mirror (CN): npx clawhub@latest install s2-timeline-orchestrator --registry https://cn.clawhub-mirror.com
Data source: ClawHub · Chinese localization: 龙虾技能库