Automation Content Creator
v1: Automatically scrape top viral posts, analyze hooks, generate original scripts and captions, schedule posts across platforms, and optimize content performance…
Automated Content Generation Pipeline

Skill Overview
This skill builds a fully automated content factory that runs 24/7:
- Apify scrapes the most viral content across TikTok, Instagram, YouTube, and Reddit
- Claude (OpenClaw) extracts the hooks, reverse-engineers why each post went viral, and generates scripts, captions, carousels, and threads
- A scheduler batches all posts and queues them for auto-publishing
The result: a near fully-automated content channel that feeds itself.
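The three-layer loop can be sketched as a single orchestrator. This is a minimal sketch: the layer functions below are stub placeholders standing in for the real Apify scraper, Claude engine, and scheduler, and the shapes of the objects they pass along are illustrative assumptions.

```javascript
// Minimal orchestration sketch of the three-layer pipeline.
// All three layer functions are stubs for illustration only.
const scrapeViralContent = async () =>
  [{ platform: "tiktok", text: "hook" }];                          // Layer 1 stub
const generateContent = async posts =>
  posts.map(p => ({ ...p, script: `Script for: ${p.text}` }));     // Layer 2 stub
const schedulePosts = async drafts =>
  drafts.map(d => ({ ...d, status: "queued" }));                   // Layer 3 stub

async function runPipeline() {
  const viral  = await scrapeViralContent();    // Layer 1: scrape
  const drafts = await generateContent(viral);  // Layer 2: generate
  const queued = await schedulePosts(drafts);   // Layer 3: schedule
  console.log(`Queued ${queued.length} post(s)`);
  return queued;
}

// In production this would run on a timer (e.g. node-cron every few
// hours); here we invoke it once.
runPipeline();
```

Keeping each layer behind its own async function makes it easy to swap a stub for the real implementation one layer at a time.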
🔗 Apify: https://www.apify.com/?fpr=dx06p
What This Skill Does

- Scrape the top viral content across multiple platforms every few hours
- Extract the exact hooks, structures, and formats that made content go viral
- Repurpose viral content into original scripts, captions, carousels, and threads
- Generate a full weekly content calendar automatically
- Batch and schedule posts across platforms (Instagram, TikTok, LinkedIn, Twitter/X)
- Track which generated content performs best and feed that signal back into the pipeline
- Run completely autonomously once configured — minimal human input needed

Architecture Overview

┌─────────────────────────────────────────────────────────────────┐
│              AUTOMATED CONTENT GENERATION PIPELINE              │
│                                                                 │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │  LAYER 1 — VIRAL CONTENT SCRAPING (Apify)               │    │
│  │  TikTok │ Instagram │ YouTube │ Reddit │ Twitter/X      │    │
│  │  Top posts by hashtag, views, engagement, shares        │    │
│  └──────────────────────────┬──────────────────────────────┘    │
│                             │                                   │
│  ┌──────────────────────────▼──────────────────────────────┐    │
│  │  LAYER 2 — AI CONTENT ENGINE (Claude / OpenClaw)        │    │
│  │                                                         │    │
│  │  • Hook Extractor   → why did this go viral?            │    │
│  │  • Script Generator → original video scripts            │    │
│  │  • Caption Writer   → post captions + hashtags          │    │
│  │  • Carousel Builder → slide-by-slide content            │    │
│  │  • Thread Writer    → Twitter/X and LinkedIn threads    │    │
│  │  • Calendar Planner → weekly posting schedule           │    │
│  └──────────────────────────┬──────────────────────────────┘    │
│                             │                                   │
│  ┌──────────────────────────▼──────────────────────────────┐    │
│  │  LAYER 3 — SCHEDULED PUBLISHING                         │    │
│  │  Buffer │ Later │ Hootsuite │ Custom Webhook            │    │
│  │  Posts queued, timed, and published automatically       │    │
│  └─────────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────────┘
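For Layer 2, the hook-extraction call amounts to building a request for the Anthropic Messages API. The sketch below shows only the request construction; the model id, prompt wording, and `buildHookExtractionRequest` helper name are illustrative assumptions, not part of the skill's actual implementation.

```javascript
// Sketch: build the JSON body for a hook-extraction call to the
// Anthropic Messages API (POST https://api.anthropic.com/v1/messages).
// Model id and prompt are placeholders.
function buildHookExtractionRequest(post) {
  return {
    model: "claude-sonnet-4-20250514",   // placeholder model id
    max_tokens: 1024,
    messages: [{
      role: "user",
      content:
        `This ${post.platform} post went viral:\n\n"${post.text}"\n\n` +
        `Extract the hook, explain why it worked, and rewrite it ` +
        `as an original script with the same structure.`
    }]
  };
}

const body = buildHookExtractionRequest({
  platform: "tiktok",
  text: "I quit my job with $0 saved"
});
console.log(JSON.stringify(body, null, 2));
// The actual call would POST this body with axios, plus the
// x-api-key and anthropic-version headers.
```

Separating payload construction from the HTTP call keeps the prompt testable without spending API credits.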
Step 1 — Get Your API Keys

Apify

1. Sign up at https://www.apify.com/?fpr=dx06p
2. Go to Settings → Integrations
3. Copy your token:

export APIFY_TOKEN=apify_api_xxxxxxxxxxxxxxxx
Claude / OpenClaw

1. Get your API key from your OpenClaw or Anthropic account
2. Store it:

export CLAUDE_API_KEY=sk-ant-xxxxxxxxxxxxxxxx
Step 2 — Install Dependencies

npm install apify-client axios node-cron dotenv
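Because every later layer reads its credentials from the environment, a small startup check avoids silent failures deep in the pipeline. This is a sketch; only the variable names match the export commands above, and the `requireEnv` helper is hypothetical.

```javascript
// Fail fast if a required credential is missing from the environment.
// The variable names match the export commands from Step 1.
function requireEnv(names) {
  const missing = names.filter(name => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
}

// Call once at startup, after dotenv has loaded .env:
// requireEnv(["APIFY_TOKEN", "CLAUDE_API_KEY"]);
```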
Layer 1 — Viral Content Scraper (Apify)

import { ApifyClient } from 'apify-client';

const apify = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Define your niche and topics
const NICHE_TOPICS = [
  "productivity",
  "entrepreneurship",
  "AI tools",
  "personal finance",
  "self improvement",
  "marketing"
];

async function scrapeViralContent() {
  console.log("🔍 Scraping viral content...");

  const [tiktok, instagram, reddit] = await Promise.all([
    // TikTok — top videos by hashtag
    apify.actor("apify/tiktok-hashtag-scraper").call({
      hashtags: NICHE_TOPICS,
      resultsPerPage: 30,
      shouldDownloadVideos: false
    }).then(run => apify.dataset(run.defaultDatasetId).listItems()),

    // Instagram — top posts by hashtag
    apify.actor("apify/instagram-hashtag-scraper").call({
      hashtags: NICHE_TOPICS,
      resultsLimit: 30
    }).then(run => apify.dataset(run.defaultDatasetId).listItems()),

    // Reddit — hottest posts in relevant subreddits
    apify.actor("apify/reddit-scraper").call({
      startUrls: [
        { url: "https://www.reddit.com/r/Entrepreneur/" },
        { url: "https://www.reddit.com/r/productivity/" },
        { url: "https://www.reddit.com/r/personalfinance/" }
      ],
      maxPostCount: 20,
      sort: "hot"
    }).then(run => apify.dataset(run.defaultDatasetId).listItems())
  ]);

  // Normalize all platforms to a common schema
  const normalized = [
    ...tiktok.items.map(p => ({
      platform: "tiktok",
      text: p.text,
      likes: p.diggCount,
      shares: p.shareCount,
      comments: p.commentCount,
      views: p.playCount,
      engagementScore: p.diggCount + p.shareCount * 3 + p.commentCount * 2,
      url: p.webVideoUrl,
      author: p.authorMeta?.name
    })),
    ...instagram.items.map(p => ({
      platform: "instagram",
      text: p.caption,