CloudWithSingh

Claude Code Skills & MCP — The Power User Setup

How I set up Claude Code as a full content and automation engine — custom skills, MCP servers, voice files, vault-first persistence, and the router pattern that replaced 20 separate commands with 6.

Parveen Singh
April 1, 2026
28 min read
Prerequisites: Claude Code CLI installed; basic terminal/CLI comfort
TL;DR

Claude Code becomes a different tool when you set it up properly. This field note covers everything I use daily: CLAUDE.md project files, custom skills with the router pattern, MCP server configuration for Notion/Buffer/Linear, knowledge files that keep voice consistent across platforms, and a vault-first persistence layer so nothing gets lost between sessions. This is the setup behind my entire content pipeline — LinkedIn, YouTube, Instagram, Reddit, and three blog sites, all driven by Claude Code skills.

Who This Is For

You already use Claude Code. You've run some prompts, maybe even built something useful. But every session starts from zero — you're re-explaining your project, re-pasting context, re-describing how you want things to sound.

This guide is about fixing that. I run a multi-platform content operation through Claude Code — 6 skill namespaces, 30+ modes, connected to Notion, Buffer, and Linear through MCP. Every piece of content I publish touches this system. I'm going to show you exactly how it's built.

Not theory. Not "you could do this." This is what I actually do.


1. CLAUDE.md — Your Project Brain

Every project needs a CLAUDE.md file in its root. Claude Code reads this automatically at the start of every session. It's the single most impactful thing you can do to improve your Claude Code experience.

Without it, you're explaining your project from scratch every time. With it, Claude Code knows your architecture, your conventions, your rules, and your preferences before you type a single prompt.

What to Include

Your CLAUDE.md should cover:

  • Project overview — what this repo is, what it does, who it's for (2-3 sentences)
  • Architecture — how the pieces fit together, where things live, naming conventions
  • Commands reference — what skills exist, what they do, how to invoke them
  • Content/code rules — hard constraints that should never be violated
  • Integration config — database IDs, API patterns, external service details

Here's the structure I use for my content pipeline:

# CLAUDE.md
 
## Project Overview
Content creation engine for multi-platform publishing.
Built entirely as Claude Code skills — no application code.
Skills research, draft, and publish content across LinkedIn,
YouTube, Instagram, Reddit, and three blog sites.
 
## Setup
./setup.sh   # symlinks commands/ into ~/.claude/skills/
 
## Architecture
### Skill Namespace Pattern
Every skill lives in commands/<namespace>/ and follows:
  commands/<namespace>/
  ├── SKILL.md              # Main definition
  └── references/
      ├── config.md         # Platform IDs, limits, channels
      └── *.md              # Supplementary guides
 
### Knowledge Files
knowledge/ contains voice profiles and strategy docs:
- about_me.md       — brand identity, credentials
- linkedin_voice.md — LinkedIn-specific voice DNA
- content_strategy.md — pillars, titling rules
 
### Vault Structure (Obsidian-first)
vault/
  _master_index.json    # Cross-channel dedup index
  linkedin/_index.json  # Per-channel pipeline state
  youtube/_index.json
  ...
 
## Commands Reference
| Skill     | Modes                                    |
|-----------|------------------------------------------|
| /linkedin | pipeline, discover, draft, publish, ...  |
| /blog     | [topic], cloudlearn, techrupt, brainstorm |
| /reddit   | scan, <url>, post, trends, confirm, log  |
 
## Content Rules
- Write as Parveen — practitioner, not thought leader
- Each platform has a distinct voice. Read the voice file.
- No emojis in hooks.
- 1,248-character hard limit for LinkedIn posts.

What NOT to Include

  • Entire codebase documentation. CLAUDE.md is a map, not a manual. If Claude needs details about a specific module, it should read that module.
  • Ephemeral state. Don't put "current sprint" or "today's tasks" in CLAUDE.md. That belongs in your vault or project management tool.
  • Secrets. No API keys, tokens, or credentials. Use .env files and reference them.
  • Lengthy prose. Every line costs tokens. Be direct. Use tables over paragraphs where possible.
Pro Tip

Keep CLAUDE.md under 200 lines. I know mine is longer than that — it grew with the project. But the tighter it is, the fewer tokens it burns on every session. If a section is only relevant to one skill, move it into that skill's references/ folder instead.

The Token Math

CLAUDE.md is loaded on every single session. If yours is 300 lines of detailed prose, that's ~3,000 tokens before you've typed anything. Multiply that by every conversation, every day. It adds up.

The fix: put the overview in CLAUDE.md, put the details in skill-specific reference files that only load when that skill runs. Progressive disclosure — Claude sees the map first, reads the territory only when it needs to.
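The arithmetic is easy to sanity-check yourself. A minimal sketch, using the common rule of thumb of roughly four characters per token (an approximation, not a real tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English prose."""
    return len(text) // 4

# A 300-line file at ~40 characters per line, loaded on every session:
claude_md = "- convention line, forty chars or so\n" * 300
per_session = estimate_tokens(claude_md)
print(per_session)              # a few thousand tokens before any prompt
print(per_session * 10)         # cost across ten sessions in a day
```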


2. Skills Folder Architecture

Skills are the core of the power user setup. A skill is a markdown file that tells Claude Code how to perform a specific workflow — with instructions, context, and constraints baked in.

Where Skills Live

Claude Code loads skills from ~/.claude/skills/. You can write them directly there, but I keep mine in a project directory and symlink them:

content-creation-project/
├── commands/
│   ├── linkedin/
│   │   ├── SKILL.md
│   │   └── references/
│   │       ├── config.md
│   │       ├── post-formats.md
│   │       ├── engagement-guide.md
│   │       └── carousel-guide.md
│   ├── blog/
│   │   ├── SKILL.md
│   │   └── references/
│   │       ├── config.md
│   │       └── quality-sweep.md
│   ├── reddit/
│   │   ├── SKILL.md
│   │   └── references/
│   │       └── subreddit-rules.md
│   ├── video/
│   │   ├── brainstorm.md
│   │   └── script.md
│   └── yt/
│       └── SKILL.md
├── knowledge/
│   ├── about_me.md
│   ├── linkedin_voice.md
│   ├── content_strategy.md
│   └── shared-context.md
├── setup.sh
└── CLAUDE.md

The setup.sh script creates the symlinks:

#!/bin/bash
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
CLAUDE_SKILLS_DIR="$HOME/.claude/skills"
mkdir -p "$CLAUDE_SKILLS_DIR"
 
# Link namespace directories
for dir in "$SCRIPT_DIR/commands"/*/; do
  dirname=$(basename "$dir")
  ln -sfn "$dir" "$CLAUDE_SKILLS_DIR/$dirname"
  echo "Linked: $dirname/"
done

Run ./setup.sh once, then again whenever you add or rename a skill. That's it.

SKILL.md Structure

Every skill needs two things: YAML frontmatter and markdown instructions.

---
name: linkedin
description: LinkedIn content pipeline — discover topics,
  draft posts, generate carousels, write engagement comments,
  and publish to Buffer.
---
 
# LinkedIn Content Pipeline
 
You are Parveen Singh's LinkedIn content pipeline.
 
## Setup (every run)
 
1. Read `references/config.md` — operational settings
2. Read `knowledge/about_me.md` — brand identity
3. Read the voice profile for the target channel
 
## Entry Points
 
Parse `$ARGUMENTS` to determine mode:
 
| Input              | Mode           |
|--------------------|----------------|
| `discover [focus]` | Discovery Only |
| `draft [topic]`    | Draft Only     |
| `publish`          | Publish Only   |
| `carousel [topic]` | Carousel Mode  |
| `engage <post>`    | Engagement     |

The frontmatter is critical. `name` is the slash command trigger. `description` tells Claude Code when to suggest the skill — it's doing semantic matching against your prompt, so be specific about the use cases.

The references/ Subfolder

This is the progressive disclosure pattern that keeps token costs sane.

  • config.md — Notion database IDs, Buffer channel IDs, character limits, scheduling rules. Loaded every run.
  • Supplementary guides — loaded only when a specific mode needs them. My LinkedIn skill has carousel-guide.md (loaded in carousel mode), engagement-guide.md (loaded in engage mode), and post-formats.md (loaded during drafting).

The loading cascade:

  1. Frontmatter scanned (~100 tokens) — Claude reads name and description to decide if this skill is relevant
  2. Full SKILL.md loaded (~2-5K tokens) — only when the skill is actually invoked
  3. Reference files loaded on demand (~1-3K tokens each) — only when a specific mode needs them

This means /linkedin discover loads about 5K tokens of skill context. /linkedin carousel loads the base 5K plus the carousel guide. You're never paying for context you don't use.

Pro Tip

Put your most-used instructions in SKILL.md directly. Put mode-specific details in reference files. If a section is only relevant to one mode, it doesn't belong in the main file — it's burning tokens for every other mode.


3. The Router Pattern — One Skill, Multiple Modes

This is the architectural decision that simplified everything for me. Instead of having separate skills for /linkedin-discover, /linkedin-draft, /linkedin-publish, /linkedin-carousel, and /linkedin-engage — I have one skill (/linkedin) that dispatches based on arguments.

Why Router Over Separate Skills

The router pattern works when your modes share context. My LinkedIn skill needs the same setup regardless of mode — it always reads the voice profile, always loads the config, always checks the vault for duplicates. The only thing that changes is what it does after setup.

If I had 5 separate skills, each one would duplicate that setup section. That's wasted tokens and wasted maintenance effort. One change to the voice loading logic means updating one file, not five.

How It Works

The skill parses $ARGUMENTS — the text after the slash command — and dispatches to the right mode:

## Entry Points
 
Parse `$ARGUMENTS` to determine mode:
 
| Input                        | Mode              |
|------------------------------|-------------------|
| (nothing) or `pipeline`      | Full Pipeline     |
| `discover [focus]`           | Discovery Only    |
| `draft [topic]`              | Draft Only        |
| `publish`                    | Publish Only      |
| `carousel [topic]`           | Carousel Mode     |
| `engage <post>`              | Engagement Mode   |

When I type /linkedin discover azure security, Claude Code:

  1. Loads the LinkedIn SKILL.md
  2. Parses $ARGUMENTS as discover azure security
  3. Matches discover in the routing table
  4. Runs the discovery workflow with "azure security" as the focus
  5. Loads only the reference files that discovery mode needs

When I type /linkedin carousel conditional access, it follows the same path but dispatches to carousel mode and loads references/carousel-guide.md.
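Claude performs this dispatch by reading the routing table in the prompt, not by executing code, but the logic is equivalent to a small lookup. A sketch, with mode names taken from the table above:

```python
MODES = {
    "discover": "Discovery Only",
    "draft": "Draft Only",
    "publish": "Publish Only",
    "carousel": "Carousel Mode",
    "engage": "Engagement Mode",
}

def route(arguments: str) -> tuple[str, str]:
    """First token selects the mode; the rest passes through as the focus/topic."""
    args = arguments.strip()
    if not args or args == "pipeline":
        return ("Full Pipeline", "")
    verb, _, rest = args.partition(" ")
    return (MODES.get(verb, "Full Pipeline"), rest)

print(route("discover azure security"))  # ('Discovery Only', 'azure security')
```

The default-mode rule from the tips below is the `if not args` branch: a bare invocation falls through to the full pipeline instead of erroring.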

A Real Router: My YouTube Pipeline

The /yt skill routes to 8 different modes through a single SKILL.md:

| Input                | Mode         | What It Does                    |
|----------------------|--------------|---------------------------------|
| `pipeline [topic]`   | Research     | Topic analysis + NotebookLM     |
| `ideation [slug]`    | Strategy     | Positioned ideas, desire mapping |
| `hooks [slug]`       | Engineering  | Scored hooks + alignment test   |
| `outlines [slug]`    | Blueprint    | Recording-ready video outline   |
| `titles [slug]`      | Packaging    | Tiered titles + thumbnail text  |
| `cascade [slug]`     | Distribution | Blog + LinkedIn + Twitter       |
| `short-form [slug]`  | Repurposing  | 3-5 clips with hooks + captions |
| `status`             | Dashboard    | Pipeline overview               |

Eight modes. One SKILL.md. One setup section. Each mode references the same slug-based vault files, so the pipeline chains naturally — the output of hooks feeds into outlines, which feeds into titles.

When to Use Router vs Separate Skills

Use a router when:

  • Modes share setup context (same voice file, same config, same vault)
  • Modes are part of a pipeline (output of one feeds the next)
  • You want a single mental model ("everything LinkedIn is /linkedin")

Use separate skills when:

  • Workflows are genuinely independent (no shared config or context)
  • The skill files would be massive if combined (1000+ lines)
  • Different teams own different workflows

My content pipeline uses routers for LinkedIn (6 modes), YouTube (8 modes), Reddit (8 modes), Blog (5 modes), and Unsave marketing (8 modes). The video brainstorm and video script skills are separate — they were built before the router pattern and work fine as standalone commands.

Building a Router: Practical Tips

A few things I learned the hard way:

Default mode matters. When someone types /linkedin with no arguments, what happens? Don't make it an error. Pick a sensible default — my LinkedIn skill runs the full pipeline (discover + draft + publish) when invoked bare. My Reddit skill shows today's engagement opportunities.

Keep the routing table visible. Put it early in the SKILL.md, right after setup. Claude reads top-to-bottom, and the routing table is the decision point for the entire session. Don't bury it below 50 lines of context.

Mode-specific instructions go after shared setup. Structure your SKILL.md like this: frontmatter, then shared setup, then routing table, then shared workflow steps, then mode-specific sections. Claude reads the shared parts first, then jumps to the relevant mode.

Reference files are your pressure valve. When a mode's instructions get long (my carousel mode is 200+ lines of brand system specs, layout catalogs, and font rules), move them to a reference file. The routing table says "load references/carousel-guide.md" and the main SKILL.md stays clean.


4. Knowledge Files for Voice Consistency

This is the part that took my content from "AI-generated" to "sounds like me." Without voice files, Claude Code writes generic professional content that could come from anyone. With them, it writes in my actual voice — different energy per platform, but recognizably me.

The Problem

I publish across LinkedIn, YouTube, Reddit, Instagram, and three blog sites. Each platform has a different voice:

  • LinkedIn — direct, opinionated, practitioner talking to practitioners. Short fragments. Strong takes.
  • YouTube — conversational, slightly more structured, teaching energy. "Let me show you what actually happens."
  • Reddit — casual, brief, riffing on shared experience. No structured advice. 50-100 words max.
  • Instagram — punchy, visual-first, hook-driven. Completely different cadence.

Without explicit voice instructions, Claude Code writes the same way everywhere. It defaults to a generic helpful tone that sounds fine but reads like every other AI-generated post on the platform.

The Solution: knowledge/*.md

I keep voice profiles and strategy docs in a knowledge/ directory. Skills declare which files they need in their Setup section, and load them before doing any work.

Three types of knowledge files:

Voice profiles — how to sound on a specific platform

knowledge/linkedin_voice.md
knowledge/instagram_reel_style.md
knowledge/writing_style.md (YouTube scripts)

Strategy docs — what to write about and how to think about content

knowledge/content_strategy.md
knowledge/about_me.md

Shared context — operational config and cross-skill routing

knowledge/shared-context.md
knowledge/vault-patterns.md

What Goes in a Voice Profile

Here's the structure I use. This is simplified from my actual LinkedIn voice file, which is about 150 lines:

# Platform Voice Profile — LinkedIn
 
Read this file before drafting ANY content. Every post
must sound like [your name], not like AI.
 
## WHO YOU ARE
[2-3 sentences. Role, experience, positioning.]
 
## CORE BELIEF
[The one thing that drives your content. Mine: "Cloud
certifications don't build cloud engineers. Production
experience does."]
 
## VOICE CHARACTERISTICS
 
### Tone
- Raw and personal first — like talking to a friend in tech
- Direct value without fluff
- Opinionated but grounded in experience
- Short > long. Say it in 5 sentences if you can.
 
### Vocabulary Fingerprint
[Words and phrases you actually use]
 
| Pattern         | Examples                                    |
|-----------------|---------------------------------------------|
| Emphasis        | "super important" / "honestly"              |
| Casual markers  | "this thing" / "Not great." / "That's it."  |
| Belief shifts   | "I used to think X. I don't anymore."       |
| Authority       | "I've seen this break in client environments"|
 
### What You NEVER Say
- No "I'm excited to share..."
- No corporate jargon ("leverage", "synergy")
- No explaining what you're about to explain
- No padding to hit a word count
 
### Emoji Philosophy
[Your rules. Mine: default to zero emojis.]
 
## POST FORMATS
[Platform-specific format templates with character limits,
structure patterns, CTA styles.]

The vocabulary fingerprint section is the secret weapon. It's not enough to say "write casually." You need to show Claude the specific words and phrases that make your voice yours. "Not great." as a standalone sentence. "Every single day" as an emphasis pattern. "I stopped telling people to..." as a belief-shift opener.

Loading Order Matters

Every skill declares its knowledge file dependencies in the Setup section:

## Setup (every run)
 
1. Read `references/config.md`
2. Read `knowledge/about_me.md`
3. Read the voice profile for the target channel

The voice file loads before any drafting happens. This isn't optional — it's a hard rule. I've seen the difference. When the voice file loads first, the output nails my cadence from the first draft. When I skip it (testing, laziness), the output reads like a LinkedIn influencer template.

Before and After: Voice Files in Practice

Here's the same LinkedIn post topic — "why I stopped recommending AZ-900 as a first cert" — drafted without and with a voice profile loaded.

Without voice file:

Azure certifications are an important part of professional development in the cloud space. However, I've found that starting with AZ-900 may not be the most effective approach for everyone. Here are three reasons to consider skipping it and going directly to AZ-104...

That's not wrong. It's just... nobody. It could be any person with a LinkedIn account and an opinion about Azure certs.

With voice file loaded:

I stopped telling people to start with AZ-900.

Not because it's bad. Because it teaches you Azure exists. You already know that.

AZ-104 forces you to actually touch things. Deploy a VNet. Break an NSG rule. Fix it. That's where learning happens.

Same person, same topic. But the second one sounds like me — the fragment sentences, the belief-shift opener, the "touch things" phrasing, the concrete example. The voice file made that possible.

Cross-Platform Voice Differentiation

The real power shows up when you write the same topic for different platforms. I had a video about managed identity mistakes. The YouTube script, LinkedIn post, and Reddit comment all covered the same core insight — but they sounded completely different because each loaded a different voice file.

Same person. Same expertise. Different energy. That's what voice files give you that a generic "write in a conversational tone" instruction never will.

Warning

Don't stuff your entire life story into the voice file. Claude needs patterns and constraints, not a biography. The vocabulary fingerprint table and the "what you never say" list do more work than three paragraphs of background. Keep it tight.


5. MCP Server Configuration

MCP — Model Context Protocol — is what connects Claude Code to external tools. It's the difference between "Claude can read and write files" and "Claude can query my Notion database, schedule posts to Buffer, and create Linear tickets."

How MCP Works

Each MCP server exposes tools that Claude Code can call. You configure them in .claude/settings.local.json at the project level (this file is gitignored — it contains paths and sometimes credentials).

When Claude Code starts a session, it reads the MCP config and adds those tools to its available set. Each server's tools show up alongside Claude's built-in tools (Read, Write, Bash, etc.).

Settings File Structure

{
  "mcpServers": {
    "notion": {
      "command": "npx",
      "args": ["-y", "@notionhq/notion-mcp-server"],
      "env": {
        "OPENAPI_MCP_HEADERS": "{\"Authorization\": \"Bearer ntn_xxx\", \"Notion-Version\": \"2022-06-28\"}"
      }
    },
    "buffer": {
      "command": "npx",
      "args": ["-y", "@anthropic-ai/buffer-mcp-server"],
      "env": {
        "BUFFER_API_KEY": "your-buffer-api-key"
      }
    },
    "linear": {
      "command": "npx",
      "args": ["-y", "@anthropic-ai/linear-mcp-server"],
      "env": {
        "LINEAR_API_KEY": "your-linear-api-key"
      }
    }
  }
}

Each server definition has:

  • command — the executable to run (usually npx for Node-based servers)
  • args — arguments passed to the command (the server package name)
  • env — environment variables the server needs (API keys, tokens)

My Active MCP Servers

Here's what I actually run:

Notion — database queries, page creation, property updates. My content pipeline uses two Notion databases: a Social Content Board for LinkedIn/Blog/Reddit and a Video Production Pipeline for YouTube/Instagram. Every skill that creates content syncs to Notion through these tools.

Buffer — social media scheduling. After a LinkedIn post is drafted and approved, the publish mode pushes it to Buffer as a draft. I never auto-schedule — everything goes to draft first so I can review timing.

Linear — project management. My lab building pipeline uses Linear cards to track lab ideas through research, build, review, and publish stages. Claude Code creates cards, updates status, and reads card details.

Gmail — email drafting. Used by my consulting engagement skill to draft follow-up emails and proposal cover letters.

Common Gotchas

JSON syntax errors. The settings file is strict JSON — no trailing commas, no comments. One misplaced comma and none of your MCP servers load. I've lost 20 minutes to a trailing comma more than once. Use a JSON validator.

PATH issues. If Claude Code can't find npx, the server won't start. Make sure your shell profile exports the correct PATH. On macOS with Homebrew, Node.js might be in /opt/homebrew/bin/ — and Claude Code's shell might not pick that up automatically.

Transport mismatches. Some MCP servers use stdio (the default), others use HTTP. Make sure your config matches what the server expects. If a server's docs say "HTTP transport on port 3000," you need a different config structure than the stdio examples above.

Silent failures. When an MCP server fails to start, Claude Code doesn't always tell you loudly. If your Notion tools aren't showing up, check the MCP server logs. Run claude mcp to see server status.

The Token Cost of MCP

This is the one nobody talks about. Every MCP server adds its tool definitions to Claude's context window. Each tool definition includes the name, description, and full parameter schema.

My setup has about 84 MCP tools across all servers. That's roughly 15,000 tokens of tool definitions loaded before I type a single character. On every session.

That's the trade-off. You get powerful integrations, but you pay for them in context window space. On a large project where CLAUDE.md is already 3K tokens and you load a skill at 5K tokens, adding 15K of MCP tool definitions means you've used 23K tokens of context before any actual work happens.

Pro Tip

Only enable the MCP servers you actually need for the project you're working on. I have different .claude/settings.local.json configs for different projects. My content pipeline needs Notion and Buffer. My consulting project needs Linear and Gmail. My lab-building project needs Linear only. Don't load everything everywhere.

MCP vs Direct API Calls

MCP servers aren't the only way to integrate external tools. For some workflows, a bash script that calls an API directly is simpler and cheaper.

I use a scripts/notion_query.sh script for complex Notion database queries because the Notion MCP's search tool does semantic search — it can't reliably filter by Status, Channel, or any specific property. When I need "all LinkedIn posts with Status = Ready to Write," the script gives me exact results. The MCP search gives me approximate matches ranked by relevance.

Rule of thumb: use MCP for creating and updating individual records. Use scripts for querying and filtering across many records. Don't force everything through MCP.
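The "querying across many records" side works because a direct call can send an exact filter instead of relying on semantic search. A sketch of the filter body a request to Notion's database query endpoint (`POST /v1/databases/{id}/query`) would carry; the property names `Status` and `Channel` are assumptions about this particular database schema:

```python
import json

def notion_status_filter(status: str, channel: str) -> dict:
    """Exact-match filter for a Notion database query.
    Assumes 'Status' and 'Channel' are select properties in the target database."""
    return {
        "filter": {
            "and": [
                {"property": "Status", "select": {"equals": status}},
                {"property": "Channel", "select": {"equals": channel}},
            ]
        }
    }

body = notion_status_filter("Ready to Write", "LinkedIn")
print(json.dumps(body, indent=2))
```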

Debugging MCP: When Things Don't Work

MCP failures are usually silent. Claude Code won't say "your Notion server failed to start" — it just won't have Notion tools available. Here's my debugging checklist:

  1. Run claude mcp in your terminal. This shows which servers are configured and their status. If a server shows "error" or isn't listed, the config is wrong.

  2. Check the JSON syntax. Open .claude/settings.local.json in an editor with JSON validation. A missing comma or extra bracket is the most common cause.

  3. Test the server standalone. Run the server command directly in your terminal: npx -y @notionhq/notion-mcp-server. If it errors out, the issue is the server package or your environment, not Claude Code.

  4. Verify environment variables. API keys expire. Tokens get rotated. If a server that worked yesterday doesn't work today, check the env block first.

  5. Check Node.js availability. Most MCP servers need Node.js. If you recently updated or switched Node versions (nvm, volta), the npx path might have changed.


6. Vault-First Persistence

This is the pattern that made everything actually reliable. Claude Code starts fresh every session — no memory of what you did yesterday, what posts you drafted, what's in your pipeline. Without persistence, you're rebuilding context every time.

The Problem

I draft LinkedIn posts on Monday. On Tuesday, I want to review and publish them. But Claude Code doesn't remember Monday. I'd have to say "I have 3 LinkedIn posts in my Notion database, here are the IDs, please fetch them and..." — that's tedious and error-prone.

Worse: if I'm drafting content across 6 channels, I need dedup awareness. Did I already write about managed identities this week? Is there a YouTube video on that topic? Without a shared state, every skill operates in isolation.

The Solution: Write Local First, Sync External Second

Everything writes to a vault/ directory first. This is a local folder — just markdown files and JSON indexes. Claude Code can read it instantly at the start of any session. No API calls needed.

vault/
  _master_index.json          # Cross-channel unified index
  linkedin/
    _index.json               # LinkedIn pipeline state
    drafts/                   # Post markdown files
    published/                # Archived published posts
  youtube/
    _index.json               # YouTube pipeline state
    00-research/              # Research docs
    01-ideation/              # Video ideas
    ...
  reddit/
    _index.json               # Engagement tracking
    engagement/               # Daily session files

The Master Index

vault/_master_index.json is the cross-channel awareness layer. Every skill reads this first. One file gives Claude Code visibility into everything that's been created across all channels.

{
  "entries": [
    {
      "id": "li-20260407-managed-identity-mistakes",
      "channel": "linkedin",
      "title": "3 Managed Identity Mistakes I See Every Week",
      "lifecycle": "published",
      "topic_keywords": ["managed identity", "azure", "security", "rbac"],
      "cross_references": ["yt-20260401-managed-identity-deep-dive"],
      "notion_id": "abc123",
      "notion_synced": true,
      "created": "2026-04-07",
      "updated": "2026-04-07"
    },
    {
      "id": "yt-20260401-managed-identity-deep-dive",
      "channel": "youtube",
      "title": "Managed Identities — The Complete Guide",
      "lifecycle": "production",
      "topic_keywords": ["managed identity", "azure", "system-assigned", "user-assigned"],
      "cross_references": ["li-20260407-managed-identity-mistakes"],
      "notion_id": "def456",
      "notion_synced": true,
      "created": "2026-04-01",
      "updated": "2026-04-05"
    }
  ]
}

When any skill starts, it reads this file to:

  • Dedup — check topic_keywords before creating new content. If I already have a LinkedIn post about managed identity mistakes, don't suggest another one.
  • Cross-reference — the YouTube video cascaded into a LinkedIn post. Both entries link to each other via cross_references. The skill knows about the relationship.
  • Gap detection — filter by channel + lifecycle to find channels missing coverage. "You have a YouTube video about managed identities but no blog post. Want to cascade it?"
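The dedup and gap checks above reduce to simple set operations over the index. A sketch, with the entry shape matching the JSON above:

```python
def find_overlaps(entries: list[dict], keywords: list[str]) -> list[dict]:
    """Entries sharing at least one topic keyword with the proposed content."""
    kw = set(keywords)
    return [e for e in entries if kw & set(e["topic_keywords"])]

def find_gaps(entries: list[dict], keywords: list[str], channels: list[str]) -> list[str]:
    """Channels with no coverage of these keywords yet."""
    covered = {e["channel"] for e in find_overlaps(entries, keywords)}
    return [c for c in channels if c not in covered]

entries = [
    {"channel": "linkedin", "topic_keywords": ["managed identity", "azure"]},
    {"channel": "youtube", "topic_keywords": ["managed identity", "azure"]},
]
print(find_gaps(entries, ["managed identity"], ["linkedin", "youtube", "blog"]))  # ['blog']
```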

Per-Channel Indexes

Each channel also has its own _index.json for channel-specific state. The LinkedIn index tracks draft status and Buffer post IDs. The Reddit index tracks engagement stats and the CloudLearn mention ratio. The YouTube index tracks which of the 7 pipeline stages each video has completed.

This two-level index design means a skill that only cares about LinkedIn can read vault/linkedin/_index.json (~1K tokens) instead of the full master index (~5K tokens). Load what you need.

The Content Flow

Every skill follows the same pattern:

Skill invoked
  → Read vault/_master_index.json (dedup check)
  → Create/update vault/{channel}/ files
  → Update per-channel _index.json
  → Update vault/_master_index.json
  → Sync to Notion (downstream, non-blocking)

That last step is key: Notion is downstream. If the Notion API is slow, or down, or rate-limited — the content is already safe in the vault. The skill sets notion_synced: false on the master index entry and moves on. Next time any skill runs, it can pick up the failed sync and retry.
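The retry pick-up is a one-line scan over the master index. A sketch, with the entry shape as in the master index example above:

```python
def pending_syncs(entries: list[dict]) -> list[str]:
    """IDs of vault entries that still need to be pushed to Notion.
    An entry with no notion_synced field was never attempted, so it counts."""
    return [e["id"] for e in entries if not e.get("notion_synced", False)]

entries = [
    {"id": "li-001", "notion_synced": True},
    {"id": "li-002", "notion_synced": False},
    {"id": "li-003"},
]
print(pending_syncs(entries))  # ['li-002', 'li-003']
```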

I've had Notion go down during a content session exactly once. My vault had all 4 drafts. When Notion came back, I ran the sync and everything landed. If I'd been writing directly to Notion, I'd have lost the session.

Warning

The vault is gitignored by default — you don't want draft content and JSON indexes cluttering your commit history. But make sure you have a backup strategy. I use iCloud sync on the vault directory. A local git repo with a daily auto-commit script would also work. Don't rely solely on one machine's filesystem.

Why Not Just Use Notion Directly?

Three reasons:

  1. Speed. Reading a local JSON file is instant. Querying Notion's API takes 500ms-2s per call. When a skill needs to check dedup across 6 channels, that's 6 API calls vs 1 file read.

  2. Reliability. External APIs fail. Rate limits hit. Auth tokens expire. Local files don't have these problems.

  3. Context cost. Every Notion API call through MCP uses tokens for the tool call, the request formatting, and the response parsing. A file read is significantly cheaper.

Notion is still valuable — it's where I see the full pipeline visually, where I share status with collaborators, where I track long-running production workflows. But it's the dashboard, not the database.

What a Real Session Looks Like

Here's my actual Monday content workflow, start to finish:

  1. Open Claude Code in my content project directory. CLAUDE.md loads automatically — Claude knows the project, the vault, the skills.
  2. Type /linkedin pipeline. The skill loads, reads the vault master index (dedup check), reads my LinkedIn voice file, and starts discovering topics from my credibility bank and recent YouTube videos.
  3. It presents 8-10 topic ideas. I pick 4. It drafts all 4 — each one sounds like me because the voice file is loaded.
  4. Drafts save to vault/linkedin/drafts/. Master index updates. Notion sync happens in the background.
  5. I review the drafts, make edits, then run /linkedin publish. It pushes the approved posts to Buffer as drafts (never direct-publish).
  6. Total time: 20 minutes for 4 LinkedIn posts. Without this setup, that's 2 hours of context-switching between tools.

The vault makes Tuesday seamless. When I come back to review what I drafted, Claude reads vault/linkedin/_index.json and knows exactly where things stand. No re-explaining. No searching Notion for page IDs. The state is right there in a file.


7. Quick Start Checklist

Here's the step-by-step to go from zero to a working Claude Code power user setup. Each step builds on the previous one.

Step 1: Create CLAUDE.md

touch CLAUDE.md

Write a project overview, architecture summary, and key rules. Start with 50 lines. You'll add more as patterns emerge. Keep it in your project root.
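A starter CLAUDE.md might look like the sketch below. The section names are suggestions, not a required schema; Claude Code reads the file as plain markdown, so structure it however reads best for your project.

```shell
# Seed CLAUDE.md with a minimal overview/architecture/rules skeleton
# (section names are suggestions, not a required schema).
cat > CLAUDE.md << 'EOF'
# Project: Content Pipeline

## Overview
Multi-platform content operation driven by Claude Code skills.

## Architecture
- commands/   -- skills (one folder per skill, SKILL.md inside)
- knowledge/  -- voice and strategy files
- vault/      -- local persistence layer (gitignored)

## Rules
- Never publish directly; push to Buffer as drafts only.
- Read the vault master index before drafting (dedup check).
- Match the voice file for the target platform.
EOF
```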

Step 2: Set Up the Skills Folder

mkdir -p commands/my-first-skill/references
mkdir -p knowledge

This gives you the directory structure. Skills go in commands/, voice and strategy files go in knowledge/.

Step 3: Create Your First Skill

cat > commands/my-first-skill/SKILL.md << 'EOF'
---
name: my-first-skill
description: Describe when this skill should trigger.
  Be specific about the use cases.
---
 
# My First Skill
 
## Setup (every run)
 
1. Read `knowledge/about_me.md` — who you are
 
## Instructions
 
[What the skill should do, step by step.]
EOF

Step 4: Write a Knowledge File

cat > knowledge/about_me.md << 'EOF'
# About Me
 
[Your name, role, experience, expertise areas.]
[Your core belief about your field.]
[What makes your perspective different.]
EOF

Step 5: Create the Setup Script

cat > setup.sh << 'SCRIPT'
#!/bin/bash
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
CLAUDE_SKILLS_DIR="$HOME/.claude/skills"
mkdir -p "$CLAUDE_SKILLS_DIR"
for dir in "$SCRIPT_DIR/commands"/*/; do
  dirname=$(basename "$dir")
  ln -sfn "$dir" "$CLAUDE_SKILLS_DIR/$dirname"
  echo "Linked: $dirname/"
done
SCRIPT
chmod +x setup.sh
./setup.sh

Step 6: Configure MCP (If Needed)

mkdir -p .claude
cat > .claude/settings.local.json << 'EOF'
{
  "mcpServers": {}
}
EOF

Add servers as you need them. Don't start with 5 servers on day one. Add one, verify it works, then add the next.
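When you're ready for that first server, the entry follows Claude Code's `mcpServers` convention: a name, a command, args, and any env vars the server needs. The package name and env var below are placeholders, not a real configuration; check your MCP server's own docs for the exact command.

```shell
# Hypothetical single-server config. "your-notion-mcp-package" and
# NOTION_API_KEY are placeholders; consult the server's docs for real values.
mkdir -p .claude
cat > .claude/settings.local.json << 'EOF'
{
  "mcpServers": {
    "notion": {
      "command": "npx",
      "args": ["-y", "your-notion-mcp-package"],
      "env": { "NOTION_API_KEY": "set-me" }
    }
  }
}
EOF
```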

Step 7: Set Up Vault Persistence (Optional)

mkdir -p vault
echo '{"entries":[]}' > vault/_master_index.json

Add channel subdirectories as you build skills that need them. The vault grows organically with your workflows.
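The entry shape is entirely up to you. As a hypothetical example, here's what a populated master index might look like once a skill has written a draft, including the `notion_synced` flag described earlier; every field name here is an assumption to adapt.

```shell
# Example populated master index (all field names are illustrative assumptions).
mkdir -p vault
cat > vault/_master_index.json << 'EOF'
{
  "entries": [
    {
      "id": "2026-04-01-linkedin-001",
      "channel": "linkedin",
      "topic": "terraform state locking",
      "path": "vault/linkedin/drafts/2026-04-01-state-locking.md",
      "status": "draft",
      "notion_synced": false
    }
  ]
}
EOF
```

Any skill can now answer "what exists, where is it, and has it synced?" from one file read.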

Step 8: Test the Workflow

Open Claude Code in your project directory and invoke your skill:

/my-first-skill

Verify that CLAUDE.md loaded (Claude should know your project context), the skill triggered, and the knowledge file was read. If all three work, you have a functioning setup.


Where to Go From Here

This setup isn't something you build in an afternoon and never touch again. It evolves. I started with a single LinkedIn skill and a 30-line CLAUDE.md. Six months later, I have 6 namespaces, 30+ modes, and a vault-first architecture that handles my entire content operation.

Start small. One CLAUDE.md. One skill. One knowledge file. Get the loop working, then expand.

The pattern scales. Whether you're running a content pipeline, a DevOps workflow, a consulting practice, or a lab-building operation — the architecture is the same: CLAUDE.md for project context, skills for workflows, knowledge files for consistency, MCP for integrations, vault for persistence.

