This is the primary path for using Content Machine today.
Use it when you are working from Claude Code, Codex CLI, or a similar coding-agent CLI that can read repo-local docs and optionally invoke the repo's runners.
- Existing project: use Agent Harness Install to materialize `.content-machine/skills/`, `.content-machine/flows/`, and `.content-machine/AGENTS.md`.
- Content Machine checkout: use the repo-local `skills/`, `flows/`, and `scripts/harness/` paths in this guide.
After install, you normally do not need to memorize commands. Ask your agent for the outcome and let it read the local `skills/`, `flows/`, and runtime docs.
Try:
Use Content Machine to make a 35-second TikTok-style explainer about Redis vs PostgreSQL for caching. Pick the best lane, run the default `generate-short` flow, and only call it ready if publish-prep passes.
If you already know the format:
Use the `reddit-post-over-gameplay` lane to turn this story into a vertical short. Keep gameplay full-screen, show a Reddit opener card for 3-5 seconds, add bold captions, and run the review gate.
- `skills/` or `.content-machine/skills/` defines when to use something, what input it needs, and how to do it well.
- `flows/` or `.content-machine/flows/` defines orchestration: which skills run, in what order, and what marks success.
- `scripts/harness/` is the repo-checkout executable surface; installed projects use `npx --no-install cm-agent <tool>` instead.
Start with `skills/` when you want one capability. Start with `flows/` when you want a multi-step path.
The skills are normal repo files, not magic commands. A coding agent uses
them the same way it uses AGENTS.md, CLAUDE.md, project docs, and
local scripts:
- The root instruction block points the agent at `.content-machine/`.
- The agent reads `.content-machine/AGENTS.md` for operating rules.
- The agent chooses a skill for one capability or a flow for a coordinated run.
- The agent opens the relevant `SKILL.md` or flow notes before acting.
- The agent calls `npx --no-install cm-agent <tool>` only when it needs deterministic runtime execution.
- The agent writes inspectable artifacts under `runs/<run-id>/` and reports review status.
That separation matters: skills contain judgment, flows contain
orchestration, and cm-agent tools execute the parts that should be
deterministic.
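As a concrete picture of that last step, here is a minimal sketch of a run directory you (or the agent) can audit after a flow. The `render/video.mp4` and `publish-prep/` paths come from this guide; `report.json` and the exact layout are illustrative assumptions:

```shell
# Sketch only: build a mock run directory shaped like the artifacts
# this guide describes, then list it the way a reviewer would.
run_id="demo-run"
mkdir -p "runs/$run_id/render" "runs/$run_id/publish-prep"
: > "runs/$run_id/render/video.mp4"
printf '{"ready": true}\n' > "runs/$run_id/publish-prep/report.json"
find "runs/$run_id" -type f | sort
```

The point is that every step leaves a file you can open, so review never depends on trusting the agent's summary.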
For an existing agent project:
```shell
npm install --save-dev @45ck/content-machine
npx cm-install --target .content-machine --write-instructions
npx --no-install cm-agent list
```

Use `--instruction-file CLAUDE.md` with `cm-install` when Claude Code should load the root instruction block from `CLAUDE.md`.
For a Content Machine checkout:
```shell
npm install
```

Node.js 20.6+ is required.
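A quick preflight sketch for that version floor; it assumes GNU `sort -V` is available and is only one way to do the comparison:

```shell
# Compare the running Node.js version against the 20.6 floor.
# sort -V puts the smaller version first, so if the required version
# sorts first, the current version meets the floor.
required="20.6.0"
current="$(node -p 'process.versions.node' 2>/dev/null || echo 0.0.0)"
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "node ok: $current"
else
  echo "node too old: $current (need >= $required)"
fi
```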
The expected operator model is an existing coding-agent harness such as Claude Code, Codex CLI, Cursor, or similar. Ask the harness for the outcome and let it read the skills/flows and call the JSON-stdio runtime when needed; the command examples are for automation and debugging.
Installed project:
```shell
cat <<'JSON' | npx --no-install cm-agent skill-catalog
{
  "skillsDir": ".content-machine/skills",
  "includeExamples": true
}
JSON
```

```shell
cat <<'JSON' | npx --no-install cm-agent flow-catalog
{
  "flowsDir": ".content-machine/flows"
}
JSON
```

Then run diagnostics before generation:
```shell
cat <<'JSON' | npx --no-install cm-agent doctor-report
{
  "strict": false
}
JSON
```

Repo checkout:
```shell
cat <<'JSON' | node --import tsx scripts/harness/skill-catalog.ts
{}
JSON
```

Or:

```shell
printf '{}\n' | npm run agent:skill-catalog
```

```shell
cat <<'JSON' | node --import tsx scripts/harness/flow-catalog.ts
{}
JSON
```

Or:

```shell
printf '{}\n' | npm run agent:flow-catalog
```

Then run diagnostics:

```shell
printf '{"strict":false}\n' | npm run agent:doctor-report
```

Golden first-run order:
1. Confirm Node.js 20.6+.
2. Install or use the repo checkout.
3. List skills and flows.
4. Run `doctor-report`.
5. Choose the archetype.
6. Run `generate-short`.
7. Inspect `publish-prep` before calling the MP4 ready.
For longform source videos, replace step 6 with `longform-to-shorts`: select candidate clips first, get approval, then run `longform-clip-extract`, reframe if needed, and render only the approved ranges.
Before generating, pick the lane. This prevents generic "topic to video" runs from turning into weak stock montages or the wrong Reddit/gameplay layout.
- Use Archetypes for the status table and routing rules.
- Use `reddit-post-over-gameplay` as the default Reddit/story mode.
- Use `reddit-story-split-screen` only when top story footage plus bottom gameplay is explicitly wanted.
- Use Quality And Review before promoting a render as ready.
If you only want to prove the artifact chain without API keys, use the no-key smoke path in `content-machine-self-demo`. It creates mock audio, mock visuals, caption sidecars, render metadata, and a placeholder MP4. It is not a publishable demo.
`generate-short` is the default topic-to-video path. Use this after provider credentials are configured, for example `OPENAI_API_KEY` plus a visual provider key such as `PEXELS_API_KEY` when using Pexels visuals:
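A minimal preflight sketch for that credential check; the variable list below is just the two keys named in this guide, so adjust it to match your providers:

```shell
# Report which provider keys are exported before starting a paid run.
for v in OPENAI_API_KEY PEXELS_API_KEY; do
  if [ -n "$(printenv "$v")" ]; then
    echo "set: $v"
  else
    echo "missing: $v"
  fi
done
```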
Use the command form when scripting automation. Inside Claude Code, Codex CLI, Cursor, or another agent harness, the normal interaction is to ask for the outcome and let the agent choose the skill or flow.
Installed project:
```shell
cat <<'JSON' | npx --no-install cm-agent run-flow
{
  "flowsDir": ".content-machine/flows",
  "flow": "generate-short",
  "runId": "demo-run",
  "input": {
    "topic": "Redis vs PostgreSQL for caching",
    "audio": { "voice": "af_heart" },
    "visuals": { "provider": "pexels", "orientation": "portrait" },
    "render": { "fps": 30, "downloadAssets": true },
    "publishPrep": { "enabled": true, "platform": "tiktok" }
  }
}
JSON
```

Repo checkout:
```shell
cat <<'JSON' | node --import tsx scripts/harness/run-flow.ts
{
  "flow": "generate-short",
  "runId": "demo-run",
  "input": {
    "topic": "Redis vs PostgreSQL for caching",
    "audio": { "voice": "af_heart" },
    "visuals": { "provider": "pexels", "orientation": "portrait" },
    "render": { "fps": 30, "downloadAssets": true },
    "publishPrep": { "enabled": true, "platform": "tiktok" }
  }
}
JSON
```

This writes run-scoped files under `runs/demo-run/` by default.
By default the review gate is fail-closed: if publish-prep says the short is not ready, `generate-short` exits non-zero instead of quietly handing back junk. The run also writes `runs/demo-run/provenance/asset-ledger.json` and passes it into publish-prep, so stock footage, user media, gameplay, or external audio must have rights evidence before the run is considered publish-ready.
If you prefer npm aliases, the same runner is available as `npm run agent:run-flow`.
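To see what "rights evidence" means in practice, here is a sketch that flags ledger entries without a license. The `assets`/`path`/`license` field names are illustrative assumptions, not the documented ledger schema, and the sketch runs against a mock file, so check the real `asset-ledger.json` shape before reusing it:

```shell
# Build a mock ledger, then flag any asset lacking rights evidence.
ledger="$(mktemp)"
cat > "$ledger" <<'JSON'
{"assets": [
  {"path": "stock/clip1.mp4", "source": "pexels", "license": "Pexels License"},
  {"path": "audio/track.mp3", "source": "external", "license": null}
]}
JSON
python3 - "$ledger" <<'PY'
import json, sys

ledger = json.load(open(sys.argv[1]))
missing = [a["path"] for a in ledger["assets"] if not a.get("license")]
print("missing rights evidence:", missing or "none")
PY
```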
Use this when the input is a podcast, interview, talk, screen recording, or other long source file. The flow writes candidate and handoff artifacts; it does not pretend approved JSON is already a final MP4.
Installed project:
```shell
cat <<'JSON' | npx --no-install cm-agent run-flow
{
  "flowsDir": ".content-machine/flows",
  "flow": "longform-to-shorts",
  "runId": "source-clips",
  "input": {
    "timestampsPath": "input/source/timestamps.json",
    "sourceMediaPath": "input/source/source.mp4",
    "maxCandidates": 3
  }
}
JSON
```

Repo checkout:
```shell
cat <<'JSON' | node --import tsx scripts/harness/run-flow.ts
{
  "flow": "longform-to-shorts",
  "runId": "source-clips",
  "input": {
    "timestampsPath": "input/source/timestamps.json",
    "sourceMediaPath": "input/source/source.mp4",
    "maxCandidates": 3
  }
}
JSON
```

Review `runs/source-clips/longform-to-shorts/handoff/render-handoff.v1.json`, approve a candidate, then run `longform-clip-extract` to create clip-local render inputs before calling `video-render`.
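Before approving, you may want to skim candidate durations. The sketch below assumes a `candidates` array with `start`/`end` seconds, which is an illustrative guess at the handoff shape demonstrated against a mock file; confirm the real schema in `render-handoff.v1.json` first:

```shell
# Mock handoff file with assumed fields, then print each candidate's length.
handoff="$(mktemp)"
cat > "$handoff" <<'JSON'
{"candidates": [
  {"id": "clip-1", "start": 12.5, "end": 41.0},
  {"id": "clip-2", "start": 95.0, "end": 128.5}
]}
JSON
python3 - "$handoff" <<'PY'
import json, sys

for c in json.load(open(sys.argv[1]))["candidates"]:
    print(f'{c["id"]}: {c["end"] - c["start"]:.1f}s')
PY
```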
Installed command form:
```shell
cat <<'JSON' | npx --no-install cm-agent longform-clip-extract
{
  "sourceMediaPath": "input/source/source.mp4",
  "approvalPath": "runs/source-clips/longform-to-shorts/highlights/highlight-approval.v1.json",
  "boundarySnapPath": "runs/source-clips/longform-to-shorts/highlights/boundary-snap.v1.json",
  "timestampsPath": "input/source/timestamps.json",
  "outputDir": "runs/source-clips/extracted"
}
JSON
```

Installed project:
```shell
cat .content-machine/skills/brief-to-script/examples/request.json | \
  npx --no-install cm-agent brief-to-script
```

```shell
cat .content-machine/skills/reverse-engineer-winner/examples/request.json | \
  npx --no-install cm-agent reverse-engineer-winner
```

```shell
cat <<'JSON' | npx --no-install cm-agent publish-prep
{
  "videoPath": "runs/demo-run/render/video.mp4",
  "scriptPath": "runs/demo-run/script/script.json",
  "assetLedgerPath": "runs/demo-run/provenance/asset-ledger.json",
  "outputDir": "runs/demo-run/publish-prep",
  "platform": "tiktok",
  "validate": { "cadence": true, "audioSignal": true }
}
JSON
```

Repo checkout:
Generate only a script:

```shell
cat skills/brief-to-script/examples/request.json | \
  node --import tsx scripts/harness/brief-to-script.ts
```

Reverse-engineer a reference short from a local file or supported URL. URL inputs use yt-dlp before analysis; only use URLs you own, have permission to analyze, or can otherwise use under the source terms:

```shell
cat skills/reverse-engineer-winner/examples/request.json | \
  node --import tsx scripts/harness/reverse-engineer-winner.ts
```

Run diagnostics, including ffmpeg, ffprobe, and yt-dlp checks:

```shell
cat skills/doctor-report/examples/request.json | \
  node --import tsx scripts/harness/doctor-report.ts
```

Run the review gate directly when you want a hard ready-to-post verdict:
```shell
cat <<'JSON' | node --import tsx scripts/harness/publish-prep.ts
{
  "videoPath": "runs/demo-run/render/video.mp4",
  "scriptPath": "runs/demo-run/script/script.json",
  "assetLedgerPath": "runs/demo-run/provenance/asset-ledger.json",
  "outputDir": "runs/demo-run/publish-prep",
  "platform": "tiktok",
  "validate": { "cadence": true, "audioSignal": true }
}
JSON
```

If you are already working from this repo, skip this during first-run generation. Use it only when you want these skills inside a separate coding-agent project:
```shell
npm install --save-dev @45ck/content-machine
npx cm-install --target .content-machine --write-instructions
```

That creates `.content-machine/skills/` and `.content-machine/flows/` with `SKILL.md` files already pointed at the installed package runner. It also writes `.content-machine/README.md` and `.content-machine/AGENTS.md`, plus a managed root instruction block when `--write-instructions` is used. These tell the harness how to discover skills, pass `flowsDir`, run diagnostics, and validate outputs.
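A small sanity-check sketch for the paths this section says the installer creates:

```shell
# Report whether each expected install artifact exists.
check() { [ -e "$1" ] && echo "ok: $1" || echo "missing: $1"; }

check .content-machine/skills
check .content-machine/flows
check .content-machine/AGENTS.md
check .content-machine/README.md
```

If anything prints `missing`, re-run `cm-install` before pointing an agent at the project.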
- Archetype guide: `ARCHETYPES.md`
- Review guide: `QUALITY-AND-REVIEW.md`
- Skill guide: `../../skills/README.md`
- Flow guide: `../../flows/README.md`
- Optional repo-side runners: `../../scripts/harness/README.md`
- Repo direction: `../../DIRECTION.md`
The `cm` CLI still exists, but it is now a thin compatibility shell rather than the primary interface. If you need it, use:

```shell
npm run cm -- --help
```

- Legacy CLI Archive