
AI Authoring Workflow

A shared workflow for letting Claude Code, Codex, and similar assistants participate safely in Anydocs authoring.

In Anydocs, the default AI workflow is: install and register the MCP server, read project constraints, then write through MCP with explicit page, navigation, and status boundaries.

Steps

  1. Install and register the MCP server
  2. Initialize the project and prepare the agent entry file
  3. Read the project contract first
  4. Establish page and navigation context
  5. Write through page and navigation tools
  6. Keep publication as a separate action
  7. Verify both human and AI delivery outputs

Install and register the MCP server

Before letting an AI assistant participate in authoring, register the Anydocs MCP server in your runtime environment. Refer to the Installation page for quick-install commands. Common registration commands:

codex mcp add anydocs -- npx -y @anydocs/mcp
claude mcp add -s user anydocs -- npx -y @anydocs/mcp

Initialize the project and prepare the agent entry file

Create the project with `init`; if you want a minimal guide file for a specific agent, include `--agent` during initialization.
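As a sketch, initialization might look like the following; the binary name `anydocs` and the agent identifier `claude` are illustrative assumptions, not confirmed commands, so check the Installation page for the exact invocation:

```
# Hypothetical invocation; binary name and agent value may differ.
anydocs init --agent claude
```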

Read the project contract first

Have the agent call `project_open` before writing so it can inspect allowed blocks, templates, languages, resources, and authoring guidance.

project_open({ projectRoot: "/path/to/project" })

Establish page and navigation context

Continue with `page_list`, `page_get`, `page_find`, and `nav_get` so the agent understands the current content and navigation structure before deciding what to change.
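A minimal exploration sequence could look like the sketch below. It assumes these tools accept the same `projectRoot`, `lang`, and `pageId` parameters shown in the `project_open` and `page_set_status` examples on this page; the `pageId` value is illustrative:

```
page_list({ projectRoot: "/path/to/project", lang: "en" })
page_get({ projectRoot: "/path/to/project", lang: "en", pageId: "guide" })
nav_get({ projectRoot: "/path/to/project", lang: "en" })
```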

Write through page and navigation tools

Create or update content through template tools, page tools, and navigation tools instead of editing `pages/<lang>/*.json` or `navigation/*.json` directly.
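For example, an update call might look like the sketch below. The `projectRoot`, `lang`, and `pageId` parameters follow the examples elsewhere on this page; the `blocks` field is an assumption, since the exact content payload `page_update` accepts depends on the blocks and templates reported by `project_open`:

```
page_update({
  projectRoot: "/path/to/project",
  lang: "en",
  pageId: "guide",
  blocks: [ /* block objects allowed by the project contract */ ]
})
```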

Keep publication as a separate action

After the content is reviewed, call `page_set_status` to move the page to `published`; do not change status through `page_update`.

page_set_status({ projectRoot: "/path/to/project", lang: "en", pageId: "guide", status: "published" })

Verify both human and AI delivery outputs

Finish with `preview` or `build` and confirm the reader pages, search indexes, `llms.txt`, `llms-full.txt`, and `mcp/*.json` all reflect the current published content.
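Assuming `build` and `preview` take the same `projectRoot` parameter as the other tools (a sketch, not a confirmed signature), the verification step could be:

```
build({ projectRoot: "/path/to/project" })
```

After the build completes, inspect both the rendered reader pages and the AI delivery files (`llms.txt`, `llms-full.txt`, `mcp/*.json`) rather than only one of the two outputs.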

Boundary: Content edits and publication status changes should remain separate actions.
Division of responsibility: AI is best for drafting, rewriting, metadata completion, and navigation cleanup; humans should handle fact checking, brand voice, and final approval.