
Port workflow and docs automation stack#517

Merged
koxudaxi merged 37 commits into main from feature/port-actions-docs-stack
Apr 21, 2026
Conversation

@koxudaxi
Owner

@koxudaxi koxudaxi commented Apr 17, 2026

Summary by CodeRabbit

  • New Features

    • Integrated performance benchmarking into the project to measure generation speed.
  • Documentation

    • Large docs overhaul: new site pages, streamlined README, updated CLI help snapshots, supported-formats reference, and llms exports.
  • Infrastructure

    • Many new/updated CI workflows for testing, docs/schema generation, security scanning, changelog/release automation, and publishing.
  • Tests

    • Added tests for README/CLI snapshots, schema/docs generation, prompt-data, config validation, and performance benchmarks.
  • Chores

    • Added issue templates, funding configuration, and various build/tooling/config updates.

@coderabbitai

coderabbitai Bot commented Apr 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a Pydantic GenerateConfig model and generated TypedDict, multiple docs/site generators and prompt-data tooling, many GitHub Actions workflows, CLI help snapshot tooling, new tests and tox environments, repository templates (FUNDING, issue templates), and assorted docs/site configuration changes. No public runtime API signatures removed.

Changes

Cohort / File(s) Summary
Workflows
.github/workflows/changelog.yml, .github/workflows/cli-docs.yml, .github/workflows/codeql.yml, .github/workflows/codespell.yml, .github/workflows/codspeed.yml, .github/workflows/config-types.yml, .github/workflows/docs.yml, .github/workflows/llms-txt.yml, .github/workflows/publish.yml, .github/workflows/readme.yml, .github/workflows/release-draft.yml, .github/workflows/release-notify.yml, .github/workflows/schema-docs.yml, .github/workflows/test.yml
Adds many CI workflows and updates existing ones (docs deployment, release drafting/notify, CodeQL, codespell, benchmarks, schema/docs/readme generators, publish artifact flow). Pay attention to token/secret usage, conditional preflight logic, and cross-repo/PR branch push semantics.
Docs & Site Config / Static Files
zensical.toml, mkdocs.yml (removed), docs/index.md, docs/cli-reference.md, docs/development-contributing.md, docs/llms.txt, docs/llms-full.txt, docs/supported_formats.md, README.md, .gitignore
Replaces mkdocs with zensical site config, adds/updates docs pages and README snapshots; docs/index.md added and un-ignored. Verify snapshot injection points, links, and consistency with generated artifacts.
Doc Generation & Utility Scripts
scripts/build_cli_docs.py, scripts/build_llms_txt.py, scripts/build_prompt_data.py, scripts/build_schema_docs.py, scripts/build_config_types.py, scripts/update_command_help_on_markdown.py, scripts/update_docs_version.py, scripts/build_site.sh (removed)
Adds generators for CLI docs, llms text, prompt payload, schema docs, and TypedDict output; improves help-text normalization and adds check modes. Removed legacy site build script. Check encoding/normalization, CLI check-mode behavior, and file-commit logic used by CI.
Configuration Model & Generated Types
fastapi_code_generator/config.py, fastapi_code_generator/_types/generate_config_dict.py, fastapi_code_generator/_types/__init__.py, fastapi_code_generator/prompt_data.py, fastapi_code_generator/__main__.py
Introduces GenerateConfig (Pydantic) and tooling to emit GenerateConfigDict TypedDict; refactors __main__.main() to use GenerateConfig; adds generated prompt_data.py. Review model schema, CLI metadata extraction, and Path/string normalization.
Tests
tests/test_config.py, tests/test_prompt_data.py, tests/test_readme.py, tests/test_schema_docs.py, tests/main/test_main.py, tests/main/test_performance.py
Adds tests covering config model, prompt-data payload, README CLI snapshot, schema-docs sync, JSON input handling, and a performance benchmark. Review fixture setup, golden-file comparisons, and benchmark pre/post teardown.
Project Metadata & Tooling
pyproject.toml, tox.ini, README.md
Updates project metadata (Documentation URL), dependency groups and pytest markers, coverage omit rules; adds tox envs for readme/llms/schema/config/perf and adjusts docs build commands. Ensure tox environments align with CI workflows and added scripts.
Repository Config & Templates
.github/FUNDING.yml, .github/ISSUE_TEMPLATE/bug_report.md, .github/ISSUE_TEMPLATE/feature_request.md, .github/dependabot.yml
Adds funding config and issue templates; updates Dependabot configuration (rebase strategy, allowed dependency-types). Validate template frontmatter, labels, and Dependabot allow rules.
Removed / Deleted
mkdocs.yml, scripts/build_site.sh
Removed legacy mkdocs config and site-build shell script; replaced by zensical and Python-based generators. Confirm no remaining references to deleted scripts/config.

Sequence Diagram(s)

(Skipped)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

  • Add CLI E2E coverage pipeline #516: Overlapping CI and docs changes—touches CLI docs tooling, scripts/build_cli_docs.py, docs/cli-reference.md, tox.ini, and related workflows.

Poem

🐰 I hopped from mkdocs to zensical bright,
Types and scripts stitched configs tight,
Help text snapped and docs aligned,
Workflows hum as releases unwind,
A carrot for tests — all builds take flight 🥕

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 warning

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 34.72%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title clearly and concisely summarizes the primary change: porting the workflow and documentation automation infrastructure to a new stack.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@github-advanced-security

You are seeing this message because GitHub Code Scanning has recently been set up for this repository, or this pull request contains the workflow file for the Code Scanning tool.

What Enabling Code Scanning Means:

  • The 'Security' tab will display more code scanning analysis results (e.g., for the default branch).
  • Depending on your configuration and choice of analysis tool, future pull requests will be annotated with code scanning analysis results.
  • You will be able to see the analysis results for the pull request's branch on this overview once the scans have completed and the checks have passed.

For more information about GitHub Code Scanning, check out the documentation.

Comment thread fastapi_code_generator/__main__.py Fixed
Comment thread fastapi_code_generator/config.py Fixed

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 18

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
scripts/build_cli_docs.py (1)

42-43: ⚠️ Potential issue | 🟡 Minor

Surface a helpful error when the CLI-doc collection is missing.

The pipeline failure (FileNotFoundError: tests/cli_doc/.cli_doc_collection.json) surfaces as a raw traceback because load_cli_doc_collection unconditionally reads COLLECTION_PATH. This file is produced by the pytest cli_doc plugin run, so any invocation of scripts/build_cli_docs.py (or scripts/build_prompt_data.py, which calls the same loader) without a prior collection run fails opaquely.

Consider guarding with a clear message pointing at the required prerequisite step (e.g., the tox -e cli-docs / cli_doc pytest run), or have the tox env ensure the collection is produced first.

♻️ Suggested guard
 def load_cli_doc_collection() -> dict[str, object]:
+    if not COLLECTION_PATH.exists():
+        sys.exit(
+            f"Error: {COLLECTION_PATH} not found. "
+            "Run the cli_doc pytest collection first (e.g. `tox run -e cli-docs`)."
+        )
     return json.loads(COLLECTION_PATH.read_text(encoding="utf-8"))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/build_cli_docs.py` around lines 42 - 43, The loader
load_cli_doc_collection currently unconditionally reads COLLECTION_PATH and
raises a raw FileNotFoundError; wrap the read in a try/except that catches
FileNotFoundError (and optionally json.JSONDecodeError) and raise or log a clear
error message instructing the user to generate the collection first (e.g., run
the pytest cli_doc collection step such as "tox -e cli-docs" or the cli_doc
pytest run) while referencing COLLECTION_PATH and the load_cli_doc_collection
function so the failure points to the missing prerequisite.
🧹 Nitpick comments (31)
scripts/update_docs_version.py (1)

52-60: Minor: --check failure path is fine; consider surfacing the count.

Works as-is. Optional improvement: print a summary (N files out of date, run scripts/update_docs_version.py) for friendlier CI output, matching the pattern in scripts/build_cli_docs.py.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/update_docs_version.py` around lines 52 - 60, Update the --check
failure path to also print a short summary count and hint like in
build_cli_docs.py: after collecting updated_files (variable updated_files) and
inside the args.check and updated_files conditional, compute the number of files
(len(updated_files)) and print a single summary line to stderr such as "N files
out of date, run scripts/update_docs_version.py to update" in addition to the
existing per-file prints so CI shows a concise summary; keep all changes near
the existing target_files/README_FILE/updated_files logic and do not alter the
return values.
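A minimal sketch of the suggested summary output, assuming the reviewer's proposed behavior; the variable name updated_files comes from the review comment, while the function names and exact message wording here are hypothetical, not the script's actual code:

```python
import sys


def check_summary(updated_files: list[str]) -> str:
    """Build the one-line summary the review suggests adding for CI output."""
    return (
        f"{len(updated_files)} file(s) out of date; "
        "run scripts/update_docs_version.py to update"
    )


def report(updated_files: list[str]) -> None:
    # Keep the existing per-file lines, then append the concise summary,
    # both on stderr so CI logs surface them next to the failure.
    for path in updated_files:
        print(f"out of date: {path}", file=sys.stderr)
    print(check_summary(updated_files), file=sys.stderr)
```

This mirrors the pattern the review attributes to scripts/build_cli_docs.py: per-file detail plus a single actionable summary line.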
pyproject.toml (2)

73-87: Deduplicate pytest-benchmark / pytest-codspeed between test and benchmark groups.

Both groups now pin identical versions of the same two packages. That works but drift is easy — next version bump will forget one side. Prefer composing test from benchmark (or keeping benchmark deps only in the benchmark group if they aren't needed for the base test matrix).

♻️ Proposed fix (composition)
 test = [
+  { include-group = "benchmark" },
   "inline-snapshot>=0.31",
   "PyYAML>=6",
   "pytest>=8.3",
-  "pytest-benchmark>=5",
   "pytest-cov>=5",
-  "pytest-codspeed>=2.2",
   "pytest-mock>=3.14",
   "pytest-xdist>=3.6",
   "time-machine>=2.15",
 ]
 benchmark = [
   "pytest-benchmark>=5",
   "pytest-codspeed>=2.2",
 ]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyproject.toml` around lines 73 - 87, The test extras duplicate
pytest-benchmark and pytest-codspeed already declared in the benchmark extras;
update pyproject.toml so the test extra composes from benchmark (or remove those
two packages from test and keep them only in benchmark) to avoid drift — modify
the "test" and "benchmark" extras blocks (referencing the "test" and "benchmark"
extras and the package names "pytest-benchmark" and "pytest-codspeed") so the
benchmark list is the single source of truth and test reuses it instead of
listing the two packages twice.

75-75: Nit: normalize PyYAML → pyyaml for consistency with surrounding entries.

PEP 503 canonicalization makes them equivalent, but every other requirement in this file uses lowercase.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyproject.toml` at line 75, The dependency entry "PyYAML>=6" should be
normalized to lowercase to match surrounding entries; replace the literal
"PyYAML>=6" with "pyyaml>=6" in the pyproject.toml dependency list so the
requirement is consistent with PEP 503 canonicalization and the file's style.
.github/workflows/changelog.yml (1)

54-60: Pass the tag through env rather than direct ${{ }} interpolation in shell.

Interpolating ${{ github.event.release.tag_name }} directly into the run: block lets a crafted tag name inject shell tokens into the git commit -m command. Tags are maintainer-controlled so risk is low, but the standard hardening is to move it into env: and reference as "$TAG".

♻️ Proposed fix
       - name: Commit and push
+        env:
+          TAG: ${{ github.event.release.tag_name }}
         run: |
           git config user.name "github-actions[bot]"
           git config user.email "github-actions[bot]@users.noreply.github.com"
           git add CHANGELOG.md
-          git diff --cached --quiet || git commit -m "docs: update CHANGELOG.md for ${{ github.event.release.tag_name }}"
+          git diff --cached --quiet || git commit -m "docs: update CHANGELOG.md for $TAG"
           git push
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/changelog.yml around lines 54 - 60, The commit step
"Commit and push" currently interpolates the release tag directly into the run
block which can inject shell tokens; instead add an env entry (e.g., TAG) for
github.event.release.tag_name on the job/step and reference it in the run script
(use "$TAG" with proper double quotes) when constructing the git commit message
and any other uses; update the step to set env: TAG: ${{
github.event.release.tag_name }} and change the git commit -m to use "$TAG" to
avoid direct `${{ }}` interpolation in the shell.
.github/workflows/release-draft.yml (2)

27-31: Prefer env var over direct secret interpolation in run: script.

if [ -n "${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}" ] inlines the secret value into the shell command. GitHub will mask it in logs, but the hardened pattern is to expose presence via an env var (or just a boolean) and compare that. Same note applies for the secrets.CLAUDE_CODE_OAUTH_TOKEN that’s also piped into anthropics/claude-code-action — which is already correct via with:.

♻️ Proposed fix
       - id: check
+        env:
+          CLAUDE_OAUTH_PRESENT: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN != '' }}
         run: |
-          if [ -n "${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}" ]; then
+          if [ "$CLAUDE_OAUTH_PRESENT" = "true" ]; then
             echo "enabled=true" >> "$GITHUB_OUTPUT"
           else
             echo "enabled=false" >> "$GITHUB_OUTPUT"
           fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/release-draft.yml around lines 27 - 31, Replace the direct
secrets interpolation in the shell if test with an environment variable check:
expose the secret into the job/step env (e.g., CLAUDE_CODE_OAUTH_TOKEN=${{
secrets.CLAUDE_CODE_OAUTH_TOKEN }}) and in the script use if [ -n
"$CLAUDE_CODE_OAUTH_TOKEN" ] to write "enabled=true"/"enabled=false" to
GITHUB_OUTPUT; this avoids inlining the secret value in the run command while
keeping the existing usage of secrets.CLAUDE_CODE_OAUTH_TOKEN in the
anthropics/claude-code-action `with:` block unchanged.

185-205: Breaking changes always bump minor — no major-bump path.

HAS_BC == true bumps MINOR, never MAJOR. That is consistent with a pre-1.0 "0.x means anything can break" convention, and fits a project currently at 0.x. Once the project crosses 1.0, this logic will understate breaking changes (e.g., 1.5.0 + BC → 1.6.0 instead of 2.0.0). Consider gating on MAJOR or making the bump level explicit in Claude's structured output when you graduate past 0.x.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/release-draft.yml around lines 185 - 205, The release
logic treats HAS_BC=true as always incrementing MINOR (NEW_MINOR=$((MINOR + 1)))
and never MAJOR; update the branch that handles breaking changes (the block
using HAS_BC, MAJOR, MINOR, NEXT_VERSION, DRAFT_TAG_RAW) so that when MAJOR is
>= 1 a breaking change increments MAJOR (e.g., NEW_MAJOR=$((MAJOR + 1)) and
NEXT_VERSION="${NEW_MAJOR}.0.0"), while preserving the current MINOR-bump
behavior when MAJOR==0; alternatively make the bump level explicit by reading a
bump type from the structured release output (instead of implicitly assuming
MINOR) and apply MAJOR|MINOR|PATCH accordingly to compute NEXT_VERSION.
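The gated bump logic described above can be sketched as a small shell helper; the variable names (HAS_BC, MAJOR, MINOR) are taken from the review's quotes of the workflow, but this function itself is illustrative, not the workflow's actual script:

```shell
# Hypothetical version-bump helper mirroring the review's suggestion:
# post-1.0, a breaking change bumps MAJOR; on 0.x it keeps bumping MINOR.
next_version() {
  local major="$1" minor="$2" patch="$3" has_bc="$4"
  if [ "$has_bc" = "true" ] && [ "$major" -ge 1 ]; then
    echo "$((major + 1)).0.0"          # post-1.0: breaking change bumps MAJOR
  elif [ "$has_bc" = "true" ]; then
    echo "$major.$((minor + 1)).0"     # 0.x convention: breaking change bumps MINOR
  else
    echo "$major.$minor.$((patch + 1))"
  fi
}

next_version 1 5 0 true    # -> 2.0.0
next_version 0 48 2 true   # -> 0.49.0
next_version 1 5 0 false   # -> 1.5.1
```

Reading an explicit bump level from the model's structured output, as the review also suggests, would remove the implicit-MINOR assumption entirely.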
.github/workflows/publish.yml (1)

19-24: Consider @v8 instead of pinning to v8.0.0 for automatic patch updates.

astral-sh/setup-uv@v8.0.0 is the current latest release. If your project standardizes on major-version tag floats elsewhere (letting Dependabot manage major bumps), align with that pattern by using @v8 instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/publish.yml around lines 19 - 24, Replace the pinned
action version astral-sh/setup-uv@v8.0.0 with the floating major tag
astral-sh/setup-uv@v8 so the workflow receives automatic patch updates; update
the uses entry that currently reads "astral-sh/setup-uv@v8.0.0" (and leave the
subsequent "uv build ..." step unchanged) to use "@v8" instead.
.github/workflows/codespell.yml (1)

27-27: Minor: inconsistent path prefixes in skip.

CODE_OF_CONDUCT.md is listed without a ./ prefix while the other three entries use ./docs/.... The action tolerates both, but unifying the style (either all or none with ./) avoids surprises if the matcher is ever tightened.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/codespell.yml at line 27, In the codespell workflow `skip`
list there's an inconsistent path prefix: "CODE_OF_CONDUCT.md" lacks the "./"
used by the other entries; update the `skip` value to use consistent prefixes
(e.g., change "CODE_OF_CONDUCT.md" to "./CODE_OF_CONDUCT.md") so all entries
follow the same style and avoid potential matcher issues; the relevant symbol to
edit is the skip key and its list entries (CODE_OF_CONDUCT.md and ./docs/...).
scripts/build_cli_docs.py (1)

20-21: Duplicated _normalize_help_text across scripts.

The identical helper lives in scripts/update_command_help_on_markdown.py (per snippet). Extracting it (and the shared ANSI_ESCAPE_PATTERN / env setup in get_help_text) into a small shared module would prevent the two snapshot producers from drifting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/build_cli_docs.py` around lines 20 - 21, Duplicate helper logic
`_normalize_help_text` (and related `ANSI_ESCAPE_PATTERN` / env setup used by
`get_help_text`) should be extracted into a small shared module: create a new
module that exports `_normalize_help_text`, `ANSI_ESCAPE_PATTERN`, and any
environment/setup helper used by `get_help_text`, update both `get_help_text`
callers to import these symbols instead of redefining them, and remove the
duplicate definitions from each script so both `get_help_text` implementations
reference the shared utilities.
.github/workflows/readme.yml (1)

14-33: Duplicated paths lists across three triggers.

The three on: entries repeat an identical 7-item paths list. Extract to a YAML anchor or keep a single list via &paths / *paths to reduce drift risk when the set changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/readme.yml around lines 14 - 33, Extract the duplicated
paths list into a single YAML anchor and reference it from each trigger: define
an anchor (e.g., &paths) containing the 7-item list and then replace the inline
lists under the pull_request and pull_request_target triggers with the alias
(*paths); update every trigger that currently contains the repeated "paths:"
block (including pull_request and pull_request_target in
.github/workflows/readme.yml) to reference the anchor so the list is maintained
in one place.
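The anchor approach could look like the fragment below. Note two caveats: YAML anchor support in Actions workflow files is a relatively recent addition on GitHub's side, so verify the runner accepts it before adopting, and the path entries shown here are placeholders rather than the workflow's real seven-item list:

```yaml
on:
  push:
    branches: [main]
    paths: &cli-docs-paths          # single source of truth for the list
      - "fastapi_code_generator/**"
      - "scripts/build_cli_docs.py"
      - "docs/cli-reference.md"
  pull_request:
    paths: *cli-docs-paths          # alias reuses the anchored list
  workflow_dispatch:
```

If anchors turn out to be unsupported in the target environment, a comment cross-referencing the duplicated lists is the low-tech fallback.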
.github/workflows/test.yml (1)

235-244: coverage needs four jobs that don't emit coverage artifacts.

readme, llms-txt, schema-docs, and config-types are listed in needs for the coverage job but none of them upload a coverage-* artifact (only test and cli-docs do). They act as gatekeepers rather than coverage contributors, which makes the dependency graph misleading and blocks coverage publication even when doc generation is flaky. Consider keeping coverage-contributing jobs in coverage.needs and moving doc-only jobs to check.needs (where they already are).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/test.yml around lines 235 - 244, The coverage job
currently lists readme, llms-txt, schema-docs, and config-types in its needs but
those jobs do not upload coverage artifacts and thus block coverage publication;
update the coverage job's needs to include only actual coverage-contributing
jobs (e.g., test and cli-docs) and remove readme, llms-txt, schema-docs, and
config-types from coverage.needs, then add those doc-only jobs to check.needs
(or another appropriate non-coverage gate) so they remain required elsewhere but
no longer prevent the coverage job from running/publishing.
.github/workflows/cli-docs.yml (1)

80-89: git push on auto-generated commits can re-trigger this workflow.

If secrets.PAT is set, the bot's push to main triggers the same workflow again (on: push: branches: [main]). git diff --staged --quiet prevents an infinite commit loop for idempotent outputs, but each regeneration still spins a full CI run. Adding [skip ci] to the commit message (or using the default GITHUB_TOKEN which doesn't re-trigger) avoids the extra round-trip.

💡 Proposed fix
-          git diff --staged --quiet || git commit -m "docs: update CLI reference documentation
-
-          Generated by GitHub Actions"
+          git diff --staged --quiet || git commit -m "docs: update CLI reference documentation [skip ci]
+
+          Generated by GitHub Actions"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/cli-docs.yml around lines 80 - 89, The auto-generated
commit currently uses git commit -m "docs: update CLI reference
documentation\n\nGenerated by GitHub Actions" which can re-trigger the workflow
when pushing with a PAT; update the commit step in the "Commit and push if
changed" job to include a CI skip token in the message (e.g., append "[skip ci]"
or "[ci skip]") so the commit message prevents workflow re-run, or alternatively
ensure the push uses the default GITHUB_TOKEN instead of secrets.PAT; modify the
git commit invocation referenced in the step to include the chosen change.
README.md (1)

45-78: Snapshot block is pinned to 94 columns — document that for contributors.

The rendered table is hard-wrapped to the width get_help_text() uses (COLUMNS=94). Anyone regenerating the snapshot manually (outside the readme/cli-docs tox env) will silently drift the CI assertion. A short comment in this section referencing scripts/update_command_help_on_markdown.py / tox -e readme would save future churn.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 45 - 78, Add a short inline comment above the
snapshot block explaining that the command help output is pinned to 94 columns
by get_help_text() (COLUMNS=94) so contributors must regenerate via
scripts/update_command_help_on_markdown.py or by running tox -e readme; update
the README.md snapshot section surrounding the fenced code block to mention
these exact references (get_help_text(),
scripts/update_command_help_on_markdown.py, and tox -e readme) so CI snapshot
regeneration is explicit.
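The width-pinning mechanism can be sketched generically; the real project does this inside get_help_text(), whose implementation is not shown in this diff, so the function below is an assumption that only demonstrates the COLUMNS=94 technique the review describes:

```python
import os
import subprocess
import sys


def capture_help(argv: list[str], columns: int = 94) -> str:
    """Run a command with a fixed COLUMNS so help wrapping is deterministic."""
    # Pin the terminal width (and a dumb TERM) in the child environment so
    # the captured output does not depend on the invoking terminal's size.
    env = dict(os.environ, COLUMNS=str(columns), TERM="dumb")
    result = subprocess.run(argv, capture_output=True, text=True, env=env)
    return result.stdout


# Demo: the child process sees the pinned width regardless of the real terminal.
out = capture_help(
    [sys.executable, "-c", "import os; print(os.environ['COLUMNS'])"]
)
```

Any snapshot regenerated outside this pinned environment will wrap at a different width and fail the CI comparison, which is exactly the drift the review wants documented.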
.github/workflows/llms-txt.yml (1)

81-92: git add will fail if either artifact is missing; add [skip ci] to avoid re-triggering.

Two small hygiene improvements:

  1. If scripts/build_llms_txt.py only produces one of the two files (e.g. transitional state), git add docs/llms.txt docs/llms-full.txt fails the whole step and the preflight-gated fallback never runs. Consider git add -A docs/llms.txt docs/llms-full.txt 2>/dev/null || true or adding both outputs explicitly from the script side.
  2. Auto-commits on main with a PAT will re-trigger the workflow. Append [skip ci] to the commit message to break the loop.
💡 Proposed fix
-          git add docs/llms.txt docs/llms-full.txt
-          git diff --staged --quiet || git commit -m "docs: update llms.txt files
-
-          Generated by GitHub Actions"
+          git add -- docs/llms.txt docs/llms-full.txt 2>/dev/null || true
+          git diff --staged --quiet || git commit -m "docs: update llms.txt files [skip ci]
+
+          Generated by GitHub Actions"
           git push
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/llms-txt.yml around lines 81 - 92, The commit step
currently fails if one of the artifacts is missing and the commit message can
re-trigger CI; update the "Commit and push if changed" run block to (1) make the
git add resilient by ignoring missing files (e.g. run git add for docs/llms.txt
and docs/llms-full.txt but tolerate failure: use a shell-safe fallback such as
appending "|| true" or redirecting errors to /dev/null) so the step doesn't
abort when one file is absent, and (2) append "[skip ci]" to the auto-commit
message (the git commit -m value) so automated commits do not re-trigger the
workflow.
fastapi_code_generator/__main__.py (1)

92-134: Config object is built, then partially bypassed — pick one source of truth.

The GenerateConfig is constructed with normalized str/None values, but generate_code is called with a mix: config.encoding, config.output_dir, config.disable_timestamp, config.generate_routers, config.specify_tags come from config, while enum_field_as_literal, custom_visitors, output_model_type, and python_version still come directly from the CLI parameters. That works today because the CLI parameters retain their enum / Path types that the config normalizes away, but it makes the config a leaky abstraction: any future caller reading config.enum_field_as_literal / config.custom_visitors in isolation will get the normalized strings and have to re-convert them.

Either (a) convert at the call site consistently from config (so only the config is used), or (b) drop those fields from GenerateConfig so it's obvious they aren't part of the normalized surface.

💡 Option A — always read from config at the call site
-        enum_field_as_literal=enum_field_as_literal or None,
-        custom_visitors=custom_visitors,
+        enum_field_as_literal=(
+            LiteralType(config.enum_field_as_literal)
+            if config.enum_field_as_literal
+            else None
+        ),
+        custom_visitors=[Path(p) for p in config.custom_visitors] or None,
         disable_timestamp=config.disable_timestamp,
         generate_routers=config.generate_routers,
         specify_tags=config.specify_tags,
-        output_model_type=output_model_type,
-        python_version=python_version,
+        output_model_type=DataModelType(config.output_model_type),
+        python_version=PythonVersion(config.python_version),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@fastapi_code_generator/__main__.py` around lines 92 - 134, The GenerateConfig
is created but generate_code mixes raw CLI variables and config fields; update
the generate_code call to consistently use values from the config object (not
the original CLI variables). Specifically, replace uses of
enum_field_as_literal, custom_visitors, output_model_type, python_version,
template_dir and model_template_dir with config.enum_field_as_literal,
config.custom_visitors, config.output_model_type, config.python_version,
config.template_dir and config.model_template_dir (wrapping the latter two with
Path(...) if non-None as you already do elsewhere), and keep model_path derived
from config.model_file; this makes GenerateConfig the single source of truth for
the call to generate_code.
.github/workflows/docs.yml (2)

37-50: No cleanup when a PR closes — preview deployments and comments persist indefinitely.

There's no pull_request: closed handler to tear down the pr-<number> Cloudflare Pages preview branch or dismiss the comment. Minor hygiene issue but worth considering if you expect many PRs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/docs.yml around lines 37 - 50, Add a pull_request: closed
workflow path that runs a cleanup job paired with the existing "Deploy Preview"
flow: when a PR is closed, call Cloudflare Wrangler to remove the preview branch
(reverse the Deploy Preview command, e.g., use the Wrangler/pages delete command
targeting --project-name and --branch=pr-<number> or equivalent) and dismiss or
delete the GitHub comment created earlier (use a GitHub Actions step such as
github-script or a comment-delete action and reference the comment ID saved as
an output/annotation). Ensure the new job uses the same env vars
(CLOUDFLARE_API_TOKEN, CLOUDFLARE_ACCOUNT_ID, CLOUDFLARE_PAGES_PROJECT) and
references the PR number via github.event.pull_request.number so it targets
pr-<number> and ties cleanup to the "Deploy Preview" lifecycle.

64-88: listComments isn't paginated — long-running PRs will get duplicate preview comments.

github.rest.issues.listComments returns at most 30 comments per page by default. Once a PR accumulates more than ~30 comments, the existing "Docs Preview:" comment falls off the first page and the workflow creates a new duplicate comment on every subsequent run.

Use github.paginate() to fetch all comments before filtering:

Suggested fix
-            const comments = await github.rest.issues.listComments({
-              issue_number: context.issue.number,
-              owner: context.repo.owner,
-              repo: context.repo.repo,
-            });
-
-            const existingComment = comments.data.find((comment) =>
-              comment.body && comment.body.includes('Docs Preview:')
-            );
+            const comments = await github.paginate(github.rest.issues.listComments, {
+              issue_number: context.issue.number,
+              owner: context.repo.owner,
+              repo: context.repo.repo,
+              per_page: 100,
+            });
+
+            const existingComment = comments.find((comment) =>
+              comment.body && comment.body.includes('Docs Preview:')
+            );

Note: github.paginate() returns the aggregated array directly, not wrapped in .data.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/docs.yml around lines 64 - 88, The current call to
github.rest.issues.listComments is not paginated, causing duplicate "Docs
Preview:" comments on long-lived PRs; replace that call with github.paginate to
fetch all issue comments (use github.paginate(github.rest.issues.listComments, {
owner: context.repo.owner, repo: context.repo.repo, issue_number:
context.issue.number }) and note paginate returns an aggregated array, not a
{data} wrapper), then use that array to find existingComment (the variable used
in the find(...) call) and proceed with the existing updateComment/createComment
logic unchanged.
tests/test_readme.py (1)

33-46: The regex scans the entire README, including the fenced help block, so every CLI option in the snapshot is treated as if it were documented in the main text.

The re.findall(r"--[a-z0-9-]+", readme_text) captures all 11 options from the help snapshot (lines 45–78), not just --install-completion and --show-completion. If any of these options are not exposed by get_command(app).params, the assertion fails even when the README and CLI are aligned.

The cleaner fix is to exclude the help block before scanning, since _extract_help_snapshot() already isolates it:

💡 Better fix
 def test_readme_cli_options_exist() -> None:
     readme_text = README_PATH.read_text(encoding="utf-8")
-    referenced = set(re.findall(r"--[a-z0-9-]+", readme_text))
+    # Exclude the fenced help snapshot; it is verified separately
+    help_block_text = _extract_help_snapshot(readme_text)
+    readme_without_help = readme_text.replace(help_block_text, "")
+    referenced = set(re.findall(r"--[a-z0-9-]+", readme_without_help))

This avoids the need to whitelist individual options and aligns with how the help snapshot is already managed in the test suite.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_readme.py` around lines 33 - 46, The test
test_readme_cli_options_exist currently scans the whole README via
README_PATH.read_text which picks up the fenced help snapshot; instead, strip
out the help snapshot before running the regex by using the existing helper that
isolates it (call _extract_help_snapshot or equivalent used elsewhere in the
tests) to remove the fenced block from readme_text, then run re.findall on the
cleaned text; keep references to get_command(app), command.params, and the known
set unchanged so the comparison against documented options excludes options
present only in the help snapshot.
scripts/build_prompt_data.py (1)

69-84: Minor: --check returns 1 without explaining what drifted.

When the payload diverges from the on-disk file, users get a bare exit code. Consider printing a hint such as "Run scripts/build_prompt_data.py --output <path> to regenerate." to stderr, matching the UX in scripts/update_command_help_on_markdown.py.
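A minimal sketch of what the hinted `--check` path could look like (the function shape, names, and message wording here are illustrative, not the script's actual code):

```python
import sys


def check_drift(existing: str, payload: str, output_arg: str) -> int:
    """Return 0 when the on-disk file matches the generated payload.

    On drift, print an actionable hint to stderr before returning 1,
    mirroring the UX of scripts/update_command_help_on_markdown.py.
    """
    if existing != payload:
        print(
            f"Prompt data is out of date. Run scripts/build_prompt_data.py "
            f"--output {output_arg} to regenerate.",
            file=sys.stderr,
        )
        return 1
    return 0
```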

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/build_prompt_data.py` around lines 69 - 84, In update_prompt_data,
when running with check and the generated payload differs from the on-disk file
(the branch where check is True and existing != payload), write a concise hint
to stderr (e.g. "Run scripts/build_prompt_data.py --output <path> to
regenerate.") before returning exit code 1; also mirror this behavior for the
earlier branch where output_path is None and check is True (print the same hint
to stderr then return 1). Locate these branches inside the update_prompt_data
function to add the stderr message so users see actionable guidance when drift
is detected.
scripts/build_schema_docs.py (2)

62-63: _count_yaml_fixtures ignores .yml extensions.

rglob("*.yaml") will miss any fixture saved with .yml. Today everything under tests/data/openapi appears to use .yaml, so this is only a latent concern, but if that ever changes the "supported formats" doc will silently under-report. Consider ("*.yaml", "*.yml") to be safe.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/build_schema_docs.py` around lines 62 - 63, _count_yaml_fixtures
currently only searches for "*.yaml" and will miss fixtures with ".yml"; update
the function that uses OPENAPI_FIXTURES_ROOT (the _count_yaml_fixtures function)
to include both extensions by searching for both patterns (e.g., iterate rglob
over "*.yaml" and "*.yml" or use two calls and sum their lengths) so the
returned count includes files with either extension.

97-103: Empty example_files cell when a suite has no fixtures.

If a suite directory becomes empty (e.g., during a refactor), example_files renders as an empty cell with no indication why. A small guard like example_files or "—" keeps the table readable.
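The guard fits in a hypothetical one-line helper:

```python
def format_example_files(names: list[str]) -> str:
    """Join fixture names for the table cell, substituting a visible
    placeholder when the suite has no fixtures."""
    return ", ".join(names) or "—"
```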

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/build_schema_docs.py` around lines 97 - 103, When building the table
rows in the loop over FIXTURE_SUITES, guard the example_files cell so it doesn't
render empty: after computing example_files = ", ".join(...), replace an empty
string with a placeholder (e.g. "—") before appending the row. Update the block
that constructs rows.append(f"| {suite.title} | {len(fixtures)} |
{example_files} | {suite.notes} |") to use the guarded value so suites with no
fixtures show the placeholder instead of a blank cell.
.github/workflows/schema-docs.yml (1)

45-50: Fork-PR branch (first checkout) may be stale.

For fork PRs, you check out github.event.pull_request.head.ref from the fork repo. If the fork branch has diverged from the PR's merge base, the docs generated here may not reflect what will actually land after merge. Using refs/pull/<num>/merge gives you the merged-state view GitHub already computes — worth considering for doc drift accuracy.
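A sketch of the merge-ref checkout (step context omitted; assumes the workflow's existing actions/checkout@v5 pin):

```yaml
- uses: actions/checkout@v5
  if: github.event_name == 'pull_request'
  with:
    # Merged-state view GitHub computes for the PR, rather than the fork branch tip
    ref: refs/pull/${{ github.event.pull_request.number }}/merge
```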

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/schema-docs.yml around lines 45 - 50, The checkout step
using actions/checkout@v5 currently uses ref: ${{
github.event.pull_request.head.ref }} (and repository: ${{
github.event.pull_request.head.repo.full_name }}), which can produce a stale
view for fork PRs; update the checkout logic so that for pull_request events you
use the merged PR ref refs/pull/${{ github.event.pull_request.number }}/merge
(set ref: refs/pull/${{ github.event.pull_request.number }}/merge) instead of
head.ref so the workflow operates on the merged-state view GitHub computes and
the docs reflect what will actually land after merge.
.github/workflows/release-notify.yml (3)

97-97: Redundant \] inside the lookahead character class.

[\s,.)\]}\]:;] escapes ] twice: once after ) and again after }. The second \] is a no-op duplicate. Harmless, but easy to tidy.

🧹 Proposed cleanup
-            const standaloneRefRegex = /(?:^|[\s({\[-])#(\d+)(?=[\s,.)\]}\]:;]|$)/g;
+            const standaloneRefRegex = /(?:^|[\s({\[-])#(\d+)(?=[\s,.)\]}:;]|$)/g;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/release-notify.yml at line 97, The regex assigned to
standaloneRefRegex contains a redundant escaped closing bracket in the lookahead
character class; update the pattern /(?:^|[\s({\[-])#(\d+)(?=[\s,.)\]}\]:;]|$)/g
by removing the duplicate '\]' inside the lookahead so the character class only
includes a single escaped ']' (keep the rest of the characters and anchors
unchanged) to tidy the expression without altering matching behavior.

200-205: Reusing module-scope standaloneRefRegex across PR iterations — verify intent.

standaloneRefRegex is defined once at Line 97 (with the g flag), used to scan releaseBody (Lines 106–108), then reused here to scan each prBody in the outer for (const prNumber of prNumbers) loop. JavaScript resets lastIndex to 0 only after exec() returns null, so this works only if each while-loop runs to exhaustion. That invariant holds today (no early break inside the loop), but it's fragile — any future break or early return inside one of these loops would silently skip matches in subsequent iterations.

Consider constructing a fresh RegExp per usage (as you already do for prRefUrlPattern and issueUrlPattern on Lines 185–190), or explicitly reset standaloneRefRegex.lastIndex = 0 before each loop.
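A small sketch of the per-use construction (mirroring the prRefUrlPattern approach; the helper name is illustrative, and the pattern shown is the deduplicated variant proposed above):

```javascript
const standaloneRefRegex = /(?:^|[\s({\[-])#(\d+)(?=[\s,.)\]}:;]|$)/g;

function collectRefs(text) {
  // Fresh instance per scan: lastIndex always starts at 0, even if a
  // previous loop over the shared regex broke out early.
  const re = new RegExp(standaloneRefRegex.source, standaloneRefRegex.flags);
  const refs = [];
  let match;
  while ((match = re.exec(text)) !== null) {
    refs.push(Number(match[1]));
  }
  return refs;
}
```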

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/release-notify.yml around lines 200 - 205, The reusable
module-scope regex standaloneRefRegex (used in while ((bodyMatch =
standaloneRefRegex.exec(prBody)) ...) is fragile because of its global lastIndex
state; update the code to create a fresh RegExp instance before each scan of
prBody (or reset standaloneRefRegex.lastIndex = 0) — mirror how prRefUrlPattern
and issueUrlPattern are constructed per-use by creating new
RegExp(standaloneRefRegex.source, 'g') or explicitly setting lastIndex = 0 just
before the while loop that iterates over prBody (and also before the earlier
releaseBody usage) so each iteration over prNumbers/prNumber sees a clean regex
state.

1-11: Consider adding a concurrency guard.

Two back-to-back publish.yml runs (e.g., double tag push or a retried release) can trigger overlapping notify jobs that race on creating the same "Released in [version]" comments. The hasExistingComment check partially mitigates this, but two concurrent runs can still both pass the check and post duplicates. A concurrency block keyed on the tag would serialize execution:

🛡️ Proposed addition
 on:
   workflow_run:
     workflows: ["Publish"]
     types: [completed]

+concurrency:
+  group: release-notify-${{ github.event.workflow_run.head_branch }}
+  cancel-in-progress: false
+
 jobs:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/release-notify.yml around lines 1 - 11, Add a concurrency
guard to the notify job so overlapping workflow_run triggers are serialized;
inside the notify job (job name: notify) add a concurrency block with a unique
group keyed on the release tag/reference (for example group: release-notify-${{
github.event.workflow_run.head_ref || github.ref }}) and set cancel-in-progress:
false to avoid duplicate comment races.
fastapi_code_generator/config.py (1)

228-251: del params[name] is dead code.

params is a function-local dict built at Line 233 and discarded when iter_config_options returns; mutating it at Line 250 has no observable effect. If the intent was to detect CLI params with no matching field, that work is already done in validate_generate_config_model() (which runs on Line 231 before this loop and enforces exact name equality), so the del is redundant and slightly misleading.

♻️ Proposed cleanup
-    options: list[ConfigOption] = []
-    params = get_command_config_params()
     for name, field in GenerateConfig.model_fields.items():
         cli = _get_cli_metadata(field)
         options.append(
             ConfigOption(
                 name=name,
                 cli_flags=tuple(cli["flags"]),
                 description=field.description or "",
                 required=field.is_required(),
                 default=(
                     None if field.is_required() else _normalize_default(field.default)
                 ),
                 multiple=cli["multiple"],
                 type_label=_type_label(field.annotation),
                 choices=_literal_choices(field.annotation),
             )
         )
-        del params[name]
     return tuple(options)

(And add back options: list[ConfigOption] = [] above the loop.)

As a bonus, this also drops the redundant second call to get_command_config_params() — it was already invoked inside validate_generate_config_model() on Line 231.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@fastapi_code_generator/config.py` around lines 228 - 251, Remove the dead
mutation of a local params dict in iter_config_options: don't call
get_command_config_params() or create params at all and remove the del
params[name] line; keep the initial validate_generate_config_model() call and
the options list construction (ConfigOption creation loop over
GenerateConfig.model_fields) so behavior is unchanged while eliminating the
redundant params variable and its deletion.
scripts/build_llms_txt.py (5)

200-213: "Optional" classification ignores nested grand-children.

The optional-detection logic only inspects section.children one level deep:

section.children and all(is_optional(child.path) for child in section.children)

If a section's children are themselves containers (no path, further nested), child.path is None, is_optional(None) returns False, and the whole section falls back to "main" regardless of whether every leaf page matches OPTIONAL_PATTERNS. That's fine for today's single-optional file (development-contributing.md), but worth a short comment documenting the shallow-only assumption so a future nav refactor doesn't silently demote optional pages back into the main list.
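If deep nesting ever becomes real, a recursive leaf check is small. A sketch, assuming a NavSection shape like the script's dataclass:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class NavSection:
    path: Optional[str] = None
    children: list["NavSection"] = field(default_factory=list)


def all_leaves_optional(
    section: NavSection, is_optional: Callable[[str], bool]
) -> bool:
    """True only when every leaf page under this section matches is_optional."""
    if section.children:
        return all(
            all_leaves_optional(child, is_optional) for child in section.children
        )
    return section.path is not None and is_optional(section.path)
```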

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/build_llms_txt.py` around lines 200 - 213, The
optional-classification loop (building main_sections and optional_sections) only
checks children one level deep using is_optional(child.path), which causes
grand-children to be ignored; add a short clarifying comment above this logic
(near the for section in sections loop) stating that the optional-detection is
intentionally shallow (only checks section.path and immediate children.path) and
that deeper nesting is not supported by this check — mention the
functions/variables involved (sections, section.path, section.children,
is_optional, main_sections, optional_sections) and/or note that recursion would
be needed if future nav refactors introduce nested container children.

225-231: Optional pages nested below the first child level are silently dropped.

When rendering the ## Optional section, only section.children (one level) or section itself is inspected; any item.children below that are never walked. Combined with the detection logic at Lines 204–208, this means only 1- or 2-level-deep optional structures render correctly. Acceptable for the current nav, but brittle — consider reusing render() here or flattening the optional subtree before iterating.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/build_llms_txt.py` around lines 225 - 231, The Optional section only
iterates one level of children so deeper optional pages are dropped; update the
block that builds items (around the loop using variables section, items, item
and page_map) to recursively walk or flatten section.children before formatting,
e.g., call the existing render() helper or implement a
flat_list_of_pages(section) that yields all descendant items with item.path,
then iterate that flattened list and use format_page(page) as before so nested
optional pages are included. Ensure you reference the same variables and
preserve the page_map lookup and lines.append("") behavior.

238-259: generate_llms_full_txt emits a trailing --- separator with no sentinel.

The output always ends with ...content\n\n---\n (from the parts.extend([..., "---", ""]) per page, then "\n".join(parts) without a final newline). A dangling --- after the last page can look like the start of another document to some Markdown/YAML front-matter parsers. Consider either (a) joining with --- between entries rather than appending one after each, or (b) stripping the trailing separator before joining.

♻️ Proposed fix
 def generate_llms_full_txt(pages: list[PageInfo]) -> str:
     """Generate llms-full.txt with the full docs content."""
-    parts: list[str] = []
-    for page in pages:
+    blocks: list[str] = []
+    for page in pages:
         content = page.content.strip()
         if content.startswith("# "):
             first_newline = content.find("\n")
             if first_newline != -1:
                 content = content[first_newline + 1 :].strip()
-        parts.extend(
-            [
-                f"# {page.title}",
-                "",
-                f"Source: {page.url}",
-                "",
-                content,
-                "",
-                "---",
-                "",
-            ]
-        )
-    return "\n".join(parts)
+        blocks.append(
+            f"# {page.title}\n\nSource: {page.url}\n\n{content}\n"
+        )
+    return "\n---\n\n".join(blocks) + "\n"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/build_llms_txt.py` around lines 238 - 259, generate_llms_full_txt
currently appends a trailing separator ("---") and blank line for every page
which leaves a dangling separator after the last page; change the logic to
produce separators only between page blocks by building a list of per-page
blocks (using page.title, page.url, and sanitized page.content as you already
do) and then join those blocks with "\n\n---\n\n" (or alternatively remove the
final separator/empty items before joining) so the output has separators only
between entries and not after the final page.

151-169: Same page may be emitted twice if referenced from multiple nav entries.

collect_pages appends a PageInfo every time a nav section carries a path, without deduplicating on page.path. If zensical.toml ever references the same Markdown file from more than one nav location, llms-full.txt will contain duplicated content blocks and llms.txt will list the page twice. Low probability given the current nav layout, but easy to guard:

♻️ Optional deduplication
 def collect_pages(
     sections: list[NavSection], site_url: str, depth: int = 0
 ) -> list[PageInfo]:
     """Collect page metadata from the nav structure."""
     pages: list[PageInfo] = []
+    seen: set[str] = set()
     for section in sections:
         if section.path:
+            if section.path in seen:
+                continue
             base = section.path.rsplit(".", 1)[0]
             url = (
                 f"{site_url}/"
                 if base == "index"
                 else f"{site_url}/{base.removesuffix('/index')}/"
             )
             page = extract_page_info(DOCS_DIR / section.path, url, depth)
             if page is not None:
+                seen.add(section.path)
                 pages.append(page)
         if section.children:
             pages.extend(collect_pages(section.children, site_url, depth + 1))
     return pages

Note: with recursion, the seen set needs to be threaded through or moved to the caller to be fully effective.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/build_llms_txt.py` around lines 151 - 169, collect_pages currently
appends a PageInfo for every NavSection with a path and can emit the same page
multiple times if the same Markdown file is referenced from different nav
entries; add deduplication by tracking seen page paths (e.g., a seen set) and
skip adding a PageInfo if section.path (or the resolved page.path) is already in
seen, and ensure recursion respects the same set by threading a seen: set[str]
parameter (default None at the public call, initialize to empty set) through
collect_pages and its recursive calls so duplicates are prevented across the
entire traversal.

63-77: sys.exit(...) inside a library helper makes reuse awkward.

parse_zensical_toml terminates the process on error via sys.exit, which is fine for the CLI entrypoint but prevents callers (e.g., future tests or other scripts that import this helper) from catching and handling the failure cleanly. Consider raising a SystemExit / dedicated exception from the helper and keeping sys.exit/print in main() only. Same applies to Line 66. Nit — safe to defer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/build_llms_txt.py` around lines 63 - 77, The helper
parse_zensical_toml currently calls sys.exit on missing file and on TOML parse
errors; remove those sys.exit calls and instead raise appropriate exceptions
(e.g., FileNotFoundError when path.exists() is False and re-raise or wrap
tomllib.TOMLDecodeError as a ValueError or custom exception) so callers can
catch failures; keep process termination (sys.exit/print) only in the CLI
entrypoint (main) that calls parse_zensical_toml. Ensure references: function
parse_zensical_toml, the path.exists() check and the tomllib.TOMLDecodeError
except block are updated accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 9731e83e-01ba-4570-9467-77f76338dcdf

📥 Commits

Reviewing files that changed from the base of the PR and between 09c8dac and c4317d3.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (49)
  • .github/FUNDING.yml
  • .github/ISSUE_TEMPLATE/bug_report.md
  • .github/ISSUE_TEMPLATE/feature_request.md
  • .github/dependabot.yml
  • .github/workflows/changelog.yml
  • .github/workflows/cli-docs.yml
  • .github/workflows/codeql.yml
  • .github/workflows/codespell.yml
  • .github/workflows/codspeed.yml
  • .github/workflows/config-types.yml
  • .github/workflows/docs.yml
  • .github/workflows/llms-txt.yml
  • .github/workflows/publish.yml
  • .github/workflows/readme.yml
  • .github/workflows/release-draft.yml
  • .github/workflows/release-notify.yml
  • .github/workflows/schema-docs.yml
  • .github/workflows/test.yml
  • .gitignore
  • README.md
  • docs/cli-reference.md
  • docs/development-contributing.md
  • docs/index.md
  • docs/llms-full.txt
  • docs/llms.txt
  • docs/supported_formats.md
  • fastapi_code_generator/__main__.py
  • fastapi_code_generator/_types/__init__.py
  • fastapi_code_generator/_types/generate_config_dict.py
  • fastapi_code_generator/config.py
  • fastapi_code_generator/prompt_data.py
  • mkdocs.yml
  • pyproject.toml
  • scripts/build_cli_docs.py
  • scripts/build_config_types.py
  • scripts/build_llms_txt.py
  • scripts/build_prompt_data.py
  • scripts/build_schema_docs.py
  • scripts/build_site.sh
  • scripts/update_command_help_on_markdown.py
  • scripts/update_docs_version.py
  • tests/main/test_main.py
  • tests/main/test_performance.py
  • tests/test_config.py
  • tests/test_prompt_data.py
  • tests/test_readme.py
  • tests/test_schema_docs.py
  • tox.ini
  • zensical.toml
💤 Files with no reviewable changes (3)
  • .gitignore
  • mkdocs.yml
  • scripts/build_site.sh

Comment thread .github/ISSUE_TEMPLATE/bug_report.md
Comment thread .github/workflows/changelog.yml Outdated
Comment thread .github/workflows/cli-docs.yml Outdated
Comment thread .github/workflows/codeql.yml Outdated
Comment thread .github/workflows/codspeed.yml
Comment thread fastapi_code_generator/prompt_data.py Outdated
Comment thread scripts/build_prompt_data.py
Comment thread scripts/update_docs_version.py Outdated
Comment thread tests/test_prompt_data.py Outdated
Comment thread tests/test_readme.py

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
scripts/update_command_help_on_markdown.py (1)

23-48: Duplicated CLI-help plumbing with scripts/build_cli_docs.py risks drift.

ANSI_ESCAPE_PATTERN, _normalize_help_text, and get_help_text here are byte-for-byte copies of the ones in scripts/build_cli_docs.py (and tests/test_readme.py imports get_help_text from there to enforce the snapshot format). If the environment setup (e.g. COLUMNS, NO_COLOR) or the normalization pipeline ever changes in one place and not the other, --check will silently diverge from the test contract and the docs. Consider importing the shared helpers instead of re-declaring them.

♻️ Proposed refactor
-import os
 import re
-import subprocess
 import sys
 from pathlib import Path

+sys.path.insert(0, str(Path(__file__).resolve().parent))
+from build_cli_docs import get_help_text  # noqa: E402
+
 START_MARK = "<!-- start command help -->"
 END_MARK = "<!-- end command help -->"
 CODE_BLOCK_START = "```text"
 CODE_BLOCK_END = "```"

 PROJECT_DIR = Path(__file__).resolve().parent.parent
 TARGET_MARKDOWN_FILES = [
     PROJECT_DIR / "README.md",
     PROJECT_DIR / "docs" / "index.md",
 ]
-
-ANSI_ESCAPE_PATTERN = re.compile(r"\x1b\[[0-?]*[ -/]*[@-~]")
-
-
-def _normalize_help_text(text: str) -> str:
-    return "\n".join(line.rstrip() for line in text.splitlines()).strip()
-
-
-def get_help_text() -> str:
-    """Return normalized CLI help output for the current environment."""
-    env = os.environ.copy()
-    ...
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/update_command_help_on_markdown.py` around lines 23 - 48, This file
duplicates CLI-help utilities (ANSI_ESCAPE_PATTERN, _normalize_help_text,
get_help_text) from scripts/build_cli_docs.py which risks divergence; remove the
local definitions and import the shared helpers from scripts.build_cli_docs (or
the module where they are defined) and update any call sites to use the imported
symbols (ANSI_ESCAPE_PATTERN, _normalize_help_text, get_help_text) so the
environment setup and normalization pipeline are single-sourced and consistent
with tests like tests/test_readme.py.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@scripts/update_command_help_on_markdown.py`:
- Around line 23-48: This file duplicates CLI-help utilities
(ANSI_ESCAPE_PATTERN, _normalize_help_text, get_help_text) from
scripts/build_cli_docs.py which risks divergence; remove the local definitions
and import the shared helpers from scripts.build_cli_docs (or the module where
they are defined) and update any call sites to use the imported symbols
(ANSI_ESCAPE_PATTERN, _normalize_help_text, get_help_text) so the environment
setup and normalization pipeline are single-sourced and consistent with tests
like tests/test_readme.py.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ec423cac-1210-45d3-8f59-0b3321f5a334

📥 Commits

Reviewing files that changed from the base of the PR and between c4317d3 and 6adb183.

📒 Files selected for processing (3)
  • scripts/build_cli_docs.py
  • scripts/update_command_help_on_markdown.py
  • tests/test_prompt_data.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • scripts/build_cli_docs.py
  • tests/test_prompt_data.py

@codspeed-hq

codspeed-hq Bot commented Apr 20, 2026

Congrats! CodSpeed is installed 🎉

🆕 1 new benchmark was detected.

You will start to see performance impacts in the reports once the benchmarks are run from your default branch.

⚠️ Unknown Walltime execution environment detected

Using the Walltime instrument on standard Hosted Runners will lead to inconsistent data.

For the most accurate results, we recommend using CodSpeed Macro Runners: bare-metal machines fine-tuned for performance measurement consistency.

Detected benchmark



Comment thread fastapi_code_generator/__main__.py Fixed

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/cli-docs.yml:
- Around line 58-59: The workflow step named "Build prompt data" currently
invokes scripts/build_prompt_data.py with no output, so the generated prompt
data goes to stdout and fastapi_code_generator/prompt_data.py never gets updated
before it's staged; change that step to write into the staged file by either
passing the script's --output argument with the target file (e.g.,
scripts/build_prompt_data.py --output fastapi_code_generator/prompt_data.py) or
redirecting stdout (e.g., scripts/build_prompt_data.py >
fastapi_code_generator/prompt_data.py) so the file exists when the later staging
step runs.

In `@scripts/update_docs_version.py`:
- Around line 64-68: The except block in main currently only handles
subprocess.CalledProcessError and FileNotFoundError but
get_latest_release_version() can raise RuntimeError for unexpected tag formats;
update the exception tuple in the try/except that wraps the call to
get_latest_release_version() (the block referencing version =
get_latest_release_version()) to also catch RuntimeError so the script prints
the intended stderr message and returns exit code 1 when tag parsing fails.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b3c3a917-c5d0-43ce-960b-75873ab65ec1

📥 Commits

Reviewing files that changed from the base of the PR and between 57c83ba and 262b3a4.

📒 Files selected for processing (17)
  • .github/ISSUE_TEMPLATE/bug_report.md
  • .github/workflows/changelog.yml
  • .github/workflows/cli-docs.yml
  • .github/workflows/codeql.yml
  • .github/workflows/codspeed.yml
  • .github/workflows/llms-txt.yml
  • .github/workflows/readme.yml
  • .github/workflows/release-notify.yml
  • .github/workflows/schema-docs.yml
  • docs/llms-full.txt
  • fastapi_code_generator/__main__.py
  • fastapi_code_generator/config.py
  • scripts/build_cli_docs.py
  • scripts/build_llms_txt.py
  • scripts/build_prompt_data.py
  • scripts/update_command_help_on_markdown.py
  • scripts/update_docs_version.py
✅ Files skipped from review due to trivial changes (4)
  • .github/ISSUE_TEMPLATE/bug_report.md
  • docs/llms-full.txt
  • .github/workflows/schema-docs.yml
  • .github/workflows/codeql.yml
🚧 Files skipped from review as they are similar to previous changes (9)
  • .github/workflows/readme.yml
  • .github/workflows/changelog.yml
  • .github/workflows/codspeed.yml
  • .github/workflows/llms-txt.yml
  • scripts/build_prompt_data.py
  • fastapi_code_generator/__main__.py
  • fastapi_code_generator/config.py
  • .github/workflows/release-notify.yml
  • scripts/build_llms_txt.py

Comment thread .github/workflows/cli-docs.yml Outdated
Comment thread scripts/update_docs_version.py

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/build_cli_docs.py`:
- Around line 52-54: The code reads completed.stderr directly which can raise
ValueError if stderr wasn't captured (e.g., Click 8.1.8); change the msg
construction around the `completed` result to safely access stderr (use
try/except ValueError or getattr with a default) and fall back to
`completed.output` or a default string, and include any caught exception details
in the final RuntimeError message so the original failure isn't masked; update
the block that builds `msg` before `raise RuntimeError(msg)` (referencing the
`completed` variable and the RuntimeError raise site).
- Around lines 11-14: The top-level import of `fastapi_code_generator.__main__.app` makes the script fail when run from a fresh checkout; move that import into `get_help_text()` so the package is imported only when needed. Specifically, keep the `CliRunner` and `get_command` imports at module top, then perform `from fastapi_code_generator.__main__ import app` (or equivalent) inside `get_help_text()` and use `app` with `get_command`/`CliRunner` there. This ensures direct script execution (e.g., `python scripts/build_cli_docs.py --check`) works without the package pre-installed.
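The deferred-import pattern the comment describes can be sketched as follows; `argparse` stands in for the real `fastapi_code_generator.__main__.app` so the snippet is self-contained and runnable:

```python
def get_help_text() -> str:
    """Return the CLI help text, importing the app only when called."""
    # In build_cli_docs.py this would be:
    #     from fastapi_code_generator.__main__ import app
    # Deferring the import lets `python scripts/build_cli_docs.py --check`
    # run from a fresh checkout, before the package is installed.
    import argparse  # stand-in dependency so this sketch is self-contained

    parser = argparse.ArgumentParser(prog="fastapi-codegen")
    parser.add_argument("--input", help="OpenAPI file")
    return parser.format_help()


print(get_help_text())
```

Keeping cheap, always-available imports at module top while deferring the heavy or optional one is a common trade-off: the module stays importable everywhere, at the cost of the import error (if any) surfacing only when the function is first called.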
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ec546bec-0848-4a5c-a82f-ed8149831d38

📥 Commits

Reviewing files that changed from the base of the PR and between 262b3a4 and 5576cd5.

📒 Files selected for processing (1)
  • scripts/build_cli_docs.py

Comment thread scripts/build_cli_docs.py Outdated
Comment thread scripts/build_cli_docs.py Outdated
Comment thread fastapi_code_generator/cli.py Fixed (×6)
Comment thread fastapi_code_generator/config.py Fixed
Comment thread fastapi_code_generator/prompt_data.py Fixed
@codecov

codecov Bot commented Apr 20, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 100.00%. Comparing base (32d053a) to head (c9856b1).
⚠️ Report is 39 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##              main      #517    +/-   ##
==========================================
  Coverage   100.00%   100.00%            
==========================================
  Files           10        16     +6     
  Lines          659       984   +325     
  Branches        79       110    +31     
==========================================
+ Hits           659       984   +325     
| Flag      | Coverage Δ              |
| --------- | ----------------------- |
| unittests | 100.00% <100.00%> (ø)   |

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@github-actions

github-actions Bot commented Apr 20, 2026

📚 Docs Preview: https://pr-517.fastapi-code-generator.pages.dev

@koxudaxi koxudaxi merged commit 27d510f into main Apr 21, 2026
47 checks passed
@koxudaxi koxudaxi deleted the feature/port-actions-docs-stack branch April 21, 2026 05:06