Port workflow and docs automation stack #517
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings; use the review commands or the checkboxes below for quick actions.
Walkthrough
Adds a Pydantic GenerateConfig model and a generated TypedDict, multiple docs/site generators and prompt-data tooling, many GitHub Actions workflows, CLI help snapshot tooling, new tests and tox environments, repository templates (FUNDING, issue templates), and assorted docs/site configuration changes. No public runtime API signatures removed.
Sequence Diagram(s): (Skipped)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
You are seeing this message because GitHub Code Scanning has recently been set up for this repository, or this pull request contains the workflow file for the Code Scanning tool. For more information about GitHub Code Scanning, check out the documentation.
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
scripts/build_cli_docs.py (1)
42-43: ⚠️ Potential issue | 🟡 Minor — Surface a helpful error when the CLI-doc collection is missing.
The pipeline failure (`FileNotFoundError: tests/cli_doc/.cli_doc_collection.json`) surfaces as a raw traceback because `load_cli_doc_collection` unconditionally reads `COLLECTION_PATH`. This file is produced by the pytest `cli_doc` plugin run, so any invocation of `scripts/build_cli_docs.py` (or `scripts/build_prompt_data.py`, which calls the same loader) without a prior collection run fails opaquely.
Consider guarding with a clear message pointing at the required prerequisite step (e.g., the `tox -e cli-docs` / `cli_doc` pytest run), or have the tox env ensure the collection is produced first.
♻️ Suggested guard
```diff
 def load_cli_doc_collection() -> dict[str, object]:
+    if not COLLECTION_PATH.exists():
+        sys.exit(
+            f"Error: {COLLECTION_PATH} not found. "
+            "Run the cli_doc pytest collection first (e.g. `tox run -e cli-docs`)."
+        )
     return json.loads(COLLECTION_PATH.read_text(encoding="utf-8"))
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/build_cli_docs.py` around lines 42 - 43, The loader load_cli_doc_collection currently unconditionally reads COLLECTION_PATH and raises a raw FileNotFoundError; wrap the read in a try/except that catches FileNotFoundError (and optionally json.JSONDecodeError) and raise or log a clear error message instructing the user to generate the collection first (e.g., run the pytest cli_doc collection step such as "tox -e cli-docs" or the cli_doc pytest run) while referencing COLLECTION_PATH and the load_cli_doc_collection function so the failure points to the missing prerequisite.
🧹 Nitpick comments (31)
scripts/update_docs_version.py (1)
52-60: Minor: `--check` failure path is fine; consider surfacing the count.
Works as-is. Optional improvement: print a summary (`N files out of date, run scripts/update_docs_version.py`) for friendlier CI output, matching the pattern in `scripts/build_cli_docs.py`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/update_docs_version.py` around lines 52 - 60, Update the --check failure path to also print a short summary count and hint like in build_cli_docs.py: after collecting updated_files (variable updated_files) and inside the args.check and updated_files conditional, compute the number of files (len(updated_files)) and print a single summary line to stderr such as "N files out of date, run scripts/update_docs_version.py to update" in addition to the existing per-file prints so CI shows a concise summary; keep all changes near the existing target_files/README_FILE/updated_files logic and do not alter the return values.
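A minimal sketch of that summary path (the `report_check_failure` helper and its signature are hypothetical, not the script's actual API):

```python
import sys


def report_check_failure(updated_files: list[str]) -> int:
    """Print per-file lines plus a one-line summary, then signal failure."""
    for path in updated_files:
        print(f"out of date: {path}", file=sys.stderr)
    print(
        f"{len(updated_files)} file(s) out of date; "
        "run scripts/update_docs_version.py to regenerate.",
        file=sys.stderr,
    )
    return 1
```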
pyproject.toml (2)
73-87: Deduplicate `pytest-benchmark` / `pytest-codspeed` between `test` and `benchmark` groups.
Both groups now pin identical versions of the same two packages. That works, but drift is easy — the next version bump will forget one side. Prefer composing `test` from `benchmark` (or keeping the benchmark deps only in the `benchmark` group if they aren't needed for the base test matrix).
♻️ Proposed fix (composition)
```diff
 test = [
+    { include-group = "benchmark" },
     "inline-snapshot>=0.31",
     "PyYAML>=6",
     "pytest>=8.3",
-    "pytest-benchmark>=5",
     "pytest-cov>=5",
-    "pytest-codspeed>=2.2",
     "pytest-mock>=3.14",
     "pytest-xdist>=3.6",
     "time-machine>=2.15",
 ]
 benchmark = [
     "pytest-benchmark>=5",
     "pytest-codspeed>=2.2",
 ]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyproject.toml` around lines 73 - 87, The test extras duplicate pytest-benchmark and pytest-codspeed already declared in the benchmark extras; update pyproject.toml so the test extra composes from benchmark (or remove those two packages from test and keep them only in benchmark) to avoid drift — modify the "test" and "benchmark" extras blocks (referencing the "test" and "benchmark" extras and the package names "pytest-benchmark" and "pytest-codspeed") so the benchmark list is the single source of truth and test reuses it instead of listing the two packages twice.
75-75: Nit: normalize `PyYAML` → `pyyaml` for consistency with surrounding entries.
PEP 503 canonicalization makes them equivalent, but every other requirement in this file uses lowercase.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyproject.toml` at line 75, The dependency entry "PyYAML>=6" should be normalized to lowercase to match surrounding entries; replace the literal "PyYAML>=6" with "pyyaml>=6" in the pyproject.toml dependency list so the requirement is consistent with PEP 503 canonicalization and the file's style.
.github/workflows/changelog.yml (1)
54-60: Pass the tag through `env` rather than direct `${{ }}` interpolation in shell.
Interpolating `${{ github.event.release.tag_name }}` directly into the `run:` block lets a crafted tag name inject shell tokens into the `git commit -m` command. Tags are maintainer-controlled so the risk is low, but the standard hardening is to move it into `env:` and reference it as `"$TAG"`.
♻️ Proposed fix
```diff
       - name: Commit and push
+        env:
+          TAG: ${{ github.event.release.tag_name }}
         run: |
           git config user.name "github-actions[bot]"
           git config user.email "github-actions[bot]@users.noreply.github.com"
           git add CHANGELOG.md
-          git diff --cached --quiet || git commit -m "docs: update CHANGELOG.md for ${{ github.event.release.tag_name }}"
+          git diff --cached --quiet || git commit -m "docs: update CHANGELOG.md for $TAG"
           git push
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/changelog.yml around lines 54 - 60, The commit step "Commit and push" currently interpolates the release tag directly into the run block which can inject shell tokens; instead add an env entry (e.g., TAG) for github.event.release.tag_name on the job/step and reference it in the run script (use "$TAG" with proper double quotes) when constructing the git commit message and any other uses; update the step to set env: TAG: ${{ github.event.release.tag_name }} and change the git commit -m to use "$TAG" to avoid direct `${{ }}` interpolation in the shell.
.github/workflows/release-draft.yml (2)
27-31: Prefer an env var over direct secret interpolation in the `run:` script.
`if [ -n "${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}" ]` inlines the secret value into the shell command. GitHub will mask it in logs, but the hardened pattern is to expose its presence via an env var (or just a boolean) and compare that. The same `secrets.CLAUDE_CODE_OAUTH_TOKEN` is also piped into `anthropics/claude-code-action` — which is already correct via `with:`.
♻️ Proposed fix
```diff
       - id: check
+        env:
+          CLAUDE_OAUTH_PRESENT: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN != '' }}
         run: |
-          if [ -n "${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}" ]; then
+          if [ "$CLAUDE_OAUTH_PRESENT" = "true" ]; then
             echo "enabled=true" >> "$GITHUB_OUTPUT"
           else
             echo "enabled=false" >> "$GITHUB_OUTPUT"
           fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release-draft.yml around lines 27 - 31, Replace the direct secrets interpolation in the shell if test with an environment variable check: expose the secret into the job/step env (e.g., CLAUDE_CODE_OAUTH_TOKEN=${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}) and in the script use if [ -n "$CLAUDE_CODE_OAUTH_TOKEN" ] to write "enabled=true"/"enabled=false" to GITHUB_OUTPUT; this avoids inlining the secret value in the run command while keeping the existing usage of secrets.CLAUDE_CODE_OAUTH_TOKEN in the anthropics/claude-code-action `with:` block unchanged.
185-205: Breaking changes always bump minor — no major-bump path.
`HAS_BC == true` bumps `MINOR`, never `MAJOR`. That is consistent with a pre-1.0 "0.x means anything can break" convention, and fits a project currently at `0.x`. Once the project crosses `1.0`, this logic will understate breaking changes (e.g., `1.5.0` + BC → `1.6.0` instead of `2.0.0`). Consider gating on `MAJOR`, or making the bump level explicit in Claude's structured output, when you graduate past 0.x.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release-draft.yml around lines 185 - 205, The release logic treats HAS_BC=true as always incrementing MINOR (NEW_MINOR=$((MINOR + 1))) and never MAJOR; update the branch that handles breaking changes (the block using HAS_BC, MAJOR, MINOR, NEXT_VERSION, DRAFT_TAG_RAW) so that when MAJOR is >= 1 a breaking change increments MAJOR (e.g., NEW_MAJOR=$((MAJOR + 1)) and NEXT_VERSION="${NEW_MAJOR}.0.0"), while preserving the current MINOR-bump behavior when MAJOR==0; alternatively make the bump level explicit by reading a bump type from the structured release output (instead of implicitly assuming MINOR) and apply MAJOR|MINOR|PATCH accordingly to compute NEXT_VERSION.
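For intuition, a Python sketch of the post-1.0 rule this comment asks for (the function and flag names are illustrative, not the workflow's shell variables):

```python
def next_version(current: str, has_bc: bool, has_feature: bool) -> str:
    """SemVer bump: breaking changes bump MAJOR past 1.0, MINOR before it."""
    major, minor, patch = (int(part) for part in current.split("."))
    if has_bc:
        if major >= 1:
            return f"{major + 1}.0.0"
        return f"{major}.{minor + 1}.0"  # 0.x convention: BC bumps MINOR
    if has_feature:
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"


assert next_version("1.5.0", has_bc=True, has_feature=False) == "2.0.0"
assert next_version("0.4.2", has_bc=True, has_feature=False) == "0.5.0"
```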
.github/workflows/publish.yml (1)
19-24: Consider `@v8` instead of pinning to `v8.0.0` for automatic patch updates.
`astral-sh/setup-uv@v8.0.0` is the current latest release. If your project standardizes on major-version tag floats elsewhere (letting Dependabot manage major bumps), align with that pattern by using `@v8` instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/publish.yml around lines 19 - 24, Replace the pinned action version astral-sh/setup-uv@v8.0.0 with the floating major tag astral-sh/setup-uv@v8 so the workflow receives automatic patch updates; update the uses entry that currently reads "astral-sh/setup-uv@v8.0.0" (and leave the subsequent "uv build ..." step unchanged) to use "@v8" instead.
.github/workflows/codespell.yml (1)
27-27: Minor: inconsistent path prefixes in `skip`.
`CODE_OF_CONDUCT.md` is listed without a `./` prefix while the other three entries use `./docs/...`. The action tolerates both, but unifying the style (either all or none with `./`) avoids surprises if the matcher is ever tightened.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/codespell.yml at line 27, In the codespell workflow `skip` list there's an inconsistent path prefix: "CODE_OF_CONDUCT.md" lacks the "./" used by the other entries; update the `skip` value to use consistent prefixes (e.g., change "CODE_OF_CONDUCT.md" to "./CODE_OF_CONDUCT.md") so all entries follow the same style and avoid potential matcher issues; the relevant symbol to edit is the skip key and its list entries (CODE_OF_CONDUCT.md and ./docs/...).
scripts/build_cli_docs.py (1)
20-21: Duplicated `_normalize_help_text` across scripts.
The identical helper lives in `scripts/update_command_help_on_markdown.py` (per snippet). Extracting it (and the shared `ANSI_ESCAPE_PATTERN` / env setup in `get_help_text`) into a small shared module would prevent the two snapshot producers from drifting.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/build_cli_docs.py` around lines 20 - 21, Duplicate helper logic `_normalize_help_text` (and related `ANSI_ESCAPE_PATTERN` / env setup used by `get_help_text`) should be extracted into a small shared module: create a new module that exports `_normalize_help_text`, `ANSI_ESCAPE_PATTERN`, and any environment/setup helper used by `get_help_text`, update both `get_help_text` callers to import these symbols instead of redefining them, and remove the duplicate definitions from each script so both `get_help_text` implementations reference the shared utilities.
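A minimal sketch of what that shared module could look like (the `scripts/_cli_help.py` name, the subprocess invocation, and the simplified ANSI regex are assumptions for illustration — the real scripts may use Click's `CliRunner` instead):

```python
# scripts/_cli_help.py — hypothetical shared CLI-help plumbing
import os
import re
import subprocess
import sys

# Simplified pattern: strips SGR color sequences only (an assumption).
ANSI_ESCAPE_PATTERN = re.compile(r"\x1b\[[0-9;]*m")


def _normalize_help_text(text: str) -> str:
    """Trim trailing whitespace per line and surrounding blank lines."""
    return "\n".join(line.rstrip() for line in text.splitlines()).strip()


def get_help_text(columns: int = 94) -> str:
    """Capture `--help` output in a pinned-width, color-free environment."""
    env = os.environ.copy()
    env.update({"COLUMNS": str(columns), "NO_COLOR": "1", "TERM": "dumb"})
    completed = subprocess.run(
        [sys.executable, "-m", "fastapi_code_generator", "--help"],
        capture_output=True,
        text=True,
        env=env,
        check=True,
    )
    return _normalize_help_text(ANSI_ESCAPE_PATTERN.sub("", completed.stdout))
```

Both snapshot producers (and `tests/test_readme.py`) would then import from this single module.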
.github/workflows/readme.yml (1)
14-33: Duplicated `paths` lists across three triggers.
The three `on:` entries repeat an identical 7-item `paths` list. Extract it to a YAML anchor or keep a single list via `&paths` / `*paths` to reduce drift risk when the set changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/readme.yml around lines 14 - 33, Extract the duplicated paths list into a single YAML anchor and reference it from each trigger: define an anchor (e.g., &paths) containing the 7-item list and then replace the inline lists under the pull_request and pull_request_target triggers with the alias (*paths); update every trigger that currently contains the repeated "paths:" block (including pull_request and pull_request_target in .github/workflows/readme.yml) to reference the anchor so the list is maintained in one place.
.github/workflows/test.yml (1)
235-244: `coverage` needs four jobs that don't emit coverage artifacts.
`readme`, `llms-txt`, `schema-docs`, and `config-types` are listed in `needs` for the coverage job, but none of them upload a `coverage-*` artifact (only `test` and `cli-docs` do). They act as gatekeepers rather than coverage contributors, which makes the dependency graph misleading and blocks coverage publication even when doc generation is flaky. Consider keeping coverage-contributing jobs in `coverage.needs` and moving doc-only jobs to `check.needs` (where they already are).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/test.yml around lines 235 - 244, The coverage job currently lists readme, llms-txt, schema-docs, and config-types in its needs but those jobs do not upload coverage artifacts and thus block coverage publication; update the coverage job's needs to include only actual coverage-contributing jobs (e.g., test and cli-docs) and remove readme, llms-txt, schema-docs, and config-types from coverage.needs, then add those doc-only jobs to check.needs (or another appropriate non-coverage gate) so they remain required elsewhere but no longer prevent the coverage job from running/publishing.
.github/workflows/cli-docs.yml (1)
80-89: `git push` on auto-generated commits can re-trigger this workflow.
If `secrets.PAT` is set, the bot's push to `main` triggers the same workflow again (`on: push: branches: [main]`). `git diff --staged --quiet` prevents an infinite commit loop for idempotent outputs, but each regeneration still spins a full CI run. Adding `[skip ci]` to the commit message (or using the default `GITHUB_TOKEN`, which doesn't re-trigger) avoids the extra round-trip.
💡 Proposed fix
```diff
-          git diff --staged --quiet || git commit -m "docs: update CLI reference documentation
-
-          Generated by GitHub Actions"
+          git diff --staged --quiet || git commit -m "docs: update CLI reference documentation [skip ci]
+
+          Generated by GitHub Actions"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/cli-docs.yml around lines 80 - 89, The auto-generated commit currently uses git commit -m "docs: update CLI reference documentation\n\nGenerated by GitHub Actions" which can re-trigger the workflow when pushing with a PAT; update the commit step in the "Commit and push if changed" job to include a CI skip token in the message (e.g., append "[skip ci]" or "[ci skip]") so the commit message prevents workflow re-run, or alternatively ensure the push uses the default GITHUB_TOKEN instead of secrets.PAT; modify the git commit invocation referenced in the step to include the chosen change.
README.md (1)
45-78: Snapshot block is pinned to 94 columns — document that for contributors.
The rendered table is hard-wrapped to the width `get_help_text()` uses (`COLUMNS=94`). Anyone regenerating the snapshot manually (outside the `readme` / `cli-docs` tox envs) will silently drift the CI assertion. A short comment in this section referencing `scripts/update_command_help_on_markdown.py` / `tox -e readme` would save future churn.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 45 - 78, Add a short inline comment above the snapshot block explaining that the command help output is pinned to 94 columns by get_help_text() (COLUMNS=94) so contributors must regenerate via scripts/update_command_help_on_markdown.py or by running tox -e readme; update the README.md snapshot section surrounding the fenced code block to mention these exact references (get_help_text(), scripts/update_command_help_on_markdown.py, and tox -e readme) so CI snapshot regeneration is explicit.
.github/workflows/llms-txt.yml (1)
81-92: `git add` will fail if either artifact is missing; add `[skip ci]` to avoid re-triggering.
Two small hygiene improvements:
- If `scripts/build_llms_txt.py` only produces one of the two files (e.g. a transitional state), `git add docs/llms.txt docs/llms-full.txt` fails the whole step and the preflight-gated fallback never runs. Consider `git add -A docs/llms.txt docs/llms-full.txt 2>/dev/null || true` or adding both outputs explicitly from the script side.
- Auto-commits on `main` with a PAT will re-trigger the workflow. Append `[skip ci]` to the commit message to break the loop.
💡 Proposed fix
```diff
-          git add docs/llms.txt docs/llms-full.txt
-          git diff --staged --quiet || git commit -m "docs: update llms.txt files
-
-          Generated by GitHub Actions"
+          git add -- docs/llms.txt docs/llms-full.txt 2>/dev/null || true
+          git diff --staged --quiet || git commit -m "docs: update llms.txt files [skip ci]
+
+          Generated by GitHub Actions"
           git push
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/llms-txt.yml around lines 81 - 92, The commit step currently fails if one of the artifacts is missing and the commit message can re-trigger CI; update the "Commit and push if changed" run block to (1) make the git add resilient by ignoring missing files (e.g. run git add for docs/llms.txt and docs/llms-full.txt but tolerate failure: use a shell-safe fallback such as appending "|| true" or redirecting errors to /dev/null) so the step doesn't abort when one file is absent, and (2) append "[skip ci]" to the auto-commit message (the git commit -m value) so automated commits do not re-trigger the workflow.
fastapi_code_generator/__main__.py (1)
92-134: Config object is built, then partially bypassed — pick one source of truth.
The `GenerateConfig` is constructed with normalized `str` / `None` values, but `generate_code` is called with a mix: `config.encoding`, `config.output_dir`, `config.disable_timestamp`, `config.generate_routers`, and `config.specify_tags` come from `config`, while `enum_field_as_literal`, `custom_visitors`, `output_model_type`, and `python_version` still come directly from the CLI parameters. That works today because the CLI parameters retain their enum / `Path` types that the config normalizes away, but it makes the config a leaky abstraction: any future caller reading `config.enum_field_as_literal` / `config.custom_visitors` in isolation will get the normalized strings and have to re-convert them.
Either (a) convert at the call site consistently from `config` (so only the config is used), or (b) drop those fields from `GenerateConfig` so it's obvious they aren't part of the normalized surface.
💡 Option A — always read from config at the call site
```diff
-        enum_field_as_literal=enum_field_as_literal or None,
-        custom_visitors=custom_visitors,
+        enum_field_as_literal=(
+            LiteralType(config.enum_field_as_literal)
+            if config.enum_field_as_literal
+            else None
+        ),
+        custom_visitors=[Path(p) for p in config.custom_visitors] or None,
         disable_timestamp=config.disable_timestamp,
         generate_routers=config.generate_routers,
         specify_tags=config.specify_tags,
-        output_model_type=output_model_type,
-        python_version=python_version,
+        output_model_type=DataModelType(config.output_model_type),
+        python_version=PythonVersion(config.python_version),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@fastapi_code_generator/__main__.py` around lines 92 - 134, The GenerateConfig is created but generate_code mixes raw CLI variables and config fields; update the generate_code call to consistently use values from the config object (not the original CLI variables). Specifically, replace uses of enum_field_as_literal, custom_visitors, output_model_type, python_version, template_dir and model_template_dir with config.enum_field_as_literal, config.custom_visitors, config.output_model_type, config.python_version, config.template_dir and config.model_template_dir (wrapping the latter two with Path(...) if non-None as you already do elsewhere), and keep model_path derived from config.model_file; this makes GenerateConfig the single source of truth for the call to generate_code.
.github/workflows/docs.yml (2)
37-50: No cleanup when a PR closes — preview deployments and comments persist indefinitely.
There's no `pull_request: closed` handler to tear down the `pr-<number>` Cloudflare Pages preview branch or dismiss the comment. A minor hygiene issue, but worth considering if you expect many PRs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/docs.yml around lines 37 - 50, Add a pull_request: closed workflow path that runs a cleanup job paired with the existing "Deploy Preview" flow: when a PR is closed, call Cloudflare Wrangler to remove the preview branch (reverse the Deploy Preview command, e.g., use the Wrangler/pages delete command targeting --project-name and --branch=pr-<number> or equivalent) and dismiss or delete the GitHub comment created earlier (use a GitHub Actions step such as github-script or a comment-delete action and reference the comment ID saved as an output/annotation). Ensure the new job uses the same env vars (CLOUDFLARE_API_TOKEN, CLOUDFLARE_ACCOUNT_ID, CLOUDFLARE_PAGES_PROJECT) and references the PR number via github.event.pull_request.number so it targets pr-<number> and ties cleanup to the "Deploy Preview" lifecycle.
64-88: `listComments` isn't paginated — long-running PRs will get duplicate preview comments.
`github.rest.issues.listComments` returns at most 30 comments per page by default. Once a PR accumulates more than ~30 comments, the existing "Docs Preview:" comment falls off the first page and the workflow creates a new duplicate comment on every subsequent run.
Use `github.paginate()` to fetch all comments before filtering:
Suggested fix
```diff
-          const comments = await github.rest.issues.listComments({
-            issue_number: context.issue.number,
-            owner: context.repo.owner,
-            repo: context.repo.repo,
-          });
-
-          const existingComment = comments.data.find((comment) =>
-            comment.body && comment.body.includes('Docs Preview:')
-          );
+          const comments = await github.paginate(github.rest.issues.listComments, {
+            issue_number: context.issue.number,
+            owner: context.repo.owner,
+            repo: context.repo.repo,
+            per_page: 100,
+          });
+
+          const existingComment = comments.find((comment) =>
+            comment.body && comment.body.includes('Docs Preview:')
+          );
```
Note: `github.paginate()` returns the aggregated array directly, not wrapped in `.data`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/docs.yml around lines 64 - 88, The current call to github.rest.issues.listComments is not paginated, causing duplicate "Docs Preview:" comments on long-lived PRs; replace that call with github.paginate to fetch all issue comments (use github.paginate(github.rest.issues.listComments, { owner: context.repo.owner, repo: context.repo.repo, issue_number: context.issue.number }) and note paginate returns an aggregated array, not a {data} wrapper), then use that array to find existingComment (the variable used in the find(...) call) and proceed with the existing updateComment/createComment logic unchanged.
tests/test_readme.py (1)
33-46: The regex scans the entire README, including the fenced help block, so every CLI option in the snapshot is treated as if it were documented in the main text.
`re.findall(r"--[a-z0-9-]+", readme_text)` captures all 11 options from the help snapshot (lines 45–78), not just `--install-completion` and `--show-completion`. If any of these options are not exposed by `get_command(app).params`, the assertion fails even when the README and CLI are aligned.
The cleaner fix is to exclude the help block before scanning, since `_extract_help_snapshot()` already isolates it:
💡 Better fix
```diff
 def test_readme_cli_options_exist() -> None:
     readme_text = README_PATH.read_text(encoding="utf-8")
-    referenced = set(re.findall(r"--[a-z0-9-]+", readme_text))
+    # Exclude the fenced help snapshot; it is verified separately
+    help_block_text = _extract_help_snapshot(readme_text)
+    readme_without_help = readme_text.replace(help_block_text, "")
+    referenced = set(re.findall(r"--[a-z0-9-]+", readme_without_help))
```
This avoids the need to whitelist individual options and aligns with how the help snapshot is already managed in the test suite.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_readme.py` around lines 33 - 46, The test test_readme_cli_options_exist currently scans the whole README via README_PATH.read_text which picks up the fenced help snapshot; instead, strip out the help snapshot before running the regex by using the existing helper that isolates it (call _extract_help_snapshot or equivalent used elsewhere in the tests) to remove the fenced block from readme_text, then run re.findall on the cleaned text; keep references to get_command(app), command.params, and the known set unchanged so the comparison against documented options excludes options present only in the help snapshot.
scripts/build_prompt_data.py (1)
69-84: Minor: `--check` returns 1 without explaining what drifted.
When the payload diverges from the on-disk file, users get a bare exit code. Consider printing a hint such as `"Run scripts/build_prompt_data.py --output <path> to regenerate."` to stderr, matching the UX in `scripts/update_command_help_on_markdown.py`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/build_prompt_data.py` around lines 69 - 84, In update_prompt_data, when running with check and the generated payload differs from the on-disk file (the branch where check is True and existing != payload), write a concise hint to stderr (e.g. "Run scripts/build_prompt_data.py --output <path> to regenerate.") before returning exit code 1; also mirror this behavior for the earlier branch where output_path is None and check is True (print the same hint to stderr then return 1). Locate these branches inside the update_prompt_data function to add the stderr message so users see actionable guidance when drift is detected.
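A minimal sketch of that drift branch (the `check_payload` helper is hypothetical; the real function is `update_prompt_data`):

```python
import sys
from pathlib import Path


def check_payload(output_path: Path, payload: str) -> int:
    """Return 0 when the on-disk file matches, 1 (plus a stderr hint) on drift."""
    existing = output_path.read_text(encoding="utf-8") if output_path.exists() else ""
    if existing != payload:
        print(
            f"{output_path} is out of date. Run "
            f"scripts/build_prompt_data.py --output {output_path} to regenerate.",
            file=sys.stderr,
        )
        return 1
    return 0
```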
scripts/build_schema_docs.py (2)
62-63: `_count_yaml_fixtures` ignores `.yml` extensions.
`rglob("*.yaml")` will miss any fixture saved with `.yml`. Today everything under `tests/data/openapi` appears to use `.yaml`, so this is only a latent concern, but if that ever changes, the "supported formats" doc will silently under-report. Consider `("*.yaml", "*.yml")` to be safe.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/build_schema_docs.py` around lines 62 - 63, _count_yaml_fixtures currently only searches for "*.yaml" and will miss fixtures with ".yml"; update the function that uses OPENAPI_FIXTURES_ROOT (the _count_yaml_fixtures function) to include both extensions by searching for both patterns (e.g., iterate rglob over "*.yaml" and "*.yml" or use two calls and sum their lengths) so the returned count includes files with either extension.
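A minimal sketch of the two-pattern count (the function body is illustrative; only `OPENAPI_FIXTURES_ROOT` and the glob patterns come from the review):

```python
from pathlib import Path

OPENAPI_FIXTURES_ROOT = Path("tests/data/openapi")


def count_yaml_fixtures(root: Path = OPENAPI_FIXTURES_ROOT) -> int:
    """Count fixtures carrying either YAML extension."""
    return sum(len(list(root.rglob(pattern))) for pattern in ("*.yaml", "*.yml"))
```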
97-103: Empty `example_files` cell when a suite has no fixtures.
If a suite directory becomes empty (e.g., during a refactor), `example_files` renders as an empty cell with no indication why. A small guard like `example_files or "—"` keeps the table readable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/build_schema_docs.py` around lines 97 - 103, When building the table rows in the loop over FIXTURE_SUITES, guard the example_files cell so it doesn't render empty: after computing example_files = ", ".join(...), replace an empty string with a placeholder (e.g. "—") before appending the row. Update the block that constructs rows.append(f"| {suite.title} | {len(fixtures)} | {example_files} | {suite.notes} |") to use the guarded value so suites with no fixtures show the placeholder instead of a blank cell.
.github/workflows/schema-docs.yml (1)
45-50: Fork-PR branch (first checkout) may be stale.
For fork PRs, you check out `github.event.pull_request.head.ref` from the fork repo. If the fork branch has diverged from the PR's merge base, the docs generated here may not reflect what will actually land after merge. Using `refs/pull/<num>/merge` gives you the merged-state view GitHub already computes — worth considering for doc-drift accuracy.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/schema-docs.yml around lines 45 - 50, The checkout step using actions/checkout@v5 currently uses ref: ${{ github.event.pull_request.head.ref }} (and repository: ${{ github.event.pull_request.head.repo.full_name }}), which can produce a stale view for fork PRs; update the checkout logic so that for pull_request events you use the merged PR ref refs/pull/${{ github.event.pull_request.number }}/merge (set ref: refs/pull/${{ github.event.pull_request.number }}/merge) instead of head.ref so the workflow operates on the merged-state view GitHub computes and the docs reflect what will actually land after merge.
.github/workflows/release-notify.yml (3)
97-97: Redundant `\]` inside the lookahead character class.
`[\s,.)\]}\]:;]` escapes `]` twice: once after `)` and again after `}`. The second `\]` is a no-op duplicate. Harmless, but easy to tidy.
🧹 Proposed cleanup
```diff
-            const standaloneRefRegex = /(?:^|[\s({\[-])#(\d+)(?=[\s,.)\]}\]:;]|$)/g;
+            const standaloneRefRegex = /(?:^|[\s({\[-])#(\d+)(?=[\s,.)\]}:;]|$)/g;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release-notify.yml at line 97, The regex assigned to standaloneRefRegex contains a redundant escaped closing bracket in the lookahead character class; update the pattern /(?:^|[\s({\[-])#(\d+)(?=[\s,.)\]}\]:;]|$)/g by removing the duplicate '\]' inside the lookahead so the character class only includes a single escaped ']' (keep the rest of the characters and anchors unchanged) to tidy the expression without altering matching behavior.
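For intuition, the no-op can be checked with an equivalent Python character class (a quick demonstration, not the workflow's JavaScript):

```python
import re

dup = re.compile(r"[\s,.)\]}\]:;]")  # ']' escaped twice, as in the workflow
tidy = re.compile(r"[\s,.)\]}:;]")   # ']' escaped once

probe = " ,.)]}:;x"
assert [bool(dup.match(ch)) for ch in probe] == [bool(tidy.match(ch)) for ch in probe]
```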
200-205: Reusing module-scope `standaloneRefRegex` across PR iterations — verify intent.
`standaloneRefRegex` is defined once at Line 97 (with the `g` flag), used to scan `releaseBody` (Lines 106–108), then reused here to scan each `prBody` in the outer `for (const prNumber of prNumbers)` loop. JavaScript resets `lastIndex` to 0 only after `exec()` returns `null`, so this works only if each while-loop runs to exhaustion. That invariant holds today (no early `break` inside the loop), but it's fragile — any future `break` or early `return` inside one of these loops would silently skip matches in subsequent iterations.
Consider constructing a fresh `RegExp` per usage (as you already do for `prRefUrlPattern` and `issueUrlPattern` on Lines 185–190), or explicitly reset `standaloneRefRegex.lastIndex = 0` before each loop.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release-notify.yml around lines 200 - 205, The reusable module-scope regex standaloneRefRegex (used in while ((bodyMatch = standaloneRefRegex.exec(prBody)) ...) is fragile because of its global lastIndex state; update the code to create a fresh RegExp instance before each scan of prBody (or reset standaloneRefRegex.lastIndex = 0) — mirror how prRefUrlPattern and issueUrlPattern are constructed per-use by creating new RegExp(standaloneRefRegex.source, 'g') or explicitly setting lastIndex = 0 just before the while loop that iterates over prBody (and also before the earlier releaseBody usage) so each iteration over prNumbers/prNumber sees a clean regex state.
1-11: Consider adding a `concurrency` guard.
Two back-to-back `publish.yml` runs (e.g., a double tag push or a retried release) can trigger overlapping `notify` jobs that race on creating the same "Released in [version]" comments. The `hasExistingComment` check partially mitigates this, but two concurrent runs can still both pass the check and post duplicates. A `concurrency` block keyed on the tag would serialize execution:
🛡️ Proposed addition
```diff
 on:
   workflow_run:
     workflows: ["Publish"]
     types: [completed]
 
+concurrency:
+  group: release-notify-${{ github.event.workflow_run.head_branch }}
+  cancel-in-progress: false
+
 jobs:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release-notify.yml around lines 1 - 11, Add a concurrency guard to the notify job so overlapping workflow_run triggers are serialized; inside the notify job (job name: notify) add a concurrency block with a unique group keyed on the release tag/reference (for example group: release-notify-${{ github.event.workflow_run.head_ref || github.ref }}) and set cancel-in-progress: false to avoid duplicate comment races.
fastapi_code_generator/config.py (1)
228-251: `del params[name]` is dead code.
`params` is a function-local dict built at Line 233 and discarded when `iter_config_options` returns; mutating it at Line 250 has no observable effect. If the intent was to detect CLI params with no matching field, that work is already done in `validate_generate_config_model()` (which runs on Line 231 before this loop and enforces exact name equality), so the `del` is redundant and slightly misleading.
♻️ Proposed cleanup
```diff
-    options: list[ConfigOption] = []
-    params = get_command_config_params()
     for name, field in GenerateConfig.model_fields.items():
         cli = _get_cli_metadata(field)
         options.append(
             ConfigOption(
                 name=name,
                 cli_flags=tuple(cli["flags"]),
                 description=field.description or "",
                 required=field.is_required(),
                 default=(
                     None if field.is_required() else _normalize_default(field.default)
                 ),
                 multiple=cli["multiple"],
                 type_label=_type_label(field.annotation),
                 choices=_literal_choices(field.annotation),
             )
         )
-        del params[name]
     return tuple(options)
```
(And add back `options: list[ConfigOption] = []` above the loop.)
As a bonus, this also drops the redundant second call to `get_command_config_params()` — it was already invoked inside `validate_generate_config_model()` on Line 231.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@fastapi_code_generator/config.py` around lines 228 - 251, Remove the dead mutation of a local params dict in iter_config_options: don't call get_command_config_params() or create params at all and remove the del params[name] line; keep the initial validate_generate_config_model() call and the options list construction (ConfigOption creation loop over GenerateConfig.model_fields) so behavior is unchanged while eliminating the redundant params variable and its deletion.
scripts/build_llms_txt.py (5)
200-213: "Optional" classification ignores nested grand-children.The optional-detection logic only inspects
section.childrenone level deep:section.children and all(is_optional(child.path) for child in section.children)If a section's children are themselves containers (no
path, further nested),child.pathisNone,is_optional(None)returnsFalse, and the whole section falls back to "main" regardless of whether every leaf page matchesOPTIONAL_PATTERNS. That's fine for today's single-optional file (development-contributing.md), but worth a short comment documenting the shallow-only assumption so a future nav refactor doesn't silently demote optional pages back into the main list.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/build_llms_txt.py` around lines 200 - 213, The optional-classification loop (building main_sections and optional_sections) only checks children one level deep using is_optional(child.path), which causes grand-children to be ignored; add a short clarifying comment above this logic (near the for section in sections loop) stating that the optional-detection is intentionally shallow (only checks section.path and immediate children.path) and that deeper nesting is not supported by this check — mention the functions/variables involved (sections, section.path, section.children, is_optional, main_sections, optional_sections) and/or note that recursion would be needed if future nav refactors introduce nested container children.
225-231: Optional pages nested below the first child level are silently dropped.
When rendering the `## Optional` section, only `section.children` (one level) or `section` itself is inspected; any `item.children` below that are never walked. Combined with the detection logic at Lines 204–208, this means only 1- or 2-level-deep optional structures render correctly. Acceptable for the current nav, but brittle — consider reusing `render()` here or flattening the optional subtree before iterating.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/build_llms_txt.py` around lines 225 - 231, The Optional section only iterates one level of children so deeper optional pages are dropped; update the block that builds items (around the loop using variables section, items, item and page_map) to recursively walk or flatten section.children before formatting, e.g., call the existing render() helper or implement a flat_list_of_pages(section) that yields all descendant items with item.path, then iterate that flattened list and use format_page(page) as before so nested optional pages are included. Ensure you reference the same variables and preserve the page_map lookup and lines.append("") behavior.
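A minimal sketch of such a flattener (the `NavSection` shape is inferred from this review; names are illustrative):

```python
from collections.abc import Iterator
from dataclasses import dataclass, field


@dataclass
class NavSection:
    title: str
    path: str | None = None
    children: list["NavSection"] = field(default_factory=list)


def flat_list_of_pages(section: NavSection) -> Iterator[NavSection]:
    """Yield every descendant (at any depth) that carries a page path."""
    if section.path:
        yield section
    for child in section.children:
        yield from flat_list_of_pages(child)
```

The `## Optional` renderer could then iterate `flat_list_of_pages(section)` instead of a single level of `section.children`.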
238-259: `generate_llms_full_txt` emits a trailing `---` separator with no sentinel.
The output always ends with `...content\n\n---\n` (from the `parts.extend([..., "---", ""])` per page, then `"\n".join(parts)` without a final newline). A dangling `---` after the last page can look like the start of another document to some Markdown/YAML front-matter parsers. Consider either (a) joining with `---` between entries rather than appending one after each, or (b) stripping the trailing separator before joining.
♻️ Proposed fix
```diff
 def generate_llms_full_txt(pages: list[PageInfo]) -> str:
     """Generate llms-full.txt with the full docs content."""
-    parts: list[str] = []
-    for page in pages:
+    blocks: list[str] = []
+    for page in pages:
         content = page.content.strip()
         if content.startswith("# "):
             first_newline = content.find("\n")
             if first_newline != -1:
                 content = content[first_newline + 1 :].strip()
-        parts.extend(
-            [
-                f"# {page.title}",
-                "",
-                f"Source: {page.url}",
-                "",
-                content,
-                "",
-                "---",
-                "",
-            ]
-        )
-    return "\n".join(parts)
+        blocks.append(
+            f"# {page.title}\n\nSource: {page.url}\n\n{content}\n"
+        )
+    return "\n---\n\n".join(blocks) + "\n"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/build_llms_txt.py` around lines 238 - 259, generate_llms_full_txt currently appends a trailing separator ("---") and blank line for every page which leaves a dangling separator after the last page; change the logic to produce separators only between page blocks by building a list of per-page blocks (using page.title, page.url, and sanitized page.content as you already do) and then join those blocks with "\n\n---\n\n" (or alternatively remove the final separator/empty items before joining) so the output has separators only between entries and not after the final page.
151-169: Same page may be emitted twice if referenced from multiple nav entries.
`collect_pages` appends a `PageInfo` every time a nav section carries a `path`, without deduplicating on `page.path`. If `zensical.toml` ever references the same Markdown file from more than one nav location, `llms-full.txt` will contain duplicated content blocks and `llms.txt` will list the page twice. Low probability given the current nav layout, but easy to guard:
♻️ Optional deduplication
```diff
 def collect_pages(
     sections: list[NavSection], site_url: str, depth: int = 0
 ) -> list[PageInfo]:
     """Collect page metadata from the nav structure."""
     pages: list[PageInfo] = []
+    seen: set[str] = set()
     for section in sections:
         if section.path:
+            if section.path in seen:
+                continue
             base = section.path.rsplit(".", 1)[0]
             url = (
                 f"{site_url}/"
                 if base == "index"
                 else f"{site_url}/{base.removesuffix('/index')}/"
             )
             page = extract_page_info(DOCS_DIR / section.path, url, depth)
             if page is not None:
+                seen.add(section.path)
                 pages.append(page)
         if section.children:
             pages.extend(collect_pages(section.children, site_url, depth + 1))
     return pages
```
Note: with recursion, the `seen` set needs to be threaded through or moved to the caller to be fully effective.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/build_llms_txt.py` around lines 151 - 169, collect_pages currently appends a PageInfo for every NavSection with a path and can emit the same page multiple times if the same Markdown file is referenced from different nav entries; add deduplication by tracking seen page paths (e.g., a seen set) and skip adding a PageInfo if section.path (or the resolved page.path) is already in seen, and ensure recursion respects the same set by threading a seen: set[str] parameter (default None at the public call, initialize to empty set) through collect_pages and its recursive calls so duplicates are prevented across the entire traversal.
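A sketch of the threaded variant that note alludes to, reusing the `NavSection` shape from the sketch above (illustrative, not the script's actual signature):

```python
def walk_paths(sections: list["NavSection"], seen: set[str] | None = None) -> list[str]:
    """Collect each page path once, sharing `seen` across recursive calls."""
    seen = set() if seen is None else seen
    paths: list[str] = []
    for section in sections:
        if section.path and section.path not in seen:
            seen.add(section.path)
            paths.append(section.path)
        if section.children:
            paths.extend(walk_paths(section.children, seen))
    return paths
```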
63-77: `sys.exit(...)` inside a library helper makes reuse awkward.
`parse_zensical_toml` terminates the process on error via `sys.exit`, which is fine for the CLI entrypoint but prevents callers (e.g., future tests or other scripts that import this helper) from catching and handling the failure cleanly. Consider raising a `SystemExit` / dedicated exception from the helper and keeping `sys.exit` in `main()` only. The same applies to Line 66. Nit — safe to defer.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/build_llms_txt.py` around lines 63 - 77, The helper parse_zensical_toml currently calls sys.exit on missing file and on TOML parse errors; remove those sys.exit calls and instead raise appropriate exceptions (e.g., FileNotFoundError when path.exists() is False and re-raise or wrap tomllib.TOMLDecodeError as a ValueError or custom exception) so callers can catch failures; keep process termination (sys.exit/print) only in the CLI entrypoint (main) that calls parse_zensical_toml. Ensure references: function parse_zensical_toml, the path.exists() check and the tomllib.TOMLDecodeError except block are updated accordingly.
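A minimal raise-instead-of-exit sketch (the exception name is hypothetical; `parse_zensical_toml`'s real return shape may differ):

```python
import tomllib
from pathlib import Path


class ZensicalConfigError(RuntimeError):
    """Raised when zensical.toml is missing or unparseable (hypothetical name)."""


def parse_zensical_toml(path: Path) -> dict[str, object]:
    """Parse the config, raising instead of exiting so callers can recover."""
    if not path.exists():
        raise ZensicalConfigError(f"{path} not found")
    try:
        return tomllib.loads(path.read_text(encoding="utf-8"))
    except tomllib.TOMLDecodeError as exc:
        raise ZensicalConfigError(f"{path} is not valid TOML: {exc}") from exc
```

The CLI entrypoint would then catch `ZensicalConfigError` and call `sys.exit(str(exc))` itself.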
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9731e83e-01ba-4570-9467-77f76338dcdf
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (49)
- .github/FUNDING.yml
- .github/ISSUE_TEMPLATE/bug_report.md
- .github/ISSUE_TEMPLATE/feature_request.md
- .github/dependabot.yml
- .github/workflows/changelog.yml
- .github/workflows/cli-docs.yml
- .github/workflows/codeql.yml
- .github/workflows/codespell.yml
- .github/workflows/codspeed.yml
- .github/workflows/config-types.yml
- .github/workflows/docs.yml
- .github/workflows/llms-txt.yml
- .github/workflows/publish.yml
- .github/workflows/readme.yml
- .github/workflows/release-draft.yml
- .github/workflows/release-notify.yml
- .github/workflows/schema-docs.yml
- .github/workflows/test.yml
- .gitignore
- README.md
- docs/cli-reference.md
- docs/development-contributing.md
- docs/index.md
- docs/llms-full.txt
- docs/llms.txt
- docs/supported_formats.md
- fastapi_code_generator/__main__.py
- fastapi_code_generator/_types/__init__.py
- fastapi_code_generator/_types/generate_config_dict.py
- fastapi_code_generator/config.py
- fastapi_code_generator/prompt_data.py
- mkdocs.yml
- pyproject.toml
- scripts/build_cli_docs.py
- scripts/build_config_types.py
- scripts/build_llms_txt.py
- scripts/build_prompt_data.py
- scripts/build_schema_docs.py
- scripts/build_site.sh
- scripts/update_command_help_on_markdown.py
- scripts/update_docs_version.py
- tests/main/test_main.py
- tests/main/test_performance.py
- tests/test_config.py
- tests/test_prompt_data.py
- tests/test_readme.py
- tests/test_schema_docs.py
- tox.ini
- zensical.toml
💤 Files with no reviewable changes (3)
- .gitignore
- mkdocs.yml
- scripts/build_site.sh
🧹 Nitpick comments (1)
scripts/update_command_help_on_markdown.py (1)
23-48: Duplicated CLI-help plumbing with `scripts/build_cli_docs.py` risks drift.
`ANSI_ESCAPE_PATTERN`, `_normalize_help_text`, and `get_help_text` here are byte-for-byte copies of the ones in `scripts/build_cli_docs.py` (and `tests/test_readme.py` imports `get_help_text` from there to enforce the snapshot format). If the environment setup (e.g. `COLUMNS`, `NO_COLOR`) or the normalization pipeline ever changes in one place and not the other, `--check` will silently diverge from the test contract and the docs. Consider importing the shared helpers instead of re-declaring them.
♻️ Proposed refactor
````diff
-import os
 import re
-import subprocess
 import sys
 from pathlib import Path
 
+sys.path.insert(0, str(Path(__file__).resolve().parent))
+from build_cli_docs import get_help_text  # noqa: E402
+
 START_MARK = "<!-- start command help -->"
 END_MARK = "<!-- end command help -->"
 CODE_BLOCK_START = "```text"
 CODE_BLOCK_END = "```"
 
 PROJECT_DIR = Path(__file__).resolve().parent.parent
 TARGET_MARKDOWN_FILES = [
     PROJECT_DIR / "README.md",
     PROJECT_DIR / "docs" / "index.md",
 ]
-
-ANSI_ESCAPE_PATTERN = re.compile(r"\x1b\[[0-?]*[ -/]*[@-~]")
-
-
-def _normalize_help_text(text: str) -> str:
-    return "\n".join(line.rstrip() for line in text.splitlines()).strip()
-
-
-def get_help_text() -> str:
-    """Return normalized CLI help output for the current environment."""
-    env = os.environ.copy()
-    ...
````
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/update_command_help_on_markdown.py` around lines 23 - 48, This file duplicates CLI-help utilities (ANSI_ESCAPE_PATTERN, _normalize_help_text, get_help_text) from scripts/build_cli_docs.py which risks divergence; remove the local definitions and import the shared helpers from scripts.build_cli_docs (or the module where they are defined) and update any call sites to use the imported symbols (ANSI_ESCAPE_PATTERN, _normalize_help_text, get_help_text) so the environment setup and normalization pipeline are single-sourced and consistent with tests like tests/test_readme.py.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@scripts/update_command_help_on_markdown.py`:
- Around line 23-48: This file duplicates CLI-help utilities
(ANSI_ESCAPE_PATTERN, _normalize_help_text, get_help_text) from
scripts/build_cli_docs.py which risks divergence; remove the local definitions
and import the shared helpers from scripts.build_cli_docs (or the module where
they are defined) and update any call sites to use the imported symbols
(ANSI_ESCAPE_PATTERN, _normalize_help_text, get_help_text) so the environment
setup and normalization pipeline are single-sourced and consistent with tests
like tests/test_readme.py.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ec423cac-1210-45d3-8f59-0b3321f5a334
📒 Files selected for processing (3)
- scripts/build_cli_docs.py
- scripts/update_command_help_on_markdown.py
- tests/test_prompt_data.py
🚧 Files skipped from review as they are similar to previous changes (2)
- scripts/build_cli_docs.py
- tests/test_prompt_data.py
Congrats! CodSpeed is installed 🎉
You will start to see performance impacts in the reports once the benchmarks are run from your default branch.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/cli-docs.yml:
- Around line 58-59: The workflow step named "Build prompt data" currently
invokes scripts/build_prompt_data.py with no output, so the generated prompt
data goes to stdout and fastapi_code_generator/prompt_data.py never gets updated
before it's staged; change that step to write into the staged file by either
passing the script's --output argument with the target file (e.g.,
scripts/build_prompt_data.py --output fastapi_code_generator/prompt_data.py) or
redirecting stdout (e.g., scripts/build_prompt_data.py >
fastapi_code_generator/prompt_data.py) so the file exists when the later staging
step runs.
In `@scripts/update_docs_version.py`:
- Around line 64-68: The except block in main currently only handles
subprocess.CalledProcessError and FileNotFoundError but
get_latest_release_version() can raise RuntimeError for unexpected tag formats;
update the exception tuple in the try/except that wraps the call to
get_latest_release_version() (the block referencing version =
get_latest_release_version()) to also catch RuntimeError so the script prints
the intended stderr message and returns exit code 1 when tag parsing fails.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b3c3a917-c5d0-43ce-960b-75873ab65ec1
📒 Files selected for processing (17)
- .github/ISSUE_TEMPLATE/bug_report.md
- .github/workflows/changelog.yml
- .github/workflows/cli-docs.yml
- .github/workflows/codeql.yml
- .github/workflows/codspeed.yml
- .github/workflows/llms-txt.yml
- .github/workflows/readme.yml
- .github/workflows/release-notify.yml
- .github/workflows/schema-docs.yml
- docs/llms-full.txt
- fastapi_code_generator/__main__.py
- fastapi_code_generator/config.py
- scripts/build_cli_docs.py
- scripts/build_llms_txt.py
- scripts/build_prompt_data.py
- scripts/update_command_help_on_markdown.py
- scripts/update_docs_version.py
✅ Files skipped from review due to trivial changes (4)
- .github/ISSUE_TEMPLATE/bug_report.md
- docs/llms-full.txt
- .github/workflows/schema-docs.yml
- .github/workflows/codeql.yml
🚧 Files skipped from review as they are similar to previous changes (9)
- .github/workflows/readme.yml
- .github/workflows/changelog.yml
- .github/workflows/codspeed.yml
- .github/workflows/llms-txt.yml
- scripts/build_prompt_data.py
- fastapi_code_generator/__main__.py
- fastapi_code_generator/config.py
- .github/workflows/release-notify.yml
- scripts/build_llms_txt.py
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/build_cli_docs.py`:
- Around line 52-54: The code reads completed.stderr directly which can raise
ValueError if stderr wasn't captured (e.g., Click 8.1.8); change the msg
construction around the `completed` result to safely access stderr (use
try/except ValueError or getattr with a default) and fall back to
`completed.output` or a default string, and include any caught exception details
in the final RuntimeError message so the original failure isn't masked; update
the block that builds `msg` before `raise RuntimeError(msg)` (referencing the
`completed` variable and the RuntimeError raise site).
- Around line 11-14: The top-level import of fastapi_code_generator.__main__.app
causes failures when running the script from a fresh checkout; move that import
into the get_help_text() function so the package is imported only when needed.
Specifically, keep imports for CliRunner and get_command at module top, then
inside get_help_text() perform "from fastapi_code_generator.__main__ import app"
(or equivalent) and use app with get_command/CliRunner there; this ensures
direct script execution (e.g., python scripts/build_cli_docs.py --check) works
without the package pre-installed.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ec546bec-0848-4a5c-a82f-ed8149831d38
📒 Files selected for processing (1)
scripts/build_cli_docs.py
Codecov Report
✅ All modified and coverable lines are covered by tests.
Additional details and impacted files
```
@@             Coverage Diff             @@
##              main      #517     +/-   ##
===========================================
  Coverage   100.00%   100.00%
===========================================
  Files           10        16        +6
  Lines          659       984      +325
  Branches        79       110       +31
===========================================
+ Hits           659       984      +325
```
Flags with carried forward coverage won't be shown. Click here to find out more.
☔ View full report in Codecov by Sentry.
📚 Docs Preview: https://pr-517.fastapi-code-generator.pages.dev
Summary by CodeRabbit
- New Features
- Documentation
- Infrastructure
- Tests
- Chores