Align CI workflows with datamodel-code-generator#539
Conversation
**Walkthrough**

Emit a compact OPTION_DESCRIPTIONS mapping from CLI docs; simplify the build_prompt_data script and the generated module; adjust CI workflow triggers and build invocations; update tests and docs.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    participant GH as GitHub Actions
    participant Script as build_prompt_data.py
    participant Module as fastapi_code_generator/prompt_data.py
    participant DocsTests as Docs & Tests
    GH->>Script: run build_prompt_data (CI job)
    Script->>Script: parse tests/cli_doc/.cli_doc_collection.json
    Script->>Module: write OPTION_DESCRIPTIONS mapping
    Module-->>DocsTests: consumed by docs generation & tests
    DocsTests->>GH: report success/failure
```
**Estimated code review effort**

🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
📚 Docs Preview: https://pr-539.fastapi-code-generator.pages.dev
Merging this PR will not alter performance
**Codecov Report**

✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##              main      #539   +/- ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           16        16
  Lines         1022      1012       -10
  Branches       110       110
=========================================
- Hits          1022      1012       -10
```

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Actionable comments posted: 1
🧹 Nitpick comments (2)
tests/test_prompt_data.py (1)
**53-73: LGTM — fixture exercises dedupe, truncation, and empty-description skip.**

Good coverage: the second `--input` item validates that the first-seen description wins, `--long` validates truncation/ellipsis, and `--empty` validates the empty-description skip.

One optional addition: assert that the truncated `--long` line stays within the 120-column budget, which is the implicit contract of the truncation logic.

```diff
  assert "..." in content
+ long_lines = [ln for ln in content.splitlines() if ln.startswith('    "--long":')]
+ assert long_lines and all(len(ln) <= 120 for ln in long_lines)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `tests/test_prompt_data.py` around lines 53-73: add an assertion in `test_build_prompt_data_generates_file` that locates the generated `'--long'` option line in the file content (after calling `build_prompt_data` with `OUTPUT_PATH` set) and verifies its length does not exceed 120 characters; specifically, find the line containing `'"--long":'` in the `content` variable and assert `len(line) <= 120` to enforce the truncation/ellipsis contract implemented by the prompt_data logic.

scripts/build_prompt_data.py (1)
**53-60: Truncation boundary can interact with escape sequences — current fallback is narrow.**

After `replace("\\", "\\\\")` and `replace('"', '\\"')`, `escaped_description` can contain multi-char sequences (`\\`, `\"`). Slicing at `max_description_length - 3` may land inside one of them. `removesuffix("\\")` only handles a single trailing `\`; if a truncated `\\` leaves a lone `\` it's fixed, but if it leaves `\\` (both halves retained) followed by `...` the result happens to be valid — this works by luck of the boundary cases rather than by design.

Consider truncating the raw description first and escaping afterwards, which sidesteps the class of issues entirely:

♻️ Proposed refactor

```diff
-    for option, description in sorted(descriptions.items()):
-        escaped_description = description.replace("\\", "\\\\").replace('"', '\\"')
-        max_description_length = 120 - len(f'    "{option}": "",')
-        if len(escaped_description) > max_description_length:
-            truncated_description = escaped_description[: max_description_length - 3]
-            truncated_description = truncated_description.removesuffix("\\")
-            escaped_description = truncated_description + "..."
-        lines.append(f'    "{option}": "{escaped_description}",')
+    for option, description in sorted(descriptions.items()):
+        max_description_length = 120 - len(f'    "{option}": "",')
+        if len(description) > max_description_length:
+            description = description[: max_description_length - 3] + "..."
+        escaped_description = description.replace("\\", "\\\\").replace('"', '\\"')
+        lines.append(f'    "{option}": "{escaped_description}",')
```

Note: this changes the budget to apply to the pre-escape length, so a description full of quotes/backslashes could still exceed 120 columns on the generated line — acceptable for a cosmetic budget, and the output stays valid.
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `scripts/build_prompt_data.py` around lines 53-60: the truncation should operate on the raw description before escaping, to avoid slicing inside escape sequences. For each option in `descriptions`, compute `max_raw_length = 120 - len(f'    "{option}": "",')` and, if `len(description) > max_raw_length`, truncate the raw description to `max_raw_length - 3` and append `"..."`; then perform escaping (`description.replace("\\", "\\\\").replace('"', '\\"')`) and use the escaped result when building the `lines` list (i.e., replace the current escaped/truncated/`removesuffix` logic with truncate-then-escape).
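The proposed truncate-then-escape ordering can be sketched as a standalone helper (a minimal sketch: `render_option_lines` is an invented name, and `descriptions` is assumed to be the option-to-description mapping the script builds):

```python
# Minimal sketch of the truncate-then-escape ordering proposed above.
# The 120-column budget is applied to the raw text, so the slice can
# never split a \\ or \" escape sequence.
def render_option_lines(descriptions: dict[str, str]) -> list[str]:
    lines: list[str] = []
    for option, description in sorted(descriptions.items()):
        # Budget left for the description once the fixed wrapper is counted.
        max_description_length = 120 - len(f'    "{option}": "",')
        if len(description) > max_description_length:
            # Truncate the raw text first; escaping happens afterwards.
            description = description[: max_description_length - 3] + "..."
        escaped = description.replace("\\", "\\\\").replace('"', '\\"')
        lines.append(f'    "{option}": "{escaped}",')
    return lines
```

As the review note says, a description dense with quotes or backslashes can still exceed 120 columns after escaping; the budget is cosmetic, and the generated literal stays valid either way.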
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@fastapi_code_generator/prompt_data.py`:
- Around line 16-19: The "--output" option description was accidentally
copy-pasted from "--input"; update the value for the key "--output" in
prompt_data.py so it contains the correct description (e.g., "Path or directory
to write the generated FastAPI application files" or similar), ensuring it no
longer duplicates the "--input" text; also verify other option descriptions in
the same mapping to ensure they weren't similarly duplicated and, if this file
is generated, adjust the upstream marker generating the option_description
grouping so "--output" gets its own distinct description rather than inheriting
"--input"'s text.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: bffb50de-62cf-46bd-865f-e6b93cef5d55
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (8)
- .github/workflows/cli-docs.yml
- .github/workflows/docs-deploy.yml
- .github/workflows/generated-docs-sync.yml
- .github/workflows/llms-txt.yml
- fastapi_code_generator/prompt_data.py
- pyproject.toml
- scripts/build_prompt_data.py
- tests/test_prompt_data.py
💤 Files with no reviewable changes (1)
- pyproject.toml
🧹 Nitpick comments (2)
tests/test_prompt_data.py (2)
**114-116: Cumulative capsys buffer — intentional but brittle.**

`build_prompt_data(check=False)` at line 114 prints a `Generated: ...` line to stdout; since `capsys.readouterr()` isn't called between lines 114 and 116, the assertion on `splitlines()[-1]` works only because the subsequent `check=True` call appends the `OK:` line last. This is correct today but couples the test to print ordering across two separate invocations. A quick `capsys.readouterr()` after line 114 would make the intent of line 116 (asserting the `OK:` line from the check run specifically) unambiguous.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `tests/test_prompt_data.py` around lines 114-116: the test relies on a cumulative capsys buffer across two calls to `build_prompt_data`, making the final assertion brittle; after calling `build_prompt_data(check=False)`, capture and clear stdout by calling `capsys.readouterr()` before invoking `build_prompt_data(check=True)` so the final assertion `capsys.readouterr().out.splitlines()[-1] == f"OK: {output_path}"` only checks the output produced by the `check=True` run.
**32-43: Optional: make "first-wins" semantics for duplicate options explicit.**

The fixture implicitly exercises two behaviors of `build_prompt_data`:

1. When the same option appears twice (`--input` at items 0 and 2), the first-seen `option_description` wins and the multiline variant is ignored.
2. Only the first line of a multiline description is taken (this would matter if the multiline entry were first).

Only (1) is asserted. Consider adding a small assertion that `"Extra detail."` is not in the generated content, so a future regression that inverts the iteration order or changes the dedup policy is caught explicitly.

✏️ Suggested additional assertion

```diff
  assert "--empty" not in content
+ assert "Extra detail." not in content
  assert "..." in content
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `tests/test_prompt_data.py` around lines 32-43: the test currently asserts that the first-seen description wins for duplicate options but doesn't assert that the multiline second occurrence was ignored; update the test for `build_prompt_data` to also assert that the string `"Extra detail."` (the description from the later duplicate marker for `"--input"`) does NOT appear in the generated content, so the first-wins deduplication behavior and the "first-line only" rule are explicitly checked.
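The two behaviors above can be sketched as a minimal collection pass (a sketch under assumed names: `collect_descriptions` and the `(option, raw_description)` pair shape are invented for illustration):

```python
def collect_descriptions(entries: list[tuple[str, str]]) -> dict[str, str]:
    # entries: (option, raw_description) pairs in document order.
    descriptions: dict[str, str] = {}
    for option, raw in entries:
        # Only the first line of a multiline description is taken.
        first_line = raw.splitlines()[0].strip() if raw else ""
        if not first_line:
            continue  # empty-description skip
        # First-seen wins: later duplicates (e.g. a multiline
        # re-declaration of --input) are ignored.
        descriptions.setdefault(option, first_line)
    return descriptions
```

Asserting that the later duplicate's extra text never reaches the output pins down both the dedup policy and the first-line rule.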
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 8563cf77-a858-4b6a-931d-90d20af4b346
📒 Files selected for processing (4)
- docs/cli-reference.md
- fastapi_code_generator/prompt_data.py
- tests/main/test_main.py
- tests/test_prompt_data.py
✅ Files skipped from review due to trivial changes (1)
- docs/cli-reference.md
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `.github/workflows/config-types.yml`:
- Line 47: the CI currently runs the generator in write mode via the `tox -e config-types` job, which, per the `check=False` behavior in `fastapi_code_generator/config.py`, will overwrite committed types instead of failing. Update the workflow step to restore the check flag (i.e., run `tox -e config-types -- --check`) so the generator runs in validation mode; or, if you must keep the write build, add an explicit clean-worktree assertion immediately after the generation step (run a git diff/clean check and fail the job if uncommitted changes are present) so stale generated types cause the job to fail.

In `.github/workflows/test.yml`:
- Around lines 56-57: the CI step "Build CLI docs" (and the similar generator steps later) currently runs generators in write mode (e.g. `tox -e cli-docs`), which will silently update files and still exit 0. Change these steps to validation mode by adding the generators' check flag (e.g. pass `--check` or the equivalent to the tox target invoked by `tox -e cli-docs`), or keep the build but add `git diff --exit-code` immediately after each generator step so the job fails if any files were modified.
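The clean-worktree alternative can be sketched as a workflow fragment (a sketch under assumptions: the step names and surrounding job context are invented; the tox env name comes from the comment):

```yaml
# Sketch: keep the write-mode build, then fail the job if the generator
# modified any committed files.
- name: Build CLI docs
  run: tox -e cli-docs
- name: Fail if generated docs are stale
  run: git diff --exit-code
```

`git diff --exit-code` exits non-zero when the worktree has uncommitted changes, so a stale generated file fails the job instead of being silently rewritten.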
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d658cd7c-ed3e-4c9d-8f4a-d02eaa5250c4
📒 Files selected for processing (2)
- .github/workflows/config-types.yml
- .github/workflows/test.yml
**Summary by CodeRabbit**

- Chores
- Refactor
- Documentation
- Tests