
Expose reuse model option#568

Merged
koxudaxi merged 2 commits into main from support-custom-models on Apr 30, 2026

Conversation

@koxudaxi
Owner

@koxudaxi koxudaxi commented Apr 28, 2026

Fixes: #479

Summary

  • add a --reuse-model CLI option and pass it through to the OpenAPI parser
  • document the option in generated CLI/help metadata
  • cover reuse behavior with a generated model regression test
  • keep --use-annotated as the existing Annotated type-hint option for the issue's Annotated use case
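The flag threading in the summary above can be sketched with a minimal, self-contained CLI. The real project exposes the option through its Click-based CLI; this stand-in uses only `argparse` from the standard library, and the names `generate_code` and `build_cli` are illustrative, not the project's actual API.

```python
import argparse

def generate_code(input_file: str, reuse_model: bool = False) -> str:
    # Stand-in for the real generator: the project passes reuse_model
    # through to the OpenAPI parser; here we simply report the flag.
    return f"parsed {input_file} (reuse_model={reuse_model})"

def build_cli() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="fastapi-codegen-sketch")
    parser.add_argument("input_file")
    parser.add_argument("--reuse-model", action="store_true",
                        help="Reuse an existing model when another model "
                             "has the same content.")
    return parser

args = build_cli().parse_args(["api.yaml", "--reuse-model"])
print(generate_code(args.input_file, reuse_model=args.reuse_model))
# parsed api.yaml (reuse_model=True)
```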

Tests

  • tox -e py314-parallel -- tests/main/test_main.py::test_generate_with_reuse_model tests/test_config.py
  • tox -e py314-parallel -- tests/main/test_performance.py --benchmark-only --benchmark-min-rounds=5
  • tox -e cli-docs -- --check
  • tox -e config-types -- --check
  • tox -e readme
  • tox -e llms-txt -- --check
  • tox -e type

Summary by CodeRabbit

  • New Features

    • Added a CLI flag (--reuse-model) to enable reusing identical generated model types across outputs.
  • Documentation

    • Updated CLI reference, README, and docs to document the new flag and provide usage examples; updated supported-formats counts.
  • Tests

    • Added CLI golden-output test and example fixtures demonstrating the new flag's behavior.

@coderabbitai

coderabbitai Bot commented Apr 28, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a new CLI flag --reuse-model and a corresponding generation config field that enables reusing generated model classes when schemas are identical. The option is wired through the CLI into the generation code and OpenAPI parsing; prompt/config typing is updated, and docs plus a CLI golden-output test with input/expected fixtures are added.

Changes

Cohort / File(s) Summary
Documentation
README.md, docs/cli-reference.md, docs/index.md, docs/llms-full.txt, docs/supported_formats.md
Added --reuse-model flag to CLI help snapshots and reference docs; updated examples and fixture counts.
CLI & Invocation
fastapi_code_generator/cli.py
Added --reuse-model CLI option, threaded the value from main() → generate_code() → parser; introduced a _get_command() helper to cache the Click command.
Config & Types
fastapi_code_generator/config.py, fastapi_code_generator/_types/generate_config_dict.py
Added reuse_model: bool to Pydantic GenerateConfig (default False) and to GenerateConfigDict TypedDict.
Prompt Metadata
fastapi_code_generator/prompt_data.py
Extended PROMPT_DATA with reuse_model option metadata and a CLI example demonstrating --reuse-model.
Tests & Fixtures
tests/main/test_main.py, tests/data/openapi/default_template/reuse_model.yaml, tests/data/expected/openapi/default_template/reuse_model/*
Added a CLI documentation/golden-output test that exercises --reuse-model, plus input OpenAPI fixture and expected generated main.py and models.py demonstrating reused model types.
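The config wiring described in the table above can be sketched as follows. The project uses a Pydantic `GenerateConfig` model plus a `GenerateConfigDict` TypedDict; this dependency-free sketch substitutes a dataclass for the Pydantic model, and `from_dict` is a hypothetical helper, not part of the project's API.

```python
from dataclasses import dataclass
from typing import TypedDict

class GenerateConfigDict(TypedDict, total=False):
    # Mirrors the new TypedDict field described above; other keys omitted.
    reuse_model: bool

@dataclass
class GenerateConfig:
    # The real project uses a Pydantic model; a dataclass keeps this
    # sketch dependency-free. The default matches the PR: False.
    reuse_model: bool = False

    @classmethod
    def from_dict(cls, data: GenerateConfigDict) -> "GenerateConfig":
        # Hypothetical convenience constructor for illustration only.
        return cls(reuse_model=data.get("reuse_model", False))
```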

Sequence Diagram(s)

sequenceDiagram
  participant CLI as CLI (click)
  participant Main as main()/generate_code()
  participant Parser as OpenAPIParser
  participant FS as FileWriter

  CLI->>Main: parse args (including --reuse-model)
  Main->>Parser: instantiate with reuse_model flag
  Parser->>Parser: detect identical schemas -> reuse model types
  Parser->>FS: emit models.py, main.py using reused types
  FS-->>CLI: generated files written
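The "detect identical schemas -> reuse model types" step in the diagram above can be illustrated with a small sketch. This is not the parser's actual algorithm; it only shows the idea of keying models by structural identity so that schemas with the same content map to one model name.

```python
import json
from typing import Dict

def assign_model_names(schemas: Dict[str, dict]) -> Dict[str, str]:
    """Map each schema name to a canonical model name: structurally
    identical schemas share the first model generated for that shape."""
    canonical: Dict[str, str] = {}  # structural key -> first model name
    mapping: Dict[str, str] = {}
    for name, schema in schemas.items():
        key = json.dumps(schema, sort_keys=True)  # structural identity
        mapping[name] = canonical.setdefault(key, name)
    return mapping

schemas = {
    "Pet": {"type": "object", "properties": {"name": {"type": "string"}}},
    "Animal": {"type": "object", "properties": {"name": {"type": "string"}}},
    "Error": {"type": "object", "properties": {"code": {"type": "integer"}}},
}
print(assign_model_names(schemas))
# {'Pet': 'Pet', 'Animal': 'Pet', 'Error': 'Error'}
```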

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Poem

🐰 I hopped through flags and docs with glee,
Found twin schemas and said "Match for me!"
One model to rule, two endpoints now sing,
Less duplicate fluff—what joy I bring! ✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 30.77%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'Expose reuse model option' directly summarizes the main change: adding a --reuse-model CLI option.
  • Linked Issues Check: ✅ Passed. The PR partially addresses issue #479 by implementing --reuse-model to enable reusing identical generated models, satisfying the core request for model reuse functionality.
  • Out of Scope Changes Check: ✅ Passed. All changes are scoped to implementing the --reuse-model CLI option and its documentation; fixture inventory updates align with the new test coverage.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@github-actions
Contributor

github-actions Bot commented Apr 28, 2026

📚 Docs Preview: https://pr-568.fastapi-code-generator.pages.dev

@codspeed-hq

codspeed-hq Bot commented Apr 28, 2026

Merging this PR will improve performance by 52.34%

⚠️ Unknown Walltime execution environment detected

Using the Walltime instrument on standard Hosted Runners will lead to inconsistent data.

For the most accurate results, we recommend using CodSpeed Macro Runners: bare-metal machines fine-tuned for performance measurement consistency.

⚠️ Different runtime environments detected

Some benchmarks with significant performance changes were compared across different runtime environments, which may affect the accuracy of the results.

Open the report in CodSpeed to investigate

⚡ 1 improved benchmark

Performance Changes

Benchmark | BASE | HEAD | Efficiency
test_generate_default_template_benchmark | 52.4 ms | 34.4 ms | +52.34%

Comparing support-custom-models (a6d3362) with main (71ddd8b)

Open in CodSpeed

@codecov

codecov Bot commented Apr 28, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 100.00%. Comparing base (71ddd8b) to head (a6d3362).
⚠️ Report is 3 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff            @@
##              main      #568   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           17        17           
  Lines         1344      1351    +7     
  Branches       139       139           
=========================================
+ Hits          1344      1351    +7     
Flag | Coverage | Δ
unittests | 100.00% <100.00%> | (ø)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@koxudaxi koxudaxi force-pushed the support-custom-models branch from a513a0b to 51e6842 on April 28, 2026 03:20

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
fastapi_code_generator/prompt_data.py (1)

188-189: Clarify the --reuse-model help text for user-facing docs.

Current wording is hard to parse. Consider a clearer sentence so generated CLI/help docs are easier to understand.

✏️ Suggested wording update
-            'description': 'Reuse models on the field when a module has the '
-            'model with the same content.',
+            'description': 'Reuse an existing model when another model has '
+            'the same fields and content.',
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@fastapi_code_generator/prompt_data.py` around lines 188 - 189, Update the
user-facing help text for the --reuse-model flag in prompt_data.py by replacing
the current confusing 'description' string with a clearer sentence; locate the
description value associated with the '--reuse-model' flag (the 'description'
key in the prompt_data.py entry for reuse-model) and change it to a concise
message such as: "Reuse identical model definitions across modules and fields so
the generator treats models with the same content as the same type." Ensure the
updated string is grammatical and user-facing appropriate.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7a2d470e-44b6-4b75-ba7d-f524bb1f8ace

📥 Commits

Reviewing files that changed from the base of the PR and between e9f2965 and a513a0b.

📒 Files selected for processing (9)
  • README.md
  • docs/cli-reference.md
  • docs/index.md
  • docs/llms-full.txt
  • fastapi_code_generator/_types/generate_config_dict.py
  • fastapi_code_generator/cli.py
  • fastapi_code_generator/config.py
  • fastapi_code_generator/prompt_data.py
  • tests/main/test_main.py

@koxudaxi koxudaxi force-pushed the support-custom-models branch 2 times, most recently from bf1ef0e to 20bdda2 on April 28, 2026 05:08

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@tests/data/expected/openapi/default_template/body_and_parameters_reuse_model/main.py`:
- Around line 39-44: The generated functions convert1 and convert2 currently use
a parameter named format which shadows the builtin; update the code generator to
emit a safe parameter name (e.g., format_) and wire the original API name via
FastAPI Query alias: for convert1 change signature to use format_: Optional[str]
= Query('pdf', alias='format') and for convert2 use format_: Optional[str] =
Query(None, alias='format'); ensure the generator also adds the necessary import
for Query and preserves the Request and return types.
- Around line 119-121: The parameter annotation for body in function
put_pets_pet_id is inconsistent with its default None; update the type from
PetForm to Optional[PetForm] (and import typing.Optional if not already) so the
signature reads body: Optional[PetForm] = None, ensuring the function annotation
matches the None default.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ed049ba9-a843-44ab-8644-88bf0689821d

📥 Commits

Reviewing files that changed from the base of the PR and between bf1ef0e and 20bdda2.

📒 Files selected for processing (11)
  • README.md
  • docs/cli-reference.md
  • docs/index.md
  • docs/llms-full.txt
  • fastapi_code_generator/_types/generate_config_dict.py
  • fastapi_code_generator/cli.py
  • fastapi_code_generator/config.py
  • fastapi_code_generator/prompt_data.py
  • tests/data/expected/openapi/default_template/body_and_parameters_reuse_model/main.py
  • tests/data/expected/openapi/default_template/body_and_parameters_reuse_model/models.py
  • tests/main/test_main.py
✅ Files skipped from review due to trivial changes (6)
  • fastapi_code_generator/_types/generate_config_dict.py
  • README.md
  • tests/data/expected/openapi/default_template/body_and_parameters_reuse_model/models.py
  • docs/index.md
  • docs/llms-full.txt
  • docs/cli-reference.md
🚧 Files skipped from review as they are similar to previous changes (3)
  • fastapi_code_generator/config.py
  • fastapi_code_generator/prompt_data.py
  • tests/main/test_main.py

@koxudaxi koxudaxi force-pushed the support-custom-models branch 3 times, most recently from a7a16f7 to 9c42e57 on April 28, 2026 05:19
@koxudaxi koxudaxi force-pushed the support-custom-models branch from f353f31 to 009129b on April 28, 2026 05:21
…ator into support-custom-models

# Conflicts:
#	README.md
#	docs/cli-reference.md
#	docs/index.md
#	docs/llms-full.txt
#	docs/supported_formats.md
#	fastapi_code_generator/cli.py
#	fastapi_code_generator/config.py
#	fastapi_code_generator/prompt_data.py
@koxudaxi koxudaxi merged commit c2fa6a1 into main Apr 30, 2026
45 checks passed
@koxudaxi koxudaxi deleted the support-custom-models branch April 30, 2026 04:45
@github-actions github-actions Bot added the breaking-change-analyzed label (PR has been checked for release draft updates) on Apr 30, 2026
@github-actions
Contributor

Breaking Change Analysis

Result: No breaking changes detected

Reasoning: The breaking-change label is absent, so this PR is treated as a non-breaking release update.


This analysis was performed by repository automation using PR labels and the ## Breaking Changes section.


Labels

breaking-change-analyzed: PR has been checked for release draft updates


Development

Successfully merging this pull request may close these issues.

Support to use custom models and Annotated type hints

1 participant