Feature/add documentation #35
Conversation
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (5)
✅ Files skipped from review due to trivial changes (3)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough
This PR adds Boost-fork admin docs and new QuickBook/AsciiDoc documentation, moves boost add-or-update processing to an asynchronous Celery task, adds the Celery task, isolates the AsciiDoc subprocess environment to avoid global PATH mutation, lazy-loads the OpenAI SDK, updates docs linkcheck/links, and adjusts a test and codecov settings.
Changes
Boost Fork Features and Documentation
Sequence Diagram(s)
sequenceDiagram
participant Client
participant Endpoint as AddOrUpdateView
participant Celery
participant Service as BoostComponentService
Client->>Endpoint: POST /boost-endpoint/add-or-update/
Endpoint->>Endpoint: validate request
Endpoint->>Celery: boost_add_or_update_task.delay(...)
Endpoint-->>Client: 202 {status: accepted, task_id}
Celery->>Service: instantiate per-lang and call process_all(...)
Service->>Service: process submodules (async)
Service-->>Celery: results per lang_code
Estimated code review effort
🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 3 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
Actionable comments posted: 8
🧹 Nitpick comments (4)
weblate/lang/tests.py (1)
404-407: ⚡ Quick win
Removal of `assertNumQueries` drops the query-count regression guard.
The second `call_command("setuplang")` still usefully validates idempotency (no crash on re-run), but without any assertion on the number of queries the test can no longer catch future performance regressions where the command starts issuing unnecessary DB work on subsequent executions. If the query count is now genuinely variable (e.g., depends on existing data), the bare call is fine. If it has settled on a new stable count after the other PR changes, consider restoring a looser bound (e.g., `assertNumQueries` with the new count, or at least `assertNumQueriesLessThan`) to preserve some regression protection.
♻️ Suggested restoration of a query bound (adjust N to the actual new count)
+ with self.assertNumQueries(N):
      call_command("setuplang")
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@weblate/lang/tests.py` around lines 404-407: the test test_setuplang removed its assertNumQueries guard and thus no longer detects DB-query regressions on the repeated call_command("setuplang"); restore a query-count assertion around the second call to preserve regression protection—either use self.assertNumQueries(<N>) with the new stable expected count for the second call_command("setuplang") or, if the exact number varies, use a looser check such as self.assertNumQueriesLessThan(<M>) to ensure no unexpected extra queries are introduced while keeping the idempotency check (Language.objects.exists() may remain as-is).
weblate/utils/openrouter_translator.py (2)
56-63: ⚡ Quick win
N806: rename `OpenAIClient` to a lowercase alias to satisfy ruff/PEP 8.
Ruff flags `OpenAIClient` (a PascalCase local variable) in a function scope as N806. The simplest fix is a lowercase alias:
♻️ Proposed fix
-OpenAIClient = _openai_client_factory()
+openai_client_cls = _openai_client_factory()
 # Initialize OpenAI client with OpenRouter endpoint
-self.client = OpenAIClient(
+self.client = openai_client_cls(
     base_url="https://openrouter.ai/api/v1",
     api_key=api_key,
     timeout=60 * 20,  # 20 minutes
 )
Alternatively, add `# noqa: N806` on line 56 if you prefer to preserve the PascalCase class-alias convention.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@weblate/utils/openrouter_translator.py` around lines 56 - 63, The local PascalCase variable OpenAIClient returned from _openai_client_factory() violates ruff N806; change the local name to a lowercase alias (e.g., openai_client = _openai_client_factory()) and then instantiate self.client using that lowercase identifier (openai_client(...)) so the factory and the client initialization use a PEP8-compliant name; alternatively, if you intentionally want PascalCase, add a "# noqa: N806" comment on the OpenAIClient assignment line.
18-28: 💤 Low value
`_openai_client_factory` is missing a return type annotation.
Per the project's type-hints requirement, the function needs an explicit return type. Since `OpenAI` is a conditional import, thread it through `TYPE_CHECKING`:
♻️ Proposed fix
 if TYPE_CHECKING:
     from weblate.trans.models.translation import Translation
+    from openai import OpenAI as _OpenAIType

-def _openai_client_factory():
+def _openai_client_factory() -> type[_OpenAIType]:
     """Return OpenAI client class from optional ``openai`` PyPI dependency."""
As per coding guidelines, "Type hints are required; project uses py.typed and mypy for type checking."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@weblate/utils/openrouter_translator.py` around lines 18-28: the function _openai_client_factory lacks a return type annotation; add a typed return using typing.TYPE_CHECKING to import the OpenAI class for type-checking and annotate the function to return a Type of that class (or Type[Any] if you prefer a more general annotation). Specifically, add an import for TYPE_CHECKING and Type from typing, inside a TYPE_CHECKING block import OpenAI as OpenAIClient for the annotation, then change def _openai_client_factory() to include the return type (e.g., -> Type["OpenAIClient"] or -> Type[Any]) while keeping the runtime conditional import and behavior inside the function. Ensure the annotation references the symbol OpenAIClient used in the existing conditional import.
docs/admin/boost-weblate.rst (1)
137-158: ⚡ Quick win
Show clients how to use `task_id`.
This section documents the accepted response but never tells callers where to poll with that `task_id`. Add a short cross-reference to :http:get:`/api/tasks/(str:uuid)/` so integrators know how to fetch completion state and results.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@docs/admin/boost-weblate.rst` around lines 137 - 158, Add a short cross-reference after the accepted response example that tells callers where to poll the returned task_id by linking to the task status endpoint; e.g., reference the GET endpoint /api/tasks/(str:uuid)/ (or :http:get:`/api/tasks/(str:uuid)/`) immediately after the JSON response so users of /boost-endpoint/add-or-update/ know how to fetch completion state and results using the provided "task_id".
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@weblate/boost_endpoint/__init__.py`:
- Around line 1-3: Remove the executable bit on the package __init__.py and
commit the normalized line endings: clear the executable flag for
weblate/boost_endpoint/__init__.py in the git index, re-run the
pre-commit/mixed-line-ending normalization to update the file, stage the
normalized file and commit the change so the executable bit removal and
line-ending fixes are recorded in the repo (ensure the index shows the file as
non-executable before pushing).
In `@weblate/boost_endpoint/apps.py`:
- Around line 1-3: The file apps.py currently has an executable bit set and
mixed line endings; remove the executable bit (e.g., run git update-index
--chmod=-x for weblate/boost_endpoint/apps.py) and normalize/commit the line
endings the pre-commit hook applied (same treatment as __init__.py) so the file
is no longer executable and uses consistent line endings.
In `@weblate/boost_endpoint/serializers.py`:
- Around line 1-3: The file has an accidental executable bit set and mixed line
endings; remove the exec bit for serializers.py using git update-index
--chmod=-x and re-run the repository's pre-commit hooks (or let CI run them) to
normalize line endings and produce a clean commit; ensure the change is
committed so CI no longer fails on weblate boost_endpoint serializers.py.
In `@weblate/boost_endpoint/services.py`:
- Around line 636-638: The fixed sleep currently happens after adding the
language (so add_new_language() runs immediately and then the worker sleeps),
which defeats the intended "wait before adding"; move the wait logic so it runs
before invoking the add_new_language operation (or replace it with a bounded
readiness-poll that checks the target service before calling add_new_language on
the component), ensuring you use the same BOOST_ENDPOINT_ADD_TRANSLATION_SECONDS
setting and do not block unboundedly per component.
- Around line 70-74: The current truncate_component_slug function can produce
collisions by only keeping head+sep+tail; change it to insert a short
deterministic hash of the full slug between head and tail so the truncated slug
stays within max_len but is collision-resistant. Implement in
truncate_component_slug: when slug is longer than max_len compute a stable hash
(e.g., sha1 hex digest) and use the first N hex chars (e.g., 8) as hash_token,
then compute head_len = max_len - len(TRUNCATE_SLUG_SEP) - len(hash_token) -
TRUNCATE_SLUG_TAIL, take slug[:head_len] + TRUNCATE_SLUG_SEP + hash_token +
slug[-TRUNCATE_SLUG_TAIL:], ensuring total length ≤ max_len; use a cryptographic
or stable hash function (sha1) for determinism and update constants or logic
accordingly so existing callers of truncate_component_slug, get_or_create, and
delete remain compatible.
- Around line 793-804: The current authorization check builds project_slug as
"boost-{_submodule_slug(submodule)}-documentation" but get_or_create_project()
actually uses a slug that includes the language suffix (e.g.
"boost-{...}-documentation-{lang_code}"), so existing_project is looked up with
the wrong slug and permission checks can be bypassed; fix it by constructing the
exact same slug used by get_or_create_project() (include the lang_code suffix)
or, better, call get_or_create_project() (or its slug helper) to obtain the
canonical slug before querying and then perform user.has_perm("project.edit",
existing_project) / user.has_perm("project.add") against that correctly-resolved
project (referencing project_slug, existing_project, get_or_create_project()).
In `@weblate/boost_endpoint/urls.py`:
- Around line 1-12: This file was committed with mixed line endings and the
executable bit set causing pre-commit failures (EXE002); open the module
containing urlpatterns, BoostEndpointInfo, and AddOrUpdateView and re-save it
with LF line endings and remove the executable permission bit (chmod -x) so it's
a normal Python module (no shebang needed), and apply the same LF and permission
fixes to the other new weblate.boost_endpoint Python modules in this PR.
In `@weblate/boost_endpoint/views.py`:
- Around line 55-61: The call to boost_add_or_update_task.delay(...) can raise
if the broker is unavailable; wrap the call to boost_add_or_update_task.delay
(the invocation that passes organization, add_or_update, version, extensions,
user_id=request.user.pk) in a try/except, on exception call report_error(...)
with context including the exception, the input data and request.user.pk, and
return a deterministic HTTP error response indicating the task was not queued
(use 503 Service Unavailable for broker/unreachable errors or 500 for unexpected
failures) instead of letting the exception propagate; ensure normal success
still returns the existing async_result handling when no exception occurs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ba3f75b5-582a-4531-9030-dbfb9b41e2d2
📒 Files selected for processing (18)
codecov.yml
docs/admin/boost-weblate.rst
docs/api.rst
docs/formats.rst
docs/formats/asciidoc.rst
docs/formats/quickbook.rst
docs/index.rst
weblate/boost_endpoint/__init__.py
weblate/boost_endpoint/apps.py
weblate/boost_endpoint/serializers.py
weblate/boost_endpoint/services.py
weblate/boost_endpoint/tasks.py
weblate/boost_endpoint/urls.py
weblate/boost_endpoint/views.py
weblate/formats/asciidoc.py
weblate/lang/tests.py
weblate/settings_example.py
weblate/utils/openrouter_translator.py
💤 Files with no reviewable changes (1)
- codecov.yml
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
weblate/boost_endpoint/services.py (1)
688-740: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
`CalledProcessError.stderr` is `bytes` when any of the three `check=True` git calls fail.
The `git add`, `git commit`, and `git push` calls capture output without `text=True`, so `CalledProcessError.stderr` is a bytes sequence, not a string, unless `text=True` is passed. When the error handler at line 739 formats `e.stderr` into an f-string and appends it to `result["errors"]`, callers receive entries like `"Git commit/push failed: b'fatal: ...\n'"`. The `git status` call at line 694 already uses `text=True` correctly.
🛠️ Proposed fix
 subprocess.run(
     ["git", "-C", base_path, "add", "--", *rel_paths],
     check=True,
     capture_output=True,
+    text=True,
     timeout=60,
 )
 ...
 subprocess.run(
     [
         "git",
         "-C",
         base_path,
         "commit",
         ...
     ],
     check=True,
     capture_output=True,
+    text=True,
     timeout=30,
 )
 ...
 subprocess.run(
     [
         "git",
         "-C",
         base_path,
         "push",
         ...
     ],
     check=True,
     capture_output=True,
+    text=True,
     timeout=120,
 )
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@weblate/boost_endpoint/services.py` around lines 688 - 740, The git subprocess.run calls that use capture_output=True ("git add", "git commit", "git push" in the block around the subprocess.run calls) do not pass text=True, so CalledProcessError.stderr can be bytes; update those subprocess.run invocations to include text=True so stdout/stderr are strings, and also harden the exception handler at the LOGGER.warning / result["errors"].append logic to ensure e.stderr is a str (e.g. use (e.stderr.decode() if isinstance(e.stderr, bytes) else e.stderr) or fallback to str(e) when building the log and error message) so callers never get b'...'-prefixed byte strings.
♻️ Duplicate comments (4)
weblate/boost_endpoint/services.py (3)
636-638: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Sleep placement issue from previous review remains unresolved.
`time.sleep(settings.BOOST_ENDPOINT_ADD_TRANSLATION_SECONDS)` at line 636 executes after `add_new_language()` returns (line 612), so any intended pre-creation readiness wait never fires. Every successful component then idles the Celery worker for the full sleep duration. This needs to be moved before the `add_new_language` call (or replaced with a bounded readiness poll).
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@weblate/boost_endpoint/services.py` around lines 636 - 638, The sleep is currently executed after add_new_language() returns causing unnecessary idle time; move the time.sleep(settings.BOOST_ENDPOINT_ADD_TRANSLATION_SECONDS) so it runs before calling self.add_new_language(component) (or replace it with a bounded readiness poll loop checking whatever precondition you expect) to ensure the wait happens prior to creation; update the surrounding logic where LOGGER.info("Added language %s to %s", self.lang_code, component.name) is logged so it remains after successful add_new_language().
793-804: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win
Authorization bypass from previous review remains unresolved.
The permission guard at line 793 constructs `project_slug = f"boost-{_submodule_slug(submodule)}-documentation"`, but `get_or_create_project()` creates the project with slug `f"boost-{slug}-documentation-{self.lang_code}"` (line 256). For any existing language-specific project, `existing_project` resolves to `None`, so the `project.edit` check is skipped and a caller with only `project.add` permission can mutate existing project components.
🔒 Proposed fix (same as prior review)
-project_slug = f"boost-{_submodule_slug(submodule)}-documentation"
+project_slug = (
+    f"boost-{_submodule_slug(submodule)}-documentation-{self.lang_code}"
+)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@weblate/boost_endpoint/services.py` around lines 793 - 804, The code builds project_slug as f"boost-{_submodule_slug(submodule)}-documentation" but get_or_create_project() creates projects with the language-specific suffix (f"boost-{slug}-documentation-{self.lang_code}"), causing existing_project to be None and skipping the project.edit check; update the permission check to use the exact same slug generation used by get_or_create_project (include self.lang_code or reuse the same slug variable/function that get_or_create_project uses) so existing_project resolves correctly and user.has_perm("project.edit", existing_project) is enforced before allowing component creation.
70-74: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Slug collision issue from previous review remains unresolved.
The head+sep+tail truncation in `truncate_component_slug` can map two distinct long slugs to the same value when they share a common prefix/suffix, which can cause silent merges or wrong-component deletion in `process_submodule`. The previously suggested fix — inserting a short stable hash (e.g., 8 hex chars of `sha1`) between the head and tail — still hasn't been applied.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@weblate/boost_endpoint/services.py` around lines 70-74: truncate_component_slug currently truncates by joining head+sep+tail which can collide; change it to insert a short stable hash of the original slug (e.g., first 8 hex chars of sha1) between head and tail to make truncated values unique. In the truncate_component_slug function compute the sha1 hex digest of slug and take the first 8 chars, then build the truncated value as head + TRUNCATE_SLUG_SEP + hash + TRUNCATE_SLUG_SEP + tail (using TRUNCATE_SLUG_HEAD and TRUNCATE_SLUG_TAIL to select slices), and if that assembled string would exceed max_len trim the tail slice accordingly so the final string length <= MAX_COMPONENT_SLUG_LENGTH; keep the hash and separators intact so collisions are avoided.
weblate/boost_endpoint/views.py (1)
55-61: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Unhandled broker failure from previous review remains unresolved.
`boost_add_or_update_task.delay(...)` communicates synchronously with the broker at call time. If Redis/Celery is unavailable, an unhandled exception propagates past validation and returns HTTP 500, preventing clients from distinguishing between a validation failure and a service outage. The previously suggested fix — wrapping the call in `try/except`, calling `report_error(...)`, and returning a deterministic 503 — still hasn't been applied.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@weblate/boost_endpoint/views.py` around lines 55 - 61, Wrap the synchronous broker call boost_add_or_update_task.delay(...) in a try/except that catches broker/connection exceptions (e.g., Kombu/Celery/Redis connection errors) and in the except block call report_error(...) with the caught exception and context (include request.user.pk and payload like data["organization"], data["add_or_update"], data["version"]), then return an HTTP 503 response (deterministic message) instead of letting the exception propagate; keep the normal flow (returning the async task info) in the try branch if delay succeeds.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@weblate/boost_endpoint/views.py`:
- Around line 63-73: The "detail" response string in the return Response block
is not marked for translation; import gettext_lazy as _ (from
django.utils.translation import gettext_lazy as _) at the top of
weblate/boost_endpoint/views.py and wrap the multi-line message passed to the
"detail" key with _(…) so the Response (in the view function returning the
accepted status and task_id) returns a translatable lazy string.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 974fa912-a624-40aa-ab3c-79c52d5ad76b
📒 Files selected for processing (7)
weblate/boost_endpoint/__init__.py
weblate/boost_endpoint/apps.py
weblate/boost_endpoint/serializers.py
weblate/boost_endpoint/services.py
weblate/boost_endpoint/tasks.py
weblate/boost_endpoint/urls.py
weblate/boost_endpoint/views.py
🚧 Files skipped from review as they are similar to previous changes (1)
- weblate/boost_endpoint/tasks.py
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
weblate/boost_endpoint/views.py (1)
36-36:⚠️ Potential issue | 🟠 Major | ⚡ Quick win
postmethod is missing type hints — required by project conventions.The method signature has no type annotations for
request,format, or the return type. Withfrom __future__ import annotationsalready present,Requestcan be guarded underTYPE_CHECKINGto satisfy the project's "type-only imports" guideline.🛠️ Proposed fix
+from typing import TYPE_CHECKING + +if TYPE_CHECKING: + from rest_framework.request import Request ... - def post(self, request, format=None): # pylint: disable=redefined-builtin # noqa: A002 + def post(self, request: Request, format: str | None = None) -> Response: # pylint: disable=redefined-builtin # noqa: A002Apply the same fix to the
getmethod inBoostEndpointInfofor consistency.As per coding guidelines, "Type hints are required; project uses py.typed and mypy for type checking" and "Use
TYPE_CHECKINGimports for type-only imports to avoid circular dependencies in Python."🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@weblate/boost_endpoint/views.py` at line 36, Add type annotations to the BoostEndpointInfo view methods: annotate post (and similarly get) parameters and return type using TYPE_CHECKING-guarded imports for Request and Response to keep them as type-only imports; add "from typing import TYPE_CHECKING, Optional" and inside the TYPE_CHECKING block import Request, Response, then change signatures to accept request: Request, format: Optional[str] = None and return -> Response (apply the same change to get) so the methods comply with project type-hinting conventions.
♻️ Duplicate comments (2)
weblate/boost_endpoint/views.py (2)
63-73: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
`"detail"` string still not wrapped with `gettext_lazy` — unresolved from the previous review.
The human-readable `"detail"` message returned in the 202 response is a user-facing string and must be translatable. As per coding guidelines, all user-facing strings must be translatable using `gettext_lazy` or template translation tags.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@weblate/boost_endpoint/views.py` around lines 63 - 73, The "detail" message in the Response is not marked for translation; update the Response construction in boost_endpoint/views.py (the block that returns Response with "status":"accepted" and "task_id": str(async_result.id)) to wrap the user-facing detail string with gettext_lazy (imported as _ from django.utils.translation). Add the import if missing and replace the plain string with _(...) so the message becomes translatable while preserving the same wording and formatting.
55-61: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Unhandled broker enqueue failure — still unresolved from the previous review.
`boost_add_or_update_task.delay(...)` at this point can still raise (e.g., `kombu.exceptions.OperationalError`) if the broker is unreachable, producing a 500 even after successful validation. Wrapping this call with a `try/except`, calling `report_error(...)`, and returning a deterministic `503` remains unaddressed.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@weblate/boost_endpoint/views.py` around lines 55 - 61, The call to boost_add_or_update_task.delay(...) can raise broker errors (e.g., kombu OperationalError) and must be wrapped in a try/except so enqueue failures don't return 500; around the boost_add_or_update_task.delay(...) call (the code that assigns async_result) catch broad broker-related exceptions, call report_error(...) with the caught exception and context (include organization, version, user_id/request.user.pk and the caught error), and return a deterministic HTTP 503 response instead of letting the exception propagate; ensure the async_result variable is only used after successful enqueue and that the error path returns immediately with the 503.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b2b43e6c-a50f-4845-a8d6-4a8c3bea188f
📒 Files selected for processing (5)
docs/admin/boost-weblate.rst
docs/formats/quickbook.rst
weblate/boost_endpoint/tasks.py
weblate/boost_endpoint/views.py
weblate/utils/openrouter_translator.py
✅ Files skipped from review due to trivial changes (2)
- docs/admin/boost-weblate.rst
- docs/formats/quickbook.rst
🚧 Files skipped from review as they are similar to previous changes (2)
- weblate/boost_endpoint/tasks.py
- weblate/utils/openrouter_translator.py
Codecov Report
❌ Patch coverage is
❌ Your patch check has failed because the patch coverage (28.12%) is below the target coverage (100.00%). You can increase the patch coverage or adjust the target coverage.
☔ View full report in Codecov by Sentry.
Closes #36
Summary by CodeRabbit
New Features
Bug Fixes
Documentation