Commit 81d4e0c
docs: add local-model providers section to README
Promotes the existing `### LLM endpoint compatibility` subsection
from inside `## Configuration` to a top-level `## Using local LLM
providers` section, expanded with per-provider paragraphs covering
Ollama, llama.cpp, and vLLM/DeepSeek per CONV7_handoff.md sec 5.4.
This is the documentation half of Reviewer 2's optional
recommendation O7 ("API Flexibility: Support for local models...
will further improve metaScreener"). The technical capability has
existed since the project's first release - any OpenAI-compatible
endpoint just works once OPENAI_BASE_URL is set - but it was
previously buried in a single-paragraph subsection. The new
structure gives the topic visibility appropriate to its scope and
provides concrete copy-paste-ready environment variable settings
for each common provider.
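For reference, the contract being documented is the standard OpenAI-style
environment override. A minimal sketch (the variable names come from the
README; the endpoint URL and key values are illustrative placeholders):

```bash
# Point metaScreener at any OpenAI-compatible endpoint by overriding the base URL.
export OPENAI_BASE_URL="https://your-endpoint.example/v1"  # placeholder; use your provider's URL
export OPENAI_API_KEY="sk-..."  # real key for hosted APIs; most local servers accept any non-empty string
# The Model field is then set to a model name the endpoint actually serves.
```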
WHAT THE NEW SECTION COVERS:
* Opening paragraph stating that metaScreener targets any
OpenAI-compatible endpoint, with a bulleted summary
distinguishing hosted commercial APIs (Azure OpenAI, DeepSeek)
from locally hosted models (Ollama, llama.cpp, vLLM).
* The OPENAI_BASE_URL / OPENAI_API_KEY / Model field contract,
explained once at section level so per-provider paragraphs
don't need to repeat it (a consolidated per-provider sketch
follows this list).
* ### Ollama subsection: endpoint URL, install/pull workflow,
Model field guidance.
* ### llama.cpp subsection: llama-server invocation, endpoint URL,
note that Model field is informational when running llama.cpp
directly (the server uses whichever model is currently loaded).
* ### vLLM and DeepSeek subsection: vLLM as a high-throughput
self-hosted alternative; DeepSeek as a hosted alternative with
larger context windows than GPT-4o-mini.
* Closing evidence-gating caveat (preserved VERBATIM from the
previous subsection per sec 5.4: open-weight model
compatibility with the evidence-gating protocol has not been
formally tested; users testing local models are invited to
file feedback).
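To make the per-provider guidance concrete, here is a hedged sketch of the
settings each subsection documents. The ports and launch commands are the
providers' documented defaults, not values taken from the README itself, and
the model names are illustrative:

```bash
# Ollama: serves an OpenAI-compatible API under /v1 on its default port 11434.
#   ollama pull llama3          # install Ollama, then pull a model
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"  # Ollama ignores the key; any non-empty value works

# llama.cpp: llama-server listens on port 8080 by default.
#   llama-server -m ./model.gguf --port 8080
export OPENAI_BASE_URL="http://localhost:8080/v1"
export OPENAI_API_KEY="none"    # Model field is informational; the server answers with whatever model it loaded

# vLLM: OpenAI-compatible server on port 8000 by default.
#   vllm serve meta-llama/Meta-Llama-3-8B-Instruct
export OPENAI_BASE_URL="http://localhost:8000/v1"
export OPENAI_API_KEY="none"

# DeepSeek: hosted, so a real key is required.
export OPENAI_BASE_URL="https://api.deepseek.com"
export OPENAI_API_KEY="sk-..."  # issued by the DeepSeek platform; Model field e.g. "deepseek-chat"
```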
WHAT WAS REMOVED:
* The previous `### LLM endpoint compatibility` subsection
inside `## Configuration` (8 lines). Its content is fully
absorbed into the new top-level section, with the bullet list
of compatible backends restructured and expanded. The verbatim
caveat is preserved word-for-word as the closing note.
`## Configuration` retains its `### Environment variables`
subsection unchanged; only the LLM-endpoint subsection is moved
out.
INVARIANTS PRESERVED:
* Test count unchanged (no test changes): 103 passed, 1 xfailed.
* The README badge regression test added in C0
(test_readme_tested_on_badge_lists_actual_ci_platforms) still
passes via the GitHub Actions CI badge present from C1.
* No code changes; no plugin changes; no test changes.
Spec: see CONV7_handoff.md sec 4 ("Add README section on local-model
providers") and sec 5.4 (where it goes; verbatim caveat directive).1 parent dce0352 commit 81d4e0c
1 file changed: 19 additions & 5 deletions