feat: on-demand native lib download and optional feature splitting #1039
msluszniak wants to merge 2 commits into main
Conversation
TODO: separate the xnnpack and coreml backends into separate libs so they can be opted out of the same way as opencv etc.
Adding support for Vulkan is in progress.
Force-pushed from e69584e to 5aa72cc
Status: verified working state. The PR is in a tested, working state with a clear opt-in/opt-out model for every backend except XNNPACK on Android (which stays baked into libexecutorch.so).
Verified on a Galaxy S26 Ultra and an iPhone 17 Pro simulator (Xcode 26.4.1):
Supporting executorch fork branch: msluszniak/executorch@ms/separate-backends.

Next work: make XNNPACK separable on Android too, applying the same QNN-style pattern.
Force-pushed from 5aa72cc to eb14dbe
Update: XNNPACK on Android is now separable too. Every backend is now opt-in symmetrically across both platforms:
Two changes on the executorch fork:
Verified on the same Galaxy S26 Ultra:
This finishes the work tracked above. The PR is ready for review.
The npm tarball ships without prebuilt native binaries — they are
downloaded from GitHub Releases at postinstall and extracted into
third-party/, where the existing CMake / podspec configurations pick them
up unchanged. Apps can opt out of features they don't need to skip both
the download and the native compilation:
```json
"react-native-executorch": {
  "extras": ["opencv", "phonemizer", "xnnpack", "coreml", "vulkan"]
}
```
Defaults to all enabled. Each extra trims one or more artifacts and toggles
a corresponding RNE_ENABLE_* CMake / podspec flag, dropping its sources from
compilation and its libraries from the final link.
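As a rough illustration of how a postinstall script could interpret this config, here is a minimal sketch. The helper names (`resolveExtras`, `ALL_EXTRAS`) and the normalization details are assumptions for illustration, not the actual contents of `scripts/download-libs.js`:

```javascript
// Hypothetical sketch of the extras-resolution step of a postinstall
// download script. Names here are illustrative; the real logic lives in
// scripts/download-libs.js.
const ALL_EXTRAS = ["opencv", "phonemizer", "xnnpack", "coreml", "vulkan"];

function resolveExtras(appPackageJson) {
  const cfg = appPackageJson["react-native-executorch"] || {};
  // No config at all: every extra stays enabled (the documented default).
  const requested = Array.isArray(cfg.extras) ? cfg.extras : ALL_EXTRAS;
  // Normalize against the known set so typos don't silently enable anything.
  return ALL_EXTRAS.filter((extra) => requested.includes(extra));
}

// An app that opts out of xnnpack and coreml:
const enabled = resolveExtras({
  "react-native-executorch": { extras: ["opencv", "phonemizer", "vulkan"] },
});
// Artifacts for the two omitted extras would be neither downloaded nor compiled.
console.log(enabled.join(", ")); // "opencv, phonemizer, vulkan"
```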
Per-platform behavior:
- opencv Android + iOS (iOS provided via opencv-rne CocoaPod)
- phonemizer Android + iOS
- xnnpack iOS-only as a force-loaded XnnpackBackend.xcframework;
baked into libexecutorch.so on Android
- coreml iOS-only as a force-loaded CoreMLBackend.xcframework
- vulkan Android-only as a separately-loaded
libvulkan_executorch_backend.so
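Each of these toggles maps onto an RNE_ENABLE_* build flag. A hypothetical sketch of that derivation, assuming the flag for each extra is simply its upper-cased name under the RNE_ENABLE_* convention described above (the helper itself is illustrative, not the actual build wiring):

```javascript
// Hypothetical sketch: turn the enabled extras into RNE_ENABLE_* CMake
// definitions. Exact flag spellings are an assumption based on the
// RNE_ENABLE_* convention; only the convention itself is documented.
const ALL_EXTRAS = ["opencv", "phonemizer", "xnnpack", "coreml", "vulkan"];

function toCmakeFlags(enabledExtras) {
  return ALL_EXTRAS.map((extra) => {
    const state = enabledExtras.includes(extra) ? "ON" : "OFF";
    return `-DRNE_ENABLE_${extra.toUpperCase()}=${state}`;
  });
}

// Dropping coreml and vulkan drops their sources and link libraries:
console.log(toCmakeFlags(["opencv", "phonemizer", "xnnpack"]).join(" "));
// → "-DRNE_ENABLE_OPENCV=ON -DRNE_ENABLE_PHONEMIZER=ON -DRNE_ENABLE_XNNPACK=ON -DRNE_ENABLE_COREML=OFF -DRNE_ENABLE_VULKAN=OFF"
```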
Vulkan ships as its own shared library (mirroring the QNN backend pattern)
so its load-time backend registration runs only when the user opts in. The
.so links only against vulkan_backend + vulkan_schema + executorch_core,
not the CPU kernel registries, so it does not cause duplicate kernel
registration when loaded alongside libexecutorch.so.
Force-pushed from eb14dbe to ef7ab0e
Bring NATIVE_LIBS_PIPELINE.md in sync with the current
msluszniak/executorch@ms/separate-backends tip:
- Pin SHA bumped from 1a5c0f267 to bd24ac7681.
- Patch list expanded to cover all 10 commits on the fork branch:
the original 4 (version-script removal, vulkan-shared, flatcc-Werror,
xnnpack-shared) plus the 6 added in this round (tokenizers submodule
switched to software-mansion-labs/pytorch-tokenizers@build,
build_android_library.sh forwards BACKEND_SHARED env vars to cmake,
XNNWeightsCache null-ptr fix, ANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES,
iOS create_frameworks.sh keeps merged .a files, iOS CMakePresets
disable XNNPACK_ENABLE_ARM_SME{,2}).
- iOS build section rewritten as a three-stage flow (fork .a build →
stage into RNE → repackage via ExecutorchLib/build.sh).
Thanks @msluszniak. Mind working on a small stack of PRs so it's easier to review?
@kirklandsign this particular PR solely addresses the React Native ExecuTorch repo, but what is probably interesting for you is a not-yet-published PR from the https://github.com/msluszniak/executorch/tree/@ms/separate-backends branch to executorch. For that one, I can for sure proceed with a series of small PRs :)). I just want to make sure the changes are aligned with the RNE repo first; I just need to check that iOS works the same way as Android.
Description
Removes prebuilt native binaries from the npm tarball and downloads them at postinstall from GitHub Releases instead. Apps can opt out of features they don't need, skipping both the download and the native compilation. Each backend (XNNPACK, CoreML, Vulkan) ships as its own opt-in artifact with no platform asymmetry — every flag is meaningful where the backend exists.
User configuration lives in the app's `package.json`. All extras default on; omit any to skip its artifact and disable its compilation. Each toggle drives an `RNE_ENABLE_*` flag through the podspec or build.gradle.

| Extra | iOS artifact | Android artifact |
| --- | --- | --- |
| `opencv` | `opencv-rne` CocoaPod | `libopencv_*.a` + KleidiCV HAL |
| `phonemizer` | `libphonemis.a` | `libphonemis.a` |
| `xnnpack` | `XnnpackBackend.xcframework` | `libxnnpack_executorch_backend.so` |
| `coreml` | `CoreMLBackend.xcframework` | (none) |
| `vulkan` | (none) | `libvulkan_executorch_backend.so` |

Each Android backend `.so` links only `<backend>_backend` (`--whole-archive`) + `<backend>_schema` + `executorch_core`, with no CPU kernel registries, so loading multiple side by side does not trigger duplicate kernel registration.

Supporting executorch fork branch: msluszniak/executorch@ms/separate-backends (`EXECUTORCH_BUILD_XNNPACK_BACKEND_SHARED` + `EXECUTORCH_BUILD_VULKAN_BACKEND_SHARED` switches, a `custom_ops` fix to stop the transitive XNNPACK link from leaking into `libexecutorch_jni.so`, and a flatcc `-Werror` workaround for Apple clang 21).

Introduces a breaking change?
Type of change
Tested on
Testing instructions
Test the download flow:
Test extras splitting (drop XNNPACK; works the same on iOS and Android):
- In the app's `package.json`, set `"extras": ["opencv", "phonemizer", "coreml", "vulkan"]`.
- Run `RNET_SKIP_DOWNLOAD=1 INIT_CWD=<app-root> node packages/react-native-executorch/scripts/download-libs.js`.
- Running an XNNPACK-delegated `.pte` produces `E ExecuTorch: Backend XnnpackBackend is not registered.` and the app stays alive.
- Setting `"extras": ["opencv", "phonemizer", "xnnpack", "coreml", "vulkan"]` again restores XNNPACK inference.

Test Vulkan opt-in (Android-only):
- Add `"vulkan"` to `extras`, regenerate `rne-build-config.json`, and rebuild.
- Run a Vulkan-delegated model (e.g. `yolo26n_vulkan_fp32_multi.pte`) on a GPU-capable device: the backend registers and inference runs.

Related issues
Builds toward pytorch/executorch#10457 (hot-pluggable Android backends).
Checklist