Mirror of https://github.com/PaddlePaddle/FastDeploy.git, synced 2025-12-24 13:28:13 +08:00
[Feature] Enhance build script, add pre_wheel logic (#4729)
* Enhance build script, add pre_wheel logic: updated copyright year and added precompiled wheel installation logic.
* Update the nvidia_gpu.md, add pre_wheel description
* Fix zh .md
* Update the URL, automatically detect CUDA and SM
* Fix GPU architecture string formatting in build.sh
* Change default for FD_USE_PRECOMPILED to 0
* Fix build.sh
* Add ./dist, pre-wheel path
* Simplify the process, just save the whl
* Delete pre_wheel dir
* Fix function name, extract_ops_from_precompiled_wheel
* Fix docs
* Add default commit ID in docs

Co-authored-by: plusNew001 <95567040+plusNew001@users.noreply.github.com>
@@ -80,6 +80,51 @@ bash build.sh 1 python false [80,90]
```

The built packages will be in the `FastDeploy/dist` directory.

## 5. Precompiled Operator Wheel Packages

FastDeploy provides precompiled GPU operator wheel packages for quick setup without building the entire source code.
This method currently supports **SM90 architecture (e.g., H20/H100)** and **CUDA 12.6** environments only.

> By default, `build.sh` compiles all custom operators from source. To use the precompiled package, enable it with the `FD_USE_PRECOMPILED` parameter.
> If the precompiled package cannot be downloaded or does not match the current environment, the build automatically falls back to `4. Build Wheel from Source`.
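The enable/fallback behavior above can be sketched as a small shell function. This is a hypothetical illustration only: the real logic lives in `build.sh`, and `download_precompiled_wheel` is a stand-in name, not an actual FastDeploy command.

```shell
# Hypothetical sketch of the FD_USE_PRECOMPILED fallback, not build.sh itself.
# download_precompiled_wheel is a stand-in for the real download step.
build_ops() {
    if [ "${FD_USE_PRECOMPILED:-0}" = "1" ] && download_precompiled_wheel; then
        echo "using precompiled operators"
    else
        echo "building operators from source (Section 4)"
    fi
}
```

With `FD_USE_PRECOMPILED` unset or `0` (the default), or whenever the download step fails, the source build path runs.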

First, install paddlepaddle-gpu. For detailed instructions, please refer to the [PaddlePaddle Installation Guide](https://www.paddlepaddle.org.cn/).

```shell
python -m pip install paddlepaddle-gpu==3.2.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu126/
```

Then, clone the FastDeploy repository and build using the precompiled operator wheels:

```shell
git clone https://github.com/PaddlePaddle/FastDeploy
cd FastDeploy

# Argument 1: Whether to build the wheel package (1 for yes)
# Argument 2: Python interpreter path
# Argument 3: Whether to compile CPU inference operators (false for GPU only)
# Argument 4: Target GPU architectures (currently supports [90])
# Argument 5: Whether to use precompiled operators (1 to enable)
# Argument 6 (optional): Commit ID of the precompiled operators (defaults to the current commit ID)

# Use precompiled operators for an accelerated build
bash build.sh 1 python false [90] 1

# Use the precompiled wheel from a specific commit
bash build.sh 1 python false [90] 1 7dbd9412b0de47aacad9011e8ace492af7247620
```

The downloaded wheel packages are stored in the `FastDeploy/pre_wheel` directory.
After the build completes, the operator binaries can be found in `FastDeploy/fastdeploy/model_executor/ops/gpu`.

> **Notes:**
>
> - This mode prioritizes downloading precompiled GPU operator wheels to reduce build time.
> - It currently supports **GPU + SM90 + CUDA 12.6** only.
> - For custom architectures or modified operator logic, use **source compilation (Section 4)**.
> - You can check whether the precompiled wheel for a specific commit was built successfully on the [FastDeploy CI Build Status Page](https://github.com/PaddlePaddle/FastDeploy/actions/workflows/ci_image_update.yml).
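To confirm the artifacts landed in the paths listed above, a small helper like the following can be used. It is a hypothetical convenience function, not part of FastDeploy; it only checks that `.so` files exist under the GPU ops directory.

```shell
# Hypothetical helper: report whether GPU operator binaries are present.
# Takes the FastDeploy checkout path as its argument (default: ./FastDeploy).
check_gpu_ops() {
    fd_root="${1:-FastDeploy}"
    if ls "$fd_root"/fastdeploy/model_executor/ops/gpu/*.so >/dev/null 2>&1; then
        echo "gpu ops present"
    else
        echo "gpu ops missing"
    fi
}
```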

## Environment Verification

After installation, verify the environment with this Python code: