mirror of
https://github.com/s0md3v/roop.git
synced 2025-10-16 21:30:37 +08:00

On many systems, onnxruntime does not detect the GPU unless PyTorch is imported before it. So despite CUDA and cuDNN being set up correctly, it only uses CPUExecutionProvider. Importing PyTorch first fixes the issue, so let's use this workaround until an official solution is available. See: https://stackoverflow.com/questions/75294639/onnxruntime-inference-with-cudnn-on-gpu-only-working-if-pytorch-imported-first
6 lines
99 B
Python
import torch
import onnxruntime

use_gpu = False
providers = onnxruntime.get_available_providers()
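Once the provider list is available, code elsewhere can decide whether to run on the GPU. A minimal sketch of that decision, using a hypothetical `pick_provider` helper (not part of roop) that prefers CUDA when onnxruntime reports it:

```python
def pick_provider(available):
    """Pick the preferred onnxruntime execution provider from a list of
    available provider names, falling back to CPU."""
    # Prefer the CUDA provider when onnxruntime detected the GPU.
    if "CUDAExecutionProvider" in available:
        return "CUDAExecutionProvider"
    return "CPUExecutionProvider"


# Example: feed in the result of onnxruntime.get_available_providers().
provider = pick_provider(["CUDAExecutionProvider", "CPUExecutionProvider"])
print(provider)  # CUDAExecutionProvider
```

The chosen provider name can then be passed to `onnxruntime.InferenceSession(..., providers=[provider])`.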