wangguoya
c61a07712e
Fix bug when running the SD demo on KunlunXin with fp16 (#1680)
* Modify SD infer.py to use paddle_kunlunxin_fp16
* Update infer.py
* [Fix bug] Fix SD demo infer.py for KunlunXin using fp16
2023-03-27 14:04:21 +08:00
wangguoya
bf6caeb2ce
Modify SD demo infer.py to use paddle_kunlunxin_fp16 (#1612)
* Modify SD infer.py to use paddle_kunlunxin_fp16
* Update infer.py
2023-03-16 19:48:15 +08:00
yeliang2258
45865c8724
[Other] Change all XPU to KunlunXin (#973)
* [FlyCV] Bump up FlyCV -> official release 1.0.0
* XPU to KunlunXin
* update
* update model link
* update doc
* update device
* update code
* useless code
Co-authored-by: DefTruth <qiustudent_r@163.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
2022-12-27 10:02:02 +08:00
yeliang2258
1911002b90
[Backend] Add stable_diffusion and detection model support for KunlunXin XPU (#954)
* [FlyCV] Bump up FlyCV -> official release 1.0.0
* add valid_xpu for detection
* add paddledetection model support for xpu
* support all detection model in c++ and python
* fix code
* add python stable_diffusion support
Co-authored-by: DefTruth <qiustudent_r@163.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
2022-12-26 16:22:52 +08:00
Jason
4351ce8665
Rename PaddleBackend to PaddleInferBackend (#728)
2022-11-28 21:29:09 +08:00
Jack Zhou
d4995e5468
[Model] Add stable diffusion model based on fastdeploy (#297)
* Add stable diffusion model based on fastdeploy
* Add sd infer
* pipelines->multimodal
* add create_ort_runtime
* use fp16 input
* fix pil
* Add optimize unet model
* add hf license
* Add workspace args
* Add profile func
* Add schedulers
* Replace torch.Tensor with np.ndarray
* Add readme
* Add trt shape setting
* add dynamic shape
* Add dynamic shape for stable diffusion
* fix max shape setting
* rename tensorrt file suffix
* update dynamic shape setting
* Add scheduler output
* Add inference_steps and benchmark steps
* add diffuser benchmark
* Add paddle infer script
* Rename 1
* Rename infer.py to torch_onnx_infer.py
* Add export torch to onnx model
* Remove export model
* Add paddle export model for diffusion
* Fix export model
* mv torch onnx infer to infer
* Fix export model
* Fix infer
* Modify create_trt_runtime and create_ort_runtime
* update export torch
* update requirements
* add paddle inference backend
* Fix unet pp run
* remove print
* Add paddle model export and infer
* Add device id
* remove profile to utils
* Add -1 device id
* Add safety checker args
* remove safety checker temporarily
* Add export model description
* Add predict description
* Fix readme
* Fix device_id description
* add timestep shape
* add use fp16 precision
* move use gpu
* Add EulerAncestralDiscreteScheduler
* Use EulerAncestralDiscreteScheduler with v1-5 model
* Add export model readme
* Add link of exported model
* Update scheduler on README
* Add stable-diffusion-v1-5
2022-11-10 14:59:07 +08:00