Create SegmentationResult doc and evaluation functions (#119)

* Update README.md

* Update README.md

* Update README.md

* Create README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add inference-time calculation to evaluation and fix some bugs

* Update classification __init__

* Move to ppseg

* Add segmentation doc

* Add PaddleClas infer.py

* Update PaddleClas infer.py

* Delete .infer.py.swp

* Add PaddleClas infer.cc

* Update README.md

* Update README.md

* Update README.md

* Update infer.py

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add PaddleSeg doc and infer.cc demo

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Create segmentation_result.md

* Update README.md

* Update segmentation_result.md

* Update segmentation_result.md

* Update segmentation_result.md

* Update classification and detection evaluation functions

Co-authored-by: Jason <jiangjiajun@baidu.com>
huangjianhui authored on 2022-08-18 13:05:28 +08:00, committed by GitHub
parent 04c1ffde2c
commit c0e5ce248d
6 changed files with 43 additions and 5 deletions


@@ -71,7 +71,12 @@ def eval_classify(model, image_file_path, label_file_path, topk=5):
     topk_acc_score = topk_accuracy(np.array(result_list), np.array(label_list))
     if topk == 1:
         scores.update({'topk1': topk_acc_score})
+        scores.update({
+            'topk1_average_inference_time(s)': average_inference_time
+        })
     elif topk == 5:
         scores.update({'topk5': topk_acc_score})
-        scores.update({'average_inference_time': average_inference_time})
+        scores.update({
+            'topk5_average_inference_time(s)': average_inference_time
+        })
     return scores
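
The hunk above relies on a topk_accuracy helper and an average_inference_time value that are computed earlier in eval_classify and are not part of this diff. As a rough illustration only (not the repository's actual implementation), a minimal sketch of such a helper, assuming result_list holds the top-k predicted class ids per image and label_list the ground-truth ids, could look like this:

import numpy as np

def topk_accuracy(topk_predictions, labels):
    # Hypothetical sketch; the real FastDeploy helper may differ.
    # topk_predictions: (N, k) array of predicted class ids per image.
    # labels: (N,) array of ground-truth class ids.
    # An image counts as a hit if its label appears anywhere in its k predictions.
    hits = np.any(topk_predictions == labels[:, None], axis=1)
    return float(hits.mean())

With topk=1 the returned scores dict then pairs the 'topk1' accuracy with the new 'topk1_average_inference_time(s)' entry, and the analogous pair is returned for topk=5.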