Exporting YOLOv5 to TF-Lite

In the earlier post "YOLOv5 Android TF-Lite Integration", the TF-Lite model was exported with tf.py from https://github.com/zldrobit/yolov5.git. This evening I tried exporting with export.py from the latest YOLOv5 code instead. If you don't want to adjust the command-line arguments every time, you can directly modify the defaults in the following code:

# parse_opt() from export.py with edited defaults; parameters to change: data, weights, batch-size
def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--data', type=str, default=ROOT / 'data/ads.yaml', help='dataset.yaml path')
    parser.add_argument('--weights', type=str, default=ROOT / 'best.pt', help='weights path')
    parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640, 640], help='image (h, w)')
    parser.add_argument('--batch-size', type=int, default=1, help='batch size')
    parser.add_argument('--device', default='cpu', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--half', action='store_true', help='FP16 half-precision export')
    parser.add_argument('--inplace', action='store_true', help='set YOLOv5 Detect() inplace=True')
    parser.add_argument('--train', action='store_true', help='model.train() mode')
    parser.add_argument('--optimize', default=True, action='store_true', help='TorchScript: optimize for mobile')
    parser.add_argument('--int8', action='store_true', help='CoreML/TF INT8 quantization')
    parser.add_argument('--dynamic', action='store_true', help='ONNX/TF: dynamic axes')
    parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model')
    parser.add_argument('--opset', type=int, default=13, help='ONNX: opset version')
    parser.add_argument('--topk-per-class', type=int, default=100, help='TF.js NMS: topk per class to keep')
    parser.add_argument('--topk-all', type=int, default=100, help='TF.js NMS: topk for all classes to keep')
    parser.add_argument('--iou-thres', type=float, default=0.45, help='TF.js NMS: IoU threshold')
    parser.add_argument('--conf-thres', type=float, default=0.25, help='TF.js NMS: confidence threshold')
    parser.add_argument('--include', nargs='+',
                        default=['torchscript', 'onnx'],
                        help='available formats are (torchscript, onnx, coreml, saved_model, pb, tflite, tfjs)')
    opt = parser.parse_args()
    print_args(FILE.stem, opt)
    return opt
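
For context, export.py feeds these parsed options straight into its run() entry point, roughly as in the sketch below (paraphrased from the YOLOv5 repository around this version; run() actually accepts the full set of keyword arguments defined above), so editing the defaults in parse_opt() is all that is needed:

def main(opt):
    run(**vars(opt))  # run() performs the export for every format listed in --include

if __name__ == "__main__":
    opt = parse_opt()
    main(opt)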

After the changes, run the following command to export:

python export_ads.py  --include  tflite
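
Alternatively, if you prefer to leave the stock export.py untouched, the same settings can be passed as command-line flags instead (the paths below are the ones used in this project; substitute your own):

python export.py --data data/ads.yaml --weights best.pt --img 640 --batch-size 1 --include tflite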

Export output (the export was run twice, first with batch-size 16 and then with batch-size 1):

(E:\anaconda_dirs\venvs\yolov5_latest) F:\Pycharm_Projects\yolov5_latest>python export_ads.py  --include  tflite
export_ads: data=F:\Pycharm_Projects\yolov5_latest\data\ads.yaml, weights=F:\Pycharm_Projects\yolov5_latest\best.pt, imgsz=[640, 640], batch_size=16, device=cpu, half=False, inplace=False, train=False, optimize=True, int8=False, dynamic=False, simplify=False, opset=13, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['tflite']
YOLOv5  v5.0-458-g2c2ef25 torch 1.9.0+cpu CPU

Fusing layers...
Model Summary: 224 layers, 7053910 parameters, 0 gradients, 16.3 GFLOPs

PyTorch: starting from F:\Pycharm_Projects\yolov5_latest\best.pt (14.4 MB)
2021-10-09 21:19:55.779525: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll

TensorFlow saved_model: starting export with tensorflow 2.4.1...

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Focus                     [3, 32, 3]
2021-10-09 21:19:56.879550: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-10-09 21:19:56.880237: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll
2021-10-09 21:19:56.903797: E tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-10-09 21:19:56.907011: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: obaby-msi-ml
2021-10-09 21:19:56.907167: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: obaby-msi-ml
2021-10-09 21:19:56.907460: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-10-09 21:19:56.908195: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
  4                -1  1    156928  models.common.C3                        [128, 128, 3]
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
  6                -1  1    625152  models.common.C3                        [256, 256, 3]
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]
  8                -1  1    656896  models.common.SPP                       [512, 512, [5, 9, 13]]
  9                -1  1   1182720  models.common.C3                        [512, 512, 1, False]
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1         0  models.common.Concat                    [1]
 13                -1  1    361984  models.common.C3                        [512, 256, 1, False]
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1         0  models.common.Concat                    [1]
 17                -1  1     90880  models.common.C3                        [256, 128, 1, False]
 18                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]
 19          [-1, 14]  1         0  models.common.Concat                    [1]
 20                -1  1    296448  models.common.C3                        [256, 256, 1, False]
 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]
 22          [-1, 10]  1         0  models.common.Concat                    [1]
 23                -1  1   1182720  models.common.C3                        [512, 512, 1, False]
 24      [17, 20, 23]  1     16182  models.yolo.Detect                      [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512], [640, 640]]
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            [(16, 640, 640, 3)]  0
__________________________________________________________________________________________________
tf_focus (TFFocus)              (16, 320, 320, 32)   3488        input_1[0][0]
__________________________________________________________________________________________________
tf_conv_1 (TFConv)              (16, 160, 160, 64)   18496       tf_focus[0][0]
__________________________________________________________________________________________________
tf_c3 (TFC3)                    (16, 160, 160, 64)   18624       tf_conv_1[0][0]
__________________________________________________________________________________________________
tf_conv_7 (TFConv)              (16, 80, 80, 128)    73856       tf_c3[0][0]
__________________________________________________________________________________________________
tf_c3_1 (TFC3)                  (16, 80, 80, 128)    156288      tf_conv_7[0][0]
__________________________________________________________________________________________________
tf_conv_17 (TFConv)             (16, 40, 40, 256)    295168      tf_c3_1[0][0]
__________________________________________________________________________________________________
tf_c3_2 (TFC3)                  (16, 40, 40, 256)    623872      tf_conv_17[0][0]
__________________________________________________________________________________________________
tf_conv_27 (TFConv)             (16, 20, 20, 512)    1180160     tf_c3_2[0][0]
__________________________________________________________________________________________________
tfspp (TFSPP)                   (16, 20, 20, 512)    656128      tf_conv_27[0][0]
__________________________________________________________________________________________________
tf_c3_3 (TFC3)                  (16, 20, 20, 512)    1181184     tfspp[0][0]
__________________________________________________________________________________________________
tf_conv_35 (TFConv)             (16, 20, 20, 256)    131328      tf_c3_3[0][0]
__________________________________________________________________________________________________
tf_upsample (TFUpsample)        (16, 40, 40, 256)    0           tf_conv_35[0][0]
__________________________________________________________________________________________________
tf_concat (TFConcat)            (16, 40, 40, 512)    0           tf_upsample[0][0]
                                                                 tf_c3_2[0][0]
__________________________________________________________________________________________________
tf_c3_4 (TFC3)                  (16, 40, 40, 256)    361216      tf_concat[0][0]
__________________________________________________________________________________________________
tf_conv_41 (TFConv)             (16, 40, 40, 128)    32896       tf_c3_4[0][0]
__________________________________________________________________________________________________
tf_upsample_1 (TFUpsample)      (16, 80, 80, 128)    0           tf_conv_41[0][0]
__________________________________________________________________________________________________
tf_concat_1 (TFConcat)          (16, 80, 80, 256)    0           tf_upsample_1[0][0]
                                                                 tf_c3_1[0][0]
__________________________________________________________________________________________________
tf_c3_5 (TFC3)                  (16, 80, 80, 128)    90496       tf_concat_1[0][0]
__________________________________________________________________________________________________
tf_conv_47 (TFConv)             (16, 40, 40, 128)    147584      tf_c3_5[0][0]
__________________________________________________________________________________________________
tf_concat_2 (TFConcat)          (16, 40, 40, 256)    0           tf_conv_47[0][0]
                                                                 tf_conv_41[0][0]
__________________________________________________________________________________________________
tf_c3_6 (TFC3)                  (16, 40, 40, 256)    295680      tf_concat_2[0][0]
__________________________________________________________________________________________________
tf_conv_53 (TFConv)             (16, 20, 20, 256)    590080      tf_c3_6[0][0]
__________________________________________________________________________________________________
tf_concat_3 (TFConcat)          (16, 20, 20, 512)    0           tf_conv_53[0][0]
                                                                 tf_conv_35[0][0]
__________________________________________________________________________________________________
tf_c3_7 (TFC3)                  (16, 20, 20, 512)    1181184     tf_concat_3[0][0]
__________________________________________________________________________________________________
tf_detect (TFDetect)            ((16, 25200, 6), [(1 16182       tf_c3_5[0][0]
                                                                 tf_c3_6[0][0]
                                                                 tf_c3_7[0][0]
==================================================================================================
Total params: 7,053,910
Trainable params: 0
Non-trainable params: 7,053,910
__________________________________________________________________________________________________
2021-10-09 21:20:02.177675: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
Found untraced functions such as tf_conv_layer_call_and_return_conditional_losses, tf_conv_layer_call_fn, tf_conv_2_layer_call_and_return_conditional_losses, tf_conv_2_layer_call_fn, tf_conv_3_layer_call_and_return_conditional_losses while saving (showing 5 of 550). These functions will not be directly callable after loading.
Found untraced functions such as tf_conv_layer_call_and_return_conditional_losses, tf_conv_layer_call_fn, tf_conv_2_layer_call_and_return_conditional_losses, tf_conv_2_layer_call_fn, tf_conv_3_layer_call_and_return_conditional_losses while saving (showing 5 of 550). These functions will not be directly callable after loading.
Assets written to: F:\Pycharm_Projects\yolov5_latest\best_saved_model\assets
TensorFlow saved_model: export success, saved as F:\Pycharm_Projects\yolov5_latest\best_saved_model (239.7 MB)

TensorFlow Lite: starting export with tensorflow 2.4.1...
Found untraced functions such as tf_conv_layer_call_and_return_conditional_losses, tf_conv_layer_call_fn, tf_conv_2_layer_call_and_return_conditional_losses, tf_conv_2_layer_call_fn, tf_conv_3_layer_call_and_return_conditional_losses while saving (showing 5 of 550). These functions will not be directly callable after loading.
Found untraced functions such as tf_conv_layer_call_and_return_conditional_losses, tf_conv_layer_call_fn, tf_conv_2_layer_call_and_return_conditional_losses, tf_conv_2_layer_call_fn, tf_conv_3_layer_call_and_return_conditional_losses while saving (showing 5 of 550). These functions will not be directly callable after loading.
Assets written to: C:\Users\obaby\AppData\Local\Temp\tmpej3xo4ik\assets
2021-10-09 21:20:50.470748: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-10-09 21:20:50.471066: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-10-09 21:20:50.472942: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-10-09 21:20:50.509549: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:928] Optimization results for grappler item: graph_to_optimize
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.

2021-10-09 21:20:51.339012: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored output_format.
2021-10-09 21:20:51.339096: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:319] Ignored drop_control_dependency.
2021-10-09 21:20:51.395311: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
TensorFlow Lite: export success, saved as F:\Pycharm_Projects\yolov5_latest\best-fp16.tflite (14.3 MB)

Export complete (60.24s)
Results saved to F:\Pycharm_Projects\yolov5_latest
Visualize with https://netron.app

(E:\anaconda_dirs\venvs\yolov5_latest) F:\Pycharm_Projects\yolov5_latest>python export_ads.py  --include  tflite
export_ads: data=F:\Pycharm_Projects\yolov5_latest\data\ads.yaml, weights=F:\Pycharm_Projects\yolov5_latest\best.pt, imgsz=[640, 640], batch_size=1, device=cpu, half=False, inplace=False, train=False, optimize=True, int8=False, dynamic=False, simplify=False, opset=13, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['tflite']
YOLOv5  v5.0-458-g2c2ef25 torch 1.9.0+cpu CPU

Fusing layers...
Model Summary: 224 layers, 7053910 parameters, 0 gradients, 16.3 GFLOPs

PyTorch: starting from F:\Pycharm_Projects\yolov5_latest\best.pt (14.4 MB)
2021-10-09 21:21:03.907332: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll

TensorFlow saved_model: starting export with tensorflow 2.4.1...

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Focus                     [3, 32, 3]
2021-10-09 21:21:05.007065: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-10-09 21:21:05.007781: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll
2021-10-09 21:21:05.029777: E tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-10-09 21:21:05.032833: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: obaby-msi-ml
2021-10-09 21:21:05.032951: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: obaby-msi-ml
2021-10-09 21:21:05.033353: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-10-09 21:21:05.035414: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
  4                -1  1    156928  models.common.C3                        [128, 128, 3]
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
  6                -1  1    625152  models.common.C3                        [256, 256, 3]
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]
  8                -1  1    656896  models.common.SPP                       [512, 512, [5, 9, 13]]
  9                -1  1   1182720  models.common.C3                        [512, 512, 1, False]
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1         0  models.common.Concat                    [1]
 13                -1  1    361984  models.common.C3                        [512, 256, 1, False]
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1         0  models.common.Concat                    [1]
 17                -1  1     90880  models.common.C3                        [256, 128, 1, False]
 18                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]
 19          [-1, 14]  1         0  models.common.Concat                    [1]
 20                -1  1    296448  models.common.C3                        [256, 256, 1, False]
 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]
 22          [-1, 10]  1         0  models.common.Concat                    [1]
 23                -1  1   1182720  models.common.C3                        [512, 512, 1, False]
 24      [17, 20, 23]  1     16182  models.yolo.Detect                      [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512], [640, 640]]
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            [(1, 640, 640, 3)]   0
__________________________________________________________________________________________________
tf_focus (TFFocus)              (1, 320, 320, 32)    3488        input_1[0][0]
__________________________________________________________________________________________________
tf_conv_1 (TFConv)              (1, 160, 160, 64)    18496       tf_focus[0][0]
__________________________________________________________________________________________________
tf_c3 (TFC3)                    (1, 160, 160, 64)    18624       tf_conv_1[0][0]
__________________________________________________________________________________________________
tf_conv_7 (TFConv)              (1, 80, 80, 128)     73856       tf_c3[0][0]
__________________________________________________________________________________________________
tf_c3_1 (TFC3)                  (1, 80, 80, 128)     156288      tf_conv_7[0][0]
__________________________________________________________________________________________________
tf_conv_17 (TFConv)             (1, 40, 40, 256)     295168      tf_c3_1[0][0]
__________________________________________________________________________________________________
tf_c3_2 (TFC3)                  (1, 40, 40, 256)     623872      tf_conv_17[0][0]
__________________________________________________________________________________________________
tf_conv_27 (TFConv)             (1, 20, 20, 512)     1180160     tf_c3_2[0][0]
__________________________________________________________________________________________________
tfspp (TFSPP)                   (1, 20, 20, 512)     656128      tf_conv_27[0][0]
__________________________________________________________________________________________________
tf_c3_3 (TFC3)                  (1, 20, 20, 512)     1181184     tfspp[0][0]
__________________________________________________________________________________________________
tf_conv_35 (TFConv)             (1, 20, 20, 256)     131328      tf_c3_3[0][0]
__________________________________________________________________________________________________
tf_upsample (TFUpsample)        (1, 40, 40, 256)     0           tf_conv_35[0][0]
__________________________________________________________________________________________________
tf_concat (TFConcat)            (1, 40, 40, 512)     0           tf_upsample[0][0]
                                                                 tf_c3_2[0][0]
__________________________________________________________________________________________________
tf_c3_4 (TFC3)                  (1, 40, 40, 256)     361216      tf_concat[0][0]
__________________________________________________________________________________________________
tf_conv_41 (TFConv)             (1, 40, 40, 128)     32896       tf_c3_4[0][0]
__________________________________________________________________________________________________
tf_upsample_1 (TFUpsample)      (1, 80, 80, 128)     0           tf_conv_41[0][0]
__________________________________________________________________________________________________
tf_concat_1 (TFConcat)          (1, 80, 80, 256)     0           tf_upsample_1[0][0]
                                                                 tf_c3_1[0][0]
__________________________________________________________________________________________________
tf_c3_5 (TFC3)                  (1, 80, 80, 128)     90496       tf_concat_1[0][0]
__________________________________________________________________________________________________
tf_conv_47 (TFConv)             (1, 40, 40, 128)     147584      tf_c3_5[0][0]
__________________________________________________________________________________________________
tf_concat_2 (TFConcat)          (1, 40, 40, 256)     0           tf_conv_47[0][0]
                                                                 tf_conv_41[0][0]
__________________________________________________________________________________________________
tf_c3_6 (TFC3)                  (1, 40, 40, 256)     295680      tf_concat_2[0][0]
__________________________________________________________________________________________________
tf_conv_53 (TFConv)             (1, 20, 20, 256)     590080      tf_c3_6[0][0]
__________________________________________________________________________________________________
tf_concat_3 (TFConcat)          (1, 20, 20, 512)     0           tf_conv_53[0][0]
                                                                 tf_conv_35[0][0]
__________________________________________________________________________________________________
tf_c3_7 (TFC3)                  (1, 20, 20, 512)     1181184     tf_concat_3[0][0]
__________________________________________________________________________________________________
tf_detect (TFDetect)            ((1, 25200, 6), [(1, 16182       tf_c3_5[0][0]
                                                                 tf_c3_6[0][0]
                                                                 tf_c3_7[0][0]
==================================================================================================
Total params: 7,053,910
Trainable params: 0
Non-trainable params: 7,053,910
__________________________________________________________________________________________________
2021-10-09 21:21:08.904313: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
Found untraced functions such as tf_conv_layer_call_fn, tf_conv_layer_call_and_return_conditional_losses, tf_conv_2_layer_call_fn, tf_conv_2_layer_call_and_return_conditional_losses, tf_conv_3_layer_call_fn while saving (showing 5 of 550). These functions will not be directly callable after loading.
Found untraced functions such as tf_conv_layer_call_fn, tf_conv_layer_call_and_return_conditional_losses, tf_conv_2_layer_call_fn, tf_conv_2_layer_call_and_return_conditional_losses, tf_conv_3_layer_call_fn while saving (showing 5 of 550). These functions will not be directly callable after loading.
Assets written to: F:\Pycharm_Projects\yolov5_latest\best_saved_model\assets
TensorFlow saved_model: export success, saved as F:\Pycharm_Projects\yolov5_latest\best_saved_model (239.7 MB)

TensorFlow Lite: starting export with tensorflow 2.4.1...
Found untraced functions such as tf_conv_layer_call_fn, tf_conv_layer_call_and_return_conditional_losses, tf_conv_2_layer_call_fn, tf_conv_2_layer_call_and_return_conditional_losses, tf_conv_3_layer_call_fn while saving (showing 5 of 550). These functions will not be directly callable after loading.
Found untraced functions such as tf_conv_layer_call_fn, tf_conv_layer_call_and_return_conditional_losses, tf_conv_2_layer_call_fn, tf_conv_2_layer_call_and_return_conditional_losses, tf_conv_3_layer_call_fn while saving (showing 5 of 550). These functions will not be directly callable after loading.
Assets written to: C:\Users\obaby\AppData\Local\Temp\tmp3e12zbt2\assets
2021-10-09 21:21:57.639337: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-10-09 21:21:57.639574: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-10-09 21:21:57.640650: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-10-09 21:21:57.650914: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:928] Optimization results for grappler item: graph_to_optimize
  function_optimizer: function_optimizer did nothing. time = 0.002ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.

2021-10-09 21:21:58.471195: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored output_format.
2021-10-09 21:21:58.471422: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:319] Ignored drop_control_dependency.
2021-10-09 21:21:58.529722: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
TensorFlow Lite: export success, saved as F:\Pycharm_Projects\yolov5_latest\best-fp16.tflite (14.3 MB)

Export complete (55.30s)
Results saved to F:\Pycharm_Projects\yolov5_latest
Visualize with https://netron.app

The model exported with this command is roughly twice the size of the one exported by the old tf.py, and its accuracy is slightly lower, yet it runs noticeably faster on the emulator than the old version.
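
Before dropping best-fp16.tflite into the Android project, it can be sanity-checked with the TensorFlow Lite interpreter in Python. The sketch below only feeds a random tensor and prints the tensor shapes; the file name and the 640x640 input size are the ones from the export above, and the (1, 25200, 6) output matches the model summary in the log:

import numpy as np
import tensorflow as tf

# Load the model produced by the export above
interpreter = tf.lite.Interpreter(model_path='best-fp16.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details[0]['shape'])   # expect [1 640 640 3], NHWC float32

# Random input in [0, 1] as a stand-in for a letterboxed image
img = np.random.rand(1, 640, 640, 3).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], img)
interpreter.invoke()

for out in output_details:
    print(out['shape'])            # the (1, 25200, 6) tensor holds x, y, w, h, confidence and the single class score

The raw (1, 25200, 6) predictions still need confidence filtering and NMS on the app side.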

