Author: RayChiu_Labloy
Copyright notice: all rights reserved by the author. For commercial reuse please contact the author for authorization; for non-commercial reuse please credit the source.


Contents

Prepare the PyTorch YOLO environment

Install OpenVINO

Download the yolov5s model and convert it to ONNX

Use export.py, the ready-made script in the repository root

Modifications to yolo.py and export.py

In export.py, change the ONNX opset version to match your installed onnx version

Modify the forward method in yolo.py

Also change the two places in common.py, lines 42-45 and lines 80-83

Problems encountered along the way

Conversion reports No module named 'onnx'

Test with the ONNX model

Comparing the ONNX model with the original .pt model

Speed

Size

Simplify the ONNX model with onnx-simplifier

Install onnx-simplifier

Simplify

Result

Generate the IR files

First find the output nodes to export

Go to the Model Optimizer directory and run the conversion script mo.py

Problems encountered

Run inference on the XML model with Python

yolo_openvino_demo.py source code


Prepare the PyTorch YOLO environment

See my earlier post: win10搭建pytorch环境跑通pytorch版本的yolov5_RayChiu757374816的博客-CSDN博客

Install OpenVINO

See: win10 安装OpenVINO并在tensorflow环境下测试官方demo_RayChiu757374816的博客-CSDN博客

Download the yolov5s model and convert it to ONNX

Use export.py, the ready-made script in the repository root

In a terminal, run python export.py --include=onnx to export the ONNX file. Note the "--include=onnx" argument; without it, extra formats are exported as well. You also need to point the other arguments at the model you want to convert, as shown in the figure:

The ONNX file is generated in the same directory as the --weights path.
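For reference, a full command might look like this (a sketch only: the weights path here is illustrative, and whether --opset is accepted as a flag depends on your yolov5 version; otherwise edit the opset inside export.py as described below):

python export.py --weights ./yolov5s.pt --include onnx --opset 10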

Modifications to yolo.py and export.py

In export.py, change the ONNX opset version to match your installed onnx version

Otherwise OpenVINO will fail when converting to IR files; see the problem description at the end of this article.
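The change boils down to the opset_version argument of the torch.onnx.export call inside export.py (a sketch; the surrounding variable names differ between yolov5 versions):

torch.onnx.export(
    model, im, f,            # model, example input tensor, output path (names vary by version)
    opset_version=10,        # default was 12; match the onnx version you actually have installed
    input_names=['images'],
    output_names=['output'])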

Modify the forward method in yolo.py

def forward(self, x):
    z = []  # inference output
    for i in range(self.nl):
        x[i] = self.m[i](x[i])  # conv
        bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
        x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
        if not self.training:  # inference
            if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
                self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)
            y = x[i].sigmoid()
            c = (y[..., 0:2] * 2. - 0.5 + self.grid[i].to(x[i].device)) * self.stride[i]  # xy
            d = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
            e = y[..., 4:]
            f = torch.cat((c, d, e), 4)
            z.append(f.view(bs, -1, self.no))
    return x if self.training else torch.cat(z, 1)

There are also two places to change in common.py, at lines 42-45 and lines 80-83:

# if yolov4
# self.act = Mish() if act else nn.Identity()
# if yolov5
self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())

# if yolov4
# self.act = Mish()
# if yolov5
self.act = nn.LeakyReLU(0.1, inplace=True)
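For orientation, these assignments sit in the convolution blocks of models/common.py; a simplified sketch of the surrounding class (the real file contains more, and autopad is a helper defined in the same file):

class Conv(nn.Module):
    # Standard convolution block in models/common.py (simplified sketch)
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        # the line being swapped above: Mish for yolov4 weights, SiLU for yolov5
        self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))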

Problems encountered along the way

Conversion reports No module named 'onnx'

Install command:

conda install -c conda-forge onnx

The dependencies are installed automatically:

The ONNX file is generated successfully:
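As a quick sanity check, the exported file can be validated with the onnx package installed above (a minimal sketch):

import onnx

m = onnx.load("yolov5s.onnx")
onnx.checker.check_model(m)       # raises an exception if the graph is malformed
print(m.opset_import[0].version)  # confirm the opset actually written into the file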

Test with the ONNX model

Install onnxruntime:

pip install onnxruntime

Test command:

python detect.py --source ./data/images/bus.jpg --weights=./yolov5s.onnx

The result looks fine.
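If you want to look at the raw ONNX output outside of detect.py, here is a minimal onnxruntime sketch (the letterboxing and NMS that detect.py performs are omitted; check session.get_inputs() for the real input name and shape):

import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov5s.onnx", providers=["CPUExecutionProvider"])
inp = session.get_inputs()[0]

img = cv2.imread("./data/images/bus.jpg")
img = cv2.resize(img, (640, 640))                # plain resize; detect.py letterboxes instead
blob = img[:, :, ::-1].transpose(2, 0, 1)[None]  # BGR->RGB, HWC->NCHW
blob = np.ascontiguousarray(blob, dtype=np.float32) / 255.0

pred = session.run(None, {inp.name: blob})[0]    # raw predictions, e.g. (1, 25200, 85)
print(pred.shape)                                # NMS still has to be applied to these boxes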

Comparing the ONNX model with the original .pt model

Speed:

Time taken by the original model and the ONNX model on the official sample images:

The ONNX model is nearly twice as fast as the original model.

Size

With no further processing, the ONNX model is slightly larger:

Simplify the ONNX model with onnx-simplifier

Install onnx-simplifier

pip install -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com onnx-simplifier

Simplify

python -m onnxsim ./yolov5s.onnx ./yolov5s_sim.onnx

Result:

The size barely changed, and inference is slightly faster.
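The same simplification can also be run from Python via onnx-simplifier's simplify() API (a sketch):

import onnx
from onnxsim import simplify

model = onnx.load("./yolov5s.onnx")
model_sim, ok = simplify(model)  # returns (simplified model, whether the consistency check passed)
assert ok, "simplified model failed the consistency check"
onnx.save(model_sim, "./yolov5s_sim.onnx")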

Generate the IR files

First find the output nodes to export:

Open yolov5s_sim.onnx in Netron, locate its three Transpose nodes, trace each one upward to the corresponding Conv node, and note the node names.

Adding --output Conv_455,Conv_504,Conv_553 to the command below designates these nodes as the model outputs.
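If you prefer not to click through Netron, the candidate node names can also be listed from Python (a sketch; you still have to pick out the three Conv nodes that feed the final Transpose/Reshape heads):

import onnx

model = onnx.load("./yolov5s_sim.onnx")
for node in model.graph.node:
    if node.op_type == "Conv":
        print(node.name, "->", list(node.output))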

Go to the Model Optimizer directory and run the conversion script mo.py

The so-called IR files are what OpenVINO produces when mo.py converts a model (for example one in ONNX format). Two files are generated: a .bin file holding the weights and an .xml file describing the network structure.

Open a command prompt and change into the directory containing OpenVINO's mo.py:

cd C:\Program Files (x86)\IntelSWTools\openvino_2020.4.287\deployment_tools\model_optimizer

Run the conversion command (remember to switch to the pytorch_py37 conda environment, since it has the required onnx dependency installed; -s 255 makes the IR divide input pixel values by 255, and --reverse_input_channels reorders the input channels so BGR frames from OpenCV can be fed directly):

python mo.py --input_model E:\projects\pyHome\about_yolo\yolov5-master\changeModle2\yolov5s_sim.onnx  -s 255 --reverse_input_channels --output_dir E:\projects\pyHome\about_yolo\yolov5-master\changeModle2  --output Conv_455,Conv_504,Conv_553

Problems encountered

Problem 1: It complains that networkx and defusedxml are not installed, so install them:

pip install networkx defusedxml

It still reports missing dependencies and suggests running the install_prerequisites_onnx.bat script, so go to the OpenVINO installation directory and run it:

cd C:\Program Files (x86)\IntelSWTools\openvino_2020.4.287\deployment_tools\model_optimizer\install_prerequisites
install_prerequisites_onnx.bat

Problem 2: Running the conversion command later reports ONNX Resize operation from opset 12 is not supported

It looks like the opset the ONNX file was exported with does not match the installed onnx version:

The export default is 12:

while the version I have installed is 10 (check with conda list):

Change export.py to match the installed version, re-export the ONNX file, and the conversion then succeeds:

Problem 3: PermissionError: [Errno 13] Permission denied:

This is just insufficient permissions; open the cmd window as Administrator and run the command again.

Run inference on the XML model with Python

The Python environment does not include the OpenVINO™ toolkit, so install it with pip:

pip install openvino

The script yolo_openvino_demo.py is listed below; the command is:

python yolo_openvino_demo.py -m ./changeModle2/yolov5s_sim.xml -i ./data/images/bus.jpg -at yolov5

Result:

yolo_openvino_demo.py source code:

#!/usr/bin/env python
"""Copyright (C) 2018-2019 Intel CorporationLicensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.
"""
from __future__ import print_function, divisionimport logging
import os
import sys
from argparse import ArgumentParser, SUPPRESS
from math import exp as exp
from time import time
import numpy as npimport ngraph
import cv2
from openvino.inference_engine import IENetwork, IECorelogging.basicConfig(format="[ %(levelname)s ] %(message)s", level=logging.INFO, stream=sys.stdout)
log = logging.getLogger()def build_argparser():parser = ArgumentParser(add_help=False)args = parser.add_argument_group('Options')args.add_argument('-h', '--help', action='help', default=SUPPRESS, help='Show this help message and exit.')args.add_argument("-m", "--model", help="Required. Path to an .xml file with a trained model.",required=True, type=str)args.add_argument("-at", "--architecture_type", help='Required. Specify model\' architecture type.',type=str, required=True, choices=('yolov3', 'yolov4', 'yolov5', 'yolov4-p5', 'yolov4-p6', 'yolov4-p7'))args.add_argument("-i", "--input", help="Required. Path to an image/video file. (Specify 'cam' to work with ""camera)", required=True, type=str)args.add_argument("-l", "--cpu_extension",help="Optional. Required for CPU custom layers. Absolute path to a shared library with ""the kernels implementations.", type=str, default=None)args.add_argument("-d", "--device",help="Optional. Specify the target device to infer on; CPU, GPU, FPGA, HDDL or MYRIAD is"" acceptable. The sample will look for a suitable plugin for device specified. ""Default value is CPU", default="CPU", type=str)args.add_argument("--labels", help="Optional. Labels mapping file", default=None, type=str)args.add_argument("-t", "--prob_threshold", help="Optional. Probability threshold for detections filtering",default=0.5, type=float)args.add_argument("-iout", "--iou_threshold", help="Optional. Intersection over union threshold for overlapping ""detections filtering", default=0.4, type=float)args.add_argument("-ni", "--number_iter", help="Optional. Number of inference iterations", default=1, type=int)args.add_argument("-pc", "--perf_counts", help="Optional. Report performance counters", default=False,action="store_true")args.add_argument("-r", "--raw_output_message", help="Optional. Output inference results raw values showing",default=False, action="store_true")args.add_argument("--no_show", help="Optional. 
Don't show output", action='store_true')return parserclass YoloParams:# ------------------------------------------- Extracting layer parameters ------------------------------------------# Magic numbers are copied from yolo samplesdef __init__(self, param, side, yolo_type):self.coords = 4 if 'coords' not in param else int(param['coords'])self.classes = 80 if 'classes' not in param else int(param['classes'])self.side = sideif yolo_type == 'yolov4':self.num = 3self.anchors = [12.0,16.0, 19.0,36.0, 40.0,28.0, 36.0,75.0, 76.0,55.0, 72.0,146.0, 142.0,110.0, 192.0,243.0, 459.0,401.0] elif yolo_type == 'yolov4-p5':self.num = 4 self.anchors = [13.0,17.0, 31.0,25.0, 24.0,51.0, 61.0,45.0, 48.0,102.0, 119.0,96.0, 97.0,189.0, 217.0,184.0,171.0,384.0, 324.0,451.0, 616.0,618.0, 800.0,800.0] elif yolo_type == 'yolov4-p6':self.num = 4 self.anchors = [13.0,17.0, 31.0,25.0, 24.0,51.0, 61.0,45.0, 61.0,45.0, 48.0,102.0, 119.0,96.0, 97.0,189.0, 97.0,189.0, 217.0,184.0, 171.0,384.0, 324.0,451.0, 324.0,451.0, 545.0,357.0, 616.0,618.0, 1024.0,1024.0]elif yolo_type == 'yolov4-p7':self.num = 5 self.anchors = [13.0,17.0,  22.0,25.0,  27.0,66.0,  55.0,41.0, 57.0,88.0,  112.0,69.0,  69.0,177.0,  136.0,138.0,136.0,138.0,  287.0,114.0,  134.0,275.0,  268.0,248.0, 268.0,248.0,  232.0,504.0,  445.0,416.0,  640.0,640.0,812.0,393.0,  477.0,808.0,  1070.0,908.0,  1408.0,1408.0]else:self.num = 3 self.anchors = [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0,198.0, 373.0, 326.0]def log_params(self):params_to_print = {'classes': self.classes, 'num': self.num, 'coords': self.coords, 'anchors': self.anchors}[log.info("         {:8}: {}".format(param_name, param)) for param_name, param in params_to_print.items()]def letterbox(img, size=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True):# Resize image to a 32-pixel-multiple rectangle https://github.com/ultralytics/yolov3/issues/232shape = img.shape[:2]  # current shape [height, width]w, h = size# Scale ratio (new / old)r = min(h / shape[0], w / shape[1])if not scaleup:  # only scale down, do not scale up (for better test mAP)r = min(r, 1.0)# Compute paddingratio = r, r  # width, height ratiosnew_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))dw, dh = w - new_unpad[0], h - new_unpad[1]  # wh paddingif auto:  # minimum rectangledw, dh = np.mod(dw, 64), np.mod(dh, 64)  # wh paddingelif scaleFill:  # stretchdw, dh = 0.0, 0.0new_unpad = (w, h)ratio = w / shape[1], h / shape[0]  # width, height ratiosdw /= 2  # divide padding into 2 sidesdh /= 2if shape[::-1] != new_unpad:  # resizeimg = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))left, right = int(round(dw - 0.1)), int(round(dw + 0.1))img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add bordertop2, bottom2, left2, right2 = 0, 0, 0, 0if img.shape[0] != h:top2 = (h - img.shape[0])//2bottom2 = top2img = cv2.copyMakeBorder(img, top2, bottom2, left2, right2, cv2.BORDER_CONSTANT, value=color)  # add borderelif img.shape[1] != w:left2 = (w - img.shape[1])//2right2 = left2img = cv2.copyMakeBorder(img, top2, bottom2, left2, right2, cv2.BORDER_CONSTANT, value=color)  # add borderreturn imgdef scale_bbox(x, y, height, width, class_id, confidence, im_h, im_w, resized_im_h=640, resized_im_w=640):gain = min(resized_im_w / im_w, resized_im_h / im_h)  # gain  = old / newpad = (resized_im_w - im_w * gain) / 2, (resized_im_h - im_h * gain) / 2  # wh paddingx = 
int((x - pad[0])/gain)y = int((y - pad[1])/gain)w = int(width/gain)h = int(height/gain)xmin = max(0, int(x - w / 2))ymin = max(0, int(y - h / 2))xmax = min(im_w, int(xmin + w))ymax = min(im_h, int(ymin + h))# Method item() used here to convert NumPy types to native types for compatibility with functions, which don't# support Numpy types (e.g., cv2.rectangle doesn't support int64 in color parameter)return dict(xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax, class_id=class_id.item(), confidence=confidence.item())def entry_index(side, coord, classes, location, entry):side_power_2 = side ** 2n = location // side_power_2loc = location % side_power_2return int(side_power_2 * (n * (coord + classes + 1) + entry) + loc)def parse_yolo_region(blob, resized_image_shape, original_im_shape, params, threshold, yolo_type):# ------------------------------------------ Validating output parameters ------------------------------------------    out_blob_n, out_blob_c, out_blob_h, out_blob_w = blob.shapepredictions = 1.0/(1.0+np.exp(-blob)) assert out_blob_w == out_blob_h, "Invalid size of output blob. It sould be in NCHW layout and height should " \"be equal to width. Current height = {}, current width = {}" \"".format(out_blob_h, out_blob_w)# ------------------------------------------ Extracting layer parameters -------------------------------------------orig_im_h, orig_im_w = original_im_shaperesized_image_h, resized_image_w = resized_image_shapeobjects = list()side_square = params.side[1] * params.side[0]# ------------------------------------------- Parsing YOLO Region output -------------------------------------------bbox_size = int(out_blob_c/params.num) #4+1+num_classes#print('bbox_size = ' + str(bbox_size))#print('bbox_size = ' + str(bbox_size))for row, col, n in np.ndindex(params.side[0], params.side[1], params.num):bbox = predictions[0, n*bbox_size:(n+1)*bbox_size, row, col]x, y, width, height, object_probability = bbox[:5]class_probabilities = bbox[5:]if object_probability < threshold:continue#print('resized_image_w = ' + str(resized_image_w))#print('out_blob_w = ' + str(out_blob_w))x = (2*x - 0.5 + col)*(resized_image_w/out_blob_w)y = (2*y - 0.5 + row)*(resized_image_h/out_blob_h)if int(resized_image_w/out_blob_w) == 8 & int(resized_image_h/out_blob_h) == 8: #80x80, idx = 0elif int(resized_image_w/out_blob_w) == 16 & int(resized_image_h/out_blob_h) == 16: #40x40idx = 1elif int(resized_image_w/out_blob_w) == 32 & int(resized_image_h/out_blob_h) == 32: # 20x20idx = 2elif int(resized_image_w/out_blob_w) == 64 & int(resized_image_h/out_blob_h) == 64: # 20x20idx = 3elif int(resized_image_w/out_blob_w) == 128 & int(resized_image_h/out_blob_h) == 128: # 20x20idx = 4if yolo_type == 'yolov4-p5' or yolo_type == 'yolov4-p6' or yolo_type == 'yolov4-p7':width = (2*width)**2* params.anchors[idx * 8 + 2 * n] height = (2*height)**2 * params.anchors[idx * 8 + 2 * n + 1]else:width = (2*width)**2* params.anchors[idx * 6 + 2 * n] height = (2*height)**2 * params.anchors[idx * 6 + 2 * n + 1]class_id = np.argmax(class_probabilities * object_probability)confidence = class_probabilities[class_id] * object_probabilityobjects.append(scale_bbox(x=x, y=y, height=height, width=width, class_id=class_id, confidence=confidence,im_h=orig_im_h, im_w=orig_im_w, resized_im_h=resized_image_h, resized_im_w=resized_image_w))return objectsdef intersection_over_union(box_1, box_2):width_of_overlap_area = min(box_1['xmax'], box_2['xmax']) - max(box_1['xmin'], box_2['xmin'])height_of_overlap_area = min(box_1['ymax'], box_2['ymax']) - 
max(box_1['ymin'], box_2['ymin'])if width_of_overlap_area < 0 or height_of_overlap_area < 0:area_of_overlap = 0else:area_of_overlap = width_of_overlap_area * height_of_overlap_areabox_1_area = (box_1['ymax'] - box_1['ymin']) * (box_1['xmax'] - box_1['xmin'])box_2_area = (box_2['ymax'] - box_2['ymin']) * (box_2['xmax'] - box_2['xmin'])area_of_union = box_1_area + box_2_area - area_of_overlapif area_of_union == 0:return 0return area_of_overlap / area_of_uniondef main():args = build_argparser().parse_args()model_xml = args.modelmodel_bin = os.path.splitext(model_xml)[0] + ".bin"# ------------- 1. Plugin initialization for specified device and load extensions library if specified -------------log.info("Creating Inference Engine...")ie = IECore()if args.cpu_extension and 'CPU' in args.device:ie.add_extension(args.cpu_extension, "CPU")# -------------------- 2. Reading the IR generated by the Model Optimizer (.xml and .bin files) --------------------log.info("Loading network files:\n\t{}\n\t{}".format(model_xml, model_bin))net = IENetwork(model=model_xml, weights=model_bin)# ---------------------------------- 3. Load CPU extension for support specific layer ------------------------------#if "CPU" in args.device:#    supported_layers = ie.query_network(net, "CPU")#    not_supported_layers = [l for l in net.layers.keys() if l not in supported_layers]#    if len(not_supported_layers) != 0:#        log.error("Following layers are not supported by the plugin for specified device {}:\n {}".#                  format(args.device, ', '.join(not_supported_layers)))#        log.error("Please try to specify cpu extensions library path in sample's command line parameters using -l "#                  "or --cpu_extension command line argument")#        sys.exit(1)##assert len(net.inputs.keys()) == 1, "Sample supports only YOLO V3 based single input topologies"# ---------------------------------------------- 4. Preparing inputs -----------------------------------------------log.info("Preparing inputs")input_blob = next(iter(net.inputs))#  Defaulf batch_size is 1net.batch_size = 1# Read and pre-process input imagesn, c, h, w = net.inputs[input_blob].shapeng_func = ngraph.function_from_cnn(net)yolo_layer_params = {}for node in ng_func.get_ordered_ops():layer_name = node.get_friendly_name()if layer_name not in net.outputs:continueshape = list(node.inputs()[0].get_source_output().get_node().shape)yolo_params = YoloParams(node._get_attributes(), shape[2:4], args.architecture_type)yolo_layer_params[layer_name] = (shape, yolo_params)if args.labels:with open(args.labels, 'r') as f:labels_map = [x.strip() for x in f]else:labels_map = Noneinput_stream = 0 if args.input == "cam" else args.inputis_async_mode = Truecap = cv2.VideoCapture(input_stream)number_input_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))number_input_frames = 1 if number_input_frames != -1 and number_input_frames < 0 else number_input_frameswait_key_code = 1# Number of frames in picture is 1 and this will be read in cycle. Sync mode is default value for this caseif number_input_frames != 1:ret, frame = cap.read()else:is_async_mode = Falsewait_key_code = 0# ----------------------------------------- 5. Loading model to the plugin -----------------------------------------log.info("Loading model to the plugin")exec_net = ie.load_network(network=net, num_requests=2, device_name=args.device)cur_request_id = 0next_request_id = 1render_time = 0parsing_time = 0# ----------------------------------------------- 6. 
Doing inference -----------------------------------------------log.info("Starting inference...")print("To close the application, press 'CTRL+C' here or switch to the output window and press ESC key")print("To switch between sync/async modes, press TAB key in the output window")while cap.isOpened():# Here is the first asynchronous point: in the Async mode, we capture frame to populate the NEXT infer request# in the regular mode, we capture frame to the CURRENT infer requestif is_async_mode:ret, next_frame = cap.read()else:ret, frame = cap.read()if not ret:breakif is_async_mode:request_id = next_request_idin_frame = letterbox(frame, (w, h))else:request_id = cur_request_idin_frame = letterbox(frame, (w, h))# resize input_frame to network sizein_frame = in_frame.transpose((2, 0, 1))  # Change data layout from HWC to CHWin_frame = in_frame.reshape((n, c, h, w))# Start inferencestart_time = time()exec_net.start_async(request_id=request_id, inputs={input_blob: in_frame})# Collecting object detection resultsobjects = list()if exec_net.requests[cur_request_id].wait(-1) == 0:det_time = time() - start_timeoutput = exec_net.requests[cur_request_id].outputsstart_time = time()for layer_name, out_blob in output.items():#out_blob = out_blob.reshape(net.layers[layer_name].out_data[0].shape)layer_params = yolo_layer_params[layer_name]#YoloParams(net.layers[layer_name].params, out_blob.shape[2])out_blob.shape = layer_params[0]#log.info("Layer {} parameters: ".format(layer_name))#layer_params.log_params()objects += parse_yolo_region(out_blob, in_frame.shape[2:],#in_frame.shape[2:], layer_params,frame.shape[:-1], layer_params[1],args.prob_threshold, args.architecture_type)parsing_time = time() - start_time# Filtering overlapping boxes with respect to the --iou_threshold CLI parameterobjects = sorted(objects, key=lambda obj : obj['confidence'], reverse=True)for i in range(len(objects)):if objects[i]['confidence'] == 0:continuefor j in range(i + 1, len(objects)):if objects[i]['class_id'] != objects[j]['class_id']: # Only compare bounding box with same class idcontinueif intersection_over_union(objects[i], objects[j]) > args.iou_threshold:objects[j]['confidence'] = 0# Drawing objects with respect to the --prob_threshold CLI parameterobjects = [obj for obj in objects if obj['confidence'] >= args.prob_threshold]if len(objects) and args.raw_output_message:log.info("\nDetected boxes for batch {}:".format(1))log.info(" Class ID | Confidence | XMIN | YMIN | XMAX | YMAX | COLOR ")origin_im_size = frame.shape[:-1]for obj in objects:# Validation bbox of detected objectif obj['xmax'] > origin_im_size[1] or obj['ymax'] > origin_im_size[0] or obj['xmin'] < 0 or obj['ymin'] < 0:continuecolor = (int(min(obj['class_id'] * 12.5, 255)),min(obj['class_id'] * 7, 255), min(obj['class_id'] * 5, 255))det_label = labels_map[obj['class_id']] if labels_map and len(labels_map) >= obj['class_id'] else \str(obj['class_id'])if args.raw_output_message:log.info("{:^9} | {:10f} | {:4} | {:4} | {:4} | {:4} | {} ".format(det_label, obj['confidence'], obj['xmin'],obj['ymin'], obj['xmax'], obj['ymax'],color))cv2.rectangle(frame, (obj['xmin'], obj['ymin']), (obj['xmax'], obj['ymax']), color, 2)cv2.putText(frame,"#" + det_label + ' ' + str(round(obj['confidence'] * 100, 1)) + ' %',(obj['xmin'], obj['ymin'] - 7), cv2.FONT_HERSHEY_COMPLEX, 0.6, color, 1)# Draw performance stats over frameinf_time_message = "Inference time: N\A for async mode" if is_async_mode else \"Inference time: {:.3f} ms".format(det_time * 1e3)render_time_message = "OpenCV 
rendering time: {:.3f} ms".format(render_time * 1e3)async_mode_message = "Async mode is on. Processing request {}".format(cur_request_id) if is_async_mode else \"Async mode is off. Processing request {}".format(cur_request_id)parsing_message = "YOLO parsing time is {:.3f} ms".format(parsing_time * 1e3)cv2.putText(frame, inf_time_message, (15, 15), cv2.FONT_HERSHEY_COMPLEX, 0.5, (200, 10, 10), 1)cv2.putText(frame, render_time_message, (15, 45), cv2.FONT_HERSHEY_COMPLEX, 0.5, (10, 10, 200), 1)cv2.putText(frame, async_mode_message, (10, int(origin_im_size[0] - 20)), cv2.FONT_HERSHEY_COMPLEX, 0.5,(10, 10, 200), 1)cv2.putText(frame, parsing_message, (15, 30), cv2.FONT_HERSHEY_COMPLEX, 0.5, (10, 10, 200), 1)start_time = time()if not args.no_show:cv2.imshow("DetectionResults", frame)render_time = time() - start_timeif is_async_mode:cur_request_id, next_request_id = next_request_id, cur_request_idframe = next_frameif not args.no_show:key = cv2.waitKey(wait_key_code)# ESC keyif key == 27:break# Tab keyif key == 9:exec_net.requests[cur_request_id].wait()is_async_mode = not is_async_modelog.info("Switched to {} mode".format("async" if is_async_mode else "sync"))cv2.destroyAllWindows()if __name__ == '__main__':sys.exit(main() or 0)

References: 【深入YoloV5(开源)】基于YoloV5的模型优化技术与使用OpenVINO推理实现_cv君的博客-CSDN博客

OpenVINO部署Yolov5_洪流之源-CSDN博客_openvino yolov5

u版YOLOv5目标检测openvino实现_缘分天空的专栏-CSDN博客

当YOLOv5遇见OpenVINO!_阿木寺的博客-CSDN博客

GitHub - Chen-MingChang/pytorch_YOLO_OpenVINO_demo

[If this post helped you, please like, comment and follow — your support is what keeps this blog maintained at high quality!]
