To run a network with the OpenVINO™ Toolkit, you first need to convert it to Intermediate Representation (IR). For that you need Model Optimizer, a command-line tool from the OpenVINO™ Toolkit Developer Package. The easiest way to get it is from PyPI:
pip install openvino-dev
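Once the package is installed, you can check that the converter is on your PATH by printing its help, which also lists all the conversion parameters used below:
mo -h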
TensorFlow models are supported by Model Optimizer directly, so the next step is to run the following command in a terminal:
mo --input_model v3-small_224_1.0_float.pb --input_shape "[1,224,224,3]"
This means you are converting the v3-small_224_1.0_float.pb model, which expects an RGB image of size 224x224. Of course, you can specify more parameters, such as preprocessing steps or the desired model precision (FP32 or FP16):
mo --input_model v3-small_224_1.0_float.pb --input_shape "[1,224,224,3]" --mean_values="[127.5,127.5,127.5]" --scale_values="[127.5]" --data_type FP16
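For intuition, the --mean_values and --scale_values flags embed the normalization (x - 127.5) / 127.5 into the IR's preprocessing, so you no longer have to do it in application code. A minimal NumPy sketch of the equivalent arithmetic (the sample pixel values are purely illustrative, not part of the converted model):

import numpy as np

# Example 8-bit pixel values covering the full [0, 255] range.
pixels = np.array([0, 127.5, 255], dtype=np.float32)

# The same arithmetic that --mean_values="[127.5,127.5,127.5]" and
# --scale_values="[127.5]" bake into the model's preprocessing.
normalized = (pixels - 127.5) / 127.5
print(normalized)  # [-1.  0.  1.]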
With this command, your model will normalize all pixels to the [-1,1] value range, and inference will run in FP16. After it finishes, you should see output like the listing below, with all explicit and implicit parameters, such as the path to the model, input shape, chosen precision, channel reversal, mean and scale values, conversion parameters, and more:
Exporting TensorFlow model to IR... This may take a few minutes.
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/adrian/repos/openvino_notebooks/notebooks/101-tensorflow-to-openvino/model/v3-small_224_1.0_float.pb
- Path for generated IR: /home/adrian/repos/openvino_notebooks/notebooks/101-tensorflow-to-openvino/model
- IR output name: v3-small_224_1.0_float
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,224,224,3]
- Mean values: [127.5,127.5,127.5]
- Scale values: [127.5]
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: None
- Inference Engine found in: /home/adrian/repos/openvino_notebooks/openvino_env/lib/python3.8/site-packages/openvino
Inference Engine version: 2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version: 2021.4.1-3926-14e67d86634-releases/2021/4
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/adrian/repos/openvino_notebooks/notebooks/101-tensorflow-to-openvino/model/v3-small_224_1.0_float.xml
[ SUCCESS ] BIN file: /home/adrian/repos/openvino_notebooks/notebooks/101-tensorflow-to-openvino/model/v3-small_224_1.0_float.bin
[ SUCCESS ] Total execution time: 9.97 seconds.
[ SUCCESS ] Memory consumed: 374 MB.
The SUCCESS at the end means everything was converted correctly. You get the IR, which consists of two files: .xml and .bin. You are now ready to load this network into the Inference Engine and run inference. The code below assumes your model is used for ImageNet classification.
import cv2
import numpy as np
from openvino.inference_engine import IECore
# Load the model
ie = IECore()
net = ie.read_network(model="v3-small_224_1.0_float.xml", weights="v3-small_224_1.0_float.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
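# Get the names of the model's input and output layers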
input_key = next(iter(exec_net.input_info))
output_key = next(iter(exec_net.outputs.keys()))
# Load the image
# The MobileNet network expects images in RGB format
image = cv2.cvtColor(cv2.imread(filename="image.jpg"), code=cv2.COLOR_BGR2RGB)
# resize to MobileNet image shape
input_image = cv2.resize(src=image, dsize=(224, 224))
# reshape to network input shape
input_image = np.expand_dims(input_image.transpose(2, 0, 1), axis=0)
# Do inference
result = exec_net.infer(inputs={input_key: input_image})[output_key]
result_index = np.argmax(result)
# Convert the inference result to a class name.
imagenet_classes = open("imagenet_2012.txt").read().splitlines()
# The model description states that for this model, class 0 is background,
# so we add background at the beginning of imagenet_classes
imagenet_classes = ["background"] + imagenet_classes
print(imagenet_classes[result_index])
It works! You get the class of the image (like the one below: a flat-coated retriever). You can try it yourself with this demo.
If you want to try OpenVINO in a more limited way, with fewer changes to your code, check out our OpenVINO™ integration with TensorFlow plugin.
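As a rough sketch of how little changes with that plugin, you keep your regular TensorFlow code and only add the backend selection (the MobileNetV3 model below is just an illustrative stand-in, not something the plugin requires):

import tensorflow as tf
import openvino_tensorflow

# Any existing TensorFlow/Keras model; MobileNetV3 is used here only as an example.
model = tf.keras.applications.MobileNetV3Small(weights="imagenet")

# Route supported operators to OpenVINO on the selected device;
# unsupported operators keep running on stock TensorFlow.
openvino_tensorflow.set_backend("CPU")

# The inference call itself is unchanged.
predictions = model(tf.zeros([1, 224, 224, 3]))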
This article was originally published at https://medium.com/openvino-toolkit/how-to-convert-tensorflow-model-and-run-it-with-openvino-toolkit-519e4277ccdb
https://www.codeproject.com/Articles/5326994/How-to-convert-TensorFlow-model-and-run-it-with-Op