YOLOv5 is an efficient real-time object detection algorithm, and TensorRT is NVIDIA's high-performance deep learning inference library. Deploying YOLOv5 with TensorRT can significantly speed up inference, especially on GPUs. This article walks through the full workflow from training a YOLOv5 model to converting and deploying it with TensorRT, with hands-on steps and code examples.
Before you begin, make sure your development environment has the required dependencies. First install PyTorch, ONNX, and related packages:
pip install torch torchvision numpy onnx
Install the TensorRT Python bindings:
pip install nvidia-pyindex
pip install nvidia-tensorrt
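To confirm the bindings are importable, a quick version check helps (the exact version printed depends on the wheel you installed):
python -c "import tensorrt; print(tensorrt.__version__)"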
Clone the official YOLOv5 repository and train a model:
git clone https://github.com/ultralytics/yolov5.git
cd yolov5
pip install -r requirements.txt
python train.py --img 640 --batch 16 --epochs 30 --data coco128.yaml --weights yolov5s.pt
After training finishes, the resulting model weights are typically located at runs/train/exp/weights/best.pt.
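Before exporting, you can optionally sanity-check the trained weights with the repository's built-in detection script (the paths below are illustrative):
python detect.py --weights runs/train/exp/weights/best.pt --source data/images --img 640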
TensorRT consumes models in ONNX format, so the PyTorch model must first be exported to ONNX.
import torch
from models.experimental import attempt_load

# Load the trained PyTorch model (run this from inside the yolov5 repository)
model = attempt_load('runs/train/exp/weights/best.pt', map_location=torch.device('cpu'))
model.eval()

# Export to ONNX format
dummy_input = torch.randn(1, 3, 640, 640)  # input shape: (batch, channels, height, width)
torch.onnx.export(
    model,
    dummy_input,
    "yolov5s.onnx",
    input_names=["images"],
    output_names=["output"],
    opset_version=11
)
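Before going further, it is worth confirming that the exported file is structurally valid. A minimal check with the onnx package installed earlier could look like this:

import onnx

# Load the exported model and run ONNX's built-in structural checker
onnx_model = onnx.load("yolov5s.onnx")
onnx.checker.check_model(onnx_model)
print("Model outputs:", [o.name for o in onnx_model.graph.output])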
Before converting the ONNX model to a TensorRT engine, you can optimize it with a dedicated tool. NVIDIA's trtexec or onnx-simplifier is recommended.
pip install onnxsim
onnxsim yolov5s.onnx yolov5s_simplified.onnx
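Alternatively, trtexec (shipped with TensorRT) can perform the conversion directly from the command line; a typical invocation looks like the following, though flags may need adjusting for your TensorRT version:
trtexec --onnx=yolov5s_simplified.onnx --saveEngine=yolov5s.trt --workspace=1024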
Here is a Python implementation that converts the ONNX model into a TensorRT engine:
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_file_path, engine_file_path):
    # The explicit-batch flag is required for ONNX models; the batch size comes from the ONNX graph
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)) as network, \
         builder.create_builder_config() as config, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        # Give the builder 1 GB of workspace memory (set on the config in TensorRT 8.x)
        config.max_workspace_size = 1 << 30
        # Parse the ONNX file
        with open(onnx_file_path, 'rb') as model:
            if not parser.parse(model.read()):
                print("Failed to parse ONNX file")
                for error in range(parser.num_errors):
                    print(parser.get_error(error))
                return None
        # Build the TensorRT engine
        engine = builder.build_engine(network, config)
        if engine is None:
            print("Failed to create engine")
            return None
        # Save the serialized engine to disk
        with open(engine_file_path, "wb") as f:
            f.write(engine.serialize())
        return engine

# Run the conversion
onnx_file = "yolov5s_simplified.onnx"
engine_file = "yolov5s.trt"
build_engine(onnx_file, engine_file)
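If the GPU has fast FP16 support, enabling half precision in the builder config usually gives a further speedup with minimal accuracy impact. A small addition inside build_engine, placed before the engine is built, might look like this:

# Optional: enable FP16 precision when the hardware supports it
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)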
Here is the code that loads the TensorRT engine and runs inference:
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np
from PIL import Image
class HostDeviceMem:
    def __init__(self, host_mem, device_mem):
        self.host = host_mem
        self.device = device_mem

    def __str__(self):
        return "Host:\n" + str(self.host) + "\nDevice:\n" + str(self.device)

    def __repr__(self):
        return self.__str__()
def allocate_buffers(engine):
    inputs = []
    outputs = []
    bindings = []
    stream = cuda.Stream()
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        # Allocate page-locked host memory and matching device memory for each binding
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append(HostDeviceMem(host_mem, device_mem))
        else:
            outputs.append(HostDeviceMem(host_mem, device_mem))
    return inputs, outputs, bindings, stream
def do_inference(context, bindings, inputs, outputs, stream):
    # Copy input data from host to device
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    # execute_async_v2 is required because the network was built with an explicit batch dimension
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    # Copy results back from device to host and wait for the stream to finish
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    stream.synchronize()
    return [out.host for out in outputs]
# Load the TensorRT engine
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with open("yolov5s.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()
inputs, outputs, bindings, stream = allocate_buffers(engine)

# Prepare the input data (a plain resize; YOLOv5's letterbox preprocessing is omitted for brevity)
image = Image.open("test.jpg").convert("RGB").resize((640, 640))
image_array = np.array(image).transpose(2, 0, 1).astype(np.float32) / 255.0
np.copyto(inputs[0].host, image_array.ravel())  # copy into the page-locked input buffer

# Run inference
trt_outputs = do_inference(context, bindings, inputs, outputs, stream)
print(trt_outputs)
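The printed result is just a flat float array. For the default COCO-style YOLOv5 head exported above, the first output decodes into rows of 85 values (4 xywh box coordinates, 1 objectness score, 80 class scores); the sketch below makes that assumption explicit and only applies a confidence filter, so NMS (for example via cv2.dnn.NMSBoxes or torchvision.ops.nms) is still needed afterwards:

def postprocess(flat_output, conf_thres=0.25, num_classes=80):
    # Assumes the first engine output is the concatenated prediction tensor: (N, 5 + num_classes)
    preds = np.asarray(flat_output).reshape(-1, 5 + num_classes)
    # Overall confidence = objectness * best class probability
    scores = preds[:, 4] * preds[:, 5:].max(axis=1)
    keep = scores > conf_thres
    preds, scores = preds[keep], scores[keep]
    class_ids = preds[:, 5:].argmax(axis=1)
    # Convert (cx, cy, w, h) to (x1, y1, x2, y2) in input-image pixels
    boxes = np.empty((len(preds), 4), dtype=np.float32)
    boxes[:, 0] = preds[:, 0] - preds[:, 2] / 2
    boxes[:, 1] = preds[:, 1] - preds[:, 3] / 2
    boxes[:, 2] = preds[:, 0] + preds[:, 2] / 2
    boxes[:, 3] = preds[:, 1] + preds[:, 3] / 2
    return boxes, scores, class_ids

boxes, scores, class_ids = postprocess(trt_outputs[0])
print(f"{len(boxes)} candidate boxes above the confidence threshold (before NMS)")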
To verify the TensorRT speedup, time PyTorch and TensorRT inference separately and compare the results, using timeit or another profiling tool.
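For instance, a rough measurement of the TensorRT path built above can be made by timing repeated calls after a warm-up (iteration counts here are arbitrary); the PyTorch model can be timed the same way for reference:

import time

# Warm-up runs so lazy initialization does not skew the measurement
for _ in range(10):
    do_inference(context, bindings, inputs, outputs, stream)

n_runs = 100
start = time.perf_counter()
for _ in range(n_runs):
    do_inference(context, bindings, inputs, outputs, stream)
elapsed = time.perf_counter() - start
print(f"TensorRT: {elapsed / n_runs * 1000:.2f} ms per image")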
With the steps above, the YOLOv5 model has been deployed to TensorRT and now runs with significantly faster inference. Here is a recap of the key points of the whole workflow: