TensorRT-LLM Hand-Holding Tutorial (Part 2): Offline Environment Setup, Model Quantization, and Inference

Author: Nicky · 2024-01-19 17:36

Summary: This article walks through how to set up an offline TensorRT-LLM environment and how to quantize a model and run inference. Using the Bloom model as an example, it provides clear, easy-to-follow steps and practical experience to help you master the technique.

As large models become widespread, parameter counts keep growing, driving inference costs sharply upward. TensorRT-LLM emerged as an efficient inference framework aimed at reducing inference latency and improving throughput. This article walks through setting up an offline TensorRT-LLM environment and performing model quantization and inference, using the Bloom model as a running example.
1. Offline Environment Setup
First, you need to install TensorRT-LLM. It can be installed from the locally built wheel with pip:

```shell
pip install ./build/tensorrt_llm*.whl -i http://nexus3.xxx.com/repository/pypi/simple --trusted-host nexus3.xxx.com
```

Once the wheel is installed, the environment setup is complete.
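To confirm the install succeeded, a quick sanity check can be run. This is a minimal sketch: it only reports whether the `tensorrt_llm` package is importable in the current environment, without loading it.

```python
import importlib.util

# Look up the tensorrt_llm package without fully importing it.
spec = importlib.util.find_spec("tensorrt_llm")
if spec is None:
    print("tensorrt_llm is NOT installed -- re-check the wheel install step")
else:
    print("tensorrt_llm is installed at:", spec.origin)
```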
2. Model Quantization
Next, we quantize the model. Taking the Bloom model as an example, first write a build.py file that builds the TensorRT engine used to run Bloom. In build.py you define parameters such as the input and output formats, the model path, and optimization settings. For example:

```python
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt_runtime = trt.Runtime(TRT_LOGGER)  # used later to deserialize saved engines

def build_engine(model_path):
    # The ONNX parser requires an explicit-batch network definition.
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
            builder.create_network(explicit_batch) as network, \
            trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 256 << 20  # 256 MB of builder workspace
        with open(model_path, 'rb') as model:
            if not parser.parse(model.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                raise RuntimeError('failed to parse the ONNX model')
        # TensorRT 7-style API; newer releases use a builder config instead.
        engine = builder.build_cuda_engine(network)
        return engine
```

In the code above, replace model_path with the path to the Bloom model's ONNX file. Calling build_engine then builds the TensorRT engine; the returned engine (together with trt_runtime) is what you use for inference in the next step.
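During inference, host and device buffers are sized from a binding's shape; the arithmetic is just the product of the dimensions times the element size. The standalone sketch below illustrates this with plain numpy, no TensorRT required: `volume` and `buffer_bytes` are illustrative helpers (not TensorRT APIs), where `volume` mirrors what `trt.volume` computes.

```python
import numpy as np

def volume(shape):
    # Product of all dimensions, as trt.volume computes for a binding shape.
    n = 1
    for d in shape:
        n *= d
    return n

def buffer_bytes(shape, dtype=np.float32):
    # Bytes needed for one host or device buffer holding a tensor of this shape.
    return volume(shape) * np.dtype(dtype).itemsize

print(buffer_bytes((1, 3, 224, 224)))  # 602112 bytes for a float32 tensor
```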
3. Model Inference
With the model quantized, the next step is inference. Write a run.py file for this: it loads the model parameters, sets up the input and output formats, and runs inference. For example:
```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

def run_inference(engine, shape, context, input_tensor):
    """Run the TensorRT engine on input_tensor and return the host output buffer."""
    # Page-locked host buffers; for simplicity the output is assumed to have
    # the same shape as the input.
    h_input = cuda.pagelocked_empty(trt.volume(shape), dtype=trt.nptype(trt.float32))
    h_output = cuda.pagelocked_empty(trt.volume(shape), dtype=trt.nptype(trt.float32))
    np.copyto(h_input, input_tensor.ravel())
    # Device buffers backing the engine's input and output bindings.
    d_input = cuda.mem_alloc(h_input.nbytes)
    d_output = cuda.mem_alloc(h_output.nbytes)
    stream = cuda.Stream()  # stream for asynchronous copies and kernel launches
    cuda.memcpy_htod_async(d_input, h_input, stream)  # copy input host -> device
    context.execute_async_v2(bindings=[int(d_input), int(d_output)],
                             stream_handle=stream.handle)  # run the engine
    cuda.memcpy_dtoh_async(h_output, d_output, stream)  # copy output device -> host
    stream.synchronize()  # wait for all asynchronous operations to complete
    return h_output
```
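Once run_inference returns the flat output buffer, it still has to be reshaped and decoded on the host. Below is a hypothetical post-processing sketch: the `(batch, vocab)` layout, the `decode_logits` helper, and greedy argmax decoding are assumptions for illustration, not part of the code above.

```python
import numpy as np

def decode_logits(flat_output, batch_size, vocab_size):
    # Reshape the flat host buffer into (batch, vocab) logits and pick the
    # highest-scoring token id per row (greedy decoding).
    logits = np.asarray(flat_output, dtype=np.float32).reshape(batch_size, vocab_size)
    return logits.argmax(axis=-1)

print(decode_logits([0.1, 0.9, 0.0, 0.2, 0.3, 0.8], 2, 3))  # [1 2]
```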