# Troubleshooting
The following are common issues you may encounter when using Torchpipe, along with their solutions.
## TensorrtTensor
### Slow model initialization
#### Use model caching
The converted model can be cached locally. When the file specified by `model::cache` exists, it is loaded directly; otherwise, the model specified by `model` is loaded and the converted engine is saved to the path given by `model::cache`:

```toml
[model]
backend = "SyncTensor[TensorrtTensor]"
model = "a.onnx.encrypted"
"model::cache" = "a.trt.encrypted"
```
Since a TensorRT engine is tied to the GPU architecture it was built on, save cached models in advance for commonly used GPUs, and use multiple configuration files to handle the different GPU types:

| config file | key | value |
|-------------|-------|-------------------------|
| 2080ti.toml | model | a.2080ti.trt.encrypted |
| t4.toml | model | a.t4.trt.encrypted |
| others.toml | model | a.onnx.encrypted |
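A sketch of selecting among these files at startup, assuming torchpipe's `parse_toml`/`pipe` entry points; the `select_config` helper and the GPU-name matching are hypothetical and illustrative only:

```python
import torch
import torchpipe

def select_config() -> str:
    """Hypothetical helper: pick the per-GPU config file from the table above."""
    name = torch.cuda.get_device_name(0).lower()
    if "2080 ti" in name:
        return "2080ti.toml"
    if "t4" in name:
        return "t4.toml"
    # No pre-built engine for this GPU: fall back to the ONNX model and let
    # TensorRT build (and cache) a fresh engine on first use.
    return "others.toml"

pipe = torchpipe.pipe(torchpipe.parse_toml(select_config()))
```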
:::tip
To use the built-in encryption and decryption functionality, you need to specify `IPIPE_KEY` when compiling Torchpipe.
:::