Parallel Inference of ResNet18
Thread-Safe Local Inference
Computational Backend
In deep learning services, supporting model acceleration alone is not enough. Torchpipe therefore ships with a set of commonly used fine-grained built-in backends.
Sequential
Sequential links multiple backends together into a pipeline. For example, Sequential[DecodeTensor,ResizeTensor,cvtColorTensor,SyncTensor] and Sequential[DecodeMat,ResizeMat] are both valid backends.
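To make the chaining idea concrete, here is a minimal Python sketch of how a Sequential-style backend could compose sub-backends, each consuming the previous one's output. The `Backend` base class and the stage behaviors are illustrative assumptions, not torchpipe's actual interface.

```python
# Illustrative sketch only: a generic composition helper mimicking how a
# Sequential backend chains sub-backends. The class names below are
# hypothetical stand-ins, not torchpipe's real implementations.

class Backend:
    def forward(self, data):
        raise NotImplementedError

class DecodeMat(Backend):
    def forward(self, data):
        # pretend to decode raw bytes into an image-like object
        return {"decoded": data}

class ResizeMat(Backend):
    def forward(self, data):
        # pretend to resize the decoded image
        data["resized"] = True
        return data

class Sequential(Backend):
    """Runs its sub-backends in order, feeding each output to the next."""
    def __init__(self, *backends):
        self.backends = backends

    def forward(self, data):
        for b in self.backends:
            data = b.forward(data)
        return data

pipeline = Sequential(DecodeMat(), ResizeMat())
result = pipeline.forward(b"raw image bytes")
print(result["resized"])  # True
```

The value of this design is that any two backends whose output and input conventions match can be composed without writing glue code.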
Custom Backend
A major problem we have found in practice is that the preset backends (computational, scheduling, RPC, cross-process, and so on) cannot cover every requirement. Typically, extending a backend is considered the job of the library developer rather than the user. Torchpipe takes a different view: the backend itself is a user-facing API, so it must be simple enough to implement. Drawing on the design of frameworks such as GStreamer and FFmpeg, and targeting modern C++ and Python, torchpipe intends that the backend:
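As a rough illustration of how small a user-defined backend can be under this philosophy, the sketch below assumes a simple init/forward contract over per-request dicts. The method names and configuration keys are assumptions for illustration, not torchpipe's actual extension API.

```python
# Hypothetical custom backend: initialized once from a config dict, then
# called with a batch of per-request dicts, writing results in place.
# This contract is an assumption, not torchpipe's real interface.

class MyPostprocess:
    def init(self, config):
        # read backend-specific options from the node's configuration
        self.scale = float(config.get("scale", 1.0))
        return True

    def forward(self, inputs):
        # inputs: a list of per-request dicts; write each result back
        for item in inputs:
            item["result"] = item["data"] * self.scale

backend = MyPostprocess()
backend.init({"scale": 2.0})
batch = [{"data": 3.0}, {"data": 5.0}]
backend.forward(batch)
print(batch[0]["result"], batch[1]["result"])  # 6.0 10.0
```

Keeping the contract this small is what makes it reasonable to treat the backend as an API for users rather than only for library developers.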
Backend API Reference
Single Node Scheduling
Input data is distributed to the computational backend for execution by the default single-node scheduling system, BaselineSchedule. During this process it mainly performs batch gathering and multi-instance scheduling.
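The two roles named above can be sketched as follows: several worker instances consume from one shared request queue, and each worker gathers as many pending requests as it can (up to a batch limit) before running them as one batch. The queue, timeout values, and worker structure are illustrative assumptions, not BaselineSchedule's actual implementation.

```python
# Conceptual sketch of batch gathering plus multi-instance scheduling.
# All names and parameters here are illustrative, not torchpipe internals.
import queue
import threading
import time

def worker(instance_id, q, max_batch, results, stop):
    while not stop.is_set() or not q.empty():
        batch = []
        try:
            batch.append(q.get(timeout=0.05))  # block for the first request
        except queue.Empty:
            continue
        # batch gathering: drain up to max_batch requests without blocking
        while len(batch) < max_batch:
            try:
                batch.append(q.get_nowait())
            except queue.Empty:
                break
        # hand the gathered batch to this compute instance
        results.append((instance_id, len(batch)))

q = queue.Queue()
results = []
stop = threading.Event()
# multi-instance scheduling: two workers consume from one shared queue
threads = [threading.Thread(target=worker, args=(i, q, 4, results, stop))
           for i in range(2)]
for t in threads:
    t.start()
for r in range(8):
    q.put(r)
time.sleep(0.3)
stop.set()
for t in threads:
    t.join()
print(sum(n for _, n in results))  # 8: every request was processed once
```

Gathering requests into batches amortizes per-call overhead on the device, while multiple instances keep the hardware busy when one instance is blocked.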