# Computational Backend

In a deep learning service, accelerating the model alone is not enough. For this reason, a set of commonly used fine-grained computational backends is built in.
## Built-in Backends
### OpenCV-related Image Processing

| Name | Description |
|---|---|
| DecodeMat | JPEG decoding |
| cvtColorMat | Color space conversion |
| ResizeMat | Image resizing |
| PillowResizeMat | Resize that strictly matches Pillow's results |
| More... | |

### PyTorch-related Backends

| Name | Description |
|---|---|
| DecodeTensor | JPEG decoding on the GPU |
| cvtColorTensor | Color space conversion |
| ResizeTensor | Image resizing |
| PillowResizeTensor | Resize that strictly matches Pillow's results |
| More... | |
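For intuition, the simplest case of a color space conversion such as cvtColorMat (BGR to RGB) amounts to reversing the channel axis. The NumPy sketch below only illustrates that operation; it is not the actual OpenCV-backed implementation:

```python
import numpy as np

def bgr_to_rgb(image: np.ndarray) -> np.ndarray:
    """Illustrative BGR->RGB conversion: reverse the channel axis."""
    return image[..., ::-1]

# A 1x2 BGR image: a blue pixel, then a red pixel
bgr = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
rgb = bgr_to_rgb(bgr)
assert rgb[0, 0].tolist() == [0, 0, 255]  # blue in RGB order
assert rgb[0, 1].tolist() == [255, 0, 0]  # red in RGB order
```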
The default backend is Identity:
| Name | Initialization Parameters | Input/Type | Output/Type | Remarks |
|---|---|---|---|---|
| Identity | None | data/any | result/any | Assigns the value of data to result. |
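Conceptually, Identity simply copies `data` into `result` on the request dictionary. The following pure-Python sketch captures that contract (a mock for illustration, not the real backend):

```python
def identity_backend(request: dict) -> None:
    # Identity: assign the value of "data" to "result", in place.
    request["result"] = request["data"]

req = {"data": b"raw bytes"}
identity_backend(req)
assert req["result"] == b"raw bytes"
```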
Usage Example:
**Python**

```python
import torchpipe as tp
import numpy as np

# Configuration for the single-node scheduler and the DecodeMat backend
config = {
    "instance_num": 2,   # number of instances
    "backend": "DecodeMat",
}

# Initialization
models = tp.pipe(config)

# Read raw JPEG bytes from file
with open("../test/assets/norm_jpg/dog.jpg", "rb") as f:
    data = f.read()

# Forward pass: the result is written back into the input dict
input = {"data": data}
models(input)

result: np.ndarray = input["result"]
assert result.shape == (576, 768, 3)
```
**C++**

```cpp
#include "Interpreter.hpp"

#include "opencv2/core.hpp"

int main(void) {
  // Single-node scheduler parameters:
  std::unordered_map<std::string, std::string> config = {
      {"instance_num", "2"},    // number of instances
      {"backend", "DecodeMat"}  // compute backend
  };

  // Initialization
  ipipe::Interpreter model(config);

  // Prepare data: raw JPEG bytes go into TASK_DATA_KEY
  auto input = std::make_shared<std::unordered_map<std::string, ipipe::any>>();
  (*input)[ipipe::TASK_DATA_KEY] = std::string(...);

  // Forward <== can be called from multiple threads
  model(input);
  cv::Mat result = ipipe::any_cast<cv::Mat>(input->at(ipipe::TASK_RESULT_KEY));
  return 0;
}
```
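As the comment in the C++ example notes, the forward call can be issued from multiple threads at once. The Python sketch below shows only that calling pattern, using a hypothetical stand-in for the pipeline so that each thread submits its own independent request dictionary:

```python
import threading

def mock_pipeline(request: dict) -> None:
    # Stand-in for the real forward call: writes "result" back into the request.
    request["result"] = request["data"]

# One independent request dict per thread
requests = [{"data": f"payload-{i}"} for i in range(4)]
threads = [threading.Thread(target=mock_pipeline, args=(r,)) for r in requests]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert all(r["result"] == r["data"] for r in requests)
```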