torchpipe.pipe
Initialization
The initialization interface of torchpipe.pipe is:

```python
import torchpipe as tp

models = tp.pipe(config: Union[Dict[str, str], Dict[str, Dict[str, str]], str])
```
The three accepted forms of config are:

- From Dictionary
- From Double-Layer Dictionary
- From TOML File
class torchpipe.pipe(config: Dict[str, str])

config: Dict[str, str]
: Configuration parameters passed to the backend for parsing. Specific backends may perform parameter expansion on the configuration.

```python
config = {"backend": "DecodeMat"}
```
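A minimal sketch of constructing a pipe from this flat dictionary (the forward interface used to run data through it is described under Inference below):

```python
import torchpipe as tp

# Single-layer form: the entire dict is handed to one backend for parsing.
model = tp.pipe({"backend": "DecodeMat"})
```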
class torchpipe.pipe(config: Dict[str, Dict[str, str]])

Used for configuring multiple nodes: {node_name: Dict[str, str]}. The node name cannot be "global" or empty.

config: Dict[str, Dict[str, str]]
: Configuration parameters passed to the backend for parsing. Specific backends may perform parameter expansion on the configuration. Global default parameters can be set through the "global" entry.

```python
config = {"decoder":     {"backend": "DecodeMat"},
          "decoder_gpu": {"backend": "SyncTensor[DecodeTensor]"},
          "global":      {"instance_num": "2"}}
```
class torchpipe.pipe(config: str)

Parses parameters from a file.

config: str
: A TOML-format file whose name ends with the .toml suffix.
```toml
# Values can be strings or convertible to strings, such as int and float. Boolean values are not accepted.
instance_num=2

[decoder]
backend="DecodeMat"

[decoder_gpu]
backend="SyncTensor[DecodeTensor]"
```
Inference

The forward interface of torchpipe.pipe is:

```python
class torchpipe.pipe:
    def __call__(self, data: Dict[str, Any] | List[Dict[str, Any]]) -> None
```
Thread-safe forward computation
- Single Data
- Multiple Data
```python
def __call__(self, data: Dict[str, Any]) -> None
```

"data": Any
: Required input; the computation backend retrieves the data from this key and then parses it.

"result": Any
: Used for output; when this key is absent, there is no computation result.

"node_name": str
: When there are multiple root nodes, specifies the target node by name.

- Other keys (except the system-reserved keys) can be used as input or output, as determined by the backend.
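A sketch of a single-item call against the two-node pipeline from the initialization examples; "data" carries the input, "result" carries the output, and "node_name" selects the root node:

```python
# test.jpg is a hypothetical input file; models is the two-node pipe built above.
with open("test.jpg", "rb") as f:
    payload = {"data": f.read(), "node_name": "decoder"}

models(payload)            # returns None; the result is written back into payload

if "result" in payload:    # an absent key means there is no computation result
    decoded = payload["result"]
```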
```python
def __call__(self, data: List[Dict[str, Any]]) -> None
```

If an exception is thrown while processing any one of the items, none of the results are available. If some items merely lack a "result", the remaining results are still available.

Generally speaking, this interface is recommended only for processing a large amount of data at once from Python.
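A sketch of the list form with the per-item availability check described above (jpeg_blobs is a hypothetical list of raw image bytes):

```python
payloads = [{"data": blob} for blob in jpeg_blobs]

models(payloads)  # one call processes all items

# Items without "result" produced no output; the others remain usable.
results = [p["result"] for p in payloads if "result" in p]
```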
Alternative Ways to Batch Process Data
- Keep the computation pipeline simple and process multiple inputs through a thread pool (a fuller sketch follows this list):

  ```python
  self.executor = ThreadPoolExecutor(max_workers=15)
  # forward pass
  future_tasks = self.executor.map(self.forward_for_single_input, data)
  ```

  Here forward_for_single_input defines the computation steps that take one piece of data from input to the final result.

- For scenarios involving context switching, such as detection, scheduling can be done through MapReduce.
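A more complete, self-contained version of the thread-pool pattern above (a sketch assuming a single-node DecodeMat pipeline; forward_for_single_input is the user-defined per-item routine mentioned in the list):

```python
from concurrent.futures import ThreadPoolExecutor

import torchpipe as tp

model = tp.pipe({"backend": "DecodeMat", "instance_num": "4"})

def forward_for_single_input(raw: bytes):
    payload = {"data": raw}
    model(payload)                # __call__ is thread-safe
    return payload.get("result")  # None if no result was produced

executor = ThreadPoolExecutor(max_workers=15)
# jpeg_blobs is a hypothetical iterable of raw image bytes.
results = list(executor.map(forward_for_single_input, jpeg_blobs))
```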
Reserved Key Values in the System
| Key | Definition | Remarks |
|---|---|---|
| TASK_DATA_KEY | data | One of the input keys |
| TASK_RESULT_KEY | result | One of the output keys |
| TASK_CONTEXT_KEY | context | Syntactic sugar for global sharing |
| TASK_EVENT_KEY | event | |
| "_*" | All strings starting with an underscore | |
| TASK_NODE_NAME_KEY | node_name | |
| "global" | Currently used to represent global settings | |
| "default" | | |
| "TASK_*_KEY" | Strings starting with TASK_ and ending with _KEY | |