OnnxTensorPrediction
Computes a prediction on tensors from an ONNX model and generates tensors representing the prediction scores.
Access to parameter description
For an overview, please refer to the Deep Learning section.
The OnnxTensorPrediction algorithm performs inference on generic tensor data using an ONNX model. Unlike other ONNX prediction algorithms such as OnnxPredictionSegmentation2d, which operate only on 2D images, this command accepts one or more generic tensors that may contain either images (of any dimension) or data frames, and outputs corresponding tensors containing the prediction results. The outputs are also generic tensors whose structure matches the model's defined outputs.
This provides a flexible interface for deploying ONNX models trained on a wide range of data modalities.
Note:
- All input tensors except the first one are optional.
- Depending on the characteristics of the model, only the first output may be filled.
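The notes above describe a fixed-arity calling convention: ten input slots and ten output slots, of which only those defined by the model are actually used. The pattern can be illustrated with a small pure-Python stub (this stub is illustrative only and is not part of the ImageDev API):

```python
def predict(model_output_count, *inputs):
    """Illustrative stub: consume only the inputs provided and return a
    fixed-length tuple of 10 output slots; slots beyond what the 'model'
    defines stay None."""
    provided = [t for t in inputs if t is not None]
    if not provided:
        raise ValueError("the first input tensor is required")
    # Pretend the model produces `model_output_count` results,
    # here simply echoing the first input for each of them.
    results = [provided[0]] * model_output_count
    # Pad to the fixed 10-slot output tuple.
    return tuple(results + [None] * (10 - model_output_count))

# A 'model' with one input and two outputs fills only the first two slots.
out = predict(2, [1.0, 2.0])
print(len(out), out[0], out[1], out[2])  # 10 [1.0, 2.0] [1.0, 2.0] None
```

This mirrors how the real command behaves: unused input slots may stay null/None, and outputs beyond the model's definition remain empty.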
See also
Function Syntax
This function returns an OnnxTensorPredictionOutput structure containing outputTensor1, outputTensor2, outputTensor3, outputTensor4, outputTensor5, outputTensor6, outputTensor7, outputTensor8, outputTensor9 and outputTensor10.
// Output structure of the onnxTensorPrediction function.
struct OnnxTensorPredictionOutput final
{
/// The first output tensor.
std::shared_ptr< iolink::TensorView > outputTensor1;
/// The second output tensor.
std::shared_ptr< iolink::TensorView > outputTensor2;
/// The third output tensor.
std::shared_ptr< iolink::TensorView > outputTensor3;
/// The fourth output tensor.
std::shared_ptr< iolink::TensorView > outputTensor4;
/// The fifth output tensor.
std::shared_ptr< iolink::TensorView > outputTensor5;
/// The sixth output tensor.
std::shared_ptr< iolink::TensorView > outputTensor6;
/// The seventh output tensor.
std::shared_ptr< iolink::TensorView > outputTensor7;
/// The eighth output tensor.
std::shared_ptr< iolink::TensorView > outputTensor8;
/// The ninth output tensor.
std::shared_ptr< iolink::TensorView > outputTensor9;
/// The tenth output tensor.
std::shared_ptr< iolink::TensorView > outputTensor10;
};
// Function prototype
OnnxTensorPredictionOutput
onnxTensorPrediction( std::shared_ptr< const iolink::TensorView > inputTensor1,
std::shared_ptr< const iolink::TensorView > inputTensor2,
std::shared_ptr< const iolink::TensorView > inputTensor3,
std::shared_ptr< const iolink::TensorView > inputTensor4,
std::shared_ptr< const iolink::TensorView > inputTensor5,
std::shared_ptr< const iolink::TensorView > inputTensor6,
std::shared_ptr< const iolink::TensorView > inputTensor7,
std::shared_ptr< const iolink::TensorView > inputTensor8,
std::shared_ptr< const iolink::TensorView > inputTensor9,
std::shared_ptr< const iolink::TensorView > inputTensor10,
OnnxModel::Ptr inputOnnxModel,
std::shared_ptr< iolink::TensorView > outputTensor1 = nullptr,
std::shared_ptr< iolink::TensorView > outputTensor2 = nullptr,
std::shared_ptr< iolink::TensorView > outputTensor3 = nullptr,
std::shared_ptr< iolink::TensorView > outputTensor4 = nullptr,
std::shared_ptr< iolink::TensorView > outputTensor5 = nullptr,
std::shared_ptr< iolink::TensorView > outputTensor6 = nullptr,
std::shared_ptr< iolink::TensorView > outputTensor7 = nullptr,
std::shared_ptr< iolink::TensorView > outputTensor8 = nullptr,
std::shared_ptr< iolink::TensorView > outputTensor9 = nullptr,
std::shared_ptr< iolink::TensorView > outputTensor10 = nullptr );
This function returns a tuple containing output_tensor1, output_tensor2, output_tensor3, output_tensor4, output_tensor5, output_tensor6, output_tensor7, output_tensor8, output_tensor9 and output_tensor10.
// Function prototype.
onnx_tensor_prediction(input_tensor1: Optional[iolink.TensorView] = None,
input_tensor2: Optional[iolink.TensorView] = None,
input_tensor3: Optional[iolink.TensorView] = None,
input_tensor4: Optional[iolink.TensorView] = None,
input_tensor5: Optional[iolink.TensorView] = None,
input_tensor6: Optional[iolink.TensorView] = None,
input_tensor7: Optional[iolink.TensorView] = None,
input_tensor8: Optional[iolink.TensorView] = None,
input_tensor9: Optional[iolink.TensorView] = None,
input_tensor10: Optional[iolink.TensorView] = None,
input_onnx_model: Union[Any, None] = None,
output_tensor1: Optional[iolink.TensorView] = None,
output_tensor2: Optional[iolink.TensorView] = None,
output_tensor3: Optional[iolink.TensorView] = None,
output_tensor4: Optional[iolink.TensorView] = None,
output_tensor5: Optional[iolink.TensorView] = None,
output_tensor6: Optional[iolink.TensorView] = None,
output_tensor7: Optional[iolink.TensorView] = None,
output_tensor8: Optional[iolink.TensorView] = None,
output_tensor9: Optional[iolink.TensorView] = None,
output_tensor10: Optional[iolink.TensorView] = None) -> Tuple[Optional[iolink.TensorView], Optional[iolink.TensorView], Optional[iolink.TensorView], Optional[iolink.TensorView], Optional[iolink.TensorView], Optional[iolink.TensorView], Optional[iolink.TensorView], Optional[iolink.TensorView], Optional[iolink.TensorView], Optional[iolink.TensorView]]
This function returns an OnnxTensorPredictionOutput structure containing outputTensor1, outputTensor2, outputTensor3, outputTensor4, outputTensor5, outputTensor6, outputTensor7, outputTensor8, outputTensor9 and outputTensor10.
/// Output structure of the OnnxTensorPrediction function.
public struct OnnxTensorPredictionOutput
{
/// The first output tensor.
public IOLink.TensorView outputTensor1;
/// The second output tensor.
public IOLink.TensorView outputTensor2;
/// The third output tensor.
public IOLink.TensorView outputTensor3;
/// The fourth output tensor.
public IOLink.TensorView outputTensor4;
/// The fifth output tensor.
public IOLink.TensorView outputTensor5;
/// The sixth output tensor.
public IOLink.TensorView outputTensor6;
/// The seventh output tensor.
public IOLink.TensorView outputTensor7;
/// The eighth output tensor.
public IOLink.TensorView outputTensor8;
/// The ninth output tensor.
public IOLink.TensorView outputTensor9;
/// The tenth output tensor.
public IOLink.TensorView outputTensor10;
};
// Function prototype.
public static OnnxTensorPredictionOutput
OnnxTensorPrediction( IOLink.TensorView inputTensor1 = null,
IOLink.TensorView inputTensor2 = null,
IOLink.TensorView inputTensor3 = null,
IOLink.TensorView inputTensor4 = null,
IOLink.TensorView inputTensor5 = null,
IOLink.TensorView inputTensor6 = null,
IOLink.TensorView inputTensor7 = null,
IOLink.TensorView inputTensor8 = null,
IOLink.TensorView inputTensor9 = null,
IOLink.TensorView inputTensor10 = null,
OnnxModel inputOnnxModel = null,
IOLink.TensorView outputTensor1 = null,
IOLink.TensorView outputTensor2 = null,
IOLink.TensorView outputTensor3 = null,
IOLink.TensorView outputTensor4 = null,
IOLink.TensorView outputTensor5 = null,
IOLink.TensorView outputTensor6 = null,
IOLink.TensorView outputTensor7 = null,
IOLink.TensorView outputTensor8 = null,
IOLink.TensorView outputTensor9 = null,
IOLink.TensorView outputTensor10 = null );
Class Syntax
Parameters
| Parameter Name | Description | Type | Supported Values | Default Value |
|---|---|---|---|---|
| inputTensor1 | The first input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. | TensorViewConst | | nullptr |
| inputTensor2 | The second input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | nullptr |
| inputTensor3 | The third input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | nullptr |
| inputTensor4 | The fourth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | nullptr |
| inputTensor5 | The fifth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | nullptr |
| inputTensor6 | The sixth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | nullptr |
| inputTensor7 | The seventh input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | nullptr |
| inputTensor8 | The eighth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | nullptr |
| inputTensor9 | The ninth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | nullptr |
| inputTensor10 | The tenth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | nullptr |
| inputOnnxModel | The in-memory ONNX model. It must be loaded with the ReadOnnxModel command. | OnnxModel | | nullptr |
| outputTensor1 | The first output tensor. | TensorView | | nullptr |
| outputTensor2 | The second output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | nullptr |
| outputTensor3 | The third output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | nullptr |
| outputTensor4 | The fourth output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | nullptr |
| outputTensor5 | The fifth output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | nullptr |
| outputTensor6 | The sixth output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | nullptr |
| outputTensor7 | The seventh output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | nullptr |
| outputTensor8 | The eighth output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | nullptr |
| outputTensor9 | The ninth output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | nullptr |
| outputTensor10 | The tenth output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | nullptr |
| Parameter Name | Description | Type | Supported Values | Default Value |
|---|---|---|---|---|
| input_tensor1 | The first input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. | tensor_view_const | | None |
| input_tensor2 | The second input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | tensor_view_const | | None |
| input_tensor3 | The third input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | tensor_view_const | | None |
| input_tensor4 | The fourth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | tensor_view_const | | None |
| input_tensor5 | The fifth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | tensor_view_const | | None |
| input_tensor6 | The sixth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | tensor_view_const | | None |
| input_tensor7 | The seventh input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | tensor_view_const | | None |
| input_tensor8 | The eighth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | tensor_view_const | | None |
| input_tensor9 | The ninth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | tensor_view_const | | None |
| input_tensor10 | The tenth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | tensor_view_const | | None |
| input_onnx_model | The in-memory ONNX model. It must be loaded with the ReadOnnxModel command. | OnnxModel | | None |
| output_tensor1 | The first output tensor. | tensor_view | | None |
| output_tensor2 | The second output tensor. Depending on the selected model characteristics, this output may not be used. | tensor_view | | None |
| output_tensor3 | The third output tensor. Depending on the selected model characteristics, this output may not be used. | tensor_view | | None |
| output_tensor4 | The fourth output tensor. Depending on the selected model characteristics, this output may not be used. | tensor_view | | None |
| output_tensor5 | The fifth output tensor. Depending on the selected model characteristics, this output may not be used. | tensor_view | | None |
| output_tensor6 | The sixth output tensor. Depending on the selected model characteristics, this output may not be used. | tensor_view | | None |
| output_tensor7 | The seventh output tensor. Depending on the selected model characteristics, this output may not be used. | tensor_view | | None |
| output_tensor8 | The eighth output tensor. Depending on the selected model characteristics, this output may not be used. | tensor_view | | None |
| output_tensor9 | The ninth output tensor. Depending on the selected model characteristics, this output may not be used. | tensor_view | | None |
| output_tensor10 | The tenth output tensor. Depending on the selected model characteristics, this output may not be used. | tensor_view | | None |
| Parameter Name | Description | Type | Supported Values | Default Value |
|---|---|---|---|---|
| inputTensor1 | The first input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. | TensorViewConst | | null |
| inputTensor2 | The second input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | null |
| inputTensor3 | The third input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | null |
| inputTensor4 | The fourth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | null |
| inputTensor5 | The fifth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | null |
| inputTensor6 | The sixth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | null |
| inputTensor7 | The seventh input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | null |
| inputTensor8 | The eighth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | null |
| inputTensor9 | The ninth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | null |
| inputTensor10 | The tenth input tensor. Its dimensions and data type must be compliant with the corresponding expected input of the selected model. Depending on the selected model characteristics, this input may not be used. | TensorViewConst | | null |
| inputOnnxModel | The in-memory ONNX model. It must be loaded with the ReadOnnxModel command. | OnnxModel | | null |
| outputTensor1 | The first output tensor. | TensorView | | null |
| outputTensor2 | The second output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | null |
| outputTensor3 | The third output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | null |
| outputTensor4 | The fourth output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | null |
| outputTensor5 | The fifth output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | null |
| outputTensor6 | The sixth output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | null |
| outputTensor7 | The seventh output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | null |
| outputTensor8 | The eighth output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | null |
| outputTensor9 | The ninth output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | null |
| outputTensor10 | The tenth output tensor. Depending on the selected model characteristics, this output may not be used. | TensorView | | null |
Object Examples
auto inputImage = readVipImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "grayscale_images_2d_float.vip" );
OnnxModel::Ptr inputModel = OnnxModel::read( std::string( IMAGEDEVDATA_OBJECTS_FOLDER ) + "model2d-2outputs.onnx" );
// The ONNX model works with a tensor of shape {1, height, width, 1},
// so we need to convert the image to this shape.
// Convert the input image to tensor
auto inputImageAsTensor = iolink::TensorViewFactory::fromImage( inputImage );
auto tensorForPredictionShape =
iolink::VectorXu64{ 1, inputImageAsTensor->shape()[0], inputImageAsTensor->shape()[1], 1 };
auto tensorForPrediction =
iolink::TensorViewFactory::allocate( tensorForPredictionShape, inputImageAsTensor->dtype() );
memcpy( tensorForPrediction->buffer(), inputImageAsTensor->buffer(), inputImageAsTensor->bufferSize() );
OnnxTensorPrediction onnxTensorPredictionAlgo;
onnxTensorPredictionAlgo.setInputTensor1( tensorForPrediction );
onnxTensorPredictionAlgo.setInputOnnxModel( inputModel );
onnxTensorPredictionAlgo.execute();
std::cout << "outputTensor1: " << onnxTensorPredictionAlgo.outputTensor1()->shape().toString() << std::endl;
std::cout << "outputTensor2: " << onnxTensorPredictionAlgo.outputTensor2()->shape().toString() << std::endl;
import itertools
input_image = imagedev.read_vip_image(imagedev_data.get_image_path("grayscale_images_2d_float.vip"))
model_2d_2outputs = imagedev.OnnxModel.read(imagedev_data.get_object_path("model2d-2outputs.onnx"))
# The ONNX model works with a tensor of shape {1, height, width, 1},
# so we need to convert the image to this shape.
# Convert the input image to tensor
input_image_as_tensor: iolink.TensorView = iolink.TensorViewFactory.from_image(input_image)
# Copy tensor data to byte array
region_input_tensor = iolink.RegionXu64.create_full_region(input_image_as_tensor.shape)
# Create a byte array to hold the tensor data
buffer_input_tensor = iolink.array('i', itertools.repeat(0, region_input_tensor.element_count))
input_image_as_tensor.read(region_input_tensor, buffer_input_tensor)
# Create the tensor with the required shape for prediction
tensor_for_prediction_shape = \
    iolink.VectorXu64(1, input_image_as_tensor.shape[0], input_image_as_tensor.shape[1], 1)
tensor_for_prediction: iolink.TensorView = iolink.TensorViewFactory.allocate(tensor_for_prediction_shape,
input_image_as_tensor.dtype)
# Copy data from byte array to tensor
region_tensor_for_prediction = iolink.RegionXu64.create_full_region(tensor_for_prediction_shape)
tensor_for_prediction.write(region_tensor_for_prediction, buffer_input_tensor)
onnx_tensor_prediction_algo = imagedev.OnnxTensorPrediction()
onnx_tensor_prediction_algo.input_tensor1 = tensor_for_prediction
onnx_tensor_prediction_algo.input_onnx_model = model_2d_2outputs
onnx_tensor_prediction_algo.execute()
print("output_tensor1: ", str(onnx_tensor_prediction_algo.output_tensor1.shape))
print("output_tensor2: ", str(onnx_tensor_prediction_algo.output_tensor2.shape))
ImageView inputImage = Data.ReadVipImage(@"Data\images\grayscale_images_2d_float.vip");
OnnxModel inputModel = OnnxModel.Read(@"Data\objects\model2d-2outputs.onnx");
// The ONNX model works with a tensor of shape {1, height, width, 1},
// so we need to convert the image to this shape.
// Convert the input image to tensor
TensorView inputImageAsTensor = TensorViewFactory.FromImage(inputImage);
// Copy tensor data to byte array
RegionXu64 regionInputTensor = RegionXu64.CreateFullRegion(inputImageAsTensor.Shape);
uint byteCountInputTensor = regionInputTensor.ElementCount * inputImageAsTensor.Dtype.ByteCount();
byte[] bufferInputTensor = new byte[byteCountInputTensor];
inputImageAsTensor.Read(regionInputTensor, bufferInputTensor);
// Create the tensor with the required shape for prediction
var tensorForPredictionShape =
new VectorXu64(1, inputImageAsTensor.Shape[0], inputImageAsTensor.Shape[1], 1);
TensorView tensorForPrediction =
TensorViewFactory.Allocate(tensorForPredictionShape, inputImageAsTensor.Dtype);
inputImageAsTensor.Dispose();
// Copy data from byte array to tensor
RegionXu64 regionTensorForPrediction = RegionXu64.CreateFullRegion(tensorForPredictionShape);
tensorForPrediction.Write(regionTensorForPrediction, bufferInputTensor);
OnnxTensorPrediction onnxTensorPredictionAlgo = new OnnxTensorPrediction
{
inputTensor1 = tensorForPrediction,
inputOnnxModel = inputModel
};
onnxTensorPredictionAlgo.Execute();
Console.WriteLine("outputTensor1: " + onnxTensorPredictionAlgo.outputTensor1.Shape.ToString());
Console.WriteLine("outputTensor2: " + onnxTensorPredictionAlgo.outputTensor2.Shape.ToString());
onnxTensorPredictionAlgo.Dispose();
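In the examples above, the 2D image must be repacked into the {1, height, width, 1} layout expected by the model before prediction. The shape manipulation itself can be sketched independently with NumPy (NumPy is not part of the ImageDev API, and the image size used here is hypothetical):

```python
import numpy as np

# Hypothetical 2D grayscale image: height 4, width 5.
image = np.arange(20, dtype=np.float32).reshape(4, 5)

# Insert a leading batch axis and a trailing channel axis
# to obtain the {1, height, width, 1} layout.
tensor_for_prediction = image[np.newaxis, :, :, np.newaxis]

print(tensor_for_prediction.shape)  # (1, 4, 5, 1)
```

Only the shape changes; the pixel values and their row-major order are preserved, which is why the ImageDev examples can copy the raw buffer directly into the reshaped tensor.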
Function Examples
auto inputImage = readVipImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "grayscale_images_2d_float.vip" );
OnnxModel::Ptr inputModel = OnnxModel::read( std::string( IMAGEDEVDATA_OBJECTS_FOLDER ) + "model2d-2outputs.onnx" );
// The ONNX model works with a tensor of shape {1, height, width, 1},
// so we need to convert the image to this shape.
// Convert the input image to tensor
auto inputImageAsTensor = iolink::TensorViewFactory::fromImage( inputImage );
auto tensorForPredictionShape =
iolink::VectorXu64{ 1, inputImageAsTensor->shape()[0], inputImageAsTensor->shape()[1], 1 };
auto tensorForPrediction =
iolink::TensorViewFactory::allocate( tensorForPredictionShape, inputImageAsTensor->dtype() );
memcpy( tensorForPrediction->buffer(), inputImageAsTensor->buffer(), inputImageAsTensor->bufferSize() );
// Run the prediction
auto result = onnxTensorPrediction( tensorForPrediction, nullptr, nullptr, nullptr, nullptr, nullptr, nullptr, nullptr,
nullptr, nullptr, inputModel );
std::cout << "outputTensor1: " << result.outputTensor1->shape().toString() << std::endl;
std::cout << "outputTensor2: " << result.outputTensor2->shape().toString() << std::endl;
import itertools
input_image = imagedev.read_vip_image(imagedev_data.get_image_path("grayscale_images_2d_float.vip"))
model_2d_2outputs = imagedev.OnnxModel.read(imagedev_data.get_object_path("model2d-2outputs.onnx"))
# The ONNX model works with a tensor of shape {1, height, width, 1},
# so we need to convert the image to this shape.
# Convert the input image to tensor
input_image_as_tensor: iolink.TensorView = iolink.TensorViewFactory.from_image(input_image)
# Copy tensor data to byte array
region_input_tensor = iolink.RegionXu64.create_full_region(input_image_as_tensor.shape)
buffer_input_tensor = iolink.array('i', itertools.repeat(0, region_input_tensor.element_count))
input_image_as_tensor.read(region_input_tensor, buffer_input_tensor)
# Create the tensor with the required shape for prediction
tensor_for_prediction_shape = \
    iolink.VectorXu64(1, input_image_as_tensor.shape[0], input_image_as_tensor.shape[1], 1)
tensor_for_prediction: iolink.TensorView = iolink.TensorViewFactory.allocate(tensor_for_prediction_shape,
input_image_as_tensor.dtype)
# Copy data from byte array to tensor
region_tensor_for_prediction = iolink.RegionXu64.create_full_region(tensor_for_prediction_shape)
tensor_for_prediction.write(region_tensor_for_prediction, buffer_input_tensor)
result_tensor1, result_tensor2, _, _, _, _, _, _, _, _ = \
imagedev.onnx_tensor_prediction(input_tensor1=tensor_for_prediction, input_onnx_model=model_2d_2outputs)
print("output_tensor1: ", str(result_tensor1.shape))
print("output_tensor2: ", str(result_tensor2.shape))
ImageView inputImage = Data.ReadVipImage(@"Data\images\grayscale_images_2d_float.vip");
OnnxModel inputModel = OnnxModel.Read(@"Data\objects\model2d-2outputs.onnx");
// The ONNX model works with a tensor of shape {1, height, width, 1},
// so we need to convert the image to this shape.
// Convert the input image to tensor
TensorView inputImageAsTensor = TensorViewFactory.FromImage(inputImage);
// Copy tensor data to byte array
RegionXu64 regionInputTensor = RegionXu64.CreateFullRegion(inputImageAsTensor.Shape);
uint byteCountInputTensor = regionInputTensor.ElementCount * inputImageAsTensor.Dtype.ByteCount();
byte[] bufferInputTensor = new byte[byteCountInputTensor];
inputImageAsTensor.Read(regionInputTensor, bufferInputTensor);
// Create the tensor with the required shape for prediction
var tensorForPredictionShape =
new VectorXu64(1, inputImageAsTensor.Shape[0], inputImageAsTensor.Shape[1], 1);
TensorView tensorForPrediction =
TensorViewFactory.Allocate(tensorForPredictionShape, inputImageAsTensor.Dtype);
inputImageAsTensor.Dispose();
// Copy data from byte array to tensor
RegionXu64 regionTensorForPrediction = RegionXu64.CreateFullRegion(tensorForPredictionShape);
tensorForPrediction.Write(regionTensorForPrediction, bufferInputTensor);
Processing.OnnxTensorPredictionOutput result =
Processing.OnnxTensorPrediction(tensorForPrediction, null, null, null, null, null, null,
null, null, null, inputModel);
Console.WriteLine("outputTensor1: " + result.outputTensor1.Shape.ToString());
Console.WriteLine("outputTensor2: " + result.outputTensor2.Shape.ToString());
inputModel.Dispose();
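After execution, interpreting the returned tensors is up to the caller. For a classification-style model, a common post-processing step is an argmax over the class-score axis. A minimal NumPy sketch follows; the (1, 3) score shape and the values are hypothetical and not produced by the example model above:

```python
import numpy as np

# Hypothetical scores from an output tensor of a 3-class model,
# shape (1, 3): one batch entry, one score per class.
scores = np.array([[0.1, 0.7, 0.2]], dtype=np.float32)

# The predicted class is the index of the highest score.
predicted_class = int(np.argmax(scores, axis=1)[0])
print(predicted_class)  # 1
```

For segmentation-style outputs the same idea applies per pixel, with the argmax taken over the channel axis of the output tensor.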
© 2026 Thermo Fisher Scientific Inc. All rights reserved.