OnnxPredictionTwoOutputs2d
Computes a prediction on a two-dimensional image from an ONNX model and generates two images.
For an overview, please refer to the Deep Learning section.
This algorithm produces the two images generated by executing the model. Depending on how the model has been trained, it can be used either for image filtering or for segmentation, with an appropriate post-processing step applied afterwards, as sketched below.
This algorithm may be especially useful for applying a StarDist model, which provides two outputs: an object probability map and the star-convex polygon distances for a set of orientations.
The following steps are applied:
- Pre-processing
- Prediction
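When the model follows the StarDist convention, for example, the first output is an object probability map that can be thresholded to select object candidates before the star-convex polygons are reconstructed from the second output. Below is a minimal NumPy sketch of such a post-processing step; it assumes the first output has already been converted to a NumPy array, and the threshold value is an illustrative assumption, not a library default.

import numpy as np

def binarize_probability_map(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    # Keep the pixels whose object probability exceeds the threshold.
    # Reconstructing polygons from the distance output is a separate, later step.
    return prob_map > threshold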
Function Syntax
This function returns an OnnxPredictionTwoOutputs2dOutput structure containing outputImage1 and outputImage2.
// Output structure of the onnxPredictionTwoOutputs2d function.
struct OnnxPredictionTwoOutputs2dOutput
{
    /// The first output image. Its dimensions and data type are determined by the model.
    std::shared_ptr< iolink::ImageView > outputImage1;
    /// The second output image. Its dimensions and data type are determined by the model.
    std::shared_ptr< iolink::ImageView > outputImage2;
};

// Function prototype.
OnnxPredictionTwoOutputs2dOutput
onnxPredictionTwoOutputs2d( std::shared_ptr< iolink::ImageView > inputImage,
                            OnnxModel::Ptr inputOnnxModel,
                            OnnxPredictionTwoOutputs2d::DataFormat dataFormat,
                            OnnxPredictionTwoOutputs2d::InputNormalizationType inputNormalizationType,
                            const iolink::Vector2d& normalizationRange,
                            OnnxPredictionTwoOutputs2d::NormalizationScope normalizationScope,
                            const iolink::Vector2u32& tileSize,
                            uint32_t tileOverlap,
                            std::shared_ptr< iolink::ImageView > outputImage1 = nullptr,
                            std::shared_ptr< iolink::ImageView > outputImage2 = nullptr );
This function returns a tuple containing output_image1 and output_image2.
# Function prototype.
onnx_prediction_two_outputs_2d(
    input_image: idt.ImageType,
    input_onnx_model: Union[Any, None] = None,
    data_format: OnnxPredictionTwoOutputs2d.DataFormat = OnnxPredictionTwoOutputs2d.DataFormat.NHWC,
    input_normalization_type: OnnxPredictionTwoOutputs2d.InputNormalizationType = OnnxPredictionTwoOutputs2d.InputNormalizationType.STANDARDIZATION,
    normalization_range: Union[Iterable[int], Iterable[float]] = [0, 1],
    normalization_scope: OnnxPredictionTwoOutputs2d.NormalizationScope = OnnxPredictionTwoOutputs2d.NormalizationScope.GLOBAL,
    tile_size: Iterable[int] = [256, 256],
    tile_overlap: int = 32,
    output_image1: idt.ImageType = None,
    output_image2: idt.ImageType = None
) -> Tuple[idt.ImageType, idt.ImageType]
This function returns an OnnxPredictionTwoOutputs2dOutput structure containing outputImage1 and outputImage2.
/// Output structure of the OnnxPredictionTwoOutputs2d function.
public struct OnnxPredictionTwoOutputs2dOutput
{
    /// The first output image. Its dimensions and data type are determined by the model.
    public IOLink.ImageView outputImage1;
    /// The second output image. Its dimensions and data type are determined by the model.
    public IOLink.ImageView outputImage2;
};

// Function prototype.
public static OnnxPredictionTwoOutputs2dOutput
OnnxPredictionTwoOutputs2d( IOLink.ImageView inputImage,
                            OnnxModel inputOnnxModel = null,
                            OnnxPredictionTwoOutputs2d.DataFormat dataFormat = ImageDev.OnnxPredictionTwoOutputs2d.DataFormat.NHWC,
                            OnnxPredictionTwoOutputs2d.InputNormalizationType inputNormalizationType = ImageDev.OnnxPredictionTwoOutputs2d.InputNormalizationType.STANDARDIZATION,
                            double[] normalizationRange = null,
                            OnnxPredictionTwoOutputs2d.NormalizationScope normalizationScope = ImageDev.OnnxPredictionTwoOutputs2d.NormalizationScope.GLOBAL,
                            uint[] tileSize = null,
                            UInt32 tileOverlap = 32,
                            IOLink.ImageView outputImage1 = null,
                            IOLink.ImageView outputImage2 = null );
Class Syntax
Parameters
Parameter Name | Description | Type | Supported Values | Default Value
---|---|---|---|---
inputImage | The input image. It can be a grayscale or color image, depending on the selected model. | Image | Binary, Label, Grayscale or Multispectral | nullptr
inputOnnxModel | The in-memory ONNX model. It must be loaded with the ReadOnnxModel command. | OnnxModel | | nullptr
dataFormat | The tensor layout expected as input by the model. The input image is automatically converted to this layout by the algorithm. | Enumeration | | NHWC
inputNormalizationType | The type of normalization to apply before computing the prediction. It is recommended to apply the same pre-processing as was used during training. | Enumeration | | STANDARDIZATION
normalizationRange | The data range in which the input image is normalized before computing the prediction. It is recommended to apply the same pre-processing as was used during training. This parameter is ignored if the normalization type is set to NONE. | Vector2d | Any value | {0.f, 1.f}
normalizationScope | The scope for computing the normalization statistics (mean, standard deviation, minimum, or maximum). This parameter is ignored if the normalization type is set to NONE. | Enumeration | | GLOBAL
tileSize | The width and height in pixels of the sliding window. This size includes the user-defined tile overlap. It must be a multiple of 2 to the power of the number of downsampling or upsampling layers of the model. Guidelines to select an appropriate tile size are available in the Tiling section. | Vector2u32 | != 0 | {256, 256}
tileOverlap | The number of pixels used as overlap between the tiles. An overlap of zero may lead to artifacts in the prediction result. A non-zero overlap reduces such artifacts but increases the computation time. | UInt32 | Any value | 32
outputImage1 | The first output image. Its dimensions and data type are determined by the model. | Image | | nullptr
outputImage2 | The second output image. Its dimensions and data type are determined by the model. | Image | | nullptr
Parameter Name | Description | Type | Supported Values | Default Value
---|---|---|---|---
input_image | The input image. It can be a grayscale or color image, depending on the selected model. | image | Binary, Label, Grayscale or Multispectral | None
input_onnx_model | The in-memory ONNX model. It must be loaded with the ReadOnnxModel command. | OnnxModel | | None
data_format | The tensor layout expected as input by the model. The input image is automatically converted to this layout by the algorithm. | enumeration | | NHWC
input_normalization_type | The type of normalization to apply before computing the prediction. It is recommended to apply the same pre-processing as was used during training. | enumeration | | STANDARDIZATION
normalization_range | The data range in which the input image is normalized before computing the prediction. It is recommended to apply the same pre-processing as was used during training. This parameter is ignored if the normalization type is set to NONE. | vector2d | Any value | [0, 1]
normalization_scope | The scope for computing the normalization statistics (mean, standard deviation, minimum, or maximum). This parameter is ignored if the normalization type is set to NONE. | enumeration | | GLOBAL
tile_size | The width and height in pixels of the sliding window. This size includes the user-defined tile overlap. It must be a multiple of 2 to the power of the number of downsampling or upsampling layers of the model. Guidelines to select an appropriate tile size are available in the Tiling section. | vector2u32 | != 0 | [256, 256]
tile_overlap | The number of pixels used as overlap between the tiles. An overlap of zero may lead to artifacts in the prediction result. A non-zero overlap reduces such artifacts but increases the computation time. | uint32 | Any value | 32
output_image1 | The first output image. Its dimensions and data type are determined by the model. | image | | None
output_image2 | The second output image. Its dimensions and data type are determined by the model. | image | | None
Parameter Name | Description | Type | Supported Values | Default Value
---|---|---|---|---
inputImage | The input image. It can be a grayscale or color image, depending on the selected model. | Image | Binary, Label, Grayscale or Multispectral | null
inputOnnxModel | The in-memory ONNX model. It must be loaded with the ReadOnnxModel command. | OnnxModel | | null
dataFormat | The tensor layout expected as input by the model. The input image is automatically converted to this layout by the algorithm. | Enumeration | | NHWC
inputNormalizationType | The type of normalization to apply before computing the prediction. It is recommended to apply the same pre-processing as was used during training. | Enumeration | | STANDARDIZATION
normalizationRange | The data range in which the input image is normalized before computing the prediction. It is recommended to apply the same pre-processing as was used during training. This parameter is ignored if the normalization type is set to NONE. | Vector2d | Any value | {0f, 1f}
normalizationScope | The scope for computing the normalization statistics (mean, standard deviation, minimum, or maximum). This parameter is ignored if the normalization type is set to NONE. | Enumeration | | GLOBAL
tileSize | The width and height in pixels of the sliding window. This size includes the user-defined tile overlap. It must be a multiple of 2 to the power of the number of downsampling or upsampling layers of the model. Guidelines to select an appropriate tile size are available in the Tiling section. | Vector2u32 | != 0 | {256, 256}
tileOverlap | The number of pixels used as overlap between the tiles. An overlap of zero may lead to artifacts in the prediction result. A non-zero overlap reduces such artifacts but increases the computation time. | UInt32 | Any value | 32
outputImage1 | The first output image. Its dimensions and data type are determined by the model. | Image | | null
outputImage2 | The second output image. Its dimensions and data type are determined by the model. | Image | | null
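To make the normalization parameters above concrete, the sketch below illustrates the two usual schemes; it is an illustration of the presumed pre-processing, not the library's internal implementation. STANDARDIZATION maps the data to zero mean and unit standard deviation, while a range-based normalization rescales the data into normalizationRange. With the GLOBAL scope, the statistics are computed once over the whole image rather than per tile.

import numpy as np

def standardize(image: np.ndarray) -> np.ndarray:
    # STANDARDIZATION: zero mean and unit standard deviation, with the mean and
    # standard deviation computed over the whole image (GLOBAL scope).
    return (image - image.mean()) / image.std()

def rescale_to_range(image: np.ndarray, lo: float = 0.0, hi: float = 1.0) -> np.ndarray:
    # Range-based normalization: rescale the image into [lo, hi],
    # corresponding to the normalizationRange parameter.
    mn, mx = image.min(), image.max()
    return lo + (image - mn) * (hi - lo) / (mx - mn)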
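Similarly, the tile size constraint can be checked programmatically. In the sketch below, num_resampling_layers is an assumption that depends on the model architecture: a 4-level U-Net, for example, requires every tile dimension to be a multiple of 2^4 = 16, which the default of 256 satisfies.

def is_valid_tile_size(size: int, num_resampling_layers: int) -> bool:
    # Each tile dimension must be a multiple of 2**L, where L is the number of
    # downsampling or upsampling layers of the model.
    return size % (2 ** num_resampling_layers) == 0

# Example: with 4 resampling layers, 256 is valid (256 % 16 == 0), but 100 is not.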
Object Examples
auto grayscale_images_2d_float = readVipImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "grayscale_images_2d_float.vip" );
OnnxModel::Ptr model2d_2outputs = OnnxModel::read( std::string( IMAGEDEVDATA_OBJECTS_FOLDER ) + "model2d-2outputs.onnx" );

OnnxPredictionTwoOutputs2d onnxPredictionTwoOutputs2dAlgo;
onnxPredictionTwoOutputs2dAlgo.setInputImage( grayscale_images_2d_float );
onnxPredictionTwoOutputs2dAlgo.setInputOnnxModel( model2d_2outputs );
onnxPredictionTwoOutputs2dAlgo.setDataFormat( OnnxPredictionTwoOutputs2d::DataFormat::NHWC );
onnxPredictionTwoOutputs2dAlgo.setInputNormalizationType( OnnxPredictionTwoOutputs2d::InputNormalizationType::STANDARDIZATION );
onnxPredictionTwoOutputs2dAlgo.setNormalizationRange( {0, 1} );
onnxPredictionTwoOutputs2dAlgo.setNormalizationScope( OnnxPredictionTwoOutputs2d::NormalizationScope::GLOBAL );
onnxPredictionTwoOutputs2dAlgo.setTileSize( {128, 128} );
onnxPredictionTwoOutputs2dAlgo.setTileOverlap( 32 );
onnxPredictionTwoOutputs2dAlgo.execute();

std::cout << "outputImage1:" << onnxPredictionTwoOutputs2dAlgo.outputImage1()->toString();
std::cout << "outputImage2:" << onnxPredictionTwoOutputs2dAlgo.outputImage2()->toString();
grayscale_images_2d_float = imagedev.read_vip_image(imagedev_data.get_image_path("grayscale_images_2d_float.vip"))
model_2d_2outputs = imagedev.OnnxModel.read(imagedev_data.get_object_path("model2d-2outputs.onnx"))

onnx_prediction_two_outputs_2d_algo = imagedev.OnnxPredictionTwoOutputs2d()
onnx_prediction_two_outputs_2d_algo.input_image = grayscale_images_2d_float
onnx_prediction_two_outputs_2d_algo.input_onnx_model = model_2d_2outputs
onnx_prediction_two_outputs_2d_algo.data_format = imagedev.OnnxPredictionTwoOutputs2d.NHWC
onnx_prediction_two_outputs_2d_algo.input_normalization_type = imagedev.OnnxPredictionTwoOutputs2d.STANDARDIZATION
onnx_prediction_two_outputs_2d_algo.normalization_range = [0, 1]
onnx_prediction_two_outputs_2d_algo.normalization_scope = imagedev.OnnxPredictionTwoOutputs2d.GLOBAL
onnx_prediction_two_outputs_2d_algo.tile_size = [128, 128]
onnx_prediction_two_outputs_2d_algo.tile_overlap = 32
onnx_prediction_two_outputs_2d_algo.execute()

print("output_image1:", str(onnx_prediction_two_outputs_2d_algo.output_image1))
print("output_image2:", str(onnx_prediction_two_outputs_2d_algo.output_image2))
ImageView grayscale_images_2d_float = Data.ReadVipImage( @"Data/images/grayscale_images_2d_float.vip" );
OnnxModel model2d_2outputs = OnnxModel.Read( @"Data/objects/model2d-2outputs.onnx" );

OnnxPredictionTwoOutputs2d onnxPredictionTwoOutputs2dAlgo = new OnnxPredictionTwoOutputs2d
{
    inputImage = grayscale_images_2d_float,
    inputOnnxModel = model2d_2outputs,
    dataFormat = OnnxPredictionTwoOutputs2d.DataFormat.NHWC,
    inputNormalizationType = OnnxPredictionTwoOutputs2d.InputNormalizationType.STANDARDIZATION,
    normalizationRange = new double[]{0, 1},
    normalizationScope = OnnxPredictionTwoOutputs2d.NormalizationScope.GLOBAL,
    tileSize = new uint[]{128, 128},
    tileOverlap = 32
};
onnxPredictionTwoOutputs2dAlgo.Execute();

Console.WriteLine( "outputImage1:" + onnxPredictionTwoOutputs2dAlgo.outputImage1.ToString() );
Console.WriteLine( "outputImage2:" + onnxPredictionTwoOutputs2dAlgo.outputImage2.ToString() );
Function Examples
auto grayscale_images_2d_float = readVipImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "grayscale_images_2d_float.vip" );
OnnxModel::Ptr model2d_2outputs = OnnxModel::read( std::string( IMAGEDEVDATA_OBJECTS_FOLDER ) + "model2d-2outputs.onnx" );

auto result = onnxPredictionTwoOutputs2d( grayscale_images_2d_float,
                                          model2d_2outputs,
                                          OnnxPredictionTwoOutputs2d::DataFormat::NHWC,
                                          OnnxPredictionTwoOutputs2d::InputNormalizationType::STANDARDIZATION,
                                          {0, 1},
                                          OnnxPredictionTwoOutputs2d::NormalizationScope::GLOBAL,
                                          {128, 128},
                                          32 );

std::cout << "outputImage1:" << result.outputImage1->toString();
std::cout << "outputImage2:" << result.outputImage2->toString();
grayscale_images_2d_float = imagedev.read_vip_image(imagedev_data.get_image_path("grayscale_images_2d_float.vip"))
model_2d_2outputs = imagedev.OnnxModel.read(imagedev_data.get_object_path("model2d-2outputs.onnx"))

result_output_image1, result_output_image2 = imagedev.onnx_prediction_two_outputs_2d(
    grayscale_images_2d_float,
    model_2d_2outputs,
    imagedev.OnnxPredictionTwoOutputs2d.NHWC,
    imagedev.OnnxPredictionTwoOutputs2d.STANDARDIZATION,
    [0, 1],
    imagedev.OnnxPredictionTwoOutputs2d.GLOBAL,
    [128, 128],
    32)

print("output_image1:", str(result_output_image1))
print("output_image2:", str(result_output_image2))
ImageView grayscale_images_2d_float = Data.ReadVipImage( @"Data/images/grayscale_images_2d_float.vip" );
OnnxModel model2d_2outputs = OnnxModel.Read( @"Data/objects/model2d-2outputs.onnx" );

Processing.OnnxPredictionTwoOutputs2dOutput result = Processing.OnnxPredictionTwoOutputs2d(
    grayscale_images_2d_float,
    model2d_2outputs,
    OnnxPredictionTwoOutputs2d.DataFormat.NHWC,
    OnnxPredictionTwoOutputs2d.InputNormalizationType.STANDARDIZATION,
    new double[]{0, 1},
    OnnxPredictionTwoOutputs2d.NormalizationScope.GLOBAL,
    new uint[]{128, 128},
    32 );

Console.WriteLine( "outputImage1:" + result.outputImage1.ToString() );
Console.WriteLine( "outputImage2:" + result.outputImage2.ToString() );
model2d_2outputs.Dispose();
© 2025 Thermo Fisher Scientific Inc. All rights reserved.