ImageDev

OnnxPredictionTwoOutputs3d

Computes a prediction on a three-dimensional image from an ONNX model and generates two images.


For an overview, please refer to the Deep Learning section.
This algorithm produces the two images given by the execution of the model. Depending on how the model has been trained, it can be used either for image filtering or, with an appropriate post-processing step, for segmentation.
This algorithm may be especially useful for applying a StarDist model, which provides two outputs: an object probability map and the star-convex polygon distances for a set of orientations; a rough post-processing sketch is shown below.
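As an illustration, the following sketch thresholds an object probability map and labels its connected components with NumPy and SciPy. The helper is hypothetical and only a crude stand-in for real StarDist post-processing, which runs non-maximum suppression on the star-convex polygon distances; the input array is synthetic rather than an actual model output.

import numpy as np
from scipy import ndimage

def simple_instance_mask(probability_map: np.ndarray, threshold: float = 0.5):
    """Threshold an object probability map and label connected components.

    A real StarDist pipeline would instead run non-maximum suppression on
    the star-convex polygon distances; this is only an approximation.
    """
    binary = probability_map >= threshold  # object / background mask
    labels, count = ndimage.label(binary)  # connected-component labeling
    return labels, count

# Synthetic volume standing in for the first model output.
prob = np.random.rand(64, 64, 64).astype(np.float32)
labels, count = simple_instance_mask(prob, threshold=0.8)
print(f"{count} objects found")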

Internally, the input image is normalized, split into overlapping tiles, processed tile by tile by the model, and the two predicted outputs are stitched back into full-size images.

Function Syntax

This function returns an OnnxPredictionTwoOutputs3dOutput structure containing outputImage1 and outputImage2.
// Output structure of the onnxPredictionTwoOutputs3d function.
struct OnnxPredictionTwoOutputs3dOutput
{
    /// The first output image. Its dimensions and data type are determined by the model.
    std::shared_ptr< iolink::ImageView > outputImage1;
    /// The second output image. Its dimensions and data type are determined by the model.
    std::shared_ptr< iolink::ImageView > outputImage2;
};

// Function prototype
OnnxPredictionTwoOutputs3dOutput
onnxPredictionTwoOutputs3d( std::shared_ptr< iolink::ImageView > inputImage,
                            OnnxModel::Ptr inputOnnxModel,
                            OnnxPredictionTwoOutputs3d::DataFormat dataFormat,
                            OnnxPredictionTwoOutputs3d::InputNormalizationType inputNormalizationType,
                            const iolink::Vector2d& normalizationRange,
                            OnnxPredictionTwoOutputs3d::NormalizationScope normalizationScope,
                            const iolink::Vector3u32& tileSize,
                            uint32_t tileOverlap,
                            std::shared_ptr< iolink::ImageView > outputImage1 = nullptr,
                            std::shared_ptr< iolink::ImageView > outputImage2 = nullptr );
This function returns a tuple containing output_image1 and output_image2.
// Function prototype.
onnx_prediction_two_outputs_3d(input_image: idt.ImageType,
                               input_onnx_model: Union[Any, None] = None,
                               data_format: OnnxPredictionTwoOutputs3d.DataFormat = OnnxPredictionTwoOutputs3d.DataFormat.NDHWC,
                               input_normalization_type: OnnxPredictionTwoOutputs3d.InputNormalizationType = OnnxPredictionTwoOutputs3d.InputNormalizationType.STANDARDIZATION,
                               normalization_range: Union[Iterable[int], Iterable[float]] = [0, 1],
                               normalization_scope: OnnxPredictionTwoOutputs3d.NormalizationScope = OnnxPredictionTwoOutputs3d.NormalizationScope.GLOBAL,
                               tile_size: Iterable[int] = [128, 128, 128],
                               tile_overlap: int = 16,
                               output_image1: idt.ImageType = None,
                               output_image2: idt.ImageType = None) -> Tuple[idt.ImageType, idt.ImageType]
This function returns an OnnxPredictionTwoOutputs3dOutput structure containing outputImage1 and outputImage2.
/// Output structure of the OnnxPredictionTwoOutputs3d function.
public struct OnnxPredictionTwoOutputs3dOutput
{
    /// The first output image. Its dimensions and data type are determined by the model.
    public IOLink.ImageView outputImage1;
    /// The second output image. Its dimensions and data type are determined by the model.
    public IOLink.ImageView outputImage2;
};

// Function prototype.
public static OnnxPredictionTwoOutputs3dOutput
OnnxPredictionTwoOutputs3d( IOLink.ImageView inputImage,
                            OnnxModel inputOnnxModel = null,
                            OnnxPredictionTwoOutputs3d.DataFormat dataFormat = ImageDev.OnnxPredictionTwoOutputs3d.DataFormat.NDHWC,
                            OnnxPredictionTwoOutputs3d.InputNormalizationType inputNormalizationType = ImageDev.OnnxPredictionTwoOutputs3d.InputNormalizationType.STANDARDIZATION,
                            double[] normalizationRange = null,
                            OnnxPredictionTwoOutputs3d.NormalizationScope normalizationScope = ImageDev.OnnxPredictionTwoOutputs3d.NormalizationScope.GLOBAL,
                            uint[] tileSize = null,
                            UInt32 tileOverlap = 16,
                            IOLink.ImageView outputImage1 = null,
                            IOLink.ImageView outputImage2 = null );

Class Syntax

Parameters

inputImage (input): The input image. It can be a grayscale or color image, depending on the selected model.
  Type: Image | Supported values: Binary, Label, Grayscale or Multispectral | Default: nullptr

inputOnnxModel (input): The in-memory ONNX model. It must be loaded with the ReadOnnxModel command.
  Type: OnnxModel | Default: nullptr

dataFormat (input): The tensor layout expected as input by the model. The input image is automatically converted to this layout by the algorithm (see the layout sketch after this table).
  NDHWC: The layout is organized with interlaced channels. For instance, if the input is a color image, each pixel presents its RGB components successively.
  NCDHW: The layout is organized with separate channels. Each channel is an individual plane.
  Type: Enumeration | Default: NDHWC

inputNormalizationType (input): The type of normalization to apply before computing the prediction. It is recommended to apply the same pre-processing as during the training.
  NONE: No normalization is applied before executing the prediction.
  STANDARDIZATION: A normalization is applied by subtracting the mean and dividing by the standard deviation.
  MIN_MAX: A normalization is applied by subtracting the minimum and dividing by the data range.
  Type: Enumeration | Default: STANDARDIZATION

normalizationRange (input): The data range in which the input image is normalized before computing the prediction. It is recommended to apply the same pre-processing as during the training. This parameter is ignored if the normalization type is set to NONE.
  Type: Vector2d | Supported values: Any value | Default: {0.f, 1.f}

normalizationScope (input): The scope for computing normalization (mean, standard deviation, minimum, or maximum). This parameter is ignored if the normalization type is set to NONE.
  GLOBAL: The normalization is applied globally on the input batch.
  PER_VOLUME: The normalization is applied individually on each image of the input batch.
  Type: Enumeration | Default: GLOBAL

tileSize (input): The width, height, and depth in pixels of the sliding window. This size includes the user-defined tile overlap. It must be a multiple of 2 to the power of the number of downsampling or upsampling layers. Guidelines to select an appropriate tile size are available in the Tiling section.
  Type: Vector3u32 | Supported values: != 0 | Default: {128, 128, 128}

tileOverlap (input): The number of pixels used as overlap between the tiles. An overlap of zero may lead to artifacts in the prediction result. A non-zero overlap reduces such artifacts but increases the computation time.
  Type: UInt32 | Supported values: Any value | Default: 16

outputImage1 (output): The first output image. Its dimensions and data type are determined by the model.
  Type: Image | Default: nullptr

outputImage2 (output): The second output image. Its dimensions and data type are determined by the model.
  Type: Image | Default: nullptr
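The two data formats differ only in the position of the channel axis. The following NumPy sketch is illustrative only; the algorithm performs this conversion internally.

import numpy as np

# One batch (N = 1) of a 3D volume with 3 channels, in NDHWC order.
ndhwc = np.zeros((1, 16, 32, 32, 3), dtype=np.float32)  # N, D, H, W, C

# NCDHW moves the channel axis next to the batch axis, so that each
# channel becomes an individual plane.
ncdhw = np.transpose(ndhwc, (0, 4, 1, 2, 3))  # N, C, D, H, W
print(ncdhw.shape)  # (1, 3, 16, 32, 32)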
input_image (input): The input image. It can be a grayscale or color image, depending on the selected model.
  Type: image | Supported values: Binary, Label, Grayscale or Multispectral | Default: None

input_onnx_model (input): The in-memory ONNX model. It must be loaded with the ReadOnnxModel command.
  Type: OnnxModel | Default: None

data_format (input): The tensor layout expected as input by the model. The input image is automatically converted to this layout by the algorithm.
  NDHWC: The layout is organized with interlaced channels. For instance, if the input is a color image, each pixel presents its RGB components successively.
  NCDHW: The layout is organized with separate channels. Each channel is an individual plane.
  Type: enumeration | Default: NDHWC

input_normalization_type (input): The type of normalization to apply before computing the prediction. It is recommended to apply the same pre-processing as during the training.
  NONE: No normalization is applied before executing the prediction.
  STANDARDIZATION: A normalization is applied by subtracting the mean and dividing by the standard deviation.
  MIN_MAX: A normalization is applied by subtracting the minimum and dividing by the data range.
  Type: enumeration | Default: STANDARDIZATION

normalization_range (input): The data range in which the input image is normalized before computing the prediction. It is recommended to apply the same pre-processing as during the training. This parameter is ignored if the normalization type is set to NONE.
  Type: vector2d | Supported values: Any value | Default: [0, 1]

normalization_scope (input): The scope for computing normalization (mean, standard deviation, minimum, or maximum). This parameter is ignored if the normalization type is set to NONE.
  GLOBAL: The normalization is applied globally on the input batch.
  PER_VOLUME: The normalization is applied individually on each image of the input batch.
  Type: enumeration | Default: GLOBAL

tile_size (input): The width, height, and depth in pixels of the sliding window. This size includes the user-defined tile overlap. It must be a multiple of 2 to the power of the number of downsampling or upsampling layers (see the tiling sketch after this table). Guidelines to select an appropriate tile size are available in the Tiling section.
  Type: vector3u32 | Supported values: != 0 | Default: [128, 128, 128]

tile_overlap (input): The number of pixels used as overlap between the tiles. An overlap of zero may lead to artifacts in the prediction result. A non-zero overlap reduces such artifacts but increases the computation time.
  Type: uint32 | Supported values: Any value | Default: 16

output_image1 (output): The first output image. Its dimensions and data type are determined by the model.
  Type: image | Default: None

output_image2 (output): The second output image. Its dimensions and data type are determined by the model.
  Type: image | Default: None
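The tile size constraint above can be checked in a few lines. The helper below is hypothetical, not part of the ImageDev API; it only illustrates the multiple-of-2^n rule for a model with a given number of downsampling or upsampling levels.

# Hypothetical helper: checks the tile size constraint described above.
def is_valid_tile_size(tile_size, num_resampling_levels):
    # Each tile dimension must be a multiple of 2 ** num_resampling_levels.
    factor = 2 ** num_resampling_levels
    return all(dim % factor == 0 for dim in tile_size)

# A U-Net-like model with 4 downsampling levels requires multiples of 16.
print(is_valid_tile_size([128, 128, 128], 4))  # True
print(is_valid_tile_size([100, 128, 128], 4))  # False: 100 % 16 != 0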
inputImage (input): The input image. It can be a grayscale or color image, depending on the selected model.
  Type: Image | Supported values: Binary, Label, Grayscale or Multispectral | Default: null

inputOnnxModel (input): The in-memory ONNX model. It must be loaded with the ReadOnnxModel command.
  Type: OnnxModel | Default: null

dataFormat (input): The tensor layout expected as input by the model. The input image is automatically converted to this layout by the algorithm.
  NDHWC: The layout is organized with interlaced channels. For instance, if the input is a color image, each pixel presents its RGB components successively.
  NCDHW: The layout is organized with separate channels. Each channel is an individual plane.
  Type: Enumeration | Default: NDHWC

inputNormalizationType (input): The type of normalization to apply before computing the prediction. It is recommended to apply the same pre-processing as during the training.
  NONE: No normalization is applied before executing the prediction.
  STANDARDIZATION: A normalization is applied by subtracting the mean and dividing by the standard deviation.
  MIN_MAX: A normalization is applied by subtracting the minimum and dividing by the data range.
  Type: Enumeration | Default: STANDARDIZATION

normalizationRange (input): The data range in which the input image is normalized before computing the prediction. It is recommended to apply the same pre-processing as during the training. This parameter is ignored if the normalization type is set to NONE.
  Type: Vector2d | Supported values: Any value | Default: {0f, 1f}

normalizationScope (input): The scope for computing normalization (mean, standard deviation, minimum, or maximum). This parameter is ignored if the normalization type is set to NONE (see the normalization sketch after this table).
  GLOBAL: The normalization is applied globally on the input batch.
  PER_VOLUME: The normalization is applied individually on each image of the input batch.
  Type: Enumeration | Default: GLOBAL

tileSize (input): The width, height, and depth in pixels of the sliding window. This size includes the user-defined tile overlap. It must be a multiple of 2 to the power of the number of downsampling or upsampling layers. Guidelines to select an appropriate tile size are available in the Tiling section.
  Type: Vector3u32 | Supported values: != 0 | Default: {128, 128, 128}

tileOverlap (input): The number of pixels used as overlap between the tiles. An overlap of zero may lead to artifacts in the prediction result. A non-zero overlap reduces such artifacts but increases the computation time.
  Type: UInt32 | Supported values: Any value | Default: 16

outputImage1 (output): The first output image. Its dimensions and data type are determined by the model.
  Type: Image | Default: null

outputImage2 (output): The second output image. Its dimensions and data type are determined by the model.
  Type: Image | Default: null
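The normalization choices can be summarized with a small NumPy sketch. This is an interpretation of the parameter descriptions above rather than ImageDev code; in particular, the mapping onto the normalization range is shown for MIN_MAX only, as its interaction with STANDARDIZATION is not detailed here.

import numpy as np

def standardize(volume):
    # STANDARDIZATION: subtract the mean, divide by the standard deviation.
    return (volume - volume.mean()) / volume.std()

def min_max(volume, low=0.0, high=1.0):
    # MIN_MAX: subtract the minimum, divide by the data range, then map
    # the result onto the normalization range [low, high].
    scaled = (volume - volume.min()) / (volume.max() - volume.min())
    return low + scaled * (high - low)

batch = np.random.rand(2, 32, 32, 32).astype(np.float32)

# GLOBAL scope: statistics computed over the whole batch at once.
global_norm = standardize(batch)

# PER_VOLUME scope: statistics computed individually per image of the batch.
per_volume_norm = np.stack([standardize(v) for v in batch])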

Object Examples

auto grayscale_images = readVipImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "grayscale_images.vip" );
OnnxModel::Ptr model3d_2outputs = OnnxModel::read( std::string( IMAGEDEVDATA_OBJECTS_FOLDER ) + "model3d-2outputs.onnx" );

OnnxPredictionTwoOutputs3d onnxPredictionTwoOutputs3dAlgo;
onnxPredictionTwoOutputs3dAlgo.setInputImage( grayscale_images );
onnxPredictionTwoOutputs3dAlgo.setInputOnnxModel( model3d_2outputs );
onnxPredictionTwoOutputs3dAlgo.setDataFormat( OnnxPredictionTwoOutputs3d::DataFormat::NDHWC );
onnxPredictionTwoOutputs3dAlgo.setInputNormalizationType( OnnxPredictionTwoOutputs3d::InputNormalizationType::NONE );
onnxPredictionTwoOutputs3dAlgo.setNormalizationRange( {0, 1} );
onnxPredictionTwoOutputs3dAlgo.setNormalizationScope( OnnxPredictionTwoOutputs3d::NormalizationScope::GLOBAL );
onnxPredictionTwoOutputs3dAlgo.setTileSize( {128, 128, 128} );
onnxPredictionTwoOutputs3dAlgo.setTileOverlap( 16 );
onnxPredictionTwoOutputs3dAlgo.execute();

std::cout << "outputImage1:" << onnxPredictionTwoOutputs3dAlgo.outputImage1()->toString();
std::cout << "outputImage2:" << onnxPredictionTwoOutputs3dAlgo.outputImage2()->toString();
grayscale_images = imagedev.read_vip_image(imagedev_data.get_image_path("grayscale_images.vip"))
model_3d_2outputs = imagedev.OnnxModel.read(imagedev_data.get_object_path("model3d-2outputs.onnx"))

onnx_prediction_two_outputs_3d_algo = imagedev.OnnxPredictionTwoOutputs3d()
onnx_prediction_two_outputs_3d_algo.input_image = grayscale_images
onnx_prediction_two_outputs_3d_algo.input_onnx_model = model_3d_2outputs
onnx_prediction_two_outputs_3d_algo.data_format = imagedev.OnnxPredictionTwoOutputs3d.NDHWC
onnx_prediction_two_outputs_3d_algo.input_normalization_type = imagedev.OnnxPredictionTwoOutputs3d.NONE
onnx_prediction_two_outputs_3d_algo.normalization_range = [0, 1]
onnx_prediction_two_outputs_3d_algo.normalization_scope = imagedev.OnnxPredictionTwoOutputs3d.GLOBAL
onnx_prediction_two_outputs_3d_algo.tile_size = [128, 128, 128]
onnx_prediction_two_outputs_3d_algo.tile_overlap = 16
onnx_prediction_two_outputs_3d_algo.execute()

print("output_image1:", str(onnx_prediction_two_outputs_3d_algo.output_image1))
print("output_image2:", str(onnx_prediction_two_outputs_3d_algo.output_image2))
ImageView grayscale_images = Data.ReadVipImage( @"Data/images/grayscale_images.vip" );
OnnxModel model3d_2outputs = OnnxModel.Read( @"Data/objects/model3d-2outputs.onnx" );

OnnxPredictionTwoOutputs3d onnxPredictionTwoOutputs3dAlgo = new OnnxPredictionTwoOutputs3d
{
    inputImage = grayscale_images,
    inputOnnxModel = model3d_2outputs,
    dataFormat = OnnxPredictionTwoOutputs3d.DataFormat.NDHWC,
    inputNormalizationType = OnnxPredictionTwoOutputs3d.InputNormalizationType.NONE,
    normalizationRange = new double[]{0, 1},
    normalizationScope = OnnxPredictionTwoOutputs3d.NormalizationScope.GLOBAL,
    tileSize = new uint[]{128, 128, 128},
    tileOverlap = 16
};
onnxPredictionTwoOutputs3dAlgo.Execute();

Console.WriteLine( "outputImage1:" + onnxPredictionTwoOutputs3dAlgo.outputImage1.ToString() );
Console.WriteLine( "outputImage2:" + onnxPredictionTwoOutputs3dAlgo.outputImage2.ToString() );

Function Examples

auto grayscale_images = readVipImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "grayscale_images.vip" );
OnnxModel::Ptr model3d_2outputs = OnnxModel::read( std::string( IMAGEDEVDATA_OBJECTS_FOLDER ) + "model3d-2outputs.onnx" );

auto result = onnxPredictionTwoOutputs3d( grayscale_images, model3d_2outputs, OnnxPredictionTwoOutputs3d::DataFormat::NDHWC, OnnxPredictionTwoOutputs3d::InputNormalizationType::NONE, {0, 1}, OnnxPredictionTwoOutputs3d::NormalizationScope::GLOBAL, {128, 128, 128}, 16 );

std::cout << "outputImage1:" << result.outputImage1->toString();
std::cout << "outputImage2:" << result.outputImage2->toString();
grayscale_images = imagedev.read_vip_image(imagedev_data.get_image_path("grayscale_images.vip"))
model_3d_2outputs = imagedev.OnnxModel.read(imagedev_data.get_object_path("model3d-2outputs.onnx"))

result_output_image1, result_output_image2 = imagedev.onnx_prediction_two_outputs_3d(grayscale_images, model_3d_2outputs, imagedev.OnnxPredictionTwoOutputs3d.NDHWC, imagedev.OnnxPredictionTwoOutputs3d.NONE, [0, 1], imagedev.OnnxPredictionTwoOutputs3d.GLOBAL, [128, 128, 128], 16)

print("output_image1:", str(result_output_image1))
print("output_image2:", str(result_output_image2))
ImageView grayscale_images = Data.ReadVipImage( @"Data/images/grayscale_images.vip" );
OnnxModel model3d_2outputs = OnnxModel.Read( @"Data/objects/model3d-2outputs.onnx" );

Processing.OnnxPredictionTwoOutputs3dOutput result = Processing.OnnxPredictionTwoOutputs3d( grayscale_images, model3d_2outputs, OnnxPredictionTwoOutputs3d.DataFormat.NDHWC, OnnxPredictionTwoOutputs3d.InputNormalizationType.NONE, new double[]{0, 1}, OnnxPredictionTwoOutputs3d.NormalizationScope.GLOBAL, new uint[]{128, 128, 128}, 16 );

Console.WriteLine( "outputImage1:" + result.outputImage1.ToString() );
Console.WriteLine( "outputImage2:" + result.outputImage2.ToString() );
model3d_2outputs.Dispose();

© 2025 Thermo Fisher Scientific Inc. All rights reserved.