ImageDev

OnnxMetadataBasedPrediction

Computes a prediction on a two-dimensional or three-dimensional image from an ONNX model.


Depending on the inference type defined during the model training and embedded in its metadata, it generates either a single image (prediction scores, label, or binary) or two images.
For an overview, please refer to the Deep Learning section.
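The mapping between the inference type stored in the model metadata and the number of produced images can be sketched as follows. This is a minimal illustration, not the ImageDev API: only "stardist" is documented here as producing two images, and the other type name used below is a placeholder.

```python
# Hypothetical sketch: how many output images a prediction yields,
# based on the inference type embedded in the ONNX model metadata.
# Only "stardist" is documented to produce two images; any other
# inference type produces a single image (scores, label, or binary).
def expected_output_count(inference_type: str) -> int:
    """Return the number of images produced for a given inference type."""
    return 2 if inference_type == "stardist" else 1

print(expected_output_count("stardist"))     # 2
print(expected_output_count("regression"))   # 1 (placeholder type name)
```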

Function Syntax

This function returns an OnnxMetadataBasedPredictionOutput structure containing outputImage1 and outputImage2.
// Output structure of the onnxMetadataBasedPrediction function.
struct OnnxMetadataBasedPredictionOutput final
{
    /// The first output image. Its spatial dimensions and calibration are forced to the same values as the input. Its number of channels depends on the selected model. Its type depends on the selected output type.
    std::shared_ptr< iolink::ImageView > outputImage1;
    /// The second output image. Its spatial dimensions and calibration are forced to the same values as the input. Its number of channels depends on the selected model. Its type depends on the selected output type. It is ignored if the inference type contained in the metadata is different from "stardist".
    std::shared_ptr< iolink::ImageView > outputImage2;
};

// Function prototype
OnnxMetadataBasedPredictionOutput onnxMetadataBasedPrediction( std::shared_ptr< iolink::ImageView > inputImage, OnnxModel::Ptr inputOnnxModel, OnnxMetadataBasedPrediction::TilingMode tilingMode, const iolink::Vector3u32& tileSize, std::shared_ptr< iolink::ImageView > outputImage1 = nullptr, std::shared_ptr< iolink::ImageView > outputImage2 = nullptr );
This function returns a tuple containing output_image1 and output_image2.
// Function prototype.
onnx_metadata_based_prediction(input_image: idt.ImageType,
                               input_onnx_model: Union[Any, None] = None,
                               tiling_mode: Union[Literal["MINIMUM"],Literal["MAXIMUM"],Literal["USER_DEFINED"],OnnxMetadataBasedPrediction.TilingMode] = OnnxMetadataBasedPrediction.TilingMode.MINIMUM,
                               tile_size: Iterable[int] = [0, 0, 0],
                               output_image1: idt.ImageType = None,
                               output_image2: idt.ImageType = None) -> Tuple[idt.ImageType, idt.ImageType]
This function returns an OnnxMetadataBasedPredictionOutput structure containing outputImage1 and outputImage2.
/// Output structure of the OnnxMetadataBasedPrediction function.
public struct OnnxMetadataBasedPredictionOutput
{
    /// 
    /// The first output image. Its spatial dimensions and calibration are forced to the same values as the input. Its number of channels depends on the selected model. Its type depends on the selected output type.
    /// 
    public IOLink.ImageView outputImage1;
    /// 
    /// The second output image. Its spatial dimensions and calibration are forced to the same values as the input. Its number of channels depends on the selected model. Its type depends on the selected output type. It is ignored if the inference type contained in the metadata is different from "stardist".
    /// 
    public IOLink.ImageView outputImage2;
};

// Function prototype.
public static OnnxMetadataBasedPredictionOutput
OnnxMetadataBasedPrediction( IOLink.ImageView inputImage,
                             OnnxModel inputOnnxModel = null,
                             OnnxMetadataBasedPrediction.TilingMode tilingMode = ImageDev.OnnxMetadataBasedPrediction.TilingMode.MINIMUM,
                             uint[] tileSize = null,
                             IOLink.ImageView outputImage1 = null,
                             IOLink.ImageView outputImage2 = null );

Class Syntax

Parameters

input inputImage
    The input image. It can be a grayscale or color image, depending on the selected model.
    Type: Image. Supported values: Binary, Label, Grayscale, or Multispectral. Default value: nullptr.
input inputOnnxModel
    The in-memory ONNX model. It must be loaded with the ReadOnnxModel command and must contain training information metadata.
    Type: OnnxModel. Default value: nullptr.
input tilingMode
    The way to select the size of the tiles on which the prediction is performed.
        MINIMUM: The smallest tile size compatible with the model is used.
        MAXIMUM: The largest tile size compatible with the model is used.
        USER_DEFINED: The closest tile size that is smaller than the input tileSize parameter and compatible with the model is used.
    Type: Enumeration. Default value: MINIMUM.
input tileSize
    The target tile size when the tiling mode is user defined. This size is automatically rounded to the closest smaller tile size compatible with the model. The third element is ignored when the input is a 2D image.
    Type: Vector3u32. Supported values: Any value. Default value: {0, 0, 0}.
output outputImage1
    The first output image. Its spatial dimensions and calibration are forced to the same values as the input. Its number of channels depends on the selected model. Its type depends on the selected output type.
    Type: Image. Default value: nullptr.
output outputImage2
    The second output image. Its spatial dimensions and calibration are forced to the same values as the input. Its number of channels depends on the selected model. Its type depends on the selected output type. It is ignored if the inference type contained in the metadata is different from "stardist".
    Type: Image. Default value: nullptr.
input input_image
    The input image. It can be a grayscale or color image, depending on the selected model.
    Type: image. Supported values: Binary, Label, Grayscale, or Multispectral. Default value: None.
input input_onnx_model
    The in-memory ONNX model. It must be loaded with the ReadOnnxModel command and must contain training information metadata.
    Type: OnnxModel. Default value: None.
input tiling_mode
    The way to select the size of the tiles on which the prediction is performed.
        MINIMUM: The smallest tile size compatible with the model is used.
        MAXIMUM: The largest tile size compatible with the model is used.
        USER_DEFINED: The closest tile size that is smaller than the input tile_size parameter and compatible with the model is used.
    Type: enumeration. Default value: MINIMUM.
input tile_size
    The target tile size when the tiling mode is user defined. This size is automatically rounded to the closest smaller tile size compatible with the model. The third element is ignored when the input is a 2D image.
    Type: vector3u32. Supported values: Any value. Default value: [0, 0, 0].
output output_image1
    The first output image. Its spatial dimensions and calibration are forced to the same values as the input. Its number of channels depends on the selected model. Its type depends on the selected output type.
    Type: image. Default value: None.
output output_image2
    The second output image. Its spatial dimensions and calibration are forced to the same values as the input. Its number of channels depends on the selected model. Its type depends on the selected output type. It is ignored if the inference type contained in the metadata is different from "stardist".
    Type: image. Default value: None.
input inputImage
    The input image. It can be a grayscale or color image, depending on the selected model.
    Type: Image. Supported values: Binary, Label, Grayscale, or Multispectral. Default value: null.
input inputOnnxModel
    The in-memory ONNX model. It must be loaded with the ReadOnnxModel command and must contain training information metadata.
    Type: OnnxModel. Default value: null.
input tilingMode
    The way to select the size of the tiles on which the prediction is performed.
        MINIMUM: The smallest tile size compatible with the model is used.
        MAXIMUM: The largest tile size compatible with the model is used.
        USER_DEFINED: The closest tile size that is smaller than the input tileSize parameter and compatible with the model is used.
    Type: Enumeration. Default value: MINIMUM.
input tileSize
    The target tile size when the tiling mode is user defined. This size is automatically rounded to the closest smaller tile size compatible with the model. The third element is ignored when the input is a 2D image.
    Type: Vector3u32. Supported values: Any value. Default value: {0, 0, 0}.
output outputImage1
    The first output image. Its spatial dimensions and calibration are forced to the same values as the input. Its number of channels depends on the selected model. Its type depends on the selected output type.
    Type: Image. Default value: null.
output outputImage2
    The second output image. Its spatial dimensions and calibration are forced to the same values as the input. Its number of channels depends on the selected model. Its type depends on the selected output type. It is ignored if the inference type contained in the metadata is different from "stardist".
    Type: Image. Default value: null.
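With USER_DEFINED tiling, the requested tile size is rounded down per axis to the closest size compatible with the model. The actual set of compatible sizes is derived from the model architecture; as an illustration only, assuming compatible sizes per axis are minimum + k * step (a hypothetical constraint, not the ImageDev rule), the rounding can be sketched as:

```python
# Hypothetical sketch of per-axis tile-size rounding for USER_DEFINED tiling.
# Assumes compatible sizes per axis are minimum + k * step (illustrative only;
# the real constraint is read from the ONNX model's architecture).
def round_tile_axis(requested: int, minimum: int, step: int) -> int:
    """Round a requested tile size down to the closest compatible size."""
    if requested <= minimum:
        return minimum
    return minimum + ((requested - minimum) // step) * step

def round_tile_size(requested, minimum=(64, 64, 64), step=(32, 32, 32)):
    """Apply the per-axis rounding to a 3-element tile size."""
    return tuple(round_tile_axis(r, m, s)
                 for r, m, s in zip(requested, minimum, step))

print(round_tile_size((150, 128, 70)))  # (128, 128, 64)
```

For a 2D input the third element would simply be ignored, matching the tileSize parameter description above.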

Object Examples

auto autorad = ioformat::readImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "autorad.tif" );
OnnxModel::Ptr noise2noise_with_metadata = OnnxModel::read( std::string( IMAGEDEVDATA_OBJECTS_FOLDER ) + "noise2noise_with_metadata.onnx" );

OnnxMetadataBasedPrediction onnxMetadataBasedPredictionAlgo;
onnxMetadataBasedPredictionAlgo.setInputImage( autorad );
onnxMetadataBasedPredictionAlgo.setInputOnnxModel( noise2noise_with_metadata );
onnxMetadataBasedPredictionAlgo.setTilingMode( OnnxMetadataBasedPrediction::TilingMode::MINIMUM );
onnxMetadataBasedPredictionAlgo.setTileSize( {128, 128, 128} );
onnxMetadataBasedPredictionAlgo.execute();

std::cout << "outputImage1:" << onnxMetadataBasedPredictionAlgo.outputImage1()->toString();
std::cout << "outputImage2:" << onnxMetadataBasedPredictionAlgo.outputImage2()->toString();
autorad = ioformat.read_image(imagedev_data.get_image_path("autorad.tif"))
noise2noise_with_metadata = imagedev.OnnxModel.read(imagedev_data.get_object_path("noise2noise_with_metadata.onnx"))

onnx_metadata_based_prediction_algo = imagedev.OnnxMetadataBasedPrediction()
onnx_metadata_based_prediction_algo.input_image = autorad
onnx_metadata_based_prediction_algo.input_onnx_model = noise2noise_with_metadata
onnx_metadata_based_prediction_algo.tiling_mode = imagedev.OnnxMetadataBasedPrediction.MINIMUM
onnx_metadata_based_prediction_algo.tile_size = [128, 128, 128]
onnx_metadata_based_prediction_algo.execute()

print("output_image1:", str(onnx_metadata_based_prediction_algo.output_image1))
print("output_image2:", str(onnx_metadata_based_prediction_algo.output_image2))
ImageView autorad = ViewIO.ReadImage( @"Data/images/autorad.tif" );
OnnxModel noise2noise_with_metadata = OnnxModel.Read( @"Data/objects/noise2noise_with_metadata.onnx" );

OnnxMetadataBasedPrediction onnxMetadataBasedPredictionAlgo = new OnnxMetadataBasedPrediction
{
    inputImage = autorad,
    inputOnnxModel = noise2noise_with_metadata,
    tilingMode = OnnxMetadataBasedPrediction.TilingMode.MINIMUM,
    tileSize = new uint[]{128, 128, 128}
};
onnxMetadataBasedPredictionAlgo.Execute();

Console.WriteLine( "outputImage1:" + onnxMetadataBasedPredictionAlgo.outputImage1.ToString() );
Console.WriteLine( "outputImage2:" + onnxMetadataBasedPredictionAlgo.outputImage2.ToString() );

Function Examples

auto autorad = ioformat::readImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "autorad.tif" );
OnnxModel::Ptr noise2noise_with_metadata = OnnxModel::read( std::string( IMAGEDEVDATA_OBJECTS_FOLDER ) + "noise2noise_with_metadata.onnx" );

auto result = onnxMetadataBasedPrediction( autorad, noise2noise_with_metadata, OnnxMetadataBasedPrediction::TilingMode::MINIMUM, {128, 128, 128} );

std::cout << "outputImage1:" << result.outputImage1->toString();
std::cout << "outputImage2:" << result.outputImage2->toString();
autorad = ioformat.read_image(imagedev_data.get_image_path("autorad.tif"))
noise2noise_with_metadata = imagedev.OnnxModel.read(imagedev_data.get_object_path("noise2noise_with_metadata.onnx"))

result_output_image1, result_output_image2 = imagedev.onnx_metadata_based_prediction(autorad, noise2noise_with_metadata, imagedev.OnnxMetadataBasedPrediction.MINIMUM, [128, 128, 128])

print("output_image1:", str(result_output_image1))
print("output_image2:", str(result_output_image2))
ImageView autorad = ViewIO.ReadImage( @"Data/images/autorad.tif" );
OnnxModel noise2noise_with_metadata = OnnxModel.Read( @"Data/objects/noise2noise_with_metadata.onnx" );

Processing.OnnxMetadataBasedPredictionOutput result = Processing.OnnxMetadataBasedPrediction( autorad, noise2noise_with_metadata, OnnxMetadataBasedPrediction.TilingMode.MINIMUM, new uint[]{128, 128, 128} );

Console.WriteLine( "outputImage1:" + result.outputImage1.ToString() );
Console.WriteLine( "outputImage2:" + result.outputImage2.ToString() );
noise2noise_with_metadata.Dispose();



© 2026 Thermo Fisher Scientific Inc. All rights reserved.