Post-processing to generate a binary or label output
If the prediction output contains only one channel, the prediction score, normalized between 0 and 1, is thresholded: scores lower than 0.5 are considered background and set to 0, and all other scores are set to 1. The output image has a binary interpretation in this case.
If the prediction output contains two or more channels, each pixel of the output image is assigned the index of the channel with the highest score. The first class, represented by the channel of index 0, is considered background. The output image has a label interpretation in this case.
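The post-processing rule above can be sketched in NumPy terms (a minimal illustration of the thresholding and arg-max steps, not the software's actual implementation; the function name is hypothetical):

```python
import numpy as np

def postprocess(prediction):
    """Turn a prediction into a binary or label image.

    prediction: float array of shape (H, W, C) with scores normalized in [0, 1].
    """
    if prediction.shape[-1] == 1:
        # Single channel: threshold at 0.5 -> binary interpretation.
        return (prediction[..., 0] >= 0.5).astype(np.uint8)
    # Two or more channels: pick the highest-scoring channel per pixel.
    # Channel 0 is the background class -> label interpretation.
    return np.argmax(prediction, axis=-1).astype(np.uint8)

# Example: a 3-class prediction for a 1x2 image.
scores = np.array([[[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]])
print(postprocess(scores))  # [[0 2]]
```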
Figure 1. Membrane segmentation by deep learning prediction with a U-Net model trained with the Avizo software.
This image is shown courtesy of Albert Cardona. It is from a set of serial section TEM images of Drosophila brain tissue described in:
A. Cardona et al., "An Integrated Micro- and Macroarchitectural Analysis of the Drosophila Brain by Computer-Assisted Serial Section Electron Microscopy," PLoS Biology 8(10), DOI: 10.1371/journal.pbio.1000502, Oct. 2010.
// Command constructor.
OnnxPredictionSegmentation2d();

/// Gets the inputImage parameter.
/// The input image. It can be a grayscale or color image, depending on the selected model.
std::shared_ptr<iolink::ImageView> inputImage() const;

/// Sets the inputImage parameter.
/// The input image. It can be a grayscale or color image, depending on the selected model.
void setInputImage( std::shared_ptr<iolink::ImageView> inputImage );

/// Gets the modelPath parameter.
/// The path to the ONNX model file.
std::string modelPath() const;

/// Sets the modelPath parameter.
/// The path to the ONNX model file.
void setModelPath( const std::string& modelPath );

/// Gets the dataFormat parameter.
/// The tensor layout expected as input by the model. The input image is automatically converted to this layout by the algorithm.
OnnxPredictionSegmentation2d::DataFormat dataFormat() const;

/// Sets the dataFormat parameter.
/// The tensor layout expected as input by the model. The input image is automatically converted to this layout by the algorithm.
void setDataFormat( const OnnxPredictionSegmentation2d::DataFormat& dataFormat );

/// Gets the inputNormalizationType parameter.
/// The type of normalization to apply before computing the prediction. It is recommended to apply the same pre-processing as during the training.
OnnxPredictionSegmentation2d::InputNormalizationType inputNormalizationType() const;

/// Sets the inputNormalizationType parameter.
/// The type of normalization to apply before computing the prediction. It is recommended to apply the same pre-processing as during the training.
void setInputNormalizationType( const OnnxPredictionSegmentation2d::InputNormalizationType& inputNormalizationType );

/// Gets the normalizationRange parameter.
/// The data range in which the input image is normalized before computing the prediction. It is recommended to apply the same pre-processing as during the training. This parameter is ignored if the normalization type is set to NONE.
iolink::Vector2d normalizationRange() const;

/// Sets the normalizationRange parameter.
/// The data range in which the input image is normalized before computing the prediction. It is recommended to apply the same pre-processing as during the training. This parameter is ignored if the normalization type is set to NONE.
void setNormalizationRange( const iolink::Vector2d& normalizationRange );

/// Gets the normalizationScope parameter.
/// The scope for computing normalization (mean, standard deviation, minimum or maximum). This parameter is ignored if the normalization type is set to NONE.
OnnxPredictionSegmentation2d::NormalizationScope normalizationScope() const;

/// Sets the normalizationScope parameter.
/// The scope for computing normalization (mean, standard deviation, minimum or maximum). This parameter is ignored if the normalization type is set to NONE.
void setNormalizationScope( const OnnxPredictionSegmentation2d::NormalizationScope& normalizationScope );

/// Gets the tileSize parameter.
/// The width and height in pixels of the sliding window. This size includes the user-defined tile overlap. It must be a multiple of 2 to the power of the number of downsampling or upsampling layers.
iolink::Vector2u32 tileSize() const;

/// Sets the tileSize parameter.
/// The width and height in pixels of the sliding window. This size includes the user-defined tile overlap. It must be a multiple of 2 to the power of the number of downsampling or upsampling layers.
void setTileSize( const iolink::Vector2u32& tileSize );

/// Gets the tileOverlap parameter.
/// The number of pixels used as overlap between the tiles. An overlap of zero may lead to artifacts in the prediction result. A non-zero overlap reduces such artifacts but increases the computation time.
uint32_t tileOverlap() const;

/// Sets the tileOverlap parameter.
/// The number of pixels used as overlap between the tiles. An overlap of zero may lead to artifacts in the prediction result. A non-zero overlap reduces such artifacts but increases the computation time.
void setTileOverlap( const uint32_t& tileOverlap );

/// Gets the outputObjectImage parameter.
/// The output image. Its dimensions and calibration are forced to the same values as the input. Its interpretation is binary if the model produces one channel, label otherwise.
std::shared_ptr<iolink::ImageView> outputObjectImage() const;

/// Sets the outputObjectImage parameter.
/// The output image. Its dimensions and calibration are forced to the same values as the input. Its interpretation is binary if the model produces one channel, label otherwise.
void setOutputObjectImage( std::shared_ptr<iolink::ImageView> outputObjectImage );

// Method to launch the command.
void execute();
# Property of the inputImage parameter.
OnnxPredictionSegmentation2d.input_image

# Property of the modelPath parameter.
OnnxPredictionSegmentation2d.model_path

# Property of the dataFormat parameter.
OnnxPredictionSegmentation2d.data_format

# Property of the inputNormalizationType parameter.
OnnxPredictionSegmentation2d.input_normalization_type

# Property of the normalizationRange parameter.
OnnxPredictionSegmentation2d.normalization_range

# Property of the normalizationScope parameter.
OnnxPredictionSegmentation2d.normalization_scope

# Property of the tileSize parameter.
OnnxPredictionSegmentation2d.tile_size

# Property of the tileOverlap parameter.
OnnxPredictionSegmentation2d.tile_overlap

# Property of the outputObjectImage parameter.
OnnxPredictionSegmentation2d.output_object_image

# Method to launch the command.
execute()
// Command constructor.
OnnxPredictionSegmentation2d()

// Property of the inputImage parameter.
OnnxPredictionSegmentation2d.inputImage

// Property of the modelPath parameter.
OnnxPredictionSegmentation2d.modelPath

// Property of the dataFormat parameter.
OnnxPredictionSegmentation2d.dataFormat

// Property of the inputNormalizationType parameter.
OnnxPredictionSegmentation2d.inputNormalizationType

// Property of the normalizationRange parameter.
OnnxPredictionSegmentation2d.normalizationRange

// Property of the normalizationScope parameter.
OnnxPredictionSegmentation2d.normalizationScope

// Property of the tileSize parameter.
OnnxPredictionSegmentation2d.tileSize

// Property of the tileOverlap parameter.
OnnxPredictionSegmentation2d.tileOverlap

// Property of the outputObjectImage parameter.
OnnxPredictionSegmentation2d.outputObjectImage

// Method to launch the command.
Execute()
Parameters

inputImage
    The input image. It can be a grayscale or color image, depending on the selected model.
    Type: Image
    Supported values: Binary, Label, Grayscale or Multispectral
    Default value: nullptr

modelPath
    The path to the ONNX model file.
    Type: String
    Default value: ""

dataFormat
    The tensor layout expected as input by the model. The input image is automatically converted to this layout by the algorithm.
    - NHWC: The layout is organized with interlaced channels. For instance, if the input is a color image, each pixel presents its RGB components successively.
    - NCHW: The layout is organized with separated channels. Each channel is an individual plane.
    Type: Enumeration
    Default value: NHWC

inputNormalizationType
    The type of normalization to apply before computing the prediction. It is recommended to apply the same pre-processing as during the training.
    - NONE: No normalization is applied before executing the prediction.
    - STANDARDIZATION: A normalization is applied by subtracting the mean and dividing by the standard deviation.
    - MIN_MAX: A normalization is applied by subtracting the minimum and dividing by the data range.
    Type: Enumeration
    Default value: STANDARDIZATION

normalizationRange
    The data range in which the input image is normalized before computing the prediction. It is recommended to apply the same pre-processing as during the training. This parameter is ignored if the normalization type is set to NONE.
    Type: Vector2d
    Supported values: Any value
    Default value: {0.f, 1.f}

normalizationScope
    The scope for computing normalization (mean, standard deviation, minimum or maximum). This parameter is ignored if the normalization type is set to NONE.
    - GLOBAL: The normalization is applied globally on the input batch.
    - PER_SLICE: The normalization is applied individually on each image of the input batch.
    Type: Enumeration
    Default value: GLOBAL

tileSize
    The width and height in pixels of the sliding window. This size includes the user-defined tile overlap. It must be a multiple of 2 to the power of the number of downsampling or upsampling layers. Guidelines to select an appropriate tile size are available in the Tiling section.
    Type: Vector2u32
    Supported values: != 0
    Default value: {256, 256}

tileOverlap
    The number of pixels used as overlap between the tiles. An overlap of zero may lead to artifacts in the prediction result. A non-zero overlap reduces such artifacts but increases the computation time.
    Type: UInt32
    Supported values: Any value
    Default value: 32

outputObjectImage
    The output image. Its dimensions and calibration are forced to the same values as the input. Its interpretation is binary if the model produces one channel, label otherwise.
    Type: Image
    Default value: nullptr
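The tileSize/tileOverlap interaction can be illustrated with a small sketch of how sliding-window start offsets might be laid out (a simplified illustration of the general tiling technique; the software's exact tile placement and blending are not specified here, and the function name is hypothetical):

```python
def tile_starts(length, tile, overlap):
    """Start offsets of tiles of size `tile` covering `length` pixels,
    with consecutive tiles sharing `overlap` pixels."""
    step = tile - overlap                       # each tile advances by tile - overlap
    starts = list(range(0, max(length - tile, 0) + 1, step))
    if starts[-1] + tile < length:              # ensure the last tile reaches the border
        starts.append(length - tile)
    return starts

# A 256-pixel tile with a 32-pixel overlap advances by 224 pixels.
print(tile_starts(600, 256, 32))  # [0, 224, 344]
```

With a zero overlap the tiles would abut exactly, which is what can produce seam artifacts at tile borders.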
input_image
    The input image. It can be a grayscale or color image, depending on the selected model.
    Type: image
    Supported values: Binary, Label, Grayscale or Multispectral
    Default value: None

model_path
    The path to the ONNX model file.
    Type: string
    Default value: ""

data_format
    The tensor layout expected as input by the model. The input image is automatically converted to this layout by the algorithm.
    - NHWC: The layout is organized with interlaced channels. For instance, if the input is a color image, each pixel presents its RGB components successively.
    - NCHW: The layout is organized with separated channels. Each channel is an individual plane.
    Type: enumeration
    Default value: NHWC

input_normalization_type
    The type of normalization to apply before computing the prediction. It is recommended to apply the same pre-processing as during the training.
    - NONE: No normalization is applied before executing the prediction.
    - STANDARDIZATION: A normalization is applied by subtracting the mean and dividing by the standard deviation.
    - MIN_MAX: A normalization is applied by subtracting the minimum and dividing by the data range.
    Type: enumeration
    Default value: STANDARDIZATION

normalization_range
    The data range in which the input image is normalized before computing the prediction. It is recommended to apply the same pre-processing as during the training. This parameter is ignored if the normalization type is set to NONE.
    Type: vector2d
    Supported values: Any value
    Default value: [0, 1]

normalization_scope
    The scope for computing normalization (mean, standard deviation, minimum or maximum). This parameter is ignored if the normalization type is set to NONE.
    - GLOBAL: The normalization is applied globally on the input batch.
    - PER_SLICE: The normalization is applied individually on each image of the input batch.
    Type: enumeration
    Default value: GLOBAL

tile_size
    The width and height in pixels of the sliding window. This size includes the user-defined tile overlap. It must be a multiple of 2 to the power of the number of downsampling or upsampling layers. Guidelines to select an appropriate tile size are available in the Tiling section.
    Type: vector2u32
    Supported values: != 0
    Default value: [256, 256]

tile_overlap
    The number of pixels used as overlap between the tiles. An overlap of zero may lead to artifacts in the prediction result. A non-zero overlap reduces such artifacts but increases the computation time.
    Type: uint32
    Supported values: Any value
    Default value: 32

output_object_image
    The output image. Its dimensions and calibration are forced to the same values as the input. Its interpretation is binary if the model produces one channel, label otherwise.
    Type: image
    Default value: None
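The two normalization types can be sketched as follows (a minimal NumPy illustration of standardization and min-max scaling with a GLOBAL scope; not the software's actual code, and the function name is hypothetical):

```python
import numpy as np

def normalize(image, mode="STANDARDIZATION", out_range=(0.0, 1.0)):
    """Normalize an image globally, mimicking the two normalization types."""
    image = image.astype(np.float64)
    if mode == "STANDARDIZATION":
        # Subtract the mean and divide by the standard deviation.
        return (image - image.mean()) / image.std()
    if mode == "MIN_MAX":
        # Subtract the minimum, divide by the data range, then map to out_range.
        lo, hi = out_range
        scaled = (image - image.min()) / (image.max() - image.min())
        return lo + scaled * (hi - lo)
    return image  # NONE: no normalization

img = np.array([[0.0, 50.0], [100.0, 150.0]])
print(normalize(img, "MIN_MAX"))  # values mapped into [0, 1]
```

A PER_SLICE scope would apply the same computation to each image of the batch separately instead of over the whole batch at once.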
inputImage
    The input image. It can be a grayscale or color image, depending on the selected model.
    Type: Image
    Supported values: Binary, Label, Grayscale or Multispectral
    Default value: null

modelPath
    The path to the ONNX model file.
    Type: String
    Default value: ""

dataFormat
    The tensor layout expected as input by the model. The input image is automatically converted to this layout by the algorithm.
    - NHWC: The layout is organized with interlaced channels. For instance, if the input is a color image, each pixel presents its RGB components successively.
    - NCHW: The layout is organized with separated channels. Each channel is an individual plane.
    Type: Enumeration
    Default value: NHWC

inputNormalizationType
    The type of normalization to apply before computing the prediction. It is recommended to apply the same pre-processing as during the training.
    - NONE: No normalization is applied before executing the prediction.
    - STANDARDIZATION: A normalization is applied by subtracting the mean and dividing by the standard deviation.
    - MIN_MAX: A normalization is applied by subtracting the minimum and dividing by the data range.
    Type: Enumeration
    Default value: STANDARDIZATION

normalizationRange
    The data range in which the input image is normalized before computing the prediction. It is recommended to apply the same pre-processing as during the training. This parameter is ignored if the normalization type is set to NONE.
    Type: Vector2d
    Supported values: Any value
    Default value: {0f, 1f}

normalizationScope
    The scope for computing normalization (mean, standard deviation, minimum or maximum). This parameter is ignored if the normalization type is set to NONE.
    - GLOBAL: The normalization is applied globally on the input batch.
    - PER_SLICE: The normalization is applied individually on each image of the input batch.
    Type: Enumeration
    Default value: GLOBAL

tileSize
    The width and height in pixels of the sliding window. This size includes the user-defined tile overlap. It must be a multiple of 2 to the power of the number of downsampling or upsampling layers. Guidelines to select an appropriate tile size are available in the Tiling section.
    Type: Vector2u32
    Supported values: != 0
    Default value: {256, 256}

tileOverlap
    The number of pixels used as overlap between the tiles. An overlap of zero may lead to artifacts in the prediction result. A non-zero overlap reduces such artifacts but increases the computation time.
    Type: UInt32
    Supported values: Any value
    Default value: 32

outputObjectImage
    The output image. Its dimensions and calibration are forced to the same values as the input. Its interpretation is binary if the model produces one channel, label otherwise.
    Type: Image
    Default value: null
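The NHWC/NCHW distinction described for the dataFormat parameter amounts to a transpose of the channel axis, which can be shown with a generic NumPy sketch (independent of this API):

```python
import numpy as np

# A batch of 2 RGB images of 4x5 pixels in NHWC (interlaced-channel) layout.
nhwc = np.zeros((2, 4, 5, 3))

# NCHW (separated-channel) layout: each channel becomes an individual plane.
nchw = np.transpose(nhwc, (0, 3, 1, 2))
print(nchw.shape)  # (2, 3, 4, 5)
```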