OnnxPredictionFiltering2d
Computes a prediction on a two-dimensional image from an ONNX model and generates an image representing the prediction scores.
Access to parameter description
For an overview, please refer to the Deep Learning section.
This algorithm produces an image containing the raw prediction scores given by the model. Depending on how the model has been trained, it can be used either for image filtering, or for segmentation by applying an appropriate post-processing step afterwards.
Figure 1. Image denoising by deep learning prediction with a noise-to-noise model.
The following steps are applied:
- Pre-processing
- Prediction
- Post-processing to optionally restore the prediction result into the input data range
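As a rough illustration of the pre-processing step, the sketch below normalizes an image either by standardization (zero mean, unit variance, as with the STANDARDIZATION type) or by linear rescaling into a target range. The helper name and the MIN_MAX label are illustrative assumptions, not the library API.

```python
import numpy as np

def normalize_input(image, normalization_type="STANDARDIZATION",
                    normalization_range=(0.0, 1.0)):
    """Hypothetical pre-processing helper: normalize an image before
    it is fed to the model. Not part of the ImageDev API."""
    data = np.asarray(image, dtype=np.float64)
    if normalization_type == "NONE":
        # No normalization requested: pass the data through unchanged.
        return data
    if normalization_type == "STANDARDIZATION":
        # Zero mean, unit variance computed over the whole image
        # (corresponding to the GLOBAL normalization scope).
        return (data - data.mean()) / data.std()
    if normalization_type == "MIN_MAX":
        # Linear rescaling into the requested normalization range.
        lo, hi = normalization_range
        dmin, dmax = data.min(), data.max()
        return lo + (data - dmin) * (hi - lo) / (dmax - dmin)
    raise ValueError(f"unknown normalization type: {normalization_type}")
```

After the prediction, the optional output normalization reverses this step to restore the result into the input data range.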
See also
Function Syntax
This function returns outputImage.
// Function prototype
std::shared_ptr< iolink::ImageView > onnxPredictionFiltering2d(
    std::shared_ptr< iolink::ImageView > inputImage,
    std::string modelPath,
    OnnxPredictionFiltering2d::DataFormat dataFormat,
    OnnxPredictionFiltering2d::InputNormalizationType inputNormalizationType,
    iolink::Vector2d normalizationRange,
    OnnxPredictionFiltering2d::NormalizationScope normalizationScope,
    iolink::Vector2u32 tileSize,
    uint32_t tileOverlap,
    OnnxPredictionFiltering2d::OutputNormalizationType outputNormalizationType,
    OnnxPredictionFiltering2d::OutputType outputType,
    std::shared_ptr< iolink::ImageView > outputImage = nullptr );
This function returns output_image.
# Function prototype
onnx_prediction_filtering_2d(
    input_image: idt.ImageType,
    model_path: str = "",
    data_format: OnnxPredictionFiltering2d.DataFormat = OnnxPredictionFiltering2d.DataFormat.NHWC,
    input_normalization_type: OnnxPredictionFiltering2d.InputNormalizationType = OnnxPredictionFiltering2d.InputNormalizationType.STANDARDIZATION,
    normalization_range: Union[Iterable[int], Iterable[float]] = [0, 1],
    normalization_scope: OnnxPredictionFiltering2d.NormalizationScope = OnnxPredictionFiltering2d.NormalizationScope.GLOBAL,
    tile_size: Iterable[int] = [256, 256],
    tile_overlap: int = 32,
    output_normalization_type: OnnxPredictionFiltering2d.OutputNormalizationType = OnnxPredictionFiltering2d.OutputNormalizationType.NONE,
    output_type: OnnxPredictionFiltering2d.OutputType = OnnxPredictionFiltering2d.OutputType.SAME_AS_INPUT,
    output_image: idt.ImageType = None
) -> idt.ImageType
This function returns outputImage.
// Function prototype
public static IOLink.ImageView OnnxPredictionFiltering2d(
    IOLink.ImageView inputImage,
    String modelPath = "",
    OnnxPredictionFiltering2d.DataFormat dataFormat = ImageDev.OnnxPredictionFiltering2d.DataFormat.NHWC,
    OnnxPredictionFiltering2d.InputNormalizationType inputNormalizationType = ImageDev.OnnxPredictionFiltering2d.InputNormalizationType.STANDARDIZATION,
    double[] normalizationRange = null,
    OnnxPredictionFiltering2d.NormalizationScope normalizationScope = ImageDev.OnnxPredictionFiltering2d.NormalizationScope.GLOBAL,
    uint[] tileSize = null,
    UInt32 tileOverlap = 32,
    OnnxPredictionFiltering2d.OutputNormalizationType outputNormalizationType = ImageDev.OnnxPredictionFiltering2d.OutputNormalizationType.NONE,
    OnnxPredictionFiltering2d.OutputType outputType = ImageDev.OnnxPredictionFiltering2d.OutputType.SAME_AS_INPUT,
    IOLink.ImageView outputImage = null );
Class Syntax
Parameters
| Parameter Name | Description | Type | Supported Values | Default Value |
|---|---|---|---|---|
| inputImage | The input image. It can be a grayscale or color image, depending on the selected model. | Image | Binary, Label, Grayscale or Multispectral | nullptr |
| modelPath | The path to the ONNX model file. | String | | "" |
| dataFormat | The tensor layout expected as input by the model. The input image is automatically converted to this layout by the algorithm. | Enumeration | | NHWC |
| inputNormalizationType | The type of normalization to apply before computing the prediction. It is recommended to apply the same pre-processing as during the training. | Enumeration | | STANDARDIZATION |
| normalizationRange | The data range in which the input image is normalized before computing the prediction. It is recommended to apply the same pre-processing as during the training. This parameter is ignored if the normalization type is set to NONE. | Vector2d | Any value | {0.f, 1.f} |
| normalizationScope | The scope for computing normalization (mean, standard deviation, minimum, or maximum). This parameter is ignored if the normalization type is set to NONE. | Enumeration | | GLOBAL |
| tileSize | The width and height in pixels of the sliding window, including the user-defined tile overlap. It must be a multiple of 2 to the power of the number of downsampling or upsampling layers. Guidelines to select an appropriate tile size are available in the Tiling section. | Vector2u32 | != 0 | {256, 256} |
| tileOverlap | The number of pixels used as overlap between the tiles. An overlap of zero may lead to artifacts in the prediction result. A non-zero overlap reduces such artifacts but increases the computation time. | UInt32 | Any value | 32 |
| outputNormalizationType | The type of normalization to apply after computing the prediction. This parameter is ignored if the input normalization type is set to NONE. | Enumeration | | NONE |
| outputType | The output data type. It can either be the same as the input type or forced to float. This parameter is ignored if the input normalization type is set to NONE. | Enumeration | | SAME_AS_INPUT |
| outputImage | The output image. Its spatial dimensions and calibration are forced to the same values as the input. Its number of channels depends on the selected model. Its type depends on the selected output type. | Image | | nullptr |
| Parameter Name | Description | Type | Supported Values | Default Value |
|---|---|---|---|---|
| input_image | The input image. It can be a grayscale or color image, depending on the selected model. | image | Binary, Label, Grayscale or Multispectral | None |
| model_path | The path to the ONNX model file. | string | | "" |
| data_format | The tensor layout expected as input by the model. The input image is automatically converted to this layout by the algorithm. | enumeration | | NHWC |
| input_normalization_type | The type of normalization to apply before computing the prediction. It is recommended to apply the same pre-processing as during the training. | enumeration | | STANDARDIZATION |
| normalization_range | The data range in which the input image is normalized before computing the prediction. It is recommended to apply the same pre-processing as during the training. This parameter is ignored if the normalization type is set to NONE. | vector2d | Any value | [0, 1] |
| normalization_scope | The scope for computing normalization (mean, standard deviation, minimum, or maximum). This parameter is ignored if the normalization type is set to NONE. | enumeration | | GLOBAL |
| tile_size | The width and height in pixels of the sliding window, including the user-defined tile overlap. It must be a multiple of 2 to the power of the number of downsampling or upsampling layers. Guidelines to select an appropriate tile size are available in the Tiling section. | vector2u32 | != 0 | [256, 256] |
| tile_overlap | The number of pixels used as overlap between the tiles. An overlap of zero may lead to artifacts in the prediction result. A non-zero overlap reduces such artifacts but increases the computation time. | uint32 | Any value | 32 |
| output_normalization_type | The type of normalization to apply after computing the prediction. This parameter is ignored if the input normalization type is set to NONE. | enumeration | | NONE |
| output_type | The output data type. It can either be the same as the input type or forced to float. This parameter is ignored if the input normalization type is set to NONE. | enumeration | | SAME_AS_INPUT |
| output_image | The output image. Its spatial dimensions and calibration are forced to the same values as the input. Its number of channels depends on the selected model. Its type depends on the selected output type. | image | | None |
| Parameter Name | Description | Type | Supported Values | Default Value |
|---|---|---|---|---|
| inputImage | The input image. It can be a grayscale or color image, depending on the selected model. | Image | Binary, Label, Grayscale or Multispectral | null |
| modelPath | The path to the ONNX model file. | String | | "" |
| dataFormat | The tensor layout expected as input by the model. The input image is automatically converted to this layout by the algorithm. | Enumeration | | NHWC |
| inputNormalizationType | The type of normalization to apply before computing the prediction. It is recommended to apply the same pre-processing as during the training. | Enumeration | | STANDARDIZATION |
| normalizationRange | The data range in which the input image is normalized before computing the prediction. It is recommended to apply the same pre-processing as during the training. This parameter is ignored if the normalization type is set to NONE. | Vector2d | Any value | {0f, 1f} |
| normalizationScope | The scope for computing normalization (mean, standard deviation, minimum, or maximum). This parameter is ignored if the normalization type is set to NONE. | Enumeration | | GLOBAL |
| tileSize | The width and height in pixels of the sliding window, including the user-defined tile overlap. It must be a multiple of 2 to the power of the number of downsampling or upsampling layers. Guidelines to select an appropriate tile size are available in the Tiling section. | Vector2u32 | != 0 | {256, 256} |
| tileOverlap | The number of pixels used as overlap between the tiles. An overlap of zero may lead to artifacts in the prediction result. A non-zero overlap reduces such artifacts but increases the computation time. | UInt32 | Any value | 32 |
| outputNormalizationType | The type of normalization to apply after computing the prediction. This parameter is ignored if the input normalization type is set to NONE. | Enumeration | | NONE |
| outputType | The output data type. It can either be the same as the input type or forced to float. This parameter is ignored if the input normalization type is set to NONE. | Enumeration | | SAME_AS_INPUT |
| outputImage | The output image. Its spatial dimensions and calibration are forced to the same values as the input. Its number of channels depends on the selected model. Its type depends on the selected output type. | Image | | null |
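The examples below use a 128x128 tile with a 32-pixel overlap. As a minimal sketch of how such a sliding window can cover an image (an assumed scheme for illustration; the library's exact tiling strategy may differ), the helper below computes tile origins so that consecutive tiles advance by `tile_size - tile_overlap` pixels and the last tile is clamped to the image border:

```python
def tile_origins(image_size, tile_size, tile_overlap):
    """Hypothetical helper: top-left tile origins per axis for a
    sliding-window prediction with overlapping tiles."""
    origins = []
    for size, tile in zip(image_size, tile_size):
        # Each tile shares `tile_overlap` pixels with its neighbor.
        step = tile - tile_overlap
        starts = list(range(0, max(size - tile, 0) + 1, step))
        # Clamp a final tile to the border so the whole image is covered.
        if starts[-1] + tile < size:
            starts.append(size - tile)
        origins.append(starts)
    return origins
```

With a zero overlap the tiles abut exactly, which is where seam artifacts can appear; a non-zero overlap lets the blended interior of each tile hide the tile borders at the cost of predicting more pixels.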
Object Examples
auto autorad = ioformat::readImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "autorad.tif" );

OnnxPredictionFiltering2d onnxPredictionFiltering2dAlgo;
onnxPredictionFiltering2dAlgo.setInputImage( autorad );
onnxPredictionFiltering2dAlgo.setModelPath( std::string( IMAGEDEVDATA_OBJECTS_FOLDER ) + "noise2noise.onnx" );
onnxPredictionFiltering2dAlgo.setDataFormat( OnnxPredictionFiltering2d::DataFormat::NHWC );
onnxPredictionFiltering2dAlgo.setInputNormalizationType( OnnxPredictionFiltering2d::InputNormalizationType::NONE );
onnxPredictionFiltering2dAlgo.setNormalizationRange( {0, 1} );
onnxPredictionFiltering2dAlgo.setNormalizationScope( OnnxPredictionFiltering2d::NormalizationScope::GLOBAL );
onnxPredictionFiltering2dAlgo.setTileSize( {128, 128} );
onnxPredictionFiltering2dAlgo.setTileOverlap( 32 );
onnxPredictionFiltering2dAlgo.setOutputNormalizationType( OnnxPredictionFiltering2d::OutputNormalizationType::NONE );
onnxPredictionFiltering2dAlgo.setOutputType( OnnxPredictionFiltering2d::OutputType::SAME_AS_INPUT );
onnxPredictionFiltering2dAlgo.execute();

std::cout << "outputImage:" << onnxPredictionFiltering2dAlgo.outputImage()->toString();
autorad = ioformat.read_image(imagedev_data.get_image_path("autorad.tif"))

onnx_prediction_filtering_2d_algo = imagedev.OnnxPredictionFiltering2d()
onnx_prediction_filtering_2d_algo.input_image = autorad
onnx_prediction_filtering_2d_algo.model_path = imagedev_data.get_object_path("noise2noise.onnx")
onnx_prediction_filtering_2d_algo.data_format = imagedev.OnnxPredictionFiltering2d.NHWC
onnx_prediction_filtering_2d_algo.input_normalization_type = imagedev.OnnxPredictionFiltering2d.InputNormalizationType.NONE
onnx_prediction_filtering_2d_algo.normalization_range = [0, 1]
onnx_prediction_filtering_2d_algo.normalization_scope = imagedev.OnnxPredictionFiltering2d.GLOBAL
onnx_prediction_filtering_2d_algo.tile_size = [128, 128]
onnx_prediction_filtering_2d_algo.tile_overlap = 32
onnx_prediction_filtering_2d_algo.output_normalization_type = imagedev.OnnxPredictionFiltering2d.OutputNormalizationType.NONE
onnx_prediction_filtering_2d_algo.output_type = imagedev.OnnxPredictionFiltering2d.SAME_AS_INPUT
onnx_prediction_filtering_2d_algo.execute()

print("output_image:", str(onnx_prediction_filtering_2d_algo.output_image))
ImageView autorad = ViewIO.ReadImage( @"Data/images/autorad.tif" );

OnnxPredictionFiltering2d onnxPredictionFiltering2dAlgo = new OnnxPredictionFiltering2d
{
    inputImage = autorad,
    modelPath = @"Data/objects/noise2noise.onnx",
    dataFormat = OnnxPredictionFiltering2d.DataFormat.NHWC,
    inputNormalizationType = OnnxPredictionFiltering2d.InputNormalizationType.NONE,
    normalizationRange = new double[]{ 0, 1 },
    normalizationScope = OnnxPredictionFiltering2d.NormalizationScope.GLOBAL,
    tileSize = new uint[]{ 128, 128 },
    tileOverlap = 32,
    outputNormalizationType = OnnxPredictionFiltering2d.OutputNormalizationType.NONE,
    outputType = OnnxPredictionFiltering2d.OutputType.SAME_AS_INPUT
};
onnxPredictionFiltering2dAlgo.Execute();

Console.WriteLine( "outputImage:" + onnxPredictionFiltering2dAlgo.outputImage.ToString() );
Function Examples
auto autorad = ioformat::readImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "autorad.tif" );

auto result = onnxPredictionFiltering2d( autorad,
    std::string( IMAGEDEVDATA_OBJECTS_FOLDER ) + "noise2noise.onnx",
    OnnxPredictionFiltering2d::DataFormat::NHWC,
    OnnxPredictionFiltering2d::InputNormalizationType::NONE,
    {0, 1},
    OnnxPredictionFiltering2d::NormalizationScope::GLOBAL,
    {128, 128},
    32,
    OnnxPredictionFiltering2d::OutputNormalizationType::NONE,
    OnnxPredictionFiltering2d::OutputType::SAME_AS_INPUT );

std::cout << "outputImage:" << result->toString();
autorad = ioformat.read_image(imagedev_data.get_image_path("autorad.tif"))

result = imagedev.onnx_prediction_filtering_2d(autorad,
    imagedev_data.get_object_path("noise2noise.onnx"),
    imagedev.OnnxPredictionFiltering2d.NHWC,
    imagedev.OnnxPredictionFiltering2d.InputNormalizationType.NONE,
    [0, 1],
    imagedev.OnnxPredictionFiltering2d.GLOBAL,
    [128, 128],
    32,
    imagedev.OnnxPredictionFiltering2d.OutputNormalizationType.NONE,
    imagedev.OnnxPredictionFiltering2d.SAME_AS_INPUT)

print("output_image:", str(result))
ImageView autorad = ViewIO.ReadImage( @"Data/images/autorad.tif" );

IOLink.ImageView result = Processing.OnnxPredictionFiltering2d( autorad,
    @"Data/objects/noise2noise.onnx",
    OnnxPredictionFiltering2d.DataFormat.NHWC,
    OnnxPredictionFiltering2d.InputNormalizationType.NONE,
    new double[]{ 0, 1 },
    OnnxPredictionFiltering2d.NormalizationScope.GLOBAL,
    new uint[]{ 128, 128 },
    32,
    OnnxPredictionFiltering2d.OutputNormalizationType.NONE,
    OnnxPredictionFiltering2d.OutputType.SAME_AS_INPUT );

Console.WriteLine( "outputImage:" + result.ToString() );
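As noted in the overview, using the prediction for segmentation requires a post-processing step on the raw scores. A minimal sketch of such a step, with an assumed threshold value (the helper and threshold are illustrative, not part of the library):

```python
import numpy as np

def scores_to_mask(scores, threshold=0.5):
    """Hypothetical post-processing: binarize raw prediction scores
    into a segmentation mask. The threshold should be chosen to suit
    the model's score distribution."""
    return (np.asarray(scores) >= threshold).astype(np.uint8)
```

For filtering models such as the noise-to-noise example above, no thresholding is needed: the raw scores (optionally restored into the input data range) are the filtered image.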