SegmentationMetrics
Computes classical quality metrics for assessing a segmentation relative to a ground truth.
This algorithm compares a segmented image with a ground truth in order to assess the accuracy of a segmentation process; for instance, a machine-learning classification.
It outputs several statistical indicators, which can be computed independently for each label of the input images, or globally to provide a unique set of scores:
- The number of positive results in the input image that are defined as such in the ground truth image (TP).
- The number of positive results in the input image that are not defined as such in the ground truth image (FP).
- The number of negative results in the input image that are defined as such in the ground truth image (TN).
- The number of negative results in the input image that are not defined as such in the ground truth image (FN).
- The sensitivity, which measures the proportion of actual positives that are correctly identified as such. This metric is also called recall or true positive rate. $$Sensitivity = Recall = TPR = \frac{TP}{TP+FN}$$
- The specificity, which measures the proportion of actual negatives that are correctly identified as such (true negative rate). $$ Specificity = TNR = \frac{TN}{TN+FP}$$
- The precision, which measures the fraction of relevant results (true positives) among positive results (true and false positives). $$ Precision = \frac{TP}{TP+FP}$$
- The accuracy, which measures the percentage of correct classifications among all results. $$ Accuracy = \frac{TP+TN}{TP+FN+TN+FP} $$
- The Dice coefficient, which is a statistic used for comparing the similarity of two samples (Sørensen-Dice coefficient). This metric is also called the $F_1$ score. $$ Dice = F_1 = \frac{2TP}{2TP+FP+FN} $$
- The Jaccard index, also called mean Intersection-Over-Union metric. $$ Jaccard = IoU = \frac{TP}{TP+FP+FN} $$
- The Panoptic Quality metric, which blends the Intersection-Over-Union scores of the $N$ analyzed classes. It is only available when computing the metrics per image. $$ PQ = \frac{ \sum_{l=1}^{N}{IoU(l)}}{TP+0.5FP+0.5FN} $$
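For reference, the formulas above can be sketched in plain Python from raw TP, FP, TN, and FN counts. This is an illustration only; the function and variable names below are not part of the ImageDev API, and the library computes these scores internally.

```python
def segmentation_scores(tp, fp, tn, fn):
    """Return the classical segmentation quality metrics as a dict,
    following the formulas listed above."""
    return {
        "sensitivity": tp / (tp + fn),               # recall / true positive rate
        "specificity": tn / (tn + fp),               # true negative rate
        "precision":   tp / (tp + fp),
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "dice":        2 * tp / (2 * tp + fp + fn),  # F1 score
        "jaccard":     tp / (tp + fp + fn),          # IoU
    }

def panoptic_quality(per_class_iou, tp, fp, fn):
    """Blend the per-class IoU scores, per the PQ formula above."""
    return sum(per_class_iou) / (tp + 0.5 * fp + 0.5 * fn)

# Example with hypothetical counts:
scores = segmentation_scores(tp=90, fp=10, tn=880, fn=20)
print(scores["dice"])     # 2*90 / (180 + 10 + 20) ≈ 0.857
print(scores["accuracy"]) # (90 + 880) / 1000 = 0.97
```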
Function Syntax
This function returns a SegmentationMetricsOutput structure containing outputClassMeasurement and outputImageMeasurement.
// Output structure of the segmentationMetrics function.
struct SegmentationMetricsOutput
{
    /// The similarity results. The metrics are given for each label of the input image.
    SegmentationMetricsObjectMsr::Ptr outputClassMeasurement;
    /// The similarity results. The metrics are given globally for the input image.
    SegmentationMetricsImageMsr::Ptr outputImageMeasurement;
};

// Function prototype.
SegmentationMetricsOutput segmentationMetrics(
    std::shared_ptr< iolink::ImageView > inputObjectImage,
    std::shared_ptr< iolink::ImageView > inputReferenceImage,
    SegmentationMetrics::MetricScope metricScope,
    SegmentationMetricsObjectMsr::Ptr outputClassMeasurement = nullptr,
    SegmentationMetricsImageMsr::Ptr outputImageMeasurement = nullptr );
This function returns a tuple containing output_class_measurement and output_image_measurement.
# Function prototype.
segmentation_metrics(
    input_object_image: idt.ImageType,
    input_reference_image: idt.ImageType,
    metric_scope: SegmentationMetrics.MetricScope = SegmentationMetrics.MetricScope.BOTH,
    output_class_measurement: Union[Any, None] = None,
    output_image_measurement: Union[Any, None] = None
) -> Tuple[SegmentationMetricsObjectMsr, SegmentationMetricsImageMsr]
This function returns a SegmentationMetricsOutput structure containing outputClassMeasurement and outputImageMeasurement.
/// Output structure of the SegmentationMetrics function.
public struct SegmentationMetricsOutput
{
    /// The similarity results. The metrics are given for each label of the input image.
    public SegmentationMetricsObjectMsr outputClassMeasurement;
    /// The similarity results. The metrics are given globally for the input image.
    public SegmentationMetricsImageMsr outputImageMeasurement;
};

// Function prototype.
public static SegmentationMetricsOutput SegmentationMetrics(
    IOLink.ImageView inputObjectImage,
    IOLink.ImageView inputReferenceImage,
    SegmentationMetrics.MetricScope metricScope = ImageDev.SegmentationMetrics.MetricScope.BOTH,
    SegmentationMetricsObjectMsr outputClassMeasurement = null,
    SegmentationMetricsImageMsr outputImageMeasurement = null );
Class Syntax
Parameters
Parameter Name | Description | Type | Supported Values | Default Value
---|---|---|---|---
inputObjectImage | The input image. It can be a binary or a label image. | Image | Binary or Label | nullptr
inputReferenceImage | The ground truth image. It can be a binary or a label image. It must have the same shape and interpretation as the first input image. | Image | Binary, Label, Grayscale or Multispectral | nullptr
metricScope | The type of output measurement to compute. | Enumeration | | BOTH
outputClassMeasurement | The similarity results. The metrics are given for each label of the input image. | SegmentationMetricsObjectMsr | | nullptr
outputImageMeasurement | The similarity results. The metrics are given globally for the input image. | SegmentationMetricsImageMsr | | nullptr
Parameter Name | Description | Type | Supported Values | Default Value
---|---|---|---|---
input_object_image | The input image. It can be a binary or a label image. | image | Binary or Label | None
input_reference_image | The ground truth image. It can be a binary or a label image. It must have the same shape and interpretation as the first input image. | image | Binary, Label, Grayscale or Multispectral | None
metric_scope | The type of output measurement to compute. | enumeration | | BOTH
output_class_measurement | The similarity results. The metrics are given for each label of the input image. | SegmentationMetricsObjectMsr | | None
output_image_measurement | The similarity results. The metrics are given globally for the input image. | SegmentationMetricsImageMsr | | None
Parameter Name | Description | Type | Supported Values | Default Value
---|---|---|---|---
inputObjectImage | The input image. It can be a binary or a label image. | Image | Binary or Label | null
inputReferenceImage | The ground truth image. It can be a binary or a label image. It must have the same shape and interpretation as the first input image. | Image | Binary, Label, Grayscale or Multispectral | null
metricScope | The type of output measurement to compute. | Enumeration | | BOTH
outputClassMeasurement | The similarity results. The metrics are given for each label of the input image. | SegmentationMetricsObjectMsr | | null
outputImageMeasurement | The similarity results. The metrics are given globally for the input image. | SegmentationMetricsImageMsr | | null
Object Examples
auto polystyrene_sep_label = readVipImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "polystyrene_sep_label.vip" );

SegmentationMetrics segmentationMetricsAlgo;
segmentationMetricsAlgo.setInputObjectImage( polystyrene_sep_label );
segmentationMetricsAlgo.setInputReferenceImage( polystyrene_sep_label );
segmentationMetricsAlgo.setMetricScope( SegmentationMetrics::MetricScope::BOTH );
segmentationMetricsAlgo.execute();

std::cout << "accuracy: " << segmentationMetricsAlgo.outputClassMeasurement()->accuracy( 0, 0 );
std::cout << "accuracy: " << segmentationMetricsAlgo.outputImageMeasurement()->accuracy( 0 );
polystyrene_sep_label = imagedev.read_vip_image(imagedev_data.get_image_path("polystyrene_sep_label.vip"))

segmentation_metrics_algo = imagedev.SegmentationMetrics()
segmentation_metrics_algo.input_object_image = polystyrene_sep_label
segmentation_metrics_algo.input_reference_image = polystyrene_sep_label
segmentation_metrics_algo.metric_scope = imagedev.SegmentationMetrics.BOTH
segmentation_metrics_algo.execute()

print("accuracy: ", str(segmentation_metrics_algo.output_class_measurement.accuracy(0, 0)))
print("accuracy: ", str(segmentation_metrics_algo.output_image_measurement.accuracy(0)))
ImageView polystyrene_sep_label = Data.ReadVipImage( @"Data/images/polystyrene_sep_label.vip" );

SegmentationMetrics segmentationMetricsAlgo = new SegmentationMetrics
{
    inputObjectImage = polystyrene_sep_label,
    inputReferenceImage = polystyrene_sep_label,
    metricScope = SegmentationMetrics.MetricScope.BOTH
};
segmentationMetricsAlgo.Execute();

Console.WriteLine( "accuracy: " + segmentationMetricsAlgo.outputClassMeasurement.accuracy( 0, 0 ) );
Console.WriteLine( "accuracy: " + segmentationMetricsAlgo.outputImageMeasurement.accuracy( 0 ) );
Function Examples
auto polystyrene_sep_label = readVipImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "polystyrene_sep_label.vip" );

auto result = segmentationMetrics( polystyrene_sep_label, polystyrene_sep_label, SegmentationMetrics::MetricScope::BOTH );

std::cout << "accuracy: " << result.outputClassMeasurement->accuracy( 0, 0 );
std::cout << "accuracy: " << result.outputImageMeasurement->accuracy( 0 );
polystyrene_sep_label = imagedev.read_vip_image(imagedev_data.get_image_path("polystyrene_sep_label.vip"))

result_output_class_measurement, result_output_image_measurement = imagedev.segmentation_metrics(
    polystyrene_sep_label, polystyrene_sep_label, imagedev.SegmentationMetrics.BOTH)

print("accuracy: ", str(result_output_class_measurement.accuracy(0, 0)))
print("accuracy: ", str(result_output_image_measurement.accuracy(0)))
ImageView polystyrene_sep_label = Data.ReadVipImage( @"Data/images/polystyrene_sep_label.vip" );

Processing.SegmentationMetricsOutput result = Processing.SegmentationMetrics( polystyrene_sep_label, polystyrene_sep_label, SegmentationMetrics.MetricScope.BOTH );

Console.WriteLine( "accuracy: " + result.outputClassMeasurement.accuracy( 0, 0 ) );
Console.WriteLine( "accuracy: " + result.outputImageMeasurement.accuracy( 0 ) );
© 2025 Thermo Fisher Scientific Inc. All rights reserved.