ImageDev

SupervisedTextureClassification2d

Performs a segmentation of a two-dimensional grayscale image, based on a texture model automatically built from a training input image.

This algorithm automatically chains the three steps of the texture classification workflow: model creation, training, and model application.
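
Since the training step is driven entirely by the label image, it helps to picture how such an image is laid out before running the function. The following sketch uses NumPy only; the image size, patch positions, and class count are arbitrary illustrations, not values required by the algorithm.

import numpy as np

# A training label image is a 16-bit (or 32-bit) label image in which each
# non-zero label marks a sample region of one texture class. Pixels left at 0
# do not contribute to the training step.
training = np.zeros((512, 512), dtype=np.uint16)
training[40:120, 40:120] = 1     # sample patch of texture class 1
training[300:380, 60:140] = 2    # sample patch of texture class 2
training[200:280, 350:430] = 3   # sample patch of texture class 3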

Function Syntax

This function returns a SupervisedTextureClassification2dOutput structure containing outputLabelImage and outputMapImage.
// Output structure of the supervisedTextureClassification2d function.
struct SupervisedTextureClassification2dOutput
{
    /// The output label image representing the texture classification result. Its dimensions and type are forced to the same values as the training input.
    std::shared_ptr< iolink::ImageView > outputLabelImage;
    /// The output map image. Its dimensions are forced to the same values as the training image. In CLASS_DISTANCE mode, its number of channels is equal to the number of classes defined in the training image. Its data type is forced to floating point.
    std::shared_ptr< iolink::ImageView > outputMapImage;
};

// Function prototype
SupervisedTextureClassification2dOutput
supervisedTextureClassification2d( std::shared_ptr< iolink::ImageView > inputImage,
                                   std::shared_ptr< iolink::ImageView > inputTrainingImage,
                                   int32_t featureGroup,
                                   iolink::Vector2u32 radiusRange,
                                   uint32_t radiusStep,
                                   uint32_t coocRadius,
                                   SupervisedTextureClassification2d::CoocTextonShape coocTextonShape,
                                   uint32_t coocTextonSize,
                                   double minSeparationPercentage,
                                   SupervisedTextureClassification2d::OutputMapType outputMapType,
                                   std::shared_ptr< iolink::ImageView > outputLabelImage = nullptr,
                                   std::shared_ptr< iolink::ImageView > outputMapImage = nullptr );

This function returns a tuple containing output_label_image and output_map_image.
// Function prototype.
supervised_texture_classification_2d(input_image: idt.ImageType,
                                     input_training_image: idt.ImageType,
                                     feature_group: Union[int, str, SupervisedTextureClassification2d.FeatureGroup] = 31,
                                     radius_range: Iterable[int] = [2, 14],
                                     radius_step: int = 4,
                                     cooc_radius: int = 10,
                                     cooc_texton_shape: SupervisedTextureClassification2d.CoocTextonShape = SupervisedTextureClassification2d.CoocTextonShape.SPHERE,
                                     cooc_texton_size: int = 4,
                                     min_separation_percentage: float = 3,
                                     output_map_type: SupervisedTextureClassification2d.OutputMapType = SupervisedTextureClassification2d.OutputMapType.CLOSEST_DISTANCE,
                                     output_label_image: idt.ImageType = None,
                                     output_map_image: idt.ImageType = None) -> Tuple[idt.ImageType, idt.ImageType]

This function returns a SupervisedTextureClassification2dOutput structure containing outputLabelImage and outputMapImage.
/// Output structure of the SupervisedTextureClassification2d function.
public struct SupervisedTextureClassification2dOutput
{
    /// The output label image representing the texture classification result. Its dimensions and type are forced to the same values as the training input.
    public IOLink.ImageView outputLabelImage;
    /// The output map image. Its dimensions are forced to the same values as the training image. In CLASS_DISTANCE mode, its number of channels is equal to the number of classes defined in the training image. Its data type is forced to floating point.
    public IOLink.ImageView outputMapImage;
};

// Function prototype.
public static SupervisedTextureClassification2dOutput
SupervisedTextureClassification2d( IOLink.ImageView inputImage,
                                   IOLink.ImageView inputTrainingImage,
                                   Int32 featureGroup = 31,
                                   uint[] radiusRange = null,
                                   UInt32 radiusStep = 4,
                                   UInt32 coocRadius = 10,
                                   SupervisedTextureClassification2d.CoocTextonShape coocTextonShape = ImageDev.SupervisedTextureClassification2d.CoocTextonShape.SPHERE,
                                   UInt32 coocTextonSize = 4,
                                   double minSeparationPercentage = 3,
                                   SupervisedTextureClassification2d.OutputMapType outputMapType = ImageDev.SupervisedTextureClassification2d.OutputMapType.CLOSEST_DISTANCE,
                                   IOLink.ImageView outputLabelImage = null,
                                   IOLink.ImageView outputMapImage = null );

Parameters

C++

inputImage (input)
The input grayscale image to segment.
Type: Image. Supported values: Grayscale. Default: nullptr.

inputTrainingImage (input)
The input label training image (16 or 32 bits) where each label represents a class sample for the training step.
Type: Image. Supported values: Label. Default: nullptr.

featureGroup (input)
The groups of textural features to compute. This list defines all the textural attributes available for performing the classification. Groups can be combined by adding their associated values; see the sketch after the parameter tables.
DIRECTIONAL_COOCCURRENCE: features based on co-occurrence matrices. One feature is extracted from each co-occurrence vector. Associated value = 1.
ROTATION_INVARIANT_COOCCURRENCE: features based on co-occurrence matrices. Three statistical features are extracted from all vectors. Associated value = 2.
FIRST_ORDER_STATISTICS: features based on first-order statistics that are not computed using a histogram. Associated value = 4.
HISTOGRAM_STATISTICS: features based on histogram statistics, including histogram quantiles. Associated value = 8.
INTENSITY: a feature based on the intensity value of the input image. Associated value = 16.
Type: MultipleChoice. Supported values: DIRECTIONAL_COOCCURRENCE | ROTATION_INVARIANT_COOCCURRENCE | FIRST_ORDER_STATISTICS | HISTOGRAM_STATISTICS | INTENSITY. Default: 31 (all groups selected).

radiusRange (input)
The minimum and maximum radii, in pixels, of the circular neighborhoods used for computing textural features.
Type: Vector2u32. Supported values: >= 1. Default: {2, 14}.

radiusStep (input)
The step, in pixels, used to define the set of radii between the minimum and the maximum. The maximum radius is systematically added to the radius list. For example, a range of {2, 14} with a step of 4 yields the radii 2, 6, 10, and 14.
Type: UInt32. Supported values: >= 1. Default: 4.

coocRadius (input)
The radius, in pixels, of the circular neighborhood used by the co-occurrence features. This parameter is ignored if no co-occurrence feature group is selected.
Type: UInt32. Supported values: >= 1. Default: 10.

coocTextonShape (input)
The shape of the co-occurrence texton (the pattern defined by the set of co-occurrence vectors). This parameter is ignored if no co-occurrence feature group is selected.
The texton shape represents the distribution of points around the target point for computing the co-occurrence matrices. Combined with the texton size, it defines the set of vectors used for computing the co-occurrence features. For instance, in 2D, a CUBE shape of size 3 defines the co-occurrence vectors (-3, -3), (0, -3), (3, -3), (-3, 0), (3, 0), (-3, 3), (0, 3), and (3, 3).
CUBE: the set of points at the corners and edge centers of a square whose half side is defined by the coocTextonSize parameter.
SPHERE: the set of points located at the same Euclidean distance, defined by the coocTextonSize parameter, from the center. This mode is recommended when a repetitive texture is mono-scale.
BALL: the set of points located at a distance less than or equal to the coocTextonSize parameter from the center. This mode can be useful for classifying a multi-scale repetitive texture, but may be very time-consuming.
Type: Enumeration. Default: SPHERE.

coocTextonSize (input)
The size, in pixels, of the texton shape for co-occurrence features. This parameter is ignored if no co-occurrence feature group is selected.
This size is constrained by the radius parameter, and the constraint depends on the texton shape. For instance, with a square texton, the texton size cannot exceed the rounded value of $radius \times \sqrt{2}$ (14 for a radius of 10).
Type: UInt32. Supported values: >= 1. Default: 4.

minSeparationPercentage (input)
This parameter controls the rejection criterion of the feature selection (FS) algorithm.
A measure is rejected if its contribution does not sufficiently increase the separation power of the classification model. This ratio indicates the minimal relative growth required to keep a measure. More information is available in the Feature Selection section. This value must be greater than or equal to 0.0.
Type: Float64. Supported values: [0, 100]. Default: 3.

outputMapType (input)
The type of uncertainty map image to compute.
CLOSEST_DISTANCE: the uncertainty map represents the Mahalanobis distance to the class selected by the classification. The closer this metric is to 0, the more confident the classification.
RELATIVE_DISTANCE: the uncertainty map represents the Mahalanobis distance to the class selected by the classification ($d_1$), weighted by the gap to the second closest distance ($d_2$). The smaller this metric, the more confident and less ambiguous the classification. $$ \mathrm{MapValue} = \log\left( \frac{d_1}{d_2 - d_1} \right) $$
CLASS_DISTANCE: the uncertainty map is a multichannel image where each channel represents the distance to the corresponding class.
NONE: no uncertainty map is computed.
Type: Enumeration. Default: CLOSEST_DISTANCE.

outputLabelImage (output)
The output label image representing the texture classification result. Its dimensions and type are forced to the same values as the training input.
Type: Image. Default: nullptr.

outputMapImage (output)
The output map image. Its dimensions are forced to the same values as the training image. In CLASS_DISTANCE mode, its number of channels is equal to the number of classes defined in the training image. Its data type is forced to floating point.
Type: Image. Default: nullptr.
Python

input_image (input)
The input grayscale image to segment.
Type: image. Supported values: Grayscale. Default: None.

input_training_image (input)
The input label training image (16 or 32 bits) where each label represents a class sample for the training step.
Type: image. Supported values: Label. Default: None.

feature_group (input)
The groups of textural features to compute. This list defines all the textural attributes available for performing the classification. Groups can be combined by adding their associated values; see the sketch after the parameter tables.
DIRECTIONAL_COOCCURRENCE: features based on co-occurrence matrices. One feature is extracted from each co-occurrence vector. Associated value = 1.
ROTATION_INVARIANT_COOCCURRENCE: features based on co-occurrence matrices. Three statistical features are extracted from all vectors. Associated value = 2.
FIRST_ORDER_STATISTICS: features based on first-order statistics that are not computed using a histogram. Associated value = 4.
HISTOGRAM_STATISTICS: features based on histogram statistics, including histogram quantiles. Associated value = 8.
INTENSITY: a feature based on the intensity value of the input image. Associated value = 16.
Type: multiple_choice. Supported values: DIRECTIONAL_COOCCURRENCE | ROTATION_INVARIANT_COOCCURRENCE | FIRST_ORDER_STATISTICS | HISTOGRAM_STATISTICS | INTENSITY. Default: 31 (all groups selected).

radius_range (input)
The minimum and maximum radii, in pixels, of the circular neighborhoods used for computing textural features.
Type: vector2u32. Supported values: >= 1. Default: [2, 14].

radius_step (input)
The step, in pixels, used to define the set of radii between the minimum and the maximum. The maximum radius is systematically added to the radius list. For example, a range of [2, 14] with a step of 4 yields the radii 2, 6, 10, and 14.
Type: uint32. Supported values: >= 1. Default: 4.

cooc_radius (input)
The radius, in pixels, of the circular neighborhood used by the co-occurrence features. This parameter is ignored if no co-occurrence feature group is selected.
Type: uint32. Supported values: >= 1. Default: 10.

cooc_texton_shape (input)
The shape of the co-occurrence texton (the pattern defined by the set of co-occurrence vectors). This parameter is ignored if no co-occurrence feature group is selected.
The texton shape represents the distribution of points around the target point for computing the co-occurrence matrices. Combined with the texton size, it defines the set of vectors used for computing the co-occurrence features. For instance, in 2D, a CUBE shape of size 3 defines the co-occurrence vectors (-3, -3), (0, -3), (3, -3), (-3, 0), (3, 0), (-3, 3), (0, 3), and (3, 3).
CUBE: the set of points at the corners and edge centers of a square whose half side is defined by the cooc_texton_size parameter.
SPHERE: the set of points located at the same Euclidean distance, defined by the cooc_texton_size parameter, from the center. This mode is recommended when a repetitive texture is mono-scale.
BALL: the set of points located at a distance less than or equal to the cooc_texton_size parameter from the center. This mode can be useful for classifying a multi-scale repetitive texture, but may be very time-consuming.
Type: enumeration. Default: SPHERE.

cooc_texton_size (input)
The size, in pixels, of the texton shape for co-occurrence features. This parameter is ignored if no co-occurrence feature group is selected.
This size is constrained by the radius parameter, and the constraint depends on the texton shape. For instance, with a square texton, the texton size cannot exceed the rounded value of $radius \times \sqrt{2}$ (14 for a radius of 10).
Type: uint32. Supported values: >= 1. Default: 4.

min_separation_percentage (input)
This parameter controls the rejection criterion of the feature selection (FS) algorithm.
A measure is rejected if its contribution does not sufficiently increase the separation power of the classification model. This ratio indicates the minimal relative growth required to keep a measure. More information is available in the Feature Selection section. This value must be greater than or equal to 0.0.
Type: float64. Supported values: [0, 100]. Default: 3.

output_map_type (input)
The type of uncertainty map image to compute.
CLOSEST_DISTANCE: the uncertainty map represents the Mahalanobis distance to the class selected by the classification. The closer this metric is to 0, the more confident the classification.
RELATIVE_DISTANCE: the uncertainty map represents the Mahalanobis distance to the class selected by the classification ($d_1$), weighted by the gap to the second closest distance ($d_2$). The smaller this metric, the more confident and less ambiguous the classification. $$ \mathrm{MapValue} = \log\left( \frac{d_1}{d_2 - d_1} \right) $$
CLASS_DISTANCE: the uncertainty map is a multichannel image where each channel represents the distance to the corresponding class.
NONE: no uncertainty map is computed.
Type: enumeration. Default: CLOSEST_DISTANCE.

output_label_image (output)
The output label image representing the texture classification result. Its dimensions and type are forced to the same values as the training input.
Type: image. Default: None.

output_map_image (output)
The output map image. Its dimensions are forced to the same values as the training image. In CLASS_DISTANCE mode, its number of channels is equal to the number of classes defined in the training image. Its data type is forced to floating point.
Type: image. Default: None.
C#

inputImage (input)
The input grayscale image to segment.
Type: Image. Supported values: Grayscale. Default: null.

inputTrainingImage (input)
The input label training image (16 or 32 bits) where each label represents a class sample for the training step.
Type: Image. Supported values: Label. Default: null.

featureGroup (input)
The groups of textural features to compute. This list defines all the textural attributes available for performing the classification. Groups can be combined by adding their associated values; see the sketch after the parameter tables.
DIRECTIONAL_COOCCURRENCE: features based on co-occurrence matrices. One feature is extracted from each co-occurrence vector. Associated value = 1.
ROTATION_INVARIANT_COOCCURRENCE: features based on co-occurrence matrices. Three statistical features are extracted from all vectors. Associated value = 2.
FIRST_ORDER_STATISTICS: features based on first-order statistics that are not computed using a histogram. Associated value = 4.
HISTOGRAM_STATISTICS: features based on histogram statistics, including histogram quantiles. Associated value = 8.
INTENSITY: a feature based on the intensity value of the input image. Associated value = 16.
Type: MultipleChoice. Supported values: DIRECTIONAL_COOCCURRENCE | ROTATION_INVARIANT_COOCCURRENCE | FIRST_ORDER_STATISTICS | HISTOGRAM_STATISTICS | INTENSITY. Default: 31 (all groups selected).

radiusRange (input)
The minimum and maximum radii, in pixels, of the circular neighborhoods used for computing textural features.
Type: Vector2u32. Supported values: >= 1. Default: {2, 14}.

radiusStep (input)
The step, in pixels, used to define the set of radii between the minimum and the maximum. The maximum radius is systematically added to the radius list. For example, a range of {2, 14} with a step of 4 yields the radii 2, 6, 10, and 14.
Type: UInt32. Supported values: >= 1. Default: 4.

coocRadius (input)
The radius, in pixels, of the circular neighborhood used by the co-occurrence features. This parameter is ignored if no co-occurrence feature group is selected.
Type: UInt32. Supported values: >= 1. Default: 10.

coocTextonShape (input)
The shape of the co-occurrence texton (the pattern defined by the set of co-occurrence vectors). This parameter is ignored if no co-occurrence feature group is selected.
The texton shape represents the distribution of points around the target point for computing the co-occurrence matrices. Combined with the texton size, it defines the set of vectors used for computing the co-occurrence features. For instance, in 2D, a CUBE shape of size 3 defines the co-occurrence vectors (-3, -3), (0, -3), (3, -3), (-3, 0), (3, 0), (-3, 3), (0, 3), and (3, 3).
CUBE: the set of points at the corners and edge centers of a square whose half side is defined by the coocTextonSize parameter.
SPHERE: the set of points located at the same Euclidean distance, defined by the coocTextonSize parameter, from the center. This mode is recommended when a repetitive texture is mono-scale.
BALL: the set of points located at a distance less than or equal to the coocTextonSize parameter from the center. This mode can be useful for classifying a multi-scale repetitive texture, but may be very time-consuming.
Type: Enumeration. Default: SPHERE.

coocTextonSize (input)
The size, in pixels, of the texton shape for co-occurrence features. This parameter is ignored if no co-occurrence feature group is selected.
This size is constrained by the radius parameter, and the constraint depends on the texton shape. For instance, with a square texton, the texton size cannot exceed the rounded value of $radius \times \sqrt{2}$ (14 for a radius of 10).
Type: UInt32. Supported values: >= 1. Default: 4.

minSeparationPercentage (input)
This parameter controls the rejection criterion of the feature selection (FS) algorithm.
A measure is rejected if its contribution does not sufficiently increase the separation power of the classification model. This ratio indicates the minimal relative growth required to keep a measure. More information is available in the Feature Selection section. This value must be greater than or equal to 0.0.
Type: Float64. Supported values: [0, 100]. Default: 3.

outputMapType (input)
The type of uncertainty map image to compute.
CLOSEST_DISTANCE: the uncertainty map represents the Mahalanobis distance to the class selected by the classification. The closer this metric is to 0, the more confident the classification.
RELATIVE_DISTANCE: the uncertainty map represents the Mahalanobis distance to the class selected by the classification ($d_1$), weighted by the gap to the second closest distance ($d_2$). The smaller this metric, the more confident and less ambiguous the classification. $$ \mathrm{MapValue} = \log\left( \frac{d_1}{d_2 - d_1} \right) $$
CLASS_DISTANCE: the uncertainty map is a multichannel image where each channel represents the distance to the corresponding class.
NONE: no uncertainty map is computed.
Type: Enumeration. Default: CLOSEST_DISTANCE.

outputLabelImage (output)
The output label image representing the texture classification result. Its dimensions and type are forced to the same values as the training input.
Type: Image. Default: null.

outputMapImage (output)
The output map image. Its dimensions are forced to the same values as the training image. In CLASS_DISTANCE mode, its number of channels is equal to the number of classes defined in the training image. Its data type is forced to floating point.
Type: Image. Default: null.
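
The featureGroup parameter is a plain bit mask: selected groups are combined by adding, or bitwise OR-ing, their associated values. Below is a minimal sketch in Python using only the associated values listed in the tables above; per the prototype, the Python binding also accepts SupervisedTextureClassification2d.FeatureGroup members or strings.

# Associated values of the feature groups, as documented above.
DIRECTIONAL_COOCCURRENCE = 1
ROTATION_INVARIANT_COOCCURRENCE = 2
FIRST_ORDER_STATISTICS = 4
HISTOGRAM_STATISTICS = 8
INTENSITY = 16

# OR the values together to select several groups at once.
all_groups = (DIRECTIONAL_COOCCURRENCE | ROTATION_INVARIANT_COOCCURRENCE |
              FIRST_ORDER_STATISTICS | HISTOGRAM_STATISTICS | INTENSITY)  # 31, the default
stats_only = FIRST_ORDER_STATISTICS | HISTOGRAM_STATISTICS                # 12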

Object Examples

auto polystyrene = ioformat::readImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "polystyrene.tif" );
auto polystyrene_sep_label = readVipImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "polystyrene_sep_label.vip" );

SupervisedTextureClassification2d supervisedTextureClassification2dAlgo;
supervisedTextureClassification2dAlgo.setInputImage( polystyrene );
supervisedTextureClassification2dAlgo.setInputTrainingImage( polystyrene_sep_label );
supervisedTextureClassification2dAlgo.setFeatureGroup( 4 );
supervisedTextureClassification2dAlgo.setRadiusRange( {2, 4} );
supervisedTextureClassification2dAlgo.setRadiusStep( 4 );
supervisedTextureClassification2dAlgo.setCoocRadius( 4 );
supervisedTextureClassification2dAlgo.setCoocTextonShape( SupervisedTextureClassification2d::CoocTextonShape::CUBE );
supervisedTextureClassification2dAlgo.setCoocTextonSize( 4 );
supervisedTextureClassification2dAlgo.setMinSeparationPercentage( 3 );
supervisedTextureClassification2dAlgo.setOutputMapType( SupervisedTextureClassification2d::OutputMapType::CLOSEST_DISTANCE );
supervisedTextureClassification2dAlgo.execute();

std::cout << "outputLabelImage:" << supervisedTextureClassification2dAlgo.outputLabelImage()->toString();
std::cout << "outputMapImage:" << supervisedTextureClassification2dAlgo.outputMapImage()->toString();

polystyrene = ioformat.read_image(imagedev_data.get_image_path("polystyrene.tif"))
polystyrene_sep_label = imagedev.read_vip_image(imagedev_data.get_image_path("polystyrene_sep_label.vip"))

supervised_texture_classification_2d_algo = imagedev.SupervisedTextureClassification2d()
supervised_texture_classification_2d_algo.input_image = polystyrene
supervised_texture_classification_2d_algo.input_training_image = polystyrene_sep_label
supervised_texture_classification_2d_algo.feature_group = 4
supervised_texture_classification_2d_algo.radius_range = [2, 4]
supervised_texture_classification_2d_algo.radius_step = 4
supervised_texture_classification_2d_algo.cooc_radius = 4
supervised_texture_classification_2d_algo.cooc_texton_shape = imagedev.SupervisedTextureClassification2d.CUBE
supervised_texture_classification_2d_algo.cooc_texton_size = 4
supervised_texture_classification_2d_algo.min_separation_percentage = 3
supervised_texture_classification_2d_algo.output_map_type = imagedev.SupervisedTextureClassification2d.CLOSEST_DISTANCE
supervised_texture_classification_2d_algo.execute()

print("output_label_image:", str(supervised_texture_classification_2d_algo.output_label_image))
print("output_map_image:", str(supervised_texture_classification_2d_algo.output_map_image))

ImageView polystyrene = ViewIO.ReadImage( @"Data/images/polystyrene.tif" );
ImageView polystyrene_sep_label = Data.ReadVipImage( @"Data/images/polystyrene_sep_label.vip" );

SupervisedTextureClassification2d supervisedTextureClassification2dAlgo = new SupervisedTextureClassification2d
{
    inputImage = polystyrene,
    inputTrainingImage = polystyrene_sep_label,
    featureGroup = 4,
    radiusRange = new uint[]{2, 4},
    radiusStep = 4,
    coocRadius = 4,
    coocTextonShape = SupervisedTextureClassification2d.CoocTextonShape.CUBE,
    coocTextonSize = 4,
    minSeparationPercentage = 3,
    outputMapType = SupervisedTextureClassification2d.OutputMapType.CLOSEST_DISTANCE
};
supervisedTextureClassification2dAlgo.Execute();

Console.WriteLine( "outputLabelImage:" + supervisedTextureClassification2dAlgo.outputLabelImage.ToString() );
Console.WriteLine( "outputMapImage:" + supervisedTextureClassification2dAlgo.outputMapImage.ToString() );

Function Examples

auto polystyrene = ioformat::readImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "polystyrene.tif" );
auto polystyrene_sep_label = readVipImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "polystyrene_sep_label.vip" );

auto result = supervisedTextureClassification2d( polystyrene, polystyrene_sep_label, 4, {2, 4}, 4, 4, SupervisedTextureClassification2d::CoocTextonShape::CUBE, 4, 3, SupervisedTextureClassification2d::OutputMapType::CLOSEST_DISTANCE );

std::cout << "outputLabelImage:" << result.outputLabelImage->toString();
std::cout << "outputMapImage:" << result.outputMapImage->toString();

polystyrene = ioformat.read_image(imagedev_data.get_image_path("polystyrene.tif"))
polystyrene_sep_label = imagedev.read_vip_image(imagedev_data.get_image_path("polystyrene_sep_label.vip"))

result_output_label_image, result_output_map_image = imagedev.supervised_texture_classification_2d(polystyrene, polystyrene_sep_label, 4, [2, 4], 4, 4, imagedev.SupervisedTextureClassification2d.CUBE, 4, 3, imagedev.SupervisedTextureClassification2d.CLOSEST_DISTANCE)

print("output_label_image:", str(result_output_label_image))
print("output_map_image:", str(result_output_map_image))

ImageView polystyrene = ViewIO.ReadImage( @"Data/images/polystyrene.tif" );
ImageView polystyrene_sep_label = Data.ReadVipImage( @"Data/images/polystyrene_sep_label.vip" );

Processing.SupervisedTextureClassification2dOutput result = Processing.SupervisedTextureClassification2d( polystyrene, polystyrene_sep_label, 4, new uint[]{2, 4}, 4, 4, SupervisedTextureClassification2d.CoocTextonShape.CUBE, 4, 3, SupervisedTextureClassification2d.OutputMapType.CLOSEST_DISTANCE );

Console.WriteLine( "outputLabelImage:" + result.outputLabelImage.ToString() );
Console.WriteLine( "outputMapImage:" + result.outputMapImage.ToString() );