ImageDev

SupervisedTextureClassification3d

Performs a segmentation of a three-dimensional grayscale image, based on a texture model automatically built from a training input image.


This algorithm automatically chains the three steps of the texture classification workflow: model creation, training, and model application.
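
As a quick illustration, a single call runs the whole chain. The sketch below is a minimal example based on the Python prototype and the foam data set used in the examples at the end of this page; it assumes the imagedev and imagedev_data modules shown in those examples are importable as such, and keeps every optional parameter at its documented default.

import imagedev
import imagedev_data

# One call builds the texture model from the training labels, trains it, and
# applies it to the grayscale input; all other parameters keep the default
# values listed in the Python prototype below.
foam = imagedev.read_vip_image(imagedev_data.get_image_path("foam.vip"))
foam_labels = imagedev.read_vip_image(imagedev_data.get_image_path("foam_sep_label.vip"))
label_image, map_image = imagedev.supervised_texture_classification_3d(foam, foam_labels)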


Function Syntax

C++

This function returns a SupervisedTextureClassification3dOutput structure containing the outputLabelImage and outputMapImage output parameters.
// Output structure.
struct SupervisedTextureClassification3dOutput
{
    std::shared_ptr< iolink::ImageView > outputLabelImage;
    std::shared_ptr< iolink::ImageView > outputMapImage;
};

// Function prototype.
SupervisedTextureClassification3dOutput
supervisedTextureClassification3d( std::shared_ptr< iolink::ImageView > inputImage,
                                   std::shared_ptr< iolink::ImageView > inputTrainingImage,
                                   int32_t featureGroup,
                                   iolink::Vector2u32 radiusRange,
                                   uint32_t radiusStep,
                                   uint32_t coocRadius,
                                   SupervisedTextureClassification3d::CoocTextonShape coocTextonShape,
                                   uint32_t coocTextonSize,
                                   double minSeparationPercentage,
                                   SupervisedTextureClassification3d::OutputMapType outputMapType,
                                   std::shared_ptr< iolink::ImageView > outputLabelImage = NULL,
                                   std::shared_ptr< iolink::ImageView > outputMapImage = NULL );

Python

This function returns a tuple containing the output_label_image and output_map_image output parameters.
# Function prototype.
supervised_texture_classification_3d( input_image,
                                      input_training_image,
                                      feature_group = 28,
                                      radius_range = [2, 8],
                                      radius_step = 6,
                                      cooc_radius = 6,
                                      cooc_texton_shape = SupervisedTextureClassification3d.CoocTextonShape.SPHERE,
                                      cooc_texton_size = 2,
                                      min_separation_percentage = 3,
                                      output_map_type = SupervisedTextureClassification3d.OutputMapType.CLOSEST_DISTANCE,
                                      output_label_image = None,
                                      output_map_image = None )

C#

This function returns a SupervisedTextureClassification3dOutput structure containing the outputLabelImage and outputMapImage output parameters.
/// Output structure of the SupervisedTextureClassification3d function.
public struct SupervisedTextureClassification3dOutput
{
    public IOLink.ImageView outputLabelImage;
    public IOLink.ImageView outputMapImage;
};

// Function prototype.
public static SupervisedTextureClassification3dOutput
SupervisedTextureClassification3d( IOLink.ImageView inputImage,
                                   IOLink.ImageView inputTrainingImage,
                                   Int32 featureGroup = 28,
                                   uint[] radiusRange = null,
                                   UInt32 radiusStep = 6,
                                   UInt32 coocRadius = 6,
                                   SupervisedTextureClassification3d.CoocTextonShape coocTextonShape = ImageDev.SupervisedTextureClassification3d.CoocTextonShape.SPHERE,
                                   UInt32 coocTextonSize = 2,
                                   double minSeparationPercentage = 3,
                                   SupervisedTextureClassification3d.OutputMapType outputMapType = ImageDev.SupervisedTextureClassification3d.OutputMapType.CLOSEST_DISTANCE,
                                   IOLink.ImageView outputLabelImage = null,
                                   IOLink.ImageView outputMapImage = null );

Class Syntax

Parameters

Class Name: SupervisedTextureClassification3d

Each parameter is listed below with its direction (input or output), description, type, supported values, and default value.

inputImage (input)
The input grayscale image to segment.
Type: Image. Supported values: grayscale. Default value: nullptr.

inputTrainingImage (input)
The input label training image (16 or 32 bits) where each label represents a class sample for the training step.
Type: Image. Supported values: label. Default value: nullptr.

featureGroup (input)
The groups of textural features to compute. This list defines all the textural attributes proposed for performing the classification. Several groups can be selected by combining (OR-ing) their associated values; a short sketch follows this parameter list.
DIRECTIONAL_COOCCURRENCE: features based on co-occurrence matrices; one feature is extracted from each co-occurrence vector. Associated value = 1.
ROTATION_INVARIANT_COOCCURRENCE: features based on co-occurrence matrices; three statistical features are extracted from all vectors. Associated value = 2.
FIRST_ORDER_STATISTICS: features based on first-order statistics that are not computed using a histogram. Associated value = 4.
HISTOGRAM_STATISTICS: features based on histogram statistics, including histogram quantiles. Associated value = 8.
INTENSITY: feature based on the intensity value of the input image. Associated value = 16.
Type: MultipleChoice. Default value: FIRST_ORDER_STATISTICS | HISTOGRAM_STATISTICS | INTENSITY.

radiusRange (input)
The minimum and maximum radius, in voxels, of the circular neighborhoods used for computing textural features.
Type: Vector2u32. Supported values: >= 1. Default value: {2, 8}.

radiusStep (input)
The step, in voxels, used to define the set of radii between the minimum and the maximum. The maximum radius is systematically added to the radius list.
Type: UInt32. Supported values: >= 1. Default value: 6.

coocRadius (input)
The radius, in voxels, of the circular neighborhood used by the co-occurrence features. This parameter is ignored if none of the co-occurrence feature groups is selected.
Type: UInt32. Supported values: >= 1. Default value: 6.

coocTextonShape (input)
The shape of the co-occurrence texton (the pattern defined by the set of co-occurrence vectors). This parameter is ignored if none of the co-occurrence feature groups is selected.
The texton shape represents the distribution of points around the target point for computing the co-occurrence matrices. Together with the texton size, it defines the set of vectors used for computing the co-occurrence features. For instance, in 2D, a CUBE shape of size 3 defines the co-occurrence vectors (-3, -3), (0, -3), (3, -3), (-3, 0), (3, 0), (-3, 3), (0, 3), and (3, 3); an illustrative sketch that reproduces these offsets follows this parameter list.
CUBE: the set of all points associated with the corners, edge centers, and face centers of a cube whose half side is defined by the coocTextonSize parameter.
SPHERE: the set of all points located at the same Euclidean distance, defined by the coocTextonSize parameter, from the center. This mode is recommended when a repetitive texture is mono-scale.
BALL: the set of all points located at a distance less than or equal to the coocTextonSize parameter from the center. This mode can be useful for classifying a multi-scale repetitive texture, but may be very time consuming.
Type: Enumeration. Default value: SPHERE.

coocTextonSize (input)
The size, in voxels, of the texton shape for co-occurrence features. This parameter is ignored if none of the co-occurrence feature groups is selected.
This size is constrained by the radius parameter, and the constraint depends on the texton shape. For instance, with a cube texton, the texton size cannot exceed the rounded value of $radius \times \sqrt{3}$.
Type: UInt32. Supported values: >= 1. Default value: 2.

minSeparationPercentage (input)
This parameter controls the rejection criterion of the feature selection (FS) algorithm.
A measure is rejected if its contribution does not sufficiently increase the separation power of the classification model. This ratio indicates the minimal relative growth required to keep a measure. More information is available in the Feature Selection section. This value must be greater than or equal to 0.0.
Type: Float64. Supported values: [0, 100]. Default value: 3.

outputMapType (input)
The type of uncertainty map image to compute.
CLOSEST_DISTANCE: the uncertainty map represents the Mahalanobis distance to the class selected by the classification. The closer to 0 this metric is, the more confident the classification.
RELATIVE_DISTANCE: the uncertainty map represents the Mahalanobis distance to the selected class ($d_1$), weighted by the gap with the second closest distance ($d_2$). The smaller this metric is, the more confident and less ambiguous the classification. $$\mathit{MapValue} = \log\left(\frac{d_1}{d_2 - d_1}\right)$$
CLASS_DISTANCE: the uncertainty map is a multichannel image where each channel represents the distance to the corresponding class.
NONE: no uncertainty map is computed.
Type: Enumeration. Default value: CLOSEST_DISTANCE.

outputLabelImage (output)
The output label image representing the texture classification result. Its dimensions and type are forced to the same values as those of the training input.
Type: Image. Default value: nullptr.

outputMapImage (output)
The output map image. Its dimensions are forced to the same values as those of the training image. In CLASS_DISTANCE mode, its number of channels is equal to the number of classes defined in the training image. Its data type is forced to floating point.
Type: Image. Default value: nullptr.
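
The featureGroup value is simply the combination of the associated values listed above; the default, FIRST_ORDER_STATISTICS | HISTOGRAM_STATISTICS | INTENSITY, corresponds to 4 + 8 + 16 = 28, which is the default shown in the Python and C# prototypes. The short Python sketch below only uses the values documented in the table above and is not part of the library API.

# Associated values of the feature groups, as documented above.
DIRECTIONAL_COOCCURRENCE = 1
ROTATION_INVARIANT_COOCCURRENCE = 2
FIRST_ORDER_STATISTICS = 4
HISTOGRAM_STATISTICS = 8
INTENSITY = 16

# Default selection: first-order statistics, histogram statistics, and intensity.
feature_group = FIRST_ORDER_STATISTICS | HISTOGRAM_STATISTICS | INTENSITY
assert feature_group == 28

# Selection used in the examples below: first-order statistics only.
feature_group = FIRST_ORDER_STATISTICS  # == 4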

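The next sketch is purely illustrative and does not reflect the library's internal implementation: it enumerates, in 2D, the co-occurrence offsets of a CUBE texton of a given size as defined in the coocTextonShape description, and reproduces the eight vectors listed there for size 3.

# Illustrative only: 2D CUBE texton offsets for a given half-side size.
def cube_texton_offsets_2d(size):
    offsets = []
    for dy in (-size, 0, size):
        for dx in (-size, 0, size):
            if (dx, dy) != (0, 0):  # the center point is not a co-occurrence vector
                offsets.append((dx, dy))
    return offsets

print(cube_texton_offsets_2d(3))
# [(-3, -3), (0, -3), (3, -3), (-3, 0), (3, 0), (-3, 3), (0, 3), (3, 3)]
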
Object Examples

C++

auto foam = readVipImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "foam.vip" );
auto foam_sep_label = readVipImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "foam_sep_label.vip" );

SupervisedTextureClassification3d supervisedTextureClassification3dAlgo;
supervisedTextureClassification3dAlgo.setInputImage( foam );
supervisedTextureClassification3dAlgo.setInputTrainingImage( foam_sep_label );
supervisedTextureClassification3dAlgo.setFeatureGroup( 4 );
supervisedTextureClassification3dAlgo.setRadiusRange( {2, 4} );
supervisedTextureClassification3dAlgo.setRadiusStep( 4 );
supervisedTextureClassification3dAlgo.setCoocRadius( 4 );
supervisedTextureClassification3dAlgo.setCoocTextonShape( SupervisedTextureClassification3d::CoocTextonShape::CUBE );
supervisedTextureClassification3dAlgo.setCoocTextonSize( 2 );
supervisedTextureClassification3dAlgo.setMinSeparationPercentage( 3 );
supervisedTextureClassification3dAlgo.setOutputMapType( SupervisedTextureClassification3d::OutputMapType::CLOSEST_DISTANCE );
supervisedTextureClassification3dAlgo.execute();

std::cout << "outputLabelImage:" << supervisedTextureClassification3dAlgo.outputLabelImage()->toString();
std::cout << "outputMapImage:" << supervisedTextureClassification3dAlgo.outputMapImage()->toString();

Python

foam = imagedev.read_vip_image(imagedev_data.get_image_path("foam.vip"))
foam_sep_label = imagedev.read_vip_image(imagedev_data.get_image_path("foam_sep_label.vip"))

supervised_texture_classification_3d_algo = imagedev.SupervisedTextureClassification3d()
supervised_texture_classification_3d_algo.input_image = foam
supervised_texture_classification_3d_algo.input_training_image = foam_sep_label
supervised_texture_classification_3d_algo.feature_group = 4
supervised_texture_classification_3d_algo.radius_range = [2, 4]
supervised_texture_classification_3d_algo.radius_step = 4
supervised_texture_classification_3d_algo.cooc_radius = 4
supervised_texture_classification_3d_algo.cooc_texton_shape = imagedev.SupervisedTextureClassification3d.CUBE
supervised_texture_classification_3d_algo.cooc_texton_size = 2
supervised_texture_classification_3d_algo.min_separation_percentage = 3
supervised_texture_classification_3d_algo.output_map_type = imagedev.SupervisedTextureClassification3d.CLOSEST_DISTANCE
supervised_texture_classification_3d_algo.execute()

print( "output_label_image:", str( supervised_texture_classification_3d_algo.output_label_image ) )
print( "output_map_image:", str( supervised_texture_classification_3d_algo.output_map_image ) )

C#

ImageView foam = Data.ReadVipImage( @"Data/images/foam.vip" );
ImageView foam_sep_label = Data.ReadVipImage( @"Data/images/foam_sep_label.vip" );

SupervisedTextureClassification3d supervisedTextureClassification3dAlgo = new SupervisedTextureClassification3d
{
    inputImage = foam,
    inputTrainingImage = foam_sep_label,
    featureGroup = 4,
    radiusRange = new uint[]{2, 4},
    radiusStep = 4,
    coocRadius = 4,
    coocTextonShape = SupervisedTextureClassification3d.CoocTextonShape.CUBE,
    coocTextonSize = 2,
    minSeparationPercentage = 3,
    outputMapType = SupervisedTextureClassification3d.OutputMapType.CLOSEST_DISTANCE
};
supervisedTextureClassification3dAlgo.Execute();

Console.WriteLine( "outputLabelImage:" + supervisedTextureClassification3dAlgo.outputLabelImage.ToString() );
Console.WriteLine( "outputMapImage:" + supervisedTextureClassification3dAlgo.outputMapImage.ToString() );

Function Examples

C++

auto foam = readVipImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "foam.vip" );
auto foam_sep_label = readVipImage( std::string( IMAGEDEVDATA_IMAGES_FOLDER ) + "foam_sep_label.vip" );

auto result = supervisedTextureClassification3d( foam, foam_sep_label, 4, {2, 4}, 4, 4, SupervisedTextureClassification3d::CoocTextonShape::CUBE, 2, 3, SupervisedTextureClassification3d::OutputMapType::CLOSEST_DISTANCE );

std::cout << "outputLabelImage:" << result.outputLabelImage->toString();
std::cout << "outputMapImage:" << result.outputMapImage->toString();

Python

foam = imagedev.read_vip_image(imagedev_data.get_image_path("foam.vip"))
foam_sep_label = imagedev.read_vip_image(imagedev_data.get_image_path("foam_sep_label.vip"))

result_output_label_image, result_output_map_image = imagedev.supervised_texture_classification_3d( foam, foam_sep_label, 4, [2, 4], 4, 4, imagedev.SupervisedTextureClassification3d.CUBE, 2, 3, imagedev.SupervisedTextureClassification3d.CLOSEST_DISTANCE )

print( "output_label_image:", str( result_output_label_image ) )
print( "output_map_image:", str( result_output_map_image ) )

C#

ImageView foam = Data.ReadVipImage( @"Data/images/foam.vip" );
ImageView foam_sep_label = Data.ReadVipImage( @"Data/images/foam_sep_label.vip" );

Processing.SupervisedTextureClassification3dOutput result = Processing.SupervisedTextureClassification3d( foam, foam_sep_label, 4, new uint[]{2, 4}, 4, 4, SupervisedTextureClassification3d.CoocTextonShape.CUBE, 2, 3, SupervisedTextureClassification3d.OutputMapType.CLOSEST_DISTANCE );

Console.WriteLine( "outputLabelImage:" + result.outputLabelImage.ToString() );
Console.WriteLine( "outputMapImage:" + result.outputMapImage.ToString() );