     Identification and Classification of Color Textures

           Mattias Mende [0000-0002-4458-6539], Thomas Wiener [0000-0002-9912-0162]

           Fraunhofer Institute for Machine Tools and Forming Technology IWU,
                                     Chemnitz, Germany
                               www.iwu.fraunhofer.de
                       mattias.mende@iwu.fraunhofer.de,
                       thomas.wiener@iwu.fraunhofer.de



       Abstract. This article describes how color textures can be reliably detected and
       classified in the production process, independently of external parameters such as
       brightness, object positions (translation), angular positions (rotation), object
       distances (scaling) or curved surfaces (rotation + scaling). The methods described
       here are also suitable for reliably classifying at least 18 color textures even if
       they differ only slightly from one another optically. The online classification of
       color textures is a classic task in the wood, furniture and textile industry. For
       example, unwanted defects or partial soiling on moving webs can be reliably detected
       regardless of fluctuations in brightness and/or shadows during process operation.
       Algorithms have been developed for the teach-in phase: RGB-HSI transformation,
       setting a few segments of e.g. 24x24 pixels on the color textures of each class,
       applying suitable transformations {HSI}, e.g. the 2D FFT, to form characteristic
       2D spectral mountains in these segments, extracting statistical features and
       setting up the individual classifiers. Algorithms have also been developed for
       identification and classification in process operation, with extraction of
       statistical characteristics and methods of robust classification. The
       implementation of the methods, the triggering of the color cameras and the
       processing of the color information, including the output of the results to the
       process control, are done with the data analysis program Xeidana®.

       Keywords: Optical Image Processing, Identification, Classification, Color
       Textures, Process Control, Color Sphere Model.


1      Introduction

The applications of optical image processing cover almost all areas of daily life as well
as production, manufacturing and research. They can be assigned to five essential
fields, see Fig.1.1.




Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons
License Attribution 4.0 International (CC BY 4.0).




                  Fig.1.1. Important fields of application of optical systems

Important fields of application are object recognition and classification, monitoring and
control of visible areas, inspection of object surfaces, recording the (3D) positions
of objects, and measuring their 3D geometries. In this paper, the automatic recognition and
classification of objects based on existing textures is presented. Textures are
characteristic regular or disordered patterns that can be found on the surfaces of
objects, see Fig.1.2.




                   Fig.1.2. Example of a color texture (Vanessa atalanta)


2      Task definition

During the actual classification process, all objects of one type are grouped together in
a uniform class. New classes are created for objects of a new type. An important task
in classification is to separate textures of different classes from each other (selectivity),
while at the same time tolerating textures with small deviations within a class (immunity
to interference) [1].


In particular, the correct result of the classification should not be affected even if typical
deviations occur in the production process, e.g.

    -    The objects change their positions in the camera image, (T-) translation
         invariance
    -    The color textures of individual objects are rotated, (R-) rotation invariance
    -    The sizes of the textures change, (S-) scaling invariance
    -    The ambient brightness changes and thereby the integral brightness of the
         image changes
    -    Partial shadows appear on the textures
    -    There is partial soiling on the colour textures
    -    The textures lie on spherically curved surfaces and are therefore distorted in
         the camera image (RST invariance)


3       Methods and procedures

Color cameras are predominantly used for optical image processing. The method of
classification described here is therefore based on color textures, but it is also well
suited for b/w textures. The actual process of classification is divided into two steps:
(1) the prior teach-in of suitable features of the textures and (2) their recognition and
online classification in the process, see Fig.3.1.




Fig.3.1. Scheme of the 2 phases and individual sub-steps for online classification of color
textures


    Both for teaching the textures and for their classification in process operation, the
textures are divided into individual segments (segmentation). This corresponds to the
model found in nature, with a successive reduction of the number of image channels from
the retina to the following bipolar cells in the visual process.


3.1    Segmentation (tiling)

During the teach-in process, one or more representative segments (tiles) with an edge
length of e.g. 16x16 or 32x32 pixels are taken from the textures of each relevant object
class, see Fig.3.2.
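
The following minimal sketch (Python/NumPy, an illustration and not the Xeidana® implementation; the image and the tile size are assumed) shows how such square segments could be cut out of a texture image:

    import numpy as np

    def extract_tiles(image, tile_size=32, step=None):
        """Cut square segments (tiles) with the given edge length out of an image.
        image: 2D (grey value) or 3D (color) NumPy array; step defaults to
        non-overlapping tiles."""
        step = step or tile_size
        h, w = image.shape[:2]
        tiles = []
        for y in range(0, h - tile_size + 1, step):
            for x in range(0, w - tile_size + 1, step):
                tiles.append(image[y:y + tile_size, x:x + tile_size])
        return np.stack(tiles)

    # Example: ten non-overlapping 32x32 tiles from a synthetic 64x160 RGB texture
    texture = np.random.rand(64, 160, 3)
    print(extract_tiles(texture, tile_size=32).shape)   # (10, 32, 32, 3)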




Fig.3.2. Teaching of characteristics with square segments for the classification of 2
textures


3.2    RGB-HSI transformation

A prior HSI transformation of each RGB color pixel proves to be advantageous. It
forms the basis for obtaining invariant features and, by simple means, provides an
additional invariance against changes in brightness, partial shading etc., Fig.3.3.-3.6.
The HSI model used for this purpose is based on the HSI color sphere model according
to [2]. Compared to other color models, this model offers a spatially vivid representation
of hue (H), color saturation (S) and brightness (I) and at the same time a largely
consistent and unambiguous mapping of the entire set of points of the RGB color
space to the points of the HSI color space.
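
The exact sphere-model transformation of [2] is not reproduced here; as an approximation, the following sketch (an assumption for illustration, not the Xeidana® code) uses the common conical HSI formulae and illustrates the brightness invariance of the hue channel:

    import numpy as np

    def rgb_to_hsi(rgb):
        """Convert an RGB image (float values in [0, 1]) into hue (H), saturation (S)
        and intensity/brightness (I); standard HSI formulae, the color sphere model
        according to [2] may differ in detail."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        i = (r + g + b) / 3.0                                    # brightness
        s = 1.0 - np.min(rgb, axis=-1) / np.maximum(i, 1e-12)    # saturation
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
        h = np.arccos(np.clip(num / den, -1.0, 1.0))             # hue angle in [0, pi]
        h = np.where(b > g, 2.0 * np.pi - h, h) / (2.0 * np.pi)  # hue normalised to [0, 1]
        return np.stack([h, s, i], axis=-1)

    # Halving the exposure (all RGB values scaled by 0.5) leaves the hue channel unchanged
    rgb = np.array([[[0.8, 0.3, 0.1], [0.2, 0.6, 0.9]]])         # a tiny 1x2 RGB test image
    print(np.allclose(rgb_to_hsi(rgb)[..., 0], rgb_to_hsi(0.5 * rgb)[..., 0]))   # True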




         Fig.3.3. RGB-HSI transformation with the color sphere model according to [2]




Fig.3.4. 32-bit RGB-HSI transformation into 8-bit hue (H), color saturation (S) and brightness
(I) images




Fig.3.5. RGB-HSI transformation for hue (H) during laser desoldering of silicon bronze (above:
original, below: result of analysis) [3]




Fig.3.6. Invariance of the hue image (lower half) with respect to brightness variations on real
32-bit RGB color textures (upper half) (exposure time in the left image: 8 ms, right image: 1 ms) [4].


3.3    Preparation of characteristics

Characteristic features are extracted from the tiles of each class for the classification.
The HSI values of the individual segments are first processed, i.e. subjected to a suitable
transformation {HSI}. From this, m characteristics are then generated within each
class. Depending on the number n of segments used, these characteristics are
summarized in m-dimensional point clouds with n points each. From the point clouds,
those characteristics can be extracted for each class which have the best invariance
properties against external parameters such as changes in brightness, scaling, rotation
or distortion [4], see Table 3.1.


         Table 3.1. Invariance properties of statistical characteristics used in 2D FFT




                             O invariant, - non-invariant,
                             * after transformation from Cartesian coordinates to polar
                             coordinates
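
As a plausibility check for the translation invariance of the 2D-FFT magnitude spectrum noted in Table 3.1, the following sketch (an illustrative assumption, not taken from [4]) compares the spectrum of a tile with that of a cyclically shifted copy:

    import numpy as np

    # A periodic test texture on a 32x32 tile
    y, x = np.mgrid[0:32, 0:32]
    tile = np.sin(2 * np.pi * x / 8) + np.cos(2 * np.pi * y / 4)

    # Cyclic translation of the texture within the tile
    shifted = np.roll(tile, shift=(5, 11), axis=(0, 1))

    # A translation only changes the phase; the magnitude spectrum stays the same
    mag = np.abs(np.fft.fft2(tile))
    mag_shifted = np.abs(np.fft.fft2(shifted))
    print(np.allclose(mag, mag_shifted))   # True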

    The transformations {HSI} listed below offer, among other things, the possibility
of extracting suitable and partially invariant characteristics after appropriate
preparation of the segments (a sketch for the 2D FFT follows the list):

   -   Histogram analysis
   -   Fast Fourier Transformation (FFT)
   -   Hough Transformation
   -   Gabor transformation
   -   Local Binary Pattern (LBP) approach
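
For the 2D FFT, such a feature extraction could look like the following sketch; the concrete statistical features are assumptions for illustration and not the feature set used in Xeidana®:

    import numpy as np

    def fft_features(tile):
        """Statistical features of the 2D-FFT magnitude spectrum (the "2D spectral
        mountains") of one segment, e.g. of the hue channel."""
        spectrum = np.abs(np.fft.fft2(tile))
        spectrum = np.fft.fftshift(spectrum)                             # centre the low frequencies
        spectrum[spectrum.shape[0] // 2, spectrum.shape[1] // 2] = 0.0   # remove the DC peak
        p = spectrum / spectrum.sum()                                    # normalised spectral distribution
        return np.array([
            spectrum.mean(),                                     # mean height of the spectral mountains
            spectrum.std(),                                      # spread of the spectral mountains
            (p ** 2).sum(),                                      # energy concentration
            -(p[p > 0] * np.log(p[p > 0])).sum(),                # spectral entropy
        ])

    tile = np.random.rand(32, 32)                                # one 32x32 segment (e.g. hue values)
    print(fft_features(tile))                                    # one 4-dimensional feature vector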

    With methods such as the Local Binary Pattern (LBP) approach [5], characteristics
with invariant properties can be extracted without prior transformation {HSI}.
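
A simplified version of the basic LBP operator of [5] is sketched below (8 fixed neighbours, no rotation-invariant mapping); it is meant as an illustration, not as the implementation used here:

    import numpy as np

    def lbp_histogram(channel):
        """Basic 8-neighbour Local Binary Pattern histogram of one grey-value or hue tile."""
        c = channel[1:-1, 1:-1]                       # centre pixels (1-pixel border skipped)
        # the 8 neighbours, enumerated clockwise starting at the top-left pixel
        neighbours = [channel[0:-2, 0:-2], channel[0:-2, 1:-1], channel[0:-2, 2:],
                      channel[1:-1, 2:],   channel[2:,   2:],   channel[2:,   1:-1],
                      channel[2:,   0:-2], channel[1:-1, 0:-2]]
        codes = np.zeros(c.shape, dtype=int)
        for bit, n in enumerate(neighbours):
            codes = codes + ((n >= c).astype(int) << bit)   # one bit per neighbour comparison
        hist, _ = np.histogram(codes, bins=256, range=(0, 256))
        return hist / hist.sum()                      # normalised 256-bin LBP histogram

    tile = np.random.rand(32, 32)                     # e.g. the intensity channel of one segment
    print(lbp_histogram(tile).shape)                  # (256,)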
    In order to achieve better immunity to interference (shadows, contamination) and at
the same time a high separation efficiency for similar textures belonging to other classes,
methods of robust classification are used. Modern classification methods use numerical
compensation methods, e.g. the method of least squares, as well as methods of fuzzy
classification based on probabilistic models, see Fig.3.7. For their description, the
minimum sympathy difference (shown in green) is suitable, for example; it is a measure of
the minimum distance between two texture classes in the n-dimensional feature space. The
sympathy difference then describes the affiliation to a texture class.




Fig.3.7. Display of the point cloud of a new texture (New Object) for robust classification into
one of n = 2 texture classes with m = 3 used characteristics

    In [6], the number of texture classes corresponds to the number of m-dimensional
point clouds. Their positions, shapes and characteristics are important parameters for
each class. As the result of the classification, the class with the highest sympathy is
used, i.e. the class for which the sum of the distance values of the feature vectors is
minimal. However, if this sympathy is smaller than the minimum sympathy SMS, or if the
difference to the next smaller sympathy is smaller than the minimum sympathy difference
SMD, the point cloud is rejected as unclassifiable.
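
This decision rule with the minimum sympathy SMS and the minimum sympathy difference SMD can be written down as a short sketch; how the sympathy values themselves are computed (here simply assumed to be given per class) follows the fuzzy classification of [6] and is not reproduced:

    import numpy as np

    def classify_with_rejection(sympathies, s_ms=0.5, s_md=0.1):
        """Choose the class with the highest sympathy, but reject the segment if this
        sympathy is below the minimum sympathy SMS or if its distance to the second
        highest sympathy is below the minimum sympathy difference SMD."""
        order = np.argsort(sympathies)[::-1]          # classes sorted by descending sympathy
        best, second = sympathies[order[0]], sympathies[order[1]]
        if best < s_ms or (best - second) < s_md:
            return None                               # rejected as unclassifiable
        return int(order[0])                          # index of the winning texture class

    print(classify_with_rejection(np.array([0.82, 0.35, 0.40])))   # 0
    print(classify_with_rejection(np.array([0.48, 0.35, 0.40])))   # None (below SMS)
    print(classify_with_rejection(np.array([0.82, 0.80, 0.40])))   # None (below SMD)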
    Table 3.2 lists different methods which are in principle suitable for the robust
classification of color textures. These methods were implemented in the existing universal
data analysis program Xeidana® [10].

                    Table 3.2. Overview of robust classification methods
                             (+ advantages, - disadvantages)

   Support Vector Machine [7, 8]
       + Very good with a large number of features
       + Complex classes can be taught
       - Possibly complex search for parameters of the kernel function

   Multilayer Perceptron [9]
       + Complex classes can be taught
       - Search for optimal network topology and parameters

   Multiple linear discriminant analysis [9]
       + Parameter-free method
       - Linear classifier

   Radial basis function network [9]
       + Complex classes can be taught; better convergence of the teach-in process
         than with the multilayer perceptron
       - Very extensive parameter set


3.4    Feature Extraction & Classifier Creation

The program Xeidana® (eXtensible Environment for Industrial Data ANAlysis) is used
for the feature extraction and the creation of the classifier. Xeidana® is written in C#
and comprises an extensible development environment for solving data analysis tasks
in the industrial sector, Fig.3.8.




                       Fig.3.8. Xeidana® user interface with modular functionality


    Xeidana® is also used for the classification and visualization of large amounts of
data. With the help of its extensive repertoire of algorithms and procedures, color
textures, for example, can be classified. The software has a modular structure and can
be extended with new functionalities using a variety of different libraries.
    Fig.3.9. shows the user interface with which color textures are both taught and
classified. The left window shows 18 different color textures which are to be taught.
The right part of the user interface contains the functionality for selecting the RGB and
HSI color channels, setting segments of variable size, e.g. with 16x16 or 32x32 pixels,
and classifying the textures.
    The segments are placed on the texture with the mouse, see the red tile on the upper
texture. The extraction of the characteristics can be done with histogram analysis, Fast
Fourier Transformation (FFT) or Gabor transformation.




         Fig.3.9. Xeidana® user interface for teaching and classifying color textures


    In Fig.3.10., the n-dimensional feature vectors of the individual classes are shown.
By pairwise combination of two characteristics in each case, suitable characteristics are
compared in the plan view (left picture) and selected by assessing their separation
performance (features of the LBP procedure).
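
Such a pairwise assessment of the separation performance could, for example, be based on a simple Fisher-type criterion as in the following sketch (an illustrative assumption, not the criterion used in Xeidana®):

    import numpy as np
    from itertools import combinations

    def separation(a, b):
        """Fisher-like separation of two 1D feature samples: squared distance of the
        class means relative to the class scatter (larger = better separable)."""
        return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

    def rank_feature_pairs(features_by_class):
        """features_by_class: one (n_i, m) feature array per texture class.
        Returns feature index pairs ranked by their worst-case class separation."""
        m = features_by_class[0].shape[1]
        ranking = []
        for i, j in combinations(range(m), 2):
            score = min(separation(a[:, k], b[:, k])
                        for a, b in combinations(features_by_class, 2)
                        for k in (i, j))
            ranking.append(((i, j), score))
        return sorted(ranking, key=lambda item: item[1], reverse=True)

    # Two classes with 20 segments each and m = 4 features
    rng = np.random.default_rng(0)
    class_a = rng.normal(0.0, 1.0, size=(20, 4))
    class_b = rng.normal([3.0, 0.1, 2.0, 0.0], 1.0, size=(20, 4))
    print(rank_feature_pairs([class_a, class_b])[0])   # best separating pair, here (0, 2)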




Fig.3.10. Xeidana® user interface for displaying the n-dimensional feature vectors of the
individual color texture classes

    Fig.3.11. shows the module for Robust Classification, which is used for setting the
parameters of the classifier and for automatic parameter optimization, using the
Support Vector Machine as an example.
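
With scikit-learn, such an automatic parameter search for the Support Vector Machine could be sketched as follows (a generic example on synthetic feature vectors; the Xeidana® module and its parameter ranges are not reproduced here):

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    # Synthetic feature vectors for two texture classes
    # (in practice: statistical features of the segments of each taught class)
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, size=(40, 6)), rng.normal(2.0, 1.0, size=(40, 6))])
    y = np.array([0] * 40 + [1] * 40)

    # Automatic search for the kernel-function parameters C and gamma
    search = GridSearchCV(SVC(kernel="rbf"),
                          param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                          cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)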




                Fig.3.11. Xeidana® user interface for Robust Classification




4       Results & industrial application examples

   In order to create environmental conditions for the classification of color textures
similar to those in process operation, a special test stand with a conveyor belt (width
1 m, length 4 m) was set up at Fraunhofer IWU. The asynchronous triggering of the
individual cameras, the processing of the image information and the transfer of the
classification results are carried out with the data analysis program Xeidana®, see Fig.4.1.




Fig.4.1. Test rig for online classification of color textures on moving objects

    To demonstrate the performance of the Xeidana® program, a demonstrator was
created, see Fig.4.2., which consists of a camera and a turntable with 10 different
types of wood. The grains and colors of the 10 types of wood are different, but in
some cases they differ only slightly from each other. The aim was to achieve a correct
classification over their entire surface despite the spherically curved surfaces and the
resulting variations of the color textures in distance and angular position. The result of
the classification is shown in Fig.4.3.




                          Fig.4.2. Demonstrator with 10 wood species




                   Fig.4.3. Result of the classification (18 color textures)


5        Summary
With the methods for classification described here, color textures can be reliably
detected and classified in the production process independently of external parameters
such as brightness, object positions (translation and rotation), object distances
(scaling) or curved surfaces (rotation + scaling).
    The methods described here are also suitable for reliably classifying at least 18
color textures even if they optically differ only slightly from each other.
    The online classification of color textures is a classic task in the wood, furniture
and textile industry. For example, unwanted defects or partial soiling on moving webs
can be reliably detected regardless of fluctuations in brightness and/or shadows during
process operation.
    The implementation of the methods, the triggering of the color cameras and the
processing of the color information, including the output of the results to the process
control, are done with the data analysis program Xeidana®. The following algorithm has
been developed for teaching and classifying HSI color textures (a schematic code sketch
follows the listing):
    a)   Teach-in phase of the objects
         1. RGB-HSI transformation of the color textures
         2. setting a few segments of e.g. 24x24 pixels on the color textures of each class
         3. applying suitable transformations {HSI}, e.g. the 2D FFT, to form
            characteristic 2D spectral mountains in these segments
         4. extraction of statistical features from the 2D spectral mountains
         5. setting up the individual classifiers

    b)   Identification & classification in process operation
         1. RGB-HSI transformation of all pixel values of the image
         2. segmentation of previously defined areas of the image (ROI)
         3. suitable transformations {HSI} of all segments
         4. extraction of statistical characteristics
         5. robust classification
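
Put together, the two phases could be arranged as in the following skeleton; it reuses the helper functions rgb_to_hsi, extract_tiles and fft_features from the earlier sketches and a Support Vector Machine as classifier, all of which are assumptions for illustration and not the Xeidana® implementation:

    import numpy as np
    from sklearn.svm import SVC

    def teach_in(labelled_textures, tile_size=24):
        """a) Teach-in: RGB-HSI transformation, segmentation, 2D-FFT features per
        segment and training of the classifier (helpers from the earlier sketches)."""
        X, y = [], []
        for label, rgb in labelled_textures:
            hsi = rgb_to_hsi(rgb)                                # step 1: RGB-HSI transformation
            for tile in extract_tiles(hsi[..., 0], tile_size):   # step 2: a few segments (hue channel)
                X.append(fft_features(tile))                     # steps 3 + 4: 2D FFT and statistical features
                y.append(label)
        return SVC(kernel="rbf").fit(np.array(X), np.array(y))   # step 5: set up the classifier

    def classify(classifier, rgb_image, roi, tile_size=24):
        """b) Process operation: RGB-HSI transformation, segmentation of the ROI,
        feature extraction and classification of every segment."""
        hsi = rgb_to_hsi(rgb_image)                              # step 1
        y0, y1, x0, x1 = roi                                     # step 2: previously defined image area
        tiles = extract_tiles(hsi[y0:y1, x0:x1, 0], tile_size)
        feats = np.array([fft_features(t) for t in tiles])       # steps 3 + 4
        return classifier.predict(feats)                         # step 5 (without the rejection logic)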


References
1.  Beyerer, H., Heintz, R.: Licht und Farbe. In: Fraunhofer Allianz Vision Seminar
    Inspektion und Charakterisierung von Oberflächen mit Bildverarbeitung, Erlangen (Dec.
    2007).
2.  Matile, H.: Die Farbenlehre Philipp Otto Runges, 2nd edn. München (1979).
3.  https://www.iwu.fraunhofer.de, last accessed 2020.
4. Mende, M., Wiener, T.: Online-Klassifikation von Farbtexturen. In: Fraunhofer Vision
    Leitfaden 16 "Inspektion und Charakterisierung von Oberflächen mit Bildverarbeitung",
    pp.66. (2016).
5.  Mäenpää, T.: The Local Binary Pattern Approach to Texture Analysis – Extensions and
    Applications. Infotech Oulu and Department of Electrical and Information Engineering,
    University of Oulu, P.O. Box 4500, FIN-90014 University of Oulu, Oulu, Finland (2003).
6.  Priber, U., Kretzschmar, W.: Inspection and Supervision by Means of Hierarchical Fuzzy
    Classifiers. Fuzzy Sets and Systems, Vol. 85/1, North-Holland (1997).
7.  Vapnik, V., Chervonenkis, A.: Theory of Pattern Recognition (1974) (German: Wapnik und
    Tschervonenkis: Theorie der Mustererkennung, 1979).
8.  Schölkopf, B., Smola, A.: Learning with Kernels. MIT Press (2001).
9.  Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification. 2nd edn. Wiley-Interscience,
    ISBN 0471056693 (2000).