<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title/>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Color Characterization of Displays using Neural Networks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ujjayanta Bhaumik</string-name>
          <email>ujjayanta.bhaumik@kuleuven.be</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rik Marco Spieringhs</string-name>
          <email>rik.spieringhs@kuleuven.be</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kevin A. G. Smet</string-name>
          <email>kevin.smet@kuleuven.be</email>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0003</lpage>
      <abstract>
<p>When studying human color perception using displays, accurate and reproducible presentation of color stimuli is paramount, which requires color characterization of the display. Often simple (non-linear) models, such as the Gain-Offset-Gamma (GOG) model, or low-resolution look-up tables (LUT), both based on a limited number of optical measurements, work well enough. However, some displays show a much more complex relationship between RGB input and color output, requiring more complex models or high-resolution LUTs based on a large number of measurements. In this paper, as the first step in a larger study, the feasibility and performance of neural networks (NN) for color characterization of a simulation display following a simple GOG model are explored and compared to a LUT-based method. Statistical analysis showed that the NN-method performed significantly better than the LUT model in terms of predicting the required RGB device input values that generate a set of target (device output) XYZ tristimulus values. In fact, to achieve the same accuracy, the number of training points can be substantially reduced for the NN-method compared to the LUT-method. Furthermore, the neural network is also more than 20 times faster than the look-up table in performing display characterization.</p>
      </abstract>
      <kwd-group>
<kwd>Display characterization</kwd>
        <kwd>Neural Network</kwd>
        <kwd>Look-up table</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Color characterization is the process of converting a device-dependent color space to a
device-independent space such as XYZ or LMS. The need for characterization arises from the
primaries that a device uses to produce colors. As the primaries differ between devices,
these intrinsic differences give rise to device-dependent color spaces [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In different cathode ray
tube (CRT) monitors, for instance, the red, green, and blue phosphors are physically different,
resulting in minute differences in the color they produce for the same input value. So, if
a person sends a particular input RGB color, for instance (200, 0, 200), to two different
devices, the output on the display can differ depending on the primaries of each device.
A device-independent color space, on the other hand, would always produce the
same color output for a particular input. For a color-characterized device, one knows the
relation between the device-dependent color space and the device-independent color space.
†These authors contributed equally.
      </p>
      <p>The problem of characterization can be treated as finding the relation between two sets of
points in R³. A function f is to be determined from XYZ color space to RGB color space so that
one can determine which RGB values should be sent to the headset to display the correct XYZ
output. For instance, to achieve a particular output (X = 19.2, Y = 21, Z = 73.2), the fitted
function would predict (r, g, b) = f(19.2, 21, 73.2). This triplet (r, g, b) can then be
sent to the device to obtain the output (X, Y, Z).</p>
      <p>
        This function can be determined in a variety of ways, for instance, using a Gain-Offset-Gamma
(GOG) model or look-up tables [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. This paper presents a novel approach: using simulated
datasets of different sizes, it thoroughly compares multilayer-perceptron-based
neural networks with a varying number of hidden layers against traditional look-up tables
for determining the color characterization function. Multilayer perceptrons have been used for solving
problems like pattern recognition and interpolation and were an improvement over the simpler
predecessor, the perceptron, which is suitable only for linearly separable problems [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. They provide the necessary
complexity for predicting more difficult non-linear functions. The next sections describe the
characterization techniques relevant to this work in more detail: the GOG model, look-up tables,
and neural networks.
      </p>
      <sec id="sec-1-1">
        <title>1.1. Gain-Offset-Gamma model</title>
        <p>
          The Gain-Offset-Gamma (GOG) model is one of several ways of characterizing displays.
For an overview of other methods, one can refer to Brainard et al. [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. It consists of two stages:
the first step is a non-linear transformation to convert digital RGB values to linear RGB values,
and the second step converts the linear RGB values to tristimulus XYZ values using a matrix
multiplication [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ].
        </p>
        <p>
          The first stage, which is also a forward transformation from digital to linearized RGB values,
is represented by a simple gamma transform [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Let (r, g, b) ∈ [0, 1] be the digital pixel values
to be sent to the display. Then the linearized values are given by:
R = (k_g,R · r + k_o,R)^γ_R (1)
G = (k_g,G · g + k_o,G)^γ_G (2)
B = (k_g,B · b + k_o,B)^γ_B (3)
k_g,C + k_o,C = 1, C ∈ {R, G, B} (4)
Here, 0 ≤ R, G, B ≤ 1, k_g denotes the gain, and k_o the offset of each channel.
        </p>
        <p>After linearization, it is now possible to represent the device-independent (X, Y, Z) tristimulus
values as a linear combination of the device-dependent linearized (R, G, B) values using a 3 × 3
matrix multiplication:
⎡X⎤       ⎡R⎤
⎢Y⎥ = T · ⎢G⎥
⎣Z⎦       ⎣B⎦</p>
        <p>where T is the transformation matrix from linear RGB to tristimulus XYZ values. If one
measures the tristimulus values for maximum red, maximum green, and maximum blue, the
equation for T is given by equation (5):
    ⎡X_R,max  X_G,max  X_B,max⎤
T = ⎢Y_R,max  Y_G,max  Y_B,max⎥ (5)
    ⎣Z_R,max  Z_G,max  Z_B,max⎦</p>
        <p>It is, however, typically more accurate to determine the 9 matrix coefficients with a
least-squares minimization algorithm using a sufficiently large number of measured (RGB, XYZ)
pairs.</p>
        <p>If we also consider the minimal ambient light, the full equation becomes:
⎡X⎤   ⎡X₀⎤       ⎡R⎤
⎢Y⎥ − ⎢Y₀⎥ = T · ⎢G⎥ (6)
⎣Z⎦   ⎣Z₀⎦       ⎣B⎦</p>
        <p>It represents the fact that even for R = G = B = 0 (no display output), some non-negative
XYZ values (X₀, Y₀, Z₀) might still be measured due to the ambient light. This is also sometimes
referred to as display flare.</p>
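        <p>The two-stage model can be sketched in a few lines of Python. All numeric values below (gains, offsets, gammas, primary matrix, flare) are illustrative sRGB-like assumptions, not the paper's measured values:</p>
        <p>
```python
import numpy as np

# Illustrative GOG parameters (assumptions, not the paper's values).
GAIN = np.array([1.0, 1.0, 1.0])       # k_g per channel
OFFSET = np.array([0.0, 0.0, 0.0])     # k_o per channel (k_g + k_o = 1)
GAMMA = np.array([2.2, 2.2, 2.2])
T = np.array([[41.24, 35.76, 18.05],   # X_R,max  X_G,max  X_B,max
              [21.26, 71.52,  7.22],   # Y_R,max  Y_G,max  Y_B,max
              [ 1.93, 11.92, 95.05]])  # Z_R,max  Z_G,max  Z_B,max
FLARE = np.array([0.5, 0.5, 0.7])      # ambient (X0, Y0, Z0), assumed

def gog_forward(rgb):
    """Digital (r, g, b) in [0, 1] -> tristimulus (X, Y, Z)."""
    rgb = np.asarray(rgb, dtype=float)
    linear = (GAIN * rgb + OFFSET) ** GAMMA   # stage 1: Eqs. (1)-(3)
    return T @ linear + FLARE                 # stage 2: matrix + flare, Eq. (6)
```
</p>
        <p>With these values, a black input (0, 0, 0) returns exactly the flare term, and a maximum-red input returns the first column of T plus flare.</p>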
        <p>The whole characterization process is explained in Figure 1.</p>
      </sec>
      <sec id="sec-1-2">
        <title>1.2. Look-up table (LUT)</title>
        <p>
          A look-up table (LUT) is a dictionary that allows one to search for a value corresponding to a key.
Look-up tables have a long history, going back to the mathematical tables of
Babylonian mathematics [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. In the context of display color characterization, a
look-up table would store (RGB, XYZ) pairs, which can be used to find the XYZ value for a
characterized display corresponding to an RGB input, or vice versa. An instance is shown in
Figure 2.
        </p>
        <p>When converting from device (r, g, b) to (X, Y, Z) using a fully populated LUT (i.e. one
which has a pair for every possible r, g, b combination), the conversion process is reduced to
a simple look-up. However, given that there would be 16,777,216 (= 256³) entries, loading the
LUT into memory, and especially the look-up process, might be quite time-consuming. The
actual measurement time required to generate the LUT would also be huge. Look-up tables
are therefore usually determined by measuring pairs for only certain (r, g, b) combinations,
whereby intermediate values are interpolated. Interpolation would almost certainly also be
required for the inverse conversion, i.e. when going from device-independent (X, Y, Z) to
device-dependent (r, g, b), as it is highly unlikely that the desired (X, Y, Z) is one of the values
in the LUT.</p>
      </sec>
      <sec id="sec-1-3">
        <title>1.3. Neural Networks</title>
        <p>
          This paper presents a neural-network-based method to address the problem of accurate color
characterization for displays. Cheung et al. used neural networks and polynomial functions for
camera characterization and found both methods capable of producing similar results [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ].
Vrhel et al. used neural networks to calibrate color scanners and found that they performed
better than polynomial-based methods [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Usui et al. used neural networks to perform color
transformations for color management systems, handling colors produced
by different media, and showed that a three-layer neural network can be a
powerful tool for this purpose [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
        <p>
          Neural networks were used by Prats-Climent et al. to test the accuracy of LCD displays [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], and
they noted the suitability of neural networks for learning XYZ-RGB relations. Different
groups of techniques are used for colorimetric characterization, such as methods that model
the color physically, assuming independence between channels [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ],[
          <xref ref-type="bibr" rid="ref12">12</xref>
          ],[13].
        </p>
        <p>Statistical techniques like Linear Color Correction (LCC) [14] and polynomial regression [14], and
geometric methods like the 3D thin-plate spline [15], are other methods used for colorimetric
characterization of displays, but methods like LCC can map linearized RGB to XYZ only with high
errors [14]. This paper uses a feedforward network to perform display color characterization. In
a feedforward neural network, information propagates only in the forward direction,
and there are no cycles or loops. An instance of a feedforward network is shown in Figure 3.</p>
        <p>
          A multilayer feedforward network can approximate any continuous function with just one
hidden layer. This is referred to as the universal approximation theorem [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. The mapping of
XYZ color space to RGB color space, or vice versa, can be achieved with such a network. In this
paper, we present a GOG model, a look-up table, and our neural network approach to perform
characterization.
        </p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Setup</title>
      <p>For the purpose of this research, an Alienware m15 laptop with 16 GB RAM, a 64-bit Windows
operating system, and an Intel(R) Core(TM) i7-8750H CPU at 2.20 GHz was used. The simulation
was done in Python using the Luxpy library [16].</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <sec id="sec-3-1">
        <title>3.1. Data Simulation</title>
        <p>A virtual display was simulated using a Gain-Offset-Gamma model to generate (r, g, b)-(X, Y, Z)
pairs. It has been assumed that the display is perfectly characterized by the Gain-Offset-Gamma
model (no measurement errors). Several data sets, composed of (r, g, b)-(X, Y, Z) pairs, were
simulated to train the neural network, generate the look-up table, and test both methods. Both
methods were always trained (for the LUT-based method this means generation of the LUT)
and tested using the same data sets to ensure a fair comparison of their performance.</p>
        <p><bold>3.1.1. Sampling of RGB space</bold></p>
        <p>The first step of data generation for the training set included the sampling of RGB space into
equal voxels while ensuring that the axes of the RGB color space, and particularly the maximum
value (255), were also sampled. The sampled (r, g, b) data points are represented by the set S³:
S = {x : x ∈ [0, 255] &amp; n | x} ∪ {255}, 6 ≤ n ≤ 20 (7)</p>
        <p>The red, green, and blue axes are sampled uniformly because equal sampling is the
baseline, i.e. the simplest approach, and it helps in representing non-additivity better. If any of
the red, green, or blue channels contributes to the non-additivity, equal sampling makes
sure all channels have an equal impact. An example of the sampling with n = 20
is shown in Figure 4. Here, n is called the LUT increment, and the data set composed of the
sampled RGB space is henceforth referred to as the RGB cube.</p>
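        <p>A sketch of this sampling in Python, assuming Eq. (7) means each axis is sampled at multiples of n with the endpoint 255 forced in (the paper's exact point counts differ slightly, so treat this as an illustration only):</p>
        <p>
```python
import numpy as np

def rgb_cube(n):
    """Build the sampled RGB cube for LUT increment n.

    Assumption: each axis takes the multiples of n in [0, 255],
    plus the maximum value 255 itself.
    """
    axis = np.unique(np.append(np.arange(0, 256, n), 255))
    r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.stack([r.ravel(), g.ravel(), b.ravel()], axis=1)

cube = rgb_cube(20)   # 14 axis levels -> 14**3 = 2744 (r, g, b) points
```
</p>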
        <p>
          RGB cubes for neural network training and LUT generation were created for LUT
increments of [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], which corresponded to [79271,
50676, 32890, 24398, 17699, 13927, 10736, 8105, 6916, 5008, 4175, 3455, 3402, 2801, 2268]
data points in the cube.
        </p>
        <p>The (r, g, b) values for the test set were generated using NumPy’s random generator function.
The same test set was used in combination with all training sets and was composed of 10000
points. To complete the training and test sets and generate (r,g,b)-(X,Y,Z) pairs, the (r,g,b)
values were converted into (X,Y,Z) tristimulus values using the GOG model.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Neural Network Training</title>
        <p>The (r, g, b)-(X, Y, Z) training data generated using the simulation algorithm are fed to a multilayer
perceptron (coded with the Perceptron function from scikit-learn's sklearn.linear_model class
[17]) with the following parameters:
• hidden layers: varied (see below)
• ReLU activation function (f(x) = max(0, x))
• maximum iterations: 100000
• adaptive learning rate: Adam optimizer</p>
        <p>
          The number of hidden layers is varied over the set [10, 20, 40, 80, 130, 160, 400, 460, 600] to
test the performance of the neural network. The Adam optimizer uses an adaptive learning rate
for optimizing stochastic objective functions [18].
        </p>
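        <p>The training setup can be sketched with scikit-learn. One assumption to flag: the paper cites the sklearn.linear_model.Perceptron class, but the listed options (hidden layers, ReLU, Adam) correspond to a multilayer-perceptron regressor such as MLPRegressor, which is used here; the toy data and the reduced iteration cap are also illustrative:</p>
        <p>
```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy stand-in for the simulated (X, Y, Z)-(r, g, b) pairs: a gamma +
# matrix forward model generates XYZ from random rgb inputs.
rng = np.random.default_rng(0)
rgb = rng.random((500, 3))
M = np.array([[41.24, 35.76, 18.05],
              [21.26, 71.52,  7.22],
              [ 1.93, 11.92, 95.05]])
xyz = (rgb ** 2.2) @ M.T

# ReLU activation and the Adam optimizer, as listed in the paper;
# max_iter is reduced from the paper's 100000 to keep the sketch fast.
model = MLPRegressor(hidden_layer_sizes=(130,), activation="relu",
                     solver="adam", max_iter=2000, random_state=0)
model.fit(xyz, rgb)                   # learn the inverse XYZ -> rgb mapping
pred = model.predict(xyz[:5])         # predicted rgb for five XYZ inputs
```
</p>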
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Look-up table</title>
        <p>
          The points in the RGB cubes for each LUT increment in [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20] were converted into XYZ points using the simulation algorithm. This resulted in
(r, g, b)-(X, Y, Z) pairs for each LUT increment.
        </p>
        <p>
          For the LUT, predicting the value for an input involved a few steps. First, to find the value
for a new element, one needs to find its nearest neighbors and assign a value based on the
values of those neighbors from the table. This requires querying the LUT for the nearest
neighbors. The number of neighbors considered in the different experiments ranged over the set
[1, 2, 3, 4, 5, 6, 8, 10, 12, 14, 16, 18, 20, 25]. The second step involved using the cKDTree function from
scipy for querying the different LUTs [19]. Then, the distances of the input point from all the
nearest neighbors returned by the algorithm are calculated.
        </p>
        <p>The predicted value for the input is calculated using a weighted average of the values of the
nearest neighbors. The weights are calculated using the inverse squared distances of the query
input point from the nearest neighbors. So, if a key is closer in distance to the input query point,
it contributes more to determining the output value.</p>
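        <p>The prediction step described above can be sketched with scipy's cKDTree; the toy (X, Y, Z) keys and (r, g, b) values below are illustrative assumptions:</p>
        <p>
```python
import numpy as np
from scipy.spatial import cKDTree

# Toy LUT: four (X, Y, Z) keys with their stored (r, g, b) values.
xyz_keys = np.array([[10.0, 10.0, 10.0],
                     [20.0, 20.0, 20.0],
                     [30.0, 30.0, 30.0],
                     [40.0, 40.0, 40.0]])
rgb_vals = np.array([[0.1, 0.1, 0.1],
                     [0.2, 0.2, 0.2],
                     [0.3, 0.3, 0.3],
                     [0.4, 0.4, 0.4]])
tree = cKDTree(xyz_keys)   # k-d tree over the LUT keys

def lut_predict(xyz, k=3, eps=1e-12):
    """Weighted average of the k nearest keys, weights = 1 / distance^2."""
    dist, idx = tree.query(xyz, k=k)
    w = 1.0 / (dist ** 2 + eps)        # closer keys contribute more
    return (w[:, None] * rgb_vals[idx]).sum(axis=0) / w.sum()

pred = lut_predict(np.array([12.0, 12.0, 12.0]))
```
</p>
        <p>A query that lands exactly on a stored key essentially recovers that key's value, since its inverse-squared-distance weight dominates the average.</p>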
        <p>The predictions using the same test set are carried out using both the neural network and
LUT and the results are presented in the next section.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results &amp; Discussion</title>
      <p>A GOG model was used as the perfect model to generate the ground truth (RGB, XYZ) pairs.
The forward model to go from display (r, g, b) to linear (R, G, B) is a simple gamma function.</p>
      <p>The transformation matrices given in Eqs. (5) and (6) allow conversion from RGB to XYZ and
vice versa. The forward transformation matrix (M), to convert linear RGB to XYZ, and the reverse
transformation matrix (N), to convert XYZ back to linear RGB, are calculated from the simulated display.</p>
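      <p>Since N inverts M, it can be obtained numerically; the matrix below is an illustrative sRGB-like stand-in, not the paper's measured M:</p>
      <p>
```python
import numpy as np

# Illustrative forward matrix M (linear RGB -> XYZ); assumed values.
M = np.array([[41.24, 35.76, 18.05],
              [21.26, 71.52,  7.22],
              [ 1.93, 11.92, 95.05]])
N = np.linalg.inv(M)                 # reverse matrix: XYZ -> linear RGB

# Round trip: the XYZ of maximum red (first column of M) maps back
# to linear (1, 0, 0).
linear_red = N @ M[:, 0]
```
</p>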
      <p>Error maps are generated for both the Neural Network method and the LUT-based method
to compare their performances. For both methods, the LUT increments were kept consistent.
Errors were calculated for the Neural Network method at various numbers of hidden layers and
for the LUT-based method at various numbers of nearest neighbors. The error maps are shown in
Figure 6. The neural network performs extremely well at LUT increments of less than 16
and with more than 100 hidden layers. Even at a finer resolution, with a lower LUT
increment, the performance of the LUT-based method is worse than that of the Neural Network
based method.</p>
      <p>Based on the error maps in Figure 6, recommendations can be made for the number of hidden
layers at which the neural network performs substantially better than the LUT-based method.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and Future Work</title>
      <p>In this paper, a comparison of the performance of display color characterization methods based
on a neural network and a look-up table was carried out. The analysis showed that on average
the neural network was substantially better than the look-up table. Prediction time was also
better for the neural network, which provided an over 22-fold speed-up compared to the look-up
table method.</p>
      <p>In the future, we propose to test the neural network and look-up-table methods on real
measured displays, some of which have more complex relationships between the RGB input
values and the measured XYZ tristimulus output due to, for example, additivity failure. Using
a look-up table for high-resolution images becomes computationally expensive, and a neural
network might significantly help in obtaining better real-time predictions, while also offering
the possibility of reducing the number of characterization measurements for the same accuracy.</p>
      <p>[13] D. H. Brainard, Calibration of a computer controlled color monitor, Color Research &amp;
Application 14 (1989) 23–34.
[14] G. D. Finlayson, M. Mackiewicz, A. Hurlbert, Color correction using root-polynomial
regression, IEEE Transactions on Image Processing 24 (2015) 1460–1470.
[15] P. Menesatti, C. Angelini, F. Pallottino, F. Antonucci, J. Aguzzi, C. Costa, RGB color
calibration for quantitative image analysis: The “3D thin-plate spline” warping approach,
Sensors 12 (2012) 7063–7079.
[16] K. A. Smet, Tutorial: The LuxPy Python toolbox for lighting and color science, Leukos 16
(2020) 179–201.
[17] sklearn.linear_model.Perceptron, https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Perceptron.html, 2022. Accessed: 2022-10-13.
[18] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint
arXiv:1412.6980 (2014).
[19] scipy.spatial.cKDTree, https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html, 2022. Accessed: 2022-10-13.
[20] J. Zhang, Y. Meuret, X. Wang, K. A. Smet, Improved and robust spectral reflectance
estimation, Leukos 17 (2021) 359–379.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Fairchild</surname>
          </string-name>
          , Color appearance models, John Wiley &amp; Sons,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Berns</surname>
          </string-name>
          ,
          <article-title>Methods for characterizing crt displays</article-title>
          ,
          <source>Displays</source>
          <volume>16</volume>
          (
          <year>1996</year>
          )
          <fpage>173</fpage>
          -
          <lpage>182</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Noriega</surname>
          </string-name>
          , Multilayer perceptron tutorial, School of Computing, Staffordshire University (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Brainard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Pelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Robson</surname>
          </string-name>
          ,
          <article-title>Display characterization</article-title>
          ,
          <source>Signal Process</source>
          <volume>80</volume>
          (
          <year>2002</year>
          )
          <fpage>2</fpage>
          -
          <lpage>067</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>V.</given-names>
            <surname>Cheung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Westland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Connah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ripamonti</surname>
          </string-name>
          ,
          <article-title>A comparative study of the characterisation of colour cameras by means of neural networks and polynomial transforms</article-title>
          ,
          <source>Coloration Technology 120</source>
          (
          <year>2004</year>
          )
          <fpage>19</fpage>
          -
          <lpage>25</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Hainich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Bimber</surname>
          </string-name>
          , Displays: fundamentals &amp; applications, AK Peters/CRC Press,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Campbell-Kelly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Croarken</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Flood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Robson</surname>
          </string-name>
          , et al.,
          <article-title>The history of mathematical tables: from Sumer to spreadsheets</article-title>
          , Oxford University Press,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Vrhel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Trussell</surname>
          </string-name>
          ,
          <article-title>Color scanner calibration via a neural network</article-title>
          ,
          <source>in: 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No. 99CH36258)</source>
          , volume
          <volume>6</volume>
          , IEEE,
          <year>1999</year>
          , pp.
          <fpage>3465</fpage>
          -
          <lpage>3468</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Usui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Arai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nakauchi</surname>
          </string-name>
          ,
          <article-title>Neural networks for device-independent digital color imaging</article-title>
          ,
          <source>Information Sciences 123</source>
          (
          <year>2000</year>
          )
          <fpage>115</fpage>
          -
          <lpage>125</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Prats-Climent</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Gòmez-Robledo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Huertas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>García-Nieto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Rodríguez-Álvarez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Morillas</surname>
          </string-name>
          ,
          <article-title>A study of neural network-based lcd display characterization</article-title>
          ,
          <source>in: London Imaging Meeting</source>
          , volume
          <volume>2021</volume>
          ,
          Society for Imaging Science and Technology,
          <year>2021</year>
          , pp.
          <fpage>97</fpage>
          -
          <lpage>100</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>P.</given-names>
            <surname>Colantoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-B.</given-names>
            <surname>Thomas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Y.</given-names>
            <surname>Hardeberg</surname>
          </string-name>
          ,
          <article-title>High-end colorimetric display characterization using an adaptive training set</article-title>
          ,
          <source>Journal of the Society for Information Display</source>
          <volume>19</volume>
          (
          <year>2011</year>
          )
          <fpage>520</fpage>
          -
          <lpage>530</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>W. B.</given-names>
            <surname>Cowan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Rowell</surname>
          </string-name>
          ,
          <article-title>On the gun independence and phosphor constancy of colour video monitors</article-title>
          .,
          <source>Color Research and Application 11</source>
          (
          <year>1986</year>
          )
          <fpage>s34</fpage>
          -
          <lpage>s38</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>