<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Cross-category material interpolation and binocular material fusion</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hua-Chun Sun</string-name>
          <email>hua-chun.sun@psychol.uni-giessen.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roland W. Fleming</string-name>
          <email>roland.w.fleming@psychol.uni-giessen.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Mind, Brain and Behavior (CMBB), University of Marburg, Justus Liebig University Giessen and TU Darmstadt</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Justus Liebig University Giessen</institution>
          ,
          <addr-line>Otto-Behaghel-Str. 10, 35394 Giessen</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>Recognizing and identifying materials is essential for navigating the visual world. In this study, we investigate perceptual scaling, material interpolation, and binocular combination of materials across three experiments using cross-category image morphs derived from a deep neural network (DNN)-based interpolation algorithm [1]. Twenty-four real-world material images from the STUFF dataset [2] were selected to create 12 cross-category morph pairs (e.g., moss-fur). By systematically adjusting the morph weights, each image gradually transitioned from one material to another, producing a continuum of intermediate blended materials. In Experiment 1, we examined the perceptual scaling of these synthesized blends using a rating task. Results revealed that participants' perceptual judgments generally followed a linear relationship with the interpolation weights, though the degree of compression and nonlinearity varied across different morph pairs. In the following experiments, we employed matching tasks in which participants adjusted a test stimulus along 49 morphing steps (ranging from 2% to 98%) to achieve perceptual equivalence with a reference. In Experiment 2, participants adjusted the test stimulus to match the perceived midpoint blend of two original materials presented on the two sides of the test. In Experiment 3, the adjustment aimed to match the perceptual outcome of dichoptic viewing, where each eye was presented with a different weighted combination of a material pair (e.g., 30% sand + 70% grass in the left eye and 70% sand + 30% grass in the right eye). We found that participants' adjustments deviated systematically from the 50% interpolation midpoint across different material pairs in both experiments. Image statistics revealed that RMS contrast was the primary predictor, accounting for a substantial portion of the variance in both tasks. These findings suggest that perceptual interpolation across materials and binocular material integration may rely on a shared scaling mechanism within a common representational space.</p>
      </abstract>
      <kwd-group>
        <kwd>material perception</kwd>
        <kwd>perceptual scaling</kwd>
        <kwd>image statistics</kwd>
        <kwd>binocular integration</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Material properties provide essential information about the nature and status of objects, enabling us to
identify what we see and guiding how we physically interact with our surroundings. This perceptual
information supports object recognition and informs motor planning. Despite its importance in daily
life, the mechanisms underlying material perception are still not fully understood.</p>
      <p>
        Current theories suggest that visual material information is encoded in continuous, multidimensional
feature spaces in the brain [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. These spaces are thought to represent both familiar and novel materials,
and this organization facilitates efficient recognition and discrimination. However, a key challenge
remains: how do we perceive and interpret materials that fall between established categories—those
that lie in intermediate regions of the material space (e.g., between wood and metal)? While past studies
have often concentrated on isolated material properties, our understanding of how full materials are
represented in a multidimensional perceptual space is still limited.
      </p>
      <p>
        Another knowledge gap concerns how material information is integrated binocularly. Binocular
vision provides crucial cues for perceiving material properties, yet much of the existing research has
focused primarily on gloss perception—showing that the visual system can use binocular disparities
in specular reflections to infer surface gloss. However, it remains unclear how binocular processing
contributes to the perception of other material dimensions. Specifically, how does the brain integrate
conflicting material information presented separately to each eye? Addressing this question may not
only clarify the mechanisms underlying binocular material perception but also provide insight into
broader processes such as the integration of naturalistic, chromatic signals across the eyes [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and the
neural balance between binocular fusion and rivalry [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>In this study, we address these knowledge gaps by examining how the visual system integrates
material information that spans across established material category boundaries. Using deep learning
based image interpolation, we investigate both the internal perceptual scaling and the processes by
which different material information presented separately to each eye is integrated to support coherent
material perception.</p>
    </sec>
    <sec id="sec-2">
      <title>2. General Methods</title>
      <p>
        We employ image interpolation based on a deep convolutional neural network (CNN) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] to examine
the perceptual scaling of synthesized material mixtures derived from images of real-world materials. We
selected 24 natural images from the STUFF dataset [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], each representing a distinct material category, to
generate 12 cross-category morph pairs (e.g., moss-fur, iron-cork; see Figure 1). All digital images depict
fronto-parallel views of material textures under unknown lighting conditions. In total, 49 morphing
steps (ranging from 2% to 98%) were generated for each pair of morphs. The interpolation was obtained
with the Wasserstein loss using VGG19 pretrained weights [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
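      <p>To make the synthesis procedure concrete, the following sketch (in Python, using PyTorch)
interpolates per-channel VGG19 feature statistics under a diagonal-Gaussian approximation, for which
the Wasserstein-2 interpolant is linear in the feature means and standard deviations. The layer choice,
optimizer settings, and the Gaussian simplification are our assumptions for illustration; the full
algorithm of [1] is available in the repository listed in the appendix.</p>
      <preformat>
# Illustrative sketch, not the exact implementation of [1]: interpolate
# VGG19 feature statistics under a diagonal-Gaussian approximation, where
# the Wasserstein-2 geodesic is linear in per-channel means and stds.
import torch
import torchvision.models as models

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

LAYERS = {1, 6, 11, 20, 29}  # assumed relu1_1..relu5_1 indices

def feature_stats(x):
    """Per-channel mean and std of selected VGG19 activations."""
    stats, h = [], x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in LAYERS:
            stats.append((h.mean(dim=(2, 3)), h.std(dim=(2, 3))))
    return stats

def morph(img_a, img_b, w, steps=300, lr=0.02):
    """Synthesize the w-weighted morph of two 1x3xHxW images in [0, 1]."""
    with torch.no_grad():
        sa, sb = feature_stats(img_a), feature_stats(img_b)
    targets = [((1 - w) * ma + w * mb, (1 - w) * da + w * db)
               for (ma, da), (mb, db) in zip(sa, sb)]
    x = ((1 - w) * img_a + w * img_b).clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = sum(((m - mt) ** 2).sum() + ((d - dt) ** 2).sum()
                   for (m, d), (mt, dt) in zip(feature_stats(x), targets))
        loss.backward()
        opt.step()
    return x.detach().clamp(0, 1)
      </preformat>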
      <p>The experimental procedures were approved by the Ethics Committee of Justus Liebig University
Giessen and conducted in accordance with institutional guidelines and the Declaration of Helsinki.
Participants were recruited through the SONA human subject management system of the Department
of Psychology and Sport Science at Justus Liebig University Giessen. The recruitment and data handling
procedures comply with European Union regulations on research ethics and data protection. All
participants provided written informed consent prior to participation.</p>
      <p>In addition to behavior measurements, here we also evaluate the interpolation algorithm through
analysis of image statistics—specifically Root mean square (RMS) contrast and hue, saturation, and
value (HSV) values—to determine whether these features varied systematically with the generated
material morph images. RMS contrast is calculated as the standard deviation of pixel intensities divided
by their mean. HSV values are computed as the mean of pixel values within each channel.</p>
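      <p>For concreteness, the following sketch computes these two statistics; the use of NumPy and
scikit-image is our choice for illustration rather than a description of the original analysis code.</p>
      <preformat>
# Sketch of the image statistics described above: RMS contrast as the
# standard deviation of pixel intensities divided by their mean, and
# the mean of each HSV channel.
import numpy as np
from skimage import color, io

def image_stats(path):
    rgb = io.imread(path).astype(np.float64) / 255.0
    gray = color.rgb2gray(rgb)               # pixel intensities in [0, 1]
    rms_contrast = gray.std() / gray.mean()  # SD of intensities / mean
    hsv = color.rgb2hsv(rgb)                 # hue, saturation, value
    return (rms_contrast, hsv[..., 0].mean(),
            hsv[..., 1].mean(), hsv[..., 2].mean())
      </preformat>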
    </sec>
    <sec id="sec-3">
      <title>3. Experiment 1</title>
      <sec id="sec-3-1">
        <title>3.1. Experiment 1 design and task</title>
        <p>To quantify the perceptual scaling of CNN-generated interpolation weights, 29 participants completed a
rating task in which they evaluated 9 blends of the two original materials in each pair, with interpolation
weights ranging from 10% to 90%. In each trial, participants used a red slider—controlled via mouse or
keyboard—to indicate the degree to which the centrally presented morph image resembled the original
materials shown on the left and right (see Figure 2). Each participant completed 108 trials (12 morph
pairs × 9 morph levels), presented
in randomized order.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Experiment 1 Results</title>
        <p>Our findings demonstrate a generally linear relationship between the perceived material scale and the
interpolation weights (Figure 3, main panel). Perceptual scaling closely tracked the physical morph
levels, as indicated by a strong correlation between the two (r = 0.95, p &lt; .001). However, the regression
slope of approximately 0.77 revealed perceptual compression, suggesting that internal scaling was more
compact than the external morphing scale.</p>
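        <p>The compression estimate corresponds to the slope of a least-squares fit of mean ratings on
physical morph weights, as in the sketch below; SciPy is an assumed tool choice and the rating values
shown are hypothetical.</p>
        <preformat>
# Sketch: perceptual compression as the slope of mean ratings (%)
# regressed on physical morph weights. A slope of 1.0 means no
# compression; the hypothetical data below give a slope near 0.77.
import numpy as np
from scipy import stats

morph_weights = np.arange(10, 100, 10)  # 10%, 20%, ..., 90%
mean_ratings = np.array([19, 27, 35, 42, 50, 58, 65, 73, 81], dtype=float)

fit = stats.linregress(morph_weights, mean_ratings)
print(f"r = {fit.rvalue:.2f}, slope = {fit.slope:.2f}")
        </preformat>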
        <p>Notably, we observed that perceptual scaling varied across the 12 morph pairs, warranting a more
nuanced interpretation. For example, morph pair 1 (moss–fur; see Figure 3, subpanel 1) exhibited an
almost perfect linear correspondence between physical and perceptual scales (r = 0.9), with minimal
compression (98%, as indicated by the slope). In contrast, morph pairs such as glass–cellophane (pair 5),
plastic–aluminium (pair 6), and chrome–bubble wrap (pair 7) showed weaker linear relationships (r ≈ 0.7)
and large compression effects, with slopes indicating approximately 60% of the physical scale range.</p>
        <p>Interestingly, the slope values showed a strong correlation with the saturation difference between the
original material images (r = 0.76, p &lt; .01). Likewise, the correlation between physical and perceptual
scales was also significantly associated with saturation differences (r = 0.63, p &lt; .05). These results
suggest that chromatic image statistics may play a key role in estimating the perceptual midpoint
between materials.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiment 2</title>
      <sec id="sec-4-1">
        <title>4.1. Experiment 2 design and task</title>
        <p>In Experiment 2, we further employed matching tasks to measure the perceived midpoint blend of two
original materials. Twenty-seven participants from Experiment 1 also participated in Experiment 2. The experimental
setup was similar—two original material images were shown on the left and right, with a test stimulus
displayed in the center (Figure 2). This time, participants adjusted the middle test stimulus along 49
morphing steps (ranging from 2% to 98%) to achieve a perceptual midpoint blend between the two
original materials. One participant was excluded from further analysis due to an excessive number of
extreme responses—over 98% of trials showed deviations greater than ±45% from the midpoint (the
maximum possible range being ±50%).</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Experiment 2 Results</title>
        <p>Figure 4 presents the adjustment results from 26 participants for each of the 12 morph pairs. The data
are shown as the percentage deviation from the 50% interpolation midpoint, ranging from 2.6% to
16.9% across morph pairs, with a mean deviation of 8.81%. Notably, these deviations were significantly
correlated with the RMS contrast difference between the original material images (r = 0.67, p &lt; .05),
suggesting that low-level image statistics may play a role in perceptual material interpolation.</p>
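        <p>This correlation follows directly from the twelve per-pair deviations and the corresponding RMS
contrast differences, as in the brief sketch below; the arrays are placeholders, since the per-pair values
are not reproduced here.</p>
        <preformat>
# Sketch: correlate per-pair midpoint deviations with the RMS contrast
# difference between the two originals of each pair. Placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rms_diff = rng.uniform(0.0, 0.5, size=12)                # placeholder
deviation = 2.6 + 28 * rms_diff + rng.normal(0, 2, 12)   # placeholder

r, p = stats.pearsonr(rms_diff, deviation)
print(f"r = {r:.2f}, p = {p:.3f}")
        </preformat>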
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Experiment 3</title>
      <p>Experiments 1 and 2 aimed to investigate perceptual scaling and material space across different material
categories under ordinary, non-stereoscopic viewing conditions. In Experiment 3, we extend this
investigation to binocular viewing. The central question is: how does the visual system perceive and
interpret materials when each eye receives conflicting material information? Does binocular integration
of different materials rely on a mechanism similar to that used for perceiving a single, morphed image composed of
the two materials?</p>
      <sec id="sec-5-1">
        <title>5.1. Experiment 3 design and task</title>
        <p>Observers (N = 21) with normal or corrected-to-normal visual acuity took part in the experiment. Stimuli
were presented dichoptically using a stereoscope and consisted of two types: a reference stimulus and a
match stimulus.</p>
        <p>The reference stimulus was displayed in the top row of the screen (Figure 5). Each eye was shown a
different weighted blend of a given material morph pair (e.g., 90% moss + 10% fur to the left eye and
10% moss + 90% fur to the right eye in Figure 5 top, referred to as the 10–90 condition). Image-eye
assignments were counterbalanced across trials, with each image presented equally often to the left
and right eyes. Four interocular morph weight conditions were tested: 40–60, 30–70, 20–80, and 10–90.
The match stimulus, presented in the bottom row of the screen, was identical in both eyes and served
as the comparison for perceptual matching (Figure 5).</p>
        <p>Each participant completed 192 trials, comprising 12 morph pairs × 4 morph weight conditions ×
4 repetitions, presented in randomized order. As in Experiment 2, participants adjusted the match
stimulus along 49 morph steps between the original images to achieve perceptual equality with the
reference stimulus. Trials in which the two eyes' images could not be fused, or in which rivalry was perceived, were skipped and removed from the analysis.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Experiment 3 Results</title>
        <p>The adjustment results deviated from the 50% interpolation midpoint by 0% to 17% on average (Figure 6
main panel), depending on the material pair. Greater midpoint deviations were observed in the most
extreme binocular difference condition (10–90), with a mean deviation of 14.44% across the 12 morph
pairs. The magnitude of deviation progressively decreased as the interocular difference diminished:
11.38% for the 20–80 condition, 6.40% for 30–70, and 4.00% for 40–60 (Figure 6, top panels).</p>
        <p>Using RMS contrast and saturation, hue, and value (from the HSV color space) as predictors, we
conducted a stepwise regression analysis to assess their contribution to the adjustment results. For
the 10–90 morph weight condition, the analysis revealed that RMS contrast and saturation were the
most influential predictors, jointly explaining 50% of the variance in perceived material dominance
(R² = 0.59; adjusted R² = 0.50). Notably, RMS contrast alone accounted for a substantial portion of
this variance (R² = 0.40; adjusted R² = 0.34). Similar analyses for the other morph weight conditions
showed that RMS contrast remained a robust predictor (the procedure is sketched below):
• 20–80 morph: R² = 0.49; adjusted R² = 0.44
• 30–70 morph: R² = 0.78; adjusted R² = 0.75
• 40–60 morph: R² = 0.50; adjusted R² = 0.45</p>
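        <p>This style of analysis can be reproduced with a forward stepwise procedure that, at each step, adds
whichever predictor most improves the adjusted R² and stops when no candidate improves it further.
The sketch below uses statsmodels and randomly generated placeholder data, since the trial-level data
are not reproduced here.</p>
        <preformat>
# Sketch: forward stepwise regression of perceived material dominance
# on RMS contrast, hue, saturation, and value. Placeholder data;
# statsmodels is an assumed tool choice.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(12, 4)),
                  columns=["rms_contrast", "hue", "saturation", "value"])
df["dominance"] = (0.8 * df["rms_contrast"] + 0.4 * df["saturation"]
                   + rng.normal(0, 0.5, 12))  # placeholder outcome

selected, remaining, best_adj = [], list(df.columns[:-1]), -np.inf
while remaining:
    scores = {c: sm.OLS(df["dominance"],
                        sm.add_constant(df[selected + [c]])).fit().rsquared_adj
              for c in remaining}
    cand = max(scores, key=scores.get)
    if scores[cand] &lt;= best_adj:  # stop when adjusted R^2 stops improving
        break
    best_adj = scores[cand]
    selected.append(cand)
    remaining.remove(cand)
print(selected, round(best_adj, 2))
        </preformat>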
        <p>These findings suggest that RMS contrast is a primary driver of perceptual bias in binocular material
integration, particularly when the difference between the two eyes is moderate (i.e., 20–80 to 40–60
conditions). In contrast, saturation appears to play a larger role only when the binocular difference is
more pronounced, as in the 10–90 morph condition.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>Our findings demonstrate a generally linear relationship in the perceptual scaling of morphed materials,
which mirrors the linear changes observed in image statistics across interpolation weights. Moreover,
we show that perceptual interpolation and binocular material fusion are closely linked to low-level
image features. Materials with higher contrast and greater color saturation are weighted more heavily
in the resulting perceptual blend. Together, these results suggest that material scaling, interpolation,
and interocular summation may rely on a shared representational framework within the visual system.</p>
      <p>The way our visual system integrates conflicting visual information across eyes is not only crucial
for depth perception but also influences how we interpret texture and material. Here we show that
perceptually distinct materials presented separately to each eye can be integrated into a novel coherent
material percept in the brain.</p>
      <sec id="sec-6-1">
        <title>6.1. Perceptual Scaling of Materials: Linear or Nonlinear?</title>
        <p>
          In Experiment 1, we observed a consistent correspondence between perceptual scaling and physical
morph levels, indicating an approximately linear relationship overall. This contrasts with findings
by Vacher et al. (2020) [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], who reported nonlinear perceptual scaling for most participants. Several
methodological differences may account for the discrepancy. Vacher et al. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] employed Maximum
Likelihood Difference Scaling (MLDS) [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] with a smaller sample (N = 8), whereas our study used a
continuous rating task with a larger participant group (N = 29). In addition, the image sets they used
were also different [
          <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
          ]. These differences underscore the need for further research to evaluate whether
perceptual scaling functions generalize across image sets, methods, and participant samples. Replicating
findings under varied conditions will be critical to establishing the robustness of perceptual scaling
behavior in material perception.
        </p>
        <p>Note that the 0% and 100% blend conditions were not included in Experiment 1, as these test images
would be identical to the original material images shown on either side. In such cases, participants’
responses would be expected to align closely with the physical scale, potentially resulting in an S-shaped
(sigmoidal) response curve for morph pairs with shallower slopes.</p>
      </sec>
      <sec id="sec-6-2">
        <title>6.2. Perceptual Interpolation vs. Binocular Integration of Materials</title>
        <p>We observed a significant correlation between the results of Experiment 2 and Experiment 3 across the 12
morph pairs (r = 0.58, p &lt; .05; see Figure 4 and Figure 6). Notably, the strength of this correlation increased
systematically as the binocular difference in Experiment 3 decreased. Specifically, the correlation
coefficients rose from r = 0.40 in the 10–90 morph weight condition, to r = 0.48 (20–80), r = 0.72 (30–70),
and r = 0.77 (40–60), suggesting greater consistency between perceptual interpolation and binocular
material integration when the visual inputs to the two eyes were more similar.</p>
        <p>The effect size in Experiment 2, measured as the mean deviation from the 50% midpoint (8.81%),
falls between the deviation observed in the 20–80 and 30–70 morph weight conditions of Experiment
3 (11.38% and 6.40%, respectively). These comparable findings suggest that perceptual interpolation
and binocular material integration may rely on a shared scaling mechanism within a common material
representation space.</p>
      </sec>
      <sec id="sec-6-3">
        <title>6.3. Individual Differences in Binocular Material Integration</title>
        <p>
          In Experiment 3, we found that material information presented to each eye with differently weighted
morphs was integrated binocularly across all 12 tested material pairs (Figure 6). Interestingly, the
degree of binocular integration varied systematically across participants. Some individuals consistently
exhibited a stronger perceptual bias toward one material over the other within a pair (e.g., red data
points in Figure 6), while others showed more balanced integration (blue data points). These individual
tendencies were stable across different material pairs, suggesting trait-like patterns of integration.
Future research should consider potential contributing factors to these individual differences, such as
eye dominance [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] or interocular differences in visual acuity, to better understand the mechanisms
underlying material integration in binocular vision.
        </p>
      </sec>
      <sec id="sec-6-4">
        <title>6.4. The Role of Luster, Binocular Rivalry, and Stereopsis in Binocular Material</title>
      </sec>
      <sec id="sec-6-5">
        <title>Fusion</title>
        <p>
          In Experiment 3, differently morphed material images were presented to each eye. Previous research has
shown that binocular differences in chromatic and achromatic contrast can give rise to a percept of luster
[
          <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
          ]. This can potentially make the fused material appear glossier or more metallic. Additionally,
disparities in fine-grained texture and surface patterns between the two eyes’ images may evoke depth
cues via stereopsis, potentially altering the perceived material structure. These effects could influence
performance and bias material judgments on the adjustment task.
        </p>
        <p>In conditions with large interocular differences, particularly the 10–90 morph condition, the
dissimilarity between the two images may approach
a threshold that induces binocular rivalry rather than integration. Although the reported incidence
of rivalry was low in our study—and those trials were excluded from analysis—future investigations
should systematically examine the transition point at which binocular integration gives way to rivalry.
Understanding this threshold is essential for interpreting the limits of binocular material fusion.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Appendices</title>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <sec id="sec-8-1">
        <title>The Material Morph Image Synthesis Algorithm [1]</title>
        <p>https://github.com/JonathanVacher/texture-interpolation/tree/master
This research is funded by the DFG (222641018 – SFB/TRR 135 TP C1), the Excellence Cluster EXC3066
"The Adaptive Mind" and European Research Council Grant ERC-2022-AdG “STUFF” (project number
101098225). We thank Hannah Schösser, Lily Stock, and Zeynep Ceyda Demirkan for data collection.</p>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used GPT-4 for grammar and spelling checks and
editing. After using the tool, the authors reviewed and edited the content as needed and take full
responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Vacher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Davila</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kohn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Coen-Cagli</surname>
          </string-name>
          ,
          <article-title>Texture interpolation for probing visual perception, 2020</article-title>
          . URL: https://arxiv.org/abs/
          <year>2006</year>
          .03698.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Hebart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Schmid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. W.</given-names>
            <surname>Fleming</surname>
          </string-name>
          ,
          <article-title>Core dimensions of human material perception</article-title>
          ,
          <source>Proceedings of the National Academy of Sciences</source>
          <volume>122</volume>
          (
          <year>2025</year>
          ). doi:
          <volume>10</volume>
          .1073/pnas. 2417202122.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>H.-C.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Schmid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Hebart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. W.</given-names>
            <surname>Fleming</surname>
          </string-name>
          ,
          <article-title>Cortical representations of core visual material dimensions</article-title>
          ,
          <source>Journal of Vision</source>
          <volume>24</volume>
          (
          <year>2024</year>
          )
          <article-title>285</article-title>
          . doi:
          <volume>10</volume>
          .1167/jov.24.10.285.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Baker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. J.</given-names>
            <surname>Hansford</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. G.</given-names>
            <surname>Segala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Morsi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Huxley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rockman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Wade</surname>
          </string-name>
          ,
          <article-title>Binocular integration of chromatic and luminance signals</article-title>
          ,
          <source>Journal of Vision</source>
          <volume>24</volume>
          (
          <year>2024</year>
          )
          <article-title>7</article-title>
          . doi:
          <volume>10</volume>
          .1167/jov.24.12.7.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Meng</surname>
          </string-name>
          ,
          <article-title>Integration and suppression interact in binocular vision</article-title>
          ,
          <source>Journal of Vision</source>
          <volume>23</volume>
          (
          <year>2023</year>
          )
          <article-title>17</article-title>
          . doi:
          <volume>10</volume>
          .1167/jov.23.10.17.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L. T.</given-names>
            <surname>Maloney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. N.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>Maximum likelihood diference scaling</article-title>
          ,
          <source>Journal of Vision</source>
          <volume>3</volume>
          (
          <year>2003</year>
          )
          <fpage>573</fpage>
          -
          <lpage>585</lpage>
          . doi:
          <volume>10</volume>
          .1167/3.8.5.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Cimpoi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Maji</surname>
          </string-name>
          , I. Kokkinos,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vedaldi</surname>
          </string-name>
          ,
          <article-title>Describing textures in the wild</article-title>
          ,
          <source>in: 2014 IEEE Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>3606</fpage>
          -
          <lpage>3613</lpage>
          . doi:
          <volume>10</volume>
          . 1109/CVPR.
          <year>2014</year>
          .
          <volume>461</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P.</given-names>
            <surname>Arbeláez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Fowlkes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Malik</surname>
          </string-name>
          ,
          <article-title>Contour detection and hierarchical image segmentation</article-title>
          ,
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          <volume>33</volume>
          (
          <year>2011</year>
          )
          <fpage>898</fpage>
          -
          <lpage>916</lpage>
          . doi:
          <volume>10</volume>
          .1109/ TPAMI.
          <year>2010</year>
          .
          <volume>161</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>F. A. A.</given-names>
            <surname>Kingdom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Mohammad-Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Breuil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chang-Ou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Irgaliyev</surname>
          </string-name>
          ,
          <article-title>Detection of vertical interocular phase disparities using luster as cue</article-title>
          ,
          <source>Journal of Vision</source>
          <volume>23</volume>
          (
          <year>2023</year>
          )
          <article-title>10</article-title>
          . doi:
          <volume>10</volume>
          .1167/jov. 23.6.10.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>B. J.</given-names>
            <surname>Jennings</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A. A.</given-names>
            <surname>Kingdom</surname>
          </string-name>
          ,
          <article-title>Detection of between-eye diferences in color: Interactions with luminance</article-title>
          ,
          <source>Journal of Vision</source>
          <volume>16</volume>
          (
          <year>2016</year>
          )
          <article-title>23</article-title>
          . doi:
          <volume>10</volume>
          .1167/16.3.23.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>