<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="editor">
          <string-name>Keynote: Professor Theo Gevers</string-name>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <abstract>
        <p>The Colourlab at the Norwegian University of Science and Technology (NTNU), Norway has organised the Colour and Visual Computing Symposium 2024 (CVCS 2024), which this year is taking place in Gjøvik, on September 5-6, 2024. Born in 2003 as the Gjøvik Colour Imaging Symposium (GCIS), the Colour and Visual Computing Symposium (CVCS) has attracted a growing number of participants and provided a platform for fruitful discussion and exploration of recent advances in the field of colour and visual computing.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Keynote: Professor Theo Gevers</title>
      <p>Visual explanations have traditionally acted as rationales used to justify the decisions made
by machine learning systems. With the advent of large-scale neural networks, the role of
visual explanations has been to shed light on opaque models. We view this role as
the process by which the network answers the question ‘Why P?’, where P is a trained network’s
prediction. Recently, however, with increasingly capable models, the role of explainability has
expanded. Neural networks are asked to answer ‘What if?’ counterfactual and ‘Why P, rather
than Q?’ contrastive question modalities that they were not explicitly trained to answer.
This allows explanations to act as reasons for making further predictions. The talk provides a
principled and rational overview of explainability within machine learning and justifies
explanations as reasons for making decisions. Such a reasoning framework allows robust machine
learning as well as trustworthy AI to be accepted in everyday life. Applications like robust
recognition, image quality assessment, visual saliency, anomaly detection, out-of-distribution
detection, semantic segmentation, introspection, interpretation, adversarial image detection,
and machine teaching, among others, will be briefly discussed.
</p>
    </sec>
    <sec id="sec-1b">
      <title>Keynote: Dr Sira Ferradans</title>
    </sec>
    <sec id="sec-2">
      <title>Title: Studying user preferences for diverse skin tone portrait quality rendition</title>
      <p>Portraits are the most common use case for smartphone photography; however, producing
a realistic and pleasant skin tone in real scenarios is still challenging for all manufacturers,
especially in common conditions such as night or low-light scenes. Moreover, nonhomogeneous
quality rendition across skin tones has become a sensitive issue, and its
evaluation is crucial for the industry. In the scientific literature, we find mostly studies that
evaluate synthetic modifications of laboratory portraits. In this talk, we will show the
challenges of systematically evaluating diverse skin tones in the lab using realistic
mannequins. However, we will also show that real setups are much more complex to
evaluate, and user preferences depend on many factors.</p>
      <p>We will go through the conclusions obtained during DXOMARK’s most recent user studies, where we
examine the performance of high-end smartphone cameras in common everyday use
cases. This study shows that around 20% of portraits are currently discarded due to quality
problems, implying that contemporary smartphone cameras are far from solving the skin
tone rendition problem.</p>
      <p>These challenges arise mostly because there is no clear target definition of user preferences
regarding skin tone colour rendering. Defining this target could pave the way to
automating skin tone rendition evaluation with machine learning.</p>
    </sec>
    <sec id="sec-3">
      <title>Keynote: Dr Charles Poynton</title>
    </sec>
    <sec id="sec-4">
      <title>Title: Technological Natural Selection in Imaging Standards</title>
      <p>Video signal decoding by a CRT’s inherent power function (“gamma”) very nearly inverts the
perceptual uniformity of CIE L*. I used to consider this to be an amazing coincidence. In
about 1992, I was chatting to Mike Schuster (of Adobe) about CRT gamma, and I
commented to him about what I saw as the fluke by which halftone dot gain in printing also
has nonlinear behaviour favourable to perception. Michael told me that he had thought
about that for a long time. He said that he had reached the conclusion that it was a kind of
technological natural selection – if not for optical dot gain, 8-bit CMYK halftoning would
have failed, and some other scheme would have eventually been found.</p>
      <p>In this talk, I’ll describe several situations in digital colour imaging where suitable – even
near-optimum – solutions to problems were found by processes involving mutation and
selection pressure, rather than by explicit engineering. There are lessons for imaging system
design.</p>
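      <p>The near-inversion described above can be checked numerically. The sketch below is an illustration, not material from the talk: the 0.42 exponent is an assumed stand-in for the reciprocal of a typical CRT decoding gamma of roughly 2.4. Over mid-to-high luminances, the single power law tracks CIE L* to within about two units on the 0–100 scale, while the two curves diverge near black, where L* switches to its linear segment.</p>

```python
# Numerical sketch (an assumption-laden illustration, not from the talk):
# compare CIE 1976 L* with a single power law whose exponent 0.42 (~1/2.4)
# approximates the inverse of a CRT's decoding gamma.

def cie_lstar(Y):
    """CIE 1976 lightness L* (0-100) from relative luminance Y in [0, 1]."""
    if Y > (6.0 / 29.0) ** 3:               # breakpoint ~0.008856
        return 116.0 * Y ** (1.0 / 3.0) - 16.0
    return (29.0 / 3.0) ** 3 * Y            # linear segment near black

def power_law(Y, exponent=0.42):
    """A single power function scaled to the same 0-100 range."""
    return 100.0 * Y ** exponent

# Sweep mid-to-high luminances; near black the curves part ways because
# L* switches to its linear segment there.
samples = [i / 1000.0 for i in range(50, 1001)]   # Y from 0.05 to 1.0
max_diff = max(abs(cie_lstar(Y) - power_law(Y)) for Y in samples)
print(f"max |L* - 100*Y^0.42| over Y in [0.05, 1]: {max_diff:.2f}")
```

On this sweep the disagreement stays small relative to the 0–100 L* scale, which is the sense in which a CRT's decoding gamma "very nearly inverts" perceptual uniformity.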
      <p>The members of the programme committee are:
⎯ Seyed Ali Amirshahi – General Chair
⎯ Steven le Moan – General Chair
⎯ Giuseppe Claudio Guarnera – Programme Chair
⎯ Aditya Suneel Sole – Programme Chair
⎯ Davit Gigilashvili – Publication Chair
⎯ Dar’ya Guarnera – Publication Chair
⎯ Giorgio Trumpy – Student Session Chair
⎯ Jon Yngve Hardeberg – Publicity and Sponsorship Chair
⎯ Peter Nussbaum – Publicity and Sponsorship Chair
⎯ Anneli Torsdbakken Østlien – Administrative Chair
⎯ Cathrine Øverberg Larsen – Administrative Chair
We express sincere gratitude to all the experts from the scientific committee for
participating in the paper review process. Additional thanks go to Chunhong Luo for her
assistance with accounting matters. We are pleased to acknowledge the financial support
of the Research Council of Norway and the Norwegian University of Science and Technology
as well as our other sponsors.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>