<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>through Novel Facial Approximation of the Crew of a Civil War Submarine</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Michael P. Scafuri</string-name>
          <email>scafuri@clemson.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eric Patterson</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nicholas DeLong</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Facial Approximation, Digital Humans, Archaeology, Digital Cultural Preservation</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Warren Lasch Conservation Center, Clemson University</institution>
          ,
          <addr-line>1250 Supply St., N. Charleston, SC 29405</addr-line>
          ,
          <country country="US">United States</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Zucker Family Graduate Education Center, Clemson University</institution>
          ,
          <addr-line>1240 Supply St., N. Charleston, SC 29405</addr-line>
          ,
          <country country="US">United States</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>1</volume>
      <fpage>8</fpage>
      <lpage>21</lpage>
      <abstract>
        <p>The crew of the H.L. Hunley submarine all perished following the successful attack and sinking of the blockading ship USS Housatonic off the coast of Charleston, South Carolina in 1864. Through archaeological and historical research, much has been learned about this event. However, because of the paucity of historical records, the identities of the crew have not been well understood. This project seeks to expand our understanding of the crew of H.L. Hunley through development of innovative digital techniques in the reconstruction of facial features from 3D models of their skeletal remains. Traditional methods of facial approximation have significant limitations in terms of flexibility and future applicability. This project tests the use of cutting-edge production methods in an innovative workflow to build more interactive and lifelike representations that exceed simple facial approximations. Using 3D scan data from individuals from the crew of H.L. Hunley, digital likenesses are being constructed that work within the flexible Epic Games MetaHuman framework to create interactive, photorealistic likenesses that can make realistic expressions, perform to match speech, and be animated for any digital applications. By developing multiple reconstructions for each individual, archaeological research can be enhanced using facial recognition to search currently available photo archives. We plan to test this with specific individual facial approximations in consultation with the team of Civil War Photo Sleuth (https://www.civilwarphotosleuth.com/) to attempt to find unidentified photographic records and potential background histories for the crewmen of H.L. Hunley.</p>
      </abstract>
      <kwd-group>
        <kwd>Facial Approximation</kwd>
        <kwd>Digital Humans</kwd>
        <kwd>Archaeology</kwd>
        <kwd>Digital Cultural Preservation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>CEUR
ceur-ws.org</p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        On the night of February 17, 1864, the submarine H.L. Hunley attacked and sank the blockading
sloop-of-war USS Housatonic off the coast of Charleston, South Carolina, marking the first time
in naval history that a submersible sank an enemy ship in combat. However, the submarine never
returned to shore that night. Rediscovered some 1000 feet from the wreck of USS Housatonic in
1995, the submarine was raised in 2000 and brought to the Warren Lasch Conservation Center
(WLCC) in North Charleston, SC for conservation treatment and archaeological analysis[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
In 2004, archaeologists at the WLCC began the process of documenting the entire skeletal
assemblage of the eight-man crew of H.L. Hunley using 3D scanning technology[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. This was
partly to record the remains prior to burial, but also to create a permanent record of the bones
that could be applied to future research. This resulted in the development of accurate 3D models
of the skeletal remains, including the crania. This work forms the foundation of the current
digital likeness project. Facial reconstruction and the virtual recreation of people from the
past is a developing discipline that has the potential to bring the past alive like never before[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
Through a collaborative effort, the WLCC and Clemson’s Digital Production Arts program
(DPA) hope to bring the story and history of the crew and their accomplishments to the public
through these interactive tools and produce a more accessible understanding of this significant
event in history.
      </p>
    </sec>
    <sec id="sec-3">
      <title>2. Facial Approximation</title>
      <p>
        Traditionally, facial reconstruction for archaeological purposes has involved established
sculpting methods guided by anthropological and historical knowledge[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The cranium and possibly
other skeletal remains are used to inform the likeness which is sculpted in a multi-step process
on a cast of the cranial remains. Form construction is guided using tissue-depth markers, based
on established statistical tables, to predict the thickness of facial tissues over the skull at key
locations[
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. The rest of the tissue and musculature structure is built on top with a final
sculpt of facial details and addition of teeth, hair, and eyes to finish the singular outcome[
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ].
The context in which skeletal remains are found, including artifacts and burial practices, as
well as other knowledge of the person’s life, provides insight that might feed into the sculpting
process. Facial approximations created in this way have not usually reached a particularly
lifelike appearance, nor have they been flexible – one notable limitation being that multiple
hypothetical appearances cannot be created without manually repeating the time- and skill-intensive
process.
      </p>
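      <p>As a small illustration of the tissue-depth step described above, the sketch below offsets a few cranial landmarks outward along their surface normals. The landmark names and depth values are hypothetical placeholders; actual work draws depths from published anthropological tables matched to the individual’s demographic profile.</p>
      <preformat>
import numpy as np

# Hypothetical tissue depths (mm) at a few craniometric landmarks;
# real projects take these from published statistical tables.
TISSUE_DEPTHS_MM = {
    "glabella": 5.5,
    "nasion": 6.5,
    "mid_philtrum": 10.0,
    "pogonion": 10.5,
}

def place_depth_markers(landmarks, normals, depths=TISSUE_DEPTHS_MM):
    """Offset each skull landmark outward along its surface normal by
    the tabulated soft-tissue depth, producing marker endpoints that
    guide the sculpted skin surface."""
    markers = {}
    for name, depth in depths.items():
        p = np.asarray(landmarks[name], dtype=float)  # point on cranial scan
        n = np.asarray(normals[name], dtype=float)    # outward normal
        markers[name] = p + depth * (n / np.linalg.norm(n))
    return markers
      </preformat>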
      <p>
        Application of digital methods has allowed for more rapid development and iteration. Remains
can be scanned for 3D geometric information without having to craft molds, and more accurate
measurements can often be made across each step of a face reconstruction, replicating the
sculpting process but in more flexible representations. Automated techniques may also be used
to create reconstructions[
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ]. These methods all still primarily drive output of still images
rather than lifelike, animatable, performance-driven digital likenesses where hair and other
features may also be easily iterated or updated when new information becomes available.
      </p>
    </sec>
    <sec id="sec-4">
      <title>3. Methodology Development</title>
      <p>
        We suggest that in many ways digital reconstructions have still been limited to “single use”
through manual reconstruction by an individual and the approach taken in modeling software.
In contrast, population facial models in computer studies typically use a common, registered
topology; however, they fall short in detail and photorealism, and are not rigged for
animation[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Feature-film visual-effects work produces photorealistic digital likenesses that perform
and are cut together convincingly with video of the actual individuals but requires significant
skilled work to produce the animatable, rigged asset[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Video game technology in recent
years also approaches this level of fidelity but also requires skill and time[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. The recent
introduction of the freely available and rapidly adopted Epic Games’ Unreal Engine MetaHuman
system, though, has significantly facilitated building a digital-human likeness[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. The tools are
primarily focused on fictional character creation for the game industry, but the same technology
may be driven by facial scans or by archaeologically based reconstructions as we propose. Using
the Mesh-to-MetaHuman plugin, we can transfer the geometric topology and facial rigging
from scientifically based facial reconstructions to the MetaHuman framework[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. This novel
workflow for archaeological face reconstructions allows for the creation of dynamic, lifelike
approximations that are not just static images but can be animated, used interactively, and
rendered in various lighting and environments. The common geometric format allows a variety
of flexible applications, uses in facial studies based on statistics, and easier modification of
facial hair and other features. Performance may also be driven by video or an iPhone and
could be controlled by an actor or even generated by machine learning if desired for hypothetical
interactions with an individual that could be guided by historical research.
      </p>
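      <p>To make concrete what “animatable” means here, the following is a minimal numpy sketch of the blendshape logic that underlies face rigs of this kind. The shapes and weights are illustrative stand-ins only; the actual MetaHuman rig is far richer (joints, correctives, levels of detail).</p>
      <preformat>
import numpy as np

def blendshape_frame(neutral, deltas, weights):
    """Evaluate one animation frame of a simple blendshape face rig.

    neutral : (V, 3) vertex positions of the neutral face
    deltas  : (K, V, 3) per-shape vertex offsets (e.g., jaw-open, smile)
    weights : (K,) activations for this frame, e.g., streamed from a
              video- or iPhone-based performance-capture solver
    """
    return neutral + np.tensordot(weights, deltas, axes=1)

# Illustrative usage with random stand-in data:
V, K = 5000, 2
neutral = np.random.rand(V, 3)
deltas = 0.01 * np.random.randn(K, V, 3)
frame = blendshape_frame(neutral, deltas, np.array([0.6, 0.2]))
      </preformat>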
      <p>
        Included here is work in progress regarding one of the Hunley crewmen, built using digital
facial approximation with a novel workflow we introduce here. We present the workflow
followed by brief discussion considering aspects and limitations of the process. See Figure 1
for the sequence progression of the process, from left to right. The proposed workflow that
we are testing is the following, where steps 1-5 are digital versions of traditional methods, and
steps 6 onward introduce new techniques:
1. Scan and clean geometric data from cranial remains.
2. Scale cranial model correctly in 3D software such as Autodesk Maya.
3. Use any known information about individual to assist choosing facial-tissue depths from
anthropological tables.
4. Carefully mark fiducial points on cranial model to indicate tissue depths as in traditional
methods.
5. Overlay nose estimate and approximate facial musculature and fatty tissue, also in a
similar manner to traditional methods.
6. Use a parametric face model with a template mesh to conform roughly to geometry
constructed so far; as the parametric model can be guided by facial statistics, a base
“skin” mesh may be chosen somewhat similar to the demographic of the individual
being reconstructed. (Our model is built on scans from 3D Scan Store using Principal
Components Analysis (PCA), in a method similar to the 3D Morphable Model[
        <xref ref-type="bibr" rid="ref16">16</xref>
]; see the fitting sketch following this list).
7. After rough alignment of the base-skin mesh, iterate through broader to finer passes
of digital sculpting to have the mesh best match the underlying structures while also
matching overall facial structures and any known information about the individual.
8. At this point, individual texture maps might normally be created to use with lighting
and rendering; however, we use the Mesh-to-MetaHuman tool, as shown in Figure 3, to
generate a version of the geometry (along with rig, etc.) that matches the MetaHuman
format. (This is essentially an automated re-topology tool, on which we comment further
below). The procedural parameters of the tool are guided to best match known traits of
the individual, resulting in a near-realistic digital version (see the final image on the right
in Figure 1 or posed faces in Figure 2).
9. Once the MetaHuman mesh is brought back into Unreal Engine (or Maya or other
software), it can additionally be improved by adopting a photo-based texture map and normal
map that match known aspects of the individual. These could also include specular albedo
or other maps used to improve photo-realistic rendering and can be derived from scan
data[<xref ref-type="bibr" rid="ref17">17</xref>] or from a system such as VarIS[<xref ref-type="bibr" rid="ref18">18</xref>] or a Light Stage[<xref ref-type="bibr" rid="ref19">19</xref>]. These need to be
transformed to match the texture layout (UV-coordinates) of the MetaHuman mesh. This
can be performed using software such as Wrap[<xref ref-type="bibr" rid="ref20">20</xref>] or custom software[<xref ref-type="bibr" rid="ref21">21</xref>]. See Figure 5
for a sample of detail possible with this method.
      </p>
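      <p>Steps 6 and 7 can be grounded with a short sketch of how a PCA face model might be conformed to the marker-guided sculpt. This is a minimal illustration, not our full pipeline: it assumes vertex correspondence between template and target has already been established by the rough alignment of step 7, and the ridge term keeps the fit near the statistical mean, in the spirit of 3D-Morphable-Model-style fitting [16].</p>
      <preformat>
import numpy as np

def fit_pca_face(mean, basis, target, reg=1e-3):
    """Ridge-regularized least-squares fit of PCA coefficients so the
    template face conforms to the sculpted geometry (steps 6-7).

    mean   : (3V,) flattened mean face of the registered scan set
    basis  : (3V, K) principal components of the scan set
    target : (3V,) flattened vertices of the marker-guided sculpt
    reg    : regularization keeping the result near the mean
    """
    A = basis.T @ basis + reg * np.eye(basis.shape[1])
    b = basis.T @ (target - mean)
    coeffs = np.linalg.solve(A, b)
    fitted = mean + basis @ coeffs
    return fitted.reshape(-1, 3), coeffs
      </preformat>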
    </sec>
    <sec id="sec-5">
      <title>4. Discussion</title>
      <p>The method proposed here offers many benefits such as more rapid completion of realistic
versions of a facial approximation as well as generation of a template rig that can be used
for animation. As there are already several tools for working with the MetaHuman format,
performances may be used fairly easily to drive the facial animation and body animation of the
digital likeness. See Figure 3 for sample face poses as well as Figure 4 for sample body poses
(both made using the basic MetaHuman face representation from step 8 before photo-scan
texture detail had been added, per step 9, and shown in Figure 5). There are some limitations we
note. As the Mesh-to-MetaHuman tool performs an automated retopology based on landmarks,
as shown in Figure 3, it tends to make mistakes on a generic mesh with no facial texture which
need to be corrected manually. (One way to aid this in the future would be to apply an average
face texture to the base skin mesh before using the tool). Another consideration is the accuracy
of the topological conversion; we have not yet tested how close the mesh is to the original,
but we plan to do this in the future. It is visually similar, but it would be useful to quantify if
there are any areas of significant topological change going through the process that could affect
the facial approximation. Lastly, the current hair (and facial hair) grooms of the MetaHuman
system are limited and primarily contemporary in nature; many of them also only work in
the highest level-of-detail (LOD 0) within the game engine. Custom grooms may be created in
Maya or other software, though, per character-effects production methods and brought into
Unreal Engine for use in interactive applications; these, however, can take fairly significant
additional effort. We hope to develop a parametric system in the future for matching a variety
of hair grooms to facial models of the same common topology for further flexibility and speed.</p>
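      <p>The accuracy check described above could be scripted along the following lines: a sketch, assuming both meshes are already in the same coordinate frame, that reports one-sided vertex-to-surface deviation statistics from nearest-neighbor distances.</p>
      <preformat>
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(orig_verts, retopo_verts):
    """Distance from each retopologized vertex to the nearest vertex of
    the original reconstruction: a rough proxy for how much the
    Mesh-to-MetaHuman conversion displaced the surface."""
    tree = cKDTree(orig_verts)
    dists, _ = tree.query(retopo_verts)
    return {
        "rms": float(np.sqrt(np.mean(dists ** 2))),
        "mean": float(dists.mean()),
        "max": float(dists.max()),  # one-sided Hausdorff distance
    }
      </preformat>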
      <p>The tangible outcomes of the project will be the creation of photorealistic digital likenesses
of the members of the crew of the H.L. Hunley submarine. These likenesses will be fully
rigged 3D-character assets and can be used in museums and educational settings, offering
interactive experiences. This will also make them fully animatable, potentially providing
engaging storytelling and enhanced learning opportunities. The quality and flexibility of the
reconstructions may also aid in identifying historical photographs through collaboration, such as
with the team of Civil War Photo Sleuth[<xref ref-type="bibr" rid="ref22">22</xref>]. These may relate to the lives of the individuals and
tie more details together for particular studies[<xref ref-type="bibr" rid="ref23 ref24">23, 24</xref>]. Additionally, the updated and improved
facial reconstructions and interactive applications of this proposed project are envisioned to
have a prominent place in the creation of a digital presentation and display for the Friends of the
Hunley website (www.hunley.org), as well as for the proposed Maritime Museum currently in
development in the Charleston, South Carolina area. In the future, this will enable all public
visitors to experience the educational content and story.
</p>
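      <p>As a sketch of how a finished likeness might be compared against candidate archive portraits, the following uses the open-source face_recognition library; this is our illustrative assumption rather than Photo Sleuth’s own pipeline[23], and the file names are hypothetical.</p>
      <preformat>
import face_recognition

# Hypothetical files: a render of the approximation and a candidate
# archive portrait (e.g., surfaced via Civil War Photo Sleuth).
render = face_recognition.load_image_file("crewman_render.jpg")
archive = face_recognition.load_image_file("archive_portrait.jpg")

render_encoding = face_recognition.face_encodings(render)[0]
archive_encodings = face_recognition.face_encodings(archive)

# Smaller distance = more similar; ~0.6 is the library's usual cutoff.
for i, dist in enumerate(
        face_recognition.face_distance(archive_encodings, render_encoding)):
    print(f"face {i}: distance {dist:.3f}")
      </preformat>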
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Neyland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Brown</surname>
          </string-name>
          , H. L. Hunley Recovery Operations,
          <source>Naval History and Heritage Command</source>
          , Washington D.C.,
          <year>2016</year>
          . URL: https://www.history.navy.mil/research/underwater-archaeology/sites-and-projects/ship-wrecksites/hl-hunley/recovery-report.html.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Scafuri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Rennison</surname>
          </string-name>
          ,
          <article-title>Scanning the H.L. Hunley: Employing a structured-light scanning system in the archaeological documentation of a unique maritime artifact</article-title>
          ,
          <source>Journal of Archaeological Science: Reports</source>
          <volume>6</volume>
          (
          <year>2016</year>
          )
          <fpage>302</fpage>
          -
          <lpage>309</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S2352409X1630058X. doi:10.1016/j.jasrep.2016.02.023.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K.</given-names>
            <surname>Ellenberger</surname>
          </string-name>
          ,
          <article-title>Virtual and augmented reality in public archaeology teaching</article-title>
          ,
          <source>Advances in Archaeological Practice</source>
          <volume>5</volume>
          (
          <year>2017</year>
          )
          <fpage>305</fpage>
          -
          <lpage>309</lpage>
          . doi:10.1017/aap.2017.20.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Prag</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Neave</surname>
          </string-name>
          , Making Faces:
          <article-title>Using Forensic and Archaeological Evidence</article-title>
          , Texas A&amp;M University Press,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>K. T.</given-names>
            <surname>Taylor</surname>
          </string-name>
          , Forensic Art and Illustration, CRC Press,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>C.</given-names>
            <surname>Wilkinson</surname>
          </string-name>
          , Forensic Facial Reconstruction, Cambridge University Press,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hayes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Taylor</surname>
          </string-name>
          , A. Paterson,
          <article-title>Forensic facial approximation: An overview of current methods used at the victorian institute of forensic medicine/victoria police criminal identification squad</article-title>
          ,
          <source>The Journal of Forensic Odonto-Stomatology</source>
          <volume>23</volume>
          (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hayes</surname>
          </string-name>
          , 3D Facial Approximation Lab Manual,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S. D.</given-names>
            <surname>Greef</surname>
          </string-name>
          , G. Willems,
          <article-title>Three dimensional cranio-facial reconstruction in forensic identification: latest progress and new tendencies in the 21st century</article-title>
          ,
          <source>Journal of Forensic Sciences</source>
          <volume>50</volume>
          (
          <year>2005</year>
          ).
          URL: www.astm.org.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Navic</surname>
          </string-name>
          , et al.,
          <article-title>Facial reconstruction using 3-d computerized method: A scoping review of methods, current status, and future developments</article-title>
          ,
          <source>Legal Medicine</source>
          <volume>62</volume>
          (
          <year>2023</year>
          )
          <fpage>102239</fpage>
          . doi:10.1016/j.legalmed.2023.102239.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>B.</given-names>
            <surname>Egger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. A. P.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tewari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wuhrer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zollhoefer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Beeler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bernard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Bolkart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kortylewski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Romdhani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Theobalt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Blanz</surname>
          </string-name>
          , T. Vetter,
          <article-title>3d morphable face models-past, present, and future</article-title>
          ,
          <source>ACM Trans. Graph</source>
          .
          <volume>39</volume>
          (
          <year>2020</year>
          ). URL: https://doi.org/10.1145/3395208. doi:10.1145/3395208.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name><given-names>W.-D. K.</given-names> <surname>Ma</surname></string-name>
          ,
          <string-name><given-names>M.</given-names> <surname>Ghifary</surname></string-name>
          ,
          <string-name><given-names>J.</given-names> <surname>Lewis</surname></string-name>
          ,
          <string-name><given-names>B.</given-names> <surname>Choi</surname></string-name>
          ,
          <string-name><given-names>H.</given-names> <surname>Eom</surname></string-name>
          ,
          <article-title>FDLS: A deep learning approach to production quality, controllable, and retargetable facial performances</article-title>
          , in:
          <source>The Digital Production Symposium</source>
          , DigiPro '22, Association for Computing Machinery, New York, NY, USA,
          <year>2022</year>
          . URL: https://doi.org/10.1145/3543664.3543672. doi:10.1145/3543664.3543672.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>C.</given-names>
            <surname>Murphy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mudur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Holden</surname>
          </string-name>
          ,
          <string-name><given-names>M.-A.</given-names> <surname>Carbonneau</surname></string-name>
          ,
          <string-name><given-names>D.</given-names> <surname>Ghafourzadeh</surname></string-name>
          ,
          <string-name><given-names>A.</given-names> <surname>Beauchamp</surname></string-name>
          ,
          <article-title>Appearance controlled face texture generation for video game characters</article-title>
          , in:
          <source>Proceedings of the 13th ACM SIGGRAPH Conference on Motion, Interaction and Games</source>
          , MIG '20, Association for Computing Machinery, New York, NY, USA,
          <year>2020</year>
          . URL: https://doi.org/10.1145/3424636.3426898. doi:10.1145/3424636.3426898.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          Epic Games, MetaHuman Creator, Epic Games, Inc.,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          Epic Games, Mesh to MetaHuman, Epic Games, Inc.,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>V.</given-names>
            <surname>Blanz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Vetter</surname>
          </string-name>
          ,
          <source>A Morphable Model For The Synthesis Of 3D Faces</source>
          , 1 ed.,
          <source>Association for Computing Machinery</source>
          , New York, NY, USA,
          <year>2023</year>
          . URL: https://doi.org/10.1145/3596711.3596730.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] 3D Scan Store, https://www.3dscanstore.com, 2024. Accessed: February 2024.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] J. Baron, X. Li, P. Joshi, N. Itty, S. Greene, D. S. J. Dhillon, E. Patterson, VarIS: Variable Illumination Sphere for Facial Capture, Model Scanning, and Spatially Varying Appearance Acquisition, in: F. Banterle, G. Caggianese, N. Capece, U. Erra, K. Lupinetti, G. Manfredi (Eds.), Smart Tools and Applications in Graphics - Eurographics Italian Chapter Conference, The Eurographics Association, 2023. doi:10.2312/stag.20231292.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] P. E. Debevec, The light stages and their applications to photoreal digital actors, in: International Conference on Computer Graphics and Interactive Techniques, 2012. URL: https://api.semanticscholar.org/CorpusID:6088120.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] Wrap, https://faceform.com, 2023. Accessed: February 2024.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] E. Patterson, J. R. Baron, D. Simpson, Landmark-based re-topology of stereo-pair acquired face meshes, in: International Conference on Computer Vision and Graphics, 2018. URL: https://api.semanticscholar.org/CorpusID:52276389.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] Civil War Photo Sleuth, https://www.civilwarphotosleuth.com/, 2024.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] V. Mohanty, D. Thames, S. Mehta, K. Luther, Photo Sleuth: combining human expertise and face recognition to identify historical portraits, in: Proceedings of the 24th International Conference on Intelligent User Interfaces, IUI '19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 547-557. URL: https://doi.org/10.1145/3301275.3302301. doi:10.1145/3301275.3302301.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] Civil War photos: New technologies bring to life faces from the past, https://time.com/5749059/civil-war-photos/, 2019.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>