<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>3D4ALL: Toward an Inclusive Pipeline to Classify 3D Contents</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nahyun Kwon</string-name>
          <email>nahyunkwon@tamu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chen Liang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jeeeun Kim</string-name>
          <email>jeeeun.kim@tamu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>HCIED Lab, Texas A&amp;M University</institution>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Algorithmic content moderation manages the explosive number of user-created items shared online every day. Despite the massive number of 3D designs that are free to be downloaded, shared, and 3D printed by users, detecting their sensitivity with transparency and fairness has been controversial. Although sensitive 3D content might have a greater impact than other media due to its reproducibility and replicability without restriction, prevailing unawareness has resulted in a proliferation of sensitive 3D models online and a lack of discussion on transparent and fair 3D content moderation. As 3D content exists as a web document consisting mainly of text and images, we first study the existing algorithmic efforts based on text and images, as well as prior endeavors to encompass transparency and fairness in moderation, which can also be useful in the 3D printing domain. At the same time, we identify 3D-specific features that should be addressed to advance 3D-specialized algorithmic moderation. As a potential solution, we suggest a human-in-the-loop pipeline using augmented learning, powered by various stakeholders with different backgrounds and perspectives in understanding the content. Our pipeline aims to minimize personal biases by enabling diverse stakeholders to be vocal in reflecting the various factors used to interpret the content. We add our initial proposal for redesigning the metadata of open 3D repositories, to invoke users' responsible action of obtaining consent from the subject before sharing content for free in public spaces.</p>
      </abstract>
      <kwd-group>
<kwd>3D printing</kwd>
        <kwd>sensitive contents</kwd>
        <kwd>content moderation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
<p>To date, many social media platforms have observed an explosive number of user-created posts every day, from Twitter to YouTube to Instagram and more. Following the acceleration of online content, which has become even faster partly due to COVID-19, it has also become easier for people to access sensitive content that may not be appropriate for general audiences. Owing to the scale of this content and users' ability to share and repost it in a flash, it becomes extremely costly to detect sensitive content solely by manual work. Current social media platforms have adopted various (semi-)automated content moderation methods, including deep learning-based classification (e.g., Microsoft Azure Content Moderator [1], DeepAI's Nudity Detection API [2], Amazon Rekognition Content Moderation [3]).</p>
        <p>Meanwhile, since desktop 3D printers have flooded into the consumer market, 3D printing specific social platforms such as Thingiverse [4] have also gained popularity, contributing to the proliferation of shared 3D contents that are easily downloadable and replicable among community users. Despite the massive number of 3D contents shared for free to date—as of Q2 2020, there are nearly 1.8 million 3D models available for download, excluding empty entries due to post deletion—there has been relatively little attention to sensitive 3D contents. This might result in not only a lack of a dataset to be used as a benchmark, but also a lack of discussion on fair rationales to be utilized in building an algorithmic 3D content moderation that integrates the perspectives of everyone with a different background. Along with significant advances in the technology of machine mechanisms and materials (e.g., 3D printing in metals), the 3D printing community may present an even greater impact from the spread of content due to its limitless potential for replication and reproduction. In view of various stakeholders who have different perspectives in consuming and interpreting contents—from K-12 teachers who may seek 3D files online to design curricula, to artists who depict their creativity in digitized 3D sculptures—moderating 3D content with fairness becomes more challenging. 3D contents online often consist of images and text that are possibly useful for adopting existing moderation schemes, including text-based (e.g., [5, 6, 7, 8]) or image-based (e.g., [9, 10, 11]) approaches. However, there exist 3D printing specific features (e.g., print supports to avoid overhangs, uni-colored outcomes, segmentation into parts, etc.) that may prevent direct adoption of those schemes, requiring further consideration in implementing advanced 3D content moderation techniques.</p>
        <p>In this work, we first study the existing content moderation efforts that have potential to be used in 3D content moderation, and discuss shared concerns in examining transparency and fairness issues in algorithmic content moderation. As a potential solution, we propose a semi-automated human-in-the-loop validation pipeline using augmented learning that incrementally trains the model with input from the human workforce. We highlight potential biases that are likely to be propagated from the different perspectives of human moderators who provide final decisions and labeling for re-training a classification model. To mitigate those biases, we propose an image annotation interface to develop an explainable dataset and a system that reflects various stakeholders' perspectives in understanding the 3D content. We conclude with initial recommendations for metadata design to (1) require consent and (2) inform previously unaware users of consent for publicizing content which might invade copyright or privacy.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Algorithmic Content Moderation</title>
      <p>Manual moderation relying on a few trusted human workers and voluntary reports has been a common solution to review shared contents. Unfortunately, it becomes increasingly difficult to meet the demands of growing volumes of users and user-created content [12]. Algorithmic content moderation has taken an important place in popular social media platforms to prevent various sensitive content in real time, including graphic violence, sexual abuse, harassment, and more. As with other media posts, 3D contents available online appear as web documents that consist of images and text. For example, to attract audiences and help others understand the design project, creators in Thingiverse voluntarily include various information such as written descriptions of the model and tags, as well as photos of a 3D printed design; thus, 3D content can provide us an ample opportunity to employ the existing text and image based moderation schemes.</p>
      <p>Among various text-based solutions, sentiment analysis is one traditionally popular approach that categorizes input text into either two or more categories: positive and negative, or more detailed n-point scales (e.g., highly positive, positive, neutral, negative, highly negative) [5, 6]. Moderators can consider categorization results in deciding whether the content is offensive or discriminatory [13]. Various classifiers, such as the logistic regression model, support vector machine, and random forest, are actively used in detecting misogynistic posts on Twitter (e.g., [7, 8]). The Perspective API [14], suggested by Jigsaw and Google's Counter Abuse Technology, provides a score of how toxic (i.e., rude, disrespectful, or unreasonable) a text comment is, using a machine learning (ML) model that was trained on people's ratings of internet comments.</p>
      <p>With the rapid improvement of Computer Vision (CV) technologies with machine learning, several image datasets (e.g., the NudeNet Classifier dataset [15]) and moderation APIs enable developers to apply these ready-to-use mechanisms to their applications. For example, Microsoft Azure Content Moderator [1] classifies adult images into several categories, such as explicitly sexual in nature, sexually suggestive, or gory. DeepAI's Nudity Detection API [2] enables automatic detection of adult images and adult videos. Amazon Rekognition content moderation [3] detects inappropriate or offensive features in images and provides detected labels and prediction probabilities. However, many off-the-shelf services and APIs are often obscured, because it is hard for users to expect that the models are trained with fair ground truths that can offer reliable results to various stakeholders with different cultural or social backgrounds without any biases, which we discuss in a more detailed way in the following section.</p>
      <sec id="sec-2-1">
        <title>2.1. Challenges in Moderating 3D Content</title>
        <p>As we noted earlier, 3D contents appear as web documents that consist of text descriptions, auto-generated preview images, and user-uploaded images to help others comprehend the content at a glance. Although it is technically possible to utilize existing text and image based moderation schemes, 3D models have unique features that make it hard to directly adopt the existing CV techniques to their rendered images or photos.</p>
        <sec id="sec-2-1-1">
          <title>2.1.1. 3D specific features that hamper the use of existing CV techniques</title>
          <p>We identified four characteristics that make sensitive elements undetectable by the existing algorithms.</p>
          <p>Challenge 1. Difficulties in Locating Features from Images of the Current Placement. Thingiverse automatically generates rendered images of the 3D model when a 3D file is uploaded, and this is used as a representative image if the designer does not provide any photos of real 3D prints. In many cases, these files are placed in the best orientation that guarantees print success on FDM (Fused Deposition Modeling) printers, aligning the design to minimize overhangs. As the preview is taken at a fixed angle, it might not be a perfect angle that shows the main part of the model thoroughly (e.g., Fig 1(a)). This hinders the existing image-based APIs from accurate detection of sensitivity in the preview images, because sensitive parts might not be visible.</p>
          <p>Challenge 2. Support Structures that Occlude the Features. Following the model alignment strategy of FDM printing, designers often include a custom support structure to prevent overhangs, and to avoid both printing failures and the deteriorating surface textures caused by auto-generated supports from slicers (i.e., 3D model compilers) such as Cura [16]. These special structures easily occlude the design's significant features (e.g., Fig 1(b)). Since the model is partly or completely occluded, the existing CV techniques barely detect the sensitivity of the design.</p>
          <p>Figure 1: (a) Rotated model; (b) support structure; (c) texture on surface; (d) divided into parts.</p>
          <p>Challenge 3. Texture and Colors. Current 3D printing technologies enable users to use various print settings and other post-processing techniques. Accordingly, the printed model may present unique appearances compared to general real-world entities. Often the model is single-colored and can have a unique texture, such as linear lines on the surface (e.g., Fig 1(c)), due to the nature of 3D printing mechanisms of accumulating materials layer by layer, which might let the existing CV algorithms overlook the features.</p>
          <p>Challenge 4. Models Separated into Parts for Printing. As one common 3D printing strategy to minimize printing failures with complex 3D designs such as a human body, many designers divide their models into several parts to ease the printing process, and let users post-assemble them, as shown in Fig 1(d). In this case, it is hard for the existing CV techniques to get the whole assembled model, resulting in a failure to recognize its sensitivity.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3. Transparency and Fairness Issues in Content Moderation</title>
      <p>Content moderation has long been
controversial due to its non-transparent and
secretive process [17], which results from a lack of
explanation for community members about
how the algorithm works. To meet the
growing demands for transparent and accountable
moderation practice as well as to elevate
public trust, recently, popular social media
platforms have begun to dedicate their efforts to
make their moderation process more
obvious and candid [17, 18, 19, 20]. As a
reasonable starting point, those services
provided detailed terms and policies (e.g.,
Facebook’s Community Standards [21])
describing the bounds of acceptable behaviors on the
platform [17]. In 2018, as a collective effort,
researchers and practitioners proposed the
Santa Clara Principles on Transparency and
Accountability in Content Moderation (SCP)
[22]. SCP suggests one requirement that
social media platforms should provide detailed
guidance to the members about which
content and behaviors are discouraged,
including examples of permissible and
impermissible content, as well as an explanation of how automated tools are used across each category of content. It also recommends that content moderators give users a rationale for content removal, to assure them of what happens behind the content moderation.</p>
      <p>Making the moderation process transparent and explainable is crucial to the success of the community [23], in order not only to maintain its current scale but also to invite new users, because it may affect users' subsequent behaviors. For example, given no explanation about a content removal, users are less likely to upload new posts in the future or may leave the community, because they may believe that their content was treated unfairly and thus get frustrated owing to an absence of communication [24]. Reddit [25], one of the most popular social media platforms, has equipped volunteer-based moderation schemes, resulting in the removal of almost one fifth of all posts every day [26] due to violation of its community policy [27] (e.g., Rule 4: Do not post or encourage the posting of sexual or suggestive content involving minors.) or individual rules of the subreddits (i.e., subcommunities of Reddit, each with a specific topic) according to their own objectives (e.g., one of the rules in the 3D printing subreddit: "Any device/design/instructions which are intended to injure people or damage property will be removed."). Users who are aware of community guidelines or receive explanations for content removal are more likely to perceive that the removal was fair [24] and showcase more positive behaviors in the future. As many social platforms, including 3D open communities such as Thingiverse, highly rely on voluntary posting of user-created content [28], the role of a transparent system in content moderation becomes more significant in maintaining the communities themselves.</p>
      <sec id="sec-4-1">
        <title>3.1. Transparency: Black Box that Lacks Explanation</title>
        <p>Even though many existing social media platforms are in full gear to implement artificial intelligence (AI) in content moderation, it has long been a black box [23], not understandable for users due to the complexity of the ML model. To address the issue of the uninterpretable model that hinders users from understanding how it works, researchers have shed light on this blind spot by studying various techniques to make the model explainable (e.g., [29, 30, 31]). Explainability has been on the rise as an effective way of enhancing the transparency of ML models [32]. In order to secure explainability, the system must enable stakeholders to understand the high-level concepts of the model, the reasoning used by the model, and the model's resulting behavior [33]. For example, as shown in the Fairness, Accountability, and Transparency (FAT) model, supporting users to know which variables are important in the prediction and how they will be combined is one powerful way to enable them to understand and finally trust the decision made by the model [34].</p>
      </sec>
      <sec id="sec-4-2">
        <title>3.2. Fairness: Implicit Bias and Inclusivity Issues</title>
        <p>People often overlook the fairness of the moderation algorithm and tend to believe that the systems automatically make unbiased decisions [35]. In fact, the human adjudication of user-generated content has occurred in secret and for relatively low wages by unidentified moderators [36]. On some platforms, users are even unable to know the presence of moderators or who they are [37], and thus it is hard for them to know what potential bias, owing to different reasoning processes, has been injected into the moderation procedure. For example, there have been worldwide actions that strongly criticize the sexualization of women's bodies without inclusive inference (e.g., the 'My Breasts Are Not Obscene' protest by the global feminist group Femen [38] to denounce a museum's censorship of nudity). Similarly, Facebook's automatic turning down of postings and selfies that include women's topless photos by tagging them as Sexual/Porn ignited the 'My Body is not Porn' movement [39, 40]. The different points of view in perceiving and reasoning about the same piece of work make it hard to decide absolute sensitivity. It is nearly impossible that a sole group of users represents all; therefore, it is difficult for users to expect a ground truth in the decision-making process and to trust the result while believing experts made the final decisions based on thoughtful consideration with an unbiased rationale.</p>
        <p>Subsequently, many studies (e.g., [41, 42]) have explored potential risks of algorithmic decision-making that are potentially biased and discriminatory against certain groups of people, such as underrepresented groups of gender, race, and disability. The classifier has been one common approach in content moderation, but developing a perfectly fair set of classifiers for content moderation is complex compared to those in common recommendation or ranking systems, as classifiers tend to inevitably embed a preference for a certain group over others in deciding whether content is offensive or not [17].</p>
      </sec>
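<p>The embedded preference described above can be surfaced by a simple audit. The following sketch is ours, not from the paper, and the group labels are hypothetical: it measures the rate at which a moderation classifier flags items per content group, the kind of disparity measurement that makes an implicit bias visible.</p>

```python
from collections import defaultdict

def flag_rate_by_group(decisions):
    """Fraction of items flagged as sensitive, per content group.

    `decisions` is a list of (group, flagged) pairs, where `flagged` is the
    boolean output of the moderation classifier. A large gap between groups
    hints at the embedded preference discussed above.
    """
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for group, is_flagged in decisions:
        totals[group] += 1
        if is_flagged:
            flagged[group] += 1
    return {g: flagged[g] / totals[g] for g in totals}

# Hypothetical audit log: the same nudity cues, in two different contexts.
decisions = [
    ("art_nude_statue", True), ("art_nude_statue", True), ("art_nude_statue", False),
    ("figurine", False), ("figurine", False), ("figurine", True),
]
rates = flag_rate_by_group(decisions)
# rates["art_nude_statue"] is 2/3 while rates["figurine"] is 1/3: identical
# cues flagged at very different rates depending on content group.
```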
    </sec>
    <sec id="sec-5">
      <title>3.3. Transparency &amp; Fairness Issues in 3D Content Moderation</title>
      <p>Through a text feature based classification, we identified three main categories of sensitive 3D content: (1) sexual/suggestive, (2) dangerous weaponry, and (3) drug/smoke. Due to the capability of unlimited replication and reproduction in 3D printing, unawareness of these 3D contents could be crucial. We noticed that Thingiverse limits access to some sensitive things that are currently labeled as NSFW (Not Safe for Work) by replacing their thumbnail images with black warning images. It is a secretive process because there are no clear rationales or explanations offered to users behind this process. Therefore, users cannot expect whether Thingiverse operates based on an unbiased and fair set of rules.</p>
      <p>While the steep acceleration of increments of 3D models [43] is making automatic detection of sensitive 3D content imperative, moderating 3D content also faces fairness issues, and users are suffering from lacking explanations. We need to take into account various stakeholders' points of view that affect their decisions on potentially sensitive 3D content, as well as further discussions to mitigate bias and discrimination of the algorithmic decision-making system. Here we propose an explainable human-in-the-loop 3D content moderation system to enable various users who have distinct rules to participate in calibrating algorithmic decisions, to decrease bias or discrimination of the algorithm itself. Although we focus on specific issues in shared 3D content online, our proposed pipeline generally applies to advancing a semi-automatic process toward an explainable and fair content moderation for all.</p>
    </sec>
    <sec id="sec-6">
      <title>4. Towards an Explainable 3D Moderation System</title>
      <p>A potential solution to examine 3D contents' sensitivity with fairness is employing a human workforce with ample experience in observing and perceiving with various perspectives. We suggest a human-in-the-loop pipeline, based on the idea of incremental learning [44], in which the human workforce can collaborate with an intelligent system, concurrently classifying data inputs and annotating features with the explanation for the decision.</p>
      <sec id="sec-6-1">
        <title>4.1. Building an Inclusive Moderation Process</title>
        <p>Making decisions on the sensitivity of a 3D model can be subjective due to various factors such as cultural differences, the nature of the community, and the purpose of navigating 3D models. To reflect different angles in discerning the nature and intention of contents, we need to deliberate various interpretations taken from various groups of people. For example, there are lots of 3D printable replicas of artistic statues or Greek sculptures that are reconstructed by 3D scanning of the originals in museums [45]. Speculatively, K-12 teachers designing their STEAM education curriculum using 3D models are not likely to want any NSFW designs revealed in their search results. On the other hand, there are many activists and artists who may want to investigate the limitless potential of the technology, sharing a 3D scanned copy of the naked body of oneself [46] or digitizing nude sculptures available in the museum to make these intellectual assets accessible to everyone. The nude sculpture has been one popular form of artistic creation in history, and it is not simple to stigmatize these works as 'sensitive'. Everyone has their own right to 'leave the memory of self' in a digital form. Forcing a preset threshold of sensitivity to filter this wide array of user-created contents could unfairly treat one's creative freedom. As the extent to which various stakeholders perceive sensitivity could be distinct, our objective is to design an inclusive process in accepting and adopting the sensitivity.</p>
      </sec>
      <sec id="sec-6-2">
        <title>4.2. Solution 1: Human-in-the-Loop with Augmented Learning</title>
        <p>Automated content moderation could help review a vast amount of data and provide filtered cases for humans to support a decision-making process [24], if we echo diverse perspectives in understanding 3D contents well. In our proposal of the human-in-the-loop pipeline (Fig 2(a)), an input image dataset of 3D models will be used for the initial model training; then the result will be reviewed by multiple human moderators step by step. We trained the model with 1,077 things that are already labeled as NSFW by Thingiverse and 1,077 randomly selected non-NSFW things. All input images are simply categorized as NSFW or not, with no annotation of specific image features to provide the reasoning. Human moderators recruited from various groups of people then review whether they agree with the classification results. They are asked to annotate, using a bounding box, the image segments they referred to in making the final decision, together with the category. At the same time, they provide a rough level of how much the part affected the entire sensitivity and a written rationale for the decision. These features will enhance the data quality to be used to fine-tune the model with the weighted score; thus, the model becomes able to recognize previously unknown sensitive models based on similarity and can now explain sensitive features. When two different groups of people with different standards do not agree on the same model's classification results, the model uses their decisions, annotated features, and levels of sensitivity to differentiate the extent of perceived sensitivity and reflect it in a different threshold. For example, if one moderator thinks that the model is sensitive while the other does not, the model will have a higher threshold in categorizing the content. Different decisions on the same model could finally be brought to the table for further discussion if needed, for example to regulate policy guidelines, or used as search criteria for other community users who have similar goals in viewing and unlocking analogous 3D contents.</p>
        <p>Figure 2: (a) Human-in-the-loop pipeline; (b) user interface mockup.</p>
        <p>To summarize, one iteration contains the following steps:</p>
        <list list-type="order">
          <list-item><p>The pre-trained model presents prediction results.</p></list-item>
          <list-item><p>The human moderator can enter disagreement/agreement with the results and annotate sensitive parts with a sensitivity level and a decision rationale.</p></list-item>
          <list-item><p>The annotated image is used to fine-tune the model.</p></list-item>
          <list-item><p>If the decision for the image is different from those of other moderators, annotations and sensitivity levels are used to set a different threshold.</p></list-item>
        </list>
        <p>We elaborate more on feedback from the moderators by showing three possible scenarios: (1) the moderator's agreement with the prediction results, (2) sensitive parts not detected, and (3) false classification of insensitive features as sensitive.</p>
        <p>Case 1. Agreement with the Prediction Result. In case the moderators agree with the decision, they can either finalize it or reject the classification, by selecting provided top-level categories (e.g., sexual/suggestive, weaponry, drug/smoke) and second-level categories (e.g., under sexual/suggestive: explicit nudity, adult toys, sexual activity, etc.). We currently refer to the two-level hierarchical taxonomy of Amazon Rekognition to label categories of inappropriate or offensive content.</p>
        <p>Case 2. Sensitive Parts Ignored by the Algorithm. Another possible case is that a specific feature in the image that the moderator perceives as sensitive is missing from the detection results. In this case, human moderators can label that part and provide rationales using the enter-the-level-of-sensitivity field, from 1 (slightly sensitive) to 5 (highly sensitive), indicating how each specific part affects the entire sensitivity of the model.</p>
        <p>Case 3. False Positive. It is also possible that some parts detected by the model are not sensitive for the moderator, due to a higher tolerance to sensitivity. The moderator can either submit the disagreement or provide more detailed feedback by excluding specific results.</p>
        <p>Different degrees of sensitivity perception from various stakeholders can reflect distinct points of view, which may manifest fairness in algorithmic moderation through multiple iterations of this process. In our interface for end-users that assists in searching 3D designs, we let users set their desired threshold. For those who might find it difficult to decide on a threshold that perfectly fits their need, we show several random example images that have detected sensitive labels at the corresponding threshold. This pipeline also helps obtain an explainable moderation algorithm. Our model can help users understand the rationales of the model by locating detected features and prediction probabilities in the image and providing the written descriptions that the moderators entered for data classification.</p>
      </sec>
      <sec id="sec-6-3">
        <title>4.3. Solution 2: New Metadata Design to Avoid Auto-Filtering</title>
        <sec id="sec-6-3-1">
          <p>Another potential problem in open 3D communities is copyright- or privacy-invasive contents that are immediately marked as NSFW by Thingiverse, indicating they are inappropriate. Currently, Thingiverse lacks notification and explanation for content removal, while a majority of such contents might invade copyrights. Its obscurity results in a negative impact on users' future behaviors. For example, creators frustrated at the un-notified removal of their content have decided to quit their membership (e.g., [47]), which might not happen if they saw an informative alert when they posted the content.</p>
          <p>Still, this process relies on users' voluntary action: given no official guidelines, there is a lack of awareness that users must be granted consent to upload possibly privacy-invasive contents at the time of posting them in public spaces, regardless of commercial purpose. Without explicit consent, the content is very likely to be auto-filtered by Thingiverse, which decreases fairness by hampering artistic/creative freedom. To iron out a better content-sharing environment in these open communities, a redesign of metadata that invokes responsible actions must be considered and adopted by system admins. For example, providing a checkbox that asks "If the design is made of a 3D scanned human subject, I got an agreement from the subject" can inform previously unaware users about the need for permission to post potentially privacy-breaching contents. Including the subject's consent can also protect creative freedom from auto-filtering, by attesting that the content is not breaching copyright or privacy and can be shared in public spaces. In addition, it can enable users to understand that an absence of consent could be the reason for filtering.</p>
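          <p>As a minimal sketch of how such a metadata redesign could be enforced transparently (the field and function names below are our own illustrative assumptions, not Thingiverse's actual schema or API), an upload can be routed with an explanation instead of being silently auto-filtered:</p>

```python
from dataclasses import dataclass

@dataclass
class ThingMetadata:
    """Hypothetical upload metadata, extended with the proposed consent field."""
    title: str
    tags: tuple
    is_3d_scan_of_person: bool
    subject_consent_given: bool  # value of the proposed checkbox

def moderation_route(meta: ThingMetadata) -> str:
    """Route an upload and state the reason, rather than removing it without notice."""
    if meta.is_3d_scan_of_person and not meta.subject_consent_given:
        return "blocked: subject consent is missing for a human 3D scan"
    return "published"

upload = ThingMetadata("Full-body scan", ("3D_scan",), True, True)
# moderation_route(upload) returns "published"; flipping consent to False
# yields an explanation the uploader can act on.
```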
          <p>Along with advanced 3D scanning technologies [48], many creators are actively sharing 3D scanned models (e.g., as of December 2020, Thingiverse has 1,150 things tagged with '3D_scan' and 308 things with the tag '3D_scanning'). With arising concerns over possible privacy invasion in sensitive 3D designs, what caught our attention is 3D scanned replicas of human bodies. Many of them do not include an explicit description of whether the creator received consent from the subject (e.g., [49, 50]). Some designers quoted the subject's permission; for example, one creator describes that the subject, Nova, has agreed to share her scanned body on Thingiverse [51].</p>
          <sec id="sec-7">
            <title>5. Conclusion</title>
            <p>As an inclusive process to develop a transparent and fair moderation procedure in 3D printing communities, our study proposes to build an explainable human-in-the-loop pipeline. We aim to employ a diverse group of human moderators to collect their rationales, which can be used to enhance the model's incremental learning. Our objective is not to censor 3D content but to build a pleasant 3D printing community for all, by safeguarding search as well as guaranteeing creative freedom, through the pipeline and a new metadata design that has the potential to minimize issues related to privacy or copyright.</p>
          </sec>
          <p>
</p>
          <p>[20] M. MacCarthy, Transparency requirements for digital social media platforms: Recommendations for policy makers and industry, Transatlantic Working Group (2020).</p>
          <p>[21] Facebook, Community standards, https://www.facebook.com/communitystandards/, 2020. (Accessed on 12/12/2020).</p>
          <p>[22] Queensland University of Technology (QUT), U. o. S. C., E. F. F. (EFF), The Santa Clara Principles on Transparency and Accountability in Content Moderation, 2018. URL: https://santaclaraprinciples.org/. (Accessed on 12/16/2020).</p>
          <p>[23] P. Juneja, D. Rama Subramanian, T. Mitra, Through the looking glass: Study of transparency in Reddit's moderation practices, Proceedings of the ACM on Human-Computer Interaction 4 (2020) 1–35.</p>
          <p>[24] S. Jhaver, D. S. Appling, E. Gilbert, A. Bruckman, "Did you suspect the post would be removed?": Understanding user reactions to content removals on Reddit, Proceedings of the ACM on Human-Computer Interaction 3 (2019) 1–33.</p>
          <p>[25] Reddit, https://www.reddit.com/, 2005.</p>
          <p>[26] S. Jhaver, D. S. Appling, E. Gilbert, A. Bruckman, "Did you suspect the post would be removed?": User reactions to content removals on Reddit, Proceedings of the ACM on Human-Computer Interaction 2 (2018).</p>
          <p>[27] Reddit, Reddit content policy, 2020.</p>
          <p>[29] B. Letham, C. Rudin, T. H. McCormick, D. Madigan, et al., Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model, The Annals of Applied Statistics 9 (2015) 1350–1371.</p>
          <p>[30] T. Wang, C. Rudin, F. Doshi-Velez, Y. Liu, E. Klampfl, P. MacNeille, A Bayesian framework for learning rule sets for interpretable classification, The Journal of Machine Learning Research 18 (2017) 2357–2393.</p>
          <p>[31] A. A. Freitas, Comprehensible classification models: a position paper, ACM SIGKDD Explorations Newsletter 15 (2014) 1–10.</p>
          <p>[32] B. Lepri, N. Oliver, E. Letouzé, A. Pentland, P. Vinck, Fair, transparent, and accountable algorithmic decision-making processes, Philosophy &amp; Technology 31 (2018) 611–627.</p>
          <p>[33] U. Bhatt, A. Xiang, S. Sharma, A. Weller, A. Taly, Y. Jia, J. Ghosh, R. Puri, J. M. Moura, P. Eckersley, Explainable machine learning in deployment, in: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020, pp. 648–657.</p>
          <p>[34] H. Lakkaraju, S. H. Bach, J. Leskovec, Interpretable decision sets: A joint framework for description and prediction, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1675–1684.</p>
          <p>[35] S. Garfinkel, J. Matthews, S. S. Shapiro, J. M. Smith, Toward algorithmic
transURL: https://www.redditinc.com/ parency and accountability,
Communipolicies/content-policy, (Accessed on cations of the ACM 60 (2017) 5–5.
12/12/2020). [36] S. T. Roberts, Behind the screen:
Con[28] F. Çömlekçi, Custodians of the internet: tent moderation in the shadows of
soPlatforms, content moderation, and the cial media, Yale University Press, 2019.
hidden decisions that shape social me- [37] S. T. Roberts, Commercial content
moddia, Communication Today 10 (2019) eration: Digital laborers’ dirty work
165–166. (2016).
[38] S. Cascone, Topless feminist protestors
famous-sculptures-statues-artworkshit the musee d’orsay after the mu-
download-3d-print-rodins-thinkerseum tried to bar a visitor for wearing michelangelos-david-more.html,
a low-cut dress, 2020. URL: https: (Accessed on 12/23/2020).
//news.artnet.com/art-world/femen- [46] B. Mufson, Art made from human
stage-protest-musee-dorsay-1908260, body scans | gif six-pack, 2016. URL:
(Accessed on 12/22/2020). https://www.vice.com/en/article/
[39] B. Korea-savvy, Activists claim nz4kq7/3D-scanning-gifs, (Accessed on
“my body is not porn!”, http: 12/23/2020).
//koreabizwire.com/activists-claim- [47] VidovicArts, I’m quitting thingiverse,
my-body-is-not-porn/119529, 2018. 2020. URL: https://www.youtube.com/
(Accessed on 09/08/2020). watch?v=UPRCE8FsSak.
[40] My body is not your porn, 2020. URL: [48] All3DP, 2020 best 3d
scanhttps://www.facebook.com/pages/ ners (december), 2020. URL:
category/Community/My-Body-Is-
https://all3dp.com/1/best-3d-scannerNot-Your-Porn-106365187645422/. diy-handheld-app-software/, (Accessed
[41] S. L. Blodgett, L. Green, B. O’Connor, on 12/23/2020).</p>
          <p>Demographic dialectal variation in so- [49] Tob1112, 3d body scan amber, 2015.
cial media: A case study of african- URL: https://www.thingiverse.com/
american english, arXiv preprint thing:1052758.</p>
          <p>arXiv:1608.08868 (2016). [50] ThreeForm, Mel - "column 2" pose,
[42] R. Binns, M. Veale, M. Van Kleek, 2017. URL: https://www.thingiverse.</p>
          <p>N. Shadbolt, Like trainer, like bot? in- com/thing:2688184.
heritance of bias in algorithmic con- [51] ThreeForm, Nova - "pose 3", 2017. URL:
tent moderation, in: International con- https://www.thingiverse.com/thing:
ference on social informatics, Springer, 2461567.</p>
          <p>2017, pp. 405–415.
[43] B. Wire, Makerbot thingiverse
celebrates 10 years of 3d printed things,
2018. URL: https://financialpost.com/
pmn/press-releases-pmn/businesswire-news-releases-pmn/makerbotthingiverse-celebrates-10-years-of3d-printed-things, (Accessed on
12/10/2020).
[44] M. Längkvist, M. Alirezaie, A. Kiselev,</p>
          <p>A. Loutfi, Interactive learning with
convolutional neural networks for image
labeling, in: IJCAI 2016, 2016.
[45] C. Marshall, 3d scans of 7,500
famous sculptures, statues &amp; artworks:
Download &amp; 3d print rodin’s thinker,
michelangelo’s david &amp; more, 2017.</p>
          <p>URL: https://www.openculture.
com/2017/08/3d-scans-of-7500</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>