3D4ALL: Toward an Inclusive Pipeline to
Classify 3D Contents
Nahyun Kwon, Chen Liang and Jeeeun Kim
HCIED Lab, Texas A&M University



Abstract

Algorithmic content moderation manages the explosive number of user-created contents shared online every day. Despite the massive number of 3D designs that are free to download, share, and 3D print, detecting their sensitivity with transparency and fairness has been controversial. Although sensitive 3D content might have a greater impact than other media due to its unrestricted reproducibility and replicability, prevailing unawareness has resulted in a proliferation of sensitive 3D models online and a lack of discussion on transparent and fair 3D content moderation. As 3D content exists on the web as a document mainly consisting of text and images, we first study existing algorithmic efforts based on text and images, and prior endeavors to encompass transparency and fairness in moderation, which can also be useful in the 3D printing domain. At the same time, we identify 3D specific features that should be addressed to advance 3D-specialized algorithmic moderation. As a potential solution, we suggest a human-in-the-loop pipeline using augmented learning, powered by various stakeholders with different backgrounds and perspectives in understanding the content. Our pipeline aims to minimize personal biases by enabling diverse stakeholders to be vocal about the various factors that shape how content is interpreted. We add our initial proposal for redesigning the metadata of open 3D repositories, to invoke users' responsible action of obtaining consent from the subject when sharing content for free in public spaces.

Keywords
3D printing, sensitive contents, content moderation


Joint Proceedings of the ACM IUI 2021 Workshops, April 13-17, 2021, College Station, USA
nahyunkwon@tamu.edu (N. Kwon); cltamu@tamu.edu (C. Liang); jeeeun.kim@tamu.edu (J. Kim)
https://nahyunkwon.github.io/ (N. Kwon); http://www.jeeeunkim.com/ (J. Kim)
ORCID: 0000-0002-2332-0352 (N. Kwon); 0000-0003-1645-2397 (C. Liang); 0000-0002-8915-481X (J. Kim)
© 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.


1. Introduction

To date, many social media platforms have observed an explosive number of user-created posts every day, from Twitter to YouTube to Instagram and more. Following the acceleration of online content, which has become even faster partly due to COVID-19, it has also become easier for people to access sensitive content that may not be appropriate for general purposes. Owing to the scale of this content and users' ability to share and repost it in a flash, it becomes extremely costly to detect sensitive content solely by manual work. Current social media platforms have therefore adopted various (semi-)automated content moderation methods, including deep learning-based classification (e.g., Microsoft Azure Content Moderator [1], DeepAI's Nudity Detection API [2], Amazon Rekognition Content Moderation [3]).

Meanwhile, as desktop 3D printers have flooded into the consumer market, 3D printing specific social platforms such as Thingiverse [4] have also gained popularity, contributing to the proliferation of shared 3D contents that are easily downloadable and replicable among community users.
Despite the massive number of 3D contents shared for free to date (as of Q2 2020, there are nearly 1.8 million 3D models available for download, excluding empty entries due to post deletion), there has been relatively little attention to sensitive 3D content. This results not only in a lack of a dataset to be used as a benchmark, but also in a lack of discussion on fair rationales for building algorithmic 3D content moderation that integrates the perspectives of everyone, whatever their background. Along with significant advances in machine mechanisms and materials (e.g., 3D printing in metals), the 3D printing community may face an even greater impact from the spread of sensitive content due to its limitless potential for replication and reproduction. In view of the various stakeholders who have different perspectives in consuming and interpreting content (from K-12 teachers who may seek 3D files online to design curricula, to artists who express their creativity in digitized 3D sculptures), moderating 3D content with fairness becomes even more challenging. 3D contents online often consist of images and text, which makes it possible to adopt existing moderation schemes, including text-based (e.g., [5, 6, 7, 8]) and image-based (e.g., [9, 10, 11]) approaches. However, there exist 3D printing specific features (e.g., print supports to avoid overhangs, uni-colored outcomes, designs segmented into parts) that may prevent the direct adoption of those schemes, requiring further consideration when implementing advanced 3D content moderation techniques.

In this work, we first study existing content moderation efforts that have potential to be used in 3D content moderation, and discuss shared concerns in examining transparency and fairness issues in algorithmic content moderation. As a potential solution, we propose a semi-automated human-in-the-loop validation pipeline using augmented learning that incrementally trains the model with input from the human workforce. We highlight potential biases that are likely to be propagated from the different perspectives of human moderators who provide final decisions and labels for re-training a classification model. To mitigate those biases, we propose an image annotation interface to develop an explainable dataset and a system that reflects various stakeholders' perspectives in understanding 3D content. We conclude with initial recommendations for metadata design to (1) require consent and (2) inform previously unaware users of the need for consent when publicizing content that might infringe copyright or privacy.


2. Algorithmic Content Moderation

Manual moderation relying on a few trusted human workers and voluntary reports has been the common solution for reviewing shared content. Unfortunately, it becomes increasingly difficult to meet the demands of growing volumes of users and user-created content [12]. Algorithmic content moderation has therefore taken an important place in popular social media platforms to prevent various kinds of sensitive content in real time, including graphic violence, sexual abuse, harassment, and more. As with other media posts, 3D contents available online appear as web documents that consist of images and text. For example, to attract audiences and help others understand a design project, creators on Thingiverse voluntarily include various information such as written descriptions of the model and tags, as well as photos of a 3D printed design; thus, 3D content provides ample opportunity to employ existing text- and image-based moderation schemes.
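As a minimal sketch of what such a text-based scheme could look like when applied to the written description of a 3D model, the snippet below trains a TF-IDF and logistic regression classifier; the example descriptions, labels, and scikit-learn setup are illustrative assumptions, not the classifiers used in the studies discussed next.

```python
# Minimal sketch: a text-based sensitivity classifier over 3D model descriptions,
# using TF-IDF features and logistic regression. The descriptions and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "low poly flower pot for succulents",            # benign
    "articulated dragon, prints without supports",   # benign
    "nude figure scanned from a classical statue",   # potentially sensitive
    "functional pistol frame, assembly required",    # potentially sensitive
]
train_labels = [0, 0, 1, 1]  # 0 = not sensitive, 1 = sensitive (hypothetical labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

new_description = "3d scanned bust, split into two parts for printing"
probability = model.predict_proba([new_description])[0][1]  # probability of the "sensitive" class
print(f"estimated sensitivity score: {probability:.2f}")
```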
Among various text-based solutions, sentiment analysis is one traditionally popular approach that categorizes input text into two or more categories: positive and negative, or more detailed n-point scales (e.g., highly positive, positive, neutral, negative, highly negative) [5, 6]. Moderators can consider the categorization results when deciding whether content is offensive or discriminatory [13]. Various classifiers, such as logistic regression, support vector machines, and random forests, are actively used in detecting misogynistic posts on Twitter (e.g., [7, 8]). Perspective API [14], proposed by Jigsaw and Google's Counter Abuse Technology team, provides a score for how toxic (i.e., rude, disrespectful, or unreasonable) a text comment is, using a machine learning (ML) model trained on people's ratings of internet comments.

With the rapid improvement of Computer Vision (CV) technologies based on machine learning, several image datasets (e.g., the NudeNet Classifier dataset [15]) and moderation APIs enable developers to apply ready-to-use mechanisms in their applications. For example, Microsoft Azure Content Moderator [1] classifies adult images into several categories, such as explicitly sexual in nature, sexually suggestive, or gory. DeepAI's Nudity Detection API [2] enables automatic detection of adult images and adult videos. Amazon Rekognition content moderation [3] detects inappropriate or offensive features in images and returns the detected labels with prediction probabilities. However, many off-the-shelf services and APIs remain opaque: it is hard for users to know whether the models were trained with fair ground truths that can offer reliable, unbiased results to stakeholders with different cultural or social backgrounds, which we discuss in more detail in the following section.

2.1. Challenges in Moderating 3D Content

As we noted earlier, 3D contents appear as web documents that consist of text descriptions, auto-generated preview images, and user-uploaded images that help others comprehend the content at a glance. Although it is technically possible to utilize existing text- and image-based moderation schemes, 3D models have unique features that make it hard to directly apply existing CV techniques to their rendered images or photos.

2.1.1. 3D specific features that hamper the use of existing CV techniques

We identified four characteristics that make sensitive elements undetectable by existing algorithms.

Challenge 1. Difficulties in Locating Features from Images of the Current Placement. Thingiverse automatically generates rendered images of a 3D model when a 3D file is uploaded, and these are used as the representative image if the designer does not provide any photos of real 3D prints. In many cases, the files are placed in the orientation that best guarantees print success on FDM (Fused Deposition Modeling) printers, aligning the design to minimize overhangs. As the preview is taken from a fixed angle, it might not be an angle that shows the main part of the model thoroughly (e.g., Fig. 1(a)). This hinders existing image-based APIs from accurately detecting sensitivity in the preview images, because sensitive parts might not be visible.
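To make this concrete, the sketch below submits a (hypothetical) Thingiverse preview image to an off-the-shelf moderation API, Amazon Rekognition [3], and prints its two-level labels with confidences; as Challenge 1 suggests, additional re-rendered views of the model would likely need to be submitted alongside the auto-generated preview.

```python
# Sketch only: querying an off-the-shelf image moderation API (Amazon Rekognition)
# with a rendered preview image of a 3D model. The file name is hypothetical.
import boto3

rekognition = boto3.client("rekognition")

with open("thing_preview.png", "rb") as image_file:  # auto-generated preview image
    response = rekognition.detect_moderation_labels(
        Image={"Bytes": image_file.read()},
        MinConfidence=50,  # only return labels the service is at least 50% confident about
    )

# Rekognition returns a two-level taxonomy: a parent category and a specific label.
for label in response["ModerationLabels"]:
    print(label.get("ParentName", "-"), "/", label["Name"], f"{label['Confidence']:.1f}%")
```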
Figure 1: Example images for the four main characteristics that make it hard to use existing CV techniques: (a) rotated model, (b) support structure, (c) texture on surface, (d) divided into parts. Each thing is reachable through its unique ID via https://thingiverse.com/thing:ID.



Challenge 2. Support Structure that Occludes the Features. Following the model alignment strategy of FDM printing, designers often include a custom support structure to prevent overhangs, avoiding both print failures and the deteriorated surface textures that come with auto-generated supports from slicers (i.e., 3D model compilers) such as Cura [16]. These special structures easily occlude a design's significant features (e.g., Fig. 1(b)). Since the model is partly or completely occluded, existing CV techniques barely detect the sensitivity of the design.

Challenge 3. Texture and Colors. Current 3D printing technologies let users apply various print settings and post-processing techniques. Accordingly, a printed model may present a unique appearance compared to general real-world entities. Often the model is single-colored and can have a distinctive texture, such as linear lines on the surface (e.g., Fig. 1(c)), due to the nature of 3D printing mechanisms that accumulate material layer by layer, which can cause existing CV algorithms to overlook the features.

Challenge 4. Models Separated into Parts for Printing. As a common 3D printing strategy to minimize print failures with complex 3D designs such as a human body, many designers divide their models into several parts to ease the printing process and let users assemble them afterwards, as shown in Fig. 1(d). In this case, it is hard for existing CV techniques to see the whole assembled model, resulting in a failure to recognize its sensitivity.


3. Transparency and Fairness Issues in Content Moderation

3.1. Transparency: Black Box that Lacks Explanation

Content moderation has long been controversial due to its non-transparent and secretive process [17], resulting from a lack of explanation to community members about how the algorithm works. To meet the growing demand for transparent and accountable moderation practices, as well as to elevate public trust, popular social media platforms have recently begun to dedicate efforts to making their moderation processes more obvious and candid [17, 18, 19, 20]. As a reasonable starting point, those services provide detailed terms and policies (e.g., Facebook's Community Standards [21]) describing the bounds of acceptable behavior on the platform [17]. In 2018, as a collective effort, researchers and practitioners proposed the Santa Clara Principles on Transparency and Accountability in Content Moderation (SCP) [22].
Among its requirements, the SCP suggests that social media platforms should provide detailed guidance to members about which content and behaviors are discouraged, including examples of permissible and impermissible content, as well as an explanation of how automated tools are used across each category of content. It also recommends that content moderators give users a rationale for content removal, to assure them about what happens behind the moderation process.

Making the moderation process transparent and explainable is crucial to the success of a community [23], not only to maintain its current scale but also to invite new users, because it may affect users' subsequent behaviors. For example, given no explanation for a content removal, users are less likely to upload new posts in the future, or may leave the community altogether, because they may believe that their content was treated unfairly and become frustrated by the absence of communication [24]. Reddit [25], one of the most popular social media platforms, has adopted volunteer-based moderation schemes that result in the removal of almost one fifth of all posts every day [26], due to violations of its community policy [27] (e.g., Rule 4: "Do not post or encourage the posting of sexual or suggestive content involving minors.") or of the individual rules that subreddits (i.e., subcommunities of Reddit devoted to specific topics) set according to their own objectives (e.g., one of the rules of the 3D printing subreddit: "Any device/design/instructions which are intended [to] injure people or damage property will be removed."). Users who are aware of community guidelines or who receive explanations for content removal are more likely to perceive the removal as fair [24] and to show more positive behaviors in the future. As many social platforms, including open 3D communities such as Thingiverse, rely heavily on voluntary posting of user-created content [28], the role of a transparent system in content moderation becomes even more significant in maintaining the communities themselves.

Even though many existing social media platforms are in full gear to implement artificial intelligence (AI) in content moderation, it has long remained a black box [23], not understandable to users due to the complexity of the ML models. To address the issue of uninterpretable models that hinder users from understanding how they work, researchers have shed light on this blind spot by studying various techniques to make models explainable (e.g., [29, 30, 31]). Explainability has risen as an effective way of enhancing the transparency of ML models [32]. To secure explainability, the system must enable stakeholders to understand the high-level concepts of the model, the reasoning used by the model, and the model's resulting behavior [33]. For example, as shown in the Fairness, Accountability, and Transparency (FAT) model, supporting users in knowing which variables are important in a prediction and how they are combined is one powerful way to enable them to understand, and finally trust, the decision made by the model [34].

3.2. Fairness: Implicit Bias and Inclusivity Issues

People often overlook the fairness of moderation algorithms and tend to believe that the systems automatically make unbiased decisions [35]. In fact, the human adjudication of user-generated content has occurred in secret and for relatively low wages by unidentified moderators [36]. On some platforms, users are even unable to know whether moderators are present or who they are [37], and thus it is hard for them to know what potential bias, owing to different reasoning processes, has been injected into the moderation procedure. For example, there have been worldwide actions strongly criticizing the sexualization of women's bodies without inclusive inference (e.g., the 'My Breasts Are Not Obscene' protest by the global feminist group Femen [38] to denounce a museum's censorship of nudity).
Similarly, Facebook's automatic take-down of posts and selfies that include women's topless photos, by tagging them as Sexual/Porn, ignited the 'My Body is not Porn' movement [39, 40]. These different points of view in perceiving and reasoning about the same piece of work make it hard to decide on an absolute sensitivity. It is nearly impossible for a single group of users to represent everyone; therefore, it is difficult for users to expect a ground truth in the decision-making process and to trust the result while believing that experts made the final decisions based on thoughtful consideration and an unbiased rationale.

Subsequently, many studies (e.g., [41, 42]) have explored the potential risks of algorithmic decision-making that is biased and discriminatory toward certain groups of people, such as groups underrepresented with respect to gender, race, or disability. Classifiers have been one common approach in content moderation, but developing a perfectly fair set of classifiers for content moderation is complex compared to common recommendation or ranking systems, as classifiers tend to inevitably embed a preference for certain groups over others when deciding whether content is offensive or not [17].

3.3. Transparency & Fairness Issues in 3D Content Moderation

Through a text-feature-based classification, we identified three main categories of sensitive 3D content: (1) sexual/suggestive, (2) dangerous weaponry, and (3) drug/smoke. Due to the capability of unlimited replication and reproduction in 3D printing, unawareness of these 3D contents could be critical. We noticed that Thingiverse limits access to some sensitive things that are currently labeled as NSFW (Not Safe for Work) by replacing their thumbnails with black warning images. It is a secretive process because no clear rationale or explanation is offered to users about what happens behind it. Therefore, users cannot tell whether Thingiverse operates based on an unbiased and fair set of rules.

While the steep acceleration in the number of 3D models [43] is making automatic detection of sensitive 3D content imperative, moderating 3D content also faces fairness issues, and users suffer from a lack of explanations. We need to take into account the various stakeholders' points of view that affect their decisions on potentially sensitive 3D content, and to further discuss how to mitigate bias and discrimination in the algorithmic decision-making system. Here we propose an explainable human-in-the-loop 3D content moderation system that enables various users with distinct rules to participate in calibrating algorithmic decisions, in order to decrease the bias or discrimination of the algorithm itself. Although we focus on specific issues in shared 3D content online, our proposed pipeline generally applies to advancing a semi-automatic process toward explainable and fair content moderation for all.


4. Towards an Explainable 3D Moderation System

A potential solution for examining the sensitivity of 3D contents with fairness is to employ a human workforce with ample experience in observing and perceiving from various perspectives. We suggest a human-in-the-loop pipeline, based on the idea of incremental learning [44], in which the human workforce collaborates with an intelligent system, concurrently classifying data input and annotating features with explanations for the decision.
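As a rough structural sketch of this idea, one iteration of such a pipeline could be organized as below; the helper names are hypothetical placeholders, and the concrete pieces (annotation records, weighted fine-tuning, and disagreement-aware thresholds) are sketched in more detail in Section 4.2.

```python
# Structural sketch of one human-in-the-loop iteration; every helper used here
# (classify, review, fine_tune, adjust_threshold) is a hypothetical placeholder.
def run_iteration(model, things, moderators):
    for thing in things:
        prediction = model.classify(thing.preview_images)            # machine proposes labels
        reviews = [m.review(thing, prediction) for m in moderators]  # humans validate and explain
        model.fine_tune(thing, reviews)                              # incremental (augmented) learning
        if len({r.agrees_with_model for r in reviews}) > 1:          # moderators disagree
            model.adjust_threshold(thing, reviews)                   # calibrate a per-thing threshold
```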
4.1. Building an Inclusive Moderation Process

Making decisions on the sensitivity of a 3D model can be subjective due to various factors such as cultural differences, the nature of the community, and the purpose of browsing 3D models. To reflect the different angles from which the nature and intention of content may be discerned, we need to deliberate over the various interpretations held by various groups of people. For example, there are many 3D printable replicas of artistic statues or Greek sculptures that are reconstructed by 3D scanning the originals in museums [45]. K-12 teachers designing a STEAM education curriculum using 3D models are not likely to want any NSFW designs to appear in their search results. On the other hand, there are many activists and artists who may want to explore the limitless potential of the technology, for instance by sharing a 3D scanned copy of their own naked body [46] or by digitizing nude sculptures held in museums to make these intellectual assets accessible to everyone. The nude sculpture has been a popular form of artistic creation throughout history, and it is not simple to stigmatize such works as 'sensitive'. Everyone has the right to 'leave a memory of the self' in digital form. Forcing a preset threshold of sensitivity and filtering this wide array of user-created content could unfairly constrain one's creative freedom. As the extent to which various stakeholders perceive sensitivity can differ, our objective is to design an inclusive process for accepting and adopting sensitivity judgments.

4.2. Solution 1: Human-in-the-loop with Augmented Learning

Automated content moderation could help review a vast amount of data and provide filtered cases for humans to support a decision-making process [24], if we properly echo the diverse perspectives in understanding content. In our proposed human-in-the-loop pipeline (Fig. 2(a)), an input image dataset of 3D models is used for the initial model training, and the results are then reviewed by multiple human moderators step by step. We trained the model with 1,077 things that are already labeled as NSFW by Thingiverse and 1,077 randomly selected non-NSFW things. All input images are simply categorized as NSFW or not, with no annotations of the specific image features that would provide the reasoning. Human moderators recruited from various groups of people then review whether they agree with the classification results. They are asked to annotate, with a bounding box, the image segments they referred to in making their final decision, together with the category. At the same time, they provide a rough level of how much that part affected the overall sensitivity and a written rationale for the decision. These inputs enhance the data quality so that it can be used to fine-tune the model with weighted scores; the model thus becomes able to recognize previously unknown sensitive models based on similarity, and can now explain the sensitive features.

When two groups of people with different standards do not agree on the same model's classification result, the model uses their decisions, annotated features, and sensitivity levels to differentiate the extent of perceived sensitivity and to reflect this in a different threshold. For example, if one moderator thinks the model is sensitive while another does not, the model will apply a higher threshold when categorizing the content. Conflicting decisions on the same model can finally be brought to the table for further discussion if needed, for example to shape policy guidelines, or used as search criteria for other community users who have similar goals in viewing and unlocking analogous 3D contents.
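To make the moderator feedback described above concrete, the following sketch shows one possible shape of an annotation record; the field names and exact schema are hypothetical, though the bounding box, the two-level category, the 1-5 sensitivity level, and the written rationale follow the description in this section.

```python
# Hypothetical schema (illustrative field names) for a single moderator review.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SensitiveRegion:
    box: Tuple[int, int, int, int]   # bounding box (x, y, width, height) in pixels
    top_category: str                # e.g., "sexual/suggestive", "weaponry", "drug/smoke"
    sub_category: str                # e.g., "explicit nudity" under "sexual/suggestive"
    sensitivity: int                 # 1 (slightly sensitive) .. 5 (highly sensitive)

@dataclass
class ModeratorAnnotation:
    thing_id: int                    # Thingiverse thing ID of the reviewed model
    moderator_id: str
    agrees_with_model: bool          # agreement/disagreement with the prediction
    regions: List[SensitiveRegion] = field(default_factory=list)
    rationale: Optional[str] = None  # written explanation for the decision
```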
Figure 2: (a) Overview of the human-in-the-loop pipeline powered by human moderators to acknowledge various perceptions of sensitivity, and (b) a user interface mockup for the moderators to validate prediction results and provide annotations regarding their rationale, thus augmenting the model.
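Building on an annotation record like the one sketched above, the fine-tuning step (step 3 in the iteration summarized below) could weight each example by the moderators' sensitivity levels; this is only a sketch that assumes a standard PyTorch binary classifier with a single-logit output.

```python
# Sketch: fine-tuning a binary NSFW classifier with per-example weights derived
# from moderators' 1-5 sensitivity levels (assumes a single-logit PyTorch model).
import torch
import torch.nn.functional as F

def fine_tune_step(model, optimizer, images, labels, sensitivity_levels):
    """images: (N, C, H, W) tensor; labels: (N,) tensor of 0/1; sensitivity_levels: (N,) in 1..5."""
    model.train()
    optimizer.zero_grad()
    logits = model(images).squeeze(1)  # (N,) raw scores from a 1-output head
    # Examples judged more sensitive by moderators contribute proportionally more to the loss.
    weights = torch.as_tensor(sensitivity_levels, dtype=torch.float32,
                              device=logits.device) / 5.0
    loss = F.binary_cross_entropy_with_logits(logits, labels.float(), weight=weights)
    loss.backward()
    optimizer.step()
    return loss.item()
```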



To summarize, one iteration contains the following steps:

   1. The pre-trained model presents prediction results.
   2. The human moderator enters agreement or disagreement with the results and annotates sensitive parts with a sensitivity level and a decision rationale.
   3. The annotated image is used to fine-tune the model.
   4. If the decision for the image differs from other moderators', the annotations and sensitivity levels are used to set a different threshold.

We elaborate on the moderators' feedback through three possible scenarios: (1) the moderator agrees with the prediction results, (2) sensitive parts are not detected, and (3) insensitive features are falsely classified as sensitive.

Case 1. Agreement with the Prediction Result. When the moderators agree with the decision, they can either finalize it or refine the classification by selecting the provided top-level categories (e.g., sexual/suggestive, weaponry, drug/smoke) and second-level categories (e.g., under sexual/suggestive: explicit nudity, adult toys, sexual activity, etc.). We currently refer to the two-level hierarchical taxonomy of Amazon Rekognition to label categories of inappropriate or offensive content.

Case 2. Sensitive Parts Ignored by the Algorithm. Another possible case is that a specific feature in the image that the moderator perceives as sensitive is missing from the detection results. In this case, human moderators can label that part and provide a rationale, using the 'enter the level of sensitivity' field from 1 (slightly sensitive) to 5 (highly sensitive) to indicate how that specific part affects the overall sensitivity of the model.

Case 3. False Positive. It is also possible that some parts detected by the model are not sensitive to the moderator, owing to a higher tolerance for sensitivity. The moderator can either submit the disagreement or provide more detailed feedback by excluding specific results.

Different degrees of sensitivity perception from various stakeholders can reflect distinct points of view, which may bring fairness to algorithmic moderation through multiple iterations of this process. In our interface for end users that assists in searching 3D designs, we let users set their desired threshold.
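One simple way to realize the disagreement-driven thresholds and the end-user threshold described above is sketched below; the aggregation rule (raising a thing's threshold in proportion to the spread between moderators' sensitivity levels) is an illustrative assumption rather than a fixed specification.

```python
# Sketch: per-thing thresholds raised when moderators disagree, plus a
# user-facing filter over the model's predicted sensitivity score.
DEFAULT_THRESHOLD = 0.5  # illustrative default decision threshold

def thing_threshold(moderator_levels):
    """moderator_levels: 1-5 sensitivity levels given by different moderators."""
    if not moderator_levels:
        return DEFAULT_THRESHOLD
    spread = (max(moderator_levels) - min(moderator_levels)) / 5.0
    # Larger disagreement -> a higher bar before the thing is treated as sensitive.
    return min(0.95, DEFAULT_THRESHOLD + 0.5 * spread)

def visible_to_user(predicted_score, moderator_levels, user_threshold):
    """Show a thing unless the predicted score exceeds both thresholds."""
    return predicted_score < max(thing_threshold(moderator_levels), user_threshold)
```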
For those who might find it difficult to decide on a threshold that perfectly fits their needs, we show several random example images with detected sensitive labels at the corresponding threshold. This pipeline also helps obtain an explainable moderation algorithm: our model can help users understand its rationale by locating the detected features and prediction probabilities in the image and by providing the written descriptions that the moderators entered during data classification.

4.3. Solution 2: New Metadata Design to Avoid Auto-Filtering

Another potential problem in open 3D communities is copyright- or privacy-invasive content that is immediately marked as NSFW by Thingiverse, indicating that it is inappropriate. Currently, Thingiverse lacks notification and explanation for content removal, even though a majority of such content might infringe copyrights. This obscurity has a negative impact on users' future behaviors. For example, creators frustrated by the unannounced removal of their content have decided to quit their membership (e.g., [47]), which might not have happened had they seen an informative alert when posting the content. Along with advances in 3D scanning technologies [48], many creators are actively sharing 3D scanned models (e.g., as of December 2020, Thingiverse has 1,150 things tagged with '3D_scan' and 308 things tagged with '3D_scanning'). Given rising concerns over possible privacy invasion in sensitive 3D designs, what caught our attention is 3D scanned replicas of human bodies. Many of them do not include an explicit description of whether the creator received consent from the subject (e.g., [49, 50]). Some designers do quote the subject's permission; for example, one creator describes that the subject, Nova, has agreed to share her scanned body on Thingiverse [51]. Still, this process relies on users' voluntary action given no official guidelines, resulting in a lack of awareness that users must be granted consent to upload possibly privacy-invasive content at the time of posting it in public spaces, regardless of commercial purpose. Without explicit consent, the content is also very likely to be auto-filtered by Thingiverse, which decreases fairness by hampering artistic and creative freedom. To foster a better content-sharing environment in these open communities, a redesign of metadata should be considered and adopted by system admins to invoke responsible actions. For example, providing a checkbox that asks "If the design is made from a 3D scanned human subject, I obtained an agreement from the subject" can inform previously unaware users about the need for permission to post potentially privacy-breaching content. Including the subject's consent can also protect creative freedom from auto-filtering, by attesting that the content does not breach copyright or privacy and can be shared in public spaces. In addition, it can help users understand that an absence of consent could be the reason for filtering.


5. Conclusion

As an inclusive process for developing a transparent and fair moderation procedure in 3D printing communities, our study proposes building an explainable human-in-the-loop pipeline. We aim to employ a diverse group of human moderators and to collect their rationales, which can be used to enhance the model's incremental learning. Our objective is not to censor 3D content but to build a pleasant 3D printing community for all, safeguarding search as well as guaranteeing creative freedom, through the pipeline and a new metadata design that has the potential to minimize issues related to privacy or copyright.
References

 [1] Microsoft, Adult, racy, gory content: Azure cognitive services, https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-detecting-adult-content, 2020. (Accessed on 05/21/2020).
 [2] DeepAI, Nudity detection api, https://deepai.org/machine-learning-model/nsfw-detector, 2020. (Accessed on 05/21/2020).
 [3] Amazon Web Services, Amazon rekognition content moderation, 2020. URL: https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html. (Accessed on 12/20/2020).
 [4] Thingiverse, 2008. URL: https://www.thingiverse.com/.
 [5] R. Prabowo, M. Thelwall, Sentiment analysis: A combined approach, Journal of Informetrics 3 (2009) 143–157.
 [6] S. Baccianella, A. Esuli, F. Sebastiani, Sentiwordnet 3.0: an enhanced lexical resource for sentiment analysis and opinion mining, in: LREC, volume 10, 2010, pp. 2200–2204.
 [7] P. Saha, B. Mathew, P. Goyal, A. Mukherjee, Hateminers: Detecting hate speech against women, arXiv preprint arXiv:1812.06700 (2018).
 [8] R. Ahluwalia, H. Soni, E. Callow, A. Nascimento, M. De Cock, Detecting hate speech against women in english tweets 330 (2018).
 [9] S. Minaee, H. Pathak, T. Crook, Machine learning powered content moderation: Computer vision applications at expedia, Expedia Group Technology (2019).
[10] A. Kumar, N. K. Kumar, M. Shivaram, S. G. Jadhav, C.-S. Li, S. Mahadik, Image content moderation, 2020. US Patent 10,726,308.
[11] E. Llansó, J. Van Hoboken, P. Leerssen, J. Harambam, Artificial intelligence, content moderation, and freedom of expression (2020).
[12] D. Hettiachchi, J. Goncalves, Towards effective crowd-powered online content moderation, in: Proceedings of the 31st Australian Conference on Human-Computer-Interaction, OZCHI'19, Association for Computing Machinery, New York, NY, USA, 2019, pp. 342–346. URL: https://doi.org/10.1145/3369457.3369491. doi:10.1145/3369457.3369491.
[13] M. Taboada, J. Brooke, M. Tofiloski, K. Voll, M. Stede, Lexicon-based methods for sentiment analysis, Comput. Linguist. 37 (2011) 267–307. URL: https://doi.org/10.1162/COLI_a_00049. doi:10.1162/COLI_a_00049.
[14] Jigsaw, G. C. A. T. team, Perspective api, 2017. URL: http://perspectiveapi.com/.
[15] bedigaadu, Nudenet classifier dataset, 2019. URL: https://archive.org/details/NudeNet_classifier_dataset_v1.
[16] Ultimaker cura, 2020. URL: https://ultimaker.com/software/ultimaker-cura.
[17] R. Gorwa, R. Binns, C. Katzenbach, Algorithmic content moderation: Technical and political challenges in the automation of platform governance, Big Data & Society 7 (2020) 2053951719897945.
[18] N. Granados, A. Gupta, Transparency strategy: Competing with information in a digital world, MIS Quarterly (2013) 637–641.
[19] K. Leetaru, Without transparency, democracy dies in the darkness of social media, 2018. URL: https://www.forbes.com/sites/kalevleetaru/2018/01/25/without-transparency-democracy-dies-in-the-darkness-of-social-media/?sh=479d8b527221#694732567221. (Accessed on 12/16/2020).
[20] M. MacCarthy, Transparency requirements for digital social media platforms: Recommendations for policy makers and industry, Transatlantic Working Group (2020).
[21] Facebook, Community standards, https://www.facebook.com/communitystandards/, 2020. (Accessed on 12/12/2020).
[22] U. o. S. C. U. Queensland University of Technology (QUT), E. F. F. (EFF), The santa clara principles on transparency and accountability in content moderation, 2018. URL: https://santaclaraprinciples.org/. (Accessed on 12/16/2020).
[23] P. Juneja, D. Rama Subramanian, T. Mitra, Through the looking glass: Study of transparency in reddit's moderation practices, Proceedings of the ACM on Human-Computer Interaction 4 (2020) 1–35.
[24] S. Jhaver, D. S. Appling, E. Gilbert, A. Bruckman, "Did you suspect the post would be removed?" Understanding user reactions to content removals on reddit, Proceedings of the ACM on Human-Computer Interaction 3 (2019) 1–33.
[25] Reddit, https://www.reddit.com/, 2005.
[26] S. Jhaver, D. S. Appling, E. Gilbert, A. Bruckman, "Did you suspect the post would be removed?": User reactions to content removals on reddit, Proceedings of the ACM on Human-Computer Interaction 2 (2018).
[27] Reddit, Reddit content policy, 2020. URL: https://www.redditinc.com/policies/content-policy. (Accessed on 12/12/2020).
[28] F. Çömlekçi, Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media, Communication Today 10 (2019) 165–166.
[29] B. Letham, C. Rudin, T. H. McCormick, D. Madigan, et al., Interpretable classifiers using rules and bayesian analysis: Building a better stroke prediction model, The Annals of Applied Statistics 9 (2015) 1350–1371.
[30] T. Wang, C. Rudin, F. Doshi-Velez, Y. Liu, E. Klampfl, P. MacNeille, A bayesian framework for learning rule sets for interpretable classification, The Journal of Machine Learning Research 18 (2017) 2357–2393.
[31] A. A. Freitas, Comprehensible classification models: a position paper, ACM SIGKDD Explorations Newsletter 15 (2014) 1–10.
[32] B. Lepri, N. Oliver, E. Letouzé, A. Pentland, P. Vinck, Fair, transparent, and accountable algorithmic decision-making processes, Philosophy & Technology 31 (2018) 611–627.
[33] U. Bhatt, A. Xiang, S. Sharma, A. Weller, A. Taly, Y. Jia, J. Ghosh, R. Puri, J. M. Moura, P. Eckersley, Explainable machine learning in deployment, in: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020, pp. 648–657.
[34] H. Lakkaraju, S. H. Bach, J. Leskovec, Interpretable decision sets: A joint framework for description and prediction, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1675–1684.
[35] S. Garfinkel, J. Matthews, S. S. Shapiro, J. M. Smith, Toward algorithmic transparency and accountability, Communications of the ACM 60 (2017) 5–5.
[36] S. T. Roberts, Behind the screen: Content moderation in the shadows of social media, Yale University Press, 2019.
[37] S. T. Roberts, Commercial content moderation: Digital laborers' dirty work (2016).
[38] S. Cascone, Topless feminist protestors hit the musee d'orsay after the museum tried to bar a visitor for wearing a low-cut dress, 2020. URL: https://news.artnet.com/art-world/femen-stage-protest-musee-dorsay-1908260. (Accessed on 12/22/2020).
[39] B. Korea-savvy, Activists claim "my body is not porn!", http://koreabizwire.com/activists-claim-my-body-is-not-porn/119529, 2018. (Accessed on 09/08/2020).
[40] My body is not your porn, 2020. URL: https://www.facebook.com/pages/category/Community/My-Body-Is-Not-Your-Porn-106365187645422/.
[41] S. L. Blodgett, L. Green, B. O'Connor, Demographic dialectal variation in social media: A case study of african-american english, arXiv preprint arXiv:1608.08868 (2016).
[42] R. Binns, M. Veale, M. Van Kleek, N. Shadbolt, Like trainer, like bot? inheritance of bias in algorithmic content moderation, in: International Conference on Social Informatics, Springer, 2017, pp. 405–415.
[43] B. Wire, Makerbot thingiverse celebrates 10 years of 3d printed things, 2018. URL: https://financialpost.com/pmn/press-releases-pmn/business-wire-news-releases-pmn/makerbot-thingiverse-celebrates-10-years-of-3d-printed-things. (Accessed on 12/10/2020).
[44] M. Längkvist, M. Alirezaie, A. Kiselev, A. Loutfi, Interactive learning with convolutional neural networks for image labeling, in: IJCAI 2016, 2016.
[45] C. Marshall, 3d scans of 7,500 famous sculptures, statues & artworks: Download & 3d print rodin's thinker, michelangelo's david & more, 2017. URL: https://www.openculture.com/2017/08/3d-scans-of-7500-famous-sculptures-statues-artworks-download-3d-print-rodins-thinker-michelangelos-david-more.html. (Accessed on 12/23/2020).
[46] B. Mufson, Art made from human body scans | gif six-pack, 2016. URL: https://www.vice.com/en/article/nz4kq7/3D-scanning-gifs. (Accessed on 12/23/2020).
[47] VidovicArts, I'm quitting thingiverse, 2020. URL: https://www.youtube.com/watch?v=UPRCE8FsSak.
[48] All3DP, 2020 best 3d scanners (december), 2020. URL: https://all3dp.com/1/best-3d-scanner-diy-handheld-app-software/. (Accessed on 12/23/2020).
[49] Tob1112, 3d body scan amber, 2015. URL: https://www.thingiverse.com/thing:1052758.
[50] ThreeForm, Mel - "column 2" pose, 2017. URL: https://www.thingiverse.com/thing:2688184.
[51] ThreeForm, Nova - "pose 3", 2017. URL: https://www.thingiverse.com/thing:2461567.