<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Position: Who Gets to Harness (X)AI? For Billion-Dollar Organizations Only</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jonathan Dodge</string-name>
          <email>dodgej@oregonstate.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Joint Proceedings of the ACM IUI 2021 Workshops</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Oregon State University, 1148 Kelley Engineering Center</institution>
          ,
          <addr-line>Corvallis, OR 97331-5501</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Recently, researchers have made a number of tremendous advancements in AI capabilities. However, I will show that the fixed financial cost of making such advancements is high, and further, so are the recurring energy costs. As a result we see “AI haves and have nots” with wildly differing amounts of power between these two groups. This means our research community needs to carefully consider the role XAI has in mediating communication between stakeholders in the face of such an important power dynamic. This paper aims to engage that process, examining the current state of affairs through a variety of lenses, and then identifying some promising ideas for the future.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable AI</kwd>
        <kwd>Social Aspects of AI</kwd>
        <kwd>Social Aspects of Explanation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>This paper considers each of the following in turn: (1) What does AI cost to set up and scale? (2) What does the energy to run hardware for AI cost? (3) As a result of (1) and (2), who is getting left behind? (4) What is the alternative?</p>
      <p>
        Explanation’s role largely appears in the third question, where I argue that, as a downstream consequence of inequity of access to AI, explainable AI will suffer a similar fate. In justifying the decision to release as a commercialized API rather than open-source, OpenAI states: “...many of the models underlying the API are very large, taking a lot of expertise to develop and deploy and making them very expensive to run. This makes it hard for anyone except larger companies to benefit from the underlying technology. We’re hopeful that the API will make powerful AI systems more accessible to smaller businesses and organizations.” [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]
      </p>
    </sec>
    <sec id="sec-2">
      <title>1. What does AI cost to set up?</title>
      <p>© 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.</p>
      <p>
        …model would cost $35M [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Medium [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] arrives at a lower estimated cost to train the more complex AlphaStar—$12M—but that estimate is just to replicate the final agent(s), and does not include formative experiments.
      </p>
      <p>
        1 Lee Sedol would go on to retire, stating “Even if I become the number one, there is an entity that cannot be defeated.” [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        2 While the underlying source is not necessarily reputable—a blog post [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]—they present the full chain of reasoning to arrive at this cost.
      </p>
      <p>
        Of note, part of these cost estimates include renting time on proprietary Tensor Processing Unit (TPU) hardware [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], so the internal costs to DeepMind would be lower than the estimates presented here. Of additional note, the Wired article [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] opens with scary points about how much money DeepMind has lost (more than $1B in 3 years), and develops an argument about how these steep losses may chill investment in AI research.
      </p>
      <p>One might expect high costs to yield effective solutions, if not viable products. However, in the face recognition market, IBM, Microsoft, and Amazon have retreated (at least temporarily), potentially abandoning their investments. To understand why, here are excerpts from each of their public statements:</p>
      <p>
        “IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.” —Dr. Arvind Krishna, IBM CEO, June 8, 2020 [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
      </p>
      <p>
        “We’re implementing a one-year moratorium on police use of Amazon’s facial recognition technology. ... We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.” —Amazon Staff, June 10, 2020 [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]
      </p>
      <p>
        “But I do think this is a moment in time that really calls on us to listen more, to learn more, and most importantly, to do more. Given that, we’ve decided that we will not sell facial recognition to police departments in the United States until we have a national law in place grounded in human rights that will govern this technology.” —Brad Smith, Microsoft President, June 11, 2020 [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]
      </p>
      <p>
        These rich and powerful companies seem to feel facial recognition is presently too legally and/or ethically thorny for at least some applications, e.g. law enforcement. Or possibly it is too difficult to do with sufficient robustness using current techniques and data, as indicated by findings like the following, from Buolamwini et al., “All classifiers perform worst on darker female faces (20.8%-34.7% error rate)” [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Either way, how can one imagine similarly large problems as tractable for organizations with smaller budgets, development teams, and legal departments?
      </p>
      <p>
        So what if one wants to do something on a smaller scale, by purchasing a GPU? Tim Dettmers, currently a PhD student at University of Washington, has hosted a benchmark comparison of various cards since 2014. The September 2020 revision3 recommends the RTX 3080 or RTX 3090 [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], the former of which has a retail price of around $700. In his post’s TL;DR advice, he offers the following: “I have little money: Buy used cards. Hierarchy: RTX 2070 ($400), RTX 2060 ($300), GTX 1070 ($220), GTX 1070 Ti ($230), GTX 1650 Super ($190), GTX 980 Ti (6GB $150).”
      </p>
      <p>
        3 Notably, this revision was made in the middle of the pandemic, and prices have gotten much worse since, if one can even find a card to purchase [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>While this may not sound like a lot of money to some readers, as we will see in Section 3, it is for many people. Unfortunately, this does not even include the cost of the rest of the computer (e.g. motherboard, CPU, peripherals for I/O). Nor does it account for energy to run the system.</p>
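      <p>To make the point about total cost concrete, here is a minimal back-of-the-envelope sketch in Python. Only the roughly $700 GPU price comes from the text above; every other line item is an assumed, hypothetical figure included purely for illustration:</p>

```python
# Hypothetical budget for a small deep learning rig. Only the ~$700
# GPU price comes from the text; all other line items are assumed,
# illustrative figures.
budget_usd = {
    "RTX 3080 (retail, per the text)": 700,
    "CPU + motherboard + RAM (assumed)": 600,
    "storage, PSU, case (assumed)": 300,
    "monitor and I/O peripherals (assumed)": 250,
}

total = sum(budget_usd.values())
print(f"Total: ${total}")   # $1850 before any energy costs
```

      <p>Under these assumptions, the supporting hardware alone roughly doubles the entry price, and energy costs have not yet entered the picture.</p>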
    </sec>
    <sec id="sec-3">
      <title>2. What does energy to run hardware for AI cost?</title>
      <sec id="sec-3-1">
        <title>A lot.</title>
        <p>
          According to NVIDIA’s tech specs, the RTX 3080 mentioned earlier consumes 320W by itself, thus requiring at least a 750W power supply for the full system [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. A full breakdown of recommended power supplies for CPU plus GPU pairings comes courtesy of ASUS, shown in Figure 1. Note that the upper diagonal has power draws that are twice those in the lower diagonal, and that the newest generation of GPUs use much more power than previous generations.
        </p>
        <p>
          So now armed with estimates of power consumption required by some relevant hardware, next we investigate length of training times: “[After starting with 64x64 images] ...we have ramped up to bigger images, it takes a while, it takes 2-3 days to train a GAN. We don’t do grid search, but even random search you still need to train 12 models times however many configurations you have, so it really adds up.” —Dr. Sasha Luccioni, Postdoc in AI for Humanity [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. Notably, the GAN Dr. Luccioni refers to is intended to visualize climate change effects, so training efficiency is of particular concern to her.
        </p>
        <p>
          Further, generating one production model requires a great deal of prior experimentation. To illustrate, Hill et al. report: “All 7 Interview #2 respondents reported evaluating the model to be a long, arduous process, in which participants iteratively refined the model through changes to parameters, training data, and so on. They did these iterations using an evaluate-fix-evaluate cycle that often required many changes, sometimes even sending the participants back to earlier steps...” [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]
        </p>
        <p>
          To illustrate the length of training times needed for certain challenge problems, during AlphaStar’s demonstration matches against professional StarCraft II players, one of its creators made the following remarks: “The league here was run for about a week... 7 days of real time is actually longer in StarCraft, as Tim was saying we got a binary that can run the game much faster. So the most experienced agents we see today have played about 200 years of StarCraft II.” —Dr. Oriol Vinyals, Research Scientist at DeepMind [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]
        </p>
        <p>
          As a sidenote, the league used to train the agent pool contains a few hundred agents, with Vinyals et al. [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] stating “During league training almost 900 distinct players were created”. Consider the Cartesian product involved with these numbers: 900 agents × 200 years × 525600 minutes/year × 280 actions/minute ≈ 26.5 trillion actions selected4. To produce the huge quantities of floating point operations to do this (and the necessary experimentation to devise the final training process), it is important to realize that processes like these will involve running many large capacity servers for long periods of time.
        </p>
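        <p>As a quick check on the arithmetic, the action-count estimate above can be reproduced in a few lines of Python (all four factors are the figures quoted in the text):</p>

```python
# Reproduce the back-of-the-envelope estimate from the text:
# 900 agents x 200 years x 525600 minutes/year x 280 actions/minute.
agents = 900
years_per_agent = 200
minutes_per_year = 525600   # 365 days * 24 hours * 60 minutes
actions_per_minute = 280    # APM estimate; see footnote 4

total_actions = agents * years_per_agent * minutes_per_year * actions_per_minute
print(f"{total_actions:,}")  # 26,490,240,000,000 -- about 26.5 trillion
```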
      </sec>
      <sec id="sec-3-2">
        <p>
          4 Action per minute (APM) estimate is based on data presented in [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. The rest of the numbers used in this estimate are natural constants or from quotes earlier in the paper.
        </p>
        <p>[Figure 1: ASUS’s recommended power supply wattages for various CPU plus GPU pairings.]</p>
        <p>
          Unfortunately, it is hard to know the energy costs of many AI systems because of obscured costs: “It’s really really hard to quantify exactly how much energy you are consuming when training a neural network because often it is on the cloud, or you are sharing a GPU with others...” —Dr. Sasha Luccioni [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]
        </p>
        <p>Training on the cloud has advantages beyond convenience, but first some context: “Depending on the grid you are connected to, you know the energy mix of the grid... Like in the U.S. there’s, I don’t know, 20-30 grids, some are really clean, and some are really coal based... So depending where [your training] is, your carbon emissions can vary up to 80x.”</p>
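        <p>Even with obscured costs, one can form a rough estimate from nameplate power draw and wall-clock time. The sketch below combines the 320W RTX 3080 figure and the “2-3 days to train a GAN” remark from earlier; the two grid carbon intensities are assumed, illustrative values chosen only to echo the “up to 80x” spread described above:</p>

```python
# Rough training-energy estimate: nameplate power x wall-clock time.
# The 320 W draw and ~3-day GAN training time come from the text;
# the grid carbon intensities below are assumed, illustrative values.
gpu_watts = 320
training_hours = 3 * 24                  # "2-3 days to train a GAN"

energy_kwh = gpu_watts * training_hours / 1000   # energy for one run

grid_g_co2_per_kwh = {
    "hydro-heavy grid (assumed)": 10,
    "coal-heavy grid (assumed)": 800,    # 80x the clean grid
}

for grid, intensity in grid_g_co2_per_kwh.items():
    kg_co2 = energy_kwh * intensity / 1000
    print(f"{grid}: {kg_co2:.2f} kg CO2")
```

        <p>The energy per run is identical, yet the emissions differ by the assumed 80x factor, and this counts a single GPU for a single run, before any hyperparameter search multiplies it.</p>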
      </sec>
      <sec id="sec-3-3">
        <p>
          Google’s proprietary TPU obscures these costs further because its power consumption is unknown as of the time of this writing, but estimated at 200-250W by [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ].
        </p>
        <p>
          So, in effect, if one’s local grid runs on coal, training on the cloud might be a way to get long-running processes onto a cleaner grid, e.g. one running on hydro power. To do so, all that would be needed is to build data centers near power plants, particularly clean ones. Note that this is already under proposal in some localities (e.g. [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]).
        </p>
        <p>
          However, training on the cloud also has a major drawback—obscured costs. Previous research in resource consumption awareness by Strengers found that people struggle to understand resource management units, preferring visual analogs, e.g. buckets of water [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. For water, the hard-to-understand flow rate (e.g. gallons/min) has an easy-to-understand volume analog (gallon)5. Contrastingly, both electricity’s flow rate unit and “volume analog” are hard to understand (e.g. watt and watt hour, respectively). While most understand that kilowatt hours appear on power bills to measure payment, few can conjure a visual or describe the meaning of the term.
        </p>
        <p>5 To illustrate the difficulty of actually perceiving flow rates, consider the following (thought) experiment: Turn on your faucet a little bit, so that the water is going right down the sink. Then, note the visual appearance, open the faucet more, and compare. Fairly large flow rate changes will look essentially the same until the water stream gets much wider or starts to roil.</p>
        <p>
          All of this is to say that electricity consumption habits and their consequences are already obscure due to the nature of electron flow, and adding another layer of obfuscation—the cloud—makes it even harder to observe: 1) that consumption is occurring, 2) how much, and 3) the consequences of consumption. So what results from these factors? Inefficient consumption is a common result, i.e.: “It’s a problem in deep learning nowadays, ‘Your model is not doing well?’ ‘Train on more data. Add more layers.’ People’s reflexes are essentially bigger, more... People who have been around in the field for a long time... their first reflex is NOT get more data, it is: ‘Have you looked at your model? Have you figured out what it is doing? Do you know what is going on in these layers? Are you sure you have the right learning rate?’ ... more about the fundamental stuff, whereas nowadays its like ‘Throw more data at it’... because we can.” —Dr. Sasha Luccioni [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]
        </p>
      </sec>
      <sec id="sec-3-5">
        <title>3. Who is getting left behind in this?</title>
        <p>
          Most everyone. A pre-pandemic survey by Pew found: “61% of Americans say there is too much economic inequality in the country today... For those who say reducing inequality should be a government priority, large majorities point to unfair access it affords the wealthy and limits it places on others.” [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]
        </p>
        <p>
          Since that study, equality of access has not improved. In a July 2020 publication (survey dates March 28 and April 4) by Bartik et al., small businesses “reported having reduced their active employment by 39% since January.” [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ]. While there is a depth of research on wealth/income inequality6, these will suffice to illustrate the point: Many businesses (and individuals) are priced out of AI, so are therefore priced out of XAI. This point is crystallized by Andreas Madsen, who first published as an independent researcher [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ] but is now a PhD student at Université de Montréal, Mila. When asked to describe why he recommended that researchers avoid researching independently, as he just had done: “If you just want to develop machine learning, like in industry, and you don’t want to do anything novel, you can do that independently... But if you want to do research, if you want to develop new methods, this is a really hard field to do it in, because you need huge computational resources that you are not gonna have.” —Andreas Madsen [28]
        </p>
        <p>
          6 To the reader interested in these topics, I recommend [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ], or the papers to which they respond.
        </p>
        <p>We have argued previously [29] that “explanations are not just for people to understand the ML system, they also provide a more effective interface for the human in-the-loop...”. If one considers that the explanation originator is an AI-enabled entity, they are likely a billion dollar company. On the other side of the explanation interface, if the explanation consumer is a member of the general public they are likely to be much poorer. Thus, XAI techniques mediate the interaction between two entities with a very lopsided power dynamic.</p>
        <p>As a result of this power dynamic, the rich entity possessing the AI-powered system has an inherent conflict of interest. Consider four stakeholders7: Rich Uncle Pennybags (the character in the Monopoly game), Jon the XAI scientist, Alice the small business owner, and Joe on the street. While these various people have varying roles in the creation and consumption of explanations, Rich Uncle Pennybags is the only one with substantial power, and his incentive is to help himself economically. Critically, this leads to incentives that may conflict with the goals of explaining. For example, instead of explaining transparently, Rich Uncle Pennybags might harness explanations to protect the brand or to increase sales by omitting particular kinds of information. The analogy I would draw is that between peers, “Because I said so” would not be considered an acceptable explanation—though it might be as the power dynamic grows more lopsided (e.g. a boss might say that to their employee, or a parent to their child).</p>
        <p>To illustrate the potential lopsidedness of power dynamics in XAI as interface, consider two scenarios. The first is a simple thought experiment: Imagine a median income wage-earner (e.g. Joe on the street) gets denied for a bank loan by an AI-powered system created by Jon the XAI scientist, on behalf of Rich Uncle Pennybags. As the would-be loan recipient consumes the explanation that they may have the right to receive in some jurisdictions, they will be attempting to understand the arcane decisions of an organization with power over them. While doing so, the user may suspect the service provider is just hiding behind the AI and/or its explanation. This is a dark pattern, described as “obstruction” by Chromik et al. [32]. Concerns surrounding abuse of dark patterns (intentional or otherwise) seem particularly pertinent given how many of the big AI players have previously been sued (or are currently being sued) for various anti-competitive practices. The list has grown to include (chronologically): IBM [33], Microsoft [34], Google [35], and8 Facebook [37].</p>
        <p>The second illustration comes from a discussion between TWIML/AI host Sam Charrington and guest Dr. Michael I. Jordan, a distinguished professor at UC Berkeley. There are two important aspects of this conversation: Jordan: “If you use my data, then that’s ok with me if you use it and I get value out of it, if you pay me in some sense for using my data. In particular, a travel agent is a person who makes travel plans for lots and lots of people, and they get better and better at it over the years. ... So a system that does that, partially anonymizes me but that builds up experience dealing with people like me, then I kinda know what I’m getting. ... The medical system, you treat me and it works, I want it to be available to you tomorrow, I don’t want to protect that data.” Charrington: “We think about AI in the context of this digital divide and there will be communities that will be left behind because they don’t have ready access to AI technology. But the travel agent example makes me think that in a lot of ways, its like a human divide, in a sense, that we need to worry that the knowledge of humans is going to be commoditized into these computational systems, like AIs ... It gets really scary if its medicine, for example, where the masses are being treated by the commoditized robot doctors but only the select few can speak to an actual human doctor.” [38]</p>
      </sec>
      <sec id="sec-3-4">
        <p>7 This exercise is similar to the approach found in Ehsan et al.’s work [30, 31], where they break down explainability as a socio-technical phenomenon focusing on who is doing what, when, and why.</p>
        <p>8 As of the time of this writing, lawsuits against Apple and Amazon have not been announced, though they were also involved in a recent congressional hearing on antitrust [36].</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. What’s the alternative?</title>
      <p>So it seems the (X)AI community hurtles toward a future where few can afford industry scale, with most relegated to toy problems.</p>
      <sec id="sec-4-1-1">
        <title>4.1. Possible answer 1: Futurism</title>
        <p>In their book9, “Why Can’t We All Just Get Along?: How Science Can Enable A More Cooperative Future”, authors Drs. Christopher Fry and Henry Lieberman offer a possible future paradigm “Makerism” in which everyone “owns their own means of production” [39]. They envision the advent of 3-D printing technologies, combined with advances in programming, energy generation, and materials recycling as enabling such a future.</p>
        <p>9 I recommend this book highly, and hope the reader shares my appreciation for its game theoretic framing, via both prisoners dilemma and ultimatum game.</p>
        <p>A similar argument appeared in a recent Wired article entitled “Want a More Equitable Future? Empower Citizen Developers”. Authored by Microsoft CEO Satya Nadella and Harvard Professor Dr. Marco Iansiti, the article argues, “Tools from cloud computing to AI should be in the hands of every knowledge worker, first-line worker, organization, and public sector agency around the world.” [40]. However, these optimistic visions do not seem to sufficiently address or resolve the existing economic imbalances described earlier.</p>
      </sec>
      <sec id="sec-4-2">
        <sec id="sec-4-2-1">
          <title>4.2. Possible answer 2: Emerging Hardware/Software Architectures</title>
          <p>While XAI is usually considered a software problem: “It is important to understand that the reason why ML applications have received so much attention and widespread usage is because of the hardware.” —Dr. Diana Marculescu, Chair of the ECE Department at UT Austin [41]</p>
          <p>This paper already touched on the TPU architecture briefly, and it seems like they might be more power efficient than GPUs. But what other hardware might be possible? Dr. Marculescu argues for a co-design process between software and hardware, which requires: “...the ability to characterize neural network architectures, or perhaps components thereof, in terms of the time it takes to process those things, the energy or power it takes, and maybe come up with a joint metric that characterizes their energy efficiency or latency, per correct inference.” —Dr. Diana Marculescu [41]</p>
          <p>On the software side alone, a number of researchers are working to reduce power consumption. One reason this is necessary is that sometimes data must be processed on mobile devices which impose stiff constraints on computing and battery resources, e.g. for privacy and security reasons10. To take a single example, Bejnordi et al. propose a “channel gating” technique, in which the network should learn when and which parts of the network can be short circuited [42]. Thus, researchers can train large capacity networks more efficiently, because, “...there is no reason to compute features that help differentiate between several dog breeds, if there is no dog to be seen in the image.” [42].</p>
          <p>10 Unfortunately, while each mobile device or IoT widget might consume very little power individually, these devices are so prevalent that collective energy costs will accumulate quickly due to the law of large numbers.</p>
          <p>Setting aside novel techniques, there are a wide variety of classical techniques that are generally considered to be more frugal than deep learning (e.g. random forests or Bayesian networks). However, since deep learning techniques currently dominate performance leaderboards, some of the cheaper techniques have fallen out of favor. Perhaps given hardware that can execute these architectures more efficiently they could be competitive?</p>
        </sec>
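        <p>To give a flavor of the conditional computation idea behind such gating, here is a generic toy sketch in Python with NumPy. This is not Bejnordi et al.’s batch-shaped method; the gate statistic, threshold, and shapes are all assumptions chosen for illustration:</p>

```python
import numpy as np

def cheap_gate(x):
    # Stand-in for a learned gate: a cheap statistic of the input.
    return float(np.mean(np.abs(x)))

def expensive_branch(x, weights):
    # Stand-in for costly features (e.g. dog-breed discriminators).
    return np.tanh(weights @ x)

def gated_forward(x, weights, threshold=0.5):
    # When the gate is off, the expensive branch is short-circuited
    # entirely, so its FLOPs (and energy) are never spent.
    if cheap_gate(x) >= threshold:
        return expensive_branch(x, weights)
    return np.zeros(weights.shape[0])

rng = np.random.default_rng(0)
x = rng.normal(size=16)
weights = rng.normal(size=(4, 16))
out = gated_forward(x, weights)   # shape (4,)
```

        <p>Bejnordi et al. learn the gates jointly with the rest of the network; the sketch only illustrates why gating saves computation: a skipped branch’s operations are never executed at all.</p>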
        <sec id="sec-4-2-4">
          <title>4.3. Possible answer 3: Consumption Awareness and Disclosure</title>
          <p>
            To raise awareness, many researchers are trying to make it easier for AI practitioners to estimate costs, both in terms of money and carbon footprint. Recent work by Strubell et al. [
            <xref ref-type="bibr" rid="ref28">43</xref>
            ] provides estimates of costs to train various benchmark models (see their Table 3). However, what if one has a custom model that differs from the benchmark designs by a great deal? Recent work by Cai et al. [
            <xref ref-type="bibr" rid="ref29">44</xref>
            ] aims to build predictive models capable of returning “Detailed power, runtime &amp; energy, with breakdowns” given a convolutional neural network (CNN). Similar in spirit, Lacoste et al. recently proposed the “Machine Learning Emissions Calculator” [
            <xref ref-type="bibr" rid="ref30">45</xref>
            ], which provides a nice set of action items for individuals11.
          </p>
          <p>
            11 If the reader wishes to engage in some AI research in the climate change space, I recommend [
            <xref ref-type="bibr" rid="ref31">46</xref>
            ], as they attempt to identify a variety of research gaps.
          </p>
          <p>
            Last, when discussing energy consumption, one cannot overlook “sample efficiency”. Kevin Vu [
            <xref ref-type="bibr" rid="ref32">47</xref>
            ]’s Table 2 explains this by illustrating that lower wall clock time to solve does not always coincide with the fewest iterations, because various techniques require different amounts of time per iteration. The spirit of this branch of research is well encapsulated by a discussion section question of Henderson et al., “How much is your performance gain worth? Balancing gains with cost” [
            <xref ref-type="bibr" rid="ref33">48</xref>
            ].
          </p>
        </sec>
        <sec id="sec-4-2-6">
          <title>4.4. Possible answer 4: Ethics training and practice among AI creators</title>
          <p>Failing any advances in equitability, improving the foresight of AI practitioners is a good second best. “We are creating all kinds of artifacts, including social networks, that look fun... but it has turned out to not be so great in various ways... So a field that builds those systems should also be aware of the consequences of building that.” —Dr. Michael I. Jordan [38]</p>
          <p>
            Possibly to similar ends, ACM revised its code of ethics in 2018 [
            <xref ref-type="bibr" rid="ref34">49</xref>
            ]. The first bullet point in the code is, “Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing.” Unfortunately, not all practitioners fully engage with these concepts: “How do you know the unknowns that you’re being unfair towards? [...] You just have to put your model out there, and then you’ll know if there’s fairness issues if someone raises hell online.” —Participant R7, Software Engineer [
            <xref ref-type="bibr" rid="ref35">50</xref>
            ]
          </p>
        </sec>
        <sec id="sec-4-2-5">
          <title>4.5. Possible answers 5+: Choose your own adventure!</title>
        </sec>
      </sec>
      <sec id="sec-4-3">
        <title>Plenty of ideas went unexplored here; which do you find promising, dear reader?</title>
        <p>… https://openreview.net/forum?id=H1gNOeHKPS.</p>
        <p>[28] A. Madsen, Neural Arithmetic Units and Experiences as an Independent ML Researcher with Andreas Madsen, TWIML AI Podcast with Sam Charrington, 2020. URL: https://twimlai.com/twiml-talk-382-neural-arithmetic-units-experiences-as-an-independent-ml-researcher-with-andreas-madsen/.</p>
        <p>[29] J. Dodge, Q. V. Liao, Y. Zhang, R. K. E. Bellamy, C. Dugan, Explaining models: An empirical study of how explanations impact fairness judgment, in: Proceedings of the 24th International Conference on Intelligent User Interfaces, IUI ’19, ACM, New York, NY, USA, 2019, pp. 275–285. URL: http://doi.acm.org/10.1145/3301275.3302310. doi:10.1145/3301275.3302310.</p>
        <p>[30] U. Ehsan, M. O. Riedl, Human-centered explainable AI: Towards a reflective sociotechnical approach, 2020. URL: https://arxiv.org/abs/2002.01092.</p>
        <p>[31] U. Ehsan, Q. V. Liao, M. Muller, M. O. Riedl, J. D. Weisz, Expanding explainability: Towards social transparency in AI systems (2021). URL: https://arxiv.org/abs/2101.04719.</p>
        <p>[32] M. Chromik, M. Eiband, S. T. Völkel, D. Buschek, Dark patterns of explainability, transparency, and user control for intelligent systems, in: IUI Workshops, 2019.</p>
        <p>[33] United States v. International Business Machines Corp, 1996. URL: https://www.law.cornell.edu/supct/html/95-591.ZO.html.</p>
        <p>[34] United States v. Microsoft Corporation, 2018. URL: https://www.oyez.org/cases/2017/17-2.</p>
        <p>[35] US and Plaintiff States v. Google, LLC, 2020. URL: https://www.justice.gov/atr/case/us-and-plaintiff-states-v-google-llc.</p>
        <p>[36] C-SPAN, Committees of the 116th Congress: House Judiciary Subcommittee on Antitrust, Commercial and Administrative Law, 2020. URL: https://www.c-span.org/congress/committee/?112051&amp;congress=116.</p>
        <p>[37] Washington Post (Tony Romm), U.S., states sue Facebook as an illegal monopoly, setting stage for potential breakup, 2020. URL: https://www.washingtonpost.com/technology/2020/12/09/facebook-antitrust-lawsuit/.</p>
        <p>[38] M. I. Jordan, What are the implications of algorithmic thinking? With Michael I. Jordan, TWIML AI Podcast with Sam Charrington, 2020. URL: https://twimlai.com/what-are-the-implications-of-algorithmic-thinking-with-michael-i-jordan/.</p>
        <p>[39] C. Fry, H. Lieberman, Why Can’t We All Just Get Along? How Science Can Enable A More Cooperative Future, Ingram Content Group, 2018. URL: https://www.whycantwe.org/.</p>
        <p>[40] Wired (Satya Nadella and Marco Iansiti), Want a more equitable future? Empower citizen developers, 2020. URL: https://www.wired.com/story/want-a-more-equitable-future-empower-citizen-developers/.</p>
        <p>[41] D. Marculescu, The case for hardware-ML model co-design with Diana Marculescu, TWIML AI Podcast with Sam Charrington, 2020. URL: https://twimlai.com/twiml-talk-391-the-case-for-hardware-ml-model-co-design-with-diana-marculescu/.</p>
        <p>[42] B. E. Bejnordi, T. Blankevoort, M. Welling, Batch-shaped channel gated networks, CoRR abs/1907.06627 (2019). URL: http://arxiv.org/abs/1907.06627.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N.</given-names>
            <surname>Benaich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Hogarth</surname>
          </string-name>
          ,
          <source>State of ai report</source>
          ,
          <year>2020</year>
          . URL: https://www.stateof.ai/.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] OpenAI, Openai api faq,
          <year>2020</year>
          . URL: https://openai.com/blog/openai-api/.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Silver</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hubert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schrittwieser</surname>
          </string-name>
          , I. Antonoglou,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Guez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lanctot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Sifre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kumaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Graepel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lillicrap</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Simonyan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hassabis</surname>
          </string-name>
          ,
          <article-title>A general reinforcement learning algorithm that masters chess, shogi, and go through self-play</article-title>
          ,
          <source>Science</source>
          <volume>362</volume>
          (
          <year>2018</year>
          )
          <fpage>1140</fpage>
          -
          <lpage>1144</lpage>
          . URL: https://science.sciencemag.org/content/362/6419/1140. doi:10.1126/science.aar6404.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>CNN (James Griffiths)</surname>
          </string-name>
          ,
          <article-title>Korean go master quits the game because ai 'cannot be defeated'</article-title>
          ,
          <source>CNN</source>
          ,
          <year>2019</year>
          . URL: https://www.cnn.com/2019/11/27/tech/go-master-google-intl-hnk-scli/index.html.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>D. H</surname>
          </string-name>
          ,
          <article-title>How much did alphago zero cost</article-title>
          ?,
          <year>2020</year>
          . URL: https://www.yuzeh.com/data/agz-cost.html.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Wired</surname>
          </string-name>
          (Gary Marcus),
          <source>Deepmind's losses and the future of artificial intelligence</source>
          ,
          <year>2019</year>
          . URL: https://www.wired.com/story/deepminds-losses-future-artificial-intelligence/.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Medium (Ken Wang)</surname>
          </string-name>
          ,
          <article-title>Deepmind achieved starcraft ii grandmaster level, but at what cost?</article-title>
          ,
          <year>2020</year>
          . URL: https://medium.com/swlh/deepmind-achieved-starcraft-ii-grandmaster-level-but-at-what-cost-32891dd990e4.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Google</surname>
          </string-name>
          , Cloud tpu,
          <year>2020</year>
          . URL: https://cloud.google.com/tpu.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Krishna</surname>
          </string-name>
          ,
          <article-title>Ibm ceo's letter to congress on racial justice reform</article-title>
          ,
          <year>2020</year>
          . URL: https://www.ibm.com/blogs/policy/facial-recognition-sunset-racial-justice-reforms/.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Amazon Staff</surname>
          </string-name>
          ,
          <article-title>We are implementing a one-year moratorium on police use of rekognition</article-title>
          ,
          <year>2020</year>
          . URL: https://www.aboutamazon.com/news/policy-news-views/we-are-implementing-a-one-year-moratorium-on-police-use-of-rekognition.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Washington Post (Jay Greene)</surname>
          </string-name>
          ,
          <article-title>Microsoft won't sell police its facial-recognition technology, following similar moves by amazon and ibm</article-title>
          ,
          <year>2020</year>
          . URL: https://www.washingtonpost.com/technology/2020/06/11/microsoft-facial-recognition/.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Buolamwini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gebru</surname>
          </string-name>
          ,
          <article-title>Gender shades: Intersectional accuracy disparities in commercial gender classification</article-title>
          , in: S. A. Friedler, C. Wilson (Eds.),
          <source>Proceedings of the 1st Conference on Fairness, Accountability and Transparency</source>
          , volume
          <volume>81</volume>
          <source>of Proceedings of Machine Learning Research</source>
          , PMLR, New York, NY, USA,
          <year>2018</year>
          , pp.
          <fpage>77</fpage>
          -
          <lpage>91</lpage>
          . URL: http://proceedings.mlr.press/v81/buolamwini18a.html.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>BBC (David Molloy)</surname>
          </string-name>
          ,
          <article-title>The great graphics card shortage of 2020 (and 2021)</article-title>
          , BBC,
          <year>2020</year>
          . URL: https://www.bbc.com/news/technology-55755820.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T.</given-names>
            <surname>Dettmers</surname>
          </string-name>
          ,
          <article-title>Which gpu(s) to get for deep learning: My experience and advice for using gpus in deep learning</article-title>
          ,
          <year>2020</year>
          . URL: https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>NVIDIA</surname>
          </string-name>
          ,
          <source>Geforce rtx 3080</source>
          ,
          <year>2020</year>
          . URL: https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3080/.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>ASUS</surname>
          </string-name>
          ,
          <article-title>Recommended power supply unit table</article-title>
          ,
          <year>2020</year>
          . URL: https://dlcdnets.asus.com/pub/ASUS/Accessory/Power_Supply/Manual/RECOMMENDED_PSU_TABLE.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S.</given-names>
            <surname>Luccioni</surname>
          </string-name>
          ,
          <article-title>Visualizing climate impact with gans w/ sasha luccioni</article-title>
          ,
          <source>TWIML AI Podcast with Sam Charrington</source>
          ,
          <year>2020</year>
          . URL: https://twimlai.com/visualizing-climate-impact-with-gans-w-sasha-luccioni/.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>C.</given-names>
            <surname>Hill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bellamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Erickson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Burnett</surname>
          </string-name>
          ,
          <article-title>Trials and tribulations of developers of intelligent systems: A field study</article-title>
          ,
          <source>in: 2016 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>162</fpage>
          -
          <lpage>170</lpage>
          . doi:10.1109/VLHCC.2016.7739680.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>O.</given-names>
            <surname>Vinyals</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Silver</surname>
          </string-name>
          , et al.,
          <article-title>AlphaStar: Mastering the Real-Time Strategy Game StarCraft II</article-title>
          ,
          <year>2019</year>
          . URL: https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>O.</given-names>
            <surname>Vinyals</surname>
          </string-name>
          , I. Babuschkin,
          <string-name>
            <given-names>W. M.</given-names>
            <surname>Czarnecki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mathieu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dudzik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Powell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ewalds</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Georgiev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Oh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Horgan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kroiss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Danihelka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Sifre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Agapiou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jaderberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Vezhnevets</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Leblond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Pohlen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dalibard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Budden</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sulsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Molloy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. L.</given-names>
            <surname>Paine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gulcehre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Pfaff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ring</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Yogatama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wünsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>McKinney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Schaul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lillicrap</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kavukcuoglu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hassabis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Apps</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Silver</surname>
          </string-name>
          ,
          <article-title>Grandmaster level in starcraft ii using multi-agent reinforcement learning</article-title>
          ,
          <source>Nature</source>
          <volume>575</volume>
          (
          <year>2019</year>
          )
          <fpage>350</fpage>
          -
          <lpage>354</lpage>
          . URL: https://doi.org/10.1038/s41586-019-1724-z. doi:10.1038/s41586-019-1724-z.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Next Platform (Paul Teich)</surname>
          </string-name>
          ,
          <article-title>Tearing apart google's tpu 3.0 ai coprocessor</article-title>
          ,
          <year>2020</year>
          . URL: https://www.nextplatform.com/2018/05/10/tearing-apart-googles-tpu-3-0-ai-coprocessor/.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Press Enterprise (Geri Gibbons)</surname>
          </string-name>
          ,
          <article-title>Talen plans data warehouse</article-title>
          , Press Enterprise,
          <year>2021</year>
          . URL: https://www.pressenterpriseonline.com/daily/021721/page/1/story/talen-plans-data-warehouse.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Y. A.</given-names>
            <surname>Strengers</surname>
          </string-name>
          ,
          <article-title>Designing eco-feedback systems for everyday life</article-title>
          ,
          <source>in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11</source>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2011</year>
          , pp.
          <fpage>2135</fpage>
          -
          <lpage>2144</lpage>
          . URL: https://doi.org/10.1145/1978942.1979252. doi:10.1145/1978942.1979252.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Pew Research Center (Juliana Menasce Horowitz and Ruth Igielnik and Rakesh Kochhar)</surname>
          </string-name>
          ,
          <article-title>Most americans say there is too much economic inequality in the u.s., but fewer than half call it a top priority</article-title>
          ,
          <year>2020</year>
          . URL: https://www.pewsocialtrends.org/2020/01/09/views-of-economic-inequality/.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>A. W.</given-names>
            <surname>Bartik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bertrand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Cullen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. L.</given-names>
            <surname>Glaeser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Luca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Stanton</surname>
          </string-name>
          ,
          <article-title>The impact of covid-19 on small business outcomes and expectations</article-title>
          ,
          <source>Proceedings of the National Academy of Sciences</source>
          <volume>117</volume>
          (
          <year>2020</year>
          )
          <fpage>17656</fpage>
          -
          <lpage>17666</lpage>
          . URL: https://www.pnas.org/content/117/30/17656. doi:10.1073/pnas.2006991117.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>E.</given-names>
            <surname>Saez</surname>
          </string-name>
          , G. Zucman,
          <article-title>Trends in US Income and Wealth Inequality: Revising After the Revisionists</article-title>
          ,
          <source>Technical Report, National Bureau of Economic Research</source>
          ,
          <year>2020</year>
          . URL: https://www.nber.org/papers/w27921.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>A.</given-names>
            <surname>Madsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Johansen</surname>
          </string-name>
          ,
          <article-title>Neural arithmetic units</article-title>
          ,
          <source>in: International Conference on Learning Representations</source>
          ,
          <year>2020</year>
          . URL:
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>E.</given-names>
            <surname>Strubell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ganesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>McCallum</surname>
          </string-name>
          ,
          <article-title>Energy and policy considerations for deep learning in NLP</article-title>
          , in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Florence, Italy,
          <year>2019</year>
          , pp.
          <fpage>3645</fpage>
          -
          <lpage>3650</lpage>
          . URL: https://www.aclweb.org/anthology/P19-1355. doi:10.18653/v1/P19-1355.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>E.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.-C.</given-names>
            <surname>Juan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Stamoulis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Marculescu</surname>
          </string-name>
          ,
          <article-title>NeuralPower: Predict and deploy energy-efficient convolutional neural networks</article-title>
          , in: M.-L. Zhang, Y.-K. Noh (Eds.),
          <source>Proceedings of the Ninth Asian Conference on Machine Learning</source>
          , volume
          <volume>77</volume>
          <source>of Proceedings of Machine Learning Research, PMLR</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>622</fpage>
          -
          <lpage>637</lpage>
          . URL: http://proceedings.mlr.press/v77/cai17a.html.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [45]
          <string-name>
            <given-names>A.</given-names>
            <surname>Lacoste</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Luccioni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          , T. Dandres,
          <article-title>Quantifying the carbon emissions of machine learning</article-title>
          ,
          <source>CoRR abs/1910.09700</source>
          (
          <year>2019</year>
          ). URL: http://arxiv.org/abs/1910.09700.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>D.</given-names>
            <surname>Rolnick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. L.</given-names>
            <surname>Donti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. H.</given-names>
            <surname>Kaack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kochanski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lacoste</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sankaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Milojevic-Dupont</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Jaques</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Waldman-Brown</surname>
          </string-name>
          , A. Luccioni,
          <string-name>
            <given-names>T.</given-names>
            <surname>Maharaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. D.</given-names>
            <surname>Sherwin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Mukkavilli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. P.</given-names>
            <surname>Körding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gomes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Ng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hassabis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Platt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Creutzig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chayes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <article-title>Tackling climate change with machine learning</article-title>
          ,
          <source>CoRR abs/1906.05433</source>
          (
          <year>2019</year>
          ). URL: http://arxiv.org/abs/1906.05433.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [47]
          <string-name>
            <given-names>K.</given-names>
            <surname>Vu</surname>
          </string-name>
          ,
          <article-title>Two things you need to know about reinforcement learning - computational efficiency and sample efficiency</article-title>
          ,
          <year>2020</year>
          . URL: https://www.kdnuggets.com/2020/04/2-things-reinforcement-learning.html.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [48]
          <string-name>
            <given-names>P.</given-names>
            <surname>Henderson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Romoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Brunskill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Jurafsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pineau</surname>
          </string-name>
          ,
          <article-title>Towards the systematic reporting of the energy and carbon footprints of machine learning</article-title>
          ,
          <source>Journal of Machine Learning Research</source>
          <volume>21</volume>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>43</lpage>
          . URL: http://jmlr.org/papers/v21/20-312.html.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [49]
          <string-name>
            <given-names>ACM</given-names>
            <surname>Code 2018 Task Force</surname>
          </string-name>
          ,
          <article-title>ACM Code of Ethics</article-title>
          ,
          <year>2018</year>
          . URL: https://www.acm.org/code-of-ethics.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [50]
          <string-name>
            <given-names>K.</given-names>
            <surname>Holstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Wortman</given-names>
            <surname>Vaughan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Daumé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dudik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wallach</surname>
          </string-name>
          ,
          <article-title>Improving fairness in machine learning systems: What do industry practitioners need?</article-title>
          ,
          <source>in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19</source>
          ,
          Association for Computing Machinery, New York, NY, USA,
          <year>2019</year>
          , p.
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          . URL: https://doi.org/10.1145/3290605.3300830. doi:10.1145/3290605.3300830.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>