CEUR Workshop Proceedings Vol-1002, paper 2 (https://ceur-ws.org/Vol-1002/paper2.pdf; dblp: https://dblp.org/rec/conf/ipsn/LoureiroPST13)
     A Sensing Platform for High Visibility of the
                     Datacenter

           João Loureiro* , Nuno Pereira, Pedro Santos, Eduardo Tovar

      CISTER/INESC-TEC, ISEP, Polytechnic Institute of Porto, Porto, Portugal,
                   {joflo, nap, pjsol, emt}@isep.ipp.pt



         Abstract. Data centers are large energy consumers, and a substantial
         portion of this power consumption is due to the control of physical pa-
         rameters, which creates the need for highly efficient environmental
         control systems. In this paper, we describe a hardware sensing platform
         specifically tailored to collect physical parameters (temperature, pres-
         sure, humidity and power consumption) in large data centers. This plat-
         form is an important enabler for finding opportunities to optimize energy
         consumption. We also introduce an analysis of the delay to obtain the
         sensing data from the sensor network. This analysis provides insight into
         the time scales supported by our platform, and also allows studying the
         delay for different datacenter topologies.


1      Introduction
Data centers' large power consumption justifies special attention to the design
of energy-efficient data centers. Power usage effectiveness (PUE) has become
the standard metric of data center efficiency. It measures how much of the total
energy consumed is actually spent on IT work rather than on the facility's
overhead, such as lighting, cooling and power distribution, and it is given by:
PUE = (IT Equipment Energy + Facility Overhead Energy) / IT Equipment
Energy. It is desirable to measure it with a high spatio-temporal granularity, so
that the PUE metric is as accurate as possible and to enable a better under-
standing of the power consumption in the datacenter. This better understanding
may lead to great reductions through, e.g., better load balancing, power distri-
bution, or reduced air conditioning usage [1].
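To make the metric concrete, a minimal sketch of the PUE computation (the energy figures below are made up for illustration):

```python
def pue(it_energy_kwh: float, overhead_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by the
    energy delivered to IT equipment. A PUE of 1.0 would mean zero overhead."""
    return (it_energy_kwh + overhead_kwh) / it_energy_kwh

# Hypothetical figures: 1000 kWh of IT load, 500 kWh of cooling/lighting overhead
print(pue(1000.0, 500.0))  # -> 1.5
```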
    To have a full picture of the datacenter environment, it is important to col-
lect air pressure, temperature, humidity and power consumption data at a high
granularity (in time and space). The relevance of collecting these parameters is
discussed in the next paragraphs.
    In a typical datacenter, IT equipment is organized into rows, with a cold
aisle in front, where cold air enters the equipment racks, and a hot aisle in
back, where hot air is exhausted. Computer-room air conditioners (CRACs) are
commissioned to cycle the air, by pushing the cold air and returning the hot air
*
     João Loureiro is supported by the government of Brazil through CNPq, Conselho
     Nacional de Desenvolvimento Científico e Tecnológico.

to be cooled again. The CRAC systems are responsible for a large share of the
facility overhead energy, and in order to achieve a more uniform thermal profile,
special effort must be devoted to airflow distribution, by preventing cold and hot
air from mixing and by eliminating any hotspots. The airflow can be better
understood by placing pressure and temperature sensors.
    By measuring the local pressure, it is possible to estimate the speed and
direction of the airflow between the sensed points and possibly identify unwanted
mixing or flow bottlenecks, as shown in [2]. Pressure can also be used for
workload balancing among servers, as in [3], where a patent application describes
a system that uses a load balancer to shift tasks among servers based on their
particular cooling needs, which are related to the air pressure drop across the server.
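The estimation method of [2] is not reproduced here, but as a rough sketch, the textbook pitot-tube relation v = sqrt(2·Δp/ρ) indicates the order of magnitude of airflow speed recoverable from a differential pressure reading:

```python
import math

AIR_DENSITY = 1.2  # kg/m^3, dry air at roughly 20 degC (assumed)

def airflow_speed(delta_p_pa: float) -> float:
    """Rough air speed (m/s) from a dynamic pressure difference (Pa),
    using the pitot-tube relation v = sqrt(2 * dp / rho)."""
    return math.sqrt(2.0 * delta_p_pa / AIR_DENSITY)

# A 10 Pa difference corresponds to roughly 4 m/s of airflow
print(round(airflow_speed(10.0), 2))
```

In practice the geometry of racks and perforated tiles matters; this relation only illustrates why pascal-scale pressure differences between sensed points carry useful flow information.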
    With fine-grained temperature measurement, it becomes easy to localize
hotspots, and by crossing this information with pressure information, better
details of the airflow can be estimated, leading to better-tuned CRAC sys-
tems.
    Another important environmental parameter is the local humidity. Higher
relative humidity decreases the chances of static electrical discharges that can
damage the IT equipment and, at the same time, increases the heat transfer from
the servers to the cooling airflow. But too much water vapor in the air reduces
the lifetime of the IT equipment and increases the chance of water condensation,
which is not desirable. Several entities, such as the American Society of Heating,
Refrigerating and Air-Conditioning Engineers (ASHRAE), provide guidelines with
allowed and recommended values of relative humidity, as well as for dry-bulb
temperature, maximum dew point, maximum elevation and maximum rate of
temperature change, as seen in [4].
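As an illustration of the dew-point limits discussed above (using the standard Magnus approximation, not a formula taken from [4]):

```python
import math

def dew_point_c(temp_c: float, rh_percent: float) -> float:
    """Approximate dew point (degC) from temperature and relative humidity
    via the Magnus formula. Coefficients a, b are standard Magnus values;
    this is an illustration, not a formula from the ASHRAE guide [4]."""
    a, b = 17.27, 237.7
    gamma = a * temp_c / (b + temp_c) + math.log(rh_percent / 100.0)
    return b * gamma / (a - gamma)

# At 22 degC and 60% RH the dew point is about 13.9 degC
print(round(dew_point_c(22.0, 60.0), 1))
```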
    We present in this paper a sensing platform for collecting temperature,
pressure, humidity and local (rack-level) power consumption data. The develop-
ment of the platform was centered on the specific application scenario of energy
optimization in large datacenters, focusing on high-resolution sensing: several
sensing points per rack, sampled at sub-second time intervals. Evidently, for
such a system to be practical, cost is an important factor to consider. We detail
the design of this platform and develop an analysis of the time to obtain the
sensing data from the sensor network. This analysis also lets us see the tradeoff
between the number of sensing points per datacenter row and the speed of data
collection.


2   Related Work

Green data centers have received considerable attention in recent research liter-
ature. Some recent approaches rely on building software models through joint
coordination of cooling and load management [5, 6], or on formulating an energy
minimization problem subject to service delay and Quality of Service (QoS)
constraints. In this class, it is worth mentioning dynamic voltage scaling [7, 8]
and on/off power management schemes [9]–[11]. The complexity of data center
airflow and heat transfer is compounded by each data center facility having its
own unique layout, so achieving a general model is difficult [12]. For example,

in [5], the authors stress that their model has several parameters that need to
be determined for specific applications.
    Given such models, acquiring real-time data at a fine enough spatio-temporal
resolution becomes an important topic, as this data can be used to validate mod-
els and keep their inputs updated at run-time. Nevertheless, this problem poses
new challenges and research issues concerning the type, number and placement
of sensors [12].
    Some works [13, 14] pushed in the direction of deploying wireless sensor nodes
to monitor the thermal distribution, in order to figure out how to avoid hotspots
and overheating conditions. We differ from such approaches in that we want
very fine-grained (in space and time) gathering of power and environmental
parameters, including physical quantities other than temperature. Using a
mixed wired/wireless solution, [13] obtained an average one-round collection time
of approximately 6 seconds for 50 nodes. They also deployed 694 sensor nodes
in a data center, reading every cluster of 4 sensors at most every 30 seconds. In
that work ([13]), every cluster had a wireless station and nodes were powered
via USB, which makes the system dependent on having a powered USB
port available (this might be a problem, since the server to which a node is
connected cannot be powered off, for example). A pure wireless solution was
presented in [14], which reported a deployment of 107 battery-powered
wireless nodes, taking 3 seconds to sample all of them (not considering data
losses). The experiment lasted only 35 days before the batteries had to be
replaced, which is not practical for large, long-lived deployments.
    Our proposed system is based on a hierarchical, modular, flexible and fine-
grained sensor network architecture, where data is collected from heterogeneous
sensors (including power meters) placed in each rack. The analysis of their inter-
correlations will enable closer examination and a better understanding of the
flow and temperature dynamics within each data center [15]. To our knowledge,
no previous work enables correlating power and environmental characteristics at
a per-rack or per-server granularity with such temporal resolution.


3    Overview

The proposed sensor network architecture combines wired and wireless
technologies, and is designed to achieve high spatio-temporal resolution across
data center rooms while keeping the system flexible and modular, with low
latency and low cost.
   Our system covers the datacenter first with a short-range bus that handles
the communication needs inside each rack, then with a longer-range bus that
covers each row, and finally with wireless communication that gathers the data
from the entire datacenter room. Four different types of devices cover these
levels (rack, row and room): (i) Sensing Units sense the physical parameters
(temperature, pressure, humidity, and power) in each rack; (ii) Sensor Nodes
collect the sensing data for the entire rack; and (iii)
Wireless Base Stations (WBSs) collect data from several Sensor Nodes in a


  (Figure 1: block diagram of the architecture. A WBS (802.15.4 radio, RTC,
  RAM, microcontroller) is linked over RS485 to a Sensor Node (SN) with an
  I2C switch, buffers and analogue inputs (AD1-AD6), which connects to
  environmental (SU-E: pressure, humidity, temperature) and power (SU-P)
  Sensing Units.)


                        Fig. 1. Network Architecture and Layout


row, as represented in Figure 1. Finally, (iv) Gateways collect data from all of
the WBSs in a datacenter room.
    Starting at the lowest level, our sensor network consists of two different types
of Sensing Units: (a) a small passive sensing unit for measuring environmental
quantities, with at most one temperature, one humidity and one pressure digital
sensor, and (b) a power metering unit with real, reactive and apparent power
measurement capabilities, shown in Figure 1 as SU-E and SU-P respectively.
The environmental Sensing Units can be manufactured according to the sensing
and cost needs, with any combination of sensors, which is represented by the
three different shapes. Both sensing units deliver data to the next level in the
hierarchy through a wired short-range bus (I2C), designed to cover only one
rack of servers (back and front).
    At the next level, the Sensor Node is responsible for collecting the data of all
the Sensing Units attached to it, and possibly for performing simple data aggre-
gation and sensor fusion, before delivering it to the next level in the hierarchy
over a longer-range wired bus (MODBUS).
    WBSs are responsible for querying the Sensor Nodes within their respective
cluster, again performing data aggregation, sensor fusion and data analysis.
They then communicate with devices at the next level in the hierarchy to deliver
the relevant data. Gateways then provide the data gathered from the sensor
network to the data distribution system in a standard format. From this point
on, sensing data is published to a publish/subscribe middleware that distributes
the acquired data to different applications, each of which uses the information
for different purposes (alarms, data logging, visualization, etc.).
    Each Sensor Node can connect up to 52 temperature sensors, 54 power
meters, 14 pressure sensors and 14 humidity sensors. The following section
describes each of the system components in more detail.

4         Platform Details
                            (a) Sensor Node            (b) Sensing Unit

                               Fig. 2. Hardware Platforms


Well-known protocols, network architectures and off-the-shelf electronic compo-
nents were chosen to compose the system, considering that the final objective
was to build a fully functional, industry-ready sensor network at very low
cost. Besides the architecture, the technology chosen to implement the network
is described below.


4.1   Sensing Unit

With the popularization of two-wire I2C buses on motherboards, cellphones
and general embedded systems, many companies now produce sensors with a
digital I2C output, embedding the micro-mechanical sensor, signal amplifiers,
analogue-to-digital converters, memory and an I2C front end to manage the
communication on the bus in a single package. These systems-on-chip enable
high-accuracy and reliable measurements, since integration decreases the proba-
bility of data corruption due to external interference. It also prevents the cali-
bration issues found in purely analogue sensor measurements, since digital sen-
sors are factory calibrated and digitally compensated. For these reasons, I2C
sensors were used in the Sensing Units.
    Some limitations of I2C buses had to be overcome to make their usage prac-
tical in this application. First, buffers were added as an interface between the
I2C bus lines and every circuit board attached to them, allowing the I2C bus to
operate over longer distances by increasing the robustness of the logic signals
of standard I2C buses. Second, switches were added to every Sensing Unit on
the bus to allow the usage of more than one sensor with non-configurable ad-
dresses, making each accessible from the main bus.
    Figure 2(b) depicts a Sensing Unit with temperature, humidity and pres-
sure sensors. The temperature sensor is a low-cost, low-power device with
1.5◦ C accuracy, a maximum resolution of 0.0625◦ C and conversion times be-
tween 27.5 and 300 ms. The humidity sensor has 1.8 %RH accuracy, a maximum
resolution of 0.04 %RH and conversion times between 3 and 29 ms. Both sensors
are suitable for the application, where the focus is on changes at coarser scales
according to the ASHRAE guidelines [4], which specify a range of dew points
between 5.5◦ C (for 60 %RH) and 15◦ C. The pressure sensor ranges from 300 to
1100 hPa, with a typical accuracy of ±1 hPa and a resolution of 0.03 hPa, with
conversion times between 3 and 25.5 ms. It is also suitable for the application,
since typical pressure variations inside datacenters are of greater orders of
magnitude, as seen in [2].
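The 0.0625 °C resolution quoted above corresponds to a 12-bit reading. Assuming the common 12-bit two's-complement register format (the paper does not name the specific sensor part), the raw-to-Celsius conversion is:

```python
def raw_to_celsius(raw12: int) -> float:
    """Convert a 12-bit two's-complement temperature reading to degC at
    0.0625 degC/LSB. The register format is an assumption; the paper does
    not identify the specific sensor."""
    if raw12 & 0x800:        # sign bit of the 12-bit value is set
        raw12 -= 1 << 12     # negative temperature
    return raw12 * 0.0625

print(raw_to_celsius(0x190))  # 400 * 0.0625 = 25.0 degC
print(raw_to_celsius(0xFF0))  # -16 * 0.0625 = -1.0 degC
```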
    The Power Meter Sensing Unit is composed of a dedicated chip that in-
terfaces with the power line and provides real, reactive, and apparent power
measurements to the embedded computational unit, which is responsible for in-
terfacing with the I2C bus as a slave and for delivering this information to the
master at the next level.
    For both Sensing Units, power is carried in the same cable as the I2C
data, and is locally converted from 5 to 3.3 V by a low-dropout (LDO) regulator,
providing a more stable, lower-ripple power supply for the sensors, which are
sensitive to such variations.


4.2   Sensor Nodes

A Sensor Node is a communication/computation-enabled device, physically linked
over the I2C bus (also through buffers) to a number of Sensing Units. The Sen-
sor Nodes gather the data from the Sensing Units and, in turn, answer data
requests from the WBS. Figure 2(a) depicts a Sensor Node.
     To keep cost and complexity low at this tier of the network architecture, the
Sensor Nodes communicate with one Wireless Base Station (WBS) over a bus,
e.g., using a RS485/MODBUS technology [16]. In particular, the WBS node acts
as a local coordinator and master of the bus.
     The Sensor Node is also composed of: (i) six analogue inputs suited for
current measurement, connected to external current transducers attached to the
power lines, as a cheap and simple alternative for basic current measurement;
(ii) two I2C ports, buffered through one switch, which doubles the bus capacity
in terms of addressable devices and enables better mechanical placement of the
cables going to the back and front of a rack; and (iii) one RS485 port for the
MODBUS.
     The power supply for the Sensor Nodes is carried by a twisted pair cable,
along with the MODBUS data, in another pair. At every Sensor Node, a high
efficiency DC-DC step down converter, converts from 48 to 5V for the local
supply. This is an important feature as it reduces the number of cables that
connect to each node, facilitating installation of the devices.


4.3   Wireless Base Stations (WBSs)

The WBS is directly connected to a power source and supplies power through a
twisted-pair cable to all the Sensor Nodes on its bus. At every node on this
bus, the voltage is locally converted to lower values by a step-down switched
power supply, for higher system efficiency. Wires running in the same cable
form a serial data bus (MODBUS over an RS485 connection) that interconnects
the Sensor Nodes.
    The WBS is based on the same printed circuit board as the Sensor Node,
without the sensor interfaces and with some extra components, such as an exter-
nal non-volatile ferroelectric random access memory (FRAM), used as a buffer
and for diagnosing the system after failures or power cuts (by keeping the last
operational state). The WBS also includes a real-time clock used for time-
stamping the data packets.
    The WBSs act as IEEE 802.15.4 cluster heads and are connected to each
other in a mesh topology. A common Gateway is in charge of gathering measure-
ments and sending them over a long-range communication technology (e.g., WiFi
or Ethernet). In terms of hardware, the WBS uses the same platform as a generic
Sensor Node, with an on-board ZigBee radio. Thus, each Sensor Node can be-
come a WBS with minimal modifications, i.e., just by plugging in the wireless
module and uploading a different firmware.

4.4   Gateways
The sensor network can have one or more Gateways. Gateways maintain rep-
resentations of the data flows from the sensor network to the data distribution
system. They perform the necessary adaptation of the data received from the
WSN. The gateways can be deployed as one per room, serving all the rows of racks
in that room; more gateways can also be deployed to improve radio coverage,
for load balancing or for redundancy.


5     Delay Analysis
In this section we will develop an analysis of the time to transmit sensor data
using our system. The purpose of this analysis is to show that our system can
exhibit very low delays in the presence of a large number of sensing points.
    This analysis will also enable us to study the communication delay as we
add Sensor Nodes to the network. We consider that each Sensor Node added has
Nsu−sn Sensing Units attached to it, where each Sensing Unit has three 16-bit
sensors. For every Nsn−wbs Sensor Nodes added to the network, one WBS must
also be added. The total number of Sensor Nodes is denoted Nsn .
    These parameters (Nsu−sn and Nsn−wbs ) are defined according to the topol-
ogy of the data center room. Therefore, our analysis allows evaluating the
tradeoff between the delay and the network organization.

5.1   Calculating the Response Time
The response time R required to collect data from all the sensors is given by
adding together the time to transmit all the wireless requests to all WBSs (treq )
and the corresponding replies (trep ), as given by Equation (1).



                                R = (treq + trep )                            (1)
    The time to transmit all requests is computed as the sum of the time required
to transmit a request to each WBS (there are ⌈Nsn /Nsn−wbs ⌉ WBSs in the
network) and the worst-case blocking time, Bmb , as given by Equation (2):

                 treq = ⌈Nsn /Nsn−wbs ⌉ × (twtx (Swreq ) + Bmb )              (2)
where twtx (Swreq ) is the time to transmit a request packet in the wireless
802.15.4 network, including all protocol overhead, for a packet with Swreq bits
of payload; it will be defined in Section 5.2. Bmb is a constant given by the
longest data transaction over the MODBUS, which corresponds to the largest
task to be executed by the WBS in a non-preemptive system.
   The time to transmit all replies is given by Equation (3) as follows:

           trep = ⌈(Nsu−sn × Nsn ) × Ssd /Smwp + 1⌉ × twtx (Smwp )            (3)

where Ssd is the size of the sensor data to be transmitted by each Sensor Unit
and Smwp is maximum wireless data payload, after accounting for all proto-
cols headers. twtx (Smwp ) is the time to transmit a packet in the wireless IEEE
802.15.4 network with the maximum possible payload (mwp bits) and will be
defined in Section 5.2.


5.2   Calculating the Wireless Transmission time

The reasoning applied to calculate the wireless transmission time (twtx (S))
is similar to that found in [17, 18] when analyzing the maximum theoretical
throughput of non-beacon-enabled IEEE 802.15.4. The time to send an IEEE
802.15.4 packet with a payload size of S bits is given by:

                     twtx (S) = Tib + tppdu (S) + Tack + Tif s                (4)
where Tib is the initial backoff period, which depends on the parameter macM inBE;
by default, macM inBE = 3, resulting in Tib = 1120 µs. The time to
transmit the PHY protocol data unit (PPDU) with a payload size of S bits is
denoted by tppdu (S). The time to transmit an acknowledgment is defined as
Tack = Tackppdu + Trxtx = 544 µs, since it must include the time to send the
acknowledgment packet (Tackppdu = 352 µs, as defined in the standard [19]) and
the time for the transceiver to switch from receive to transmit (Trxtx = 192 µs,
the maximum value defined in [19] and the value found in the 802.15.4
transceivers employed [20]). The interframe spacing (IFS), Tif s , is set to the
long IFS defined by the standard, 640 µs (this applies when the MAC protocol
data unit (MPDU) to be sent is at least 18 bytes long [19]).

  (Figure 3: plots of the network response time.
   (a) R (Eq. 1) for Nsu−sn = 10, with varying Nsn−wbs = {1, 2, 20, 250};
   (b) detail for Nsn−wbs = 20; (c) detail for Nsn−wbs = 250.)

                         Fig. 3. Network Response Time



      The time to transmit the PPDU with a payload of size S bits can be defined
as:
                   tppdu (S) = (Shdr + Szbee + S + Sf tr ) × τbit             (5)
where Shdr is the sum of the sizes of the synchronization header (SHR), PHY
header (PHR) and MAC header (MHR; from [19]: SSHR = 40, SP HR = 8,
SM HR = 56 bits). The size of the ZigBee protocol headers is Szbee = 41 × 8 bits,
and the size of the MAC footer is Sf tr = 16 bits. The time to transmit one bit
is τbit = 4 µs (for a data rate of 250 kbps).

5.3               Delay Results
Instantiating the response time given by Equation (1) for Nsu−sn = 10 and
Nsn−wbs = {1, 2, 20, 250} results in Figure 3(a). We have selected these values
for Nsn−wbs because they exemplify well the trend as we change this parameter.
For these calculations, we have used Swreq = 16 bits (a request with a two-byte
identifier) and Smwp = 576 bits (the maximum IEEE 802.15.4 payload minus
the overhead defined in Equation (5)).
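Under this reading of Equations (1)-(5), and taking the MODBUS blocking term Bmb as zero (the text does not give its value), the curves of Figure 3 can be reproduced with a short script:

```python
import math

# IEEE 802.15.4 timing constants from Section 5.2
TAU_BIT = 4e-6                            # time per bit at 250 kbps
T_IB, T_ACK, T_IFS = 1120e-6, 544e-6, 640e-6
S_OVERHEAD = (40 + 8 + 56) + 41 * 8 + 16  # SHR+PHR+MHR, ZigBee headers, MAC footer (bits)

def t_wtx(s_bits: int) -> float:
    """Eqs. (4)-(5): time to send an S-bit payload, with all overhead."""
    return T_IB + (S_OVERHEAD + s_bits) * TAU_BIT + T_ACK + T_IFS

def response_time(n_sn: int, n_su_sn: int = 10, n_sn_wbs: int = 20,
                  s_sd: int = 3 * 16, s_wreq: int = 16, s_mwp: int = 576,
                  b_mb: float = 0.0) -> float:
    """Eqs. (1)-(3): time to collect data from all sensors.
    b_mb (worst-case MODBUS blocking) is assumed to be 0 for illustration;
    the ceiling in Eq. (3) is read as wrapping the whole packet count."""
    t_req = math.ceil(n_sn / n_sn_wbs) * (t_wtx(s_wreq) + b_mb)
    t_rep = math.ceil(n_su_sn * n_sn * s_sd / s_mwp + 1) * t_wtx(s_mwp)
    return t_req + t_rep

# 60 Sensor Nodes (600 Sensing Units) grouped 20 per WBS
print(round(response_time(60), 4))
```

Under these assumptions, 60 Sensor Nodes grouped 20 per WBS (600 Sensing Units) take roughly 0.34 s per collection round, in the same range as Figure 3(b).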
    Not surprisingly, in Figure 3(a), we can see that R is reduced as more
Sensor Nodes are attached to each WBS, but this reduction becomes increas-
ingly smaller. We can also see that our system can support a large number of
sensing points and still enable very fast, automatic response to events detected
by the sensor system.
   Figures 3(b) and 3(c) present another aspect related to the network topology,
which must be considered when designing the network. The horizontal line in
both plots shows the time to gather the data from all Sensor Nodes attached to
the WBS (20 Sensor Nodes in Figure 3(b), and 250 in Figure 3(c)). Given the
way the network is designed, if one implements a network with Nsn below the
intersection of the horizontal line and the response-time curve, the wireless
communication cycle of the WBS will be faster than the communication cycle
on the MODBUS. Thus, the WBS would repeatedly transmit data from previous
communication cycles. Nsn−wbs should be set such that the lines intersect at the
desired Nsn , which can easily be found using the analysis presented in this section.
    In Figures 3(b) and 3(c), we can see a stepped behavior of the response time
as Nsu grows. One step happens every 6 × Nsu−sn Sensing Units. The reason for
this step is that, as we add Sensor Nodes, an extra packet eventually needs
to be sent (the length and the number of packets needed depend on
Nsu−sn and also on the maximum payload mwp). In this scenario, the sensor
data for the 7th Sensor Node fits in the same number of packets, and thus the
delay does not increase. A bigger step occurs every Nsn−wbs Sensor Nodes, due
to the overhead of adding one more WBS.




6   Conclusions


We have presented a platform for acquiring the physical parameters of a datacen-
ter. This platform was developed as a mix of wired and wireless communicating
nodes, such that it can enable flexible monitoring of the datacenter at a very
high temporal and spatial resolution of the sensor measurements, while keep-
ing the cost per sensing point very low. Compared to previous work, we enable
much higher sensing resolution (several sensing points per rack, sampled at sub-
second frequency), while keeping cost low and installation easy. Acquiring such
physical parameters at a very high resolution is important to find opportunities
to optimize energy consumption, minimize local hot-spots, achieve more accu-
rate predictive maintenance, perform more accurate billing, and it also enables
very fast response to changes in the measured parameters, including automated
actuation.
    We also presented an analysis of the delay of our system. This analysis en-
abled us to study the communication delay as we add Sensor Nodes to the
network, and has shown that our system can exhibit very low delays in the
presence of a large number of sensing points. This analysis also allows trying
different network deployments and checking the tradeoff between different topolo-
gies (described by the parameters Nsu−sn and Nsn−wbs ) and the resulting delay.

Acknowledgement

This work was supported by National Funds through the FCT-MCTES (Por-
tuguese Foundation for Science and Technology) and by ERDF (European Re-
gional Development Fund) through COMPETE (Operational Programme ’The-
matic Factors of Competitiveness’), within projects Ref. FCOMP-01-0124-FEDER-
022701 (CISTER), FCOMP-01-0124-FEDER-012988 (SENODs) and FCOMP-
01-0124-FEDER-020312 (SMARTSKIN).


References

 1. Google. Google’s Green Data Centers : Network POP Case Study.
 2. Tom Brey, Pamela Lembke, Joe Prisco, Ken Abbott, Dominic Cortese, Kerry
    Hazelrigg, Jim Larson, Stan Shaffer, Travis North, and Tommy Darby (Texas
    Instruments). Case study: The ROI of cooling system energy efficiency upgrades.
 3. Amir Meir Michael and Michael Paleczny. Load Balancing Tasks in a Data Center
    Based on Pressure Differential Needed for Cooling Servers, 2012.
 4. TC ASHRAE. 2011 thermal guidelines for data processing environments expanded
    data center classes and usage guidance. ASHRAE, pages 1–45, 2011.
 5. Luca Parolini, Bruno Sinopoli, and Bruce H. Krogh. Reducing data center energy
    consumption via coordinated cooling and load management. In Proceedings of
    the 2008 conference on Power aware computing and systems, HotPower’08, pages
    14–14, Berkeley, CA, USA, 2008. USENIX Association.
 6. Rongliang Zhou, Zhikui Wang, Cullen E. Bash, and Alan McReynolds. Data center
    cooling management and analysis – a model based approach. In 28 Annual Semi-
    conductor Thermal Measurement, Modeling and Management Symposium (SEMI-
    THERM 2012), San Jose, California, USA, March 2012.
 7. Pat Bohrer, Elmootazbellah N. Elnozahy, Tom Keller, Michael Kistler, Charles Le-
    furgy, Chandler McDowell, and Ram Rajamony. Power aware computing. chapter
    The case for power management in web servers, pages 261–289. Kluwer Academic
    Publishers, Norwell, MA, USA, 2002.
 8. Tibor Horvath, Tarek Abdelzaher, Kevin Skadron, and Xue Liu. Dynamic volt-
    age scaling in multitier web servers with end-to-end delay control. IEEE Trans.
    Comput., 56(4):444–458, April 2007.
 9. Ruibin Xu, Dakai Zhu, Cosmin Rusu, Rami Melhem, and Daniel Mossé. Energy-
    efficient policies for embedded clusters. In Proceedings of the 2005 ACM SIG-
    PLAN/SIGBED conference on Languages, compilers, and tools for embedded sys-
    tems, LCTES ’05, pages 1–10, New York, NY, USA, 2005. ACM.
10. David Meisner, Brian T. Gold, and Thomas F. Wenisch. Powernap: eliminating
    server idle power. In Proceedings of the 14th international conference on Archi-
    tectural support for programming languages and operating systems, ASPLOS ’09,
    pages 205–216, New York, NY, USA, 2009. ACM.
11. Shengquan Wang, Jian-Jia Chen, Jun Liu, and Xue Liu. Power saving design for
    servers under response time constraint. In Proceedings of the 2010 22nd Euromicro
    Conference on Real-Time Systems, ECRTS ’10, pages 123–132, Washington, DC,
    USA, 2010. IEEE Computer Society.

12. Jeffrey Rambo and Yogendra Joshi. Modeling of data center airflow and heat
    transfer: State of the art and future trends. Distrib. Parallel Databases, 21(2-
    3):193–225, June 2007.
13. Chieh-Jan Mike Liang, Jie Liu, Liqian Luo, Andreas Terzis, and Feng Zhao. Rac-
    net: a high-fidelity data center sensing network. In Proceedings of the 7th ACM
    Conference on Embedded Networked Sensor Systems, SenSys ’09, pages 15–28, New
    York, NY, USA, 2009. ACM.
14. Beat Weiss, Hong Linh Truong, Wolfgang Schott, Thomas Scherer, Clemens Lom-
    briser, and Pierre Chevillat. Wireless sensor network for continuously monitoring
    temperatures in data centers. IBM RZ 3807, 2011.
15. R. R. Schmidt, E. E. Cruz, and M. Iyengar. Challenges of data center thermal
    management. IBM Journal of Research and Development, 49(4.5):709–723, July
    2005.
16. Modbus over serial line - specification & implementation guide - v1.0, February
    2002. http://www.modbus.org/docs/Modbus_over_serial_line_V1.pdf.
17. B. Latré, P. De Mil, I. Moerman, B. Dhoedt, P. Demeester, and N. Van Dierdonck.
    Throughput and delay analysis of unslotted IEEE 802.15.4. JNW, 1(1):20–28,
    2006.
18. Measuring effective capacity of IEEE 802.15.4 beaconless mode, volume 1, 2006.
19. IEEE. IEEE standard for information technology - telecommunications and infor-
    mation exchange between systems - local and metropolitan area networks - specific
    requirements - part 15.4: Wireless medium access control (MAC) and physical layer
    (PHY) specifications for low rate wireless personal area networks (LR-WPANs),
    October 2003.
20. Chipcon. CC2420 datasheet. http://www.chipcon.com/files/CC2420_Data_
    Sheet_1_3.pdf.