=Paper= {{Paper |id=Vol-2507/443-447-paper-82 |storemode=property |title=Design Challenges of the CMS High Granular Calorimeter Level 1 Trigger |pdfUrl=https://ceur-ws.org/Vol-2507/443-447-paper-82.pdf |volume=Vol-2507 |authors=Vito Palladino }} ==Design Challenges of the CMS High Granular Calorimeter Level 1 Trigger== https://ceur-ws.org/Vol-2507/443-447-paper-82.pdf
      Proceedings of the 27th International Symposium Nuclear Electronics and Computing (NEC’2019)
                         Budva, Becici, Montenegro, September 30 – October 4, 2019



   DESIGN CHALLENGES OF THE CMS HIGH GRANULAR
           CALORIMETER LEVEL 1 TRIGGER
                                            V. Palladino1​

                            On Behalf of the CMS Collaboration
       1​
            Imperial College London, Prince Consort Road, London, SW7 2BW, United Kingdom

                                     E-mail: Vito.Palladino@cern.ch


The high luminosity (HL) LHC will pose significant detector challenges for radiation tolerance and
event pile-up, especially for forward calorimetry. This will provide a benchmark for future hadron
colliders. The CMS experiment has chosen a novel high granularity calorimeter (HGCAL) for the
forward region as part of its planned Phase 2 upgrade for the HL-LHC. Based largely on silicon
sensors, the HGCAL features unprecedented transverse and longitudinal readout segmentation which
will be exploited in the upgraded Level 1 (L1) trigger system. The high channel granularity results in
around one million trigger channels in total, to be compared with the 2000 trigger channels in the
endcaps of the current detector. This presents a significant challenge in terms of data manipulation and
processing for the trigger. The high luminosity will result in an average of 140 interactions per bunch
crossing, and with it a higher rate of endcap background that the trigger algorithms must mitigate.
Three-dimensional reconstruction of the HGCAL clusters in events with high hit rates is also a more
complex computational problem for the trigger than the two-dimensional reconstruction in the current
CMS calorimeter trigger. The status of the trigger architecture and design, as well as the concepts for
the algorithms needed in order to tackle these major issues and their impact on trigger object
performance, will be presented.

Keywords: HL-LHC, HGCAL, Trigger, CMS.



                                                             Copyright © 2019 for this paper by its authors.
                     Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).







1. Introduction
        With the end of the Large Hadron Collider (LHC) Phase-1 in 2023, the accelerator will undergo
a luminosity upgrade (the high-luminosity LHC, or HL-LHC) [1]. Phase-2 of the physics programme
will see the pile-up increase from ~50 to 140-200 interactions per bunch crossing, reflecting the
luminosity increase from ~2×10^34 to 5-7×10^34 cm^-2 s^-1. The current CMS detector [2] is not designed
to withstand these levels of radiation and pile-up. For this reason, a significant detector upgrade is also
foreseen. In particular, the radiation dose in the endcap regions has imposed a complete redesign of the
endcap calorimeters. The high-granularity calorimeter (HGCAL) has been chosen as the solution for the
Phase-2 upgrade of CMS.


2. The high-granularity calorimeter
         The HGCAL [3] combines electromagnetic (CE-E) and hadronic (CE-H) calorimeters in one
detector. It is a sampling calorimeter whose active layers adopt a fully-silicon technology in the CE-E
and a hybrid silicon-scintillator technology in the CE-H. The silicon is needed to achieve high
radiation tolerance (up to 10 MGy after 3 ab^-1, see Fig. 1), while the scintillator reduces costs where
possible. The detector's high granularity (cell sizes of 0.5 and 1.2 cm^2, depending on the polar angle θ)
will improve jet separation at small angles. Each detector endcap will be formed by 28 layers
(CuW+Cu+Pb absorber, 25 X_0 and 1.3 λ_0) in the CE-E region and 22 layers (stainless-steel absorber,
8 λ_0) in the CE-H region (Fig. 1).




         Figure 1. Left-top: signal/noise ratio for one MIP in the scintillator at the end of the HGCAL
  life span. Left-bottom: lateral view of the HGCAL. Right: an HGCAL hybrid layer, with the
                 scintillator tiles in red and the silicon modules in green and yellow
        An HGCAL active layer is shown in Fig. 1. One of the main challenges in designing the
detector has been the one-order-of-magnitude variation of readout bandwidth across the detector. To
adapt to this variation, the detector modules are mounted on motherboards of variable size (see Fig.
2).


3. The trigger primitive generator
        The capability to trigger on complex objects in the forward region will be a key feature of the
CMS detector during the HL-LHC. One compelling physics signature is vector boson fusion
production of both standard-model bosons and new physics, which often produces two jets in the
endcaps. One of the key design goals has been to reduce the trigger bandwidth while limiting the loss
in physics performance. To achieve this, several key design choices have been studied and
implemented in simulation:
    ●    trigger data are read out with a coarser granularity, 1/4 or 1/9 of the full granularity
         (depending on the θ region),
    ●    the electromagnetic section (CE-E) contributes to the trigger using every other layer,
    ●    timing information is not transmitted to the trigger processor,
    ●    trigger-cell data are not sent to the backend unless a programmable energy threshold is
         reached,
    ●    in case of buffer overflows, the system is designed to drop data outside the latency window.
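As a rough illustration of two of the data-reduction choices above, the Python sketch below sums sensor cells into coarser trigger cells and zero-suppresses those below a programmable threshold. The 4:1 grouping, the toy energies and the threshold value are invented for illustration and are not the CMS values.

```python
# Illustrative sketch (not CMS firmware): sensor cells are summed into
# coarser trigger cells, and trigger cells below a programmable energy
# threshold are not sent to the backend.
# The 4:1 grouping, energies and threshold are assumptions for illustration.

def build_trigger_cells(cell_energies, group_size=4):
    """Sum consecutive groups of sensor cells into one trigger cell."""
    return [sum(cell_energies[i:i + group_size])
            for i in range(0, len(cell_energies), group_size)]

def zero_suppress(trigger_cells, threshold):
    """Keep only the (index, energy) pairs at or above the threshold."""
    return [(i, e) for i, e in enumerate(trigger_cells) if e >= threshold]

energies = [0.1, 0.2, 0.0, 0.1, 3.0, 2.5, 0.4, 0.1]  # toy sensor-cell energies
tcs = build_trigger_cells(energies)       # two coarse trigger cells
kept = zero_suppress(tcs, threshold=1.0)  # only the energetic cell survives
```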




          Figure 2. Left: example of a motherboard. Right: silicon modules grouped into motherboards
           (grey boxes); the bandwidth in Gbps is reported for both data (black) and trigger (red)
        The trigger primitive generator will implement its algorithms on the Serenity platform [4], a
flexible ATCA blade able to host different Xilinx FPGAs. This flexibility allows the collaboration to
match the hardware resources to each sub-system. Moreover, the Serenity community provides all the
common infrastructure, letting the HGCAL developers focus on detector-specific firmware
development. The trigger primitives will be a collection of three-dimensional clusters that the central
Level 1 trigger processor will use to implement particle-flow algorithms [5] in firmware.
         A general overview of the HGCAL trigger primitive generator is presented in Fig. 3. It
consists of two stages and is based on a time-multiplexed (TMUX) architecture (see Fig. 3) [6], a
fundamental choice that reduces the inter-FPGA data sharing, and hence bandwidth, by concentrating
the data from the same bunch crossing and an entire region of the detector (in the HGCAL case a 120°
sector) in a single processor unit.
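The round-robin idea behind time multiplexing can be sketched in a few lines of Python. The period of 18 bunch crossings and the 25 ns bunch spacing come from the text; equating the node count to the period is an illustrative simplification.

```python
# Hedged sketch of time-multiplexed (TMUX) routing: successive bunch crossings
# are distributed round-robin over the processing nodes, so each node receives
# the complete detector data for "its" crossings. Assumption for illustration:
# the number of nodes equals the TMUX period.

TMUX_PERIOD = 18  # bunch crossings per time-multiplexing period (from the text)
BX_NS = 25        # LHC bunch spacing in nanoseconds

def tmux_node(bx):
    """Index of the processing node that receives bunch crossing `bx`."""
    return bx % TMUX_PERIOD

# Each node then has a full period to absorb one event's data,
# matching the 450 ns quoted in the text.
processing_window_ns = TMUX_PERIOD * BX_NS
```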
        The detector upgrade will also increase the total allowed latency for the trigger path, from the
current 4 μs to 12.5 μs. This includes the central trigger processor latency and all the hardware
contributions. The total latency allocated to the HGCAL trigger primitive generator (TPG) is 5 μs. This
latency is the sum of the fixed contributions from the upstream electronics (e.g. front-end, concentrator
ASIC, SerDes and TMUX) and the contribution from data processing in the trigger firmware. The fixed
latency from the upstream electronics amounts to 2.2 μs, leaving ca. 2.8 μs for the TPG algorithms.
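The latency budget above reduces to simple arithmetic; this minimal Python fragment just encodes the numbers quoted in the text.

```python
# Latency budget from the text: of the 5 us allocated to the HGCAL TPG,
# 2.2 us is fixed by the upstream electronics (front-end, concentrator ASIC,
# SerDes, TMUX), leaving about 2.8 us for the trigger algorithms.

TOTAL_TPG_LATENCY_US = 5.0
FIXED_UPSTREAM_US = 2.2
algorithm_budget_us = TOTAL_TPG_LATENCY_US - FIXED_UPSTREAM_US  # ~2.8 us
```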
       Stage 1 receives data from the front-end electronics via low-power gigabit transceiver (lpGBT)
links [7]. This stage is implemented using Xilinx Kintex UltraScale+ FPGAs (KU15P). Each FPGA
collects data from 72 links and implements trigger-cell calibration, previous-bunch-crossing
correction (large energy deposits affect several consecutive bunch crossings) and time multiplexing.
        Stage 2 implements the trigger primitive generation. It is planned to use Xilinx Virtex
UltraScale+ FPGAs (VU7P). Each FPGA collects the data for the full depth of a 120° sector. The
current time-multiplexing period is set to 18 bunch crossings (450 ns), a design choice made to keep
the system flexible for future firmware updates.




         Figure 3. Left-top: conventional data flow, in which each FPGA collects data from all bunch
crossings. Left-bottom: TMUX architecture, in which each FPGA collects the data from one bunch
             crossing. Right: HGCAL trigger primitive generator (TPG) architecture
        Currently, the baseline Stage 2 algorithm adopts an imaging approach for the cluster
reconstruction. The algorithm is split into two logically separate steps: the first generates seeds that
are passed to the second, where the actual clusters are built.
         The seeds are generated using a histogram, built from the positions and energies of all the
trigger cells projected onto the (r/z, φ/z) plane (where z is the distance from the interaction point
along the beam axis and r the transverse distance from the beam line). Once all the trigger cells are
collected into the histogram, a Gaussian smearing is applied over all the histogram bins. This is
crucial to remove local fluctuations and to identify the local maxima that are then used as seeds. An
example of this procedure is illustrated in Fig. 5.
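A toy version of this seeding procedure, reduced to one dimension for brevity, might look as follows; the bin count, kernel width and input values are invented for illustration and are not the CMS parameters.

```python
# Illustrative 1-D sketch of the seeding step: fill a histogram with
# trigger-cell energies, smooth it with a Gaussian kernel to suppress local
# fluctuations, then take the local maxima as cluster seeds.
# All numerical choices below are assumptions for illustration.

import math

def fill_histogram(cells, nbins, lo, hi):
    """cells: list of (coordinate, energy) pairs, coordinates in [lo, hi)."""
    hist = [0.0] * nbins
    width = (hi - lo) / nbins
    for x, e in cells:
        b = min(int((x - lo) / width), nbins - 1)
        hist[b] += e
    return hist

def gaussian_smooth(hist, sigma=1.0, radius=2):
    """Convolve the histogram with a truncated, normalized Gaussian kernel."""
    kernel = [math.exp(-0.5 * (k / sigma) ** 2)
              for k in range(-radius, radius + 1)]
    norm = sum(kernel)
    out = []
    for i in range(len(hist)):
        acc = 0.0
        for k in range(-radius, radius + 1):
            if 0 <= i + k < len(hist):
                acc += hist[i + k] * kernel[k + radius]
        out.append(acc / norm)
    return out

def find_seeds(hist, threshold=0.0):
    """Local maxima above a threshold become cluster seeds."""
    return [i for i in range(1, len(hist) - 1)
            if hist[i] > threshold
            and hist[i] >= hist[i - 1] and hist[i] > hist[i + 1]]

cells = [(0.21, 1.0), (0.31, 5.0), (0.41, 1.0)]  # (coordinate, energy)
hist = fill_histogram(cells, nbins=10, lo=0.0, hi=1.0)
seeds = find_seeds(gaussian_smooth(hist))        # the peak bin becomes the seed
```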




          Figure 4: Left: Serenity board configuration for the HGCAL Stage 1 TPG; black lines denote
   physical links. The board will collect data from 144 lpGBT links (72 per FPGA) at 10 Gbps and
transmit calibrated and time-multiplexed data to Stage 2 (right) via 54 links at 16 Gbps, 3 for each
 Stage 2 board. Centre: an example of a Serenity ATCA board; clearly visible is the interposer
 technology that allows the FPGA to be replaced (top half of the picture). The bottom half of the
  picture shows an FPGA mounted on the interposer. Right: Serenity board configuration for the
HGCAL Stage 2 TPG, which receives data from all the Stage 1 boards via 72 links at 16 Gbps and
                           transmits the primitives to the central correlator.
       The second step consists of collecting the trigger cells around each seed, within a
programmable radius in the (r/z, φ/z) plane.
        Finally, the three-dimensional clusters are formed and the relevant information is extracted
(e.g. position, energy and shape).
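This gathering step can be sketched as follows; the coordinates, the radius and the dictionary output format are illustrative assumptions, not the CMS implementation.

```python
# Hedged sketch of the clustering step: gather trigger cells within a
# programmable radius of each seed in the (r/z, phi/z) plane and compute the
# cluster energy and energy-weighted position. All values are toy inputs.

import math

def cluster_around_seeds(cells, seeds, radius):
    """cells: list of (x, y, energy); seeds: list of (x, y) seed positions."""
    clusters = []
    for sx, sy in seeds:
        members = [(x, y, e) for x, y, e in cells
                   if math.hypot(x - sx, y - sy) <= radius]
        energy = sum(e for _, _, e in members)
        if energy > 0:
            cx = sum(x * e for x, _, e in members) / energy
            cy = sum(y * e for _, y, e in members) / energy
            clusters.append({"energy": energy, "position": (cx, cy)})
    return clusters

cells = [(0.0, 0.0, 2.0), (0.1, 0.0, 2.0), (5.0, 5.0, 1.0)]
clusters = cluster_around_seeds(cells, seeds=[(0.0, 0.0)], radius=1.0)
# The far-away cell is excluded; the cluster position is energy-weighted.
```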




  Figure 5: Example of seeding for the HGCAL three-dimensional clustering. Left: the histogram is
  filled with the trigger-cell energies and positions. Centre: Gaussian smearing is applied to remove
                   local fluctuations. Right: seeds are selected as the local maxima
4. Conclusions
         The HGCAL and its trigger primitive generator face new challenges dictated by the pile-up
levels, unprecedented at the LHC, and the radiation dose that the system must withstand. This note
has described the main problems faced and the solutions proposed. A careful study of the trigger path
and algorithms is underway to ensure the performance needed to meet the future challenges. An
important role has been played by the choice of the Serenity platform, which has allowed the
collaboration to tailor resources, and hence costs, to the specific problem. The hardware tests and the
firmware implementation have started.


References
[1]  G. Apollinari, O. Bruening, T. Nakamoto, L. Rossi, “High Luminosity Large Hadron Collider
HL-LHC”, ​CERN Yellow Report​ CERN-2015-005, pp.1-19.
[2]     The CMS Collaboration, “The CMS experiment at the CERN LHC”, ​JINST 3​ (2008) S08004.
[3]   The CMS Collaboration, “The Phase-2 Upgrade of the CMS Endcap Calorimeter”,
CERN-LHCC-2017-023​; ​CMS-TDR-019​.
[4]    G. Ardila et al., “Serenity: An ATCA prototyping platform for CMS Phase-2”, in proceedings of
"Topical Workshop on Electronics for Particle Physics", ​PoS(TWEPP2018)115​, DOI:
https://doi.org/10.22323/1.343.0115.
[5]   The CMS Collaboration, “Particle-flow reconstruction and global event description with the
CMS detector”, ​JINST 12​ (2017) P10003.
[6]    R. Frazier et al., “A demonstration of a Time Multiplexed Trigger for the CMS experiment”,
JINST 7​ (2012) C01060.
[7]    P. Moreira et al., “The lpGBT Status”, in Common ATLAS CMS Electronics Workshop for
SLHC, CERN, Geneva, Switzerland, March 2015.



