Anwesha Bhattacharya
2019-11-22
<p>In Run 4 of the LHC, the extremely high luminosity is expected to generate an enormous pileup of up to 200 proton-proton collisions per bunch crossing. The detector has to be read out at 750 kHz with a maximum latency of 12.5 µs. In order to disentangle the energy of interest from pileup collisions, the upgraded CMS detector for Run 4 will feature a new High Granularity Calorimeter (HGCAL) with unprecedented lateral and longitudinal segmentation. The total number of channels read out into the Level-1 trigger processor will be of the order of 10<sup>6</sup>. To process this data within such a small latency, sophisticated algorithms need to be developed. In this report, we use machine learning techniques for electron-photon identification and energy estimation in the L1 trigger. The idea is to implement the architectures on FPGA boards whose inference is fast enough to cope with the requirements of the HGCAL.</p>
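To illustrate the kind of model the abstract alludes to, the following is a minimal sketch of a tiny multilayer perceptron evaluated with fixed-point arithmetic, in the spirit of FPGA-friendly inference. All layer sizes, the input features, the random weights, and the 8-bit quantisation scheme (`bits`, `frac`) are illustrative assumptions, not the actual HGCAL trigger architecture.

```python
import numpy as np

def quantize(x, bits=8, frac=6):
    """Round to a signed fixed-point grid with `frac` fractional bits.

    This mimics (very roughly) the bounded-precision arithmetic an FPGA
    implementation would use; the 8-bit width is an assumed example.
    """
    scale = 1 << frac
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return np.clip(np.round(x * scale), lo, hi) / scale

def relu(x):
    return np.maximum(x, 0.0)

def mlp_infer(x, layers):
    """Forward pass through dense layers with quantized weights.

    ReLU on hidden layers, linear output (e.g. a classification score
    or an energy estimate before calibration).
    """
    for i, (W, b) in enumerate(layers):
        x = x @ quantize(W) + quantize(b)
        if i < len(layers) - 1:
            x = relu(x)
    return x

rng = np.random.default_rng(0)
# Hypothetical shapes: 3 trigger-cluster features -> 4 hidden units -> 1 score.
layers = [
    (rng.normal(0.0, 0.5, (3, 4)), rng.normal(0.0, 0.1, 4)),
    (rng.normal(0.0, 0.5, (4, 1)), rng.normal(0.0, 0.1, 1)),
]
score = mlp_infer(np.array([[0.2, 1.5, 0.7]]), layers)
```

A network this small keeps the multiplier count low, which is what makes microsecond-scale inference plausible on trigger hardware; the real design-space trade-off (depth, width, bit precision versus latency and resource usage) is what the report studies.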
https://doi.org/10.5281/zenodo.3550707
oai:zenodo.org:3550707
Zenodo
https://zenodo.org/communities/cernopenlab
https://doi.org/10.5281/zenodo.3550706
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
CERN openlab
summer student programme
Using deep learning for particle identification and energy estimation in CMS HGCAL L1 trigger
info:eu-repo/semantics/report