On Monday, 25 November 2024, the Next Generation Triggers (NextGen) 1st Technical Workshop commenced at CERN, setting the stage for three afternoons of in-depth discussions on innovative approaches to trigger systems for the High-Luminosity Large Hadron Collider (HL-LHC). The workshop, which ran from 25 to 27 November with over 100 participants every day, both in person and online, brought together leading experts to explore advancements in data analysis and trigger technology.
The event opened with a welcome address by Alberto Di Meglio, who expressed gratitude to all participants for their attendance and provided an overview of the project’s objectives. He highlighted the collaborative efforts across the various work packages and emphasized the potential impact of the initiative: “This is an ambitious project that has just begun, but with everyone’s collaboration, it has the potential to become a transformative program for the High-Luminosity LHC and beyond.”
The first afternoon of the workshop focused on two critical areas of the NextGen project: Infrastructure, algorithms, and theory (WP1) and Education and outreach (WP4).
Discussions around WP1 highlighted the development of advanced hardware and services for large-scale neural networks, fast inference techniques for LHC online systems, simulation frameworks to validate trigger reliability, and more. The presenters explored how optimized neural network algorithms can improve the deployment of hardware-specific solutions in real-time systems, ensuring faster and more accurate processing of experimental data. They also delved into quantum-inspired algorithms for Lattice Quantum Field Theory (LQFT) simulations, which aim to address increasingly complex physics scenarios.
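To make the fast-inference idea concrete, here is a minimal sketch of how a small neural network can be evaluated with integer-only arithmetic, the style of computation that maps well onto FPGA trigger hardware. Everything in it, from the network shapes and weights to the fixed-point format, is invented for illustration; production trigger deployments rely on dedicated toolchains rather than NumPy.

```python
import numpy as np

# Hypothetical example: fixed-point inference for a tiny MLP.
# All shapes, weights, and the fixed-point format are illustrative only.

FRAC_BITS = 4  # fractional bits: values are stored as integers * 2**-4

def quantize(x, frac_bits=FRAC_BITS):
    """Round a float array to the nearest representable fixed-point value."""
    return np.round(x * (1 << frac_bits)).astype(np.int32)

def fixed_point_dense(x_q, w_q, b_q, frac_bits=FRAC_BITS):
    """Integer-only dense layer with ReLU, as an FPGA kernel might compute it."""
    acc = x_q @ w_q + (b_q << frac_bits)   # products carry 2*frac_bits
    acc = acc >> frac_bits                 # rescale back to frac_bits
    return np.maximum(acc, 0)              # ReLU

rng = np.random.default_rng(0)
# Toy network: 8 calorimeter-style inputs -> 16 hidden units -> 1 trigger score
w1, b1 = rng.normal(size=(8, 16)), rng.normal(size=16)
w2, b2 = rng.normal(size=(16, 1)), rng.normal(size=1)

x = rng.normal(size=(1, 8))                # one candidate event
h_q = fixed_point_dense(quantize(x), quantize(w1), quantize(b1))
score = (h_q @ quantize(w2) + (quantize(b2) << FRAC_BITS)) >> FRAC_BITS
print("trigger score (fixed-point units):", score.ravel())
```

Replacing floating-point multiply-accumulates with shifted integer operations is one of the ingredients that makes microsecond-scale inference latencies achievable on trigger hardware.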
The talks further showcased how cutting-edge technologies, including heterogeneous computing architectures such as GPUs and FPGAs, are paving the way for more robust data collection and simulation capabilities. Additionally, there was an emphasis on enhancing infrastructure, such as dedicated clusters of interconnected GPUs, and on improving low-latency, high-bandwidth network interconnects. These advancements are intended not only to increase the robustness of experimental data collection but also to expand the scope of theoretical simulations, ultimately leading to greater efficiency and predictive power in high-energy physics research.
In parallel, WP4 emphasized the vital role of education and outreach in ensuring the long-term success of the NextGen project. This work package is dedicated to supporting the skills development of world-class high-energy physicists, engineers, and data scientists in close collaboration with academic and industry partners. Key initiatives include enabling exchanges where scientists and researchers come to CERN to work with project experts, as well as the creation of the annual STEAM Programme, a training programme designed to equip postgraduate students, Ph.D. scholars, and researchers with state-of-the-art computing and data science skills.
Alberto Pace, WP4 leader, underscored the significance of education within NextGen. “Education is a cornerstone of the NextGen project. By combining domain-specific knowledge of high-energy physics with expertise in data science, artificial intelligence, and hardware architectures, we ensure the future growth and impact of this ambitious initiative. The dissemination of project results from the third year onwards will be one of our key contributions to the broader scientific community”, he stated.
Click here to see all the presentations of Day 1.
Then came the second day, and it was time for the experiments to take over. On Tuesday, 26 November, discussions shifted to the low-level triggers of ATLAS and CMS, with a strong focus on optimizing the earliest stages of data acquisition. The first talks centered on optimizing real-time event selection and improving the Level-0 Muon Trigger, a key component for early data filtering with high precision. Presentations also highlighted advancements in high-throughput data collection and innovative techniques to enhance event reconstruction, ensuring that the trigger systems can efficiently manage the vast data volumes generated by the HL-LHC.
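As a toy illustration of what real-time event selection means at this level, the sketch below applies the kind of simple threshold cut a hardware muon trigger performs on every bunch crossing. The threshold, acceptance window, and event content are all invented for this example.

```python
import numpy as np

# Hypothetical sketch: the simplest form of low-level event selection.
# A hardware muon trigger applies fast cuts like this to every bunch
# crossing; the pT threshold and event content here are invented.

rng = np.random.default_rng(1)
PT_THRESHOLD_GEV = 20.0   # placeholder single-muon trigger threshold
MAX_ABS_ETA = 2.4         # placeholder acceptance window

# Toy events: each row is (leading-muon pT in GeV, |eta|)
events = np.column_stack([
    rng.exponential(scale=15.0, size=8),     # steeply falling pT spectrum
    rng.uniform(0.0, MAX_ABS_ETA, size=8),   # muon pseudorapidity
])

# Trigger decision: accept if the leading muon is hard and within acceptance
accept = (events[:, 0] > PT_THRESHOLD_GEV) & (events[:, 1] < MAX_ABS_ETA)
print("accepted events:", np.where(accept)[0])
print(f"accept fraction on this batch: {accept.mean():.0%}")
```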
After the break, the focus shifted to scouting and the integration of AI into low-level triggers. Topics included the use of Level-1 scouting to identify key events in real time, practical applications of AI for decision-making in the Level-1 trigger system, and the development of advanced data compression methods. AI-driven methods are being employed to improve the accuracy and speed of event selection, as well as to identify patterns and anomalies that might otherwise go undetected. By combining traditional approaches with advanced AI algorithms, the low-level triggers are being transformed into smarter, more adaptive systems. These advancements are crucial for meeting the HL-LHC’s ambitious physics goals, ensuring that the most meaningful data is captured for further analysis.
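One way such AI-driven anomaly spotting can work in principle: a small autoencoder is trained offline on ordinary collision events, and online an event whose reconstruction error is unusually large is flagged for closer scrutiny. The sketch below uses random stand-in weights in place of a trained model, so the architecture, threshold, and numbers are purely illustrative.

```python
import numpy as np

# Hypothetical sketch: autoencoder-style anomaly flagging at Level-1.
# The "trained" weights below are random stand-ins for illustration.

rng = np.random.default_rng(42)
N_FEATURES = 12    # e.g. coarse calorimeter sums fed to the trigger

# Stand-in encoder/decoder: linear 12 -> 4 -> 12 bottleneck
W_enc = rng.normal(scale=0.3, size=(N_FEATURES, 4))
W_dec = rng.normal(scale=0.3, size=(4, N_FEATURES))

def reconstruction_error(events):
    """Mean squared error between each event and its reconstruction."""
    latent = events @ W_enc        # compress through the bottleneck
    recon = latent @ W_dec         # expand back to the input space
    return np.mean((events - recon) ** 2, axis=1)

# Batch of incoming events: mostly typical, one deliberately distorted
events = rng.normal(size=(5, N_FEATURES))
events[3] *= 8.0                   # inject an "anomalous" event

errors = reconstruction_error(events)
threshold = np.percentile(errors, 75)   # placeholder; set offline in practice
print("errors:", np.round(errors, 2))
print("flagged as anomalous:", np.where(errors > threshold)[0])
```

The appeal of this approach is that the autoencoder never needs to be told what new physics looks like; anything it cannot reconstruct well is, by construction, unlike the bulk of the data it was trained on.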
If you want to delve deeper into a specific topic, you can access all the presentations of Day 2 by clicking here.
The third and final day, Wednesday, 27 November, turned the spotlight on high-level triggers (HLT) and software optimization, focusing on transforming data processing pipelines to meet the HL-LHC’s unprecedented demands. Discussions opened with innovative approaches to real-time reconstruction and optimizing data structures for heterogeneous platforms. Another key focus was the evolution of software systems for the HLT, including strategies for reducing raw data size, improving calibration processes, and developing scalable and efficient infrastructures. Presenters from both experiments emphasized the importance of optimizing software and hardware to manage data flow effectively, enabling greater precision in selecting physics events of interest.
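As a rough illustration of the kind of raw-data reduction discussed here, the sketch below combines two generic, widely used steps: zero suppression, which drops channels below a noise threshold, and precision truncation, which stores the survivors at reduced width. The threshold and simulated readout are invented for the example.

```python
import numpy as np

# Hypothetical sketch: two generic raw-data reduction steps.
# The noise threshold and readout values are invented for illustration.

rng = np.random.default_rng(7)
NOISE_THRESHOLD = 0.5   # placeholder; real thresholds come from calibration

# Simulated detector readout: 10_000 channels, mostly near-zero noise
readout = rng.exponential(scale=0.2, size=10_000)

# Step 1: zero suppression, keep only channels above threshold
mask = readout > NOISE_THRESHOLD
indices = np.nonzero(mask)[0].astype(np.uint16)   # surviving channel IDs

# Step 2: precision truncation, float64 -> float16 for the stored values
values = readout[mask].astype(np.float16)

original_bytes = readout.nbytes
reduced_bytes = indices.nbytes + values.nbytes
print(f"kept {mask.sum()} of {readout.size} channels")
print(f"size: {original_bytes} B -> {reduced_bytes} B "
      f"({reduced_bytes / original_bytes:.1%} of original)")
```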
The day concluded with a deep dive into Event Filter Tracking, Optimized Muon Trigger Selection, and the Common Tracking Event Filter Infrastructure. The Event Filter Muon Trigger Selection aims to utilize the extended coverage of the Level-0 muon trigger and the ACTS tracking infrastructure to reduce computing demands and enhance muon reconstruction precision. Meanwhile, the Common Tracking Event Filter Infrastructure is designed to integrate advanced tracking algorithms into ATLAS’s software ecosystem, leveraging ACTS to enable efficient use of accelerator hardware like GPUs and FPGAs. These developments are key to ensuring that ATLAS and CMS can effectively handle the immense data rates of the HL-LHC while retaining only the most valuable physics events for further analysis.
Interested in knowing more? You can access the full presentations of Day 3 by clicking here.
Throughout the workshop, participants engaged in lively discussions, exchanged ideas, and explored collaborative opportunities, all while addressing the challenges of advancing trigger systems to meet the HL-LHC’s demands.
A huge thank you to all speakers, participants, and organizers who made this event a success. This workshop was just the beginning of what promises to be a transformative journey for data analysis and trigger systems in high-energy physics!
If you want to explore any topic in more detail, all the recordings of the presentations are available here.