Neos: End-to-End-Optimised Summary Statistics for High Energy Physics

Nathan Simpson, Lukas Heinrich

Research output: Chapter in Book/Report/Conference proceeding › Paper in conference proceeding › peer-review

Abstract

The advent of deep learning has yielded powerful tools that automatically compute the gradients of a computation. This is because training a neural network amounts to iteratively updating its parameters with gradient descent to find the minimum of a loss function. Deep learning is thus a subset of a broader paradigm: a workflow with free parameters that is end-to-end optimisable, provided one can keep track of the gradients all the way through. This work introduces neos: an example implementation of this paradigm in the form of a fully differentiable high-energy physics workflow, capable of optimising a learnable summary statistic with respect to the expected sensitivity of an analysis. Doing this results in an optimisation process that is aware of the modelling and treatment of systematic uncertainties.
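As a rough illustration of the paradigm described in the abstract (not the neos API itself), the sketch below uses JAX to push gradients through a toy analysis pipeline: a small network produces a per-event summary, a sigmoid-smoothed histogram keeps the binning differentiable, and a crude per-bin s/sqrt(b) figure of merit stands in for the profile-likelihood-based expected sensitivity used in the paper. All function names, shapes, and the significance proxy are illustrative assumptions.

# Minimal sketch of end-to-end optimisation through a differentiable pipeline.
# Assumptions: toy Gaussian "signal"/"background" events, a sigmoid-kernel
# histogram, and a crude s/sqrt(b) proxy in place of the expected sensitivity.
import jax
import jax.numpy as jnp

def predict(params, x):
    # Two-layer MLP mapping each event (2 features) to a scalar summary in (0, 1).
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return jax.nn.sigmoid(h @ w2 + b2).squeeze(-1)

def soft_hist(values, edges, bandwidth=0.1):
    # Differentiable histogram: sigmoid-smoothed bin membership, summed per bin.
    lo, hi = edges[:-1], edges[1:]
    membership = (jax.nn.sigmoid((values[:, None] - lo) / bandwidth)
                  - jax.nn.sigmoid((values[:, None] - hi) / bandwidth))
    return membership.sum(axis=0)

def loss(params, sig, bkg, edges):
    # Negative toy significance; a stand-in for the negative expected sensitivity.
    s = soft_hist(predict(params, sig), edges)
    b = soft_hist(predict(params, bkg), edges)
    return -jnp.sum(s / jnp.sqrt(b + 1.0))

key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
params = [jax.random.normal(k1, (2, 8)) * 0.1, jnp.zeros(8),
          jax.random.normal(k2, (8, 1)) * 0.1, jnp.zeros(1)]
sig = jax.random.normal(k3, (500, 2)) + 1.0   # toy "signal" events
bkg = jax.random.normal(k4, (500, 2))         # toy "background" events
edges = jnp.linspace(0.0, 1.0, 5)             # 4 bins of the learned summary

grad_fn = jax.jit(jax.grad(loss))
for _ in range(100):  # plain gradient descent on the whole pipeline
    grads = grad_fn(params, sig, bkg, edges)
    params = [p - 0.05 * g for p, g in zip(params, grads)]

In the full workflow the paper describes, this proxy loss is replaced by a differentiable expected-sensitivity calculation derived from the statistical model, which is how the modelling and treatment of systematic uncertainties enter the gradient.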

Original language: English
Title of host publication: Journal of Physics: Conference Series
Volume: 2438
DOIs
Publication status: Published - 2023
Event: 20th International Workshop on Advanced Computing and Analysis Techniques in Physics Research, ACAT 2021 - Daejeon, Virtual, Korea, Republic of
Duration: 2021 Nov 29 - 2021 Dec 3

Conference

Conference: 20th International Workshop on Advanced Computing and Analysis Techniques in Physics Research, ACAT 2021
Country/Territory: Korea, Republic of
City: Daejeon, Virtual
Period: 2021/11/29 - 2021/12/03

Subject classification (UKÄ)

  • Computer Science
