Network Parameterisation and Activation Functions in Deep Learning

Martin Trimmel

Research output: Thesis › Doctoral thesis (compilation)



Deep learning, the study of multi-layered artificial neural networks, has received tremendous attention over the course of the last few years. Neural networks are now able to outperform humans in a growing variety of tasks and increasingly have an impact on our day-to-day lives. There is a wide range of potential directions to advance deep learning, two of which we investigate in this thesis:

(1) Among the key components of a network are its activation functions, which have a large impact on the overall mathematical form of the network. The \textit{first paper} studies generalisation of neural networks with rectified linear activation units (“ReLUs”). Such networks partition the input space into so-called linear regions, the maximal connected subsets on which the network is affine. In contrast to previous work, which focused on estimating the number of linear regions, we propose a tropical-algebra-based algorithm called TropEx that extracts the coefficients of the linear regions. Applied to fully-connected and convolutional neural networks, TropEx reveals significant differences between the linear regions of these network types. The \textit{second paper} proposes ERA, a parametric rational activation function that is learnable during network training. Although ERA adds only about ten parameters per layer, it significantly increases network expressivity and brings the performance of small architectures close to that of large ones. ERA outperforms previous activations when used in small architectures. This matters because neural networks keep growing larger, and the computational resources they require entail greater costs and electricity usage (which in turn increases the CO2 footprint).
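Since a ReLU network is affine on each linear region, the coefficients of the region containing a given input can be read off from the network's activation pattern. The NumPy sketch below illustrates this piecewise-affine structure for a toy 2-layer network; it is only an illustration of the concept, not the TropEx algorithm, and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 2-layer fully-connected ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2.
W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((2, 5)), rng.standard_normal(2)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_affine_map(x):
    """Coefficients (A, c) of the affine map y -> A @ y + c that the
    network computes on the linear region containing x."""
    # The activation pattern (which ReLUs are "on") is constant on a region.
    on = (W1 @ x + b1 > 0).astype(float)
    A = W2 @ (on[:, None] * W1)   # effective slope on this region
    c = W2 @ (on * b1) + b2       # effective offset on this region
    return A, c

x = rng.standard_normal(3)
A, c = local_affine_map(x)
# On the region containing x, the affine map agrees with the network:
assert np.allclose(forward(x), A @ x + c)
```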
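As a rough illustration of what a rational activation with about ten learnable scalars per layer can look like: the exact ERA parameterisation is defined in the second paper, so the ratio-of-polynomials form below is only an assumed stand-in with hypothetical coefficients.

```python
import numpy as np

class RationalActivation:
    """Generic learnable rational activation r(x) = P(x) / Q(x).

    P has degree 3 and Q is kept >= 1 by squaring its terms, giving
    roughly ten trainable scalars per layer. Illustrative only; the
    actual ERA parameterisation differs in detail.
    """

    def __init__(self):
        # In practice these would be trained jointly with the weights;
        # here they are just fixed example values.
        self.p = np.array([0.0, 1.0, 0.5, 0.0])  # numerator coefficients
        self.q = np.array([0.0, 0.5])            # denominator coefficients

    def __call__(self, x):
        num = sum(c * x**k for k, c in enumerate(self.p))
        den = 1.0 + sum((c * x**(k + 1)) ** 2 for k, c in enumerate(self.q))
        return num / den  # denominator >= 1, so no poles on the real line

act = RationalActivation()
y = act(np.linspace(-2.0, 2.0, 5))  # apply elementwise like any activation
```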

(2) For a given network architecture, each parameter configuration gives rise to a mathematical function. This functional realisation is far from unique: many different parameterisations can give rise to the same function. Changes to the parameterisation that leave the function unchanged are called symmetries. The \textit{third paper} theoretically studies and classifies all symmetries of 2-layer networks with ReLU activations. Finally, the \textit{fourth paper} studies the effect of network parameterisation on training. We provide a theoretical analysis of the effect that scaling layers has on the gradient updates. This motivates our proposed Cooling method, which automatically scales the network parameters during training. Cooling reduces the network's reliance on specific training tricks, in particular the use of a learning rate schedule.
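One well-known family of such symmetries comes from the positive homogeneity of ReLU: scaling a hidden unit's incoming weights by c > 0 and its outgoing weights by 1/c leaves the network function unchanged. A minimal NumPy check on a toy 2-layer network (names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2 = rng.standard_normal((2, 4))

def f(x, W1, b1, W2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0)

# relu(c * z) = c * relu(z) for c > 0, so rescaling unit i's incoming
# weights by c and its outgoing weights by 1/c is a symmetry.
c, i = 3.7, 2
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[i] *= c
b1s[i] *= c
W2s[:, i] /= c

x = rng.standard_normal(3)
# The two parameterisations realise the same function:
assert np.allclose(f(x, W1, b1, W2), f(x, W1s, b1s, W2s))
```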
Awarding institution
  • Matematik LTH
Supervisors
  • Sminchisescu, Cristian, supervisor
  • Heyden, Anders, assistant supervisor
  • Petzka, Henning, assistant supervisor
Award date: 16 May 2023
ISBN (print): 978-91-8039-572-4
ISBN (electronic): 978-91-8039-573-1
Status: Published - 16 May 2023

Bibliographic information

Defence details
Date: 2023-05-16
Time: 15:15
Place: Lecture Hall Hörmander, Centre of Mathematical Sciences, Sölvegatan 18 A, Faculty of Engineering LTH, Lund University, Lund. The dissertation will be live streamed, but part of the premises is to be excluded from the live stream.
External reviewer(s)
Name: Montúfar, Guido
Title: Prof.
Affiliation: UCLA, USA.

Subject classification (UKÄ)

  • Computer Vision and Robotics (Autonomous Systems)
  • Other Mathematics
  • Other Computer and Information Science


