Neural Network Theory
Neural networks have been widely adopted across many domains of physics. Our focus, however, is the inverse question: how can neural networks benefit from physics? This line of inquiry holds significant potential for interpretable machine learning and explainable AI.
Learning with a neural network means algorithmically assimilating information into a model. Yet the process of neural learning remains poorly understood, because it is difficult to extract interpretable information from a network's complex inner workings. This lack of interpretability is a major hurdle in many AI applications, as it makes these powerful models hard to trust and debug.
Here's where the physics analogy comes in. Analogous to dynamical systems in statistical physics, describing neural network training involves an extensive number of degrees of freedom – essentially the vast number of connections and weights within the network. This complexity makes it challenging to understand the learning process in detail. However, physics offers a powerful approach: describing complex systems through macroscopic quantities.
To that end, we introduce Collective Variables for neural networks [1]. These variables capture the essential aspects of the learning process, tracing out the microscopic details (individual weights and connections) so that learning can be described and analyzed at every stage through a few macroscopic quantities. By applying this physics-inspired approach, we aim to construct a macroscopic theory of learning and thereby achieve a more interpretable understanding of neural networks. Working towards a solution to this central issue of machine learning, we want to build and optimize systems that are not only powerful but also understandable and trustworthy.
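To make the idea concrete, the sketch below reduces a layer's weight matrix to a handful of macroscopic observables (Frobenius norm, weight variance, spectral entropy of the singular-value spectrum). These particular observables are illustrative stand-ins chosen for simplicity; the collective variables studied in [1] are defined differently.

```python
import numpy as np

def collective_variables(weights):
    """Reduce a weight matrix to a few macroscopic observables.

    Toy observables for illustration: overall scale (Frobenius norm),
    spread of individual weights (variance), and the entropy of the
    normalized singular-value spectrum (how evenly the layer's
    'capacity' is distributed across directions).
    """
    w = np.asarray(weights, dtype=float)
    frobenius = np.linalg.norm(w)              # overall scale of the layer
    variance = w.var()                         # spread of the weights
    s = np.linalg.svd(w, compute_uv=False)     # singular values
    p = s**2 / np.sum(s**2)                    # normalized spectrum
    spectral_entropy = -np.sum(p * np.log(p + 1e-12))
    return frobenius, variance, spectral_entropy

# Tracking these numbers over training epochs gives a low-dimensional
# trace of the learning process, instead of thousands of raw weights.
rng = np.random.default_rng(0)
w0 = rng.normal(0.0, 1.0, size=(64, 32))       # an "untrained" layer
print(collective_variables(w0))
```

Logging such quantities per epoch turns the high-dimensional weight trajectory into a few interpretable curves, which is the spirit of the collective-variable approach.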
Responsible Person: Konstantin Nikolaou
[1] Tovey, S., et al. (2023). Machine Learning: Science and Technology, 4, 035040.
Quantum Machine Learning
Quantum Machine Learning (QML) is an emerging field that applies quantum-mechanical systems to challenging data-driven problems. The potential advantage lies in exploiting the exponentially large Hilbert space of quantum systems to perform sophisticated feature maps. This may allow quantum algorithms to capture complex data correlations that are intractable for classical computers.
Our work focuses on turning this theoretical power into reliable machine learning models. We are particularly interested in quantum time series models, i.e., models that can process sequential data. Our research into sequential data began with a comprehensive benchmark of variational QML [1], where we found that this class of models in many cases struggles to match classical baselines. This motivates our focus on Quantum Reservoir Computing (QRC). In contrast to variational QML, QRC uses the fixed, natural dynamics of a quantum system as its computational engine, which simplifies training to a single, gradient-free optimization step on a classical output layer. Our research on QRC introduced a scalable measurement method that uses random matrices to efficiently generate high-dimensional state representations of the quantum reservoir [2].
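The reservoir-computing training scheme can be sketched with a classical echo-state surrogate: a fixed, untrained reservoir generates high-dimensional states, and the only trained component is a linear readout fitted in a single gradient-free least-squares step. All parameters below are illustrative, and the reservoir here is classical; in QRC the analogous role is played by the dynamics of a quantum system.

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed, untrained "reservoir" dynamics (classical echo-state surrogate).
n_res, n_steps = 50, 300
W_res = rng.normal(size=(n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # stable dynamics
W_in = rng.normal(size=n_res)

# Task: one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0.0, 8.0 * np.pi, n_steps + 1))
target = u[1:]

# Drive the fixed reservoir with the input and collect its states.
x = np.zeros(n_res)
states = np.empty((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W_res @ x + W_in * u[t])
    states[t] = x

# Training = ONE gradient-free least-squares fit of the linear readout;
# the reservoir itself is never trained.
W_out, *_ = np.linalg.lstsq(states, target, rcond=None)
pred = states @ W_out
mse = np.mean((pred - target) ** 2)
print(f"readout MSE: {mse:.4f}")
```

The key design point carried over to QRC is that all expressivity comes from the fixed dynamics; only the cheap classical readout is optimized.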
Beyond time series models, we are interested in robust quantum computing and machine learning. This work is carried out in cooperation with the Institute for Systems Theory and Automatic Control. In [3,6], we derive worst-case fidelity bounds under hardware errors and provide practical guidelines to certify an algorithm's inherent resilience to physical hardware noise. Moreover, in [4,5], we quantify robustness against noisy input data using Lipschitz bounds and find that tunable robustness and improved generalization are possible through trainable data encodings.
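The idea behind Lipschitz-based robustness certificates can be sketched on a toy classical model: because tanh is 1-Lipschitz, the product of the layers' spectral norms upper-bounds how much the output can change under an input perturbation. This two-layer classical example is purely illustrative and does not reproduce the quantum constructions of [4,5].

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model f(x) = W2 tanh(W1 x). Since tanh is 1-Lipschitz,
# L = ||W2||_2 * ||W1||_2 is a certified Lipschitz bound:
# ||f(x + d) - f(x)|| <= L * ||d|| for every input perturbation d.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(4, 16))
L = np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)  # spectral norms

def f(x):
    return W2 @ np.tanh(W1 @ x)

# Empirical sanity check of the certificate on random perturbations.
x = rng.normal(size=8)
for _ in range(100):
    delta = 1e-2 * rng.normal(size=8)
    assert np.linalg.norm(f(x + delta) - f(x)) <= L * np.linalg.norm(delta)
print(f"certified Lipschitz bound L = {L:.2f}")
```

A small certified L means small input noise can only cause small output changes, which is the sense in which Lipschitz bounds quantify robustness against noisy input data.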
Research questions we are actively pursuing include:
- Quantum Resource Exploitation: How can quantum resources be efficiently exploited within QRC frameworks?
- Robustness and Generalization: What are the necessary conditions for robust and generalizable QRC?
- Information Extraction: How can relevant information be efficiently extracted from a quantum reservoir?
Contact: If you have further questions about our research, please contact us. We are also seeking motivated students with a strong background in quantum physics and/or quantum computing for Bachelor's or Master's projects. If you are interested in joining our team, please feel free to reach out.
Responsible Person: Tobias Fellner
[1] Fellner, T., Kreplin, D., Tovey, S., & Holm, C. (2025). Quantum vs. classical: A comprehensive benchmark study for predicting time series with variational quantum machine learning. arXiv preprint arXiv:2504.12416.
[2] Tovey, S., Fellner, T., Holm, C., & Spannowsky, M. (2025). Generating quantum reservoir state representations with random matrices. Machine Learning: Science and Technology, 6(1), 015068.
[3] Berberich, J., Fellner, T., Kosut, R. L., & Holm, C. (2025). Robustness of quantum algorithms: Worst-case fidelity bounds and implications for design. arXiv preprint arXiv:2509.08481.
[4] Berberich, J., Fellner, T., & Holm, C. (2025). The interplay of robustness and generalization in quantum machine learning. arXiv preprint arXiv:2506.08455.
[5] Berberich, J., Fink, D., Pranjić, D., Tutschku, C., & Holm, C. (2024). Training robust and generalizable quantum models. Physical Review Research, 6(4), 043326.
[6] Berberich, J., Fink, D., & Holm, C. (2024). Robustness of quantum algorithms against coherent control errors. Physical Review A, 109(1), 012417.
Machine-Learned Interatomic Potentials
To be published soon.
Reinforcement Learning
To be published soon.
Konstantin Nikolaou
PhD Student
Tobias Fellner
PhD Student