SciPost Submission Page
Machine Learning and Quantum Devices
by Florian Marquardt
Submission summary
As Contributors:  Florian Marquardt 
Arxiv Link:  https://arxiv.org/abs/2101.01759v1 (pdf) 
Code repository:  https://github.com/FlorianMarquardt/machinelearningforphysicists 
Date submitted:  2021-01-12 14:02 
Submitted by:  Marquardt, Florian 
Submitted to:  SciPost Physics Lecture Notes 
Academic field:  Physics 
Specialties: 

Approaches:  Theoretical, Experimental, Computational 
Abstract
These brief lecture notes cover the basics of neural networks and deep learning as well as their applications in the quantum domain, for physicists without prior knowledge. In the first part, we describe training using backpropagation, image classification, convolutional networks and autoencoders. The second part is about advanced techniques like reinforcement learning (for discovering control strategies), recurrent neural networks (for analyzing time traces), and Boltzmann machines (for learning probability distributions). In the third lecture, we discuss first recent applications to quantum physics, with an emphasis on quantum information processing machines. Finally, the fourth lecture is devoted to the promise of using quantum effects to accelerate machine learning.
Current status:
Submission & Refereeing History
Reports on this Submission
Report 1 by Lukas Grünhaupt on 2021-02-20 (Invited Report)
Strengths
- Easily accessible, general-level introduction to machine learning geared towards physicists
- Relates/illustrates the concepts of neural networks via concepts in physics, e.g. cost function <-> Hamiltonian, ...
- Describes algorithms not only mathematically but also gives an intuitive picture
- Clearly explained, simple examples for different machine learning approaches
- Provides a starting point for literature on machine learning applications for quantum information devices, and on experiments with such devices harnessing machine learning techniques
- Outlook and discussion, for more advanced readers, of possible uses of quantum computing algorithms and devices for machine learning
Weaknesses
- Potentially too basic an introduction for readers with some prior knowledge of the field
- The introductory chapter could benefit from additional references for further reading, where more details of the presented algorithms are discussed
Report
The lecture notes "Machine Learning and Quantum Devices" comprise an easily accessible introduction to the field and nomenclature of machine learning; a representative overview of experiments and theoretical proposals that harness the capabilities of machine learning for quantum information devices to improve readout, control, and error detection/mitigation based on physical device parameters; and an outlook on potential applications where machine learning algorithms could benefit from quantum computing. As a reviewer without expertise in the field of machine learning, I find that the introductory chapter is well structured and provides an easily accessible introduction to the basics of machine learning. I especially liked the examples, which clearly illustrate the algorithmic approaches. The reference to a more complete set of lecture notes by the same author, as well as the link to a code repository with working example code, are highly appreciated.
The second part of the notes serves as an illustration of, and inspiration for, where and how machine learning can be applied to quantum information devices. In the final chapter, the author provides a balanced discussion of the promises, but also the pitfalls, of quantum computing for the field of machine learning.
Given the rather small number of lectures (four) dedicated to the topic, the author cannot, of course, dive deeply into each subject. However, thanks to the clear introductory chapter and the links to the literature, the lecture notes provide a good starting point and certainly an inspiration for the reader to look further into the topic of machine learning for quantum devices.
Requested changes
- First paragraph on page 5, "Interestingly, practically any nonlinear activation function will do the job, although some may be better for training than others. However, a representation by multiple hidden layers may be more efficient (defining a so-called “deep network”).": Could the author further expand on what "more efficient" means in this context?
- Figure 2: I think it would improve readability to move panels d)-h) into a separate figure and place it closer to the discussion of CNNs, perhaps on page 14/15.
- Page 21, 3rd paragraph: "The resulting training progress is shown in Fig. 4c." The reference appears to be misplaced, as Fig. 4c shows a walker reaching a target, which is only discussed later in the text.
- Figure 5 caption: d) needs to be changed to c).
- Page 32, 4th paragraph: "[...] In each step, has as input available the information from the encoder as well as the sequence of words [...]": it appears an 'it' is missing.
- Reference [28] does not seem to be formatted correctly.
- Ref. [33] has also been published in PRX: https://doi.org/10.1103/PhysRevX.10.011006