SciPost Submission Page

Machine Learning and Quantum Devices

by Florian Marquardt

Submission summary

As Contributors: Florian Marquardt
arXiv link: https://arxiv.org/abs/2101.01759v2 (pdf)
Code repository: https://github.com/FlorianMarquardt/machine-learning-for-physicists
Date accepted: 2021-04-30
Date submitted: 2021-04-22 14:41
Submitted by: Marquardt, Florian
Submitted to: SciPost Physics Lecture Notes
Academic field: Physics
Specialties:
  • Artificial Intelligence
  • Neural and Evolutionary Computing
  • Atomic, Molecular and Optical Physics - Experiment
  • Atomic, Molecular and Optical Physics - Theory
  • Condensed Matter Physics - Experiment
  • Condensed Matter Physics - Theory
  • Quantum Physics
Approaches: Theoretical, Experimental, Computational

Abstract

These brief lecture notes cover the basics of neural networks and deep learning as well as their applications in the quantum domain, for physicists without prior knowledge. In the first part, we describe training using backpropagation, image classification, convolutional networks and autoencoders. The second part is about advanced techniques like reinforcement learning (for discovering control strategies), recurrent neural networks (for analyzing time traces), and Boltzmann machines (for learning probability distributions). In the third lecture, we discuss recent applications to quantum physics, with an emphasis on quantum information processing machines. Finally, the fourth lecture is devoted to the promise of using quantum effects to accelerate machine learning.

Published as SciPost Phys. Lect. Notes 29 (2021)



Author comments upon resubmission

Dear editor, dear referee,

I very much appreciate the feedback, and I apologize for the delay.

Best regards,
Florian Marquardt

List of changes

Replies to referee:

Thank you very much for your effort and good suggestions, and I apologize for the delay in implementing the revisions.

- I have now made explicit what 'more efficient' means for a deep network: “However, a representation by multiple hidden layers may be more efficient, i.e. would be able to reach a better approximation with the given overall number of neurons or use fewer neurons for a given approximation accuracy (sometimes this difference can be dramatic). Such a multi-layer network is sometimes called a “deep network”, especially if the number of layers becomes larger than a handful.”

- Thank you for the suggestion regarding figure 2. However, panel (d) also belongs to the CNN, and I did not find a good way to separate out the (small) autoencoder part or the small image-recognition part (a), given the format of figures in SciPost. So I left the figure as is.

- "The resulting training progress is shown in Fig. 4c." -> I have now corrected this; the correct reference is Fig. 4d.

- Fig. 5 caption corrected.

- Page 32, “*it* has as input available” -> revised; 'it' inserted.

- Ref 28 fixed.

- Ref 33 fixed.

Thank you again.

Submission & Refereeing History

Resubmission 2101.01759v2 on 22 April 2021
