
SciPost Submission Page

Learning Coulomb Diamonds in Large Quantum Dot Arrays

by Oswin Krause, Anasua Chatterjee, Ferdinand Kuemmeth, Evert van Nieuwenburg

This Submission thread is now published as SciPost Phys. 13, 084 (2022).

Submission summary

Authors (as registered SciPost users): Oswin Krause
Submission information
Preprint Link: https://arxiv.org/abs/2205.01443v2  (pdf)
Code repository: https://github.com/Ulfgard/quantum_polytopes/releases/tag/V0.1
Data repository: https://erda.ku.dk/archives/88e993ae5f8ac761d6c2af16b6f4b953/published-archive.html
Date accepted: 2022-08-24
Date submitted: 2022-08-01 09:59
Submitted by: Krause, Oswin
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties:
  • Condensed Matter Physics - Computational
Approach: Computational

Abstract

We introduce an algorithm that finds the facets of Coulomb diamonds in quantum dot arrays. We simulate these arrays using the constant-interaction model and rely only on one-dimensional raster scans (rays) to learn a model of the device via regularized maximum likelihood estimation. This allows us to determine, for a given charge state of the device, which transitions exist and what their compensated gate voltages are. For smaller devices, the simulator can also be used to compute the exact boundaries of the Coulomb diamonds, which we use to verify that our algorithm correctly finds the vast majority of transitions with high precision.
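
As a rough, self-contained illustration of this setup (a sketch under our own assumptions, not the code from the repository linked above), the following Python snippet simulates a toy double dot with the constant-interaction model and rasters along a single ray in gate-voltage space until the ground-state charge configuration changes; the capacitance values and all function names are illustrative.

    # Minimal constant-interaction toy model and a 1D ray scan locating the
    # first charge transition. Illustrative only; values and names are made up.
    import itertools
    import numpy as np

    def energy(n, v, Cdd_inv, Cdg):
        """Constant-interaction energy (up to constants) of charge state n at gate voltages v."""
        q = n - Cdg @ v                      # gate-induced offset of the charge configuration
        return 0.5 * q @ Cdd_inv @ q

    def ground_state(v, Cdd_inv, Cdg, max_charge=2):
        """Brute-force ground state over a small box of candidate charge states."""
        states = itertools.product(range(max_charge + 1), repeat=Cdg.shape[0])
        return min(states, key=lambda n: energy(np.array(n), v, Cdd_inv, Cdg))

    def first_transition_on_ray(v0, direction, Cdd_inv, Cdg, t_max=1.0, steps=2000):
        """Raster along v(t) = v0 + t*direction; return (t, new_state) at the first ground-state change."""
        s0 = ground_state(v0, Cdd_inv, Cdg)
        for t in np.linspace(0.0, t_max, steps):
            s = ground_state(v0 + t * direction, Cdd_inv, Cdg)
            if s != s0:
                return t, s
        return None, s0

    # Toy double dot: inverse dot-dot and dot-gate capacitance matrices (arbitrary units).
    Cdd_inv = np.linalg.inv(np.array([[1.0, -0.2], [-0.2, 1.0]]))
    Cdg = np.array([[0.8, 0.1], [0.1, 0.8]])
    print(first_transition_on_ray(np.zeros(2), np.ones(2), Cdd_inv, Cdg))

In the algorithm summarized above, such rays are the measurements from which the device model is learned; the brute-force ground-state search here merely stands in for the simulated measurement.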

Author comments upon resubmission

We have tried to answer and integrate all reviewer comments. We further made a number of changes that we deemed necessary after integrating them, in particular to address some shortcomings we found ourselves.

List of changes

1. Added a paragraph in the introduction clarifying at which point of the tuning process our algorithm becomes relevant.
2. Added a paragraph in the introduction clarifying that the algorithm presented is theoretical and will not work on real devices.
3. Added a new Figure 4 with a step-by-step visualisation of the algorithm on a DQD, answering requested change 2 of referee report 1. Further added section III.B, which discusses the figure as part of the example.
4. As a result of 3., we realized we had not described part of the algorithm in sufficient detail: added paragraphs in section III and Appendix A.
5. Clarified the differences between the two uses of W_k in section III.A, answering requested change 3 of referee report 1.
6. Clarified the distinction between Lambda and Gamma in Section III.A.
7. As a result of answering a point of referee report 2, we added a new experiment as a variation of Scenario S4, in which we reduce the number of transitions we search for to the most relevant ones (the 8-neighbourhood of any given dot); a rough sketch of such a restricted transition set is given after this list. We renamed the old scenario S5 to S6 and added the new scenario as S5.
8. Added various paragraphs to the discussion: a better embedding of prior work [8,9], use of [8,9] to argue that the strong assumption of linearity is relevant in actual devices, a discussion of curvilinear devices, and an outline of future work and research directions that better embed our work in the general domain.
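
Regarding point 7, here is a rough sketch (under our own naming and encoding assumptions, not the code used for the experiments) of how such a restricted transition set could be built for a rectangular dot array, keeping only dot-to-reservoir transitions and inter-dot transitions between grid 8-neighbours:

    # Build a restricted transition set T for a rows x cols dot array:
    # dot<->reservoir transitions plus inter-dot transitions between 8-neighbours.
    # Encoding transitions as integer vectors is an illustrative choice.
    import numpy as np

    def restricted_transitions(rows, cols):
        n_dots = rows * cols
        idx = lambda r, c: r * cols + c
        T = []
        for r in range(rows):
            for c in range(cols):
                i = idx(r, c)
                t = np.zeros(n_dots, dtype=int)
                t[i] = 1                          # electron enters dot i from the reservoir
                T.extend([t, -t])
                for dr in (-1, 0, 1):             # 8-neighbourhood on the grid
                    for dc in (-1, 0, 1):
                        if dr == 0 and dc == 0:
                            continue
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            t = np.zeros(n_dots, dtype=int)
                            t[i], t[idx(rr, cc)] = 1, -1   # electron hops from the neighbour to dot i
                            T.append(t)
        return T

    print(len(restricted_transitions(3, 3)))      # far fewer candidates than all possible transitions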

Published as SciPost Phys. 13, 084 (2022)


Reports on this Submission

Report #1 by Anonymous (Referee 1) on 2022-8-2 (Invited Report)

Report

The authors replied to all my comments convincingly and I am happy to recommend publication.



Comments

Anonymous on 2022-08-14  [id 2726]

The authors replied and amended the manuscript accordingly. In particular, I feel that the added section B and the clearer explanation in section A of the methods helped a lot in understanding how the algorithm works.

On re-reading the manuscript, I have some minor questions: 1) In Figure 1, the capacitances C^DD_14, C^DD_25 and C^DD_36 are missing. 2) A space is missing in the Results section ("preservedand").

On a separate matter, couldn't the algorithm be improved by adding knowledge of how many facets should be present? E.g. for the (1,1) charge state in the example, the authors state that there should be 6 transitions. We also know that four of those are dot-to-reservoir transitions, which have a negative slope, while the remaining two are ICTs, which have a positive slope. Since the authors assume the constant-interaction model, they also know that the slopes of the dot-to-reservoir transitions to a particular QD are constant. Therefore, one could also add this information to the model, e.g. the slope of the (0,1)->(1,1) transition should be the same as that of (1,1)->(2,1).
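
(A minimal numerical check of this point, assuming the standard constant-interaction energy with toy capacitance matrices not taken from the manuscript: the gate-space normal of the facet between charge states n and n + t depends only on t, so e.g. the (0,1)->(1,1) and (1,1)->(2,1) facets indeed share the same slope, while only their offsets differ.)

    # Toy check: under the constant-interaction model the facet between n and n+t
    # has gate-space normal t @ Cdd_inv @ Cdg, independent of the charge state n.
    # Capacitance values are arbitrary illustrative numbers.
    import numpy as np

    Cdd_inv = np.linalg.inv(np.array([[1.0, -0.2], [-0.2, 1.0]]))
    Cdg = np.array([[0.8, 0.1], [0.1, 0.8]])

    t = np.array([1, 0])                      # add one electron to the first dot
    normal = t @ Cdd_inv @ Cdg                # slope of the facet in gate-voltage space
    for n in (np.array([0, 1]), np.array([1, 1])):            # (0,1)->(1,1) and (1,1)->(2,1)
        offset = t @ Cdd_inv @ n + 0.5 * t @ Cdd_inv @ t      # the offset moves with n ...
        print(normal, offset)                                  # ... but the normal (slope) is the same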

I am happy to recommend publication after correcting these two minor points.

Anonymous on 2022-08-14  [id 2727]

(in reply to Anonymous Comment on 2022-08-14 [id 2726])

Thanks for the reply!

1) I think we initially left them out because the figure was getting crowded. Would it be okay if we amended the caption? We will of course try to add those connections.

2) Thanks for pointing this out, it is fixed already.

Regarding the separate matter: We introduce the knowledge about which/how many facets should be present via T: if one assumes that certain facets are not present, then not adding them to the set includes that knowledge. In the new comparison between S_4 and S_5, we also showed that getting this set tight can improve runtime.

Your second idea is smart, and this is in fact what the algorithm is already doing. After learning gamma in P_0, we know the slopes of all dot->reservoir transitions; they are stored as the rows of gamma, where the ith row stores the slopes of the ith transition (e.g., the first row stores the slopes of the transition from state (0,0)->(0,1)).

Then, for the target polytope (1,1), if T includes the transition t=(1,0) (for the transition (1,1)->(1,2)), we have t·gamma = the first row of gamma. Similarly, for the transition (1,1) to (0,1) with t=(-1,0), we have t·gamma = minus the first row of gamma (the sign flip is needed because the classifier uses it to determine which side of the transition lies inside the polytope). Therefore, the algorithm knows all slopes of the dot->reservoir transitions from gamma.
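
As a small sketch of this bookkeeping (with made-up numbers and variable names, not the repository's code), the signed selection of rows described above is simply the product t·gamma:

    # Once gamma (one row per single-dot transition) is learned from P_0, the
    # normal of any facet listed in T is the signed row combination t @ gamma.
    # The numbers below are arbitrary; only the bookkeeping is illustrated.
    import numpy as np

    gamma = np.array([[1.0, 0.3],    # row 1: slopes of the first dot->reservoir transition
                      [0.3, 1.0]])   # row 2: slopes of the second dot->reservoir transition

    for t in (np.array([1, 0]),      # facet using the first transition: first row of gamma
              np.array([-1, 0]),     # reverse transition: minus the first row (sign flip)
              np.array([0, 1])):     # facet using the second transition: second row
        print(t, t @ gamma)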