# Scaling of disorder operator at deconfined quantum criticality

### Submission summary

• Authors (as Contributors): Meng Cheng

Submission information
• Date submitted: 2022-06-20 16:27
• Submitted by: Cheng, Meng
• Submitted to: SciPost Physics

Ontological classification
• Specialties: Condensed Matter Physics - Theory
• Approaches: Theoretical, Computational

### Abstract

We study scaling behavior of the disorder parameter, defined as the expectation value of a symmetry transformation applied to a finite region, at the deconfined quantum critical point in (2+1)$d$ in the $J$-$Q_3$ model via large-scale quantum Monte Carlo simulations. We show that the disorder parameter for U(1) spin rotation symmetry exhibits perimeter scaling with a logarithmic correction associated with sharp corners of the region, as generally expected for a conformally-invariant critical point. However, for large rotation angle the universal coefficient of the logarithmic corner correction becomes negative, which is not allowed in any unitary conformal field theory. We also extract the current central charge from the small rotation angle scaling, whose value is much smaller than that of the free theory.

###### Current status:
Has been resubmitted

We would like to thank the referees for their careful reading of our paper. Referee 1 gives strong support: he/she "find[s] that all the general criteria for publication in SciPost Phys. are already met by the manuscript or will be met after my requests for changes are implemented, …, since this work opens a new perspective and approach to study the DQC scenario, calling for follow-up work from both computational and field-theoretical perspectives." Referee 2 fully appreciates "the significance of this work" and expects that our results "will stimulate more related research and put steps further of understanding the nature of DQCP. In addition, the method of calculating the disorder operator in QMC itself might also be a useful tool for studying other models and phenomena, and this work would be a good way to broadcast this method".

Both referees also give valuable suggestions for improving our presentation, which have been incorporated into the main text. We have also added new sections in the Supplemental Materials to address the referees' comments. With these changes, the narrative of our manuscript has been greatly improved, and we sincerely appreciate the help from both respected referees.

### List of changes

1. Responding to the referees' comments, we added Fig. 4, Fig. 7(b), and Fig. 9(b), and modified Fig. 9(c).

2. Turned the SM into regular appendices.

3. Corrected typos and updated references.

### Submission & Refereeing History

Resubmission scipost_202208_00008v1 on 3 August 2022

Resubmission 2106.01380v3 on 20 June 2022
Submission 2106.01380v2 on 1 January 2022

## Reports on this Submission

### Anonymous Report 1 on 2022-7-5 (Invited Report)

• Cite as: Anonymous, Report on arXiv:2106.01380v3, delivered 2022-07-05, doi: 10.21468/SciPost.Report.5306

### Report

I thank the authors for considering the suggested changes to their manuscript. However, regarding two of my previous points, I still request changes:

With regards to my request 2, I don't consider the new Fig. 4 to be useful to understand the quoted values of chi (btw: is this chi/DOF?). The authors can improve on this by simply plotting the relative differences so that one can indeed see the deviations between the data and the red fit line on the scale of the error bars.

With regards to my request 6, the authors have not responded appropriately: I asked them to show in the former Fig. 5 (now Fig. 9) the data for a naive measurement, i.e., without considering the even/odd character. Please include such a measurement as well. Panel 10c is not related to this point.

### Requested changes

See report.

• validity: -
• significance: -
• originality: -
• clarity: -
• formatting: -
• grammar: -

### Author:  Meng Cheng  on 2022-08-03  [id 2709]

(in reply to Report 1 on 2022-07-05)

We thank the referee for his/her useful comments and suggestions, and have implemented the requested changes. Below are our responses to the comments:

Requested changes 1: With regards to my request 2, I don't consider the new Fig. 4 to be useful to understand the quoted values of chi (btw: is this chi/DOF?). The authors can improve on this by simply plotting the relative differences so that one can indeed see the deviations between the data and the red fit line on the scale of the error bars.

Reply 1: Thanks for the suggestion; yes, it is chi-square/DOF. We have updated Fig. 4 and added the calculation of the fitting deviations $\Delta_{1(2)}(l)=(X_M(l)-f_{1(2)}(l))/\delta_{X_M(l)}$, where $\delta_{X_M(l)}$ is the error bar of $X_M(l)$, as shown in Fig. 4(b) of the revised manuscript.
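As an illustration of this diagnostic, a minimal sketch of computing the error-bar-normalized deviations $\Delta(l)$ and the corresponding chi-square/DOF. The data arrays and fit function here are hypothetical placeholders, not the actual simulation data or fit forms used in the manuscript:

```python
import numpy as np

def normalized_residuals(l, x, err, fit):
    """Deviation of data from a fit in units of the error bar:
    Delta(l) = (X_M(l) - f(l)) / delta_{X_M(l)}."""
    return (x - fit(l)) / err

def chi2_per_dof(l, x, err, fit, n_params):
    """Chi-square per degree of freedom: sum of squared normalized
    residuals divided by (number of points - number of fit parameters)."""
    resid = normalized_residuals(l, x, err, fit)
    dof = len(x) - n_params
    return np.sum(resid**2) / dof

# Toy example: data lying exactly one error bar above a linear fit.
l = np.array([1.0, 2.0, 3.0, 4.0])
err = np.full_like(l, 0.1)
fit = lambda t: 2.0 * t + 1.0   # hypothetical two-parameter fit
x = fit(l) + err                 # each point off by one sigma
print(normalized_residuals(l, x, err, fit))  # → [1. 1. 1. 1.]
print(chi2_per_dof(l, x, err, fit, n_params=2))  # → 2.0
```

Plotting `normalized_residuals` against `l` is what makes deviations visible on the scale of the error bars, as the referee requested.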

Requested changes 2: With regards to my request 6, the authors have not responded appropriately: I asked them to show in the former Fig. 5 (now Fig. 9) the data for a naive measurement, i.e., without considering the even/odd character. Please include such a measurement as well. Panel 10c is not related to this point.

Reply 2: Thanks for the suggestion. We have updated Fig. 9 in the revised manuscript, adding the data without considering the even/odd character as Fig. 9(a), as suggested by the referee. Since the data for regions $M$ with odd and even boundaries exhibit intrinsic even-odd oscillations, we do not fit these data but only present them.

### Anonymous Report 2 on 2022-7-5 (Invited Report)

• Cite as: Anonymous, Report on arXiv:2106.01380v3, delivered 2022-07-05, doi: 10.21468/SciPost.Report.5335

### Report

I thank the authors for their clarifications and efforts to improve the data analysis.

I still think the discussion of error bars in the main text is too sparse. Results for s(theta) are quoted in the main text with error bars of the order of 2%. But the large deviations between curves in Figure 7 c and 9 c suggest that quantifying the error bar is a nontrivial issue, and it deserves discussion in the main text.

At present, the authors argue that the preferred protocol is safe because it agrees with a known result in the J1-J2 case. But in the absence of any direct extraction of an error bar to compare between protocols, (a) how do we know this agreement is not a coincidence, and (b) how do we know that the preferred protocol will also be safe in the J-Q3 case?

Can the accuracy of each protocol be quantified in some way from the data? The authors give a numerical error bar for one protocol. How is it obtained, and can it be compared between protocols to give a direct comparison?

If the authors cannot directly quantify the error bar then at least there should be some discussion in the main text of the fact that there are nontrivial discrepancies between different protocols.

Apart from this the other issues have been addressed, so when the authors have resolved the above to their satisfaction the paper can be published.

• validity: -
• significance: -
• originality: -
• clarity: -
• formatting: -
• grammar: -

### Author:  Meng Cheng  on 2022-08-03  [id 2710]

(in reply to Report 2 on 2022-07-05)

We thank the referee for his/her useful comments and suggestions. Below are our responses to the comments:

Comment 1: I still think the discussion of error bars in the main text is too sparse. Results for s(theta) are quoted in the main text with error bars of the order of $2\%$. But the large deviations between curves in Figure 7 c and 9 c suggest that quantifying the error bar is a nontrivial issue, and it deserves discussion in the main text.

Reply 1: We thank the referee for the insightful and professional suggestion. The different curves in Figs. 7(c) and 9(c) are meant to present the different protocols; we chose the most physical one of the three and quoted the error bars of $s(\theta)$ within that protocol. In the revised manuscript, we have added a sentence, as suggested by the referee, noting that quantifying the error bar of $s(\theta)$ is certainly a nontrivial issue: we use the error bar of the one fitting protocol that yields the correct values of $s(\theta)$ in the J1-J2 case (the other two do not even give the correct values there), while remaining aware of the discrepancies among the different schemes.

Comment 2: At present, the authors argue that the preferred protocol is safe because it agrees with a known result in the J1-J2 case. But in the absence of any direct extraction of an error bar to compare between protocols, (a) how do we know this agreement is not a coincidence, and (b) how do we know that the preferred protocol will also be safe in the J-Q3 case?

Can the accuracy of each protocol be quantified in some way from the data? The authors give a numerical error bar for one protocol. How is it obtained, and can it be compared between protocols to give a direct comparison?

If the authors cannot directly quantify the error bar then at least there should be some discussion in the main text of the fact that there are nontrivial discrepancies between different protocols.

Reply 2: We thank the referee again for the insightful comment. Different fitting protocols indeed yield different values of $s(\theta)$; we chose the most physical one, as it gives the correct results in the J1-J2 case, while the other two do not even agree with that known result. Following the referee's suggestion, we have added a sentence noting that quantifying the error bar of $s(\theta)$ is certainly a nontrivial issue: we use the error bar of the one fitting protocol that yields the correct values of $s(\theta)$ in the J1-J2 case (the other two do not even give the correct values there), while remaining aware of the discrepancies among the different schemes.