SciPost Submission Page
Why space must be quantised on a different scale to matter
by Matthew J. Lake
This is not the latest submitted version.
This Submission thread is now published as SciPost Phys. Proc. 4, 014 (2021)
Submission summary
As Contributors:  Matthew J. Lake 
Preprint link:  scipost_202009_00001v2 
Date submitted:  2021-02-03 11:07 
Submitted by:  Lake, Matthew J. 
Submitted to:  SciPost Physics Proceedings 
Proceedings issue:  4th International Conference on Holography, String Theory and Discrete Approach in Hanoi 
Academic field:  Physics 
Specialties: 

Approaches:  Theoretical, Phenomenological 
Abstract
The scale of quantum mechanical effects in matter is set by Planck's constant, $\hbar$. This represents the quantisation scale for material objects. In this article, we give a simple argument why the quantisation scale for space, and hence for gravity, cannot be equal to $\hbar$. Indeed, assuming a single quantisation scale for both matter and geometry leads to the `worst prediction in physics', namely, the huge difference between the observed and predicted vacuum energies. Conversely, assuming a different quantum of action for geometry, $\beta \neq \hbar$, allows us to recover the observed density of the Universe. Thus, by measuring its present-day expansion, we may in principle determine, empirically, the scale at which the geometric degrees of freedom must be quantised.
Current status: Published
Author comments upon resubmission
Please find attached the revised manuscript of scipost_202009_00001v1. I have amended the text in order to address the referee's comments and a detailed list of all changes is given below. These mostly take the form of footnotes, in order to preserve the essay-style flow of the original draft. I thank the referee for taking the time to read through the manuscript carefully and giving feedback. I hope that these changes have satisfactorily addressed his / her remaining concerns.
Best wishes,
Matt
List of changes
• A footnote has been added at the end of the last sentence on pg 1 clarifying the meaning of the term ‘quantisation scale’.
• A footnote has been added at the end of the sentence below Eq. (3.1). This is to clarify the way in which whole functions, $g(x'-x)$, may be associated with single points, $x$, as well as how the model defines `points' in both the quantum and classical regimes.
• A footnote was added at the end of the sentence following Eq. (3.3). This is to clarify the precise meaning of the term ‘quantum geometry wave’.
• The final sentence of the Conclusions has been changed from “These include the proposal that the observed vacuum energy is related to the quantisation scale of space itself \cite{Lake:2019oaz,Lake:2019nmn}.” to “These include the proposal that the observed vacuum energy, and the present-day accelerated expansion of the universe that it drives, are related to the quantum properties of spacetime \cite{Lake:2019oaz,Lake:2019nmn}. In this model, a measurement of the dark energy density constitutes a de facto measurement of the geometry quantisation scale, $\beta$, fixing its value to $\beta \simeq \hbar \times 10^{-61}$.” Although this is clearly a postdiction rather than a prediction of the model, it provides a concrete link between observations and the smeared-space model parameters, in response to the referee’s final point.
• Two new references have been added, which appear as refs. [5] and [23] in the new draft.
Submission & Refereeing History
Published as SciPost Phys. Proc. 4, 014 (2021)
Reports on this Submission
Anonymous Report 2 on 2021-03-29 (Invited Report)
Cite as: Anonymous, Report on arXiv:scipost_202009_00001v2, delivered 2021-03-29, doi: 10.21468/SciPost.Report.2738
Strengths
1. Deals with an important problem in theoretical physics.
Weaknesses
1. The main claim does not logically follow from an argument as presented.
2. Some statements lack rigorous explanation.
3. Certain similar approaches in the literature are not mentioned.
4. Some important issues are not addressed.
Report
In the submitted manuscript the author suggests a new physical scale where geometry is quantized and proposes an uncertainty relation that involves this scale and explains the cosmological constant problem.
The idea of a quantization scale for spacetime which has no a priori relation to $\hbar$ and a possible coarse-graining of spacetime at high energy is certainly not new. One can find rigorous arguments and specific models for instance in Phys. Lett. B 331 (1994) 39-44 by Doplicher, Fredenhagen and Roberts or Annals Phys. 219 (1992) 187-198 by Madore. The author presents a similar suggestion, however it is not supported by rigorous arguments or a particular model that can realise it. The title of the manuscript claims that space must be quantised with a new scale, but the argumentation is that this must be so due to the smallness of the vacuum energy. It is not clear how the cosmological constant problem necessarily leads to this conclusion. In the body of the paper, the only explanation is that “we have no a priori reason to believe that space must be quantized on the same scale […]”. Although this is not incorrect, and indeed it has been suggested many decades ago, it does not follow as a logical conclusion from the cosmological constant problem.
Moreover, the lack of a specific model in the manuscript raises questions such as what is the precise mathematical description of the “smeared space”, how does classical geometry emerge, and whether Lorentz invariance is lost. None of these important issues is addressed. Based on the above, I cannot recommend the submitted manuscript for publication in its present form.
Requested changes
1. The main claim of the paper should be scaled down, already in the title and also in the main text. There is no rigorous argument establishing that a new quantization scale "must" exist. The proposal should be presented as a possibility, not as a certain conclusion.
2. Mention and comparison with previous similar proposals is required, such as the ones of Doplicher, Fredenhagen and Roberts, and Madore.
3. The author should explain how classical space(time) emerges, presumably in the limit $\beta \to 0$.
4. Previous approaches to the quantization of space at a new scale have addressed the question of doing this in a Lorentz invariant way. The author is asked to explain whether Lorentz invariance is lost or not, and if it does how does the proposal avoid being ruled out.
5. It would be advisable to support the proposal with a particular model/theory that can implement the modified uncertainty relation (4). Heisenberg's uncertainty relation leads to a noncommutative phase space; is there a similar noncommutative space or some other specific geometric picture that underlies the proposal of the paper?
Anonymous Report 1 on 2021-02-10 (Invited Report)
Cite as: Anonymous, Report on arXiv:scipost_202009_00001v2, delivered 2021-02-10, doi: 10.21468/SciPost.Report.2536
Strengths
1. New interpretation of the quantization process
2. Development of a nonlocal quantum theoretical approach
3. Cosmological applications, related to the interpretation of dark energy.
Weaknesses
1. More physical applications and tests are necessary to support the main idea.
Report
The author has improved the initial version of the manuscript, and hence I think that the present version is suitable for publication in SciPost.
Author: Matthew J. Lake on 2021-04-22 [id 1375]
(in reply to Report 2 on 2021-03-29) Reply to the Referee Report for `Why space must be quantised on a different scale to matter' [scipost_202009_00001v2]
I thank the referee for their comments, and for drawing several references to my attention, which I was not previously aware of. I have cited these in the updated manuscript, together with some additional related literature. Below, I provide detailed replies to the points raised in the report. However, first, let me say that, as a contribution to a conference proceedings, my original draft was written with two important aims in mind:
1. To provide an accurate written summary of the presentation I actually gave at the 4th International Conference on Holography in Hanoi, and
2. To avoid undue technical detail in the text itself, which would be inappropriate for a proceedings article. Instead, I aimed to cite the relevant (already published) works, in which the technical discussion and mathematical details of the model are contained.
For these reasons, I have not made major changes to the manuscript. Instead, I address the questions raised by the referee, in depth, in this reply letter, and have added only short notes to the text to highlight the references where the relevant technical details can be found. For clarity, the referee's questions are given in quotation marks and my responses are written in normal type.
Report:
"In the submitted manuscript the author suggests a new physical scale where geometry is quantized and proposes an uncertainty relation that involves this scale and explains the cosmological constant problem.
"The idea of a quantization scale for spacetime which has no a priori relation to $\hbar$ and a possible coarse-graining of spacetime at high energy is certainly not new. One can find rigorous arguments and specific models for instance in Phys. Lett. B 331 (1994) 39-44 by Doplicher, Fredenhagen and Roberts or Annals Phys. 219 (1992) 187-198 by Madore."
I have read the works cited by the referee in detail, and cannot find any reference to a second quantisation scale. In the works by Madore and others, uncertainty relations for spatial coordinates are obtained by introducing noncommutative geometry (NCG) in the position space representation, e.g., via an X-X commutator of the form $[X^i,X^j] = \sigma^2 \, \delta^{ij}$. Here, $\sigma$ is a constant with dimensions $[L]$ or, equivalently, $[M]^{-1}$ (if $\hbar = c = 1$). It is important to recognise that this does not constitute a second quantisation scale, which must have units of action, $[L][M]$. To obtain a new quantisation scale from NCG models, one must also introduce a similar relation in the momentum space representation, e.g., $[P_i,P_j] = \tilde{\sigma}^2 \, \delta_{ij}$, where $\tilde{\sigma}$ has dimensions $[M]$, or equivalently $[L]^{-1}$. It is then straightforward to see that, setting
$(\Delta X^i)_{\rm min} \simeq \sigma$,
where $\sigma = l_{\rm Pl}$ is the Planck length, and
$(\Delta P_j)_{\rm min} \simeq \tilde{\sigma}$,
where $\tilde{\sigma} \simeq m_{\rm dS}$ and $m_{\rm dS} \simeq \hbar\sqrt{\Lambda}$ is the de Sitter mass, the new quantisation scale is $\beta \simeq \sigma \tilde{\sigma} \simeq \hbar\sqrt{\rho_{\Lambda}/\rho_{\rm Pl}}$. This is exactly Eq. (6) in the present text.
The studies by Doplicher, Fredenhagen and Roberts, and by Madore, cited above, certainly do not do this, and, to the best of my knowledge, neither do any of the subsequent related works. I am not sure why this is, and I confess that I am not an expert on NCG, but I could imagine that there are significant technical barriers to the consistent implementation of such a model.
However, if such a model were to be selfconsistently constructed, it should be stressed that, since the XX and PP commutators above do not refer to the position and momentum of material particles, but, instead, to delocalised, or `smeared out' spatial points, the new quantisation scale $\beta \simeq \sigma \tilde{\sigma}$ need not have any a priori relation with $\hbar$. It would, instead, represent the quantisation scale for the spatial background on which canonical quantum matter propagates, which is much more reminiscent of our model than conventional NCG theories. Despite this, there are still many differences, which are discussed below in response to the referee's other questions.
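As a quick sanity check, the estimate $\beta \simeq \sigma\tilde{\sigma} \simeq \hbar\sqrt{\rho_{\Lambda}/\rho_{\rm Pl}}$ can be evaluated numerically. This is a back-of-the-envelope sketch, not a calculation from the manuscript: the inputs are standard SI values of $\hbar$, $c$, $G$ and $\Lambda$, and the identifications $\sigma \simeq l_{\rm Pl}$, $\tilde{\sigma} \simeq m_{\rm dS}$ are those made above.

```python
# Order-of-magnitude check of beta ~ sigma * sigma_tilde ~ hbar * sqrt(rho_Lambda / rho_Pl),
# in SI units. All numerical inputs are standard values of the fundamental constants.
import math

hbar = 1.055e-34   # reduced Planck constant, J s
c    = 2.998e8     # speed of light, m / s
G    = 6.674e-11   # Newton's constant, m^3 / (kg s^2)
Lam  = 1.1e-52     # cosmological constant, 1 / m^2

l_Pl = math.sqrt(hbar * G / c**3)    # Planck length, ~1.6e-35 m
m_dS = (hbar / c) * math.sqrt(Lam)   # de Sitter mass, ~3.7e-69 kg

sigma       = l_Pl                   # minimum position uncertainty, m
sigma_tilde = m_dS * c               # minimum momentum uncertainty, kg m / s
beta        = sigma * sigma_tilde    # candidate quantum of action for geometry, J s

rho_Lam = Lam * c**4 / (8 * math.pi * G)   # observed vacuum energy density, J / m^3
rho_Pl  = hbar * c / l_Pl**4               # Planck energy density, J / m^3

print(f"beta / hbar            ~ {beta / hbar:.1e}")
print(f"sqrt(rho_Lam / rho_Pl) ~ {math.sqrt(rho_Lam / rho_Pl):.1e}")
```

The two estimates, $\beta/\hbar$ and $\sqrt{\rho_{\Lambda}/\rho_{\rm Pl}}$, agree to within a factor of a few, both of order $10^{-61}$, as required by Eq. (6).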
"The author presents a similar suggestion, however it is not supported by rigorous arguments or a particular model that can realise it."
The formalism of the model is presented in a series of published works, which are cited at the relevant points in the manuscript, see refs. [16], [17], [18] and [19]. (Please note that the book chapter, [19], has not yet been formally published, because the hardback copy is still in press. However, the manuscript available on the arXiv has been accepted, in its current form, after a rigorous review process by Springer. I append the letter of acceptance to the end of this reply letter.)
"The title of the manuscript claims that space must be quantised with a new scale, but the argumentation is that this must be so due to the smallness of the vacuum energy. It is not clear how the cosmological constant problem necessarily leads to this conclusion."
The argument for this claim is based on the following, very general, observations:
1. If the Planck length is a fundamental length scale in nature, then classical spatial points are in some way `delocalised' in the quantum theory of gravity. This introduces metric fluctuations over Planck-scale volumes and, hence, a minimum observable position $(\Delta X^i)_{\rm min}$ of the order of the Planck length. (This is certainly not a new idea and is a mainstay of most approaches to quantum gravity.)
2. A Planck-scale fluctuation of the spacetime metric, over a volume $\sim (\Delta X)_{\rm min}^3$, must carry an associated momentum, which we label $\Delta P$. (The italicised word `must' here is important.)
3. The associated energy density is of order $\rho \simeq \Delta P/(\Delta X)_{\rm min}^3$. We stress that this is the energy density induced by quantum fluctuations of the spacetime metric, i.e., the energy density of the quantum spatial background, not the energy density of the canonical quantum matter that propagates within it.
4. On dimensional grounds, $\Delta P \simeq \kappa/l_{\rm Pl}$, where $\kappa$ has dimensions of action. Clearly, if $\kappa = \hbar$, i.e., if space is quantised on the same scale as matter, then $\rho \simeq \rho_{\rm Pl}$. Therefore, since the observed vacuum density is much lower than the Planck density, $\kappa \ll \hbar$.
Essentially, we argue that this conclusion is logically inherent in all previous works on quantum gravity (or at least those which assume Planck-scale metric fluctuations) but, for some reason, has never been fully explored in the literature. In Sections 3 and 4, we argue that setting $\kappa \equiv \beta \simeq \sigma\tilde{\sigma} \simeq \hbar\sqrt{\rho_{\Lambda}/\rho_{\rm Pl}}$, i.e., setting $\sigma \simeq l_{\rm Pl}$ and $\tilde{\sigma} \simeq m_{\rm dS}$, allows us to recover the observed vacuum energy density, $\rho_{\Lambda} \simeq \Lambda/G$, though it should be noted that, in order to realise this, we are required to make several other assumptions. The additional assumptions required are discussed, explicitly, in Section 4, but we make no strong claims for their acceptance.
As a happy by-product, we also recover the GUP, EUP and EGUP (the generalised, extended, and extended generalised uncertainty principles), previously proposed in the quantum gravity literature. Equivalently, we may say that the generalised uncertainty relations naturally motivate a specific vacuum energy model, which saturates the EGUP, and that both arise, ultimately, from the existence of a second quantisation scale for space(time). Note that the EUP and EGUP, as presented in the existing literature, are recovered only when $\kappa \equiv \beta \propto \sqrt{\Lambda}$.
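For orientation, the schematic forms of these relations, as they commonly appear in the quantum gravity literature (standard textbook-style expressions, not the exact relations derived in [16,18,19]; $\alpha$ and $\eta$ denote model-dependent numerical parameters), are:

```latex
% Schematic forms only: l_Pl is the Planck length and
% l_dS ~ Lambda^{-1/2} is the de Sitter length scale.
\Delta X \, \Delta P \gtrsim \frac{\hbar}{2}\left[1 + \alpha \, \frac{l_{\rm Pl}^2 (\Delta P)^2}{\hbar^2}\right] \quad {\rm (GUP)} , \qquad
\Delta X \, \Delta P \gtrsim \frac{\hbar}{2}\left[1 + \eta \, \frac{(\Delta X)^2}{l_{\rm dS}^2}\right] \quad {\rm (EUP)} ,
```

with the EGUP combining both correction terms. Since the EUP correction scales as $\Lambda (\Delta X)^2$, recovering it is precisely what ties the second quantisation scale to $\sqrt{\Lambda}$.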
Nonetheless, the observation that $\rho_{\rm vac} \ll \rho_{\rm Pl}$, plus the existence of a Planck length cutoff for spatial wavelengths, requires $\kappa \ll \hbar$, regardless of whether this is identified with the observed value of the cosmological constant.
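The dimensional argument in points 1-4 can also be sketched numerically. The snippet below is an illustrative sketch, not a calculation from the manuscript; the observed vacuum density is approximated by the round value $5.3 \times 10^{-10} \, {\rm J\,m^{-3}}$. It shows that setting $\kappa = \hbar$ reproduces the Planck density, roughly $10^{123}$ times the observed value, so $\kappa \ll \hbar$ is forced.

```python
# Points 1-4 in numbers (SI units): a Planck-scale metric fluctuation carries
# momentum Delta_P ~ kappa / l_Pl, giving an energy density
# rho ~ Delta_P * c / l_Pl^3. With kappa = hbar this is the Planck density.
import math

hbar = 1.055e-34   # J s
c    = 2.998e8     # m / s
G    = 6.674e-11   # m^3 / (kg s^2)

l_Pl = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m

def rho(kappa):
    """Energy density of a Planck-scale fluctuation with quantum of action kappa."""
    delta_P = kappa / l_Pl          # momentum of the fluctuation, kg m / s
    return delta_P * c / l_Pl**3    # energy density, J / m^3

rho_Lam = 5.3e-10                   # observed vacuum energy density, J / m^3 (round value)

print(f"rho(hbar)           ~ {rho(hbar):.1e} J/m^3")    # ~5e113, the Planck density
print(f"rho(hbar) / rho_Lam ~ {rho(hbar) / rho_Lam:.1e}")  # ~1e123
```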
"In the body of the paper, the only explanation is that `we have no a priori reason to believe that space must be quantized on the same scale […]'."
As stated above, this is not the only explanation given in the body of the paper. The argument given in Sections 2 and 3 of the text was condensed into points 1-4 in the previous response.
"Although this is not incorrect, and indeed it has been suggested many decades ago, it does not follow as a logical conclusion from the cosmological constant problem."
I completely agree with this statement, but am not aware of any specific references in which a second quantisation scale (quantum of action) for space was suggested. If such a suggestion was made decades ago, then it is certainly an oversight on my part not to have cited these works! I would be most grateful if the referee could point me in their direction, and will also cite them in any future work on this topic.
"Moreover, the lack of a specific model in the manuscript raises questions such as what is the precise mathematical description of the “smeared space”, how does classical geometry emerge, and whether Lorentz invariance is lost. None of these important issues is addressed. Based on the above, I cannot recommend the submitted manuscript for publication in its present form."
The results quoted in the manuscript are based on a very specific model, with a rigorously defined mathematical formalism, which was developed in a series of papers, [16], [17], [18] and [19]. All the questions raised by the referee are explicitly addressed therein, e.g.,
`how does classical geometry emerge?' (see ref. [16] Section 3.1, below Eq. (47), and Section 4.1),
`whether Lorentz invariance is lost?' (see ref. [19] Section 2). These points are discussed further below, but were not treated in detail in the present manuscript, since this would have been inappropriate for a conference proceedings. (To be honest, I am not at all sure whether the referees were made aware, by SciPost, that the text is a contribution to a conference proceedings and was never intended as a research article. Therefore, it contains only a very brief summary of already published work. I stress that the omission of mathematical details is by design and that said details can be found in the works cited in the text. Needless to say, if the referee was misinformed, by SciPost, as to the nature and purpose of the article, this is in no way his / her fault.)
Requested changes:
"1. The main claim of the paper should be scaled down, already in the title and also in the main text. There is no rigorous argument establishing that a new quantization scale "must" exist. The proposal should be presented as a possibility, not as a certain conclusion."
To be honest, the title of the paper was not meant to be taken too literally. I completely agree with the referee that such a title would be completely inappropriate for a research article, but I considered it within the scope of artistic license for the title of a conference talk. This was deliberately `provocative', to some degree, since it was intended to grab, and hopefully keep, the attention of the audience. The title of the present manuscript is exactly the title of the talk I gave at the conference in Hanoi, because this is already a matter of public record. (The conference program has long since been published.)
For this reason, with the referee's permission, I would like to keep the present title. However, I have no strong feelings on this either way, and, if he / she feels that another title would be more appropriate, I am happy to comply with this request. In this case, I would suggest either
`Why space could be quantised on a different scale to matter', or
`Should space be quantised on a different scale to matter?'.
Corresponding changes of language could also be made throughout the text, but I refer again to points 1-4 above, which I regard as a strong argument in favour of a new quantisation scale, $\kappa \ll \hbar$. A more detailed mathematical argument for its existence is given in [16], [18], [19]. See, for example, the treatment of delocalised (`smeared') momentum measurements, in a universe with a finite de Sitter horizon, given in [16], Section 3.1.3.
"2. Mention and comparison with previous similar proposals is required, such as the ones of Doplicher, Fredenhagen and Roberts, and Madore."
These have been added to the text. Once again, I thank the referee for bringing them to my attention.
"3. The author should explain how classical space(time) emerges, presumably in the limit $\beta \rightarrow 0$."
The emergence of the canonical quantum limit, that is, of quantum matter on a classical space(time) background, is dealt with explicitly in [16] (see Section 3.1, below Eq. (47)). The key point is that, although there are three ways to take the limit $\beta \rightarrow 0$, two of them lead to inconsistencies. Since $\beta \simeq \sigma\tilde{\sigma}$, where $\sigma$ sets the smearing scale for the position space representation and $\tilde{\sigma}$ sets the smearing scale for momentum space, setting $\beta \rightarrow 0$ by taking either $\sigma > 0$, $\tilde{\sigma} \rightarrow 0$ or $\sigma \rightarrow 0$, $\tilde{\sigma} > 0$ leads to a smearing of one representation while the other remains purely classical. In these cases, the mathematical formalism of the theory breaks down. Therefore, we are required to set $\sigma \rightarrow 0$ and $\tilde{\sigma} \rightarrow 0$, simultaneously. In this limit, we recover the predictions of canonical quantum theory [16]. A short note has been added to the text to highlight this point.
"4. Previous approaches to the quantization of space at a new scale have addressed the question of doing this in a Lorentz invariant way. The author is asked to explain whether Lorentz invariance is lost or not, and if it does how does the proposal avoid being ruled out."
This is an important point, which relates to one of the model's strongest advantages. It is well known that mainstream approaches to GUP models, based on modified commutation relations (including those derived from NCG) suffer from severe pathologies. These include:
• Violation of the equivalence principle.
• Violation of Lorentz invariance in the relativistic limit.
• The reference frame dependence of the `minimum' length.
• The inability to construct sensible multi-particle states, known as the `soccer ball problem'.
Ultimately, all of these problems, including the breaking of Lorentz invariance, arise from the breaking of the shift-isometry subgroup of the Poincaré group, which also forms a subgroup of the Galilean group in the non-relativistic limit. (See [19] and references therein, including the reviews of GUP literature by Hossenfelder [Living Reviews in Relativity volume 16, Article number: 2 (2013)] and Tawfik and Diab [Int. J. Mod. Phys. D 23 (2014) 1430025].) A major advantage of our model is that it generates generalised uncertainty relations (GURs) without introducing modified commutators of the form suggested in the existing literature. Instead, the canonical X-P commutator is simply rescaled, such that $\hbar \rightarrow \hbar + \beta$. (See also Bishop et al., Phys. Lett. B, Volume 816, 10 May 2021, 136265, for work in a similar direction.) This preserves translation invariance, even in the presence of a minimum length, since the reference frame of the background space is quantised, but not discretised.
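Schematically, and suppressing additional terms that appear in the full relations of [16,19], the rescaling described above can be written as:

```latex
% Sketch only: the canonical commutator is rescaled, not deformed, so the
% right-hand side remains a c-number and the translation generators are
% unmodified.
[\hat{X}^i, \hat{P}_j] = i(\hbar + \beta)\,\delta^i_{\ j} \quad \Longrightarrow \quad
\Delta X^i \, \Delta P_j \geq \frac{\hbar + \beta}{2}\,\delta^i_{\ j} \, ,
```

Because no position- or momentum-dependent terms are added to the commutator, no preferred frame is singled out, which is the sense in which the shift isometries survive.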
Therefore, although we have not constructed an explicitly Lorentz invariant extension of the model, it is clear that a major obstacle to its implementation, which exists in virtually all other GUP models, has been removed by the careful construction of our model in the non-relativistic limit. I completely agree with the referee that this point is important, but I did not have time to address it adequately in my conference talk, even in the Q&A. Therefore, since it is dealt with at length in already published work, I regarded it as beyond the scope of the present summary.
"5. It would be advisable to support the proposal with a particular model/theory that can implement the modified uncertainty relation (4). Heisenberg's uncertainty relation leads to a noncommutative phase space; is there a similar noncommutative space or some other specific geometric picture that underlies the proposal of the paper?"
As explained above, the model is very particular, and is based on a rigorous mathematical formalism that was developed in a series of already published works, [16], [17], [18] and [19]. Below, I outline some of its essential features, including its differences with, and similarities to, previous models in the quantum gravity literature. I hope that this will clear up any remaining confusion.
With regard to this point, there is an unfortunate confusion of terminology, which, however, we were unable to avoid, even with the help of a thesaurus. In the review by Madore [arXiv:gr-qc/9709002], he states that points are somehow `smeared out' or `fuzzy'. It is important to recognise that points in our model are delocalised (smeared), not in the sense of NCG, but in the way that a quantum reference frame (QRF) is delocalised, or smeared, with respect to its classical counterpart. (See, for example, the work by Giacomini, Castro-Ruiz and Brukner [Nature Communications volume 10, Article number: 494 (2019)].) Importantly, this allows us to derive GURs, incorporating the minimum length and momentum scales, $(\Delta X^i)_{\rm min}$ of the order of the Planck length and $(\Delta P_j)_{\rm min}$ of the order of the de Sitter mass, even in the presence of commuting coordinates, i.e.,
$[X^i,X^j] = 0$, $[P_i,P_j] = 0$
[16,18,19]. The underlying geometric picture is illustrated, heuristically, in Figure 1 of ref. [16], for a toy onedimensional universe.
More specifically, our model represents a non-trivial two-parameter generalisation (including $\hbar$ and $\beta$) of the formalism derived by Giacomini et al. This leads to a non-trivial generalisation of the canonical de Broglie relation (Eq. (38) in [16]), which, however, remains consistent with a generalised Galilean and/or Poincaré invariance. This generalisation consists in transforming space(time) `points' into superpositions thereof, translations into superpositions of translations, and Galilean or Lorentz velocity boosts into superpositions of boosts, etc. (Note, also, that trivial two-parameter generalisations of the form $p = \hbar k \mapsto p = \hbar k, \, p' = \beta k'$, i.e., models that treat quantum spacetime like quantum matter, but with a different quantisation constant, are already ruled out by well-known no-go theorems. See [19] and references therein for further discussion.) Although it was arrived at independently, and via somewhat different arguments, it is straightforward to verify that the formalism of the smeared-space model [16,18,19] reduces, in the limit $\beta \rightarrow \hbar$, to the QRF formalism published in Nature Communications.
Strengths:
"1. Deals with an important problem in theoretical physics."
Weaknesses:
"1. The main claim does not logically follow from an argument as presented."
I cannot agree with this statement. While there is, of course, reasonable doubt over the validity of any scientific model, the results presented in the manuscript are supported by a rigorous mathematical formalism, which was developed logically from its underlying assumptions in a series of published works, [16], [17], [18] and [19]. (There is also a new work, submitted as an invited contribution to a special issue of Quantum Reports, which further develops the implications of the model for quantum information theory, see [Quantum Rep. 2021, 3(1), 196-227].)
"2. Some statements lack rigorous explanation."
I completely agree! However, I stress again that this was by design, not by omission, and that the rigorous explanations and arguments for the results presented in the manuscript are given in [16], [17], [18] and [19]. In total, these works comprise nearly 150 pages of material, so that many compromises had to be made when preparing the talk summary.
"3. Certain similar approaches in the literature are not mentioned."
These have been added, together with a brief note explaining their similarities to, and important differences from, the model presented here.
"4. Some important issues are not addressed."
I hope that the comments above have now addressed these issues, to the referee's satisfaction.
Attachment:
Response_to_2nd_Referee_Report_scipost_202009_00001v2_Letter_2AFUbZA.pdf