Constraints on physical computers in holographic spacetimes



Introduction
Complexity theory deals with the power of mathematical models of computation. It is generally believed that these models capture the computational abilities of physical computers, but making this connection precise is difficult. For instance, considering a quantum circuit model, we may be tempted to equate circuit depth with the time needed to implement the computation on a physical computer. By assuming a bound on energy, that connection can be made precise via the Margolus-Levitin theorem [1]. For any given unitary, however, a Hamiltonian can be constructed which implements that unitary arbitrarily quickly, even at bounded energy [2]. This means that in this Hamiltonian model of computation, an energy bound doesn't suffice to relate computational and physical notions of time. Observations such as this one leave it unclear how to connect the limits of physical computers and mathematical models of computation.
In this article we make a preliminary step towards understanding the limits of physical computers. To consider the full set of constraints on physical computers, and the full physical setting that can be exploited by a computer, we consider computation in the context of quantum gravity. We work within the framework of AdS/CFT, which claims an equivalence between quantum gravity in asymptotically anti de Sitter (AdS) spaces and a purely quantum mechanical theory (a conformal field theory, the CFT) living at the boundary of that spacetime. Our main result is a construction of a family of unitaries that a computer operating inside of a black hole with entropy S_bh cannot perform, where the computation is on n qubits with log S_bh ≤ n ≪ S_bh and the family we construct is of size 2^{o(S_bh)}. Because n ≪ S_bh, the inputs to the computation do not themselves couple strongly to gravity. Instead, it must be the computation on these small inputs that is restricted.
While we are ultimately interested in the physical limits of computers in our universe, working within the context of the AdS/CFT correspondence gives us a precise framework for quantum gravity. As well, a fundamental observation in computer science is that the power of computers is robust to "reasonable" changes in the details of the computing model: classical computers can be described in terms of Turing machines, uniform circuits, etc., and the resources needed to solve a given computational problem will change only polynomially. Quantum computers are similarly robust. This robustness suggests understanding the power of computers in AdS is likely to yield insights that apply more broadly.
Naively, the AdS/CFT duality between a bulk quantum gravity theory and a quantum mechanical boundary theory suggests the power of computers in quantum gravity should be equivalent in some way to quantum computers. We can imagine simulating the CFT on a quantum computer, and thereby producing the outcomes of any computations run in the dual bulk picture. This approach is complicated, however, by the possibility that the map between the boundary CFT description and the bulk gravity description is exponentially complex [3][4][5][6]. Consequently, determining the result of the bulk computation from the boundary simulation may itself be highly complex, allowing for a discrepancy in efficiencies between the bulk and boundary. An intriguing observation is that this leaves open the possibility of a quantum gravity computer being much more powerful than a quantum computer [7].
In this work, we give a strategy to restrict bulk computation using the existence of the boundary quantum mechanical description. The crucial property of the bulk-to-boundary map we assume is state independence, which we have in AdS/CFT when reconstructing suitably small bulk subsystems. We also use that this map is isometric.^1

Figure 1: A two sided black hole, with systems a and p falling in from opposite sides. The state on P describes a unitary, which should be applied to the state on A.

The state independence of the bulk-to-boundary map allows us to relate bulk computation to programmable quantum processors, a well studied notion in quantum information theory. Using tools from functional analysis, we give a bound on the average case behaviour of programmable processors.
Beyond the quantum processor bound, we use additional input from quantum gravity: we assume that we cannot pass more than a black hole's area worth of qubits into the black hole (a special case of the covariant entropy bound), and we use that the boundary CFT has a "short" description.^2 To reach the strongest version of our result, we will also make an assumption that a computation which is forbidden from happening inside a black hole also cannot be implemented inside of a smaller one.
Before proceeding, we note that another strategy to constrain bulk computation using the boundary description was suggested in [9], and similar ideas appear in [10,11]. That strategy involves noting that bulk computations are supported, in a sense that can be made precise, by boundary entanglement. The finite entanglement between distant boundary subregions can then be used to place constraints on the size of inputs for some bulk computations, and it has been further suggested that better understanding of entanglement requirements in non-local computation may lead to computational constraints.

Summary of our thought experiment and result
The basic setting in which we constrain computation is shown in figure 1, where we consider a two sided black hole. A quantum system A is recorded into bulk degrees of freedom a and thrown into the black hole from the left asymptotic boundary, and a second system P is recorded into bulk degrees of freedom p and thrown in from the right. System A initially holds a state |ψ⟩_A, and P holds a description of a unitary that needs to be performed, along with any computing device to be used to perform it. We will impose that the computer is built from a much smaller number of degrees of freedom than the black hole we are throwing it into, so that n_p ≪ S_bh.^3 Otherwise, we can remain agnostic as to the design and functioning of this computer: it might exploit some exotic quantum gravitational effects in performing its computation. We aim to have the computer produce the state U|ψ⟩_a, which will be stored somewhere in the black hole. We assume that a global reconstruction of the H_a Hilbert space from the joint Hilbert space of both CFTs exists, and we require the reconstruction procedure is independent of the unitary to be performed.^4 Thus there is some isometry R that maps the joint CFT Hilbert space H_L ⊗ H_R into H_A (along with some remainder system), where H_A holds the state U|ψ⟩_A if the bulk computation has succeeded.
To relate this setting to quantum information theory, consider the notion of a quantum programmable processor. An exact programmable processor is an isometry T which acts according to

T (|ψ⟩_A ⊗ |φ_U⟩_P) = (U|ψ⟩)_A ⊗ |φ̃_U⟩_E.   (1)

We will also consider approximate notions of a quantum processor. The P Hilbert space holds a state |φ_U⟩ which we call a program state, and which specifies a unitary U to be applied. We will consider non-universal programmable processors, which have program states for only some finite set of unitaries.
Returning to our black hole, we note that we can view the insertion of the relevant degrees of freedom, time evolution, and the recovery operation as the action of a quantum processor. This is because once the program state is prepared, the remaining operations used to carry out the computation (inserting these systems into the bulk, allowing the black hole to time evolve, then recovering the output system) are all independent of the program state, and can be viewed as a particular choice of isometry T that acts according to equation 1. We discuss the definition of T in more detail later on, but note here that it is fixed by the description of the CFT and of the initial state of the black hole.
Quantum processors are subject to constraints. Consider processors that implement a family of diagonal unitaries on n_A qubits,

U_ε = Σ_i ε_i |i⟩⟨i|,   ε_i ∈ {−1, +1}.   (2)

For this family, one can show that an isometry T succeeds in implementing a randomly chosen unitary U_ε ∈ E poorly whenever the number of qubits in the program state is sub-exponential in the number of data qubits. In particular, we will show that the probability p(T, U_ε) of successfully applying the unitary^5 satisfies the bound

𝔼_ε p(T, U_ε) ≤ C n_P / 2^{n_A},   (3)

where n_P is the number of qubits in the program state, the average is over all values of ε, and C is a constant. Returning to the holographic setting, take

n_P ≪ S_bh,   log S_bh ≤ n_A ≪ S_bh.   (4)

The upper bound on n_P is our imposition that we are considering a computer built of many fewer degrees of freedom than the black hole. We are free to choose n_A as we like, and take n_A ≪ S_bh to ensure the inputs to the computation fit easily into the black hole. The lower bound on n_A ensures C n_P / 2^{n_A} will be small and our processor bound consequently non-trivial. Inside this regime, the bound 3 implies that some unitaries U_ε can be implemented in the bulk only with probability less than 1. By itself this is no surprise: to specify an arbitrary U_ε requires 2^{n_A} bits (the signs ε_i), so for some U_ε the program state of n_P ≪ S_bh ≤ 2^{n_A}/C qubits is too few qubits to specify the unitary, preventing the bulk computer from applying it. More surprising is that there are also unitaries with short descriptions that cannot be implemented in the bulk. To construct one, notice that the U_ε inherit an ordering from the strings ε. Choosing some threshold δ < 1, we have from the bound 3 that some unitaries cannot be completed with probability higher than δ. We define U_ε as the first unitary which the processor T defined by our setting can't complete with probability more than δ. In the main text we argue the CFT and the initial state can be efficiently described, using in particular O(log S_bh) qubits, which means the description of these forbidden unitaries is small enough to be recorded into n_P. Thus inside the black hole the computer holds a complete description of the unitary U_ε to be applied, but by construction the computer must fail to apply U_ε, since otherwise the programmable processor T would succeed.
This construction shows that there are at least some computations which cannot be performed inside the black hole, despite there being no information theoretic reason they shouldn't be (i.e. the unitary is fully specified, and the inputs are available). Consequently, it is a computational restriction that forbids these unitaries from happening: we have shown that the bulk quantum gravity computer cannot implement arbitrary computations, and in particular cannot implement the explicit computation we constructed.
To better understand the workings of our bulk computer, it is interesting to ask how hard it is to implement the computations we've shown to be forbidden. In particular, what is their complexity, when considering for example a quantum circuit model of computation? We argue that in the regime 4, the computation that implements the needed unitary requires circuits with memory at least C S_bh and depth at least 2^{S_bh}. Assuming the physical computer has similar space and time requirements would suffice as a bulk explanation for why these computations are forbidden.
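The selection rule for the forbidden unitary above can be phrased as a simple scan over sign strings in a fixed order. The sketch below is our own illustration, not taken from the paper: the probability oracle `p` is a hypothetical stand-in, since actually evaluating the processor's success probabilities is exactly the enormously costly step (it is the source of the exponential memory and depth requirements just discussed).

```python
from itertools import product

def first_forbidden(p, n_A, delta):
    """Return the first sign string (in lexicographic order) whose
    success probability falls below the threshold delta.

    p is a hypothetical oracle giving the processor's success
    probability on each sign string; in the holographic setting,
    evaluating it would require simulating the processor, and the scan
    visits up to 2^(2^n_A) candidates.
    """
    for eps in product([1, -1], repeat=2**n_A):
        if p(eps) < delta:
            return eps
    return None  # no forbidden string at this threshold

# Toy stand-in oracle: pretend strings with more -1 entries succeed less.
toy_p = lambda eps: (sum(eps) + 2**2) / 2**3   # n_A = 2, values in [0, 1]
print(first_forbidden(toy_p, 2, 0.5))          # → (1, -1, -1, -1)
```

The point of the sketch is that the *description* of the output is tiny (the oracle, the ordering, and δ), even though finding it by scanning is doubly exponentially slow.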

Summary of notation
We briefly recall some asymptotic notation used in computer science and employed here. We will use

f(x) = O(g(x)) if limsup_{x→∞} f(x)/g(x) < ∞,
f(x) = o(g(x)) if lim_{x→∞} f(x)/g(x) = 0,
f(x) = ω(g(x)) if lim_{x→∞} f(x)/g(x) = ∞,
f(x) = Θ(g(x)) if f(x) = O(g(x)) and g(x) = O(f(x)).

In words, big O means f(x) grows not much faster than g(x), little o means f(x) grows more slowly than g(x), little ω means f(x) grows faster than g(x), and Θ means g(x) and f(x) grow at the same rate. Some other notation:

• We use capital Latin letters for quantum systems A, B, ..., except when they are bulk subsystems, in which case we use lower case Latin letters a, b, ..., etc.
• We use bold face capital Latin letters for unitaries and isometries, T, U, etc.

Programmable processors
In this section we define the notion of a programmable processor more carefully, then give a bound on a particular class of processors.

Universal and non-universal quantum processors
A classical computer functions according to the following basic structure. We input some data recorded in a string, call it x, and a program, call it P. Then the computer applies the program to the input data, producing output P(x). When any program can be input to the computer in this way, we say the computer is universal.
In the quantum context the analogue is known as a universal processor. In this setting a program amounts to a specification of a unitary, and the input data is a quantum state. The overall action of a processor is given by an isometry T_{AP→AE}, which satisfies

T (|ψ⟩_A ⊗ |φ_U⟩_P) = (U|ψ⟩)_A ⊗ |φ̃_U⟩_E

for every unitary U. In [12], the notion of a universal quantum processor was defined, and it was shown that for each distinct unitary (up to a phase) the processor can implement, an orthogonal program state is needed. Because there are an infinite number of distinct unitaries, no universal processor can exist in the exact setting.
Giving up on a universal quantum processor, we can consider similar but weaker objects that might be possible to construct. One possibility is to consider approximate universal processors, allowing for some error tolerance in applying the unitary U. Such approximate universal processors can be constructed [13], and it is known that any such construction needs the dimension of the program Hilbert space to scale exponentially with the dimension of the input Hilbert space [14]. Another route is to consider finite families of unitaries, and look for processors that apply only elements of this family, either exactly or approximately.
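As a concrete illustration of the finite-family route (a toy example of our own, not drawn from the references): the two-element family {I, Z} on a single qubit admits an exact processor. A controlled-Z gate with the program qubit as control applies Z^b to the data qubit when the program state is |b⟩, and leaves the program register unchanged.

```python
import numpy as np

# Toy exact non-universal processor for the family {I, Z}:
# T is a controlled-Z with the program qubit as control.
# Basis ordering throughout is |data> ⊗ |program>.
T = np.diag([1, 1, 1, -1]).astype(complex)  # CZ gate
Z = np.diag([1, -1]).astype(complex)
I = np.eye(2, dtype=complex)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

for b, U in [(0, I), (1, Z)]:
    prog = np.zeros(2, dtype=complex)
    prog[b] = 1.0                       # program state |b> selects U = Z^b
    out = T @ np.kron(psi, prog)        # T(|psi> ⊗ |b>)
    expected = np.kron(U @ psi, prog)   # (U|psi>) ⊗ |b>, program unchanged
    assert np.allclose(out, expected)
print("exact {I, Z}-processor verified")
```

Here the two program states |0⟩ and |1⟩ are orthogonal, in line with the no-programming result of [12]: exactness for distinct unitaries forces orthogonal programs.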
In this work, we will make use of results on this second notion of a quantum processor, which we now define more fully.

Definition 1. A quantum processor T : H_A ⊗ H_P → H_A ⊗ H_E for a family of unitaries U is an isometry such that for each U ∈ U there is a program state |φ_U⟩_P for which

T (|ψ⟩_A ⊗ |φ_U⟩_P) = (U|ψ⟩)_A ⊗ |φ̃_U⟩_E   (6)

holds for all |ψ⟩. We also call such a construction a U-processor.
To define a notion of an approximate U-processor, one approach would be to require that equation 6 holds approximately for all U. Instead, we will define a quantity which captures how close to a U-processor an isometry is in an averaged sense.

Definition 2. (Processor testing scenario) Consider an isometry T : H_A ⊗ H_P → H_A ⊗ H_E and a family of unitaries U. The U-processor testing scenario is as follows.

1. Choose U_A ∈ U uniformly at random.

2. Choose a state |φ_U⟩_P ∈ H_P. Apply T to |Ψ⟩_{AR} ⊗ |φ_U⟩_P, where R is a reference system and |Ψ⟩_{AR} is the maximally entangled state.

3. Measure the POVM {(U_A ⊗ I_R)|Ψ⟩⟨Ψ|(U_A ⊗ I_R)†, I − (U_A ⊗ I_R)|Ψ⟩⟨Ψ|(U_A ⊗ I_R)†} on AR, and declare the test passed on the first outcome.

The probability of passing this test is, using the optimal choice of program state, given by

p(T, U) = (1/|U|) Σ_{U ∈ U} max_{|φ_U⟩} ⟨Ψ| (U† ⊗ I_R) Tr_E [ T (|Ψ⟩⟨Ψ|_{AR} ⊗ |φ_U⟩⟨φ_U|_P) T† ] (U ⊗ I_R) |Ψ⟩.

The quantity p(T, U) gives our quantification of how close to a U-processor T is.
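To make the testing scenario concrete, the optimal-program passing probability for a fixed U can be computed as the top eigenvalue of X_U†X_U, where X_U : H_P → H_E collects the passing amplitudes. The sketch below is our own illustration; the contraction ⟨i|_A T |i⟩_A it uses matches the operators T_i defined later in the proof of theorem 3.

```python
import numpy as np

def pass_prob(T, U, dA, dP, dE):
    """Optimal-program probability of passing the U-processor test.

    T is a (dA*dE, dA*dP) isometry, input ordered as A ⊗ P and output
    as A ⊗ E. The test input is the maximally entangled state on AR
    together with a program state, which is optimized over below.
    """
    # Build X_U column by column: X_U|p> = (1/dA) sum_i <i|U† applied to
    # the A output leg of T(|i>_A |p>_P).
    X = np.zeros((dE, dP), dtype=complex)
    for p in range(dP):
        for i in range(dA):
            ain = np.zeros(dA * dP, dtype=complex)
            ain[i * dP + p] = 1.0
            out = (T @ ain).reshape(dA, dE)
            X[:, p] += (U.conj().T[i, :] @ out) / dA
    # Maximum over normalized programs = largest eigenvalue of X†X.
    return float(np.linalg.eigvalsh(X.conj().T @ X)[-1])

# The controlled-Z processor for the family {I, Z} passes perfectly:
T = np.diag([1, 1, 1, -1]).astype(complex)
for U in [np.eye(2, dtype=complex), np.diag([1.0, -1.0]).astype(complex)]:
    assert np.isclose(pass_prob(T, U, 2, 2, 2), 1.0)
print("p(T, U) = 1 for U in {I, Z}")
```

By contrast, a processor that ignores its program (T the identity, applying nothing) passes the test for U = I with probability 1 but for U = Z with probability 0, since the maximally entangled input detects the missing phase.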

Lower bounds on quantum processors
Below, we will show that U-processors are constrained by the size of their program Hilbert spaces. We will be interested in processors implementing the family of unitaries

E = { U_ε = Σ_{i=1}^{d_A} ε_i |i⟩⟨i| : ε_i ∈ {−1, +1} }.

This family is of particular interest because it can be related to the notion of type constants in the theory of Banach spaces, which will be the technical tool that eventually leads to our bound.
We now state the main claim of this section.

Theorem 3. (Bound on E-processors) Given an isometry T : H_A ⊗ H_P → H_A ⊗ H_E, the success probability in the E-processor testing scenario satisfies

p(T, E) ≤ C n_P / 2^{n_A},   (9)

where C is a constant.
This will be the technical statement used in the next section, and the reader uninterested in the proof may skip ahead. In the rest of this section we explain some tools needed and then give the proof. Note that this result is similar to the bound given in [14], both in the techniques we will use to prove it and in the statement. The only distinction is that in [14] they give a lower bound on the dimension of the program space in terms of a measure of the worst case performance of the processor. We can read the above as a lower bound on d_P in terms of the performance of the processor on a particular state, the maximally entangled one, which can also be related to the average case performance of the processor.
The central mathematical structure we will exploit is the notion of a Banach space, and the theory of type constants. A Banach space B is a vector space equipped with a norm ‖·‖_B, and which is complete under that norm. This can be compared to the more familiar notion of a Hilbert space, which is a vector space with an inner product ⟨·,·⟩, and which is complete under the norm induced by that inner product, ‖x‖ = √⟨x, x⟩. Notice that every Hilbert space is also a Banach space, but the reverse is not true. Type constants are certain numerical values associated with a given Banach space B that characterize, in a sense we explain, how far from being a Hilbert space B is. In particular, if a norm is defined by an inner product, it carries with it additional structure beyond what is usually given by a norm. For example, in a Hilbert space we have

(1/2) ( ‖x + y‖² + ‖x − y‖² ) = ‖x‖² + ‖y‖².

How badly a Banach space can violate this equality then gives some notion of how far it is from being a Hilbert space. This motivates the following definition, which follows [15]. We will only exploit the type 2 constants, but give a more general definition for completeness.
Definition 4. Let B be a Banach space and let 1 ≤ p ≤ 2. We say B is of type p if there exists a positive constant t such that for every natural number n and every sequence x_1, ..., x_n ∈ B,

( 𝔼_ε ‖ Σ_{i=1}^n ε_i x_i ‖²_B )^{1/2} ≤ t ( Σ_{i=1}^n ‖x_i‖^p_B )^{1/p},

where the expectation is over uniformly random signs ε_i ∈ {−1, +1}. The infimum of the constants t that satisfy this condition is the type p constant of B, which we denote t_{B,p}.
Note that in a Hilbert space H, we always have t_{H,2} = 1.
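The statement t_{H,2} = 1 reflects an exact identity in Hilbert space: averaging over all sign patterns, the cross terms ⟨x_i, x_j⟩ with i ≠ j cancel, leaving 𝔼_ε ‖Σ_i ε_i x_i‖² = Σ_i ‖x_i‖². A quick numerical check (our own illustration):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n, dim = 5, 7
xs = rng.normal(size=(n, dim)) + 1j * rng.normal(size=(n, dim))

# Average ||sum_i eps_i x_i||^2 over all 2^n sign patterns: in a Hilbert
# space the cross terms average to zero, leaving exactly sum_i ||x_i||^2,
# i.e. the type 2 inequality is saturated with t = 1.
avg = np.mean([
    np.linalg.norm(np.tensordot(np.array(eps), xs, axes=1)) ** 2
    for eps in product([1, -1], repeat=n)
])
assert np.isclose(avg, np.sum(np.linalg.norm(xs, axis=1) ** 2))
print("E_eps ||sum eps_i x_i||^2 == sum ||x_i||^2")
```

For a general Banach norm (e.g. the operator norm used below) this identity fails, and the type 2 constant measures the worst-case multiplicative violation.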
It is also helpful to introduce the Banach space formed by linear operators acting on a Hilbert space. Given an operator O : H → H′, define the operator norm

‖O‖_∞ = max_{|ψ⟩ ∈ Ball(H)} ‖ O|ψ⟩ ‖,

where Ball(H) is the unit ball in the Hilbert space H. Then L(H′, H), the space of linear operators mapping H into H′ which also have bounded operator norm, forms a Banach space. Considering the case of finite dimensional spaces, the type 2 constant of L(H′, H) can be bounded above according to [15,16]

t_{L(H′,H),2} ≤ C √( log min(dim H, dim H′) ),   (13)

where C is a universal constant. With these ingredients, we are able to give the proof of theorem 3.

Proof. (Of theorem 3)
We introduce the notation 𝔼_ε for the uniform average over sign strings ε, and will denote the choice of program states by |φ_ε⟩. The success probability p(T, E) is expressed as

p(T, E) = 𝔼_ε max_{|φ_ε⟩} ‖ ( ⟨Ψ|_{AR} (U_ε† ⊗ I_R) ⊗ I_E ) (T ⊗ I_R) ( |Ψ⟩_{AR} ⊗ |φ_ε⟩_P ) ‖²_E,

where ‖ |ψ⟩ ‖_E = √⟨ψ|ψ⟩ is the usual norm on the Hilbert space H_E. Using that |Ψ⟩_{AR} is the maximally entangled state, and that

U_ε† = Σ_i ε_i |i⟩⟨i|,

we obtain

p(T, E) = (1/d_A²) 𝔼_ε max_{|φ_ε⟩} ‖ Σ_i ε_i ⟨i|_A T |i⟩_A |φ_ε⟩ ‖²_E.

Define T_i ≡ ⟨i|_A T |i⟩_A, which is a linear map from P to E. Then the above becomes

p(T, E) ≤ (1/d_A²) 𝔼_ε ‖ Σ_i ε_i T_i ‖²_∞.

The last norm is on the Banach space of bounded linear operators from H_P to H_E. Our choice of family of unitaries E has led conveniently to the final expression here being the sum appearing in the definition of the type 2 constant. Using the result 13 for the upper bound on the type 2 constant of this Banach space, we obtain

p(T, E) ≤ (C²/d_A²) log(d_P) Σ_i ‖T_i‖²_∞ ≤ C n_P / 2^{n_A},

where we used that ‖T_i‖_∞ ≤ 1 in the last inequality, along with Σ_i ‖T_i‖²_∞ ≤ d_A = 2^{n_A}, and absorbed numerical factors into C. This is exactly equation 9.
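The mechanics of the proof are visible in the simplest case: a trivial (one-dimensional) program register with T the identity on A, so that each T_i = 1. The passing probability on sign string ε is then |(1/d_A) Σ_i ε_i|², and the ε-average is exactly 1/d_A = 2^{−n_A}, illustrating the 1/2^{n_A} decay in the bound. This is a toy check we added, not a computation from the paper:

```python
import numpy as np
from itertools import product

# With d_P = 1 and T = identity on A, T_i = 1 and the test-passing
# probability on sign string eps is |(1/dA) * sum_i eps_i|^2.
# Since E_eps |sum_i eps_i|^2 = dA, the eps-average is exactly 1/dA.
dA = 8  # n_A = 3 data "levels"
probs = [abs(sum(eps)) ** 2 / dA**2 for eps in product([1, -1], repeat=dA)]
avg = float(np.mean(probs))
print(avg)  # 1/dA = 0.125
assert np.isclose(avg, 1 / dA)
```

A processor with a full 2^{d_A}-dimensional program register can instead store ε outright and succeed with probability 1, which is why the bound must degrade as n_P grows.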

Forbidden computations for physical computers
In this section we relate bounds on programmable processors to computation in holographic spacetimes. Then, we comment on the interpretation of the resulting constraints from a bulk perspective. We begin, however, with a very brief review of some needed results in AdS/CFT related to reconstructing states in the bulk from the boundary.

The reconstruction wedge
A basic element in the understanding of AdS/CFT is the Ryu-Takayanagi formula, and its various generalizations and restatements. One form of the modern statement reads [17]

S(A) = min ext_γ [ area(γ)/(4 G_N) + S_bulk(E_γ) ].   (18)

The area plus entropy term inside the brackets is often called the generalized entropy. The extremization is over surfaces γ which are homologous to A, which means that there exists a codimension 1 surface E_γ such that ∂E_γ = γ ∪ A. The term S_bulk(E_γ) counts the entropy inside the region E_γ. When there are multiple candidate extremal surfaces homologous to A, the final minimization picks out the one with least generalized entropy. The minimal extremal surface picked out by the optimization procedure in the RT formula will be labelled γ_A. This formula receives leading order corrections in some regimes, as understood in [18], but the form 18 will suffice for our application.^6

Given a subregion of the boundary A, it is natural to ask if a subregion of the bulk is recorded into A. To make this question more precise, we should introduce a choice of bulk subspace, which we refer to as the code-space and label H_code. The subspace H_code might for instance be specified by a particular choice of bulk geometry, along with some qubits distributed spatially across the bulk. Then, assume we are told the bulk degrees of freedom are in a state within H_code, and we are given the degrees of freedom on subregion A. What portion of the bulk degrees of freedom can we recover?
Answering this question is related closely to the RT formula. In particular, the portion of the bulk we can recover if we know the bulk state is in H_code is given by [19,20]

⋂_{ρ ∈ H_code} E_{γ_A[ρ]}.

That is, for each state in the code space we find where the RT surface γ_A sits, and define the corresponding bulk subregion E_{γ_A}. Then, we take the intersection of all such regions, considering all states in the code-space. Note that in this procedure we should include mixed states of the code-space. The resulting region is the portion of the bulk degrees of freedom we can recover, if we know nothing about which state in the code-space the full bulk is in. This region is sometimes referred to as the reconstruction wedge of region A, defined relative to the code-space H_code. Given that it is possible to recover information inside the reconstruction wedge, we can also ask what explicit operation recovers the code space from the CFT degrees of freedom. Given a global map from the bulk subspace H_code to the boundary Hilbert space, it was understood in [21] how to construct such a recovery channel. Note that in this construction, a single choice of recovery channel works correctly for the entire code-space.
We will apply the notion of the reconstruction wedge with the region A taken to be the entire boundary CFT. In this setting we might expect the reconstruction wedge is always the entire bulk, but if we choose too large a code space it is possible for this to break down. In particular, a minimal extremal surface can appear in equation 18 that cuts out a portion of the bulk. While this incurs an area term with a cost like L_AdS/G_N in the generalized entropy, if we take H_code large enough this can reduce the generalized entropy and be favoured. For this reason it will be necessary to keep our code spaces sufficiently small.

Holographic thought experiment with the game G_E
Let's return to the setting of the thought experiment presented in the introduction. Our goal will be to construct a unitary acting on a small system that is forbidden from being completed in the black hole interior. It will also be important that the unitary have a short description: if even specifying the unitary requires an exponential number of bits, bringing this description into the region may itself induce a large backreaction and cause the experiment to fail.
To make the notion of an efficient description more precise, we recall the definition of Kolmogorov complexity, also known as descriptive complexity. Intuitively, the descriptive complexity counts the minimal number of bits needed to describe a given string. Somewhat more formally, we make the following definition, which follows [22].

Definition 5. The shortest description of a string x is the shortest string ⟨M, w⟩, where M is a Turing machine and w is an input string for that Turing machine, such that M(w) outputs x. The descriptive complexity of x, which we denote d(x), is the length of the shortest description.
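While d(x) itself is uncomputable, a simple counting argument shows most strings are incompressible: there are fewer than 2^n descriptions shorter than n bits, so some n-bit string has d(x) ≥ n. A practical compressor such as zlib only gives an upper bound on description length, but already shows the contrast between structured and generic strings (our own illustration, not from the paper):

```python
import os
import zlib

# Counting: strings with descriptions shorter than n bits number at most
# 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1 < 2^n, so at least one n-bit
# string cannot be compressed below n bits.
n = 20
assert sum(2**k for k in range(n)) < 2**n

# zlib output length upper-bounds descriptive complexity: a highly
# structured string compresses enormously, a generic one barely at all.
structured = b"ab" * 50_000          # 100 kB, very regular
generic = os.urandom(100_000)        # 100 kB, essentially incompressible
print(len(zlib.compress(structured)), len(zlib.compress(generic)))
assert len(zlib.compress(structured)) < 1_000
assert len(zlib.compress(generic)) > 90_000
```

The forbidden unitaries constructed below are of the "structured" kind: their sign strings ε are exponentially long, yet they admit short descriptions.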
Returning to holography, we consider two copies of a holographic CFT placed in the thermofield double state, so that the bulk description is a two sided black hole. We consider a one parameter family of black holes parameterized by their entropy S_bh.^7 We could realize this for instance by considering a family of CFTs parameterized by the central charge c, which is proportional to the black hole entropy. Our argument however is agnostic to how we realize this family, which we could also realize by adjusting the black hole temperature.
We are interested in putting constraints on what can be computed within an AdS space dual to a holographic CFT. Before proceeding, we should make some comments on what is meant by having performed a computation. Given some input system H_a, we usually say that we have performed some computation U_ε (which will here be unitary) if the state on H_a transforms according to |ψ⟩_a → U_ε|ψ⟩_a. In quantum mechanics this is unambiguous, since the Hilbert space H_a is defined at all times. In field theory, we only have subregions of the spacetime, and a priori it's not clear what "the same" Hilbert space H_a at different times means. Unlike in quantum mechanics, we have different Hilbert spaces H_a and H_a′ at early and late times, and some identification of bases in the two spaces. In practice we routinely identify persistent Hilbert spaces: for example we can track a given particle through spacetime, and call the Hilbert space describing its spin degree of freedom H_a, but implicitly we have some basis information we are identifying across the early and late times.
In our context it will suffice to say that computation U_ε has been completed if we can identify in a "sufficiently simple" way a Hilbert space H_a′ and an identification of basis elements between H_a and H_a′ such that the transformation |ψ⟩_a → U_ε|ψ⟩_a′ has been implemented. For us "sufficiently simple" will mean that H_a′ and the identification of bases can be specified using a number of bits small compared to other parameters in the problem. This agrees with the usual setting in quantum mechanics where H_a′ is trivial to identify, and avoids some trivial ways of "performing" an arbitrary, highly complex computation, for instance by absorbing the computation into the basis identification. As an example, considering our particle moving through spacetime, we might identify the early and late time Hilbert spaces by specifying the background metric and parallel transporting a set of axes along the particle trajectory.
With this background on what we mean by a computation happening in a spacetime, let's proceed to understand the claimed constraints. We consider three agents, whom we call Alice, Bob, and the referee. The referee decides on some input size for the computation, call it n_A = log d_A, along with a sign string ε specifying a unitary U_ε in the family E. We then play the following game.
• Based on the value of ε, Alice prepares a state |φ_ε⟩_P ∈ H_P and acts on CFT_R so as to record the state on P into bulk degrees of freedom p, and throws this state into the black hole.
• The referee prepares the maximally entangled state (1/√d_A) Σ_i |i⟩_A |i⟩_R and gives the A system to Bob. Bob acts on CFT_L so as to record the state on A into bulk degrees of freedom a, and throws this state into the black hole.
• Alice gives CFT_R to the referee, and Bob gives CFT_L to the referee.
• The referee applies a global reconstruction procedure on H_L ⊗ H_R to recover the state on the a′ system, which he records into H_A. The Hilbert spaces H_a and H_a′ should be identified as discussed above. The referee then measures the POVM

{ (U_ε ⊗ I_R) |Ψ⟩⟨Ψ|_{AR} (U_ε ⊗ I_R)†,  I − (U_ε ⊗ I_R) |Ψ⟩⟨Ψ|_{AR} (U_ε ⊗ I_R)† }.

If the referee obtains the first measurement outcome, we declare Alice and Bob to have won the diagonal unitary game.
The steps in this procedure are summarized in figure 2.
Figure 2: Circuit describing Alice and Bob's procedure to carry out the diagonal unitary game. Unitary V_L acts on AL, and corresponds in the holographic picture to recording the state on the A system into bulk degrees of freedom a sitting in the left asymptotic region. Unitary V_R acts on RP and in the bulk picture corresponds to recording P into degrees of freedom p in the right asymptotic region. We allow the two CFTs to time evolve, which we absorb into V_L and V_R; in the bulk picture this allows a to interact with p. The isometry R extracts the a system from the bulk and records it back into A. The state |φ_ε⟩_P is prepared based on the string ε. The full circuit can be viewed as an isometry T_{AP→AE}.
In the reconstruction step, the referee applies a map R to the Hilbert space H_A ⊗ H_L ⊗ H_R. We claim this map can be made independent of ε and isometric. To understand why, recall from the last section that we can reconstruct H_a′ in a state independent way if we take our code space to be the full Hilbert space of states that can depend on ε, since then the reconstruction procedure is independent of ε. Thus we should take H_code to include all of those states obtained by inserting any state in H_p and time evolving forward to the point where we do the reconstruction. If we would also like to reconstruct without holding the reference system A, which we will need to apply our processor bound,^8 we should add in the n_A qubits worth of states. Thus state independent reconstruction is possible when n_A + n_P is much smaller than S_bh, so that the bulk entropy term never competes with the area of the black hole in finding the minimal extremal surface in equation 18. Concretely, it suffices to impose that

n_A + n_P ≪ S_bh.   (21)

Figure 3: a) We apply the CEB to the right going light sheet that begins on the bifurcation surface, call it Σ. Note that the information thrown in from the left will never cross Σ. b) Throwing in matter from the right deforms Σ. According to the CEB, Σ will always bend inwards enough so that no more than area(Σ)/4G_N qubits will cross through it. Consequently, information thrown in from the left will not encounter more than area(Σ)/4G_N qubits thrown in from the right.
We will need to ensure we work in this regime.
The claim that R is isometric is easy to misunderstand in light of another set of ideas in AdS/CFT. Often it is useful to discuss the Hilbert space of an effective field theory that lives on the bulk geometry. In the context of black holes, this EFT Hilbert space is thought to map non-isometrically into the CFT Hilbert space [8]. Said another way, the EFT Hilbert space of black holes is too big, and some of its states do not have corresponding states in the fundamental (CFT) description. In our context we never introduce the larger bulk EFT Hilbert space. Instead, we begin with some CFT state dual to the two sided black hole, then act on the CFT to introduce the inputs to our computation. Thus our bulk state is necessarily a state in the fundamental description.
If indeed we can ensure R is state independent, we can notice that after the initial preparation of ε all the steps in the protocol are independent of ε, and form an isometry. In fact, looking at the circuit diagram of figure 2 we see that the protocol is described by an isometry T_{AP→AE} and a state preparation of |φ_ε⟩_P, which is then input to T_{AP→AE}. Thus the overall action is described by a map

|ψ⟩_A ⊗ |φ_ε⟩_P → T (|ψ⟩_A ⊗ |φ_ε⟩_P).

This is exactly the action of a quantum programmable processor, so we have from theorem 3 that

p(T, E) ≤ C n_P / 2^{n_A}.   (23)

If we put appropriate constraints on n_P, n_A this bound will lead to constraints on computation happening inside the black hole.
The value of n P we would like to have constrained physically, rather than as a choice we put in -n P controls the size of the computer, and we want to allow Alice and Bob to exploit the action of any physically allowed computer.A natural constraint on n P is given by the covariant entropy bound (CEB) [23][24][25].We will apply the CEB to the bifurcation surface of the black hole, as shown in figure 3a.This limits the size of the computer that can be thrown into the hole according to Notice that we can throw in arbitrarily large systems from the right and create a larger black hole, but at most S bh of these degrees of freedom can interact with the systems falling in from the left.See figure 3b.Unfortunately, at the upper limit of allowed values given by the CEB we violate 21, and lose our guarantee of state independent recovery.To continue our argument in light of this, we introduce an assumption, which is that if a computation is forbidden inside of a black hole with entropy S ′ bh , then it is also forbidden inside of a black hole with entropy S bh with S bh = o(S ′ bh ).That is, we will restore state independent recovery in the diagonal unitary game by allowing Alice and Bob an apparently more powerful resource, the geometry of a larger black hole, and assume this doesn't weaken their computational power. 
Now with the diagonal unitary game in the larger black hole in mind, consider the value of n A . The value of n A is something we choose: we can decide to ask for a unitary on n A qubits to be applied inside the black hole, for any value of n A . We will choose n A such that it is much smaller than S bh , and so can be brought into the original black hole. Further, we will need to make n A large enough for equation 23 to be a meaningful constraint. Summarizing all the needed constraints, we consider running the diagonal unitary game inside of a black hole with entropy S ′ bh , with n P , n A satisfying In this regime, the constraint 21 is satisfied and the map T (which acts on the CFT state describing the larger black hole) is a state independent isometry. Consequently, the bound 23 applies, and using that n P ≤ S bh < 2 n A /C we have that the average success probability of the diagonal unitary game will be below 1. Now revisit the bound 23. Define the success probability of the processor T on value ϵ as so that the processor bound 9 is expressed as Setting some threshold probability δ with C n P /2 n A < δ < 1, we define the set We refer to elements in this set as forbidden unitaries. From 27, this set will be of size at least which is doubly exponentially large in our parameter regime.
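The counting behind the size of P(T, E) can be sketched as follows. This is a hedged reconstruction on our part: we assume the processor bound 9 takes the form p(T, E) ≤ C n P /2^{n A} for the ϵ-averaged success probability, as the threshold condition C n P /2 n A < δ < 1 suggests.

```latex
% Average success probability over the 2^{2^{n_A}} strings \epsilon:
p(T,E) \;=\; \frac{1}{2^{2^{n_A}}} \sum_{\epsilon} p(T,E\,|\,\epsilon)
  \;\le\; \frac{C\, n_P}{2^{n_A}} .
% Markov-type bound: the fraction of \epsilon with p(T,E|\epsilon) \ge \delta
% is at most C n_P / (\delta\, 2^{n_A}), so
|P(T,E)| \;\ge\; 2^{2^{n_A}} \left( 1 - \frac{C\, n_P}{\delta\, 2^{n_A}} \right),
% which is doubly exponential in n_A once C n_P / 2^{n_A} < \delta.
```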
To understand the meaning of these forbidden unitaries, first notice that n A grows more slowly than S bh . This means applying the needed unitaries is not restricted because the CEB is limiting the size of the systems acted on by our unitary. Looking at n P however, we can notice that since n P ≤ S bh < 2 n A /C, and ϵ consists of 2 n A bits, it is not possible to fit a complete description of an arbitrary ϵ into n P qubits. If we cannot even bring a specification of the unitary U ϵ into the black hole, it is no surprise that we cannot implement it there: it is not possible to do so on information theoretic grounds. While this does explain why many unitaries are forbidden, we claim there are also some forbidden unitaries whose description can be compressed to fewer than n P bits. Consequently, information theoretic constraints do not suffice to explain why those unitaries are forbidden.
We now define a unitary which both cannot be implemented in the bulk region and has a short description. Definition 7. Define the unitary U ϵ 0 to be the first element of P(T, E), where the ordering is the one induced by interpreting the string ϵ as a binary number.
Notice that from equation 29 the set P(T, E) is non-empty, and thus this unitary exists. Also observe that the above definition uniquely specifies this unitary.
We claim that U ϵ 0 can be specified using n P bits, with n P inside of the regime 25. The definition above is a Θ(1) length string, plus the descriptive lengths of T AP→AE and E. Let us consider the length of a description of each of these objects in turn.
• To describe E, we need some Θ(1) description plus the value of n A , which fixes the size of the unitaries in the set, which we can specify in O(log n A ) bits.
• To describe T AP→AE we need to specify R and the initial state in H L ⊗ H R appearing in figure 2.
-To define the initial state of the two CFT's, we need to specify which CFT we are discussing, and the one parameter describing the black hole, for which we use the entropy S ′ bh . Considering the description of the CFT, we assume there is a family of CFT's parameterized by the central charge c. Then to describe the CFT requires some Θ(1) data to specify which family we are considering, plus Θ(log c) = Θ(log S ′ bh ) bits to specify the member of that family. To specify S ′ bh requires at most log S ′ bh bits.
-Consider the map R. This is fixed by the choice of CFT, the initial state of the CFT, and the choice of subspace H a ′ . The choice of CFT and initial state was already specified above. To specify the subspace H a ′ , recall that we defined having completed a computation to mean recording the output into a Hilbert space that can be described in a small number of bits. In the black hole context, we take this to mean that we need far fewer than S ′ bh bits. We will allow in particular log S ′ bh bits to specify the subspace.
The last point regarding the number of bits to specify H a ′ is worth a few more comments. While we allow for log S ′ bh bits, in the argument below anything smaller than S ′ bh bits will lead to forbidden bulk computations. Our specific choice of log S ′ bh bits is motivated by considering the setting where, at the time of recovery, the bulk is described geometrically, and the output is recorded into some localized degrees of freedom. In this case we can specify the subspace using O(log S ′ bh ) bits, since S ′ bh controls the size of the black hole and we would need to specify where in the black hole those bits are stored.
The full accounting then is that the descriptive length The second equality follows from our choice of parameter regime. From this equation, we see that we can describe ϵ 0 using a state on n P bits whenever log S ′ bh < S bh , which we can easily take while being consistent with S bh = o(S ′ bh ). Notice also that we can define U ϵ m as the mth element of P(T, E), in which case we use k = log m additional bits. So long as we keep k = o(S bh ), this allows us to construct a family of unitaries of size 2 k which are similarly describable inside the black hole but forbidden from being implemented by the processor T.
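Gathering the items above, the tally of descriptive lengths can be sketched as follows. This is our reconstruction of the accounting; the precise constants are not essential to the argument.

```latex
d(\epsilon_0) \;=\; \underbrace{\Theta(1)}_{\text{Definition 7}}
 \;+\; \underbrace{O(\log n_A)}_{\text{the set } E}
 \;+\; \underbrace{\Theta(\log S'_{bh})}_{\text{CFT family}}
 \;+\; \underbrace{\log S'_{bh}}_{\text{entropy } S'_{bh}}
 \;+\; \underbrace{\log S'_{bh}}_{\text{subspace } \mathcal{H}_{a'}}
 \;=\; O(\log S'_{bh}),
% and for the m-th forbidden unitary,
d(\epsilon_m) \;=\; O(\log S'_{bh}) + \log m \;=\; O(\log S'_{bh}) + k .
```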
Let us summarize now our holographic thought experiment. On the right, Alice R prepares a randomly drawn string. Consider a case where she obtains a string describing a unitary in the set {U ϵ m } m≤2 k . In this case, she can record a description of the unitary U ϵ m into no more than S bh bits. Once she does so and sends these bits into the black hole with larger entropy S ′ bh , a complete description of U ϵ m is inside the black hole. However, by construction these unitaries cannot be completed with probability more than δ in our thought experiment. Thus performing these unitaries inside the black hole must be forbidden in the black hole of entropy S ′ bh , and hence by our assumption forbidden inside the smaller black hole of entropy S bh . In that setting, n P (the size of the computer) may be taken to be as large as the black hole entropy, n A (the size of the inputs) is still much smaller than the black hole, and the description of the forbidden unitaries is much smaller than S bh , so it can easily be brought into the black hole. Thus, the computation is forbidden from happening inside the smaller black hole using any physically allowed computer, even while the information needed to implement it is stored there: these forbidden computations must then be computationally forbidden. Further, there are at least 2 k such unitaries, with k = o(S bh ).

Bulk interpretation of forbidden unitaries
It is generally expected that the widely studied models of computation -classical Turing machines or quantum circuits -capture the power of physical computers. To make the connection between models of computation and physical computers, many authors have looked to gravitational constraints. This is because within quantum mechanics it does not seem possible to find a fundamental unit of time, or a fundamental constraint on the memory held in a physical region.
As one example, Lloyd [26] offered a plausible gravity argument that, in a circuit model of computation, the number of gates that can be performed in a given time is limited by the available energy. He then argues the available energy should be bounded above by the energy of a black hole, putting an apparent speed limit on computation. However, working with a Hamiltonian description of the computation one can evade this bound [2], doing arbitrarily complex operations arbitrarily quickly, and at arbitrarily low energy. While the needed Hamiltonians are likely unphysical, this construction shows that it remains unclear how to obtain a precise bound on computation from a direct gravity perspective.
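Lloyd's bound can be made concrete with a back-of-the-envelope calculation. The rate 2E/(πħ) is the Margolus-Levitin bound on orthogonalizing operations cited in the introduction [1], and devoting the full rest energy of 1 kg to computation is Lloyd's "ultimate laptop" example; the sketch below is ours.

```python
import math

HBAR = 1.054571817e-34     # reduced Planck constant, J*s
C_LIGHT = 2.99792458e8     # speed of light, m/s

def margolus_levitin_rate(energy_joules):
    """Maximum number of orthogonalizing operations per second available
    to a system with the given average energy (Margolus-Levitin bound)."""
    return 2.0 * energy_joules / (math.pi * HBAR)

# Lloyd's "ultimate laptop": all the rest energy of 1 kg devoted to computation.
energy = 1.0 * C_LIGHT**2                        # E = m c^2, in joules
ops_per_second = margolus_levitin_rate(energy)   # about 5.4e50 ops/s
```

The point of the surrounding paragraph is that this energy-based speed limit, however suggestive, is evaded by the Hamiltonian construction of [2].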
Our construction of forbidden unitaries gives a very preliminary step towards connecting physical computers and models of computation: it at least shows that some computations cannot happen in certain finite spacetime regions. A natural question is how high the complexity of our forbidden computations is, and whether this high complexity offers some plausible physical reason, from a bulk perspective, why these unitaries should be forbidden.
We can make a few comments about the complexity of our forbidden computations. The needed computation is, given the compressed description of ϵ 0 and input system A, to apply U ϵ 0 . One route to doing this is to first decompress ϵ 0 , then apply U ϵ 0 based on the value of the uncompressed string. To decompress ϵ 0 from its compressed description, we need to find the first value ϵ where the function p(T, E|ϵ) is smaller than δ. A naive classical algorithm to decompress the description of ϵ 0 is the following.
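The naive algorithm can be sketched as follows. Here `p_success` is a stand-in for the evaluation of p(T, E|ϵ′), which in reality requires simulating the processor; the toy function supplied at the bottom is purely illustrative.

```python
from itertools import product

def decompress_epsilon0(n_A, p_success, delta):
    """Scan the 2^{n_A}-bit strings epsilon' in binary order and return the
    first one whose success probability p(T, E | epsilon') falls below delta.
    That first string is, by Definition 7, the decompressed epsilon_0."""
    length = 2 ** n_A                            # epsilon is a 2^{n_A}-bit string
    for bits in product((0, 1), repeat=length):  # 2^{2^{n_A}} candidates in order
        if p_success(bits) < delta:
            return bits                          # first forbidden string found
    return None                                  # P(T, E) empty (excluded by eq. 29)

# Toy stand-in: pretend strings with many 1s are hard for the processor.
eps0 = decompress_epsilon0(
    n_A=2, p_success=lambda b: 1.0 - 0.2 * sum(b), delta=0.5)
```

The loop makes the resource usage discussed next visible: up to 2^{2^{n_A}} evaluations of p(T, E|ϵ′), while each candidate string occupies 2^{n_A} bits of memory.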
Assuming computing p(T, E|ϵ ′ ) takes O(1) steps (it is likely longer), this runs in O(2 2 n A ) steps. From 25 we see that this gives a number of steps in this algorithm of 2 C S bh . Further, notice that the memory needed to run this algorithm is at least the memory needed to store ϵ ′ , which has length 2 n A , so can be made as small as C S bh bits. In appendix A, we give a heuristic argument that it is not possible to significantly improve on the memory usage and number of steps used in this algorithm, even using a quantum circuit model of computation. The 'central dogma' of black hole physics states that black holes can be described as quantum mechanical systems with dimension 2 S bh . If we assume this, and assume a quantum circuit model captures the power of the bulk computer, this provides one plausible explanation for why these unitaries are forbidden in the bulk: the best quantum algorithm seems to require memory C S bh > S bh , so it cannot run inside the black hole.
We can also discuss the relationship between the number of computational steps needed to perform our unitary and the time available inside the black hole. Recall that we considered running our diagonal unitary game in the larger black hole of entropy S ′ bh , where we first showed the computation was forbidden, assuming Setting the above constraint amounts to a constraint on the choice of computing device thrown into the black hole, imposing that it is sufficiently small compared to the black hole entropy. Why does this constrained computer fail to implement the given computation in the larger black hole? Notice that the memory usage of the naive algorithm above is 2 n A = C S bh = o(S ′ bh ), which is now much smaller than the black hole entropy. The number of computational steps of the naive algorithm now presents the most plausible computational restriction: the number of steps is 2 2 n A , which is much larger than S ′ bh if n A = ω(log S ′ bh ), which we are indeed guaranteed by our parameter regime 25. If we suppose a computational step takes some finite time, and that the naive algorithm above cannot be significantly improved in run time, this suffices as a bulk explanation for why our (restricted) computer cannot perform the needed computation. Because this seems to be the needed explanation in the context of the larger black hole, we might take this as evidence that the run time is also the relevant constraint for the (unconstrained) computer in the smaller black hole, although as noted above in that setting the memory needed is also larger than is available, again assuming a circuit model.
In fact, it is interesting to push this restriction on the size of the computer as far as possible and understand the number of computational steps needed in the resulting problem. Suppose we take n P = log S ′ bh . This is the smallest we can take it while still allowing a description of T to fit into n P bits. Then, we can have n A = log C log S ′ bh and still get a non-trivial bound from our processor bound. This leads to unitaries that are forbidden from happening inside of the black hole using a computer built from n P qubits. The memory then needed to run our naive algorithm is log S ′ bh , while the run-time is S ′ bh . Thus the run-time of this small computer still seems to explain its inability to perform the computation inside the black hole. If we are willing to place constraints on the size of the computer by hand, there is no longer any need to consider the black hole setting, which gave a natural surface on which to invoke the CEB. In the next section we consider restrictions on small computers in more general settings.

Forbidden computations for small computers
Given our construction in section 3 of constrained computations, we should ask to what extent our argument can be generalized away from the black hole setting, and away from AdS/CFT.
Towards making a more general statement, consider the following setting. We have a quantum mechanical system described by Hilbert space H = H A ⊗ H P ⊗ H E and evolving under Hamiltonian H, where we refer to the A system as the data Hilbert space, the P system as the program space, and E as the environment. Given a unitary U A , Alice prepares H P in a state recording the unitary to be applied, or a description of a program to apply it, along with any computing device prepared to apply it. She may use arbitrarily complex computations in preparing this state. Then, Bob prepares some state on the A system. Further, the E system is put in an arbitrary state |ψ〉 E which we take to be initially pure, so that the environment is initially unentangled with the data and program spaces. The full Hilbert space is then allowed to evolve under time evolution given by the Hamiltonian H. After some amount of time t, a measurement is made on the A subsystem testing if U A has been applied. This setting closely models the basic computational setting we find in the real world: we can prepare our computer which holds the program, insert the data, and then the computer runs -it evolves in this case under the Hamiltonian describing our universe.
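This setting can be illustrated with a minimal numpy sketch. The block-diagonal Hamiltonian below is one hypothetical choice (our own, chosen so the induced data dynamics is easy to read off), showing how a fixed H together with a program state acts as a programmable processor on the data register.

```python
import numpy as np

# Toy version of the setting: H acts on program (x) data; once the program
# state is fixed, time evolution e^{-iHt} is a program-independent isometry.
# H = sum_k |k><k|_P (x) H_k : the program selects which dynamics the data sees.
n_P, n_A = 1, 1
dP, dA = 2**n_P, 2**n_A
rng = np.random.default_rng(1)

def rand_hermitian(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

H_k = [rand_hermitian(dA) for _ in range(dP)]
H = sum(np.kron(np.diag(np.eye(dP)[k]), H_k[k]) for k in range(dP))

def evolve(state, t):
    """e^{-iHt}|state>, via the spectral decomposition of the Hermitian H."""
    w, v = np.linalg.eigh(H)
    return v @ (np.exp(-1j * w * t) * (v.conj().T @ state))

t = 1.3
data = np.array([1.0, 0.0])
prog = np.zeros(dP); prog[0] = 1.0   # "program" thrown in with the computer
out = evolve(np.kron(prog, data), t).reshape(dP, dA)

# For this fixed program, the induced map on data is the unitary e^{-i H_0 t}.
w0, v0 = np.linalg.eigh(H_k[0])
expected = v0 @ (np.exp(-1j * w0 * t) * (v0.conj().T @ data))
```

In the general setting of the main text H need not be block diagonal and an environment register is also present, but the structural point is the same: after the program state is fixed, everything that happens is a single, program-independent isometry.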
In the black hole setting there is a natural bound on n P , the number of qubits in the program space, which is imposed physically. In this scenario, we instead restrict n P by hand -consequently, we are deriving here constraints on how fast small computers can perform computations, but not on all physically allowed computers. Also, note that in that setting the role of the environment Hilbert space H E was played by the combined Hilbert spaces of the two CFT's.
Our processor bound 9 leads to a constraint on how quickly some unitaries can be performed in this scenario. In particular, we have again that, after the system P is put into the program state, the remaining action of the computer is described by an isometry independent of the unitary. In particular, the remaining action is just time evolution under H. The description of H, the initial state of the environment |ψ〉 E , and the amount of time t we evolve for then define a processor, which we label T. Considering the family of unitaries 8, we can apply the processor bound 9, finding that Given an allowed program space of n P qubits, we choose the family of computations E such that n A is large enough, satisfying in particular so that p(T, E) is less than 1. Given a value of t and choice of Hamiltonian H, we can then define a forbidden unitary in a way analogous to definition 7, which we do next. Define the set of unitaries with low success probability and then define a unitary which has a short description and is forbidden.
Definition 8. Let U ϵ 0 be the first unitary in the set P(T, E), where we order the set E by interpreting the strings ϵ as binary numbers.
As before, we can also extend this to a family of unitaries. How long is the description of U ϵ 0 ? Importantly, it must be short enough to be written into n P qubits while maintaining n P ≤ 2 n A /C. Notice that this definition consists of the O(1) string given explicitly, plus a description of T and the parameter n A describing the set E. Thus, if we have, for d(H) and d(|ψ〉) the descriptive lengths of H and |ψ〉, there will exist forbidden unitaries which have descriptions fitting inside the program state, and hence must be computationally forbidden. We can always adjust our chosen value of n P to ensure this is the case. The requirement above is essential to the physical consistency of our construction. One way this manifests is that we have so that we cannot construct forbidden unitaries for arbitrarily small n A compared to t, which means the complexity of the computation cannot be made small compared to the time t. As an interesting case, consider the setting where log t is much larger than the other parameters in the description of the isometry; in particular we allow a long enough time that Going to this setting, and using 36, we see that forbidden unitaries occur only for times shorter than t ∼ 2 2 n A . Recall that 2 2 n A is exactly the scaling of the number of steps needed to decompress the forbidden ϵ. Thus our forbidden computations remain complex enough to ensure the number of steps it takes to implement them scales like the physical time needed to implement them on a computer. Another comment is that we expect that for a given computation we can always find a t large enough that our dynamical evolution implements the computation. Indeed our construction does not violate this, as it requires we first choose t, then construct a unitary that cannot be implemented within time t. In particular we emphasize that for larger t the value of n A must be chosen suitably large. A similar comment arises in comparing to the construction of Jordan [2]. Given a unitary, Jordan constructs a Hamiltonian that completes the unitary in an arbitrarily short time. In contrast, our ordering is different: we fix a Hamiltonian and a choice of time t and then show there are computations that cannot be run by this Hamiltonian within that time. Since we expect there is ultimately one Hamiltonian describing our universe, this reversed statement seems sufficient to find physically unrealizable computations.
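Written out, the consistency requirement discussed above takes the following schematic form. This is our reconstruction, with d(·) denoting descriptive length as before.

```latex
\Theta(1) + d(H) + d(|\psi\rangle_E) + \log t + O(\log n_A)
 \;\le\; n_P \;\le\; \frac{2^{n_A}}{C},
% so in particular \log t \lesssim 2^{n_A}/C: at fixed n_A, forbidden
% unitaries with short descriptions exist only for times
t \;\lesssim\; 2^{\,2^{n_A}/C},
% matching the \sim 2^{2^{n_A}} scaling of the number of steps needed
% to decompress the forbidden string \epsilon_0.
```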

Discussion
In this work we have constructed computations which cannot be implemented inside of a black hole with entropy S bh , despite the inputs to these computations being small, and the description of the computation being easily fit inside the black hole. We have argued that these computations are high complexity, which may explain why they are forbidden. Regardless of the explanation for why these computations are forbidden, our construction unambiguously establishes that at least some computations are forbidden from being implemented inside the black hole.
Moving forward, it would be interesting to understand general properties of unitaries that restrict their bulk implementation. To do this, we have two alternative approaches by which we can proceed. As we have done here, we can exploit the view of bulk computation in terms of programmable processors. Alternatively, following [9,10,27,28], we can relate bulk computation to non-local quantum computation. So far, the constraints coming from non-local computation have been complementary to the ones derived from programmable processors. Perhaps one of these techniques, or some synthesis of the two, will allow further progress in the understanding of the limits of computation in the presence of gravity.
Before making a few comments on the connections between this work and others, we summarize the basic conceptual tension underlying our construction. A universal computer can follow instructions and, given an unbounded number of steps, perform any computation. Taking an outside view, and assuming our system is quantum mechanical, any computer evolves under the time evolution of some fixed Hamiltonian. This time evolution can be viewed as the action of a programmable quantum processor. Programmable processors are limited in the computations they can perform, while universal computers are apparently unrestricted, setting up a tension between the two perspectives. The naive resolution is that the programmable processor is only limited when the program states are small, restricting us from specifying most computations, thereby explaining on information theoretic grounds why the universal computer fails. For universal processors with simple descriptions however the tension becomes sharper -the universal computer can be fed a description of the processor, which allows efficient descriptions of programs the computer cannot itself run. Now, the way out of the tension is a computational restriction on the universal computer.
Our construction is similar to the diagonalization technique as used in computer science, in that the universal computer is being fed a description of the dynamics by which it is itself governed. A key new ingredient however is the universal processor bound, which ties our argument to a physical setting. In particular, the length of the description of the processor, which relates to physical parameters (e.g. the time or the black hole entropy), constrains the n P and n A parameters which then enter the processor bound. In this way physical data is brought into the diagonalization argument.
We conclude with a few comments on related topics.

What is special about black holes?
We discussed here constraints on computers both inside of black holes and in ordinary AdS. We can briefly comment on what is unique about the black hole case. First, the black hole gave a natural covariant definition of a bulk subregion, and a surface on which to apply the CEB. These features are convenient but not strictly necessary: we could define a bulk subregion in some other way, and then apply the CEB again or place a constraint on the size of the computer by hand. More fundamentally, the black hole gives us a way to specify the setting in a simple way, in terms of just the parameter S bh . This parameter then appears in the specification of the forbidden computation, and controls the complexity of the forbidden computation. In contrast, away from the black hole setting, we had to specify a parameter t setting the time for which we allow our system to evolve. The complexity of the forbidden computation is then set in terms of t.

Quantum extended Church Turing thesis
The quantum extended Church Turing thesis states that any physically realizable computer can be efficiently simulated by a quantum Turing machine. Recently, Susskind [7] proposed an interesting tension between this thesis and a thought experiment in the setting of a two sided black hole. He argues that an observer who jumps into the black hole can compute certain functions efficiently that an observer who instead holds the two CFT's cannot. We find this thought experiment suggestive that a notion of an observer is needed in the statement of the extended Church Turing thesis, and that the statement should only apply when two observers may separate for a time, then meet again and compare the efficiency of their computations. While broadly this work and ours are both interested in the computational abilities of computers in the presence of gravity, we should be careful to distinguish between the two settings. Note that we never compare observers outside and inside the black hole and ask about their relative ability to perform some computation. Instead, we ask only about the computational abilities of the observer inside the hole. The boundary perspective is exploited to relate bulk computation to quantum processors.

Complexity of the AdS/CFT dictionary
Recently, there have been discussions around the complexity of the operations needed to recover bulk data from the boundary [3,6]. We emphasize that our argument does not rely on this map being low or high complexity. Instead, we only rely on this map being state independent within some appropriate, and small, subspace of states.

Bulk computation as non-local computation
Our results are interesting in light of a conjecture made in the context of non-local computation and its relationship to AdS/CFT. Non-local computation implements unitaries on two separated subsystems using an entangled resource state and a single round of communication. In [9], the authors state that at least one of the following must be true: 1. All computations can be performed with linear entanglement.

2. Gravity places constraints on bulk computation.
They also argue that not 1) implies 2). That work conjectured that 1) is false and consequently 2) is true. This work establishes that 2) is true in AdS/CFT, without resolving 1).

Understanding of the black hole interior
In [8], the authors discuss a puzzle in the physics of black holes. The central dogma of black hole physics states that a black hole can be described by a number of degrees of freedom given by its entropy. The description of the black hole using S bh degrees of freedom is referred to as the fundamental description. Additionally, we can describe the black hole within effective field theory, within some background set by the appropriate solution to Einstein's equations. In the effective description, and at late times, the black hole interior volume can be very large. Thus the number of low energy degrees of freedom in the effective description will exceed S bh . A puzzle then is to understand how the effective description, with a large number of apparent degrees of freedom, is embedded into the fundamental description with fewer degrees of freedom. Necessarily many of the states in the effective description will not be realizable states of the black hole, since most states cannot map to a state in the fundamental description.
To understand this, the authors of [8] argue that it is the low complexity states in the black hole interior that are mapped to the fundamental description. They show that even while the effective black hole interior is exponentially larger than the fundamental description, a subspace in the effective description large enough to contain all the low complexity states can be mapped to states in the fundamental description, and this map can approximately preserve orthogonality.
Our results support this perspective, in that they suggest high complexity unitaries are restricted in the bulk. In particular, the variation on our thought experiment most relevant to this discussion involves taking the computer to consist of n P = o(S ′ bh ) qubits and considering the diagonal unitary game in the larger black hole, with entropy S ′ bh . Then, the computer state is a state in the effective description of the black hole. Our argument then shows there are high complexity states the computer cannot evolve dynamically into, in line with the proposal of [8]. Said differently, our results support the idea that boundary time evolution, which must take fundamental states into fundamental states, also preserves a low-complexity set of states in the bulk.

An end to time
Among the strangest properties of black holes is that time in the interior comes to an apparent end at the singularity, at least within the classical description of the black hole. Understanding how this can arise from a quantum mechanical theory, in which time does not end, seems to be a basic challenge in understanding how gravitational physics can emerge from quantum mechanics. Our results support the idea that the finite bulk time corresponds, in some sense to be made precise, to limits on bulk complexity enforced by the boundary theory: the bulk geometrizes the limits on complexity enforced by the boundary by having an end to time at the singularity.