Whac-a-constraint with anomaly-free dark matter models

Theories where a fermionic dark matter candidate interacts with the Standard Model through a vector mediator are often studied using minimal models, which are not necessarily anomaly-free. In fact, minimal anomaly-free simplified models are usually strongly constrained by either direct detection experiments or collider searches for dilepton resonances. In this paper, we study the phenomenology of models with a fermionic dark matter candidate that couples axially to a leptophobic vector mediator. Canceling anomalies in these models requires considerably enlarging their field content. For an example minimal scenario, we show that the additional fields lead to a potentially much richer phenomenology than the one predicted by the original simplified model. In particular, collider searches for pair-produced neutralinos and charginos can be more sensitive than traditional monojet searches in thermally motivated parts of the parameter space where the mediator is outside the reach of current searches.


Introduction
Dark matter phenomenology has often been studied using so-called Simplified Models [1][2][3][4][5][6][7][8] which aim at considering only the minimal Lagrangian and particle content relevant for relic density calculations, direct and indirect detection, and collider searches. Such models can be understood as a bottom-up approach to dark matter model building, to be contrasted with the top-down approach focusing on plausible dark matter scenarios in UV-complete theories such as Supersymmetry. However, the requirement for minimality and the focus on phenomenology have also resulted in many of these models failing basic theoretical self-consistency requirements. Models with a new heavy gauge boson, notably, often do not satisfy perturbative unitarity, as the new boson is usually assigned a Stueckelberg mass. Furthermore, models with new fermions that are charged under the Standard Model or under a new gauge group are often plagued by anomalies. Fixing these problems often requires the introduction of further interactions and fields that will lead to a much richer phenomenology than predicted by the original simplified model [9][10][11][12][13][14][15][16][17][18][19].
Perfect illustrations of this consistency issue are the so-called gauge portal models, involving a fermionic dark matter candidate that is charged under a new dark U(1) gauge group associated with a massive gauge boson Z′. While generating a mass for the Z′ only requires introducing a new Higgs field, cancelling the anomalies associated with the dark fermion can prove particularly cumbersome. Since these models typically introduce family-independent couplings for the leptons and quarks, they often correspond to U(1)_B and U(1)_L models of gauged baryon and lepton numbers, which were originally discussed in [20][21][22], with the dark matter phenomenology studied in [23][24][25]. While simple anomaly-free gauge portal solutions based on this family of models have been discussed in [16], they involve either a large vectorlike coupling between the dark matter and the Z′, or a sizable coupling between the Z′ and the Standard Model (SM) leptons. Both features are somewhat undesirable from a dark matter perspective, as they respectively imply large direct detection cross sections or a clear dilepton resonance signal at the LHC for large parts of the thermally motivated parameter space. Given the current sensitivities for these types of signals [26,27], it is well-motivated to also investigate anomaly-free models for which the Z′ is leptophobic and couples primarily axially to the DM. As outlined in [16,20], however, anomaly cancellation for these models is non-trivial and requires dramatically increasing the model's particle content.
In this paper we derive a minimal field content and charge assignment which cancels all gauge anomalies for a model with a fermionic dark matter candidate and a vector mediator arising from a new broken U(1)_Y′ gauge group. As outlined above, in order to avoid the current direct detection and collider constraints, we require that the mediator is strictly leptophobic, with a purely axial coupling to at least one fermionic interaction eigenstate. We then use this scenario as a benchmark example to illustrate that, in the region of parameter space outside the reach of the dijet searches, anomaly-free gauge portal models will often be more strongly constrained by LHC searches for new heavy states with electroweak charges than by monojet searches. Requiring theoretical consistency for simplified models thus uncovers a particularly intriguing interplay between dark matter searches and the wider BSM programme of the LHC.

Model details
We start from a minimal gauge-portal model, with a new Dirac fermion χ that is charged under a new U(1)_Y′ gauge group associated with a massive gauge boson Z′. The corresponding new physics Lagrangian takes the standard simplified-model form,
\[
\mathcal{L}_{\rm NP} = -\frac{1}{4} B'_{\mu\nu} B'^{\mu\nu}
- \frac{\epsilon}{2} B_{\mu\nu} B'^{\mu\nu}
+ \frac{1}{2} m_{Z'}^2 Z'_\mu Z'^\mu
+ \bar{\chi}\left(i\gamma^\mu \partial_\mu - m_\chi\right)\chi
+ Z'_\mu\, \bar{\chi}\gamma^\mu\left(g_V + g_A\gamma^5\right)\chi
+ Z'_\mu \sum_q g_q\, \bar{q}\gamma^\mu q \, ,
\]
where B_µν and B′_µν are the field strengths for the SM hypercharge and the new U(1)_Y′ group respectively. In the rest of our study, we set the kinetic mixing ε to zero, briefly discussing this choice in Section 2.2. The relative values of the vector and axial couplings g_V,A depend on the dark hypercharges of the left- and right-handed components of χ, that is, χ_L and χ_R. This basic scenario has been widely used to derive constraints from relic density, direct detection, and collider searches for gauge portal models. When the coupling between χ and Z′ is axial, however, which is the configuration with the loosest direct detection constraints, this model has non-zero U(1)_Y′ anomalies. In what follows, we present an example model with an extended dark sector that allows us to cancel these anomalies while keeping the DM-Z′ coupling mostly axial and the Z′ leptophobic. This corresponds to a specific implementation of the general gauged baryon- and lepton-number motivated models derived in [9,22]. We stress that, although our chosen model has a particularly large number of fields, it is among the most minimal models that can be built with our requirements. A more detailed discussion of the construction of these models is presented by the authors of [16]. We use SARAH [28][29][30][31][32][33] throughout our study in order to implement the model and derive the relevant quantities.
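For reference, in terms of the dark hypercharges of the two chiral components of χ (written here with a schematic normalisation; the overall factor depends on the charge conventions), the vector and axial couplings read
\[
g_V = \frac{g_{Y'}}{2}\left(Y'_{\chi_R} + Y'_{\chi_L}\right), \qquad
g_A = \frac{g_{Y'}}{2}\left(Y'_{\chi_R} - Y'_{\chi_L}\right),
\]
so that a purely axial coupling corresponds to Y′_χR = -Y′_χL, while a vectorlike assignment Y′_χR = Y′_χL gives g_A = 0.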
In order to have an anomaly-free gauge portal model that satisfies the existing experimental constraints, we need to introduce additional fields which transform non-trivially under U(1)_Y′ × SU(2)_L [16,20]. A minimal solution is to add 6 new Weyl fermions: the states χ_L/R and θ_L/R, together with the SU(2)_L doublets φ_L/R. Here we have already made θ and φ vectorlike under the Standard Model gauge group to avoid having to consider anomaly equations for products of gauge groups not involving U(1)_Y′. The relevant anomaly conditions, assuming Y′_l,e = 0 (which requires Y′_q = Y′_u,d to keep the Standard Model Yukawa terms gauge invariant), then constrain the remaining charges. In order to avoid having charged dark matter candidates we require the hypercharges Y_φ = m + 1/2 and Y_θ = n with m, n ∈ ℤ, so that the lightest charged state can decay into the lightest neutral state by emitting a Standard Model particle. Restricting ourselves to configurations with |Q| ∈ {0, 1} for all new fields, we find a solution; for simplicity we take Y′_φL = 1 and (Y_θ, Y_φ) = (1, 1/2) for the rest of the paper². Due to the gauge invariance of the SM lepton Yukawa interaction term mentioned above, the Standard Model Higgs doublet H must be a singlet under U(1)_Y′. Therefore, in order to break U(1)_Y′ and give masses to all of the new fields, we have to add a new complex scalar S which is a Standard Model singlet. This scalar gets a vev v_S, which generates a mass for the Z′ gauge boson of U(1)_Y′.

² While writing up this paper a similar study appeared in [19], which directly connected the model to U(1)_B and hence chose to normalise such that Y′_q = 1/3; the models are otherwise identical. We take a different approach to studying the phenomenology in the remainder of this paper.
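For reference, the anomaly-cancellation conditions referred to above take the standard schematic form (sums running over all Weyl fermions, with colour and weak-isospin multiplicities implicit; the explicit versions for this field content can be found in [16,20]):
\[
\begin{aligned}
SU(3)_c^2\,U(1)_{Y'} &: \ \textstyle\sum_{\rm quarks} Y'_i = 0 , \\
SU(2)_L^2\,U(1)_{Y'} &: \ \textstyle\sum_{\rm doublets} Y'_i = 0 , \\
U(1)_Y^2\,U(1)_{Y'} &: \ \textstyle\sum_i Y_i^2\, Y'_i = 0 , \\
U(1)_Y\,U(1)_{Y'}^2 &: \ \textstyle\sum_i Y_i\, Y'^{\,2}_i = 0 , \\
U(1)_{Y'}^3 &: \ \textstyle\sum_i Y'^{\,3}_i = 0 , \\
{\rm grav}^2\,U(1)_{Y'} &: \ \textstyle\sum_i Y'_i = 0 .
\end{aligned}
\]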
The allowed mass and interaction terms in the Lagrangian then comprise Yukawa couplings of S to the dark fermions (y_χ, y_θ, y_φ), Majorana-type couplings y_χL and y_χR, and couplings y_φL,χR and y_χL,φR involving the SM Higgs doublet, where H̃ = iσ₂H*.

From interactions to mass eigenstates
In the broken phase, after expanding the SU(2) doublets as φ_L/R = (φ_L/R,1, φ_L/R,2)ᵀ, we obtain a Majorana mass matrix for the neutral states. Turning off the Majorana terms y_χL and y_χR allows us to describe the propagating degrees of freedom as two Dirac fermions χ_1/2, where χ_1 generically denotes the lightest fermion, i.e. the DM candidate, which we always take to be mostly χ in order to avoid direct detection constraints. Setting these terms to zero can be motivated by the fact that doing so introduces a new global U(1) symmetry in the dark sector. Keeping them turned on but small splits the two Dirac fermions into pseudo-Dirac pairs [35,36].
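The pseudo-Dirac limit can be read off from the textbook two-state example (illustrative notation): for a single pair with Dirac mass m_D and small Majorana entries m_L,R ≪ m_D, the mass matrix and eigenvalues are
\[
\mathcal{M} = \begin{pmatrix} m_L & m_D \\ m_D & m_R \end{pmatrix}
\quad\Longrightarrow\quad
m_{1,2} \simeq m_D \mp \frac{m_L + m_R}{2} ,
\]
i.e. the Dirac fermion splits into two Majorana states whose mass difference of order m_L + m_R vanishes when the Majorana terms are switched off, restoring the dark global U(1).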
The terms mixing the Standard Model singlets with the neutral components of the SU(2) doublets, y_φL,χR and y_χL,φR, also introduce couplings of χ_1 to the Z, which again rule the model out through direct detection unless these terms are very small: y_φL,χR, y_χL,φR v_H ≲ 10⁻² y_χ v_S for y_φ = 1.1 y_χ. We keep track of this effect and note that it is a somewhat unfortunate consequence of this particular charge assignment, as it requires some fine-tuning to achieve the stated goal of avoiding direct detection constraints. Other anomaly-cancelling solutions, such as models involving an SU(2)_L triplet with Y = 0, could avoid this hurdle, as the neutral fermions would then not couple to the Z. We present an analysis of the direct detection sensitivity to non-zero y_φL,χR, y_χL,φR in Section 3.1 to give an estimate of the degree of fine-tuning involved.
Although the y_φL,χR and y_χL,φR couplings should be small in order to avoid direct detection constraints, they should also not be identically zero, in order to avoid the neutral component of the doublet φ (which always couples to the Z) being stable and thus a dark matter candidate. In what follows, we therefore always choose small non-zero values for these couplings in our benchmark models. Note that the existence of these couplings allows χ_1 and χ_2 to coannihilate if they are close enough in mass (i.e. y_χ ∼ y_φ), by ensuring that the two particles remain in equilibrium in the early Universe. We emphasize that studies that involve relic density constraints need to take this possibility into account.
The charged fermions also mix, and we denote the two mass eigenstates of their mass matrix by χ±_1/2. If the off-diagonal elements are small and y_θ > y_φ, χ±_1 is mostly the charged component of the doublet φ, m_χ±1 ∼ m_χ2, and χ±_1 can also become a coannihilation partner of the dark matter. We will study this maximal coannihilation scenario in Section 3.1. For completeness we also include the mixing in the scalar sector in Appendix A. This mixing has no impact on the DM phenomenology as long as the contribution of the scalar portal interactions to the DM relic density remains negligible. We will therefore assume λ_H,S = 0, such that the mixing vanishes completely and H is completely Standard Model-like, with the caveat that our results only apply as long as the approximation that any such effects are negligible holds.
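As an illustration of how the mass eigenstates and mixing matrices are obtained numerically (a minimal sketch with purely illustrative parameter values and a schematic matrix structure, not the code used for our scans), the charged-fermion matrix can be diagonalised by a singular value decomposition:

import numpy as np

# Illustrative inputs (GeV); the matrix structure below is schematic.
y_theta, y_phi, y_mix = 1.5, 1.0, 0.01
v_S, v_H = 1000.0, 246.0

# Charged Dirac mass matrix in the (theta, phi^+-) interaction basis,
# with small Higgs-induced off-diagonal entries (assumed form).
M = np.array([[y_theta * v_S / np.sqrt(2.0), y_mix * v_H / np.sqrt(2.0)],
              [y_mix * v_H / np.sqrt(2.0),   y_phi * v_S / np.sqrt(2.0)]])

# Bi-unitary diagonalisation of a Dirac mass matrix: M = U diag(m) Vh.
U, masses, Vh = np.linalg.svd(M)
print("charged fermion masses [GeV]:", np.sort(masses))
print("one of the two unitary rotations (convention dependent):\n", U)

The neutral Majorana matrix is treated analogously (via a Takagi factorisation), and the resulting masses and mixings are what enter the parameter cards described in Section 3.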
A potential worry for our model is that, since U(1)_Y′ ∼ U(1)_B at low energies, breaking U(1)_Y′ might induce proton decay or neutron-antineutron oscillations. The corresponding operators are gauge-invariant products of SM fermion fields and powers of S, with multiplicities p, q, r ∈ ℕ. Since Y′_q = 2/9 and Y′_S = 2, the minimal gauge-invariant realization has |∆B| = 3 and mass dimension 16. As such we are not sensitive to proton decay bounds or to searches for neutron-antineutron oscillations [37], and any other effect is highly suppressed at Q ∼ Λ_QCD by 12 powers of the scale at which U(1)_Y′ is broken. The general insensitivity of searches for baryon number violation to electroweak-scale breaking of U(1)_B, when it is gauged, has been pointed out previously in e.g. [20,38,39].
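The suppression quoted above is just dimensional analysis: a dimension-16 operator is generated with twelve inverse powers of the U(1)_Y′-breaking scale (denoted here generically by Λ_Y′ ∼ v_S), so low-energy amplitudes scale as
\[
\mathcal{L} \supset \frac{\mathcal{O}^{(16)}}{\Lambda_{Y'}^{12}}
\quad\Longrightarrow\quad
\mathcal{A}(Q \sim \Lambda_{\rm QCD}) \sim \left(\frac{\Lambda_{\rm QCD}}{\Lambda_{Y'}}\right)^{12} ,
\]
which is completely negligible for any breaking scale at or above the electroweak scale.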

The effect of kinetic mixing
In general a kinetic mixing term between U(1)_Y and U(1)_Y′ is allowed by all symmetries,
\[
\mathcal{L} \supset -\frac{\epsilon}{2}\, B_{\mu\nu} B'^{\mu\nu} .
\]
It is reasonable to assume ε = 0 at some high scale Λ where the gauge groups might unify, but since the quarks are charged under both U(1)_Y and U(1)_Y′ and do not have orthogonal charges, this term will nevertheless run to a non-zero value at the weak scale. The effect on electroweak precision observables for models similar to ours has been studied in [18,19,40], which found the resulting constraints to be rather weak. However, this mixing will also introduce couplings of the Z′ to leptons since, after rotating the kinetic mixing away, the covariant derivative takes the general form
\[
D_\mu \supset -\, i \sum_{x,y} Y_x\, g_{xy}\, B^y_\mu ,
\]
where the sum over x, y = Y, Y′ runs over the two U(1)s and the off-diagonal entries g_YY′, g_Y′Y are generated by the running.
A non-zero kinetic mixing will therefore not affect the coupling structure of the dark matter to the Z′, but it could make our model dramatically sensitive to LHC searches for dilepton resonances.
In order to show that the effect of the kinetic mixing remains small in most of our parameter space, we have estimated the running of g_Y′ and g_YY′ using g_Y′(100 GeV) = 0.2, 1 and g_Y(m_Z) = 0.358, with ε = 0 at Λ = 10 TeV. The one-loop renormalisation group equations of these parameters are derived using SARAH. The evolution of the mixing couplings as a function of the energy scale µ is shown in Figure 1.
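A minimal numerical sketch of this estimate (leading-log approximation with an assumed one-loop coefficient; the exact beta functions used here were generated with SARAH) is:

import numpy as np

# Leading-log estimate of the induced mixed coupling g_YY' at the weak scale,
# assuming g_YY'(Lambda) = 0. The one-loop coefficient is proportional to
# Tr(Y Y') over the fermions charged under both U(1)s; the value below is
# purely illustrative and NOT the exact coefficient of this model.
b_mix = 41.0 / 6.0
g_Y, g_Yp = 0.358, 0.2          # hypercharge and U(1)_Y' gauge couplings
Lam, mu = 10_000.0, 100.0       # assumed unification scale and low scale [GeV]

g_mix = b_mix / (16.0 * np.pi**2) * g_Y * g_Yp * np.log(Lam / mu)
print(f"|g_YY'({mu:.0f} GeV)| ~ {g_mix:.1e}   (leading log, illustrative)")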
Calculating dilepton constraints requires the branching ratio to leptons to be evaluated correctly, taking into account Z′ → χχ decays, after which the model-independent cross-section limits for narrow resonances in [42] can be recast using the production cross section and the branching fraction into e⁺e⁻ and µ⁺µ⁻. The sensitivity depends on the value of g_Y′, so ultimately a combination of g_Y′ and Λ is constrained, as demonstrated in Figure 1. The constraint weakens for larger values of m_Z′. According to [19], dilepton constraints become relevant again for thermally motivated values of g_Y′ when m_Z′ ∼ Λ/100, but this conclusion only applies when g_Y′ is fixed by the relic density assuming no coannihilation. In fact, in the coannihilation region, relic density requirements typically point to smaller values of g_Y′, which lead to a slower running of g_YY′. In this region dilepton constraints will therefore be negligible even for smaller values of m_Z′/Λ. Since it is in general always possible to choose Λ such that dilepton constraints are negligible, we will not consider them when determining the benchmark points, which we require to be unconstrained by dijet searches, in Section 3.2; a full study of the parameter space should of course specify the assumptions made on Λ. An interesting direction we leave for future work would be to derive a model where Tr(Y Y′) = 0, so that the kinetic mixing would not run above the scale where all of the new fermions are dynamical, allowing the gauge unification scale to be pushed arbitrarily high. In the following section, we derive the constraints associated with the model described at the beginning of this section. We notably derive the relic density and direct detection constraints for well-motivated choices of parameters, and derive and discuss the impact of the current 13 TeV LHC searches for selected benchmark points.

Phenomenology
This section explores the constraints on our example model from relic density requirements, direct detection, and LHC searches. In order to evaluate these constraints we have implemented our model in SARAH and exported it to the UFO format [43]. The mixing parameters as well as the masses of the different eigenstates are derived from the Yukawa couplings and the vevs using a separate Python code, and directly input into the parameter card of the model. We use MG5_aMC@NLO [44] to numerically evaluate the widths of χ_2 and χ±_1/2 [45], which typically decay into a three-body final state in the coannihilation region. The widths of Z′ and h_2 are evaluated analytically at tree level, assuming vanishing mixing in the neutral and charged scalar sectors, using the general forms given in [6]. Finally, we evaluate the 13 TeV LHC constraints using CheckMATE 2 [46][47][48][49][50][51][52][53][54][55][56][57][58].
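As an example of the analytic width evaluation mentioned above, a sketch using the standard tree-level expression for a vector resonance decaying into a fermion pair, with placeholder couplings and masses (not our benchmark values), is:

import numpy as np

def gamma_ff(m_V, gV, gA, Nc=1, mf=0.0):
    """Standard tree-level width V -> f fbar for the coupling V_mu fbar gamma^mu (gV + gA gamma5) f."""
    x = (mf / m_V) ** 2
    if 4.0 * x >= 1.0:
        return 0.0
    beta = np.sqrt(1.0 - 4.0 * x)
    return Nc * m_V / (12.0 * np.pi) * beta * (gV**2 * (1.0 + 2.0 * x) + gA**2 * (1.0 - 4.0 * x))

# Placeholder parameters: leptophobic Z' with vector couplings to quarks and a
# purely axial coupling to the DM (all numbers illustrative).
m_Zp, m_chi = 1000.0, 150.0
g_q, g_chi = 0.4 * 2.0 / 9.0, 0.4

width_jets = 6 * gamma_ff(m_Zp, g_q, 0.0, Nc=3)   # 6 quark flavours, quark masses neglected
width_dm   = gamma_ff(m_Zp, 0.0, g_chi, mf=m_chi)
total      = width_jets + width_dm

print(f"Gamma(Z') = {total:.2f} GeV,  BR(jj) = {width_jets/total:.2f},  BR(DM) = {width_dm/total:.2f}")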

Dark Matter Observables
We use MadDM 3.0 [44,[59][60][61] to calculate the relic density and the direct detection cross-section for our model. We begin by investigating the sensitivity to direct detection experiments induced by the mixing of the DM candidate χ_1 with the neutral components of the new SU(2)_L doublet φ, as discussed above. We find the limits on the spin-independent cross-section with nucleons from Xenon1T [26] to always be the most constraining³ and plot the resulting constraints as a function of (y_φL,χR, y_χL,φR)/y_χ for v_S = v_H in Figure 2. In general these couplings have to satisfy y_φL,χR, y_χL,φR v_H ≲ 10⁻² y_χ v_S for the induced coupling to the Z not to generate too large a cross section when y_χ = 1.1 y_φ; however, this constraint relaxes significantly if there is a hierarchy between y_χ and y_φ or between v_H and v_S. For diagonal Yukawa couplings of order one, the resulting upper bounds on the mixing between χ_1 and χ_2 still allow for prompt decays of the heavy dark sector particles in the coannihilation region, as long as the mass difference is not too small.
Figure 2: The impact of direct detection constraints on the mixing between χ_L,R and the neutral components of the doublets φ_L,R, where we have used the shorthand y_Z = y_φL,χR, y_χL,φR in the legend to indicate that these couplings arise from the coupling to the Z. We have set v_S = v_H here.

To check that χ_2 and χ±_1 can be interconverted with χ_1 efficiently enough for coannihilation to occur, we have followed the procedure in [64,65] and find that, for m_χ1 = 150 GeV and m_χ2 = 165 GeV, the thermally averaged Zχ_1 → Zχ_2 interconversion rate is only an order of magnitude larger than the Hubble rate at freeze-out for a singlet-doublet mixing approaching the direct detection limit. An example diagram of the process is shown in Figure 3. Since W/Z χ_1 → Z/W χ±_1 will generically be more efficient, we conservatively take m_χ1 = 150 GeV to be the smallest viable value for coannihilation when selecting parameter points later in the study; we nevertheless display results for m_χ1 below this value in the relic density plots in Figure 4. The reason for this stringent limit compared to e.g. the MSSM is that interconversion can only occur through off-diagonal interactions with the massive weak bosons in the plasma, and these diagrams are all suppressed by one factor of the mixing. At higher m_χ1, freeze-out occurs at higher temperatures, which reduces the impact of the boson masses, and eventually the electroweak phase transition reduces the boson masses themselves, making interconversion more efficient. It is nevertheless interesting that direct detection constraints are stringent enough to have an impact on the viable coannihilation scenarios in the model.
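A rough version of this comparison (a sketch that assumes a value for the mixing-suppressed interconversion cross section, chosen only to illustrate the order of magnitude of the effect) is:

import numpy as np

# Compare the chi_1 Z -> chi_2 Z interconversion rate with the Hubble rate at
# freeze-out, T_f ~ m_chi1 / 25. All inputs are approximate or assumed.
m_chi1, m_Z = 150.0, 91.2          # GeV
T_f = m_chi1 / 25.0
g_star, M_Pl = 86.25, 1.22e19      # effective dofs (approx.), Planck mass [GeV]

H = 1.66 * np.sqrt(g_star) * T_f**2 / M_Pl                        # Hubble rate [GeV]

# Boltzmann-suppressed equilibrium density of Z bosons in the plasma [GeV^3]
g_Z = 3.0
n_Z = g_Z * (m_Z * T_f / (2.0 * np.pi))**1.5 * np.exp(-m_Z / T_f)

sigma_v = 1.0e-12                  # assumed <sigma v> [GeV^-2], suppressed by the small mixing
rate = n_Z * sigma_v               # interconversion rate [GeV]

print(f"Gamma/H at T_f: {rate / H:.0f}")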
We now compute the DM relic density as a function of m_χ1 and m_Z′ for a few representative scenarios, showing the results in Figure 4. On the one hand, we consider a non-coannihilating case, where there is a sizable mass hierarchy between χ_1 and χ_2/χ±_1 and the mixing between the two states does not contribute significantly to the final relic density. This scenario corresponds to that found in typical axial-vector Simplified Models, with the addition of diagrams involving S, drawn in the top row of Figure 3. On the other hand, we consider two coannihilating scenarios where the χ_1-χ_2 and χ_1-χ±_1 relative mass splittings, defined as ∆m ≡ (m_χ2 − m_χ1)/m_χ1 (and analogously for χ±_1), are fixed to 10% and 2%, and the χ_1-χ_2 mixing, albeit small, is sufficient to ensure thermal and chemical equilibrium between the two particles in the early Universe for these values of ∆m (example diagrams of the relevant processes are drawn in the bottom row of Figure 3). We present our results for these three configurations in Figure 4. In the non-coannihilating scenario, the relic density observed by Planck [66] can generically be obtained for perturbative values of all couplings for DM masses around the TeV scale. Setting the DM relic density to its Planck value also uniquely determines the coupling g_Y′ once m_χ1, m_Z′, and m_S are fixed. This interesting feature does not hold, however, when m_S and ∆m are small enough for coannihilation effects to be significant. Besides the Z′ and S funnel regions, where resonant DM annihilation significantly loosens the relic density constraints, we note that the m_χ1 ≪ m_Z′/2 region always overcloses in the non-coannihilating scenario.

Figure 3: Top row: new diagrams for χ_1 χ_1 annihilation involving S, absent in a Simplified Model where the Z′ gets its mass through the Stückelberg mechanism. Bottom row: example diagrams which allow the conversion processes χ_1 ψ → χ_2 ψ and χ_1 ψ → χ±_1 ψ (where ψ is any Standard Model particle) to occur, and which are suppressed by one factor of the singlet-doublet mixing.
Conversely, in the m_χ1 ≫ m_Z′/2 region, the DM relic density decreases when the DM mass increases. This rather counter-intuitive behaviour is due to the fact that this mass is proportional to v_S and y_χ: for a fixed value of m_Z′ and g_Y′, and hence fixed v_S, increasing the DM mass automatically implies increasing the associated Yukawa coupling, which in turn leads to higher DM annihilation rates through S interactions. We therefore expect perturbativity constraints to set upper limits on the masses of the particles in the dark sector. This is demonstrated in the bottom-right plot of Figure 4, where we show the value of y_χ as a function of m_χ1 for fixed g_Y′, v_S, and m_Z′ as a red line. In the coannihilating models with ∆m = 2, 10%, the shapes of the funnel and m_χ1 ≫ m_Z′/2 regions are mostly unchanged, although the relic density constraints relax significantly at low ∆m. Part of the m_χ1 ≪ m_Z′/2 region, however, is now allowed, and the relic density constraints in this area now translate into an upper bound on m_χ1 that does not depend on m_Z′. This behaviour is due to the fact that the new coannihilation processes that lower the DM relic density in this region are annihilations of dark sector fermions into SM gauge bosons and thus do not involve the Z′. These observations are confirmed by the one-dimensional profile of Ωh² in the bottom-right plot of Figure 4. For the non-coannihilating scenario, the relic density is much larger than its Planck value and overall decreases when the DM mass increases. When coannihilation is significant, on the other hand, the relic density first increases with the DM mass before decreasing again after the Z′ funnel. Note also the appearance of the W and Z funnels for coannihilating scenarios, where SM electroweak processes dominate at low m_χ1.
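The origin of this scaling can be made explicit (up to the normalisation conventions of the charges and vevs): with both masses generated by v_S,
\[
m_{\chi_1} \simeq \frac{y_\chi v_S}{\sqrt{2}}, \qquad
m_{Z'} \simeq Y'_S\, g_{Y'}\, v_S
\quad\Longrightarrow\quad
y_\chi \simeq \sqrt{2}\, Y'_S\, g_{Y'}\, \frac{m_{\chi_1}}{m_{Z'}} ,
\]
so that at fixed m_Z′ and g_Y′ the Yukawa coupling, and with it the S-mediated annihilation cross section, grows linearly with the DM mass until perturbativity is lost.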
Finally, note that indirect detection experiments will also place competitive constraints on the model, especially in the high-m_Z′ region (see [11] for a comparative study of a model similar to ours in the limit where all additional fermions are heavy enough to decouple). Here we focus instead on the impact of not decoupling the additional fermions in the model on the direct detection, relic density, and LHC constraints, and leave a detailed study of indirect detection constraints to future work.

Figure 4: The relic density after freeze-out for g_Y′ = 0.4 as a function of m_Z′ and m_χ1 for m_S = 1500 GeV and y_θ = 1.5 y_χ. Results are presented as a function of m_χ1 and m_Z′ for y_φ ≫ y_χ (no coannihilation, top left), ∆m = 10% (top right), and ∆m = 2% (bottom left), and as a function of m_χ1 for v_S = 1000 GeV and g_Y′ = 0.4, 1.0 for these scenarios (bottom right). In the 2D plots the red dashed line shows the contour with the correct relic density for g_Y′ = 0.4, whereas the black dashed line shows the same contour for g_Y′ = 1.0 for comparison.

LHC Searches
For models that evade the direct detection bounds on the χ_1-χ_2 mixing discussed in Section 3.1, the leading experimental constraints are set by colliders, and notably by the 13 TeV LHC searches. Dijet resonance searches are particularly sensitive in the off-shell region m_χ1 > m_Z′/2, where the Z′ branching ratio to jets is always 100% [17][18][19], but only down to m_Z′ ∼ 450 GeV, the minimal Z′ mass probed by jj searches at the LHC; lower values are probed by jj + ISR searches at the LHC [67][68][69][70]. We assess these constraints using the combination of all of these searches derived in [18], with the quark coupling g^q_Y′ = (2/9) g_Y′ in our model. Another particularly sensitive probe is the LEP search for pair-produced charged fermions, e⁺e⁻ → χ⁺_1 χ⁻_1, decaying to a lighter stable neutral fermion χ_1 by emitting a W. In the MSSM, LEP results rule out charginos lighter than 92.4 GeV in the case where the charged and neutral fermions are pure doublet Higgsino components [71]. This limit cannot be directly translated to our model, but it suggests that the Z and W funnels that can occur in the coannihilating case are firmly ruled out. A third source of collider constraints are monojet searches for pp → χ_1 χ_1 j production, where the additional jet comes from ISR. These searches are competitive at m_Z′ < 450 GeV and m_χ2/χ±1 > 100 GeV, where the dijet constraints are weaker and there are no LEP constraints. When the additional fermions in the model are close in mass to χ_1, there are additional pp → χ_2 χ_2 j, . . . processes that also contribute significantly to the monojet cross section (since the additional decay products of the heavier fermions in the dark sector might be too soft to be picked up by the detector, or invisible in the case of χ_2 → ννχ_1) and have to be taken into account.
An additional sensitive channel that has so far been neglected for gauge-portal models is direct searches for pair production of χ_2 and χ±_1 at the LHC. In the coannihilation region these particles will often decay promptly to χ_1 by emitting a Z or a W, and will therefore have SUSY-like signatures with leptons and jets plus missing transverse energy. The usefulness of searches for these signatures for U(1) extensions of the MSSM was recently studied in detail in [72]. In general the pair-production cross sections for χ_2 and χ±_1 do not depend on g_Y′ when this coupling is small, since SM gauge interactions dominate; this generates a signal that can compete in sensitivity with monojet searches even when the couplings to the Z′ are very weak. Note that constraints from these searches can be expected to be most significant when these additional particles are not too heavy. This feature suggests a complementarity between these searches and the relic density constraints, since the latter can often be alleviated by bringing the masses of the dark fermions down to the m_χ2/χ±1 ∼ m_χ1 coannihilation region. To demonstrate the importance of these electroweakino searches and compare them to the monojet searches, we use CheckMATE 2.0.26 to scan all of the implemented 13 TeV searches for the following processes:
• pp → χ_m χ_n j
• pp → χ_m χ_n
where χ_m/n runs over all neutral and charged fermions in the dark sector, and the jet is required to have p_T > 200 GeV in order to populate the phase space where monojet searches are sensitive.
Due to the computational cost of performing a sensitivity analysis over the full parameter space of our model, we leave a parameter scan taking all constraints into account to future work and focus on a few benchmark points, for which we generate 5 × 10⁵ events per process with minimal phase space cuts. Since many processes often contribute to the same signal region, it is necessary to include them all and add up their contributions in order to derive the strongest possible constraints. Depending on m_χ1 and ∆m, the visible decay products can be either hard (allowing the W/Z masses to be reconstructed) or soft, and some three-body decays can be phase-space suppressed and soft compared to others. If the mass splitting is large but χ_2 and χ±_1 still have non-negligible production cross sections, new decay channels can open up, such as χ_2 → hχ_1, making the model sensitive to searches for unexpected signatures like opposite-sign (OS) tau pairs and missing energy. The lifetimes of χ_2 and χ±_1 for the points we consider are shorter than those of b mesons, so that their decays can be considered prompt. In general, smaller mass splittings will further increase the lifetimes of the heavier fermions. In this nearly degenerate region, where searches for long-lived particles can be expected to be highly sensitive to χ±_1, higher-order mass corrections need to be taken into account properly. We leave the analysis of this region to future work, but note that this again suggests that the model is sensitive to the wider search programme at the LHC for well-motivated parameter choices, and that the full mixing scenario needs to be taken into consideration to evaluate these constraints. Here we choose four representative benchmark points that all reproduce the correct relic density to within 20%; they are summarised in Table 1. These four points all escape the current constraints from dijet resonance searches and correspond to different choices of g_Y′ and of the mass differences between the DM and the other dark fermions. In particular, we consider one point where this mass splitting is large enough to allow χ_2 and χ±_1 to decay to the DM and an on-shell W or Z boson. We also study points in the coannihilation region, with mass splittings of 10 and 15 GeV, in order to evaluate the relative impact of the monojet and electroweakino searches.
We present the results using the conservative r value for the most sensitive signal region as defined by CheckMATE, r ≡ (S − 1.96 ∆S)/S95, where S is the predicted number of signal events, ∆S is the uncertainty on the signal prediction, and S95 is the reported 95% C.L. limit on the number of BSM events in the signal region. Here, we take ∆S to be solely the statistical uncertainty due to the finite number of simulated events and do not consider any theoretical systematic uncertainties or K-factors. Since our event generation is by necessity inclusive, the statistical uncertainties are not negligible in all sensitive signal regions, and our results are therefore conservative. A point is considered excluded at 95% C.L. if r > 1. Note that since we only consider models that are allowed by dijet resonance bounds, searches giving r > 1 are the leading probes for the corresponding benchmark point.
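In practice, the numbers entering r for a given signal region are obtained as in the following sketch (all inputs are placeholders, and the 1.96 coefficient in the conservative definition is taken from CheckMATE's conventions):

import numpy as np

# Placeholder inputs for a single signal region (illustrative values only).
sigma_fb    = 500.0      # production cross section [fb]
lumi_fb     = 36.1       # integrated luminosity [fb^-1]
n_generated = 500_000    # simulated events for this process
n_passed    = 2_500      # events passing the signal-region cuts
S95_obs     = 30.0       # observed 95% C.L. limit on signal events in this region

eff = n_passed / n_generated                 # selection efficiency from MC
S   = sigma_fb * lumi_fb * eff               # predicted signal events
dS  = S / np.sqrt(n_passed)                  # MC-statistics uncertainty only, as in the text

r = (S - 1.96 * dS) / S95_obs
print(f"S = {S:.1f}, dS = {dS:.1f}, r = {r:.2f} -> {'excluded' if r > 1 else 'allowed'}")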
The results for the four selected parameter points are summarised in Tables 2-5. Points [1] and [3], which achieve the correct relic density through coannihilation processes, are out of the reach of the monojet searches implemented in CheckMATE but are excluded by searches for opposite-sign (OS) soft lepton pairs and MET [75,77,78]. Point [2], which does not feature coannihilation, is allowed by the LHC searches studied here but is nevertheless more sensitive to searches for multiple leptons and MET [80,81] than to monojet searches.

Table 1: Summary of benchmark points for the scan of LHC searches. In the branching ratios, q denotes light quarks and l an electron or muon. The mixing in the neutral fermion sector is set to be small enough to avoid direct detection constraints, which in general means it is small enough to be phenomenologically negligible for all other observables except the lifetimes of the heavier fermions. As an example of these lifetimes, the smallest splitting of 10 GeV between χ_2 and χ_1 with ∆m = 5% and a mixing within direct detection bounds, as for point [3], gives a χ_2 lifetime of the same order of magnitude as that of b mesons (∼ 1.5 ps), and is therefore approaching the limit where we can ignore searches for long-lived particles, especially for smaller masses where the boosts can be significant. For point [4] we write out the decay modes contributing to the final branching ratio for an experimental signature, to demonstrate that larger mass hierarchies in the dark sector mean these can no longer be described by (phase-space suppressed) W/Z branching ratios but can include cascade decays into Z and h with modified kinematics, introducing new signatures. †: there is also an invisible Br(Z′ → χ_1 χ_1) contribution which is experimentally indistinguishable. ††: the total branching ratios are slightly rescaled due to rare χ±_1 → χ_1 tb, χ_1 hW, χ_1 ZW, χ_1 Z′W three-body decays.

Table 2: Constraints on benchmark point [1] from the most sensitive 13 TeV searches implemented in CheckMATE at the LHC.
Analysis | Constraint on [1]
ATLAS 36.1 fb⁻¹ γ + MET [73] | r = 0.021, SRI1
ATLAS 36.1 fb⁻¹ OS τ pair + MET [74] | r = 0.058, SR2 highmass
ATLAS 36.1 fb⁻¹ soft OS lepton pair + MET [75] | r = 1.2, SRI-MLL-30
ATLAS 36.1 fb⁻¹ j + MET [76] | r = 0.12, EM2
CMS 12.9 fb⁻¹ soft OS lepton pair + MET [77] | r = 3.3, stop low MET low p_T,l1
CMS 35.9 fb⁻¹ soft OS lepton pair + MET [78] | r = 1.1, stop low MET low p_T,l1

Analysis | Constraint on [2]
ATLAS 36.1 fb⁻¹ j + MET [76] | r = 0.046, EM2
CMS 35.9 fb⁻¹ 2,3 leptons + MET [81] | r = 0.14, SR A25

Table 3: Summary of constraints on benchmark point [2] from the most sensitive 13 TeV searches implemented in CheckMATE at the LHC. The EM2 region is the same as the one shown in Table 2. SR 3-body W-SF is originally designed for stops with ∆m(t̃, χ̃) ∼ m_W and hence imposes a b-quark veto and cuts on a "super-Razor" variable M^R_∆ that reaches an endpoint near the stop-neutralino mass splitting. 3LI simply imposes a moderate missing E_T cut and a tight requirement on the p_T of the third lepton. Finally, SR A25 imposes a harder missing E_T cut as well as a transverse mass cut, and tags Z bosons.
Only point [4], with its large mass splitting between the DM and the other fermions, is most constrained by the monojet search, and even then it shows almost comparable sensitivity to a search for opposite-sign τs + MET in the high-m_ττ signal region (m_ττ > 110 GeV) [74], which can be traced back to the 20% branching fraction of χ_2 → hχ_1. These results demonstrate that monojet searches often do not provide the only, or the most sensitive, collider constraints on anomaly-free gauge portal models that do not couple to leptons, in the part of parameter space where dijet constraints are not applicable. Viable coannihilation scenarios, for example, are often particularly sensitive to searches for compressed SUSY spectra. These scenarios, with their compressed topologies and relaxed relic density constraints, are expected to gain importance as collider and direct detection constraints become even more stringent in the rest of the parameter space⁵. Even in scenarios without coannihilation it is possible to find interesting new signatures due to the heavier fermions if they are not completely decoupled, which is often necessary to avoid non-perturbative Yukawas after fixing the other masses in the model. Thus, even seemingly simple scenarios such as portal models are in fact associated with a wide variety of collider searches, and a full study of these models requires scanning as many published analyses and signal regions as possible, mirroring the situation in, for example, the pMSSM. Finally, we note that aside from SUSY searches, studying constraints from Standard Model measurements using Contur could also lead to meaningful constraints on this class of models [84].

Conclusions
Simplified dark matter models where a fermion annihilates into SM particles via a vector mediator are popular benchmark scenarios. In order to avoid very strong direct detection and dilepton resonance constraints in these models, it is convenient to make the coupling of the vector boson to the DM candidate axial and the mediator leptophobic. This choice of parameters, however, introduces gauge anomalies when the vector is the gauge boson of a new broken U(1) gauge group. We have demonstrated that the phenomenological consequences of avoiding such anomalies by enlarging the field content of the model can be wide-reaching even for the simplest possible solution. The additional particles allow for much richer scenarios than in simplified models, due to the presence of new weakly charged fermions and the possibility of coannihilation. It follows that constraints on simplified dark matter models which assume leptophobia and an axial DM-Z′ coupling should be understood as applying only to a specific corner of the full parameter space of the minimal realistic model. Notably, regions of the parameter space of the anomaly-free theory can be better constrained using LHC searches for electroweakinos rather than monojet searches. These results further highlight the value of the full width of the BSM programme at the LHC from a dark matter perspective.
Finally, we note that the anomaly-free model studied here is only one example of how to extend gauge portal simplified models to make them theoretically consistent. A wider study of how to cancel anomalies in Z′ models without running into direct detection or LHC dilepton resonance constraints, and of the typical additional constraints that one should expect, would be particularly useful. Additionally, an in-depth study of some of the more exotic signatures associated with our scenario, such as long-lived particles, would be of prime importance to fully understand this class of models.
Although simplified models have proved to be incredibly useful tools for exploring new physics models at colliders and dark matter experiments, it is crucial to keep their limitations in mind and question their minimality. The example of anomaly cancellation illustrates that even enforcing basic consistency requirements on these models can considerably enlarge the associated phenomenology, with a wide palette of new signatures to explore.
Appendix A

The absence of mixing is due to our scalars H and S breaking U(1)_Y × SU(2)_L and U(1)_Y′ separately. We can use the tadpole equations to determine the masses of the pseudoscalars A_1/2 (which are of course eaten by the Z and Z′ in unitary gauge) as above.