All right, welcome everybody. I'm just going to make some quick announcements here at the beginning, then turn things over to Jodi to introduce our speaker today. For the folks who are connected on Zoom, we won't be able to see you here in the room if you raise your hand, so it's fine to just politely interrupt Rob during the talk and say, excuse me, I have a question, and then go ahead and ask. We'll hear you if you do that. If you just put your hand up, we'll ignore you completely, because we can't see the participants list from in the room. I don't think there's anything else, so on that note I'm going to hand things over to Jodi to make introductions.

Hi everyone. It's my pleasure today to introduce someone who is known to all of us, Dr. Rob Calkins. He received his bachelor's degree from the University of Illinois Urbana-Champaign in 2006. After that, he went to graduate school at Northern Illinois University, I always think NICADD, the Northern Illinois Center for Accelerator and Detector Development, and he worked with Dhiman, and I don't even know how to pronounce his last name. Chakraborty, yeah. Thank you. During his time there, he worked on the ATLAS experiment, and I know that some of the faculty got to know him from his work there. He was measuring top quark decay branching fractions, and he also worked on heavy charged Higgs searches. He received his PhD in 2012, and after that he joined my research group here and started to work on dark matter searches, and we have been very lucky to have him. Rob has held a number of leadership positions in our collaboration. Most notably, he was the analysis coordinator for two cycles. He's currently also part of the operations team, he was chair of the backgrounds working group, and he does a lot in terms of documentation for the collaboration; I think you're on some other documentation committees too, and a variety of the work you have to do to make sure your collaboration keeps working. So today Rob is going to talk to us about a project that he and Dan Jardin started working on when Dan was a graduate student here. Then Dan left us, and Rob stayed and finished up the project for us. And it went on the arXiv a couple of weeks ago, right? It's been about a month now, almost exactly, yeah. So if you're interested in this and want to read more, definitely check out the publication that's on the arXiv, which we've submitted to PRL. PRD, it's a bit too long for PRL. Yeah, it's quite sizable, yeah. All right, so on that note, let's welcome Rob.

Thanks. Yeah, so a lot of this material might be familiar to those of you who came here for Dan Jardin's thesis defense, and this is basically the formal collaboration paper, or review, of that. So what I'm going to be presenting today are new results for dark matter from the SuperCDMS experiment through these inelastic scattering channels, and I think this is the first time this result has been talked about publicly. But as Jodi said, the paper itself is a search for low-mass dark matter, and it came out back in March in PRD, so if you want the full gory details you can go to this link. Today I'll give a brief overview of dark matter, to introduce what we're searching for and why the standard paradigms haven't really panned out. I'll describe the inelastic scattering channels; these have been kind of a hot topic in the past couple of years. They're not quite textbook yet, but they're getting there.
I'll describe the SuperCDMS Soudan experiment, in particular the overburden, which is a new feature we need to take into account when talking about these inelastic scattering channels, and in particular the CDMSlite detectors, which are really crucial to actually doing this type of analysis, and the background modeling that goes into it. I'll talk about how all these things go together to give us our analysis and our likelihood, then the statistics and our limit setting. I'll give a brief sales pitch for SuperCDMS SNOLAB, and finally I'll wrap up, or remind you of what I talked about for the past hour or so.

All right, so dark matter in the universe: why should you care? There's a lot of interest in this, it's pretty much a cottage industry these days, but this is a problem that has been around for almost a century. Back in the 1920s and 1930s, Oort, famous for his cloud, the Oort cloud, was measuring nearby star velocities, tracer-star type stuff, and noticed there was a discrepancy in the velocities compared with what he should have seen. So that was your first hint. Dan Hooper has written a great paper uncovering a lot of the history of this, but Zwicky often gets credited as one of the first to notice dark matter, with his observation of the galactic velocities in the Coma cluster, which were basically too large for the optical mass observed. He famously named it dunkle Materie, dark matter, in Swiss German, and that name has stuck. Then this kind of disappeared for a while, until Vera Rubin, in the 70s, made plots like this: she would look at a galaxy, measure its rotation curve using Doppler shifts, and integrate out the light curves to figure out the contribution of the mass. Using basic Newtonian physics, you can calculate how much mass is enclosed and what the velocity should be, and there was a big discrepancy between that and what she observed. So there had to be some extra, unobserved component of the mass to give you this type of velocity curve. Those were the early signs.

Coming to more modern stuff, there's another cottage industry, the CMB experiments, which measure the very small fluctuations in the cosmic microwave background. From this you can make inferences about the composition, basically take the weight of the universe and figure out what is in there, and we understand a shockingly small amount of it. This slice here is your chemistry classes and physics classes, and this is all the rest of the stuff. DESI, which we have collaborators here working on, will be an experiment to study this huge dark energy portion. Dark matter, which is our bread and butter here, is only about a quarter, and ordinary matter is about 5%, so we've really only understood about 5% of this. There's a huge component of our universe that we don't understand, and that's certainly worth looking for.

One of the popular hypotheses for what dark matter actually could be is called the WIMP. WIMP is an acronym for weakly, quotes here, interacting massive particle, and if you work this out you get something that's called either the WIMP miracle or the WIMP coincidence, depending on your outlook. So if you take a hundred-GeV particle, a mass scale like the W, the Z, or the Higgs boson at 126 GeV, in that ballpark,
and you take a weak-scale interaction cross section, this gives you basically the right relic density that we observe today. In the early universe you have everything in a soup, in thermal equilibrium; then as the universe expands, at some point it expands too fast for the interaction rate and you get a freeze-out of the density. Depending on what parameters you pick, you can figure out where this freeze-out happens and correlate it with the measurements we have today.

So this comes with a particle hypothesis. Assuming that it is some massive particle, this textbook photo here is the Bullet Cluster: a superposition of the optical image, the X-rays, and the gravitational lensing. What you see is two clusters passing through each other. The gas, you can see this bullet of gas coming through, interacts with itself, so it slows down, collides, and gets pushed together. But then you look at this part here and this part here, which is the mass from the gravitational lensing, and you see there's some component of the mass that doesn't really interact with itself. This is a cherry-picked example, but there are surveys where people have taken galaxy catalogs, looked for these, and made constraints on the self-interaction of particle dark matter.

All right, so how do we actually go about detecting it? We basically build a fixed-target experiment. You have dark matter sitting in these gravitational wells, like a galaxy or the solar system, moving at astrophysical, non-relativistic velocities, and in your lab you have some target nucleus. That's going to be a chunk of germanium or silicon, or some vat of liquid noble gas, that you've put underground somewhere. In the traditional picture you have an elastic collision: dark matter comes in, scatters off your target nucleus, and you get a small recoil. The energy scale for these interactions is hundreds of eV to keV, depending on the mass of the dark matter and where on the velocity curve it happens to live. So you build your target experiment, and you sit around and wait for this dark matter wind to pass through, hit your detector, scatter, and give you a signal. The local density of dark matter is about a third of a GeV per centimeter cubed; that's a number you can argue about if you want to. There are also some other interesting effects, like the fact that we're not stationary in our lab frame. We live in a solar system, we have June and December, so you get an annual modulation depending on the boost of the Earth's local velocity relative to the wind, which gives you a prediction of an annually modulating rate. I'll put the footnote down here that there's a famous result from the DAMA/LIBRA collaboration that claims to have seen this signal, but that's another thing you're welcome to debate about; it hasn't been corroborated by any other experiment.

So how do we go about actually calculating some rates? You write down a differential event rate, your dR/dE_R spectrum, and you need to integrate over the kinematics.
You have the local dark matter density, the nucleon mass, and the mass of the dark matter, and you integrate, from the minimum velocity that can give you a given recoil, over the velocity distribution. This is a pretty heavy equation, but it has a lot of the main points in it. You have a scattering cross section, and it's velocity dependent for a given recoil energy: there's a range of recoil energies you can get for a given velocity, just from the scattering kinematics. You have to know what the WIMP velocity distribution is in your detector frame. You have a reduced mass here, and the minimum velocity to give a given recoil, which you can work out from the basic kinematics, is this term here. And then you have a particle physics term, this differential cross section dσ/dE_R: a bunch of kinematic terms, a one-over-velocity-squared, and then generically two components. You can have a spin-independent component, which doesn't care about the flavor of the nucleon it's looking at, and a spin-dependent component, which does; that's where a lot of your more complex interactions come in, like spin-dependent couplings to fluorine and whatnot. This is the generic form, and most of the time people report these two things independently. But in this one equation you have astrophysics, you have particle physics, and you have a little bit of nuclear physics in the form factor, so it's really a loaded equation. At the end of the day, for the cross sections we're looking for, we tend to expect just a couple of events per year in the traditional 100 GeV dark matter search. I'll put a little numerical sketch of this integral below.

So going back to the WIMP-hypothesis results. As I said at the beginning, this is really a cottage industry, and just the number of experiments out there should give you a hint of how many people are doing this. Down here around the 100 GeV scale is XENON1T, a liquid xenon experiment that doesn't hide what its target material is at all, and you can see these liquid nobles really dominate the high-mass reach. These bands are the exclusions: they've collected their data, done their search, and excluded this region of parameter space. And in the simplest case, one dark matter particle with a spin-independent interaction, the simple WIMP scenario is not here. If you look at basically any talk on electroweak physics, they're going to tell you these cross sections are 10 to the minus 43, 10 to the minus 44 ish, and that puts the point right here, where your quote-unquote WIMP miracle sits at 100 GeV. And this region is basically excluded. So the simple scenario hasn't really panned out, and we need to do something different, or at least look under a different lamppost.

So we can move the lamppost and start looking for low-mass dark matter. This doesn't have quite the nice WIMP-miracle explanation, but theorists are inventive; they'll give you anything you ask for if you ask nicely enough. Some of these candidates are known from other fields: the QCD axion comes from solving the strong CP problem of QCD via the Peccei-Quinn symmetry, and it can be a candidate at very light masses. The WIMP mass range is actually very small in this plot.
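Before going further down the mass scale, here's the numerical sketch of the rate integral I promised above, in the Lewin and Smith spirit: a truncated Maxwell-Boltzmann halo boosted by the Earth's velocity, v_min from two-body elastic kinematics, and the form factor set to one. The halo numbers and the normalization are illustrative stand-ins, not the analysis values; this gives you the spectrum shape only.

```python
import numpy as np
from scipy.integrate import quad

V0, V_ESC, V_E = 220.0, 544.0, 232.0   # km/s: halo, escape, Earth speeds (typical values)
C_KM_S = 3.0e5                          # speed of light [km/s]

def f_halo(v):
    """Crude 1D stand-in for the WIMP speed distribution in the detector frame."""
    return v**2 * np.exp(-((v - V_E) / V0) ** 2)

def v_min(E_R_keV, m_chi_GeV, m_N_GeV):
    """Minimum speed [km/s] that can produce recoil E_R (elastic kinematics)."""
    mu = m_chi_GeV * m_N_GeV / (m_chi_GeV + m_N_GeV)   # reduced mass [GeV]
    return C_KM_S * np.sqrt(E_R_keV * 1e-6 * m_N_GeV / (2.0 * mu**2))

def dRdE_shape(E_R_keV, m_chi_GeV, m_N_GeV=67.6):
    """Unnormalized dR/dE_R: the <1/v> integral over the truncated halo."""
    vm = v_min(E_R_keV, m_chi_GeV, m_N_GeV)
    if vm >= V_ESC + V_E:
        return 0.0                                     # kinematically forbidden
    eta, _ = quad(lambda v: f_halo(v) / v, vm, V_ESC + V_E)
    norm, _ = quad(f_halo, 0.0, V_ESC + V_E)
    return eta / norm   # multiply by rho, sigma, N_T, mass factors for a real rate
```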
These are orders of magnitude, and this is just a schematic, but there's been renewed interest in primordial black holes as a dark matter candidate from LIGO, and there are also different flavors of axions, depending on which theorists you ask, plus some astrophysical candidates. Basically, the WIMP scenario is just one possibility; there could be plenty. But searching for these is tricky. The kinematics really hurt you when you look for low-mass dark matter, basically because you have a very light bullet, so to speak, hitting the heavy target of an atom. If you look at the integrated counts above your detector threshold, they fall like a rock, which means you have to do something different: either build better detectors, which we're currently doing, or take a different approach from what I've just shown you. There's a little numerical example of this below.

So that's where the inelastic scattering channels come in. Just as an overview of the two I'll consider today: these inelastic channels involve dark matter scattering with the nucleus, like before in the WIMP scenario, and then something happening to the electrons. So this is really no longer a two-to-two collision; it's now a two-to-three-body collision. One such channel is bremsstrahlung. That word is familiar to most of you in particle physics: basically the emission of photons as an object slows down in matter. These atoms are in crystals or some bath, so if they recoil, they slow down, and when they slow down you can get a Feynman diagram that looks like this, where you have your nucleus, the dark matter comes in, and you get emission of a photon. So what you can do is look for that additional photon. There's also the Migdal effect, which is similar in the sense that the dark matter comes in, jostles the nucleus around within its electron cloud, and that perturbs the electron wave function; when that perturbation happens, or when things settle back down, you get the possibility of emitting electrons and photons, and you can look for those. If you calculate what happens, the emitted energy of the electrons or photons can extend to higher energies than the nuclear recoil itself. So if you have a detector threshold here, you can be sensitive to recoils happening below your threshold, as long as you can measure the electrons and photons being emitted in these collisions, which lets you go to very low dark matter masses without being directly sensitive to the nuclear recoil itself.

So, going on to the bremsstrahlung channel. Your final state is the recoiling nucleus, the dark matter goes off on its merry way, and you get the photon; you get these two diagrams. But the rate is suppressed, because in the classic dark matter experiment you're just drawing a line here, just the nucleus and the collision. Now that you have this photon, you have an extra photon coupling that you have to pay a price for. So the rate is the elastic rate times the probability of emitting a photon: you give up some of the rate at the cost of being able to probe lower dark matter masses.
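Here's the promised example of that kinematic suppression. The maximum elastic recoil energy is E_R^max = 2 mu^2 v^2 / m_N, which collapses roughly as the dark matter mass squared once the dark matter is much lighter than the nucleus; the velocity used here is an illustrative escape-plus-Earth value.

```python
# Maximum elastic recoil energy on germanium for a few WIMP masses.
M_GE = 67.6              # germanium nucleus mass [GeV]
V_MAX = 780.0 / 3.0e5    # ~(escape + Earth) speed as a fraction of c

for m_chi in (100.0, 10.0, 1.0):                       # WIMP masses [GeV]
    mu = m_chi * M_GE / (m_chi + M_GE)                 # reduced mass [GeV]
    e_r_max = 2.0 * (mu * 1e6) ** 2 * V_MAX**2 / (M_GE * 1e6)   # [keV]
    print(f"m_chi = {m_chi:6.1f} GeV  ->  E_R^max ~ {e_r_max:7.2f} keV")

# 100 GeV gives a few hundred keV at the tail of the halo, but 1 GeV gives
# only ~0.2 keV, below typical thresholds; hence the inelastic channels.
```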
You can write down what the particle physics term looks like for bremsstrahlung, and there's a lot here that's very similar to the nuclear recoil rate. You have your classic sigma, the spin-independent dark matter cross section, and this is the same quantity that's in all the other plots I've shown up to this point; you're probing the same physics, just through a different signal channel. You also have these atomic scattering factors, which I'll get into a little later. You have kinematic terms that are a bit more complex, because now you have kinematics between the nucleus and the emitted photon of energy omega here, but you still have the standard velocity-squared over the nucleus mass. And we're making the assumption that the form factor is approximately one for low momentum transfer scattering; that's roughly true, modulo the wiggles.

Writing this up into a total rate is where it gets a little messier. You have your standard velocity integration, your Maxwell-Boltzmann distribution, basically a Gaussian around some most probable velocity, so you do the velocity integral, and these factors are basically Avogadro's number telling you how many atoms you're passing through. It's the same deal as before, except now it's messy enough that you can't do it by hand like Lewin and Smith did; you do it numerically, and that gives you your signal rate.

Going back to the atomic scattering functions, this is where the atomic physics gets even messier than it intrinsically is. On the previous slide you saw this f squared. The atomic scattering function actually comes in two parts, a real part and an imaginary part. The real part is this term: you have a dispersion relation tying it to the absorption at every photon energy, so you have to do this Cauchy principal-value integral, which depends on the full photoabsorption cross section, with your Planck's constant and all that good stuff, plus a little bit of atomic physics for your atom. Anyway, you can calculate it. And f2, the imaginary part, is actually pretty straightforward: it's just the photoabsorption cross section over the wavelength. The nasty thing is that at low energy this cross section isn't terribly well measured, so there's quite a bit of uncertainty. There have been quite a lot of measurements of the photoabsorption cross section over the past 70 years or so, and that feeds into an uncertainty on f if you want to calculate it. Here's where the nominal value sits, if you take the central part of these curves, the Henke data that you can get from LBNL; then you can calculate the spread, plug it through, and get the spread in the atomic scattering function.
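As a sketch of that bookkeeping, assuming the Henke-table convention where the photoabsorption cross section is sigma_pa = 2 r_e lambda f2: the imaginary part comes straight from the tabulated cross section, and the real part from a dispersion integral over it. The energy grid and cross-section array are placeholders for the LBNL tables, and a real implementation would treat the principal-value pole more carefully than this.

```python
import numpy as np
from scipy.integrate import trapezoid

R_E = 2.818e-13     # classical electron radius [cm]
HC  = 1.2398e-4     # h*c [eV*cm]

def f2(E_eV, sigma_pa_cm2):
    """Imaginary part of the scattering factor from photoabsorption."""
    lam = HC / E_eV                            # photon wavelength [cm]
    return sigma_pa_cm2 / (2.0 * R_E * lam)

def f1(E_eV, E_grid, f2_grid, Z_eff):
    """Real part via a crudely discretized Kramers-Kronig dispersion integral."""
    integrand = E_grid * f2_grid / (E_grid**2 - E_eV**2)
    integrand[np.isclose(E_grid, E_eV)] = 0.0  # crude exclusion of the pole
    return Z_eff + (2.0 / np.pi) * trapezoid(integrand, E_grid)
```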
So, moving on to the Migdal effect. Basically it's jostling a nucleus and getting an emission. This is a lot of quantum mechanics, and I think one of the first instances of it was in a Landau and Lifshitz textbook in the 1940s, calculating what these transition probabilities are, but there's been a lot of renewed interest, and there are a couple of ways you can do it. The paper we've chosen to model ourselves after is this 2017 paper by Ibe and collaborators. They numerically calculate the transition probabilities for emitting an electron of a given energy using the Flexible Atomic Code, at a reference nuclear recoil velocity of 10 to the minus third times c, and then they just give you the tables. To convert to a generic electron emission energy, you just need to know what momentum kick the electron cloud gets, and then you rescale in this fashion; there's a sketch of that rescaling step below. There's also an alternative formalism using the photoelectric cross section: if you look at what this diagram is, you can think of it running the other way in time, an electron or photon coming into an atom and getting captured. If you compare the two formalisms, they're very comparable in terms of the final result. But both tend to assume your atom is in a vacuum and neglect crystal effects, so a lot of people these days are starting to work on understanding how being in a crystal potential affects these transition probabilities. That's an active field for the Migdal effect.

Calculating the rate is, again, much like the bremsstrahlung. You have a triple integral: to get a differential rate dR/dE you integrate over the nuclear recoils, you integrate over your velocity distribution, and now you also have a summation over the different electron shell states, so you go through the different transition probabilities, calculate them, and do the integral. But it looks pretty much the same as before: a nuclear WIMP scattering cross section, kinematic terms, the local dark matter density and velocity distribution; these pop up anywhere and everywhere. Then there's a little caveat: what you see in the detector isn't really just the electrons. If you want to convert from dR/dE_e to a rate in detected energy, you have two components, a nuclear recoil component and an electron recoil component, which you combine in this delta function that sets the detected energy equal to the sum. The trick with detectors is that there's usually some sort of quenching factor, meaning the two energy scales differ, but I'll go into that later.
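Here's the rescaling sketch I promised. In the dipole approximation the ionization probability scales as the square of the momentum kick q_e = m_e v that the recoiling nucleus gives the electron cloud, so tables computed at v_ref = 10^-3 c rescale quadratically; the table arrays here are placeholders, not the actual Ibe et al. tables.

```python
import numpy as np

V_REF = 1.0e-3   # reference recoil velocity of the tables, in units of c

def dP_dEe(E_e_keV, v_over_c, table_E_keV, table_P):
    """Ionization probability at an arbitrary recoil velocity, by rescaling
    the tabulated reference-velocity probability quadratically in q_e."""
    p_ref = np.interp(E_e_keV, table_E_keV, table_P)   # lookup at v_ref
    return (v_over_c / V_REF) ** 2 * p_ref
```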
Right, so I swear I'm an experimentalist, and we did build an experiment: SuperCDMS Soudan, at the Soudan Underground Laboratory in Minnesota. It's about half a mile underground, with about two kilometers water equivalent of overburden above it. This is sort of a recycled experiment: there used to be a CDMS II experiment, and we took the shield and the muon veto and all that good stuff, pulled out the old detectors, and put our new detectors in. It stopped taking data back in 2015, quite a while ago now, and we've really shifted our focus to SNOLAB; this is probably one of the last analyses that will come out of this experiment. But we got some good stuff out of it. This is a crystal-based experiment, and one of the things it really did was exploit the Neganov-Trofimov-Luke effect.

Basically, what happens in a crystal is that if you put a bias across it and you have some interaction, the ionization produces electron-hole pairs, and under that bias the electrons and holes drift. They spread out a bit, because electrons have this intervalley scattering that fans them out in a shotgun pattern, but as they drift, they create phonons. Phonons are the quanta of lattice vibration, basically audio, and these detectors are really very good microphones. The phonon energy generated this way is proportional to the bias, so if you put a big bias on the crystal, you basically turn your crystal into an amplifier. If you have a small charge deposit and you put a big bias on and read this out, you get a very low threshold experiment. The trade-off is that you can't pick out the primary phonon signal on its own, it's going to be below your threshold, and you don't have an independent ionization measurement, so you lose some information, but you gain the low threshold, and that's great for these kinds of searches.

Going back to the quenching factor I mentioned earlier: the terminology is not great, but "yield" is what we use in the crystal physics community. This is data taken from a detector that can measure phonon energy and ionization energy at the same time, but it doesn't have a low threshold; you pay that price. On this plot, if you can measure both, you can take the ratio of the charge energy to the phonon energy, and there are basically a few populations. Here are the gammas, and by definition we calibrate those to a yield of one. You have betas near the surface, which have less yield; lead atoms, which have a very low yield; and nuclear recoils here. By definition, yield is the ratio of the charge energy to the recoil energy.

Going back to the NTL effect: your total observed energy is the recoil energy plus the amplified energy, and you can work through the different terms. You get one plus your yield times the electron charge times the bias voltage, divided by the amount of energy needed to create one electron-hole pair, which is around three eV; but that comes out in the wash. If you calibrate your detector in total energy, you only see one energy scale, and you have to convert using some sort of model; working through it, you get an equation that converts from the observed electron-equivalent energy scale to the nuclear recoil energy scale.

And this yield, like the photoabsorption, is kind of hard to measure; there's quite a bit of spread in the field. This is another cottage industry, people are out doing experiments. We have a recent paper this year with our own measurement, on a Soudan data set, using a photoneutron source. But this is basically how you have to do it: go out there, build your experiment, measure it. People have done this at liquid nitrogen temperatures, sorry, I guess there are no room-temperature points on this particular plot, this is liquid nitrogen temperatures, and then at liquid helium temperatures, and you go through and look at all of it.
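To keep the NTL bookkeeping straight, here's a minimal sketch of those relations, assuming a germanium-like epsilon of about 3 eV per electron-hole pair and the CDMSlite 70 V bias; Y is whatever yield model you adopt (Lindhard with a turnoff, in our case).

```python
EPS_PAIR = 3.0e-3   # energy to make one e-h pair [keV], approximate for Ge

def total_phonon_energy(E_R_keV, Y, V_bias=70.0):
    """Total observed phonon energy: recoil plus NTL amplification."""
    n_eh = Y * E_R_keV / EPS_PAIR            # number of e-h pairs created
    return E_R_keV + n_eh * V_bias * 1e-3    # each pair adds e*V_b, in keV

def keVee_from_recoil(E_R_keV, Y, V_bias=70.0):
    """Electron-equivalent scale, calibrated so gammas (Y = 1) read out E_R."""
    return total_phonon_energy(E_R_keV, Y, V_bias) / (1.0 + V_bias * 1e-3 / EPS_PAIR)
```

Inverting keVee_from_recoil with a nuclear-recoil Y(E_R) is exactly the observed-to-recoil energy-scale conversion on the slide.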
The de facto standard that's been adopted by the crystal dark matter community is Lindhard. This goes back to when Lindhard wrote down a nuclear-physics-motivated model of what the ionization yield should be, and that's what you'll often see as the reference in these plots. For the most part it matches at the keV scale, but when you get to low energies things start deviating, and physically there's no real reason to believe it should continue all the way down to arbitrarily low recoil energies; this was a model developed at keV energies for nuclear physics. So no one is quite sure what happens at the lowest end, and it doesn't always match up.

Going back to the astrophysics of this: I mentioned the overburden, and that has an effect at the cross sections we're actually going to be interested in. If you have no overburden, you have a Maxwell-Boltzmann velocity distribution with a maximum there, and you can take that, calculate your limit, and everything is fairly straightforward. But as the cross section gets larger, you start to have interactions, first with the atmosphere and then with the rock. The atmosphere is the best you can do; it's hard to put an experiment in space. As the coupling to normal matter increases, at some point you become insensitive to strongly interacting dark matter, because it all scatters and loses its energy in either the atmosphere or the rock, and then you don't have any kinematics to work with at your experiment. You can see what happens to the velocity curve. There are a couple of ways you can deal with this. For traditional cross sections, 10 to the minus 43, 10 to the minus 44, you can ignore it: the Earth is transparent, the atmosphere is transparent, you're good. But up here is where you start turning over, and you have to make a decision about how to handle it. There's a paper you can read if you're really interested: you can do it with Monte Carlo, which is computationally intensive and is the proper way to do it, but for this analysis we picked method B, in their terminology. Method B is: you calculate the average energy loss as the dark matter passes through the material on its way to the detector, and then you recalculate your limit curve. What's happening here is that as you get more and more overburden, and this is just a generic silicon dioxide rock model, the velocity curve shifts lower and lower. The way you model this is to write down this attenuation term: here's the cross section you're interested in, the dark matter density, a summation over the elements, the number density of nuclei it's passing through, the kinematic reduced masses and all that, and a path length. Then you can transform the velocity distribution at your detector using this equation. You also get this term out front, which conserves the flux: as the dark matter slows down, you get more of it arriving at once, so it's kind of a pile-up effect. All right, so that's the physics of it.
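A minimal sketch of that method-B attenuation, assuming straight-line propagation and the mean recoil energy lost per scatter, which integrates up to an exponential damping of the speed with path length; the layer list is a placeholder for the geology and Earth-model tables.

```python
import numpy as np

def damped_speed(v0, m_chi_GeV, path_cm, layers):
    """Average WIMP speed after traversing the overburden.

    layers: iterable of (n_i [nuclei/cm^3], sigma_i [cm^2 per nucleus],
    m_i [GeV]) for each element along the path."""
    inv_L = 0.0
    for n_i, sigma_i, m_i in layers:
        mu_i = m_chi_GeV * m_i / (m_chi_GeV + m_i)        # reduced mass [GeV]
        inv_L += n_i * sigma_i * mu_i**2 / (m_chi_GeV * m_i)
    return v0 * np.exp(-inv_L * path_cm)                   # exponential damping
```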
Now, I don't see any geologists in the room for this seminar, but we did consult one, not an SMU geologist but a UMN geologist. There's a lot of mining in Minnesota, so a lot of these maps are well known, largely thanks to the Mesabi Iron Range that you'll pass by if you drive up to Soudan, and you can get very detailed maps of the geology of northern Minnesota. Our experiment is dead in the center here, and he was kind enough to give us very detailed maps at various radii, with the chemical composition for all of it. If you take the mine tour they'll show you a chunk of Ely greenstone, and there are a bunch of granites and all sorts of stuff that I'm not qualified to talk about, but I know how to deal with a spreadsheet. So we took all this data, plugged it in, and calculated the attenuation from the local area. And then of course there's the whole rest of the Earth. Looking at the geometry: your detector is not at the center of the Earth, you're down a couple of kilometers, there's a crust, and you work through all your classic trig and calculate the path length. The density of the Earth changes as you go toward the surface, and there are quite a few layers to worry about, but plugging all this in, looking these things up basically at a Wikipedia level of detail, you can calculate the velocity-damping parameter as a function of the angle the dark matter is coming from, whether it's coming up through the whole Earth or down from above. So we made a hybrid model where we switch: everything below the detector uses the Earth model, and above the detector we use the local Soudan geology model.

Going back to where things stand: we had this analysis on CDMSlite Run 3, which was a standard dark matter search with about a kilogram-month of exposure at 70 volts, and it was very good at the time; that paper was from 2018, and there are a couple of other experiments I'll move past quickly. We basically piggybacked on that analysis, because it did all of the hard work for us in terms of understanding the detector. There are a couple of activation peaks: you can activate your germanium with a neutron source, and that gives you peaks you can use to calibrate the energy scale in keV electron equivalent, because these are gammas being emitted. So you get the energy scale calibration, and you also get the widths, which give you the resolution. Generically, we parameterize the resolution like this: there's a noise term; there's a B term, which is approximately a Fano-factor term, the square-root-of-N of the number of charges, though that's only approximate, there should be an asterisk there; and then a position-dependence term, which is basically our way of saying "everything we don't understand." You can go through and do this analysis and get a model for the resolution, with uncertainties.
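That resolution parameterization is compact enough to sketch; the three parameter values below are illustrative placeholders, not the fitted CDMSlite numbers.

```python
import numpy as np

def sigma_E(E_keV, sigma0=0.01, B=5e-4, A=5e-3):
    """Resolution model: baseline noise, a ~Fano sqrt(N) charge-statistics
    term, and a linear term absorbing position dependence.
    sigma^2 = sigma0^2 + B*E + (A*E)^2, everything in keV."""
    return np.sqrt(sigma0**2 + B * E_keV + (A * E_keV) ** 2)
```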
Backgrounds. We have a Compton background to contend with, and it basically comes from radiogenics: uranium, thorium, potassium that's just in your detector materials. You don't really see the full-energy lines, because we put everything in shielding, so most of it Compton scatters, smears out, and gives you this Compton continuum. What's interesting, going to these low-threshold detectors, is that now you can see these quote-unquote Compton steps: at certain points, the incoming gamma doesn't have enough energy to kick an electron out of a given shell, because it can't overcome the binding energy, so you get this step feature that you can model, and you can basically see the atomic structure of your target material in the Compton background.

There are also plenty of activation backgrounds to worry about. Going a little higher in energy, you have things like the 10 keV line; that's your detector itself, your primary atom, getting activated. Then you have some cosmogenically produced isotopes, gallium, iron-55, cobalt, vanadium, that give peaks; again, you look up the energies and model them, not too hard. There's also tritium; we have a paper where we measured the tritium rate in our detectors. That has about an 18 keV endpoint, and you can ask any number of programs to calculate the spectrum for you, but it's pretty straightforward; I'll put a small sketch of that beta shape below. So you can model all these backgrounds in this way.

A little harder, and this gets into the detector specifics, are the surface backgrounds. These are all the lead-210 things that we fight so hard in the lab to reduce. You can have radon that decays into lead-210, and the lead-210 will plate out onto detector surfaces, your housing surfaces, anything you leave out in the open air. When the lead-210 decays, it gives you lead-206 atoms and betas, but it also gives you alphas, and alphas are at energy scales of 5.3 MeV, well above the energy range we're interested in for a dark matter search. So we did independent measurements of the alpha rate in these detectors, and you can use those alpha rates to constrain the level of lead-210 on the different parts of the detector, basically by seeing where the alphas are hitting. Then you use detector simulation: you make a voltage map of what's happening in the corners of the detectors and try to figure out, from the amplification, a mapping between a physical location and a parameterized observed space, using Geant4. From that you can make a spectral PDF of the surface background. And there are correlations between all of these things, because if you change parts of this, the mapping changes; I'll spare you the gory details.

I just showed you a plot of why you don't want to keep events near the detector boundaries, so one of the things we do is cut them off. We define a parameter, in arbitrary units, and you don't want to know what goes into it, it's a lot of pulse-fitting details, but you can construct a parameter that's sensitive to the position in the detector, the radial distance from the center. You can see the activation lines in this parameter space and use them to calibrate it, and then you make a cut that gets rid of events happening near the sides of the detector: all those low-penetrating betas, lead nuclei, alphas, all that good stuff.
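Here's the promised sketch of that tritium shape: the allowed-decay phase-space spectrum with an 18.6 keV endpoint, with the Fermi function set to one for brevity (the standard spectrum codes include it properly).

```python
import numpy as np

M_E = 511.0   # electron mass [keV]
Q_T = 18.6    # tritium endpoint [keV]

def tritium_spectrum(E_keV):
    """Unnormalized dN/dE for an allowed beta decay, Fermi function omitted."""
    E = np.asarray(E_keV, dtype=float)
    p = np.sqrt(E**2 + 2.0 * E * M_E)           # electron momentum [keV]
    shape = p * (E + M_E) * (Q_T - E) ** 2      # phase space * endpoint factor
    return np.where((E > 0) & (E < Q_T), shape, 0.0)
```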
So you put that together. There are some hardware cuts, basically your discriminators and your triggers; there are quality cuts you have to apply to the data; and really the biggest hit in terms of the analysis is the fiducial volume cut. You lose about 50 to 60% of your fiducial volume with it, so that's the biggest hit in actual physical exposure. But this isn't unique to us; trying to optimize the fiducial volume is a common thing in most dark matter experiments, because that's what controls a lot of your radiogenic background.

All right, back to the Lindhard model, and parameterizing the uncertainties in it. There are plenty of plots like this, with a bunch of experiments, and what you can do is try to draw a band around the data points. There's a standard value of the Lindhard k parameter for germanium, and reasonably you can go to 0.1 or 0.2 and make an uncertainty band out of that; so we took plus or minus 0.05 as a sort of one-sigma range on Lindhard.

Then there's the issue of the yield at low recoil energies. We expect it to go to zero, because at some point you're not going to jostle the atom hard enough to actually create ionization. So we created a model: we take Lindhard and add a smooth turnoff. You may recognize some of these values as being close to Matt Stein's thesis, where we measured the Frenkel-defect, or defect displacement, energy; we used that to set the turnoff, which goes to zero at 0.1 keV. So it's a mix of the Lindhard model and a smooth turnoff. This matters because, going back to this equation, we are integrating over E_R, and E_R can be arbitrarily small. Unfortunately, taking uncertainties on Lindhard k doesn't map through linearly, so this becomes quite ugly quite fast, but it's certainly doable. Then you can calculate the effect on your signal model. Here is k being varied from one to three sigma, and I purposely picked 1 GeV for this plot because it illustrates the lower dark matter masses: at three sigma, the nuclear recoil component is basically zero. People do neglect it, but for higher masses there is an effect. You can see that when the ionization is small you have basically just the electron part, and you can see the atomic structure; as the ionization increases, this term smears out a lot of the atomic features. To handle these systematic uncertainties in the analysis, we build a two-dimensional interpolator in energy and in these sigmas, and then we can parameterize everything in terms of sigma. There's a minimal sketch of the yield model below.
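Here it is: standard Lindhard for germanium with k as the varied parameter, times a smooth turnoff that forces the yield to zero around 0.1 keV. The turnoff form and width here are illustrative stand-ins for the paper's exact parameterization.

```python
import numpy as np

Z_GE = 32   # germanium atomic number

def lindhard_yield(E_R_keV, k=0.157):
    """Standard Lindhard ionization yield for germanium."""
    eps = 11.5 * E_R_keV * Z_GE ** (-7.0 / 3.0)      # reduced energy
    g = 3.0 * eps**0.15 + 0.7 * eps**0.6 + eps
    return k * g / (1.0 + k * g)

def yield_with_turnoff(E_R_keV, k=0.157, E_cut=0.1, width=0.02):
    """Lindhard times a sigmoid turnoff that kills the yield near E_cut [keV]."""
    turnoff = 1.0 / (1.0 + np.exp(-(np.asarray(E_R_keV) - E_cut) / width))
    return lindhard_yield(E_R_keV, k) * turnoff
```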
I'll just need some coffee before we get to this slide, but don't worry, it's massive, so let's get through it quickly. There's really nothing too unique about this likelihood; it's just intimidating. These terms here are your Poisson terms: your number of events against your signal and background models. We split the data into two periods, which is why it looks more complex than it needs to be, but it's just constraining the number of events you observed against what the models predict. Here are the surface background models; I mentioned the correlations, so you write down your correlation matrix, do the summation, and put it in as these Gaussian constraint terms. The efficiency uncertainty goes back to those activation peaks: you have uncertainties on those parameters, with some correlations, and you write it down for period one and period two, a Gaussian term with the log taken, don't worry about it. For the resolution, again, there are correlations, you write down the correlation matrix, blah blah blah, and sum over the components. And yield is actually the simplest one, because of how we treated it: by definition sigma is one and the nominal deviation is zero, so it's all zeros and ones and looks very nice and neat. But that's only because we did all the nasty work earlier.

All right, so that's the likelihood, and here are all the different backgrounds I've told you about: the surface backgrounds, the Compton background, the activation peaks, all those color-coded models. Then you can just do the fit for this likelihood. We use iminuit, we float all the different nuisance parameters, the cutoffs, the efficiency curves, all that good stuff, and fit. I'll spoil the ending: we didn't see an excess of signal over background. So how do we go about setting a limit?

We have a bunch of systematics and a big likelihood, so we define a test statistic. This is the textbook profile likelihood ratio method: you take the ratio of the likelihood with a particular signal strength fixed over the unconstrained likelihood, and you can write that out as one log-likelihood minus the other, which sets the minimum of the curve to zero; and the statistic is set to zero whenever the fit prefers more signal than the value being tested, so it's one-sided. This becomes a one-degree-of-freedom problem; you can go back and read Wilks's paper from 1938 on what to do about that, and there's enough data here that the Wilks theorem approximation is good enough, which means you don't have to do the Monte Carlo, you can just do a scan. So that's what we do: we calculate the likelihood for a range of numbers of signal events, and that's a key point, we do this in number of signal events, not cross section. You calculate where your likelihood curve crosses the 90% confidence level threshold, use that to set an upper limit on the number of signal events, and then compare to the number of expected events. In this case here we set an upper limit of 7.4 events with about three expected signal events, so that's a point we can't exclude; here we have an upper limit of seven events but expect 28, so that's a point we can exclude. To convert from these plots to parameter space, we do the analysis on a grid of points, basically hypothesis testing at each one, and see which points we can reject; this is the region we're able to exclude. We get a ceiling here, which comes from the overburden, and a floor, which comes from just the number of events.
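As a toy version of that limit-setting machinery: scan the number of signal events, profile the nuisance parameters away (here just a single background normalization in one counting bin), and find where the one-sided profile likelihood ratio crosses the 90% threshold from Wilks's theorem. This is a stand-in for the full likelihood, and the one-sided threshold convention here may differ in detail from the paper's.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def nll(n_sig, n_bkg, n_obs):
    """Negative log Poisson likelihood for one counting bin."""
    mu = n_sig + n_bkg
    return mu - n_obs * np.log(mu)

def q_mu(n_sig, n_obs):
    """One-sided profile likelihood ratio in the number of signal events."""
    fixed = minimize_scalar(lambda b: nll(n_sig, b, n_obs),
                            bounds=(1e-6, 1e4), method="bounded")
    # In this one-bin toy, the global best fit is the background-only fit.
    best = nll(0.0, float(n_obs), n_obs)
    return max(0.0, 2.0 * (fixed.fun - best))

threshold = norm.ppf(0.90) ** 2          # ~1.64 for a one-sided 90% CL
scan = np.linspace(0.0, 40.0, 401)       # candidate numbers of signal events
q = np.array([q_mu(s, n_obs=10) for s in scan])
print("90% CL upper limit on N_sig:", scan[q < threshold].max())
```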
Going back to the number of events: when we do this analysis, we're basically fixing the spectrum shape, and if you remember, the spectrum shape is going to change as you change the cross section, which is tricky. So we set the limit on the number of events and do the hypothesis testing, and that's what's driving this type of analysis; from it we carve out the exclusion boundary.

So this is what the low-mass field looks like. Notice we're going below 10 GeV, down to 1 GeV and below; this is sub-GeV dark matter. Other experiments have done this, and these plots are getting quite messy, but our good friends at EDELWEISS have done this analysis. They did it at a surface facility, so they actually have a better ceiling than us, because they don't have all the shielding, but they have less exposure, so they don't have the lower reach. For us, this black curve here is the Migdal analysis, and you can see we basically claim this chunk of parameter space as ours. Our colleagues at CDEX have also done this analysis; their paper came out in November and was published recently in PRD, and they managed to squeak out this little bit when you put them both on the same plot. And the bremsstrahlung analysis is here. The bremsstrahlung analysis isn't quite as good as the Migdal, because Migdal uses the electrons already in the atom while bremsstrahlung has to create the photon, so it doesn't quite have the rate that Migdal does, but it's a little easier to calculate theoretically. So that's the big picture of this result.

All right, I'll be brief because I'm running out of time: what we're doing today. SuperCDMS SNOLAB is our new experiment, brand new from the ground up, literally. There's a new seismic platform that protects us from mine blasting, brand new shielding, brand new crystals. And these are chunky, about one and a half times the size of our SNOLAB, sorry, our Soudan crystals. They're also going deeper underground: SNOLAB is two kilometers down, and the rock density feeds into the overburden part of this. That gives us about six kilometers water equivalent of shielding, it's in a clean room, and we're spending a lot of time under construction; I'll plug our Twitter if you want to see up-to-date photos of where that stands. This is really why we want to do it: this is the plot of the muon flux versus lab depth. We were here at Soudan; now we're going here. Homestake is where a lot of other experiments are, and Boulby in England is pretty deep, but there are orders of magnitude of improvement in the cosmic ray shielding that you get just by moving lab space.

I'll also plug our white paper. This isn't a SNOLAB talk, but in the plethora of Snowmass papers that came out earlier this year, we put one out showing what we think we can do with various upgrade scenarios, and this is roughly what that plot looks like. This is the traditional WIMP nuclear recoil search, so again, masses around a GeV and above. You'll notice there is no inelastic-channel projection on it; I'm going to leave that as an exercise for the current and future students, if they want to tackle it. Just throwing that out there.

All right, so I'll wrap up. As I've shown, the traditional WIMP miracle, or WIMP coincidence, hasn't panned out the way people thought, so we really need to do something different.
These inelastic channels are really a breath of fresh air, because they let us do low-mass searches with detectors and data sets that we have now; I'd even venture to say more so for the liquid nobles, because they also have threshold issues. This really lets you probe low masses, but you do lose rate, and because of the cross sections you're probing, given that lost rate, you really do need to take the overburden into account. I showed the EDELWEISS plot where they had a higher ceiling; in SuperCDMS we have test facilities, either on the surface or a couple of hundred feet underground, that don't have this overburden, so those facilities, with small devices, could possibly provide some of that sensitivity near the ceiling, to be determined. Procedurally, I think the thing about this analysis, besides the limit at the very bottom end, is that it provides a consistent way of calculating the intermediate regime. People often calculate the ceiling on its own, assuming the overburden, and then calculate the floor assuming no overburden effect; this is a consistent way to do both in the same analysis, treating the overburden the same way for the ceiling and the floor. So this analysis does that, and hopefully we'll have data from SuperCDMS SNOLAB, which is going to be running a very large payload of these HV-biased detectors. Who knows what we'll see with that. Thank you.

I'll switch off your mic. Yeah, thank you, Mike. Okay, good, there we go. All right, so let's go to Q&A. Thanks very much, Rob. Let's start with a question from anybody connected online; since we can't see you, just go ahead and make yourself known if you have a question. Okay, over to the room. Oh, yeah, sorry, go ahead.

Yeah, hi, it's Harold. Thanks for this very nice review of very interesting work. Just to give an idea: what is the ratio of the signal to the background that you would expect in these studies?

Yeah, so that's a hard question to answer, because it really depends on what the cross section is, on where the signal sits. If you look at these plots, the bottom of any of these curves basically tells you where the signal becomes invisible at the current background level. If you had a signal here, you would definitely see it; you pick up orders of magnitude.

Sorry, Rob, they won't be able to see your pointer online. You can use the mouse on the tray there. Yeah, there we go, that'll help. Pull out the tray on the podium. There we go, yeah.

But yeah, to the point: in here, in the middle, you're going to have a very high signal rate and you'd be able to actually see it, which is why we're able to exclude it. Down here at the bottom it's going to be comparable to the background, and you can work through the statistics of what that would look like. And below here it's going to be way below background levels. So it really depends on where it is.

Then the follow-up question: is it possible that you fit out the signal? In other words, you were showing this quite complex likelihood function. Well, I mean, it depends on where the signal is, on how strong the signal is, right?
So, I mean, the signal shape is mostly, well, it depends on the signal shape, but it's not quite the same as the background shapes. It is kind of a complex issue: there are degeneracies between the different surface backgrounds, so those are hard to fit out, but we have independent measurements of them, and there's a little bit of degeneracy with the tritium background. I guess it depends on what you mean by "fit out."

Well, actually, this plot is very nice because you assume that the WIMP cross section is presumably more or less constant, right? But in principle it could have some kind of resonant dependence. What do you mean, resonant dependence? Well, in other words, could it have some complex shape? For example, take this figure: could you have a bump in a very low range of energies that then disappears, or is that entirely impossible?

That's possible; I'm certain a theorist could come up with a model. But going with these models, you can calculate what the expected signal shape should be, and it's not that complex. Let me find the right plot; this is probably the one we want. For a traditional dark matter search, where you're just looking for a nuclear recoil interaction, you don't really care about the details; you're just looking for a scattering. You get this smoothly falling exponential, and it gets chopped off because you run out of velocity in the halo. These search channels are a bit more complex, because you introduce all the atomic physics into it, so the signal model has features from the atomic physics in it, but you can calculate it. I admit there are model assumptions that go into this, but you can calculate what the signal model should look like for these cases. Now, if you want to get into effective field theories and have different vector and axial couplings and things, I'm sure you can come up with more complex shapes, but that's a different challenge; you'd have to do the same thing, calculate your expected signal shape and then do the analysis. You have to know what your expected signal shape looks like, either through theory or, as in this analysis, through a mix of both: the theoretical part, where you write down the atomic scattering factors, and then you go through and figure out, well, how well do I know that part? That comes from experiments, and likewise with the yield. At the end of the day, you have to know what your signal model looks like to do anything. Okay, thank you. Thank you.

We have a question from the room. That was a nice talk. I probably missed it: how do you distinguish a dark matter particle interaction with the nucleus in the inelastic collision from very low energy neutron inelastic scattering? We know that there are a lot of them, and they don't get stopped very easily by any shielding.

Yeah, so you're absolutely right, there is a degeneracy here. You can have thermal neutrons that permeate through your poly shield, so you have a bath of very low energy neutrons that can scatter in your detector and then emit a photon, giving you basically the same type of signal.
I didn't have a slide on this, but going back to this: what we did for CDMSlite Run 3 is we calculated the expected neutron background, and that was constrained to be less than one event; we did the Monte Carlo for that. And if you look at this slide, you can see that the rates of bremsstrahlung and these inelastic scatterings are orders of magnitude below the basic nuclear scattering. So we used that to convince ourselves that this background was so low, we wouldn't have to worry about it.

Okay, another question online and then we'll come back to the room. Okay, we have a question from Bob. Thanks Rob, this was really interesting. So you were showing the work that you and Dan have done; that was with the Soudan data. And I'm sorry if I missed it: what's the plan for running at SNOLAB? Has it already started? How much more mass? What's the configuration and the plan there?

Yeah, so it hasn't started, but the construction has started. Again, I'll plug the Twitter; Sylvia has posted updates of the different construction milestones. Last I checked, the seismic platform has been installed; the shielding is coming from France, and they've done test fits of that, and there's a nice time-lapse of it, and we're planning a time-lapse of the actual construction of the detector when that's all done. But it is currently under construction; I'll just blame COVID for the schedule.

Oh, no, no, no, but when do you expect it to start running? And does it require a certain number of detectors to be ready? I remember that in the past the amount of mass being brought to bear was important.

Yeah, so the scope has changed; you can see the cryostat is a little bit smaller than the shielding these days, compared to pictures of this you may have seen in the past. But we're expecting to start taking serious data in probably about one to two years; that number always seems to change on me, but that's the ballpark estimate. The payload is going to be four towers, and each of the four towers has six detectors. These detectors are about one and a half to two times the mass of the SuperCDMS Soudan detectors. At Soudan we only ran one HV detector, CDMSlite, at a time, because we only had one board that could do it, and it was kind of a test device, so don't judge us too harshly on that; it was a proof of principle, and it worked out fantastically well. Now it's part of the baseline physics program for SuperCDMS SNOLAB: we're running half of the detectors in CDMSlite mode, or HV mode, depending on how you want to brand it, and the other half in the traditional iZIP mode, and the payload is going to be much, much larger than CDMS. So this is really, and here's my sales pitch, an interesting thing to do. Jasmine is currently working on projections of sensitivity to solar axions with the detector; Michael might choose to take this up, we'll see who wants it for a future project; I'm kind of tired of this myself, it's been years. It's been years, yeah, yeah. And if you're interested in all the gory details, I'll point you to the Snowmass white paper. So now, let's just take a point, 10 to the minus 43 at 2 GeV, and compare that to this black curve here, right?
But yeah, so now let's just take a point: 10 to the minus 43 at 2 GeV, and compare that to this black curve here, right? That's really pushing down into this region; here's where CDMSlite Run 3 is. And the real driver here is the payload. If you look at the comparison of the CDMSlite Run 3 elastic result and the bremsstrahlung result, these are the same data sets and mostly the same analysis with a changed signal model, and you can see there's basically six orders of magnitude difference. So increasing the payload will really push this down; a toy scaling below makes that explicit. The big question is what happens to the ceiling: because you're increasing the overburden substantially, the ceiling in principle should drop. But on the plus side, what we didn't have in the CDMS Soudan era is this many test facilities. At Stanford there's basically a fridge running on the surface, I think people call it the BlueFors. NEXUS at Fermilab is in the NuMI tunnel, just a couple hundred feet underground, and Ashwita is working on that. Northwestern has a fridge. So a lot of our collaborators have fridges and are able to run things. So I imagine, just spitballing on my own, not speaking for the collaboration, that you could do a baseline exposure, measurement, and analysis with the main detector and move this Migdal limit down a couple of orders of magnitude. And then you could have a side project where you take data at these test facilities and fill back out the upper region. If you read the EDELWEISS Migdal paper, they're running basically on the surface. So I think if you do a physics program that combines the SuperCDMS SNOLAB experiment, which covers the low mass, with a bunch of runs on the surface to cover the high-mass end, you can really fill out all this parameter space and take it back from XENON1T. Yeah, I don't think there's any shortage of opportunities.
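To make the payload argument explicit, here is a toy scaling in the zero-background Poisson regime; the reference cross-section and exposures are invented for illustration, not the published values, and real limits come from the full profile likelihood rather than this counting estimate.

```python
# Toy scaling of a background-free 90% CL limit with exposure: the
# excluded cross-section goes as ~2.3 expected events divided by
# (efficiency * exposure), so it falls inversely with kg-days.

def naive_limit(sigma_ref, expo_ref, expo_new):
    """Scale a reference cross-section limit to a new exposure."""
    return sigma_ref * expo_ref / expo_new

sigma_ref = 1.0e-38    # cm^2, hypothetical limit at some fixed WIMP mass
expo_ref  = 70.0       # kg-days, a CDMSlite-scale exposure (assumed)
expo_new  = 70.0 * 30  # pretend payload x livetime grows ~30x

print(f"scaled limit: {naive_limit(sigma_ref, expo_ref, expo_new):.1e} cm^2")
```

In practice, backgrounds break this naive 1/exposure scaling, and the overburden question moves the ceiling rather than the floor, but it shows why payload is the first-order knob.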
So, riffing off that, when do we see CDMS in space, or on the Moon?

I'm not answering that question.

No, no, I mean, is it hypothetically possible to put one of these cryogenic detectors up there, the solid-state ones, or is that just not feasible even with an unlimited budget? Space and cryogens don't mix very well.

That's right. You lose a lot of exposure. Actually, you know what, Tali has done some of this. He's taken sounding rockets, put a cryostat and a small detector on them, and shot them up into the upper atmosphere. This is Tali Figueroa at Northwestern, and I think Dan is actually working with him now. I'm not sure it's the same project or the same scope, but you can do that sort of thing, where you send a rocket up with a small detector and just collect data. Your exposure is going to be very small, but this is something that's kind of new. I guess it's about four years old now, but people are developing these very small, very precise, very good resolution, very low threshold detectors. Just to give you a sense of scale, these are centimeters across and millimeters thick. So that's the type of thing you could put on a rocket and shoot up. Doing a full SuperCDMS-style experiment in space is harder, and I don't know if that's a good idea. The trick would be figuring out the balance. The cosmic ray background is going to be murder, and the question is how much shielding you need and from what. If you look at these kinds of experiments, there's a lot of poly to get rid of the neutrons and a lot of lead to get rid of the gammas, but a lot of those gammas come from the rock, from the cavern itself, from the fact that the construction isn't going to use clean materials everywhere, or from the dilution refrigerator you buy off the shelf from someone. So that's a good question. Naively I'd say no, because you'd have to send up the shielding, but then the question is how much of the shielding you actually need. Could you get away with basically a dilution refrigerator and just a wall of lead to shield you from one direction, kind of like James Webb does with its thermal shielding? I don't know; that's a very interesting question. It's not outside the realm of possibility, but it would have been a good Snowmass study. A missed opportunity for the SMU rocketry club, I guess.

All right, well, we've definitely run over our time here, so we're going to go ahead and wrap this up. Let's thank Rob one more time. Thank you, Rob. Okay.