Okay. Hi. My name is Conal Murray. I work at the IBM T.J. Watson Research Center. Please excuse the allusion-heavy title that I chose, but I did it to emphasize how remarkable it is that we've taken something as unwieldy as quantum mechanics and been able to harness it into a viable computing platform. And that's thanks to several decades of work on mitigating the various loss mechanisms that are present. In this talk I'll briefly describe some aspects of quantum computing and focus on the superconducting transmon qubit. I'll discuss some limitations that we see in the current performance metrics, in particular the various loss mechanisms at play. I'll discuss how we can actually predict a certain aspect of dielectric loss within these systems, through a combination of finite element and analytical approaches. And then I'll close with some discussion of the prospects we expect from this near-term quantum computing regime.

If you look at the history of classical computing, it's been pretty remarkable as well: starting at the top right with one of the first integrated circuits about 50 years ago, where the device sizes were roughly 10 microns, we've now gotten down to the bottom image, where device lengths are less than 10 nanometers. Again, that's thanks to the ingenuity and resourcefulness of many scientists and engineers, and it's been achieved basically through lithographic scaling. The problem is that, for the past decade or so, scaling alone has been insufficient to deliver the performance gains we require. Part of that is simply the nature of the materials with which we work: the oxides used to insulate between the gate and channel regions are so thin that they're no longer insulating. In other cases we have to tune the strain within the device channels to improve the carrier mobility. And in some cases we've moved from a planar geometry to a three-dimensional one, so that we get better electrostatic control of the regions of interest. But even having accomplished all these feats, we're now hitting a thermodynamic hard limit, referred to as Boltzmann tyranny: there is a fundamental amount of voltage that has to be supplied to these systems so that we can distinguish between the on and off states. And because the power densities are climbing dramatically, we have to look at different paradigms. I would argue that quantum computing is the leading contender in that race.

So I'm showing a picture here that many of you have seen in various forums, describing the difference between classical and quantum computing. On the left we have classical bits that can take one of two states: either current is flowing, corresponding to the one state, or we have the off state. In the picture on the right, known as a Bloch sphere, we have those two states as well: the ground or zero state, and an excited or one state. But you can also have a combination of these states, states whose amplitudes are positive, negative, or imaginary. It's this superposition that can exist within your quantum bit.
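To put that in symbols, a general single-qubit state on the Bloch sphere can be written in the standard form (nothing here is specific to any one qubit technology):

$$
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
$$

where the complex amplitudes $\alpha$ and $\beta$ carry the positive, negative, or imaginary weightings just mentioned.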
Now, superposition alone is not sufficient to harness the true power of quantum computing. For that, you have to exploit a second property called entanglement. This is the interaction that Einstein famously referred to as spooky action at a distance. It is this correlation between the different qubits that really expands the solution space you can interrogate; in fact, that space expands exponentially with the number of qubits you have. A question I often hear is: when will a quantum computer replace my laptop? When will I get a quantum computer in my car? I would not personally like to ride shotgun with a dilution refrigerator, for a variety of reasons, but perhaps the simplest is that there are many problems that simply don't require that level of computing. Classical computers are still very good at number crunching and the other tasks we typically use them for. We really should focus on the problem sets on the right here, problems with a myriad of possible solutions that you need to explore, such as optimization or balancing financial portfolios. Those are the settings where the ability to use quantum computing matters. More importantly, it's really the synergy between the two approaches, a sharing of the load between classical and quantum computing, where we will see the greatest benefit in the near term.

There are many different platforms on which you can perform quantum computing. They all have their strengths and disadvantages with respect to scalability or the complexity associated with the processing, but for today's talk I'll focus on superconducting circuits. A couple of metrics that you may hear about when describing the performance of qubits refer to how well they can retain a particular state of interest. The plot on the left depicts what we call energy relaxation: the amount of time it takes, once we put the system into the excited state here at the bottom of the Bloch sphere, for it to lose that energy and relax back to the ground state. The second parameter is associated with the coherence of the device, how well you can keep it in a particular state of interest. That involves dephasing, which you can see as a rotation around the azimuth, along the equator of the Bloch sphere. The actual T2 metric that we use is a combination of both of these, the dephasing time as well as the T1 time.

Now, if you look at the evolution of T2 times over a period of just two decades, it's been remarkable: we've seen a five-order-of-magnitude increase in that metric. That's thanks to advancements in the types of materials used in these superconducting qubits, the processing required to create them, how they're laid out in the system, and, just as importantly, the environment in which these qubits reside.

So now I'll focus on one specific type of superconducting qubit; again, there are a variety of flavors to choose from. The transmon, whose name is an abbreviation of transmission-line shunted plasma oscillation qubit, is essentially a combination of LC resonators. What I'm showing up here is a collection of shunting capacitors made from niobium metallization, connected by a region containing aluminum / aluminum oxide Josephson junctions. The Josephson junction is really the key to how we can interrogate and address these structures.
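As a point of reference, the textbook way to write this down (a standard form, not an IBM-specific model) treats the transmon as an LC oscillator whose linear inductor has been replaced by the junction:

$$
\hat{H} = 4E_C\,\hat{n}^2 - E_J\cos\hat{\varphi},
$$

where $E_C$ is the charging energy set by the shunting capacitance, $E_J$ is the Josephson energy, $\hat{n}$ counts Cooper pairs, and $\hat{\varphi}$ is the phase across the junction. The cosine term is the nonlinearity; transmons operate in the regime $E_J/E_C \gg 1$.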
What you see on the right, then, is the energy well that's generated. You need the ability to address the energy gaps between these levels uniquely, and the Josephson junction provides a nonlinear inductance that allows you to address the one-to-zero transition on its own. Of course, we need to communicate with these qubits from the outside world, so interspersed between the qubits and the environment are superconducting microwave resonators. Those, as you can see in the depiction on the bottom right, do have equal spacing between the different levels you can access, so you can't address an individual state with them alone. But it is the ability to communicate with the qubits through these resonators, using gigahertz radiation, that allows the operation.

Now, there is a laundry list of different loss mechanisms at play within our systems. I put a few here in cartoon form on the right and list a number of them as well. With the exception of the bottom bullet point, these are our known unknowns. The qubit state is so fragile that just about every element you can think of will, at some point, impact the response of your qubits: vortices that move through the superconducting metallization, the possibility of quasiparticles tunneling through the junctions, the way phonons and radiation interact with all these structures. Two prominent researchers in this field have even proposed that cosmic rays impact our structures. So that gives testament to how many things we have to be concerned about in the operation of our qubits.

The one I'll focus on primarily is known as dielectric loss. You often hear about two-level systems, or TLSs. They are ubiquitous in nature, and particularly in our structures; the qubit is a two-level system as well. As the qubit radiates electromagnetic energy, that energy washes over regions of your device and its environment, and some of that environment will contain structures like the ones in the cartoon up here on the right: features intrinsic to the dielectrics you have to have, or contamination that may be present. They may also be associated with defects within, say, your substrate. All of these essentially siphon off energy from your qubit. The degree to which that occurs depends on the alignment of the dipole of a particular TLS and on its frequency, i.e., its energy, in comparison to the energy of the qubit system. And we know that, unfortunately, we actually see greater absorption, a greater impact of these effects, at lower temperatures and lower powers, which has to do with the fact that TLSs can saturate given enough power. So in the regime we really care about, the single-photon regime, we see what I'm plotting on the right: certain dielectrics near and around your system have a bigger effect on the overall qubit performance. A big part of the problem is determining which of these TLSs are intrinsic to the materials you need to have and which are associated with contamination that may be present. But because there are so many of them, a whole bath of TLSs, we can use a continuum approximation to describe the response. That's simply the loss tangent: the ratio of the imaginary to the real component of the effective dielectric constant of the material, with some representative values shown on the right.
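Written out explicitly (this is the standard definition, nothing device-specific):

$$
\tan\delta = \frac{\operatorname{Im}(\epsilon)}{\operatorname{Re}(\epsilon)} = \frac{\epsilon''}{\epsilon'},
$$

so the lossier the dielectric, the larger the fraction of the electric-field energy stored in it that gets dissipated each cycle.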
Now here's a chart that lists different materials we often see within our structures, as well as the primary culprits we think control their impact. One of the most salient points here is that, for a given material such as aluminum oxide, the amorphous version has a much higher loss tangent than the single-crystal or substrate version. That makes sense, because with an amorphous structure you have so many more configurations that don't require much energy to move between the two states of interest, so you would expect a greater impact from those structures.

Where can we see this loss manifest itself? There are a variety of places where it's already been experimentally demonstrated. On the left is the fact that, even in a pristine system, if you unintentionally damage the system you can introduce TLSs, and therefore dielectric loss. One step often used in cleaning materials in regions of interest is ion milling, and you can see on the left a change in the overall performance. What I'm showing is the quality factor, which is related to the T1 time, and you can see a significant dip in a coplanar waveguide resonator simply from introducing excessive ion milling.

Another example involves the Josephson junctions themselves. Although in principle they're lossless elements, clearly they will contain some degree of TLSs. One example looked at using epitaxial metallization to then grow either amorphous aluminum oxide barriers or fully epitaxial versions. What you can see between the two plots on the right is that there are many more striations in the energy landscape in the top image than in the bottom one. Those correspond to the different places where you have unwanted coupling between your qubit and the TLSs that are present.

We also know that the choice of material itself makes a difference in qubit performance. Take a nominally equivalent design; in the example on the left we have interdigitated capacitors that shunt the Josephson junction region, and the junctions are again composed of aluminum and aluminum oxide. What we're changing is the metallization of the capacitors and the ground plane. On the right I'm showing the measured performance of those qubits simply as a function of the choice of material. It could be tempting to look at this graph and assign some intrinsic property that correlates, such as the superconducting transition temperature, which also increases as these Q values increase. But probably more important is the fact that the way these materials are deposited can have a tremendous impact on the overall performance: evaporating aluminum, for instance, requires certain organics that may still leave some residue behind.
Even between the sputtered cases, titanium nitride versus niobium, the sputtering conditions are slightly different, and those differences can have an impact as well.

So how do we even try to quantify some of these effects? One thing we've seen heuristically involves the evolution of the transmon design that IBM has used over the years: we've gone from the interdigitated capacitor design shown on the left to monolithic shunting capacitor paddles, and then to paddles that are larger in size and have larger gaps. These are all designed to have nominally equivalent capacitance, but as you'll see, there is a significant difference in the intensity of the electric field energy within these regions (and this is a linear scale), simply due to the geometry that's present.

To quantify this we use a term called participation. Participation simply refers to the relative fraction of electric field energy residing in a particular region of interest within your design. We typically demarcate three regions for this, shown in the cartoon on the right. There is the region between the metallization, in this case niobium, and the underlying silicon substrate: the substrate-to-metal (SM) interface. There is the free surface of the substrate: the substrate-to-air (SA) interface. And there is what sits above the niobium: the metal-to-air (MA) interface. All you have to do is calculate the norm squared of the electric field, weighted by the relative dielectric constant of the contamination layer present, to determine the energy residing in a given region.

Now, the contamination layers present are often very thin, whereas the structures I showed on the previous slide can be microns, if not tens or hundreds of microns, in size, so we have to deal with a big disparity between those length scales. The other complicating factor is that the electric field distributions we care about, particularly very close to the metallization, are singular at the edges, and that poses certain challenges for quantifying them. One technique often used to deal with this (given that we don't necessarily know the thickness of the contamination layer, and may not even know its relative dielectric constant) is to prorate it, converting the volume integral into an integration over a surface. In the plot on the right, what I've done is take a mirror of the metallization surface, the SM interface, and project it a certain distance below that region; you can also project it above. The idea is that you perform the same integral across that sheet and watch how the value changes as you move toward or away from the region of interest, because at the actual metallization edges the value diverges. This approximation assumes that the participation is linearly dependent on the thickness of the material.
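In symbols, this is the standard participation-ratio bookkeeping (normalization conventions vary between papers, so take this as representative rather than definitive): for a region $i$,

$$
p_i = \frac{\int_{V_i} \epsilon_i\,|\vec{E}|^2\,dV}{\sum_j \int_{V_j} \epsilon_j\,|\vec{E}|^2\,dV},
$$

and for a thin layer of thickness $t$ the volume integral is prorated onto a sheet,

$$
p_i \approx \frac{t\,\epsilon_i \int_{S_i} |\vec{E}|^2\,dS}{\sum_j \int_{V_j} \epsilon_j\,|\vec{E}|^2\,dV},
$$

which is exactly the linear-in-thickness assumption just described. Strictly speaking, the field inside the layer must also be rescaled using the dielectric boundary conditions (continuity of the normal component of $\epsilon\vec{E}$), which is where the relative dielectric constants enter.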
One way to determine these values is to use finite element approaches. They are extremely versatile, not only with respect to the geometries you can analyze but also the governing equations you can solve, because you're actually using a variational approach, a weak formulation, in the solution. But the three M's of finite element modeling are meshing, meshing, and more meshing, because your solution is highly dependent on the number of nodes you have: at any given point you are interpolating the solution between the adjacent nodes surrounding it. And that presents a complication when dealing with singular quantities. What I show on the right is a comparison from such a calculation, looking at the substrate-to-metal participation as we decrease the distance between the sheet on which we do the analysis and the actual SM interface. You can see a roll-off in the values predicted by the finite element method. In reality, we know these quantities should behave logarithmically, and so should diverge as the offset goes to zero. So there is a disconnect here, and analytical methods, where available, help us bridge this gap.

Here is one way. I'm color coding the various interfaces of interest within our system, along with the electric fields as they propagate through the system at an instant in time. We take a two-dimensional slice of those shunting capacitors I showed before, where the pads are separated by a gap of width 2a and the pads themselves both have widths b - a. If we assume a quasi-static approximation, which again means a snapshot in time of the electric field distribution, we can use math that's almost 200 years old to convert the volume integral for the electric field energy into a surface integral. More importantly, by using Green's first identity, you can convert the gradient of the potential, which is just the electric field, into the potential itself, so the order of the singularity in the integrand is reduced. The result is an integral that does converge, taken over the surface that encapsulates the volume of interest.

You can see, for a variety of cases here, the dots corresponding to that numerical integration. What I'm plotting is the substrate-to-air participation, and you can see how it responds as we change the various design parameters of our system: as we increase the gap, which corresponds to increasing a, the participation drops; as we increase the outer dimension of the pads, which corresponds to different k values, where k is just the ratio a/b, it drops as well. We also get a good sanity check: for zero thickness of your contamination layer you shouldn't have any participation, and we now have a value that converges there. We can also make some interesting observations with this approach. These values depend on the relative dielectric constants present within your system, and knowing that, you can use the electromagnetic boundary conditions to assess the relative contributions. In most cases the substrate-to-metal and substrate-to-air participations for our dielectric materials of choice are going to be comparable. However, if you look at the metal-to-air component of the surface participation, it's dramatically less than what we see at the other surfaces.
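Coming back to the disconnect between the FEM roll-off and the expected logarithmic behavior: here is a minimal sketch, in Python, of how one might bridge the two in practice, by fitting the analytic log form to FEM sheet values and evaluating it at a physical layer thickness. The offsets and participation values are illustrative placeholders, not real solver output.

```python
import numpy as np

# Hypothetical FEM results: sheet participation p sampled at several
# offsets d (nm) below the substrate-metal interface. Illustrative
# numbers only; real values would come from your FEM solver.
d = np.array([16.0, 8.0, 4.0, 2.0])
p = np.array([2.1e-3, 2.5e-3, 2.9e-3, 3.3e-3])

# Near a metallization edge the participation is expected to grow
# logarithmically as the sheet approaches the interface: p(d) ~ a + b*ln(d).
# Fit that form by least squares.
A = np.column_stack([np.ones_like(d), np.log(d)])
(a, b), *_ = np.linalg.lstsq(A, p, rcond=None)

# Evaluate the fit at an assumed contamination-layer thickness (nm).
t_layer = 3.0
print("extrapolated participation:", a + b * np.log(t_layer))
```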
Now, the good news is that all of these interface participations are collinear, so it doesn't really matter which one you plot when you want to assess the performance of your device. The problem, of course, is that it then becomes very hard to disentangle which one of them is impacting the performance you see. The other piece of good news is that there's an analytical approximation that I was able to derive; the details are in the reference at the top right. With this one formula you can assess a variety of designs, see very quickly what's going on, and confirm that your finite element calculations are giving you something relevant and realistic for your system.

Now, of course, the important thing is to see how this corresponds to what we observe in qubit performance. What I'm showing on the right are measured values for niobium-based qubits across different designs, with some examples on the bottom right. There is definitely a regime where performance is dictated by the level of participation present in the system. You typically only see this in the small devices, because those are the ones with the most participation. But in the regime spanning Mods C, D, and E, where you have the larger structures, something else is impacting performance; otherwise this trend would be linear throughout the entire space of qubit designs. So we have to look at what other mechanisms might be at play.

When we want to ascribe the overall quality factor of our system, there are losses that occur in parallel. What I'm showing in the equation up here is that any particular dielectric loss channel is a combination of the participation and the loss tangent associated with that material, from the table I showed earlier, plus some contributions that probably have nothing to do with participation. The cartoon on the left is just meant to depict what we think are the most dominant effects in the outer shell of this nut that we're trying to shave down; as we mitigate those effects, we start to see the impact of the other aspects. On the right are various values of the participation and loss tangent that we typically see in our systems. You can see, for instance, that although the substrate has a huge participation, we think its loss tangent is extremely small, much smaller in fact than the 10^-7 shown there.

Of course, there are some caveats to this picture of the most prevalent mechanisms, part of which I showed earlier: you can have damage or amorphization within your system, and you can have various extrinsic aspects, impurities and whatnot. Perhaps even more important is that the junction size you're dealing with may also play a definite role. The example I showed earlier with the epitaxial junctions, where you saw so many regions of avoided level crossings, corresponded to a 70-micron-square junction, and in our case we're several orders of magnitude below that. So where these different mechanisms dominate may depend on the particulars of your design.
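For reference, the parallel-loss bookkeeping described a moment ago is usually written in a form like this (the notation is generic; $\Gamma_{\text{other}}$ lumps together the channels that have nothing to do with participation):

$$
\frac{1}{Q} = \sum_i p_i \tan\delta_i + \Gamma_{\text{other}}, \qquad T_1 = \frac{Q}{\omega_q},
$$

so each bulk region or interface contributes in proportion to its participation times its loss tangent, and whichever product is largest sets the ceiling on T1.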
Now, another aspect that's commonly mentioned is how quasiparticle tunneling can impact your overall qubit performance. What I'm showing on the left is a depiction of the density of states in our metallization as a function of temperature; the gray regions correspond to the ambient thermal energy present. As we cool down through the superconducting transition temperature Tc, at intermediate temperatures there is actually still enough thermal energy to break Cooper pairs. A quasiparticle, in this parlance, is simply an unpaired electron, and it causes ohmic loss within your system. As you get to even lower temperatures, particularly those we believe exist at our operating conditions, there isn't enough thermal energy to break those Cooper pairs; however, other non-equilibrium quasiparticle-generating mechanisms are present. What I'm plotting on the right are the predicted T1 values considering only thermal quasiparticles, as a function of temperature; the purple line corresponds to 20 millikelvin, which is essentially our operating region. We also heard from Dr. Sheehan Kerter, who was able to measure the effect of quasiparticle tunneling through the Josephson junction. Her measurements confirmed that the quasiparticle tunneling rate is significantly lower than what we would expect just from thermal quasiparticles, less than around a hundred hertz, meaning that naively you would have T1 times of tens of milliseconds or so if the qubits were limited by quasiparticle tunneling. The net result is that we know this, at least at operating conditions, isn't impacting our qubits.

Up to this point, all I've been talking about are the dielectric materials and the qubits themselves, and they all reside in a very small region within this depiction of what exists inside the dilution refrigerator. The volume fraction of that region is parts per million compared to the entire ensemble necessary to operate your qubit. If we invert this picture, just as a point of reference, the volume fraction required for a similarly ambitious program was parts per thousand. The point is to show how many different components are required to maintain the conditions, the hermeticity, necessary within your qubit environment: the various isolators, attenuators, and amplifiers are just as important to ensuring the performance of our qubits, and they continue to be where we look at issues of thermalization and how to properly shield the qubits from these different sources of loss.

In the last few minutes, I'll briefly touch on some aspects of near-term quantum computing. One metric we typically use to assess the useful work we can derive from quantum computers is the idea of quantum volume. To put it simply, it relates to the largest square lattice of qubits, where one dimension is the number of qubits you can run in parallel and the other is how deeply you can entangle those qubits to perform an operation or algorithm of interest. Equally important is the fidelity associated with the gates and the errors present within those qubits, particularly in this era of near-term quantum computing.
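In the notation IBM published for this metric (my paraphrase of the definition, so treat the details as approximate), the quantum volume is set by the largest square model circuit, m qubits wide and m layers deep, that the machine can execute successfully:

$$
\log_2 V_Q = \max_{m}\, \min\left(m,\; d(m)\right),
$$

where d(m) is the achievable circuit depth on the best m-qubit subset. A quantum volume of 32, for example, corresponds to reliably running model circuits five qubits wide and five layers deep.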
All of those aspects, although they're not necessarily orthogonal dimensions, help define a parameter that is hardware agnostic, and it can tell you which aspects of your qubits are perhaps the most pivotal. One example is on the bottom right: past a certain threshold you can add additional qubits, but if the error rate isn't low enough for them to take part in the entangling process, you will not get a change in the quantum volume.

Thankfully, the quantum volume has increased dramatically over the past three years or so. There are a variety of different layouts that IBM Q offers, and these are publicly accessible: anyone can go online and utilize them. They differ in the number of qubits, of course, but also in how the qubits are interconnected. Ideally you would want the greatest amount of interconnectivity, but not to the detriment of the error that might be introduced by having so many elements in series in these systems. The point is that we've been able to demonstrate an exponential trend in quantum volume, and in fact just earlier this year we announced the Raleigh generation of 28-qubit devices, which demonstrated a quantum volume of 32.

So what can we use these types of devices for? Clearly there are examples involving chemistry, materials science, and machine learning. The problem, again, is that we don't yet have the ability to perform true error correction within our qubits, which is what would give us a fault-tolerant quantum computing system. Instead, we have ways to try to mitigate that error, which I'll show on the next slide. The fact that we have universal computers means that we can program these gates, and we look for algorithms with a short enough depth to give us useful information. I also want to mention that Qiskit is an open-source quantum computing software framework that's being widely adopted, with over 300,000 downloads at present, which you can use to program both simulations and actual experiments on these various devices.

The example of error mitigation I'll show involves calculating the ground state energy of the lithium hydride molecule as a function of the distance between the two atoms. You can see on the left that if we only use a circuit depth of one, we get a trend that does follow what we would expect, but with a significant decrease in the accuracy one would hope for. What I'm showing on the right is that, by using a technique called Richardson extrapolation, you can take the performance at a certain gate time, amplify that time, and then use the results to extrapolate back to what you would expect in the condition of zero error. That corresponds to the bottom plot, where you can now utilize a greater circuit depth to get more accurate information. There are plenty of other examples of this; I would encourage you to access the IBM Quantum website to see how some of these can be applied to your particular research interests.
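As a minimal sketch of the zero-noise extrapolation idea just described (the numbers below are illustrative placeholders, not the actual LiH results from the slide): measure an expectation value at several artificially stretched noise levels, fit a polynomial, and read off the zero-noise intercept.

```python
import numpy as np

# Energies measured at stretched noise scale factors c, where c = 1 is the
# native noise level and larger c means deliberately amplified gate times.
# Illustrative values only, not real data.
c = np.array([1.0, 2.0, 3.0])
E = np.array([-7.70, -7.52, -7.36])

# Richardson extrapolation: fit a polynomial of degree len(c) - 1 through
# the points and evaluate it at c = 0, the zero-noise limit.
coeffs = np.polyfit(c, E, deg=len(c) - 1)
E_zero_noise = np.polyval(coeffs, 0.0)
print("zero-noise estimate:", E_zero_noise)

# With two points this reduces to the familiar E(0) ~ 2*E(c1) - E(2*c1).
```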
With that, I'll conclude. We've gone over different ways in which materials, design, and processing are important to the operation of qubits. We're always looking for the next limiting mechanism present in our systems, but in the meantime we do have near-term quantum computing applications that we can utilize, built on the fact that we've been able to keep increasing our quantum volume. Eventually, hopefully, that will move us from the state of error mitigation to true error correction, which will represent a giant leap in our ability to operate quantum computers. Please access this website if you'd like more information. I'd like to acknowledge the contributions of my colleagues here and the hard work that the entire IBM Q team does. Thank you.