And I will be the chairman of this last session this morning, so without further ado, let's start with David Ferguson's talk, "Pushing the Limits of Analog Computing and Quantum Annealing: Design Progress and Theoretical Concepts."

Thanks, it's great to be here. I assume the microphone is working well; I'll lean in here, and my voice might get a little elevated, since I'm easily excited. It's been a very exciting conference: a lot of things have really made me sit up and take notice, and hopefully I'll be able to continue that trend here a little. I'm hoping this talk will be a bit of a synthesis. It touches on a lot of ideas that other people have presented and researched in detail, so I hope many research groups find it interesting. Let's see here, how to advance the slide... let's just go with the right arrow... ah, I know what I need to do: click here, and then like that. Yeah.

So, let me tell you a little bit about Northrop Grumman. One of the most exciting things happening in science right now, of course, is the James Webb Space Telescope, and Northrop Grumman is the prime contractor on that. There's this great YouTube video called "29 Days on the Edge" where they talk about how they had to fold this thing onto an Ariane 5, blast it off from French Guiana, let it shake, and then deploy its sun shields and everything like that. It's a fantastic video, and it's just amazing that it all worked; it makes you proud to be part of Northrop Grumman. There's a quote in that video that says the James Webb is "the perfect example of scientific desire driving engineering capabilities to new frontiers." And I feel very similarly about the era of computing that we're in, where there's a growing demand for novel compute capabilities, so I borrowed that and put this as the title of this slide:
that Northrop Grumman's computing research is the perfect example of modern computing imperatives driving engineering capabilities to new frontiers. And we've got a fantastic team. Our team is centered in Baltimore, but we have an expert team out in Colorado, and we may be looking to expand in a couple of other locations. One of the fun things about Northrop Grumman, of course, is that over the years it was formed from a lot of different companies. One of our parent companies is Westinghouse, and Clarence Zener, of Landau-Zener fame himself, was actually director of science at Westinghouse for almost 15 years. So he's really in our academic legacy, and of course this kind of Landau-Zener physics is a big part of what we're doing here this week. We do a full range of superconducting technologies: we do classical digital technologies, we do superconducting qubit technologies, and of course we do adiabatic optimization, the full range. So it's quite an exciting time there at Northrop Grumman.

As I was saying, the slowing of traditional scaling metrics and the shift to AI workloads are generating new imperatives for ultra-efficient machine-learning accelerators. Even apart from quantum mechanics, there's kind of a Cambrian explosion of new compute technologies, and I've highlighted two of them here. This kind of wafer-scale computing is certainly interesting to keep an eye on, and of course people have heard of these TPU pods and all the things that Google and DeepMind are doing. We've also heard a bit about the coherent Ising machines this week; that's an incredibly fascinating technology we're keeping our eyes on. And of course, down here you have superconducting technologies that are doing very promising things on the quantum side as well.
So, just as an overall summary: especially in the area of ultra-low power, I think that superconducting annealing, both quantum and classical, offers promising compute frameworks that are going to help meet these new imperatives.

The things I'm going to talk about today are a bit of a digression from the way that people often think of these circuits, in terms of two-level systems. I'll kind of insist on focusing on the fundamental variables of the circuit, the lumped-element charges and fluxes. The reason for this is that working at Northrop Grumman you get to work with a lot of graybeards who have been working in this field forever, and you ask them: how does an engineer think about the variables of the circuit? What is your classical model? I'll tell you a little about how, as a theoretical-physicist type, I've interpreted what they've said. Then I'm going to talk a bit about analog artificial neural networks, and I'll explain what I mean there. Then I'm going to ask: if you start from something like a primitive compute capability on this analog neural-network side, something similar to an individual instruction operation, can an individual operation have a quantum advantage, even independent of the scaling limit? That is, at the existing scale at which instructions are implemented right now, can you have a quantum advantage for a primitive operation?
And then I'll tell you a bit about the design work and the simulation work that we're doing at Northrop Grumman to help us move this technology forward.

All right, here's a pretty busy slide, but it has two essential parts that are kind of repetitive. If you look down here on the bottom part of the screen, you have a picture of a little array of six tunable flux qubits that you might want to anneal by putting flux into what we call the X loops. And you can imagine mutual interactions between neighbors, perhaps implemented with tunable couplers, so the gamma_ij may be tunable. Then you ask: what is the actual Hamiltonian relevant for the charges that build up on these capacitors and for the fluxes stored in these loops? On the classical side, and again this goes back to what I was saying about talking to the engineers, this is what it is: you have the charging energy, you have the inductive energy, you have the Josephson energy, and then you have the mutual interactions between them. And when you write out Hamilton's equations of motion, for instance Q-dot equals minus dH by d-Phi, you actually see that this is equivalent to Kirchhoff's laws. That's basically the fundamental reason the engineers tell you to use this model.
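As a concrete sketch of that equivalence, here is a toy single-junction flux circuit with invented parameter values (not one of the designs in the talk): Hamilton's equations for the conjugate pair (flux, charge) are exactly the voltage and current-conservation relations an engineer would write down.

```python
import numpy as np

# Minimal sketch: one lumped-element flux circuit with
#   H = Q^2/(2C) + Phi^2/(2L) - EJ*cos(2*pi*Phi/Phi0).
# Hamilton's equations reproduce Kirchhoff's laws:
#   dPhi/dt =  dH/dQ   = Q/C                      (node voltage)
#   dQ/dt   = -dH/dPhi = -Phi/L - Ic*sin(...)     (current conservation)
# C, L, EJ below are illustrative values, not a real design.

Phi0 = 2.068e-15          # flux quantum, Wb
C, L = 1e-13, 5e-10       # 100 fF, 500 pH (assumed for illustration)
EJ = 2e-22                # Josephson energy, J (assumed for illustration)

def H(phi, q):
    return q**2/(2*C) + phi**2/(2*L) - EJ*np.cos(2*np.pi*phi/Phi0)

def step(phi, q, dt):
    # symplectic (semi-implicit Euler) update keeps the energy drift bounded
    q = q - dt*(phi/L + (2*np.pi*EJ/Phi0)*np.sin(2*np.pi*phi/Phi0))
    phi = phi + dt*q/C
    return phi, q

phi, q = 0.1*Phi0, 0.0
E0 = H(phi, q)
for _ in range(20000):
    phi, q = step(phi, q, dt=1e-15)
print(abs(H(phi, q) - E0)/abs(E0))   # small relative energy drift
```

The point is just that integrating the circuit Hamiltonian and integrating the engineer's Kirchhoff equations are the same computation.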
It's just what you learn in electrical engineering class: these current-conservation rules accurately model these circuits, and the description is valid outside the two-level approximation. Even if you have things in your double well plus some remaining harmonic degrees of freedom that form collective modes of your entire circuit, using continuous variables to describe the dynamics of your system captures a much larger range of that physics.

To get to the quantum side: flux and charge are conjugate variables, like position and momentum, so you just put hats on everything and it becomes a quantum-mechanical Hamiltonian. This is of course the Hamiltonian from which the two-level approximation can be derived, if you want to go there, but for this talk we're going to stay with this quantum Hamiltonian and this classical Hamiltonian, and we'll talk about using them directly.

And of course, there's a really nice theory that gives a very natural transformation between classical and quantum physics: the Wigner-Weyl transformation. Here's an example of a Wigner function, where you take the plus state and see this nice behavior where it's in the left and right wells simultaneously, with these interference fringes here; when you calculate the marginal probability distributions, you begin to see some sort of wave-like behavior. So one of the interesting things to consider is: if you do try to describe things in terms of your density matrix, in an operator description, and you apply this Wigner-Weyl map, you can basically say, oh, these operators get mapped to probability distributions like this one here. And then, in the same way that you
know how it's always annoying, when you need Hermite polynomials in quantum mechanics, to decide whether they're the probabilists' or the physicists' convention, because they're useful in both contexts, this is a generalization of that. USC has done a lot of great work defining what these adiabatic eigenstates are as a function of the anneal; you can transform them into operators, potentially use them as a basis for your classical probability distributions, figure out how probability sloshes between them on the classical side, and then derive some fundamental differences between how quantum and classical systems behave. This is just one of the ways that looking at things from this perspective opens new vistas on understanding the computational power of these two frameworks.

One thing I wanted to touch on very briefly is one particular way that the choice of abstraction level can be important for understanding novel computational resources, and that's the case study of nonstoquasticity that we've heard about some this week. There was this great experiment by D-Wave showing that if you couple qubits both by a mutual inductance and by a fixed capacitive coupling, then in a two-level description there is indeed a degree of nonstoquasticity that cannot be removed. However, at the circuit-Hamiltonian level, the Hamiltonian is fully stoquastic without any offset charges. And so you have this question of which is primitive: the two-level description,
or the circuit-Hamiltonian description that I've been talking about? You know from the adiabatic theorems that somehow plus XX is fundamental for universality; at least, that's the construction people have proven gives you a universal construction. So if you go to the two-level description, is that losing too much?

Further, there's this nice proposal that may be getting revived, these phase-slip qubits, and they have the nice property that even in the two-level description they are strongly nonstoquastic, meaning you can have large nonstoquastic interactions between these two qubits while the equivalent single-qubit X fields are zero. And I think that property is very important: a qubit that has strong, novel interactions, both XX and ZZ, while the single-qubit fields are zero. I think that's really going to unleash a lot of new design capability.

One final thing I'll say is that I think a very under-appreciated result has been coming out of the collaboration between Rutgers and UMass Boston. They have examples with this kind of charge-sensitive island here, very similar to the annealing-capable phase-slip qubits that Northrop Grumman, together with the QAO team and of course Lincoln Lab, have just demonstrated some capabilities for; we've demonstrated the charge tuning, and we've just demonstrated that you can anneal these. Now, their island isn't tunable, but what they did show is that when the island has a larger gap, from slightly thinner aluminum, and is surrounded by a superinductance, they get charge stability, i.e., quasiparticle poisoning events become rare.
So here, this turning from blue to yellow is an example of a quasiparticle poisoning event, but they get quasiparticle charge stability on timescales of an hour, and that's a huge timescale; you don't see it in any other system. So, making sure you have these two capabilities together, the gap engineering and surrounding things with superinductance, I think it's a really promising moment to go back and look at these nonstoquastic systems, and we heard some great work from UCL on how you might begin to utilize those.

All right, here's just a little summary of where we're going to go. We're going to focus here on the top, between the classical circuit model (that's why it's green) and the quantum circuit model; we'll talk about that a little bit. And of course, what people are more used to is going in this one-way direction down here to the qubit approximation, and then also looking at the classical spin models. But that last link is way more tricky: making a nice Wigner-Weyl dual description between a two-level system and a spin model is hard, because a lot of spin models have spin one, not spin one-half, as their first non-trivial representation. Anyway, that's a little side note.

Yeah, so here's an example of a number of interacting classical flux qubits, which I'll call AFPs, for analog flux parametrons; they go by many names, and you can just think of them as flux qubits operating in the classical regime. You can see some pretty standard behavior: it's oscillating back and forth as the charge and flux slosh against each other, and if you look over here at the phase, it does what you would expect as you go through the anneal. Then they
decide which well to go in based on the interactions, going through this Ising instability.

One of the things that I think D-Wave has convinced us of at this conference, and they've been saying it for a long time, is that quantum Monte Carlo is a study of the equilibrium properties of superconducting quantum circuits, but your dynamics can be adiabatic, i.e., you can change your fields slowly with respect to any intrinsic frequency scale, and that doesn't necessarily mean you're going to be thermalizing all the time. Really, the Schrödinger evolution is primitive. But the same is true of classical dynamics as well. With Hamiltonian dynamics, if you adjust your Hamiltonian slowly compared to the plasma frequency, but the RC time constant, the timescale on which you equilibrate, is very long compared to those timescales, you can do things like this: as your well gets wider, your classical flux qubit can undergo cooling, and that's what you see here, the charge fluctuations really reducing as you widen the harmonic well. It's these types of effects that are at the heart of the reason why these classical models are very useful for capturing all the physics that is really there in these circuits; if you just focus on the two-level approximation or on a spin approximation, you lose some of that physics.

All right, so in what way can Hamiltonian evolution give you a computational advantage?
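The cooling effect just described can be seen in a minimal numerical sketch (dimensionless units and an invented ramp, not the talk's circuit model): slowly widening the harmonic well lowers the plasma frequency, the adiabatic invariant E/omega keeps the action fixed, so the energy and hence the charge fluctuations drop.

```python
import numpy as np

# Sketch of classical "adiabatic cooling" of a flux circuit. Raising L slowly
# lowers omega = 1/sqrt(L*C); since E/omega is an adiabatic invariant, the
# energy falls, and with it the charge fluctuation <Q^2> = C*<E>.
# All values are dimensionless and illustrative.

rng = np.random.default_rng(0)
C = 1.0
n = 256
q = rng.normal(0.0, 1.0, n)        # ensemble of thermal-ish initial charges
phi = rng.normal(0.0, 1.0, n)      # ... and fluxes

dt, steps = 0.005, 50000
q_rms_start = np.sqrt(np.mean(q**2))
for i in range(steps):
    L = 1.0 + 24.0 * (i / steps)   # slow ramp: omega drops by about 5x
    q = q - dt * phi / L           # dQ/dt   = -dH/dPhi = -Phi/L
    phi = phi + dt * q / C         # dPhi/dt =  dH/dQ   =  Q/C
q_rms_end = np.sqrt(np.mean(q**2))
print(q_rms_start, q_rms_end)      # charge fluctuations shrink
```

No bath is involved: the "cooling" is purely Hamiltonian, which is exactly the point being made about high-Q circuits.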
One of the things that's really nice about modeling classical circuits, of course, is that you can model them up to giant scales; you don't have this problem where it's going to be super hard to model the dynamics of a hundred qubits. That's not a problem for classical dynamics: the simulation resources basically scale linearly, although if you have a dense matrix of interactions it can get a little more complicated. But so where can you find a computing advantage? After all, CMOS is CMOS, it's been engineered forever. Well, one of the downsides of CMOS is that it's an intrinsically dissipative technology. I would just say that the resistive dynamics are the dominant form of the dynamics: you're in the regime where the terms that come from any sort of inductive energy are minimal compared to the terms that come from the resistance, and so you're always intrinsically dissipative. Whereas for superconducting circuits, we've done a ton of engineering now to get them into this very high-Q, low-resistance regime. So the evolution can be very nonlinear, you can do lots of interesting computing things, but you can basically do things with Hamiltonian evolution, and Hamiltonian evolution is perfectly reversible.

So basically what you can do is generalize the Bennett construction for reversible computing. Instead of having some primitive logically reversible compute operations up here, you take any sort of irreversible compute you want to do, say an AND gate or whatever, and change it into something that is formally reversible
by keeping extra ancillas: instead of having one output you have two outputs, and with those two outputs you can know what came before, so you can generalize any irreversible evolution into something that's logically reversible. Then you imagine doing all of that computing, and you get your answer, but you also have all these zero-initialized auxiliary bits that have become your garbage bits. So what do you do? You take your answer, you get more zero-initialized bits, you do one final controlled-NOT to copy your answer down here, and then you implement all the reversible operations in reverse and set all of the garbage back to zero. So this was Bennett's observation: you can actually get the answer to a computation without generating any entropy. And so this is what we're going to evaluate. We're going to see whether there is an interesting compute from Hamiltonian evolution of these AFPs. We can go to giant scales and really look toward application relevance, and then in simulation you can ask how close to this reversible limit you can get, and whether you can formulate application relevance from that.
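Bennett's compute-copy-uncompute pattern can be sketched in a few lines; the bit-level gates here are generic textbook stand-ins (a Toffoli for the reversible AND), not a hardware proposal from the talk.

```python
# Sketch of Bennett's compute-copy-uncompute trick with a reversible AND
# (a Toffoli gate: ancilla ^= a & b). Names here are illustrative.

def toffoli(state, a, b, t):
    state[t] ^= state[a] & state[b]

def cnot(state, c, t):
    state[t] ^= state[c]

# bits: [in0, in1, ancilla (would-be garbage), output]
state = [1, 1, 0, 0]

toffoli(state, 0, 1, 2)   # compute: ancilla now holds in0 AND in1
cnot(state, 2, 3)         # copy the answer to a fresh zero-initialized bit
toffoli(state, 0, 1, 2)   # uncompute: run the computation in reverse

print(state)              # [1, 1, 0, 1]: ancilla restored to 0, answer kept
```

Every gate is its own inverse, so running the compute step again after the copy returns the ancilla to zero without erasing anything, which is why no entropy need be generated.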
Is it good enough that you'd say, wow, if we really engineered that, we'd have a compute capability that would blow away existing technological approaches? And I do think there's this notion of what type of Moore's-law scaling you'd get if you had a technology that's not limited by Landauer's principle, where you could get arbitrarily efficient in your compute going forward. If you made the right investments, you could have exponentially growing efficiency over an extended period of time with no fundamental limit, whereas if you use irreversible technology, you'll be coming into that Landauer limit pretty quickly.

All right, so now to motivate what we want to use these analog dynamics for. At a very high level, what is a modern taxonomy of classical computing architectures? I'm not even putting Ising machines on here, and I'm not doing anything quantum mechanical; these are just things of which there's a ton out there right now, okay? And here is just an example of an instruction-set architecture.
This is by far the dominant technology. Except, here's a fascinating example where a team has taken these classical flux qubits, in this case they call them QFPs, and actually implemented a four-bit CPU with them. And it's really interesting to think about. A lot of times people say, oh, just compare the D-Wave to running it on a CPU; well, I think it's really fun to think about running it on this CPU, where you do have things annealing and de-annealing and all that, and you can see all these flux quanta moving around, and you really think about how information has to go from your register to your CPU and move back and forth. You really see how physically taxing it is to move all this information around all the time. Whereas what D-Wave does, of course, is super efficient: it's just, let's do it all at once, boom, and it's done. You can really contrast those and see that advantage. And all the way over on this side is neuromorphic computing. There, I would say the major challenge is that even though it's ubiquitous, it's what's happening in all of our minds right now, and it's obviously super powerful, evolution seems to have figured that out, harnessing it from a technological perspective is something we're still quite not sure how to do. These systems are asynchronous; how do you do the training, and all that type of stuff?
It's still a bit beyond our engineering capabilities. Even at the level of a nematode, you kind of understand what the hardware is, but you're not quite sure how to engineer it, how it is that it's learning and doing its thing. So we've ended up with this compromise of artificial neural networks, where it's much more structured. These are the types of networks, and of course there are generalizations that people are exploring, but these are the ones that appear to be most application-relevant in the current era: feed-forward artificial neural networks, where you train up these weights, you need some sort of nonlinear activation, then you feed forward again, another nonlinear activation, and of course you then want to implement this in some larger architecture. But I want to ask the fundamental question: what's the best way to implement this? Right now, the way it's implemented is that you reduce all of this back to your instruction-set architecture; maybe you get a TPU or some other novel implementation, but at the fundamental level it's going to be something that looks like this. What if instead we implemented this in an analog way? That's what I'll describe, and in particular you can ask the question: if you do this analog dynamics and then combine it with the Bennett construction, is it reversible or not, and how far away from reversibility are you? All right, so what simulation did we do?
Basically, we took a simulation where you take an initial image, and that initial image has some weights; the weight matrix multiplies the initial image through some sort of interaction matrix, a mutual-inductance matrix perhaps. Then this blue arrow says the first layer of AFPs anneals. Okay, and annealing is a very nonlinear transformation: it functions a lot like a hyperbolic tangent. If you look at which well the AFP ends up going in as a function of its activation level, it looks like a hyperbolic tangent. Then you imagine a second layer of weights between that first layer of AFPs and the second layer of AFPs, and you anneal the second one. The second layer is where I've imagined doing the Bennett construction: I copy the information out, then reverse everything back, and see how much entropy is generated. Okay, and here's how these simulations work. Although we assume the resistance is very large, you start by sampling phase space from a thermal distribution, so you do have some noise, at least characterized by a thermal distribution initially, and after the dynamics you see how much the RMS charge fluctuation has increased. Then you can say, all right, I'm going to need enough resistance to bring that level back down so things don't blow up and get out of hand, and from that you can say how much dissipation you need for this technology to work. And then, finally, we'll ask the question: if you then put the hats back on your charge operators, is that a good thing or a bad thing? Is there a way of quantifying that?
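The claim that the anneal acts like a hyperbolic-tangent activation can be illustrated with a toy Langevin simulation (all units and parameters here are invented, not the talk's circuit values): ramp up the barrier of a tilted double well while a slightly noisy phase particle settles, and look at the noise-averaged final sign as a function of the tilt.

```python
import numpy as np

# Sketch of the anneal as a nonlinear activation: dynamics in
#   U(phi) = -a(t)*phi^2/2 + phi^4/4 - eps*phi,
# with the barrier a(t) ramping up. Averaged over noise, the final sign
# versus the bias eps is a soft, saturating (tanh-like) curve.

rng = np.random.default_rng(1)

def anneal_sign(eps, samples=500, steps=4000, dt=0.01, gamma=0.5, T=0.05):
    phi = rng.normal(0.0, 0.05, samples)
    v = np.zeros(samples)
    for i in range(steps):
        a = 2.0 * (i / steps)                  # barrier ramps up
        force = a * phi - phi**3 + eps         # -dU/dphi
        v += dt * (force - gamma * v) \
             + np.sqrt(2 * gamma * T * dt) * rng.normal(0, 1, samples)
        phi += dt * v
    return float(np.mean(np.sign(phi)))

zero, plus, minus = anneal_sign(0.0), anneal_sign(0.5), anneal_sign(-0.5)
print(zero, plus, minus)   # near 0, near +1, near -1: saturating nonlinearity
```

Small biases give a roughly linear response and large biases saturate at plus or minus one, which is the tanh-like behavior the network training below relies on.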
All right, so here it is again. We used the Fashion-MNIST data set, just to be a little bit fashionable. So here's this weight matrix. Initially we trained a standard conventional artificial neural network; instead of using rectified linear units, which are more fashionable these days (constant at zero, then linear here), we used the hyperbolic tangent, which was fashionable at one point. You can basically just plug that in and train it up. Then you take that final information out of the network and do a softmax; well, sorry, one more fully connected layer first, which I imagined being done in standard digital technology or whatever, and then the softmax. I didn't make that part analog, though perhaps you could; I just focused on these parts. So first I did everything digitally with a standard artificial neural network and trained up all these weights.
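The train-digitally-then-run-analog workflow can be sketched on toy data (the talk used Fashion-MNIST and a real framework; the data, layer sizes, and learning rate below are all invented stand-ins): train a small tanh network with plain gradient descent, then swap in a steeper, more anneal-like activation with no retraining, and check that classification accuracy survives the swap.

```python
import numpy as np

# Toy stand-in for the talk's workflow: digital training with tanh, then an
# "analog" forward pass whose hidden activation is a steeper saturating
# function, reusing the trained weights unchanged.

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (400, 8))
y = (X[:, :4].sum(axis=1) > X[:, 4:].sum(axis=1)).astype(float)  # toy labels

W1 = rng.normal(0.0, 0.3, (8, 16))
W2 = rng.normal(0.0, 0.3, (16, 1))
for _ in range(1000):                          # crude full-batch training
    h = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))        # sigmoid readout
    err = p - y[:, None]                       # cross-entropy gradient
    W1 -= 0.5 * X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)
    W2 -= 0.5 * h.T @ err / len(X)

def accuracy(act):
    h = act(X @ W1)
    return float(np.mean(((h @ W2).ravel() > 0) == (y > 0.5)))

digital = accuracy(np.tanh)
analog = accuracy(lambda z: np.tanh(4.0 * z))  # steeper, anneal-like response
print(digital, analog)
```

The point mirrors the talk's observation: because the anneal's response is tanh-like, weights trained against tanh transfer to the analog nonlinearity without retraining, at least in simple cases like this one.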
The weight matrix is just completely dense. And then by hand, because in simulation you can do anything and don't have to worry about layout rules, I just took that dense matrix, multiplied it by the right coefficient so it was actually in units of picohenries or whatever, and made it a mutual interaction between each of these pixels and each of these hundred AFPs, and then mutual interactions between each of these hundred AFPs and each of these other hundred AFPs. And with no additional training at all, I just ran the analog dynamics of the AFPs, and I saw that the classification accuracy did not decrease. So you don't even need to do any additional training: you can do all of your training on your standard digital artificial neural network, and on your analog artificial neural network it just works, at least in this example.

All right. That all-to-all mutual matrix is a little bit artificial, though. So what you can do instead is imagine a coupler network: your image biases one side of the coupler network, you have a whole bunch of couplers within the network, and your AFPs sit on the other side; it doesn't have to be exactly that geometry. All of the nodes of your coupler network, especially if their frequency is higher, can be imagined to mediate an interaction between your image and your AFPs, and there are some good approximations for this: you can do a Born-Oppenheimer-style approximation, where instead of clamped nuclei you have a clamped source, and derive an effective mutual interaction.
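A sketch of why the mediated coupling comes out dense and involves a matrix inverse: adiabatically eliminating the fast coupler degrees of freedom in a sparse geometry, image to couplers to AFPs, yields an effective image-to-AFP mutual of the schematic form M_eff = M_a^T L_c^{-1} M_b. The matrices below are random stand-ins and all circuit prefactors are omitted; this is not a real layout.

```python
import numpy as np

# Sparse geometry:  image --M_a--> couplers (inductance L_c) --M_b--> AFPs.
# Eliminating the (faster) couplers gives a dense effective mutual
#   M_eff = M_a^T @ inv(L_c) @ M_b   (prefactors omitted; illustrative only).

rng = np.random.default_rng(0)
n_img, n_coup, n_afp = 6, 10, 4
M_a = rng.normal(size=(n_coup, n_img)) * (rng.random((n_coup, n_img)) < 0.3)
M_b = rng.normal(size=(n_coup, n_afp)) * (rng.random((n_coup, n_afp)) < 0.3)
L_c = np.eye(n_coup) + 0.05 * rng.normal(size=(n_coup, n_coup))
L_c = (L_c + L_c.T) / 2                   # inductance matrices are symmetric

M_eff = M_a.T @ np.linalg.inv(L_c) @ M_b  # dense effective image-to-AFP coupling
print(M_eff.shape)                        # (6, 4)
```

Because M_eff is a differentiable function of the physical couplings, this layer can be trained end to end, which is what defining a custom layer in a framework like Keras accomplishes in the talk's workflow.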
That effective coupling is dense between the image and the AFPs, but it does involve an inverse matrix, so it becomes a little bit challenging to train; still, you can do it. So basically you plug this formalism in, you define yourself a custom layer in Keras or whatever, you train it up digitally, and then you implement it with your analog dynamics. And you see again that it works, for a physically realistic coupler network.

But then you do the estimate: is this a promising technological approach? And you say, well, let's give the CMOS guys an advantage. They're thinking about how to do this in ultra-low-power ways, and one of the biggest things is this von Neumann bottleneck of bringing all this information out of the memory, back to the memory, back to the CPU or wherever, moving it around. So instead you imagine leaving all the weights on chip and just running your image data through, and you avoid the von Neumann bottleneck; this kind of in-memory compute is a hot area of research. Now, they don't implement any of the nonlinear activations on chip, so not everything is accounted for here, but they get 98% accuracy, a little bit higher than the 96% accuracy we were seeing over here, and they get really awesome energy per classification: this many joules for all the operations necessary to do a classification at 98% accuracy. For this trusty little laptop over here, I of course did all of those calculations I was talking about; again, I only trained it up to 96% accuracy, and looking at the power meters on that machine,
I can figure out that for that 96% accuracy, just doing the inference, none of the training, it's about 750 microjoules. This did have all the nonlinear activations and everything, so this was the full compute. But it's still many orders of magnitude more than this one. For the analog AFP estimate, where you do that step of looking at how much the charge fluctuations increased and ask how much energy it took to implement the classification at 96% accuracy, you can see that these energy scales are down around 10^-21 joules. Of course, there's still the step I was talking about where you'd have to take that information off chip and do that one final softmax and so on, so some things aren't accounted for. But you can just see these energy scales: if you can make this the dominant part of your compute, it's going to be a really promising technology even without any quantum physics. These types of analog operations, especially if you can combine them into a depth of compute, have a really promising future.
OK, so now on to how this is related to this conference a little bit more, and then I'll close up. One way you can measure how bad things are in the classical dynamics is to take even just a single AFP and anneal, un-anneal, anneal, un-anneal, over and over, and measure whether there is chaos during this dynamics or not. I won't get into the details, but basically, if you have a low Z bias, meaning you're bringing up the bump of your potential while you're sitting up there on top of the hill, you can imagine that's the dangerous case: your phase particle gets brought up and starts rolling down, and that's where you begin to get chaos. Whereas if you're tilted to begin with, you bring up the bump and you're just sitting out over here, and that's really not very chaotic. But you can mitigate this by including resistance and dissipating things away.

You do have to double-check that what's growing in phase space is actually chaos and not just growth of the average energy, and yes, we do see that it is chaos. But this is a potential example where quantum effects might yield an advantage: you put the hats on your dynamics, and maybe these chaotic, uncontrollable effects are at least reduced. So that would be one way that you put the hats on and the performance gets better; you engineer into a regime where the hats are relevant, I guess I would say, and you get better performance than what you were expecting from the classical side.
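Here is a toy illustration of that barrier-top story; this is my own sketch, not the actual AFP equations of motion. A classical particle starts near the top of a quartic potential whose barrier is slowly ramped up. Without damping, a tiny difference in starting position survives in the late-time motion; with damping, both trajectories collapse onto the same well minimum:

```python
def max_late_separation(x0a, x0b, gamma, steps=200_000, dt=1e-3, tail=20_000):
    """Integrate two copies of x'' = -dV/dx - gamma*x' with
    V(x, t) = -a(t)*x**2/2 + x**4/4, a(t) ramping 0 -> 1 (toy model only),
    and return the largest separation seen over the final `tail` steps."""
    xa, va, xb, vb = x0a, 0.0, x0b, 0.0
    sep = 0.0
    for i in range(steps):
        a = i / steps                          # barrier slowly rises
        va += (a * xa - xa**3 - gamma * va) * dt
        xa += va * dt
        vb += (a * xb - xb**3 - gamma * vb) * dt
        xb += vb * dt
        if i >= steps - tail:
            sep = max(sep, abs(xa - xb))
    return sep

sep0 = 1e-6                                    # tiny initial position difference
undamped = max_late_separation(1e-3, 1e-3 + sep0, gamma=0.0)
damped   = max_late_separation(1e-3, 1e-3 + sep0, gamma=0.5)
# With dissipation the trajectories become indistinguishable; without it,
# the initial uncertainty persists in the late-time dynamics.
```

The same qualitative effect, dissipation suppressing the sensitivity that appears when the phase particle sits on top of the rising barrier, is what the resistance does in the circuit.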
Earlier I was talking about these ways of describing things in terms of either an operator basis or a probability-distribution basis. You basically want to look at these transition rates, classical and quantum, and compare them: what is their fundamental difference? And if you look at these high-weight operators, these GHZ-style operators, and ask, OK, what if I start in the all-zeros state and drive with, say, this P1 operator, a direct high-Hamming-weight transition from all zeros to all ones: is there some fundamental difference in this rate between the classical and the quantum? When you do this calculation, you end up evaluating something that's directly equal to Mermin's inequality. And you can basically say, wow, you get an exponential difference between the quantum and the classical in the rates that you can achieve.

If I had appreciated the work of D-Wave before this conference, I would have made this slide all about the differences in those scaling exponents, because I think that's way more compelling. But there are fundamental reasons why there's just a difference between the transition rates you can achieve quantum versus classical, so there are good reasons to put those hats on everything. And again, here's a review paper. I looked at this DQA, and in that case you really do need coherence between your computational states; it's not just a ground-state calculation.
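To make that concrete, here is the smallest case, n = 3 qubits, checked numerically; this is the standard Mermin construction, not something taken from the slides. Any local-hidden-variable model obeys ⟨M⟩ ≤ 2 for M = XXX − XYY − YXY − YYX, while the GHZ state reaches 4, and the quantum-to-classical ratio grows exponentially as qubits are added:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Mermin operator for three qubits.
M = kron3(X, X, X) - kron3(X, Y, Y) - kron3(Y, X, Y) - kron3(Y, Y, X)

# GHZ state (|000> + |111>) / sqrt(2).
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

mermin_value = np.real(ghz.conj() @ M @ ghz)
print(mermin_value)   # → 4.0, twice the classical bound of 2
```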
You do need to have excited states and maintain coherence between things. And of course we heard a little bit about this RFQA; that's another one where you're going to these excited states, at least virtually, so for these types of errors, having coherence is probably important.

So, as Steve Dissler described, one of the things Northrop Grumman did recently is we generalized the QEO testbeds using some of the same design resources. (Five minutes? Two minutes. OK, I do want questions, so I will stop in two minutes.) So we generalized those designs, replacing the single quantum layer, and we made a five-by-five grid of fluxonium, with a lot of breakout circuits that have a higher probability of working as well. This was a bit of a speculative design, but it's still compelling nonetheless, so it's going to be really exciting to look at. In terms of setting an annealing time here, it's a little bit faster than a microsecond; during this conference I've come to think that maybe even down in that 10-to-100-nanosecond range might be the best target. The dephasing times of the original QEO testbeds were a bit better than that, but you would still typically expect to get an error, whereas with fluxonium you're in a completely different regime, where you could imagine that quite often you complete a compute without any dephasing.

One of the key things we utilized was the full circuit Hamiltonian, because these collective modes of a galvanically coupled circuit probably are important, instead of going to the two-level approximation.
We used that full circuit Hamiltonian, and one of the nice things about matrix product states is that the local basis can be pretty big and it really doesn't hurt you in terms of memory requirements, the way it does in a product basis, where things grow exponentially. In particular, we were able to find the ground state of this type of testbed as the testbed got bigger, and we saw about quadratic scaling in both time and memory, and we used these tensor-network simulation resources to project out what kind of coupling strengths we were going to be able to achieve with these testbeds. So I encourage people to look at those.

This is just my summary slide. I think this is a really compelling approach. I will say that I've only been talking about interactions between layers, because that's what's currently implemented as the most promising classical approach. But interactions within a layer, like D-Wave has, are an additional capability, and almost by a variational argument the computing power can only be enhanced by that; I don't have a fully framed reason why yet, though. Thank you very much.

[Chair] We have time for a couple of questions, if there are any further points. (Need some transition music, you know.)

[Question] For the AFP energy description: in a physical system you would need the control circuitry, right, and the refrigerator to put the system in, so how much does that consume?

[Answer] That could be the dominant cost, depending on how big a compute you did. But what I would say is, some of the key capabilities... So, adiabatic CMOS is a concept; they do work on this at Sandia, and there one of the key things is resonant clock distribution.
They have a whole approach to developing these resonant clocks, so that's going to be a key aspect, right? Because you can't just have your clock dissipating all your power; that would not be good for a reversible technology. And then you do need the ability to program at pretty low power too; you don't want to lose everything there. Eventually you hope... everybody says training is the dominant cost for machine learning, but hopefully not forever; at some point you generate a network that's actually useful. And there are actually quite promising ways to do programming near-reversibly, and we've demoed some of those on the QEO program as well; I'd be excited to talk with you about those ones as well.

[Question] On one of your first slides, the non-stochastic slide, you showed some new design there on the right. What is that? I wasn't able to catch the details.

[Answer] Yeah, so this one is pretty much a fluxonium qubit. It's not tunable, so you can't anneal it, so you can't use it for annealing. But instead of having one weak junction, you put two right in a row. You can kind of see it down here in this image: here are these superinductors, and they use this Manhattan style for their superinductances.
And then this little tiny trace here has a little bit thinner aluminum, and, because aluminum is weird, that disorder enhances the gap, so this island has a little bit higher gap. So a quasiparticle gets generated, it comes in here, it goes through these superinductors, and the superinductors really are diffusive for quasiparticles, so they slow it down like rumble strips, and then it doesn't have enough energy to get up here. At least that's what the data seems to be showing. They do show that without implementing the difference in gap for the small island, they see the standard behavior, both curves at the same time, which is what we saw on QEO. So you need the gap engineering, and then you get this nice single-2e periodicity, except that if you raise the temperature back up, now the quasiparticles do have enough energy to begin quasiparticle poisoning again.

To generalize this to an annealing context, you'd have to go back to these JPSQ designs, where the devices are now tunable, but the key thing is that, instead of having just one junction here, you need the superinductances to help with calming the quasiparticles; at least that's what this data is telling you. OK, so let's thank