So, to begin, I want to thank the organizers for the opportunity to speak today, and I want to thank everyone for joining. We'll be talking about our next-generation quantum system. D-Wave has been building processors for some time now, delivering successively more powerful quantum annealing systems. Our current-generation Advantage system has over 5,000 qubits and 40,000 couplers. So the question is: what's next? To motivate the design choices we are making for our next-generation system, I'm going to borrow heavily from work that my colleagues have already presented this week. It's that work, and the work of the broader community, that is really driving the development of our annealing technology. So to some degree, this presentation is a summary of that work and how it pertains to the development of our next-generation systems. The key metrics are fairly straightforward: the number of qubits, which determines the maximum number of problem variables that can be posed directly to the hardware; the connectivity, which is the arrangement of programmable couplings between the qubits; the energy scale, which describes the strength of interaction between the qubits; and the level of coherence, which determines the time scale over which the system interacts with the environment. Starting with those key metrics, I want to begin with some work by my colleague Jack Raymond, in which he uses a hybrid method to solve spin-glass problems on a 3D lattice. In the case I'm showing, it's a native lattice on the Pegasus topology of our Advantage systems. The reason to use a hybrid algorithm here is to be able to solve problems that are larger than can fit directly on the hardware. The point of this slide is to show that the QPU, the quantum processing unit in the quantum annealing hardware, provides a clear advantage in the context of this hybrid algorithm.
To show that, I have this orange curve on the bottom, in which the QPU is used as part of the hybrid algorithm. In this hybrid method, subgraphs of the larger problem that do fit on the processor are chosen and run on the processor, and the solutions are inserted back into the full problem; that's done iteratively to find solutions to the full problem. When the Advantage QPU, which solves the subgraph problems, is replaced with a classical alternative, in this case simulated annealing, performance is degraded. This highlights the value that the QPU brings. What I haven't shown here, but you can find in the reference, is that the hybrid method with the Advantage QPU also beats simulated annealing running on the full problem rather than within the hybrid method. What I really want to focus on with this work is the comparison of performance between our current-generation Advantage systems and the previous generation, the D-Wave 2000Q. What this plot shows is that there's a clear improvement in performance in this hybrid algorithm. For example, at a fixed relative error, the Advantage system can find solutions faster; or, within a fixed time, the Advantage system can find solutions with significantly lower relative error. The reason boils down to the larger number of qubits and the higher connectivity in the Advantage processor. The conclusion is that for our next-generation system beyond Advantage, we'd really like to continue scaling the number of qubits and the connectivity. The hybrid algorithms focused on here are particularly important for commercial applications, where problems tend to be larger than can be posed directly on the processor. The value that the QPU brings to those hybrid algorithms will only increase with larger scale and higher connectivity in the native QPU topology. Next on the list of key metrics is coherence.
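To make the iterative scheme concrete, here is a minimal, hypothetical sketch of a large-neighborhood hybrid solver of the kind described above: repeatedly carve out a small subproblem, solve it optimally (here by brute force, standing in for the QPU), and stitch the result back into the full spin configuration. The subproblem selection rule and sizes are illustrative assumptions, not D-Wave's actual algorithm.

```python
import itertools
import random

def energy(h, J, s):
    """Ising energy E(s) = sum_i h_i*s_i + sum_(i,j) J_ij*s_i*s_j."""
    e = sum(h[i] * s[i] for i in h)
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

def solve_subproblem(h, J, s, sub):
    """Brute-force the induced subproblem, with all other spins clamped.

    Stand-in for the QPU call; enumerates all assignments of the spins
    in `sub` and keeps the lowest-energy full configuration.
    """
    best = None
    for assignment in itertools.product([-1, 1], repeat=len(sub)):
        trial = dict(s)
        trial.update(zip(sub, assignment))
        e = energy(h, J, trial)
        if best is None or e < best[0]:
            best = (e, trial)
    return best[1]

def hybrid_solve(h, J, n_iters=50, sub_size=4, seed=0):
    """Iteratively improve a random start by optimally re-solving subgraphs.

    Monotone non-increasing in energy: the current assignment is always
    among the candidates enumerated by solve_subproblem.
    """
    rng = random.Random(seed)
    variables = list(h)
    s = {i: rng.choice([-1, 1]) for i in variables}   # random initial state
    for _ in range(n_iters):
        sub = rng.sample(variables, sub_size)          # pick a subgraph
        s = solve_subproblem(h, J, s, sub)             # optimal on that subgraph
    return s, energy(h, J, s)
```

Because each subproblem is solved to optimality with the boundary clamped, the full-problem energy never increases from one iteration to the next, which mirrors the "insert solutions back and iterate" structure of the hybrid method in the talk.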
Earlier in the week, my colleague Mohammad Amin presented results showing coherent quantum annealing in a 1D chain. This work demonstrates that at short annealing times, short enough that the Ising spin system does not have time to exchange energy with the environment, we enter a regime in which the dynamics follow ideal Schrödinger dynamics. In this case, on a 1D chain, the Kibble-Zurek kink density decreases as a power law with annealing time, and the measured kink density follows the theoretical expectation very closely, up to an annealing time at which the system begins to interact with the environment and thermal excitations generate an excess of kinks. That is highlighted by the fact that once you get into that regime, you start being sensitive to the temperature: you start seeing a dependence on temperature that is not present at shorter annealing times, where the system is effectively decoupled from the environment. This work demonstrated that we really can enter the coherent regime. It also shows clearly that if we can increase the coherence of our system, we can continue to follow this theoretical scaling curve and expect improved error rates, since kink density is essentially an error rate, and therefore improved quality of solutions. This provides a strong motivation for us to continue increasing coherence. Taking this a step further, my colleague Andrew King looked at 3D spin-glass instances in the coherent regime and again found excellent agreement with theoretical predictions at short anneal times, which deviates at longer anneal times when the system begins to interact with the environment. Again, this shows that if we can increase coherence, then solution quality will continue to follow this scaling out to longer anneal times.
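As a hedged illustration of the quantity being measured above: a "kink" (domain wall) in a 1D chain is a pair of neighboring spins that are anti-aligned, and the coherent-regime Kibble-Zurek theory referenced in the talk predicts the mean kink density falls off as a power law in annealing time. A sketch of estimating that density from annealer samples might look like this (the function names are my own, not from any D-Wave API):

```python
def kink_density(sample):
    """Fraction of neighboring spin pairs (s_i, s_{i+1}) that are anti-aligned.

    `sample` is a list of +1/-1 spins along the open 1D chain.
    """
    bonds = list(zip(sample, sample[1:]))
    return sum(1 for a, b in bonds if a != b) / len(bonds)

def mean_kink_density(samples):
    """Average kink density over many annealer output samples."""
    return sum(kink_density(s) for s in samples) / len(samples)
```

Plotting `mean_kink_density` against annealing time on log-log axes is how one would check the power-law scaling described above.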
In particular, this work is exciting because the scaling followed by the system in the coherent regime, in excellent agreement with theory, is better than the scaling exhibited by classical alternatives, in this case simulated annealing and simulated quantum annealing. That means that as we increase coherence, not only will our solution quality improve, but we will continue to improve relative to classical algorithms. So clearly we'd like to have higher coherence. We'd also like to have a higher energy scale. Again, the energy scale characterizes the strength of interaction between the qubits, and it sets the gap sizes in the system. Once we do enter the regime in which thermal excitation becomes important, a higher energy scale directly results in a lower probability of excitation, even in thermal equilibrium. So that would be nice to have as well in our next-generation system. That brings me to our next-generation system, which we have called Advantage2. It will have more than 7,000 qubits. It will have higher connectivity: degree 20, or 20 couplers per qubit, up from 15 on our Advantage system, with a new arrangement of connections between qubits that we're calling the Zephyr topology. It will have a higher energy scale, and it will have higher coherence. For the Zephyr topology in particular, you can find details in a technical report published on our website. The Advantage2 system is due for release in the 2023-2024 time frame. In fact, as of one week ago, we have a prototype that we have made available to our customers via our Leap quantum cloud service. This early prototype is a small-scale version of the Advantage2 architecture with just over 500 qubits. It does feature the new Zephyr topology with higher connectivity, and it also has a higher energy scale.
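The claim that a higher energy scale suppresses thermal excitation even in equilibrium follows directly from the Boltzmann factor. As a sketch with illustrative numbers only (the specific gap and temperature values are my assumptions, not system specifications): for a two-level system with gap dE at temperature T, the equilibrium excited-state occupation is p = 1 / (1 + exp(dE / kT)), which shrinks as the gap grows.

```python
import math

def excitation_probability(gap_ghz, temperature_mk):
    """Equilibrium excited-state probability for a two-level system.

    gap_ghz: energy gap expressed as a frequency in GHz (E = h * f).
    temperature_mk: temperature in millikelvin.
    """
    h = 6.62607015e-34      # Planck constant, J*s
    kB = 1.380649e-23       # Boltzmann constant, J/K
    x = h * gap_ghz * 1e9 / (kB * temperature_mk * 1e-3)
    return 1.0 / (1.0 + math.exp(x))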
We have shown in a technical report, which you can also find on the website, that it already outperforms Advantage in several empirical case studies. This is something you can go and test yourself, comparing against our existing Advantage systems, if you sign up for our Leap service. That Advantage2 prototype now online was fabricated in what I'm referring to on this slide as our rapid-architectural-development fabrication stack. In parallel with the architectural development of our next-generation system, we have been developing a new, lower-noise fabrication stack. The point of this slide is to give a snapshot of progress in the development of that lower-noise fab stack, which we will use for the full-scale Advantage2 system. What we show here are low-frequency flux noise measurements, and also measurements over the range of frequencies that macroscopic resonant tunneling is sensitive to; the peak width here is a measure of the integrated flux noise. With qubits fabricated in our new fabrication stack, we see significantly lower low-frequency flux noise, a 7x reduction; a 3x reduction in integrated noise, reflected in the macroscopic resonant tunneling peak width; and an order-of-magnitude reduction in higher-frequency flux noise at 2 GHz. This is very exciting, because we expect that once we've integrated this lower-noise process with our new Zephyr architecture for the full-scale Advantage2 system, we will see benefits from the scaling with coherence, and performance will get even better than what we have seen in the early prototype. Also at the beginning of the week, my colleague Cathy McGeoch presented on the road to quantum utility. The notion here is that what our customers really care about is the comparison of our system with classical alternatives.
That's the notion of quantum utility: does it perform better, rather than an emphasis on scaling. Cathy described several milestones that we have achieved, starting with milestone zero, in which no overheads of the QPU are included. We have also demonstrated milestone one, in which we include programming and readout overheads, and progress on milestone two, which includes the indirect overhead of minor embedding, meaning that the classical alternative is solving the smaller unembedded problem while the QPU is solving a larger embedded problem, required to construct a graph with the full problem connectivity. So what's next on this road to quantum utility? We think the Advantage2 system will go a long way towards answering that, with more than 7,000 qubits, our new higher-connectivity Zephyr topology, and a higher energy scale, all of which we have demonstrated with the prototype available for testing now, plus the significant improvements in coherence we have seen in this snapshot of our new fabrication process, still in development. Putting those together, our new architecture and our new fabrication process, scaled up to more than 7,000 qubits, is what we're working on now. We expect to release the full system in the 2023-2024 timeframe. That's it. Thank you. Questions?

Thank you so much. Is it on? Okay. Thank you so much for the nice talk. It's great to hear how you're improving the flux noise. I'm just curious: did you change the parameters of the qubit, such as the critical currents of the junctions, or are the enhancements in coherence more just from the improvements in your fab processing?

What I will say is that in this particular work, the parameters of the qubit and the geometry of the qubit are the same for these measurements. So we're doing a faithful, apples-to-apples comparison in terms of the effect of the environment on the qubits.
I won't go into the details of the qubit parameters. I will say that in our next-generation architecture, the increases in connectivity and the increases in energy scale are achieved with the new qubit design.

Thank you for the nice talk. My question is: in the current D-Wave machines, the user can access annealing times mainly on the microsecond scale. I was curious whether there is any plan for D-Wave users to be able to access annealing times on the nanosecond scale in the future?

We would very much like to release that capability to our users, and it is something we are looking at; there are a lot of technical issues with making it available in a general way to external users. But what I'd really rather give our customers is higher coherence. If you increase the level of coherence in the system, then you won't need to go to these very fast, technically challenging anneal times: you'll be able to achieve coherent quantum annealing even at longer annealing times. That also, of course, leads to improved performance, as you can follow the scaling law out to longer anneal times. So really, the main focus for our next-generation system is improving coherence.

All right, let's thank the speaker again.