Hi. Yes, I'm Evan Jeffrey, and I work at Google. The reason you see a UCSB logo here is that I used to work at a lab at UC Santa Barbara before Google started this project, and they hired several of us from the lab to build the Google scalable quantum hardware team. In particular, this device here, I don't know if you can see it, actually says UCSB, because it was fabricated while we were at UCSB, so they get some credit for this as well. And here I just have a couple of pictures of chips that we've made in our lab for various purposes. So the goals for this talk are perhaps slightly ambitious. I want to explain to you what quantum computing is, what Google is trying to do to build a quantum computer, when you should be worried about quantum computing putting your current RSA and elliptic-curve cryptography at risk, and what Google hopes to do with a quantum computer. Because I want to make absolutely clear, we are not trying to crack your codes. The lawyers definitely don't want me to give the wrong impression there. So I'm going to give you maybe the shortest introduction to quantum computing ever. The difference between quantum and classical computing is illustrated here. In classical computing we have bits; you can have multiple bits, with exponentially many combinations as you increase the number of bits; and you have the standard gates that you learned about in undergraduate classes. And at least for logic, rather than long-term storage, data is universally stored as the voltage or charge on a transistor circuit. This shows a DRAM cell, but there's SRAM and logic gates as well. In quantum computing, well, we put these funny brackets on the 0 and 1. And then, for a single qubit, we also have these other states that look like 0 plus 1, 0 minus 1, 0 plus i1, 0 minus i1. And then, just like in classical computing, you can also have multi-qubit states.
And there's exponentially many basis states here as you increase the number of qubits. One big difference is that no one has built one of these yet, so nobody has decided whether they should use the equivalent of vacuum tubes or transistors or relays. Here are four different implementations, and someone out there thinks each of these is the best way to build a quantum computer. We're building this superconducting one right here, so we think that's the best. Then what do you do with a quantum computer? Why is it different? A given state is represented by a vector like this. You can kind of think of these amplitudes as probabilities, but they're not, and I'll explain why. These two parameters, theta and phi, describe a complex unit vector, and you can map it to a point on a sphere, which is a nice visualization for a single qubit. Our gates for a single qubit are now linear operators from the group SU(2). And again, for two qubits, we have these four basis states and we can have any linear combination of them. There are many possible two-qubit gates, but the only ones that people are interested in, for the most part, are the CNOT and the CZ. And again, it's just a matrix multiplication to apply a two-qubit gate. The thing that makes these amplitudes different from probabilities, different from a stochastic computer, is that they can be negative. A matrix multiplication is multiply and sum, so elements of that sum can have different signs, and they can cancel and you can get nulls. And this is one of my favorite pictures from all of physics. It's a double-slit diffraction pattern done with electrons rather than photons, which is what you'd normally see, and it's indistinguishable. You wouldn't know that it was electrons rather than photons unless I told you, because they behave the same way.
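To make the cancellation concrete, here is a toy state-vector calculation in plain Python. It's purely illustrative, not anything from an actual control stack: states are lists of two amplitudes, and applying a gate is just multiply-and-sum, where the elements of each sum can differ in sign.

```python
# Toy single-qubit state-vector arithmetic (purely illustrative).

def apply(gate, state):
    # Matrix-vector multiply: elements of each sum can cancel.
    return [sum(gate[r][c] * state[c] for c in range(2)) for r in range(2)]

s = 2 ** -0.5
H = [[s,  s],
     [s, -s]]          # Hadamard gate: note the negative amplitude

zero = [1, 0]          # |0>
plus = apply(H, zero)  # equal superposition of |0> and |1>
back = apply(H, plus)  # the |1> amplitudes cancel: s*s - s*s = 0

print(plus)            # ~[0.707, 0.707]
print(back)            # ~[1.0, 0.0]
```

Applied twice, the Hadamard undoes itself: the |1> component interferes away exactly, which a machine whose amplitudes were ordinary (non-negative) probabilities could not do.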
If you want a longer introduction, I highly recommend this comic strip by Saturday Morning Breakfast Cereal, written in collaboration with Scott Aaronson, who's now a professor of theoretical computer science at UT Austin. It's an extremely long comic strip, but it actually dispels all of the worst misconceptions about quantum computing, so again, I highly recommend it for a slightly more thorough introduction than I just gave. That's all I'm going to tell you about the fundamentals of quantum computing. I'll get a little more into what's actually required to make it in a bit, but this is the system we're using: superconducting qubits. This, by the way, is an actual photograph taken in a microscope, not a drawing. These plusses here, in this row, each of those is a qubit. It's aluminum on, well, this says silicon; this particular device is actually on sapphire, but we've moved to silicon. There's a little island of aluminum here and there's a sea of aluminum around it. And then right down at the bottom there's a device called a SQUID, a superconducting quantum interference device. It's made of a tunnel junction, or two tunnel junctions actually, which makes this a nonlinear circuit. So each of these devices is essentially a nonlinear oscillator, and that is enough to allow us to do qubit operations. The resonant frequency of each of these devices is between five and six gigahertz, and each one has two control lines. On one of them we send microwave pulses at that five-to-six-gigahertz frequency, and that, if I go back to here, lets us do an arbitrary single-qubit rotation on this sphere, by applying a microwave pulse of a given amplitude, phase, and frequency. To the second line we apply a current, which changes the frequency of the qubit. So I said they're between five and six gigahertz; we can frequency-tune them. That's part of our particular approach; not all qubits are tunable.
And the reason we do that is that's the way we do two-qubit gates. If we bring two adjacent qubits to the same frequency, there's an interaction between them, and it turns out that that makes a controlled-Z gate, which is one of those two-qubit gates I talked about before. Just like a NAND gate is universal for classical computing, arbitrary single-qubit rotations plus a controlled-Z gate are universal for quantum computing. So whenever you see a quantum algorithm, it can be decomposed down into those elements, and we have all of those elements in this device. One of the nice things about this is that it's a single-layer fab. Well, almost single-layer, the junctions have a slight overlap, but this is just aluminum on an insulating substrate. The reason that's important for us is that quantum systems are made of single photons or single excitations of some sort, and they're very prone to noise and decoherence from the environment. Any extra materials, any extra layers you put on your chip, make your decoherence worse. So our goal is to stick with the simplest possible fab: a grad student or a new employee can learn to do this and get functional devices in a week or two in the clean room. Getting the design right takes more effort, but it's not like an Intel CMOS process with God knows how many metal layers and 15-nanometer technology. So this is simple. The problem is that right now our signals can't cross over each other, so the devices we make are limited to this single line, which is, as you can imagine, not a terribly scalable architecture. I'll talk more at the end, if I have time, about what we're doing to get past that. Now, the process of operating this thing: like I said, these are superconducting, so they actually have to be cooled down to about 10 millikelvin to operate with the lowest noise. Going from a device here to something we can measure, this chip is mounted into a sample holder here, which is attached to this copper plate.
We put this copper can around it, and the entire thing goes into a dilution refrigerator, which is a multi-stage cryogenic system that cools down to, hopefully, below 20 millikelvin. With nothing in it, it gets to 10; once we put all of our experiments in, it's not quite that cold. And as I said, there are two control lines per qubit, plus a couple of extra for readout, so for this chip there are 20 coaxial lines going up and down from room temperature to 20 millikelvin. This whole system gets closed up, it's right here, and then there are racks of conventional electronics that apply these microwave pulses and frequency-tuning pulses to the qubits through the coax, to let us do all of our gate operations on the qubits. And then over here we have classical computers that we can sit down at and say, okay, do a sequence that is a rotation on qubit three, a controlled-Z between qubits three and four, and so on and so forth. And it does that; it's pretty nice. So we're taking a brute-force approach. Other people in the field are taking different approaches. There's topological quantum computing, which uses superconducting circuits but attempts to build an error-robust system directly into the qubit. There are people using all kinds of other things. There are people trying to make cold logic, where all of that room-temperature electronics gets put inside the dilution refrigerator. That keeps the cables shorter and improves a lot of things, but you can't go buy arbitrary waveform generators off the shelf that work in a dilution refrigerator. So we are sticking with the brute-force approach: no classical logic in the fridge, and lots and lots of coaxial cables. If you've ever been to a physics lab, the one thing you'll notice is that they just love coaxial cables. We like that too.
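As a concrete check of the universality remark above, that arbitrary single-qubit rotations plus a controlled-Z generate everything, here is a toy calculation showing a CNOT built from a CZ sandwiched between Hadamards on the target qubit. Plain Python, purely illustrative; basis order is |00>, |01>, |10>, |11> with the control qubit first.

```python
# Build a CNOT from CZ plus single-qubit Hadamards (toy 4x4 matrix arithmetic).

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

s = 2 ** -0.5
# Hadamard on the target qubit only, identity on the control: I (x) H
IH = [[s,  s, 0,  0],
      [s, -s, 0,  0],
      [0,  0, s,  s],
      [0,  0, s, -s]]

CZ = [[1, 0, 0, 0],
      [0, 1, 0, 0],
      [0, 0, 1, 0],
      [0, 0, 0, -1]]   # flips the sign of |11> only

CNOT = matmul(IH, matmul(CZ, IH))
for row in CNOT:
    print([round(x, 6) for x in row])
```

The sandwich works because H·Z·H = X: the CZ's sign flip on the control-1 subspace becomes a bit flip on the target, and the result is the familiar CNOT matrix that swaps |10> and |11>.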
And then we have built a sophisticated software stack, so that when a theorist comes to us and says, can you run this gate sequence, it automatically turns that into the correct waveforms. It also does all the calibration. The conventional approach here is that a grad student sits down in front of the computer and says, okay, measure the frequency of qubit one, and writes that down; measure the frequency of qubit two, and writes that down. That works for three or five, maybe 10 qubits, but the problem is that these things drift faster than grad students can calibrate them, so we need automated calibration, and you need more and more parameters to do more and more accurate control. Grad students are not scalable; that's the first thing we needed to solve. Okay, so that's our physical approach. Why is it so hard? Classical computing has a lot of things going for it. There are many levels of robustness built into a classical computer. The DRAM cell I showed before has 10 to the five electrons per bit; we work with a single excitation, at energies much smaller than that, a small fraction of an electron volt. Classical logic can use feedback: an SRAM cell has two cross-coupled inverters, one side on and one off, and you're constantly measuring, amplifying, and feeding back each of those two sides, so if it perturbs itself a little bit, it immediately corrects. And then you can build forward error correction on top of that. We don't have this built-in error correction ability in our quantum systems. There's a little bit of it, as I mentioned, in topological quantum computing, but that is, I would say, still in its infancy at best. We can't duplicate our quantum states; there's a theorem that says you can't do that. And just to make it worse, there are two types of errors. I said you have this sphere, so there are two axes on which each qubit can screw up.
So that means your error correction code has to protect against both of those. This sounds pretty bad. I'm not going to go through the no-cloning theorem, but it really is a key issue: you can't clone an unknown state with linear operations. So we know we can't, say, initialize five copies of our quantum computer, run them for some period of time, measure those five copies, take a majority vote, and then re-initialize everything and start over. All of our redundancy has to be initialized at the beginning and then preserved by mechanisms that don't directly measure the state. And in fact, we need to be able to perform gate operations directly on our encoded state. It's sort of like a homomorphic error correction code, if you want to think about it that way. There is a solution; there are actually several quantum error correcting codes that meet these requirements. The approach we are using is called the surface code. It uses a patch geometry like this: a two-dimensional array of nearest-neighbor couplings. That's not what we have right now, but it's not so far off, compared to needing arbitrary long-range couplings between the qubits. This patch of 25 qubits would make a single logical qubit, and you can in fact make the patch bigger to get better error suppression and so forth. The white circles here are the data qubits; together they store a single quantum bit. And these M symbols are measurement qubits. What each one does, in every cycle of our computer, is measure the parity of all of the data qubits adjacent to it. And because it's only a parity operation, it doesn't reveal the actual state of the encoded logical qubit. It only reveals errors, because an error will change the parity at least at two locations. So this is the surface code.
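The parity idea can be illustrated with a toy classical analogy, a 3-bit repetition code. This is a simplification, not the surface code itself, but it shows the same trick: the check results (the "syndrome") locate an error without revealing the encoded value.

```python
# Toy classical analogy: parity checks on a 3-bit repetition code.

def checks(bits):
    # Parity of each adjacent pair of data bits -- the "syndrome".
    return [bits[i] ^ bits[i + 1] for i in range(len(bits) - 1)]

for logical in (0, 1):
    data = [logical] * 3            # encode the logical bit redundantly
    assert checks(data) == [0, 0]   # no error: syndrome is the same for 0 and 1

    data[1] ^= 1                    # flip the middle bit
    print(logical, checks(data))    # both logical values give syndrome [1, 1]
```

Either way the logical bit was encoded, the flipped middle bit trips both adjacent parity checks while the syndrome says nothing about whether a 0 or a 1 was stored.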
And yes, the light measurement qubits are measuring for bit-flip errors and the dark ones are measuring for phase-flip errors, and that's fundamentally why we need a two-dimensional array: so we can correct both of those at the same time. So like I said, this will map to a hardware implementation, but it's pretty unforgiving. This is, as far as I know, the best algorithm known in terms of tolerance to errors. What this plot shows, for different size patches making a single logical qubit, is the physical error rate along the bottom and the logical error rate on the left axis. This point here is a threshold; it's between one and two percent, depending on how you count errors. If your error rate is above the threshold, then making a bigger patch makes your errors worse, and if it's below, making a bigger patch makes your errors better. So obviously we want to be below that. But really, if you're talking about making something that can factor a 2048-bit number, you need logical error rates of 10 to the minus 14, the same sort of thing that people routinely get with classical computers. Our best devices have just under 1% error; we really hope, and think we need, to get that down to about a tenth of a percent. But even at a tenth of a percent, we're talking about patches with hundreds of physical qubits per logical qubit. So that's one factor in the overhead. Then there's the other problem. I said we could do logical gates directly on the error-corrected states. It turns out, somewhat obviously, that there's a discrete set of gates we can do that way, not a continuous set. They're called the Clifford gates. For a single qubit, they mean rotating this octahedron onto itself, and for two qubits, they include the controlled-phase, or controlled-Z, and the controlled-NOT gate. So it's almost everything we need. But the set of Clifford gates is actually a group.
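The threshold behavior in that plot can be sketched with the commonly quoted surface-code scaling, p_logical ~ (p/p_th)^((d+1)/2) for a distance-d patch. The formula and the missing constant prefactor are textbook approximations chosen for illustration, not a fit to real device data; only the ~1% threshold comes from the talk.

```python
# Back-of-envelope trend for logical error rate vs. patch size (distance d).

P_TH = 0.01  # ~1% threshold, as stated above

def logical_error(p, d):
    # Common approximation: suppression is exponential in the code distance
    # when the physical error rate p is below threshold.
    return (p / P_TH) ** ((d + 1) // 2)

for p in (0.02, 0.005, 0.001):          # above, below, and well below threshold
    trend = [logical_error(p, d) for d in (3, 5, 7)]
    print(p, ["%.1e" % x for x in trend])
```

Above threshold (2%), a bigger patch makes the logical error worse; below it (0.5% or 0.1%), each increase in patch size suppresses the logical error by a further factor, which is the crossover the plot shows.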
And that means it's pretty limited, because any series of Clifford gates is just another Clifford gate. So it turns out they're not universal by themselves. We need one more gate, and conventionally it's this pi-over-eight gate, the T gate. It's a rotation that's not in the Clifford set, and you can't perform it directly in the surface code. But there is a way around this. Imagine you had a classical computer that couldn't perform a NOT gate but could do an exclusive OR. If you initialize an ancilla bit to the one state and do an exclusive OR, that's a NOT gate. We can do the same thing: we can initialize any state we want in the surface code, and then do a two-qubit gate that imprints that rotation onto our target logical qubit. So the one- and two-qubit Clifford gates that we can perform, plus this T gate, form a universal set of gates, and there is a way to implement it. Unfortunately, that is where all of the overhead in a quantum computer comes from. This picture isn't meant to be read in detail, but it's basically showing what parts of this giant array of physical qubits have to be activated to do different sorts of gates. The Clifford gates, even if you don't know what these symbols mean, are small. This is a T gate. The reason it's so big is that this whole process of initializing these ancillas and applying them is probabilistic, so you have to iterate many times in order to do it with a 10 to the minus 14 error rate. 99% of a surface code quantum computer is spent on this: initializing these ancillas and performing the T gates. So if you want to figure out how hard, say, a factoring problem is in a fault-tolerant surface code, you need to write it in terms of quantum gates, and then you actually need to figure out how many T gates are involved, because everything else is basically free. And this is a pretty daunting set of numbers, at least for me, when we have nine-qubit chips in the lab.
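The classical XOR analogy above, spelled out in code. It's trivial on purpose; in the surface code, injecting a prepared T state through a two-qubit gate plays the role the ancilla bit plays here.

```python
# No NOT gate available, but XOR plus an ancilla prepared in state 1 acts as NOT.

def xor(a, b):
    return a ^ b

ancilla = 1                              # prepared resource "state"
for bit in (0, 1):
    print(bit, "->", xor(bit, ancilla))  # 0 -> 1 and 1 -> 0: exactly a NOT
```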
To factor a 2048-bit number, you need several thousand logical qubits. That's pretty obvious: you might need to add or multiply two of them, so you need a few registers that can each hold a 2048-bit number. Just from the surface code overhead, we now need 10 million physical qubits to encode those state registers. But we need 250 million physical qubits to implement the T gates. And with a hypothetical computer with those 250 million physical qubits, we estimate, at the 500-nanosecond cycle time that we currently operate at, it would take you about 10 hours to do this. So if you wanted to build a computer for factoring, which again, we don't, with 0.1% errors, this is sort of what you're looking at. It's a big device compared to what we have. But in terms of the fundamental physics, which is the coherence and the ability to do high-fidelity gates between the qubits, we're pretty close to what we need. There's just tons of engineering work, a Moore's Law type of scaling: we have to increase our densities, increase the amount of control electronics, and so forth, to get to something of this size, where you guys would have to be worried about it. Pretty much every part of the stack, from the classical software that users operate with all the way down to the interconnects on the chip, is going to require tons of engineering work. But there are actually lots of companies working on this: Google, IBM, Northrop Grumman, and several startups like Rigetti Computing; I'm sure I can't name them all. They all have slightly different approaches, but there are lots of people working on this for various reasons. I'm now confident that this can work. If someone wants to build a quantum computer badly enough, they will, and it seems like people want to build it. But I'll give some rules of thumb in engineering research.
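The arithmetic behind those numbers, as a sketch. The logical-qubit count and per-patch size here are assumed round numbers chosen to reproduce the ~10 million figure from the talk, not values from a specific analysis; the 250 million total, 500 ns cycle, and 10-hour runtime are the talk's own estimates.

```python
# Back-of-envelope resource estimate for factoring a 2048-bit number.

logical_qubits = 4_000          # a few registers holding 2048-bit numbers (assumed)
physical_per_logical = 2_500    # surface-code patch size at ~0.1% error (assumed)
data_physical = logical_qubits * physical_per_logical
print(f"data qubits: {data_physical:,}")                 # ~10 million

total_physical = 250_000_000    # talk's estimate once T-gate ancillas are included
print(f"qubits devoted to T gates: {1 - data_physical / total_physical:.0%}")

cycle_s = 500e-9                # 500 ns surface-code cycle time
cycles = 10 * 3600 / cycle_s    # ~10 hours of runtime
print(f"error-correction cycles: {cycles:.1e}")
```

The striking part of the estimate is the split: the encoded registers account for only a few percent of the physical qubits, and essentially everything else goes to preparing and injecting T states.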
People have been saying quantum computers will be here in 10 to 15 years for at least 10 to 15 years, and there are a few other examples of technology that doesn't seem to get closer as fast as it should. But I'm pretty confident, even just based on some of the work we've seen in the last year or two. Okay, so why does Google want to do this? Shor's algorithm does not have a lot of commercial applications, at least not legitimate ones. We are interested in quantum simulation, quantum chemistry, and classical optimization. These are all big industries where we think there's a lot of room for improvement. Quantum simulation is kind of natural, and it's very expensive. People have told me that 30% of the non-classified DOE supercomputing time is spent doing some form of quantum simulation, because of this exponentially large Hilbert space that you have to keep track of; the memory requirements are very intense. That is something that is not hard for a quantum computer to do. One of my colleagues is interested in quantum chemistry. The Haber-Bosch process consumes one to two percent of global energy use, and it's used to generate nitrogen compounds for fertilizer. Plants do it in dirt, whereas we have to heat things up to 500 degrees C and pressurize them. Even a small improvement in efficiency there, which is catalyst research, has potentially huge benefits, but we can't simulate even this one chemical reaction in a quantum-accurate fashion with current classical computers. Simulations of high-temperature superconductors are also extremely difficult; that's another part of what's driving that supercomputing usage. Now, this next part is a bit more controversial.
D-Wave has made a lot of very extreme claims about how their devices can solve NP-hard problems in polynomial time, which is wrong. But there is some evidence that we can do better than simulated annealing for solving optimization problems with certain features; in particular, where you have an optimization problem with sharp local minima like this, quantum tunneling can help you. We can make synthetic problems where quantum annealing using the D-Wave system, and this was not done by me, this was done by my colleagues in LA, outperforms simulated annealing, because you can trick simulated annealing into always going into a bad state. Here's a plot of that. An important caveat is that there are special-purpose algorithms that are better than simulated annealing for many, many problems. Simulated annealing is, to my knowledge, the best known general-purpose optimization algorithm that doesn't depend on your problem at all. So if you don't know anything about your problem, you use simulated annealing; if you know something, you use a more specialized tool. We can do better than simulated annealing for some problems, so that is a potential application. I'll just go to my conclusions. Fault-tolerant quantum computing is a ways off. I forgot to mention this, but the nice thing about quantum chemistry and optimization is that, unlike factoring, you don't necessarily need exact solutions. There are systems where, even if there's noise in the simulation, you can potentially get useful results out of it, and there's a good chance you'll even know that your results are useful just by looking at them. So we hope that we can do some applications in these fields before reaching the fault-tolerant threshold, and without making 250 million physical qubits. These applications are much more interesting to us than factoring anyway.
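For reference, the general-purpose baseline being compared against looks like this: a minimal simulated-annealing sketch on a toy landscape with one sharp local minimum. All parameters (cooling schedule, step size, the landscape itself) are arbitrary choices for illustration.

```python
# Minimal simulated annealing: accept improvements always, accept uphill moves
# with probability exp(-delta/T) while the temperature T decays.

import math
import random

def energy(x):
    # Toy landscape: sharp local minimum near x=2, true minimum at x=0.
    return min((x - 2) ** 2 + 0.5, x ** 2)

def anneal(steps=20000, seed=0):
    rng = random.Random(seed)
    x, t = 5.0, 2.0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)
        delta = energy(cand) - energy(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        t = max(t * 0.9995, 1e-3)      # geometric cooling with a floor
    return x

print(round(anneal(), 2))  # settles near one of the minima, typically the global one
```

A problem designer who knows this acceptance rule can build landscapes that steer it into the 0.5-deep trap almost every time, which is exactly the kind of synthetic instance where the quantum annealer's tunneling was shown to win.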
The thing we're working on over the next year or so is called quantum supremacy: getting a quantum computer to do something, anything, no matter how contrived, that a classical computer can't, as a proof of principle. The reason we're focusing on that is that you have to do all the same engineering work to scale up no matter what kind of quantum computer you're building; it kind of doesn't matter what your application is. And we think 50 qubits can do a toy problem faster than a classical computer. So we're working up to build a 50-qubit array, working on our wiring to get a 2D array instead of a 1D one, and building up all of our electronics, cryogenics, and software to meet those requirements. And this backlit photo is our quantum hardware team, along with the software people down in LA who provide us a lot of theoretical insights. Thank you very much. So we have time for one or maybe two quick questions, possibly about the scalability of grad students. Thank you very much for coming to our conference and giving us a very sober view of what's happening in quantum computing. Could I make one request and then ask one question? When you achieve quantum supremacy, please can you talk very, very carefully to the press about what you've actually achieved? Yes. Because they will have a field day with that and tell us that the sky is falling. And we all know how great physicists are at generating news headlines, so please be careful. We will do our best. Thank you. And now for the serious question. You talked about the basic science being in pretty good shape; you understand the basic physics. And you talked about it now being an engineering problem to scale, graduate students notwithstanding. Do you see Google investing heavily in the engineering required to make that scaling happen? It's not something you said explicitly in your talk.
Yes. So I am a physicist by training, but I am now essentially an electrical engineer; I'm building our custom waveform generators with help from a couple of other people. We've invested in fabrication equipment that's more scalable than the university clean room we had been using, at least for the parts of our system that require it. We're bringing up a bump-bonding process so that we can bond two wafers together, and that will help us with this interconnect problem without compromising our qubits. We're getting assistance from software engineers at Google, and we have our own software engineers, on developing the classical control software. So Google is definitely interested in the engineering side, and is allowing and encouraging us to work on that. Thank you very much. Could you give an estimate, to one significant figure, of how many billions of dollars and engineer-years you would need to make a quantum computer if you really, really wanted it? I'm not sure, because it depends a lot on who you give the money to. For reference, the European Union has just started a quantum computing project that is one billion dollars. I don't think that's going to yield a quantum computer, simply because it's divided among five different projects at 10 different universities. I think that will generate a lot of useful research in terms of new ideas and new measurements, but I don't think it's focused enough to build a quantum computer. I'm sure I'm not allowed to speculate about our budget with you guys. I can tell you that the current investment in the entire field is a few billion dollars over the next five to 10 years, and I think the scalable part is further off than that, but not by a factor of 10. Okay, let's thank Evan again.