Show of hands. How many people here are really more of a physics background? Okay. How many are really more of a computer science background? Okay, slight majority. My own background is in classical computer architecture and high-performance computing, and so the talk I'm going to give here is from that perspective. I'm not a quantum mechanic, but I do understand how complex systems tend to go together and work, and I do understand computational models. I tend to like to approach this as a hardware-up kind of approach. So I'm not going to get into any interesting machine learning algorithms, but I am going to go from some basic concepts, through a hardware model, through some fairly specific details on how we actually build a quantum machine, and then, this of course being an open source conference, up to some descriptions and an example of our Cirq quantum programming toolkit, which was developed largely at Google, but with collaboration from academia, particularly in Europe. Now, those of you who are physicists who have mastered quantum mechanics, bless you. I spend most of my time in airplanes reading textbooks and trying to catch up with what I should have learned when I was much younger. But I've come to the conclusion that, in fact, it's okay to be intimidated by quantum systems, because Newtonian mechanics is something there's an evolutionary advantage to understanding. When you throw a rock, it's going to follow a parabola. It may have taken us tens of thousands of years to understand the mathematics of a parabola, but even children can understand fairly quickly that if they're trying to hit something at a certain distance, they throw the rock with a certain force and a certain angle, and that resolves. It's important. It matters. It can be a life-or-death thing. And so evolution is going to favor brains that are good at understanding and intuiting Newtonian mechanics.
But there has been no evolutionary reason to be able to have intuitions about quantum mechanics, and so it sort of hurts our brains when we look at it. If you go back and consider the origins of this (and again, I'm just talking to some slides here at the intro that you don't really need to see; you can imagine what I'm talking about), consider the classic beam-splitter experiment. Classic thing. You fire photons into a beam splitter. The split beams you put to two mirrors. You run those two mirrors into another beam splitter, and then you put some kind of detector on the two paths coming out of that second splitter. Now, if my mental model of what photons are doing is Newtonian, I'm going to say, well, yes, obviously half the photons are going to go in each direction. Then they're going to merge. Then they're going to split again. And my detectors should see equivalent signals. In fact, this is not the case, and this is rather perplexing. Now, quantum mechanics has a model that explains this rather neatly through linear algebra. But in order for that particular phenomenon to be explained by the equations, we have to be prepared to accept the notion that the photons are actually on both paths at the same time. And again, this hurts my caveman brain, but it is observable. It is verifiable. There's a good solid mathematical basis for it. Now, given that a photon can be on two paths at the same time, there are implications for what we can do in terms of information theory. And so what we do is we construct qubits, quantum bits, that can have two values at the same time. Two values that are superposed. It is not either one or zero. It's not even statistically maybe one or zero. It is both one and zero, concurrently, to some degree of probability. I have not had the good fortune to have seen all of the other presentations today.
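The interference described above can be sketched numerically. Under the usual idealization, a 50/50 beam splitter acts on the two path amplitudes like a Hadamard matrix (ignoring the mirror phases, which are common to both paths); applying it twice shows why the detectors do not see a 50/50 split:

```python
# Idealized Mach-Zehnder sketch: a 50/50 beam splitter acts on the
# two path amplitudes like a Hadamard matrix.
s = 1 / 2 ** 0.5
H = [[s, s], [s, -s]]

def apply(m, v):
    """Multiply a 2x2 matrix by a 2-component amplitude vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

photon = [1, 0]              # photon enters on path 0
mid = apply(H, photon)       # after first splitter: amplitude on both paths
out = apply(H, mid)          # second splitter recombines the paths

probs = [abs(a) ** 2 for a in out]
print(probs)                 # ~[1.0, 0.0]: everything on one detector
```

The Newtonian picture (independent 50/50 choices at each splitter) predicts equal counts at both detectors; the amplitude picture, with the photon on both paths at once, predicts the interference that is actually observed.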
But I imagine you've seen some Bloch spheres out there, this three-dimensional sphere that's usually used to visualize a qubit. And so if I have a single qubit, it's a unit vector out to some point on the sphere. And that has some degree of one-ness and some degree of zero-ness. But the projection of the complex space ends up showing that I can be performing rotations around three axes on this value. In some cases, it's going to affect that one-ness or zero-ness. In some cases, it's not; it's going to be indirect. I'm affecting the phase, which may become a factor in some later manipulation of the qubit. So, okay, still nothing? Fine. Look, I'm just going to use my deck to organize my thoughts, if you all don't mind. You can rotate it and then we squint our eyes a bit. It would be nice. Because now the slides are online. Yes, by the way, not that I want everybody heads-down on their computers, but the slides are on there. So for once, the people who are remote have the advantage. Oh, dear. No, but the people who are remote may have downloaded the deck already. So for those of you in the studio audience, there's a copy of this deck out on the conference site, and you can pull it in and look at it locally. Anyway, having these quantum bits is a really cool concept in terms of information theory. But how can we get at them? How can we manipulate them? The simplest quantum systems are fundamental particles. And again, Xanadu is working with photons in this regard, and they're doing some very cool stuff. This is the first time I've seen them present, and it was really interesting. But the smaller you get, an electron, a photon... yes, individual particles could be quantum bits, but they're hard to capture and manipulate. The first really successful experiments, going back 10 or 12 years now, were using ions, a charged atom.
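That last point, that a phase rotation is invisible until some later gate exposes it, can be shown with plain Python complex numbers rather than any quantum library; a minimal sketch:

```python
import cmath
import math

s = 1 / 2 ** 0.5
H = [[s, s], [s, -s]]        # Hadamard, playing the "later manipulation"

def rz(theta):
    """Rotation about the z axis of the Bloch sphere (phase only)."""
    return [[cmath.exp(-1j * theta / 2), 0],
            [0, cmath.exp(1j * theta / 2)]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def probs(v):
    return [abs(a) ** 2 for a in v]

state = apply(H, [1, 0])             # equal superposition of 0 and 1
rotated = apply(rz(math.pi), state)  # phase changed, weights untouched
exposed = apply(H, rotated)          # later gate turns phase into weight

p_rotated = probs(rotated)           # still [0.5, 0.5]
p_exposed = probs(exposed)           # now [~0.0, ~1.0]
print(p_rotated, p_exposed)
```

The z rotation leaves the measured one-ness/zero-ness untouched, but once a second Hadamard is applied, the accumulated phase completely determines the outcome.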
If I have a charge on the atom, I can push it around using electromagnetic fields, and it is nevertheless a quantum particle; there's a lot of good work still going on in that space. But this is one of the few places in computing where being bigger is actually an advantage. We've spent decades trying to make things smaller and smaller and smaller, to fit millions, then billions, of transistors onto chips. When I'm trying to manipulate a qubit, I actually kind of like it to be big at this stage of the technology, because I want to be able to get control and measurement circuits around that thing in some way and then pack them together. So the technology that we've been working with at Google, very similar to what IBM and Rigetti have been doing, is superconducting qubits, which is to say I've got a superconducting circuit generating a magnetic field at a particular point in space. These are not tiny. If I look at a naked quantum chip, I can actually see where the qubit would be. Now, I can't see the qubit itself, because the qubit's only going to exist in a hard vacuum at much-colder-than-deep-space temperatures, but nevertheless, I can see where it will be. It's really quite macroscopic. And the way we use these things is: visualize, if you will, that the qubit is sort of a plus sign, with the field essentially in the middle. At each end of that plus sign, I have an opportunity to align another plus sign. And at that interface, I have some degree of coupling possible: the coupling that I can use to create entanglement and to manipulate qubits in multi-qubit operations. And so what ends up happening is that the closer the frequencies of oscillation of two qubits are, the more likely they are to entangle, to interact, to couple. And so Xmons, which are the simplest thing to build, and what we build our largest chips from, function by direct coupling. These qubits are right next to one another.
If you bring the frequencies together, you can entangle them. If you move the frequencies apart, that creates essentially an isolation and a resistance to interaction. And on that basis, we can selectively manipulate the qubits. Now, one thing that is interesting for the computer science folks here, those of you who have ever written assembly language: try to imagine an assembly language where the only way you can get data into a register is through an immediate operation. You could make a machine like that. It would be awkward, and the compiler would be painful, but you could do it. And that's good, because that's what quantum computers need today. In these gate-level models, there is no load, there is no store; we can simply send instructions in that create values. And indeed, the whole operation of the machine is somewhat turned on its head. The qubits don't move. The qubits are static. In a classical computer, the data literally flows through the machine: you've got inputs to gates, you've got outputs from gates, you've got signals going from transistor to transistor. When a quantum computer, at least of the technologies that we're using today, operates, the qubits are staying right where they are, and we are in some sense sending instructions to those qubits so that they will do what we want them to do. In the case of the superconducting devices we have at Google, that ends up being in the form of microwave pulses, very carefully timed microwave pulses that are input to the device. There is a photo later in my deck, if we ever get anything live visually, and you'll see that our quantum computer looks pretty much like everybody else's: a big suspended cylinder, which is essentially a dilution refrigerator, with a lot of cables coming out of it going to racks of equipment. Well, an important element of those racks of equipment is an atomic clock. And we need atomic clocks to keep things synchronized tightly enough to actually function.
The gates that we use, and this may have been touched on in the IBM talk (again, I missed it): there are single-input gates and there are multiple-input gates, much as there are in classical logic. There is something... Does this mean if we just... well, I mean, I am actually on this slide. So what if we just... well, I'll let you do what you've got to do. I'll continue for just a moment. But anyway, there are unary gates and there are binary gates, and in fact there are multiple-input gates. One of the things about these gate-model computations is that the concept of a gate is relatively virtual at this point. We're still trying to figure out what we need. Just as you can build an entire quantum computer, pardon me, an entire classical computer, out of nothing but NAND gates... It would be foolish, but a NAND gate is sufficient. No, no, still not there. A NAND gate is sufficient to build any logic circuit, though it would be foolish to do so. A Toffoli gate is, in principle, sufficient to build any quantum gate-model system. Ooh. Okay. We are getting close. Okay. Okay. You missed the cute caveman picture, but it's cool. Now, what's interesting is I don't see here what's on the screen. So, forgive me if I look back nervously over my shoulder from time to time. So, here is a diagram from a paper published by our team about five years ago now; there's a reference to it on the slide. And in fact, the next slide will show you what it's actually doing that is interesting. But I like this because it shows the execution at a couple of levels. You've got your qubits there. There are three qubits; that was all that was required to do this particular operation. There is a phase where we're preparing an initial state. Then we are sending various operations out there; you can sort of see these symbolic microwave pulses along the threads. And what we end up getting is the result. But the result is encoded in the phase of the qubit that we're looking at.
Well, the way these machines are read out is by having a resonator circuit close to the magnetic field. That resonator circuit is going to read a higher value the higher the amplitude of the field. So, in the end, to read anything out of this machine, we have to convert it into amplitude. Basically, we're running a phase-to-amplitude conversion in this final step, and then we do our measurement. Now, what was that circuit doing? It was simulating the hydrogen molecule. The output was the bond energy as it varies with the distance between the two nuclei. And so we were very pleased. Again, this was almost five years ago now, I guess. The blue dots on this line are the actual measurements from the quantum machine, the Google quantum machine. The red dots there are IBM's data. They were able to confirm our work. And we were very pleased to see that it took them a couple of years longer and they still didn't get as good a resolution. But all credit to IBM: they put their machines on the web early. Okay, so again, a lot of respect there, but we do take some pride in the quality of our work. So, perhaps other people have talked about this. I'll address it here with another way of visualizing it, because sometimes, if you're not used to these concepts, it's useful to see things several ways. It's useful to read a couple of textbooks on the same subject to really grasp the matter. So, here what we're looking at is a computation as sort of a 3D volume. At the back there, those little circles with the arrows are our actual qubits. And those arrows are all pointing out because we've set them all to an initial zero state. And then we start applying gates to those qubits in place. So, you can see some little cubes; those are unary gates, single-input gates. And then you can see some rectangular boxes, which represent two-qubit gates, operations that are being applied to those qubits.
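That phase-to-amplitude step can be sketched in a few lines of plain Python (a toy model, not the actual control sequence): with a result encoded as the relative phase phi of the state (|0> + e^{i*phi}|1>)/sqrt(2), a direct measurement sees 50/50 regardless of phi, but applying a Hadamard first maps the phase into the measured amplitude:

```python
import cmath
import math

s = 1 / 2 ** 0.5

def readout_prob0(phi):
    """P(measure 0) after a Hadamard on (|0> + e^{i*phi}|1>)/sqrt(2).

    Without the Hadamard, P(0) is 0.5 for every phi; with it,
    P(0) = cos^2(phi/2), so the phase becomes readable amplitude.
    """
    a0, a1 = s, s * cmath.exp(1j * phi)
    b0 = s * a0 + s * a1        # first row of the Hadamard matrix
    return abs(b0) ** 2

p_zero = readout_prob0(0.0)      # phase 0 reads out as |0>
p_pi = readout_prob0(math.pi)    # phase pi reads out as |1>
print(round(p_zero, 6), round(p_pi, 6))
```

The resonator can only distinguish amplitudes, so this kind of basis change is what makes the phase-encoded answer visible at all.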
Now, those things can be done within limitations, which I'll get to later, in parallel. And then at some point, when we finish the algorithm, there's a phase where we measure all the qubits that we're interested in. Now, the problem is that every time we do an operation, there is a probability that we're going to deco here. And so, you know, nothing is free and particularly not in quantum. So, depending on that fidelity factor, and depending on just sort of the absolute stability of my system, I have a variable number of steps that I can do before I have to measure, before it becomes highly probable that my measurement is meaningless because something has gone wrong. So, two qubit gates are more dangerous than single qubit gates. They tend to intrinsically have a lower fidelity. And so, part of the art of quantum algorithm development is to minimize the number of intangible operations, minimize the number of conditional operations that need to be done, and put things into unary gates. Another factor of these sort of early machines, like the model I just showed you of the hydrogen molecule, there were some unary gates in there that were really quite complex and really unique to that particular problem. If I'm building a general-purpose computer, general-purpose tool, portable programming, things like that, I'm probably going to want to have some fairly general operations, but because fidelity is relatively low, because my reliability is relatively low, it really behooves me to minimize the number of steps. So, having a large number of, you know, it's one of these risk versus sys things. Sys wins in NISC, noisy intermediate-scale quantum systems. 
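A back-of-the-envelope way to see why the gate mix matters, under the common rough assumption that a circuit's success probability is the product of its per-gate fidelities (the 99.4% two-qubit figure appears later in this talk; the 99.9% single-qubit figure here is purely illustrative):

```python
def est_success(n_single, n_two, f1=0.999, f2=0.994):
    """Crude circuit-success estimate: multiply per-gate fidelities.

    f1 (single-qubit) and f2 (two-qubit) are illustrative numbers,
    not a statement about any particular device.
    """
    return f1 ** n_single * f2 ** n_two

# Same total of 100 gates, different mix: the two-qubit-heavy circuit
# is noticeably more likely to yield a meaningless measurement.
mostly_single = est_success(80, 20)
mostly_two = est_success(20, 80)
print(round(mostly_single, 3), round(mostly_two, 3))
```

Because the two-qubit fidelity is lower, shifting work from entangling gates into unary gates raises the odds that the final measurement means anything, which is exactly the trade-off described above.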
So, we're building these qubits in arrays, and where we are on our particular development roadmap is that we're making them larger and larger and better and better. There's a certain amount of feedback that goes in, because every time we make them larger, we start seeing secondary effects that we had not observed at the smaller scale. Those need to be dealt with. We get better at the control logic. We get better at how the silicon actually has to be laid out. And so what we're shooting for in the very near term is what's referred to as quantum supremacy; I'll explain a little bit about that in a moment. But not much beyond this quantum supremacy threshold, we expect there to be some useful applications that people will actually be able to get out of these NISQ machines. It's a limited subset of what quantum computing will ultimately be able to do, but there are some of them out there, and we're hoping that will keep a whole sector of industry and a couple of generations of grad students going. And then once we get to a certain scale, we will actually have error-corrected machines. An error-corrected quantum device uses logical qubits, so you're doing exactly what you would otherwise be doing; your thought processes of algorithm design are quite the same. But the underlying hardware, and, for want of a better word, the classical firmware that's controlling the system, is really quite different, because to have a single logical qubit that is preserving my value and phase information for seconds and seconds, hours and hours, days and days, I need to periodically regenerate things. And that's tricky, because quantum mechanics doesn't actually allow us to make a copy of a qubit. We can measure it, we can teleport it in a certain sense, we can cause it to be recreated by converting to classical and back again, but this basic act of "just give me a copy of this and I'll check at some point whether I'm happy": you can't do that.
So the algorithms are far more subtle and far more expensive in terms of qubits. The better your qubit, the less overhead you have. And this is why various people are taking very aggressive and exotic approaches to constructing qubits, in hopes of having a higher fidelity so they can reduce their overhead. We'll continue looking at that, we'll continue working at that, but we're close enough to having something that we think is workable that we're moving ahead. At today's technology, it's going to be about a thousand physical qubits to get a logical qubit. So to get a really good quantum computer, we're going to need large numbers of physical qubits. We're pushing hard, and we'll get there. Now, this quantum supremacy experiment I alluded to is somewhat controversial. I was never wild about it, because I'm an old-school computer architecture guy, and when you build a computer, you build it to do something specific and useful. Whether that's weather forecasting or video gaming, it's doing something you know you want to do. Quantum supremacy, as described by the guys on the team, is, first of all, doing something with a quantum computer that you couldn't do with a classical machine no matter how big it was. Now, as a classical architect, I have a little bit of a problem with that, because give me enough centuries and enough energy and I could build a really big classical machine. So, never say never and all that, but nevertheless, the point is to show this advantage: to show that there are things you could do that you could not reasonably do with a classical machine. But then you get to the problem of: okay, I've done a computation on my quantum machine that can't be done on a classical machine. How do I know it was right? And that's where the subtlety comes in. That's where I really respect the guys that worked on this.
The notion is, and this is an example, I don't even know if this is the actual candidate, but at any rate: you generate random circuits. They don't do anything particularly useful, but they're generated according to particular rules. And those rules cause the measured output to have a certain well-defined statistical distribution. Now, while I cannot simulate the actual circuit on a classical supercomputer, or a Google cluster, which is what we use, much the same thing really, we can compute what the statistical distribution is going to be. And these statistical distributions have a really nice sensitivity property, which is to say: if everything's working, I get this very nice exponential-looking curve, and if an error happens, and this is labeled "multiple errors" as a flat line, but I've seen more recent simulation results showing that even a single error on a two-qubit gate will cause the thing to essentially flatline. So you can tell pretty quickly whether it did the right thing. And again, these things being unreliable, the way one tends to run them is you do a lot of runs, you do a lot of measurements, and you accumulate the data and look at it. So we can throw away the bad runs, but the point being, once we start getting good runs, we know we will have achieved this quantum supremacy. And then beyond quantum supremacy, the exciting stuff is error correction. At the lowest level, here's what's going on. The algorithms are really quite complex, but at the lowest level, if you can see on that diagram, and for those of you who have no visual, I'm really sorry: the little white dots are the actual qubits that are being used for the computation. These are what we refer to as the data qubits. The black dots are essentially what we refer to as measurement qubits, and those are just there to see if something has gone wrong.
And because we can potentially have both data-value errors, X-type errors, and phase errors, Z-type errors, we actually need to have both X and Z checks. So those are the yellows and greens there. Now, that only looks like a two-to-one overhead, but that's just the beginning, because every logical qubit is still going to need more than one of those data qubits. This requires a lot of qubits, and it requires them in a fairly regular grid to use this kind of model. And we got to this, hardware-wise, in a stepwise manner that is instructive. So if you look up here, there's a diagram you can see. This is a photomicrograph. You can even see the University of California Santa Barbara logo at the top of the picture. We were using the UCSB fab line for the longest time, because the team is in Santa Barbara; they were the Santa Barbara research team when they came to Google, and they knew how to use that line. And as I say, these are coarse-geometry things. You can see the qubits. We don't need the latest Intel or TSMC technology to build this stuff; you can build it on a student line. So those plus signs are the qubits. The little squiggly lines above them are the resonators; that's what actually does the measuring. And what's coming in from below are the control signals. So this is a linear array, nine qubits, and here we managed to get the two-qubit error rate down to 0.6%, or a fidelity of 99.4%, which is damn close to what we believe is needed to be able to hit supremacy and start doing some meaningful error correction experiments. And indeed, they've published some work using this device that showed that at least one dimension of the error correction algorithm worked. So we're moving forward here. But I can only make chips so long and thin; mechanics gets us into materials science there. So I want to do a couple of things.
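The "measure checks, not values" idea behind those measurement qubits can be shown with a purely classical analogy (the quantum version is subtler, precisely because of the no-cloning constraint mentioned earlier): each check reads only whether two neighboring data bits agree, and a single flip lights up the checks on either side of it:

```python
def syndromes(data):
    """Parity of each adjacent pair of data bits.

    Classical stand-in for measurement qubits: the checks never read
    the data values themselves, only whether neighbors agree.
    """
    return [data[i] ^ data[i + 1] for i in range(len(data) - 1)]

clean = [0, 0, 0, 0, 0]
print(syndromes(clean))      # [0, 0, 0, 0]: all checks quiet

noisy = clean[:]
noisy[2] ^= 1                # a single bit-flip error on data bit 2
print(syndromes(noisy))      # [0, 1, 1, 0]: the two adjacent checks fire,
                             # locating the error without reading the data
```

In the surface-code-style layout described above, the same trick is done twice, once in the X basis and once in the Z basis, which is why both kinds of check qubits appear in the diagram.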
One of the things I want to do is fold that linear array along a vertical axis. So our scheme is to use 2.5D technology, as it's sometimes called in the field, where we actually have two pieces of silicon that we process. We put the qubits on one, we put the resonators and the control logic on the other, and we mate them together, close enough to do the job, with some bump bonds between them. And again, this is already operating at superconducting temperatures and in a near vacuum, so a lot of the problems that you would normally have doing this are manageable; this is not an unusual procedure in the industry, and it works really quite well. Now that's solved my 1D problem, but I need to go to a 2D array. And so, as you might imagine, what I do is tile the qubits on one silicon substrate, put my readout and control logic onto another substrate, align the geometries, and sandwich them. That allows me to build chips up to the scale of what I can reasonably do on a die, which is reasonably large these days. So here are some pictures, and thank God I have the deck now, of the first of these 2.5D quantum computing chips. It's called Foxtail. The guys assure me that putting the Google logo across the middle had absolutely no impact on the functionality of the device. Made me nervous. I was in the semiconductor industry for a long time, ages ago, and yes, of course we put our logos on, but they were off in a corner somewhere. Anyway. But you can see this thing labeled, and yes, it had its quirks coming up, but nothing having to do with the logo. But moving onward to what we want to get to next, to get to supremacy, to demonstrate error correction, we organized things a little bit differently. So you saw this linear array here; what we're actually doing is pivoting. We're doing a rotate.
We're putting them on a diagonal. Now, that might seem counterintuitive and geometrically inefficient, but what it allows us to do is basically say that I have groupings of qubits that are either data qubits or measurement qubits. So I can use a common readout line down a set of qubits and know that I'm not mixing my data and my measurement qubits. And this gives us an architecture where we tile these diagonal strips. The first chip that we made with this was code-named Bristlecone. It's a 72-qubit device: 12 unit cells, 6 qubits each. There's what it looks like in the package, and again, you can sort of see it's kind of high in the middle, because this is one of these flip-chip-on-substrate, 2.5D kinds of things. I sometimes wonder who's ever going to see the logo, because, as I say, it sits in a hard vacuum at the bottom of a cryostat. But hey, pride of place. And so here are some photos of what it looks like. And yes, it looks rather more like an automotive garage than a computer center, but this is one corner of the lab. This is the famous yellow fridge. We have fridges in the various primary Google colors, as you might imagine, but this is the yellow one, and it is up there. When you see these photos, you always see these things suspended. I don't know if anybody else tells you why, but it's about minimizing vibration. You've really got to eliminate as much as possible of any kind of energy that's getting in there. Mechanical coupling, acoustical energy, is just as nasty as electromagnetic energy when it comes to perturbing these things. So it's suspended in isolation, with huge sets of wires coming out. You can see those racks of equipment. They were still in the process of wiring the whole thing up when that picture was taken. That rack of equipment is replicated a couple more times down the row to get to all 72 qubits.
So, near-term devices at the scale that we're talking about, these NISQ devices: people have talked about, for example, oh yeah, we're going to obsolete RSA. We're going to bankrupt all the Bitcoin people. Sorry, any Bitcoin people in the audience, but Bitcoin is toast once these things start working at scale. The good news for Bitcoin is that you need a pretty darn big quantum machine to break RSA for any sensible key length, so you've got a few years yet. But there are things that we can do with on the order of 100 qubits. And the most likely, as has been said, is quantum simulation of quantum processes. This is pretty basic. Again, we've demonstrated it for hydrogen, the most trivial case. IBM has shown work on hydrogen as well, and they've done lithium hydride; I don't believe we've published any lithium hydride results ourselves yet. Every quantum element you add adds additional complexity to the whole, but this is a promising area. And then another is numerical optimization. Now, numerical optimization covers a broad field. There is the whole optimization notion along the lines of the quantum annealing that the D-Wave does, and there's also machine learning; machine learning can be thought of as much the same kind of thing. This tends to be a fairly common-looking set of algorithms, and these are things where we hope to be able to use relatively small numbers of relatively unstable qubits. The model that we have at Google is more one where we would use classical processing for the first several layers of a neural network, but as things fan down, they start reaching the scale where we can actually process them in quantum and use the quantum technology. But this is a conference about open source software. So I want to talk about some open source software. And specifically, I want to talk about Cirq, which, I don't even know if the guys who wrote it knew the pun was possible, but I like it.
So what is Cirq? Let me hit the button on my own machine. So it's a Python package, unsurprisingly. Seymour Cray is quoted as having said, back in the 1970s or 80s, when someone asked what people will be writing high-performance computing programs in in the 21st century: "I don't know what it's going to look like, but they'll call it Fortran." With all due respect to Seymour Cray, and I have enormous respect for Seymour Cray, he was wrong. I don't know what they're going to call it, but it's going to look like Python. So at any rate, the model is that you have a Python framework, which was conceived to allow us to have a sort of quantum engine, if you will, as a cloud service that one would connect to and then provide one's program. And that program might be run on classical simulations. We happen to have rather a lot of computers at Google, and we have some parallel simulation algorithms, and we can simulate pretty large systems on those machines. But as the quantum hardware comes online and gets larger and larger, we start having more capability, and certainly simulation runs out of steam somewhere past that 30-to-40-qubit range. Because remember, every time I add a qubit, I'm doubling the possible state space. So just going from simulating 45 qubits to 46 qubits, all other things being equal, I need twice as much memory and roughly twice as much processing power. Hence the appeal of this. So there are reasons why I described a lot of how the machine works: the design of Cirq is built around the requirements that come out of these sorts of machines. For example, and I know some of you could probably eyeball this and know what I'm talking about: here we have a set of controlled-Z gates across a set of qubits. Okay, I have nine qubits here, A through I.
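The doubling is easy to make concrete. Assuming a full state-vector simulator storing one double-precision complex amplitude (16 bytes) per basis state, which is a typical but not universal layout:

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a full state vector: 2**n amplitudes.

    16 bytes assumes double-precision complex; actual simulators vary.
    """
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 45, 46):
    print(n, "qubits:", statevector_bytes(n) / 2 ** 40, "TiB")
# Every added qubit exactly doubles the requirement: 30 qubits is a
# comfortable ~16 GiB, 45 qubits is already 512 TiB, 46 is 1 PiB.
```

This is why classical simulation stops being practical in that mid-40s range even on very large clusters, and why real hardware takes over beyond it.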
And you look at this: is this a good quantum circuit? Well, let me put it to you this way. Would this be a good assembly language program? And those of you who have an eye for data dependencies would probably say it's a dreadful program, because you have a linear chain of data dependencies going right down the line. Well, guess what? In the quantum universe, it's a similar problem. I've created a gate depth that is not good, because, as I mentioned, the stability of my qubits is time-limited. I need to do things as quickly as possible and as much in parallel as possible. So how about this? Is this a decent circuit? Here I've broken things up, and this would be a big improvement if I were running it, say, on an Intel or some other classical microprocessor: I'm scheduling the registers, I'm scheduling the pipeline a little bit. And in the quantum sense, yes, I visualize that this is what's going to happen, that I can do four in parallel and then the other four in parallel. But I didn't tell you what the topology was. And the topology matters. Qubits that are adjacent to one another can be entangled easily. There are swaps that you can do; you can move these things around by various operations, but it's inefficient. Fundamentally, these operations can only be done on adjacent qubits, on one axis or another. If it were a linear array of nine qubits, like the picture I showed you, this would actually be a good program. But for a 3x3 grid, it's not, because F and G, and C and D, are rather too far apart to actually do those operations. So no, it was not actually a good circuit. Okay, third try. I know that I have a 3x3 grid. Therefore, I'm going to be intelligent and visualize it: okay, none of these are in conflict. I can do those four and those four, in that order, and so this ought to be a two-step process. But again, life is not so simple. There are things that you can't actually do. And we have to protect certain qubits.
Now, why is this? The way we do CZs on an Xmon device, at least, is we bring the frequencies close together. As you heard me mention, we get this coupling when the oscillation frequencies of two qubits get close to one another, and they're isolated from one another if you shift the frequency away. So the way we actually perform the operation is to bring the qubits we're interested in working with to the same frequency, but we also take the surrounding qubits to frequencies that are further away. I don't want to use the word ancillary, even though that would be a correct English word, because it has an overloaded meaning here. But all the qubits not involved, we want to keep at frequencies that are relatively far away. And what that means is we can't just do arbitrary operations on the grid in parallel. We have to take some care with this. So on this particular 6x6 grid, to actually do a CZ along every edge, it would take eight steps to make sure everything was properly isolated. These are the kinds of things we are playing with, the kinds of things we're experimenting with. And so we need to be able to get at that level of detail in the programming of these early devices. We need to be able to do algorithms, but we also need to be able to actually do experiments on the systems themselves. So we need a relatively low-level language. There's this sort of range of levels of detail, complexity, and abstraction among the proposed quantum languages, and Cirq is very deliberately pretty low-level. I think the biggest contrast would be with Q#. I don't know if Microsoft or anybody working with Q# was in here today, but Q# fits into Visual Studio, very cool. But it is a high-level language, and it pretty much presumes that you have logical qubits, pretty reliable qubits, to work with. So we're down at the low end of this range. So here's just a trivial case.
I'll take a slightly more interesting case in a moment. So you can generate qubits. You can generate circuits built from those qubits. You can put operations into that model. In the trivial case, there's the circuit, and there's the from_ops method, to which I basically provide a variable-length list of described operations, and it will populate the circuit with them. You can do it incrementally if it's a more complex thing. And of course, this being Python, you have to be able to print it. When I print a circuit, we get these sort of cute ASCII versions of the circuit diagram. So the structure of the tool is very modular, and I think this is pretty important. There is a circuit area and a schedule area. User code will typically go in and generate circuits, and those circuits can be saved and expressed in various formats. Protobufs are an internal format; there's a generic Google RPC payload; we can save it as various people's QASMs; we can print it out as text diagrams. So various formats can be produced there. But then, of course, operationally, those gates have to be played on the machine. And as I showed you, directly playing the qubits onto the machine may not be what you need to do. There are rules. There are constraints. It's like the old RISC processors where the compiler had to schedule instructions around conflicts. You had branch delay slots, and if you couldn't put something useful in the branch delay slot, you had to stick a no-op in and pad things out. You had load delays on the original Berkeley RISC that were visible, and the compiler had to deal with them. Well, it's a similar thing. But we've separated out the circuit and the schedule, and that's a concept you'll find throughout Cirq. So typically, just writing and running a program, the path through is: generate a circuit, generate a schedule, send it out to the machine or the simulator. But you can also use it as an optimizer.
You can read QASM. You can also use the optimization modules at the circuit level, and then write it back out again. I'll show an example of that in a moment. We can also do transcoding, and I suppose you could argue that's what that initial example did, where I just generated something and said print. But the circuit-and-schedule dichotomy permeates the thing. The notion is that a circuit is a discretized sort of thing: you basically have operations, and operations are essentially a binding of a gate to qubits. At the level of a circuit, you're really not concerned about timings and durations; it's just things and their ordering. The schedule is continuous: it's made up of scheduled operations. So operations are what tie the whole thing together, but they're operating in a couple of different domains. Now, one example, a nice simple example that fits on a page but does something comprehensible and almost useful, is the one-bit calculator. It's not hard, just with transistors on a piece of circuit board, to create a one-bit adder. That's fine. But this one-bit calculator is actually calculating all possible additions at once. It is executing in parallel; it's doing four operations concurrently. My top two qubits are going to be the inputs coming in, and the bottom one is an ancilla. So what I'm doing is using Hadamard gates. A Hadamard gate, if I'm coming in with a zero state initially, is going to put the qubit into a superposition that is equally zero and one. An important tool. Then I'm going to run that into a Toffoli gate, which is a conditional controlled-NOT, and then into a controlled-NOT gate. And what I'm going to measure is the ancilla that was involved in the Toffoli gate and the qubit that was involved in the CNOT gate, not the controlling element, but the data element.
And so that, in principle, should do all of this simultaneously. So I don't know how well you can see this. It's in the deck if you want to download it, and it's actually not too unreadable. Here's what it looks like, as you've seen in several of the other examples. We're importing our package and picking up some additional things I'll need later when I actually plot things out. One thing you'll note is that I'm generating my qubits as grid qubits with certain coordinates. Now, the earliest versions of Cirq did things a little differently. They were actually rather similar to one of the other talks that happened a little earlier, in that when you create the qubit, you specify what kind of qubit it is. So in the earliest versions of Cirq, the earliest versions of this program, it was cirq.XmonQubit: I was just generating Xmon qubits and putting them in an array. But the more we work with this stuff, the more we understand that topology is more important than technology. Technology matters, but I may want to have the same topology with different implementation technologies underneath. This is very interesting in terms of quantum languages, because we're still figuring out what the appropriate points are to bind things. So again, the earliest efforts had early binding of the qubit type, and now we're taking it up a level and abstracting it a little more. So I generate my three qubits of this type. I generate a circuit from the operations. You'll note that I imported CNOT and Toffoli from cirq; those aren't there automagically, but they're part of the standard libraries, so I can import them. And the Hadamard is so fundamental, it's just there as a built-in. So I attach the Hadamards on Q1 and Q2, do a Toffoli of 1, 2, and 3, a CNOT of 1 and 2, and then I perform a measurement on these things.
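The arithmetic behind the adder can be checked with a from-scratch statevector calculation, no quantum library required. This is my own illustration of the math, not the talk's Cirq program; qubits 1 and 2 are the inputs and qubit 3 is the ancilla that receives the carry:

```python
import math

N = 3                    # qubits: q1, q2 (inputs), q3 (ancilla)
SIZE = 1 << N            # 8 basis states; q1 is the most significant bit
B1, B2, B3 = 4, 2, 1     # bit masks for q1, q2, q3

def hadamard(state, bit):
    # Apply H to one qubit of the statevector.
    s = math.sqrt(0.5)
    out = list(state)
    for i in range(SIZE):
        if not i & bit:
            a, b = state[i], state[i | bit]
            out[i], out[i | bit] = s * (a + b), s * (a - b)
    return out

def permute(state, f):
    # Apply a classical reversible gate: basis state i goes to f(i).
    out = [0.0] * SIZE
    for i, amp in enumerate(state):
        out[f(i)] = amp
    return out

state = [0.0] * SIZE
state[0] = 1.0                     # start in |000>
state = hadamard(state, B1)        # H on q1: superpose the first input
state = hadamard(state, B2)        # H on q2: superpose the second input
state = permute(state, lambda i: i ^ B3 if (i & B1) and (i & B2) else i)  # Toffoli: carry = q1 AND q2
state = permute(state, lambda i: i ^ B2 if i & B1 else i)                 # CNOT: sum = q1 XOR q2

# Measurement distribution over (sum = q2, carry = q3).
probs = {}
for i, amp in enumerate(state):
    if amp:
        key = (1 if i & B2 else 0, 1 if i & B3 else 0)
        probs[key] = probs.get(key, 0.0) + amp * amp
print(probs)  # roughly {(0,0): 0.25, (1,0): 0.5, (0,1): 0.25}
```

The three outcomes correspond to 0+0=0, the two ways of getting 0+1=1, and 1+1=0 carry 1; all four additions really are computed in one pass through the circuit.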
And then I print the circuit. This is actually a screen capture off my workstation, so I know darn well this is not cheating. Then, having instantiated the program, I'm going to simulate it. I instantiate a simulator, and this is where the Xmon binding comes in: I say, OK, now I actually want to run this simulating Xmons. And then I run that simulator on the circuit some number of times. These are statistical critters, as has been observed before. I can have a lot of states superposed, but for every qubit, I can only read one of them at a time, and the rest of it just sort of collapses. So I need to take enough statistical samples to be highly confident that I've seen what the distribution is. And it worked, which is to say: if you think about adding two one-bit numbers, two ones give me one result, two zeros give me another, and a zero and a one, in either order, give me the third. So this trivially works, and it's a very easy package to use. Here is an example of optimization. I did not personally run this; it was somebody's prototype optimization method, but it worked pretty well. We started out with this kind of awkward-looking gate diagram, but the important thing to understand is that all of those vertical bars represent two-qubit operations. Those are all entangling operations. By optimizing, we've taken that from six down to three. And because those are the operations that contribute most to my loss of coherence, that's a really, really big deal. So thanks for your attention. I will leave this up here as long as I can get away with. You've already heard about OpenFermion. OpenFermion is a project that's been collaborated on by a bunch of people; ETH in Europe was a big contributor to this.
University of Oxford as well; those were the two main European institutions involved, plus a number of US universities and national laboratories. It's out there. Play with it. Cirq is also up on GitHub under the quantum repository. It contains its own little simulator; it's very self-contained. There's a startup page that's pretty trivial: you install the package and you can just follow the sorts of steps that I did there. So thanks. I'm probably over time with all the technical hiccups, but hopefully the organizers will tolerate my taking a question or two.