Please welcome Robert Smith, who's a passionate Common Lisp hacker and also the director of software engineering at Rigetti, which is one of the big hardware players in the universal quantum computing space. They're based in Berkeley, near San Francisco. And he's going to talk about Forest and open source quantum software development here. Right, cool. I'm super happy to be here. My background is very, very much in software; it's only in the past few years that I kind of got into the quantum business. And so being at an open source conference and seeing that quantum is here at the open source conference is really exciting for me. Before I start, I just want to understand the audience a little bit better. How many of you are not professionally in quantum computing, not doing quantum computing in academia, or anything like that? Right, that's good. That's really great. How many, so this is my personal, just kind of what I want to know, how many of you know anything about Lisp? All right, that's pretty good, too. That's more than I would expect. Cool. A little bit about Rigetti. So we build, let me get the little mouse pointer out of the way, we build universal gate-based hybrid classical-quantum computers. So these are machines that don't just do quantum computations; they interact with a classical computer as well, very much in concert. The classical computer might produce results that affect the quantum computation, and the quantum computer might produce results that affect a classical computation. As was said in the keynote, quantum computers aren't really doing computations right now that exceed what a classical computer can do. But nonetheless, they do interesting computations. They're doing real computations. They're not just theoretical exercises or anything like that. You can compute real things about the physical world, for instance. 
We're a full-stack company, so we do everything in-house, more or less, all the way from the design of silicon chips and the fabrication of chips, we have our own fabrication facility, everything in between, all the way to building applications on top of the chip, or on top of the computer. We have a very wide range of papers published. Again, basically any of these horizontals here, we probably have a paper in. And our flagship product is called Quantum Cloud Services. So Quantum Cloud Services is something that we just released publicly as a beta. As far as I know, it's the fastest quantum programming environment available right now. It's about 30 times faster than an HTTP-based service. This comes from a lot of different things all across the stack, and this is one of the benefits of being a full-stack company. It's not just that we got some clever hardware trick or something like that; it really took a combination of infrastructure, hardware, and software to gain these speed-ups. So as a part of Quantum Cloud Services, you get your own QMI. It's a virtual machine. It comes pre-loaded with a bunch of software, including our Forest SDK. The Forest SDK is the software development kit that includes a very powerful arrangement of software to write, build, and execute quantum programs. This includes a compiler, a simulator, a Python API, and a set of optional libraries on top of that. So pretty early on at Rigetti, I joined Rigetti when it was around 20 or so people, pretty early on we were very interested in openness. I actually credit Will Zeng a lot, who planted the seed at the company to try to maintain openness in a lot of our ideas and a lot of our software and so on. So being in open source slash open standards isn't something that's new for us; we've been doing it for the past three or so years. Maybe one of the big milestones was about three years ago, when we released an open standard for a quantum instruction language called Quil. 
So Quil is this assembly-looking code. It's not assembly; it doesn't run natively on the machine. It requires further compilation. But it looks like an assembly code that describes hybrid classical-quantum programs. And we published a standard and, a little bit to my surprise, certainly, we saw all these little libraries popping up. Somebody wrote something in Haskell, for instance, to construct Quil circuits. I myself, out of interest, wrote something in Lisp. Somebody else wrote something in OCaml. Just all these things came out. And that is a testament to the fact that when you provide something that's open, that's well specified, that's well documented, it gets adopted, and people are kind of interested in it. Since the release of the standard for Quil, we've had a good handful of different open source libraries. All of these are homegrown, or at least adopted by us. And then beyond that, we've contributed back to any number of open source projects. One of my favorites was a tool for editing CAD files. We found a few bugs in it and contributed fixes back. It's nice to design these quantum circuits within CAD, and finding these things and being able to contribute back. So I'll very briefly talk about the Forest SDK. You can think of it as a sort of layered architecture, like so. At the top are your applications, the things you're building on top of the platform. This includes Grove, which contains a lot of different textbook algorithms, a lot of different things you'd find in any sort of course on quantum computing, for instance. We just released an alpha version of something called Forest Benchmarking, which contains a lot of very, very easy to read routines for how to benchmark and classify the quality of a quantum computer. And then we have a bunch of partner apps. And this is kind of the layer where you would build if you were interested in building programs for your quantum computer, or for a quantum computer. Below that is PyQuil. 
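Those community libraries mostly do one simple thing: build up Quil programs as structured text. A minimal sketch of that idea in Python (the helper names here are made up for illustration; this is not PyQuil's actual API):

```python
# Hypothetical helpers for building Quil source as text -- not PyQuil's
# real API, just a sketch of what those community libraries do.

def gate(name, *qubits):
    return f"{name} " + " ".join(str(q) for q in qubits)

def quil_program(*instructions):
    return "\n".join(instructions)

bell = quil_program(
    "DECLARE ro BIT[2]",          # classical memory for the readout
    gate("H", 0),                 # Hadamard on qubit 0
    gate("CNOT", 0, 1),           # entangle qubits 0 and 1
    "MEASURE 0 ro[0]",
    "MEASURE 1 ro[1]",
)
print(bell)
```

Rendering to plain text like this is also why bindings in Haskell, OCaml, or Lisp are easy to write: the output format is just the open Quil standard.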
It's a Python library that allows you to write Quil programs. It contains a lot more than that. It contains an API to talk to the machine. It contains a good API for interacting with the compiler. It contains some mathematical routines if you want to translate a mathematical problem into a quantum computing problem, and so on. Below that, we have a little RPC framework, just to have everything in the stack communicate with each other. Below that is a compiler called quilc. This is the compiler that'll take Quil code and compile it to the native gate set of the quantum computer, the native instruction set of the machine. On the right here, you can imagine this box right here going down and down and down. That's where you actually start executing things on the machine; that gets to the firmware level, that gets into the control system level, and so on. But to the left, you can replace all of that with a simulator. It's going to be a lot slower than the real machine, of course, but it serves as a good method for debugging and so on. So there we have something called the QVM, the quantum virtual machine, which I'll talk about in a little bit, and PyQVM, which is something built into PyQuil; it's a simulator written in Python. Henceforth, I'm going to talk about two main components here: the compiler and specifically the QVM. So the QVM. It seems like everybody and their brother and sister and dog has made a simulator these days. There are a billion of them online. Simulating a quantum computer is kind of a fun problem to work through. It's a thing that you need to do if you're going to be working with quantum computing at all. But there are a lot of great things that you can put into a simulator to make it more efficient, to make it more useful, to make it better for debugging, and so on. So our simulator, the one that we call the QVM, is very, very high performance. 
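To ground what "simulating a quantum computer" means here, a toy sketch of pure-state simulation: the state is a vector of 2^n complex amplitudes, and each gate is a small matrix swept across it. This is only an illustration of the idea, nothing like the real QVM, which is multithreaded, compiled, and far more capable:

```python
import math

# Toy pure-state simulator: the state is a list of 2**n amplitudes.
# A single-qubit gate is a 2x2 matrix applied across the whole vector.

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q (qubits indexed as bit positions)."""
    new = state[:]
    for i in range(len(state)):
        if (i >> q) & 1 == 0:
            j = i | (1 << q)
            a, b = state[i], state[j]
            new[i] = gate[0][0] * a + gate[0][1] * b
            new[j] = gate[1][0] * a + gate[1][1] * b
    return new

def apply_cnot(state, control, target):
    """Flip the target bit of every basis state whose control bit is 1."""
    new = state[:]
    for i in range(len(state)):
        if (i >> control) & 1:
            new[i] = state[i ^ (1 << target)]
    return new

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]

state = [0.0] * 4
state[0] = 1.0                     # start in |00>
state = apply_1q(state, H, 0)      # H 0
state = apply_cnot(state, 0, 1)    # CNOT 0 1
# Bell state: only |00> and |11> have nonzero probability.
print([round(abs(a) ** 2, 3) for a in state])   # -> [0.5, 0.0, 0.0, 0.5]
```

The per-gate sweep over all 2^n amplitudes is also why 26 qubits takes seconds and tens of qubits takes terabytes, as the demo below shows.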
You can throw it on any machine. I think the biggest we've thrown it on is something that has like 12 terabytes of memory and a four-socket motherboard with something like 224 logical cores, and it'll use all of it. It's completely multithreaded. It's a good way to heat up your house, if that's what you're aiming to do. It can execute the entire core language. It does all the classical stuff, all the quantum stuff. It has lots and lots of different ways to execute programs. The standard way does something called pure state evolution. It assumes that your computer is perfect, that it has no errors, that it's living in that fairytale land. But then you have these other execution modes, like stochastic pure state evolution, full density matrix simulation if you want that, and some other things. I haven't personally found particular use for them, but stuff like the path integral formulation, if you want to compute just a single amplitude of your exponentially sized state. Like I said, it can simulate perfect and imperfect quantum computers, but the thing I'm most excited about with the simulator is that it contains a compiler that compiles the Quil code into native instructions of your computer. So it's not trying to interpret the instructions; it actually compiles it all to machine code. And because of that, you get really, really fast execution, sometimes outperforming some state-of-the-art simulators written in the finest toothpick-and-tweezers C code, sometimes by 2x, because it's able to do dynamically what a compiler does. And so I want to show a little demonstration of that, just a real quick one. So if I run the QVM program, I'm going to say verbose, and I'm just going to run one of the standard benchmark modes, which computes a GHZ state, an entangled state on, I think the default is 26 qubits. 
So if you see it run, you can see each gate, these gates over here, the Hadamard at the top, the CNOTs. It's taking about 500 milliseconds on my laptop per gate. We'll let it get to the end, just so you can feel how long this stuff takes. To compare, on a real actual quantum computer, each of those gates would take on the order of maybe 50 nanoseconds or so. So you see the run time at the end was about 20 seconds, and all of this was without compilation. So if I tell it to compile, what it's going to do is take the Quil program that's being benchmarked here, compile it to machine code, and then run it. And if we do that, you just see the difference is a lot; we got a 10x speed-up just by pre-compiling the circuit. I want to move on from the simulator and talk about our compiler, which I'm even more excited about. So as far as I understand, quilc is the only truly general-purpose, fully automatic optimizing compiler, which means it's turnkey: you can run the program, it'll compile, it doesn't need any extra information, you don't need to give it specific steps on how to compile or anything like that. You give it your program and it'll compile to a specific architecture. It was built with portability in mind. Of course, we use it at Rigetti Computing to compile for our architectures, but we tried to build it as a general-purpose tool, because we design different chips, we try out different gate sets, and every single time we do that, we don't want to have to go back and figure out how to recompile our programs and everything. It can compile any unitary gate, at least any unitary gate that feasibly fits in memory and that you have time to compile for. So two-, three-, four-qubit gates, it doesn't matter. And most interestingly, it has a lot in it, and we're still adding to it, of course: lots and lots of different ways of optimizing your program. It does a lot of things that have a lot of similarity to classical compiler theory. 
So it does something similar to register allocation: it tries to figure out what actual physical qubits it should use given your logical qubits in the program. It does something similar to peephole optimization: it'll scan sections of your code and try to compress or optimize them. Flow analysis: when you have classical portions of your program, when you have jump statements and loops and if statements and whatever, it analyzes the control flow of your program. It does all the classical stuff like dead code elimination and all of that. And then for some cases, like two-qubit gates, it actually has routines for doing optimal compilation. It will compile it in the best way that mathematics has proven to be the best. Like I said, I'm a software person. I've been writing software since I was like eight years old, and it's definitely one of the most interesting pieces of software that I and my colleagues have been able to work on in my career. So I can show a little demo of this as well. We follow the standard of being very Unixy with the application. So just by default, if you run it, it's going to take input from standard in. If I make a Bell state, for instance, and end it, it compiles it by default to one of the eight-qubit rings that we have. So it compiled that Hadamard and CNOT at the top to that program out there. It turned it into Z rotations, X rotations, and CZ. So in this case, the default chip that it compiles to is one where CZ is the gate that can operate between edges. One of our researchers at work was messing around with the Bernstein-Vazirani algorithm, which, for those unfamiliar, contains this portion of the algorithm where you have to define something called an oracle. And depending on what your goals are as a researcher, maybe you're not trying to think about exactly how to construct this oracle. Maybe you're just, I don't know, trying to get your work done and you just want to try running it on the quantum computer. 
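To make the peephole idea concrete, here is a toy pass in that spirit, not quilc's actual algorithm: scan adjacent instructions, merge consecutive RZ rotations on the same qubit, and cancel back-to-back identical CNOTs:

```python
# Toy peephole pass (illustrative only): instructions are tuples like
# ("RZ", angle, qubit) or ("CNOT", control, target).

def peephole(program):
    out = []
    for instr in program:
        if out:
            prev = out[-1]
            # RZ(a) q ; RZ(b) q  ->  RZ(a+b) q
            if instr[0] == "RZ" and prev[0] == "RZ" and instr[2] == prev[2]:
                out[-1] = ("RZ", prev[1] + instr[1], instr[2])
                continue
            # CNOT q r ; CNOT q r  ->  (nothing, they cancel)
            if instr[0] == "CNOT" and prev == instr:
                out.pop()
                continue
        out.append(instr)
    # drop rotations whose angles merged to zero
    return [i for i in out if not (i[0] == "RZ" and abs(i[1]) < 1e-12)]

prog = [("RZ", 0.3, 0), ("RZ", -0.3, 0),
        ("CNOT", 0, 1), ("CNOT", 0, 1),
        ("RZ", 1.0, 1)]
print(peephole(prog))   # only RZ(1.0) on qubit 1 survives
```

A real compiler iterates passes like this to a fixed point and interleaves them with the rewiring and flow analysis described above.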
So what he did is he defined this giant matrix. That's what all these ones and zeros and all this stuff are. And then his actual program. So you see he does an X gate, a bunch of Hadamards, this giant five-qubit operator, and so on. And so if I cat Bernstein-Vazirani into quilc, and I'm going to make it measure gate depth and all of that, it'll think about it for a second and it gives the answer. So it decomposed this five-qubit gate into a bunch of gates. We can see at the bottom it compiled it into a circuit that has a depth of 2,746 gates, which is way too many to execute on any current quantum computer, for the record. But it starts giving you an idea of how this circuit would actually be realized on the real machine. So one point that I like to make is that I really, really believe in automatic compilation. Sometimes I feel that some folks in the field are maybe stuck in the 1950s or something. I know our quantum computers currently aren't these machines that can execute hundreds or thousands or millions of gates yet. We're still kind of poking around with machines measured between 10 and 100 qubits. We're still looking at gate depths that you can still count on both of your hands. And it is reasonable to construct programs by hand that way. But in general, to not have a tool that you can use to study the mathematics of a problem, to debug things, to even see how it decomposes into the actual native code of your machine, I think it's not good to lack such a tool. I also think that computers are just exceedingly good at doing what they do. I mean, compilers these days, like GCC, Clang, all of those, they're amazing. They're incredible. If you see what Intel's Fortran compiler does, it does amazing things. It's a real testament that computers are really good at figuring things out, at calculating things. 
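For intuition on what a "gate depth" report like that measures, here is one common way to compute depth, sketched in Python: each gate lands on the earliest layer after every qubit it touches is free, so gates on disjoint qubits share a layer. (The real tool's accounting is more refined, but the idea is the same.)

```python
# Sketch of circuit-depth computation: instructions are tuples of a
# gate name followed by the qubits it acts on.

def gate_depth(program):
    busy_until = {}                       # qubit -> last occupied layer
    depth = 0
    for _name, *qubits in program:
        layer = 1 + max(busy_until.get(q, 0) for q in qubits)
        for q in qubits:
            busy_until[q] = layer
        depth = max(depth, layer)
    return depth

prog = [("H", 0), ("H", 1),               # layer 1 (run in parallel)
        ("CNOT", 0, 1),                   # layer 2
        ("RZ", 1),                        # layer 3 (angle omitted here)
        ("CNOT", 1, 2)]                   # layer 4
print(gate_depth(prog))                   # -> 4
```

Depth matters more than raw gate count on real hardware, because it bounds how long the qubits must survive before decoherence sets in.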
So I do have this belief that if we're able to confidently write C for small microcontrollers that contain kilobytes of RAM, then we can write Quil, or some other instructions for a quantum computer, that passes through a compiler. Is quilc perfect at this? Is it outperforming hand-compiled circuits? No. But can quilc compile a circuit faster than you could? Almost surely yes. Sometimes circuits take, I don't know, three days by hand to figure out even what to do. So I want to give a little example to show what quilc even thinks about. So if I pass the verbose option, I'm going to compile this Bernstein-Vazirani circuit again. But this time I'm going to say, just tell me everything you're thinking, compiler. And this is everything. I'll of course maybe scroll after it stops thinking, or I'll just control-C it. You can see here it's going through, it's looking at different sub-sequences. Here it's looking at Rx, Rz, Rx. It's going to try to collapse that in some way. And it just continues until it just tears your program apart. Lots and lots of interesting stuff. I like seeing the compiler think, because it again reminds me that it's smarter than I am. I want to talk a little bit about what a compiler target for a quantum computer even looks like. Generally you have a graph of qubits that are connected in some way physically on a chip. Somebody actually etched metal or something into the chip to link these qubits together. So you have a graph of qubits. Each single qubit can have some operation on it; usually for superconducting qubits you're firing some microwave pulse. For example, in Quil, you could have an Rx pi-over-two gate or an Rz, totally parametric or fixed. Each qubit pair can have operations. Some example gates are CZ and CNOT; an example parametric gate is CPHASE, where you can actually give it a parameter. And then there are even gates that happen on qubit triplets or quadruplets and whatever. 
And one of the examples that I know is the Mølmer-Sørensen gate for ion traps, where you can actually perform a gate simultaneously on many qubits. Importantly though, it may be that depending on the design of your quantum computer, different qubits might have different operations tuned for them. So we could imagine a four-qubit quantum computer. I mean, this is crazy; I don't think anybody would actually build this, to be clear. Where each qubit here has some single-qubit operations, but each qubit pair has a different set of native operations. So what this diagram is showing is that qubits zero and one may support a CZ operation on them. Qubits zero and two might only have a CNOT, and only in one direction, you know, and so on and so forth. And so we designed quilc to accommodate chip designs that may have this. And so quilc can compile a circuit there, and I would challenge you to try to construct a GHZ state by hand on an architecture like this. I mean, I don't know, it'd be a waste of time, I think. It's probably a waste of time to make the chip, too. So for FOSDEM, kind of as a testament to this, we actually ported quilc to what we know about Google's Bristlecone architecture, which is 72 qubits. We ported it to IBM's QX5 chip, again, from what we could learn about it from papers online and everything. Any program that was written in Quil, any program that works in Grove, anything at all will compile to any of our chips, and again, to what we could find about the Bristlecone chip by Google and the IBM QX5 chip. But not only that, it will actually try to optimize for those architectures. It can work with the full chip, or you can actually select subgraphs of that chip and it'll compile for those. And again, as far as I know, I don't really know of other compilers that are able to do this job. So as a little demo, I actually wrote a program with this Mølmer-Sørensen gate. If I look at it, you can see this crazy, crazy big gate. 
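A compiler target like the heterogeneous chip described above can be written down as data: a graph of qubits where every edge carries its own native two-qubit gate set. A sketch of that idea in Python (the field names here are invented for illustration; quilc consumes its own real ISA file format):

```python
# Hypothetical chip description: per-qubit and per-edge native gates.
# This mirrors the imaginary four-qubit chip above, where different
# edges support different operations.

isa = {
    "1q": {q: ["RX", "RZ"] for q in range(4)},
    "2q": {
        (0, 1): ["CZ"],
        (0, 2): ["CNOT"],        # CNOT, and only with control 0, target 2
        (1, 3): ["CPHASE"],
        (2, 3): ["CZ", "CNOT"],
    },
}

def is_native(gate, qubits, isa):
    """Can this gate run directly on these qubits, without translation?"""
    if len(qubits) == 1:
        return gate in isa["1q"].get(qubits[0], [])
    return gate in isa["2q"].get(tuple(qubits), [])

print(is_native("CZ", (0, 1), isa))      # native on that edge
print(is_native("CZ", (0, 2), isa))      # not native: needs rewriting
```

Anything not native has to be rewritten by the compiler into gates that are, which is exactly the translation job described in this talk.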
I think this one's on, yeah, four qubits. The program down here is just doing Mølmer-Sørensen on four qubits. So if I cat Mølmer-Sørensen into quilc, I'm going to show the gate depth and I'm going to compile it for our 8Q ring. It'll do something. So gate depth, this is about 319. If I do ISA Bristlecone, bristle, if I can spell it right. Bristlecone. It'll think about it more, and it actually found a smaller gate depth, not by a lot, but a little bit. We could do it for the IBM QX5. And by the way, this argument here, this ISA argument, is actually parametric. You can supply it a file to describe the chip that you have, instead of it being baked into the compiler. So this one, it like doubled the gate depth; it took 600 gates. I want to show one last thing, where we have some advanced features of the compiler that can optimize it even more. So the argument is enable state prep reductions. So again, like a normal compiler: if you're compiling C code and you pass -O3 or -ffast-math or whatever, we have similar things. This argument here says: compile the program and assume that we're starting in the initial zero state. Typically, that's what you assume when you write a quantum program. And if you can make that assumption, it'll actually take that into account in the compilation process. It'll partially simulate it, and we actually get down to a gate depth of a third of what we got before. So there are all different ways you can coax the compiler into compiling things and get more and more efficient circuits. So the QVM and quilc are free to download. We just released the Windows version. For the QVM, we do have an open source alternative. Like I said, the PyQVM; it doesn't have all of the things, but it is licensed under Apache 2. 
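To see why the zero-state assumption buys anything, here is a toy sketch of a state-prep reduction, not quilc's actual pass: if a qubit is still known to be in |0>, a leading RZ is only an unobservable global phase, and a leading CZ acts as the identity, so both can be dropped:

```python
# Toy state-prep reduction (illustrative only): drop gates that act
# trivially while their qubits are still known to be in |0>.
# Instructions are tuples of a gate name followed by its qubits.

def state_prep_reduce(program):
    still_zero = set()                    # qubits known to still be |0>
    for _name, *qs in program:
        still_zero.update(qs)             # every qubit starts in |0>
    out = []
    for instr in program:
        name, *qs = instr
        if name == "RZ" and qs[-1] in still_zero:
            continue                      # phase on |0> is unobservable
        if name == "CZ" and all(q in still_zero for q in qs):
            continue                      # CZ|00> = |00>
        for q in qs:
            still_zero.discard(q)         # conservatively: may leave |0>
        out.append(instr)
    return out

prog = [("RZ", 0), ("CZ", 0, 1), ("H", 0), ("RZ", 0), ("CZ", 0, 1)]
print(state_prep_reduce(prog))            # leading RZ and CZ are dropped
```

The real compiler goes further, partially simulating the prefix of the program, but the payoff is the same: fewer gates once you assume a known starting state.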
It's much slower if you're doing it for lots of qubits, but if all of your work is in Python, it's actually much faster if you're doing one- or two- or three-qubit computations, because there's no overhead communicating with an external process and so on. And there's no real alternative to quilc, and maybe in that case, if you don't want to use this thing, then you can compile things by hand or use some of the more manual alternatives. So I wanted to talk a little bit, from a startup perspective, about open and closed source software. And so it's a little bit tricky at a startup, what you should decide to do when you want to work in software, especially when the startup is small like Rigetti and when all your competition is quite large. From the perspective of open source, there are a lot of great things. Languages should definitely be open sourced. It's nice when you have a language that's open source, and APIs, because people can help develop them and so on. It's good when your RPC framework is open source, because that means people can plug into your architecture in various ways. And open source allows us to understand what customers actually want and use more. On the other hand, for a company, again, like ours, when you develop a lot of great IP and everything, it's not something that you necessarily just want to give away. It's something that you want to hold close to yourself. It's something that you can use as a competitive advantage. I actually don't really want to waste time talking about it, because I'm going to do it anyway. So actually, what I want to do is, kind of live at FOSDEM, open source quilc and, I mean, why not, the QVM. Which one is this? Quilc. So, quilc, hopefully, should be visible to you. You know, what's the license? QVM, let's... I think the QVM became public. There shouldn't be the private symbol anymore. Yeah, I think it's public now. So anyway, feel free to start looking at the code. 
All of the code is licensed under Apache 2. There is one little front-end UI component that's AGPL. But no other licenses in there. No EULAs, no anything. It should be all free and open source. So you guys are truly the first to know. It wasn't announced or anything earlier, and you saw it with your own eyes. So I'm American, and hopefully nobody mistook my shirt for being like Colonel Sanders or something. My shirt is, in fact, John McCarthy, who was the inventor of Lisp. And early on, we wrote the QVM, an initial prototype, in Lisp, and the compiler in Lisp. I don't think many of the innovations that we've had could have happened if they were not originally written in Lisp. We don't have teams of, you know, tens or hundreds of developers. Originally, the software team was actually like two or three people, way back when the company was 20 people. And developing in Lisp was pretty time- and money-efficient, more or less. It's very snappy to develop. Moreover, quantum computing is an interesting field in that I don't think anybody has really figured out a great expressive syntax for expressing programs, for being able to take an idea in one's mind and actually convert it into something that runs on a machine. I mean, Quil and QASM and all of these things are good starts, I think. They describe a machine representation of a program, but they don't necessarily make it easy to take an idea in one's mind and put it on paper. And so Lisp is a wonderful substrate for testing different syntactic abstractions, different ways of arranging characters on a screen in order to express some type of computation. The real, real, real reason, though, is that a compiler, an optimizing compiler, I don't know if anybody's hacked on an optimizing compiler before, but they're very difficult to debug. If you get a wrong answer, let's say we were compiling that Mølmer-Sørensen gate, and at the end we detect it's the wrong answer. 
Like, where do you even start to debug something like that? And so in Lisp, in particular with Emacs and SLIME, which are both also free software, you can inspect everything. You can open up everything live while the code is running. While the compiler is running, you can actually open a REPL and inspect all of the state inside the compiler, inspect what it's doing, how it's working, all of that stuff. Not just verbose printing out on the screen, but actual interactivity. You know, and there's kind of the usual argument of, you know, it's Lisp, so maybe it's hard to learn or something like that. None of our programmers, barring me and I think one other, were professional Lisp programmers prior, and most people are productive within a few days. It's not like a purely functional language or anything like that. If you want to mutate state and do all that stuff, you can absolutely do it. I have to do it for the QVM to make it efficient. So most people can learn it. And if you're interested in learning it, there's a free book. It's been out for, I don't know how long now, 10 years or so: Practical Common Lisp. All of you probably are programmers already, so if you want to check it out, it's a great book. So if you find yourself wanting to be in superposition with a beer at some point, I invented this beer-plus-you-over-root-two challenge. If anybody, again, you guys are the first to know, kind of the first three people: if you solve any of the issues on GitHub, we tried to even internally keep a list of issues, if you fix a bug, if you just make any interesting contribution, I'm happy to have a beer with you. It'll be on me. I should say thanks to all of my colleagues at Rigetti. In particular, I'd like to give thanks to Eric Peterson. He's one of the principal architects of the compiler. He's also an algebraic topologist, so he thinks about chips as, you know, n-dimensional simplices and all this stuff that doesn't make any sense. 
Mark Skilbeck, Lauren Capelluto, Will Zeng, Chad Rigetti, and the rest of the team, really, for supporting building this effort, and for really thinking about quantum software engineering as a thing unto itself. My email's down there. The code on GitHub is rigetti/qvm and rigetti/quilc. And we do have a Slack channel. If you go to rigetti.community, you can find the link to our Slack channel. I'm on it, a lot of other people are on it, happy to chat about this stuff. And thank you all for listening. It was really fun. Happy to take questions. Yep. So two things. First of all, you said many times that quilc is the only compiler of its kind. That's not correct. Cambridge also compiles to chips; their project is t|ket⟩. It's very similar to this thing you mentioned; it comes as a Python module as well. My actual question is about the simulator. So you mentioned the QVM is a platform-independent simulator? Yes, that's right. So the simulator is going to just simulate any Quil program? Oh, yes. So the question was, well, first there was a comment saying that I said that the compiler was the only one of its kind, and this gentleman said that there is some other software that provides a lot of similar features. And then the question was, is the simulator general enough to support other architectures, like the IBM QX5, for instance? The simulator develops from a theory of something that we wrote in the Quil paper called the quantum abstract machine. So it's a simulator for an abstract machine. It's a classical simulation. It doesn't restrict itself to any particular gate set or architecture. Whatever gate you throw at it, it will simulate. It will not error if you say, I'm doing this gate on this architecture. Presumably, you would pass it through the compiler first before simulating it if you wanted to restrict it to the gates of that architecture. 
With that said, however, for instance, with our chips, we do develop noise models that are at least somewhat characteristic of our chips. And you can supply that to the QVM, and it will be able to simulate that. But otherwise, there isn't any hard-line restriction on the gates that it simulates. No. Yep. In terms of practical efficacy, what fields does quantum computing apply to? Yeah, so I don't think I'm the best person to answer this. I think you'll, oh yes, I keep forgetting to repeat the question. What fields, in terms of practical applicability, is quantum computing good or useful for? I don't think I'm the greatest person to answer this question. I think I definitely want to learn a lot from the people here. But my take on it is there are two things that we've definitely been focusing on, two things that seem to be popular in the field. Quantum chemistry: simulating dynamics, simulating different aspects of molecules, finding energy states or energy levels and so on. And optimization problems: usually finding the maximum or minimum of some constrained optimization problem, I would say. So that ties into logistics and what have you. But I do want to reiterate, we haven't solved a problem on a quantum computer, again, that I know of, that outperforms something that we've computed classically. We've done great proofs of concept. The almost hello-world of quantum computation seems to be computing the energy of a dihydrogen molecule at different bond distances. It's something you can compute today, but that's also something that my laptop can compute. But those seem to be the promising areas. Yep. There are algorithms, basically, where people are finding ways to develop cryptographic functions that are resistant to quantum computers. Basically, I think the question is, I guess there is also quite a good field of application of quantum computers in cryptography. 
Yeah, the question is kind of, what's the state of post-quantum crypto? I actually don't know anything about it. And I think we have a talk later that's going to be about it, so that's definitely something I'm interested in hearing about as well. Yep. I have a question about the Quil language. Yes. You say this is a kind of assembly language. So when you have the binary, is there a one-to-one relationship with the code? Yeah, right. Sure. The question is, I said Quil is like an assembly-like language; is there an equivalent of the end binary that would be loaded into memory where it actually gets executed? Quil, like other sort of big-air-quotes "assembly" languages, is not a true assembly. It's not a language that runs native on the machine; generally not even a binary encoding runs native on the machine. In the end, these things, at least with current control systems and on microwave devices, to be clear, superconducting devices that are driven by microwaves, usually in the end the Quil program will be something general that has all these funny gates. Like, if I go back and look at this Mølmer-Sørensen thing: even though it looks like assembly, that's definitely not something any superconducting computer will run. An ion trap, or some, can run this natively, but not any superconducting chip. And so what happens is this passes through a compiler and you get Quil back out, but this Quil is now in one-to-one mapping with what your machine will run. Generally after that, you're going to pass it to another sort of translator, almost like a true assembler now, which turns it into code that runs on your control system. Even when it gets on the control system, however, it's not going to be like one single binary that runs single-threaded. 
Oftentimes, since qubits can be excited in parallel, you're actually loading this code onto a bunch of things that are driving a bunch of different D-to-A converters, and A-to-D converters for readout, and so on. So there is a little bit more that has to happen after this for a compilation. Yep.

Well, first of all, congratulations. Yeah, thanks. And the question is, when you said a 30-times speed-up over a web service, what were the metrics you had in mind?

Yeah. So, the question was where the 30x came from; I stated a 30x speed-up before. There are a few metrics, but the comparison was the realm of HTTP-based services, where you construct a job of some sort, you have a program you'd like to run, you ship it off to some service that ingests the job, it processes it, packages up the answer, and sends it back to you over the internet. For a lot of quantum programming you have to do that a lot, because maybe you're scanning over a range of things, or your program itself is changing frequently; you might be generating a new circuit for every run of your program. So you might be optimizing, let's suppose you're finding the maximum of something: you construct the initial program, you send it, you get the answer back, you evaluate whether that was good or bad. Maybe it's not good, so you take a step in another direction, construct a new circuit, send it, receive it. Those types of use cases were what we were comparing to, specifically VQE and QAOA algorithms, if you're familiar with those names. The speed-ups come from locality with the machine: on a QMI, you're actually running local to the chip itself. Your virtual machine is actually resident on the machine with the control system. The second thing is, what is it called?
Reset with fast feedback, or active reset, is another one: we can zero the state of the machine by kind of forcing it to be zero. We send pulses in, measure, and bring it down to the zero state, as opposed to stepping off to the background and letting the machine slowly relax back. So you can really iterate your program fast. And the third thing we introduced is something called parametric compilation, where the compiler not only takes a fixed set of gates but can actually take a parametric description of your gates, with unknown variables, and still compile it into your native gate set. With parametric compilation, you don't have to recompile on each loop iteration when you hit the actual hardware. And so that's where the speed-ups came from; those are the problems we compared to. Yep.

Can you use a different native gate set? So, if I define my own native gate set, will it run on the QVM?

Yeah, I don't know if I'll get the question right: can the QVM operate with different native gate sets? The QVM will execute anything. Whatever gates you throw at it, that'll be fine; there's no restriction there, as long as it understands the actual representation of the gate. Quilc, I would say mostly yes. Quilc is able to operate with most of the gates that show up in the literature: rotation gates, a big handful of two-qubit gates. It's relatively straightforward to extend it with more two-qubit gates; we had to do that when we were retargeting it for the IBM architecture, and we had to go through a process of making sure, at every step, that it could optimize for those gates and so on. Out of the box you can't say: compile for my wacky two-qubit Mølmer–Sørensen gate, don't do any rotations, do these funky one-qubit operations, or something like that. The compiler would need to be told a little more about that. So out of the box you won't get that.
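[Editor's note: the parametric compilation mentioned in that answer can be sketched in a few lines of plain Python. This is an illustration of the idea, not Rigetti's implementation: the expensive compile to native gates happens once, leaving a symbolic slot open, and each iteration of the classical loop only binds a fresh value instead of recompiling.]

```python
# Toy "compiled" program: native gates with a symbolic parameter left open.
# In parametric compilation, the costly compile step happens once;
# only the parameter value changes between hardware runs.
COMPILED_TEMPLATE = [
    "RX(pi/2) 0",
    "RZ({theta}) 0",   # the free parameter survives compilation
    "RX(-pi/2) 0",
    "MEASURE 0 ro[0]",
]

def bind(template, **params):
    """Cheap per-run step: substitute concrete values, no recompilation."""
    return [line.format(**params) for line in template]

# Classical optimization loop: every iteration reuses the compiled template,
# binding the angle chosen by the optimizer for that step.
for step, theta in enumerate([0.1, 0.5, 1.2]):
    program = bind(COMPILED_TEMPLATE, theta=theta)
    # ...ship `program` to the control system and read back results...

print(bind(COMPILED_TEMPLATE, theta=0.5)[1])  # RZ(0.5) 0
```

This is exactly the shape of the VQE/QAOA loops mentioned earlier: the circuit changes on every iteration, but only in its parameter values, so skipping recompilation pays off on every step.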
But if it falls into any of the gates in the general superconducting regime, like iSWAP, parametric iSWAP, all of those, it does work out of the box. Am I missing anybody? Yep.

I think you partially answered this, but you said that you provide a personal quantum machine in cloud services. Is there a back-end with true quantum acceleration, or is it all a big simulator on a classical cluster?

The question is: these quantum machine images, is it all just a simulator, or is there a real back-end? Actually, the major point of it is the real back-end; the simulation is more of a bonus. In fact, we just load a QVM onto the machine images, and you can run the QVM like a normal UNIX program. What you can do is run it as a service, and now you can connect up to it. So the simulation is something you kind of get for free. The real point of the QMIs is that these virtual machines sit resident with the quantum hardware itself, and you get full, 100% control. It's not being mediated; you're not in a queue or anything like that. You get immediate, one-to-one access to the machine.

And my question was: how do you share those resources between all the machines you get access to?

Yep. So the question is, how do we share those resources across several machines? Just coarse-grained booking. We have a QCS command-line app, which is also open source. You can do QCS reserve, and you reserve a block of time where the machine is yours and only yours. So if it's available, you can get it. We also allow booking on what we call sub-lattices. I don't remember if there's a little cute image of a lattice, where you can see this guy pulling a little lattice out from the 16-qubit ring. You can book smaller chunks of the machine and operate on just those. I think that's it for time, right? We might have one more question. Okay. Who else? Yep.
If I understood correctly, compilation involves an optimization problem, minimizing the circuit length. What heuristics does it use?

So the question is: compilation involves some type of optimization problem; what heuristic, or what different heuristics, does it use? I want to give the snarky answer and say: read the source, now that it's available. But it actually uses a lot of different things, depending on exactly what it's compiling. A good amount of it, I probably can't scroll back to it, a good amount of it is doing peephole rewriting. It has a bunch of rewrite rules, and this is probably the easiest thing to add to the compiler: if anybody has a cool circuit identity they like, they can put it in the compiler, and the compiler will suddenly know about it and be able to apply it automatically. So, finding sequences of gates and shortening them. Sometimes it just multiplies the gates out and does an optimal two-qubit compilation: if you have a bunch of two-qubit gates in a row, it multiplies them out and then does a re-expansion on them. And lastly, in some cases it actually has to do a stochastic gradient, not stochastic, just a gradient ascent or something, for certain types of compilation, but there's definitely not enough time to get into that. Lots and lots of different optimization problems in the compiler, certainly. All right, let's thank the speaker one more time.
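[Editor's note: the peephole rewriting described in that last answer can be illustrated with a toy pass in plain Python. The two rules below are standard circuit identities chosen for illustration; quilc's actual rule set is much richer. The pass scans adjacent gates, applies local identities such as H·H = I and RZ(a)·RZ(b) = RZ(a+b), and repeats until nothing changes.]

```python
# Toy peephole rewriter over a gate list: each gate is ("NAME",) or ("NAME", angle).

def rewrite_once(gates):
    """One left-to-right pass applying local identities; returns a new list."""
    out, i = [], 0
    while i < len(gates):
        if i + 1 < len(gates):
            a, b = gates[i], gates[i + 1]
            if a == ("H",) and b == ("H",):    # H·H = I: drop both gates
                i += 2
                continue
            if a[0] == "RZ" and b[0] == "RZ":  # merge adjacent Z-rotations
                out.append(("RZ", a[1] + b[1]))
                i += 2
                continue
        out.append(gates[i])
        i += 1
    return out

def simplify(gates):
    """Apply passes until a fixed point: one rewrite can expose another."""
    while True:
        new = rewrite_once(gates)
        if new == gates:
            return gates
        gates = new

circuit = [("H",), ("H",), ("RZ", 0.25), ("RZ", 0.25), ("RZ", 0.5)]
print(simplify(circuit))  # [('RZ', 1.0)]
```

Adding a new circuit identity is just adding one more branch to `rewrite_once`, which matches the claim in the answer that rewrite rules are the easiest thing to contribute to the compiler.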