So I'm Matthew Treinish, and I work at IBM Research on the quantum computing team. I'm here today to talk about quantum computing and the role that open source software is playing in the development of these new types of computers. To start, though, we really need to talk about what a quantum computer is, because I find a lot of people have some misconceptions. The first one everyone assumes is that the world is going to be on fire because of something called Shor's algorithm, which was developed in the 90s and shows that a quantum computer can efficiently factor large numbers into their prime factors. The product of large primes is the basis of RSA encryption, and that's what everyone uses for everything, so everyone assumes the world is going to be on fire. But that neglects the fact that for Shor's algorithm to factor numbers of the size we use for RSA, we need something called a universal fault-tolerant quantum computer, and one with massively more qubits than what we have today. It's a long way in the future; the numbers I hear are decades before this is a reality. The other misconception, for people a little nerdier like me, comes from science fiction. There's a quantum computer in an anime I loved in college, and you get this idea that quantum computers are supercomputers floating in space and glowing. That's not actually what a quantum computer is, although it'd be kind of cool if I worked in space with glowing lights. This is a real quantum computer: one of IBM's quantum computers at the research lab in Yorktown Heights, New York, where I work. Most of the space here is actually taken up by cooling. That white cylinder, and all of the internals shown in the three pictures, are what's called a dilution refrigerator.
That enables getting the quantum chip, which that guy is holding in his hand there, down to a temperature of about 10 to 20 millikelvin, which is actually colder than outer space. It needs to be at that temperature for the quantum chip to operate and to limit the noise. The other key thing, in that picture on the upper right, is the transmission cables, which carry microwave pulses from the electronics outside the refrigerator into the quantum chip. Those microwave pulses are what actually perform the operations on the qubits, and you can see some of those electronics in the background there. But just like in any computer, the processor is still the smallest part of it, and we can zoom in on these chips and get some die photos of what they actually look like. Here we have die photos of two different quantum computers. The one on the right is a five-qubit device called Tenerife, and the one on the left is a 14-qubit device called Melbourne. The key features to take away here are the qubits, which are labeled Q1 and so on in the picture on the right; they're those little squares. Between them, labeled B, are the bus resonators, which are basically just connections between the qubits, used for running multi-qubit operations; you can see there's a certain pattern to the connectivity. And then there are the resonators labeled R, which are for reading the data out of a qubit when it's time to measure. As sophisticated and as cool as these devices are, they still have a lot of physical limitations. The most obvious one is the number of qubits: this one has five, this one has 14. That's not a very large number. You can't really do big problems with it, especially if you're used to working with 16 or 64 bits of data. So there's a size limit there.
But the other thing that people don't really realize is that these machines are very noisy. They're very susceptible to temperature fluctuations and other sources of noise in the environment, and as you perform operations on them, the noise builds up over time. There's also something called quantum decoherence: if you run long enough, you lose your quantum state, all the information in your qubit disappears, and you just get a random output, which is not what you want if you're trying to do computation. All of these things together limit what we can do with quantum computing today. You can't just increase the number of qubits; you could build a bigger chip, but you'd still have noise problems, or interconnection problems, because you can't run operations on two qubits unless they're connected. So there are still a lot of limitations with the devices that exist today. But why am I talking about this if they're so limited? Because despite these limitations, it's actually a really exciting time in quantum computing. To showcase that, I have this timeline, which I borrowed from one of my colleagues because I don't actually have a background in quantum information theory, just to show how things have evolved over time. In the upper left-hand corner we have the 1930s, with the theoretical underpinnings of quantum mechanics. It wasn't until the 1970s that people realized you could use quantum mechanics to do computation and apply it to information theory. Into the 80s, they started having conferences on the topic, and people started developing different algorithms and techniques on a strictly theoretical basis. Then in the 90s came practical lab experiments: in experimental physics labs with dilution refrigerators, people started building small one- or two-qubit devices, and those got more sophisticated over time.
But it was the domain of laboratory research until about three years ago, when IBM decided to take one of the five-qubit devices in its research lab in Yorktown and say, you know what, this is pretty stable, let's put it on the internet for anyone to sign up for an account and submit jobs to. That was the IBM Quantum Experience in 2016, and when that happened, it really changed the game, because you no longer needed a PhD in experimental physics to get access to these machines. Anyone in the world, sitting at home, could just submit jobs and start playing around and learning about quantum computing. And that's where the project I work on comes into play, because with open access, you needed software to deal with it. We started developing a toolkit called Qiskit, which is an open source SDK for programming quantum computers. It's designed for dealing with NISQ devices, or noisy intermediate-scale quantum computers, a term that covers the computers we have today and into the near future: still small, still not fault tolerant. Basically, it provides anyone the tools to build software, interact with these computers, and try to use what we have today. It's an Apache-licensed project, and we've designed it to be backend agnostic. While out of the box it runs on IBM's quantum computers and some local simulators we include, the framework itself is not hard-coded for that; you can write your own backend if you have access to other quantum devices or you have your own simulator that you wrote. And like a lot of open source projects, it's made up of multiple components. We've named them after the classical elements, kind of to be a little funny, I guess; I don't really know, I didn't come up with the names. Of the four components of Qiskit, first there's the Terra component, which is the base. It provides the interface between the quantum hardware and the software.
It's basically a Python SDK for writing quantum circuits, compiling them for devices, and dealing with results. Then there's the Aer component, which is a high-performance simulator written in C++. The Aqua component is interesting for a lot of software developers, because it's a Python library with pre-written algorithms for quantum computing: you give it data in an expected format and it gives you the result, so you don't have to think about the quantum information at a low level. It works just like any library call. And then there's the Ignis component, which is designed for dealing with noise. The computers we have today are very noisy, and Ignis provides some techniques for characterizing that noise and trying to mitigate it. It was at this point in the presentation that I wanted to show an example application, something we can actually run on one of these quantum computers today, to give you a feel for how you do this. But before I can do that, I really need to provide some background on quantum information, because it's not something most people know about. This is not going to be an exhaustive primer on quantum information theory, just the basics so I can explain the application; I'll have some links and places for more information at the end if people are interested. So to start, we need to talk about the qubit, or quantum bit. The easiest way I've found to think about a qubit is using the Bloch sphere, which is the sphere on the right. It's a geometric representation of a quantum bit, and it lets you visualize pretty easily in your head the quantum state and the effect your operations are having on it. The state is represented by a vector, that orange line, and you basically perform operations by moving that vector to any point on the surface of the sphere.
Just like a classical bit, it can be in the zero state or the one state, and when you perform operations, you can move it anywhere on the surface of the sphere. But when it comes time to measure, to get a result from the qubit, you measure either a zero or a one and none of the other information. The state collapses either up or down, you get a zero or one out, and when you measure, you lose all of the other information. So if it's pointing somewhere else on the sphere, you still get a zero or one out; where it was pointing, you don't know, and it's not recoverable after you measure. You perform operations on the qubit with what are called quantum logic gates. The example I have here is the X gate, also called the quantum NOT gate; it's the simplest one to think about. You can think of it as a 180-degree rotation about the X axis. So if the vector is pointing up at zero, it rotates down to point at one, which is why it's a NOT operation: a zero becomes a one, and if you do it again, it goes from one back to zero. Two key things to take away about quantum logic gates: first, they're reversible, which is a little different from your classical computer; you can run them both ways. The other thing, which I'm not going to get into in too much detail, is that all the logic gates can be represented as unitary matrices, which is just to say that all of these operations are matrix multiplication, big linear algebra operations. You have a vector representing the position on the sphere, you multiply it by your unitary matrix, and that's your transform. If you get more advanced, that's useful to understand. But we've only talked about whether it's at a zero or a one. What happens if it's pointing somewhere in the middle?
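The point about gates being unitary matrices can be made concrete with a few lines of numpy. This isn't Qiskit, just a sketch of the linear algebra: the X gate as a matrix acting on the zero and one basis vectors, showing both the NOT behavior and the reversibility.

```python
import numpy as np

# The X ("quantum NOT") gate as a unitary matrix
X = np.array([[0, 1], [1, 0]])
zero = np.array([1, 0])  # the |0> state as a vector
one = np.array([0, 1])   # the |1> state as a vector

assert np.array_equal(X @ zero, one)   # X turns a zero into a one
assert np.array_equal(X @ one, zero)   # ...and a one back into a zero

# Reversible: applying X twice is the identity, so you can run it both ways
assert np.array_equal(X @ X, np.eye(2, dtype=int))
```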
That's the first principle of quantum computers that's really fundamental to understand: when you identically prepare multiple qubits, they can still behave randomly. They're probabilistic. In this example here, I've put the state exactly halfway between zero and one, at plus one on the X axis. If you were to measure in that state, you'd have a 50-50 chance of getting a zero or a one. It's a little weird to think about: you put the computer in a state and you get a random answer, and that randomness is inherent to nature. But this is actually useful; it's hard to think about, but when I get to the example you'll see. The quantum gate for putting something into superposition is called the Hadamard, which you can think of as a 180-degree rotation about the axis diagonally between X and Z, which is a little abstract, but you can just see it as that diagonal right there. The other interesting thing about the Hadamard is that it's its own inverse. So if you apply a Hadamard, you go from zero to that 50-50 state in the middle, and if you apply it again, you go back to zero. The same is true from the one state: if it's pointing down at one, you apply a Hadamard and it goes to minus X, and if you apply it again, it goes back to one. The last operation I want to talk about is the controlled-NOT gate, which is a two-qubit gate; it operates on two qubits, with the control bit on the top and the target on the bottom. Basically, if the control is at zero, it does nothing, and if the control is at one, the target flips; it applies an X operation. You can represent it the linear algebra way, as I did there: the coefficients of the target's zero and one states just swap. So if the target was a zero and the control is a one, the coefficients swap and the target becomes a one. You can then put this all together to build quantum circuits, which you can also think of as quantum programs.
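Both the Hadamard's self-inverse property and the CNOT's behavior fall out of the same matrix picture. Again, a numpy sketch rather than real Qiskit code; the two-qubit states are written with the control as the first qubit, which is just a convention chosen for this example.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
zero = np.array([1, 0])

# H|0> is an equal superposition: squared amplitudes give 50/50 odds
plus = H @ zero
assert np.allclose(plus ** 2, [0.5, 0.5])

# The Hadamard is its own inverse: apply it twice and you're back at |0>
assert np.allclose(H @ plus, zero)

# CNOT in the basis |00>, |01>, |10>, |11> (control is the first qubit):
# it flips the target if and only if the control is 1
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
one = np.array([0, 1])
ctrl_on = np.kron(one, zero)               # |10>: control 1, target 0
assert np.array_equal(CNOT @ ctrl_on, np.kron(one, one))   # -> |11>
ctrl_off = np.kron(zero, one)              # |01>: control 0, target 1
assert np.array_equal(CNOT @ ctrl_off, ctrl_off)           # unchanged
```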
A circuit is just a way of representing a series of operations, and the dependencies between them, over multiple qubits. In this example, we apply two X gates to the first two qubits, we apply Hadamards to all of them, and we have that one CNOT between qubit two and qubit zero. The only things I haven't talked about yet are the measurements: you measure from a quantum bit into a classical bit, and you get a zero or a one on the output. This is a way of visually representing a program where you can see all of the operations at once and how they interact with each other. It doesn't explicitly show timing, just the dependencies between the operations. And that's all you really need to know to understand the basics of the example application I want to talk about, which is the Bernstein-Vazirani algorithm, from a paper published in 1993 about a hypothetical situation; it's an algorithm you can run on both a classical computer and a quantum computer. The basic premise is that there's an oracle function that holds a secret bit string. You can ask the oracle a question by giving it your own bit string of the same length, and it gives you the dot product (mod 2) of the secret and your input. The goal is to figure out what that secret is. It turns out that on a classical computer, the best way to solve this is by looping over every bit. If you have four bits, you give it one zero zero zero, then zero one zero zero, and so on, until you've figured out the value at each position and you know your bit string. So the efficiency of the algorithm on a classical computer is O(n). But on a quantum computer, you can do this in a single call to the oracle function: you give it your input, and you get the output in one call. It turns out the implementation of the oracle function is actually really simple. It's just a bunch of CNOTs.
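The classical bit-by-bit strategy is easy to write down. A minimal sketch in plain Python, with the secret hard-coded to the talk's value of one zero zero one (the `oracle` helper and its bit-integer encoding are just illustrative choices):

```python
def oracle(query, secret=0b1001):
    """Return the dot product (mod 2) of the query and the secret bit string."""
    return bin(query & secret).count("1") % 2

# Classically we need one oracle call per bit position: probe with
# 0001, 0010, 0100, 1000 and collect each bit of the secret -> O(n) calls.
recovered = 0
for i in range(4):
    if oracle(1 << i):
        recovered |= 1 << i

assert recovered == 0b1001  # four calls to recover a four-bit secret
```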
You basically put the control on any qubit where the secret has a one, with the target being a temporary qubit, and you put nothing where there's a zero. This relies on something called phase kickback, which is where, if that temp bit is a one, the phase of the qubit will flip. And using the Hadamard gate, if you remember what I said before about the Hadamard being its own inverse: if a qubit is at plus X it goes to zero, and if it's at minus X it goes down to one. So with everything starting at zero on the input, we can use that phase kickback to flip a zero to a one, and that's how the oracle works. So in the bigger circuit, we apply an X to the temp bit to make it a one, then we apply Hadamards to everything to move it to that plus X state, then we apply our oracle function with those CNOTs for our secret value, which is one zero zero one. Then we apply Hadamards again, we measure, and we get the right answer every time, or at least hopefully. And now I'm going to show this to you on a real quantum computer, so let's hope this live demo works. Is this legible to everyone? If not, I can make the font bigger. Is it good? Okay. So the first thing we do is create our quantum registers, which are just the qubits: a four-qubit register q for our input, one qubit for the temp, and then four classical bits for the result. Then we build our oracle function, and I'm just using a for loop here for brevity: we loop over our secret, which is one zero zero one (you can put any value there, or an integer), and wherever there's a one, we apply a CNOT gate, which is cx in the Python code, on those bits, and we build the oracle function with that.
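The whole Hadamard-oracle-Hadamard sandwich can be checked numerically without any quantum hardware. This is not the Qiskit demo from the talk, just a numpy state-vector sketch where the oracle is folded into its net effect, a phase flip of (-1) raised to the dot product of the secret and each basis state, so the temp qubit drops out of the bookkeeping:

```python
import numpy as np

s, n = 0b1001, 4        # the secret bit string from the talk, on 4 qubits
N = 2 ** n

# Hadamards on |0...0> give the uniform superposition over all N inputs
state = np.full(N, 1 / np.sqrt(N))

# Oracle via phase kickback: |x> -> (-1)^(s.x) |x>
for x in range(N):
    if bin(s & x).count("1") % 2:
        state[x] *= -1

# Apply Hadamards to every qubit again (H tensored with itself n times)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)
state = Hn @ state

# One oracle call, and measuring now yields the secret with probability 1
probs = state ** 2
assert np.isclose(probs[s], 1.0)
```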
Then we plug that into our bigger circuit, where we apply the X, then our Hadamards on everything, then we add our oracle function, then we apply the Hadamards again and measure. Then we can draw the circuit to make sure that what we wrote in Python matches what we have in our heads, and here it does match, because I copied the code from this for my presentation. Then we can run this on a simulator real quick, just to verify, because it's a little faster. We list all the simulators and pick the one we want (you can read the documentation for what all the different ones do), and we get a result. We run it 1,024 times, because, as I said, these machines are probabilistic, and the real computers are also noisy, so you run multiple shots to get the full probability distribution. And we get our answer every time; we can graph that as well, and we see 1001 comes out 100% of the time. Now we can run it on a real quantum computer. First I load my credentials and list the available devices: there are three quantum computers available to me with my credentials, plus an online simulator. I can print their state, and because the screen is so blown up and the resolution is low, you can only see one of the three, but you can see the qubit mapping right there, and then the current state, so it's 14 qubits and some parameters about it. Then we just pick one, because this one's fast, and run the circuit on it. This takes some time because it's a shared device; there are only three quantum computers in the world available to everyone right now, and the job gets queued. Because I only have two minutes left according to the clock, I'm just going to switch to results I saved ahead of time from a previous run, just to give you an idea.
So you run the circuit, and you get an output histogram that looks like this. You can see here that the real computer is noisy; there's no error correction, so there's noise in the system. We ran it 1,024 times, and 46% of the results were the right answer, and we got every single other result at least once, because without fault tolerance, noise in the system gives us wrong answers. That's a big reason why we run it so many times, and it's one of the big limitations I was talking about earlier with the current devices. The other thing you might be thinking is that this example is terrible, because no one in the real world actually writes an oracle function. No one at your day job says, here, I need you to write a function with a secret value, and no one will know what that secret is, and they have to probe it with a dot product. This is just an example to prove that quantum computers can do something better than a classical computer. But there are real-world applications where the current devices are being used. The biggest example I always talk about is quantum chemistry: using quantum computers to simulate molecules, which is being investigated by a lot of different companies. So, just to wrap up with what little time I have left, I want to talk about open source's role in quantum computing. It's being used to foster collaboration between research institutes. This is still a developing field; the technology is very new, and no single company or individual can do it on their own. By having all the tools open source, you get collaboration between multiple entities trying to make the ecosystem better and move it forward. And that's something I see day to day working on this project in the open: we get contributors from research labs and companies, all trying to use this for their own research in the field.
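That spread of wrong answers in the histogram is easy to mimic with a toy readout-noise model. This is not a model of the actual device (the 15% per-bit flip rate is an arbitrary illustrative number, nothing like a calibrated error figure), but it shows why the right answer still dominates the counts even when most individual shots can be corrupted:

```python
import random
from collections import Counter

random.seed(0)
ideal = "1001"   # the ideal Bernstein-Vazirani outcome
p_flip = 0.15    # toy per-bit readout error rate, purely illustrative

def noisy_shot():
    # Flip each output bit independently with probability p_flip
    return "".join(b if random.random() > p_flip else str(1 - int(b))
                   for b in ideal)

counts = Counter(noisy_shot() for _ in range(1024))

# Noise spreads counts across many bit strings...
assert counts[ideal] < 1024
# ...but the correct answer is still the single most common outcome
assert counts.most_common(1)[0][0] == ideal
```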
The other thing, which I personally found very useful, is that it's a great educational tool. Having everything open enables people to dig in and learn. I don't have a background in quantum information theory or in physics; I'm an open source software developer, and just by looking at this, digging in, and figuring out how it all worked, I was able to learn enough to at least work on the project and also give a presentation here. Something I find amusing is that history is kind of repeating itself a bit, because we're starting with open software; if you look at the history of free software, it's kind of the same thing, except now we have all the lessons learned. And the last thing is that there are actually a ton of open source projects outside of IBM that are related to quantum computing. If you click this link and search for quantum computing, you'll get 300-plus results, which is really exciting to me. The field is still at like day zero; I mean, you saw, I ran an example and I didn't get the right answer every time. But even with that, it's, yeah. I'm just going to skip that slide because I'm out of time and everyone's rushed, and just leave some links for more information, including my workshop tomorrow afternoon. So if people want to learn more in depth about quantum information theory and using Qiskit, they can come to that. There are also the slides and some other useful places. And with that, I think I'm out of time. So, one of the simulators, the Aer project, has support for extensible noise modeling. You can create a noise model for your device, or use some example ones, that you inject when you call the simulator, and it will try to inject that noise into the simulation.