Thank you, everyone. Unfortunately my colleague Nathan couldn't make it today, so I'll be moonlighting as him. Nathan is the head of quantum machine learning and software at Xanadu. This was one of his pet projects, and it's developed into something really cool that I'm excited to talk about today. I'll start by introducing quantum machine learning, just in case people aren't familiar with it. Quantum machine learning is currently touted as one of the biggest opportunities we have with quantum computation. What I'm showing here is the number of papers mentioning quantum machine learning each year from 2013 to 2018, counting both papers uploaded to the arXiv and published papers, and you can see just how rapidly the field has grown. It could be hype; we don't believe it is. New advances are being made every day, and especially from 2015 to 2018 there was a massive increase in theoretical results relating to quantum machine learning. Part of the reason is that quantum computers turn out to be quite good at it. We're currently in a regime called noisy intermediate-scale quantum (NISQ) computation, where we're working with devices that don't have error correction the way classical computers do. The devices we work with are noisy: errors can be introduced to the qubits and the qumodes. But what they're really good at is problems like optimization, problems where you don't necessarily need quantum error correction yet. So here are a couple of things quantum computers are good at. Obviously they're good at quantum physics, and they're very good at linear algebra, especially graph problems. In the work we do at Xanadu it's very easy to embed graphs into our quantum circuits: a graph can be embedded into just a collection of beam splitters and rotation gates, which are really simple to implement in our quantum photonics lab.
They're also very good at sampling problems. What we're doing with quantum machine learning, or with quantum computation generally, is processing quantum information: we're processing vectors in a very high-dimensional vector space. And this is one of the connections to machine learning. When you're processing vectors in a very high-dimensional vector space, you start to see a familiarity with machine learning techniques such as kernel methods, where you map classical data to high-dimensional vector spaces to detect features and do classification. Once we have fault tolerance, we'll see massive computational speedups using quantum computation for a huge array of the algorithms that have already been proposed. But in the meantime, at Xanadu we're thinking about how we can harness the noisy devices we already have to do something we can't do classically. In classical machine learning we already use things like GPUs, TPUs, and ASICs to speed up the parts of the computation we know are too slow on CPUs. The same can be said of the current noisy quantum devices: we can think of them almost like QPUs. We can use them to speed up computations that are impossible or too slow classically. But at the same time, we can't let go of classical processing completely. So we have a hybrid computational model, where we do lots of classical processing with a lot of offloading to quantum processing units. And this is really exciting. As you saw from the previous graph, QML is still quite untested and lots of new results are coming out, so it's very exciting to see what we can do and what new models might come out of quantum machine learning. Even if these turn out to be models that are quantum-inspired and that we can run classically, everything's currently open. As I mentioned before, these are some of the things currently being looked at in the literature.
We have kernel methods, using quantum circuits to encode classical data and then doing quantum measurements, which are essentially linear algebra and inner products in a high-dimensional vector space. Things like Boltzmann machines and variational circuits have become really big lately. Variational circuits are quantum circuits with parameterized gates: the circuit structure is fixed, and the only things we're allowed to vary are the parameters and perhaps the initial state encoding. What we do is classical machine learning: we use a classical backpropagation loop, with something like TensorFlow or PyTorch, to vary these parameters and work out the best values to optimize the problem we're solving. For those of you who saw the Strawberry Fields talk, this is what the Strawberry Fields TensorFlow backend does: it's a classical machine learning loop around a quantum simulation. The disadvantage of this approach, if you're simulating everything classically, is that in order to calculate the gradients for backpropagation, you have to simulate the quantum circuit again. So you end up doing a huge number of quantum simulations and loops to calculate the cost functions and the gradients, and the whole thing becomes massively classically intractable. So the thinking was: can we avoid this mindset of wrapping classical machine learning around quantum simulations, and instead query the quantum devices directly to calculate the quantum gradients? That was the main thinking that led to the development of PennyLane. Just a brief bit of background for whenever I mention a quantum neural network: this is something we developed at Xanadu. At Xanadu, for those who missed the first talk, we do continuous-variable (CV) quantum computation, so we're working with continuous variables, not discrete qubit states.
And this lends itself really nicely to quantum neural networks, where we want to work with real values. With qubits, a discrete system, it's more of a difficulty trying to embed a classical neural network: there's the issue of binarizing the continuous values you want into the discrete system, which can require a huge number of qubits depending on your method, and there are also issues in working out the best way to apply the nonlinear transformation you need in a neural network layer. So from our point of view, CV quantum computers are almost a natural platform for quantum neural networks, and this is why. When we have an interferometer in quantum photonics, all we're essentially doing is multiplying a continuous-variable state x by a unitary matrix. So already, with linear interferometers, we've got matrix multiplication on continuous variables. We can multiply by diagonal matrices, which is the squeezing operation in quantum optics and CV quantum computation. We can also introduce a bias, which is super easy: we just apply a displacement to the continuous-variable state. And finally, when we want the non-Gaussian, nonlinear transformation that plays the role of the activation function, we simply apply one of the nonlinear quantum photonics operations. For those familiar, a Kerr gate or a cubic phase gate would be the phi function in this case. So this is what it looks like: this is using continuous-variable quantum optics gates to implement a single layer of a quantum neural network. We have our matrix multiplication, a multiplication by a diagonal matrix, another matrix multiplication, the addition of a bias, and then the activation function. And it maps really, really nicely. So this was all theory that we developed.
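To make that phase-space picture concrete, here is a minimal NumPy sketch of my own (not Xanadu's code, and with `tanh` standing in for the Kerr or cubic-phase nonlinearity): any real weight matrix factors by SVD into orthogonal times diagonal times orthogonal, which is exactly interferometer, squeezing stage, interferometer, and a displacement supplies the bias.

```python
import numpy as np

# Hypothetical layer x -> phi(W x + b), built CV-style.
# SVD gives W = U @ diag(s) @ Vt: two interferometers (the orthogonal
# matrices U and Vt) sandwiching a squeezing stage (the diagonal s).
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))   # target weight matrix
b = rng.standard_normal(3)        # bias, implemented as a displacement

U, s, Vt = np.linalg.svd(W)

def cv_layer(x, phi=np.tanh):
    x = Vt @ x                    # first interferometer
    x = np.diag(s) @ x            # squeezing: diagonal scaling
    x = U @ x                     # second interferometer
    x = x + b                     # displacement adds the bias
    return phi(x)                 # nonlinear (non-Gaussian) gate

x = rng.standard_normal(3)
assert np.allclose(cv_layer(x), np.tanh(W @ x + b))
```

The design point is that nothing beyond standard Gaussian optics is needed for the affine part of the layer; only the activation requires a non-Gaussian gate.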
There's a paper on this called Continuous-variable quantum neural networks, up on the arXiv, so if you want more details, feel free to check it out. We also have a GitHub repository. The paper has a whole bunch of toy examples where we use Strawberry Fields with the TensorFlow backend to see what we can do using machine learning in variational circuits: things like creating a GAN to generate tetromino shapes, and learning particular gates and particular states that are hard to generate normally. The code for this is all in our GitHub repository, called quantum-learning, I think, so feel free to check that out if you want to get down and dirty and play with the code. As I was saying, all these methods require a classical loop where you're doing backpropagation very intractably, and we didn't want to do that. That was the idea behind PennyLane. (This is a picture of Liverpool, tying in with the theme. I didn't actually know that; in the middle of a previous talk I had to stop and use Google Lens because everyone was asking me what the picture was, and remarkably, Google Lens knew. I'm quite impressed with that.) The exciting thing about 2019 is that we now have all these noisy intermediate-scale quantum devices publicly accessible. Rigetti has their QCS, which went into public beta a couple of days ago, so anyone can log in and request time on their QPU. IBM has their Q Experience, so you can log in and use the IBM QX5 to do quantum computations on actual quantum hardware right now. When we were previously thinking about quantum neural networks, we were using the old mentality where we had to simulate everything. But that's no longer the case. At Xanadu we have one of the best quantum machine learning teams in the world, we like to think, regularly contributing to this field.
What we want to do is actually use these intermediate-scale quantum devices to do backpropagation and gradient descent. So that's what we set out to do, and that's what we did. We took the ideas we liked best from classical machine learning and we ported them to work with the quantum devices we have now. And it's fun. You can create these models without knowing what they're going to do; you know what you'd like to optimize, and you can play around with it. Sometimes it works, sometimes it doesn't, like normal machine learning. But we now get to do that with physical quantum hardware, which is fun. Our main issue, when we wanted to build software to make this easy for everyone to access, was that there was no software available for automatic differentiation of quantum computations. Sure, you can use TensorFlow or PyTorch to wrap a classical simulation of a quantum computation, but we don't want to do that. So before we even started building this open-source software package, we had to work out how to automatically differentiate quantum circuits. We want a process that scales naturally with quantum hardware: if we have a process using quantum hardware that scales as badly as a classical simulation, then there's no point to it. So this is where we started. This is our encapsulation of quantum circuits in PennyLane, which we call a QNode. You can almost think of it like a black box: inside the QNode, a quantum computation is happening. As input, we have a state you might want encoded, and we also have parameters for the variational quantum circuit. The QNode is a fixed circuit with a specified number of gates applied to particular qubits or qumodes; some of those gates have parameters, and you can specify these from outside. And what we can do is measure this quantum node and get an expectation value.
We sample this quantum circuit and get a real value out of it, so we have a deterministic node that we can then use for backpropagation. You can almost think of this circuit as a function, U of theta, that applies the variational quantum circuit. For those of you with a physics background, the output of the quantum circuit is just an expectation value that you define: if you have a Hermitian operator B, then the output is the expectation value of B. And that's something we can measure right now with quantum hardware: if you use Rigetti's QCS or IBM's Q Experience, you get back measurements and expectation values from sampling the quantum hardware. So we have our model of what we want to find the automatic gradient of. How do we do it? It turns out to be deceptively simple. This is what it looks like: the derivative with respect to theta of the expectation value is simply the exact same circuit with the parameter theta shifted forward by s, minus the exact same circuit with the parameter shifted backward by s, multiplied by a constant. And I want to emphasize one thing: this is not the finite-difference formula. This is exact. And it applies to any qubit QNode whose gates have a maximum of two distinct eigenvalues, which covers almost all the gates we currently work with in pyQuil or Qiskit. So we run the circuit, sample it, and get the value f. Then we need a maximum of two more evaluations of the exact same circuit, just with shifted parameters, and we have the gradient of that circuit. Just to drive the point home: this is not finite differences, not numerical differentiation. The formula we derived is exact, and there's no restriction on the shift. In general, what we want is a macroscopic shift: we want s to be a value like one or pi over two.
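To see the exactness claim in action, here's a small NumPy check of my own (not PennyLane code) on the simplest one-parameter circuit: RX(theta) applied to the zero state, measuring the Pauli-Z expectation, which is analytically cos(theta).

```python
import numpy as np

# Single-qubit circuit: |0> -> RX(theta), then measure <Z>.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def RX(theta):
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

def f(theta):
    psi = RX(theta) @ np.array([1, 0], dtype=complex)
    return np.real(psi.conj() @ Z @ psi)        # equals cos(theta)

theta = 0.7
s = np.pi / 2                                   # macroscopic shift, not infinitesimal
grad_shift = (f(theta + s) - f(theta - s)) / 2  # parameter-shift estimate

# Exact: the true derivative of cos(theta) is -sin(theta).
assert np.isclose(grad_shift, -np.sin(theta))
```

The two shifted evaluations reproduce the analytic derivative to machine precision, with no small step size anywhere.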
So s is not an infinitesimal quantity. The reason we want this macroscopic shift is that we don't want to be in the regime of noise, because we're working with noisy devices at the moment. We want to be sampling at locations far enough apart that noise and the finite number of samples aren't an issue. Compare this with finite differences, which looks deceptively similar, and which confused a lot of us at the time; as we were writing the paper, we had notes in the margin that kept saying: note, this is not finite differences, I think you wrote the wrong thing here. Finite differences is only an approximation, and it requires that h be a small, effectively infinitesimal quantity, so it's subject to all the quirks of numerical differentiation. Those of you who've done numerical differentiation before know it can be a hassle: if you're working with a system that's significantly stiff, you have stability issues, and there's also rounding error, precision error, and truncation error. And specifically for the noisy quantum devices we have now, reducing h, which in the classical, noise-free setting would reduce the error, can actually increase the error, because you get swamped by noise. Let me explain that briefly. This is a plot of the expectation value of your QNode, or quantum circuit, for different parameter values. The Gaussian curves represent the distribution we're sampling from when we sample the quantum device, simply because we're restricted to a finite number of samples. If h is small, because we're trying to reduce the error of a numerical finite-difference approach, we can end up sampling from the wrong part of the curve and get a numerical derivative that's completely off base. So that's numerical differentiation, finite differences, and it's not what we're doing. We're doing this.
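The noise-amplification point can be sketched numerically (my own toy model, not from the talk): model each circuit evaluation of the cos(theta) expectation as carrying a small Gaussian sampling error, then compare a finite difference with tiny h against a parameter shift with s = pi/2.

```python
import numpy as np

# Toy model: each evaluation of f(theta) = cos(theta) is noisy,
# standing in for the finite-shot sampling of real hardware.
rng = np.random.default_rng(42)
sigma = 0.01                                   # per-evaluation sampling noise

def f_noisy(theta):
    return np.cos(theta) + rng.normal(0, sigma)

theta = 0.7
exact = -np.sin(theta)                         # true derivative

h = 1e-4                                       # finite difference: tiny step
fd = (f_noisy(theta + h) - f_noisy(theta - h)) / (2 * h)

s = np.pi / 2                                  # parameter shift: macroscopic step
ps = (f_noisy(theta + s) - f_noisy(theta - s)) / 2

# The sampling noise is amplified by 1/(2h) in the finite difference,
# but only multiplied by 1/2 in the parameter-shift estimate.
assert abs(fd - exact) > abs(ps - exact)
```

With h at 1e-4, the noise in the finite-difference estimate is blown up by a factor of ten thousand, while the parameter-shift estimate stays within the noise floor of a single evaluation.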
In our parameter-shift formula, we make sure that s is macroscopic and large enough that we're not impacted by the finite sampling of our quantum device. It looks like a simple formula, but it's a four-page derivation. We have to look at gates that have only two distinct eigenvalues; if the quantum gates don't, then we have to consider a different approach where we include ancillas. And it generalizes to both the qubit model and the continuous-variable qumode model. Once we had this formula, we realized we had something nice, and we wanted to build a software tool around it that allowed everyone to use it. Just to drive the point home, this is very similar to something you see every day in maths: if you have a sine function, its derivative is just cosine, and you can actually write the cosine as two sine functions, shifted by plus and minus pi over four, divided by root two. We're making use of the same maths here to find the analytic gradient of a quantum circuit; it's the parameter-shift rule in a different form. And as I was saying, this applies to CV methods as well. This is the continuous-variable parameter-shift rule: if you have a phase rotation, you shift by plus and minus pi over two and multiply by a half. If you have squeezing and you want its derivative, you shift by s, where s is a free value (you want it large enough that you're not impacted by noise, but not so large that you introduce too much energy into the system), and then you divide by two sinh s. So this generalizes across CV and across the qubit gate sets we currently work with, which is a really nice property for finding the gradients of quantum circuits. One thing I will note, though, is that we currently don't have an efficient parameter-shift rule for the non-Gaussian CV gates, the Kerr gate and the cubic phase gate.
There is a parameter-shift rule for them, but it requires an exponential amount of memory to compute. So in CV, at the moment, we're restricted to Gaussian gates, but we're still working on extending this efficiently to the non-Gaussian gates. The cool thing about this approach is that it's completely hardware agnostic. At the moment we have various quantum hardware companies with different devices, and each of these devices might be suited to a specific task. What we want is to be able to use all these devices in one computational model, together with classical processing. And in PennyLane we can do that. You have your QNodes here in green: for instance, there's a qubit one there, and on the very left a continuous-variable one including a beam splitter, so we have both paradigms of quantum computation. We also have classical processing, in yellow. These classical nodes, as we call them, just use NumPy functions in Python; we're using a specially wrapped version of NumPy, forked from autograd, and that allows us to keep track of the gradient classically. When we get to a quantum node, we apply the parameter-shift rule. We do the chain rule, parameter fan-out, and the product rule, all automatically. So you just construct your model using PennyLane, do as much classical NumPy processing as you like, use as many quantum devices as you want in any combination, and we'll take care of the backpropagation for you, with analytic gradients everywhere. So that's PennyLane. It's available on GitHub, so please check out our GitHub page. We have extensive documentation where we go through various tutorials, and I'll get to those a bit later. We're very excited by this, because you can now train a quantum computer the same way you train a neural network, and it's designed to scale as quantum computers grow in power.
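The hybrid bookkeeping described above can be sketched by hand (my own minimal illustration of what the framework automates): a classical pre-processing node feeding a quantum node, with the total gradient assembled by the chain rule, using the parameter-shift rule for the quantum part.

```python
import numpy as np

# "Quantum node": <Z> after RX(theta) on |0>, which is cos(theta).
def qnode(theta):
    return np.cos(theta)

def qnode_grad(theta):
    # Parameter-shift rule: two extra evaluations of the same node.
    s = np.pi / 2
    return (qnode(theta + s) - qnode(theta - s)) / 2

# Classical pre-processing node g(x) = x**2, in plain NumPy.
def cost(x):
    return qnode(x ** 2)

x = 0.9
# Chain rule: d cost / dx = qnode'(g(x)) * g'(x).
grad = qnode_grad(x ** 2) * (2 * x)
assert np.isclose(grad, -np.sin(x ** 2) * 2 * x)
```

The wrapped NumPy handles the classical factor automatically; the only quantum-specific ingredient is the shifted pair of circuit evaluations.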
It's also compatible with the Xanadu, IBM, Rigetti, and Google platforms, via PennyLane's plugin interface. Within the PennyLane framework we keep track of the gradients for you, but we don't care how you implement the gates. So we have various plugins: plugins for Strawberry Fields, plugins for IBM's Qiskit, and we're in the late stages of our pyQuil plugin. I'll show you a brief demo today, but I think it will be released publicly on Monday, so please look out for that. That's really exciting. It's also the PyTorch of quantum computation; I find that when I give this talk it can be very divisive to say it's the TensorFlow of quantum computation, so I've got this slide as backup. So this is what your average PennyLane program looks like. This is another level of abstraction above all the existing quantum computing frameworks. Because it's hardware agnostic, we want to make it really easy to swap devices in and out. At the beginning you define your device. In this case we're just using default.qubit, a simulator that ships with PennyLane, but you could replace that with, for instance, forest.qvm to use the Forest Quantum Virtual Machine, forest.qpu to use the Forest quantum processing unit, strawberryfields.fock to use the Fock backend in Strawberry Fields, et cetera. You can define as many of these devices as you want, and then use them when you define your QNodes. We have a QNode right below using that device, and a QNode in PennyLane is just a Python function; we want to make it as simple as possible. You define the function, you define the parameters you want to optimize, and you just use a decorator to let PennyLane know what device to send it to. So you can almost think of them like little accelerators.
You send the computation to the device to accelerate it, and you get the result back classically, along with the gradients, thanks to the parameter-shift formula. You apply the quantum gates (PennyLane contains knowledge of all the qubit gates plus the continuous-variable gates), and then you can use it as if it were any other machine learning library. You can use NumPy functions like sin and abs to build your cost function, and we also ship a couple of optimizers which you can use to optimize your cost function, or you can write your own. It's written in Python, and we've tried to keep it as simple as possible so that anyone can use it in their current workflow. Here's a quick overview of the plugins we already have. We have Strawberry Fields and ProjectQ plugins, and when the Xanadu hardware becomes available later this year, we'll have a plugin for that. We have a plugin for Qiskit, and hopefully on Monday our pyQuil plugin, the PennyLane-Forest plugin, will be live. ProjectQ is a platform-agnostic quantum compiler with backends for Google's Cirq and IBM's Qiskit, so you can access Cirq or IBM Q through ProjectQ, or you can access IBM Q directly through Qiskit. And it's really easy to make these plugins; I think most of them are a maximum of 500 lines. All you need to do is define how the PennyLane operations map to the framework's gates, and that's it; everything else is taken care of automatically. So please feel free to check out our GitHub page, give us feedback, and contribute.
We're happy for PRs and issues. Let us know if we're doing anything you'd rather we weren't, if there are features you'd like, or even if there's a framework we might not know of that you'd like a plugin for, and we'll look into it. What I'm going to do now is a very quick live demo. What we have here is a very simple variational circuit: just one qubit, starting in the state zero. We apply two rotation gates, so the qubit is rotated on the Bloch sphere around the x-axis by some angle phi one, and then around the y-axis by some angle phi two, and then we do a measurement in the Pauli-Z basis, which is standard in quantum computation. This is a bit of a toy example, a hello world, but what we want to do is optimize these two parameters phi one and phi two so that the qubit flips: we start in state zero and finish in state one. So what I'm going to attempt now is to do that live, in Python, in PennyLane. (Is that too small to read? I'll increase the font size. Does that look okay?) I'll start by importing PennyLane and importing NumPy as np. Note that I'm importing the wrapped version of NumPy from PennyLane; this allows us to keep track of the gradients through NumPy functions as well. I'm importing PennyLane as qml, just because that's what we used internally and it stuck. Then I want to create my device. I'll create two devices here; these are the two default reference plugins we ship with PennyLane. The first is default.qubit, which is a really basic qubit simulator. It's there as a reference, to show you how you can create
a very quick plugin for PennyLane; it's very slow, so I don't really recommend anyone use it, use one of the actual frameworks instead. We also have default.gaussian, which is a Gaussian CV reference plugin built into PennyLane. They're good for quick prototyping, but for actual fast simulations, or if you want to access quantum hardware, you need to use the plugins that connect to the frameworks. So now I'll create the QNode. I'm using the decorator to specify the device I want to run on, the qubit device, and I just define my QNode function. We've tried to be smart in how we designed PennyLane; we wanted it to be as close to standard Python functions as we could, so you can pass NumPy arrays as arguments, you can pass lists, you can pass keyword arguments, and everything should hopefully work. Here I'm applying my rotation around the x-axis with the first element of params, then rotating around y with the second element of params, and returning the expectation value in the Pauli-Z basis. Just a quick comment: we use the term wires to refer to either a qubit or a qumode or anything else, so wires essentially means the number of subsystems. The reason we use the term wires is that it's more general than saying either qubits or qumodes; PennyLane supports both, so we wanted a term that was short, easy to remember, made sense, and could cover both. What I can show you now is that you can run this like any Python function: say I pass a list with the two parameter values, I get the output. We also include a function in PennyLane that returns the gradient function, qml.grad. I'm asking for the gradient function of circuit with argnum equal to zero, so with respect to the first argument, and now I can call that directly. That's using the parameter-shift rule to determine the exact analytic gradients of the circuit at those two parameters, and if you were to
use a device connected to a quantum processing unit, it would calculate this on the quantum hardware. What I can do now is define my cost function. In this case we want to minimize the Pauli-Z expectation value, so the cost function is essentially just the circuit's return value, and I can run it like any other function; PennyLane should be as close to standard NumPy and Python as it can be, excluding the part where you're using quantum computers. Now that we've got a cost function defined, I can set up the optimizer. (My history is not working, so I'm going to do some copying and pasting.) I define the initial values of the parameters, check the cost function at those initial parameters, and then create the optimizer. With PennyLane we ship some default optimizers; they're super basic, they do one optimization step at a time, so you have to loop over them manually, and in future versions we're looking at implementing proper optimization classes. Now that the optimizer is defined, we just do the loop I was talking about earlier: we range over 25 steps, and at every step we update the parameters with the new ones determined by the gradient-descent optimizer, and then print out the result. There you go: after 25 steps we're at minus 0.91, so close to minus one. I can actually improve that; I'll set steps equal to 100 and see how much better it does. Yes, you can see that after about 40 steps we hit minus one, so we've flipped the qubit from state zero to state one. And you can see the optimized angles: the first rotation, around the x-axis, just requires zero as the angle, and the second requires pi. This is super simple, you could solve it by hand with two matrix multiplications in five minutes, but we want to use PennyLane, so I'm going to try something a bit more
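For readers following along without the slides, the logic of this demo can be reproduced in plain NumPy (this is my own stand-in for the PennyLane code; the learning rate and initial parameters are assumptions, not the values used on stage): simulate the RX-RY circuit with 2x2 matrices, compute gradients with the parameter-shift rule, and run vanilla gradient descent until the Pauli-Z expectation reaches minus one.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(G, theta):
    # Rotation generated by Pauli G: exp(-i theta G / 2)
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * G

def circuit(params):
    # |0> -> RX(phi1) -> RY(phi2), measure <Z> (= cos(phi1) cos(phi2))
    psi = rot(Y, params[1]) @ rot(X, params[0]) @ np.array([1, 0], dtype=complex)
    return np.real(psi.conj() @ Z @ psi)

def grad(params):
    # Parameter-shift gradient: two shifted evaluations per parameter.
    s, g = np.pi / 2, np.zeros(2)
    for i in range(2):
        shift = np.zeros(2)
        shift[i] = s
        g[i] = (circuit(params + shift) - circuit(params - shift)) / 2
    return g

params = np.array([0.0, 0.1])        # assumed initial angles
lr = 0.4                              # assumed step size
for _ in range(100):
    params = params - lr * grad(params)

assert circuit(params) < -0.999       # qubit flipped: <Z> has reached -1
```

With this initialization the optimizer leaves the first angle at zero and drives the second to pi, matching the angles reported in the demo.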
complicated now. I'm going to try to actually run this on Rigetti's QPU, and this is live, so it's probably very dangerous, but I'm going to try it anyway. This is a Jupyter notebook loaded on Rigetti's QCS. I'm importing PennyLane; it was easy to get PennyLane on QCS, I could just pip install it. I think I've lost the connection, so I need to sign in again and open the Jupyter notebook from before. This is the exact same example I just showed, the qubit rotation, so it's basically the hello world example, but let's run it on the QPU, because we can. (This is probably why I shouldn't have tried a live demo; it looks like the kernel is still starting. In case this doesn't work, you can see from when I ran it before that it did work: about an hour ago I was testing this at my desk, and it worked, and I did a keyboard interrupt with the optimization at step 40. Maybe I'll try again at the end of the talk; it looks like it could just be a connection issue.) And this is really cool. One of the algorithms experts we're working with at Rigetti, once the PennyLane-Forest plugin was coming together and we could start to use it on the QPU, wanted to explore and see how well it did. This is again the qubit rotation example, but what you're seeing here is the actual optimization landscape, generated using PennyLane. This is by Keri McKiernan at Rigetti: she created the cost function, sampled it at various values within this 2D grid, and then tried some of the optimizers that ship with PennyLane. This is coming straight off the Rigetti QPU, doing optimization with PennyLane, and you can see how well it works: the valleys in the optimization landscape are easy to see, the optimizers find them, and this is a maximum of ten lines of PennyLane code in Python. I was really excited
when I saw this. And because PennyLane is hardware agnostic, you can do crazy, cool things with it. Say you have something that works really well using continuous variables, but you have another problem that you could solve really well using qubits, and your overall problem requires both of these. You can do that: you just create two QNodes. For example, you can create a QNode that does qubit rotation and another that does photon redirection. Photon redirection is just two qumodes with a beam splitter causing them to interact, and what you try to do is learn the parameters of the beam splitter that cause a photon initially in qumode one to end up in qumode two. It's a toy example, like qubit rotation, but it works, so we did it. This is something we ran, and what the optimization is doing is tuning the beam splitter on the CV quantum chip to give the same result as the qubit rotation on some qubit hardware chip somewhere else in the world, with a bit of NumPy processing to construct the cost function. We're still working on public access to our hardware; it will be available in a private beta, then a public beta, later this year. But it's really exciting; we're really excited by what we can do with PennyLane. If you're interested, there are a couple of other examples in the PennyLane documentation, at pennylane.readthedocs.io, and these aren't all toy examples. I mean, there is one on qubit rotation there, in case you want to look at that again and actually delve into the maths and show that it gives the correct results for the angles. We also have a couple of others: a variational quantum eigensolver, which is really easy to build in PennyLane and is very big in the literature right now, especially in quantum chemistry, and variational classifiers, also easy to build with PennyLane, and we have an example of that. I
I think in one of our papers we created the Xanadu logo and then used a quantum neural network to classify the logo against the background using randomly sampled points, and it worked really well. QGANs as well: we have an example Jupyter notebook demo in our documentation that builds a QGAN using PennyLane, I think with one QNode for the generator and one QNode for the discriminator. We also have a couple of continuous-variable notebooks: the toy example for photon redirection, in case you want to look at that one in more detail, and one on quantum neural networks, which I talked about right at the very beginning. That was something we were doing six months ago in TensorFlow, with really slow, classically intractable computations; now we can do those exact same things in PennyLane, and they're faster just by nature of having these analytic gradient formulas, because we don't have to repeat the simulations numerous times to get the gradient.

So, a quick summary. The whole idea is to run and optimize specific aspects of your computation directly on quantum processing units, QPUs, in a similar way to how you might use GPUs today. And this isn't theoretical; it's something you could do right now using the quantum devices available on the cloud. What PennyLane brings to that is a quantum-aware implementation of backpropagation: anything to do with analytic gradients for backpropagation, we take care of. It's hardware agnostic and has a large plugin ecosystem already covering all the major frameworks, so keep your eye out for the Forest plugin. And as with all our other projects, we're very excited by open source: we want all the research and code we use internally to be open source, so we're very happy to provide this on GitHub. Feel free to check it out, and check out the documentation as well.

Just to finish off, we're running a bit of a competition at the moment to see what is the coolest thing people can do with PennyLane.
We've got a crazy amount of prizes on offer, up to a thousand dollars Canadian, in three categories: education, software, and research. If you're a physicist doing research and you feel it would be a good fit to use some of our software like PennyLane, feel free to enter the research award. If you want to submit some PRs, or you have a way of making the code more efficient, or you have a plugin idea, submit something for the software award. And if you have a cool way of educating people about quantum computation, then check out the education award. This is a long-term competition; I think entries close at the end of August. Thank you.

You mean for that particular example, the qubit rotation? For the qubit rotation, yeah. So the question was, if I was to rerun the failed demo, would it work with classical computing? Yes, it should give the exact same result. Classically we're doing a simulation, but because it was such a small system it was easy to simulate. The problem comes when you have a large computational model with a huge number of qubits, where the simulation becomes intractable; then we can still do it using the quantum devices as accelerators.

So the next question was, can you find a way of embedding the analytic gradient formula into the quantum circuit directly, rather than having to query the quantum circuit additional times? That's a really good question, and something I think is still lacking in the literature. I know there are examples of quantum circuits that do numerical differentiation by themselves; I'm not sure there's one for analytic differentiation, but it's something we're looking into at the moment. The way it's set up, you only ever get a constant overhead: you're only ever querying the circuit a maximum of two additional times per parameter, and in some cases we could actually rewrite it so that the parameter-shift rule only requires one additional query to the circuit.
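The "two additional queries per parameter" overhead comes from the parameter-shift rule. For a rotation gate whose expectation value is f(theta) = cos(theta), such as the RX gate in the hello-world example, two shifted evaluations give the exact derivative. Here is a quick NumPy check of that identity (a classical illustration; on hardware each call to f would be a circuit run):

```python
import numpy as np

def f(theta):
    # Expectation value <Z> after RX(theta) on |0>; on a QPU this is
    # one circuit evaluation.
    return np.cos(theta)

def parameter_shift_grad(theta, shift=np.pi / 2):
    # Two extra circuit evaluations per parameter: the constant
    # overhead mentioned above. For this gate the result is exact.
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.37
exact = -np.sin(theta)  # analytic derivative of cos(theta)
estimate = parameter_shift_grad(theta)
# estimate matches exact to machine precision
```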
Some gates we've identified do allow that, but at the moment, because we're generalizing it for everything, we haven't worked in those optimizations.

[Audience] That's awesome, because I'm doing optimization with Strawberry Fields, so this is a big advantage. But I need to use non-linear gates; in that case would it just not work, or would it fall back to a more complicated way of computing the gradient?

Which gates, the non-linear gates? OK, so the question is, if you use non-linear gates, does PennyLane fall back to other methods? Yes, that's the case. We've also got numerical differentiation built into PennyLane, so if PennyLane ever comes across a case where it can't do the analytic gradient, for instance the Kerr gate or the cubic phase gate, where analytic gradient formulas don't exist, then it automatically falls back to finite differences, I think second-order finite differences.

Yes, we do have some papers where we're using multiple layers. I didn't work on that research, so I can't tell you more detail, but I advise you to check it out on the arXiv, particularly the CV quantum neural network paper.
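As a sketch of that fallback, a second-order (central) finite difference also costs two evaluations per parameter, but it is approximate rather than exact. The cost function below is an arbitrary smooth stand-in of my own; in the real case it would be the expectation of a circuit containing, say, a Kerr gate:

```python
import numpy as np

def cost(theta):
    # Stand-in for a circuit expectation with no known analytic
    # gradient recipe (e.g. one containing a Kerr gate).
    return np.exp(-theta) * np.sin(3 * theta)

def central_difference(fun, theta, h=1e-5):
    # Second-order finite difference: two evaluations, error O(h^2).
    return (fun(theta + h) - fun(theta - h)) / (2 * h)

theta = 0.8
approx = central_difference(cost, theta)
exact = np.exp(-theta) * (3 * np.cos(3 * theta) - np.sin(3 * theta))
# approx agrees with the exact derivative to high accuracy
```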