So, yeah, hi everyone. Welcome to another session. We're closing out day two, but I wanted to give a quick introduction today to quantum machine learning with the help of TensorFlow. Just before we start, how many of you are aware of machine learning? And how many of you are aware of quantum computing and what quantum computing is all about? And how many of you are aware of quantum machine learning? Okay, so it looks like very few. So today we are going to be deciphering what exactly quantum machine learning is. It will be a very beginner-friendly session. Thanks for that quick show of hands for letting us know what the audience is like today. So yeah, without wasting any further time, let's get started. I'm Shivay, a developer advocate at Meilisearch. And with me is Rishit, who studies CS at the University of Toronto and also does academic research at the intersection of deep learning and computer vision. Rishit has been contributing quite a lot to Cirq, and I've been contributing quite a lot to TensorFlow Quantum, which is built on top of Cirq. So yeah, we'll also talk about these projects; that's a sneak peek of what's about to come. First of all, I wanted to start with who this talk is for, which is why I did that survey. Whether you're a machine learning engineer interested in quantum computing, or a software engineer willing to get into quantum computing, or you want to go all the way toward becoming a quantum-computing-based machine learning engineer (that's QML, quantum machine learning), this talk is for you. Again, as I mentioned, this is a beginner-friendly talk, and we'll cover everything from what exactly quantum computing is, for those folks who are not aware of it, to how you can apply it to machine learning principles.
How is it different from, let's say, your classical machine learning algorithms, and how can we actually combine the core concepts of quantum computing and machine learning to build machine learning models? So yeah, without wasting any further time, let's get started. We'll start with a quick introduction, mainly because this is more of an open-source-focused conference with a lot of cloud native folks, including myself. I'll start by talking a bit about the fundamentals behind building Cirq and even TensorFlow Quantum. Standard computers usually work with ones and zeros: any bits you have are either ones or zeros. Also, since this is up there: this talk will be from a software perspective, so there will be no math, so to say; we have had other talks with a bit heavier math, but this will be from the software perspective. Now, what a qubit does is this: while a qubit exists unmeasured, it can be one with some probability and zero with some probability. When you measure it, it's either one or it's zero. But while it exists and you're not measuring it, it could be one with some probability and zero with some probability. That seems a bit odd, and especially if you're from a software background, it might seem like, how do you make use of this, how do you exploit this fact? We'll talk about that. So that, very fundamentally, is what you call a qubit. Let's also talk about gates; once we talk about gates, I think you'll have a better idea of why this even exists. Very simply, a gate is a 2^n x 2^n matrix, where the n over here is how many bits you're making it work on. So a NOT gate works on a single bit, an AND gate works on two bits, and so on, simple stuff.
There are also some special properties for this matrix: it should be invertible and so on, but let's not talk about that; let's keep it from the software perspective. And here is an interesting case of simple matrix multiplication as a gate. What you essentially want to do when applying a gate to a bit or a qubit is see how the gate acts on the standard basis vectors, and that essentially tells you what the gate will compute. Just as traditional software uses AND, NOT, OR, and stuff like that to build pretty much all of software today, it's pretty similar here: you can perform multiple operations using gates and then combine gates. As a starting exercise, it's pretty easy to start building a two-bit adder; I assume most of you might have done that in college with AND, OR, and NOT gates. So what this does is, I've just taken a qubit and applied a gate to it. This gate is simply a matrix multiplication, but gates could be for anything. And I just wanted you to see from this kind of animation that you are transforming the state in some way. You put together multiple of these gates and you end up with a circuit, and circuits are what we'll be using. If the name sounds familiar, I was talking about Cirq and contributing a bit to it; this is what Cirq makes particularly easy for you. You can build these kinds of circuits pretty easily with Cirq, and we'll also talk about that. This is what a circuit looks like, pretty similar to how we do stuff in standard software. All of the blocks you see are gates, and a particular slice of the circuit is what we'll call a moment; that's like a moment in time, think of it that way. Okay, that's enough beating around the bush; let's start talking about why this is faster and why you should even think about this. So I make the claim that this is exponentially faster.
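The gates-as-matrices idea can be sketched in a few lines of NumPy. This is our own illustration of what the animation shows, not code from the talk:

```python
import numpy as np

# A qubit state is a length-2 complex vector: |0> = [1, 0], |1> = [0, 1].
zero = np.array([1, 0], dtype=complex)

# The NOT (Pauli-X) gate is a 2x2 matrix; applying a gate is just a
# matrix-vector multiplication.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)

one = X @ zero          # flips |0> into |1>

# The Hadamard gate puts a basis state into an equal superposition.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

superposed = H @ zero   # amplitude 1/sqrt(2) on both |0> and |1>

# A circuit is a sequence of gates, which is just a product of matrices:
# "apply H, then X" is the single matrix X @ H.
circuit = X @ H
```

The squared magnitudes of the amplitudes are the measurement probabilities, so `superposed` measures as one or zero with probability 0.5 each.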
Seems interesting. I make the claim that this is exponentially faster, which is not fully true: it is exponentially faster if you know how to exploit duality. So let's take a look at this, and I'll again be using some animation. The main reason I use animation is so I don't have to write equations and explain them; that's hard to do on a screen, especially at a software conference. Let's start by saying that we have two bits. These could be anything: zero one, one zero, anything. We have two separate bits, independent of each other. The first step is to entangle two qubits; we'll see what entangling qubits means. Then we give these two qubits to two different people, so it starts becoming much like a communication problem, like you would have seen earlier. What we want to do is walk through all of these steps and see how it works out. We won't go into the detail of how the encoding part works in particular, but there's enough documentation on that; I just want to fundamentally explain the idea. So this is the kind of qubit we have, and these are the different states we have. Let's say we are working with the bits one zero; this is what it looks like, and we pass it through some gate. For now, let's just say that there exists some kind of circuit that does the encoding for me. For the example we were just seeing, I was able to make this circuit, and that allows me to encode this qubit. So now, with some circuit (it just flashed there), I've been able to transform two bits into a single qubit; those are the b1 and b2 I had. And finally we have a state for which we have another circuit, and you go through the circuit like I'm showing you, and you finally have an encoding. So the way this encoding works is using multiple circuits, which I just manually built for this example.
But let's just say you have these circuits. Once you have a qubit, you measure it, and once you measure it, it's either one or zero. So our friend here who wants to receive these two bits gets those qubits, and I guess that went a bit fast. But when you get these qubits, you run a decoder on them, and the decoder is again a circuit. Here is where the entangling part becomes pretty useful: if the two qubits are entangled, which was our first step, you will always have a nice circuit to decode that. And once you decode that single qubit, you end up getting two bits of information. So there were two people; one had two bits; they were able to transfer a single qubit and send over two bits of information. That seems fundamentally correct at the moment. There is a bit of the encoding and decoding part which we have skipped through, but I hope that makes sense, how you get from one qubit to transporting two bits of information. And this is another example of what just a few qubits could do: you can most certainly see that you have 2^n states. Each qubit could carry two bits, and with n qubits you have 2^n states you can now represent. And if I just wanted to build this circuit, or any kind of circuit, Cirq makes it particularly easy for me, and this is also what we'll talk about in a while. These are the projects we have been working on, and that's the motivation for the talk. Cirq makes this process of building the circuit and doing the encoding and decoding parts particularly easy, so I could start using the benefits of 2^n. Now, this is more of a stat I like to use for publicity, and there's a bit of a gotcha with it, but it's a pretty famous stat and a lot of people quote it. The gotcha is that it counts the number of states, but it's still interesting to see, so I'll say it anyway.
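The entangle, encode, decode pipeline just described is the superdense coding protocol, and it fits in a short NumPy simulation. The gates below are the textbook choices for this protocol, not necessarily the exact circuits from the slides:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])            # bit flip
Z = np.array([[1, 0], [0, -1]])           # phase flip
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],            # controlled-NOT on two qubits
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def superdense_send(b1, b2):
    # Step 1: entangle two qubits into the Bell state (|00> + |11>)/sqrt(2).
    state = CNOT @ np.kron(H, I) @ np.array([1, 0, 0, 0])
    # Step 2: the sender encodes two classical bits by acting on HER qubit only.
    encode = {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}[(b1, b2)]
    state = np.kron(encode, I) @ state
    # Step 3: the receiver decodes with the inverse entangler, then measures.
    state = np.kron(H, I) @ CNOT @ state
    outcome = int(np.argmax(np.abs(state) ** 2))  # state is a basis vector here
    return outcome >> 1, outcome & 1

# One qubit crosses the channel, two bits come out the other side.
```

Because the final state is an exact basis vector, the measurement is deterministic and the receiver recovers both bits every time.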
So 300 bits are not even enough to store one image, but with 300 qubits you could have more states than the number of particles in the universe. Again, this is the number of states, and at first it might seem a bit misleading, but yeah. So that was a bit of a primer on quantum computing itself. And if that felt like a bit too much, it happens to everyone. It happened to me; it took me around two to three months to really understand quantum entanglement and all the physics and math behind it. We'll be sharing some resources after this talk so that you can all get started. But coming to the main topic for today's presentation, that's quantum machine learning: applying the same principles that fundamentally define how quantum computing works, but to the entire landscape of machine learning. Primarily, what you'll see is that we are going to use the concept of quantum circuits to find the patterns and relationships within our data. That can be applied to whatever dataset you might actually use, and we'll see how your standard datasets can get converted into a circuit. By using TensorFlow Quantum, you can then run your standard Keras or TensorFlow based machine learning models on top of this particular data. These quantum circuits essentially find patterns, the same way any typical machine learning algorithm finds patterns in data, and then you can use them to find patterns in any new data you introduce, to make predictions. That's what is essentially being done with the help of these quantum circuits. They are sort of replacing your normal tensors, and of course we'll be representing these circuits with the help of tensors within TensorFlow when we define that.
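As a taste of how a classical number could become a qubit state, here is one common choice, angle encoding, sketched in NumPy. The exact mapping is our illustrative assumption, not the specific encoding the talk's notebooks use:

```python
import numpy as np

def encode(x):
    # Angle encoding: map a feature x in [0, 1] to a qubit state by
    # a Y-rotation of angle pi * x. x = 0 stays |0>, x = 1 becomes |1>,
    # and values in between land in a superposition of the two.
    theta = np.pi * x
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])
```

A whole feature vector becomes one qubit per feature, and that collection of rotations is exactly the kind of "data circuit" the frameworks build for you.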
So with the help of these patterns, these quantum circuits can make predictions about your new data. One thing that my co-presenter Rishit covered was duality, so how can we actually exploit this duality with the help of QML? If we look at how quantum computing fundamentally works, there are primarily two things. One is quantum interference. Think of representing these quantum states as waves: different waves can collide with each other or add up with each other. If these waves collide, they might suppress each other, and if they amplify each other, that will raise the amplitude of a state. That can actually lead to more accurate predictions and better classification; if you are using a quantum machine learning algorithm for, let's say, binary classification, this quantum interference actually does help with that. The other one is superposition, another very well known concept. The idea is that if you have multiple qubits in superposition, that enhances the overall parallel computation for your machine learning algorithms, and we know that parallel computation, especially in machine learning, is very useful for making faster predictions. So primarily, by using these core concepts of quantum computing, whether it's interference or superposition, you can exploit the duality. Again, duality here means that your state could be represented either as zero or as one. So that's how you exploit duality for quantum machine learning, and we'll now talk about how you can actually leverage hybrid models. Yeah, so most of my work has been around machine learning, and that's how I started exploring this area, got to know about these projects, and started using them.
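The interference idea is easy to see in two matrix multiplications. A minimal NumPy sketch (ours, not from the slides):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
zero = np.array([1.0, 0.0])

# One Hadamard: equal superposition, 50/50 measurement odds.
plus = H @ zero

# A second Hadamard makes the two computational paths interfere:
# the |1> amplitudes (+1/2 and -1/2) cancel destructively, while the
# |0> amplitudes (+1/2 and +1/2) add constructively.
back = H @ plus   # we are back to |0> with certainty
```

Quantum algorithms are arranged so that wrong answers cancel like the |1> amplitudes here and right answers reinforce.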
I saw a fundamental mismatch, which is that it wouldn't be faster for just the classic image recognition stuff I had been researching. So I started researching more about hybrid models, and this was quite a while ago. This is where the motivation for hybrid models comes in, and this is also what the open source projects we'll be talking about help you do better. We want to use qubits for the ops for which they are faster, and these would often be ops where we can exploit the principle of superposition. For all the other ops, it doesn't necessarily seem to be faster: we would still be handling the same amount of data and spending the same amount of compute, probably even more. So this is where the idea of hybrid models comes in. We use quantum ops for quantum data, or where we can exploit superposition, and for the rest of the parts we just use classic machine learning or software techniques. This is how it seems to work out best: it makes the best use of superposition principles and also makes use of standard neural networks. So we take both and combine them. This is an interesting image, and I just want you to take one thing from it, which is: think about all of this as a single circuit to start out with. This is the kind of circuit we want to build, probably with Cirq, to make understanding all of this easier. We also have a quantum model, which is essentially just a circuit, and we want to compute gradients for it. The standard idea is the same: we use the cost function to update the parameters of the quantum model we have. And we also need a classical model, because classical inference is done way faster and way better by a standard neural network. So you're essentially solving a two-network problem, and we want to minimize the cost function.
So this becomes a standard optimization problem; you could pretty much reduce it to one. If you notice, there are a lot of techniques in standard machine learning where you optimize two neural networks at once against a single cost function: a lot of the motivation for GANs and VAEs essentially came from pitching two networks against each other. That's not exactly the case here, but it draws a nice parallel to classical machine learning. So what's the main difference between quantum machine learning and your standard machine learning algorithms? I'll break this down into two parts. The first one is how the different types of algorithms deal with data. My fellow presenter spoke about how you can actually use quantum data. With your standard machine learning algorithms, the dataset can be of any type: you might be representing it as, say, images or videos, but ultimately it's in the form of bytes. With quantum machine learning algorithms, since the data is quantum data, you're essentially dealing with states: represented as, let's say, zeros or ones, plus the whole range of superpositions that can exist between them. This particular diagram showcases the difference with a standard example. If you look at CML, that's basically classical machine learning, on the left hand side, you have three bits that you're taking in as an input and getting as an output. But when you're using something like quantum machine learning, you essentially have three qubits.
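To make the optimization side concrete, here is a toy sketch of gradient descent on a one-parameter quantum circuit, simulated in plain NumPy (our illustration, not the talk's code). The cost is the expectation of the Z observable, and the gradient comes from the parameter-shift rule, one of the ways quantum frameworks differentiate circuit parameters:

```python
import numpy as np

def Ry(theta):
    # A single-qubit rotation gate with one learnable parameter.
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def cost(theta):
    # Expectation of the Z observable on Ry(theta)|0>; analytically cos(theta).
    state = Ry(theta) @ np.array([1.0, 0.0])
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(state @ Z @ state)

def grad(theta):
    # Parameter-shift rule: the exact gradient from two extra circuit
    # evaluations, with no backpropagation through the quantum part.
    return (cost(theta + np.pi / 2) - cost(theta - np.pi / 2)) / 2

theta, lr = 0.1, 0.4
for _ in range(200):
    theta -= lr * grad(theta)   # plain gradient descent on the circuit parameter

# cost(theta) has been driven to its minimum of -1, near theta = pi
```

The classical half of a hybrid model is trained with ordinary backpropagation; a rule like this supplies the gradient for the quantum half, so one cost function can drive both.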
And as you know, that would essentially mean 2^n different possible states, and all these different states get utilized inside the machine learning algorithm: you're dealing with those states, essentially, and you are converting your standard data into a quantum representation by passing it through this quantum machine learning process. We'll see how this correlates when we talk about some examples with the hybrid models my co-presenter spoke about. The other way they are fundamentally different is that QML algorithms are primarily designed to run on quantum systems. If we look at one of the most well known quantum algorithms, Shor's algorithm, it runs exponentially faster than the best known classical approaches, primarily because of the way it deals with these multiple states using concepts like entanglement and superposition. And that brings us to Cirq. So we'll now start introducing the projects and probably also start talking about how you implement this stuff; though the detail so far was a bit shallow, we'll start by talking about the projects. I've been working on Cirq, contributing and writing a few of the APIs for it, and Cirq allows you to make these sorts of circuits, which you've heard about quite often and which are even the basis of the overall models we're building. It makes writing, manipulating, and optimizing these circuits pretty easy. I also talked about doing a kind of gradient descent on the quantum model; the way this works is, well, it is gradient descent, but the parameters you update are different than in your standard models.
So Cirq makes this easier. I unfortunately also don't have a quantum computer, so it makes the simulation part easier as well. This is a quick example of Cirq and the basic building blocks that we'll be using multiple times. In a while we'll be talking about TensorFlow Quantum, and TensorFlow Quantum uses Cirq for creating and manipulating these circuits and the ops that build them. So this is essentially how qubits are handled and how gates are made. This is a pretty simple example, and I show it in two parts because that's how it's represented in memory: we have all of these moments, which are slices of what happens at a particular point in time. This example just adds a NOT gate after the two inputs we have, and Cirq allows you to implement these NOT gates and all the other kinds of gates we often use very easily. Yeah. And that basically brings us to TensorFlow Quantum. TensorFlow Quantum is an open source project by Google. It was released in 2020, and it's essentially built on top of Cirq. So whatever analysis you do, you're essentially building these circuits, and TensorFlow Quantum adds the layer of TensorFlow on top of them. Some of the big benefits you get by using a framework like TensorFlow Quantum: first of all, it removes a lot of the abstraction layers for you when you're dealing with machine learning algorithms, so a lot of the mathematical compute you would normally have to write yourself, TensorFlow does away with. And one of the other benefits, which I covered earlier and will talk about more when we share a live coding example, is this: if you look at standard TensorFlow, we have Keras.
A lot of the Keras functions that machine learning engineers typically use, TensorFlow Quantum actually lets you use on top of your quantum compute as well. So if you have a quantum dataset, you can very easily use your standard Keras functions. That means if you're using something like model.fit or model.compile, which a lot of you who work with TensorFlow or Keras might have actually used, you don't have to reinvent the wheel and learn some completely new functions; you can directly use the same functions with your quantum dataset as well. Now, if you look at the second line, I have used an asterisk, and that is because it's not entirely true: there are some other changes you have to make, especially the ones we spoke about, namely that with quantum mechanics you deal with states. The data you're dealing with is states, not just regular bits. The only changes you'll have to make in your overall machine learning program are to deal with the states and to convert your circuits, and that's where we're using Cirq, converting these circuits into tensors. Then you can still use your regular machine learning model functions, such as compile or fit, to fit your model to whatever dataset you're dealing with. We'll see this in a stepwise manner; we have two code demos ready to walk all of you through how this entire process works, so just stay with us. But primarily, as I mentioned, since the Cirq library and TensorFlow Quantum are very closely related, we are going to be representing these circuits as tensors. And the second thing is that we are going to be classifying the data that will be used.
So we have prepared a classification algorithm that we are going to use to demonstrate this entire process. And of course, with TensorFlow Quantum, you can also introduce parallelism to your code: if you're dealing with larger datasets, you can run each individual experiment in parallel as well, whether you're training your algorithm or doing the inference part. We'll not go too deep into it, but TensorFlow Quantum does allow for that kind of use case if you are interested. So, over to the demo time. We are running these particular programs in a GitHub Codespace, so shout-out to GitHub. If you look at the official TensorFlow Quantum GitHub repository, there are a lot of different code samples already there. We could not actually use Google Colab because, unfortunately, the TensorFlow Quantum project only supports Python versions 3.7, 3.8, and 3.9, and by default, Colab nowadays only supports Python 3.10. So you might have to install a separate version of Python, anything between 3.7 and 3.9. Also do ensure that the current release of TensorFlow Quantum, which is 0.7.2, only supports TensorFlow versions 2.7 and below. Just some prerequisites to keep in mind if you want to try these on your own; we'll be sharing some links at the end of the session so that all of you can try it out. So over here, I have my first notebook. In the first notebook, to get started, you'll have to install three main dependencies: TensorFlow, which is simply pip install tensorflow; then Cirq, so pip install cirq; and then pip install tensorflow-quantum to use the TensorFlow Quantum library. These are just the prerequisites that you will require.
One important dependency that you'll see over here on line number three is SVGCircuit. You can actually generate SVG images to see what your circuit looks like, which Rishit showcased in the demonstrations during the slides. Sorry for the cut-in, but those were actually generated using Manim, I think. But some of these other images that you see, you can generate using SVGCircuit as well. Okay, so the first step is to import the dependencies. Once you have done that, this is where we first define our circuit. One of the methods that you see over here is GridQubit. Cirq actually has multiple ways to define qubits and build your circuits; the one we have used is GridQubit, but there are a few others in the documentation, so you can define your initial circuit in whichever way you want. Over here, we have just placed the qubit at the coordinate (0, 0). Then we create the circuit, so I'll just go ahead and run this. Once you have created a circuit, this is where we use TensorFlow Quantum: it comes with a convert_to_tensor function, which will take your circuit and convert it into a tensor. If I go ahead and run this, right now of course we are not doing anything with the actual converted tensor, but this is the representation of our circuit, which is at the coordinate (0, 0). And if I go ahead and actually print the circuit over here, you can see what it looks like as a circuit.
But if I change it to the tensor circuit, which is our converted tensor, and run this, this is how any standard TensorFlow tensor would look. So we have converted our circuit into a tensor with the help of the convert_to_tensor function provided by TensorFlow Quantum. This was just the first program we wanted to show you: once you create a circuit with the help of Cirq, how easily you can convert it into a tensor with TensorFlow Quantum. Do you want to add anything to this program? No, these are pretty much just like the circuits we saw in the slides. All right, so we'll move on to the next demo. This program is going to do classification: think of it as binary classification using an SVM-like machine learning algorithm, and we'll also look at how you can track your loss along the way. This will give you an end-to-end example of how you can leverage a circuit and then train it in the classical machine learning fashion that you would use with your regular data. Over here, we have imported all of our dependencies; the extra one is Matplotlib, to plot our loss. After this, first of all, this is our main function. I'll not go too deep into explaining all of it, but in a nutshell, this make-data function is essentially used for generating our quantum data. You'll see that we are again using the Cirq library over here to create a circuit, and we are doing a number of different transformations on top of our data. Of course, once you have created a circuit, you can very easily transform the data inside of it.
For example, you can change the axis: with circuits you are not limited to X and Y, you can deal with the X, Y, and Z axes as well. So we have just gone ahead and done a bunch of transformations to our data to create a circuit, and I'll just go ahead and run this. The next cell is where we're actually creating a parametrized quantum circuit. And Rishit, do you want to talk more about parametrized quantum circuits? Sure. Essentially, we want to have some learnable parameters over here, not just the fixed kinds of circuits we've seen, because we also want to do some kind of gradient descent on this; that's the reason we use parametrized quantum circuits. Another thing to note is that there's also some data management that goes on if you were to actually run the Cirq stuff on a real quantum computer. We actually have one at the university, and I've done a lot of experiments with it, but here we skip through a lot of the data management stuff. A lot of what we are doing here is in eager mode, and at least for the title of this talk, I think that's justified. Yeah. And of course, we are running this in a GitHub Codespace or, for example, Google Colab, rather than on an actual quantum computer. So, we are primarily creating a parametrized quantum circuit that allows us to add learnable attributes to our circuit so we can use it inside a machine learning algorithm. Then, and this will probably be familiar to all the machine learning folks, we define the accuracy function that will be used with the circuit we have defined. And over here, we break our quantum data into training and testing data.
Training will be used primarily for fitting our model on our data, and testing to look at our loss and see how the model has performed over the course of training. Then we have defined our qubit; as in the previous example, we used GridQubit to create it. And we have created our PQC, that's our parametrized quantum circuit model. Finally, you'll see that we have compiled our model. As I mentioned earlier, with TensorFlow Quantum you can still use your standard Keras based functions after you have converted your circuits into tensors, so you can still use your normal machine learning workflow. Over here, we are compiling with the Adam optimizer, again very standard stuff if you are from a machine learning background, and we have fit our model on our quantum training data. You can see all these different epochs; I'll just run it, since it only takes a few passes. So it's going ahead and training. Finally, we're just plotting our data: both our training and our validation loss, as you would see in any standard machine learning workflow, and you can see how the loss has actually fared over the course of all these epochs. The main idea was to showcase that you can take regular data, create the circuits on top of it, and use your regular machine learning workflow very easily. We also have another example, but before that, Rishit, do you want to add anything? Sure. These demos were created by Shivay, but I just wanted to add something.
The reason we can use a similar loss and optimization strategy is that, if you think about it, all of the loss computation and optimization runs on standard values, standard bits. We're not thinking from the qubit perspective when doing the optimization, which is why we can still use standard loss functions and standard optimization strategies.

And here's another example, since we're running slightly over time: using QNNs on the MNIST dataset, which a lot of you will be familiar with. We'll be happy to share these notebooks with everyone at the end of the session, so feel free to connect with us. The main idea was to demonstrate that you can leverage the same set of tools you use for regular machine learning, except that now you're working with quantum data, dealing with states and circuits instead of the regular data you'd feed a classical algorithm.

After this, we'll also talk a bit about the current status of quantum machine learning itself. As a primer: a lot of the quantum machine learning you see right now is still in the research phase, so there are very few, if any, applications with widespread industrial use. That said, many of the world's major companies, including Google and IBM, do support quantum computing and quantum machine learning today. There's Qiskit by IBM, and Google has their open source projects, Cirq and TensorFlow Quantum. One of the main research focus areas is drug discovery.
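The point that the optimization runs on standard bits is worth making concrete. Once a circuit is measured, all you get back are classical bit counts, and everything downstream (expectation values, losses, optimizer updates) is ordinary classical arithmetic. A hedged numpy sketch, with the shot-sampling simulated rather than run on hardware:

```python
import numpy as np

rng = np.random.default_rng(42)

def measured_z(theta, shots=1000):
    """Estimate <Z> for RX(theta)|0> from simulated measurement shots.

    After readout the circuit only hands back classical bits, so the
    loss below is plain classical arithmetic on floats.
    """
    p1 = np.sin(theta / 2) ** 2          # probability of measuring |1>
    ones = rng.binomial(shots, p1)       # classical bit counts
    zeros = shots - ones
    return (zeros - ones) / shots        # <Z> estimate: P(0) - P(1)

# A completely standard squared-error loss on the measured expectation value.
target = -1.0
loss = (measured_z(np.pi) - target) ** 2
```

This is why standard Keras losses and optimizers carry over unchanged: from the optimizer's point of view, the quantum circuit is just a function producing real numbers.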
With the help of QML algorithms, you can work with large drug molecules and classify between them, so that's one major area where you see quantum machine learning being applied right now. Apart from that, IBM has recently started applying a lot of quantum machine learning through the Qiskit project. It's an open source framework, similar to TensorFlow Quantum, and they're leveraging it for NLP, that's natural language processing. Another area where you'll find it is financial modeling. So these are some of the areas with a lot of active research. What's been found is that for certain problems it can be much more efficient than standard machine learning, but that comes with a caveat: it's not always going to be faster than your regular machine learning algorithms. These are just some of the areas where it has been found to be faster. And looking at the future scope, beyond the research in these particular areas, you'll also see it a lot in materials science. Anything to add? Good. Perfect.

So with that we'll conclude, and we're open to questions now. If you want to ask us about contributing to any of these open source projects, feel free, and if not, that's all for today. Thanks.

Actually, two related questions. One, you said you need to run a transformation of the data, from regular data to quantum-enabled data. I'm assuming that's standard compute, not quantum compute, right? You don't need a quantum computer to do that transformation. But there's also a cost associated with that for your model.
I know you're saying that the algorithm itself runs exponentially faster, but I'm assuming that transformation cost plays some part in that equation. So is there a particular point at which it becomes worthwhile?

So the question is: we also do some transformations on the data beforehand, so how does that factor into the cost, and what's the effect of that? For the demos, maybe Shavai can talk about that, but in general, any demos we run right now, even though I wasn't the one building those, I can say were definitely slower than classical algorithms, and I'm fairly sure that's the case for the demos you saw as well. Yeah, they're still slower than classical algorithms, and that's because the cost of transforming that data is huge, while the gains you get in a simulation are not that large. But let's talk about the ideal scenario, at least in my experience. What you'd often want is to find avenues where the data is already in the qubit states you want; that's one of the main areas where you'd be able to apply it efficiently. For a lot of classical data sets, it seems hard to apply right now. That's what I've seen from my experience, but Shavai, if you want to talk about the demos or anything else?

No, I completely agree. And that's why I mentioned that a lot of the research going on right now, especially in drug discovery, is in that process of converting an entire classical data set, say of drug molecules, into qubits. That process does take a lot of compute and can undercut the claim we made initially that it will be faster. But as mentioned, once we get our data into the qubit format, then of course it will be a lot faster.
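The conversion step being discussed, loading classical data into qubit states, can be sketched as follows. This shows "angle encoding", one common scheme, which is an assumption on my part rather than necessarily what the demos used; the function name is invented for illustration.

```python
import numpy as np

def angle_encode(features):
    """Angle-encode a classical feature vector: one qubit per feature,
    each prepared as RY(x)|0> = [cos(x/2), sin(x/2)].

    This state-preparation work scales with the amount of classical
    data, which is exactly the overhead raised in the question.
    """
    features = np.asarray(features, dtype=float)
    return np.array([[np.cos(x / 2), np.sin(x / 2)] for x in features])

# Each row is a normalized single-qubit state for one input feature.
states = angle_encode([0.0, np.pi / 2, np.pi])
```

Because every feature must pass through a preparation step like this, the encoding cost can dominate for large classical datasets, which is why the speakers point to problems where the data is natively quantum.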
If you take away that initial conversion step, then it can be a lot faster than your standard algorithms. That's all. Thank you.