Hello again. So we have another interesting talk lined up by Abhilash on Introduction to Quantum Deep Learning. Abhilash is a research scientist working for Morgan Stanley Capital International and a former research engineer for HSBC Holdings. He has also authored a Springer publication on deep reinforcement learning in Unity, and he is a contributor to Google Research. He's a mentor for Udacity Nanodegrees in deep learning and has been associated with several organizations as a deep learning mentor. Over to you, Abhilash. Yeah, hi. Thanks for the introduction. Hi, everyone. So this session is going to be about Quantum Deep Learning. And this is a very interesting session, the reason being that most of the deep learning we see nowadays, whether in NLP, computer vision, reinforcement learning, or any variant of semi-supervised learning, is built around deep models, be it transformers, Keras models, or TensorFlow models, and all of this lives in the realm of classical learning. But what happens when we try to see beyond classical learning? So in this talk, I'm going to give you a perspective on how we can analyze classical circuits through the lens of quantum computing. The main part of quantum learning revolves around qubits. Qubits are the fundamental units of any quantum computing system. So to start off, we'll begin with an introduction to quantum computing: what does it mean, what are qubit systems, and what are quantum variational circuits. We'll see how to create hybrid circuits, which are circuits that employ both classical components and quantum components together. And we'll see how these combined hybrid circuits provide such a boost in performance when it comes to task-agnostic machine learning, be it NLP, graph learning, reinforcement learning, or any other kind of learning.
And we'll be seeing the applications of these in the fields of quantum reinforcement learning and quantum NLP, which are topics of active research. And we'll end with democratizing the adoption of quantum circuits, or quantum ansatz, over deep learning models. That being the content of today's talk, I'll move forward to the first part, which revolves around what quantum computing and qubit systems are. So the fundamental idea of quantum computing came from Max Planck's notion of quantization: that energy comes in discrete packets, and in computing those discrete states can be represented as bits, zeros and ones. The peculiar characteristic of a qubit is that when we measure it, we only ever get two states: either zero or one. We cannot observe the intermediate states in which a particular qubit lies at a particular time step. We can only see what happens after we perform certain operations on those qubits. So we can think of them as a kind of black box, where we know that measuring the qubits can yield only zeros and ones, but we do not know what goes on inside the operations, what the intermediate states of those qubits are. And that is very interesting because it opens up exponential possibilities. When we think from a classical perspective, in classical deep learning we are bounded by the number of dense layers we have, by the number of sequential neurons there are, whatever classical architecture we use for a deep learning task. But in the case of qubits, that intermediate state space grows exponentially, because we do not store the intermediate operations on those qubits; we can only store the terminal, final values, zeros and ones. And these qubits have certain interesting properties. They have wave-like properties, namely amplitudes, and they have a further crucial property called phase.
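To make that exponential state space concrete, here is a minimal NumPy sketch (my own illustration for these notes, not code from the talk). A qubit is just a normalized two-component vector of amplitudes, and joining qubits with the Kronecker product makes the state vector grow as 2^n:

```python
import numpy as np

# A single qubit is a normalized 2-component complex vector of amplitudes.
zero = np.array([1, 0], dtype=complex)   # |0>
one = np.array([0, 1], dtype=complex)    # |1>

# An equal superposition holds both amplitudes at once, even though a
# measurement only ever returns 0 or 1.
plus = (zero + one) / np.sqrt(2)

# Joining qubits with the tensor (Kronecker) product makes the state
# vector grow exponentially: n qubits need 2**n amplitudes.
state = zero
for _ in range(9):
    state = np.kron(state, plus)

print(state.shape)  # 10 qubits -> 2**10 = 1024 amplitudes
```

The amplitudes are the "intermediate" information a classical simulation has to track; on real hardware we never see them, only measurement outcomes.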
So these qubits are actually arranged on a sphere known as the Bloch sphere. The Bloch sphere has three different axes, X, Y and Z, and two angles known as phi and theta. These angles represent where the state vector of a particular qubit points on that Bloch sphere. And this helps to visualize what the output from a particular quantum gate is, what the final state of that qubit is, and other details. This Bloch sphere picture also shows the different rotations produced by different operations applied to those qubits. In quantum circuits, as in classical circuits, our task is to optimize a function. Let's say in computer vision you want to do classification on the MNIST dataset, digit classification, or you want to train a GAN, an adversarial network, on it. There you have a cost function, effectively called a loss function, which you want to optimize, and you apply different kinds of circuits, different kinds of layers, on top of that. In the case of pure quantum circuits, we are instead optimizing a cost function over the rotations. Qubits basically rely on rotations: after each operation, the qubit gets rotated about a certain axis, which can be the x, y or z axis. And a pure quantum circuit gives its output in the form of an expectation value, because that is the final state we can collect. As I mentioned before, we cannot harvest the intermediate states while a gate operation is happening on a qubit; we can only collect the terminal states, that is, the raw expectation values.
Now, this is very beautiful to understand, because what happens when we try to merge these rotational qubit systems and qubit gates with classical circuits? A qubit state can be written in sinusoidal form, in terms of cos theta, quite similar to complex mathematics, where two angles, theta and phi, describe the state. There are different gates when it comes to qubits. Among them are the X gate, Y gate, and Z gate; these gates are rotations about the corresponding axis. For example, the X gate is represented by the Pauli matrix with rows 0, 1 and 1, 0. So this is a rotation of a particular qubit about the x axis. Similarly, we have rotations about the y and z axes respectively. And these rotations form part of a randomized quantum circuit that tries to optimize certain loss functions. Some of the popular gates are the Hadamard gate, which has a 1/root-2 factor outside its square matrix so that the output state stays normalized, and the phase rotation gate, whose matrix is 1, 0, 0 and e to the i phi, where phi is the angle we mentioned before on the Bloch sphere. A special case of the phase gate is the S gate, whose formulation is 1, 0, 0 and e to the i pi by two; that means it acts like an RZ gate with a phase of 90 degrees. Now, why are we discussing this? The most important part is understanding that these rotations are the fundamental ingredient when designing a quantum circuit. There are different open source libraries for this; one of them is TensorFlow Quantum, which uses the Cirq simulator.
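The gates named above are small unitary matrices, so they are easy to write down directly. Here is a hedged NumPy sketch of the X, Hadamard, phase/S and RX gates (an editorial illustration, not from the talk's slides):

```python
import numpy as np

# Pauli-X ("bit flip"): a pi rotation about the x axis of the Bloch sphere.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)

# Hadamard: the 1/sqrt(2) factor keeps the output state normalized.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

# Phase gate with angle phi; the S gate is the special case phi = pi/2.
def phase(phi):
    return np.array([[1, 0],
                     [0, np.exp(1j * phi)]], dtype=complex)

S = phase(np.pi / 2)

# Parameterized rotation about the x axis, the RX gate.
def rx(theta):
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

zero = np.array([1, 0], dtype=complex)
print(X @ zero)          # flips |0> to |1>
print(np.abs(H @ zero))  # equal amplitudes 1/sqrt(2), 1/sqrt(2)
```

Note that applying S twice gives the Pauli-Z matrix, which is one way to see that S is "half" a Z rotation.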
Others are the most popular, IBM Qiskit, and we also have PennyLane. So these three are among the most popular, democratized open source quantum learning libraries, and we are going to use them to build some interesting circuits. These circuits employ rotations, and the fundamental building blocks of those rotations are the gates we just saw: RZ, RX, CNOT, the Hadamard gate, the S gate and the T gate. These gates can very simply be understood as rotations of a particular qubit by certain angles about certain axes. And we are interested in harvesting the final values coming out of a quantum circuit, the raw expectation values. An interesting point to mention is that when we take the raw expectation values, we use a Pauli operator. The Pauli operator gives us the raw expectation value of a particular qubit. So if we have two qubits, we can combine them using a joint Pauli-Z operator, or we can read out the two qubits separately using isolated Pauli operators. These analogies and circuit variations will come up repeatedly as we design different circuits. So now the next part is about quantum variational circuits. Before we move into quantum variational circuits, it is interesting to understand something. In computer vision or in NLP, we are interested in embeddings, that is, embedding data in a certain space, a Euclidean space preferably, belonging to the reals. We want to optimize over that space based on a certain loss function; we build models on top of it and use optimizers to drive that loss function to a favorable value. Now the same idea, just in a different guise, comes to quantum systems.
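The "raw expectation value" readout described above is just a quadratic form of the state with a Pauli operator. A small NumPy sketch of that (my own illustration, with a hypothetical `expectation` helper, not a library API):

```python
import numpy as np

Z = np.diag([1.0, -1.0]).astype(complex)  # Pauli-Z operator

def expectation(state, op):
    # Raw expectation value <psi| op |psi> harvested at the end of a circuit.
    return np.real(np.conj(state) @ op @ state)

zero = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

print(expectation(zero, Z))  # +1.0 for the |0> state
print(expectation(plus, Z))  # 0.0: an equal superposition averages out

# For two qubits, a joint Z (x) Z measurement combines both wires,
# while Z (x) I would read out the first qubit in isolation.
ZZ = np.kron(Z, Z)
state2 = np.kron(zero, plus)
print(expectation(state2, ZZ))
```

This is the quantity a variational optimizer actually sees: the per-wire or joint Pauli expectations, never the amplitudes themselves.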
So here, instead of encoding the data into a Euclidean space, we encode it into something called a Hilbert space. Now, these encodings or embeddings determine the angle of a particular qubit, that is, the phase; they can determine the amplitude of that particular embedding; and they may determine the basis vectors of that qubit system, of that qubit vector. So there are three kinds of embeddings we are going to see. The first is known as basis embedding. A basis embedding associates each input with a computational basis state of a qubit system, so everything is encoded as zeros and ones. Classical data, when we want to encode it in a basis embedding, has to be represented in zeros and ones; it cannot be encoded as decimals or float values. This is because we are converting and embedding into a zero-one space rather than a decimal or floating-point space. So basis embedding forms the first way of embedding into qubits. We also have amplitude embeddings. Amplitude embeddings are quite interesting because we are embedding the data into the amplitudes of the quantum system. So a normalized classical n-dimensional data point is represented through the sinusoidal, wave-like structure of the qubit state. And this is very important: whenever we encode into amplitudes, we are implicitly touching part of the phase as well. Just as a wave has sine and cosine components, in amplitude embeddings on qubit systems we are encoding the data into the amplitude of that wave.
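Basis and amplitude embeddings can both be sketched in a few lines of NumPy. This is an editorial illustration with hypothetical helper names (`basis_embedding`, `amplitude_embedding`), not the API of any particular library:

```python
import numpy as np

def basis_embedding(bits):
    # Encode a classical bit string as a computational basis state:
    # each bit picks |0> or |1>, joined by the tensor product.
    state = np.array([1.0])
    for b in bits:
        state = np.kron(state, np.eye(2)[b])
    return state

def amplitude_embedding(x):
    # Encode a real vector into the amplitudes of a qubit state.
    # Normalization is required so the amplitudes square-sum to 1.
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

print(basis_embedding([1, 0]))          # |10> = (0, 0, 1, 0)
print(amplitude_embedding([3.0, 4.0]))  # (0.6, 0.8) on a single qubit
```

Note the capacity difference: a basis embedding needs one qubit per bit, while an amplitude embedding packs 2^n real values into n qubits.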
And the third kind is angle embedding, which I did not mention here. Angle embeddings are based on dividing the phi and theta angles into certain values so that data can be embedded at those angular values. Let's say we have a phi angle of pi by three; that means I'm going to embed the data at the phase pi by three. And if I have fewer qubits than values to embed, I restrict myself to the number of qubits I have; I will not go beyond the number of qubits available. So angle embeddings are mostly about embedding a particular data point, which can be a floating-point or any real value, into the angular domain, be it phi or theta. So now that we have understood what quantum gates are, and have a brief understanding of quantum embeddings, we can go on to quantum variational circuits. Quantum variational circuits operate on quantum feature maps; feature maps are just another word for quantum embeddings. These are passed through a quantum circuit, or ansatz, and undergo a sequence of gate transformations. These gate transformations are basically the rotations I mentioned before, about the x-axis, y-axis or z-axis. And these randomized circuits try to combine different qubits and superpose different states on top of each other, because whenever we optimize a cost function in a quantum circuit, the circuit superposes states on top of each other. This superposition is done on the quantum feature maps, the quantum-embedded data. And alongside superposition there is entanglement, a key word in quantum: entanglement means the qubits become correlated in such a way that the state of one cannot be described independently of the others.
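The angle embedding described above, one feature per qubit as a rotation angle, can be sketched like this (my own NumPy illustration; the `angle_embedding` helper is hypothetical):

```python
import numpy as np

def ry(theta):
    # Rotation about the y axis of the Bloch sphere.
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2), np.cos(theta / 2)]])

def angle_embedding(features):
    # One feature per qubit: each value becomes a rotation angle.
    # With k qubits we can embed at most k features, as noted in the talk.
    zero = np.array([1.0, 0.0])
    return [ry(f) @ zero for f in features]

qubits = angle_embedding([0.0, np.pi / 3, np.pi])
print(qubits[2])  # a rotation by pi maps |0> to |1>
```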
And it is not possible to extract those states until we get the final terminal values, the raw expectation values. The same concept applies here. These U gates that we see here are known as universal gates; they are a generic representation standing in for any of the gates we have seen, be it RX, RY, RZ, CNOT, the Hadamard gate, the T gate or the S gate. A pure quantum variational circuit consists only of qubits and the rotations associated with them. But what happens if we try to combine these kinds of circuits with layers from other libraries? These can be layers from PyTorch, from TensorFlow, or from any other machine learning library. The important thing to know is that there are different kinds of pure variational circuits, depending on the architecture. The most common is the layered architecture. A layered architecture means you have blocks of layers, which are nothing but sets of quantum rotations. In the figure we see here, layer A with parameter alpha performs the CNOT gate, RX gate and RY gate, followed by a different layer, layer B, which applies a different set of rotations to the qubits. These layered architectures are quite popular and are perhaps the most used variants of quantum neural nets. Parameterized circuits are another variant of these layered circuits, where we pass different parameters into them. So what can these parameters be?
In a basic neural network, when we talk about parameters, the first things that come to mind are the weights of a hidden layer and the bias associated with it; parameters can also include the initializer variables or initializing functions of a layer, the learning rate, and various other things in classical learning. But in the case of parameterized circuits, we are concerned with the weights, the shape of the weights preferably. The qubits correspond to the number of wires in the system. In the figure we see here, we have four different qubits, Q0, Q1, Q2 and Q3, starting from zero. These qubits are effectively the wires present in a quantum circuit. A quantum circuit is, let's say, a synthetic circuit which can be run on certain hardware. As I mentioned, PennyLane, TensorFlow Quantum and IBM Qiskit provide quantum simulators or hardware backends on which we can perform these rotations and build these pure variational circuits. At the end of the session there is a repository, a group of links, which contains the details and implementations of all the different items we'll be seeing. So in parameterized circuits we pass certain parameters: say, the weights of the hidden layer of the qubit system, or the initializer, whether we want to initialize the entire qubit system as zeros or as ones, since there is no point in having values other than zeros and ones. Parameterized gates, when combined with these layered architectures, give us a pure quantum variational circuit.
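A layered, parameterized circuit of the kind described above can be simulated directly as matrix products. The sketch below is my own two-wire illustration (not the talk's repository code): each layer applies trainable RY rotations, the quantum analogue of dense-layer weights, followed by an entangling CNOT, and the output is the Pauli-Z expectation of wire 0.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2), np.cos(theta / 2)]], dtype=complex)

def layer(state, weights):
    # One layer of a layered ansatz on 2 wires: parameterized RY
    # rotations on each qubit followed by an entangling CNOT.
    rotations = np.kron(ry(weights[0]), ry(weights[1]))
    return CNOT @ rotations @ state

def circuit(weights):
    # The trainable parameters are the rotation angles, playing the
    # role that weights play in a classical dense layer.
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0  # start in |00>
    for w in weights:       # stacked layers, one weight pair per layer
        state = layer(state, w)
    Z0 = np.kron(np.diag([1.0, -1.0]), I2)   # read out wire 0
    return np.real(np.conj(state) @ Z0 @ state)

print(circuit([[0.1, 0.2], [0.3, 0.4]]))
```

Stacking more weight pairs deepens the ansatz, just as the talk's layer-A/layer-B figure suggests.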
So in this space there are again different variations of these circuits. Some circuits are known as IQP parameterized circuits, where you have Hadamard gates on four qubits, Q0, Q1, Q2 and Q3. You have an RZ rotation on the fourth wire, and that is the output of that fourth qubit. On the third qubit you have a T gate, which is nothing but a phase of pi by four, that is a 45 degree rotation, followed by an RZ gate. The second qubit is operated on by a simple RZ gate, and the first one by just a simple T gate, again pi by four. These parameterized circuits can be used along with layers: if you look closely, this A block is a layer and this B block is a layer, so we can think of them as layered parameterized circuits. And since there are no classical components, this is a pure quantum variational circuit. The next part is the alternating operator ansatz, or alternating operator quantum circuits. In these architectures there is a layered pattern quite similar to the parameterized circuit we just saw, but the unitary operations are time-evolving Hamiltonians. Now this is a very interesting concept, because these operations can be represented in exponential form, e to the power of i H delta t, meaning that at each time step the qubit state is changing under a Hamiltonian. These are known as time-evolving Hamiltonians, and this becomes very interesting when we get to quantum graph recurrent neural networks. We use graph recurrent neural networks when we want to build knowledge-graph-based solutions, to abstract different data, or to do sentiment classification on knowledge graphs. That is the classical kind of knowledge graph we have, but we can extend it to include quantum variational circuits with evolving Hamiltonians.
So this particular ansatz is quite interesting when we go into the QGRNN, the quantum graph recurrent neural network. Apart from this, there is the tensor network ansatz, which is actually the crux of classical-quantum hybrid circuits. In a tensor network ansatz, we have these parameterized layers attached to each qubit, and what we do is wrap them so they are compatible with an autograd library. That autograd library can be Torch, it can be TensorFlow, or any other autograd library. Why do we need this? Because we want to combine the power of classical learning with pure quantum circuits. So we wrap that quantum layer in a PyTorch layer or a TensorFlow/Keras layer, and we club it together with a classical circuit. The first part can be classical, say pure dense layers for simplicity; then we can have a quantum circuit wrapped in a Torch or TensorFlow model; and then the final layers, which are again classical. So classical-quantum-classical is one kind of hybrid circuit. We can also have classical-quantum, if we just want to extract variables from the raw expectation values in binary tasks: a classical block and a quantum block combined together, or a quantum block followed by a classical block. If we understand this clearly, we can think of it abstractly: the qubits are the states, or rather the wires, coming into the quantum part of the system.
So if we can somehow manipulate these qubits to generate n classes or n labels, we can use them for any kind of classification task. Now, when we combine classical and quantum circuits and the final quantum layer has two qubits, the raw expectation values coming out of it are two. That becomes very well suited for sigmoid-style learning, that is, binary classification, because we only have two readouts: we use a classical circuit, plug in a quantum circuit with two qubits at the end, and collect the raw expectation values. We have created a classical-quantum ansatz usable for binary classification on, say, whatever dataset you want to classify. Moving forward, the next point is the hybrid quantum circuits, or classical-quantum circuits, where you have classical layers and quantum layers combined together. There have to be certain restrictions, though. It is important to note that the junction point between a classical layer and a quantum layer has to be very cleanly maintained, meaning the outputs of that classical layer, which can be anything, an embedding layer, a dense layer, an LSTM, a time-distributed layer, must match the inputs expected by the quantum layer. So if you have n qubits equal to five, meaning the total qubits in the system are five, and you try to feed in a dense output of size ten, there will be a clash, because those ten dense outputs cannot be converted into five qubit states for the operations to act on. The intermediate hidden layers inside the classical part can grow, but the limit at the junction is five in this case.
We cannot exceed that qubit limit. That is an important point to look into: whenever we are at the junction of classical and quantum learning, we can only pass as much data, quantum-embedded or classical, into the quantum circuit as it can hold. If it can hold five qubits, we should not be passing six or seven or anything greater than five values of classical data; that will create an error. Similarly, at the extraction point, the junction between the quantum layers and the following pure classical layers, the outputs of the quantum layer should be consistent with the inputs of the next classical layer. One important aspect is that we can expand on that side: if the qubit system is five and we want the next dense layer to contain, say, ten units, that can be done. But at the entry point, the junction from classical into quantum, the match must be exact. So this example here is taken from PennyLane. In this quantum hybrid network, we build a two-neuron classical layer, as you can see here, and a quantum circuit containing various gates: it can have a Hadamard gate, an S gate, or any kind of U gate. This is wrapped in a Keras layer, and we feed the outputs of this Keras-wrapped quantum layer into a two-neuron classical layer, where we can use sigmoid or softmax to solve a binary classification problem. This is the most basic illustration of what a hybrid quantum circuit is.
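The junction-matching rule above can be made concrete without any quantum library. The following is my own NumPy sketch of the classical-quantum-classical pattern (the `quantum_layer` helper and shapes are illustrative assumptions, not the PennyLane API): a dense front end squeezes the features down to exactly one angle per wire, the "quantum" middle returns per-wire Pauli-Z expectations, and a sigmoid back end produces a binary score.

```python
import numpy as np

rng = np.random.default_rng(0)
N_QUBITS = 2

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2), np.cos(theta / 2)]])

def quantum_layer(angles):
    # Angle-embed one value per wire and return per-wire <Z> values.
    # The input size must match the number of qubits (the junction rule).
    assert len(angles) == N_QUBITS, "junction mismatch: outputs != wires"
    zero = np.array([1.0, 0.0])
    Z = np.diag([1.0, -1.0])
    return np.array([np.conj(ry(a) @ zero) @ Z @ (ry(a) @ zero)
                     for a in angles])

# Classical front end: a dense layer squeezing 4 features down to
# exactly N_QUBITS outputs so the junction dimensions line up.
W = rng.normal(size=(N_QUBITS, 4))
x = rng.normal(size=4)
angles = np.tanh(W @ x) * np.pi

# Classical back end: a sigmoid on the expectation values gives a
# binary-classification score, as in the hybrid Keras example.
score = 1.0 / (1.0 + np.exp(-quantum_layer(angles).sum()))
print(score)
```

Feeding a 10-dimensional dense output into this 2-wire layer would trip the assertion, which is exactly the "clash" the talk warns about.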
So you can plug in different sets of quantum rotations, club them into a quantum layer, wrap it for any library, Torch or TensorFlow, and either keep it separate or club it with the corresponding Torch or TensorFlow layers. That being said, there are some important concepts here. Whenever we try to optimize a cost function with these quantum-classical hybrid circuits, it is interesting to note that in some cases the learning may stagnate. This situation is known as a barren plateau, a very well-known concept in quantum neural networks. In a barren plateau, the change in the gradients, the change in the weights, becomes exponentially small in the size of the qubit system; the quantum rotation angles are also trainable parameters, so backpropagation happens through them too. So the learning gets stagnant, and the quantum circuit is not able to learn anything. This happens because we concern ourselves with creating randomized quantum circuits. Just as I said before, we create randomized blocks: rotation blocks, entangling CNOT blocks, Hamiltonian blocks, and we have to note that these randomized blocks are the root cause of barren plateaus. There are ways to mitigate this, but the core idea should be clear. Hybrid quantum circuits tend to perform well when the loss landscape they operate in avoids barren plateaus, and there are certain techniques to help avoid them, one of the most popular being layer-wise training.
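Since the rotation angles are trainable, gradients really do flow through the quantum part. A standard way to compute them, which I'll sketch here as my own single-qubit illustration (not something shown in the talk), is the parameter-shift rule: the exact gradient comes from two extra circuit evaluations rather than finite differences.

```python
import numpy as np

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2), np.cos(theta / 2)]])

def expval_z(theta):
    # <Z> after an RY(theta) rotation of |0>; analytically cos(theta).
    state = ry(theta) @ np.array([1.0, 0.0])
    return state @ np.diag([1.0, -1.0]) @ state

def parameter_shift_grad(theta):
    # Gradient of the expectation w.r.t. the rotation angle via the
    # parameter-shift rule: two shifted evaluations, no finite differences.
    s = np.pi / 2
    return (expval_z(theta + s) - expval_z(theta - s)) / 2

theta = 0.7
print(parameter_shift_grad(theta))   # matches the analytic -sin(theta)
```

On barren plateaus, it is exactly this gradient that shrinks exponentially in randomized deep circuits, which is why the layer-wise strategies below matter.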
So layer-wise, we increase the depth of the quantum circuit and train the first part while keeping the remaining part frozen, or keep the first part frozen and the remaining part trainable. This layer-wise growth of the quantum circuit's depth helps alleviate the barren plateau problem. And again, there are different methods like Nesterov momentum, natural gradient descent, and the Fisher information approximation; the Fisher approximation is quite popular for avoiding barren plateaus. In the next part, we'll see how to build a quanvolution network. Quanvolution networks are, let's say, a convolution circuit for MNIST digit classification. What this circuit does, instead of having a quantum-classical hybrid structure, is use a pure classical circuit, only convolution and dense layers, but encode the data differently. We will not be using the raw data; we encode the data by applying rotations to the pixels. You can see here a particular pixel of the input image, and we apply rotations to those pixels. You can think of this as a kind of angle embedding, because we are applying rotation gates to the pixels: we angle-embed the data into the angular domain to produce the encoded input image. That image can then be passed into a pure classical circuit for classification. So this is the crux of quanvolutional neural networks, and the corresponding circuit structure has been taken from PennyLane as well. The beauty of this is that whenever we train hybrid circuits like quantum-classical or classical-quantum-classical, they take a lot of time.
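The pixel-rotation encoding described above can be sketched in a few lines. This is my own simplified NumPy illustration of the idea (one qubit per pixel, RY rotation, Z readout), not the PennyLane demo's exact circuit:

```python
import numpy as np

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2), np.cos(theta / 2)]])

def encode_pixel(pixel):
    # Angle-embed one pixel (scaled to [0, 1]) as an RY rotation of |0>
    # and read back <Z>; the image never enters the network raw.
    state = ry(np.pi * pixel) @ np.array([1.0, 0.0])
    return state @ np.diag([1.0, -1.0]) @ state

image = np.array([[0.0, 0.5],
                  [1.0, 0.25]])
encoded = np.vectorize(encode_pixel)(image)
print(encoded)  # quantum-encoded image, then fed to a purely classical CNN
```

The quantum part runs once as a preprocessing pass, so the expensive optimization loop stays entirely classical, which is the speed advantage the talk describes.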
Because, as I said, there is no fixed set of states as with hidden neurons; with these qubits we effectively have an unbounded number of intermediate states, because we can never know what rotations happen and what the intermediate states are in a quantum system. Optimizing that qubit system is the goal of classical-quantum-classical circuits, and it is expensive. So instead of going through that burden, why don't we just encode the data in the quantum domain? We angle-embed the data in the quantum domain and pass it through a pure classical convolution circuit. That becomes a much faster approach with better performance gains. So, on to quantum RL and quantum NLP. Quantum RL is a field of active research; there are drawbacks, the main disadvantage being that we cannot isolate intermediate quantum states, but it has been shown that we can use quantum ansatz or quantum-classical circuits as intermediate parts of pretty much any reinforcement learning algorithm. Say we have an on-policy algorithm, PPO or TRPO, any algorithm which updates the policy at each time step. In that case, as we saw, we have a finite number of policies. But what happens if we try to expand that policy domain to an enormous dimension? That opens up a lot of possibilities, because whenever we optimize a policy in an on-policy reinforcement learning algorithm, it tries to maximize the rewards coming from that policy by gradient ascent. Now, what happens if, with the help of qubits, we expand that domain itself?
Instead of having a finite number n of policies, with k qubits we can have 2 to the power k states to draw on: with two qubits, 2 squared possibilities; with nine qubits, 2 to the power 9, that is 512 policies from which the agent can pick, rather than just n policies. That is the idea behind applying quantum computing to deep reinforcement learning algorithms. Particularly in off-policy techniques like DQN, deep Q-learning, or DDPG, we are concerned with the values, and we maintain a replay buffer of some size to accommodate those values. What happens if we can represent a far larger space of values? The scope of quantum reinforcement learning has so far been explored on on-policy methods and on tabular Q-learning approaches, but this is a field of active research and a lot can still come out of it. There is one algorithm, known as Grover's algorithm, which retrieves the most favorable states coming out of a quantum circuit or a quantum-classical circuit. It is a peculiar algorithm and somewhat difficult to grasp, but it is based entirely on rotations of qubit states: in the field of quantum search, Grover's algorithm performs really well, because it returns the most favorable qubit state pertaining to a particular search. And these circuits, as I said, are based only on changing angles, that is, phase shifts or rotations, and on amplitude amplification. Quantum search in QHCs (quantum hybrid circuits) is used in research spanning NLP search. Apart from this, as I mentioned, we have quantum GNNs, graph neural networks.
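Grover's phase-flip-and-amplify loop fits in a small classical simulation. Here is my own two-qubit NumPy sketch (an editorial illustration, not from the talk): the oracle flips the phase of the "favorable" state and the diffusion operator inverts all amplitudes about their mean, concentrating probability on the target.

```python
import numpy as np

n = 2                      # two qubits, four basis states
N = 2 ** n
marked = 3                 # search target: the state |11>

# Start in the uniform superposition over all basis states.
state = np.full(N, 1 / np.sqrt(N))

# Oracle: flip the phase of the marked state only.
oracle = np.eye(N)
oracle[marked, marked] = -1

# Diffusion operator: inversion of the amplitudes about their mean.
diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)

# For N = 4 a single Grover iteration already suffices.
state = diffusion @ (oracle @ state)

probs = state ** 2
print(probs)  # probability concentrates entirely on the marked state
```

For larger N, roughly sqrt(N) such iterations are needed, which is the quadratic speedup over classical search.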
So quantum graph neural networks are employed because we want to abstract knowledge-graph-based information with the help of quantum circuits. And these kinds of circuits change over time, just like I mentioned, via quantum temporal Hamiltonians. This time-varying Hamiltonian can be used to find similarity. In the case of NLP, for example, it can be used to find semantic similarity between distant nodes in a knowledge graph, or to cluster different sentiments, or the same sentiments in different contexts, right? So it can be used for clustering, for sentiment classification, for broad tasks in NLP. The reason why it is still under research is that quantum GNNs are very hard to stabilize: graph neural networks by themselves already have, let's say, a huge number of nodes and a huge embedding space, and if you combine quantum embeddings or quantum graphs on top of that, the problem becomes a little difficult to tackle. That being said, there are also applications of these kinds of variational circuits in GANs, where we try to predict whether a particular sample is fake or real, as in the case of image classification, right? So QHCs are used almost everywhere, and the reason for using them is the exponential number of possibilities in the intermediate steps rather than a fixed number. And with the advent of quantum computing, most of these neural network models are hopefully going to be ported to quantum or hybrid circuits. These kinds of circuits are especially important for all kinds of learning. We saw NLP, we saw reinforcement learning, we saw computer vision, we saw GANs, right? So in GANs, we can use a quantum variational circuit for the discriminator and a quantum variational circuit for the generator.
And we try to optimize the different loss functions just like in a classical GAN, a simple GAN, right? And we try to determine whether the discriminator is predicting the real or the fake data. These kinds of circuits are actually quite interesting because, for GANs particularly, they open up an entire realm of possibilities. So that is the last part of this talk: democratizing the adoption of hybrid circuits. It is an entirely new paradigm and there has been so much development, but ultimately, at the end of the day, quantum circuits are nothing but tensor or matrix multiplications, okay? And the operations are quite restricted, because we can only apply different kinds of rotations, different phase shifts, and embeddings, right? But using these restricted operations we can create an exponentially large space of possibilities. Due to the high-dimensional intermediate sample spaces, QHCs are time-consuming; however, they provide better loss statistics than traditional ones. And there has been significant growth in open-source quantum ML, with PennyLane, TensorFlow Quantum, and IBM Qiskit being the most popular repositories available, and a lot of development going on in these open-source projects, which is quite good for the quantum neural network community. And I also think, from a personal perspective, that quantum research, particularly applied to reinforcement learning, to graphs, to, let's say, GANs, can open up a wide range of possibilities. So that is the crux of what I was going to talk about today. And these are some of the links.
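The claim that quantum circuits are "nothing but matrix multiplications" built from rotations and phase shifts can be made concrete with a tiny sketch (my own illustration, simulating one qubit in NumPy): a circuit is just the matrix product of its gate matrices applied to an initial state.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate as a 2x2 real matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def phase_shift(phi):
    """Single-qubit phase-shift gate: adds phase e^{i*phi} to |1>."""
    return np.array([[1, 0], [0, np.exp(1j * phi)]], dtype=complex)

# A "circuit" is simply the product of its gate matrices applied to |0>.
ket0 = np.array([1.0, 0.0], dtype=complex)
circuit = phase_shift(np.pi / 3) @ ry(np.pi / 4)
state = circuit @ ket0

# Every gate is unitary, so the state stays normalized throughout.
print(round(float(np.sum(np.abs(state) ** 2)), 6))  # -> 1.0
```

This is also why the operation set is "restricted" in the sense the talk describes: every gate must be unitary, so the circuit can only rotate and rephase amplitudes, never rescale them.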
So this GitHub repository contains all the implementations, some of which have been adapted from PennyLane. There are also implementations based on Cirq, TensorFlow Quantum, and Qiskit, and these are the corresponding links, if you want to do a deep dive into the realm of quantum learning and quantum circuits, variational circuits, why they are used, and their different applications. That being said, I'm open to any questions that may be there, if we have time and time permits. Okay, thank you so much for that talk, Abhilash. We do in fact have a couple of questions. I'm just going to start displaying them here and you can take them one by one. Okay, okay, just one second, yeah. Could the qubits be used to represent the neurons within a layer, or does one whole circuit represent one neuron? Yeah, very good question. So what are qubits? Qubits have finite measurement outcomes, that is, zeros and ones, and superpositions of them; a state can be zero plus one, like that. Now, the qubits can be arranged in a particular layer, as we saw. So the question is whether we can use qubits as intermediate layers inside a classical circuit. Yes, that can be done as well, because whenever we wrap a particular qubit layer or quantum circuit in a library, let's say Torch or TensorFlow, what we are doing is taking it and plugging it into a classical network as one of its layers. That means we are just gift-wrapping it, and that gift wrapper can be Torch or TensorFlow, and we just plug it in. So instead of having, let's say, a fixed stack of dense layers, say 64 units three times, what happens if we have 64, then a qubit layer of nine, and then 64 again? That nine-qubit layer will have two to the power nine combinations, if I'm not wrong, in the intermediate hidden states.
So think about it: instead of 64 intermediate neurons, you have two to the power nine, that is 512, intermediate basis states which you are trying to optimize. That opens up a space of possibilities, right? Interesting. We have two more questions, and I think we have two minutes, so let's take them. Yeah. During the exit junction from a quantum circuit into a classical one, wouldn't the observation result be a probability resulting from the calculation, and not a pure zero-or-one answer? Yeah, that can also be done. Whenever we measure with a Pauli operator, we get the raw expectation values, and these expectation values can be interpreted either as binary zeros and ones or in terms of the phi and theta angles. Just like I mentioned, there are two angles, phi and theta. So generally we will see that whenever we take Pauli or raw expectation values, we get the floating-point values associated with those input qubits, right? And you can do lots of things with them. Let's say you want to add up the Pauli expectation values for all the qubits and then normalize them; you can do that as well. Or you can extract the Pauli expectation for the first, the second, the third qubit separately and use them separately as well. So lots of possibilities. Not only probabilities, but also pure ones and zeros can be the answer. Okay, we just have a minute now, so I guess the remaining questions can be taken up in the breakout room. Everyone please head over there and have lunch; Abhilash would be happy to answer your questions. Thank you. Thank you.
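The measurement interface described in that answer can be sketched as follows (a hypothetical helper of my own, again simulating the statevector in NumPy): per-qubit Pauli-Z expectations come out as floats, which you can either pass onward separately or pool and normalize, exactly the two options mentioned above.

```python
import numpy as np

def z_expectations(state):
    """Per-qubit Pauli-Z expectation values of an n-qubit statevector.
    <Z_k> sums |amplitude|^2 over basis states, signed +1 when bit k
    of the basis index is 0 and -1 when it is 1."""
    state = np.asarray(state, dtype=complex)
    n = int(np.log2(state.size))
    probs = np.abs(state) ** 2
    expectations = []
    for k in range(n):
        signs = np.array([1 if ((i >> (n - 1 - k)) & 1) == 0 else -1
                          for i in range(state.size)])
        expectations.append(float(probs @ signs))
    return expectations

# Bell state (|00> + |11>)/sqrt(2): outcomes are perfectly correlated,
# yet each single-qubit Z expectation is 0 (each qubit alone is random).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
exps = z_expectations(bell)

# Option 1: use the raw floats per qubit; option 2: pool and normalize.
pooled = sum(exps) / len(exps)
print(exps, pooled)
```

The Bell-state example also shows why the raw expectations are floats rather than pure zeros and ones: a single measurement collapses to 0 or 1, but the expectation averages over that randomness.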