Hello. Can everyone hear me? Yeah, hi. I'm going to talk about QNLP using diagrammatic software. This demo will be in roughly two parts. In the first half, you'll see how we build and manipulate monoidal diagrams, i.e. string diagrams, in DisCoPy, which is a library we've developed. We'll talk about things at a higher level and how we convert them to quantum circuits. Later on, we'll talk more specifically about QNLP, and that will use lambeq, which is another software package we've developed. You have the notebooks in the channels, so if you want to follow along and run the cells, you can. I've run them in advance. Cool. So in DisCoPy, you build diagrams using boxes and wires, as we've talked about today, and the wires have types. You first define the types of the wires by importing Ty from DisCoPy. Once you have that, you can have identity wires, and you can tensor two wires together, food and water, to get a diagram with two wires on it. This is how you compose two diagrams in parallel, using the @ operation, which corresponds to the tensor product. You can do more with it and draw more arbitrary string diagrams. When you have a swap, you have two systems, one that contains water and one that contains energy, and you swap them; now you're working in a symmetric monoidal category, which is a monoidal category equipped with this swap operation. To define a box, you give it an input type and an output type. In this case, we have a human box, which is a process that takes in food and water and converts them to energy. The way you construct this is you import the class Box from DisCoPy, give the box a name, and then give it an input and output type. It's very straightforward, and you can draw it like this.
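The idea of typed wires and boxes can be sketched in a few lines of plain Python. This is an illustrative toy, not DisCoPy itself; DisCoPy's real `Ty` and `Box` do much more, but the shape is the same:

```python
# Toy sketch of typed wires and boxes -- NOT DisCoPy itself, just the idea.

class Ty:
    """A type is a list of basic wire labels; @ is the tensor product."""
    def __init__(self, *labels):
        self.labels = list(labels)

    def __matmul__(self, other):              # food @ water
        return Ty(*(self.labels + other.labels))

    def __eq__(self, other):
        return self.labels == other.labels

    def __repr__(self):
        return " @ ".join(self.labels) or "Ty()"

class Box:
    """A named process with an input type (dom) and an output type (cod)."""
    def __init__(self, name, dom, cod):
        self.name, self.dom, self.cod = name, dom, cod

food, water, energy = Ty("food"), Ty("water"), Ty("energy")
human = Box("human", food @ water, energy)    # food (x) water -> energy
```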
After this, you can compose things together. So let's say pasta is a type of food and wine is a very impure form of water, and you can tensor them together again. This is also an example of the empty type: these are boxes that take in no wires at the top, and this is how you would define them in DisCoPy. So here you go: you have pasta and wine, and you can put pasta and wine into human. When you compose this diagram, pasta @ wine followed by human, you get the following diagram, and it all type checks. If it doesn't type check, you'll get an error saying the types don't compose. So using this kind of syntax you can define arbitrary string diagrams for anything. It's a bit tedious, but you can build arbitrarily large diagrams by hand this way. You can also flip diagrams. We talked about how we work in a dagger category: if you have a human, you can take the dagger of the human and flip it upside down. Now you see these two boxes can be joined together, and in the quantum context that would be like U followed by U-dagger. But right now this is a bit abstract; we're talking about humans and food and water. So here's another concept we've talked about: functors. Functors are a recursive way of mapping one diagram to another diagram, perhaps in a different category. It's a functor because it satisfies a property called functoriality, which means it's a structure-preserving mapping. All that really means is you open up each box within the diagram and replace it with a sub-diagram that respects the types. The way you do this is you first give a mapping on the objects. In this example, I'm going to give a doubling mapping: given any type, I map it to two copies of itself. So food becomes food and food, and water becomes water and water.
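The type-checked composition and the dagger just described can be mimicked like this. Again a plain-Python stand-in, not the DisCoPy API; here a box is just a name with input and output wire tuples:

```python
# Toy type-checked composition and dagger -- a stand-in, not DisCoPy.

def box(name, dom, cod):
    """A box is a name plus input (dom) and output (cod) wire tuples."""
    return {"name": name, "dom": dom, "cod": cod}

def then(f, g):
    """Sequential composition f >> g: only allowed when the types match."""
    if f["cod"] != g["dom"]:
        raise TypeError(f"{f['cod']} does not compose with {g['dom']}")
    return box(f"{f['name']} >> {g['name']}", f["dom"], g["cod"])

def tensor(f, g):
    """Parallel composition f @ g: put two diagrams side by side."""
    return box(f"{f['name']} @ {g['name']}",
               f["dom"] + g["dom"], f["cod"] + g["cod"])

def dagger(f):
    """Flip a box upside down: inputs become outputs and vice versa."""
    return box(f["name"] + "†", f["cod"], f["dom"])

pasta = box("pasta", (), ("food",))      # a state: no input wires
wine = box("wine", (), ("water",))
human = box("human", ("food", "water"), ("energy",))

dinner = then(tensor(pasta, wine), human)   # pasta @ wine >> human
```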
And energy becomes energy and energy. Then I give a mapping for the boxes, mapping them into sub-diagrams: the pasta state becomes two pastas, the wine state becomes two wines. And for human, you want everything to compose, right? So you need to add some swaps at the beginning to make things line up. Once you have this, you can convert any diagram consisting of just pasta, wine and human, and you can keep applying this functor. So once I define this functor, I can turn this small diagram into a larger one: now there's two people having dinner. This is a way you can build diagrammatic software recursively and modularly. So that's one example of a functor: you go from abstract diagram to abstract diagram. But now perhaps you want to embed some meaning into the diagram. So let's say I've defined this mapping: I'm going to turn each type into two qubits. I want to model this as a two-qubit system: every food is a two-qubit system, every water is a two-qubit system. Then I fill in each box with some quantum circuit, which I've defined here. It's not super important how I defined it, but you can build more logic on top of this software; it's like NumPy, but for diagrams. You can do any kind of diagrammatic thing you want with it. So I've defined another functor, and now the smaller diagram, pasta, wine, human, becomes this quantum circuit. Roughly speaking, pasta I'm modelling with this quantum state, wine with this quantum state, and human with this process that takes four qubits in, outputs two qubits, and post-selects on the other two. You could put other things in, but this is just a simple example. Now that I've done this, I can combine these two functors together: take the original diagram, double it, and then apply the quantum functor. And now I have a larger quantum circuit.
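The doubling functor can be pictured in miniature. Here a diagram is just an ordered list of box names, and the functor is given by a map on types plus a map from each box to a sub-diagram; this is a toy of the idea, not DisCoPy's `Functor` class:

```python
# Toy functor: a structure-preserving mapping given by (1) a map on types
# and (2) a map from each box to a sub-diagram. Illustrative only.

ob_map = {"food": ["food", "food"],      # the doubling map on objects
          "water": ["water", "water"],
          "energy": ["energy", "energy"]}

ar_map = {                               # each box becomes a sub-diagram
    "pasta": ["pasta", "pasta"],
    "wine": ["wine", "wine"],
    "human": ["swap", "human", "human"],  # swaps make the doubled wires line up
}

def apply_functor(diagram):
    """Replace every box in the diagram with its image, in order."""
    out = []
    for b in diagram:
        out.extend(ar_map[b])
    return out

small = ["pasta", "wine", "human"]
doubled = apply_functor(small)
```

Functoriality shows up as: mapping two halves of a diagram separately and then composing gives the same result as mapping the whole diagram at once.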
As long as I know what to do with the smaller boxes, I can convert any larger diagram that is a free composition of those boxes. So now you get an even more complex circuit, and it's very little code to do this. You don't want to build these circuits by hand every time; that's the point. You can think in this high-level diagrammatic schema. And once you have these circuits, you can convert them further. Here's another functor: DisCoPy comes with a functor that converts quantum circuits to ZX diagrams. Specifically, we map to PyZX, which is a library for ZX diagrams. It's pretty cool: you can drag things around and apply rewrites. That's actually one of the challenges we've put to the ZX team: you can try to extend this piece of software to do more interesting rewrites interactively. Here's another thing you can do: DisCoPy supports converting quantum circuits to tket format, which supports all sorts of quantum circuit backends. So from DisCoPy, you can evaluate circuits on machines from different vendors, which I think is quite cool. So from having dinner, we suddenly have this kind of quantum circuit, and here's its tket representation. Nice. Or you might want to simulate it directly on your machine. DisCoPy integrates well with tensor network libraries and leverages them to its advantage, and on top of that we've built our own tensor-network simulator, which is very fast. This isn't your average state vector simulator, which struggles at 20 qubits. If you had some large circuit with, say, 50 qubits, but the entanglement between the subsystems is relatively low and it has some tree-like structure, you can go up to 50 or 100 qubits. As long as you only care about a couple of qubits at the end, it knows how to contract the network in a clever order. So it's really useful for our experiments.
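Why contraction order matters can be seen with a toy cost count. Multiplying a (p×q) matrix by a (q×r) matrix naively costs p·q·r scalar multiplications, so for a chain with low-dimensional boundary wires, contracting from the ends is dramatically cheaper than contracting the bulk first. The dimensions below are made up for illustration:

```python
def mult_cost(p, q, r):
    """Naive cost of multiplying a (p x q) matrix by a (q x r) matrix."""
    return p * q * r

# Chain of shapes (1x100)(100x100)(100x100)(100x1): big bulk, tiny boundary.
# Contract left to right, dragging the 1-dimensional boundary along:
cheap = mult_cost(1, 100, 100) + mult_cost(1, 100, 100) + mult_cost(1, 100, 1)
# Contract the two bulk matrices first, then attach the boundaries:
costly = (mult_cost(100, 100, 100) + mult_cost(100, 100, 1)
          + mult_cost(1, 100, 1))
```

Same answer either way, but the second order does roughly fifty times the work; a clever contraction planner picks orders like the first.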
As you'll see, we use grammar, which is tree-like, and there's only local entanglement, so we can actually simulate much larger systems than if you used Qiskit or something like that. This is the output tensor on, I think, four qubits; I can't remember exactly how many qubits there were. But yeah, as a quick recap: say we have some process that we model using this diagrammatic notation, and we embed meaning using functors by converting this high-level schema into quantum circuits. So if you believe this is how your problem works, that this is how the process can be modeled, and you believe that each box can be modeled with a quantum state, or some unitary, or a CPTP map, then this is how you could build software in general: you have some problem, you build the diagrammatic schema, and you put quantum things into it. We're going to use this idea to do QNLP; it's a way to develop high-level diagrammatic schemas. So now we move to lambeq. lambeq is a QNLP toolkit written in Python. Both of these packages are open source; you can use them and contribute if you want. There's a community, there's a Discord, you can go and ask questions if you want. So it helps you develop models for QNLP. Here's one model, for example. Here's a sentence: fat cats eat rats. You might say: I want to combine these words in such a way that I don't care about the ordering of the words. So you combine them using a spider. As we know, the spider is commutative and you can fuse and unfuse it, so this genuinely doesn't capture the ordering of the words; it only cares about what's in the sentence. You build the schema, you convert to a quantum circuit, and this ends up corresponding to a very old natural language model called bag-of-words. You're essentially multiplying the word embeddings for each word together, and it's surprisingly effective, so it's a good baseline. Here's another example.
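The spider model can be sketched directly: in a fixed basis, fusing wires with a spider multiplies the word vectors entrywise, so the result is the same for any word order. The three-dimensional embeddings below are made-up numbers, purely for illustration:

```python
# Bag-of-words via a spider: entrywise product of word embeddings.
# Because the spider is commutative, word order doesn't matter.

def spider_merge(*vectors):
    """Entrywise product of word-embedding vectors."""
    out = [1.0] * len(vectors[0])
    for v in vectors:
        out = [a * b for a, b in zip(out, v)]
    return out

emb = {"fat": [1.0, 2.0, 0.5], "cats": [0.5, 1.0, 2.0],
       "eat": [2.0, 1.0, 1.0], "rats": [1.0, 1.0, 1.0]}

s1 = spider_merge(emb["fat"], emb["cats"], emb["eat"], emb["rats"])
s2 = spider_merge(emb["rats"], emb["eat"], emb["cats"], emb["fat"])
```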
Or you might want to care about the ordering of the words, reading in the tokens from left to right. This resembles the architecture of a recurrent neural network: you have a starting token, you repeatedly insert words into it, and the wires in the middle carry the hidden state of the recurrent network. As you can see, you don't necessarily have to put quantum circuits in. If you don't have cups and caps, you just live in an ordinary monoidal category rather than a compact closed one, so you could put all sorts of interesting things in for your semantics. If you do put a quantum circuit in, you get a quantum RNN, which is nice. But we're interested in other things: grammatical models. We think we want to model these problems based on structure, and that the structure perhaps comes from the grammatical structure of the sentence. So if you want to find the meaning of a sentence, you might want to combine the words in the way the grammar tells you to. Here's an example with another sentence: cats eat rats. Cats is a noun, rats is a noun: subject and object. In the middle you have what is known as a transitive verb, which takes a noun on the left and a noun on the right and gives you a sentence. So far I haven't combined them yet, but as you can see, it's kind of obvious how to combine them; we've done this a lot today already. We can do it with just a few lines of code, maybe even one. And here you get a DisCoCat diagram. The output here is the sentence type, so you know it's a grammatically well-formed sentence, and we can actually convert it to a quantum circuit and try to see what it means. Here's another example: fat cats eat rats. Here fat is an adjective, which takes a noun and gives you another noun.
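The grammar check the diagram performs can be sketched as a stack-based reduction of pregroup types: adjacent pairs n·nʳ and nˡ·n cancel (these are the cups), and the sentence is well-formed when what remains is the sentence type s. A toy version, with types written as plain strings:

```python
# Toy pregroup reduction: n·nʳ and nˡ·n cancel, like cups in the diagram.

def reduces_to_s(types):
    """Return True if the flat list of basic types reduces to ['s']."""
    stack = []
    for t in types:
        if stack and (stack[-1], t) in {("n", "n.r"), ("n.l", "n")}:
            stack.pop()              # adjacent adjoint pair cancels (a cup)
        else:
            stack.append(t)
    return stack == ["s"]

# "cats eat rats": n · (nʳ s nˡ) · n
good = reduces_to_s(["n", "n.r", "s", "n.l", "n"])
# "fat cats eat rats": (n nˡ) · n · (nʳ s nˡ) · n
also_good = reduces_to_s(["n", "n.l", "n", "n.r", "s", "n.l", "n"])
bad = reduces_to_s(["n", "n"])       # "cats rats" is not a sentence
```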
That's why it has this noun · noun-left-adjoint type. As you can see, the types of the other words don't change, and that's a property of the pregroup grammar Bob talked about earlier today. It's a lexical grammar: you give the type to the word, and in many cases you can combine the words in the same way, using the same types. If you've studied formal grammars in linguistics, you might have come across constituency grammars, with your N's, your NPs, your VPs, your S. This is similar, but these are categorial grammars, which I'll talk a little more about later. So, another well-formed sentence. That's fine. Of course, it would be completely impractical to manually write out the parse of every sentence, but lambeq comes with a state-of-the-art CCG parser known as Bobcat, which automatically parses sentences and converts them into these diagrams. It's neural; we trained it, and it's a state-of-the-art parser for categorial grammar, which is quite nice. So here it is in action; well, I ran it 20 minutes ago. If you input such a sentence, it gives you the parse tree. This is what is known as CCG, combinatory categorial grammar, just another type of categorial grammar, quite similar in spirit to the pregroup grammars Bob mentioned. So here's the parse tree, and here's what the parse tree looks like in DisCoPy. You can functorially map this again, with another structure-preserving mapping, into the DisCoCat diagram you can see here. So again, same thing, but now it looks like a pregroup grammar. And now we're ready to do DisCoCat experiments on it. So I have this corpus of text. It contains a lot of sentences, and you can read them in and parse them all at once, batched. This line of code reads in a list of sentences and converts them into a list of diagrams.
As you can see, some of these sentences are actually quite long and complex, and you can have a look to see that the grammar kind of makes sense: bears are nouns, Buster Bear comes out as a noun, which makes sense, and there are conjunction types, determiners, and so on. That's the sort of thing you're interested in squinting at. This swap is an artifact of the CCG grammar, a rule known as crossed composition; I wouldn't worry too much about it, it's outside the scope of this talk. But we can now map this to quantum circuits. How do we do this? We use parameterized quantum circuits. lambeq comes with a bunch of quantum ansätze, and here we use the IQP ansatz, which consists of a layer of Hadamard gates, followed by a layer of diagonal gates, followed by more Hadamards. If you replace each box with this kind of sub-circuit, you get an overall circuit that describes your sentence. Here's what it looks like: the original diagram was fat cats eat rats, and you end up getting this. Does this make sense? Is this cool? You can also send this to tensor networks. lambeq is primarily a QNLP package, but we can also do classical tensor networks, and that's another interesting way to train these models. So we can convert these diagrams into tensor networks, where each wire carries the dimension of a vector space: imagine this is a vector, this is basically a matrix, this is an order-three tensor, and connecting the wires corresponds to tensor network contraction. You can do that and call eval on it. Now we come to the rewriting section. Sometimes you want to run things on a quantum computer, but your quantum computer is small, so you can't really fit your circuit on it, or it takes too long to run, and so on. What could you do? You could apply rewrites, right? So I've talked about functors in DisCoPy and lambeq so far; you've seen a lot of functors already.
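To see the shape of the IQP ansatz on the smallest possible example, here is a single-qubit toy: a Hadamard, a diagonal phase gate, another Hadamard, simulated on a bare two-amplitude state vector. lambeq's actual ansätze build multi-qubit circuits; this only illustrates the Hadamard / diagonal / Hadamard pattern and the fact that the diagonal angles are the trainable parameters:

```python
import cmath

# Toy single-qubit "IQP-style" circuit: H, diagonal phase, H.

def h(state):
    """Hadamard on a 2-amplitude state vector."""
    a, b = state
    r = 2 ** -0.5
    return [r * (a + b), r * (a - b)]

def phase(state, theta):
    """Diagonal gate diag(1, e^{i*theta}) -- the 'diagonal layer' of IQP."""
    a, b = state
    return [a, cmath.exp(1j * theta) * b]

def iqp_1q(theta):
    state = [1.0, 0.0]            # start in |0>
    state = h(state)
    state = phase(state, theta)
    return h(state)

# theta is what training would adjust; the probability of measuring |0>
# works out to cos^2(theta / 2).
prob0 = abs(iqp_1q(1.0)[0]) ** 2
```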
You can use functors to reduce the size of your diagram. How do you do this? In this example you have cats eat rats on mats, and you see the word on has five wires, so it's an order-five tensor. Say we assign sentence and noun each a vector space of dimension 100; that's reasonable, right? You want 100 numbers to describe a word's position in the space. But then you need 100 to the 5 numbers to describe this tensor, which is way too large: we can't even write it down, let alone contract it efficiently. So we want to reduce the size of this tensor. lambeq comes with some rewrite rules you can apply: because of the nʳ and n types here, this part of on is basically just carrying the meaning of the noun through, in and out, so you can model it with just a cap. You replace the word on, which had five wires, with a sub-diagram of three wires plus this cap, and when you put it together, suddenly you have smaller tensors. You can then remove the cups and caps using the normal_form method, which applies the snake equation: you see here there's a really long cup and a small cap that form a snake, so you can remove them, and now you're using fewer qubits.
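The size reduction is worth spelling out with the numbers from the example, with dimension 100 on every wire:

```python
# Tensor sizes before and after replacing the five-wire word "on"
# with a three-wire box plus a cap.

d = 100            # dimension assigned to each wire (noun/sentence space)
before = d ** 5    # order-5 tensor for the original word "on"
after = d ** 3     # order-3 tensor after the cap rewrite
saving = before // after
```

So that single rewrite shrinks the word's tensor by a factor of d², ten thousand here, which is the difference between intractable and trainable.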
Yeah, that's good. lambeq also acts as a standalone tool for formal grammars; you can use it to parse things. If you just run lambeq as a command on your terminal and give it a sentence, it prints a pregroup diagram, with some very nice ASCII/Unicode pretty-printing; it's always fun to write that sort of stuff. So now that we have a way to turn sentences into parse trees, and parse trees into parameterized quantum circuits, we can train it as Constantine has described: it's essentially a typical supervised learning training loop, and lambeq has support for that too. What's this doing here? Ah yes, you can do further rewriting: you can bend this 'establish' wire around, doing the categorical transpose, which is what Constantine was showing you earlier. And the functor is really cool: I've given the mapping of the nouns as states, but you see here 'establish' dagger, which is more like an effect, the dagger of the original state. I don't need to give a new mapping for it, because if I know the mapping of establish, then the dagger of establish maps to the dagger of the state it maps to. All of these algebraic properties are encoded inside DisCoPy, which is really nice. So you get this really narrow quantum circuit, which would have been, I don't know, a dozen qubits, and now it's only four. Okay, now back to training the model. First you build a model by passing it all the diagrams you're about to train on, the train circuits and the validation circuits, so it knows which symbols appear in the circuits. As you can see, each word gets filled in with gates whose rotation values are parameterized, so this NumpyModel is just collecting which symbols are going to appear in your model.
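The model-building step can be pictured like this: scan every circuit for named parameters, so the trainer knows which symbols it will optimize. The gate tuples and symbol names below are purely hypothetical stand-ins for the library's internal representation:

```python
# Toy "collect the symbols" pass over parameterized circuits.
# A circuit here is a list of (gate, parameter-name-or-None) tuples;
# names like "cats__n_0" are invented for illustration.

train_circuits = [
    [("H", None), ("Rz", "cats__n_0"), ("Rx", "eat__s_0")],
    [("Rz", "cats__n_0"), ("Rx", "rats__n_0")],
]

def collect_symbols(circuits):
    """Gather every distinct named parameter appearing in the circuits."""
    symbols = set()
    for circuit in circuits:
        for gate, param in circuit:
            if param is not None:
                symbols.add(param)
    return sorted(symbols)
```

Note that shared words (here "cats") contribute the same symbol to every circuit they appear in, which is exactly what lets training share parameters across sentences.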
Then you optimize them using gradient descent. You define your loss function; here we choose binary cross-entropy. I think someone was asking about loss functions earlier: this is one of them. Then we have an accuracy function, which just checks how many of the predicted labels match the actual ground truth in our dataset. Once you have the loss function, the accuracy function, and your model filled up with the train and validation circuits, you give hyperparameters to the trainer and you can start training. It takes a little while to run, so I ran it ahead of time, and as you can see, roughly speaking, loss goes down and accuracy goes up, so it works. If you train it more, you get better accuracy, and some of you will be doing that this weekend with the QNLP projects. So, in conclusion: lambeq is cool, DisCoPy is cool, you can pip install them, they're open source, and you're welcome to contribute. If you have any questions, we have a Discord where you can talk to us, and I'll take any questions now about lambeq, DisCoPy, compositionality, category theory, or QNLP in general. So yeah, thanks to everyone who's worked on these projects, and also thanks to Ian, who made half of this notebook. Thank you, and thanks for listening.
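The training loop described above has the usual supervised shape. Below is a self-contained toy with the same ingredients, binary cross-entropy loss, an accuracy metric, and plain gradient descent, but with a tiny logistic model standing in for the parameterized quantum circuit (the data is invented):

```python
import math

# Toy supervised loop: BCE loss + accuracy + gradient descent.
# The logistic "model" is a stand-in for a parameterized circuit.

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def bce(y_true, y_pred):
    """Binary cross-entropy, the loss used in the talk's experiment."""
    eps = 1e-12
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(y_true, y_pred)) / len(y_true)

def accuracy(y_true, y_pred):
    """Fraction of thresholded predictions matching the ground truth."""
    return sum(int(p > 0.5) == t for t, p in zip(y_true, y_pred)) / len(y_true)

xs = [-2.0, -1.0, 1.0, 2.0]      # toy inputs
ys = [0, 0, 1, 1]                # toy labels

w, b, lr = 0.0, 0.0, 0.5
for _ in range(200):             # gradient descent on the BCE loss
    preds = [predict(w, b, x) for x in xs]
    gw = sum((p - t) * x for p, t, x in zip(preds, ys, xs)) / len(xs)
    gb = sum(p - t for p, t in zip(preds, ys)) / len(xs)
    w, b = w - lr * gw, b - lr * gb

final_preds = [predict(w, b, x) for x in xs]
```

Just as in the demo: loss goes down, accuracy goes up, and training longer (more iterations) tightens the fit.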