Please sit down. I'm not going to make an introduction. If you need the microphone, please raise your hand, find me, and I'll bring it to you. A reminder for everyone: please use the microphone if you have anything to say, for the people online. Have a good afternoon. Hello again.

So, this is not going to be a tutorial; it's more of an overview talk of the things going on in my team in Oxford, with a bit more background on how it all fits together.

About myself: for a long time I was at the University of Oxford, as a professor in computer science. We built a very big group there, and quite a few of those people are here. Konstantinos should be here — he's not yet, always late. A lot of them basically moved with me from Oxford University to what were then the Cambridge Quantum offices in Oxford. I'm chief scientist at Quantinuum; we have a team in Oxford, and I'm going to talk today about what we do there.

So in 2021, Cambridge Quantum was no more and became Quantinuum. To be precise, if nobody has explained this: Honeywell, the big American company, had a bunch of people in a basement building quantum computers, and hardly anybody even knew it was happening — it's a very funny story. They actually got very close to a working quantum computer, and then these people realized that building quantum computers didn't really fit inside a traditional company, so they split off and fused with Cambridge Quantum to become Quantinuum. That's what happened there. Honeywell, and even IBM, are among the main investors in the company; that's mainly where the money comes from. Anyway, that's the situation.

And I've got a team in Oxford. Richie should be somewhere — there's Richie. Who else is here? Thomas — where's Thomas? There's Thomas. And one more — I've heard he's here; he's not technically in Oxford, but I count him as Oxford because he does so much for us.

Right, and this is our logo. You see this C, and the C stands for compositional: composing things, plugging things together — as you already saw this morning, a lot of what we do is about plugging things together and composing things. And this — sorry, wrong — this is an I, and it stands for intelligence. We're interested in compositional intelligence: trying to understand intelligence, be it artificial or not, in a compositional manner. It also looks a little bit like a Q, which fits, because it's very quantum: quantum-inspired compositional intelligence. It was designed by Konstantinos, who's always late — he's supposed to give the next talk and he's still not here. And then there was this really unfortunate accident that it actually looks like a skull. We didn't have time to design a new one, so we stuck with it.

All right. And this is our basement — we've got an office somewhere in Oxford, on Bowman Street, and this is our basement.
So we play some music there. Okay — the language of quantum. I'm going to go very far back in time. This is John von Neumann, originally Hungarian, so not too far away from here. He is known for inventing game theory — so to some extent he's the father of mathematical economics and things like that — and also for the computer architecture we use now, the von Neumann architecture. Von Neumann did a lot of other stuff. But one thing he did was come up with the quantum mechanical formalism, the Hilbert space formalism, which we still use today — soon not anymore, of course, like I explained this morning, but for now people still use Hilbert space, and it all goes back to von Neumann. He published his book, the Mathematische Grundlagen der Quantenmechanik, in 1932. Publishing the book sort of says: okay, I've done everything, the formalism is ready.

And then this is from 1935: "I would like to make a confession which may seem immoral: I do not believe absolutely in Hilbert space no more." So three years after basically giving birth to the Hilbert space formalism, he denounced it. A father denouncing his child — that doesn't happen much; not many people have the bravery for that.

Now I should say something more. Von Neumann then went on, in 1936 with Birkhoff, to come up with an alternative to Hilbert space, which he called quantum logic. I'm not going to go into that too much; I can tell you that you don't find it in any physics textbook, so it's kind of a failure — you find it nowhere. You do find it in some psychology papers, but definitely not in physics. The important thing to know about von Neumann's quantum logic is that von Neumann felt that the most important thing about quantum mechanics was quantum measurement — that if we understood and conceptualized quantum measurement, maybe abstractly, then we would build a better formalism than Hilbert space. That was von Neumann's opinion. So the focus was very much on quantum measurement as the key ingredient of quantum mechanics. Remember that.

This is Schrödinger — Austrian, not too far away either. He's known for a cat and for an equation. But another thing Schrödinger did, in 1935 again, was say the following in a paper. He's talking about the composition of systems using the tensor product — you describe composite quantum systems using the tensor product — and he said: "I would not call that one but rather the characteristic trait of quantum mechanics." So he thought very differently from von Neumann. For him it was not measurement that mattered most; what is important is what happens when you bring two systems together in quantum mechanics. That's the characteristic trait of quantum mechanics. That's what Schrödinger said.
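To pin down Schrödinger's point, here is the standard textbook statement — an editorial addition, not a slide from the talk. Composite systems live on the tensor product, and some joint states cannot be split into states of the parts:

```latex
\mathcal{H}_{AB} = \mathcal{H}_A \otimes \mathcal{H}_B,
\qquad
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
\;\neq\; |\phi\rangle_A \otimes |\chi\rangle_B
\quad\text{for any } |\phi\rangle_A,\, |\chi\rangle_B .
```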
Most people historically followed von Neumann's line, because in his book von Neumann also did the first hidden-variable no-go theorem, so almost all of quantum foundations focused on measurement and measurement only. People were kind of ignoring Schrödinger for a long time — until Samson Abramsky and myself put out the paper "A categorical semantics of quantum protocols". This was really trying to build up quantum mechanics with composition as the only connective: the only symbol you use is composition of systems — no sums, none of the other linear algebra stuff — and you see how far you can get, which operations you need to do something. For example, in that paper we basically just introduced these cups and caps you saw this morning, and we started to derive things like teleportation. But it was all formulated in the language of category theory — no pictures, all category theory — and category theory has a kind of high entrance fee for a lot of people.

So then: Roger Penrose, who lives in Oxford and is at the Maths Institute — he also got a Nobel Prize for things like black holes, and you may know him from those pictures that actually gave Escher a lot of ideas. Another thing Penrose did, around 1960 or so, when he was an undergraduate having to learn relativity: he really didn't like the tensor notation. He said it's horrible, totally non-intuitive, and he started to substitute it with pictures. You see here an example of the identity wire like we saw, and mainly he was drawing these pictures. These things are not at all the same as our spiders — they just happen to look the same in notation; they're not spiders, they're just blobs, like our boxes, no more meaning to them than that. But anyway, he started drawing these pictures and realized you could do all the tensor notation with pictures too.

Then I wrote the paper with the silly title "Kindergarten Quantum Mechanics", and started, for the first time, to actually do calculations with these diagrams, like the teleportation you saw this morning. I'm not going to go through that, but the idea is of course that you just do topological deformations of wires. And this is something I didn't say this morning: if you take a box and you wire the output around into the input and the input around into the output, you're actually taking the transpose of a matrix. That's really what you're doing. So this is the transpose, represented diagrammatically — and you can also represent unitarity, conjugates, and adjoints, all these things, just with wires and boxes. That sort of stuff was in there.

Then, about a dozen years later, there was the Dodo book, and five years after that, the book you now all have. So the story succeeded: we had a quantum formalism entirely in pictures. Cool. That is not what this talk is going to be about, but it's going to use it.
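Here is a minimal numerical check of that transpose claim — my own sketch in numpy, not code from the talk. The cup is the state ∑ᵢ|ii⟩, the cap the effect ∑ᵢ⟨ii|, and bending a box's wires through them yields its transpose:

```python
import numpy as np

d = 2
f = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # an arbitrary box (linear map on C^d)

cup = np.eye(d)              # the "cup": state  sum_i |ii>
cap = np.eye(d)              # the "cap": effect sum_i <ii|

# Bend the wires: (cap (x) id) o (id (x) f (x) id) o (id (x) cup).
# Contracting cap, f and cup in a single einsum:
bent = np.einsum('am,mb,bc->ca', cap, f, cup)

assert np.allclose(bent, f.T)   # bending wires = taking the transpose
print(bent)
```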
Okay, I'll skip that. What we're also going to do — you've got this book now — is that we filmed the entire book, lectures for the entire book, and they're going to be put on YouTube soon. I'm waiting on the person in charge of getting them out, who is not here, but they're all done. This is Stefan Gogioso, who wrote the book with me. And this is a moon — you can see it can have different faces, because of the dark side. There's lots of stuff in there, and it's also fun; we do a little bit of stand-up comedy in these videos, just to keep it fun.

Okay, we've seen all this this morning, so I can go quickly; I'm just reminding you because we will need these things. You've got wires and boxes and you can compose them. You remember the cup wire, the cap wire, the yanking equation, and that sliding a box around is taking the transpose — all these things we're going to need for something other than physics later. Okay, teleportation, you've seen. Spiders are all you need.

The only thing I want to say now, which I didn't say this morning, is the philosophy of the spider. What is the philosophy of the spider? To understand the philosophy of the spider, you need to understand the philosophy of the wire. What is a wire? A wire is a thing that has two endpoints. And what are the equations governing wires? If you take one wire and another wire and glue them together, you get again a wire. Not very interesting: wires can only ever give you wires. The philosophy of the spider is that a spider is a wire with multiple endpoints, and this equation, which I showed this morning, says: if you've got a multi-wire and you glue it to another multi-wire — possibly along several legs, because there are multiple endpoints now, not just two — you get another multi-wire. So this equation, spider fusion, is basically just a generalization of what a wire is. As I said this morning, it's kind of funny.

The interpretation I give to this equation is: you've got these two spiders and they hate each other, and they both give each other a right hook — but so hard that their legs fly away, and that's why, when the two fuse, those legs vanish. That's the deep philosophy behind this equation.

And then, in the beginning, when we wrote this book, we actually wanted to denote these phases with moon phases, and Stefan started to draw them — and then, just like me having to sign a hundred books, he got a bit bored. That's why the only mathematical thing we have in this book is that we write angles rather than phases of the moon. You could have done phases of the moon in principle.

Okay, and this is the ZX-calculus as you've seen it, and these are the people who proved the completeness of the ZX-calculus: any equation you can derive using linear maps, you can also derive using the ZX-calculus. This was for dimension 2.
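For reference, here is the standard linear-algebra reading of the Z-spider and the fusion rule just described — these are the usual ZX-calculus definitions, added here rather than taken from a slide:

```latex
% Z-spider with n inputs, m outputs and phase \alpha:
Z^{\alpha}_{n,m} \;=\; |0\rangle^{\otimes m}\langle 0|^{\otimes n}
\;+\; e^{i\alpha}\,|1\rangle^{\otimes m}\langle 1|^{\otimes n}.

% Fusion: spiders of the same colour sharing at least one leg merge,
% the shared legs disappear, and the phases add:
%   \alpha \;\text{-spider connected to}\; \beta \;\text{-spider}
%   \;\leadsto\; (\alpha + \beta)\text{-spider}.
```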
And this is a review by a ten-year-old. It's really very clear, very well structured — I mean, I don't think this is your average ten-year-old, but still.

Okay, so what is happening now? Currently — this is us at Quantinuum, in collaboration with Oxford University a bit, and a little bit of IBM — and this is why we wrote the book in the first place and why we made the videos: we're going to teach a bunch of teenagers quantum in pictures, all these techniques, with proper lectures. There will be the video lectures and then some tutorial sessions where they can try exercises and things like that. And then we're going to take a bunch of Oxford University students — posh, pretentious people — and make them take a regular quantum course, a regular course with Hilbert space. And then we're going to let both groups take the same exam — of course, in one case the questions are formulated in pictures, in the other case in Hilbert space — and then we're going to see who wins.

The first time I talked about this experiment was 2009, and I said, yeah, we're going to do it next year. And then every year: we're going to do it next year. But it is going to happen this summer, effectively — it's really happening now. Recruiting starts next week and all that. So that's going to be cool. The point is just to show that it's much better to use these pictures than the usual Hilbert space quantum mechanics.

Okay, now the content of my talk is really starting. This is Jim Lambek. Jim Lambek was not a physicist; he was a mathematician, based in Montreal in Canada, a professor of mathematics at McGill. He wrote the first paper that can be considered proper mathematical linguistics — we're talking about linguistics now — the paper "The Mathematics of Sentence Structure". There had been attempts before, I should say, but this was the first proper theory: a piece of algebra that allows you to verify whether a sentence is grammatically well-formed. So we're talking about something completely different, a complete change of direction. He came up with this theory to show when a sentence is grammatical — I'll give a little illustration of how this works later. Anyway, this was Jim Lambek, at McGill, and he was still alive in 2005 or so. So let's now go to Montreal, 2005.

We're in Montreal, and I had just written this paper, Kindergarten Quantum Mechanics, and I was very proudly explaining how you can do teleportation with pictures and all that. And Jim was in the audience. They were all sort of asleep — there were all these very old category theorists in there, all asleep, but then suddenly they'd ask questions; it was very funny. They were in a superposition of conscious and unconscious. And Jim was there, and he heard me, and he said: "Bob, this is grammar." I said: "No, Jim, just go back to dreaming. It's physics, Jim." "No, no, Bob."
"This is grammar." And he was right. He was right: in a certain incarnation of his grammatical theories — a very particular one, which he published in 1999 and called pregroups — as categorical structures (I didn't give you the categorical definitions, but as categorical structures), these diagrams and his grammatical structures were exactly the same thing. Exactly the same thing, and I'll illustrate later how this works. So this is a remarkable coincidence: you write Hilbert spaces, tensor products and linear maps in this language, and suddenly it looks exactly the same as grammatical structure. It's just a mathematical coincidence.

I didn't take much notice of that at the time, because I was starting the development of all this categorical quantum mechanics stuff. But then, three years later, there were some colleagues in Oxford. One of them was Mehrnoosh Sadrzadeh — I know there are a few Persians here; Mehrnoosh is Persian, from Iran, and is now a professor at UCL in London. She knew these grammar systems of Lambek's, because Lambek had asked her to develop the theory for the specific case of Farsi: she had written the paper on pregroups for Farsi. Anyway, she knew that, and now I'm going to try to explain how it works a little bit. (This is my phone — no, it's not mine.)

So basically what you've got is these letters, like s and n, and they represent what are called basic grammatical types. These are the grammatical types of entities that mean something in their own right. A noun is a meaningful thing — it could be an answer to a question; just a noun on its own is a useful thing. A sentence is also a useful thing. So these are the indivisible types. Then you go to things like a transitive verb. A transitive verb has the type of an s with a noun-inverse on each side — you see, I'm using the intuitive idea of inverses in a group, with the two inverses on different sides. What this means is that a transitive verb wants a noun on the left and a noun on the right: the subject and the object. By sticking the inverses there, it is actually asking for them: give me a noun on the left, give me a noun on the right. So here is the transitive verb, these three symbols in the middle, and if you stick a noun on the left and a noun on the right, the inverses cancel out and you get a sentence. That's the way you compute that noun · transitive verb · noun means sentence.

This is of course a very simple example, but there are such types for all of English, and actually for all languages. In languages there are of course differences: in some languages you stick the subject and the object on one side of the transitive verb, in other languages on the other side, and so on — actually, for each permutation of subject, transitive verb and object, there is some language that orders them that way.
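In standard pregroup notation — using Lambek's right and left adjoints $n^r, n^l$ instead of the informal "minus ones" above; my rendering, standard in the pregroup literature — the computation just described reads:

```latex
\underbrace{n}_{\text{Alice}} \cdot
\underbrace{\left(n^{r}\, s\, n^{l}\right)}_{\text{hates}} \cdot
\underbrace{n}_{\text{Bob}}
\;\longrightarrow\;
\left(n\, n^{r}\right) s \left(n^{l}\, n\right)
\;\longrightarrow\;
s,
\qquad\text{using}\quad n\, n^{r} \le 1, \quad n^{l}\, n \le 1 .
```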
So these type systems are a little bit different in each language, but the algebra is always the same. The algebra is always the same.

Now, what is important for what I'm going to say is that this little calculation — a transitive verb in the middle, a noun on the left, a noun on the right, the types cancelling out — can be represented diagrammatically. This here is basically the cancelling out; it's almost like an annihilation in Feynman-diagram-type stuff. These two cancel out, those two cancel out, and the sentence type comes through. So we've got a little diagram representing this calculation, and for each sentence, however complex, you can get such a diagram.

In Mehrnoosh's pregroup papers — this is very interesting — she actually gives these diagram structures for different languages, and you see, for example, that for English and French they're very similar, for Arabic and Hebrew they're very similar, and then Persian is very funny: in English and French the cups are very small, very close together — only nearby words are connected to each other — while in Farsi, words at one side of a sentence are connected to words at the other side of the sentence. So you can really recognize languages by the structures of these cups and caps.

Okay, so that was the grammar side. Then there was also Steve Clark, who was in Oxford. Steve is now Quantinuum's head of AI: after Oxford he moved to Cambridge, after Cambridge to DeepMind, and after DeepMind to our team at Quantinuum. And he was doing something else, something that everybody now knows: representing meanings of words by vectors. That's what all the ChatGPTs, all these large language models, are doing — they represent meaning by vectors. But this was 2006, 2007: there hadn't been any deep learning yet, and this was a purely academic discipline; it wasn't taking place in industry very much, it was purely at universities. So Steve was studying how to represent meanings by vectors and what the best ways are to do that.

Now put these two together. On the one hand we've got grammatical structure: how you compose words to make a meaningful whole, and the rules for doing that. On the other hand we've got meanings of words. How can you combine the two — combine grammar and meaning? For example: if I know the meanings of the words in a sentence, can I come up with the meaning of the sentence, as a vector? Is there some way to use the grammatical structure — because the grammatical structure really encodes the rules for how words interact with each other — so that, given the meanings of the words, you can come up with a theory for meanings of sentences, derived from the meanings of the words? When Steve told me about this problem, I said: of course I have a solution, of course I know how to do that.
Well — not really me. Jim knew how to do that. Jim knew how to do that, because Jim looked at these diagrams and said: these wires, they are grammatical structure. And what is flowing through the wires? Vectors — just like meanings of words and sentences. So it was an obvious conjecture that we could just use this categorical, diagrammatic quantum formalism to come up with a theory for how meanings of words combine into meanings of sentences. To me it was obvious that it was going to work. And we tried it, and it worked. So we published this paper, "Mathematical Foundations for a Compositional Distributional Model of Meaning" — compositional because there is grammar, distributional because you're working with these vectors, which at the time encoded statistics of word usage. And I'll tell you how it goes.

Okay, so this, for me, is the algorithm. Basically, I take meanings of words — the top row is meanings of words: a vector representing "Alice", a vector representing "hates", a vector representing "Bob" — and the bottom is the grammatical structure, this little diagram which we derived before from the calculation. You compose them, and the claim is: whatever comes out at the bottom is the meaning of the sentence. And if you look at it a little, it's really like Bob is being teleported into "hates", and Alice is being teleported into "hates"; "hates" is some sort of entangling linear map which actually makes them interact and puts them in the hatred relationship, so to say. What comes out here is Alice and Bob in a hatred relationship. To me it's very intuitive, and of course we did a lot of experiments — this is an experimental science — and it worked really well.

Oh, a good question — I'll repeat it: how would "Alice hates Bob" be different from "Bob hates Alice"? "Hates" is not at all symmetric. It's very directed: the matrix of hatred goes in one direction. If this were "marries", it would probably be much more symmetric. So yes: this thing here is some sort of quantum state, if you want, and it is not at all symmetric in these two inputs. Good question.

And what you can do now — you see, this is like a Dirac bra-ket on its side — is take this sentence, "Alice hates Bob", take another sentence, "Alice does not like Bob", and take the inner product. Then you can see how closely related these sentences are, and so you can start comparing different sentences, how close their meanings are. That was a new thing, a useful new thing. So it was all cool.

Then we worked a bit further — I'm not going to go too deep into it — but we were talking about spiders, and spiders also have a role in this theory. For example, this is "she who hates Bob": we use spiders to encode things like relative pronouns. Again, this is not something we just guessed; there is a logical reason to do it.
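To make the "Alice hates Bob" recipe concrete, here is a toy numpy sketch — my own illustration with made-up two-dimensional noun vectors, not data from the talk. The transitive verb is an order-3 tensor, and the grammar's cups become contractions of the subject and object wires:

```python
import numpy as np

# Toy meaning vectors in a 2-d noun space (made-up numbers).
alice = np.array([1.0, 0.0])
bob   = np.array([0.0, 1.0])

# A transitive verb is a tensor with a subject wire, a sentence
# wire, and an object wire: hates[subject, sentence, object].
rng = np.random.default_rng(0)
hates = rng.normal(size=(2, 2, 2))

def sentence_meaning(subj, verb, obj):
    # The pregroup cups contract subject and object into the verb.
    return np.einsum('i,isj,j->s', subj, verb, obj)

s1 = sentence_meaning(alice, hates, bob)    # "Alice hates Bob"
s2 = sentence_meaning(bob, hates, alice)    # "Bob hates Alice"

# Compare sentence meanings with a normalized inner product —
# the "bra-ket on its side" mentioned in the talk.
cos = s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))
print(s1, s2, cos)   # generally different: 'hates' is not symmetric
```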
I'm not going to go deep into it, but the point is that things like relative pronouns you can actually build from spiders, and then you do your experiments and it works. There is some conceptual justification for doing it, but, as always with this sort of stuff, you do empirical experiments, and if it works, it's good. And this worked. Cool.

Okay, so here is an interesting thing. I was the only physicist in the team of three, and I told people: please don't mention quantum when you write or talk about this, because otherwise we're going to be branded as complete and utter crackpots. And then immediately there were these headlines: "quantum linguistics", "the quantum linguist", "quantum mechanical words". Okay. So this was around 2010 to 2013 — quite a while back, ten years.

However, there was this man. Feynman: "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy." You hear about quantum chemistry this week, I guess. Chemicals have quantum mechanical descriptions, so it's really hard to stick them on a classical computer.

Now, this theory of ours — although I told them never to mention quantum — was a theory of vectors and tensor products and linear maps and all that: a thing which is not easy to simulate on a classical computer. And we actually noticed that. I didn't do much of the experimentation myself — Steve and Mehrnoosh and others were doing these experiments — and they worked very well, but they didn't scale very well, because these tensors very quickly became huge and didn't really fit on classical machines. So what basically happened at the time is that we just stopped doing it.

But then I had this student, Will Zeng, who until recently was the head of quantum at Goldman Sachs. He did a PhD with me, and Will suggested: maybe we should actually take seriously this idea of doing language on a quantum computer — doing this natural language processing, this distributional-meaning language model, on a quantum computer. So the first thing we did was look at the typical things people in natural language processing want to do with meanings of sentences: comparing them, classifying them, seeing whether a headline is about sports or romance or politics — these sorts of tasks. And we discovered that for all these typical tasks you would get a Grover-like speedup. So it's not just that it wants to live on a quantum computer because of the size issues — because these things are big — you also get an algorithmic advantage. This was 2016 or so. Will took it seriously, but I still thought it was a joke.
Honestly, I thought it was a joke; I didn't take it seriously. But then Ilyas came along. Around 2019 I was in a pub with him, and I was sort of joking about the stuff we had done, how we had looked at this natural language stuff through quantum computing glasses, and he said: "Bob, you need to take this seriously. Here is some money. Do it."

Okay, so I built a team initially — Konstantinos (he's still not here), and Alexis, and Giovanni — and we were actually able to do question answering on a quantum computer. We used a real quantum computer, we trained something, we did some question answering, and it worked. I was astonished that you could do anything like that, because typically NLP and all that stuff is considered really heavy on data — the amount of training data these GPTs use is just ridiculous — and we did all this, with good results, with very little data. We could do it with very little data because we had this linguistic structure working for us: the way these words interact through the grammatical structure was helping us a lot and allowed us to do things with very little data. And in an interpretable manner, by the way — in an interpretable manner, because we understood what was going on.

Okay, so then we started to do bigger and bigger experiments, and this worked. And then we took the software we had developed for that and made it very nice, very clean, very readable — Richie worked on pretty much everything — very user-friendly, and we threw it at the world. You can use it now. What you should look for is "lambeq" — you can kind of imagine where this name came from: Lambek, with a q at the end. So this is our software. You can just download it, you can type in sentences and stuff, and it connects directly to a quantum computer. You can do question answering on a quantum computer this evening if you want. It's all available.

[Audience:] If I understand this kind of approach you're mentioning, isn't it closer to, let's say, old-school NLP in the sense of expert systems, rather than the machine-learning type of NLP based on statistical patterns that we have nowadays?

It's a very good question. The structures we use here are completely different from the old propositional structures used in AI, and they are very compatible with the machine learning techniques you use. They are more like an umbrella than a skeleton. What sits in these wires is something we train — I'll explain — we train on data, in the usual way, on a quantum computer, in something that looks like a neural network, but it's interpretable. It's a subtle difference, and we actually need to work harder to explain this clearly, but I'll show it later; you'll see.

Okay. After we started doing this, people at IBM started to use DisCoCat — that's the name of the model; it's a horrible name — and people started to use the same thing for song generation. The people at IBM actually wrote a very nice blog post about all this quantum natural language stuff I've been talking about, if you want to read about it.
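For anyone who wants to try the "type in a sentence" workflow just mentioned: a minimal sketch, assuming lambeq's documented Python API (`BobcatParser`, `AtomicType`, `IQPAnsatz` are class names from the lambeq documentation as I know it — check the current docs, since the API evolves):

```python
# pip install lambeq
from lambeq import AtomicType, BobcatParser, IQPAnsatz

# Parse a sentence into a pregroup-style string diagram.
parser = BobcatParser()
diagram = parser.sentence2diagram('Alice hates Bob')

# Map the diagram to a parameterized quantum circuit:
# one qubit per noun wire, one per sentence wire.
ansatz = IQPAnsatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1},
                   n_layers=1)
circuit = ansatz(diagram)
circuit.draw()   # the trainable circuit for this sentence
```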
Okay, so then we wanted to do something different, but without much work. What we did next was take the linguistic grammar out of the system and put in a musical grammar instead — these things exist; you can write grammars for music. So, without much effort, we got a new piece of software, called Quanthoven. Quanthoven is pretty much the same, but for music instead of language. It generated some silly pieces of music, and then suddenly one of Quanthoven's pieces became number one in some classical chart. Again, you can do this yourself: the software is available, you can go to GitHub, get Quanthoven and play around with it.

Okay, so what's happening inside lambeq? That's the answer to the earlier question. You've got this diagram. You shouldn't think of it as a logical system: these are not logical symbols — there's no logical symbol here. This is a trained state, something like you'd have in modern machine learning — a trained state. And now we're going to start deforming it. I introduce some spiders to reduce the size a bit — all stuff which is empirically justified — and now I formulate it like this: you've got Alice, and now you see better how "hates" is sort of entangling Alice and Bob, almost like a circuit. Now I unspider — you see, I pulled out spiders like we did this morning — and now you see CNOT gates appearing. Quantum computers don't have many qubits, so we reduce the size a bit using all these deformations we saw this morning.

And now we parameterize — now we parameterize these boxes. That's the big difference: they get parameterized, like a neural network, and you end up with something like this. You can think of this as a neural network, but these are phases which we will train. This represents the transitive verb, this represents Alice, this represents Bob. So we get this network, we stick it on the quantum computer, and we train it on the quantum computer. Why do we do that? Because you can't stick data on a quantum computer efficiently; the way to do it is to train the circuit, and that's what we're doing here. So in a way this is a neural network, but we know that this is the subject, we know that this is the object, we know that the object is connected to the transitive verb here and the subject is connected to the transitive verb there. The linguistic structure is present in this network, so to say.

[Audience question about how the training works.] It depends on the problem you're solving. You turn your sentence into a quantum circuit and you learn the weights of your model based on an objective function, based on the data and the labels — supervised learning. The sentence structure, the syntax, tells you the shape of the circuit, and each word has its own set of parameters — those are then the embeddings for the words. And it depends on your task: it could be classification, it could be, you know, sentiment analysis.
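As a rough picture of the hybrid loop being described — an illustrative sketch in plain numpy, not Quantinuum's actual pipeline; the "circuit" here is a stand-in simulation, with one made-up phase parameter per word, trained against labels:

```python
import numpy as np

rng = np.random.default_rng(1)

# One phase parameter per word (toy vocabulary, made-up task).
params = {'Alice': rng.uniform(0, np.pi),
          'Bob':   rng.uniform(0, np.pi),
          'hates': rng.uniform(0, np.pi)}

def circuit_output(words, theta):
    """Stand-in for running the sentence circuit on hardware:
    returns a probability in [0, 1] read off the sentence qubit."""
    phase = sum(theta[w] for w in words)
    return np.cos(phase / 2) ** 2

def loss(theta, data):
    return sum((circuit_output(ws, theta) - y) ** 2 for ws, y in data)

# Labeled sentences: 1.0 = "about hatred" (invented labels).
data = [(['Alice', 'hates', 'Bob'], 1.0),
        (['Bob', 'hates', 'Alice'], 1.0)]

# Finite-difference updates, in the spirit of the gradient-free
# optimizers used when gradients can only be estimated by sampling.
eps, lr = 0.1, 0.4
for step in range(200):
    for w in params:
        up, down = dict(params), dict(params)
        up[w] += eps
        down[w] -= eps
        grad = (loss(up, data) - loss(down, data)) / (2 * eps)
        params[w] -= lr * grad

print(loss(params, data))   # should be small after training
```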
[Further audience question, partly inaudible.] Yeah — you could do that, for example. And as for the quantum part: it's kind of a hybrid thing, because what you're training are actually the classical settings of your quantum circuit. So it's half quantum, half classical, in a way — a hybrid thing. It's the same as the chemists do, pretty much.

Okay, look. Now, what you see here is a sentence represented as a quantum circuit — forget about the quantum: a sentence represented as a circuit. Usually when we speak, words are on a line. Words are on a line, right? Here, words form a circuit. This was for the reason that we wanted to stick it on a quantum computer, so it needed to be in circuit form. But in parallel with these developments: all the previous theory we had, and all these grammatical theories from the past, were about when a sentence is correctly formed — it was all about sentences. So far we only had an algorithm giving the meaning of sentences given the meanings of words. Now, sentences are not the most interesting linguistic entities; texts are much more interesting. You don't buy a sentence in the shop; you buy a book. Sentences are not very informative on their own.

So I've been thinking about how to generalize this theory to text, and we came up with this thing called text circuits. I started this first in the paper "The Mathematics of Text Structure". Basically, at some point we had this representation — remember it — where we've got an Alice wire and a Bob wire, and they get entangled by "hates". That's the idea: an Alice wire, a Bob wire, entangled by "hates". But nothing stops you from introducing a third entity — beer — and entangling Bob with beer through "likes". And you can go on, and you basically form something which is closer to a text.

And what is beautiful about this — and, so far, Konstantinos is still not here; we're 17 minutes in, and he's working on this — is that you've got a circuit now. A text is basically a circuit where you've got a whole bunch of agents — these could be the actors in your story, if you want — and then a bunch of relationships, actions that happen between the agents. But ultimately this is basically like a quantum evolution; it's very much like a circuit you would write down if you wanted to simulate chemicals.
It's the same thing: reading out, or executing, a text is pretty much the same as executing the evaluation of a quantum system. So we are now starting to work with the expectation that the sort of speedups, the advantage we can get here with these new methods, is going to be much more than the Grover-like stuff. Personally — we are not a quantum machine learning group, let's say; we use quantum machine learning, but in a way we're closer to the people who do simulation, the chemists, with this new theory. It's actually much closer, because we've got this quantum theory of language, of interaction, and it basically wants to live on a quantum computer. We're expecting these circuits to be really, really hard — I'm going to use the term tensor network, because people like that — really hard tensor networks to simulate. Very hard tensor networks to simulate. That's the stuff we're starting to do now, and we're hoping this year to get some really nice results demonstrating how hard they are.

Now — this is sort of a special occasion; there are people from many different countries here. How many nationalities? Does anybody know? 24? So we've got many different nationalities, many different languages. So basically me, Vince and Jonah wanted to develop this theory of text circuits very properly: understand very deeply, mathematically, what these circuits for text are. What do they teach us? How are they different from just language on a line? I'm not going to go into great detail, but basically, here's what we did. This is also a grammar — it looks very different from the Lambek grammar; this is more like a Chomsky-style generative grammar. You've got these little bits and they generate pieces of text, you can build the tree, and you get a text; and there's some pronoun resolution going on. We created this grammar specifically to get a very nice mathematical statement: we wanted to know what the structure is that feeds into building a circuit for text.

So we start with this: basically just a sentence on a line, like we usually speak, together with its grammatical structure and some other stuff. Now we're going to start deforming it and throwing in these links — it's not important that you understand exactly what's going on, but this is a mathematical algorithm we have which turns every text into a circuit like this. Every text becomes a circuit like this, and you see you can go back to the sentence: "Sober Alice, who sees drunk Bob clumsily dance, laughs at him." And here it is: Alice is sober; she sees Bob dance clumsily; Bob is drunk; Alice laughs at Bob. All the data is there — all the data of the sentence is there. Now you can ask yourself: is the sentence faithfully represented there? Maybe, maybe not.

Okay, and then you can ask a few more questions, and we came to two very remarkable conclusions. First of all, these circuits are the same in every language. There are no differences — suddenly all languages become the same, which is quite a remarkable thing. Take "I really love you": say it in any language and you get the same circuit. You can do the same for really complicated sentences, and you get the same circuit in every language. So this is a language-independent representation of meaning, which is pretty cool.
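To illustrate the data structure being described — a toy sketch of my own, not the team's actual pipeline: one wire per agent, one gate per action, so a text becomes a circuit you could in principle hand to a simulator:

```python
from dataclasses import dataclass, field

@dataclass
class TextCircuit:
    wires: list = field(default_factory=list)   # one wire per agent
    gates: list = field(default_factory=list)   # actions between agents

    def agent(self, name):
        if name not in self.wires:
            self.wires.append(name)
        return name

    def act(self, verb, *agents):
        # A gate touching the wires of the agents involved.
        self.gates.append((verb, tuple(self.agent(a) for a in agents)))

# "Sober Alice, who sees drunk Bob clumsily dance, laughs at him."
text = TextCircuit()
text.act('sober', 'Alice')
text.act('drunk', 'Bob')
text.act('dance clumsily', 'Bob')
text.act('sees', 'Alice', 'Bob')
text.act('laughs at', 'Alice', 'Bob')

print(text.wires)                 # ['Alice', 'Bob']
for verb, agents in text.gates:   # the "screenplay" of the text
    print(f'{verb}: {agents}')
```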
I was shocked when I saw it. And there's more. Here you see "Sober Alice, who sees drunk Bob clumsily dance, laughs at him" versus "Alice sees Bob dance clumsily. Alice laughs at Bob. Bob is drunk. Alice is sober." They give the same circuit. So the style — your style, long sentences versus short sentences, punctuation, relative pronouns — it all vanishes. There's no punctuation anymore here, it's all gone; there are no relative pronouns anymore here, it's all gone. So not only is it a language-independent representation of language, it's also a lot more economical than usual language.

Now, what is the philosophy here? Konstantinos is still not here — there he is! Oh, I see him now. And he wears his own t-shirt — me too, you see.

So what's the moral of the story? We as humans have no problem thinking about things in parallel. If you play music — that's why I show this music slide — you are aware of everything every other musician is playing, and you immediately anticipate what they're doing. So we are perfectly able to perceive several different things in parallel — not too many, but we can deal with it easily. (And there's the other Aleks with a "ks" — for the first time I've met another Aleks: this is my Aleks with a "ks", and now there's another Aleks with a "ks". And that's Ross Duncan, the head of software, the builder of TKET.) Anyway: we can think in parallel. Unfortunately, this device I'm speaking with here is pretty bad at saying words in parallel, side by side. The only thing it can do is say one word after another. That's all we can do.

So for me, for us, this circuit form is how language fundamentally wants to be represented. But we as humans can't speak like that — we can't say this and that thing at once. So what do you have to do? You have to work backward. You have to take this thing and try to put it on a line, one way or another. That's the opposite algorithm: you start here, and then you do all kinds of acrobatics to get that thing onto the line, and there are many different decisions you can make there — like, where are you going to put your subject?
Where are you going to put your object relative to your transitive verb, and all that. And then, typically, if you don't use punctuation you get something super ambiguous, something you can't really interpret. So you need to introduce commas and stuff like that, basically just to get an unambiguous representation of that circuit on the line. In every language people introduce different bureaucracies, different styles, to do the same thing — you use relative pronouns, you don't use relative pronouns, things like that. So you get something that wants to be that circuit, but unfortunately has to live on a line.

If you go, for example, to the Dodo book, for those who have it: there's a long discussion where we compare the diagrams to symbolic mathematics, and the symbolic mathematics for a given diagram is really complicated, because again you have to make many choices — you're sticking something that wants to live in two dimensions onto one dimension, you can't do that in a unique way, and lots of ambiguities arise. For example — somebody mentioned the definition of a symmetric monoidal category. If you look up that definition in a textbook on category theory — Mac Lane, around page 300 — you pretty much need all the definitions of the book, natural transformations and so on, to properly define it symbolically. A super complicated definition, and basically what you're defining is boxes and wires.

[Audience:] But isn't your framework, at the end, just — and by "just" I don't mean it pejoratively — a very efficient compression algorithm? Because you lose the style: the meaning you extract from a text written by a ten-year-old kid describing their day, and the same thing asked of, say, James Joyce, who spent many years in Trieste — maybe the fundamental meaning you're grasping, the final circuit you get, is the same, but what you read at the end is very clearly a different thing. So is it just compression, from the information-theoretic perspective?

So: you throw away stuff that means nothing, and to that extent it's probably compression. It throws away the bureaucratic choices, the style differences. Of course, the commas and all these bureaucratic things we need to introduce on a line — you can use them for art, that's what you're saying; you can use them to make poetry and rhyme, and do lots of funny things when you put language on a line, which you can't do here. I completely agree. But this, I would call the machine language — the language of a machine that can actually speak in these two dimensions, like a computer. We humans do all that other stuff on top; I agree with you. Now, thinking for example of translation: what you could do is train something that turns the circuit into a text of a certain style — given enough data you could train that — and so you could actually do style changing and things like that, using this as an intermediate.
It's an intermediate form.

[Audience:] But my point is: is the size of the circuit you get connected in some way to some Shannon entropy you can associate to your text, some fundamental level of compression?

I must say we haven't really looked much at a quantitative characterization of how small it becomes, but I've got some examples — look. Here, this is just factual text: "the pawn next to a king that a knight can capture". We developed some linguistic theory of space, and this would be the usual sentence characterizing this thing in the context of the chessboard — the normal representation — and this is the circuit. We don't know quantitatively how much smaller it is, I should say that, but it is an incredible compression, like you say. I've got a more extreme example here: "the ostrich next to the tree that a cheetah next to the grass can capture", in human language, and here as a circuit. It's true what you say, you see it here: it's a huge compression.

[Audience:] Maybe it could be interesting to look at the size of the circuit and compare it to fundamental lower bounds given by classical Shannon entropy.

Yeah, exactly. So what we've been doing — Richie has been working on that — is we've now got a pipeline where for arbitrary text we spit out the circuits. But what's more important for us is to see the quantum advantage of executing these on a quantum machine compared to how you'd do it on a classical machine. That's what we're shooting for, because we're a quantum computing company. Konstantinos should be doing that, and Konstantinos is still not — there he is.

I think it's not just compression, actually. I think — and it's the same discussion — these circuits are also closer to how we as humans perceive the meaning of language. For example, here I've got an example — this is a movie, Once Upon a Time in the West — and you can really think of the circuit you see here as: this happens, and this happens, and this happens, and this happens. It's like the screenplay of a movie. So these circuits, for me, are much closer to cognition. When somebody tells you a sentence like — let's go to this one — "the ostrich next to the tree that a cheetah next to the grass can capture" — maybe not one quite that complicated — you imagine an ostrich, you imagine the animal, you imagine a chase. I at least imagine something like a tiger chasing a zebra. So to some extent, for me, these algorithms also capture how text translates into how we imagine things. It's more cognitive — it's closer to our cognition, I would say, than the text structure itself. But again, more experiments need to be done on that. That's my understanding: that these things are much closer to how we actually think.

So basically what we're doing in the team is trying to use these structures to come up with some sort of models of thinking, both human and artificial.
That's kind of what we're doing in the team, and I gave some examples. And that thing — yeah, the skull is now actually put to positive use. Okay, I stop here. Konstantinos is still not here? Okay. We'll take some questions; we've got a few minutes.

[Audience:] Thank you so much for the talk. I was just wondering: does it also incorporate conjunctions? For example, if I want to say "Alice hates Bob, but Bob loves Alice" — how would I represent the word "but" in circuit form? Same with other conjunctions.

So — you can permute the wires: you can permute the order of Alice and Bob after the first box. You just gave an example where the subject becomes the object, right? But the conjunction itself kind of gets abstracted away in this model: these connective words are removed. There are intentional things that we abstract away. On this particular point — we're going to put a pipeline out; we have a tool where you just type in an example like the one you gave and it gives you the circuit. We use a parser, a traditional CCG parser, and then — mainly Richie and some others came up with the algorithm — from your CCG parse you have to do extra stuff; it's not just the CCG parse, and then you always get a circuit. Of course, like with every parser, there can be some ambiguity given the data, so it's a probabilistic process, but most of the time you get these out, and for all the words it gives the right type, and the type turns into the circuit.

[Richie:] We actually have a version of this tool sitting on our Discord. If you go onto the QNLP Discord and message Jim Bot — a robotic version of Jim Lambek — you can type these texts at him and he'll reply with the circuit diagram. So we have a working version of our converter sitting as a bot on the Discord; you can play around with it afterwards. Go join the Discord, chat with Jim Bot, and you'll see what the sentence you're asking about actually looks like.

Any further questions?

[Audience:] Thanks for the great talk. I was just wondering: how does the representation handle grammatical errors? Will the representation break?

In principle — there are tools around to deal with ungrammatical sentences and try to make the best of them, so you can use those if you want, but for most of the things we developed we kind of assume grammatical input. Richie, what does lambeq do with an ungrammatical sentence?

[Richie:] It's a parser, right: given a sentence, it will output a diagram or a parse tree, and also associate a probability for how confident it is. If that's below a threshold, it will just say this is non-grammatical. But it will try — some words it may not have seen, and it will just decrease the confidence. It's a parser.

[Audience:] Hi, thank you. What would you say, or what would you expect, the main advantage of QNLP to be compared to classical NLP?
You've mentioned the reduced requirement for training data — because of this built-in linguistic structure you need less data. Is there anything else?

Yeah. So the number one thing would be interpretability. You get interpretability, which is an important thing, and we're actually working to get even more of it. It's a very important thing: I was talking to a very big firm, and they basically told me that, because of international regulations, they were not allowed to use AI that was not interpretable — and with what's been happening in the last few weeks, I think this is going to become even more extreme. So you've got interpretability. Then, basically, the fact that these are really hard tensor networks, which you can't stick on a classical machine very easily. And then we're expecting huge algorithmic advantages because of this difficulty of the tensor network — a speedup, like for quantum chemistry, the same thing. A lot of research still needs to be done, and we still don't know where large language models are going, but there are things they clearly fail at, where we expect this to be much better — and again, it's connected to the interpretability.

[Audience:] You spent quite some time explaining this compression, what I understand as a kind of compression or coding process into circuits. But in the end, in NLP, what we're looking for a lot are generative models. So is this, in the end, a generative model?

So, the beauty of this one is: whatever you stick together is always correct. With grammatical structures you have to be careful — non-grammaticality was mentioned — but here, everything you stick together is at least structurally valid. It's like Lego: whatever you build is okay. With the old syntactic models, like this one, it's very easy to do something wrong.

[Audience:] In a generative model there is no notion of correctness or not; you have an input and you generate new content.

Sure, sure. But I mean, if you wanted to generate content — I'm just comparing to this one — doing it fully structurally with the old models would be complicated, and with the circuits there is no complication whatsoever; it's always correct. So it's a bit different. It's not a generative model in that sense.

[Audience:] I'm using "generative" loosely — I mean: can you generate text?
Yes, yes. We can generate text in the same way we generated music — we did the music generation before (that was all with lambeq, by the way), and now you can do text generation with the new text circuits. It's actually much easier for us to generate now, because there are no constraints anymore; before, there were many constraints. But it's generative in the everyday meaning of the word, not in the precise meaning you would use for a large language model.

[Colleague:] What you've seen so far is kind of discriminative — classifying models — it's not quite generative yet. But there's some work in progress that will make it more generative, like language models — so talk to Bob, or talk to Konstantinos. We're starting a translation project now, so then you have to be generative. But the setup we have here is more classification than generation: you kind of measure, and you see which class it belongs to.

We'll take one last question, from the gentleman over there.

[Audience:] You probably both don't like to hear it, but do we need quantum computers at all? This seems independent from quantum — I can do it on paper.

I mean, we expect that, in the first place, it would be really hard for a classical computer to simulate such a circuit, and then there are the expected algorithmic advantages. But at the same time, we do have a classical implementation of these circuits, and we are testing them. What we especially see in the experiments we've been doing with this is the incredible reduction in the need for data: if we use these circuits rather than neural networks, the data reduction is enormous. We haven't quantified it exactly, but —

Are you involved in the Bobby stuff? No. Okay.

We'll take a break, I suggest — come back in ten minutes, unless someone has a strict schedule. We'll do a ten-minute break and we'll be back. Thank you.