Hello, and welcome to SuperCloud 6 and the continuing discussion around AI innovation. I'm Rob Strechay, Managing Director with theCUBE Research. Today I'm joined by Enrique Lizaso, CEO of Multiverse Computing, a company that is really at the forefront of the intersection of quantum computing and AI, an intersection that is at the far edge today but coming to computing near you soon. Welcome, Enrique. Thank you very much, Rob, for inviting me to be here. I'm excited because we covered you on SiliconANGLE, and you just closed your Series A round, which was oversubscribed: 27 million US dollars, 25 million euros. That's really interesting to us, and I'm glad to have somebody on from across the pond in Europe, because you're seeing things from a different perspective than maybe we are over here. Why don't you give us a view of how things are going in Europe with quantum and with AI? What does it really look like? Okay, Rob. The situation here is, let's say, there is a feeling that Europe is losing the technology race again and again to, for example, the United States and also China, and that we should be working harder and bolder in some particular fields, especially quantum, but also AI. So it's a very good time for us, given that we work at exactly that intersection. We are working hard, we have the talent, and now money is being put into companies like mine, Multiverse Computing, and we are delivering our solutions not only in Europe but also in North America. We do have a presence on the other side of the pond; our largest office is in Toronto, in Canada. But the United States deserves a different approach of its own, for sure. Yeah, Toronto's a good place.
And for people who don't know, Toronto actually has an innovation zone in Canada, where they're looking for companies like yours that are doing really innovative things. And I think Multiverse Computing is really at the forefront of that intersection between quantum techniques and AI, two very important topics. What drives the company to invest in those two topics in particular? How did this come about? We discovered it with one of our customers; we can say the name now because it's public: it's Bosch, the super powerful automotive supplier, and some other companies. The real idea was: okay, we have AI systems that are expensive to train, and they are large. Could we prepare something that is smaller? And not only smaller, because you have to fit the system somewhere, but that also consumes less energy. We analyzed the problem and said, yes, this is a problem that fits quantum computing and also what is called quantum-inspired computing, which is basically techniques that came from quantum computing but that you can apply on regular computers, the classical computers we are all using right now. In particular, a family of techniques called tensor networks. Nothing to do with TensorFlow; this is different. Tensor networks are a super powerful, dual-use (military and civil) mathematical beast. And at some point, somebody inside the company said, okay, we have done this on manufacturing lines with some very big names; can we apply it to LLMs? And we said, okay, let's try. Let's try to compress a model with it.
I mean, we took a LLaMA model: 85% compression in the first trial, with only a couple of points of accuracy loss, slashing the retraining time by half and also the inference time by half. And we said, oh, this is something we should focus on. And it happens that this is quantum on one side and AI on the other. Then we noticed that analysts and others were placing us at that intersection and saying, okay, maybe this is our place, the one to be. That's a real thing. Yeah, and that's how you got there. Let's unpack that a little bit, because you've really hit on a lot of things. First, there's the power and sustainability of AI, which has not really been a big topic here in the US, but every time I go to Europe or anywhere else in the world, I hear, hey, we can't get 100,000 GPUs. And I know there are certain service providers in Europe building data centers into caves and using ambient air up in the Nordics, things of that nature, to lower the carbon footprint of all those GPUs. But you also hit on the size of the models and compression. The cost of LLMs, and where they are trained and fine-tuned, is a big topic of conversation even here in the US. We see people saying, hey, maybe we'll do training in the cloud, but then we'll do fine-tuning on premises, or in a colo, or somewhere closer to our data, so we can keep privacy and PII under control, and GDPR is a big concern in Europe as well.
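The compression being described can be made concrete. Multiverse's actual tensor-network method is proprietary, so the following is only a sketch of the underlying idea: the simplest possible "tensor network" is a rank-r factorization of a dense weight matrix, here done with a truncated SVD, and even that already shows how the parameter count collapses. The matrix size and rank below are illustrative assumptions, not their recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))  # stand-in for a dense layer's weight matrix

# Truncated SVD: keep only the top-r singular components.
r = 64
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]   # shape (1024, r)
B = Vt[:r, :]          # shape (r, 1024)

original = W.size
compressed = A.size + B.size
print(f"parameters: {original} -> {compressed} "
      f"({100 * (1 - compressed / original):.1f}% fewer)")
# -> parameters: 1048576 -> 131072 (87.5% fewer)
# The layer now computes x @ A @ B instead of x @ W.
```

On a real LLM the factorization is applied layer by layer, with the rank chosen per layer to trade parameter count against accuracy, which is the knob behind figures like "85% compression with a couple of points of accuracy loss."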
How are you seeing the intersection of all of this, and how does it fit into, like you said, classical computing and current infrastructure? Because when everybody thinks quantum, and I know you were mentioned by IBM in one of their reports, they think big companies, very cold temperatures, quantum computers. How does the cost aspect really help your customers? Yes, you are right. There are some particular details that are quite relevant. I think two days ago there was news in the Financial Times about Ireland using about 20% of its electricity production just for data centers. And the demand from new data centers, and for more performance from existing data centers, is escalating super fast, and this is because of LLMs and AI; it is rising like a rocket, a real rocket. So people say, okay, we have to get the data centers to use green sources of electricity: wind, photovoltaics, and so on. No way; this is not going to work. AI electricity consumption is escalating so fast that even if you dedicate all the green electricity resources you have now, you are not going to cope with the demand. Even worse: you want AI everywhere. You could see Volkswagen the other day stating that they want to put LLMs inside the car, which is a natural way to have a relationship, let's say, with your car, not just clicking buttons. So our solution is a proposal we call, generally speaking, green algorithms, and particularly things coming from quantum. You can apply quantum-inspired techniques, tensor networks, and that means you can shrink the model.
If the model is small enough, it can run completely offline, which is super good. You can imagine: if you are Iron Man talking to Jarvis and you are skyrocketing up there, and you try to speak to Jarvis and Jarvis says, okay, I lost my connection; no way, okay? You need those models, but you need them very small and not so energy hungry. Nature does things differently. Yeah, and we actually came up with what we call the power-law distribution of Gen AI. You have your classical clouds out on the far left of the curve, then a very steep drop, and then a long tail extending out. The vast majority of the use cases may be smaller models, but more numerous, like you were talking about with cars, at the edge and far edge. That's where most of the models doing inference will actually live, because that's where you want to interact with them. But you're not only involved with Gen AI; you're also involved with what I hate to even call classical AI and ML. Yes. So tell us how these techniques can be applied there as well. Yes, we apply them regularly there too. There are a number of places where you cannot apply a quantum computer, or a generic quantum computer, for lack of resources. For example, if you go to the manufacturing line, a lot of the time it's about something as, let's say, old, if you can use this word, as defect detection. You have a manufacturing line and you have to detect the defects on a particular piece, for example. Okay, this is a classical AI problem; super classical. The point is you have to deliver a solution that is more accurate and faster, because this is consuming a lot of resources as well. So if you can provide those solutions, this is much better.
And this is where, let's say, classical AI solutions coming from the quantum sphere, which we are particular experts in, can save a lot of money. And this is important, because when you go to a real customer, to the business line, boy, those guys are hard. They are not extremely excited about the technology, or the innovative part of the technology; they are just focused on cheaper and faster, and that's all. Okay, can you provide that? Yes. Absolutely. Yeah, like you said, defect detection and being able to do things faster as an organization. We see the same thing with money-laundering detection inside a bank and things of that nature, which has been classical AI for quite some time. I hadn't even thought about quantum-like techniques being applied there, but that makes a lot of sense. Yes, and I can tell you some examples that are exciting, because as you can see, these problems, and the technology, are completely transversal. For example, we started in finance; that is our traditional niche. We had some anomaly detection techniques there, okay? Then we applied that anomaly detection to manufacturing lines. But do you want to hear something funny? We were approached by a third-level hospital here in Europe. Third level means transplantations, those kinds of things, super high-level medical care. And they told us, okay, boys, we have 140 beds in the intensive care unit. The physicians there are not interested in whether you are going to live to 100. They are interested in which of those 140 beds, fully instrumented, with dozens of sensor connections sampling at millisecond rates, is going to give them a problem in the next 30 minutes or an hour, okay? Because they have to be prepared.
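The pattern here, one detector shape reused across finance, manufacturing, and ICU monitoring, can be sketched with a plain statistical baseline. This robust z-score toy (median absolute deviation over a sliding window) is only an illustration of the anomaly-detection shape; the data, window, and threshold are invented, and it is not Multiverse's quantum-inspired algorithm.

```python
import numpy as np

def anomaly_flags(samples, window=50, threshold=3.5):
    """Flag readings that deviate strongly from the recent median,
    using a robust z-score (median absolute deviation, MAD)."""
    samples = np.asarray(samples, dtype=float)
    recent = samples[-window:]
    med = np.median(recent)
    mad = np.median(np.abs(recent - med)) or 1e-9  # guard against zero MAD
    z = 0.6745 * (samples - med) / mad
    return np.abs(z) > threshold

# Hypothetical vital-sign stream: steady around 80, then one spike.
stream = [80 + 0.5 * np.sin(i / 5) for i in range(100)] + [120]
flags = anomaly_flags(stream)
print(np.flatnonzero(flags))  # only the spike at index 100 is flagged
```

The same function would accept sensor traces from a production line or player-load metrics unchanged, which is the "completely transversal" point: the algorithm doesn't care what domain the time series came from.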
And we said, okay, wait, wait, wait: this is exactly the defect detection algorithm, okay? I thought that was already a very nice story, but then came another field: football, soccer here in Europe. A team came to us and said, we have tons of data, tons of data about our players: what they eat, how fast they run, everything with GPS, everything. And the point is, we need some clue about which of the players is going to have an injury in the next match or the next week. So again, same algorithm, okay? Prediction; quite a difficult problem. And we said, okay, yes, we can provide that; we have the algorithms, coming from quantum, and quantum is fine. We asked, but is this worth it? And they said, boy, do you know how much we are paying a soccer player? Yeah, yes. They're definitely corporate assets, let's put it that way. And the injuries are definitely on the liability side. So I think another thing you're focused on, especially with LLMs, and we hear this all the time, is that everybody's afraid of bad information being injected into an LLM. There are, like you said, anomalies, but we call them hallucinations and other things. And if you're using that for customer success or customer service, you don't want to give the wrong information to a customer. How are you really addressing that as well? Yes. We have not particularly entered that problem yet; we are going to, because it is a problem that appears at inference time, when a model is hallucinating. But we have a super nice update that is going to appear in the next two months, which is something we call the Lobotomizer. Okay, lobotomizing. Yes. So what is that about?
It is making an LLM forget something, okay? Which is something not even nature can do with us human beings. And this is important, because if you have fed your LLM with the wrong data, and wrong means maybe the data is incorrect, or maybe you used data that is under IP protection, the only options you have now are to get an agreement with the other party, or to be sued and then forced to retrain the whole model. We know that is super expensive. Right. We realized that those quantum techniques build a representation of how the knowledge is distributed inside an LLM, so that, from a different point of view of the system's organization, you can extract or eliminate specific pieces of information with more precision, knowing approximately what you are doing. So this is super important. This Lobotomizer.ai is what is going to appear in the coming months, as I mentioned. So it's not only about compressing and making things cheaper, but also about doing that. Yeah, that makes total sense to me, because it really helps people reduce cost. It also has a sustainability aspect, because you won't have to retrain these models, and training is where the vast majority of the cost is. You do have costs in inference and things like that, but those are typically distributed out. Any other things you see on the horizon that you think are very interesting at the intersection between quantum and AI? Yes, I think this is super important too. One thing I forgot to mention is that those techniques, which come from a completely different mathematical view of how to see nature, can be applied on top of the classical techniques of quantization, pruning, and so on that people are using now to compress models. So the effect on compression is additive.
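That "additive" point can be illustrated: because a low-rank factorization and int8 quantization attack different kinds of redundancy, stacking one on top of the other multiplies the savings. The sizes, rank, and naive per-tensor quantizer below are illustrative assumptions, not any particular product's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)

# Step 1: low-rank (simplest tensor-network) factorization.
r = 32
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = (U[:, :r] * s[:r]).astype(np.float32)
B = Vt[:r, :].astype(np.float32)

# Step 2: naive int8 quantization applied on top of the factors.
def quantize(M):
    scale = np.abs(M).max() / 127.0
    return np.round(M / scale).astype(np.int8), scale

Aq, sa = quantize(A)
Bq, sb = quantize(B)

full_bytes = W.nbytes                  # dense float32 layer
lowrank_bytes = A.nbytes + B.nbytes    # factorization alone
stacked_bytes = Aq.nbytes + Bq.nbytes  # factorization + int8 together
W_hat = (Aq * sa) @ (Bq * sb)          # dequantized approximate layer
print(full_bytes // lowrank_bytes, full_bytes // stacked_bytes)  # 8 32
```

Here the factorization alone gives 8x and quantizing the factors gives another 4x, for 32x total, which is the sense in which the two compression families compose rather than compete.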
I mean, this is something completely new, and we are only scratching the surface. This is going to go deeper and deeper. At the end of the day, if you use, let's say, 100 or 200 million dollars to train a model, okay, maybe nature does not do it that way. You don't need 100 million dollars to educate a child, okay? Maybe the child is not a Nobel Prize winner, but most of the time you don't need a Nobel Prize winner to help you drive your horse wagon. So, okay, I want something cheaper and faster, and these are the techniques that can really deliver that value today. Also, and this is important, we are democratizing the field, in the sense that if you need 200 million, or half a billion, or two billion to train a model, very few companies can do that. If you make it cheaper, even just by half, which is a lot, boy, you are going to have more competitors. That's good. Yep, I agree. Well, I want to thank you, Enrique, for coming on. This has been super interesting, and we'll keep in touch, because you're really out there bridging the gap between classical and quantum computing, which I think is going to become even more important as we move into the future. So thank you for coming on. Thank you very much, Rob. It has been a real, real pleasure. Same here. Stay tuned for more from SuperCloud 6.