 Okay, it's 11:30 and you're in the evolving technology track. We're having an Ask the Experts session on Engineering Tomorrow now, with a panel of illustrious experts from Red Hat. I'd like to introduce Harrison Ripps, who is serving as moderator for this discussion. Harrison, do you want to introduce the other folks, or should I continue? As a manager I like to delegate, so I was actually going to put it on them to introduce themselves. Okay, that sounds cool. Just a quick line. Okay, so this is a completely live session and I hope you enjoy it. If you have any questions, please put them in the chat box. And there'll also be an opportunity to follow up after this in a breakout session if you're interested. And with that, take it away. Cool. Hi everybody. As Heidi said, my name is Harrison Ripps. I'm an engineering manager with the CTO office at Red Hat. And I am joined today by three experts who are also working in the CTO office on various projects. I'm going to go in the order that my screen has everybody arranged, so I'm going to start with you: if you could just let people know your name and what you're working on right now. Hello everyone. My name is Parul Singh and I work for the office of the CTO in emerging technologies, and right now I'm leading the quantum efforts at Red Hat in this department. Awesome. Thanks. And next up is Sophie. Sophie Watson. I'm a data scientist in the AI CoE and part of the forward deployed engineering team. So we've been doing a lot of work around enabling Red Hat's customers to get their machine learning workloads running on Red Hat's open source offerings, and also talking to the field about AI/ML, just making sure we can talk about AI/ML at Red Hat and what's going on. Awesome. Thanks. And last but not least, Andrew. Hey everyone. I'm a software engineer in the office of the CTO as well. 
I am currently part of the networking team, but I am very focused on edge-related solutions and figuring out how Red Hat is going to promote their solutions to customers moving forward. Awesome. Thanks. So this is an Ask the Experts session. If you are in the track with us, I'm just going to change the chat over so that I'm following the track chat. If you have a question for the experts, by all means please post it to the track chat and we will attempt to field your questions. For starters, though, I thought it would be really useful for us just to kind of level set. I'm going to ask each of our experts in turn to explain a little something about the technologies that they've been working with. So Parul, I'll start with you. Can you just give us an idea: what is quantum computing? Sure. So the first thing to understand is the classical computing that we have all been using, and why quantum computing is the new thing that everyone is talking about. Right now we have bits, which are the basic unit of information for central processing. We also have neurons, which provide deep learning execution when we are computing. But it's important to ask ourselves: are they enough, and can we solve all the problems that need to be solved just using bits and neurons? Right now the biggest computer that we have is called Summit, and it is made of bits as well as neurons. It can do 200,000 trillion calculations per second, which is huge, and it runs on IBM Power9 processors. It also uses NVIDIA GPUs. But is it capable of solving all the problems? The answer is no. And hence we have a newcomer in the field of computing, the qubit, which comes from quantum computers. Qubits are going to give us an unprecedented advantage in computing, letting us solve problems that are intractable right now. The really hard problems we have, like factoring large numbers, could all be tackled with quantum computers. 
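To give a rough sense of the scaling being described here: a classical register of n bits holds exactly one of its 2^n possible values at a time, while describing a general n-qubit state requires tracking 2^n complex amplitudes at once. A minimal back-of-the-envelope sketch (the 160-qubit caffeine figure comes from the talk; the atom count is an illustrative order-of-magnitude estimate, not a precise number):

```python
# Rough scaling sketch: classical bits vs. qubit state space.
# A classical n-bit register is in exactly one of 2**n states at a time;
# a general n-qubit state is described by 2**n complex amplitudes at once.

def classical_states(n_bits: int) -> int:
    """Number of distinct values an n-bit register can hold (one at a time)."""
    return 2 ** n_bits

def quantum_amplitudes(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe a general n-qubit state."""
    return 2 ** n_qubits

# The caffeine example from the talk: roughly 160 qubits.
amps = quantum_amplitudes(160)
print(f"160 qubits span {amps:.3e} amplitudes")  # roughly 1.46e48

# Storing that many amplitudes classically is hopeless: even at one
# amplitude per atom, 1.46e48 is a meaningful fraction of the roughly
# 1e50 atoms in the Earth, which is the "percent of Earth's atoms" claim.
```

This is why a handful of qubits can, in principle, represent a state that no feasible classical memory can hold.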
An example that I am a big fan of, because everybody loves coffee: just think about trying to represent a molecule of caffeine on a classical computer. You would need somewhere between one and ten percent of the total atoms on Earth. So if you wanted to build a classical computer that could represent caffeine, you would need that many atoms, which is impossible, but with quantum computers you can use about 160 qubits to do it. So that is the advantage we are talking about that quantum computers can bring, and it is very different from classical computing because it follows different laws of physics, namely quantum mechanics. I don't want to dive into the quantum mechanics aspect of it, but for software engineers and people like us, we can just think of qubits as giving us an unprecedented advantage when solving problems in this domain. Awesome. Very cool. All right, well, thank you, Parul. I am going to come back to Parul with some other questions, but right now I am actually going to shift over to Andrew and say: Andrew, I have heard a lot about edge computing. What is it? It is a pretty murky term, right? Every company seems to have a term for edge computing. In a broad sense, all edge computing means is that we are moving processing out of central clouds and towards the edge. Now this can be applied in many different ways for many different use cases, but that is the core thing to remember. Instead of dealing with round-trip latency to get data back to the user, we are processing data and giving results right there at the edge. That can be on small, lower-resource devices such as smartphones or Raspberry Pis. It is very important as we move forward to ensure low round-trip latency with the advent of smart cars and a whole host of other 21st-century technology that needs this as a requirement. Gotcha. Alright, so I am going to summarize that to make sure I have it. 
It sounds like the answer is: edge is doing some of the work of processing data out where the data is being collected, instead of sending it all back to some central place to do all the processing. Does that sound right? Pretty much the main idea, and if you start with that, it is easier to kind of dive in further. Got it. Cool. Okay, Sophie, over to you. So everything I have learned about artificial intelligence I have learned from Netflix movies and books. So what can you tell me about it? I know what it might be, but what is it really? At its core, really, it is just optimisation and statistics problems. That sounds scary. It is not as scary as it sounds, really. We are just using and training algorithms to solve problems without explicitly giving them a recipe. So we are letting them learn from experience and examples in order to come to solutions. Okay, alright. Okay, so I have a follow-on question for you, which is: I'm interested in all these technologies, and we'll start with AI. And the real question is going to be what Red Hat is doing in this space. But before I ask that question, if I'm not mistaken, you're a data scientist. Red Hat is an infrastructure company. So what is a data scientist doing at an infrastructure company? Having lots of fun, I think. So yeah, that's completely true, Red Hat is an infrastructure company, but we do have a lot of great data science teams here. We use data science internally to improve our own processes. From monitoring the infrastructure itself to check how well it's performing, to using AI to identify alerts so that someone can then go and check: hey, maybe we need to test this machine. Or maybe we think this machine might break in the next week, so we need to do something about it. We can get great insights there from using AI. We're also using it internally in terms of hiring processes and improving general workflows and systems. There's lots of AI going on. 
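Sophie's "learning from examples rather than a recipe" point can be illustrated with a minimal sketch: instead of hard-coding the rule y = 2x, we let a one-parameter model discover it from example pairs by gradient descent. All the numbers and names here are purely illustrative, not from any Red Hat project:

```python
# Minimal "learning from examples": fit y = w * x by gradient descent
# instead of hard-coding the rule. The hidden true relationship is y = 2x.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]

w = 0.0                      # model parameter, starts knowing nothing
learning_rate = 0.01

for _ in range(1000):        # revisit the examples repeatedly
    for x, y in examples:
        prediction = w * x
        error = prediction - y
        w -= learning_rate * error * x   # nudge w to reduce the error

print(round(w, 3))  # converges close to 2.0
```

The model was never told the rule; it recovered it purely from the example data, which is the essence of what "training" means here.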
But aside from using it, we're also innovating in this space. Red Hat was arguably the first company to start talking about doing data science on Kubernetes and to talk about the benefits of that, of which there are many. A data science workflow isn't that different from your standard software engineering workflow, so you can make lots of improvements just by leaning on these great technologies that we've got. Interesting. All right. Very cool. So now I'm going to bring that question over to Parul. What is Red Hat doing in the quantum space? Yeah. So first of all, quantum computers are not going to replace classical computers. If you think that you will get a laptop with a quantum processor, that's not going to happen in the near future. So at Red Hat, what we are trying to do is figure out the best practices for running quantum, classical, and deep learning workloads together in a co-processor model. And we started doing that with GPUs. When we started the effort, the GPU was also not present on the physical device where the traditional classical computer would be; there would be remote back ends that we would call. So similarly, we are going to do the same thing with quantum computers. We will have the core application running on the classical side, and specialized tasks or workloads will be offloaded to the quantum computers. The caveat is that we will have to take into account the network latency and bandwidth. But what we are trying to achieve is a co-processor model where you have the classical computers, the neurons, and the qubits all running in tandem to solve problems at a faster speed. 
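The co-processor model described above, a classical core application that offloads specialized tasks to accelerators, can be sketched as a simple dispatcher. Everything here is hypothetical, illustration-only Python; the handler names and task format are made up and are not Red Hat's actual operator code or any real API:

```python
# Hedged sketch of a co-processor dispatch model: the core application runs
# classically and routes specialized workloads to an accelerator back end.
# Handler names and the task format are illustrative, not a real API.

def run_on_cpu(task):
    return f"cpu:{task}"

def run_on_gpu(task):          # e.g., deep learning workloads
    return f"gpu:{task}"

def run_on_qpu(task):          # e.g., quantum circuits, possibly remote,
    return f"qpu:{task}"       # where latency and bandwidth matter

BACKENDS = {"general": run_on_cpu, "ml": run_on_gpu, "quantum": run_on_qpu}

def offload(kind, task):
    """Route a task to the right co-processor, defaulting to the CPU."""
    handler = BACKENDS.get(kind, run_on_cpu)
    return handler(task)

print(offload("quantum", "factor_15"))  # qpu:factor_15
print(offload("ml", "train_model"))     # gpu:train_model
```

The design point is that the application itself stays classical; only the workloads that benefit from a specialized back end are routed away, which is why network latency to that back end becomes the critical constraint.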
And for that we have developed two operators: one of them helps you develop your quantum circuits on OpenShift or any Kubernetes cluster, and the other is the application runtime operator that helps you schedule these tasks more efficiently on quantum computers. All right, cool. So I was going to ask, when will I be able to buy a laptop with a quantum processor? It sounds like not anytime soon. But I guess my question then is: are there quantum machines now? It sounds like you've been working on some kind of connector so that I can have workloads that actually run against a quantum machine. Where would I find a quantum machine that I can actually work against? Yeah, that's a very good question. So we started doing this experimentation, and I'd like to point out that we are not productizing our efforts right now; this is all prototyping. We started with Qiskit, which is an open source framework for developing quantum circuits. And the back ends that we are consuming are IBM quantum computers. So in our experiments, we are connecting to the IBM quantum computers on the back end, and for developing the circuits we're using the Qiskit open source framework. Honeywell also supports Qiskit. So whatever we have developed right now can be consumed by as many vendors as are willing to provide a Qiskit implementation. That's really cool. And just one last question there before I go to Andrew and start picking his brain about edge again. Just based on your observations, have you actually sent jobs to a quantum machine through this connector? And what's the turnaround time like on that? So I have tried doing both. There's also the concept of a simulator, where you're not running the jobs against actual quantum computers; you're just simulating on a classical computer. 
And that is very fast, and there's no noise in the answers that you get; it's like a hundred percent right answers. But when you run on real quantum computer back ends, it depends on how busy that back end is and what the latency is like. So it depends on a lot of aspects. But I would say it's not so bad; within five minutes you get the answers. The time it takes for the computation is not as bad as the wait for your task to be scheduled. That is one thing that we are trying to work around. Got it, got it. So for developers, they could be working against a simulator that's turning around results fairly quickly, but in practice the delay might be more like a couple of minutes or something like that. Right, right. All right, cool. I'd just like to add one thing, and this is why we are making the second operator, the runtime operator: IBM is very focused on doing real-time and near-time computation. So that operator would do the scheduling; if my classical computer is in Switzerland and it needs to offload its quantum workloads, we try to find a quantum computer that is in close proximity, locality-wise, so that the latency is reduced. Got it, got it. Cool. Thank you. Now, Parul actually mentioned something which is a really good point, and I should have said it right up front. For everybody who's attending our Ask the Experts session today: the office of the CTO at Red Hat is an awesome place. We get to play around with all of these exciting emerging technologies. But it's important to know that while we're doing a lot of research in this area and trying out different kinds of technology, there's no guarantee that Red Hat is actually going to productize any of it. Part of our job is just trying to figure out: is this really a viable thing that we want to be involved with? 
So we get to talk about the cool stuff, but ultimately there's a bigger decision about whether or not it gets productized. There's my disclaimer. You heard it here. Okay. All right. Now, going to Andrew, I'm going to ask the same question: what is Red Hat doing in the edge space right now? So with Red Hat, it's really important to distinguish the fact that we aren't concerned with the actual IoT or low-resource devices running on the edge. We're an infrastructure provider. We kind of stop at the gateway server at the far edge, where we are talking directly to these IoT devices. Now, the important thing that we're doing at Red Hat is trying to create a unified architecture to provide solutions to run these edge workloads across an array of devices, right? Most edge workloads today are custom. A company might be responsible for recording license plates as cars go over a highway; they create a custom stack that talks to a custom cloud. And I think Red Hat's really concerned with providing a unified developer and infrastructure experience to kind of cut out this custom-defined edge workload infrastructure. Essentially, we want to be able to run edge workloads across an open hybrid cloud setup, i.e. VMs, containers, bare metal nodes, anywhere you can think of. And we want to provide an easy way for developers to interface with these workloads across an entire solution. So I think that's where Red Hat really comes in. We have all the tools, and we're putting them together in a way that's going to make it easier for developers and customers to actually create these edge solutions. One example specifically: we've now created our first edge proof of concept in an industrial setting. We're running a bunch of analytics and data ingestion in a warehouse, involving supply chain analysis and doing optimization on the edge. 
And the really cool part is we can provide real-time answers on the edge to, okay, how fast should this conveyor belt be going, or how slow should it be going? But we can also take the data from the edge, report it back to the public cloud, and continue to update and improve our models as we go. So that's just one specific example. Cool. All right, now we've been getting some questions in the chat, and I'm just going to go back to the first one since I'm talking with you, Andrew. Heidi wants to know, and I guess this is based on your experience working with edge: would you get into a self-driving car in Boston? Well, that's a really loaded question, Heidi. Who's the company backing it? I don't know. Is it a Tesla? I guess I'll ask the question in a different way, because I think there's actually an interesting thing there: what are the challenges, right? If we're talking about a smart car as the edge device, what are the challenges that we have to tackle to make it safe for that car to operate in the city? I'm thinking particularly of even basic stuff, like if you use a GPS in the middle of Boston, you can lose the link to the satellites because there's just so much noise, you know? So, yeah, talk through that a little bit. So as I touched on earlier, the smart car scenario is a great example, and it's used all the time when talking about edge workloads. Network latency is the number one problem, right? If the smart car or autonomous car sees a red light, it can't send the image it's seeing of the red light back to the central cloud, get an answer, okay, this is a red light, and come back to the smart car. That'll never work; it's going to take too long. That's round-trip latency. We have to be running analytics on the edge that can see the camera frame: oh, we see a red light, we'd better stop. And that latency needs to be really, really small. That's kind of the biggest thing with the autonomous car example. 
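A rough latency budget makes Andrew's point concrete: compare how far a car at highway speed travels during a cloud round trip versus local edge inference. The latency numbers below are illustrative assumptions for the sketch, not measurements of any real system:

```python
# How far does a car travel while waiting for an answer?
# The latencies below are illustrative assumptions, not measurements.

speed_kmh = 100
speed_m_per_ms = speed_kmh * 1000 / 3_600_000   # metres per millisecond

cloud_round_trip_ms = 100   # to a distant central cloud and back
edge_inference_ms = 5       # processed right there on the edge device

def distance_travelled(latency_ms: float) -> float:
    """Metres the car covers before the decision arrives."""
    return speed_m_per_ms * latency_ms

print(f"cloud: {distance_travelled(cloud_round_trip_ms):.2f} m")  # ~2.78 m
print(f"edge:  {distance_travelled(edge_inference_ms):.2f} m")    # ~0.14 m
```

Under these assumed numbers, the cloud round trip costs the car roughly a twenty-fold longer blind distance than deciding at the edge, which is why the red-light decision has to happen locally.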
And as to the original question of would I do it now, today, in Boston: I think so. We're pretty close. The advent of 5G is definitely helping push edge workloads, because 5G has such stringent network latency requirements that I think as we continue to roll out 5G, this stuff will get progressively safer and safer. And in the future, I mean, almost every car will probably be autonomous, especially in the trucking industry. But people still definitely want to drive cars, and I'm one of those people. Alright, Sophie, the next question's for you. I see you did reply in chat, but I'll go ahead and ask it because, again, I think there's a deeper question here. Based on your knowledge of AI/ML, are you afraid that robots will take over the world? Well, right now in 2020, I think robots taking over the world would be the least of our worries, and it might actually be quite nice. That's a great improvement. To be frank, there are still so many things that machine learning can't do. You see every day on Twitter some machine learning algorithm or bot that is misbehaving, that's introducing bias into itself because of the data it's trained on. Similarly, there are loads of places where we don't want to apply AI. For example, Harrison, if you come to me and say, I'd like a loan, please, and I look at all your bank history and I say, nah, I'm not going to give you a loan, but I can't tell you why, my algorithm just decided, you're not going to be happy with that, right? So we need that interpretability and that explainability alongside AI, and that's something that people are working on, and it's improving over time. But really, I welcome the robots, I think. Well, actually, this gets me thinking. We're talking a lot about AI and ML really around, I think, its core function, which is pattern recognition, right? 
But are there other applications? I mean, I think that, you know, we hear about people training models, and this is an area where this kind of technology is really powerful because you can train it on a lot of different examples and then kind of set it loose to do things. Are there uses beyond that? Are there any areas of research beyond that? I mean, I genuinely don't know, I'm curious. Help me out here; beyond training models, beyond? I guess, or maybe I'm thinking about this a little too obtusely. Let me ask it this way. What patterns are hard to teach to a machine learning, you know, something capable of learning patterns? Have you ever encountered patterns that are hard to teach? Oh, I see. Absolutely. I mean, there are kind of two notions about AI and ML. One is that it's really easy, and you just throw a load of data at something and it's going to give you an answer. And the other is that it's impossible and nobody can do it and it's magic. In practice, if you throw a load of data at something and you don't really understand that data, or you haven't processed it nicely, the model isn't going to be able to learn anything useful. So it's really about combining that subject matter expertise, talking with experts in the area about the data. If I set out to make some new AI for self-driving cars, I imagine it would be terrible, and Andrew definitely wouldn't get in that car in Boston, mainly because I can't drive. So I don't have the expertise to be able to put that information into the data that we then pass into the model. As well as the pattern recognition that you're talking about, there's also this other class of machine learning models called reinforcement learning, where we don't actually provide patterns. We just sort of let the machine go and learn how to act based on a point system, I suppose. 
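That point-system idea can be sketched as a tiny reinforcement-learning loop: an agent tries actions, gets reward points, and gradually prefers the action that pays off. This is a toy two-armed bandit purely for illustration, with made-up payout numbers and nothing to do with any real product:

```python
import random

# Toy reinforcement learning: a two-armed bandit. The agent is never told
# which arm is better; it learns purely from the reward "point system".

random.seed(0)

true_payout = {"left": 0.3, "right": 0.8}       # hidden from the agent
value = {"left": 0.0, "right": 0.0}             # agent's running estimates

for step in range(2000):
    # Explore occasionally; otherwise exploit the best-looking arm.
    if random.random() < 0.1:
        arm = random.choice(["left", "right"])
    else:
        arm = max(value, key=value.get)
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    value[arm] += 0.05 * (reward - value[arm])  # nudge estimate toward reward

best = max(value, key=value.get)
print(best)  # the agent has learned to prefer the better-paying arm
```

No pattern was ever labelled here; the preference emerged entirely from accumulated reward, which is the distinction Sophie is drawing from supervised pattern recognition.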
And we say, yes, you're doing well if it does something good, and no, you're doing badly if it does something bad, so it learns over time. So that's another aspect. Gotcha, gotcha. All right, that's very cool. Very cool. And just as a reminder for folks who are hanging out with us today, please feel free to send your questions into the track chat. We'll do our best to field them. So here's one of the exercises that I was thinking would be really interesting. I've been lucky enough to see a couple of really interesting demos come together, and the demos that excite me the most are the ones that are tied to real-world problems. And I can think of a number of demos that could easily involve two of these three technologies. So I kind of wanted to have as a toss-up question: could we think of a demo that actually demonstrates edge, AI, and quantum? And I guess while you're thinking about that, I'm going to put the focus on Parul for a minute and just say: what are the real-world problems that people are tackling with quantum right now that we might be able to pull into some kind of very complicated demo system? Great question. So definitely, when Andrew was talking about the car seeing a signal and having to stop, it reminded me of one of the things the IBM folks were talking about: why they want real-time and near-time computation to happen. It's because when you're offloading the task from the classical computer, the latency is very important. So there are a lot of problems. The problems that interest hackers the most are factoring problems; everybody wants to break RSA encryption. But there are also constructive problems that we can solve, and one of the problems that Microsoft is working on is how to find the best molecular configuration for a fertilizer. Quantum works very similarly to how nature works, because it can store a lot of information and it can hold many results at the same time. 
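The "many results at the same time" picture can be sketched with a tiny statevector simulation: applying a Hadamard gate to each of two qubits puts the register into an equal superposition over all four basis states, which is the starting point for exploring many candidate answers at once. This is a pedagogical pure-Python sketch, not Qiskit itself:

```python
import itertools
import math

# Tiny statevector sketch: n qubits start in |00...0>, and a Hadamard on
# each qubit yields an equal superposition over all 2**n basis states,
# the "explore many paths at once" starting point. Pure Python, no Qiskit.

n = 2
state = [1.0] + [0.0] * (2 ** n - 1)      # amplitudes; |00> has amplitude 1

def hadamard(state, qubit, n):
    """Apply a Hadamard gate to one qubit of an n-qubit statevector."""
    h = 1 / math.sqrt(2)
    new = state[:]
    for i in range(2 ** n):
        if not (i >> qubit) & 1:           # pair index i with its partner j
            j = i | (1 << qubit)
            a, b = state[i], state[j]
            new[i], new[j] = h * (a + b), h * (a - b)
    return new

for q in range(n):
    state = hadamard(state, q, n)

for bits, amp in zip(itertools.product("01", repeat=n), state):
    print("".join(bits), round(amp, 3))    # every basis state: amplitude 0.5
```

After the two gates, all four basis states carry equal amplitude, so a subsequent computation acts on every candidate at once; real algorithms then interfere these paths so the good answer dominates at measurement.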
A classical computer goes down one track and tries to find the answer to a problem. A quantum computer can pursue many answers simultaneously, and that is the most interesting thing about quantum computers. So right now, at least for me, the most interesting thing is figuring out a chemical composition for fertilizers that does not affect the environment as much as existing fertilizers do, but still boosts the growth of plants. So yeah, it's a problem that has to be approached in the way that nature works, and that is the most enticing thing about the quantum computer: it works exactly as nature works. That's really interesting. So it sounds like you're describing effectively an organic chemistry problem. Can we come up with an atomic or molecular configuration that has certain qualities and doesn't have certain other qualities, so that it's not interacting badly with the environment? Exactly. And to do that on a classical computer, you would have to pick one path and then go down that path. But on a quantum computer, you can start from the same point and diverge into multiple paths simultaneously, and then at the end of the experiment you decide: oh, this configuration is the best. Let's discard the other solutions and stick with this one. All right. Now, I don't know how deep you can go on this particular one, but you mentioned security and particularly breaking encryption. And it's my understanding that quantum is actually going to be able to, you know, easily, and I'm going to air-quote that, decrypt some pretty standard encryption methods. Can you talk about that at all? I know that they've identified some quantum-safe encryption stuff. But what does that mean? How do they know that an encryption method is quantum safe? And I don't know if you know the answer; I'm just throwing it out there. 
No, I don't know exactly. But what I know is that it is quantum safe only because we are restricted in the number of qubits that we have in quantum computers as of now. So it depends on the number of qubits. Right now the largest quantum computer that we have is Sycamore, which was developed by Google, and it has 54 qubits. Whatever key we take in RSA to make the encryption cannot be factored with 54 qubits; it would need more qubits. So it only depends on the size of the quantum computers that we have. And if we reach a point where we have quantum supremacy, and quantum supremacy is a term describing the point where a quantum computer can solve a problem way faster than any classical computer could, then I don't know what would be a quantum-safe algorithm. But we have not reached quantum supremacy right now. Let's see. That's pretty fascinating, and a last question for you, sort of digging deep into this quantum stuff. So, I have written programs for normal computers. How does programming for an array of qubits work? Do you get a CS degree and it's just a different programming language, or how does that actually work? How do you write programs for qubits? Yeah, so as I was talking about Qiskit earlier, Qiskit is a set of libraries or APIs that you use to develop quantum programs, and quantum programs are essentially circuits. You design the logical circuits that are going to go and solve a problem, and you use Qiskit for that. So if you tell me, Parul, I want you to develop this algorithm, I am not the person to do that, because I don't understand the whole of quantum mechanics and the physics behind it. There is a set of personas, people known as quantum physicists or quantum algorithm designers. 
They're the people who would write these circuits using frameworks like Qiskit, and then you would run these circuits, these algorithms, on quantum computers. So it's a different programming language. It is not the same as writing a Python hello world. It is a hello world, but you use a different programming language for it. Got it. Got it. Okay, very cool. So, I had sort of brought up that question about, you know, could we all imagine some system that leverages quantum, AI, and edge. And I still want you to keep that in mind, but actually something came up in the chat, so I figure I'll ask about this. So, Sophie: Open Data Hub, what is it? Open Data Hub is a community project that has come out of the AI CoE here at Red Hat. And it's also an operator, a one-click-install operator for OpenShift. With that you get basically everything that you need to do machine learning. It gives you exploratory environments where you can quickly get access to your data and test new packages, and then it gives you tools to go ahead and serve those models and put them into production. As data scientists, we don't want to sit and trawl through pages of YAML; that would be worse than the robots taking over, for sure, if we just had to read YAML all day. We just want to be able to do data science. So the Open Data Hub is a really nice project that is making data science run well on OpenShift. And it's actually downstream from the Kubeflow project, which is a popular community project for AI/ML on Kubernetes that came out of Google. So if you want to know more about that, you should definitely head over to our booth in the expo hall; lots of great people from the Open Data Hub team are going to be there to answer your questions all day. Awesome. Very cool. Do you happen to know, are there any particularly interesting projects that are using Open Data Hub? 
What kind of problems are people trying to solve right now? Oh gosh. Well, the types of problems that people try to solve with machine learning really vary, right? You think it's all self-driving cars, and in some ways we wish it was, but actually it can be much simpler. Just things like, I don't know, if I'm a company and I have customers, how do I keep them happy? How do I identify when one of my customers might be thinking about leaving? Or how do I get them to the right person on the other end of the phone quicker? And the Open Data Hub is really flexible enough to solve these problems, as well as to identify faults in self-driving cars. So it's really just about giving data scientists the platform where they can go and do their work, whatever that may be. Got it. Got it. I imagine that the data sets that we're talking about are quite large. I don't know for sure; I haven't tried any projects on Open Data Hub. But do people have to pay to use Open Data Hub? Is there a storage limit or anything like that? So you'd have to go to the booth to get the details. The Open Data Hub is a community project, so you can go ahead and download the operator. You're not explicitly storing your data on the Open Data Hub, so you can store your data however you usually store your data. You can have it in an S3 bucket and connect to that. Wherever your data is, you can connect to it. Got it. Okay. So it's bring-your-own-thumb-drive and then you're good to go. Okay. All right. Very cool. All right. So, Andrew, I'm going to come back to you with another edge question. I know that smartphones aren't the only edge devices, but there are certainly a lot of them out there. So I imagine that the 5G rollout has got to be related to this whole edge revolution in some way or another. Can you speak to that? Am I on the right track? 
Yeah, I mean, the 5G rollout goes hand in hand with the growth of edge. 5G presents really stringent network latency requirements that aren't going to be handled with round trips all the way back to the public cloud and back to the user. It's also going to make connectivity between edge devices that much better. Okay, so say that in the future you have a city full of smartphones, and you have some crazy workloads that may not be able to be handled by a single smartphone. With 5G and with increased network improvements, I think we might one day see the sharing of resources across an array of smartphones and other edge devices that allows you to handle bigger and bigger workloads on the edge, closer to the user. Now, it's important to remember the central cloud will always be important. We need a place to go back and learn with the data we're collecting at the edge. But in order to handle autonomous cars, real-time emergency medicine, whether that's sensors in an ambulance, or real-time sensors on an industrial floor, we're going to need to be processing these workloads on the edge. Interesting. All right. Very cool. Very cool. And one thing I did want to touch on: in the chat, Heidi asked a great question. I'll just read it. That sounds like an interoperability nightmare. How do you deal with the issues of differing functionality and different versions of software with such a broad range of edge devices? It is a nightmare. Think about it: most edge systems will have tens of thousands of edge devices. If you think about, I don't know, a weather system for a city, or a traffic system for a city like I brought up earlier, you have thousands and thousands of cameras. How do you deal with this? It's really important that the platform coalescing and driving all these edge devices is a unified platform. 
So all of us will then be able to write microservices that can be deployed right there near the edge that can handle updating the devices, making sure the devices are running in a good state, and also be able to treat them much like containers in the sense that they are almost like cattle: one dies and another will spin up to take its place. And that's the beauty of Kubernetes and other orchestration platforms. I think with Red Hat's solution, we'll be able to push it all the way right there to the edge, and it will be able to interface elegantly with all these crazy different versions of edge devices. Interesting. Very cool. One thing that I wanted to just note for folks who are listening in, and actually, amazingly, we're almost out of time, this has really flown by. One thing I just wanted to note is if you are not familiar with the term operator, which we've thrown around a little bit during this conversation, what we're talking about is a kind of container that can run in a cloud orchestration environment, Kubernetes, OpenShift, what have you. And this container is privileged to actually set up other containers to run. So it literally functions as an operator of one or more other containers inside your cloud cluster environment. So that's what it is. There's lots of reading that you can do on that. So back to my demo idea, here's what I'm hearing, and my experts can give me a thumbs up or thumbs down on whether I'm totally off base or not. Wouldn't be the first time. But here's my idea. So we have some kind of demo environment. It seems like with AI/ML technology, we could have that out at the edge doing some of our data processing right there and then. But some of the things that we want to glean from our data are going to be sufficiently complex that we may benefit from some kind of quantum application, a quantum circuit.
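The "cattle, not pets" idea behind operators boils down to a reconciliation loop: compare desired state to actual state and make up the difference. Here is a purely illustrative sketch in plain Python; the names and the in-memory "fleet" are made up, and no real cluster API is involved:

```python
# Toy reconciliation loop in the spirit of a Kubernetes operator.
# Everything here is hypothetical -- no real cluster API is used.
import itertools

_ids = itertools.count(1)

def reconcile(desired_replicas: int, running: set) -> set:
    """Return the running set after one reconciliation pass."""
    running = set(running)
    while len(running) < desired_replicas:   # a "cow" died: replace it
        running.add(f"edge-agent-{next(_ids)}")
    while len(running) > desired_replicas:   # scaled down: remove extras
        running.pop()
    return running

fleet = reconcile(3, set())        # initial pass: spin up 3 agents
fleet.pop()                        # simulate one agent dying
fleet = reconcile(3, fleet)        # next pass restores the count
print(len(fleet))                  # prints 3
```

A real operator does the same thing against the Kubernetes API server, watching custom resources for the desired state instead of taking it as a function argument.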
So I can imagine a demo system where we have edge-deployed devices, and there's some amount of, you know, processing happening right there in those devices. Then we're sending back some amount of data, probably not raw data, probably cooked data, you know, stuff that's already been vetted by our AI/ML patterns, that goes back to a central place. And then we bring in the big guns, which is calling out to Qiskit and, you know, some quantum machine that can do some of the heavy lifting that we can't do within the cluster. So thumbs up, thumbs down: does that sound like an actual thing? Have I understood things correctly? When can we see this demo? I know, right? That sounds pretty cool. So, my last question for everybody, since we only have a few minutes left: I'm going to go around quickly, and the question is going to be, how can people get involved if they're interested in learning more? So Parul, I'll start with you. So, if you are interested in writing quantum circuits, like the Hello World, we have built an online tutorial which you can access. You don't need to have a Kubernetes cluster or anything, you can just go to learn.openshift.com, and we have quantum computing scenarios that you can go and try yourself. And also, the best way to interact with us is through the community, because everything that we're doing is managed by the Qiskit community, and I can just send the links. So all our source code is there, all of the videos for the presentations, everything is there in the Qiskit open source community. Awesome. Thank you. Andrew, how can people get involved with edge? Yeah, definitely. So I'm going to go ahead and post a link to Red Hat's edge computing approach; that's a really good place to dive in. A lot of companies are promoting different standardized ways of running their edge systems, but Red Hat's really is different because it's all open. It's all open to everyone, and you can see the whole stack.
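For a flavor of what that quantum "Hello World" looks like, here is the classic two-qubit Bell-state circuit worked out as a tiny statevector simulation in plain NumPy. This is an illustration of the underlying math, not the Qiskit API itself (in Qiskit you would build the same circuit with `QuantumCircuit(2)`, `h(0)`, and `cx(0, 1)`):

```python
# "Hello World" of quantum computing: prepare a Bell state by applying
# a Hadamard gate to qubit 0, then a CNOT from qubit 0 to qubit 1.
# Pure-NumPy statevector math for illustration (not the Qiskit API).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # |00> -> |00>
                 [0, 1, 0, 0],                 # |01> -> |01>
                 [0, 0, 0, 1],                 # |10> -> |11>
                 [0, 0, 1, 0]])                # |11> -> |10>

state = np.array([1.0, 0.0, 0.0, 0.0])         # start in |00>
state = np.kron(H, I) @ state                  # Hadamard on qubit 0
state = CNOT @ state                           # entangle the two qubits

probs = state ** 2                             # measurement probabilities
print(probs)  # 50/50 chance of |00> or |11>, never |01> or |10>
```

The 50/50 split between |00> and |11>, with zero probability of the other two outcomes, is the entanglement signature that makes this the standard first circuit in every quantum tutorial.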
And I think open is the way forward here: in order for edge to have any sort of coalescence, it needs to be open instead of proprietary, so get started by reading that. And then there are also some blog posts that I will find, involving some demos that have been put together for edge systems. You can start looking there and seeing how different Red Hat products are being used at or near the edge. Awesome. And Sophie, how can people start getting their hands dirty with AI/ML? Yep, go and check out opendatahub.io, and make sure you come and visit the AI/ML booth in the expo hall for the next couple of days. Awesome. Well, thank you all for joining in today, and thanks in particular to our experts: Parul, Sophie, Andrew, I owe you virtual beers. Thank you so much for hanging out with us today. And yeah, thanks again to everybody listening in. I hope you have a great rest of your virtual conference today. Thank you all for joining us. The links in the chat will stay here, so you can come back later if you want to try and find them, if you were interested in a topic but you have to run off for lunch. Feel free to join again later and just scroll back in the chat. Also, there's a breakout room, which I put the link to. It's the New Edition breakout room, if anybody likes New Edition. If not, too bad, you have to go there anyway. So feel free to stop by there anytime if you're interested in talking with other technologists. Thanks very much. And our next session is after the lunch break, and it will be Predicting the Traffic Jam, about congestion-aware routing. So see you at 12:50. Thanks all.