Welcome back to Boston. This is theCUBE's coverage of Red Hat Summit 2022, the first Red Hat Summit we've done face-to-face in at least two years; 2019 was our last one. We're kind of rounding the far turn, coming up for the home stretch. My name is Dave Vellante. I'm here with Paul Gillin. A.J. Mungara is here, Senior Director of the IoT group for developer solutions and engineering at Intel. A.J., thanks for coming on theCUBE.

Thank you so much.

We heard your colleague this morning in the keynote talking about the dev cloud. I feel like I need a dev cloud. What's it all about?

So we've been working with developers and the ecosystem for a long time, trying to build edge solutions. A lot of the time people think about edge solutions as just compute at the edge, but really you've got to have some component of the cloud, there's the network, and there's the edge. And the edge is complicated because of the variety of edge devices you need. When you're building a solution, you've got to figure out: where am I going to push the compute? How much of the compute am I going to run in the cloud? How much am I going to push to the network? And how much do I need to run at the edge? A lot of times, developers don't have one environment where all three come together. Today, the way it works is you have all these edge devices that customers buy, install, and set up, and then they have a cloud environment where they do their development, and they only figure out how all of this comes together when they're integrating it at the customer site, in the solution space. So what we did is take all of these edge devices, put them in the cloud, and give you one environment, cloud to edge, where you can build your complete solution.

So it essentially simulates...
No, it's not simulating.

There's a span, so the cloud spans from the centralized cloud out to the edge, yes?

No, no. We took all of these edge devices that would theoretically get deployed at the edge, this whole variety of devices, and put them in a cloud environment. These are non-rack-mountable devices that you can buy on the market today. We have about 500 devices in the cloud, everything from Atom to Core to Xeon to FPGAs to HDDL cards to graphics. All of these devices are available to you. So in one environment, you can connect to any of the hyperscalers' clouds, you can connect to any of these network devices, you can define your network topology, and you can bring in any of your sources sitting in a Git repository, or Docker containers that may be sitting in a cloud environment or on Docker Hub. You can pull all of these things together in one place where you can build, test, and performance-benchmark your solution, so you know, before you actually go to the field to deploy it, what type of sizing you need.

So let me make sure I understand. If I want to test an actual edge device using 100-gig Ethernet versus MPLS versus 5G, you can do all that without virtualizing?

All the edge devices are there today, and the network part of it we are building together with Red Hat, where we are putting everything in this environment. So the network part is not quite solved yet, but that's what we want to solve. The goal here is: let's say you have 5 cameras, or 50 cameras with different resolutions, and you want to run some AI inference workloads at the edge. What type of compute do you need? What type of memory? How many devices? And where do you want to push the data? Because security is very important at the edge.
So you've really got to figure out: I want to secure the data in flight, I want to secure the data at rest, and how do I do the governance of it? How do you do service governance so that all the services, the different containers running on the edge device, are behaving well? You don't want one container hogging all the memory or all the compute, or at certain points in the day you might have a priority for certain containers. So where do you run all of these models? We have an environment where you can run all of that.

Okay, so take that example of AI inferencing at the edge. I've got an edge device and I've developed an application, and I'm going to say, okay, I want you to do the AI inferencing in real time.

Right.

Maybe there's some kind of streaming data coming in, and I want you to persist, I don't know, every hour on the hour, save that timestamp. Or if some event occurs, if a deer runs across the headlights, I want you to persist that data and send it back to the cloud. And you can develop that, test it, benchmark it?

Right. And then you can say, okay, in this environment I have five cameras at different angles, and you want to try it out. And we have a product, Intel OpenVINO, which is an open source toolkit that does all of the optimizations you need for edge inference. So, to recognize a deer in your example, I develop the training model somewhere in the cloud. I've annotated the different video streams, and now I know I can recognize a deer. Now, when the deer is coming and you want to take an action immediately, you don't want to send all of your video streams to the cloud; it's too expensive, the bandwidth costs a lot. So you need to compute that inference at the edge.
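The pattern being described here, infer on every frame locally and only push a compact event record to the cloud, can be sketched roughly as follows. This is a minimal illustration, not Intel's implementation: `detect_deer` is a hypothetical stand-in for an edge-optimized model (a real pipeline would use something like OpenVINO for that step), and `send_to_cloud` stands in for whatever uplink the solution uses.

```python
import json

CLOUD_INBOX = []  # stand-in for the cloud endpoint receiving event records

def detect_deer(frame):
    # Hypothetical model call; returns a confidence score for "deer".
    return frame.get("deer_score", 0.0)

def send_to_cloud(event):
    # Stand-in for an uplink (MQTT, HTTPS, etc.); here we just record it.
    CLOUD_INBOX.append(event)

def run_edge_loop(frames, threshold=0.8):
    """Run inference at the edge; only small event records leave the device."""
    for frame in frames:
        score = detect_deer(frame)
        if score >= threshold:
            # Forward metadata about the event, not the raw video stream.
            send_to_cloud({"ts": frame["ts"], "label": "deer", "score": score})

frames = [
    {"ts": 1, "deer_score": 0.1},
    {"ts": 2, "deer_score": 0.95},  # the deer crosses the headlights
    {"ts": 3, "deer_score": 0.2},
]
run_edge_loop(frames)
print(json.dumps(CLOUD_INBOX))
```

The point of the design is bandwidth: three frames are processed, but only the one event worth acting on is sent upstream.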
To do that inference at the edge, you need an environment where you can actually do it. And to build that solution: what type of edge device do you really need? What type of compute? How many cameras are you computing on? And you're probably not only recognizing a deer, you're recognizing other objects too.

Sure.

You do all of that. In fact, one thing that happened was I took my nephew to the San Diego Zoo, and he was very disappointed that he couldn't see the chimpanzees and the gorillas that were there. He was very sad. So I said, all right, there should be a better way. I saw there was a camera feed streaming, so we ran an edge inference with some logic saying: at this time of day the gorillas get fed, so the likelihood of actually seeing the gorillas is very high, and you go at that point in the day. That's what you capture; that's what you do. You want to develop that entire solution, based on weather, based on other factors. You need to bring all of these services together into a solution, and we offer an environment that allows you to do it.

Will you customize the edge configuration for the developer? If they want 50 cameras, you don't have 50 cameras available, right?

For cameras, we have a streaming capability that we support. You can upload all your videos and say, I want to simulate 50 streams now, or 30 streams, or just two or three videos that you pull in, running different inference algorithms simultaneously at the edge. All of that is supported. And the bigger challenge at the edge is that developing a solution is fine, but when you go to actual deployment, and post-deployment monitoring and maintenance, making sure you're managing it, it's very complicated.
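The stream-simulation idea, replaying one uploaded video as many concurrent "camera" feeds, can be sketched with plain threads and a shared queue. This is a toy illustration of the concept under stated assumptions (the frame strings and function names are made up; a real system replays actual video at real frame rates):

```python
import queue
import threading

def simulate_stream(stream_id, frames, out_q):
    # Stand-in for replaying one uploaded video as a live camera stream.
    for frame in frames:
        out_q.put((stream_id, frame))

def run_simulation(n_streams, video):
    """Replay one uploaded video as n_streams concurrent simulated feeds."""
    out_q = queue.Queue()
    threads = [
        threading.Thread(target=simulate_stream, args=(i, video, out_q))
        for i in range(n_streams)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Drain the queue; in a real pipeline, per-frame inference runs here.
    results = []
    while not out_q.empty():
        results.append(out_q.get())
    return results

video = ["frame-a", "frame-b", "frame-c"]  # a tiny stand-in "video"
frames = run_simulation(50, video)
print(len(frames))  # 50 streams x 3 frames = 150
```

This is why uploading a handful of videos is enough to size a 50-camera deployment: the load on the device comes from the number of simultaneous streams, not from owning 50 physical cameras.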
What we have seen is that over 50% of developers, 51% to be precise, have developed some kind of cloud native application recently. We believe that if you bring that cloud native development model to the edge, then your scaling problem, your maintenance problem, how you actually deploy it, all of these challenges can be better managed. You run all of that on a Kubernetes orchestration layer, and we run everything on top of OpenShift, so you have a deployment-ready solution right there. Everything is containerized; you have it all as Helm charts or Docker Compose files. You've tested it in this environment, and now you take that to deployment, and if it's any standard Kubernetes environment or OpenShift, you can deploy your application straight away.

What does that edge architecture look like? What's Intel's and Red Hat's philosophy around what's programmable, and how is it different? I know you can run SAP in a data center; you guys have that covered. What does the edge look like? What's that architecture, silicon, middleware? Describe that for us.

So at the edge, think about it: it can be a traditional industrial PC. You have a lot of Windows environments, and you have a lot of Linux there now in the edge environment, quite a few of these devices. I'm not talking about the far edge, where there are tiny microcontrollers; I'm talking about the devices that connect to those far edge devices, collect the data, do some analytics, do some compute, that type of thing. The far edge devices could be a camera, a temperature sensor, a weighing scale, anything.
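Because the tested workload is already containerized, "straight away deploy" on any standard Kubernetes or OpenShift cluster ultimately means shipping an ordinary Deployment object, typically templated inside the Helm chart. As a rough, hypothetical sketch (the app name and image are made up for illustration), the shape of such a manifest, built here in Python, is:

```python
def deployment_manifest(name, image, replicas=1):
    """Build a minimal Kubernetes apps/v1 Deployment for a containerized edge app."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},  # must match the pod labels
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

# Hypothetical image reference, for illustration only.
manifest = deployment_manifest("edge-inference", "quay.io/example/edge-inference:1.0")
print(manifest["kind"], manifest["spec"]["replicas"])
```

The same object works under plain Kubernetes or OpenShift, which is the portability argument being made: test against the DevCloud's OpenShift environment, then hand the chart to whatever cluster runs at the edge site.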
And then, instead of pushing all the data to the cloud to do the analysis, you're going to have some set of edge devices collecting all this data and making decisions close to the data. You're doing some analysis there, so you need analysis tools and certain other things. And let's say you want to run RHCOS or RHEL or any of these operating systems at the edge; then you have the ability to manage all of that using a control node. The control node can also sit at the edge in some cases, like in a smart factory, where you have a little data center.

Or even in a retail store.

Yeah, a retail store: behind a closet, you have a bunch of devices sitting there, and those devices can all be managed and clustered in an environment. So now the question is: how do you deploy applications to that edge? How do you collect all the data coming through the cameras and other sensors, process it close to where it's being generated, and make immediate decisions? So the architecture would look like this: you have some cloud, which does management of these edge devices and applications, some type of control. You have some network, because you need to connect to that. Then you have the whole plethora of edge, starting from a hybrid environment where you have an entire mini data center sitting at the edge, down to one or two devices that are just collecting data from sensors and processing it. That is the heart of the other challenge: the architecture varies across verticals, from smart cities to retail to healthcare to industrial. They all have different variations, and they need to worry about the different environments they're going to operate in.
They have different regulations they have to look into, different security protocols they need to follow. Maybe your solution is recognizing people in a coal mine and identifying whether they're wearing a helmet, whether they're wearing their safety gear and equipment. That solution, versus someone riding a bike in traffic where, for safety reasons, you want to identify whether the person is wearing a helmet: very different use cases, very different environments, different ways in which you're operating. Similar algorithms are used, by the way, but how you deploy them varies quite a bit.

The DevCloud, let me make sure I understand it: you talked about a retail store, great example, but that's general purpose infrastructure that's now customized through software for that retail environment. Same thing with telco, same thing with the smart factory. You said not the far edge, right? But is that coming in the future? Will that extend?

See, for the far edge, we did try putting everything in one cloud environment. In fact, I put some cameras on iPads and laptops and we could stream different videos; I did all of that. But a data center is a boring environment, right? What are you going to see, a bunch of racks and servers? So putting far edge devices there didn't make sense. What we did instead is give you an easy ability to stream, connect, or upload the data that gets generated at the far edge. Say time series data: you can take some of this sensor data, or, mostly, camera data, videos. You upload those videos, and that is as good as streaming those videos.
That means you're generating that data, and then you're developing your solution with the assumption that the camera is observing whatever is going on. Then you do your edge inference, you optimize it, you make sure you size it, and then you have a complete solution.

Are you supporting all manner of microprocessors at the edge, including non-Intel?

Today it is all Intel, but we plan to, because we are really promoting the whole open ecosystem.

Yes, we've heard that. Talk about that.

Yes, Pat is really talking about it. We want to be able to do that in the future, but today we were trying to address the customers we are serving today, and we needed an environment where they could do all of this. For example, under what circumstances would you use an i5 versus an i9, or run an algorithm on integrated graphics versus a CPU versus a neural compute stick? It's hard. You'd need to buy all those devices and experiment with your solutions on all of them. So with everything available in one environment, you can compare and contrast to see what makes the best sense for your workload.

But it's not just x86?

It's x86, it's our portfolio: FPGAs, graphics, everything Intel supports today. And in the future we would want to open it up.

So how do developers get access to this cloud?

It is all free. You just have to go sign up and register, and you get access to it. It is devcloud.intel.com; you go there, and the container playground is available for free for developers. You can bring in container workloads or even bare-metal workloads. All of it is available for free.

Do you need to reserve the endpoint devices to get access to them? You just mentioned that; that's where it's an interesting technology. How do you govern this?

Correct. So what we did was build a kind of queuing system.
Okay, so a scheduler.

Yes. You develop your application on a control node, and you only need the edge device when you're scheduling that workload. We have scheduling systems, we use Kafka and other technologies to do the scheduling, in a containerized environment with the optimized operators that are available in an OpenShift environment; we got those operators and installed them. So what happens is you take your workload and run it, let's say, on an i7 device. While your workload is running on that i7, that device is dedicated to you. And we've instrumented each of these devices with telemetry, so while your workload is running on that particular device, we can see what the memory looks like, what the power looks like, how hot the device is running, what the compute looks like. We capture all those metrics, and then you take the workload and run it on an i9, or on graphics, or on an FPGA, and you compare and contrast and say, huh, okay, for this particular workload, this device makes the best sense. In some cases, I'll tell you, developers have come back and told me, I don't need a bigger processor, I need more memory.

Yeah, sure.

Right? And in some cases they've said, look, I want to prioritize accuracy over performance, because in a healthcare setting, accuracy is more important. In some cases they've optimized for the size of the device, because it needs to fit in the right environment, in the right place. So where you optimize is up to the solution, up to the developer, and we give you the ability to do that.

What kind of folks are you seeing? You've got hardware developers, software developers, IT people coming in?

We have a lot of system integrators, we have enterprises coming in. We are seeing a lot of software solution developers, independent software developers.
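The compare-and-contrast workflow described above, run the same workload on each candidate device, capture telemetry, then pick the device that fits the constraints, can be sketched like this. Everything here is illustrative: the device names, telemetry numbers, and the power-budget selection rule are made-up stand-ins for real measured metrics.

```python
def run_on_device(device, workload):
    # Stand-in for scheduling the workload onto a dedicated physical device
    # and sampling its telemetry (memory, power, thermals) while it runs.
    return device["telemetry"](workload)

def best_device(devices, workload, max_watts):
    """Benchmark a workload on each candidate; pick the fastest within a power cap."""
    results = {name: run_on_device(dev, workload)
               for name, dev in ((d["name"], d) for d in devices)}
    eligible = {n: m for n, m in results.items() if m["watts"] <= max_watts}
    choice = min(eligible, key=lambda n: eligible[n]["latency_ms"])
    return choice, results

# Made-up telemetry for illustration; real numbers come from instrumented devices.
devices = [
    {"name": "i7",  "telemetry": lambda w: {"latency_ms": 40, "watts": 28}},
    {"name": "i9",  "telemetry": lambda w: {"latency_ms": 25, "watts": 65}},
    {"name": "gpu", "telemetry": lambda w: {"latency_ms": 12, "watts": 90}},
]
choice, results = best_device(devices, "object-detection", max_watts=70)
print(choice)  # the fastest device that fits a 70 W budget
```

Swapping the selection rule is exactly the "where you optimize is up to the developer" point: rank by memory footprint, accuracy, or physical size instead of latency and the same benchmark data yields a different answer.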
We also have a lot of students coming in; it's a free environment for them to play with, instead of having to buy all of these devices. We are pulling a lot of developers through this environment currently, and of course we are getting feedback from them. We are just getting started here, and we are continuing to improve our capabilities. We are adding virtualization capabilities. We are working very closely with Red Hat to showcase all the goodness coming out of Red Hat, OpenShift, and other innovations. We heard in one of the OpenShift sessions they're talking about MicroShift, they're talking about HyperShift, a lot of these innovations, operators, everything coming together. But where do developers play with all of this? If you spend half your time trying to buy the hardware, configure it, and install it, you lose patience. You lose time, and it's complicated: how do you set it all up, especially when it involves the cloud, the network, and the edge, and you need all of that set up right? So we have set up everything for you; you just come in. And by the way, not only that: what we realized when we go talk to customers is that they don't want to listen to all of our optimizations, processors, and all that. They say, I'm here to solve my retail problem. I want to count the people coming into my store. I want to recognize if there is a spill and go clean it up before a customer complains about it. Or I have a brain tumor segmentation problem where I want to identify whether the tumor is malignant or not, and I want a telehealth solution. They're really talking about these use cases; they're not talking about all these other things.
So what we did is build many of these use cases by talking to customers, open sourced them, and made them available on DevCloud for developers to use as starting points. They have a retail starting point, a healthcare starting point, all these use cases, so they have all the core, and we've shown them how to containerize it. The biggest problem is that developers still don't know, at the edge, how to take a legacy application and make it cloud native. They just wrap it all into one Docker container and say, okay, now I'm containerized.

Yeah, there's a lot more to do.

So we tell them how to do it right. We train these developers and give them an opportunity to experiment with all these use cases, so they get closer and closer to what the customer solutions need to be.

Yeah, we saw that a lot with the early cloud, where they'd wrap their legacy apps in a container, shove it into the cloud, and say, hey, it's cloud, when really it was just hosting a legacy app; it didn't take advantage of cloud native. And now people are coming around. Sounds like a great free developer resource; take advantage of that. Where do they go?

devcloud.intel.com.

devcloud.intel.com, check it out. A great freebie. A.J., thanks very much for coming on.

Thank you very much. I really appreciate your time.

All right, keep it right there. This is Dave Vellante for Paul Gillin. We'll be right back with theCUBE's coverage of Red Hat Summit 2022.