Off we go. Thank you, everyone, for being here today for today's CNCF live webinar, Delivering Optimal Out-of-Box Experiences. I'm Libby Schultz, and I'll be moderating today's webinar. I'm going to read our Code of Conduct and then hand over to Bob Monkman, open source strategist at Intel, and team.

A few housekeeping items before we get started. During the webinar, you're not able to speak as an attendee, but there is a chat box on the right-hand side of your screen. Please feel free to drop your questions there, and we'll get to as many as we can at the end. This is an official CNCF webinar and, as such, is subject to the CNCF Code of Conduct. Please do not post chat messages or questions that would violate that Code of Conduct, and be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page. They will also be available via your registration link and in our online programs YouTube playlist. So with that, I will hand things over to Bob and crew. Have a great chat.

Thank you very much, Libby. I greatly appreciate it, and thanks, everybody, for joining this morning. I'm Bob Monkman, open source strategist for Intel, based in California. I've got with me today Petar Torre, principal engineer, and Daniel Ugarte, technical product manager. We're going to give an overview today of network platform reference architectures and how they deliver out-of-box productivity for cloud native application development. So without further ado, I'm going to share my screen, give a little bit of setup and an overview of how we're making that happen, and then Petar and Daniel are going to give some very specific industry use case examples. We'll wrap up with time for Q&A at the end.
All right, so again: out-of-box productivity delivered by reference system architectures, with a particular focus on communications infrastructure systems. Let's go through a little bit of setup first.

Some of the trends we're seeing in network infrastructure are, of course, the cloudification of everything. Everything is connected to the cloud, with massive amounts of data going in and out of it. 5G adoption is here now, and the service providers in this space are also following this trend and beginning to adopt cloud native principles, working in hybrid on-prem and cloud environments. Service agility becomes a really key factor in this new environment: the requirement from customers to meet market demands by delivering new services in minutes, ideally, instead of months. And, as I mentioned earlier, now that everything is connected and you have the Internet of Things, there are millions and billions of intelligent systems out there. It's not just one-way management and data transfer out to the edge; you've got volumes of data coming in. Really, the burden of orchestration and management is becoming more than humans alone can handle, so you're seeing the advent of AI and machine learning to help orchestrate, schedule, and manage these systems. And furthermore, we're seeing this move to cloud native and microservices design and implementation of those services.
Some of the market data we're seeing, and the conversations we have with cloud service providers and software-as-a-service vendors, suggest that, as experts predict, over the next two to three years 80% or more of software will be done in cloud native and microservices implementations. That's a massive, very rapid shift, and it's happening across multiple sectors. Today we're going to focus more on comms infrastructure, because that's where we have some particularly interesting examples, but it really applies across a broad swath of sectors. This has a significant impact on the technology building blocks that are relevant, the way that software is built, the way that software is assured, and the way that software is delivered. And as I said earlier, we have this massive complexity and this need for AI and ML, which is why you see the emergence of concepts like zero-touch automation aided by AI and machine learning.

Now, in the network transformation space in particular, when we look at applications across the continuum from edge back into the cloud, there are some unique challenges that have always been there, realities that this sector in particular has to deal with. I'm not going to rattle them all off, but you can see them here. We're working with all the leading vendors, the ISVs, the service providers, system integrators, and others to comprehend these challenges, some of which are introduced by new technologies that were not necessarily designed with all of these challenges in mind. We work with these ecosystem players and leverage our deep insight into how software executes and how data moves across the system to identify bottlenecks and gaps.
We then work with these players to mitigate these issues and deliver enhancements to the open source communities, our partners, and our customers. But all of the point optimizations we can achieve and deliver through these open source communities are only part of the solution. Our topic today is centered on how we can take this work one step further and deliver highly integrated, well-documented reference system architectures to the market that help speed deployment of new infrastructure and services built with these optimized solutions.

So let's take a closer look. Here you see an introduction to the Intel network platform. The idea is that it's an offering geared towards easing and accelerating this network transformation. It addresses these challenges by delivering software and hardware innovation, plus adoption tools, that enable the ecosystem of vendors and service providers to deliver these services much more rapidly. On the left-hand side, we see hardware and a broad range of software optimizations in different areas that mitigate some of the challenges we discussed on the previous slide. We combine that with a variety of adoption tools that help developers quickly onboard and ramp up their applications on these reference systems, including documentation and collateral we call experience kits, which we'll cover in more detail later in the presentation. And of course, as I mentioned earlier, we're working with the ecosystem of customers and partners across the spectrum from edge to cloud to ensure that this integration, and the wealth of productivity aids around the software, are vastly improving the out-of-box experience for developers, accelerating time to market and time to revenue.
If we take a deeper look at the specific innovations and optimizations we deliver inherently in these reference systems, they come in different buckets. Again, we're leveraging deep insights from running and analyzing representative workloads, and combining those insights with Intel's unique silicon capabilities as well as general software optimizations, to deliver better performance, lower latencies, cut jitter, and close security gaps. All of these land in the reference system architectures, along with the vast portfolio of experience kits we deliver with them.

If you look at some of the details (I won't go through all of them), this is measured in terms of things like crypto acceleration, compression acceleration, transactions per second across remote procedure calls, and service mesh optimization. Service mesh in particular is a really powerful way to connect microservices, but it comes with a lot of overhead, so we've found and implemented a great deal of optimizations to mitigate that overhead. We've also done a lot of work in the areas of scheduling and optimal workload placement, telemetry, and security gaps: protecting keys, isolating certificate authorities in multi-tenancy environments, and so forth. Again, we pull all of this together into these reference systems for high productivity.
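To make "lower latencies and less jitter" concrete, here is a generic measurement sketch (not Intel tooling, just one common way to summarize a set of request latency samples): report nearest-rank percentiles and use the standard deviation as a simple jitter figure.

```python
import statistics

def latency_summary(samples_ms):
    """Summarize request latencies: nearest-rank p50/p99 plus jitter (stdev)."""
    ordered = sorted(samples_ms)
    p50 = ordered[len(ordered) // 2]                       # upper-middle element
    p99 = ordered[min(len(ordered) - 1, int(len(ordered) * 0.99))]
    jitter = statistics.pstdev(samples_ms)                 # population std deviation
    return {"p50_ms": p50, "p99_ms": p99, "jitter_ms": round(jitter, 3)}

# Synthetic samples for a proxied (service-mesh-style) call path,
# including one tail-latency outlier.
samples = [1.2, 1.3, 1.1, 1.4, 1.2, 1.3, 9.8, 1.2, 1.3, 1.2]
print(latency_summary(samples))
```

The point of tracking the tail (p99) separately from the median is that sidecar proxies and scheduling interference tend to show up there first, even when the median barely moves.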
Really, what we're delivering to the marketplace is a set of cloud native reference system blueprints, if you will, optimized for native bare metal environments, private cloud, and public cloud deployments. We are continually updating and innovating the various open source building blocks that make up this reference platform, working in those communities to get these optimizations upstream, and working with customers and the relevant top-of-stack software vendors to get them downstream into the popular stacks out there. We make sure it's all very well integrated and validated, with operators and other deployment automation tools that make it easy to onboard and accelerate the development and deployment of your applications on network infrastructure solutions in the marketplace.

If we take yet another deeper look, it's not just a one-size-fits-all reference system architecture, because, as you're well aware, there are at least three fundamental operating environments in which people deploy these cloud native applications. We have native bare metal deployments on physical hardware, typically on-prem; we have virtual machine configurations and environments, often in the private cloud; and then, more and more, we're seeing applications and software services written for and deployed in cloud instances from the major cloud service providers. Each of these reference architectures is designed from the ground up to accommodate the unique considerations of these different environments. Again, it's all very well integrated, validated, and Kubernetes-managed, with quarterly releases and experience kits delivered alongside; we'll give you some details on what those experience kits entail in upcoming slides. And then, even within those deployment models, if we take a closer look, what does that really mean?
This is really just a visual depiction of what the bare metal, virtual machine, and cloud reference architectures might look like. On the left-hand side, we have a bare metal cluster deployment on individual servers; at the bottom of each of these boxes is a node, a physical server: the control nodes and worker nodes that make up the Kubernetes cluster. In the middle, you have virtual clusters. This shows a single node, but it should be noted that this configuration certainly does support virtual machine deployments across multiple nodes. And on the right-hand side, this particular example is an AWS EKS instance environment, and all the necessary considerations and support for that kind of environment are built into this particular reference architecture. So the reference architecture concept and deliverables are really flexible, providing options for these multiple modern network deployment types.

Within those different environments, we have also created a number of very specific configuration profiles. What that entails is a very specific hardware and software bill of materials, a manifest, if you will, for certain workloads or application requirements. The application requirements, the stack, will be different for RAN elements versus transport elements versus the core of a 5G network. So this is just one example in the network infrastructure space where we can simplify that journey and add even more out-of-the-box productivity by having predefined, very specific recipes that are designed and characterized for that particular application.
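Conceptually, a configuration profile is a named bill of materials keyed to a workload's requirements: given what an application needs, you pick the profile that enables those features. The sketch below is purely illustrative; the profile names and feature flags are hypothetical, not the identifiers the reference architectures actually use.

```python
# Illustrative only: profile names and feature keys are made up for this
# sketch, not the actual identifiers used by the reference architecture.
PROFILES = {
    "access_edge":   {"sriov": True,  "dpdk": True,  "cpu_pinning": True,  "service_mesh": False},
    "remote_office": {"sriov": True,  "dpdk": False, "cpu_pinning": True,  "service_mesh": True},
    "regional_core": {"sriov": False, "dpdk": False, "cpu_pinning": False, "service_mesh": True},
}

def select_profile(requirements):
    """Return the first profile that enables every required feature."""
    for name, features in PROFILES.items():
        if all(features.get(f) for f in requirements):
            return name
    return None  # no preassembled profile fits: build your own BOM

print(select_profile(["sriov", "cpu_pinning"]))  # → access_edge
```

The `None` branch corresponds to the "build your own profile" path mentioned next: when no preassembled recipe matches, you assemble a custom software BOM for your environment.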
And if you look in the box on the right, there's actually a way for you to build your own configuration profile, to create your own software BOM that's very specific to your environment; if the preassembled configuration profiles aren't exactly what you need, you can always create your own.

Earlier I mentioned that it's not just the software-integrated reference platform that's delivered here; we have this notion of experience kits as well. An experience kit is really a library of documentation, how-to guides, and training that provides best practices, step-by-step instructions, and development guidelines for each of the technology areas found in the reference architecture, tuned to the particular type of application they're designed to address. These are available for download along with the reference architectures. This webinar is recorded, so when you view the recording later, you'll be able to follow these links, examine the experience kits, and see what's in them. This is a big part of the puzzle: it's not just throwing software optimizations at you, it's giving you integrated packages with the guidelines, the training, and the step-by-step instructions on how to use all of the various elements within the reference architecture.

So with that, that's a broad overview of how we're delivering that out-of-the-box experience for developers. I'd like to turn it over to Petar Torre, a cloud architect and principal engineer who works in our field organization. Petar has worked with partners and service providers to put together a particular multi-cloud orchestration use case example using these reference architectures.
Petar, I'm going to let you go through that and demonstrate how you've applied this for a specific use case.

Thanks, Bob. In the next three slides, in about eight to ten minutes, I will explain how we looked at a compute-intensive workload in a multi- and hybrid cloud environment. We paid special attention to satisfying the key principles: we would like to do it multi-cloud, meaning across multiple different environments, and the obvious consequence is that we should not be using single-environment tooling that will not work in the next environment. This is where the previously mentioned reference architectures really help. And as we wanted an outcome that in the real world would also have an easy lifecycle and easy onboarding, we needed to be careful as we layered the stack not to unconsciously create undesired linkages and dependencies, bringing things into a combination that is very hard to decouple.

Now, in this example, let's start from the bottom. We have a consistent hardware platform in an AWS region and on-premises, in the form of Intel Xeon CPUs with particular CPU instructions that will help us. The next level of consistency is to have a Kubernetes environment with particular features. Here we will look at Node Feature Discovery, which gives us details of the underlying software and hardware platform. Where supported, we can also enable the next level of features useful for the compute-intensive workload that is coming, which is CPU pinning with the static policy for the kubelet CPU manager. And while we do know how to do that in the case of EKS, it is still to be fully documented and supported there, as it already is, for example, on Azure AKS, where we are also doing it.
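To sketch what that looks like in Kubernetes terms: with the kubelet CPU manager's static policy, a container gets exclusively pinned cores when its pod is in the Guaranteed QoS class (requests equal to limits) and its CPU request is an integer. Below is a minimal illustration expressed as a Python dict; the image name is hypothetical, while the node label is the AVX-512 capability label that Node Feature Discovery publishes.

```python
# A pod eligible for exclusive CPU pinning under the kubelet's static
# CPU manager policy: Guaranteed QoS (requests == limits) and an
# integer CPU count. Expressed as a Python dict for illustration.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "monte-carlo"},
    "spec": {
        "nodeSelector": {
            # Label published by Node Feature Discovery on AVX-512-capable nodes.
            "feature.node.kubernetes.io/cpu-cpuid.AVX512F": "true",
        },
        "containers": [{
            "name": "worker",
            "image": "example.com/monte-carlo:latest",  # hypothetical image
            "resources": {
                "requests": {"cpu": "4", "memory": "2Gi"},
                "limits":   {"cpu": "4", "memory": "2Gi"},
            },
        }],
    },
}

res = pod["spec"]["containers"][0]["resources"]
guaranteed_qos = res["requests"] == res["limits"]
pinnable = float(res["requests"]["cpu"]).is_integer()
print(guaranteed_qos and pinnable)  # → True
```

Note that a fractional request such as `3500m` would keep the container in the shared CPU pool even in the Guaranteed class; only whole-core requests get exclusive pinning.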
Now, as an example of a compute-intensive workload, we are running a Monte Carlo simulation from the financial services industry vertical, used a lot there for various risk management tasks. That workload is based on vector instructions, and this is where different implementations in hardware make a big difference. The application is packaged as a pod and can be scheduled over Kubernetes. On top, we have policy-driven orchestration in the form of Cloudify with TOSCA blueprints. The exercise that brings us to the "multi" part of multi-cloud is to make sure that across different environments we have minimum inconsistency and maximum commonality in the TOSCA blueprints. Of course, the pod is the same, and the different ways of building the Kubernetes environments are very consistent. This is all documented; you have the paper here for further reading. Next slide, please.

This visualizes how we built the stack, so in this example we have the benefits of the reference architectures. Under the hood, it is going to create Terraform templates; from those templates we will apply them and get hold of the appropriate EKS managed Kubernetes service and the appropriate EC2 instances. For purposes of comparison, we will take the current compute instances with 3rd Generation Xeon, and also the previous Xeon generation to compare against. Those CPUs have different vector instructions: the current generation has AVX-512 and the previous has AVX2, which are 512-bit and 256-bit wide respectively, and based on that we will see different performance. The workload is built using appropriate Intel tools, like compilers that produce binaries taking advantage of all the hardware acceleration available. Then we will see how long it takes to run the simulation, send the metrics into a little reporting subsystem consisting of Prometheus and Grafana, and observe it in Grafana.
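To give a feel for the workload shape, here is a toy Monte Carlo estimate in pure, scalar Python (a stand-in, not the actual FSI kernel, which is compiled into AVX-vectorized binaries), plus the back-of-envelope arithmetic for why AVX-512 roughly halves elapsed time for such loops.

```python
import math
import random

def monte_carlo_pi(samples, seed=42):
    """Estimate pi by sampling random points in the unit square."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:   # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / samples

est = monte_carlo_pi(200_000)
print(f"pi ~ {est:.3f}")
assert abs(est - math.pi) < 0.05   # statistical tolerance

# Why AVX-512 roughly doubles throughput on this kind of loop:
# it processes twice as many 64-bit doubles per vector instruction.
doubles_per_avx2_op = 256 // 64     # 4 lanes
doubles_per_avx512_op = 512 // 64   # 8 lanes
print(doubles_per_avx512_op / doubles_per_avx2_op)  # → 2.0
```

That 8-versus-4 lane ratio is the simple model behind the roughly 2x speedup reported in the results on the next slide; real gains also depend on memory bandwidth and frequency behavior, so the clean 2x observed here indicates the workload is strongly compute-bound.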
Now, the Cloudify orchestrator on top is driving all of the bits on the bottom, so we can have one blueprint to create the environment using Terraform and another one on top of it to schedule something over Kubernetes. There is a flow, a fully automated execution of all of that, to apply or destroy, or, if we go further, to do other actions on it.

If you look at the results when we build something like that (next slide, please): with all of that automated, it's easy to create different instances, all within the same cluster, and also to deploy different versions of this Monte Carlo workload. It's actually the same pod with different environment variables, so it knows how to run and what to do. Lower is better here; this is the elapsed time to complete a simulation of a particular size. We can observe two benefits. One is the generational improvement: the current instances, C6i or equivalent, based on 3rd Generation Xeon (Ice Lake is the codename), are faster than the previous ones for this type of workload, in both the AVX2 and AVX-512 versions; generally, when we run compute workloads that do not use vector instructions, we observe something similar. The second is that this workload really correlates with the width of those vectors: because 512 bits is double 256, the workload runs two times faster when we enable AVX-512 compared to AVX2, and we see this roughly 2x consistently on both current and previous generations. This directly results in reduced time using the instances, or reduced cost of the compute time. That was our example. With that, we will move to Daniel, who will explain a far more complicated example than a merely compute-intensive one.

Petar, thanks for that great example. I'll turn it over to Daniel.

Hi, this is Daniel Ugarte. I'll be presenting the application of the BMRA, the Bare Metal Reference Architecture, to the vRAN and Intel's FlexRAN solution. Can we go to the next slide, please?

Let's start by explaining what the vRAN and FlexRAN are. This is part of the 5G network: the vRAN is the radio access network, used as the base station, covering everything from the physical layer for the wireless medium all the way up to the control layer of the 5G network. Now, what is the reference architecture doing here? On the left, as Bob mentioned, are the configuration profiles; this is an example of the configuration profile for the vRAN. Basically, it's a representation of all the software ingredients we have from the network platform teams: out of everything that is available, we cherry-pick the specific ingredients that the FlexRAN or vRAN team needs. This becomes the reference architecture configuration profile; it is an abstraction of the software and hardware platform requirements of the vRAN. At the top there are different buckets: Kubernetes has many different features, and we turn on a subset of them, and likewise for service mesh, for security, for power management, going all the way down to the operating system and the hardware, and so on. This is the bill of materials that was presented previously. The green line you see there indicates the choices needed for the vRAN.

If you move to the right, you see how the reference architecture deploys this configuration profile. We use a set of Ansible playbooks that deploy these capabilities, these ingredients, on a cluster-wide basis, or the software required for the worker nodes or control nodes. Some of these capabilities are shared across all nodes; some are only on the worker node that runs the FlexRAN application. This is an optimized, validated, easy, and fast way to deploy the vRAN reference system. Next slide, please.

Let's see how this is consumed. At the bottom, you see all our hardware selections for vRAN or FlexRAN; above, you have all the green platform software ingredients being used. The reference architecture consists of an installation playbook and a way to manage the configuration profile, and with this you can have different variants for the different flavors of vRAN you are using. All of these have to be, I would say, curated: all the dependencies from one piece of software to another are examined, tested, and integrated, and that's part of configuration profile management.

On the right side are a couple of things we did for our FlexRAN team. We want customers to enjoy this easy way to deploy this network element, and not only that, we want them to see it in action. So there are standalone test modes they can deploy to see the DU (in this case, the DU is the part of the vRAN implemented by FlexRAN) in action. They can see how the Layer 1 performs, and how it performs with their NICs when connected to a radio simulator. So we have created a couple of profiles: on the left side, a vRAN application profile for any generic vRAN, and on the right side, one for our FlexRAN.

Now, what do we do with this, and what is the value add? First, we can engage customers: in the case of Intel, we can reach customers with an early version of our silicon, and when customers receive these early versions of hardware, they can see FlexRAN in action and see what performance improvements they get. So it's a way to engage customers earlier on. Second, if partners would like to start testing and have more complicated tests to perform, the reference architecture is the first step: they receive a pre-verified software and hardware platform, can check that everything is in place, that they have the right versions and configurations across the system, and then start their testing. This removes a lot of the overhead incurred when customers or partners have to verify every single little thing in the recipe they are using. And finally, if you are not using the test modes, you can use it to deploy the DU network element of the 5G network for an end-to-end setup, which is beneficial for in-house testing in your labs or even at the customer site. Next slide, please.

So how does the reference architecture evolve over time? This is an example of what we call a capability. A capability is basically a bundle of different software ingredients that together provide one system property or behavior: one capability could be security, another could be power management. In the case of the FlexRAN application, we have the power management capability, which is a series of optimizations in different pieces of software across different layers of the stack. Much of this capability already exists and has been carried from one generation to the next, but in this case our new Sapphire Rapids processors have optimizations for how to send cores to sleep to reduce power consumption. There are different instructions provided to the different layers, all the way up to the application layer, so the application is in control of how to send those cores to sleep. With Sapphire Rapids there are new sleep states being used at the DPDK level, together with different boot settings, and that is provided to the FlexRAN application. So you will see improvements when moving from one generation to the next. Can we go to the next slide, please?

Okay, this is a summary slide. The reference system architecture is a total product offering, available for you to explore and leverage for new developments on network deployments. The reference system architectures provide you with a validated, workload-optimized, and easy-to-consume reference to accelerate your time to market. If you want to download the experience kits, you can click on these links; they explain the capabilities we have developed, along with the user guides on how to use and deploy the reference architecture. If you have any feedback or questions, you can contact our reference architecture team, Dan and Hama, or myself, Daniel; these are our emails. With this, we complete the presentation, and we are open to questions. Thank you.

Thank you, Daniel, great example. Thank you so much. If you have any questions, be sure to click into the chat so we can get to them. Let me go to that window and see... I don't see any particular questions queued up in the chat. Well, thank you all so much for coming together, Bob and team. You're welcome to catch the recording on our YouTube channel, and thank you all for joining another CNCF live webinar.

Thanks very much, Libby, and thanks to my fellow presenters. Libby, you'll send us a link.