in. Let's go ahead and start the recording. Okay, I wanted to thank everybody who's joining us today. Welcome to today's CNCF webinar, Delivering Cloud Native Application and Infrastructure Management. My name is Josh Berkus. I'm the Kubernetes community manager at Red Hat, and I'm a cloud native ambassador, in which context I'm hosting today's webinar. I'd like to welcome our presenter today, Matt Baldwin, who is Director of Cloud Native and Kubernetes Engineering at NetApp. So welcome, Matt. Thank you. Before we actually start, I wanted to give a few housekeeping items. Number one, during the webinar, you will not be able to talk as an attendee, and you will not be able to display your video. Instead, we have a Q&A panel. If you click the small Q&A icon at the bottom of your screen, it will open up a separate panel where you can post questions, some of which will get answered in text messages by the NetApp team who are dialed into this call, and some of which will get answered during the Q&A phase at the end of the presentation. While you can ask questions in chat, we may miss them, so I really recommend asking questions through that Q&A panel and not in the group chat. One other thing I wanted to mention is that this is an official CNCF webinar, and therefore it is subject to the CNCF Code of Conduct. So please do not put anything in chat or in your questions that would violate that Code of Conduct, which just really says: be excellent to each other. And with that, I want to hand it over to Matt for today's presentation. So take it away, Matt. Thanks, Josh. I love the "be excellent." Can't wait for that movie to come out. So like Josh said, I am Matt Baldwin. I'm the Director of Cloud Native and Kubernetes Engineering at NetApp. Prior to that, I was founder and CEO of a company called StackPoint Cloud. And I, too, am a CNCF ambassador for Kubernetes, and have been one for the last few years.
So with that real brief introduction, let me get moving on this. So where are we headed today? First I'm going to take everybody through what cloud native is, and my thinking on the definition of cloud native, and I'm going to go through the nice little cloud native trail map that CNCF has produced. Then I'm going to go through some user persona definitions, because from my position, cloud native is about the users and about how users are consuming these systems, and so I always think from the user point of view. Next we'll move into what we're calling Cloud Native Anywhere, which is the concept that you should be continuing the theme of portability with Kubernetes workloads: your cloud native environments, and the workloads on those environments, should ultimately be able to run anywhere, regardless of on-premise or off-premise. And I'll show you some tooling of how that can work in a demo. After that, I'll discuss a concept called cloud native app management. How do you provide developers with what they need for the environments that they're shipping code into? How do you take the responsibility off the developer to understand things like Kubernetes and Istio, so that all they have to do is basically push code to deploy their application into a running environment? I'm going to talk through that towards the end of the presentation and provide a demo as well. So let me get going with the cloud native definition. I'm just going to let this slide linger here; I'm not going to read out the definition. I'm sure you can read it. This is the definition as published by the CNCF, and effectively what it tries to do is capture what we mean when we say cloud native.
And what I want to zero in on is that last statement in the last sentence: engineers will be able to make high-impact changes frequently and predictably with minimal toil. That last phrase, minimal toil, is an interesting one for me, because if you read the definition, getting to cloud native requires a lot of toil. So it takes a lot of toil to get to minimal toil, so that going forward you can make quick changes frequently and predictably. But to get to that world, it's going to take quite a bit of work. And CNCF has published a cloud native trail map, and what it's trying to do is call out: here are the services and solutions that you're going to need to look at and begin to adopt on your journey to cloud native. And just like a trail here in the Olympics, here in Seattle, trails can be dangerous, trails can be hard, trails can be very long. But at the end we're trying to get to a point where it's minimal toil for everybody. So we're saying: you're going to need some type of containerization. You're going to need orchestration; for pretty much all of us, that is Kubernetes. You're going to need to be able to package those applications up in some type of definition, Helm for most of us. How do I observe all this? How do I do CI/CD? What about networking? What about service meshes? Do I need to use Istio? Am I looking at Linkerd? And then we move into things like distributed databases, and then storage. These are all pieces that end users and operators are going to need to understand to assemble a cloud native platform.
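As a toy illustration of the packaging step on that trail map: Helm, for most of us, renders Kubernetes manifests from a chart's templates plus a values file. The sketch below mimics that idea in plain Python; the application name, image, and replica count are invented placeholders, not from any real chart.

```python
# Helm-style values: the knobs a chart consumer would override.
# All names and values here are hypothetical, for illustration only.
values = {
    "name": "hello-web",      # invented application name
    "image": "nginx:1.25",    # container image to orchestrate
    "replicas": 3,            # desired pod count
}

def render_deployment(v):
    """Render a minimal Kubernetes Deployment manifest (as a dict)
    from Helm-style values, the way a chart template would."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": v["name"]},
        "spec": {
            "replicas": v["replicas"],
            "selector": {"matchLabels": {"app": v["name"]}},
            "template": {
                "metadata": {"labels": {"app": v["name"]}},
                "spec": {
                    "containers": [{"name": v["name"], "image": v["image"]}]
                },
            },
        },
    }

manifest = render_deployment(values)
```

In a real chart the template would be Go-templated YAML rather than Python, but the shape of the rendered object is the same.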
And the intent is that we should drive towards a world where these things are easier and easier to compose together: that we don't need deep knowledge of Kubernetes to be able to run Kubernetes, that we don't need any knowledge of Istio to be able to leverage features like canaries or A/B deployments or blue-green deployments. All of this should be easily composable for me as a user, so that I don't need to learn a lot of new things. That's effectively where I would like to head in the space, because everybody has to learn new things now; there's a ton of stuff on everybody's plate. And from the customer interactions we're seeing, there are a lot of mandates coming down. Now that Kubernetes is out there and cloud native is circulating more in the mainstream, C-suites are starting to catch wind of this technology, and we're hearing mandates just being pushed down into teams, things like: get us over to cloud native in six months, where that team has zero knowledge of cloud native. So you have six months to go from zero knowledge to being an expert in this entire trail map. What we're trying to do at NetApp is deliver tooling that makes it easier to get down this trail, and I'm going to show some of that in a bit. So what are some of the pillars that we try to stand on when it comes to cloud native? The first pillar is that the workload should always be portable. It should be ready for multi-cloud. You should be abstracted from your cloud provider and from your infrastructure, because the idea is that you should be able to take that workload, have it live on premise, and be able to move it to, say, a public cloud provider, and my data should be able to come with me in that scenario. And some of these components are not things that are part of Kubernetes directly.
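An aside on the canary and blue-green features mentioned above: under the hood, these come down to service-mesh traffic rules. Here's a minimal sketch of the kind of Istio VirtualService a canary boils down to, assuming a hypothetical `reviews` service with a stable `v1` subset and a candidate `v2` subset (all names invented); it's written as a plain Python dict for readability rather than YAML.

```python
def canary_virtual_service(service, stable, candidate, canary_pct):
    """Build an Istio-style VirtualService (as a dict) that splits traffic
    by weight between a stable subset and a canary subset of one service.
    Service and subset names are illustrative assumptions."""
    assert 0 <= canary_pct <= 100
    return {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "VirtualService",
        "metadata": {"name": f"{service}-canary"},
        "spec": {
            "hosts": [service],
            "http": [{
                "route": [
                    # Istio requires the weights on a route to sum to 100.
                    {"destination": {"host": service, "subset": stable},
                     "weight": 100 - canary_pct},
                    {"destination": {"host": service, "subset": candidate},
                     "weight": canary_pct},
                ]
            }],
        },
    }

# "Pass one percent of my traffic to the canary":
vs = canary_virtual_service("reviews", "v1", "v2", canary_pct=1)
weights = [r["weight"] for r in vs["spec"]["http"][0]["route"]]
```

The point of tooling like the kind discussed here is that a developer asks for "a canary at one percent" and an object like this is generated and managed behind the scenes.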
Like Kubernetes doesn't account for moving data from point A to point B; you need extra tooling to be able to do that. Cloud native application management is another pillar that we look at, and that is a way of thinking about how you deliver, run, and manage applications inside of Kubernetes. Not necessarily a deployment, but how do you take a group of deployments and create an application taxonomy around them and say: these deployments together represent my application? And say I'm in the persona of a developer: I want to be able to see service dashboards for that application. I want to be able to see logs for the application. I need to be able to quickly scale it and quickly iterate on it, and I don't really want to know a lot about Kubernetes to do so. So how do we arrive at that state? That's why we're thinking about cloud native application management, and I'll dig into that a little deeper in a bit. Another pillar that we're looking at, with regard to a full cloud native stack, is security, and the belief is that you need to have security at rest. There's great open source tooling out there to accomplish that: Anchore comes to mind, which is a container image and registry scanner that can be used in place of Clair, if anyone out there knows what Clair is. We also believe in container security at runtime; an open source tool that would help you address that is Falco from Sysdig, which lets you begin to address runtime security, and also forensics around events that occur in that container. And so we think there needs to be a lot of tooling out there, all of it open source, but there needs to be a way to tie all of this together so that you have a consistent user and management experience around these tools.
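To make the runtime-security idea concrete, here's a representative Falco-style rule, expressed as a Python dict mirroring the YAML fields a Falco rule carries (rule, desc, condition, output, priority). The condition uses Falco's filter language; treat the exact expression as a simplified illustration rather than copy-paste production config.

```python
# A simplified sketch of a Falco runtime-security rule: alert when an
# interactive shell is spawned inside a container, which is a common
# signal of an intrusion or of someone poking at a production workload.
shell_in_container_rule = {
    "rule": "Terminal shell in container",
    "desc": "A shell with an attached terminal was spawned in a container",
    # Falco filter syntax (simplified): a new process, in a container,
    # whose binary is bash, with a TTY attached.
    "condition": "spawned_process and container and proc.name = bash and proc.tty != 0",
    # The output template names the fields captured for forensics.
    "output": ("Shell spawned in container "
               "(user=%user.name container=%container.name "
               "cmdline=%proc.cmdline)"),
    "priority": "WARNING",
}
```

In a real deployment this would live in a Falco rules YAML file; the value of tying tools like this together is that the alert, the container, and its logs show up in one management view.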
The problem that most users in the space are having is: what tools do I use, and how do I assemble them together so that I have a cohesive, what we would call a cloud native stack of solutions to work with? How do I ensure that all the traffic inside my cluster is secure in flight? How do I do that with Istio? Is that on by default? How do I visualize that? There's a lot of tooling that needs to be composed and put together for users, and that's what we'll be showing you here. Storage is another big one; that's a cloud native pillar for us, for sure, as NetApp. How do you think about storage inside the context of a microservice running in Kubernetes? How do you protect that storage? How do you move that data from point A to point B? How do you migrate to the public cloud if you're on premise? Or if you're in the public cloud and want to move back to on premise, how do you do that? How do you replicate production data into test workloads so that you can actually test against real data? These are a lot of the things we're thinking about when it comes to cloud native storage and how we're beginning to expose cloud native storage inside of Kubernetes. As NetApp, we actually sit inside SIG Storage and participate in that SIG with contributions for things like snapshots, tying some of our technology into Kubernetes to basically surface what we've invented over the last 20-plus years, things like deduplication. So we're very heavy into the storage side at NetApp. We're tying storage together with cloud native app management and security, and then also we're thinking about managed data services. So how do you quickly get up and running with Kafka?
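Before moving on: the "replicate production data into test workloads" idea above has a Kubernetes-native expression, using the CSI snapshot support mentioned in the SIG Storage context. A minimal sketch, assuming a CSI driver that supports snapshots and invented PVC and class names: take a VolumeSnapshot of the production claim, then create a new claim whose dataSource points at that snapshot.

```python
# Step 1: snapshot the production PersistentVolumeClaim.
# All object names ("prod-data", "csi-snapclass", etc.) are illustrative.
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "prod-data-snap"},
    "spec": {
        "volumeSnapshotClassName": "csi-snapclass",  # assumed snapshot class
        "source": {"persistentVolumeClaimName": "prod-data"},
    },
}

# Step 2: a test PVC cloned from that snapshot, so the test workload
# mounts a copy of production data rather than production itself.
test_pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "test-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        "dataSource": {
            "name": snapshot["metadata"]["name"],
            "kind": "VolumeSnapshot",
            "apiGroup": "snapshot.storage.k8s.io",
        },
    },
}
```

Whether the copy is cheap (e.g. a clone backed by deduplication) depends on the storage backend, which is exactly where vendor storage technology surfaces through the CSI layer.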
So how do you leverage Confluent's Kafka operator? How do you get that under management? How do you get support for it? And then how do you just declare that you need it in your environment? That's some of the stuff we've been working on as well. One of the last pillars that I think about is personas. When I approach the space, I think about who my users are, because I'm not thinking about the person who's going to pop the hood and disassemble the engine, but the actual users, the broad market of users. So we're thinking broadly around people: what we call an operator, what we would classically call a developer, and then there's the executive level. And so I'm starting to approach the space from: how does an app developer think about Kubernetes, and what tooling would they wish to use? My hope is that some of the demo I show will map to those answers. This slide digs deeper into defining these personas. Operators would have different titles: these are people who could be, say, an IT manager. They could be a system admin, or the newer kind of SRE, a new term coming out. Not all enterprises are aware of what an SRE is. I think it's very important, as cloud native proponents, that we break out of a myopic view of the cloud native space right now and understand that in most of the enterprise world, and we're talking these G100-size customers, most of these customers don't really understand some of these new roles. They don't know what an SRE is; some of them do, some of them don't. Some of them are very early in their cloud native journey, to the point where they're just beginning to containerize. Some are very, very far along in their journey.
But the intent of the operator persona is: people who are managing this cloud native infrastructure, who have to maintain it on day two, who have to manage the resources that their developers are consuming. Then on the flip side, we do have the developers. These are individual contributors inside the company, with different titles like software engineer, software developer, developer. Some shops will assign a DevOps lead role to them, if your shop goes down the path of having DevOps titles. So we do see a lot of these types of personas in the market. And going back to the myopic idea: I think there's this intent inside cloud native that we want to solve all of these problems for users by saying all your apps need to be greenfield, all your apps need to be microservices, you need to be monitoring things with Prometheus, and you need to use Elasticsearch, Fluentd, and Kibana. And when you sit down with large organizations and say that to them, they may have 20 to 30,000 applications that are already running. So the journey is not easy for organizations adopting this technology, and the journey doesn't have minimal toil at the moment. What we're trying to do with NetApp's tooling is get you quickly to minimal toil, so that you don't have to worry about most of that trail map; that trail map is done for you, effectively. This leads me into the Cloud Native Anywhere concept, which is: your infrastructure should be able to be maintained and managed the same way regardless of where it's at. Your experience of managing Kubernetes on Amazon should be identical to how you manage Kubernetes on top of Google, and to how you manage Kubernetes on top of, say, VMware on premise, and the concepts should always be the same as well.
So if you're thinking about things like node pools, conceptually the idea of a node pool should be the same with on-premise VMware as it is with public cloud Amazon. I also believe that Cloud Native Anywhere implies being multi-cloud ready, and there's a lot of argument around what we could define as multi-cloud. I would define multi-cloud as: you're doing business with two different providers, or you may have an on-prem environment but you are using at least two different cloud infrastructure platforms, be that on-premise VMware plus Amazon, or Azure plus Amazon. I'm not currently seeing too many enterprise users spanning workloads, where they split applications between, say, Azure and Amazon. More than anything, what I'm starting to see is customers with multiple accounts at multiple providers, and then they have failover across those providers, or they have some of their services deployed on X and some of their services deployed on Y. That's where we're seeing multi-cloud start to go. But in cloud native, you should be able to manage the environment in an identical way regardless of where it's running. It's about the life cycle of that infrastructure: not just the life cycle of Kubernetes, but the life cycle of the host nodes themselves, and, if you're running a service mesh like Istio, the life cycle of that too. Also, how do you life cycle your own applications running on top of these environments? We need tooling that makes that easy and absorbs that complexity for us.
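To illustrate the "same concept everywhere" idea: the sketch below declares one logical node pool twice, once per provider, with only the provider-specific machine size varying. This is not a real NKS or cloud-provider API, just a hypothetical shape showing how the node-pool abstraction can stay identical on-prem and in public cloud.

```python
def node_pool(provider, name, size, count, autoscale_max):
    """Hypothetical provider-agnostic node-pool declaration.
    Only 'machineSize' carries provider-specific vocabulary."""
    return {
        "provider": provider,       # "aws", "gcp", "azure", "vmware", ...
        "name": name,
        "machineSize": size,        # provider-specific instance/VM size
        "nodeCount": count,
        "autoscaling": {"enabled": True, "maxNodes": autoscale_max},
    }

# The same logical pool, declared once per environment. The management
# tooling's job is to make these behave identically.
pools = [
    node_pool("aws", "workers", "t3.large", 3, 10),
    node_pool("vmware", "workers", "4vcpu-16gb", 3, 10),
]
```

The point is that everything except the machine size is shared, so replicating a pool from Amazon onto VMware (or GCP, or Azure) is a copy with one field swapped, not a different mental model.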
I think we also want to be able to manage access across all of this, so that when I add a team, add users to that team, and grant that team access to, say, a cluster or a namespace in a cluster, that flows down to the cluster. And I can do that broadly: I can grant a team access to one cluster, one namespace, multiple namespaces, multiple clusters, or multiple namespaces across multiple clusters, and I need an easy way of doing that and an easy way of visualizing it. Lastly, I need to be able to scale in the same way either on-prem or off-prem. How do I do automatic cluster node scaling? How do we accomplish HA on-premise and off-premise, things like that? So what do we mean, in the context of NetApp, when we start to talk about that? Within our tooling, what we mean is support across Microsoft, Google, and Amazon, and then on-premise for things like HCI, generalized VMware, and FlexPod. And when I've been talking about day-one, day-two, and day-zero types of operations, one thing I've started to run into is that people tend to argue about what happens on which day; there's some gray area with some end users. So I've started to define this as just any-day ops, as in: your tooling needs to support you not just in standing up the cluster. You need to be able to do ongoing management of that cluster, visualize workload in that cluster, visualize logs, things like that. You need to be able to upgrade that cluster, and you need to be able to rotate the certificates in the cluster. You also should be able to manage things like Prometheus on top of that, and things like a service mesh on top of that. So I try not to parse the day-one, day-two, day-zero conversation; I just call it all any-day ops. You also should be able to manage a service mesh easily; developers shouldn't need to learn how to
write a traffic management rule. The intent with the tooling is that it should be so easy that a developer can just pick up the tool and create a canary: say I have service A and service B, and I want to run a canary with service B and pass one percent of my traffic to that canary. Or I wish to do blue-green, or I'm going to do an A/B. The intent of the type of tooling we've been building is to remove the need for that type of knowledge from the developer, so that they just say, give me a canary, and the system manages the rules behind the scenes. You also need tooling that makes it easier to manage RBAC, so role-based access controls for users and teams; tooling that ties together GPU support with regular CPU instances; and tooling that takes you from a POC to a high-availability cluster. Some tools out there are great for quickly standing up a cluster if you want to start to play with it, but if you need to do ongoing management of, say, etcd or the masters, then it starts to fall apart. So the intent is to create tooling that allows you to easily do all of that, to easily go from, I want a test cluster, to, maybe I want to scale that out to a production cluster. And then lastly, things like private topology, which is the idea that in a public cloud setup you don't have anything like a public IP attached to any of those nodes, and you're coming in through a bastion host. Our belief is that all clusters should be built as private topologies: you shouldn't have public inbound access directly to that cluster unless it's through an exposed load balancer. Otherwise we get into weird problems, like developers declaring a load balancer without the IT team knowing that's happening, and then you can actually expose a security issue by doing
that. So we try to look for things like that in our solution. With that, let me go ahead and do a quick demo of what I'm talking about with the tooling. Josh, I'm going to ask you if you see the screen here; I assume you can see it. Yeah, this is a browser screen. Yes, it's coming through clear. So this is a tool that you can go and just sign up for; it's a SaaS-based tool. Some people might recognize this as the old StackPoint.io tool; it's the same thing, just renamed. What we're trying to do is create an easy way to basically get cluster infrastructure up and running, and to do day-two type operations. Here I've got a list view of clusters running on each of the cloud providers. I can dig in and actually manage them if I wanted to. I can see liveness probes: all the green dots indicate that those nodes are online. This is a POC cluster. If I wanted to, I could scale the cluster and add two more masters, so I have three, and etcd is managed behind the scenes, and then move up to five masters and have five etcd members as well. What we're also trying to do is make the tooling so that when you create a node pool on Amazon, you can replicate that same node pool on, say, GCP or Azure or VMware. The idea is that you can also come in here and do quick upgrades, so that you don't need to worry about CLIs or command-line tools. We're not replacing kubectl; kubectl works fine with these clusters. These are all upstream clusters; we're not a fork, we're not doing anything like that. An upgrade is just as simple as choosing, I want to move up to 1.15.3 with this cluster, and then it basically
performs an in-place upgrade. We try to do things where we're saying: hey, you can still use the tooling that comes with the project. So you can use the Kubernetes dashboard, but we don't want to expose that dashboard to the public internet, so what we do is tunnel it through our tooling so that you can just get access to it. Again, we try not to create a fork of these clusters such that you have a unique way of managing them. If you're a NetApp customer, we don't want you to just be a NetApp customer; we want you to be a Kubernetes user. The end goal is to make it easy for you to manage Kubernetes, and you don't have to purchase storage or anything like that from NetApp to do this. This is the wizard: it's a three-step wizard that allows you to create a cluster. You basically walk through the wizard, and at the end of it we ask you one more question: do you want to make any changes to this? If not, once you click submit, it begins to build the cluster. On premise, the VMware experience that we have is also identical to our public cloud experience. This is where we're trying to say we believe in harmony of managing cloud infrastructure regardless of where it's running. The idea with the tooling is that you walk through a three-step process and you have a cluster running on your VMware environment. If you have a FlexPod environment: three-step process, and now you have a cluster running on top of your FlexPod environment. You can also have an Amazon cluster running alongside that as well. So that's the quick demo of the cluster infrastructure side of this. But what about our developers? That whole scenario was: I'm an IT operator type of person, I need to create
infrastructure, I need to manage infrastructure. But now I'm a developer, and what I want to be able to do is push code. I just want to do a git commit, and then my application's up and running inside of Kubernetes. So next, let me talk through what we mean when we start to talk about cloud native app management. What we think is that it should be a Git-based app deployment model, and you should be able to use this tooling to run against both on-premise clusters and off-premise clusters, so public cloud clusters. It should all be built on top of open source tooling: we implement our own set of controllers and CRDs on top of things like Tekton and Knative, just to let everybody know. But at no point do we want to build custom components that make all this possible. What we're trying to say is: we're gluing it all together. We're composing that entire cloud native trail map for you so that you don't actually have to think through that trail map; let us do all that tooling and manage it for you, to try to get you to minimal toil as quickly as possible. In our concept of app management we have an idea of projects, and a project maps to a tenant, a namespace, inside of a Kubernetes cluster. We provide tooling that allows you to set resource quotas and limits on that project, so that whatever users are deploying into that project are capped, and we allow you to tie teams of users to that project and flow that team down to Kubernetes RBAC, so that the team could subsequently log in and pull down a kubeconfig file to attach to that namespace. We also believe in the idea that you have a choose-your-own-adventure for how you want to deploy workload into the cluster. There's the standard kubectl
apply of my YAML to my running environment, cool, I have it all online. Or I'm using Helm to do a helm install of a Helm chart into my environment. Or I want to be able to do a git commit and then see my application come up inside the environment, or do a git commit on a change and see that change come online in the environment. I want to be able to lifecycle this application, so I should be able to upgrade it. I should be able to see metrics attached to this application, so I should have service dashboards, and if I'm outputting logs to standard out, I should be able to see those. The other idea is that we want to make it as easy for a developer as when they were working with Heroku: Kubernetes becomes that simple to work with as a developer. It's a Heroku-like experience, but a little bit further than where Heroku left off. We also want to make sure that IT teams can manage the autoscaling for applications their developers are pushing into the environments, without the developers having to worry about those management settings. So what you're going to see: I'm going to show you the concept of the projects, the solutions that we place into those projects, the dashboards for things like metrics, and then I'm going to do a demo of delivering through Git. A project is basically a bucket; it's a namespace. Our thinking is that you're going to be working in a single project, and that would be an application, so this is the taxonomy here for an app. WordPress is an example I've been using: to an end user, WordPress is the application, but behind the scenes you have MySQL, you have a front end, and maybe some other components. The idea is that the project would contain all of that. The project would also have RBAC protection, and it would have
a default network policy, so services don't communicate outside of that namespace, and it would have quotas and limits if the IT operator wanted to apply those. Solutions: the concept here is that the tooling allows you to deploy a solution into your Kubernetes cluster in three different ways. Tracker is how we support bring-your-own-tooling, like kubectl. Git workflow is how you continue to use Git to manage your workload. And then there's a way of deploying Helm charts into the environment, and we actually never required Tiller; with this solution, we replaced Tiller. We understand that in Helm 3, Tiller is gone, but until Helm 3 is fully out, we're not going to adopt it yet. There are also things like applying default pod security policies to that namespace, so that when pods come online they inherit a particular pod security policy, and alerting on whether that pod security policy aligns with CIS benchmarks. So let me go ahead and do a quick demo of this piece. Going back to the idea of simplicity, we're trying to provide, with this tooling, simple dashboards for you to see: what is the health of my clusters, what is the health of my projects, and what are some average metrics coming off these things, like my average CPU and my average memory across all my clusters. This is all real time. So we're seeing two projects, and all the objects inside those namespaces are online. We allow you to dig deeper into this and start to break it down: I can start to see what my CPU core and memory usage is for a particular project that I'm working in. This is my new app, I'm running it on Amazon, and everything is online. And then we start to show deeper details once you dig into
it, and we show memory and CPU usage, and disk and network I/O pressure. Then we give you quick numbers like total number of pods, total number of deployments, what network traffic in and out looks like, and what solutions you have deployed inside this environment. And to quickly highlight how those three components I was talking about work, we've made it simple to use the tooling to take advantage of them. Tracker, again, is for when I'm going to put my own workload into the environment: I can come into this piece of the tooling and say, this is the label I'm using; app=test is my label. When I create this tracker, our controllers will watch for your application, your solution, in the environment, and they'll pull it all together inside of this view, so that when you come back into the app you'd actually be able to see metrics and logs for that YAML you deployed into the environment. The other idea is very simple marketplaces. Here I'm just showing the Bitnami marketplace: we have the capacity to ingest Helm repos, either from public or private repositories, and I've set this up to ingest the Bitnami chart marketplace into my private account, so that I have a private marketplace as my user logged into this system, versus other users. We also provide things like trusted charts that we've gone through and verified. So this is one way: you deploy these into the environment, and you would again see metrics dashboards and logging dashboards for them; I have Redis deployed right here. The last way of doing this is what we call Git workflow, which is: I'm an IT manager and I want to create a custom application, my new team's app that they're working on. I want
to make sure that they're going to have a rolling update strategy for the application once they deploy into the environment through git i want to make sure that there's always you know five replicas for them and they can max to 15 and we're going to go ahead and trigger on 70 cp utilization and when i create this application this is going to create just a very small number of objects inside of the inside of the kubernetes cluster and then uh what we have a controller that's going to watch for uh when you do the first git push and then once that happens we build a deployment around the container that so we build a container off of that git push and then we wrap that container up as a deployment and then we bring that uh put that under management inside of uh by our controller and then our controller effectively turns on things like horizontal pod autoscaling and takes the values that you've set here uh for that um so i already have that deployed right here and the other thing that we do is we give you kind of a quick dashboard sorry not a dashboard but a vanity uh url that you can check uh so that when you make a change you can go ahead and say cool this is what my my app looks like um so i have hold on let me i'm gonna just go make a quick change and and then what i'm doing here is i just made a change and now i am uh effectively rebuilding that deployment that container and redeploy during a rolling upgrade uh out into the environment to uh basically get that new change out there and so um over time you'll see things like an error will show up as pods are coming online uh and then eventually that is now up and running so the developer was able to so i was able to make a change push it real quick and have it up and running in the environment takes about two to three minutes for that to happen um you know another aspect of what we do is we uh provide uh you know easy so we're all about ui and user usability as as hopefully as you can see so we also have uh you know 
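As a rough sketch of how the pieces just described could map onto standard Kubernetes objects (this is an illustration, not NetApp's actual generated manifests; the names, image, and registry are made-up assumptions), the app: test tracker label, the rolling update strategy, and the 5-to-15-replica, 70% CPU autoscaling values from the form would look roughly like this:

```yaml
# Illustrative sketch only; names, image, and registry are assumptions.
# The Deployment carries the app: test label that a label-based tracker
# could watch for, plus the rolling update strategy from the Git
# workflow form.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teams-app
  labels:
    app: test
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # bring one new pod up before
      maxUnavailable: 0  # taking any old pod down
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: teams-app
        image: registry.example.com/teams-app:latest  # built from the git push
        resources:
          requests:
            cpu: 100m  # HPA percentages are measured against this request
---
# The autoscaler matching the values set in the form:
# minimum 5 replicas, maximum 15, scale out at 70% CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: teams-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: teams-app
  minReplicas: 5
  maxReplicas: 15
  targetCPUUtilizationPercentage: 70
```

With objects like these in place, pushing a newly built image tag produces the same rolling upgrade behavior shown in the demo, and the autoscaler keeps the Deployment between 5 and 15 replicas based on CPU load.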
I'm not going to jump into a demo of this, but we also have usability around the CNCF Harbor project. Everything that we use here is open source; that is how we've built our tooling, by leveraging open source and tying it together. And we've been doing a lot of work inside the Harbor project as well, to create a better managed experience for running Harbor in a context like this. So when I do a git commit and that container starts to be built, I want to be able to target my Harbor registry inside of my service cluster for it to be stored there, and I want to be able to use things like Anchore to lock those containers down with policy inside of Harbor, so that when a CVE is reported for one of the containers, there's a policy that does not allow that container to be pulled out into runtime. We do have usability around that right now, but I don't have a demo prepared for it. So with that, I'm going to turn it back to you, Josh. I'm ready for Q&A.

Okay, thank you very much for that presentation. As a reminder, everybody, there is a question and answer panel, which you can access by clicking on the Q&A icon; you can ask your questions there, and I will queue them and share them with the presenter. While waiting for those to queue up, I'll start with one of my own.

And I'm answering some of these through the chat, I guess, right? So I'll ask: do you want me to just read down the list?

Yeah, go ahead.

Okay. So one of the questions was about downgrades of Kubernetes clusters.

We don't currently account for that. Our history with upgrades is that we've been supporting Kubernetes upgrades since 1.11, and for us it's been difficult to support downgrades. Now that we're moving to a more standardized way of doing cluster builds with kubeadm, which is what we use behind the scenes, we would look at supporting that, but right now we don't.

Yeah, and one thing I'll add to that: one of the things we have going on in the Kubernetes community is that currently nobody is maintaining the end-to-end tests for downgrades. So officially, as the Kubernetes project, we don't have any official support for downgrading, and if that's a use case that matters to you, you might want to think about having some of your QE engineers contribute to the project, so that it's something we can actually test at the upstream level.

Yeah. I think the pattern we're seeing is that a lot of the larger enterprise customers are starting to just run one or two versions behind on Kubernetes, versus jumping onto the new version each time it ships, and I think that's going to become a persistent story: most enterprises are going to run a version or two behind.

Okay. So, a question from Howard Drew: are multi-cloud builds a security concern, and are MFA, IAM, and encryption enough?

I would like to ask him to clarify that question. What do you mean by that? I guess I want a little bit more clarity on that question.

Okay, so Howard, if you can retype your question in the Q&A panel, we'll pick it up in a minute. In the meantime, let's go on to Denis's question. No, that was Denis with the downgrade; there's a second question, which is: the UI tool and everything you just demonstrated, is this open source?

So some of our components are open source, and some of them are just free to use; you can just go and log in and trial the UI tooling itself. And then we have other open source components as well on our GitHub repo. Like, if you're interested in some of the storage work that we're doing behind the scenes in that tool, that's an open source project
that's called Trident, and that's basically how we're solving storage inside of these environments.

Okay. Another question, from Bow Button: what kind of support comes with the service, presumably the commercial version?

Yeah, with the commercial version that NetApp is selling, enterprise support is bundled in, so with the commercial version you do not have to pay separately for support. You get the same level of support that NetApp has provided to its storage customers, so 24/7/365; from a Kubernetes point of view, the same level of support that you would get from Red Hat.

Okay. Another question: can we host this entire system ourselves?

No.

Okay. I actually have a question for you of my own, which is: you're doing federation here, for multi-cloud and maybe hybrid. What's the underlying federation mechanism you're using? Are you using Kubernetes Federation v2, or something developed by NetApp?

So federation is a touchy topic in this space, as you can imagine. We originally supported Federation v1, and we actually worked with Google to ensure that support. But because v1 is now end of life and v2 is kind of a shit show at the moment, we're taking the approach of riding on some of our own IP and leveraging Istio as well, coupling our own IP with Istio and then sending proposals up to the Istio community on things like MCP. How we federate between two clusters now is using some IP that we wrote, which we just sent upstream: how do you connect two Istio service meshes together, and how do you route traffic between them. So the TL;DR is that it's a bit of custom IP combined with Istio, but all that custom IP is open sourced.

Okay. So one questioner asked if you have plans to work with IBM and Red Hat.

I would say we're already working with Red Hat, because we've collaborated with you guys; the question, I guess, would be on what in particular, because we all collaborate a lot on Istio. Where we're looking at collaboration inside the Istio world is MCP, the multi-cluster protocol; that's what we're focused on. Broadly with IBM, though, we're also talking about being able to support IBM Cloud. With Red Hat we do a lot of different work, so it depends on the area of Red Hat.

Yeah. I'm getting some clarification on another question. So, Louis Sanchez asked a question that I'm going to paraphrase to make it more applicable to the presentation, which is: if somebody is just getting started with Kubernetes, would you recommend that they get started with a system like this one, with StackPoint and Trident and that sort of thing, or that they get started with something like a bare-bones upstream Kubernetes distro?

You know, I think it depends on where you want to place your area of expertise. If it's just "I want to learn how to deploy and run microservices on top of Kubernetes, because I want to learn how to refactor my application to run in this world," and you don't want to be the IT manager of the platform, then I would use a tool like ours, or I would use GKE. Maybe ask Google for a two hundred dollar credit; they'll give you one, and then you just spin up GKE and play around with workloads there. You can plug that into ours as well, and with ours you'd be able to play with more than just Kubernetes;
you'd be able to use Istio and get experience with things like Prometheus and all that other stuff. But if you want strictly Kubernetes, I feel like Google, with GKE and their two hundred dollars in credits, might be a quick way for a newbie to get going without having to spend any money. Outside of that, you're going to be spending a bit of money, not a lot, but a PoC cluster will cost you something at all the providers. If you wanted to go the hard route, I would, as always, recommend the Kelsey Hightower document, Kubernetes the Hard Way.

Yeah. One more question, from outside the Q&A, and then we'll wind it up: Helm 3 has been released in beta. Have you played with it yet? Do you have thoughts?

There are a lot of thoughts around Helm. Where we took our tooling is where Helm went with 3.0, so where our pain is going to come in is that we'll have to refactor some of the tooling to account for Helm 3.0. We were dealing with Helm 2.0, and we had actually replaced the need for Tiller in the environment on 2.0, and I would say the biggest thing with 3.0 is that removal of Tiller. As a packaging format in general, I think it can still stand some work, but we're attending Helm Summit in a few weeks, so I'm trying to participate there as well.

Yeah, okay. So one last question and then we'll wind up, which is: the Kubernetes upgrade through the orchestration tool, is that a rolling upgrade?

Yes.

Okay. And then I also see, on the roadmap, support for clusters in other clouds, by Oracle and IBM, is on the way. Okay. And I apologize for the construction noise I had in the background. I think that's everything. Well, thank you very much for a terrific presentation. Per the chat, the recording of the webinar and the slides will be posted shortly on the CNCF events page, on the webinars page, so you can review any material if you missed something as part of this presentation. And once again, thank you, Matt, for sharing and demonstrating that with us.

You're welcome. And anyone, feel free to DM me on Twitter if you have any questions.

Cool, thank you.

Cool, thanks guys. Have a good day. Bye.