OK, we're going to get started. Hi, welcome to the KubeCon 2022 event in Valencia. I'm Steve Wong, and I'm joined by Michael Gasch. You're about to learn about some cool new tech that lets you enable better integration of Kubernetes workloads with vSphere. And this is brought to you by the VMware User Group. A word about this user group: we're inclusive of all users running any Kubernetes on VMware infrastructure. So I don't want you to be mistaken thinking that, since the two of us happen to be with VMware, we're only talking VMware Kubernetes distributions. If you're running Red Hat OpenShift or Rancher or whatever, what we're going to talk about here still applies. This is completely generic. It should work with any compliant Kubernetes running on the vSphere hypervisor. And this group itself, if you elect to join the community, all the stuff we cover is like that. It's completely neutral as to the type of Kubernetes you're running. We're going to start with a little background explaining why a tool that monitors activities spanning multiple layers of your stack, from infrastructure up to Kubernetes, might be useful or even essential. And then we're going to explain the tool, mostly through demonstrations. A strong reason to retain your own on-prem infrastructure is a sovereign cloud requirement, where you can ensure data locality and local governance. Kubernetes can run either on on-prem things like vSphere or in a public cloud. With Kubernetes and with hypervisor infrastructure, the goal is to maintain a working illusion so that operations run consistently across locations and across variations over time. So these abstraction layers can simplify the job of the components above them. But they do need to make some choices as to how to support what runs on top in relation to the layers below. vSphere is designed to allow VMs to opt in to a high availability model that rigorously attempts to defend workloads from going down.
The Kubernetes model is more tuned to an expectation that a workload has an architecture that allows for individual components and individual containers to go down while still maintaining an acceptable level of service. There are classes of applications better suited to the vSphere model, others better suited to the Kubernetes model. And if you're running an on-prem installation, you likely have both operating, and they might need to interoperate. A lot of on-prem installations host legacy VM-based services which interact with the Kubernetes workloads. The Kubernetes zone abstraction, mapping to cloud fault domains, is pretty comparable across all the public clouds. But vSphere, and thus Kubernetes running on vSphere, is actually a lot less opinionated. It has a lot of configuration options that leave it to the user to implement failure domains based on budget constraints, resource constraints, and so on. And it is possible to configure your vSphere installation so that the storage, compute, and network service fault domains may not be aligned with each other. And there are ways to take this into account with Kubernetes, best practices, et cetera. But the bottom line is your fault domains might be aligned on rack, aisle, or building boundaries, and it's up to you. When a fault happens, how do you deal operationally with an underwater explosion that might ripple up through these layers from the bottom and impact the things running up there in different ways? Michael's going to show you a tool that will help you be more like a pilot dealing with some turbulence in the air when these things occur, as opposed to the alternative, which is the captain of the Titanic, who's oblivious to the fact that the hull has been ripped open underwater. So with that said, I'm going to turn this over to Michael. We'll have a brief interruption because he's going to change the screen to mirror mode. Yes, thank you, Steve. So where's my display settings?
This is going to be the hardest part, figuring out the displays. OK, so while this is adjusting, who runs vSphere? Good, you're in the right room. Who runs Kubernetes on vSphere? Or plans to? OK, perfect. Awesome. Nice, nice. Who runs, let me advance a bit, Knative eventing sort of on vSphere? Yeah, I know. I know. Good. OK, very good. So the session is set up in a way that hopefully everyone can take something out of this. Even if you've never heard about the appliance or the eventing concept that we're going to talk about, or you're an expert like Scott here, hopefully there's a little bit that you can take from the session. So now let's see if this goes. Now, I don't know what's happening now. OK, let me just do it this way then. So what is event-driven architecture? In this talk, we talk a lot about events, asynchronous operations, event-driven, whatever. And so we looked at Wikipedia as kind of the source of truth for event-driven architecture, if you've never heard about it before. So event-driven architecture is a software architecture paradigm. It's not new at all; it's been around for decades. It promotes the production, detection, consumption of, and reaction to events. And the most important part here is the reaction. You do something, right? Something happens, and you react to it. So what is an event? An event is an immutable fact. It's a piece of information. It's different from a message or a command, in that an event is something that you cannot change. It is there, and you get a notification for that. Also, the sender of the event, for example vSphere and vCenter, as we will see in a bit, is not aware of the potential recipients of that event. There's no expectation, if I send an event out there, that I know who's listening and reacting to it. So there's essentially no coupling between them. And since those facts are immutable, you can rely on this information.
There's a difference between saying, I want to power on a VM, versus the fact that a VM was powered on. The first is a command, meaning it can fail. It can fail for various reasons: you don't have host resources, you don't have the permissions for that. So commands, in general, can fail, and you need to account for that. Versus if you receive an event that the VM was powered on, you can be sure that this has happened. It's a fact from the past, right? It's something that has happened. Or in a nutshell, just think about If This Then That. If you know that cloud service, IFTTT, that's basically the whole idea there. If something happens, you do something with it. And so, we're talking very much around vSphere today, but events in general are also very useful. A lot of products and services produce events, but vSphere happens to be the core one that our administrators and core personas use a lot. So did you know that vSphere and vCenter alone, in a standard setup, so no plugins, no extensions installed, come with 1,800-plus events already? Did you know that? Like, I didn't know that. Yeah, you knew it, I know. Which is great, because if it could only send you three events, that's probably not gonna be useful. 1,800 is a lot, starting from these power-on, power-off events, DRS-related events, because DRS is a little bit different when it comes to these operations, configuration events, login events, backup events, you name it. This list is huge. My colleague, William Lam, has a GitHub page where he tracks all these different events that are available for use in different vSphere and vCenter versions. And the beauty of this is that you can do something with them, right? Now, what are some use cases for event-driven systems in general, how they are applied, and then specific to the context of a vSphere environment and a Kubernetes environment?
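To make the event-versus-command distinction concrete, here's a minimal Python sketch; the class and field names are illustrative, not a real vSphere API.

```python
from dataclasses import dataclass

# A command expresses intent and can fail; an event records a fact that
# already happened and cannot change. Names here are illustrative only.

@dataclass(frozen=True)  # frozen: an event is an immutable fact
class VmPoweredOnEvent:
    vm_name: str
    timestamp: str
    source: str  # e.g. which vCenter emitted it

def handle(event: VmPoweredOnEvent) -> str:
    # The reaction: the producer has no idea this consumer exists.
    return f"notify: {event.vm_name} powered on at {event.timestamp}"

event = VmPoweredOnEvent("web-01", "2022-05-18T10:00:00Z", "vcenter-01")
print(handle(event))
```

Because the dataclass is frozen, attempting to change a field raises an error, which mirrors the idea that a fact from the past can't be altered, only reacted to.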
First of all, the most commonly used is notification. I want to get a notification in Slack when a VM is deleted or powered on. Now, the challenge with vSphere and vCenter is that it doesn't give you this integration, there's no Slack integration. And even if there were, someone would come and say, can you give me Teams integration? Well, then we would have to build Teams integration. Can you give me Discord integration? Well, you know the pattern, right? So what we're trying to do here is to not have vCenter become a thing where we have to attach all these notification integrations. Instead, vCenter is just sending the event, there's then some magic in between, and then we'll send it somewhere: to Slack, PagerDuty, or whatever. The other one is automation, and Scott here in the first row has built a lot of automation use cases. There was also a VMware User Group meeting last year, I guess, right? Which was recorded, where Scott explained a bit more about all the use cases that they have, more than a hundred functions by that time, right? And so automation is a key one, because notification is nice, but what you also can do with these events, this information, is to reset the VM, for example, or to power it back on, or to put it in a DNS database or in a CMDB, or do something with the fact that you received. Another one is integration, a big one with the cloud providers. So we have VMware Cloud on AWS, we have VMware on Google, we have VMware on Azure, and on service providers. And we see right now a lot of interest in these use cases where you have these vSphere events, you might have some events in AWS, and you wanna send them and aggregate them and maybe correlate them together. And so we also have some AWS folks who use the vCenter events and do automation in AWS. Remediation, I already spoke about that. That's an important one.
So we have common use cases: you have vSphere and you have NSX-T, which is our network stack, and both happen to be kind of their own islands. But if you wanna make sure that a virtual machine is secured on the network, then you need to have the settings and the awareness of both of these control planes knowing about each other. And so with event-driven systems and automation here, we can send this information to another control plane like NSX-T and say, look, this is a VM, it has these five tags, and with that tag, put it in a security group, whatever, or shut it down in the firewall. So we can do the synchronization between the systems. Auditing, analytics, compliance, those are more advanced use cases. The more common one would be Oracle licensing, where typically Oracle would force you to audit and prove that certain VMs running Oracle Database are only running within a set of hosts, because otherwise you would potentially have to license all the hosts. And so we have users that use VEBA, the event-driven automation, to do the auditing: to track these events when VMs migrate between hosts or when they are powered on, store them in a persistent log, and then prove that these VMs were only running within the boundary that they actually set up. So the question is, how can you do that? Three years ago, William Lam and I sat together and we built the VMware Event Broker Appliance, because it turns out that accessing these eventing APIs, just for vCenter alone, is hard, and sometimes people don't even know they exist, especially if you don't have a software engineering background. And so what we wanted to do is make it easy for anyone, even without a coding or software development background, to leverage these events. And so we built the VMware Event Broker Appliance, VEBA, which started as a very much vCenter- and vSphere-centric appliance that you can download open source. It's a fling.
I don't know if you can read it, but there's a link down there as well. But you can also use the individual components on plain Kubernetes. So if you already have a Kubernetes environment, you can just install the bits that you need for doing that. And so VEBA gives you those use cases that you have seen before, but it also gives you some of the examples and the integrations, for example sending stuff to Slack, doing some tag synchronization with NSX-T, and so on. And we have a community around VEBA who are contributing, like Scott and others, or giving feedback on documentation, filing issues on GitHub, and just giving feedback in general about it. And one piece of core feedback was: it's nice to have vSphere events, but VMware alone has a lot more products, like Horizon in the VDI space, and NSX-T, and so on. And if you expand further into the cloud, you will also see systems that generate events. So the goal here is to be like an aggregator, a router if you will, that connects to different event sources of different kinds and then sends these somewhere. And whether that's a Slack integration or another Kafka or AWS, what have you, that's the goal there. And we heavily rely, and this is a shout-out, on the Knative and the CloudEvents community, because most of the time we want you to run little code, little actions, to react to an event. And so a common pattern there is to use functions, like an AWS Lambda, or with Knative, where you just write the minimal amount of code to do something with the event. So you don't have to connect to vCenter, you don't have to be aware of vCenter at all, whatever you do. And we use a standard for these events, which is called CloudEvents, because if you have different sources sending events, these events don't look like each other. They have different schemas and shapes and payloads, et cetera.
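As a rough illustration of that common shape, here's a stdlib-only Python sketch that wraps a hypothetical vCenter event in the CloudEvents v1.0 structured JSON form. The top-level field names (specversion, id, type, source, subject, time) follow the CNCF spec; the type string, source URL, and payload fields are assumptions, not the exact VEBA schema.

```python
import json
import uuid
from datetime import datetime, timezone

# Wrap an arbitrary payload in a CloudEvents v1.0 structured-mode
# JSON envelope so every downstream consumer sees the same shape.

def to_cloudevent(event_type: str, source: str, subject: str, payload: dict) -> str:
    envelope = {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),  # unique per event
        "type": event_type,       # which class of event this is
        "source": source,         # who emitted it
        "subject": subject,       # which object it is about
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": payload,
    }
    return json.dumps(envelope)

ce = json.loads(to_cloudevent(
    "com.example.vsphere.VmPoweredOnEvent",   # assumed type string
    "https://vcenter-01.example.com/sdk",     # assumed source URL
    "web-01",
    {"vm": "web-01", "host": "esx-51.example.com"},
))
print(ce["type"], ce["subject"])
```

Whatever the original source looks like, a function downstream can always count on these same top-level fields being present, which is the point of harmonizing everything into CloudEvents.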
So we use the CNCF CloudEvents standard to put a common shape around it, so that for the code, for the function that you write, you know there are always these kinds of fields: a type, a subject, a timestamp, a source, et cetera. So that's the standard that we use, and we kind of harmonize that in VEBA. And so over the years, obviously, the community grew, and we were lucky to have Scott, here in the first row, as a very heavy contributor. And he said VEBA has really allowed them to spend time on what really matters and not on repetitive tasks, like setting up and connecting to vCenter, et cetera. Getting started with VEBA is really easy and adds huge value to almost every environment. And Scott works for Terrasci, and so Terrasci uses VEBA internally for their own processes but also in customer environments. And they even built some crazy logic on Google Cloud that I heard about before, so he can tell you more about it, which is mind-blowing, what they did with VEBA. And this is also a learning that William, I, and the VEBA community had: when we put this out, we had common use cases like Slack integration, but then users were adopting it in ways that we didn't even think about when we released VEBA. Okay, so let's talk about use cases, now that we have the baseline of what VEBA does. So I have two scenarios here and two demos of how we think VEBA can help, and they are mostly driven, actually, by community and customer requests. In his introduction, Steve spoke about the concept of fault domains and zones, which you might know from cloud providers, like the AWS us-east availability zones for example, but vSphere also has a concept of zones, or rack awareness, or affinity if you will, if you're familiar with DRS. So what we typically see in larger environments or more resilient deployments is that we have a host, a cluster, a rack, however you wanna size it, which lives in zone A or environment A, and then we have another one which lives in environment B.
And then our users, the vCenter admins or the business teams, start distributing the virtual machines and the workloads so that they are resilient to failure, putting one virtual machine on the one rack and the other one on the other rack, so that if a rack fails or you have maintenance, those workloads are not impacted. And this is basic vSphere DRS resource management 101, and this is all fine, and we use it. Now, Kubernetes itself has the same concept, but at a different layer, because the Kubernetes scheduler is also aware of zones and affinity and the distribution of workloads, in this case pods, not virtual machines, but pods. And so if you do things right between those two layers, then you will have a direct mapping between a pod workload that runs on a Kubernetes worker, which maps to a virtual machine, which maps to the same site, or zone, if you will. This is all good, especially for developers, when they say, oh, I wanna run my stateful workload and I wanna distribute it. It's really important to have this mapping, because it could easily happen that the virtual machine actually runs somewhere else. So it's important to align this stack, and this is also common practice, nothing new here. The trouble begins if you have DRS or vSphere starting to migrate those virtual machines. So vSphere has the concept of vMotion, live migration, where we can non-disruptively move workloads between different domains, or hosts, or clusters. This is good for maintenance, availability concerns, right? So there's a lot of benefit in there. Now, the problem is that because this happens at the vSphere layer, Kubernetes is not aware of it. Kubernetes does not update its knowledge of the worker, that it's actually now running in zone B. And even if it did, it would actually have to take the pod off that worker and put it somewhere else. In this case, it couldn't even put it somewhere else, because the zone has no virtual machine there.
So the challenge that most of our users have, and some try to mitigate by writing custom scripts and logic calling against vCenter, is that they don't know when these migrations happen. And this is a scenario that can affect the availability of the application. But even in the case of just a normal VM migration from one host to another, sometimes business application teams say, oh, there was latency and my app was slow, and I think it's the vMotion that you guys do down there. And then the VI admin needs to look in the logs and say, okay, no, there was no vMotion at this timestamp. So wouldn't it be nice if we could detect that change right when it happens, and then either log it, or notify someone, or even put the VM back, right, if needed? And so that's the first scenario that I'm gonna run through quickly. But I need to fix the monitor first, because otherwise it's gonna be hard for me. So, mirror; pray to the gods that this works. It's always jumping back. Man, I'm in trouble now. Let me stop this here. So just a quick question: who knows this problem of, oh, a vMotion happens and latency is high, or we have the DRS affinity problem? Okay, cool. Looks like we had at least three hands going. Well, this is good and bad, because the others in the room might not even be aware of this problem or how to mitigate it, right? So let me show you how this would work with VEBA. So I think, no, I'm good. On my laptop, I have a setup of a Kubernetes environment. VEBA runs locally there, but I have a remote environment in Palo Alto where my whole vCenter environment is. For this demo, all the code and the tutorial are on a GitHub page. This is also in the appendix of the presentation. So we have all the links there if you wanna follow along. So what I'm going to show you now is the first scenario of detecting and sending something to Slack. So here's my Slack instance. You see it's currently empty.
And now I'm gonna go to my vSphere environment. And I was told that it's hard to read in the back, so I'm gonna explain what's going on here. So here on the left, and let me try to zoom in a little bit. On the left, we have a cluster of four vSphere hosts. One is already in maintenance mode, because that helps me force what I wanna show here. And the three other ones are regular vSphere hosts. And then I have two virtual machines. Those are here, those two. And one of the virtual machines is pinned to the upper set of hosts, or has a DRS rule to say you should run on this host group. And the lower one is running on these other two hosts. And this is a should rule. It's a soft rule, meaning it can be violated. For example, if I'm doing maintenance, then the upper one would also migrate down there. So ideally they are distributed, but sometimes you lose the one rack, if you will. Quick check, any questions? No? Okay. Am I still audible? Cool. But ideally you also want to have a failover, or a proactive failover, to the other group. So what I'm going to do now is force this, but I first need to deploy the code. And what I'm doing here is, I have a small function, called the tag-drift function, which basically gets an event that says a virtual machine was migrated, either by DRS or manually, by an operator like me. And then it sends it to Slack. The function itself is very small. Let me just quickly run the command here. So it creates a function, that's the first one. And then it creates two triggers, which are basically just the way to say, hey, if this happens, call the function. And this is all Knative based. We had a couple of Knative talks at KubeCon here, so I'm not going to cover the Knative bits, because that is also out of scope. What is more important is the code. You get an event, you look at the event, you do something. You don't need to know about Knative.
You don't need to know about vSphere at all. This could also be used by business applications and developers, because they don't even need access to vCenter. So you don't need to give them accounts. They don't have scripts calling your vCenter, taking it down, network policies, et cetera. You're basically shielding them from all this stuff and just telling them when something has happened. So it looks like this is all deployed now. And what I'm going to do is, I have a little thingy here. This is the Sockeye application, which gives me the stream of events. It was built by some of the cool Knative founders, and it will basically show all the events that are going on. Right now there's no traffic in my environment, so we don't see anything. And I just hope that this virtual machine, yes, I need to authenticate, just a second, of course, demo. But this also gives me an event. When I log in, you will actually see the event. This is also good. So we're going off script here, but why not? By logging in now, this will also show me if my system works. Yeah, we already get the events coming in. There's a task event, there's a user login session event. So it's good to see that working. And now we go to our control plane. I called this VM the Kubernetes control plane, because typically you would distribute these kinds of Kubernetes control planes on different hosts in vSphere. And currently it runs on host 51, which is that guy. And that guy I'm going to put in maintenance mode now. Enter maintenance mode. And what it does, it basically tells me, hey, there's still a virtual machine on that host. It's going to migrate that virtual machine off this host onto the other group, because I forced the other host in its group to also be in maintenance mode; there are only two hosts left, which are in the other group. And now, because I'm running non-compliant, the worker is in a non-preferred zone, if you will, from a Kubernetes perspective, and now I actually should be alerted.
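A drift-detection function like the one in the demo might look roughly like this in Python. This is a sketch, not the actual tutorial code: the event payload shape, the PREFERRED_ZONE map, and the SLACK_WEBHOOK_URL are all hypothetical stand-ins.

```python
import json
import urllib.request
from typing import Optional

# Sketch of a zone-drift check: given a (already parsed) migration
# event, compare the destination zone against an assumed placement
# policy and build a Slack notification when they disagree.

PREFERRED_ZONE = {"k8s-control-plane": "eu-1"}  # assumed placement policy
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/EXAMPLE"  # hypothetical

def handle_migration(event: dict) -> Optional[str]:
    data = event["data"]
    preferred = PREFERRED_ZONE.get(data["vm"])
    if preferred is None or data["destZone"] == preferred:
        return None  # no drift, nothing to report
    return (f"{data['vm']} migrated from {data['sourceHost']} to "
            f"{data['destHost']}; preferred zone is {preferred}, "
            f"now running in {data['destZone']}")

def notify_slack(text: str) -> None:
    # Fire a simple incoming-webhook POST; in the demo this is where
    # the Slack message comes from.
    body = json.dumps({"text": text}).encode()
    req = urllib.request.Request(SLACK_WEBHOOK_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

msg = handle_migration({"data": {"vm": "k8s-control-plane",
                                 "sourceHost": "esx-51", "destHost": "esx-53",
                                 "destZone": "eu-2"}})
print(msg)
```

Note that nothing here talks to vCenter: the function only sees the event it is handed, which is exactly the shielding described above.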
And so you saw there are a lot of events, like when the host enters maintenance mode, et cetera. So there are also some good events that you will get out of this. And everything worked. And this is not a screenshot; see, I can scroll here. Basically, the function was executed. And in my case, the function just takes the event and creates a Slack message saying, hey, this virtual machine called K8s Control Plane was migrated from host A to host B. And then here, just for introspection, the preferred zone for the VM is EU1, and the host where it's currently running is EU2. And so now someone can at least take a look, or extend the function, right? There's also a little bit of info in the template code of the function where it says, hey, you can actually try to revert it back, like remediate, right? But for the simplicity of the demo, I kept it simple. I just sent this Slack notification here. So that was the first one. Let's jump back. We're still good on time. So this is the very typical case of sending to Slack or Teams or PagerDuty or whatever. Now the next one is a little bit more advanced, because I wanted to show some advanced concepts. I think we have 10 minutes left, right? Yep, cool. I'll get there. So if you're not familiar with vSphere alarms, and again, this is hard to read: basically you can set alarms in your vSphere environment so that, for example, if a datastore gets full, or a cluster is too utilized, or you have lost network connection or bandwidth, et cetera, you can set up alarms and thresholds and all that stuff. And these are great, because they help administrators and teams observe the system. The trouble with alarms in vSphere is that there's just a limited number of integrations that they have. You can send an email, you can send an SNMP trap, or you can run a script on the control plane. Now, an email is probably fine. SNMP traps were fine 20 years ago, but maybe not these days anymore.
And running a script is also not good, because you might be in control of the control plane, but what if someone else wants to run a script? Well, now you basically have arbitrary code running on your vCenter appliance, which is not a good idea, also from a security perspective. Next, you still need someone to have access to vCenter, which we wanna get away from. Why would we give everyone vCenter access just for configuring and extending alarms? Next, if you start to write external scripts, you have this issue with polling: they need to poll for, hey, has this alarm changed? And the trouble with polling is, if you poll too frequently, then this is obviously troublesome for vCenter, and if you poll too slowly, well, then you might miss the alarm, or you react too late, right? And again, it's coupling: you need to know the APIs, the vCenter APIs, for that, which you might know, but not everyone in different teams does. And now you might be smart: okay, well, I heard Michael talking about event-driven, I know I can do something with these alarms, let me build my own custom integration, let me build my Python, my PowerCLI or whatever. This works, but the problem with the alarm event that you get is that it's very generic and it misses a lot of data. So what we've done with some of the VEBA examples is use some Knative concepts, which are basically the runtime that we use, to do a bit more magic on the event. So vCenter sends us an alarm event to say, oh, host memory usage is way too high, you need to do something. Well, then VEBA obviously gets it, and then we transform it into a CloudEvent; this is all the boilerplate internals that you don't need to care about. But now, for the example, what we do is use something called a sequence, in VEBA or in Knative if you will, which gives you steps. You can say, call this one first, and then that function, and then that one, and that one.
So what we wanna do here is, we have a service, not a function, a service, which receives that event. It then calls into vCenter for more details on the alarm, because they are missing, they're not in there. It caches these details, because we don't wanna always go back to vCenter and do all these calls, because you could have a lot of alarm events, right? And then it produces a new event, an enriched alarm event if you will, with all the details of the alarm, like the name, the triggers, the criticality, how it's set up and all that stuff, and it sends it downstream to another set of parallel receivers. And in this case, why parallel? Because we want to send it to a dashboard, in this case my Sockeye event viewer, which you will see, but we also wanna send it to Slack. So the beauty of this is that one event can fan out into multiple receivers or endpoints, if you will, and you can do this in parallel, or you can have these steps before, where you intercept and inject and even create your own new events. A lot of VEBA users use the vSphere alarms, but when we tell them, look, you can even create your own events, they go, oh cool, I can do billing, I can do user management, right? Because we use CloudEvents, and you can just produce CloudEvents. We even have an SDK for PowerShell and PowerCLI to create and read CloudEvents. So let's jump to the demo, the last one, before we wrap it up. Oh, I know what the issue is: PowerPoint is the issue, of course. Where's my mouse here? I need to finish this. Then wait. So back to the example. So again, the whole scenario that we're gonna show here is described in the tutorial. So I'm going to deploy a function, this one. Just one kubectl command, but I need to go into a specific folder for that code, and I hope I did that right.
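The enrich-and-cache step of that sequence could be sketched in Python like this. It's a sketch under assumptions: get_alarm_details stands in for the real vCenter API call, and the returned alarm details are made up.

```python
import functools

# Sketch of the alarm-enrichment step in the sequence. The cache
# avoids going back to vCenter for every alarm event with the same
# alarm key, since many events can fire for one alarm definition.

CALLS = {"vcenter": 0}  # count simulated round-trips to vCenter

@functools.lru_cache(maxsize=128)
def get_alarm_details(alarm_key: str) -> tuple:
    CALLS["vcenter"] += 1
    # In reality this would fetch name, triggers, thresholds from vCenter.
    return ("Host memory usage", "error", "above 5%")

def enrich(event: dict) -> dict:
    name, severity, threshold = get_alarm_details(event["alarmKey"])
    # Produce a new, richer "AlarmInfo" event for downstream receivers.
    return {**event, "type": "alarm.info", "alarmName": name,
            "severity": severity, "threshold": threshold}

first = enrich({"alarmKey": "alarm-42", "host": "esx-51"})
second = enrich({"alarmKey": "alarm-42", "host": "esx-52"})
print(first["alarmName"], CALLS["vcenter"])
```

Two events for the same alarm key cost only one lookup; the enriched events then fan out, in parallel, to the dashboard and the Slack function as described.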
Yeah, okay, so what it did here: it created a function for sending to Slack, and it has a couple of triggers and sequences and parallels, the stuff that you saw before, to wire this all up, and I think there's a dispatcher, or, it's way too big too. Oh yeah, trigger here. Okay, this looks good. And now I'll show you the vSphere environment. So I have set up an alarm, a fake alarm if you will, because I need to trigger it. Come on. And go here, alarm definitions. So this is my alarm here, and it's host memory above 5%. Obviously, I set it low to trigger it, right? So I'm gonna enable it now, because it was disabled, so that I can show the example. And after some time, you will see some of these hosts getting flagged with alarms, because it takes vCenter a while to figure out all the utilization and alarms, I think 30 seconds here by default. So that takes a bit, but in the meantime, we already see a lot of alarms coming in, and here's one, which is the already enriched alarm event, which we call AlarmInfo. In this case, the function just happens to do that. And what it does is, it takes the original event, which is roughly this. It has some boilerplate, blah blah blah, but it misses important information, which we then, with the pipeline that you saw, inject into the event. So we patch it in, and now, once it's patched in with all the more detailed information, it's hard to read, but again, it's more detailed information for the alarm. Now we send it to our Slack function, so that our Slack function, and it's already doing stuff here, can print us a nice, hey, this is an alarm, and the type was an error, and the object was a host, this is a Slack symbol, not recognized here, and we also have the threshold. Without patching that event, we wouldn't be able to create this Slack message, because all this information is missing; it's not available in the original event.
That's why we're doing this patching on the chain of events there, and this is all possible. So I always joke that the elastic sky is the limit; Elastic Sky used to be the first name of ESXi. So when you start thinking about these events and integrations and the possibilities that you have there, literally your own imagination is the limit. So I'm gonna pause here, that was the last demo that I had, and Steve's gonna wrap it up. Okay, well, we've already uploaded the deck to the Sched site, so you can go get these links to follow up on this afterward, and if the screen was a little out of focus, I think you should be able to get enough out of these links to repeat this at home. This session is hosted by the Kubernetes VMware users group, and if you like this kind of content, we are having meetings every month on Zoom, and we also have a Slack channel where you don't have to wait till the next meeting if you've got questions or things you wanna share. The whole point of this is to allow users to ask questions, share best practices and experiences, and interact even with developers who are working on this. So where do you find this? Well, it's online. I guess I should have gone to this screen; this explains the purpose of the group. These are the links; you join by joining this Google Group mailing list. You could drop in on the group without joining the mailing list, but we use it to gate access to the document with the agenda notes. We also encourage, we've got a couple of chairs who try to bring in guest speakers. Sometimes, if we don't have a guest speaker, we just turn it into a birds-of-a-feather open-ended conversation, but if you wanna give us homework to try to recruit a speaker on a topic, this agenda document is open to users, and you're free to just go in there and edit the agenda for next month. Just please put your name, a topic you'd like to discuss or have discussed, and an estimated number of minutes, and we'll try to make it happen. And here's the Slack channel.
It isn't on this screen, but I do want to point out that, in terms of groups, Michael works on the VEBA (VMware Event Broker Appliance) project, and they have monthly meetings as well (it is monthly, right?). I've dropped in on some of those, and it's very interesting material. Also, both of these groups publish the meetings on YouTube; sometimes they don't come out the day after, but they come out eventually, so you can go look at the historical meetings if that interests you. So, unfortunately I guess the QR code overwrote the link, sorry about that, but that's the link to the deck, and that's the Kubernetes channel where you'll find Michael. Now I think we're ready for Q&A. If there are any questions, I'd ask that you raise your hand and we'll bring you a microphone, so that the people online or watching the recording can hear them, and if you'd rather ask your questions one-on-one, I think we can hang around a little bit out in the hall. I've also noticed a number of user group members and people with actual VEBA experience in the audience. So go ahead, if you've got any questions, raise your hand, please. Okay, we've got one. Yes, thank you. I don't know if this... oh yeah, it's on. Hi, actually I have more of a question about scalability. When you really start acting on a large environment, let's say with a couple of thousand virtual machines, we'd have a lot of alerts triggered. Is there some kind of sizing guide, or some kind of analytics? Maybe some of the functions take a very long time, so when they are triggered over and over, they actually start to stack up.
So for VEBA we don't publish sizing guidelines, because we're also a little bit in the dark right now in terms of deployments; we often don't hear back from the users, because it just works. We have a little bit of telemetry in the system for how many events per second of throughput we have, because we can see that. And the good thing with the VEBA appliance is that it's a one-to-one model, meaning it connects to one vCenter and then has one processing engine behind that. Obviously that doesn't scale from a management perspective, but it helps with the scaling from a vCenter to the actual execution. And for VEBA itself, a couple hundred events triggering per second is not a big deal; we've seen that. But if you're thinking thousands, then the appliance itself becomes an easy bottleneck, also from an availability perspective, and we recommend doing the distributed model on Kubernetes. Yes, please. Okay, we've got a second question. What's the security model around VEBA? Is there any sensitive information in the published events, and how do you ensure that it uses the same authentication?
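On the sensitive-information part of this question: one common pattern is a redacting interceptor function that masks fields before events reach consumers. A minimal sketch, assuming a simple nested-dict event shape; the field names here are invented for illustration, not a real vSphere event schema:

```python
from copy import deepcopy

# Fields we treat as potentially personal information; adjust to your policy.
SENSITIVE_FIELDS = {"userName", "hostName", "loginId"}


def redact_event(event: dict) -> dict:
    """Return a copy of the event with sensitive fields masked, leaving
    the original intact for any upstream audit trail."""
    redacted = deepcopy(event)
    _redact_in_place(redacted)
    return redacted


def _redact_in_place(node) -> None:
    """Walk nested dicts and lists, masking sensitive keys wherever
    they appear in the structure."""
    if isinstance(node, dict):
        for key, value in node.items():
            if key in SENSITIVE_FIELDS:
                node[key] = "[REDACTED]"
            else:
                _redact_in_place(value)
    elif isinstance(node, list):
        for item in node:
            _redact_in_place(item)
```

Chained ahead of a consumer (for example, an auditor-facing feed), this gives you a scoped-down view of the event stream without touching what the source emits.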
Yeah, so the question is, what's the security model for VEBA, and I think this is really almost two questions: one is about connecting to the sources, where there's a security implication, and the other is about the data inside these events, personal information for example. On the first, the good thing about VEBA is that we only require read-only access, so it's not a write account that you need for, in this case, vCenter as one particular source, and you can even scope it down to, say, just the hosts or just the clusters; it literally sees only whatever that object, resource, and hierarchy sends over. For the PII (personal information) case, we don't strip it out right now, meaning you would have to write an additional function, like a "blinder" or an extractor. Just as in the alarm example I gave you, where we injected information, in this case you would write a function or an interceptor that does the stripping. We initially started thinking about building this kind of functionality into VEBA itself, but we saw ourselves becoming a bottleneck for these features, so we deferred it to the patterns you saw, like the sequence, and then doing the additional processing there. As for personal data like admin user names: passwords are not in there, we don't send passwords, but host names, user names, and logins could be personal information, yes. [Steve] I think another thing I've heard in one of your community meetings, which William Lam pointed out with regard to the auditing use cases: many times auditors want information about specific workloads, but it might be a violation to give them everything. The legacy way of doing that would be open-ended, where they get free access to more information than they need, or than they should be able to see, depending on the rules that apply to you. Through filtering on these events, it's possible to apply a level of classification that goes beyond what you can do with just the open-ended built-in API. And typically you see multi-staged processing,
like you take the event, then you process it in a way that is secure and has only the information that you want, and then you hand it over for consumption; that's also possible. And the latency, by the way, since some people ask how fast it is: this is sub-second, it's almost real time. I think we're up against the time deadline; maybe we have one more. Oh yeah, can we have one more question? If we have one more, we can do it; I think the next session got canceled, so we have time, and otherwise we will be around. So the question was about correlating events from multiple layers of vSphere: if you take, say, a delete-VM event from vCenter and from the host, and the event effectively happens twice, does that get duplicated within VEBA, or are they handled as separate events? So, to confirm the question: it was about seeing multiple events for the same operation that was performed, correct? Yes. So, yes and no. You saw the whole thing scrolling like crazy, and the reason is not that you get the same event ten times, but that you see the stages: VM created, VM being created, VM deployed, right? There's always a pipeline of events that vCenter ships, and that's for just one kind of operation; a single click can mean multiple events in the system. But they all have unique event names, and a particular event will only occur once, at least from vCenter. On our side we do try to deliver it multiple times if needed: if your function fails, for example, you have a retry. So could you handle events differently depending on where they've been sourced? Yes and no. In the default model, the functions themselves are stateless, meaning you send them something, they do something, and then they shut down and forget. They don't know that there was a sequence, or that they should wait. That would mean going into stateful processing, where you retain state, and in that case the solution
typically would be to put it in something like a database, a log, or Kafka if you want, and then have stateful logic processing from there. This is very common for building complex pipelines out of this. Any state store works, really; even immudb works, from the folks there. Good question. I think we're going to have to cut off questions, but Michael and I will hang around, we'll be here, and like I say, join the community meetings too if this topic interests you; I suspect we'll put on other topics that will be of interest as well. One last word: I want to point out that there has been a proposal in the CNCF. This group currently operates under the Kubernetes project, and there's a proposal to move it out of the Kubernetes project and start hosting user groups under the CNCF. One reason is that, in reality, the frequent topics that come up in this group are essentially how to run Kubernetes on-prem, not just on vSphere; they encompass other CNCF projects, like load balancers and other things that technically are not Kubernetes but are CNCF projects. So it's entirely possible that by the end of the year this group might have a slightly different identity and location. I don't think it's going away, because there's a community of people who have found value in it and show up, but that proposal to change the way user groups are done is on the table. I suspect the earliest it would happen would be at KubeCon North America in October, but if somebody is watching this on YouTube a year from now, I just want it on record that this group might appear to have ceased, when it has probably just popped up somewhere else. Thank you for coming.