I think we'll get started here, just to be mindful of everybody's time and make sure we get through everything. My name is Brad Askar. I'm the field product manager for CloudForms. There were some good announcements this morning, and we'll talk a little bit about how that all fits into the plan as well. Everybody hear me OK in the back? Good. OK. So who am I? I've been around the IT industry for a long time, as you can tell from my picture. The title of this talk is "Filling the Management Gap" (marketing got ahold of it): "Cloud Management Platforms Managing OpenStack and Other Cloud Infrastructures." What does that mean? What we're really going to get into is: what is a cloud management platform? What is OpenStack? I'm assuming everybody here knows a little bit about OpenStack, but not everybody necessarily knows everything they need to know; maybe it's their first brush with it. Then Red Hat's involvement in OpenStack, what a cloud management platform is, Red Hat CloudForms, then our special announcement, and then Q&A. I'll leave plenty of time for Q&A. So what is a cloud management platform? Gartner has this slide. Everything you see here in yellow is what we look at as a CMP's capabilities: self-service, service catalog, chargeback, cloud management, capacity management, performance management, configuration and change management, lifecycle management, orchestration, and external cloud connection. These are all part of Gartner's definition of a cloud management platform. So what is OpenStack? Within Gartner's definition, OpenStack is cloud infrastructure for cloud-enabled workloads. It's a modular architecture, designed to easily scale out, with a growing core set of services. And that is probably the reason you're here at this conference: to really look at all of this and all the pieces that are part of it.
People call it the "OpenStack cloud operating system," but in itself, OpenStack is not a cloud operating system. It relies on x86 hardware resources underneath, it needs an operating environment and hypervisor services, and it leverages existing code bases for its functionality. So it is dependent on the underlying Linux. Red Hat has a solution for that: Red Hat Enterprise Linux, a hardened version of Linux that ships and is supported in your traditional IT environments, with all the support you expect from Red Hat. Because of our involvement in OpenStack, and of course our many years with Linux, these are tied closely together, and the teams that do this work closely together on the various things that are needed within these projects. So, Red Hat's OpenStack involvement. Red Hat's been around OpenStack for a while now; you can see the timeline there. We're the number two contributor here, number one contributor here, number one contributor for Havana, and in terms of corporate commits and closing of issues, number one again. So we're committed to it. We love OpenStack. OpenStack is a very large portion of our future. There's a very large number of folks from Red Hat at this conference who are very passionate about it. They're on the committees, they work within the code itself every day, they're committing upstream all the time. And Red Hat's commitment is that we always commit upstream. We don't keep little bits for ourselves; we commit upstream and then work what's upstream down into our release version. So this gives you an idea that Red Hat has some chops here. We've got people who know how to do this stuff. We know how to support customers and drive new features. And if there's a bug reported by our customers,
there's a good chance the person who owns the code where the bug lives works within Red Hat, has actually been working in that area, and knows where to go looking for it. We can help influence the strategy and direction of the product and enable partner collaboration. We've got a lot of partners involved in our ecosystem across everything we do: not just Red Hat Enterprise Linux, not just OpenStack, but all of our projects. So back to the original theme: what are the differences between OpenStack and a cloud management platform? A cloud management platform gives you the ability to do approval workflows, things like compliance, self-service and chargeback, quota enforcement, cloud bursting, resource management, capacity planning, optimization, configuration management, and root cause analysis. Now, OpenStack has a lot of overlapping projects that share some of these same goals. One of the things you'll find about a cloud management platform is that we're providing this at a higher level. We're providing this no matter which computing environment you're in, whether it's public or private, virtualization or cloud, and that's really where the differentiation starts to come in. That's really why you look at a cloud management platform: you get executive dashboards, governance and compliance, IT ops process orchestration across the entire portfolio, whatever it is you're managing. We love OpenStack, but we're pretty sure that most of our IT customers haven't gone 100% OpenStack for everything in their environment, so we feel there's a long, long way to go where people will be using all of these other things, including just bare metal, within their own environments.
Infrastructure-as-a-Service consumers are some of the consumers here: workload support, delivery services, folks like Dev and QA, governance, whatever, and they all come through a single pane of glass, and that single pane of glass is Red Hat CloudForms. CloudForms gives you four basic modules. The first is Insight: what's going on in the environment, the kinds of statistics and things I need to know about my environment. Things like compute insight: it's nice to know how things are running, but you may want visibility into deployments to understand where the hot zones are, where am I running hot, where am I running cold, to help me make placement decisions. And I might be making those decisions based on things like a one-hour average, a seven-day average, or a 30-day average, so you can make decisions based on that kind of insight. You also now have capabilities like chargeback and trending within the product. Then Control: security- and compliance-based alerting, policy-based resource and configuration enforcement. We'll show several examples of Control. Control is really where you start to make a difference. It's great to know all this information about your environment, but wouldn't it be nice, instead of finding out after the fact that somebody's doing something they're not supposed to be doing, to actually stop them from doing it, or change the behavior, or shut it down, or notify, whatever it is you want to do? Automate is simply the ability to automate IT processes. When it really gets down to it, we're not doing all these things we do within compute just because it's neat technology. Some of you might be doing it because it's neat technology, but most of us really want to automate IT tasks.
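To make the one-hour/seven-day/30-day placement idea concrete, here's a minimal sketch in Python. The function names, windows, and weighting are invented for illustration; this is not CloudForms' actual API, just the shape of the idea: blend short- and long-window load into a score you can rank hosts by.

```python
from statistics import mean

def rolling_averages(samples, windows=(1, 7, 30)):
    """Return {window_days: average} over the most recent samples.

    `samples` is a list of daily utilization readings (newest last).
    Windows shorter than the history are skipped.
    """
    return {w: mean(samples[-w:]) for w in windows if len(samples) >= w}

def placement_score(host_averages, weight_recent=0.6):
    """Blend the short and long windows into one 'hotness' score;
    lower scores make better placement candidates."""
    short = host_averages.get(1, 0.0)
    long_term = host_averages.get(30, host_averages.get(7, short))
    return weight_recent * short + (1 - weight_recent) * long_term
```

A host that just spiked but is normally cold scores lower than one that has been hot for a month, which is the point of keeping more than one window.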
That's really what it comes down to: how fast can I get from pushing the button to what I actually want at the end, and did anybody have to touch it in between, unless I wanted them to, like for an approval process. So we'll go into Automate as well, and then Integrate. This thing doesn't live in a vacuum, right? In a real enterprise you're talking to a lot of other systems: CMDBs, ITSM systems, maybe your IPAM (IP address management), event consoles, and all the other things you have in your environment. And you want to do it through an adaptive management platform that lets you talk to the infrastructure, both your traditional virtualization platforms and your cloud-like environments. CloudForms itself was built from the ground up, since day one, to be a cloud-scale application. It's an agent-free, virtual-appliance architecture. We deploy the product as a virtual appliance into whichever environment we want to manage. We can deploy one or many of these appliances, and each appliance can have specialized roles within the environment, so you can delegate the work and really scale the application, which also gives you load balancing and failover. Web-based administration: no special agents, no special client, nothing you have to have other than a web browser to interact with it. Enterprise directory support: a lot of products on the market don't really understand enterprise directories, following trusts within federated domains, having multiple domains, or the fact that you may have very different schemas in those domains. Multi-tenancy, of course: you're not going to do any of this unless you can support a bunch of different groups, possibly different companies, within your organization. And we'll talk about horizontal scaling and failover. That is important.
There are a lot of products out there that are going to have serious problems when they start managing very large numbers of workloads in very large environments. We're in some very, very large financial customers, most of which won't allow us to say who they are or what they do with this. These are financial customers that run large infrastructures for their own customers, and they're literally managing hundreds of thousands of instances or VMs and thousands of servers. Management across multiple locations: if you're an international corporation, or your corporation's got two data centers, you've got two locations, and you want to do that management as close as you can to what you're managing. So if you end up with things like network cuts and problems from data center to data center, the product can still manage that data center even when it's disconnected from the rest of the world, because you can't really drive things like policy if the management plane is no longer reachable. And management across virtual platforms and public clouds: a single pane of glass for all the different kinds of compute you have. So, starting at the very top: role-based access control. Very important in the product; everything that's done is filtered through RBAC. It determines who's allowed to see what and what they're allowed to do within the product, to the point where you can configure yourself into some really strange situations, where suddenly I can't see anything because I changed all the RBAC and now I can't even see the things I wanted to look at myself. Then intelligence: the analytics and classifications, the relationships, what kind of information is going on. You want to know that kind of information about the hosts, the VMs, the infrastructure they're running on, what's going on at the cluster level, and what's going on at even higher levels.
There's the automation engine for those policies and orchestration, workflows and approvals, and the control surface for discovery, monitoring, and tracking. Here's a good one: discovery. A lot of products out there in the cloud space don't discover old stuff, right? That brownfield stuff you already have running in your environment: they only know about the net-new stuff, which is fine if you're running only net-new things and you've decided not to combine those worlds. Most people and most companies we've talked to really want to combine both worlds, because if they're really looking at a cloud management platform, they want to look at everything holistically, and as soon as you talk about that, you're in brownfield territory. And as soon as you're in a brownfield environment, a lot of products fall short, because they don't do that. Because of the history of this product, it has very, very strong capabilities in that area: the product wasn't designed to go greenfield, it was designed to go brownfield first. And then there's the abstraction layer for all the different APIs that are out there. It gives you a very, very nice abstraction layer, including management down to things like Microsoft and PowerShell. And all of that comes back into the virtual management database (VMDB). For all the environments we connect to, we're not only doing discovery at the API level to figure out what's out there; we're also talking on whatever kind of event bus they provide. If they provide some sort of bus, we're on that bus, and that's how we're able to intercept things as they happen in that environment.
So you decide to move something from point A to point B, and the hypervisor or cloud platform says, we're about to move something from point A to point B. We know about it, because it's on the bus, and we can then take a look and see whether you're allowed to do what you're doing. And we know that because of all the information we have in the database. So I'll give you a couple of examples. Seamless self-service: role-based delegation for the users, self-service portals (people are really interested in the self-service portals), and a service catalog, of course, that you expose to your users. It's great to be able to deploy one thing; it's even better to deploy an entire environment. Maybe you've got a multi-tier application and you want to deploy all of it, and maybe you want to sequence what that looks like. Maybe the database server has to be up and answering before you start loading your mid-tier servers; maybe those need to be up before you put load balancers in front of them; maybe those actually need to be answering and doing what they're supposed to before you start delivering traffic. Automated provisioning, quotas, and targeting. As an IT organization, it's really nice to be able to push buttons and get a lot of resources really, really fast, but maybe you don't want everybody to hit the button at the same time and use up all the resources. So maybe you want to enforce things like quotas, and maybe you want to do chargeback or showback so people understand the financial implications of what they're doing.
And because we can do it across all of these environments, including OpenStack, we can make decisions not only at the technology level. Because we have all this other information in our VMDB, we can also weigh what these things cost, so we can make business decisions about where you want to deploy your workloads, not just technology decisions. To give you an example of user self-service automation: the user comes in with a request, and the first thing that happens is RBAC. Do they even see the catalog they want to request out of? Then we filter through quotas. Are they within their quotas, or over quota? Maybe they just get a rejection message that says they need to go talk to IT about more quota in their environment. Then maybe you go into an approval workflow. Maybe your developers are allowed to order three systems and no more, and if a request goes over three systems, maybe it has to go through their management chain so that somebody else actually approves it before it proceeds. Whatever kind of workflow you want, whatever you dream up, you can do within this. And then intelligent placement, based on all the factors: who they are, what their quotas are, and the tagging within the product, which supports a serious level of tagging. From that, you can determine where you want to place your workloads. So, for example: where do I have available capacity? Because it's nice that you've requested something, but where do I have available capacity? Are there policies that affect placement? Maybe I want to deploy something that has to be PCI compliant and can only run in a PCI-compliant container. Do I have capacity in the places I need it to go? And then, which options offer the best cost?
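The RBAC, quota, approval, and placement steps just described can be sketched as one pipeline. This is an illustrative toy, with invented field names and an arbitrary three-system self-approval limit, not the product's actual workflow engine:

```python
def process_request(user, request, hosts):
    """Hypothetical self-service pipeline: RBAC filter, quota check,
    approval workflow, then policy-aware, cost-aware placement."""
    # 1. RBAC: can this user even see the requested catalog?
    if request["catalog"] not in user["visible_catalogs"]:
        return ("denied", "catalog not visible to this role")
    # 2. Quota: reject with guidance rather than failing silently.
    if user["used_vms"] + request["count"] > user["vm_quota"]:
        return ("denied", "over quota - ask IT for more")
    # 3. Approval: e.g. developers may self-order up to 3 systems.
    if request["count"] > 3:
        return ("pending_approval", "routed to manager for sign-off")
    # 4. Placement: only hosts with free capacity whose tags satisfy
    #    the request (e.g. a PCI workload needs a PCI-tagged host),
    #    then pick the cheapest compliant option.
    candidates = [h for h in hosts
                  if h["free_capacity"] >= request["count"]
                  and request.get("tags", set()) <= h["tags"]]
    if not candidates:
        return ("denied", "no compliant capacity available")
    best = min(candidates, key=lambda h: h["cost_per_vm"])
    return ("approved", best["name"])
```

The ordering matters: cheap checks (RBAC, quota) run before the expensive placement search, and the tag subset test is where a "PCI only on PCI" policy plugs in.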
So you can make all of those decisions for your users at the highest level and then determine where you're actually going to place the workload. Executive management: these folks don't log in and look at bits and bytes, right? They really want the charts and graphs and financial-management kinds of things, governance and compliance. They want to make sure that what they're running out there is meeting all of their responsibilities; they have fiduciary responsibilities and regulatory responsibilities, and they'd like a dashboard where they can see what's going on in that environment. Forecasting and planning: it's great that we've got all this capacity, but based on current run rates, how much is it going to cost me to run whatever I want to run? If I've got an environment, say Red Hat Enterprise Virtualization or VMware, when am I going to start running out of capacity on some of the projects, maybe the brownfield projects you haven't moved someplace else? Maybe you're going to run out of capacity, based on the trending, much faster than you thought. You can then look across environments and say: I've got this specific workload running somewhere; show me someplace else I might be able to run it based on its characteristics. How has it been performing over the last 30 days? How much memory does it actually use? How much CPU does it actually use? And then things like health and availability give you a good green-light, red-light, up-down view of what's going on in my environment. And you do it across all of those environments, not just one. Automating IT processes: maybe you've got a rule that says Windows VMs must have antivirus installed.
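The "run out of capacity faster than you thought" idea is, at its simplest, a trend projection. Here's a deliberately naive sketch: fit a least-squares daily growth rate to the usage history and project when capacity is hit. Real trending in a CMP is richer than this; the function and its linear assumption are purely illustrative.

```python
def days_until_exhausted(history, capacity):
    """Project when `capacity` is reached from daily usage readings
    (newest last), using a least-squares linear trend. Returns None
    if there is no usable trend (too little data, flat, or shrinking).
    """
    n = len(history)
    if n < 2:
        return None
    # least-squares slope of usage vs. day index
    x_mean = (n - 1) / 2
    y_mean = sum(history) / n
    denom = sum((x - x_mean) ** 2 for x in range(n))
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in enumerate(history)) / denom
    if slope <= 0:
        return None  # flat or shrinking usage: no projected exhaustion
    return (capacity - history[-1]) / slope
```

Running the same projection per environment is what lets you compare "this brownfield cluster has 6 days left" against "that one has 6 months left" from one place.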
Based on the information in the environment, we can determine that using the SmartState technology that's in the product. It's patented technology that allows us to actually read what's on the disk in the underlying container, whether it's powered on or not, whether it's known to your configuration management system or not. We know what's inside that container, and we can make determinations about things as they change over time. So your users only see conforming VMs and workloads; maybe they can't even see workloads that don't meet the criteria, because you've put them in quarantine or something like that. Policy breach notifications happen automatically. Somebody goes to do something and breaks policy, and maybe you need to notify the security team, IT management, and the help desk that this is going on: the help desk so they can get hold of that user quickly and start working with them, while the security team finds out there was a policy breach. So Red Hat CloudForms sits over the top of all these kinds of virtualization and cloud: Amazon Web Services, Red Hat Enterprise Virtualization, Red Hat OpenStack, VMware. The one that's not on this slide yet is Microsoft, and in the next release there's even more with Microsoft and Hyper-V. And with that, you get past the high acquisition costs and into cost reduction. Red Hat gives you the ability to do this on all of these platforms: on VMware and Microsoft, on Red Hat Enterprise Virtualization for your private cloud, maybe that includes OpenStack, and your hybrid cloud includes stepping out into things like Amazon. Underpinning all of this is, of course, Red Hat Enterprise Linux and the OpenStack platform. All of these are based on Linux, in all of these places where we're running things like the OpenStack platform. The same design is here, just adding VMware and Amazon into the same picture. And then, getting started with your private cloud.
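The compliance rule and the fan-out notification described above fit a simple pattern: check inventory gathered by disk introspection against rules, then quarantine and notify on a breach. This sketch is a toy under those assumptions; the rule format, team names, and `notify` callback are all invented for illustration, not SmartState's real interface.

```python
def check_compliance(vm, rules):
    """Return the software a VM is missing, per rules of the form
    {'os': ..., 'requires': ...} applied against the installed-software
    inventory a disk scan would have collected."""
    return [r["requires"] for r in rules
            if r["os"] == vm["os"] and r["requires"] not in vm["installed"]]

def enforce(vm, rules, notify):
    """Quarantine a non-conforming VM and fan out notifications to the
    security team, IT management, and the help desk."""
    missing = check_compliance(vm, rules)
    if missing:
        vm["quarantined"] = True
        for team in ("security", "it_management", "help_desk"):
            notify(team, vm["name"], missing)
    return vm
```

Because the scan reads the disk, the check works the same whether the VM is powered on or not, which is the property the talk is pointing at.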
Private cloud isn't just Infrastructure as a Service; it's not just all those things. Red Hat has products in all of these areas. If you're looking for Platform as a Service, you've got OpenShift out there, an open source product. There's Red Hat Enterprise Linux, the OpenStack platform, and Red Hat Cloud Infrastructure. Red Hat Cloud Infrastructure is a bundle, a pricing SKU, for doing OpenStack, Red Hat Enterprise Virtualization, and some of your Linux licensing together; if you use several of our products together, the licensing is more advantageous as you go up into CloudForms and Enterprise Virtualization. There are integrations with a lot of systems that are probably already in your environment, and some that are TBD, things we're looking at on the roadmap. Now that the open-sourcing announcement has happened, we expect those to start accelerating as these partners really start jumping in and looking at this. So maybe now you're paying more attention. Red Hat just announced that CloudForms has been open sourced. True to Red Hat's name, we took a 100-plus-million-dollar acquisition and we've open sourced it. That's a very large leap, and it's a great way to start this community. The product was originally ManageIQ; ManageIQ was the company that was acquired. Internally, we decided to call the organization that will handle the open source project ManageIQ.org. It's in Red Hat's DNA. We really do believe in open source; we really think it makes a difference. We live and breathe this stuff every day, and it's part of the reason we do what we do. We really think that all of us working together works better than just one company working on stuff. This is the first open source cloud management platform, and it provides an alternative to traditional proprietary cloud management platforms.
Of course, it's also the upstream for CloudForms, and we'd also like you to buy the Red Hat version of it, with support, so you get that enterprise 24x7, 365-day support and all the things you're used to getting from Red Hat. The open source community is going to have an engineering community doing innovation and a user community driving differentiation. There are lots of places for people to plug things in, in the automation space and the control surfaces for monitoring, things like that. The advantage here is what we're getting out of it: ManageIQ was a small product company, resource constrained. With the Red Hat acquisition, a large public company, we've put a lot of innovation into the releases that have come out since ManageIQ became CloudForms. But even Red Hat is resource constrained compared to the entire internet and everyone who can contribute, and that's really the point. One of the big pluses of this announcement is that you get a lot of innovation, and not all the innovation has to come from within our own core set of developers. And it's one community, but you'll see many projects within it, just like OpenStack has a bunch of different projects within it; we expect a bunch of different projects within ManageIQ, and that all maps into the architecture and taxonomy of the product. We'll open it up to Q&A; if you would, come to the microphone so that everybody can hear, or I'll try to relay the question. [Audience] What license will you finally land on? Good question. With ManageIQ.org, we made the announcement of the community today, and we're going to release more news in the upcoming weeks on the entire governance model, the licenses, and all those kinds of things. So stay tuned.
But I encourage you, if you want to know more: it's coming, it's going to be announced. For those of you that may not have been able to hear, ManageIQ.org is going to have all of the licensing and that kind of information and everything that goes on. This is Abir Leshfaw, the product manager for CloudForms, one of the guys that came over from ManageIQ. [Audience question about multi-cloud management.] So the question was: are we really seeing customers that need to manage many clouds, in many different places, and what kind of trends are we seeing there? Our customers do. In the real world, they've got a lot of brownfield, a lot of current environments they need to add all of these capabilities to. And while a lot of them are looking at OpenStack, very large corporations have a lot of different pockets and a lot of different silos, all doing different things. This really allows them to have one place to start corralling all of that and get a view of their entire infrastructure. Otherwise, you've got all these little divisions and pockets and everything else. And when they really see this at a high level, they start getting excited, because they can say: hey, we can actually roll this up at a high level and really report back on who's doing what and in which environments. It makes a big difference to them. And because of it, they're actually less scared about doing more things in more environments, because: all right, let's go do a couple of those in Texas; I can manage and watch all of it in one place. Maybe they're not even managing; maybe they're just looking at it, just using the insight into whatever's going on in the environment. Use as much as you want, whenever you want.
And the other part of it is, it's your path to cloud. You may or may not be ready to do OpenStack, because you may or may not have workloads that fit there. You may or may not want to do some of these pieces, or all of these pieces. Maybe you'll never go into the public cloud because of regulatory concerns; or maybe that's the first place you want to go as soon as you put something like this in and you've got some governance around it, and then you can start expanding into those kinds of environments. [Audience] Can you talk briefly about how CloudForms does auto-scaling? Are you asking specifically about OpenStack auto-scaling, or auto-scaling in general? [Audience] Just in general. OK, so the question was: how do we work with auto-scaling? That's actually one of the use cases a lot of people really love about automation, and also within OpenStack itself. Because of the automation model within the product, we can make decisions about what's going on in the environment and decide to start auto-scaling based on need. Maybe it's high CPU utilization. Maybe it's scaling on business-related knowledge, like looking at a corporate calendar, knowing you've got a marketing event going out, and starting to scale; and maybe that scaling is internal up to some point and then external beyond another point, some of it in-house and a little bit external. There are a lot of scenarios customers use for this, but it really gets back to the depth of the information within the platform. The automation capabilities let us do auto-scaling even on platforms that never knew they had auto-scaling, because we're looking at it holistically, making a composition of all of those clouds and all of those environments, and making it better all the way across. I think we have another question here.
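Both triggers mentioned, CPU pressure and the business calendar, can feed one scaling decision. A minimal sketch, with arbitrary example thresholds and an invented calendar format (this is the concept, not the product's automation model):

```python
def scale_decision(cpu_avg, current, minimum, maximum,
                   event_days=(), today=None):
    """Pick a new instance count: scale out on sustained high CPU or
    ahead of a known busy date from a business calendar, scale in when
    idle, and always clamp to the allowed [minimum, maximum] range."""
    target = current
    if cpu_avg > 80 or (today is not None and today in event_days):
        target = current + 1   # hot, or a marketing event is coming
    elif cpu_avg < 20:
        target = current - 1   # idle: give capacity back
    return max(minimum, min(maximum, target))
```

The "internal up to some point, then external" idea would be a second policy layered on top: placement of the added instances, not the count itself.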
[Audience] Can the product discover things like compute, network, and storage resources? So, because we're talking to the API layer of whatever the platform is, hypervisor, cloud, or whatever, whatever depth of information they give us is the depth we can drill down to. Now, there are some cases where we can actually get more information, because we use APIs that work under the covers to discover things that the management interface doesn't give you but that backend things like backup APIs do, so we can get some depth of information from those sources as well. You had a second question? [Audience] Yes, regarding integration with Ceilometer for chargeback. Right, so within the product you have the ability to do chargeback, and you can do very multi-dimensional chargeback depending on how you tag within the environment. Maybe you have higher- and lower-cost environments; maybe you've got tiered storage and you want to charge a different amount for usage in each one of those tiers. Maybe you charge differently for compute, or on some of the other capabilities within the platform. Maybe certain hosts have things like HA and DRS, and you want to charge more for those environments. It's very, very fine-grained, and because we're collecting all this information, we can do all of that chargeback.
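Tag-driven, multi-dimensional chargeback reduces to: each metered usage record carries its dimensions (storage tier, HA host, and so on), and each dimension value maps to a rate. The rate table and record fields below are invented examples of that shape, not real pricing:

```python
def chargeback(usage, rates):
    """Total the bill for a list of usage records. Each record names
    its tier and hours; HA-tagged usage is billed at a multiplier."""
    total = 0.0
    for rec in usage:
        rate = rates["cpu_hour"][rec["tier"]]   # per-tier base rate
        if rec.get("ha"):
            rate *= rates["ha_multiplier"]       # premium for HA/DRS hosts
        total += rate * rec["hours"]
    return round(total, 2)
```

Adding another dimension (say, a storage tier) is just another rate lookup per record, which is why tagging granularity directly drives chargeback granularity.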
The other thing that's nice about collecting all this information: most of the systems out there have amnesia. They keep 30 to 60 days' worth of information, which is great for running their platform, but not great for you making business decisions on that kind of information. In CloudForms you can set whatever retention period you want. Maybe you want to keep 13 months, or maybe 26 months so you can do year-over-year comparisons, and start making intelligent business decisions based on what was going on previously, maybe against your marketing calendar or whatever you've got going on. So there are deep integrations there. Question in the back? [Audience] I saw that you can do a lot with Nova, but what about the other services? What about orchestration: do I have to rely on another orchestration engine, or can I drive it through CloudForms? How are you integrating with the other projects? OK, so we saw that we were talking to Nova; how are we integrating with some of the other projects and some of the other APIs? Like any other product, we're shooting at a moving target, because OpenStack is a moving target as it keeps adding functionality. So with each one of our releases we enhance more; as new projects come online and additional APIs come online, we integrate with those things at a deeper level. With any of the platforms we interact with, we generally start at the Insight level and then start adding all these other depths of integration.
The nice thing is, through Automate and because of the REST model, even if it's something brand new, we have a way to reach out using those APIs to do that very new thing you just got. Later it gets productized, and we start rolling in those kinds of features, especially the kinds of features more customers would use; we look at those use cases and add them into the product so they become a regular part of the product rather than just an automation extension. [Audience question about Heat.] Our integration with Heat templates: we've got examples, through automation, of integrating with Heat to do Heat orchestration currently, and in the next release there's even more depth to our Heat integration, being able to consume Heat templates as they're natively created. So with CloudForms there are two types of integration. There's the out-of-the-box integration, where you have one-to-one knowledge of the domain expertise, so take the example of Heat or Neutron or whatever; that comes with a whole set of capabilities to discover the elements of those environments as well. But there's also the ability for users to expand the reach that CloudForms has into third-party domains. Heat is one good example right now, where we have examples in our private community that will move into the public community once we fully open source in a couple of weeks, with models for how you start integrating, dynamically allocating Heat templates, and driving them through the automation platform itself. And then we take those use cases that our users need and bring them back to the CloudForms platform itself.
As an extension, to make it easier if you want to start driving it: if we take the example of Heat, the idea is that our automation engine, which is part of the CloudForms architecture, can do a lot of the work. It can start discovering those Heat templates, allocating them, deploying them, and ensuring there's a loop back within the automation so that you can keep driving it through the automation engine.

Question: So, forgive me, but say I bring my own Heat template and send it in to be deployed. Does CloudForms still do the policy enforcement, chargeback, and all of that for what gets created? Right, there is a difference there. CloudForms registers to capture the events that may be emitted for the application itself, whether on Qpid or RabbitMQ, and starts understanding that a flex is required for those applications. It monitors and makes the association between the workload, the instance that is running, and the visibility it has directly inside CloudForms. We can work through that in-depth model if you want; I don't know that everybody's interested in that, but I can work through it. Do I need to change my OpenStack to be able to get onto the bus? You might or you might not. For events, some of the integrations register directly to the event bus to capture that information and pull it back in.
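The event-bus idea can be sketched like this: consume OpenStack notifications off the message bus (with pika against RabbitMQ, for example) and decide whether a flex is needed. The payload shape, event type, and threshold here are illustrative assumptions, not the actual CloudForms event model.

```python
import json

FLEX_THRESHOLD = 0.85  # illustrative CPU-utilization threshold

def needs_flex(notification):
    """Inspect a (hypothetical) utilization notification and decide
    whether the associated workload should be scaled out."""
    payload = notification["payload"]
    return payload.get("cpu_util", 0.0) > FLEX_THRESHOLD

# With pika, the bus wiring would look roughly like:
#   channel.queue_bind(queue=q, exchange="nova",
#                      routing_key="notifications.info")
#   channel.basic_consume(q, lambda ch, m, p, body:
#                         handle(json.loads(body)))

event = {"event_type": "compute.metrics.update",
         "payload": {"instance_id": "abc-123", "cpu_util": 0.91}}
print(needs_flex(event))
```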
So, as the platform evolves, I'm expecting there's going to be more and more normalization, where we'll be able to associate those events, start understanding capacity utilization of that particular service, and then automatically flex it: an event comes in because we went over a threshold or these kinds of things, and in return we go back and say, hey, go ahead and implement this extension and start scaling it up. I'm happy to work with you through those scenarios. We've got a booth down there, please come by; the booth is a good place for that discussion. That's a very good point: the linkage we want is between CloudForms and OpenStack, not just going in and managing it.

Question in the back? Yeah, so if I understand correctly, are you including some kind of business process modeling engine within the product itself, or are you architecturally able to integrate with one? That's where I'm a little confused. Are you saying that we have all this data and users can define decisions that maybe trigger a Heat template, maybe call some API? Or are you saying that if I already have an engine like that, I can integrate with it? So maybe it's both. You've got both. There are capabilities within the product itself to handle events, conditions, and actions based on those, and really make a model of those kinds of things and act on them. Are they predefined, or are they definable? Totally definable by you. You can create whatever you want. You can create your own synthetic events that take a combination of events to then trigger the things you want to do. And you also have the ability to call out to other systems, like BPM systems, to integrate.
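The event, condition, action model described above can be sketched as a tiny user-definable policy table. This is a toy illustration of the concept, not the actual CloudForms control engine.

```python
# Each policy: a user-defined condition over an event, plus an action.
policies = []

def policy(condition, action):
    """Register a user-defined policy (nothing is predefined)."""
    policies.append((condition, action))

def handle(event):
    """Evaluate every registered policy against an incoming event."""
    fired = []
    for condition, action in policies:
        if condition(event):
            fired.append(action(event))
    return fired

# Example policy: scale out when a VM runs hot.
policy(lambda e: e["type"] == "vm_cpu_high" and e["value"] > 90,
       lambda e: "scale_out:" + e["vm"])

print(handle({"type": "vm_cpu_high", "vm": "web01", "value": 95}))
```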
Another body of work going on right now is to make it even more definable, so that you can start extending into these and reaching out to other systems. Any other questions in the house? Yeah, a question about Microsoft virtualization; that's a very common question. Is it in technology preview? So currently we've got, in technology preview, the ability to talk to Microsoft virtualization through SCVMM, with enhanced capabilities in the next release, talking to System Center VMM for additional capabilities. And the roadmap items are definitely tighter and deeper integration. Like we do with any platform that we first connect to, we start with the basics and then add additional capabilities. One thing you can see is that there are a lot of providers out there right now, and you need a lot of hands to address all of those capabilities, right? What we think Red Hat truly brings is the power of the open source community: by open sourcing this, it becomes possible to extend it much faster and reach out to those providers. Microsoft has been very supportive on the SCVMM side. Right now it's in technology preview, and the next step is productizing it.

Question: when you bring the platform into an existing brownfield environment, can you quantify how much customization and development you've had to do in that particular environment so that it works? So, definitely. Going into a brownfield environment, it discovers by talking to the hypervisor or the cloud platform and really gathering all the data about what's going on. Whatever that platform's retention period is, if it's got 30 days' worth of statistics or whatever, it gathers all that data.
And what you want to do in the environment really determines what you do next. Generally that would be either your own internal developers or professional services helping you with some of the automation use cases. Usually people have manual processes that they want to automate, or there's some information about their environment that just needs to be understood from a domain-modeling standpoint: how to carve things up, how to tag, who the tenants are, and how that's set up. That's all basic capability of the product. People get a lot of bang for their buck right out of the gate and are able to do simple provisioning use cases and things like that. In most environments we'll connect to it and, usually by lunchtime, we're doing the first easy scenarios of self-service automation in those environments. Any other questions? I think we're about to bump into the next session. Thank you, everybody. Please visit us at our booth; we've got a booth down on the floor. Come by if you want to talk, and I'll be here afterwards. Thanks very much.