Hello, everybody. Good morning, good afternoon. Welcome to what is, in fact, the sixth Deeper Dive webinar on the new IT4IT technical standard from The Open Group. It's my pleasure and honor to serve as the chairman of The Open Group IT4IT Forum. My role today is simply to facilitate the conversation that follows the presentation from Dwight David. Dwight is a long-standing member of the IT4IT Forum and has been there since its formation, when we launched the forum in London in October 2014, October before last now, and we published the standard this October just past, in 2015. I will leave Dwight to introduce himself a little more fully, but it's important for you to understand that although Dwight is a Hewlett Packard Enterprise employee, he is in fact an internal customer. So his responsibilities are broad in the context of the new standard, and he is one of our treasured subject matter experts who has helped us develop and deliver the substantial collateral that the Reference Architecture and its value streams comprise. He's also a consumer of this material, so he's uniquely placed both to give this presentation and to respond to your questions from, if you will, a customer orientation. So without further ado, over to you, Dwight, and I'll rejoin you after your presentation for questions and answers. Thank you very much, Chris. I'm excited to be presenting and delving into the Detect-to-Correct Value Stream today. I view this particular value stream as really where the rubber meets the road in IT, and I think by the end of this presentation you will all agree with me that that is exactly what Detect-to-Correct is. What are we going to talk about today? Well, the whole purpose of this webinar is to shine the spotlight on this critical value stream, Detect-to-Correct.
We will talk about how it contributes to the overall IT4IT Value Chain, what the essential components within that value stream are, and then we'll also highlight the business value. How are we going to spend our time today? I want to divide our time into six areas. We'll do a quick introduction, maybe a quick review for most of you, of the Value Chain, the service model, and the Reference Architecture. Then we'll look at the phases and activities within our particular value stream, Detect-to-Correct. We'll take a look at the Level 2 Reference Architecture view of Detect-to-Correct, then talk about the value proposition, and I do have a call to action for you. So at the end, I expect you to take some action, and we'll talk about that. And certainly we welcome your comments, questions, and suggestions at the end. So let's dig right into the IT Value Chain. Now, most of you have actually seen this, but when I see it, two things really come to mind, which are the principles that the general value chain concept brings. One is that no single activity can be optimized in isolation; if you try to isolate and optimize a single step, you actually sub-optimize the whole value chain. In IT terms, it means that the thicker the walls we build between our IT silos, the harder it is to create an optimized IT organization. That's one principle: we need to deal with the whole organization, and optimize the whole organization, to get the benefit. The second thing reflected in this value chain is that some activities support all the value streams within the chain. Take human resources, for example: we need human resources across each of the value streams. So those are the two things that come to mind for me when I see this.
In general, value chain frameworks help an organization identify the activities that are essential to attaining its business goals. Here, IT4IT provides the capabilities for managing the business of IT, enabling execution across the entire value chain, and not only that, but doing it better, faster, and cheaper, with less risk. As you can see from the diagram, it includes two major areas. The primary activities are depicted at the top as Strategy to Portfolio, Requirements to Deploy, Request to Fulfill, and Detect to Correct, affectionately known in the industry as plan, build, deliver, and run. The deliver part is relatively new, because IT organizations are really moving to become service providers. And if your organization is not doing that, I can almost guarantee that in order to keep up with the way the industry is changing, in this user-centric environment we are in today, you will certainly need to. So deliver is a critical aspect within the value chain. And then there are the supporting activities, which make up the second of the two parts of the value chain. At the heart of this value chain, we have the Reference Architecture. And what makes the Reference Architecture work, what I would call its DNA, is the IT service model. If we think about organizations today, including my own, one of the things we really want is end-to-end traceability throughout the organization. This is what the IT service model enables. Here it's depicted as a service lifecycle across the value streams. And by lifecycle, I mean a set of activities around the service models that we have: things like continuous assessment, continuous development, continuous integration, and continuous deployment.
Today, in our industry, we often focus specifically on the realized service, what's actually running. But I would say we need to change that view; we really need to expand the view we have of a service. What do I mean by that? I mean that from the planning stage we need to recognize that the service model can be conceptual. This is where the business determines what needs to be addressed: who are our customers, and what is the value it provides to them? Then we take that into a set of requirements or user stories. This is what you're seeing here as Strategy to Portfolio versus Requirements to Deploy. The example I like to use is an email system. Conceptually, if I'm a business and I need an email system, I say: I need to be able to communicate internally and externally. I may want a calendar. I may have 5,000 users, and they may be scattered throughout the world. All of those make up the concept of needing that email system. When I send those requirements into Requirements to Deploy, or R2D, they are taken and made into a logical model. So I can make the determination that for my email system, I'm not going to build something myself, I'm not going to buy, but I'm going to use Hewlett Packard Enterprise with its O365 offering from Microsoft, running not in the Microsoft data center but in my own data center, or in HPE's data center. That then becomes the logical view of that email system. Then, once we deploy that system, it turns into the realized model. We have it in our data center, we are configuring it, and now I can go to my service catalog and actually order a mailbox. So you can see that this gives you that end-to-end view.
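The conceptual-to-logical-to-realized progression described above can be sketched as a simple data structure. This is a minimal illustration only; the class and field names are assumptions invented for this example and are not taken from the IT4IT standard.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three service model stages from the email example.

@dataclass
class ConceptualService:
    name: str
    business_needs: list   # e.g. "communicate internally and externally"
    expected_users: int

@dataclass
class LogicalService:
    conceptual: ConceptualService
    sourcing_decision: str  # build, buy, or consume as a service
    technology: str

@dataclass
class RealizedService:
    logical: LogicalService
    deployed_in: str
    catalog_items: list = field(default_factory=list)  # e.g. "order a mailbox"

# The email example from the talk, end to end:
concept = ConceptualService("Email", ["internal/external mail", "calendar"], 5000)
logical = LogicalService(concept, sourcing_decision="consume", technology="O365")
realized = RealizedService(logical, deployed_in="own data center",
                           catalog_items=["mailbox"])

# Traceability: from the running service back to the original business concept
assert realized.logical.conceptual.name == "Email"
```

The point of the nesting is the end-to-end traceability described in the talk: every realized, running service can be walked back to the logical design decision and to the business concept that motivated it.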
Finally, as I order my mailbox, you can see that it helps enable that end-to-end traceability, from the conceptual view of the model for my mailbox all the way to me actually running that mailbox and ensuring that it continues to run. So what are the data objects that actually support this? Because there's data behind that, there's a model behind that, and there are also integrations that we need in order to support that lifecycle. And this is exactly where the Reference Architecture comes into play. This is an overall view of the IT4IT Reference Architecture. Again, the focus of this architecture is the data and the functional components; it's not really on the processes that comprise these activities. Whatever methods or processes your organization has, we understand and know that those will change over time. But the underlying data of that service lifecycle will remain constant. So let's dig into how we can understand the notation within this Reference Architecture. By the way, there are five levels within the IT4IT framework, and it's really geared for consumption at each level. So even somebody like me can understand what any of the levels mean; you can go from a geeky architect all the way to an executive with minimal IT background, and they would be able to pick this up and understand it. What we're looking at here is Level 1, and in the Level 1 notation there are just three things we need to understand, only three. The first is the blue boxes, what we call functional components. A functional component is an essential function that delivers the service, and the rule says that each functional component should have a minimum of one key data object.
The data object is represented by the black circle, which shows you the lifecycle of that particular object. It helps give you end-to-end traceability of what occurs within that functional component. And the third one is the solid lines, which represent the direct relationships between our data objects. Yes, there is a fourth one, but it's really a special kind of data object: the service backbone data object, which comes from the service model backbone. Remember I talked about the service model earlier? It keeps track of the real service being delivered, what IT actually delivers. So in our example earlier, in my case in D2C, the actual service would be my mail service. The key point that I always want to highlight, and will continue to highlight as we go through this, is that it's all about the data. If we actually skin this diagram and take off the functional components, what we can see is the end-to-end nature of the data and of those relationships. We call this the system-of-record fabric of the Reference Architecture. So where does Detect-to-Correct actually fit in? Let's start by looking at the activities within Detect-to-Correct, in the context of the overall value chain. Detect-to-Correct is the run aspect of the value chain. So at this point the service has been deployed, and the focus is on proactively preventing failures and, when a failure occurs, being able to restore the service. Detect-to-Correct helps an organization bring IT operations together to enhance results and efficiency, and to reduce the risks that we encounter. The main thing we want to do is identify something and fix it before our users are impacted. So specifically, it's about keeping the services running.
So Detect is really about early detection, through a variety of means, across the entire set of services that we offer. Whether it's a server, a cloud service, storage, network, an application, or even the user experience on a mobile device, all of those are things we need to be able to detect issues in; and as we detect them, we diagnose them and identify what is wrong, so we can then maximize the ability to resolve the issue. We want to do that through automation, and then implement change to ensure it does not occur again and is fully resolved. So let's decompose Detect-to-Correct a little further. All the Detect-to-Correct phases contain IT processes and activities, and these are integrated throughout the service lifecycle. It begins when the upstream value stream, in this case Request to Fulfill, completes its final phase and deploys the service. The service could be large or small; it could have a single consumer or many. It really doesn't matter; it covers all of those scenarios. Service deployment ensures that the proper monitors are deployed along with the service. So in the Detect phase, what we are doing is early identification of anomalies that happen across the IT ecosystem, and again I want to stress that it's not just limited to a service that may be on premise; it also includes your cloud providers, your storage, your network, the user experience, et cetera. These monitors detect any changes within the operating environment, and if the condition is important, when it's detected you generate a notification of what we call an event. The events are sent to the diagnostic system, where diagnosis actually begins.
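The monitor-to-event flow described above can be illustrated with a small sketch: a monitor evaluates a metric against a configured threshold and, when the threshold is crossed, emits an event record for the diagnostic system. This is an assumption-laden illustration, not the standard's data model; all field names and the severity rule are invented for the example.

```python
# Minimal sketch: a monitor check that generates an event when a
# threshold is breached, and stays silent otherwise.

def check_monitor(metric_name, value, threshold, service):
    """Return an event dict if the threshold is breached, else None."""
    if value <= threshold:
        return None
    return {
        "service": service,
        "metric": metric_name,
        "value": value,
        "threshold": threshold,
        # Illustrative rule: 20% over threshold escalates to critical
        "severity": "warning" if value < threshold * 1.2 else "critical",
    }

# Healthy reading: no event is generated
assert check_monitor("mail_sync_seconds", 4, 10, "Email") is None

# Breached threshold: an event is raised and sent on for diagnosis
event = check_monitor("mail_sync_seconds", 25, 10, "Email")
assert event["severity"] == "critical"
```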
What we mean by diagnosis is that we gather all the information around a specific problem to determine, first, that it is real, and then to see if we can identify the root cause. Imagine another quick example. We just got back from a long vacation, well, that was a few weeks ago already, and we took a lot of pictures, and I wanted to share them over the mail system with a colleague. These are all large raw files, and we have a couple of people doing things like that. Being able to realize who these users are and what they are doing is one of the things diagnosis helps us do, and we want to do it before the service is actually impacted. That's really the key here. When the service is interrupted, just in case it is and we didn't catch it, we want to have the automation or standard practice that can initiate the restoration of service availability at the earliest possible time. And whether or not the service can be quickly restored, there may be changes needed to resolve the issue for the long term. So in the case of people sending these one-gig raw files, maybe there's a simple fix: we limit the size of attachments, or we provision larger pipes to ensure we can facilitate large file transfers. Those are the types of things that would happen in Change, and then we move to Resolve, where resolution controls the implementation of that change. Preferably we want to leverage something like a runbook, which is an automated mechanism to resolve an issue, and then reconcile the original change with the results. These phases and their associated activities are critical in today's environment. We know that our environment is changing, but we believe, from the work that we've done and the people we've talked to within the industry, that Detect-to-Correct is well placed to facilitate that.
So to understand how Detect-to-Correct serves this user-centric world, we can take a look at the four major phases we have here in Detect-to-Correct and how they are impacted. We are moving today from a localized, central, on-premise type of offering to the next wave, where things are more predictive and more dynamic. We have the whole DevOps movement, so things are happening faster. We need to move with that, and we think Detect-to-Correct facilitates it. Looking at Detect: many of us remember our days, at least in the operational area, of reacting to issues, reacting to them after a user calls, as opposed to before. You can think of the scenario I gave earlier, where I ask my colleague, is mail slow for you today? It seems a bit slow for me, even though I'm sending a large file. And sometimes our users are calling about services that are not within our data center or within our immediate scope. They could be outside; we may have outsourced to Hewlett Packard Enterprise, who is running your mail system for you. So you need to be able to troubleshoot all of these types of resources regardless of where they sit within our hybrid IT, some local, some out. Traditionally, when we look at diagnostics, we have designed for a specific set of testing, but we haven't really designed a solution for operating. We train our troubleshooters to focus on the resource within our static test environment, but not for the dynamic type of infrastructure that we have today. When we look at Change, we all know that change is feared in operations. We don't want any changes to happen there, and because of that, we have developed a whole bureaucratic process over the years to prevent change, out of fear.
However, when we look at our world today, again, it's continuous development and continuous deployment. What that means is there will be continuous change too, right? So we need to be able to facilitate that, and we think the reference model certainly helps with that. And in the area of Resolution: when we consider what we had previously, we always needed to balance speed and risk. Our systems have sometimes been very fragile; we may have only a few individuals with the tribal knowledge to help troubleshoot a problem if it's really complex. But in our dynamic world, there are two key areas: we focus on automation, and on leveraging social IT to help with some of our issues, where we have users who may know even a little more than our operations staff, and we need to be able to facilitate that. We see that IT is moving to a world of continuous delivery, with the requirement that our systems are always on. So if we as an organization are going to take on that role of continuous delivery, with always-on systems, we must in fact have a solution that builds on an integrated service lifecycle model, one that allows IT the speed we need, with appropriate governance. We must always learn from the issues that occur; don't fail to learn from them, improve on them. That means IT knowledge should be continually flowing into our ever more dynamic and automated structures. This is what I think D2C, and the whole value stream and Reference Architecture as a whole, gives us. So the takeaway from this slide is that D2C is a value stream well suited for this user-centric economy, and it supports what we like to call the hybrid IT environment, which we see a lot. So let's take another look at the Reference Architecture with a specific view on the D2C value stream.
In this section we'll have a quick refresher on the Reference Architecture and how to read the diagram, and then we'll go into some more detail behind the Detect-to-Correct value stream. So you know this Reference Architecture; we've seen it before. We know that most organizations today have heavily invested in best-practice processes. These frameworks, like ITIL or COBIT, both work at identifying what to track. They will definitely help us improve the processes around, say, our continuous lifecycle. However, the problem still remains as to how. How do you track? How do you integrate? How do you manage the data consistently across that whole lifecycle? This, of course, is where IT4IT comes in. This is where we fill that specific IT management support gap; it's really about the how. This is what the Reference Architecture allows us to do. You'll notice that Detect-to-Correct is all the way over on the right side, after Request to Fulfill, consistent with our overall value stream strategy. The focus, and I know I keep saying this, is on the data and the functional components, not on the processes that comprise this particular set of activities. Your organization's methods and existing processes, whether that's agile, COBIT, ITIL, or waterfall-type project management practices, can stay; the underlying data is really where we come in. So let's take a deeper look into the actual data structure within Detect-to-Correct. Detect-to-Correct, again, is the set of things we need to put in place to manage the running services. It provides the framework for integrated monitoring, event detection, diagnostics, change, and the subsequent remediation of all the tasks within the service management environment.
The framework is designed to support a variety of sources, functions, and processes, as I mentioned earlier. So again, regardless of the process, we think D2C is a fit. D2C brings the IT service operations functions together to improve the quality, the efficiency, and of course the speed that we're looking for in this new IT world. Typically today, when we think of organizations and the domains within them, what we see is that they operate in various environments. We have multiple domains; they are not really shared; each has its own ecosystem. Just think of how we actually developed our IT environments. Often it's not done organically; it's a mix and match of many different vendors with a variety of ideas on how things should operate. How do you keep all of this working together? This is where we think the D2C model comes in and helps us. The D2C model exposes a set of key artifacts that are produced and consumed within this value chain, and it makes them consistent across the value stream, allowing the interoperability we all want, from the initial strategy all the way to delivery. It provides data that is trusted and meaningful to everyone, which helps break down the silos we have and gain the efficiencies we're looking for across the organization. So let's take a look at some of the key artifacts within D2C. I mentioned the actual service CI. This again is part of our IT service backbone; it allows you to tie everything together. It's the actual CI being deployed. I mentioned earlier that in my particular case, it's the running of my mail system, right?
What I see as an actual user, and all of the dependent CIs associated with it, the network, the systems associated with that particular CI, all of those are captured in that actual CI, which lives in our Configuration Management component. Then we have the service monitor. Remember, when we came from Request to Fulfill, this is where we get the actual service monitors. Implementing them, deploying them, setting the associated thresholds, this is what we're doing with this particular artifact. I mentioned Configuration Management earlier, and it keeps track of all the running services we have in there. Then we have our service contract. Often, as in my mail example, there are certain service levels we want to ensure our customers get: maybe ensuring that when we send mail, it's received within 10 seconds, so it's very responsive, or that we're able to send 20-gig files, I'm just making that up, large files, and still get good response. This is not the actual definition, but as the service is instantiated, we initiate this service contract for that running model. Of course, we can have multiple of these, even for the same mail system. The Service Monitoring component, which hosts the service monitor, gives you that service definition and helps correlate the events within that environment so that all of operations can see them. Then we have the Event component: when an event is generated, the correlation, the filtering, and the deduplication of that event occur here. You also see the associated support model, which is really about change of state, the support model being the event data attribute that supports state change. So when there is a change within the environment, being able to identify it and notify somebody is what we're talking about with the Event component.
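The filtering and deduplication the Event component performs can be illustrated with a minimal sketch: repeated events for the same service and metric are collapsed into one record with a repeat count, so operations sees one actionable event instead of a flood. The field names here are assumptions for the example, not attributes from the standard.

```python
# Minimal de-duplication sketch: collapse repeated (service, metric)
# events into a single record that counts the repeats.

def deduplicate(events):
    """Return one record per (service, metric) pair, with a repeat count."""
    seen = {}
    for ev in events:
        key = (ev["service"], ev["metric"])
        if key in seen:
            seen[key]["count"] += 1
        else:
            seen[key] = dict(ev, count=1)
    return list(seen.values())

raw = [
    {"service": "Email", "metric": "sync_latency"},
    {"service": "Email", "metric": "sync_latency"},
    {"service": "Email", "metric": "sync_latency"},
    {"service": "Email", "metric": "disk_usage"},
]
unique = deduplicate(raw)
assert len(unique) == 2        # four raw events reduced to two
assert unique[0]["count"] == 3 # the latency event repeated three times
```

A real event component would also correlate across services using the CI relationships from Configuration Management; this sketch shows only the deduplication step.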
Then there is the actual runbook data model. You have an event, but you want help diagnosing what the issue is, and even remediating it. This is what runbooks are for, and this is how they help. A runbook is comprised of a set of routine procedures and tasks that could be carried out by an administrator. We all remember the days when you would get a disk-full error, or you're running out of space on your system; one of the first things you did was go and clean up the temp space. Automating that as a script, if space is above 90%, clean the temp files, is a rudimentary example, by the way, of a runbook. Many vendors provide a lot of these runbooks, and many people create their own. It's really to help automate the remediation and the diagnostics of your particular environment. Going back to our mail example: if my mail is running slow and I call in, being able to collect all the measurements, not only for my service level but for performance and capacity, end to end, whether in the cloud or on premise, and then initiate the runbook to resolve the issue if it can be quickly remediated, that is really how we want to drive automation within your system. But certainly we know there are times when we actually need to create an incident, and an incident takes many different forms, right? It doesn't necessarily have to live in what we call an incident management system. We see even today that people will send an email to their SaaS provider, and that in turn becomes what's known as an incident behind the scenes, which tells them that something needs to be fixed and initiates the way in which we can fix it.
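The rudimentary disk-cleanup runbook described above can be sketched as follows. This is a minimal illustration under stated assumptions: the directory path, the 90% threshold, and the function names are invented for the example, and a production runbook would also log its actions, notify operations, and reconcile the change with the results.

```python
import os
import shutil

def disk_usage_percent(path="/"):
    """Current disk usage of the filesystem containing `path`, as a percentage."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def cleanup_temp(temp_dir, usage_percent, threshold=90):
    """Remove files from temp_dir when usage is over threshold.

    Returns the list of removed file names (empty if under threshold).
    """
    if usage_percent <= threshold:
        return []
    removed = []
    for name in os.listdir(temp_dir):
        full = os.path.join(temp_dir, name)
        if os.path.isfile(full):
            os.remove(full)
            removed.append(name)
    return removed
```

In practice this would be triggered by the disk-usage event from the monitoring component rather than polled by hand, e.g. `cleanup_temp("/tmp", disk_usage_percent("/"))`.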
Then, subsequently, there's the ability to take that incident, in some cases through a robust problem management process, other times not, take the problem, understand the known error that may be associated with it, and hopefully get it resolved in the next deployment. That is really the function of this particular functional component and its associated data model. So we've now got a good description of what the functional components are within the Detect-to-Correct area. What I want to do now is simply take a look at how they interact: how do these functional components interact, not only with each other, but with other value streams within the value chain? You'll see that this particular diagram is the Level 2, and the Level 2 diagram really consists of the value streams. There are a couple of things to note. The blue boxes you see here are the same as in the previous diagram; they represent the functional components, and you remember the black dots are the actual data objects within these functional components, with the black lines as relationships. There are a couple of other things we're introducing here. These gray boxes are functional components, but they are not owned by the Detect-to-Correct value stream. In this case, this one is owned by Request to Fulfill. Remember when we talked earlier about the Service Monitoring component: we get the data on what to monitor, in terms of the metric attributes, coming in from Request to Fulfill. So there is this Request to Fulfill functional component that interacts with our functional component in the D2C value stream. As for the red lines you see here: we recognize that within the industry today, these are practices that exist.
Not that we recommend them, but we realize that they exist, and so we make allowance for them within the Reference Architecture. One of the prime examples: people close to ITIL and following the process would say that a defect really should only come from the Problem component, but in practice in many organizations today, we have defects being generated from an incident. So we know it's a practice that exists today, and in subsequent updates to the Reference Architecture we will certainly highlight some of these practices and give some recommendations around them. So in addition to the functional components and the relationships, another critical part of these data objects is the essential attributes. I don't have a slide for the essential attributes, but they are available in the standard. So when you download the standard and look at the D2C value stream, or any of the other value streams, you will see a set of essential attributes. And this is key, especially in a multi-vendor environment. Today I may have an Event component running on technology from Hewlett Packard Enterprise; I may have my Incident component based on a solution from IBM; I may have a Problem component, or even remediation and diagnostics, from Chef or one of the other vendors out there. How do we, as an IT organization, ensure that we can deliver the whole value to our customers? The only way we can do that, again focusing on the data, is through not only the functional components and the data objects, but also the essential attributes associated with those data objects, so that we have the same taxonomy across vendors. Certainly the goal of IT4IT, and especially within D2C, where we really see this happening, is that as vendors become certified on the IT4IT Reference Architecture, they will be speaking the same language.
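The "same taxonomy across vendors" idea can be sketched as a small adapter: each tool exposes its own field names, and a mapping translates them onto one shared set of essential attributes. Everything here is hypothetical, the field names are not taken from any vendor's product or from the standard's actual attribute list; the point is only the shape of the translation.

```python
# A shared set of essential attributes (illustrative names only)
ESSENTIAL_ATTRIBUTES = ("id", "service", "status", "severity")

def to_essential(vendor_record, field_map):
    """Translate a vendor-specific record into the shared attribute set."""
    return {common: vendor_record[vendor_field]
            for common, vendor_field in field_map.items()}

# A hypothetical tool describing an incident with its own field names:
tool_a = {"ticket_no": "INC-42", "svc": "Email", "state": "open", "prio": "high"}
tool_a_map = {"id": "ticket_no", "service": "svc",
              "status": "state", "severity": "prio"}

normalized = to_essential(tool_a, tool_a_map)
assert set(normalized) == set(ESSENTIAL_ATTRIBUTES)
assert normalized["service"] == "Email"
```

With every tool normalized to the same attribute set, an incident opened in one vendor's product can be joined against events and problems held in another's, which is exactly the end-to-end view the essential attributes are meant to enable.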
So if you have a component from IBM, or from Hewlett Packard Enterprise, or from ServiceNow, the language they talk would be the same. We'll have those same essential attributes, allowing you to have that interaction and share the end-to-end view we so much desire. So, as we come to a close, those are the details of the D2C area. Let me quickly turn to the why: why is D2C important? The key value proposition of D2C is really around efficiency: timely identification and prioritization of an issue; improving data sharing, to accelerate, through collaboration, the ability to identify business impact; prioritizing issue resolution; and providing the data we need early on to run in an end-to-end environment. This is the Detect-to-Correct value stream, and it definitely helps IT through this whole integration and collaboration. So how can we substantiate these claims of benefits that we say D2C provides? Only through KPIs, right? And we have a number of KPIs within our particular area of D2C: reducing the number of incidents, reducing mean time to repair, reducing the outages caused by changes, et cetera. All of these are KPIs within the Detect-to-Correct value stream. So again, we know your organizations have invested in these process frameworks like ITIL, et cetera, and they certainly tell you some of the what. What we do in D2C, what we do with IT4IT, is tell you the how. IT4IT fills the gap in IT management and supports you with the how of data integration and data management, so you can control the full service lifecycle and report back to the business on how the service is doing, not just on what we're doing with the technology. So with that, what do I want you to do?
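One of the KPIs mentioned above, mean time to repair (MTTR), is simple to compute once incident data carries consistent attributes. The sketch below is illustrative; the record shape is an assumption, and timestamps are plain hour offsets rather than real datetimes.

```python
# MTTR sketch: average time between an incident being opened and resolved.
# Incidents still open are excluded from the calculation.

def mean_time_to_repair(incidents):
    """Average of (resolved - opened) across resolved incidents, or None."""
    durations = [i["resolved"] - i["opened"]
                 for i in incidents if "resolved" in i]
    return sum(durations) / len(durations) if durations else None

incidents = [
    {"opened": 0, "resolved": 4},   # repaired in 4 hours
    {"opened": 2, "resolved": 8},   # repaired in 6 hours
    {"opened": 5},                  # still open: excluded
]
assert mean_time_to_repair(incidents) == 5.0
```

Tracked release over release, a falling MTTR is one concrete way to substantiate the efficiency claims made for the value stream.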
What is your call to action? And here's where you come in. You can certainly contribute. We've been at this for three, four years now in developing this standard. It's continually developing, and certainly we would relish your involvement and your support. You can download the collateral from our site. You also have access to customer testimonials and a number of white papers. There are some really great webinars that go into some of the other value streams, which I would strongly recommend you view if you haven't done so. And certainly, you have the ability to collaborate with our experts. So in summary, this is what we provide with IT4IT, and the Detect-to-Correct value stream is a major contributor to that. I know we're running out of time, so let me turn over now to Chris. Hopefully this was helpful, and hopefully we have a number of questions or comments that we can help answer. Chris, over to you. Superb. Thank you very much, Dwight, and thank you, everybody, for your attention and the insightful questions that have been asked. The first, and probably the foremost, question arose in relation to a comparison between IT4IT and ITIL. These are highly complementary. We in the forum believe there is a gap in the IT service management landscape that has persisted for over 30 years, despite the efforts of many frameworks and tools, ITIL and COBIT amongst them. The IT4IT reference architecture is specific to the business of IT, and you've heard Dwight today speak to the fourth of four value streams. What it offers is complementarity in the shape of prescriptive, off-the-shelf guidance for architects designing, running, and delivering services and products in that space. So it's not a competitor; it's definitely complementary. I won't ask Dwight to run back through the slide deck, because I think that would be disruptive, but I would make an observation in terms of format.
We have a very cohesive technical standard, which is presented at a number of levels of abstraction. So, if you will, zoom out to the CIO perspective of this. In other words, if you, or Dwight, or any of us were to say to your CIO, "I think this offers us some business benefit," the value stream presentation at that 50,000-foot level gives an immediate indication of that margin of efficiency, that value add to the right-hand side of the value chain, which is immediately understood by people in the business of resourcing, mail systems, and all of the other things that Dwight has talked to us about. If you zoom down through the levels that Dwight has spoken about today, you actually get to the detail of how to build this into the configuration management database in your organization. So what Dwight has done with us today is to move between the uppermost and mid-levels of abstraction of the technical standard. To those of you who asked whether the cost aspect is missing from Detect-to-Correct: that's not the case. We have complementary value streams in there, substantial KPIs, and, as you can see in the supporting activities, we have a whole work group building the intelligence and reporting tools that will dashboard this for the executive sponsors. At the same time, you can see that one of the supporting activities is governance, risk, and compliance. So the ITIL assets that you have in your organization are accommodated by the IT4IT reference architecture. The piece that's new is this off-the-shelf prescriptive guidance, recipe-card-like guidance, if you will, which until today has been absent in this space. The standard has been downloaded well over 4,000 times from the Open Group's website, so its potential is clearly appreciated. It is different. It is new. There is still much work to be done. We meet again face-to-face in San Francisco at the end of January, but we welcome further questions and potential contributions.
Marianne asks whether or not the IT4IT white papers are on the Open Group website. Yes, indeed they are, Marianne, but you need to be a member of the Open Group, which I would like to assume that you are. You also need to be a member of our forum, so if Simon or I can help you with that, all of the white papers are available for download. Those are complete. The ones that are in progress are on our collaboration portal, Plato, for those of you that are in the Open Group and know about that. So if there is a white paper that is still in formulation, we would appreciate your help to further develop that aspect of the collateral. Then a question from Carlos, who asks: is there any kind of certification for professionals being considered for the future? I will make the bold move, as chairman of the forum, of telling you that we have an agenda item to announce the availability of a people certification, which will be made at the Open Group conference in San Francisco, probably on the Tuesday of the last week. We are currently beta testing the certification materials, but we are ready to launch that before the end of this month. So this is a very mature endeavor. Dwight and I have been involved with it for a number of years. We conservatively estimate that the technical standard, together with its supporting white paper guides and so on, and the substantial Sparx repository, which as we speak is being converted to an ArchiMate representation, collectively represent some 20 to 25 man-years of work. So, despite its newness in the Open Group, this is a substantial technical endeavor which is substantive and ready to offer benefit, and those gains of efficiency, in your organization. I pause there because I see no more questions appearing in the Q&A window. So before I hand back to Simon, I'll just thank Dwight once again and thank you all for your contributions.
If you do have a question, now is the time to type it into the Q&A window towards the bottom right of your screen, and I'll now briefly hand back to Simon and monitor that window. Yeah, there's another question about the CMDB, coming in on slide 17. Got that one. That's quite technical. It is addressed, it's work in progress, but it's, if you will, below the water line in terms of visibility for this level of presentation at the moment. We will certainly talk more about that in San Francisco, and there's a lot more detail on the CMDB within our document, so we would really value your input on that. Yeah. So, just to clarify, since you have the slide on the screen: what Dwight is sharing now is the value stream at level two. Level one is the value chain itself. Level two is what you see. We push down through levels three, four, and five, where the fine detail of data attributes relevant to configuration management databases, plural, becomes very specific to the products and services that you would be responsible for the configuration, planning, running, and delivery of. So we are there, but it's not immediately evident from this level of presentation in this webinar. Okay, Simon, back to you. Thanks again, everybody.