Good morning, good afternoon, good evening everybody. It's my pleasure to introduce our speaker today, Lars Rossum. Lars and I have worked together over a number of years on this initiative. It's my honor to serve as the chair of the forum at the moment, and I will keep my own introduction very brief, simply to introduce Lars as a distinguished technologist and the chief architect of the IT4IT initiative at Hewlett-Packard. He was part of the inception of this initiative and constructed the first version of the architecture, and he leads the initiative that aligns and integrates all of HP's management tools using IT4IT as the reference basis. He's been working on IT and service provider management systems for over 20 years, and has a PhD in computer science, a master's in engineering, and an MBA in technology management. So without further ado, Dr. Lars Rossum.

Thank you, Chris. So this session will really be the why, what, and how around IT4IT. The abstract we sent out for this webinar was, as has been introduced here, to describe what we've been working on: give an overview of IT4IT, what it offers, how you can consume it, and how you can use it in an IT organization, as a consultant, or as a tool vendor. It can give a lot of guidance and hopefully lead the industry to a better place than where we are today.

The starting point for what we have been doing is really looking at IT as it is today. The observation is that we have a broken chain. There are a lot of things that are quite problematic in IT today. At the same time, we are seeing a shift in the industry in how we consume solutions: we are going more and more towards standardized solutions, if possible.
The entire concept of multi-supplier, that a single IT organization is not developing everything itself but is actually consuming services from other suppliers, implies that the area of IT management becomes more complicated. Buying standardized solutions is great because it drives down the cost of IT, but the cost of managing it goes up.

We see that there are a number of existing solution frameworks, or general frameworks, for how to manage IT. The best known one is ITIL, but there are others out there: there are a few on the slide here, but there are more than that. The interesting part is that, though they have a lot of detail on what you want to do at a process level, they do not really give a prescription for how to manage the service models and their lifecycles, nor a prescription for what kind of systems or components you need to put in place and how they interact.

Another observation we had was that IT management is fundamentally industry agnostic. Most people want to believe that they are special, and the same goes for IT organizations; I've talked to many. In general, they state: well, we have a special situation, we have a lot of special problems in our organization. But the more we analyze it, the more it turns out that everybody is really the same. Whether you're in oil and gas, banking, or telecommunications, it is fundamentally the same problems you're trying to solve. So it should also be possible to give a reference architecture for how to manage IT in a standardized way. That actually goes hand in hand with concepts from TOGAF, which is also managed by The Open Group: for each area that you do IT for, you should really have a reference architecture. And IT itself is such an area.
Just like you would have a reference architecture for how an accounting system should look, you can have that for IT. As part of that analysis, we did of course see that the lack of such a standard really drives up our cost significantly, and that has been one of the driving forces. We have examples from some of the early members of this work where just the simple thing of interfacing two incident management systems between a vendor and a consumer of a service could take half a year or a full year to implement, and could cost upwards of a million dollars to make such an interface work. Fundamentally, just exchanging incidents shouldn't be an issue; ideally it should just be plug and play. And if you go to ITIL, it doesn't really help you in describing how to do that. There is way too much wiggle room in how you actually arrange your incidents within the ITIL framework.

So with that said, we started to look at what is actually going on in IT. We also decided that it's not enough to solve just the operational space of IT. It really starts at the planning side. There is some business process modeling of whatever business problem the lines of business have that leads to some demand being registered with IT. IT tries to translate that into requirements. Projects are created. They get developed. Defects are found. It becomes a request for change into operations. You start to monitor; there are incidents and events and problems and maybe some kind of subscription being managed. And everything is linked together by processes, not by data. Everybody refers back to some kind of service model that says, well, this is the accounts receivable module that has an issue. Then people have to figure out what that is, and how it relates to what the business was conceptually requesting originally, et cetera. And it turns out the traceability in this is close to zero.
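The plug-and-play incident exchange described here can be pictured as a small shared schema that both the vendor and the consumer system agree on. This is only a sketch of the idea: the field names and functions below are invented for illustration, not taken from the IT4IT standard.

```python
import json

# Hypothetical minimal incident record a vendor and a consumer system
# could agree on, so incidents can be exchanged without a bespoke,
# year-long integration project. Field names are illustrative only.
INCIDENT_FIELDS = {"id", "status", "severity", "summary", "service_ref"}

def export_incident(incident: dict) -> str:
    """Serialize an incident to the agreed wire format, dropping any
    vendor-internal fields the other side doesn't know about."""
    payload = {k: v for k, v in incident.items() if k in INCIDENT_FIELDS}
    missing = INCIDENT_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"incident missing required fields: {missing}")
    return json.dumps(payload, sort_keys=True)

def import_incident(wire: str) -> dict:
    """Parse and validate an incident received from the other party."""
    payload = json.loads(wire)
    if not INCIDENT_FIELDS <= payload.keys():
        raise ValueError("payload does not match the agreed schema")
    return payload

# A vendor-side record with an internal field that must not leak across.
vendor_incident = {
    "id": "INC-1001", "status": "open", "severity": 2,
    "summary": "Accounts receivable module unresponsive",
    "service_ref": "svc-ar", "internal_queue": "team-7",
}
received = import_incident(export_incident(vendor_incident))
```

Once both parties implement this agreed record, exchanging incidents is a serialization call rather than a custom integration project, which is the point of having a prescriptive standard at the data level.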
And that's not just something I claim. If you go out to most organizations, they have that issue: you do not have traceability end-to-end in IT. You might have a little bit of traceability within some of the silos, within the operations space, maybe within the development space, but end-to-end it doesn't really exist.

So what are we going to do? The first thing we looked at was to say: essentially, running IT itself should be done the same way as you run any other business. And that implies that you should really look at what kind of methods a business would apply in order to become successful. One of the things many businesses recognize is that they are a production-oriented business, which IT really is: it takes demand from external sources like the lines of business, and then it produces services that are delivered by IT. Porter is the business guru who came up with the concept of the value chain, and it was initially mostly used by production companies. We took that framework and asked: what would it look like within an IT organization? The picture you see here (and there are a couple of different ways of doing that layout) really highlights the essential parts. The value chain has four essential value streams in IT, called strategy to portfolio, requirement to deploy, request to fulfill, and detect to correct. The rest of the presentation will really be spent outlining the details of these four value streams, as they're called. To support them, there are a number of supporting activities: finance, sourcing, vendor management, intelligence and reporting, et cetera. These are activities, or supporting functions as people also call them, which are really shared across the whole enterprise.
So it's not IT-specific finance, it's not IT's own, but IT needs to interact with finance as it delivers, or implements, its value streams. Having that view into IT actually resonates a lot with business leaders when we talk to them about the IT4IT framework, because we put it into a business perspective. Each of these four essential value streams delivers value to IT, can be measured, and they are interlinked.

If you then dive into a bit more detail, the next step is that we have these four value streams that together make up IT. The first one, strategy to portfolio, is really where you drive the portfolio of IT. You figure out what kind of business innovation IT should support, and make sure you have the right strategies and prioritization in place to do the right thing in IT. That's also where, at the end of the day, you have the end-to-end reporting on the state of the world, the state of IT. Then we have requirement to deploy; that's where we build capabilities for IT. So essentially, when something has been agreed upon as something to be developed and delivered by IT, you hand it over to a development organization. The first part you would call Plan. The next one you would call Build. And once you have built that capability, you put it into production. So traditionally in IT, you would say Plan, Build, Run. But here in IT4IT we introduce a step in between, which we call request to fulfill. That's because one of the things we realized about IT is that in order to become a modern IT organization, it needs to change its form. IT organizations today are transforming themselves, or should transform themselves (some are not quite there yet), into becoming service providers that serve their lines of business.
And that implies that, in essence, all the capabilities that have been developed should be put into a service catalog and be available for consumption. It should also be said that in requirement to deploy, when you build something, you could also source it: you could decide it's not something you build yourself, but get it from a supplier or from the cloud somewhere, and you put it into the catalog. Then, when the lines of business really need an instance of that service (they need a time tracking system, an oil planning application in place, an accounts receivable module upgraded or changed or whatever), it is done in request to fulfill. That's a much expanded version of what traditionally in IT is called change management. When request to fulfill then pushes something into the data centers through operations, you have detect to correct, which is the Run part: making sure everything is healthy, that it gets patched, upgraded, monitored for performance, et cetera. If there are any issues, they are captured and dealt with, and problems are raised and fed back, of course, into requirements.

So those are the four value streams in IT4IT. So far, there's actually nothing mind-shattering about what we've been describing. The major change in IT4IT compared to what you've seen or discussed before, beyond using a few new words, is the request to fulfill, the Consume part. So Plan-Build-Run has become Plan-Build-Consume-Run. Another thing introduced in IT4IT is to say: okay, we need to be much better at managing the actual service being delivered and understanding what it is. And this is where IT4IT goes a little bit against what we see in ITIL, though we're not throwing away what is in ITIL. If you look at organizations and what they've implemented, everybody to some degree is trying to do ITIL version three, or the latest iterations of version three.
But in reality, as far as the service model goes, most organizations are where ITIL version two was documented: having a CMDB that keeps track of the realized service, the service running in the data center. Often the CMDB in operations is populated based on discovery mechanisms. So there's a long process going on before you get into operations, but suddenly things pop up in a data center, you discover them by some means, and then you populate your CMDB, which is a very backwards way of doing it. You can argue that this is not good enough. The concept of a CMS is about really having a full change management system and managing what is going to happen in the data center earlier in the lifecycle. But in IT4IT, we actually take it one, two, or three steps further.

We really say we need to look at services as starting from a conceptual service model. The conceptual service model essentially just describes the concepts you're delivering to the lines of business, and, attached to those concepts, the requirements or demands the business has on the service. The business doesn't care about how it's implemented; they only care about what it does. At that stage, you don't care about the details of what the API looks like, or the user interface, or anything like that. They care about whether it can do time registration, or accounts receivable, or things like that. You take that into the development cycle, the sourcing cycle, where you figure out how to do it. What comes out of that, or as part of that process, is what we term the logical service model. That is a model that describes how I actually defined this service: how the service is actually constructed, what modules are part of it, what APIs are available, and in what ways it can be consumed.
You can instantiate such a logical model by actually installing the service. You might install it in one place or several places, et cetera. And you might have several logical service releases that all correspond to the same conceptual service. Then finally, you get to the point of saying: okay, I take that logical service and hand it into request to fulfill. R2F will deliver the service into the data center and instantiate the realized service model. That way, you start to be able to create traceability all the way from conceptual to logical to realized, and backwards again. I'll come back to the details as we go along in this presentation.

Moving on from that, there was another principle we looked at when we started constructing IT4IT, and that was to do a layered analysis. It starts with a functional model to figure out all the functional areas that IT needs, basically by looking at customer use cases, et cetera. It goes on to a lifecycle model, where we look at the essential things that happen in terms of continuous assessment, continuous integration, and continuous delivery around the service model and the lifecycle of those service models. Then we went in and looked at the information model, which figures out the key controlling IT artifacts, or data objects, that are part of managing the lifecycle of services. And finally, we looked at the key integration points that happen within such a landscape controlling these information model objects. What we wanted to do, and what we're doing in The Open Group, is to create a normative standard that formalizes the information model and the integration area, because the top layer is what many of the other frameworks already do a great job of: advising IT on what processes you want to use. You can use ITIL, you can use COBIT, there is the SAFe framework for development, et cetera.
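The conceptual-to-logical-to-realized progression just described can be sketched as a tiny data model. This is a hedged illustration only: the class and field names are invented for the sketch and are not the normative IT4IT data objects; what it shows is the traceability links between the stages.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the service model stages and their links.
@dataclass
class ConceptualService:
    name: str                     # what the business asked for
    requirements: list = field(default_factory=list)

@dataclass
class LogicalService:
    release: str                  # how one release of it is constructed
    concept: ConceptualService    # link back to the business concept

@dataclass
class RealizedService:
    instance_id: str              # what actually runs in a data center
    logical: LogicalService       # link back to the release design

concept = ConceptualService("Time tracking", ["record hours", "mobile access"])
logical_v1 = LogicalService("v1.0", concept)
logical_v2 = LogicalService("v2.0", concept)   # several releases, one concept
running = RealizedService("dc-eu-01/tt-7", logical_v2)

# Traceability backwards: from a running instance to the original concept.
traced = running.logical.concept.name
```

The point of the links is exactly what the talk claims: given any running instance, you can walk back to the release that produced it and the business concept that motivated it, and forwards again.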
But the underpinning part, the key controlling data models you need to put in place and use to exchange information between systems and between suppliers in the IT space, is what we formalize.

With that, we can look at how IT4IT relates to some of the other standards and frameworks in the industry that many of you observe or use on a daily basis. The first one most people ask about is ITIL. You could say ITIL is essentially the best-practice framework for IT processes and the taxonomy of IT. It's a good framework. It is primarily focused on the operational side of IT, not as much the planning and strategy part, even though that is part of the latest version; practitioners traditionally mostly use it in operations. IT4IT, by contrast, is a reference architecture for the IT software ecosystem: it describes what you would put in place in IT in order to manage IT itself. So it underpins ITIL. If we take TOGAF, that's an industry-standard architecture framework: it describes how you would go about planning and introducing IT services, and in that sense IT4IT is one of the reference libraries that TOGAF says you should develop for any area you want to manage. And IT itself is such an area. Finally, there is a modeling language called ArchiMate, which is also managed by The Open Group. It's a well-formed modeling language for business services, and we have chosen to use ArchiMate as the formal language for the IT4IT normative standard, which is also aligned with TOGAF. So it all forms a very nice complementary set of things to look into. ITIL is not a standard, unlike the other ones, and it's not part of The Open Group; I would say there are competing things in the world around ITIL. But we recognize it's very important for quite a number of IT organizations, so we make sure we are reasonably aligned with it.
Going on from this, let's look a little at the structure of IT4IT. In addition to the normative standard we are creating, we also create a number of guidance documents, which describe, much in the spirit of TOGAF, how typical capabilities and processes link into the IT4IT normative standard. As we will see in the next few slides, we have the concept of a component (an application component in ArchiMate terms, which we call a functional component) and lifecycle artifacts, or data objects; that's part of the normative standard. But how that links into scenarios is described in guidance documents. It's not normative, but it's a guide to how you can consume the reference architecture. We have that for a number of scenarios, like agile development, how to do SLA management, how to do IT financial management, et cetera.

Another important aspect of IT4IT is that we decided to develop it as a layered structure. There are five layers in the IT4IT structure. The reason is that we want to ensure two things. It should be consumable: even if you're not a hardcore IT architect, it should still be possible to consume it. On the other hand, it should be precise and specific, which requires that we use a lot of the methods from hardcore architectural disciplines. We must be sure it's end-to-end complete; it shouldn't have gaps, it really must cover IT end to end. And if we went into all the detail of specification that you can end up constructing in a language like ArchiMate, it might be difficult to see the forest for the trees, so to speak. So what we've done is to say that at layer one we have an overview layer. It's not the normative standard itself; it's an abstraction of IT4IT that allows you to talk about it on a single slide. You can have all the concepts of IT4IT on a single slide.
And you don't use any formal notation language there; we have a simplistic notation for it, with only three different symbols, so anybody can learn it in five minutes. I will show it on the next slide. Then we have layer two, where we go one layer deeper into each of the value streams, and we start talking about the flow of data and information, et cetera. Layer three is where the real hardcore normative standard is specified in ArchiMate. It's very comprehensive, so it's not easy to grasp on day one, but you can drill into it. And then the last two layers, four and five, become vendor-specific. We realized that detailing everything out to the nth degree of accuracy would take forever, and it actually hinders the industry in adoption. So we say layers one through three will be formalized in The Open Group; layers four and five will be for each vendor. But because of layer three, they will be able to interoperate.

So let's look at IT4IT at layer one. First, a couple of symbols. As I said, there are really three symbols. The black circle is a data object, or key data object. We've identified a small set of key objects, about 33 in total I think, that really control all of IT end to end, and they are listed here. Then there are the blue squares. These are functional components, the essential components you need to put in place in order to manage IT. Each functional component should really control one key artifact, so it becomes minimal: you can't decompose that component further without spoiling some of the traceability. So typically, you would buy or implement at least a full functional component at a time. And then there is the black line, which just states that there is a relationship between these components. So for instance, an incident can be related to a problem and vice versa; similarly, an event can be related to an incident and vice versa.
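The layer-one notation (functional components each owning one key data object, plus undirected relationships between the objects) can be mimicked in a few lines. This is an illustrative sketch under that reading of the slide; the component and object names follow the talk, not the formal standard.

```python
# Illustrative sketch of the layer-one model: each functional component
# owns exactly one key data object, and relationships link data objects.
components = {
    "Incident": "incident",   # incident component controls the incident object
    "Problem": "problem",
    "Event": "event",
}

# Undirected relationships between key data objects, as on the slide:
# incident <-> problem, event <-> incident.
relationships = {
    frozenset({"incident", "problem"}),
    frozenset({"event", "incident"}),
}

def related(obj: str) -> set:
    """Return all data objects directly related to `obj`."""
    out = set()
    for pair in relationships:
        if obj in pair:
            out |= pair - {obj}
    return out
```

Because the relationships are modeled on the data objects rather than on processes, a query like `related("incident")` answers "what can an incident be linked to?" regardless of which vendor's tool owns each component, which is the interoperability point of the standard.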
If you go further down in the specification, you have cardinality, et cetera, but at level one these are the only three things we have. Some of the key data objects are the ones that keep track of the real service model, the model of what IT delivers to the business, and they have a slightly different color, as you can see at the bottom. So the configuration management component will contain data objects that represent the actual services being delivered to the business: configuration items, in ITIL-speak.

If we go into the four value streams, strategy to portfolio has five functional components controlling six key artifacts in total. At the bottom you see the service portfolio component. It's a pretty important one: it keeps track of all the conceptual services that IT delivers, and also the conceptual service blueprint, which essentially describes the phases the conceptual service goes through. So we have version one of, say, the time tracking system, and then version two of the time tracking system, which delivers more business features; maybe the business wanted it to be mobile-accessible, and that would come in version two. Then there is the portfolio demand component, which keeps track of all the backlog items that the businesses are requesting; essentially, it manages the business demand coming in. Then there is the proposal component, which keeps track of what are named scope agreements, which essentially are the IT projects being kicked off. It's the contract that the CIO office or the planners allocate a budget for. It says: okay, we want this to happen, and it's handed over to requirement to deploy to actually be developed. And of course that relates to the backlog items, because it's essentially specified as a collection of backlog items that we decided we are going to do.
Those are again related to the conceptual services to be delivered. Then finally, there is the policy component, which keeps track of all the policies IT lives under; they are related to the conceptual services that must follow those policies. And the enterprise architecture component, which is your traditional Sparx EA or similar kind of system, keeps track of the service architecture of the business services these IT services underpin. So of course those again relate back to the conceptual services. These are the five things you need to put in place in order to manage IT and keep track of what IT is delivering. And the reality is that most IT organizations today do a very poor job of this.

Moving into requirement to deploy, there are more things in play. By a quick count there are eight or nine components. At the bottom, again, there is service design, which is where you start designing the service, and the release composition component (difficult to read at this end), which is where you keep track of the compositional things you release. Then you have the usual things: requirements, defects, test cases, sources and builds and build packages. I don't have time today to go into the details here; I need to leave time for Q&A. The important part is the IT project, or IT initiative, that is maintained by the project component. That's where the budget is handed over to, and a project manager makes sure that everything that happens in R2D is actually delivered on time. Whether it's agile or waterfall or any other kind of process you're using, you can do it using these components.

The next one is request to fulfill, and that's the new one, the new kid on the block.
There are a lot of names in here that will not sound familiar to all of you listening to this presentation. The essence is that we are transforming IT into becoming a service provider, which implies that you need a consumption component where you can go in and shop for what you want from IT. One of the things that is very difficult to get your head around initially is that IT itself is a consumer of IT. So if you need another virtual machine for R2D, you should be able to go into a consumption catalog and, with the profile of a developer, get a virtual machine. If you are a person responsible for delivering major services in a region, you should be able to go in and allocate machines for an extra Exchange server installation, or even allocate the software you need to put into production. In order to do that, you need to manage the offers you can make, and you need to understand the composition of the offers. Then the very important component here is the fulfillment execution component, which is the one that actually figures out how to deliver it. The implementation of a fulfillment execution component can be a system that just keeps track of the manual things you're going to do, in line with a traditional change request process, or it can be a fully automated, cloud-oriented setup where you just press the button and everything happens completely automatically. Also associated with request to fulfill is keeping track of actual usage, and doing showback and chargeback to whoever is consuming that service. There's a lot more information about this value stream in the detailed documentation. Again, I could spend a couple of hours explaining the details of what goes on here, but it's a very important new thing in IT4IT compared to what you're used to.

The final one is detect to correct.
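The two ends of the fulfillment execution spectrum just described (a tracked manual runbook versus a one-click automated flow) can be sketched behind a single interface. The class and step names below are hypothetical, purely to illustrate the idea that the same request can be fulfilled manually or automatically.

```python
# Hypothetical sketch: one fulfillment interface, two implementations.
# A manual executor only records the steps a human must perform; an
# automated executor carries them out. Step names are invented.
STEPS = ("allocate VM", "install software", "register CI")

class ManualFulfillment:
    def fulfill(self, offer: str) -> list:
        # Produce a work list for operators; nothing is executed here.
        return [f"TODO: {step} for {offer}" for step in STEPS]

class AutomatedFulfillment:
    def __init__(self):
        self.log = []

    def fulfill(self, offer: str) -> list:
        # "Press the button": each step is carried out immediately.
        for step in STEPS:
            self.log.append(f"done: {step} for {offer}")
        return self.log

manual_plan = ManualFulfillment().fulfill("exchange-server")
auto_log = AutomatedFulfillment().fulfill("exchange-server")
```

The design point is that the consumption catalog and the rest of the value stream don't need to know which implementation sits behind `fulfill`, so an organization can start with tracked manual delivery and automate step by step.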
Here most people will feel at home, because these are essentially the components you need to put in place in order to manage the things running in IT. You have a configuration management system, or component, where you keep track of the services you actually have running. Changes there are controlled by change control with RFCs. You have service monitoring. You have an event component that keeps track of everything that happens. You can manage incidents, problems, and known errors. You have to be able to manage service levels, and that is the executable part of service levels; they actually come to life much earlier in the lifecycle, and there's a full guidance document on how that works in the IT4IT standard. And then the runbook automation component.

So if you take all of that and put it together, you get this picture. One important thing: at the bottom you have a line going through all the purple circles. You start with the conceptual service. It becomes the logical service, then what you actually release, then the desired service that you want to put into production, and finally what you actually have in production. That line, with the various stages of the service definition, is what we call the service backbone. And that is what gives you traceability. You can go all the way from a conceptual service, or a scope agreement, or a portfolio backlog item, and figure out what was actually done: where is it in development, which data centers have installations of it, in which versions, and how many incidents have been created on which versions of that conceptual service. Suddenly, if you have this in place, you can have true traceability in IT.

And with that, I'm coming to the end. In summary, IT4IT is not trying to replace anything out there; it's complementary to existing frameworks.
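The backbone query described here (how many incidents exist against which versions of a conceptual service) amounts to walking the traceability links. The following is a hedged illustration with invented identifiers; only the idea of end-to-end traceability is taken from the talk.

```python
# Illustrative traceability walk along a service backbone.
backbone = {
    # conceptual service -> its logical releases
    "time-tracking": ["tt-v1", "tt-v2"],
}
realized = {
    # logical release -> realized instances in data centers
    "tt-v1": ["dc-us/tt-1"],
    "tt-v2": ["dc-eu/tt-2", "dc-us/tt-3"],
}
incidents = {
    # realized instance -> open incident ids raised against it
    "dc-us/tt-1": ["INC-7"],
    "dc-eu/tt-2": ["INC-9", "INC-12"],
    "dc-us/tt-3": [],
}

def incidents_for(concept: str) -> dict:
    """Incidents per release of a conceptual service, via the backbone."""
    return {rel: [i for inst in realized[rel] for i in incidents[inst]]
            for rel in backbone[concept]}

report = incidents_for("time-tracking")
```

Without the backbone links, answering "which version of the time tracking service is generating incidents?" means manual detective work across silos; with them, it is a mechanical join.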
But it addresses something that didn't exist before, and that is a prescriptive reference architecture for how to run IT itself. We believe it's very robust; we have tested it out. Even though the standardization is only now being finalized in its first release, we have a number of IT departments that are following the IT4IT framework and have used it for incremental solutions. We have tested it against the latest trends in terms of, say, APIs and DevOps. Within my organization, we actually have reference implementations of pretty much everything in the reference architecture. So it's a very doable thing. It works. It's not just a theoretical piece of work that we have been working on for four years; it's ready to be consumed.

Many thanks, Lars. I think we've all struggled a bit with the audio, so Simon and I will address that later on, and many thanks to all for their perseverance. With Lars's final slide up, I'd like to respond to the question that said the relationship between IT4IT and ITIL is important, but what about the relationship with ArchiMate and TOGAF? The thing to point out about one of Lars's slides is that it doesn't compare apples with apples. The point we're trying to make is that some of the understandings of IT4IT mask its rather deceptive simplicity. There is a complementarity, a very strong complementarity, to ITIL. But what we are not doing is competing with it, or with anything else in relation to TOGAF or ArchiMate. What we've done, as good citizens of The Open Group if you will, is build IT4IT using TOGAF, and our representation of it uses the ArchiMate language. So on the slide Lars is just going back to now (thank you, Lars), the top-level comparison is a true comparison. The other two pieces are highly informative. But a jolly good question. So that's the first one.

Let me scroll up. Another question that was raised was that the word layer is a bit misleading. I'm inclined to agree.
What we have is a sophisticated, leveled abstraction. So we have five abstraction levels, and again, the power of this is much more sophisticated than it first seems, because the abstraction levels, moving downwards, enable vendors like CA, ServiceNow, IBM, Microsoft, HP, and the others already in the forum, without being dependent on any process model, to conform to the standard and demonstrate the compliance of their products. That is something new and powerful: a prescriptive architecture, a prescriptive standard, that can be conformed to right down to the product level. And we can also go upwards. The value of abstraction levels moving up is that we can articulate this, as Lars has demonstrated, to CIO and CIO-plus-one levels in language which is conceptually consistent with the standard, but without overwhelming them with the down-in-the-weeds detail that's obviously needed to make the CMDB work and make everything integrate to deliver the services in the value streams. So again, on the comment that layer is a bit misleading: I would take that one on the chin. These are abstraction levels.

A question was asked: for the IT software ecosystem, is there applicability to telecommunications, IT infrastructure, data centers, et cetera? The answer to that is a qualified no. We are dealing with the business of IT. This has various labels in different parts of the world, but what we're looking at is the business of IT; in other words, that specific area.

And I would like to add to that. Go ahead, Lars. In my past life I've worked a lot with the telecoms industry. You could say a lot of telecoms today are in a transformation to become managed service providers, so they become more and more like IT organizations, and in that sense a lot of telecoms can actually use this as well. There's also another thing: I've looked at eTOM, which is the framework for telecoms, so to speak. It's managed by the TeleManagement Forum.
And there are ideas from that that have migrated into this framework. The concept of consumption, the Request to Fulfill value stream, and the concept of a layered way of presenting it can be traced back to eTOM. So we do have some degree of understanding of what goes on there. Absolutely. I would underscore Lars's elaboration there by characterizing, if you will, the IT4IT initiative as a long overdue perception of IT service management as its own industry vertical. And what we've been able to do with the abstraction levels is to accommodate the learnings from other standards in other industry verticals, to truly appreciate the space that this can occupy in the frameworks-and-standards landscape out there.

We were also asked a question about performance metrics in enterprise architecture and work products such as the ones articulated in TOGAF. Our response to this is in the substantial guidance material that comes with the reference architecture. Just to give you some background, we have almost 50 organizations actively contributing to the collateral that you've seen Lars introduce today. One of them is Westbury Software, who are in the business of performance measurement and management, and we have an active work group which is developing KPIs which provide the full insight from the models that Lars has introduced. So again, there's a huge benefit in our involvement with the Open Group, in that we can draw that structural need, if you will, from TOGAF and deliver it through one of our own work groups within the forum. But a great question there. Again, bear with me while I scroll up. I've picked out a question here, or rather two questions.
The first one is: can you implement IT4IT incrementally? Very much so. That is what we recommend to all our customers, and it is typically how you would run projects. We typically see that people start looking at the Detect to Correct area and implement some of the functional components around incident, event, monitoring, etc., and then move out from there to other parts. So you can definitely do it incrementally. And it's not a rip and replace; very often the existing systems you have in place can be made part of it. So typically the first thing you would do is to map your existing landscape into IT4IT, and then you can use it as a way of discovering how your to-be architecture could be based on your current as-is architecture, that is, on what systems you already have in place. A lot of that activity takes place in most organizations, so it becomes a good backdrop for that.

Yeah, thanks very much, Lars. Just to elaborate a little further on that: the real power of this level of abstraction that you see on the slide in front of you, to extend Lars' comments a bit, is that this work is completely process-model agnostic. We were asked whether or not this would accommodate Deming's work, PDCA, and so on and so forth. The answer is absolutely yes. This is sufficiently agnostic at the process-model level to accommodate anything from waterfall to DevOps and so on. And that swappability, if you will, this approach to building the collateral, and these upwards and downwards abstractions that you see indicated by Lars' arrow on the right-hand side of the slide, make this truly unique and definitely a space filler in the landscape at the moment.

There's one question here, Lars, that I'm going to direct to you because you're in a much better place to answer it. We were asked: what do you mean by a "realized" (in quotation marks) service model? Right. Some people also call it a physical service model.
So really what we are talking about is the model of what you are actually running in your data center. It's something you have turned on, or at least made available in the data center, so it can serve customers. It's traditionally what you have in your CMDB or CMS systems, populated typically by auto-discovery. So "physical" doesn't refer to the machines as opposed to the applications; it just means that it is something that has now been realized, that is no longer in a planning or testing state. That relates slightly to another question, because I see there was a little bit of discussion about whether it also includes machines and chips and what have you. And yes, it does. When you model what you are actually putting in the data center, the service model needs to go all the way down to the actual hardware. So the idea is that with IT4IT you manage from the business service level all the way down to the physical machinery that it's all running on top of. Of course, if you have an IT organization that relies on cloud services to deliver a lot of it, some of that would be abstracted away from you. But we are not precluding ourselves from also managing the physical world.

Wonderful. Okay. We have a related question, which I'll initially respond to but then bring you back in again. The question was: can you define what you mean by life cycle level? And then the supplement was: life cycle of what? My response would be that it's an end-to-end overview, soup to nuts, as we would say in the USA. But in terms of the specific perspective, the life cycle perspective of IT4IT begins from the value chain and the four value streams. And the reference architecture is sufficiently rich that the functional components, and everything else prescriptive required to design and choreograph all of that activity, are rich enough to support that whole life cycle. Do you think that's a reasonable response, Lars, or would you care to add anything?
No, I think that's a reasonable response. There are always things that can evolve further. When it comes to the service model and the life cycles of services, there's further work that we're currently doing, looking into what other standards are doing here. There are interesting standards like Heat in OpenStack and TOSCA in OASIS, for instance. We don't want to replace them with something else; we want to make sure we have pointers to them and align with those things.

There was a question around how we relate to some of the work that is going on in the UPDM group in the OMG. Unfortunately, I don't know all the details of all the relationships; the work that we have in IT4IT has been growing dramatically over the past year. But we do have relationships into the OMG, and we're looking into how we can standardize the representation of ArchiMate and related things. We use an extended version of some UML parts as well in the detailed specification, and we are making sure there are exchange formats for that from the OMG. I don't know the UPDM program in particular, so I hope I have sort of answered that one.

Yes, indeed. There are some more specific ones; I'll just take one more. Are there already models available in some of the commercial modeling tools like MEGA, Troux, etc.? And yes, there are. Some of the people working in the IT4IT group have implemented versions of it. Right now, within the IT4IT group, we have used the Sparx Enterprise Architect tool for representing ArchiMate, but we are trying to find an open source tool that is strong enough to be able to capture all of it, so as not to favor any particular vendor. But I expect that pretty much every major modeling tool will have a version included very soon.

Wonderful. Many, many thanks, Lars, for a splendid presentation, and thanks to you and those in the webinar for persevering with less than optimal audio. I'd like to bring the session to a close now.
Many thanks for your participation.
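As a closing illustration, the "realized" service model discussed in the Q&A can be sketched in a few lines of Python. Everything here is hypothetical and not part of the IT4IT standard itself: the class names, the lifecycle states, and the "Payroll" example hierarchy are all invented for illustration. The sketch only shows the core idea, that lifecycle state, not hardware versus software, is what separates the realized (physical) service model from elements still in planning or test:

```python
from dataclasses import dataclass, field
from enum import Enum

class LifecycleState(Enum):
    """Where a service model element sits in its life cycle (hypothetical states)."""
    PLANNED = "planned"
    TEST = "test"
    REALIZED = "realized"   # turned on, or at least available, in the data center

@dataclass
class ConfigurationItem:
    """One node in the service model, as a CMDB/CMS might record it."""
    name: str
    ci_type: str            # e.g. "business_service", "application", "server"
    state: LifecycleState
    children: list = field(default_factory=list)

    def realized_items(self):
        """Yield every CI in this subtree that is actually realized."""
        if self.state is LifecycleState.REALIZED:
            yield self
        for child in self.children:
            yield from child.realized_items()

# A hypothetical model: business service -> application -> physical server.
server = ConfigurationItem("blade-042", "server", LifecycleState.REALIZED)
app = ConfigurationItem("payroll-app", "application", LifecycleState.REALIZED, [server])
staging = ConfigurationItem("payroll-app-v2", "application", LifecycleState.TEST)
service = ConfigurationItem("Payroll", "business_service",
                            LifecycleState.REALIZED, [app, staging])

# Only the realized subtree forms the "physical" service model;
# the v2 application still in test is excluded.
names = [ci.name for ci in service.realized_items()]
print(names)  # ['Payroll', 'payroll-app', 'blade-042']
```

A real CMDB populated by auto-discovery would of course carry far richer relationships and go all the way down to the hardware, but the state distinction above is the essence of "realized."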