Well, welcome, everyone, to this webinar from The Open Group, which today, I'm very pleased to say, is focused on Service-Oriented Cloud Computing Infrastructure.

Well, thanks very much, Simon. I'm Chris Harding. I'm with The Open Group, where I'm the Director for Interoperability, and my role includes supporting the work of our members on cloud computing and on SOA. So I will give a bit of background at the start of this presentation on The Open Group and the work that we're doing in those areas, and then hand over to Nadhan and Tina, who will describe the technical details of the Service-Oriented Cloud Computing Infrastructure (SOCCI) framework and how to use it. Okay, so you can see the agenda on the screen now: after Nadhan and Tina have told you about the SOCCI framework, we'll go on to wrap-up and questions. The Open Group is a consortium of companies that use or create information technology solutions. The vision of The Open Group is Boundless Information Flow, which is the idea that information should be available within the enterprise and the extended enterprise, that is to say, the enterprise's customers and business partners, as needed and when needed, achieved through global interoperability in a secure, reliable, and timely manner. We see enterprise architecture as a key way in which Boundless Information Flow can be achieved. So our mission is to drive the creation of Boundless Information Flow. We work with customers, with suppliers, and with consortia and other standards bodies; working with them, we understand requirements and we evolve and integrate specifications. We're going to talk about one of those specifications today, the Service-Oriented Cloud Computing Infrastructure framework. We also offer a comprehensive set of services to other consortia.
We are also the operators of the industry's premier certification service, and we work on encouraging the procurement of certified products. So there's a broad spectrum of activities that The Open Group undertakes in carrying out its mission. We're vendor- and technology-neutral, we're an international consortium, and membership is open to all enterprises, small, medium, and large, anywhere in the world. So how do we operate? What do we do? As I said, we are open to membership by companies of all sizes anywhere in the world, and those members collaborate in forums and work groups; I'll be saying a little bit about our SOA Work Group and our Cloud Work Group. We also hold conferences four times a year. I've just come back from the last one, which was in Cannes, France, last week, a very pleasant location and a very good conference. And we hold regional conferences in various parts of the world. It looks as if I will be going to one in Saudi Arabia next month; we also hold them in South Africa, China, India, South America, a number of places. On certification: we certify people, principally in the architecture field, as enterprise architects and solution architects and for their knowledge of The Open Group Architecture Framework, TOGAF. We also certify products. In fact, our original certification program was the policing that we do of the UNIX trademark, which we own, to certify that operating systems are fit to use the UNIX name. We also have certification programs for other products, such as application platforms and architecture tools, and for training services. And as I think I mentioned earlier, we provide collaboration services: we offer our expertise in running consortia as a service to other consortia. So finally we get to SOA, Service-Oriented Architecture. The SOA Work Group was set up to enable our members to work on things concerned with SOA, to develop SOA specifications, and to foster the use of SOA.
It's been going now for, I think, over six years, actually seven, and we have 400 people participating from 70 member companies. You can see on the right a long list of completed projects, which now includes SOCCI as the latest one. The areas we're still working on are SOA and cloud security. Actually, legacy evolution should, since last week, be moved to the right-hand side too, because we've just published The Open Group guide to legacy evolution to SOA. We have also started a project on SOA for business technology, and we're looking at a further iteration of the project we undertook to describe how to use The Open Group Architecture Framework to do SOA. So the SOA Work Group is a mature work group. It has completed projects defining a reference architecture standard, a maturity model standard, a governance framework, a practical guide to using TOGAF for SOA, and now the Service-Oriented Cloud Computing Infrastructure framework, in conjunction with the Cloud Work Group. The Cloud Work Group has been going for only about two years, so you see that I haven't called them completed projects but projects with completed deliverables, because most of those projects are still working on further deliverables, plus a much larger list of developing projects that have not yet produced their deliverables. The main thrust of the Cloud Work Group is understanding the use of cloud computing, to gain its benefits through enterprise architecture. It's now a larger work group than the SOA Work Group, both in the number of participants and in the number of participating companies, and it was the SOA and Cloud Work Groups that got together to jointly develop the Service-Oriented Cloud Computing Infrastructure framework. So I think this may now be an appropriate point to hand over to our experts, but just to say that this is the first cloud standard that we have produced.
It was produced jointly by the SOA and Cloud Work Groups. The SOA Work Group had produced many SOA standards; this is the first one The Open Group has produced on cloud. It lays out the concepts and architectural building blocks for infrastructure to support SOA and cloud, and it was developed by members of The Open Group SOA and Cloud Work Groups, including HP and IBM as well as other companies. The two co-chairs of this project were Nadhan from HP and Tina from IBM, and I think this is probably an appropriate point to hand over. I think Nadhan is going to take it from here.

Thank you very much, Chris, for that overview of The Open Group and where the SOA and Cloud Work Groups fit in. My name is Nadhan. I am one of the co-chairs of the Service-Oriented Cloud Computing Infrastructure project, or, as we call it, SOCCI. Tina Abdullah from IBM is the other co-chair; she is on the call as well, and she will be chiming in and also speaking to some of the slides that come later. It is the two of us who co-chaired the SOCCI project and took it through to the publication of the first technical standard, which we will be walking through today. Just to give some context as to where SOCCI started, and to elaborate on what Chris said: we really started within the SOA Work Group. It was born, in concept, as the Service-Oriented Infrastructure project, and midstream the Cloud Work Group evolved. We had very healthy discussions: when we were talking about service-oriented infrastructure, the cloud was so pertinent to that domain that it was common sense to evolve what started as Service-Oriented Infrastructure into Service-Oriented Cloud Computing Infrastructure. That provides additional context as to why this became a joint project between the SOA and Cloud Work Groups.
And to the third bullet on the slide: I cannot stress enough the contributions of the different members of the team, representing various companies and The Open Group itself; you will see the details in the published guide, for which there is a link in this deck as well. With that, if you can go to the next slide, Simon. One of the questions The Open Group challenges itself with, as we did in this case, is, like all other projects, to justify why we need to work on the project and what the goals are. In this particular case, since we are talking about a new technical standard: what is the need? Is the market really asking for this standard? When we did the analysis at the time SOCCI started, and it is perhaps true to some extent even today, there were many prevalent standards that had been around for SOA for many years, and cloud standards had also emerged; there are quite a few around that we do interact with, as Chris was talking about earlier. However, when you look at the intersection between SOA and cloud: are there standards for infrastructure being provisioned? The application of service-orientation principles is something we have been doing a lot in the application space, but not as much in the infrastructure space; that evolved later, and standards take time to evolve. It's not like you get the standard first and then the world starts doing things; it's the other way around. Therefore, what we found was that there really was no technical standard applying service-orientation principles in the infrastructure domain, and voila, SOCCI; that's really how it came about. So what is SOCCI? It is basically the realization of an enabling framework of service-oriented components, with infrastructure in mind, so that IT infrastructure can be provided as a service in the cloud.
So what is really happening is that we are capitalizing on the emergence of virtualization technologies and on SOA principles being applied to infrastructure, thereby offering infrastructure as a service when it is in the cloud; that's really what the SOCCI framework enables. What you are seeing here, and I would encourage everyone to subscribe to The Open Group blog, where there is a constant feed of really insightful posts by different members of The Open Group, is a post about the standard. There is a press release, which is the first link (these links are live), and you also see a post about what was involved in producing the first technical standard, which is the second bullet. Then there is the press release statement itself, saying the standard outlines the concepts and the architectural building blocks; we will get into detail on the SOCCI standard in the next few slides. This is where I hand it off to Tina. We are going to chime in as we see fit, so Tina, please feel free to weigh in on the earlier slides, but do take it away from here.

Okay, thank you, Nadhan. First of all, thank you, everybody, for joining the call. As we said, this work product really is a combined effort, using a lot of previous work from the service-oriented infrastructure groups. I also want to point out that we have other work groups working in concert, such as the security team and others, whose areas we did not want to specify in detail in this standard, because we believe those work groups within The Open Group have produced significant work products which can give you much more depth.
So as we walk through, I will point out areas where you would potentially go to another work group to find more details, such as security and governance. As Chris said in his opening statement, we have now started some other projects, including the cloud computing governance project, which Nadhan and I are leading; it will go into much more depth in addressing the governance area. So, with that: in this slide, what you are looking at is basically extending SOI, service-oriented infrastructure, into the cloud, by leveraging SOCCI as a foundation. As we know, cloud computing puts new demands on IT infrastructure and its management. A cloud computing provider needs to support multi-tenancy: instead of serving individual subscribers, they now have to provide more coarse-grained services to help maximize utilization of resources, dynamic allocation of resources, metering with chargeback, and so on. When we started the paper, we tried to leverage the cloud computing characteristics that NIST has published, which I'm sure most of you are familiar with. What I am trying to say is that we tried to examine what is available in the industry and use the more widely accepted standards rather than building from scratch. So we used the characteristics defined by NIST, such as on-demand self-service, broad network access, resource pooling, and so on. But because of the specific and unique differences between cloud and service-oriented infrastructure, in areas such as dynamic pooling and provisioning, we felt that SOCCI would be able to bridge those differences and extend from the SOI work that has been done in The Open Group previously; there are papers that, if you are interested, you can get from The Open Group site.
Some of the things are very similar, like using services as a foundation, as building blocks, to provide some operational transparency to the subscriber. But it's not for a single subscriber here; we're talking about multi-tenancy, so being able to provide chargeback and automatic provisioning becomes extremely important. And while SOI does not offer the whole spectrum of these characteristics by design, it becomes an enabler for SOCCI. So SOCCI is a service-oriented, utility-based, manageable, scalable, on-demand infrastructure that can support the essential cloud characteristics and the service and deployment models. In other words, SOCCI describes the essentials for implementing and managing an Infrastructure as a Service environment. I'll emphasize here also that it is very much about infrastructure per se; it does not try to go beyond into other areas such as business processes in the cloud and so on. IaaS may entail provisioning multiple components, including servers for on-demand computing power, facilities for robust web hosting, and elastic storage; those are characteristics which NIST has identified. Here you can see a high-level view of what we're trying to denote in terms of what SOCCI is about. As you can see, there is governance, which provides a guideline to SOCCI overall and which is going to be described in much more detail by the cloud governance group, which I welcome everyone to join. Then there is security, which is to ensure that everything applied here complies with security requirements and can support the service subscribers based on their SLAs and so on. And really the key, the core piece you're looking at, is the elements of SOCCI, which I will describe piece by piece in a little more detail.
The elements of SOCCI consist of compute, which can be hypervisors and everything else used to provide compute functionality, not so much as physical pieces but in a virtualized manner, plus storage, network, facility, and so on. As I said, we are focusing on the infrastructure pieces, and we are trying to standardize and simplify; The Open Group is vendor-neutral and product-agnostic. To let you use all these elements of SOCCI, there is a cloud computing management platform, which consists of management building blocks. We have business-level building blocks, which interact with the external user at the business level, and operational-level building blocks, which interact with the elements of SOCCI. As we move forward, you can see that the building blocks on the screen also support sets of users from their own perspectives. We do not include the basics, like a service registry for registering and publishing services, or orchestration, composition, and choreography, because the availability of SOA enablement, such as an ESB, is implied here; we are assuming it has already been provided. As I said, we discuss the management building blocks that are directly related to the infrastructure supporting SOCCI. Some of the broadly used cloud and IT service management components, such as the service catalog and compliance policy managers, are likewise excluded from this view, and finer details like SLAs, service-level agreements, are implied in the building blocks. Starting with the location manager, let me talk about some of these common building blocks, because they are the enablers that allow the user to use the elements of SOCCI.
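The separation Tina describes, elements of SOCCI on one side and business-level versus operational-level management building blocks on the other, can be sketched as a simple data model. This is an illustrative sketch only: the class and enum names are our own, not identifiers from the standard, and the `manages` associations shown are example assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum

class SocciElement(Enum):
    # The four element categories named in the webinar.
    COMPUTE = "compute"      # virtualized compute, e.g. hypervisor-managed
    STORAGE = "storage"
    NETWORK = "network"
    FACILITY = "facility"    # data-center facilities (power, cooling, space)

class ManagementLevel(Enum):
    BUSINESS = "business"        # faces external users and business concerns
    OPERATIONAL = "operational"  # faces the elements of SOCCI themselves

@dataclass
class BuildingBlock:
    name: str
    level: ManagementLevel
    manages: list = field(default_factory=list)  # elements it touches (assumed)

# A partial platform; the full set of building blocks is in the standard.
platform = [
    BuildingBlock("Location Manager", ManagementLevel.BUSINESS),
    BuildingBlock("Metering Manager", ManagementLevel.BUSINESS),
    BuildingBlock("Billing Manager", ManagementLevel.BUSINESS),
    BuildingBlock("Virtualization Manager", ManagementLevel.OPERATIONAL,
                  [SocciElement.COMPUTE, SocciElement.STORAGE,
                   SocciElement.NETWORK]),
    BuildingBlock("Provisioning Manager", ManagementLevel.OPERATIONAL,
                  list(SocciElement)),
]

operational = [b.name for b in platform
               if b.level is ManagementLevel.OPERATIONAL]
```

The point of the split is visible in the last line: only the operational-level blocks carry `manages` links to the elements of SOCCI, while business-level blocks face the subscriber.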
From the business perspective, the location manager helps geographically locate resources based on business processing rules, regulations, cost constraints, and service-level agreements, which may differ from tenant to tenant. Then you're looking at the billing and metering managers, which work together in some ways: the metering manager tracks usage, because in a virtualized space, how much each tenant consumes is critical to identify in order to do chargeback, and it provides that information to the billing manager. Those are the high-level building blocks that expose services at the infrastructure level to the user, who subscribes to services based on a specific contract and perhaps geographic, business, and legal requirements. They manage at the upper level, and they work with the operational building blocks. Here we have the virtualization manager, which provides dynamic resource pooling and allows you to emulate a physical infrastructure component, any part of the elements of SOCCI; it also acts as a facade and a manager for the physical infrastructure elements of SOCCI within this framework. Then it's important to know that, because it's a shared environment with multiple tenants, we provide the monitoring and event manager, which helps monitor the different services, events, and usage trends, to see whether there are any conflicts or alerts that need to be triggered based on previously defined rules, thresholds, or abnormalities, and then correlates network and resource assets, ensuring compliance and reporting any potential problems. It is also important in that it indicates the best way to propagate or clear any incidents.
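As a rough illustration of how the metering and billing managers described above might cooperate, here is a hedged sketch in Python. The class names, rate card, resource names, and tenant identifiers are all invented for this example; the standard describes responsibilities, not an API.

```python
from collections import defaultdict

class MeteringManager:
    """Tracks per-tenant resource usage in a shared, virtualized environment."""
    def __init__(self):
        self._usage = defaultdict(float)  # (tenant, resource) -> units used

    def record(self, tenant, resource, units):
        self._usage[(tenant, resource)] += units

    def usage_for(self, tenant):
        return {r: u for (t, r), u in self._usage.items() if t == tenant}

class BillingManager:
    """Turns metered usage into a chargeback, given an assumed rate card."""
    def __init__(self, metering, rates):
        self.metering = metering
        self.rates = rates  # resource -> price per unit

    def chargeback(self, tenant):
        usage = self.metering.usage_for(tenant)
        return sum(units * self.rates[res] for res, units in usage.items())

# Example: two tenants sharing the same metered environment.
meter = MeteringManager()
meter.record("tenant-a", "compute-hours", 10)
meter.record("tenant-a", "storage-gb", 100)
meter.record("tenant-b", "compute-hours", 2)

billing = BillingManager(meter, {"compute-hours": 0.50, "storage-gb": 0.02})
```

The design mirrors the flow in the talk: metering identifies each tenant's share, and billing consumes that information, rather than both concerns living in one component.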
As we move the model into the cloud, it becomes much more proactive, because we need automation to proactively monitor and detect. Using the monitoring and event manager is critical to measure the mean time between failures and the mean time to repair, with configurable alerts, so that the user who subscribes to the services immediately gets a notification, with automatic escalation of any problems or issues. The provisioning manager provides rapid elasticity and enables on-demand self-service. The purpose of this manager is really to make sure that the right number of resources is allocated and balanced among all the different components, to address fluctuations in demand in the virtualized space, and to provide optimized infrastructure resources while satisfying all the customer requirements. It goes through all the elements of SOCCI to locate the right resources, be it storage, facility, or network. Those are wired up in the data center, and a lot of companies now outsource the data center because of the benefits the cloud provides: not only maximizing the utility-based model but also avoiding servers sitting idle, so they can maximize their investment and cut costs. The provisioning manager does that work, dynamically allocating things to support the need and ensure the resources in the virtualized space. Then we have the capacity and performance managers, and that's where pooling comes in: when provisioning is needed, the provisioning manager talks to the capacity and performance manager to make sure that performance satisfies the subscriber's SLA and that there is enough capacity to scale dynamically in the virtual space.
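The provisioning manager's elasticity loop, scaling allocations to demand while a capacity check enforces limits, could look something like this minimal sketch. All names, the scaling rule, and the pool-limit clamp are assumptions made for illustration, not behavior mandated by the standard.

```python
class CapacityManager:
    """Stands in for the capacity and performance managers: caps requests
    at what the shared pool can actually supply."""
    def __init__(self, pool_limit):
        self.pool_limit = pool_limit  # total instances available in the pool

    def clamp(self, requested):
        return min(requested, self.pool_limit)

class ProvisioningManager:
    """Scales the allocation up or down to match fluctuating demand."""
    def __init__(self, capacity, instances_per_unit=1, min_instances=1):
        self.capacity = capacity
        self.instances_per_unit = instances_per_unit
        self.min_instances = min_instances
        self.allocated = min_instances

    def reconcile(self, demand_units):
        # Compute the target for the current demand, then let the capacity
        # manager clamp it to what the pool can provide.
        target = max(self.min_instances,
                     demand_units * self.instances_per_unit)
        self.allocated = self.capacity.clamp(target)
        return self.allocated

pm = ProvisioningManager(CapacityManager(pool_limit=8))
```

A call such as `pm.reconcile(3)` scales up to three instances, while a demand spike beyond the pool limit is held at the capacity manager's ceiling, which is where an SLA breach would be flagged in a fuller design.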
The configuration manager is critical because it allows you to manage multi-tenancy in this space, providing versioning and configuration support, whether virtual or physical, within the infrastructure. It also supports the provisioning and monitoring and event managers within the infrastructure, to ensure the overall functional integrity of the infrastructure. So, looking at the larger picture, as you can see there are three primary user viewpoints, which we describe in our paper; I'm not going to go into much more detail here. On the very left-hand side you see the consumers and end users, the cloud service consumers; they have their own sub-roles, such as service integrators, consumer business managers, and so on. On the other hand, we have service developers, who actually create and develop the services needed for consumption, and service providers, who basically leverage the developed services and deliver those cloud services, as you can see, business process services, software, platform, and so on, to the cloud service consumer. We also have a sort of hybrid user view, which is not shown here but is implied: the integrator. A lot of the time, between the consumer and the provider, there will be a service integrator playing a role, somewhat like matchmaking the services needed by the subscriber. It's not necessarily a mandatory role, more a hybrid or optional one, so we're not showing it here, but in our paper we did describe the sub-roles it provides.
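The configuration manager's versioning role could be sketched as follows. This is an assumed minimal design for illustration: item names and the version numbering are invented, and the standard does not prescribe an interface.

```python
class ConfigurationManager:
    """Keeps every change to a virtual or physical configuration item as a
    new version, so provisioning and monitoring can refer to a consistent,
    auditable state."""
    def __init__(self):
        self._items = {}  # item name -> list of versioned configs

    def update(self, item, config):
        versions = self._items.setdefault(item, [])
        versions.append(config)
        return len(versions)  # 1-based version number of the new config

    def current(self, item):
        return self._items[item][-1]

    def at_version(self, item, version):
        return self._items[item][version - 1]

# Example: a VM template is resized; both versions remain retrievable.
cm = ConfigurationManager()
cm.update("vm-template", {"cpus": 2, "ram_gb": 4})
v2 = cm.update("vm-template", {"cpus": 4, "ram_gb": 8})
```

Keeping the history rather than overwriting in place is what lets the monitoring and event manager correlate an incident with the exact configuration that was live at the time.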
So, as a whole, our emphasis here is to keep a very simple, building-block kind of component view that allows the folks who need to start establishing services at the infrastructure layer in the cloud to know which essential building blocks they need to address, to be able to support their business operations and their project. I'm not going into more detail, but there are obviously more in-depth, detailed descriptions in the paper, which I hope everyone gets a chance to read through if you have not done so. Okay, here we use a motor cars scenario to tie in the building blocks and give you a sense of how you would realize those building blocks and components in the real world. This race-car example is a simple way to see it: sitting in the middle, in the green outline, is the integrator. Of course, remember this is one scenario; there might be multiple scenarios that can be derived from SOCCI, but we took one very simple example. Assume there is a cloud integrator that has developed a site to provide motor car information to subscribers. When subscribers want to see race information, the integrator talks to the providers, the race-car information providers, at multiple different geographic sites, and it can assemble that information and display it to the subscriber based on their subscription level, gold, silver, or bronze, with different content displayed for each level. The integrator also checks and monitors the usage, and then works with the billing manager to bill based on each subscriber's consumption and usage.
In this case, as you see in the legend, the dashed lines are providers: potentially technology from an ISV, or a motor car software provider with some specific software needed to stream and display the motor car statistics, and maybe also provide some analysis or reporting as needed. So, in essence, there are sets of services, depending on what each subscriber has subscribed to previously, and those services are provided to each subscriber by dynamically allocating resources based on the service level, bronze, silver, or gold, with different types of performance and different service agreements based on contract, legal compliance, and so on.

In addition to what Tina said, I just wanted to share with everyone how this scenario came about. I wish I could say that we conjured up a scenario that exactly matched the realization of the building blocks; it was actually an iterative exercise. For each building block that Tina walked through in the framework on the previous slide, the way we went about it was: okay, let us see, where could the virtualization manager be used? Where could the configuration manager be used? And is there a real-life aspect of the scenario, at least from an application perspective, where we can highlight that? So what you will find in the paper is the scenario itself, but in addition you will also find how and where the building blocks Tina walked through have been applied or can be applied. This is why you see multiple levels, such as the gold consumer, and why you see regional content, like EMEA versus other regions, where there could be compliance laws and so on. We also tried to address availability, which is why you see a primary and a secondary provider for IaaS, as well as the technology providers that can serve as a fallback.
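A toy rendering of this motor cars scenario, tiered content plus primary/secondary failover at the integrator, might look like the sketch below. The tier-to-content mapping and the failover rule are assumptions made for the example, not details mandated by the standard.

```python
# Content each subscription level receives; the items are invented examples.
TIER_CONTENT = {
    "gold":   ["live-stream", "statistics", "analysis"],
    "silver": ["statistics", "analysis"],
    "bronze": ["statistics"],
}

class Provider:
    """A content/IaaS provider that may be unavailable."""
    def __init__(self, name, available=True):
        self.name = name
        self.available = available

    def fetch(self, item):
        if not self.available:
            raise ConnectionError(self.name)
        return f"{item} from {self.name}"

class ServiceIntegrator:
    """Serves tier-appropriate content, falling back from the primary
    provider to the secondary when a fetch fails."""
    def __init__(self, primary, secondary):
        self.providers = [primary, secondary]

    def serve(self, tier):
        results = []
        for item in TIER_CONTENT[tier]:
            for provider in self.providers:
                try:
                    results.append(provider.fetch(item))
                    break
                except ConnectionError:
                    continue
        return results

integrator = ServiceIntegrator(Provider("primary", available=False),
                               Provider("secondary"))
```

With the primary down, every request is transparently satisfied by the secondary, which is the availability property the primary/secondary IaaS providers in the diagram are there to illustrate.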
And then, going back to the roots of SOCCI, to consume services in the strictest sense of the word, you have the content provisioning service, which is what you see in the bottom and top left. So I just wanted to highlight those aspects and emphasize that we have made a sincere effort not just to call out the technical and architectural aspects of what SOCCI is, but also to ensure that there is an application in a business scenario. Hopefully this will facilitate your application of the standard in your environment. And there is multi-tenancy: here you see a service integrator accommodating multiple tenants. Think of different auto racing companies provisioning videos and using the same environment. Such characteristics can be applied in other scenarios, in other industries as well, which we hope will further facilitate your application of the standard. Which is a good segue: our first challenge was why we need the standard, and I spoke to that earlier. You saw what the standard is, which is great, but now you may be asking: so what? How do we use this? So this slide really speaks to the steps you can take to apply the standard in your environment. For starters, it embraces the foundational principles of service orientation, and by the way, in the paper you will see a section that talks about the basic SOA characteristics, you will see the NIST characteristics for cloud highlighted, and then you will see a section that brings the two together, kind of a one-plus-one-equals-three paradigm, where you will see synergies that cannot be realized without the coexistence of both. You can judiciously apply and extend the traditional environment and enable the provisioning of infrastructure in a service-oriented fashion in the cloud. That is to say, it is not that cloud is the answer for all infrastructure to be provided as a service.
We recognize that there may be a healthy balance that needs to be struck, so you can apply the standard to extend the adoption in that manner. As Tina showed, what you think about the cloud, how you perceive it, really depends on who you are, and by that I mean what your role is. That is why, in the diagram Tina spoke to earlier, you see different viewpoints on the periphery. We have not really gone into the detail in this deck, but in the paper you will see the viewpoints themselves detailed out. So, depending on who you are or what your role is in the enterprise, you can apply that particular viewpoint using the standard in your environment. The other piece is that today you may have an architecture solution in place, or you may be in a position where you are about to implement a cloud-based approach in the enterprise. Either way, the list of building blocks, the business and the operational ones that Tina walked through, can serve as a good checklist, at the very least. You can validate: do I have the virtualization manager? Who is going to do the metering? Is there a billing component, and how is that being handled? What this promotes is a way to ensure that you have a well-defined, finite set of building blocks, which will better enable you to do it right. Now, in the event you already have a solution deployed in the cloud, that's fine; there again you can use this as a validation list to see whether you have all those pieces in place. I would assert that if you are missing some of these foundational SOCCI building blocks, you will encounter some challenges, if not now then in the future; there are certain areas that may not be as effectively addressed. It may not even be the same list, but if you ensure that the functionality Tina spoke to for each building block is manifesting itself in your architecture, that will be a good validation.
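This checklist use of the standard can be mechanized trivially. In the sketch below, the building-block list follows the managers named in this webinar; the definitive set and their exact names are in the published standard, so treat this list as illustrative.

```python
# Building blocks as named in this webinar; consult the published SOCCI
# standard for the authoritative list.
SOCCI_BUILDING_BLOCKS = {
    "location manager", "metering manager", "billing manager",
    "virtualization manager", "provisioning manager",
    "monitoring and event manager", "capacity and performance manager",
    "configuration manager",
}

def validate_architecture(deployed_components):
    """Return the SOCCI building blocks missing from a deployed solution."""
    deployed = {c.lower() for c in deployed_components}
    return sorted(SOCCI_BUILDING_BLOCKS - deployed)

# Example: an architecture with only three of the building blocks in place.
gaps = validate_architecture([
    "Virtualization Manager", "Provisioning Manager", "Metering Manager",
])
```

The returned gaps (here, billing, location, monitoring, capacity/performance, and configuration management) are exactly the questions from the checklist above: who does the billing, who tracks configurations, and so on.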
And then there are different models for deploying this: it could be private, public, or hybrid, and you can apply the standard in all cases. We have really not localized it to any particular implementation. In fact, we recognize that the business and technical needs will drive the type of environment that is best suited for you, so the standard is agnostic to that and applies in all the scenarios. And then I spoke to the business scenario. Now, Tina had mentioned this, so I am just calling it out: we recognize the need for governance. In fact, you may find a passing reference to governance in the paper, and we almost started this as SOCCI governance, but it didn't take long to figure out that it really needs to be done at the cloud level. I would reiterate the call to everyone interested who is attending this webinar, and to others listening to the recording later: if you are interested in the cloud governance space, which Tina and I, and certainly The Open Group, believe is one area where we need to detail things out and add some clarity, please feel free to join the project and contribute your thoughts. So this slide is taking you behind the scenes, and in the event you are on other projects that are in flight, maybe this would be useful; at least we are sharing our experience. You are seeing the end product, but here are the different steps we went through. First off, we asked the "so what?" question: what if we brought SOA and cloud together? That was step one. The next step was to identify the building blocks, and then we took the step of ensuring that the SOCCI framework is actually in alignment with both the SOA reference architecture that has been published and the cloud reference architecture, which is an in-flight project.
So we had to answer the question, not just ourselves but also working with the SOA and cloud reference architecture projects: where do SOCCI and its building blocks fit in those architectural layers? That was actually a very critical step. Step four was the Motor Cars business scenario, and step five, as Tina was talking about earlier, was recognizing the fact that it's not just SOCCI; there are security and other areas being addressed across the Open Group, which you will find detailed in the SOA and cloud work group project slide that Chris walked through earlier. So that was identifying the connection points. These are basically references: you will find the actual publication, and you can download the SOCCI framework itself. There is a press release and related blog posts, and yes, shamelessly, Tina and I are putting in a plug for the governance project; that is what you see at the end. We really think this is an area that needs to evolve. There is good content that has been captured, but a lot more work remains to be done, so we certainly look forward to your active participation in, and contribution to, the cloud governance project. If you reach out to us, we also came up with a list of areas that could be expanded upon. We did ask ourselves the question at the end: okay, this is SOCCI; what is SOCCI++? What is SOCCI 2.0, so to speak? What other areas could be elaborated, and so on. We have compiled that list and would be more than happy to share it if any of you want to work with us to take the initiative to the next level. But before opening it up for questions, let me check with Tina. Was there anything else you wanted to add before we jump into Q&A, Tina? Just to emphasize one thing about governance: in the current SOCCI paper we do have a very small write-up trying to describe governance, because governance is one of those cross-cutting discipline areas, like security.
It really is not limited just to infrastructure or SOCCI per se, but it is so important. The reason I brought it out is that virtualization, from the subscriber perspective, is providing this product-agnostic layer. So how will governance be able to help decouple the technology from the product, track the records, especially the SLAs, including security and so on, and provide somewhat centralized visibility to the user, whether by reporting or dashboards? I would stress that we are not trying to take a shortcut by not describing this in our paper. Because of its importance, we are trying to make it a separate study group so we can drill further down, and as Nathan said, we both strongly welcome anyone participating in today's call to join our working sessions and provide that level of detail in the governance area for cloud computing. So that's what I wanted to say. All right, thanks Nathan. Thanks Tina. This is Simon. I see some questions in the chat room; let me just scroll up to them. This is from Khalid Dervesh, and he is wondering what would be the implications of leaving cloud computing and SOA as individual frameworks, each having its own principles, best practices, and so on. He explains that these topics could overlap with other paradigms: for example, you could use SOA on a non-cloud infrastructure, or similarly you could have non-SOA on a cloud infrastructure. Has anyone got any responses to that? Yeah, let me speak to that, and I would request Tina to weigh in as well. Absolutely, we are not suggesting that the SOA and cloud frameworks cannot, or should not, exist on their own. However, as we are looking at this slide, I will point Khalid and others to the second blog post, which is the telltale signs of SOA evolving to cloud. I am doing that because you will see that the fundamental principles of cloud really have stemmed from what SOA incorporated.
It behooves us to at least see what synergies could be realized, and if there are some, why not capture them? Why leave it alone? We are not suggesting that they shouldn't be used in isolation; I can see instances where they could be. At the same time, we should recognize, acknowledge, and capitalize on the synergies that we can gain as enterprises in this industry, so that we don't lose out on the benefits. That's what I would say, Khalid. Tina, did you have anything, go ahead. No, I think that covers it; that's what I felt as well. That's excellent. I think if everyone is in agreement, that's probably a good time to end this presentation. I would just like to say thank you to everyone who participated, thank you to all three presenters, and ask: if you've got any final comments, folks, would you like to share them? My final comment really is that this is the first technical standard of its kind; even after publication we have not seen anything comparable come across, so we really believe the Open Group has filled a gap that has been in existence. As other standards were evolving, this gap became very apparent, and the Open Group has certainly addressed it. However, the standard can only be as good as its level of adoption, so consider this a call for serious consideration by the enterprises represented on this call, and others, to see if and where you can adopt this standard. Unless we actually implement it in the enterprise, it cannot grow. The whole idea is for the Open Group, with that extended community, to continuously evolve the standard, so your feedback from applying it would be fantastic. Tina? No, that's well said. I want to thank everyone for participating in today's call, and I also want to thank Chris and Simon. Okay, thank you once again, everyone, and we look forward to seeing you at another event in the future.
Thank you and goodbye.