You really are not giving it all the compute — in the olden days, compute was given for as much as you could use. But in this model, you are actually trying to balance your infrastructure for optimal provisioning. And there is this DevOps movement, which primarily looks at an SLO-based model of collaboration. How did that work for you? Application writers were always assuming they had everything on the machine — if I go back to my days of computing, I always assumed the whole machine was mine. But that world has gone away. With cloud, the model is that you get what you get, and you have to live within it. So how does that phenomenon of DevOps work for you?

Yeah, so basically what happens is that the cloud world affects most of our applications. It self-manages most of the time — at least capacity adds and new application rollouts are self-managed. But we have automation built around that, and that is where our DevOps folks help us out, all the way from release engineering to the business process. We are building the workflow around it. Say I have an application that needs maybe 50 compute nodes with a load balancer and a firewall. That might be okay until Q3, which is not a high-traffic time for us. But in Q4, of course, we hit our peak. And it is not only the holiday season that is high-traffic for us; there have been other occasions where we exceeded our transaction records.

Hi, I'm Hains and I work for Accenture. I have a question regarding the technical stack you have shown. For the orchestration, the capacity management and even the dashboard, are you using completely in-house development? Yeah, our orchestration agent is built in-house, but we are going to be using Heat.
So that's the template engine we'll be using for the CloudFormation-style part. And what are the normal orchestration activities you do — and why didn't you go for Puppet or Chef? I'm sorry? Why didn't you go for Puppet or Chef? We do use Puppet, actually — Puppet for configuration management. This is for the template. Think of it as managing the topology of your application: I need ten compute nodes, two for MySQL, maybe five or six for the application server, two for MQ, and so on. That template is what you define for each environment as you move through your pipeline. What happens today? You put everything into one box, and that's where most of the complexity arises in the production world. You point everything at localhost and it will work — it will work all the way down to pre-production, but it will not work in production, because production is distributed. That simple use case breaks because you need a similar environment at every stage. Pre-production, or load and performance testing, may need only 50 compute nodes, but production is 500 compute nodes. That's why we want to use the same template across the board: it solves many of the process problems we have today.

We are running out of time; we'll take the next questions during lunch, probably. A short question: which operating systems do you provision in the VMs? I'm sorry? Which operating systems do you provision — the guest OS? Okay, so it's Red Hat Enterprise Linux 6.3 and 5.7. Mostly 6.3, but we support 5.7 as well — some of the applications need 5.7. And we'll be moving to Ubuntu. Okay, yeah. Thanks, Anand. I'm sure all those who have more questions are going to trouble him during lunch; we really are running out of time. The next talk will be from Ritesh.
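The environment-template idea from the talk above — one application topology, reused from dev through production at different scales — can be sketched in plain Python. This is a minimal illustration, not the speaker's actual Heat templates; the role names, image name and scale factors are all made up.

```python
# Sketch: one application topology, scaled per environment.
# In practice a Heat (HOT/CloudFormation-style) template plays this
# role; here a plain dict stands in so the idea runs anywhere.

TOPOLOGY = {
    "app_server": {"image": "rhel-6.3", "base_count": 5},
    "mysql":      {"image": "rhel-6.3", "base_count": 2},
    "mq":         {"image": "rhel-6.3", "base_count": 2},
}

# Per-environment scale factors: same template, different sizes.
ENV_SCALE = {"dev": 1, "preprod": 5, "prod": 50}

def render(env):
    """Expand the shared topology for one environment."""
    scale = ENV_SCALE[env]
    return {role: {"image": spec["image"],
                   "count": spec["base_count"] * scale}
            for role, spec in TOPOLOGY.items()}

if __name__ == "__main__":
    for env in ENV_SCALE:
        total = sum(s["count"] for s in render(env).values())
        print(env, total)
```

The point of the design is that the template is written once and only the scale parameter changes per stage, so "works on localhost, breaks in production" differences are squeezed out of the pipeline.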
Ritesh works for Ericsson and he'll be talking about what they are doing with OpenStack at Ericsson. I work with Ericsson as a cloud architect, and we belong to an innovations team — thank you — where we carry out innovation across Ericsson, across 178 countries and around 480,000 employees. My colleague is here as well, so on why we chose OpenStack for Ericsson and what we are doing with it, I would like to invite my colleague, who is the lead architect for this. Chandra, please. Then I will join in and tell you about the technologies we are working on with OpenStack.

I'll quickly go through some parts of the presentation and then the rest will be carried on by Ritesh. This is a brief agenda of what we are looking at: some enterprise challenges and opportunities that we are trying to address with OpenStack; some of our interests and goals, and how OpenStack is a solution to these challenges; then our implementations, building a cloud ecosystem, the hurdles we faced, and of course the research areas we are trying to get into. As we all know, IT costs are always going up, and there is always hardware cost involved, plus data center space and green-energy concerns. These are some of the very common challenges faced by the industry today. So obviously we have the opportunity of a virtualized infrastructure, and we are also trying to come up with a pay-as-you-go model, which he will explain. Then of course there is platform deployment on top of that, software as a service, and automating all the IT operations. These are some of the broad opportunities we are trying to cover.
So our interest is mostly in building a complete cloud ecosystem with OpenStack, using the latest version, Folsom, and building on top of it a platform that can be used by a wide variety of enterprises. We are also productizing some applications on the cloud that can be used by different enterprise verticals like retail, travel, and especially the telecom sector, since that is our domain. Our focus is mostly on some of the research topics, and on coming up with products that can be used by small and medium enterprises. And of course there is a lot more for the telecom operators that we keep building on. This is the broad architecture of our current cloud; I'll hand over to Ritesh after this and he can explain. Thank you.

I have been looking at how we can further integrate OpenStack into our existing environment. When we talk with our customers, there has always been a debate on why we should move to a cloud at all. At present, at Ericsson, we have around 1,500 cores running on OpenStack, and we are targeting around 2,000 virtual machines within Ericsson. We support our customers on top of this cloud: they can provision machines for their DevOps environments, and all of that development can be done on top of OpenStack. So we have the infrastructure. At present we are not using any block-storage technology like Cinder, which the new OpenStack release provides to make use of the different proprietary storage solutions. Then we have the OpenStack cloud management layer, where we use Nova for computation, nova-network for networking, and Keystone for security, identity and access management — all of this is in place.
On top of that we have our own self-service portal, which is like an orchestration layer covering everything from order management to order fulfillment and the billing parts. Then we have our hosted apps, which we provide within Ericsson and also to our customers — the applications a small, medium or large enterprise company would need to work with. So that is what we were doing with OpenStack. We have Folsom in place in our data centers. Then there are enhancements in storage: at present we are using network file systems, not any proprietary storage solution. We use MooseFS for providing highly available storage. And there are enhancements in networking: at present we use nova-network, in two cases. For Ericsson internal employees, we work on the basis of VLANs: we provision different departments on separate VLANs, and they can use the machines within their own departments. For the customers' part, we use nova-network with the different technologies available in the market, like MPLS, so that small and medium enterprise companies who are on MPLS or VPNs can connect directly to the OpenStack clouds. Then we have launched IaaS within Ericsson India. At present it supports around 10,000 people within Ericsson India, and we are targeting one lakh people across Ericsson. Then we have the self-service portal in place, which I mentioned before: it provides all the functionality OpenStack provides, and on top of that we provide the orchestration layer where you can do all these things. We also have a product which is helpful for enterprise companies — we have productized some of the apps which can be used on top of this OpenStack.
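The VLAN-per-department scheme just described can be sketched as a small allocator. This is only an illustration of the separation model — the department names and the VLAN ID range are invented, and nova-network's real VlanManager does considerably more.

```python
# Sketch: hand out one VLAN per department from a fixed range, the
# way a VLAN-based network manager keeps tenants separated at L2.

class VlanAllocator:
    def __init__(self, first=100, last=199):
        self.free = list(range(first, last + 1))
        self.assigned = {}  # department -> vlan id

    def vlan_for(self, department):
        """Return the department's VLAN, allocating one if needed."""
        if department not in self.assigned:
            if not self.free:
                raise RuntimeError("VLAN range exhausted")
            self.assigned[department] = self.free.pop(0)
        return self.assigned[department]

alloc = VlanAllocator()
print(alloc.vlan_for("rnd"))    # first department gets 100
print(alloc.vlan_for("sales"))  # next gets 101
print(alloc.vlan_for("rnd"))    # repeat lookup is stable: 100
```

Every VM a department launches then lands on that department's VLAN, which is what gives the internal tenants their isolation.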
In this we provide effective ordering, reporting, provisioning and authentication. The PaaS part is still under future expansion, but as I mentioned, we are evaluating a number of PaaS vendors like Cloud Foundry, OpenShift and Stackato. Below that is the IaaS OpenStack management layer. When we were doing this deployment we faced a lot of hurdles. Take high availability of the services: there was always the issue that if we offer any service, it should be highly available across the board. We faced that issue, and we have a solution for it in place. Then there is networking, where we want load-balancing mechanisms that can be effectively integrated into OpenStack; the community is working hard, but it doesn't always have time to address these issues, so they arrive in the newer releases. At present we have our own solution for the load-balancing and routing features in the networking part. In the scheduler, it is about prioritization of services and how we can use the scheduler effectively, so we have our own algorithms in place for how scheduling works within OpenStack. Then storage: how can my data be more highly available, will my data be available at all times, what backup and recovery solutions should I have in place — and security and access management. When I talk about security, it is the first thing that comes up when we offer these services to our customers, or even internally. Whenever we say anything about cloud — "we are selling you cloud" — everyone asks: what about security? So we do take care of security and all those features.
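The speaker doesn't say what their custom scheduling algorithm is, so as a stand-in, here is a minimal sketch of a prioritizing scheduler of the general kind Nova allows you to plug in — this one simply places an instance on the host with the most free RAM. The host names, the RAM-only weighting and the data shapes are all assumptions for illustration.

```python
# Sketch: place an instance on the host with the most free RAM —
# a minimal stand-in for a custom Nova scheduler, not Ericsson's.

def schedule(hosts, ram_needed):
    """Return the best host that can fit the request, or None."""
    fitting = [h for h in hosts if h["free_ram_mb"] >= ram_needed]
    if not fitting:
        return None
    best = max(fitting, key=lambda h: h["free_ram_mb"])
    best["free_ram_mb"] -= ram_needed  # claim the resources
    return best["name"]

hosts = [{"name": "compute-1", "free_ram_mb": 4096},
         {"name": "compute-2", "free_ram_mb": 8192}]
print(schedule(hosts, 2048))  # compute-2 has the most headroom
print(schedule(hosts, 2048))  # compute-2 again: 6144 MB still beats 4096
```

A real filter-and-weigh scheduler layers many such criteria (CPU, disk, affinity), but the filter-then-pick-best shape is the same.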
Then we did a lot of integrations. As I said, we are evaluating cloud PaaS solutions, and for billing we have a small solution in place, along with small meters. We are now also evaluating the metering part that OpenStack has in place. Here is a small solution we introduced for the high-availability part: we wanted to make the services highly available, so we took a small stack of services — nova-api, Keystone and Glance — which can be made highly available using Keepalived and HAProxy. It depends on the topology: we created a lot of topologies, and reading the Rackspace reference architectures gave me a lot of knowledge about how OpenStack can be deployed in different data centers, and what architectures I can build if I want an availability solution between two different nodes. We covered all these topologies and came out with one in which we do load balancing while also keeping the services highly available. We also have highly available RabbitMQ and MySQL. For RabbitMQ we use the high-availability setup proposed by RabbitMQ itself: mirroring the queues, making them highly available and load balancing across them. Then the monitoring part: for any cloud deployment, we need something to monitor it effectively, with alarms and so on. So we have a monitoring solution in place: Keystone, RabbitMQ, the DHCP servers, the Nova services and Glance are now effectively monitored.
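The active/standby behaviour that Keepalived and HAProxy give those API services can be sketched as a toy backend selector. The endpoint names and the health-check dictionary below are hypothetical; real HAProxy runs its own periodic checks and real Keepalived moves a virtual IP between load-balancer nodes.

```python
# Sketch: pick a healthy backend for an API service, the way an
# HAProxy frontend does after health checks mark servers up/down.

def pick_backend(backends, health):
    """Return the first backend whose health check passes."""
    for host in backends:
        if health.get(host, False):
            return host
    raise RuntimeError("no healthy backend for this service")

nova_api = ["nova-api-1:8774", "nova-api-2:8774"]

# Both nodes up: traffic can go to the first (or be round-robined).
print(pick_backend(nova_api, {"nova-api-1:8774": True,
                              "nova-api-2:8774": True}))

# Node 1 fails its check: requests shift to node 2 transparently.
print(pick_backend(nova_api, {"nova-api-1:8774": False,
                              "nova-api-2:8774": True}))
```

Because clients only ever see the one front-end address, a node failure becomes a routing change rather than an outage.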
Other research areas we are targeting in this cloud space: big data analysis, machine learning, data mining, and making the solution more scalable; cloud data security; and trusted computing, which is the hottest topic in security right now — how we can run trusted computing on our workloads. If I talk about scalability, it is not just about adding nodes; we need more security features like trusted computing built in. I think I'm finished with this. It was a very short deck — I didn't want to bore you all. So this is what we offer to our trusted customers, like the operators, and we enable them to provide the services to their enterprise customers.

What is the product you mentioned? Sorry? What is the product you mentioned? Give me one second. We have a whole cloud platform which we offer to our operators, with services like chat and collaboration, mail and messaging solutions, and even some of the telecom applications — you might have heard of IMS. We ported IMS to run virtualized, and we offer these solutions to the enterprise customers of the operators. Ericsson started working on this around the time we demonstrated the whole solution at Mobile World Congress — you may have heard of Mobile World Congress in Barcelona. That is where we hosted and demonstrated the solution, and there were around seven cloud solutions within Ericsson being demonstrated there; this was the one that earned around 70 leads.

I have a question: how reliable is the high availability with RabbitMQ, with queue mirroring and so on? And does it scale — does it scale under load?
The RabbitMQ mirrored-queue setup is quite a good solution. It mirrors all your queues to slave servers: you keep adding RabbitMQ nodes and making them slaves, and if your master goes down, one of the slaves is promoted to master. So only the master is active, and everyone else stands by. Yes, it's essentially just making your queues available on all the nodes. You need this because queues in RabbitMQ are perishable by default — they are not persistent. When we mirror the queues, we have to make the queues and the messages persistent across the system.

So there's a DNS component involved as well? Yeah, we can implement a reverse proxy or something of that sort to provide the load-balancing features in a proper style.

And I'm curious — a few slides earlier you had shown challenges along various axes, and on the storage axis you had mentioned backup and replication. Could you expand on that a little, so we can appreciate what's going on? And is there some NetApp storage in front of it, or not? Sarj is already there and laughing — we already had a talk about this; let's discuss it outside. You were probably talking offline. I will tell you: for the storage part, at present we haven't done much. We wanted to make our data highly available, that's it, because there was no provision for making instances highly available or providing a shared solution for the instances hosted on this OpenStack. So at present we use a network file system, but now that Cinder has come in and we are on Folsom, our focus is to make proprietary storage solutions part of this.
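The promotion behaviour described for mirrored queues can be sketched in a few lines. This is a toy model of one queue's master/slave set, not RabbitMQ's actual implementation; node names are invented, and "oldest slave is promoted" mirrors the documented HA behaviour of the era.

```python
# Sketch: a mirrored queue with one master and N slaves. If the
# master dies, the longest-standing slave is promoted, as the
# RabbitMQ mirrored-queue design describes.

class MirroredQueue:
    def __init__(self, nodes):
        self.master, *self.slaves = nodes

    def node_failed(self, node):
        if node in self.slaves:
            # Losing a slave just shrinks the mirror set.
            self.slaves.remove(node)
        elif node == self.master:
            if not self.slaves:
                raise RuntimeError("queue lost: no mirror left")
            # Promote the oldest slave to be the new master.
            self.master = self.slaves.pop(0)

q = MirroredQueue(["rabbit-1", "rabbit-2", "rabbit-3"])
q.node_failed("rabbit-1")  # master goes down
print(q.master)            # rabbit-2 takes over
```

As the speaker notes, mirroring alone is not enough: queues and messages must also be declared durable/persistent, otherwise the promoted mirror has nothing to serve.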
So we are evaluating a number of solutions for it. Hey, I have a question on the networking part: what challenges did you really face on networking? I see a lot of protocols like VXLAN, NVGRE and so on coming up, right? So what really is the problem you faced? Okay, for the networking part, at present we are not using Quantum. We use the simple VLAN manager, and we provide VLAN connectivity to our enterprise customers directly, over whichever technology they use, such as MPLS. We did a lot of tweaks in the code to provide these services, because nova-network makes its own gateway and everything, and when providing MPLS connectivity to our customers, we need to make tweaks in the gateways and firewalls and then provide the edge routers. Does that answer your question? So, I mean, you got your — this is integrated into our core networks, and from there we offer it to our edge routers. Okay. That is the thing: every VM is given a different gateway, an outside gateway, which then redirects to the edge routers and onwards. You mean to say your whole cloud networking is based on MPLS-based connectivity? Yeah, for the customers' part we do it over MPLS at present — MPLS, and we have also provided SSL VPN solutions. SSL VPN, okay.

What you explained here is only about the software, right? Software in the sense of OpenStack, or your own applications and the required things. Is there any hurdle you faced while updating the firmware on the physical servers? For example, you may need to update the BIOS on a physical server, or some RAID controller firmware.
So, any such hurdles you faced in addressing the physical components — firmware on any component of the server? I would say OpenStack is quite flexible in this case: even a normal server available in the market — any x86 64-bit architecture — can run OpenStack. We already had all our servers on x86 64-bit architecture, so we didn't face problems as such, but we did address some issues on the storage-availability side, where we dealt with RAID and related things. Thank you. Any more questions? Okay, thank you, everyone.

So, the first slide: this is basically the cloud definition by NIST, which says what a good cloud is. The cloud paradigm changes the model from having everything for yourself to everything being shared — and when everything is shared, what are the changes you need, and what are the layers that come in? You have different deployment models — public, private and hybrid clouds — and you have the various service models: platform as a service, infrastructure as a service and software as a service. The characteristics you want in a cloud are primarily on-demand self-service, elastic resource pooling, broad network access, measured service and rapid elasticity. These are the things you need, and this is by and large the cloud definition. Why did I show this? Because it is one standard form of cloud. Usually when people ask what cloud is, we get different answers, and this is one consistent definition. OpenStack seems to be working on that model. This is the Folsom OpenStack architecture, which has various services — I think the slide is not very clear. You have the OpenStack Horizon portal, and you have Cinder, which is the storage part.
You have Quantum, which primarily does the network management. You have Nova, which handles compute management. You have Keystone, which covers the identity and security part, and there are other services coming in. If you look at OpenStack, everything started with Nova: nova-network is bifurcating and going off as Quantum, Cinder was part of nova-volume and is now separating out, and things keep evolving. If you look at the core part of OpenStack — people ask what is core — you also have Glance and the Swift repository. These are things which are evolving: projects keep splitting out of Nova and growing from there. The other initiatives happening there are Ceilometer, which is by and large the metering solution, and Heat, which is the orchestration engine with the CloudFormation-style APIs, and there are others. You are seeing interest from HPC trying to adapt to cloud; you are seeing interest from hyperscale trying to adapt to cloud. Everybody is trying to get to cloud in some form. So what I will try to talk about is Nova specifically, and physicalization plus virtualization: what forms exist in the market and how that will change. That's my topic of interest. So if you look at Nova — this is the Nova drill-down — Nova basically has a nova-api.
It has nova-compute and the compute controller. The nova-api and the compute controller talk through the nova-scheduler and the database, and then you have drivers: given any form of compute — whether it's ESX, whether it's a Unix or a Linux or any other kind of system — you can plug that compute in underneath Nova. The way you do it is by writing a driver, and that driver manages the last mile. In a lot of ways, OpenStack is a pluggable architecture, as we said. There are multiple points of plug-in: one is the API level, where you can extend; the second is the message bus, where you can start attaching things; and last but not least are the various control points and drivers. If you look at Quantum, you can write different L2 drivers and various other drivers. In the same way, Nova has a compute service, and underneath it you can plug in different drivers. All of these drivers are loaded at runtime, based on the configuration of Nova; that's how the driver gets configured, and each of these can connect to the message bus. That's primarily the architecture, and if you want to keep yourself isolated, the plug-in model and the driver model give you the flexibility to plug in any kind of compute you want.

So let me look at compute models. You have physical servers, you have virtualized servers and you have other models. I've listed a few: the virtualized servers are ESX, Hyper-V, KVM and XenServer. The other models are either partitions — LPARs from some companies and vPars from others — or container-based virtualization, which is primarily a Linux term, namely LXC or OpenVZ. So these are the models you see in compute.
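The runtime driver loading described above can be sketched like this. The class and config-key names are illustrative; Nova's real ComputeDriver interface is much larger and the driver path comes from the service's configuration file.

```python
# Sketch: load a compute driver by configured name at runtime, the
# way nova-compute picks its virt driver from configuration. The
# API above the driver stays fixed; only the last mile changes.

class ComputeDriver:
    """Minimal last-mile interface every driver must implement."""
    def spawn(self, name):
        raise NotImplementedError

class FakeKvmDriver(ComputeDriver):
    def spawn(self, name):
        return f"kvm instance {name} running"

class FakeEsxDriver(ComputeDriver):
    def spawn(self, name):
        return f"esx instance {name} running"

DRIVERS = {"kvm": FakeKvmDriver, "esx": FakeEsxDriver}

def load_driver(conf):
    """One configuration entry decides which driver is instantiated."""
    return DRIVERS[conf["compute_driver"]]()

driver = load_driver({"compute_driver": "kvm"})
print(driver.spawn("vm-01"))  # kvm instance vm-01 running
```

Supporting a new hypervisor then means writing one new subclass, not touching the API, the scheduler or the message bus.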
If you go back in time, physicalization was all about physical servers being fully consumed by the app. People used to say: that's the database server, that's the SAP server, and so on. That is how we used to address servers, and that's where it started. Each of those servers was fully dedicated, and they were big monoliths. A mainframe, or any of those models, was a monolith dissected into small partitions and so on. Later came scale-out and scale-in computing and various other forms. That's the physical side of computing. If you look at workloads, there are workloads which need low latency, strict response times, high computing. For example, Hadoop clusters and HPC clusters still need physical hardware, okay? They don't go into any model which is virtualized. What is the drawback of virtualization? Virtualization gives you a lot of optimization: it packs a lot of VMs onto the same machine, and you can use the flavors you want, but it's not near-physical. Your I/O is not near-physical, your storage bandwidth is not near-physical, your compute is not near-physical, because somebody is interpreting it for you — and hence you have a challenge of latency and a few other problems that come with that. Now, having said that these are the problems with virtualization, applications which can tolerate some of these constraints can adopt virtualization, and they work together very well. The advent of virtualization has brought a lot of optimization to the industry, and you can have the abstractions of a true cloud.
You can have a resource pool collected together: instead of a single host, you can build a cluster and start placing VMs on it, and you get a lot of models around that. What virtualization has also pushed us towards — we always had a model where your OS resides in one place, everything is structured and tied down. Virtualization has pushed us into something like a shared-nothing model, where nothing is tied to one machine: you can migrate a VM from any host to any cluster, you can do storage live migration and various other migrations. The shared-nothing model gets you portability of a VM from anywhere to anywhere, your whole model of HA becomes completely dynamic, and it has changed the paradigm out there. Now, having said all this, the physical model is also evolving. It is not stagnant, it's not dead — even today we want our database servers on physical, we want other things on physical. But how is physical shaping up? With the advent of mobile phones and various other things, processors are getting more power-efficient, and one of the key challenges of any data center is OPEX — power seems to be the key concern, and that's how it's shaping. So you will see a lot of servers coming out for a specific type of workload. When I say a specific type of workload, you might even see a specialized processor for a web type of workload or a search type of workload — a lot of processors tuned for the workload they run — consuming one fifth or one tenth of the power, which virtualization otherwise cannot achieve.
If you have physical hardware, even with power-cycle management, you cannot control power to the granularity you would want based on whether one VM or twenty VMs are running. This is what is shaping physicalization, an anti-paradigm to virtualization: a lot of physical servers are being built for the type of workload they run, and this is a paradigm shift you are seeing. Physicalization is a reinvigorated phenomenon trying to get in there, and if you really want to play around with something, I think TryStack offers these physical servers, small and nimble, which you can use for the type of workload you want, okay? There are a lot of scale-out computers coming out which try to give you a processor made for what your workload is, rather than you buying a huge server and then asking how much of that capacity you can actually use, okay? These are the paradigms you see changing: while everybody is betting on virtualization, there is a reinvigorated interest in physicalization, and the topic of choice is to look at these two paradigms and how they play out. And there is a third paradigm, which is not virtualized but near-physical — something like LXC or container-based virtualization — which can give you pass-through I/O and pass-through access to certain things. Even in virtualization, some vendors give you pass-through in that model, so you can get pass-through I/O and pass-through disk access and so on. But the competition to go near-physical, and how you go physical, is a model that is evolving, and you'll have to see how these things shape up in the future. I believe this is an area of great interest — to see which one wins — and I believe there is a market for all, but that's where the
physicalization and virtualization paradigms will actually shape how compute evolves in the future, okay? So that's the introduction to what's happening in the market, and my next step is to show how Nova is architected and how Nova can be looked at. In this model, Nova has different drivers, and if you want to support different hypervisors: for a KVM host you write a libvirt driver, and that gets KVM adopted into the OpenStack community while your API remains the same. In the same way, when you want a VMware driver, you use the VI SDK APIs and write the last mile — the driver that helps ESX adapt to OpenStack. And if you want a Hyper-V host, you write a WMI driver, and that WMI driver gets those technologies enabled. Then you have the OpenStack model with various VMs, and you can have all of this integrated into a single cloud. Here I'm trying to show Xen, Hyper-V, KVM and ESX in one model, and this is something we have working — it's available in Grizzly. The only thing which was not there before Folsom was ESX, and that's also available as we speak. The one difference between the hypervisor models: ESXi doesn't let you land anything on the hypervisor itself, which is why you need a VM running on top of ESXi to act as your compute node running nova-compute. In all other cases, the compute node can be installed within the hypervisor host so that it runs inside it. Cloudbase is the company that offers this for Hyper-V, KVM is ready to go, and you also have XenServer in the same model. OpenStack also has support for LXC.
OpenVZ is a blueprint which is available in Grizzly. So that covers the entire gamut of the various options. Now outside of this, there is also bare-metal deployment, which is an area OpenStack is very interested in, and there are various blueprints and various tools which help you do bare-metal deployment. Some of the bare-metal deployment choices are: Crowbar, Chef, Puppet, MAAS (Metal as a Service), and Juju, and all of these are solutions trying to give you bare-metal deployment. However, there is a blueprint in OpenStack for basic bare-metal deployment which lays out what the technology will do and how Nova will evolve towards it; I'll talk about it in a few seconds. Among these technologies you also have Dodai-Deploy, which uses a Puppet master to build Hadoop or HPC clusters in one shot. So those are models you are seeing, and all of this is primarily to enable going beyond IaaS and to look at certain PaaS layers, so that you can stitch together a cluster as a whole rather than stopping at infrastructure as a service.

And this is the list of features, across each of these hypervisors, which are supported. If you look at it, it covers everything from creating a VM, deleting a VM, pausing it and running it, and all of this is available in the community. You can do a snapshot, a cold migration, a live migration, Nova networks, iSCSI attach and so on. All of these features are available as we speak; the extent of support and the extent of hardware support is what differs.
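The feature matrix the talk refers to can be modeled as a simple per-hypervisor capability lookup. Note that the boolean values below are placeholders for illustration only, not the real support status of any release; the actual matrix was maintained by the OpenStack community and changed release to release.

```python
# Feature names are the ones mentioned in the talk; support values
# here are placeholders, NOT real data for any OpenStack release.
FEATURES = ["launch", "terminate", "snapshot", "cold_migration",
            "live_migration", "nova_networks", "iscsi_attach"]

MATRIX = {
    "kvm":    {f: True for f in FEATURES},
    "xen":    {f: True for f in FEATURES},
    # Example of a gap: pretend live migration is unsupported here.
    "hyperv": {**{f: True for f in FEATURES}, "live_migration": False},
}

def supports(hypervisor: str, feature: str) -> bool:
    """True if the given hypervisor driver supports the feature;
    unknown hypervisors or features default to unsupported."""
    return MATRIX.get(hypervisor, {}).get(feature, False)

print(supports("kvm", "snapshot"))
```

A lookup like this is how a scheduler or an operator dashboard would decide which operations to offer for a given compute host.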
So why do I believe OpenStack will survive? Primarily because of the driver model and the plugin model, where anyone who wants to write the driver for the last mile, for the type of hardware he wants, can do so, and that will help him. So this is the list of features it supports, and last but not least, physical bare-metal provisioning. This is a base-level architecture; most of the tools available today either use this basic technology or something close to it, there is a blueprint being worked on, and the various tools I mentioned use these same basic constructs. So what do you have? You have a bare-metal driver; you have a power-management tool, primarily IPMI-based, which can power the node on and off; and you have a pre-boot environment with some network services enabled, which you pre-boot through, and then you use the bare-metal service to pull down the image you want and install it. This is, by and large, bare metal. Whether you take Crowbar, or MAAS, or any of these, they use these basic constructs, but by and large they each have their own implementations on top, and that's how they do it.
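The bare-metal constructs just described, IPMI power control plus a network pre-boot that pulls the image down, can be sketched as an ordered list of steps. The `ipmitool chassis power` invocation shape is real, but the host, credentials, image name, and the `deploy-agent` command are all hypothetical; a real driver (Crowbar, MAAS, Nova's bare-metal blueprint) does far more than this.

```python
# Sketch of the generic bare-metal flow: power off via IPMI, let the
# node PXE-boot a deploy environment, then write the target image.

def ipmi_power(host: str, user: str, password: str, state: str) -> list[str]:
    """Build the ipmitool command line to change a node's power state."""
    assert state in ("on", "off", "cycle")
    return ["ipmitool", "-I", "lanplus", "-H", host,
            "-U", user, "-P", password, "chassis", "power", state]

def provision(node: str, image: str) -> list[list[str]]:
    """Ordered provisioning steps for one node. The PXE pre-boot itself
    happens on the network side (DHCP/TFTP) between the power steps."""
    return [
        ipmi_power(node, "admin", "secret", "off"),
        # Node is now set to PXE-boot; the pre-boot environment
        # fetches a deploy ramdisk over the network.
        ipmi_power(node, "admin", "secret", "on"),
        # Hypothetical deploy service writing the image to disk.
        ["deploy-agent", "write-image", image, "--node", node],
    ]

for cmd in provision("10.0.0.5", "ubuntu-12.04.qcow2"):
    print(" ".join(cmd))
```

The commands are built but not executed here; the point is the sequencing that every bare-metal tool shares, regardless of its own implementation on top.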
So one of the aspects of why this is really important is that, in the longer run, they want to give out physical servers as infrastructure as a service, especially with workload-aware, smaller compute nodes made for a type of workload. This model will evolve, and there is this terminology called metal as a service; you will see metal as a service emerging, and over time we'll see whether virtualization wins, which type of market wins what, and whether metal as a service and other forms of services primarily for physical models will stay. So there are these various paradigms you are seeing: one is virtualized environments, one is semi-virtualized or container-virtualized environments, and then there are physical environments, where hyperscale needs lean primarily towards physical environments, and how this will evolve is what we'll see. So this is pretty much what I had to talk about, and this was all about doing an intro of the paradigms you see in physical and virtual. We can take questions.

Question about this database: you said that the database will lie on the physical infrastructure, but I just wanted to know, will this infrastructure be outside the cloud or be a part of the cloud? It can be part of the cloud, and that's where, if you look at what you're trying to do, you can have a mix of virtual and physical as a service. So when you want a service in