Okay, it's better now. So, I've been involved in getting service providers up and running, both with OpenStack and with some other technologies. With me here today is Omar Lara, a solutions architect at Canonical, who has also been delivering service providers. He created suempresa.com, arguably the first service provider based on OpenStack in the whole Latin American region. So before we get into the matter: what we want to achieve here is to explain how we have succeeded with OpenStack in service providers, how we have failed with OpenStack in service providers, and to give you an overview of how we envision the future of service providers.

Before that, let me just give you a quick overview of what Canonical is, what my company does. Canonical is the company behind Ubuntu, the most recognizable brand. Ubuntu is the number one operating system on the Linux desktop, and the number one OS in the cloud as well: around 60% of all the Linux instances spun up across the three major public clouds are Ubuntu. So we lead there. And according to the latest OpenStack user survey, which was released a few days ago, Ubuntu is also the leading OS in OpenStack: around 65% of production deployments of OpenStack are based on Ubuntu, and when we're talking about the big clouds, the ones with over 1,000 users, that is way more than the rest of the operating systems combined. So we lead in OpenStack as well. Some of our customers are up there, and as I say, Ubuntu underpins most of the OpenStack clouds. It doesn't have to be our way of doing OpenStack, but it's definitely our operating system.
It's what's underlying, what's underneath, all those clouds. You can consume Ubuntu for OpenStack in many different ways: just the packages, the way we package and build things; the Canonical distribution of OpenStack, our reference architecture; or our managed services. We'll go into the details of a couple of these a little later, because those are the ones suitable for the main theme of our conversation.

A question for you: how many service providers do you think there are in the world? About 40,000 service providers. That is data from Netcraft; the way we get this information is by tracking the traffic from servers that are exposed to the internet. There might be other servers that are not exposed to the internet, but we know that there are 40,000 points on the internet that people are getting traffic from.

This is an example of some of them. You get plenty of variety here: the big ones, the small ones, the local ones, the global ones, those that are specific to a niche, or those that are really small, you know, five servers in the garage of some guy somewhere in the world.

Another question for you: what do all these companies have in common? What we sell, when we set up as service providers, is basically three things: we sell compute, we sell storage, and we sell network to interconnect those services, in any combination; but the combinations are limited. Everything that goes around that is cosmetics: it's branding, it's wording, it's different names for the same thing. At the end of the day we're selling compute, storage and network. But guess what: if you're selling those same things, the combinations of them are limited.
They're limited. And it turns out that infrastructure as a service is a break-even business: no matter how big the pie is, no matter how fast the pie grows, you cannot get a sustainable competitive advantage by just selling infrastructure as a service, because you are selling a combination of those three things.

So how do we compete? How do we win? We have three ways of doing it. We have to take care of our economics; that is key. Some of the big players are pulling away from the public cloud because of the economics. We need to take care of innovation: those three things are still evolving, so we need to make sure that innovation is available to the end customers. And we need to pursue differentiation.

So how do we win? Let's put a bunch of things up here. On the lower part of this diagram are things that are in-house, things we fix at home without our customers knowing. On top is the perception of our customers, what our customers really see. On one side, most likely the hardware side, we'll see things that are related to hardware; on the other side, the procedures and operations that we can enhance as well.

Let's take a look at each part of the quadrant. In terms of hardware, you can aim at a cost reduction (remember the economics) of between eight and ten percent if you're using some of the latest hardware. We're looking at more density, we're looking at bigger hardware, we're looking at OCP, we're looking at different reference architectures that don't necessarily need to be Intel x86: we can build on POWER, we can build on ARM, or we can use or integrate the container story, which, again, for service providers is about the density you can get per dollar of cost per gigabyte of RAM.

So how do our customers perceive that?
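To make the density economics concrete before we go on, here is a rough back-of-the-envelope sketch in Python. Every price and RAM size in it is invented for illustration; the only point is that a denser chassis spreads fixed per-server costs over more gigabytes of RAM.

```python
def cost_per_gb_ram(server_price_usd: float, ram_gb: int) -> float:
    """Capital cost per gigabyte of RAM for a single server."""
    return server_price_usd / ram_gb

# Hypothetical numbers, for illustration only.
standard = cost_per_gb_ram(server_price_usd=6000, ram_gb=128)
dense = cost_per_gb_ram(server_price_usd=9000, ram_gb=512)

saving = 1 - dense / standard
print(f"standard: ${standard:.2f}/GB, dense: ${dense:.2f}/GB, saving: {saving:.0%}")
```

With these invented figures the denser box costs less than half as much per gigabyte of RAM, which is the kind of lever behind the overall cost reduction the talk is pointing at.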
Well, you enable choices. You create a pool of options for your customers to choose from, so that whenever they select one of your services, it matches their specific workload completely. An example of this is storage. You might not want to sell cold storage on SSD drives, because that is going to be very expensive for your customer. So you might want some storage that is fast, for transactional workloads, and some storage that is slow and reliable for cold storage: things that you store and forget about until someone, if that ever happens, asks for them.

Now let's have a look at the other side of the equation, where we are aiming at automation: efficiency, how we operate, how well our operations are crafted, how good we are at operating our cloud. We did this roadshow earlier in the year, the last time I was in Tokyo, where we travel the world showing our technology, and we got to meet very interesting people. One of the people I met was the person who set up the operations for Amazon in Asia. They started off with 340,000 physical servers. Do you know how many people were employed to operate that cluster? Five. Five people operating 340,000 servers. That is the level of automation we should be aiming at. It's difficult to get to; this is Amazon, after all. But you get the same thing with Airbnb.
They operate 400,000 virtual and physical servers, in this case with only five people. It's not the same five, it's a different five people, but still: five people operating 400,000 servers. So we can get to that level of automation, that level of efficiency. To get there, we need to orchestrate some other services, like monitoring and reporting, and make sure we can deliver to an SLA, which is very important for our customers as well; that will give you a measurement of how efficient your operation is.

And then let's have a look at where the real money is. We've been saying that infrastructure as a service is a break-even business. But when we look at this, there is still a bunch of things you can deploy on top of your cloud that can be offered as a service. So we're looking at solutions as a service, and please don't quote me on that, I don't want to coin any new "as a service" term, but solutions as a service: whether that is Kubernetes, or any platform as a service, or any software as a service, or any combination of those.
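The Amazon and Airbnb anecdotes above boil down to a simple servers-per-operator ratio; the figures below are the ones quoted in the talk.

```python
def servers_per_operator(servers: int, operators: int) -> float:
    """Average number of servers each operator manages."""
    return servers / operators

amazon_asia = servers_per_operator(340_000, 5)  # physical servers
airbnb = servers_per_operator(400_000, 5)       # virtual and physical servers

print(f"Amazon Asia: {amazon_asia:,.0f} servers per operator")
print(f"Airbnb:      {airbnb:,.0f} servers per operator")
```

That is 68,000 and 80,000 servers per operator, respectively: the bar that heavy automation sets for a service provider's operations.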
Before I hand it over to Omar: this, again from the OpenStack user survey, is the set of workloads that people are deploying on OpenStack. If we are able to get these services offered by our service providers, we will be starting down that road to differentiation, and tackling the niche markets we need in order for our clouds to be successful.

Okay. Well, as Arturo has mentioned, we are trying to figure out how to leverage our different technologies, with innovation efforts, to get the best economics. As he showed in the latest slide, you can see that the first two topics cover infrastructure as a service, typical infrastructure deployments, but the rest of the fields covered in this survey are those related to the software-as-a-service market. As you can see, people are deploying to production, in very good numbers, web services, e-commerce, databases, storage and backup. So how can we handle this?
Well, of course, using OpenStack. Now that we are delivering OpenStack for service providers, the next concern is to deploy different workloads on top of our infrastructure as a service. Once we have solved the infrastructure-as-a-service problem with OpenStack, what we are concerned with now is how we are going to win, not just compete in but win, the software-as-a-service market. So what I am going to talk about is the different efforts, the experience we have gained in the past with other service providers we have founded and collaborated with, and how OpenStack solves this problem.

So, which projects do OpenStack deployments use? This is based on the latest survey, presented last Friday by the OpenStack Foundation. As you can see, the first six projects are very important because they are the core projects; and we can include Swift as well, which is very important because, as you can see, 42 percent have Swift in production. So I'm going to consider Swift alongside the main six projects: Nova, Keystone, Horizon, Glance, Neutron and Cinder. Heat is gaining a lot of popularity, and that's good, because it means the community is concerned about deploying software services, about deploying workloads on top of your OpenStack public or private cloud. So as you can see, these projects are very important, and you can see how they are distributed around the ecosystem: in production, development, staging, testing, and so on. When you talk about the adoption of these projects, what you'll find is that the core projects are always leading the OpenStack ecosystem in terms of maturity and adoption, while the rest of the projects are gaining popularity now as people try to compete in the software-as-a-service market. The reason I am mentioning this is that these key projects are now solved.
I mean, they are now very well established in terms of their capabilities and functionality, and they are ready for production clouds. This matters because the approach we take at Canonical is to try the different permutations and combinations of these core projects with the rest of the OpenStack ecosystem; we have led the way in deploying these combinations with the ecosystem of partners that exists around Canonical. That means we founded the OpenStack Interoperability Lab, which has been running openly for more than a year. What that means is that we build more than 3,000 clouds per month, where the different components, in terms of compute, hypervisor, storage and networking, come from our partners. They give us feedback, and we give them feedback, through this laboratory, so that we have the best methodologies for integrating with the whole ecosystem. We run this OpenStack Interoperability Lab for different purposes, but the main one is to certify that Ubuntu OpenStack can run any workload on top of any kind of storage, networking or compute component.

With this methodology, what we have found is that the different vendors in the ecosystem have provided us with a lot of feedback, and of course a lot of different approaches to automation and simplicity. When we talk about economics we are always talking about simplicity, and that means we need to automate all the distributed, scale-out workloads we have as service providers. That's the main reason we founded the OpenStack Interoperability Lab: to have these different choices in terms of hypervisors, storage and networking. I can mention some of them.
We can see them on the slide. Now, to get to the next level, deploying these workloads on top of those vendors' components and those OpenStack clouds, what I want to talk about is Juju. Juju is an enablement tool that allows us to close the gap that typically exists between the end user and the infrastructure as a service; I mean, between those components of storage and networking, the plain OpenStack cloud, and the workloads you actually want to run. When I say workloads, I am referring, of course, to that survey: people care about web servers, about e-commerce, about big data, about the real workloads they want to deploy on top of a production private or public cloud. So Juju is a great universal modeling tool; we promote it, it's open source, and you can of course use it. With it we model, we encapsulate, our workloads. We have hundreds of charms; the idea of a charm is that we encapsulate all the simplicity for your economics as a service provider. These charms, these different services that you can see, encapsulate the different setbacks that you always face when you try to deploy any workload.

Here you can see Juju deploying an OpenStack cloud with Windows, used as an Active Directory, and a Hyper-V driver; we use this approach to deploy those workloads in a way that makes the cloud much easier to understand. We are going to show you another example: this charm, this model, allows us to deploy OpenContrail as an SDN on top of OpenStack. That's great, because when we talk about simplicity, automation, repeatability and shareable approaches, we're talking about charming all those workloads on top of our clouds.
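For readers who prefer the command line to the slides, the charm workflow described above reduces to a couple of `juju` invocations. Below is a minimal sketch in Python that only assembles the commands without executing them; the bundle name is taken from the demo later in the talk, and exact CLI behaviour may differ between Juju versions.

```python
def juju_deploy_plan(bundle: str) -> list[list[str]]:
    """Assemble (without executing) the juju commands that deploy a
    charmed bundle and then check on its units."""
    return [
        ["juju", "deploy", bundle],  # fetch the bundle and place its units
        ["juju", "status"],          # check units going from pending to started
    ]

for cmd in juju_deploy_plan("realtime-syslog-analytics"):
    print(" ".join(cmd))
```

The point of the sketch is the shape of the workflow: one command models and places the whole bundle, one command observes it converging.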
So This is another example for a modeling or deploying your open control SDN base and on open stack And we have another one that is one of our partners That is concerned about application performance management up for mix They have already charmed their their workloads and we have for example here another model That can allow us to deploy this kind of workloads This is another example and this latest this last is is the most important for me because When you are in a service provider when you are in a data center when you're work when you're concerned about these Economics or this reduction of low cost you need to think of high density You need to think of how we are going to leverage all the Consumption of the power consumption or usage of the resources. So you're worried you're concerned about density and that's Why we propose lexity lexity is our next generation Lighter vice or when I say light advice is a hypervisor But it's lighter in terms of size not just size because of the of the of the of the size of the source that is living inside the kernel, but because you have direct access to your hardware Because it is a container a whole system container story So when I am talking about lexity, it's because I am talking about high density because I am talking about Performance and when I am saying performance that means that we can crush KVM and Crush KVM is something like 14 X 14 times density When when when you try to think of economics, you are trying to think of Low latency you are trying to think of how to solve the different ways You need to deploy different workloads on top of your cloud for using software as a service market Oh to win the software as a service market So as you can see this this very simple graph is showing us We have a deployed 37 KVM gusts in 940 3 seconds versus 536 Guests with lexity in less in less seconds. 
So we have lower latency, and of course we have more density, in a fraction of the time. This is the way we solve how to win the market, how to close the gap in understanding the deployment of software-as-a-service workloads. And you know what? This ecosystem is available now, with hundreds and hundreds of open source projects, at jujucharms.com (the demo is at demo.jujucharms.com), so you can access this ecosystem and deploy your own workload. And if the workload you need does not exist in the store, you can develop your own charm; it's very easy to develop them.

So, guess what: I'm going to show you, if the Wi-Fi coverage allows me, how we can integrate this deployment of workloads on top of our OpenStack cloud. Give me one second. Orange Boxes. I'm going to log in to an Orange Box that actually has Liberty, the latest OpenStack version. As you can see, we have here our main project. The point is to make it easier to understand the deployment of workloads, and for that reason we have embedded our Juju store inside Horizon. That means you now have a very understandable abstraction layer over your workloads, over your public or private cloud, in a fancy way: we have embedded this component so you can express what you want as an end user, as a consumer of the cloud. I now have a workload running that is already there, a typical key-value server. What I'm going to do
is deploy a new environment, a new model, to show you how easy it is to understand, how our store is ready to deploy on top of your cloud without any knowledge of infrastructure as a service. I'd like, for example, to show you a bundle for analytics. When I talk about analytics, of course, we are talking about big data, about Hadoop. Let me show you, maybe, this one: real-time syslog analytics. When you have a lot of servers writing a lot of logs across their different infrastructure, you need a very easy mechanism for comprehensive monitoring of the different data sources. So I'm going to deploy this; I have some latency because of the network. We're going to add it to the canvas, and you'll see now that we are placing these different services on top of the different machines that I am going to spin up on top of our OpenStack. They have placed all the units; I just need to commit the changes. And what will happen now is that the gap that usually exists between the user and the deployment of the infrastructure as a service, of your instances running KVM or another hypervisor or containers, is going to be reduced immediately with our Juju universal modeling tool. So I am deploying the different workloads now. I have committed the changes; let's see the services. We need to wait for a response from the network. Okay. And now, have you seen that the status is changing to yellow?
That means that at this moment we are spinning up those instances to install the different workloads on top of your OpenStack cloud. I'm going to try to minimize my environment to fit the whole thing on our screen. I'm going to pass the microphone back to Arturo, and maybe before he finishes we can see a real use case of syslog analytics on top of our OpenStack, with just three clicks. That's the way we do it.

Okay, so this is how you differentiate, right? Imagine the number of possibilities here; it's just way too big. You can have your own Hadoop, or your own PaaS, or whatever "as a service" you want to configure. So how do you get started? Well, the first thing you need is a cloud, and we agreed (otherwise you wouldn't be here) that OpenStack is the way to go. You need a cloud that has the best economics, that allows that innovation, that can get you to the configuration of services. At Canonical we have this thing called BootStack, which is our managed service. It's the easiest way to get a cloud up and running. With this service we will build a cloud for you, tailored to your specific workload, or initial workload. It's a flexible reference architecture, so we can decide jointly how that cloud is going to work, and we build it for success, which means it will be ready to accommodate all the workloads that you're going to be designing and modeling through Juju. We will operate that cloud to an SLA for you during a minimum period of time while you get your team, if you want to get your team, which means hiring, training and keeping a bunch of engineers. And we will optionally transfer it, so there's no lock-in: right after you get your team, we hand over the keys and you get into the driver's seat. Okay, this is basically what we do. For the sake of time,
I'm just going to skip a few slides. If you take the economics alone, you're going to need a team of five to six people to operate a cloud, and that will cost roughly $900,000 a year. Just take that into account. Also, that's only to get the team; if you want to keep the team, don't send them to the OpenStack Summit, because there are people here hiring. All right.

Our services carry the MSP Alliance certification, just to give your customers comfort about who is operating their data: data privacy is taken care of, security is taken care of, and there's a third body that acknowledges that for us. You will also be part of our Certified Public Cloud program. In this program, what we do is ensure the experience with Ubuntu images (and again, 60% of the Linux in the public clouds is Ubuntu) is a good, consistent standard across the board. We can also extend our managed services to the clusters that you're deploying on top. So we can do the managed service on the Hadoop cluster you've deployed on top of the cloud, whether you deploy it on top of this cloud or on top of any cloud.
So there's an extension to that, which will allow your customers, or yourself, to focus on getting customers and on their own data, instead of figuring out how to model those Hadoop or big data clusters.

Three things I want you to remember. First, OpenStack is the way to go: it is the only platform that will give you all those four quadrants that we saw, the only platform that will allow you to differentiate, to innovate constantly, and to have the best economics. Second, when you pursue differentiation, you have to move up the stack; you cannot be doing only infrastructure as a service, that's break-even, you need to move up the stack, and I encourage you to use Juju as a great asset to get there fast. And third, if you want to take your first steps in OpenStack, BootStack, the managed service, will get you a cloud with no upfront cost, with incredible economics, with no lock-in. It's basically what you're looking for, excellent value for the money, and it will get you a cloud up and running in a couple of weeks.

All right, so we'll open it up for questions. Just to remind you: tomorrow we're going to be doing a sizing of OpenStack in Seg Yoku at 3:40; there's a Canonical track day on Thursday, starting at 9, with Mark Shuttleworth. Drop by our booth if you want to see the demo we saw up close, or want to play around with it on an Orange Box, or contact any of us at any time. I don't know if you want to look at the cluster again; we'll leave it running for you. That's how we go about winning the software-as-a-service market. Any questions? You're more than welcome. Thank you so much.