Okay. All right. Is this working now? Excellent. Well, thanks everyone for coming, and thank you for taking the time to come on Thursday; I imagine you're probably tired after a couple of days of OpenStack Summit. We're going to be talking about service discovery, service registration, and microservices. Fawad, do you want to introduce yourself? Yeah. Hi, everyone. Thank you for coming to the session. My name is Fawad Khaliq, and I'm an engineer at PLUMgrid. I've been involved in the OpenStack community for a few years now, and these days I'm looking at containers and microservices, specifically the area of service registration and discovery, and we'll go over some of the points in that. Okay. And my name is Fernando Sanchez. I work for Mesosphere. We're the makers of the Datacenter Operating System, or DC/OS, which is open source and runs beautifully on OpenStack. Give it a try: fire up some VMs in Nova and check out our website. It's free and it runs on OpenStack, so you'll probably like it. So what we're going to be covering today is service discovery and registration, and this topic is older than most people think, so we figured we'd divide the talk in two pieces: first, an introduction with a historical view of where the problem comes from and how it has been approached for pretty much the last 40 years, since the internet was born, really. And then, after understanding how the problem has evolved across the years until we got to virtualization, containers, and microservices, we'll take a bit of a discussion on the tools and the different approaches that are available to solve it today, and also what this means for OpenStack: how does this fit in an OpenStack environment, are there any solutions available, what is missing, what is there, things like that.
We have a lot of slides, so I'm going to try to go quickly through them; let's see. Okay, so first of all, the history part, and a disclaimer. We wanted to provide context on what this is and where it's coming from, so we figured we had to go way back, actually way back. When you go way back on what you have in your memory, sometimes it turns out to be a little bit blurry, so I figured I'd put in a disclaimer, or even a bigger disclaimer: if you go and Google everything that's in here, probably some dates and years will be off here and there. The other thing that happens is that this actually shows that some of us network engineers are older than we believe, because when you start going back in time to the first time you learned about these things, you find that you're actually becoming kind of an old man. So if anyone in the audience knows what a V.92 modem is, be advised you're probably going to feel as old as I felt preparing this presentation. So let's go back in history, a long time back, to the very, very early days when the internet and data communications were being born. If we go back to the 80s, the times when applications were living in a physical server, physical servers probably looked something like this. If anyone remembers that, you've probably been here longer than most. And in those times, if you were lucky enough to have a computer at home, you were probably looking at something like this. For those of you who don't know, that thing there used to be the first internet browser out there, NCSA Mosaic. If you were lucky enough to have a thing connected to the internet, you were probably using something like that. It was probably heavy and noisy, and you used your friendly network provider to connect, through a phone line, or through an X.25 line, or through something like that, to your application.
But the point is, your application and the endpoint where you were reaching it had a one-to-one relationship. So when you wanted to get to your application, the service discovery piece of it was: how do I get to the address where my application is, given the name of the application? As easy as that. If your application had an IP address, which is where you wanted to reach it, basically the service discovery piece was just DNS. Where is my application? Give me an IP address and I get to it. That's how I got to my application. At this point in time, I was a really young kid in Spain, so my mission-critical application at the time would probably be something like this. That's what I wanted to get to. But the point is, I just had to go from one name to one destination. Now, as we move forward in time, this thing of having one application in a single server was not very scalable, right? So we had to have many physical servers, which at this point probably looked a little bit like that. I had one of those shiny computers at that point in time, and I was probably running something like Netscape in those days, using my V.92 modem to connect. That was great. And my friendly network provider had changed the logo by then. The thing is, I now had one application living in many physical servers, so I needed a way to actually find the right endpoint. And I'm probably going to need something like a load balancer, which is one of the keys to this story. Now I need something that translates one address into many addresses: the address that I reach into the endpoints where this application is actually implemented. That load balancer probably looked something like that at this point in time. But the point is, when my request reaches the IP address that the DNS, the service discovery, gives me, the application is actually going to be delivered by a different backend.
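The two mechanisms described here can be sketched in a few lines of Python. This is a toy illustration with made-up names and addresses, not any real resolver or load balancer API: a name resolves to one virtual IP, and the load balancer brokers that one address to many backends, round-robin.

```python
import itertools

# Toy "DNS": one name maps to one address (here, the load balancer's virtual IP).
dns_records = {"app.example.com": "192.0.2.10"}

# Toy load balancer: the virtual IP fronts several manually programmed backends.
backends = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
_round_robin = itertools.cycle(backends)

def resolve(name):
    """Step 1: name -> address, exactly the one-to-one lookup of the early days."""
    return dns_records[name]

def pick_backend(vip):
    """Step 2: the load balancer brokers the VIP to one of its backends."""
    assert vip == "192.0.2.10"  # only one service exists in this toy
    return next(_round_robin)

vip = resolve("app.example.com")
print(pick_backend(vip))  # first request goes to 10.0.0.11
print(pick_backend(vip))  # the next one to 10.0.0.12
```

The "guy manually programming the load balancer" discussed next is, in this sketch, whoever edits the `backends` list by hand.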
A different backend, not the one that I'm reaching directly. The load balancer needs to be the middle man between the point I reach with my IP address and the backends implementing the application. I was a little bit older here, so my business-critical application, for a Spanish guy, was probably still the same thing. But if I send another request to that IP address, my load balancer is going to be the broker between the IP address that I got from the DNS and the backend implementing the service. So that's another key question: how does the load balancer know which backends implement the service? Well, in those times, there was probably a guy programming the load balancer. Someone would get into the load balancer, configure it, and manually program all the backends implementing that application. That guy probably had some sort of vendor certification, and would manually log into the load balancer, so when I added a new backend, a new physical server, he would go in there and program the load balancer, so that when I hit the load balancer again, I would get to my application once again. So how was this manageable? Well, it was manageable because we had physical servers and we didn't have so much churn. Things didn't change that much. They were not growing and shrinking too fast, so it was manageable. We could maybe add some scripts to the load balancer, but it was probably not automated enough. Now, moving some years forward, in the late 1990s and early 2000s, the three-tier architecture comes in. Our applications are now divided in three pieces: we have a web frontend, we have the application itself, and we have some database backend. So what does that mean for service registration? I was a lot older at that point in time, and probably running something like the early versions of Safari.
That means that instead of having single servers running my application, I probably have three tiers: a web tier in the frontend, an application tier in the middle, and the database. I was probably connecting with some sort of cable modem through my friendly network provider, which had changed the logo again. What that means is that I now need service discovery and registration in three pieces, in three different layers. It's the same problem all over again. I need to load balance between an endpoint for the frontend and the different frontends, an endpoint for the application and the different application backends, and an endpoint for the database. So again, I need a load balancer in each one of these layers, so that every time I hit my load balancer it will go to one of the frontends, then to one of the application servers, then to the database; I will get the information back through the application, and finally I get to the business-critical application that I need to see, and I get my fix for it. So as you can see, things are getting increasingly complex as we move through the years, because we're dividing and conquering, but where are my services living, and where am I offering them to the consumers? So what happens when we don't even have physical servers? Time moves forward, we're in the early 2000s, and we're now doing everything virtualized. We don't even know where our servers are; we cannot know physically where they are. What that means is I can have web, application, and database servers created and destroyed in minutes. I'm running maybe OpenStack, maybe VMware, and I'm creating my application servers, my database servers, my business-critical front-end application servers pretty much anywhere on my commodity servers. What that means is my applications can also scale up and down dynamically, and much faster.
I'm not bringing in physical servers; I'm creating and destroying VMs, so they can be growing, they can be shrinking. What that means is we cannot configure things manually anymore. I cannot go into a load balancer and add a new backend, a new HAProxy configuration, every time a new one comes in; it becomes unbearable, I can't deal with that. So we need an automated way for these things to basically register themselves and offer themselves to other consumers. And the different options that Fawad is going to discuss start to appear. Do I attach a sidecar process to every application that I have, so that they register themselves in some sort of centralized database? Do I use an external orchestrator, so that every time the orchestrator creates a new workload or a new backend, it tells the load balancer, the HAProxy broker, that there's a new backend? Do I use some sort of API gateway that will basically do the work for me? All of these are options, and there are solutions following these options that we will cover in the second part. Now we move forward in time, and instead of VMs we're starting to use containers. Why do we use containers? Well, they're lightweight, they're faster to start and stop, they increase the workload capacity, they're very, very efficient. But what that means is they can start and stop in milliseconds. They're very, very fast. So now it's totally impossible to do anything manually, and if it's automatic, it has to be really, really efficient to keep up with the churn in the service. So this is where we stand today. I'm probably just using my phone to access most of the applications. My friendly network provider is probably now a mobile network provider, and my applications can live anywhere: they can be in my data center, they can be in Amazon, they can be in Google, they can be in Azure.
I'm probably orchestrating my applications on some sort of container orchestration platform: Mesos or DC/OS, Docker Swarm, Kubernetes, something like that. And these things are being created and destroyed in milliseconds, and I don't even know how many of them have been created and destroyed. So doing this manually is just impossible, right? These endpoints are being created orders of magnitude faster than when they were VMs, and it's impossible to have anything manual. We need some sort of automated discovery, which is basically doing the same thing that we did as network engineers 10 years back: registering all the backends when they come up and offering new services on the front end as they become available. The options to do this are pretty much the same that we had with VMs. Do we do sidecar processes? Do we do centralized orchestration? Do we do client-side discovery? Now, when we have containers, the microservice paradigm comes in, right? Some of you may have heard of this thing called the monolith. And what is the monolith? The monolith is basically the fact that when we add code to our application, to our centralized application, it typically becomes a single unified code base. And it tends to grow over time. And nobody knows who wrote that code. And when you need to patch it, or when you need to grow it, it becomes increasingly difficult to maintain. It's difficult to troubleshoot. It's difficult to evolve. So in this application, that thing in the middle tends to grow a lot; it tends to become incredibly complex and incredibly hard to maintain. So what do we do with microservices? Basically divide and conquer. Instead of maintaining this huge code base, let's divide the application into functional areas, run different processes for them, give them different functional specifications, and then interconnect them with REST APIs. So again, we're using networking for this.
So we're not only interconnecting applications now; we're interconnecting pieces of applications. What that means is that instead of having the monolith that we have on the left there, where the whole application was growing homogeneously and had to be maintained as a single chunk of code, we can divide them and make them individual little applications. They can scale individually. They can be developed individually. They can use different languages. They can run on different infrastructures. They're tiny little applications that together form a bigger application. Now, what does that mean for what we're discussing today, service registration and service discovery? It means that we have the same problem at a much bigger magnitude. They need to discover each other. They need to know where the backends, where all these applications, are living. So what you get is something like this: what you had before, with much bigger complexity. And this may seem overly complex, but it's actually a simplified view. This view is from a real customer that we have, and those are the traffic patterns within a single application that they run on microservices. When you start dividing, and you get hooked on microservices because of how simple they are to evolve and scale, you need something that provides that level of efficiency and that is able to create those many endpoints for your backends. So, to summarize the history trip, the time travel that we were doing: we started with something very simple on the left side. We had tens of services. We knew where they were. We could even configure load balancers manually, because they were not changing that much. And now we have that thing on the right. And that thing on the right obviously can't be configured manually. So we need tools to do this automatically, and this is what we're going to discuss: how we do that. Fawad? Thank you.
So, we went over what microservices are and what problem they introduce alongside all the benefits that they bring. The problem is, of course, solvable, and to solve it the concepts of service registration and service discovery come into the picture. I'm going to talk about some of the patterns around service registration and discovery, and a few of the tools. Each of the tools that I'm going to talk about is worthy of a couple of hours of presentation on its own, so I'm not going to go into too much detail; it's going to be very high level, and we don't have much time. Wherever there are different ways of doing something, you are given choices, and you have to choose which is the best way to go for you. You might have a very small, simple architecture, or you might have something which is supposed to scale a lot. You might need to do it for one particular environment or another. This is where you're given different options, so let's see what options are there for service registration and discovery. There are patterns which exist to solve the problem of registration and discovery. What it means is that your microservices are coming up and down, and they get registered somewhere, and then at some point in time somebody else is able to locate where those services or microservices are. Essentially what I'm referring to is a simple naming service, as simple as that. But it's not that simple, because you're moving towards millions of endpoints which are coming up and down, as Fernando mentioned, in milliseconds or even faster, and the lifetime of these microservices in some cases is less than 10 seconds. So these registration and discovery techniques vary and might be different for different use cases. So let's go over the first one: one way to do registration for services is self-registration.
What it means is that you have microservices which are running in a cluster, and as soon as they come up, or go down, or have a change of state, they go and talk to some central registry somewhere which is running; it can be some database or anything, and they go and get it updated. So your microservice interfaces with the service registry directly. It's very, very simple, and it doesn't seem very scalable, but if you have a simple thing running in your own cluster, you might want to use it. The other way to do registration is third-party registration: the application that you're writing doesn't really have to know what your registry is. It doesn't need to have an interface with the registry; you give this job to some third-party tool. An example here would be, let's say you have Marathon running applications on Mesos: the applications don't have to talk to the state of the cluster or anything, your Marathon is responsible for managing it. In Docker you have Docker Swarm, and in Kubernetes you have Kubernetes take care of these things. So the service manager over here points to some tool which knows about these microservices and goes and updates the service registry, so that it has the updated state of the microservices. Then let's move on to discovery. There are two ways to discover applications once they are registered: either client-side or through a third party. One way is that your clients, which need to talk to your microservices, go to the registry themselves, directly query where your microservices are located, and then go, through the API gateway or directly somehow, to your microservices. So they would, maybe through DNS, or a load balancer as Fernando mentioned, or some other mechanism, directly query the keys using HTTP, or maybe some language binding, whatever method your service registry supports. That's one way of doing it.
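The self-registration pattern just described can be sketched in a few lines. This is a toy, in-memory registry for illustration only; a real system would use ZooKeeper, etcd, or Consul over the network, and all the service names and addresses here are made up. The key idea is the TTL: an instance stays discoverable only as long as it keeps heartbeating, so crashed instances disappear on their own.

```python
import time

class ServiceRegistry:
    """Toy central registry: entries expire unless the service heartbeats in time."""
    def __init__(self, ttl_seconds=10):
        self.ttl = ttl_seconds
        self._entries = {}  # (service, instance) -> (host, port, last_heartbeat)

    def register(self, service, instance, host, port):
        self._entries[(service, instance)] = (host, port, time.monotonic())

    def heartbeat(self, service, instance):
        host, port, _ = self._entries[(service, instance)]
        self._entries[(service, instance)] = (host, port, time.monotonic())

    def lookup(self, service):
        """Return live endpoints only; stale entries are treated as gone."""
        now = time.monotonic()
        return [(h, p) for (svc, _), (h, p, ts) in self._entries.items()
                if svc == service and now - ts < self.ttl]

# Each microservice instance self-registers on startup...
registry = ServiceRegistry(ttl_seconds=10)
registry.register("payments", "i-1", "10.0.0.21", 8080)
registry.register("payments", "i-2", "10.0.0.22", 8080)
# ...and a client (or a load balancer) looks the service up by name.
print(registry.lookup("payments"))
```

In the third-party variant, the orchestrator would be the one calling `register` and `heartbeat` on behalf of the application, which never touches the registry itself.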
Another mechanism is server-side discovery, where your client is agnostic: it doesn't really have to know what your service registry is, what interface it supports, whether that's HTTP, some language binding, or something else. It just talks to its API gateway, which can be something as simple, or as common, as a load balancer. So you're talking to a load balancer which is automatically getting configured from your service discovery, getting from it the state of which members are supposed to be in the pool, and then this load balancer or API gateway takes you to your microservices. So there are multiple patterns for solving this problem of registration and discovery. Now let's talk about the tools that we can apply these patterns to. There are a lot of tools out there; I'll cover a few of them, at a very high level of course. They divide into two categories. One is tools which are distributed key-value stores or databases, where you can store the state of your microservices and query it somehow using the interfaces that they expose. The other is tools designed particularly for microservices, in the sense that they work with some solution like Docker, DC/OS, Mesos, or Kubernetes. So let's talk about ZooKeeper. It came up as part of the Hadoop project initially, and it has many use cases: it does configuration management, messaging queues, but the one which is really relevant to us is the naming service. You can store information about your microservices and then query it using its interface; that's the idea. The benefit that ZooKeeper offers is that it's really good at providing consistency, making sure the data is consistent across your cluster if you're running, say, three nodes. However, it doesn't have all the features of a service discovery solution
inherently as part of it; you have to do a lot of tooling around it to get it to work. By tooling, what I mean is that, for example, in this diagram that you see here, you have a ZooKeeper cluster running, and then some container hosts which have containers running on them. These containers come up and, let's say as an example, they talk to ZooKeeper to register their information: what their name is, what their IP and port are. And then you have some read-write module over here which is listening to ZooKeeper events, or syncing up with ZooKeeper, to configure a load balancer that your client can talk to to get to your microservices. That's one way of using it. So ZooKeeper is generic enough that it can be used for self-registration or third-party registration, both; it depends on how you use it. You can use it for client-side discovery or server-side discovery; again, it depends on your use case. It's a bit heavy for the simple ecosystems that you might want to build. Moving on, we have etcd as well, very similar to ZooKeeper. It has been used in some of the orchestration systems; I'm pretty sure we have all heard about it by now. It's a key-value store, very similar to ZooKeeper. The consensus algorithm used in etcd is different from ZooKeeper's: ZooKeeper uses ZAB, the ZooKeeper Atomic Broadcast protocol, and etcd uses Raft. The benefit that etcd has over ZooKeeper is that it exposes an HTTP API, which is easier to consume using JSON, and it has some security features like TLS/SSL. But in terms of use case, it looks exactly the same: your containers are coming up and getting registered with etcd, then you have this module called confd, which is capable of listening to etcd events; it can update some load balancer, let's say nginx over here, and then your client can talk to nginx to get to your microservices. Very similar to ZooKeeper in terms of the use case it serves: very generic, for storing the state of your
microservices. Then let's talk about Consul, again one of the solutions which has been around for some time. Primarily it's a distributed key-value store with an HTTP interface, but the value-add that it has over ZooKeeper and etcd is that it offers a built-in DNS server. So you are storing the information about your containers, the IPs, and so on, and you get a DNS server which is capable of serving SRV records, which means your microservice's IP and port information is exported as part of DNS SRV records. That's something Consul adds. It also offers a bit of very simple load balancing, and an additional thing built into Consul is that it can do health checks and a bit of monitoring. So if you have a system which is already a standard DNS-based system, you could just put Consul into it and be able to use it. And there are some security aspects as well. Let's go to the next group of service discovery tools. The ones we've covered so far are in the same category, distributed key-value stores; now we're going to some tools which work in conjunction with a fuller solution. Let's start with SkyDNS: it's a service which provides discovery of your microservices when it works in conjunction with some key-value store, in this case etcd. The diagram that you have here is one of the examples: you have an adapter which listens to Docker events and configures your etcd with the information about your containers, which SkyDNS gets its information from, and then your client can talk to SkyDNS to locate where your microservices are. That's one of the ways you can do it. You can also run it as part of Kubernetes; it's used over there. SkyDNS is also DNS-based; however, it doesn't offer any health checks. Let's move on. Mesos-DNS is another DNS-based solution that works as part of Mesos. The idea behind it is that it serves you the location of the applications that you're running as part of Mesos, over DNS, and it
syncs with the Mesos master to get the state of your microservices: where they're running, what the location is, what their name is. And then your Mesos slaves, which are running containers, can talk to Mesos-DNS to get the location of your microservices. It also supports SRV records. The thing is, this is kind of a central DNS, and as part of the evolution inside Mesos, a new project was introduced called Spartan, as part of DC/OS, which does distributed DNS. The way it works is that it does dual dispatching of DNS queries to the backends, which are the multiple masters running in DC/OS, and whichever response comes back first, it sends back to the client, so that there's no single point of failure. It's also a smart way to route your queries in an optimal way. So that's an evolution of DNS inside the Mesos ecosystem. Let's talk about load-balancer-based discovery, and one of the examples, there are many out there, is Marathon-LB. You have HAProxy running in a container, and what Marathon-LB does is listen to Marathon events to get information about your applications, which are coming up and down as part of the definition of whatever service you define, and it goes and updates the HAProxy configuration using some templates. This is a container which is running somewhere on your cluster, and your client will go and talk to it.
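The template idea behind this kind of load-balancer-based discovery can be sketched as follows. This is a simplified, hypothetical rendering; the real Marathon-LB reads the task list from Marathon's API and uses much richer HAProxy templates, and the service name, hosts, and ports below are made up.

```python
def render_haproxy_backend(service_name, tasks):
    """Render a minimal HAProxy backend section from a list of (host, port) tasks."""
    lines = [f"backend {service_name}", "  balance roundrobin"]
    for i, (host, port) in enumerate(tasks):
        # One "server" line per running task; "check" enables HAProxy health checks.
        lines.append(f"  server {service_name}-{i} {host}:{port} check")
    return "\n".join(lines)

# Pretend the orchestrator just told us two tasks are running for this app:
tasks = [("10.0.0.31", 31001), ("10.0.0.32", 31002)]
print(render_haproxy_backend("my-app", tasks))
```

Every time a task starts or stops, the watcher re-renders the config and reloads HAProxy, which is exactly the manual backend-programming job from the physical-server days, automated.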
This is good for north-south traffic, but for east-west it's going to be a bit inefficient, because if you have hundreds or thousands of endpoints you might need something distributed. So in terms of having a distributed solution for service discovery, there's another one called Minuteman. It's also open source, as part of DC/OS. This is very similar to how the others work: it is responsible for listening to events from DC/OS and getting information from Mesos, and it does everything you saw Marathon-LB doing with its HAProxy, but using some kernel capabilities, in a distributed fashion. Even if you're scaling your cluster to many more nodes, there's no impact on latency or throughput; it remains constant and consistent. Let's go to another solution, which is called SmartStack. SmartStack has been around for a long time. It was introduced by Airbnb, and the idea was to do service discovery for the applications that they were running; those were probably VMs, and they could be containers, but it was primarily designed for VMs initially. The way it works is that you have ZooKeeper running, which is your service registry; the entire state is stored over there. Then every single host runs a couple of processes called Nerve and Synapse. Nerve is responsible for doing your service registration, and Synapse is responsible for doing service discovery and programming your HAProxy. So let's say you launch an application on one of the nodes: Nerve would go and program ZooKeeper, Synapse would get the information from ZooKeeper and program HAProxy, and then your application would talk to its local HAProxy to get to whichever member of the service it wants to reach. Again, one of the ways to implement service registration and discovery; and these are not the only ones. In Kubernetes we have kube-proxy, which is a load balancer which runs on every single
node, alongside the kubelet. It uses two modes: a userspace mode and an iptables-based mode. Again, it has information about the services in the ecosystem that clients need to get to. As part of Docker, there is an embedded DNS that does DNS-based discovery, but only for local networks, not across them. There are many solutions out there; we could continue for days, there are several more. So what are the key takeaways? We've been talking about solutions and more solutions; the key takeaway, I would say, is that one size does not fit all. There are several options out there, but the advice I can give you is that there are some parameters to consider, and what matters is your own implementation. What kind of consistency are you looking for: do you need strong consistency, or is eventual consistency enough? What kind of registration model makes sense for your solution: self-registration, or maybe third-party? And when you are doing third-party, you want to make sure you do not have a single process running for the entire cluster; you probably want something running in HA mode. On the discovery side, you want to choose between DNS versus load balancer, or client-side; it depends on how it fits your needs. Then, of course, when you are talking about microservices, scale and performance is a very, very important factor. And the most important thing to take care of is: what environment are you running your system in? Is it Mesos, is it Docker, is it DC/OS, is it Kubernetes, is it something else? Whatever you pick has to work with the system that you are trying to build. So with that, I am going to move ahead to the topic of discovery and registration inside OpenStack. Is it needed here? Is it something for OpenStack? I would say yes, because containers are becoming first-class citizens inside
OpenStack, and microservices use cases are popping up here. In OpenStack there are two ways to use containers: there is infrastructure as a service, and there are also use cases on platform as a service, and there are users for both. So given both use cases, let's talk about the facilitators. We have Magnum, which provides container infrastructure management: you can now spin up your Mesos, Docker, and Kubernetes clusters on top of Nova VMs using Heat templates. You have Kolla, which allows you to deploy OpenStack itself in containers; then you have Murano for the application catalog; and then you have Kuryr, which does networking for containers. So there are projects out there doing things around containers: somebody is doing networking, somebody is doing infrastructure. We have several facilitators now, and they've been here for some time. Now, what are the approaches that exist inside OpenStack for containers? There are two. One is that containers and the entire ecosystem around containers run on top of existing components of OpenStack: you just spin up Nova VMs, you run Mesos or something else on top, and everything runs at that layer. You don't care about authentication; you don't care about getting a network from Neutron, because you're running on top of that layer; you're agnostic to it. That's one approach. The other way is that you have the container ecosystem running as part of OpenStack, partly managed by OpenStack as well, where you want to use Neutron, maybe something from Keystone or Nova, or maybe some other project; there are so many projects out there in OpenStack now. So there are two approaches, and these are the only two approaches out there in OpenStack. What is the current picture with these two approaches? With approach one, we have
Docker-, Mesos-, DC/OS-, or Kubernetes-managed containers, with OpenStack providing the VMs. In this case, for service registration or discovery, OpenStack does not have to participate or do anything, because those stacks bring all the service registry and discovery components that exist already with them; they have been developed already and they are pretty good at what they do, be it Docker, Mesos, or Kubernetes. With the other approach, we do not have any off-the-shelf solution in OpenStack today. There are projects, the facilitators I mentioned, but how do we get to an end-to-end solution? The idea is that it boils down to just two things: you need to register your microservice, and then you need to be able to look it up, with the tools we talked about. You have a load balancer, with Octavia in OpenStack; you have Designate for DNS, so if you want to do DNS-based service discovery, sure; and you have Kuryr and Neutron for container networking. Specifically for service discovery and registration, maybe a new OpenStack project is needed, maybe Kuryr can do it, maybe something else, but it is something that doesn't exist today. This would be the desired state of the OpenStack ecosystem to achieve such a goal, and I believe that's one of the possible options for solving the problem of registration and discovery inside OpenStack. And this is where I leave you with this thought. Thank you. Any questions?

Any questions? It was quite a lot of information. There's a question in the back; if you could please come to the mic, that would be good. I'm not sure we have a mic, so sure, speak louder. For example, can you start over? We already have some applications running in our infrastructure right now, and we have to convert them into microservices, so what are the best practices for architecting those parts? Yeah, I don't think we were discussing architectural design for microservices and what are the patterns
that you would divide your monolith into microservices by. As we were discussing, typically it's by functional areas, and companies typically put together people from different functional areas, because each microservice is an application in itself. So the way you do it is a pure design decision that you have to make yourself, and it depends on how your application is designed today, how easy it is to divide, how clear the functional areas are, and how your team is divided: can you create micro-teams inside your team, with someone from the front end, someone from the application layer, and someone from the back end, and put them together to work on a microservice themselves? So, summarizing, I don't think there is one size; there is no manual other than whatever fits you. Depending on how your application is designed and how your team is designed, that will probably be the best option for you. Just as there is no right size for a microservice, there is no right size for a container, right? Some people do bigger containers and put main functions in them, and most people try to do smaller containers with just a single process or a couple of them. But I don't think there is a one-size-fits-all; it depends on your particular use case. Start looking at the solutions that exist already; you have many of those, so maybe one of them might fit. In terms of what tools you should be using, I thought the question was more about how to divide the monolith into microservices, and that was my answer. Now, in terms of what tools you use once you know you want to move to microservices, I think we've seen many of them, and obviously some of them are more mature than others. But the key to this talk was: if you're going to containers, if you're going to microservices, if your services are being created and destroyed dynamically, you do need service registration and discovery, and it has to be fast and efficient and fully integrated with the rest of your stack; otherwise you're going to have a
big task ahead of you. It's developed by Netflix, and we are using it at work, and it works pretty well. Yes, again, Eureka is there as well, so lots of tools are out there; it's just that we mentioned a few of them. As I said, we could go all day, literally; I went back to 1980, and we could go all day just with the options that came up in the last five years. I mean, Eureka is a great option, and Netflix has great solutions there; they're actually doing great work on Mesos. SmartStack, I think, was coming from Airbnb, right? Some companies decided to cook their own and then push it to the community, and that's a great use case. Again, as my first slide was saying, we're not trying to be a reference for what these tools are, not at all; I don't endorse any of them, I'm just talking about some of them. We tried to be illustrative of the simpler way of doing things and the most complex way of doing things, but again, SmartStack is a great combined solution, and what you mentioned is also another great one. Thank you. Any other questions? We have one minute, probably. Perfect, thanks everyone for your time.