Okay, everyone, ni hao, and good evening to this session, the penultimate session of MesosCon 2016 Asia. It's great to see so many of you here today. The title of the session is how we use Mesos to drive DevOps, and before I dig into it, I want to give you a small introduction of who I am, where I work, and where I come from in presenting at this conference.

I am part of GS Shop. GS Shop is one of the largest TV shopping networks in Asia and now one of the largest e-commerce sites in Korea. We've been in this business for the last 20 years, we have over 1,000 employees, and we serve a huge number of users every day, with 28 million app users on average. We are based in Korea, the headquarters is in Korea, but we have a lot of presence across Asia, including India, Vietnam, Malaysia, and other countries. I work for the container platform team in the IT Innovation Center, and my interests are DevOps, containers, and microservices; you can call them the buzzwords of 2016.

A sore point to touch upon: this is a Mesos conference, it's 4 p.m. on the last day, and we haven't had a talk on DevOps yet. That talk has been missing, right? Every conference these days has talks on DevOps, so this is that talk, and I hope some of you have been missing it. My pitch is basically why Mesos is a DevOps enabler, and how we have been using Mesos inside GS Shop to drive DevOps across our teams for the last year.

How many of you have read this book by Leo Tolstoy? Anyone? Okay. There's a famous quote in this book, and it's quite applicable in modern times: happy families share common characteristics, while every unhappy family is unhappy for its own reasons, something very unique to it. If I apply this to teams, especially software teams in our organization, I can rephrase it: productive teams share traits that are common across all productive teams, but each unproductive team is unproductive for its own reasons, something very unique to how it operates and does business. The reason to put this up is that I'm going to talk about how, as an organization, we identified what we were doing wrong, how Mesos became the DevOps enabler for our organization, and where we are going next. My definition of productivity is basically happy teams: happy, fulfilled, satisfied teams. That's how we define it inside GS Shop.

So let me put the agenda in front of you; it has four parts. I'll start with a little bit of history; some of you may not like history, but please bear with it for a bit. From there I'll move on to the start of the change happening in the organization using these tools. Then, what we are doing now and how we are adopting this across our teams, across the entire organization. And lastly, where we are going next and how we are driving this internally to take the next leap.

So we start with a little bit of history, and the history begins with how we do build and deployment; of course, the common things. It's a very familiar picture that you see all around: we build something, and most of it is Java shops.
A lot of people are building Java applications, and obviously we have a maintenance team supporting whatever applications have been moved to production, across environments. These are some of the stats we had. You see some familiar terms, like lead time, which is the time from the origin of a requirement until the application or the change goes to production, and then how many changes we ship per developer per week. These are important metrics for understanding where we stand now and where we started from.

Now I want to introduce the two important characters of my story today. You may have heard these stories; you may know these characters. One is the developer, the other is operations. These two roles have a divide, as you usually read in the books, and the divide comes from their different goals in the organization. DevOps, as you already know, is a combination of efforts to unify them. Developers and operations are driven by separate goals, because that's what they are measured on and that's what they are penalized on. As developers, we want new features moving fast to production; operations wants stability, with strict control over how changes move in and out of the environment. Operations loves monolithic apps, because a monolith is a very simple, easy-to-understand primitive with very few moving parts. When we started looking at microservices and many small apps, that created a conflict, because now there are many moving parts in the equation and new primitives to learn, so operations obviously doesn't have a nice time with these services.

And we practiced something called yawn-driven deployment: deploying at 3 a.m. in the morning. That's what we did for a long time. Everybody gets together at 3 a.m., it's a party, we deploy, there are a lot of yawns, and the code goes to production. That was a common practice inside our organization for a long time.

So now we have seen where we were; let me move quickly to the next part, where we started looking at the change and how we identified it. The first rule of making a change is to know the issues you have, so I'll start with the issues we found internally. You may have heard this term many times, and I want to repeat it: we love pets, pets are very important, but unfortunately we had machines as pets; we liked to name them and take care of them, and then they die painful deaths. The same context applies here: we loved our machines too much. We also had a lot of teams inventing their own tooling and processes, meaning each team took whatever worked for them and applied it, processes included. We also found that a lot of our developers took a long time to get feedback once they put something in production; a very common, familiar situation in large organizations. Big-bang releases were events; everybody prepped up for them like religious events, and that had been happening for a long time. And lastly, the empathy part: when developers and operations are driven by separate goals, they end up with very different views of their own roles and of each other's, and that creates the great divide I talked about.
So we looked at the issues, and now we needed some inspiration. For inspiration we started looking into history. Can anybody guess who this gentleman is? Okay, no problem. You may know the name Conway; the full name is Melvin Conway. Melvin Conway gave us Conway's Law, which you may have heard of, but there's a twist to it. Conway's Law is about organizations producing designs that mirror their communication structure, the way organizations are divided internally. We wanted to flip that to our advantage, using what's called the inverse Conway maneuver, a familiar term for DevOps professionals. The inverse Conway maneuver is about creating constraints that help drive the organization to change. It's basically systems-based thinking: allowing systems to influence what we do and how we communicate inside our organization.

The other motivation comes from the O-ring theory from economics. Last year Adrian Colyer, formerly of SpringSource, wrote about it on his well-known blog and coined the O-ring theory of DevOps. To make it simple to understand, he talked about two key concepts. First, every step in a DevOps workflow has to happen and has to succeed, even if only in a small quantity; otherwise you don't get the advantage of it. And second, if any one step is not done properly, or there is an inefficiency in it, that destroys the benefits of the entire workflow. This applies to a lot of our internal practices, and it's what we observed running our teams for so many years.

Based on that, we looked at tenets: on which tenets could we drive this change inside our organization? There are five. First, make our applications disposable. Second, focus primarily on improving quality of life for developers; a developer must have amazing productivity inside the organization. Third, allow applications and services built inside GS Shop to share as much as possible: make them run in a multi-tenant environment and use common tooling. Fourth, management primitives, like starting services, restarting services, and relocating them across different boxes, need to be automated as much as possible. And fifth, the most interesting point: measure everything. On these five tenets we wanted to drive all the changes around DevOps, and I'll talk about how Mesos plays the most important part in that direction.

So the master plan is simple: we took the ideas from the inverse Conway maneuver and the O-ring theory of DevOps, we created these five tenets, and we applied them. Based on that, we created a platform called Gravity. It's the service delivery platform we started last year, and to put it very simply, we are just providing the building blocks for our teams to build software on. The end-to-end workflow is quite simple for the audience we have here: we have a build process, which generates your artifact, in this case the Docker image. Then it goes through a preparation phase, which generates your deployment manifest. The manifest is handed to scripts and tooling that move the application into our Mesos cluster, reload the load balancer, and keep DNS updated.
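To make that workflow concrete, here is a minimal sketch of the preparation-and-deploy step, assuming a Jinja2-style template and the Marathon REST API. The file name, parameters, and endpoint are illustrative, not our actual manifests or tooling:

    # Hypothetical sketch: render an environment-specific Marathon
    # manifest from a shared template, then push it to Marathon.
    import json
    import requests
    from jinja2 import Template

    # One template for all environments; only these parameters vary.
    env_params = {
        "image_tag": "order-service:1.4.2",
        "instances": 4,
        "mem": 1024,
        "marathon_url": "http://marathon.dev.internal:8080",
    }

    with open("marathon-app.json.j2") as f:
        manifest = json.loads(Template(f.read()).render(**env_params))

    # App ids in Marathon begin with "/". PUT updates the definition
    # (a first-time creation would go through POST /v2/apps instead)
    # and Marathon answers with a deployment id we can track.
    resp = requests.put(
        env_params["marathon_url"] + "/v2/apps" + manifest["id"],
        json=manifest,
    )
    resp.raise_for_status()
    print("deployment:", resp.json().get("deploymentId"))

In our pipeline, the load balancer reload and the DNS update are handled by scripts in this same step.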
That's a very common, familiar Mesos adoption scenario that you may have seen in your own teams and organizations. Along with it comes standardized tooling for the common aspects: APM, logging, dashboards, and notifications. This is what developers get out of the box for every service they build inside our organization.

We wanted one common manifest to rule all the environments, and by environments I mean any permutation and combination of them: a single manifest we could use for all our deployments everywhere. For that, we made the deployment template as common as possible and let a template engine manage the parameters that change. What we did was externalize the aspects that vary across environments and use them to generate the final manifest, which is then pushed to each environment. The template is nothing but the Marathon JSON configuration, with some placeholders that are replaced at deploy time in our case.

We also follow blue-green deployments. We use the marathon-lb project and the zero-downtime deployment script from that project to get zero-downtime deployments for our teams, and that's been working well so far. When we say zero downtime, it's really "ideally zero downtime," as you would know from your own organizations: it's not 100% zero downtime, but we try to make the downtime as small as possible. And the primary purpose of zero-downtime deployment is to deploy while we are awake, not getting up at odd hours to deploy. That's been very fruitful in giving our teams the confidence to move ahead in the adoption process.

We notify on every step. We use Slack as our audit log of everything that happens in our infrastructure: the builds that are created, the notifications from the Marathon infrastructure, everything goes into a Slack channel. It's searchable and trackable, so you can find out what happened and at what time, and from there you can filter down to the events you want to pass on to your operations people, developers, or SREs.

We also built a custom dashboard inside our team. It gives us some new capabilities on top of Marathon, especially around metadata-based rollback: we want to be able to go back to a version based not just on a build ID or a commit, but on richer build metadata, and we wanted to control it end to end, so we introduced metadata-based rollback into our dashboard. It also supports multiple sites and multiple regions, which means we have multiple Mesos clusters running across different availability zones, and the dashboard lets us manage them all together in one place.

And we use comprehensive health checks. We leverage the Marathon health checks as well as the HAProxy health checks and give developers a single combined indicator of service health, so that when I deploy an application to an environment, I get the sum of all the health statuses together, not just the Marathon health status. Lastly, everything is available to our teams through an API and a CLI, so they can integrate it with their other infrastructure and integration points in a pipeline.
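Coming back to that combined health indicator for a moment: here's a minimal sketch of how the two sources can be merged, assuming the standard Marathon /v2/apps endpoint and an HAProxy stats page. The wiring and names are illustrative, not our actual dashboard code:

    # Illustrative only: combine Marathon health with HAProxy backend
    # status into a single yes/no answer for a service.
    import csv
    import io
    import requests

    def marathon_healthy(marathon_url, app_id):
        # Marathon reports per-app healthy/unhealthy task counts.
        app = requests.get(marathon_url + "/v2/apps" + app_id).json()["app"]
        return app["tasksUnhealthy"] == 0 and app["tasksHealthy"] == app["instances"]

    def haproxy_healthy(stats_url, backend):
        # The HAProxy stats endpoint exposes per-server status as CSV
        # (e.g. http://lb:9090/haproxy?stats;csv); header starts with "# ".
        text = requests.get(stats_url + ";csv").text.lstrip("# ")
        rows = csv.DictReader(io.StringIO(text))
        servers = [r for r in rows
                   if r["pxname"] == backend
                   and r["svname"] not in ("FRONTEND", "BACKEND")]
        return bool(servers) and all(r["status"].startswith("UP") for r in servers)

    def service_health(marathon_url, stats_url, app_id, backend):
        # The developer sees one combined answer, not two dashboards.
        return (marathon_healthy(marathon_url, app_id)
                and haproxy_healthy(stats_url, backend))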
Puppet Labs, which most of you have heard of, publishes the State of DevOps report every year. They talk about deployment frequency and the difference between high-performing organizations and laggards, and one of the common findings is that high-performing organizations deploy far more often than the others. We wanted to take advantage of this as we applied these practices internally, so we measure the information coming through the Marathon events and push it down to our metrics collector. That helps us gather information to give back to our teams: for example, deployment counts and the average end-to-end time of a deployment, across all their environments.

We also provide monitoring, and this means every developer, every team using this platform gets the end-to-end stack, so they can take advantage of it for the systems and services they are building. We use a couple of familiar options that you may have seen: Prometheus and Monit, plus Pinpoint and Scouter for our APM needs, and we use the Elastic Stack heavily inside our organization. This is a screenshot of the Monit service dashboard and our Prometheus dashboard.

Fault identification. This has been one of the big impacts of this transition in our teams. When I talk to our developers and tell them not to worry about which machine or container their code runs on, the most important question they ask is: what happens if something goes wrong? "I want to log into the machine and see what's happening." That practice took some time to change. What we do is ask our teams, following the inverse Conway maneuver, to change their practices around the system. They still generate their business logs in the application, but we bind those logs to the actual application and the container running in the Mesos cluster. So, for example, you can look at the Kibana dashboard, find the exact Mesos task ID that is generating a log line, then go to the Marathon dashboard and take action on it: you can kill or scale your application, whichever is appropriate. That's how fault identification works in our environment.

Most interestingly, this created an indirect effect on our teams: they started creating more environments. They loved it so much they would spin up an environment for every new deployment, and they were creating too many of them. We have a mix of public cloud and internal infrastructure that we manage, and making it so easy for our teams to deploy services meant there were too many of these environments lying around. So we created transient environments, which means we time out environments at a certain interval configured per cluster: we have a service that deletes dev and test environments after a pre-configured time. This makes our teams keep a check on how many resources they are using, and it creates discipline. It also brings a lot of benefit to our infrastructure team, because now they see good resource utilization, and the disposability tenet is enforced on the teams, which makes them stricter about how they use resources. We currently practice this in the dev and test environments, not in production.
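The reaper service itself is internal, but the idea fits in a few lines. A rough sketch, where the "transient" and "created_at" labels and the TTL are assumptions for illustration:

    # Delete dev/test apps that have outlived a configured TTL.
    import time
    import requests

    MARATHON = "http://marathon.dev.internal:8080"
    TTL_SECONDS = 3 * 24 * 3600  # e.g. three days on a dev/test cluster

    def reap_expired():
        apps = requests.get(MARATHON + "/v2/apps").json()["apps"]
        for app in apps:
            labels = app.get("labels", {})
            # Only apps explicitly marked transient are candidates.
            if labels.get("transient") != "true":
                continue
            created = float(labels.get("created_at", "0"))
            if time.time() - created > TTL_SECONDS:
                # App ids begin with "/", so this forms /v2/apps/<id>.
                requests.delete(MARATHON + "/v2/apps" + app["id"])
                print("reaped", app["id"])

    reap_expired()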
When we deploy this stack for our teams, on a given infrastructure, whether in the public cloud or in our internal IDC environment, we want to standardize how we deploy it. We have basically zeroed in on two node types: a master node and a worker node. The master node, as you may have seen elsewhere, runs the Mesos master and some related services. The worker node runs the Mesos agent and some packaged services. This simplifies how we deploy our infrastructure: some of the services on the worker node may change, but overall it gives us a template for deploying our Mesos cluster, the Gravity platform, for our teams. It means we can manage an entire fleet using a package repository and create worker and master nodes uniformly. Some of the worker nodes hold the applications, the business services that developers deploy, and some run system management, for example our continuous integration server or backing services; familiar tools that you may have already used or seen in your own organizations.

Now, coming to the third and important part of this presentation: adopting the change, and adoption of change is really hard. Last month an article came out about why it's so hard to do DevOps and Agile in Asia. There are many reasons listed, and I believe every one of them checks out, because I've been operating in this space for the last 10 years: it's really hard to do DevOps and Agile in the right manner in Asia, and primarily that has to do with mindset, with culture. We hoped to use the inverse Conway maneuver as a way to drive these changes, but the first thing to change is the mindset.

We created this simple adoption graph. When we look at any development or adoption of DevOps practices, we try to follow it: at the beginning you're in the evaluation stage, then you start putting something in production so that you get feedback and the teams gain confidence that this thing works. Once you gain confidence in a production environment, you will likely see a tipping point: that's the point where teams start using the services in a major way, in all projects and all services. Currently, in our evolution, we are in the "experience in production" stage, which means we are delivering benefits to our teams and they are acknowledging that the benefits are coming from the Gravity platform. Once we reach the tipping point, that's our opportunity to take this to scale, to widespread adoption across our teams.

There are three important components of that playbook, the adoption playbook as I call it. First, obviously, building confidence in the technology: Mesos, the Mesos stack, and DC/OS. Second, we don't deploy it only for new services; we also do it for existing applications, and in a manner that shows teams the difference between the old style and the new style, a compare-and-contrast mode for the technology. And lastly, we create teams and new roles so that people can take up opportunities in this transition. Let me talk about each of these briefly in the next few slides. When we put things in production and move systems to this new environment, there is a stage where the old and the new live together.
In our case, when we moved our critical systems onto the Mesos cluster, we hit a challenge: the teams were really not confident that this thing actually works. "It looks great, it runs great, but I don't know if it can take the load, and will it really work for our teams?" So for a while the present and the future live in the same space, which means we used the old system and the new system together for a brief time. During this time we followed four ideas to help with the transition. First, the old deployments, meaning the VM-based or physical-machine-based deployments, and the container-based deployments are unified: we use a single deployment channel, which deploys both to the virtual machines that run our services and to the Mesos cluster that runs our containers. Second, we use common notifications for both, so teams don't have to deal with two different systems throwing information at them. And third and fourth, we centralized the logging and the monitoring aspects of the system, because I don't want my teams to have to manage two different styles of management services just because we are in a transition phase. We did these four things knowing it's somewhat throwaway work, because we needed to get through the transition phase; as we move into a future where everything runs on the Mesos cluster, we may not need this throwaway part at all.

Compare-and-contrast also means putting both the old-style and the new-style services, containers and VMs, in the same environment and letting applications move gracefully to the new services. This is a representative diagram of how we run some of our services: we have legacy monoliths which are now containerized and run on the Mesos cluster, but also still run in the VM environment, and we shift traffic between them as we build confidence, so a certain percentage of traffic goes to the old environment and the rest goes to the containerized environment. This has trade-offs, but it gives us a basis for proving the technology. Once everything becomes mainstream and stable, that's when we move 100% of the traffic to the new environment running on Mesos.

Regarding roles: we created new roles, and these roles have to work with our existing team structure. Here you can see two delivery teams, and we also have a separate platform team, a platform engineering team building the Gravity platform. We split it into three parts: service engineering, platform infrastructure engineering, and developer advocacy and solution architecture. The solution architects and developer advocates work with the different teams inside the organization; they do the outreach, going out and helping teams adopt these services. That's needed in an organization of our scale to make the change happen quickly. We also have so-called DevOps evangelists inside the teams: points of contact who work with our developer advocacy people, so they become a way to distribute the ideas and enforce constraints within their teams. That helps us move across many teams at once with a small platform-team footprint. What happens now, because of these practices, is that operations and developers start aligning their goals.
They start to understand each other better, which means operations spend more time building self-services, and developers use those self-services instead of interacting manually with operations. And because the primitives are automated, operations don't waste their time on manual changes and manual activities. Lastly, they have shared goals: the developers share goals with operations. This is important for adoption; otherwise the change remains just a technology change rather than a culture change.

What are these shared goals I talked about? In practice there are three, and our teams are rallied around them; we want all of operations and development to zero in on these shared goals. First, both developers and operations have a goal of reducing the time it takes code to go from check-in to production; everybody works to improve that timeline. Second, we want releases to happen at any time, whenever they are needed, not at 3 a.m. in the morning. And lastly, reduce unplanned activity, especially work around downtime or issues that could be handled automatically by the services. Every bit of unplanned activity we spot goes into our JIRA as an automation task, so that everybody can look at it and start working on improving that activity. These three goals help our developers and operations unify how they operate inside the organization.

And some reality checks. What happens because of all this? First, instead of allocating a VM to a service, we let the software decide and operate that, which means we don't do upfront capacity meetings, many, many days of meetings just to understand what capacity to allocate, and we get more work done. Second, availability and fault tolerance move from a manually mapped-out practice to letting the software decide by itself, thanks to Mesos, which means far fewer occasions for manual intervention. Third, time to production, an extremely important metric inside our organization: instead of being blocked by monotonous manual work, we have minimal manual work, and that means more self-service inside the organization. That led to something remarkable: for developers, the ops world becomes more accessible. They don't have to learn new skills; they just call an API or use a dashboard or CLI to do things by themselves. And lastly, the important reality check from this transition is the reusability we used to lack: we are now standardizing across all workloads inside our organization, which means all our teams zero in on a standard stack and don't have to reinvent these processes by themselves.

Those are the four reality checks we currently see inside our organization. But DevOps is not just about tooling; it's also about architecture, it's also about design, and that leads to some new design principles that we advocate to our developers. The people in our solution architecture and developer advocacy roles work with the engineering teams to drive these principles in the design and architecture phase. First, as a developer, you need to write ops-friendly code, which means your code is monitorable: it's built to log, built to debug, and everything generates metrics.
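To show what that means in practice, here is a minimal sketch of an ops-friendly service, assuming the Python Prometheus client; we do run Prometheus, but the metric names and port here are invented for illustration:

    # A service that logs and exposes metrics out of the box.
    import logging
    import time
    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("orders_requests_total", "Order requests", ["status"])
    LATENCY = Histogram("orders_request_seconds", "Order request latency")

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("order-service")

    @LATENCY.time()          # every call is timed, no extra effort
    def handle_order(order_id):
        log.info("processing order %s", order_id)   # built to log
        # ... business logic would go here ...
        REQUESTS.labels(status="ok").inc()          # generates metrics

    if __name__ == "__main__":
        start_http_server(9102)  # Prometheus scrapes /metrics here
        while True:
            handle_order("demo")
            time.sleep(5)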
Every piece of code we write has the capability to generate metrics, which can be collected, aggregated, and shown to the user at any time. Next, very clichéd but obviously important: failure-driven development, which basically means failure is an expected component, not a surprise but something acknowledged up front. We tell teams that their container can relocate or die at any time, which means we can even apply Netflix's Simian Army ideas and kill containers in an environment. This is very easy to apply in the dev and test environments, where you can simply remove an environment; someone says, "hey, I was testing that," and we say, "sorry, just create a new version of it, spin up a new copy." That creates tension, but I think it's a constructive constraint, applying the inverse Conway maneuver. And lastly, distribution. Most engineering teams came from an era where everything ran on one box, so adapting to everything now being distributed is a drastic shift. What Mesos gave us is that, instead of creating this notion purely through education, we now have a platform where, when teams deploy their code, it's distributed from day one; the containers can move across the cluster. Everything they read in books is actually in practice, so it becomes more visible and easier for them to apply in their day-to-day activity.

So what's next? Where are we in this whole equation? The road ahead: let me give you a glimpse of where we are and what ideas we are pursuing to take this further. Number one: right now we have independent clusters for our environments, not for technical reasons but more for political reasons, as in any large organization. We want to zero in on multi-tenant clusters, which basically means our teams no longer own machines or servers; instead, most things run on a single shared cluster. For that, we are taking advantage of some of the primitives Mesos is building around multi-tenancy and what DC/OS is trying to offer, especially around isolation primitives and resource reservation. These are important for driving the multi-tenancy aspects of our applications across our infrastructure. The other thing we are working on is global workload allocation: inside GS Shop, everything is a container and everything runs on a Mesos cluster.
Now, with that assumption, we want to have global workload allocation, meaning workloads of any type. But as you know, not all workloads are the same; they have different attributes, and I would classify them along three axes: cost, performance, and isolation level. If you're running a dev environment for a low-criticality application, you don't need strong isolation; limited isolation is fine, and you can still take advantage of the platform. On the other hand, an application that uses customer data might, because of some regulation, not be allowed to run on the same Mesos agent, on the same box or the same VM, as other workloads; that classifies it as another type of workload. What we would like is for our data center fabric, running the Gravity platform on Mesos, to schedule workloads intelligently onto different types of machines, different types of agents, based on the type of workload. A workload that wants low cost but more performance can go onto one type of Mesos agent, while a workload that is not fussy about isolation can stay on very densely packed systems. This is an important initiative that we are driving internally over the next year. It means taking advantage of what's already out there, for example the Fenzo library that Netflix has built, which lets us build new primitives into how we schedule applications on a Mesos cluster.

In that direction, we are building a recommendation system for workloads being deployed on the Mesos cluster. It's not just about the profile of the application; cost efficiency also comes into the picture. For example, if I'm deploying a unit of an application as a container on Mesos, I would also like to see the right place in the Mesos cluster to deploy this service. Think of it like AWS Trusted Advisor, if you have used that, but for a Mesos cluster: the recommendation system gets integrated into our deployment choices, so as a developer I don't have to pick and choose the right data center or Mesos cluster; the choice of Mesos agents becomes part of scheduling itself. That's the grand vision of where we want to go next year with the Gravity system, and we have some work pending, also with the community, on how we can leverage this on top of Marathon using the Fenzo support.

We also run a B2B2C platform on Gravity, which basically means we give our customers the ability to run their own e-commerce sites. That lets us build new capabilities on the Mesos cluster: if I give someone a free account, I would like to schedule that account on a very densely populated Mesos agent, while an enterprise customer with strong isolation needs gets allocated to a less dense agent with a better quality of service. That means rethinking how scheduling happens: it's not just about resource offers, but also about how densely occupied a particular node or Mesos agent already is. That is something we are approaching as we deploy our B2B2C platform on the Mesos cluster. It allows us to innovate on top of Mesos to build these primitives, and as we build them, we would also like to contribute them back to the community. It obviously leads to cost reductions that we can pass on to the customers running their stores and e-commerce sites on our platform.
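Fenzo itself is a Java library with much richer constraints; purely to illustrate the idea of matching workload classes to agent profiles, here is a toy sketch in which every attribute and weight is invented:

    # Toy workload-aware placement: pick the agent whose profile is
    # closest to what the workload needs. Not Fenzo, just the idea.
    WORKLOADS = {
        "dev-batch":      {"cost": 0.9, "performance": 0.2, "isolation": 0.1},
        "order-frontend": {"cost": 0.3, "performance": 0.9, "isolation": 0.5},
        "pii-reporting":  {"cost": 0.2, "performance": 0.4, "isolation": 1.0},
    }

    AGENTS = [
        {"name": "dense-spot-01",    "cost": 1.0, "performance": 0.3, "isolation": 0.1},
        {"name": "general-04",       "cost": 0.5, "performance": 0.7, "isolation": 0.5},
        {"name": "dedicated-pii-02", "cost": 0.1, "performance": 0.8, "isolation": 1.0},
    ]

    def score(workload, agent):
        # Smaller mismatch across the three axes means a better fit.
        return -sum(abs(workload[k] - agent[k])
                    for k in ("cost", "performance", "isolation"))

    def recommend(workload_name):
        w = WORKLOADS[workload_name]
        return max(AGENTS, key=lambda a: score(w, a))["name"]

    for name in WORKLOADS:
        print(name, "->", recommend(name))

Run it and the dev workload lands on the dense, cheap agent while the regulated workload lands on the dedicated one; a real recommendation system would fold live utilization and density into the same kind of scoring.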
Lastly, the next step is giving our teams the ability to do A/B testing at the architecture level. We piloted Vamp as one of the services we are trying to leverage to provide capabilities on top of Marathon, which means we can do canary-style releases as well as A/B testing very easily. The other thing it gives us is automated workflows for testing our systems: we can provision the applications on the Mesos cluster via Marathon, but it goes through Vamp, and Vamp's DSL takes care of it. Lastly, we also want to take advantage of the self-healing and automated rollback capability built into Vamp, which leverages Marathon for that: when you deploy a new version of an application and that version produces more 500 errors, observed for example in Elasticsearch, Vamp can automatically roll the change back to the previous version, because it knows a threshold of issues has been crossed. These capabilities are already out there and you can take advantage of them; this is something we are very keen to put into our platform next year.
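Vamp drives this through its own DSL, so purely to illustrate the shape of that self-healing loop, here is a sketch; the Elasticsearch query, index, and threshold are assumptions:

    # Sketch of threshold-based rollback: count recent 500s in
    # Elasticsearch, and if too many, revert the app in Marathon.
    import requests

    ES = "http://elasticsearch.internal:9200"
    MARATHON = "http://marathon.internal:8080"
    ERROR_THRESHOLD = 50  # 500-responses allowed per 5-minute window

    def count_recent_500s(app_id):
        query = {"query": {"bool": {"must": [
            {"term": {"app": app_id}},
            {"term": {"status": 500}},
            {"range": {"@timestamp": {"gte": "now-5m"}}},
        ]}}}
        return requests.get(ES + "/logs-*/_count", json=query).json()["count"]

    def rollback_if_unhealthy(app_id):
        if count_recent_500s(app_id) > ERROR_THRESHOLD:
            # Marathon keeps past app definitions; redeploying an app
            # with only a "version" key reverts it to that version.
            versions = requests.get(
                MARATHON + "/v2/apps" + app_id + "/versions").json()["versions"]
            previous = sorted(versions)[-2]  # the one before the current
            requests.put(MARATHON + "/v2/apps" + app_id,
                         json={"version": previous})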
So we have covered all aspects of what we have done and where we started from, and it's still day one for the Gravity platform; we are still taking a lot of steps to make this change. And change is possible. Change is really possible, through the efforts of the Mesos community and the DC/OS and Mesosphere folks. Change is hard, but if you need proof that change is really possible, you may have seen it this week, when Microsoft joined the Linux Foundation. That really proves that change is possible, even if it takes a long time. We are driven by that conviction that change will happen. And that brings me to the end of the presentation. If you have any questions about what we are doing, or you need more information, please ask; I will also be around and we can catch up after the conference. Any questions? Go ahead.

[Audience question, partly inaudible, about handling noisy neighbors in the multi-tenant setup.]

See, "noisy neighbor" is a loaded term. It's not just about the reservation we give to a particular tenant, but also about how much a particular workload utilizes at its peak. For example, one of our applications runs our order system. When we run this application for some time, we understand how it has been performing over a certain duration and how it peaks. Now, when we deploy other tenants on the same Mesos agent, we want each tenant to have isolation not just in performance but also in the tooling it reuses. For example, we have log aggregation services on the agent, and log aggregation runs on the same agent through the Docker containerizer. When I try to reuse the same log-shipping system for another tenant, that may create issues with how that tenant wants to handle its information: it may contain customer data, data they do not want mixed up with other tenants. So for me, "noisy neighbors" covers several aspects: one is resource isolation; another is behavior isolation, in terms of what the application is doing; and then data isolation and management-service isolation. For example, I could deploy a separate log service, a log rotator and a log forwarder, for a particular tenant, completely isolated from the other tenants. Think of it like a sidecar container that works for one tenant and is sticky to that tenant, so there is no issue of mixing traffic through a common aggregation system pulling in data. Okay, thank you very much. And if you have any questions, we can discuss them outside. Thank you.