bit more agile, and the implementation of an operational team inside of Atlassian for Atlassian OnDemand, our SaaS offering. But before I get on with that, who here has been watching the cricket? Yeah, excellent. A pretty dismal effort by Australia, but hopefully we can pick it up in Hyderabad.

So, carrying on, I mentioned I'm from Atlassian. Does anyone here actually use the Atlassian products day to day? Excellent, pretty good showing. We've got the major ones, Confluence, Jira, GreenHopper, FishEye, those guys, all available from our Atlassian OnDemand offering. But I'm not going to hype our products too much; I just want to tell you about the processes and changes we've made to increase our efficiency in production.

Before we go on, a little bit about me. Sorry, I might just move this one. There we go. A little bit about me: I'm a development team lead at Atlassian. We've got a small team working on Atlassian OnDemand, our SaaS offering. We currently have about 26,000 customers on this platform, which is about 18 months old, and we run just shy of 70,000 JVMs. So it's really cool, it's really high scale, which is something I really dig. My team built the current deployment infrastructure we use for that, for pushing new releases to production, and we've really smoothed that pipeline and increased the speed at which we can push new features to customers. I'm always keen on improving the development pipeline. My Twitter handle is also there, if anyone wants to follow my work account.

So the things I want to cover today are: operations at Atlassian, how we handle ops. Granted, the platform we use for SaaS is only 18 months old. We did have a SaaS offering prior to that, but it was run by a third-party vendor, so this is the first time we've done it completely in-house, and this operations team is relatively young, only 18 months old, so we're still learning; I'll go over a few of the things we've learned from that. Our development cycle, and how that's improved in relation to the hosted aspect. It used to be extremely brittle, and I'll go over in more detail how it used to be, but we've made a number of improvements there. How we actually handle deployments. I love talking about deployments; I love talking about pushing code out and getting new features in front of customers. I won't harp on about it too much today, but afterwards, if anyone wants to talk to me about it, I'm more than keen to. And of course the feedback loop, which is probably the most crucial part of it: taking what we learned during the deployment process and the development process and feeding that back into our teams and our process, to remove any of the friction there, to remove any of the sharp edges we have in actually getting these features out to customers.

So I've mentioned Atlassian OnDemand a little bit already. As I said, this is our SaaS offering; it's basically making the Atlassian applications available as a service in the cloud. Your Jira, your Confluence, your FishEye, Crucible, Crowd, all available in the cloud. Does anyone here currently use an Atlassian OnDemand instance for work? So you're the guys that use Atlassian hosted yourselves? Okay. What we do is a simple, basic SaaS model: for a monthly charge, we provide the service and we look after it for you.
We make sure it's always updated to the latest version. However, if you have ever administered Atlassian applications, you will know they are large, monolithic, single-tenanted Java applications, so we had some very unique challenges in shoehorning this into a SaaS offering. And if you've ever administered Jira, it can take potentially ten minutes to start the application, so there are some unique challenges for our deployments and upgrades in production there.

So this is what it looks like. We take these five core Atlassian applications and we push them to our cloud provider. Previously, with the third party, it would take us three months from release of a product to get those features in front of a customer. Three months, that's 90 days. That is a long time, considering Atlassian follows a model of 98-day release cycles: every 98 days, we push a new big feature, a new big version of the product. With that three months at 90 days, our SaaS customers were perpetually almost a complete version behind what was currently the latest and greatest. The changes and techniques I'll go over today, and the feedback, will show how we reduced that from three months down to six days. We can cut a release of Jira or Confluence or any of those and have the new features in front of the customer, in front of 26,000 customers, in six days. And there is still a ton of room for improvement in there: I think about three days of that is soak time and testing on our internal instances, which we can definitely improve, but it's definitely an improvement over 90 days.

So how do we handle operations at Atlassian? I mentioned earlier that the operations team is quite young, only 18 months old, and because of this we've learned a few things. Let me just go over how we actually work there. The operations team at Atlassian is only seven people. Who here works in a company that has an operations team for production systems? Yeah, a few of you. Okay, so as you might know, there are some very unique challenges on the ops side of things. We've got seven people worldwide, split into three in Sydney, two in Amsterdam and two in San Francisco, the three major Atlassian offices we have. These guys are responsible for maintaining 24/7 coverage of our platform. I think they call it a follow-the-sun methodology: whichever region is in daylight hours is maintaining uptime and handling incidents on the platform. They're also responsible for managing the nearly 70,000 JVMs we have running in production, and that's only the production side of things, as well as all of our logging, monitoring and deployment infrastructure. And on top of that, any incidents. As you know, with an operations team the biggest thing is responding to incidents, maintaining your SLA and keeping that uptime; that's also part of their job. But these guys are very good. They've only been at Atlassian about 18 months and they're extremely good at what they do. Three of the people on the team are ex-developers turned sysadmins, operational engineers, so there are no qualms there about jumping into the code and finding out what's wrong. These guys work very closely with our infrastructure teams and our development teams; I'll go over that soon.
So the question was, is it 65,000 JVMs? And yes, it's actually a bit closer to 70,000 JVMs, and it's increasing every single day as we sign up new customers. We have 32 racks of equipment in a data center in Ashburn, which I think is in Virginia in the States. So 32 racks, with about 10 diskless compute nodes per rack, which gives us 320 compute nodes, all virtualized, so we can easily balance and shift customers around and do it for load. I'll go over it in a moment, but it's a completely in-house built and designed private cloud platform. Very much like EC2, you can just spin up a virtualized instance, get access to it and push the latest releases of our software to it. Obviously this one is tailored explicitly to the Atlassian products so that we can get the best efficiency we can out of our hosting.

So I'll cover it now. We have this private cloud platform internally, and the HIT team you can see here is our hosted infrastructure team. These are the guys responsible for designing and implementing this internal platform, and also the bug fixes, maintenance and all that, so they're very tightly coupled to this platform and work very closely with it. Conversely, we have the product teams, which sit on top of this, as well as the hosted development teams. The hosted development teams are generally engineers from all of the product teams we have who work together to give the products the specific hosted twist, the SaaS twist, that we need. And of course the jam in the middle here is the hosted operations team; these guys are the ones that keep everything running when it breaks.

So I mentioned earlier that these three teams work very closely together, and that's true: we do secondments, we do virtual teams, we do sharing of resources depending on what the work is, to get things done. And the next point is something that is not necessarily unique to Atlassian, but it's something I absolutely love about the place: no code base inside of Atlassian is off limits. We do have code ownership by teams, but there's nothing stopping an operations guy coming along, checking out the Jira code base and sending a pull request to fix something up and make it a little bit more efficient on the platform. One of our core values at Atlassian is "be the change you seek", and we really want to empower the guys to go ahead and make the changes they need in production. Conversely, if there are changes that need to be made in the platform, we really push to allow the product developers and the hosted dev guys to make those changes and get them in there.

Now, I know this may be a bit controversial, but nothing is better, well, I have not found anything better, than sitting close, the physical proximity to these guys. Our hosted infrastructure team, our hosted operations team and our hosted development teams all sit in the same area, and that includes our product owners, our product managers and the business unit owners. We're all very close together, and there's nothing I've found yet that beats sitting close together.

So we've learned a few things from this, and this first point may come as a surprise, but you really should have an operations team. Does anyone here run production code, customer-facing production code, and not have an operations team?
I've certainly been in companies that have done that: it's just a couple of devs, Joe Bloggs over there, and if something breaks, make sure you go and fix it. So I've been in places like that, and we've only implemented this operations team in the last 18 months, when we moved to this new cloud platform. That has been a definite plus for us, and it's a definite learning point.

No code is off limits. Again, we want to empower people to make changes. We don't want to put up artificial barriers between the projects that stop people making the changes they need for production, and that also hinder cross-skilling and learning what the code bases do. That leads on to the third point, which is to really enable the cross-skilling of developers. At Atlassian, you're not a Java developer, you're not a Python developer, you're not an operational engineer; you're just an engineer, and you can get picked up and moved around to better spread that cross-skilling and as the projects dictate. So we have operational guys who work very closely with the development teams on secondments to build features, and that is an absolute bonus, because it provides an operational perspective when you're developing code that you just can't get if it's developers locked away in a room by themselves.

So let's talk about the development cycle. What I want to cover here is how it was with our previous cloud provider: it was very siloed, very much, I'm sure you've heard the expression, "tossing it over the fence". That's how we were in this previous development cycle. We would have, as you generally do, a product development team build a release of the software. In our case it was the binary you get when you go to downloads.atlassian.com, the binary you get if you're going to host it yourself on your own hardware, on your own servers. What these product teams did then was toss this binary, a WAR file, a web archive if you're familiar with the Java terms, over the fence to this hosted development team. And these were sad guys. They weren't having a good time, because their role was to explode the archive, run a series of patch files against it, and I'm not kidding here, run a series of patch files against it, compress the archive back up, run it and see what broke. If something broke, they'd go through, manually fix it, and regenerate the patch files for next time. You can imagine this process: it's extremely brittle, it's extremely slow, and it's extremely manual. It's not something that scales and it's not something we can do quickly. And then, to make this worse, these guys then threw this release, this new hosted archive they'd produced, over the fence to be deployed into production for these SaaS customers, which was done by a third party. I'll cover that in the deployment phase a bit more. But again, the problem here is there's no feedback. It's very much: once it's out of their hands, there's no care, it's just whatever happens over there.

Sure, yes. So I'll bring this up again. The question was: were the guys applying the patches in the center there part of Atlassian? Yes, they were, but they were situated relatively far away in the building and in a completely different reporting structure. The idea was that they just existed to apply these patches and generate this new artifact.
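To make that patch-and-repack step concrete, here is a rough, conceptual sketch of what it amounted to. The file names, layout and patch set are invented for illustration only; the real process was manual and considerably messier.

```python
import pathlib
import shutil
import subprocess
import zipfile

war = pathlib.Path("jira.war")                    # the binary "tossed over the fence"
workdir = pathlib.Path("jira-exploded")
patches = sorted(pathlib.Path("ondemand-patches").glob("*.patch"))

# 1. Explode the web archive.
shutil.rmtree(workdir, ignore_errors=True)
with zipfile.ZipFile(war) as archive:
    archive.extractall(workdir)

# 2. Apply every OnDemand-specific patch; any upstream change could make one fail,
#    which meant fixing it by hand and regenerating the patch for next time.
for patch in patches:
    subprocess.run(
        ["patch", "-p1", "-d", str(workdir), "-i", str(patch.resolve())],
        check=True,
    )

# 3. Repack it, run it, and see what broke.
shutil.make_archive("jira-ondemand", "zip", workdir)
```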
Sorry, in the new cycle we actually got rid of that whole patching step, and I'll talk about that in a moment. Yes, there was a definite effort, and part of the changes we made was to push the responsibility for generating these OnDemand-specific artifacts back to the product teams. So, was there a question over here? Okay.

So we went ahead and improved this, and this is just a quick diagram of how we improved it. The product teams still maintain their own internal development loop, that inner build-test-QA cycle. The advantage here is that at all of these stages, build, test and QA, we made it possible for them to push those artifacts to our new cloud platform. That means that when you're working on, say, a feature branch of Jira to add some new feature, you can run a command to push that actual branch, which hasn't even been released yet, up to our infrastructure and see it running on our hosted infrastructure. That's a really good saving, and it means we're integrated with the hosted aspect of it from day dot. The hosted development and operations teams were brought in from the very start, and once we have a release, and these are generally done nightly or, depending on the product team, up to every single commit, we push to our dogfood environments.

Does anyone know what I mean by dogfooding? Okay, I was worried this wouldn't translate. It follows the concept of eating your own dog food. Yeah, I see a few nods. Okay, so Atlassian is extremely big on eating its own dog food, and what that means is using the products we build, really in anger, so that we're hopefully the first ones to find out what issues we have. If we have issues operating a feature, or with how something looks, or with the usability of the product, or we find bugs, well, if we can't operate it correctly, how can we expect our customers to be able to, when we as the developers can't even do it? So internally at Atlassian we have a number of dogfooding instances, and these are for all intents and purposes production systems, because marketing and HR and finance, all the non-technical teams, rely on these instances being up every single day. The way it generally works is that every night, on a green nightly build, we push deployments out to all of our dogfooding instances. We have some product teams that are able to do it a lot quicker, such as the Confluence team: they can deploy a snapshot release of the product for every single commit, and do it without any downtime for the customers, but they're probably the furthest along here.

The remainder of the development cycle is fairly standard for anything you're pushing out to production: we go through staging, we go through canarying, and we go out to production. But at every stage these are closely tracked by our hosted development team and actually run by our hosted operations team, with integration from the product team guys. Yeah, okay, sorry, let me go back, I went too far. So what was the question? Yeah, it's coming. So the idea here with test and QA was that we built in pushing to our hosted platform a lot sooner.
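The internal tooling for this isn't covered here, so this is only a hypothetical sketch of the shape of that "push my unreleased branch to a hosted instance" step; the client code, endpoint, parameters and branch name are all invented for illustration.

```python
import requests

DEPLOY_API = "https://deploy.internal.example.com/api"   # hypothetical internal service


def push_branch_to_hosted(product: str, branch: str, zone: str = "dogfood") -> str:
    """Ask the (hypothetical) deployment service to build a branch and spin it up."""
    resp = requests.post(
        f"{DEPLOY_API}/instances",
        json={
            "product": product,   # e.g. "jira" or "confluence"
            "ref": branch,        # an unreleased feature branch, built from source
            "zone": zone,         # "dogfood", "staging", "canary", ...
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["url"]     # URL of the freshly provisioned instance


if __name__ == "__main__":
    print(push_branch_to_hosted("jira", "feature/JRA-123-new-widget"))
```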
The issue before was that with testing and QA, we couldn't actually test in the production environment; we couldn't test in an environment that was even remotely similar to what it was going to be like in production. If you remember the first slide, where we had this third-party hosted provider out on the side, it was up to them: they were the ones maintaining all the hardware, the infrastructure, everything set up there. We couldn't actually push out there for, say, a feature branch or anything like that. We could get dogfood releases out there, but it was a much slower process, and I'll cover that on the deployment side of things. The difference here is that during our testing and QA, everything is actually done on the infrastructure that production runs on. We have what we call zones, and we have a special zone dedicated to test instances that we can spin up and down at will, and push feature branches, or push issues, or just push a particular hash of the product out to, so that we can test it. Then of course we have all our automated testing, our unit tests, all our CI stuff, pointed at these instances out on our hosted infrastructure.

Okay, so these are terms we use internally. The question was: what's the difference between testing and QA? Testing is really our automated testing: our unit tests, functional testing, acceptance testing, those sorts of things that we've been able to automate or that were written as part of the feature. The QA side of things is more on the manual side; it's the validation of the feature. We use a technique called DOT, developer on test: we do have QA engineers, but their role is to advise on how we can best do QA, and then different developers go through and actually do the QA of features.

So the things we learned from actually changing our development cycle: eat your own dog food. I know it's not possible in a lot of places, but if you work in a place where it's possible to eat your own dog food, you should definitely go ahead and do it. Apart from all our automated testing, it's excellent for usability testing and for having people actually use the features you're writing before they get in front of the customer, so you can find these things out a lot sooner. The feedback is extremely important, and I've got a whole section dedicated to the techniques we use for feedback, but we need to get that back to the development teams, to the product managers, to the product owners as soon as possible, because we always want to fail fast. We don't want to go through the entire cycle, get a feature out to production and find issues with it then, and the last thing we want is for a customer to find issues with it and report it through support. We want to find these things as soon as possible and fix them up before they even get in front of a customer.

So how do we handle our deployments at Atlassian? Well, there are a number of things. This is how we used to do it, and it literally was, well, not literally, it was a can of worms. All of this was handed off to this third party to handle deployments for us, and all of the feedback only came when something broke, and again, that was at the end of the cycle.
There was no developer involvement at all in any of these deployments. It was done by this third party; it was literally posting them the artifact and getting them to deploy it. It was an extremely manual process they used. I don't know if you've ever deployed things at a large scale, but the last thing you want is a manual process; you want an automated process you can easily repeat. And because of that, in this old system it would take five days to deploy to 3,000 instances. 3,000 customers took five days. If you extrapolate that out to where we are today, with 26,000 active customers, I think it works out to about 40 days, if it scales linearly, which I doubt it does. Contrast that with how we currently do deployments: we can deploy to our 26,000 instances in two and a half hours, and that's pushing new versions of all of the applications.

So these deployments required us to come up with a plan. We had this completely new platform and we were going to start taking over all of these deployments, bringing everything in-house, so we had to define a new process for how we were actually going to do deployments.

The first thing that is paramount is automating your releases. Who here does have an automated release process? And who here does not have a process where they can just click a single button and cut a new release of their software? Again, I've definitely worked in places like that; I've definitely worked in places where there was a single person who knew the magic incantations, and if they were on leave or something, you couldn't release the software. And that's something that just doesn't pass the bus test. Sorry, does anyone understand what I mean by the bus test? Blank faces, yes or no? Yeah, okay, I'll explain. The bus test basically means, and this is a bit gruesome, but if this person is hit by a bus on the way to work, can the team still function? And with a single person managing releases and knowing all these magical incantations, you can't really say you pass the bus test.
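Just to put those deployment numbers in perspective, the extrapolation works out roughly like this; a quick back-of-envelope from the figures quoted above, assuming the old process scaled linearly, which it almost certainly wouldn't have.

```python
# Back-of-envelope on the deployment numbers above.
old_days_per_3000 = 5                      # old process: 5 days for 3,000 instances
instances_now = 26_000                     # where we are today

old_days_extrapolated = old_days_per_3000 * instances_now / 3_000
new_hours = 2.5                            # new process: all 26,000 in ~2.5 hours

print(f"Old process, extrapolated linearly: ~{old_days_extrapolated:.0f} days")   # ~43 days
print(f"New process: {new_hours} hours")
print(f"Roughly {old_days_extrapolated * 24 / new_hours:.0f}x faster")            # ~416x
```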
So automating the releases is a big thing. Atlassian has been huge on automating all of its releases for the behind-the-firewall software, the stuff you download, but not for our hosted stuff. So what we did, and I think there was a question earlier about the patch files, is we pulled that responsibility from the hosted development team: they're no longer responsible for patching releases. We handed all that code, all those patch files, off to the development teams and told them it's now their responsibility to produce an artifact that works in OnDemand. There wasn't too much pushback on that, because OnDemand was a big, growing platform and a big advantage for these product teams, so they were pretty cool about it. It also meant that all of these changes for OnDemand got into their unit tests and acceptance tests, which run on every commit, so we were able to fail a lot faster and find out what was wrong with these builds as they went.

So the first one is automate the releases. The second one is automate the deployments. Again, a bit trickier, but we had a huge advantage in that we had just built this internal platform and had complete ownership over how our deployments worked. And again, I've worked in places where, if you've got a manual release process, you've probably got a manual deployment process. So what we've done for both releases and deployments is hook them up to Bamboo, our CI server, but it doesn't really matter what you use. We use Bamboo because we eat our own dog food; you can hook it up to TeamCity or Jenkins, or it doesn't even have to be one of those, as long as you've got a repeatable process there.

The third thing we started doing, and this is more for when our process was starting out, as the process matured a bit we could drop this restriction and use a wiki page or something like that, was having a pre-deployment stand-up. When we have five products changing in a release, and at the moment we release to production every single Monday, that's potentially quite a lot of feature changes coming in. So what we require is for the stakeholders for those feature changes to come down and have a stand-up with our operations guys, to say: these are the new features going in, these are the potential risks with those features, these are the failure modes, and this is how we roll it back. And we go around, and we have a coordinator for the release who then goes through: if we do fail, this is how much time we're giving you to fix it, this is the remainder of the window, and this is how we're rolling back. So that was an excellent technique for when we were, excuse me, getting started.

And then, of course, when you do the actual deployment, we absolutely require the stakeholders to be present. This is not negotiable for us. We took this from a document that was written about the Facebook deployment process, I don't know if anyone's read about that, but basically what it said was that when Facebook are deploying to production, everyone needs to be in the chat room, to be there to support your feature when it goes out. If you're not available in the chat room, your changesets don't get merged into the release branch and your features don't go out to production.
Unfortunately we can't take quite such a hard stand on that, given the nature of our monolithic applications, but we do require the guys to be present, so that if something does fail we can easily recover from it, or at least go forward, or at least have someone there who knows what the issue is.

And the last thing we started doing was that once all of this is complete, we put together a post-deployment report card: how the deployment went for each product. It doesn't have to be fancy; it can be the simplest thing in the world. We just use a wiki page with a few details about the deployment, and we give it a red, green or yellow for how the deployment went for each product. You can see there we have three greens, and Jira was the only product that wasn't green: it had a JVM startup bug on 14 instances, with a link to the issue, and the issue just notes that we encountered this again. This is another mechanism we use for feeding that information back into the product teams, so that they know where to help us reduce the friction in our deployment process. And that leads on nicely to our feedback loop.

Yes, so the question was: when I say we automate our release or our deployment, is there any specific tool we use? Okay. For our releases we use Bamboo internally. A lot of our projects are Maven projects, and it's simply a matter of running the unit tests and then running a Maven release:prepare and release:perform, if you're familiar with that. But it really doesn't matter what tool you use, as long as you have some automated, repeatable and reproducible way of doing it. Prior to that, if we don't have a build set up, we actually have a server we call build box, which has a known build environment, where we can go in and, generally for smaller projects, manually run a release under that known configuration. There have been a number of instances where we've had unknown configurations, or someone has manually tweaked the Maven repo on their local dev machine and then run a release with it, and you get these weird artifacts packaged up that are incredibly hard to diagnose. So it's great to have this, because otherwise you're just wasting time trying to diagnose those issues. In terms of deployments, I think our situation was reasonably unique in that we had to deploy to this very custom infrastructure for our products. We looked at things like Fabric, Capistrano, MCollective, Puppet and Rundeck, all those sorts of things, but none of them really fit our needs, so we actually built an internal tool for that. I'm always keen to talk about that, but I won't cover it in this presentation; afterwards, over a coffee or a beer, I'm more than happy to go into all the details with you. Does that answer the question? Excellent, thank you.

Excuse me. Okay, I'm going to pick up the pace a bit because I'm afraid I'm going to go over time. So, the feedback loop. Why do we want to have a feedback loop? Why do we want to provide feedback?
And I think the number one reason we want to provide feedback is to avoid repeating mistakes we've made in the past. It's perfectly fine to make a mistake; making mistakes is human, and we all do it. The problem occurs when we start making that same mistake two, three, four times. We don't want to repeat those; we want to try to stop repeating those mistakes. Obviously the feedback is going to help us improve the whole process: any friction along the way we want to remove; we want to polish the sharp edges, we want to sand those down, so that we get the smoothest process and it takes the least amount of time from a developer completing a feature to getting that feature in front of customers. Obviously, and I've said this a few times tonight, we want to fail fast, and if we don't have that feedback loop in place, how can we fail fast? If we don't know about something, how can we fix it? And all of this is in the name of shipping quality features faster; as I said earlier, we want to reduce the time it takes to get something completed and out into production in front of customers.

So how do we go about doing this? Well, there's a whole bunch of different techniques, and I like to break them into three categories: how we get feedback during our iteration, our real-time feedback, and then after the iteration, once the iteration is complete.

During the iteration we have a whole bunch of different techniques. There are tons of them; these are just a few we use at Atlassian. Obviously our CI builds: unit tests, acceptance tests, functional tests, all those sorts of things, providing heaps of feedback during our development cycle. Dogfooding, huge: we don't wait to cut a release to push it to dogfood, we continuously push to dogfood so that people are always seeing the latest and greatest. We also have a policy that all our releases are cut off master, and whether you call it master, default or trunk, it's all the same, all our releases are cut off that, so we nightly push master to all our dogfood servers. Issues, pull requests and daily stand-ups are other great ways of pulling that feedback in.

The real-time stuff is what we actually get day to day, as events are happening. Information radiators, is anyone familiar with the term? Yeah, awesome, that's good. It's basically a flashy term for dashboards: have dashboards around your workplace. An information radiator, much like a radiator heater, radiates that information out to everyone that passes. Then all the synchronous chat mechanisms, IM, IRC, HipChat, are really great for pushing this feedback in. We've hooked up a lot of our automated stuff, our monitoring and alerting, into our IRC, IM and HipChat rooms, so when anything happens you get about 50 notifications if something bad is happening, but that's excellent. I mentioned the alerting and monitoring, and of course analytics: really good for seeing what the uptake is on features. Is a feature working or is it not? If there's a sudden decline in a feature's usage, is there a problem there? That's worth investigating.
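As a concrete illustration of that chat-room integration, pushing an alert into a room might look something like this. This is a minimal sketch only; the endpoint, token, room name and message are placeholders rather than the actual setup.

```python
import requests

# Placeholders only: not a real chat endpoint, token or room.
CHAT_NOTIFY_URL = "https://chat.example.com/v2/room/ops-alerts/notification"
CHAT_TOKEN = "secret-token"


def notify_ops(message: str, critical: bool = False) -> None:
    """Push a monitoring/alerting message into the team chat room."""
    resp = requests.post(
        CHAT_NOTIFY_URL,
        headers={"Authorization": f"Bearer {CHAT_TOKEN}"},
        json={"message": message, "color": "red" if critical else "yellow"},
        timeout=10,
    )
    resp.raise_for_status()


# Wired up to monitoring, a failing health check lands in the room as it happens, e.g.:
# notify_ops("jira-node-42: health check failing for 5 minutes", critical=True)
```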
Just on the information radiators: we have a whole bunch of these around Atlassian. I think on my floor we have about 20 or 30 of these TVs; there's just heaps of them, and every couple of desks you'll see a TV with different information on it relevant to that team. The one we can see here is a support dashboard; you probably can't see it from there, but it's a support dashboard, trust me. We've got other ones. This is a dashboard from our deployment tooling; you can see the numbers up the top add up to about 20,000 instances, and that's us running a deployment out to production. And then of course you've got your typical things like burn-down charts and all that, more specifically for the development teams.

And of course after the iteration we again have the standard mechanisms: issues, bugs, support cases, those sorts of things that get raised by external parties, our daily stand-ups and our pre-deployment stand-ups, and then of course those deployment report cards we have. I just want to reiterate what I said earlier: the sooner we know about a bug or a potential issue, the sooner we can fix it. It's very simple, and I think everyone will be with me here, but if we don't know about an issue, how can we be expected to fix it? So part of this feedback loop, and the whole fail-fast idea, is that we get that information back to the developers and the product managers as soon as possible, so that it can be fixed before it goes in front of the customers. I did have a story, but I think I'm running over time, so during coffee I'm happy to tell it. It's just a little anecdotal story about a time we didn't follow our process exactly and ended up conducting a DDoS on both GitHub and Bitbucket at the same time; I'm happy to talk about that with anyone.

So the key things I'd like to cover off are these. Eat your own dog food: it's extremely important, if you can, to use the software you produce, because if you don't have an understanding of the software, how can you expect the customers to? No code base is off limits: again, I know it's going to depend on the organization you work in, but if you can start breaking down those barriers between the teams, encourage the cross-skilling and, I guess, fixing what you need and being the change you seek inside the organization, then that is an absolutely huge advantage. Avoid repeating mistakes: I think everyone's going to be with me here, you don't want to repeat the same mistake over and over again; once is fine, let's not let it happen twice. Obviously, fail fast: we don't want to see a feature get all the way out in front of the customer before we find out about some critical issue with it, and it happens all the time, it happens with us as well, well, not constantly, but it does happen. And of course that feedback is imperative to failing fast and to reducing the friction in deploying to production. All of these things are like a Google app: they're in perpetual beta. All of this we're trying to improve; we're always looking at how we can improve it, how we can make it better. These are just some of the things that have helped us along the way, and I'm really keen to talk to all of you to see what techniques you've used and how you've improved your processes, so we can learn from each other.
So thank you very much for today. And I'll also put in a plug: we're also hiring, if anyone's interested. Cheers, guys.