All right, welcome. Glad to see we didn't lose everybody, but I think the beer kept everybody here, right? OK, so walking away a little bit from the last presentation, which was excellent, we're going to be talking more about a use case today, a business use case, a very practical one: mobile workloads on OpenStack. My name is Tim Puer. How do I describe myself? I want to describe myself as a developer, but unfortunately, I don't get to do a lot of that now. Nowadays, I mostly script automated deployments. So I travel around the world, visit customers, and show them how they can automate their deployments on OpenStack, on SoftLayer, on AWS, on pretty much any cloud there is, and even traditional bare metal type workloads. My colleagues are going to introduce themselves. Thanks, Tim. So my name is Tyson Laurie. I am one of the DevOps engineers for the Apple and IBM program. As you can see from my slide, I'm actually a wayward Australian who's been moved to the United States. And this is a lot of the work that we've done to use OpenStack to speed up our development timeframes and deliver all of the apps that we do as part of the partnership. Cool. And my name is Glenn Hickman. I'm the second of the Australian DevOps engineers on the Apple IBM program. That light is extremely bright. I've been in Chicago the last seven months, and I don't think I've seen anything that bright before. So it's good to be here, though. What we're going to do today is the first thing that you should never do with a presentation, and that is a demo. And as part of that, we're going to talk about how we have arrived at this point in time on our program, and why we've adopted certain tools and techniques, one of which obviously being OpenStack. So as Tim alluded to before, we're not OpenStack developers. We're not shaping the product at all. We're just big users of the product, and we can see a lot of value in it.
So we're going to talk a bit about how we've integrated it into what we do. We're going to talk about how we do other things on our program in terms of continuous delivery and stuff like that. And the first thing we're going to do is kick off a demo, because it takes about 30 minutes, roughly, to finish. So the OpenStack side of it, the provisioning of the infrastructure, is pretty quick. It's about 15 seconds. But what we do on top of that, which saves us a lot of time in the long run, is deploy our entire software stack using the orchestration tools. So it saves us a lot of time, especially when we're spinning up anywhere from 70 to 100 environments. So it's kind of a big saver for us. We have a very small team. There's four of us in Chicago and Tyson in New York. So we manage 100-plus servers, spinning them up and down all the time with a very small team, thanks to OpenStack and some other pretty cool tools. So what we'll do is we will... Now, this tool here is Tim's baby, and he's going to talk a bit more about it later on. You should understand everything you see on the board. If you don't, we'll explain it. In 25 words or less, this is basically one of our blueprints for one of our environments. It's somewhat reduced, just in terms of trying to get it to finish within a presentation. So without going into too much detail, there's some application servers, the database servers, some networks and stuff. So we'll kick it off and we'll close our eyes, and hopefully in 30 minutes, you'll be able to see something on the other end. Are you going to take us through the source before you provision it? Depends on how detailed we want to get. Yeah, OK. So basically the way this tool works, from a high level, is you define your server blueprint. So that's essentially each image that you want to have, the image types, flavors, stuff like that, networks.
So all the kinds of things you traditionally would always do when you're using OpenStack to provision environments. In the background there, you can see all these little yellow boxes. What they are is our software components. We effectively componentize each of the bits of software we want to deploy to that stack. But we don't just deploy it and walk away. We deploy it and everything's configured. And in cases where we do high availability, so we're clustering databases or application servers or Cloudant or whatever it is that we do on our program, our tooling takes care of all this. So effectively we spent 12 months doing a lot of automation. So we're obviously using OpenStack to kick everything off and to get the platform ready for us to deploy our software stack. And it's all through point-and-click type stuff, which is pretty cool. So in the source, you'll have... How much detail do you want to go into? Essentially, it's broken down into the different types of components we've got on there. So we've got application servers, analytics servers, database servers. There's individual configuration for each one. So we have a unique configuration file for a given environment. That's got server names and flavor specifications, so that for a test box as opposed to a production box we can specify different memory sizes and CPU sizes and stuff like that. And that's all built into the configuration that sits behind the blueprint that you see in the background there. So they sort of work together. We use the Heat orchestration engine to talk to OpenStack, and it builds it out for us. We'll go into a bit more detail later. Yeah, why don't we go ahead and kick that off? Yeah, we'll need 30 minutes. And we'll jump back over to the slide deck. Alrighty, so while that's working its magic in the back end, Tim will go through it all, and how it transforms to Heat and how all of that hangs together.
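To make the blueprint idea concrete, here's a minimal sketch of what a Heat (HOT) template for an environment like this might look like: an app server and a database server on a private network, with the flavor parameterized so a test box and a production box can differ. None of this is the program's actual blueprint; the names, image, and flavor values are placeholders.

```yaml
heat_template_version: 2015-04-30

description: >
  Illustrative environment blueprint: one app server and one database
  server on a shared private network. All identifiers are placeholders.

parameters:
  flavor:
    type: string
    default: m1.medium        # overridden per environment (test vs. production)
  image:
    type: string
    default: ubuntu-14.04     # placeholder image name
  key_name:
    type: string

resources:
  app_net:
    type: OS::Neutron::Net

  app_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: app_net }
      cidr: 10.0.0.0/24

  app_server:
    type: OS::Nova::Server
    properties:
      name: app-01
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_name }
      networks:
        - network: { get_resource: app_net }

  db_server:
    type: OS::Nova::Server
    properties:
      name: db-01
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_name }
      networks:
        - network: { get_resource: app_net }
```

The software components (the yellow boxes in the blueprint view) then get layered on top of these servers by the deployment tooling rather than being baked into the template itself.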
I'm going to talk a bit about the use case. So as I said, this is used for the Apple and IBM partnership. We're building these apps to transform enterprises, right? That's the ambitious way of saying it. We're trying to make it the best user experience possible. We're developing, or actually we've developed, 100 apps in partnership with Apple. Designers, developers, small teams, geographically dispersed, developing all of these applications and the APIs. They need environments. There's a lot of change. We do sprints. It's all incredibly powerful. Behind all of that is a scale roughly like this, except even more complex now, because we have all of the implementations of those apps. Again, we're using a whole bunch of amazing tools, OpenStack being one of them. And the worldwide team is across the usual suspects. In North America, we've got development centers in Cupertino, Chicago, Atlanta. Then we've got Toronto up in Canada. We've got people in London, India, Brazil, China. It's every continent you could possibly touch on. And that brings issues again around access and speed and networks, et cetera. As for the apps, well, we're over 100 now. At that point in time, we had 680 APIs across all of those servers, and 100-plus compute nodes running behind those apps. And we obviously had organizational pressures. I mean, every business case does, right? We can't have an unlimited team of people. So as Glenn pointed out, there's actually only four people in Chicago and myself in New York running all of those servers. So that wouldn't have been possible without orchestration tools and without the automation from Heat. Yeah, I mean, there are definitely, and we're gonna get to this next slide, causes for change. I mean, we talk about organizational changes; it's kind of big when the CEO promises 100 apps in a year.
So these were the pressures — the previous slide showed the pressures — that these guys were facing when they came to me and we talked in depth about what we could actually do to solve this problem. Yep, Tim's helped us quite a bit. So obviously, those are all the causes for change. You'll encounter them on pretty much every IT project. I probably won't go through them in too much detail, other than to say, you know, for us, it was all about maintaining all the different implementations and variations and flavors of that, across all the different apps. So that drove us to these goals. We wanted to be able to match the quick iterations that the development teams and the designers were doing for the apps with the servers, with the changes; being able to spin up multiple development environments, having slightly different alterations in them or different streams of work from the development teams, and being able to compare the differences, compare the outcomes. Obviously, we had to do that in a way that didn't increase the costs by having all of those server environments, because that would never have been approved. And then obviously, provide visibility. So we have tapped into a whole bunch of the OpenStack APIs, and a bunch of other things, monitoring tools, to provide all of that visibility end-to-end to everybody in our program, from executives through to developers, testers, et cetera, so they can tell what the impact is. So greater capability, greater flexibility, less cost. Essentially, yeah, it was that simple. So those goals led us to these four questions, and we apply these questions at every layer now. So whether that's the infrastructure, whether that's automation, APIs, apps, these are the questions we ask on how to implement it with such a small team. And these form the basis for everything we do. And as you'll see now, Tim's gonna take you through the tooling that we used to answer these questions.
All right, so when Tyson and Glenn first approached me, they basically came to me with the problems that they laid out in front of you. We have a limited number of people. We have a lot of applications that we have to prepare in dev/test, QA, and production. We need to give these environments to our developers right away, or at least as quickly as possible. And the old way of doing things was just not working for them. And I was very happy that they came to me, because we had a new way of doing things. It was built on top of OpenStack and it was built on top of Heat. And basically what we do is we utilize Heat technology, which is infrastructure as code, and we define the topologies that we need for our workloads. So whatever those applications are that we're deploying, we're pulling the latest code from Git repos, building it, and then deploying it into a production-like system so it can be tested fully and functionally, and then moved up to a truer QA system and then eventually into an actual production system. And they liked that approach very much. And in fact, the middle layer that you see here, with Urban Code Deploy, gave us extra capability, because when we first started this approach, Heat maybe kind of punted a little bit on the software side of things. It was very good at standing up network, storage, and compute, but when it came to the software that goes onto that compute, and how you connect all of those together, at the time it did not have a great solution for that. It's a little bit better now, but Urban Code Deploy still provides a lot more capability than the Heat tools that you build into your images are capable of doing. So we started looking at it: okay, great, OpenStack's the way. We're gonna use Urban Code on top of that. We're gonna use Heat. But dang, is it hard to manage OpenStack, right? Now, we admittedly told you up front that we're not OpenStack developers, okay?
And I'm actually thinking about taking that certification exam for OpenStack administration, but I'm kind of nervous. I've done a lot of stuff with OpenStack internally. I've done a lot of stuff with OpenStack externally with customers. But it's an intimidating thing. There are lots and lots of projects in OpenStack, and maintaining readiness, production-level readiness, with security and vulnerability testing, and making sure that it's up and running 24-7... Like I said, we have a small team. We didn't wanna take on that responsibility. So when we acquired BlueBox, we immediately reached out to them and said, I think you guys would be a great fit for us. Actually, it was a bit before we acquired BlueBox. It was officially before we acquired BlueBox. And that worked out to be a very good fit for us, right? Because BlueBox takes on all those headaches, which they're very good at managing, and provides us a ready-to-use production environment for our workloads. And the best part was that Urban Code Deploy, since it's built on top of Heat technology and provides us a Heat editor, which we saw a little bit earlier and we're gonna see again later in the demo, allows us to easily build out Heat templates which are obviously compatible with OpenStack. And one thing to add there, and I think you're gonna get to it, is that mobile apps are very, very different in that they can be region-specific. They can be implementation-specific. They can be user-specific. And so we have to handle those variations in those workloads. And that's something that BlueBox allowed us to repeat over and over again with the variances. Absolutely. Heat, and specifically Urban Code Deploy — we call it Urban Code Deploy with Patterns — gives us the functionality to be able to define Heat templates that parameterize out the differences between these different types, these different technologies, the different customizations that Tyson is talking about.
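As a rough illustration of that parameterization: a shared Heat template can be combined with a small per-region or per-customer environment file that overrides its parameters. This is a hypothetical example, not the program's actual files — every name and value here is made up.

```yaml
# region-eu.yaml -- hypothetical Heat environment file; one of these
# per region or customer overrides the shared blueprint's parameters.
parameters:
  app_flavor: m1.large        # bigger boxes for this region
  db_replica_count: 3         # regional HA requirement
  locale: en_GB               # region-specific app configuration
```

You'd then pass it at stack-creation time, for example `openstack stack create -t blueprint.yaml -e region-eu.yaml my-env`, which keeps one template plus many small variation files instead of many diverging copies of the template.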
And now that we have BlueBox Local, which, as we said in the beginning, we kind of knew was coming, this is the perfect opportunity. We take the same exact workloads that we defined with our Heat templates, and we can run them locally, or we can run them in a dedicated fashion where BlueBox is managing it in the cloud for us, or we're running it in our own personal data center. It doesn't really matter to us. And in fact, we're working on the process of migrating from one data center to the next right now, and there's very little change that actually has to occur because of that. So we were very excited about the Heat technology and Urban Code, and I was very excited to help these guys in building out that technology and proving it out in a real enterprise-class application. All right, just before we go any further, I want to make one clarification about what Tim mentioned before about the small team. The Apple IBM program is probably eight or nine hundred people. I'm not sure of the exact count, but there's a lot. But the DevOps team, which services that entire program — and not just environments, but continuous integration, build, deploy, releases, the whole lot — is Tyson and I, three more, plus Tim. And Tim's sort of a... He's a guest. Yeah, he's an honorary member of our team. So yeah, the beauty of the tooling, and that obviously includes OpenStack, really makes life a lot simpler for us than what it could be. And BlueBox wasn't our initial cloud platform — we actually moved from a non-OpenStack cloud provider, and there was a lot of pain and heartache using them initially, mainly around the fact that the provisioning times were very slow, and there was very little intricate control over how you want to define your actual images and stuff like that. So it wasn't cost effective. So BlueBox was an obvious choice, especially given the fact that it's OpenStack engineered. So that made life a lot simpler for us.
So just a bit more background about our program and the sort of stuff that we do within our team in terms of our broader DevOps-type work. Starting obviously with the provisioning of environments, using OpenStack and Heat, we've got a whole virtual machine catalogue, a whole bunch of different flavors, different networks — basically all the building blocks that you need to build out a customized or repeatable test or production environment. So we have a catalogue, and you saw a bit of that before at the start, where you saw the environment map with all the images and the components and the networks and stuff. So we have a whole bunch of those, which we use to spin up our dedicated production environments, our dedicated test environments, or our ad hoc performance environments. The beauty of OpenStack is that if we get a request saying, can you spin us up a production-like performance environment, how much lead time do you need? I say, well, when's lunch? You know, we can have it basically up and sitting there for them within an hour. And that includes the full security model over the top. It's not just a bare-bones installed product; it's configured, it's highly available if it needs to be. All the configuration is done, the security model's applied, all the role-based accesses and permissions are set. So it's essentially how it would be if you wanted to deliver it to a customer and say: it's production ready. So that's all done through automation. And we don't have to talk to anybody, because it goes to Slack. We've learned very quickly that DevOps people don't like speaking to actual human beings, so it's better to use emails and stuff like that. So one of the good things with OpenStack, obviously, is repeatability. We get environments that are the same every time. When we first joined the program, they were building environments manually.
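As a sketch of what part of that security model can look like inside the same Heat template — security groups locking down each tier — here's an illustrative fragment. The resource names, ports, and CIDR are assumptions for the example, not the program's actual rules.

```yaml
  # Hypothetical fragment: only HTTPS in from anywhere on the app tier,
  # and only the app subnet may reach the database port.
  app_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
          port_range_min: 443
          port_range_max: 443
          remote_ip_prefix: 0.0.0.0/0

  db_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
          port_range_min: 5432          # placeholder database port
          port_range_max: 5432
          remote_ip_prefix: 10.0.0.0/24 # app subnet only
```

Because the groups are declared with the servers, every environment spun up from the blueprint gets the same firewall posture, which is what makes "production ready within an hour" credible.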
And I don't think any two environments were actually the same. We obviously do the full pipeline. So the developers just have to tag their stuff in Git, and we pick it up and basically do the rest for them. We've got a whole lot of other value-add stuff on top. So we have a broad range of monitoring and server performance types of things. We've also got a whole collaboration side of our program. So when the developers kick off a build process, by whatever tag they're specifying, we've got communication channels like HipChat and Slack that basically bring that information back to the developers as soon as it happens. And as Tim mentioned before, with Urban Code in the background kicking off these sorts of tasks, it's quick, it's on demand, and no one actually has to do anything, really. As mentioned before, we've got a whole security model on top. So using OpenStack, we've got availability zones, security groups — we've got the partitioning of infrastructure, so you've got true high availability using availability zones in OpenStack. What else? It's also very scalable. So obviously one of the big things with OpenStack is being able to ramp up an environment, where you might have seasonal-type requirements. So if you're a retail-type customer, maybe — I think you call it Thanksgiving here — that's probably a big time to buy some stuff. So if you want to tack on an extra half a dozen servers, sure, you kick off your blueprint, it adds them and builds out the server pattern. And you have the ability to add that directly into your Heat document, right? So you're defining that as a prerequisite for your environment up front. I want the ability to start off with two nodes, and if CPU utilization goes over 30%, then add two more nodes, right? And that's defined in the actual Heat document. I mean, it's a modern age, right? Things are completely different than 10 years ago.
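That kind of policy — start with two nodes, add two more when CPU crosses 30% — was typically expressed in Heat at the time with an autoscaling group, a scaling policy, and a Ceilometer alarm wired together. This is a hedged sketch with placeholder image and flavor names, not the program's actual template:

```yaml
resources:
  app_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2                      # start off with two nodes
      desired_capacity: 2
      max_size: 8
      resource:
        type: OS::Nova::Server
        properties:
          image: ubuntu-14.04          # placeholder
          flavor: m1.medium            # placeholder

  scale_up:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: app_group }
      scaling_adjustment: 2            # add two more nodes
      cooldown: 300

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 300
      evaluation_periods: 1
      threshold: 30                    # CPU utilization over 30%
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_up, alarm_url] }
```

When the alarm fires, it hits the scaling policy's webhook, the group grows by two servers, and Heat re-converges the stack — all declared up front in the template rather than handled by an operator.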
Absolutely. And, you know, as a father of many children, I detest waste. So the best thing about having a very scalable platform is really good, efficient use of resources. As I mentioned before, we had a non-OpenStack cloud provider initially. I think even in the heat of battle, our servers were running at about 0.9% CPU. So we weren't exactly getting good value for what we were paying for. Now we actually get better bang for buck. Okay, so this slide basically gives you a full view of what I just mentioned about how we use OpenStack, and how we use other tooling, to provide what we call full stack automation. So as I touched on before, we want to be able to not just provision an empty VM that someone can log onto and have a look around, but actually have a working environment that users can connect apps up to, because the whole point of what we do is to provide a middle layer for our growing collection of mobile applications. So this stack is essentially the middleware that provides the connectivity back into the customer's sources of record. So to lightly touch on the stack automation: obviously we use Urban Code Deploy with Patterns and OpenStack to build out our two base layers, the platform and the model — so effectively sizing the environment and stuff like that. And then, as Tim mentioned before, we use Urban Code Deploy to deploy out all our software components. So in Urban Code Deploy, we effectively have a component for each piece of middleware that we deploy. So for example, if we're using WebSphere Liberty, there's a component for that, and its version. So we know exactly what version of that product is in what environment. So another ancillary benefit of using Urban Code Deploy is having that full traceability of what you've actually got out there. So if you've got a hundred environments and someone says, are all your QA environments at the same specification? We can say, yes, they are.
Because you can actually see what version of what component — which is a software piece — is deployed to that environment. And I also mentioned configuration before. So we have the whole thing hooked up as well, for any sort of clustering that needs to be done. When you're provisioning servers through OpenStack, you might be building out a dozen application servers, but that doesn't technically wire them together and make them a useful cluster. So we do that with Urban Code Deploy. We have processes that run over the top and basically bind the whole thing together, make it into a usable system. And then there's the application, which is the whole point of the thing: the mobile applications developed on the Apple IBM program run on top of the stack that we deploy. I think we've kind of gone through that one. Do you want me to switch to the outcomes or switch back to the demo? Outcomes, yeah. Cool. So while we're waiting for that to finish, I thought I'd go through a few of the benefits that we've had. We've mentioned a few of them. Obviously, total cost of ownership is quite reduced, or even if it's a similar cost, we actually get way more benefit out of it. One of the things that came into effect with the use of Heat and Urban Code was automation. There are many tools out there that do automation now, and even Heat's gotten stronger at what it does. But we saw a reduction from five weeks to 30 minutes. And that's been gradual over time, as we've gotten better at understanding how OpenStack works. We had a few networking glitches ourselves initially, with routers and internal networks, et cetera. And if you saw the previous session, you would understand that with containers, it's gonna become a bit easier for workload management. Yeah, that'll be the next step. It was a turnkey solution. So from start to finish, you saw the stack model.
Developers can drive things through tags when they're trying to test out development environments and different apps. So we've got a suite of industry apps, and there are industry APIs from a model. We've put all of that together, all different versions, and we can test it all, including the sizing, the performance requirements, maybe some changes in the security model, et cetera, and test it all together. We've obviously got a smaller, dedicated team. So a good example is performance environments. I don't wanna spend my day trying to repair performance environments once we've broken them. That's never fun. So we just spin up a new one. That's usually easier. And then obviously there's the version management of all of that. But the biggest thing has been total cost of ownership — from team size to time wasted in delivery, all of that comes together. And that left us with about a 40% reduction by switching to OpenStack. So that was amazing. I mean, I wanna get the point across: when we talk about mobile apps, we're not talking about something that you just throw together and submit to the App Store, right? These are enterprise-ready mobile apps. These are things that have backends with adapters that connect into all of your enterprise applications, like SAP and Oracle, and all the different enterprise-type applications that most companies are running and wanna get information out of, right? So there's a lot — more than I even realized before I agreed to work with you guys — about what you have to do to actually get these mobile apps to run and connect to them. And we're building industry-specific mobile apps, but everything's tailored for a particular customer, right? So we have to be able to provide adapters and integrations and deployment automation for a lot of different possible outcomes. Am I getting ahead of myself describing it? Yeah. So it's almost like Heat and Urban Code give us the ability to have a self-documented process, right?
So there's not one — I mean, these guys are invaluable, and I'm not gonna say that they're not — but there's not one person up here who could leave the program, for whatever reason, without somebody being able to find out, well, okay, how does the process actually work? All right, it'd at least keep running for three months. That's right. Until they figure it out. You get the three-month vacations. As Americans, we don't get that in the Americas. All right. Did it finish? Almost done, okay. So, outcomes. Well, I think we've kind of covered the outcomes. What I wanna cover is the next steps for us while we're waiting for that to finish. So, obviously we've got some big changes. We weren't an early adopter of containers. We're still trying to wait for OpenStack and containers to figure out which one wins and how it's done. Wins is a strong word. How OpenStack and containers will live symbiotically together, like a happy marriage? So that's probably the biggest next step for us, and understanding even larger-scale use cases of OpenStack. We currently have a fair number of servers, but as we grow, another use case that's hit us is different regions of the app needing to put their workloads on region-specific clouds. So we have to manage all of that. The current use case, I think, is 12 data centers, with a different usage of the app, a different style of the app, different performance. People do things differently in different countries, amazingly. So that's something that we're going to approach with the use of OpenStack and the use of BlueBox, and try and roll that out in a way that, again, scales. This time it's less about the number of apps that we're trying to support; again, I think it's just a scaling factor.
I mean, even earlier today, you were talking about the full stack deployment and how much that is helping you manage and maintain these environments, because if you have a rogue developer who gets out and gets access to something, and maybe adds a bit of code to one of those environments, you don't really care, because as soon as it stops working, you just tear it down and throw a new one up. And any code, any local changes, that that person made — who shouldn't be making those changes there — are now lost. And they'll learn quickly; as a developer, I can say that I've done this before. They'll learn quickly not to do that. Yeah, I mean, my background's development, so I don't really want to delete people's code, but we did find that we made our development environments so easily accessible, and so visible by everybody in the program, and they had access to them, that they loved playing around with them. I think it was nearly a challenge just to see how many times they could break a development environment and how many times we had to spin it back up. But yeah, it's been fun. Would you say developer happiness has gone up? I would have to say developer happiness has gone up. All right, so are we ready to flip back over? Well, let's just flip it back over. It's not done running yet — full confession here. I probably spoke too long at the start before I kicked it off. But one point I'd like to mention before we change screens — oh, sorry, that's right — is that the theme of this summit this week is around OpenStack, obviously: the direction it's going in, the fantastic changes that are coming up, what's available now, and stuff like that. But there was a question I heard in a session the other day, which was, you know, is it production ready? And from our perspective, yes it is.
So that's probably not a really exciting thing for developers to know, but I think it's a pretty exciting thing for those that foot the bill and pay the money for these things. They'd actually like to know that OpenStack — in our sense, and we're using it quite broadly on our program — is production ready for us. It's working well. It totally suits the style of how we work in our teams. We're agile. So, you know, we need to have an on-demand type of approach to providing environments, and then obviously put the whole software stack on top of that. So it works for us. What I'll do now is bravely go into the tool itself — it's not far away. Hopefully you won't have to cut and run. So what you're looking at here is basically a dashboard of currently executing processes. Well, now what you're looking at is Glenn's favorites. There we go. It's actually up and running now. That's effectively — we stole one of our new production environments to demo today. We didn't spin up a full production environment; that takes about 45 minutes, maybe 50 minutes, to do the full thing. So we kind of cut it down, hobbled it a bit. Which may sound like a long time — 45 minutes, right? I mean, it took 50 minutes. But it's still down from what, three and a half, four weeks when we first started on this journey? Five, wow. So it's down from five. And these are approved, sanctioned, sanitized — if sanitized is a word — environments that developers can actually deploy on, right? These aren't just homegrown, I'm-just-gonna-throw-something-out-there-and-it's-gonna-have-tons-of-security-holes-because-I'm-a-developer-and-I-don't-care-about-firewalls environments, right? These things are groomed by these guys and placed out there so that developers can use them and actually test code in, like I said, a production-like environment. Yeah, exactly. And this is actually what you see when you log on to the IBM MobileFirst console.
It's not the most exciting console, but essentially what gets deployed into here is all the connectors that connect the mobile applications on the mobile devices into the various customer sources of record. So what we do by default when we spin up these environments is, we put our own little placeholder application on there. It doesn't actually do anything. It's just there to show you that the service is up. You can see the console. If we had a bunch of apps deployed here, and a bunch of connectors, we could go through those and show them to you, but that's the next step in the continuous integration pipeline: deploying the artifacts that we also hold in Urban Code. Yep. So effectively, I talked before about components. Each Urban Code component is just like a container or a holder — we have one for each of our middleware software stack items. We also have one holder for each of the 200-plus applications that we deliver to these software stacks. And again, they're just components, and they have versions, and we can see what versions have progressed through development, test, production, that sort of thing. So I just don't want to confuse things. I mean, Urban Code is a utilizer of Heat, but a provider of software automation, similar to what you might find with Chef or Puppet or Salt. Ansible's a little bit different, but that's okay. But it does things in a little bit different way, and obviously, being IBM, it's a sold product. But these guys utilized it by basically building out the components that you see here from the palette designer. And these components represent actual software that's getting installed on those nodes. These nodes are resource types from Heat, and we can flip over and see the actual source. If you've ever looked at a Heat template — if you're not familiar with Heat, this is gonna look pretty intimidating, but it's really not. And our diagramming tool, just like Urban Code's philosophy from the very get-go, is: let's make things simple.
Let's keep things visual until we need to go deeper into code. And ultimately, when you run one of these things, what you come out with is a process that you can dig into, which tells you about everything that has executed on the system, and you can dig in and actually see the actual command output from each of those processes. So we try our best to keep everything very simple and visual as much as possible. Visual — I know a lot of people give it a hard time, because, well, I can't do everything from a command line. Well, we have a command-line client. It's just that most people in this space who are dealing with all these different kinds of technologies — frankly, nobody has all those skills. So we provide plugins, and we provide a visual context, so that you don't have to be an expert in everything to be able to accomplish a lot in a very short amount of time. Did it finish? Not quite, but what you can see here — obviously, you're familiar with the OpenStack Horizon dashboard; this is the BlueBox implementation of that. So here you can see, up top here, like I said, there's only a very small pattern that we deployed. The width of the pattern, in terms of how many application servers, doesn't really affect the provisioning time, because it actually builds those things concurrently. But the depth does: the more software stack you dump onto it, the longer it takes, because software mostly, traditionally, needs to be installed and configured in a serial fashion. So we're limited by the duration of the install process. But you can see the top four servers were spun up 30 minutes ago. So we've gotten down to the wire. We've got one minute for questions. Is there anybody that has any questions? If not, you can find us afterwards. We'll be happy to answer them. And the slide deck goes back up — oh yeah, yeah. Let's switch back to the PowerPoint presentation.
All right, so if anybody has a QR code reader, you can take a picture of this. This will take you to a link to the SlideShare that we have up online, where you can read back through this material in a much more organized fashion than we've managed to explain it today. All right, well, if there are no questions, I'm going to take this opportunity to get out of this really bright light. Thank you for coming. We really appreciate it. And please check out the SlideShare.