My entire closet is full of these dudes. Yeah. Got it. All right, let's get started. So thanks, everybody, for coming to my talk. I know it's a little late in the afternoon on the last day of the conference, so I really appreciate the turnout. So let's get really serious and stop monkeying around. So before I get started: there was a little bit of a bet that if at least 16 people, which I realize now is a ridiculously low number, retweeted my tweet about my talk, I would wear this monkey suit. And almost immediately, at least 20 or so people did it. So, here you go. But I packed it to come here, so I kind of wanted to do this anyway. So thanks for coming. My name is Will Foster. I work on the OpenStack R&D infrastructure team at Red Hat, and I'm here today to talk about TryStack. So what is TryStack? Let me get a show of hands real quick: is anyone here familiar with TryStack? Has anyone used it yet, or heard of it? Awesome, so about half of you. Okay. So TryStack is a free, open source, public OpenStack sandbox, and the purpose of it is basically to provide a free way for people to try OpenStack. It started in 2011, and we took it over with the help of a lot of community sponsors like NetApp and Cisco; Red Hat does a lot of the administration there as well. And the project is officially sponsored by the OpenStack Foundation. So it's a free place for you to try OpenStack without any of the complexity of installing it yourself or maintaining it or any of that stuff. You can just get on there and go crazy. There are a few reasons why we have TryStack. We wanted to provide a very easy way for people who weren't familiar with cloud, or with OpenStack at all, to just have a place to kick the tires and get used to it. We wanted to keep things as vanilla as possible with TryStack, so we don't have any distribution markings. Everything runs on RDO, so there's no branding at all; what you experience on TryStack is basically what you get from upstream OpenStack. So in 2015, we had a bit of a hairy situation where the original TryStack infrastructure, hosted in a data center out in California, mysteriously went off the air. It just disappeared. We first thought it was a network problem and investigated that; turns out there were no outages, it simply disappeared. What happened was that ownership of TryStack, as far as who pays the electricity bill and the colo bill, was transferring hands between Rackspace and the OpenStack Foundation, and somewhere along the way, someone forgot to pay the electricity bill. So things just went away. It put us in a very interesting predicament, because we lost everything we had there. All of the tooling, all of the data that wasn't in Git repositories or backed up somewhere off-site, was gone. So we very quickly had to scramble, find a new home for TryStack, buy all new gear, put it in a new data center, and reinstall everything from scratch. That was a very interesting time; it was around the June timeframe. And since then, we've completely revamped the deployment, and it's back, bigger and better than ever. So let's talk about where it is and how it works. TryStack has a /24 of public IP address space.
So if you're an average person and you want to spin up an application, test an app, or just kick the tires, you get allotted one public IP address. We have about half a rack of Dell servers there, donated to us for this purpose, and we have a NetApp array as well, donated by NetApp, and a couple of Cisco switches. So we have a very minimal compute count: about 144 vCPUs and just under 900 GB of memory. The resources are very finite, so we can only accommodate a small number of people at once, which creates a challenge for us: making sure everyone gets a chance to try it out. So here's a picture of the TryStack data center, a typical data center. There's really nothing out of the ordinary that I can see. These are the data center cats. No one really knows how they got there; they've just kind of always been there. I think they're multiplying, because every time I go back, more of them show up. I don't really question it, they just kind of hang out. I think one of them fixed Neutron once. I don't know. So what do people use TryStack for, and what do people in general use OpenStack for? Well, there's a ton of uses for it. For TryStack in particular, one of the major ones is DevStack. We'll get people who want to quickly spin up an OpenStack environment and then test actual code against it. This might be patches, or it might be sandbox-type code they plan on contributing upstream later. We provide functionality for that. We don't really have any limits on what you can use it for, as long as it's for educational purposes and it's not illegal. We have a good use case in education. There's a university in Slovenia whose computer science department is teaching a course on cloud computing. The professor contacted us a few months ago and said, hey, is it okay if I sign up my whole class and use it as part of my curriculum? Sure, awesome. So that university's cloud computing class is now using TryStack for hands-on learning. That's another good use case. A lot of folks use TryStack for cloud apps, for testing applications and functionality. The RDO project runs some of their external CI on it as well. And some Manila development also happens on the NetApp side. Ben Swartzlander, who's the Manila PTL, does development there on the NetApp drivers and the bare metal drivers for Ironic, on the NetApp infrastructure. Those are just a couple of use cases, but we're seeing more and more interesting things: people spinning up a quick blog just to test functionality, or spinning up an Etherpad or something like that and then tearing it down. But with all these use cases, demand is extremely high. So as of a couple of days ago, I went back and compiled some of the usage metrics for TryStack. Over time, we've had about 22,000 users total since its inception. That's quite a bit for a public cloud, especially one of this size. Current active users is about 3,300 right now. On average, we see about 300 instances spun up every day, more or less, which is about 7,500 a month. And we track all this with some tooling, which I'm going to talk about in a little bit. And then we have about 800 active networks. So this is a snapshot, as of a couple of days ago, of the active usage.
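Just to make that metrics-gathering concrete, here's a minimal sketch of the kind of counting job you could point at any OpenStack cloud with admin credentials. It assumes the openstacksdk library and a clouds.yaml entry named "trystack"; this is the general idea, not our actual tooling.

```python
#!/usr/bin/env python3
"""Rough usage-metrics snapshot for an OpenStack cloud.

A sketch only: assumes openstacksdk is installed and that admin
credentials live in a clouds.yaml entry called "trystack".
"""
import openstack

conn = openstack.connect(cloud="trystack")

# Count instances across every project (needs an admin token).
instances = sum(1 for _ in conn.compute.servers(all_projects=True))

# Count Neutron networks and Keystone users the same way.
networks = sum(1 for _ in conn.network.networks())
users = sum(1 for _ in conn.identity.users())

print(f"instances={instances} networks={networks} users={users}")
```

A cron job that dumps numbers like these into Graphite is enough to drive the dashboards I'll show in a second.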
Now it's important to note that I'm not very good at microphones. It's also important to note that occasionally we completely wipe TryStack. Any time we upgrade to a new version of OpenStack, we wipe everything. But if you've ever created an account there, then when you log back in, the way the authentication works, your account gets automatically recreated if it isn't already active. So that's where the 22,000 number comes from: it's everyone from 2011 forward. The 3,300 number is anyone who's used it actively since we last installed the current version. So, illustrating this in more of a dashboard view, this is Grafana, which is one of the tools we use. It lets us visualize various metrics happening on TryStack. We can see the number of active tenants here, free memory, really anything we can collect with CollectD, which is another tool we use. It's very useful for admin teams to have a quick glance at what's happening in your infrastructure. So, on to tooling. When you install OpenStack, when you set things up, the fun just begins. You're not done, you're just getting started. There's a tremendous amount of cleanup and maintenance that needs to be done, and operational tooling you want to put in place, so that you can manage the environment with a minimal number of people, do so intelligently, and do so in as automated a fashion as possible. So we use some pretty standard tools. Right off the bat, config management is extremely important. Right now we use Puppet for config management. We're looking at using Ansible as well, so we use kind of a hybrid approach, where Ansible pushes the manifests. How many people are familiar with Puppet, before I get too in the weeds here? Okay, awesome, about half of you. So we use standalone Puppet manifests. We've got a lot of resources put into those. We've got a lot of, I don't want to say debt, but a lot of mindshare put into Puppet configs to automate things. We don't want to throw that out; we want to reuse it. But this removes the necessity of having a client-server model with Puppet masters and clients: we keep standalone Puppet manifests, and Ansible is the delivery vehicle that pushes them out. Those run, and the different servers inside TryStack end up in the state we intend them to be in for things to operate. We're investigating other ways to do it, possibly using Heat templates in the future, but right now we're pretty much all Puppet. That's the config management part. For graphing and trending, like the dashboard we just saw, we use Grafana and Graphite with CollectD, which runs on each of those hosts. For logging and aggregation, because we definitely don't want to be in the business of SSHing to individual servers and rooting through log files when something goes wrong, we use the ELK stack: Elasticsearch, Logstash, and Kibana. And then lastly, we use Nagios for monitoring. There are a lot of monitoring tools out there, many very good ones. Zabbix is very popular; I believe the OpenShift guys use that. Sensu is another up-and-comer, which looks quite good. But we chose Nagios because we're comfortable with it. It's something we've used for a very long time, it's got a big community behind it and a lot of plugins, and for what we do with OpenStack, it's extremely easy for us to write custom plugins.
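To give a flavor of how low the bar is for a custom Nagios plugin: it's just a program that prints one status line and reports OK, WARNING, or CRITICAL through its exit code. Here's a hypothetical example that checks whether the Keystone API answers; the endpoint URL is a placeholder, and this isn't one of our actual checks.

```python
#!/usr/bin/env python3
"""check_keystone_api: a hypothetical custom Nagios plugin.

Nagios only cares about the exit code: 0=OK, 1=WARNING, 2=CRITICAL,
3=UNKNOWN. The endpoint below is a placeholder.
"""
import sys

import requests

KEYSTONE = "http://controller.example.com:5000/v3"

try:
    r = requests.get(KEYSTONE, timeout=10)
    if r.ok:
        print(f"OK - keystone answered in {r.elapsed.total_seconds():.2f}s")
        sys.exit(0)
    print(f"CRITICAL - keystone returned HTTP {r.status_code}")
    sys.exit(2)
except requests.RequestException as exc:
    print(f"CRITICAL - keystone unreachable: {exc}")
    sys.exit(2)
```

Wire that into a Nagios service definition and you have an OpenStack-specific check in a few minutes; that ease is most of why we stuck with Nagios.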
Some of the functionality you want to monitor with OpenStack is not going to come out of the box. You're going to need to tailor that yourself; it's still very much a DIY situation. And I'm going to get into the Nagios bits here in a second. So, ugly as it is, and it's written in Perl, so don't hold that against it, this is our Nagios dashboard. Now, we've extended this, and it's publicly viewable: anybody can hit it and see what's going on with TryStack at any point in time. But I want to point out one check here. If you look about halfway down the page at "09 floating IP allocation", this is an example of a custom check, and this one single check will tell us whether the entire OpenStack cluster is operating as it should. So we have a check that goes in and spins up an OpenStack instance, which tests all the Nova functionality. It attaches a floating IP address, pings the floating IP address, and gets a response. It then SSHs into the instance, runs an arbitrary command, gathers the result of that command, and records it. It does this every 15 minutes, then tears the instance down. Now, the point of this is that through that process, just about every piece of core OpenStack functionality is tested. If any of those steps fail at all, it alerts. And if it alerts, we get pings on IRC, and people don't like to see things spam the screen, so they let us know very quickly. So that's a small example of the custom tooling we've had to put together for health checks, to make sure things work correctly. The reason I point this out, too, is that you might be able to bring up a VM and ping it, but not be able to SSH to it, because maybe the metadata service is down, or various parts might not work as you expect them to. We're not there yet, but this is where we want to get: full coverage of every single failure scenario we would see in a production OpenStack implementation.
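To show the shape of that check, here's a rough end-to-end sketch using openstacksdk. Our real check is a Nagios plugin written in Perl; the cloud name, image, flavor, network, and login user below are all placeholders, and the 15-minute cadence comes from the Nagios scheduler, not the script itself.

```python
#!/usr/bin/env python3
"""Sketch of an end-to-end 'boot, ping, SSH, tear down' health check.

Placeholders throughout: assumes openstacksdk, a clouds.yaml entry
named "trystack", and a CirrOS-style image with an SSH key injected.
"""
import subprocess
import sys

import openstack

conn = openstack.connect(cloud="trystack")

# Boot a throwaway instance; auto_ip also attaches a floating IP,
# which exercises Nova, Glance, and Neutron in one shot.
server = conn.create_server(
    "health-canary", image="cirros", flavor="m1.tiny",
    network="private", auto_ip=True, wait=True)
ip = server.public_v4

ok = (
    # Ping the floating IP: tests the Neutron/gateway path.
    subprocess.call(["ping", "-c", "3", "-W", "5", ip]) == 0
    # SSH in and run an arbitrary command: tests keys/metadata too.
    and subprocess.call(
        ["ssh", "-o", "ConnectTimeout=10", f"cirros@{ip}", "uname -a"]) == 0
)

# Always tear down, releasing the floating IP back to the pool.
conn.delete_server(server.id, wait=True, delete_ips=True)

if ok:
    print("OK - boot, ping, and SSH all succeeded")
    sys.exit(0)
print("CRITICAL - end-to-end instance check failed")
sys.exit(2)
```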
So, housekeeping. When you install OpenStack, when you set it up, you're just getting started. With TryStack, we've got a lot of people who want to use the service. We get dozens and dozens of requests every day from folks who want to be added, who want to check it out. We have hundreds and hundreds of instances spawned every day, and a very finite amount of resources. With a /24, we only have about 250 usable IP addresses, and with 300 instances spawning every day, we're cutting it pretty close. So there's a certain amount of housekeeping that has to happen to ensure that people have resources, that distribution is fair, and that people who are interested in OpenStack still have a place to try it out. To that end, we have tooling we've put together that goes through and culls resources after a certain amount of time. Floating IP addresses, for example, are purged every 12 hours. Cinder volumes are deleted every 48 hours. The instances themselves, the VMs, are deleted every 24 hours, and we clear the gateways every 24 hours as well. And we've adjusted this over time: depending on demand, on how many people want to use the service, we might extend those windows. So we might say, okay, we're going to let people have a public-facing IP address with their OpenStack instance running for an entire day, or for two days. But as demand has risen and the amount of hardware has not, we've had to shrink and grow what we allow people to use. So here's an example, again from the Grafana dashboard. You can see this is when the culling happens, when these tools fire off for us to reclaim resources. And we're always playing a very dangerous game, because we always get very close to max capacity before the tooling fires and cleans the slate for other people to come in and try things. You can see very clear patterns here of usage: creation, instances spinning up, IPs flying around everywhere. And at a certain point, when it becomes time to reclaim things, we cut it all down, and then it grows up again. That cycle rinses and repeats, and it's kind of an ongoing battle. It's a challenge from an operational perspective, because you only have, you know, 12 or 13 servers right now, and you have hundreds and hundreds of people who want to use the service daily. So, further automation. Coming back to running an OpenStack cloud with a very small team: you need to automate as much as possible. If you have a very small team, especially a geographically distributed one, you don't ever want to be doing something more than once, especially manually. So there's some additional automation we've put in place. The first major thing we've done recently is the TryStack website content. If you go to trystack.org, you'll see the CSS content there, and the login button where you can get into Horizon and start messing with your OpenStack instances. That website content is managed the exact same way that code in OpenStack is managed. The CSS content is in a GitHub repo. If I, or anyone else, wanted to make a change to it, they would clone that repo and submit it with git-review. That would go through tests and CI, then through Gerrit, where people vote. If people like the change, it eventually gets merged and then automatically appears. It's kind of a trivial thing, website content, but we wanted to stay as close as possible to how upstream OpenStack manages its own code repositories. The other thing that's been useful so far, and this is a fairly new advancement, is that we've tied the Nagios alerting into IRC bots to have a little more transparency when things go wrong. So when there's an alert, when one of these checks fails, it broadcasts in public IRC and it also broadcasts to us directly, so we have a lot of coverage and people just know what the state of the environment is. It's of paramount importance to be as transparent as possible.
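For the Nagios-to-IRC piece, the mechanics can be as simple as a notification command that Nagios invokes with the alert text. Here's a bare-bones sketch; the server, channel, and nick are placeholders, our actual bot is more robust, and a real client should wait for the server's 001 welcome reply instead of sleeping.

```python
#!/usr/bin/env python3
"""Naive Nagios-to-IRC notifier sketch (placeholders throughout).

Nagios would call this as a notification command, e.g.:
  notify-irc.py "PROBLEM: 09 floating IP allocation is CRITICAL"
"""
import socket
import sys
import time

SERVER, PORT = "chat.freenode.net", 6667
CHANNEL, NICK = "#trystack", "trystack-bot"

def announce(message: str) -> None:
    s = socket.create_connection((SERVER, PORT), timeout=30)
    s.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :{NICK}\r\n".encode())
    time.sleep(5)  # crude: a real bot waits for the 001 welcome reply
    s.sendall(f"JOIN {CHANNEL}\r\n".encode())
    s.sendall(f"PRIVMSG {CHANNEL} :{message}\r\n".encode())
    s.sendall(b"QUIT :done\r\n")
    s.close()

if __name__ == "__main__":
    announce(" ".join(sys.argv[1:]) or "test alert")
```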
So, challenges. Cloud is not easy. There's a lot of work involved: a lot of post-install work, a lot of tooling, a lot of automation that needs to happen. Demand and growth: dozens of people try to sign up every day. The account approval process is done manually; we try to determine that someone's a human before they can join, but other than that, everyone's free to check it out. Security: with any public-facing service, whether it's a cloud-based service or even just a web server, security is always a challenge. You see some very, very interesting usages of the service. So we do have tooling in place that can say: tell me who owned this floating IP address at this time back in history. We have an audit trail. Now, this functionality doesn't come with OpenStack. With the default tools that ship with OpenStack, you can't determine who had a certain IP address at a point in time unless you record it. So we have tooling that does all this for us and very quickly tells us, if someone's breaking the rules or the terms of service, who it is, and then we can ban them. We don't want to do that, but people have to follow the rules. We've had a couple of cases, one in particular where a lad was torrenting a Justin Bieber album. Now, I'm a strong believer that people should be able to listen to whatever music they want, but you shouldn't pirate copyrighted music on a public-facing service, especially one that's free.
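A sketch of that audit-trail idea: since OpenStack doesn't keep floating IP ownership history, a small cron job can snapshot the mapping on a schedule so you can later answer "who had this IP last Tuesday at 3 a.m.?". The database path, table, and cloud name here are hypothetical, not our actual schema.

```python
#!/usr/bin/env python3
"""Hypothetical floating IP audit-trail recorder, run from cron.

Assumes openstacksdk with admin credentials in a clouds.yaml entry
named "trystack"; the SQLite path and schema are made up for this
sketch.
"""
import datetime
import sqlite3

import openstack

conn = openstack.connect(cloud="trystack")
db = sqlite3.connect("/var/lib/trystack/fip-audit.db")
db.execute("""CREATE TABLE IF NOT EXISTS fip_audit
              (ts TEXT, ip TEXT, project_id TEXT, port_id TEXT)""")

# One row per floating IP per run: who holds what, right now.
now = datetime.datetime.utcnow().isoformat()
for fip in conn.network.ips():  # all floating IPs, admin view
    db.execute("INSERT INTO fip_audit VALUES (?, ?, ?, ?)",
               (now, fip.floating_ip_address, fip.project_id, fip.port_id))
db.commit()
```

Querying that table by IP and timestamp is then a one-liner when an abuse report comes in.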
Rationing of resources: again, it's the same theme. We have more people who want to use the service than we have resources to give out. And one of our biggest issues now, kind of our main focus with TryStack, is the auth system. There was a historical decision made back in 2011 that the authentication mechanism for TryStack would be Facebook. So who's a fan of Facebook here? Who has a Facebook account? Raise your hand. Yeah, probably everybody. And if you don't have one, guess what, you have one, because they just make them for you. But Facebook was, at the time, the choice for authentication: they have an auth API that ties into OAuth. You have a Facebook account, you join the TryStack Facebook group, and now you can log into Horizon on TryStack. That is the number one complaint we get with TryStack: when is Facebook going to go away? It's historical, and I'll get to that. In the meantime, what we've done is allow API access. If you log into TryStack once, you can retrieve your API credentials, and you never have to go through Facebook again; you can log in directly to TryStack with the CLI or with Horizon. Now, we are in the process of moving to OpenStack ID, which is kind of the end-all-be-all authentication mechanism for OpenStack. So if you have an OpenStack Foundation account, you'll be able to log directly into TryStack, no ifs, ands, or buts. We don't have plans to decommission the Facebook auth; we're just going to let it be secondary, because there are still 22,000 people in that community, and it's been quite helpful. A lot of folks answer questions in there; it's a pretty vibrant community. It is Facebook, for what it is, but that's not something we want to scrap. It just won't be the primary auth mechanism, and that's something we're actively working on. It's about number one on the priority list, which is why it's here under challenges. So, more improvements. We talked about OpenStack ID. Another thing we need to work on is better monitoring. You're never truly done with monitoring; there's always some area where you don't have test coverage, and something's going to break. Two days ago, when I was getting the most up-to-date usage metrics, the Grafana subsystem was down and a portion of CollectD wasn't reporting in. The Nagios services were running and the checks were passing, but physically it wasn't writing data. So that's another little edge case we just didn't anticipate, but it resulted in a failure, and then we had a gap of about a day or two where we didn't have any data and didn't know what was going on with the environment. As you run a large public service, as you operate an OpenStack cloud, you're constantly finding little areas and gaps you didn't know you needed coverage for, or things you need to adapt a little better, because it's different from your traditional array of services. Networking is another big spot we're going to improve. Right now we have a /24 of public IP address space. We're trying to get a /23 to at least double it, and if we're lucky, maybe a /22. That will allow more people to use the service, and hopefully let people keep their instances running and publicly available for longer. Hardware: this current quarter and the next, we're going to be doubling the hardware footprint of TryStack over its current capabilities, so we should be able to accommodate a lot more people. And lastly, there's been a lot of demand from people offering support, either development resources or administration time, simply to help us run the operation. A very small number of people run the service, and we need all the help we can get, but we don't yet have a framework to let people outside the OpenStack Foundation, or outside our community list, jump in and help. So that's something we're also working on. So, roadmap. We strive to upgrade TryStack every time there's an OpenStack release: every six months, when a new version of RDO comes out with the latest and greatest OpenStack, we strive to upgrade. We've been bad about that. We're still on Kilo on the current TryStack, and we should at least be on the M release by now. So that's one of the things we want renewed enthusiasm for: the second the RDO release drops, we want to push it out to TryStack and be among the first consumers. We talked about expanding the hardware, but specifically, we're going to be replacing the Cisco switches with some newer Juniper gear. Juniper's a big partner in the OpenStack space. And we're going to be expanding the server footprint and, like I said, the networking. So here are some of the folks who operate the current implementation. Dan Radez and Nachi are kind of the original founders. They set the thing up to begin with and were a big driving force behind some of the development, namely the Django Facebook Horizon integration. Currently, myself and Kambiz Aghaiepour are the admins of TryStack, and then we have Ben Swartzlander, who is the Manila PTL, and he helps out as well; he does a lot of the upstream development for Manila on TryStack. So I've done a lot of talking up here, but I haven't really shown you anything. I want to illustrate the complexity of OpenStack, but I don't want to do it with a computer, I want to do it with humans. So I need four volunteers from the audience. Okay, Francesco? Okay, okay, yeah, Chris? Alicia? Okay. Okay, come on. All right. Now, what's going to happen is each of you is going to be an OpenStack component. Okay? So, he wants to be Neutron. Okay, get up here. Okay, you're Neutron. There you go. Okay, face your colleagues here. Okay, he wants to be Ceilometer.
Creepy Ceilometer, digging into everything, digging into people's business. All right, he wants to be RabbitMQ. Okay, you'd be a good Rabbit. Now, Rabbit, why don't you sit right here, because you're the one that drives everything. Now, I have two Nova compute instances, two Nova servers, left. Who wants to be... okay? I've got a hell of a choice here. Okay, you'll be here. And you'll be Nova as well. So I'm just going to illustrate some of this complexity that you don't need to deal with if you use TryStack. Okay, so generally speaking, with RabbitMQ, everything is going to be message-bus driven. Rabbit's just going to sit back and fire off a bunch of stuff. So, in a typical scenario, Rabbit is going to tell Nova to spin up an instance. Rabbit, tell Nova to spin up an instance. Okay, now we have an instance going. Now, this instance isn't useful without IP addresses, so we need some Neutron in here. So why don't you tell Nova to talk to Neutron and get a floating IP? Okay. Oh, okay. Now we have network connectivity. It's working. This is awesome. Things are running. Okay, Ceilometer, what are you doing? Aren't you supposed to be checking things out and telling me what's going on? Okay. And then, you know, sometimes things go wrong, I guess. Yeah. Neutron crashed. Neutron crashed again. What's Neutron doing? And then sometimes, yeah, apparently this one is down. This is a down instance. He's just not listening to anybody. So, see, this is what you don't have to deal with if you use TryStack. All of this complexity is gone. You don't have to worry about it. We do all the hard work; you just get to go and enjoy your OpenStacking. Let's give my volunteers a round of applause. I'm going to open it up for any questions anybody has. Any comments? Yes, sir. Is Rabbit that messy in real life? Yes. Yes, it is. Yes, sir. Right. Oh, we run everything from cron. No, we keep track of when that floating IP address was actually mapped, and then the clock starts, so you should always get the full 12 hours of usage time.
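To make that answer concrete, here's a rough sketch of the kind of cron-driven culling job we're describing: release any floating IP that has been mapped for more than 12 hours. It leans on Neutron's updated_at timestamp as a stand-in for the mapping time we actually record, and the cloud name is a placeholder.

```python
#!/usr/bin/env python3
"""Cron-style culling sketch: release floating IPs older than 12 hours.

Assumes openstacksdk and a clouds.yaml entry named "trystack"; uses
Neutron's updated_at as a stand-in for the recorded mapping time.
"""
from datetime import datetime, timedelta, timezone

import openstack

MAX_AGE = timedelta(hours=12)
now = datetime.now(timezone.utc)

conn = openstack.connect(cloud="trystack")
for fip in conn.network.ips():          # every floating IP, admin view
    if not fip.updated_at:
        continue
    mapped = datetime.fromisoformat(fip.updated_at.replace("Z", "+00:00"))
    if now - mapped > MAX_AGE:
        conn.network.delete_ip(fip)     # return it to the public pool
```

The same pattern, with different age limits, covers the 48-hour Cinder volume purge and the 24-hour instance cleanup.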
Oh, I should add: these GitHub links are the actual tooling, CollectD plugins, and other stuff that we use to manage TryStack. So if you want to clone these or take a look, that's all the code we use. The first one is the culling and monitoring stuff, where we go through and remove things. We also have the CollectD modules that give us those nice dashboards. And then lastly, we have the Horizon integration with the Facebook API for logins. Yes, sir. So, what version of OpenStack are we on, and how often do we update? We try to update every six months; realistically, it's usually every eight to ten months. We're currently still on Kilo, RDO Kilo. Sure. The question was: what's the problem, why haven't we upgraded to Liberty yet? Just time and money, friend. Just time and money. Just really resources. It's definitely on the agenda. We're probably going to go directly to the M release, and we want to sync that with the move to OpenStack ID. That is the number one complaint, like we talked about earlier: why do I have to use Facebook? Why can't I just log in? Why can't I use OAuth or some non-privacy-invasive middleman to log into the service? So that's also why it's taken a little bit longer, because we want to sync that upgrade with redoing the auth system as well. I will say, you don't have to have a full-fledged Facebook profile to log in. We don't care about any of that information. We just want to make sure you're a human and not someone trying to do something malicious. So there is a manual account approval process, and we do our best to decipher that, but if you do abuse the rules, we do have some pretty swift ways of digging in and finding out who you are and maybe sending people to your house. So, no. Well, at least we'll order some pizzas you didn't want. At least we'll do that if you abuse the service. I did want to shout out to Rich Bowen in the back of the room: Rich has a very useful video on how to use TryStack, how to log in, how to spin up an instance, basically just how to get in there, and it's linked on the front page of the TryStack FAQ if you're interested. Yes, sir. Myself, Rich helps as well, and some of the other folks in the OpenStack Foundation occasionally pop in and take a look. We try to get things approved with a couple of days' lead time. So if you click to request the Facebook group, you'll probably be approved in the next couple of days. We try to be good at that. Yes, sir. Monkey dance? Wait, hold on. All right, how's that? Any other questions? Yes. No, not at all. You just make a Facebook account and then click to join the group. It's a private group. I'm sorry. Oh, yeah, sure. We'll talk later. I have a PayPal account. We'll sort it out. Anybody else? Any other questions? Well, check it out, you guys: trystack.org. We're going to make it bigger and better. We're constantly working on it, so we appreciate those of you who have tried it, and we're accommodating more people so they can continue to use the service. Thanks, guys. [Post-talk conversation; much of the audio is inaudible.] So if you're integrating something like OpenID, you should be looking at Fedora OpenID. Okay, yeah. There's no reason not to integrate it. OpenStack ID is kind of the default; they're trying to federate all the OpenStack services together. I think it's OAuth-based, so... Yeah, but still, since we're also distributing, or using, a Fedora ID, that could be a good idea. I'd have to read up on it; I don't know much about it. There are already folks talking to us about integrating it. Okay. It should be doable. How agnostic is it? Because we really need it to be non-distribution-specific at all, and that's the reason we want to go with OpenStack ID: it's completely agnostic on distribution, and we need to be able to integrate with it.
So, yeah. Right. But it's also a truly agnostic project, so we kind of have to keep it that way. Yeah. So you just plug in whatever ID backend you need; you could even do something like a Google ID or whatever. [Inaudible conversation.] So I would like to contribute. Okay, you should find us on IRC. Freenode? Yeah, freenode. Anyone with ops is someone who operates or develops against it. We need all the help we can get; we just need to work out a framework that allows people to contribute, because right now it's just OpenStack Foundation folks and the corporate sponsors. We want it so that anyone can contribute to the GitHub repository and that kind of thing. Well, I'm more like a system administrator than a developer. I'm working in Red Hat Global Support Services over in the States. I could help there. Thank you. [The remainder of the recording is post-talk chatter, largely inaudible.]