How's OpenStack so far? Good? Bad? I'm David Roncek. As you can see, I'm a product manager at Google, where I work on Kubernetes and Google Container Engine. I don't drive the Kubernetes project — I'm merely a member of the Kubernetes open source product management team, and we're actually really proud of that. We consider it one of the most important things about the Kubernetes project: it's very much not driven by Google. It's driven by a whole bunch of people around the world, and Google is just one member.

I'm here today to talk about how Kubernetes and OpenStack are better together. This is something we hear a lot: are you just adding turtles on top of turtles? Is this unnecessary? For us, they really represent different levels of the stack, and I'll get into why we think they work far better together.

So, a brief history. Google has been operating in containers for over 12 years at this point. Literally everything at Google runs inside a container, because that's the approach we developed for rolling out applications at the scale we need. We need to develop extremely quickly, deploy around the world, and quickly migrate things as data centers come up, go down, and move, on and on. What we saw is that by building things into containers, our developers were able to roll out applications extremely quickly, focus on just their application, and let the system take care of everything else.

For us, it all started in 2002 with application-specific machine pools. As you might remember, if you're old like me, back in the days of web 1.0 people would go out and buy enormous machines, stack them in their data centers, and roll out their applications onto very, very specific machines for each of their various workloads. Google took the opposite view. We said, actually, we're going to go extremely commodity the entire way. We're going to piece together machines and keep them fairly generic rather than having very specific ones. But to support the way we ran workloads, we had to build these application-specific pools: you would have a set of commodity hardware, but within it a pool dedicated to your mail, your storage, your database, or whatever it might be. And unfortunately, this was extremely painful.

We then moved to more shared machines, using things like chroot, ulimit, nice, and so on. But this led to an entirely new set of problems, particularly around noisy neighbors, and it really limited our ability to share — and as the fleets got bigger, sharing became even harder. So in 2006, some of the engineers at Google were among the first people to commit the pieces to the Linux kernel that enable what we call containers today. And we were certainly not the only ones — folks from Red Hat, folks from IBM, and others gave Linux the ability to separate these things out. We focused on inescapable resource isolation, which enables much better sharing. For us, the focus was purely isolation: separating out the resources for each workload.
At the same time, we didn't have to focus on things like namespacing — that is, if you're going to download a workload from the internet, how do you make sure it won't overwrite the other workloads already on your machine? For us, we already knew it wouldn't, because everything was coming from Google. You can, in fact, see the open source version of how we do containers at the GitHub repo on the slide; for those who don't know, LMCTFY stands for Let Me Contain That For You. Anyhow, that was us. And when we wanted to get out to the real world, we realized that the way we did things at Google was not how people in the real world were going to do it.

So in 2013, Docker came along and really changed the game. They provided a very clean API that allowed everyone to adopt containers. We had seen this play out before: we started with containers, then used something internally to manage those containers, and around 2005 or 2006 we introduced the first version of what we called Borg. Borg is our internal version of Kubernetes. So in 2013, when Docker came out, we said, we've seen this story play out before — the next thing people are going to need is the ability to run containers at scale. In 2014, we released the first version of Kubernetes.

And as recently as this year, you see absolutely enormous deployments: Walmart, the New York Times, Ticketmaster, Concur. These are all enormous businesses betting on Kubernetes, and the reason they're doing that is what you see here. It enables much faster productivity — you're able to get your workloads out to customers much more quickly. It enables much greater scale, particularly efficient scale-out: Pokémon Go was the first application to a billion dollars in revenue (that's a public statement, nothing new there), and it helps with things like Black Friday demand. And we're open. Walmart has 200 warehouses running on VMware, and the entire purpose of Kubernetes is that it runs everywhere. It's multi-cloud. It runs on-prem, on AWS, on Google Cloud, on DigitalOcean, on bare metal — you name it.

I love giving this talk, and I love having this slide out there: our goal with Kubernetes is to give everyone the power to run agile, reliable, distributed systems at scale. This is what we do at Google. We're not saying everyone needs to run at the same scale we do — but this is the ability we want to give everyone.

A little bit about the project. We release every three months; we just had a release in April, and because we release every three months, the next one is coming at the end of June — June 28th, to be specific. You can see the number of contributors and the folks using it. Again, this is something we couldn't be prouder of: the companies behind seven of the top 10 websites in the world use Kubernetes, in whole or in part, to run their applications.

And then, of course, the little story: Niantic and Pokémon Go. I love showing this graph. The orange line is their original target. The red line is their worst-case, highest-peak scenario. And the green line is what they actually did — all on Google Container Engine. In fact, they chose it to orchestrate their cluster at planetary scale.
Okay, so getting into a little bit more about why Kubernetes and why you might want to use it: a lot of people come and say, well, can I just use Docker and call it a day? And the answer is no. While Docker is great as a container runtime and gives you a great API, Kubernetes lets you run your containers in a production-ready way. By production-ready, I mean, number one, it enables faster development — and I'll walk through what that means — much greater reliability, and more compute for less money.

So, faster development: what does that mean? This is how a standard flow for your workload might look. You might start with local development, move to a test environment, then staging, then canary, and then finally production. That's fairly straightforward. The unfortunate part is that at each one of those steps, you now have to create a set of tooling that moves things between all those different locations, and then you have to build separate management — not just to watch the tooling, but to watch each of those environments. And while that's a challenge that should be addressable, the real problem is that you'll frequently have very different environments in all of those places, and that just increases the friction for your developers moving workloads out to production. The idea is that with Kubernetes, you can address all of those things: you can run on your local laptop with Minikube, you can run test and staging environments using native Kubernetes namespaces, and you can do your production setup and canary rollouts all natively as Kubernetes objects.

The second thing is increased reliability. By reliability, we mean: what does it take to get and keep your application running in production? Rolling something out is the easy part — you absolutely do not need Kubernetes for that. You can just type docker run, or unpack a tarball and deploy it, or whatever it might be. The problem is everything else: how do you manage your configurations, stay healthy, roll out updates, load balance across instances, migrate apps, and update your nodes? All of these things are involved in keeping things running in production. One node, that's easy. Two nodes, that's easy. Three nodes — you're going to need something like this.

And then finally, more compute for less money. Every company will face the exact same challenges we faced at Google as we scaled up. You can either have dedicated sets of nodes and roll your applications out to those nodes at the price of utilization, or you can share all those nodes, but then you run into noisy neighbors and all the other isolation problems. With Kubernetes, we very frequently see four times better utilization, or better. The average fleet today might run at 5% utilization; with Kubernetes, we frequently see over 20% utilization of your nodes. That's like quadrupling the size of your data center for free.

Okay, so that's the history of Kubernetes — a quick breeze through it. A lot of people now ask: well, why do I need OpenStack? You're providing all this benefit for me for free. That's great.
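To make that concrete, here is a minimal sketch of the idea: the same Deployment manifest promoted between environments just by switching namespaces, with a readiness probe so traffic only reaches healthy instances. The workload name, image, port, and health path are all assumptions for illustration, not anything from the slides.

```yaml
# A minimal sketch, assuming a hypothetical "webapp" workload.
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: staging        # switch to "production" (or a canary namespace) to promote
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: example/webapp:1.2.0    # hypothetical image and tag
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz             # assumed health endpoint
            port: 8080
```

A canary rollout can then be expressed the same way — another Deployment variant, distinguished by a label or namespace, behind the same Service.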
But I will say that OpenStack fills a very, very specific need for a lot of folks who either already have those deployments in place, or have data centers and machines that they want to make specific use of.

All right, here comes the audience participation question. Who's ready to participate? Yes? I know it's late in the day. What do we have on the screen? A box. Well done, we have a box. Inside that box — no magic trick — is Kubernetes. What is not inside that box is any of these things you can run on top of Kubernetes. What is also not inside the box is anything that sits underneath Kubernetes. You're getting the metaphor here. Kubernetes focuses on handling what it knows how to do well, which is orchestration. On top of Kubernetes, you'll have a set of applications like these. Many people also ask, by the way, how you get onto one of my slides. Let me give you the trick right here. Number one, have a logo with alpha transparency. Number two, make sure the aspect ratio is roughly square, not a long rectangle. Do those two things and you can be on a slide.

We also don't handle providers. While Kubernetes has its own cloud provider directory in the codebase, there's an entire set of resources that Kubernetes requires, and Kubernetes does not want to deal with that stuff natively. The way to think about it is: those applications use the orchestration provided by Kubernetes, and Kubernetes uses those providers to deliver the resources — the VMs, the load balancers, the network. All of those things are outside of what Kubernetes is good at.

So let's focus on OpenStack for a second. Here is OpenStack, and here are many — certainly not all, but many — of the core components of OpenStack that you might need to run a great workload. But these are all just pieces of software; they have to run on your machines. You might currently provision them on bare metal. An alternative is to provision them inside containers, and once you put them inside containers, you're able to schedule them via Kubernetes. Again, it lets each thing focus on what it does well. Kubernetes focuses on running your applications and ensuring they stay up, and it does all those things I highlighted earlier around log aggregation, monitoring, and keeping things always running. But it doesn't handle the actual provisioning of those resources — that's something it wants to hand off.

So how do you provision OpenStack running on top of Kubernetes? We recommend a package manager that has gotten great usage and deployment called Helm. Helm is something we offer as part of the overall Kubernetes project, but it is not in Kubernetes core — that's something we really stress. The Kubernetes core project is designed to let folks run just container orchestration, just what they need. It is not designed to handle everything around Kubernetes — again, all those things like CI/CD and underlying resource provisioning. And you may come to me and say, well, Dave, you've got all that stuff in there already. That is true. We have spent some time, and will continue to spend more, breaking out a lot of the stuff that you see in there today. For example, the cloud provider code is currently in the main Kubernetes repo, and it's our goal to break that out, because it's something that really should live outside. Helm is similar to that.
It's a project under github.com/kubernetes — Helm is a top-level repo there, and you can go and use it. It is not installed by default when you roll out Kubernetes, but it is trivial to install, and I'll show you how in just a second. There are three parts to Helm. First, you have a chart. I should have asked up front — anyone using Kubernetes today? Oh, a lot of people, good. So Kubernetes has this thing, a manifest: it's just standard YAML that anyone can read, and it describes how to run a workload on top of Kubernetes. Helm is very, very similar — it's just a standard set of text files you can read that declaratively roll out your application to Kubernetes. That set of YAML files is called a chart. On your laptop, you'll have a client — that's the helm command — and you'll use it to deploy that chart. And finally, there's something called Tiller, which you install and run on your Kubernetes cluster. It's one command to spin that up and get it running.

In a minute you'll be hearing from the AT&T folks who actually did this work, but basically there's something out there right now called OpenStack-Helm, and it makes it very, very easy to deploy all of these things. On the left-hand side you have all these services. You take those services, describe them as chart files, and you also have values and requirements files — those are just variables that you set on a cluster-by-cluster basis. You take that and you deploy. The first thing you type is helm init. It goes out to your Kubernetes cluster, installs Tiller, makes sure everything is up and running, and finishes with a nice little line that says "Happy Helming". Then you type helm install foo and off it goes: it takes that declarative statement, rolls it out, and your workload is up and running. It does this via that Tiller pod running in your Kubernetes cluster, which rolls OpenStack out onto the cluster.

Okay, so that is Helm — that's when you're doing it yourself, at a fairly low level or a fairly simple scale. When you're rolling out a much larger configuration, where you need to do things declaratively and you might have many of these charts working together with dependencies between them, you're going to need something like Armada. This is where the AT&T folks have really innovated and created a declarative way to roll that out. And with that, I'd like to hand it off.

So I'm Alan Meadows, Cloud Architect with AT&T and one of the cores on the OpenStack-Helm project. And I'm Brandon Joseph, community team lead for the OpenStack-Helm project and also the PTL. We have a live demo in two parts. The first part is a prerecorded video to kick things off: we instantiate the Mitaka version of OpenStack onto a bare metal environment — three physical nodes. That generally takes about six minutes, but we prerecorded it to show you just the first couple of minutes, then we'll fast forward and show you a live, in-place upgrade of that Mitaka environment. We'll do that in about three minutes, with a running workload in the environment.
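(For reference, the flow described a moment ago — helm init, then helm install with per-cluster overrides — might look roughly like the sketch below. The chart name, keys, and image tag are purely illustrative assumptions, not OpenStack-Helm's actual schema.)

```yaml
# Hypothetical per-cluster values override for an imaginary "keystone" chart.
#
# The commands from the talk, roughly:
#   helm init                                   # installs Tiller into the cluster
#   helm install ./keystone \
#     --namespace openstack \
#     --values overrides/keystone.yaml          # the "helm install foo" step
replicas:
  api: 2
images:
  api: example/keystone-api:mitaka              # hypothetical image reference
network:
  port: 5000
```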
So I'm going to go ahead and start the first couple of minutes of this prerecorded video and talk through what's happening with the stand-up. This is a real environment — like Alan said, it's bare metal: two controllers and one compute. Six minutes wasn't fast enough for the demo, so we needed to condense it down a little. But to put this in perspective, we have a scale issue at AT&T, and that's why we needed to look at Kubernetes. This is a completely customized OpenStack deployment — you can turn all the knobs you would typically need to turn — and we can do this in six minutes with a single command.

What we've done to stand up this environment is, essentially, there's a declarative YAML file, and Armada is launching all of the OpenStack-Helm charts at this environment at once. OpenStack-Helm has a lot of great work in it that manages the ordering of things: initializing databases first, creating grants for users to those databases, and then synchronizing the OpenStack services to those databases. One thing to keep in mind is that this whole environment is containerized. The MariaDB cluster is containerized. MariaDB leverages Ceph for persistent volume claims, which is also containerized. There's obviously a clear set of dependencies that OpenStack needs realized at the end of the day, and the OpenStack-Helm charts ensure that things come up in the right order.

It made things easier for developers as well. The common tasks Alan is speaking about — initializing the database and so forth — are common across all the services, so it's easier for developers to get started creating a chart. If you needed an OpenStack service that we haven't developed quite yet, it's easier to jump in and start developing.

One thing you'll notice is that, because of the dependency checking and ordering happening with the help of init containers and everything described in each of the OpenStack-Helm charts, things that need to occur later in the process run in an init phase — an init container that validates the dependencies in that chart. We don't see anything crashing, we don't see anything failing to find its external services. It's a controlled bring-up of the environment.

One of the tools we're using for visualization is another project that works closely with Kubernetes: Weave Scope. That's all the containers you see on the right-hand side — your left-hand side. It's showing the containers and the communication between them, and it makes visualization really nice for demonstrating to you.

So I'm going to pause this here to trim time, and we'll move over to the actual environment, where that run did complete. Let's see if I still have my connections up. Obviously the environment is a little more complex now that it's actually up. My windows got a little disoriented, but we're going to bring up a couple of windows to demonstrate that we're not impacting the underlying environment.
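(As an aside on that dependency ordering: it leans on Kubernetes init containers. Here is a generic, hedged sketch of the pattern — not the actual OpenStack-Helm chart templates — with the service names and images as assumptions.)

```yaml
# Generic sketch of the init-container ordering pattern.
apiVersion: v1
kind: Pod
metadata:
  name: keystone-api-example
  namespace: openstack
spec:
  initContainers:
  # Block until the database service accepts connections, so the API
  # container never starts before its dependency is ready.
  - name: wait-for-mariadb
    image: busybox:1.28
    command: ['sh', '-c', 'until nc -z mariadb 3306; do echo waiting; sleep 2; done']
  containers:
  - name: keystone-api
    image: example/keystone-api:mitaka    # hypothetical image reference
    ports:
    - containerPort: 5000
```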
So if we quickly log into the dashboard that was provisioned when this Mitaka environment came up: one of the things we do as part of the script that instantiates this demo environment is provision a VM. The VM has a pre-baked image running a web server, with a video pre-loaded on it. We've attached a real floating IP to it, and we should be able to ping it remotely. The pre-baked image also has a speedometer utility inside, which lets us look at the network statistics of the VM's interfaces while logged into it.

One thing David mentioned was our use of Armada. OpenStack-Helm just recently became an OpenStack project — we encourage you to look at it and provide feedback. Armada is a little different: it's essentially a Kubernetes-based project, not really tied to OpenStack, and we want it to be part of Kubernetes. It's meant to apply overrides across Helm chart manifests, and it includes additional tooling so you can just fetch a Git repo — if you're using Git for your manifests, you can pull that right in and deploy your cloud straight from Git.

The last thing I'm going to do is start a persistent ping to this VM, and we'll go ahead and launch the actual in-place upgrade. This is a live demo. As you can see, we're running a Keystone check in a loop on the right-hand side and a ping on the left-hand side — making sure we're not losing Keystone, because that's very important. It's a very simple check against the control plane just to ensure it stays up during this whole time. And as I mentioned, the VM we provisioned on top of OpenStack is running a web server, so we can launch a video from that VM and stream it down below — we shouldn't see any interruption there either. This process takes about three minutes, and the upgrade occurs across all the OpenStack components that are running.

We ran through this demo quite a few times — about 30. We know from that that it's successful, but the reason we're so confident is really Kubernetes. It's Kubernetes underneath; it's Helm doing the deployment with the values overrides. So our message to you, very confidently, is to check out Kubernetes as that underlay control plane — and then, using Helm, you can deploy an OpenStack cloud very easily. And no video lag or anything like that; any lag is actually on our side, not on the OpenStack side.

So again, the same things that played out when we instantiated the environment — the same dependency checking, the same machinery ensuring things happen in the right order — are happening here. In other words, when we go from the Mitaka release to the Newton release, a couple of things need to happen: we need to do database synchronizations, and the dependency ordering ensures those happen before we actually launch the Newton versions of the containers. And as Brandon mentioned, Kubernetes is really playing a great role here in doing the rolling updates and ensuring that at least one container is still up and servicing control plane requests the whole time. At the same time, that logic is controllable from the OpenStack-Helm chart: how you want to do the rolling updates, how many replicas you want to keep online — all of that can be controlled.
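(As a hedged sketch of the knobs being described — a replica count and a rolling-update strategy that keeps at least one replica serving during the Mitaka-to-Newton upgrade — a chart might template a Deployment roughly like this. The names, image, and probe path are illustrative, not OpenStack-Helm's actual defaults.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keystone-api           # hypothetical name
  namespace: openstack
spec:
  replicas: 2                  # could be exposed as a chart value such as .Values.replicas.api
  selector:
    matchLabels:
      application: keystone
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never drop below the desired count during an upgrade
      maxSurge: 1              # bring the new (Newton) pod up before removing the old (Mitaka) one
  template:
    metadata:
      labels:
        application: keystone
    spec:
      containers:
      - name: keystone-api
        image: example/keystone-api:newton    # hypothetical image reference
        readinessProbe:
          httpGet:
            path: /v3          # assumed health check against the Keystone API
            port: 5000
```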
And we have a lot of services that are completed — again, check out the project — but we need more help, and we'd like you to check it out. We are firm believers in community collaboration, not just on the OpenStack side but also with Kubernetes, and the union of the two is extremely valuable to us. And I think it's worth pointing out: we have a couple of other things going on this week, like the forum session about cloud native approaches, and one of the topics we'd like to talk about there is what we want to be able to do during a rolling update. We have a very simplistic example up on the screen: we're fetching a token from Keystone, and that's not failing during this entire process. How far can we really take that? In this demo, if we tried to do something like a very complicated Heat stack during the upgrade — we haven't actually tested that, but the odds are it would probably not survive this rolling update. So as a community, we need to decide how far we go in that direction.

Expanding on that — we don't want to take too much of this time, and David was kind enough to have us demonstrate this for you — we have a talk at 9:30 on Thursday where we'll run through a somewhat longer demonstration, talk more about the project, how we're impacting the community, and how you can get involved. And we have a hands-on workshop just after that, too. So if you look at your schedules, we'd love to have you there; all of our cores are helping and will lead you through deploying this.

That's good. You want to go back up? Yeah — this thing will just run through to completion. Thanks, Alan. Thanks, Brandon. There we are.

All right, so don't go far — we'll answer questions in a bit. But what's next? The top lines are: first, Kubernetes continues apace. Kubernetes 1.7 is due in just over a month and a half, currently targeted at June 28th. We're going to be doing a bunch of work related to GPUs, security, a brand new logging infrastructure that will make it much easier to integrate with alternate logging solutions, and a number of pluggability things. You may have heard of the CRI, the Container Runtime Interface; the CSI, the Container Storage Interface; the CNI, the Container Network Interface; and so on. These are things we're really investing in, because while we originally had everything in the core of the project, our goal is to break those pieces out and make them options everyone can swap out, so you can use whatever makes sense for you. I'd also like to call out Rancher, who's doing a number of things relative to the cloud provider code I mentioned earlier. That's in the core of the project right now, and we're trying to break it out to enable using whatever cloud provider you might like. Right now AWS, Azure, and Google are supported natively; OpenStack is there, but we'd like it to get much better, and we'd really like to make it available for every cloud.

Second is around OpenStack and Helm. In particular, we'd love to contribute a number of the things the OpenStack-Helm team did with Armada up into Helm more natively. Right now those are add-ons to the project, and they took particular directions, but we'd love to move them upstream, and we'll be working with the Helm team to accomplish that.
Further, relative to OpenStack-Helm, as CSI and CNI become native and get to GA, we're going to work to bring those in and use them instead of the underlying technologies being used right now. And then finally, we want to help migrate the OpenStack components to be more cloud native. When a component runs inside a container, the container is expected to do a number of things itself: respond on a health endpoint, send logging to standard out — standard I/O — and a variety of things like that. That's what we're trying to do relative to OpenStack as well: as these components run inside containers, how do we make sure they're also microservice-oriented and cloud native?

And then finally, you. Please join us. You can join the Kubernetes project, where there are many, many opportunities. You can join on the OpenStack side — just try out the OpenStack-Helm project and see if it works, and if and when it doesn't, please give us feedback about what we can do better. Join a SIG; we have SIGs — special interest groups — for nearly every aspect of the project, and we'd love to have you participate. And that's it. As we said, Kubernetes is open. This is really not a Google project; it's everyone's project, and we want to keep it that way. Any questions? We have a couple of minutes.

Yeah, so I've been working in the OpenStack community since 2011, and we're trying to get these two communities, OpenStack and Kubernetes, together — it seems clear we're trying to get there. I was reading about the 1.6 release, and it seems the RBAC functionality is still in beta. OpenStack has very strong RBAC functionality with Keystone. So would it make sense to replace the beta RBAC in Kubernetes with Keystone as a backend for authentication? Then you resolve one problem, which is RBAC, and a second problem: you'd have a single tenant management component for all the clouds, both the Kubernetes ones and the OpenStack ones.

Well, there are two different things there — we'll get to the second in a moment. The first is that Kubernetes has a number of extension points right now. The RBAC that exists inside Kubernetes is native to Kubernetes. There are something like 170 different resources and verbs you can interact with — create a pod, create a deployment, create this, delete that, whatever it might be — and those are expressed as native roles inside Kubernetes. They're very, very flexible, and they're also portable: if you wrote a policy that said Bob can create a deployment, and Alice can create an ingress object, but that's it, that's all they can do — that works natively in Kubernetes whether you're running on OpenStack or AWS or Google Cloud or anywhere else. So what you'll need to do, and what we would like to do, is tie that awareness of identity back to the underlying OpenStack or AWS or Google Cloud IAM or whatever it might be. Rather than replace it, I would say we'd probably want to tie the two together. By doing that, you still get a cloud-neutral deployment of your RBAC while being able to associate it with the underlying cloud identity you're interested in. So we should certainly talk about that offline, or come talk to the SIG and figure out how we do that.
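To picture what "Alice can create an ingress object, and that's it" looks like as native Kubernetes RBAC, here is a minimal sketch; the namespace and user name are assumptions, and the user identity could in principle come from an external source such as Keystone via an authentication webhook rather than replacing Kubernetes RBAC itself.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a              # hypothetical tenant namespace
  name: ingress-creator
rules:
- apiGroups: ["extensions", "networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: alice-ingress-creator
subjects:
- kind: User
  name: alice                    # identity could be asserted by an external authenticator
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: ingress-creator
  apiGroup: rbac.authorization.k8s.io
```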
Relative to multi-tenancy — can you say more? There are a number of efforts right now to really think through what it means to run applications on top of Kubernetes as multi-tenant. The security of the underlying workload is kind of the easy part, right? It's pretty straightforward. You can run a container under seccomp, and at Google we have a rule — basically the two-vulnerability rule — which says that as long as it takes two vulnerabilities chained together to break isolation, that's effectively a good line to draw in the sand. That could be, for example, seccomp plus a hypervisor, or a hypervisor plus a separate machine, whatever it might be. That's the easy part. The harder part is when you get into shared resources that don't naturally divide up that way. Your network, for example: I could theoretically have a container that broke out and got access to the Ethernet driver, and presto — the traffic every other application on that machine sends over that Ethernet device is now exposed to that pod. That's where things start to get much more complicated. We have an entire working group from the community thinking this through right now, really working out how to separate these things, and we'd love to have your contributions. Was that what you were talking about with multi-tenancy, or were you talking about something else?

Yeah, so basically the use case we have is that we already have OpenStack clouds, all configured and running, with tenants well defined and permissions well defined. If we build a Kubernetes cluster on top of that — which many of these operators are doing — then we need to redefine the multi-tenancy access. But it's already there, right? So it would be easier if Kubernetes could just access that information, which is already in a database managed by the Keystone project. That's kind of my approach. But maybe it's something we could put together.

No, I think that's a great point. And I always tell people, as they're adopting their first Kubernetes projects: don't reinvent the wheel. Today you likely already have a solution, using VMs or bare metal or whatever you might use, that meets your overall requirements — and those could be security requirements, stateful requirements, multi-deployment requirements, whatever they might be. My initial advice is just start with that. If you require hypervisor isolation between your workloads, great — continue to do that on Kubernetes; that's very easy to do through native scheduling objects in Kubernetes. If you want to share workloads, or you use particular isolation when it comes to networking, great — continue to use that. Over time we do recommend continuing to break down your services and think through being cloud native and microservice oriented, but that's a bit more advanced and not something I would do on day one; it's something I would do on day two or three. What else? Any questions? Comments? Someone? Okay — well, thank you very much.
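(For reference, a hedged sketch of the native scheduling objects mentioned in that last answer — dedicating nodes to one tenant with a node label and a taint, and a workload that opts into them. The label, taint, and image names are illustrative assumptions.)

```yaml
# Nodes would first be labeled and tainted out-of-band, e.g.:
#   kubectl label nodes node-1 tenant=team-a
#   kubectl taint nodes node-1 tenant=team-a:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: tenant-a-workload
spec:
  nodeSelector:
    tenant: team-a           # only schedule onto team-a's dedicated nodes
  tolerations:
  - key: "tenant"
    operator: "Equal"
    value: "team-a"
    effect: "NoSchedule"     # tolerate the taint that keeps other tenants off these nodes
  containers:
  - name: app
    image: nginx:1.13        # placeholder workload image
```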