Welcome, everyone. My name is Adrian Otto, and I've been with OpenStack since the beginning. I currently serve as the project team lead for the Magnum project. And today, I'm going to talk to you about understanding the fundamental differences between containers and virtual machines. And the thing that I want you to remember when this session is over is that something special happened in 2013. A very simple idea came along that changed everything. Before I get to that idea, I'm going to tell you about another one.

In 1873, toothpaste looked like this. It came essentially in a jar, and it was in powder form. You would shake it out onto your toothbrush. And it wasn't until 23 years later, in 1896, that toothpaste came in a tube for the very first time. Now, the idea actually came from the observation of paint being extruded from a similar tube. And the individual who observed this thought, well, look, if we can administer paint from a tube, we could do the same thing with toothpaste. And I just think that would be better. And it turned out they were right. It was better. In fact, it was so good that by 1908 Colgate made it their marketing slogan that they couldn't improve the product, so they improved the tube. In 1962, Colgate opened a facility called the Colgate Research Center. And this is a place where they would work on making better formulas of toothpaste, making different products, trying them out, seeing what worked well and what worked best. And in 1978, somebody who worked in the Colgate Research Center as a lab assistant came up with a really interesting idea. She said, I have an idea that will cost basically nothing and take pretty much no time to accomplish, and it will double the sales of the product. And she approached the management with this idea and said, why don't you allow me to try it? We'll change Colgate, our sales will double, and you'll give me 1% for a year. And they hesitated at first, but then they thought, well, if nothing happens to sales, we're out nothing. So why not? And so they entered a contract with each other. And once it was signed, they put a piece of paper in front of her and said, okay, document the idea. What is it? She said, it's six words: make the opening twice as big. And that's exactly what they did. They made the opening twice as big. And as predicted, they started selling twice as much of the product. And that story would be good if it were actually true, but it's not; it's a fabrication, partially a fabrication, for the purpose of highlighting the point that very simple ideas can be extremely powerful.

Now, there is actually some science relating to toothpaste that is true, that I would love to talk about more. There's something called the toothpaste tube theory, and it means essentially that you can't keep squeezing a tube beyond a certain point to get any additional benefit. It's also used to describe human behavior in legal negotiations. And there's a second meaning to the toothpaste tube theory that says there are diminishing returns after a certain point. So if you have a bounded system where you have pressure continuing to increase and increase and increase, eventually something's gonna go kaboom. That's the idea behind the toothpaste tube theory. But the idea that you really wanna hear about is something relating to containers, which I'll get into in just a moment.
Now y'all came here under the promise that I was gonna explain the difference between something you understand very well and something that you understand a little bit less. This difference breaks down into three main categories: containers can be more efficient, they can perform better, and they have different security characteristics than virtual machines. I'm gonna explain each of these in detail. But before I do, I will remind you that virtualization technology is nothing new. It's been around for longer than I've been on this earth. And it became commercially supported as open source software back in 2003, when Xen became the first open source hypervisor, and it has evolved since. KVM came around about 10 years ago as part of the mainline kernel, in version 2.6.20 of the Linux kernel. And there have been many others.

Containers have also been around almost as long as me. Something resembling containers showed up in Unix back in 1979 with the introduction of a syscall that allowed you to create new roots. We call these chroots, or "ch-roots," depending on your pronunciation. And BSD added these in 1982. There were more container-like structures that showed up in FreeBSD around 17 years ago, things called FreeBSD jails. This was in the spirit of what we currently conceive of as a container. And there were a bunch of others. Around 2013, Google had something called LMCTFY, "Let Me Contain That For You," an effort that was later eclipsed by something even more compelling. So in 2013, a piece of software called Docker was born as an open source project. It was introduced by an organization called dotCloud that was in the platform-as-a-service business before entering the software business. There have been numerous innovations since then. But 2013 is where something really interesting happened. This is the idea that I've been alluding to: the concept of a Docker image, or a container image. Now, the image is the thing that was missing from that entire history that I just covered. All of the container innovations that happened before 2013 did not include a way to encapsulate all of an application's requirements and dependencies in a portable, lightweight bundle. Before then, what you had to do was create a disk image and install what appeared to be an operating system file system, which you would then run the container on top of. And those things weren't very portable. They tended to be very large, which made the creation of containers relatively slow, until this innovation. So when I talk about Docker, I am not talking about the company Docker Inc. I'm talking about the open source software formerly known as Docker Engine, now known as Docker Community Edition.

So let's explore what containers are made of. The first thing that all modern containers in Linux have in common is a concept called a cgroup. A cgroup, or control group, is a feature in the kernel itself. And although I'm highlighting this as a Linux cgroup, there is an equivalent in the Microsoft operating system as well, recently added in version 10 of their operating system. This feature allows you to group a set of processes that run as a related unit. And that group of running processes can be controlled with respect to how much of the host they are allowed to consume: in terms of memory, in terms of CPU utilization, and in terms of how much I/O they do, both over the network and to disks and other devices.
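To make that concrete, here is a minimal sketch of creating a cgroup by hand. This assumes a host with the cgroup v2 filesystem mounted at /sys/fs/cgroup and the cpu and memory controllers enabled; the name "demo" is just an example.

    sudo mkdir /sys/fs/cgroup/demo                              # create a child cgroup
    echo 100M | sudo tee /sys/fs/cgroup/demo/memory.max         # cap memory at 100 MB
    echo "50000 100000" | sudo tee /sys/fs/cgroup/demo/cpu.max  # at most 50% of one CPU
    echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs         # move this shell into it
    # Every process started from this shell now shares those limits as one unit.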
And cgroups may be nested, meaning that a cgroup can be a parent of another cgroup. This concept is important, and I'll explain why a little bit later. The second feature that all modern container systems have in common is the idea of a namespace. A namespace is another kernel feature that allows a restricted view of the system. So instead of showing you every aspect of a running system, it shows you a narrower perception of the system. You get the illusion that you're running on a system that has fewer interfaces, right? If you are just starting up a Linux box and logging in for the first time, no containers exist on it, right? You're in the root. You're gonna see all the running processes. You're gonna see the full view of all the file systems. You're gonna see every network interface, every bridge, every tunnel interface. Everything's gonna be visible to you. But if you create a container and you enter a namespace, that view can be restricted.

Now, there are a bunch of these. There's one called CLONE_NEWNS. This relates to the chroot syscall that I alluded to before: you specify a new file system path that becomes the new root, okay? So this is the most basic one. If you understand chroot, you essentially understand all namespaces, because they all fundamentally follow the same concept, except instead of limiting your view of a file system, they're limiting your view of some other resource. So CLONE_NEWNS is about the view of the file system. There's a UTS namespace. This has to do with what your hostname is, so it allows one container to have a different hostname than another. So when you call uname in a container, you can get a different answer back out. There's also a namespace for inter-process communication, so semaphores and shared memory segments. Wouldn't it suck if you had two containers side by side, one decrements a semaphore, and all of a sudden the behavior of a neighboring application changes as a result, because they both chose the same name for the semaphore? That would really be awful. So with a namespace for those, you can name your semaphores the same thing, or use the same constructs, without interfering. There's another one for process IDs. So if you have a PID namespace that's unique to you and you start a process in it, it's gonna be PID 1. You start another process, that's gonna be PID 2, even though the host already has its own processes numbered 1 and 2. These things are essentially processes that are mapped, through a simple mapping in the kernel, to allow you to have the illusion that you've got the only processes running on the machine. There's also one called a user namespace, which is the same sort of thing as a PID namespace. I'll come back to networks in a minute. The user namespace allows you to have the illusion that you're a privileged user inside the container when in fact you're a non-privileged user with respect to the host. So you might be UID 1000 outside and UID 0 inside. That's how these mappings work. It's a way of restricting the security exposure of the processes that are running within the container. There's a network namespace, which allows you to control what interfaces a container can view. So you can give that container, say, an eth0 and only give it that and not show it anything else, or you can give it a bunch of interfaces that are bridged.
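Here's a quick sketch of several of these namespaces at work, using the unshare utility from util-linux (illustrative; it assumes a reasonably recent kernel with user namespaces enabled):

    # Start a shell in its own user, PID, mount, and UTS namespaces.
    # --map-root-user maps our unprivileged UID to UID 0 inside.
    unshare --user --map-root-user --pid --fork --mount --uts /bin/bash
    hostname demo-container   # changes the hostname only in this UTS namespace
    echo $$                   # prints 1: this shell is PID 1 in its PID namespace
    id -u                     # prints 0: we look like root, but only in here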
And these namespaces, all these different namespaces that I'm describing, can also be nested, meaning it's possible, with both cgroups and namespaces, to run a container that has another container inside it. Now, this is one of the differences between virtualization and containers, a performance difference that I'll talk about some more. But it's important to recognize that this nesting is possible in order to appreciate that difference when we get to it.

Okay, remember I told you there was something special about this concept of the container image, about this being the thing that makes containers special all of a sudden, starting in 2013. So what is a container image? It's not a file system. It's not a virtual hard drive. If you look at how it's actually composed, it's essentially a tar file that has some additional metadata attached to it. One of the important pieces of metadata attached to it is an indication of what container image this is derived from. So much like namespaces and cgroups can be nested, container images can also have a nesting relationship. So you might have a base image that is, say, an Ubuntu distribution: Ubuntu is my base image. I might have a container image that says, yeah, based on that, plus this other stuff too. And it's just that additional stuff that is in the tar file that makes up the container image. And the piece of metadata stating what it's based on is the important part. Y'all with me? Yeah. Okay. So in this relationship, an image can have an arbitrary number of these parent dependencies going back up. So you can have the concept of a base image, a child image, a grandchild image. We could keep going down and down and down. I would argue that once you get down about two or three levels, it probably doesn't make sense to continue doing that. And some versions of Docker, for example, have had limits on this. I think there was a limit at one time that was something like 40 or 60 of these; you couldn't go any further. I think that's since been lifted. But if you can't describe your system in less than, say, four levels, you're probably doing it wrong.

And the same hierarchy that I'm describing maps to something called the Docker registry. So let's talk about that. By a raise of hands, how many of you use Git? I'd say, like, 90% of the audience uses Git. If you understand Git, you already understand how the Docker registry works, because the semantics are the same. You pull in order to get a copy of something out of the registry. You can actually change it, do a commit, and then push that back up into the registry to save a new version of it. And it maps to the same hierarchy that I was talking about before, right? This idea of an image being derived from another. And when we talk about Dockerfiles in just a moment, this will start to become even more clear. Now, if I went into this audience and started polling you individually, asking the question "what is a container?", I'm likely to get a number of different answers from you, maybe as many as 10 or 15 different answers. I would like you to converge on this idea of what a container is, okay? It is the amalgam of a Linux cgroup, Linux kernel namespaces, a Docker image (generically referred to as a container image), and the related lifecycle.
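You can poke at this yourself with the Docker CLI (a sketch; the image tag is just an example):

    docker pull ubuntu:16.04               # fetch an image from the registry
    docker history ubuntu:16.04            # the chain of layers it was built from
    docker save ubuntu:16.04 -o image.tar  # an image really is a tarball...
    tar -tf image.tar | head               # ...of layer tarballs plus JSON metadata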
And all of those things together make up a Docker container. If you're missing the namespaces, it's really not a container. If you're missing the cgroups, it's really not a container. If you're missing the Docker image, you could argue. But I'm saying that what we believe today is compelling about containers doesn't make sense without the container image. It is the differentiator.

So where do babies come from? That's an easier question to answer than where containers come from. There is confusion between what a Dockerfile is and what a Docker image is. I'm going to clear that up. A Dockerfile is the imperative instruction for creating a Docker image. So if you know what a Makefile is: a Makefile is an instruction for compiling a binary. The input to make is the Makefile, and the output is an a.out binary. You can think of a Dockerfile as the same kind of thing. It's like the Makefile. It has instructions in it, which I'll show you in just a sec, that describe how to build the thing. And when it's done, if it's successful, you get a container image out the back. You get a Docker image. So this is not to be confused with an orchestration artifact, or a pod spec for Kubernetes. Those are declarative descriptions of a deployment of an application. That's not what a Dockerfile is for. A Dockerfile is just instructions for building a single image. Let's see what they look like. So this is a simple Dockerfile. It says we're going to start with a CentOS 6 environment. We're going to label it with my name to show that I'm the maintainer; this is only persisted in the metadata for the container. We're going to install the Apache server, expose port 80, and add a script into the container image that is going to execute by default when I start the container, unless I specify something else. That's all this says. So FROM indicates what this container is based on. And there is such a thing as a scratch image. You can define a new base image if you want, called scratch, and put a statically linked binary into it, right? That would be an environment with nothing underneath it. Or you can define a container based on some existing environment.

Now, here's another misconception I want to clear up. When people first start to understand how containers work, they make the assumption that whatever operating system is running on the host is the operating environment that your container is going to enjoy. This is not true. They will all share the same kernel, and I'll explain this in a moment. They will all share the same kernel, but they can all have different environments. I'll explain why this works. But I could just as easily have an image... this one is going to run in a CentOS environment; I could have another one that's built on top of Ubuntu, and I could run these side by side on the same kernel. One application believes it's running in an Ubuntu environment, the other one believes it's running in a CentOS environment, and they work happily side by side on the same host. It does not matter what Linux distribution is running on the host. I'll say it one more time just for dramatic effect: it does not matter what operating system is running on the host. What matters is what's in the container image and what it asks for. Okay, so here's the build command: docker build -t webserver . So -t means tag this build with a name, just like I would tag, you know, a Git branch. And the dot means build whatever is in the current directory.
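Here's a sketch of the Dockerfile being described, reconstructed from the talk (the base image tag, the package name, and the start.sh script name are assumptions):

    FROM centos:6                  # start from a CentOS 6 base image
    MAINTAINER Adrian Otto         # persisted only in the image metadata
    RUN yum -y install httpd       # install the Apache web server
    EXPOSE 80                      # document that the container listens on port 80
    ADD start.sh /start.sh         # add the startup script to the image
    CMD ["/start.sh"]              # run it by default when the container starts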
So this current directory would have the Dockerfile in it, and it would have the start.sh script in it. And that's all that's necessary in order to build this container. So what if I wanted to make a child image based on the one that I just created? I would say FROM webserver, install, say, a LAMP stack, and put in a different start script. And so when I run this container, it's going to cause the other one to be loaded and, of course, whatever the base image was underneath that. And so now I've got this chain of three: I've got this one, the parent (which is the one that I just built a moment ago), and then the base image, which was the CentOS one. So this is how the hierarchy works. And when I build this, I tag it with the word lampstack instead of tagging it with the word webserver. And the reason why I would want to do this is because now they're sharing a common base. So I might have one that's a LAMP stack and another one that's a Node.js stack. They can still share the same base image, which means fewer base images are cached on each host as I'm starting up new containers. So it'll use less storage, and they'll be faster to start. And those are some of the reasons why I would want to use containers to begin with.

I promised I was going to explain these three areas in depth: efficiency, performance, and security. Let's get to efficiency. So many of you have seen this diagram before, except there's a slight difference between this diagram and the one that you've seen before: I have taken Docker out of the one on the left. And the reason why is because Docker is not actually in the execution path of the application, in the way that a hypervisor would be in the execution path between the application and the hardware in a virtual machine. So if I run an application within a container, once the process has started, its performance characteristics are exactly the same. I'm going to use "exactly" in air quotes; I don't like using absolutes. But for the sake of describing how it actually behaves, it is exactly the same as it would be on a bare metal machine. Because the things that make a container different pertain to the maintenance of the namespaces and the cgroups. That affects the process start-up and the process tear-down, but it does not affect how that process interacts with the kernel while it is running. So if I do something like an open syscall, and I read from it, should I expect the performance to be slower just because I'm running in a container? What do you think? Yes or no? No. I should expect it to behave the same as if I weren't in a namespace and did the same thing. Because once my process is running, the fact that it's in a namespace is not changing the execution path between that running process and the equipment it's running on top of. Of course, that's different in the case of running in a virtual machine. If I'm running in a virtual machine, I've got my own kernel, then I've got either hardware-assisted or software-emulated virtualization, something that's imitating a machine, and then underneath that you've actually got the hardware. So in a virtualized environment, should you expect the behavior of your application to be different or slower or worse in some way than it would be if you were running on the bare metal? Of course it would be different. So this is how to think about the difference.
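If you want to convince yourself of that, a rough sanity check might look like this (illustrative; the image tag and paths are arbitrary, and the bind mount makes the container write to the same host filesystem):

    # The same I/O-bound command, on the host and inside a container:
    dd if=/dev/zero of=/tmp/testfile bs=1M count=1024
    docker run --rm -v /tmp:/tmp centos:6 \
        dd if=/dev/zero of=/tmp/testfile bs=1M count=1024
    # dd reports its own throughput; once the process is running, the numbers
    # are essentially the same, because the syscall path is unchanged.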
When people say containers are faster, sometimes they're describing that they're faster to start up than virtual machines, because of the nature of how container images work, this layering that I described, that hierarchy. Another reason why they say containers are faster is because there's nothing interposed between the running application and the hardware itself, in the way that virtual machines can interfere in many ways with your access to the hardware. Now, some may argue, okay, with things like PCI pass-through, you know, mapping an NVMe storage device through with the file system on top, you don't have the same performance drawbacks running in a virtual machine, and so it doesn't matter. But the truth is those behave differently from one kind of equipment to another, and you can't generalize that it's going to be as good as bare metal on every single machine you run on. Whereas when you're talking about a container, I can offer you an assurance that it's going to behave the same way regardless of what the different equipment is. So there is a stronger argument that containers offer a more universal performance benefit for an application than running it in a virtual machine does.

Now, the next difference is a security difference. Does anybody recognize this? What is it? This is Castillo de San Marcos in Florida. It's a fortress. It was successfully defended for many, many years. How many soldiers do you believe it would take to successfully defend this fortress against an external attack? Just a guess. Four soldiers could do it? How many think... let's just be crazy, let's say we need 50 to 100 soldiers to defend this, okay? Now, what if the problem looked more like this? That's a much bigger attack surface, right? What kind of a force would I need in order to defend against something like this? We've got a multiplication problem here. And the same problem exists when you're talking about defending against hostile workloads that are sharing the same host. When those things are isolated in virtual machines, that's one risk profile, okay? When those things are separated only by containers, that is a different risk profile, a much more complicated risk profile that I'll get into in a minute. In the hypervisor world, the number of things that compose the attack surface between two virtual machines on the same host is a very short list. It's shown here on the screen. This is a relatively small attack surface. Is it possible that you could have two running virtual machines, and somebody inside one escapes through the hypervisor and gets into, say, the memory space of a neighboring virtual machine? Is that possible? I sure hope I see nodding heads. Yes, it is possible. But it's also relatively straightforward to defend against, because the attack surface is narrow. By comparison, the Linux syscall interface is not quite as narrow. There are 397 system calls in the Linux syscall interface as of version 3.19 of the kernel. That is like the multiplied Castillo de San Marcos picture that I showed you. That is a much more difficult attack surface to defend. If the only thing between two neighboring containers on the same host is an interface with 397 syscalls, you need a different strategy for defending against escapes than you need when you're defending virtual machines from each other. It's a fundamentally different problem.
So this is a security difference. I want you to remember that the barrier between containers running on the same host, the security isolation barrier, is not as significant as the barrier between neighboring virtual machines on the same host. You can consider it like a dashed line instead of a solid line, for this reason. And this is an attack surface argument. Now, there are ways to make containers as secure as virtual machines, or nearly as secure. Does anybody recognize what this is? Holler it out. Nobody knows what this is? Not a house key. Louder. What is it? This is a bump key. Okay, a bump key is an exploit for a vulnerability in almost every lock that we use on the front doors to our homes and offices today. Here's how it works. You enter the bump key all the way into the lock. You back it out one notch. You twist it very gently to the side, and then you hit it with something, like the handle of a screwdriver. And what happens is all the pins that the key is lined up against jump at the same time, which allows them to line up with the shear line and allows you to turn the lock. This is a fundamental security vulnerability in the physical lock, and this is the exploit for that vulnerability. Now, it's only vulnerable because the strategy that we're using to secure and open the lock is a key. You can change the design of the key. And if you do, if you put millings into the side, now you have pins that are in the side and pins that are in the top. If you put a bump key in and jump all the pins up, the lock is not going to turn, because the shear line is only lining up with the pins on the top, not the pins on the side. So if you fundamentally change the game, you make the lock more secure. You need to do the same thing with containers.

So you've got a bunch of techniques available to you to change the game, to limit the attack surface between these neighboring containers. One of the most common ones that you'll see in practice is using a mandatory access control policy, using SELinux or AppArmor. This just says: you are allowed to do nothing with this kernel except the things that are allowed by the policy. That's different from the way the system normally works, where you specify the things that you're not allowed to do. So if you think about it conceptually like a firewall policy, it's a default-deny policy for interacting with the kernel. Now, the problem with using SELinux or AppArmor as the only security mitigation strategy is that to make a useful policy, it's got to allow an awful lot of stuff, because applications do a wide variety of things. For it to be generally useful across applications, that default policy would need to be very, very permissive, which makes it not very strong. So in order for this technique to be effective, it needs to be tuned on a per-application basis. This is why we don't have a single SELinux policy that works for all applications: the whole idea here is that it's default deny, not default allow. Now, you can also use something else called seccomp, or secure computing mode. This was originally designed for batch processing applications: you would start your batch process up, open the file handles for the input files that you're going to process, and call seccomp, and from then on you're allowed to call only read, write, and exit, and no other syscalls.
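To make the mandatory access control idea concrete before we go on: here's a sketch of attaching a tuned, per-application profile when launching a container (the --security-opt flags are real Docker options; the profile names are illustrative):

    # AppArmor: confine the container with a custom profile loaded on the host
    docker run --rm --security-opt apparmor=myapp-profile webserver

    # SELinux: run the container's processes under a specific SELinux type
    docker run --rm --security-opt label=type:myapp_container_t webserver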
In that original strict mode, if you make any syscall besides read, write, or exit, you're going to get killed by the kernel immediately. Now, it turns out that seccomp has since been expanded to allow you to specify a policy of which syscalls you're allowed to execute. So now you can say: okay, my application requires this interaction with the kernel. That can be specified at the time your application starts; your application can call seccomp directly to set its secure operating mode. And if you do this on all of your neighboring workloads, it makes them much more secure.

You can also nest containers. Remember, I keep harping on this idea that these things can be nested. This is one of the reasons why it's important: containers can contain other containers. Now, in the world of virtual machines, can virtual machines contain other virtual machines? Sure, yeah, you can do that. Why don't we do that? There's a huge performance drawback to doing that, because the hardware virtualization assistance only works for the first level, not the additional levels down below. So if you were nesting virtual machines, you were probably getting really bad performance outcomes. That drawback does not apply to containers. For the same reason that a running process on bare metal performs roughly the same as a running process in a container, a container within a container also behaves roughly the same way, once the processes in that container have started. So you can nest containers, put SELinux policy on the one that's at the top, and then create other containers underneath. And an escape exploit now needs to be multi-stage in order to succeed: not only escaping from one container into the next, but then from that container into the host. So you're making that escape necessarily more complicated, which reduces the risk of exploit.

If you're using Docker, there's a plugin interface for authorization plugins, and you can use it to limit your clients' access to various features in the Docker server itself. So if you don't want to allow something, you can block it. For example, if you don't want to allow them to run privileged containers, you can do that. There's also a feature in the kernel called ASLR, address space layout randomization. This randomizes the way that the memory address space is allocated on the host. I think most distributions are enabling this by default now. But it makes escape exploits more difficult. And there are also features within the hardware itself that both improve performance and make it more difficult to perform these kinds of escapes. Now, those won't protect you against defects in the hardware itself, but they will help you with defects within the kernel, for example.

So in this talk, I outlined three key differences between virtualization and containers. I talked about their relative efficiency, their relative performance, and their relative security. So in general, containers tend to perform better. You can usually stack a lot more containers on a host than you can stack virtual machines on a host, primarily because of the way that they work, right? A container says you're allowed to use at most a certain amount of a resource, but it doesn't preallocate that resource. So if I'm memory-bound, for example, and I create a lot of virtual machines that all have memory preallocated as overhead, I'm probably allocating a whole bunch of memory that I'm not using.
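For example, a container launch that caps resources without reserving them might look like this (a sketch; the flags are real Docker options and the limits are arbitrary):

    # Ceilings, not reservations: memory and CPU are consumed only as the
    # workload actually uses them, so the host can be oversubscribed.
    docker run --rm --memory 256m --cpus 0.5 webserver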
Whereas if I'm using containers, I'm only saying you're allowed to use a maximum amount. So the oversubscription rate is much higher, and from an efficiency perspective, more work ends up happening on the same host. If I'm doing less virtualization work, less mapping going on between the running processes and the equipment, there's less tax on the equipment, so it runs faster. And then I talked about security, and how it's fundamentally a different problem with respect to neighboring containers than neighboring virtual machines. And although there are techniques for making that interface between containers more secure, of which I detailed a number, there's not just one magic bullet that's going to solve all of those concerns. So you need to keep that in mind as you implement these. Now, when you choose which of these technologies to use, it's not an exclusive choice of "I'm only going to run this thing on bare metal in containers" or "I'm only going to run this thing in virtual machines." You can still put containers into virtual machines. It's a perfectly valid use case. And if you care about that additional security isolation, you're still going to get the benefits of the portability of the container image, the, you know, microservices capabilities, but you may choose to use both of them in combination so that you can reap both sets of benefits. The only time you need to feel like you're compromising here is if you've got an application that's exceedingly performance-sensitive and needs to be exceedingly secure. And that's where you're going to need to weigh these techniques. I'll take your questions. There's a microphone in the aisle for your questions.

Howdy. Hi. Did I miss the part where you talked about how two things can be running under different distros and feel like they're running under their own? Yeah, under the same kernel? I did promise to explain that. So when a container starts, it's sharing the kernel with the host, right? The container image specifies what the layered file system will look like when the container chroots into it, right? So essentially, in the example I showed, a CentOS example, it's going to have a file system of the CentOS distro plus the Apache server that I added, right? That's going to be laid down as a second, writable layer. And I can interact with that as if it's a CentOS host. I can create another Docker image that's based on, say, Ubuntu and start it, and it's going to be using the same kernel at the same time. Now, some people believe that just because a distro is set up a certain way, it requires a certain kernel. It turns out that the Linux syscall interface is a stable API. And because it is a stable API, since roughly version 1.3 of the kernel, well, for version 3.19 and later for sure, the interface is not changing. The API is not changing. So the libraries, like the glibc that runs in the Ubuntu environment, have exactly the same interface to the kernel as the one in CentOS. They are literally identical in that respect. So because that is a stable API, it does not matter which kernel you're running. Now, some may argue with me. They may say, no, there are actually differences in the kernel. And to some extent, that's true. You shouldn't expect an application that was designed for today's kernel to work on one that was built 10 years ago; that's an unreasonable expectation. But in terms of what's running today, the general answer to that question is: they are compatible.
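A quick sketch of what that looks like in practice (the image tags are just examples):

    docker run --rm centos:6 cat /etc/redhat-release   # a CentOS userland...
    docker run --rm ubuntu:16.04 cat /etc/os-release   # ...and an Ubuntu userland
    docker run --rm centos:6 uname -r                  # ...each reporting the same
    docker run --rm ubuntu:16.04 uname -r              #    kernel release...
    uname -r                                           # ...which is the host's own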
And I have, for example, five or six different operating environments all running on the same kernel at the same time. And yes, this does work. It does not matter. As long as you're running a modern kernel that has all of the features that all of your applications need, they're going to work. You're welcome. Yes. Hello. So I had, excuse me, a discussion with somebody a while back, and they were making the point that somehow containers, because of the file structure you spoke of, have a greater affinity to the DevOps model, and that would ultimately be one of the things that will, you know, accelerate the uptake of containers. Do you see any truth in that? Well, look, what DevOps cares about is getting through CI/CD fast, right? So any tool that's going to accelerate the process by which you go through your CI, that's going to be considered valuable. And Docker has been shown to greatly accelerate most people's CI, because the containers start more quickly, they use fewer resources, and, once you've created the image, it behaves the same in test as it behaves in production. The binary artifact that you create is both what you tested and what you deploy to production, assuming you're following best practices. It's super attractive for that reason. So yes, I would say it would definitely catch on with that audience quickly. All right, thank you everyone for attending.