Okay, I think it's time to start. Thank you very much for being here; I'm really glad to be talking with you. I've come from far away: I'm an Italian who moved to France and is now in Singapore, so I'm a little lost in translation, and I've just finished two days of Docker training, so be patient, I won't go too fast. This is an introductory talk on Docker; I'll speak about something else later, so tonight will be a two-part event. There would have been a third talk as well, but I won't be giving that one. So let's begin with this Docker 101. One recommendation: interrupt me whenever you want. Don't hesitate, it will be more fun. I'd be happy to take questions at any point; don't think any question is stupid, just ask it. First, let me introduce myself. I'm an Italian based in Paris, a software engineer at Zenika, a consulting firm based in Paris, and an ex-IBMer: I worked at the development laboratory in Rome. I became an official Docker trainer a few months ago, and I've developed plugins for IDEs, for Eclipse and for Sublime Text, to integrate Docker into them, which is also how I became a contributor to the Docker project. You have my Twitter and GitHub handles here if you want to contact me or if you're interested in these projects. My interest in Docker began as a frustrated developer. In my teams we always had the same problems. The first: a developer comes and says, "It doesn't work in production, but it works on my machine, so it's not my fault." The second always happens when you join a new project and they say it will take five minutes to set up the development environment.
And it turns out it takes one or two weeks, and even after six months you can't run all the tests, because there is always that one test that fails and you don't know what you have to set up. These are problems every developer has lived through, and two years ago Docker looked like one cool solution to them. But before getting into Docker, think about what we want as developers: a way to set up a consistent development environment. It should be repeatable wherever you need it, on your laptop, on a staging server, or in production. You want to be able to version it, so you can roll back to a previous version of your environment if you discover you've introduced a regression. And you want to automate its setup. Traditional virtual machines already have all these features, so they could be a good solution. But a better solution is lightweight containers. Lightweight containers are, as the name says, lighter than traditional virtual machines: they use less memory, less disk space, and less CPU. How do they work? A traditional virtual machine has to load another operating system on top of the host operating system. For example, on a Linux host you can run a Windows VM. The drawback is that if you run a Linux VM on top of Linux, you have two instances of exactly the same OS, and you get no benefit from the fact that both use exactly the same libraries. Traditional virtual machines are really expensive from this point of view. Lightweight containers instead all share the kernel and operating system libraries of the host.
So they are isolated at a higher level, which means there is less isolation. You can think of lightweight containers as curtains in a room, and traditional virtual machines as walls. When you decide to divide your room into smaller rooms by building walls, you really isolate each small room. Containers are curtains: they are less isolated, you can just push the curtain aside and end up in the other room, but they are much easier to set up, and you can rearrange them whenever you like. That's how a container works: containers are able to reuse the layers below them, which is efficient from a CPU, memory, and disk point of view. Lightweight containers are a technology that has existed for 15 years: Solaris Zones and BSD jails were the first examples, and on Linux you have LXC. What Docker has done, and these earlier implementations hadn't, is make containers easy to use, make them widely adopted, and define a standard. Docker today has helped set up an organization to define standards for containers, so there won't be N different technologies per operating system: everybody is converging on the same standard for container runtimes and for how container images are stored on disk. These are the reasons Docker is so successful today: not because it invented a new technology, but because it made existing technology usable. So let's see why we say it's easy to run a container. To run a container, you can just issue this command, and I'll show you right away. What I'm saying here is that I want to run a container starting from an image called ubuntu.
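The command on the slide, reconstructed from the description (the exact flags used in the demo aren't in the transcript, so take this as a sketch), would be roughly:

```shell
# Run an interactive container from the ubuntu image and start a bash shell
# inside it; -i keeps stdin open, -t allocates a pseudo-terminal.
docker run -it ubuntu bash
```

This needs a running Docker daemon; on the first run it also pulls the ubuntu image from the registry, so it will be slower than the sub-second start shown in the demo.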
Understand that ubuntu here is a Docker image containing the utilities you would find on Ubuntu. It's not exactly an Ubuntu operating system, because you always use the kernel of the underlying host OS, not the kernel of an Ubuntu installation; but the libraries and utilities will be the ones you find on an Ubuntu system. You saw that in less than one second we started a container. If I run top here, you can see there are just two processes: one is top, and the other is the bash shell I just spawned. So what happened when I ran the container? We created a Linux container, using namespaces and cgroups, the kernel technologies Docker relies on to run containers. A new file system was allocated for the container, and a read-write layer was mounted on top of it. A network interface was created just for this container, and the container got an IP address independent of the host's IP address. Then we ran a process inside that container, in this case bash, and finally Docker captured the output and returned it to the client. These are all the steps Docker performs to make it that simple to run a process in a container. So we ran an ubuntu container; where does ubuntu come from? It's one of the thousands of images you can find on the Docker Hub. You can find images of Linux distributions like CentOS, Ubuntu, or Debian, but also images of application services: Redis, MySQL, Nginx, WordPress, and so on. Every day there are ten new official images, and these images are maintained by the developers who built the corresponding products: the Ubuntu images are maintained by the Ubuntu team, the PostgreSQL images by the Postgres people.
In the beginning the Docker folks created these images themselves, but they quickly handed them back to the real maintainers of those products. These are the official, ready-to-use images, but you can also create custom images, and you do that with Dockerfiles. Here you see a Dockerfile. The first line says we build on top of an existing image, Ubuntu in the slide; the second instruction says we run something inside the container to build the new image, here installing the curl package; and the last instruction defines the default command that is executed when we do docker run on the image. Let's see it. Let's exit from this container; I already prepared the Dockerfile, a really, really simple one. From this we can build an image, call it myipinfo, and say that the Dockerfile is in the current directory. It's connecting to the Docker Hub to retrieve the Debian image, so it's taking some time; it can be slow. The Debian image is one of the smaller ones; Ubuntu is about twice its size. You also have the busybox image, which is smaller still, and one image that is really popular now is alpine, which is even smaller than busybox. And if you want to start truly from scratch, from an image that has nothing inside, no bash, no shell, nothing, so that you install every tool yourself, there is an image called scratch: you can begin your Dockerfile with the instruction FROM scratch and you will start from the smallest Docker image that exists. Let's try to understand what happened here. Okay, I have a guess: maybe the package list wasn't updated. Let's see if this works better.
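The demo Dockerfile was along these lines (a sketch reconstructed from the description; the speaker's actual file isn't shown in the transcript). Note that chaining apt-get update with the install in a single RUN avoids the stale-package-list problem the speaker just guessed at:

```dockerfile
# Start from the small Debian base image pulled from the Docker Hub.
FROM debian

# Refresh the package list and install curl in the same layer,
# so a cached, stale package list can't break the install.
RUN apt-get update && apt-get install -y curl

# Default command when the container runs: print the public IP address.
CMD ["curl", "ipinfo.io/ip"]
```

It would be built and run with `docker build -t myipinfo .` followed by `docker run myipinfo`.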
Once this command has executed, we will have a new image in our local registry, on my machine, not on the Docker Hub, which is public for everybody. The name of the image is this name here, and I'll be able to run it with this command. It seems to be working better now. So, how many of you have already used Docker? And how many have used it in real projects? Okay, not so many. And in production? Who has deployed Docker in production? Nobody. It's always like that. Okay, we've built it successfully. I called it myipinfo, so we can just issue the command like that. Do you remember the default command I chose to run? It was curl ipinfo.io/ip, so it should just output my IP address. I'll run it. Okay, it's taking some time... but finally it outputs my IP address. So it works. You see, it's really easy and straightforward to create and build a new image, and with a good internet connection it's faster than this. But that's one of the points about Docker: you have to rely on the network. I had a really bad idea on my way here. I had about 13 hours of flying, so I thought, okay, I'll work on some Docker images I need for a software project I'm working on. After two minutes I realized I couldn't do anything, because without an internet connection I couldn't work with the containers. So be careful: even if you pull all the images you need in advance, when you build custom images you will often have to connect to the internet, even if you've only changed one source file that has nothing to do with the network.
Here's why. If nothing has changed in the Dockerfile and you rebuild it, Docker reuses its cache, the layers it has already built. If I change a line, say I add another package to install, it invalidates the cache for that line and for every line below it. So if a later line just copies a file from your local file system, but a line after that runs an apt-get install command, that install will hit the network again even though you already installed those packages in a previous build. Now, I asked you at the beginning to interrupt me, and nobody has. An attendee asks: what's this host you showed us, the host for the Docker 101 demo, where is it? So the question is where my Docker daemon is running: it's running here, on my laptop. That's a good question, and I think you're asking because you're wondering how Docker can run on a Mac, since Docker runs only Linux containers. Am I wrong? The attendee suggests: maybe you're running a hypervisor, a local VM or something; on Linux you can run Docker natively, but on Windows or a Mac you probably need a virtual machine. Yes, the answer is that I'm running it in a virtual machine, a Linux virtual machine that runs in memory, so it boots really fast, in a few seconds, like opening an application. The distribution is called Tiny Core Linux.
When you want to install Docker on Windows or on Mac OS, Docker gives you something they call the Toolbox. It provides VirtualBox, the engine that runs your virtual machine; the image of this Linux distribution with Docker installed inside; and some visual tools to run containers. Someone asks whether it's CoreOS. No, it's Tiny Core Linux, not CoreOS; the distribution Docker ships is called Boot2Docker. CoreOS is also an operating system for running containers, Docker or Rocket containers, but it's not something you usually install on a laptop; it's really a mature operating system that you would use in production to deploy your containers. Next question: am I correct to say that each container has its own private IP address? Then what exactly is the private network behind them? Instead of trying to explain, I'll just show you the output of some commands on my machine. Currently I'm running one container; the docker ps command shows it. Right now I'm on my Mac, so I have to go inside the virtual machine to see the Docker network interfaces. To get there, I just do this. Now I'm inside Boot2Docker, so I'm on Linux, and if I run ifconfig I can see some interesting interfaces. The first one is called docker0, and it's used as a bridge to connect containers to the host's network interface. The host interface is eth0.
Every time you start a new container, Docker creates a new virtual interface, and the container's IP address lives on that interface. When a container wants to talk outside the host, it passes through the docker0 bridge to the host's network interface, and it can accept packets from the outside the same way. That's the default. Another option is to use the host's network interface directly; I can do that with an option on docker run, --net=host. I'll run a container that doesn't stop, in the background. If I run ifconfig now, Docker hasn't created a new interface, because this new container is using the host interface directly. It's a little messy on screen, but if you look here, no ports are allocated for it. For the other container there are ports that are mapped: port 8080 inside the container is mapped to port 32770 on the host. So if you connect to the IP address of the host on port 32770, you reach the container; and you also reach the container if you connect to its private IP address on port 8080, but that's a private address behind NAT, so you won't be able to reach it from outside the host. Okay, more questions? Someone observes that Docker itself also has an IP: on my laptop there's the Mac, and the Docker virtual machine with all the components inside, and the VM has its own IP address. Exactly. So how do the client and Docker talk to each other? The question is how the client and the Docker daemon communicate when they have separate IP addresses, as here where Docker runs in a VM on my Mac. The architecture of Docker is a client-server architecture.
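The networking options just described can be sketched like this ("my-web-image" is a hypothetical image name; the port numbers follow the demo):

```shell
# Default bridge networking: publish container port 8080 on a random
# high host port, as in the demo (e.g. 8080 -> 32770):
docker run -d -P my-web-image

# Pin the host port explicitly instead of letting Docker choose:
docker run -d -p 32770:8080 my-web-image

# Skip the bridge entirely and share the host's network interface:
docker run -d --net=host my-web-image
```

With --net=host no new virtual interface is created and no port mapping is needed, at the cost of losing network isolation between the container and the host.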
You can have the client and the server on two remote hosts and they can talk together over HTTP; it's a REST API, so it works perfectly even across two separate machines. That's exactly what happens here: the virtual machine running the Linux distribution has its own IP address, and you access the Docker daemon's API through that address. To make it clear, I'll exit the virtual machine and ask Boot2Docker for the IP address of the VM. If I want to reach port 32770 inside the virtual machine from outside, I can curl that IP address on port 32770. I don't know what the answer will be... okay, it's a hello-world application running there. So I can reach it from outside, and if I go inside the VM I can curl localhost on exactly the same port and get the same answer, because then I'm on the host where Docker runs. But even from outside, I can still reach the containers through the host's IP address. Next question: how do you get inside the container itself? Well, when you run a container, take this one for example, as we saw before you have just one process running inside it, the process executed by the container's command. You don't have an SSH daemon; there is no SSH server, so you can't SSH in. But there are a couple of ways to get access inside a running container. There is a command, docker exec, that lets you execute a command inside a running container. For example, the container I ran before, the one with this ID, is still running; I can exec into it and run a command inside, say bash. Except bash is not in this image.
So unfortunately bash wasn't there; maybe sh was, let's see. Yes, you see, I'm inside; I executed a new command inside this container. Running top, I can now see the shell I just executed, the default command of the container, top itself, and a sleep 1 that runs once in a while. If I do another docker exec on the same container, that will be yet another process. So I can add processes, I can get into a container this way, but there is no SSH server running in it. Okay, any more questions? How can one container communicate with another container? That's a really good question, and I have some slides about it: we'll talk about volumes later, and I'll take the chance then to talk about links, which let containers talk together. Next question: that port, 32770, is there a way to control it, or does Docker randomly pick an available port on the host? So the question is whether the port exposed on the outside, the way to reach the container's services, can be selected, or whether it's randomly picked by Docker. You can do both. By default Docker picks a random port, and that's because you may want to run many instances of exactly the same container. Say you want to run Apache, ten instances of it, on the same server. Every instance listens on port 80, so you can't map port 80 on the host network interface for every container. What you do instead is pick random host ports for all of them. That's the default behavior. But if you want a precise port, you can select it; I don't have slides for that, but we can see it here.
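The docker exec flow just demonstrated looks like this (the image and container name are assumptions for the sketch):

```shell
# Start a long-running container in the background, named "demo":
docker run -d --name demo ubuntu sleep infinity

# Spawn an extra interactive shell process inside the running container:
docker exec -it demo sh

# Each docker exec adds a new process in the container's namespaces;
# no SSH server inside the container is needed.
```

This is why containers normally run a single application process: when you need a shell for debugging, you exec one in from the outside rather than shipping sshd in the image.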
You see the ports and the mapping here, but I'm not going to go deeper into that now; maybe it's time for a break, and we'll continue in five minutes. Okay. Next question was: can you take images that have been built for Linux and run them on another Unix operating system, for example Solaris, BSD, or HP-UX? The answer is no: you are tied to the operating system the image was built for. You won't be able to run a Windows container on Linux; that's impossible. That's also a challenge for Docker today, because they have built this big Docker Hub with thousands of images, but they are not prepared for Windows and Linux versions of the same image. Take MySQL: there is an official image for MySQL, but how do you say, okay, I'm running on Linux, so I want the Linux version, not the Windows version? How will they transform the Docker Hub when they have to support multiple operating systems? I don't know how it will work, or whether they are working on it. What I do know is that I tried to run containers on the Windows Server 2016 beta, and the main problem is that there isn't any base image you can pull from the Docker Hub. You don't know how to start a container, because without a base image you can't write your Dockerfile FROM something, and you can't just do a docker run of Windows 7, because there isn't a Windows 7 image. So it's more complicated than "build an image and run it everywhere": you can build an image, and run it on the kind of operating system it was built for. Okay, let's continue with the next slide. It's about persistence of data, and I want to introduce the "cattle versus pets" idea that everybody talks about. We say containers are like cattle because you just use them and throw them away; you don't take care of them the way you take care of pets.
Traditional virtual machines are like pets: you keep a template, you maintain it, you update it. Containers you use once and throw away. To demonstrate, let's run the same container twice. Okay, I'm back inside the container, and I'm going to install curl as before, starting with apt-get update. It's taking some time... quite a lot of time just for that. What I want to show is this: if you run this container, change something, for example install curl, exit, and then run the same command a second time, you won't find the curl package you just installed. That's because when you run a container, you always start from the image again; you don't reuse the container from the last run. That's the way Docker works, by design. So if you want to persist data between one run of a container and the next, for example because your application has state it needs to keep, volumes are the way to do it. Here we are mounting a folder from the host inside the Ubuntu container, at the path /data. If we change something inside /data while the container is running, that data is persisted on the host's file system, so it won't be lost even if we run the container again and again. I'll do a simpler demonstration: I'll just touch a file called foo. I have a new file here called foo; if I restart the container and look for foo, it's not there anymore, because I restarted the container from scratch. Now let's say you want to use volumes.
We rerun with the same options, sharing the current folder as the /data folder inside the container. Let's touch a file, /data/foo. Okay, I run the container again, and the foo file is still here. That's how you persist data: really straightforward, really simple, and a powerful way to save the state of an application. In containers you usually don't save application state inside the container itself; you use a volume. The other good thing about volumes is that they give you 100% of the disk-access performance of the host operating system. They don't go through the copy-on-write layer, the extra layer that makes writing to disk slower. When you write to a volume, you write at native operating system speed, which is faster. So if you have to write a lot to the file system, it's better to write inside a volume. That's it for volumes. I'll finish the Docker command-line options with one more really important option: --link. This is what you were asking before, how we connect containers together. For example, if we run a Tomcat server and we want to reach it from another container, we can just use the --link option with the name of the container we just ran. Let's try it. Okay, we now have a Tomcat container running; you can see the default command it runs, and it's listening on port 8080, but that's the private port, not a published one, so it's not reachable from the outside. But if we run another container linked to it, I can just curl myserver on port 8080 from inside. I don't have curl in that image... ah, the Fedora image already has it, so I don't have to install it.
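The two demos, volumes and links, can be sketched as follows (the container and host-side names are the ones used in the talk; the images are assumptions):

```shell
# Mount the current host directory into the container at /data.
# Files written under /data survive, because they live on the host:
docker run -it -v "$(pwd)":/data ubuntu bash
# inside the container:  touch /data/foo   -> foo appears on the host

# Link a client container to a running Tomcat container named "myserver":
docker run -d --name myserver tomcat
docker run -it --link myserver:myserver fedora curl http://myserver:8080/
```

The volume write bypasses the image's copy-on-write layers, which is why volume I/O runs at native disk speed.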
That would have cost me some time otherwise. As you see, I have access to the host: this old HTML page I'm downloading is served by it. So from within one container I can see another container using this --link parameter. What the link parameter does is simply add to /etc/hosts a myserver entry with the private IP address of the linked container. Okay, the last thing for the introduction is how we configure a multitude of containers that have to interact together. We have seen the Dockerfile, which lets us define a single container; if we want to define many containers and configure, in a source file, the way those containers interact, we can use Docker Compose. This is a docker-compose.yml file, and it has two sections. The first is a db section, which starts a container from the postgres image. The second is a web section, which starts a container built from a Dockerfile in the current directory; we can tell from the build instruction, the first instruction there. We can also define the volumes mounted inside the web container, the ports that are exposed, and the links to other containers; here there is a link to the db container. So with docker-compose.yml files it's really straightforward to configure multiple containers that have to work together. It's the simplest way to do orchestration: Docker Compose does single-host orchestration. More complex tools, like Mesos from Twitter and Kubernetes from Google, do orchestration for distributed systems across many machines; Docker Compose does the same kind of thing on one host, so it's simpler. Okay, that's all for this part.
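The compose file on the slide would look roughly like this (a reconstruction in the version 1 syntax of the era; the mount path and port numbers are assumptions, since they aren't readable in the transcript):

```yaml
db:
  # Start the first container from the official postgres image.
  image: postgres
web:
  # Build the second container from the Dockerfile in the current directory.
  build: .
  # Mount the project folder into the container.
  volumes:
    - .:/code
  # Publish the web port on the host.
  ports:
    - "8000:8000"
  # Add a db entry to the web container's /etc/hosts, as --link does.
  links:
    - db
```

A single `docker-compose up` then builds the web image if needed and starts both containers wired together.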
I also wanted to talk about the Java development workflow with Docker, but I have only ten minutes left, so maybe we should use them for questions instead; the Java workflow will be for another occasion. Question: how portable is it from Windows to Mac? Say I have created a container on a Mac; can I just port it over and run it elsewhere, or does the OS matter? So the question is about the portability of a Linux container to Mac or Windows. There is no portability, except by running a Linux virtual machine on the Mac or Windows host. For example, here I'm running Linux containers on a Mac with OS X installed, and I can do that because I'm running a Linux virtual machine; for Windows it's the same thing. There is still no Docker daemon running natively on OS X or on Windows, and even when there is, containers won't be portable from one operating system to the other, for the reason shown here: containers share the operating system they run on. If the host is Linux, these applications will work; if you put Windows underneath, they won't, because they won't have the same system calls or the same libraries. It's like putting a Windows binary inside a Linux operating system: you won't be able to execute it. Yes? Maybe this is a non-technical question, but I'm very new to this: if I were to take this to a typical organization, who would it excite most, and what business problem does it solve?
So the question is: in an organization, an IT organization mainly, which groups will be most excited about this technology, and exactly which business problems does Docker address? I'd say the people who have been most excited about Docker are the developers, mostly developers. System administrators looked at Docker as the new technology that came out, and they are still not sure whether it's something they should adopt. But that's changing, because Docker is making a real effort to give system administrators the tools they need to install it reliably in production. As for the business problems Docker addresses: it gives you a single way to install the same thing on your laptop, on your colleague's laptop, and on the production server. You no longer have to worry about a dependency on a particular version of Java or a particular version of Ruby. That was the nightmare: you say, ah, I have Python 2.7, and this new application I love needs Python 3.4, and I don't know how to work with both together. There are ways to run two versions of Python side by side, but it's a situation you never want to be in. With Docker you will never be in that situation, because every container contains all the dependencies of its application. For every version of a framework you can have a different container, so you can have ten different versions of Java running together on the same machine, inside Docker containers, without problems. And to a colleague setting up a development environment you just say: do a docker build of the Dockerfile, then docker run, and the application is running.
Question: how are resources managed? In terms of system resources, how does Docker ensure that a container doesn't consume everything? So the question is whether you can make sure a container won't consume too much CPU or memory. There are flags you can use when you run a container; you can see them with docker run --help, and if I grep for "limit" you can see there are cpu-period, cpu-quota, and memory flags. These are all parameters you can pass when you run a container to limit the resources it is allowed to use. If you don't want a container to use more than, say, 2% of your CPU, you can set that with these flags. Next question: how do you share a custom image with a colleague? So the question is how somebody can share the Docker image he has just built from a Dockerfile. You have at least two ways. The first is to put the Dockerfile in your source repository and tell your colleague to download it, or even better the docker-compose.yml file, which defines a whole stack of containers. The second is to use a Docker registry: when you have an image locally and you want to share the image itself, not the Dockerfile, you can issue a docker push with the name of the image. With this command the image is published on a Docker registry and becomes available to everyone who can access that registry. The registry can be a public one; the main one is the Docker Hub, which is a registry, so you can push images there. Or you can have a registry internal to your organization; it's not completely straightforward to install, but there are commercial products for that. On the Docker Hub you can also choose whether to share your images with everybody or keep them private and share them only with your collaborators. Yes, excuse me?
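The resource-limit flags just mentioned would be used like this (a sketch; "stress-image" is a hypothetical image name, and the flag names follow the Docker CLI of that period):

```shell
# Allow the container 50 ms of CPU time per 100 ms scheduling period,
# i.e. roughly half of one CPU, and cap its memory at 256 MB:
docker run -d \
  --cpu-period=100000 \
  --cpu-quota=50000 \
  -m 256m \
  stress-image
```

Under the hood these flags are translated into cgroup settings, the same kernel mechanism mentioned earlier for container isolation.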
Question: basically, I want to create a custom image and share it with my colleagues, so I'd prefer to create a Docker registry of my own. So the question is whether it's possible to create your own Docker registry, and the answer is absolutely yes. There are commercial products that let you deploy a local registry on your network; I have used one called Artifactory that works just fine. When you do a docker push, you would usually push there. But in that workflow it's usually not individual users who push images to your internal registry; it's the continuous integration machine that builds the artifact and publishes it there. What you share with a colleague is usually the Dockerfile: you say, here's the Dockerfile, take it, and with docker build you create the image in your local registry. Okay, thank you very much. Thanks to you. We have another talk, so stay for another 15 minutes: we'll introduce Scott Henry, and then we'll make a small announcement.