All right, I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar, Best Practices in Implementing Container Image Promotion Pipelines. I'm Chris Jens, Cloud Consultant at Level 25 and a Cloud Native Ambassador with the CNCF. I'll be moderating today's webinar. We would like to welcome our presenter today, Baruch Sadogurski, Head of DevOps Advocacy at JFrog. Say hello to that awesome background.

A few housekeeping items before we get started. Yep, you're welcome. During the webinar, you're not able to talk as an attendee. There is a Q&A box at the bottom of your screen, right below the screen share. Please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF, and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of the Code of Conduct. Basically, please just be respectful of all your fellow participants and presenters. And with that, I'll hand it over to Baruch to kick off today's presentation.

Thank you. Thank you very much, Chris. And let's get started. Yes, so we're going to talk about containers today. And, well, when we say containers, we all love to entertain the idea that we don't have lock-in and that there are different container incarnations. But between you and me, when we say containers, we mean Docker most of the time. So excuse me if I use those terms interchangeably during today's webinar, because really, looking at the industry, we are in a state where we say containers and mean Docker, and say Docker and mean containers.

Now, Docker is obviously amazing. It revolutionized the way we do software and everything about how we go about software. But like any software, it is not perfect. And this is a Venn diagram that you can apply to almost any piece of software in your life as a professional. If you think about it really hard, I believe you will find this Venn diagram to be correct: the software that we like, we like less as we learn it really well. And this is true for Docker as well. One of the consequences of learning Docker and discovering how it works is that we don't necessarily trust it. I will elaborate later on what exactly we don't trust in Docker, and we will talk about how we can fix it and how we can build trust in what ends up in our production containers at the end of the day.

As Chris already mentioned, my name is Baruch Sadogurski. I'm the Chief Sticker Officer at JFrog. It means that I go to conferences like KubeCon and give people awesome stickers. Since we don't have physical conferences now and I cannot give you awesome stickers, I will serve as the Head of DevOps Advocacy, and we will talk in this webinar. The most important piece of information on this slide is my Twitter handle, @jbaruch. Please feel free to connect with me on Twitter; we'll take the conversation there.

Talking about codes of conduct and how to behave: this is an amazing diagram from an amazing book, The Culture Map. Since all of us work in multicultural environments, I really recommend you read it if you haven't. On this diagram, you can see that the most emotionally expressive and confrontational people in the world are from Israel and Russia. Well, I managed to be from both. So if I somehow offend you during this talk, I apologize in advance.
The most important slide of this presentation is the show notes. You go to jfrog.com/shownotes and you will find a special page there dedicated to this webinar as the top link. You will find the slides already uploaded there, the video that we'll upload later today, all the links to everything I mention, including the Culture Map book that we just spoke about, a place for commenting and rating, and a very, very nice raffle to thank you for being here. It's a Nintendo Switch Lite with the Animal Crossing game. You should definitely participate and try to win. Okay, so those were my housekeeping items. Let's get to it: promotion pipelines for containers.

When we talk about a concept that we want to apply, we usually look at whether it is something we already did, and how we should adapt it. And the good news is that CI/CD pipelines, promotion pipelines, are something we have been doing for years. This is by no means a new concept, and I'm sure all of you are familiar with those pipelines and how they work.

This is the promotion pyramid — again, something very well known. You run your tests all the way up, starting the second you finish your build, and the tests become more elaborate and take longer, while fewer and fewer artifacts survive and move through the promotion pipeline. So at the end of the day, we run long-running tests on staging, but very few artifacts actually get there and are then ready to go to production.

If we look at the same process from a different perspective, you can see how we promote our artifacts through the pipeline. Once we build them in the CI server, we start to move them through different environments, and we move them once they qualify against the requirements for promotion — we move them through something that we call quality gates. I'm going to speak a lot today about quality gates. This is done in your artifact repository. Basically, you have some artifacts in a dev or integration environment, you decide they are good enough, and you promote them into the system environment: you install them in the right runtime environment — system testing in this example — run all the tests, and if they pass, you promote them to the next level, et cetera, et cetera. Now, this is all very familiar. Whatever you did before you got into containers and Kubernetes and whatnot, you probably did exactly this with whatever technology you used before. The sketch below shows the general shape of it.
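A minimal sketch of that classic flow in shell — the repository names and the upload/promote helpers are hypothetical stand-ins for whatever your artifact repository actually provides:

```sh
#!/bin/sh
# Hypothetical sketch of a classic build-once, promote-many pipeline.
# "upload" and "promote" stand in for your artifact repository's CLI
# or REST API; all repository names are made up.
set -e

./build.sh                                      # build the artifact exactly once
upload app-1.0.jar dev-repo                     # it enters the pipeline in dev

run_integration_tests
promote app-1.0.jar dev-repo system-repo        # quality gate 1

run_system_tests
promote app-1.0.jar system-repo staging-repo    # quality gate 2

run_staging_tests
promote app-1.0.jar staging-repo prod-repo      # ready for production
```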
Now, what changed with Docker is that Docker images are large, and the structure of their management is not trivial. The registry is not just a file system where you put your artifact — your Java artifact or your npm archive. It requires a lot of work to build this pipeline with Docker images. On the other side, we have the very powerful, simple, and appealing docker build command and the Dockerfile. What this actually drives a lot of people to is using the Dockerfile as the artifact that we promote, and then building the image from scratch for each and every environment. So at the end of the day, instead of promoting the artifact that we built, a lot of people tend to promote the Dockerfile and run docker build in every environment. We take this Dockerfile, a source file, as our artifact. We build it for development, and then we build it again for system test, and then we build it again for the production mirror, and then we build it again for the production environment. It's very convenient to do, because all you need to do to promote a text file is tag it in Git. You attach a tag that describes its state, and then you can build and deploy to whatever environment you want.

It sounds like a good idea, but fast and cheap builds are not always the way to go. And I will give you one example. When I wanted to create the most unstable Dockerfile, I said: okay, I will create something that doesn't make any sense; no one will ever do it; it will be for explanatory purposes only. But then I went to the Internet and discovered that the Internet is full of Dockerfiles which are much less stable than anything I could imagine. This file, which started as just an example, is actually used in production — it has many forks, and you can see the link to it in the show notes, again at jfrog.com/shownotes. You'll see that it's not a fantasy; it's actually something that is used.

And this is a horrible Dockerfile, because every line in it refers to an unstable version of a dependency. When you say FROM ubuntu, you actually mean: take whatever version of Ubuntu is on Docker Hub right now, download it, and build with it. And it's the same with Node.js, the latest version; with Python, the latest version; and even adding our app, our JavaScript file, is also a latest version. This is obviously horrible, because it means that every time I run docker build with this Dockerfile, chances are I will get a slightly different Docker image. And that means there is a very high chance that what we run in production will be different from what we actually built in our dev environment, different from what we tested through our promotion — different from what we intended. This is one of the reasons we don't trust Docker a lot of the time: we have this feeling in the back of our head that what we build is not necessarily what we run in production.

We should obviously fix it, and we can try. We can say: okay, we can use a version here, so we will use 19.04. And the question is, is it better? Well, to some extent. First of all, now that 20.04 is out, we obviously won't suddenly get it instead of 19.04. But on Docker Hub, the versions are not immutable. Canonical, for one reason or another, can push a new image and tag it 19.04. So when we build from this file, we may still get a slightly different version of our base image. There are usually very good reasons to do this — usually security vulnerabilities — but still, when you want a repeatable build, you cannot allow it to happen.

There is a way to nail down a version very, very strictly, and that's using the hash. If I use the hash of an image, it corresponds to an exact array of bytes, and this is immutable: it will always resolve to the same array of bytes. The problem is, this is completely unusable. You have no idea what version of Ubuntu my Dockerfile refers to. And frankly, you don't even know whether it's a valid hash — maybe I just fell asleep on my keyboard, or my cat walked across it, and this is what we ended up with. We have no idea what version this refers to. It is very unusable.
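As a sketch of those three stages of pinning the base image — the tag and the digest below are illustrative, not real values:

```Dockerfile
# Unstable: an implicit :latest, plus floating versions of everything inside.
FROM ubuntu
RUN apt-get update && apt-get install -y nodejs python   # whatever the mirrors serve today
ADD http://example.com/app.js /app/                      # whatever the server serves today

# Better: a pinned tag -- but tags on Docker Hub are not immutable,
# so 19.04 can silently be re-pushed under the same name:
# FROM ubuntu:19.04

# Truly immutable, but unreadable (this digest is made up):
# FROM ubuntu@sha256:9b1702dcfe32c873a770a32cfd306dd7fc1c4fd134adfb783db68defc8894b3c
```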
Now, we also have all our applicative dependencies, where you might or might not know how to lock the version — or where locking the version is sometimes impossible. To know how to lock the versions of Python and Node.js in those examples, you need to know how the apt-get command works, which parameters it accepts, and how to specify a version. Now, I bet the majority of you, if you come from an ops background, know exactly how to nail down apt-get versions and how to specify a specific version of Python.

But what about this? This is a Maven command. I guess some of you know how Maven works, and you might imagine that you need to check the POM file and verify that all the versions are nailed down. But then my question would be: what about transitive dependencies? What happens if one of the transitive dependencies has a version range? How stable will this be, and how would you even know about it? Here you are required to have very deep knowledge of this particular build tool, for this particular part of your Docker image, to make sure the build is reproducible and the versions are nailed down. And this goes on: what if I now use Bazel for my Java build — do you know how to pin all the versions there? And what if I use Go — Go before 1.11, with one of the nineteen community dependency tools, or after 1.11, with official Go modules? Do you know how to pin all the versions there? It requires increasingly complicated knowledge to create a reproducible build.

And then the custom stuff. What about that? What if our Docker image just goes ahead and downloads a bunch of files from the Internet? How can we guarantee that those files never change? And the answer is: we really can't. The sketch below shows how tool-specific this pinning business gets.
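A hedged sketch of the tool-specific pinning idioms involved — this is not a complete, buildable image, and all version strings and the digest are illustrative:

```Dockerfile
# Every ecosystem inside the image has its own pinning syntax to learn.
FROM ubuntu@sha256:9b1702dcfe32c873a770a32cfd306dd7fc1c4fd134adfb783db68defc8894b3c

# apt: "package=version" pins an exact version -- assuming the mirror still carries it.
RUN apt-get update && \
    apt-get install -y python3=3.8.2-0ubuntu2 nodejs=10.19.0~dfsg-3ubuntu1

# Maven: even if every version in your POM is pinned, a transitive
# dependency may declare a version range, and the resolved tree can
# drift between builds without you noticing.
RUN mvn -B package

# Custom downloads: nothing guarantees this file is the same tomorrow.
ADD https://example.com/some-installer.sh /tmp/
```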
So this is why the problem exists: when we rerun the build of our Docker image for every environment, we end up with different Docker images in every environment. That's obviously a very, very big problem. The way we solve it is that instead of building an image in every environment, we build once, and promote those binaries through quality gates all the way to production. We run docker build once, and then we have an image, and this is what we're going to promote. Then we go through the quality gates and promote it through our pipeline, every step of the way.

Now, I keep talking about those gates, and those gates are very important, because this is how we guarantee that we actually test, and then stage, and then run the right thing. The gates are there so QA won't get dev images by mistake — images which are not ready for QA; staging won't get images which are not ready for staging; and obviously production won't get any images that are not ready for production use. And this, again, is not as trivial as it might sound, and not as trivial as it might have been in your experience with other technology stacks. Docker makes it a little bit harder. So let's see how we can build a rock-solid pipeline.

The real question we need to answer when we build this pipeline is: how do we separate dev from prod? How do we separate dev from staging, and staging from prod? How do we separate between the environments?

One of the options Docker gives us is metadata. We can tag our images with labels — key/value pairs — and we can say environment: staging, environment: testing. This is nice, but it requires us, first of all, to make sure that all the images are annotated. It requires us to make sure that our runtime environments check those labels every time they pull a Docker image, and it basically cannot be enforced in any way, because there are no RBAC controls on labels. So this is nice, but we can do better.

Another option is using Docker repositories. Repositories in Docker are actually folders in our Docker registry, and what Docker suggests is taking those repositories — those folders — and creating matching folders for each and every image. So each and every image will have its own folders for development, for testing, for production. This is already better, because you can apply RBAC to repositories. But it's still not very useful, because for each and every new image you create — and think about microservices, with dozens of images — you need to remember to create those repositories, those folders, and make sure you attach the correct RBAC to each and every one of them. So this is also nice, but we actually need to do better.

What we want is a separate registry per environment. So we will have a registry with only the dev images, a registry with only the staging images, and a registry with only the production images. And this sounds like something that we should easily be able to do — how hard can it be to stand up a number of registries? — but apparently it's not so easy. It's not so easy because we have historical limitations, a little bit like this one, if you're old enough to remember, that make our ability to have multiple registries on the same host very, very limited.

The problem is the standard Docker tag — how a Docker tag is defined. When you look at the standard, you see that we have the host, we have the port, we have the user, we have the Docker image, and then we have the tag, the version. So basically there is no way to express which registry on the same host — which maturity level — we mean. There is no way to express that we have multiple registries per host. What we want is something like this: I have my host, I have my port, and then I want docker-dev as a separate registry, docker-qa as a separate registry, docker-staging and docker-prod, all as separate registries on the same host. You cannot do it, because the Docker tag format won't allow you. That's a strange little limitation that gets in our way.

So obviously the first reaction will be: well, that sucks, I cannot do it. But then we can start thinking about it and get smart about how we can do it. One of the options is virtual hosts, or virtual ports. Here is how it works. When you run docker tag with host, port, and the tag name, it converts into the URL of the actual request that goes to the Docker registry: host, port, and then the tag name. What we want is host, port, then maybe a context name if we need it (or we can drop it if we don't), then the registry name, and only then the tag name. And the way we can do it is by using fake ports or fake host names. Here's an example with a fake port. We specify in our docker push a port that is not the real one (8081) but a fake one, 5001. Every time our reverse proxy — another layer of abstraction in front of the actual Docker registry — receives a call to this non-existent port 5001, what it actually has to do is translate it into a call to docker-dev. Then 5002 will go to docker-staging, 5003 will go to docker-prod, and so on. Below is a sketch of how that looks from the client side.
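A small sketch of the fake-port scheme, with made-up host names, ports, and registry names; the translation noted on the right is performed by the reverse proxy, not by Docker itself:

```sh
# One fake port per environment registry; a reverse proxy in front of the
# real registry (listening on 8081 here) rewrites each port to the right
# registry on the backend.
docker push myregistry.local:5001/myimage:1.0   # proxy routes to docker-dev
docker push myregistry.local:5002/myimage:1.0   # proxy routes to docker-staging
docker push myregistry.local:5003/myimage:1.0   # proxy routes to docker-prod
```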
Now, this actually works, and this is an approach a lot of users take. The only problem with it is that it requires this additional software: it requires the reverse proxy. A lot of products already have it built in, but it's still configuration and babysitting, et cetera, and we can still do better.

And we can do better by abusing things. Look at this: we have here this user part of the image name. While it can be important, it usually is not used — as you saw in the previous example with busybox, we didn't use it at all. So this token becomes available, and we can use it to specify which registry exactly we want to tag, push, or pull our Docker image from. This becomes very, very easy: while we lose the ability to use it for the username, we gain the ability to have multiple registries per host without a reverse proxy. And this is very, very useful.

Okay, so we've set up multiple registries on the same host, and the next question is: how do we actually promote? How do we take those images from dev to staging, from staging to prod? The way Docker works — the way Docker kind of implies you use it — is pull, retag, and push. Now, this is wrong on so many levels. First of all, we are talking about two registries on the same host. Why would I pull an image to a different host over the network — and images are big — just to be able to rename it and push it back? This is just wrong, but again, there is no native way of doing anything else.

Now, the good news is there are tools that can help us. I will use the example of JFrog Container Registry, which is a container registry that supports all of this, obviously, and is free for you to use, but there are other tools that do this as well. What I'm describing here is the approach you should look for in your tool; it doesn't matter which tool it is. So what you see here is a bunch of registries inside the same tool — again, this is JFrog Container Registry. You can see here we have docker-dev-local, docker-test-local, docker-stage-local, and docker-prod-local, all inside one tool. And then, if you need to promote, all you need to do is issue an API request, and no files are actually moved, because all these images live on the same storage. We don't move files around even on disk — never mind pulling, retagging, and pushing. All we do is change the visibility of those images to our environments. Below is a hedged sketch of both tricks: addressing per-environment registries, and promoting with a single API call.
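To make this concrete, here is a sketch. The first line uses the repository-path trick, with the "user" slot of the tag naming the target registry; the promotion call is modeled on JFrog's Docker promotion API, and the host names, repository names, endpoint, and payload are assumptions to check against your own tool's documentation:

```sh
# Addressing a per-environment registry through the "user" slot of the tag:
docker push myregistry.local:8081/docker-dev/myimage:26

# Promotion is a single API call -- no pull, retag, or push. The endpoint
# and JSON fields below follow JFrog's promotion API; other tools differ.
curl -u admin:"$API_KEY" -X POST \
  "https://myregistry.local/artifactory/api/docker/docker-dev-local/v2/promote" \
  -H "Content-Type: application/json" \
  -d '{
        "targetRepo": "docker-prod-local",
        "dockerRepository": "myimage",
        "tag": "26",
        "copy": false
      }'
```

Because the registries share one backend storage, the call only changes which registry the image is visible in; nothing is copied or re-uploaded.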
Now, some additional features that you might like. You now have four different Docker registries, and you as a developer — or your developers — need to work with all of them. Constantly switching between registries is painful. So instead, if we can have a virtual registry that presents a single registry, but behind the scenes contains a number of them, this obviously helps a lot and simplifies our interaction with Docker.

Another feature: proxying remote registries. This is also very useful to have. First of all, again, we simplify the configuration. Our virtual registry now sees not only all the Docker images that are stored locally, but also all the images that exist in a remote registry like Docker Hub. And it also provides protection against situations when our remote registry is down — and I'm sure you noticed over the last couple of months there were a number of times when Docker Hub was down. If you used a registry that gave you this proxying ability, you obviously weren't affected; if you didn't, you probably were. So this is another nice feature.

And going back to the visibility of those registries from the outside world: when you have your clusters — the dev cluster, the test cluster, the staging and the prod — they only see those registries that they are allowed to see, and this is exactly what we spoke about. This provides the ultimate quality gates, the strongest quality gates there can be. There is no way that a production cluster will be able to access the testing environment, because it doesn't know there is a registry there at all. It only sees one registry: the one for production. And on the other side, there is no push and pull; the promotion is done inside the system. All we do is change the visibility within the container registry. I hope that makes sense.

A number of times during this talk I have called registries repositories and vice versa, and that's because there is a little bit of confusion between those terms. Docker repurposed the existing term "repository" to mean a top-level directory in the registry, while everybody else more or less uses "repository" as a synonym for "registry". So I apologize for the confusion: when we talk about the repositories in JFrog Container Registry, for example, we actually mean different registries within one tool. This is an important clarification.

So, to summarize: what I encourage you to look for is a win-win-win situation, where you have a single point of access to multiple registries when needed, by using a virtual registry; where you have completely isolated environments, meaning isolated registries for every step in your pipeline, with quality gates; and where you have immediate and free promotion, without push and pull. And this is obviously also very important.
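As a small illustration of the single-point-of-access idea — the virtual registry and host names here are hypothetical:

```sh
# One virtual registry aggregates the local dev/test/stage/prod registries
# plus a caching proxy of Docker Hub; clients only ever talk to the virtual one.
docker pull myregistry.local/docker-virtual/ubuntu:20.04   # served from the Docker Hub cache
docker pull myregistry.local/docker-virtual/myimage:26     # resolved from a local registry
```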
Another topic that comes up, especially as a reaction to the first part of the presentation — where I talked about how important it is not to use latest for base images, and how ubuntu:19.04 is much better than just ubuntu — is that a lot of people actually like the simplicity of working with latest. They can just say "give me the latest Docker image" and know that they will get something that has been recently updated and is good to go. The biggest problem with latest is that you don't know latest as of when. Is it really the latest, or is it a "latest" created a month ago, while since then there have been hundreds of newer builds, but latest was never updated?

Now, we can still have both worlds — and this is again an example from JFrog Container Registry, but others probably do it as well — by using metadata to express what latest actually refers to. Using the metadata, you can have a latest that refers to a certain build by number, or a certain tag by number, and this is how we know that this latest actually refers to the actual image with the tag 26. This gives you a win-win: the simplicity of using latest, while always knowing what it really means — as long as the 26 it refers to was promoted as an immutable artifact. Because, if you remember, if you keep rebuilding 26, you will not actually know whether this 26 is the 26 you started with. So remember: first comes promoting immutable artifacts. Once you do that, you can also alias one as latest if you wish, and it's very important to make sure that this connection, this alias between latest and 26, is super clear. It is there in the metadata, and this metadata can be automated, so you can actually ask, with an API and a query language, "what is my latest?" and learn that it is 26. I mean, the UI is definitely nice, and it's very nice for our webinar, but at the end of the day, when you go and automate your pipelines, those questions should be answered with an API and a query language.
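A hedged sketch of what that metadata automation can look like, modeled on Artifactory-style property APIs — the property name, paths, and endpoints here are assumptions, so check your own tool:

```sh
# Attach machine-readable metadata to "latest" saying which immutable
# tag it currently aliases (the property name "docker.refersTo" is made up).
curl -u admin:"$API_KEY" -X PUT \
  "https://myregistry.local/artifactory/api/storage/docker-prod-local/myimage/latest?properties=docker.refersTo=26"

# Later, automation queries the alias instead of guessing what "latest" means:
curl -u admin:"$API_KEY" \
  "https://myregistry.local/artifactory/api/storage/docker-prod-local/myimage/latest?properties"
```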
Now, obviously it is no less important to nail down not only your Docker images but also the rest of your dependencies, because at the end of the day no one uses Docker just for Docker. You always have something inside: you have your npm, your Java, your Go, your C/C++ with Conan. At the end of the day, there are dependencies that also need to be locked down. You need to know that when you install your JDK into your Docker image, this JDK is exactly what you meant it to be. And for that — as I mentioned for Docker, if your tool supports remote repositories — this is what you do for your base image: your base image will be cached in your tool, and then you know that every time you need your base Ubuntu to rebuild your image, it will be there. You can rely on the fact that it's cached, and it doesn't matter if it was changed on Docker Hub, or even deleted, or Docker Hub went away because, I don't know, something happened. You have your cache; you control your dependencies.

And for dependencies which are not Docker — here again is an example from JFrog Container Registry — you can see how having generic storage, generic repositories as we call them, allows you to cache and control the dependencies beyond just your base image. So when you need to put this JDK, and the Apache Tomcat, and in the end your application, inside your Docker image, you know that they all come from a trusted and controlled environment. And exactly as we did with Docker registries, we can do with generic repositories: we can have an entire pipeline of those repositories — dev, staging, pre-prod, prod, whatever makes sense — and then promote your dependencies as you promoted your Docker images, across those quality gates. It works exactly the same for any type of artifact, be it a Docker image, be it Helm charts, which are supported in JFrog Container Registry as well, or be it any type of artifact in a generic repository. So here we go: your JDK and your Tomcat. Basically, you have to own your dependencies if you want to build a reliable pipeline: your base image, by caching it from Docker Hub; your infrastructure — everything that your application needs to run; and your application files as well.

So, to summarize the conclusions. Build only once. This is very important, because there is practically no way to guarantee a repeatable rebuild. We can get closer by doing a lot of work and knowing a lot about our application and about how Docker works — don't use latest, but also don't trust tags; always use hashes — but it gets more complicated with every new technology we put into our container. If we use apt-get for our dependencies, we need to know how to nail down everything in apt-get. If we use Java, we need to know the Java build tool as well. If we use Go, we need to know the Go build as well. It gets more complicated with every technology we add. So instead of trying to nail down every little piece that might make your build unstable and non-repeatable, build only once. If you build only once, you don't have this problem: you can rely on this image being actually the same across all environments. Separate environments: as we mentioned, separating the environments by using different registries is the way to implement the most robust and secure quality gates. Promote what you already built. And own your dependencies — that means cache everything. Don't trust downloading stuff from the Internet, because either you will download something different, or you won't be able to download what you used before, because it isn't there anymore.

So with that, a couple of links. As I mentioned, I'm @jbaruch on Twitter, and #CNCF is the hashtag — and obviously, as a CNCF ambassador, I will be happy if you mention it when you talk about this talk on social media. And, as I mentioned, jfrog.com/shownotes is the place to go to get the slides, the video, all the links, and to participate in the very, very attractive raffle of the Nintendo Switch, to thank you for being here. I'm pasting the link to the show notes in the chat, so you can use it from there as well; it makes it one click away. So with that, thank you very much, and I think it's now a good time for questions.

Awesome, thank you so much. I definitely have some people in mind that I need to forward the recording to after this. So if you have a question, please do put it into the Q&A box or tab at the bottom of your screen, and we'll get to as many as we have time for. The first one we got is from Ravish Tiwari, which is: hi Baruch, are there any other tools that support this retag without pull? This is a much-needed feature — do you think it might be supported by Docker natively?

Yeah, Ravish, thank you for this question. When we look at the landscape of the registries that are available to us — and I'll just name a few — all the cloud providers have their own container registries: Google, Azure, and AWS. We have Harbor, which is an amazing project from the CNCF and a container registry by itself. GitHub also supports a container registry. Everywhere you look, you see a container registry. And the reason why it's so easy to find container registries around is because Docker did a very good job in making the container registry available for distribution — it's actually called Docker Distribution, and it is an open-source, free, and relatively simple-to-run piece of software that allows you to have a container registry. You can then put your UI on it, your brand on it, and provide additional features — like Harbor does, for example, for security — and whatnot. But at the end of the day, most of them, if not all of them, have this Docker Distribution under the hood.
And Docker Distribution means exactly one single, isolated container registry inside, and since it has its own storage, there is no way to promote easily between one container registry and another. So even in tools that can supposedly host multiple container registries, you will unfortunately end up promoting by pulling, retagging, and pushing again. It might be a little bit easier if you do it on the same machine, maybe, or in the same network, or in the same availability zone, but at the end of the day, making shared storage work for multiple container registries required — as in JFrog's case — re-implementing Docker Distribution from scratch. What we have is not an embedded Docker Distribution, and that was the only way for us to implement different views of registries with shared backend storage behind them. So no, I don't know about any other tool that does that. And the good news is, JFrog Container Registry is free to use, so you might very well give it a try.

Oh, thank you so much. There is another question, from Daniel Zilbermann, who is asking if you could actually show a design example of a CI/CD pipeline that performs the internal Docker image promotion from dev to test to prod and deploys it to a Kubernetes cluster. He is working with Harbor, but it doesn't have to be specific to it.

Yeah, so give me one sec and let me check if I have an example of a CI pipeline that does that, and I will be more than happy to share it with you real quick. It won't be the entire pipeline, but I think you know exactly how to deploy to Kubernetes from a registry — the promotion part is the interesting part, though. Let me see if I can find the build and share it with you. So I will stop sharing the slides and instead share my browser. Where is my Firefox... here, okay. Do you see the JFrog Container Registry? Yeah, it works. I need Pipelines — yeah, so I need the pipelines for this JFrog Container Registry. I think we actually need this one. I'm just trying to find the CI server that runs this promotion so I can show it to you; let me look into it real quick and see if I can show you how it actually runs.

So, if I look at the artifacts, I can actually see here — let's take docker-prod-local — that exactly nothing is here. Okay, that's again not the right one. I'm sorry, I wasn't prepared to show this demo. But this is an interesting question.

It's the last question we have, so no hurry, take your time.

Oh, okay. Then I will probably find the right one. It's just that I have a lot of JFrog Container Registries around; some of them show the right thing, others not so much. So, here it is: docker-prod-local. Yay, okay, I found it. So let's look at the 26 — that's actually what I used to take the screenshots for my slides. And if you look at the properties here — I hope we can see the build; yes, okay, here we go — this is the build URL; this is how it was built. And if I click on it, this is JFrog Pipelines, which is a CI/CD tool from JFrog. Obviously you don't have to use it, but I just want to show you the promotion. The promotion here is the "promote application build" step, and the step is just using curl to actually — oh, okay, here is my key, which I will probably need to revoke once this webinar is recorded. Don't do that; don't hard-code your keys into your CI scripts. But what you definitely do is use the promote API, and then you say: okay, my target repo will be docker-prod-local, and I will promote this tag. I just version my Docker images with the run numbers, but it could be whatever makes sense.
And you just move it. And what it does — this was an operation that finished in six seconds, and those six seconds were actually the API call; the promotion itself was immediate, because, as I mentioned, nothing actually changed. What changed is that before, this build 26 was in docker-dev-local, and now it's not there — there is only number 10, because that promotion failed — and the 26 actually moved to prod-local. So this is how the promotion works: it's just one REST API call.

By the way, the way you do the alias between the 26 and latest is also the same REST API. You can see here, all I do is promote it — kind of promote it from the same registry to the same registry — and what I change is the tag: from the numbered tag I target latest. And this is how we have here 26 and latest, and they actually refer to the same image. The interesting part here, of course, is this "refers to" metadata, and setting it is another REST API call, which is right here — a "put properties" call. The property that I want to put is the "refers to" run number: I run it on latest and say, okay, it refers to 26, and indeed we are looking at build 26. I hope that gives you a clue on how to do it.

And basically, this is how you build your pipeline: you build a Docker image, you push the Docker image, you publish the build info here, and you do all your tests here — your system tests, your integration tests, and obviously your security tests, and this is very important. Today you expect a tool that does your artifact management to also have at least an integration with, if not built-in, verification of the security of your artifacts. So this all will be here, and based on the results of your tests, you decide whether you are going to promote your build or not. And if you decide to promote, then it's just a REST API call — from where, to where — and done. I hope that kind of answered it. And of course, as I mentioned, it doesn't have to be JFrog Pipelines; it can be any CI server.

Awesome, what a demo. Thank you so much.

Yeah, a very, very ad hoc demo; sorry about that.

So, Daniel is spelling "thank you" in capitals as a new question, so I think the question was answered. Any other questions? We have a few more minutes. All right, I think that on top of a hopefully useful webinar, we'll also give five minutes back to people — enough to get coffee before the next meetings we'll probably all have. So with that, thank you very much for having me. That was fun, and I hope it was useful as well.

For me it's almost eight in the evening, so no coffee for me anymore. But thank you, Baruch, for a great presentation, and thanks to everyone else for joining us today. The webinar recording and the slides will be online later today. We're looking forward to seeing you in the future. Have a great day. Thank you so much.