Okay everyone, please settle down. Our next speaker is Tomasz Panik, and the topic will be Windows containers in action.

So, to my presentation. It's quite challenging for me to present a Microsoft- and Windows-related talk at a Red Hat conference, but hopefully it will be useful for some of you. First, something about me. Can you hear me? I will pull the mic closer. Okay, I will try to speak louder if that doesn't work well.

I work at SolarWinds as a software engineer, so I will be representing that role, and I'm leading the continuous integration and delivery community at SolarWinds. SolarWinds builds mainly Windows products. The reason I'm saying this is that, for us, the fact that Windows containers made it to production was huge, because we could start benefiting from containerization of our applications. Therefore I would like to share some of my experiences with that, and do a real demo of how it works.

So, briefly, the story of how it happened. Microsoft and Docker started talking about bringing containers to a new Windows Server in 2014. In 2015 they made the first technical preview where it worked: they introduced container support, including Docker support, and they also introduced something called PowerShell containers, but they abandoned that right after because the community didn't like it. In 2016 containers actually made it into the production release.

So what do Docker, or containers, on Windows actually mean? It's not virtualization.
It's really the native Docker engine running on Windows Server, and also on Windows 10, and the API is the same one you are used to on Linux. Clients like the Docker CLI and Docker Compose work there, and swarm works too, so a swarm manager can talk to a Windows node.

They created two base images: Windows Server Core and Nano Server. Windows Server Core is the compatibility one. It's really huge, about four gigabytes in size, and most of the features you are used to from Windows Server work there. So when you are migrating your monolithic ASP.NET application to containers, you'd better start with this one. Then, once you start migrating some of the parts to microservices, to .NET Core, you can move those parts to Nano Server.

Microsoft also introduced something called Hyper-V containers. Normal native containers work the same way as on Linux: the container shares the kernel with the host. A Hyper-V container is different: it has its own kernel, so there is some virtualization there. The reason Microsoft created them is mainly that they wanted one more layer of isolation, and they are also planning to support Linux containers on Windows this way. So as a developer you will be able to deploy Linux applications in containers through Hyper-V containers, on Windows 10 for example.

Okay, so now... yeah, quick question. [Question from the audience.] No, the fact is that in a Windows container it's still only Windows running. For the same reason that in a Linux container you can't use Windows features, in a Windows container you can't use Linux features, so you can run only Windows applications there. Currently you cannot run normal Linux binaries in those containers on Windows; it's not possible. We will take the rest of the questions at the end.
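As a sketch of the isolation choice just described: on Windows Server the isolation mode can be selected per container with Docker's `--isolation` flag (the image name below is the 2016-era tag; your local tags may differ):

```powershell
# Process isolation (the default on Windows Server 2016):
# the container shares the host kernel, as on Linux.
docker run --rm --isolation=process microsoft/windowsservercore cmd /c ver

# Hyper-V isolation: the container gets its own lightweight kernel
# inside a utility VM (the default, and only, mode on Windows 10).
docker run --rm --isolation=hyperv microsoft/windowsservercore cmd /c ver
```

Both commands print the Windows version banner from inside the container; the difference is only in how strongly the container is isolated from the host kernel.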
I will just move on to my quick demos. I have an Azure virtual machine prepared here with Windows Server 2016, so I can run the normal commands you are used to. Listing the images, you can see that I have the Windows Server Core and Nano Server images. Now I can run an interactive container from Windows Server Core and run PowerShell in it. Right now it spun up my Windows container and created an interactive PowerShell session, so I'm inside the container: whatever changes I make here, once I exit the container, they will not affect the host system. I can list the files; it has an operating system; and the process I start in here I can even see in the task manager of the host.

The next thing I can do is, for example, volume mapping: I can map the current directory to a folder called "map" inside the container. I forgot the -it flag, so it will probably exit immediately. I can do this the same way as you are used to on Linux, and the same goes for port mapping. Also, each container has its own virtual IP address in the system. So I can see the map folder here, and any changes there will affect the host operating system; you can see my documents there.

As I mentioned, in the technical preview they tried to create something like PowerShell containers, but then they abandoned that, so they ended up only with Docker containers. However, they created PowerShell cmdlets to be able to control them. So I can use the Docker CLI, but I can also use the PowerShell cmdlets, which work against the same endpoint: Get-Container will return the same list of containers.
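A rough reconstruction of the demo session above (the image tag is the 2016-era one; the host path is illustrative, and the Get-Container/Remove-Container cmdlet names are assumed from the Docker PowerShell module, which talks to the same engine endpoint as the CLI):

```powershell
# List the local base images.
docker images

# Start an interactive container from Server Core with a PowerShell session.
docker run -it microsoft/windowsservercore powershell

# Map a host directory into the container as C:\map
# (note the -it flag, or the container exits immediately).
docker run -it -v C:\demo:C:\map microsoft/windowsservercore powershell

# The PowerShell cmdlets see the same containers as `docker ps -a`,
# and bulk cleanup becomes a one-line pipeline.
Get-Container
Get-Container | Remove-Container -Force
```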
For example, I can do something like this to remove all the containers I have in there; it's a little bit easier than doing the equivalent with loops and pipelines over the Docker CLI.

Okay, so what about the real use cases? How could you get value from this in your organization? Let's imagine you have some monolithic ASP.NET application which is not migrated to .NET Core yet. What you can do is containerize it as-is: put it into containers and get the benefit of containers in your DevOps workflow. You as a developer take responsibility for the configuration: you configure IIS inside the container, and whatever else, and then hand the container over to IT operations to deploy.

So the first step would be to actually create the container from your application. Then, as I said, in swarm you can have, for example, one Linux node and one Windows node, and combine them in a single swarm. As your architecture evolves, you can break your monolithic ASP.NET app down into microservices, convert some of them to .NET Core, and you can even combine them with Linux applications, like nginx running as a proxy in front, all together in a single swarm. The fact that this is coming to Windows applications is really big for those of you who are using C# and .NET, the classic ASP.NET.

[Question from the audience.] Yes, I will show it. I have an example ASP.NET application, and I would guess every ASP.NET application can be put into such a container. Microsoft offers SQL Server in a container, but only the Express edition; they claim they will not do this for the officially licensed one.

Okay, so what else you can do with this is prepare containers for building.
For example, if you have build infrastructure, some kind of build agents, usually you install Visual Studio on them, and whatever else you need for building all the applications in the company. Instead, you can prepare a container per app: define the requirements, just the MSBuild tools. I will show that too.

I have prepared an example app, which I'm going to show you, and I'm going to build it inside a container, pack it into an image, and try to deploy it. I have the ASP.NET app here; it doesn't matter what it does, it's a single page which generates some output. And I have the Dockerfile which defines the requirements for building. It's based on Windows Server Core, it uses PowerShell as the shell, and then I define what needs to be installed for the build. I enable ASP.NET 4.5, and I install Chocolatey; I recommend it for this kind of purpose. It's something like apt-get for Windows: a repository of scripts which download applications. I'm using it here to install the build tools: MSBuild, NuGet, the targeting pack for the .NET version my source code targets, and some other dependencies like Web Deploy and the targets to publish a website. So, everything I need to actually build the app.

Now I'm going to build this build container. I go to the build folder and run docker build with a tag. I've already done that, so it took all the layers from cache and I don't have to wait for it; it takes like half an hour. Now I'm going to build the application using it: I run docker, use volume mapping to map my local source directory into the container, and run the build. So now, using all the dependencies in the image, it should restore the NuGet packages and compile the website. Hopefully it works; it took some time to spin up. On the host side, I don't have anything installed.
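The build Dockerfile walked through above might look roughly like this. This is a hedged sketch, not the exact demo file: the Chocolatey package names are real package IDs, but the feature name, paths, tags, and the `build.ps1` entry point are assumptions for illustration:

```dockerfile
# Build container for a classic ASP.NET app (illustrative sketch).
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command"]

# Enable the ASP.NET 4.5 framework feature, then install Chocolatey.
RUN Install-WindowsFeature NET-Framework-45-ASPNET
RUN iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

# Use Chocolatey like apt-get: pull in MSBuild, NuGet and Web Deploy.
RUN choco install -y microsoft-build-tools nuget.commandline webdeploy

WORKDIR C:/src
CMD ["powershell", "C:/src/build.ps1"]
```

It would then be built once with `docker build -t myapp-build .` and reused for every compile via volume mapping, e.g. `docker run --rm -v C:\code\myapp:C:\src myapp-build`, so nothing has to be installed on the host or the build agent itself.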
I don't have .NET on the host. So now I'm building just inside the container. Once this is done, I'm going to use my second Dockerfile, which I'll show briefly; it just defines the runtime needs of my app. I just need ASP.NET, I configure IIS, and it's based on the microsoft/iis image, so I'm going to deploy it to IIS.

Okay, so let's do it: docker build with a tag, building it from here. It'll take several seconds; it takes all my build output files and creates a Docker image from them. After this is done, I can spin up the container. And I have this slide: this was an example of how this could be used, how you could basically take your existing ASP.NET applications, containerize them, and get all the container benefits for your organization. I will wait until this is done and show you that it actually works; right now it's processing the layers and the startup command.

So let's start it: docker run with the image tag. Now it should be running. I didn't map it to any port, so I'm just going to figure out the container's internal IP address to open the browser and see that it's actually running. Okay, the first load is always slow, because IIS is doing some initialization work... It's up and running. I don't have IIS in the host operating system, and I have a containerized ASP.NET application.

Okay, so what's next, what's the future work? Microsoft claims this is production ready, so you can use it in your workloads; you can play with it. Microsoft states that they are going to work on orchestration integration. Docker swarm is already working with it.
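The runtime Dockerfile and the IP lookup from the demo above might look like this (again a sketch: `microsoft/iis` was the 2016-era base image tag, and the publish path is an assumption):

```dockerfile
# Runtime container: IIS base image plus the published site.
FROM microsoft/iis
SHELL ["powershell", "-Command"]
RUN Install-WindowsFeature Web-Asp-Net45

# Copy the output produced by the build container into the
# default IIS site (the source path is illustrative).
COPY ./publish/ C:/inetpub/wwwroot/
```

With no port mapping, the site is reachable on the container's virtual IP on the default `nat` network, which can be read with `docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" <container>`.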
It was already demoed at some previous conferences. But I don't know about Kubernetes and those kinds of orchestrators; they will probably need some work from the community. And, as they claim, they are going to introduce support for Linux containers inside Hyper-V containers, which will also be a very big improvement for you developers. I have some reference materials; the last one is the actual source code I just showed, plus some official Docker sources.

So, do we have any questions? [Question from the audience.] Sorry, I probably didn't understand. The question was about security of the containers, whether you can somehow modify the file system of the container, right? Ah, regarding networking: as far as I know, and I'm not from Microsoft, so I only know the things I've tried, service discovery currently works. If you have Docker Compose and you name a container, then inside another container it is reachable under that name; this is already implemented. The networking is all based on the virtual networking which already existed in Windows Server; there is nothing new in the virtual network there. Only the ports you map give access to the container, so I think this is the same as on Linux.

Any more questions? [Question from the audience.] The question was whether we run the containers in production in my organization. We currently use it just for testing: we pack the app into a container, and as part of end-to-end testing we spin up multiple instances of it and run the tests against them. We don't use it in production yet, but hopefully we will get there. And since we don't use it there, we don't use Hyper-V containers either. On Windows 10, Hyper-V containers are the only option you can run, so as a developer you will probably use Hyper-V containers.
I don't fully understand why they actually wanted the higher level of isolation with Hyper-V containers on Windows Server. I would recommend trying the native ones in production; I don't currently know a reason why they shouldn't be enough for you. I probably didn't fully understand your question, but I hope that answers it. Yes, we use it just for testing. Okay, the last question. [Question from the audience.] The question was whether this is present in all installations of Windows Server. You have to enable it, but in all licensed editions of Windows Server it should be possible to enable it. If you install a completely new one, you have to enable the Containers feature there, and you have to install the Docker engine and Docker client. Okay, so thank you very much.

Hello, are you here? Hi. Hello. Let me introduce our next speaker, Yaniv Bronheim from Red Hat, who will be speaking about metrics collection.

Hi, everybody. We have 20 minutes, so let's try to get through everything I want to say. My name is Yaniv Bronheim.
I have worked at Red Hat for the last four and a half years, mostly around the oVirt community; if you've been in the keynotes, it's the upstream of RHV, Red Hat Virtualization. We'll also talk about it during this talk. In the last year, as a developer at Red Hat on oVirt, I have been exploring the area around the data flow and the information that we have in our project, and what we can do with it in oVirt.

Basically, in this talk I want to show you how easy it is to break down your architecture, the architecture that you have, define your requirements for data processing, what you need from your hub, and use open source tools, which are quite cool and easy to use. I'll show examples of how it can be done.

We're talking about the information that we have in our projects. Every project is different, and I'm being very general here, but most of us have operation flows that we sometimes want to follow, or maybe count, things like that. We have traffic information in the system, where you can see overloading, useful for scaling decisions; failures in the system that you want to count, maybe to show things around the failures we have; and so on. Each of those can be collected in a different way, and in oVirt we already defined ways to collect the information in different ways.

The most common source you probably have in your system, no matter what application you maintain, is logs. Most of us have logs. The logs contain a timestamp, a severity, and a message, and we can parse them and build graphs around them, or do whatever we want; we'll see what we want in the next slide. But there is another way to collect information: metrics. If you're not familiar with them, a metric is time-series information that we all have, and there is a convention for how to keep this information, such that there are tools that can work with it and allow us to build cool stuff around it.
So a metric is basically a key-and-value record that you want to keep, together with a timestamp. There are a few types of those metrics; there is a specification that defines the format. We have gauge, rate, counter, and count: different kinds of metrics we can collect, and we can do a lot of stuff with them.

What can we do with this information? In general, it depends on the application you develop. It can be data analysis to drive auto-scaling abilities; alerts in the system; billing, if you maintain a data center that people use, billing around storage for example; correlating between different information that you get from different resources, if you have a lot of information in your system (we'll see that in one of the examples); and keeping historical information, with aggregation abilities and scalability, depending on what you need. It's all based on the requirements of your own project.

Basically, what I'm trying to do here is show you how I broke down the components that I have in my system, in the design, and chose the tools I want to work with. We all have these components in some way, so let's see what they are. We have the thing that reports information, which can be your service, your daemon, your project, your hardware, whatever it is, and it uses a client to send the information. The client can be an SDK, say Python, for a specific shipper, which is the transport for all of the information; the client can also be a log file, for example. The shipper is a component that serializes the information and passes it to some store, possibly over different transport protocols.
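As an illustration of the record shape described above, a single metric sample might be kept as a small structure like this (the field names here are an assumption, not a fixed standard):

```json
{
  "name": "vm.cpu.usage",
  "type": "gauge",
  "value": 37.5,
  "timestamp": 1488232800
}
```

In the Graphite plaintext protocol used later in the talk, the same sample is even simpler, just `key value timestamp` on one line: `vm.cpu.usage 37.5 1488232800`.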
It can do the serialization of the information to fit the store's structure, if it's JSON for example; the shipper is responsible for getting the metrics information and passing it on. The store can have many abilities, like the aggregation abilities we talked about: if I keep historical information from five years ago, I don't care about every second of information that I collected; I want to aggregate the information I save for a longer time, for example. And then the visualization, whatever it is, can fetch and sample the information from the store and visualize it as we want, whether as graphs or as alerts in the system: it's the visualization of the information that we have.

We all have these components in some form; sometimes we combine parts of them. For example, in log parsing we have the client as the log file, and we just parse it with some kind of code and store it. So let's see an example of how you can break your app up and use this structure to choose the tools you want to work with. This is a simple example.
It shows the basic abilities you can get by using tools in this way. As a project, I have multiplayer games: my clients can connect to a console, they can chat and define a server where they want to play together, a game that I wrote, and I use a cloud provider to run VMs with the server daemons on them. When the players want to start a game, I start the daemon for them, and they all connect to the IP address it exposes.

Basically, I used logs in the system for everything: the games, the incoming communication, the scores. I used to parse those logs and show a nice dashboard for my gamers, just for fun, to see what's going on in other games. And I wanted to have more information in my dashboard, to know if I want to scale up, because I don't want to use cloud-provider-specific abilities, like the watchers they have there or the scaling that they provide; I want to have my own monitoring.

So I decided I needed to break it down. I want a basic store and a basic dashboard for the historical information I have, hardware stats about what happens on my machines. I don't want something huge with rescaling and aggregation abilities, just something to save the information. I want to set some alarms and events around this information, to know when I want to start another VM, and maybe to scale down when I don't need those resources anymore. But I also want to continue to parse my logs, to keep the same dashboard that I had before.

Looking at that, I understood that I can use metrics for the basic information, and for the rest I'll need to use the logs. So, first things first: the metrics, the information about the hardware itself. Apparently it's very easy to get, by installing a component called collectd. It's an open source tool.
collectd is a service, a daemon that you run on your entity, whether it's a server, a VM, or even a container, and it uses a lot of built-in plugins, written in C, that collect information for you about CPU usage, memory, network; really cool stuff. All I needed to do was run this daemon on each of the instances I have to get the information.

For simple store and visualization abilities I use Graphite; I started a Docker instance of it. The information is passed very easily to this Graphite instance and can be shown very easily as graphs. I see all the hosts that I have, and then, from the plugins that I enabled, I can build this nice dashboard around it just by clicking on the fields in the tree. This is what I wanted for this specific requirement: just to see what's going on in my system. It didn't need to be part of my code; I could get it just by installing this tool. I will show samples of the setups I did later, which will show you how easy it is to configure collectd: just a conf file where I uncomment the plugins, and that's it.

For the logging, the serialization and the dashboard that I had before, I also decided to use tools instead of doing as much of it in my own code. It's the very common stack, if you've heard of it, called ELK, which is a combination of three tools: Logstash, Elasticsearch, and Kibana. Not to confuse anybody: those tools fit the same structure we saw before. Logstash is a kind of shipper; Elasticsearch is a database, a scalable database that works with JSON structures; and Kibana connects easily to the store and allows me to build cool stuff around it.

So I want to listen to, to read, all the logs that I have on every host, and to scale up, since I have many games in the system, I put in something called Filebeat.
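The collectd setup just described really is only a conf file. A minimal sketch, assuming the Graphite Docker instance is reachable on its default plaintext port 2003 (the hostname is illustrative):

```conf
# /etc/collectd.conf -- enable built-in collection plugins
# simply by uncommenting LoadPlugin lines.
LoadPlugin cpu
LoadPlugin memory
LoadPlugin interface

# Ship every sample to the Graphite instance.
LoadPlugin write_graphite
<Plugin write_graphite>
  <Node "graphing">
    Host "graphite.example.com"
    Port "2003"
    Protocol "tcp"
  </Node>
</Plugin>
```

Once the daemon runs with this file on each host, the per-host metric trees show up in the Graphite web UI with no application code changes at all.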
This is the icon of Filebeat. It's a Go tool they developed; it always tails the files that I have and forwards them wherever I want; in this case I used Logstash. This is also quite easy: in Filebeat I specify the sources and the output, where I want to send things; I give it the IP of the Logstash instance. Logstash is also a service that I can install anywhere I want in the architecture, in a central location or on each of the hosts; it serializes the information for me and forwards it to Elasticsearch. Elasticsearch keeps the information, and Kibana shows it, so I can build graphs based on the information that I have. The plugins for the Kibana UI are sometimes not open source, I need to say that, but it's a really cool thing, and it's very easy to configure and work with.

Okay, so this was the first simple example of breaking down the architecture and choosing the tools you want to use. For another, bigger example, I will go over the main project, Red Hat Virtualization. As I said before, it's a large management system that allows you to maintain a lot of infrastructure in your data center and run virtual machines. This management system gives you much more: a lot of configuration abilities and migration abilities, if you're familiar with virtualization. But if not, it's okay.
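The Filebeat side of the pipeline above can be sketched like this (a hedged example using the 5.x-era configuration keys; the log path and the Logstash address are illustrative):

```yaml
# filebeat.yml -- tail the game logs and forward them to Logstash.
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/mygame/*.log   # illustrative path

# Send everything to the central Logstash instance, which
# serializes it and forwards it on to Elasticsearch.
output.logstash:
  hosts: ["10.0.0.5:5044"]
```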
I'm just using it as a use case. This is the basic architecture we have in this project. We have the physical hardware that we maintain; we have an engine that constantly pulls information from the hosts in the data center; and we have a bigger store for the historical information. We used Jasper, which is not an open source project, to take this information and build reports for us, and VDSM is the service that transfers and gives me the information that we used to show in our dashboards.

What I needed to do here was define the requirements for this project. The requirements were: I want to collect the basic information and remove this logic from my main components, because I want my daemon to be responsible only for what it should be, instead of also having it reading from files on the system, or whatever it is. I also want information about different entities that I have in the system, like the oVirt engine, for example; here I had information only from the physical hosts that I maintain. I want to build a monitoring dashboard, the same as I had before, but I want to keep it at a very large scale: I have much more information than we used to work with, and I want to aggregate information after a while, which I didn't need to do before.

The last and most complex thing is to correlate between the information that comes from everywhere. In the oVirt engine we also define logical entities: if you're familiar with virtualization, you define a cluster of hosts, and within this cluster you can have migration. The information about which host is related to which cluster is part of the oVirt engine database. So if the oVirt engine also reports information to some central location, and the hosts also report specific information about their hardware, I need a way to correlate this information, so that in the historical database I can get reports about a specific cluster: which host was
related to this cluster, and things like that, which makes it much more complex.

Okay, that's the problem; the solution we decided on is this. Instead of using VDSM, the main component we have that sends and reads the information, we use collectd in this case as well, which is really simple and easy. We install and deploy it as part of our deployment mechanism, and collectd sends exactly the information that we want in this case. Fluentd is the shipper here: we decided to use Fluentd instead of Logstash for this specific requirement, just because we had more requirements we wanted to fit, like being aligned with other products that use Fluentd, which we want to integrate with, for example. So Fluentd is a shipper, same as Logstash. I defined a serialization format for the information that I get, for logs and for metrics alike, and it places the information inside this JSON structure. This runs on each of the servers; I drew it like this because each of the servers runs those services in parallel to the rest of the things we install on it, and it forwards the information to a centralized Fluentd, in whose configuration I correlate the information, as I said before, from the oVirt engine and from the host information that I get. It stores into Elasticsearch in the same way as before, which gives me the scalability, and also aggregation rules that I can define, which make it much easier to keep the information, if you compare it to the way we do it today.

Then we use Grafana and Kibana to build a dashboard; we wanted to pick between those solutions to visualize the information as we want. Grafana is more intuitive and easier for our needs; it's query-based, so we can define a query for the information, and it also helps us correlate between data from different sources. And this is how it looks in Kibana.
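The centralized Fluentd piece just described can be sketched with a minimal configuration like this (tags, ports, and the Elasticsearch host are illustrative; the `forward` input is built in, and the Elasticsearch output comes from the standard `fluent-plugin-elasticsearch` plugin):

```conf
# Receive records forwarded by the per-host Fluentd agents.
<source>
  @type forward
  port 24224
</source>

# Store both metrics and parsed logs as JSON documents
# in Elasticsearch (requires fluent-plugin-elasticsearch).
<match ovirt.**>
  @type elasticsearch
  host es.example.com
  port 9200
  logstash_format true
</match>
```

The correlation step, such as enriching host records with the cluster they belong to, would be done with additional filter directives in this same file before the records reach the match block.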
You get the JSON structure, and you can see each of the fields you have. It's a bit complex to understand, but I just want to show you a record; every record looks like that. We have the timestamp, like a metric, and the logs are parsed into the same format, so I have the correlation between the logs, the metrics information that I got from collectd, and the engine information; a really strong thing. The server log is part of it as well.

And then in Grafana I get all those abilities to show very cool graphs very easily. As I said, it's query-based: I write a query for the plugin I want, from a specific hostname in this case, and then I get this graph, based on the time interval that I want to see. It aggregates the information, of course, based on rules that I set. Very easy.

The idea I want to show, as I said before, is that we broke it down. We understood what the client is in this case, whether it's a file or my daemon that I want to send the information with; then which shipper I want to use to serialize the information; which store to use, depending on my requirements; and the visualization, depending on the features you want from it. There are many tools out there. I'm not an expert in any of them specifically, but I use them. Each tool can provide you different abilities. You don't need to explore all the abilities, but you need to understand what your requirements are in your specific project. Is it a web application where you just want to count the logins, or count the advertisements you present to the user? Is it a virtualization management system, which basically maintains a lot of hardware information? What do you want to achieve with the visualization: alarms, just visualizing the information, or having logic behind it? What API is exposed to get the information?
So, we want the data; we have different ways to collect it and different ways to process it. It's a very good thing to have in your system, because it gives you much more with really minimal effort: you don't need to touch code, you just need to configure it properly. I have a setup blog post that helps you understand how to use this, and also what we did in oVirt, and there are many tutorials out there. Now that it's in your mind, you can go and reach the solution that fits you. And that's it. Questions?

[Question from the audience.] Specifically in oVirt? Okay, he asked whether I have my own visualization abilities, to show my hardware information for the data center I have, now that I use Grafana, for example, which also provides graphs: how do I combine the two, in oVirt specifically? We have UI plugins that we can add, so you can add a new tab which shows you exactly the output that Grafana gives you. Specifically, I can do it that way, but if I don't want to, I can fetch the information in my own code and show it as I want; I don't have to use it that way. I can also work directly with the Elasticsearch store and present the information my way. Over there is my team leader; he says, if you didn't hear, that it's not an API plugin, sorry, it's a UI plugin, and you can implement it yourself; we don't provide it currently. We'll see how far we get, specifically in oVirt.

Any more questions? Okay, we still have four minutes. Yes? [Question from the audience.] What do you refer to, oVirt or something else? Okay, this is not related to this talk; here I'm talking about the tools to gather the information. How do you do that? You can search for that in oVirt, or you can check for UI plugins; there is also a blog post about that.

By the way, who of you works with oVirt? You all know about oVirt, use it? So, for those of you that don't:
I talked about oVirt earlier. It's a management system for virtualization. There you can maintain all your hardware, hosts, storage and networking, and configure it all in one central place, which is the oVirt engine. If you're not familiar with it, I really encourage you to check it out; it can be a replacement for other products you may be using. It's open source; we have the upstream community and it's very well maintained. If you want more information, specifically about oVirt, just contact me.

More questions? Thank you. Sure, you can take the water with you. I have a moment more. [Question from the audience.] It depends, specifically... Yes, we can have Grafana also with Influx for the queries; we thought about it, but the main problem we had is that we wanted... Wait, I'll just turn it off.