Alright, so welcome everyone to our usual meeting of the CNCF Research User Group. Today is May the 4th, and there was an initial idea to celebrate this by bringing in Tim Hockin to tell us about early Kubernetes war stories and Star Wars. But we have an even better topic: a kind of bridge between traditional SSH environments and things like this, and containerization and Kubernetes. It's pretty exciting, and it's something a lot of us have discussed in past meetings here as well. So we have Janos, who will tell us a lot about the project, and then we have Nikos from CERN giving us a concrete use case. So I'll pass the word. You start, Janos, I guess? Yes, I think I'll just screen share and then Nikos can take over for his part. So here's the thing, though: it's not like Nikos is just using the project. He's a substantial contributor to it. He's spent I don't know how many hours working on ContainerSSH, and a lot of the features in the codebase have been contributed by Nikos himself. So absolutely credit where credit is due, and today's topic is building a science lab with ContainerSSH, which is one of our main use cases. So what is ContainerSSH? Here's the part where we would have the funny video, which we can't show you because there's no audio when I play video, but you can go to the website and just watch it there. At least that's what I thought. So, back to the slides. ContainerSSH is an SSH server, but it's not an SSH server like you would expect when you install OpenSSH. It's an SSH server that doesn't create a shell on the server it runs on. Instead, it connects to an API (Docker, Podman, or Kubernetes), starts a container, and the shell it creates is inside that container. ContainerSSH itself doesn't have to run in a container, though of course it can. The other important feature is that you configure it dynamically.
So it's entirely built with webhooks in mind; it's kind of a cloud native SSH. You can configure it however you want, and there's a lot of flexibility in what you can do with it. We built a honeypot with it, and people have used it for web hosting, jump hosts, and those kinds of things; Nikos will explain a little more about the lab use case. The way it works is that the user connects to ContainerSSH using their normal SSH client, so there's no specific client required, no special configuration or anything else. We then start a container and the user lands in that container. When the user disconnects, the container is destroyed, which is an important feature because you want to clean up after your users. That's one of the typical problems when you create a lab setup: people leave stuff lying around or processes running. You don't have that problem with ContainerSSH, because it destroys the container after the user is gone, and we'll see later that you still have the ability to save data. Now, if two separate users connect, they of course go into separate containers, so they're nicely isolated from each other. If you place resource restrictions on the containers themselves, the users are restricted to the resources you give them, and the directories, mounted folders and so on can be configured just as if you were doing a docker run; whatever docker run supports, ContainerSSH supports. The same goes for Kubernetes: whatever you can do with kubectl run or kubectl create pod, you can do in ContainerSSH as well. And as I said, this is all dynamic, so you can do it using webhooks.
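As a rough illustration (not ContainerSSH's actual internals), the per-connection session just described behaves like a dynamically assembled docker run. The image name, resource limits, and home-directory path below are made-up examples:

```python
import uuid

def session_command(user: str, image: str = "ubuntu:22.04") -> list[str]:
    """Build the `docker run` equivalent of one ContainerSSH session:
    a throwaway container per SSH connection, destroyed on disconnect,
    with per-user resource limits and a persistent home mount."""
    name = f"ssh-{user}-{uuid.uuid4().hex[:8]}"  # unique per connection
    return [
        "docker", "run", "--rm",                  # --rm: cleaned up when the user disconnects
        "--name", name,
        "--memory", "512m", "--cpus", "1",        # resource restrictions per user
        "-v", f"/srv/homes/{user}:/home/{user}",  # e.g. an NFS-backed home directory
        "-it", image, "/bin/bash",
    ]
```

Two users, or even two connections from the same user, get two distinct container names, which is exactly the isolation property described above.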
Now, if one user connects with two connections, the tricky part is that they land in separate containers. Currently it works on a per-connection basis: every container is created for an individual connection, and when the connection breaks, the container is also removed. This is an important design constraint, and we opted for it because if we wanted to drop one user with multiple connections into the same container, we would need to think about how to scale this. If you wanted to run multiple copies of ContainerSSH, you would have to think about how to synchronize the cleanup of containers, and that is something we haven't done yet. It's definitely a plan for the future, but at this time one SSH connection lands in one container. The way this whole setup works is that ContainerSSH has webhooks, and there are two important ones from the original version (Nikos will definitely talk about the massive amount of extension he has done to the project): the auth and the config webhook. The auth webhook is responsible for authenticating users: it takes their password or their SSH key and decides whether to let the user in or not. Then there's the config webhook, which gets a call from ContainerSSH saying, hey, here's this user, they successfully authenticated, and the config server has the option to return a partial configuration with Docker settings, Kubernetes settings, or whatever else, and that's how the container is created. And of course you can configure the container to mount volumes just as you would with docker run or kubectl run, so you can, for example, use an NFS server to give users access to their own data in a writable folder, while the container is still cleaned up, so they can't leave stuff lying around elsewhere or leave processes running. So why would you use ContainerSSH?
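A minimal sketch of the two webhooks just described, using only the Python standard library. The endpoint paths, JSON field names, and the shape of the returned configuration here are illustrative stand-ins, not ContainerSSH's exact wire format (the reference guide on containerssh.io documents the real schema):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_USERS = {"alice", "bob"}  # hypothetical allow-list

def authenticate(body: dict) -> dict:
    """Auth webhook: decide whether the connecting user may log in."""
    return {"success": body.get("username") in ALLOWED_USERS}

def get_config(body: dict) -> dict:
    """Config webhook: return a partial per-user configuration, e.g.
    mounting the user's NFS home directory into the container."""
    user = body.get("username", "guest")
    return {
        "config": {
            "docker": {
                "execution": {
                    "container": {
                        "image": "science-lab:latest",  # hypothetical image
                        "binds": [f"/srv/homes/{user}:/home/{user}"],
                    }
                }
            }
        }
    }

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        # Auth requests and config requests arrive on different paths.
        reply = authenticate(body) if self.path == "/password" else get_config(body)
        payload = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), WebhookHandler).serve_forever()
```

In a real deployment you would point ContainerSSH's auth and config webhook URLs at a server like this; the project also ships Go libraries for writing webhook servers, as mentioned later in the demo.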
Why wouldn't you use something else? Of course you can build a lab environment with other tools, but ContainerSSH is incredibly easy to access: your users don't need to install any specific client. If they are running Windows, they can just use the built-in client in Windows 10, so you don't even need PuTTY or anything like that; your users can just SSH in and it will immediately work off the bat. You can create resource-constrained environments, which has traditionally been a bit of a problem, and you get automatic cleanup of whatever users leave lying behind. What's probably more important for the corporate world is that you can record a detailed audit log. That matters when you want to make sure you record everything users do; for example, if you let a developer access a production system, you want to record all the commands they type, which is fairly difficult to achieve with traditional SSH servers. And last but not least, it's fully open source under the MIT-0 license, so you can do pretty much whatever you want with the codebase. So where can you get it? You can go to containerssh.io. We have a fairly extensive website: development documentation, a reference guide, getting-started tutorials, a funny little video and a few more guide videos. There are also Slack links, so if you need any help or want to discuss any potential use cases, just drop into the Slack and say hi. The source code is available on GitHub. And with that, I would like to hand over to Nikos. Thank you Janos, let me share my screen. Can you see my screen, hopefully?
Yes, we can. Okay, great. So I'm Nikos, I'm working in the Linux configuration team at CERN. For the past year I've basically been investigating ways to containerize SSH; this is the project I was hired for. And containerization was not the only thing I tried, believe me. The first thing I tried was actually using OpenSSH and messing around with scripts for logging and all that, but this did not work at all, for obvious reasons: it's quite clunky, and other SSH features such as forwarding don't work, because the forwarding exits at the proxy and not where the user is. At some point we discovered ContainerSSH and found that it basically fits right into our use case, but the downside was that it did not support all the things we wanted to do. To give some background information on our use case: at CERN we provide what's called the LXPlus service. LXPlus is the Linux Public Login User Service, which is basically exactly what it sounds like: a set of Linux machines with OpenSSH access for all users and employees at CERN. These machines contain a big variety of pre-installed programs: analysis tools, programming tools, compilers and so on. They also mount a set of network file systems, three or four, that are used for user home directories, general data storage, and also for delivering software. Generally, the main uses of the service, as I said, are writing and testing code; it's also used for submitting jobs to our computing grid and for general file operations. Another use case that emerged recently, during the pandemic, was as a general workstation for those working from home, and also as a network proxy into our office network. So, as I said, we're investigating integrating this service with ContainerSSH. Janos mentioned that we have made a lot of contributions to ContainerSSH upstream; more specifically, ContainerSSH with its authentication webhooks did not really support the Kerberos protocol that we need.
The SSH protocol has a specific, different way of authenticating when it's based on Kerberos, which is the GSSAPI protocol. ContainerSSH only supported password and public key authentication; though it did have different backends, it did not support the GSSAPI protocol. We have now written a native integration for that, and you can test it out. The current status of the service at CERN is that we have set up a pilot and are now testing its production readiness. So if you are at CERN, for example, you can now log into ContainerSSH and test what has already been set up. Another point to go through is why do all this, why containerize. Privilege escalation vulnerabilities are quite common, especially on Linux, and when you have a public login service like LXPlus this becomes far more important, as a privilege escalation can result in the compromise of multiple users. Another point is that there are a lot of shared resources on our LXPlus nodes, for example the temporary directory and the network interface. For the temporary directory, the most important thing stored there is the Kerberos credentials. So if a user manages to get a privilege escalation, or actually even just manages to log in as another user and knows which node that user is currently on, they can basically steal their credentials. With ContainerSSH, and with the current setup we are testing, every user gets their own temporary directory. So even if someone does manage to log in, even if for some reason someone steals a user's password and manages to log in, they do not find any credentials in the container; the container is just an empty shell. Additionally, the second important point is the network interfaces. I'm sure when you're developing you have seen that you have a lot of development servers: language servers that provide completion and linting, and debuggers that also run a server. A lot of these don't really have authentication, and many haven't even considered the threat model of someone untrusted being able to connect to them; they usually assume they are behind a firewall, which in a shared service is not the case. And last but not least, fair sharing of resources is quite problematic. Linux has cgroups and does have some interesting ways to manage resource usage between users, but it's not really the best; a lot of times, when a node is overloaded, it crashes and we have to move users around, and it gets quite messy. Moving to containers: Docker and Podman, for example, have really good options for managing resources. We can limit exactly how much memory each user is supposed to use, how much CPU, and even how much network bandwidth, and we can do all of that on a per-user basis. So the dynamic configuration of containers, so to say, allows us to have different limits depending on the user, the group, or their needs. Finally, that was basically the presentation of the use case. I'm now going to show you the extension that was made to ContainerSSH to allow authentication via Kerberos, in case your organization also depends on Kerberos. One thing I wanted to mention and forgot is the reason we did all this and extended ContainerSSH with Kerberos: CERN depends on Kerberos authentication a lot, especially on LXPlus. The biggest reason is that users are used to passwordless authentication, which is a big convenience, and we weren't really willing to give that up. The second point is that Kerberos is used to authenticate users to their remote file systems, so as soon as they log in they need access to their home directories; without that access a lot of our setup scripts do not work, and that brings a lot of other issues. So
having the user be able to authenticate to a third-party service as soon as the login succeeds is of critical importance to us. To continue with how the authentication flow works with Kerberos: when setting up ContainerSSH, it needs to be provided with a keytab; that keytab is the cryptographic service key of ContainerSSH. So when a user connects, they provide their ticket for ContainerSSH; ContainerSSH verifies that ticket, and as soon as it does, it knows that the connection is genuine and that the user authenticating is who they say they are. So we have the username of the user. After that, ContainerSSH continues with its webhook flow as Janos described earlier. The difference here is that instead of an authentication webhook we do an authorization: we send the username of the user and we expect back whether this user is allowed to log in or not. In our case at CERN, this checks, for example, our user database: it ensures that the user is registered, and it also ensures that the user is authorized to use the service (you need to be in a specific group to be able to log in to the LXPlus service). Finally, there's also the configuration step, which basically returns a standard template for the container along with a few customizations. For example, when the authorization server fetches the user, it also gets the user's preferred shell and the user's UID, as well as the user's groups. These are then passed into the configuration and into the container, so inside the container the user has their own shell and their own UID and group IDs, the same as on any other standard Linux system. The next most important step is that before ContainerSSH hands over access to the user, it writes the Kerberos ticket into the container. This ticket is placed in the /tmp directory by default, but it's configurable. The ticket is then used to authenticate the user to any third-party service that is necessary; for our use case that's remote file systems. After that's done, the user's shell is executed and the user gains access to the container as usual. Note that this is all transparent to the user: the user has no idea that all this has taken place, and the login time in my experience is about the same, so there isn't really any overhead. So this was what I had to present for now. I understand Janos has prepared a nice demo for us, and I left quite a bit of time as well, so we can discuss the use case and ContainerSSH, and let me know what you think. Do we do the demo now, or do you want to take a couple of questions? If there are any questions, we're happy to take them. Yeah, maybe we take two or three minutes for questions. Yes, let's do that. Anyone want to step in? I think we shocked everyone. I can kick-start, maybe someone will jump in. I had a question regarding what you just explained: the Kerberos credential is written into the container environment; is there a process for renewal, or are the users supposed to reissue the credential on expiration, like after one day? There's no renewal, but you can quite easily set up a task in the container to, say, renew the ticket every two hours; this is definitely possible. And would you have to do this for every single instance, or would it come with your profile? The way I would do it is as a shell wrapper: as soon as the user's shell starts, I'd start the task that automatically renews the tickets. Actually, in our current system what we use is basically a systemd unit, so this would be quite the same. There's also another option: what we do is we have an idle command that runs as the first process, so you could configure this idle command to perform that renewal whenever necessary. The only important consideration is that the idle command needs to stop whenever it gets a SIGTERM; it needs to stop properly, because the idle command is the first process that runs in the container. In SSH you can open one SSH connection and then have multiple channels within it (and that's what Nikos implemented as well, for TCP forwarding and things like that), but you could theoretically also open multiple shells, and that's why the first process running in the container is always the idle command. This idle command just sits there doing nothing, and you can reuse it to run periodic tasks like that. So could each user theoretically specify their own command, and do you actually allow this, Nikos? This actually comes from the configuration server; the users cannot, obviously, for security reasons, but you can quite easily have a system where users enter their command on a portal, it's entered into a database, and the configuration server pulls it from there. You can do that in LDAP, for example; you can have a field there. I have another question, which is just about the image that runs in the container: is this a curated image provided by the service, or is it also customizable by the user?
I can take that as well. You can have any container image you want. The only requirement is that if you want certain container features to work, for example writing the keytab, or the port forwarding we're working on, you need to have an agent, a specific binary, placed in the container. Other than that, there are no restrictions on what the image can run. Realistically, right now you pretty much can't use ContainerSSH without the agent unless you just want the really basic SSH functions, so you should really, really add the agent, and I think that's something we might actually consider: dropping support for running without the agent. Cool, thanks. Other questions, comments, feelings? In the meantime: one thing about the current version Nikos is working on is that this is a working prototype. We haven't released it as a fixed version yet; we're still working on a few things. What's probably also interesting to mention is that we're working on an OAuth integration. So if you're not jumping into the Kerberos world, what you can also do (and we have worked with SSH client vendors on this) is basically have a prompt that says, hey, click this link; you click the link, go through the OAuth flow, and then you're logged into SSH. That's coming in the next version as well, so keep an eye out for ContainerSSH version 0.5. I was just checking the chat, and Benjamin was asking exactly that, whether it's possible to do that. I see Arne also has a question, but no mic. He says the renewal is limited to the Kerberos ticket renewal time; it will not work beyond that. Yes, that's correct. With Kerberos we just place the ticket that is given to us into the container, and after the renewal time either the user has to reconnect to give us a new ticket, or renew it themselves. Yeah, I guess they could just get it from inside the container again. Yeah, right. Nathan, I don't know if you have a mic, so go ahead if you do; otherwise I can ask for you.
Okay, so the question is: does the agent run as the user, and can we use ptrace to have fun with the agent? That's a fun question. Yes, the agent does run as the user, and yes, you can have fun with the agent, but it really does not do much. For the port forwarding, all the agent does is basically tell ContainerSSH that a new connection came in, here are the details of the connection, and ContainerSSH just forwards that to the client. So you can have as much fun as you could with standard SSH; it does not give you anything extra. One thing you can do: for another demo I hacked together a little modification of the audit log protocol, and what I did for the audience was, hey, here's an SSH server, SSH into it, and then I opened a website where you could see in real time what they're typing. So if that's the kind of fun you're into, then definitely message me and I can give you the source code for the patch to make that happen. Hopefully that's good. One related thing, Nikos, I don't know if you kept track of it: we're using a storage format called CBOR for binary storage of the audit logs. The people implementing the CBOR library are now working on the patch that we need for live decoding of CBOR messages in the library, so that's hopefully going to be fairly interesting. Sounds great. Alright, that was pretty lively. You want to continue? Because I also have a question, but I can also do it after the demo. I think that was really cool. I was just wondering, I was looking into this in the context of analysis facilities: traditionally people would always SSH to their machines and then basically only have terminal access, but nowadays people more often actually want a Jupyter Notebook, so they would have a web page instead. Combining these two, from the discussion, is it possible? It would be really cool if you could basically use your terminal and SSH, for instance also use your local text editor for changes, but at the same time have the browser where you can, for instance, execute your Jupyter Notebook. Is that something that's in principle possible, or that you've even tried? So if you go with the single-node setup, where you say, okay, this user lives on that node, which can be a VM or a physical machine, then you run the Jupyter Notebook there and share a directory between the Jupyter Notebook container and the container the user is editing in. Or you use something like an NFS server, where you can just give them their home directory: the home directory is mounted in the Jupyter Notebook container and it's also mounted in ContainerSSH. The Jupyter Notebook container will obviously keep running, while the ContainerSSH containers just pop up as the user SSHes in. And of course you can use SFTP as well, as long as you have an SFTP binary inside the container. So if you have a development environment that does SFTP, VS Code for example, then you can configure that to basically work remotely using ContainerSSH. We tried that; it actually works pretty well too. Okay, that's very cool, thanks. There's one more; I think we'll take the last one and then we do the demo. It's from Timothy, about persistence. Do you want to ask, Timothy?
I see he has a mic represented but maybe it also doesn't work alright yeah I couldn't find my window it got hidden multitasking here you mentioned that you have persistent storage and you can share a storage did I hear that correctly is that just a shared volume or so I don't know how Nikos does it in LX Plus but basically since when you do a pod in Kubernetes you can specify okay mount this volume the volume claim itself so let's say you wanted to dynamically set up a volume claim you would have to do that from the config server so you would have to talk to Kubernetes and say please make a volume claim and then you would have to use that volume claim in the pod spec but whatever you can do in the Kubernetes pod spec you can pass back from the config server so you can dynamically say okay use this volume claim use that volume claim so it's the typical use where you're just simply mounting some global shared home directory into the container so it looks like you're sitting on a cluster like a typical FPC cluster there's nothing specific that Container SSH does specific to volumes we just simply pull in the config structure of the backend whatever the backend is by the way so little side note there is an SSH proxy backend as well so if you're not into containers you can just use it as a proxy for for auditing but we don't do anything with the volumes we just have the config server does with it whatever the config server says we just mount it thank you all right so I think we can go for the demo but maybe Nikas also can add to this answer at CERN we actually have this kind of shared file systems we actually have one yeah we do have AFS and refile system which is actually this one was quite a complicated working I actually have two two deployments for containers one is based on Kubernetes and one is based on plain Linux machines on the Kubernetes basically both of these we have the AFS which requires a kernel module and then it works as a network file system so for its 
node we mount the kernel module and for its container then we just say mount from the slash AFS from the host into the container so basically an NFS but in the kernel it's kind of I think the compliment to that is that it's not related to volumes it's just basically a bind mount to whatever is set up on the node itself it's a bind mount or it can be any NFS NFS server anywhere it doesn't matter for some of the systems we are able to get around with running the module kernel module itself as a container like AFS is a bit more tricky because of the libraries not supporting recent features in the kernel I'd like to reflect on Benjamin's comment regarding SSH SS50 or I don't know how you pronounce that so as far as creating a web client is concerned we looked into that briefly it's one of the things on the roadmap and the reason why it's on the roadmap is because the way if you have to set up an external web interface for people to use then it's fairly complex to set up because you need to tunnel through a web socket connection and then make an SSH connection out of it in which case you lose things that you could use like Kerberos because the browser can authenticate via Kerberos as well so the plan is actually to build in support for a web client it's a bit of a toll order so I don't know if that would happen but it would be very nice if you could natively integrate Kerberos support into continuous SSH using a web terminal in which case you could just go and basically open a browser and then you have your SSH and it just works and you're logged in you can go start typing on that so you do for normally for SSH for web SSH things you need some sort of a server which is going to be either Python and I think that's a bit of an overhead to set up especially if you want the more advanced features that's probably not going to work so we're going to work in the future depending on time because at this time this is on my side still a little bit of a side project we're going to 
work to make sure that we have a web based solution for that as well I think I just jump into the demo real quick so what you're going to see here is a modified version of the quick start example so what I did here is hold on there we go so here's the quick start example and what I'm going to do is I'm just going to do Docker compose up to start a bunch of containers and you can see I have debug logging because I can this is a fairly extensive logs might want to turn this down for production and then I can do SSH foo at localhost minus p22 and 22 and this is running in the honeypot configuration so I can use any user to log in it's going to let you in without a password for any key etc and the interesting part is as I said we're dropping in a container so oh actually I didn't oh I didn't add an if config but if I added an if config to this image then you would see that there is no network interface running here there are apart from the container SSH agent which is running as I said as pit number one there is nothing else running in this container you're completely isolated and you can set up the file system permissions as you desire you can set a read only or a root file system etc and you can see I even took the user name from the SSH connection so I logged in as foo and I took it in and emulated that for the for the user so it looks like hey I'm on a really I'm on some machine that's doing some Bitcoin mining so if you're going into research and trying to research SSH attack patterns then this is a fairly good way to do it because you can actually simulate a real environment if you want some more hardening you could look into something like firecracker VM which actually runs VMs instead of containers etc etc and as far as the entire setup is concerned what we have is the config file so the config file is really well documented we have this little banner you could add your privacy disclaimer or whatever you want to add there whatever you're required and then you 
have the two webhooks in this case I have two separate containers running one is the auth config server this is the default we supply a basic auth config server that you can use for testing and I created a separate config server to make sure that the username matches I selected the back and docker in my case and then I have a whole bunch of settings that reduce my exposure to potential attackers this is really well documented so we have a guide for setting up a honeypot and these settings are all documented in that guide we have additional to that we have some hardening guides for both Kubernetes and Docker itself and that's it so basically there's the config file and then you have the Docker compose file which which fires up the containers what it does I have the guest image just for the convenience of building it I have container SSH itself which I'm exposing on port 2222 and I have a bunch of volumes which is basically just SSH host key mounted in the config file mounted in and I'm mounting in the Docker socket so it can talk to the Docker Docker demon and then I have my two other little helper webhooks there and then we have of course libraries in go to help you write a webhook server or you can just take the description and write your own it's basically JSON that you need to return one note the current stable version of container SSH we publish an open API doc for these the problem is it's currently unfortunately broken it's going to get fixed in the next version so you will have an open API doc so you can generate the server the server libraries in any language that you desire and that's it for the demo very good I think we can go back to comments and questions I will start again I had one which was Nikos mentioned the UID GUID coming from like a metadata service like LDAP or something in practice you're passing this to Kubernetes I guess but there's no user like username spaces or anything like this I guess for Kubernetes specifically there's no support for 
user namespaces, sadly, but for the other backends we are looking into enabling user namespaces as well. Currently it's using the users as it gets them from LDAP, yes.

Other questions? Ricardo, you mentioned something about kubectl debug? That was just because you were saying you didn't have the ip command available. This is a Docker setup, not a Kubernetes setup, that's what I said about kubectl. It's a constant pain that the minimal images don't have these tools, but I was just putting it out there because ephemeral containers and kubectl debug are incredibly helpful. So that's actually something we could look into implementing in ContainerSSH: starting an ephemeral container in an existing pod. Back to the question of running Jupyter Notebook, for example: run Jupyter Notebook and then run an ephemeral container for the purposes of SSH access. That's something we currently haven't implemented, so if you're having fun with Go development, then do I have the project for you.

Awesome, let me check the chat here, feel free to pop in. While people think, I have two more comments. The first one is submissions: we haven't even had KubeCon Valencia yet, but the CFP for North America is already open, so I was going to suggest: please submit a talk about this project. I think it would be amazing and I think it would be considered. The second one is: I was checking the GitHub page and you seem to have quite a lot of users but maybe not so many contributors yet. Did you consider donating or submitting this project to something like the CNCF, for example?

Yes. So the thing about the users is that we don't know most of our users, because we don't do any sort of tracking or anything like that. The only number that we have is the number of container pulls, and that's a fairly large number: over the last year I think we had over a hundred thousand guest image pulls and several thousand installations. We did consider it. The thing is that ContainerSSH is still very early in its
life. Right now there are four core maintainers: Nikos, next to me, is one of the people who wrote the most Go code; we have one other friend of mine who is working on the web-related stuff, so there is now a configurator for ContainerSSH, etc.; and my wife Sanya, who is very avidly looking at her own project right now, is working on a lot of the organizational stuff and also did a fair number of contributions. The problem that we are having right now, and why we haven't done this yet, is that in order to donate to the CNCF we would kind of have to think about what the governance model is, and right now the governance model is: we agree on something and that's that. (I think you got muted somehow. Sorry.) So we'd have to think about the governance model to make that happen. And I believe that in order to donate it to the CNCF we'd also have to change to the Apache 2 license; that's something I read was a requirement. That's not a problem, because we are MIT-0, which allows us to do that. But as I said, this is just an organizational matter; we'd have to go and actually do the legwork. Right now we're focusing on actually getting the next release done, and then we can see where we can take this project in the future, because you're right, it would actually be a good fit for the CNCF.

Actually, you don't need the governance model for the Sandbox; you will need it later. The MIT license I think is compatible because it's permissive, so it should be fine. If you need help with this, bring me in as well. Thank you, that would actually be good.

Okay, there's a question from Jonathan. Does Jonathan have a mic? I don't remember. I do, and I just wanted to ask if you're aware of HPE Cray's UAIs, which are available under their CSM management stack. They are sort of their answer to, I guess, what ContainerSSH is providing: a containerized login environment that folks can directly SSH into. It sort of replaces just
having a static login node for users. Yeah, I wasn't aware of this project. Nikos, I don't know, maybe you can speak up if you have any additional info to add. What I am aware of is that there is an SSH server called Teleport, but what they're doing in order to make their two-factor authentication and web-based login flows possible is that they have an extra client that you need to install. So I don't know about this project; I guess I'll have to take a look at it. So yes, the answer is no, I wasn't aware. Nikos, I don't know if you have anything else? No, I was not aware of this project either. I guess it's something you can follow up on as well.

Checking here if there are any more questions. If not, then thanks a lot, this was pretty awesome: a nice overview and demo, and lots of questions. So I guess one thing we can do, yeah, we all have the pointers now, so if people try it (Nikos gave it a go here at CERN), it would be nice to hear about it as well. If you need help setting up the version that Nikos has been working on, then the best way to do that is to pop into the ContainerSSH Slack and we can help you get started there, because there are still a few patches that are unmerged and that we need to review and make sure are stable before we can merge them into the main branch. I usually hang out in the Slack channel a lot, so if you join, I will be there and I can help you, and we're both on the CNCF Research User Group Slack as well.

There's a question from Alex. Alex, do you want to go for it? Yeah, I think there are a couple of companies who do this with proprietary options; can we talk about how things compare? I think Teleport is an alternative doing something very similar, and a lot of the same things pop up, like audit logs and a lot of the same features. Is there a way for me to compare one to the other? How should I think about it? We didn't put together a
feature-by-feature comparison table, and the reason for that is that it would be very hard to keep up to date, especially since we don't have the funds to continually try and buy... I didn't mean a table, just: what are some of the comparisons? Right, so the way that you can think about it is this. Teleport is actually open source, so if you want, you can go try Teleport today; they make it open source. The business model of companies like this is usually access control, so the way this usually works is: you want to protect your company network, you want to have people come in, then you can use their solution, whatever that is, whether they use a custom client, or you can use Teleport's companion to your regular SSH client, etc., to access the network. I believe HashiCorp also has some sort of gateway solution where you can go into your company network and they give you some sort of access control over what can be accessed.

The difference between ContainerSSH and these projects is that ContainerSSH is relatively unopinionated. I'm saying relatively because we still require you to use a guest agent if you use the container backends; if you use the SSH proxy, then you can do whatever you want behind it, so you can just pipe it to the next SSH server and it will just continue working. But we don't give you a model of what we think you should do with it. We just say: here's the tool, you can start containers with it, you can proxy with it, and it has audit logging. Then you could go into the details of comparing how detailed those audit logs are, because we're literally logging everything if you want to, and then you can decode the SFTP streams and extract the files and whatnot. But if you build a lab environment, if you build a honeypot, if you build something that we didn't even think about, you can do it with ContainerSSH, because it's a building block, whereas the commercial solutions are usually geared towards a specific audience of
"this is what you should do with it." Thanks, that helps. All right, there are a couple of links also posted by Panjana and Jonathan on some possibilities in this area as well, so if people want to check them out, go for it. Otherwise, yeah, thanks again Nikos and Janos for the nice presentation. I'm sure they will come back to the group; it's a thing we've seen with other presentations already, so I'm sure it will come back and we can get an update later.

Regarding other business: the first thing I would mention is that we won't have the meeting in two weeks, because it's KubeCon, so we'll skip that one; the next one will be on June 1st. We are now setting up the agenda for the rest of the year, like we did in January or December for the first half, so if you have suggestions on topics that you would like to see covered in the group, post them on the channel and either Jamie or myself will follow up. If you have ideas for speakers, that's even better. And then the last one: I mentioned there are a couple of people who indicated they are new to the group, so let me just make sure that we get all the introductions, because we didn't do them at the start. Nikos already presented himself, and Janos as well; I don't know if you want to say something else about yourselves? I don't know, I work at Red Hat, is that important? Yeah, well, it's relevant. So if you want to show up for the meetings, yeah, it's open to everyone. We have Christina, but she's not new. So who else is here? Remy? Hi, I'm a new systems engineer at CERN Accelerator Controls. We have a physical infrastructure of 400 servers, and we work with Ricardo on moving some workloads to Kubernetes, so I'm quite new here. Awesome, welcome. I don't see the list, but I see Arne as well, who probably doesn't have a mic either; I can introduce him. He's down the corridor at CERN as well, and he actually runs the team that takes care of Linux and the cloud here. And I think that's everyone. So, does anyone have any other business for today? Let's see the chat. Awesome.
Okay, if not, then we can reclaim five minutes. This has been great, thanks a lot, and see you all either at KubeCon or on June 1st when we have our next session. Hopefully there will be a lot of people at KubeCon, so I'm looking forward to it; we can try to get a small informal meetup of Research User Group people. There will be a lot of talks on this subject, and there will also be the Batch and HPC Day on the Tuesday before the conference, so I hope to see you there as well. Talk soon, thanks a lot, see you, bye.