 Good morning. My name is Eduardo, I work for Sylabs, and I'm here to give a quick update on what's new in Singularity since last year. Michael Bauer was here last year at FOSDEM in the HPC track, so I'm going to talk about what's changed since then. A quick bit of history for the people who don't know Singularity: it was created by Greg Kurtzer at Lawrence Berkeley National Laboratory back in 2015, somewhere around October, because people kept asking how they could take their own environment from their desktops and workstations to the cluster infrastructure. The first real release, the 1.0, came at some point around April of 2016, and thanks to user feedback and open source contributions — that's about when I started contributing to Singularity as well — it got to 2.0 in June of 2016. After that, things moved pretty fast. Greg founded Sylabs last year in January, and we are now around 28 engineers working day by day to make Singularity better and gathering user feedback on how Singularity can help people do their work faster. This talk is going to focus on the 3.0 release, which came out in October last year and brings a lot of new features, more stability for users, and a language change so open source contributors can jump in more easily than before 3.0. A quick thank-you to all the open source supporters of Singularity: last year we won HPCwire Readers' Choice and Editors' Choice awards at Supercomputing, and we know that's thanks to all of you, so really, thanks for the support. These are the people and institutions currently running Singularity on production systems and contributing back to the open source project, so once again, thanks to you for these awards. So this is a year's update on Singularity, and the next slide is going to shock half or more of the people here. It's Go. 
So Singularity last year was a monster of Bash, Python, and C that somehow all talked to each other. It was really hard for a new open source contributor to jump in and start contributing to Singularity. From the stability point of view, Python was giving us a lot of dependency issues when trying to move Singularity between systems. And the third reason was that people were asking us to integrate Singularity with all the cloud tools out there that HPC people are really trying to leverage — things like Kubernetes, things like the cloud native tooling. So in order to interact more easily and more natively with those tools, we started considering a move to Go, and at some point around February last year we started a whole rewrite of the code. Singularity is now Python-free; it's a cgo project, so it's mostly Go code. We only use C code for the security bits — making the system calls and interacting with the kernel — and all the rest is written in Go. I'm always trying to get new people into Singularity, and I think Go is a good language for newcomers: it's really easy to jump in, read some lines of code, understand the language and, of course, start helping with open source contributions. What else do we like? Apart from the integration with the cloud native tools out there, there's Go's concurrency model: as a developer it's really easy to track your threads (goroutines) and to know what's happening in your code at runtime. The last line, I'm going to show you on the next slide: CNI. What is CNI? It's the Container Network Interface, and with the 3.0 version of Singularity you can start doing port mapping and network virtualization. We've faced some challenges. If you've been following the Singularity open source project, it was quite a year moving the code from Bash, Python, and C over to Go. The first main point was choosing a cgo interface. 
I know that in the Go world cgo is still a divisive choice, but it's really needed when you're doing security bits and system calls and you need to control them from the C side — all the C coders here know that C gives you full control of the system. We follow the Go standards on packaging, so if you are a regular Go developer who doesn't know anything about containers, file systems, or the low-level plumbing of how Singularity works, when you jump into our code base you're going to feel at home in a Go project. We are using vendoring — this third line is why we can say we are Python-free — because right now, from a clean git clone, you can start building things without bothering with any Python dependencies. Before, it mattered whether you had Python 2 or Python 3, and a couple of years ago we were already seeing failures with some users on newer versions of Python when trying to compile Singularity. So it's really nice that now you can just do a clean git clone and start building Singularity: the vendor folder is there, so all the dependencies are already in the repo. Packaging: building a cgo project as complex as Singularity made us rethink how autotools works, because autotools is complicated — it's meant for things much more complicated to build than a Go project, and Go is kind of easy to build. So one of our engineers designed makeit, a homemade tool for building RPM and deb packages. The deb side is still a work in progress, but you can already jump into the Singularity project, build your RPM, and distribute the RPM package. A difference we are already seeing versus Python is that there are a lot of APIs and ecosystems out there that we can start leveraging out of the box, and one of the main releases this year is that Singularity can now interact with Kubernetes. 
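As a rough sketch of what that clean-clone build workflow looks like — repository URL and make targets as used in the 3.x series; check the project's INSTALL instructions for the current steps:

```shell
# Build Singularity 3.x from a clean clone -- no Python involved.
# Assumes Go and the usual C build tools are already installed.
git clone https://github.com/sylabs/singularity.git
cd singularity
./mconfig              # configure; generates builddir with a makeit-driven Makefile
make -C builddir       # compile the Go/cgo sources
sudo make -C builddir install

# Or produce an RPM package instead:
./mconfig && make -C builddir rpm
```

Because all Go dependencies are vendored in the repo, the build needs no network access to fetch modules.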
So if you go to the Sylabs git group you can see a CRI repo, and that CRI repo is how Singularity integrates with Kubernetes. Thanks to Go, we get the native Kubernetes APIs and we can just start talking to Kubernetes. Go test: I'm part of the QA team of Singularity, and this is really nice. We were doing just Bash tests, and we have moved the simple Bash tests of the 2.x versions to a more complex and more robust way of testing Singularity. If you look at Singularity right now, on the testing side we are doing unit testing and integration testing, things that were not happening in 2.x — the 2.x versions only did integration testing, nothing at the unit level. So by leveraging Go, Singularity is a more robust project now, because it has a bigger and more robust test suite. It's also very easy and productive to write Go as a developer: rather than trying to understand how a Bash script exports some environment variables, Python reads them, and C consumes them — which was the 2.x version — you can now understand everything by reading the Go code. It's a single Go file versus opening a Bash, a Python, and a C file and trying to figure out what's happening between those three files. Go is very opinionated. I know I told you that that slide was going to shock some of the people here, but we are already seeing people running the 3.0 version of Singularity in production, it's very stable, and I encourage you to try it. One of the big changes, apart from moving from Bash/Python/C to Go, is the image format. Singularity now uses its own image format called SIF, the Singularity Image Format, whereas in 2.x — for people familiar with 2.x — you used ext3 images or SquashFS images. For 3.0 we use a homemade image format, and that gives us a lot of new features: you can store metadata in the headers, so you can think of it as a tar file optimized for container usage. 
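As a sketch of how you can poke at a SIF image and its headers — the image name here is made up; siftool is built alongside the 3.x sources, and singularity inspect reads the stored metadata:

```shell
# List the descriptors inside a SIF file: definition file,
# SquashFS partition, signature blocks, and so on.
siftool list alpine_latest.sif

# Show the labels/metadata stored in the image headers.
singularity inspect alpine_latest.sif

# Recover the definition file the image was built from.
singularity inspect --deffile alpine_latest.sif
```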
So you store your SquashFS image in the big object in the middle — that's the read-only image — but rather than just the SquashFS image, you can also store metadata in the headers you can see here. And upcoming, maybe in the 3.2 or 3.3 version of Singularity, we are going to have a writable overlay that will sit apart from the signature block. On the next slide I'm going to show that in 3.0 you can also sign your containers with PGP: you can sign your containers and send the keys and the container to a friend or a peer reviewer, and they can check that the image is immutable and hasn't changed in transit, or that nobody switched it for another image on the way. So we are very proud of our new image format, and as a developer — if you're building, say, a scheduler integration or tools like that — you can start leveraging the metadata in the headers to tell the scheduler or your runtime how to use the root file system that sits in the middle in SquashFS format. So what's new in 3.0 on the cloud side? Users can now have a Container Library. This is separate from Singularity Hub — people are familiar with Singularity Hub, which is supported and maintained at Stanford, and I guess you all know Vanessa. Sylabs now has a Container Library which is optimized for SIF: you can only pull and push SIF images, because it's just for 3.0 and above, so you cannot push images from 2.x versions. We also have our Remote Builder service. A lot of users were telling us: "Singularity is good, I want to jump into Singularity, but my university, my institution, doesn't allow me to have sudo, even on my workstation or my laptop — so how can I build and test my own images before giving them to the system admin?" If you're running into this issue, you can go to the Sylabs web page and give us your definition file, and we'll give you back your SIF file, no sudo needed. 
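A definition file like the one you would hand to the Remote Builder can be as small as this — the base image and package choices here are just an illustration:

```
Bootstrap: library
From: ubuntu:18.04

%post
    apt-get update -y
    apt-get install -y python3

%runscript
    exec python3 "$@"
```

The builder runs the %post steps in the cloud with the privileges you lack locally and hands back a signed-ready SIF file.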
And last but not least, the key store service. You can now sign your containers, as I was telling you, and when you want to move those keys between hosts you can use the Sylabs key store service for free. That way you can sign your container, say, on your workstation, tell your system admin about your keys, and they will be able to verify that the container comes from user XYZ. So let me show you how it looks. This is the Sylabs web page, and you can see up there — sorry, I'm facing you — the Library, the Remote Builder, and the key store. This is how the Library looks: I'm signed in with a Sylabs ID, this is my login, and we have some statistics here from different users and how they are pulling and pushing their containers. This is the Remote Builder: you can drag and drop your definition file into this white box, or you can start writing your definition file right here, hit build, and we build it for you, with live output from the cloud infrastructure where this is happening. After all this we give users a direct download, so you can just click, download your Singularity image, and start using it. Or, if you don't want to click that link, we give you a library link: your image is stored there, so you don't have to download it, because maybe you just want to build the image and then pull it from the cluster or from another host. This is how an image looks when stored in the Container Library. And this is the key store: as you can see here, I have two keys stored, and I can use these keys to sign containers on my host and then share those containers with partners and peer reviewers, and they can verify that the container comes from me via a PGP-based key. OK, I was showing you this — this is how interacting with the Library looks from the CLI, and it's really easy. Singularity 2.x had the pull command but not push; 3.0 introduced the push command, so you can push your container into the cloud. You can create tokens — that part is via the web UI — and you can also search from the CLI for containers publicly available in the Library. We have private images and public images, like GitHub and others do, and the search command only works on publicly available images. So let's say you're interested in pulling a quick Ubuntu or a quick Fedora image and you don't want to type library://something/something/fedora: you just say, Singularity, search for a Fedora image, search for an Ubuntu image, and we retrieve all the publicly available images that are Ubuntu or Fedora base images. This is how users can interact with the Remote Builder from the CLI as well. Rather than changing the whole CLI, we just added a small flag, --remote. If you know you don't have sudo privileges on your host, you just run singularity build --remote, the rest stays the same, you point it at your definition file, and we build it for you in the cloud. So there's no need to go to the web page to use the Remote Builder; you can use it from the CLI. For this you do need one GUI step, which is creating the token: you need to go to the web page and create your token, because that's hard to do from the CLI. The key store — this is how users interact with the key store from the CLI, and it's really easy. If you don't have a keypair on your host already, Singularity will talk to the PGP tooling on your host and create some keys for you. You can list keys — you can have several keys, not just one, stored in the Singularity key folder — you can push them, and you can also search for keys, just as you can search for images. I have three minutes. Moving away from the cloud functionality: users can now do network virtualization, so you can do port mapping, set the hostname and DNS, and create bridges — play around with network virtualization in Singularity containers. So we are seeing users
back at Supercomputing with some really nice use cases: deploying web services through Singularity, and doing port mapping for remote visualization on GPU clusters. So this is a pretty powerful tool for users. Security — this is mostly for system admins. Singularity is always trying to give more power to the system admin to control the users: system admins can now add or remove capabilities per user, allow or disallow setuid for users, and there's keep-privs — this --keep-privs flag can let users run Singularity inside Singularity, crazy things like that. These are security bits, and I encourage you to read the documentation on how a system admin can tune Singularity's security from the configuration. We also have cgroups support. This feature requires running via sudo, but you can now limit the CPU, the memory, the I/O — everything you can control using cgroups you can now apply to your Singularity processes out of the box. One minute. Sign and verify: I was showing you the key store, and this is what the CLI looks like when running a sign and a verification on your Singularity images — OK, I was going to demo them, but, OK. And the last thing I'm going to show, which is something I really like, is this file. This file is mostly for system admins: you can now create whitelists or blacklists of keys. You can say, on this cluster all Singularity images must run signed; and you can create whitelists and blacklists, so you can say, on this cluster only images signed with this, this, and this key are going to run. And let's say you fire some user, some student, from the infrastructure: you can add them to the blacklist and say, OK, this student's key will not run. Even if they try with sudo privileges, Singularity is going to block that image from running via this configuration file. So this configuration file is really powerful for system admins for blocking and for leveraging the sign and verify feature of the 3.0 version. And with one minute left, questions — and we are hiring. We are really interested in getting more developers, people with HPC knowledge, with systems knowledge, so if you are interested in being a container person, talk to me after this. [Question] Oh no — yeah, I switched them, because we already had a Singularity 101 here at FOSDEM, so the committee asked me to do more of an update than just a 101; that's why. [Question] Not yet — the question here is whether you can install the cloud services of Singularity on premise, in your own cluster. That feature is coming at some point, let's say after August or September; we are working on that, so keep an eye on the Singularity repo. You'll be able to install the Library, the Builder, and the key store on your own hosts, not depending on our infrastructure.
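For reference, the whitelist/blacklist file described above is the execution control list, ecl.toml, in the 3.x series. A minimal sketch — the paths, tag name, and key fingerprint here are made up:

```toml
# Execution control list: restrict which signed images may run.
activated = true

[[execgroup]]
  tagname = "group1"
  mode = "whitelist"            # also: "whitestrict", "blacklist"
  dirpath = "/shared/containers"
  keyfp = ["5992BF91B36C72350D41E8B67FDD63016A24BE26"]
```

With a whitelist group, only images in that directory signed by one of the listed key fingerprints are allowed to run; a blacklist group inverts that, which is how you'd lock out a departed user's key.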