So I'm going to take just one minute to invite you all to our Singularity user group, since we are here at the EasyBuild user meeting. We are going to be hosting a user group, thanks to the San Diego Supercomputer Center, next March, and the call for papers is open. So if you have a really interesting use case, or you are facing a challenge using Singularity, please let us know through the call for papers; we are welcoming both users and contributors of Singularity. Super quick announcement: I know that the link is kind of hard to copy and paste, so reach me after this talk and I'll give you a better link to the conference.

Okay. So I'm here to talk about Singularity. I kind of feel that everyone here is familiar with the tool and what Singularity is doing, so I'm going to do a super quick 101 with two slides, and then I'm going to give a big update on what Singularity did this year and the new changes that are coming into the repo, the new features that are in the upstream master right now.

For those not familiar with Singularity: it was created by Greg Kurtzer, also a founder of CentOS, at the national laboratories, at some point around October of 2015, because users were knocking on his door about using Docker on HPC facilities with root privileges. The first release came about a year after that, and with a lot of user feedback and user contributions from a lot of universities and HPC centers it turned into the 2.0 release around mid-2016. Last year Greg founded Sylabs, a company to back the Singularity open source project. With that we started to work on a complete rewrite of the code, to make it more professional and so we can give better support to users, and we released the 3.0 version in October of last year. So I'm here to talk to you about what this 3.0 release is giving to the users.
So Singularity will never forget that it was designed for HPC and that it was born on HPC, even though it is getting a lot of features from the enterprise and cloud world. Singularity is optimized for compute jobs and supports all known resource managers. Singularity was born being run by Slurm, HTCondor, PBS; you name your resource manager, you can run it with Singularity. It is compatible with all HPC infrastructure. I was just talking to a user about how Singularity is being run on Cray systems with 20-year-old 2.1, 2.2 kernels. So Singularity can run on every HPC system out there; it is designed to run on all these supported HPC systems. Users can bring their own environment into the HPC environment, which is what has really made Singularity thrive in the HPC culture. HPC systems tend to be somewhat old, and users are running the new CentOS, the new Red Hat, the new Ubuntu on their workstations, and they want to take that environment into the HPC system and just run their tools, without having to adapt what they are developing to the versions that the system admin has on the HPC infrastructure. And one of the main concerns of all system admins is security, so Singularity blocks all privilege escalation from the user, however, or wherever, the container is being run.
So whether it is run through a scheduler or run manually, however it is being run, Singularity will drop all privileges, really giving all the tools and all the power to the system admin over what he wants to give to the users. Singularity is also designed to be simple, versus how the cloud and all the new tools like Docker and the OCI runtimes are designed. Singularity's starting point is a single-file image, versus the layered format that all the OCI-compliant runtimes are using. One of the main things this gives us in the HPC realm is that you can share your Singularity image really easily over NFS: you can just put your Singularity image on NFS, on Lustre, or whatever your shared file system is, and run that same image across your whole HPC infrastructure, without having to worry about how you are going to move the layers, and how you are going to link the layers on every node, depending on how you are sharing your resources across your HPC infrastructure. Singularity is also designed as a binary: after you build your container image, it turns into a binary itself. It is designed like that so it can be run through any scheduler. Whereas the OCI-compliant runtimes try to install themselves into the system, Singularity is just there to be run. It is not trying to run or be the master of the system; Singularity is just there to be run by a scheduler or by the system admin. So it is really simple. It is not trying to do really complicated things on the host, or leaving a root daemon trying to orchestrate containers on the host. No, Singularity is just there to be run, so it is really simple to use. For those not familiar with Singularity, these are the main runtime features. As I told you, this is a super quick 101. Singularity works out of the box with GPUs.
So if you have an HPC infrastructure with NVIDIA GPUs, you can start running your machine learning, AI or graphics workloads on GPUs out of the box with Singularity images. It is standards-compliant: you can now run any Docker or OCI-compliant image with Singularity, and you can now pull images from any OCI-compliant registry and just start running them with Singularity. As long as the registry is OCI-compliant, you can pull from it and run the image with Singularity as a rootless container. It supports graphical user interfaces, and that combined with GPUs... I don't know if you followed last Supercomputing, where the keynote from NVIDIA showed how you can ship a job with Slurm, and with that job, over a GPU cluster, run a graphical user interface and start doing graphical analytics with Singularity containers. It is resource-manager agnostic; as I'm saying, it is really simple. It is designed to be run as a binary, so you can ship it with whatever resource manager you want. It is compatible with any HPC infrastructure: Singularity can work with InfiniBand, Singularity can work with any shared file system you have on your host, with GPUs, with all the HPC hardware and tools out there. Jobs and services: Singularity at first was designed for batch jobs, but at the request of the users, Singularity can now also run services, like databases, like RESTful APIs, or whatever the user needs to help their batch job run. We are really seeing services with Singularity running small databases to support a batch job on a cluster. And it is designed for performance.
So we don't have a root-owned daemon or any manager inside the host; Singularity is a binary that execs straight into the process, so we are not losing time talking with a daemon. Singularity can give you almost native performance when running your application inside a container. And it is really secure: one of the design principles of Singularity is giving the system admin back all the control of the HPC infrastructure. There are other container runtimes that give the power back to the developers and the users; Singularity is thinking all the time about the system admin, and about how to give that system admin back the power over the whole HPC infrastructure.

So, I'm here to talk about an update. For the ones not familiar with Singularity, those three slides kind of bring you to the table, and I'm here to talk about the 3.0 release, the major rewrite that we did, and the new features that are right now in the upstream of Singularity. Singularity before 3.0, what we call the 2.x versions, was a monster of Bash, Python and C calling each other in really complicated ways; now it is a Go project. One of the main reasons for doing this is that we can integrate with the OCI and Linux Foundation tools and libraries that they have developed to run containers. The Go programming language is a compiled language; you can see it as a C++ without the object-oriented programming. It's a compiled language really close to C, and it's really easy to maintain. What do we win by doing this?
What I really like about Go is that new users can jump in really fast and start contributing to Singularity. So this gives Singularity a bigger and healthier open source community, because everyone can just jump in and start contributing. The concurrency model of Go, as a developer, really gives you a fast way of managing and developing threads inside the code. Then, as I said, there is the integration with other container projects. This is why I say that you can start running any OCI-compliant image with Singularity out of the box: just by calling the Linux Foundation libraries and packages, we are turning those images into Singularity-supported images and running them for you. Singularity is also now supporting the cloud-native Container Network Interface (CNI), also backed by the Linux Foundation. So you can now do port mapping with Singularity: you can start your database or your web server with Singularity and do port mapping on the host, which is pretty cool for people thinking about the NVIDIA example of running graphical jobs through Slurm, because with the port mapping you can start doing that. Go doesn't let you control the kernel namespaces the way we want to, but Go also gives you a tool called cgo, with which you can have small chunks of C code inside the Go code, to do in C what you cannot do in Go. So we are using the cgo interface for forking and for certain system calls that are really needed for running Singularity. We are following all the Go standards, so Singularity is now a Go-based project, and as I said to the OpenHPC speaker, right now Singularity has its own vendor folder in the GitHub repo. So it is super easy to install Singularity right now.
It's a git clone, mconfig, make, make install, without dealing with Python dependencies, without dealing with any external dependencies. We have our vendor folder in our GitHub repo, so you don't have to start pulling external dependencies; we have them stored for you, thanks to the Go standards that let you store your dependencies in your repo. But Singularity is still a very complex project, and one of our developers was not happy with how autotools were handling the way we do cgo, because for doing cgo you still need some .h headers and libraries in C. So we developed a tool called mconfig, which also lets users build RPMs and handles the packaging. This mconfig is really nice because it is going to check all the dependencies for the C libraries on the host, which are pretty basic, and then build and compile Singularity for you. So when you go into the GitHub repo, you will see that in the instructions there is no autoconf configure step like we are used to in the HPC realm; we are using our self-developed mconfig instead. Go versus Python: other reasons for moving were the large ecosystem of packages, not just the Linux Foundation packages, but the network tools too. And right now Singularity 3.0 can also use cgroups, so you can isolate your container processes with cgroups with Singularity out of the box. Go test: I've been hearing that testing HPC packages is really hard, and go test is making it really easy for us to test Singularity for every release. Go has its own tool for testing Go packages, and it gives you a really verbose output of everything inside the code, what is breaking and what is working. As I said, I always try to bring in new open source contributors, and it's really easy to be productive in Go: a user can develop a new feature in a week or two, test it in another week, and take that feature into the upstream maybe in a month.
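Put together, the install he describes amounts to roughly the following. This is a sketch under the assumption of a Go toolchain and the basic C development headers being present on the host; the repo URL and build steps follow the project's install instructions of that era:

```shell
# Clone the repo; Go dependencies are vendored, so no extra
# downloads are needed beyond the clone itself.
git clone https://github.com/sylabs/singularity.git
cd singularity

# mconfig replaces the usual autoconf step: it checks for the few
# C headers/libraries that cgo needs and generates a build directory.
./mconfig

# Compile, then install (installation needs root).
make -C builddir
sudo make -C builddir install
```

There is also an `rpm` target driven by the same tooling for sites that prefer packages.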
So it's really easy to develop new features in Go; that was another reason for moving Singularity to Go. Another big change in Singularity is our image format. Back in the 2.x versions we were using EXT3 or SquashFS file systems; now we have developed the Singularity Image Format (SIF), which, explaining it from a high level, you can see as a tar file optimized for container usage. This is a binary file that carries some headers that users can leverage to store JSON configs, YAML configs, different labels and environment settings that they want when they run that container. So they don't have to hard-code those labels, environments or metadata inside the container; they put them in the Singularity file format instead. This really gives the user the ability to leave the file system, which is this big block, as just the file system with the application, while the metadata about how to run that container lives in those headers. We are also working, it is not in 3.0 right now, but it's going to be in 3.2, maybe not 3.1, on a writable overlay. This file format is non-writable: as soon as you build your image in the Singularity Image Format, it is read-only. We are working towards allowing users to attach a writable overlay to that same Singularity image, but it's not there yet; it's been really tricky. What this Singularity Image Format gives us is that it is guaranteed to be reproducible, because once the container is built it is read-only; if someone tries to modify something inside the container, it is going to break it. It is really easy to archive, because the file system inside it is SquashFS, so you want to imagine something around 24 megabytes, it is really light. And by being small it's also mobile. It also works together with another feature that is coming in 3.0; hopefully it's here.
That is the keystore service. So I'm going to start here, because it's one of the most important things coming in 3.0: you can sign and verify that SIF image that I was showing with PGP keys, and store those keys in the Sylabs cloud service. So you can build your container, sign it, push that key to our cloud system, share that image with a peer reviewer or a friend, and they can check that the image is still the same image: they pull your PGP key from the Singularity cloud service and verify that image. So one of the new features coming in 3.0 is the keystore service and handling PGP keys through the Singularity CLI. You can create your PGP keys, store them, list them, use them to sign and verify containers, and push them to and pull them from the cloud service. Another user-requested feature coming into Singularity 3.0 is the remote builder. We are aware that some HPC infrastructures, and also some universities or companies, don't give users sudo, even on their laptops; they hand users a laptop with sudo already blocked. So users were asking: how do I build an image if I cannot use sudo at all, not even on my laptop? So now, in the Sylabs cloud service, the user can just drag and drop the Singularity definition file, the recipe as some people call it, but we call it the definition file, and we are going to build that container for you. We are going to spin up some cloud services, build that image, and ship it back to you. And we are going to store that image in the Container Library. You can choose to remove it or not, it's up to you, but once you build your image it is going to be stored in the library. So then, this is how it looks in Singularity.
So you have the three tools here: the Library, the Builder and the Keystore. Once you build your image, it is going to be stored, well, this is how it falls out, but it is going to be stored under your username, and here it is going to say "remote build". So every remote build that you run is going to be stored in the library with a unique ID. You can then remove it if you want, but we store it because we also give you the additional feature that you can do a remote build and tell Singularity: don't give it to me, leave it there, store it, I'm going to use it after this. So those three cloud features are now usable with the 3.0 version in the upstream. You just need to go into our Container Library and log in; you can log in with your Microsoft account, your Google account, your GitHub account or your GitLab account. Once you create your account you can start pulling and pushing containers in the library, signing your containers with your keys stored in the keystore, and also building containers remotely, without the need of sudo on your laptop. So this is how it looks... kind of blurry. Yeah, let me... it looks better here. Don't trust the Wi-Fi. So this is how the remote build looks. We are warning users that it is in alpha preview right now, while we try to make it more secure, until we can say it is a GA product. You just drag and drop your definition file here, or you can click around until you find it on your host, or write it here, this is a writable box. After that you click build, and it is going to be built here. You can see, I'm still up... and we are going to show you live output of how we are building that image for you. So this is my user; here would be my user, which is sylabs-ed, slash builder, slash a unique ID. That's the unique ID under which the image is going to be stored. Ah, this one was built. And after that we can see the image in the library.
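On the CLI, the same remote-build flow is roughly the following. This is a sketch: the definition file name and library path are made up, and in early 3.x the cloud login was done by saving an access token from the website into your home directory rather than via a dedicated subcommand:

```shell
# Build remotely: the definition file is uploaded to the cloud
# builder, the build log streams back live, and the resulting
# image is written locally and also stored in the Container Library.
singularity build --remote lolcow.sif lolcow.def

# On any other machine, pull the stored image back from the library.
singularity pull library://myuser/remote-builds/lolcow:latest
```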
So this is the library of this user: remote builds, the unique ID, and the image, and this is how the image looks in the library. All of this I'm showing you through the website, but all of it can happen through the CLI, and we always show the example here. So this is how you can pull this image by name, or pull the image by its SHA sum. Why the SHA sum? Because this library was thought out and designed for, let's say, bioinformatics users. If you're building the same image over and over again, maybe with the same name, at some point you will want to go back and check, via the SHA sum of the image, where something failed. Let's say your bioinformatics pipeline was changing over time, but you forgot to change the image name: you can go back and check the builds by their SHA sums. I can do the same build over and over again, and the SHA sum is going to change, because it's always going to be unique. Here, this is how the key service looks. You can see that I have two keys stored. Once I created the keys on my host and used them to sign my containers, I pushed them to the keystore. So let's say I change my laptop, or I give an image made by me to a user and say: you need to verify that that image was made by me, and that you are not pulling a container that is not mine but has the same name. You can then pull these keys on another host and verify that image. So this is kind of a key hub; you can start pulling keys to verify your containers. This is how you can create, list and push keys on your host.
So, as I'm saying, everything that I'm showing here with the web UI, you can do through the CLI. UI people, CLI people: we're trying to satisfy all flavors. Another feature coming in 3.0 is network virtualization. As I was saying, by leveraging the Linux Foundation packages in Go, users can now use all these network features with Singularity, making it better and easier to run services, like web servers, like Apache, like Spark, and all those scientific tools like RStudio, and do port mapping, or create new networks, or, I don't know, users know how to really complicate their lives with these tools. But it's really cool that you can do DNS for a service that you're running through Singularity, or do port mapping with Singularity. So network virtualization is also a new feature coming in the 3.0 version of Singularity. Security: as I was saying, we are always thinking about how to give more power to the system admin. We're always thinking about how to make the system admin the master, the owner, of the system, and not the users. So these are tools that the system admin can enable via flags, or via the Singularity config file, to make things more secure or to remove capabilities from the users. The system admin can decide to drop capabilities, or to give users more capabilities because they are trusted users; but these are system admin features. Singularity 3.0 also gives cgroups to users, so you can now run encapsulated processes: you can limit the memory, you can limit the I/O, you can limit the devices that the container will be able to see, you can limit the number of CPUs, all the things that you can do with cgroups.
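Hedged sketches of those two sysadmin-facing features, with made-up image names and limits; both need elevated privileges, and the cgroups TOML keys follow the schema in the 3.x documentation:

```shell
# Port-map a containerized web server via the CNI plugins:
# host port 8080 is forwarded to port 80 inside the container.
sudo singularity run --net --network=bridge \
    --network-args "portmap=8080:80/tcp" docker://nginx

# Constrain a container with cgroups: write a limits file...
cat > limits.toml <<'EOF'
[memory]
    limit = 1073741824      # cap memory at 1 GiB

[cpu]
    shares = 1024
    cpus = "0-1"            # only the first two CPUs are usable
EOF

# ...and apply it at launch (again, root is required).
sudo singularity exec --apply-cgroups limits.toml myimage.sif myprog
```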
You can now run them out of the box with Singularity 3.0. This of course requires elevated privileges, so it is up to the system admin how to give cgroups to the users. This is how the sign and verify that I've been talking about look in the CLI. When you run sign, you can choose from your key list, with a flag, the key that you want, but you're going to have one by default; if you don't use any flag it is going to use the default key. It is going to sign the image, and when you run verify, it is going to tell you: OK, this image was signed by the PGP key of that user. And you can also delete keys on the system with the key tools, like remove. So this is the output of a singularity sign, and as I'm telling you, it is going to sign with the default key; when you have no keys on the system, it is going to create a key for you from the Singularity CLI. So you don't have to go to your PGP application on the host; Singularity is going to call that PGP application on the host and create a key for you. This is how singularity push looks. Singularity 2.x just had pull, so you could pull from Singularity Hub, which is hosted by Stanford, but now you can also push to a container registry. Back in 2.x you could just pull, and we were not even telling the users how to save their images in a cloud-based system. Pull works as 2.x users already know, and this is how verify looks: it is going to show the email of the user and the key ID of that user. Another feature that comes with the signing and verifying, which I don't have a demo of here because it just got into the upstream, is for the system admin.
The system admin, by modifying this file, when you install Singularity, I have it installed under /usr/local, the ecl.toml, the admin can tell Singularity: you are not going to run any image without verifying that it has this key, or this list of keys. So one workflow that we are seeing, in universities for example, is that the project manager of the XYZ research project has to put his signature on the image, and maybe the QA or the system admin team will verify that image before running it in production. So the system admin can tell Singularity: you cannot run any image without at least two keys, or at least this key; and the system admin can tell Singularity which key, so: don't run this image without verifying that it has this key. So here you can see a list of keys, or just one key, and you can tag them as whitelist or blacklist. You can say all images with this key can run, or the other way around: say you are vetting a user, like a bad student that was doing crypto mining but was telling the system admin that he was doing his research thesis. You can blacklist that user's key and say: OK, whatever this key would run is blacklisted. So we are giving the system admin whitelists and blacklists for the keys. And trust me.
Before joining Sylabs I was a system admin at my university, and I had a user, a student, running crypto mining, and when he signed up for using the HPC infrastructure, he said: no, this is my research thesis. Here is, well, I have everything in screenshots because someone told me: do not trust the Wi-Fi. This is how you run a singularity build. So now in 3.0 you can build images from registries, not just from definition files. In 2.x you could pull from registries, but now you can also build from registries without privileges; you can see that there is no sudo here. We only ask for sudo privileges when building from recipes, from definition files. And you can also shell into a user-defined registry without sudo privileges. So right now, with the OCI compliance, you can shell into any OCI registry, or Singularity Hub, or the Sylabs library, without sudo privileges, and you can just start running your container out of the box; you don't have to build it locally. So here, this docker://ubuntu:latest is not on the host, and I'm telling Singularity to shell into it.
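As a sketch, those unprivileged flows look like this (the image names are examples, and the GPU flag he gets to next assumes an NVIDIA driver with nvidia-smi present on the host):

```shell
# Build a SIF image straight from a registry: no sudo involved.
singularity build alpine.sif docker://alpine:latest

# sudo is only needed when building from a definition file:
sudo singularity build custom.sif custom.def

# Or skip the local build and shell straight into a remote image;
# it is fetched and converted on the fly.
singularity shell docker://ubuntu:latest

# --nv binds the host's NVIDIA libraries into the container.
singularity exec --nv docker://ubuntu:latest nvidia-smi
```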
So Singularity is going to download that ubuntu image from Docker and give you an interactive shell into it. Here is the example of how to use GPUs. Just by passing --nv, Singularity maps all the GPU libraries on the host and exposes them to the binaries inside the container. What Docker and the OCI people are really doing with NVIDIA is building and compiling the NVIDIA tools inside the image. What we are giving to the users and to the system admins is that they are going to be able to use the same libraries as the host, which performs better than compiling on, let's say, a workstation and trying to run those GPU-compiled libraries across a whole HPC infrastructure, where you're maybe going to have different libraries and different performance. So with --nv, Singularity communicates with the NVIDIA tools on the host, the nvidia-smi tools, asks for the libraries on that host, and maps those inside the container. You can see here a TensorFlow example, which I'm not going to run; TensorFlow is something like a five-gig image. But just with --nv, we are exposing the host GPU libraries to all the binaries inside that image. When releasing Singularity 3.0 last year, thanks to the users, we won three HPCwire awards, because we won both the readers' choice and the editors' choice awards; so, three awards, there at Supercomputing. And this is a small sample of all the users already using Singularity, from 2.5 to later versions. We haven't yet seen users running 3.0 professionally or in production, but we are now inviting users to do a git clone from the upstream master and really give 3.0 a try, so they can have access to all these features. Another slide: we are hiring.
So if you are interested in a change, in working in Go, in learning about syscalls, the kernel, file systems, signing with PGP keys, please give me a call. And I'm open to questions. This was a quick update on what's new in 3.0.

Any questions for Eduardo?

And sorry for my voice. Kenneth told me that it was going to be warm, and... it's fine, I have a bit of a cold.

So I have a question related to the remote builder. Do you have something similar to that, so that we can also do the build on premises? So that people that don't have root on the system could do something similar without transferring all the containers outside, because we have large containers.

You mean, like, installing the remote builder service on-prem? Yeah. This is an estimate: the plans for the remote builder on-prem are around the third quarter of this year. It is in the plans, but it is hard, because we need to develop compatibility with the various virtual machine drivers, depending on the drivers that you have in your infrastructure. So if you have KVM and QEMU, or VMware, we need to develop all the options, so we can tell the user: OK, you can host it on-prem. The build itself happens inside a VM, so the system admin won't have to worry about the user doing something really weird during that short time in which he has sudo access. That's the trickiest part, really working with all the VM drivers. So, third quarter of this year, but this is an estimate.

I just wanted to know if it's on the roadmap.

Oh, it's on the roadmap, and we are working on that.

More questions?

Would it make sense to have a --mpi option, just like the --nv option you have for the NVIDIA libraries?
Yeah, to have the MPI libraries installed and optimized for the host available inside the image? That would be a really interesting use case, because what we really propose with MPI and Singularity is calling Singularity through MPI. You do mpirun singularity, so MPI fully manages the jobs. So there's no need to map MPI libraries inside the container; MPI is going to be running the job.

Yeah, but when you do that... I've had issues with the MPI software inside Singularity that needs to talk with the Slurm manager, and I was not sure how to handle that. And don't you need to install the MPI libraries?

Yeah, you also need to install MPI libraries inside the container. So sometimes you end up with an MPI library inside the container which is different from the MPI library installed on the hardware, and there can be friction. That's the reason why I was thinking about moving it into the container. I'm also taking notes on that suggestion.

Yeah, it's the same idea as you have for NVIDIA, but for MPI, and the same motivation: you need stuff in the container and outside, and they need to talk to each other, so they need to be in sync. So it makes sense.

It's probably not as simple as it is for the GPUs, though, because there's more than one implementation. With the GPUs, thanks to NVIDIA, they have this small binary that you knock on, and it just gives you all the links, all the symlinks, that you need for running NVIDIA. With MPI it would be a bit more...

I think it's even a bit more complicated for MPI. As far as I know, Open MPI is actually split in two parts: a part which is linked into the application, and should, I think, be in the container, and then a part that's outside. Before, those versions needed to match, but in recent versions...
I believe there can be, within reason, some differences in versions between the back-end libraries and the front-end libraries. So MPI in containers is really tricky and complicated.

Yeah, I think it's very difficult to implement the --mpi, because there are so many MPI implementations, and depending on the version you would even need to do different things.

Yeah, that discussion is going back and forth in the GitHub issues, but right now the recommendation that we give is this example: at the end, run it through MPI. But you are right, there can be some friction if the container has some different libraries.

If, in this case, MPI 2.1 is not installed on the hardware, it does not work. So you need to have a very strong mapping between inside and outside the container. So the idea would be to be able to get rid of this tight link, because the thing is, when you build an image that works on one cluster and you want to move it to another cluster, it has to have the same Slurm and the same MPI installed, and that breaks the portability of the containers.

Maybe one idea could be to try and convince one of the MPI libraries to help you out with this, and as soon as one does...

This was actually proposed in a BoF at the last Supercomputing, in which users told the MPI developers: you need to really make a standard for MPI.

But MPICH has a standard already, right? Cray, Intel, and MPICH itself, they follow it, right? Open MPI...

Yeah, Open MPI has a different standard; it's not defined on the same standard. That's the user ask that was made in the BoF: if we had a standard API for MPI, it would be really easy to develop things like --mpi for Singularity, or maybe for other tools to work with MPI more easily, because you could move from MPICH to Open MPI to Cray MPI and whatever, if there was a single API for MPI.

That's not happening, I don't think that's happening. What you could do is just work together with one of the current standards, get it to work,
and the other ones will jump in and do whatever they need to do to get it to work.

Yeah, but the discussion about a standard API was had at the last Supercomputing, because not just Singularity developers, but other tool developers too, are really struggling to work with all the MPIs out there. If you defined just one API that I can develop against, I'm going to be compatible with all the MPIs. That's where this discussion was brought up.

Other questions?

I have another one. You mentioned that you're not seeing any strong adoption of Singularity 3 right now?

In the user base, yes; in the system admin base, no. So the issues and the bugs are being found by users on their workstations. OK, but I'm confident that 3.0 is not yet ready to be installed on production systems, and that users should start testing it.

Do you think the switch to Go has something to do with that? Like, lots of sys admins not knowing any Go at all, so they have to figure out how to build Singularity themselves, and it takes time to figure it out, and they don't have time to install it.

Go is just downloaded as a binary; you don't need an RPM or anything for it. Or you can just grab the Singularity RPM and it's going to do everything for you.

Any other questions? No? Okay, thank you very much, Eduardo.
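For reference, the hybrid MPI pattern recommended during this Q&A, where the host MPI launches one Singularity process per rank, looks roughly like this. A sketch: the image name, binary path and rank counts are made up, and as discussed above, the MPI library inside the container must be compatible with the host's runtime:

```shell
# The host's mpirun starts the ranks; each rank execs the container,
# and the MPI library inside the image talks to the host's runtime.
mpirun -np 4 singularity exec mpi_app.sif /opt/app/hello_mpi

# The same hybrid model under Slurm:
srun -n 4 singularity exec mpi_app.sif /opt/app/hello_mpi
```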