Hi, everyone, and welcome. My name is Yosserman Jaffee, and I'm the Director of Systems Engineering for Ezmeral at HPE. Today I'm joined by my colleague Don Wake, a technical marketing engineer, who is going to talk to us about the day in the life of an IT administrator through the lens of the Ezmeral Container Platform. We'll be answering your questions in real time, so if you have any, please feel free to put them in the chat; we should have some time at the end for live Q&A. Don, why don't you go ahead and kick us off?

All right, thanks a lot, Yosserman. Yeah, my name is Don Wake, and I'm the tech marketing guy. Welcome to Ezmeral: Day in the Life of an IT Admin, and Happy St. Patrick's Day at the same time. I hope you're wearing green; virtual pinch if you're not. You don't have to look that up if you don't know what I'm talking about. We're going to run through a few quick things, a discussion of modern business IT needs to set the stage, and then go right into a demo.

So what is the need we're trying to fulfill with the Ezmeral Container Platform? It's all rooted in analytics. Modern businesses are driven by data, and they're also application-centric; the relationship between applications and data has never been more important. Applications are very data hungry these days and consume data in all new ways. The applications themselves are virtualized, containerized, and distributed everywhere, and optimizing every decision and every application has become a huge problem for every enterprise to tackle. Take data science as one big use case: it's really a team sport.
Today I'm wearing the hat of, say, the operations team, or the software engineer working on continuous integration and continuous delivery and integration with source control. I'm supporting data scientists and data analysts, and I also have some resource control: I can decide whether or not the data science team gets a particular cluster of compute and storage so they can do their work.

This is the solution I've been given as an IT admin: the Ezmeral Container Platform. Walking through it real quick: at the top, wherever possible, I try not to get involved in these folks' lives. The data engineers, scientists, app developers, and DevOps people all have particular needs, and they can access their resources and spin up clusters, or just work in a Jupyter notebook, or run Spark or Kafka or any of the popular analytics platforms, simply through endpoints, web URLs, that we provide to them. It's self-service. On the back end, as the IT guy, I can make sure the Kubernetes clusters are up and running, assign particular access to particular roles, and make sure the data is well protected. I can import clusters from public clouds, put my clusters on premises if I want to, and do all of this through a centralized control plane.

Today I'm supporting some data scientists. One of our own people is actually giving a demo right now called Day in the Life of the Data Scientist. He's on the opposite side, not caring about all the stuff I'm doing in the back end: he's training models, registering the models, working with data inside his Jupyter notebook, running inferences, and running Postman scripts.
So I'm in the background making sure he's got access to his cluster, his storage is protected, his training models are up, and he's got service endpoints connecting him to his source control and everything else he needs. He's working on a taxi ride prediction model, with a Jupyter notebook and models to go with it. So why don't we get hands-on, and I'll jump right over to the Ezmeral Container Platform.

This is the web UI, the interface into the container platform, our centralized control plane. I'm using my Active Directory credentials to log in, and when I log in, I've also been assigned a particular role governing how much of the resources I can access. In my case, I'm a site admin (you can see that up in the upper right), and I have access to lots and lots of resources. The one I'm going to focus on today is a Kubernetes cluster.

Let's say we have a new data scientist come on board, and I want to give him his own resources so he can do whatever he wants, use some GPUs, and not affect other clusters. We have all these other clusters already created here; you can see this is a very busy production system, with some dev clusters over here and a production cluster here. He needs to produce something for data scientists to use, so it has to be well protected and not treated like a development resource. Under this production environment, I decided to create a new Kubernetes cluster, and literally I just push a button: Create Kubernetes Cluster. I'll show you some of the screens; this is a live environment, so I could actually do it, except all my hosts are used up right now. I would go in here, give it a name, select some hosts to use as the primary master controller and some as workers, and answer a few more questions.
Once that's done, I've created a whole other Kubernetes cluster that I can also create tenants from. Tenants are really Kubernetes namespaces: in addition to taking hosts and creating Kubernetes clusters, I can go to existing clusters and carve out a namespace from them. Looking at some of the clusters already created, here's an example of a tenant I could have created from that production cluster. To do that, here in the namespace view, I just hit Create. Similar to how you create a cluster, you carve down from a given cluster, say the production cluster, and give it a name and a description. I can even mark this specific one as an AI/ML project, which really means our MLOps license, so at the end of the day I can create an MLOps tenant from the cluster that I created.

I've already created one for this demo, so I'm going to go into that Kubernetes namespace, which we also call a tenant. It's multi-tenancy: the name essentially means we're carving out resources so that somebody can be isolated from another environment. At this point, I could give access to this tenant, and only this tenant, to my data scientist. So the first thing I typically do is go in here and assign the users. Right now it's just me, but if I wanted to give this to Terry, for example, I could find another user and assign him from this list, as long as he's got the proper credentials. You can see all these other users have Active Directory credentials: when we created the cluster itself, we made sure it integrated with our Active Directory so that only authorized users can get in.
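Under the hood, a tenant like this corresponds to a Kubernetes namespace carved out of the parent cluster, kept inside its share of resources by quotas. As a rough sketch of the kind of raw objects involved (all names and values here are illustrative, not taken from the demo system):

```yaml
# Hypothetical namespace backing an AI/ML tenant; names are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: aiml-tenant
---
# A quota that keeps the tenant from consuming the whole cluster.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: aiml-tenant-quota
  namespace: aiml-tenant
spec:
  hard:
    requests.cpu: "50"              # e.g. 50 of the cluster's CPUs
    requests.memory: 128Gi
    requests.nvidia.com/gpu: "2"    # GPUs are the scarce resource to guard
```

The platform creates and manages the equivalent objects for you when you build the tenant through the UI.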
Let's say the first thing I want to do is make sure that when Terry does his Jupyter notebook work, he's connected straight up to his GitHub repository. He gives me a link to GitHub and says, hey, this is all of my cluster work: my source control, my scripts, my Python notebooks, my Jupyter notebooks. So I create a configuration: okay, here's a Git repo, here's the link to it, here's his username, and I can put in a token, because this is actually a private repo using the standard Git token interface. The cool thing is that afterwards you can go in here and copy the authorization secret. This gets into the Kubernetes world: if you want secure integration with things like your source control or your Active Directory, that's all maintained in secrets. So I can take that secret and, when I create his notebook, put it right into the launch YAML, saying: connect this Jupyter notebook up with this secret so he can log in.

When I've launched this Jupyter notebook within my Kubernetes tenant, it is now really a pod. If I want to, I can go right into a terminal for that Kubernetes tenant and use kubectl; these are standard, CNCF-certified Kubernetes clusters. Running kubectl get pods tells me all of the active pods and, within those pods, the containers I'm running, and I'm running quite a few pods and containers here in this AI/ML tenant. So that's kind of cool. Also, if I wanted to, I could download the kubeconfig for kubectl and then run something like kubectl get pods on my own system, where I'm perhaps more comfortable.
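Downloading the kubeconfig is what ties a local kubectl to that endpoint. As a sketch of what such a file carries (the server address, token, and names below are invented for illustration; the real file comes from the platform's download link):

```python
import json

# Hypothetical contents of a downloaded kubeconfig; the server address,
# token, and names are illustrative, not taken from the demo environment.
kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [{
        "name": "prod-cluster",
        "cluster": {"server": "https://10.1.0.15:9500"},
    }],
    "users": [{
        "name": "don",
        "user": {"token": "<auth-token>"},  # elided; issued by the platform
    }],
    "contexts": [{
        "name": "aiml-tenant",
        "context": {"cluster": "prod-cluster", "user": "don",
                    "namespace": "aiml-tenant"},
    }],
    "current-context": "aiml-tenant",
}

# kubeconfig files are YAML, and JSON is valid YAML, so this file is
# something kubectl can read directly once KUBECONFIG points at it.
with open("kubeconfig.json", "w") as f:
    json.dump(kubeconfig, f, indent=2)
```

With KUBECONFIG pointing at this file, kubectl get pods targets the tenant's namespace without any further flags.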
So this is running on my laptop; I just had to refresh my kubeconfig with the IP address and authorization information in order to connect from my laptop to that endpoint. From a CI/CD perspective, an IT admin usually wants to use tools right on his desktop.

Here I'm back in my web browser, on the dashboard of this Kubernetes tenant, and I can see how it's doing. Looks like it's kind of busy. I can focus on a specific pod if I want to, and I happen to know this pod is my Jupyter Notebook pod. So why don't I show how I can enable my data scientist just by giving him the URL, what we call the Notebook Service Endpoint, by clicking on it or copying the link and emailing it to him: okay, here's your Jupyter Notebook, just log in with your credentials. I've already logged in, so here's his Jupyter Notebook, and you can see that he's connected to his GitHub repo directly, with all of the files he needs to run his data science project.

Within here, and this is really in the data scientist's realm, he has access to centralized storage and can copy the files from his GitHub repo to that centralized storage. These commands are kind of cool: they're little Jupyter magic commands, and we've got some of our own that show the attachment to the cluster. If you run these commands, they're actually looking at the shared project repository managed by the container platform. Just to show you that again, I'll go back to the container platform; in fact, the data scientist could do the same thing. So here's this project repository, and this is another big point. Putting on my storage admin hat, I've got this shared storage volume that is managed for me by the Ezmeral Data Fabric.
In here you can see that, from his Git repo and through the Jupyter Notebook, the data scientist was able to directly copy his code, run his notebook, and create this XGBoost model. That file can then be registered in this AI/ML tenant, so he can go in here and register his model. This is really where the data scientist can self-service: kick off his notebooks and even get a deployment endpoint so that he can run inference. Here again is another URL that you could take and put into something like a Postman REST call and get answers back.

But let's say he's been doing all this work and I want to make sure that his data is protected. How about creating a mirror? To create a mirror of that data, I go back to the data fabric, which is embedded in a very special cluster called a Picasso cluster. It's a version of the Ezmeral Data Fabric that allows you to launch what was formerly called MapR as a Kubernetes cluster. When you create this special cluster, every other cluster you create automatically gets things like that tenant storage I showed you for the shared workspace, all managed by this data fabric, and you're even given an endpoint into the data fabric so you can use all of its awesome features. So I can just log in here, and now I'm at the data fabric web UI to do some data protection and mirroring.

Let's say I want to create a mirror of that tenant's volume. I forgot to note the name of my tenant, so I'm going to go back to my tenant to find the name of the volume I'm working with. In my AI/ML tenant, under the project repository I want to protect, I can see that the Ezmeral Data Fabric has created a volume called tenant30. So I'll go back to my data fabric and look for tenant30.
If I want, I can go into tenant30 and look at the usage down here; I've used very little of the allocated storage. But you know what, let's go ahead and create a volume to mirror it. In this very simple web UI, I hit Create Volume, give it a name like tenant30-mirror, and select the mirror volume type. I choose my Picasso cluster and tenant30 as the source; it looks up tenant30 in the data fabric database, so it knows exactly which volume I want to use. I can give the mirror whatever name I want. And this path here, and that's a whole other demo, could be in Tokyo; it could be mirrored to all kinds of places all over the world, because this is truly a global namespace, which is a huge differentiator for us. In this case, I'm creating a local mirror. Down here I can add auditing and encryption, do access control, and change permissions, so there's full-service interactivity here. And of course this is the web UI, but there are REST API interfaces as well.

So that is pretty much the brunt of what I wanted to show you in the demo. We got hands-on. I'm just going to throw this up real quick and come back to Yasser to see if he's got any questions from anybody watching.

Yeah, we've got a few questions; we can take some time to answer a few. So it looks like you can integrate or incorporate your existing GitHub to be able to extract shared code or repositories, correct? Yeah, we have that built in. It can be either GitHub or Bitbucket; it's a pretty standard interface. So just like you can go into any given GitHub, clone a repo, and pull it into your local environment,
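Since REST interfaces come up here, the same mirror creation can be scripted. This is a hedged sketch that assumes the classic MapR-style volume-create endpoint carried over into the data fabric; the host, credentials, and volume names are illustrative, so check your release's REST API reference before relying on the exact parameter names:

```python
from urllib.parse import urlencode

# Illustrative host and volume names. The endpoint shape follows the
# classic MapR REST API (/rest/volume/create), which the Ezmeral Data
# Fabric inherits; verify parameter names against your release's docs.
host = "https://datafabric.example.com:8443"
params = {
    "name": "tenant30.mirror",            # new mirror volume
    "path": "/shared/tenant30-mirror",    # mount path in the global namespace
    "type": "mirror",
    "source": "tenant30@picasso",         # source volume @ source cluster
}
url = f"{host}/rest/volume/create?{urlencode(params)}"
print(url)

# A real call would add authentication, for example:
#   curl -k -u admin:<password> "<url>"
```

The same pattern covers snapshots, schedules, and permission changes, which is what makes this automatable from a CI/CD pipeline.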
we integrated that directly into the GUI, so you can say to your AI/ML tenant, to your Jupyter Notebook: here's my GitHub repo; when you open up my notebook, just connect me straight up. It saves you some steps, because Jupyter Notebook is designed to be integrated with GitHub. So we have GitHub, or Bitbucket, integrated in as well.

Great. Another question, around the file system: has the MapR file system that was carried over been modified in any way to run on top of Kubernetes? I would say that what I showed here is the Kubernetes version of the MapR file system, the data fabric, and it gives you a lot of the same features. But if you need to run it on bare metal, maybe because you have performance concerns, you can also deploy a separate bare-metal instance of the data fabric. This is just one way you can use it, integrated directly into Kubernetes; it really depends upon the needs of the user. The data fabric has a lot of different capabilities, but this version has the core file system capabilities: you can do snapshots and mirrors, and it is of course striped across multiple disks and nodes. The MapR data fabric has been around for years and is designed for integration with these analytics-type workloads.

Great. You showed us how you can manage Kubernetes clusters through the Ezmeral Container Platform UI. The question is, can you control who accesses which tenant, or namespace, that you created? And can you restrict or enforce resource limitations for each individual namespace through the UI? Oh yeah, that's a great question, and the answer is yes to both. As the site admin, I had lots of authority to create clusters and to go into any cluster I wanted. But typically, for the data scientist example I used, I would create a user for him, and there are a couple of ways you can create users. It's all role-based access control.
So I could create a local user and have the container platform authenticate him, or I can integrate directly with Active Directory or LDAP, down to which groups he has access through. Then, in the user interface, as the site admin I can say: he gets access to this tenant, and only this tenant.

The other thing you asked about is limitations. When you create the tenant, to prevent that noisy neighbor problem, you can go in and create quotas. I didn't show the process of actually creating a tenant, but integral to that flow is defining which cluster I want to use and how much memory I want to allocate: there's the quota right there. You can also say, hey, how many CPUs am I taking from this pool? And that's one of the cool things about the platform: it abstracts all that away. You don't really have to know exactly which host is involved. You select specific hosts when you create the cluster, but once it's created, it's just a big pool of resources. So you can say Bob over here is only going to get 50 of the 100 CPUs available, only so many gigabytes of memory, and only this much storage that he can consume. You can then safely hand something off and know they're not going to take all the resources, especially the GPUs, which are expensive; you want to make sure one person doesn't hog them all. So quotas are absolutely built in there.

Fantastic. Well, I think we are out of time. We have a list of other questions, and we will absolutely reach out and get all of your questions answered for those of you who asked in the chat. Don, thank you very much, and thanks everyone else for joining. Don, will this recording be made available for those who couldn't make it today? I believe so. Honestly, I'm not sure what the process is, but it is being recorded, so they must have done that for a reason. Fantastic.
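In raw Kubernetes terms, the "this tenant and only this tenant" access control described above boils down to a role binding scoped to the tenant's namespace. A sketch with illustrative names (the platform manages the equivalent objects when you assign users in the UI):

```yaml
# Hypothetical binding granting one Active Directory user edit rights in
# the tenant's namespace only; user and namespace names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: terry-tenant-edit
  namespace: aiml-tenant
subjects:
- kind: User
  name: terry                    # authenticated via AD/LDAP
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                     # built-in role, namespace-scoped by the binding
  apiGroup: rbac.authorization.k8s.io
```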
Well, Don, thank you very much for your time, and thanks to everyone else for joining. Thank you.