All righty. Well, thank you so much. I'm Barton George, coming to you live from just outside of Austin, Texas, and I'm joined here by Florian Coulombel. Florian, you are coming from France. That's right. Thank you. All right, let's get going without further ado. We'll start with a quick overview of what we're going to be touching on today. The main topics are modern IT, DevOps, automation, and how those three interact. Then I will hand it over to Florian a couple of times for two demos: one on Kubernetes and persistent storage, and another where he uses Ansible to automate storage at scale. So with that, let's kick it off. If we start back with a little bit of history, getting to DevOps, it all started a decade-plus ago with the rise of the developer. Originally you had the business on one side, the customers on the other, and developers and IT ops in the middle. The problem was there was quite a bit of friction between the developers and operations, who were not aligned with each other. Then along came the cloud, open source, and RESTful APIs. What this allowed the developers to do was go to the public cloud and get compute resources there by putting them on their credit card; they didn't need to go to procurement to get the OK. They were able to get open source software off the web. And REST APIs were something that came from the ground up and were championed by developers. They were a reaction against the SOAP protocols put together by Microsoft, Sun, IBM, etc., that had come down from on high. Instead, developers on their own started using REST APIs, and those bubbled up and became the de facto APIs in use.
So what it first started with was developers using these technologies to work around IT. By doing that, they were able to move more quickly and more agilely, and this allowed them to be much more innovative. The business took note of this innovation, the ability of developers to respond more quickly to market conditions and customers, and threw its weight behind developers at the expense of operations. With this support, these three technologies then came into the enterprise, out of the shadows and into the officially sanctioned technologies used by the organization. This also sowed the seeds for what would become modern IT. So, getting from traditional to modern IT: there are a bunch of characteristics that quantify and qualify what modern IT is, and you can see them here on the left. Things like automation, which we'll talk about more; integration and cooperation between developers and operations; scalability, a big concept there; self-service, and this goes for developers as well as IT ops; and then the idea of infrastructure as code, the idea that hardware doesn't go away, but you interact with it through code rather than working with the hardware directly. If we look at the way the product team is organized, rather than having individual ops people in silos like the orange boxes down below, where you had people tied to specific hardware types (servers, storage, and networks), operations in a modern IT space have to reinvent themselves to take on a much broader set of skills and responsibilities. They go from focusing on one type of technology to the whole myriad of components, phases, and operations they need to handle to support the cloud-native application developer, and they do that by setting up and maintaining a multi-cloud platform. The cloud-native application developers, for their part, make use of that platform.
They don't know anything about what's underneath the covers; in fact, that's what they prefer, just like with a car. Most of us don't know what kind of spark plugs or carburetor we may have under the hood. All we care about is that it gets us to work, takes us to the beach, takes the kids to soccer practice. We want to interact with it and use it for what we need it for. So these individuals go and build secure applications, often including containers and microservices. And in the case of operations, as I mentioned, they have to reinvent themselves by upskilling and taking on a much broader set of responsibilities. They work with the platform rather than individual pieces of hardware, and they manage that platform through software. Now, underpinning all of this is DevOps. And why did DevOps come along? Well, as I mentioned at the beginning, it's this friction, or what they call the wall of confusion, that existed between developers and operations. Developers would write code that worked on their laptop, and then ops would have to try to implement it and scale it so it could actually be used in production. And of course, when things went down, there was a lot of finger-pointing: you had developers saying, why did my code go down? You screwed up in the implementation of it. And ops saying, no, we did what we needed to do; you wrote bad code. Once again, not good for business, not good for anyone. So in reaction to that, along comes DevOps. If we look at and think about the main tenets of DevOps, it's predominantly a cultural change, or at least I should say that the DevOps methodology is enabled by culture more than anything else. It's this idea that rather than being adversaries, developers and operations work together, with operations playing a much different role than they're used to.
There's constant feedback, which leads to iteration; considerable measurement; standardization of processes; and then automation, which, as I said, we'll be talking more about a bit later. The overall goal here is to reduce friction and increase velocity. What we've got in the middle is the DevOps infinity loop, which lays out the different phases in the product lifecycle. Below it we've taken it and stretched it out, just to make the steps easier to see. You go from plan to code, test, package, deploy, operate, and monitor, and then, as you see at the bottom with the dotted line, you go back to the beginning and start all over again. This sits on top of the cloud infrastructure and open source. Florian is going to show you a demo with a fair amount of open source in it, and the technologies leveraged are microservices, which sit within containers, which in turn are managed by Kubernetes. With that, let me turn it over to Florian, who's going to show you a little bit of the DevOps lifecycle, Kubernetes, automation, and more. Thank you, Barton. So in the following demo, we are going to show how Dell can help you, and help IT, deliver on the promises of DevOps and self-service provisioning. The demo shows a modern development pipeline based on open source technologies, namely GitLab to store the code and tickets and, more important, the agent to run CI/CD, and Kubernetes to orchestrate and execute the application, backed by Dell Technologies servers and storage. So let me just switch over to the demo. This demo runs on top of a bare-metal, normal vanilla Kubernetes cluster. The idea here is to have a model-view-controller application, with Vue.js for the front end, Ruby for the web app, and a small SQLite database, backed by a PowerMax storage array; PowerMax is our high-end storage array.
And the main idea here is that every time we want to spin up a branch, create a branch, and test our application, we will set up a new environment with both the modified code and the infrastructure to execute that code. So the demo runs as follows. I have my application: JavaScript for the front end, Ruby for the web app, like I said, and my database on the back end. I have my code stored in GitLab, with my GitLab agents. I have two main build steps. One is for development, which means that for every branch I have in my repository (every time I develop a new feature, I create a branch), I will build a new environment and deploy it within Kubernetes. The second step is for my main application, production, the latest branch, which has its own deployment as well. So the idea here is that every time you need something, there will be a space for the developer to play with. We are going to use the Helm package manager to deploy the application in its different flavors. And we will also take advantage of the CI/CD hooks, so any time we decide to merge a branch, we will squeeze the infrastructure as well; that is to say, we will purge the storage and the runtime. My application, thanks to the Helm chart, is declared in a very simple YAML: I need a StatefulSet with the name of my feature, and I need a custom volume backed, in this case, by the Dell storage, the PowerMax. Every Dell storage array has a CSI plugin, so you can use the same approach for any type of Dell storage platform. So right now I have a single branch, master. I'm connecting to my production system and creating a to-do, and now the magic happens. The magic will start, sorry: I'll create a new branch, and I decided it would be nice to try out a new feature to change colors. So: creating my branch, changing colors, committing my code, and pushing it to my repository. Thanks to the CI/CD pipeline, I will build my new branch.
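A per-branch environment like the one Florian describes could be declared with a StatefulSet along these lines; this is only a sketch, and the names, image, and the "powermax" storage class here are illustrative assumptions, not taken from the demo:

```yaml
# Hypothetical per-feature StatefulSet; the storage class name and image
# are assumptions for illustration, not the demo's actual manifest.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: todo-change-colors          # one StatefulSet per feature branch
spec:
  serviceName: todo-change-colors
  replicas: 1
  selector:
    matchLabels:
      app: todo
      branch: change-colors
  template:
    metadata:
      labels:
        app: todo
        branch: change-colors
    spec:
      containers:
      - name: webapp
        image: registry.example.com/todo:change-colors  # built by the CI pipeline
        volumeMounts:
        - name: data
          mountPath: /var/lib/todo  # the SQLite database lives here
  volumeClaimTemplates:             # one PV per branch, provisioned by the CSI driver
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: powermax
      resources:
        requests:
          storage: 1Gi
```

When the branch is merged, deleting the StatefulSet and its claim is what "squeezes" the environment, code and storage together.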
And more interestingly, it creates a new environment based on that branch, backed by the infrastructure. As you can see, I have two PVs, two persistent volumes: one for my production environment and one for my development environment. Every Dell storage driver supports the full lifecycle of a volume, so I could clone my production data, take a snapshot and restore it, or, just like here, spin up new volumes. Going back to my new dev environment and opening the URL, I have my new colors. I can insert new data, and now let's say I'm going to change my colors again: I go to VS Code, commit my new colors, push to my branch, and re-execute the pipeline. Of course, since the environment was already deployed for this branch, the persistent data is still there, under my new application colors. We can also take advantage of quota management to make sure that developers are not over-consuming storage. In this case I'm going to use Kubernetes' built-in quota management, but Dell also offers fine-grained quota management through an open source module that lets you manage quotas across different Kubernetes clusters. I'm creating a quota, a very simple one: just one PV, a maximum of 10 gigabytes overall. So if I'm creating a new feature, a new branch (let's say I'm going to change titles), the build runs, I create the new branch, and you would assume that when I push it, it will spin up yet another environment with yet another PV. But because I set the quota, this, of course, won't happen. The build steps run, we deploy against my Kubernetes infrastructure, and we get a nice message saying we breached our quota. The main purpose of this demo was really to show you that we are spending quite some time developing all the plugins to make these promises of DevOps, and of frictionless application-to-production, a reality. I will pass it over to you. Thank you very much, and of course, because this is a Linux Foundation event, this was running on Linux, Fedora to be exact.
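The quota from the demo maps onto Kubernetes' built-in ResourceQuota object; the limits below match the ones mentioned (one PV, 10 gigabytes overall), while the namespace name is an assumption:

```yaml
# Sketch of the namespace quota described in the demo;
# the namespace name is hypothetical.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: todo-dev             # assumed namespace for the dev environments
spec:
  hard:
    persistentvolumeclaims: "1"   # at most one PVC in this namespace
    requests.storage: 10Gi        # at most 10 GiB requested across all PVCs
```

With this in place, the second branch's PVC request is rejected by the API server, which is the quota-breach message seen in the pipeline.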
So, now we're back to automation, as I promised. Automation is something I talked about in the context of DevOps and modern IT. It gives you much more of a competitive advantage, more agility, greater quality; but even more importantly, none of this would really be possible without automation. Today, IT environments, as I'm sure you're aware, are scaling out. In fact, many times you're using multi-cloud platforms that go beyond the walls of your IT environment, and without automation this really isn't possible. It gives you control, it gives you efficiency, and all of these are key to scaling and gaining speed. If you were to do this in the traditional manner, manual in nature, it really wouldn't work. So automation enables things like the scale we talked about, DevOps, infrastructure as code, and self-service functionality. Now for a bit of a history lesson, looking at where automation has come from. If you remember, it originally started out with ticket-based automation. Here, to configure something, you needed to configure it manually; the needs were put in by tickets. You also had to manage and verify the configuration software, and you had to verify it manually. The next step, of course, was scripting, which provided automation. It gave you preconfigured infrastructure, and you could get it through a self-service portal. The problem was that the scripts themselves could be difficult to track and required maintenance and updates. And so you get to today, and to infrastructure as code, which I introduced back when we were talking about both modern IT and DevOps. In this case, the changes are automatically recorded and versioned in source code libraries, just like application code itself, so verification can be automated.
An infrastructure technician takes the infrastructure design and checks it into a source code repository, once again just like code itself. The infrastructure is then automatically configured using APIs and the continuous delivery pipeline, and the pipeline then also continuously verifies the configuration of the infrastructure. The last piece of automation before I turn it back over to you, Florian: in a cloud-native world, characterized, as we talked about, by microservices, containers, and Kubernetes, developers are looking for self-service, and they need access to platforms where they don't have to know what's under the covers. All they want to know is that they can spin up MySQL or get access to TensorFlow; they don't care where the applications themselves are running, whether inside your IT environment or outside of it, on VMs, containers, or bare metal. That's not important to them. As I said, they just want to get the technology they want and be able to easily access and use it. Hence the use cases you see there on the right. Applications need to be backed up. They also need to be mobile, meaning they need to move from one infrastructure to another, which is particularly important in the multi-cloud world. And workloads need to be optimized, which means that the IT orgs, and the consumers, who in this case are the application developers, since they're the ones actually consuming the infrastructure, need to be provided with workload analysis as well. So with that, let me turn this over to Florian, and here you're going to see some automation along with scaling out storage, which he will show you using Ansible. Away, Florian. Thank you, Barton.
So, in this second demo, we are going to show something that has been implemented for a university; it is a bit more use-case based, for sysadmins. The concept is to use an Ansible playbook to manage home directories for users. So, just one second. We are going to use Ansible playbooks and the Ansible modules for PowerScale. PowerScale is an unstructured data storage platform, oriented to file storage, and it has support for protocols like NFS, HDFS, S3, etc. This platform is really scalable and is heavily used for tons of use cases: genomics, media, etc. The grand idea here is, again building on top of the DevOps concepts, to use Active Directory as the one and only source of truth for the user base. From these users, we will create home directories within the Isilon and then mount these home directories onto the Unix servers over NFS. That way, you know, we are building something that is more secure, very fast to add or remove users from, and reproducible, and you don't have to maintain some big CMDB with a list of users and so on. So here the referential is Active Directory, and it will be the source of truth to manage everything in this case. We'll have an Ansible playbook that connects to AD, gets the list of teachers and the list of students, and then creates the home directories and mounts the shares on the Unix systems. For this demo, we cooked up a container that has all the dependencies to execute the Ansible modules: Ansible itself, and the dependencies to access Active Directory and the Isilon. As with many Ansible infrastructures, we have a couple of files with credentials to access the PowerScale, the Unix systems, and Active Directory. The first step of this playbook is to build two lists, a list of students and a list of teachers. Why? Because we're going to apply different policies depending on whether you're a student or a teacher.
We just connect to the AD, query the groups by name, students or teachers, and build that list. We create the base directory to store our home directories. In the next step, on the Isilon file system, we create the home directory for each student, with a dedicated quota and the permissions coming from AD. In this case, students get just five gigabytes of storage; this is just a simple loop. And for the teachers, we apply a different quota, which is 100 gigabytes. The next step of this playbook is to implement some protection policies: we want to take a snapshot of every home directory every day at midnight, and for students we'll keep each snapshot for just a few days, while for the teachers we'll keep it for a month. We create the exports within the Isilon, and then we connect to the Unix systems, point to the shares, and update the fstab, saying: hey, to mount this home directory, go to the NFS server hosted by the Isilon. One of the things we do, since, as I said several times already, Active Directory is the source of truth: every time we run the Ansible playbook, we check if there are orphan directories. An orphan directory is a home directory with no corresponding user within AD. When we catch such an orphan directory, we just apply an Ansible role to remove it. I flagged all these tasks as cleanup to make them easier to manage. And now let's have a look at the real gig. I'm connecting to one of my Unix servers; my home is empty. Now I'm executing my playbook from Podman, because I built a container image: querying the AD, creating the file systems and the snapshot policies, mounting everything, looking for orphans; there are none. Now I can go ahead, and I can see that several users were created, as well as my snapshot policies. Now let's remove one of my users and execute the same playbook with the tag cleanup, so it will look for orphan directories and squeeze the directory.
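The per-student loop described above might look roughly like the sketch below. This is not the actual playbook from the demo: the module and parameter names from the `dellemc.powerscale` collection, along with every path and variable, are assumptions and should be checked against the collection's documentation.

```yaml
# Hypothetical sketch of the per-student tasks; module and parameter
# names are assumptions, modeled on the dellemc.powerscale collection.
- name: Create a home directory for each student
  dellemc.powerscale.filesystem:
    onefs_host: "{{ powerscale_host }}"
    api_user: "{{ powerscale_user }}"
    api_password: "{{ powerscale_password }}"
    verify_ssl: false
    path: "/ifs/home/{{ item }}"
    owner:
      name: "{{ item }}"
      provider_type: ads          # owner resolved from Active Directory
    state: present
  loop: "{{ students }}"          # list built earlier from the AD query

- name: Apply the 5 GB quota to each student home
  dellemc.powerscale.smartquota:
    onefs_host: "{{ powerscale_host }}"
    api_user: "{{ powerscale_user }}"
    api_password: "{{ powerscale_password }}"
    verify_ssl: false
    path: "/ifs/home/{{ item }}"
    quota_type: directory
    quota:
      hard_limit_size: 5          # gigabytes
      cap_unit: GB
    state: present
  loop: "{{ students }}"
```

The teacher tasks would be the same loop over the teachers list with a 100 GB limit, and the cleanup tag would wrap a similar loop with `state: absent` for orphan paths.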
So, once again, this demo was to illustrate that at Dell, we are developing and maintaining lots of tools and playbooks to enable the DevOps process. Florian, before you go on, a question came through, since you're still on the demo: how can these credentials be secured? We saw them in plain text. Yes, that's correct. Within a real setup, you would use something like Ansible Tower or Ansible AWX, where you store these credentials in an encrypted database. So instead of giving the plain-text credentials, you will have a token that connects to the encrypted database through this global infrastructure, Ansible Tower or Ansible AWX, and that way you can secure them. These tools are part of the Ansible family; that's how to store credentials securely. Yeah. Thank you. All right. So yeah, as I was saying, we have just scratched the surface of the number of tools and open source projects we contribute to, to enable developers, operations, and sysadmins to implement DevOps. For example, we are building container modules for telemetry, to be able, for each volume you provision through the Kubernetes driver, to measure the response time, the IOPS, and the capacity. We are developing software to protect your data sets within Kubernetes workloads, etc. One big commitment we have to the community and to our customers is to make sure everything we do is by the book, Ansible-wise and Kubernetes-wise, meaning that every quarter we release a new version of the Kubernetes drivers and qualify them against a wide range of distributions. As you can see, we qualified them against Mirantis Kubernetes Engine, OpenShift, VMware Tanzu, Amazon EKS, Google Anthos, etc. All of this is done so you can have self-service provisioning, build reproducible infrastructure, and benefit from all the goodness that Barton described just a minute ago around DevOps culture.
So, do you want to run the conclusion here? Sure. So thanks, Florian. Before we go to questions, I wanted to let you all know that as of two months ago we launched developer.dell.com, which you should go and check out. It has our APIs and, within that, a lot of links to GitHub. You can also get to our GitHub repos directly by going to github.com/dell. Besides the APIs, we have lists of webinars and upcoming events, and we have our DevOps page linked, so you can go and learn more about the solutions we have in the DevOps area. As I said, this was kicked off two months ago, and we will be adding more APIs to the overall library and more content as we go forward. Look for code there, for white papers, videos; all of this to come, so stay tuned. And with that, I will open it up to any questions there might be. I should say, and you're probably aware of this if you work with infrastructure as code, but for those who are not as familiar: this is akin to the shift that was made when we went to VMs, and VM admins had to work through an abstracted layer of software to work with the hardware; it's very similar to the change we saw at that point. So, one question here: how can we back up and restore the data? In the case of Kubernetes-backed persistent storage, every Dell Kubernetes driver supports snapshot capabilities. So natively within your Kube, you can declare "I want to take a snapshot of this volume," and you can restore it using a similar type of directive: "I want to restore based on that snapshot." We also offer solutions, open source software as well, to create policies to take snapshots on a regular basis, for example. Kubernetes-related as well: pretty much all Dell storage offers replication between sites, so if you want to build disaster recovery for your microservices on-prem, you can leverage the open source Dell Container Storage Modules to take advantage of that.
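The snapshot-and-restore flow described here uses the standard Kubernetes VolumeSnapshot API; a minimal sketch follows, in which the claim name and the snapshot class name are illustrative assumptions:

```yaml
# Take a snapshot of an existing PVC (names are hypothetical).
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: todo-data-snap
spec:
  volumeSnapshotClassName: powermax-snapclass   # assumed CSI snapshot class
  source:
    persistentVolumeClaimName: todo-data
---
# Restore: a new PVC whose dataSource points at the snapshot.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: todo-data-restored
spec:
  storageClassName: powermax                    # assumed storage class
  dataSource:
    name: todo-data-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```

The restored claim is a new volume populated from the snapshot, so the original keeps serving traffic while you recover data.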
With Ansible and the other tools we provide in that domain, you can use pretty much what I described: you have an Ansible role, and you leverage the capabilities of the Dell storage to build these snapshot policies. You say, "Hey, I want to create a snapshot every X amount of time," and it will create the snapshot policies. We have the same type of role to restore them. I hope that answers it. Maybe one more thing here: we do embrace the open source movement, trying to contribute to the community through different channels, and all the documentation, the issue tracking system, milestones, and everything are available through GitHub. We also have, within the GitHub portal, all the docs and all the information you may want and need to implement such features. Any more questions, comments, concerns? Going once, twice, thrice. So, before I hand it back over to the Linux Foundation, I'd also like to say that what I'm focusing on now is building out Dell's developer community. As you can see, and as I mentioned before, we just launched the site, so we're very early on, but we would love any input from you all as to what you'd like to see and what you think would be helpful, whether that's content on the website, videos you'd like to see, or white papers you'd like to see. I'm very interested in hearing that, so we will be putting that functionality on the site for you to submit your input, but in the meantime, if you want to do it publicly, you can send a tweet to me: I'm @Barton808, so the numbers 808 and my first name. Otherwise you can email me directly at barton.george at dell.com. So on behalf of... yeah, one more question here. The question is, can you elaborate a bit more on storage classes: do they allow ReadWriteMany, etc.? Indeed, the storage class is the entry point to configure how you want the consumers to access your storage.
So within this storage class, depending on the platform, you will have some parameters. Do you want to create thin or thick volumes? Do you want to use a certain pool or certain characteristics? Etc. It's pretty straightforward; most of the time it will have a parameter for which storage array you want to access. For all the block storage, we support ReadWriteOnce and ReadWriteOncePod. For ReadWriteMany access, which means you have multiple distributed nodes accessing the same volume, we support this through NFS, because you need a file system that supports concurrency to ensure you can have ReadWriteMany; so this is done through NFS access. And this is available for the storage platforms that support NFS; almost all of them support NFS these days, so yes, ReadWriteMany through NFS. And then there is this project we launched. The CSI specification is the standard that Kubernetes needs you to implement to fit with the storage lifecycle: create volume, delete volume, expand volume, etc. To get around certain limitations of the CSI spec, or because we wanted to expose more features and more data services directly as self-service from Kubernetes, we created a bunch of Dell Container Storage Modules to fill that role. So, for example, you can take not only snapshots but groups of snapshots; there are a lot of applications running on multiple volumes, and you may want to snap them all at once: you can do that with that feature. There is the telemetry I mentioned; we improved the detection of node failure; we support replication for applications, etc. And again, if you want to know more, everything is on GitHub, and each driver comes with multiple examples of storage classes, so you can see what the different parameters are. Great. Any other last-minute questions? We're happy to field them.
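A storage class for one of these drivers might look roughly like this; it is a sketch only, and the provisioner string and parameter names are assumptions, since each Dell driver documents its own:

```yaml
# Hypothetical storage class; check the driver's documentation for the
# exact provisioner name and the parameters it actually supports.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powermax
provisioner: csi-powermax.dellemc.com   # assumed CSI driver name
reclaimPolicy: Delete
allowVolumeExpansion: true              # expand volume is part of the lifecycle
parameters:
  SRP: SRP_1                            # assumed: which storage resource pool to use
  ServiceLevel: Gold                    # assumed: performance characteristics
```

The access mode (ReadWriteOnce vs. ReadWriteMany) is then requested per claim, not in the class, which is why block-backed classes serve RWO claims while NFS-backed ones can serve RWX.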
I will turn them over to Florian if they're difficult. We've got another one. So maybe this one is more for you, Barton: do you think there are applications or companies which do not use DevOps or Kubernetes today? Me, I'm working in the Kubernetes field, so I would say pretty much every company I'm dealing with has at least a Kubernetes project, but there are still plenty of companies that do not have Kubernetes projects. I mean, in France, you know, I'm dealing sometimes with customers in public health care, and they haven't really begun this Kubernetes journey, because the applications they serve, the medical applications they are using, are not containerized themselves. So, you know, as soon as their business internal applications move to containers, they will make the move, but for now, they're still hanging back on this. And I would agree. I think modern IT, Kubernetes, and DevOps are the way we're going, and there are quite a few companies using them to make their digital transformation; for others it's still early days, so you have a lot of people doing pilots with Kubernetes and implementing DevOps within smaller groups. I know we ourselves did it first in our Dell.com IT infrastructure team, where we went and completely revamped our set of engineers, just because if we hadn't, we would not have been able to keep up with our customers and the changes. One thing I remember: talking to one customer about implementing DevOps, and whether they had done it, he said, well, we're kind of DevOps-ish. So I think there are people with a lot of interest and a lot of passion, but not all of them are quite there yet; a lot of interest, though. Any more? Going once, twice, and thrice. Alrighty, well, thank you very much, Linux Foundation and Florian, and with that, let me turn it back over to you. Okay, wonderful. Thank you so much, Barton and Florian, again for your time today. And thank you everyone for joining us.
Just a quick reminder that this recording will be up on the Linux Foundation's YouTube page later today. We hope to see you at future webinars. Thank you so much again. Have a good day. Thank you.