Good afternoon, everyone. I'm Maurizio Pillitu, DevOps Director at FINOS, and together with Arsalan, a field engineer at SUSE, and David Desrosiers from Canonical, we are going to present today how you can deploy Legend using cloud native technologies.

We've heard a lot about Legend today, right? A lot of exciting news, a lot of people integrating with Legend, and clearly people want to use it and deploy it. I feel like everyone knows what Legend is, and I feel a little bit scared to describe it in front of the Legend team, but let's keep it simple. Legend is an end-to-end modeling platform that puts a strong accent on collaboration. It is a microservice architecture composed of several components, and the components we are looking at here are the core ones. We have Studio, which is the UI you use in your web browser, and this service is backed by two other components: one is called the Engine and the other is called the SDLC. The back-end layer is composed of a MongoDB instance and GitLab. GitLab can be consumed as SaaS, the gitlab.com offering, or you can spin up your own GitLab Community or Enterprise instance.

So how do we run Legend, and what are the use cases for people to actually start using this technology? We divided the deployment options into three main blocks.
I want to start from the one in the middle, the local run, which is to me probably the most common use case: people want to see Legend running on their laptop, and they want to do it with a couple of clicks. For this reason the Legend team contributed a Docker Compose script, which is part of the main Legend repository, github.com/finos/legend. You can run it, and it will use some default configuration to get up and running quickly in your localhost environment. This methodology, in my opinion, is particularly useful if you want to do a quick demo, if you want to show Legend internally, and you want to use the latest and greatest version of the software.

But what if you are trying to change the code of Legend and you want to see your change running locally? In that case you would need to build the software from source, and this option is probably the longest and most difficult approach. It takes a bit of time, and it relies on the technologies used to build the software, which are Maven and Yarn.

Finally, we have a third option, which is the production-ready environment. Imagine that you have tested the technology, your boss is happy, and they want to see it actually being used, to give a wider scope to the internal test. In that case you would need a different type of deployment, and although Legend is based on cloud native technologies, so it's Kubernetes friendly, up until last month we didn't have any official Kubernetes manifests released by the Legend team to do such a thing.
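Coming back to the local run for a moment: the Docker Compose option described earlier can be sketched roughly as follows. This is a sketch, not an official procedure; the exact location of the compose file inside the finos/legend repository may differ, so check the repository's README.

```shell
# Clone the main Legend repository, which contains the contributed
# Docker Compose setup with default configuration
git clone https://github.com/finos/legend.git
cd legend

# Start the stack in the background; adjust the path if the compose
# file lives in a subdirectory of the repository
docker compose up -d

# Studio should then be reachable on localhost once all containers
# report healthy (docker compose ps)
```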
So it's quite exciting that in the last month two FINOS members have stepped up to contribute different ways of deploying Legend. Specifically, SUSE contributed Helm charts that you can find at github.com/finos/legend-integration-helm. For those who are not familiar, Helm is a packaging system for Kubernetes; it allows you to deploy Legend with one command, which is quite exciting. The other exciting thing about Helm, at least in my personal opinion, is that I can use the same set of deployment scripts locally and on a production-ready environment, using the very same strategy.

This is what I did as soon as I saw Arsalan and the SUSE team contributing the Helm chart: I spun up my local minikube and got it running in a matter of minutes. This is what I actually wanted to present to you today, but after looking at the demos from Arsalan and David, I think it's probably better for you to see what Rancher and Canonical can do with these technologies. We're going to share a couple of blog posts, and this recording will be available shortly, so hopefully you will be able to easily reproduce what you see today on screen, in your own local environment or within your own firm.

Let me actually show you. I wrote this up in a gist, which is a little bit raw, but deploying Legend using the Helm chart is extremely simple. It's a matter of: starting minikube, which is software available for every operating system distribution; running a tunnel to get a public IP; setting up the gitlab.com application; and starting the Helm chart. So, prerequisites aside, what we're really saying is that Legend runs with one command, which is quite amazing, right?
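The steps just listed can be sketched as below. The chart repository URL, chart name, and values file name are assumptions, not the chart's documented interface; the README in finos/legend-integration-helm has the authoritative commands.

```shell
# 1. Start a local Kubernetes cluster
minikube start

# 2. In a second terminal, give LoadBalancer services a routable IP
minikube tunnel

# 3. Register an OAuth application on gitlab.com (done in the GitLab
#    web UI), then put its client ID and secret into your values file.

# 4. Add the chart repo and install Legend in one command
#    (repo URL and chart name are illustrative)
helm repo add legend https://finos.github.io/legend-integration-helm
helm install legend legend/legend --values my-values.yaml
```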
If you're interested in this, please come and find me after this session; I would be more than happy to demo it on my laptop, and if you're interested, even running it on yours. Now I want to invite Arsalan on stage from SUSE, who is going to show you what the Helm chart can do using Rancher Desktop and the Rancher suite. The demo gods are with us; we're doing some of this over the internet. Thank you.

Are we okay? Good, all right. So this is Rancher. Just to give you a quick brief: with Rancher we can manage Kubernetes clusters across multiple control planes, whether in the cloud, on-prem, or on your local machine. Here we have two clusters: one in AWS spun up through Rancher, and a local cluster using Rancher Desktop. Rancher Desktop is basically similar to minikube: it runs a virtual machine, and it's all packaged, so you don't have to do anything yourself. You just install the DMG file, and it installs a small virtual machine with k3s, which is a CNCF-certified Kubernetes distribution built for the edge.
From here we can change the Kubernetes settings if we want to, but essentially this creates a Kubernetes cluster from a single click. From this you can connect to it or add it to your local context, and you'll have a Kubernetes cluster up and running, that simple, just from installing Rancher Desktop. What you can do then is bring that cluster into Rancher and manage it from there. That means you can deploy applications to it and actually manage the workloads running inside the cluster. From here we can spin up clusters in multiple environments, all from a single control plane, or we can just import one, which is what I did. So go check out Rancher Desktop for that.

Once we deploy through Rancher — here we have the local Mac cluster, which is what's running on my machine — we can see that the Helm chart deploys the Engine, the Studio, and the SDLC server, and just using Rancher we can do some simple tasks, like reviewing the logs or dropping into a shell. One thing I'll say is that when I created this Helm chart, I kept its source of truth similar to the Docker Compose setup: I used the Docker Compose files that the Legend team maintains and built the Helm chart around them, so there's no deviation between the two and fewer changes to keep up with.

So what does the deployment look like? The first thing we do is add the Helm repo, which is on the Legend GitHub, and from there we configure our chart. Here we put in the URL; this could be our local machine, and since we're running on a Mac we can use ngrok for the hostname. If you have a private GitLab, you can put that in there, along with your secrets. And if you have it in the cloud and you want to do HTTPS, you can add a secret.
This is just a Kubernetes secret, and in a blog post there are details on how to create it. It could be Let's Encrypt, which is what I've used, and that sets things up over a secure session, so it will serve over HTTPS once we have it deployed and hit install.

We can also come back later and change this. For example, if I come into Installed Apps and say I want to update one of the images — I have Legend deployed here — I can do Edit/Upgrade. One thing about using Helm on the command line is that sometimes you lose your values; you don't remember what values you chose. Rancher helps you store that: we can come in here, go to the images, and bump the image versions individually for the Studio and for the server, so it's quite easy to see how to roll out changes. Then we can use built-in tools like the monitoring inside Rancher — I think I dropped my connection there — which gives us CPU and memory usage of all the different components, and also the bandwidth. And we can apply Kubernetes principles like horizontal pod autoscaling, so it can scale up, and scale down if there's no load on the server or the engine, using Kubernetes knowledge rather than direct infrastructure knowledge.

So here we have it deployed: this one is running on my machine, as you can see, using the ngrok URL, and this one is using normal HTTPS, running in AWS. And that's about it. I tried to keep it minimal; didn't want to break it, of course.
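The configuration just described — hostname, private GitLab, TLS secret, per-component image tags — might look roughly like the following Helm values override. Every key name and value below is illustrative rather than the chart's actual schema; the legend-integration-helm README documents the real values.

```yaml
# values.yaml -- illustrative keys only, not the chart's real schema
hostname: my-legend.ngrok.io        # e.g. an ngrok URL when running on a Mac
gitlab:
  host: gitlab.mycompany.example    # a private GitLab, if not gitlab.com
tls:
  enabled: true
  secretName: legend-tls            # a standard kubernetes.io/tls secret,
                                    # e.g. with a Let's Encrypt certificate
studio:
  image:
    tag: "some-newer-tag"           # bump per-component tags on upgrade
```

The TLS secret itself can be created beforehand with a standard command such as `kubectl create secret tls legend-tls --cert=tls.crt --key=tls.key`.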
That went really well. You can keep it hooked up so David can present from here. As we mentioned before, this is what was enabled by the fact that SUSE contributed a Helm chart, but prior to that Canonical started working on an integration with their technology, Juju, which provides more support for the deployment and runtime of the architecture. I'll let David walk you through this.

Sure, thank you. Let me make sure we get on the right slide here. So we put together a series of charms that compose the Legend application together. If you haven't used Juju: Juju is an orchestrator that allows you to not only configure and manage your application or your infrastructure, but also orchestrate components together, relate them together, with a single command. Here you can see this single command is "juju deploy finos-legend-bundle". That bundle lives on charmhub.io, and inside that bundle is a simple YAML file that pulls down a series of different charms. Those charms define each of the discrete components that are part of the Legend application, so this deployment handles all of the interrelations between those discrete applications.

I'm not going to do the actual demo live, because I have a pre-recorded session here that I'll walk you through. I do have infrastructure sitting with me today, so I can show you live, outside, exactly how these components work. Whoops, let's make sure we get access. Oh, this is not going to play — it needs access to Google Drive. Sorry. The demo deities have struck us again. Luckily, we've got enough time. Are there any questions?
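The single command from the talk, plus a way to watch it converge, can be sketched as below. This assumes the juju CLI is already bootstrapped against a Kubernetes cluster; only the bundle name comes from the talk, the rest is a sketch.

```shell
# Deploy the whole Legend bundle from charmhub.io in one command;
# the bundle's YAML pulls down and relates the individual charms
juju deploy finos-legend-bundle

# Watch the applications come up; some will sit in "blocked" until
# the out-of-band GitLab authorization steps are completed
juju status --watch 5s
```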
While we set up, are there any questions? Yes, someone in the audience. So, it's a containerized architecture; this is the core, and we have five components now. Slowly the Legend team is releasing more components; we just open-sourced another component called Depot, and — I'm not sure, Pierre, help me out here, is there any other component? — the relational database, and Query, yes. So I would expect the Legend architecture to grow a little wider, but in general the bulk of the load is carried by the Engine. The SDLC is essentially a facade over the GitLab API, and Studio is the UI, so although the architecture is quite articulated, the load is more on the front end, because it runs in the browser. I don't know if that answers your question.

The repository is legend-integration-helm. It's quite interesting to see this type of integration becoming part of the Legend project, even though the lead maintainers of these components will not be the Goldman Sachs team. We actually see collaboration at work, and with the possibility of scaling out the development of such an important and big piece of infrastructure, it's just so much easier.

Right now, within the official Legend code base, as I said before, you have one way to run it locally, with Docker Compose — a technology that is not suggested for use in production, it's not production-ready — or you build from source.
So you take the source code of Legend, and it takes roughly an hour-ish to build the JAR files and the Yarn/webpack packages that you can then run. Again, we have to keep in mind that Legend was contributed a year ago, and it was a huge endeavor. I think the Legend team consciously prioritized things like fully open-sourcing the documentation, the SDLC, and the relational database mapping. Now, seeing the uptake of this technology and the community wanting to deploy it, investing more into deployment was a natural reaction. So my personal take is that these technologies are the future of production-ready deployments for Legend. But keep me honest here, folks. Okay, that silence means a lot to me. I hope I answered your question.

Sorry for the technical difficulties. Okay, let me walk you through this. Is it working? Yeah, okay, good. AV challenges. As I said, I do have a live demo here. We're just running juju status, which tells us the status of our controller and our models. We ran juju clouds here, and it actually shows —
I'll just pause this for a second. So this shows us the clouds that are registered in this particular setup; right now there are none. Then we're going to bootstrap onto MicroK8s: we bootstrap a Juju controller inside the existing Kubernetes cluster that's already there — you can see MicroK8s just a few lines above. The Juju controller is not a physical entity; it's a logical entity that allows you to manage the substrate below and the applications above it. With the controller deployed, we add the model that will hold the Legend application. The model is essentially equivalent to a Kubernetes namespace: it's a canvas that you aggregate all of your applications into — we'll talk about this in a few slides — along with your permissions and your roles. A controller can have multiple models, and a Juju instance can have multiple controllers. This one happens to be talking to MicroK8s; I could also have a controller that talks to AWS, GKE, vSphere, or local hardware with MAAS.

Now we put the status of Juju under watch so that we can see what's happening as it deploys. You probably missed it there for a second, but I actually executed the juju deploy of the bundle, and this deploys the charm version of the Legend bundle, which includes all of those discrete applications and relates them together. You'll see that some of them go into a blocked state; that's because the application requires a couple of actions that happen out of band of the deployment itself. One of those is going to GitLab, getting an authorization token, and then using that token to authorize the SDLC and Studio. You'll see that here; we'll just pop out really quickly.
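The bootstrap and model steps shown in the recording can be sketched as follows; the controller and model names are my own placeholders, not names from the talk.

```shell
# Bootstrap a Juju controller into the existing MicroK8s cluster;
# "legend-controller" is an arbitrary name
juju bootstrap microk8s legend-controller

# Add a model (roughly equivalent to a Kubernetes namespace)
# to hold the Legend application
juju add-model legend

# Keep the status under continuous watch while the bundle deploys
juju status --watch 5s
```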
We'll launch a browser and grab those auth tokens from inside GitLab. This is the GitLab instance running inside the Docker container that was started as part of this bundle. We grab the authorization token and then we authorize Studio. You'll see here that it's importing the certs; this step is no longer necessary, but it will create the certs, then it will use that authorization token to authenticate against the API. Then we pass that token into a juju config command. Basically, instead of editing big, long YAML files, I'm saying: juju config, add that token to the specific application that needs it for GitLab. Now you'll see these services come back up; they're all green and active, which is what we expect, and everything is related.

Then we go to Studio. Now that we've got the token for GitLab itself, we authorize Studio to use that GitLab instance. You'll see here, we go to the Studio URL, and in a moment it tells us "unauthorized". Legend is now up, but not authorized for this user, so when we go to the actual Studio and try to use and interact with it, you'll see "unauthorized" in the bottom right corner. We have to do one more action outside, which is to get the authorization token for that interface; we get that from the SDLC. You'll see here: we go to the SDLC, pop that into the browser, authorize it, and when we reload Studio, we're back in. And that's it — about six minutes end to end when you go through all the different steps, and you're up and running in Studio. There you go.
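Passing the GitLab token to the deployment, as described above, might look like the sketch below. Both the application name and the config option name are illustrative; the actual keys belong to the individual Legend charms and should be taken from their documentation on charmhub.io.

```shell
# Hand the GitLab authorization token to the relevant charm instead
# of editing long YAML files by hand (names are illustrative)
juju config legend-gitlab-integrator api-token=<token-from-gitlab>

# Confirm the services return to an active state
juju status
```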
And now we authorize the SDLC. When we reload Studio — where we got "unauthorized" before — you'll see that we have success, and now you have a canvas from which you can start to build your models. It's literally that simple. Any questions? I can talk about Juju later outside, I can talk about how this model works, and I have a live demo if anybody wants to see all the mechanics and what goes on behind the scenes.

Just really quickly: these are the different resources that Juju can manage deploying onto. You can deploy onto on-prem, local, public cloud, or hybrid cloud; you can have your MongoDB locally and your SDLC on a public cloud; you can deploy your machines with MAAS; you can deploy in any heterogeneous way that you want. The Juju controller sits on top of that; it's essentially an agent that the Juju client talks to, and it deals with the applications and the substrate beneath them. juju bootstrap builds that controller onto those environments. Then you can use Juju to deploy apps like Legend, as we did here, from charmhub.io, and there are hundreds of applications on Charmhub that you can deploy with Juju, including things like OpenStack, MySQL, Postgres, Grafana, and so on.

The model we talked about when we did the add-model is just a canvas where you compose your different applications. It provides service isolation, so it's separate from other models and other controllers. It provides access control, so you can manage access to applications and services within the model. It is also a repeatable entity.
You can deploy it multiple times, idempotently, and it isolates the boundaries between the different applications and services, both from each other and from other models and controllers that you might manage. If you had to do this manually, these are the discrete steps you would have to perform: deploy each component individually and then relate them together, which is what the bundle does for you. Each of these applications has relationships: Mongo has a relationship to the SDLC; some have a relationship to GitLab, so GitLab requires a token, Legend offers that token, and so on. That's essentially the integration that the relationships in the charm build for you.

So all the operations — the day-two operations, the configuration, the management of that cluster — happen as part of that bundle. When I ran juju deploy, all of this happened automatically, so you don't have to sit there, log into each of these different services, and manually configure them; the charm handles that for you. The charms are managed by the community for the most part, so all of the tribal knowledge that goes into how to configure, manage, and run those day-two-and-beyond operations is up on charmhub.io. And that's it. Thank you. We've still got two minutes for questions. Go ahead, Ali.

Which kind of operator?
A CNCF one? I do believe there are some discussions on that; I can talk through the details outside and we can figure that out. It's a very dynamic community, so there are certainly discussions about creating charms for a number of different things; Legend is just one example. Canonical can help you create charms for public applications or internal applications. We have charmhub.io, which is a public charm store, but you can also run an on-prem charm store for your own internal versions of charms that you don't publish to the community, for example.

As far as I understand, Juju and the charms use Kubernetes operators as a technology? They do, yes, but Juju doesn't require Kubernetes; that just happens to be what is used here. Juju and Kubernetes are separate entities; Juju can operate, manage, and orchestrate Kubernetes as well as other services, like a full OpenStack, or a full Postgres cluster, or something like that.

Any other questions? Wow, fantastic. Sorry for the technical difficulties, but we made our way through. Thank you so much for coming. Thank you.