Oh, okay. Okay, cool. I wanted to ask you when to start, would you tell me? Because I'm supposed to start in like two minutes, but there are not a lot of people, like yesterday. So just keep in mind, right now you are live streaming to the internet. Oh my god. Hello, testing.

Hello, everyone. Welcome to DevConf.US 2022. My name is Marco, and I'm the moderator in this room, the Conference Auditorium. We are going to have a series of talks in this room. You will have a chance to ask questions after each talk, but before you ask a question, please either come up here to ask it, or ask me to pass you the mic. And now I will introduce our first speaker, Eva Levenston.

Hey everyone, welcome to the talk about JWT and how to use it in a safer way. We'll begin with what a JWT is. A JWT is a JSON Web Token, and it's usually used for authorization processes. We're going to talk about the problem that JWT tries to solve, its structure, and then which mistakes we should try to avoid. My name is Ira Cherkis Levenstein, and I'm a senior developer at Synopsys Israel. I work on Seeker, the interactive application security testing solution, and what I do every day is basically try to help Java developers secure their code.

Okay, so a JWT, a JSON Web Token, is a compact, URL-safe way to transfer information; that's the token part. The other part is that it is digitally signed, so it can be trusted. If we look at the basic structure of an application, there is a user app, then you have a server and a database. The user authenticates to the server and gets a session ID, right? The server verifies the session ID against the database, and in each following request the session ID is sent. Session IDs can be kept locally in memory, and that's quite fast. But what is going on these days?
These days the server is basically a lot of different services, and each service has many instances that are created and deleted. Now, what would happen if the user authenticated, got a session ID, and this session ID was saved in a cache? You would then have to create some form of stickiness, and all the following requests would have to be sent to the same service instance. This becomes a problem at a larger and larger scale. The other solution would be to save it in a database, but then each service that gets a request has to verify the session ID with the database again: slower requests and lower throughput. Not only that, you also have to pass this information between different services.

So wouldn't it be nice if we could send the entire set of information about the user that most of our services need, from the user, together, in a compact form? Then it can be passed between the services. You don't have to ask the database whether the user is authenticated or not; you can just trust the token. But what is the problem? If this token is not signed, then anyone can pass whatever token they want, and your service will decide that this is an admin. This will jeopardize the security of your system. So this token must somehow be verified to be correct, and this is the role of the digital signature.

Let's talk about the structure of a JWT. You have a header part, a payload part, and a signature. The header part is a small JSON that describes the token. It usually has the "alg" field, saying what the signing algorithm is, and the "typ" field, which would be JWT.
There are also encrypted tokens, but most tokens in use are unencrypted. This JSON then goes through Base64 encoding, so it can be sent in a request without further escaping, either in a cookie or a header.

The payload part is the heart of the token; this is the part that contains the information. There you will have, say, the issuing date, the name of the user, the user ID, the role; any information that your application needs can be added here. Again, this JSON is Base64-encoded so it can be passed in the request without further escaping.

Now the signature part. This is the part that allows you to make sure that the data wasn't tampered with. How do you do it? You take the Base64-encoded header and the Base64-encoded payload, you combine the two strings with a dot between them, and then you sign the result with a signing algorithm and some key. The result is again Base64-encoded, and this is the full structure of your token.

How does it look? It's just a kind of Base64-encoded string, and you can play with it on jwt.io: change the header, change the payload, see what is going on. In code this is also quite simple, because there are a lot of libraries that support creating and decoding JWTs in any language. My examples are in Java, but they will apply to any other language as well. What you do is simply create your claims as key-value pairs with the information that you want to pass.
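As an aside, the header.payload.signature construction just described can be sketched with only the JDK, no JWT library at all; the key and claim values here are invented purely for illustration:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtSketch {
    // base64url(header) + "." + base64url(payload) + "." + base64url(HMAC-SHA256 signature)
    static String hs256Token(String headerJson, String payloadJson, byte[] secret) throws Exception {
        Base64.Encoder b64 = Base64.getUrlEncoder().withoutPadding();
        String header = b64.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8));
        String payload = b64.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));

        // Sign "header.payload" with HMAC-SHA256 and the secret key
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        String signature = b64.encodeToString(
                mac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8)));

        return header + "." + payload + "." + signature;
    }

    public static void main(String[] args) throws Exception {
        // Invented claims and key, for illustration only
        System.out.println(hs256Token(
                "{\"alg\":\"HS256\",\"typ\":\"JWT\"}",
                "{\"sub\":\"user42\",\"role\":\"reader\"}",
                "demo-secret-key".getBytes(StandardCharsets.UTF_8)));
    }
}
```

In practice a library does all of this for you, which is exactly the talk's point: it is easy to use, so it is easy to misuse.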
Some of the claims, like the subject, are predefined by the specification, but you can add anything you want. Then you create an object, put in the claims, the signing algorithm and the key, and just say: give me the compact JWT. Very easy to use. When you come to decode it, you again create an object that knows how to decode it, give it the key, pass it your Base64-encoded JWT string, and you get all the parts of your token.

The issue is that because this is so easy to use, you also need to understand what you are doing behind the scenes, and which things to avoid, because you will not always get an exception if you misconfigure your code, and then you jeopardize the security of your application.

Rule number one: remember that JWTs are encoded but not encrypted. Yes, it looks like something you cannot understand, but if you take the payload part into any Base64 decoder, you can get the information. Now, meet Mara. She's our hacker for today. What happens if she gets hold of the token? She can decode it quite easily, and here she finds a password inside. So rule number one: never send sensitive information inside your tokens. Never add passwords, credit card numbers or bank account numbers. This is completely controlled by you as a developer: either you add it or you don't, so don't add it.

Okay, the other thing to remember is that JWTs are easily created, so you cannot just assume that you know the exact source of your token. Here's Mara again, and she will try to send you tokens that look like true tokens, but they are not. So what are the rules about that?
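Before the rules: to underline that "encoded, not encrypted" really means readable by anyone, here is Mara's decode step with nothing but the JDK's base64url decoder (the token is built inline for illustration; the signature is irrelevant when you only want to read):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtPeek {
    // Read the payload (the middle segment) of a JWT: no key, no verification needed
    static String readPayload(String jwt) {
        String payloadSegment = jwt.split("\\.")[1];
        return new String(Base64.getUrlDecoder().decode(payloadSegment), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Build a throwaway token inline; only the payload segment matters here
        String payload = Base64.getUrlEncoder().withoutPadding().encodeToString(
                "{\"user\":\"mara\",\"password\":\"hunter2\"}".getBytes(StandardCharsets.UTF_8));
        String token = "header." + payload + ".signature";
        System.out.println(readPayload(token)); // prints {"user":"mara","password":"hunter2"}
    }
}
```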
The first rule, and the most obvious one: always verify the signature. This is the main point; this is what a JWT is for. It is signed so that you can verify the token is correct. You have to do this, but the frameworks sometimes give you the option not to verify. For example, in Auth0's java-jwt there is a way to just decode your token without any verification. This is the wrong way to go. The correct way is to create a verifier, give it the algorithm that you expect and the key, and only then pass it the token. In the first case, for a wrong token you won't get an exception; in the second case, if someone tampered with your signature or with the token, you will get an exception, so you know something is wrong.

The same goes, for example, for jose4j, although here, to be fair, by default it is not vulnerable: it will try to verify the signature. But you can explicitly skip the signature verification. This can sometimes be convenient for debugging, but please do not do it; such code should never go to production.

The other thing is that the RFC permits not signing the token, so you can have a token with two JSON parts and no signature. The algorithm is then called the "none" algorithm, and it depends on the package whether it is handled like a valid signature or not. I think most of them would treat it as invalid by default, so you would actually need to explicitly allow the "none" algorithm in your code; they want to make sure you are not doing this by mistake. But there are probably packages that do allow it. So make sure that a token without a signature cannot pass, that it fails in your code.
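A hand-rolled sketch of both rules so far, require a signature (rejecting two-part and "none" tokens) and actually verify it, using only the JDK. The naive substring check on the header JSON is for illustration only; a real implementation would parse the JSON properly:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class JwtVerify {
    // Verify an HS256 token: reject missing signatures and "alg":"none" outright
    static boolean verifyHs256(String jwt, byte[] secret) throws Exception {
        String[] parts = jwt.split("\\.", -1);
        if (parts.length != 3 || parts[2].isEmpty()) return false;   // unsigned: invalid
        String headerJson = new String(Base64.getUrlDecoder().decode(parts[0]), StandardCharsets.UTF_8);
        if (headerJson.toLowerCase().contains("\"alg\":\"none\"")) return false; // never allow "none"

        // Recompute the signature over "header.payload" and compare in constant time
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] expected = mac.doFinal((parts[0] + "." + parts[1]).getBytes(StandardCharsets.UTF_8));
        byte[] given = Base64.getUrlDecoder().decode(parts[2]);
        return MessageDigest.isEqual(expected, given);
    }
}
```

The point of the sketch is the shape of the checks, not the code itself: your library's verifier does the same work, but only if you call the verifying entry point rather than the decode-only one.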
Now, probably the last thing about verifying the token: do not just trust the algorithm that is inside the header. We saw it can be substituted with the "none" algorithm, but okay, if the library you are using does not support "none", then you are fine. There is another trick that Mara can do: she can replace the signing algorithm and say, okay, this is my token, and I'm using, for example, HS256, a symmetric algorithm.

Here I want to remind you about symmetric and asymmetric algorithms. When you use an asymmetric algorithm, the signing key and the verification key are different. So if I'm creating the token and signing it with an asymmetric algorithm, I have a private key; I sign with it, I publish my public key to everyone, and I send the token. Now anyone can trust it, because they can verify the token with the public key known to everyone and say: okay, this is a valid signature, perfect, everything is good. But in a symmetric algorithm, the signing key and the verification key are the same, so they must be kept secret.

Now, what happens if Mara takes the well-known public key and uses it as the secret to sign her token with the symmetric algorithm? When you try to verify it, you verify with the same well-known key, and you think the signature is valid. It seems like this cannot happen, but look at the regular code of jose4j: if I just set the public key without restricting the algorithm that I expect to be signing the token, the package takes the algorithm from the header part. This would work perfectly fine, but it would be vulnerable. You actually have to go and specify the algorithms that you expect to get, so that if the algorithm is different, you get an exception. In Auth0's library, for example, you don't have this issue, because you have to specify exactly the key and the algorithm.
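One way to enforce that expectation yourself, independent of library defaults, is to pin the algorithm before any verification happens. Again this uses a simplistic substring check on the header JSON purely for illustration:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AlgPin {
    // Never trust the header's "alg" on its own: compare it against the one
    // algorithm your service actually expects before doing any verification.
    static void requireAlgorithm(String jwt, String expectedAlg) {
        String headerJson = new String(
                Base64.getUrlDecoder().decode(jwt.split("\\.")[0]), StandardCharsets.UTF_8);
        if (!headerJson.contains("\"alg\":\"" + expectedAlg + "\"")) {
            throw new SecurityException("unexpected signing algorithm, refusing token");
        }
    }
}
```

With this in place, Mara's RS256-to-HS256 swap is rejected before the public key is ever used as an HMAC secret.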
So it's very package-dependent, but you have to keep this in mind. Now, the last thing about those tokens: you also want to validate the claims. JWTs are used as keys, and they are passed around, so you want to make sure who signed the token, that you got it from the right source, the one you expect. You also want to consider your expiration date, because if the token is saved somewhere and Mara has access to it, and it expired after half an hour but she gets in after one day, she can still send it and use it like a valid token, while it should not be valid: it has an expiration date. So you actually have to verify those claims, expiration and issuer, and this you have to do actively as a developer, because the default code will not contain it. You need to specify it, in Auth0 or jose4j, and probably in other packages too, because the framework doesn't know what you want to verify and what you expect to get. You have to do it actively.

Another thing about the expiration date is the following; it's more of a design consideration. A long expiration time gives the hacker a longer window of time to get hold of your token and use it, so we would prefer short expiration times, right? But if you look, there are two common uses for JWT: user authentication and API authentication. What is the problem with using it for user authentication?
If you authenticate your user with some form of credentials, you have to have a long expiration time, because a human user would never want to enter a password every five minutes. So this kind of jeopardizes your system, and it should be a consideration. The other use is API authentication, where two services communicate and authenticate with each other: one is on one end, the other is the server, and they need to exchange information. A service wouldn't mind authenticating every minute or two; it's not a problem, right? So this would be a better use of JWTs.

Okay, so let's summarize what we have talked about so far. We talked about JSON Web Tokens: they are a compact, URL-safe and digitally signed form of transferring information. They are used as authorization tokens: a user or service can get a token, and with each following request it sends the token. The things to remember are that a JWT is encoded, not encrypted, so never send sensitive information in this open form; and that it can be easily created, so you must verify the signature, verify the algorithm, and use a strong signing algorithm. A JWT is, eventually, a key, and you want to make sure this key behaves as you expect it to: that the expiration time is valid and the issuer is correct.

So here's the checklist for using JWT securely: never send sensitive information, always verify the signature, verify the claims, never allow the "none" algorithm, and use predefined algorithms. And feel free to ask any questions now or afterwards.
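As an appendix to that checklist, the claim-validation item (expiration and issuer) boils down to something like this sketch; the claim values are hypothetical, and a real parser would read them from the decoded payload JSON:

```java
public class ClaimChecks {
    // Sketch of active claim validation: the library won't guess your issuer
    // or enforce expiry unless you configure it, so state both explicitly.
    static void validate(long expEpochSeconds, String issuer,
                         String expectedIssuer, long nowEpochSeconds) {
        if (nowEpochSeconds >= expEpochSeconds)
            throw new SecurityException("token expired");
        if (!expectedIssuer.equals(issuer))
            throw new SecurityException("unexpected issuer");
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis() / 1000;
        // Fresh token from the expected issuer: passes
        validate(now + 300, "my-auth-server", "my-auth-server", now);
        System.out.println("claims ok");
    }
}
```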
I'll be around during the entire conference, so feel free to contact me. And that's it, thank you.

I would like to start by saying that I'm totally with you: JWT is an excellent tool if you use it properly. Nevertheless, one criticism against JWT that remains is that it is not a protocol, meaning that people can add whatever claims they want inside of the JWT, including the things that you mentioned here, like the expiration (exp) or the not-before claim, iss, and so on. I would like to hear your take on JWT not being a protocol, and whether you see it becoming a standard in the future where those claims are enforced rather than adopted by convention.

Well, this is a great question. For me it's more of an architecture question, but each application will probably want different information about the user to be passed, because the intention is to pass the information that your services need in order to operate, and to reduce things like database access. So I'm not sure you can cover everyone's needs. I don't think it will become a protocol, but who knows.

Second, I'll see if there's anything in the Discord. No, there are no questions. Okay, cool. If there are no questions, then thank you.

Welcome to DevConf.US 2022. I'm going to introduce our next speakers to you. They are Andrew Block and Alex, and their topic today is unleashing the power of the container registry.

Thanks a lot, everyone. First of all, my name is Andy Block. I'm a distinguished architect with Red Hat. I am heavily invested in the open source community; I am a Helm maintainer. So, how many of you have heard of Helm? Wow, awesome, cool. You must know Kubernetes, you know containers; you're going to fit right into this talk. I'm also in our customer success organization within Red Hat.
I work with customers doing a lot of Kubernetes, but also doing edge, and containers seem to be the building block that everything is delivered and packaged within. I've been working with Alex on an initiative we're going to talk about, but also working upstream with some of the other projects that deal with container registries.

Hey, y'all. Yeah, I'm a senior field engineer. I don't exactly know what that means, but... he does stuff. I do stuff, yes, and it's usually with regard to systems design and architecture. So that's it; he's the brains.

So: containers, images, registries, oh my. That's where we're going to start: everything about building, running and serving containers. No surprise here, containers have fundamentally changed how software is developed, packaged and delivered. Look back ten years: how many people were using containers? Nobody. They really weren't a thing to most people. But containers are not a new phenomenon; they've been around for a while, since the 70s. It started with the introduction of chroot back in 1979. Then not much happened for 20 years. Then, in the early 2000s, containers started to make some headway.
New things were introduced for the BSDs, and Solaris; jails became a thing, where you're able to start isolating processes and resources. That really started changing the game, because you could start slicing up an individual system for multiple users and multiple processes. Then along come LXC and Google, and we start to make some headway. But it wasn't until the modern era, back in 2013, that containers took off. Docker became a thing. Docker really introduced containers to the masses: they provided the API, the runtime, all packaged in one technology, instead of these different projects and capabilities you had to piece together yourself. Docker brought everything together, and that's similar to Kubernetes when we look at the orchestration space: you don't need to introduce multiple different technologies; Kubernetes was the way you could orchestrate containers and deploy them at scale. That's the whole ecosystem of how containers came to be.

Now, it was really the registry that brought containers to the masses. You had the runtime, but how many of you, when you first started with containers, built a container first? No, you went and ran docker pull mysql. Oh my god, I have a database! I used to spend hours installing MySQL on my machine; now I can just run MySQL, instant gratification. That was really the key difference: being able to distribute content easily.

So the distribution model itself exposes that API for sharing container content. This API includes both a registry and all the libraries you need to consume container content, and this is Docker Distribution. I know Docker itself isn't as popular as it once was; there are many other runtimes, there's CRI-O,
there's containerd. But most registries that are out there, the popular ones, are still using Docker Distribution. It's the core implementation under Docker Hub and the GitHub container registry; Google, I think, still uses it too, even though it has another iteration. It is the model for easily getting content out there.

So, how should the distribution API be defined, and how should things be stored in it? That's one thing you need to think about, and that's where this initiative called the Open Container Initiative (OCI) came to be. It's an open governance body providing a standardized method for working with containers, because back in the mid-2010s you had a lot of different players starting to get into the container space, and a lot of factions started to form. OCI came in to try to bring everyone together and say: okay, we know there are a lot of different opinions; let's provide an interface so that, no matter what, you can work successfully in the container space.

If you're in Kubernetes, you may have heard of the deprecation of the dockershim that happened recently. Basically, Docker had its own thing, then OCI came along, and Docker finally conformed to OCI. But if you were still just using Docker, that was going to break, because we're getting rid of this middle shim and going full OCI. That's how this falls into the Kubernetes space.

OCI really brings in two key specifications. There are three or four out there, but the two are the image spec (how should an image be composed?) and the runtime spec (if I have an image, how should I execute it?). The image spec has all these different components: an index, a manifest, a layout, a file system. Don't worry,
we're going to share these slides afterwards; you don't have to try to memorize all of these right now. It has every single one of these, but they're all related and they're all important: when you pull down an image, all of these pieces come into play.

And here's how they relate to each other. You have an image index. The index allows you to have potentially multiple flavors of the same image. Say we had MySQL: you might want to run MySQL on amd64, or you might have a new Mac and want to run it on arm64. The index provides that first level, saying: I have all these variants available, and depending on your architecture you can pull the correct version. Once you have that, you get the container image, and the first thing you'll get is the manifest. The manifest defines two key elements: it contains the config, which has a lot of configuration details about that image, as well as all the different file system layers. Remember, container images are just file system layers on top of each other. It's going to have all those links to the different blob SHAs. So basically, all a container registry is is a bunch of blobs and references. It's that simple; kind of dumb when you actually look into what a container registry is. It's not a magical system, it's just blobs and links. Cool, though.

Here's what an OCI image manifest looks like. You have your media type. Media types, which we're going to come to in a little bit, are incredibly important; they basically tell you what data you're looking at and what's being served. You have your config section; it says exactly what this is going to be, in this case basically a Docker container image. It's going to provide the digest so you can perform some validation against it. And then you're going to have a list of layers.
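A quick aside on those digests: the `sha256:<hex>` values that a manifest uses to reference its config and layer blobs, and that a client uses to validate what it pulled, can be computed with the standard library alone (the blob bytes here are invented):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class BlobDigest {
    // Compute an OCI-style "sha256:<hex>" digest over a blob's bytes
    static String digestOf(byte[] blob) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256").digest(blob);
        StringBuilder hex = new StringBuilder("sha256:");
        for (byte b : hash) hex.append(String.format("%02x", b & 0xff));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(digestOf("layer contents".getBytes(StandardCharsets.UTF_8)));
    }
}
```

Content addressing is what makes "just blobs and links" safe: if the bytes you pulled don't hash to the digest in the manifest, you know the content was corrupted or tampered with.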
It is as simple as that: a bunch of layers, with links telling you where in my registry I go to get each one. I go ahead and search for the SHA with that value, and the registry is smart enough to know how to serve it back to the client.

Now, how many of you have heard of a concept called OCI artifacts? Okay. Did you know that registries can hold content that is not a container image? That's where we're going to change the game, because registries can do more than that: you can actually serve anything. Can someone, show of hands, tell me a common project or component that is currently served as an OCI artifact? Yes? Oh, I'm sorry, could you repeat the question? I was a little confused. Yeah, no worries. Does anybody know a common use of OCI artifacts? Oh, sorry, I have it on the slide there. No worries.

So basically, an OCI artifact allows you to store any content that you want in an OCI registry. Say you want to store a Helm chart; I'm very much in the Helm community. We can now store Helm charts in a registry. If you're familiar with Helm, one of the biggest pain points is: how do I actually store this thing? If you want to serve it, you have to create a repository and put it somewhere. GitHub makes it easier in that you can host it there, but you still have to set that up. And considering you most likely have to pull the images your Helm chart references from a registry anyway, let's store the chart in a registry too. The way to do that is basically by modifying the config's media type field we mentioned. Here it is: this is your traditional container image, just referencing an image.
We can actually change that field to specify a different type. But the key thing here, and this is important, is that not every registry supports OCI artifacts; it's an opt-in model. Docker Hub doesn't; Quay has partial support; the GitHub container registry does have support, and I think Google does as well. But as the use of OCI artifacts becomes more prevalent, more and more container registries are starting to add support for it, because of the possibilities it enables.

And this is really how it works. We no longer have a top-level media type; this is how you define an OCI artifact. We're looking at the manifest again, and the key here is that we can now specify the media type in the config's media type. This is the config that is used by Helm. So if you're storing a Helm chart, this is what it looks like: you have your config, but you also have the two pieces of a Helm chart. If you've signed your chart, you have a provenance file, and you have the tar, which is just the chart itself. That's as simple as it is, once again: config and layers. And your clients are able to consume that.

Now, what if we wanted to add additional metadata to our artifact? We introduced how we can store data in there, but what happens if we want to add some more conscious descriptors to it? Look back at the provenance file and the tar: there's no context; we have no idea what it is, it's just a tar. Can we learn more about it based on the details in the manifest, or anything in the OCI spec? The good news is we can. The OCI spec allows us to add annotations, and you can add them in multiple places: you can do it on the manifest itself, or within the config property.
You can also do it on each individual layer. That really gives you a way to provide more details about the content stored in OCI registries. Right now we're kind of hacking around it, but now we actually have the ability to do it more natively, within the registry itself. It's within the specification; we're not building anything new, it's already there.

So let's say we have an artifact. Annotations are, very much like in Kubernetes, key-value pairs. So I can say animal=dog, content=true, device=radio, and on the other artifact, animal=cat, content=true. We can now express relationships: give me everything that has content=true. You can start doing some interesting things with that, building relationships, even complex relationships, where artifacts relate to each other and form a graph of details. Some complex options and capabilities are enabled by this. It's kind of crazy. Yes?

The question that was asked was: can the artifacts relate to each other, and can you provide a deeply nested way to relate artifacts? Is that correct? Can the artifact relate to itself?
I'm just repeating for the audience at home. You could potentially build up models that say: okay, this artifact doesn't have it, so where are the other artifacts that might have it? So potentially there are options out there. It's kind of crazy; that's why this is so exciting, because you're starting to see how you can build out more and more things because of it.

So let's talk through, or showcase, a way that we can really start using these concepts for new opportunities. I'm going to turn it over to Alex, who's going to talk more about how we're going to realize the power of the OCI registry. But I'll quickly go through the demo-in-depth slide. What we're going to do here is take website content. So, what's in a typical website? You have your HTML files, your CSS styling, maybe some JavaScript, and some images. We're going to publish this to an OCI registry. We're also going to add some metadata to it: env=dev, env=prod. This gives us the opportunity to maybe leverage some feature flags. How many here are developers in the audience? How many of you are using feature flags in your development?
This gives us the option to potentially enable certain feature flags: the content's there, and we can turn certain things on, maybe a new image, new content, new backgrounds. We're going to push that content, that artifact, to a remote registry so it can be served and consumed. We'll then start pulling content, but we're going to pull a subset of content based on those attributes, by inspecting all the content that's already available in that OCI registry. And then we're going to finally serve the content that we've retrieved. So now we're going to turn it over to the demo.

Thanks, Andrew. He's the professional talker; this isn't my full-time gig, so bear with me. Just a couple of quick setup things before we start. Can you also blow it up a little bit in size? Yeah. All right. One second, sorry; I have a little script I wanted to go through. Not a computer script, but a human script that I wanted to give you, and I didn't have it pulled up.

So first: I have a container running that just has my demo environment, so don't mind the actual environment here; I didn't want to pollute my machine. It's fine. Just to show everyone. Oh, you want me to blow it up? Yep. Everybody see that? Okay. Here I have a couple of
So I have this uh this this script that has a wrapper on a A reference client that we've been building that that implements This type of oci artifact publishing And then I have another script that wraps that that um that reference client or that reference implementation of our client that performs the the the the pulls by attribute um and then uh This other file this data set config is how we we turn a website or any collection of content into a data set that that has these um annotations or or What you called, uh, I think at one point enriched enriched content. Yeah, all we're doing is we're we're taking your typical static files the javascript the css the images and as we're going ahead and publishing those to the to the Image registry we're adding content to it and we can get a little bit more into the details a little later But we can talk about how we're doing that and relating to okay If I have a file in certain folders or in certain directories Let's go ahead and add certain metadata the end equals dev the end equals prod So that we can then retrieve it later So what we're going to do first is we want to take an existing directory structure of website content and push it to An image registry and that's what um these don't work through now So right here. I we have the dev and the prod version of the website and i'm just going to push um the dev A change to the to the the dev version so that we can we can see we can see this in action. So Uh, the first is just me um Changing this and I just want to go into this directory real quick. So Um So right here we have a caddy binary. I don't know if anybody's feeling with caddy. 
It's just a web server. And then I have these two subdirectories: local is the dev build, and production is the prod build. What I'm going to do is take all of this, including the binary, and push it into the registry. If anybody noticed, I started up a Go container registry on my localhost just before the demo; it's called dog-pudding. We talked about everything running in a container, and we are running the demo in a container, with a container registry, but not using containers for the content itself.

So I'm going to use this wrapper we put on our reference implementation of the client, and push everything under the push directory to localhost, with the dev-conf demo repository at version one. And because this is local, I'm not going to use TLS. All right, so I just pushed all of that, and you can see here, like I said, this is an early prototype of the reference implementation we're using. I think it produced an error at some point near the beginning, and you're going to see a lot of debug output, because that's where we're at right now. But at the end of everything we were notified that we've pushed this artifact. Andrew showed an example of an artifact manifest and the elements in it earlier.
It's all as he previously demonstrated, but here we have these annotations. These are within the layers; remember that layer field we had to convey the configuration in. Each one of these is a layer, so they represent each individual file, and as he said, there are dev and prod annotations here. I'll show you how I constructed that in a second. Then we have the actual relative path of the content, so you can see that anything under the build/local subdirectory got the dev annotation, and anything under production got the prod annotation. We could be a little fancier, with common content where we only annotate the deltas as dev or prod, but that's a bit more involved.

And just to show you: this is the file I used to actually produce those annotations. This is the API version of the prototype we're using, and then we just said that anything that falls under the production subdirectory gets the prod annotation, and anything local gets dev. And I added the Caddy binary as well, so that we can pull it down too.

Back to my human script. It's the same client that we're wrapping, with the pull, and this is all underpinned by ORAS. Is anybody familiar with the ORAS project? We'll get to that later; it's the underlying technology we're using to push and pull these artifacts to and from the registry. Basically, what we're doing now is, say we were on a different machine, anywhere in the world: one of the benefits of the distribution API is that I can pull down the content and execute it.
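As an aside, the dataset config and the resulting per-layer annotations shown on screen aren't reproduced in this transcript, so here is a self-contained sketch of both halves of the idea. The YAML field names are illustrative guesses, not the project's real schema; the manifest fragment follows the layer-plus-annotations shape described above.

```shell
# Illustrative sketch only: field names are guesses at the demo's
# dataset config, not the project's real schema.
cat > dataset-config.yaml <<'EOF'
apiVersion: client.example/v1alpha1
kind: DataSetConfiguration
collections:
  - files: "build/local/*"        # dev build output
    annotations: { env: dev }
  - files: "build/production/*"   # prod build output
    annotations: { env: prod }
EOF

# A trimmed manifest like the one shown on screen: one layer per file,
# each carrying the annotations the config above assigned to it.
cat > manifest.json <<'EOF'
{
  "layers": [
    {"annotations": {"env": "dev",  "path": "build/local/index.html"}},
    {"annotations": {"env": "prod", "path": "build/production/index.html"}}
  ]
}
EOF

# "Pull by attribute" boils down to selecting the layers whose
# annotations match the requested environment:
python3 - <<'EOF'
import json
layers = json.load(open("manifest.json"))["layers"]
for layer in layers:
    if layer["annotations"].get("env") == "dev":
        print(layer["annotations"]["path"])
EOF
```

The real client performs this selection against the registry; the filter above is just the matching logic in miniature.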
That's what we're doing now, and I'm doing it on my laptop, but you could all be doing the same thing. Here I'm going to retrieve the dev-conf v1 demo into a demo directory that this is going to create for me; these are all positional arguments wrapped in the script, and dev is the environment we want to pull. All right, we just pulled all of that, and it's automatically executed Caddy here. Back in the browser, you may recognize something similar to the site we've used to learn more about this conference. I've disabled all of what I'm going to call the feature flags of the website, except for the actual links to the subsites. So that's the dev version of the website. Then, same thing; I could probably just change dev to prod, but I'll repaste the whole thing. I've changed it to prod, pulled, and then served that content. Refresh, and the feature flags are enabled, we have a nice image, and the content's different. It allows us to be very selective: if you're a developer, you can easily go in, turn different features on, and test them. And all of this is underpinned by the power of a container registry and some of the technologies we're going to talk about now. Let me go back to the slides.

So how on earth is all of this achieved? This is a segue: we went from the underlying storage provider, with OCI artifacts and OCI distribution, into how we form these correlations. Universal Object Reference, UOR for short. If "u-o-r" sounds terrible to say, "your" is an acceptable pronunciation. Naming is one of the hardest parts of open source development; versioning is a close second, but naming is probably the hardest, and I should never be charged with naming anything.
So I'm sorry about the name; one of the candidates out there was "spoon," so this is better than spoon. What we're talking about here is a truly universal capability, and I don't want to sound like a lunatic, but when I say universal, I mean truly universal: we can provide content management of literally anything with these underlying mechanisms and strategies.

This is what the internet looks like, and this is what we've been dealing with. Before I started working on this project, this was my life: people would say, hey, I need you to get all this stuff for me and carry it over to an air-gapped, disconnected, or edge environment. And by the way, all this stuff needs to be correlated, because I have dependencies that cross different formats and content types, none of the metadata is the same, and everything is terrible. So this is our reality right now.

And over there, that's a UOR reality. You collapse all of these servers, all this junk. We have a Nexus, right? You're familiar with Nexus, or something that consolidates servers and content management into one device. But you're still left with all of your clients, a proliferation of clients, and that's problematic, in my opinion. Over here is the way I'd like to go: one server, one client, a universal format. You have the websites we showed, any package management, anything that can be expressed, AI models, any type of model; they're all encapsulated. And if we use Nexus or Artifactory as an example, not all of those options are available by default; you sometimes have to enable them, or they require a subscription. But OCI has one format and one distribution protocol: you can put everything in there.
That's the benefit of an open standard. So, UOR: there are a few pillars. The biggest one, well, they're all equal, actually, but you've got to have security. At this point in the game, in 2022, if you're solving problems and you don't have a security-first approach, then you haven't really solved anything for the world. As you saw in the demo, we packaged application logic and embedded it in with the content; that's one of the main aspects of UOR: you have content and you embed application logic into it. We have a predictable and fairly simple way of doing that, and when you're running it, it's using the same underlying technologies that containers use now for isolation.

As you saw when we were showing those manifests, they're all based on a graph, a DAG, a directed acyclic graph. That gives us really great mechanisms for enforcing these other aspects, like provenance. Attestation is something we get fairly easily and natively with UOR, and then there are things like Sigstore. Sigstore is awesome, by the way; I don't know if you've heard of it.

And universal: I can't say this enough, and I don't care if it sounds crazy. Vanessa asked, can you express me in it? And I said yes, we can express you; we can express anything in this. It's everything. And it's a single API, because in order to be universal you can't have bespoke interactions with content, right?
It has to be consistent: one API for everything. And it's portable: you have the same metadata, the same relational mechanisms, all in one, and I'm told to stay away from the word "format," but that's what comes to mind for me. It's universal. And identity is crucial here, because in order to have a truly universal capability, the caller has to have a role in the system; it's all about perspective and the rendering of a perception. When a user interacts with this system, they're presenting a part of themselves in their relationship with these objects. If anybody wants to dive into this, we have some supplemental content that goes over these concepts, sometimes using terminology that might not be expected.

This part is super exciting, because our boss just got up on stage and said "federated," and I didn't know he was going to do that. So: you have models, you store them in a registry, they're not in containers, and that makes versioning and management really easy, and you can make these relationships. First of all, I can express the model itself, any format of model, in UOR. But then I can say, okay, I have relationships: I have validations of that model, and, going back to provenance and attestation, you have a lot of ancillary content that goes along with everything, and we can form those correlations here. And this is really cool, because you don't have to export the data sets; nobody has to share data, right?
You share the model, and then you bring it into the environment where the data is. We have a question; can you open the mic? The question is: what is the advantage of models being independent of, not dependent on, data sets? I'm not a data scientist and I'm not going to pretend I am, but it goes to things like privacy: you have data that you don't want to export for privacy reasons or other confidentiality concerns. So instead we can import the model and perform training or inference, or whatever the task is, local to the data, so that we don't have that sharing. That does get problematic, though, because then you have bad-actor issues, where somebody is corrupting the model, and all sorts of other things come in there. But at the end of the day, whatever ancillary components enforce, reinforce, or mitigate against bad actors, we can express all of that in correlation with the model. Does that answer your question?

So, another use case. I'm an infrastructure person; I think that's how I got started in all this. We were trying to figure out how to provision Kubernetes or OpenShift in air-gapped environments, or in edge environments, and it's been pretty hard, so this one is near and dear to my heart. You can define an information system end to end, from the bootloader all the way to the most abstracted process in the system, and you can store all of that in a graph similar to the ones we just showed. And the really cool thing about this, to me, is that as that information system moves through its life cycle, it can capture all the output of that information system.
So, logs, whatever it is: that information system can write all of its output back to the graph that holds its definition. Think about that from a traceability standpoint: you have a system running, and its output is written back to its system definition, or to another part of that graph, and then you can make these correlations: I had this configuration and it produced this output. Very traceable. The auditing story is so important, especially in today's world; understanding how things came to be, this enables that.

And I'll give you another use case: a CI pipeline. You have source code; the pipeline has output as it's passing, building, whatever it is; and then you have the logs coming out of that. So you store the pipeline, and you can even store the source code, because I showed GitHub up there as one of the things you can express in UOR. You have the source code and all of the output of the CI pipeline, all contained within this graph, right?
And then the output artifact from that pipeline: you can continue carrying this logic all the way, so you take the output artifact, import it into a system's definition, and then look at the output of that artifact as it's being processed within its running system.

You may have heard of other projects, like Tekton Chains, that help with that traceability. The idea is that you could take some of that output and package it up into one universe. So instead of having all these different projects, like Tekton Chains and SLSA, if you're familiar with the secure supply chain work, these are all different components, UOR allows you to bring those all together under one house that you can then use, without having to keep them separate. Yes? Could you do this for your own application? You certainly could. It's one of those things that, no, we obviously haven't looked into yet, and we don't have all the answers right now, but we're providing the API and the capability mechanism to develop new opportunities.

All right, and then this is just the catch-all: everything could be a model, I guess. We said AI models, and just to bring it full circle, we're talking about all models. You have these temporal and spatial representations that can be expressed as attributes on the layers we showed, but then you have this other element, where we said we were embedding application logic into the content. We're all arranged in this room, right? But that means nothing without context. That's the application logic. That's the event.
That's the what-happened. And that's the bread and butter of UOR.

So, the next level. We've talked about all the technologies that underpin this, the container registry among them, and we've introduced UOR as a concept. How do we move forward from here? Well, first of all, there are other related efforts out there. There's the ORAS project, OCI Registry As Storage; two things there: they're working on ways of bringing together and referring to different artifacts, but more importantly, I want to give a big shout-out to the ORAS project in general, because they have been great collaborators on this. They have vested interests in it, and, as was mentioned earlier, we're actually leveraging the ORAS libraries within the client shown in the demo. A lot of great effort and great individuals; this is where open source is great, a lot of great people and great minds working together, and the ORAS team has certainly been part of that. And there's also the OCI reference types working group. We talked about the OCI specification and the open standards group; they're also looking at different ways of organizing content, and their mission statement is really: how do you describe and query relationships between different objects in an OCI registry?
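For anyone who hasn't used ORAS, the basic flow is a one-liner per direction. This is a sketch: it assumes an OCI registry listening on localhost:5000 (for example, `registry:2` started with podman), and the network commands are shown as comments rather than executed, since they need that registry to be running.

```shell
# hello-world of OCI artifacts with the ORAS CLI (oras.land): any file
# can be pushed to a registry as a typed artifact layer.
echo '<h1>hello devconf</h1>' > index.html

# Assuming a registry at localhost:5000, e.g. started with:
#   podman run -d -p 5000:5000 registry:2
# push the file, then pull it back from any machine that can reach the
# registry (--plain-http because this local registry has no TLS):
#
#   oras push --plain-http localhost:5000/site:v1 index.html:text/html
#   oras pull --plain-http localhost:5000/site:v1
```

The `file:mediatype` argument is what lets arbitrary content, a web page, a binary, a model, ride in an OCI registry unchanged.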
So a lot of these different areas are starting to come together, and we're also looking at using those same mechanisms and helping push these initiatives forward in the right direction. So, once again: the container registry really helped provide the foundation of the modern container era, and this is the continued evolution of how the container registry will support new technologies and bring on new capabilities in the future. If you want to learn more, not only about the UOR project: we have some presentations out there that talk more about what UOR is and how UOR really is universal, and if you're interested in looking into some of the other open source initiatives, here are some references. We really appreciate the time you've given us today to present this. Thank you very much, everyone. Any questions? Yes, and if you can come up to the mic, that'd be great.

Hey guys, long-time fan, first-time caller. Two questions for you. The first one: what you're describing, this specific mechanism, is essentially a label, right? That's a fundamental concept in Kubernetes, and I'm surprised you didn't touch on it, because if we're talking about image registries and Kubernetes, there seems to be an opportunity to align labeling inside Kubernetes with these particular images. So I'd be curious to hear whether you've thought about that, or what your thoughts are on the opportunity there. The other question I have, which is semi-related: thinking about this in terms of the long-term life span of things, how do you control the proliferation of these labels and tags so that it's actually manageable, and you're not just deploying a mess every time you actually go to deploy?

Yeah, so, Kubernetes. In my mind these are two sides of the same concept.
So, the proliferation of labels and the correlation to Kubernetes. We didn't go over the underlying mechanisms of UOR here, but one of the foundations is this idea of schema, and an analog for schema would be a custom resource definition in Kubernetes. And if you look at the embedded application logic of UOR, you're talking about something like a Kubernetes controller at that point. So they're very similar; UOR takes, and I hesitate to say this because there is a server involved, a serverless model. And the mitigation of the proliferation of labels is done through publicly sharing schemas: you have a published schema with a limited set of attributes that pertains to a specific content type, and that's also correlated to the application logic. It's similar to creating domains of logic and responsibility. We didn't go into the details here because we obviously didn't want to cover too much, but we have a lot of ideas and a lot of work already coming together along the lines of both of your questions. So yes, we've been thinking about that, and the PR for the schema was actually just submitted yesterday, so go to the website we linked and you'll see the project repositories. Anyone else? Yes?

Yes, for the audience at home, I'll repeat it, no worries. So, a lot of ideas here, and one thing that's really interesting about the container registry model is pull secrets, so I'm trying to figure out the layers here. Say I push some content as OCI artifacts to a private Quay instance, or anything that supports OCI artifacts, and then I want to pull it using a client that should interpret it. Do the client libraries know how to parse my ~/.docker/config.json?
Yes. Okay, so that's just the de facto, go-to default. We're not trying to reinvent how containers are pulled; we're using the same specification. There's an interface, the OCI spec, and we're complying with that; we're just adding to it. I always use the analogy that Colonel Sanders didn't invent fried chicken; he perfected how to use it. OCI is the chicken, and we're putting the ingredients together to build the tastiest chicken possible.

Yeah, no, that makes sense. I think, though, the next step is that in some cases you'd want the clients, whatever they are, whether it's npm or another language package manager, to interoperate: they'd end up growing a config option if you wanted a separate pull secret for them, right? This would start to take over some of the aspects of what those tools do. Would you really want to chase all these different tools with different flags and different capabilities? This would potentially bring some synergy between them. Hopefully, but once again, bringing everyone together is hard. Yeah, that's hard. Anyone else?

Can I pull this content with Skopeo right now? Yes. So you can publish content; whether you can actually add annotations through Skopeo, I'm not sure, so come back to me later and I'll double-check that. But I use Skopeo constantly to do the inspect. He just did a curl in the demo, but you can use skopeo inspect to inspect the different layers and different manifests.
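Two de facto standards are behind that pull-secret question and the "just curl it" habit. First, registry credentials live in the Docker-style config file that docker, podman, oras, and skopeo all know how to read; second, the registry endpoints themselves come straight from the OCI distribution ("v2") spec. A minimal sketch of both (the endpoint examples assume a registry at localhost:5000 and are shown as comments):

```shell
# The Docker-style credential file most registry clients read.
# The auth value is base64("user:password"); user/password here are
# made-up placeholders.
cat > docker-config-example.json <<EOF
{
  "auths": {
    "quay.io": { "auth": "$(printf 'myuser:mypassword' | base64)" }
  }
}
EOF

# The distribution ("v2") endpoints that skopeo inspect and crane wrap,
# assuming a registry at localhost:5000:
#   curl -s http://localhost:5000/v2/_catalog          # list repositories
#   curl -s http://localhost:5000/v2/site/tags/list    # list tags in a repo
#   curl -s -H 'Accept: application/vnd.oci.image.manifest.v1+json' \
#        http://localhost:5000/v2/site/manifests/v1    # manifest + annotations
```

In practice the real file lives at `~/.docker/config.json` (or `$XDG_RUNTIME_DIR/containers/auth.json` for podman).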
So fully supporting Cool, I think like scopio and crane and those they They kind of like varying levels of successes when you're interacting with artifacts That's why I kind of all is just default to curling everything out But everything everything in the end just uses the v2 docker api So open standards that are being used constantly Yes How is red hat so we so the question is how is ready at contributing to technology that was discovered That we talked about today. You have a lot of red headers who are working and leading this initiative So red hat, you know has a lot of involvement from a A contribute contribution standpoint right now. This is so obviously very new technology, you know, so We're excited about the future and be able to share this as we go along Thanks everyone I'm not ready to start yet another quiet Let me see if I can it's everything mirrored. No, I got to change that these slides are cool I got them from slides go slide go Yeah, I know I didn't do it Thank you. It's very futuristic So I will start here, but let me make sure my terminals up because I want to make sure that it's big enough Yeah, and my mic too. Yeah, I'm gonna turn that is there anyway Can we turn off lights so that have you guys kind of having trouble looking at the watching the terminal in this room? Yesterday it was good, but we had some lights turned off Today, it's not so good The lights are in the back those lights. Yeah, but I don't know if you can get very large I'll try to change the brightness too. Let's see if that works. Thank you You're right. You're very right about that Ben. I lost the ability to see up close with my glasses Any time I have anything I have to take my glass Can everyone hear me okay Awesome. I just called my mom and told her that this is being live streamed. So hi mom Okay Should I start now? Is it after one? Thank I um, I'm sally. I'm a software engineer. 
I work with emerging technologies in the Red Hat Office of the CTO, specifically on the platform team. What that usually comes down to is that I'm using a lot of different tools and putting things together, based on use cases that come in, where we say: this tool here might work if we combine it with that. Over the last year I've been working on VolSync, which is a data mover for Kubernetes: it moves data between persistent volumes and also between clusters. I've also worked with OpenTelemetry a lot this year; it's been a lot of fun. And, I know I'm forgetting something; oh, MicroShift, yes. MicroShift is a very minimal OpenShift: it combines the whole control plane into a single binary and includes just a few core operators, but it's not operator driven.

And then, a few months ago, my team came up with this idea: what if you don't want to run Kubernetes, or it doesn't make sense to run Kubernetes? What if you just want to run a few pods or a few services? How could we do that better? What came out of it is Fetchit. My name is down there, but there are really three of us working on this project right now, hopefully more after this talk, hopefully you'll all join me: Ryan Cook and Joseph Sawaya, who couldn't be here, so it's just me. There's our GitHub repo. Fetchit combines Git and Podman, and hopefully what you end up with is a remote, hands-off management tool for workloads.

There are a lot of GitOps tools coming out lately. Why? Because Git is awesome. Git is a single tool that everybody knows and loves, and it can handle everything: development, obviously version control, releasing things, deployment, all in a single tool that just simplifies your toolchain.
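One concrete reason Git earns that central role is content addressing: every Git object's name is a hash of its bytes, so the same content always produces the same object ID, and any tampering changes it. You can see this directly:

```shell
# Git names every object by a cryptographic hash of its content, so
# identical bytes produce the identical object ID on every machine --
# the root of Git's chain of trust and verification.
printf 'hello\n' | git hash-object --stdin
# -> ce013625030ba8dba906f756967f9e9ca394464a
```

Signed commits and tags then layer authorship guarantees on top of that content addressing.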
It actually reduces your attack surface, the area you have to worry about. Git is easy to revert and roll back, and there's a chain of trust and verification with your content: every Git object is content-addressed with a cryptographic hash, and commits can be signed. For all of those reasons, GitOps is a great approach all around, and it goes beyond pull requests.

And then Podman. I could talk all day about Podman; I use it throughout the day, every day. If you work with Docker, and I'm sure most people in here do, you definitely know what Podman is, but I'll say it anyway, because I included it, for those of you that don't. It's a daemonless container engine. Docker works with a Docker daemon, a client-server model; Podman works with a fork-exec model. What that gives Podman is the ability to really easily run as root or non-root; you don't have to do anything special, it just runs either way, and always has. And it's a single tool to develop, manage, and run any OCI images; basically, if you're used to using Docker, pretty much anything you can do with docker run, you can do with podman, one-to-one. The developers of Podman, and I see Matt out there, who is a core developer of Podman, have worked really hard to make that experience true. A few releases ago, Podman included Go bindings and a Podman socket, and I believe this was so that people who weren't running Linux could run Podman.
So you start a Podman service on a remote machine, maybe a Linux VM, you connect to that Podman service, and you run commands, and it will feel as if you're running Podman on your own system. But they also included Go bindings, and with the Go bindings and the Podman socket, you can call Podman functions from within any Go application. So Ryan said: it would be cool if we used the Podman socket and wrote a Go program to do that, and that's exactly what we did.

And I love Podman for its systemd integration. It goes really well with systemd, so much so that Podman includes a command, podman generate systemd, that will take any running container, and some of your running containers might have ten different flags, volumes, ports, and produce a unit file from it. Unit files are very unfun to write manually, so I create all of my unit files for running containers as systemd services with podman generate systemd.

And the use cases for Fetchit are in edge computing; I've mentioned this, we talked about it this morning, and Chris Wright gave an awesome explanation of it. It's for machines that lack the resources to run Kubernetes, or where it just doesn't make sense. We're imagining a fleet of machines: warehouses, factories, drones, satellites, you get the picture. They're generating and processing data, and now they can generate and process that data on the machine itself, rather than, like an old-school surveillance camera, blindly recording everything and sending all of that data back to some central location, where it would be processed and filtered. Well, now think about surveillance cameras: you can inject some machine learning, and they're very smart now.
You can detect movement, or at the most basic level different colors, and really filter how much data you're moving around, and that really saves on bandwidth; as more and more things are instrumented, that's going to be more and more important. So that's edge computing: Podman plus Git equals a good tool for edge computing.

And here is the problem we wanted to solve: managing deployments at scale is hard, and managing containerized deployments at scale is hard. Kubernetes has solved this since 2014; it has been proving itself as the ultimate container orchestrator. But you don't always have to run Kubernetes, and everybody's been so focused on Kubernetes that we took a step back and asked: what if we don't want to run Kubernetes? Can we still have a modern application?

So here's what we came up with. It's GitOps-driven, using the Podman Go bindings; it's a Go binary; we run it containerized, and I usually run it as a systemd service; and it's not Kubernetes-dependent. It's like lift and shift: you have these devices, you install the Fetchit binary on them, you pass a config file, and you're ready to go. Can you update that config file? Yes, Fetchit allows that. Can you update your images, your deployments, your configurations for your workloads? Yes; that's what Fetchit provides. It's like another management level on top of, say, Podman, systemd, and Git. Hands-off management; that's good.

So yeah, as I've said, it makes sense for small edge machines, Raspberry Pis, sensors, but here's the key: it's for any system where the resources are expected to be consumed by the workloads. You want the system running those workloads to be as minimal as possible, because you want to be able to use all of your compute resources on running the workload, making it smarter and smarter, adding some machine learning; you get the idea. So, great: how does it work?
I've already mentioned it's a simple Go binary, and you pass it a FetchIt configuration. The configuration has some top-level items. Number one is config reload. I think that's the most important, though you don't even have to have it: you can just stop FetchIt and restart it with a new config. But I like to use config reload, where you serve a config at a URL and FetchIt can fetch it and reload its targets. The prune method just leverages podman system prune: when you're working with images, you accumulate a lot of cruft, a lot of old images and configs. I'll talk about this later; cleanup in FetchIt is something we're still working on, but prune is there, and you can set all of these to run periodically. And then there are target configs. A target config is also a top-level item; it contains a list of your Git targets, and for each Git target, a list of the methods you want to run. Then there's podman auto-update, which we've included with FetchIt: FetchIt knows how to go onto your system and start the podman auto-update service. What that does is, if you have systemd services whose unit file includes the podman auto-update label, then on a timer (by default once a day at midnight, though you can change it) Podman will look, depending on whether you configured it as local or registry, either in your local storage or in the registry: did the image digest change for this tag? Was there an update?
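Putting those top-level pieces together, a FetchIt config looks roughly like this. This is a sketch from memory of the examples in the FetchIt repo; the exact field names may differ from the current schema, and the URLs, paths, and schedules are illustrative:

```yaml
# Illustrative FetchIt config (field names approximate)
configReload:
  configUrl: http://gitserver.example.com/config/fetchit-config.yaml
  schedule: "*/5 * * * *"        # cron syntax: re-fetch the config itself

prune:
  schedule: "0 0 * * *"          # periodic podman system prune

targetConfigs:
- url: http://gitserver.example.com/demo/fetchit-examples
  branch: main
  filetransfer:                  # method: copy files from git to the host
  - name: ft-demo
    targetPath: examples/filetransfer
    destinationDirectory: /tmp/ft
    schedule: "*/1 * * * *"
  systemd:                       # method: install and start a unit file
  - name: httpd-demo
    targetPath: examples/systemd
    root: false
    enable: true
    schedule: "*/1 * * * *"
```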
If yes, it will automatically pull the new image and restart the pods. Now, FetchIt takes that a step further. With podman auto-update, if you have a deployment running an image, usually you'd want to go from, say, a v1 image to a v2 image; you might not always want to use the latest tag and keep updating the digest behind it. Podman auto-update checks the same tag, and if the digest is different, it updates the image. With FetchIt, you can just update your unit file in Git to reference the v2 image instead of v1, and update your images that way. Looking closer at the target config, it's a list of these things. Git URLs can be private, public, or disconnected. I always run it with GitHub, but for this demo I took it upon myself to start a local Git server, because I just didn't trust the internet. That was really fun, because I'd honestly never done it before, and that's what we're running today. And then there's the list of methods. Here's what we've included so far. Systemd takes a unit file from Git and knows what to do with it; you can start it as root or non-root, and I have a demo of that. File transfer is the base for a few of these: you list a directory in Git from which you want to take some files and pull them onto your system, and you tell it where you want them to land on your system. Kube play: I know I said these machines don't necessarily need to run Kubernetes, but in a way they can. You can take a deployment that you would normally run in Kubernetes, which is a YAML definition, and use podman kube play. It's really great if you haven't played with it: it's a command that knows how to run kube YAML files, and FetchIt has included that as a method too. Ansible knows what to do with an Ansible playbook: you set the Ansible method and it looks for a playbook.
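The podman auto-update label mentioned here is set on the container itself, typically inside the unit file's ExecStart. A minimal sketch (the service and image names are hypothetical):

```ini
# Podman's auto-update service only acts on containers carrying this label.
# "registry" compares the remote digest for the same tag; "local" compares
# against local image storage instead.
[Service]
ExecStart=/usr/bin/podman run --rm --name myapp \
    --label io.containers.autoupdate=registry \
    quay.io/example/myapp:v1
```

The check itself is driven by the podman-auto-update.timer systemd timer, daily by default; if the digest behind the tag changed, Podman pulls the new image and restarts the service.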
It downloads it from Git, watches for updates, and will run your playbook. Image load is very interesting. We were asked: fully disconnected, how does that look when you're loading images? Well, again, Podman in its awesomeness knows how to take a tar file and load an image from it. It expands the tar file into its layers, and there you have your image, loaded into your local image storage. And the first method we started with was the raw pod spec: a YAML or JSON file that just describes a pod, and FetchIt knows how to run it on your system. How does it do all this? With each of these methods, we use those Podman Go bindings and spin up an ephemeral container on your system, and each of the different method containers knows what to do with its method. The systemd container, for example, knows "I need to mount this"; it has some set things it knows to do to enable and start your services. That's an example. I think we're up to the demos. How are we doing on time? Are we at 10 minutes yet? Not yet, but very soon. So I've explained all of these, but here's a better look at them: transferring files from Git, systemd transferring unit files, kube play (I think I called it "play kube" on the last slide). I'm not showing that one today, but I can share some of the videos we've recorded, because there are a lot of improvements going on with kube play. I know some of the Podman folks are here; as kube play gets more interesting and more awesome, we'll be updating that method too. But it does work very well, as does the raw pod spec. I'm only showing a few of these because we don't have all that much time. Here we go. Here's the moment of truth, people.
Let's see if this works. I have a handy-dandy demo script, so we're going to run FetchIt as a systemd service. I'll show you that unit file, and it will be a perfect illustration of why you sometimes don't want to just run standalone podman commands: there can be a lot of flags. So here's the FetchIt service. I'm running as a normal user; I want to do everything as non-root. If you didn't know that you can run systemd services as a regular user, there you go, you learned something. Here is the unit file, and you can see it was generated with podman. Podman will generate it, and then you go in and tweak it a little; it's not always perfect. So here are all the flags. Because we're messing around with both host files and containerized files, this --security-opt label=disable flag lets me keep SELinux enforcing on my system while still doing what FetchIt needs to do, without relabeling files inside of the container. I have auto-update set. I'm not going to demo that, but I can assure you it works and it's super cool. And here, this is interesting: I'm running as a non-root user, and I'm going to start a systemd service. For the container that spins up to succeed in starting the systemd service on the host, it needs to know what's going on: inside the container I'm root, but I need to tell it "no, you're not." This XDG_RUNTIME_DIR is /run/user/1000, because it needs to know what you're trying to do on the host. And then the config URL: you can either set it as an environment variable or set it in that top-level target config. I'm going to show you that I don't need to start with a FetchIt config at all; I can have no FetchIt config on my system as long as I have this URL set. And this is my local Git server.
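The flags being described can be sketched as a user-level unit like the following. This is my reconstruction, not the exact file from the demo; in particular, the FETCHIT_CONFIG_URL variable name, paths, and image tag are taken from memory of the FetchIt docs and may differ:

```ini
# Rough sketch of a non-root FetchIt service
# (~/.config/systemd/user/fetchit.service)
[Service]
# label=disable skips SELinux relabeling of mounted host files while
# leaving SELinux enforcing on the host
ExecStart=/usr/bin/podman run --rm --name fetchit \
    --security-opt label=disable \
    -v fetchit-volume:/opt/mount \
    -v %t/podman/podman.sock:/run/podman/podman.sock \
    -e XDG_RUNTIME_DIR=/run/user/1000 \
    -e FETCHIT_CONFIG_URL=http://gitserver.example.com/config/fetchit-config.yaml \
    quay.io/fetchit/fetchit:latest

[Install]
WantedBy=default.target
```

In a user unit, %t expands to /run/user/&lt;uid&gt;, which is where the rootless Podman socket lives.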
Well, I didn't take the time to sort out HTTPS, and I couldn't figure out how to access it with SSH, so I guess it's a little bit complicated. I'm running Gogs, and it's a little complicated to access over SSH. So here's my config. Oh, if there's one here, I'm going to remove it, just so you know that it works without it. Yeah, that's the last one; I'm glad, because I don't want to use that one. So now I have no FetchIt config, and now we're going to start the service. Hopefully it has started. I don't see any errors. Good, we can continue. We'll look at these logs more closely in a minute. First I want to show you the beauty of running your pods as systemd services, in case you're not used to doing that. It's just super convenient. You can even replace, say, a Docker Compose setup with something like this: instead of running your docker-compose file, you might run all your pods as services on your home servers. So here I'm going to stop the service I just started, and you can see there will be no FetchIt pod. It already did its thing and started my httpd service, but there's no FetchIt pod. Now, I have a volume that holds the state for FetchIt, and if I just start the container back up, you can see that FetchIt comes up and picks up right where it left off, as long as the volume is intact, which it is. And here are some logs. I want to go back and show you how it found the... maybe I missed it. I did miss it: how it couldn't find a config file, so it went and got it from the URL. Anyway. Oh no, something happened; that's not good. That's okay. Let me make sure that systemd service came up. I'm probably doing something stupid because I'm nervous. Okay, that came up. That's good.
I might have started with a different config file than I thought I did. So FetchIt went in and started this httpd service as the regular user. Let me try the logs; maybe I just didn't have that method. Yep, I don't see any error, so it must just be something silly. You can see in the FetchIt logs: okay, here's one Git target, here's another Git target; I had a config, and the file transfer... so why doesn't that file get transferred? Interesting. We'll get back to that. Right, let me see. I'm going to try to push a change first. Let me check what's running. Yes, okay, it started the systemd service, and... oh, that's because I have it in the wrong place, that's why. My script expected the directory to be ft, but it's filetransfer. So let me go back and show you that the file did get created. Now I want to make a change to this file, but first I'll show you what we have on my system. This is where I told FetchIt to put it: "hello from file transfer." Okay, I've got to remember it's in a different place. I was going to push a change, wasn't I? Yes. Just a sec, let's go back. So this is the Git repo right here, fetchit main. I'm going to go into my examples and just make a change. What's that? Oh, thank you. Good, we're doing okay on time. I'm not going to spell that right... I made a change. Okay, enough. I'm going to commit it. I have enabled gitsign for this repository.
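The gitsign setup being enabled here is per-repository Git configuration. A sketch of what I'd expect it to look like, based on the gitsign README rather than anything shown on screen:

```ini
# .git/config fragment: route git's signing through sigstore's gitsign
[commit]
    gpgsign = true       # sign every commit
[tag]
    gpgsign = true       # sign tags too
[gpg]
    format = x509        # gitsign issues x509 certificates via Fulcio
[gpg "x509"]
    program = gitsign    # use the gitsign binary instead of gpg
```

With this in place, each commit triggers an OIDC login (GitHub, Google, or Microsoft), and the short-lived signing certificate is recorded in the Rekor transparency log.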
Yes, it is a pain in the butt. I'm probably one of the first people to enable Git signing, but it allows you to sign your Git commits, and it does it automatically with this repository, because I want to show you: it goes to sigstore and uses OIDC (GitHub, Google, or Microsoft) for keyless signing of every Git commit. Now, if you use that with GitHub, I'm so far ahead of the game that they haven't even included the sigstore CA in their trust store, so it says all my commits are "unverified." But I'm going to leave it that way, because they need to change it; they need to accept the sigstore certificate authority. What did I do? Okay, so now I have to push it to my Git server. This is what I was talking about: I couldn't configure SSH, so I made a very short password. The password is "happy," because I wanted to remind myself that I need to be happy. There you go. I'll change it as soon as this is over. All right, so now we can go back to the demo and wait for a minute, because I have it updating on a very aggressive schedule. By the way, the schedule is a cron-based schedule. I have set it to look for updates every one minute, and it just did: it looked for my update, and it should already be updated. Let's see. There. So this is where I told FetchIt to place it on my host system, and it got it from Git. Now, the next thing I wanted to show was how to update the config target URL, but I'm not going to have time for that, so I'll have to skip through it. But I can explain it as I go. Yeah, see, everything can't be perfect. I was going to reload a configuration from the Git URL, and the way I would do that is the same way I just pushed to Git: I was going to go down here, because this is the repository that's serving my config, make a change to the config, and push it up. Instead... what can I do?
I know what I can do. Instead, I'm going to do this. I have no minutes left. Zero. Okay, I wanted to show you gitsign; I've got to show it. You guys can start packing up, it's okay, but I'm going to show gitsign. Okay, I'm going to do this, and then... sheesh. There it is. Okay, well, we'll just do this now. While that runs, I'm just going to show the last slide I have. It takes a minute. So far, no changes. What's next? Issues: I need all of you to try this, enhance it, tell me what it's going to be good for. We've been talking to people throughout Red Hat, but I need everybody to help me out, because I don't have time for all this. Fleets: I want to test this with a fleet of machines. I want to know where this will be most useful, and run it with Fedora IoT or Fedora CoreOS. Thank you again; it's Ryan Cook and Joseph, not just me. And let's see... it takes two minutes, so I wanted to show you the pretty gitsign verification. FetchIt has a Rekor client, and it will go to Rekor (you can set whatever Rekor URL you want), find the ledger entry that your Git commit was signed with, and report it back in the FetchIt logs. It will also show the certificate information, like the subject and issuer from the sigstore signing certificate. It's not happening now. Oh, okay. Yeah, we can do it out in the hall. Something wonky was happening; sorry about that. Thank you.

[Speaker change; microphone setup.]

I can hear myself now. That's cool. Hi, everybody. My name is Lisa Seelye.
I am a senior SRE at Red Hat. I've been at Red Hat for about four years, and for about the last year I've been working on a SIG, a special interest group, called SIG SRE. Here are my facts. You don't care about those, but if you do, there they are. We're going to talk about SIG SRE, which is the avenue we're using inside Red Hat to level up SRE practices for various teams. That means the SIG is focused on various SRE topics, and what we've been doing is collecting, documenting, and sharing SRE-related knowledge from teams inside Red Hat that are already on the path, so that we can give teams that aren't quite on the path yet a head start. And because we're Red Hat, we're sharing this stuff externally as well, because that's the open source way, right? So, as I mentioned, the goal is to level up SRE practices, and we do that by dividing all of these SRE topics into what we call work streams. There are so many that if I listed all the topics we could be covering, and that we are covering, the slide would be illegible from way in the back, or probably even from the second row. So we're just focusing on some of them here. My favorite is actually "and others"; that's the best category. But today we're just going to talk about SLOs, or service level objectives, because that's where I spend most of my time. Before we spend a whole bunch of time talking about service level objectives, let's get a quick refresher. Show of hands in the audience: who knows what an SLO is, who's heard of it? Okay, some people haven't. Perfect, you're in the right place. The pitch for SLOs kind of goes like this.
You tell the person next to you (because you're wearing an "SLOs are awesome" t-shirt, which you should be, by the way) that SLOs are data-driven performance targets for a service, and that these SLOs are used by service owners to know how well their service is running. And the person next to you goes: "but why?" Well, we can objectively measure these things we care about, and that makes them a good proxy for customer happiness, because customer happiness is really hard to measure objectively. If it's a Tuesday, the customer may be kind of not feeling it because of all the meetings they had on Monday; if it's Friday, they might be super happy, right? So we need something objective, and that's where SLOs come in. So you explain a couple of easy-to-understand examples, like one around latency: we want customers to be able to log in in under 15 milliseconds, 99% of the time, because when customers can't log in... who uses web services when you can't log in? Show of hands. Nobody, exactly. It drives you away. We want people to use our services, so we need them to be able to log in quickly. And because we want people to give us money, we send them sign-up emails, and so we have another SLO around how long a new customer has to wait before they get a welcome email: most customers, 99.9%, will get a welcome email within, say, two minutes. And the person next to you kind of has their eyes glazed over, because you just said a whole bunch of words they don't understand, but they're polite, so they go: "hmm... or else?" And you say: ha ha, yes, or else! It does sound like there should be an "or else," right? Because I just said (laser pointer: not just a cat toy) that we'll handle logins in under 15 milliseconds. What if we don't? What happens?
What we want to happen is for service owners to fix it, because we've decided, and they've decided, that these are the important metrics we're using to gauge customer happiness. If login is taking 100 milliseconds, or one second, that's a pretty good indication that our customers are not going to be happy. But since this is an elevator pitch and the doors are just about to open, you say: well, or else you stop working on new features and you focus on getting back on target. You know it's going to be more complicated than that, because it always is, and you hope they'll go google it some more. But like I said, eyes glazed over. Everyone here, though, is in the right place to learn about this stuff. So what motivated our SIG to include SLOs anyway? Who cares, it's just a buzzword, right? The Google SRE book came out six years ago (believe it, six years), and it was just an aspirational book anyway, right? No one actually does this stuff. But for us, right now, we see SLOs as the tool to help focus the attention of service owners on the things that really matter, which is customer happiness, customer satisfaction. Why? Because you want to grow, and then you want to grow even more. Let's say we have 10 customers today. Next week we want 100 customers, the week after we want a thousand, the week after that we want 100,000. How do we measure that we're doing things appropriately? How do we know that our customers are happy? You can't poll 100,000 customers; it's just not going to happen. So we need objective metrics. That's where SLOs come in. They give us a way to measure, to report, to have accountability, and it's all in service of providing a stable service for our customers so that they're happy and stay customers. Because we like money, right? Show of hands, who likes money? Yeah. I don't like money, but I have to deal with it anyway, because that's the world we live in.
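The latency pitch earlier (99% of logins under 15 milliseconds) is mechanical to check once you have the measurements. A minimal sketch in Python; the function names are made up for illustration, and the 15 ms / 99% numbers are just the example figures from the talk:

```python
# Sketch: evaluating a latency SLO against observed request latencies.

def sli_fraction_good(latencies_ms, threshold_ms):
    """The SLI: fraction of requests at or under the latency threshold."""
    good = sum(1 for ms in latencies_ms if ms <= threshold_ms)
    return good / len(latencies_ms)

def meets_slo(latencies_ms, threshold_ms=15.0, objective=0.99):
    """True if the measured SLI meets the SLO objective."""
    return sli_fraction_good(latencies_ms, threshold_ms) >= objective

# 1000 logins: 995 fast, 5 slow -> SLI is 0.995, the 0.99 objective is met
sample = [10.0] * 995 + [120.0] * 5
print(sli_fraction_good(sample, 15.0))  # 0.995
print(meets_slo(sample))                # True
```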
So it goes. But really, as we grow the number of customers, the problems we have with services that have bugs, or are slow for whatever reason, scale up as well. It's not just "we have 10 customers that we're all good friends with," like a startup; it's "we have a thousand customers hitting the same bug." Larger footprint, more expensive, more costly, more risk. So we need a way to focus, and SLOs provide that focus. We also need to improve the on-call experience for engineers. Show of hands: who's been on call before? Who hasn't been on call, but has heard on-call engineers complain about it? Yeah, lots of hands for both, actually. Okay. So we've agreed that there are important metrics our service is giving us to let us know when customers are going to be happy: response time, how long it takes to log in, things like that. Great, those are metrics we care about, so page on them. And if other metrics exist that we don't care about, we can delete them, right? Show of hands: which engineers have wanted to delete alerts? Yeah. I'm an engineer; that's why this slide exists. I feel you. So what we're doing in the SLO work stream is capturing experiences from teams that are already doing this. What's worked for your team? What hasn't worked? What pitfalls have you run into? We're writing all of this down. Why? So that we can train other teams, inside Red Hat and outside. We're doing this inside Red Hat through RHU, the internal university; if you're a Red Hatter, look it up. Coming soon, I hope. If you're outside of Red Hat, I'll get to that in a second. But not every team is using SLOs yet, so that's why we need to do the trainings, write the docs, and have meetings. Spoiler alert: there are meetings. We need to have meetings, engage people, talk to teams. Outside of Red Hat: we're an open source company. We made our bones in open source.
I'm an open source person. I've been in the open source community for, let's say, a while. So it's natural for us to publish, natural for me to publish and share the stuff I've learned, and there's kind of a secret benefit. Show of hands: who's contributed to an open source project before? Right. And who's worked on a project internal to your company that you wanted to share externally? A few hands for that one, and lots of hands for the open source one. Great. When you're writing for other people, you need to think in a different way, especially in documentation. When I say "business unit" (I'm a Red Hatter; we have business units, we have product managers), Red Hatters may know what that means, but if you're not a Red Hatter you'll be going: "business unit? What? That doesn't make sense." We need to write our docs with that in mind, because not everyone has a business unit, or a PM, or knows what a PM is. So what have we published? Let's go through some of it, and as I do, just remember that these are meant to be templates, a jumping-off point for teams to take on board, read, and tailor to their needs. We're not trying to be dogmatic here; we're trying to give a starting place. Oops, went too far. First up: personas. This is what I just alluded to. We have product managers; you may have project managers. They may not be the same, and they may not have the same responsibilities. We need a way to establish level ground so that we can even talk about the fundamentals, because without shared common ground we won't be talking about the same thing. I'll say "product managers have these responsibilities, individual-contributor developers have these responsibilities," and you'll go back to your organization and go: "wait a minute, I don't have that."
"This is totally inapplicable to us; it doesn't apply to us." Right. So we want a personas doc to allow translation between organizations. We have an SLO lifecycle document. This breaks down the creation of an SLO into three phases, if you will. There's research, where we're understanding the requirements: what do our customers need? When they use our service, how quickly do they need to interact with it? There's the implementation phase, which is where the SLO is implemented; if you have to instrument your service to emit metrics so that you can collect them, that takes place here. And there's the iteration phase, where we're living with the SLO: we're incorporating the processes we've agreed upon into our team's work cycle. So if you're doing sprint planning, or whatever it happens to be, you should be looking at an error budget. Who's heard of an error budget before? If you haven't, don't worry; we'll get to it in a second. The error budget is meant to be an additional signal. We look at our SLO: are we winning? Yes. Are we taking some hits against it? Okay, how many hits do we have left? Okay, maybe it's time to start working on some reliability. It's another signal for the PMs (it's up to you to figure out which kind of PM I mean), who should be looking at it and going: great, cool, a new signal we can use to figure out how to prioritize work. We also have an SLO RACI chart. Who's heard of a RACI chart before? More people than I thought; I had never heard of it before. Super cool. It's a responsibility assignment matrix, and the next slide has an example of it.
So if you're going "what?", the next slide hopefully makes it a little clearer. It maps the personas we talked about a second ago to what their responsibilities will look like in an SLO world. It's also a nod to the fact that across organizations, personas, or job titles if you will, have different responsibilities. Again, we want to make sure this is portable: when we're talking about a product manager, this is what we think they should be doing, and if it works differently inside your organization, that's great. You can take this chart and tailor it to your needs. If you've never heard of a RACI chart before: R is for responsible, A is for accountable, C is for consulted, I is for informed. Responsible is kind of like a worker bee. Accountable is like a queen bee. Consulted is "hey, let's have a conversation; what do you think?" And informed is one-way: "here's what we think." And here's an example of what that looks like. Spoiler: this is from the actual thing in GitHub, which is linked down here. People in the back can't read it? No big deal, the slides will be online. Actually, they're online now. I'll just stay here for a second so we can take this in... well, I'm wireless, I can just walk, right? We have the service owner, who is going to have these new responsibilities for these steps, or components, of the SLO lifecycle, and so on and so forth. I'll do questions at the end, if that's okay. And I do apologize if it's not super legible in the back, but it's all up in GitHub, and I'll have links at the end. On the bottom (you probably can't read it, but don't worry about it) we also have some guides on picking good SLIs and SLOs.
An SLI, if you don't know what that is, is a service level indicator: a fancy name for a system metric. Think about it that way. We have a document up there on how to construct an error budget policy. I alluded to this earlier. An error budget: if your target for an SLO is 99 percent, that means you have 1 percent of the time that you can be outside of your target. Kind of cool. That allows teams to be... again, I'm a pager-carrying SRE, so: it allows developers to be reckless with their code and cause engineers to get paged. That's right, I said it. But that's good, right? We need to iterate quickly to be on top of the market, and we want to give engineers the space to try stuff. That's good if you're an engineer; if you're the one carrying a pager, probably less good. But then there are the brakes, right? And we have an SLO bootstrap guide; this was just merged last week. This is kind of like the roadmap. There are a lot of documents here: personas, lifecycle. So where do you even start? Start with this, the bootstrap guide. This document is meant to be as short as possible. I had people time it; it comes to about three minutes and forty seconds of reading. Who remembers CFEngine, any fans? Who remembers that the CFEngine quick start guide was 300 pages? No joke, 300 pages. No one's going to read that. That's why I wanted this to be as short as possible: less than two pages. This is like the nice area rug that just ties the whole room together: hey, start with these two definitions; maybe you want to start with this document over here, which leads into this one and this one and this one. And we couldn't be a SIG without meetings, right? We've got meetings.
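As an aside, the error budget described above is simple arithmetic: the budget is the complement of the target, spread over the measurement window. A sketch in Python; the function name and the 30-day window are illustrative:

```python
# Sketch: turning an SLO target into an error budget.
# A 99% target leaves 1% of the window as budget.

def error_budget_minutes(slo_target, window_days=30):
    """Minutes per window that may be spent out of target."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo_target) * total_minutes

print(round(error_budget_minutes(0.99)))      # 432 minutes over 30 days
print(round(error_budget_minutes(0.999), 1))  # 43.2 minutes over 30 days
```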
We've got tons of meetings. Actually, we don't have that many. For the SLO work stream we have just two meetings: every other week, alternating time zones, to be mindful that Red Hat is a global organization with people in all time zones. Everything's recorded, and everything has notes. This is the best part, because we get people to do our work for us: we have people who are living the SLO life, if you will, and we ask them, "hey, could you write a document that explains how this stuff is working for your team?" We'll have everyone review it, not just the SLO SIG, and we'll talk about it here in the meeting. It's great. We have people review the documents we've written, and it's a great place to make contacts with teams that are just getting on the path, because they may have questions like: well, how do I pick an SLO for my service? Where do we even start? How do I work service dependencies into it? To that end, I've worked with a few teams, and I've learned quite a bit. The first lesson is around language, and it is: you want to define everything. This is one of the very early lessons, because if you don't, you could be talking past each other. Avoid terms of art (SLO, responsibility matrix, ownership), or be very clear about what these things mean.
And this doesn't just apply to SLOs; it applies to everything that's new to a team or a person: agile, SLOs, being on call, owning a thing. Assume nothing. More takeaways, around processes, especially with SLOs. When was the last time you worked on a greenfield project? Within the last three months? Okay, you are very lucky; I envy you. Six months? A year? Five years? Who works with mature services, show of hands? Okay. That probably means you have a lot of things about your service that you care about, dozens of things probably going through your mind: this API endpoint, that API endpoint, this process, how this feels to the customer, the way the UI loads. And it's natural to want to measure all of the things, all at once, to jump in head first. Don't: that's the shallow end, and you'll hurt yourself. Go small. No big bangs; there's no need. And if you're not sure, focus on what delights the customer. If you're still not sure, do a quick survey. Poll 100 people. I know I said you can't poll people, but sometimes you have to if you're not sure. And if you are still on the fence ("hey, we don't have time to change our processes, we've got huge deadlines, and we're just not sure"), are aspirational SLOs okay? I think so. How do we know at Red Hat? We've asked teams. We've had teams whose SLOs were just on paper: "hey, we think it should be this; we're going to review it, but we're not going to do anything about it." SELinux: who's an SELinux fan? Oh, come on, you're all lying. Who keeps SELinux in permissive mode? Okay, yeah, that's honesty. That's exactly what this is: permissive mode for SLOs. When you're ready, hey, switch to enforcing mode. I apologize for the SELinux analogy; it was just the first thing that came to mind. And when you're ready, that's cool.
You can add more things, or you can start saying: okay, we've had a lot of misses with our SLO, now it's time to slow down. Okay. And the other thing is, internal services are really a special case, because that means that you and your teammates are co-workers and consumers of each other's service. And since you're co-workers, you can have a more heart-to-heart conversation: hey, we need this out of your service, can you give that to us? What would that take? And that's just great. And right on time. I'm just going to leave this up here. I know we had a question; if you wanted to come down and ask a question from the mic, that'd be great. If not, okay, just speak up loud and I can repeat it. Yes. Yeah, responsible to get the work done, maybe like an individual contributor, right? Cool. So these slides are all up on my website, under dev slash conferences. All of these docs are in GitHub, at Operate First: github.com slash operate dash first slash sre. You can find out more about Operate First online at operate dash first dot cloud. And if you're curious about SRE at Red Hat, visit redhat dot com slash sre. If you're curious about SRE stuff in general, there's a birds-of-a-feather tomorrow. It's here tomorrow. Yeah, Thursday. Is it today? Oh, it's today. Okay, great, it's on the schedule, I know it's on the schedule. So with that, that's how we do open source SRE. If you're very curious about this, feel free to work on the repo; we're publishing quite a lot. And thank you, everyone. Yes, I can see. That's the right one. So, yeah, so you should have... We can't hear. Um, is there a separate...? Oh. Yeah, that's not... it's not happy. Okay. Uh, say something now. Oh, wonderful. Yeah, sure. Uh, how do you make it simple to make that? Which one? Okay, I think that's enough. So... which has to provide... So, with the handball platform, which I specifically would say, don't write too much information.
So basically, we can just evaluate that one. We do that until it comes in handy in a timely way. So you don't want to, although you can, focus on only that; you have to make that decision deliberately. If you are just focused on using the handball platform, you can discover this impact from any point in time. This is very important to find out, in terms of everything other than management. This slide is also very important, and so is the last slide. So my question is also about this process, which is actually very important right now. We are also working in the company on this. Let's go to the bigger question, which we have been talking about. I can tell you that this is a consolidation process and a consolidation period. So this is a very important application, and this communication is very important. For example, we have a question, which is that we do this work for you: we have a regular experiment for you. So this is a very important project, and you have a separate one for you. This is the most important part. And then there's the experience: how do you do that? What do you need to do to be able to do that? So, as a person, if you do not, you're going to start it. There is only, you know, a form of application; for example, you can have this, if you have a tool to do that. Most of the people who come to give you a form of application are looking at the different applications, at different deals. Let's say that, okay, this is how to do that. You can just do that, but you need to do it as part of the process, and I think the problem is, you know, how do you do the form of application? I'm just kidding; the form of application means, like, if you have a different industry or your own organization, whether you're able to actually do that, that's okay.
You have to do that, because there is a part of it, like, you know, I'm sorry, you can do that too, based on what you're trying to do. Also, you have completely all of the products. So, if you want to do the application, you can do that, so we all have a connection; it takes a lot of time. By doing that, it is possible. You can do both, and just add that in the form of the field. Like, now, you have to make sure that you also return from the field; you just want to start making the process of application, but the thing is, everything needs to be done. I think that's a good thing to do. So, after a while, I'll ask you; I think that's also a good thing to do. Also, to some extent, we are focusing on a new journey to visit. What does that mean? We're seeing, for quite some reasons, that the business management has provided, that's to be seen, to have a time to see the framework. I do provide on-demand registration in the government, and applications to be seen as current and shared content. So, let me give you this experience, until the end of the day, as a new thing, and then we'll come back to this in a few minutes. At the level of 8% of viewers should be paid now. So, this is made into one, and it will be used for a long time. I'm one of the main, all of the most paid ones. These are the things that we do over the long term. We just said, maybe one day we'll come back, and we'll have a long vacation, but it really comes back at that time. We do it for a long time, where we can use a whole group of companies that do the application, and the time goes, like, flat; it's like all of the problems, all of the factors, and that's how it is. You can do it in the background. And, you know, you can do it.
All of the... you can pay with a premium, and you can do it in a way that's okay, as far as you know, because I've been using it for the whole time. They have a lot of great users that come here. If I have to listen to them, because you're doing it, I have to do it; I'll do it for you now. So, please get in the format, because we have a share, close to the platform. So then, the problem is that it just goes to you; then how can it impact the platform? We'll go to a control, and then how it will actually control to enhance the platform, and then activate the user again. So then, I'm going to write it off, and here it's called help state. It will do that, and then it will be able to control the platform. Here, that's how it presents the implementation of the policy. It's inactive, so it's going to give an acquisition to the application, to policy management. By policy, I mean: is there an impact on the application, on the page application? It has a specific table to handle so much application for you. It also acts as a support from the platform network. So I think that there is a time to run like a user, to build their application, for example, in the framework that we are going to use for the platform to build your app. We have a general component, so if you want to create your platform, you can actually build your app. If you want to build your app, you can start with an app from a platform, so it might work for you. It's a total power supply, and it can be created on a platform network, and it will work like a user solution with all the components of the platform: the application, the app, the server. So it is also a platform for your development. You can actually build your app, which is out of the order of the application.
I have several videos on how to build your app. It can be used as a support on a user application, as well as the application for the API, with which you can actually build your app. This is a typical way of working. This type of process can be developed with development, and it can be developed with your app data. You have to develop your app in such a way that it can be used in such a big company. So then, where you have to actually build your app on a platform, this is the one way to do that, the one after this one. So I come up with some kind of way of how I live my life. I talk to all the people who are watching, and to all the people who are able to really speak, actually, with this type of app. So, as you can see, we use my application to build the same process there. I will talk about that a little bit more. As for most of the flexibility of how to do it, we don't have any option to do it. So this must not do it, and this is the most important one. Actually, you have to be able to do it. I feel like I'm trying to do it like that, and that's the main thing I'm going to talk about: how to go ahead and do that. This app doesn't actually have authentication, and we just do it like this. As I told you before, there's notification: do it like that, and then like that. So we don't have to do it like this; we have to do it like that. This can help you build your app much more efficiently and effectively. It's not important to integrate your app here; we can do a great job with that. Hope you have a good one. Thank you so much. Another thing that we have to do is follow up. So I'm going to go ahead and make sure that I show you. For now, we can make sure that we get the A8 to follow up the question. So we have the A8.
A formal modification, which has the notification with the protocol, kind of a thing, for the feedback that we're going to show you in the big black sheet of the list now. So we're going to make sure that the protocol is in activation. Now this is built by Google; you can see how built it is. I'm just going to leave it on top of that. So to feedback with that, I just get the A8. It is a much better option for the application. If you want to take an application to the site, for example, you'll be able to take a look at how many slides, how long, how many paper tools there are; in fact, more than that. And everybody has their A8, so we have to make sure that we actually have the A8 as well. It's also hard to make a survey or so forth; I don't think it's very hard. But also, this is how it works. It's a big thing; it's a really strong application. We get to this A5.1 application, but the value of this is that it's pretty much straightforward. So I hope that you'll be able to ask if there's a complex platform for managing the staff; I think that we're going to be generating, I know. So this is a problem. You'll be able to set your hosting configuration, and as well, you can research what you have applied to the code, the roles that you do, what you do and I. And you can manage your application: how it is, how many slides. So when we talk about both settings, for example, the categories: if you look on the other side, this will give you a different idea of what can be done in one of the three categories. This kind of application and this kind of configuration is something that we perform; it's a simple thing. If you want to use your own platform to manage these leads and the various other applications, you can use the kind of thing we do with the process user. You can take care of the access to a specific app or something.
And I'm hoping in this case that you'll be able to manage an app with a very different name, which can be taken in one. So this is the sort of thing that I'm going to give you; for this project, we are happy, and we're going to develop it in a second. We need to take care of the object of the application process. Also, as I said, it's going to be a very simple one, so we won't talk about it, but we are going to take some very important questions. For more, you can download the course link. You can use the whole thing as part of the program, and the course code is under the name of the whole thing. I don't know. So you might be thinking, how can we do this kind of configuration management? There is something called this part, so you and I. This project is usually created to do the same kind of application. So I'll take you to the one that I'm using. This is what it's called. And we're going to develop a new React experience. Any type of experience that you might be going to is really important. So I want all of this technology back; I'm able to do the same kind of application, and I really like to do it. I don't see the big platform, and that might be the one that's going to be the same kind of application. What I want to say to you is that I also want to do this kind of application and help, time and time. So let me talk about different standards that we used in the application. The thing is that there are actually four different applications. I asked about the form, and actually this is another of the first processes I had. I was trying to figure out which is all of them, and I just wanted to say that this is really the kind of safety and really good protection: the quality of the application.
For more details of the package, you have to get to know the package and the course. One of the important things to do over the years is to get to know this company and how they use the package, all of that. I don't want to look at it right now; right now I'm collecting the best approach to the kind of protection of the application. I don't want to be covered with the content part, so I'm just going to collect the information that's part of the collaboration that I have now. That's one of the things that we want to do in the future. For the development, you can get an application for the application; you can set up this type of application. Everything is different from what you do in the world. I'm sorry. But first of all, you can take the application, you can leave the application, and you can use the application. I don't know if it's because of the application, and it will be developing all the data. So, I'm telling you that if you look at the program, there are some different components which fly away within the framework. So there is very much to the development experience: you already see the whole context, and a hundred percent feedback from that. I think that you have to understand the views. For the platform, this is how we do it, and we can reflect both the dynamic view of it. If you want to have a sense that we have the functions in the platform, I am sure. You need to be able to authorize your own property, how you use it. We're not going to go through that comprehensively or completely. I hope that makes sense, or that it's obvious enough; basically, I am finishing up. This is also going to have an enterprise package, so we can have views of this project, better for the free-based application. It's obviously not clear. So we're going to do that one at a time.
So what is the aspect of the whole project? What are the things you decide to do and deal with? I think we can take it as soon as we can. We can see a narrative, we can see a contradiction, we can see an aspect; the quality of it is how we do it. So it really is a bit of a problem that it comes down to the fact that we need to consume the quality that we thought of. That means that a company needs to be able to accomplish the transformation of it, and it will be published in the same way that it is a product of documentation; it's also going to be mentioned in the workbook. So I'm going to talk about this point before we begin. Basically, this is a product that we consider a product of management and development, but it is a type of product that I like to use all the way after that. So it will be able to use that application in 100% evolution, but I'm going to get it in the same way as I'm going to do it. That is a kind of series of things that are a little bit better for our application. We also like to use it in a functional application. Then we also offer my own idea of what it is about. I don't know if there are components that we see in this public feedback, or as I told it last night. There are many features in that which we are not going to use in the future. For the feedback: how do you come up with the data to complement it, and for the public to take the fact, which is actually a product of management? Like, they have a mission, they understand the feedback. So we are going to play a little bit more, and then we are going to share that feedback. And it is the classic, almost the same as the rest of the data. So this is that feedback on them, or the opportunity to create that. Another important component is the public feedback. It's played in the program of updates, to be able to navigate to the full product.
So I'm going to put it in the video for the public to come to this. It is a gradient of data that is being used, in fact, to do this, and it's like the way a comment is being forwarded and handled. I'm going to put it in the box to go and comment and handle it. You can have it in the video and use the comment over the same topic that's happening right here. It will become really good for the public to comment. Again, if you click it, you can post a comment, and it will do the data from the form that I get. So it is a feature of the document which manages to talk to the comment of the user, which we are going to talk about. I know I'm not going to take it from the application. Instead of the application program, you can go to the page to have more information, for us to be able to support you. If you can see that, okay, then I have it in the video. You can see it in the application, like the handbook from the application, in my browser. So instead of the application program, we will be able to see how the application is going to play in the form of the application program. That's what I'm going to do, so I can handle it. We will be able to come to the page, and we will do it after the last time. I don't know, we just do it with the platform. So how we process the application and how we manage the community is under the command of this grant option. We have good training for the community, and we are looking for opportunities for the conference session. So I'd like to talk to you about that kind of thing. I'm from Ohio State; I live in Ohio State. I'm an expert on the application program. Other than that, we are able to come to the page from the application program. So there are some more things for the app micro-commitment. Here I put all of this in action nine, a couple minutes before.
So I can come to the application, and I can manage it if you ask within the course of time. We just have a course that I'm going to do; we will be able to manage all of these. So these are app micro-commitments. We will be able to use the technology managing the app, because I'm probably using it, and we will be working on that to use the app too. The app tells you this is a component for me to use that data, and I think it will be able to use all of it for the program and for the conference. So you can actually use the function, and this is the computational management: we are able to manage the app. I'm not going to tell you this. I don't know that you are going to say this is for non-digitization; if you don't have it, non-digitization is all over the app. So this will use the app to configure the main non-digitization of the app. As I was saying, we will be able to do that. Now, for non-digitization, we will use the post-non-digitization of the app. I'd like to thank you for accepting the non-digitization of the app. Uh, let's add the current updating of the application ecosystem. This will be an application ecosystem with the core of it, and we can develop the application ecosystem for users and people who use it. Even for the people who use the app, you can use the app, for example, to send an app to the application ecosystem. Users and people who use the application ecosystem really can develop an application ecosystem. I'd like to add an organization for the user. So I'd like to thank you for your help in getting to do this, and your application ecosystem.
But I don't think it's for the... So basically, you will need to generate a political equation that can be used by other creative organizations; then it comes to the use of advanced algorithms. This interface might be used to determine how an algorithmic user can have an impact in the region. We also have a doctorate here, and a lot of data. We also have a lot of users; for example, if you're in a group or a subgroup, and if you're not authorized to generate an algorithm, you can react with the same program as now. So this can be used by other creative organizations, and users can write. We can set a permission to use an algorithm, so this can be used as an algorithmic solution. You can also use it as a user-made algorithm. This is also an algorithm, one kind of algorithm, as we want to know. So they're able to manage the data. That's where the user or the animator will come into the picture of the project. Every day, there's a lot to do. There's no optimization at this time. So, what we have is two jobs; we only need three. It will add in the authentication part for the user. Another good thing is, we can find out how to do it: we can use the algorithm, or we can also add feedback. For the person, they're able to take over the news, or the developers are able to do it; they know how to use this algorithm, and they can use it as an algorithm. So this is how it works, and this will be our coordinate. That's where we have feedback: we have feedback from the user, and we can add feedback for the user. It's very, very simple. We'll give it feedback, and then we can add a few more days. So we also have feedback.
So, we're able to do it. We can add an altimeter. We have two different types of feedback: we can do feedback, and we can create it from the user, because we are going to have two different types of feedback. It will add a lot, which is between 90% and 70% to some extent, and we have to do that. So we had to do the unbiased compliance. Also, there's a resident of the case who is a pre-accompaniment, so it comes on record, so that you can do the process on the spot; you can commentate it if you need it, if you need to start off there. There's a lot of what she said; we have to do that first, because she likes to do it. And there are other things; I want to back up, I'm seeing more publishers. So, thank you for the assistance and the work that you did. The most important part of the work that you do is that it needs to be done. It needs to be done in the end for, like, how much money to take in the data for the specific steps. And if we want to use the testing, that's what they can do; we can do it in the end for us. So, if we want to do it on the spot, we can do it on the spot, because we need to get that done. Similarly, we have a particular example of SONAR, with the AVM for SONAR. SONAR will be the model that will make one question, and many more; that's all the information that they want to do with SONAR. Also, we have other things that SONAR can do. For example, if you are taking, or if you are a part of the AVM, you need to do it in the end. We have a motion to do it in SONAR, or if you get a motion that says that's it, like, you've got to do it in the end, and you get to do it in the end.
So, if we use the AVM in SONAR, we can do it for SONAR, and if we make that a function that we should do it for: SONAR, which is around the world, which is in the time of SONAR, which is the event to maintain, you know, more accurately, as of now. So, we can work more closely, but that's my question. Then, we can work with the four of us, one part at a time. So, when it comes to the right part, that's really not to the left. I know that we appreciate it from the beginning, and I feel like, from the beginning of the project we do, four and five are the problem; four and eight, and we push it. If it comes to a sport, they find, basically, after the development of the youth, of course, every family: this is a very, very important party. So, if you are a graduate, that's practical; everything is related to you, and it will be better for you to work as a sportsman. When it comes to the right part, we have to take care of it; that is how we do it. Basically, we have to work with these partners in relation to life in our collaboration, on our platform. So, when we come to the right part, we have to stay at the right part of the project, in order to have academic work, to respect the health, to ensure that the person is working. That is the same problem as life in the future. So, it will be a problem. You know, if you want to do some tests: for example, if you are going to do a test, you will be able to find a test that you need to be able to do. That's what it is. Hey, a test with an 8K catalog for all your users; there are 8K catalogs for you, and you can get good data. So, that's the kind of problem that we're going to be dealing with.
8K is our secondary catalog for the submission model. The 8K catalog is a different part of the platform, and it's a different experience, and you will be able to do it. It's a whole square, and it will be one of the 8K catalogs for you. It has a lot of variations, a lot of fillets, a lot of 8K, a lot of sliders, and a few more. Another thing is how people can access and where they can see better information, and do that kind of stuff; it will be a lot easier to do that kind of thing. So, this is the application for our platform. It's our platform; we have a big set of Alex teams, so all of you just do the connection to it. This also is something to keep in mind; it's a little bit different. So, thanks to you, you are welcome. He's trying to create a similar experience with my business. The conference which I mentioned is actually over here; they make that kind of thing. This is the top one. This is for the machine, like that. This is the machine that we have, and then you can use the monitor on all of the resources. I have another thing that I'm going to talk to you about right now. I'm going to use it as a kind of analysis of the people I'm going to talk to you about. So, the platform is going to be called POSCHIN. POSCHIN means the ability to POSCHIN. In this POSCHIN program, you're going to see that they call them POSCHIN hours, as we really call it. So, basically, this is the kind of motivation from the POSCHIN program you're going to see, and it's going to be for some of the POSCHIN programs. You can customize it, but it is completely over right now. So, if everyone is trying to use POSCHIN, I'm going to go ahead and put POSCHIN to CNI.
So, this is how it goes. This is how they're going to be able to ADMJ and POSCHIN hours at a time; they'll be able to see how to POSCHIN, post an appointment. So, this is POSCHIN, and I'm going to use POSCHIN hours. This is our ability, our specific application, that was POSCHIN. POSCHIN is going to be an opportunity to communicate with people at a high level. And there is the light switch that we're going to use. We've got a lot of background, and we're going to use POSCHIN. It tells us we'll be able to see a POSCHIN address right now, and that helps us to be more of a POSCHIN manager. So, I'm going to be a POSCHIN manager. POSCHIN, by habit, is a thing that lets us communicate with people at a high level; if you do that, you'll be able to communicate with people at a high level. How well does it work? We've been able to use POSCHIN over the last eight months, so we can do higher-quality fighting. That's actually not a good point to go into, but we can do that with POSCHIN. So, this is our next one: we're going to be using POSCHIN. It's actually a whole form of POSCHIN application. POSCHIN is the less-imposed kind of problem. One lack of power is that we're going to use POSCHIN, and it will be in your mind that we can use POSCHIN without further ado. Other things from the last six months: we're going to use POSCHIN in answer to your question, and you'll be able to use that specific message.
So, that's also a thing that we're going to show you how to do in the hand-to-hand function. He said, can you configure the thing over there? Well, this kind of process is just not going on here. Another thing is not like a whole bunch of data: the labor tool, the data is out of speed, and it is able to get back up to speed. So, you can observe, you can see, it's working. I don't think it's able to get back up to speed. The labor tool has been working at speed a number of times; if you have feedback a number of times, that is how we are going to get back up to speed. If there are any people who are going to do it, they'll do it at speed. So if you don't believe that, like I told you, music from there, which is already a piece, and I don't know if that seems to be the format of it; I don't want to try to follow it. And that's actually a good one to try to find out. In the text, they really do want to learn the song, and they can use it on their own and see it. So this is how I want to assess it. I'm not sure if it's the format of the melody paper; it's just how I like the feedback. So this is the format of the melody paper; I don't know if it's the format of it. I don't think that all of the answers are associated with a lot of that. So, these things can help, and in my eyes that's what's in the end. Also, it's important to identify your problems too. Also, the decades that are coming in. So, this is a part of the law of inflation that we like to use, to that amount of time, to do all of the things that I can try to process. Another thing is, you can see what's going on: whatever data you go through, you can get data according to it.
[Transcription unintelligible.] So, I think that's all I have to say. Thank you. Yeah, we have no questions. We're good? No questions. Okay. Thank you very much. Testing. Hi, everyone. My name is Anthony Byrne. I'm a graduate student with the Department of Electrical and Computer Engineering here at Boston University. I'm also a site reliability engineer with OpenShift Dedicated at Red Hat. Today, I'd like to talk to you about a project called RTQA. This project is a product of the Boston University Red Hat Collaboratory. So, let's get started. Every piece of software starts with some sort of foundation. And of course, we want that foundation to be strong. Nowadays, you have to be able to iterate on these initial pieces of code that we use as the basis for what will hopefully be a future, very robust software product. You need to be able to experiment and iterate quickly and rapidly. And oftentimes developers, data scientists, researchers, and so on will do that rapid experimentation with simple scripts, Jupyter notebooks, or similar tools. Now, these tools are great for writing these initial proofs of concept, or POCs. And ideally, we have a proper process of translating those proofs of concept into the foundations of our eventual production-ready product. 
But unfortunately, that's often not the case. Business pressures will force companies or developers to essentially just start slapping additional features on top of this proof-of-concept code without doing very much review. I'm talking about when you see a Jupyter notebook file that's in some machine learning processing pipeline, and it's been like that for years, and it works, and nobody touches it, so hopefully it's not a problem. Of course, as we add these additional features, any bugs in that initial proof of concept, any cracks in our foundation, can start to cause real problems, until eventually you might have an entire application go down based on some key logic at the base of it. So what I want to talk about today is a project motivated by the idea that developers and data scientists need tools to help them ensure their innovative early code can be safely built upon. That project is known as RTQA, or real-time quality assurance. Real-time quality assurance is a JupyterLab plugin that provides feedback to developers and data scientists during those initial phases of development and experimentation. If you haven't seen JupyterLab or Jupyter Notebook before, here's a screenshot of it with a mock-up of our UI wireframe overlaid. This real-time feedback could warn of everything from outdated Python modules that might have known security vulnerabilities, to performance bottlenecks, to unsafe SQL queries, and so on. And all of these warnings reach the developer well before the code ever reaches any formal quality assurance or quality engineering phase of development. Today we'll discuss the progress on this project so far and what our future plans are. So, JupyterLab, like I said, is a popular multi-language IDE. We usually see it used with Python, but there are also people using it with R, Julia, and a few other languages. And its architecture uses a pretty typical client-server model. 
You have your user navigating to the web UI in their browser, where they can enter their code, click Run, and see the results in typical REPL fashion. Then you have your JupyterLab server, which mainly facilitates communication between the web UI and the back-end language kernel, the most popular one being IPython, essentially a wrapper around the Python interpreter. Now, this architecture is pretty ideal for maximizing the availability of extension plugins or frameworks that you want to get out to as many developers within your organization as possible. If they're all using the same JupyterLab server, you just add a plugin to that, and boom, all of your users are able to use it. The way RTQA does this is by taking advantage of a feature within IPython known as magics. A magic is essentially just a way for users to indicate to the interpreter: hey, I want you to handle this code a little bit differently. So we have our RTQA magic framework that connects to the kernel, and it provides a common platform for what we're calling analytics engines. The analytics engines we're thinking about right now include vulnerable dependency scanners, network anomaly detectors, and performance bottleneck analyzers. There's room for a lot of different areas of analysis, though, and that's why we're open-sourcing this and trying to make it as easily extensible as possible, and we hope to see, you know, your future contributions as well. So we started off with one proof-of-concept analytics engine of our own, using a system known as Praxi. Praxi is a fully automated, machine-learning-based method for discovering cloud software while it's being installed. So what does that mean in practice? You can take a collection of file system audit logs; in other words, just a list of file names that were touched during a certain period of time, whether that's a modification, creation, deletion, and so on. 
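The magic mechanism described above can be sketched in plain Python. This is a simplified, hypothetical model of how a magic-style hook might wrap a cell with pre- and post-execution steps; the function names here are illustrative and are not RTQA's or IPython's real API:

```python
# Minimal sketch of how a cell "magic" can wrap user code with
# pre- and post-execution hooks, similar in spirit to IPython's
# %%-style cell magics. Names are illustrative, not RTQA's real API.

def rtqa_magic(cell_source, pre_hook, post_hook):
    """Run pre_hook, execute the cell source, then run post_hook."""
    pre_hook()                      # e.g. start recording file changes
    namespace = {}
    exec(cell_source, namespace)    # run the user's code as-is
    post_hook()                     # e.g. stop recording, run analysis
    return namespace

events = []
ns = rtqa_magic(
    "x = 2 + 3",
    pre_hook=lambda: events.append("recording started"),
    post_hook=lambda: events.append("recording stopped"),
)
print(ns["x"])    # the cell's result is still available as usual
print(events)
```

The key property, as in the talk, is that the user's code runs unchanged; the framework only brackets it with its own steps.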
And we can use machine learning to map those change sets, as we call them, the sets of file changes, to a labeled event. So we can say: hey, based on this change set that you're showing us here, we think you're installing Apache 2, version 2.4.42. The way Praxi works, in a little more detail, is like this. We start off with our change set and feed it through an analyzer known as Columbus. This came out of IBM Research. Columbus uses statistical methods to pull out significant terms. These are just words, usually coming out of the file paths, that have some meaning associated with the event taking place within the change set. So in this case, since we're installing Apache 2, we might see terms like apache, httpd, www, and so on. We call that our tag set. We then feed that tag set through our machine learning system, known as Vowpal Wabbit. This was originally developed at Yahoo Research, and has since been taken over by Microsoft Research. That produces a trained bag-of-words classifier. What can we do with that classifier? Well, we can go through this whole process all over again. But this time, we don't have any labels on the change set. We don't know what was actually happening within that period of time, which is the situation we'll be in when we're using this in practice, right? So we feed that unlabeled change set through Columbus again, get an unlabeled tag set, feed it through the classifier, and finally, you get a prediction. And what we found is that Praxi is able to make these predictions with over 96% accuracy, and it's able to do it about 14 times faster than previous methods. Now, when we use Praxi with RTQA, we're specifically trying to detect unsafe or vulnerable pre-built components. For example, outdated Python modules; dangerous tools, for example, using Python's Popen call to invoke your system's package manager; or unsafe calls to foreign code. 
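The tag-set-and-classifier pipeline described above can be illustrated with a toy sketch. The real system uses Columbus and a Vowpal Wabbit bag-of-words classifier; here a simple term counter and an overlap-based matcher stand in for both, and the knowledge-base entries are made up:

```python
from collections import Counter

# Toy sketch of the Praxi-style pipeline: extract recurring terms
# ("tags") from a change set of file paths, then match the resulting
# bag of words against labeled tag sets. The real system uses Columbus
# and a Vowpal Wabbit classifier; this only illustrates the idea.

def extract_tags(change_set, min_count=2):
    """Split paths into terms and keep the ones that recur."""
    terms = Counter()
    for path in change_set:
        for part in path.lower().replace(".", "/").split("/"):
            if part:
                terms[part] += 1
    return {t for t, n in terms.items() if n >= min_count}

def predict(tag_set, knowledge_base):
    """Pick the label whose known tags overlap the tag set the most."""
    return max(knowledge_base,
               key=lambda lbl: len(knowledge_base[lbl] & tag_set))

kb = {  # made-up labeled tag sets
    "apache2-2.4.42": {"apache2", "httpd", "www"},
    "tensorflow-2.7.1": {"tensorflow", "keras", "site-packages"},
}
changes = ["/etc/apache2/apache2.conf", "/usr/sbin/apache2",
           "/var/www/html/index.html", "/var/www/cgi-bin"]
tags = extract_tags(changes)
print(predict(tags, kb))   # → apache2-2.4.42
```

The point is the shape of the pipeline: paths in, recurring terms out, then a match against labeled tag sets.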
A lot of people might not know that very popular Python modules, take NumPy, for example, aren't mostly actually written in Python. They're written in C, or other languages, mostly for performance. But that, of course, can open you up to all the vulnerabilities and bugs that can come with a lower-level language like C compared to Python. Praxi is not new. Praxi has been around for a few years; we developed it, I think, about four or five years ago now. It's already been implemented and proven. What is new is our task of porting Praxi into an RTQA analytics engine. So how would this work in practice? From the user's perspective, they would start just like they normally would. They would enter their code in the code cell and then click Run. The only difference on their end is that they would enter a special character sequence at the beginning of their code to indicate to Jupyter Notebook: I want to use your magic feature to run RTQA. Jupyter Notebook sees that character string, delays execution, and notifies the RTQA framework that it has been summoned. On RTQA's side of things, it then tells Praxi to start recording change sets. In other words: start looking at the file system and make note of any files that change within this period of time. And then we allow the user's code to execute just like it normally would. And finally, we display the code results, again just like it normally would. But in the background, we send off this change set that we just collected over to Praxi and run it through the process I outlined in the previous few slides. We get an inference for a change set label, and if that change set matches anything we know about in our database, then we display a result. So here's what this would look like. 
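The recording step described above, watching which files change while the user's code runs, can be approximated with a before-and-after snapshot. Praxi itself consumes file system audit logs; this diff-based sketch only illustrates what a change set contains:

```python
import os
import tempfile

# Toy stand-in for change-set recording: snapshot a directory before
# and after some action and diff the file lists. Praxi consumes
# file-system audit logs; a snapshot diff just shows what a
# "change set" is: the set of files touched during a window of time.

def snapshot(root):
    return {os.path.join(d, f)
            for d, _, files in os.walk(root) for f in files}

workdir = tempfile.mkdtemp()
before = snapshot(workdir)

# ... the user's code would run here; simulate it by creating a file ...
with open(os.path.join(workdir, "installed_module.py"), "w") as fh:
    fh.write("print('hello')\n")

after = snapshot(workdir)
change_set = sorted(after - before)
print(f"captured {len(change_set)} file creation(s)")
print(change_set)
```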
But before I get to that, I just want to highlight that the only differences from the user's perspective are that they enter a special character combination at the beginning here and they get a result at the end here. So it's a very unobtrusive system; it only really shows up when there is a potential problem. So here's how this actually works from the user's perspective. There's that special character combination that I've been referencing: just percent sign, percent sign. And in this case, we tell Jupyter Notebook that we specifically want to summon the Praxi module within RTQA. Let's say our user here is installing a Python package known as Keras. So they'd enter the exclamation point that indicates we want to run a shell command, and say pip install keras. That first kicks off the recording process for Praxi. Then it allows pip to do its thing and install the package. It then stops the recording process and tells you: hey, I got this change set, captured 893 file creations and so on, and ran it through Vowpal Wabbit. And finally we get our label out here, and it says: hey, this is TensorFlow. Specifically, it's TensorFlow version 2.7.1, and that version is vulnerable. So what happened here? We didn't install TensorFlow. Praxi discovered that Keras has a vulnerable dependency, and that happens to be TensorFlow version 2.7.1. Keras, by the way, if you're not familiar with it, is an open-source Python module for interfacing with neural networks. Keras 2.8 depends on that version of TensorFlow, which contains those known vulnerabilities. And Praxi had been trained to recognize that version of TensorFlow using a certain number of samples of its installation, and it recognized that footprint when the change set was run through the inference process. So moving forward, what are our plans for RTQA? Well, first we want to look at the machine learning aspect of things. 
Praxi, and we expect the other analytics engines that we're developing for RTQA, will require these large and up-to-date machine learning models. In the case of Praxi, for example, we need to have a database that captures every vulnerable version of every Python package we might be interested in. And it's impractical to store copies of these rather large machine learning models on users' local machines. So instead, we'll be making use of Red Hat's open data science platform, specifically its Kubeflow component, to build a hosted machine learning pipeline. This pipeline will allow our models to be updated daily and iteratively. So every time a new package comes out on the Python Package Index, or a new vulnerability is reported to Safety DB, we'll be able to automatically trigger an iterative retraining of our model, and it will then be able to recognize that newly vulnerable package. Another direction we're looking at, of course, is future analytics engines. So firstly, we want to take a look at static code analysis. This is a very well-studied area, and so we think there's a lot of potential here. One thing we want to do is be able to detect those unsafe foreign code calls at the line-of-code level. That refers back to the NumPy example I gave earlier, where you have modules that are backed by C. Next up, we want to do some network telemetry analysis. This would be to detect anomalous behavior to assist the user in debugging. This could mean, you know, you're writing some code for, say, a network interface driver, or better yet, some code that interfaces with a database over a network. 
And all of a sudden, you changed one variable and hit Run again, and now your code is taking a lot longer and you're not really sure what's going on. Without any sort of deeper look, you can see that the result is the same, but it's taking a lot longer and you're not sure why. If you had an RTQA-based network anomaly detector running in the background, it would be able to tell you: hey, the code you just ran downloaded ten times more data than the previous runs of this code. Did you mean to do that? And these are the kinds of things that, again, are supposed to help the developer catch these bugs well ahead of any formal quality assurance phase. We're also interested in looking at intelligent rollout engines for automatic A/B testing of code. So this would be, for example, if you have some code that's going into a distributed system, and let's say you have it hooked up to a staging environment, you could have RTQA automatically deploy two different versions of your code and then tell you which one performed better, whether that's by a CPU usage metric or a sales conversion metric or whatever you're interested in when you're writing your code. And, of course, because it's open source, we'd love to see any ideas that you all have for analytics engines. That's the link to our GitHub here: github.com slash operate-first, AI for CloudOps, and pull requests are certainly welcome. So to conclude: RTQA aims to deliver real-time code feedback to developers and data scientists. We're starting off with our initial analytics engine, Praxi, which will provide vulnerable dependency detection using machine learning. Finally, we hope that RTQA will continue to grow with your contributions, and also with the machine-learning-as-a-service framework that we're hoping to launch alongside it. 
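The download-volume warning described above might reduce to a simple baseline comparison. This sketch, with made-up history values and a made-up threshold, shows the idea rather than any real RTQA engine:

```python
from statistics import mean

# Sketch of the anomaly check described in the talk: compare the bytes
# a run downloaded against the average of previous runs and warn when
# the ratio is suspiciously large. History values and the threshold
# are illustrative, not from RTQA.

def check_download_volume(history_bytes, current_bytes, ratio_threshold=10):
    baseline = mean(history_bytes)
    ratio = current_bytes / baseline
    if ratio >= ratio_threshold:
        return f"warning: downloaded {ratio:.0f}x more data than usual"
    return None   # nothing unusual; stay quiet

previous_runs = [1_000_000, 1_200_000, 900_000]   # ~1 MB per run
print(check_download_volume(previous_runs, 11_000_000))
```

Staying quiet on the normal case matches the "unobtrusive" design goal mentioned earlier: a warning only appears when there is a potential problem.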
So thank you very much for your attention and your time, and I'd be happy to take your questions. Oh, yeah, if you could just step up to the mic there. Feel free to line up. Sorry, one second. Technical difficulties. Testing. Thanks for the talk. I was wondering, was there any particular technical reason you were using Vowpal Wabbit as opposed to some other model? Yeah, yeah, great question. When we originally developed Praxi, like I said, about four or five years ago now, Vowpal Wabbit at the time was seen as a pretty performant machine learning engine. And for what we were trying to do, which is really just recognize these collections of string text, it was seen as a very practical way of going about that. And, you know, it worked for us at the time. Nowadays, of course, five years is a lifetime in the machine learning world, and a lot of things have changed. There are plenty of engines with better accuracy, better performance, and so on. And that's partly why we're switching toward this machine-learning-as-a-service model with Kubeflow, because that allows you to essentially plug and play different machine learning engines pretty easily. So that's what we're moving toward. But that's a great question. Thank you. Is it possible to use RTQA on something like ODH, or does it have to be on a local JupyterLab instance? If I understand your question correctly, it definitely doesn't have to be running on a local instance. If you have a distributed instance of JupyterLab, where your client is on a different machine or even a completely different network from your JupyterLab server, that's fine, as long as it's installed within the JupyterLab server component. Thanks for the question. Hey, awesome talk. I like the potential to be able to see vulnerable sub-dependencies of something I want to install. And I've seen something like that on Quay: when my image is on Quay, I see all the CVEs, but not all CVEs are very useful to me. 
And I can imagine that in my notebook, if I run Praxi and then, hypothetically, if CVEs were also added, I get, like, a thousand minor situations that I need to deal with. Is there a config for Praxi where I can say, okay, filter out low-priority CVEs or low-priority vulnerabilities? Is there something where I can get an even more detailed readout of what the vulnerability is and whether I need to care about it? Yeah, you raise a great point, which in the medical field is known as alarm fatigue. If you are constantly getting alerts and warnings about the code you're writing, just like a doctor constantly hearing all these beeps and alarms in the operating room, what eventually happens is you just tune them out, and the same thing will happen with developers. For the kind of config question you asked, there are two areas. One, you can control what makes it into the knowledge base in the first place. So you mentioned CVEs: you could set it up so that only the highest-severity CVEs make it into your learning model. That's one way to do it. The other way, and this is something that we haven't developed yet, but it's planned for a future release, is a more client-side configuration. So if the developer knows or is aware of a particular CVE, for example, or a particular performance bottleneck or something like that, they can just add either an entry to a config file or something at the top of their notebook that says: hey, ignore anything to do with this CVE. Any other questions? Thank you all so much. Testing. Okay. Great. Did you get good questions? Yeah. Oh, I mean, they were really listening. That's wonderful. Yeah. I'm speaking in the same room on Saturday. So... [Crosstalk, unintelligible.] Yeah, I mean, I guess if you... Yeah, no. You could just kind of break into... You could just start something. See if you can fix it. 
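The client-side filtering idea from that answer could look something like this. The config keys, severity scale, and CVE identifiers are all hypothetical:

```python
# Sketch of client-side warning filtering as discussed in the Q&A:
# suppress findings below a configured severity or for explicitly
# ignored CVEs. Config keys and CVE IDs are made up for illustration.

SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def filter_warnings(warnings, config):
    min_level = SEVERITY[config.get("min_severity", "low")]
    ignored = set(config.get("ignore_cves", []))
    return [w for w in warnings
            if SEVERITY[w["severity"]] >= min_level
            and w["cve"] not in ignored]

found = [
    {"cve": "CVE-2022-0001", "severity": "low"},
    {"cve": "CVE-2022-0002", "severity": "critical"},
    {"cve": "CVE-2022-0003", "severity": "high"},
]
cfg = {"min_severity": "high", "ignore_cves": ["CVE-2022-0003"]}
kept = filter_warnings(found, cfg)
print(kept)   # only the critical CVE remains
```

This is the "alarm fatigue" fix in miniature: the knowledge base controls what can ever be flagged, and a client config like this trims what actually reaches the developer.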
It's all on volume. That'd be great. That would be fun, wouldn't it? Yeah. It would be fun to have, like, musical numbers for the conference. Yeah. Let me know what would be a challenge. I know my manager challenged people to give their status reports in poetry. First they were haikus, and then they were limericks. And actually, I didn't do that; I mean, it was just the people who wanted to. But a couple of people came up with really good limericks. I'm like, you guys are so much more talented than I am. I guess it's not totally on the side for a while. Yeah, my husband's good at that kind of thing, so I could imagine him breaking out a limerick. That would be fun, wouldn't it? Yeah. That would be really fun. Because I guess you could just sign up and do the lightning talk, right? Okay. Thank you all for joining today this afternoon. I'll try to liven it up a little bit. We're live-streaming this as well. Welcome to... well, excuse me, let me start over. Welcome to Computational Thinking for Creatives: Decoding Barriers to Entry, at DevConf.US. I am Tadio Kopian, and I will speak a little bit today. All right. I work as a program manager, and my background is actually architecture, the construction kind. Over the years, I learned coding and taught other people coding, and these are kind of the big lessons I've learned about how to really communicate to other people how to think computationally, coming from a non-traditional, non-CS background. And this is the nature of the talk today: how do we bring in more people from outside CS, from non-traditional backgrounds, who are touching CS-related topics more frequently than ever before? And that's good for people already in that world of computer science, programming, and technology, because you want more people to understand the value you offer. That's good all around. 
And that's good for the people learning it, so they can get more tools in their skill sets to do the kinds of things they want to do. So we'll talk a little bit about how to get there. The starting point here is that I needed to teach architects at a company I worked at (again, the building kind, not the computer kind) how to take advantage of new technology for design. They didn't know how to do this stuff. They're smart people, but they didn't understand this. So how do we do that? Well, if you didn't teach them the very premise of what computational thinking is in the first place, logical analysis, logical thinking, they weren't going to get it. A lot of people are used to non-structured thinking, a little more free-form, a little more open-ended, which is great for creativity but wouldn't really work for trying to compile code. So I had to, first of all, understand my audience. This is a really good starting point: understand your audience, so you don't jump to conclusions before you get started. Who are they? What do they feel comfortable with? What are we trying to teach them? And how much exposure have they had to all of this? And you can see in this graphic here, a lot of my audience, the people I was trying to teach, were people who like to sketch. They like colors, they like to draw things out. Touchy-feely: that's a word we'll come back to again, and that's something I had to figure out ways to respond to. And it's always good to find a way to understand, no matter what you're teaching, no matter how technical or non-technical it is, what your audience is interested in doing and how they learn. And we're talking about creatives here, right? People who are a little more inclined to make something, usually graphical. They don't feel very comfortable with code and text. We just saw a great example of a Jupyter notebook and how to use machine learning. 
I think that's something a lot of people are interested in learning or trying out, but they don't know the fundamentals. It looks too technical, too abstract, too intimidating, and they get a little anxious around it. I've been around these people. They're very smart people, but they get stressed out about it. So it got me thinking that one way to bridge that gap is patterns. There are architecture patterns, like on the left here, and there are design patterns, the systems thinking, on the right here. These are all patterns, and that's not a coincidence. The person who came up with the idea of a pattern language is Christopher Alexander, and that's where all the patterns you see in computer science came from: architecture. Christopher Alexander was a building architect. He talked about patterns as a way of creating design systems, and that got picked up in CS even though he was a building architect. So I thought that was a really good starting point: to talk about patterns and thinking patterns. And when it comes to thinking patterns and computational thinking, what it really is is just expressing solutions as steps, like an algorithm, that can be carried out by a computer, a machine, anything. This is a great starting point to help align people with what it takes to be a computational thinker from a non-traditional background. And here are the general steps. I think a lot of you are already comfortable with this point of view of what it takes to do logical thinking, computational thinking; this is what I was showing to my learning audience. Start with using abstractions and pattern recognition to represent the problem. Think in different ways than what you're used to in an unstructured approach. Logically organize and analyze data: really think about what the data is, data types and how they work. Break down the problem into small parts. 
Use programmatic thinking, such as iteration, symbolic representation, and logical operations, to understand the problem, then reformulate the problem into a series of ordered steps. That's algorithms. Then evaluate possible solutions with the goal of achieving the most efficient combination of steps and resources, through identification, analysis, and so forth. And then, after you go through this process, you generalize the problem-solving process to a wider variety of problems, meaning you reuse it in the future. And you just break it down in these formats: inputs and outputs. Okay, let's change it up. Testing. Okay, thank you so much for that. Perfect timing. So, the essentials to cover, based on these kinds of complicated steps: inputs and outputs, what goes in and what goes out; data flow; data types; recursion; things we've all heard before. You have to prepare people to understand this. And the resources I used to help me along the way are these awesome open source projects: Python, of course; Dynamo, we'll see that; Grasshopper, we'll see that; and Blender. Some of you might have already seen Blender. These are great visual ways of working, along with textual Python. And they're very accessible and free, and we'll look at examples of how to use them to teach people computational thinking. And again, these people are designers. They're architects. They like to see things. They like to see stuff happen. They use 3D environments; they make models all day. Here's an example of an entire skyscraper being designed through a computational thinking process that is really easy to wrap your head around but looks very sophisticated. And all of this is the same thing we're looking at: inputs, outputs, data flow, structure, and all that. So we want to use something visual to help get that point across to people for whom this kind of thing is normal. And they do get this kind of work in different 3D software packages. 
They really do come up with some sophisticated ways of producing some interesting designs. So work with where they're coming from. Work with what they're doing. And I have a fun time showing off this example of data flow. You want to make a sandwich, right? So we start with what a sandwich is: a peanut butter and jelly sandwich. And I show people: okay, let's start with that very basic premise and work backwards. Okay, so to get the sandwich, I have to put two pieces of bread together, one with peanut butter on top and one with jelly on the bottom. Simple, right? But then you break it down further. What would you have to do to get those components? Well, you need a jar of peanut butter and a jar of jelly. Great. That's an input. And then you also have to deal with the bread. If the bread is not already sliced, you need to get two slices of bread from a loaf of bread, and to get the slices, you have to cut it. So that's something you have to do. So you break these down into finite steps, and people go: cool, I have a visual flow here. You take that abstraction, you break it down, you atomize it. And from there, you can get them thinking in terms of taking something very obvious and making it very precise. Then you go a little further, and this is the visual scripting format. Yes, visual scripting. That is, for example, a software package called Grasshopper, where you can actually apply logic in a low-code format and describe it graphically. And this is easier, because you're using these nodes here to operate. It's easier for them to understand the steps along the way because they can actually toy around with it. This is a real example of how it works in low-code software. And you can do this with pretty much any low-code tool, like Node-RED; I think there's also one called PyFlow for Python. There are a lot of different variations of this. 
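The sandwich data flow above can be written as small functions, one per node, which is essentially what the visual graph encodes. All names here are illustrative:

```python
# The peanut-butter-and-jelly flow as code: each node in the visual
# graph becomes one function with explicit inputs and outputs.

def slice_bread(loaf, n=2):
    """Cut n slices from a loaf (the 'cut' operation in the flow)."""
    return [f"slice of {loaf}" for _ in range(n)]

def spread(bread_slice, topping):
    """Combine a slice with a topping from a jar (an input)."""
    return f"{bread_slice} with {topping}"

def assemble(top, bottom):
    """Put the two prepared slices together: the final output."""
    return f"sandwich: [{top}] on [{bottom}]"

slices = slice_bread("wheat loaf")
sandwich = assemble(spread(slices[0], "peanut butter"),
                    spread(slices[1], "jelly"))
print(sandwich)
```

Working backwards from `assemble` to `spread` to `slice_bread` mirrors the backwards decomposition described in the talk.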
And we do the same thing with this food example. We say, okay, we want to make a cup of coffee. So instead of assuming we already have a cup of coffee, how do we get there? Well, we establish that we need water, coffee, and a cup. But the water has to get hot, so we create an operation that adds temperature to the water to get water at a certain temperature, 110 degrees Fahrenheit, for example. And when it comes to the cup, we combine the coffee and the cup to get a coffee cup. And then you put those two together and you get water in a coffee cup, and you can then pour however many ounces of coffee you want into the coffee cup, and then you have your serving. So again, we're just trying to get them to think in a structured environment and formalize the process. Then we can take it a little further, to a real-world example. This is the Dynamo environment for visual scripting, which can actually produce 3D objects. And I'll just walk you through this real quick. We're going to set coordinates; now we're going to make a building. Same exact concept: what are our inputs, process, outputs? So for the input, I'm going to give you XYZ, or, yeah, XY floor coordinates: one corner, two corners, three corners, four corners of a floor plan, for a nice rectangle, 30 feet by 60 feet. And we just pair these up in different combinations. You could make each one of these a separate XYZ input, but you can just mix and match these pairs. And down here, we're specifying our structure and materials: curtain wall, concrete, foundation slab, and then how many levels. So these are the inputs: number of levels, materials, wall types, and coordinates for my grid. I then process this: I pair all the grid coordinates together into what they call a polycurve, which here just means a rectangle; you put them together. And down here, we take those materials and levels, levels meaning how tall this building is. Say I want 100 stories. 
And I pair the geometry of the floor with the number of levels and this material. Again, an output saying: give me a bunch of walls, give me a bunch of floors. So that's where the output comes from: the logical series of steps that we go through to create this tower. And that script you saw created that tower. We have the wireframe on the left here previewing it, and the real output on the right. Again, it's easy to understand what your code just did: a structured process of inputs, process, outputs. And what's beautiful about this is you get real-time results. When it compiles, you get a model. And that's also great because they can see what they made. If they want to add more corners, you just add more XYZ corners. If they want to make it taller or shorter, just change the numbers. If they want to use a different material, change the input. This really solidifies, for somebody from a non-traditional, non-CS background, how they can work with a logical process. And from there, they can get some cool results. This runs in seconds: they can get a whole tower that might take an hour to build manually, which reinforces their interest in going from a manual, old-fashioned way of working to something computationally driven. It can give them lots of interesting variations, and they can take it further. People love being touchy-feely. I mentioned before that they like touching things and seeing what it does, interacting with it, and getting a sense of what's possible. This encourages them to explore their curiosity. I think we all have our own ways of engaging our curiosity, ways we feel comfortable learning something, things that engage us. A lot of people from different backgrounds, especially creative backgrounds, like to see what happens when you modify something and run through it. And this is just shapes. 
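The tower workflow above (corner coordinates and a level count in; floors and walls out) can be mimicked in a few lines of Python. The real Dynamo graph drives a 3D modeling engine; this sketch just shows the inputs-process-outputs structure, with an assumed floor height:

```python
# Inputs: four XY corners of a rectangular floor plan and a level
# count. Process: stack the floor outline level by level. Outputs:
# floor rings and wall segments. Floor height is an assumed value;
# the real graph also takes materials and wall types as inputs.

def make_tower(corners, levels, floor_height=10.0):
    floors, walls = [], []
    for level in range(levels):
        z = level * floor_height
        ring = [(x, y, z) for x, y in corners]   # the "polycurve"
        floors.append(ring)
        # one wall segment per edge of the rectangle, per level
        for i in range(len(ring)):
            walls.append((ring[i], ring[(i + 1) % len(ring)]))
    return floors, walls

# 30 ft x 60 ft rectangular floor plan, 100 stories
plan = [(0, 0), (30, 0), (30, 60), (0, 60)]
floors, walls = make_tower(plan, levels=100)
print(len(floors), len(walls))   # → 100 400
```

Changing the inputs (more corners, more levels) changes the outputs, exactly the touchy-feely feedback loop the talk describes.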
They're just exploring shapes: seeing what works, what these different nodes do, what these different outputs do, what combinations they want to run through. Very non-intimidating, easy on the eyes, easy to run through in a 3D environment, but still computationally driven. And you can take it a little further, taking all the lessons learned about inputs, process, outputs, to create something a little more complex: not just a rectangular building, but a parabolic building where the points in a circle at the bottom match the points at the top at a rotation. Tricky to do manually on your own, but through a simple operator like this you can produce a result, and for a lot of designers, architects, engineers, what have you, this is how they get things done. This is the kind of tooling I was talking about earlier. People want to learn this stuff, but they didn't have the logical structure for doing so, because if they went to school for this kind of work, you literally take pieces of paper, cut them up, shape them, and glue them; there's no real logic beyond what's in your mind about how to put it together. So this is a great way to get results quickly. And what's beautiful about it is that as you develop it, you can refine it. People can see what the inputs do to change the outputs, and they can start structuring these graphs in a way that's easy for them to interpret. It gives them results and finite steps they can understand. And from there, they're ready to take it a little further. This is an example of the low-code script on the left, where the blue area can be rewritten in Python: much more concise, easy to read, and it gives you those XYZ coordinates from a Y distance and an X distance with a few iterations. So that's the goal: give them a bridge from what they're used to into a format that's easy to work with, and then start learning from there.
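As a hedged sketch, the few lines of Python that replace a node cluster like that blue area might look something like this. The spacing values and grid size are invented for illustration; the point is that two loops stand in for a tangle of wired-up nodes.

```python
# What the "blue area" of nodes might become in Python: a grid of XYZ
# points built from an X spacing, a Y spacing, and an iteration count.
# All values below are illustrative, not from the actual graph.

x_distance = 10.0   # spacing between points along X
y_distance = 10.0   # spacing between points along Y
iterations = 4      # points per direction

points = []
for i in range(iterations):
    for j in range(iterations):
        points.append((i * x_distance, j * y_distance, 0.0))

print(len(points))  # 16 grid points
print(points[0], points[-1])
```

And unlike the node graph, this version copy-pastes cleanly into a script, a gist, or a colleague's chat window.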
That way they can start feeling comfortable understanding what the logic looks like, so that when we do dive into scripting code, they're ready for it. They understand the beauty of it, which is that we don't have to spend 15 minutes making that blue area of nodes work, which is actually a lot of work; we can spend a few minutes writing the code, or copy-pasting it from somebody's GitHub repo, and move on with our lives, because you really cannot copy-paste a node graph very easily, but you can copy-paste code easily. So you give them a step-by-step process of getting there, so they feel comfortable at the next level. I think this is the missing link for a lot of people: understanding the logic of even bothering with this stuff. Again, if you started directly in any kind of computer science, this is the kind of thing they teach you, even more so with Java or C# or C++, in way more detail. A lot of other people don't have that perspective, so you have to give them a couple of different ways of approaching the environment and getting comfortable with something a little more technical, logical, and computational. And what's fun about this is I got to build a little training course to teach people about computational design, generative design, even this thing called parametric design, which is parameter-driven design. We saw examples of that with the parabolic building and the tower. And just a key point here: depending on people's learning styles, sometimes they want to touch, to look at stuff, to hear, to read. You have to be open-minded that some people want a nice shiny diagram or graphic, while others want a way to manipulate and modify what they're using, like the example where we clicked through all those different options.
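The parabolic tower mentioned above, where a circle of base points meets a rotated circle at the top, can also be sketched in a few lines. This is my own guess at the construction, not the graph from the talk: every level is a ring of points, the ring rotates progressively with height, and the radius follows a parabola so the tower narrows at its waist.

```python
import math

# Hedged sketch of a parabolic twisting tower: all parameters are invented.
def parabolic_tower(n_points=8, levels=20, base_radius=15.0,
                    waist_radius=8.0, total_twist=math.radians(90),
                    height=200.0):
    rings = []
    for lvl in range(levels):
        t = lvl / (levels - 1)  # 0 at the base, 1 at the top
        # Parabola: base_radius at t=0 and t=1, waist_radius at t=0.5.
        radius = waist_radius + (base_radius - waist_radius) * (2 * t - 1) ** 2
        twist = total_twist * t  # top ring rotated relative to the bottom
        z = height * t
        ring = [(radius * math.cos(2 * math.pi * k / n_points + twist),
                 radius * math.sin(2 * math.pi * k / n_points + twist),
                 z)
                for k in range(n_points)]
        rings.append(ring)
    return rings

rings = parabolic_tower()
print(len(rings), len(rings[0]))  # 20 rings of 8 points each
```

Feed those rings to any lofting or paneling tool and you get the curved tower; change one parameter and the whole form updates, which is the parametric-design point.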
I think this is something that will help anybody who's trying to teach an audience, technical or not, because different learning modes help. So my recommendation is to mix and match: try some graphics, try something more interactive, give them documentation that's helpful, and also be clear and concise as you teach through verbal communication. Just mix and match; the variety goes a long way in reinforcing knowledge. Here's a team I worked with at my company at the time to create these kinds of courses. It takes a village to raise a child, and it takes a team to do some training. And this is an example of a real-world solution using all the things we've talked about. It's a library building in the Bay Area that needed to be shaded. The south facade was getting hit by a lot of sunlight, so the school asked us: how do we make a nice facade that on the one hand protects us from being burned alive, on the other hand makes good use of sunlight, and is also nice and aesthetic? We came up with a solution for showing the variations of shading structures in a computationally logical way that gave us good results. This big graph here shows that. I won't go into too much detail, but it's based on the same tools we saw earlier. It lets you do visual scripting and gives you a lot of control over the process, so that you can then literally create 3D elements from it for design and construction. You take those simple examples, peanut butter and jelly, coffee, make a tower, make a parabolic object, and from there the logic sinks in to the point where people feel comfortable doing much more sophisticated work. As you can see in this very long, elaborate graph, this series of wires connecting over here is what generates those panels. That's how we got this study right here: the panels on the right were a computationally driven layout. And I should make a note here.
We needed to see them in 3D; otherwise we wouldn't know what they looked like or how they were sized, so we couldn't just have an algorithm spin up the numbers, we needed to see it. It's a purely practical example of how it worked. It was a really great way to show off what we're doing here on the left, which is actual solar analysis based on the inputs: how much sunlight we had, the time of day of the sun, the size of the louvers that shade the building, the floor plan. All of that was an input. A person from a design background who isn't traditionally technical can work with this, understand and respect the logic, and help meet these higher standards of making a better space for the students to inhabit. Key concepts: coding and computational design are more important for the average person these days. I think we're democratizing programming to a broader audience, and I think you're seeing that, whether in yourselves, your friends, or your associates: a lot more people are interested in this kind of stuff, and we have to find ways to really facilitate that. And you have to find ways to create a compatible training style to make people comfortable. A lot of people are still stressed out just seeing lines of text, trust me; no matter how smart they are, they're not used to it. So you have to build that baseline of computational thinking so they don't feel overwhelmed. This can be a hit-or-miss, trial-by-fire thing, but over time you'll get a sense of what's possible. For all you know, you might be teaching a very non-technical class of people about something related to edge computing one day. It's totally possible. And reduce the anxiety: use easy-to-follow examples, something that's really hard to miss. Create a structured course. If you're going to make a course, a document, a training, make it something easy to follow and not hard to get on board with.
Let people try things out and encourage them to share the results; the results I was showing you were from real projects. Review and upgrade your training forever; it never ends. And support your maintainers. For the OSS projects you use left and right, always try to find ways to support them: pay them, have your company pay them, find ways to help them track down bugs, because everything that helped me and helped my companies was open source. So always support them. Here are some references, and I'll give you a link to the talk at the end if you want to check this stuff out. All the stuff we just saw, these are links to it. Most of them are free; let's say they're all free. They should be, and I don't think any of these are paid. Shout out to the maintainers. This is actually the link I was talking about, so you can grab it if you like and take it with you. You can reach out to me online if you have any questions; I'm not hard to find in search engine results, since not too many people have that combination of letters. Thank you so much for being here this afternoon for the last round of talks at DevCon on Thursday. I'm open to any and all questions for the next minute or two.

Testing, testing. Mic. Yes, I'm curious: as a UI/UX designer and UI/UX design student myself, I'm wondering how this can be useful for helping people like me who learn visually, and whether you can do the same thing with UI/UX design in reverse, because the coders might have issues understanding our concepts as well. Yeah, the reverse of that: helping the abstract thinkers understand the UI designs that we make, essentially.
Yeah, and I would say that's something I've noticed: as you just mentioned, the reverse is also true. If you're from a very in-the-weeds CS programming background, everything is very text-driven and logic-driven. I don't have an example here, but I would say: take the language you know; a lot of them already have visual scripting. Just go back, try out that visual scripting, and run the code. Is it efficient compared to writing code as text and compiling it? That's not the point. It's not efficient, but it is a good learning experience. So if you have an abstract-thinking mentality, and we're talking about the reverse here, then for Python, for example, there's a tool I believe is called PyFlow; see if I have it here, actually, I think it's shown here. PyFlow is just what it sounds like: it does basically Python visual scripting, and just walking through it would be a great way to see how it works, since you already know how the code works. Another one I didn't show here, I believe it's p5.js, is a visual version of JavaScript. So I'd recommend checking out PyFlow and p5.js to see how they create or use graphics to work with code. If you already know this stuff, just do it in reverse. Visual scripting is probably the best way to bridge that gap. And I would say go back to what we talked about here: the pattern language is also a great concept. Again, an architect came up with this idea and CS borrowed it from him. So go back to Christopher Alexander and A Pattern Language as well, and understand these fundamental patterns that can be used both abstractly and as actual objects in space.
These are the kinds of things that cross over pretty well, so that's where I would say to get started. And just draw things: literally sketch things out, flow charts, that helps a lot too. That's a good way to meet in the middle.

This allows this meeting in the middle; it's my understanding that you use this with the people you work with? Is the idea to teach them how to code? You use this methodology, this computational thinking, with your co-workers to teach them how to code? Do I have the correct understanding?

Yeah, part of this was developed to help people, co-workers, colleagues, learn. They were already on the cusp of learning, so I asked how I could make it a little easier for them, and this came out of training processes we developed based on these tool sets.

Okay, all right, thank you. Just to clarify what I meant: if you have somebody on your team who thinks better with text or abstract thought, and I design user interfaces, would there be a way, with this methodology, to make it easier for that person to learn how the design works? So to show somebody how it works: can you use this methodology of computational thinking to show somebody who is great at abstract thinking how your GUI works? To explain to the developers how the design that I made for the backend team to work on would work, could I use this methodology?

The best way I can answer that is to say that part of this process is to help people who wouldn't know this stuff start speaking the language a little better, you know, inputs, outputs, and so on. So I would say teach them how to speak the language of UX a little bit; if you're coming from that point of view, turn it around a little bit and teach
them the fundamentals: a real quick lesson on what is UX, what is UI, what is an interface, what is user experience. I think being able to instruct them on what that is and what the priorities are, using these same kinds of discussion points, a mouse click is an input, a window is an output, what does the data flow look like, will help bridge it. If I didn't answer your question properly, let's talk a little after and I can. Thank you all. We did it, guys, we made it through Thursday. Congratulations. Let me turn this off.