Cloud Functions and Azure Functions. So let's go on to Apache OpenWhisk. Apache OpenWhisk was started by IBM. It implements a function-as-a-service infrastructure. It's now an Apache project, an open-source project. You can write functions in a large number of languages. For example, you can use JavaScript with Node.js, it supports Go, it supports Ruby, and it scales using container technologies. So to interact with OpenWhisk, well, first of all, functions in OpenWhisk are called actions. And there are many ways you can invoke an action. You don't have to make a direct HTTP request. You can, for example, define triggers where you say, whenever an email comes in, I want to invoke an action on my serverless cluster. Or you can define feeds where you say, I have a GitHub repository, and whenever someone pushes to that repository, I want to run an action on my cluster. Or you can use AMQP: when a message is received, I want to run an action. As mentioned, all of this scales with Docker containers. So if more requests come in, it will spin up multiple instances of your function to manage the demand. Usually you interact with OpenWhisk through its CLI. This is what its banner looks like, and it gives you examples of what you can do with it. So you can manage your actions, your triggers, and your namespaces. Namespaces are ways to group functions. And here are some useful commands that you can use. The first one just lists everything that's in a cluster. This will return all the actions, all the namespaces, all the triggers and feeds that you have. You can also specifically list actions with wsk action list. More interesting is creating an action. The simplest way to do it is just to use wsk action create. You give the action a name and you provide a file. By default, it assumes that it's a JavaScript file and it will be run in a Node.js container. You can create more complex actions where you provide a zip file. In that case, you have to provide a kind to tell OpenWhisk, is this Node.js, is it Go, because it can't infer the type from the file extension. This has several advantages. You can bundle your dependencies with your action. You could say, I have one function, but it calls out to a database maybe, and I bundle the dependencies for this in my zip file. You could just bundle your node modules if it's a Node.js application.

On to OpenShift. So what do we want to achieve? We want to run OpenWhisk on OpenShift. That already works well. There is Project Odd, which provides templates for this and makes deploying OpenWhisk on OpenShift, or on plain Kubernetes, pretty straightforward. There's also an Ansible Playbook Bundle. You can use that to deploy OpenWhisk to the service catalog and then provision it with one click from the service catalog. This one is experimental, but it works. That's okay, but we want to go one step further. We want to manage, or rather represent, OpenWhisk resources in OpenShift. So we want to represent our actions in OpenShift. And we want to manage them using templates, the YAML templates that OpenShift and Kubernetes use. And we want to give you a sensible way to retrieve all the necessary data you need to invoke those actions, like credentials and endpoints. The question is, why would you want to do this? There are several reasons. The first are operational reasons. It's possible that you have cluster admins that know how to deal with OpenShift.
They know how to work with templates, but they should not have to know how to deal with OpenWhisk and its CLI. So by giving them OpenShift or Kubernetes templates, they can just use their existing knowledge and apply it to work with OpenWhisk resources. It's also possible that some applications in your cluster depend on actions being available at deploy time. When you have templates, you can bundle all your templates and deploy them at once, and that guarantees that, okay, when my application runs, those actions will be available. Then there are security reasons. You might want to restrict CLI access to your cluster, because to grant it you would have to give out credentials, and it might be more sensible to rely on OpenShift OAuth for all authentication. And then there are some user experience reasons. You might want to take advantage of things like the service catalog or service bindings to provide more advanced features. Okay, how do we do this? We are going to use operators. So what's an operator? I've taken this from the CoreOS website, and it says an operator is a method of packaging, deploying and managing a Kubernetes application, where a Kubernetes application is an application that is both deployed to Kubernetes and managed using the Kubernetes APIs and kubectl tooling. We are particularly interested in the managing aspect here. So let's go a bit into the details. Operators are applications deployed into your namespace. They are typically written in Go because that's what Kubernetes and OpenShift are also written in. They watch your resources, so everything that's deployed to your namespace like pods, deployments, routes and services, and they react to changes. You can use custom types, and we are going to use custom types to represent OpenWhisk types like actions. Kubernetes gives you the ability to define types that are not known to Kubernetes out of the box, via custom resource definitions. I will show you an example of that later. And there's also the CoreOS operator SDK, which was announced pretty recently, and it provides a pretty good way to start developing operators. It's nicely designed and abstracts away a lot of the nasty parts of the Kubernetes API. Okay, we're going to work on the serverless operator, and I mentioned the custom resource definition. We want to have a custom resource that represents an OpenWhisk action. This is what that template would look like. We say that our custom resource definition is called serverless action, and that's more or less all we have to provide. The rest is pretty standard. You don't define here what the content of this custom resource will be; that's handled later. With this, you just tell your OpenShift or your Kubernetes, hey, I want to have a resource called serverless action, and I want to make the cluster aware of it. In the operator, we also have to represent this custom resource as a type in Go. We define the type serverless action, which represents our resource. It has a spec part and a status part. The spec is basically, what do I need to create an OpenWhisk action? So I need a name; I need to say what the name of the action is. I need a kind: is it Node.js, is it Go? I need the code that I want to run, of course. Then I have a username and a password here, because OpenWhisk is protected by its own authentication that I need to provide. And then we have the namespace. And then we have the status struct here.
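To make the shape of that type a little more concrete, here is a minimal sketch of what such a Go type could look like, following the usual Kubernetes TypeMeta/ObjectMeta conventions. The field names and package name are assumptions for illustration, not necessarily what the real serverless operator uses; its repository is linked at the end of the talk.

```go
// A minimal, illustrative sketch of a Go type for the custom resource,
// following the usual Kubernetes TypeMeta/ObjectMeta conventions.
// Field names here are assumptions, not the operator's actual ones.
package operator

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ServerlessAction represents one OpenWhisk action as an OpenShift resource.
type ServerlessAction struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              ServerlessActionSpec   `json:"spec"`
	Status            ServerlessActionStatus `json:"status,omitempty"`
}

// ServerlessActionSpec is everything needed to create the action on OpenWhisk.
type ServerlessActionSpec struct {
	Name      string `json:"name"`      // name of the action in OpenWhisk
	Kind      string `json:"kind"`      // runtime, for example "nodejs" or "go"
	Code      string `json:"code"`      // the function source to run
	Username  string `json:"username"`  // OpenWhisk authentication
	Password  string `json:"password"`
	Namespace string `json:"namespace"` // OpenWhisk namespace, "_" for the default
}

// ServerlessActionStatus records what the operator has done so far.
type ServerlessActionStatus struct {
	Created bool `json:"created"` // true once the action exists on OpenWhisk
}
```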
The status part basically stores the current status of this resource. In this case, we only track whether it has been created or not, and the next slide will go into a bit more detail to make sense of why we need the status here. So, the reconciliation loop. How do operators actually work? They don't really react to events. They implement something called a reconciliation loop, and in every iteration of this loop, they are presented with all the watched resources. It's then the job of the operator to sync the status of a resource with the managed service. In our case, we only have a status of created or not created. If the operator sees a resource that is there but has created set to false, it knows, hey, I have to do something: I have to create this on OpenWhisk now, and then update the status. One could say that an operator is basically a state machine for OpenShift or Kubernetes resources. Okay, this should help us understand a bit of the code here. This is taken from the serverless operator; it's a bit simplified. In the first part, we have to tell the operator what kind of resources it should watch. In this case, we only care about the serverless action, so we tell the operator, hey, watch this resource, and then we hand over to the handler. In the second part, we have to implement the handle function. This will be called in every iteration of the reconciliation loop, and here we get the resource that is currently being watched. The first thing we do is make a copy of this resource. The parameter is called event, which is a bit misleading because it actually contains the resource itself. We make a copy of it because it's a pointer, and it's a pointer to a live Kubernetes object that is possibly watched by other operators or services as well. Then we check the deletion timestamp, and if it's set, we delete the action. The way this works is that in Kubernetes, when you delete a resource, it's either deleted immediately, or, if there is a finalizer attached to the resource, Kubernetes will set the deletion timestamp and will not delete the resource until the finalizer is removed. Otherwise, if no deletion timestamp is set, we check the status. If created is true, we do nothing, because this action is there, it's not deleted, and it's already created; we can ignore it. If created is false, then we just create the action. This calls out to OpenWhisk using its REST API to create the action.
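As a rough sketch of the logic just described, and assuming the ServerlessAction type from the earlier sketch lives in the same package, the handler could look something like this. The whiskClient and cluster interfaces, the function signature, and the finalizer handling are illustrative stand-ins; the real operator is built with the CoreOS operator SDK and talks to the OpenWhisk REST API.

```go
// An illustrative sketch of the reconciliation handler described above.
// whiskClient and cluster are hypothetical stand-ins for the OpenWhisk REST
// client and the Kubernetes API access that the operator SDK provides.
package operator

import "context"

// whiskClient is a stand-in for calls against the OpenWhisk REST API.
type whiskClient interface {
	CreateAction(ctx context.Context, spec ServerlessActionSpec) error
	DeleteAction(ctx context.Context, spec ServerlessActionSpec) error
}

// cluster is a stand-in for writing the resource back to OpenShift/Kubernetes.
type cluster interface {
	Update(ctx context.Context, action *ServerlessAction) error
}

// handle is called on every iteration of the reconciliation loop with the
// current state of one watched ServerlessAction.
func handle(ctx context.Context, action *ServerlessAction, wsk whiskClient, k8s cluster) error {
	// The real operator first copies the incoming object, because it is a
	// pointer to a live Kubernetes object that other controllers may be
	// watching as well; that step is omitted here to keep the sketch short.

	// A deletion timestamp means the user deleted the resource. Because a
	// finalizer is attached, Kubernetes keeps the resource around until the
	// finalizer is removed, so clean up on OpenWhisk first.
	if action.DeletionTimestamp != nil {
		if err := wsk.DeleteAction(ctx, action.Spec); err != nil {
			return err
		}
		action.Finalizers = nil // drop the finalizer so Kubernetes can delete it
		return k8s.Update(ctx, action)
	}

	// Already created on OpenWhisk: nothing to do this iteration.
	if action.Status.Created {
		return nil
	}

	// Not created yet: create it via the OpenWhisk REST API, then record that.
	if err := wsk.CreateAction(ctx, action.Spec); err != nil {
		return err
	}
	action.Status.Created = true
	return k8s.Update(ctx, action)
}
```

In effect, the handler is the state machine mentioned above: whatever state the resource is in, it moves OpenWhisk toward matching it and records the outcome in the status.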
Okay, it's demo time. So let's create an action. Let me show you. We've deployed OpenWhisk here in a namespace, and you can see there's also the serverless operator deployed. When we go to Resources, Other Resources, we now get the type serverless action; that's what our custom resource definition added. You can see there is already one action, test action. Now let's create another one. Let's have a look at the action that we're going to create. This is a template of kind serverless action, which means it belongs to the type of the custom resource definition that we added to the cluster earlier. We have to give the resource itself a name, and we also have to give the action on OpenWhisk a name. We say that this is a Node.js application, or function, this is the code that it should run, and here we provide the user, the credentials, to add this to OpenWhisk. If you're concerned about this and you don't want to give out those credentials, you could also make them known only to the operator by storing them in a secret, for example. Then the operator would just take the other params and apply the credentials only when it's making a request to OpenWhisk. And finally, we have the namespace. An underscore means, in OpenWhisk, just put it into the default namespace, whatever that is. Okay, let's create this thing now. I just used the oc CLI, the OpenShift CLI, to create this action by providing the template. It says created, so let's check. Okay, there is test action two now. So now we've created an action on OpenWhisk and it's represented in OpenShift. Okay, back to the slides. And with this, I would hand over to David for the Android part.

Thanks, Peter. Let's get the mouse over there. Okay, we're good. So if you want to call a service from a mobile app, the first thing you might be thinking is, okay, I need some sort of SDK to call it, or I'll just make a simple HTTP request. But you also need some details of the thing that you're actually calling, so you need some sort of configuration. That's the first area I looked at. The serverless action, how is that represented in OpenShift? What can I get from OpenShift? And how can I make that available to the mobile app, in this case a simple Android app, in a way that it can actually make a call to the action? What I'm trying to show on this slide is that if we just get that serverless action, the custom resource, there's a lot of stuff in there, and we don't want all that. So we can slim this down. The right hand side is just to give a bird's eye view of the amount of stuff in there. As a mobile developer, I don't care about all that. I just want the very important bits: what URL do I need to call, and do I need any credentials for that? So, a little bit messy, but we're getting there. We can do an oc get command and pass in a template string. The important bit is what we actually get out: we get the host, so here's where our OpenWhisk server is, there's the action name, the namespace, and the credentials. That's more usable in our app. We can pull that down and put it into a JSON file. The mobile app itself, I'll show some of this in Android Studio, but just to give an idea before we jump in: it's a simple example app. It just has one button for calling the serverless action. Don't expect anything magnificent looking. The repo's up there; if anyone does like to make things look nice, by all means, go ahead. We have a module in there for the OpenWhisk client. That abstracts away the bit that actually talks to OpenWhisk. We read in the OpenWhisk config from a JSON file. Using the command on the previous slide, we can just dump that out to a JSON file; make sure it's in that location there. In our app then, in our main activity, we can create a new OpenWhisk client from this config, and then using that client, we can invoke actions. So nothing too fancy there. Okay, to the studio, let's have a look at this code. I'll walk through right from the config to calling the action, and if anyone has any questions or wants me to explain a bit of code somewhere, just shout. So first of all, this is in assets, openwhisk.json. That's what we saw as the output of the oc command before: the location of OpenWhisk, the action name, and the credentials.
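As a rough, language-neutral sketch of what the app does with that file, here is the same flow in Go: read the JSON config and make a blocking invocation against the standard OpenWhisk REST endpoint. The JSON field names and the shape of the response are assumptions for illustration; the demo app itself does this in Java with Volley, as shown next.

```go
// A sketch of what the mobile client does under the hood, written in Go for
// brevity. The JSON field names below are assumptions; the real openwhisk.json
// comes from the oc command shown above.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

type whiskConfig struct {
	Host      string `json:"host"`      // where the OpenWhisk server is
	Name      string `json:"name"`      // action name
	Namespace string `json:"namespace"` // "_" for the default namespace
	Username  string `json:"username"`
	Password  string `json:"password"`
}

func main() {
	raw, err := os.ReadFile("openwhisk.json")
	if err != nil {
		panic(err)
	}
	var cfg whiskConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}

	// Parameters for the action, as in the demo: a name and a place.
	params, _ := json.Marshal(map[string]string{"name": "world", "place": "Boston"})

	// Blocking invocation: POST to the action's REST endpoint with basic auth.
	url := fmt.Sprintf("https://%s/api/v1/namespaces/%s/actions/%s?blocking=true&result=true",
		cfg.Host, cfg.Namespace, cfg.Name)
	req, _ := http.NewRequest(http.MethodPost, url, bytes.NewReader(params))
	req.SetBasicAuth(cfg.Username, cfg.Password)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var result map[string]interface{}
	json.NewDecoder(resp.Body).Decode(&result)
	fmt.Println(result) // something like a greeting that mentions world and Boston
}
```

The blocking and result query parameters ask OpenWhisk to wait for the invocation and return just the action's result, which is what the demo app displays in its text view.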
Where is this used? In our main activity, if we look up at the top here, we just parse that config and create a new OpenWhisk client from it. And please ignore the line that deals with SSL certs in here, that's only temporary. So we have our OpenWhisk client at this stage. What can we do with it? Well, down here in our on-click handler, so this app is very simple, it has one button, and in the on-click handler for that we construct some params that we want to send to this action and pass them along. This particular action that we created takes in a name and a place. So for name we'll say world, place is Boston, and it'll respond with a string that includes those words. Just down a bit further then, client.invoke. The client is already set up, it knows where to talk to. Currently you have to pass in the action name; I have some ideas for making that much nicer, and I'll talk about that in a couple of minutes. Anyway, client.invoke, that's the action name, pass in those params, and we get a response back. All we do here is update our text view, the text just above the button, to whatever we got back from that serverless action. I'll jump into the invoke function as well, just to show there's nothing special in here either. This is using the Volley HTTP request library: setting up a new queue, formatting the URL based on the various configuration, so the host, the namespace, the action name. Then we set up a new JSON object request, pass in the params, make sure we set the headers here for basic auth, and that just gets added to the Volley queue down here at the bottom. So nothing special here, it's just an HTTP request. Lots of potential for making this nicer for the mobile developer, but it's kept simple for now. Eventually that comes back in here to update our text view with the text. So let's give this a spin and just show that it does actually work. We should have the emulator running here. Perfect. Okay, button, text goes up here. Super exciting, call action. Hello world from Boston, here we go. Peter did all the hard work, I just did the small bit that looks impressive at the end. So yeah, that's pretty much the end to end. I'll say a few things about how I think we can improve this though, because there is plenty of scope there. So, simpler configuration. One thing we didn't cover is, what if you create more actions? I want to call all these actions from my mobile app. Currently that would be a custom resource for each one. How can we bring that all together in one JSON config file? I think Peter mentioned abstracting away the credentials into a secret. So for security reasons you might want to do that, but also for simplification of the configuration: just keep it in one place, and then the app knows where it can get it. Speaking from a strictly Android point of view, Gradle or build-time plugins could really help here. One idea is a plugin that at build time pulls down the latest config from OpenShift for you, rather than you having to remember or script that horrible oc command. Another nice plugin, and this is one I've been inspired to by the Apollo client, the GraphQL client, if anyone's familiar with the Apollo libraries, would generate types or classes at build time that map to the serverless actions. So you could do something like MyCustomAction.invoke, and there's type checking on that, and if that action doesn't actually exist, there won't be a type there. So it's much safer for programming.
And then integrations. I think this is where the most interesting bit is. How can we get the mobile bit integrated more tightly with OpenWhisk? How can we get OpenWhisk integrated with more things in general? So, mobile security: that's a big feature of the AeroGear community, the AeroGear SDK integrating with Keycloak. What can we bring in there, so that only the right people are authorized to call particular actions? I know OpenWhisk has its own credentials, but can we bring Keycloak into play to keep it more centralized as part of a larger project? Then server-side, or serverless-side, whatever you want to call it, integrations: if people have used Fuse or Syndesis, the idea is that you have these connections that can connect to many, many different types of services, and you integrate or tie them together with some sort of filtering or data mapping through the UI. Serverless actions could feed into that, and possibly at the other end you could have something like a messaging queue, and hook them up together. Third point, OpenShift UI extensions. This is something we've experimented with a good bit as part of the AeroGear work with OpenShift 3.10, and we're looking to the future with OpenShift 4 as well, because it's changing somewhat. How can the OpenShift UI be made aware that OpenWhisk is running here and that there are actions created? Well, the custom resources can tell you that, and then you can show that in the UI in some really nice way so you have a unified view of your project within OpenShift: you can see your serverless actions alongside the other things you have running, and possibly even visualize all these things working together as part of a larger project. So that's it on the future potential; I'll hand it back to you, Peter, if you want to do a bit of a wrap up.

Maybe to show you one more thing: we've seen that the resources have been created here, but what does the resource actually look like? We can just inspect the YAML source of it. Yeah, it's hard to read from here; is it okay to read? I'll try, yeah. So this is what the YAML representation of our OpenWhisk action now looks like. We can see that it's of type serverless action and that it has a number of annotations. Those are put there by the serverless operator. One thing it does is annotate the resource with the endpoint of the actual action in OpenWhisk, and it provides those properties standalone as well, so you have the host, the name, the namespace. This is used in David's app; this is what gets parsed out and put into a JSON file. Then we can also see the finalizer here. When we create an action, we add this finalizer to the resource so that when you delete the resource in OpenShift, it's not deleted straight away; instead the operator gets notified and can do its cleanup, where cleanup means remove the action from OpenWhisk, then remove the finalizer, and then OpenShift can finally delete the resource. And then we see the spec part. This is what we talked about a few slides back; let me find my mouse cursor. Whoa, there. If we look at that code, this is exactly what we read out, so name, kind, code and so on. This is what we read from the YAML definition, stored here. And the status: in this case the action is already created, so the operator will just skip it. That was just to show you how it actually looks on OpenShift when we create an action. And then back to the presentation. Just a quick recap of what we did.
So we have OpenWhisk running on OpenShift. We didn't do much here because that already worked thanks to Project Odd. We now have an operator that manages our actions. We can interact with this operator by creating instances of custom resources. We can retrieve the configuration using the OpenShift CLI tools. And this configuration is then consumed in an Android app, and the Android app can use it to invoke the action. The code for all of this is available here, so you can have a look at the operator itself; it's using the operator SDK. Here's the repository, and we also have the repository of David's Android app here. Yep, that's it. Thank you very much.

There are a lot of questions people want to ask, so please wait for the mic. Yeah, this is actually a couple of questions. So the first one. Oh, great, it's on. The first one is, you mentioned Lambda, for example, and you support the same type of languages, for example Node.js, which is commonly used. How easy is it to port the code over to OpenWhisk on OpenShift? Do you mean, can I just take the code from Lambda and plug it in, and would it just work out of the box? I have to say I'm not familiar with Lambda, but OpenWhisk, in the most simple case, just takes a function without any dependencies. So if you have something like that running on Lambda, I would imagine that you can just take it and deploy it to OpenWhisk; that would work. I'm not sure how Lambda deals with bundles, where you have an action with dependencies. Well, you just upload the zip file, the same as here, and then you tell it what language, what runtime version and so on. Yeah. Can I ask one more, related question? On the zip file that you upload: say you have a MySQL dependency that you're uploading, but you don't want to create the connection pool every time for your serverless request. What are the best practices for making sure it persists over multiple serverless requests? I'm not sure that in that scenario serverless is the best approach. It's best suited to stateless work, and maybe to actions that do simple transactions with a database. But if you have a connection pool and you're doing frequent database transactions, maybe a standalone service is a better solution; that's just my take on it. What I tried was an action that used dependencies to talk with a Google Home, and I just bundled everything into a zip and that worked fine. But sorry, I can't really tell you how to deal with database connection pools. Thank you. Go ahead, we have a short time. There are several frameworks that support function as a service, so is there a special reason why you use OpenWhisk? Do you mean, why do we want to run OpenWhisk on OpenShift? Yeah. I can go back to the slide. So why? The idea is that usually you interact with OpenWhisk through its CLI, and you need credentials to do that.
And the idea is, imagine you have a large Kubernetes cluster and you have an admin that knows how to work with this cluster and knows how to work with templates, but shouldn't have to know about OpenWhisk. That's the idea: this administrator can take their existing knowledge about templates and just apply it to OpenWhisk. You can use those templates to interact with OpenWhisk. That's kind of the idea, giving a more general way of interacting with your services. A follow-up question: could some of this be applied to other things beyond OpenWhisk, and is that planned for the future? So, this is purely experimental. This is not a product, and I don't know if there will be one. All right, we have two minutes left, keep it very short. My question will be short. If you're building an application, an Android application, and you're deciding whether or not a serverless backend is the right option for you, by what metrics do you make that decision versus a standalone service? I think that if you have stateless transactions where you just want to do some kind of computation on the backend, that's a pretty good use case for serverless. If you do something like simple lookups that involve maybe a dependency on a database, but not much else, that also might be a good fit for serverless. Everything more complex, especially everything that needs routing or implements different kinds of intents, is probably better suited to a standalone service. David, would that be your view as well? Yeah, that's pretty much my view as well. I just wanted to add that sometimes the choice might not even be there. If you're using serverless on AWS, then it's all managed for you. But if you're talking about on-premise, do you have the team to manage an OpenWhisk cluster or not? In which case it might just be ruled out completely, and you build your own app and manage that. I can give you maybe one practical example. Sorry, we have to wrap it up. Can you talk after? We can talk after, sure. Sorry about that, but it's a very tight schedule. No, that's fine. Thank you.

And if you have any questions for our next speaker, go ahead and raise your hand; he's going to repeat them because he has the microphone. So, with that. Yeah, awesome. Thanks very much. How many warnings on time do I get? I'd appreciate it if you tell me, let's say, five minutes before. Yeah, good. Thank you. So thanks all for being here, my pleasure being here. So this talk, and I have a lot of content, so I'm going to run. It's going to be recorded, so you can watch it even faster on YouTube later, or slower if you'd like. That's the good thing about things that go to YouTube. This talk is on four reasons why you need Istio. Or rather, I took a different spin and will talk about the basics of distributed systems, right? I think most people, when they're developing applications today, especially if they're in the microservice world, kind of forget that they've entered the realm of distributed systems; they don't realize it. And that's the problem, right? So that's my perspective. Things that you will not be seeing today: you will not be seeing source code. You will not be seeing cats, if you're expecting pictures of cats.
And you will not be seeing an introduction to Istio. There's a friend of mine here that's actually going to give a bit more of an introduction to Istio. Is that correct? Right, Saturday? Sunday. Sunday, all right. So go there. This is not an introduction-to-Istio session. Awesome. So my name is Diogenes Rettori. I work at Red Hat. This is my Twitter handle; feel free to follow me on Twitter. I mostly tweet about technology, every now and then a few personal things, but it's a good place to catch up on things. So let's go to distributed systems basics, right? First, let's talk about the advantages of writing distributed applications, of distributing what was once one big application. Distributed systems become more reliable and more scalable, just by the nature that what was once a single piece, and hard to scale, is now distributed into smaller pieces that can, or at least should, scale individually, right? Now the problem with that is that the engineering skills required to write and develop distributed systems are much greater than what you need to write a single monolithic app. If you're writing an application that's composed of 15 or 20 different pieces, that is more complicated than writing an application that's going to be packaged as one file and run on an Android or mobile phone, for example, just by the basics of it, right? Also, when dealing with distributed systems, there's an increased need for tools and patterns that facilitate the work, right? On patterns, I recommend a very good book by Brendan Burns, one of the creators of Kubernetes. He wrote a book on designing distributed systems. It's a short read that you can probably get through in a day; it's not even 200 pages, but I strongly recommend it. And I'm going to address some of the patterns that he talks about in that book in this session, so it's going to be good for that, right? Let's continue talking about the bad news in distributed systems, right? It's hard. One of the other pieces of bad news in distributed systems is that you're often dealing with chains of calls. What was once a call, or let's say many calls to functions that happened inside a single application, a single package, is now a set of calls happening across a distributed network of applications, right? And the problem with that is that when there's a problem with, let's say, one of your components, it becomes certainly much harder to debug. You can't just open a debugger and see what's going on with that specific piece of technology, because first you have to know where things are failing, right? So that is a problem: how do you know exactly where the problem is happening? This is one of the problems with distributed systems, just identifying where the fault is. Another problem that makes distributed systems hard is that you're dealing with different types of protocols. Again, if you had one big monolith, most of the calls were internal calls, library dependencies, or just calling a function from another function; you don't necessarily have to deal with different types of protocols. But when dealing with distributed systems, some communications are going to be message based, some are going to be HTTP based, some are going to use protobuf and gRPC, some are going to be file based.
Just the nature of now also dealing with different types of protocols makes it hard to develop distributed systems, right? So have that in mind when you're writing microservices applications: the realm you're in is the realm of distributed applications. Which brings us to a very important piece of this talk, the fallacies of distributed computing, right? These are mistakes that developers make when they're developing applications without thinking that they are now part of a distributed network of applications. Keep in mind that most of these concepts are from the 80s, and one of them was incorporated in the early 90s, right? So they are old concepts in software development, but they are very important. The first of them, and there are eight, is that we often develop thinking that the network we're dealing with is a reliable network. We somehow look at the network and trust it: I'm going to talk from A to B, and as a developer I expect, yeah, the call is going to get there, right? That's a problem; we develop assuming the network is reliable, and that's not the case. Another one is that bandwidth is infinite, and not only bandwidth as in the ability to transmit information, but the ability to process that information, because bandwidth stops being the problem the moment whoever's receiving a call doesn't have the ability to process that information. So more than bandwidth itself, it's the ability to process information. A funny note on bandwidth is that while from a technology perspective we've been able to increase bandwidth in communications by more than a thousand times over the last 10 or 15 years, latency is still a problem, right? Just so you know, if you make a call to a service from New York to London and back, with zero processing time, it's 38 milliseconds just in latency, right? We often don't think about that when developing applications, especially distributed systems. That's just physics, the basic speed of light: it takes 38 milliseconds for a round trip from New York to London. With that in mind, the next fallacy is that we often think networks are secure. Networks are not secure, and I'll give you some interesting examples later on. Topology doesn't change: this is one that, as a developer in my early days of development, I never thought about, and I'll talk about it. Latency is zero: I mentioned latency already; we think that communication is going to get where it's going immediately, and again, I don't remember myself thinking about this 15 years ago when developing an application. I assumed everything here was true in my source code and the things that I developed, which is wrong, right? Continuing: there is one administrator, right? That essentially means there's one person I have to talk to, there's just a single system, one group of people; this has become, let's say, not so much of a problem lately, but when the fallacies were written down, yes. Another one is that transport cost is zero. How many times do we think about the payload versus the overhead of a call, of a request, when we make it, right? I don't think we do it that often, right?
Unless that is a problem or impacts your business; we often see people doing it when it starts to impact the business, or when my network bill for AWS is starting to become a little bit complicated: what am I doing? How can I bring down the number of bits that I transmit? And the last ones are that the network is homogeneous and that the network doesn't change. So these are all fallacies. These are all things that I did, bad things that I did when developing applications, because I forgot that these are all lies, right? So have that in mind. And of course, this is a presentation on Istio, so I'm going to talk about the fallacies that I believe Istio can help you with. Istio can certainly help with more of them, but especially given the time, I want to touch on four of these, right? That the network is reliable. And please remember that what's here is not true, they're all fallacies; every single thing that's here is a lie, okay? Let's talk about the network being reliable, just introducing the concepts a little bit. There are a few ways you can think about the network. If you just consider the MTBF, the mean time between failures, of network equipment or servers, that is already an acknowledgment that things do fail. Then there's stacking that can be applied: if you stack routers or network components in parallel, then you roughly get twice the MTBF, but if you put them in series, then you get half, right? So again, there's just the physical nature of hardware failing, and this is a switch, by the way. And there is also the aspect that every single network that we run today is some sort of virtual network, not necessarily a cable directly connected to a switch; you also have to factor in that the machines that are running those networks are processing many, many other things as well, like any Linux environment running virtual networking alongside everything else. We don't think about that when developing applications, that the actual networks are not necessarily reliable, and these are servers. They look very bad, but they're servers. Now the second point which I'm going to address is that the network is reliable, and I took some poetic freedom here: I just changed the word network to endpoint. Many, many times, when we're connecting or interfacing with an external application or endpoint, we tend to think that that endpoint is going to be there for us, that it's going to be fine. We don't often design applications thinking that whatever endpoint I'm connecting to might not be available, might not be able to handle my request. So I'm just switching this one a little bit, from the network is reliable to the endpoint is reliable. And a big mistake that I made, which leads to a very strong recommendation that I'll give you, is that I always designed for reliable endpoints, assuming that whatever I was going to interface with was going to be there always, all the time. So I never had to deal with, let's say, alternative error flows for applications when integrating with other systems, because during testing we assume that if these applications are not running, my application is not going to work; if these two systems that need to work together are not up and running, then my application is not going to work. But many, many times I ended up compromising the experience of my application A because application B was not running.
So a very important recommendation is: design for unreliable endpoints, right? Design for endpoints that might not be there when you're developing. And again, especially with microservices, and whenever you hear microservices please also hear distributed systems and distributed computing, because that's what's really happening. The next fallacy that I'm addressing, and hopefully I'll talk about what Istio has to do with these as well, I'm okay with time it seems, is that the network is secure. And this is especially interesting, right? I'm not necessarily talking about the network itself, whether or not a physical network channel is secure, but whether or not the communication between applications is secure, right? Now, how many of you remember the Equifax breach? It's not even a year ago, right? I'm sure those of you that live in the US had to spend at least one hour of your time thinking, how does this impact me, right? Am I going to sign up for the package that Equifax is offering? All these things we had to think about. And when you think about what caused it, you can sort of blame the unpatched Struts vulnerability; that's how people got access to the technology. But then, there is unpatched software every day, right? We need to keep patching our software and fixing bugs every day. So is it enough, as a developer, to just blame unpatched software for a vulnerability like this? I think that's naive, because we all know that security is not a one-time exercise; it's a practice that you need to have. And I invested some time reading a little bit more about how the vulnerability happened, and it went sort of like this. There was an application, there were users here, evil users here, I should have put a sad face here, talking to an external-facing app that was using Struts. Struts was great technology back then, and the patterns in Struts are still used; it's a model-view-controller pattern, great for separating concerns inside applications. Now this Struts app was running inside the DMZ, the demilitarized zone, which is the type of network that separates an internal network from an externally accessible network, right? And this is what I mean when I say the application network was not secure: this Struts app had unrestricted access to other applications that it should not have had, right? That's how the breach happened: someone used the vulnerability to get access to the Struts application, and then this Struts app, even though it had a specified flow of information it should follow, had access to other kinds of components inside the application, which were then exploited to get our data, right? So that tells me that whatever application network they had there was not secure. So have in mind that you're not necessarily talking about the physical communication networks, but also about how much information you are exposing from your application to the network. Should a database allow information from 30 million subscribers to be retrieved? Should that have raised an alarm? Probably. Should I accept a call from a user when I don't know who that user is? Should I receive tons of calls from a user that should not be making those sorts of calls?
Those are all, let's say, red alerts that should tell us something wrong and weird is going on. So have that in mind: the application network is not secure. Now the other one is topology doesn't change. So here we are as developers, developing our application, so this is you developing your application, and you're on Stack Overflow because you don't know what you're doing, and then you develop your application and it runs on your machine and it's great. We always test some aspect of the application on our own machines. I'm just using this example to show that topologies do change, right? The topology of the network in which your application participates on your machine is different from how the application looks in QE or in production, right? We often think that the topology is not going to change, but just going from the development environment to wherever that application is going to run, a Java application, C, Go, the network of that environment is different from the network of other environments. So from the moment you're developing an application to the moment you're going to run that application, either in a QE environment or in a production environment, the application topology is going to change, right? Yes, that's the point, they are different. Now there are more complications, right, in terms of topology doesn't change. If you are dealing with sensitive information, either, let's say, PHI, which is patient health information under HIPAA compliance, or PCI DSS, the payment card industry standard, I forget the exact designation for DSS, there are rules that tell you that code running in production should be separated from code running in a QE environment, which means the network topology of a production environment is likely going to be different, right? This is a slide from a recent presentation given at the Boston Kubernetes Meetup that I run, and I love it because it was someone talking about how they run Kubernetes in a PCI DSS environment, so how to run Kubernetes in an environment where it's going to be handling and dealing with credit card information. And there is a rule in PCI DSS that says: separate development and test environments from production environments, and enforce the separation with access controls. This again just confirms that yes, topologies are different. If you're developing assuming that the way things look on your laptop is the way they'll look in production, that's probably not a good assumption. Now, given the fact that topologies do change, another problem comes up, which is how do we deal with changing topology in an effective manner? And this is where the bad news comes, right? This is data from the latest State of DevOps report, from 2017. And it says that only 28% of the high performing companies have automated configuration management for their applications, right? This is very bad, because what does high performing mean? At least in that context, one of the metrics is that you're able to turn around a code change into production in less than one hour, right? And this is not how long it takes for you to do a deployment; it's from the moment code is pushed to a repo, how long does it take for that code to be packaged, tested, deployed and running in production, right? The effective companies do that in less than one hour. This metric is called the lead time, right?
So the lead time runs from the moment a fix or a new feature is implemented, it's in code, it's in a repository, to the moment it reaches production. For companies that are ultra high performing, that lead time is less than one hour, from the moment the code is there to the code running in production, right? Passing, of course, many gates, many checks, sometimes even manual checks. But the problem is that dynamic configuration management is hard, and I can tell you it's hard because even among the very high performing companies, only 28% of them do it. So it's complicated, very complicated, and there are, again, manual steps involved. And the point about topology change and dynamic configuration management is that you should be able to dynamically know where things are running, right? If you need someone to change the IP of a database manually, that's not dynamic configuration management. If the endpoint of an application changes from one IP to another and someone has to update that manually, if the port changes and someone has to update that manually, that's not dynamic configuration management, right? Hopefully you know that there are ways to do this. And to talk about bandwidth is infinite, I'm going to have to derail this conversation a bit. So again, concepts from the Designing Distributed Systems book: there are three groups of patterns for designing distributed applications, right? There's the single-node group, which is techniques that you apply to applications running close to each other on a single node. So even though it's distributed systems, you can still take advantage of patterns such as the sidecar pattern or the ambassador pattern, where you bring some sort of intelligence close to the workload that you're running. There are also serving patterns, which are the more popular ones. You have a web server and you replicate that web server, you have 10 copies of that web server; that is a distributed system pattern. And batch, which, as the name says, is batch, right? So again, replicated stateless services are very popular: you just have a stateless application and you need to do something with it, so you replicate as many copies as you'd like, because they don't handle state, or the state is handled somewhere else, right? State is always handled somewhere; "stateless" is kind of complicated to say, because there's always state being handled somewhere. You just make many copies of the exact same application and you're good with that, right? Now sharding is another pattern for distribution, and it's especially used when there's a very large amount of data involved. Think about it: if the data is small enough that you can keep consistent copies of it on many nodes, then there are advantages in doing that. But when you have large amounts of data and it's expensive to keep copies of all of it, then you start thinking about sharding, which is when you distribute the data. You split the data up so that you don't have to store all of the data, all the time, in all the replicas, right? If you're doing sharding right, you should always keep in mind that having one replica of the data is not enough. If that goes down, you're going to have disruption, and you don't want disruption.
So sharded data deployments often come together with a replicated data model: there is some part that's sharded, and you also replicate part of that data somewhere. Then there's scatter and gather; this is more for processing. If you've heard of MapReduce or similar patterns, you distribute the processing load to many leaves, in a tree model, and after the processing finishes, at some point in time, you reorganize that data in a way that makes sense. So when you're dealing with data processing, that also comes up. And then there's batch. Batch is another one, of course, that's very popular: just normal scheduled jobs. There's a list of tasks and operations that you want to run, and you use a batch model for that. Now, my favorite, and the session right before this one was a little bit about it, is event-based, right? For me, in my opinion, the great advantage of serverless is that it's event-based, right? It's not request-response based, it's an event: you're going to process, you're going to consume capacity, hopefully only when you get an event that tells you that you should do that, right? Often, messaging-based systems are event-based. So as I said, today the very common types of distributed computing that are event-based are serverless and function as a service. Have you ever seen a serverless data center? This is how a serverless data center looks. It's amazing, right? How did they do that? We don't know. Oh, sorry, I lied: there was a cat after all, all right, good. So congratulations, you are now certified distributed system developers, right? I even have my certificate that I printed for myself. So after finishing this part of the presentation, I am now an officially certified distributed system developer, by the Distributed Systems Institute. Of course, I just made this up. But the point is that now that you know some of the hard problems that are involved in developing distributed systems, you should factor that into your development, into your day-to-day activities, okay? Now, finally, I only have, I don't know, a little over 10 minutes, and I'm finally going to talk about something that was in the title of this presentation. But for me it was very important to give you the idea that, yes, microservices development is distributed systems. There are tools and patterns, and when I talked about tools and patterns that make your life easier when developing microservices applications, this is one of them, and it's a very good one, right? So the first thing that I'm going to address using Istio, and when I took the picture for this slide, apologies for the lack of focus there, is that the network is secure. This is a lie, so the network is not secure. So what do we do then to make sure that we can handle the problem of network security? One of the things that Istio has out of the box is the ability to support mutual TLS, mutual transport layer security. That means that if you're talking from endpoint A to endpoint B, they know each other, they are trusted, and you have formally established that they can communicate with each other. And you know why this is important? Because if we assume the network is secure, and it's not, it might be the case that an application, like that Struts app, accesses another one when you know as an architect that that flow should never happen, right?
So again, Istio with mutual TLS can help you with that: it allows you to formalize the information flow. If you, as the architect or the data architect for the application, know what systems should be used for a specific business function, then that should be formalized, right? This application can only talk to this one, and no one else, and under these circumstances. So have that in mind: Istio with mutual TLS allows you to specify a formalized, secure communication channel, right? It's not only, okay, I'm going to add transport layer security, and everybody has transport layer security but everybody can still talk to me; now you're actually specifying who you are expecting to be interfacing with. That helps to reduce the risk. Again, security is a practice, right? You always have to do these sorts of exercises. The other one is topology doesn't change, and we all believe now that topology does change, right? You understand that just from your basic development environment to the QE environment your application topology is going to change, right? And in order to help with that, you need to have dynamic configuration management in mind, right? Which is hard, but if you have dynamic configuration management in mind, and you do it as part of your activity, you're going to have less trouble. One of the things that Istio does to address dynamic configuration management is that there's a component in Istio called Pilot that knows where the endpoints are, and as new endpoints come and go, it gets notified of their existence. So if endpoint foo version three comes up, Pilot is notified that endpoint foo version three is up and available. And for anyone that wants to interface with that endpoint, Pilot will know exactly where it is and all the versions that are available for it, and even more, how many times you can call that endpoint and under what circumstances. So again, to address the fact that topology does change, and dynamic configuration, Istio can help you with that. That was reason number two for Istio: dynamic configuration management of endpoints. Very good. So this is the example: let's say my application was running on node A, under a certain IP, in a certain network, and that changes; it's now running in a different network under a different IP. Still, Pilot is going to be notified of this change, and you can have access to that information if you want to call that application, right? Again, topology will change, because it does change. Now, the network is reliable one is also very interesting, right? Because sometimes we compromise our application because we didn't give the real users of our application the priority that they need, right? Say I have an application calling from A to B, right? As the owner of application A, do I want to disrupt my own users' experience because B is not running? What sometimes happens is that, knowing that something is failing, we still wait for the failure, right? I'm going to talk to application B, I know it's down, and I'm still going to wait for a timeout. So the user experience, the end user experience, gets disrupted, even when you know that the other application that you're interfacing with is not responding.
So why wait for a failure if you could know right now that it's not working, and keep the call local, right? One of the things that Istio does, through circuit breaking, is open the circuit: it knows that that application is not responsive, and every so many calls, or every so many minutes, it can try that application again to see if it has come back up, right? The fact that you're not waiting for failure improves the overall response time of your application, right? Because you're not waiting for things to fail and then doing something; you already know that they are down, right? And then the fact that there's a centralized repository that knows which applications are up or not means that any new application that wants to access the same application will also know that it is failing, and can keep the call local, right? Again, increasing reliability by not having to wait for failure. And to address bandwidth is infinite, there's rate limiting. Rate limiting is most often associated with API management, but I think API management is changing a bit. It used to be seen as a technology only for external APIs, but more and more we see internal APIs. If you're dealing with microservices, with multiple applications, with distributed systems, you actually have multiple APIs internal to your applications. And in the same way that, say, if you have ever used the Google Maps API, if you want to make more than, I think, five calls per second you have to pay, you should also have that level of control inside your organization, right? There are more important applications inside your organization, and you should prioritize those, right? In the sense that if you have a mission-critical application and others, you should probably give that mission-critical application the ability to receive more requests than just a regular application, right? Imagine that you might otherwise have other applications impacting a mission-critical application. So one of the things that Istio can do as well is add rate limiting to your application, so that you don't fall into assuming that your bandwidth is going to be infinite, right? You can do that per individual user, for example on a JSON Web Token basis: you can give individual users different levels of, let's say, permission to your application in terms of the number of requests and requests per second, right? So given that I have two minutes, let's go to the summary, right? Addressing the four distributed system fallacies that I talked about: the network is not reliable, and for that the recommendation is that you can use circuit breakers. The network is not secure; the recommendation is that you can use mutual TLS. Topology changes; the recommendation is that you use dynamic configuration management and discovery, and Istio can do that. And bandwidth is finite, so the recommendation is that you use rate limiting for that. So that's it. My Twitter handle is rettori, and that's what I wanted to talk to you about today. Thank you very much. Now we do have a couple of minutes for questions, so if you have a question, I can repeat the question. Yeah, four minutes. All right, I'll be in the back if you want to talk to me individually; this is a topic I'm very passionate about, so feel free to reach out to me. Thank you very much. Have a good day.
Pick your box, that's the one. Pick a box, that's the one. Yeah, that was going for it. Three years. Almost four. All right, good afternoon and welcome to this session. This is the session on next-generation security for Java servers. It is being presented by Farah Juma, who is a senior software engineer at Red Hat working on the WildFly project. She has been focusing on application server security for the past few years, and I'll leave you with her, so go ahead. Thanks, so welcome to this session. Today we're gonna be talking all about Elytron, which is a new security framework for the WildFly application server. In particular, today we're first gonna go through some security history to understand the motivation for introducing Elytron. Then we're gonna jump right into what Elytron is, and we'll go through its core concepts for both server-side authentication and client-side authentication. And then we'll go through a demo so you can see how to secure an application that's been deployed to WildFly using Elytron. So historically, Java application server security has been provided by the Java Authentication and Authorization Service, also known as JAAS. Now, as its name suggests, JAAS is a set of Java APIs designed for authentication and authorization. JAAS implements the Pluggable Authentication Module model, and since authentication is performed in a pluggable way, it allows applications to remain independent of the underlying authentication technologies. So it was common for application servers to make use of JAAS login modules for username and password verification. Now, in the early days of application servers, this solution was fine, but as finding ways to improve security became more important, this simple solution was no longer adequate and it actually became difficult to use effectively. So for JBoss Application Server 7, which is the predecessor of the WildFly application server, we really wanted to switch to stronger authentication mechanisms. Specifically, we wanted to be able to use SASL-based authentication for our native interface. Now, SASL is a challenge-response-based protocol. So the way it works is that the server issues a challenge to the client, the client responds to the challenge, and the exchange continues until the server is happy and doesn't send any more challenges. Now, the main problem with switching to SASL was that it was totally incompatible with JAAS. So we actually ended up with two security solutions, one based on JAAS for applications, one based on SASL for management, and we had integration between the two. Now, obviously having two different security solutions that solved the same authentication problem in two different ways was not ideal, and it became confusing for both users and developers. So this led to the creation of the Elytron project, in order to provide a single unified security solution across the whole WildFly application server. So what is Elytron? Elytron is a set of Java APIs and SPIs for application server security. Now, in addition to providing a single unified security solution for WildFly, we also had a few other objectives when introducing Elytron. In particular, we wanted to support stronger authentication mechanisms, so we wanted to move beyond the JAAS login modules.
Next, we also wanted to centralize SSL configuration so that different parts of the application server that require SSL could make use of the same centralized configuration. Next, we wanted to be able to support identity switching and identity propagation. And finally, we wanted to provide integration points to make it possible to create custom implementations that could extend Elytron functionality if necessary. So within WildFly, Elytron is used to secure applications that are deployed to the server and to secure management access to the server. It's important to keep in mind, though, that although Elytron was developed for WildFly, it's actually a standalone library that can theoretically be used in other Java server environments. So Elytron covers the two main security areas, authentication and authorization. Now, just as a reminder, authentication involves verifying someone really is who they say they are, and authorization involves verifying that they're actually allowed to access a resource. So Elytron's APIs are based on a few core components, and we're gonna go through these components now. The most important component is the security domain. A security domain is a representation of a security policy, and it's backed by one or more security realms and a set of resources that can perform transformations. The security realm provides access to an identity store. So it can encapsulate a database, an LDAP server, a properties file, a key store and so on. Now a security realm can be used to obtain attributes that are associated with an identity, and to obtain or verify credentials that are associated with an identity. Some of our security realm implementations also expose an API for modifications, so that means it's possible to make updates to the underlying identity stores: you can add users, remove them, update them and so on. Next, we have a realm mapper. A realm mapper is associated with a security domain, and it's used in cases where the security domain is backed by multiple security realms. So a realm mapper takes the username that's been provided during authentication and uses it to determine which security realm should be used to obtain the identity information for that user. As an example here, you can see that this security domain is backed by three security realms. So when the authentication process starts and we see a username like alice@redhat.com, this security domain needs to determine which of these security realms it's going to use to obtain the identity information for Alice. So it takes its realm mapper and it maps alice@redhat.com to a security realm. In this example, it's been mapped to the LDAP server, so this will be the security realm that's used to obtain the identity information for Alice. Next, we have a principal transformer. It can be used to map a name to another name. So in this example, it maps alice@redhat.com to just alice. This can be useful if the identity store has usernames in a different format than what's being provided during authentication. So once authentication has succeeded, the security domain produces a security identity, and that's the representation of the current user. Now, resources that need to make authorization decisions can be associated with a security domain. That security domain can then be used to obtain the current identity, and its roles and permissions can then be checked in order to make authorization decisions.
Now a security identity's roles and permissions are determined using resources that are associated with the security domain. In particular, a role decoder can be used to decode the current user's roles. It takes the raw identity information that's been obtained from the security realm and uses it to map its attributes to roles. So in this example, the role decoder determined that alice@redhat.com has two roles, admin and employee. Next, we have a role mapper. It can be used to apply a role modification to an identity, so it can be used for normalizing roles, or adding or removing roles. In this example, it adds the prefix "RedHat" to each role. Finally, we have the permission mapper. It can be used to assign a set of permissions to an identity. So in this example, the security identity for Alice gets mapped to two permissions, the login permission and the run-as principal permission. So far we've taken a look at the resources that back a security domain. One thing to note is that it's possible to configure a security domain to inflow a security identity from another security domain. When an identity is inflowed, it retains its raw original identity, but it gets assigned a new set of roles and permissions using the new security domain's role decoders, role mappers and permission mappers. So you actually end up with a new security identity. Another important component is the authentication factory. An authentication factory represents an authentication policy, and it's a factory for the configured server-side authentication mechanisms. Elytron provides both HTTP mechanisms, like digest, form, client-cert and so on, and SASL mechanisms; some examples are DIGEST-MD5, SCRAM and GS2. Next, the SSL context is used to define all policy information related to SSL. In addition to the usual configuration for an SSL context, like key managers and trust managers, Elytron allows you to provide configuration for additional things like cipher suites and protocols. Elytron also provides secure credential stores, and these are used for secure storage and use of credentials. The way they work is that they allow you to associate an alias with a credential. You can then reference that alias directly in the WildFly configuration file, so you don't have to specify your credential directly. So as an example here, you can see you'd specify something like credential-reference, then you'd give the store name, which is the name of the credential store that you want to use, and then you'd specify which alias you'd like to use, and that represents the credential that you want to reference. So far we've been talking about the Elytron components for server-side authentication. Now Elytron also provides a set of Java APIs and SPIs for client-side authentication. When I say client side, I just mean the client side of the connection that's being established; it is also possible to use client-side authentication in a server environment, when you're connecting from one server to another server. Elytron's client APIs allow remote clients to authenticate using Elytron. So we're now gonna go through the components for the Elytron authentication client. The first component is the authentication configuration. It contains all of the information that will be used when attempting to authenticate. So this includes things like principals, credentials and the authentication mechanisms that should be used.
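Jumping slightly ahead to the authentication context described just below, a minimal programmatic sketch of such a client configuration might look roughly like the following. It assumes the classes in the org.wildfly.security.auth.client package; exact method names can vary between Elytron versions, and the host, user and password are placeholders, not values from the demo.

```java
import org.wildfly.security.auth.client.AuthenticationConfiguration;
import org.wildfly.security.auth.client.AuthenticationContext;
import org.wildfly.security.auth.client.MatchRule;

public class ClientAuthSketch {
    public static void main(String[] args) {
        // Placeholder user and password; a real client would usually pull these
        // from a credential store or the wildfly-config.xml file.
        AuthenticationConfiguration config = AuthenticationConfiguration.empty()
                .useName("alice@redhat.com")
                .usePassword("secret".toCharArray());

        // The match rule decides which configuration applies to which connection.
        AuthenticationContext context = AuthenticationContext.empty()
                .with(MatchRule.ALL.matchHost("server1.com"), config);

        // Code executed under the context picks up the matching configuration
        // whenever it needs to authenticate.
        context.run(() -> {
            // open the remote EJB / management / HTTP connection here
        });
    }
}
```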
Now you can also use credential stores here, so you don't have to specify the credentials directly in the authentication configuration. Next, we have SSL context configuration. This is just like the SSL context configuration on the server side, so you can specify things like key managers, trust managers, cipher suites and protocols. Finally, we have the authentication context. It consists of an ordered collection of match rules that are used to determine which authentication configuration and which SSL context configuration should be used when attempting to authenticate. So for example, you could have one authentication configuration that gets used when connecting to one server and a different authentication configuration that gets used when connecting to a different server. Now, it's possible for Elytron client-side authentication to be configured using an XML configuration file. This file can be specified using the wildfly.config.url system property, and if that property hasn't been specified, then Elytron will look in the class loader's root directory and its META-INF directory for this file. So this is an example of a very simple client-side XML configuration file. It has one authentication configuration specified and no SSL context configuration. If we look at the top, we can see that there's one match rule, okay? And it says that when we connect to server1.com, the default-config authentication configuration should be used. And the default-config authentication configuration says that we should be using alice@redhat.com as the username, secret as the password, and we're gonna use the DIGEST-MD5 SASL authentication mechanism. So this information gets used when we're attempting to authenticate. Now, it's also possible to specify client-side authentication programmatically. This example is just like the previous example: it has one match rule that says when we connect to server1.com, this is the authentication configuration that should be used. And again, we're saying that we want to use alice@redhat.com as our username, secret as the password, and we're also specifying that we want to use the DIGEST-MD5 SASL authentication mechanism. Next, we can specify the code that we want to run under this authentication context. So as this code is running, when it attempts to authenticate, this is the information that's gonna be used to do that. So those were the Elytron components for client-side authentication. Now, while the majority of WildFly users are probably going to use the functionality that's already provided by Elytron, it is possible to use Elytron's APIs and SPIs to create custom implementations that extend Elytron functionality. So for example, you can implement custom authentication mechanisms, SSL contexts, credential stores, and password implementations. And these can be registered using Java security providers or Java service loader discovery. Now, out of the box, WildFly still uses its legacy security subsystem by default; however, Elytron is already installed and it's ready to be used. The reason for this is that we want to minimize disruption as users migrate to Elytron. So in the future, WildFly's legacy security subsystem will be completely removed and Elytron will become the default. So now we're going to take a look at an example application that I've deployed to WildFly. It's a simple inventory application for a store called WildFly Widgets. It has two servlets that invoke an EJB.
Both servlets have a constraint specified that says that only users with the employee role should be allowed to access these servlets. The first servlet is the inventory servlet, and it allows you to view the list of products that are currently in stock. It invokes an unsecured EJB method called getProducts, and that retrieves the products that are in stock. Our EJB is called the products bean, and you can see that it's associated with a security domain called "other". Our next servlet is called the add servlet. It allows you to add a product to the list of products that are in stock. It invokes an EJB method called addProduct that requires the admin role. So as you can see here, it has this roles-allowed annotation, which specifies that you must have the admin role to invoke this method. So we're going to create an Elytron file system security realm with two users, Alice and Bob. Alice is going to have both the admin role and the employee role, and Bob is only going to have the employee role. We can create this file system security realm using WildFly CLI commands. In particular, we can use the filesystem-realm add operation to create a new security realm called example realm. The path attribute here specifies the location on your file system where you want this new Elytron security realm to be stored. Okay, so we can also use CLI commands to add our users. We can use the add-identity operation to add the Alice identity to our file system realm called example realm. We can then use the set-password operation to set Alice's password to Alice123+. Now, notice I'm just specifying a clear password here, but it is possible to specify many different password types; as an example, you could specify a salted digest password here if you wanted to. Next, we can use the add-identity-attribute operation in order to assign Alice two roles, and we're assigning her employee and admin. Okay, and now we can repeat the process for Bob, but we're only going to assign him the employee role. Okay, so, oops. So now that we have created this file system security realm, we need to add it to an Elytron security domain. Now, the default WildFly configuration file already has an Elytron security domain called application domain. And so we're gonna take this Elytron file system security realm that we just created and we're going to add it to the list of security realms that back this existing domain. We can use the list-add operation to add this example realm to the existing list of realms. And then we can set the default realm that's used by this security domain to the realm that we just created. Okay, so to indicate that security for our servlets and our EJB should be handled by Elytron instead of by the legacy security subsystem, we need to add some configuration to the Undertow and EJB subsystems, and we do this in order to map the security domain name that's been referenced in our application to the Elytron security domain that we want to use. So if you recall, our application referenced the "other" security domain, and there it was specified using a security domain annotation, but it could also have been defined using a descriptor file. So we take that security domain name that's been specified in our deployment and we add an application security domain mapping in our Undertow subsystem to map that name, other, to application domain. And again, application domain is the Elytron security domain that we want to use. And then we repeat the process in the EJB subsystem.
We add another application security domain mapping. Now, there's just one more configuration change that we need to make in order to secure our application using Elytron, and that's to update the HTTP remoting connector to reference the SASL authentication factory that is backed by our Elytron security domain. The default WildFly configuration file already defines a SASL authentication factory that's backed by our application domain, so we're just going to update our HTTP remoting connector so that it references that existing SASL authentication factory, okay? And it's called application-sasl-authentication. And finally, we can undefine the security realm attribute that's defined for the HTTP remoting connector. We can do that because that attribute is a legacy attribute, and so it's not going to be used when using Elytron. So now, once the reload operation is executed in the WildFly CLI, everything will have been set up so that our application will be secured using Elytron instead of our legacy security subsystem. So now let's take a look at our application. Again, you can see that there are two buttons, view inventory and add product. So let's try to click on view inventory and let's log in as Bob. Now remember, Bob only has the employee role. Okay, so we're going to log in. And we can see that we're able to successfully view the list of products that are currently in stock. Again, that only requires the employee role to be able to access this page, and so we're able to do this successfully. So if we go back and now we try to add a product. Okay, so let's say we're adding a football. Let's say its price is $17.99 and that we have 40 in stock. Now, this time, if we click submit here, we get a failure message that says that only an admin can add a product. And that's expected, because Bob is not an admin. So now if we go back and we try to view inventory again, but this time as Alice. Okay, so we can see that, again, we're able to successfully view the list of products that are currently in stock, since Alice has the employee role. Now if we go back and we attempt to add a product again, let's use the same information. So let's say football, $17.99, 40 in stock. And we click submit. This time we get a success message that says that the product was successfully added, and that's because Alice is an admin. And now if we go back, we'll see that the football now appears in the list of inventory. So today we've talked all about Elytron, which is a set of Java APIs and SPIs for application server security. We went through an example of how to secure an application that's been deployed to WildFly using Elytron. And the important thing to keep in mind is that Elytron is a standalone library, so it can theoretically be used in other Java server environments. Thanks. Any questions? Thank you very much. Can you repeat the question? So the question was, if we wanted to use Elytron in a Tomcat server, what would we have to do? So you'd have to take a look at how security is currently configured on the server side, and then you would plug in these different Elytron components. So you'd need to have a way to configure things like authentication factories, security domains and security realms. And as long as you have a way to configure that, then the server can make use of it. And then on the client side, it's just a matter of adding an Elytron dependency, and you can make use of the remote client APIs.
Now, we did actually do some prototype work looking at how we'd integrate Elytron with Jetty, so we do have a proof of concept for that. And I think as part of our future work we'll be looking to see how we can integrate with other Java servers as well. It is, it's on GitHub. I can point you to it, oh, okay. The Spring Security framework? So I don't know too much about the Spring Security framework, so I'm not 100% sure how to answer that question, but does it have similar concepts, like security identities and security domains? Not as integral as this, but you have to define your own permission scheme. Okay, so one advantage of Elytron is that you can define everything in one place. In Spring it might be that you have to define it in multiple different configuration files, and maybe you're specifying the same thing multiple times in different places, things like that. In that sense, Elytron is better, because you can define everything just in the Elytron subsystem within WildFly, and that's your one spot for security for all the various subsystems. The service has been updated, it's an API service which can be spoofed, a component which is similar to that, is there any integral way to... So we can support things like two-way SSL and so on. So I don't think spoofing would be a concern there, but I guess it's something that should be looked into further. Or RxJava or RxJS, yeah. So while we're killing time, tell me, we have like three minutes before we start, so why are you interested in this topic today? Two minutes, two minutes to go. So why are you guys interested in this topic today? Where are you thinking of using reactive, on the backend? Okay. Who else? Why are you interested in it? All right, excellent. See, I'll just talk for two minutes and kill the time until we start. And Sarah Jane, you wanna come up here again? And you'll give me the thumbs up, right, when I'm ready to go. Oh, we're ready to go? Do you want us to go ahead and start? Oh, I'm sorry. You give me that. All right, sorry. You can do it straight away. I'll just do it like this. That should be okay. Good afternoon, welcome to the session. This is called Get Reactive: Programming Reactive Systems and Microservices. The speaker is Jeremy Davis from Red Hat. Jeremy Davis is a Principal Solution Architect for AppDev. Before joining Red Hat, he wrote a lot of code in JavaScript, Visual Basic, Ruby, Python, C, C#, Objective-C, and, of course, Java. And he co-leads Red Hat's Microservices Community of Practice. Welcome. Thank you. All right, thank you for joining my talk about reactive. I've given this talk, or a variation on this talk, a number of times. It usually runs longer than 35 minutes, so I'm going to go really quick. But feel free to interrupt. All right, I love when people interrupt. And I have stickers. I'll give you a sticker early if you interrupt me and ask questions, right? So audience participation is highly encouraged. Do you want a time mark? Yeah, please. Give me like 10 minutes. 10 minutes and we'll make sure we're going through stuff. So you just got a good bit of my background here. I started out as a .com web monkey; that's when I began my career, all on front end, doing a lot of JavaScript. This is the last time I was at Boston University. My father was a professor here.
And that was probably the last time I was on ice skates, because after we left Boston, we went to South Carolina, where you don't get a lot of opportunity to get on ice skates, right? But I have skated at the Boston University ice rink a decade or three ago. So we're going to start off. And I really wanted you guys to have my email address and Twitter handle, too. So these will be available; I'll put them out on SlideShare, and they'll be available from the conference as well. So we're going to start, first of all, with why this matters. A couple of people said they're already using some reactive toolkits, are interested in the topic, have heard of it. And I'm going to give you a little bit of a different angle: I think the user experience is why reactive matters. Now, this guy's a user experience guru. But I have colleagues who are user experience gurus as well. And so I asked Sarah Jane, who you can find out here, to kick us off talking about what Red Hat does around UX. Sure. My name is Sarah Jane Clark, and I'm on Red Hat's user experience design team. I'm the lead user experience researcher on developer-focused products, which is why we're here today. And what we're doing in our booth out here are actual usability tests. So we have four products that we're looking at, including PatternFly, which is our design system, OpenShift.io, OpenShift, and the new developer website. And what we're trying to do is get feedback, because that feedback is what helps the designers know how to design the products. So it's super important for us to understand what's important to you, what isn't important to you, and what you think about our products. So that's the feedback we're getting. We have all kinds of goodies. So if you have five minutes for me, I would love to hear your feedback. Thank you. So who writes front-end code? A couple of people. Who writes back-end code? Everybody, right? Everybody. So now, normally when we talk about user experience, we talk about the front end, right? That's what we think about, in terms of our layout and design and navigation. But this guy, Jakob Nielsen, and I mentioned I started as a .com web monkey, right? This guy was like the guru at the turn of the century around web design, which was kind of amusing because the guy's website was completely boring; it was just text. But he was the guru, right? And he has these three numbers. He started his career, I think at IBM, doing mainframe usability. I know that's kind of an oxymoron, right? But he did mainframe usability, and then at Sun Microsystems he did fat client stuff, and then moved on to the web when the web was taking off, and now does mobile and web consultancy. And he has these three numbers, 0.1, 1 and 10, and this all relates to how you feel something is responding. At 0.1 seconds, you feel like you're interacting actively with your application, whether it's a website or whether it's moving your mouse around or using your clicker or whatever it is. At one second, you begin to notice a lag. So if I were to click this and it took a second for that slide to change, or if I click a button on a webpage and it takes a second, you begin to notice a lag, but it's okay. At about 10 seconds, you're gonna abandon what you're doing. And this is why you get the spinning beach ball of death on a Mac, or why you get icons on a website that tell you something's happening, right?
Because it's giving you feedback saying, you know, hang on, wait. And the interesting thing is that these numbers, and I mentioned he started off doing mainframe stuff, did fat client stuff, did web and mobile, these numbers have stayed the same across all those paradigms. So there's something that's uniquely human about these numbers, right? And if we write backend code, we have to deal with this, because we have to get a response to our users inside of these timeframes. And reactive is responsive. And so that's why we started with user experience and being able to deliver, right? There are also interesting, fun tools, and we're gonna look at several of these tools. I'm assuming that most people have heard of at least Node.js; some of these others are probably gonna be new. Vert.x is a Red Hat project. I have Vert.x stickers up here, and we've got some Vert.x stickers out there as well. And we'll take a dive into Vert.x, Akka, and Reactor, actually four technologies. Four technologies in 35 minutes, or 30 minutes. All right, so we're covering really three topics, but the two big ones are reactive systems and reactive programming, and then we'll talk about how those feed into microservices architecture, where they make a lot of sense. And we're gonna do both programming and systems, right? So this programming might look a little weird; hopefully it won't look too weird in just a few minutes. And this is about reactive systems. This comes from the Reactive Manifesto, which is a manifesto, and you can all go sign that right after this, right? I'm a proud signatory of the Reactive Manifesto, as is the Vert.x team; the Vert.x team guys have all signed it. All right, so everybody in here is a programmer who writes code, right? Okay, excellent. So traditional programming, imperative programming, what you normally do: the way we write code is we call a method, we get the output from that method, store it in a variable, and then we do something useful with that variable, right? The way this looks is we have a method here, not a very useful method, I know, it just had to fit on my slide, right? And then we call this compute method, we store the result in this variable, and then we do something useful; in this case, we put it out to the console, right? But this is how programming works, right? It's how we've done programming for a long time. Unless you've done front-end code, right? I mentioned I started out as a web monkey doing a lot of user interface stuff; it's the same if you've done fat client user interface stuff. So asynchronous programming is a little bit different, right? We do callbacks. So instead of the notion of calling a function and storing that result, we create a method that does something useful, and then we define an asynchronous call to this method, and then we do some other stuff until it comes back. And what that looks like is we have this compute method again, like, really interesting, but we've added one thing here: we've added a callback. It's getting passed in. And if you've used JavaScript or Ruby, you can think about these as closures, right? A lambda here in the Java world, right? So when we call this method, instead of getting the result back, storing it and doing something with it, we pass in our values and we pass in the function to tell it what to do, right? So we're passing in this handler and it just calls handle. And this is a Vert.x construct, an async handler.
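The slide itself uses Vert.x's async handler type; as a rough stand-in for what's on screen, here's the same blocking-versus-callback idea sketched with a plain java.util.function.Consumer as the callback. The method names and values are made up for illustration.

```java
import java.util.function.Consumer;

public class BlockingVsCallback {

    // Imperative style: call the method, wait for the value, then use it.
    static int compute(int x) {
        return x * 2;
    }

    // Asynchronous style: hand in a callback that gets invoked with the result
    // later, so the caller never blocks waiting for the answer.
    static void computeAsync(int x, Consumer<Integer> handler) {
        new Thread(() -> handler.accept(x * 2)).start();
    }

    public static void main(String[] args) throws InterruptedException {
        int result = compute(21);              // blocks until the value is ready
        System.out.println(result);

        computeAsync(21, r -> System.out.println("async result: " + r));
        System.out.println("doing other work while the computation runs...");
        Thread.sleep(100);                     // only so the demo prints before exiting
    }
}
```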
But the notion is we're passing in a piece of code that's gonna do something, and that code will get executed from inside of that method, but it's not gonna block, it's not gonna make anybody wait for this to happen. That's very efficient, but callbacks can kind of lead to stuff that looks like this, right? And there would actually be more code inside of here; actually we'd have the failure handling code, and we can end up with these really big nested callbacks, right? And this is usually referred to as callback hell, and this can be kind of difficult to navigate, right? It can get tough to read, and that's why your IDE has those little switches on the side, right? You can collapse your code and try to figure out which bracket matches which, or where a bracket is missing, right? So when we get to reactive programming, the rest of the stuff that we're gonna talk about is largely ways to deal with this or make it easier. So it also kind of feeds into a user experience for us as developers. And we don't usually think about user experience in that way, but if you're writing a library that somebody else is gonna use, you have a user that will have an experience, right? And we don't usually think about usability in that way, but it becomes really important. And as developers, we all like the things that make our lives easier, and we like nice, clean APIs, right? So I'm now gonna stop talking for a while and actually look at some code, right? Because you guys wanna see code, right? More than you wanna see my slides, right? All right, so if you guys can see this, we're gonna build a little bit of RxJava, right? We'll get into what ReactiveX is and where it comes from. But I mentioned that this is about building APIs around those kinds of callbacks and asynchronous programming to make it easier for you to do that. And what we have here, you can see, is this thing called an observable, and we're subscribing to this observable. And right here, we've just got a list of strings, right? If we run this, it doesn't do anything really cool. It just spits out the array, right? No big deal. So that's the first introduction to reactive programming: it spits out strings. But the next thing we're gonna do here is we're gonna take the same method and we're gonna add some stuff here, and it's gonna change what we do. Whoops, did I run the same one? Ah. So now I split the words out, so it's not one object. Now we're gonna add some other stuff. I'm gonna treat this as an iterable, so I'm not gonna have to do the traditional way of iterating over something, right? You can see I use this fromIterable, and then I'm gonna do something called zipWith. Whoops, I missed one first. Let me do something boring first again. I'm gonna say Observable and I'm gonna do a range and I'm gonna pass in some numbers and I'm gonna spit out the numbers one through five. Again, like, that's not real interesting, right? We just spit out one through five. It gets interesting when we get to this one, because now I've got two observables here and one of them is doing that range thing, right? So this is gonna create numbers and just return a number, and then I'm passing in my lambda here and I'm telling it to spit out the number and the word, and I'm using this zipWith thing here. So zipWith is gonna take these two streams of data and concatenate those streams of data for me. So now I've got this, right? And not a lot of code that I have to write to do that. So now we'll do something a little more interesting.
We'll split this out, because now we're gonna start doing some data analysis, right? We're gonna write a real program. So we have "the quick brown fox jumped over the lazy dog", right? What they always tell me when you start typing, right? This is every letter in the alphabet. So I wanna find out if it is every letter in the alphabet, which, by the way, this example was not my idea. This was another guy's example, and I could not find his post again on DZone, though. So, unknown guy, I have to give you a lot of credit, because this is a great way of explaining this. So now we've got 36 different letters, but that's not really so useful, is it? Because we wanna find out exactly which ones we have. So now we're gonna split our words up and we're gonna call a modifier called distinct and add that into the mix. And that's gonna show me that I have 25 different letters. The quick brown fox jumped over the lazy dog, which is one shy of the alphabet, right? Or the English alphabet. So now let's throw in another one called sorted. So now we can see what we're missing. What are we missing? We're missing an "s", right? So we can come down here. That's how quick it is. So, live data analysis, right? Yeah? I will get to that in just a second. I will explain this in real, actual detail. So, observable stuff. When we do all this stuff, we can go from, I'm still gonna give you a sticker, because thank you for the question, for interrupting me. And I promise I am gonna answer that. We'll make sure to repeat the question. I will repeat that. The question was, what does subscribe do? And we will jump into what subscribe is. So we can go from simply having an array, to sorting, to taking two streams of data and concatenating that data into something that we can use, right? And this is what Rx is about, Reactive Extensions. Now let's go back to my slides here and we'll explain kind of what we were just looking at. So I mentioned that asynchronous programming is different from imperative programming, right? Because we create our method that does something useful, we define an asynchronous call, and then we do other stuff until that call returns. Well, that's what we were doing there, but we used some constructs on top of it that kept us from getting into that kind of callback syntax. And what we did is we created a method that called something useful. We defined the asynchronous call, and that observable object was how we defined that asynchronous call. Then we attached an observer to the observable by calling subscribe. And so when we call subscribe, we're saying, okay, I'm watching you. And this is an important construct in the Rx world: until you attach a subscriber, it's not gonna do anything. It's just a method. It's never gonna get executed. Subscribe means, like, okay, actually do stuff. It has to be observed. It has to be observed for it to execute anything. Now, there are subclasses of observable. I'm going really quick, so I'm not gonna get into all of them. In RxJava 2 there's a couple of things, Flowable, Completable, that have different use cases, but they're subtypes of observable. And then the other big piece is, so one, we had this method that does something useful that we observe, right? We call observe on it. And the other piece is all of these modifiers that we saw, right? groupBy, flatMap. flatMap means we're gonna be pulling in multiple pieces of data and concatenating them into one thing that's useful.
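As a rough reconstruction of that live demo, not the speaker's exact code, the pangram check might look like this with RxJava 2; the class name and output formatting are just placeholders.

```java
import io.reactivex.Observable;

public class PangramCheck {
    public static void main(String[] args) {
        String sentence = "the quick brown fox jumped over the lazy dog";

        // One emission per letter; nothing runs until subscribe() attaches an observer.
        Observable<String> letters = Observable
                .fromArray(sentence.split(""))
                .filter(s -> !s.trim().isEmpty());

        // zipWith pairs two streams: here, a running number with each letter.
        Observable.range(1, 100)
                .zipWith(letters, (i, letter) -> i + ": " + letter)
                .subscribe(System.out::println);

        // distinct() plus sorted() shows which letters of the alphabet are present.
        letters.distinct()
                .sorted()
                .subscribe(System.out::println,
                           Throwable::printStackTrace,
                           () -> System.out.println("done"));
    }
}
```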
So, in the real world, what's a real-world use for this kind of thing? It's also, it's like, remember the Gang of Four Observer pattern, right? It's kind of a lot like that, right? We've got some extra methods on here, right? So onCompleted and onError are key pieces of it. They're very central parts of this, because we know that errors are gonna happen and we know we wanna deal with them. So, a real-world use case, and at the end of these slides I have some links: Ben Christensen is a guy at Netflix who talks really well, and he implemented the ReactiveX Java library, or was one of the guys that did, and he has some great talks on YouTube. This is built using RxJava. And the reason for that is, at one point in time when Netflix first began exploding, they kept adding functionality into their homepage. And I believe the number was 38. So at one point in time, they had to make 38 synchronous calls to display your homepage when you log in, right? Which, you know how that's gonna get back to those numbers we started off talking about, right? You're not gonna get that homepage up in a second when you're making 38 blocking calls, right? And so they knew they had to do something else. And he began looking at Reactive Extensions and implemented the Reactive Extensions for Java. Another little aside here that's interesting: I mentioned that with onError, errors are treated as first-class citizens, and this paradigm forces you to deal with that and have backup plans. And one of the things they do at Netflix is that some of these recommendations are cached, right? So this is, I logged in, this is Top Picks for Dad, you know, Continue Watching for Dad. And I can guarantee you the top picks for Dad do not include Lab Rats and Total Drama or whatever these shows are. So either a reactive call failed and they grabbed some cached data, or maybe my kids had logged in and watched things under my account; that's possible also. But for the sake of this talk, I like to go with the first, right? I think it illustrates the point better. It's more likely the latter. Right, it's more likely the latter. All right, so Reactive Extensions. ReactiveX started life at Microsoft in the .NET world. This was originally a C# thing, but it has gained a lot of life. Let's look at ReactiveX here. There are a couple of things that are really nice about ReactiveX. One thing that's really nice is, let me get back off my phone. All right, okay. On Choose Your Platform, there's an Rx implementation in just about any language that you want to use. And if there's not one, you can send them some pull requests, right? Because they want more implementations. So there's a lot of languages in here, some more complete than others, but there's a lot. And the documentation is really nice, especially for an open-source community project; this is really nice documentation. And you can really comb through here and figure out how these pieces work. And you'll get used to playing with marbles, right? This is what people started calling these diagrams, these operator diagrams. People started calling them marbles. Like we saw flatMap, right? So if we go to flatMap, there's a flatMap, and this is showing us what happens: I'm pulling in different types of data, like I'm pulling in these three types of data, and what that's gonna do behind the scenes is transform it all into the same thing and give me one stream out, right? So I can have red, green, and blue.
And what I get out are all identical objects, right? All mapped together. And in the spirit of marbles, you can go to rxmarbles.com, and here's a JavaScript implementation of this, and you can play with marbles. And it will show you exactly how each operator affects the output you get. And you can try interval, defaultIfEmpty, debounce; what a debounce does, and this is a debounce over time, so you notice you won't actually get a result there if you come within a certain interval. So you can play with these marbles, which does help when you begin programming with Rx, because there is a bit of a learning curve, right? It's just a shift, a different way of thinking. All right, I'm gonna start going maybe even a little bit faster. So, operators, playing with marbles; these links are also all in the back, right? So ReactiveX is a series of extensions, in lots of different languages, designed to let you build reactive code really easily. Now, you can build reactive systems using this. Netflix did that, right? They run on Tomcat, on AWS, right? Everybody's heard about Netflix's microservices architecture. Don't mutate your state outside the function, by the way; you only wanna change things inside of those functions. But there are other toolkits to make building complete reactive systems easier, right? So reactive programming is one way of doing it, and you can build a system that way, but there are also toolkits to make systems easier. Now, I mentioned this is the Reactive Manifesto. This is a manifesto that you can go and sign. You know, right after this talk you're gonna completely believe this; you're gonna wanna come sign the manifesto, right? So you go to reactivemanifesto.org. The ideas behind this start with responsiveness, right? So at the top of this diagram, the key to building these applications is responsiveness. It came out of some guys in Europe who had been doing work on really large systems inside of banks, or large systems for banks. And they came up with this: this is how we need to build applications to make them responsive and to scale, right? So key number one is that everything needs to be responsive. People get bored, they want an answer, right? And that's your end user who's interacting with the user interface, as well as other people working with your library, right? We need an answer back. In order to do this, your system has to be resilient, right? So it has to be self-healing. We need to be able to replicate the components of your system, right? So, like, statelessness, a lot of the kinds of concepts we hear when we talk about cloud-native development, and a lot of things we hear about in the world of microservices. Any kind of failures need to be contained, right? So a failure needs to stay inside of there, and it should not propagate out into the entire system, right? So if one piece fails, that's okay; we have a strategy for dealing with that, right? And that was the key thing about Netflix, right? If one of these 38 calls fails, that's okay; you still get a home screen. They also need to be elastic, so they have to scale up and scale back down, right? Because one of the things we know in systems today is we can't necessarily tell how much data we're gonna get, right? We started off with the lazy dog, right? But then we started adding other things in there; we started adding another stream of data in there with numbers.
In systems today, we don't necessarily know how many other systems we have to call, how many other sources of data we have, right? Especially as we move to a microservices architecture and the business realizes it can get a new feature into production in a week or two, you're gonna be dealing with a lot of new sources of data, new sources of truth, right? And then the other key here that they came up with is that your system should be message-driven, and the next few things we look at implement this in different ways. Vert.x uses JSON for this, right? So JSON is the payload for passing messages between objects. The first thing we'll look at, though, is Akka. And that's because this was written by the Akka guys; they were really leaders in this space, the Lightbend guys. You guys are familiar with Lightbend, right? The guys that do Scala. All right, so our first toolkit. I mentioned we'll look at these three things next: Akka, Vert.x and Spring Reactor. Akka is called a toolkit for highly resilient, scalable applications. I'm gonna give you one disclaimer here. I first saw Akka at a conference very similar to this, sat down and watched a talk on Akka, and my immediate impression was, wow, I never wanna build an application using that. Then, when I began doing some research for this talk, I started playing with Akka, and you know what, Akka is pretty cool. So the moral of that story is, even if the guy up front doesn't do a good job of talking about the technology, go get your hands dirty, right? That's why we're looking at code. I might not convince you that this is good stuff, but you can change your mind by downloading this and firing it up. This is all open source; you can get going really easily, and there are nice tutorials on Akka's website. So the way that Akka works is it uses the actor model, and actors talk to other actors by sending messages to other actors' mailboxes. We don't pass anything by reference; everything gets passed completely in a message, right? That message gets received, some sort of action gets taken, and then another message gets sent somewhere else. There's no shared state, right? Everything happens within the actor itself, which gets us back to, how do we scale? How are we resilient, right? We can spin up more and more actors. They can be in different data centers, they can be across different machines, so it becomes very easy to scale. It's also very easy to bring those back in. There is one sort of parent that maintains a map, under the covers, of all of the actors in the system, and it uses URLs to talk to them, right? So, a layer of indirection, which means this URL could point to multiple different things, right? We can scale out behind these internal URLs. You don't have to manage this; Akka itself, as the framework, does that for you. What it looks like is my next IntelliJ thing. So, the way that Akka works, did anybody use Scala? Anybody a Scala fan? A couple of Scala fans? Okay, so I don't know Scala. One of my buddies at work, a colleague, likes Scala a lot. And he says that the way Akka is written, he thinks it's very intuitive coming from a Scala mindset or a Scala-based approach. It's not hard to get into in Java either, because I did the Java examples, right? So I don't know Scala. But this is just a little Hello World example. It's pretty simple. We have these actor references, right? We have a printer actor and we have a howdy actor. And the messages are just plain classes; there's a class called Greeting, right?
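What's on the slide is roughly the Akka quickstart-style Hello World; a condensed sketch of the same idea in Java, assuming the classic akka.actor API and with made-up actor and message names, might look like this:

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class AkkaHello {

    // Messages are plain immutable classes; the type tells the actor what to do.
    static final class WhoToGreet {
        final String who;
        WhoToGreet(String who) { this.who = who; }
    }
    static final class Greet { }

    // The greeter builds a greeting and mails it to the printer actor.
    static class Greeter extends AbstractActor {
        private final ActorRef printer;
        private String greeting = "";

        Greeter(ActorRef printer) { this.printer = printer; }

        static Props props(ActorRef printer) {
            return Props.create(Greeter.class, () -> new Greeter(printer));
        }

        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .match(WhoToGreet.class, m -> greeting = "Howdy, " + m.who)
                    .match(Greet.class, m -> printer.tell(greeting, getSelf()))
                    .build();
        }
    }

    // The printer just logs whatever lands in its mailbox.
    static class Printer extends AbstractActor {
        static Props props() { return Props.create(Printer.class, Printer::new); }

        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .match(String.class, System.out::println)
                    .build();
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("helloakka");
        ActorRef printer = system.actorOf(Printer.props(), "printerActor");
        ActorRef howdyGreeter = system.actorOf(Greeter.props(printer), "howdyGreeter");

        // No method calls between actors: only messages dropped into mailboxes.
        howdyGreeter.tell(new WhoToGreet("DevConf"), ActorRef.noSender());
        howdyGreeter.tell(new Greet(), ActorRef.noSender());
        // system.terminate() would shut it down; left out so the messages get processed.
    }
}
```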
So the actor just takes a class: it receives a message object and uses the class type to know what to do. This one receives a WhoToGreet class. So instead of just calling a method, you use the object type to send in these messages. 10-minute mark, okay, I'm gonna go really quick. Let me fire up, let me debug this test. We'll stop. Oh, great, I typed an error, okay. Anyway, that is the basic Akka example. The thing to remember about Akka is it's all message-based, and it handles this abstraction behind the scenes and makes it really easy for you to scale. It's also pretty easy to get going with, and pretty quick. All right, Spring WebFlux is Spring Web plus Reactor. Spring Web, you've probably built a website; if you're in the Java space, you've probably built a website using Spring MVC before. Spring Web is obviously their web toolkit. They jumped into the Reactor space a few years ago. They said they were gonna make the interface a lot easier to use; I don't think they really got there. They ended up changing some of the names, so instead of a Single they have a Mono, and instead of a Flowable they have a Flux. Other than that, it's almost exactly like RxJava, so it's basically the same thing. It's mostly built into the regular Spring web stack, so you would just get back a client response of an observable type, but otherwise it's very similar to the way you would do traditional Spring programming. All right, now I'm gonna go to Eclipse Vert.x, right? So my Vert.x stickers are up here; in fact, let's pass around the Vert.x stickers. I'm gonna save one for the person who asked me a question, so I'm gonna make sure you get a sticker. Everybody else, grab your stickers, pass the stickers back, and there's more up front if you guys want some of these. So Vert.x is based on a single-threaded event loop. Does that sound like Node.js? That's because it was inspired by Node.js. The guy that wrote this wrote HornetQ, which was the JMS message broker inside of JBoss EAP, or JBoss application server. It was the world's fastest JMS message broker. It was based heavily on something called Netty. Netty will pop up a lot; it's also inside of Spring WebFlux, and you'll see it in a lot of the reactive space. It's a super, super fast low-level I/O networking library. If you've ever used Twitter or done anything with Apple, you've used Netty; it forms the basis of iTunes and the App Store. But being a message-based guy, when he created Vert.x, he took the concept of a single-threaded event loop to build websites and married onto it an event bus, right? Not surprising from a guy that wrote message brokers, right? Now, this isn't the kind of thing you have to stand up persistently. It's very simple: in the Vert.x world, everything is what we call a verticle, and verticles communicate over the event bus. You can pass messages over the event bus and hand your compute over to a different verticle. I've got an app running right now, and this is a public example; you can get a link to this. One thing about Vert.x: it's super lightweight. This one is actually currently pretty heavy, it's using 275 megs, but I'm running nine instances and a database here. So all that memory is nine instances of Vert.x. The code for this, this is a stock trading application; it's not a real one, it's a completely fake stock trading application, but we are sending trades. This is the way this stuff ends up looking in the Vert.x world, and a rough sketch of a verticle follows below.
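Not the demo's actual code, just a minimal illustration of a verticle with an event-bus consumer, assuming the io.vertx.core API and a made-up address and payload:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

// A verticle that listens on the event bus; "trades" is a made-up address.
public class TraderVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // React to JSON messages; payloads travel by value, never by reference.
        vertx.eventBus().<JsonObject>consumer("trades",
                msg -> System.out.println("trade received: " + msg.body().encode()));

        // Publish a fake trade every second, just to generate some traffic.
        vertx.setPeriodic(1000, id -> vertx.eventBus().publish("trades",
                new JsonObject().put("symbol", "RHT").put("amount", 10)));
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new TraderVerticle());
    }
}
```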
So Vert.x has a number of different ways to help you get through callbacks. One, you can do callbacks, right? This is the kind of traditional-looking way to do callbacks. So we have a request coming in, right? We're returning a Flowable, right? Which is an observable subtype. And then we're subscribing to that here, right? So a lot of Rx pieces are built right into Vert.x to make it very easy for you to use those kinds of concepts. And RxJava concepts like Singles are built right in, so you can use these. And you can chain together calls, so we can call multiple different services: we can call shares, we can get the price of a share, we can find out if there are any orders, we can grab those and zip those together. We can do this really easily, right? So when you think about it, especially in a microservices world where you're calling multiple different services, like the Netflix front end calling a couple dozen services, this makes it really easy to chain those together and perform your operations right there inside of your microservice, or inside of your service. Service, and then five minutes. And then the other piece about Vert.x: it's also heavily message-based. So when we send messages across our event bus, we use JSON. JSON is a first-class citizen inside of Vert.x, and it's recommended, you don't have to do this, but it's recommended that all your messages get passed as JSON, which is nice: it's language agnostic. You can attach to the event bus directly from your web browser using JavaScript, so you can send a JSON message to the event bus and you can read from the event bus. And that's actually how, in this example, we're getting these numbers: we're talking natively from JavaScript right to this Vert.x event bus, and we're using JSON for our message payloads, which is key, right? Because you want to pass messages, and we don't want to pass something by reference; we're passing something by value, in this case JSON text, JSON objects, right? And so JSON objects become really what you work with. Another thing I didn't mention: Vert.x is polyglot. If you don't like Java, there are JavaScript, Ruby and other implementations. And this team has done a lot of work to make it feel native, so if you're using the JavaScript or the Ruby one, it actually feels like you're using JavaScript or Ruby; it doesn't feel grafted on, like a lot of, say, Ruby ports of other things. Links are in here. This book I highly recommend; it's a great book written by Tomasz Nurkiewicz and Ben Christensen, the guys that wrote RxJava. Ben Christensen also has some really good talks that you can watch; the link is in the slides, go look at his 2013 talks on YouTube. Akka, I mentioned, this is the Lightbend guys. This book is pretty good, and these guys have really been banner carriers for the reactive movement. And Jonas Bonér, who is the CEO, I guess, or CTO, has a number of good talks on this topic as well. Spring WebFlux, they have a lot more now, I think, than when I put these slides together; they're starting to talk about this a lot more in the Spring world. Vert.x, you can download both of these books from developers.redhat.com; those are both free. Really nice. I love the title, A Gentle Guide to Asynchronous Programming. There are also lots of different tutorials, including the one that I was just showing here, this trader verticle, that I'm running currently on OpenShift, which is our Kubernetes distro. You don't have to run it on Kubernetes, though.
You can just spin it up on your laptop, too. All right, thank you very much. Did I pull that off in 35? All right. We have three minutes if you want them. Three minutes. Anybody have questions? Was that too fast? Yeah, yeah. How does that compare with Node.js? So, how does Vert.x compare to it? So, reactive in general. Node uses that callback model, right? So when a web request comes in to an endpoint being serviced by Node, it grabs that request, sends the work off, and then continues listening, right? Vert.x works very much the same way. And that's a reactive method of programming, because think about the way we do things with servlets or traditional HTTP calls, right? If you're programming a Java servlet: the call comes in, the servlet connects to the database or grabs a database connection from somewhere, performs the query, gets the results back, unmarshals them, turns them into an object, decorates it with some other stuff, and then sends the response back, right? Which is fundamentally different from a Vert.x event loop, which is very similar to the Node.js world: the call comes in, Vert.x sends something to the event bus, which contains a handler saying what to do when it comes back, right? So when you say callbacks, does it mean the same as what we call promises? So the question was, are callbacks the same as promises? No, they're not the same as promises. There's another piece, too. There's a Vert.x future that's different from a Java future. A Java future blocks; a Vert.x future does not block at all, and so in the Vert.x world we use the Vert.x futures. Callbacks don't have to be any particular kind of object, right? It's just a piece of code, largely. Whereas a promise is gonna return something in the future, but it's a construct on top of that. And the notion is that callbacks are the lowest-level, easiest way of doing this, and you still use them, it's fine to use them, right? It's just that you don't wanna end up with too many nested callbacks, when it becomes problematic, right? So they're not the same. Not quite the same, yeah. Yeah. Any other questions? No, that's about all the time we have. All right, thanks, and enjoy the rest of the weekend. All right. Thank you. The presenter is Michael Musgrove. Michael Musgrove is a developer with 20-plus years of experience building distributed systems in CORBA, J2EE and OSGi. He currently works in the transactions team at Red Hat, and he has also worked on C++ solutions for the middleware market. Thank you, Michael. Thank you very much. Any time warnings? Yeah, because I've got a demo, so if you can give me a time warning at 15 minutes. Okay. Hi, so welcome to the talk. Thanks for attending. So today I'm gonna be talking on a topic called transactions for microservices. So it's a question I'm gonna be asking: are transactions appropriate for use in microservices, in dynamic environments? There are three main areas I'm gonna discuss today. First I'm gonna talk about transactions, whether you can... yeah, so it's all about transactions and how they might be applied in a microservices environment, and I'll go through some concepts and why they might be used and why you might need this kind of approach in a microservices environment. After that, I'll be talking about a community project called Eclipse MicroProfile. They're building a set of APIs, with implementations, for use in microservices environments, and we'll be talking about an implementation of one of those specs, called Narayana LRA.
Yeah. Any other questions? No? That's about all we have time for. All right, thanks, and enjoy the rest of the weekend. All right. Thank you.

The next presenter is Michael Musgrove. Michael Musgrove is a developer with 20-plus years of experience building distributed systems in CORBA, JEE and OSGi. He currently works in the transactions team at Red Hat, where he also works on C++ solutions for the middleware market. Thank you, Michael. Thank you very much. Any time warnings? Yeah, because I've got a demo, so if you can give me a time warning at 15 minutes. Okay. Hi, so welcome to the talk. Thanks for attending. So today I'm going to be talking on a topic called transactions for microservices. The question I'm going to be asking is: are transactions appropriate for use in microservices in dynamic environments? There are three main areas I'm going to discuss today. First I'm going to talk about transactions and how they might be applied in a microservice environment, and go through some concepts, why they might be used, and why you might need this kind of approach in a microservice environment. After that, I'll be talking about a community project called Eclipse MicroProfile. They're building a set of APIs, with implementations, for use in microservices environments, and we'll be talking about an implementation of one of those specs, called Narayana LRA. And then I'll be finishing off with a demonstration of some code.

I'm sorry, my hearing's not that good — I can't hear you. Speak up? Even louder than I am, yeah? Yeah, correct. Move up there. Move up to the mic, maybe? If I talk directly — oh, that's better, isn't it? Okay. Okay, is that better? Can you hear me now? Yeah. Okay. Thank you very much, cheers. Okay, is that better? Can you hear me? Great.

To motivate the work: the starting premise is that systems can fail. Machines and networks and software fail. Things are improving — hardware is getting more and more reliable, software development techniques have become more mature, and it's clear that instances of failure are getting rarer — but in modern digital computing we are moving more and more to scale, as more and more functions in the modern world become digitized. And when you start running at scale, you're going to get failures no matter how reliable your hardware is. Historically, we also had failures with centralized systems; a lot of work has gone into that over the decades, and we've made them more and more reliable, culminating in application servers, in particular the Java application server. With distributed systems, that takes the management problem — the management of failures — to a whole new level. Microservices is clearly a distributed system, and there's an acknowledgement that if you can't guarantee your hardware and software won't fail, you're going to have to embrace failure. The mantra is that you've got to provide mechanisms and techniques to make failure a first-class citizen in your environment. So the microservices community has come up with various techniques to handle failure. These include things like health probes and liveness probes — the management system monitors what's going on and might spin up new machines if things are failing — plus timeouts, load balancing if you've detected that one of your nodes is running more slowly, and scaling up and down if things are getting overloaded or you've got latency in your system. And we think that transactions are really part of this ethos; they should be allowed to be part of your toolkit. They're not the solution, but they should be one of the solutions that you can apply when building your systems. So the question to be answered is: what exactly is a transaction, and how is it going to help me build microservices-based systems? A transaction is a mechanism by which you can move the system from one known good state to a second known good state, and while you're doing that you get certain guarantees — ACID guarantees. Effectively it provides an all-or-nothing guarantee: if you move your system and something goes wrong, at the end of the transaction you want to be able to revert things. And there's not just one model of transactions; there are many different ones. You've probably heard of the JTA model, which is built on an old X/Open standard called XA. That JTA model provides full guarantees in the system, but there are other models. There are things like Sagas, which is a compensation-based approach, and there are business activities from the OASIS group.
And there are also many different flavours of protocols. So let's start off by defining what a transaction model is. As I said, there's not one transaction model, but all transaction models have various properties — they all exhibit these four properties of atomicity, consistency, isolation and durability. Looking at and investigating those four properties gives you a way of characterising a particular transaction model, and it also allows you to compare and contrast the different models. The image on the right there is just saying that you don't have to have all four of these properties in full — you might relax one of them. In that example you've got three test tubes: you relax one property, and you've got varying degrees of that characteristic in that particular model. A model that displays all four properties, with a full implementation of those four concepts, would be the JTA model. If I go through what each one is briefly: atomicity says that when you run a set of operations, you want all those operations to run as a single unit of work — when you've finished the set of operations, you want either all of the operations to complete or none of them to complete. That would be full atomicity. The second property is consistency, and that's from the application developer's point of view. He's built his application, and his business application provides certain guarantees, certain invariants. Before you start running the transaction there's an invariant that you can rely on, and at the end of the transaction the system has moved to another invariant state — that's saying the system is consistent from the application's point of view. But in between the start and end of that transaction you can have inconsistent states, and that's why we have to introduce a third property, called isolation. You want to be able to isolate that inconsistency from other transactions on the system, so two transactions cannot see what each other is doing in the middle of a transaction. And finally there's durability: you want the changes made using the transaction to persist at the end of the transaction. In the JTA world, the typical protocol that you run to enforce those four properties is called the two-phase commit protocol. There are two phases. The first is the voting phase: there's a coordinator, a transaction manager, and it goes around and asks all the parties that have been involved in the transaction — this is for distributed transactions, where you've got multiple parties involved — it says to each one: can you commit? Are you prepared to commit all the changes that you've done within this transaction? And once it's got the answer back from all the different parties, the transaction manager writes a durable log representing that decision. The durable log is important because you can have a failure at that point, and if the coordinator crashes before it has gone through and committed all the participants, then the transaction manager would have no knowledge of that transaction, and therefore you're going to lose your atomicity — in fact you're going to lose a lot of these properties if you don't have your transaction log.
And then the final phase: if all the different parties say that they're prepared to commit the work, the commit phase will go and commit each one. Even so — and this is the generally accepted way of doing distributed transactions — it is not a perfect solution, because there are failure windows in there. The first failure window is during the voting phase, before the durable log has actually been written. If the transaction manager fails before that point, then all of those different parties are left in a limbo state. In order to provide the guarantees of these ACID properties, they have to start locking data and writing logs of their own, and that is going to throttle any system. So typically the parties have to log some information as well, and that puts an onus on them to manage that data correctly — because when the transaction manager comes back again, it needs to go and ask all the different parties it's aware of: have you got any ongoing transactions? The other window is when you get to the commit phase: the transaction manager commits each party one at a time, and once it commits the first party, at that point you've lost your isolation, because other transactions can now see that committed bit of work before all of the other parties have been committed. So, what about using transactions for microservices? With microservice interactions, typically what you do is start with a monolith — a monolith that's running inside an application server — and you want to break that monolith up into different services and run those on separate machines, or in the cloud. For a start, you've got many different parties involved, so that increases the complexity. And once the system's been broken up, in a microservice environment there'll be a strong propensity to plug in other people's APIs — you might want to concentrate only on your core strengths, so you use someone who does ordering better, or someone who does billing better. So you're going to have to start crossing trust boundaries as well when you move to a microservice environment. And also, typically, a microservice business activity can go on for a long period of time — it can last for minutes, or hours. So if you're going to have transactions in this kind of environment, you need something to coordinate all these different activities; otherwise you're going to have to place a lot of requirements, a lot of responsibilities, on other services that, because of those trust boundaries, you don't really have any influence over. So you need some kind of thing to coordinate all of that. And so the question really boils down to: can we use full ACID transactions to achieve that correctness? Clearly you can, because we have been building distributed systems since the 60s and 70s, but is it appropriate for modern computing at scale? The problem with full ACID is that the two-phase commit protocol is clearly a blocking protocol. Also, to achieve the isolation you typically have to take locks on data — although if you use optimistic locking during the transaction, you only take the locks at the end: it's at the final two-phase-commit part of the protocol that you actually do the locking. But even then, failures during that phase can still cause you to block.
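A minimal sketch of the full-ACID model being described, using the classic JTA programming interface. It assumes a Java EE / Jakarta EE container that exposes a UserTransaction over JNDI and two XA-capable resources behind the two booking calls; the method names are illustrative only.

```java
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class TripService {
  public void bookTrip() throws Exception {
    UserTransaction utx = (UserTransaction)
        new InitialContext().lookup("java:comp/UserTransaction");
    utx.begin();
    try {
      bookHotel();         // work against one XA resource
      bookFlight();        // work against another XA resource
      utx.commit();        // drives the voting phase, the durable log, then the commit phase
    } catch (Exception e) {
      utx.rollback();      // all or nothing: neither booking survives
      throw e;
    }
  }

  private void bookHotel()  { /* e.g. JDBC or JMS work */ }
  private void bookFlight() { /* e.g. a second resource manager */ }
}
```

The commit() call is where the blocking behaviour discussed above lives: every enlisted resource holds its locks until the coordinator has finished both phases.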
Isolation as well: if you've got to guarantee isolation between different transactions, that's going to tie your hands in terms of parallel processing. The database community did acknowledge that this was a problem, and they came up with the set of isolation levels — things like repeatable read (if you've read one bit of data during this transaction and you read it again, it's guaranteed to be the same value), read committed, read uncommitted, et cetera. The highest level is serializable: serializable means that if you run two transactions, the effects look as though they had been run one after the other, consecutively. And that's the one that impacts availability the most. There's also a theorem from the academic literature called the CAP theorem. The CAP theorem concerns consistency and availability in the presence of partitions. If two services in the environment lose connectivity — because of a broken network, or because a machine's gone down, or because the service itself is running slowly — those two services are said to be partitioned. And if two services become partitioned, the CAP theorem states that you can't guarantee both full consistency and full availability. In this kind of system you have to relax one of those if you want to be able to make progress. And in a microservice environment, availability is key — that's why you've gone to microservices, to get the scale and the responsiveness of your systems. So really it comes down to looking at what you can do with respect to consistency. So full ACID, I'd say, is clearly not an optimal solution in these kinds of loosely coupled systems where you've got long-duration activities. If you can't use full ACID, the option is to start looking at that set of four properties that characterise all transaction models and start thinking about which of those you can relax, or even do without in some cases. As I go through the four ACID possibilities, to make it concrete, I'll refer to a simple case where you might be trying to book a seat on a plane and some travel insurance, and you want to do both inside a transaction. In a conventional, full-ACID transaction you would have to book those two things as one atomic operation: either both are booked or neither of them is booked. If you want to relax the atomicity a bit — that all-or-nothing guarantee — then within your transaction you might want to cancel some work while allowing other work to continue. In the travel insurance example, you might go away and book the flight, and then you don't want to un-book the flight just because you can't get your travel insurance — you might be able to get your travel insurance some time later down the line. So that's a case where you want to break atomicity. The more conventional approach to handling that kind of relaxation of atomicity is with nested transactions. You don't get nested transactions with JTA, but you do with a lot of other models — for example, the Object Transaction Service from CORBA had nested transactions.
So that's where you can do some of the work inside a nested transaction, and if you decide you don't want that nested work, you can cancel it and it won't affect the top-level transaction, which will continue running. You book your flight, the top-level transaction is still there, and you just cancel the work you've done with the insurance. The second property you could relax is consistency. You might have heard — when people talk about transactions for the cloud and for microservices environments — of the idea of eventual consistency. As long as the system eventually comes to some kind of agreed, consistent state, that's often satisfactory for a lot of applications. But eventual consistency is a very weak guarantee: when the system and the data will converge on a consistent value is indeterminate — it can happen at any time in the future — and once it has converged, it can quickly start diverging again. That can be mitigated with application knowledge, though: typically you would combine it with timeouts, so if you're waiting for some state that you want to be consistent, and it hasn't happened within a certain period of time, you can time out that work and go and try a different approach, a different strategy — fifteen minutes? okay — or abort it completely. The next one is isolation. As we said earlier, isolation is one you have to be able to relax, because isolation requires a lot of work on behalf of the application developer to isolate work from other applications; you might want to commit work early — so I'll skip through that one. And then there's the durability aspect. Typically you want your work to be durable, to persist after you've completed your transaction, but STM — software transactional memory — is a good example of where you would like to relax durability. A software transactional memory system has, of the ACID properties, atomicity, consistency and isolation, but it doesn't have durability. Typically you'd use that for high-scale, concurrent, object-based systems: you've got your objects, and you're making changes to them with many concurrent threads operating on those objects, updating state; and when you come to commit at the end of your transaction and find there's been a conflict, you can just abort it and start again. So in that case you wouldn't necessarily want durability.
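Going back to the eventual-consistency point a moment ago, a minimal sketch — not from the talk — of combining it with a timeout: wait for the system to converge on the expected state, and if it doesn't within the deadline, fall back to a compensating strategy. The check function and the polling interval are illustrative assumptions.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.BooleanSupplier;

public class EventuallyConsistent {
  // Poll until the supplied check passes or the deadline expires.
  static boolean awaitConsistency(BooleanSupplier isConsistent, Duration timeout)
      throws InterruptedException {
    Instant deadline = Instant.now().plus(timeout);
    while (Instant.now().isBefore(deadline)) {
      if (isConsistent.getAsBoolean()) {
        return true;                 // the services have converged
      }
      Thread.sleep(500);             // poll again shortly
    }
    return false;                    // timed out: the caller compensates or retries
  }

  public static void main(String[] args) throws InterruptedException {
    boolean ok = awaitConsistency(EventuallyConsistent::bookingVisibleEverywhere,
                                  Duration.ofSeconds(30));
    if (!ok) {
      // try a different strategy, or cancel/compensate the work done so far
      System.out.println("state did not converge in time; compensating");
    }
  }

  // Hypothetical check; in a real system this would query the services involved.
  static boolean bookingVisibleEverywhere() { return true; }
}
```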
So how do we go about choosing which set of properties to use? We approached the MicroProfile community, discussed it with them, and came up with some use cases. We knew that scalability was the main concern, the main thing to look at, so full ACID wasn't proposed. We needed to remove locking, so we needed something where different business activities can see what each other are doing — you need to drop isolation — and basically you want to make sure you get as much work done, as much forward progress, as possible before backing out. We knew we wanted nested transactions, and we knew we wanted to be time-bound: time-bound the operations, and time-bound the period in which the undo/compensation activities can be guaranteed to run. And we wanted composition of these transactions, called long-running actions, so that you can compose them and run compensations within long-running actions as well. The result was a draft specification, which we submitted to the MicroProfile community. We have a weekly meeting, a weekly hangout, where the stakeholders go through the different issues and problems with the spec, so it's still open, still available for people to make contributions to. MicroProfile is a CDI-first environment, so we define a set of CDI annotations by which you can start these long-running actions and by which you can register callbacks for the compensation and completion activities; there's also a pure Java API for people who don't use CDI annotations. And there's a group that has their own implementation of a Saga-like model that uses gRPC and Google protocol buffers, so we separated the transport aspects from the CDI annotations that define the model. This is a sequence diagram of our Narayana implementation of it — the actual specification is defined in terms of a set of CDI annotations, as I said, with the transport mapped separately; this is how we implemented it. I'm going to skip over that one and go straight to the demo. In the demo we use various pieces of technology. We've used the OpenShift platform — OpenShift is built on top of Kubernetes and provides some extra management functions on top of Kubernetes, plus build and CI/CD pipelines. We've also used Narayana — Narayana is the transaction manager toolkit, and we used it to build the prototype implementation of this specification. And also WildFly Swarm, which is now called Thorntail (they haven't got a logo for it yet). Thorntail is a way to get a cut-down version of the WildFly application server — it has just what you need inside it, and it effectively produces a fat jar, just a fat jar that you can run in a JVM. So this is what the demo is: booking a hotel and booking a flight. The initial step is to book a hotel, and we run that in a top-level long-running action. Then for the flight booking we want to have two strategies: we come to book the flight and we can't find an economy seat, so we'll book a first-class flight, but we might want to later go back, cancel that, and go back to economy class — so we run that in a nested long-running action. The idea there is that you can go and cancel the first-class flight and have another go at booking an economy flight without losing your hotel booking. And then, after we've done that, we're going to cancel one of the flights and then close the LRA, and the thing that's managing these LRAs will then make sure it calls into all the different services that have registered with the LRA. So, for example, if you cancel the whole operation, the coordinator will ensure that it goes back and invokes all the different callbacks to tell them to compensate for the work they did — for the hotel booking, that might mean refunding the customer.
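For reference, a minimal sketch of an LRA participant using the CDI/JAX-RS annotations the specification defines. It assumes the current MicroProfile LRA API; package and annotation names may differ slightly from the draft implementation shown in this talk, and the paths and method bodies are illustrative.

```java
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;
import org.eclipse.microprofile.lra.annotation.Compensate;
import org.eclipse.microprofile.lra.annotation.Complete;
import org.eclipse.microprofile.lra.annotation.ws.rs.LRA;

@Path("/flight")
public class FlightResource {

  // Joins the caller's LRA (or starts one if none is active);
  // the LRA context travels with the request as an HTTP header.
  @PUT
  @Path("/book")
  @LRA(LRA.Type.REQUIRED)
  public Response book() {
    // make a provisional booking
    return Response.ok("flight provisionally booked").build();
  }

  // Invoked by the coordinator if the LRA is cancelled.
  @PUT
  @Path("/compensate")
  @Compensate
  public Response compensate() {
    // undo the provisional booking, e.g. release the seat or refund
    return Response.ok().build();
  }

  // Invoked by the coordinator when the LRA is closed successfully.
  @PUT
  @Path("/complete")
  @Complete
  public Response complete() {
    // confirm the booking
    return Response.ok().build();
  }
}
```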
There's a deep dive there if you want it — these slides are online if you want to have a look at what we've done. So let's have a look at running the demo. How long have we got? Nine minutes? Nine minutes, okay, so quickly. So what I've got there is the set of annotations. For example, to start an LRA you would annotate one of your methods with an @LRA annotation, and there you've got things like required and requiresNew — if you're familiar with JTA, it's modelled on JTA, as are all the annotations. So, for example, if you put a mandatory LRA annotation on a method and there's no LRA context available when that method is called, it'll throw an exception. Okay, so let's have a look. I'm running a bit short on time, so I think it's probably best if I just run the demo first. The three services are the trip service — the trip service is the one that coordinates the booking of the hotel and also the flight — and then there's one for the hotel service and another one for the flight service. So if we start the services running: this one is the hotel service, this one's the flight service — started with java -jar and the fat jar; it's Thorntail, the cut-down version of WildFly, and it's got the various services that we're running bundled inside it. We also need a coordinator: when these microservices register compensation activities with the LRA — activities they want compensated if the transaction is subsequently rolled back, closed, cancelled, sorry — they register that information with the coordinator. We've got two coordinators running in this environment: the bottom-right one is the one for the sub-transactions, and then we've got another coordinator. The coordinators can be federated in the system — they don't all have to be running in the same JVM, because they can talk to each other — and this is the main coordinator. So when you do a trip booking, the trip booking goes to the main coordinator and registers with that one, and then the flight service, in a nested LRA, goes and registers with the subordinate coordinator. Okay, so that's the services all running. What we've got now is this shell script: it's going to make a call to the trip service microservice and ask it to go and book a hotel and book the flight. So if we run that — okay, that's done the booking, and that URL I've printed out there is the URL that you have to use to tell the trip microservice to complete the booking. What the trip microservice will then do is go and confirm the long-running action. So if I take a copy of that and run it — that's gone and closed the LRA, and the LRA has then run all the completion callbacks for the various microservices that were registered with this long-running action, and you can see the aggregate booking: they're all in the status confirmed.
You can also do one with a cancel as well — I see we didn't get a first-class ticket; yes, you get economy if it's available. And similarly, if you decide you want to abort the whole trip, you would ask the trip controller to cancel the whole trip, and that would just cancel the LRA, and the LRA would then make all the right callbacks. So the trip manager microservice doesn't have to hold client code for all the complexities of cancelling everything and shutting everything down, because it has already registered all that logic with the coordinator. And effectively, in this environment, it's a REST-based system written with JAX-RS, so those are just all the endpoints that it has registered. There is a second part of the demo which is running on Minishift — Minishift is an OpenShift environment that you can run locally. Let me make sure — so I start the console. I have already deployed these three microservices, and also the coordinators, into Minishift, and this is showing them all running: there's the flight coordinator, there's the hotel coordinator, there's the LRA coordinator. So now we can interact with the OpenShift environment, with these services running in that environment — I've pre-canned the commands. This one goes to the trip microservice — this is the endpoint on which it's listening — so if I run that one... With Kubernetes you have service-based URLs, and behind the service you have the actual pods that implement the service; that's what that Minishift IP and so on is doing. So that's created a booking — we've got a booking ID. The booking IDs I'm using are actually the IDs of the long-running actions; you could have a mapping between booking IDs and LRA IDs, but for simplicity the booking IDs are the same as the long-running-action IDs. So that's the booking ID we've got back from starting the booking, and then to finish the booking you make a request to the trip service, and that will go and close the long-running action that's running in the coordinator I've also deployed to Minishift — and that's showing them all confirmed. Two minutes. Another example would be to show that you can bounce the coordinator. So I can start that running — what I've done there is start a booking, so the long-running action is in progress — and then what I can do is go to the coordinator, and with OpenShift I can scale it down. So if I scale that one down, when that scales down there's no coordinator running in the environment. This is just to show that there's some resilience in the system — the whole point of transactions and these models is that you have reliable guarantees, so when things crash and things fail you can always come back up and continue where you left off. So that's scaled down; if I scale it back up again, then once it's back up I should still be able to complete that trip booking. So I've bounced the coordinator, and then I can ask the trip microservice to complete the booking — and there, it has completed the booking. So that demonstrates that the coordinator does maintain state and does remember which long-running actions it's responsible for. So that's good — and that's where we are. There's a break between now and the next session, I think about 25 minutes, so if you want to ask some questions offline, outside of the session, you can do so. And if you take a look at the slides, there are various links where you can find out what we're doing:
the Narayana project, which implements the transaction manager; the presentation is there as well; and also the information about the MicroProfile weekly hangouts we have — that's the place to go if you want to contribute anything you might want to have changed. For example, as I said before, we had some guys from China working on a Saga project, and they contributed changes to make sure that we can support that. Any questions? There's no time for questions, right — offline questions, then.

Now, many of you are looking at this slide and saying, come on, those look really familiar, or they look really similar. So JOSE on the left is actually a set of standards, RFCs, and José is my particular implementation of them. We'll be using this terminology to disambiguate throughout the talk: whenever I say JOSE I'm referring to those standards, and whenever I say José I'm referring to my implementation of those standards. JOSE stands for JSON Object Signing and Encryption, and it is a set of standards for formatting all sorts of cryptography-related material in JSON. Some of you are probably wondering why we need new standards, and there are actually a few reasons for this. For those of you who've done any cryptography at all, you know that our standards have grown up organically over time. We started off with various different "here's how a key works" formats, and then we started doing encryption and various other things, but it got really hairy quickly — for example, what format do you store your certificates in? What formats do you store your keys in? Everyone did sort of their own thing; as an example, GPG is not the same as OpenSSL. We also had a need for doing cryptography in the web space, and in particular for bundling cryptographic data inside URLs, which is one of the driving forces behind these standards. So what we actually have here is the first cryptographic system that integrates all of the different parts of what we would today call a cryptosystem into one usable system, where everything works with everything else, and we're going to walk through how exactly this looks. We're going to start off with a really simple example. This example is just a symmetric key — we call these JSON Web Keys, or JWK for short. A JSON Web Key, when it's a symmetric key, looks just like this. At the top is the kty parameter, which simply specifies what type of key this is — in this case it's "oct", octets — so we should expect that it's going to have a k parameter, and the k parameter contains the actual octets of the key, base64url encoded. And then in this example we also have an optional value that is not required but can be present for a variety of different keys: the alg parameter, which indicates what algorithm this key can be used with. This is pretty much the simplest example we're going to see in this set of slides. Notice that this same pattern of representing binary data as base64 is universal throughout all of the JOSE specifications. Any questions about this? I'm hoping this one is pretty straightforward. Okay, let's move on to a slightly more complex example. This is also a JWK, the same data format, but this time we're representing not a symmetric key but an elliptic curve key. Again we have the key type, kty, at the top, exactly like the last one; this specifies EC — this is an elliptic curve key. Next we then have the
four required parameters of an elliptic curve key. The first is the name of the curve that we're actually using — in this case this is a P-256 key; for those of you not familiar with elliptic curves, P-256 is one of the curves standardized by NIST, it's one of the most widely available, and it's one of the ones standardized in JOSE. Next we have three parameters: x, y and d. Public keys in elliptic curve cryptography are points on an elliptic curve, so we have an x and a y value, which indicate where the point is on the curve, and the d value is the private value, the secret value. So this is a full public and private key, all in one JSON object. Finally we have two more optional parameters — and notice that these are different optional parameters from the ones we saw on the previous slide. In this case we have a key use ("use"), saying this key is allowed to be used for encryption, and finally we have a kid parameter, which is just a unique identifier for the key. It can actually be anything you want, but it's pretty common to see the kid be a thumbprint of the key — it can be any string you like, and I'll explain more about what thumbprints are shortly. Any questions before we move on? From here it gets more complicated, because we are now moving on to RSA keys. RSA keys, you'll notice, still have the same kty parameter at the top, which indicates what type of key this is, and next we have — what is it, seven parameters? Actually eight — the RSA parameters, from here all the way down to here. I won't go into the specific details of those; if you want to know more about RSA keys there's lots of information on the interwebs and you can find it there. Lastly, we do have two more optional parameters here. In this case we have the algorithm, like we saw with the symmetric key: this key is allowed to be used with the RS256 algorithm, which is for signatures, and it indicates that the data should be hashed with SHA-256 and then the hash of the data should be signed with the RSA key. And finally we have another kid parameter — you'll notice this time it's a date, whereas in the previous example we just had the number 1, which in that case was like a serial number for keys that have been generated over time; in this case it's the date on which the key was generated. Again, this can be anything you want, as long as it uniquely identifies the key according to whatever system you're using. So those are the three main key types that are used in the JOSE cryptosystem. There's actually another one that has just recently been standardized — for those of you doing cryptography on a day-to-day basis, these are the CFRG curves, so things like Ed25519 and Ed448 have both recently been standardized as key types as well. One last thing that we should note is that JOSE actually standardizes a way to represent sets of keys as well, which is something we don't see in many other cryptosystems. You can define a bundle where you have an object with a parameter called keys, which is simply an array of keys, and there are also extra parameters that you can put in there. Moving on to performing a signature. This is actually the most complex example of a signature that we'll see today. At the very top we have our payload — this is the data that was actually signed, so whatever message you want to sign goes into the payload, again URL-safe base64 encoded. And then we have an array of signatures; we have two signatures in
this JWS — one here and one here. Each signature has a protected header. The protected header, as you can see all the way over on the right side here — this object shows the contents of the protected header; that structure is then base64 encoded and included as the protected header. We have another value called header, which is not protected. What we mean by protected and not protected, by the way, is that if the protected header is modified, the signature will fail to validate, but the unprotected header can be modified and it doesn't invalidate the signature. Finally we have the signature itself, which is a signature over the payload and the protected header. So this is the most complex example. This is also called, by the way, the general serialization, and it's called the general serialization because basically any time you have more than one signature in a JWS you have to have an array like this. But if you have a case where you are only ever going to have one signature, you can use another serialization called flattened, and the way the flattened serialization works is it basically takes all the data from that one signature and moves it up in the object hierarchy. So if you go back, we have protected header and signature here, and then we have protected header and signature here — it just moves them up in the hierarchy and takes up a little less room. And you notice here, by the way, that in the header we have a key ID: this is the key that was used to sign it, so if you were to receive this JWS you could go look in some key repository for this particular key ID, and that's the key that should be able to validate the signature. However, there is also one more serialization. If you remember, we've got four items here; now we're going to leave off the header parameter — which, remember, is not protected, so it could be modified anyway — and we're going to take just the signature, the protected header and the payload, and we can flatten this once again into a string, where you simply put the protected header contents, followed by a period, followed by the payload, followed by a period, followed by the signature. Now, the unique thing that we can do with this is we can put it in a URL. One particular case where this came up: we were having a meeting where we wanted to have a registration system, so people could sign up with their email address, it would send them an email to confirm their address, and then they would click the link — of course you have done this a million times on the internet, you know what I'm talking about. So basically I came into the meeting about three minutes late and they had already designed the whole thing, and it was magnificent: there were multiple moving parts, and there were databases and all sorts of stuff, just for this registration system. And I came in and I sort of raised my hand and said, why don't we just sign their email address and send it to them in a URL using this data format? Then you don't actually have to have any state on the server, because once they click the link the server just validates the signature. You don't need to have databases, you don't need all of this massive code — it's just really simple and effective. So this is one example of how the JOSE standards can be used very effectively. And since JOSE always uses URL-safe base64 — which is just like regular base64 but with two characters different in the encoding, so it's standardized but slightly different from regular base64 — and always uses periods to concatenate the fields, this is always safe to put inside a URL.
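A minimal, self-contained sketch — not from the talk — of the compact serialization just described: base64url(protected header) "." base64url(payload) "." base64url(signature). It uses the symmetric HS256 algorithm from the JOSE specs so that it runs with only the JDK; the payload and secret are made up, and a real deployment would of course keep the key out of the source.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class CompactJws {
  public static void main(String[] args) throws Exception {
    Base64.Encoder b64url = Base64.getUrlEncoder().withoutPadding();

    String header  = b64url.encodeToString("{\"alg\":\"HS256\"}".getBytes(StandardCharsets.UTF_8));
    String payload = b64url.encodeToString("you@example.com".getBytes(StandardCharsets.UTF_8));
    String signingInput = header + "." + payload;

    // HS256 = HMAC with SHA-256 over the signing input, keyed with a shared secret.
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec("a-shared-secret-key".getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
    String signature = b64url.encodeToString(mac.doFinal(signingInput.getBytes(StandardCharsets.US_ASCII)));

    // Only URL-safe base64 and periods, so the result can live in a link,
    // e.g. the email-confirmation URL described above.
    System.out.println(signingInput + "." + signature);
  }
}
```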
We can do the same thing with encryption. In the last case we were talking about signing, and now we're going to talk about encryption. This is the most complex example you'll see today, because encryption is slightly more complicated than signing. The way encryption works is that you have a content encryption key, and that key encrypts all the data; then you encrypt that encryption key using another key, which in this case we call the recipient key. So if I'm going to encrypt something to you, and to you, and to you, I only need to encrypt the content one time, and then I encrypt that one content key to each of your separate keys, one per recipient. And that's exactly what we're looking at here. In the general format, the first thing we have is the protected header, at the top here, which works just like it did for signatures — it uses all the same mechanisms we saw with signatures, but now for encryption. We have an unprotected header as well — protected, remember, means that if you modify it then the decryption will fail, whereas the unprotected header can be modified after the fact and the decryption will not fail. Same encoding over here: the protected header is actually in this format, and then it's serialized into a string, base64 encoded, and put into this protected field. After this we have the IV, the initialization vector for the cryptography; we have the ciphertext itself, which is the plaintext that we've now encrypted, stored here as ciphertext; and then we have a tag, and the tag is the thing that validates, when we decrypt, that the message hasn't been modified — that gives us our authentication. So all of this is basically: we take the data in, we generate an IV and store it here, we encrypt the data using a randomly generated key, which gives us the ciphertext, we do our authentication, which produces a tag, we stick the tag on there, we write our parameters into the protected header, and then finally we take the key that we used for the encryption and we encrypt it to the actual recipients. And here are the recipients — we have two of them here, one and two — and a recipient can have a per-recipient header, which is not protected; and for each recipient we encrypt the key that we used to encrypt the content, and that encrypted key is stored here. Then we also have some optional parameters here, like which algorithm was used for the key encryption, which key ID should be used to decrypt this value, and so on. So this is the most complex example — I've probably hurt your brain a little bit — but we're going to move on, and it gets simpler from here, so hopefully it should be easier to understand. If you remember, for JSON Web Signatures we had a flattened syntax, where, when you have a single signature, you can just move all the contents up in the object hierarchy — and we have exactly the same thing here. If we go back, you notice we have two recipients here, and the important bits are that we have an encrypted key — well, in this case we also have an encrypted key and a header. So if you have a single recipient, you can create the same object but instead just put the encrypted key and the header in the parent object, and there's an implicit single recipient. And just like JSON Web Signatures, we also have a
compact format. So coming back here: we have protected, unprotected, IV, ciphertext and tag — no, I'm sorry, these are the five we're going to take: protected, IV, ciphertext, tag and encrypted key. The initialization vector is basically a bit of random data that ensures the encryption is unique for each encryption operation. It's a public value, but it's the very first value you put in to get a starting position in your cryptography, and it's used pretty universally, although slightly differently depending on the algorithm you choose — for example, if you're using AES-GCM it will be slightly different than if you're using AES-CBC-HMAC. The tag is the authentication information for the ciphertext: after you encrypt the data, you perform authentication over the ciphertext to make sure it has not been modified. Let me give you a concrete example of this. In the algorithm AES-CBC-HMAC, the actual encryption is done using AES-CBC, and then an HMAC is computed over the entirety of the ciphertext, and the output of the HMAC is stored as the tag. When you go to do the decryption, the first thing you do is validate the ciphertext: you run that ciphertext through the HMAC again, and the HMAC outputs a value; if that value doesn't match this one, then the ciphertext has been modified and you absolutely should not do anything with it — you should drop it on the floor. Does that help? Yes. Another question — oh, sorry, yes, I should repeat that: that question was whether I can explain what the tag parameter is. The next question is whether the tag parameter is used for algorithms other than HMAC-based ones. The answer is yes — for example, if you're using AES-GCM, the tag is yielded as part of the encryption operation, so in that case it's all done as one step: the plaintext is the input, and when you're completely finished the last block of ciphertext comes out and the tag comes out, but it's all done as a single operation. Yes — the tag is an output during the encryption phase, it's an input during the... no, I'm saying this wrong, let me start again: the tag is an output during the encryption phase, and it is an input during the decryption phase. And the important thing about the tag is that it validates that the actual message has not changed, because there are all sorts of attacks if somebody can get hold of your ciphertext, make changes to it, and ask you to decrypt it. So the very first thing you want to do is validate that the message has not been modified; once you know it hasn't been modified, then you proceed with decryption. In the case of AES-CBC-HMAC, yes, it is a digest; in the case of other algorithms it is not — it's sort of generic. Remember, in this particular case the JWE standards are not actually defining how the algorithms work; we're defining the storage format, and the tag can be used in different ways by different algorithms, but it does roughly the same thing for all of them. Another question? Yes, sure. It is not — yes, the question was, is the tag also encrypted? The answer is no. In the case of the algorithm AES-CBC-HMAC, which is one of the algorithms defined by the JWE standards, you actually generate a double-length key — so instead of 16 bytes you would generate 32. The first half of that key is used for doing the encryption, and the second half of that key is used for doing the HMAC — I may actually have my halves backwards, but it's one of those two. So basically you generate a double-size key, and one half is used for HMAC and one half is used for the actual encryption. For AES-GCM that's not the case — you use the same key for the entire operation. Any other questions? Okay.
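A minimal sketch — not from the talk — of the IV and tag concepts discussed in those questions, using AES-GCM from the JDK: the IV is fresh random data per encryption, and the tag authenticates the ciphertext so tampering is detected at decryption time. The plaintext and key handling here are purely illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class GcmExample {
  public static void main(String[] args) throws Exception {
    KeyGenerator kg = KeyGenerator.getInstance("AES");
    kg.init(128);
    SecretKey key = kg.generateKey();

    byte[] iv = new byte[12];                      // 96-bit IV, the usual size for GCM
    new SecureRandom().nextBytes(iv);

    Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
    enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
    byte[] ciphertextAndTag = enc.doFinal("hello devconf".getBytes(StandardCharsets.UTF_8));
    // With the JDK API the 16-byte tag is appended to the ciphertext;
    // JWE stores the ciphertext and the tag as separate base64url fields.

    Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
    dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
    byte[] plaintext = dec.doFinal(ciphertextAndTag); // throws AEADBadTagException if modified
    System.out.println(new String(plaintext, StandardCharsets.UTF_8));
  }
}
```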
So, we already talked about the compact format, which can again be used in URLs. For example, if you wanted to have some data, some metadata, that you store, say, in a cookie on a client system, you could use this; or if you wanted to put it in some kind of URL, like sending it in an email, you could bundle encrypted data along with the URL. The next data format we're going to talk about is the JSON Web Token, and the closest analogy for a JSON Web Token is that it's the metadata that you usually get in a certificate — particularly a user certificate; this would be a similar kind of data. There's no actual cryptography involved in this; these are just standard parameters that are well defined, and there's also, in the standard, a way to put your own data in here. It basically says that either you should define a standard, publish it, and then you can have a short name reserved for you, or, if you're doing something application specific, you should use a collision-resistant name — something like com.example.parameter.foo — which would be sure not to collide with other people doing the same thing. We'll walk very briefly through what these are. The issuer is the party making the assertion — they're saying, I'm vouching for the subject, which here is me — and the audience for this is devconf, that's you guys; the assertion will expire at this particular time, it's not valid before this particular time, and it was issued at this particular time. Finally, jti is something that — I forget — yes, thank you, it's the token identifier: this is like the kid parameter for keys, but it's a unique identifier for this web token. Now, what's interesting is not the web token itself, which again has this well-defined information — there's also some additional stuff that's been defined since this point which could go in here, plus your custom data — but that's not the valuable part. The value of this is that you take this metadata and then you wrap it inside either a JWE or a JWS. That is how you cryptographically validate that this data has not been modified, that it comes from who it's supposed to come from, and that only the person who's supposed to see the data can see it. One of the things I don't like about the standard — but it is well defined — is that JWTs can be wrapped in JWEs or JWSs, possibly recursively. This means you can pass the data across multiple hops, and every hop could, say, add its own signature, or add encryption, at various different layers; and then when you receive it you basically need to unwrap all of the layers, validating every single layer, and then you actually get the data at the end. So it's a little bit complicated, and I haven't written code for this because it's hard. Yes — correct — it does have some positive uses; I only don't like it because it's actually hard to implement, and I haven't implemented it yet. Okay. Up until this point we've been talking about JOSE, which is the set of specifications — everything up to this slide should be standard, and everybody should be doing it exactly the same way — and everything you see after this point is about the specific implementation that I've done.
So, at Red Hat we have created the José project. The José project is a C library and CLI implementation of the JOSE specifications. We have support for all of the RFC-defined algorithms — this statement is actually out of date, because the CFRG curves were just recently standardized, but up until that point we have all of the algorithms. And one of the neat things about this library is that we don't have any native C data types, and we do no JSON parsing. This is important. First of all, let's talk about the parsing. Parsing is really dangerous — I don't know if you watch CVEs, but there are a lot of them for parsing bugs — and what you really don't want is your parsing and your encryption in the same place; that would be an absolutely fundamental fail. And in fact a lot of implementers of JOSE make exactly this mistake: they take strings as input, strings as output, and they serialize everything. So I want to point out specifically that we do not do any JSON parsing. There is a really good library called Jansson; Jansson is really battle tested and works really well, and that is what does all of the parsing. It can even be done in another thread if you want, and it does not have to be related to José at all — you just pass us the parsed data types. But it's also important to note that we don't then take those data types from Jansson and convert them into something C-native. The reason for this is that a lot of implementations do exactly that: they make the mistake of saying, well, you give me some JSON, I'll parse it into a language structure, we'll operate on the language structure, and when we're done we'll serialize it back out as JSON. The problem with this is that the standards are intentionally designed to be extremely fluid in the amount of optional data you can have, and what all these implementations get wrong is that when they parse into the native type, anything they don't know about they drop on the floor — and now you've completely lost all of that extensibility. We don't want to do that, so we don't have any native C data types: you just parse the raw JSON, and once you have a raw representation of that JSON you hand it to our library, and we do everything from there. Our API is also driven by a template approach. What this means is that instead of having, again, native data types which specify all of your options, you just hand us something that looks like the output you want. For example, if you're generating a key, tell us what algorithm you want that key to be for, and hand us that — and the way you tell us which algorithm, if we go all the way back to the JWK, is the algorithm parameter. So give us a JSON object that leaves out all of this other data and has only the algorithm parameter, and we'll fill in the rest for you automatically. It does require that you know a little bit about the specification in order to craft the template, but it also means that we don't have to do all sorts of overhead when manipulating these data types; you basically just parse the data, give it to us directly, and you're done. Whenever we have missing parameters in these templates, we do our best to fill in the data. First of all, we infer, for example, algorithms from keys: if you don't tell us what kind of algorithm you want to use for your encryption, and you've handed us a
key that has the algorithm parameter — guess what, we can figure out exactly which algorithm you're trying to use, and we can do this without any ambiguity. One of the important things is that if we do detect ambiguity, or a conflict, we bail; but if we don't — if it's very obvious what you're trying to do — then we just do it for you. So all the parameters are inferred from keys, they're inferred from the headers, and if they're not specified we always use sensible, secure defaults. So if you didn't specify a kty, if you didn't specify an alg, it will figure out what it can from the key — we can't invent an algorithm for you out of nothing, but if you give us the algorithm we can fill in all of the other information that the algorithm implies. The library design works like this: we have a very small core library that implements the JOSE logic, and all of the crypto itself is implemented as hooks, which means you could plug in another crypto system there at some point if you wanted; all of our code currently uses OpenSSL, so we're not building the algorithms ourselves. Then, on top of the C API, we provide a CLI tool which is a thin layer around the C API, which means that essentially anything you can do in C you can also do from the command line. The last thing is that we extensively unit test this against all of the test vectors from the RFCs; we also use test vectors that have been produced by other parties, and we're fully conformant to all of those. Here's the URL for the project — github.com/latchset/jose — and it's really easy to install on Fedora: just dnf install jose. So let's look at how to actually use the José code. Three minutes? Oh goodness, I'll go quickly. We have a function called jose_jwk_gen; it takes a configuration object, which can be null, and any JWK template you want — you basically just tell it, I want a key for this algorithm, and it generates that key and spits it out for you. You can also specify things like, I don't want a specific algorithm, I just want 16 bytes, and you can do that. And lastly, if you specify multiple templates, we will output a key set — if you remember, there's a key-set data type — so if you just tell us to generate multiple keys, we'll output them as a set. By the way, anything that you put in this template object that we don't know about, we leave in place; we don't touch it. We have some JWK utilities: we have pub, which removes all private key material; we have a use check, which basically asks, hey, can I use this key to do signatures, can I use it to do encryption, and it will tell you yes or no; and finally we can generate a thumbprint from the public key material. We have functions for signing — the C code is here, by the way, and the CLI code is here; anything you can do with the CLI you can do with C and vice versa, with I think only one exception. Most of these parameters are null: typically, if you're doing a signature, the config parameter will be null, the signature template will be null, and you'll say, here's what my JWS output is and this is the key I want to use for the signature — but if you want more control, you can fill in the other data. Verification is basically done the same way, just in the opposite direction. And one thing I need to say here is that if you specify multiple keys we give you the general serialization; if you use one key we give you the flattened
serialization, and if you specify the -c option we give you the compact serialization. And the last thing is that you can actually create a signature that does not contain the payload in it — the payload can be contained in another file if you want. Verification is basically the same thing, just the other side of the signature: we get a non-zero exit status here and a zero exit status here, and here's a case where we have a detached payload, so it's not actually in the object itself. Same thing with encryption: multiple keys gives you the general serialization, one key gives you the flattened serialization, and the -c option gives you the compact serialization. One of the downsides, by the way, to encrypting or signing data with the JOSE standards is that all your ciphertext, if it's big, is now going to be base64 encoded, which makes it even bigger — which is why we often want to use the detached serialization: detached serialization outputs the ciphertext binary to a separate file, and all the other parameters are just inside your JSON. All right, there's other stuff here that's cool; let me just say this: we're working on adding PKCS#11 support, we would love to have additional crypto library support, we don't have any JSON Web Token functions yet — but if you want to add those, please contribute — and we would also like to add functions to convert from certificates, and any other RFC features. Of course, pull requests are welcome, and if you have any questions I'm happy to field those. Do we have time for questions? Okay, thank you — I'll hang outside the door; if you have questions, just come and grab me.

How's everybody's day been? First I would like to announce that there will be a reprise of this talk on Sunday morning, so if you have something conflicting you could also check it out then. However, today I would like to welcome to the stage one of our most senior scientists from BSS Labs, and he's here to give you a presentation that is just awe-inspiring. We are really, really excited that he was willing to do this for us, and we really hope you enjoy it — and if you don't learn something here today, clearly the problem is with you. So thank you for coming, and please welcome Jacob Kozol. Esteemed colleagues, gracious guests, I welcome you. Hot dogs. Hot dogs are a ubiquitous part of American and global life. Everybody eats them. Celebrities eat them. When you go to a ball game, of course you eat a hot dog — maybe even a ballpark dog. The presidents all eat them. But the thing about hot dogs is they're pretty standard: how you cook a hot dog — you can bake them, you can boil them — the methods of cooking hot dogs have been passed down through generations, and there's very little variation. But thanks to a fellow senior scientist at BSS Labs, we have uncovered a method of cooking hot dogs that was left in the past. I present to you: the Presto hot dog cooker. It claims to cook hot dogs in 60 seconds, and it can cook six hot dogs. The packaging that was found with the archaeological study is, as you can see, very much a 70s product — quality, quality logo here. And now we get to the technique. This is a fascinating method of cooking hot dogs, almost archaic in its presentation: you stick the hot dog — I need a mic, I need a mic, I need a mic. Do you have a wire?
Alright — hot dog — OK, that works. Can you hear me in the back? So let me redirect your attention to these metallic spikes. We take these conical electrodes and we plug them into either end of the hot dog, allowing us to plug the machine into the wall, running 120 volts through it and cooking these dogs in 60 seconds. As you see here, it includes a very helpful pamphlet. You take the cover off — it covers 1 to 6 — and you can decide: maybe you're cooking for yourself and you only want 1 or 2, maybe you're cooking for your whole family and you fill it up with 6 hot dogs, and you can do this in one go. You put the cover back on, you plug it into the wall, and there you go: 60 seconds, and these hot dogs are going to come out perfectly cooked. It's specifically for frankfurters; it says nothing about sausages or any other conically shaped meat product, simply frankfurters.

Now, for our testing we used 4 brands, to get a variety and to make sure there is a consistency of quality: Oscar Mayer classic uncured wieners, Ball Park franks, Lightlife Smart Dogs for the vegans out there, and Hebrew National. We also wanted variety in the meat: we have turkey, chicken, chicken-and-pork in one, and then that solo-beef Hebrew National. As you can see the packaging — I want to direct your attention right now to the horizontal nature of the Ball Park and veggie dog packaging and the vertical nature of the Hebrew National and Oscar Mayer wiener packaging.

Now, before we can plunge into the depths of electrocuting hot dogs, we first have to understand how a hot dog cooks in a traditional manner. So we ran them through the standard 5 cooking methods. We started with a pan fry: it took 4 minutes and 15 seconds to heat the pan up to temperature, with a little bit of butter in it to make sure the hot dog got nice and brown and fried, and then we let them cook for exactly 3 minutes and 10 seconds. This led to a very linear temperature curve: all started around 76-77 degrees Fahrenheit, and over those 3 minutes and 10 seconds they climbed up to about 162 to 165 degrees Fahrenheit. We then moved on to the microwave, a lesser known, predominantly dorm-room-inspired method of cooking hot dogs: very fast, no preheat time, and then a standard 100-second cook for all of the hot dogs. This had a little bit more variety: as you notice, the veggie dog and Oscar Mayer were a little lower in temperature at the end, whereas the Hebrew National and Ball Park franks kind of climbed up in temperature; that might be the single and double source of meat versus the triple and the veggie dog. Moving on, of course, to the oven: we baked them, preheated for about 15 minutes to 400 degrees Fahrenheit, and then cooked for an additional 15 minutes. Some ovens may vary, but at 400 degrees Fahrenheit, 15 minutes was a solid cook time, and in this instance pretty even scaling for all of the hot dogs; however, the Oscar Mayer heated up pretty quickly, and that's actually a tendency we noticed in general — it's a smaller hot dog, so maybe that is why it cooks a little bit faster and gets to a higher temperature quicker. When it comes to boiling, it took my stove 14 minutes; generally it will range from 10 to 15 minutes to get water to a boil, and then we cooked for exactly 5 minutes, all of the hot dogs at once, because with a boil you can fill a nice pot. A strong variation here: the Ball Park franks really not getting up in temperature — quite edible still, according to the tasters, a little salty, a little chewy, but edible — and again, higher temperature leads to a more solid kind of mouthfeel. And then the one everybody knows, grilling: grilling a hot dog took us 20
minutes: we let the coals preheat, and then cooked for exactly 8 minutes. That led to good results with our grill; generally we found 7 to 9 minutes is the ideal range for cooking a hot dog on the grill. A fairly even temperature climb, getting higher up there, around 180 to 200, and they all came out with a nice amount of browning on the surface, that crispy kind of crunch you expect when you bite into a grilled hot dog.

So now we move on to what we're all here for: electrocuting them. We started by taking a resistance reading at a base temperature: Oscar Mayer and Ball Park frank up there with 2.1 and 2 megaohms, and then the Lightlife veggie dogs and Hebrew National lower, with 0.25 and 0.5 megaohms, assuming that might be a result of the higher sodium content in the Oscar Mayer and Ball Park franks, or of the variety of meat products within. Now, here's what it looks like when you line them all up. You get a nice amount of separation between each dog, and actually, as you see, there's a bit of a curvature; this allows a little bit of a spark in between that kind of seals down the side, which will pop open and let the juices open up but not come out, so that when you put it in your bun the juices will spill out perfectly but won't be wasted. Now we see some interesting temperature climbs here, not a very linear progression: Hebrew National very slow to heat up — I think that's because of the single beef source there — however spiking up to 168 degrees Fahrenheit, whereas the rest had a more linear progression and all ended up around 160.

So, some quotes from the experiments, all of course with the most important method, which was the electrocution. Oscar Mayer: "technically warm, warmer in the center than the edges." And actually this Oscar Mayer experiment showed us — if you go back here — that we started with 60 seconds, as we were directed, but we realized a 100-second cook time is much better for the hot dog; it gives you a little bit more of a crunch on the inside and a generally warm inside, versus here it's only "technically warm." The Ball Park frank's right kind of split; that's due to the curve you get with that dog. It's a nice split: "not very hot," turning into "this one's pretty hot," "surface temp was high," "cold and dead on the inside" — you can all relate a little bit. Then the veggie dogs: "ooh, surprisingly warm," "tofu-licious and synthetic smoke taste," "a dope mouthfeel." And finally the Hebrew National: "this one's better, better than the last," "more evenly cooked" — again benefiting from the 100-second cook time — "smells like a hot dog," "this one tastes like a hot dog that's actually cooked," and of course, "I'm a really bad vegan."

So now the comparison. We averaged out the cook-time results for initial temp and final temp for all the hot dogs and scaled them down into a time range of 100 seconds, and here you can see that the Hot Dogger, coming from a very low initial temperature, ends up skyrocketing very quickly, even kind of beating out the microwave a little bit in this one example. And then we kind of go down: pan fry pretty quick, a little lower is grilling, and boiling and baking both very slow because of that preheat time and then a long cook time. Now, we talked about the charring: everybody likes a charred hot dog, you want that outside to be crunchy, but what about the inside, which has to be soft? As you can see here, by the nature of plugging in the hot dog and running the electricity through the inside of it, you get a nice char on the inside; as you can see, the split that was so talked about, very juicy. And now we thought: what about larger dogs? What
if you want to break barriers and go for that sausage? Well, you don't need to load them linearly; you can actually push them over one, even two slots, and fill up more space. As you can see here, these are some fat hot dogs now, and they have plenty of space to grow. Now, what is a hot dog if you don't try and see what limits it can handle? What does it take to make a hot dog pop when running 120 volts through it? So we did an experiment with the Oscar Mayer and the veggie dog, and we ran them both up: at 58 seconds, a big popping sound; 75 seconds, a little bit of electric sizzle; 100 seconds, this is when we could really tell they were cooking, because it just started smelling like straight burning paper; 128 seconds, that paper smell is gone, it just smells terrible; 150 seconds, that's when the smoke starts really coming out of the machine, not just inside but out of the cover, and it smells charred. Now, due to safety concerns we had to shut down the experiment at this point — we wanted a pop, not a burn. The veggie dog came out at 171.2 degrees Fahrenheit and "feels so weird"; as you can see here, I understand why it feels so weird, and the inside, after pushing it that much, is nice and charred.

Now, the brand verdict. First, the Hebrew National solo beef dog: having a single source of meat really benefits the hot dog, it allows that even cook, that even flavor. Oscar Mayer follows up the Hebrew National, heating up pretty well, so it was consistently warm, a solid taste, a very well orchestrated hot dog. That's followed up by the veggie dog, actually beating out the Ball Park dog — so if you're ever going for a Ball Park dog, maybe just go for the veggie dog instead. So: Hebrew National and Oscar Mayer number one and two, veggie dog and Ball Park number three and four. I'm going to go back a couple slides — okay, as you see here, Ball Park and veggie dogs horizontally aligned, Hebrew National and Oscar Mayer vertical. So if you're ever not too sure about which hot dog brand to take — because there are many; you go to the store and there's a whole aisle of hot dogs — go for that vertical packaging, that vertical branding; 100% of the time it's going to beat out the horizontal.

Okay, so now what everybody's here for: how do the various methods of cooking a hot dog compare? Of course, the fan favorite is the grill; everybody likes a grilled hot dog, it quote-unquote "tastes like a hot dog." Pan frying is also very successful; it's good if you don't have a grill and you have to be inside — pan frying, a very solid choice too. Number three, however, was the Hot Dogger: people were pleasantly surprised, a little uncertain going in about an electrocuted hot dog, but the taste was generally sufficient, beating out baking, boiling and microwaving a hot dog. But this is where the Hot Dogger really starts to shine. Of course, the microwave is very fast; we give it the number one slot because there's no cleanup — you just put it on the plate and you microwave it, maybe you put some paper towels around to make sure the moisture is correct — but the Hot Dogger and the microwave are very competitive there. Pan fry comes in at a solid third; following those are boiling, oven and grill, because of the lengthy preheat times and the lengthy cook times. So now we take that quality-to-time ratio, and number one comes in as the Hot Dogger, because it had satisfying quality and a high-speed cook. Pan fry, again, good quality, decent time; so if you don't have access to a Hot Dogger — and I feel so sorry for you if you don't have access to the Hot Dogger — I would highly recommend pan frying them. Grilling comes in after that; grilling is just a solid way, if you want to impress
somebody, grill the hot dog. Microwave is better than boiling for time, then boil and bake. I would like to thank our sponsors; this was all done in collaboration with GWS, and then our senior scientist David Cantrell, senior scientist David Shia, and senior scientist Sophia Fondel. And remember: the mustard indicates progress. Okay, okay, alright, questions — any questions? In the back — very serious questions.

"So I noticed that you had a wide variety of ways to cook and produce delicious hot dogs, but I noticed that you didn't deep-fry hot dogs, and we are in a country where we love to fry things." That's a great question. I think the issue with a fried hot dog is that — oh, okay, so the question asked was why did we not fry the hot dog, and the answer to that is: when you want to fry something, generally you want to apply a batter to it, like a corn dog, where you surround the hot dog in a batter. Frying a hot dog solo is not really the standard method of cooking it, because it will be superior with that coating, so we discounted that method. Additional questions?

"So obviously not everyone has access to a Hot Dogger at home; what is your go-to method at home when you want a hot dog?" So the question was what is my go-to method to cook a hot dog at home, assuming I don't have access to a Hot Dogger, because of course I would go for the Hot Dogger. And my answer is that in warm-weather months I would go for the grill; I would take that extra time, I would commit to it, I think it's worth deciding in advance that you want a grilled hot dog. However, in winter months the grill is not as practical, so I would go for the pan fry.

Another question: "Did you consider using a Hot Dogger with a 220-volt option?" That is a great question. The issue we found is actually — so the question was did we consider running 220 volts through the hot dog instead of 120, and the issue we ran into was getting that from a standard outlet; we wanted to experiment with what a person at home would use. However, that is a great foray into more experimental methods of cooking hot dogs: really start upping the voltage and the current going through them, test what a hot dog can withstand. Additional question — the man in the red, thank you. "Do you consider expanding the experiments to other types of food in the Hot Dogger?" The question was would we consider other foodstuffs within the Hot Dogger, and absolutely. We actually have recently come across another model of the Hot Dogger, with different branding, so if you pay attention to Flock, maybe next year there might be another talk that includes that. And we would love to have more food: what happens when you put a carrot in the Hot Dogger? Of course the sausage — the sausage, we have to try out sausages — and an expanded foray into non-meat-based hot dogs, like a whole gamut of different types of veggie dogs, bananas. Another question in the back — the question is why didn't we experiment with sous vide. So we had many culinary experts on our panel of tasters, and sous vide is a very popular choice in cooking meats currently; unfortunately, while we did have sponsors, our funding was limited, and so sous vide has been put on the docket for the next round of testing. Another question: "We need to speculate — how would the Hot Dogger handle something like a swanky frank, that being a hot dog with bacon wrapped around it?" That is an exceptional question. Okay, so the question was how would the Hot Dogger handle a swanky frank, which is a hot dog with bacon wrapped around it. Now, of course, you're going to have to push that cook time. I would not
do the 220 volts with a hot dog where you're trying to cook the bacon on the outside, because it's an inside-out method of cooking, so you run the risk of the bacon maybe not fully cooking. I would postulate that the bacon would be slightly undercooked but would probably still hit an edible temperature. You might want to experiment — we mentioned sous vide — maybe you sous vide the bacon, apply it to the hot dog, and you get that nice charring from the electricity. Any additional questions? "How does one acquire a Hot Dogger?" One acquires a Hot Dogger by laborious and meticulous searching on a website like eBay, or by searching through your parents' attic; it's surprising how many families purchased these Presto Hot Doggers. We didn't — right there, they had one in their house. Any additional questions? Well, it's been an absolute pleasure; it's kind of a beefy miracle that this all was able to happen. So thank you for your time, and enjoy the rest of the conference.