The first slide — I can see it here, so at least I can start reading. So the title says: slice your business into functions and events. Very quickly, my name is Maciej Świderski. I'm an independent software engineer and consultant, working very much with workflows — as you could or would see if everything worked, then you would see workflows. I'm the creator of the Automatiko project, which I was planning to show as a demonstration. That's not going to happen. And I occasionally blog and tweet about software development and workflows. Yeah, we'll just do that. So let me just try to fix this, because that's the presenter view — we don't want that. I don't know how to get rid of it. Maybe you can turn it off. See, that's the first slide. It's a bit like this.

Events — so that's the data contract. In general, it's about having the full data, well structured and well defined. To exchange information between services and functions, we need some kind of data contract, because in the end, everything we care about while executing a particular service or application is the data. So it makes sense to rely on standards — in this particular case, the CloudEvents standard. It allows us to define the structure, the envelope of the event, and at the same time the bindings: how to express the event on a particular transport protocol, such as HTTP, MQTT, and so on and so forth. So what we exchange is meaningful and brings value to the consumer functions.

Functions are the business logic pieces, right? Meaningful from the point of view of executing business logic — a small piece of it, called by events. In general, they should be self-contained, meaning they should not depend on anything else; they should have everything they need to execute right in the data they get. The data they produce is a trigger for another set of invocations, so they should not be calling one another directly.

Let's see... success. All right, now you can see it slightly differently. When it comes to functions and services, it's quite like BPMN — this looks essentially the same. It has slightly different types of boxes, maybe some colors, maybe some annotations, small icons, and so on and so forth, but essentially it's exactly the same thing. The only thing you have on top is the extra metadata you need to make sure it executes, right? But again, this is the complete use case we want to execute, and this is how it looks from the function perspective: it is sliced into those functions. First and foremost, you as a developer are in control of what a given function consists of — how many activities, which activities, and so on and so forth. You can see the encircled activities are the composition of a function. The first one is validate data, and the one that is not encircled — that was done on purpose, to show that, by design, it joins the previous function. So notify invalid data — you see this? No, you don't see it — the one without a circle around it simply joins the previous one, which is validate data, because it doesn't make sense to send an event just to get another event to send another event, right? Those activities don't do anything else; they're just notifications. So that's why we can simply say: okay, just follow what's already executing. We can then define which part of the workflow becomes a function.
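For illustration, here is a minimal sketch of what such an event envelope looks like when built with the CloudEvents Java SDK (io.cloudevents:cloudevents-core). The event type, source, and payload are hypothetical stand-ins, not taken from the talk's demo:

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;

import io.cloudevents.CloudEvent;
import io.cloudevents.core.builder.CloudEventBuilder;

public class RegisterUserEvent {
    public static void main(String[] args) {
        // The envelope: well-defined metadata that travels with the payload,
        // independent of the transport binding (HTTP, MQTT, Kafka, ...).
        CloudEvent event = CloudEventBuilder.v1()
                .withId("abc-123")                                      // unique per event
                .withType("org.acme.users.register")                    // hypothetical type; used for routing
                .withSource(URI.create("/workflow/user-registration"))  // who produced the event
                .withDataContentType("application/json")
                .withData("{\"username\":\"jdoe\"}".getBytes(StandardCharsets.UTF_8))
                .build();
        System.out.println(event);
    }
}
```

The same CloudEvent can then be serialized in binary or structured mode over whichever transport binding the broker speaks; the envelope attributes stay the same.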
So you control which set of elements is actually meaningful from a business perspective to become a function. The demo is not going to happen, but we would have seen it locally — how it runs, how it flows, which functions it has, what data it has — and then we would have run it. Since it's DevConf, so it's Red Hat, we run it on Red Hat OpenShift with OpenShift Serverless, to make sure it communicates nicely with the Knative broker.

So essentially, this is what would happen. The user would publish an event saying, okay, I want to register a user. That's a CloudEvent, and one of the most critical parts of the CloudEvent is the attribute called type, which specifies what kind of event it is; this attribute is used by the Knative broker to route those messages. If you recall, I previously said that when we slice the functions, Automatiko automatically generates the Knative Triggers. That's essentially the link: a trigger uses the type attribute of the CloudEvent to say, okay, if you have an event with that type, route it to this service.

And that's essentially what would happen. Then the functions themselves — as you can see, there are a bunch of functions inside the service. Even though it's deployed as a single container, it has a bunch of endpoints that react to those events. As soon as it receives a particular event, based on the type attribute it looks at the payload, processes it, and creates one or more output events, which again follow the same CloudEvents specification, setting the next type. It knows that from the workflow definition: it knows which functions follow, and can simply send those events based on the outcome of the function. If we look back here, for instance, when the first event comes in, we validate data. It knows the result is either valid or invalid, right? At that point, it knows which paths in the workflow can follow. If the data is invalid, it simply goes to the invalid branch, and that's it. But if it's valid and the next step is generate username and password, it will simply create an event with the generate-username-and-password type, set the payload, and send it to the broker. The broker receives it, looks at the CloudEvent type attribute — okay, who is going to deal with that? I'll just route it there. The service gets that event, processes it, and so it follows until the end.

The good thing is that, even though it might sound weird at first that the service sends events to itself, that is exactly what enables it to scale. Based on the load and different autoscaler strategies, it can automatically scale out multiple replicas of your service, and they start processing those events individually. The interesting part is that events belonging to the same workflow instance — the function flow — can be processed by different replicas, because there are two different ways to deal with persistence in this scenario. The first is that all state — the data and everything relevant to executing the next functions — goes with the event. So essentially the persistence is your event: everything that goes through the broker is the complete state, and any replica of your service can process the next function.
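To make one step of that flow concrete, here is a hypothetical sketch — not from the talk's demo, where Automatiko derives all of this from the workflow model — of a function consuming an event, branching on the outcome, and emitting the next event whose type attribute tells the broker where to route it. Event types and the validation logic are invented for illustration:

```java
import java.util.UUID;

import io.cloudevents.CloudEvent;
import io.cloudevents.CloudEventData;
import io.cloudevents.core.builder.CloudEventBuilder;

public class ValidateDataStep {

    // One step of the function flow: inspect the incoming event, decide the
    // outcome, and produce the next event. The 'type' of the output event is
    // what the Knative broker matches against its triggers for routing.
    public CloudEvent process(CloudEvent incoming) {
        CloudEventData data = incoming.getData();
        byte[] payload = (data == null) ? new byte[0] : data.toBytes();
        boolean valid = payload.length > 0; // stand-in for real validation

        String nextType = valid
                ? "org.acme.users.generateUsernameAndPassword" // hypothetical next step
                : "org.acme.users.notifyInvalidData";          // hypothetical error branch

        return CloudEventBuilder.v1()
                .withId(UUID.randomUUID().toString())
                // source stays the workflow instance id, so state can be correlated
                .withSource(incoming.getSource())
                .withType(nextType)
                .withDataContentType("application/json")
                .withData(payload)
                .build();
    }
}
```

Because every output event goes back through the broker rather than being a direct call, any replica that the autoscaler has spun up can pick up the next step.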
If that is not the case — because you have things that are important to keep internally: internal state, sensitive data, physical files, or big volumes of data that you don't want to send over events, because that's not always feasible — it makes sense to trim the events down and keep the state in a data store: for instance Apache Cassandra, DynamoDB, MongoDB, a relational database, and so on and so forth. Whichever backend data store you use, you push the state there, and then each replica is capable of finding it, because every single event sent across the function invocations carries two pieces of information. One is the type, which I already described, and the other is the source. The source attribute of the CloudEvent represents who actually published the event, and in the case of workflow as a function flow, it's always the identifier of the workflow instance. That means when you receive an event, you can correlate it, based on the source, to the backend store: okay, give me the last state of this instance, and I can resume it. I have the full data set — even though the event I received carries only a small piece of information — and I can resume from that point because I know what to do next.

In addition to that, as you can see from the small box in the corner there, we had invocations of REST APIs out of the workflow as well, to communicate with the user repository, which was the Swagger Petstore — just why not? So we could invoke those functions and they simply send a request over the REST API to check whether the user exists; if it didn't exist, we could register it, and if it existed, we ended there. You can see the small error handlers attached to get user and register user; those deal with the REST API response codes. That's the contract the Swagger Petstore API has: if you query for a user by username, it returns 200 if the user is found and 404 if not. Since it's HTTP, 404 is considered an error code — that's why we kind of treat it as an error — and based on that we can route to register user. Register user has the option to react to internal server errors as well.

So we shrink it a little bit without the demo, but we had a few notes on what it is built with. They would be more relevant if you had seen the demo, but still: the workflow definition was created as a flowchart-like model in the BPMN format — the Business Process Model and Notation — with the Automatiko project, which implements the workflow as a function flow. It relies on Quarkus and the MicroProfile specifications underneath as the foundation of the execution. What is important as well is that it uses a sub-project of Quarkus called Quarkus Funqy, which allows you to write functions that are agnostic of the cloud provider. You represent the functions in an agnostic way, and then the bindings can, for instance, bind them to Google Cloud Functions, AWS Lambda, Knative Eventing, or Azure Functions. You essentially write it once, but configure it differently to connect to the different runtimes. And you would have seen that it was compiled natively as well — I don't know if you've heard about native compilation of Java programs; that's based on GraalVM.
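A minimal sketch of that correlation idea, assuming an in-memory map as a stand-in for Cassandra, DynamoDB, MongoDB, or a relational database — the class and method names are hypothetical, not Automatiko's API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import io.cloudevents.CloudEvent;

// The CloudEvent 'source' attribute carries the workflow instance id,
// so any replica can load the last persisted state before executing
// the next function, even if the event itself holds only a small payload.
public class WorkflowStateStore {

    private final Map<String, byte[]> store = new ConcurrentHashMap<>();

    // Persist the full state of a workflow instance after each function.
    public void save(String workflowInstanceId, byte[] fullState) {
        store.put(workflowInstanceId, fullState);
    }

    // Correlate an incoming event to its instance via 'source' and resume.
    public byte[] loadFor(CloudEvent event) {
        String workflowInstanceId = event.getSource().toString();
        return store.get(workflowInstanceId);
    }
}
```

Whether the state travels with the event or lives in such a store, the replicas stay interchangeable: the event gives them either the state itself or the key to find it.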
That essentially helps with serverless, where you need to scale up and down quite often. It gives you a very fast startup — around 20 to 30 milliseconds — plus a low memory footprint. Based on the demo I was preparing, OpenShift reported around 40 megabytes for the full running container. So quite low when it comes to a Java program. And it's packaged as container images — again, a simple deployment model, a single pod with all the functions embedded. In this particular scenario that did a very good job. But it's not that you always have everything in one single workflow. No — you start applying composition, where you break workflows apart and you have a hierarchy of them. Workflow as a function flow covers that use case as well: it knows which sub-workflow to invoke, it keeps track of which level of the hierarchy it is at, and so on and so forth. And last but not least, it was supposed to be deployed to Kubernetes — it was, I mean, just before, in front of the room, but yeah, it didn't happen. But if someone is interested, have a look at it — not on a small screen, but at least you will see it. So that's the backbone of the invocation: functions, and CloudEvents as the data, exchanging information back and forth.

Since time is running out, the last thing is to leave you with a few references that you can look at in more detail. The QR code will take you to the Automatiko website, where you can see the different use cases. One of them is workflow as a function flow, but there are workflow as a function and workflow as a service as well, and workflows on Kubernetes — that's another use case you can implement with workflows. There are a few links: CloudEvents, Knative, Automatiko, and Quarkus. With that, thank you so much — time for questions. Although we had questions for the first 20 minutes, so I guess we've pretty much run out of questions, but if you have any, shoot. Just don't ask why it didn't work. All right, thanks a lot, and hopefully the next sessions will be much better than this one. Thanks a lot.