We're going to start the pre-recorded talk, which is entitled "Cloud Native Applications with Dapr and OpenShift," by Yip Sa.

Hello everybody. Welcome to DevCon 2020. We will talk about cloud native applications with Dapr. What exactly is Dapr? Dapr is Microsoft's latest open source project. It is an event-driven, portable runtime for building distributed microservices, for both stateless and stateful applications, on the cloud and also on the edge. It embraces the diversity of languages and developer frameworks: it supports many programming languages, and it's a natural fit for Kubernetes and OpenShift.

The Dapr framework is built around two fundamental concepts: building blocks and components. Building blocks implement distributed-system capabilities; they include, for example, publish and subscribe, state management, resource bindings, and distributed tracing. Components encapsulate the implementation of a building block API; those include state stores such as PostgreSQL, MySQL, Redis, and MongoDB. Many of the components are also pluggable, so one implementation can be swapped out and another swapped in. Each building block exposes a public API that you can call from your code, and it uses components to implement its capability; each building block can have multiple components, as you see in the architecture diagram here.

Dapr has the following building blocks built in: service-to-service invocation, state management, publish and subscribe, resource bindings and triggers, actors, observability, and secrets. These are all built-in building blocks, and you can also build your own custom building blocks, depending on what you need.

In the Dapr architecture, you can see the application code at the top. Any type of framework is supported: you could have Go, Node.js, Python, Java, Ruby, or C#. Those are all different types of application code that Dapr supports.
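As a concrete illustration of the building block API idea (my sketch, not from the talk), here is what calling the state building block over HTTP could look like. The sidecar port 3500 and the component name "statestore" are illustrative assumptions; the calls are wrapped in a function so they can be reviewed without a running sidecar.

```shell
# Hypothetical sketch: calling the Dapr state building-block API over HTTP.
# Assumes a Dapr sidecar on localhost:3500 and a state component named
# "statestore" -- both are illustrative, not from the talk.
save_and_read_state() {
  # Save a key/value pair through the state API
  curl -s -X POST http://localhost:3500/v1.0/state/statestore \
    -H "Content-Type: application/json" \
    -d '[{"key": "order", "value": {"orderId": 42}}]'
  # Read it back; the sidecar routes the call to whichever component
  # (Redis, PostgreSQL, ...) implements the building block
  curl -s http://localhost:3500/v1.0/state/statestore/order
}
```

The point is that the application only ever sees this HTTP API, so the backing component can be swapped without changing the code.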
Then your application talks to the Dapr runtime in the middle tier using an HTTP or gRPC API, and through it interacts with all the different Dapr components. Dapr in turn talks to the lower layer of cloud services, on AWS, Google Cloud, Microsoft Azure, and so on. A component is basically a piece of functionality delivered as a unit, and each component has an interface definition. All of these components are pluggable, so we can swap them in and out easily. The component types, as we have seen earlier, are service invocation, resource bindings, state management, distributed tracing, publish and subscribe, and actors.

So what about Dapr on OpenShift? What you need to do is set up the different pods for Dapr, and each pod contains a Dapr container. For example, you have the Dapr placement service in one pod, the Dapr sidecar injector in another pod, and the Dapr operator in another pod, and all these Dapr pods interact with the Dapr sidecar. Dapr also has an API, over HTTP or gRPC, that talks to your application code. All of this gets encapsulated in OpenShift on the left side, and it can interact with the other components on the right-hand side: the resource bindings, the state store, publish and subscribe, and distributed tracing.

To use Dapr on OpenShift, first we install Helm. Download the latest Helm release from github.com, unpack it, find the helm binary, and move it to /usr/local/bin/helm. Once you have installed Helm, you can go ahead and install Docker using yum install docker, then use systemctl to start the Docker service. At this point you are ready to install Dapr: you can use wget -q to fetch the Dapr installation script from GitHub. Once you have the shell script, you need to log in to your OpenShift cluster as an administrator.
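The prerequisite steps above could look roughly like the following. This is a sketch, not the speaker's exact commands: the Helm version, download URLs, and cluster address are illustrative assumptions, and the commands are wrapped in a function so they can be sourced and reviewed without immediately executing.

```shell
# Sketch of the prerequisite setup (assumes a yum-based host and network
# access); versions and the cluster URL are placeholders.
install_prereqs() {
  # Install Helm: download, unpack, move the binary onto the PATH
  wget -q https://get.helm.sh/helm-v3.2.0-linux-amd64.tar.gz
  tar -zxf helm-v3.2.0-linux-amd64.tar.gz
  sudo mv linux-amd64/helm /usr/local/bin/helm

  # Install and start Docker
  sudo yum install -y docker
  sudo systemctl start docker

  # Fetch and run the Dapr CLI installation script from GitHub
  wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash

  # Log in to the OpenShift cluster as an administrator (placeholder URL)
  oc login https://api.my-cluster.example.com:6443 -u admin
}
```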
Make sure you check your login status with oc whoami; you should be an administrator for your cluster. Since we are using a Helm repository to install Dapr, you need to add the Dapr URL to your Helm repositories: using helm repo add dapr and passing in the URL, you add the Helm repository. As the next step, you need to do a helm repo update, making sure you get the latest charts.

So now you have Dapr, Helm, and Docker installed, and you can go ahead and create a namespace in OpenShift. In this case, you use oc create namespace; I'm giving it the name dapr-system. Once you have your namespace, you can start installing Dapr into it using helm install dapr, passing in your namespace name, dapr-system. That installs Dapr into the namespace for you.

Once Dapr is installed, double-check that you have the Dapr operator, the Dapr sidecar injector, the Dapr placement service, and the Dapr sentry. These four objects should be available as four different pods in your dapr-system namespace. The Dapr operator manages the components and service endpoints for Dapr; that's pretty clear. The Dapr sidecar injector injects the Dapr sidecar into your pods. The Dapr placement service is used for actors: it creates a mapping table that maps actor instances to pods. And the Dapr sentry manages transport layer security, basically acting as a certificate authority. Now you can check your pod status and make sure they are in the running state: when you call oc get pods, passing the namespace dapr-system, you should see the four different pods running.

In the sample code, we're going to demo how to get Dapr running in OpenShift and then deploy a Node.js app that subscribes to order messages and persists them. In this diagram here, we have the Dapr runtime that talks to the Node code using the Dapr API.
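The install sequence just described can be sketched as follows. The Dapr chart repository URL is the public one; the release and namespace names match the talk, and the commands are wrapped in a function so the sketch can be reviewed without a live cluster.

```shell
# Sketch of installing Dapr into OpenShift via Helm (assumes an admin
# oc login has already been done).
install_dapr() {
  # Add the Dapr Helm chart repository and refresh the chart index
  helm repo add dapr https://dapr.github.io/helm-charts/
  helm repo update

  # Create the target namespace and install Dapr into it
  oc create namespace dapr-system
  helm install dapr dapr/dapr --namespace dapr-system

  # Verify the four control-plane pods (operator, sidecar injector,
  # placement, sentry) are running
  oc get pods --namespace dapr-system
}
```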
We are using state management with a state store backed by a database such as Redis. On the left side, we have the user interacting with this application through GET and POST endpoints: the GET endpoint retrieves the list of orders, and the POST endpoint creates a new order. A Python app generates messages, which the Node app consumes and persists. The architecture diagram shows the Dapr components: one Dapr runtime talks to the Python code, another Dapr runtime talks to the Node code, these two Dapr runtimes talk to each other, and at the end the Dapr runtime talks to the state store, which is Redis.

So first, get the latest code from Dapr. This is a hello-world project: if you clone the Dapr samples repository from github.com and go into the hello-world folder, you should see the project. The sample code has a dependency on Redis as a state store for data persistence, so you need to install Redis.

Now, the new-order POST endpoint. When we need to create a new order, we call app.post, passing /neworder as the endpoint path, with the request and response objects. In the sample code here, it calls the Redis state store to persist the order information. The GET endpoint is similar: you call app.get with /order, and the request contains the order ID. In this case it calls the Redis state store and retrieves the latest order information.

So now you know we have a dependency on Redis. To install Redis, you call helm repo add with the Bitnami charts URL to add that repository, and after that you call helm install redis to install Redis into your namespace. Redis also has a dependency on a secret, so extract the secret for Redis from the default namespace.
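The Redis installation step could look like this sketch. The Bitnami repository URL is the public one; the release name "redis" is an assumption that matters later, since the generated password secret is named after the release.

```shell
# Sketch: installing Redis from the Bitnami Helm charts as the state store.
install_redis() {
  # Add the Bitnami chart repository that hosts the Redis chart
  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm repo update

  # Install Redis; the release name "redis" determines the name of the
  # password secret we read in the next step
  helm install redis bitnami/redis
}
```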
So you use oc get secret, specify the namespace, and specify the JSONPath for data.redis-password. Once you have the password, you update the redis.yaml file in the deploy directory: set the Redis host to redis-master:6379 and the password from the last step. Now that you have the redis.yaml file updated, you can call oc apply -f on the YAML file and make sure the component gets created.

Now you can create the Node and Python applications: with oc apply -f you create the Node YAML, and with another oc apply -f you create the Python YAML. At this point, you can do an oc get pods on the dapr-system namespace and see that the Dapr operator, the Dapr placement service, the sentry, the Dapr sidecar injector, the Node app, the Python app, and Redis are all set up and running.

To observe the messages, you can look at the logs coming out of the pod. All you need to do is run oc logs with the Node app pod name and make sure you see the order IDs coming out in the logs. And at the end, you can expose a route for the Node app: you do oc expose service on the Node app, call the Node app endpoint, and confirm that the order is being persisted; in this case, our order ID is 42. So now you have just finished the deployment of a Dapr app. You can go ahead and update the sample code to fit your own scenario.

In conclusion, Dapr works well with cloud native OpenShift, enabling easy, event-driven, stateful microservices development and deployment. Dapr provides consistency and portability through standard APIs, including HTTP and gRPC. The architecture is also open source and works well with any programming language and developer framework. That's it, the end of the presentation. Please let me know if you have any questions. Thank you.

Hi, so let me check again to see if Yip is here. If you're listening, Yip. If you're here, Yip, we need to have you try to share your audio and video. Okay, so I guess Yip is not here for the Q&A.
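The deployment and verification steps above can be sketched as follows. The secret name "redis", the file names under deploy/, and the service name "nodeapp" are assumptions based on the talk's description of the hello-world sample; the commands are wrapped in a function since they need a live cluster.

```shell
# Sketch: wiring the Redis secret into the component, deploying the apps,
# and verifying an order was persisted. Names are illustrative.
deploy_sample() {
  # Extract and base64-decode the generated Redis password from its secret
  REDIS_PASSWORD=$(oc get secret --namespace default redis \
    -o jsonpath="{.data.redis-password}" | base64 --decode)
  echo "Set host redis-master:6379 and this password in deploy/redis.yaml: $REDIS_PASSWORD"

  # Create the state-store component, then the Node and Python apps
  oc apply -f ./deploy/redis.yaml
  oc apply -f ./deploy/node.yaml
  oc apply -f ./deploy/python.yaml

  # Watch the order IDs in the Node app logs (label is an assumption),
  # then expose a route and confirm an order was persisted
  oc logs -l app=node
  oc expose service nodeapp
  curl "http://$(oc get route nodeapp -o jsonpath='{.spec.host}')/order"
}
```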
You can try to follow up with him in person, or you can also try going to the breakout booth later on; he may be there. For people who want to have discussions about things happening in evolving technology, the breakout booth is the place to do it. It's in the new edition room, and I'll put that link in chat for anybody who wants to go there.