Hi everyone, welcome to this session on Ballerina, an open source cloud native programming language. We have all seen cloud native technologies being used to create scalable applications. We generally use them as a set of tools, libraries, and frameworks layered on top of existing programming languages to make them cloud native ready — so they can be used to write microservices, with these features retrofitted onto those languages. What we have tried to do with Ballerina is to create a programming language from the ground up, with these cloud native concepts built into the language itself. Basically, it's a language that is agile and network aware. The Ballerina language knows about network endpoints; it knows how the network works, how to do resilient communication, and so on. The idea of the language is to create an additional abstraction layer that understands the network, to make the life of the developer much easier when doing cloud native development. In this presentation, I'll be going through some of the major features of the language, and I'll be doing some hands-on demonstrations of what the code looks like and how things work. Before we get into samples, another aspect is that Ballerina is a batteries-included platform. It has built-in support for most of the prominent technologies we use — starting from data types like XML and JSON, to transports like gRPC and NATS, to built-in observability features and so on. Those are all built into the language and the platform, so you don't have to worry about finding external libraries to get the most common things done. So let's look at a hello world scenario in Ballerina. It's actually a hello world service that we are going to write — a simple Ballerina service which will respond to an HTTP GET. Let's see how that is done. I'm going to use VS Code for my demos; there's a Ballerina VS Code plugin that you can use.
If you go to the VS Code marketplace, the Ballerina plugin is there and you can install it directly from that page. Here I'm just going to create a new file for the demo — .bal is the source file extension for Ballerina code. Let's create an HTTP service. The plugin contains some shortcuts to create often-used templates; here I have directly created a service. As you can see, in Ballerina we have a first-class construct for services. A service can be an HTTP service, a gRPC service, a messaging service, and so on, and the type of the service depends on the type of listener that it's bound to. Here we have an HTTP listener defined, so this becomes an HTTP service. Let's call our service hello. These resource functions are the actual functionalities in the service — a resource function can be bound to a certain sub-context of my HTTP service. I'm going to call it hi, and I can also give things like the base path for the service; here I can say this is mapped to the root context. Then we just send a response back to the caller, say hello, and do some housekeeping to handle errors. So that's it — this is our service. Then let's run it and see. When you run the `ballerina run` command on the source file, it builds and runs our program at once. Now our service is up and running, so we can send it an HTTP request and see the response. Our base path is the root, and the sub-context by default maps to the resource function name, so I can say hi. And yes, you can see we got the response back. So that's a simple HTTP service in Ballerina. We can modify its behavior as well. We already saw the use of annotations earlier when we gave it the base path; we can do other things, like restricting it to POST requests only. That annotation should be added to the resource — I can say, OK, I only handle POST requests here.
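The hello world service built in the demo could look roughly like this — a minimal sketch in pre-Swan-Lake (1.x) Ballerina syntax; the port number and the exact template details are assumptions, the `hello`/`hi` names follow the talk:

```ballerina
import ballerina/http;
import ballerina/log;

// An HTTP service named "hello", mapped to the root context of
// a listener on port 8080 (port chosen for illustration).
@http:ServiceConfig {
    basePath: "/"
}
service hello on new http:Listener(8080) {

    // Resource function: by default mapped to the sub-context /hi.
    resource function hi(http:Caller caller, http:Request req) {
        // Send the response back to the caller; the remote method
        // invocation uses the arrow (->) syntax.
        var result = caller->respond("Hello, World!");
        // Housekeeping: handle a possible error from the network call.
        if (result is error) {
            log:printError("Error sending response", result);
        }
    }
}
```

Running `ballerina run` on the file and sending `curl http://localhost:8080/hi` should then return the greeting.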
And here let's actually extract the payload that is sent to the service: we use the request object and say, give me the text payload. Afterwards, we just echo that payload data back to the user — we'll use a string template here and give the variable name. That's it for our service update; let's run it and see. In this case, we are going to send some payload as well, so this will be a POST request. You can see we got the response back with the payload we gave — we gave "Jack", so it says hello Jack, and so on. That's the simple way of handling services in Ballerina. Let's go to the next section, on using connectors. We'll do an extended demonstration of the code we have by using some connectors. As I mentioned, Ballerina has explicit knowledge of the network operations that we do — for example, HTTP POST, GET, and so on. Those are modeled as special operations in the language. For example, when we say call or respond, we call it a remote method invocation, and it has special syntax as well: this arrow notation. The runtime then knows that we're actually doing a network call from the language, and this information can be used to do certain optimizations in network communication, and also for other aspects such as automatic observability, which I'll show later as well. Here I'm going to use some connectors with our Ballerina service — a simple scenario where I'm going to use an Amazon service, the Amazon Rekognition service, to do some image analysis and return the response back to the client. Let's see how we can do that. It's actually going to be an OCR operation, so let's name our service the OCR service. We are going to do some processing, and for this we are going to import a module for the client — the client for the Amazon Rekognition service. OK, so the first thing we have to do is create the configuration for it.
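The POST-only echo resource described above could be sketched like this (again in 1.x-style Ballerina; the string-template wording of the reply is an assumption):

```ballerina
import ballerina/http;
import ballerina/log;

service hello on new http:Listener(8080) {

    // Restrict this resource to HTTP POST via the resource annotation.
    @http:ResourceConfig {
        methods: ["POST"]
    }
    resource function hi(http:Caller caller, http:Request req) {
        // Extract the text payload sent by the client.
        var payload = req.getTextPayload();
        if (payload is string) {
            // Echo the payload back using a string template.
            var result = caller->respond(string `Hello, ${payload}!`);
            if (result is error) {
                log:printError("Error sending response", result);
            }
        } else {
            log:printError("Error reading the text payload", payload);
        }
    }
}
```

A request like `curl -d "Jack" http://localhost:8080/hi` would then echo the name back, as in the demo.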
We can just create the record with the required attributes — the access key and the secret key. Here I'm just going to use Ballerina's config API to read in these values; they can be read from a configuration file or from environment variables. I actually have my API key and secret values in a file here, ballerina.conf, so I'm going to read from that. I have two properties called AK and SK. Now my configuration is initialized. The next part is creating the client — and now we have the client ready as well. Then what we have to send to the client is the binary payload that was sent to this resource function. So the first thing is to extract the binary payload, and it's available here. Then we call the necessary remote method invocation — detectText — giving it the payload. Then we get the message from detectText and we directly send it out to the client: we do a respond with the message. That's it — that's the full implementation, so let's run it and see. OK, now the service is up and running, and we'll send it an image. This is the image we are going to send, and the expectation is that the text will be extracted by that remote service and we'll get the text payload back. Here's the curl command we have to use — we are sending the binary payload from curl. And yes, we can see we got the text response back from our backend service. So from the client to our service, we do the remote method invocation and we send the result back. As I said, Ballerina knows about these network endpoints, and they are used in various aspects of the language — for certain optimizations, and also for another unique property: they can be used for visualizing our code as sequence diagrams. The Ballerina language is designed from the bottom up to be compatible with the sequence diagram concept.
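Putting the pieces together, the OCR service could be sketched as below. Note that the connector module name and its `detectText` API shown here are hypothetical stand-ins for the Amazon Rekognition client used in the demo — only the config keys (AK, SK) and the overall flow come from the talk:

```ballerina
import ballerina/config;
import ballerina/http;
import ballerina/log;
// Hypothetical module path for the Rekognition connector.
import ballerinax/rekognition;

// Read the AWS credentials from ballerina.conf (or environment variables)
// via the config API, using the AK and SK keys mentioned in the demo.
rekognition:Configuration conf = {
    accessKey: config:getAsString("AK"),
    secretKey: config:getAsString("SK")
};

rekognition:Client rekognitionClient = new (conf);

service ocr on new http:Listener(8080) {

    resource function process(http:Caller caller, http:Request req) {
        // Extract the binary (image) payload sent by the client.
        var payload = req.getBinaryPayload();
        if (payload is byte[]) {
            // Remote method invocation on the connector (arrow syntax),
            // asking the backend to detect text in the image.
            var message = rekognitionClient->detectText(payload);
            if (message is string) {
                var result = caller->respond(message);
            } else {
                log:printError("Text detection failed", message);
            }
        } else {
            log:printError("Error reading the binary payload", payload);
        }
    }
}
```

The matching ballerina.conf would simply hold the two keys, e.g. `AK="..."` and `SK="..."` (values elided).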
All the code we write maps one-to-one onto a sequence diagram by default. In VS Code, we can click this icon and it automatically gives you the sequence diagram view of the code you have just written. As you can see, it contains all the actors and the remote endpoints on their own lifelines. When you do a remote invocation, like the detectText remote method invocation, it's shown as a remote message between these lifelines, so you can clearly see the interactions between the actors that are in the system. Especially when you have a complicated scenario, it's very easy to see what's happening. For example, I'll quickly modify this to have some conditional branches as well — I'll just do the error handling explicitly. You can clearly see from the if statement that these are different branches, and so on. At a high level, you can very easily understand the code that you have written; it becomes self-documenting code with a mapping like that. Now we'll quickly go through some other prominent features of the language, starting with concurrency. We have a unique concurrency model based on something called strands. A strand is like a lightweight thread construct in Ballerina, and a worker is basically the realization of a strand. In a specific function, we can have multiple workers for specifying concurrent executions, and communication between these workers is done using message passing. As you can see here, we can define certain data variables and send them to the other worker, and so on. The compiler automatically checks these interactions and validates them to make sure they don't result in deadlocks or anything like that, and if it finds that the interactions are not compatible, it gives a compiler error saying this should be fixed.
As you can see from the sequence diagram, we also clearly show how the message interactions happen between the multiple workers that are executing in parallel. We support futures as well, so we can take any function and make it run asynchronously — we use the start keyword to run it, and we get back a future construct which represents the future value we will be getting. Then we can later wait on the future for the asynchronous operation to finish and get the value. In that way we can model our operations in that manner as well. Something that ties in with the concurrency model is the input/output subsystem. In Ballerina, I/O operations all happen in a non-blocking manner, and we do that transparently. In our code, the coding style will actually look like a blocking call — for example, when we do a GET request to a remote endpoint from our HTTP client. It looks like a blocking call, but what happens in the Ballerina runtime is that we automatically suspend the executing strand — the execution context — and hand the processing of the I/O operation over to the operating system. Only when that I/O operation is done do we resume our strand. So no operating-system-level threads are blocked; they are immediately released when we do those kinds of operations, and only when the I/O operation is done do they resume on a physical thread. In other typical programming languages we have to handle this on our own, using callback mechanisms and so on, but here it's done in the language runtime itself, so users get a familiar programming model along with the non-blocking I/O features. Now let's move on to the Docker and Kubernetes support we have in Ballerina. In cloud native programming, containerization is critical, and with that we need container orchestration as well.
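The start/wait pattern for futures could look like this (the `compute` function and its workload are made up for illustration):

```ballerina
import ballerina/io;

// Some function representing a longer-running computation.
function compute() returns int {
    int total = 0;
    foreach int i in 1 ... 100 {
        total += i;
    }
    return total;
}

public function main() {
    // Run compute() asynchronously; start returns a future immediately
    // while the function executes on its own strand.
    future<int> f = start compute();

    // ... other work can happen here while compute() runs ...

    // Block only at the point where the value is actually needed.
    int result = wait f;
    io:println("Result: ", result);
}
```

The same suspend/resume machinery described for I/O applies here: waiting on a future parks the strand rather than blocking an operating-system thread.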
Docker and Kubernetes are the leading technologies used for those things, so naturally Ballerina also has built-in support for them. Basically, we can take one of our services and automatically make it compatible with container deployment, and with deployment on a platform like Kubernetes. Let's see how we can convert our earlier scenario to be deployable on Kubernetes as well. We have to make some minor changes to the code — we are not changing the business logic, of course. We are just going to annotate our service saying, OK, deploy this in a Kubernetes environment. We first do this by adding a Kubernetes deployment annotation, and we also have to say, OK, expose this as a Kubernetes service as well. We can give other parameters like the service type and so on — you can expose it as a NodePort or a LoadBalancer, for example. We can also give other extended information, such as a config map. We'll need that here because we are passing in external configuration properties to our service, so I'm going to provide my configuration file as a config map. These are basically the annotations we'll need in order to deploy this on Kubernetes. Let's see how it's done. Now we just have to do a ballerina build on the source file — this time we are just doing the build step rather than a run, where we would build and run at the same time. What happens this time is that the compiler sees that we have these Kubernetes annotations, and it engages a compiler extension to work on those aspects. It will automatically create things like the Docker image and the Kubernetes artifacts, and it provides us with the final command we have to execute in order to deploy this on Kubernetes. So let me run that. OK, we can see now our artifacts are deployed using the generated configuration.
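The annotations described here could be sketched as below; the annotation names follow the Ballerina 1.x `ballerinax/kubernetes` module, but the exact field names (for example `ballerinaConf` and the image tag) are approximate and should be checked against the module's reference:

```ballerina
import ballerina/http;
import ballerinax/kubernetes;

// Expose the listener as a Kubernetes service of type NodePort.
@kubernetes:Service {
    serviceType: "NodePort"
}
listener http:Listener ocrEP = new (8080);

// Generate a Kubernetes deployment (and Docker image) at build time.
@kubernetes:Deployment {
    image: "ocr-service:v1.0"
}
// Mount the external configuration file as a config map.
@kubernetes:ConfigMap {
    ballerinaConf: "./ballerina.conf"
}
service ocr on ocrEP {
    resource function process(http:Caller caller, http:Request req) {
        // ... business logic unchanged from the earlier OCR scenario ...
    }
}
```

Running `ballerina build` on this file triggers the compiler extension, which emits the Docker image, the Kubernetes YAML artifacts, and the `kubectl apply` command to run.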
We can check that by doing a kubectl get pods — you can see the pod is deployed — and we can check our service as well; you can see our service is also deployed, and we should be able to contact it through here. Let's do a curl request for that as well; the port is 363. Now the request is going through the Kubernetes setup, and we can see we got the response back. So we basically used the Kubernetes service that's exposed as a NodePort, sent a request, and got the response back. In the same way you can deploy to any Kubernetes cluster — any hosted solution, in AWS, Azure cloud, and so on. It's just a matter of pointing your kubectl kubeconfig at it, and the same commands are used to deploy the application. Let's go to the next section: I'll quickly go through some of the serverless features we also have in Ballerina, starting with AWS Lambda. In the same way as we did the Kubernetes deployment, it's just a matter of annotating a Ballerina function, and the compiler will automatically generate the required artifacts for the deployment. I'll quickly show an example. I already have a simple function that generates a new UUID and returns it. Here also, if you just do a ballerina build, the compiler understands that I have annotated this as an AWS Lambda function, and it creates the required zip file. I can just deploy that using the AWS CLI commands. The command is given here, with some placeholder variable names that I should fill in for the functions — if you have multiple functions, those will all be listed here as well; here I only have one. So I'll just say uuid, and I'll make sure the function is not there anymore. OK. Let me set the role ARN as well, and also my region. OK, now we have all the parameters required, and we should be able to deploy the function. OK, now the function is deployed.
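The UUID Lambda function described here could be sketched like this — the annotation and parameter shapes follow the Ballerina 1.x `ballerinax/awslambda` module as I understand it, so treat the exact signatures as an assumption:

```ballerina
import ballerina/system;
import ballerinax/awslambda;

// Mark this function as an AWS Lambda entry point; at build time the
// compiler extension packages it into a deployable zip for the AWS CLI.
@awslambda:Function
public function uuid(awslambda:Context ctx, json input) returns json|error {
    // Generate and return a new UUID as the function result.
    return system:uuid();
}
```

After `ballerina build`, the generated zip is deployed with the usual `aws lambda create-function ...` command, filling in the function name, role ARN, and region as shown in the demo.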
Let's try an invoke from the CLI itself. OK, the execution is successful — let's see the payload. You can see we got the response out. Let's do that in a single command. You can see these are separate invocations of the Lambda function we just deployed. In that way you can create Lambdas easily using Ballerina — you don't have to worry about an extra build step for packaging these artifacts and so on; that's automatically done by the compiler itself. Let's go to the next one. In the same way, we have support for Azure Functions as well — a similar annotation-based approach is there when you want to define those types of serverless functions. Here I have a specific scenario I have created using Azure Functions: a scalable data processing scenario, again with some processing of images. We take in requests over HTTP, containing the binary data and an email address, and we submit that to Azure Blob storage. We also queue those requests, and using serverless functions we process the images, put the results on other queues, and publish them via email. These intermediate queues and storage mechanisms are used to scale the system. I'll show you how easily we can model this as a serverless scenario with Ballerina. This is basically the code required to do that, using the Azure Functions bindings concept. With input and output bindings we can nicely chain a request through multiple stages, so we can create a workflow using this feature. In Ballerina, we can say that a specific HTTP trigger invokes this function, and we can bind certain parameters to storage options like queues, blob storage, and so on, just by setting values in the code — we don't have to worry about specific service clients being initialized here.
We don't have to worry about the credentials and the API keys either; they are automatically handled by the system itself. In this way we can, as I said, wire these functions together: we set a specific value on a queue in one function, and it's directly bound to another function that is listening to that queue, so the two are connected. At the end, the results are sent to another serverless function, which sends an email. It's very convenient to model these operations using this mechanism, and Ballerina makes it easier because it does all the build steps for the deployment — just by building, you'll be presented with the deployment artifacts, and then you can use the CLI commands or any other deployment mechanism to deploy it straight to the actual Azure Functions environment. Another aspect of cloud native and microservices development is continuous integration and continuous delivery. You can use many systems for that, like Jenkins, Codefresh, and so on, and we have support for GitHub Actions as well. There's a Ballerina GitHub action available that you can use to directly build and test Ballerina applications and deploy them to an environment — that may be a Kubernetes cluster, a serverless environment, or anything like that. So that can be done using the GitHub Actions features we have as well. Another critical feature we have in Ballerina is built-in observability. Basically, in any code we write, especially when it comes to network endpoints, the system can automatically observe the operations that you do, and it can generate metrics and tracing information based on those operations. Operations such as remote method invocations are automatically tracked, because we know they are network operations, and that is used for the observability functionality.
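The queue-chaining pattern described above could be sketched as two functions wired together through a queue binding — this follows the general shape of the Ballerina Azure Functions module, but the module alias, queue names, and binding field names here are assumptions:

```ballerina
import ballerinax/azure.functions as af;

// Stage 1: an HTTP-triggered function that puts the incoming request
// body onto a storage queue via an output binding — no queue client
// or credentials are handled in code.
@af:Function
public function enqueue(af:Context ctx,
        @af:HTTPTrigger {} af:HTTPRequest req,
        @af:QueueOutput { queueName: "requests" } af:StringOutputBinding msg)
        returns @af:HTTPOutput af:HTTPBinding {
    msg.value = req.body;
    return { statusCode: 202, payload: "queued" };
}

// Stage 2: triggered automatically by messages on the "requests" queue,
// processes them and forwards the result to the next queue in the chain.
@af:Function
public function process(af:Context ctx,
        @af:QueueTrigger { queueName: "requests" } string inMsg,
        @af:QueueOutput { queueName: "results" } af:StringOutputBinding outMsg) {
    ctx.log("processing: " + inMsg);
    outMsg.value = inMsg;
}
```

Because the output binding of one function names the same queue as the trigger of the next, the two stages are connected declaratively, which is how the workflow in the image-processing scenario is chained together.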
Let's quickly do tracing of our earlier scenario by enabling observability on it. It's just a matter of enabling observability when we are running it — we can globally enable observability for the full application by giving a runtime switch: you run ballerina run with the observability flag set to true. When you do that, it starts an internal endpoint for exposing metrics to Prometheus, so a Prometheus server can connect to this endpoint and pull metrics information. It also starts up publishers for traces based on the OpenTracing API. By default we ship a Jaeger client, so you can connect to that to send tracing information. So let's start up a Prometheus server, Jaeger, and also Grafana for visualizing our metrics information. For this scenario, I'm going to start the servers using Docker — that's just the easiest way to get things up and running quickly. I have the commands here, starting from Prometheus, Grafana, and Jaeger. Let's send some requests just to populate the metrics stores and the tracing information. Now let's start up Grafana. The first thing is to add the data source, which will be the Prometheus endpoint. OK, we added the data source; then we import the dashboard — there's a default Ballerina dashboard that is already available alongside the default dashboards. Right away we are seeing the stats from the requests that were already sent, and we get information such as service metrics, requests per minute, rates, error rates, and so on — we have many graphs to see the service status. The same is available for HTTP client metrics, and also for other things such as database operations, like SQL client metrics. These dashboards can be customized as well; these are just the default ones you will be getting.
Then let's move on to the tracing view as well — that's in Jaeger. Here we have our service, the OCR service. When we say find traces, it shows the two requests we sent, and we see the tracing information, the spans that were collected. Here there's only one service, and we can drill down into multiple aspects — the operations that were executed in it. We can see it started from the detectText remote method, and then the internal HTTP clients that were used, the POST request, and so on. When you have multiple services, it will show you the connections between them, tracking all the hops through the network via a correlation ID that is shared through the single round-trip request. In that way you can drill down into what's happening in the network for a request — especially useful when debugging your system or doing performance improvements. So basically that's how you get the default out-of-the-box observability in Ballerina. It tracks all the default operations, so maybe 95% of the time that will be enough for the general cases, where all the network operations and other actions are monitored by default. And of course, when some custom enrichment is needed, you can use the observability API to add additional events or more properties as well. This is a screenshot of how the tracing UI looks when we have multiple services running in the system — that's with Prometheus and Grafana, as we saw. We also support OpenAPI — the Swagger standard; it was called Swagger earlier, and it's now OpenAPI. That can be used to define services in Ballerina: starting from a Swagger/OpenAPI definition, we can create Ballerina service skeletons and stubs, and also clients. Using the command line CLI, you can invoke these operations. If you want to learn more about Ballerina, go to the Ballerina website.
We have a Ballerina By Example page, which goes through the prominent language features via multiple examples, so you can run them and learn how they work. They are commented examples, step by step — a quick way of getting the hang of the language. Ballerina's syntax is similar to languages like C, C++, and Java, so it will be familiar to most people who have used similar languages; the more specialized concepts you can learn through the resources on the website. Also, for beginners who are totally new to programming, we have written a book as well — my colleague Lakmal and I wrote a book on how you can learn programming using Ballerina, so anyone who is a complete newcomer to programming can check out this book, which uses a new approach to teaching programming through the language. And of course, this is an open source project. The full code is on GitHub, and the compiler and the runtime are licensed under Apache 2.0. You can contribute with ideas and with code, in any area. You can get into the conversation on Stack Overflow, where you can ask questions, and our main point of interaction is the Slack channel, so you can go and join that as well. Any feedback is welcome and much appreciated. Also, the demo code I used you can find in my GitHub repo — for the AWS demo scenario and also the serverless scenario, you can find the full code there with deployment instructions, so you can deploy it on your own and check it out. That is it for my session. Thank you for listening — if there are any questions, I can take them up. Thank you.