Hello all, welcome to the Hyperledger India Chapter meetup. Today we have a presentation about oracles in enterprise blockchain. Generally, an oracle is a trusted source of data that feeds real-world information into the blockchain. We have seen oracles on public chains, using Chainlink, but unfortunately there were no Hyperledger Fabric based oracles available. So today we have Spydra talking about an oracle service designed to seamlessly integrate with permissioned blockchains. I welcome Ashfa Govindan, co-founder of Spydra, and Sundar, to talk about oracles in enterprise blockchain, especially in Hyperledger Fabric. Over to you, Ashfa.

Thanks, Kamlesh, and thanks everyone for joining today's call. As Kamlesh was mentioning, today's topic is focused on Hyperledger Fabric and how we have implemented oracles, which bring external data from trusted sources into the blockchain during smart contract execution within Hyperledger Fabric. We will also cover another feature we have built, called workflows, which is about providing a no-code way of writing smart contracts themselves. Myself and Prashant, the blockchain architect on our team, will be talking about these things.

Quickly, about Spydra: at a high level, we provide a low-code asset tokenization solution for enterprise customers and businesses. We are focused on the private and permissioned blockchain space, and on providing an asset tokenization solution there in a very low-code manner, across industries. We are not focused on a single vertical, real estate for example; our solution is generic and configurable to handle many different use cases in different industries.
That's our mission: to make the adoption of enterprise blockchain solutions very easy in traditional industries where a lot of Web2 solutions already operate, in supply chain, insurance, healthcare, banking, finance, and so on. While developing this kind of platform, one that can be used across different industries and scenarios, and a low-code platform at that, we identified some key requirements to make it successful and easy to use; those are what you see on the screen. At the very core, anybody who wants to do something with an enterprise or private blockchain network needs to set up and manage the infrastructure, the whole blockchain network itself, and that infrastructure spans multiple organizations who may each have their own way of doing things, their own infrastructure on different clouds or on premises. So there has to be a way to easily set up that infrastructure across the different participants. Then, once you have that, you need to start writing chaincode, which is what a smart contract is called in Hyperledger Fabric: code that you write and run within the Hyperledger Fabric blockchain environment. That definitely has a learning curve for developers coming from traditional Web2 systems who want to take advantage of blockchain. They need to understand how peers, orderers and so on work, how authentication and signing work within the Hyperledger Fabric environment, and how to write code effectively. A simple example: you can't use the current time within chaincode, because the same chaincode executes on multiple peers, and each will produce a different result for the current time when the code executes.
The transaction will then not succeed, because the output of the smart contract execution is not the same across the different peers. There are a lot of nuances like this, so somebody has to learn, not necessarily a new language, but a new way of writing code even in languages they already know. That's the second thing, and that's where we as a company have done a lot of work to make the experience low-code. One of the things we have done is provide a way for users to define different kinds of assets or objects on the chain. In any use case, you might want to manage different kinds of objects on the chain; think of it like a database where you create different tables for different kinds of objects. But beyond managing objects or assets on the chain, the real value comes when you can implement business rules within the blockchain itself as a smart contract. Again, that would normally involve writing code, and that's where we have built a workflow-based system with a user interface, using which somebody can design that entire workflow and author the rules within the blockchain without writing code. I will delve into that. The other thing I mentioned is that typically you need data coming from various other places. A smart contract, or whatever processing you want to do within the blockchain, cannot exist in isolation. You need data like currency conversion rates, GPS location data, health or insurance claims data, any kind of data you might want to use while writing smart contracts. So we have also provided a way to do that easily, which did not exist in the Hyperledger Fabric space to date.
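The current-time nuance above can be illustrated with a small, hypothetical plain-Go sketch (this is not real Fabric chaincode; the helper names are made up). Each "peer" hashes the write-set it would endorse. If every peer reads its own wall clock, the endorsements diverge; a single timestamp carried in the transaction proposal, which is what Fabric's `GetTxTimestamp()` provides to real chaincode, keeps them identical.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"time"
)

// endorse simulates one peer executing chaincode: it builds the write-set
// it would produce and hashes it, standing in for an endorsement result.
func endorse(stamp string) [32]byte {
	writeSet := fmt.Sprintf(`{"lot":"LOT-001","updatedAt":%q}`, stamp)
	return sha256.Sum256([]byte(writeSet))
}

func main() {
	// Anti-pattern: each peer reads its own wall clock, so the
	// write-sets differ and the transaction would be invalidated.
	a := endorse(time.Now().String())
	time.Sleep(2 * time.Millisecond) // peers never run at the exact same instant
	b := endorse(time.Now().String())
	fmt.Println("own-clock endorsements match:", a == b)

	// Deterministic pattern: every peer uses the one timestamp carried
	// in the transaction proposal (GetTxTimestamp() in real chaincode).
	txTime := time.Now().UTC().String()
	fmt.Println("tx-timestamp endorsements match:", endorse(txTime) == endorse(txTime))
}
```

The same reasoning explains why any per-peer source of randomness or local state breaks endorsement matching, not just the clock.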
Mostly this presentation will focus on the workflow part, which is about codifying business rules, and on the oracles part. But just to set the overall context within which all of this sits: this is how the Spydra platform looks. At the very base, we provide a way to set up the infrastructure, the blockchain network itself, among different organizations who could be on different clouds, AWS, Azure, or even on premises, and we give a way to form a consortium network across those different infrastructure environments. Then we provide a lot of features on top of it. It is a Hyperledger Fabric network, so all the typical management of the network comes out of the box: adding and removing nodes, managing channels, deploying chaincode, and interacting with it through a REST API. It's also a managed network, so high availability, disaster recovery, monitoring and health dashboards all come out of the box too. On top of that, we provide an asset tokenization application, and connected with it, on-chain workflows, oracles, and a few other features we will look at. And we provide an integration layer on top, so the blockchain layer can be integrated with existing systems like SAP, Dynamics or other ERP systems, Salesforce and so on, with custom applications through REST APIs, or with automation platforms like Power Automate. As I said, I'll focus on the workflows and the oracles. But before that: fundamentally, if you want to model any scenario on the blockchain, any business process, you would want to track some objects.
Everything starts with tracking and tracing the state of some objects on the chain. So let's take an example; let me switch to the actual platform. This is the Spydra platform, as I was saying. You can create networks, then add or invite organizations to form a consortium, and then deploy applications: either your own custom-built chaincode, or the asset tokenization application that we provide, which gives some of these functionalities out of the box. If I look at the settings in this asset tokenization application, the first thing it provides is a way to define the different kinds of assets you want to manage on the chain. This is a use case for drug traceability in the pharmaceutical industry. In that use case, some of the things you would want to put on the chain and track are the drug lots being manufactured. A drug lot is then shipped across the supply chain, from the manufacturer to distributors down the chain, eventually to pharmacies, and finally sold to the customer. So there are different participants through which these assets pass. Then there is other data: quality control reports associated with a drug lot, shipment information, IoT data about how the drug was stored, in transit or in a storage location, what temperature it was kept at, whether it was temperature controlled, whether it was stored correctly during its entire life cycle, and so on. These are the different kinds of assets, or objects, you would want to manage on the chain in this case, and you can create them like tables in a database.
You can also define relationships between different assets. In the case of the quality control report, there is a field called batch number which points to the drug lot against which the quality control report was created; it's like a foreign key relationship in a database. And you can define the permissions that the different participants in the network have on these assets. For example, the drug lot can only be created by the manufacturer, while the others may have permission to read it but cannot create or update the drug lot object on the chain. So let's say these assets have already been created on the chain, the types have been defined, and the corresponding participants in the pharma supply chain are already providing information about all of these different kinds of objects. But that is just creating and updating assets and objects on the chain in an easy manner. What if I want to define business rules? For example, one business rule in this case could be: when a drug lot is manufactured, a quality control department or organization runs quality checks on it and provides a report. If the report says something has failed, I need to mark all of the drug lots against which that quality control report was submitted as defective, and they should be sent for destruction. Basically, when a quality control report says these batches are not good, they should not be processed further. I want to codify that business rule on the blockchain, within a smart contract.
These kinds of things are what the workflow engine we have created can do, and I'll talk about the technical implementation behind it. But before that, let me show you how it works. This is the workflow interface. If I create a new workflow, I first provide a name. Workflows have the concept of a trigger and of activities that can be performed. The trigger says when the activities in this workflow should be processed. For example, the trigger in this case could be: when an asset of type QC report is created on the chain. There are different kinds of triggers: when an asset is created, updated or deleted, when ownership of an asset is changed, or when an approval action happens. We also have a rich way of managing approvals out of the box within the blockchain environment itself; everything happens on the blockchain, in the ledger. You can say that for a particular action to be performed, an approval needs to happen, and there is a process for that, so when something is approved or rejected, that can be a trigger too. And there are oracles, which Prashant will talk about in later slides. So the trigger is you saying when the workflow should fire; in this case, when a QC report is created, essentially meaning when a QC report is submitted to the chain. Then what can I do here? Quite a few things. I can start adding conditions, which could be an if condition or a for-each loop; you can loop over multiple assets, and I'll tell you where that's meaningful. You can also read other assets from the chain.
For example, when a quality control report is submitted, it has a batch number saying this report is for this batch of drugs. Now I want more information about that batch: who the manufacturer was, when it was manufactured, when it expires, what temperature range it should be maintained within, and so on. That is a different object already on the chain, so I can read the existing object from the chain by providing its ID, or I can read that information by providing a GraphQL query. GraphQL is something we support generally, outside of workflows too: once you start adding, removing or managing assets on the chain, you can use the GraphQL API to query any asset on the chain by any of its fields. Let me show some examples from the data in this pharmaceutical supply chain use case. If I look at the drug lot, these are all the drug lot assets that have already been added to the chain. There are a lot of columns, but as you can see, this is basically just a JSON structure. You can bring in any JSON structure you have, literally any unstructured data, and put it on the chain as a drug lot. The only requirement is that when you define the drug lot type, you say what the primary key is, and that primary key must be one of the fields in the JSON object you submit. Everything else is fully flexible: you can submit any fields, and you don't have to configure in advance what the other fields of a drug lot are.
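As a concrete illustration, a drug-lot submission might look like the JSON below. This is a hypothetical payload; the field names are invented for illustration. Per the description above, the only field that must be declared up front is the primary key (here `batchNumber`); everything else is free-form.

```json
{
  "batchNumber": "LOT-2024-001",
  "drugName": "Amoxicillin 500mg",
  "manufacturer": "AcmePharma",
  "manufactureDate": "2024-03-01",
  "expiryDate": "2026-03-01",
  "storageTempRangeC": { "min": 2, "max": 8 },
  "status": ""
}
```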
But the batch number definitely has to be there, because going forward you might want to retrieve that same drug lot back from the chain. So it's a flexible JSON object that you create and say: this is a drug lot, created on the chain using the REST API. Now, using the GraphQL API against this drug lot object, I can say, for example: give me all the drug lots whose shelf life is more than 24 months. Typically you can't do that directly against Hyperledger Fabric, or any blockchain for that matter; you would have to maintain the data in an off-chain database and query against that. But we have developed a GraphQL interface with which you can just query, and the rest is managed by us; you don't have to do anything separately to make that query work. The reason I mention all of this is that the same GraphQL query can also be used within a workflow activity here. So I can say: if the QC report has been created against multiple batches, find all the batches with those batch numbers. I can use a GraphQL query for that, and it will return a collection of drug lots from the chain. Everything is retrieved from the chain, and everything you do here is processed on the chain; I'll explain how that works from a technical implementation point of view, but this whole workflow actually executes on the chain, not outside it. So you can read existing objects from the chain, and then you can perform a lot of actions: for example, update an existing asset or create a new asset.
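The shelf-life query just described might look roughly like this in GraphQL. The schema, type, and filter names here are invented for illustration; they are not Spydra's actual API.

```graphql
# Hypothetical schema: "give me all drug lots with shelf life over 24 months"
query DrugLotsByShelfLife {
  drugLots(where: { shelfLifeMonths: { gt: 24 } }) {
    batchNumber
    drugName
    manufacturer
    expiryDate
  }
}
```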
For example, I was talking about marking lots defective when quality control fails, but what if it passes? Maybe I want to create a shipment asset automatically; I can do that here. You can update properties on existing assets, delete existing assets, request approvals as I was saying, raise events, update owners, and get external data. So there are quite a lot of actions you can perform, and everything happens on the chain. Another thing: the values retrieved by any activity, like the read from the chain here, can be used in a downstream activity. Any data that comes out of any of these activities can be used in a subsequent activity. That's how you can build out your entire business rule without having to write any code. So let's take a quick look at an existing workflow I've already created. Let me go back. This is the same quality check workflow I was talking about, already created on the chain. It starts when a QC report is created, and what it does is check the packaging integrity. If I look at a QC report here, there are various kinds of analysis recorded, microbiological analysis, product condition, and so on, and there's a field called packaging integrity. What this workflow's if condition does is look at the input. The input is whatever triggered the workflow; here, the trigger was the creation of a QC report, so whatever was submitted in the QC report comes in as the input. If I look at that data itself, it's a JSON object, and within the data there's a field called packaging integrity.
So that's what this workflow is looking at: $input.data.packagingIntegrity. If it equals pass, it's a good lot, and as I was saying, we create a shipment object automatically; you can use variables from the input to do that. If it fails, then we have a for-each loop over the batch numbers, because a QC report can cover one or more batches; this QC report, for example, was submitted against more than one batch. For each batch, it takes the current item, which is the batch number currently being processed, and updates the drug lot whose batch number matches, marking its status as defective. So it loops over each batch, finds that batch's drug lot on the chain, and updates the lot's status to defective. To see this in action, look at the drug lot information here. Take the first two lots: their status right now is empty. And my QC report looks like this: a new inspection ID, these two batch numbers being inspected, the top two, and the packaging integrity status is fail. Let me quickly submit this QC report onto the chain. You can do it through the REST API, but there's also a UI where you can submit it as JSON, upload via CSV, and so on. When I do this, the QC report is submitted, as you can see. And if you now go back to the drug lot view, you see that these two have been marked as defective.
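Putting the pieces together, the stored definition of a workflow like this one could plausibly look like the JSON below. The actual schema Spydra stores on the ledger is not shown in the talk, so every key name here is an assumption; the sketch only mirrors the trigger / if-condition / for-each structure just described.

```json
{
  "name": "Quality Check Workflow",
  "trigger": { "event": "asset.created", "assetType": "QCReport" },
  "activities": [
    {
      "type": "if",
      "condition": "$input.data.packagingIntegrity == 'pass'",
      "then": [
        { "type": "createAsset", "assetType": "Shipment",
          "data": { "batchNumbers": "$input.data.batchNumbers" } }
      ],
      "else": [
        { "type": "forEach", "items": "$input.data.batchNumbers",
          "do": [
            { "type": "updateAsset", "assetType": "DrugLot",
              "where": { "batchNumber": "$currentItem" },
              "set": { "status": "defective" } }
          ] }
      ]
    }
  ]
}
```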
That happened automatically, even though I never explicitly said these lots needed to be updated. If I look at the details, there's a history: the lot was created some time ago but updated just now, and if I view the details, it says it was actually updated by a workflow. This history comes from the chain itself, enriched with some data we store additionally: it was updated by this workflow, by this particular activity within the workflow, it ran under the context of this organization, and the status field is what changed. So that's what the workflow experience looks like, and as I was saying, it all actually gets executed on the chain. Let's look at how that works behind the scenes. This is the on-chain workflow I was talking about: there is the concept of a trigger, different activities, conditions and loops, and we support mathematical functions as well. So how does the implementation look behind the scenes? When we were thinking about this initially, the first question that came to mind was: this is literally code that has to execute on the chain. One way of doing it is, based on the workflow, to generate the corresponding code dynamically and deploy it on the chain. That's a somewhat complex process: first you have to generate the actual Golang code, for example, and second, every time that happens you have to go through the entire chaincode lifecycle; it has to be installed, approved and committed, all of that. But the good thing is that this fits in very well with the approval and endorsement policy model within Hyperledger Fabric.
Generally, when you change chaincode, you want some of the organizations to approve it, a minimum set depending on the endorsement policies, and only then should it be committed. And whether you do it through a UI or otherwise, you are literally modifying the code, modifying the way it executes on the chain, so that process of approvals fits very nicely with the existing model if you go this route. The other way of doing it brings a lot of flexibility from a development and coding point of view, in how the code is managed: you have only a static chaincode, one that has the capability to process any kind of workflow. When you change the workflow, you don't have to deploy code on the chain again. The disadvantage is that, because changing the workflow means changing the rules, a similar approval mechanism still has to exist somehow. It may not exist at the Hyperledger base layer, but we have to build it in, and in a decentralized fashion; even that approval mechanism should somehow work on the blockchain side itself. We did a lot of thinking around this, and we finally went with the static chaincode model. So basically we have a fixed workflow chaincode, think of it that way, which can process any workflow. The way that works is that when a user creates a workflow from the UI, the triggers, the activities, the workflow definition itself are all stored on the ledger. This all goes into the blockchain as objects that can be stored and tracked on the ledger.
You can see the history of what was done, it's immutable; everything the ledger provides out of the box applies to the workflow definition and its corresponding objects too. Now, who has permission to do this, and how many organizations have to approve for a change to be made, is all configurable. As I was saying, we already have an approval mechanism: you can say that before some state change happens on the ledger, two or three out of five organizations, say, need to approve. We had built that mechanism anyway, not for this purpose but for simpler ones, like when changing the ownership of an object needs approval from more than one organization. We layered that approval mechanism and the permission model on top of the workflow creation process, which means any change can only be made by organizations that have permission to do so, and a certain number of organizations then need to approve that change as well. Once that happens, the workflow definition itself is saved on the chain as a JSON object, or a series of JSON objects. Then, when a method call happens: as I was saying, we have the asset tokenization application with built-in functionality out of the box, like creating assets. So one of those method calls happens that can trigger a workflow, like create asset.
In our case, when create asset happens, what the create-asset method on the chain does is check whether any workflows have a trigger matching a create-asset event for the asset type being created. So trigger evaluation happens for all the workflows, and the list of applicable workflows is retrieved by the chaincode from the ledger itself; remember, the workflows are also saved on the ledger. For each workflow in scope, the data of the asset that triggered the execution is put into a variable called $input. The entire data of that asset, in our case the QC report, comes into what we call the pipeline: a place where we store all the variables created during the execution of the workflow. So the data of the asset that triggered the workflow enters the pipeline as the $input variable. Then each activity is read from the chain in turn, and those activities are invoked with this input. Any output from any workflow activity is again added to the pipeline as a variable in the format "activity name dot output". Workflow execution continues until all the activities are executed, and when all the workflows have executed, all the state changes made to all the objects affected during this processing are saved onto the chain, and the method call exits. If an error occurs, nothing is saved on the chain. So it behaves like a transaction: either everything is committed or nothing is, which brings atomicity to the entire workflow operation.
I talked about variables: the input and output variables are there, and there are some other built-in variables like current time. Current time is actually not the wall-clock time, because, as I was saying, that won't work; it's the transaction time, so it's the same across the different peers when the chaincode executes. You can look at the current transaction ID, and you can get caller information, essentially the attributes in the certificate that signed the transaction, and so on. We also provide a lot of mathematical functions: addition, subtraction, multiplication, aggregations like max, min and average, string concatenation, and typecasting; sometimes you want ten to be an integer and sometimes a string, for example. Think of it like programming, but without actually writing code; whatever is needed to do that in an extensive and configurable manner, we provide. Some interesting things: workflows can trigger other workflows. In the use case I was describing, when a QC report is submitted and everything is okay, you create a shipment object. There might be another workflow that says: when a shipment object is created, do something else. So the creation of the shipment object in this case can trigger another workflow that does something else. Sometimes you want that for an automatically created asset and sometimes you don't, so that behavior can be controlled.
Permission checks are enforced even when a workflow creates or manipulates other objects, though again you can tweak that behavior depending on whether you want it or not. So there are a lot of different capabilities, basically to cater to literally any use case you can program, without actually writing code. That's fundamentally the workflow feature and what it does. Now I'll hand it over to Prashant, who can talk about oracles, what they do, and how they help in the overall process. Prashant, over to you.

Yeah, thanks, Ashfa. Let me share my screen. Hello, everyone. I'm Prashant, a blockchain architect at Spydra. In this session I'd like to talk about a recently released feature in Spydra: oracles. I'll give a general idea of oracles, and then I'll explain how we can create oracles in Spydra and use them in the workflow feature that Ashfa just explained. Also, in Spydra you can upload your own custom chaincode: if you don't want to use workflows and you need more complex logic, you can write your own chaincode in any supported language and deploy it on the network created with Spydra, and the oracle feature that Spydra has can be used there as well. I'll explain all of this. To give the general idea: an oracle is an interface that allows smart contracts to retrieve external, real-world data into the chaincode itself and process it. Now, we know that Hyperledger Fabric supports some of the most commonly used languages for writing chaincode: Golang can be used, Node.js can be used. Making HTTP requests directly from chaincode in those languages is easy, so why do we need a special system or capability like an oracle?
The reason is that a smart contract must be deterministic. To take an example: when a transaction is initiated in Hyperledger Fabric, the chain code is executed by all of the endorsing peers in the network, and each produces its own output. In the end, the validation step checks these outputs, and only if they match does the transaction succeed; otherwise it fails. If we introduce HTTP calls directly in chain code, that can lead to non-determinism: we are hitting third-party APIs we don't fully control, each peer makes its own request, and the responses can differ, so the end result becomes non-deterministic. That is one reason. Another reason: when talking to these external services, we usually need API keys. If the logic depends on a single external service, fetching its data and acting on it, that's actually the easy case, and it's possible with plain chain code already. We can pass the API keys in the transient map of a transaction in Hyperledger Fabric; the transient map isn't stored anywhere on the blockchain, so we can safely pass the API keys in the transaction itself. But imagine this: we have a decentralized network, and one organization does not trust another. Say one organization prefers data from one service provider, and another organization does not like that provider and wants to configure its own service provider for fetching the same data. In that situation, the transaction is still initiated by a single organization.
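The "outputs must match" rule can be sketched in a few lines. This is a toy stand-in for Fabric's validation, not the real implementation:

```python
def endorsements_match(peer_outputs):
    """A transaction is only valid if every endorsing peer
    produced an identical result for the same proposal."""
    return all(out == peer_outputs[0] for out in peer_outputs)

# Deterministic chain code: all peers agree, the transaction can commit.
assert endorsements_match([{"price": 100}, {"price": 100}])
# Each peer made its own HTTP call and got different data: the transaction fails.
assert not endorsements_match([{"price": 100}, {"price": 101}])
```

The second assertion is the failure mode that direct HTTP calls from chain code would trigger.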
That organization can pass its own API keys for the service it has access to, but it won't have the other organizations' API keys. So that's a problem, and oracles are essentially designed to solve these problems.

So, how do we create oracles in Spydra? In Spydra, we can define any external HTTP API as a data source. If you want to take data from a public weather API and use it in the chain code, you can configure an oracle with that, and you can embed the oracle in workflows as well as in custom chain codes; we have added both. Apart from taking data on demand from the chain code, there is another feature we provide: monitoring a certain API at a certain frequency and loading that data onto the blockchain. For example, if you want to load currency conversion rates from a third-party API onto the blockchain every hour, to monitor that and save it on the blockchain, you can do that with oracle schedules, which is another feature we provide. I'll explain that as well. Right now Spydra only allows defining one external source per oracle, but in the future we are planning to support configuring multiple external sources, like the use case I just touched on, where one organization prefers one API provider and another prefers a different one. In that case we will support configuring multiple external HTTP APIs, and once the results are gathered, they can be aggregated and used on the blockchain. We can set conditions: for example, that all of this data has to match, or that the data has to be within a certain range for the smart contract to proceed; otherwise the transaction should fail.
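The oracle-schedule idea (fetch once off-chain on a timer, then write on-chain) can be sketched like this. A minimal illustration with hypothetical names; the real feature runs inside the Spydra platform, not in user code:

```python
def run_oracle_schedule(fetch, write_to_ledger, ticks):
    """Toy oracle schedule: on each tick (say, once an hour) the platform
    fetches the external API once, off-chain, and submits the result as an
    ordinary transaction, so every peer records the same value."""
    for tick in ticks:
        data = fetch()                # single off-chain HTTP call
        write_to_ledger(tick, data)   # deterministic on-chain write

ledger = {}
run_oracle_schedule(
    fetch=lambda: {"USD_INR": 83.2},              # stands in for a rates API
    write_to_ledger=lambda t, d: ledger.update({t: d}),
    ticks=["09:00", "10:00"],
)
assert ledger == {"09:00": {"USD_INR": 83.2}, "10:00": {"USD_INR": 83.2}}
```

Because the fetch happens once and the fetched value goes into the transaction, the determinism problem from the previous paragraph never arises.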
So we can aggregate and apply any sort of conditions when taking in this data from different sources. Or, even without thinking about different organizations' preferences, we may simply not trust a single service provider and want to collectively take data from multiple providers to make the data coming from outside more reliable. In a way, that is the point of an oracle itself.

So let me jump to a quick demo. I have already created a network in Spydra; this is my network, and I have deployed an application using the asset tokenization chain code itself: a crop insurance application. In this I have defined three assets. One is the application itself: if the insured wants to apply for a claim, they can do that, and if the application is successfully approved, a new approved asset will be created. I also have another asset with which I'm going to monitor bad weather events; I'm basically going to use oracle schedules to monitor whether the weather is bad, and log it on the blockchain. For the application, this is the data structure I'm going to use: an ID, a name, a region, an Aadhaar number, an incident, an incident code and a claim amount. The region is the important one here. What I'm going to do is create a workflow. When this asset is created, which is basically an application saying "I need to claim 50,000 rupees because the weather has been bad and I lost some crops", the workflow will look into this structure and automatically approve or reject based on a condition. I'm going to look up the weather data for that region using an external API.
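The "collectively take data from multiple providers" idea can be sketched as a simple aggregation rule. This is an illustrative policy, not the planned Spydra implementation; the spread check and median choice are assumptions:

```python
def aggregate_quotes(values, max_spread):
    """Combine the same reading from several providers: accept only when
    they agree within max_spread, then take the median; otherwise fail
    the transaction rather than trust disagreeing sources."""
    ordered = sorted(values)
    if ordered[-1] - ordered[0] > max_spread:
        raise ValueError("providers disagree, failing the transaction")
    return ordered[len(ordered) // 2]

# Three providers roughly agree, so the median is usable on-chain.
assert aggregate_quotes([99.8, 100.0, 100.1], max_spread=1.0) == 100.0
```

Range conditions like the one mentioned in the talk ("this data has to be in a certain range") would slot in alongside the spread check.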
If the weather is actually really bad in that region, the smart contract will approve the application automatically and it will move to the approved status; but if the weather isn't bad, the claim will be automatically rejected. That's the logic I'm planning to write.

For that, the first thing I have to do is configure an oracle. I already have one oracle, so let me just show you how to create one. The name of the oracle is Weather API, and I have selected the channel and my organization. Then, on the oracle configuration page, I can give the URL and other details of the API itself. I have looked into this public weather API, and I'll be using it to fetch data. Let me quickly explain what's in my API: this is the URL, and it accepts a "q" parameter as a query param, which takes the region whose weather details you want to check. Apart from that, in the headers, I have two headers: one is the API key itself, and the other is some host information. If I send a request with this, I get back a structure where, in the "current" field, I have the temperature, wind, general condition, et cetera. I'm going to make use of this data. To configure it, I head back to the oracle section and copy this URL into the URL field. It's going to be a GET request, not POST; in this case it is a GET request. In the params, we can configure any of these params. In my case it's a query param, but it can also be a path param, where the URL looks something like /item/123; those are called path parameters, and they can also be configured.
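The request being configured here can be sketched as follows. The endpoint, host, and key are hypothetical placeholders mirroring the RapidAPI-style weather API in the demo; only the request is built, nothing is sent:

```python
from urllib.parse import urlencode, urlsplit, parse_qs

BASE_URL = "https://weather.example.com/current.json"  # hypothetical endpoint

def build_weather_request(region, api_key):
    url = f"{BASE_URL}?{urlencode({'q': region})}"     # 'q' is the query param
    headers = {
        "X-RapidAPI-Key": api_key,                     # authentication header
        "X-RapidAPI-Host": "weather.example.com",      # host header
        "Accept": "*/*",                               # a regular custom header
    }
    return url, headers

url, headers = build_weather_request("Bengaluru", "demo-key")
assert parse_qs(urlsplit(url).query)["q"] == ["Bengaluru"]
assert headers["X-RapidAPI-Key"] == "demo-key"
```

A path-param variant would instead interpolate into the path, e.g. `f"{BASE_URL}/item/123"`, which is the `/item/123` shape mentioned above.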
Other than that, you can also send data in the request body, or send nothing. In my case it's a query param, so I'll select query. In the authentication methods, you can do basic authentication, the username-and-password sort, if the API supports it. Mine does not, so I'm going to use custom authentication, where I can add custom authentication headers. I need to add X-RapidAPI-Key with my actual API key, and I need another header as well, X-RapidAPI-Host, with this value. Those are the authentication headers. If you just want to add other regular headers, you can also add something like Accept or Accept-Encoding; let's say I need an Accept header and I give it */*. We can do that too. The difference between authentication headers and custom headers is that whatever we configure in authentication headers is stored securely in a key management system on whatever cloud we are using; in this case I'm using AWS, so these values are stored with AWS Secrets Manager. That way we store these things securely. Anything you put in custom headers just goes into the data as is; it doesn't use Secrets Manager. When I continue to the next page, we can configure a scheduler. I'm not going to create one right now; let me first show how a regular request works, and then we can come back and configure a scheduler. On the next page we can review everything and submit. Once submitted, you can see my other oracle is also here; this is my second oracle. When the first oracle is created in the network, or in the channel, Spydra will actually deploy an oracle chain code in the background.
Since I already created my first oracle yesterday, the oracle chain code is already here, live and running. So whatever oracle we configure also goes onto the chain; it's not just stored in a database, it's actually stored in the oracle chain code. That gives us a sense of security, since we are dealing with blockchain. Okay, now I have my oracle configured. Let me create the workflow I just talked about. I'll go to my insurance app again, and into the workflow section. I've already created two workflows here. First I'll talk about the insurance approval flow; that's the case I described. Let me create the same workflow again just to show how to actually create one and get into some of the details. I give the name of the workflow, and a description if I want, and it is created. The first thing we have to define in the workflow is a trigger. In my case, I said that when an application asset is created, the smart contract should call the oracle, check the condition, and only if the condition is satisfied create the approved asset. That's the workflow I'm going to follow. I'll add the trigger as "on create asset", and my asset type is "application". So I'm saying that when an application is created, the following steps should run. The first step I want to add is an action: get external data. Among the many different actions defined, one is "get external data", which is the oracle action. You can name your action blocks anything you want; in this case it picked a name automatically. Then I have to select the oracle, so I select the Weather API oracle I just created. You can see the source URL mentioned here, and the parameter type is query.
Now, when your chain code logic sends the request to this API to fetch the data, what parameter do you want to pass? What should the "q" value be? We need to pass the region. We can either give a constant value, or, in my case, I know the exact structure of my application asset: the application asset itself asks for a region, so I'll use that region and look up the weather for it. To do that, I use the syntax input.data.region. Notice that whenever I read a field from the asset, I have to put ".data" in between. That's because of how it works in the backend: we have a generic chain code, we store assets in a wrapper structure, and the actual data you put in the asset goes into a "data" field. So whenever we use an asset, whether reading it or creating it, whether it's input or output, we have to use ".data" followed by the actual asset fields. That's what I'm doing here, and I'll hit update. So I've just created a step that takes data from the oracle. Once the data is received, I want to add a condition: an "if" block on the temperature. When the data is received from the oracle, the workflow has access to it, and I'm going to check the temp_c field in particular, which is the temperature in Celsius. If the temperature is less than a certain amount, I'm going to approve. It's a simple logic: I'm basically saying it's cold, so it's probably raining. In the if condition, I need to use the oracle's data this time. If I use "input", I can access the data coming in the input, which is the application data.
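The wrapper structure behind the `.data` rule can be sketched like this. A toy version under the assumption, stated in the talk, that the generic chain code nests user fields under a "data" key; the exact wrapper fields are hypothetical:

```python
import json

def wrap_asset(asset_type, fields):
    """Toy version of the generic chain code's storage format: user-defined
    fields live under a 'data' key, which is why a workflow expression
    reads input.data.region rather than input.region."""
    return {"assetType": asset_type, "data": fields}

stored = wrap_asset("application", {"id": "app-1", "region": "Karnataka"})
assert stored["data"]["region"] == "Karnataka"      # input.data.region
assert json.loads(json.dumps(stored)) == stored     # clean JSON round-trip
```

The same shape applies on output, which is why the ".data" hop is needed whether you are reading or creating an asset.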
But this time I want to use the oracle's data. I gave the oracle action block a name, get_external_data_1, and I can access data from that block with this syntax: the name of the block, then ".output.", then the fields directly. The action block sends an output, so I have to use "output", then refer to the fields: current.temp_c. That's how I can access the data coming from the oracle. I'll check that it is less than, let's say, 34, because 33 is what's coming right now; let's use a slightly higher number to make sure it works. Okay, let me fix the reference: get_external_data_1.output.current.temp_c. The block name had some issue, but there, I was able to create it. Then, if the temperature from the oracle's data is below the threshold, what do we want to do? If the condition is true, instead of ending, I want to create another asset: the approved asset. Basically, if the condition holds, I'm going to create the approved asset for this particular application. The approved asset structure I'm going to give is this: it has its own ID, the application ID, the approved amount, and a reason. For the approved asset's own ID I can use auto_id, a variable we support that populates automatically. Apart from that, we can give the application ID and so on; we define the structure, basically. For the sake of time, I'll just move on to the workflow I've already created and explain with that. Basically, I've created the same thing here.
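The whole demo workflow, condition plus approved-asset creation, can be sketched as one function. A toy version with hypothetical field names; the real thing is assembled in the no-code builder, not written like this:

```python
def approve_claim(application, oracle_output, threshold_c=34):
    """Toy version of the demo workflow: approve the claim only when the
    oracle-reported temperature is below the threshold (cold, so likely
    raining); otherwise reject by creating nothing."""
    temp = oracle_output["current"]["temp_c"]   # get_external_data_1.output.current.temp_c
    if temp >= threshold_c:
        return None  # condition failed: claim is rejected
    return {
        "id": "auto-id-1",                               # stands in for auto_id
        "applicationId": application["data"]["id"],      # note the .data hop
        "approvedAmount": application["data"]["claimAmount"],
        "reason": f"bad weather in {application['data']['region']}",
    }

app = {"data": {"id": "app-1", "region": "Karnataka", "claimAmount": 50000}}
approved = approve_claim(app, {"current": {"temp_c": 33}})
assert approved and approved["approvedAmount"] == 50000
assert approve_claim(app, {"current": {"temp_c": 40}}) is None
```

With the threshold at 34 and the live reading at 33, the demo's claim goes through, matching the walkthrough above.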
The ID I'm taking from auto_id, and for the application ID I'm referring to the ID in the application asset's data. The approved amount is the full amount from the application, and I'm giving a reason. Then, if the condition fails, I'm simply saying the claim fails.