language for our oracles. We are a team; we work for Stampery Labs, which is the company commissioned by the Witnet Foundation for the design and development of the first implementation of the Witnet protocol. The objectives of this session, which is sort of a mixture between a talk and a workshop, are: understanding what decentralized oracles are, what their use is and why we need them; getting to know RADON and why we designed it the way it is today; and then seeing how to write data requests using RADON. We will show how it works and, if you like, you will see it live. And you will get a glimpse into what we are working on, what we will be doing next, and what the future is for RADON, for Witnet, for our oracles.

So, the oracle problem. Do any of you need me to explain it? Probably every single person here has already heard a lot about it, but, okay, it is mostly a buzzword, it is marketing jargon: it is not formalized, it is not a problem that anyone has rigorously defined. When we talk about it, the actual problem is the fact that, because smart contracts need to be deterministic, we cannot query external APIs or consume any kind of non-deterministic data from within the smart contract. Determinism derives from having the same computation performed on multiple nodes in a network, or on all the nodes in a network, as is the case for smart contracts in Ethereum. And if we try to solve that through centralized data input, basically by having the contract have an owner that can call a privileged method with an onlyOwner modifier to input data, we are breaking tamper and censorship resistance, which is the whole point of using a smart contract, so we would be defeating the purpose. So the question is: how do you build determinism without centralization? That is the problem we have been trying to solve.

So we came up with the concept of a decentralized oracle network. Decentralized here means that the system is tamper and censorship resistant: no single party can decide what the result delivered to the contract will be. Decentralization is just a way to get there, but it is the best way we know. Oracle means that there is an entity relaying information to the contracts, and most importantly, an entity that abstracts away the uncertainty that comes from making HTTP queries, where you can have many different servers, network errors, many different situations; if you query multiple APIs for the same data, how do you reconcile the fact that the answers could differ? That is the idea of the oracle. And network basically means that there is a pool of nodes that are randomly selected to do this work of retrieving, aggregating and delivering data into smart contracts. A decentralized network is about splitting power, mitigating trust in any single node or any single party, so as not to have any single point of failure.

So then we realized we needed a DSL for oracles. Why are the programming languages that exist today not enough for something like a decentralized oracle network? If we want to preserve determinism, we require explicitness: the code needs to be really, really clear about the intention of the data requests we are performing; we do not want any ambiguity, let's say.
Of course, human languages would not work; we cannot use natural language and say "I want to know the price of ETH compared to USD", or "I need my contract to know the temperature in Osaka tomorrow", and so on. And most programming languages are also not suitable because they are too rich in their control flow structures. Imagine, for example, Solidity: it allows you to do looping and a lot of fancy stuff that is not really required for the purpose of simply taking data from a data source, transforming it in some way, making aggregations, some statistics, some computations, and then having the result delivered to your smart contract. So we need something that is really focused on transformation and aggregation, something you could think of as closer to the map-reduce paradigm.

When we started thinking of this DSL, we realized it needed to be abstract, in the sense that it has no specific syntax; it is not tied to any syntax, so you do not need to learn a new language. The idea is that it is simply a kind of grammar for defining data flows, and then you can have libraries in the programming language you like the most, for example JavaScript, that you use for writing RADON scripts. It is totally data flow oriented, meaning it works like a pipeline; it is point-free, so it is a sequence of operators where each one operates on the result of the previous one; and it needs to be really focused on what we are doing, which is retrieving information, transforming it and having it delivered. This is probably the most important part: it needs to be statically analysable, so that we can predict the computing resources that will be used, so as to set some kind of pricing model similar to gas, and also to detect any kind of wrong incentives (which we will see later) or security issues that could arise from a malformed request.

So we introduced RADON in the 2017 Witnet white paper as a language that covers most of the features I just explained. RADON basically stands for Retrieval, Aggregation and Delivery Object Notation. You can think of it like this: as I said, it is abstract, so it does not have a single literal representation; it is just a schema, let's say. This is not any particular language; it is just representing the operators. A script is a pipeline of calls, written here as one call per line, where each call can be either the name of an operator (an operator code, actually) or an operator followed by one or more arguments. Every operator operates on the output of the previous one, which obviously means that the final value of a script is the value of its last call.

As for the type system, RADON has its own type system, independent from the one you know from JavaScript or Solidity or any other language, and it is strict, which means that every time you need to change from one type to another, you need to do it explicitly, right there in the script. The types are the obvious ones: four value types (boolean, integer, float and string) and four complex types, like arrays and maps, that allow for fancier, more functional-programming-style operations. Then, one nice thing is that we can take RADON scripts and serialize them using CBOR, which is a standard for encoding data structures into compact bytes.
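Before it is serialized, such a script can be pictured, purely as an illustration, as a plain list of calls. This is a sketch, not the canonical RADON encoding: the operator names follow the ones used in this talk, while the real on-the-wire form uses numeric operator codes serialized with CBOR.

```js
// Hedged illustration: a RADON retrieval script as a pipeline of calls.
// Each call is an operator name, optionally followed by its arguments.
const temperatureScript = [
  ["parseJSON"],   // parse the raw HTTP response body as JSON
  ["asMap"],       // treat the parsed value as a map (key-value store)
  ["get", "temp"], // take the value associated with the "temp" key
  ["asFloat"],     // return it as a float, the final type of the script
];
```

Because the script is strictly typed, each call's output type determines which operators are valid next, and the last call fixes the final type, a float in this case. Serializing that list with CBOR is what produces the compact bytecode discussed next.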
So, what we get is something like what we see here, which we can feed into the RADON runtime, an interpreter, a little virtual machine that simply takes the input data we query from the APIs, starts interpreting the script and applying the operators on it, and then we get the final result. Basically, this script is just saying: okay, whatever you get from the API, parse it as JSON, treat that as a map (a hash map, a key-value store, let's say), from that get the value associated with "temp", and then return that as a float. Those are four calls that, once serialized, look something like this: here printed as plain JSON, and this is the hex bytecode. This is a really, really compact script, and this one is for OpenWeatherMap, actually. So we have a script that says all of that, parse the JSON, get the "temp" key and so on, in barely 20 bytes, which is really, really compact.

Then, as I said, we are not expecting people to write bytecode or to build these structures by hand, because that is not convenient at all. What we do is wrap all of that with libraries. So we have this witnet-requests JavaScript library, which is available on npm, and you can just create a new Witnet script and start pushing operators into it, with the same names: parseJSON, asMap, get and so on. And this is actually metaprogramming: you are writing software that writes software for you. It is internally generating the script, and at the end you can add one more call, say toJSON, and you get the structure we saw, or you call toCBOR and you get the bytecode. So, my colleague will now explain how this works in practice.
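To make that flow concrete, here is a minimal sketch of the OpenWeatherMap example built with the witnet-requests library. The method names (parseJSON, asMap, get, asFloat, toJSON, toCBOR) are the ones mentioned in the talk; the exact API surface of the library, as well as the URL, are assumptions, so treat this as illustrative only.

```js
// Hedged sketch: building the OpenWeatherMap retrieval script with the
// witnet-requests JavaScript library (method names as mentioned in the talk;
// exact signatures are an assumption, check the library documentation).
import * as Witnet from "witnet-requests"

// Illustrative URL only; the real API also expects an API key, and the
// temperature actually sits a level deeper in the response, so a real
// script would likely need one or two extra calls.
const temperature = new Witnet.Source("https://api.openweathermap.org/data/2.5/weather?q=Osaka")
  .parseJSON()   // parse the HTTP response body as JSON
  .asMap()       // treat it as a key-value map
  .get("temp")   // take the value associated with the "temp" key
  .asFloat()     // final type: float

// Metaprogramming: the calls above only record operators; serializing them
// produces the actual RADON script.
console.log(temperature.toJSON()) // human-readable structure
console.log(temperature.toCBOR()) // the compact CBOR bytecode
```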
Okay, so Adán just explained what RADON is and how it works. Let's see how we use it in our specific case. As Adán said, Witnet is this decentralized oracle network that has RADON requests at its core, and we will quickly review its lifecycle to understand how RADON applies in it. Basically, there are four main phases. We start by just posting the bytecode that the RADON script compiles to, and with that a data request gets included in Witnet. Then there is a pool of nodes that are going to be selected to perform that task. We call those nodes witnesses. They are selected randomly, so that they cannot coordinate with each other, through a cryptographic sortition scheme; we use verifiable random functions for this. Once the subcommittee of nodes is selected, what they do is run the scripts that the data request specifies and retrieve the data points they are supposed to retrieve. But instead of just publishing the data point itself to Witnet straight away, they follow a commit-and-reveal scheme: first of all they publish a salted hash of the value as a secret commitment, and only then do they reveal the values. This is to prevent the nodes from copying each other. And once the revealed values are published to Witnet, we basically run one more RADON script on them, after which we get a single value, and that single value is what eventually gets reported back to the requesting smart contract. In a moment we will go through the data request lifecycle in more detail. Before that, let's say that one of the purposes of Witnet is to have multiple sources in order to, as Adán said, avoid single points of failure, and also to provide tamper and censorship resistance properties.

So, RADON scripts come in three different stages. The first one is the source scripts, which relate to the URLs we want to retrieve the data from, and there can be many of them, as we will see. Then, the way we aggregate those single data points is the aggregation script. It is important to say that both of these scripts are run by the witnessing nodes themselves. And finally, if we have different witnessing nodes that each retrieved data by themselves, how do we aggregate those claims? That is what the tally scripts are for, and they are actually not run by the witnessing nodes, but by another kind of node, the miner nodes.

Further parameters can be set by the request creators. For example, setting the quorum means setting the number of witnesses that should retrieve the data. Then there are the fees: the request creator sets the rewards and the fees, meaning the reward that each witness will obtain after executing the data request, plus the fees that specify the rewards for all the intermediate parties involved, like the miners that include the transactions into blocks. And the last one is the schedule. It is a simple one: a sort of time lock or not-before field that defines the timestamp from which a data request can be executed, so that we can send a data request to the Witnet network and it will stay there until the timestamp is reached.

So, how does all of this look? In the end, this is a schema of what we just explained, in which, in this case, we are fetching data from three different URLs; imagine we are fetching, for instance, the Bitcoin price. And we decided that we want two nodes to perform that task: the quorum was set to two in this case. So the nodes run the cryptographic sortition algorithm, and imagine these two turn out to be eligible to actually perform the task. What each of them does is go to the sources as the scripts define, retrieve the data from those sources, and then run the aggregation script so that those three data points get aggregated into one single data point. After that, they need to publish the result they got into Witnet. But, again, they do not do it straight away; they use the commit-and-reveal scheme: they first publish a secret commitment and then they reveal the values. And once the values are revealed, a miner (it does not need to be a witness, just any miner) can take the tally script and apply it, as the consensus algorithm, on these two revealed values that have been published. After that, again, we have a single data point, and that is what eventually gets reported back to the smart contracts.

Now we should have an idea of how RADON scripts work; let's see how we can use them from other blockchains. For example, imagine we have a Solidity smart contract that wants to access some external data. We have to write a RADON script, with all the parameters we already mentioned, and with this the contract sends it to the Witnet Bridge Interface. The Witnet Bridge Interface is a Solidity contract that acts as a sort of bulletin board: contracts can post a data request there, and then it will be forwarded to Witnet. It is also important to understand that the Solidity smart contract does not need to know much about Witnet: it just posts the data request, sends the parameters, and that's it.
So, once the data request is posted into the WBI, there is a pool of bridge nodes that act as information relays. That means they have connections to both the Witnet network and the other blockchain, like Ethereum; they listen to this contract and they realize there is a new data request. They take it, they claim this data request, and they forward it to the Witnet network. Obviously, they get some incentive for doing that task, and then the data request is in Witnet. This is where the magic happens: the witnesses retrieve the data, it gets aggregated and so on, and we get a result in time. Then, once the result is included in a block in Witnet, again, the pool of bridge nodes listening to Witnet will say: okay, we have a result for this data request ID. They take it, and they post the result back to the Witnet Bridge Interface. The other component to mention is the Witnet Block Relay, which is also a smart contract, and it is used by the WBI and the bridge nodes in order to validate that the transactions that occur in Witnet really happened, so we can prove it on Ethereum. For this they use, as in many other bridges, SPV against the Block Relay. And this is also a cornerstone of our architecture, because it is what lets one chain trust what happened on the other.

So, now that we have reviewed the whole picture, let's start writing the scripts, right? How would they look? For instance, the source scripts, the scripts we need to actually fetch data from the sources. Well, first of all, as we said, it is important that we have more than one source: we want to decentralize not only the way we fetch the data, but also the sources themselves. That is why we are going to have one script per source that we want to fetch from. And it is important that we choose our sources correctly, or wisely, because if we feed garbage in, we get garbage out; so it is important to choose reliable sources. The way a source script starts is with a string, the raw response, and then, as I said, we parse it into a hash map and get one of the keys, for instance. The important thing is that, when the outputs of all the sources get put together, they form an array in which all the elements have the same type. This is important, because otherwise we could not operate on them. And this is why we write one of these scripts for each of the sources we want to query.

This is an example of what we just explained. In this case, we are querying two sources that fetch the Bitcoin price. We start, again, with the string; then we have a parse operation, which gives us a hash map; and in the upper one we get the value that has "last" as its key, and then asFloat. The lower one follows a slightly different path, but notice that both of them end with a float. Basically, that means that, no matter how the data is represented in each API, we want a float at the end of both scripts.
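As a rough sketch of those two source scripts using the witnet-requests library: the URLs and the exact response fields here are assumptions (Bitstamp's ticker exposes a "last" field, while other price APIs nest the value differently), and the library's method names may differ from what is shown.

```js
// Hedged sketch of two Bitcoin price source scripts. Both start from the raw
// response string, parse it as JSON, drill into the price field, and end as a
// float, so that their outputs share the same type.
import * as Witnet from "witnet-requests"

const bitstamp = new Witnet.Source("https://www.bitstamp.net/api/v2/ticker/btcusd/")
  .parseJSON()
  .asMap()
  .get("last")       // Bitstamp's ticker exposes the last trade price as "last"
  .asFloat()

const coindesk = new Witnet.Source("https://api.coindesk.com/v1/bpi/currentprice.json")
  .parseJSON()
  .asMap()
  .get("bpi")        // this API nests the price deeper...
  .asMap()
  .get("USD")
  .asMap()
  .get("rate_float")
  .asFloat()         // ...but it still ends as a float, same as the other source
```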
Okay, now, the next step is the aggregation script. We have one aggregation script per data request. It is also run by the witnessing nodes, as we already mentioned, and it is important to understand its input: it is the output of the source scripts, which means it operates on all the retrieved items at the same time, and it ends with a reduction. Its output is what gets committed and revealed. Here we have a simple example. In this example we first apply a filter using the standard deviation and then a reduction with the mean. Imagine we have as input an array of four elements; then the plain mean would be 17. But we want to filter out the outliers, so let's say we drop anything that deviates from the mean by more than 1.5 times the standard deviation. That takes out the fourth element, and then the mean changes to a smoother average, by almost one unit. This is normally used, as I said, to filter out the outliers and increase the quality of the result.

And finally there is the tally script, which is the one in which consensus is built among all the values that the witnesses reported. It looks very much like the aggregation script, but this time it is run by the miner nodes. Basically it takes as input the data points that the witnesses produced with their aggregation scripts and it reduces them to a single value. The output, again, gets written into blocks. This is how it looks. You can notice that it is very similar to the previous one, but we have a harsher filter. So again we can see how these filters are important. I will explain why in a second, but notice how, without the filter, the result would have been 19.4, while with the filter the mean changes to 16.4.

And here is the most interesting point: writing tally scripts means designing incentives. Why? These tally scripts set the rules of a Schelling game in which the nodes, as long as they cannot coordinate, will behave rationally, and the best they can do to maximize their profit is simply to tell the truth, so that everybody converges on a single focal point. On top of that, the nodes whose values get filtered out in this last stage get slashed. And this is why it is really critical, because writing tally scripts can be tricky. For example, to understand this better, imagine we use a get operator in a tally: the input is an array with the results of the different witnessing nodes and we take only the first item. What does that mean? If the items are ordered, some witnessing nodes will have more power than others to influence the result; and if they are not ordered and the miner chooses which one goes first, the miner controls the result. Another example is skipping the filter: a malicious witnessing node may just report an extreme value and influence the result by itself. And we have a similar case with using the average without any filtering: if you know there is going to be an average at the end, you can report a skewed value in order to fine-tune the result that the miner will compute in the tally transaction. But don't get alarmed, because this is really easy to identify with static analysis. We are already working on putting this static analysis into the compiler, so any developer writing a tally script will get a warning, say: pay attention, you are using an average without a filter.
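A rough sketch of how that aggregator and tally could be expressed with the witnet-requests library follows. The identifiers used here (Witnet.Aggregator, Witnet.Tally, deviationStandard, averageMean) are assumptions based on what the talk describes, not necessarily the library's exact names.

```js
// Hedged sketch: aggregation and tally stages for the Bitcoin price request.
// Each filters out values deviating too far from the mean, then reduces the
// remaining values with the arithmetic mean. Check the witnet-requests
// documentation for the real identifiers.
const aggregator = new Witnet.Aggregator({
  filters: [[Witnet.Types.FILTERS.deviationStandard, 1.5]],
  reducer: Witnet.Types.REDUCERS.averageMean,
})

const tally = new Witnet.Tally({
  filters: [[Witnet.Types.FILTERS.deviationStandard, 1.0]], // harsher filter at the tally stage
  reducer: Witnet.Types.REDUCERS.averageMean,
})
```

As a hypothetical numeric illustration of the effect described above: with retrieved values of 16, 16, 16 and 20, the unfiltered mean is 17; the standard deviation is about 1.73, so the 1.5-sigma filter drops the 20 and the mean becomes 16, roughly one unit smoother.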
And now comes the really interesting part, which is the demo itself; the code is already written. Okay, so what we saw in the examples actually maps to something like this. Basically, this is using the Truffle box: we published a Truffle box so that you can bootstrap Witnet-powered projects in just a few seconds. You would start just by doing truffle unbox with our box, something like this. I will not run it right now, because it takes a minute or so and we are not doing that great on time. So basically this is a Witnet request, which is a small JavaScript file where I imported the witnet-requests library, I defined one source for Bitstamp, another one for a second price API, and then I wrote the aggregator and the tally, and then I put everything together, like this: new Witnet.Request, add the Bitstamp source, add the other source, set the aggregator, set the tally, and then I decided that I want this data request to be performed by four different witnesses, four different nodes in the network, plus four as backup; that is better explained in the documentation, long story. Then you can set the fees for all the different stages, creating incentives for the miners to include those transactions into blocks, and you can set the schedule. In this case there is a zero, which means the request can be performed immediately and does not need to wait for a point in the future, although for many use cases it is much more convenient to have it scheduled for the future, or we can delay the publication of the request, set the schedule to zero, and simply not send it from Ethereum to Witnet until we know it can be performed.

So, how do we turn this into something we can use from a contract? We just compile it. When you compile, this is actually just running node on that JavaScript file, and when the JavaScript runs it produces the RADON request. While it reads the JavaScript and turns it into a RADON request, it does some static analysis: it can tell that the sources are giving floats, that the operators chain correctly, and we can see here the type chain, so we make sure that we are doing things right and that the final type is one we can consume from Solidity. So what this did is take the request, compile it into RADON bytecode, and put it inside our contracts folder, as this: a Solidity contract that has the RADON bytes inside. If we instantiate this from our own Solidity contract, we have the bytes, and we can send them into the WBI, the Witnet Bridge Interface, for the request to be resolved by Witnet.
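Putting the pieces together, the request file described a moment ago could look roughly like this. It is a sketch only: the addSource, setAggregator, setTally, setQuorum, setFees and setSchedule names are the ones mentioned in the talk, the fee values are placeholders, and the export convention is an assumption.

```js
// Hedged sketch of a complete Witnet request file (e.g. bitcoin_price.js),
// combining the sources, aggregator and tally from the earlier sketches.
import * as Witnet from "witnet-requests"

const request = new Witnet.Request()
  .addSource(bitstamp)        // first price source, as in the earlier sketch
  .addSource(coindesk)        // second, independent price source
  .setAggregator(aggregator)  // how each witness merges its own retrieved values
  .setTally(tally)            // how the miners merge the witnesses' reveals
  .setQuorum(4, 4)            // 4 witnesses plus 4 backups, as in the talk
  .setFees(10, 1, 1, 1)       // placeholder rewards/fees for the different stages
  .setSchedule(0)             // 0 = can be resolved immediately, no not-before timestamp

// Exporting the request lets the compile step turn it into RADON bytecode and
// wrap it into a Solidity contract. (Export convention is an assumption.)
export default request
```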
So, to use it, we have a very, very simple use case, which is a price feed using Witnet. What we need to do is import the UsingWitnet library and say contract PriceFeed is UsingWitnet. Then, by inheritance, we get a lot of methods that we can use to control the data request lifecycle: sending requests, having the results read, and so on. So here we instantiate the request, the Bitcoin price request, and then we send it like this: witnetPostRequest, passing the request and the fee that we want to set as the reward for the bridge nodes to relay the request into Witnet. And then, when the result is ready, which is guarded by this modifier, we can read the result. We get a Result structure, which is a sum type that lets us know whether we got an actual result with no failure, or whether something failed, because the APIs failed or anything like that, and in that case we can go into a fallback branch so as to recover from the situation. I like to think of it this way: most of the time this price feed resolves in a completely trustless manner using Witnet, in an automated way, with low latency, and that works great; if something fails, I can fall back to something else, maybe something that involves human intervention, to recover in that case. But I want it to run automatically, without human intervention, 99% of the time, because that is the point.

In addition to producing the contract that contains the bytecode, the compile step does something else, which is producing default constructor arguments and migrations. It detected that inside my project I have a UsingWitnet contract, so it created here, in the migrations folder that you will be familiar with from Truffle, two migrations. One is witnet-core, which imports all of the Witnet toolkit, let's say; what it does is detect whether we are on a public Ethereum network, and if we are, it links everything related to the Witnet Bridge Interface and the Block Relay and so on, while if we are on a private network, it deploys an instance of the WBI and everything else and links that into your contracts. And the most beautiful part is that it also wrote the migration for my own contract and inserted default arguments: it can read the signature of your contract's constructor and put default values, so that you only need to manually set whatever has no sensible default. For example, it detected that the contract is expecting the address of the Witnet Bridge Interface, so it passed that address right there. Because, okay, the Witnet Bridge Interface that is live right now is just a reference implementation that we made, but we expect other developers to create other implementations that may work in a different way. There is a standard interface, and as long as you abide by that interface, internally it can work differently, so developers have the freedom to use another Witnet Bridge Interface that is not the one provided by us.
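For reference, the auto-generated migration just described might look roughly like this. Everything here is hypothetical: artifacts.require and deployer.deploy are standard Truffle, the contract names and the constructor taking the Witnet Bridge Interface address follow what the talk describes, but the actual generated file will differ.

```js
// Hypothetical sketch of the auto-generated Truffle migration for the price feed.
const PriceFeed = artifacts.require("PriceFeed")
const WitnetBridgeInterface = artifacts.require("WitnetBridgeInterface")

module.exports = function (deployer) {
  // Default constructor argument filled in by the toolkit: the address of the
  // (reference implementation of the) Witnet Bridge Interface.
  deployer.deploy(PriceFeed, WitnetBridgeInterface.address)
}
```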
So, going back to the slides, which I put here just in case the demo didn't work. On the future front, the first big thing for Witnet is going beyond HTTPS GET. Basically, right now Witnet is focused on processing payloads coming from GET requests over HTTP, but we can think of other retrieval types, like data from IPFS or Swarm and so on, or even reacting to Web3 events, basically giving your contracts the capability of processing the events thrown by other Ethereum contracts, or by Solidity contracts that live on another Ethereum network, which is something quite interesting, or even authenticated APIs.

One other thing we are working on is allowing safe implicit conversions, so basically dropping the asMap, asFloat calls and so on. We can do that because the operators are actually defined per type, something like string-parseJSON, bytes-asMap and so on, so they are already explicit about which type they go from and which type they go to, and at some point the explicit conversion calls become obvious and can be removed. And here is one thing I am also quite excited about, which comes from how CBOR works. CBOR, as an encoding for data structures, needs to encode both the values and how the values relate to each other, the hierarchy, and because of that there is normally some expansion; for example, representing an operator code like 0x45 takes two bytes here. But there is an optimization: small integers take only one byte, because if the value is small enough, a single byte already tells CBOR both that it is an integer and what its value is. So we can turn this script into this other one, in which we basically take all the operator codes modulo 16, and since we know the chain of types, we know that a 5 here actually means 0x45, a 1 means 0x61 and so on. We leverage the way CBOR works and turn those operators from two bytes into a single byte. In this case the reduction is around 30%, because we still have literals in the script; in scripts that have very few literals and are just changing the shape of the data, without gets with string keys and things like that, we can get up to 50% savings, which is very important for the long-term scalability of the network.
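To see why taking the operator codes modulo 16 pays off, here is a tiny illustration using the generic cbor package from npm (using that particular package is my assumption; the point holds for any CBOR encoder): CBOR encodes unsigned integers from 0 to 23 directly in the initial byte, while anything larger needs at least one extra byte.

```js
// Hedged illustration of the CBOR small-integer optimization described above.
const cbor = require("cbor")

console.log(cbor.encode(0x45)) // <Buffer 18 45>: 0x45 = 69 is above 23, so it needs a prefix byte
console.log(cbor.encode(0x05)) // <Buffer 05>: values 0..23 fit directly in the initial byte

// Encoding an operator as 0x45 costs two bytes, while encoding it as
// 0x45 % 16 === 5 costs one; the type chain of the script is what lets the
// decoder map that 5 back to the intended operator.
console.log(0x45 % 16) // 5
```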
Also, we think of RADON as something that goes beyond Witnet, and actually beyond oracle networks, because at the end of the day it is a language for defining data flows and performing operations along the way. So it could be used not only for other oracles, but for writing scrapers, for taking data from anywhere, for building extract-transform-load tools, which are quite common in the banking industry, for example. So the work we will be doing with RADON in the medium term is generalizing it and detaching it from Witnet, so that we can have a higher-order RADON in which developers can define their own stages, their own operators and so on, and just leverage the abstract syntax behind it and all the tooling, adapting them to their own use cases.

Ah, and when will we be able to use Witnet from Ethereum mainnet? Okay, we are almost there. Witnet itself has its own blockchain; it is like a sidechain to Ethereum, bridged by the scheme we saw before with the Block Relay and so on. Right now the Witnet testnet is connected to two different Ethereum testnets, Rinkeby and Goerli, and we are working heavily on improving the documentation and so on. In the official documentation, at docs.witnet.io, you have a really comprehensive tutorial for creating the price feed we saw before. Right now we are doing a lot of testing, and in the following months we also want to work on more bridges, because the idea is that Witnet is in some way blockchain agnostic, meaning it is bound to serve as many different smart contract platforms as possible. Mainnet is expected for early next year.

So, the price feed is a very low-level primitive for writing things like DeFi utilities, but on top of it we can create many fun things in a really, really easy way, and in a decentralized way that was not possible before. Especially for DeFi, we tried to push the limits of what we have right now, of all the tooling and of what Witnet is today, and we came up with this small dapp that you can use; hopefully in a few months we can migrate it to mainnet. Basically, it is a prediction market for coin prices, so you can try to predict which coin will perform best. You can see here all the prediction volumes for each different coin, each different cryptocurrency; right now this one is winning, no surprise. And here are your predictions. I think I can double down, so let's go with Ethereum, 0.2. This is for tomorrow's market, so today I can predict what will happen tomorrow, and as soon as tomorrow finishes I can withdraw my prize if I am one of the winners; you can see your potential win for each option. And then, for example, we can see here yesterday's market: the predictions for yesterday are closed. This other one opened yesterday, but it is the market for today, actually; it is tracking what is happening today. So tonight it will be closed, and what will happen is that the contract behind it will send the data request, it will attest what the actual result was by retrieving it from three different price APIs, it will aggregate everything, push the result into the smart contract behind the dapp, and it will allow the ones that voted for the best-performing crypto to take a share of the prize. So make sure to come back every day, so you can see how the markets evolve and who the winners are, and hopefully get some winnings in the end.

So, are the witnesses only you guys right now, behind this? No, actually, it is totally live and open for anyone; running a node is pretty simple, I could spin one up right now, although I'd better not, it would spawn a node on this server. Then, do we challenge or verify the execution of the tally on-chain? What happens is that, as soon as you have the reveals, everyone knows the tally script and its inputs, so they can recompute what the result of the tally should be, and anyone can verify the tallies as they are written into blocks. But does that mean other nodes reject blocks that contain wrong tallies, and does that happen on the sidechain or on the main chain?
That is done on the sidechain; then the result is brought back to Ethereum, for example, and there it is verified. The bridge, in its reporting phase, verifies the transaction through SPV, let's say: it verifies that the transaction belongs to Witnet and that it has been accepted by the Witnet network as the tally for that particular request, and this involves relaying Witnet block headers.

Then, what happens to fees and witness reputation if you have either a persistent or an intermittent error on one API, say one witness gets a 500 and the other three do not? That is quite important, and there are two different things here. An API can fail, and it is true that nobody should really take a hit from that, because the witnessing nodes are not guilty of it. What happens is that slashing right now acts on reputation: there is a reputation system behind this, which is simply a score of how often every single node was part of the majority, let's say, how often it was not filtered out by the tally functions, by the tally scripts. So it is important to design the tallies very, very well, so as not to punish honest nodes for that kind of thing. It is true that some attacker could come up with some crazy request that queries really, really unreliable APIs so as to damage the system, but the way the reputation system works kind of prevents that, because reputation is really volatile and changes hands really, really fast, and if you try to plan such a thing, the point is that you cannot decide who gets punished, because the witnesses are randomly selected. Okay, I will cause some nodes to be punished and some others to be rewarded, but I do not know who they are. So what that attack actually does is just accelerate the way the system redistributes reputation among many different nodes, because at the end of the day reputation also has demurrage, which means that just because you hold reputation, you lose it over time; it expires, so you cannot hold your reputation forever and it is always changing hands. If you do that, you only accelerate that process, so it is actually good for the system.

Sorry, where should we start reading? The white paper sounds really interesting and totally confusing. Okay, you can read the white paper; what we have implemented in these two years is 90% of what is outlined there. We have recently published a new documentation site that breaks everything down into the basics and into really understanding what we are doing here, so I think that will work quite well, because the white paper is quite extensive. But we are also constantly publishing a lot of content, research content and technical content, on specific areas like the reputation system and many other parts, and I think you will like that. So, okay, thanks for coming.