So, hello all. I'm happy to be here to share with you our latest work and research on dType and ChainLens. dType is by far the most intellectually satisfying and elegant project I have ever worked on, because it embodies the values I cherish most: transparency, and unity through folding the basic building blocks of software, namely types. dType is a decentralized type system. Its goal is to standardize a common type description format stored on-chain, so it is available to everyone. The types themselves, including custom types, are stored on-chain, and this will allow a degree of interoperability that has not existed before. So, these are our motivations. The rules of computing must be open to everyone, and an unfortunate counter-example is the floating-point arithmetic standard (IEEE 754), which is paywalled: it costs about $100 to access, yet WebAssembly, which is supposed to be an open standard and is built to become the future binary language for software building blocks, uses it. Anyone should be able to read the full open standard of such a system. If we want to truly build the Tower of Babel of software, a convergence toward generally agreed-upon types must happen. A good side effect of having an on-chain standard is that you can always verify that your software follows it closely, and you can use that standard, in this case dType, as a source of truth when bugs arise. So what is the philosophy on which dType is being built? All types derive from the same prima materia. For dType, this prima materia is bit1, and it is very real: it is representable on the wire, which means you can use the hardware as an etalon for the mathematics, as opposed to other type systems, which use abstract ideas like the natural numbers. But the prima materia can vary, and, for example, we are curious to see whether dType will be flexible enough for a parallel typing system dependent on a single qubit.
For dType, types are functions, and functions are types. Specifically, all functions with the same output type represent that type, and we should be able to travel from the type back to the prima materia, which means a type function's input is the prima materia or something derived from it. Typecasting rules must be defined in the type system itself. So every type is generated from, and can be traced back to, the initial untyped prima materia. The type system should be flexible in terms of encoding and decoding on various hardware, at least for 32- and 64-bit systems. So, is dType static or dynamic? You will see it is both, in the sense that it comes in two flavors. dType is nominative, because types can have the same underlying structure but very different semantics: you must not be able to add apples and oranges even though they are both unsigned integers. But first, our roots. dType version 1 was based on C-like structs, and it allowed us to start thinking about how such a system could be integrated into Ethereum 2.0. We advocated for having a special shard for operating-system components such as types, which would effectively act as a global scope for all other shards; all the links to this can be found in the dType repository. Now, dType version 2 is functional, and it is built upon the knowledge we gathered while building Pipeline, specifically the on-chain Pipeline graph interpreter that we now have live for testing. All types are now based on functions, so Pipeline itself can be an editor for creating new types. The new dType engine is actually a graph interpreter for functions residing in the same contract, covering both compiled functions and runtime-created functions in the form of graphs. So I am now introducing Taylor, a suite of Yul+ extensions for using dType as a type system and Pipeline as an interpreted language. Our target is compatibility with WebAssembly and with any Ethereum VM.
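The nominative rule above (no adding apples and oranges, even though both are unsigned integers underneath) can be sketched in a few lines. This is a purely illustrative Python model, not the on-chain implementation; the `Named` class and the type names are assumptions made for the example.

```python
# Illustrative sketch of nominative typing: two named types share the same
# underlying unsigned-integer structure, yet must not be mixed implicitly.
class Named:
    def __init__(self, type_name, value):
        self.type_name, self.value = type_name, value

    def __add__(self, other):
        # Structural equality is not enough; the type names must match.
        if self.type_name != other.type_name:
            raise TypeError(f"cannot add {self.type_name} and {other.type_name}")
        return Named(self.type_name, self.value + other.value)

apples = Named("Apple", 2)
print((apples + Named("Apple", 1)).value)  # -> 3
# apples + Named("Orange", 3)  # raises TypeError: cannot add Apple and Orange
```

An explicit cast, defined in the type system, would be the only sanctioned way to move a value between such types.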
Therefore, we chose Yul for our tech stack, and then we found out about Yul+. Our first step was to introduce our own memory struct in Yul+ with support for dType types, to get a sense of what we need and what can be done. I'm keeping in contact with Nick Dodson from Fuel Labs, who will talk about Yul+ later today (I think it's the first talk after mine), on how we can make Yul+ support extensions for various encoding and decoding formats. The initial type bootstrap for the Yul+ extension was done with type definitions from a JSON file, where types were defined based on other types; you can find this in my Yul+ fork, in the Taylor branch. This allowed me to more easily extend Yul+ to memory structs containing dynamic types, and we used it to start developing the on-chain, interpreted Taylor type system based on our Pipeline model. So Taylor is functional and comes in two flavors, interpreted and compiled: compiled, where the Yul+ transpiler extension uses on-chain dType data for encoding and decoding, and Pipeline can be used to compose types. In the future, type definitions will be shadowed in multiple languages wherever a Pipeline interpreter exists. So, at present, dType can support any type and any encoding, and the EVM is not restricted to only Solidity types. What are the native Taylor functions? Type creation starts from a suite of native functions that are used recursively. Most of them are pure functions, like new, identity, contig, concat, map, reduce, and curry, but state-mutating functions for type values and a pay function are part of a Taylor extension, where logic can be built upon the same principles of interpreting graphs. In my last article about currying, I explained how the core of the mechanism works. We currently use four-byte signatures for types, and all recursive calls go through executeInternal.
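To make the four-byte-signature dispatch concrete, here is a minimal sketch of a native-function table keyed by signature. The signature values and the Python shapes are illustrative assumptions, not the on-chain Yul+ code.

```python
from functools import reduce as _reduce

# Hypothetical 4-byte signatures mapped to a few of the pure natives
# mentioned in the talk (identity, concat, map, reduce).
NATIVES = {
    b"\x00\x00\x00\x01": ("identity", lambda x: x),
    b"\x00\x00\x00\x02": ("concat",   lambda a, b: a + b),
    b"\x00\x00\x00\x03": ("map",      lambda f, xs: [f(x) for x in xs]),
    b"\x00\x00\x00\x04": ("reduce",   lambda f, xs, init: _reduce(f, xs, init)),
}

def execute_native(sig, args):
    """Look up a native by its 4-byte signature; return (success, result)."""
    entry = NATIVES.get(sig)
    if entry is None:
        return False, None
    _name, fn = entry
    return True, fn(*args)

ok, out = execute_native(b"\x00\x00\x00\x02", [b"\xaa", b"\xbb"])
print(ok, out)  # -> True b'\xaa\xbb'
```

The success flag is what lets the caller fall back to other dispatch paths when a signature is not a native, which is exactly the role it plays in the interpreter described next.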
executeInternal tries to call executeNative or executeCurried, depending on a success variable that each of them returns. All the native functions I previously showed you reside in executeNative, in a switch-case statement. Virtual curried functions are handled by executeCurried, which retrieves the data from a pointer in memory; the pointer is also the curried function's signature. The interpreted flavor of Taylor version 1 is significantly more complex than my article's example, and stored graphs for types are handled by the executeGraph function. We also have support for named and sized types, but a run of this system would not fit on a slide and would require at least five minutes of explanation; if you know Pipeline, you know why the executeGraph part can become very complex. These are links to my recent articles, also found on Medium, on how recursive apply and currying work, as prerequisite reading for this talk. Recursive apply is the engine of functional programming, and we have implemented it, along with currying, in our own on-chain interpreter. So what types can be built with Taylor? From bit1 to numbers, to arrays, tuples, and unions, up to n-dimensional arrays and strings. The types in bold do not exist in Solidity, and dType is made to be compatible with FlatBuffers and CBOR. We are working on type formulas to define a proper set of native functions, and we currently have working implementations for bytes, unsigned integers, static arrays, static n-dimensional arrays, and unions. These are based on a typed encoding format. This is one of the encoding formats we are working on: a typed format, where values always come with their type. It is an intermediate representation for other kinds of encoding. This enables smart contracts to do runtime type checking, and values stored under this format expose their type directly in the bytecode, making automated code analysis richer in information.
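The executeInternal flow described above, trying the native path first and falling back to curried functions via a success flag, can be sketched like this. This is a toy model under stated assumptions: the pointer scheme, the single native, and all names are hypothetical, and a Python dict stands in for contract memory.

```python
# Toy dispatch model: execute_internal -> execute_native, else execute_curried.
NATIVES = {b"\x00\x00\x00\x02": lambda a, b: a + b}  # e.g. a concat native
CURRIED = {}  # "memory": pointer (doubles as signature) -> (inner sig, stored args)

def execute_native(sig, args):
    fn = NATIVES.get(sig)
    return (True, fn(*args)) if fn else (False, None)

def execute_curried(sig, args):
    entry = CURRIED.get(sig)
    if entry is None:
        return False, None
    inner_sig, stored = entry
    # Recursive call goes back through execute_internal, as in the talk.
    return True, execute_internal(inner_sig, stored + args)

def execute_internal(sig, args):
    ok, result = execute_native(sig, args)
    if not ok:
        ok, result = execute_curried(sig, args)
    if not ok:
        raise ValueError("unknown signature")
    return result

def curry(sig, args):
    """Store a partial application; the 'pointer' is also its new signature."""
    pointer = bytes([0x80, 0, 0, len(CURRIED) + 1])  # illustrative pointer scheme
    CURRIED[pointer] = (sig, args)
    return pointer

concat_aa = curry(b"\x00\x00\x00\x02", [b"\xaa"])
print(execute_internal(concat_aa, [b"\xbb"]))  # -> b'\xaa\xbb'
```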
At the moment we are using a four-byte signature for types, and the base types have a hard-coded signature format where the last three bytes represent the size. The tuple, which you can see on the right starting with ee, also contains the additive sum of the component sizes, for ease of use. These are the current data structures we are working with: each type definition is an ordered array of steps, and each step contains the signature of a type (which is a function) and an array of input indexes. These are the indexes of the input arguments from the setup, or of graph-local variables produced while running the graph. On the right you can see our current n-dimensional array definition, using the new, reduce, contig, getTypeSignature, curry, and concat native functions. This is an example of a union type definition: it only needs a selector index for the component type and the native select function; the selector's runtime value is expected to be at position zero in the actual data. So when you define a type that can have sizes, for example uint, you don't need to define every sized type, so you don't need to store another definition for uint256. You can think of the abstract uint type as a partially applied function. We also have support for named types; as I said before, you shouldn't be able to add apples and oranges, nor the various ERC-20 tokens, together. So now for a short demo. This is a modified version of the Yul+ extension for Remix, and this is the Taylor graph interpreter contract, which I will be deploying. Now we are going to insert the definition for the uint type, and then the definition for the simple array type, and then we are going to call the function for creating a new array. This is the signature for the execute function, which is the main entry point into the program. This is the signature for simple arrays: it starts with 4440 and doesn't have a size, because it's the abstract type.
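The "type definition as an ordered array of steps" idea above can be sketched as a tiny graph runner: each step names a function and lists the indexes of its inputs among the setup arguments and previously produced graph-local values. The natives and the example definition here are hypothetical, chosen only to show the indexing scheme.

```python
# Illustrative graph runner: a type definition is an ordered list of steps,
# each step = (function signature, input indexes into the value list).
def run_graph(steps, natives, inputs):
    values = list(inputs)                  # setup inputs come first
    for sig, input_indexes in steps:
        args = [values[i] for i in input_indexes]
        values.append(natives[sig](*args))  # step output becomes a graph-local
    return values[-1]                       # last step's output is the result

# Hypothetical natives and a toy definition: repeat a byte, then concat.
natives = {
    "repeat": lambda x, n: x * n,
    "concat": lambda a, b: a + b,
}
steps = [
    ("repeat", [0, 1]),   # values[2] = repeat(inputs[0], inputs[1])
    ("concat", [2, 2]),   # values[3] = concat(values[2], values[2])
]
print(run_graph(steps, natives, [b"\x01", 2]))  # -> b'\x01\x01\x01\x01'
```

A real definition, like the n-dimensional array one, would chain natives such as curry and getTypeSignature in the same fashion; only the step table grows.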
So the inputs and outputs are wrapped in tuples. This is a tuple of size three, and the next three values are the additive lengths of the components of the tuple. The value of our first argument is a typed value, and that value is essentially this, which is the signature of the abstract uint type; the signature is itself a bytes value, which is why we have a bytes type of size four here. So, effectively, we want an array with uint items. The second argument is this, which is 32: we want an array of uint elements of size 32, which is equivalent to uint256. And the last argument is 4, which is the length of the array. So we call this, and we get back another tuple with one element, and the typed value is this one. This is the signature for an array of length 4, containing elements of size 32, so uint256 in Solidity. And the next value is the actual value, which is initialized with zeros. I don't have time to show you more examples, or examples of casting, so I will go back to the presentation. I'm not sure you are seeing the correct slide, because I'm not; I had a slide with a diagram here, but I don't seem to be able to show it, so maybe I'll show it later. To summarize, I have talked about the interpreted flavor of Taylor. The types created on-chain can be used for the transpiled version: Taylor as a Yul+ extension, providing type definitions, encoding and decoding rules, and type checking. Research for the near future includes an efficient set of native functions, optimizing the steps needed for building and casting types, and bootstrapping the Taylor interpreter, so that it uses Taylor-produced types and is parameterized on the VM slot size, Taylor's native sizes, and the prima materia. This was the diagram that I showed you earlier.
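The typed values walked through in the demo follow the scheme from earlier: a four-byte signature whose last three bytes give the size, followed by the payload. Here is a sketch of reading one such value; the concrete signature bytes are illustrative, not the real Taylor constants.

```python
# Sketch of decoding a typed value under the 4-byte-signature scheme:
# byte 0 identifies the base type family, bytes 1..3 encode the size.
def decode_typed(data):
    sig, rest = data[:4], data[4:]
    size = int.from_bytes(sig[1:4], "big")  # last three bytes: payload size
    return sig, rest[:size], rest[size:]

# An illustrative 4-byte payload tagged with a made-up signature 0x44000004:
blob = bytes.fromhex("44000004deadbeef")
sig, payload, rest = decode_typed(blob)
print(sig.hex(), payload.hex())  # -> 44000004 deadbeef
```

Because the type travels with the value, a contract (or an off-chain analyzer reading the bytecode) can type-check at runtime without out-of-band schema information.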
So dType and Pipeline go hand in hand: dType provides a minimal set of typeless functions, maintains a table of signatures, and provides Pipeline with input and output types, Pipeline being able to parse the input and output pointers; and Pipeline helps us turn a dType into combinations of functions, run by an intra-contract Pipeline interpreter, and it will also provide inter-contract type processing. And this is how byte1 can be represented: it receives bit1 as input, repeated a given number of times, in this case eight. This was the diagram; I'm not sure if you saw it earlier. And now for the n-dimensional array example. The n-dimensional array can have n dimensions, where n is larger than two, and on the right you can see a representation of an empty array with three dimensions. The first three four-byte slots are the type IDs for each sub-array, each with its own dimension; the fourth four-byte slot is the uint32 type ID; and then come the values, in this case initialized with zeros. So, I have shown you dType, Taylor, and Pipeline. What is the purpose of ChainLens? ChainLens is a browsable and searchable cache for on-chain types. It will provide data for editor tools, or input for tools like Pipeline, and anyone will be able to run it at home, because the main data is on-chain. Well, no: most people prefer trusted setups over running their own. And I think the next level in providing good common-good software data is a system of decentralized governance, where companies or individuals join database clusters. Having one company control all the source code is begging for problems, and I, for one, do not want the next Wasm package manager to follow suit. The current version of ChainLens was derived from our work on the Pipeline contract finder, where we needed the ABIs of already-deployed contracts. And if you do not know how to use ChainLens with Pipeline, check out my YouTube videos.
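The byte1 derivation just described, bit1 repeated eight times and folded into one byte, is small enough to sketch directly. A minimal illustration, assuming big-endian bit order (the actual on-chain bit order is not specified here):

```python
# byte1 as a function of the prima materia: eight bit1 inputs folded into a byte.
def bit1(b):
    assert b in (0, 1)  # the prima materia: a single, wire-representable bit
    return b

def byte1(bits):
    assert len(bits) == 8  # bit1 repeated eight times, per the diagram
    value = 0
    for b in bits:
        value = (value << 1) | bit1(b)
    return bytes([value])

print(byte1([1, 0, 0, 0, 0, 0, 0, 1]))  # -> b'\x81'
```

Every higher type (uint, arrays, tuples) then composes functions like this one, which is what makes the whole tower traceable back to bit1.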
But ChainLens only has two types, contracts and functions; the new Lens will have multiple types. We have around 40,000 contracts waiting to be published on the decentralized database, with better search. What does the Lens look like? I have a small demo here, which will run on Ropsten. This is how the current ChainLens looks: it has contracts, which you can search through and browse, and functions, and you can interact with those contracts and functions and then export them to other tools. So now let's see what types we already have inserted in the Ropsten-deployed Taylor contract. We have some types here, and I will insert another named type, which will be a uint of size 4, which we will name scarceToken. And while we are waiting for the Ropsten transaction to finalize, I will show you another thing we have, which is a prototype of an on-chain typed database. For the uint type, I'll click on this button, and we'll see the values that are now stored under uint in this small database: we have two values, 6 and 7, and another value under another type. What we can do is select some of these values, export them, and apply other tools to them. So, while we are waiting for the transaction to finalize, I'm going to tell you why we are building dType, Lens, and Pipeline in parallel: the process is slower, but it allows us to learn from each of them, and each influences how we build the others. The dType-Lens-Pipeline flow is as follows: Pipeline handles pure graphs made from pure functions provided by Lens; it receives input from dType through Lens; and the result is a state-transformation graph, applied to insert or modify the output, which is a dType value. And now we can see the new type that we inserted here, and we can add other values to it. But I will return to my presentation now.
So, if you port your project to dType, you will be able to generate types from dType in multiple languages, and to do on-chain type checking and cast checking. This is a call to arms for unifying types across languages. I don't think the blockchain revolution has ended, and we are betting it will restart on other vectors than money. Defining types on-chain guarantees that blockchain programmers will be part of the next software revolution, which will come. I view blockchain as a citadel for common resources and computable standards, and many may say that this is very hard to actually achieve, which is true. But I cannot think of a higher scope and goal than this one, and this is why I have donated my time to this cause. So, after more than a year of work on dType, we are in a position to launch this hypothesis: a decentralized, blockchain-based OS will eventually exist, and it will contain its type definitions in its own chain boot sequence. And we wish you success in building.