So, I want to take this time to give you an overview of what IOHK Research does. I should say that I was recently at a meeting in Zurich where we discussed many ambitious plans for what blockchain research could be over the next couple of years, and I was happy to find that there wasn't a single topic IOHK was not already doing something about. So let me tell you how IOHK Research is structured today. The operation follows a hybrid model. On one side there is in-house research, which is IOHK Research proper, the part some of you will be familiar with from the website. On the other side there is what I call embedded research: research funded by IOHK and carried out in collaboration with academic institutions. That means collaborations with individual researchers working at universities, but also the funding of blockchain technology laboratories, research units funded as a whole by IOHK alongside other funders. We have three such research centers, in Edinburgh, in Athens, and at Tokyo Tech, and a number of embedded researchers in other locations: Denmark, Bratislava, Kiev, and Pittsburgh. Furthermore, we have collaborations with universities: UIUC, UConn, Oxford, and Lancaster. And this is just what we have right now; it is growing. As we speak, we are in negotiations with universities and individual researchers about either joining IOHK Research in-house or expanding our embedded research program. This concept of embedded research works well for IOHK because it is important that research happens in a research environment. Given that IOHK does not have a single location, it is valuable to have our researchers working alongside researchers in other topics, and there is a very nice opportunity for collaboration at all these different institutions. 
So what is the methodology of IOHK Research? Well, obviously, being researchers, we submit papers and participate actively in peer review. We then disseminate the work widely: no patents, no restrictions, supporting the free flow of information for blockchain research, because the intention is to contribute to and ultimately advance the science of this community as a whole. That is important, because this is the role we feel IOHK should play here. As such, we acknowledge and incorporate great ideas and approaches contributed by everyone. We are open; we do not advocate an orthodoxy. What we advocate is a principle, an approach, and that approach leads us to a number of research themes we are currently pursuing. In this presentation I will walk you through the research topics currently under way in our different groups. I'm not sure you can read this slide well, especially in the back; hopefully the next slides will not be this tiny, I just wanted one slide holding all the different things we do, and it was hard to make it any larger. First, we have research on the settlement layer, meaning everything that happens at the back end: the blockchain protocol and consensus. There are many topics there, such as defining security properly, incentives, networking, scalability, verification, and high assurance. Moving on, there is the contract layer, connecting to Phil's talk: programming language research, developer support, and dealing with legacy code in the wider ecosystem are some of the main questions we care about. We also do research on proof of work, which relates to how we connect with blockchains based on proof of work. 
There is a thread there on pegging; wallet research that touches on various security aspects and usability; applications on top of the settlement and contract layers, which right now starts with implementing secure multi-party computation; and thinking about privacy: zero-knowledge protocols, SNARKs, and hardware support. These are the major research themes; now let me take up some specific projects currently under way. Ouroboros is the backbone of the Cardano blockchain and one of our important research achievements. We published it at Crypto 2017, in August of last year, and it was implemented and deployed in October 2017. I would say this is the most remarkably fast transition from a cryptography paper to a full working implementation used by tens of thousands of people. There are precedents, but in the past the cycle from basic research to deployment was counted in years; here we are talking about months. So this was a remarkable achievement in its own right, one that required the collaboration of many people on both the R and the D sides of the company. Ouroboros as it is now, which we can call the classic version, as more versions are coming, uses a fixed core-node infrastructure: all stake is force-delegated to these core nodes. This is something we will change, as prescribed in the protocol itself, and the new release of the protocol, Shelley, is going to be decentralized in contrast to that. Here is what will happen: stake pools will emerge naturally as stakeholders delegate stake to one another, and for this to happen every stakeholder must be able both to delegate and to revoke delegation. There is very active research on this, and there will also be a dedicated presentation. 
The solution we are converging on right now is a combination of what we call lightweight and heavyweight certificates, where you essentially use the blockchain itself to record who trusts whom, who delegates to whom. This naturally gives rise to stake pools, but also to many interesting questions about the stability of the system. We would also like to accommodate different delegation capabilities; for example, enterprise addresses could have a special form whose delegation is treated differently from that of regular users. The question that keeps us up at night is how to give the end user secure and effective control of staking. This is an important feature of the system: decentralization fundamentally means that users control their own stake, so making sure users have an effective way to command how their stake is assigned and used in the protocol is essential. If we fail there, the result will be a system that is not as truly decentralized as we would like it to be. And this of course brings us to incentives, because understanding the incentives of the stakeholders who operate the protocol is critical to understanding its security. Classical cryptographic security divides the entities of a protocol into two categories: the honest parties, who follow the protocol, and the adversary, which is, if you want, a coalition of parties acting together to subvert some feature or property of the protocol. This classical way of thinking about security simply divides the world into good and bad; the adversary's goals, and the question of why an honest party follows the protocol at all, are not addressed. Which brings us to rationality. 
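The idea of recording delegation and revocation on the chain itself can be sketched as a toy model. This is purely illustrative (the certificate structure and field names here are made up, not the Cardano format): the effective stake of each pool is whatever the latest certificate per stakeholder implies.

```python
# Toy on-chain delegation registry (illustrative only, not the actual
# Cardano certificate format): the chain records delegation and
# revocation certificates, and effective pool stake is resolved from
# the latest certificate per stakeholder.

def effective_stake(stake, certs):
    """stake: {holder: amount}; certs: chain-ordered list of
    ('delegate', holder, pool) or ('revoke', holder) events."""
    delegation = {}
    for cert in certs:
        if cert[0] == 'delegate':
            _, holder, pool = cert
            delegation[holder] = pool
        elif cert[0] == 'revoke':
            delegation.pop(cert[1], None)
    pools = {}
    for holder, amount in stake.items():
        # Undelegated stake stays with its owner.
        target = delegation.get(holder, holder)
        pools[target] = pools.get(target, 0) + amount
    return pools
```

For example, if Alice and Bob both delegate to a pool and Bob later revokes, the pool's effective stake is Alice's alone, while Bob's stake reverts to him.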
The game-theoretic approach does not distinguish between good and bad; it just assumes that parties are rational and posits a utility that helps you understand each party's objectives. This leads to mechanism design: designing the protocol with a reward structure that incentivizes rational participants to operate the way they should. A first rendering of incentives appeared in the analysis within the Ouroboros paper, leading to what is called a fair blockchain: a blockchain that can faithfully record the actions participants take. In other words, if you take an action that is expensive for you, this will be reflected in the blockchain, and I, as another participant who might want to prevent the blockchain from recording it, will not be able to do so. This is just a first step towards providing fertile ground for a rational, game-theoretic analysis, which is what we are doing right now. By the Shelley release we would like to have a first version of our game-theoretic analysis completely ironed out, and the code will include an incentive structure consistent with proper protocol behavior. The question here is how to make proper decentralization the inevitable outcome of rationality. This is a difficult question that has not actually been addressed before, an important research theme in which, I should say, we are making head-on progress, and something that I think will have a lot of intellectual merit on its own. Moving to network research, it is good to go back to this Nakamoto quote from 2009: the blockchain itself takes advantage of the nature of information being easy to spread and hard to stifle. It is a great quote, and it needs to be supported by a network layer that actually delivers on it. 
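To make the mechanism-design idea concrete, here is a toy pro-rata reward split for a stake pool. This is purely illustrative, not IOHK's actual incentive scheme: the operator takes a fixed margin and the remainder is shared in proportion to delegated stake, so a member's payoff scales with the stake it contributed.

```python
# Toy stake-pool reward split (illustrative; not IOHK's reward scheme):
# a fixed operator margin, then the rest shared pro rata by stake.

def split_rewards(total_reward, operator_margin, delegated):
    """delegated: {member: stake}; returns (operator_cut, payouts)."""
    operator_cut = total_reward * operator_margin
    pot = total_reward - operator_cut
    total_stake = sum(delegated.values())
    payouts = {m: pot * s / total_stake for m, s in delegated.items()}
    return operator_cut, payouts
```

The design question the talk raises is precisely whether a structure like this, embedded in the protocol, makes honest pool operation the best response for rational participants.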
It turns out that the network layer we have right now supports that statement only under optimistic conditions. What we have is a peer-to-peer, gossip-style setting that is subject to a number of attacks; eclipse attacks are a great example. Such attacks exploit the peer-to-peer fashion in which the connected graph emerges, and manage to isolate parties and disconnect them from the network. Obviously such attacks hurt consensus: consensus can only be achieved if you actually have this ability to spread information easily, without it being possible to stifle it. Furthermore, this must happen at regular intervals; the time it takes information to spread cannot be arbitrary. Clearly what we have here is a performance–reliability trade-off. So a very important theme of our research is providing reliable message transmission in this peer-to-peer environment. Right now the whole blockchain world is built on peer-to-peer protocols developed over the last 20 years; nevertheless, it is time that substantially more research went into making these protocols robust. That is another important research direction. This brings me also to security under extreme conditions. We make a lot of convenient assumptions when we analyze protocols in the blockchain setting, and many conditions that occur in actual deployment go beyond the security models considered so far. For instance: nodes shut down or wake up en masse; there are temporary network-wide splits that divide the stakeholders into non-communicating subsets; or there are temporary so-called 51% attacks, where the adversary suddenly commands more than its usual allowance. 
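The eclipse-attack point can be illustrated with a toy gossip network (a deliberately simple ring topology, not a realistic peer-discovery model): on a healthy network a flooded message reaches everyone, but once an attacker monopolises one node's connections, that node never hears honest gossip.

```python
# Toy gossip diffusion, showing why eclipsing a node hurts consensus.

def ring_network(n):
    """Each node peers with its two ring neighbours (always connected)."""
    return {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}

def flood(adjacency, source):
    """Set of nodes a gossiped message reaches starting from source."""
    reached, frontier = {source}, [source]
    while frontier:
        node = frontier.pop()
        for peer in adjacency[node]:
            if peer not in reached:
                reached.add(peer)
                frontier.append(peer)
    return reached

def eclipse(adjacency, victim):
    """Attacker severs all of the victim's honest peer links."""
    cut = {v: peers - {victim} for v, peers in adjacency.items()}
    cut[victim] = set()
    return cut
```

On a ten-node ring, flooding from node 0 reaches all ten nodes; after eclipsing node 5, the same flood reaches everyone except the victim, which is exactly the isolation that breaks the "easy to spread, hard to stifle" property.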
All these things are typically outside the security model, and they are outside it because under the usual and customary conditions of the system's operation they do not happen. But when you have a long-lived system that is going to operate for years, you cannot exclude instances where such catastrophic events occur. They may be small in the temporal sense, in other words very short, but because you are outside the security model, there is little we can say right now about how the system will behave. And this is what we want to solve: can we retain the basic security properties of our model when such extreme conditions take place? All right, moving on to other important features. Right now we do not have multi-signature support in our transaction layer, and this is an important deficiency we are very actively trying to fix. Providing multi-sig support is more complex than in the Bitcoin setting. Remember that signatures in our setting are used in a much more involved fashion than in the Bitcoin blockchain or similar proof-of-work-based blockchains: your balance, the money you have in the system, your stake, is both a cryptocurrency and your right to participate in protocol execution, so you have to answer what that means when you move to the multi-sig setting. It turns out that it is possible to have staking and spending keys with different threshold structures, and to fold a multi-sig capability into our existing address definition. Still, providing fully functional multi-sig support requires synchronization between the multi-sig owners, and this is a problem we are actively trying to solve. This is immediate: there is a lot of demand for multi-sig support in our transaction layer, and it is something we would like to solve soon. 
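The separation of spending and staking rights with different thresholds can be sketched as follows. This is a hypothetical encoding for illustration only, not the actual address definition: the same three keyholders might need two signatures to spend but only one to delegate the stake.

```python
# Sketch (hypothetical, not the Cardano address format) of an address
# whose spending and staking rights carry different threshold
# structures, e.g. 2-of-3 to spend but 1-of-3 to delegate.

def threshold_met(required, signers, keyholders):
    return len(set(signers) & set(keyholders)) >= required

class MultiSigAddress:
    def __init__(self, keyholders, spend_threshold, stake_threshold):
        self.keyholders = keyholders
        self.spend_threshold = spend_threshold
        self.stake_threshold = stake_threshold

    def can_spend(self, signers):
        return threshold_met(self.spend_threshold, signers, self.keyholders)

    def can_delegate(self, signers):
        return threshold_met(self.stake_threshold, signers, self.keyholders)
```

A lower staking threshold would let any one co-owner keep the stake participating in consensus even when the full quorum needed for spending is unavailable; whether that is the right policy is exactly the kind of question the research has to settle.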
And it is important to solve it in a user-friendly way, because the synchronization aspect of multi-sig can hurt the usability of our wallet. Moving on to protocols: the next version of the protocol, Ouroboros Praos (praos being the Greek word for calm), is work we recently completed that will appear at Eurocrypt 2018. Ouroboros Praos provides a number of improvements in both the design of the protocol and the security model relative to Ouroboros. Very importantly, Ouroboros Praos operates in a partially synchronous model comparable to Bitcoin's, so its synchrony needs are much weaker than those of Ouroboros, and that is something we may be able to exploit in deployment by easing some of our requirements, for example the slot size. It has very efficient random number generation, and it provides what in cryptography is called security against adaptive corruptions: the analysis of a protocol in a model where the adversary can corrupt parties in an adaptive fashion. The adaptivity of the adversary was restricted in our analysis of Ouroboros, and this was an important open question we left in the Crypto paper that we have now addressed. There is another version of our protocol, Ouroboros Thos; thos means fast in Greek, and this is an ongoing effort to provide an evolution path for our implementation. Ouroboros Praos introduces a number of new features to our basic protocol stack, and implementing them all at once would be unwise, so it is important to give the code base a way to evolve. Ouroboros Thos is a version of the classic Ouroboros protocol that uses the idea of hash-based random seed generation from Ouroboros Praos but otherwise leaves the rest of the system the same. So it is a first step in the evolution beyond classic Ouroboros, and the first implementation we will do that goes beyond the blockchain we have right now. 
So an important question our research is trying to answer right now is: what exactly are the side effects of grinding attacks in this setting? Some of you might be familiar with the concept. A grinding attack is when the adversary spends computational effort, in a proof-of-stake protocol, to bias some of the protocol's events to its advantage. Grinding attacks were an important question in the context of proof-of-stake protocols, and one of the important contributions of the Ouroboros protocol is that it neutralized them by using a simple secure multi-party computation protocol to produce randomness. It turns out that we can speed this process up further and substitute the multi-party computation protocol SCRAPE, used in the Ouroboros implementation, with a faster protocol whose complexity is just linear, essentially optimal, in the length of the protocol segment to which the random number generation applies. Nevertheless, this comes at a price: grinding attacks return to the threat model, and we need to control them, and to provide a model for controlling them that is very tight. That is exactly the main research question here. And I should say that the question matters because it directly affects how long you have to wait, given your risk model and threat model, for a transaction to be confirmed. Ouroboros has the virtue of giving you a very precise wait time for how long a transaction takes to be confirmed. For example, in Ouroboros you can say: I have to wait five minutes to be secure against an adversary with 10% of the total stake, and I will be safe except with error probability 0.1%. That is an absolute statement, proven in the Ouroboros paper. 
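Grinding can be illustrated with a toy simulation (the election rule and numbers are made up for exposition; this is not the Ouroboros leader-selection function). Leadership per slot is derived by hashing a seed together with the slot number; an adversary who may try many candidate seeds simply keeps the one that makes it leader most often, so its advantage can only grow with the number of grinding attempts.

```python
import hashlib

# Toy grinding attack on seed-based slot-leader election (illustrative
# election rule, not Ouroboros itself).

def adversary_slots(seed, n_slots, adv_stake_percent):
    """Count slots in which the adversary is elected under this seed."""
    count = 0
    for slot in range(n_slots):
        digest = hashlib.sha256(f"{seed}:{slot}".encode()).digest()
        # Adversary leads a slot with probability ~ its stake fraction.
        if int.from_bytes(digest[:4], "big") % 100 < adv_stake_percent:
            count += 1
    return count

def best_after_grinding(attempts, n_slots, adv_stake_percent):
    """Best outcome the adversary can get over `attempts` seeds."""
    return max(adversary_slots(s, n_slots, adv_stake_percent)
               for s in range(attempts))
```

Quantifying exactly how much this kind of bias costs, and folding it into the confirmation-time statement, is the tight modelling question described above.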
And the nice thing about such a statement is that, by its very precise nature, it gives a simple way to communicate to users of the system what level of safety is provided. This is the same thing we now have to do again, because the faster random number generation brings back additional attack capabilities that we have to control. This brings me to the security of these protocols in general. So far, these protocols have been analyzed in what in cryptography is called the standalone model, where the protocol is studied in isolation, as if in a glass box: you put the protocol in a box together with the adversary and the participating parties and study that execution alone. Now, there are numerous examples in cryptography of a protocol that is secure in the standalone setting but stops being secure when composed with other protocols, because it is possible for an adversary to play different sessions, or other concurrently running protocols, against each other. A higher standard of security is called universally composable security, and for the case of Bitcoin it was shown for the first time at Crypto 2017 in the work of Badertscher, Maurer, Tschudi, and Zikas. So there is an ongoing research effort to understand what changes would be required in a protocol stack to transition from standalone security to composable security. Next generation of the protocol: Ouroboros Hydra. The hydra is a multi-headed serpent, there is a picture of it, and it is basically what will provide sharding for Ouroboros. Sharding is an important question in the database context, and something that so far has not been delivered by any fully deployed blockchain system. The main idea in sharding is to split the ledger into multiple ledgers, each maintained by a different subset of stakeholders, while still presenting the view of a single ledger. 
This captures true scalability: the more servers, the more resources available, the more performance you get, proportionally. That is something not captured by the blockchain systems deployed today, though there are many research efforts to bring the concept of sharding to the blockchain context. And when you think about it a little, the difficult question is how to make cross-shard transactions secure. If there were no cross-shard transactions, it is obvious you could just take a blockchain and split it into parts; as long as the parts never communicate at the application layer, at the transaction layer, the whole system would trivially work. But such interactions cannot be excluded, and the hard part of the protocol is showing that they can be made secure. Next generation of the protocol: Ouroboros Philos, philos meaning friend in Greek, which deals with sidechains. Sidechains are an important research direction; the idea is that different blockchains can support the movement of assets between them in a secure way. Right now we are doing very active research on what I call sidechains generation one, where you think of the main chain, which is Cardano, with the different sidechains supported by subsets of the stakeholders that support the main chain. Having sidechain capability is an extremely powerful idea because it naturally gives the system the ability to support, for instance, different scripting languages or different types of assets. You can contrast this with sidechains generation two, which is our next objective, where the sidechains are supported by completely independent sets of parties. 
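The essence of moving assets between two ledgers, whether shards or a main chain and a sidechain, can be sketched as a lock-and-redeem scheme. This is a toy model: the genuinely hard part the talk describes, namely proving to the destination ledger that the lock really happened on the source ledger, is simply assumed honest here.

```python
from itertools import count

# Toy two-ledger transfer: assets are locked (debited) on the source
# ledger, which emits a receipt; the receipt is redeemed exactly once
# on the destination.  Receipt verification across chains is assumed.

_receipt_ids = count()

class Ledger:
    def __init__(self, balances):
        self.balances = dict(balances)
        self.redeemed = set()

    def lock(self, sender, amount):
        """Debit the sender and emit a transfer receipt."""
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        return (next(_receipt_ids), amount)

    def redeem(self, receipt, recipient):
        """Credit the recipient; each receipt is spendable only once."""
        receipt_id, amount = receipt
        if receipt_id in self.redeemed:
            raise ValueError("receipt already redeemed (replay)")
        self.redeemed.add(receipt_id)
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
```

Even in this toy, two invariants matter: total value is conserved across the two ledgers, and a receipt cannot be replayed; the real protocols must additionally survive one side's security collapsing.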
The fundamentally different, difficult question here is that you would like to protect the other chains connected to a chain whose security has collapsed; you can call this the firewall property for sidechains. That is the main difficult security question. The next is that we would like sidechains to be safe even in their initial stage, when potentially very little stake supports them. These are questions we are addressing right now, and we do have a sidechain proposal that we will hopefully be able to implement relatively easily with our existing code base. A sidechain presentation is also planned, so if you are interested you should attend it. Post-quantum security: there is a lot of interest, and pressure I should say, to understand how to design systems that withstand a quantum attacker. Having a post-quantum digital signature is a necessity; just remember that Bitcoin's digital signature is not quantum-secure. A number of approaches to post-quantum security are being pursued right now; as a matter of fact NIST, the National Institute of Standards and Technology in the US, is running a competition as we speak to select a suite of post-quantum-secure cryptographic primitives, including digital signatures. Unfortunately, the research that needs to be invested here goes beyond adopting a digital signature. The reason is that achieving post-quantum security is more complicated than simply replacing the digital signature functionality with a post-quantum one: just dropping in a post-quantum signature will not be enough, because it will not capture everything a quantum adversary might do. 
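As a concrete taste of hash-based, post-quantum-candidate signatures, here is a minimal Lamport one-time signature over a 256-bit message digest. This is the classic textbook construction, shown for illustration; it is not the scheme IOHK is evaluating, and a real deployment needs many-time schemes and far more care. Its security rests only on the hash function, which is why hash-based schemes are post-quantum candidates, and each key pair must sign at most one message.

```python
import hashlib
import secrets

# Minimal Lamport one-time signature (textbook construction, for
# illustration only).  Sign at most ONE message per key pair.

def H(data):
    return hashlib.sha256(data).digest()

def keygen():
    # Two random 32-byte preimages per message bit; pk holds their hashes.
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(sk[i][b]) for b in range(2)] for i in range(256)]
    return sk, pk

def message_bits(msg):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one preimage per bit of the message digest.
    return [sk[i][bit] for i, bit in enumerate(message_bits(msg))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][bit]
               for i, bit in enumerate(message_bits(msg)))
```

Note the costs that make this non-trivial to adopt in a transaction layer: keys and signatures are kilobytes rather than tens of bytes, and statefulness (one-time use) interacts badly with how blockchain keys are reused.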
So this is another active research thread: understanding the threat model against a quantum adversary. And I should say that right now we are in the process of starting a collaboration with one of the teams that submitted a NIST proposal for a post-quantum digital signature, so that we can spec the requirements of our transaction layer and then engage in research leading eventually to a post-quantum version of Ouroboros and Cardano. Multi-asset support: right now all our analysis assumes a single asset, the home asset, let's say, of the ecosystem, but the next level is to issue and manage new assets: compatibility with the ERC-20 token specification, but we should go beyond that. What other types of assets should we support? Assets here could go beyond anything currency-like; an identity, for example, is a very different type of asset with its own features, and it is important to understand exactly which properties we need for it. So that gives you a picture of how the Ouroboros protocol is going to evolve. Where we are right now is the Byron release of Cardano with the implementation of Ouroboros, and that is the evolution of the settlement layer. The Shelley release, together with incentives, brings us to the next stage, and from there a number of directions can be taken. Things like multi-sig or multi-assets can simply be added, but what you see in the middle is a number of different possible paths. The entry point is Ouroboros Thos, the simplest way to take the classic Ouroboros protocol and make it faster, and that will happen once we completely understand the security model I mentioned before, the one concerned with controlling grinding attacks. And once we are there, we have a number of other settlement-layer improvements to the protocol that can be applied along any path. 
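One way to picture multi-asset support is to generalise ledger values from a single number to a map from asset identifiers to quantities. The sketch below is illustrative (the asset names are made up and this is not a real token standard): value addition is per-asset, and a transaction balances only if every asset is preserved separately, with the fee paid in the native asset.

```python
# Sketch of multi-asset ledger values as {asset_id: quantity} maps
# (illustrative; not an actual token standard).

def add_values(a, b):
    total = dict(a)
    for asset, quantity in b.items():
        total[asset] = total.get(asset, 0) + quantity
    return {k: v for k, v in total.items() if v != 0}

def balanced(inputs, outputs, fee, native_asset="ada"):
    """Inputs must equal outputs plus the fee, asset by asset."""
    return add_values(inputs, {}) == add_values(outputs, {native_asset: fee})
```

Per-asset conservation is what rules out a transaction silently destroying or minting a token; assets such as identities would need additional rules (e.g. non-fungibility) on top of this arithmetic.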
We have to find the best path. That is an important discussion from both a research and a development point of view, because some of these paths are going to be, let's say, more painful than others. An important discussion between the R and D sides of IOHK is to figure out the best possible way to roll new features into the protocol as we try to make it better. And I should say that all of this comes before the contract layer and the other privacy enhancements that can arrive with subsequent releases; in other words, the contract layer can sit at any of the nodes of that graph. So that is the approach as we have practiced it so far, but we also have to look forward and ask what comes next. Something quite important is high-assurance implementations. The current state is that the implementation matches the protocol pseudocode on a best-effort basis. We have a peer-reviewed, published paper containing the protocol in the form of pseudocode, and then we have a software implementation, and whether the two match is essentially best effort: a number of people from the research and development parts of the company sit down, interact over Slack, and try to make sure the code matches the pseudocode. But of course it is just best effort, and it has to be done in a principled way. High assurance will bridge the gap between pseudocode and implementation in a verified fashion: the pseudocode itself should become a human-verifiable formal specification, and the implementation should then provably provide the same functionality. Developing the methodology for doing this is itself a challenging research question in which we are currently very actively engaged. 
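The gap being described can be made concrete with a toy pair: an executable "pseudocode" specification of best-chain selection next to an optimised implementation, cross-checked on random inputs. Random cross-checking like this is exactly the best-effort baseline; the high-assurance programme replaces it with a machine-checked proof that the two always agree. The chain-selection rule here is a simplified longest-chain example, not the actual Ouroboros rule.

```python
import random

# Executable "spec" vs. "implementation" of a toy best-chain rule,
# with randomised equivalence checking as the best-effort baseline.

def spec_best_chain(chains):
    """Specification: pick the longest chain; the first one wins ties."""
    best = chains[0]
    for chain in chains[1:]:
        if len(chain) > len(best):
            best = chain
    return best

def impl_best_chain(chains):
    """Implementation: same rule via max, which also keeps the first
    maximal element on ties, matching the spec."""
    return max(chains, key=len)

def cross_check(trials=200, seed=42):
    rng = random.Random(seed)
    for _ in range(trials):
        chains = [[rng.randrange(100) for _ in range(rng.randrange(1, 8))]
                  for _ in range(rng.randrange(1, 6))]
        assert spec_best_chain(chains) == impl_best_chain(chains)
    return True
```

The point of the refinement methodology is that a subtle divergence, say a different tie-breaking rule in the optimised code, would be caught by proof rather than by luck in the random inputs.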
And this is only one aspect, because making sure the pseudocode and the implementation do the same thing is only one dimension. The next is that the pseudocode, as published in the paper, is associated with a number of theorems that argue its security, and now the question is: are these theorems correct? So another dimension of verification is to formally verify these theorems, these properties that we claim; right now they are pen-and-paper proofs. A high standard for defining security is simulation-based security, and I will give you a little glimpse of what it is. You have an ideal functionality, which presumably describes your objective, and you have the protocol pseudocode, which supposedly implements that ideal functionality. What we would like is that, in the presence of an adversary, it is possible to take any attack that can happen against the protocol implementation, or the pseudocode, and simulate it as an attack against the ideal functionality. The main concept in simulation-based security is exactly this: essentially any attack against the protocol can be simulated into an attack against the ideal functionality, and because the ideal functionality is ideal, no really bad attack can happen against it. This is the pinnacle of cryptographic security right now, at the level of universal composition, and it boils down to the statement on the slide. 
And I'll just read that statement for you: for any adversary there is a simulator such that, for all environments in which the implementation may run, the random variable describing the execution of the protocol in that environment against that adversary is indistinguishable from the execution of the ideal functionality, in the same environment, together with the simulator. Now, this is a mouthful, but it is the pinnacle of defining security in modern cryptography. And what is fascinating is that this is a pen-and-paper type of definition; the vision I have for formal verification of these properties is to provide a methodology and a way to formally verify such proofs. So here is what will happen: we have the ideal functionality, which is the ledger, and we have the implementation, which is our protocol, in the form of a formal spec that we believe captures exactly what we intend to do, and we would like to establish that connection. Then we have the implementation, at a certain level of abstraction, as a formal spec, and the actual thing that is being executed; we would like to make a connection there too, which will make sure that whatever you see formally specified on paper is actually what runs inside people's computers. That has to be done via a successive refinement process, where each step becomes syntactically closer to the code and is formally verified equivalent to the previous one. At the same time this is not enough: we also have to show that the implementation matches the ideal functionality, and the nice thing is that the universal composition framework enables a successive refinement step there as well. 
Now, I understand that most of you will not be familiar with these concepts, but the takeaway I would like to leave you with is that there is now a body of literature that lays out how to properly define security, and what is missing is a principled approach to formally verifying these security properties. It is possible to do this at that level, and that is an ambitious research vision to which I hope IOHK Research will contribute. What is happening here, in fact, is a nice observation: this concept of successive refinement can play a role not only in formally verifying that the pseudocode matches what is running, but in the security proof itself, which can be achieved via a kind of successive refinement. This is an observation I hope we will be able to exploit, and I am very happy to discuss it further with those of you who would like to know more. All right, I am moving on faster because there is a lot of material. Another dimension is the update system. For any distributed piece of software, performing software updates is challenging even when you have full control of the code base; it is of course a nightmare when the system is deployed in a distributed fashion, and we have seen manifestations of that problem in the hard forks of Bitcoin and other cryptocurrencies. Being able to handle software updates in a decentralized fashion is extremely important, and when you have a partially updated system you can ask what its security properties are. 
Can we see updates as an extension of our sidechain system? That is a concept we are trying to understand. The connection between updates and sidechains is this: remember that sidechains are extensions of the main chain, but taking the point further, you can think of a sidechain gradually becoming more important than the main chain, until perhaps the sidechain takes over and the main chain dies; the sidechain then becomes the main chain, with sidechains of its own. This is a way of thinking about transitions that could be useful, and something we are actively considering. The treasury system is the ability of the system to govern itself, and there are many interesting questions there about deliberation and participatory budgeting, pointing to past research on what is in some cases called liquid or representative democracy. The thread of research we are pursuing there centers on privacy, coercion resistance, and verifiability of the outcome. How is that possible? There is going to be a presentation about the treasury, so if this type of governance is something you are interested in, you should attend. As I mentioned, we also do proof-of-work research, and this is related to sidechains; it is very important for Ethereum Classic, and it is also important for interfacing with other blockchains, which matters for Cardano as well. The main thrust of this work right now is pegging between proof-of-work-based ledgers; an interesting pegging example, for instance, is being able to move assets from Bitcoin to Ethereum Classic. And there are other interesting examples of using this construct, which we call non-interactive proofs of proof-of-work, to create these connections between blockchains. 
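The participatory-budgeting flavour of the treasury can be sketched as a stake-weighted ballot. This toy omits exactly the hard properties just mentioned (ballot privacy, coercion resistance, verifiability) and uses a made-up funding rule: proposals are funded in order of stake-weighted approval until the budget runs out.

```python
# Toy stake-weighted treasury ballot (illustrative funding rule only;
# the real design must add privacy, coercion resistance, and
# verifiability on top).

def fund_proposals(budget, proposals, votes, stake):
    """proposals: {name: cost}; votes: {holder: set of approved names};
    stake: {holder: stake weight}.  Returns the funded proposal names."""
    support = {name: sum(stake[h] for h, approved in votes.items()
                         if name in approved)
               for name in proposals}
    funded = []
    for name in sorted(proposals, key=lambda n: -support[n]):
        if proposals[name] <= budget:
            budget -= proposals[name]
            funded.append(name)
    return funded
```

Even this crude rule shows why deliberation and vote privacy matter: with public, plutocratic ballots, large stakeholders both decide and are seen to decide the budget.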
On the contract layer I am not going to say much, as Phil covered it in the previous talk, but needless to say there are many interesting questions related to designing the scripting language and the smart-contract language, and ultimately many interesting research questions about developing these smart contracts, for example how to ensure that the programming intent is captured in the contract code. On wallet research, there are many interesting questions about how we secure our wallets. I will just point to some of them: searchable encryption, for instance, is a capability that enables you to discover the addresses controlled by the wallet as you process the blockchain. It is a capability we would like the user to have, for example when they have lost their wallet installation and would like to use their mnemonic key-word sequence to restart the wallet, or to enable the user to have multiple devices. Doing all of that while preserving usability is a very interesting research theme.
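The recovery scenario just described can be sketched concretely: a deterministic wallet rederives its addresses from the mnemonic-derived seed and scans the chain for them, stopping after a run of unused addresses (a "gap limit"). The derivation function below is a toy placeholder, not real BIP-32 key derivation, and the seed string is illustrative.

```python
import hashlib

def derive_address(seed, index):
    """Toy deterministic derivation (real wallets derive keys via BIP-32 trees)."""
    return hashlib.sha256(f"{seed}/{index}".encode()).hexdigest()[:16]

def recover_addresses(seed, used_on_chain, gap_limit=20):
    """Rederive addresses in order; stop after gap_limit consecutive unused ones."""
    found, index, gap = [], 0, 0
    while gap < gap_limit:
        addr = derive_address(seed, index)
        if addr in used_on_chain:
            found.append(addr)
            gap = 0
        else:
            gap += 1
        index += 1
    return found

# Simulate a chain that has seen the wallet's first three addresses.
seed = "example mnemonic phrase"   # illustrative; real seeds come from BIP-39 words
chain_addrs = {derive_address(seed, i) for i in range(3)}
recovered = recover_addresses(seed, chain_addrs)
assert recovered == [derive_address(seed, i) for i in range(3)]
```

The research question mentioned in the talk is doing this scan privately and efficiently, e.g. via searchable encryption, so a server can help without learning which addresses belong to the wallet; the sketch shows only the plain, non-private baseline.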
There are other questions about wallets which I will only mention briefly: paper wallets, and how to make a paper wallet secure, that is, what is the right security model for a paper wallet; and what is the security model for a threshold wallet. A threshold wallet is a wallet in which the secret key itself is never stored in a single place; it is threshold-shared between different devices or shareholders, and of course once you have this, you have issues of proper synchronization, all with just basic blockchain support. Applications of the blockchain are the next theme: developing MPC applications that are ready to be deployed. MPC is secure multi-party computation, and it is the main secure-protocol construct we can use to build applications on top of a blockchain. There is very active research on that. The first application is poker and gambling games: there is a protocol for poker, called Kaleidoscope, which is going to appear in Financial Cryptography 2018, and a protocol for general card games, called Royale. Fully understanding the security of these protocols is an important research direction and our natural next step. Everything I have told you so far has not even touched the concept of privacy. The current status is that our system is susceptible, at least at the transaction layer, to the same privacy attacks you have in Bitcoin: you can cluster transactions and perform deanonymization on them. So there are pressing questions about how we compare with privacy-preserving ledgers like Zcash, Monero, and so forth. And that is only the first level, because it concerns transactions; privacy-preserving smart contracts are the natural next question to ask. Ultimately, given that there are so many directions to go and the complexity of the protocols is so high, the question to be answered is: what types of privacy are required for the intended applications?
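The threshold-wallet idea rests on classical secret sharing: the key is split so that any t of n shares reconstruct it, while fewer reveal nothing. Below is a minimal Shamir-sharing sketch over a prime field, for illustration only; production threshold wallets use threshold *signing* over the curve's group order so the key is never reassembled at all.

```python
import random

P = 2**127 - 1  # a Mersenne prime; real schemes use the signature curve's group order

def share_secret(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it (Shamir)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = random.randrange(P)
shares = share_secret(key, t=2, n=3)   # a 2-of-3 threshold wallet key
assert reconstruct(shares[:2]) == key  # any two shares suffice
assert reconstruct(shares[1:]) == key
```

The synchronization issue raised in the talk shows up exactly here: the shareholding devices must agree on which transactions to co-sign and keep their views of the chain consistent.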
The next question is hardware-assisted ledgers. Blockchain protocols are heavy and slow at transaction processing; as fast as we are going to make it, throughput still remains quite low, and the question is whether it is possible to reach tens or even hundreds of thousands of transactions per second. For some applications this is important. An interesting direction here, which we are also pursuing, is exploiting trusted execution environments, for example Intel SGX or ARM TrustZone, to secure off-chain computation. This idea can also expand from basic payment channels to what are called state channels, which enable a set of participants to fast-track the execution of a smart contract. What happens is that the computation is split: the blockchain is used only for settling the state of the smart contract at regular intervals, while all the specific transactions are dealt with off-chain. There are ongoing negotiation efforts to bring in industry partners and explore together whether it is possible to extend our basic protocol with hardware support and thus achieve transaction acceleration into the tens or hundreds of thousands of transactions per second, as this would be important for a number of applications. A thread we have not started yet, but which is going to come up, is an enterprise version of our basic protocol. Cardano right now is an open ledger, but enterprise Cardano would basically be an alternative to Hyperledger Fabric: a system providing a permissioned ledger, and not a cryptocurrency. The idea is that stake would represent the relative degree of responsibility that each entity has in the system; it would be possible to run it over a fixed set of servers and enforce complex access-control policies on read and write operations. I should say that this is an alternative to Hyperledger Fabric because it uses a different logic of consensus, one that is consistent with
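The split between off-chain updates and periodic on-chain settlement can be sketched in a few lines. The toy channel below elides everything that makes real state channels secure (signatures on each state, dispute windows, timeouts); it only shows why throughput improves: many updates, one settlement transaction.

```python
class StateChannel:
    """Toy two-party channel: state updates happen off-chain; only the
    latest (highest-sequence) state is settled on the chain."""
    def __init__(self, balances):
        self.seq = 0
        self.balances = dict(balances)  # off-chain state

    def transfer(self, frm, to, amount):
        # In a real channel, both parties would sign (seq, balances) here.
        assert self.balances[frm] >= amount
        self.balances[frm] -= amount
        self.balances[to] += amount
        self.seq += 1

    def settle(self, chain):
        # One on-chain transaction settles arbitrarily many off-chain updates.
        chain.append({"seq": self.seq, "balances": dict(self.balances)})

chain = []                               # toy on-chain ledger
ch = StateChannel({"alice": 1000, "bob": 0})
for _ in range(1000):                    # a thousand micro-payments, all off-chain
    ch.transfer("alice", "bob", 1)
ch.settle(chain)                         # a single settlement transaction
assert len(chain) == 1 and ch.seq == 1000
```

The hardware angle in the talk is that a trusted execution environment can attest to the channel's state transitions, reducing the trust the parties need in each other between settlements.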
Ouroboros and Bitcoin itself. And this brings me to exploring the whole design space of protocols. There are many interesting directions to look at, for example hybridizing with more classical Byzantine agreement protocols; there are many interesting protocols that would be worth considering in our context, some of which are mentioned here. Another direction is exploring tree and DAG structures in place of the blockchain structure. A blockchain is just a chain, and there is a lot of exploration now by the community trying to understand whether there are advantages in using a different data structure as the basic state of the entities running the protocol; there are a number of offerings some of you might be familiar with, such as SPECTRE, Hashgraph, and IOTA. All of these are currently under consideration, and we are exploring them, or will be exploring them in the future, to see what their merits are. Looking ahead, the plan is to expand. We have so many interesting research questions, and there is essentially no area of computer science that is not taxed by the work we do. Some immediate needs: programming languages, legal research, networking, hardware, economics and game theory, and formal verification. These are important areas in which we are currently understaffed and would like to do more. Expanding also means participating: in committees, in research, in conferences, and in research-grant competitions. Our first success as IOHK Research is the project PRIViLEDGE, a Horizon 2020 EU consortium, which is the first EU project of IOHK and perhaps the first core-blockchain project running in Europe. There are other projects that touch on the blockchain theme, but this is a core blockchain project. Our partners include IBM, Guardtime, GRNET, Smartmatic, and Cybernetica, and there is a very ambitious research
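The chain-versus-DAG contrast mentioned above comes down to ordering: a chain is totally ordered by construction, while a block DAG (where a block may reference several parents) must be linearized before transactions can be applied. A minimal sketch of that linearization, using a plain topological sort on an invented five-block DAG; real DAG protocols such as SPECTRE use far more sophisticated, adversary-resistant orderings.

```python
from collections import defaultdict, deque

def topological_order(blocks):
    """Linearize a block DAG (block id -> list of parent ids) so every parent
    appears before its children, as any DAG ledger must before applying state."""
    children = defaultdict(list)
    indegree = {b: len(parents) for b, parents in blocks.items()}
    for b, parents in blocks.items():
        for p in parents:
            children[p].append(b)
    ready = deque(sorted(b for b, d in indegree.items() if d == 0))
    order = []
    while ready:
        b = ready.popleft()
        order.append(b)
        for c in sorted(children[b]):   # deterministic tie-break for the demo
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    return order

# A chain is the special case of one parent per block; here the concurrent
# blocks "b" and "c" both extend "a", and "d" merges them.
dag = {"genesis": [], "a": ["genesis"], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
order = topological_order(dag)
assert order.index("a") < order.index("b") < order.index("d")
assert order.index("a") < order.index("c") < order.index("d")
```

The open research question is which ordering rule remains safe when some block producers are adversarial; a plain topological sort, as above, makes no such guarantee.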
agenda for developing blockchain technology over the three years of the project, running to 2020. And finally, the objective: to disseminate and publish more than any other blockchain company in the space. So, thanks. I know this was a lot, but as I was going over the slides I just kept adding, and I am pretty sure there are one or two things that I have not included and should have. Thanks for your attention; I will be delighted to discuss if you have questions or things you want to raise. There is an array of other presentations going into more depth on these topics, and it will be great to take the opportunity of this event here in Lisbon to create an even faster cycle between all the different members of IOHK. It is important for research to be tightly connected to development and the other parts of the company, because by itself it lacks direction: it needs the compass of the development side, and the other sides of the company, to solve the problems that are most important for us and to have the biggest impact that we can. Thank you.