Okay. Welcome, everyone. Good afternoon. My name is David Richards. I'm a product manager at a company called Digital Asset. Digital Asset is the creator of a software stack called Daml: a full stack for building distributed applications on blockchain, distributed ledger technologies, or even standard SQL databases, which lets you build POCs in days, not months. I'm here to talk about how you can solve data synchronization challenges with blockchain and smart contracts.

First of all, you can't really talk about blockchain without talking about Bitcoin. In his 2008 white paper, Satoshi Nakamoto first talks about trust: "We have proposed a system for electronic transactions without relying on trust." Now, there's a whole debate on the internet as to whether blockchain and Bitcoin actually eradicate trust, or merely shift your trust from a third party into network rules; you can go and research that. His second comment is about data synchronization: the system "is incomplete without a way to prevent double-spending." Obviously this is about Bitcoin specifically. If you transfer money and it transfers twice, you're not going to be happy; and if you sell something, expect money, and don't get it, you're not going to be happy either. So he talks about trust, and then he says we need to synchronize that data.

Let's put those two characteristics against two scenarios. In enterprise IT, we believe data synchronization is the more interesting challenge, rather than removing trust. In Bitcoin, the setting is often trustless: people trading Bitcoin don't necessarily trust the party at the other end. But in something like a bank-to-bank relationship, trust doesn't come from a network. Trust is already there in the relationship, probably based on regulations or economic incentives, so you don't need an intermediary or network rules to provide it. What's interesting is that both of these, Bitcoin and the banking relationship, need data synchronization. That is the more important challenge to solve.

So let's look at the problem: why do data synchronization challenges exist, and what causes them? The main problem is data silos. Data silos exist because parties create applications. And by "party" I mean anything from a complete organization, to a team within an organization, right down to the granular level of a single service within a service-oriented architecture. These silos exist because they need to process data: they consume data, process it, do something with it, and then make it available or pass it on. And they exist for good reason. Sometimes it's regulatory, sometimes it's organizational; sometimes, in a service-oriented architecture, it's to make the system more composable and easier to maintain, update, and extend. But data silos make things hard, specifically when it comes to advancing digital transformation. Digital transformation has been a buzzword for a very long time.
We know it's not just a technological challenge, but there are obviously technology changes we need to make. To be successful in digital transformation, you need three things. First, you need real-time access to trusted data. If your data is spread across different data silos, you have no trusted single source of truth, and it becomes very difficult to get the right data to the right place at the right time to make critical decisions. Second, you need to automate processes. If you have manual processes in there, if you don't have self-service, if you can't compete on speed and cost, you're going to find it very difficult to advance and to meet your industry and customer needs. Third, you need to adapt quickly: to market and industry changes, and to the demands of your customers, putting new applications in place and getting data out. If you have a complex data landscape, that is nigh-on impossible to achieve.

Now, let's look at a simple example: visiting a doctor at a hospital. Thanks to the NHS, most people in the UK don't actually see this; most of it happens under the covers. But in private healthcare, and often in the US and elsewhere, this is very common. You go to see a doctor. As a result, you visit a hospital; that's another party, and it holds medical records about you. Your visit incurs costs, so an insurance company gets involved. And all of those entities have banks. So for the simple fact of visiting a doctor, we now have seven parties involved that need to work together, share data, and transact. And that's a very simplified, high-level view. If we then blow up the insurance company, again simplified, there are three more parties inside it: policies, claims, and finance, for example. So for the very simple act of going to see a doctor, we now have ten parties involved that need to interact, transact, and share data.

Now, don't just take it from us; this problem is widely seen in the market. Harvard Business Review: the "systems don't talk" problem, which bedevils most companies and is anathema to digital transformation, leads to enormous process inefficiencies. Gartner: business units undertake data and analytics projects individually, which results in data silos and inconsistent processes. And Salesforce report the negative impact silos have on organizations: according to the American Management Association, 83% of executives say their organizations have silos, and 97% think they have a negative effect. So silos are out there, they have a negative effect, and we need to overcome them.

Given that silos exist, why is it so difficult to integrate data and keep things in sync? It's really about business consensus. If all parties had the same information and a true understanding of the current state, we would have no problem. And this is our naive mental view of how data integration works: two parties, each with their own data (the lighter colours on either side), and one part where they share data (the darker area). We assume that all of the right data is shared and integrated effectively, and we believe each party has a local view of a global truth. In reality, that's not true.
In reality, we know it often works like this: the two parties have different views of the data, and that data isn't synchronized efficiently at all. And why is that? Because, in the worst case, we have manual integration processes. We've all experienced it: we send emails with attachments such as PDFs or spreadsheets. Spreadsheets are used for everything, but they're usually not the systems where we want the data, where we do our processing, and where we make critical decisions. A level better, we integrate information after the fact: we might sync it with message queues, enterprise service buses, or APIs, trying to integrate those systems automatically, or we might use integration platforms. Last of all, we might shove all of the data into a data warehouse or a data lake, where we may suffer duplicated or even missing data, and have to analyze or integrate after the fact. None of these solutions deals with the problem at its core, where the data is actually created and processed. They all try to integrate data after the fact.

So, now that I've thoroughly depressed everybody by focusing on why data integration is such a challenge, let's look at the solution. First, let's put this in more concrete terms. This is a very high-level view of a typical application architecture: two parties, A on the left and B on the right, each with their own data store, and integration happening at the business logic level, often with something like messaging. This is what I worked in prior to joining Digital Asset: I worked for a large corporate in data integration, and specifically in messaging for part of that time. So I know firsthand that a lot can go wrong at this level. Data can be lost, messages can be lost, business logic can be incorrect, data can be misinterpreted. And if anything like that happens at any point in this process, application A and application B end up with different views of state. They have different opinions of things; we don't have business consensus. So they make different decisions, each based on what it believes to be a global truth, which actually isn't one.

What we propose is that you move that integration layer down to where the data is actually stored, at the database level. We call this a virtual shared system of record. Virtual, because the data may not actually exist in one place at one time; and shared, because it is as if applications A and B are reading from the same system. Now, you don't actually integrate with the database itself; you integrate with an API at that level, which then does the data synchronization for you. This API brings us close to our almost naive mental image of data synchronization. The data still physically sits at the different locations of applications A and B, and it is still controlled by the individual parties, but it is integrated at the core data level. And the really important part, the magic part, is the secure synchronization that happens underneath.
Any time two organizations or two parties share data, you also need to make sure it's secure. As I said when talking about other options for sharing data, there are plenty of solutions out there, like data lakes, warehouses, and integration platforms, but if you just do that, you've got a bit of a problem on your hands: when you're sharing data with other organizations, you need to make sure it's secure. So with a virtual shared system of record, there are four other things you need. First, the data is explicitly owned, so you can mark data as not shared: in these views, parties A and B each have exclusive data that they don't share, and then there's the data in the middle that they do share. Second, the data that is shared should be strongly permissioned: you want to make sure that only the parties you intend can access it, and to control who may change the data and who may only view it. If you can do those top two things, then, third, you gain high privacy for the data you're sharing. And lastly, fault tolerance: you want to make sure that the data you are sharing truly is shared, so that no single party can falsify the other party's data or change it maliciously without the other party agreeing, or even knowing. Now, the first three of these need to be mutually agreed with the other party. Fault tolerance every party wants anyway, but a malicious party won't necessarily agree to the rest, so for the first three you want agreement with the parties you're sharing the data with.

To do that, you need a kind of schema: a schema where parties can agree on the rules. We call this the Daml language. Okay, if you forget it, it's on my t-shirt. This schema language really takes care of those data permissions: the ownership and the privacy. Now, I'm usually against sharing code in forums like this. You don't have to read through it or understand it; I'm just using it to show how simple it is to write Daml code that creates applications with parties and permissions at the core. What's shown here is a template, and the template is core to what the Daml language is. Looked at like this, it's very similar to a class in object-oriented programming: at the top you have your properties, the data you're working with; then you have methods that operate on that data; you have parameters on those methods; and you have some business logic at the bottom. So it's very similar to what you'll be used to with Java classes.

And this is the benefit of Daml; this is where the strength really comes in. At the top, you have the data model, similar to a database schema: you can imagine the template as a table and its properties as the columns. Then you have authorization built in. Parties are built into the code, and you state who is a signatory of this data (who must sign off on this data changing), who is an observer (who can read this information or watch it as it happens), and who is a controller (who can actually manipulate this data and call methods on this template). And next, your methods become your API: that is the way you interact with your system of record, how you interact with the code.
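To make that concrete, here is a minimal sketch of what such a template can look like. The `Iou` template, its fields, and its `Transfer` choice are illustrative names of my own rather than the exact code from the slides, but the shape is exactly what I've just described: data at the top, parties and permissions built in, and a method that changes the data.

```daml
module Iou where

-- A minimal, illustrative template: an IOU from an issuer to an owner.
template Iou
  with
    issuer   : Party     -- the data model: fields are like table columns
    owner    : Party
    amount   : Decimal
    currency : Text
  where
    signatory issuer     -- who must sign off on this data changing
    observer owner       -- who may read this data

    -- A choice is a method on the template; choices become your API.
    choice Transfer : ContractId Iou
      with
        newOwner : Party -- a parameter of the method
      controller owner   -- who is allowed to call this method
      do
        -- Business logic: this contract is archived (choices are
        -- consuming by default) and recreated with the new owner.
        create this with owner = newOwner
```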
And lastly, you have composable actions. The actions in these methods are pure and deterministic, and they allow atomic composition of other actions. This is really where it becomes interesting. If you can make sure that only the people you want are acting on your data, that it is strongly permissioned, that you know who is doing what and have authorized them to do it, and if you can sequence the actions taken through those methods, those API calls, then we really call them contracts, not objects. And what you're doing here is effectively creating an event log: so-and-so took this action, and this happened. These actions can call other methods, and as long as you can sequence all of that, you can replay the event log to understand what has happened.

Here's a very simplified view of what such an event log might look like: a very simple money transfer application with four parties, the Federal Bank, the Bank of England, Tom, and Amy. As they transfer money, everything is recorded in the virtual shared event log, and Amy only sees the events that are relevant to her. We know that if we have Amy's log, Tom's log, the Federal Bank's log, and the Bank of England's log, then as long as they are ordered correctly, we can see exactly what happened and when. The permissions on it are strict: Amy can't see a transaction between Tom and the Bank of England, so it doesn't appear in her log. And if you can replay the events in a log, then you can reconstruct its state: you can work out the balances Amy holds with the Federal Bank or with the Bank of England, for example. This is really important; this is where the magic happens with the Daml ledger and the Daml ledger model, and it's what gives it its power. And I want to point out that all of this technology, the Daml ledger, is open source and available.
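To illustrate that event log, here is a short script using the hypothetical `Iou` template sketched above, written with Daml Script, the SDK's scripting and testing tool. The party names follow the talk's example; the template and choice names remain my own assumptions.

```daml
module TransferDemo where

import Daml.Script
import Iou

demo : Script ()
demo = do
  -- The four parties from the example.
  fed <- allocateParty "FederalBank"
  boe <- allocateParty "BankOfEngland"
  tom <- allocateParty "Tom"
  amy <- allocateParty "Amy"

  -- The Federal Bank issues an IOU to Amy. This event appears in the
  -- Fed's and Amy's logs, because they are stakeholders on the contract.
  iouCid <- submit fed do
    createCmd Iou with
      issuer = fed; owner = amy; amount = 100.0; currency = "USD"

  -- Amy transfers it to Tom. Amy, Tom, and the Fed see this event; the
  -- Bank of England does not, because it is not a stakeholder on it.
  _ <- submit amy do
    exerciseCmd iouCid Transfer with newOwner = tom

  pure ()
```

Replaying any party's view of these events, in order, reconstructs exactly the balances that party should see, which is the point of the event log.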
Now, this is where Daml starts to enable the virtual shared system of record I showed earlier, taking over that synchronization layer at the bottom. The local data stores become participants. Each participant has a local database holding its view of the world, and the applications interact with it through an API server; we call that API the Daml Ledger API. And because we need to sequence those events, something plugs in at the bottom: a consensus or synchronization layer that takes those API calls, those transactions over contracts, and puts them in order in a data store somewhere. This is where Daml drivers come in. A Daml driver takes those events, the contracts being created and exercised, and puts them on a consensus or synchronization layer. That layer is concerned only with ordering and validity: when transactions happen, it makes sure they go in the right order, so we can derive state and understand the order in which things happened. And Daml drivers abstract that consensus layer away, so we can use multiple different technologies under the covers to provide it.

We really advise you to choose between those technologies based on your non-functional requirements. It might be cryptographic features, or it might be speed. There are multiple options: cryptographic blockchains, distributed ledger technologies, or even standard databases. So it avoids locking you in to any one distributed ledger platform. We take the mantra of Java here: write your application once and run it anywhere. Because you have the same Ledger API regardless of which consensus layer you use, the Daml driver translates for you; it doesn't matter which one you pick.

And we don't stop at the Ledger API and the consensus layer. We also have an SDK and development tools to help you build your applications: tools for writing the Daml language I showed you, testing tools, and deployment and management tools as well. You can continue to develop your applications in any language you use today and interact with the Ledger API through these tools. And it's not a walled garden: you can pick and choose whichever of these tools you like, and they all have APIs to interact with. So, to summarise what I've talked about: we have Daml drivers, which abstract the consensus layer away, and Daml Connect, which helps you build these distributed ledger applications and smart contracts very quickly on the platform. We also have Daml Hub, which is effectively a managed-service version of this technology: it has the drivers built in, it has Daml Connect built in, and it gets you deploying your applications much, much quicker. As I said before, all of this is free and open source, and we have a very vibrant community where any question gets answered extremely quickly.

Now, I've spoken about how we get a golden source of truth, with data synchronised at the database layer, and how we can automate processes. Let me show you the third benefit: how you can adapt very quickly, because Daml is very composable. Suppose we have four applications built on Daml: two cash applications and two IOU applications. The IOU and cash applications know nothing about each other. People can transact cash between themselves, or IOUs between themselves, but they can't trade cash for IOUs. All we need to do is add a third application, a loan application, so that cash can be traded for IOUs. We don't need to change or alter the existing cash or IOU applications at all: they're on the Daml technology and already use it to transact. We just add a loan application that knows about the cash application and the IOU application. It needs to be given the right permissions and access to that data, but once it has that, it can transact between the IOU application and the cash application, while those two still know nothing about each other. The lenders and borrowers know nothing about each other either; the loan application sits in the middle and synchronizes that data for them, as in the sketch below.
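Here is a rough sketch of how such a loan or trade application can compose the two existing applications without modifying them. Everything here is an illustrative assumption: I'm reusing the hypothetical `Iou` module from earlier and assuming a `Cash` module that mirrors it with its own `Transfer` choice. The point is the atomic composition: the `Settle` choice exercises choices on both underlying contracts in a single transaction, so either both legs happen or neither does.

```daml
module Trade where

import qualified Iou
import qualified Cash  -- assumed to mirror Iou, with an owner and a Transfer choice

-- A proposal, created by the IOU holder, to swap an IOU for cash.
template Trade
  with
    iouOwner  : Party
    cashOwner : Party
    iouCid    : ContractId Iou.Iou
    cashCid   : ContractId Cash.Cash
  where
    signatory iouOwner
    observer cashOwner

    -- The cash holder accepts, and both transfers settle atomically.
    choice Settle : (ContractId Iou.Iou, ContractId Cash.Cash)
      controller cashOwner
      do
        -- The Iou and Cash applications know nothing about each other;
        -- this template composes their existing Transfer choices.
        newIou  <- exercise iouCid Iou.Transfer with newOwner = cashOwner
        newCash <- exercise cashCid Cash.Transfer with newOwner = iouOwner
        pure (newIou, newCash)
```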
That's really the benefit and the composability of the Daml ledger model: an application is granted access to the data, is given the right authorities, and can then transact between the others. Now, to this point I've only spoken about a handful of applications, but obviously if you can put two applications on that system, you can put on more. Once you have two parties on the same technology that can transact with each other, you get multi-party applications; and once you have multiple multi-party applications, you get a network of applications. This is really where the power of Daml comes in. If you have multiple applications sharing data that is explicitly shared and explicitly permissioned, then applications added to the platform can make use of that data and add more of their own, with everyone having access to the same thing, strongly permissioned and with explicitly stated ownership.

Coming back to those three challenges where data silos hold back digital transformation. First, Daml gives you a golden source of truth: no longer do we have separate views of a supposedly common truth that turn out to be completely different because of inefficient integration; we integrate at the database layer, where the data lives. Second, there is no manual reconciliation: the synchronization is done at the data layer, so we can eliminate manual processes, data warehouses, and any after-the-fact integration or reconciliation. And third, we have a composable system that new applications can simply be added to, making use of the existing data and integrating with existing applications. We're doing this today, largely in the financial sector, where we're obviously aware that security is extremely important, and we are adding value to those applications today. Our goal is to build a global economic network of seamlessly interconnected businesses.

I believe we're out of time. Yes, we're out of time. I'm happy to take questions afterwards, but if you'd like to get in contact, that's my email address; daml.com is the website, and we have an active, lively forum at discuss.daml.com. Thank you very much.