Hello everyone. Thank you for coming to our talk. My name is Nima Caviani and I'm here with my colleague Morgan Bauer. We're from IBM, and we're going to present the Blockhead Service Broker, which some of you probably got a sense of at the keynote today. Hopefully we'll expand on it and go into more technical detail in this talk, and then we'll show you a demo, which hopefully goes as well as the keynote demo did. So fingers crossed.

All right. So how do you code for blockchain? For those of you who are familiar with it, coding for blockchain requires a lot of knowledge about the blockchain network. You have to know how to bring up a network. You have to know how to create an account on it, or even import an existing account. You have to write, or already have, a smart contract. You have to deploy the smart contract to the blockchain. You have to get the ABI, the application binary interface, out of the deployment, plus some other information, and export all of that to a web3.js application, or a decentralized app, as they're called. Then you have a smart-contract application, and you deploy the app on something like Cloud Foundry, and you have to manage it and scale it. Or if you don't deploy it to Cloud Foundry, you have bigger problems, because then you really have to manage and scale it yourself. So the motivation we had for the broker was to simplify this entire process.
We wanted to do that by letting the platform, a platform like Cloud Foundry, do a lot of the labor for you and automate it, essentially hiding away all the complexity of dealing with a blockchain network when you're writing web3 applications. Application developers for blockchain can then assume that the blockchain network is already there, which is a big advantage, because we hide all of the complexity they would otherwise have to deal with. So we wanted to manage the contract life cycle: bringing up a node, creating an account address, deploying the contract. All of that we thought we could automate. Then possibly make it available for production use, so that if you want, you can connect to public blockchains, have a full ledger, submit transactions, and get your transactions verified. And we wanted to make it available for a variety of blockchain networks: Ethereum, Stellar-based networks, and permissioned networks like Hyperledger Fabric. The vision we ultimately have is that if, at some point, blockchain becomes the industry's de facto standard for these kinds of transactions, we want to make it as easy as possible for any developer to jump in and start building applications with blockchain support, with blockchain provided to the application as a data store. It's similar to how many developers don't think about databases when they start writing an application: they assume the technology is there and works, and if they want, they can create tables and run whatever transactions they like against the database.
So that's how the Blockhead broker was born. The way we envisioned it, you have a PaaS platform like Cloud Foundry or Kubernetes, and that's where you deploy your decentralized application. There you have a broker that communicates with a node management system, which takes care of managing your blockchain node, and a deployer that makes sure your contracts get deployed to the blockchain network. The way the broker works is that when you create the service, the platform, Cloud Foundry for example, talks to the container manager interface, and the container manager interface brings up a blockchain node. That container manager interface can talk to anything: a Docker engine, Docker Swarm, Kubernetes. All it does is bring up a blockchain node. In what we've implemented right now, the node is an Ethereum node, and it connects you to the Ethereum network. So then you have a node running as part of the bigger network. In the next step, when you bind the service the broker created to your application, you also provide it with a smart contract. The broker takes that smart contract, deploys it to the node it created in the first step, and passes all the information about how to communicate with that contract back to your application, so you can get the application binary interface and talk to your node directly. From that point on, you're hooked into the blockchain network, and you can do whatever blockchain application programming you want. So with those three simple steps, we've condensed something that required a lot of understanding of blockchain and a lot of up-front configuration into basically the three commands a service broker exposes to you: create service, bind service, and delete service.
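Those three lifecycle commands map onto HTTP endpoints defined by the Open Service Broker API. The following is a minimal, illustrative sketch of that mapping in Go; it is not the Blockhead source, and the action names are placeholders for "bring up a node", "deploy the contract", and "tear the node down":

```go
// Sketch of how the three broker commands route onto the Open
// Service Broker API endpoints. Illustrative only.
package main

import (
	"fmt"
	"strings"
)

// routeOSB maps an HTTP method and path to the broker action that
// would run: "provision" brings up a blockchain node, "bind" deploys
// the smart contract to it, "deprovision" tears the node down.
func routeOSB(method, path string) string {
	parts := strings.Split(strings.Trim(path, "/"), "/")
	if len(parts) < 3 || parts[0] != "v2" || parts[1] != "service_instances" {
		return "unknown"
	}
	switch {
	case method == "PUT" && len(parts) == 3:
		// cf create-service: parts[2] is the instance ID the platform generated
		return "provision"
	case method == "PUT" && len(parts) == 5 && parts[3] == "service_bindings":
		// cf bind-service: deploy the contract, return credentials
		return "bind"
	case method == "DELETE" && len(parts) == 3:
		// cf delete-service
		return "deprovision"
	}
	return "unknown"
}

func main() {
	fmt.Println(routeOSB("PUT", "/v2/service_instances/abc"))                     // provision
	fmt.Println(routeOSB("PUT", "/v2/service_instances/abc/service_bindings/b1")) // bind
	fmt.Println(routeOSB("DELETE", "/v2/service_instances/abc"))                  // deprovision
}
```

The instance and binding IDs in the paths come from the platform, which is what makes the stateless design described next possible.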
So we made some design decisions along the way. One of them was to make the broker stateless, in the sense that it doesn't keep any reference to the containers it brings up. The only reference is the instance ID that comes along with the create-service request to the broker. We use that ID as the name of the node we create through the container management service that Kubernetes, Docker, and the other container management systems provide. That way, when the next request comes in with the same ID and requires us to do something with that node, we already know which container to contact, and we don't need any database for the broker. That simplifies scalability: if your broker suddenly receives a lot of requests, you can just bring up two replicas of the broker instance, and since there's no database, no syncing is required. Every broker node simply works independently. The other thing we decided was that when the container for a blockchain node gets created, we expose two different addresses. One is the external address for accessing the container, the blockchain node that gets created, from outside. The other is the internal address, which you can refer to from inside the container management system. The external address is for when your decentralized application contacts the node and does things with it, for example submitting transactions. The internal address is used internally when we bind the service and push your contract into the blockchain node. By using the internal address, the requests that deploy your contracts don't need to go out and come back in.
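The stateless design can be sketched like this. The name prefix, credential field names, host, and port numbers below are illustrative assumptions rather than Blockhead's actual values (8545 is geth's conventional JSON-RPC port):

```go
// Sketch of stateless node naming and the two-address credential
// scheme. Names and fields are illustrative, not Blockhead's actual ones.
package main

import "fmt"

// nodeNameFor derives the container/pod name from the OSB instance ID,
// so any broker replica can locate the node without shared state.
func nodeNameFor(instanceID string) string {
	return "blockhead-node-" + instanceID
}

// Credentials carries both addresses: the external one for the dApp,
// the internal one for the broker's own contract deployment at bind time.
type Credentials struct {
	ExternalAddress string // reachable from outside the cluster
	InternalAddress string // cluster-local, traffic never leaves the cluster
}

func credentialsFor(instanceID, externalHost string, externalPort int) Credentials {
	name := nodeNameFor(instanceID)
	return Credentials{
		ExternalAddress: fmt.Sprintf("http://%s:%d", externalHost, externalPort),
		InternalAddress: fmt.Sprintf("http://%s:8545", name),
	}
}

func main() {
	c := credentialsFor("1f3a", "203.0.113.7", 30123)
	fmt.Println(c.ExternalAddress) // for the decentralized app
	fmt.Println(c.InternalAddress) // for contract deployment inside the cluster
}
```

Because the name is a pure function of the instance ID, two broker replicas handling requests for the same instance resolve to the same node with no coordination.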
So we significantly cut down the time required for contract deployment. We have two implementations; basically, there are two ways you can deploy the broker. One is through BOSH. We have a BOSH release of the Blockhead broker, available in the Cloud Foundry incubator, and you can deploy it as long as you have a BOSH Director. You can deploy it side by side with a Cloud Foundry deployment, then create a service broker referencing the Blockhead broker you deployed, and from that point on you can use all the Cloud Foundry commands to interact with the marketplace and with whatever services the broker offers. The other implementation is a Kubernetes implementation, where a request to create a blockchain node brings up a pod, and the pod has everything you need to connect that node to the main blockchain network. For the BOSH implementation, the container management system we used is Docker: on the VM we get a Docker engine, and whenever there's a request to create a service, we bring up a Docker container. The problem is that there are only so many containers you can run on a single VM, so scaling becomes a problem. That's why we consider it mostly for development purposes, especially because if you want to run a public blockchain node in a Docker container, you're going to have a lot of problems. The first issue you'll hit is that the ledger size for something like Ethereum is 600 gigabytes or more; I think by now it's probably around 700 gigabytes. With a 700-gigabyte volume mounted per container, you can't run very many Docker containers on one VM.
With Kubernetes, because you get a pod, if you're willing to spend the money you can actually get a blockchain node that is fully synced, and that's easier to manage. The good thing with the BOSH release, though, as I mentioned, is that you can deploy it side by side with a Cloud Foundry deployment, then easily register the service broker and make it available to Cloud Foundry. From that point on, there are just two or three commands you need to interact with the service broker, so it makes your life very easy. And once you bind the service, you get all the environment variables describing how to interact with the smart contract in the VCAP_SERVICES environment variable of your application. So all your application, the decentralized web3.js application, needs to do is use those environment variables to talk to the contract. Very easy to deal with.

The next thing is the Kubernetes deployment, and Morgan is the expert on the Kubernetes deployment, so I'm going to pass it to him.

Yeah, I've got the computer; I don't think I need the clicker. So, Kubernetes. I'm not sure how familiar everybody here is with the details of the object model in Kubernetes, so I've got a couple of quick slides on that and on the specific objects we're using to deploy our broker; I'll give you some high-level details and some more specific details. We've taken the broker, the same Go program as in the BOSH release, put it into Kubernetes, and exposed the ports it surfaces for broker communication. When the broker provisions a blockchain node, we've done the same thing: we provide a Service, with a service port for accessing the blockchain node, because the node surfaces a port to connect to.
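Reading those bound credentials out of VCAP_SERVICES could look roughly like the following. The service name ("blockhead") and the credential field names are assumptions for illustration; the real shape your broker returns is visible with `cf env <app>`:

```go
// Sketch of extracting a bound service's credentials from Cloud
// Foundry's VCAP_SERVICES environment variable. Field names are
// illustrative, not necessarily Blockhead's.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type boundService struct {
	Credentials struct {
		NodeURL         string          `json:"node_url"`
		ContractAddress string          `json:"contract_address"`
		ABI             json.RawMessage `json:"abi"`
	} `json:"credentials"`
}

// contractInfo pulls the first bound instance of the named service out
// of a VCAP_SERVICES JSON document (a map of service name to instances).
func contractInfo(vcap []byte, serviceName string) (boundService, error) {
	var services map[string][]boundService
	if err := json.Unmarshal(vcap, &services); err != nil {
		return boundService{}, err
	}
	instances := services[serviceName]
	if len(instances) == 0 {
		return boundService{}, fmt.Errorf("no bound instances of %q", serviceName)
	}
	return instances[0], nil
}

func main() {
	svc, err := contractInfo([]byte(os.Getenv("VCAP_SERVICES")), "blockhead")
	if err != nil {
		fmt.Println("not running under Cloud Foundry:", err)
		return
	}
	fmt.Println("node:", svc.Credentials.NodeURL, "contract:", svc.Credentials.ContractAddress)
}
```

The same parsing works in any language; a web3.js app would do the equivalent with `JSON.parse(process.env.VCAP_SERVICES)` and pass the node URL to its web3 provider.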
And because all of these things are exposed on the public-facing interfaces of the Kubernetes cluster, and this is the same Open Service Broker API that exists for Cloud Foundry service brokers, you can attach your CF instance to the service broker, create services, and bind them. And we have the nice external address we described earlier that lets everything be contactable. Of course, other platforms exist that speak the Service Broker API: the Kubernetes Service Catalog, SAP has one, Swisscom has one. There are probably more that I don't know of, but multiple implementations of the platform side exist, and this should be compatible with all of them, because it's straight Open Service Broker API. Earlier we talked about the internal address; same thing here. If you deploy not only your broker into Kubernetes but also the applications that use the blockchain nodes, your blockchain applications, your dApps, those are reachable too, again through the Kubernetes Service object concept. As for the specifics of how the broker runs: the broker is created as a Deployment, which creates the pod, and the pod is the single unit of running things in Kubernetes. We chose a Deployment because you can just say "give me ten replicas" and then you have ten replicas. And since everything passes the instance name straight through as the identifier to the underlying Kubernetes name, there isn't really any bottleneck in the broker; it just goes however fast Kubernetes can operate on its API. The Ethereum nodes, the blockchain nodes, run directly in a pod, because those are essentially unique things, and I'm not going to try to scale one instance of a blockchain node across multiple pods. That doesn't make any sense, so we don't need all the extra Deployment management logic there. Everything is exposed as a Kubernetes Service, and that's a Service in the Kubernetes sense, not a service-broker service.
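The "give me ten replicas" property of the broker Deployment can be pictured as a manifest like the one below. The names, image, and port are illustrative stand-ins (the actual broker is deployed from its own configuration); the point is that, because the broker is stateless, the replica count is the only scaling knob:

```yaml
# Illustrative broker Deployment: scaling is just a replica count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blockhead-broker
spec:
  replicas: 10                 # ten independent, stateless broker replicas
  selector:
    matchLabels:
      app: blockhead-broker
  template:
    metadata:
      labels:
        app: blockhead-broker
    spec:
      containers:
      - name: broker
        image: example/blockhead-broker:latest   # hypothetical image name
        ports:
        - containerPort: 3000                    # the broker's OSB API port (assumed)
```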
In Kubernetes, a Service is really just a proxy, but what it gives us is a stable reference point for accessing everything. You get an IP, you get a DNS entry inside the cluster, and then everything else that needs to reference the broker or the blockchain nodes has a stable point of reference; an Ingress can point directly at the Service and get routed to the appropriate pod. Another detail is that we chose NodePorts, which expose a port on every node in the cluster for each specific Service. I did this because configuring Ingress is unfortunately rather cloud-specific, and you can't just expect the load balancers to work with a basic configuration. It's usually some special GCP annotation, or an IKS annotation, or AWS; they all have their own specific Ingress configuration. So ease of use was the primary concern there, at the cost of some scalability, in that you have a limited number of ports for the whole cluster. Everything goes into a single namespace, again for ease of implementation at this point. The original client we used didn't have the concept of namespaces in it. I know how to fix that, but for right now everything just goes into the default namespace; it's easy. Again, no Ingress. We use the Go Kubernetes client, client-go, which loads everything from environment variables and uses the service account that is mounted into every pod. I've given the default service account extra permissions, but really the only permissions we need are create pod, create service, delete pod, and delete service. That's it. By using NodePorts we get automatic port mapping, so we don't have to worry about picking a port and making sure it's available; Kubernetes handles that for us, so that's good. So I'll go through the demo now. We're going to deploy a broker into one Kubernetes cluster, then switch to a different cluster that already has everything attached to it.
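The per-instance objects described above, one pod running the blockchain node plus one NodePort Service in front of it, can be sketched as manifests like these. The names, labels, image, and flags are illustrative (Blockhead creates the objects through client-go rather than YAML, and its node image may differ from the public geth image used here):

```yaml
# Illustrative pod for one provisioned blockchain node,
# named after the OSB instance ID.
apiVersion: v1
kind: Pod
metadata:
  name: blockhead-node-1f3a
  labels:
    app: blockhead-node-1f3a
spec:
  containers:
  - name: geth
    image: ethereum/client-go          # public geth image, as an example
    args: ["--dev", "--http", "--http.addr=0.0.0.0"]   # dev-mode flags for recent geth
    ports:
    - containerPort: 8545              # JSON-RPC
---
# NodePort Service: stable cluster DNS name (internal address) plus
# a cluster-wide port on every node (external address).
apiVersion: v1
kind: Service
metadata:
  name: blockhead-node-1f3a
spec:
  type: NodePort
  selector:
    app: blockhead-node-1f3a
  ports:
  - port: 8545                         # internal: blockhead-node-1f3a:8545
    targetPort: 8545
    # nodePort left unset so Kubernetes allocates a free one automatically
```

Leaving `nodePort` unset is what gives the automatic port mapping Morgan mentions: Kubernetes picks an unused port from its NodePort range for each Service.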
We're going to run the demo that we ran earlier in the keynote, which is basically: go to CFEE, provision, bind, open up the smart contract, and if we have time I'll try to open up Remix. Okay. So, sorry. I can't really do much about the screen, but on the left, hopefully, we have the Kubernetes side of things that I'm going to deal with, and on the right we'll have... we're not actually at this one yet. Okay, this is going to be the deployment. You can see right now we've got nothing deployed. Great. I've got a YAML which has the broker's configuration. I'll pull that up, but it should be very simple: everything should go up there and create all the objects. I can show you the deployment; you can see it's creating the broker. The broker's running. That's basically all there is to see. And you can see everything's configured in a Kubernetes way: we have a ConfigMap that holds our configuration, with our very secure username and password. The one thing we added here is that you have to give it the external name you want it to be known by, because we figure if it's your Kubernetes cluster, you own it, you know where it is, and you know how to set that; we can't look it up any other way. Let me see if this works. Let's Ctrl-C... what did I do? You jumped out of tmux. That's a standard keyboard. So I made a wish earlier today that hopefully this goes as well as the earlier demo; maybe that didn't come true. Yeah, so I think that is all right. Then we're back in here, but we don't have any of the exports. Okay. Cluster config, let's go straight to that. Copy and paste. I just hit that, and it exited everything. Yeah, I don't know what happened.
So I think we're getting the cluster information from SoftLayer, just trying to connect to the cluster, and then we can... I mean, if you do Ctrl-C and then Ctrl-B... I hit that, and it exited me last time. Okay. cf apps; we should have the... how do I configure this? cf apps, just to see... oh, okay, good. So these are the things that we ran earlier. Vote was the one from the keynote; we have two new ones that are running, and k-vote is the one we're going to use. And cf services: we do that, we get the services. Okay. So now we get it. Not that one. Where'd it go? Oh, okay. We need to create a service. Is it not bound? Maybe rename it. All right, we're going to make a new one, and then I'll show you. So that created the pod, so we should see a new pod. Okay. That's the pod name, and lucky for us it starts with a letter, so I won't have to deal with that. Let me split the screen so that you can see. Anyway, so we have the service provisioned. Now we need to bind the service to the app. Should I use k-vote? Sure; you probably need to push a new app and then bind it. So we can do cf push, a new one, with no-start. So push the existing contract app; it should already be up there, so I guess it should be in the cache.

While Morgan is doing that, let me give you a quick overview of what's happening here. We showed you that we created a broker, and we created it through the Kubernetes deploy command. So we have the broker, and now we've bound that broker to a Cloud Foundry deployment which is running somewhere else. And that's the important thing: you can run the broker one way.
And as long as it's hooked to a Kubernetes cluster or a Docker engine, and as long as that broker is publicly accessible, you can bind it to a Cloud Foundry deployment or a Kubernetes deployment. From that point on, when you create a service, which is what Morgan showed, because in this case it's a Kubernetes deployment of the broker, it creates a pod, and that pod is the node connected to the blockchain network. Here, what we created was an Ethereum network in developer mode, hosted in the pod. What we're trying to do now is bind the contract to that dev Ethereum node that is running. So earlier, when he showed the list of pods and the new pod popped up, that was the Ethereum pod running in dev mode.

Right. So you can see the pods. And if we do... what is it, unbind-service? Too fast. And I need all this other stuff right here. If we look at the logs, we should see a new log entry. The last thing we saw was right there, at 54, and now it's at 58, and you can see it deployed the contract: created it, sealed it, and we're ready to go. So let's leave that running and then we can hit the app and show it. So: cf start. Now it should pick up the binding, which is the contact information for the node that is running in Kubernetes in IKS, and then it's going to do a bunch of staging work. I pressed one button and I destroyed your entire tmux; it's never going to be the same. You can see it's pulling all the staging bits; since it's a Node.js app, there's actually quite a bit of work involved in staging the application.
And if you've done any development or deployment with web3.js, one thing you'll have noticed is that web3.js is actually a huge library. It has a lot of dependencies, even Python dependencies. So that's a problem with deploying web3.js apps: staging the app takes quite a long time, because it needs to fetch all the dependencies and run the whole npm install. But it looks like, after all the hiccups we've had with the demo, we bound the service, we started the application, and here is a version of the app similar to the one we demoed. The interesting thing that we could show you here... looks like we can't show you here. Come on, let's try again. That it actually writes to the pod, if it writes to the pod, which remains to be seen. So I apologize for the demo; we tested everything five minutes before, but that's how it goes with live demos. It's absolutely real, everything is real, and that's what you get when you do things live. Anyhow. Apologies for the hiccups, but if you have any questions... We accomplished what we accomplished; we did rewrite a simplified version from Ruby to Go. Please go visit all the stuff.

Great work, nice stuff. My question is: the open broker is cool, but I'm more interested in the full nodes in Kubernetes. Developing some blockchain applications, I've seen that whether a full node counts as healthy depends on the application. For example, if your application is sensitive to having the latest block or not: did you look at how you can detect that, or balance across your pool of full nodes in Kubernetes? This full node is up to date, that one is 30 minutes behind, or whatever the network conditions are. Did you think about this? Do you think it's a problem? I'd like your opinion. Yeah, for public blockchains.
Okay, so just a follow-up question, my last one: assuming you do all this snapshotting and so on, if you have a lot of customers you're going to need a lot of nodes, and you cannot point them all at the same chain; you have to separate them. What I was getting at before is that once you have a pool and you do some balancing, because you have to do some balancing, you also have to consider that, for example, this node here is behind by 30 minutes, or 100 blocks, or 1,000 blocks, whatever, because it's heterogeneous; it depends on the network conditions, on the consensus, and so on. So my question is: did you consider this? Do you plan to? Do you think this is important and relevant, or not?

Switch your microphone on, because it's being recorded. Sorry, here we go, it's on now. So, to answer your question, we haven't done anything about it just yet. We know that having an up-to-date ledger is critical to a lot of the blockchain applications out there, but no, we don't have anything yet.

Chris has notes. Notes, wow. I don't know if this was just the CF implementation, but you mentioned the instance ID almost having to remain unique. Is that the case? Well, that's one thing the service broker assumes, right: the instance ID is supposed to be unique for the services that you create; that's basically part of the spec. The instance ID is tracked by the platform. It's a platform value: it comes from the platform, the platform generates it and tracks it. We don't care what it is. Oh, so you don't rely on it being stable throughout? I thought that's what the slide was saying. It's the same instance; that's the instance ID. What do you mean by change? So in a case where Kubernetes reschedules somewhere else, would the instance ID change? No, it's got a pod name; the pod is going to have a specific name.
Basically, similar to his question: it means that if that pod crashes or something, you have to reschedule. And my question is, when you reschedule to another pod, what is the health algorithm? It shouldn't just be round-robin; for blockchain that's not enough. Oh, I see: when the service gets a different instance ID, what do you do, what does your application do? Is that your question? If a pod gets descheduled and comes back, how does it know? We don't do any health checks; we don't do anything about that just now. So maybe you can just use labels, right? Yes, and that way you can just discover it. The only other option is that you add a database and have a way to correlate. I like the idea of labels. Yes. But that's a good point, thank you. Any other questions? Nobody?