All right, hello everyone. I'm Shivay, and I'll just quickly cross-check that everything is fine... all right, I think we are good to go, it's our time. So welcome, everyone, to Hyperledger Global Forum. It's a virtual experience this time; unfortunately, because of the global pandemic we have not been able to hold the live in-person event, and I'm really looking forward to next year and hopefully being able to actually meet everyone at the Hyperledger Global Forum. For everyone who is joining this virtual experience for the first time, definitely do check out what exactly the Hyperledger projects are. If you are interested in blockchains and blockchain technology, or in Solidity or related topics, Hyperledger is one of the most brilliant areas to have a look at. Today we see blockchain and the Hyperledger projects used almost everywhere, including crypto, but of course blockchain is not just limited to crypto; it's being used across a lot of different technologies, and today we are going to be speaking about one such topic, and that is Hyperledger Umbra, specifically the scaling of experiments and how scaling of experiments within Hyperledger Umbra takes place. A very quick introduction about myself as well: I'm Shivay, currently a production engineer and open source engineer at Layer5, which is part of the CNCF landscape, and I'm also a Linux Foundation mentorship mentor, so an LFX mentor, and I've been mentoring a couple of different projects, including Meshery, which is part of the CNCF, as well as other projects that are part of the Linux Foundation. I've also been deeply involved with a lot of different blockchain-related projects, contributing to a lot of different open-source Hyperledger programs, and I've been an intern at Blocks Lab, working on blockchain-based systems.
So you can definitely connect with me if you have any questions regarding blockchain, or specifically regarding Hyperledger Umbra. Now, to give us all a quick background on Hyperledger Umbra: it started off as an internship project. As I mentioned, I've been a Linux Foundation mentorship mentor, and a lot of different Hyperledger internship opportunities come under the LFX mentorship program. Umbra actually started off as one of those mentorship programs back in 2018, where unfortunately, due to some issues with resources, it couldn't be continued at the time, but it was officially started in 2019 and carried forward in 2020 as well. Essentially it's an emulation platform for different types of Hyperledger blockchains, and as I've said, it's an ongoing project that provides a really great research tool for understanding how Hyperledger blockchains such as Fabric actually work. It also helps in understanding how they can be used in terms of, let's say, providing different consensus-based algorithms, and in understanding the scalability of different types of Hyperledger blockchain networks. So it's a platform that lets you compare different Hyperledger blockchain networks and see how they scale up against each other. If you are into comparing and doing research on different types of Hyperledger blockchains, I'll definitely recommend using Hyperledger Umbra. In a nutshell, it's an emulation platform for different types of blockchains, and it has support for Python 3.8 and above. Now, if we talk specifically about how the networking takes place, we are essentially using Mininet and a lot of different Docker containers.
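To make the container side a little more concrete, here is a hedged sketch, not Umbra's actual code, of how a tool like this might build a docker run command for one blockchain node. The image name, container name, and network name here are my own illustrations:

```python
# Illustrative sketch only: building (not executing) a `docker run`
# command for one containerized blockchain node. All names and flags
# here are assumptions, not Umbra's real configuration.
import shlex

def docker_run_cmd(name: str, image: str, network: str) -> list:
    """Build the argument list for launching one node's container."""
    cmd = f"docker run -d --name {name} --network {network} {image}"
    return shlex.split(cmd)

cmd = docker_run_cmd("peer0.org1", "hyperledger/fabric-peer:2.2", "umbra-net")
# subprocess.run(cmd) would actually start the container (requires Docker).
```

In a real run, each node of the emulated network would be launched this way and then wired into the Mininet topology.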
So whatever blockchain network you are using, it will run in a Dockerized container, and then we use gRPC and protocol buffers, along with things like Vagrant, VirtualBox, InfluxDB as the database, and Grafana for monitoring the scalability. Umbra combines all of this together under one package. You can definitely have a look at the lab: check out the Hyperledger Labs, and inside the labs you'll find Umbra, so you can try it out and perform experiments on whichever blockchain you have. As we move through the presentation, we'll also be speaking about the different types of experiments you can do, and we'll go over a few of those. So in a nutshell, Umbra encompasses all of these different features in any kind of Umbra-related experiment we are going to conduct. Now, under the hood, what's powering Umbra: it's written in Python 3.8, and the topology we define is a graph. If you're not aware of what a graph is, it's a data structure that has nodes and edges. So when we describe the entire network and the entire experiment, the topology in Umbra is essentially a graph with its nodes and edges, and each component within Umbra is a microservice. Again, if you're not aware of what a microservice is, it's essentially an evolution beyond monolithic architectures, where each service is its own unique entity.
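The topology-as-a-graph idea described above can be sketched like this. This is a minimal, illustrative model only; the class and attribute names are my own, not Umbra's actual schema:

```python
# Hedged sketch of "topology as a graph": nodes are services
# (peers, orderers, monitors) and edges are network links.
class Topology:
    def __init__(self):
        self.nodes = {}   # node name -> attributes (e.g. role, image)
        self.edges = []   # (src, dst, attributes) representing links

    def add_node(self, name, **attrs):
        self.nodes[name] = attrs

    def add_link(self, src, dst, **attrs):
        self.edges.append((src, dst, attrs))

topo = Topology()
topo.add_node("peer0", role="peer")
topo.add_node("orderer0", role="orderer")
topo.add_link("peer0", "orderer0", bandwidth_mbps=100, delay_ms=2)
```

Link attributes like bandwidth and delay are the kind of thing an emulator can hand to Mininet when it materializes the graph.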
So whether it's a microservice tied to the blockchain network or to something else, we are going to have different types of microservices, each component being a unique microservice within the Umbra landscape, with gRPC interfaces and protocol buffers used for each of these components. Based on these, whenever events are scheduled, we can have repetitions and defined intervals. The most important thing to keep in mind when we look under the hood of an Umbra-based experiment, say one on Hyperledger Fabric, is that in order to understand how the model works, we define the topology, and the topology is defined in terms of a graph where each component is a unique service with a unique function. That is what is under the hood for Fabric in Umbra. Now, to understand how the Umbra architecture works: it's defined under six different categories, and we'll go over them one by one. The first one we have is the Umbra design. The design itself is not a component; essentially it's an API, you could say, that allows users to compose experiments for Umbra. Since it's an API, it's designed so that we can have a main instance of any kind of blockchain project as a topology.
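The event scheduling just mentioned, with repetitions and intervals, can be sketched as follows. This is a hedged illustration: the field names (at, interval, repeat) and the Experiment shape are assumptions of mine, not the real Umbra design API:

```python
# Hedged sketch of composing an experiment: a topology name plus
# scheduled events. Field names are illustrative, not Umbra's API.
from dataclasses import dataclass, field

@dataclass
class Event:
    category: str           # e.g. "monitor", "chaincode-call"
    at: float               # seconds after experiment start
    interval: float = 0.0   # seconds between repetitions
    repeat: int = 1         # how many times the event fires

    def schedule(self):
        """Expand the event into its concrete firing times."""
        return [self.at + i * self.interval for i in range(self.repeat)]

@dataclass
class Experiment:
    topology: str
    events: list = field(default_factory=list)

exp = Experiment(topology="fabric")
exp.events.append(Event("monitor", at=5.0, interval=10.0, repeat=3))
print(exp.events[0].schedule())   # -> [5.0, 15.0, 25.0]
```

A broker-like component could then walk these firing times and dispatch each event to the right service.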
The topology could be a Fabric topology or an Iroha topology, along with, let's say, all the different events associated with that topology. Then we have the Umbra scenario. Unlike the Umbra design, it is a proper component, one that consumes the Mininet APIs we described earlier to interact with the topologies we have defined, whether those are containers or switches. Then we have the Umbra broker, one of the core components of Umbra, which relays messages between the different components. If we have multiple components defined in a topology, the broker coordinates the messages between them and the monitoring, for example to extract measurements or information about an environment, or to call a specific SDK that has been defined in a blockchain event. Next we have the Umbra monitor, the element that monitors the environment and all the topological nodes in it, extracts different kinds of metrics from them, parses those metrics, and pushes them into a database, using, say, InfluxDB, from where we can visualize them with Grafana. Finally, we also have the Umbra agent and the Umbra CLI. The Umbra agent is specifically designed to watch a topology for any anomalies happening inside the network, for example unusually high CPU usage or some kind of issue with traffic management. The Umbra CLI is the component that automates all the different Umbra experiments.

This is what we'll be talking about more when we describe how we scale up the experiments. The Umbra CLI is what you use to automate the experiments, do the installation, and make sure that all the different Umbra components have been properly installed. This is one of the most important pieces, so if you are trying to create an experiment of your own, I definitely recommend going through the Umbra CLI documentation to understand it.

Now, specifically, if we want to install Umbra on Ubuntu, these are the steps. The first step is to install only the git and make packages, because Umbra uses Makefiles, so we run sudo apt install git make. Because Umbra is written in Python, all the necessary Python packages are provided with the repository: if you go to the official GitHub repository, github.com/hyperledger-labs/umbra, and git clone it into your local environment, the required Python packages are already listed in the README and in the requirements.txt file. Then we run sudo make install. We also install Vagrant with a virtual machine provider: you can use either libvirt or VirtualBox, either one of the two options. Once this is done, we need to install the blockchain projects themselves. Currently Umbra supports both Fabric and Iroha, so you can run make install fabric or make install iroha, or both if you want to test both of them, and Umbra also does support Fabric 2.0, which we'll be talking about as well. Once you have installed them, you can get started with the rest of the setup. Those are the main installation steps, and of course, do make sure that you have Ubuntu 20.04, the latest version, and that Python 3.8 is installed on the system; those are the two main prerequisites before you get started installing Umbra.

Now let's go deeper into how the entire workflow works. Before we define the experiment itself, I just want to briefly share how the entire process starts. Of course, the Umbra design comes into the picture first: since it allows you to compose experiments for Umbra, that is the starting point. We have the Umbra design, then we define all our topologies and events, and then we move on to the Umbra CLI, which, as described earlier, is the component that starts off the experiments in Umbra and helps us with all the different steps, whether it's installation or initiation of the components. After we have defined our topologies and events, we start off with the Umbra CLI, which further calls on, say, the Umbra broker or the Umbra scenarios. We then define different Umbra scenarios. Since the scenario is used to interact between different instances, there will be multiple servers, and these servers are essentially called environments, and we will have multiple such environments. Each of these environments is monitored so we can see how the different metrics come out. You can picture it like this: we initially have one broker, this broker fans out into different scenarios, and each of them is monitored by its own Umbra monitor, one by one. Inside each Umbra scenario we have the different nodes of our network, which are also being monitored, and finally the monitoring sends everything across to the database, for which we essentially use InfluxDB, where we finally visualize it.

Based on this, once we have installed everything on our local system, we can start executing. The Umbra CLI will handle the whole lifecycle of the experiment. When we are designing the experiment, the environments define which particular components are going to be used, or rather which of these components are actually going to be executed. Whenever we have a designed experiment, we save it and load it using the Umbra CLI. The Umbra CLI is also responsible for executing the environments and all the different components we have, and it helps trigger the execution of the designed experiment. The broker will then perform the interactions with the other components that actually come into play when running the experiment, so those will be part of the overall experiment, and the Umbra broker is responsible for that.

Now, if we specifically talk about actually designing the experiment, this is how we should start. Essentially, we have the Umbra design APIs, which help us define the different topologies and events. The topologies can be of the different types currently supported by Umbra, which include a Fabric topology, an Iroha topology, or an Indy topology. These topologies are really the groundwork: they help us create the abstractions for each of the blockchain networks we define within Umbra, and they help define all the different artifacts required to run, which are controlled by the different components we have within Umbra. The events are all the different interactions we are going to have inside Umbra. These will be scheduled, and they will have different parameters, such as repetitions, from when they start, and whether there are intervals, all within the design of the experiment. After we have defined all of this, you can actually run the experiments and understand how they are working.

But of course, that brings us to our main topic, and that is how we can scale up these experiments. As the slide says, Hyperledger Umbra is really useful for doing network-level security fuzzing and also for doing network scaling experiments, and we have seen how you can define one of these experiments. But the need is to be able to set up, execute, gather data, and report repeatable results, so that we can actually run the experiments at different scales and see, if we scale up these experiments, how it affects the characteristics of a distributed system, whether that is related to, say, the creation of the blockchain, how the scalability performs, or how much time it takes to reach consensus. The important aspect to understand is that the idea is to scale these Fabric networks up to hundreds and thousands of different nodes. It's definitely possible to do so, because this is supported for Fabric-based networks as well, but we need to see what the effect would be when we actually scale up these different experiments, and what the exact outcome would be on the characteristics of the distributed systems we are running. Again, the need is to be able to develop a mechanism that can monitor and activate the scaling, to be able to run Umbra on Fabric 2.x, to be able to scale it to multiple servers, and also to automatically create monitoring functionality for the scaling. If we are scaling out to multiple servers, we should also have the ability to monitor them and to scale up the monitoring as well: automating the monitoring functionality, so that once we are able to scale up our experiments, we are also able to automatically scale up our monitoring.

This was taken up as one of the internship projects, and that was the need behind it. Some of the main deliverables that were part of the project included: the scalability functionality itself, covering the environments and events so that those can be scaled up as well; the automation functionality, covering the Makefiles, Vagrant, and the Umbra CLI, so that the automated behavior is built in; giving network fuzzing capabilities to Hyperledger Umbra; and being able to use Umbra scaled across multiple servers, whether for testing setups or for syncing with the other fuzzing functionality. As you can see, these are some of the major deliverables that were met as part of the overall work on Hyperledger Umbra, specifically the scaling of different types of experiments, and essentially the idea is to use this for monitoring and for seeing how the system handles being a distributed system. These are some of the most important aspects to consider when it comes to scaling up.

Apart from that, some of the results: the learning objectives were to understand how we can propose effective experiments to see how the performance varies with the scale of the network. That is also one of the most important things to understand: how we can effectively design different types of experiments and understand how scaling up these experiments relates to performance. By understanding this, we automatically gain an understanding of how distributed network applications work and how we can study them inside Hyperledger-based networks. And once this was actually completed, the further plan to look at is that we can create more and more experiments that focus on distributed ledger technologies. That will also help us enhance the experimentation in Hyperledger Umbra itself, adding future feature sets that can better enhance the results by using these experiments as feedback directly within Hyperledger Umbra. Of course, this helps us understand how Hyperledger Umbra will react and behave in scaled-up Hyperledger-based environments, and it gives a real-world sense of how those kinds of scaled-up environments can actually help.

That brings an end to the presentation, and I would love to take up any questions you might be having. Of course, you can also connect with me on these social platforms, on Twitter and on LinkedIn. The main idea behind all of this, why this internship experiment was put together in the first place, was to be able to understand how scaling works when we are defining any kind of experiment within Hyperledger Umbra, how scaling actually affects it, and how it can be used to further understand Umbra when we are running it on Hyperledger-based networks. The overall experiment is still not, you could say, 100% complete, but it does give clarity on how this works out, and it's definitely a really wonderful project if you are interested in emulation, in seeing how different kinds of networks behave, in looking at the monitoring, and in how they function. So I'll definitely recommend looking at Hyperledger Umbra and actually running one of the labs as well.
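Before taking questions, the monitoring path described in the talk (monitor extracts metrics, pushes them into InfluxDB, Grafana visualizes them) can be made slightly more concrete. This is a hedged sketch assuming InfluxDB's line protocol format; the measurement, tag, and field names are my own illustrations, not Umbra's actual schema:

```python
# Hedged sketch: shaping a node metric into an InfluxDB line-protocol
# string, as a monitor component might do before writing to the
# database that Grafana reads. Names here are illustrative only.
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Format one metric sample as InfluxDB line protocol."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "node_cpu", {"node": "peer0"}, {"usage": 0.42}, 1600000000000000000
)
# An InfluxDB client would then POST lines like this to the /write endpoint.
```

Once samples like this land in InfluxDB, a Grafana dashboard pointed at that database can chart per-node CPU as the experiment scales.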
So we'll take up some of the questions you might have. One question I can see, and please do remind me if I get the name wrong, is from Pascal, who asks: how does Umbra interact with Docker, and can you deploy nodes as Docker containers? First of all, that's a really great question. As I've described, I'll go back to how, in a nutshell, Umbra is used with a blockchain. We talked about the Umbra architecture, and when we define a blockchain-based network, say one of the two that are currently supported by Umbra, Fabric or Iroha, these blockchain projects themselves run inside containerized apps. So yes, I'll definitely confirm that the nodes we get from these blockchain networks that we install locally are containerized, and all the different components we discussed in the architecture are deployable as Docker containers as well; that's definitely possible. Whenever we are experimenting with any kind of blockchain-based components, whether Fabric or Iroha, we install those Dockerized containers, and that is how Umbra actually interacts with them. I hope that answers the question. One more thing: you will also need Docker installed on your system, and you can refer to the Docker docs to understand how, because you'll be getting these Dockerized containers through Docker Hub; they are already available on Docker Hub, so you can have a look there at all the different images that are supported. To further answer the question of how this actually functions: say you are running an experiment for the very first time, whether for a Fabric-based network or any other kind of blockchain network; you'll be downloading the Docker container and then interacting with that container. I hope that answers the question.

I don't see any other questions, so if there are no additional ones, let me summarize what we have discussed. The main point of scaling up the experiments is to understand how Hyperledger Umbra is used to study how Hyperledger-based networks work and function, and to see what happens when we scale up the systems, the experiments that we are able to design and use within Hyperledger Umbra. That starts with defining the topologies and the events, and then using the monitoring, storing metrics in a database and then using Grafana to see how things are scaling up; we use the scaling up to understand how it will actually work in a distributed system, and that is essentially what is being used. Anyway, I think that covers the overall experiment. Thank you so much for connecting, and of course you can connect with me on all these different social platforms, on Twitter and on LinkedIn. I'll be more than happy to take up any questions during this Hyperledger forum, on Umbra or anything related to Hyperledger. Thank you so much!