Hello everyone. Good morning, good afternoon, etc. Hi, Elena. How are you? You're ready? Yes, thank you so much. I'm really good. Yeah, ready. So, I'm going to wait two minutes, because people are normally late, late in the sense of a couple of minutes. The other thing is that we are going to record this session, like every other session; the recording has already started. Two things before we start. One, the Hyperledger Code of Conduct: we go by the Hyperledger Code of Conduct, which says that we will treat each other with respect, even when we are disagreeing with people, or even when we are agreeing with people. Please be respectful of each other. The other one is the antitrust policy. We go by the Hyperledger antitrust policy, which means that we do not engage in any antitrust activities. These are the only two requirements for participating in this call. And of course, we also know that all the material developed and put into play here is completely open source, under Apache 2.0 for code and the other open source licensing for everything else. So if you can, please add your name to the list; let's help each other develop the meeting minutes. And we are going to start off with Elena, who is going to describe the challenges of testing DLT software for financial market infrastructure. Exactpro has been in this business for a while. She is a researcher and business development manager at Exactpro. She'll take it from here now. Thanks.

Yeah, thank you. Thank you. Yes, so thank you so much for having me today, and this is really exciting for me to be a part of this group. Just a couple of words about who I am. As was said, I represent Exactpro, and we are a firm focused on software testing for the financial sector. As a business development manager, I am responsible for partnerships and events here in the U.S.
I'm also a part of the Exactpro research team, which means that I take part in our research activities: research paper submissions, speaking at conferences, or making overviews for internal use. And this is exactly the reason why I am also a part of this group, because the discussions that you have here are really high quality and really important for us; they help us to understand what's going on in the capital markets sector related to the DLT domain. So let me share my screen, just a couple of seconds, and I will start with the presentation. Okay, can you see it okay? Yes, yes. Okay, here we go.

All right. So just a couple of words about the company itself. Exactpro provides quality assurance services for exchanges, clearing houses, investment banks, brokers, and other financial institutions, performing both functional and non-functional testing. We were a part of the London Stock Exchange Group between 2015 and 2018; in January 2018, Exactpro completed a management buyout from LSEG. We are headquartered in London. We also have operations in the U.S. and in Georgia (Georgia the country), and we have several offices in Eastern Europe. Exactpro currently employs almost 600 specialists, and, as I mentioned, our focus is mainly on financial market infrastructures. Our current clients are quite diverse in terms of geography: we have clients in over 20 locations on all six continents. From the beginning, the main focus was on traditional technology, so our first engagements were in the trading and post-trade areas. Then, a couple of years ago, we got interested in distributed ledger technology, and we tried ourselves in a couple of POC projects. But at that time, most of DLT consisted of early-day attempts to build an application and to prove that it actually works.
The whole idea of testing is exactly the opposite: to stress the system to its limits, to expose its weakest points, and to see what happens in the worst-case scenario. So back then the industry was not exactly ready for really intensive testing. But, surprisingly quickly, it came up with solutions that needed real testing. First it happened in the area of DLT per se, and then we were also contacted by some digital exchanges for whom testing turned out to be extremely important, given their need to be established as part of the capital markets community; most of them are still struggling with regulatory challenges.

Just for you to understand the context of our work, I'd like to describe our general approach to testing traditional technology and what is usually involved from the software testing side. As everyone here knows, the underlying technology in the financial sector is extremely complex. First, the complexity is caused by the structure of a typical platform within a systemically important infrastructure. The most vivid case study for testing is probably a post-trade system, which can have about 80 inward and outward connections, dozens of components, and sophisticated upstream and downstream system dependencies. Such systems also have complex participant structures, they work with lots of asset classes, and they are accessed via diverse APIs and protocols. The second aspect of this complexity is the life cycle of such a system, because the system's daily activities involve, again for post-trade, trade uploads, different calculations and computations, margin runs, settlement sessions, and so on. All of those activities are performed every day; they happen in a predefined order, at predefined times, and they depend on each other. So what does this actually mean for us in terms of software testing?
It means that to test such a system properly, we need to test all of these repeatedly occurring processes, and to test them with the full multitude of parameters. In other words, this leads to hundreds of millions of tests, which is a problem that can only be approached with automation. So, as you can see, there is already enough complexity to deal with from the software testing perspective even before DLT is brought into the picture. Even for pre-DLT technology, we used to address this complexity with our test automation tools, and on these slides there are just two examples. One of the tools is aimed at testing complex scenarios and business life cycles, and it supports concurrent work with multiple gateways with diverse protocols and APIs. On top of this tool, there was an enhancement allowing testing at the confluence of functional and non-functional testing.

So that was the pre-DLT era. Then we noticed that financial institutions started gradually adopting DLT, which of course required significant transformation and replacement of the core parts of their platforms. To address those changes, we started to think about what the test approach for the new DLT-based platforms should be like, and we decided to adapt our test approach. We needed to understand the new challenges introduced by DLT. At the very beginning, it was absolutely clear that all the challenges around complexity remained in place. The traditional challenges didn't go away, and DLT did not eliminate them; in fact, they became even more pronounced because of the hybrid nature of the new platforms. Compared to traditional systems, DLT platforms pose additional challenges in terms of quality assurance. These stem from the necessity to verify the interoperability of nodes in the network, because there are a large number of connection permutations between them.
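As a quick illustrative aside: the combinatorial explosion behind "hundreds of millions of tests" is easy to see in a few lines of Python. All dimension names and values below are invented for this sketch; a real post-trade platform has dozens of such dimensions, many with far more values.

```python
import itertools

# Hypothetical scenario dimensions for a post-trade test campaign.
# These names and values are invented; real platforms have dozens of
# dimensions (asset classes, sessions, participant types, ...).
DIMENSIONS = {
    "asset_class": ["equity", "bond", "fx", "derivative"],
    "order_type": ["limit", "market", "stop"],
    "session": ["pre_open", "continuous", "auction", "close"],
    "participant": ["member", "sponsored", "market_maker"],
}

def generate_cases(dimensions):
    """Yield one test case per combination of parameter values."""
    keys = list(dimensions)
    for values in itertools.product(*(dimensions[k] for k in keys)):
        yield dict(zip(keys, values))

cases = list(generate_cases(DIMENSIONS))
# Just four small dimensions already give 4 * 3 * 4 * 3 = 144 cases;
# each extra dimension multiplies the count, which is why manual
# testing cannot keep up and automation becomes mandatory.
```

Multiplying in the repeatedly occurring daily processes (uploads, margin runs, settlement sessions) is what drives the total into the hundreds of millions.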
You also need to understand the differences between varying permissioning logic. There are also challenges stemming from the non-functional requirements on DLT platforms: they have to be really high performing, with considerable transaction throughput. Another requirement is very high availability, especially in those components which ensure the correctness of the logic around the ledger states, the correct smart contracts, and the correct propagation of changes across the network. And another weak point is, of course, continuous upgrades, where there is a risk that something will go wrong, and again the error could be propagated to the entire network. To address those challenges, we need a two-pronged approach with both functional and non-functional testing, be it testing of the network plumbing itself or of the distributed apps.

We found that there are a number of test tool properties that we needed for testing DLT. As for the platform itself, the case study that you can see on this slide covers our experience with the R3 Corda technology from the functional point of view. The existing testing tools needed to be improved with the following capabilities. As you can see, the diagram represents the main DLT system components: it is a network of nodes. Our tool was used for extensive end-to-end test automation of these scenarios, and it was adapted so that we could simulate the main node functions on the Corda ledger in order to send transactions with certain parameters. We predefine the parameters in an automated script; such a script we internally call a matrix, but actually it is a file in CSV format. During testing, a whole set of matrices is run concurrently through multiple connections. Through the different connections we can trigger the interactions between the nodes, and we can check the results in the vaults. Some of the other enhancements are also listed on the slide.
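As a side note, the "matrix" idea just described can be sketched in miniature. The CSV columns, the StubNode class, and the vault arithmetic below are all invented stand-ins; the real tool drives actual node connections over the wire rather than an in-memory stub.

```python
import csv
import io
from concurrent.futures import ThreadPoolExecutor

# A hypothetical two-row "matrix": each row describes one transaction
# to send and the vault balance expected afterwards. The column names
# are invented for this sketch.
MATRIX_CSV = """action,party,asset,quantity,expected_vault_qty
issue,PartyA,BOND,100,100
transfer,PartyA,BOND,40,60
"""

class StubNode:
    """In-memory stand-in for a ledger node; a real run would go over RPC."""
    def __init__(self):
        self.vault = {}  # (party, asset) -> quantity

    def execute(self, row):
        key = (row["party"], row["asset"])
        qty = int(row["quantity"])
        if row["action"] == "issue":
            self.vault[key] = self.vault.get(key, 0) + qty
        elif row["action"] == "transfer":
            self.vault[key] = self.vault.get(key, 0) - qty
        # Check the resulting vault state against the matrix expectation.
        return self.vault[key] == int(row["expected_vault_qty"])

def run_matrix(csv_text, node):
    """Run one matrix: rows execute in order, each row is verified."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return [node.execute(row) for row in rows]

# Whole matrices run concurrently, mirroring the multiple-connection setup.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_matrix,
                            [MATRIX_CSV, MATRIX_CSV],
                            [StubNode(), StubNode()]))
```

The key design point carried over from the talk: rows within one matrix are sequential (a business flow), while independent matrices run in parallel against separate connections.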
These are around node deployment, the ability to deploy CorDapps, node administration, and so on. So this was a case study of testing the network itself from the functional point of view. The next slide demonstrates the approach to the very same Corda network, but from the non-functional point of view. As you may know, non-functional testing of any platform is extremely complex: it requires robust configuration and may even involve some preliminary development activities around it. To run automated non-functional tests, it is required to monitor various metrics of the platform under test and its hardware usage. It is also very important to have the option to start, stop, or adjust load injection, or take other actions, based on the current system state. The Exactpro automation framework allows the initiation of flows across a large-scale network of nodes, and we can do it against a remote network. We use open source tools here to gather metrics and visualize the results, and we are now able to cover various non-functional testing types. As you can see on the slide, these are load and stress testing, failover testing, soak testing, and the measurement of latency and performance.

The last two case studies are examples of functional testing of distributed apps, in contrast to testing the network plumbing itself. In the first case, our test tool was adapted to simulate nodes for a trade reconciliation and position update business flow; as you can see, it is for two parties and a CCP. This is a much simpler case, and in this example testing was orchestrated using just the Exactpro test tools: some of the framework components emulated the nodes, some triggered actions through RPC calls, and some performed checks on the ledger. The other case study is again around the business layer, and it is the one with the Digital Asset Modeling Language (DAML) demo.
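Stepping back for a moment to the non-functional case study: the idea of starting, stopping, or adjusting load injection based on the current system state amounts to a closed feedback loop. The sketch below caricatures it with an invented latency model and invented thresholds; a real injector would read metrics from a monitoring endpoint rather than compute them locally.

```python
class LoadInjector:
    """Toy closed-loop injector: raises the injection rate until an
    observed metric (here, latency) crosses a threshold, then backs
    off. All numbers are invented for illustration."""

    def __init__(self, latency_limit_ms=50.0):
        self.rate = 100                    # messages per second
        self.latency_limit = latency_limit_ms
        self.history = []                  # (rate, observed latency)

    def observe_latency(self):
        # Stand-in for a real metrics scrape: in this toy model,
        # latency grows linearly with the injected rate.
        return self.rate * 0.05

    def step(self):
        latency = self.observe_latency()
        self.history.append((self.rate, latency))
        if latency > self.latency_limit:
            self.rate = int(self.rate * 0.5)   # back off under stress
        else:
            self.rate = int(self.rate * 1.5)   # push harder

injector = LoadInjector()
for _ in range(8):
    injector.step()
# The injector ramps up until latency exceeds the limit, then halves
# the rate: the system's breaking point is discovered, not presupposed.
```

This is the same philosophy Elena describes later in the Q&A: no preset "realistic" ceiling, just push until the system under test reacts.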
Here we used Digital Asset's swaps lifecycle demo application, which was based on the ISDA CDM model. In this case, the simulation that we did was performed for one of the party nodes as well as for the market data provider node, plus the testing framework applied time shifts to facilitate the checks associated with particular lifecycle events of a swap. So this is basically it. I outlined the pains that we had before with testing traditional platforms within large-scale market infrastructures, and shared some examples of how one can adjust testing to meet the new needs which are emerging with DLT. So thank you.

Thank you so much, Elena, congratulations. It's a very comprehensive look at a complex landscape. Maybe we went through it a little fast, but I'm sure people have questions, especially since I notice Jim on the call has already asked questions in the Zoom chat. He asks: did you use JMeter for load? Did you test CI/CD and change management? How do you define bad test data cases? Anyway, you can read them just like I can, obviously, but I'm drawing your attention to the questions.

Right now I'm looking at my presentation, so maybe I'll stop sharing. No, it's okay; I can read them out to you and we can answer them. I'm looking at it right now. Regarding the first question, about JMeter: to generate the load we use our own internal load injector, which in some ways is similar to JMeter but is more adjusted to our needs and our specific knowledge domain, so it's more focused on trading software; for non-functional testing we mainly use this one. Regarding CI/CD: yes, we use open source tools here, and of course we use them extensively in our experience. So, how do you define bad data test cases?
Well, our overall approach is extensive MTM testing, which means that we look at everything. That's why I started my presentation with the overview, maybe not the most interesting part for you, of the complexity of a typical platform. As you can see, so many things are involved in our testing: we look at the data, we look at data permutations, and we look at how data is transmitted from one point to another, because data can be changed as it passes through different protocols and APIs. For example, we receive some data from a market data feed in one format; then it is received by some custom application or component in the platform and transmitted to some downstream system, and the data structure can be changed a little bit. It will be tested for everything, for all the permutations. I hope I answered the question, because I think you can have several understandings of "bad data test cases"; if this doesn't answer your question, you can always contact me and I will try to offer you another reply.

Next: pre- and post-assertions in your test framework. I'm not sure how to answer this. Certain languages, some languages, provide the capability for pre- and post-assertions, much like a JUnit kind of thing, right? You can say that the entering state is X and the exiting state is Y, you can assert those, and then you look for what I call the transformation in the middle, if that makes sense. Yes, it does. I would say that we do not focus a lot on formal verification here; it's more about the overall business functioning, and we specialize more on the testing of APIs and protocols, so we approach testing from a more black-box perspective. Thank you.

The next one is about whether you use some kind of identity, OpenID Connect authentication. I'm afraid I'm not familiar with this. Basically, what kind of identity model do you use in testing, or do you just go with one? In other words, in some frameworks, like Caliper, for
example, you have a way to spin up hundreds of clients with multiple identities interacting in complex ways, because that will simulate real life. Here, of course, Jim is referring to a specific identity framework called OpenID Connect, but you can use various different ones depending on the identity model inside the framework. I understand; it's just that I'm not so deeply involved in the testing itself, so I cannot answer this one, but I can get back to you later. I think we have a dedicated page on the Hyperledger portal where questions can be asked and answered. This is the one; I just have to go to my team. I mean, testing is a vast landscape, so obviously he's got some questions.

Hi Elena, thanks for a great presentation. Can you elaborate more on the Digital Asset and CDM testing with respect to IRS and CDS, considering the fact that CDM is continuing to make a lot of changes? In this testing, how and when did you do the testing? Could you elaborate a bit more on the CDM testing itself? First of all, I wouldn't call it testing of CDM. This wasn't testing of the underlying model, because, as you know, CDM is a framework for the description of business scenarios, which was used by the DAML SDK. We focused more on the application level: we simulated the business flow and tested the business logic, how everything works at the level of this very application. It was a proof-of-concept project, not a production case. And maybe one more interesting detail on CDM: we were not only testing it, we actually participated in a hackathon, and we even developed an application on Corda using CDM. If you have heard about DerivHack, we participated as well; we were participants in Singapore, and we took a prize for best CDM adoption, something like that. But from the development point of view, it was not our core activity. Thank you.

Anybody else have questions for Elena? No, but I did want to say that was an excellent presentation. Obviously you put
more than a couple of months of effort into building out this extensive test environment, which is really robust. And I can agree, having been on the other side of the fence at Fidelity, with responsibility for those systems, that a much higher level of testing is needed. So the fact that, in a sense, your company provides that, that's a big, big value service, and there are very few companies, honestly, that have a high focus on what I call bad data, which is more important than big data; I can almost separate them. I'd say all of the broker-dealers that you're familiar with don't have a super high threshold or a quality pain point, whereas when I flipped around to the JPMorgan data services, those guys were way up the food chain in terms of the priority for accurate data. So there are some differences there, and there's still the gap with the client, where they don't have that same standard, I'm sure. Okay, thank you. Thank you so much. Thanks. Anybody else?

Yeah, I have a question related to the most common errors, or mistakes. As you work with many clients, could you maybe highlight some of the main issues that you encounter in test design, or share with us some of the most common occurrences? Yeah, I can't speak about the details here, but maybe the main pain point that we experience is connected with the underestimation of the complexity of the platform. Everybody is talking about digital transformation, about AI and DLT and everything, and they think that those new technologies pose huge risk, but in fact most of the risk stems from the non-determinism of the current platforms, because they are really complex and no one really understands, in depth, how they work. There is no in-depth understanding of the exact workflow: what if we change this data, and what will happen? This especially makes sense in the face of the crisis that we can now see, because before, we had lots of problems with
stakeholders of the systems that we tested, because sometimes we came and tried to perform some tests, and they came back and said, okay guys, this scenario is really unrealistic, so why are you doing it at all? This can especially be seen in non-functional testing, in performance testing, because we test insane levels of load, and the clients think that we just want to break the system. Actually, this is true: we try to break the system, because it is important to see the system's reaction; we have to be sure that it has failover capabilities. But the spike in volatility showed that some scenarios which we were told were unrealistic proved to be realistic in the current situation. So it was definitely a mistake of some of our clients not to understand this.

What else? Most of the problems that we uncover are not in some particular place in an application or in some particular component, so I cannot say that some components are innately prone to errors. Rather, it is at the confluence: it can be at the confluence of DLT and non-DLT connections, or at the confluence of functional and non-functional testing, because if we just create some load, we may miss an error, or, vice versa, during purely functional testing we can again miss it. But if we do smart simulation, or smart model-based testing, which involves generating lots of scenarios and loading the system under test with those scenarios, we can find such errors. One more thing I can think of is the errors which occur when we upgrade some components of the nodes, something related to that. Okay, thank you so much for sharing, it's really insightful.

Elena, can I ask you one more question, about a major challenge for you? I assume you guys have, it's not a project approach, it's an environment for a given company, right, even their testing service, correct? Normally I would assume, what I mean, what I'm trying
to say is, a company doesn't hire your company to come in and just test temporarily; they're hiring your company, I assume, to do continuous testing over the long term. We have different engagements. You have multi-year service agreements, but in your multi-year service agreements, where you're considered the primary testing facility for a company, how are you integrated into their change management system? Because every time they make a change, it would impact your testing, so how does that integration process work for you? The answer is simple: our testing framework is usually deployed into the client's environment, so it's in their network, on their premises, or in their cloud. The testing tools are part of their framework, they have internal access, and so on. I think it's a really small percentage of our testing that is performed on our own infrastructure.

Anyone else have any questions? If not, I have some. Okay. You mentioned, you brought up, the question of black swans: basically, nobody is aware of the extent of the stress scenario until it actually happens, which is the true definition of a black swan, because it is not within people's minds to think about these things. I myself, when I was running a mortgage-backed securities system between 2008 and 2016, we created tests like a pandemic, we created tests like a nuclear attack in Times Square, we created scenarios like a California earthquake. All of this, of course, resulted in a model-based approach saying the spreads widened by this much, but the problem is, even those metrics are difficult to project, because it's all based on somebody saying, okay, if there is a global pandemic, this is how much the spread will increase by, but it may not be so. To bring all this back into the picture: I had written something about chaos testing inside blockchain, it is now available on Medium, but we had started thinking about this because the whole practice of chaos
engineering is based on the premise that nobody can have the entire system in their head; it's impossible for such a complex system, which is the point that you just made a few minutes ago. Chaos engineering theory basically tries to inject faults in a random and chaotic fashion. Yes, we really are into the theory, we know about it, and we use some of its principles in our testing. And speaking of black swans, we prefer not to think about what happened as a black swan, because apparently it is more of a white swan: it was predicted long ago, and what we now observe in financial market infrastructures, as we respond to the crisis, was actually tested. Again, we do not just hope for the best or anything. Some firms have certain thresholds; for example, there are rules within a company to test, say, double the historical maximum. This is not our case, this is not how we work. We do not have such thresholds, we do not have any presuppositions about what the bad scenario is. We try to kill the system components, we just go for a kill, and then we see what happens: how the system reacts, what its behavior is. Based on that, our ideology here is that we have to focus on the white swan events that we have, and there are enough of those, and we have to always be prepared for the worst. Exactly: for our testing we always anticipate the highest volatility, the highest loads, the highest outages; this is what we test. Because you don't think that these are black swans, that means you live in Australia, because black swans were found in Australia, and when the English landed there and saw these black swans, obviously they were not anything special for the people who lived there and were used to seeing them every day. In that sense, yeah. Anyway, so testing is a
very comprehensive practice, and a very specialist practice, and it has to be continuous, like Jim was saying. Obviously, whenever there's a change, either in the software or in the infrastructure or the deployment itself (you change the number of nodes, you're going multi-cloud, or you're doing other kinds of things), those are all changes that affect the software. One of the other questions I have: your slides, are they available or not? Yes, I can share them with you, or I think I can publish them myself. You can go to the agenda website, the meeting website which I have set up; it holds the meeting minutes for this call, and you can just upload your slides there and link them. If you have difficulty doing this, just send them to me; either way it's fine. Okay, I'm going to put the link in again, because I had it earlier, because I want people to help me crowdsource not just the meeting minutes but also reactions, their own presence, and so on. So I have the link in the chat if you want to go there, but we can exchange later and then do this there. Okay. We also have the case study that I partially ran through on our website; it is open for download, so you can go ahead and get it, but of course I will share the slides as well. That's great, because I think it's important that when we come to deployment, we should be thinking about testing even before we get things out into the open. Now, any of these pieces of software, is anything there available on a demo basis or an open source basis from your site, or is it all paid? We do not charge for our tools; we actually have some of our tools in open source on our GitHub page. We also have several videos on the topic. One of them is on this very page which I referred to when I spoke about the case study, and there is also a video covering this case study, a video I recorded from home during the
pandemic already. If you want to see me again, go to our YouTube channel and explore some videos there. It's more that the material is very compelling, and you are a very sympathetic presenter, so that is a very good thing. Anybody else have any more questions on this topic? No, but it was an awesome presentation, Elena, thank you so much.

If I may, one additional question, Elena. I don't know if you have heard about the token taxonomy and the joint initiative where many companies are gathering together to define standards that are almost agnostic to the platform. When you think about testing, I know that you may go from a higher level of abstraction down to maybe very fine-grained, detailed unit tests. So in that sense, talking about token standards, have you come up with a more business-level approach, if you will, to how to design comprehensive tests for different token standards, whether they are smart contracts, specifically for blockchain, for instance? Yeah, I'm sorry, I missed the name of the framework. Was it TTF?
Yeah, the Token Taxonomy Framework. It's a joint initiative where they are trying to come up with token templates that define behaviors and also properties, so whenever somebody wants to implement those templates, they can be reused and extended to their specific business needs. I'm afraid that the only place I've heard about this framework is in this group, this very group. So thank you so much for sharing this knowledge, and I'm afraid we haven't looked at it yet in terms of testing. Yeah, sure, thank you.

Well, you know that we have implemented that ETALAR, which is based on this framework, and we may add more security definitions on that framework. But recently they have migrated to another group called the IWA; we talked about it in this group before. Anyway, I think it is going to have a similar effect, let's say, as CDM has had on the contract side, so it is always good to incorporate some standard methods. I see that you have in your slides a reference to SWIFT ISO 20022 messages, probably. These are all standards, but the problem with ISO 20022, for example, is that it seems more like a union of existing messaging standards, and the messages are not cryptographically sealed or cryptographically integrated properly with the lineage and everything else, like CDM is. So we are coming to a new age in terms of those messages. I know that ISO 20022 is a recent standard, heavily used at ASX, where DAML was very active, but I wonder about the future of these kinds of frameworks which do not have this internal integrity. In fact, some of the most famous hacks against SWIFT came about because of this lack of lineage or cross-reference. The Bangladesh Bank hack, and the other hacks that have been perpetrated against SWIFT, came about because you can enter the message stream at different points, at the weakest link, for example, and create messages that are not integral to the chain, to the flow, and hence create problems. So
I don't know whether SWIFT themselves have thought about this; that's another topic for another time, but still, I think testing could incorporate these kinds of things into the framework. What were you saying? Yes, absolutely, though for us it is more about the functional side; we do not do security testing per se, but from the business perspective I think this is a very good idea to incorporate into testing. ISO 20022 we can already see incorporated in our business scenarios with our clients, so it is something that definitely has a future. Any other questions for Elena before we go to the next two items? Okay, thank you, Elena. We will be looking forward to getting a copy of your slides, and I will put the recording up, which shows you in your glory talking about all this. It's very, very good, because this is one of the topics that is often missed. Thank you for your attention. Thanks for having me.

Two things. One, I would like people to volunteer to do other presentations; we want a diverse group and diverse presentations on different topics, all dealing with capital markets. So the first thing is, I will send out the biannual report that I'm supposed to be doing; I will send out a draft copy to people, and you can respond to me right away, because it is due by the 15th of July. The second topic is that we have introduced a new lab proposal for cross-chain settlement instructions, in an interoperability situation, as a solution to interoperability. But it goes away from the regular interoperability conversation, because it is about the creation of cross-chain messages which are cryptographically bound and integral, with lots of different things that attest to the source of the message, and also ways to respond to it in a non-repudiable way. I have already sent out the proposal, but more details will be prepared once the lab is approved. Anybody who wants to participate in that lab can contact me, but we do need people to commit some time to it; otherwise you
know, we will get nowhere either. In the future, I don't want to get into any other stuff there. Can I ask: the use cases you are describing, do they fall in the same domain that the Cactus project is looking at, or are they different? When you say the same domain: there is a settlement leg of an asset transfer, and we are starting small. In other words, we are coming from the other end of the spectrum. We are not proposing a general solution for all cross-chain interoperability, which seems to be the aim there, by proposing a series of plugins and a plugin architecture that can take any solution and plug it in. It is possible that we can create this and then plug it into Cactus, but we are not doing that yet.

We are almost at time. It has been a delightful hour, as always. Next week we will have a close reading of the Digital Dollar Project, which Karen had proposed; we are going to talk about this, in case you are interested. There will also be a deeper discussion on CBDC; maybe we can bring the R3 paper into the picture and discuss what it means in terms of implementation and where we have to concentrate. We can give our feedback, because they are looking for alternate views. So please get back to me or to the group, especially after next week's presentation; we can talk about sending a formal reply to the DDP. What do you guys think about that? Thank you. I am going to end the recording and the presentation. Until July 15th, then. Thanks, Vipin. Thanks, Elena. Thank you.