Good afternoon, everyone. Thank you for joining. My name is Jyoti Panapalli, and I'm a Director and Blockchain Technology Lead at DTCC. For those of you who don't know, DTCC is a post-trade market infrastructure services provider for the global financial services industry, with about 21 global locations. DTCC, along with its subsidiaries, centralizes, automates, and manages risk for the capital markets. Just to give you an idea of the scale, in 2021 we processed securities transactions in the amount of 2.3 quadrillion US dollars. In today's agenda, I'm going to talk a little bit about the complexities of blockchain security and some of the challenges we met when we started implementing DLT, about DTCC's thought leadership in the security landscape, and how that led into the DLT security framework initiative we put out. Then I'll cover the risk assessment of Hyperledger Besu, discuss some of the security assessment findings and the configuration best practices we have put together, and that will lead further into smart contract vulnerabilities, best practices, and certification specs. The nature of DLT itself introduces significant complexity when it comes to blockchain security. With decentralization and distributed nodes, a corporate domain no longer centralizes and maintains all the nodes within its own borders or boundaries, and that gives rise to emerging risks. A significant lack of understanding of the threat landscape, and of the associated risks, adds further complexity to the security posture of DLT applications. A lack of guidance toward secure implementation is another complexity: cybersecurity frameworks do exist and cover traditional IT security assessment quite well, but there is a considerable gap between DLT security and traditional IT security.
The developer community is not equipped with the tools and techniques it needs for secure coding, secure coding best practices, secure deployment, and DevSecOps. And of course, patching and upgradability on immutable ledgers is another complexity, along with the need for continuous monitoring capabilities. Blockchain networks today, both public permissionless and private permissioned, are running business workflows that involve transactions and custody of value in the form of digital assets. Cybersecurity attributes such as confidentiality, privacy, and integrity of data take center stage for blockchain, and a compromise or hack of any of these attributes can result in high-value business impact: loss of trades, loss of funds, and loss of trust and credibility for the organization. As I alluded to earlier, DTCC's business model depends on processing trillions of dollars of transactions at scale, so risk management is at the core of what we do. Correspondingly, DTCC put out a white paper in 2020, essentially a call to action inviting all financial industry stakeholders to collaborate and contribute to best practices that would eventually develop into a standard agreed upon by the financial services industry. Our role is to effectively manage risk and identify the appropriate countermeasures, so we identified some baseline security assurance standards and proposed that they be achieved via some form of industry consortium. Security must be considered at all stages of the DLT life cycle, which includes design, development, deployment, post-deployment, governance, and compliance.
The chart you see up here illustrates 15 categories of security considerations gathered across 20-plus research artifacts and publications produced by multiple organizations. The size of each block indicates how many organizations mentioned that category as a security consideration. One thing we noticed was that as the pace of DLT and blockchain development increased, we were accelerating our own efforts internally, but at the same time we were challenged by the lack of appropriate guidance for securing these deployments. As we put our thoughts together, we assumed that if we were accelerating internally, other financial services organizations were in a similar position. Hence the white paper, and hence this research. It helped us confirm that we were not alone in the industry; other organizations were thinking through the same unique drivers. With that support, we decided the industry needs a DLT security framework, and we put together a set of goals for developing it. It should assist financial services organizations in evaluating risks across individual firms and in conducting security assessments via best practices and tools. It should assist with third-party management and incident management. It must address key aspects of the key management life cycle, especially the creation, maintenance, and storage, as well as the disposal, of sensitive information. The framework must also provide security guidance and best practices for standard access controls and authentication methods, and, primarily, bridge the security gap between traditional IT security and DLT-specific security.
The first layer you see up here shows the alliances and consortiums we reached out to for collaboration as part of our planned efforts after we put out our white paper. We created DLT-specific security working groups, or partnered with working groups focused on building security artifacts and research articles. The second layer shows the focus areas, which form another umbrella of the DLT security framework. We created sub-working groups for these specific domains, where we could pool the skilled resources and tools needed to conduct research and develop security artifacts specific to each focus area. The third layer shows the platforms for which we directly participated in delivering publications. With the Hyperledger Foundation we have published close to three artifacts covering security controls. We participated with the Enterprise Ethereum Alliance to publish the smart contract security audit specification and certification levels, and under the CSA umbrella we helped produce an architecture security report as well as a security controls checklist for R3 Corda. These are some of the security research publications we have put out since we started our initiative in September 2020. That's a lot of work, so if you're interested in reading those articles, please go to the Cloud Security Alliance (CSA) website; they are all in the public domain, not behind a paywall, so anybody can register at CSA and download them. The EthTrust security levels spec is published by the Enterprise Ethereum Alliance and is also in the public domain.
Two of the research artifacts are still works in progress, so they will be published in either Q1 or Q2 of 2023. And I cannot move past our initiatives without thanking our contributor community. This chart shows our contributor growth and retention under CSA, and we have similar stats under the other alliances as well. Now I will dive into the Hyperledger Besu security assessment we conducted. We didn't want to reinvent the wheel; that was not the intent. Our efforts were centered on identifying the gap between traditional IT security and DLT-specific security. For every security assessment we conducted, we still leveraged the NIST Cybersecurity Framework as our basis. Walking through the process we implemented: we would first identify the platform, whether Hyperledger Besu or Hyperledger Fabric, and then identify the architectural risks by applying threat modeling to an assumed use case that fits the platform under consideration. Alongside the threat modeling we developed an architecture security report, and with it a security controls checklist that ties directly into the functional domains specified in NIST. The risk identification process comprises three layers. The first layer is DLT in general. The second layer is Hyperledger Besu. The third layer is the Enterprise Ethereum Alliance. Now, you may ask: they are all blockchains or DLT, so why split them into layers? The reason is the commonalities across DLT platforms. Our intent was to abstract those commonalities across multiple platforms and then roll them up into a company's policy and control standard.
If we don't identify those commonalities, then every DLT project an organization rolls out will face the challenge of developing its own policy and control standards. Another complexity I forgot to mention is that every DLT platform has its own technology stack, which means that without those commonalities you might end up developing a separate, exclusive policy guideline and control standard for each platform, which is a redundant process. That's why we set out to abstract the commonalities and put them together as a single policy guideline for an organization. Then, of course, there is Hyperledger Besu itself, with threat modeling on the customized DApps that were deployed, and the Enterprise Ethereum Alliance layer, where, given a public permissionless environment, there may be risks that the organization must be willing to accept if it cannot mitigate them. Once we identified the risk areas of focus, our approach was, first, to co-create with the security teams. It is very important to loop in skilled resources from the different squads, because that brings shared best practices and experience in implementing a specific technology. With the squads together, we identify the threat-modeled risks at both the component level and the enterprise level. I say enterprise level because eventually all of these have to roll up into the enterprise risk management framework; currently the work sits at the component level, which could otherwise end up requiring rework down the line. We map the risks to the NIST Cybersecurity Framework, which again is our basis, quantify those risks, recommend short-term and long-term risk acceptance plans, and prioritize the remediation planning.
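The quantification step just described, grouping identified risks by NIST-aligned function area and computing each area's share, can be sketched roughly as follows. This is a minimal illustration, not our actual tooling; the register entries, category names, and resulting percentages are hypothetical.

```python
from collections import Counter

def quantify_by_function(risks):
    """Group identified risks by NIST-aligned category and return each
    category's share of the total as a rounded percentage."""
    counts = Counter(category for _, category in risks)
    total = sum(counts.values())
    return {cat: round(100 * n / total) for cat, n in counts.items()}

# Hypothetical risk register entries: (finding, NIST-aligned category)
register = [
    ("re-entrancy in settlement contract", "smart contracts"),
    ("unchecked external call", "smart contracts"),
    ("TLS disabled by default", "secure configuration"),
]

print(quantify_by_function(register))
# → {'smart contracts': 67, 'secure configuration': 33}
```

In practice each entry would also carry severity and remediation data so the same grouping can drive the short-term and long-term acceptance plans.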
I'm jumping the gun a little and showing this ahead of time: our entire effort resulted in quantifying the risks we identified into specific functions, grouped by cybersecurity function area, and as I mentioned earlier, the squads help us build the mitigation steps. Please bear in mind that this is a point-in-time slide; it reflects what we identified while conducting assessments of the specific applications we deployed and their corresponding threat surface. You see that smart contracts show 67 percent of the risk while secure configuration of Besu shows about 3 percent, but this is a moving target: at any given point that 67 percent on smart contracts could be down to 20 percent or up to 80 percent. So please bear in mind this is a point-in-time snapshot and not universally applicable. Now I'll walk through the Hyperledger Besu security assessment findings. Our initial assessment of the known risks in the ecosystem found Besu to be a well-defined and well-implemented Java client, which was good news. We were of course not the first: prior to our assessment, Tevora, a third party, conducted a security audit of Besu and identified primarily two vulnerabilities, and to my knowledge both have since been mitigated in subsequent versions. We also found that Hyperledger Besu's development and life cycle management is quite robust, with appropriate defect management and security, which again is why we attributed only 3 percent of the risks to secure configuration. The major dependencies were EthSigner and the hosting environment, while the operational dependencies identified were on the cloud environment and, to a lesser extent, on Ethereum itself.
Some of the critical findings we identified: as I said, EthSigner is a major dependency, and its source code is simply checked into a repository, so we recommend that EthSigner downloads be verified through an appropriate process, and the same goes for the Besu source code; ensure you are downloading from an authoritative source. EthSigner should be updated to the most recent version; that's always the recommendation, but even if you don't, ensure it is not version 20.10.0, because that version was identified to have a Log4j vulnerability. A critical Besu configuration finding is that SSL is turned off by default, so give that a check. Keys should be managed according to best practices. Key management best practices and control standards are at different levels of maturity and process depending on the size of the organization and the industry; given that we are in financial services and heavily regulated, it's a truly critical requirement for us, so we always follow the standard best practices advised by our industry standards. Among the moderate-impact findings: always start EthSigner as described in the initialization documentation, because initialization is a critical first step. The second one concerns Besu configuration: today Besu accepts a mix of a configuration file and command-line options, which is known to have resulted in several error-prone patterns. We recommend preferring the configuration file, first because it gives you maximum auditability, and second for reproducibility of your errors.
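To illustrate the config-file recommendation, a minimal Besu configuration file might look like the following. Besu's configuration file is TOML, with keys mirroring the CLI option names; the values here are examples, not a recommended production setup, so check the current Besu documentation for the authoritative option names.

```toml
# Example Besu configuration file, passed via --config-file.
# Keeping options here rather than on the command line gives a
# single auditable, reproducible source of node configuration.
network = "mainnet"          # set explicitly rather than leaving it empty
data-path = "/var/lib/besu"  # example path; adjust for your environment
rpc-http-enabled = true
```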
Again, EthSigner could be isolated within a container. It's not a mandatory requirement currently, but if you are spinning up a node, the best practice recommendation is to separate out the VMs, because EthSigner's events and event listeners can sometimes process untrusted data. Among the minor-impact findings: at the OS level there is the EthSigner ulimit configuration setting; the recommendation is to set the ulimit explicitly to a sufficiently high value. Also, the default Besu network value in the config file currently sits empty, of course because Besu can be configured in multiple different ways, but we always recommend setting an explicit value: if you are using mainnet, set it to mainnet instead of leaving it empty. Following our critical findings, we also put together some configuration best practices, which I'll read out, covering Hyperledger Besu, the tooling, the Ethereum VMs, and the cloud.
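Coming back to the download-verification finding for a moment: the idea of checking a downloaded payload against a published digest can be sketched as below. This is a minimal illustration; the payload bytes and digest are placeholders, and in practice the known-good digest must come over a secure channel from the authoritative release page, not be computed locally.

```python
import hashlib

def verify_payload(payload: bytes, published_sha256: str) -> bool:
    """Compare the SHA-256 digest of a downloaded release payload
    against the digest published by the authoritative source."""
    return hashlib.sha256(payload).hexdigest() == published_sha256

# Hypothetical payload and its known-good digest (placeholder values).
payload = b"example-release-archive-bytes"
good = hashlib.sha256(payload).hexdigest()

assert verify_payload(payload, good)             # untampered download passes
assert not verify_payload(payload + b"x", good)  # tampered payload is rejected
```

Without a secure channel for the digest itself, a network-level attacker who can substitute the payload can substitute the digest too, which is exactly why the TLS point in the next finding matters.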
The first area is Hyperledger Besu itself. Unlike EthSigner, Besu is set up to be downloaded from an authoritative repository, but by default the configuration turns off TLS. If you are not using secure network communications, the corresponding hash to verify the integrity of the downloaded payload is not available, and a network-level attacker could potentially intercept the connection and supply their own Besu source code to you. So preferably use a secure communication channel such as TLS, and verify the integrity of the downloaded payload. Besu currently releases software every two to three weeks, and they're actually pretty good at a two-week cycle, so ensure you're tracking the updates in accordance with your FOSS scans, which is a standard process. Preferably use a scripted build process, and apply load balancing; those are common best-practice tips.

When it comes to tooling, EthSigner is a major dependency: ensure monitoring, and the usual best practice is to use the official Dockerfile instead of downloading the release from a pre-built Docker script. For cloud best practices, every organization is invested in its own cloud service provider, whether AWS, Azure, or Google Cloud; follow the best practices in your provider's documentation and ensure they meet the security and compliance objectives of your company and your industry. There are also standard VM configuration best practices that apply to DLT as well. Several components should be further privilege-separated, which reduces the blast radius of a compromise. Another fact is that Ethereum clients generate huge amounts of state, so starting a clean node is very slow; it is good to maintain snapshots of the Ethereum client state, because if some sort of corruption is introduced, recovery without snapshots could be a pretty costly affair. The underlying operating system must always be kept up to date with security patches; again, a standard best practice.

For Ethereum configuration best practices: ensure the boot nodes are carefully audited; several well-known entities maintain boot nodes. Do basic health and safety monitoring of each node, and keep a view not just of the nodes you are spinning up but of the state and health of the wider network. Use logging and log management, use geographically diverse nodes as much as possible, and keep a check on external API usage. As I said, it's always a good idea to keep a watch on the health and state of the Ethereum network.

Now, coming back to that glaring 67 percent risk assessment on smart contracts; I'll repeat, it's a point-in-time figure. Smart contracts today are a superpower: they power the decentralized applications on Ethereum. They have had security issues, and until recently there was no good way to understand whether a particular smart contract had undergone any sort of security audit or security verification. But in August 2022 the Enterprise Ethereum Alliance published a smart contract security audit specification to bring consistency to smart contract security. The publication is called the EthTrust Security Levels Specification, version 1, developed by the EthTrust Security Levels Working Group. It is a new specification that aims at certifying a smart contract through a full security audit, and it provides three different levels of security certification. Depending on your organization, its size, and its security audit needs, you can pick and choose which level of certification is most apt, and it provides stronger assurance that a smart contract does not have specific security vulnerabilities.

The spec also provides some basic smart contract best practices. As it says, new vulnerabilities are discovered from time to time, so even if you have conducted a security audit on your smart contract and deployed it, that's not the end of the story. You have to keep checking for new vulnerabilities and ensure that newly identified vulnerabilities do not impact the functions of the smart contracts you've deployed; keep checking the current version as well as previous versions. Use the latest Solidity compiler, follow the ERC standards, and if you identify or come across a vulnerability that hasn't been reported, please disclose it in a responsible manner.

The spec also provides recommendations on making external calls. Normally, exceptions in subcalls bubble up, but Solidity provides a few low-level call functions, such as delegatecall and staticcall, that behave differently: they return a Boolean value indicating whether the call completed successfully. If you are not checking that Boolean value and the call fails, it could potentially lead to unexpected behavior in the caller contract. Use the checks-effects-interactions pattern: it ensures that validation of the request and changes to the contract's state variables are performed before any interactions take place. This is important because it significantly reduces the scope for re-entrancy attacks. Do not use the delegatecall instruction as an external call to manipulate the state of the contract, and ensure your tested code goes through these reviews when it comes to external calls. There are a number of known security bugs in various versions of the Solidity compiler; some bugs were introduced in known versions, and some are assumed to have always existed. The slide you see up here lists the compiler bugs the audit must check for at Security Level S, and there are recommendations for the other security certification levels, M and Q, as well. So, especially for the developers in the room, it's always a good idea to go through this spec, because it gives you good knowledge of secure coding best practices.

Now I will walk through the security level certification requirements. Level S is the first, the lowest tier, and if you go through this certification and successfully conduct a security audit, you can claim the badge and publicly showcase that you comply with the EthTrust requirements. EthTrust certification, as I said, is available at three levels. Security Level S is intended to allow an unguided automated tool to analyze the source code and determine whether it meets the requirements. Some of the standard requirements are, for example: no use of tx.origin, and no selfdestruct. That actually reminds me of a story. A few days ago on Twitter I saw a post from a developer asking, "Am I going to be arrested?" He had accidentally called selfdestruct on one of his smart contracts and, in his naivety, didn't realize what he'd done; then came a trickle of annoyed messages from other people. It comes down to some very basic knowledge, but again, it was a developer's error.

EthTrust Security Level M means that the tested code has been successfully reviewed by a human auditor. This goes beyond the automated tool: the team performs a manual analysis, and important security issues have been addressed. This level assumes that the lower Level S requirements have already been met by the automated-tool audit, and it adds a further set of requirements, such as: no unnecessary Unicode controls; declare storage explicitly, which is very important, as storage variables must be explicitly declared; and handle sources of randomness carefully.

Security Level Q is the last in the sequence, the highest level of verification. It means that Security Level S and the manual audit at Level M have been verified, and certification at this level also means that the intended functionality of the code is sufficiently well documented, its functional correctness has been verified, and the code and documentation have been thoroughly reviewed. It also includes good practices on code linting, managing gas use increases, and state changes, and it mentions "no private data." Private data means different things in different jurisdictions, so check your jurisdiction's definition: in Europe, GDPR applies, and in California the privacy policies are different.

I want to conclude my talk today with a few things to think about. One of the main things is that security should never be an afterthought, which unfortunately is how it is treated in most organizations today. New security vulnerabilities are being introduced every minute of every day, so it's always good practice to bring a shift in your organization's thinking, approach, perspective, and collaboration. Ensure you're bringing the squads together if you have security teams; if you're a small organization lacking the funds for a dedicated security practitioner, then inculcate security best practices into your development teams. That thought process has to be initiated in the design and development phase itself, rather than after developing and deploying something to production, because with public permissionless networks, once you have deployed, it's live and out there, and if you have deployed bad code, that bad code sits on the public permissionless network. So it's always good practice to change that perspective. Bring consistency to how you conduct your assessments and governance, see if you can grow your security posture and knowledge, because education is really critical, develop a security strategy program if possible, and encourage security education in your organization. Talk to your peers, see if you can get someone excited about thinking this through, and bring them into the security mindset.

All right, thank you for your time today; I think that's about my time. If you have any questions, I'm happy to take them now. And if you have questions about the alliances and collaborations we have made, or you're interested in participating in any of them, feel free to stop me and ask.