We are here to present the post-quantum evolution of the cloud-native stack. With me are Emma Dickinson from Washington State University and Daly and Rojana; I am Daron Podoliano from F5. In this talk, we are going to cover what a quantum computer means and what post-quantum means. Then we are going to tell you about our experience converting a modern cloud-native stack to be post-quantum resistant, and show you a demo of our work, which includes the world's first post-quantum service mesh and Envoy proxy. Finally, we will conclude with an outline of what comes next. First, we have to talk about modern cryptography for a bit. So, why is encryption secure? Common public-key cryptosystems consist of three algorithms: one for key generation, one for encryption, and one for decryption. RSA is a modern public-key cryptosystem which, like other cryptosystems, relies on the fact that classical computers cannot solve hard mathematical problems in a reasonable time. RSA usually generates a 2048-bit private key using two very large prime numbers, p and q. It is not practical to factor their product N and find them, which introduces the concept of computational intractability. So what are those hard problems? In computer science, we categorize problems based on complexity, measured by how much time or memory it takes to solve them. Problems used for encryption are at least in NP, which means that a solution can be verified in polynomial time. Also, unless P equals NP, finding a solution requires exponential time. If P equals NP, it also means we would need no quantum computer to break encryption. We do not know whether P equals NP; we believe it is not the case, and so here we are. So what are the current industry standards in terms of cryptography? As we can see in the Wikipedia table, commonly used protocol versions such as TLS 1.2 and older are still in wide use, alongside TLS 1.3, which makes stronger assumptions about the public-key algorithms.
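The intractability claim about RSA above can be illustrated with a toy sketch (my own illustration, not part of the talk's demos): the modulus N is public, and security rests on factoring it being infeasible. Here N is tiny, so brute force succeeds instantly; a real 2048-bit modulus would make this loop run for longer than the age of the universe.

```python
from math import isqrt

def trial_factor(n: int) -> int:
    """Return the smallest prime factor of n by brute-force trial division.
    Cost grows with sqrt(n), i.e. exponentially in the bit-length of n."""
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return d
    return n  # n itself is prime

# Toy "RSA" setup: in real RSA, p and q are ~1024-bit primes each.
p, q = 2003, 3001
N = p * q                 # the public modulus
print(trial_factor(N))    # recovers p — feasible only because N is tiny
```

The same loop against a 2048-bit N is what "computationally intractable" means in practice for classical machines.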
There are no principal differences in the core encryption algorithms, so the problems discussed in this talk remain relevant. So let's get a very brief, hot-air-balloon-level view of quantum computing. Take note of this sphere on screen; it'll be more relevant later. It's called the Bloch sphere. Taking a short walk starting from the 1981 Feynman paper, 41 years ago: the first small-scale quantum computer was made in 1998 with three quantum bits, or qubits. Over the next decade, starting with the first execution of Shor's algorithm in 2001, major hardware advancements were made exploring the different ways to make a quantum bit. We can clearly see that while this is a very young technology, the rate of advancement has been increasing rapidly. Most notably, in 2019 Google laid claim to achieving quantum supremacy, the long-sought-after exponential improvement over a classical computer. They performed a random circuit sampling computation in 200 seconds that a standard computer would take 10,000 years to solve. IBM, their main rival, disputes this, saying that it's actually closer to two and a half days. That being said, it's probably good to take both claims with a grain of salt. Now let's talk about some of the relevant phenomena that make this all possible. Diving into the subatomic scale, it has been observed that when two particles, such as a pair of electrons, become what we call entangled, they remain connected even if separated by a distance. As a result, the state of one cannot be described independently of the other: measuring the state of one particle will tell us something about the other particle. Repeated experiments have confirmed that this "spooky action at a distance" occurs immediately. Superposition is a quantum phenomenon which is counterintuitive for us classical beings. One not-scientifically-correct way to imagine what superposition feels like is to imagine being in multiple states simultaneously.
Spend the weekend both camping in the forest and on the beach, wear a green and a red shirt, drive two cars at the same time, and so on. This makes little sense to our classically calibrated minds, but it turns out to be useful, as we shall soon see. If we think about the information that can be stored using this phenomenon, superposition allows a fundamental particle to store exponentially more information than its classical counterpart. For example, to describe four bits, a classical computer has to store all 16 possible configurations, a total of 64 bits, while a quantum computer only needs four qubits. Now, let's talk about how the quantum model of computation builds off these phenomena. If you haven't seen the comic "The Talk" by Scott Aaronson and Zach Weinersmith, I can't recommend it enough. It's a pretty funny take on common misconceptions like "quantum computing tries all solutions at once." Essentially, we can think of quantum computing as an abstraction of the properties of quantum mechanics into generalized rules of probability. Each probability has an amplitude, similar to a wave. As a result, a quantum algorithm is a complex, choreographed dance of probability waves that either cancel each other out or amplify each other, depending on the answer you're trying to output. When using classical bits, we have to store every possible configuration of those bits, as everything is in a definite state. On the other hand, quantum bits are in a state of superposition, so we can think of their values as probabilities of being in a specific state. How do we represent those probabilities? By complex numbers, whose summed probability always adds to 1. Take note of the formula at the bottom; it'll be relevant on the next slide. Take an example state with amplitudes 0.5 and minus 0.5. We can represent that like the bar graph in the middle. To get the probability of being in the 1 or 0 state, we then use the formula from the last slide.
The probability of being 0 or 1 is then the squared value of the amplitude, or 0.25 each; the total probability across 0 and 1 is then 0.5. Let's apply this intuition to a fundamental particle in a quantum bit. Since all probabilities add to 1, we can show all possible states as vectors on the sphere we saw earlier. When subjected to the amplitude interference applied by a quantum circuit, we cancel out the possibilities we don't want, leaving us with the intended solution. The circuit we see in the middle is actually the first quantum search algorithm, Grover's algorithm. So that's fine and all, but how does this actually get us the answer? Why does this work? Shor's algorithm uses a special version of a powerful mathematical trick called the inverse quantum Fourier transform. Imagine a car racing an unknown track from A to B. It would be very complex to predict exactly, to the second, how long it would take to get there. This mathematical trick turns this into a car racing around a closed track, where the time it takes to complete a lap is the period, which encodes the private key. The quantum circuit then solves for this period, giving us the answer. So, quantum computing currently: classical bits have been standardized for decades, but there is still no qubit standard. All current working models are still subject to decoherence effects due to subatomic interaction with the outside world. While we may still be in the early days of this field, it's crucial for encryption to invest in new security paradigms to get ahead of the threats quantum computing can pose. In 1994, Peter Shor proposed a quantum algorithm to solve the large-integer factorization problem.
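The amplitude-to-probability rule just described can be sketched in a few lines (my own minimal illustration, not from the talk's demos): the probability of observing a basis state is the squared magnitude of its complex amplitude, and a fully normalized state's probabilities sum to 1.

```python
def probabilities(amplitudes):
    """Born rule: the probability of each basis state is |amplitude|^2.
    Works for complex amplitudes too, via abs()."""
    return [abs(a) ** 2 for a in amplitudes]

# The slide's example: amplitudes 0.5 and -0.5 each give probability 0.25.
print(probabilities([0.5, -0.5]))            # [0.25, 0.25]

# A fully normalized single qubit needs |a0|^2 + |a1|^2 == 1,
# e.g. amplitudes 1/sqrt(2) and -1/sqrt(2).
state = [1 / 2 ** 0.5, -1 / 2 ** 0.5]
assert abs(sum(probabilities(state)) - 1) < 1e-9
```

Note that the minus sign vanishes when squaring: the sign only matters for interference, where amplitudes add before being squared.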
If a quantum computer with a sufficient number of qubits could operate without succumbing to quantum noise and other quantum decoherence phenomena, then Shor's algorithm could be used to break public-key cryptography schemes such as RSA, the finite-field Diffie-Hellman key exchange, and other cryptosystems which rely on the fact that the private key cannot be discovered or reproduced by brute force. Based on Shor's original 1994 work, continued research by Gidney and Ekerå estimates that 2048-bit RSA encryption could be broken in approximately eight hours using a quantum computer. Considering that RSA is considered computationally intractable for classical computers, this is truly unbelievable. As computational methods improve and the computational power of quantum computers grows, this is becoming closer to reality by the day. This means we have to change the nature of the mathematical problems we have been using at the core of modern cryptography thus far. As we will soon see, a new family of cryptographic algorithms has to be used. While the mathematical details upon which post-quantum cryptography algorithms are based are not covered in this talk, we do want to leave you with an intuitive understanding of the differences. Solving the mathematical problems used in classical encryption paradigms, like integer factorization, can be intuitively envisioned as finding a branching path in the 2D graph shown here. On the other hand, the problems used in post-quantum paradigms require us to find a solution contained within a higher-dimensional lattice.
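The "lap time" intuition for Shor's algorithm can be mirrored classically in a short toy sketch (my own illustration, not the talk's code): find the period r of a modulo N, then turn that period into factors. Classically, finding the period takes exponential time in the bit-length of N; that period-finding step is exactly what the quantum Fourier transform accelerates.

```python
from math import gcd

def order(a: int, N: int) -> int:
    """Smallest r > 0 with a^r ≡ 1 (mod N): the 'lap time'.
    This brute-force loop is the classically expensive part."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N: int, a: int):
    """Turn the period r into factors of N (works when r is even
    and a^(r/2) is not ≡ -1 mod N)."""
    r = order(a, N)
    if r % 2:
        return None
    y = pow(a, r // 2, N)
    return gcd(y - 1, N), gcd(y + 1, N)

# Textbook example: the order of 7 mod 15 is 4, which yields 15 = 3 * 5.
print(shor_classical(15, 7))  # (3, 5)
```

Everything after the period-finding loop runs in polynomial time even classically, which is why the quantum speedup of that one step breaks RSA.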
Extracting the private key by solving such a problem requires us to find a solution contained within a space of exponentially larger cardinality, which makes it potentially computationally intractable even for a quantum computer. So let's start talking about how we can solve some of these problems practically. Our approach was to tackle the most commonly used open-source software packages for web authentication, encryption, and communication. In order of development, it was NGINX for a proof of concept and to develop robust benchmarking methods; then we moved to Envoy to solve the same problem in a more complex system; and then finally to Istio to deploy that system across a mesh network of microservices. All of this was made possible using the post-quantum library developed by Open Quantum Safe. So we first began with NGINX, the lightweight, efficient implementation with multiple utilities. We focused on TLS 1.3, as it's the most recent standard. It simplifies previous protocols to one round trip, as more information is contained in each communication between server and client. TLS in the post-quantum implementation is mechanically identical to a standard implementation. This is important for compatibility and a seamless transition between encryption paradigms. So let's see what this looks like played out. We can spin up the terminal on the right, and we're going to attempt to make secured and unsecured requests using a standard browser. The secured and unsecured requests using the standard browser will fail, because it doesn't contain the post-quantum certificates that are necessary. However, we can use a post-quantum-enabled fork of curl to return the webpage that was originally in that second terminal.
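A sketch of how such a post-quantum curl request might be assembled (my own illustration under assumptions, not the talk's actual query script): the install path, the `kyber768` group name, and the certificate filename here are hypothetical placeholders; the flag choice follows the convention in the Open Quantum Safe curl fork of reusing `--curves` to select a post-quantum key-exchange group, so adjust all of these to your own build.

```python
import subprocess

# Hypothetical locations/names — adjust to your OQS curl build.
OQS_CURL = "/opt/oqssa/bin/curl"   # assumed install path of the PQ fork
KEM_GROUP = "kyber768"             # assumed post-quantum key-exchange group

def pq_curl_cmd(url: str, cacert: str = "CA.crt") -> list:
    """Assemble the curl invocation used to reach a PQ-enabled NGINX.
    --curves selects the key-exchange group; --cacert trusts the
    post-quantum CA certificate a stock browser lacks."""
    return [OQS_CURL, "--curves", KEM_GROUP, "--cacert", cacert, url]

cmd = pq_curl_cmd("https://localhost:4433")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment on a machine with the fork
```

This also shows why a stock browser fails: it has neither the post-quantum groups nor the certificates, while the fork carries both.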
Then, using an open-source browser called Epiphany which is loaded with the post-quantum certificates, we are able to see that the unsecured request works and the secured request accesses the NGINX backend. So now that we have this quantum-resistant version of NGINX, we want to make sure that its performance is comparable to NGINX used with standard encryption algorithms. We have done extensive benchmarking, and instructions to replicate this on your own machine can be found by following that QR code to our GitHub repository. In addition, you can also view all the results we've gathered so far. In July of 2022, the National Institute of Standards and Technology announced the four winning algorithms from their six-year effort calling on the world's cryptographers to find the best algorithms that could resist an attack by the quantum computers expected to exist in the future. The winners for digital signatures were Dilithium, Falcon, and SPHINCS+. Extensive research has been done on these algorithms, and we chose Dilithium for our prototyping, as it is the fastest of the post-quantum signature options based on the performance reported in the literature thus far. I will be demoing how we did our benchmarking for NGINX using the Dilithium algorithm, and how it compares to NGINX using standard RSA encryption. The benchmarking I am going to show you was run on a clean installation of Ubuntu 20.04. For the NGINX benchmarking, we want to compare vanilla NGINX to NGINX built with the Open Quantum Safe fork. It was not only important that the Open Quantum Safe fork of NGINX had the proper functionality, but also that it had comparable performance to vanilla NGINX in terms of factors such as load and latency. The NGINX version we are using is NGINX 1.20.0, and we use OpenSSL version 1.1.1.
To do our benchmarking, we used a tool called h2load, which we built locally and configured with the post-quantum library. The instructions for this configuration can also be found in our public GitHub repository. Now I'm going to walk through a demo of our benchmarking. We start by running the init shell script, which builds NGINX with OQS, and then we run the query shell script, which exercises the OQS fork. As you can see at the top, the output shows that the TLS 1.3 handshake was performed and then terminated, and that it's running with the dilithium3 algorithm. Once we have that all set up, we can start to run the h2load test. We recommend using our test script instead of just running the h2load command on its own: we want to see an average of around 10 runs, and our shell script computes that average for you, because NGINX sometimes takes a while to warm up, and averaging makes the results more valid. Within the test script we created, you can change parameters such as the number of concurrent connections as well as the number of threads. Now I'm going to show you what happens when you run the h2load command on its own. As I mentioned earlier, we don't recommend this, but it's good to see. By running the h2load command, the result you want to look for is where it says mean requests per second; that's the throughput for just one run. Here are some of the results that we found, and as I mentioned earlier, you can see these results in our GitHub. You'll be able to see our results from benchmarking regular NGINX as well as NGINX using the quantum-resistant algorithm Dilithium, and most importantly, you'll be able to view our comparisons.
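The averaging the test script performs can be sketched like this (my own illustration; the actual script is a shell script in the repo, and the summary-line format here is assumed from h2load's output style): parse the throughput figure from each run's summary and average across runs to smooth out warm-up noise.

```python
import re
import statistics

# Example h2load summary lines (format assumed); in the real script
# these come from repeated `h2load` runs against the server.
SAMPLE_RUNS = [
    "finished in 1.21s, 8264.46 req/s, 2.10MB/s",
    "finished in 1.18s, 8474.58 req/s, 2.15MB/s",
    "finished in 1.25s, 8000.00 req/s, 2.03MB/s",
]

def req_per_sec(summary: str) -> float:
    """Pull the requests-per-second figure out of one h2load summary line."""
    m = re.search(r"([\d.]+) req/s", summary)
    if m is None:
        raise ValueError("no req/s figure found")
    return float(m.group(1))

# Average the runs, as the test script does for ~10 warm runs.
mean_throughput = statistics.mean(req_per_sec(s) for s in SAMPLE_RUNS)
print(f"mean throughput: {mean_throughput:.2f} req/s")
```

A single run's number can easily be skewed by warm-up, which is why the talk recommends the averaging script over a bare h2load invocation.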
The figure on the left shows the NGINX performance comparison for one client thread, and the figure on the right shows the NGINX performance for eight client threads. Note that the eight-client-thread figure starts at 10 concurrent client connections, because you cannot have fewer concurrent client connections than client threads. These charts show, as predicted, that the throughput for vanilla NGINX is slightly higher: throughput is only 8 to 13% lower for NGINX using the quantum-resistant algorithm compared to NGINX using the standard RSA algorithm, and these results map closely to prior research studies testing the same items. One thing that's really nice about the h2load tool we used for our benchmarking is that it's highly portable, which means we can use it to test different algorithms as well as different proxies. So we also used it to test the Envoy proxy, and to do this we ran pretty much the same commands that we did for NGINX, with a few minor changes. Some visualizations of the benchmarking we did on Envoy can be seen here. Similar to the NGINX benchmarking, the throughput of Envoy using the dilithium3 algorithm differed by only about 10% from Envoy using standard RSA; this is an acceptable outcome based on the performance reported in the literature and is what we expected. So next in our development cycle was Envoy, the high-performance open-source server optimized for cloud and cloud-native application hosting. Modifying something so foundational to Envoy's capabilities was a little like teaching it a new language. We started with the latest open-source Envoy, then forked the most recent BoringSSL, modified to utilize the entire OQS encryption suite. The Envoy dependencies were then modified to compile and load the static libraries.
Building from source using Bazel was then essentially identical to standard Envoy versions. As a result, the source code was left largely unchanged at the final commit, so all capabilities remain enabled and pass build tests. On screen you can see the commit hashes if you want to follow along yourself in the repository. Here's a sample architecture of how Envoy can be deployed as a front proxy for two services, in this case Flask backends. We'll see a demonstration of this shortly. Just a reminder: all these QR codes will take you straight to the repos. So let's see a live demonstration. Once again, all of the demos come with initialization scripts; we'll see the Docker containers being spun up, and we can query this using the query script. This once again invokes the post-quantum fork of curl, so it has the proper algorithms for access. So we pass the q flag, and we can see that the TLS handshake is performed and terminated correctly, and the JSON is returned containing the information of the page. Okay, let's see that front proxy at work. We'll spin up the containers in one terminal, and in the next terminal we'll make the request using the post-quantum curl. We can see that it returned service one at the bottom, and then for the second one it says "hello again from the backend" from service two. The final stage in development was Istio, the open-source service mesh platform that deploys Envoy as a sidecar. I hope you enjoy this; that animation took me like an hour and a half to make. Since Envoy is deployed as a sidecar by Istio, we didn't have to modify the Istio source code other than updating the repository targets. Building Istio then uses two repositories: the proxy and the main Istio repo. We make istio-proxy against the Envoy image we've already developed, and then make Istio using that modified proxy. So why Istio at all?
Well, North-South encryption can easily be handled by NGINX or other L4 proxies, but what ensures data-transit security within a cluster? Sometimes this is driven by industry-specific regulations such as PCI compliance or HIPAA; in other cases, it's driven by internal data-security requirements. Once inside the cluster, this is where Istio comes in, providing critical East-West authentication and authorization via mutual TLS. So let's take a look at how this works using Istio's Bookinfo demo. There are some long wait times as the containers spin up, so they're fast-forwarded; if it looks a little rapid at times, that's why. Once we start the minikube cluster, we install the default Istio configuration settings, as we can see right there. Then, once we set the Envoy sidecar to be auto-injected into the pods, we can deploy the Bookinfo application. Once those are up and running, in a second terminal we start up the minikube built-in load balancer. Here we go. And it's up. Once that's going, we go back to the main terminal and run some commands to return the IP address for accessing the application. Finally, using the post-quantum curl, we retrieve the Bookinfo application and see at the top that it's accessed using the dilithium3 algorithm. So now what? Thinking about cloud-native production systems, it is clear there is much work to be done. Cryptosystems across the stack must be upgraded eventually. This process, as we have shown here, involves multiple dependencies and requires careful planning and execution. Eventually, the entire stack should be upgraded. To better understand the scope of the work that needs to be done, let's look at the major cloud providers. They've already been working on this transition for a few years. In the images, right to left, is the key hierarchy of document encryption on GCP, from the unencrypted document all the way down through all the keys to the root key-management-system master key.
The cryptosystems, the certificate authorities, basically all parts of the public-key infrastructure have to change. Similar to GCP, in the AWS key hierarchy, starting from the document all the way to the HSM, again all layers have to change. If you like turtles, these are only the bottom ones we've seen. Our community contains multiple projects composing the best-of-breed modern cloud-native stack which we all use. Think about what is running on cloud infrastructure: Kubernetes, service meshes, proxies, sidecars, various repositories, compliance frameworks, policy frameworks, registries, workload identity, and attestation frameworks. The list goes on. In order to move forward with post-quantum encryption across the cloud-native stack, we propose to form a technical advisory group in or alongside CNCF TAG Security, in order to coordinate and govern a community effort to deliver the required changes. This work starts as a joint project between industry and academia. There may be a continuation of this work to migrate more CNCF projects in the future, and we are happy to talk to maintainers of projects that would like to pilot migrations. Most importantly, this is too big an effort to complete without community support. Please reach out and consider participating in this broad initiative. Thank you. We have time for questions and discussion if you're interested. Thank you for the talk. Do we have any real understanding as to why the performance of Envoy took a greater hit than the performance of NGINX? It looked like the numbers were not statistically significant, but there was a fair bit more drop in Envoy than there was in NGINX. Yeah, we can go back to this. I think you're referring to this right here versus the NGINX performance. We can see NGINX maps to those literature performances very closely, likely because it's a much smaller implementation. When we go to Envoy, we see that at some levels it takes a greater than 10% hit.
That's because when we deployed Envoy for the benchmarking, it used two containers: one running Envoy as the front proxy and one running an echo server for the backend. That echo server added some slight latency, which is likely why it went from about 8 to 13% up to maybe 13 to 17%. Okay. Thank you. This is more of a curiosity question: do you know if NIST is considering certifying any quantum key distribution algorithms in the future, or is that just too far off based on what we have with quantum computers today? Could you repeat that middle part? Yeah, I'm wondering if NIST is considering certifying some quantum key distribution algorithms in addition to whatever ones they come up with for things that are secure against quantum computers. I think there are a few key exchange mechanisms that were selected; I think there was at least one, and I believe there are three or four. If you really want the full information on these algorithms, you can go to the Open Quantum Safe GitHub, because they list all the ones that are NIST level two and three certified. There are also hybrid algorithms, like a combination of RSA 3072 and Dilithium two or three. Right, so you can combine RSA or elliptic-curve algorithms with the post-quantum ones as well. Okay, cool. Thank you. Are there practical implementations of Shor's algorithm breaking RSA and elliptic curves? Only in simulations. I think the largest number they've factored so far is 51, which is 17 times 3. Okay, so it's getting there. And what is the size of these lattice groups? I mean, I haven't really studied post-quantum cryptography, but what is usually the size of the public keys, for Dilithium three for example? Can you repeat the question? What is the size of the keys? Oh, the size of the keys.
Oh yeah, they're about two to three times the size. That was part of the difficulty in getting Envoy to recognize those keys and certificates, because there are a lot of functions built into BoringSSL that specifically look for the standardized certificate format: certain leading characters, certain trailing characters, and certain components. So having a key that was three times the size was one of the biggest impediments to actually getting it recognized. It is a much larger key. Thank you. Do you have another question? What language support would be available for OQS? I'm assuming, since you've written it and added it to Envoy, it's C++-based. So OQS is mostly written in C, so all the header files are in C. Envoy does a lot of its build tooling in Go; however, it properly queries, builds, and compiles against those C headers. Okay. Thank you. Well, if there are no more questions, thank you all for coming. Thank you.