in this track focus on research topics. My name is Lukas Manina, I am from Brno University of Technology, and I am here to talk about cybersecurity in the post-quantum era. Let me first say a few words about our project, which is dedicated to this topic. The project is run with Estonian partners and is a cooperation between Brno University of Technology and Red Hat. It is mainly about building networks and good relationships in research and innovation. We have six research challenges, and one of them is of course post-quantum cryptography, or quantum-safe cybersecurity, which will be the topic of today's short talk of approximately 35 minutes.

So what about our motivation? As you know, there is a lot of news today about developments and innovation in the field of quantum computing, and we have to ask what that means for current cybersecurity and for the protocols we use every day. It means a lot, because if a really functional quantum computer arrives, then everybody has a problem: everyone uses secure connections over HTTPS, right? So everyone should care whether a large quantum computer appears in the world. What that means technically is what we will talk about.

So, quantum computers and the current state. There are many big companies trying to build a quantum computer, for instance Google, D-Wave, and of course IBM, but none of these machines is yet capable of breaking everything in cryptography. Right now the machine with the most qubits is D-Wave's, but D-Wave is based on a different approach, more or less an analog (annealing) computer, and even though it works with thousands of qubits it is not able to run Shor's algorithm, which is what is essential for breaking current asymmetric crypto. Other companies and teams, not only IBM and Google but also universities and research groups, are working on true gate-based quantum computers, and today we have functional machines with hundreds of qubits. The good news is that such low qubit counts are not enough to run Shor's algorithm and break asymmetric crypto.

About Shor's algorithm: it was defined in 1994 by Peter Shor, and it basically jeopardizes all asymmetric crypto. Why? Because it can solve the basic mathematical problems used in asymmetric crypto: it breaks integer factorization and the discrete logarithm problem. This means that everything we use for certificate-based authentication and for key exchange will no longer be secure, and we need to find substitutes and secure options.
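To put that threat in concrete terms, here is a standard complexity comparison (textbook figures, not from the talk): the best known classical factoring method, the general number field sieve, is sub-exponential in the size of the modulus, while Shor's algorithm needs only a polynomial number of quantum operations.

```latex
% Cost of factoring an n-bit modulus N (n = \log_2 N):
% classical general number field sieve (heuristic) vs. Shor's quantum algorithm.
T_{\mathrm{GNFS}}(N) \;=\; \exp\!\Big(\big(\tfrac{64}{9}\big)^{1/3}(\ln N)^{1/3}(\ln\ln N)^{2/3}\,(1+o(1))\Big),
\qquad
T_{\mathrm{Shor}}(n) \;=\; O\!\big(n^{3}\big)\ \text{quantum gates (with schoolbook arithmetic)}.
```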
But as I mentioned, we still do not have quantum computers that are able to fully run Shor's algorithm. There is an estimation that if you would like to break RSA with 2048-bit keys, you need a really functional quantum computer with roughly 4,000 perfectly working qubits, and then you can break it in seconds. That is still a big problem for attackers: today we only have hundreds of qubits, and quantum computers still work with a certain probability of error, while that estimate assumes error-free operation, because you need to run the whole process without mistakes to break the key in a couple of seconds. So this is good news for us security experts: we still have time to prepare.

There is also another, less known algorithm called Grover's algorithm, and it jeopardizes symmetric crypto as well. Why? Because it is more general: it speeds up brute-force search for secret keys. Grover's speedup is only quadratic, though, so an n-bit key still costs on the order of 2^(n/2) quantum operations, and we have a simple countermeasure: just use longer keys, for example 256-bit instead of 128-bit symmetric keys. Symmetric crypto and hash functions are therefore still fine, because we can simply increase the key sizes.

So what about approaches and countermeasures against quantum computing threats to cybersecurity? We basically have two options, two fields, that can help us in the quantum era. We can use quantum cryptography, also known as quantum key distribution (QKD). This approach is quite old and it only serves to exchange keys; you may know the Bennett-Brassard (BB84) exchange protocol. You need expensive devices: these Alice-and-Bob device pairs cost around a hundred thousand euros, so it is very pricey, and it only covers key exchange. But what about signing? We need to sign software updates and packages, we need to authenticate, we need certificates, and quantum cryptography based on QKD has no answer for this. Which field does have the answers? Post-quantum cryptography. Post-quantum cryptography can run directly on current computers and platforms, because it simply uses different mathematical problems and constructs cryptographic schemes that can withstand Shor's algorithm and quantum computer attacks. So post-quantum cryptography has a cure for all the problems we have, and therefore NIST started a standardization effort about five years ago and announced open calls to find new standards that can be good substitutes in the future.

So let us talk directly about post-quantum cryptography. As I mentioned, this crypto solves everything in cybersecurity and prepares us for the post-quantum era. A fun fact is that post-quantum cryptography has been here for a very long time; some schemes are about 40 years old, but only now, when quantum computing is booming in the news, have many researchers started to pay more attention to it. Post-quantum crypto can be divided into a couple of families, a couple of approaches, and we will focus on those that are also present in the NIST standards. The biggest family is lattice-based cryptography.
This crypto uses lattices, which are high-dimensional grids, and it relies on hard problems where even a small change of coordinates makes it very difficult to find the secret, forge a signature, or decrypt the ciphertext; the schemes are built on these assumptions. The figure just illustrates a three-dimensional lattice, but it is not easy to imagine how it really looks, because the actual schemes use many more dimensions; this is only an illustrative picture with a very simple lattice. As lattice-based crypto is very popular, there are many schemes and proposals, and many of them were also submitted to the NIST post-quantum crypto calls. Here is a list; I do not want to be too specific and bore you, it is just to give you an idea of how many schemes showed up.

Another very remarkable family is code-based cryptography. This family is as old as the common asymmetric schemes such as RSA or DSA. It uses error-correcting codes, which you may know from the link layer: when you send a message, correction codes fix some transmission errors. This family is also usable for the post-quantum era. The last big family is hash-based cryptography. Why is this family so popular and important for us? Because these schemes are also quite old and time has shown that they are safe, so this is something we will use in the coming years.

Here is a comparison of all the families. At the top you can see the more mature families, the old and widely used ones. Code-based schemes are in the middle, but only because some schemes appeared later, after McEliece; lattice-based crypto is also almost 30 years old, but some of the newer schemes used in the standards are quite young and still need to be explored to make sure they do not contain mistakes and errors. Multivariate cryptography and isogeny-based cryptography are less mature, and several teams have shown that these schemes sometimes contain bugs and are not safe; even some schemes that were considered for standardization were proven insecure this year and had to be withdrawn from the competition.

That brings us to the NIST post-quantum standardization. In the first round there were, I don't know, maybe more than 40 submissions, but some of them were eliminated and did not advance to the second round; in the second round there were 26 semifinalists, and last year NIST announced four candidates for the standards. What does that mean? It means a lot: NIST now points to a few schemes and says these will probably be the next RSA, the next DSA signatures. So from last year on, everybody in the field works with these schemes, implements them in hardware and software, and adds them to libraries; we will show some of the progress in libraries later in the talk. The red ones are the winners, the candidates for standardization. The interesting fact is that NIST chose only one candidate for key exchange, the substitute for Diffie-Hellman exchange, and that is the Kyber scheme. For digital signatures NIST chose three schemes: Dilithium, Falcon, and SPHINCS+. You can also see that the lattice-based family has
candidates for both purposes, and among the digital signature schemes there is also a hash-based one, SPHINCS+. These schemes have their own pros and cons, and I assume that NIST will standardize all of them but will recommend different schemes for different use cases. Because they picked only one key-establishment scheme, Kyber, they also kept the door open for the alternate schemes in orange, Classic McEliece, BIKE, and HQC. Why? Because if some bug or mistake is found in the lattice-based family in the future, we can simply switch to something from a different family, the code-based McEliece or BIKE.

We, and many other researchers in the field, did some estimations and evaluations, because we are curious which schemes will be good candidates that do not hurt performance too much and do not add too much overhead, by which I mean overhead in sizes. Post-quantum crypto is not just about a performance hit, with algorithms that are quite heavy in terms of CPU cycles; these schemes usually have much bigger keys and, of course, much bigger signatures. Where we use only tens of bytes for ECDSA signatures, or a couple of hundred bytes for RSA, we will now need thousands of bytes per signature. These signatures are much larger, and that could be a problem for fields like IoT, for constrained devices, or for constrained protocols such as Sigfox or LoRa, where you have only a few bytes per message exchanged between nodes. These are just the numbers; you can check the presentation, I put it in the chat, so we do not need to spend too much time here.

Now let us look at how institutions across the world view post-quantum and what should be done. As I mentioned, NIST started everything. Then the American NSA, the National Security Agency, released a new proposal, the Commercial National Security Algorithm Suite version 2.0, which says we should start using post-quantum cryptographic schemes directly, put them into all protocols, and stop using the old common schemes like RSA and ECDSA. European institutions, such as the French ANSSI and the German BSI, and perhaps also the British, have a somewhat different view. They are more cautious, more conservative, and they say: let us start exchanging these schemes, but let us do it with a hybrid approach, implementing the new schemes in parallel with the old ones, doing hybrid signatures and hybrid key exchange, because they are not yet convinced that post-quantum crypto will remain safe for the next 20 years. But all these institutions announce that the transition, the migration, should start soon, around 2025, and should finish by 2030, so it starts pretty much the year after next, right? So we should be prepared, we should take this into account in our projects, and the Czech NÚKIB also works with these recommendations and will soon announce its own view of how it should be done in the Czech Republic. After 2030 I think only post-quantum crypto will be used; unless it is proven that a quantum computer is not feasible to build, in which case we are fine and do not need to do it, because post-quantum crypto adds bigger sizes and more cycles. But it seems that we will migrate.
In this table you can simply compare what should be substituted with what. For key establishment, when a session is being set up, the old RSA, elliptic-curve Diffie-Hellman, or Diffie-Hellman will be substituted by Kyber; this is what the Americans, the NSA, say, and it is not necessarily the final word of what will be recommended by the Germans, the French ANSSI, and so on. For digital signatures, used for instance in HTTPS, Dilithium will be recommended, and, what is interesting, for digital signatures of software and firmware updates the hash-based schemes will be recommended, concretely the Leighton-Micali signature scheme (LMS) and the eXtended Merkle Signature Scheme (XMSS).

As we are approaching the end, I will be quicker now. This is just the timeline I already talked about: everything starts in the next two years, then there is a window of about five years in which everything should be transformed to post-quantum, and after 2030 only post-quantum solutions will be accepted. This is the NSA plan, and I think European institutions like ENISA, ANSSI, BSI and so on will strongly recommend the same.

Here we have a non-exhaustive list, just some examples, of the libraries that are already available to you as developers if you would like to add post-quantum crypto. The most famous is the liboqs library; this is the core library from the Open Quantum Safe project, and I think a lot of teams and a lot of, let's say, TLS and OpenSSL integrations use it. There are more libraries, and you can check them offline in my presentation.

Finally, in the last part of my talk, in a few minutes, I will go through the protocols that are used today and how they should be changed to withstand quantum attacks. Let us start with the less known MACsec protocol. MACsec works on the second layer; it is not very common, and I think only a few of you may have heard about it. MACsec works with Ethernet frames; its encryption is fine, it uses 256-bit keys, but for the key agreement there is Diffie-Hellman or RSA, and that is where we should use post-quantum schemes. Some recent works from the last one or two years started experimenting with implementing post-quantum crypto inside MACsec to figure out whether it will still work, and the conclusions are good: the researchers think MACsec is fine and can be moved to quantum-safe schemes easily.

What about IPsec? This is more widely known; it works on the third layer and is used mainly between routers, or between branch offices, for building VPN tunnels. IPsec is a big protocol, and for key exchange and authentication it uses the Internet Key Exchange protocol version 2 (IKEv2), which still has cipher suites using the common crypto. Some works have now shown that most of the recent candidates are fine here, except Classic McEliece, because Classic McEliece has huge public keys that also have to be exchanged, which could be an obstacle; so we need really efficient schemes with smaller keys, like the lattice-based ones.

And TLS, which is probably the most used protocol, because TLS is almost everywhere on the internet; it is used in HTTPS, and TLS also performs key exchange and negotiation before the session is established.
Elliptic-curve or RSA certificates should somehow be exchanged for post-quantum certificates, and there are a lot of studies showing that the Dilithium or Falcon signature schemes are fine here, and even hybridization works; the only problem is with frame sizes, but jumbo frames could help us with the TLS transition to post-quantum. SSH is also very familiar to you, and it also uses asymmetric crypto; the good thing is that SSH messages are designed to carry payloads large enough for post-quantum, so we can simply use almost anything from post-quantum crypto here; again, McEliece with its large keys could be problematic.

Last but not least, I put certificates into my presentation along with these main protocols. Certificates, as you know, have sizes of hundreds of bytes, and of course you can use a chain of certificates, which then reaches kilobyte sizes. In this picture there is a proposal of where amendments and modifications should be made in the X.509 certificate format; you can see there are many fields where we should make modifications and propose drafts to standardize new certificate formats ready to carry post-quantum cryptographic schemes.

Let me conclude my talk on time. We now know that a quantum computer may break current asymmetric cryptography; we know that we need to start preparing and that we have a couple of years to do it; we already have some standards recommended by NIST, and also by the NSA, the BSI, and the French ANSSI; and we know there are still open questions, whether we will do a straight replacement of common cryptography by post-quantum crypto, or be more conservative and take the hybrid approach. From recent works we know that some libraries and security protocols have already started to be developed and prepared for the quantum-safe era. So that's it, that's my talk; I hope you take something interesting away from it. Here are the references, and thank you for your attention. Maybe we have time for one or two quick questions, and then I will stay here and we can talk over coffee. Is there anyone with a question?

Yes. If I heard you correctly, you asked about symmetric cryptography, and your question was whether we should just increase the key sizes and it will be fine. Yes, I am sorry if I did not emphasize it strongly: that is correct, we only need to increase the sizes, but that holds only for symmetric cryptography; for asymmetric cryptography we need to switch to the new post-quantum crypto. I hope that, yes, it will be secure if you just double the sizes. Another question? If not, thank you again, and I think we can proceed with the next talk.

That's mine, yeah, but since we redirected the camera... Hello everyone, please welcome Radostin Stoyanov, a PhD student, who will talk about forensic analysis of container checkpoints. Today we'll be talking about our work with Adrian Reber on introducing container checkpoints in Kubernetes and how you can use them for forensic analysis. There was a talk a few years back at KubeCon introducing forensic container checkpointing, what tools can be used in production to respond to security incidents, and how these tools can be used with containers.
One thing mentioned during that talk was that containers did not support snapshotting; at the time this was still under development, and we have recently introduced a new feature in Kubernetes that allows you to take a checkpoint of a container. So in this talk I will first briefly cover the security boundaries in Kubernetes and the different threat models we are going to look at, then I will cover what container checkpointing is, how it works, how we can use it to perform forensic analysis, and what the limitations and some of the future work are.

There are two main areas of concern in terms of security for Kubernetes. The first is the configurable components of Kubernetes itself, and the second is the applications running in pods on the cluster. There are many components running on the control-plane node that can be configured, and every node also runs a Kubernetes component called the kubelet that is important to keep secure. Namespaces are used for isolation between different tenants, pods introduce a security context between the different containers running on the cluster, and containers isolate applications from the host environment itself. In addition, every application has its own network namespace that isolates its communication, DNS for example, Kubernetes makes secrets and tokens available to the application, and of course the application processes sensitive user data.

The three main threat models we are going to look at are the following. First, an external attacker has access to an application over the network; in this case the security controls commonly used are encrypting all network traffic and using authentication and authorization for all APIs. Another aspect is when an attacker has compromised a container in the cluster, or has been able to use a malicious container image to run a malicious container on the cluster; in this case the attacker can try to escape the container or perform privilege escalation to take over the whole cluster, and the common security controls are limiting the privileges available to containers, limiting access to the kubelet running on the node, preventing applications in containers from loading kernel modules, and restricting what network access the applications have. Another threat model is when an attacker has been able, for example, to steal the keys for accessing the Kubernetes API server, so they would be able to create pods and containers in the cluster; in this case the controls are role-based access control, limiting what a user in Kubernetes can do, and limiting the quotas of resources that can be allocated to a single user.

The problem here is that real-time monitoring systems for Kubernetes do not currently support taking a snapshot of the applications running in a container and using it to analyze what happened during a security incident. Container checkpointing can capture and preserve this state, and it can be used to analyze what happened at a specific point in time, but we also need advanced tools to analyze that state. So how do we enable container checkpointing, and how do we use it?
In Kubernetes there can be multiple pods running on a cluster node, every pod can have multiple containers inside it, and every container has a process tree, essentially a set of running processes. When the container engine, in this case CRI-O, is invoked to perform a checkpoint, it calls the container runtime, in this case runc, and runc calls CRIU. CRIU creates a snapshot, essentially serializing the runtime state of all processes running within the container, and this state can then be used to restore the container from the point in time when the checkpoint was created, but it can also be used to analyze what the processes were and which files or network sockets were open at that particular time.

To enable checkpointing in Kubernetes, CRIU has to be installed on every Kubernetes node. The checkpointing feature was introduced in CRI-O version 1.25, and we currently have a pull request for containerd. CRI-O has to be started with the option that enables CRIU support, and the ContainerCheckpoint feature gate has to be enabled for the kubelet. Once this is enabled, you can send a POST HTTP request to the kubelet API and specify the namespace, the pod, and the container that needs to be checkpointed. This creates a checkpoint, essentially a tar archive that contains all the state of the container, in a default location, in this case /var/lib/kubelet/checkpoints, where you can inspect the state further. There is also some discussion about how to optimize the way we store checkpoints, because currently everything is stored in a single directory, and we want to limit the number of checkpoints created for a specific container so that periodic checkpointing does not take up the whole available disk space; for this we will probably create a subdirectory per pod, which is likely to come in a future version of CRI-O or the kubelet.

So I have a short demo of how this works. This is a Kubernetes cluster with, in this case, just two nodes, and we have a pod running a PHP application. I have a shell script that implements the checkpoint command for kubectl, and it allows me to list the different containers and pods running on this node. In this case we have this one pod with a single container called demo running inside it.
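For reference, here is a minimal sketch of what enabling the feature and calling the endpoint look like. The kubelet checkpoint endpoint and the /var/lib/kubelet/checkpoints location are as described in the talk; the node name, the pod name "mypod", the namespace "default", and the certificate paths are placeholders for this demo, and the exact CRI-O config key may vary by version:

```shell
# Kubelet: enable the feature gate (kubelet config file or --feature-gates flag)
#   featureGates:
#     ContainerCheckpoint: true
#
# CRI-O: enable CRIU support, e.g. in /etc/crio/crio.conf
#   [crio.runtime]
#   enable_criu_support = true

# Ask the kubelet to checkpoint container "demo" in pod "mypod" (namespace "default").
# The kubelet API listens on port 10250 and requires authorized TLS client credentials.
curl -sk -X POST \
  --cert /path/to/admin.crt --key /path/to/admin.key \
  "https://<node>:10250/checkpoint/default/mypod/demo"

# The resulting tar archive ends up under the default checkpoint directory:
ls /var/lib/kubelet/checkpoints/
```

kubectl itself has no built-in checkpoint verb at this point, which is why the demo wraps this request in a small kubectl-style shell script.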
So this script will send the POST request that was shown on the slide, and this creates a checkpoint in the default directory; in this case we have two checkpoints, because I tested this before the talk to see if it works. We can use a tool we developed, called checkpointctl, to inspect what is inside the checkpoint itself. This is a file-level overview showing the IP address of the container that was captured and the root filesystem diff size; these are files that have been modified by the application running inside the container, and we can see all these files, including deleted ones; essentially this captures the read-write layer on top of the container image that is used by the application. We can also see the checkpoint size and the timestamp of when the checkpoint was created. We recently introduced a feature that allows us to see the mount points, so essentially what has been mounted inside the container, and the different processes, so we can see the process IDs and process names; just a high-level overview of what is captured in the checkpoint.

If we untar the content of the checkpoint we can see all the files inside it; the image files that the checkpointing tool creates are in a subdirectory called checkpoint. Before that I will just show you the content: the rootfs diff is essentially the set of files that have been modified in the container. We can use a tool called crit, the checkpoint/restore image tool, to decode all the image files, and this allows us to see the details, the content of all the checkpoint images. What we are seeing here is the process tree image, which contains a list of all threads and processes and information about the process identifier, the thread identifier, and other details about what is actually running, what is included in the checkpoint. Here we also have something called ghost files, also known as invisible files: this is when a file has been deleted but there is still an open file descriptor for it, in which case the file is also included in the checkpoint and we can inspect its content. This is also important if we decide to restore the application and see how it will behave, what actions it is going to perform. There are many different image types, and we are currently developing the tools to analyze the state of a checkpoint further.

To go back to my presentation, these are the three main limitations I am going to focus on today. Kubernetes secrets are keys or tokens made available to applications running in a container so they can access resources, for example a database or other services in the cluster. When we create a checkpoint, since these are stored either as environment variables or in the memory of the application, they are also captured in the checkpoint, which means it is important to keep checkpoints secure so that we prevent leaking sensitive data such as keys. In the case of live migration and fault tolerance we actually want to keep this information in the checkpoint, because we use checkpoints to recover the application from failure or to move it to a different physical machine. In the case of fast startup, when we want to
improve the start time of applications by taking a checkpoint immediately after some index or data has been loaded in memory and then starting the application from the checkpoint, we do not want to keep secrets or passwords stored in memory, because we want to initialize every application instance with a different key; so in this case we need techniques or methods to remove the secret information from the checkpoint.

Another limitation is that an attacker can perform actions that make it more difficult to understand what was actually happening in the container, for example mimicking the behavior of trusted processes, or using existing processes within the container, which makes it harder to understand what was actually happening during the attack. An attacker can also perform a set of actions that are not related to the attack itself, which makes it more difficult for intrusion detection systems to detect the attack. And there are certain cases that CRIU does not support: certain system calls, certain network sockets, or nested namespaces are features CRIU does not handle, so if an attacker wants to prevent a checkpoint from being created, they could use these kinds of techniques.

Some future work: how can we use container checkpoints with intrusion detection systems, and how can this be used for preventing attacks? There are different aspects of checkpoints that can be used. One is to improve the visibility of tools such as Falco, to be able to see what is actually running in a container, but also to use this for forensic analysis after a security incident has occurred. Container checkpoints can also be used to detect certain actions and trigger an alarm, and a new restart policy could be introduced for containers to allow restarting from a checkpoint. Potential attacks that can be inspected with checkpoints are, for example, SQL injection, where we can detect when certain behavior is happening in the container, the same with command-line execution, or when an attacker is using a file inclusion attack, or when a malicious container is currently running on the cluster; we can use container checkpoints to detect this, improve the monitoring of the system, and introduce security policies that allow us to detect different incidents. Finally, I just want to acknowledge some of the participants who are going to work on this project and who have been contributing to this work. Thank you very much, and I will be happy to answer any questions.

So, this is something we have been discussing in the community, what the best approach to doing this would be. Yes, the way we have discussed implementing it is to introduce signal handling, so we can send a signal to the application running in the container and the application then has to drop the secrets. There is another project implementing checkpoint/restore for Java applications; they essentially perform certain actions before the checkpoint and certain actions after the checkpoint, which can help with this as well. So somebody sends a signal, and it can be triggered by... yes, triggering the checkpoint will trigger this inside the Java application, yes, but it is still in
development, it is still something we are working on. Yes? So, just to repeat the question: can we have a log that lists all the checkpoints and then use the checkpoints after an incident has occurred? Yes; a checkpoint allows you not only to see what was happening in the container, but also, for example, if an attacker runs something entirely in memory without touching the disk, which is something commonly done today, you can use memory forensics, essentially look at the memory content inside the checkpoint, and understand what the attack was.

Yes? So, I am not sure whether you mentioned it, but I am wondering how much overhead the checkpoints have, in terms of how much space and time creating one takes; I guess it can depend on the size of the container and so on, but how much overhead is it? That is essentially the overhead of CRIU, and CRIU is very optimized: the way it works is using the ptrace system call to enter the address space of the process, then it creates a Unix socket to the CRIU process running outside and uses splice, which avoids copying the memory of the process when writing it into a set of files, so it is very efficient; but of course it depends on the size of the container, for example how much memory the container uses, because we have to save all that state.

Yes? Yes, so essentially I think you need admin permissions at the moment; you are sending an HTTP request to the kubelet running on the node, so you just need to be authorized to perform the checkpoint. But if you can create a checkpoint of a container, then you have access to all the memory of that container, so it has to require admin privileges. Yes? Can we checkpoint containers one after the other, was that your question? Yes, this is actually something we are working on: how to synchronize checkpoints between different containers. We are introducing a synchronization mechanism that allows creating checkpoints of two containers at the same time, or, when the containers are running on different nodes, synchronizing the checkpointing between them, essentially for the case when you have a distributed application running on multiple nodes in the cluster. Thank you. Thank you, thank you everyone.

Hello everyone, thank you for coming to our slot today with Karm. My name is Christos, and we will talk about our joint work with Red Hat in the AERO project. As you see, I have a couple of jobs; the main one for this talk is that I am technically coordinating the AERO project, and Karm is with me. Hello, I'm Karm, I work as an engineer at Red Hat on making Quarkus compilable to native. Over to you. Thank you. So, a little context: in this presentation we are going to start from European policy and go all the way down to compilers, so it is going to be a rough ride, but stay with us.

So what is this all about? The whole story of how the European Commission sees the sovereignty of the European Union regarding chip manufacturing started, I would say, more or less with the pandemic and the fact that the majority of chip manufacturing in the world is concentrated in Taiwan. To solve these problems, both the US and the EU put forward legislation in order to create sovereignty. What does that mean? The European Commission in particular voted the European Chips Act, which is a roughly 50 billion
euro budget project, in order for the Union to be able to design, fabricate, and procure processors made in the EU, let's say. It is a big task, and it requires a lot of investment, both from individual countries and from the Commission. After this was voted, which was, let's say, a couple of months ago, the European Union had in parallel already started the preparatory work, meaning projects to build both the hardware and the software, and our project, which is called AERO, is part of this collection of hardware and software efforts. The goal of AERO is to optimize the software stack of a typical cloud application stack, things like Docker, Kubernetes, different runtimes and operating systems, so that it is ready when the hardware, these EU processors, comes to market. Essentially, imagine that there are projects building the hardware and the software in parallel, and at some point in the near future we will hopefully be able to use the outcomes of these projects to run EU cloud services and help companies within the EU migrate from Amazon or Azure to them.

In the AERO project we are many partners, and the idea is that we take a collection of software frameworks that exist in current cloud deployments and optimize them. The hardware ecosystem is going to be very heterogeneous: cores with accelerators inside the SoCs, and accelerators like GPUs and FPGAs connected around these cloud servers, and we try to optimize compilers, runtimes, and different frameworks for managing cloud operations. Of course it is a small project, we cannot solve all the software, but it is a good start toward a first implementation that makes it sensible for somebody to use these EU cloud services.

So what is this hardware? We don't know; I mean, we know, but not exactly. What we do know is that there are many projects creating different designs. There are projects under the umbrella called the European Processor Initiative, which creates different designs of cores, accelerators, interconnects, and packages, and these designs are funneled into other projects like ours, where we take those test beds and start bringing up the software. If you go to the European Processor Initiative website you will see many different streams: chips for HPC, chips for automotive, for IoT; some of them are Arm-based, some are RISC-V-based, depending on the project, and they experiment with different designs. Nevertheless, no matter which design somebody chooses, at some point we will have to run on it and prove its performance. We are not involved in the hardware design; we are, let's say, the consumers of these hardware designs, and we bring the software.

Our target is a hybrid of Arm and RISC-V. The first commercially available processor from the EPI project comes from a French company called SiPearl; it is a processor with Arm cores inside, they put in some RISC-V accelerators, and then you have PCI Express-connected GPUs from Intel and NVIDIA; and of course we have other FPGAs to experiment with more research-oriented things compared to current upstream software. I cannot talk, I don't have time at least, about the whole software stack we target, so I will narrow the discussion down to managed programming models and
runtimes, which are the University of Manchester's and Red Hat's areas of expertise. Here you can see the whole stack we try to cover. Regarding the runtimes, we target managed programming languages, like the JVM; although my talk says Java, in reality it is the JVM, so anything that runs on top of the JVM will benefit from this work. We target Java, a framework for microservices, and TornadoVM for accelerating Java, or JVM applications in general, on GPUs and FPGAs. Of course we also have the other stream, for native programming languages, with SYCL, DPC++, and oneAPI, which is done by Codeplay and Intel in this project.

Now I will go one layer down the abstraction and narrow the discussion to Java, or the JVM. What do we do in this project? We mainly try to optimize two frameworks. Of course we have the OpenJDK distributions that now have RISC-V ports from Alibaba and already have Arm support, and Red Hat is doing great work supporting the Arm builds; these are all more or less upstream, and people can download and use them. We take it a step further in this project and experiment with this hybrid of Arm, RISC-V, and accelerators, and with how that would look for developers and for the hardware. We focus on Quarkus, which will be the subject of Karm's part of the talk, and on TornadoVM.

TornadoVM is a framework you have probably not heard of; it is a framework from the University of Manchester where we try to bring higher performance through heterogeneous execution on the JVM. What is TornadoVM in one sentence? It is a JVM plugin. Although it is called a VM, it is not a new VM; TornadoVM by itself does not run anything, it needs a host JVM. So you have, let's say, Amazon Corretto, or Mandrel from Red Hat, or OpenJDK, you download it, you plug Tornado in, and it gives you a lightweight API that you can use to automatically accelerate code on GPUs and FPGAs; I will show you a little bit how later. Essentially it is an add-on to any JVM that supports JVMCI, which we can use to automatically accelerate code on accelerators.

It has two main features that we advertise. The first one is, of course, the API: you do not have to do a lot of work. If you have done GPU programming, currently you have to use JNI calls and CUDA kernels and manage memory manually, copying data from the Java heap to the actual accelerator. TornadoVM solves all of this; TornadoVM does not expose any hardware to the developer. We are strong believers in the original Java idea that you write once and run everywhere, and for us this "everywhere" is not only multi-ISA CPUs, but also GPUs and any modern hardware platform that exists. And of course we have automatic specialization: if we run, for example, OpenJDK on Arm or Intel, the compiler, C2 or the Graal compiler, whatever we use, has a specific compilation chain with intrinsics underneath, so each vendor adds its own specialization; this is what TornadoVM does as well. If you compile your code for NVIDIA GPUs it is going to be different than compiling it for Intel GPUs, so we detect the hardware and automatically specialize the code to run better on each platform. So how do we do it? Essentially we plug into an existing JVM, in
that case OpenJDK. Developers can use our API; we then take the bytecodes, go through the Graal compiler, build the IR, and then lower it either to OpenCL, to PTX for CUDA, or to SPIR-V for Level Zero, and each of those frameworks can target different kinds of devices. On the top right we have the different distributions and the different hardware vendors we support. Again, I would like to spend at least six hours on TornadoVM, but I cannot, so I will try to give a small idea of the model we use.

In any GPU programming model we essentially have to do two or three things, depending on how you look at it. First, we have to copy the data from CPU memory to accelerator memory; then we have to run the code there, the kernel, whether CUDA, OpenCL, or whatever; and then we have to copy the data back from the GPU's memory to the Java heap. This execution model is really simple to comprehend, I have to do three things, no problem, but in the JVM world these three things require a lot of code, a lot of code that you would have to write manually. We solve this by automating everything the developer should not have to care about. How? By having essentially two types of code in the TornadoVM API. The first one is the host code, the controller that does the play-making: which data goes where, how, and all the optimizations. The second is the compiler-generated kernel, the code that corresponds to the Java method we want to accelerate. We have the task graph, the structure with which we compose different tasks: in that case we have a method, say methodA in some class, and it is passed like a lambda function or method reference, so we do not change the Java code, we just pass it there together with some annotations, and the compiler picks it up, compiles it, and runs it completely transparently; there is no JNI or any manual work that has to be done.

We have two APIs for development, the loop-parallel API and the kernel API, and they are complementary. The loop-parallel API is for the case when you have a for loop that is very heavy and you want to accelerate it: you just put an annotation on it and we do the rest automatically. But if you are a power user, if you come from CUDA and you want to place your barriers, your local memory, all that GPU stuff, you can use the more advanced kernel API.

So I make everything sound perfect, right? Is TornadoVM going to solve all our problems? No. Why? Because we have to know when to use it; not all applications need the raw power of a GPU, only some of them. Example use cases we have are computer vision, ray tracing, machine learning, and face detection: as soon as you have a lot of compute, a lot of parallel computation and data to process, then it makes sense to consider GPU acceleration, and TornadoVM is, in our opinion, one of the easiest ways to achieve that.
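To make the loop-parallel API concrete, here is a minimal sketch of a vector addition written against TornadoVM's publicly documented task-graph API; the package names, class names, and method signatures are taken from my reading of the documentation and may differ between TornadoVM versions (newer releases, for instance, use dedicated array types instead of plain Java arrays), so treat it as illustrative rather than exact:

```java
import uk.ac.manchester.tornado.api.ImmutableTaskGraph;
import uk.ac.manchester.tornado.api.TaskGraph;
import uk.ac.manchester.tornado.api.TornadoExecutionPlan;
import uk.ac.manchester.tornado.api.annotations.Parallel;
import uk.ac.manchester.tornado.api.enums.DataTransferMode;

public class VectorAddExample {

    // Plain Java method; @Parallel marks the loop that TornadoVM may
    // compile to an OpenCL/PTX/SPIR-V kernel and run on an accelerator.
    public static void vectorAdd(float[] a, float[] b, float[] c) {
        for (@Parallel int i = 0; i < c.length; i++) {
            c[i] = a[i] + b[i];
        }
    }

    public static void main(String[] args) {
        final int size = 1 << 20;
        float[] a = new float[size];
        float[] b = new float[size];
        float[] c = new float[size];

        // Host code: describe data movement and the task to accelerate.
        TaskGraph graph = new TaskGraph("s0")
                .transferToDevice(DataTransferMode.FIRST_EXECUTION, a, b)
                .task("t0", VectorAddExample::vectorAdd, a, b, c)
                .transferToHost(DataTransferMode.EVERY_EXECUTION, c);

        ImmutableTaskGraph snapshot = graph.snapshot();
        new TornadoExecutionPlan(snapshot).execute();
    }
}
```

Such a program is launched with TornadoVM's own launcher (the `tornado` command) rather than plain `java`, so that the host JVM gets the JVMCI plumbing it needs.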
So I would like to conclude now by hopefully showing the ray tracing demo running; we just ported it from Linux to an Arm MacBook, so I cannot guarantee that it will run. Okay, so what do we have here? This is a scene written fully in Java, there is no C code here, that does ray tracing, and it renders in real time on the CPU, on a single CPU thread, at one to two FPS. If I go here and try to zoom, it is very choppy, because the CPU is struggling, working at full load, and if I zoom in you can see the shadows and the reflections of the light source on each ball. So this is not real time, this is useless, and you may ask why I wrote it; I will tell you in a second. Let's change the implementation: let's go from pure single-threaded Java to another Java implementation that uses parallel streams. Now I am at around 10 FPS, because this is a 10-core machine and I can scale out on it, and now it is easier for me to zoom in; before, I was zooming in but you could not see it because it was moving like a turtle.

So let's now go GPU. Now four magics happened. The first magic is that it did not crash, okay. The second magic you see here: the same Java code, the same Java code that was running on a single CPU thread; TornadoVM took it, compiled it to OpenCL, and ran it on the M1 Pro GPU, which is a really powerful one for the record, and now we are at 60 FPS. Now it is actually real time: you can zoom in, zoom out, and you can also change the shadows and the number of reflection bounces; the more bounces, the heavier it becomes. Again, this is pure Java code automatically compiled to OpenCL running on the GPU. The third miracle is that now all the computation happens on the GPU, so the CPU is sitting idle; so let's run a physics engine on the CPU and the rendering on the GPU. All these balls you see bouncing, this whole physics code is simulating on the CPU while the GPU does the rendering. The fourth magic is that I did not have to stop the application: I can change, while the application is running, whether the Java code runs on the CPU, on the GPU, or any combination, and this is, I would say, the strength of TornadoVM, which we call dynamic configuration. This is where the VM part comes in: TornadoVM internally has its own bytecodes, so it can recompile the code for GPU or CPU the same way that OpenJDK, or any JVM, recompiles code between C1 and C2 without stopping it; we follow the same ideas, but this time for heterogeneity. And I think my time is up; thank you very much, and I pass the microphone to Karm to talk about Quarkus.

Let's see whether they managed to switch the displays and whether my setup survives. Hello, one, one, one. I will be talking about Quarkus, which is a Java framework; it is part of the AERO project, and it is a suite of libraries tailored to be cloud native, which in our context means being very small, both in footprint and in resource consumption. I will be dropping some buzzwords, so let me check what is in the audience: if I say Hibernate, does it ring any bells here? Okay, a couple of them. Spring? Some Java libraries, okay. So this is Java on the server side, building Java applications mostly on the server side. GraalVM is a custom JDK with a custom JIT and the capability to compile various languages into a native executable. Mandrel is what our team is focusing on: it is a distribution of GraalVM that is made smaller because it focuses only on Java, it does not deal with other languages, and its main differentiator is that it uses Temurin JDK as its base and adds the native-image tool to it; so while you are using Mandrel, you are using the Temurin JDK you would download from Adoptium, without any additional patches. Native-image is the tool we will be talking about today; it compiles the suite of libraries in Quarkus and your
application into a native executable. So you can have your Java application that uses several database drivers, talks to Elasticsearch, has a lot of stuff going on, a lot of dependencies, and native-image will crunch through all of that, construct a kind of closed world with all your dependencies, and compile it into a native executable, including all resources, all additional files; everything is baked into a single executable. This closed-world assumption is an important cornerstone of the whole machinery, because sometimes you need to specify that you are going to do something at runtime that is not apparent from your source code, and Quarkus helps you with that and does this heavy lifting for you. So if a library, let's say Elasticsearch, is doing something at runtime that is not apparent at build time, there is a Quarkus extension you depend on, and that extension recognizes it, constructs bytecode for it, and then it is ready for compilation, so you can compile things into your native executable without being surprised at runtime by, say, a missing class.

I will jump right into trying it out on an Arm server. Hopefully we are now connected to an Ampere Altra 80-core Arm server; I have it in front of me here, and we will build a Quarkus application. I have a JDK downloaded, but not our native-image compiler, because that is going to be used from a container image. While it does its thing, I will continue with the slides and then come back to it.

Quarkus is a huge suite of extensions, so while your end application is trimmed to the bare minimum and the footprint is as small as possible, the set of available libraries is really huge, and many of them package some native code in their jar files, which becomes pertinent to making sure things run on Arm, because not all of those libraries ship Arm binaries. There is an example here of all the libraries in core Quarkus that have native dependencies; some of them do not currently produce Arm binaries, but those are usually loaded with the Java Native Interface and have a Java fallback, so Quarkus currently runs on Arm, though there is still a lot that could be done to make it better.

The compiler I talked about, the Mandrel distribution of GraalVM, can be downloaded from our GitHub site, where we have Arm binaries for Linux, and we also produce container images, so you do not have to install it on your system: you can just get the Temurin JDK with the native-image compiler from the container, and that is also what the Quarkus framework does for you. As a Java developer you can tell Quarkus: okay, I have my application, compile it to a native executable for me; I do not know what native-image is, I do not know what GraalVM is, and I kind of do not care; I want my application compiled to a native executable, and Quarkus downloads that container image, uses Docker or Podman, and compiles it for you.

There is a lot of testing involved to make sure that Quarkus is gradually more and more ready for Arm. We use various integration test suites for that: some are tiny specialized apps or reproducers targeted at particular features, some are huge suites, let's say the Quarkus integration test suite, and some are somewhat artificial applications that pile a lot of stuff on top of each other to really stress
the compiler, so that it can handle a lot of generated entities, Hibernate and stuff like that, and does not blow up. You can build Mandrel yourself, and that is not a joke; it is not one of those obscure projects you cannot compile when you download them. The build scripts are written in Java, you can run them with JBang, and you just give them the Java home of a Temurin JDK and the GraalVM GitHub repo, and it compiles the native-image compiler and the distribution for you. We have a public-facing Jenkins with these Arm servers connected, where we build and test periodically on various branches, so that is our public-facing driver. And these are our precious bare-metal servers we are currently mostly working with; they are photographed on my desk, but right now they are safely in a data center. David looks like he doesn't trust me, but they are really already gone, they are not on the desk anymore. Those were provided by Arm to keep this effort isolated; it is part of the AERO effort we are doing, and they have the same Neoverse N1 architecture as the current AERO target. They are quite beefy machines with 80 cores.

The compilation I started is done and Quarkus is running, so I will just scroll back to see what took place. I started with a Quarkus demo project, unpacked it, and just ran Maven; that is all that happened. In this Quarkus application it compiled the Java bits, the Java bytecode, and then it realized that I am trying to do a native-image build and that I do not have any native-image, GraalVM, or Mandrel compiler on this system, on the path, so it resorted to checking whether I have Docker or Podman installed. It found Podman on the server, so it used Podman to download the Mandrel builder image, which is one of the artifacts we are regularly updating and pushing to a publicly accessible container registry; it was already downloaded on the system. The horrendous blob of text you see was constructed automatically by Quarkus for you to drive the compilation of your application to the native executable: Quarkus worked out what your application needs to be properly compiled, and that process started. The reason I somewhat sneakily started it earlier is that the compilation is by no means instantaneous: it takes some time, it analyzes the classes in the closed world, it analyzes whether there is any JNI access, and it finally produces the executable with the machine code, which also contains baked-in resources; for instance, if you have some properties files in your jar files, that is all going to be packed into the single executable. It took about 40 seconds to build, Quarkus is running now, and I can access its default web page for this particular demo application. The same binary, literally the same one I built there, can also run on my phone; I do not know how to connect it to the projector, but it runs there, it runs Debian, and Quarkus starts there in 52 milliseconds.
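For reference, a container-based native build like the one in this demo is normally triggered with the standard Quarkus build options, roughly as below; the builder-image coordinate is only an illustrative example, and the exact invocation used in the demo was not shown:

```shell
# Compile the Quarkus application to a native executable inside a builder
# container (needs Docker or Podman, but no local GraalVM/Mandrel install).
./mvnw package -Dnative -Dquarkus.native.container-build=true

# Optionally pin a specific Mandrel builder image (example tag, check the registry):
#   -Dquarkus.native.builder-image=quay.io/quarkus/ubi-quarkus-mandrel-builder-image:jdk-17

# Run the resulting native executable.
./target/*-runner
```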
The bigger the application and the bigger your closed world, the longer it takes, or could take, so that's one of the things we are looking at; the other is the runtime, that is, how the application behaves when it runs. We've got a collector tool for that, written in Java using Quarkus. The reason I mention it is that I don't have any Java installed on that server: I build the application in GitHub Actions and just push the executable to the server, so there is a rich Java application talking to a database running on the server, but the server doesn't have any kind of JVM installed on it. This is how we assess the build-time metrics, and this is an example of some of those we collect; the most important is how long it took overall to compile an application, and also what the target architecture is. The runtime metrics are more interesting. There is a rough comparison of the same application, which uses a lot of MicroProfile libraries: on HotSpot it takes much more memory to run, while the native image is much more memory efficient. The reason is that it doesn't have to keep a lot of metadata around, because there is no just-in-time compilation and no deoptimization; what is compiled into the binary is fixed, so a lot of things are simply not needed at runtime. It also starts quicker, and that's not only thanks to the native compilation but also thanks to Quarkus, which pre-initializes a lot of things and bakes them into the heap. I ran through that really quickly, but we agreed to leave five minutes before the end for questions, so we can go back, both of us, to anything that caught your eye, or that you find weird or suspicious and would like to heckle us or ask about. So, shoot.

Now that's silence. The question was whether TornadoVM is using only public JDK APIs or whether there is anything to be proposed to the whole JDK ecosystem. Thank you; the answer is yes to both parts of the question. At the moment we are using normal Java APIs, but two things: first, we use normal Java APIs, and the JVM has to be JVMCI compatible, so it has to support that interface. But as soon as you go to TornadoVM, as soon as you go to GPUs, the Java spec doesn't apply anymore, because that's parallelism: as soon as you go to parallel computations you cannot guarantee consistency or ordering. As TornadoVM improves, though, we would like to propose to the committee some changes that would benefit TornadoVM specifically, for native data types and Panama integration. If it weren't for that one early question, I would hear a pin drop in this room, so either they are stunned by how awesome it was, or they didn't understand, or I don't know. Okay, thank you, thank you very much.

Okay, so hi everyone, welcome to my talk. Today I'll be speaking about the future of artificial intelligence and machine learning in software testing, but before starting with that I wanted to introduce myself. I am Shreya Sthana, working as a senior software quality engineer with Red Hat, and it's been around 2.5 years since I joined Red Hat. I have been involved with many testing tools like Selenium, Cypress and lemoncheesecake, and with languages including Java, JavaScript, Python and so on. Apart from this, I also have some techno-functional skills related to ERP applications
like Workday and Oracle Cloud. So that's pretty much about myself; let's start with the presentation. This is the agenda: we will talk about what exactly AI, ML and deep learning are, about AI history, why we need AI in software testing, the problem, the solution, the tool, what happens at the back end of an AI-based testing tool, the benefits and challenges of AI, and AI's current applications. So let's begin.

This picture shows the relationship between AI, machine learning and deep learning. AI is the development of computer systems that perform tasks which typically require human intelligence, such as speech recognition, image recognition and understanding natural language; it is the broader field that contains many subfields like machine learning and deep learning. So what exactly is machine learning? Machine learning is all about training computer algorithms to find patterns in data; the main aim is to create a model that can identify patterns and make predictions and decisions based on data it hasn't seen before. Next, what is deep learning? Deep learning uses neural networks and performs more complex tasks such as image and speech recognition, and it loosely simulates the way the human brain works.

Okay, in this slide we will see how AI came into the picture and how it transformed different sectors; I have divided it into three generations. The first generation is all about large data sets. In online fraud detection and prevention, AI analyzed the history of users to provide risk rules; users could also create risk rules by allowing or blocking certain user actions, and flag fraud and non-fraud activity, so as to avoid false positives and provide a better risk solution. Now let's see how AI helps in supply chain management: it helps with accurate inventory management, it helps predict demand, and it helps users know about shortages or excesses of an asset in the store at a given point in time. The second generation is all about studying human beings: from human data, AI powered social media platforms and recommendation systems. Creating a small recommendation system was an easy challenge, but creating a big recommendation system that can handle millions of users and millions of data points was a massive development in terms of AI. The third generation is all about creating a machine that can mimic a human being, but I would say we are still very far away from a true humanoid. Now the question is: what comes next after 2020, in particular for the software testing world? Before getting to why we need it, let's understand how AI helps in software testing and why we even need it. AI automation
needs sustainability: your automation scripts need to be sustainable, they need to be maintained, they need to be refactored, and they need the same attention as the business-related code. The second point is decreasing the maintenance effort: maintenance is time-consuming and costly, and this is a real challenge when doing software testing without AI. The third is that root cause analysis is time-consuming and annoying, and I completely agree with this. Let's take an example: your test cases are running in the CI pipeline, your report gets generated, and once you open it you see that many of the test cases, say 50 percent of them, are failing for the same reason. Why should we invest our time fixing the same failure for so many test cases? These are the challenges I see we have right now in software testing without AI. So I started doing some research and learned of a survey which says that 75 percent of test automation scripts fail due to either a bad locator strategy or a locator change. That is the problem I identified. Let's move towards the solution, but first a little more about the problem. So what exactly is the problem? The problem is being unable to locate an element, that is, the NoSuchElementException. I think many of you are already familiar with this exception, but I still want to explain it a bit: this kind of exception occurs when you are writing UI test cases and there is an application change, a locator change, or any property that has changed for a specific element in the application.

So what is the solution? The solution is self-healing in test automation. Self-healing is one of the techniques provided by AI-based testing tools; nowadays there are a lot of tools in the market based on AI, so users might be confused about which one to choose for their requirements. But before looking at the tools, let's understand what self-healing actually is: it is automation of your automation. You have an automation suite, there is some change in the application, and what the AI-based testing tool does is heal your automation script, producing the fix within your automation framework, which is why it is called automation of an automation. It is based on AI algorithms (we will talk about this in more detail in a later slide), it stores information about the application, meaning your AI-based testing tool stores data about your application, your system and your objects, and it heals the automation script. We have already seen that it helps reduce maintenance, so it reduces the manual effort, which in turn reduces the cost and the time involved in doing that manual work. That is self-healing as provided by AI-based testing tools.
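To illustrate the brittle-locator failure described above, here is a minimal, hedged Selenium sketch; the URL, the element id and the input value are invented for illustration and are not taken from the talk.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class BrittleLocatorExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.test/login");
            // If the application later renames this id, the lookup throws
            // org.openqa.selenium.NoSuchElementException and the test fails
            // even though the page itself still works.
            driver.findElement(By.id("mp-password-1")).sendKeys("secret");
        } finally {
            driver.quit();
        }
    }
}
```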
We have a lot of tools on the market, as I already said, and it is quite confusing for users to choose one of them. Don't worry, I am not going to declare one of them the best tool; we will just look at what is out there: we have Mabl, we have Testim, we have a couple of others, and we have Healenium. I will pick one of these tools to help you all understand how an AI-based testing tool works, and I just chose Healenium; it is a random choice, there is no bias. So let's talk a little about what exactly Healenium is. It is open source and it is based on Selenium and Java, so there is a prerequisite: your test cases should be written with Selenium and Java. If you are using Selenium with Python, then you have to use the Healenium proxy, a real-time engine. It doesn't require installation on any server; it is just tied to your test cases and runs automatically, so installation is very easy and integration is very easy. The next point is integration, which we just spoke about: it is very easy. Then the machine learning algorithm: behind the AI-based testing tool there is a machine learning algorithm running, and it is what provides you the solution. Integration is done on the WebDriver: in the image you can see that I have created a ChromeDriver, and to the ChromeDriver object I have tied the self-healing driver; this is how you integrate the Healenium object with your ChromeDriver, so it is very easy to set up.

Now, having chosen Healenium, let me show how an AI-based testing tool works. This is an example I have prepared for you: the monitor represents your test framework, we have a Healenium jar, we have a Healenium backend, and we have a UI that we will automate against. In the normal scenario, when you write your script, it tries to find the element. Say we automate the password field: the password field has an id, mp-password-1, and your automation also has a find-by-id of mp-password-1, so we are good to go; your framework finds this element on the UI, and the UI responds that the element was found. Now what happens when Healenium comes into the picture? Your script, your framework, interacts with the Healenium jar and tells it that we found this locator successfully; the Healenium jar then interacts with the Healenium backend and saves the locator that is on the UI, the one that works. Happy scenario. Now a climax comes into the picture: there is a new version, with a change to the naming convention of an attribute id, and your application's password field id changes from mp-password-1 to password. In the regular scenario you go to your script and keep changing the locator; suppose there are 50 locator changes, then you have to do the same task 50 times, going to your page object or locator file and changing the object 50 times. But now you don't have to worry about that, because we have Healenium in the picture. So your id has changed to password; without Healenium the script would again try to find the element and fail with the NoSuchElementException, which, as we already saw, is what happens when there is a locator change. I think testers are familiar with this kind of exception; we run into it quite frequently.
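As a rough illustration of the integration just described, here is a hedged sketch of wrapping a plain ChromeDriver with Healenium's self-healing driver; the class and factory method follow Healenium's documented Java usage as I understand it, and the URL and element id are invented.

```java
import com.epam.healenium.SelfHealingDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class HealeniumExample {
    public static void main(String[] args) {
        WebDriver delegate = new ChromeDriver();
        // Wrap the plain driver; element lookups now go through Healenium,
        // which records working locators and can propose healed ones later.
        SelfHealingDriver driver = SelfHealingDriver.create(delegate);

        driver.get("https://example.test/login");
        // If "mp-password-1" is later renamed, Healenium can supply a healed
        // locator instead of the test failing with NoSuchElementException.
        driver.findElement(By.id("mp-password-1")).sendKeys("secret");

        driver.quit();
    }
}
```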
Okay, so once this exception occurs, what does our automation script do? It again interacts with the Healenium jar and tells it that this time we were unable to find the locator. The Healenium jar then interacts with the Healenium backend and fetches the page state, and the next thing happens between the Healenium jar and the Healenium backend: between these two there is an AI algorithm at work, and with that algorithm Healenium produces a new locator, the healed locator. The Healenium jar provides this healed locator to your script, and inside the script you can see that the id has now been updated to password. So the manual intervention is cut out: the script again tries to find the element, and the element is found. That is the architecture behind the AI-based testing tool.

Okay, we have talked about the problem, the solution and the tool; let's talk a little about the benefits and challenges of using these kinds of AI-based testing tools. First, it improves test coverage: AI analyzes a huge amount of data, and while analyzing it, it finds defects that might otherwise go undetected, which is why it increases test coverage and decreases the risk of defects slipping through the cracks. It is faster: execution is very fast, defect detection is accurate, and test planning and execution are better. Self-healing we have already seen; it provides the self-healing mechanism. Predictive analysis: while analyzing or fetching the data, AI predicts potential future issues and delivers them to the testing team, so the team can be alerted before something becomes a bigger problem at release time, and it also provides big-data insights to optimize the testing strategy. Those are the benefits I see in using AI-based testing tools.

Now the challenges. It requires specialized skills and expertise to handle these kinds of tools. Then there are infrastructure and resources: AI is computationally very expensive, and it is estimated that the compute needed to train AI models doubled roughly every 3.5 months from 2012 to 2018, which may be why it has mostly been used by the big, giant companies. There is the difficulty of choosing the right tool: we already saw that there are so many tools on the market that it can be quite hard to pick the one that is right for you. Data management and quality: AI works from sample data, but before going to production it requires high-quality data, otherwise developers run the risk of garbage in, garbage out; that can also be a challenge. And security and privacy concerns: I think this is very important, because an organization may not want to share its data, and we know that an AI-based testing tool fetches and stores your application data, your system data and your object data, which might be a security risk for an organization. That is the biggest challenge of using AI-based testing tools.
So we saw the benefits and the challenges; now the current applications, and how AI is being used by different industries. Virtual assistants: Google Assistant, Alexa and Siri use AI algorithms to understand human commands, provide information and perform tasks. Recommendation systems: Netflix, Amazon and Spotify collect user data, store preferences, and based on those preferences suggest or recommend songs, movies and products. Autonomous vehicles: companies like Tesla use AI, deep learning and other algorithms to create self-driving cars; at the back end they use deep learning, machine learning and many other complex techniques to understand the surroundings and make driving decisions based on them. Natural language processing: AI-powered natural language processing, such as Google Translate or chatbots, again uses AI, ML and deep learning. Fraud detection: by analyzing large amounts of data, AI categorizes fraud and non-fraud activity. For example, if you are using your banking app and it records where you usually log in to your internet banking, and then you log in from some other country or location, it sends you an email or a text saying that someone is trying to log in from that location and asking whether it is you; that means the AI at the back end is storing information about how you do things and categorizing fraud versus non-fraud activity, so it also helps detect fraudulent activity. Healthcare diagnosis: AI is now used in the healthcare industry as well, for analyzing X-rays, MRIs and CT scans to find diseases such as cancers and other anomalies, and it helps radiologists in their diagnosis. Personalized advertisement: personalized ads on Facebook and Google target advertising campaigns and suggest advertisements according to user preferences. Financial trading: AI algorithms help analyze news and financial data to track, or help predict, the stock market, which is why they are used in financial trading as well. The last one is customer service chatbots: nowadays many organizations use customer service chatbots to cut down on manual intervention and to answer common queries and provide instant customer support. So these are the different industries you can see using AI nowadays.

Before wrapping up the presentation, I wanted to share one thing with you all. While creating this presentation, one constant question kept running through my mind: is AI going to replace software engineers? Anybody in the room who has the same kind of question, maybe raise your hand. Okay, pretty much everyone, I think. So in my opinion, I think this will
never happen, because in 1977 or 1978 there was a thing called the program generator that came into the picture, and people were saying it would take away all the youngsters' jobs, but that never happened. The reason is the human brain: I think it is so flexible and so adaptable, and it picks things up so quickly, that nothing can replace it. What happened is that people started solving bigger and bigger problems that the program generators could not handle. So AI, ML and deep learning are good, we should welcome them; we can treat them as a base, and on that base we can show our creativity, our smartness and our innovation. On that positive note, I am done with my presentation; let's move to the Q&A. Does anybody have any questions?

Yes: what if something changes on the page that you have not covered? It will handle that as well; it can give you a notification or a message that this particular thing has changed, because the AI analyzes the attributes attached to the element, and if there is some change, edit, deletion or addition, it will alert the user and adapt accordingly. But if your current test automation script does not use that particular field or attribute, it obviously will not add it to your framework, because it might not be functionality you want to test; it will still alert the user that this particular part has been changed or added, and then it is up to you whether you want to add it or not.

Yes, please. Okay, so: our script is good but we are still getting an error, and it might not be a locator error. Self-healing will not help there, because self-healing is all about locators; it only handles the locator part, when some modification has been made to a locator. If something else has changed, that might be a different feature that an AI testing tool may or may not provide. As I said on the previous slides, we have a lot of AI testing tools on the market, and different tools provide different mechanisms; I just chose one of them, and it provides self-healing. It might be that another tool provides the mechanism you are describing, so in the end it is up to you which testing tool you pick.

Yes, please. Okay, so the question is: if there is some kind of change in the application, will the AI change it at runtime only, or will it change something after the execution? Okay, I got your question. The answer is that it will ask you: this has changed, do you want to heal it? If you say yes, it will heal it; otherwise it will not. You have to decide. Could you configure it to always heal automatically? I think that is a manual decision you make each time, heal or do not heal; that is how you have to handle it. It might also be in the configuration of the AI tool; it could be a separate configuration you
can set, something like "skip this part, focus on this"; it is all about the configuration of the AI-based testing tool. Somebody else? Yes, please. Okay, so the question is about the algorithm behind the AI-based testing tool: which algorithm exactly is used? I think I would need to do some research, because I don't know exactly which algorithm is used; as a tester I just wanted to show that it provides you the healed locators, and what goes on deeper in the algorithms, deep learning and so on, is something we would need to look up. Somebody else? Okay, mobile, I am unable to understand, can you please repeat? Mobile, implementation of AI. Yes, so it fetches all the data, but it only makes predictions specific to your application, because it is integrated with your application: it fetches the data about that system and provides future predictions related to that system only, that application only. It cannot tell you that there might be a problem in the mobile app or something like that; it only tells you about the application you integrated it with. Anybody else? An intermittent issue not related to locators: that comes under flaky test cases, yes, exactly, false failures and flaky test cases; if you are using a reporting tool, it can easily tell you that a test is flaky, and it is not counted in that particular part. Anybody else? Yes, please. I can't hear you. Can it automate from scratch? No, it cannot automate from scratch; it will not, but it can help eliminate the things we do after an automation script is ready. The first time you have to write it yourself, but after that there are tons of things we testers do, like maintaining the script and making changes to it, and that part is handled by the AI, not the writing from scratch. You are most welcome. Anybody else with questions? I think we are good, thank you so much everyone.

Hello, can you hear me? My name is Dmitry Belovsky, I have worked at Red Hat since 2020, I maintain OpenSSL and OpenSSH there, and I also have the honor of being a member of the OpenSSL technical committee. My beloved pet project is also OpenSSL related, but today I will not speak about OpenSSL directly; I am going to speak about introducing post-quantum cryptography in Fedora. So first: yes, I understand that all of those who came here understand what cryptography is for, but let me remind you that for many people cryptography means ciphers, while experienced people remember that there is much more than ciphers: there are various identity and integrity checks, there are digital signatures, and even when you just want to encrypt something you have to provide a key to both parties, which is also a task for cryptography. So we are waiting for the moment when quantum computers appear and break at least several parts of our cryptographic applications, speaking about digital signatures and key exchange, though of course other areas will also be affected. We don't know when and where quantum computers will appear; when I first heard about it, it was "in the next five years", and it is still "in the next five years"; well, I would say 20 years. Everyone can bet on what happens first, quantum computers or, say, nuclear fusion, but that doesn't mean we should not be ready for the moment when they really appear. And so one of the organizations
providing cryptography standards, the American National Institute of Standards and Technology (NIST), initiated a post-quantum cryptography contest in 2016. Here are some statistics: there were about 70 submissions in the first round, there were several rounds, and after the third one, in 2022, we got four algorithms selected for standardization, plus several algorithms to be studied further as backup standards; one of those was broken by that point. Here are the four algorithms chosen for standardization: one is chosen for key exchange and three are chosen for digital signatures. Almost all of them, except SPHINCS+, are so-called lattice-based algorithms; those who attended the first post-quantum presentation today, here in this room several hours ago, understand what that means. I don't pretend I understand all of it, so let's just take it that they rely on various mathematical problems, so breaking one of them will not break the others.

Now let's talk about the standardization process as a whole. When we say that NIST has chosen an algorithm for standardization, it matters less than we might expect, because before the chosen algorithm turns into a standard there will be some parameter tuning and other changes, and it is not enough to standardize the algorithm itself: we also have to standardize how these algorithms, keys and parameters are stored in certificates and how they are used in various protocols. That is the area of responsibility of the IETF, the Internet Engineering Task Force. We should also mention that post-quantum algorithms are in the sphere of interest of the OASIS group, which develops the PKCS#11 standard, and since the algorithms will be used via PKCS#11, that standard will have to start reflecting these decisions. The NIST standardization is expected to happen in 2024. What does that mean for us? It means that the day after NIST announces the standards, we will get a bunch of client requests to implement post-quantum cryptography immediately: yes, tomorrow; yes, we should have already done this; why were we sleeping all this time? We hope for the best: we hope that the other standards in the IETF and OASIS areas of responsibility will also be ready, and that we will be ready to provide something. But for now we have a strict recommendation from NIST that nobody should ship commercial solutions based on versions of post-quantum crypto that are not yet standardized, so we can't do much.

What can we do now? We can use Fedora as a sort of playground, following our upstream-first principle; at some moment Fedora will turn into our next version of Red Hat Enterprise Linux. We should choose a library, run some experiments, see where the narrow places are when integrating that library with the crypto libraries we maintain, and see how the different libraries work together and how interoperable they are. I will talk about two libraries: OpenSSL, which is the base for most server software, and NSS, which is the base for such popular client software as the Firefox browser and some others. NSS implements cryptography via the PKCS#11 interface, which is why I mentioned the OASIS group and PKCS#11 standardization. OpenSSL, in the recent version we are going to play with, has implemented the so-called provider API, a pluggable API that in theory allows implementing any new crypto, and it will just work. Well, that's not true, it will not just work, at least not with the current
OpenSSL releases, because some features will appear only in the next release, 3.2. But with the current releases, if you have a provider implementing post-quantum algorithms, you can already run some experiments. So we had to choose a library, and our choice is quite reasonable: it is the liboqs project, a sort of post-quantum ecosystem. It is the library that is a de facto standard: as far as I know, there have been several IETF hackathons, and all the projects that use something C-based use liboqs. The authors of this project have also implemented an OpenSSL provider, the liboqs provider, and, working in close contact with the OpenSSL core team (hi, guys), they did their best, and the current master supports, as far as I know, everything that can be done now, until we have finalized specifications. It also shares a code base with another, much smaller library, PQClean, which will be the source of the post-quantum algorithm implementations in NSS.

This is the most significant slide in the presentation: why you can't just install the liboqs provider and run experiments. Not because I'm lazy; unfortunately there are some license problems with liboqs. Most of the code is licensed under the MIT license, which is not a problem, and there are also parts of the code under other licenses that are fine for us, such as Apache 2.0, the Unlicense and so on. But some parts of the code, unfortunately the implementations of the very algorithms we are most interested in, are under a Creative Commons license, which is not suitable for inclusion in Fedora as is. We will need an exception for it; we are working on it, and I hope that some time later we will be able to include liboqs in Fedora.

So again, briefly, a reminder about upstream PQ readiness. NSS has an implementation based on PQClean, but there we will meet the same license obstacles, because it is exactly the same code with a CC0 license. OpenSSL regularly runs tests against the liboqs provider, and they recently updated to the latest version of liboqs, but let me repeat: the changes are in master only. And last but not least, a discussion topic; we don't have a final decision among ourselves, because liboqs itself has several options for its crypto primitives: it can use its own implementations of low-level algorithms such as SHA-2, SHA-3 and AES, or it can use the OpenSSL implementations. This doesn't affect NSS and it is not a problem for OpenSSL, but in the crypto team we maintain two more cryptographic libraries, GnuTLS and libgcrypt, and post-quantum crypto will need to be implemented in most libraries at some point. If we use an OpenSSL-based build, it means we introduce a dependency between very different crypto libraries, and there are good reasons not to do that; but that is a question for the near future, not for right now. The nearest problem we need to deal with is the license problem, and as soon as it is resolved and the package is built, liboqs will appear in Fedora Rawhide. Thank you very much, feel free to ask questions.

PGP, how ready is PGP? The PGP ecosystem is not ready; well, there are some efforts in the IETF, but I'd say they are limited, it is not the primary target of IETF standardization now. Sorry, what about the cryptography used for encrypting the hard drive? Right, that usually uses symmetric crypto; well, if key wrap is implemented, it is not a problem to add a post-quantum key wrap
if it is doable; I think it is doable, but I think it is not ready for now. Probably, yes. Thanks. Thank you very much. Any more questions? Okay, thank you very much.