Quantum computers are here to stay. They will soon be part of major industries, perhaps within ten years. But they are not there yet. So what can we do with quantum computers today? That will be one of the topics of my presentation. It's important to realize that quantum computers are not just faster classical computers. They use a very different logic, and therefore they need to be programmed differently. This is what we do at Algorithmiq.

Yesterday, we announced here the release of Aurora, our quantum-boosted in-silico drug discovery platform, which unlocks the possibility of using existing quantum computers for simulations that are useful for drug development and discovery. Aurora goes beyond AI and machine learning because it performs ab initio simulation of protein-ligand docking, and therefore it does not rely on training data sets. Again, I am not talking about things that can be done in ten years; I am talking about near-term quantum computers.

Why now and not before? Quantum computers have been getting bigger and more powerful. About a week ago, IBM released Osprey, a 433-qubit quantum computer. This is the biggest quantum computer in existence so far. Despite this progress, these devices are still error-prone, and therefore you cannot use the standard algorithms of quantum information science. One has to develop a new framework to work with them in the presence of errors. This is the job of Aurora. We have discovered, patented, and developed a software platform that smoothly integrates the most advanced classical algorithms for chemistry simulations with state-of-the-art proprietary quantum algorithms.

Now, if you have been in this space, you have probably heard that there are other software companies providing quantum chemistry simulations boosted by quantum computers. In which way are we different? There are two things that make us different.
The first is that we provide a smooth integration between quantum and classical. Classical quantum chemistry methods, including DMRG, fragment molecular orbital methods, and CASSCF, are all incorporated in Aurora. We only use the quantum computer when it is needed, to solve those parts of the simulation that are not accessible to classical devices. The end user does not need to know whether we use a quantum or a classical simulation; it is completely, smoothly integrated.

The second aspect that makes us different is scalability. Today, most quantum algorithms are only useful on prototype quantum computers. Our algorithms, on the contrary, also work with 50, hundreds, or thousands of qubits. This basically means developing a new framework, and I will tell you what this framework is now. We partnered with IBM precisely because our algorithms are scalable today, and we announced this partnership a few days ago. Together we will work on quantum chemistry simulation to prove quantum advantage in the context of drug development and discovery.

The key enabler that makes us different is the interface between quantum and classical algorithms. This is our major breakthrough, and we call it informationally complete data. The reason quantum computers are different is that they can propagate different configurations in parallel. However, the moment we measure, only one configuration is observed, and it is observed at random. With informationally complete measurements, on the other hand, we obtain a blurred picture of the multi-configuration state in the quantum computer, a picture that becomes clearer and clearer as we continue to perform measurements. So from one data set we get, at the same time, all the requested properties of the molecular simulation.

This is the core of Aurora, which is divided into three parts: pre-processing, processing, and post-processing.
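To make the idea concrete, here is a minimal, self-contained sketch of the principle behind informationally complete measurements (an illustration of the general idea, not Aurora's proprietary implementation). We simulate randomized single-qubit measurements in the X, Y, and Z bases on the state |+⟩ and reconstruct all three Pauli expectation values from the same single dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# The state |+> has <X> = 1, <Y> = 0, <Z> = 0.
true_exp = {"X": 1.0, "Y": 0.0, "Z": 0.0}

def sample_outcome(basis):
    # Probability of the +1 outcome when measuring |+> in this basis.
    p_plus = (1 + true_exp[basis]) / 2
    return 1 if rng.random() < p_plus else -1

# One dataset: for each shot, pick a random basis, record the +/-1 outcome.
n_shots = 30_000
bases = rng.choice(["X", "Y", "Z"], size=n_shots)
outcomes = np.array([sample_outcome(b) for b in bases])

# The same dataset yields estimates of all three expectation values.
# The factor 3 compensates for each basis being chosen with probability 1/3.
for p in "XYZ":
    est = 3 * np.mean(np.where(bases == p, outcomes, 0))
    print(f"<{p}> ~ {est:.2f}")
```

The estimates sharpen as the number of shots grows, matching the "blurred picture that becomes clearer" intuition: one set of measurements, many properties.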
In pre-processing, we optimize the input for the quantum computer. This means selecting the part of the chemical reaction that cannot be handled classically. It also needs to be translated into a language that quantum computers can understand: the language of qubits.

In processing, we give instructions to the quantum machine to perform a certain computation. Each computation is a sequence of operations, so we design the most efficient sequence of operations to perform the quantum simulation, and we implement it on the hardware.

Finally, we have the informationally complete measurement data and the post-processing, the final part, which cleans up the noise and errors of the hardware in order to make the results useful.

Combining all these results, we are able to show that on a 1,000-qubit quantum computer, the one IBM will deploy next year, we can reach numerical precision. That means, in some cases, a 10-million-fold reduction in error, a 1.4-billion-fold speed-up in runtime, and 2.4-billion-fold cheaper molecular simulations. This is why the algorithms of Aurora are scalable.

Aurora has a Python interface that is compatible with most quantum chemistry software. It is hardware-agnostic, which means it can run not only on superconducting qubits but also on trapped ions, photonic devices, and many of the other platforms currently considered for quantum computing. Moreover, it is scalable on cloud and high-performance computing and fully integrated. Finally, it is compatible with most hybrid quantum cloud systems, such as AWS Braket, IBM Qiskit Runtime, and Azure Quantum.

We believe that near-term quantum computers will already provide the ability to tackle groundbreaking applications. In pharma, in particular, finding a new drug is a problem that needs to be solved only once. Aurora makes pharma companies quantum-ready. Aurora is available for commercial partners.
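As an illustration of the "language of qubits" step in pre-processing, here is a small sketch using the standard Jordan-Wigner transformation, one common fermion-to-qubit mapping (the talk does not specify which mapping Aurora uses). It builds fermionic annihilation operators out of Pauli matrices and checks that they obey the fermionic anticommutation relations:

```python
import numpy as np

# Single-qubit Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilation(j, n):
    """Jordan-Wigner: a_j = Z^{(x)j} (x) (X + iY)/2 (x) I^{(x)(n-j-1)}."""
    ops = [Z] * j + [(X + 1j * Y) / 2] + [I2] * (n - j - 1)
    return kron(*ops)

n = 3  # three fermionic modes -> three qubits
a0, a1 = annihilation(0, n), annihilation(1, n)

# The qubit operators reproduce fermionic statistics:
print(np.allclose(a0 @ a0.conj().T + a0.conj().T @ a0, np.eye(2**n)))  # {a0, a0+} = I
print(np.allclose(a0 @ a1 + a1 @ a0, 0))                               # {a0, a1} = 0
```

The string of Z operators in front of each mode is what enforces fermionic anticommutation on qubits; this is the kind of translation that turns an electronic-structure problem into instructions a quantum computer can execute.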
Thank you for listening.