So computers will be very helpful in the early stages of drug discovery, but there are two slightly different problems that we need to look at. The first one, in the pre-clinical stage, is still about finding the hits. We're not quite there yet, in particular in that case where we found zero hits despite testing 300,000 compounds. The second part we'll need to look at is, once we have those hits, how can we optimize their affinity and efficacy? That's when we turn the hit into a lead, something that actually works. But let's start with the hits first. If we can't do things in the lab with the robot, maybe we can do it with a computational robot, that is, a supercomputer. This is an example of a machine that is just being replaced in Sweden, actually. The difference here is this: for the screens we might have in the lab, the NIH's typical screen is 300,000 molecules or so. A few of the pharma companies that we know, AstraZeneca for instance, had a bit over a million compounds in-house at least a few years ago; it might be larger now. But the throughput there is limited: there are only so many structures per day, drugs per day, that we can test, and it's fairly expensive. If we're doing things with QSAR, pharmacophores, or a method that I will introduce in a minute called docking, I can test between a million and a billion compounds per day. And these machines are by no means cheap, until you start looking at the experimental equipment. Then we realize that these machines are actually dirt cheap. Expensive, but much cheaper than doing things in the lab. In addition, you might be able to screen a billion compounds here in 24 hours. That is roughly 1000 times faster than in the lab. Even if I'm not as accurate here, being able to test 1000 times more compounds is a key advantage that will help us. So how do we do it? Well, there are a couple of methods, but I will try to explain this very conceptually, at a high level.
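To make that speed comparison concrete, here is the back-of-the-envelope arithmetic behind it. The billion-per-day docking figure is the lecture's number; the lab throughput of about a million compounds per day is an assumed round number, chosen so the arithmetic reproduces the roughly 1000x claim:

```python
# Back-of-the-envelope throughput comparison, using illustrative numbers.
# The lab figure of ~1 million compounds/day is an assumption for a
# well-automated robotic screen, not a measured benchmark.
virtual_per_day = 1_000_000_000   # docking: up to a billion compounds/day
lab_per_day = 1_000_000           # assumed robotic HTS throughput

per_second = virtual_per_day / (24 * 60 * 60)
speedup = virtual_per_day / lab_per_day

print(f"docking rate: ~{per_second:,.0f} compounds per second")
print(f"speedup over the lab: ~{speedup:,.0f}x")
# prints a rate of roughly 11,574 compounds per second and a 1,000x speedup
```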
Basically what I want to do is take a receptor, the protein, whatever that is, and test not just one but many different small compounds, and then check what happens when they bind. Is this good binding or bad binding? Based on the molecular simulation lecture, you should know that in principle we can do this with free energy calculations, right? I can use that free energy cycle and calculate how much it costs to gradually disappear the molecule here, then gradually disappear the molecule there, and then complete the free energy cycle; if you're really lucky you get the delta-delta-G value. That would work fine, and you might get one drug in one week. And then you only have 999,999,999 drugs to go. That will take a while. We're not going to use MD simulations in general here, because we need something that's fast, fast, fast; that's the key word here. So let's simplify this a lot, to something like this. Very schematically: forget about the ability of the receptor to move, forget even about the small molecule's ability to move, and just check: does the orange part fit? No. Does the purple part fit? No. Does the blue part fit? Possibly. Okay, let's keep it. Then screen through this and test thousands of them per second. So basically this is the scientist's way of playing with wooden blocks. We have many holes. We don't care about small mistakes; that's the other key factor. Not making mistakes would of course be great, but here it's actually better to be fast than to never make mistakes. It's of course a bummer if I happen to lose a really good molecule because my test was a bit crap. But if I'm testing a billion drugs, I have to accept that there will be errors. I will miss some. The only question is: will I find some things that are interesting?
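The "wooden blocks" idea can be sketched in a few lines of code. The grids, function names, and scoring rule below are all invented for illustration: the pocket is a grid where 1 marks protein atoms, a ligand is a small 0/1 mask, a pose with any overlap is rejected as a clash, and otherwise we count wall contacts as a crude measure of shape complementarity. Real docking programs use far richer scoring functions, but the fast accept/reject logic is the same in spirit:

```python
# Toy rigid "docking": no receptor flexibility, no ligand flexibility,
# just a fast fit/no-fit check plus a crude contact score.
# All of this is a conceptual sketch, not a real scoring function.

def score_pose(pocket, ligand, row, col):
    """Score placing ligand's top-left corner at (row, col); None = reject."""
    H, W = len(pocket), len(pocket[0])
    h, w = len(ligand), len(ligand[0])
    if row + h > H or col + w > W:
        return None                         # ligand sticks out of the grid
    contacts = 0
    for i in range(h):
        for j in range(w):
            if not ligand[i][j]:
                continue
            if pocket[row + i][col + j]:
                return None                 # steric clash: reject this pose
            # count occupied 4-neighbours (wall cells touching the ligand)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                r, c = row + i + di, col + j + dj
                if 0 <= r < H and 0 <= c < W and pocket[r][c]:
                    contacts += 1
    return contacts

def best_pose(pocket, ligand):
    """Exhaustive rigid-translation search; best score, or None if no fit."""
    scores = [score_pose(pocket, ligand, r, c)
              for r in range(len(pocket))
              for c in range(len(pocket[0]))]
    scores = [s for s in scores if s is not None]
    return max(scores) if scores else None

# A 5x5 pocket: walls all around a 3x3 cavity.
pocket = [[1, 1, 1, 1, 1],
          [1, 0, 0, 0, 1],
          [1, 0, 0, 0, 1],
          [1, 0, 0, 0, 1],
          [1, 1, 1, 1, 1]]
snug  = [[1, 1, 1]] * 3    # fills the cavity: many contacts
loose = [[1, 1]] * 2       # fits, but makes fewer contacts
print(best_pose(pocket, snug), best_pose(pocket, loose))   # 12 4
```

Screening a library then just means looping `best_pose` over thousands of ligand shapes and keeping the high scorers; because each test is a handful of integer comparisons rather than a simulation, this is where the speed comes from.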
After all, take Omeprazole, the Losec drug: there is nothing that says it is the theoretically best drug to stop the proton pump in your stomach. We don't need the theoretically best drug. We just need one good drug to make a fortune.