Thank you for the introduction. So yes, that's right. I'm an engineer at Red Hat by day. But by night, I like to think about physics, computer science, mathematics, and how all of that joins together. So one day I was thinking: what if we let an infinite loop run forever? Assuming, of course, that we deal with overflowing bits and things like that, which we will cover. So I was wondering: given some energy budget, how much or how far can we go with it? What is the cheapest way to compute, and so on.

So, for the overview. That was the overview, actually. Next up will be the idea of how we end up with a given energy budget. Then we are going to look at how much energy we need to pay for a single bit flip. Then we are going to look at the optimized computer we might want to build to answer the question of how far the infinite loop might go. Then I will show you some tables, and I will also explain later on how that answer actually evolves in time, that it's not constant. And the last thing will be a look at alternatives. Well, actually it's one alternative, so whatever.

So first, let's look at how we can put together the energy budget we can work with. I'm pretty sure you are all familiar with this famous equation, E = mc². It states that the energy of something is equal to its mass times the speed of light in vacuum squared. Of course, this is the simplest form of the equation, and it's the one I'm going to use, because the relativistic form with the gamma term gets a little complicated. The gamma term is basically a relationship between the velocity of the moving thing and the speed of light, and the problem is that we could use the gamma to make the energy value arbitrarily large. That's not a good thing to work with when we are trying to find out how far we can go within the limitations we know.
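As a quick illustration of both points, here is a small sketch (function names are mine): the rest-mass energy E = mc² for one kilogram, and how the gamma factor diverges as the velocity approaches the speed of light:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def rest_energy(mass_kg: float) -> float:
    """E = m * c^2, the mass-energy equivalence in its simplest form."""
    return mass_kg * C**2

def gamma(v: float) -> float:
    """Lorentz factor: grows without bound as v approaches c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

print(f"1 kg of mass = {rest_energy(1.0):.2e} J")
for frac in (0.9, 0.99, 0.999999):
    print(f"v = {frac}c -> gamma = {gamma(frac * C):.1f}")
```

One kilogram corresponds to roughly 9 × 10¹⁶ joules, and the gamma values show why the relativistic form is useless here: you can make the energy as large as you like just by picking a velocity close enough to c.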
So just for illustration: as the velocity of the object approaches the speed of light, the gamma term goes to infinity. That way we could supply an infinite amount of energy into the system and cheat our way through, which would be nonsensical. So E = mc² is the way we are going to work with the energy budget.

Now, the cost of a bit flip. The cost of a bit flip is essentially described by something called Landauer's principle: irreversibly flipping one bit costs at least k × T × ln 2 joules. The k is the Boltzmann constant, about 1.38 × 10⁻²³ joules per kelvin. The big T is the temperature of the system in kelvins. The temperature I'm going to use is the temperature of the cosmic microwave background, about 2.7 kelvin, because at any other temperature we would need to supply more energy into the system to cool it down than we would gain in extra bit flips for the same amount of joules. So it makes no sense to try to lower the temperature of the system to get more bit flips out of the same energy budget. Yeah.

So how are we going to build our optimized computer? First off, the computer will not store its numbers in standard binary code, because for what we are trying to do, it really sucks. We are going to use reflected binary code, also known as Gray code. The main principle behind reflected binary code is that the cost of moving from one number to the next is always exactly one bit flip. I will show a table with a bunch of numbers later so we can take a look. And the internals of the adder are going to be composed of so-called Fredkin gates. These are a special kind of gate: conditional swap gates. They are universal, which means we can use Fredkin gates to construct any of the other known gates, like AND and OR; the rest are basically just simple negations and combinations of those. And the main selling point of Fredkin gates is that they are reversible.
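Just to make the per-flip cost concrete, it is a one-line computation; a minimal sketch, using the standard value of the Boltzmann constant and the measured CMB temperature:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K (exact since the 2019 SI redefinition)
T_CMB = 2.725        # measured temperature of the cosmic microwave background, K

# Landauer's principle: irreversibly flipping (erasing) one bit costs
# at least k * T * ln(2) joules.
cost_per_flip = K_B * T_CMB * math.log(2)
print(f"minimum cost of one bit flip at the CMB temperature: {cost_per_flip:.2e} J")
```

This comes out to roughly 2.6 × 10⁻²³ joules per flip.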
That means that unless we are actually changing information, there is no cost. The thing is that a Fredkin gate is a little different from the regular gates we are used to, because it has three inputs and three outputs, and one of the inputs is the control bit. As I will show in a second on the table, it will be pretty clear that the Fredkin gate does not destroy any information, as opposed to, for example, the AND gate or the OR gate.

So first, let's take a look at the reflected binary code. We can see that, for example, to get to the number two in standard binary, we needed two bit flips: flip the first bit from one to zero, then flip the second bit from zero to one. Using the reflected binary code, on the other hand, we needed to flip just one bit. So the reflected binary code is essentially a very optimized way of counting. Of course, it has a number of other uses; one of the most intriguing ones, at least for me, is that the reflected binary code is used as indices when optimizing logical expressions with Karnaugh maps. The reflected binary code is basically the way you annotate the rows and the columns of a Karnaugh map, and then you can use the colocation of ones and zeros to optimize Boolean expressions.

And speaking about the gates: if we look at, for example, the AND gate, we have three input combinations that all result in the output being zero. That means we cannot reverse from a zero output back to the inputs, because we have three inputs matching one output. We no longer have the information to go back from the zero to either of the inputs. The same holds for the OR gate and the XOR gate. This is why these traditional gates are not reversible: they truly destroy the state of the system.

So let's take a look at some tables of the energy.
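Both building blocks are easy to play with in code. Here is a small sketch (the function names are mine, not from the talk): the standard binary-to-Gray conversion, a check that consecutive Gray codes differ in exactly one bit, and a Fredkin (controlled-swap) gate shown to be its own inverse and able to emulate AND:

```python
def to_gray(n: int) -> int:
    """Reflected binary (Gray) code of n: consecutive values differ in one bit."""
    return n ^ (n >> 1)

# Counting costs exactly one bit flip per increment in Gray code.
for i in range(15):
    changed_bits = bin(to_gray(i) ^ to_gray(i + 1)).count("1")
    assert changed_bits == 1

def fredkin(c: int, a: int, b: int) -> tuple:
    """Controlled swap: if the control bit c is 1, the two data bits are swapped."""
    return (c, b, a) if c else (c, a, b)

# Reversible: applying the gate twice restores the original inputs,
# so no information is ever destroyed.
for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*bits)) == bits

# Universal: feed y and a constant 0 as the data bits, and the third
# output wire carries x AND y.
def and_gate(x: int, y: int) -> int:
    return fredkin(x, y, 0)[2]

assert [and_gate(x, y) for x in (0, 1) for y in (0, 1)] == [0, 0, 0, 1]
print("Gray counting and Fredkin properties verified")
```

Note that the AND construction needs an extra constant input and produces two "garbage" outputs alongside the result; that is the price of keeping the gate reversible.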
So the most basic one would be one joule, which is the energy required to heat one gram of water by about 0.24 degrees Celsius. Truly nothing interesting. But if we plug one joule into Landauer's principle, we get the cost of a bit flip, and then we can divide. We get that using one joule of energy, we could count up to 2 to the power of 109. That's pretty nice for just one joule. We can move to 8,400 kilojoules, which is the reference daily caloric intake of a human adult; that gets us to 2 to the power of 125. And for inspiration, I added a bunch of other interesting data points that I could find. For example, at the bottom of this table, with the total energy output of the Sun per second, we could count up to 2 to the power of 198.

This is all nice and well, but I was thinking in slightly bigger terms. How far could we actually go if we harnessed the entire power of, let's say, the solar system? If we took the mass-energy equivalence of the entire solar system, we could count up to 2 to the power of 266. As you can see, one joule of energy took us to 2 to the power of 109; then we burned down the entire solar system, and it must have been a glorious explosion, and we managed to count up to only 2 to the power of 266. But that's nothing compared to the mass-energy equivalence of the entire universe. And yet, even using the energy of the entire universe, we don't get that much further. That's basically the idea behind exponentiation: it grows and grows, and even when one exponent seems like, I don't know, only 200 away from another, the actual difference in terms of what we can do with the energy is the whole solar system that we had to destroy. Sorry, but it was necessary.

However, these numbers are not constant. They change in time. The first point is that space expands.
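The table entries are all the same computation: divide the energy budget by the per-flip cost and take the base-2 logarithm. Here is a sketch using the Landauer cost at the CMB temperature; note that the exact exponents you get depend entirely on the temperature and per-flip cost assumed, so the slide figures may differ from what this produces:

```python
import math

K_B = 1.380649e-23                # Boltzmann constant, J/K
COST = K_B * 2.725 * math.log(2)  # Landauer cost of one flip at the CMB temperature, J

def counting_exponent(energy_joules: float) -> float:
    """log2 of how many one-flip Gray-code increments the budget can pay for."""
    return math.log2(energy_joules / COST)

# A few illustrative budgets (rough, order-of-magnitude figures).
budgets = {
    "1 joule": 1.0,
    "daily caloric intake (~8.4e6 J)": 8.4e6,
    "solar output per second (~3.8e26 J)": 3.8e26,
}
for name, energy in budgets.items():
    print(f"{name}: can count up to about 2^{counting_exponent(energy):.0f}")
```

The useful part is how slowly the exponent moves: multiplying the budget by a million only adds about 20 to it.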
That is something discovered by Edwin Hubble, I don't know, something like 90 years ago, and ever since it's been this little big mystery: why, and how, basically. In terms of what we are doing right now, the expansion influences the numbers in two different ways. As space expands, it cools down the cosmic microwave background. As the temperature goes lower, the cost of a bit flip goes down as well, so we can compute more for the same amount of energy. On the other hand, as space expands, things get further away. As a matter of fact, things are getting further away so fast that beyond a finite distance, even if something were moving towards us at the speed of light, it wouldn't matter, because the expansion of spacetime would effectively negate that speed: the space in between would be expanding faster than the thing can move towards us. So in a sense, the energy budget of our current Hubble volume goes down and down as time evolves. A Hubble volume, by the way, is basically the term that describes, from the point of view of the observer, your sphere of influence, all the information you can ever get from the universe.

I wasn't really able to find good numbers on how the temperature of the CMB decreases over time, nor on how the energy density of our Hubble volume changes over time, so the balance is not clear to me right now. I will definitely try to follow up on that and figure out the relationship between these: as the temperature goes lower, we can compute more, but the usable volume shrinks, so we can compute less. I would be really interested in the ratio between these two effects. So yeah, I still have to figure that one out.

And I think I'm getting to my last slide, which deals with an alternative approach, even more extreme than the one I just presented.
So the alternative approach is the black hole universal computer, published something like 15 years ago by Seth Lloyd. It makes a bunch of assumptions. The first assumption is that our universe is a black hole, performing roughly 10 to the power of 90 operations per second. It also takes the mass of the observable universe to be something like 3 times 10 to the power of 52. And then it uses the knowledge that black holes seemingly evaporate over time due to Hawking radiation. A black hole of this size will roughly evaporate over a time of 2.8 times 10 to the power of 139, which means that if we multiply the lifetime of the black hole by the number of operations the black hole can perform every second, we get the total number of operations. This assumes that our universe is a black hole, and we don't really know if that is the case, but there are some weird arguments that correlate: in the middle of every black hole is the thing called a singularity, which is basically an infinitely small point in space that actually extends into infinity. So if we assume that our universe is infinite as well, it sort of works out that we could be living inside a black hole, and this could be the actual upper bound for computation.

And yeah, that's basically it. Thank you for listening. Actually, one more word on why I was thinking about all of this. Let's say we use the circuit or the computer we designed a few minutes ago and wire up a few more Fredkin gates to, let's say, try to break some encryption scheme. Using these numbers, we can actually figure out how impossible it is to start with some initial key and then try to explore the entire space of possibilities.
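The headline figure is just the product of the two quoted numbers. A sketch of that arithmetic, assuming (my assumption, since the unit isn't stated here) that the evaporation time is in seconds:

```python
import math

OPS_PER_SECOND = 1e90   # operations per second, as quoted in the talk
LIFETIME = 2.8e139      # black hole evaporation time; seconds is my assumption

# Total operations the black hole computer could ever perform.
total_ops = OPS_PER_SECOND * LIFETIME
print(f"total operations over the black hole's lifetime: "
      f"~{total_ops:.1e} (about 2^{math.log2(total_ops):.0f})")
```

Whatever the actual unit, the structure of the bound is the same: a finite rate times a finite lifetime gives a finite total, which is the whole point.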
For me, these numbers are quite humbling, because even if I go back to the maximum number we got using Landauer's principle, it basically means... imagine a square grid, 18 by 18 maybe, and imagine we have a bunch of stones that we place on the grid. This number tells us that we cannot explore every possible combination of placing some number of stones on the grid in this universe. There is simply not enough computational capacity to explore every possible configuration. So when we think about, let's say, DeepMind's program playing Go against human beings, some people like to say: but it's a big computer running a lot of stuff at the same time, it must be able to explore every possibility there is. However, professional games of Go are played on a board which is, I think, 19 by 19, and we see that it is simply impossible for the AI to explore all the possibilities and take them into account. There has to be something cleverer than that.

So yes. That would be it. Thank you for listening. And I guess we also have some time for questions. Yes?

[Audience] The gamma comes from the speed. This means you have to accelerate, so you have to change the speed. And the computation happens in the frame of reference of a particle, so doesn't it depend on the frame of reference? I'm still not convinced by what you are saying.

So, to repeat the question: the gamma is dependent on the frame of reference. I was actually thinking that we just make the boom and then channel the energy from that frame of reference somewhere else.

[Audience] If you want to blow up your universe, then you need another universe.

Exactly. Yes. It would be a big, very destructive kind of thing. And yeah.
[Audience] But then my question was: your computer is really optimized, but it's not a real computer, because it doesn't have a program counter, a loop, and things like that. So you would need more energy for that kind of thing. Did you try to see what the cost would be for a computer with a program counter, one that uses the lowest possible energy? Could you build such a computer?

Oh. No, I didn't really try that. That would be too many numbers for me.

[Audience] Can you do that for next year? If you start now.

I think that if I start right away, I might be able to. But yeah, I'll keep that in mind when I'm thinking about what to do next year. Thank you.

[Audience] Did you think about proposing some new regulation, that there should be green computation to save energy? So you could propose that for a given task, only that much energy can be spent.

Well, I think we could do that. You know, I just need proper representation in the European Parliament. I think it could be possible. Those guys will regulate everything.

[Audience] Landauer's principle basically sets the lower bound for the cost of a bit flip in classical terms?

Yes, yes, it should be that. It should be using classical means, not quantum weirdness. Yeah.

[Audience] It looks like there is a lot of additional machinery you would need to run just for that bit flip. Could you optimize it?

I think we could optimize it by building the whole circuit, or the whole mechanism, let's say. I mean, just counting through all the numbers makes no sense, right? We would like to also do something on top of that. So let's say we would like to check whether this combination of bits can decrypt some payload.
And in that sense, we could implement the next part, checking whether the payload is valid, using Fredkin gates as well. That means it would consume no energy, and the final output of the whole thing could be one single bit of information: does it break the cipher or not, did we get the right key or not? By initializing the circuit with the result bit equal to zero, we would basically be checking every possible number that could represent the key; but because none of them would actually break the thing, the output bit would stay zero, so we wouldn't be flipping any bits. Technically, we could pay for breaking the key, or deciphering the payload, with the cost of one single bit flip, because we would pay the cost of the flip only for the last operation, the one that actually finds the valid key. And that, I guess, is where we venture into the domain of quantum computing. So yeah.

[Audience question about quantum computers, inaudible]

Well, first I think we need to see one actually working, because none of those companies are really selling one right now, and it hasn't been proven that they are actually faster than classical computers using an optimized way of solving the problem. So we still do not know whether quantum computing actually is a thing or not. It's definitely weird, but we still don't know.

So yeah, we are out of time. So thank you again.