Okay, let me begin. First, I'd like to thank all of the organizers for allowing me to give this talk. My name is Anthony Munson. I'm a PhD candidate in physics at the University of Maryland, College Park. My advisors are Nicole Yunger Halpern and Christopher Jarzynski. Today I'll be talking about some follow-up work to the referenced Physical Review article below, done in collaboration with the authors below. In particular, I'll be discussing how work trades off with complexity in computationally restricted thermodynamics.

Let's begin with a simple question. In this problem we consider a system of n qubits initialized in an arbitrary known state ρ. We make a simplifying assumption, namely that there is no energy difference between each qubit's zero and one states, which implies that there is no energy difference between any n-qubit states. Another way to say this is that we will be working with a completely degenerate Hamiltonian. Our goal is to reset the qubits to the all-zero state, which we denote |0⟩^⊗n. We care about this problem because, in many practical situations, it's useful to initialize a system into a simple and easily manipulated state like the all-zero tensor-product state.

So let's talk about our solutions. We have a thermodynamic solution as well as a computational solution; let's detail them in turn. Between the two, there's a middle way, and this talk is about that middle way and its trade-offs.

First, the thermodynamic solution: erasure. In this solution we perform an erasure protocol with a fixed-temperature heat bath. There are many experimental realizations of erasure protocols, but for this talk, all we care about is that the erasure protocol maps every n-qubit state to the all-zero state.
We begin with a known state ρ, which is in general a mixed state, represented by a density operator. Even though we know the state, the erasure protocol doesn't care: we don't use any of our knowledge of the state in the erasure protocol, which treats every input the same way. According to Landauer, the protocol requires a work cost of at least n k_B T ln 2, and the reason for this work cost is that the operation is irreversible.

Now let's turn to our second solution: unitary, gate-based computation. We begin with a universal set of two-qubit unitaries, and we create gate sequences by drawing from that universal set. You can think of the universal set as our building blocks, from which we construct a unitary. Our goal is to begin with ρ and uncompute it to a state which is ε-close to the all-zero state. This is a reversible process, in contrast to the erasure we saw on the previous slide. Moreover, by our assumption that there are no energy differences between n-qubit states, this solution incurs no work cost.

But now we have a problem. In practical situations, we won't be able to apply an arbitrary number of gates; we're often limited to some fixed number in our laboratories. If we call that number R, then we face complexity restrictions. Importantly, if R is too low, we may be unable to reset the qubits.

This leads us to the middle way, in which we combine solutions one and two in sequence. We assume that, as an agent, we have a complexity allowance R. We first apply to ρ a unitary of complexity at most R; thereafter, we finish the job with thermodynamic erasure using some fixed-temperature heat bath. The idea is that, ideally, we can use complexity to offset the work cost incurred by the thermodynamic erasure. So a trade-off emerges.
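As a side note, the Landauer cost just quoted can be written out explicitly. This is the standard bound, stated here in the talk's degenerate-Hamiltonian setting:

```latex
% Landauer's bound for erasing n (qu)bits with a bath at temperature T:
W_{\text{erase}} \;\ge\; n \, k_B T \ln 2 ,
% where k_B is Boltzmann's constant. The cost arises because the map
% \rho \mapsto (|0\rangle\langle 0|)^{\otimes n} is logically irreversible.
```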
Notice that if we perform only thermodynamic erasure, complexity doesn't matter; in fact, no knowledge of our state ρ matters. And if we perform only unitary computation, we incur no work cost. Looking at these two extreme cases, it seems that with this middle way there exists some work-complexity trade-off. It turns out that this trade-off is controlled by a quantity called the complexity entropy.

Intuitively, the complexity entropy quantifies the state's apparent randomness to agents who can implement only limited-complexity unitaries. The definition is given below; let's dissect what it means. You'll notice that we have a couple of parameter labels on the left: the R and the 1 − ε. On the right side, we have the logarithm of some optimized quantity, the trace of Q. Here Q is a measurement operator, and I want you to think of the trace of Q as something like the reciprocal of a probability. With the logarithm out front and the argument being the reciprocal of a probability, this is essentially a surprisal, a quantity in information theory that comes up often. So what we're really doing is defining this entropy by optimizing a surprisal.

We impose some restrictions. The first concerns complexity: we require that the measurement operators in the optimization can be effected only with unitaries of complexity at most R. Second, we require that these measurement operators successfully identify our state ρ with probability at least 1 − ε. This is also to say that the type-one error in identifying ρ is bounded by ε. For those who are familiar, this comes from the hypothesis-testing entropy. So really, the entropy we've defined here is like a hypothesis-testing entropy plus some explicit complexity restrictions.
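Schematically, the definition being dissected has the following shape. This is my shorthand reconstruction from the verbal description; the precise formulation, including the direction of the optimization, is in the referenced article:

```latex
% Complexity entropy: an optimized surprisal, log of (roughly) 1/probability.
H^{R,\,1-\epsilon}(\rho)
  \;=\; \log \, \underset{Q}{\operatorname{opt}} \; \operatorname{Tr}(Q)
\quad \text{subject to}
\begin{cases}
  \text{(i) } Q \text{ is a measurement operator effectable by unitaries} \\
  \phantom{\text{(i) }} \text{of complexity at most } R, \\[2pt]
  \text{(ii) } \operatorname{Tr}(Q\rho) \ge 1-\epsilon
  \quad \text{(type-one error bounded by } \epsilon\text{).}
\end{cases}
```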
To get some intuition: when ρ is an n-qubit state, the complexity entropy ranges between zero and n log 2. The zero comes from the low-complexity limit, where the state is simple enough that we can uncompute ρ very successfully. The n log 2 comes from the high-complexity limit, where, even if ρ is a pure state, we may not have enough of a complexity allowance to treat it as anything other than a maximally mixed state.

Our main result is that an agent who (one) has complexity allowance R and (two) applies a unitary such that ρ is transformed into a state ε-close to the all-zero state incurs a work cost of at least k_B T times the complexity entropy. This looks similar to the expression for Landauer's limit we saw before, because it indeed generalizes Landauer's limit. In the case where the complexity entropy takes on its maximal value of n log 2, we simply recover Landauer's limit, corresponding to the case in which we perform only erasure, which makes sense. In the case where we're able to leverage our knowledge of ρ to uncompute it successfully, this bound goes lower; and where we're able to do so completely and get away with performing no erasure, it becomes the trivial bound of zero. This shows that the complexity entropy really does quantify the trade-off described.

Lastly, I want to note that the complexity entropy is a general tool for quantifying optimal task efficiencies in complexity-restricted quantum information and thermodynamics. In forthcoming work, my collaborators and I show that the complexity entropy quantifies not only this problem of thermodynamic erasure, but also data compression and randomness extraction. I invite the audience to use the complexity entropy to address all of your complexity-restricted needs.
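Written out, the main result as stated generalizes Landauer's limit. This is my transcription of the verbally stated bound, with the entropy taken in nats so the two limits match; the precise statement is in the paper:

```latex
% Work cost of resetting with complexity allowance R and error \epsilon:
W \;\ge\; k_B T \; H^{R,\,1-\epsilon}(\rho).
% Extreme cases described in the talk:
%   H = n \ln 2  (state too complex for the allowance)
%       ==>  W \ge n k_B T \ln 2, i.e. Landauer's limit;
%   H = 0        (state fully uncomputable within the allowance)
%       ==>  W \ge 0, the trivial bound.
```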
Again, this work was done in collaboration with the following individuals, and it can be found at the referenced links below. Thank you for your time.

Thank you very much for this great talk. We'll have time for a few questions, and then we will continue with the discussion. First question: Richard, please.

Hello, can you hear me? Yes. Thank you very much for your very interesting talk. I'm just wondering: what is the time needed for performing such a unitary computation protocol? You mentioned that it is a reversible process, so is it a quasi-static process?

You're asking about the time needed to perform a unitary computation, a typical time in the laboratory? I'm not familiar with that. I work in an office with some experimentalists who would know the answer.

So this is a finite-time question: for the Landauer limit there is a finite-time correction, which is proportional to 1/τ, where τ is the time to perform the protocol. I'm wondering whether there is a correction if we consider a finite-time process.

Oh, I see, so you're asking whether, for the bound I presented, there's a finite-time correction for the application of unitary gates. I'm not sure, and I wonder whether that has anything to do with the assumption of the completely degenerate Hamiltonian. I don't have a good answer, although I'm curious.

I guess it may be an interesting question and deserve further study.

Sorry, what was the last part?

Maybe this question deserves further study: the finite-time correction to your bound.

Yeah, thank you very much. Thanks.

Okay, thanks. We also have questions arriving in the chat. You should be able to read the questions from the chat, and then we can continue with your question.
The question from the chat is: what is the meaning of the trace of an operator here? Usually Tr(ρO) is some expectation value; what does the trace of the operator alone mean?

I see. Let me go back to the slide. In this case, as I said before, it's best to think of this trace as something like the reciprocal of a probability, and I think this is why the trace doesn't have a direct intuitive meaning in terms of the expectation values and so on that you may be used to. Perhaps to give intuition, we can think about the case in which we have no complexity at all. Then the only qualifying measurement operator we have is the identity operator; for an n-qubit state, that's the identity operator on a 2^n-dimensional space. Whenever we take the trace of that, we get 2^n, and then we take the logarithm of that. That gives you some intuition. Then take the case of a measurement operator which is a projector onto, let's say, the all-zero state, and suppose you can actually implement it and succeed; then you'll get a trace of one, so the logarithm will just be zero. Beyond that, you might have to play with it a little bit, but thinking about it as the reciprocal of a probability helps. Typically in quantum theory we don't directly deal with such quantities, so the meaning isn't so apparent in this case.

One further comment is that this complexity entropy can be understood as a special case of a complexity relative entropy, which we discuss in detail in the forthcoming work.
If you start with the complexity relative entropy, you'll see that we really have the trace of Q times an additional positive semidefinite operator, and in that case this expression would perhaps make more sense to you as something like an expectation value.

Okay, thank you very much. Maybe first we could follow up on this question, and then return to the chat.

Yes, perhaps related to the previous question: is this result fundamentally quantum in nature, or is there an obvious classical analog? For example, does this complexity entropy make sense in the context of a classical system?

Yeah, that's a really good question. I think it accommodates classical systems, in the sense that you could encode the information of a classical system into a quantum state. But I don't think it reduces to the classical case. Really, the complexity of a state is something which only makes sense to discuss meaningfully for quantum states; for classical states, if I just have zeros and ones, then, okay, it feels like it's somehow representing a program for generating that state, right? It draws from, it's inspired by, the hypothesis-testing entropy, which can be understood through a semidefinite program, so perhaps that's the similarity you're seeing. I'm not sure if that answers your question.

I don't know, but that's largely because I don't understand quantum stuff, which is what motivates my question.

Got you. Okay, thanks.

Okay, thank you. Maybe I could now read one more question from the chat, and then we can take a question in person. The question from the chat is: isn't calculating H^R of the same complexity as the work extraction itself?

Sorry, one more time?
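As an aside on the two limiting cases discussed above (the identity operator versus the all-zero projector), the arithmetic can be checked numerically. This is my illustrative sketch for intuition, not code associated with the paper:

```python
import numpy as np

n = 3          # number of qubits (small, illustrative choice)
dim = 2 ** n

# Zero-complexity agent: the only qualifying measurement operator
# is the identity on the 2^n-dimensional space.
Q = np.eye(dim)
surprisal_identity = np.log(np.trace(Q))   # log(2^n) = n log 2, the maximum

# High-complexity agent: projector onto the all-zero state |0...0>.
P = np.zeros((dim, dim))
P[0, 0] = 1.0
surprisal_projector = np.log(np.trace(P))  # log(1) = 0, the minimum

print(surprisal_identity, n * np.log(2))
print(surprisal_projector)
```

The two printed surprisals reproduce the range zero to n log 2 quoted for the complexity entropy of an n-qubit state.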
Okay, so the question written in the chat, the last message, is: isn't calculating H^R of the same complexity as the work extraction itself?

I don't see an obvious reason why that would be the case. I think it's clear from the definition of the complexity entropy that the complexity restriction given for the measurement operator does make calculating this quantity a nontrivial task, so perhaps there is an interesting question here: whenever we actually calculate the complexity entropy, how does that relate to the work of this process? One thing I would say is that the erasure involved in the middle-way protocol relies on a fixed-temperature bath, while the complexity entropy clearly makes reference to no such bath. So I don't think there's a direct relation between the two, unless you make some additional assumptions about how you would calculate the complexity entropy, using something like a bath or some other assumption that makes the calculation of the complexity entropy physically realized.

Okay, thank you. So you can ask your question or give your reply.

Thank you for the opportunity. It's a really nice talk. I got confused on one aspect: you said that if I take a measurement operator of the form of the identity on a 2^n-dimensional space, then you take the trace and the log of that number. But suppose the operator I'm considering is a traceless kind of operator, like a generalized Pauli matrix. How do you take the log of zero in that case?

Yeah, so perhaps this was unfortunately hidden in the term "measurement operator": by assumption here, the measurement operators are positive semidefinite operators, or I guess meaningfully positive definite operators, so the eigenvalues will all be positive.
Or at least there will exist at least one positive eigenvalue, so whenever you take the trace you're going to get a sum of maybe some zeros but then some positive numbers, and so the trace will always be some positive number. Does that answer it?

Thanks. Okay, thank you very much.

So I think we have a few more questions in the chat that were not answered. Before that, if I could ask just one question to Anthony about terminology: complexity and entropy are sometimes considered different notions. On the present slide you consider two limiting cases, one of low complexity and the other of high complexity. Why do we call it an entropy, and why do we call it a complexity?

Great, yes. The complexity part comes really just from the fact that the entropy takes explicit account of the complexity of the state. Consider the von Neumann entropy: if I take a pure state, no matter what its complexity is, I still just get a von Neumann entropy of zero. The von Neumann entropy, along with other common entropies, often depends only on the spectrum of the state and so is insensitive to differences in complexity. This tells us that, to have an entropy useful for quantifying these computationally restricted tasks, we need to use an entropy which takes explicit account of the complexity. The entropy we're introducing is really just a means to quantify optimal efficiencies for restricted tasks, much like, for example, the Shannon entropy is used to quantify the limits of data compression. Part of the intuition here is that we're looking at the state's apparent randomness, as I noted before.
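The von Neumann point just made can be stated in one line (a standard fact, recorded here for reference):

```latex
% For any pure state, however complex the circuit needed to prepare it,
% the von Neumann entropy vanishes:
S\!\left(|\psi\rangle\langle\psi|\right)
  \;=\; -\operatorname{Tr}\!\left[\rho \ln \rho\right] \;=\; 0 ,
% since the spectrum of |\psi\rangle\langle\psi| is (1, 0, \ldots, 0).
% Spectrum-only entropies therefore cannot distinguish complex pure
% states from simple ones.
```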
So that gives you some intuition and motivation for why we use "entropy"; the "complexity" part really just comes from our need to account for complexity whenever we're talking about computationally restricted tasks.

Okay, thank you very much. It was very interesting research and a very interesting talk, indeed. So I would suggest that we now read the remaining questions in the chat that were not yet answered. Sorry if I am not pronouncing the names well.