So welcome to the tutorial session. This afternoon's tutorial will be given by Emmanuel Prouff. Emmanuel is one of the best experts in the field of crypto implementations, side-channel attacks, and side-channel countermeasures, so we are very happy to have him give this tutorial. This afternoon he will speak about securing cryptographic implementations in embedded systems. Thank you, Emmanuel. — Thank you for the introduction. Thank you, everybody, for being here instead of visiting this very nice city, and thank you to the PC chairs for the invitation. The purpose of this tutorial is to present some concepts and methods which are used when it comes to implementing cryptographic algorithms in embedded systems with limited resources, like, for instance, a smart card. The core observation of side-channel analysis is that the behavior of a device on which a cryptographic algorithm is processed depends on the values which are manipulated during this processing, and especially on some parts of the secret parameters used by the cryptographic algorithm. To illustrate this, let us focus on a very simple processing. Here I have an S-box operating on the bitwise addition of a known part X and a secret part K. X is assumed to be known by the adversary, and K is the secret part he wants to recover; I denote by Z the output of this processing. I run this process on a smart card, and during the processing I measure the power consumption. The first time, I choose the value of the key equal to 1, and I measure the power consumption for, let's say, 1,000 operations, for 1,000 different values of X generated at random with a uniform distribution. So I get 1,000 measurements.
From those measurements I build the distribution of the power consumption over time, and I get something like this. On the X axis you have the time, essentially the period dedicated to this processing; on the Y axis you have the value of the power consumption; and on the Z axis you have the probability that some given power-consumption value is taken at some time. This is what I get for key equal to 1, and this is another PDF I get for key equal to 2. We observe that the distribution is not the same and depends on the value of the key, and I can repeat this for key equal to 3, key equal to 4, and so on. This is just to show you that there is indeed a dependency, a one-to-one correspondence, between the distribution of the power consumption during the processing and the value of the key which has been used to parameterize the processing. In fact, most of the studies dealing with side-channel attacks try to exploit this dependency in some way, to perform a distinguishing attack and recover the key. The field of side-channel analysis, so essentially the CHES community, can be split into two main families of subjects: the first one deals with the attacks, and the second one deals with the countermeasures. The main issue for the attacks is the following: against each cryptosystem and each implementation of this cryptosystem, find the most efficient side-channel attack. This big problem comes with some sub-questions, like how to define the efficiency of such an attack, which attack parameter to improve, whether there are common trends between all the attacks which are developed each year, and also the problem of distinguishing the concept of an attack from the concept of a characterization. This was a little bit the subject of the presentation by François-Xavier yesterday.
During this tutorial I will focus on the second problem, so I will not enter into the details of the questions regarding the attacks. For the countermeasures, the main problem is: for each cryptosystem, find efficient and effective countermeasures. Efficient meaning that it must be practical, so it must work in a reasonable time with reasonable resources; and effective in the sense that it will indeed defeat all the practical attacks which can be performed against my product, and especially the attacks which can be performed by a security lab. So I don't want perfect security; I want to achieve security at a reasonable cost. Once again, this problem comes with some questions. First, how to formally define the fact that a countermeasure defeats some side-channel attack; which countermeasure for which side-channel attack; and what makes one cryptosystem more vulnerable to side-channel attacks than another? Are there some structures which are favorable to side-channel attacks, and other ones which render the algorithm more resistant? Okay, I'm pretty sure I don't have to convince you that when we want to build a countermeasure and guarantee security, we also want to formally define what the security is, what the strength of the adversary is, and so on. So we need security models. I'm pretty sure I don't have to convince you, but most of the time in industry this is not so obvious, and when we build models we have to start by convincing our boss, or sometimes also our colleagues, that we need models before defining the countermeasures. If we want evidence of that, we just have to look at the literature: when people started to work on this subject, they tried different solutions, but without formally proving the security.
And of course, most of the time, because there was no proof of security, there were quite efficient attacks against the countermeasures which were developed at that time. And even against some countermeasures with proofs, because a proof can be flawed. So is it sufficient to have a proof in some model? The answer is no, because the model can be imperfect, and so having a proof for a countermeasure is not sufficient. After proving the countermeasure, it's better, especially if you want to put the countermeasure in a product, to test it in a lab, with real measurements running on a real product, a product which doesn't necessarily follow the model. To sum up this slide: proofs help designers achieve measurable security. They help because, with a proof, I am able to guarantee that if the model is good, then my countermeasure is effective. But they do not prevent evaluators from testing theoretically impossible attacks, because most of the time such attacks still apply in practice. Now, an introduction to the different countermeasures. The main remark, as I showed you in the first slide, is that there is a dependency between the device behavior and the values of the secret parameters. If I try to formalize what the efficiency of a side-channel attack is in my field, I will say that it is related to the amount of measurements needed to extract, from the measurements, information which depends on the secret parameters. And what has been shown is that this efficiency depends, quite clearly, on the noise in the observation. For instance, if we model an observation during the manipulation of a value Z by the device as some function operating on this value plus a noise, then we see that if I remove the noise, it is easy for the adversary to mount an attack just by observing this.
And the difficulty for the adversary will be to remove or to average this noise in order to extract information from the measurements. So the core idea in the countermeasures we are going to build is to define mechanisms to increase this noise or, in other terms, to decrease the signal-to-noise ratio. A first solution is to increase the noise in the component. This can be achieved by adding a lot of dummy processings during the cryptographic processing, in order to hide the sensitive processing among many dummy operations. But this is very costly, and most of the time we don't want to do that. The other idea we are going to develop here is to use secret-sharing techniques to split the processing and the data in such a way that the adversary, in order to recover the information related to the secret parameter, will need to combine noisy pieces of information. For instance, if we want to operate on Z, an idea can be to first split Z into D shares. This is done during the personalization of the device; the splitting of Z itself is not done on the device, but before. Then on the device we store the shares, and when we process the shares, manipulate the shares, we do these manipulations at different times, so each manipulation will be impacted by a different noise. In order to recover information on the unshared value Z from the D shares, the adversary will need to combine this observation with that one, and so on. So he will need to combine, essentially to multiply, observation noise. If we assume that the noise has standard deviation sigma, then the combination of all those observations to rebuild information which is statistically dependent on Z will be impacted by a noise whose magnitude grows like sigma to the power D.
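The splitting step just described can be sketched as follows. This is a minimal illustration of Boolean (XOR) sharing of a byte, with function names of my own choosing, not the speaker's personalization code:

```python
import secrets

def share(z: int, d: int) -> list[int]:
    """Split the byte z into d random shares whose XOR equals z."""
    shares = [secrets.randbelow(256) for _ in range(d - 1)]
    last = z
    for s in shares:
        last ^= s          # force the XOR of all shares to be z
    return shares + [last]

def unshare(shares: list[int]) -> int:
    """Recombine the shares: XOR them all together."""
    out = 0
    for s in shares:
        out ^= s
    return out
```

Any strict subset of the shares is uniformly distributed and carries no information on z; only the full set recombines to the sensitive value.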
So this is the main reason why we are going to use sharing techniques, also known in our field as masking, to protect implementations: to increase the noise artificially without adding dummy operations in the circuits or in the implementations. Against those techniques, the adversary's game is, in the implementation, to find D or fewer intermediate variables that jointly depend on the unshared value Z. If I can prove that such a tuple of at most D variables does not exist in my implementation, then I will say that I have security at order D. And so the developer's game is to ensure that such a tuple does not exist, and to prove it in some model. In the rest of my talk, I will see an implementation as a sequence of elementary operations which read memory locations and write their results in another memory location. So I will be in a model which is sometimes named "only computation leaks": I will assume that when I'm measuring something during a processing, I'm measuring only something which is related to this processing. There is no remanence, and there are no cache effects which relate my measurement at time T to things which happened at times T' strictly smaller than T. Okay, so what are the different approaches which have been followed to apply those techniques? There are essentially two big approaches. The first one is to design cryptographic algorithms which are inherently resistant to side-channel analysis under some model. Essentially this approach was born with the work of Dziembowski and Pietrzak in 2007 with their FOCS paper, and it interested a lot what I call the theoretical cryptography community.
They proposed a lot of models and a lot of constructions of cryptographic algorithms which are inherently resistant, meaning that I don't have to take care, during the implementation of those algorithms, about side-channel attacks: they will defeat all the attacks, at least in the model. The models which have been developed in this approach are, for instance, the bounded-retrieval model, which assumes that the overall sensitive leakage is bounded, which does not really fit the reality. So they proposed a more realistic model, leakage-resilient cryptography, the continuous leakage-resilience model, in which we assume that the leakage is limited for each invocation only. But once again, the problem with those models is that the assumptions are too strong. Both for the bounded-retrieval model and for leakage-resilient cryptography, the models are too strong, and so the constructions are far from being practical. The second problem of this approach is that it allows us to define cryptographic algorithms which are inherently resistant, but it does not show us how we can protect an existing algorithm, for instance AES or RSA. So it's not this kind of approach which can help us protect most of the products. That's why we follow another approach, as I explained in the previous slides, which is based on secret-sharing techniques. The idea of using sharing techniques to protect cryptographic implementations was introduced in two papers simultaneously: a paper by Goubin and Patarin in 1999, and a paper by Chari, Jutla, Rao and Rohatgi in 1999 at Crypto. Now, the soundness of this approach: this is a formal way to present the ideas I showed you three slides ago. The core result, if we speak about a single bit, is the following.
If I share one bit x into d+1 shares, and if I add Gaussian noise with variance sigma squared, then the complexity, the number of observations needed to distinguish the distribution of the observations Li when the shared bit is zero from the distribution of those observations when the shared bit is one, is lower bounded by something which increases exponentially with the number of shares. So this is a very important observation, which explains that we can build security if we have noise. Here we see that noise is crucial: if we don't have noise, sharing techniques do not bring security. We will use the noise provided by the device, what we call the electronic noise, plus the observation noise coming from the experimentation, and we will find the appropriate D to get a certain level of security. Okay, I showed how we can split a value into shares in order to protect the manipulation of that value. But now what I would like to do is to compute on a sensitive value. So my second question is: how can I securely process a value which has been shared, while maintaining the security at order D? And not only do I have to find techniques to do that, but I also have to find models in which I can prove that my implementation is secure. For this purpose, in order to prove the soundness of my approach, I have essentially two models. The first one is the probing adversary model, which was introduced by Ishai, Sahai and Wagner in a paper at Crypto 2003. The other approach is to use an information-bounding model. And in fact we are lucky: the two approaches can be unified, and have been unified, in a paper at Eurocrypt by Duc, Dziembowski and Faust.
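The bit-sharing result can be illustrated with a small Monte Carlo sketch. This is my own toy simulation, not the experiment from the talk: each share of a bit leaks its value plus Gaussian noise, and the adversary combines the leakages with a centered product. With two shares the two means are clearly separated; adding shares shrinks the separation geometrically while the noise on the estimate grows, which is the exponential hardening being described.

```python
import random

def centered_product_stat(bit: int, n_shares: int, sigma: float,
                          n_traces: int, rng: random.Random) -> float:
    """Average the product of the centered noisy leakages of the shares.

    The sharing is Boolean: the XOR of all shares equals `bit`.  Each
    share leaks its value plus Gaussian noise of std-dev `sigma`.
    """
    acc = 0.0
    for _ in range(n_traces):
        shares = [rng.randrange(2) for _ in range(n_shares - 1)]
        last = bit
        for s in shares:
            last ^= s
        shares.append(last)
        prod = 1.0
        for s in shares:
            prod *= (s - 0.5) + rng.gauss(0.0, sigma)
        acc += prod
    return acc / n_traces

rng = random.Random(1)
# With 2 shares the expected means are +0.25 and -0.25; each extra
# share halves the separation while the estimator's noise grows, so
# exponentially more traces are needed to distinguish the two cases.
m0 = centered_product_stat(0, 2, 0.5, 100_000, rng)
m1 = centered_product_stat(1, 2, 0.5, 100_000, rng)
```
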
This is quite important, because this model comes with quite effective proof techniques, but it is limited to adversaries which can only probe some wires in a circuit, where a circuit is just an abstraction of a program. We split the program into elementary operations; most of the time those elementary operations are bitwise additions and multiplications, for instance. A Boolean circuit is typically built from XOR gates and AND gates, but we can extend the definition to any kind of processing in any field. In the probing adversary model, the adversary is assumed to be able to probe, so to observe, the output of each gate, so of each elementary operation. And if we say that the adversary is at order D, we limit the number of probes, so of intermediate results he can observe, to D: he is not allowed to observe more than D intermediate results. So we have a model quite easy to deal with if we want to perform proofs, but it's not very practical, because nothing prevents a real adversary from observing D+1 intermediate results instead of D. That's why the second model has been introduced, which essentially says: I give the adversary all the intermediate results with noise, so all the intermediate results are noisy; I have applied some sharing techniques to split the processing and the manipulation of intermediate results; and this model will try to upper bound the amount of information which is provided by the full tuple of observations about the secret parameter. So I try to upper bound the mutual information between the observation of all the intermediate results and the secret parameter. And what has been shown in this paper is that if I can prove the security in the probing model at order D, then I can deduce an upper bound on the mutual information.
Okay, so most of the time I will use the probing model just to have a first idea of the security of my countermeasure. If I want to prove the security of my countermeasure at order D equal to one or two, it's quite simple, it's not too costly: I can list all the intermediate results occurring during the processing and check not only that each intermediate result is statistically independent of the secret parameter, but also that all pairs of intermediate results are statistically independent of it. If the processing is not too huge, I can do that quite exhaustively. But beyond small orders this approach is too costly, and we need other techniques. It is exactly to answer this problem that Ishai, Sahai and Wagner proposed techniques based on simulation, in order to prove the security of simple processings at any order D with pen-and-paper proofs. The idea behind the techniques used by Ishai, Sahai and Wagner is the following. We have two players: the adversary, who can observe any D-tuple of intermediate results but not more, and an oracle with access to a strict subset of the input shares. The oracle has no access to all the shares, so the oracle has no access to information which depends on the secret parameter. This is important, because the game is to prove that for any D-tuple of intermediate results there exists an assignment for the oracle, a definition of a simulation, such that this oracle can simulate the adversary's view of the processing. And because this simulation is done with a strict subset of the shares, we are sure that the adversary cannot recover information on the secret from his observations, because the oracle can do exactly the same thing as the adversary, and the oracle has no information about the secret. This method works quite well for simple schemes.
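The exhaustive order-1 check described above can be sketched for a toy masked computation; the operation and the variable names are invented for illustration. Each single intermediate has the same distribution whatever the sensitive value, while a pair of intermediates can jointly depend on it:

```python
from collections import Counter

def intermediates(z: int, r: int) -> list[int]:
    """Intermediate values of a toy first-order masked computation of
    z ^ 0x0F: the sensitive nibble z is handled only through the
    shares (z ^ r) and r."""
    z1 = z ^ r          # masked share
    z2 = r              # mask share
    y1 = z1 ^ 0x0F      # linear operation applied to one share
    return [z1, z2, y1]

def is_independent(idx: int, bits: int = 4) -> bool:
    """True iff intermediate `idx` has the same distribution for every
    value of the sensitive z (mask r uniform)."""
    dists = []
    for z in range(1 << bits):
        c = Counter(intermediates(z, r)[idx] for r in range(1 << bits))
        dists.append(c)
    return all(d == dists[0] for d in dists)

def pair_independent(i: int, j: int, bits: int = 4) -> bool:
    """Same check for the joint distribution of a pair of intermediates."""
    dists = []
    for z in range(1 << bits):
        c = Counter(tuple(intermediates(z, r)[k] for k in (i, j))
                    for r in range(1 << bits))
        dists.append(c)
    return all(d == dists[0] for d in dists)
```

Every single intermediate passes the check, but the pair (z ^ r, r) fails it: combined, the two shares reveal z, exactly the order-1 versus order-2 gap being discussed.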
For instance, for multiplications it has been applied in the original paper by Ishai, Sahai and Wagner, but it's difficult to extend to a very general processing; the proofs can be quite difficult to do. Fortunately, there have been efforts by a team of researchers in formal methods led by Gilles Barthe; in particular, there is a paper by Barthe, Belaïd, Dupressoir, Fouque, Grégoire and Strub, where they show that they have developed a tool, a software, which can prove the security of an implementation at any order D in a quite reasonable time. Quite reasonable means that we can prove the security for simple schemes, a little bit more complex than what we can do with pen and paper, but not so much. But this is already very interesting, because it allows us to ensure that there are no flaws in the pen-and-paper proofs. So this is a very, very important result for masking countermeasures. Now, about the information model, I would like to say a few words. The notion was introduced by Chari et al. at Crypto '99, and it is essentially the argument that if I share a bit into d+1 shares, and if I add noise on each share, then the complexity to distinguish the distribution related to the sharing of zero from the distribution related to the sharing of one increases exponentially with the number of shares. This was the result in the '99 paper, and it has been extended in recent papers to the sharing of any data and also to the sharing of a processing. The most important fact in this slide is the following one.
If I know that the amount of information leaking during the manipulation of each share Zi is upper bounded by some quantity related to one over sigma squared, then I know that the difficulty of recovering information on the unshared data will increase exponentially with d, so the amount of information decreases exponentially with d. So I just have to satisfy this bound. If I want to use this result to build security, either I measure the noise in the observations and I deduce the order D needed to achieve the security, or I do the opposite: I fix the order and increase the noise in order to get the security level that I wish. Okay, let us just look at a very simple example of what can be done with sharing, for our simple example of the processing of an S-box. Here I process the AES S-box, and I assume that this processing is done with two registers: R0 contains the plaintext byte X, R1 contains the key byte K, and I save the result in a third register, R2. I write a very simple assembly code for this operation, assuming that the S-box is represented by a lookup table, and once again I measure the power consumption during this processing for different values of X. I get something like this. This processing, even if it can be seen as very simple, takes some amount of time; it's not performed in one instant, it takes some period. And depending on the capabilities of the oscilloscope you are using, you can have a lot of measurement points during this very simple processing.
So if I want to detect whether I have a security flaw in this implementation, in other terms, if I want to detect whether there is a dependency between the power consumption at some time and the value of the secret, I can perform, as discussed in the presentation by François-Xavier yesterday, a t-test, for instance. François-Xavier suggested other methods, but I can try this one. A t-test is simple: I fix the value of the plaintext and compute the mean of the traces when the plaintext is fixed; then I compute the mean when the value of the plaintext ranges over the set of 8-bit values; and I take the difference between those two means and, to normalize, I divide by the variances. If I have something which depends on the output of the S-box, I should see a peak, because the mean of the fixed set should differ from the mean of the random set. If there is no dependency, the means at the point I am considering cancel, and I should have something very small, close to zero. Okay, I do that for this processing and, as expected, I see big peaks, which show that there is a dependency: there is indeed something manipulated here which depends on the secret parameter. So now I can try to securely implement this S-box computation, and for that I can apply my sharing techniques. In order to do that, a solution is to split the processing of the S-box into very elementary sub-processings. For instance, if I want to protect the AES S-box, I can describe its processing like this: at least the non-linear part of the S-box is the raising to the power 254 in some finite field, GF(256). And in order to get this output, I have a sequence of operations, and among them I have to process x cubed, and so on.
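A minimal sketch of this fixed-versus-random t-test on simulated leakage follows. The S-box here is an arbitrary stand-in, and the traces are synthetic Hamming-weight leakages rather than real measurements; only the test statistic itself matches the description above.

```python
import math
import random

# Hypothetical stand-in S-box (any fixed bijection works for the demo).
SBOX_STANDIN = [(x * 7 + 3) % 256 for x in range(256)]
HW = [bin(v).count("1") for v in range(256)]

def trace(x: int, k: int, sigma: float, rng: random.Random) -> float:
    """One simulated leakage sample: HW of S(x ^ k) plus Gaussian noise."""
    return HW[SBOX_STANDIN[x ^ k]] + rng.gauss(0.0, sigma)

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic: difference of means over pooled std error."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / (len(a) - 1)
    vb = sum((v - mb) ** 2 for v in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

rng = random.Random(0)
k = 0x2A
fixed = [trace(0x00, k, 1.0, rng) for _ in range(5000)]
randm = [trace(rng.randrange(256), k, 1.0, rng) for _ in range(5000)]
t = welch_t(fixed, randm)
```

On this unprotected simulated leakage, |t| lands far above the usual detection threshold of 4.5, which is the peak the speaker observes on the real traces.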
Okay, so now my problem of securing the full S-box is reduced to securing each intermediate result during this processing. In particular, I would like to protect this processing at least at order one, meaning against an adversary who can only observe one intermediate result. The adversary in my game is not allowed to observe two intermediate results. It does not mean that, in practice, real adversaries against my product cannot use two intermediate results; I'm just assuming that combining two intermediate results combines too much noise, and so it will require a lot of measurements to extract information from the combination of two points. So I limit the adversary in my game to one who is only able to observe one intermediate result, not two. Okay, so my problem is to securely process x cubed, that is, x times x squared. I first build a sharing of x; I assume that this sharing is given to me. Here x is itself the bitwise addition of the message M and the key K, and x1 and x2 are defined accordingly. Now I develop the processing in order to see how to split it into two parts, each separately independent of the secret, independent of x. If I develop x cubed with this sharing, because I am in a field of characteristic two, I get this expansion. Of course, I cannot process it directly, because some terms depend on x. This monomial does not depend on x, and neither does this one, but those two monomials do, so it's not sufficient. So I add a random value here and a random value here, and now I split the processing like this.
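The masked computation of x·x² can be sketched in code. Note this is not literally the slide's formula: as a sketch I use the standard ISW-style masked multiplication of the shares of x and of x² (squaring is linear in characteristic two, so the shares of x² are just the squares of the shares), which achieves the same first-order goal with one fresh random value.

```python
import secrets

def gf_mul(a: int, b: int) -> int:
    """Multiplication in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
    return p

def masked_cube(x1: int, x2: int) -> tuple[int, int]:
    """Return shares (y1, y2) with y1 ^ y2 == (x1 ^ x2)^3, first-order
    secure: no single intermediate depends on the unshared x."""
    a1, a2 = x1, x2                          # shares of x
    b1, b2 = gf_mul(x1, x1), gf_mul(x2, x2)  # shares of x^2 (squaring is linear)
    r = secrets.randbelow(256)               # fresh randomness
    y1 = gf_mul(a1, b1) ^ r
    y2 = gf_mul(a2, b2) ^ r ^ gf_mul(a1, b2) ^ gf_mul(a2, b1)
    return y1, y2
```

XORing the output shares recovers x³, while each computed intermediate is either independent of x or masked by r.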
I will say that, in order to compute x cubed, I first compute Y1 defined like that, then Y2 defined like that, and it can be checked that those two processings manipulate data which are independent of x. Okay, so I gave you a sketch of proof, but as I said, a proof, even in some good model, is not sufficient, and we always have to test it in practice. So I test it in practice: I implement the processing in assembly. I assume that I have a field multiplication; in fact it is not tabulated directly, I have a function which processes the field multiplication with log and antilog tables. I do that in order to prevent cache effects, and also in order to prevent cancellation of the random values R and R' which are used here. I also add some lines of code which zeroize the registers between operations and which prevent cache effects. If I do that, with this code and the hardening I shortly described, then I get a power consumption trace like this. This trace by itself means nothing. So, to see whether I can detect a dependency, I perform my test, as suggested for instance by CRI; it is sometimes also called the CRI test, and it is just a t-test applied to power consumption measurements. And if I do that, I see that indeed there is no peak. So there is no peak in this t-test, which seems to tell me that there is no leakage of information, at least against an adversary who can observe only one intermediate result. Now I would like to see what happens if an adversary is allowed to combine two leakages, so two intermediate results, because with this splitting of the operation, combining two intermediate results should reveal a dependency.
For instance, when I process this, if I just combine the observation of x1 plus R with the observation of R, I will have something which depends algebraically on x, and so I should detect a statistical dependency. So I expect to see something if I consider two points. In order to test this, I take this power consumption trace and I build a new trace just by multiplying each point in the trace with all the other ones. Essentially, this squares the size of the initial trace, and I am sure that there are points in this new trace which contain information on the two shares, so which contain information on x. To verify this with the t-test, I perform the t-test on this new trace, and I see indeed that there is a dependency. This example is just to show you the kind of approach which can be followed when it comes to implementing a simple computation like an S-box. First I implement a non-protected version and measure the leakage. Then I split the processing into, here, two parts, measure the leakage, then measure the leakage when the points are combined, and so on; I can increase the order of the splitting until I achieve some given level of security. Okay, so now, if I want to formalize a little bit what I'm going to do and what I have to solve as a designer or as a researcher, I have two problems. The first issue is how to share sensitive data. In the previous slides I showed you how to share sensitive data bitwise, by bitwise addition, but maybe there are other sharings which can be more pertinent for my context. This is the first issue, and the second issue is, once I have chosen the sharing technique, how to securely process the shared data.
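The two-point combination can be sketched as follows, on synthetic two-point traces leaking the Hamming weight of each share (my own simulation, not the speaker's measurements): the first-order t-test on a single point stays flat, while the t-test on the centered product of the two points shows a clear peak.

```python
import math
import random

HW = [bin(v).count("1") for v in range(256)]

def welch_t(a, b):
    """Welch's t statistic between two sample sets."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / (len(a) - 1)
    vb = sum((v - mb) ** 2 for v in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def leak(z, sigma, rng):
    """Two leakage points of a first-order masked z: HW of each share."""
    r = rng.randrange(256)
    return (HW[z ^ r] + rng.gauss(0, sigma), HW[r] + rng.gauss(0, sigma))

rng = random.Random(0)
fixed = [leak(0x00, 1.0, rng) for _ in range(5000)]
randm = [leak(rng.randrange(256), 1.0, rng) for _ in range(5000)]

# First order: each point taken alone reveals nothing.
t_first = welch_t([a for a, _ in fixed], [a for a, _ in randm])

# Second order: center each point, multiply them, redo the t-test.
def products(traces):
    ma = sum(a for a, _ in traces) / len(traces)
    mb = sum(b for _, b in traces) / len(traces)
    return [(a - ma) * (b - mb) for a, b in traces]

t_second = welch_t(products(fixed), products(randm))
```

The centered product is the usual preprocessing for second-order analysis; it plays the role of the point-by-point multiplication of the trace described above.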
In fact, the first issue is related to secret sharing, to the techniques proposed by Shamir in '79, and also to the design of error-correcting codes with large dual distance. It has been shown that there is a one-to-one correspondence between the two problems, in a paper by Massey in 1993, and it has also been developed in a recent paper by Castagnos, Renner and Zémor in 2013; I will come back to this in a few slides. The second issue is related to secure multi-party computation, as recently shown in different papers and essentially in the original paper by Nikova et al. on threshold implementations. It is also related to circuit processing in the presence of leakage and, as I will show in a few slides, to the problem of efficient polynomial evaluation. So, and this slide is just to say that, fortunately we have a lot of theory in which we can find ideas to solve our problems. Okay, so the first sharing I can use is linear sharing. In a linear sharing with parameters N and D, I split the sensitive value Z into N shares, and I have some threshold, denoted here by D, which says that I need at least D shares to rebuild Z or, in other terms, that no subfamily of D-1 shares statistically depends on Z. In fact, in a very nice short paper by Massey in 1993, it has been shown that designing a linear sharing with parameters N and D is equivalent to building a code with length N+1 and with dual distance D, where the dual distance of a code is the minimum distance of the dual code. Okay, it's interesting, but who cares? In fact, it's very interesting for us, because it gives a general framework to describe and analyze all linear sharing schemes.
There are many, many linear sharing schemes that I can build and use, and it's good to know that I can describe all of them with the same theory. What is also nice is that it links my problem of finding a good linear sharing to problems which are well known in a rich community. Okay, let me just make the relationship between linear sharing and error-correcting-code theory explicit. In fact, sharing a data is equivalent to encoding. If I want to share a data Z, I first generate K-1 random values R1, ..., RK-1, and I use a matrix like this: here I have my random values, here I have Z, and here I have the redundancy of my encoding. All linear sharings can be described like this. So I have an encoding, a code, with a generator matrix in systematic form, so it can be written like this. And now I know that if I have an encoding of a sharing of a value Z, then when I multiply this encoding by the parity-check matrix, which can be deduced from the generator matrix in an easy way, I must get zero. This simply tells me that the vector belongs to the code, so that it is orthogonal to all of the dual code; that is the meaning of this equation. Okay, so I just rewrite the previous matrix and call it H, for parity-check matrix. Now I have that Z equals this: I'm just rewriting these equations, and I get an expression for Z. From this equation I deduce that the masking order of the sharing, meaning the minimum number of shares Zi which are needed to rebuild Z, is upper bounded by the minimum Hamming weight of the linear combinations of the columns of the parity-check matrix. But what is shown by Massey, and it's quite simple but too long to develop here, is that in fact this is exactly this minimum.
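The encoding view can be sketched for the Boolean sharing of a single bit (an illustrative toy of my own): a sharing (z1, ..., zn), with z appended, is a word of a length-(n+1) code whose single parity check is the all-ones vector.

```python
import secrets

def encode(z: int, n: int) -> list[int]:
    """Boolean sharing of bit z seen as an encoding: draw n-1 random
    bits, set the last share so that all n shares XOR to z, and append
    z itself to form a length-(n+1) codeword."""
    r = [secrets.randbelow(2) for _ in range(n - 1)]
    last = z
    for b in r:
        last ^= b
    return r + [last] + [z]

def parity_check(word: list[int]) -> int:
    """The all-ones parity check: the XOR of all coordinates must be 0
    for every valid sharing (z1, ..., zn, z)."""
    out = 0
    for b in word:
        out ^= b
    return out
```

Every word produced by `encode` satisfies the parity check, which is the H·c = 0 relation on the slide in its simplest possible instance.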
So this is the minimum weight of a linear combination of the columns of the parity-check matrix or, in other terms, the minimum weight of a vector in the dual code, and so this is the dual distance of the code. This just makes explicit the relationship between the dual distance of a code and the linear sharing. Okay, and now I can express my Boolean sharing as an encoding by a very simple matrix, so let me conclude with this slide. I can express my Boolean sharing like this: a very simple generator matrix with the identity matrix here and an all-ones column at the right of the matrix. And also Shamir's secret sharing, which consists in generating a random degree-d polynomial P(x) such that the evaluation of this polynomial at zero equals my sensitive value z: this sharing is equivalent to defining a Reed-Solomon code with parameters n+1 and d+1, where d is the degree of the polynomial in Shamir's polynomial sharing. So the main issue with linear sharing is to minimize the number of shares for a given d. If I want security at order two, for instance, I don't want the number of shares to be too big; I would like to have three or four shares, but not much more. Okay, I think we can stop here, take the break, and continue afterwards. — Yes, thank you very much, Emmanuel. So this was the first part of the tutorial. Are there any questions? If anybody has questions on the first part, otherwise we can take questions at the end of the tutorial. No questions? Okay, so maybe we can thank Emmanuel for this first part. And now is the coffee break; we meet again at 4:10 in this room.