I am a postdoc in math biology. At IGS, we do have a math biology group, although most of its members are mathematicians. Our group works on several subjects: for example, theoretical models to explain morphogenesis, data analysis in developmental biology, metrics on the space of trees together with the corresponding algorithms, and inference on tissue transplantation, which is today's topic.

So let's consider an experiment where we take one piece of tissue, graft it onto another tissue, and then observe how the grafted tissue behaves. To simplify the problem, we only consider two possibilities: either the grafted tissue develops normally, or it develops abnormally. For example, consider Xenopus laevis, a kind of frog. If we take a piece of the upper lateral lip from the donor and transplant it to the lower lip of the host, the result is normal, meaning that the transplanted tissue develops as if it belonged to the host.

Here is the table of results from the 2010 paper. We have seven donors and seven hosts, and each entry is an experiment with that donor and that host. N means normal, A means abnormal, and the question marks are experiments whose results we don't know. So the question is: given the known results, how do we infer the unknown ones?

The core idea is that similar experiments should have similar results, where "similar experiments" means they have similar donors and similar hosts. With this idea, we can use the known results to infer the unknown ones. Here we assume that the similarities between experiments are already known.

So let's consider this experiment similarity chart. This red term is an experiment with the donor and host shown here, and its result is unknown. It is similar to four different experiments with known results. Here a double line means high similarity.
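As a small sketch (not from the talk itself), the donor-host results table could be represented like this, with hypothetical donor/host labels and a few illustrative known entries:

```python
# Sketch of the donor-host results table: 'N' = normal, 'A' = abnormal,
# None = unknown result to be inferred. The labels and the filled-in entries
# below are hypothetical, not the actual rows of the 2010 paper.
donors = [f"D{i}" for i in range(1, 8)]
hosts = [f"H{j}" for j in range(1, 8)]

results = {(d, h): None for d in donors for h in hosts}
# Fill in a few illustrative known results.
results[("D1", "H1")] = "N"
results[("D1", "H2")] = "A"
results[("D2", "H1")] = "N"

# Collect the experiments whose results must be inferred.
unknown = [pair for pair, r in results.items() if r is None]
print(len(unknown))
```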
A single line means medium similarity, and no line means low similarity. You can see that this experiment with an unknown result is similar to three experiments with result normal and one with result abnormal, so we would expect the result to more likely be normal. Here is the second part of the similarity chart: five experiments with unknown results, and the experiments with known results that they are similar to. And here is the last part.

So we want to design a penalty function that evaluates guesses of the unknown experimental results, where there is a penalty whenever two similar experiments have different results. For the concrete form of this function, we can get some inspiration from the Ising model.

The Ising model describes ferromagnetism in statistical mechanics. We consider a set of lattice sites, and each site has a variable that takes the value plus or minus one, depicted here by an up or down arrow. For each configuration of these variables, the energy is defined as minus the sum, over neighboring pairs of sites, of the coupling times the two site values: if the values of two neighbors are the same, the energy is smaller. We assume there is no external field. With this energy function, we can define the probability of each configuration, which has an exponential form. A configuration with high energy, meaning a high penalty, has small probability; therefore, neighboring sites tend to take the same value.

Now we can see the analogies between the tissue transplantation experiments and the Ising model. For tissue transplantation, we have an experiment similarity chart with many experiments, some of which are similar and therefore linked; each experiment has a binary result, either normal or abnormal. For the Ising model, we have a lattice of many sites, some of which are neighboring; each site has a variable that is plus or minus one. And for tissue transplantation, we have a penalty if similar experiments have different results.
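The energy and the exponential probability just described can be sketched in a few lines. This is a minimal illustration, assuming unit couplings and an effective temperature of one; the talk's slides use their own coupling values, which are not in the transcript:

```python
import math

# Ising-style energy E(s) = -sum over linked pairs (i, j) of J_ij * s_i * s_j,
# and the corresponding exponential (Boltzmann) probability P(s) ~ exp(-E(s)).
def energy(spins, couplings):
    # spins: list of +/-1 values; couplings: {(i, j): J_ij} for linked pairs
    return -sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

def boltzmann_probs(configs, couplings):
    weights = [math.exp(-energy(s, couplings)) for s in configs]
    Z = sum(weights)  # normalizing constant
    return [w / Z for w in weights]

# Toy example: one unknown experiment (index 0) linked to four known ones
# with results normal, normal, normal, abnormal. All couplings are set to
# J = 1 here, which is an assumption for illustration only.
known = [+1, +1, +1, -1]
couplings = {(0, k + 1): 1.0 for k in range(4)}
p_normal, p_abnormal = boltzmann_probs([[+1] + known, [-1] + known], couplings)
```

With these unit couplings, the "normal" guess gets probability about 0.98; different coupling strengths, like those on the talk's slides, give different numbers.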
The same holds for the Ising model: if neighboring sites have different values, we also get a penalty. This gives us the last analogy. The Ising model has an energy function describing each configuration; so what should the penalty function be for tissue transplantation? The answer is that we can directly use the same energy function. We need to stress that this is a pure analogy; it does not imply any physical correspondence between tissue transplantation and ferromagnetism.

So we regard normal as plus one and abnormal as minus one, and if two experiments are similar, we set the corresponding coupling coefficient to be positive. The penalty function then has the same form as the Ising energy, and likewise we have the probability of each configuration.

For example, consider this experiment similarity chart, where we have an experiment with an unknown result that is similar to four other experiments with known results. If we set the result of this experiment to normal, we can calculate the penalty function, which is minus three; if we set it to abnormal, the penalty is three. So abnormal has a higher penalty. We can then calculate the corresponding probabilities: the probability of normal is 0.665, so normal is the most probable guess for this unknown result.

In general, for each configuration of the unknown results, we calculate the penalty and then the probability of that guess. So here is the final result: we consider each guess for the unknown experiments, shown in red, calculate the probability of each one, and choose the one with the largest probability. Here we present the most probable guesses. We can also calculate the expectation over all guesses, namely the probability for each experiment with an unknown result to be normal. You can see that most experiments tend to be normal.
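The inference step just described can be sketched by brute-force enumeration: weight every assignment of the unknown results by exp(-penalty), then pick the most probable assignment and read off the marginal probability of "normal" for each unknown experiment. The function name and the toy example below are mine, not the talk's:

```python
import itertools
import math

def infer_unknowns(known, unknown_ids, couplings):
    """Enumerate all +/-1 assignments of the unknown results, weight each full
    configuration by exp(-penalty), and return the most probable assignment
    plus the marginal probability of 'normal' (+1) for each unknown."""
    assignments = list(itertools.product([+1, -1], repeat=len(unknown_ids)))
    weights = []
    for a in assignments:
        spins = dict(known)
        spins.update(zip(unknown_ids, a))
        penalty = -sum(J * spins[i] * spins[j]
                       for (i, j), J in couplings.items())
        weights.append(math.exp(-penalty))
    Z = sum(weights)
    best = assignments[max(range(len(assignments)), key=lambda k: weights[k])]
    marginals = {
        u: sum(w for a, w in zip(assignments, weights) if a[n] == +1) / Z
        for n, u in enumerate(unknown_ids)
    }
    return best, marginals

# Toy example: one unknown experiment 'x', similar to two normal results and
# one abnormal result, with unit couplings (an assumption for illustration).
known = {"a": +1, "b": +1, "c": -1}
couplings = {("x", "a"): 1.0, ("x", "b"): 1.0, ("x", "c"): 1.0}
best, marg = infer_unknowns(known, ["x"], couplings)
```

Exhaustive enumeration is exponential in the number of unknowns, so it only works for small tables like the one in the talk; larger charts would need sampling or similar approximations.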
So far we have considered the situation where all the known results are deterministic, either normal or abnormal. The next situation is that the results are not deterministic but stochastic. In this experiment, we have five donors and three hosts, and each experiment comes with a percentage of normal results.

The solution here is simple: we have some stochastic results, and we can decompose them into several deterministic configurations with different probabilities. For example, consider this part of the table. These stochastic results can be decomposed into four different deterministic configurations, whose probabilities are determined by the given percentages. With this decomposition, for each deterministic configuration we apply the method just described and calculate the expectation of all the guesses, and then we take the expectation over all the deterministic configurations. So here is the final result: for each experiment with an unknown result, we infer the percentage of the result being normal.

The last part of my talk is about experimental design. Assume there are many tissue transplantation experiments, and so far we don't know any results; the goal is to know all of them. Since we have the inference method, we don't need to conduct all of them; we can conduct some of them and use those results to infer the others. The question is: how should we choose which experiments to conduct? On one side, we need enough experimental data to perform the inference. On the other side, we want to minimize the experimental cost, meaning we don't want to conduct too many experiments.

Remember that the result of a non-conducted experiment is inferred from the similar conducted experiments. Therefore, to guarantee the inference quality, each non-conducted experiment should be similar to at least K conducted experiments.
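The decomposition of stochastic known results into weighted deterministic configurations can be sketched as follows, assuming (as the talk's product construction suggests) that the experiments are independent:

```python
import itertools

def decompose(stochastic):
    """Decompose stochastic known results into deterministic configurations.
    stochastic: {experiment: probability of normal}. Yields (config, prob)
    pairs, where config maps each experiment to +1 (normal) or -1 (abnormal)
    and prob is the product of the independent per-experiment probabilities."""
    exps = list(stochastic)
    for outcome in itertools.product([+1, -1], repeat=len(exps)):
        prob = 1.0
        for e, s in zip(exps, outcome):
            prob *= stochastic[e] if s == +1 else 1.0 - stochastic[e]
        yield dict(zip(exps, outcome)), prob

# Two stochastic results (80% and 50% normal) decompose into four
# deterministic configurations; the numbers here are illustrative only.
configs = list(decompose({"e1": 0.8, "e2": 0.5}))
```

Each deterministic configuration can then be fed to the inference method, and the final answer is the probability-weighted average of the per-configuration expectations.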
For example, K = 2 or K = 1. The most efficient design, meaning the one with minimal cost, is that no two conducted experiments are similar to each other, and each non-conducted experiment is similar to exactly K conducted experiments.

So consider this figure, where each unit is an experiment and neighboring units are similar experiments. Each black unit is a conducted experiment, and each white unit is a non-conducted experiment. Now the question is: given this figure, how should we color it so that no two black units are neighboring, and each white unit neighbors exactly K black units?

For K = 4, this is the coloring. You can see that no two black units are neighboring, and each white unit not on the boundary neighbors exactly four black units. This is the coloring for K = 2: black units are not neighboring, and each white unit not on the boundary neighbors two black units. For K = 4, we have half white and half black; for K = 2, one third of the units are black. For K = 1, black units are not neighboring and each white unit neighbors exactly one black unit; here one fifth of the units are black.

In practice, the experiment similarity chart is not a two-dimensional figure like this, but four-dimensional. The problem is that we cannot really draw a four-dimensional figure, so we need a more abstract method. For example, for K = 8, we give each unit a four-dimensional coordinate (x, y, z, w) and color it black if the coordinates satisfy a certain equation, a condition on the sum of these four numbers. In this case, we conduct one half of the total experiments. Similarly, we can define colorings for K = 4, K = 2, and K = 1, where we need to conduct one third, one fifth, and one ninth of the total experiments, respectively. In practice, K = 2 or K = 1 is enough for satisfactory inference.
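For the K = 8 case in four dimensions, one concrete condition on the coordinate sum (my reading of the construction, not a quote of the slide) is a parity rule: color (x, y, z, w) black when x + y + z + w is even. The sketch below verifies, on a small periodic grid, that this coloring conducts half the experiments, never places two conducted experiments next to each other, and gives every non-conducted experiment exactly 8 conducted neighbors:

```python
import itertools

def check_coloring(side=4, dim=4):
    """Verify the parity coloring on a dim-dimensional torus of even side
    length: black = 'conducted' when the coordinate sum is even."""
    def is_black(cell):
        return sum(cell) % 2 == 0

    def neighbors(cell):
        # The 2*dim units differing by +/-1 in one coordinate (wrapping around).
        for axis in range(dim):
            for step in (-1, +1):
                n = list(cell)
                n[axis] = (n[axis] + step) % side
                yield tuple(n)

    cells = list(itertools.product(range(side), repeat=dim))
    blacks = sum(is_black(c) for c in cells)
    for c in cells:
        black_nbrs = sum(is_black(n) for n in neighbors(c))
        if is_black(c):
            assert black_nbrs == 0        # no two conducted experiments adjacent
        else:
            assert black_nbrs == 2 * dim  # K = 2*dim conducted neighbors
    return blacks / len(cells)           # fraction of conducted experiments

fraction = check_coloring()
```

The wraparound (torus) convention sidesteps the boundary effects mentioned in the talk; on a finite figure, white units on the boundary have fewer neighbors.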
Therefore, for the two-dimensional case, we only need to conduct one fifth to one third of the total experiments, and for the four-dimensional case, only one ninth to one fifth. We then use the results of these conducted experiments to infer all the other, non-conducted experiments. We can see that the more experiment similarities we have, the fewer experiments we need to conduct. Okay, that's all. Thank you.