Okay, so the next speaker is Mayowa Ayodele, and she will talk about a multi-objective QUBO solver for the bi-objective quadratic assignment problem. Thank you, my name is Mayowa Ayodele. Can everyone hear me fine? Okay, thanks. I work for Fujitsu Research of Europe. I'll talk about a work which is a multi-objective QUBO solver for the bi-objective quadratic assignment problem. This is a collaboration between Fujitsu and the University of Manchester. Today, I'll give a background to the Digital Annealer. I will then talk about the case study, which is the bi-objective quadratic assignment problem, after which I'll talk about the proposed method, which extends the Digital Annealer algorithm into a multi-objective solver. And finally, I'll compare the existing method with the proposed method. Quantum hardware solvers have been of research interest in recent years. There are, however, limitations of quantum hardware solvers, and other hardware solvers have been proposed to address some of these limitations. An example is Fujitsu's Digital Annealer, which is not a quantum hardware solver, but takes inspiration from quantum solvers to achieve some speed-up when solving optimization problems. Among its advantages are the fact that it is stable at room temperature and that it has a fully connected architecture. Regardless of the differences in hardware, the formulations of the problems are similar. QUBO formulations are often used for these architectures, and that's what we use in this study. In the QUBO formulation, we are trying to minimize the energy E(x) = x^T Q x + k, where x is our binary solution, Q is the QUBO matrix, and k is the constant term. In this work, we investigate methods of extending the Digital Annealer algorithm into a multi-objective solver. The case study is the bi-objective quadratic assignment problem, which is a variation of the well-known quadratic assignment problem, where there is a flow matrix, which signifies the flow between any two facilities.
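To make the QUBO energy concrete, here is a minimal sketch; the matrix values are made up purely for illustration and are not from the talk:

```python
import numpy as np

def qubo_energy(x, Q, k=0.0):
    """QUBO energy E(x) = x^T Q x + k for a binary solution vector x."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x + k)

# Toy instance with arbitrary coefficients.
Q = np.array([[1.0, -2.0],
              [0.0,  3.0]])
print(qubo_energy([1, 1], Q, k=0.5))  # 1 - 2 + 3 + 0.5 = 2.5
```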
There is a distance matrix, which represents the distance between any two locations. We want to minimize the product of the flow and the distance, summed across all pairs of facilities. The solution to this problem is a permutation, where each value in the permutation represents the location assigned to the corresponding facility. In the case of the bi-objective quadratic assignment problem, there are two flow matrices, and this models some practical applications, such as the hospital layout problem, where we might want to minimize the flows of doctors as well as the flows of patients. We use the two-way one-hot encoding, also known as the permutation matrix representation, where we convert the natural permutation into a representation of zeroes and ones. To ensure that, for a single-bit-flip solver like the Digital Annealer, valid solutions are encouraged, that is, solutions that can be decoded back to a valid permutation, we introduce a constraint function which ensures that every column and every row sums to one. We use two different formulations. Given that most QUBO solvers are single-objective solvers, in the single-objective case, that is, using the existing algorithm, it is often the case that we need to aggregate all of the objectives into one. So in the single-objective formulation, we introduce a scalarization weight, which is applied to the first objective, and another weight, which is applied to the second objective. We aggregate the objectives together with the constraint function by applying a penalty weight. In the case of the multi-objective formulation, because we can minimize multiple objectives in parallel, we define each objective separately. The constraint function is applied to the first objective using a penalty factor, alpha one, and the constraint is also applied to the second objective using a penalty weight, alpha two.
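The encoding, constraint, and scalarized formulation described above can be sketched as follows. This is an illustrative reconstruction, not the speaker's exact formulation: the function names are mine, the squared-deviation penalty is one standard way to encode the row/column one-hot constraint, and the matrices are toy values:

```python
import numpy as np

def perm_to_onehot(perm):
    """Two-way one-hot (permutation matrix): row i has a 1 in column perm[i]."""
    n = len(perm)
    X = np.zeros((n, n), dtype=int)
    X[np.arange(n), perm] = 1
    return X

def constraint_violation(X):
    """Penalty term: every row and every column of X must sum to exactly one."""
    rows = np.sum((X.sum(axis=1) - 1) ** 2)
    cols = np.sum((X.sum(axis=0) - 1) ** 2)
    return float(rows + cols)

def qap_objective(perm, F, D):
    """QAP cost for one flow matrix: sum_{i,j} F[i,j] * D[perm[i], perm[j]]."""
    perm = np.asarray(perm)
    return float(np.sum(F * D[np.ix_(perm, perm)]))

def scalarized_energy(perm, F1, F2, D, w, alpha):
    """Single-objective formulation: w*f1 + (1-w)*f2 + alpha * constraint."""
    X = perm_to_onehot(perm)
    return (w * qap_objective(perm, F1, D)
            + (1 - w) * qap_objective(perm, F2, D)
            + alpha * constraint_violation(X))

# Toy bi-objective instance: two flow matrices, one distance matrix.
F1 = np.array([[0., 1.], [1., 0.]])
F2 = np.array([[0., 3.], [3., 0.]])
D  = np.array([[0., 2.], [2., 0.]])
print(scalarized_energy([0, 1], F1, F2, D, w=0.5, alpha=10.0))  # 0.5*4 + 0.5*12 = 8.0
```

In the multi-objective formulation the two penalized objectives would instead be kept separate, each with its own alpha.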
To give a bit of context to the algorithm that supports the Digital Annealer: the Digital Annealer uses an algorithm similar to simulated annealing. I should note that this is the algorithm that supports the first-generation Digital Annealer. One of the main differences between the Digital Annealer algorithm and classical simulated annealing is the fact that the Digital Annealer is able to measure the difference in energy between one solution and all of its neighbors in parallel, so it can do this in effectively constant time, therefore achieving better speed-up. There's also the concept of a dynamic offset, which dynamically changes the acceptance threshold and is a way of escaping local optima. Multi-objective problems present their own peculiar challenges in the sense that there is not one optimal solution; there are many best solutions, and we have a Pareto front where we call those solutions the non-dominated solutions, while solutions behind this front are the dominated solutions, and they're the ones we don't want. The best solutions are the best trade-offs between both objectives. We have some solutions that favor objective one a bit more than objective two, we have other solutions that favor objective two more than one, and depending on the application area, different parts of the Pareto front may be desired. To convert a single-objective solver like this into a multi-objective solver, there are a few things to consider. One is the acceptance criterion. In the classical annealing method, or in the Digital Annealer algorithm, there is the concept of the probability of accepting a solution, which is calculated with respect to just one objective. Now we want to extend the acceptance criterion to multiple objectives. Also, because there is not one solution which is the best solution, but rather a set of solutions that are the Pareto-optimal solutions for a multi-objective solver, we want to keep a memory of these solutions.
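The one-step evaluation of all single-flip neighbors mentioned above can be sketched for a generic QUBO. The vectorized delta identity below is a standard algebraic fact about x^T Q x, not Fujitsu's hardware implementation, and the instance values are made up:

```python
import numpy as np

def qubo_energy(x, Q):
    """Plain QUBO energy E(x) = x^T Q x."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

def all_flip_deltas(x, Q):
    """Energy change from flipping each bit of x, for all bits at once.

    For E(x) = x^T Q x, flipping bit i changes the energy by
    d_i * ((Q + Q^T) x)_i + Q_ii, where d_i = 1 - 2*x_i.
    One matrix-vector product scores every neighbor of x.
    """
    x = np.asarray(x)
    d = 1 - 2 * x                      # +1 if the bit is 0, -1 if it is 1
    return d * ((Q + Q.T) @ x) + np.diag(Q)

Q = np.array([[1.0, -2.0],
              [0.0,  3.0]])
x = np.array([1, 0])
print(all_flip_deltas(x, Q))           # [-1.  1.]
```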
We also want to use these solutions to determine what our current state is. To talk a bit more about the acceptance criterion: in the single-objective version of the algorithm, there is the probability of accepting a solution, which depends on the change in energy, the temperature, and a parameter, which is the dynamic offset. This parameter is used to escape local optima: when the algorithm keeps finding better solutions, the dynamic offset is zero, but when it is stuck in a local optimum, we dynamically increase that value. In the case of the multi-objective solver, we want to aggregate the per-objective probabilities. So we consider two different methods of aggregating the probabilities. The first one is the maximum of the probabilities, while the second is the product of the probabilities. The product is considered stricter than the maximum, which we call the lenient acceptance criterion. We also explored two methods of archiving solutions. The first one is more explorative, because it is less restrictive in the quality of solutions that are added to the archive, while the second approach is more exploitative, because it is more restrictive in the solutions that are added to the archive. To compare the two methods: the multi-objective algorithm can find a set of solutions in just one run, but to be able to find multiple Pareto-optimal solutions using a single-objective algorithm like the Digital Annealer, we need to do multiple scalarizations. So in this study, we consider uniform weights, trying values between zero and one. We use similar parameters for both algorithms. The first performance criterion used in this study is the hypervolume. The hypervolume is a common measure of performance in multi-objective optimization, where we define a reference point, which is higher than where our non-dominated solutions are.
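The two aggregation rules for the acceptance probabilities can be sketched as below. This is an illustrative reading of the talk, assuming a Metropolis-style per-objective probability with the dynamic offset subtracted from the energy change; the exact form used on the Digital Annealer may differ:

```python
import math

def accept_prob(delta, temperature, offset=0.0):
    """Per-objective acceptance probability: min(1, exp(-(dE - offset)/T))."""
    return min(1.0, math.exp(-(delta - offset) / temperature))

def lenient_accept(deltas, temperature, offset=0.0):
    """Lenient criterion: the maximum of the per-objective probabilities."""
    return max(accept_prob(d, temperature, offset) for d in deltas)

def strict_accept(deltas, temperature, offset=0.0):
    """Strict criterion: the product of the per-objective probabilities."""
    p = 1.0
    for d in deltas:
        p *= accept_prob(d, temperature, offset)
    return p

# A move that worsens both objectives is far less likely under the strict rule.
print(lenient_accept([1.0, 2.0], temperature=1.0))
print(strict_accept([1.0, 2.0], temperature=1.0))
```

Since each probability is at most one, the product can never exceed the maximum, which is why the product rule is the stricter of the two.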
We want to measure the area between this reference point and the non-dominated solutions. The larger this area is, the better the performance of the algorithm. Initially, we compare the different archiving methods: the more explorative and the more exploitative archiving method. We see that regardless of whether we use the strict acceptance criterion or the lenient acceptance criterion, the major difference is between the archiving strategies. Doing more exploration seems to perform better, because we get a higher hypervolume. To understand the difference between the strict and the lenient criteria, the strict criterion being the product of all the probabilities of accepting the solution and the lenient being the maximum, we explore a bit to see how quickly they converge. We see that when the strict acceptance criterion is used, there is quicker convergence compared to when the lenient acceptance criterion is used. Another performance criterion uses the empirical attainment function. In this case, when comparing two algorithms, it might be the case that one algorithm can find solutions on some parts of the Pareto front while the other algorithm is more able to find solutions on another part of the Pareto front. The empirical attainment surface gives a visual representation of the performance of one algorithm compared to the other. The middle line is the median performance of the algorithm. The shaded region shows the part of the Pareto front where the corresponding algorithm is better than the other; the darker the region, the better the performance of the algorithm. For the multi-objective solver, the proposed method, we show that on every problem instance, regardless of the size or the level of correlation, we always find better solutions compared to the existing method. In terms of the number of non-dominated solutions found, that is, solutions on the Pareto front, we are also able to achieve a lot more solutions.
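The hypervolume measure described above is easy to sketch for two minimization objectives; the points and reference point below are arbitrary examples, not results from the talk:

```python
def nondominated(points):
    """Keep only non-dominated points (minimization, two objectives)."""
    pts = sorted(set(points))            # sort by f1, then f2
    front, best_f2 = [], float("inf")
    for f1, f2 in pts:
        if f2 < best_f2:                 # strictly better on f2 than all with smaller f1
            front.append((f1, f2))
            best_f2 = f2
    return front

def hypervolume_2d(points, ref):
    """Area dominated by the front and bounded above by the reference point."""
    front = nondominated(points)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:                 # f1 increasing, f2 strictly decreasing
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

front = nondominated([(1, 3), (2, 2), (2, 4), (3, 1)])
print(front)                             # [(1, 3), (2, 2), (3, 1)]
print(hypervolume_2d(front, ref=(4, 4)))  # 3 + 2 + 1 = 6.0
```

A larger hypervolume means the front pushes further toward the ideal corner, which is why it is used as the quality measure in the comparison.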
Finally, in terms of speed, because the multi-objective solver can gather many solutions in just one run, while the scalarization-based algorithm needs to do multiple scalarizations, we are also able to achieve faster speed. In summary, we are able to find solutions more quickly, find more non-dominated solutions, and also find better-quality solutions than the existing method. In the future, we would consider how this algorithm can scale to more objectives, and also better methods of doing scalarization. Thank you very much. Thank you. We have time for one question. If there is, if not, then let's thank the speaker again. Thank you. Thank you very much.