Okay, thank you for the introduction. I'm Koji Nishimura from Jij Inc., a Japanese startup company. My presentation title is "Estimation of hyperparameters on Ising model with constraint."

In this talk I propose a parameter search algorithm for the coefficient of the constraint term. Namely, we focus on the QUBO model with constraints, and the aim of the parameter search is to find the global minimum of the cost function with the constraints satisfied, both equality and inequality. Moreover, as a by-product, our method can estimate the reliability of the solution.

Let us begin with QUBO with constraints. As you know, when we deal with some kind of annealing algorithm via quantum annealing, we often use the QUBO model. QUBO stands for quadratic unconstrained binary optimization, and the aim when dealing with a QUBO is to find the combination of q_i, each taking the value zero or one, that minimizes the Hamiltonian depicted below. Quantum annealing in general uses this format and searches for the optimal state by applying a transverse field and following an annealing schedule toward the optimal state.

Actually, almost all practical models have constraints. For example, industrial problems such as logistics, telecommunication, and traffic problems are constrained problems. One typical example of constraints in the QUBO model is the traveling salesman problem. The aim of the traveling salesman problem is to find the minimum path on which the traveling salesman visits every city exactly once. In this example the cost function is the one depicted here, the total distance, but there are also constraints, namely one-hot constraints, of which there are two types. Let me now introduce the previous approaches to dealing with these constraints in the QUBO model.

The first method is quite naive: the penalty method. We add a penalty term, which consists of the square of the constraint with a coefficient, namely λ. This approach just adds the penalty terms to the original Hamiltonian to make the constraints satisfied: if the derived solution violates a constraint, the penalty term increases, which increases the cost function. However, this approach needs exhaustive fine-tuning of the parameter λ. In this example the number of parameters is only one, so that is not a problem, but in general we have to deal with multiple constraints. Also, if we increase λ we can derive a solution with the constraints satisfied, but the cost function increases; if we decrease λ we can decrease the cost function, but we derive a solution that violates the constraints. So we have to do exhaustive fine-tuning to find this parameter. Moreover, this approach is not trivial in the case of inequality constraints. A possible way to handle inequality constraints with the penalty method is, for example, using slack variables to convert the inequality constraints into equality constraints, but in general this requires additional bits.
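To make the penalty method concrete, here is a minimal sketch, not the speaker's code: a toy QUBO energy with a single hypothetical one-hot constraint (exactly one variable equal to one), penalized with a hand-tuned λ.

```python
import numpy as np

def penalized_energy(q, Q, lam):
    """QUBO energy q^T Q q plus a squared one-hot penalty lam * (sum_i q_i - 1)^2."""
    cost = q @ Q @ q                      # original (unconstrained) objective
    violation = q.sum() - 1.0             # one-hot constraint: sum_i q_i = 1
    return cost + lam * violation ** 2    # penalty grows when the constraint breaks

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 3))               # toy cost matrix
feasible = np.array([0, 1, 0])            # satisfies the one-hot constraint
infeasible = np.array([1, 1, 0])          # violates it, so the penalty kicks in
print(penalized_energy(feasible, Q, lam=5.0))
print(penalized_energy(infeasible, Q, lam=5.0))
```

The tuning dilemma from the talk shows up directly here: with a small λ an infeasible assignment can still win on total energy, while a very large λ dominates the cost term and distorts the landscape.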
From now on I am going to introduce two kinds of approaches. One is a well-established method, the augmented Lagrangian method; we call this method ALM. It is a standard method for solving constrained optimization problems, not only in annealing but also in numerical optimization. In the original formulation the solution variable x is continuous: suppose we have to minimize a cost function subject to equality constraints of this kind. The steps of the augmented Lagrangian method are straightforward. First, minimize the augmented Lagrangian function, where we introduce a linear term and a quadratic term. Then update the coefficients of the linear terms; this update depends on ρ_i, the coefficient of the square term. Then update the coefficient of the quadratic term if necessary; the naive approach is to multiply it by some ratio α. Again, this method is formulated for continuous variables.

Recently, Tanahashi and Tanaka introduced a method to apply this augmented Lagrangian method to QUBO. QUBO consists of discrete variables, so applying the augmented Lagrangian method is not trivial, but they showed that it improves finding the solution compared to the naive penalty method. Their approach is to minimize the augmented Lagrangian Hamiltonian, depicted with this linear term and quadratic term, then update the coefficients of the linear terms, and then update the coefficient of the quadratic term. The problem is that we still need parameter tuning for the initial μ and the ratio α. Also, this method is not guaranteed to reach the global minimum, since the augmented Lagrangian method finds a solution that satisfies only a necessary condition for the global minimum under constraints.

Another approach uses the Lagrangian dual problem plus the subgradient method. Suppose we want to deal with the original problem of minimizing f(x) subject to this kind of constraint; we can formulate the Lagrangian dual problem, whose objective consists of the cost function plus linear terms in the constraints. One notable feature of the Lagrangian dual problem is the following: a solution derived from the original problem has a cost that is of course greater than or equal to the optimal value, whereas for the Lagrangian dual problem it can be proved that the derived value is lower than the optimal value, because it is a relaxation problem. So our aim is first to minimize this Lagrangian dual function and then to maximize it with respect to the u_i, so that we approach the optimal energy from below. An advantage of this method is that by repeating these two steps we can derive both an upper bound and a lower bound on the optimal solution: the original problem always yields a value larger than the optimum, and the Lagrangian dual problem yields a value smaller than the optimum. Then we can estimate the duality gap, and as the duality gap decreases we can certify the solution quality. This approach was actually applied to annealing by Ohzeki. His approach is to minimize the Lagrangian relaxation problem and update the coefficients of the linear terms. In fact, this method was not originally framed as a subgradient method; he originally introduced it for eliminating the quadratic penalty term from the D-Wave machine.
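Since the dual subgradient loop was just described only in words, here is a minimal continuous-variable sketch under my own assumptions (a generic scipy minimizer in place of an annealer, a fixed step size η); it is not Ohzeki's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def dual_subgradient(f, g_list, x0, eta=0.1, iters=100):
    """Lagrangian dual + subgradient: max_u min_x  f(x) + sum_i u_i g_i(x).

    Each inner minimum is a lower bound on the constrained optimum;
    any feasible x gives an upper bound, so the duality gap can be tracked.
    """
    u = np.zeros(len(g_list))
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        def L(x):
            g = np.array([gi(x) for gi in g_list])
            return f(x) + u @ g               # linear terms only, no quadratic
        x = minimize(L, x).x                  # inner step: minimize the dual function
        g = np.array([gi(x) for gi in g_list])
        u = u + eta * g                       # outer step: subgradient ascent in u
    return x, u

# Toy usage: minimize (x-2)^2 subject to x - 1 = 0; u converges near the dual optimum 2
x, u = dual_subgradient(lambda x: (x[0] - 2) ** 2, [lambda x: x[0] - 1.0], [0.0])
print(x, u)
```

Note that the inner Hamiltonian has no quadratic term, which is exactly where the instability with the step size η, discussed next, comes from.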
But the problem of this method is that the parameter search process is quite unstable, because there is no quadratic penalty term. For this reason we need quite fine tuning of the step size η.

So here is my idea: combine the subgradient method with the augmented Lagrangian method. The subgradient method seems reasonable because it gives us an upper bound and a lower bound, but it is unstable. If we can add a quadratic term to the subgradient method, we may be able to stabilize the parameter search. Moreover, the update rules of the augmented Lagrangian method and the subgradient method are quite similar, so there is a kind of correspondence between ρ_i and η.

This is my algorithm. We set up the problem first: minimize a function f(x) subject to equality constraints g_i(x) = 0. Note that we only present equality constraints at this moment, but the method can easily be extended to inequality constraints without adding any slack variables. The first step of our approach is to minimize the augmented Lagrangian function, which consists of the linear term and the quadratic term. Then we get an upper bound and a lower bound: after we obtain a solution, for example by a quantum annealer, we can derive the lower bound at this point; for the upper bound we can use a feasible solution where the constraints are satisfied, or, if there is no feasible solution, we can set a sufficiently large value. Note that the initial values of u_i and ρ_i are set to zero. Then we do the linear-term update, which corresponds to the subgradient update, if the constraint is not satisfied; the step size is proportional to the upper bound minus the lower bound, which is equivalent to the duality gap. Then we do the quadratic-term update, which corresponds to the augmented-Lagrangian-type update: we update it using this variable η, inspired by the correspondence between ρ_i and η. We repeat these steps until the solution converges.
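Here is my best reading of the loop just described, as a sketch only: `solve_min` stands in for the annealer, the lower bound is taken from the minimized augmented Lagrangian, the step size is a constant γ times the duality gap, and since the talk leaves the exact ρ update open, I simply set ρ to η as one possible reading of the ρ_i–η correspondence.

```python
import numpy as np

def combined_alm_subgradient(f, g_list, solve_min, is_feasible,
                             gamma=1.0, big=1e9, iters=50):
    """Sketch of the proposed subgradient + augmented Lagrangian loop.

    solve_min(L) is a placeholder for the annealer: it should return an
    (approximate) minimizer x of the Hamiltonian L. Names, gamma, and the
    exact rho update are assumptions, not the speaker's implementation.
    """
    u = np.zeros(len(g_list))     # linear-term coefficients u_i, start at zero
    rho = 0.0                     # quadratic-term coefficient rho, starts at zero
    upper = big                   # "sufficiently large value" until feasible
    lower = -big
    x = None
    for _ in range(iters):
        def L(x):
            g = np.array([gi(x) for gi in g_list])
            return f(x) + u @ g + 0.5 * rho * (g @ g)
        x = solve_min(L)                       # minimize the augmented Lagrangian
        g = np.array([gi(x) for gi in g_list])
        lower = max(lower, L(x))               # relaxed minimum gives a lower bound
        if is_feasible(x):
            upper = min(upper, f(x))           # feasible cost gives an upper bound
        eta = gamma * (upper - lower)          # step size proportional to duality gap
        u = u + eta * g                        # subgradient (linear-term) update
        rho = eta                              # assumed quadratic-term update via eta
    return x, upper, lower
```

Early on, upper minus lower is large, so η takes big subgradient-like jumps; as the gap closes, η shrinks and the loop settles, matching the behavior described in the experiments and in the Q&A answer below.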
Now let me introduce a numerical experiment showing how our algorithm works. We use the traveling salesman problem, which has equality constraints, and we compare four kinds of methods. One method is the augmented Lagrangian method with the coefficient of the quadratic term increasing based on the ratio α; we set α = 1.1. The second is the augmented Lagrangian method with the coefficient of the quadratic term fixed. We also apply the naive Lagrangian dual problem plus subgradient method, and then we apply our method. As the annealing method we use simulated annealing, but note that for our algorithm the annealing method is not relevant, so our algorithm can be applied straightforwardly to any kind of annealer, for example a quantum annealer.

This figure shows how each algorithm behaves. The x axis shows the parameter-update iteration, the y axis shows the cost function, and the black dashed line shows the optimal solution. The augmented Lagrangian method with the quadratic term increasing cannot reach a feasible solution within 50 steps. The augmented Lagrangian method with the quadratic term fixed depends on the initial value of the quadratic coefficient: with a small coefficient we cannot derive a feasible solution, but as we increase the coefficient we get the optimal solution. Of course, if we change the parameter ρ_0 we can derive the solution, but this is nothing but an exhaustive search. The naive subgradient method fails completely: the cost function always takes the value zero and there is no feasible solution. Our algorithm, in the initial iterations, behaves like the subgradient method, jumps near the optimal solution, and then fluctuates around it. We also ran the experiment on a larger instance and it gives the same result. I should start to conclude. We also did the numerical experiment with inequality constraints, and it works very well.

Our algorithm also has a by-product. Remember that in the course of the iteration we calculate the upper bound and the lower bound. The most interesting feature is that our method calculates the upper bound and the lower bound during the parameter search, which enables us to get information about the reliability of the solution obtained by an annealing method, because in general with quantum annealing, even if we get a solution, we do not know how good the solution is.

Let me give a conclusion. We formulated the Ising model with constraints, which needs exhaustive parameter search under the penalty method. We introduced our method combining the subgradient method and the augmented Lagrangian method, which gives nice performance compared to the previous algorithms, and moreover we can obtain upper-bound and lower-bound information. That's all for my presentation. Thank you very much.

Quick questions. I have a question, it's in the chat: do you have a paper about this work that you could share?

Actually, the paper is in preparation right now.

Okay, thank you. Nice talk. I have a quick question about your method: how do you choose the η parameter that you use to update?

So the question is how to choose the η parameter?

Yes, your parameter η. How do you choose it?

You mean... wait a second. Yes, this η? Yes. Yes.
This η is proportional to the upper bound minus the lower bound. In the initial state the difference between the upper bound and the lower bound is quite large, which gives a quite large step size, but as we iterate the algorithm and decrease the difference between the upper bound and the lower bound, the step size gets lower and lower, and then we settle into the optimal solution. So the answer is: we choose an η that is proportional to the upper bound minus the lower bound. We calculate the upper bound and the lower bound in the course of the iteration, and from that we calculate η.

Okay. Yes, so let's thank you.