Is the audio OK? Is the screen OK? One moment, please. Josepe? I can't see the record button. Can you hear me? One moment. How about the other members? The record button for the vocal track. Yes, that is clear now. Good. The screen is fine. Once this is done, let me begin my presentation.

Today I will introduce the augmented Lagrangian method for combinatorial optimization, following the approach of Tanahashi and Tanaka. In this presentation we solve combinatorial optimization problems with an Ising machine, which is difficult because of the hyperparameters. I will explain the augmented Lagrangian method and then the number of hyperparameter updates it requires. In this figure, the equality constraint x = C is handled, and the same problem is solved by the penalty method.

Given an equality constraint, we reformulate a combinatorial optimization problem with constraints into a combinatorial optimization problem without constraints. When solving constrained combinatorial optimization problems, the penalty method is commonly used.

On this slide, I'd like to talk about the background of my presentation. When solving constrained combinatorial optimization problems with an Ising machine, the solution accuracy depends on the hyperparameters. To illustrate this, I'll use solving the traveling salesman problem by the penalty method as an example. The Hamiltonian of the traveling salesman problem is expressed like this. Please note mu, shown in red. The solution accuracy depends on the value of mu.
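This mu dependence can be illustrated on a toy problem. The sketch below is my own illustration, not the speaker's code: a hypothetical QUBO with a single equality constraint, with exact brute-force minimisation standing in for annealing, and all names and values chosen for illustration only.

```python
import itertools

# Hypothetical toy QUBO: minimise f(x) subject to picking exactly C items.
w = [3, 1, 4, 1, 5]                                  # illustrative weights
C = 2
f = lambda x: -sum(wi * xi for wi, xi in zip(w, x))  # objective to minimise
g = lambda x: sum(x) - C                             # equality constraint g(x) = 0

def argmin_penalty(mu):
    """Minimise H(x) = f(x) + mu * g(x)**2 by brute force (stand-in for annealing)."""
    return min(itertools.product([0, 1], repeat=len(w)),
               key=lambda x: f(x) + mu * g(x) ** 2)

for mu in (0.1, 10.0):
    x = argmin_penalty(mu)
    # mu = 0.1 selects the infeasible all-ones string; mu = 10 is feasible.
    print(mu, x, "feasible" if g(x) == 0 else "infeasible")
```

With an exact solver a large mu still returns the constrained optimum; with an approximate sampler such as annealing, an overly large mu dwarfs the objective term and tends to degrade solution accuracy, which is the trade-off the slide describes.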
For example, if the value of mu is too small, a feasible solution cannot be obtained, as shown in the left figure. Conversely, if the value of mu is too large, a feasible solution can be obtained, but the solution accuracy is worse, as shown in the right figure. To obtain the optimal solution, as in the middle figure, it is important to tune the hyperparameters adaptively.

Recently, Tanahashi and Tanaka introduced the augmented Lagrangian method for quantum annealing. As a result, the number of hyperparameter updates in the augmented Lagrangian method is fewer than in the penalty method, and the objective function values of the augmented Lagrangian method are better than those of the penalty method. However, problems with complicated constraints were not handled. So our goal is to examine the performance of the augmented Lagrangian method for simulated annealing, including problems with complicated constraints.

On this slide, I'd like to talk about the difference between the augmented Lagrangian method and the penalty method. Look at this figure.
This is the trajectory of the hyperparameters, consisting of lambda and mu. Here, mu appears in both the augmented Lagrangian method and the penalty method, while lambda appears only in the augmented Lagrangian method. In this figure, the green area indicates the region that satisfies the constraints, corresponding to the ground state of the Ising model. The white area indicates the region that does not satisfy the constraints. In the penalty method, only one dimension is explored, as shown by the blue line, because the penalty method has only one hyperparameter, mu. In the augmented Lagrangian method, a two-dimensional space can be explored, as shown by the red line. So the search area expands in the augmented Lagrangian method, and therefore it is expected to find a feasible solution quickly.

On this slide, I'd like to talk about the penalty method, used as the comparison for the augmented Lagrangian method. The penalty method is generally used to introduce the constraints into the Hamiltonian. For example, for the combinatorial optimization problem presented here, the Hamiltonian using the penalty method is expressed like this. The updating rule for the hyperparameter mu is expressed as follows, where k is the iteration index and alpha is the increase ratio of the value of mu. The hyperparameter updates are repeated until we obtain a feasible solution. In this case, when the value of mu is too small, we might not obtain a feasible solution; when the value of mu is too large, we might obtain a feasible solution, but the objective function value is too large.

Next, I'd like to talk about the augmented Lagrangian method. Although the augmented Lagrangian method is widely used in continuous optimization, we applied it to combinatorial optimization problems, which are discrete optimization problems. For example, for the combinatorial optimization problem presented here, the Hamiltonian using the augmented Lagrangian method is expressed like this. The difference from
the penalty method is that, in the augmented Lagrangian method, these linear terms are added to the Hamiltonian of the penalty method. The updating rules for the hyperparameters lambda and mu are expressed as follows. In the updating rule for lambda, the expectation is calculated using samples from the previous iteration. Here, k is the iteration index and alpha is the increase ratio of the value of mu. The hyperparameter updates are repeated until we obtain a feasible solution.

On this slide, I'd like to talk about the calculation procedure. Please look at this flowchart. First, we decide the initial values of the hyperparameters and generate the Hamiltonian. Second, we solve the Hamiltonian by simulated annealing. Third, we check whether the solution is feasible. If we obtain a feasible solution, we finish the calculation. If a feasible solution cannot be obtained, we update the hyperparameters, return to the second step, and solve the Hamiltonian by simulated annealing again. We repeat this loop until we obtain a feasible solution, and we count the number of loops.

To confirm that the augmented Lagrangian method is valid, we compare the number of hyperparameter updates and the objective function values of the augmented Lagrangian method and the penalty method. We use the number of hyperparameter updates as the measure because solving the Hamiltonian by simulated annealing is computationally heavy. We dealt with three problems: first, the constrained random QUBO problem; second, the traveling salesman problem; third, the quadratic assignment problem. For the constrained random QUBO problem, we compare the results of quantum annealing and simulated annealing.

From this slide on, I'll show the results. On this slide, I show the comparison of simulated annealing and quantum annealing for the constrained random QUBO problem. The objective function and the constraint of the problem are expressed like this. The number of spins is 54, and the constraint is varied over three patterns like this.
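The procedure just described (initialise the hyperparameters, generate the Hamiltonian, solve, check feasibility, update, repeat) can be sketched for both methods. This is a minimal sketch on a hypothetical toy problem: brute force stands in for simulated annealing, a single sample stands in for the expectation in the lambda update, alpha = 1.1 as in the talk, and every other name and value is my own illustrative choice.

```python
import itertools

w = [3, 1, 4, 1, 5]                                  # illustrative weights
C = 2
f = lambda x: -sum(wi * xi for wi, xi in zip(w, x))  # objective to minimise
g = lambda x: sum(x) - C                             # equality constraint g(x) = 0

def solve(h):
    """Stand-in for simulated annealing: exact minimisation by brute force."""
    return min(itertools.product([0, 1], repeat=len(w)), key=h)

def run(method, mu=0.5, alpha=1.1):
    """Update the hyperparameters until the sampled solution is feasible."""
    lam, updates = 0.0, 0
    while True:
        if method == "penalty":
            h = lambda x: f(x) + mu * g(x) ** 2
        else:                         # augmented Lagrangian: extra linear term
            h = lambda x: f(x) + lam * g(x) + (mu / 2) * g(x) ** 2
        x = solve(h)
        if g(x) == 0:                 # feasible: stop, report the update count
            return x, updates
        if method != "penalty":
            lam += mu * g(x)          # lambda step, single-sample expectation
        mu *= alpha                   # mu grows by the ratio alpha in both methods
        updates += 1

x_pen, n_pen = run("penalty")
x_alm, n_alm = run("alm")
print(n_pen, n_alm)                   # the augmented Lagrangian loop needs fewer updates here
```

On this toy instance both methods reach the same feasible solution, but the augmented Lagrangian loop needs far fewer hyperparameter updates, mirroring the comparison reported in the talk.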
Here, I explain how to read the graphs. The area to the upper left of the straight line with slope 1 is the area where the augmented Lagrangian method is superior. The area to the lower right of the straight line with slope 1 is the area where the penalty method is superior. However, the opposite holds if we maximize the objective function.

Here, I'd like to talk about the results. First, I'll compare the objective function values. In quantum annealing, the augmented Lagrangian method is better than the penalty method. In simulated annealing, the augmented Lagrangian method and the penalty method are about the same. Next, I'll compare the number of hyperparameter updates. In quantum annealing, the augmented Lagrangian method is better than the penalty method, and likewise in simulated annealing, the augmented Lagrangian method is better than the penalty method.

Next, I'll show the results of the constrained random QUBO problem. The instances were made by varying the number of spins. We generated ten random instances each for 64 spins and 256 spins, and the constraint is varied over three patterns for the value of b, like this. Please look at the left figure. The objective function values of the augmented Lagrangian method and the penalty method are about the same. Next, look at the right figure. The number of hyperparameter updates in the augmented Lagrangian method is smaller than that in the penalty method, and as the problem size gets larger, the augmented Lagrangian method has an advantage over the penalty method.

Next, I'll show the results of the traveling salesman problem. The objective function and constraints of the traveling salesman problem are expressed like this. The instances were made by varying the number of cities. For problems of 4 cities and 16 cities, we generated ten random instances. On the other hand, for problems of 48 cities and 127 cities, we used TSPLIB instances. Look at the left figure. The objective function values of the augmented Lagrangian method and the penalty method are about the same. Look at the right figure. The number of hyperparameter updates in the augmented Lagrangian method is smaller than that in the penalty method in many cases.
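The traveling salesman Hamiltonian mentioned here combines the tour length with one-hot constraints (each city visited exactly once, each time step holding exactly one city). The slide's exact formula is not recoverable from the transcript, so the sketch below follows the standard one-hot QUBO formulation, with all names and data my own illustration.

```python
import numpy as np

def tsp_violation(x):
    """x[i, t] = 1 iff city i is visited at time step t.
    Returns the summed squared violation of the two one-hot constraints."""
    rows = (x.sum(axis=1) - 1) ** 2   # each city appears exactly once
    cols = (x.sum(axis=0) - 1) ** 2   # each time step holds exactly one city
    return rows.sum() + cols.sum()

def tsp_hamiltonian(x, dist, mu):
    """Tour length plus mu times the squared constraint violation."""
    n = x.shape[0]
    length = sum(dist[i, j] * x[i, t] * x[j, (t + 1) % n]
                 for i in range(n) for j in range(n) for t in range(n))
    return length + mu * tsp_violation(x)

dist = np.array([[0, 1, 2],
                 [1, 0, 1],
                 [2, 1, 0]])
tour = np.eye(3, dtype=int)           # visit city t at time t: a valid tour
print(tsp_violation(tour), tsp_hamiltonian(tour, dist, 10.0))
```

For a valid tour the violation term vanishes and the Hamiltonian reduces to the tour length; any broken assignment adds a positive penalty scaled by mu.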
Next, I'll show the results of the quadratic assignment problem. The objective function and constraints of the quadratic assignment problem are expressed like this. The instances were made by varying the problem size. For sizes 4, 8, and 16, we generated ten random instances. On the other hand, for sizes 30 and 60, we used benchmark library instances. Look at the left figure. The objective function values of the augmented Lagrangian method and the penalty method are about the same. Next, look at the right figure. The number of hyperparameter updates in the augmented Lagrangian method is smaller than that in the penalty method, and as the problem size increases, the number of hyperparameter updates increases as well.

Finally, this is the summary of my presentation. Our goal was to examine the performance of the augmented Lagrangian method for simulated annealing, including problems with complicated constraints. To achieve this, we compared the number of hyperparameter updates and the objective function values until a feasible solution was obtained. As a result, the number of hyperparameter updates in the augmented Lagrangian method was fewer than that in the penalty method, both for problems with one constraint, such as the constrained random QUBO problem, and for problems with multiple constraints, such as the traveling salesman problem and the quadratic assignment problem. The objective function values of the augmented Lagrangian method and the penalty method were about the same. This brings me to the end of my presentation. Thank you for your attention. Questions, comments?

Can I ask you a question? Yeah. Okay. So, I remember that Tanahashi and Tanaka proposed the augmented Lagrangian method for QUBO problems at AQC 2021. So, I'd like to ask you: what is the main difference?

Tanahashi and Tanaka introduced the augmented Lagrangian method for quantum annealing, but I introduced the augmented Lagrangian method for simulated annealing.

Okay. So they proposed it for quantum annealing, and your presentation applies the augmented Lagrangian method to simulated annealing. Yes. Okay. And the second question: how do you choose the increase ratio alpha? The
previous slide. This one. This one? Yes. Okay. So, could you ask the question again?

My question is: how do you choose alpha?

Oh, okay. In this case, I chose alpha to be 1.1.

So, 1.1. Yeah. Why do you choose this parameter?

Because the optimal alpha depends on the problem. Tanahashi and Tanaka adopted the value 1.1 for alpha, so I used the same value.

Okay. Thank you. Thank you.

Hi. So, in the results you showed for the first couple of problems, the number of hyperparameter updates for the augmented Lagrangian method: the data points seem to be pushed right up against the y-axis. So I guess that's the minimum possible number of updates. What value was that? Was it like 1 or 2? I couldn't really see the x-axis properly.

So, are you asking about the number of hyperparameter updates?

For example, this slide. Yeah, the data points are pushed right up against the y-axis. So I guess, is that 0? No, I guess it wouldn't be 0. Would it be just one update or two updates? I can't really see. What's the minimum value there?

You mean this point, right?

I'm basically asking: what's the minimum value of any of those points in terms of the x-axis?

The minimum value of the x-axis? The minimum value is the objective function, OK?

I'm saying most of them are only taking one parameter update.

Could you say that again? Maybe I misunderstood.

Next question. So, I have two questions. On this plot, it seems to me that for the augmented Lagrangian update we need to update n terms. So should that also be shown on the graph, compared to the penalty term? In the penalty method you have just one parameter you're updating, but in the augmented Lagrangian method you have n parameters, so it scales with the system size. Each update kind of scales linearly with the system size. I don't know whether that's shown in this figure.

So, are you asking about the scaling of the problem?

Yeah. So, I'm
wondering whether it's a fair comparison just to put the number of updates without considering the cost it takes to obtain one update for each method. So, is this the data?

Yeah. So, the [inaudible] is always 0 and [inaudible], so we chose this one.

Okay, maybe I will go to my second question. My second question was: do you think, for the penalty method, you could do some sort of transfer learning on the system size? You showed on your flowchart that the bottleneck of the method is that you need to run SA for each update. So, can you tune your parameters on a very, very small system size and then use those parameters for a larger system size, only for the penalty method, because it doesn't scale with the system size? Do you think that's going to work?

I don't know how the relationship changes with the system size.

Okay. Thanks.

Oh, thank you for your question. So, we move on to the next talk. Let's start.