Thank you very much. We are going to talk about the IOWN data-center infrastructure-as-a-service. We are the IOWN folks from Red Hat, NTT, Fujitsu, and NVIDIA. My name is Hidetsugu Sugiyama; I am a Chief Technology Strategist at Red Hat. I am from NTT; I am a director, working on the DCI, the data-centric infrastructure, of IOWN. I am Naoki Oguchi from Fujitsu, a director in the Infrastructure System Business Unit, and I will talk about our server products.

First, I will talk about the MOU between the IOWN Global Forum and the Linux Foundation. The three projects involved are the Open Programmable Infrastructure (OPI) project, the LF Edge project, and the LF Networking project. The LF Edge Information Model Task Force is now working to explore a new blueprint based on the IOWN infrastructure. LF Networking has several potential opportunities to collaborate; for example, the OpenDaylight TransportPCE project can be one of the IOWN APN controllers that IOWN members are now keen to explore. I'd like to highlight the OPI project this time, because it was the first trigger for the IOWN Global Forum to collaborate with the Linux Foundation. In this diagram you can see the concept called Function Dedicated Network, which enables network functionality inside a PCIe card or inside a white-box gateway. That's why we started to work with the OPI project. Red Hat is one of the founding members of the OPI project, so we launched a collaboration session at the IOWN member summit last year, and since then we have been collaborating deeply with each of these projects in the Linux Foundation.

Some of you might know about the IOWN Global Forum, but we still need to explain what it is. Our technical committee chair, Kulala Lee, is unfortunately not available here, but she made a video for you, so I will quickly run her video to introduce the IOWN Global Forum. You might also want to know each member company's approach to the IOWN Global Forum, so I have invited NTT, who will talk about how they are enabling IOWN technology. I will also cover the OPI part. Unfortunately, our
chair of the use-case working group in the OPI project is not available here, so I will cover his part about why we need OPI in the IOWN Global Forum data-center infrastructure. One of the key features we are now adopting is the composable disaggregated infrastructure: we are trying to adopt this kind of computing to compose multiple types of devices, such as DPUs, IPUs, and GPUs, together with the CPU. So I have invited Fujitsu, who will talk about the current development status of the composable disaggregated infrastructure and share the current gaps: what the challenges are in realizing IOWN technology. With that, let me run the quick video that Kulala made.

Hello everyone, I'm Kulala Lee. I work for Intel as a Senior Principal Engineer on standards, and I also serve as an IOWN Global Forum technology working group co-chair. Thank you, Sugiyama-san, for inviting me to the session. Unfortunately I couldn't attend in person; I hope this video can give you an overview of the IOWN Global Forum.

IOWN stands for Innovative Optical and Wireless Network. The forum was established in January 2020, and it now has 135 members across network service providers, equipment manufacturers, solution providers, and end users. The objective of the forum is to develop reference architecture frameworks and specifications that leverage the evolution of optical communication, photonics, electronics, and wireless technologies.

This page shows the overall IOWN Global Forum work landscape. On the use-case side, the forum looks into various consumer and industrial use cases, such as area management, industrial management, and live entertainment. It also works on building a digital-twin framework for the various use cases. On the technology side, the forum develops the open all-photonics network, the data-centric infrastructure, and IOWN security. That targets enabling a future communication and computing infrastructure with orders-of-magnitude improvements in throughput and energy efficiency, and enabling post-quantum
security. Built on top of the communication and computing infrastructure, the forum also looks into domain-specific technologies such as mobile-network fronthaul, network functions, data storage and exchange, and low latency. To date, the IOWN Global Forum has published several deliverables on the 2030 vision, use cases and requirements, technology outlook, functional architecture, reference implementation models, and proof-of-concept references. All documents can be accessed on the IOWN Global Forum website. This concludes the brief overview of the forum, and I will hand over to Sugiyama-san to continue the session.

So we have many documents published already. You can download and read them, but there are a lot of documents, so it's not easy to read everything. That's why we are here: we try to hold this kind of session many times, to discuss with you the possibilities of using the IOWN Global Forum technologies. Our PoC program is an open, external activity: anyone can try to use the IOWN infrastructure and run a PoC together with us.

Another question you might have is: what are the member companies doing in the IOWN Global Forum? So I have invited NTT, who will talk about NTT's approach to IOWN. As you might know, NTT announced the launch of the IOWN 1.0 service to deliver the all-photonics network. But it's not only the all-photonics network service; they are working on many things in the IOWN Global Forum, especially the computing-industry evolution, the new computing architecture. So I'll hand over to NTT. Thank you.

Okay. From here, I'd like to explain the concept and also the challenges of IOWN. IOWN is the next-generation computing and ICT infrastructure, aiming to achieve an unprecedented level of enhanced broadband, lower power, and lower latency. This slide shows the target performance, which is 100 to 200 times better than that of conventional technologies. So, in reality, the
target performance is not easy to achieve, so collaboration is very important: we need to gather the knowledge of many people from various fields. That is why we started the IOWN Global Forum and also started discussions with relevant communities such as the Linux Foundation. The key technology of IOWN is the convergence of electronics and photonics.

This slide shows the IOWN approach for networking. IOWN networking is going to fully leverage optical technologies: the IOWN network is going to shift from repeated switching to direct connections over the APN. This approach can realize a direct end-to-end connection with very broad bandwidth and also low latency.

This slide shows the IOWN approach for the computing infrastructure. IOWN's computing infrastructure is also trying to leverage photonics technologies for the next-generation disaggregated computing infrastructure. This shifts from a box-oriented, server-based infrastructure to a fully disaggregated computing infrastructure. The component devices which would compose a conventional server are disaggregated, connected with each other over a high-speed photonic network, and form a hardware resource pool. This approach can improve resource usage and also power efficiency by allocating the necessary amount of computing resources to workloads at the component-device level.

This slide shows the data-centric infrastructure. To improve performance and energy consumption in massive data handling and processing, we are going to apply APN and disaggregated computing technologies to this infrastructure. The computing sites are connected with each other via the APN, which we call data-center extension, and we apply disaggregated computing technology to each computing site. The hardware resource pool helps to create hardware-accelerated data pipelines that efficiently handle and process the massive data coming from many devices.

IOWN is a very long-term concept, but we are also taking a step-by-step approach to demonstrate parts of IOWN's concept and to promote early adoption of IOWN technology in real business. This slide shows one of
such PoCs, a real-time video AI analysis, and this PoC is supported by Fujitsu, NTT, NVIDIA, and Red Hat. In this PoC we are going to demonstrate the utilization of geographically distributed resources with lower latency and lower power consumption. We are trying to use two different server configurations: one is x86-plus-DPU based, and the other is converged-device based, for AI inference. We are also trying to apply hardware-accelerated technologies, such as efficient data transfer across the APN with GPUDirect RDMA, and a converged-device-based low-power AI system. We are also applying a management platform, OpenShift, so this PoC is integrated with a container platform as well.

The key word for the DCI is dynamicity, I think. The necessary type and number of accelerators can vary depending on various factors, so it is necessary for the next-generation infrastructure to create and modify these logical servers with the appropriate configuration on demand. There are many factors that change the appropriate configuration; one such factor is business growth. As shown on this slide, according to business growth we need an appropriate scale-up or scale-out to meet customer demand.

I'd like to show several examples of scaling. One is an x86-CPU-plus-GPU-based configuration. This is a typical configuration for AI, and DCI-as-a-service will be capable of flexible scale-up and scale-down, as well as scale-out and scale-in, for the given workload. The next slide shows the scaling of a converged-device base. A converged device is an accelerator card which integrates both a DPU or IPU and a GPU. In this case we can take a simpler scaling approach, because the converged card has both networking and AI functionality. So this shows the dynamicity that we want to support, and Sugiyama-san and Oguchi-san will show the potential candidate solutions. Thank you.

Let me add one more piece of information, because Red Hat is also a member of this PoC team, working to enable OpenShift on the x86 host to
attach the GPUs and SmartNICs. In addition to that, we are now implementing MicroShift, a mini OpenShift, running on the Arm CPU inside the converged GPU accelerator, which in this case is an NVIDIA A100X. So we now support two patterns. Pattern A is running OpenShift on top of the x86 CPU and scaling up with multiple GPUs; pattern B is enabling MicroShift inside the converged GPU accelerator and scaling out with multiple converged GPU accelerators. We keep both patterns in order to keep deployment flexible, because it's up to the service provider how to deploy and how to expand the system. The current bottleneck is that we are basically using pre-configured, fixed servers, which cannot scale up without changing the system. But in the IOWN data-centric infrastructure, we can logically scale up by adding more PCIe device cards through the composable disaggregated infrastructure.

You can see that pattern B is similar to what the Open Programmable Infrastructure project is doing: they are trying to enable a Linux OS inside the DPU and IPU. That's why we started to work with the OPI project. Let me briefly explain what the OPI project is, because we are a member of it. We are targeting to enable a Linux OS inside the DPU and IPU to increase the intelligence of each infrastructure service: the network service, the storage service, and the managed security service. We see that much offload functionality is already available; we can integrate offload functions inside the DPU and IPU and add more control intelligence for the network or storage infrastructure services before transferring the user data to the x86 CPU. There are not many solutions there yet, so we are now exchanging ideas through the OPI use-case working group. Actually, the OPI project held an event in the US, at the OCP Global
Summit, where their members demonstrated one of their activities: they enabled an OS on the DPU and ran an NGINX proxy there to manage the user traffic and transfer the user data to OpenShift on the x86 side. This is one of the scenarios they are enabling, but there are many use cases we are now discussing.

I included this slide because the IOWN Global Forum is starting to work with the use-case working group in the OPI project to share use cases with each other and explore common goals. Last time we shared the network use case, and they agreed to adopt it, relating to the UPF use case. This AI use case is also one of the candidates for working together with OPI. In the OPI project they are also targeting generative AI and enabling the DPU and IPU for AI/ML federation, so they need the high bandwidth that the IOWN APN can provide. We have many opportunities to collaborate.

I also want to highlight why OPI is needed in the data-centric infrastructure. As I mentioned, the IOWN data-centric infrastructure adopts disaggregated computing, and when we enable disaggregated computing, one concern can be too many transactions within the PCIe fabric, between the CPU host and the PCIe cards. But if we use OPI, we enable the network functionality and the storage functionality inside the DPU and IPU, which means we don't need to transit much traffic over PCIe to the CPU, because the functionality is offloaded inside the DPU and IPU. This is why we are working with the OPI team on how to enable the DPU and IPU in the composable disaggregated infrastructure. It's not just a simple card; it's a more intelligent card we are adopting, and in our case we can try to enable OpenShift, or MicroShift, inside the DPU and IPU.

And this is the concept of data-center infrastructure-as-a-service: the IOWN data-center-infrastructure-as-a-service system composes many types of logical service nodes, one for each workload. It's a purpose-built logical service node. A logical service node is a kind of custom server, but
a dynamically configurable one: we can integrate an x86 CPU along with many GPUs, many DPUs, and many IPUs, depending on the transactions. For example, in this case, an ingestion node with AI inference: an AI-inference node needs many GPU cards, not necessarily many x86 CPUs, so we can add many GPU PCIe cards in the composable disaggregated infrastructure. That's why we need the composable-disaggregated-infrastructure feature. You can compare this in this table; as one example, we can add 20 GPUs for one CPU, so we can scale up the AI performance.

A similar thing applies in the 5G radio access network. As you might know, when we build a distributed unit, we can use the GPU as the Layer-1 accelerator. With the current x86 CPU, in most cases the maximum MIMO is 64T64R, but if we can scale out GPUs under a single CPU, we can expand to roughly 20 times the capacity of the existing radio access network and aggregate many more radio units. Anyway, the composable-disaggregated-infrastructure feature is one of the key technologies, so I'd like to hand over to Fujitsu-san: what is the composable disaggregated infrastructure, and what are its current challenges?

OK, I would like to explain the composable disaggregated infrastructure. Composable disaggregated infrastructure, CDI, is an emerging new server architecture. It disaggregates existing servers into separate components in a resource pool, and then composes custom-made servers by software definition. We call these servers composed bare metal. How does it work? In the resource pool, all components are connected to PCIe, CXL, or photonic switches, and the CDI management software controls the switches based on user demand so as to create composed bare metal.

CDI has some notable features. The most notable is creating custom-made servers on demand: for example, it can create GPU-rich servers or memory-rich servers, and it can also create server clusters. This feature is especially helpful for IOWN DCI nodes. And when these servers are no longer used, the user can return them to the resource pool. This
architecture minimizes unused resources and reduces the total cost. The second feature is that it enables composed bare metals to attach or detach PCIe devices depending on the workload. Consider an AI system, for example, that runs AI only at nighttime: CDI can attach GPUs for that period and provide higher performance and power saving.

To summarize, CDI contributes to saving energy consumption and reducing the total cost, as described in the left figure. But of course there is a trade-off: CDI has an advantage in total cost only when using lots of GPUs. One more feature: it enables composed bare metals to scale up. Because of these advantages, we expect CDI to be a good way to build IOWN DCI nodes.

Next, let me explain our data-center infrastructure activities. With the APN, users can connect to distant sites with low latency, so data centers can be built in regional areas. With this kind of flexibility from the APN, combined with device-level disaggregation technology, we can reduce power consumption, which means we can reduce OPEX and CAPEX and therefore cost, and pass those benefits on, so prices can also be reduced. The APN is an all-photonics network, and over it users can use the data-center infrastructure. If we deploy the photonic network, we also need to deploy computing infrastructure alongside it, especially for AI, which needs a lot of data. One possibility is AI federation: Japan has a lot of fiber, and because there are so many electronic devices, we need to reduce energy consumption. Another point is the 125-times transmission capacity.

Then, why is CDI needed for the IOWN data center? There are two answers. The first is to compose the DCI nodes, the logical service nodes. The second is the high performance and power saving that CDI composed bare metal provides.

Another question about the composable disaggregated infrastructure is whether we need CXL. Actually, yes: with CXL memory, users can use a much larger amount of memory, and CXL 3.0 will also enable CPU pools. There are various possibilities with CXL. Today's composable infrastructure is built on the PCIe bus, and when CXL is adopted, CDI can be supported over the CXL bus as well.

Next, about offloading CPU tasks: many vendors are now providing technologies such as DPUs and IPUs. The big challenge is that each new technology behaves differently, and such diversity makes adoption difficult. The OPI project provides a good community for this problem; participants include Intel, Marvell, NVIDIA, and AMD, so we can discuss these technologies there.

The last topic is the CDI challenges. CDI can work together with the OPI project, and Kubernetes Dynamic Resource Allocation (DRA) may solve part of the problem, but there are two challenges. First, we need to define how DRA and CDI communicate with each other. Second, that communication method needs to handle devices such as GPUs and DPUs. These remaining challenges need to be addressed in collaboration with these projects.
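The CDI flow described in this session (a pool of disaggregated devices, a management layer that wires them into composed bare metal on demand, and a release path back to the pool) can be sketched in a few lines. This is a minimal illustration only, not any vendor's actual API: the names `ResourcePool`, `compose`, and `release` are hypothetical stand-ins for the CDI management software.

```python
# Minimal sketch of the CDI composition flow described above: a pool of
# disaggregated devices, and a manager that assembles "composed bare metal"
# servers on demand and returns devices when done. All names here
# (ResourcePool, compose, release) are hypothetical, not a real CDI API.
from dataclasses import dataclass, field

@dataclass
class Device:
    kind: str          # "cpu", "gpu", "dpu", ...
    dev_id: str
    in_use: bool = False

@dataclass
class ComposedBareMetal:
    name: str
    devices: list = field(default_factory=list)

class ResourcePool:
    def __init__(self, devices):
        self.devices = devices

    def compose(self, name, **wanted):
        """Allocate e.g. cpu=1, gpu=20 from the pool into one logical server."""
        picked = []
        for kind, count in wanted.items():
            free = [d for d in self.devices if d.kind == kind and not d.in_use]
            if len(free) < count:
                raise RuntimeError(f"pool exhausted: need {count} {kind}, have {len(free)}")
            picked.extend(free[:count])
        for d in picked:          # commit only if every request was satisfiable
            d.in_use = True
        return ComposedBareMetal(name, picked)

    def release(self, server):
        """Return a composed server's devices to the pool (the detach feature)."""
        for d in server.devices:
            d.in_use = False
        server.devices.clear()

# Example: an AI-inference node with 20 GPUs per CPU, as in the table above.
pool = ResourcePool([Device("cpu", f"cpu{i}") for i in range(2)]
                    + [Device("gpu", f"gpu{i}") for i in range(24)])
node = pool.compose("ai-inference-node", cpu=1, gpu=20)
print(len(node.devices))   # 21 devices in this logical server
pool.release(node)
```

The point of the sketch is the lifecycle: composition is software-defined over a shared pool, and releasing a server makes its devices immediately reusable, which is where the unused-resource and cost savings come from.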
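The last challenge mentioned, defining how Kubernetes DRA and a CDI manager talk to each other, could conceptually look like the following. This is purely illustrative: `CdiClient`, `attach_device`, and the claim format are invented for this sketch and are not a real Kubernetes DRA or CDI interface; the real DRA driver API is different.

```python
# Conceptual sketch of the DRA <-> CDI gap discussed above: a DRA-style
# driver receives a resource claim and asks a (hypothetical) CDI manager
# to hot-attach devices to the node before the pod lands on it.
# Neither CdiClient nor this claim format is a real API.

class CdiClient:
    """Stand-in for the CDI management software controlling the PCIe/CXL switches."""
    def __init__(self):
        self.attached = {}     # node name -> list of attached device kinds

    def attach_device(self, node, kind):
        self.attached.setdefault(node, []).append(kind)
        return f"{kind}-{len(self.attached[node])}"   # handle for the new device

class DraDriver:
    """Stand-in for a Kubernetes DRA resource driver bridging to CDI."""
    def __init__(self, cdi):
        self.cdi = cdi

    def allocate(self, claim):
        # claim mimics a ResourceClaim: which node, which device kind, how many
        handles = [self.cdi.attach_device(claim["node"], claim["kind"])
                   for _ in range(claim["count"])]
        return {"node": claim["node"], "devices": handles}

driver = DraDriver(CdiClient())
result = driver.allocate({"node": "worker-1", "kind": "gpu", "count": 2})
print(result["devices"])   # ['gpu-1', 'gpu-2']
```

The two open challenges from the talk map directly onto this sketch: the shape of the `allocate` call (how DRA and CDI communicate) and whether one protocol can cover heterogeneous devices such as GPUs and DPUs.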