Okay, I think it's time, so let's get started. Hello everybody, welcome, and we are so excited that you are joining us for this session on the Open Programmable Infrastructure (OPI) Project. I think this is my first time introducing the OPI Project here in Japan. My name is Hide Sugiyama, I'm Chief Executive Director. And I'm Akima Ruth from F5, nice to meet you. So first of all, I'd like to briefly introduce what the Open Programmable Infrastructure Project is. Then we can spend time discussing potential use cases and how we can map them onto the OPI Project; I'd really like to spend time discussing the expected use cases with you. The OPI Project was established this summer together with F5, Red Hat, Intel, NVIDIA, Marvell, Dell Technologies, and Keysight, and we are now a Linux Foundation project. Here is the current member list, and you are welcome to join as a member company, institution, or contributor; there are many ways to contribute to the OPI Project. What we are trying to do is this: as you know, the current deployment model with a SmartNIC on Kubernetes is still monolithic, with the Kubernetes host and the SmartNIC tied together through SR-IOV. We are trying to decouple the resources: on one side the main CPU resource orchestration, and on the other the DPU and IPU resource management. We create a separate infrastructure cluster for the DPUs and IPUs, while the existing cluster keeps running the guest users' tenant workloads on top of the x86 CPUs. So the challenge is how to create a common deployment model that deploys network, storage, and security workloads onto those DPUs and IPUs. We are trying to create a common API, a vendor-agnostic way of working, to eliminate hardware, software, and cloud vendor lock-in. What we need in order to adopt DPU/IPU technology is listed here. We try to eliminate cloud vendor lock-in, hardware vendor lock-in, and software vendor lock-in. As you know, many device vendors are now promoting DPUs, such as the Marvell DPU, the NVIDIA DPU, and the Intel IPU. If we integrate against each DPU directly, we have to follow that vendor's own deployment procedure, so we try to eliminate that kind of vendor-specific interface and instead create a common API that every vendor is ready to support. Also, we currently run SR-IOV-style workloads on top of Kubernetes, and we are thinking about how to migrate those workloads onto the DPU/IPU, and how to manage the infrastructure acceleration functionality there. In order to do that, we are creating an open API for DPU and IPU deployment that manages both the applications and the acceleration functions. We also want to reduce the validation effort across the different vendors' IPUs and DPUs: if we have a common API, we can build a common validation model. And we need a portable software stack. That's why we are spending time on the Open Programmable Infrastructure Project. Actually, I'm also working in another body called the IOWN Global Forum, which focuses on disaggregated computing infrastructure. Through the discussions in the IOWN Global Forum we feel the same issue: we need a common API to deploy DPUs and IPUs. But the IOWN Global Forum is not a software community, so we align with this Open Programmable Infrastructure Project.
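To make the "common API" idea above concrete, here is a minimal sketch in Go of what a vendor-agnostic provisioning abstraction could look like. This is purely illustrative and is not the actual OPI API: the interface, the type names, and the two vendor adapters are all invented for this example. The point is only that the orchestrator code stays the same no matter whose card is installed.

```go
// Illustrative sketch only; not the actual OPI API. It shows the idea of a
// vendor-agnostic interface: the orchestrator programs "a DPU", and a thin
// per-vendor adapter hides each vendor's own deployment procedure.
package main

import "fmt"

// Workload is a generic infrastructure workload (network, storage, security)
// to be offloaded onto a DPU/IPU.
type Workload struct {
	Name string
	Kind string // e.g. "firewall", "nvme-of-bridge", "vswitch"
}

// DPUProvisioner is the hypothetical common API every vendor would implement.
type DPUProvisioner interface {
	Deploy(w Workload) error
}

// marvellDPU and intelIPU are hypothetical adapters; real ones would call
// each vendor's SDK or management agent behind this same interface.
type marvellDPU struct{}

func (marvellDPU) Deploy(w Workload) error {
	fmt.Printf("marvell: deploying %s (%s)\n", w.Name, w.Kind)
	return nil
}

type intelIPU struct{}

func (intelIPU) Deploy(w Workload) error {
	fmt.Printf("intel: deploying %s (%s)\n", w.Name, w.Kind)
	return nil
}

func main() {
	fw := Workload{Name: "edge-fw", Kind: "firewall"}
	// The orchestrator code is identical regardless of the card installed.
	for _, card := range []DPUProvisioner{marvellDPU{}, intelIPU{}} {
		if err := card.Deploy(fw); err != nil {
			fmt.Println("deploy failed:", err)
		}
	}
}
```

In practice the per-vendor adapters would wrap each vendor's SDK or management agent, which is exactly the integration burden the common API is meant to contain.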
We are tracking the status of the software deployment work through the Open Programmable Infrastructure Project, and once the API is stable, we will try to adopt it. So the goals of OPI are listed here. First, a community-driven, standards-based open ecosystem and a vendor-agnostic framework and architecture. This is very important, and that's why we are now working under the Linux Foundation as an open-source project. Second, to define new APIs and standards where we need them. And one important point: we have to reuse existing open source and existing APIs where they exist. For example, we are adopting the OpenTelemetry community project: when we work on monitoring and telemetry, we collaborate with that community rather than reinventing it. Also, once we develop something, we of course need to show implementation examples and reference cases. And we need to collaborate with communities outside of the software community, because many bodies are discussing this new type of architecture. For example, we can collaborate with the IOWN Global Forum; they can promote this API implementation. We have to expand cross-industry community collaboration. We also plan to reduce the variation across implementations: there are many possible implementation approaches, so we try to establish a common approach first, and also create standard APIs. And we reuse the APIs already in use on the existing CPUs, because we are simply running processors inside the DPU/IPU; for example, we can even run Kubernetes inside the DPU/IPU, to be honest. We also build best practices step by step: when the code is ready, we run a PoC and show the feasibility every time, every year. Not just once; we do it repeatedly, in a step-by-step model. And we focus on the DPU/IPU side. Why not the upper side? The upper side already has a large ecosystem, and we can align with it. Once we are ready with the open, common API, we can collaborate with the other communities running on the existing Kubernetes ecosystem. So here are the working groups we are establishing. First, the provisioning and lifecycle working group, which focuses on the device discovery model, the provisioning model, and boot sequencing. As you can see, for monitoring and telemetry we are following the OpenTelemetry framework, and for provisioning we adopted SZTP, Secure Zero Touch Provisioning. In the API working group we are now working on the API structure: for the network workload, the storage workload, the security workload, there are many different workloads and many different APIs, so we try to create a similar layer of API for each functionality. For those who want to run a PoC, we are preparing the PoC environment under the developer platform working group. And then we also talk about use cases, and this is actually the main point: after this presentation I'd like to discuss with you what use cases you expect for the DPU/IPU. So far we know that NVMe over Fabrics is one use case, and a basic firewall is another. I am also planning to share the IOWN use case next month at the use case working group in the OPI project. There are many opportunities to learn which use cases are common across the OPI project.
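As a small illustration of the "reuse existing open source" goal mentioned above: since the monitoring and telemetry work follows OpenTelemetry, a DPU-side agent could emit metrics with the standard Go OpenTelemetry SDK rather than a new framework. This is a minimal sketch; the meter and instrument names are invented for the example, and a real agent would also wire up an OTLP exporter.

```go
// Minimal sketch of a DPU-side agent emitting metrics via the Go
// OpenTelemetry SDK; the meter and instrument names are illustrative.
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	ctx := context.Background()

	// A real agent would configure an OTLP exporter here; the default
	// MeterProvider is enough to show the instrumentation pattern.
	provider := sdkmetric.NewMeterProvider()
	defer provider.Shutdown(ctx)
	otel.SetMeterProvider(provider)

	meter := otel.Meter("opi.dpu.agent") // hypothetical instrumentation scope

	rxPackets, err := meter.Int64Counter("dpu.port.rx_packets",
		metric.WithDescription("packets received on a DPU port"))
	if err != nil {
		log.Fatal(err)
	}

	// Record a sample measurement, tagged with the port it came from.
	rxPackets.Add(ctx, 128, metric.WithAttributes(attribute.String("port", "p0")))
}
```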
Yeah, here is an example of the current ongoing status. In the lifecycle working group we are working on discovery, provisioning, inventory management, boot sequencing, monitoring and telemetry, and lifecycle and update. Actually, when we discussed the DPU/IPU lifecycle, that was one real issue: we need a common model for managing the DPU/IPU lifecycle separately from the existing x86 lifecycle. On the service provider side we deploy the infrastructure, and no one wants maintenance to impact the users' workloads. If we separate the tenant workloads from the infrastructure cluster, it might be possible to keep maintenance from impacting the guest users. So we are thinking about how to manage the DPU/IPU lifecycle independently. So far, OPI has adopted Secure Zero Touch Provisioning, and is also adopting OpenTelemetry and SMBIOS, the System Management BIOS. So you see that we don't reinvent things: where technologies already exist, we adopt them, and we focus on finding the missing features that we actually have to spend time developing. Here is the API working group status. In this diagram you can see the similar API layers: we try to create a similar API for networking, security, and AI/machine learning; there are many. This is driven by the use cases, and sometimes I get inquiries from partners and customers asking how this framework can integrate their network workloads. This is under discussion, and we are also looking at, for example, OVS, OpenConfig, and P4 here. You can check the status of each on GitHub. We are also planning a multi-vendor laboratory, targeting the University of New Hampshire (UNH). We are trying to create that lab, and in parallel each member, partner, and user is preparing their own environment; it has just started, preparing the PoC to show feasibility. You can also find on GitHub what kind of setup you can prepare for the API PoC. And then this is the main part, where we'd like to spend time discussing the use cases. I'll hand over to our colleague from F5, who can talk about which use cases F5 is expecting and where those discussions stand. Okay, do you have any questions at this point? No? I hope you understand by now what a DPU is. If you don't, I can go back and explain the basics. So, show of hands, who knows what a DPU is by now? Kind of, kind of. Okay, I'll go back and talk a little bit about it. The most important thing I want you to know is that this DPU technology is what allows a private cloud to be scaled and managed independently of the host applications. There is a trust boundary between the host and the PCIe card. DPU, IPU, sometimes we call it xPU: it is a PCIe card that plugs into the host, and as Hide mentioned, it can be managed independently of the host. It has a BMC, a baseboard management controller chip, on the PCIe card itself, and there is the same kind of BMC on the host itself. So the host and the card get booted, provisioned, and deployed independently. What we can do, for example, is use the resources on the DPU card for multiple things. The use cases it will support initially will be networking, storage, and security.
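Going back to the lifecycle slide for a moment, one concrete example of the "adopt what exists" approach: the inventory side can lean on SMBIOS, which the Linux kernel already exposes under /sys/class/dmi/id. Here is a minimal sketch of how a lifecycle agent might read those fields on a Linux system; the exact field set an agent would collect is an assumption of this example.

```go
// Sketch: reading SMBIOS/DMI inventory fields that the Linux kernel exposes
// under /sys/class/dmi/id. This is one way a lifecycle agent could collect
// basic inventory; the field set shown here is just a sample.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func dmiField(name string) string {
	b, err := os.ReadFile(filepath.Join("/sys/class/dmi/id", name))
	if err != nil {
		return "unknown" // some fields need root, others may be absent
	}
	return strings.TrimSpace(string(b))
}

func main() {
	for _, f := range []string{"sys_vendor", "product_name", "board_serial", "bios_version"} {
		fmt.Printf("%-14s %s\n", f, dmiField(f))
	}
}
```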
Can you go back up, I forgot which one. Yeah: storage, security, networking, AI and ML. There are three, four, five vendors developing DPU cards, and all of them may have different forms of hardware acceleration. So one of the key things this OPI working group is trying to do is standardize a common API. Let's say you want accelerated storage access, or accelerated networking as an infrastructure or platform service: you don't want to be building, compiling, or integrating a NIC card driver every time you change cards. What this initiative does is help build a common API that abstracts the use cases, so that a user of the platform can define what they want to accelerate on the DPU without deep integration against each vendor's hardware resources. The hardware resources could be a Tofino chip, an FPGA, or a vendor-specific programmable ASIC, et cetera, but the user should not have to care what the underlying hardware is. So the important thing the OPI initiative is trying to do is define a common API and common ways of provisioning, such that you can enjoy the benefits of infrastructure workloads being accelerated in hardware, using more efficient resources like the ARM cores that are the base processing units on most vendors' DPUs. Those are the basics I want you to take away today. And a few words about NVMe and the basic firewall: these are the initial use cases for which the working group is trying to define and build PoCs and demo environments. But if any of you in this room, or anyone online, have use cases you are interested in, please do speak up. The forum is open to anyone with an interest, so you can contribute by requesting or recommending a use case you care about. So, I may have already gone through all of this verbally, but: there is the BMC on the host, as you can see, and there is one on the xPU, which is the umbrella term for DPU or IPU. DPU stands for Data Processing Unit, and IPU stands for Infrastructure Processing Unit; it may be easier to just say xPU because there are so many variations. Here, the BMC is what manages the board. As you can see, the card could use ARM or MIPS cores, but most of the available and roadmap vendor products target ARM as the processing unit. There tends to be networking acceleration using P4 pipelines: some products use the Tofino chip, some use P4 as the abstraction layer to program their own hardware. And there are many other accelerators: encryption, GPUs for AI/ML, FPGAs, accelerators for regular expressions, and of course the storage controllers. Next slide. I think this is partly a repeat of the previous slide, but I want to talk a little about the NVMe use case. NVMe, as you may know, differs from SCSI-type access: SCSI has a single queue, while NVMe can parallelize access across many queues. Now, if you have this DPU card, what can you do? Say you have local storage: it can of course be accessed via NVMe over PCIe from the DPU card, but the card can also act as a bridge to networked storage. Say you have a fabric of storage networks: the card can reach that network over Ethernet, while it can also reach the storage infrastructure within the host itself. So it can manage both worlds, acting as the bridge, to address the demands of applications that require lower-latency or highly scalable storage access.
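To make the bridge idea concrete, here is a hypothetical control-plane sketch in Go. None of these type or function names come from the actual OPI storage API; they are invented to show the shape of such a request: the host keeps seeing a plain NVMe PCIe device, while the DPU terminates NVMe over TCP to a remote target.

```go
// Hypothetical control-plane sketch of the NVMe "bridge" idea: the host sees
// a plain NVMe PCIe device, while the DPU terminates NVMe over TCP/RDMA to a
// remote target. These names do not come from the real OPI storage API; they
// only illustrate the shape of such a request.
package main

import "fmt"

// RemoteTarget identifies an NVMe-oF subsystem out on the storage fabric.
type RemoteTarget struct {
	Transport string // "tcp" or "rdma"
	Address   string // target IP
	Port      int    // conventionally 4420 for NVMe-oF
	NQN       string // NVMe Qualified Name of the subsystem
}

// BridgeRequest asks the DPU to surface the remote namespace to the host
// as if it were local NVMe over PCIe.
type BridgeRequest struct {
	Target        RemoteTarget
	HostNamespace int // namespace ID presented to the host
}

func createBridge(req BridgeRequest) error {
	// A real implementation would program the DPU's NVMe emulation engine;
	// here we just print the intent.
	fmt.Printf("bridging nsid %d -> %s://%s:%d (%s)\n",
		req.HostNamespace, req.Target.Transport, req.Target.Address,
		req.Target.Port, req.Target.NQN)
	return nil
}

func main() {
	_ = createBridge(BridgeRequest{
		Target: RemoteTarget{
			Transport: "tcp",
			Address:   "192.0.2.10", // documentation address (RFC 5737)
			Port:      4420,
			NQN:       "nqn.2022-01.io.example:storage-pool-1",
		},
		HostNamespace: 1,
	})
}
```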
So that's one of the first things we're looking at. The second thing, of course, is that this card is the entry point into your host applications, so basic security is the other use case we are focused on. Initially we will probably do some basic firewalling, as shown in the previous slide. This is a sideways view of the diagrams you've seen before: the diagram you saw from Hide had the host application on the top and the DPU on the bottom with a red line in between; this time the red line is between the left side and the right side, with the DPU on the left and the host application on the right. When traffic comes into the DPU, you want to do basic security without impacting the host applications. What probably makes sense is basic firewalling, or maybe Layer 4 DDoS mitigation, so that your applications are not impacted: we don't want your application CPUs to be burdened by infrastructure workloads. That's why you separate that from the host application, put it on the card itself, and do basic security there. This slide shows a service provider use case, although it is not limited to service providers: a 5G core use case. The 5G core, as some of you may or may not know, has evolved from the 3G and 4G days toward a service-based architecture, and the 5G core actually uses HTTP/2 protocols. Those elements will be scaled in Kubernetes environments, and we don't want the 5G core elements to be impacted by network workloads: when you open a web page or start an application, you want minimal latency for setting up the data path from the smartphone to the core network, without impact from any other type of infrastructure workload on the host itself. So we separate that workload, put it on the DPU, and let it do ingress control and egress control, where egress control means defining where the workload is allowed to go. For example, say you have an IoT device out there that should only reach, let's say, your application controller server, and is not meant to access the general internet. Egress control secures the IoT infrastructure by defining not just the ingress but also the egress, so that the application stays within the trust zone, the trust domain. Those are applications that enable managed services in service provider environments. That leads to the next point I've written there, 5G slicing: 5G radio has defined a way of slicing the infrastructure into multiple layers of services. We can consume those slices, make sure there is a ZTNA, Zero Trust Network Access, type of environment for specific device applications, and make sure the DPU allocates a specific trust domain and only allows access to certain microservice workloads.
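As a concrete illustration of the egress-control idea just described, here is a minimal, default-deny allowlist check in Go, of the kind a DPU data path could apply so that the IoT subnet reaches only its controller. The addresses and the rule structure are invented for the example.

```go
// Sketch of the egress-control idea: an allowlist evaluated on the DPU so an
// IoT device can reach only its controller, never the general internet.
// Addresses and rule shapes are illustrative only.
package main

import (
	"fmt"
	"net/netip"
)

// EgressRule permits traffic from a source prefix to a destination prefix.
type EgressRule struct {
	Src, Dst netip.Prefix
}

type EgressPolicy []EgressRule

// Allowed is default-deny: traffic passes only if some rule matches.
func (p EgressPolicy) Allowed(src, dst netip.Addr) bool {
	for _, r := range p {
		if r.Src.Contains(src) && r.Dst.Contains(dst) {
			return true
		}
	}
	return false
}

func main() {
	policy := EgressPolicy{
		// The IoT subnet may talk to the application controller only.
		{Src: netip.MustParsePrefix("10.20.0.0/16"),
			Dst: netip.MustParsePrefix("10.99.0.10/32")},
	}

	iot := netip.MustParseAddr("10.20.1.7")
	fmt.Println(policy.Allowed(iot, netip.MustParseAddr("10.99.0.10")))    // true: controller
	fmt.Println(policy.Allowed(iot, netip.MustParseAddr("93.184.216.34"))) // false: internet
}
```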
Those are just common, modern-day examples of what service providers around the world are looking into, but it's not limited to service providers. This DPU technology is quite similar to the foundational technology that the hyperscalers, meaning Google and AWS, have adopted. You may have read about AWS Nitro: it is a good example of how you separate the host workload onto the card, and the card itself actually controls the host. It does the network management and the storage management, and if, say, the host application hangs, the controller PCIe card sends an interrupt to the host so that the node can be cordoned; there is a thin hypervisor layer on the host itself to consume those interrupts, et cetera (a minimal sketch of that cordon step follows after this exchange). Those are common examples of how you build your private cloud or cloud infrastructure, for maybe a medium-to-large enterprise audience. So let me pause at this point: do you now understand what a DPU is? Yes? Not a question, right? That's fine. I'm hoping you understand what a DPU is better than you did ten minutes ago. Okay, we have a lot of time, so I'd like the audience to speak up. Does anyone have a specific use case they are interested in that you want to discuss here? We have a very small crowd, so it's a good opportunity; anyone online can ask as well, but does anyone want to raise a use-case topic? Hi, sorry, okay. Yeah, so I think for us, we're mostly looking at using a DPU, or previously a SmartNIC, to provide things like tenant isolation for an on-prem private cloud, and we would like to be able to run some network processes on it, for example a routing agent like FRR or BIRD or something similar, and have that be isolated from the actual bare-metal system itself. So for us, that's probably how we see ourselves using DPUs, and we've looked a little into it. I'm very happy with your presentation; I think you've explained very well how a DPU can help us. Thank you. Thank you. May I ask your name? Oh, sorry: my name is Derek, I'm from Rakuten. Okay, very interesting. For your information, on this slide you can see that OpenShift can actually cover that; maybe we can call it a converged cluster. In this diagram we just run one Kubernetes cluster that covers both the DPUs and the users' tenant nodes. But beyond that, we can actually separate the Kubernetes clusters, one for the users and the other for the DPUs. So we can use two kinds of Kubernetes clusters, in the case of OpenShift: one running ARM-based OpenShift and the other running x86 OpenShift. The challenge is that we then have two types of controllers, an ARM-based OpenShift control plane and an x86-based OpenShift control plane, so we have to manage a multi-architecture cluster. There is another project, called HyperShift, to run those multiple architectures' control planes on top of Kubernetes, and we are working on that. What I'm saying is that, furthermore, we can isolate the administrators: one for the DPU infrastructure and the other for the x86 infrastructure. This is another example of what we are doing. So, I'm from F5, so I'd like to pitch my product solutions as well, but I'm not doing that today. This is a good way of managing a multi-cluster environment, I do agree, because a single cluster may only have a hundred or so nodes. If you look at the scale of a network like Rakuten's, for example, they have a very highly scalable Kubernetes infrastructure, and in order to manage those high-scale private clouds you need to control where the traffic goes and where it comes back to, and a service that manages the ingress and egress aspects of those clusters will be very critical in a multi-cluster environment.
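And here is the cordon sketch promised above: if a DPU-side agent decided the host was unhealthy, one way to take the host's Kubernetes node out of scheduling is to mark it unschedulable through the API, shown here with client-go. The node name and the in-cluster credentials are assumptions of this sketch, not anything OPI has specified.

```go
// Sketch of the "cordon the host" step: a DPU-side agent marks the host's
// Kubernetes node unschedulable via the API. Assumes client-go and
// in-cluster credentials; the node name is illustrative.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	nodeName := "worker-01" // hypothetical host node name

	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Cordon: no new pods get scheduled while the host is investigated.
	node.Spec.Unschedulable = true
	if _, err := client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Printf("cordoned node %s", nodeName)
}
```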
So this has a lot of potential in doing that. And the best part is that the OPI project is trying to abstract the use cases and make sure that your vendor selection doesn't impact your integrations. That is very critical to how we are targeting the APIs and the deployment models of OPI. So maybe go to the next slide. A bit of a repeat on how the OPI project is structured: we have a steering committee, of course, and a board of directors. There's the outreach committee doing this kind of marketing effort. And we have independent working groups running weekly or bi-weekly meetings: provisioning and lifecycle; API, which is very critical since we are developing APIs; the developer platform, also very important; and use cases, one of the most important, because we want this to be useful, of course. Next slide. And yeah, this is basically the last slide; we have about ten minutes left. The slides are available online in PDF form, so you don't need to take pictures, though you can if you want. You can scan the QR code to join the mailing list. And when you join the Slack, we have channels to discuss each working group's topics; it's better to join the Slack when you have any questions to discuss, for more interactive communication. And I believe it was last month or so, there was KubeCon in Detroit, Michigan, where an enterprise customer was also talking about the importance of IPUs and DPUs. You might want to look up that KubeCon session and see what a certain enterprise is thinking in terms of requirements for an IPU environment; it was a very interesting perspective. I am from the vendor side, but it is interesting to look at the user side and why they want DPUs and IPUs. There's a session from KubeCon that will be very interesting; it's not linked here, but FYI. Okay, so, any questions? Questions, comments? Oh, if you don't mind, please state your name and where you're from. Yoshida, from Kioxia Corporation. A vendor? Yes, a vendor, and we are also doing NVMe over Fabrics. NVMe over Fabrics, yes. So we are interested in that NVMe bridge. The bridge, yeah. Is it a simple bridge that converts NVMe over PCIe to NVMe over TCP, like the Nitro card, the way AWS EBS does it? I understand your question, and thank you for the interest in the NVMe over Fabrics use case; the bridge is what he's interested in. The question was whether it is similar to how Nitro provides storage access on the Nitro card. I'm not the right person to answer that question in detail, but there is a forum you can join: look up the Slack and see what they are discussing. I do not know the actual details or progress of it, but it is one of the key use cases the project is looking into. Actually, there is further activity, outside of OPI, in the IOWN Global Forum: we are trying to establish NVMe over Fabrics over the All-Photonics Network that IOWN provides. And I think Kioxia is one of the main participants there, so we can discuss that further. So there are many possible solutions, adopting the OPI technology together with the photonic network to construct this.
Yeah, we have. Thank you. And we do accept questions in Japanese; if you prefer to ask in Japanese, I'll translate for you. Hi, Derek from Rakuten again. My next question is: what kind of operating systems run on these DPUs? Is it a specific form of embedded Linux, or can we put something generic on it? So of course Red Hat Enterprise Linux is ready, but anything that runs on ARM cores will do; yes, this is ARM. And we can also run Kubernetes on them, the OpenShift Container Platform, separately on the ARM processors. And from the independent software vendor side: we have some challenges porting certain applications to ARM. For example, my company, F5, has a number of software products and applications, but not all of them run on ARM. So there is a challenge not just in the operating system part, but in application readiness for the specific use cases. There are certain milestones we have not hit yet, but, as mentioned, it can run any Linux operating system; it doesn't have to be a specific one. We will be running PoCs with Red Hat Linux, for example, because they are one of the strongest contributors to the forum, so there may be phases in terms of who does it first. And one of the key aspects of this project is that it will target qualification or certification of compliance with the standard. We should be excited about that, because regardless of what the operating system is, the host, the application user, or the cloud platform user doesn't need to worry about how that works out. There are also talks about using real-time OSes, et cetera, and running some telco-type workloads on the system, so that we can do precision timing and so on. Thank you very much. Okay. If you have any further questions, please let me know; we can also discuss offline. And you can join the Slack to discuss things through chat; each working group's members are happy to discuss with you. Okay, thank you for joining this session. Thank you. Thank you.