Greetings, everyone. Everyone excited? Lunch? You're revitalized? Were there cookies? No break yet? OK, I'll try to keep you awake. I'm presenting today with my friend Ben. Say hello, Ben. Ben actually couldn't be with us today, so just to orient you: we've got a video. Ben was nice enough to record his section, so we're going to play that for you, and then I'm going to pick up at the end. We all get to watch me stand here while Ben does his awesomeness. So let's go ahead and get started with Ben.

Hello and welcome. I'm Benjamin Beeman, a staff software architect working for GE Healthcare, and I want to welcome you to Edge Day. It's really great to have everybody here after two years of not seeing each other in person. We get to see each other in person, except that I'm not there. I'm really jealous that I couldn't be there, but I'm sure it's really exciting for you to meet new people and talk about one of the most challenging aspects of Kubernetes: deploying it on the edge. I'm just really excited to be part of the conversation. I'm accompanied today by Jeremy Oki, the VP of Sales for Spectro Cloud, and Spectro Cloud is our strategic partner going forward in our Kubernetes journey. I will be covering our requirements, and Jeremy will go into more detail about the solution.

Before I kick our story off, I want to tell you a little bit about my background and why I'm here talking to you. I've worked at GE Healthcare since 2011, where I started with an internship in diagnostic cardiology, building a RESTful web service to support an HTML front end displaying diagnostic cardiology data. When I started with GE, I was able to join a leadership program called the Edison Development Program. It's a great program because you can switch between different modalities and really get a good flavor of GE in general. So I did four rotations: a rotation at PET/MR, a rotation at CT, and then I liked the platform team, so I did two rotations on the platform team and took my off-rotation role with the GE platform team at the time.

In my PET/MR rotation, we were combining a PET scanner with an MR machine. PET is really great at detecting areas of concentration and narrowing down where cancer cells are, and MR is really good at seeing soft tissue. I got to learn about how they detect tumors in the brain: the MR is able to show the doctor the soft tissue and trace the axon nerve paths so that they can determine exactly how to take that tumor out without damaging the patient.

Then I moved over to CT recon, where I did some experimental computational algorithms taking data from a CT scanner. This data has to be reconstructed into 3D images for the doctor, and the reconstruction algorithm is tremendously compute-intensive. CT is constantly looking for better ways to optimize these computations using GPUs and fast compute devices. In my rotation, we took an algorithm that was running on a single GPU and rewrote it to run on multiple GPUs, so we could determine whether using multiple GPUs added more overhead and latency or delivered a speedup.

Then I moved over to the platform team, a GE Healthcare platform team creating common solutions for all the modalities in GE. I worked on a common image viewer so that DICOM images can be viewed across different modalities: CT, X-ray, MR. For those who don't know, DICOM is the image standard for medical imaging.
I did take a role outside of GE for a year, at a company called TriTech, which merged and became CentralSquare. I worked on an evidence management system tracking evidence coming into the police department and the chain of custody it passes through. Public safety is another area where they are incorporating cloud into their workflow, but they still have several reasons why they can't put everything in the cloud; they still have to have on-premise solutions.

Finally, I rejoined GE and joined the GE Healthcare Edison platform team, an evolution of our previous platform team into the Edison platform. So I currently work at GE Healthcare on the platform team, where we're creating Edison Health Services on our on-premise platform, which we call the Edison Health Link. These services can be deployed on the Edison Health Link or in the Edison Cloud. The goal of Edison Health Services is to serve all modalities at GE, such as CT, MR, patient monitoring, and many, many more.

So a little bit more about the Edison platform. What in the world is Edison? Well, of course, we're GE, so we like the name Edison because, you know, Thomas Edison was our founder. At a high level, Edison is an intelligent platform supporting multiple and diverse types of modality applications, running on-premise and in the cloud across GE Healthcare. Essentially, Edison is an ecosystem for our internal and third-party developers to deploy apps, new and old, to healthcare edge locations.

Edison is a compute platform, and by this I mean it is the engine powering complex image reconstructions, AI, and other resource-intensive algorithms. We are always looking to optimize speed and cost, and to provide solutions that will last into our customers' future endeavors. The resources certainly include CPU, memory, GPU, and storage, and we share these resources for better utilization. Edison is also seeking a unified approach for developing and managing solutions across on-premise and cloud deployments. In our journey, when we met Spectro Cloud, we were looking for common ways to package and deploy applications that wouldn't box us into a particular type of technology. Every day, we're looking for the best ways to leverage open source and commercial solutions to better enhance current applications. In this way, we're creating a unified platform for any application to be deployed both on-premise, in the hospital's data center, and in the cloud. We were looking for a common, cloud native way to package and deploy these applications.

One of the main goals of Edison is to be an enabler of legacy applications. GE Healthcare has a lot of very powerful legacy applications that have taken decades to put together, healthcare solutions that are serving patients every day. These applications come in all different sizes, shapes, and deployment models, and enhancing, upgrading, and patching them can be a huge task. Edison is a way to enable enhancements to legacy applications and build the roadmap for their future. In order to do this, Edison has to support current deployment types and software, and in some cases deploy software without redevelopment of the application.

I also want to take a brief moment to talk about healthcare and its unique challenges in deploying medical devices and solutions in the healthcare industry.
Whenever we're trying to transform technology in the healthcare world, it's important to consider some of the important limitations in this space. Depending on where you are in the world, there are different regional regulations and restrictions. This is part of the reason we have the requirement to be able to run without a direct connection to the cloud. We need to be able to deploy and fleet-manage with a central source of truth in a disconnected world.

We've mentioned Kubernetes on the edge, and in Edison, this is what we mean by edge: our platform, containing Kubernetes and standalone VM deployments, deployed on physical hardware or in data centers located at our client sites. In our case, these are medical facilities. I will also refer to the edge as on-premise, as an on-premise location; all of these terms describe the same thing. This is a local edge location, and it can become disconnected from services hosted by cloud providers, so it has to be fully capable of running independently if disconnection occurs.

There are also other concerns when we're monitoring systems and taking data off the system. We have to pay close attention to PHI, the patient's Protected Health Information; to PII, information that may personally identify the patient; and to any pieces of information that might be correlated together to produce PHI.

Different regions around the world have unique regulations when it comes to medical devices and how you deploy in the area, so we have to deploy solutions that are capable, flexible, and configurable to meet the needs of a diverse world. Our technology deployed into the hospital environment consists of life-critical, life-saving applications; when they are up and running, they are making outcomes better. Ensuring that we have proper uptime is crucial to the success of technology in the hospital world.

The next point I want to make is proximity to the patient, which might not seem as relevant, but the closer you get to the patient, the more costly your compute devices become, the higher the regulations, the more the scrutiny. This can be proximity to the patient, but it's also proximity to the scanner devices themselves. If we're looking at an MR machine or a CT machine, sometimes these solutions need to be deployed close to the scanner. This can be a performance issue: we're pulling so much data off the scanner, and so much data as we're monitoring patients, that the further away you get, the transmission of the data creates more latency for the operation, which can be unacceptable for the treatment. As you get further away from the patient, there are latency issues and more opportunities for communication failure.

The third point I need to make is that in our highly, highly regulated software environment, we need long-term support. Typically, we need two years of development support and then an additional three years of production support as we deploy the solution in the field.
And the type of support we need is not just taking the latest release where they've fixed some issues; we actually need support for the specific versions we're using, because incorporating a new version can create a whole new development cycle, and with the regulations and rigor we have to follow, that can be too costly a development cycle just to roll out a bug fix or a patch for an ongoing production piece of software.

So let's talk a little bit more about this melting pot of innovation and technology that we're trying to bring some order to. This expands to clouds, data centers, and GE Healthcare providers; oh, and thousands of edge devices in hospitals, with a myriad of diverse application deployments. When we went looking for a fleet management solution and came across Spectro Cloud, there were four basic buckets of challenges we were trying to solve. For the lifecycle of our products in the field, we needed a central place to manage system state and a central source of truth for that state, for all of these devices. We also needed to support configuration for every level of our software stack: commercial, open source, and internal software. For data center, appliance, and on-premise solutions, we needed a simple, lightweight local software repository that could store every type of software we were delivering, without requiring multiple solutions to get all the features we needed. And we needed the ability to consistently deploy and manage across every environment, whether that be an appliance, a data center, or a cloud, so I can deploy applications, add-on services, platform services, and infrastructure in a supported, common way without reinventing the wheel every single time.

All right, let's talk about lifecycle management of this stack. We needed to manage installs, patches, and upgrades from a central source of truth for over a thousand edge locations. Every system would have a cloud configuration that I can go to, which becomes the central source of truth for what needs to be installed on that system and what versions of those things are installed. We have compute resources, we have software components, and we have different types of connectivity per site: some sites are connected; some are semi-connected, with intermittent connectivity or very poor connectivity performance; and some sites are not connected at all, and we still have to support that. We don't want one-off solutions for every single one of these; we want a centralized solution that manages all of them. And with all of these system deployments, how do we fleet-manage this? How do we ensure a system state?
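To make that central source of truth concrete, here is a minimal sketch, in Python, of what a per-site desired-state record along the lines Ben describes might look like. The names and fields are hypothetical, purely for illustration; this is not GE's or Spectro Cloud's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class Connectivity(Enum):
    CONNECTED = "connected"        # always reachable from the management SaaS
    SEMI_CONNECTED = "semi"        # intermittent or very slow links
    DISCONNECTED = "disconnected"  # never talks to the cloud directly


@dataclass
class Component:
    name: str     # an internal app, a commercial package, an open source add-on...
    kind: str     # "os", "addon", "container", "vm", ...
    version: str  # pinned; an upgrade is an explicit change to this spec


@dataclass
class SiteSpec:
    """Central source of truth for one edge location's desired state."""
    site_id: str
    connectivity: Connectivity
    components: list[Component] = field(default_factory=list)


# One record like this per site, across a fleet of a thousand-plus locations.
spec = SiteSpec(
    site_id="hospital-0042",
    connectivity=Connectivity.SEMI_CONNECTED,
    components=[
        Component("base-os", "os", "1.8.2"),
        Component("kubernetes", "addon", "1.24.3"),
        Component("image-viewer", "container", "4.1.0"),
        Component("legacy-recon", "vm", "2.9.7"),
    ],
)
```

Fleet management then reduces to keeping every site converged on its own record, whatever its connectivity class.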
And just to double-click on the types of software: we need internal, commercial, community, and open source support. Our solution deploys containers, VMs, OS images, with support for many different types of artifacts. And if we're going to have all these different types of software, well, we're going to need a single lightweight solution to store these things on premise, because if I don't have connectivity, or I have intermittent or slow connectivity, I can't necessarily just download this software every time I want to do an install or launch a container pod. I have to store that software locally on my system, and I can't have my software repository take up a big part of the resources on the local system, because resources on a local on-premise system are the most costly resources; it's prime real estate. So we need a simple, single, lightweight solution to store and provide these software artifacts, any type of artifact I'm going to deploy to the system, and manage those artifacts with the smallest possible footprint.

So, moreover, we've talked about many of the challenges of the hospital environment that we're deploying medical applications into and supporting as a platform. But most of all, when we went out looking for fleet management solutions, we needed to wrap up the deployment, the upgrades, and the system state into a single solution for all of our deployment targets. I want to work through a single pane of glass, configuring my components, whether pod-based or VM-based, my infrastructure, my Kubernetes layer, overall my lifecycle management, in a common, standard way that we can share with our modality partners within GE Healthcare so they can configure the dynamic applications they have to deliver on the Edison platform.

Now let's bring it all together. We've talked about all the challenges we were facing and what we were trying to solve when we partnered with Spectro Cloud. Now we bring in Spectro Cloud. Spectro Cloud answered a lot of things for us in this fleet management area. With Spectro Cloud's declarative model, we can package a myriad of software, VMs, containers, in a common packaging solution. Spectro Cloud extended its declarative profile model to include on-premise management and OS layers. So not only can I configure applications, container applications, in profiles, but now we can configure VM orchestration and deploy VMs on-premise through a profile. I can deploy system apps in my system management plane, and I can even configure, deploy, and update the base OS of the appliance through a Spectro Cloud configuration profile.

The premise that Spectro Cloud's management is built upon allows for a disconnected system. I can be completely disconnected from the Spectro Cloud SaaS: I configure my system through that SaaS, take that configuration, and apply it to a completely disconnected system, and the management onboard the system will perform my installs and my updates and keep that system state consistent with the specification that I pull from the central, GitOps-driven fleet management solution.

For our on-premise software solutions and storage, Spectro Cloud has deployed the open source Harbor registry, enhanced with a Spectro Cloud proxy, so that we can use all the wonderful capabilities of the lightweight Harbor solution as a software repository for all of our types of software, with seamless integration through that proxy.
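The local repository behavior Ben describes is, in effect, a pull-through cache: serve an artifact from local storage when it's already staged, and only reach upstream when a link exists. Here's a toy sketch of that logic in Python, assuming a flat artifact namespace and a simple HTTP upstream; this is illustrative only, not Harbor's or Spectro Cloud's actual implementation.

```python
from pathlib import Path
from typing import Optional
from urllib.request import urlopen


class LocalArtifactStore:
    """Toy pull-through cache for on-premise artifact storage."""

    def __init__(self, cache_dir: str, upstream: Optional[str]):
        self.cache = Path(cache_dir)
        self.cache.mkdir(parents=True, exist_ok=True)
        self.upstream = upstream  # None when the site is fully disconnected

    def fetch(self, artifact: str) -> bytes:
        local = self.cache / artifact
        if local.exists():
            # Connected or not, the local copy serves the install.
            return local.read_bytes()
        if self.upstream is None:
            raise FileNotFoundError(
                f"{artifact} is not staged locally and there is no upstream link"
            )
        # Semi-connected: fetch once over the slow link, cache for next time.
        data = urlopen(f"{self.upstream}/{artifact}").read()
        local.write_bytes(data)
        return data
```

For fully disconnected sites, the cache is simply pre-staged or side-loaded with everything the site's spec calls for.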
Spectro Cloud also has a flexible model that can support different system configurations. All of this comes together to help reduce cost and complexity, and reducing the cost of healthcare solutions can reduce the overall cost of healthcare, allow more solutions to be delivered, help more patients, and ultimately provide better patient outcomes. As Spectro Cloud continues to become more flexible on-premise and in the cloud, it becomes the glue that Edison uses for a common cloud native solution, both on-premise and in the cloud.

Hey, I'd like to thank you for having me here today, and I appreciate you allowing me to share some of the challenges we face in the healthcare industry. I hope you can see how cloud native can help us support our customers and our applications and deliver better solutions for better patient outcomes. Maybe next time I can join in person, but I hope you enjoy the rest of the conference and have a great day.

So as you can see, Ben's passionate about the technology, and he's delivering a lot of new technology. This integration between highly regulated medical devices and starting to do new things, like analysis at the edge, close to the data, is the challenge he's been talking about. So let me double-click in a little bit, because he's talking about desired state and a Spectro Cloud Palette cluster profile, and this middle section is actually inside Spectro Cloud Palette. This is what the declarative model visually looks like. These different layers, we call them packs, and you can compose your blueprint of what you want the desired state to look like. As you can see on the right, they've built a pretty complex stack to deliver out at the edge, because there are different third-party applications, internally developed applications, and the underlying things that make Kubernetes production-ready. And the fact that the edge environments are customers' environments with varying requirements, some virtualized, some bare metal, some big machines, some small machines, creates the sort of challenge Ben was talking about.
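One way to picture that blueprint: each pack pins a name and a version at one layer, and the profile is just the validated, ordered stack from OS up through apps. This is a hedged sketch with hypothetical layer and pack names; Palette's real pack model is considerably richer.

```python
from dataclasses import dataclass

# Bottom-to-top order the layers of a profile would compose in.
LAYER_ORDER = ["os", "kubernetes", "cni", "csi", "addon", "app"]


@dataclass(frozen=True)
class Pack:
    layer: str    # one of LAYER_ORDER
    name: str
    version: str  # pinned, so the blueprint is a reproducible desired state


def compose_profile(packs: list[Pack]) -> list[Pack]:
    """Validate the packs and order them into a deployable blueprint."""
    for pack in packs:
        if pack.layer not in LAYER_ORDER:
            raise ValueError(f"unknown layer: {pack.layer}")
    return sorted(packs, key=lambda p: LAYER_ORDER.index(p.layer))


profile = compose_profile([
    Pack("app", "dicom-viewer", "4.1.0"),
    Pack("cni", "calico", "3.23.0"),
    Pack("os", "ubuntu", "20.04"),
    Pack("kubernetes", "kubernetes", "1.24.3"),
])
# profile now reads os -> kubernetes -> cni -> app, ready to push to a site.
```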
So we're going to talk a little bit about what we've delivered for GE, with some specific benefits. They need a plug-and-play, low-touch box. If GE has to send a highly technical Kubernetes person out into the customer's environment, the logistics alone can be challenging. Imagine now if you're a retailer with retail stores and customers around, or you're an ISV with lots of customers and you just want to ship your software: you don't want to be intrusive into their business. So having that light-touch ability to deploy is important. They need to be able to update it connected, semi-connected, and disconnected; so how do they run that long lifecycle Ben mentioned, two years of support plus three years of production? There are also adjacent virtual machines in their solution. Not all of their technology has been moved forward to cloud native, microservices, and containers, so the edge solution may need a few virtual machines off to the side as well, dependencies for the application stack, while the developers continue to modernize and transform the applications to be entirely cloud native.

Secondly, this desired state we're talking about, this blueprint, this cluster profile, ends up being the centralized policy that we can push out to the edge device. Now, what's different from a lot of technologies: sometimes, whether something currently matches your desired state is only measured when you run something like Terraform. When you run a plan and apply, you're measuring whether the desired state in the Terraform script matches what's actually there. Using the integration of Cluster API and Terraform together, they still use GitOps, and they still use Terraform to configure a lot of things, but it's the Spectro Cloud Terraform provider. I think we have something like 3.4 million downloads, so it's very popular. Most of our customers do things in the UI, and as they scale up, they move into GitOps and Terraform ways of doing things. So that becomes your system of record, and Palette becomes the system of operation.

We're constantly looking at whether the desired state is being maintained, and that's actually done at the cluster. So in this disconnected state where there's no management plane observing, how would we know that the configuration tried to drift? We know because it's actually being monitored at the cluster, in a constant loop that's measuring it. So if they're disconnected, they pull that policy and the new artifacts down, walk them across that gap, and deliver them to the edge device; then it's applied, it now matches the new desired state, and it's constantly being measured to make sure it keeps matching that desired state.
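That "constant loop measuring at the cluster" is the classic reconcile pattern. Here's a stripped-down sketch of the idea; the two placeholder functions stand in for real inventory and install machinery, and this is not the actual on-cluster agent. The point is that the comparison runs locally and continuously, whether or not a management plane is reachable.

```python
import time


def observe_actual_state() -> dict[str, str]:
    """Placeholder: inventory what's actually installed (name -> version)."""
    return {"kubernetes": "1.24.3", "dicom-viewer": "4.0.0"}


def remediate(name: str, version: str) -> None:
    """Placeholder: install or upgrade a component to its desired version."""
    print(f"reconciling {name} to {version}")


def reconcile_forever(desired: dict[str, str], interval_s: int = 60) -> None:
    """Runs on the cluster itself: measure drift and correct it, in a loop.

    `desired` is the policy pulled down (or walked across the air gap);
    no connection to the management plane is needed for the loop to run.
    """
    while True:
        actual = observe_actual_state()
        for name, version in desired.items():
            if actual.get(name) != version:
                remediate(name, version)
        time.sleep(interval_s)
```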
And it's a use-what-you-like approach. We provide a lot of capabilities, but GE has the ability to bring third-party and proprietary technology; they don't have to wait for the platform to support it. And that sort of segues into this local repo. We provide a lot out of the box: easy to use, quick. One of our problems in Kubernetes, and especially at the edge, is that we're starting to see a skills gap. As Kubernetes grows and becomes more popular, we're fighting over a constrained pool of Kubernetes experts. So we need to make the "what does it take to get Kubernetes at the edge, in production" easy to consume, so that we're raising the game for everyone without having to go hire the smartest people on the market. That also means: who do you call Friday night at 2 a.m. when it's broken? Part of what we offer is supporting that ecosystem of things like service meshes, load balancers, ingress, storage, and networking. So you have someone to call about what's open source, when maybe there isn't a commercial offering, or even about what's internal, when they're struggling with how to make it work on Kubernetes. You have a set of Kubernetes experts they get access to, that they can call when it's all gone wrong.

And finally, multiple management points. What's important here is that there are prior generations of Kubernetes management, and I think we're where we are today because of some of their efforts, so we can thank them. But a lot of times when you start talking edge, we're talking not tens, we're talking hundreds; in GE's case, thousands; in retail, you may have tens of thousands of stores. A lot of the prior-generation architectures needed multiple management points just to do one type of capability, cloud or edge. So maybe if you had a hundred stores, you needed one management point just to be able to manage those hundred stores. The way we're architected, you would only choose to have multiple management points because maybe you need something in the EU and something in North America for data sovereignty, or maybe you do need air gaps. We support self-hosted and air-gapped as well. So maybe you have a version of the management plane in a highly classified or disconnected environment where you want the whole thing to be self-contained; we see this in the DoD (Department of Defense), intelligence, energy, and oil and gas control networks, which aren't connected to the internet at all. So you have the ability to choose your management domain, but it's not forced upon you by a limitation of the architecture's scale; you may be choosing it for certain regulatory or administrative reasons.

And so those are things that GE has seen the benefit of by using Palette, Spectro Cloud's Palette. Now, when we talk about what the deployment workflow looks like, we try to make it really simple. You can't do a truck roll; truck rolls are very expensive, with an expensive field engineer. A lot of organizations we talk to say that the third party they use for deploying field technicians bills a minimum of four hours just to do the truck roll. So make the provisioning of the device simple. Have someone who can warehouse it, stage it, get the minimum configuration on it, and ship it out, so that at the edge location it doesn't have to be a highly technical person. It could literally be the ISV's customer or GE's hospital customer; it could be a store manager in retail. Make it so that they can go through a couple of simple steps: plug it in, turn it on, scan the NFC tag. Out at the tabletop, I think we have some devices with an NFC chip on top and a little app. That registers the device back into the Spectro Cloud Palette platform, and then this blueprint, this desired-state model, our cluster profile, is pushed down, and that long-running lifecycle of capabilities kicks in. Out at the edge, we try to make it as simple as possible. We're really trying to bring a cloud experience to the edge; edge sort of hasn't had that in prior generations.
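Here's a hedged sketch of what that low-touch bootstrap amounts to, with hypothetical names throughout: the staged device carries only a minimal identity (say, from the NFC tag), and everything else, the cluster profile included, arrives after it registers, or is side-loaded when the site is disconnected.

```python
import json
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Registration:
    device_id: str
    site_token: str  # e.g. carried on the NFC tag scanned at power-on


class SideLoadedBundle:
    """Profile source for a disconnected site: read the blueprint from disk."""

    def __init__(self, path: str):
        self.path = Path(path)

    def profile_for(self, reg: Registration) -> dict:
        return json.loads(self.path.read_text())


def apply_profile(profile: dict) -> None:
    # Placeholder: install OS / Kubernetes / app layers in order.
    print("applying layers:", [layer["name"] for layer in profile.get("layers", [])])


def bootstrap(reg: Registration, source) -> None:
    """Low-touch flow: register, fetch the blueprint, apply it, hand off.

    `source` is anything with a profile_for() method: the management SaaS
    when the site is connected, or a SideLoadedBundle when it isn't.
    """
    profile = source.profile_for(reg)
    apply_profile(profile)
    # ...then the long-running reconcile/lifecycle loop takes over.
```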
So this distributed orchestration we're talking about: as I mentioned, a lot of the resources are local, and fully disconnected is one option; that's how GE is using it. So there can be things like a local CLI, because I may just need to put in a little bit of configuration: that customer might need a proxy, or might need a specific set of static IP addresses. So they have a simple TUI, a text UI, they can configure on. And then even for the health of it: maybe there's no direct support connectivity because it's always disconnected, or it's become accidentally disconnected. Give them a lightweight version of the web UI so they can still see the state. Is the desired state the right version? Is it currently in sync with what the desired state requests the environment to be? So some local troubleshooting as well, and doing that at the scale of thousands of locations.

So, three key takeaways. Oh, you wanted to get a picture of that one? Go for it, and we can talk to you outside too. All right, three key takeaways. One, reducing the costs of deploying and managing at these locations. The costs are one thing, but there's also the customer's perception: get the touches lighter, reduce the intrusion on their clients, the hospitals. Their direct and indirect customers, the hospitals, the patients, the doctors, don't want you in their way with your technology. So reducing the overall costs of engineering and the high touch that's required is something they just had to have, and they get that.

Two, working with the Edison platform on modern Kubernetes helps their third parties and developers. You have a lot of third parties; you can't always tell them everything about the platform. You have to provide a lot of services and just have those things come together in repos, so the developers don't need to get into the details of how it works. The Edison platform provides those services, so they have a broad ecosystem. In this case, for example, hospitals actually buy different apps in the Edison portfolio: one hospital might buy apps one, three, five, and seven; another might buy two, four, ten, and twelve. So even hospitals have different compositions of the services and software they've purchased from GE, and these edge devices aren't cookie-cutter. For an ISV that delivers the same full stack over and over, this is a little simpler, but GE has a complex environment where the developers just need to know they can push these things onto the Edison platform and deliver them all the way out to the edge successfully.

And thirdly, they need constant access to the latest innovation. They don't want to have to always come back to us and say, "I'd really like to use X, but you don't support X yet." We do provide a lot out of the box, but they can always develop their own things; there are great controllers, operators, and Helm charts out there, so they can integrate those very quickly and try things over and over. They do use this in the cloud as well: a lot of times they're testing, iterating, and developing in the cloud, and then they push things out to the edge in production.

So, Ben not being able to be here with us was unfortunate, but I do want to thank him for his time; hopefully we'll see him in person next time. We have a tabletop out there, so if you want to ask us more about edge, or what GE is doing, or about other edge customers, please come visit us. There's also The New Stack, you're probably all familiar with them, and the Toppest Tuesday; it starts at 4:15 and we're sponsoring it. When you head back out the main hall to registration, take a right instead of exiting, and it's off through there; that's where the Toppest Tuesday is. There's a panel, and I think we're on it: our CTO, who did the keynote earlier, is on it, along with some folks from VMware and another on the panel. So join us for Toppest Tuesday as well, starting at 4:15. Thank you. Appreciate it, everybody.