From around the globe, it's theCUBE, with digital coverage of DockerCon Live 2020, brought to you by Docker and its ecosystem partners.

Hi, I'm Stu Miniman, and welcome to theCUBE's coverage of DockerCon Live 2020. Really excited to be part of this online event. We've been involved with DockerCon for a long time, and of course, one of my favorite things has always been talking to the practitioners. We remember how, a few years back, Docker exploded onto the marketplace, with millions of people downloading and using it. So joining me is Hui Xue, who is Principal Deputy Director of Medical Signal Processing at the National Heart, Lung, and Blood Institute, which is part of the National Institutes of Health. Hui, thank you so much for joining us.

Thank you for inviting me.

So let's start. Of course, the name of your institute is pretty specific; I think anyone in the United States knows the NIH. Tell us a little bit about your role there and the scope of what your team covers.

I'm basically a researcher and a developer of medical imaging technology. We are the Heart, Lung, and Blood Institute, so our work focuses on imaging the heart. What we do, exactly, is develop new and novel imaging technology and deploy it to our clinical collaborators at hospitals, and Docker plays an essential role in that process. So yeah, that's what we do at NHLBI.

Okay, excellent. Research in the medical field, with the global pandemic, of course gets a lot of attention. So you keyed it up there: let's understand where containerization, and Docker specifically, plays into the work that your team is doing.

Maybe I'd like to give an example first. We're working on magnetic resonance imaging, MRI; many of us may have been scanned at a hospital. We're using MRI to image the heart. What Docker does is allow us to deploy our imaging technology to the clinical hospitals. We have a global deployment at around 40 hospitals, a bit more, around the world. If we develop, for example, a new AI-based image analysis for the heart images, we Dockerize it: we put our model and software into a Docker image, then our collaboration sites pull the software, get the latest technology, and use it for patients, of course under a research agreement with NIH. Because Docker is so efficient and globally available, we can implement a continuous integration, testing, and update framework based on Docker, so our collaborators always have the latest technology. In traditional medical imaging, and medicine in general, the iteration of technology is pretty slow, but with this latest technology, like containers and Docker, coming into the field over the past two or three years, that whole paradigm is changing. It's certainly very exciting for us; it gives us an agility we never had before to reach our customers and other people in the world, to help them, and they also help us. So that's been a very good experience.

Yeah, that's pretty powerful, what you're talking about there, rather than, you know, we installed some equipment, and who knows how often things get updated, or how you make sure to synchronize between different locations. And obviously the medical field is highly regulated, and you're a government agency.
Talk a little bit about how you make sure you have the right version control and security in place. How do all of those things work out?

Yes, that's an essential question. Firstly, I want to clarify one thing: it's not NIH that endorses Docker; it's us as researchers. We practice Docker and we trust its performance. This container technology is efficient, it's globally available, and it's very secure; all the communication between the container and the imaging equipment is encrypted. We also have all the paperwork, so to say, set up to allow us to provide technology to our clinicians. When they pull the latest software, every version we push up to Docker has gone through an automated integration test system. Every time we make a change, the new version of the software goes through the regression tests: something like 200 gigabytes of data is run through it to confirm everything still works. The basic principle is that we don't allow any version of the software to be delivered to customers without passing those tests, and Docker, let's say container technology in general, lets us 100% automate this process, which gives us a lot of freedom. We have a rather small team here at NIH; many people are actually very impressed by how many customers we support with such a small team. The key reason is that we lean heavily on container technology: its automation is unparalleled, certainly much better than anything I had before using containers. That's the key to maintaining quality and continuous service for our customers.

Yeah, absolutely. Automation is something we've been talking about in the industry for a long time, but implemented properly, it can have a huge impact. Can you bring us inside a little bit? What tools are you using? How is that automation set up and managed, and how does it fit into the environment?

I can describe it more specifically. We're using a continuous testing framework. There are several out there; we're using a specific one called Buildbot, which is open source, written in Python, and rather small. This tool is set up as a service that watches, for example, our GitHub. Whenever I or someone on the team makes a change, for example, we fix a bug, add a new feature, or update an AI model, we push the change to GitHub, and this continuous build system notices it and triggers the integration test run, all inside a Docker environment. This is the key: what container technology offers is that we, as the software provider, can give our customers a 100% reproducible runtime environment. In our particular use case, we don't set customers up with uniform hardware; they buy their own servers around the world, so every setup may be slightly different, and we don't want that getting into our software experience. Docker gives us 100% control of the runtime environment, which is essential if we want to deliver a consistent medical imaging experience, because most of these applications are rather computationally intensive. We don't want something running in one minute at one site and maybe three minutes at another. So what Docker does is run all the integration tests, and if everything passes, we pack the Docker image and push it to Docker Hub. Then all our collaborators around the world have the new image; we coordinate with them, they find a proper time to update, and then they have the newer technology in hand. That's why Docker is such a useful tool for us.
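[Editor's note: Buildbot masters are configured in Python, so a minimal sketch of the watch-GitHub-and-test-inside-Docker setup described here can look like the following. The repository URL, Docker image, worker credentials, and test command are hypothetical placeholders, not the actual NHLBI configuration.]

```python
# master.cfg -- minimal Buildbot sketch: watch a GitHub repo and run the
# integration tests inside a pinned Docker image on every commit.
from buildbot.plugins import changes, schedulers, steps, util, worker

c = BuildmasterConfig = {}

# Poll the (hypothetical) GitHub repository for new commits.
c['change_source'] = [changes.GitPoller(
    repourl='https://github.com/example-org/imaging-app.git',
    branches=['main'],
    pollInterval=60)]

# Any change on main triggers the integration-test builder.
c['schedulers'] = [schedulers.SingleBranchScheduler(
    name='on-commit',
    change_filter=util.ChangeFilter(branch='main'),
    builderNames=['integration-tests'])]

factory = util.BuildFactory()
factory.addStep(steps.Git(
    repourl='https://github.com/example-org/imaging-app.git',
    mode='incremental'))
# Run the test suite inside the pinned image so the runtime environment
# is 100% reproducible; the mounted volume holds the validation data.
factory.addStep(steps.ShellCommand(command=[
    'docker', 'run', '--rm',
    '-v', '/data/testsets:/data:ro',
    'example-org/imaging-env:latest',
    'pytest', '/tests']))

c['builders'] = [util.BuilderConfig(
    name='integration-tests',
    workernames=['worker1'],
    factory=factory)]
c['workers'] = [worker.Worker('worker1', 'workerpass')]
c['protocols'] = {'pb': {'port': 9989}}
```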
Yeah, absolutely. Okay, containerization and Docker really transformed the way a lot of those computational solutions happen. I'm wondering if you can explain a little bit more of the stack that you're using. People who might not have looked at solutions for a couple of years may think, oh, it's containers, it's stateless architectures, I'm not sure how it fits into my network environment. So can you tell us, what are you doing for storage and networking?

We actually have rather vertical integration in this medical imaging application. We build our own server software; it's written in C++ for higher computational efficiency, with lots of Python, because the AI models are essential. What Docker provides is, as I mentioned, a uniform, always consistent runtime environment: a fixed GCC version, if we want to go into that detail, specific versions of the numerical libraries, a certain version of Python, and PyTorch as our AI backbone. Another way we use Docker is to deploy the same container into the Microsoft Azure cloud. That's another beauty I found in Docker: we never need to change anything in our software development process, and the same container can be transported everywhere, in the cloud or on-site at our customers. This reduces development costs and also improves our efficiency a lot. Another important aspect is that it improves, how do we say it, customer acceptance a lot, because we can go to one customer and tell them that the software they are running is exactly the same as what's running at the other sites, identical down to, let's say, the SHA hash. It's bit-by-bit consistent. This helps us convince many people; every time I describe this process, most people get the idea, and they appreciate the way we deliver software to them, because we can always roll back. Yes, here is another point: we have many Docker images in Docker Hub, so if one deployment fails, they can easily roll back. That's very important when a medical imaging deployment fails, because hospitals need to maintain a continuous level of service. Even though we want to avoid it completely, occasionally, very occasionally, there will be some function not working on some new case the tests never covered before. Then we give them a mechanism to roll back. That's also something the technology offers.
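[Editor's note: a minimal sketch of the pull, digest-check, and roll-back flow described here, using the Docker SDK for Python (docker-py). The image names, tags, and self-test command are hypothetical placeholders, not the actual NHLBI deployment.]

```python
# Pull a released image, record its registry digest so every site can confirm
# it runs a bit-for-bit identical build, and roll back on a failed self-test.
import docker

client = docker.from_env()

CURRENT = 'example-org/cardiac-ai:2020.05'   # hypothetical new release
PREVIOUS = 'example-org/cardiac-ai:2020.04'  # hypothetical known-good tag

def pulled_digest(tag: str) -> str:
    """Pull the tag and return the digest that all sites can compare."""
    image = client.images.pull(tag)
    return image.attrs['RepoDigests'][0]

print('deployed digest:', pulled_digest(CURRENT))

# Smoke-test the new image; if the (hypothetical) self-test fails, fall back
# to the previous tag so the hospital keeps a continuous level of service.
try:
    client.containers.run(CURRENT, command='run-self-test', remove=True)
except docker.errors.ContainerError:
    print('self-test failed, rolling back to', PREVIOUS)
    client.containers.run(PREVIOUS, detach=True)
```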
Yeah, absolutely. You brought up what many have said: that the container is the atomic unit, the building block, and that brings portability across any platform and environment. What about container orchestration? How are you managing these environments you talked about, in the public cloud or in different environments? What are you doing for container orchestration?

Actually, our setup may be the simplest case. We basically have a private registry, which we, actually NIH, pay for, with something like 50 or 100 private repos. Then for every repo, we have one specific Docker setup with different software versions: for example, some images are for PyTorch, others for TensorFlow, depending on the application. Maybe a customer has a requirement for a rather small Docker image size, so we also have trimmed-down versions of the images. Because it's still a small number, like 20 or 30, we actually manage this semi-automatically. We have the services running to push, pull, and roll back images, but we trigger this process here at NIH whenever we feel there is something new to offer to the customers. Regarding managing these Docker images, there's another aspect in medical imaging. On the customer side, we had a lot of discussion with them about whether we wanted to set up continuous, automated updates, but in the end we decided against it on that side; we'd better have the customers involved, better have some people in the loop. So we finally settled on notifying the customers that there is a new update, and then they decide when to update and test. So yeah, this is another aspect: even though we have a very high level of automation using container technology, we found it's not 100%. In some sense, we are still better off with human supervision, because if the goal is to maintain 100% continuous service, then in the end we need some eyes in the field to test and verify. So that's where we are at the current stage of deploying these Docker images. We found it rather lightweight, so even with a few people on our team at NIH, we can manage a rather large network globally. It's really exciting for us.
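[Editor's note: a small sketch of the notify-then-decide update flow described here: the site learns a new tag exists, but a person chooses when to pull and test it. The repository name is a hypothetical placeholder; the tag listing uses the Docker Hub v2 API.]

```python
# List the most recently updated tag for a repository, then let an operator
# decide whether to pull it now; switch-over happens only after verification.
import docker
import requests

REPO = 'example-org/cardiac-ai'  # hypothetical repo

def latest_tag(repo: str) -> str:
    """Ask the Docker Hub v2 API for the most recently updated tag."""
    resp = requests.get(
        f'https://hub.docker.com/v2/repositories/{repo}/tags',
        params={'page_size': 1, 'ordering': 'last_updated'})
    resp.raise_for_status()
    return resp.json()['results'][0]['name']

tag = latest_tag(REPO)
answer = input(f'New image {REPO}:{tag} is available. Pull and test now? [y/N] ')
if answer.strip().lower() == 'y':
    client = docker.from_env()
    client.images.pull(REPO, tag=tag)
    print('Pulled. Run the on-site verification before switching over.')
```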
Excellent, great. I guess for my final question: give us a little bit of a roadmap. You've already talked about leveraging AI in there, among various pieces. What are you looking for from Docker and the ecosystem for your solution through the rest of the year?

I would say the future is definitely in the cloud. One major direction we are pushing is to get clinical hospitals linked up and using cloud computing as a routine. In the current situation, some hospitals may be very conservative; they are afraid of security, connectivity, all kinds of issues related to the cloud. But this scenario is changing, and container technology especially contributes a lot in the cloud: it makes the whole thing so easy and so reliable. So our next push is to move a lot of the applications into the cloud, and the cloud only. The model will be, for example, that we have a new AI application that may be available only in the cloud. If a customer wants to use it, they will have to connect to the cloud, maybe send data there, and they receive the same AI apps from our Docker images running in the cloud. What we need to do is make the Docker builds even more efficient and make the computation 100% stable, so we can utilize the huge computational power in the cloud, and also the price. The key here is the price. If we have one setup in the cloud, a data center, for example (we currently maintain two data centers, one overseas and another in the United States), with fifty hospitals using it every day, then, we did the numbers, the average price comes down to a few dollars per patient. If we consider what the medical healthcare system costs, the added cost of using cloud computing can be truly trivial, but what we can offer to patients and doctors has never happened before. The computation we can bring to them is something they never saw before and never experienced. So I believe that's the future. It's not the old model, where everyone has their own physical server and maintaining it costs a lot of work; even though Docker makes the software aspects much easier, someone still needs to set up the hardware. Using the cloud will change all of that. So I think the future is definitely to fully utilize the cloud, together with this container technology.

Excellent. Well, we thank you so much. I know everyone appreciates the work your team is doing, and absolutely, if things can be done to allow scalability and lower the cost per patient, that's a huge benefit. Thank you so much for joining us.

Thank you.

All right, stay tuned for lots more coverage from theCUBE at DockerCon Live 2020. I'm Stu Miniman, and thank you for watching theCUBE.