From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual. Brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. Welcome back, I'm Stu Miniman and this is theCUBE's coverage of KubeCon, CloudNativeCon 2020 in Europe, the virtual edition. Of course, one of the things we love when we come to these conferences is to get to the actual practitioners, understanding how they're using the various technologies, especially here at the CNCF show; so many projects, lots of things changing, and we're really excited. We're going to talk about security in a slightly different way than we often do on theCUBE, so I'm happy to welcome to the program, from Sera4, Jeff Plink, who's the Vice President of Engineering and Cloud. Jeff, thanks so much for joining us. Thanks Stu, thanks for having me. All right, so I teed you up there. Give us, if you could, just a quick thumbnail on Sera4: what your company does and then your role there. Absolutely. So we're a physical hardware product addressing the telco markets, the utility space, all of those. We kind of differentiate ourselves as a Bluetooth lock for that higher-end space, the high-security market where digital encryption is really an absolute must. So we have a few products, including our physical lock here, a physical padlock, as well as door locks and controllers that all operate over the Bluetooth protocol, and that people can use simply through their mobile phones and operate at the enterprise level.
Yeah, I'm guessing it's a little bit more expensive than the padlock I have on my shed, which is getting a little rusty and needs a little work, so probably not quite what I'm looking for. But you have cloud in your title, so give us, if you could, a little bit of the underlying technology that you're responsible for, and I understand you've rolled out Kubernetes over the last couple of years; set us up with what the challenges were that you were facing before you started using that. Absolutely. So Stu, we've grown over the last five years as a company in leaps and bounds, and part of that has been the scalability concern and where we go with that, originally starting in the virtual machine space with some small customers in telco as we built up the locks. Eventually we knew that scalability was really a concern for us and we needed to address it pretty quickly as we started to build out our data center space. And in this market, it's a bit different than your shed locks. Bluetooth locks are kind of everywhere now; they're in logistics, they're on your home, and you actually see a lot of compromises these days happening on those kinds of locks, the home security locks. They're not built for the rattling and banging and all those kinds of pieces that you would expect in a telco or utility market, or the nuclear space. One, you really don't want a lock that, when it's dropped or banged about, immediately begins to kind of fall apart in your hands, and two, you're going to expect a different type of security, much like you'd see in your SSH certificates, a digital key certificate that arrives there.
So in our space, as we grew up through that piece, Kubernetes became a pretty big player for us to try to deal with some of the scale, and also to try to deal with some of the sovereignty pieces you don't see in your shed locks: data sovereignty, meaning keeping that data in your country, or as close to you as possible, with the telco, with the utility, in-country or in-continent with you as well. That was a big challenge for us right off the bat. Yeah, no, Jeff, absolutely. I have some background in some of the telco space. Obviously, there are very rigorous certifications, there are lots of environments that you need to fit into. I want to poke at a word that you mentioned: scale. So scale means lots of things to lots of different people. This year at the KubeCon, CloudNativeCon show, one of the scale pieces we're talking about is edge, just getting to lots of different locations, as opposed to when people first thought about, you know, scale of containers and the like; it was, oh, do I need to be like Google? Do I have to have that much scale? Of course, there is only one Google, and there's only a handful of companies that need that kind of scale. What was it from your standpoint? Is it the latency of all of these devices? Is it just the pure number of devices, the number of locations? What was the scale limiting factor that you were seeing? It's a bit of both, in two things. One, it was scale as we brought new customers on. There were extra databases, there were extra identity services. The more locks we sold, and the more telcos we sold to, suddenly what we started finding is that we needed all these virtual machines and services in some way to tie together. And the natural piece to that is to start to build shared services, like SSO. Single Sign-On was a huge driver for us: how do we unite these spaces where they may have maintenance technicians that work for two different telcos?
Hey, tower one is down. Could you please use this padlock on this gate and then this padlock on this cabinet in order to fix it? So that kind of scale immediately showed us. We started to see email addresses or other identifiers in two different places and say, well, they might need access into this carrier's site because some other carrier has equipment on that site as well. So the scale started to pick up pretty quickly, as well as this space where they started to unite together in a way that we said, well, we kind of have to scale two parts. Not only the individual databases and servers and identity and the storage of their web service data, but all of a sudden we had to unite them in a way that was GDPR compliant, and compliant with a bunch of other regulations, to say, how do we get these pieces together? So that's where we started to tick the boxes to say, in North America, in Latin America, South America, we need centralized services, but we need some central tie-back mechanism as well to start to deal with scale. And the scale came when it went from let's sell a thousand locks to, oh, by the way, the carrier wants 8,000 locks in the coming months. That's a real scalability concern right off the bat, especially when you start to think of all the people going along with those locks in that space as well. So that's the first piece we had to address, and single sign-on was at the head of that for us. Excellent. Well, today, when we talk about how do I do container orchestration, Kubernetes of course is the first word that comes to mind. Can you bring us back, though: how did you end up with Kubernetes? Were there other solutions you looked at when you made your decision? What were your key criteria? How did you choose what partners and vendors you ended up working with? So the first piece is that we all had a lot of VM backgrounds. We all had some good DevOps backgrounds as well, but nobody was yet into the container space heavily.
And so what we looked at originally was Docker Swarm. It became our desktop, our daily, our working environment as we knew we were working towards microservices. But then immediately this problem emerged that reminded me of, say, 10, 15 years ago: HD DVD versus Blu-ray. And I thought about it as simply as that. These were two fantastic technologies kind of competing in this space. Docker Compose was huge, Docker Hub was growing and growing, and we kind of said, you've got to pick a bucket and go with it, and figure out who has the best backing between them. From a security policy, from a usage and size and scalability perspective, we knew we would scale this pretty quickly. So we started to look at the DevOps and the tooling set to say, scale up by one or scale up by 10: is it doable? Infrastructure as code as well: which could I codify against the best? And as we started looking at those, Kubernetes took the lead pretty quickly for us. And actually the first piece of tooling that we looked at was Rancher. I said, well, there's a lot to learn in the Kubernetes space. And the Rancher team, they were growing like crazy, and they were actually really, really good inside of some of their Slack channels and some of their groups. They said, reach out, we'll help you, even as a free tier, and we'll grow our trust in you and vice versa and develop that relationship. And so that was our first major relationship, with Rancher. And that grew our love for Kubernetes, because it took away that first edge of, what am I staring at here? It looks like Docker Swarm. They put a UI on it, they put some lipstick on it, and really helped us get through that first hurdle a couple of years ago. Well, it's a common pattern that we see in this ecosystem with open source: you try it, you get comfortable with it, you get engaged, and then when it makes sense to roll into production or really start scaling out, that's when you can really formalize those relationships.
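The "scale up by one or scale up by 10, is it doable?" test Jeff describes is what Kubernetes answers with declarative replica counts. As a hedged illustration only (these names and the image are invented, not Sera4's actual manifests), a service's scale becomes a one-line change in version-controlled config:

```yaml
# Hypothetical sketch: declarative scaling in Kubernetes.
# Service name and image are invented for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: identity-service
spec:
  replicas: 10          # "scale up by one or scale up by 10" is a one-line edit
  selector:
    matchLabels:
      app: identity-service
  template:
    metadata:
      labels:
        app: identity-service
    spec:
      containers:
        - name: identity-service
          image: registry.example.com/identity-service:1.0.0
          ports:
            - containerPort: 8080
```

Because the manifest is plain text, it can be codified and reviewed like any other infrastructure as code, which is the evaluation criterion Jeff mentions.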
Bring us through the project, if you will. How many applications were you starting with? What was the timeline? How many people were involved? Were there training or organizational changes? Bring us through the first bits of the project. Sure, absolutely. So like anything, it was a series of VMs. We had some VMs that were load balanced, with databases in the back, protected. We had some manual firewalls through our cloud provider as well. But that was really kind of the edge of it. You had your web services, and you had your database services in another tier segregated by firewalls, but we were operating out of a single DC. As we started to expand into Europe from the North America, Latin America base, as well as Africa, we said this has got to stop. We had a lot of VMs, a lot of machines, and so a parallel effort went under way to actually develop some of the new microservices. At first glance that was our proxies, our ingresses, our gateways, and then our identity service, and SSO would be that unifying factor. We honestly knew that moving to Kubernetes in small steps probably wasn't going to be an easy task for us, but moving the majority of services over to Kubernetes and then leaving some legacy ones in VMs was definitely the right approach for us, because now we're dealing with ingressing from around the world, now we're dealing with security of the main core stacks. That was our hardcore focus: secure these stacks up front, ingress from everywhere in the world through an anycast technology, and then the gateways will handle that and proxy across the globe, and we'll build up from there, exactly as we did today. So that was the key for us: we did develop our microservices, our identity services for SSO, our gateways, and then our web services, all developed in containers to start.
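The pattern described here, one global entry point (for example an anycast address on the load balancer) with gateways fanning traffic out to regional services, can be sketched with a standard Kubernetes Ingress. This is an illustrative assumption, not Sera4's configuration; the hostname and service names are invented:

```yaml
# Hedged sketch of the described pattern: global ingress in front of
# a regional gateway service. Host and service names are hypothetical.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: global-gateway
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"   # assumes the NGINX ingress controller
spec:
  rules:
    - host: api.example.com      # resolves to an anycast-advertised address
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: regional-gateway
                port:
                  number: 443
```

With anycast, the same address is advertised from multiple regions, so clients reach the nearest instance of this ingress, and the regional gateway proxies onward, the "ingress from everywhere, proxy across the globe" approach Jeff outlines.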
And then we started looking at complementary pieces like email notification mechanisms, text notification, any of those that could be containerized later but for now were just dealt with as single one-off RESTful services; those were moved over at a later date. All right, so Jeff, yeah, absolutely. I want to understand: okay, we went through all this technology, we did all these various pieces. What does this mean to your business? So you talked about, I need to roll out 8,000 devices. Is that happening faster? What's the actual business impact of this technology that you've rolled out? So here's the key part, and here's the differentiator for us: we have two major areas we differentiate in. The first one is asymmetric cryptography. We do own the patents for that one, so we know our communication is secure, even when relaying over Bluetooth. So that's the biggest and foremost one: how do we communicate with the locks, and how do we ensure we can all the time? Two is offline access. Some of the major players don't have offline access, which means you can download your keys and assign your keys, go off-site, to a nuclear bunker, wherever it may be, and we communicate directly with the lock itself. Our core technology is in the embedded controllers in the lock. So that's our key piece, and then the lock is a housing around it, the mechanical mechanism to it all. So knowing that we had offline technology really nailed down allowed us to do what many call the blue-green approach, which is: we're going down for four hours, heads up everybody globally. We really needed to make this transition, but the transition was easy to make with our players. These are enterprise spaces, and when we say we're moving to Kubernetes, it's something we wear as kind of a badge of honor with them, and they're saying, these guys really know what they're doing.
They've got Kubernetes on the back end. Some we needed to explain it to, but as soon as they started to hear the words Docker and Kubernetes, they just said, wow, these guys are serious about enterprise. They're serious about addressing it, and not only that, they're at the forefront of other technologies. I think that's part of our security plan. We use asymmetric encryption; we don't use the Bluetooth security protocol, so every time that's compromised, we're not compromised, and it's a badge of honor we wear much alongside the Kubernetes piece. All right, Jeff, a thing that we're hearing from a lot of companies out there is that transition that you're going through, from VMs to containerization. I heard you say that you've got a DevOps practice in there. There are some skillset challenges, there are some training pieces. There's often maybe a bump or two in the road. I'm sure your project went completely smoothly, but what can you share about the personnel skillsets, any lessons learned along the way that might help others? There were a ton. Rancher took that first edge off for us: kubectl, get things up, get things going, RKE in the Rancher space, the Rancher Kubernetes Engine. They were that first piece to say, how do I get this engine up and going? And then I'll work back, take away some of the UI elements, and do it myself. From scheduling and making sure the nodes came up, to understanding a Deployment versus a DaemonSet, that first UI as we moved from a Docker Swarm environment to the Rancher environment was really key for us to say, I know what these volumes are, I know the networking, I know these pieces, but I don't know how to put CoreDNS in and start to get them to connect, all of those aspects. And so that's where the UI part really took over. We had guys that were good on DevOps. We had guys that were like, hey, how do I hook it up to a back end?
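The Deployment-versus-DaemonSet distinction Jeff mentions is worth making concrete: a Deployment runs a chosen number of replicas wherever the scheduler fits them, while a DaemonSet runs one pod on every matching node, which is typical for node-level agents such as log shippers. A minimal, purely illustrative sketch (names and image are invented):

```yaml
# Hypothetical DaemonSet: one pod per node, e.g. a log-collection agent.
# Contrast with a Deployment, which sets an explicit replica count instead.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-agent
spec:
  selector:
    matchLabels:
      app: node-log-agent
  template:
    metadata:
      labels:
        app: node-log-agent
    spec:
      containers:
        - name: agent
          image: registry.example.com/log-agent:0.3.0
```

Note there is no `replicas` field: the number of pods follows the number of nodes automatically as the cluster grows or shrinks.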
And when you have those UI clicks, like your pod security policy on or off, it's incredible. You turn it on, fine. Turn on the pod security policy, and then from there we'll either use the UI or we'll go deeper as we get the skill sets to do that. So it gave us some really good assurances right off the bat. There were some technologies we really had to learn fast. We had to learn the kubectl command line, we had to learn Helm, and new infrastructure pieces with Terraform as well. Those are kind of our back end now; those are repeatability aspects that we can get going with. So those are our cores now: it's Rancher every day, it's kubectl from our command lines, and Terraform to make sure we're doing the same thing every time. But those are all practices we cut our teeth on with Rancher. We looked at the configs that it generated and said, all right, that's actually a pretty good config; maybe there's a taint or a toleration or a tweak we could make there. But we kind of worked backwards that way, to have them give us some best practices and then verify those. So, the space you're in, you have companies that rely on what you do. Security is so important. If you talk about telecommunications, and many of the other environments, they have rigid requirements. I want to get your understanding: you're using some open source tools, you've been working with startups. One of your suppliers, Rancher, was just acquired by SUSE. How is that relationship with this ecosystem? Is there any concern from your end-user clients, or what's your own comfort level with the moves and changes that are happening? Having gone through acquisitions myself, and knowing the SUSE team pretty well, I'd say actually it's a great thing to know that the startups are funded by a great source. It's great to hear, internally and externally, that their marketing departments are growing, but you never know if a startup is growing or not.
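The pod security policy toggle mentioned earlier in this answer mapped, in Kubernetes of that era, to a PodSecurityPolicy object (PSP has since been deprecated in favor of Pod Security admission). As a hedged sketch only, not Sera4's actual policy, a restrictive PSP along the lines described might look like:

```yaml
# Illustrative PodSecurityPolicy (the pre-deprecation mechanism referenced
# in the interview). Policy name and choices are assumptions, not Sera4's.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false            # no privileged containers
  runAsUser:
    rule: MustRunAsNonRoot     # "get your environments out of root"
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                     # whitelist only benign volume types
    - configMap
    - secret
    - emptyDir
```

A PSP only takes effect once the admission controller is enabled and service accounts are granted `use` on the policy via RBAC, which is roughly what the on/off toggle in a UI wraps up for you.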
Knowing this acquisition has taken place actually gives me a lot of security. The team there was healthy, they were growing all the time, but sometimes that can just be a face on a company, and talking to the internals candidly, as they've always done with us, it's been amazing. So I think that's a great part. Knowing that there's some great open source tech, Helm, Kubernetes as well, that has great backers behind it; it's nice to see part of the ecosystem giving back as well in a healthy way, rather than a, you know, here's a $10,000 platinum sponsorship. To see them getting the backing from an open source company, I can't say enough for that. All right, Jeff, how about what's going forward for you? What projects are you looking at, or what additions to what you've already done are you looking at down the road? Absolutely. So the big thing for us is that we've expanded pretty dramatically across the world now. We've expanded into South Africa, and we've expanded into Asia as well. So managing these things remotely has been great, but we've also started to see some latencies where we're heading back to our etcd clusters; we're starting to see little cracks and pieces here in some of our QA environments. So part of this is actually why we started looking into fog and edge compute. Security is one of these games where you try to hold the security as core and as tight as you can, while trying to give people the best user experience. Especially in South Africa, rather than serving them from either Europe or Asia, we're trying to move into data centers in-region as well, to provide the sovereignty, to provide the security, but it's about latency too. When I open my phone to download my digital keys, I want that to be quick. I want the administrators to assign quickly, but also still give them that aspect to say, I can store this at the edge, I can keep it secure, and I can make sure that you still have it.
That's where it's a bit different than the standard web experience of, no problem, let's put a PNG as close as possible to you to give you that experience. We're putting digital certificates and keys as close as possible to people as well. So that's kind of our next generation of the devices as we upgrade these pieces. Yeah, there was a line that stuck with me a few years ago: if you look at edge computing, if you look at IoT, the security surface area is just expanding by orders of magnitude. So that just leaves big challenges that everyone needs to deal with. Exactly. Yep. All right, give us the final word, if you would. Final lessons learned; you're talking to your peers here in the hallways, virtually, of the show. Now that you've gone through all of this, is there anything where you'd say, boy, I wish I had known this, or I might have accelerated things, or I wish I had pulled in these people or done something a little bit differently? Yep, there are a couple of big parts, actually, right off the bat. One, we started with databases in containers. Follow the advice of everyone out there: use either managed services or standalone boxes. That was something we cut our teeth on over a period of time, and we really struggled with it. Those databases in containers really do perform as poorly as you think they might; you can't get the constraints on those guys. That's one of them. Two, we are a global company, so we operate in a lot of major geographies now, and etcd has been a big deal for us. We tried to pull our etcd clusters farther apart for better resiliency, and no matter how much we tweaked and played with that thing: keep those things in a region, keep them in separate, I guess the right word would be availability zones, keep them as redundant as possible, and protect those at all costs. As we expanded, we thought our best strategy would be some geographical distribution.
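Since the team ran Rancher's RKE, the etcd placement advice above (control plane and etcd kept within one region but spread across availability zones, with workers farther afield) can be sketched in RKE's `cluster.yml` node layout. This is a hedged illustration with placeholder addresses, not Sera4's topology:

```yaml
# Illustrative RKE cluster.yml fragment: three etcd/control-plane nodes in
# one region, each in a different availability zone; workers may be remote.
# Addresses are placeholders.
nodes:
  - address: 10.0.1.10    # region A, AZ 1
    user: rke
    role: [controlplane, etcd]
  - address: 10.0.2.10    # region A, AZ 2
    user: rke
    role: [controlplane, etcd]
  - address: 10.0.3.10    # region A, AZ 3
    user: rke
    role: [worker]
```

The reasoning is that etcd is a quorum-based (Raft) store: every write waits on a majority of members, so cross-region round trips between etcd nodes slow the whole control plane, exactly the "little cracks" under latency that Jeff describes.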
The layout that you have in your Kubernetes cluster as you go global, hub-and-spoke versus kind of centralized clusters and pods and pieces like that: look it over with an expert in Kubernetes, talk to them, talk about latencies, and measure that stuff regularly. That is stuff that kind of tore us apart early in proof of concept, and something we had to learn from very quickly. Whether we hub-and-spoke with centralized etcd and control planes and then workers abroad, or whether we could spread the etcd and control planes a little more, that's a strategy that needs to be played with if you're not just in North America, if you're North America, Europe, Asia. Those are my two biggest pieces, because those were our big performance killers, as well as discovering PSPs, pod security policies, early. Get those in, lock it down, get your environments out of root, off of port 80, things like that. On the security space, those are just your basic housecleaning items to make sure that your latency is low, your performance is high, and your security is as tight as you can make it. Wonderful. Well, Jeff, thank you so much for sharing the Sera4 story. Congratulations to you and your team, and we wish you the best of luck going forward with your initiatives. Absolutely, thanks so much, Stu. All right, I'm Stu Miniman, and thank you for watching theCUBE.