Swapnil Bhartiya: Hi, this is Swapnil Bhartiya, and welcome to another episode of TFiR Let's Talk. Today we have with us, once again, Julian Fischer, CEO of anynines. Julian, it's great to have you on the show.

Julian Fischer: Great to be here.

Swapnil Bhartiya: There is so much to talk about. For the last couple of months we have been running different series: one was on cost efficiency, and this month we are running a series on security and compliance. Cost efficiency is becoming a big topic, as we talked about in our previous discussion as well. So, looking at anynines and the customers you serve, I want to hear from you: how is anynines helping customers become more cost efficient, while at the same time making sure that their workloads and application environments are secure and compliant?

Julian Fischer: The cost efficiency aspect is mostly about operational efficiencies and the general pricing model that we offer. We often have migration cases where customers save up to 50% in total cost of ownership by adopting our platform technologies compared to alternative offerings. So it's not that we have to do much to achieve those cost efficiencies; they are basically built into our pricing model.

When it comes to security, the experience of working with large enterprises for a while gives you a good feeling for what their security officers are looking for when they evaluate cloud software automation. For example, the way we set up access to the environments, the way we set up VPNs, the way networking in general is set up and secured: those are things you have to get right when talking to clients like that. It basically comes from experience.

When you look at the connection between application runtimes, whether that is Cloud Foundry or Kubernetes, and data service automation, you obviously have TCP connections there. So you'd be looking at questions such as: how do you protect data in transit?
How do you protect data at rest? So encryption is obviously a very important thing. These features are present, and we know how to apply them; that helps get a lot of these security questions answered.

Now, in general, a lot of companies out there follow a software-as-a-service pattern where they basically rent out resources they host in their own, let's say, AWS account, and other people connect using public endpoints. While we do that with paas.anynines.com, in the enterprise part of anynines, which is by far the larger one, we allow the software to be operated from within the customer's own Amazon account or from on-premises data centers. From a security perspective that gives them a huge benefit: they have control over their own software operations, they can run it on infrastructure whose account they control, and therefore they also control who is allowed to access it.

Swapnil Bhartiya: One more topic that we covered was DevOps versus platform engineering, or, as people say, "DevOps is dead." And we talked about developer experience, and how Cloud Foundry is bringing that developer experience to the Kubernetes world as well. So can you talk a bit about the role Cloud Foundry and anynines are playing in helping customers with that transition, with this evolution of cultural movements like DevOps and platform engineering?

Julian Fischer: You know, the software industry is somehow similar to the fashion industry: every few years we find new terms and then create a lot of buzz around them. I mean, we talk about cloud automation and software platform operations; back in the day we called it hosting. You can't say that today, people don't want to hear it, because we don't do hosting anymore, do we?

In general, there's one thing I have to say about when declarative automation technologies entered the scene, for example BOSH for virtual machines or Kubernetes for containers.
That was a big game changer because of the predictability these technologies allow. For data service automation, you need something that allows you to update a thousand Postgres instances if there's something like Shellshock or Log4Shell. With the anynines data services you have multiple data services, Redis, RabbitMQ and so on, so you have many thousands of data service instances running on a multitude of VMs, and you need to update them fast and reliably. Now, how is this possible? How can you have one single automation code base and make automation work at that scale?

The term DevOps originates, I think, from a Ruby-flavored conference in Belgium; the year was around 2009, if I'm not mistaken. Back in 2009 we used very different tools, for example Opscode Chef or Puppet, a more imperative approach to automation. We did that as a company, anynines was named differently in those days, and we created a lot of virtualized clusters based on virtual machines, with Chef as our common strategy. So we had database automation back in the day.

So what's the difference to today? Back then, an operations group of 5 to 10 people would be able to run hundreds of virtual machines, where now the same team could run thousands, if not 10,000, of virtual machines. Automation technologies have obviously progressed, and to me the declarative automation technologies, together with changes in the data centers, having a programmable data center, ephemeral virtual machines and persistent disks, lifted DevOps, or operations, to the next level. So in that regard it's fair to change the wording and say DevOps is dead, and whatever the new term is is the new term; but basically it's solving the same problem, just with different means.
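As an illustration of the declarative idea described here (all names are hypothetical, not anynines' actual code): instead of imperatively scripting each change, as in the Chef/Puppet era, you declare a desired state and an idempotent reconciler converges every instance toward it. That idempotence is what makes a fleet-wide update, such as patching thousands of instances after a Shellshock- or Log4Shell-class vulnerability, tractable with a single code base.

```python
from dataclasses import dataclass


@dataclass
class Instance:
    """One managed data service instance in the fleet (hypothetical model)."""
    name: str
    version: str


def reconcile(fleet: list[Instance], desired_version: str) -> list[str]:
    """Converge every instance toward the declared version.

    Idempotent: running it again after convergence produces no actions,
    so the same code base handles one instance or ten thousand.
    """
    actions = []
    for inst in fleet:
        if inst.version != desired_version:
            actions.append(f"update {inst.name}: {inst.version} -> {desired_version}")
            inst.version = desired_version
    return actions


fleet = [
    Instance("pg-0001", "14.2"),
    Instance("pg-0002", "14.5"),
    Instance("pg-0003", "14.2"),
]
print(reconcile(fleet, "14.5"))  # two instances need patching
print(reconcile(fleet, "14.5"))  # second run: already converged, nothing to do
```

The same loop scales to any fleet size because the operator only ever states the goal, never the individual steps per instance.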
So I wouldn't put too much emphasis on the wording, but I would say that, behind the scenes, that is the progress causing people to say things like "DevOps is dead." You don't want to run a thousand database instances with Chef; at least we don't.

Swapnil Bhartiya: Very well said, and I love your practical approach. Now let's talk about some upcoming events. Of course, KubeCon is coming up, and we will be at the Postgres conference as well. Let's look at KubeCon for a bit. What do you think is going to be the topic, or the focus, this year, especially from the perspective of anynines' presence and participation during the event?

Julian Fischer: We spent the last one or two years writing operators. The general question is: how does data service automation change when Kubernetes is the leading paradigm? What's the future of data service automation if you follow that assumption? We started to build our own operators, so the Postgres operator is already there and about to be generally announced, but that's only half of what you need. In our analysis of our larger environments, we are not talking about a few Postgres instances, a few Redis instances, a few RabbitMQs; we are talking about, as I said, thousands. This leads to the question of whether, for example, operators should be co-located with applications. In larger environments we don't think that is sensible, because of situations like Shellshock, where you don't want to wait for dozens of application developer groups to respond and redeploy their data service instances. Centralized data service automation on Kubernetes then requires some integration with the application clusters: having remote control facilities, ensuring that there's network-to-network connectivity between those clusters, ensuring security, authentication, authorization, all that.
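The centralized pattern described here is typically driven by declarative custom resources that a fleet operator, running in its own cluster, reconciles against remote application clusters. A purely hypothetical sketch (field names and API group invented for illustration, not anynines' actual API) might look like this:

```yaml
# Hypothetical custom resource for a centrally operated Postgres instance.
# The operator lives in a dedicated data-services cluster and reaches into
# the application cluster named in targetCluster.
apiVersion: data-services.example.com/v1
kind: PostgresInstance
metadata:
  name: orders-db
  namespace: data-services      # central namespace, not co-located with the app
spec:
  version: "14"
  replicas: 3
  targetCluster: app-cluster-eu1  # remote application cluster to serve
  tls:
    enabled: true               # protect data in transit between clusters
```

The point of keeping such resources central is the one made above: a fleet-wide patch becomes one change to the operator, not a redeploy coordinated across dozens of application teams.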
Those are research areas we are currently focusing on quite a lot, and around KubeCon there will be several demos of products we are introducing. I guess changes in that area are interesting to observe.

Swapnil Bhartiya: You will also be attending the Postgres conference, and you'll be speaking there. Talk a bit about your appearance there: what are you going to talk about?

Julian Fischer: My talk will be a recap of 10 years of automating Postgres. As mentioned earlier, there was a transition: back in the day we started with shell scripts, then technologies like Bcfg2, later Opscode Chef and Puppet, then BOSH, and at some point Kubernetes. So the talk goes through those last 10 years: what were the challenges, what was the state of Postgres back then, what are the challenges when automating Postgres, and also a rough indication of, given a number of people, how many Postgres instances you could automate and maintain with a particular technology stack at the time.

For those people who've been around for the last 10 years, that may not be so exciting, other than being reminded of the evolution and progress we've made in that area. But I think especially for beginners and intermediate engineers who have not been there for 10 years, it could be very valuable to see how changes in the data center, changes in virtualization, and the introduction of new automation technologies and new algorithms together change the context and therefore revolutionize software operations over and over. We're using Postgres as a good example because Postgres itself also changed and became more cloud friendly over time. It's going to be a bit of history, which might be interesting to anybody dealing with database automation these days.
Swapnil Bhartiya: It's like platform engineering and DevOps: we have seen the evolution, and people get overwhelmed with things like Kubernetes. A lot of big companies do understand the terms, but a lot of newer companies get overwhelmed; they say, hey, we need a Kubernetes strategy, we need a platform engineering strategy. Is your advice that when companies look at a term, they should not look at what technologies are there but at what they actually need? What is your advice on how companies should approach these changes, whether cultural or technological?

Julian Fischer: Well, first of all, some people have seen us as a Cloud Foundry company. I've never seen us as a Cloud Foundry company; I've always seen us as a cloud automation company, a software operations automation company. To me, technology comes second. The first thing is always to understand the mission, and the mission is full lifecycle automation. You want to reduce the human factor as much as possible, because human resources are scarce: they are hard to recruit, hard to train, and hard to keep happy. Therefore software teams are hard to scale, and they become a limiting factor in producing software and thus producing company value. The whole point we need to focus on in all these technology discussions is to understand how a particular technology provides more efficiency, solving things in a more efficient way.

In the Kubernetes ecosystem, as we've looked at in several other videos, we've seen a contrast to Cloud Foundry. With Cloud Foundry, the central idea was: let's have a central platform as a service, with established central user management, tenant management, a marketplace to integrate all the services that maintain state, and a wonderful application runtime that's agnostic to frameworks and programming languages. With the rise of Kubernetes, a different approach basically succeeded: we are not looking at something that will solve the application development problem for everybody as a very opinionated stack; instead, here's the framework to build your own platform, and around it grows an ecosystem of tiny bits and pieces that you'll have to assemble into your own platform.

Now, I'm really not sure whether this is a good thing or a bad thing, because now every customer I look at needs to build their own Cloud Foundry out of Kubernetes building blocks, and most of the companies I see are absolutely overwhelmed by that challenge. In a few years they might be looking at very hard integration projects to get all those Kubernetes-based workloads managed somehow, in order to reach a reasonable operational efficiency. Some companies do very well, and there will be more products giving the opportunity to have predefined software stacks, but they will not be for everybody; as soon as you start building opinionated stacks, that's the nature of being opinionated.

So I think the challenge of our time is, first of all, to grow the Kubernetes ecosystem by introducing the components that are still missing, and there are still components missing. We see that, for example, when looking at data service automation at scale on Kubernetes; it's still a very complicated matter. In a lot of situations you're looking at a Kubernetes cluster and eventually using Kubernetes to do on-demand provisioning of virtual machines, because if you need to isolate a service instance, you need to provide Kubernetes nodes that are isolated. This in turn requires you to have control over the Kubernetes cluster as well as the nodes you provision, ideally with a similar language, let's say a Kubernetes-API-based approach, so that you can automate those things together. I think that's generally a huge challenge out there.

Swapnil Bhartiya: Julian, thank you so much for taking the time today to talk about this topic, and I look forward to talking to you again soon.

Julian Fischer: Thank you. Thank you for having me again; it's always a pleasure talking to you. See you next time.