The end developer at the end of the day who's doing their work needs an environment to be able to make sure what they've built is doing what it needs to do, that it solves whatever problem they're trying to solve, and that it works.

This is your host, Swapnil Bhartiya, and we are here at KubeCon in Chicago. Today we have with us Tommy McClung, co-founder and CEO of Release. Tommy, it's great to have you on the show.

Great to be here, and thank you for having me.

It's my pleasure. First of all, the name of the company is interesting: Release. And you are a co-founder, so I would like to know a bit about its history, which is not very long, because you said it started in 2019. What was the problem that you saw in 2019, that nobody else saw, that needed to be solved and led to the creation of the company?

I was the CTO of a public company by the name of TrueCar, with 300-ish developers and technologists in the group that worked for me. We had to go through a big modernization effort, and one of the biggest challenges for that organization was making those developers productive, and environments were a big bottleneck. They had a few staging environments, and it really slowed down our ability to ship, because everything would bottleneck there. As we were doing that modernization effort, I looked in the market to see if anything existed to help with this, and at the time there really wasn't anything. So my co-founders and I saw that opportunity, talked to a lot of potential customers, VPs of engineering and CTOs, and found this problem exists everywhere. So we jumped into it. That was how we started, and the problem we were solving.

When we look at environments, there are a lot of cultural nuances there. We talk about DevOps, we talk about platform engineering. Who is Release targeting?
The end developer at the end of the day who's doing their work needs an environment to be able to make sure what they've built is doing what it needs to do, that it solves whatever problem they're trying to solve, and that it works, with quality. So the end consumers of these environments are developers in their SDLC. I might be developing locally, and I need my local setup to look like production, because I can only see a bug that happens in production, and I need to see it now. But I can't go test that on the production environment, so I need that capability close to me. At the end of the day, everything we do is for developers, to streamline their ability to produce and get their ideas released to the world as fast as they can.

The reason I asked the question is that for a long time, we almost forgot about developers. We started talking about DevOps, and SREs, and platform engineering. But recently we have started going back to talking about developers. We have started talking about developer experience. So when we look at environments, how do you make developers' lives easier so that they don't get overwhelmed with all those nuances? Talk about the technological and cultural shifts that are happening to make it easy.

The answer is that it just works on their behalf. The goal in our design is that developers should stay within their flow. If they're doing Git flow, for instance, and they open a pull request, they shouldn't have to go ask DevOps to build them a test environment or get access to one. It should just appear. So our design principle is that the developer should stay in the tools that they love. It should work within the workflows they are currently using, and it should be seamless. They shouldn't have to know the details of, you know, I'm using EKS with these kinds of instance types, with this much memory allocated.
They should be able to write their code, and the infrastructure that they need to get their job done should manifest for them right within the workflows that they're in. A big part of this is trying to keep developers developing and away from what my co-founder calls toil. Toil is work that is non-value-add: the setup, getting your environment to work with the right settings. All of that is not value-add. So for us, the more we can keep developers focused on solving customer problems seamlessly, and out of the weeds of the toil, the better off we are.

So what are we offering them? Environments as code? Environments as a service? What are we doing to help them?

We call it environments as a service, but environments as code is a way to describe it too. Ultimately, if you look at an environment, an environment is the settings, the infrastructure, the data, the services, everything that is needed to have your application run. That is abstracted into what we call an application template, or a blueprint, and that blueprint is now the definition of the environment. So if I click a button, that blueprint is consumed by Release, and it manifests a version of that environment. The developer gets a full-stack environment of everything needed to run their application on the branch they're working on. Their code is loaded into it, along with the data that they need; production data, or sanitized production data, is loaded into that environment. And when they're done with their work, it gets torn down, so they save money and don't pay for those resources when the developer is not actually using them. The environments can be used for development, so I can develop in these environments. I can use them to preview my changes with a QA person or a product manager. I can run my automated tests against them.
Within the SDLC, our vision is that you should have all the resources you need to get your job done as quickly as possible. Typically, what we see with companies is that they have a staging environment that all their developers share, which is just a huge bottleneck. This parallelizes that process.

Talk a bit about some of the core benefits of environments as a service, or environments as code, whichever term we use.

Well, the biggest benefit is velocity. You're going to produce software faster, and it's going to be the highest quality, because you know it is a replica of what it's going to look like in production. Other benefits are around developer experience. You get an experience much like one of the big technology companies, where developers have a CLI, they can get to those environments, they can console in, they can terminal in. The developer experience that we deliver is awesome, which, if you're a CTO or VP of engineering, is something you care a lot about: hey, we have a great, streamlined development process. Great developers are going to want to work for you when you have that. So the biggest benefit is obviously velocity, quality of your code, and consistency. And then there are a lot of tangible benefits you get just in developer happiness and the experience of having something like this within your product or your ecosystem.

And it's not one developer, it's developers, it's teams. So when we look at sharing an ephemeral environment, I want to talk about...

That's a good point too, yeah. Collaboration. You might be working on a project with five or six developers, and you want one integration point for all of that. You're all working on the same branch, you use a pull request, everybody works on that branch, and you can see that code manifest itself in a very collaborative, multi-engineer sort of way.
These environments are a meeting point for engineers to get their work done and see it manifest live across that entire group.

There's one more problem, which is more or less tribal knowledge, or technical debt. If something is running on your own laptop, when you move on, you move on with your laptop and your environment goes with you. How does environments as a service address that problem, so you don't have to worry about tribal knowledge or technical debt?

Yeah, it's consistency, and like you said, environments as code. You check in the configuration of these environments, so that when a developer leaves, the knowledge doesn't leave with them. It's in source control. The next developer who comes along will pick up right where they left off.

How does it also address the problem of burnout? Developers have so much on their plate. So that they are not stressed during Christmas or the holiday season because everything is on their own laptop. I want to talk a bit about the collaboration aspect of it, so that not every member is stressing out.

Yeah, I mean, I think it removes that toil, like we talked about before.

I want to talk about toil not from an individual developer's perspective, but, once again, when they're collaborating, somebody is sharing some of the burden.

Right. Well, again, it's that meeting point where, if you're working on your laptop and I'm dependent on you and I can't get to it because you're not there, you're blocking me, right? And so, removing those blockers, 99% of what we're focused on at Release is removing blockers. What are the blockers that stop developers from being productive? When anything is locked in one person's hands and they're the only one who can solve that problem, you have a problem. By sharing environments, and having that reproducibility, you're never leaving one developer holding too much that everyone else needs.
It's being shared amongst the group. So environments as a service is also democratizing that, so that everybody has what they need and can share it when they need to. And you can work with a global team as well; it doesn't matter where anyone is. We're a fully remote company, so we use Release to build Release. And the number of times that we have said, hey, go check out my environment, this is what I'm working on, tell me what you think. We send them to customers. It even creates collaboration possibilities early in the dev cycle with customers that you normally couldn't have. How would you, you know, open up your laptop to the Internet and share it with a customer? But using Release, you can say, hey, I built this feature for you, what do you think, before it makes its way all the way out to production. So you're moving all of that left. It's another big benefit you get from using environments as a service.

Can you also talk about what your actual offering looks like?

Yeah. The offering that we have is environments as a service. You use Release to create environments on demand. The developer, in their process of developing, will get an environment with every pull request. And these, again, are full replica environments of what you have in production. You can use remote development environments, so if you want to code in them, you can. You can have staging environments, so if you want to run testing against those, you can: ephemeral environments, preview environments. Our customers will even use us to manage their production environments as well. So it's the orchestration and creation of those. And those environments run within your cloud account. They're not running in a Release-hosted environment. They actually run within your cloud account, under your security policies. So they're as close to production replicas as you can possibly get, and customers have full control of what that is.
Yeah, full control over what's in those services, what they use in AWS. We leverage open standards. So if you have Terraform, Helm, Docker Compose files, Dockerfiles, that goes into our blueprint, and then we're using that to recreate those environments over and over again.

Earlier, you were talking about how, when you were looking at solving the problem, you talked to a lot of CTOs at other companies as well. Talk a bit about, naming names depending on how comfortable you are, the use cases and kinds of companies that are already leveraging Release.

We have companies using us all the way from startups, which use us kind of like a better Heroku, to large companies with three, four, five hundred engineers that are using us to solve those bottlenecks. They're across all industries, and the commonality amongst them is generally multiple software engineering teams, where collaboration is important to them and bottlenecks are slowing them down. Generally, they're all building in-house software, right? They have software engineering teams. They're running Agile. They're leveraging Kubernetes. They're on the cloud. We don't specifically focus on one industry, because this is across the board: if you're a software technology company, these are problems that you have. The best companies for us are larger, because that's when the complexity causes slowdown, and that's where we've really focused our energy: larger, more complex, distributed applications. We really look at it like we can do this for virtually any size company that has a large development organization.

What kind of adoption have you seen of Release?

It's been awesome. Over the years, hundreds of companies have been using us, and this problem is literally ubiquitous.
Our biggest competitor is somebody building it in-house: a platform engineering team or a DevOps team that's trying to do it themselves. So it's literally applicable to almost every software engineering organization.

This may be out of the scope of Release, but I do want to ask, because we are here at KubeCon. A couple of things come up when we look at Kubernetes. One is complexity, Kubernetes complexity. And the second is, of course, cloud cost. How do you look at these two problems? Because sometimes we don't realize it, but in an indirect way, Release affects these two issues as well. Talk about how you look at these two challenges, and whether you see Release helping with them.

Yeah. On the Kubernetes front, the core of what we've built is on Kubernetes. Every environment comes up in a namespace within a Kubernetes cluster. And a lot of companies that started with us had zero experience with Kubernetes, none. Like, they didn't know what a Kubernetes manifest file was, didn't know how to size a cluster, didn't know how many nodes to put in it. What we do is a managed Kubernetes offering: we're basically managing an EKS or a GKE cluster underneath the covers. You come with your code, and all of the manifests needed to run your application are automatically generated by us. So if you're getting into Kubernetes and you don't know it, you can start with us. And then, as you mature, you can actually take over the management of those clusters yourself, get to them directly, and use whatever tooling you want to manage and monitor them. But if you're just getting going, or you don't want to take on that burden, we can manage the whole side of it.

And then on the cost side, one of the benefits of using ephemeral environments is that they're not there all the time.
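As an aside, the namespace-per-environment model just described can be sketched in a few lines of Python. The naming convention and label here are invented; the sanitization step, though, reflects the real Kubernetes requirement that namespace names be DNS-1123 labels (lowercase alphanumerics and hyphens, at most 63 characters).

```python
import re

def namespace_name(app, pr_number):
    """Derive a valid Kubernetes namespace name for a per-PR environment.
    The app/PR naming convention is hypothetical; the character rules are
    Kubernetes' DNS-1123 label requirements."""
    name = f"{app}-pr-{pr_number}".lower()
    name = re.sub(r"[^a-z0-9-]", "-", name)   # replace illegal characters
    return name[:63].strip("-")               # max 63 chars, no edge hyphens

def namespace_manifest(app, pr_number):
    """Build a Namespace manifest labeled as ephemeral, so cleanup tooling
    can find and delete it when the pull request closes (label key is a
    made-up example)."""
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "name": namespace_name(app, pr_number),
            "labels": {"release/ephemeral": "true"},
        },
    }

print(namespace_manifest("Storefront_API", 42)["metadata"]["name"])
# storefront-api-pr-42
```

Isolating each environment in its own labeled namespace is what makes the spin-up and teardown cheap: deleting the namespace deletes everything in it.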
So, instead of having long-lived environments that are always on, when you do a pull request, maybe what you want to run is an automated test. That environment spins up, you run the test, and then it shuts off. We believe those environments should only exist when you need them, and you can save a lot of money by doing that. Not having those environments running all the time is a big part of the cost-saving story that we talk to a lot of our customers about.

And since you are running in their AWS account, does that add cost as well?

It actually doesn't. A lot of times it ends up being a reduction, because they can get rid of some of the long-lived infrastructure and only have the environments running when they're needed. There's always a trade-off when you want to go fast: it's the time, money, and effort that you put into it, right? In some cases, if they get highly parallelized, let's say you had one staging environment and now you have 100, it may cost more, but you're moving fast. So there's always that trade-off, but a lot of times what we see is that the on-off, ephemeral nature of these environments ultimately ends up saving you money, because those long-lived environments are big, beefy machines that cost a lot of money, and by only using environments when you need them, you can dial that back.

How do you look at generative AI, either from a workload perspective or to power some of Release?

Yeah, we have a couple of efforts we're undertaking with AI. If you go to release.ai, that's a site we put up to talk about some of the AI initiatives that we have. We look at two very interesting areas. One is, how does generative AI make it possible to remove bottlenecks in operations for platform and DevOps engineering teams? We launched a chat interface that allows you to talk to your infrastructure. So you can literally go in and say, hey, what was my AWS bill yesterday?
And it will go retrieve that information and pull it back to you. You can ask it, what does my networking setup look like? You can set up schedules to send you information out of your infrastructure. The idea there is to reduce the bottleneck of knowledge that's generally trapped in one really smart platform engineer or DevOps engineer's head. You always have that one person on your team that you go to to answer the question of how AWS is configured. Again, shifting left: a developer should not be blocked by the one person who has the answer to that question. So our chat interface is connected to your infrastructure, and you can ask questions, query it, have a conversation with your infrastructure. That was really for us to get our arms around what's possible here and to learn what the limitations of some of the open models are. You can use it; it's free, at release.ai.

And then we also look at the infrastructure side of AI as part of our core business. As AI becomes more pervasive, the ability to run, train, and fine-tune your models, that's an environment. You're going to need environments for that, just like you need environments for doing your software development. So we have a lot of work going into, hey, instead of a Kubernetes cluster built with traditional CPUs, how do you run workloads on GPUs in a way that simplifies environment creation and teardown? And a big problem in AI and model training is the cost of those machines. You want to talk about cost savings: having the ability to only have those machines while you're running your training, spinning them up and down when necessary, with the right size for the workload you're running. We look at all of that as really interesting use cases of our core product. And that's not going to get less interesting over time. It's only going to get more interesting as companies get deeper and deeper into the AI world. So we look at both of those as really interesting.
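Both cost arguments above, ephemeral staging environments and on-demand GPU training machines, reduce to the same arithmetic. A back-of-envelope sketch, with all rates and usage figures made up for illustration:

```python
HOURS_PER_MONTH = 730  # average hours in a month

def always_on_cost(rate_per_hour, instances=1):
    """Monthly cost of instances that are never turned off."""
    return rate_per_hour * instances * HOURS_PER_MONTH

def ephemeral_cost(rate_per_hour, runs_per_month, hours_per_run):
    """Monthly cost when capacity exists only while it is being used."""
    return rate_per_hour * runs_per_month * hours_per_run

# A shared staging box left on 24/7 vs. one hundred per-PR environments
# that each live for three hours:
print(always_on_cost(0.50))           # 365.0
print(ephemeral_cost(0.50, 100, 3))   # 150.0

# The same arithmetic for a hypothetical $4/hour GPU node used for
# twenty 6-hour training runs a month, vs. leaving it running:
print(always_on_cost(4.00))           # 2920.0
print(ephemeral_cost(4.00, 20, 6))    # 480.0
```

The crossover point depends entirely on utilization: as the interview notes, a team that fans out from one staging environment to a hundred parallel ones may well pay more, trading money for speed.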
Do you see a point when those ChatGPT-like clients will talk to each other and negotiate the bill?

You know, we actually thought early on, with our efforts in AI, that you'd be able to take action with it. But I'll give you an example. One of our engineers was playing around with our chat interface, and he asked, can you create an EC2 instance? Now, that can be taken one of two ways. Like, can you? Sure, I can do it. Or, yeah, I just did it. So there are challenges in letting AI act on your behalf. Our chat interface is read-only now for exactly that reason. Those are tricky problems to solve, and when you start wanting to do things that are very specific and have to be accurate, right now you still have to have humans involved.

What are the other things that you're looking at? Of course, there are a lot of things that you cannot talk about at this point; we'll talk about them when they're ready. But just give us a teaser, a glimpse of the things that the Release team is working on releasing, no pun intended.

Well, I just talked about some of the AI things that we're working on; there's going to be more coming from there. And our core product is always evolving. Our job is to increase the surface area of the kinds of environments that we can create and manage, and that's a never-ending problem for us. So you'll see a lot of innovation on that side as well.

What role do you see culture playing here within companies, so that even when they have technology like Release, they can make the best use of it?

I think it's having an open mind to the fact that a lot of these problems don't have to be solved with in-house teams. Our biggest competitor is build versus buy. And the companies that do the best with us, or any other technology, understand that for a lot of these vendors, this is all we do. We're going to be, by far and away, more capable of solving environment problems than a two or three-person team that you have in-house.
So culturally, it's having that mindset of: what matters for me to build, what is intellectual property that is value-add and unique to me, and what should I not build? And what we tend to find is that the more advanced, faster-moving organizations can make that decision really, really well. Is managing and building my own IDP, or environments-as-a-service platform, a core competency that I have to have? Or should I be focusing that effort on, maybe I'm a healthcare company, solving my healthcare problem? So culturally, I think that openness and willingness to make the decision to focus on my core competency, versus trying to boil the ocean and do everything, is what we see the best and fastest-moving companies doing.

Tommy, thank you so much for taking the time out today to talk about Release. This is a very core problem, so I would love to talk more as you roll out more technology.

Anytime you want.

But I really appreciate your time.

Thank you. Yeah, come find us at release.com. Thanks.