Hi, this is your host, Swapnil Bhartiya, and welcome to a special Let's Talk show. We have a lot of content today. We have with us once again Kit Merker, Chief Growth Officer at Nobl9. Kit, it's great to have you on the show.

Thanks for having me, Swap.

So you folks had a great SLOconf; we recorded some interviews there as well. Talk a bit about it. First of all, I would love to just hear: what was the experience this year? What kind of speakers or attendees showed up where you thought, this is what we were expecting, and where were you not expecting something and got surprised, but in a positive way?

Sure, yeah. There were lots and lots of great surprises at SLOconf this year. This is our third year running the event. We started right in the middle of COVID, and we tried to do something different from a virtual-event perspective, which I think kind of surprised people. One is that we kind of threw out the agenda. There's no schedule, really. We have this daily standup that I host, but it's a very short time, and the rest of it is all done virtually and asynchronously, so people submit their videos and we can watch them and discuss them async.

But one of the things we did this year that was different, and a huge hit, is we ran SLOconf local events all over the world. We had 10 different cities, including New York, Chennai, Sydney, London, Dublin, Tokyo, just all over the place. Those in-person events got really, really rave reviews. They had local speakers, they had food and drinks.
They got to watch a few of the pre-recorded videos. We got really positive feedback about the local events, and people want us to do more of that. I think it was a very unique way for us to not bring everybody from all over the world to one city, but really to meet people where they were, and I think that was a cool change this year.

We had some great speakers from end practitioners, so companies like Capital One, The New York Times, Ford Motor Company: people who are adopting SLOs themselves and can share that story. Not necessarily vendors pitching their technology, but really the end-user practitioners. We also saw an increase in the executive audience this year. I think it was over 160 people who joined with VP or higher titles, which is a really nice turnout for an event where about 2,000 people showed up overall.

So my big takeaway for this year is the shift from "should I do SLOs?" to "how do I do SLOs?", and hearing from real practitioners who have seen the business benefits of SLOs in their organizations. I think that's really the key thing.

Did you folks make any announcements during this event week?

Yeah, we used the SLOconf week as our time to kind of dump a bunch of announcements, so I'll just give you the highlights real quick. We started the week with reviews of a couple of different annual reports. The State of SLOs 2023 came out, and we reviewed that with Brian Singer last week at SLOconf. We also had Paul Nashawaty from the Enterprise Strategy Group, who talked about his new report, which is all about the business impact of service level objectives.
You can find both of those on our website at nobl9.com.

We also announced a bunch of product enhancements, which are mostly focused, in the spirit of reliability engineering, on the reliability and quality of Nobl9 itself. We built a query checker, we built a metrics health notifier, basically a bunch of tools that make you resilient to upstream data issues. When you're building SLOs, you're highly dependent on the data, and you may have heard the term "garbage in, garbage out." So we're trying to help people figure out what garbage is coming into their SLOs, so they can improve their upstream monitoring and that kind of stuff.

We also announced that we're now available on Google Cloud Platform in addition to AWS, so now we're giving customers a choice of where to deploy Nobl9 itself across the clouds.

But wait, there's more. We also announced a new network, what we're calling the Nobl9 Delivery Network. We announced this with services companies that want to bring open source methodologies for SLOs, including OpenSLO, the SLO Development Lifecycle (SLODLC), and r9y.dev, which is a reliability architecture framework that Google created. We're working with companies like Cognizant and Coravans and Scepter and Teleon, these very focused professional services groups, and they are working with us now as part of this delivery network. We have a set of services, including cloud migrations, SLO bootcamps, even AI policy workshops. We have a bunch of different reliability-oriented services, and we can connect you with partners and service providers through the N9DN. That's nobl9.com/n9dn, the Nobl9 Delivery Network.

And the final, and I think the most fun, announcement is that I got to work on a little project called sloGPT, at slogpt.ai, which is basically a way for you to quickly build SLOs using generative AI, and it uses Google's new Vertex AI preview
of PaLM 2, which just came out at Google I/O, what, two weeks ago? We showed that off last week. The cool thing here is, one of the challenges, like I was saying about the garbage-in, garbage-out problem, is that people have to connect to their data sources to get SLOs. If you want to get an SLI, you've got to use Datadog or New Relic or Prometheus or whatever upstream tools. Connecting to that, finding the data, getting the query, and then pulling it into an SLO platform is a lot of work. Well, sloGPT makes it so you can use the universal API, which is a screenshot. All you do is take a screenshot of your metrics, and we have some examples on the website, so if you just want to try it out, you can do it without any setup. It analyzes the data, pulls out the SLI data, and gives you an interactive SLI. It lets you set different targets, lets you set different thresholds, and you can see a burn-down. But what's really cool, using the LLM, the large language model AI capability, is that you can actually ask it questions about the SLO. You might ask silly questions, like, write a song or write a poem about my SLO, which is kind of fun, and it'll tell you something funny. But you can also use it to generate OpenSLO YAML, for example. You can ask it for that, so you can fit it into your developer workflow for SLOs. We actually showed a demo of that at SLOconf on the last day. Peter Patak, the developer who built sloGPT along with me, we worked on it together, showed how you can fit it into an actual workflow: you have your metric in Datadog, you take a screenshot, you put it into sloGPT, it generates the YAML, then you can go into Nobl9, connect to the data source for real with a real query, add it to your GitOps workflow, and apply it using the sloctl command line. Anyway, that's a whole rundown of
the stuff that happened this week from a news perspective. Everything from AI to the delivery network to new capabilities to industry research, all of that kind of encapsulated in the SLOconf week.

Can you talk a bit about the SLO report you also mentioned? Was there anything that stood out, some major point or highlight?

Sure, yeah. The State of SLOs survey 2023: this is the second year we've run it. We could dig into the data, and I definitely encourage people to go deeper and watch Brian's overview of it from the SLOconf day one daily standup last week. But the thing that surprised me the most is that we asked a new question this year: did you increase your focus on reliability due to the pandemic? And 80% of the respondents said yes. This is something we kind of felt, but having that validation that the pandemic actually increased people's focus on software reliability, that really rang true.

The other interesting thing is we saw some of the data being very consistent with the previous year. When I talked to the research team that did it, I was a little bit depressed at first, like, oh, so nothing changed. They said, no, no, this is actually a good thing, because it validates our sample. If we see the same percentage answers year over year with different people answering the questions, it suggests that the people we're talking to are a representative sample. So we're seeing that, and I don't have all the headline numbers off the top of my head, but we're definitely seeing the impact of SLOs on the business, seeing people who are using SLOs, and that trend is continuing to increase. Again, the big surprise to me was that they really are focused on this as a result
of the pandemic and what we've seen come out of that.

You also talked about the turnout, that there were a lot of executives, which you were kind of surprised to see, and that there were a lot of users, not just vendors pitching their products and solutions. But I'm curious, especially since you mentioned some names, and of course we talked about the State of SLOs: when it comes to SLOs, how much awareness did you see was already there? Would you say we've actually moved past that phase, as we discussed earlier, that the awareness phase is gone and it's now more about actually helping people with their SLO strategies?

Yeah, so the interesting thing is, I've been in this SLO game for a few years now, and Google did a great job of advocating for service level objectives as part of the SRE methodology. But I think it was kind of seen as one part of this larger change, and you had to ask, okay, do I want to do SRE or not? What we tried to do was break out the really great parts about service level objectives. Steve McGhee from Google did a nice talk about this at SLOconf, actually. He talked about how SLOs have been around for a long time, SLAs and SLOs have been around as long as there have been services, and Google's true innovation was how to apply SLOs to distributed systems. The SLO methodology that you see is really about distributed systems. What we see is that SLOs can be used in a variety of use cases. So we've taken what Google was popularizing, and we've brought together practitioners who are finding all sorts of creative uses for SLOs that are not just about distributed systems, but are also about, say, how we measure our
inventories for our supply chain, or how client software works, or even how monolithic software works. You can measure it through this concept of services. As companies are adopting the service abstraction, we see services everywhere: software as a service, platform as a service, infrastructure as a service, microservices, right? This is the way people are thinking about the interactions of software now, so it only makes sense that you're going to define the reliability targets, the performance targets, the shape of that service, in a way that can be consumed by other people.

So, to your question: when we started this, it was about trying to get people excited by SLOs. Now, well, first of all, it's called SLOconf for a reason. People self-selected; they kind of already knew what SLOs were before they joined. I don't think many people join the conference not knowing what an SLO is, and that's sort of by design. We want to have people who are all part of this. But it has definitely gone from "what is an SLO?", "what are the basics?", "what is an error budget?", these very basic concepts that are important and counterintuitive, but basic, to "how do I convince my boss?", "how do I show the business value?", "how do I decide whether to do DIY or open source or a vendor solution?", "how do I roll this out across my organization?". And actually, it was interesting: in the exit survey we did after the event, like we always do, we asked people, what is the next step on your SLO journey?
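As a quick aside for readers who haven't met the error-budget concept mentioned above: the arithmetic behind it is small enough to sketch in a few lines of Python. This is a generic illustration, not code from Nobl9; the target and window values are made up for the example.

```python
def error_budget_minutes(target: float, window_minutes: float) -> float:
    """Minutes of 'badness' an SLO target allows over the window."""
    return (1.0 - target) * window_minutes


def budget_remaining(target: float, window_minutes: float, bad_minutes: float) -> float:
    """Fraction of the error budget left (negative means the SLO is blown)."""
    budget = error_budget_minutes(target, window_minutes)
    return (budget - bad_minutes) / budget


# A 99.9% availability target over a rolling 28-day window allows
# (1 - 0.999) * 28 * 24 * 60 = roughly 40.3 minutes of downtime.
window = 28 * 24 * 60
print(error_budget_minutes(0.999, window))    # ~40.32 minutes
print(budget_remaining(0.999, window, 10.0))  # ~0.75 of the budget left
```

The point of the budget framing is that it turns "how reliable are we?" into a spendable quantity: a team that still has budget can ship faster, and a team that has burned it slows down and invests in reliability.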
We wanted to understand that, and we had a very small percentage, like 1%, who said, "I'm not sure I need SLOs." I think about a quarter of them said they want to create their first SLO. But the vast majority, 70-something percent, said they were either automating SLOs, error budgets, SLOs as code, they want to do something like that, or they were trying to scale SLOs across their organization. That, to me, is a very strong signal that people are ready to take this to the next level. Sure, there are people, a quarter of them, in the getting-started camp, but the vast majority are now saying, okay, I've gotten started, but now I need to scale it, now I need to automate it, now I need to really get the value out of it.

So SLOs are now in production, in a day-two phase, or a day-three, day-four phase?

Yeah, we're definitely past the toy phase. To be honest, I've got customers like Ticketmaster and Cisco and ServiceNow and Procore and others, and they're all really using Nobl9 in production. It's how they're getting alerts; it's how they're silencing alerts. If you look at that report from the Enterprise Strategy Group, they talk about how OutSystems reduced their noisy alerts by 92 percent. I just think about the impact for engineering teams. It's funny, I talk to engineering teams all the time who say, we have monitoring over here and we have alerting over there, and we can't figure out how to get it to tell us, with high signal and high quality, what's going on. I think if there's just one thing you can solve with SLOs, if you can get the pager to go off when it's supposed to and not go off when it's not, if you can get that solved, that's a very clear business value
because, you know, you're not waking people up at three in the morning. Sure, you're not paying them at three in the morning, in theory; most engineers are salaried. But the fatigue, and the hazard, and the impact on attention, that has a real impact. I don't like to talk about the cost-savings part, because I think that's less interesting. The more interesting thing is: how are you going to have time to learn how to do AI, and move to cloud, and cut costs, and deliver new capabilities for customers, if your team is chasing the pager? How, as an organization, are you going to do that? How do you become competent at data science and AI if your team is answering the pager for noisy alerts? It just doesn't seem like a strategically smart move. So that's my simplest pitch for why SLOs work: OutSystems reduced their paging in the night, their false alarms, by 92%. If you can get that in your organization, you can free up resources to work on AI and cool stuff. That's the simplest way I can explain the value of SLOs.

Yeah, these events also help us in kind of shaping the future. So what were some key takeaways where you thought, this is where users are, or where the ecosystem is, and that's where we should move forward? Which means, what does it mean for Nobl9?

Yeah, it's a really great question. One of the talks we did last week, the day two daily standup, if people want to go watch it, was with Alex Nauda, who's our CTO at Nobl9. And we also had Michael Hausenblas, who is writing a fantastic new book called Cloud Observability in Action.
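On the earlier point about the pager going off only when it should: one widely used technique for getting there is multiwindow burn-rate alerting, popularized by Google's SRE Workbook. The interview doesn't say this is exactly what Nobl9 or OutSystems implements, so treat this as a generic sketch of the idea; the 14.4x threshold is the Workbook's fast-burn example value.

```python
def burn_rate(error_rate: float, target: float) -> float:
    """How many times faster than 'sustainable' the error budget is burning.
    A burn rate of 1.0 spends exactly the whole budget over the SLO window."""
    budget_fraction = 1.0 - target  # e.g. 0.001 for a 99.9% target
    return error_rate / budget_fraction


def should_page(long_window_error_rate: float,
                short_window_error_rate: float,
                target: float,
                threshold: float = 14.4) -> bool:
    """Page only when BOTH windows burn too fast: the long window (e.g. 1h)
    filters out blips, and the short window (e.g. 5m) confirms the problem
    is still happening, so already-recovered incidents don't page anyone."""
    return (burn_rate(long_window_error_rate, target) >= threshold
            and burn_rate(short_window_error_rate, target) >= threshold)


# 2% errors against a 99.9% target is a ~20x burn: page if it's still going on.
print(should_page(0.02, 0.02, target=0.999))  # True
print(should_page(0.02, 0.0, target=0.999))   # False (already recovered)
```

Conditioning the page on budget burn rather than on raw metric thresholds is what cuts the false alarms: a brief spike that barely dents a month of budget never wakes anyone up.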
I definitely recommend checking that book out; it has a whole chapter on SLOs. So we spoke with him, and we had Alex Nauda, our CTO, whose talk was about the future of OpenSLO. This gets to exactly your question. We have this open standard, and it's been adopted by dozens of organizations; multiple organizations have adopted it as a standard, like Sumo Logic and others. What he talked about is this idea of prefabricated SLOs: that the future is going to be all about out-of-the-box solutions that are open source and publicly available, with developers sharing with developers how their infrastructure is expected to work. As a test study of this, what Alex built is something called EKG, the Essential Kubernetes Gauges. Maybe you and I even spoke about it before. It's a way to have out-of-the-box SLOs for EKS from Amazon, or Kubernetes generally, and it's an example of a prefabricated SLO where you don't have to go and define all the criteria yourself. You can look to the community wisdom: go to a central GitHub repo, find SLOs as code, and import them into your environment, whatever solution you're using that's OpenSLO compatible. You can now have SLOs that give you clear reliability targets and performance targets with zero work. This, to me, is one of the exciting things.

So we're investing in the AI space and in generative AI, because, you know, you can't not do that. So far, all the AI work we've done, we've decided to make free. Most of what we're using upstream is in preview, like PaLM 2 and things like that, so we're not willing to charge for that work yet. But the analysis tools and the prefabs and things like this, we're trying to put into the open source to improve productivity. We want engineers to focus on running reliable services, not on all the plumbing to measure and manage the metrics pipelines for those services. You want them to focus on cool
and new stuff. And that's, to me, directionally what we're trying to do: less configuration, less setup, less guesswork, and more productivity. Directionally, that's where we're going, and you can see it in these examples. Prefabs, open examples, and generative AI can all support this mission. And then the debugging tools, the query checker, the metrics health notifier, these other reliability and resilience improvements we've made, mean that you can trust the system and make it better. And to your point, this is the day-two-plus type of world. This is not "oh, it's experimental, put it in my dev environment." This is changing lives in how you manage production: how you do software releases, how you do auto-scaling, how you do rollbacks, how you do incident management. All of this is affected, and that's where we're really investing most of our mind share and energy right now. We've built a core platform that's great. Now the question is, what do we do to make it mind-blowingly easy, and to give people that instant business value of making their engineers more productive, so they can go focus on shipping new capabilities?

Yeah, okay. Thank you so much for taking the time out today, and of course for giving us an update about SLOconf. I would love to chat with you again soon.

Thank you. Thanks so much for being a media sponsor for SLOconf this year. We really appreciate it. Great talking.