From Hell's Kitchen in New York City, it's theCUBE on the ground at Serverlessconf, brought to you by SiliconANGLE Media.

Hi, I'm Stu Miniman, here with theCUBE at Serverlessconf 2017 in New York City. Hell's Kitchen, actually. Happy to welcome to the program someone who, hard to believe, as far as I can tell we've never had on before, but I've known for a long time. Actually, I've been drinking with him in Hell's Kitchen before. So, Aneel Lakhani, thanks so much for joining me. Your current position is Vice President of Marketing at honeycomb.io. Do we just call it Honeycomb? All right, so, Aneel, how are you doing? Tell us a little bit about your background, but keep it short, and what got you involved in the whole serverless ecosystem?

Yeah, sure. So, about me, I've been in tech for a little over 20 years now. Started out as an engineer, moved through a bunch of systems roles, architecture roles, and product roles, and now I run marketing at startups, which is what I've been doing for the last half decade or so.

Yeah, I think back to when Amazon announced Lambda. Everybody was like, oh, it's cool, what is it? How do I use it? Things like that. One of the things I've heard at this event this week is tooling, monitoring, understanding, digging into it, which really falls into Honeycomb's space.

Yeah, I mean, it sort of does. At Honeycomb, we do what we call observability, which is something a bit larger than just monitoring, right? It's really getting to the point where you can develop an understanding of what your services and your code do in real life, on real load, with real users.

Yeah, we were speaking to John Willis about what the role of operations is when I don't own the infrastructure and have to trust someone else to do it. So bring us into that a little bit. What are some of the challenges people are having, and how do you help when they're leveraging these services?
Yeah, so something that's very clear about serverless approaches to building things, especially if you're using something like Lambda, is that as a software engineer who writes a function, you are 100% responsible for all of your operations at that point, because the ops people for your stack are behind an API. You are on the other side of that API, and what they do is effectively a black box. Which means you have to not only understand what your thing does, you have to understand what they do and how they do it, and you need some means of accessing both those bits of data. So you get what Amazon tells you for Lambda, or what any of the other providers tell you for their functions, but you also have to then understand how your code performs on that specific provider, which means you have to do things like wrap your functions in timers and emit events, which go into Kinesis or wherever else, so that you can track what's going on.

Yeah, one of the problems, of course, any time you have a layer of abstraction, is if things go wrong, how do you get the expertise to dig in there? Is this even worse in serverless?

Yes and no. I mean, it depends on how much faith you have in your provider, right? One of the companies here put up a chart that shows the average call response time for functions across all of the providers that offer serverless infrastructure, and they're not even remotely consistent. They're not consistent within even a few percentiles. In other words, if you care about performance and you care about predictability for your function, it's basically impossible to get that from any given provider.

All right, so talk to us, what are you hearing from users these days? What's exciting you in this space?
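The instrumentation described above, wrapping your functions in timers and emitting events toward Kinesis or wherever else, might look roughly like this minimal Python sketch. The decorator name, the event fields, and the in-memory `EMITTED_EVENTS` sink (standing in for a real `boto3` Kinesis `put_record` call) are illustrative assumptions, not any vendor's API:

```python
import json
import time
from functools import wraps

# Hypothetical sink standing in for Kinesis or any event pipeline;
# in production this might be a boto3 kinesis put_record call.
EMITTED_EVENTS = []

def emit_event(event):
    EMITTED_EVENTS.append(json.dumps(event))

def timed_handler(fn):
    """Wrap a Lambda-style handler so every invocation emits one
    structured event carrying its duration and outcome."""
    @wraps(fn)
    def wrapper(event, context):
        start = time.monotonic()
        status = "ok"
        try:
            return fn(event, context)
        except Exception:
            status = "error"
            raise
        finally:
            emit_event({
                "function": fn.__name__,
                "duration_ms": round((time.monotonic() - start) * 1000, 3),
                "status": status,
            })
    return wrapper

@timed_handler
def handler(event, context):
    # Trivial handler body, just to exercise the wrapper.
    return {"statusCode": 200, "body": event.get("name", "world")}
```

The point is that the timing and emission live entirely in your code, since the provider's side of the API stays a black box.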
Yeah, so what we hear from our users at Honeycomb who are using Lambda and serverless functions is that the ability to get visibility into how a function performs is basically the highest priority outside of writing the function itself, because they don't know what's happening below them. They don't know what resources the provider has allocated at any given point in time. So the thing they have to go on, for the rest of the black box, is how their own function performs. Which means they need the ability to take any given function and either decompose it into parts that have their own events or metrics or telemetry that they emit, or do that to the entire function from end to end. So basically have a concept of, and this is an old concept for us, an end-to-end check, right? I want to know what happens from the point that I touch the system until my entire set of functions is complete at the end.

Yeah, we're going back to like an IP ping, right?

That's right, yeah, effectively.

Today, does Honeycomb only support Lambda? Do you support some of the other serverless frameworks that are out there?

We are agnostic. Basically, the way Honeycomb works is that our users instrument their code, and we're not serverless-only, it could be any code running anywhere, and they emit data in the form of structured events. Those structured events are consumed by Honeycomb, and Honeycomb turns around and lets you do fast analysis against them.

Yeah, you've got a lot of background. How do we leverage the knowledge of the crowd? What are people finding when they're really getting involved here, with your tooling and others? What mistakes are they making? How can they get better, faster, at what they're doing?
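One way to picture the decomposition he describes, per-step telemetry rolled up into a single end-to-end structured event, is a sketch like the following. The `WideEvent` class, its field names, and the `run_pipeline` steps are hypothetical, not a Honeycomb-prescribed schema; a real integration would send the finished event to Honeycomb's API instead of returning it:

```python
import time
from contextlib import contextmanager

class WideEvent:
    """One structured event per invocation: an end-to-end duration
    plus a timing field for each decomposed step."""
    def __init__(self, name):
        self.fields = {"name": name}
        self._start = time.monotonic()

    @contextmanager
    def step(self, label):
        # Time one sub-part of the function and record it as a field.
        t0 = time.monotonic()
        yield
        self.fields[f"{label}_ms"] = (time.monotonic() - t0) * 1000

    def close(self):
        self.fields["total_ms"] = (time.monotonic() - self._start) * 1000
        return self.fields  # in practice, ship this to the events API

def run_pipeline(payload):
    # Illustrative function decomposed into named steps.
    ev = WideEvent("checkout")
    with ev.step("validate"):
        valid = bool(payload)
    with ev.step("persist"):
        stored = dict(payload) if valid else {}
    ev.fields["valid"] = valid
    return stored, ev.close()
```

The end-to-end check he mentions is just the `total_ms` field: one number covering the moment you touch the system until the whole set of steps completes.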
Yeah, a common mistake people make is not thinking about what is and is not blocking within their functions, not understanding the threading model of the underlying stack, and not knowing when they should spin up additional functions and split up work versus when they shouldn't. And the only way to understand that is, one, to read all the damn docs, and two, to experiment.

Yeah, what about the maturity of serverless? There have been a lot of discussions here. I had Mark from Trend Micro on, we talked about security and the like, but what do you see in the maturation cycle of serverless? Anything you've heard, or things people are still looking to get fixed?

So, maturity isn't the word I want to use here. I think it's more interesting to think of it in terms of breadth of capabilities, right? All of the serverless offerings from all the vendors have limitations on either the programming languages you can use, or the nature of the functions that can be run, or the resource allocation you can have. I think there's not a lot of maturity we're going to see from the vendors, other than more consistent performance. What we are going to see maturity in is how users construct things.

Any data you can share on just how prevalent serverless is out there in the wild? What's the typical use case, the typical customer, order of magnitude how many people are doing it and therefore driving discussions?

I have no idea. What I do know is that in our user base, we have some significant users of Honeycomb who run 100% on Amazon Lambda, but that's my tiny little sample size.

Okay, I want to give you the final word. The serverless conference, and serverless in general, what's your take today? What should people be looking at in the next six to 12 months?
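The blocking-versus-threading point above can be made concrete with a small sketch: the same batch of blocking downstream calls done sequentially versus overlapped with a thread pool inside a single invocation. The `fetch` stand-in and the worker count are arbitrary assumptions; past some batch size, splitting the work across separate function invocations becomes the better trade:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(i):
    # Stand-in for a blocking downstream call (HTTP, database, etc.).
    time.sleep(0.05)
    return i * 2

def sequential(items):
    # Blocks roughly 50 ms per item, one after another.
    return [fetch(i) for i in items]

def concurrent(items):
    # Overlap the blocking waits with threads inside one invocation.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(fetch, items))
```

Experimenting with both shapes, as he suggests, is the only way to see where the crossover sits on a given provider.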
Yeah, so I more or less agree with Simon Wardley about this, which is, effectively, this is a way for Amazon to eat most of the tech ecosystem, assuming people become dependent on it.

All right, well, I would say on theCUBE we like to capture those hallway conversations, and as someone I've had many hallway conversations with, over the Twitters and other ways, it's great to catch up with you.

Great.

Aneel Lakhani, thanks so much for joining us.

Thank you so much.

I'm Stu Miniman, and thanks for watching theCUBE.