from the Regency Center in San Francisco. It's theCUBE, covering ServerlessConf San Francisco 2018, brought to you by SiliconANGLE Media.

I'm Stu Miniman, and this is theCUBE's exclusive coverage of ServerlessConf 2018 in San Francisco. Happy to welcome back to the program, to help me dig into what's happening in this space, Corey Quinn, the editor of Last Week in AWS. It comes to my mailbox every week. You should sign up too.

Well, thank you for having me back. Most people aren't foolish enough to invite me to appear on a program twice, so it's nice to know that some people don't always learn their lesson the first time.

Corey, you're putting out the newsletter, you've got your own podcast now, so you've been keeping busy. People are used to hearing your voice a little bit more, I think, so they've got to put your face out there too.

Exactly, watch the next quarter and see what happens on that front. But for now, I mostly have a face for radio, and I try to embrace that.

All right, so only a little bit nervous. So, about 500 people here at the show, which is about the same as what they did in New York City. It's bigger than some of the little developer shows that might cover the space, but probably the largest single-focus serverless event. I was excited to come back for our second year doing theCUBE here. Have you been to this one before?

This is my first ServerlessConf. Generally speaking, they've been very good at weeding out nonsense proposals from things that people actually want to listen to, and this year was no exception. I mostly snuck in to help emcee the sponsor track, and I'm giving a lightning talk toward the end of the conference.

Oh, okay. You said they're mostly good at filtering, but you managed to sneak one in anyway.

Exactly, you can always sneak through the cracks. The bar is raised very high, but you can generally get a ladder, a rope, something to get you over it.
Exactly, or you could just duck under real quick and say, I totally jumped over it.

Exactly, I was so fast, you never saw it.

All right, so tell me, what are you hearing from your customers, your consulting clients? Do they understand serverless? What do they get? What don't they get? What are some of the things you have to help with?

Serverless is still very much an emerging technology. There's a perception of it today that it is something of a toy, in that you can use it for certain edge cases that you don't want to have an entire fleet running for. You want to just be able to set it and forget it. And that's where it starts to work its way into an organization. With time, people get comfortable with it, and then they start to look at applying it to things that are more directly in line, as opposed to back-end processing. That's one adoption model. Other people jump in whole hog from day one, as far as figuring out, what can we do with this technology? How do we wind up embracing this and being forward-looking? And that's neat to see too.

Yeah, and the users I've talked to can start playing quickly. It's easy, it's very inexpensive to play with, but they can build real things really fast with this, and that's what's pretty exciting.

Absolutely. The only challenge is that there's a fair learning curve at first, as far as approaching this. Once you surmount that, it's incredibly fast to work with, it's fast to iterate with, and the economics of it are standout fantastic.

Yeah, I just had Eric Windisch on the program, and Eric said, well, some applications you can lift and shift. And I'm like, hold on, wait a second. Well, if I was building microservices, stateless environments, getting that into serverless could make some sense, as opposed to, oh, if I had some giant hunk of a stateful thing, a legacy application, I'm sure not going to do that.
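The stateless microservices Stu mentions are the natural fit for lift-and-shift into serverless. As a minimal sketch (the event shape and names here are illustrative, not from the conversation), a Lambda-style handler carries no state between invocations; everything it needs arrives in the event:

```python
import json

def handler(event, context):
    """A stateless Lambda-style handler: all input arrives in the event,
    and nothing is retained between invocations."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

A stateful legacy application, by contrast, assumes long-lived local memory or disk between requests, which is exactly what this execution model does not provide.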
Absolutely. Every time someone from AWS gets on stage, I'm excited and hopeful, and every time I'm disappointed: they do not announce an AWS/400. There's still not a great cloud story for mainframe workloads, for example, and refactoring something 20 or 30 years old that generates revenue to take advantage of even the idea of containers, or stateless VMs, or scaling horizontally, is still foreign to a number of companies.

We don't need to go down this rabbit hole, but I talked to Walmart, who is doing Docker on their z/VM, so they could probably run OpenWhisk on that, right?

With enough spit and duct tape, anything's possible.

Yeah, so, I mean, anything's possible, but why would you want to do that? So I guess that brings up a point. There is a broad ecosystem here, but in the cloud world, there's Amazon, AWS, and everybody else. There are, last I counted, at least seven to ten open-source functions-as-a-service, serverless projects out there, and you've got the big three or four cloud guys out there. So there's a lot of options out there for serverless, but what are you seeing from the people you talk to?

I think this is a microcosm of the cloud ecosystem in general. There are a number of companies experimenting with Azure, with GCP, with Oracle, with IBM, with all of the others I'm probably not thinking of at the moment, but everyone is doing something with AWS. And when you start experimenting with something that is as far up the stack as serverless functions, you're generally going to go with the provider you're already using, the provider with which you have comfort, the provider that holds all of your data. As a result, Amazon has a tremendous competitive advantage in this space, and I think they're really presenting tooling that emphasizes that. If you build out these serverless things on some completely different platform, you lose the event source mapping. It's not just about running the code.
It's about how it ties into what's happening as data flows through your infrastructure. How does it interface with your existing applications?

Does something like Kubernetes come into the conversations you're having? And I guess at the Google event, I'm not sure if it's pronounced 'Knative' or 'K-native', but there was this new Knative project, which is supposed to be, it almost sounds to me like a bridge: if I was doing that Istio, containers, Kubernetes-type environment, a bridge into a serverless world.

Yes, to an extent. There's also their big push, to my understanding, toward running a number of Google Cloud managed services on equipment in your on-prem environment, which they're still going to manage from their end. So there's an entire security story there that would be just as easily, if not more easily, answered by migrating things into their cloud. So I'm not entirely sure what the business and strategy play here is. I assume it's something blindingly obvious that I'm just missing, but we'll see.

Yeah, Corey, I mean, I understand, if I'm using Microsoft applications, why I might want something like Azure Stack, and I'm looking forward to going to Microsoft Ignite and digging in a little bit more there, as they're also doing serverless. But GCP, or GKE, or whatever it is, on-prem, maybe for some edge use cases, like you can do with Snowball, I can understand it, but you're right, I'm a little bit flummoxed trying to understand why this is. We spent years with companies like IBM and Oracle saying, well, you're going to build that same stack here and there, and there's part of the stack that wants to be managed multi-cloud, but how much of that do I really build? And I don't have the Google engineers, and I probably don't need them. How much do I need? I'm still not sold on some of those hybrid cases, I guess you would call them.

Absolutely. I'm still beating the drums for the concept that cloud agnosticism is something of a myth when you look at it on a per-workload basis.
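The event source mapping Corey refers to is the wiring where the platform, not your code, decides when a function runs, for example when an object lands in S3. A hedged sketch of what a handler on the receiving end of an S3 event looks like (the record layout follows AWS's documented S3 event structure; the bucket and key names are made up):

```python
def handler(event, context):
    """Sketch of a Lambda sitting behind an S3 event source: the event
    arrives already shaped by the platform, carrying the bucket and
    object key that triggered the invocation."""
    keys = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        keys.append(f"s3://{bucket}/{key}")
    return {"processed": keys}
```

Rebuilding this plumbing yourself on another platform is the portability tax being described: the code is easy to move, the event wiring is not.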
Okay, I'm going to build this thing and be able to deploy it anywhere I want to go, and by doing that, I have to build a lot of abstractions I would otherwise get for free by picking a certain provider. That's a lot of work, it slows my feature velocity, and all I'm buying with that is an optionality I will likely never use.

All right, so you talked about training, and customers needing to get comfortable with this. Give us some of the pitfalls, the roadblocks, the internal issues; as we say in networking, it's layers eight and nine, the politics and the organizational stuff.

And the financial level, of course, too.

Yeah, well, come on, this is all free, right? Or it's going to lower my cost by 90%, so we should be real thrilled, right?

Absolutely. And there's a question now of, okay, how do we wind up running Lambda functions locally on our developer desktops? Now, yes, you can more or less build some Frankenstein's monster to do that in your environment, mostly, but given the tremendously low cost of running a function in AWS, run it there. Have a short enough development cycle time so you can change a line of code, press the button, and be running it as a test inside of 10 seconds. If you can drop it down to that speed, the nine seconds you spend waiting for it to return its results are probably not substantive with respect to how you wind up spending your day. Having to wait 10 minutes for a CI/CD process to go through all of its steps and then return, only to find out, whoopsie-doozy, you forgot a semicolon, is a terrible experience.

There's a question of how we wind up fitting this new paradigm, as it stands today, into our existing workflow model, and that's something a lot of companies are putting serious amounts of thought into. And the consensus that has emerged so far is that everyone else is doing it wrong. There is no consensus. Best practices are still emerging; it's still early days.
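The tight inner loop Corey describes can be approximated even before any deployment tooling is in place: call the handler directly and time the round trip, rather than waiting on a pipeline. A minimal sketch (the handler here is a stand-in for whatever function is under development):

```python
import json
import time

def handler(event, context):
    # Stand-in for the function under development.
    return {"ok": True, "echo": event}

def quick_check(event):
    """A poor man's inner loop: invoke the handler directly and report
    how long it took, instead of waiting 10 minutes on CI/CD to learn
    about a typo."""
    start = time.perf_counter()
    result = handler(event, None)
    elapsed = time.perf_counter() - start
    print(f"{elapsed:.3f}s -> {json.dumps(result)}")
    return result
```

The same idea scales up to invoking the deployed function in AWS from a one-line script, which is the press-the-button, sub-10-second cycle being described.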
Yeah, I always say, in the emerging spaces, you can't have a best practice, it's a next practice. And I love that at an event like this, you get to hear some of the people, and there are some presentations at the show that are saying, here's what I learned, here's the good things, and oh yeah, here's the bear trap that I found myself in. Any bear traps you'd want to point out?

Only the ones that have already been beaten to death with hammers by other people. There are very few surprises left in this space as far as Lambda is concerned. I would say that the things you think you know about Lambda with respect to resource limitations, language runtimes, how many functions you can run concurrently, what size package you can have: double-check that once a quarter or so. They have a tendency to change the story around these things. That's the biggest pitfall I see across the board with AWS. People are still walking around with the impression that you can only have 10 tags per resource, even though that was raised to 50 almost two years ago. It's the sort of thing where, once you learn something, it's difficult to keep in touch with what has changed and how. And that's part of the problem I started the newsletter to address. That, and the fact that I think I'm far funnier than I am.

You make me laugh.

I make myself laugh, and that was really the entire point.

Well, hey, if you can make yourself laugh, you know, that's good enough sometimes. But you brought up a really good point. This space is changing so fast. It used to be you'd plan things out, you'd roll things out, and the challenge now is, oh, from when I think about it to when I deploy it, oh wait, a bunch of these things changed, the underlying assumptions changed. It should mostly be to my benefit: oh wait, I had a certain limit, now it's expanded or it's now infinite. So, you know, other than reading your newsletter, what do you recommend to people? How do they even try to keep up?
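One way to follow Corey's check-once-a-quarter advice is to ask the API what the limits are today rather than trusting a remembered number. A hedged sketch using boto3's `get_account_settings` call (assumes boto3 is installed and AWS credentials are configured; the field names are from the documented response shape):

```python
def summarize_limits(account_limit):
    """Pull out the Lambda limits people most often misremember
    from a GetAccountSettings AccountLimit structure."""
    return {
        "zipped_package_mb": account_limit["CodeSizeZipped"] / (1024 * 1024),
        "concurrent_executions": account_limit["ConcurrentExecutions"],
    }

def current_lambda_limits():
    """Ask AWS directly what today's account-level Lambda limits are,
    instead of trusting a two-year-old blog post. Assumes boto3 is
    installed and credentials are configured."""
    import boto3
    resp = boto3.client("lambda").get_account_settings()
    return summarize_limits(resp["AccountLimit"])
```

Running `current_lambda_limits()` quarterly, as suggested, catches the silently raised limits that invalidate things you think you know.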
Part of the trick is that you don't have to, necessarily. We've long since crossed a Rubicon where I can have intelligent conversations with AWS employees about AWS services that don't actually exist, because no one can keep them all in their head anymore. It's a terrifying problem. The first time you sit down in front of the AWS console and look at the services, there are 130-some-odd of them now, and it's, oh my word, I am never going to understand all of this. And that's right, you don't have to. Some are more important than others. If you're going to be building almost anything today, understanding EC2 and Lambda would be terrific. Sumerian, on the other hand, is a great service for VR and AR worlds, but if you're building a CRUD accounting app, maybe virtual reality isn't going to be in your bailiwick, and that service is not for your use case. And every service has a story like that. For example, A Cloud Guru is built entirely on top of serverless technology. They have no EC2 instances. They don't need to think about anything EC2 does in the context of running their application.

Excellent. I think that should be your tagline for your consulting clients: you don't even need to think.

Absolutely. That's generally not how people like to think of themselves, but we'll see. I am working on a new tagline for the newsletter, of course: Last Week in AWS, because 'save money by moving to Oracle' shouldn't be the most ridiculous thing you read this week.

All right, what other ridiculousness are you seeing out there? I want to give you the chance to poke and prod at things that are driving you nuts.

I used to make jokes about this, and now I see people starting to consider doing it here in reality, where they'll take things like Greengrass, which lets you run Lambda functions inside your own on-premises environment, on your own hardware, and stuff it into a container.
Take that container, schedule it with Fargate, put an application load balancer in front of it, and you've rebuilt Lambda without the event bits. Also, you can now be 'cloud agnostic' on a different proprietary stack of everything they're offering. This started as a joke, and somewhere along the way people started taking it seriously. That's always disturbing.

All right, I've got one last question for you. You've got a presentation tomorrow. This video will actually air after you've done it, so give us the future according to Corey.

So, one of the most defining characteristics of serverless is that there's always going to be someone popping up to say, serverless runs on servers. You talk about monitoring, and that same person pops up and says it's not monitoring, it's observability. I'm combining the two useless semantic arguments into 'observerless', which is exactly the sort of thing you should be doing and probably already are. You can read more about this at observerless.com.

All right. Well, we always appreciate when you come on, because we are never humorless when you are on the program. So, Corey Quinn, be sure to check out Last Week in AWS, and his podcast, Screaming in the Cloud, at ScreamingInTheCloud.com. All right, thanks, Corey.

Always a pleasure to catch up with you. And thank you so much for watching theCUBE.